Panic Blog

From the desk of logan, Engineering Dept.

Fun with Face Detection

Let’s face it (sorry): face detection is cool. It was a big deal when iPhoto added Faces support — the ability to automatically tag your photos with the names of your friends and family adds a personal touch. And Photo Booth and iChat gained some awesome new effects in OS X Lion that can automatically track faces in the frame to add spinning birds and lovestruck hearts and so on. While not always productively useful, face detection is a fun technique.

I’ve seen attempts at duplicating Apple’s face detection technology. (Apple is far from the first company to do it.) There are libraries on GitHub and various blog posts for doing so. But recently I realized that Apple added support for face detection in OS X Lion and iOS 5; it seems to have slipped under my radar of shiny new things. Developers now have a direct link to this powerful technology on both platforms, right out of the proverbial box.

Using Face Detection through Core Image

Apple’s face detection is exposed through Core Image, the super-useful image manipulation library. Two classes are important: CIDetector and CIFeature (along with its subclass, CIFaceFeature). With a little experimenting one night, I was able to get a sample app detecting faces within a static image in about 10 lines of code:

  // Create the image
  CIImage *image = [CIImage imageWithContentsOfURL:[NSURL fileURLWithPath:@"Photo.jpg"]];

  // Create the face detector
  NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:CIDetectorAccuracyHigh, CIDetectorAccuracy, nil];

  CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];

  // Detect the faces
  NSArray *faces = [faceDetector featuresInImage:image];

  NSLog(@"%@", faces);

Note the dictionary of options. There is only one particularly useful key: CIDetectorAccuracy, which has two possible values: CIDetectorAccuracyLow and CIDetectorAccuracyHigh. The difference: with high accuracy, additional processing seems to be performed on the image in order to detect faces, at the cost of higher CPU usage and slower performance.

In cases where you are only applying detection to a single static image, high accuracy is best. Low accuracy becomes handy when manipulating many images at once, or when applying the detector to a live video stream. You see about a 2-4x improvement in render time with low accuracy, but face tracking might occasionally pick up a couple of false positives in the background, or fail to detect a face angled away from the camera as well as high accuracy could.
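
For reference, the low-accuracy variant only changes the options dictionary. Here’s a minimal sketch (the variable names are mine); for video you would typically create this detector once and reuse it for every frame rather than rebuilding it each time:

  // Create a low-accuracy detector once and reuse it across frames;
  // only the CIDetectorAccuracy value differs from the earlier example.
  NSDictionary *videoOptions = [NSDictionary dictionaryWithObjectsAndKeys:CIDetectorAccuracyLow, CIDetectorAccuracy, nil];

  CIDetector *videoDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:videoOptions];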

Now that we have an array of faces, we can find out some information about each face within the image. CIFaceFeature exposes several useful properties to determine the bounding rectangle of the face, as well as the position of each eye and the mouth.

Using these metrics, it’s then possible to draw on top of the image to mark each facial feature. What you get is a futuristic sci-fi face tracker à la The Fifth Element. Leeloo Dallas Multipass, anyone?

  // Create an NSImage representation of the image
  NSImage *drawImage = [[NSImage alloc] initWithSize:NSMakeSize([image extent].size.width, [image extent].size.height)];
  [drawImage addRepresentation:[NSCIImageRep imageRepWithCIImage:image]];

  [drawImage lockFocus];

  // Iterate the detected faces
  for (CIFaceFeature *face in faces) {
      // Get the bounding rectangle of the face
      CGRect bounds = face.bounds;

      [[NSColor colorWithCalibratedWhite:1.0 alpha:1.0] set];
      [NSBezierPath strokeRect:NSRectFromCGRect(bounds)];

      // Get the position of facial features
      if (face.hasLeftEyePosition) {
          CGPoint leftEyePosition = face.leftEyePosition;

          [[NSColor colorWithCalibratedWhite:1.0 alpha:1.0] set];
          [NSBezierPath strokeRect:NSMakeRect(leftEyePosition.x - 10.0, leftEyePosition.y - 10.0, 20.0, 20.0)];
      }

      if (face.hasRightEyePosition) {
          CGPoint rightEyePosition = face.rightEyePosition;

          [[NSColor colorWithCalibratedWhite:1.0 alpha:1.0] set];
          [NSBezierPath strokeRect:NSMakeRect(rightEyePosition.x - 10.0, rightEyePosition.y - 10.0, 20.0, 20.0)];
      }

      if (face.hasMouthPosition) {
          CGPoint mouthPosition = face.mouthPosition;

          [[NSColor colorWithCalibratedWhite:1.0 alpha:1.0] set];
          [NSBezierPath strokeRect:NSMakeRect(mouthPosition.x - 10.0, mouthPosition.y - 10.0, 20.0, 20.0)];
      }
  }

  [drawImage unlockFocus];

With a little more work, it’s pretty easy to apply this technique to live video from the device’s camera using AVFoundation. As you get back frames from AVFoundation, you perform face detection and modify the frame before it is displayed. But I’ll leave that as an activity for the reader. :-)
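
I won’t spoil the whole exercise, but a rough sketch of that approach might look like the following. It assumes the AVCaptureSession, the AVCaptureVideoDataOutput, and its dispatch queue are configured elsewhere, and that faceDetector was created once with CIDetectorAccuracyLow:

  #import <AVFoundation/AVFoundation.h>
  #import <QuartzCore/QuartzCore.h>

  // Sketch of an AVCaptureVideoDataOutput delegate that runs the face
  // detector on every frame it receives from the camera.
  @interface FaceFrameHandler : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
  @property (strong) CIDetector *faceDetector; // created once, with CIDetectorAccuracyLow
  @end

  @implementation FaceFrameHandler

  - (void)captureOutput:(AVCaptureOutput *)captureOutput
  didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
         fromConnection:(AVCaptureConnection *)connection
  {
      // Wrap the frame's pixel buffer in a CIImage without copying the pixels
      CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
      CIImage *frame = [CIImage imageWithCVImageBuffer:pixelBuffer];

      // Low accuracy keeps per-frame detection cheap enough for video
      NSArray *faces = [self.faceDetector featuresInImage:frame];

      dispatch_async(dispatch_get_main_queue(), ^{
          // This is where the drawing from the earlier example would go;
          // for the sketch, just report how many faces were found.
          NSLog(@"Detected %lu face(s) in this frame", (unsigned long)[faces count]);
      });
  }

  @end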

And amazingly, it even works with cats.

With a little more effort, I was able to grab the closest detected face’s region of the image, and do a simple copy-and-paste onto the other detected faces (adjusting for angle and distance, of course). Behold… Panic’s newest, most terrifying cloning technology!
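
That code isn’t in this post, but a bare-bones version of the copy-and-paste (skipping the angle and distance adjustments) can be put together from stock CIImage calls. Here, closestBounds and targetBounds are hypothetical CGRects taken from the detected faces’ bounds properties:

  // Crop out the "source" face, then stamp it over another detected face.
  // closestBounds / targetBounds are hypothetical rects from CIFaceFeature.bounds.
  CIImage *sourceFace = [image imageByCroppingToRect:closestBounds];

  // Move the cropped face onto the target face: translate to the origin,
  // scale to the target's size, then translate into the target's position.
  CGAffineTransform move = CGAffineTransformMakeTranslation(-closestBounds.origin.x, -closestBounds.origin.y);
  move = CGAffineTransformConcat(move, CGAffineTransformMakeScale(targetBounds.size.width / closestBounds.size.width, targetBounds.size.height / closestBounds.size.height));
  move = CGAffineTransformConcat(move, CGAffineTransformMakeTranslation(targetBounds.origin.x, targetBounds.origin.y));

  // Composite the transplanted face over the original image.
  // Repeat for each remaining face, feeding the result back in as the base image.
  CIImage *pasted = [[sourceFace imageByApplyingTransform:move] imageByCompositingOverImage:image];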

Here’s a little sample app. Have fun!

Posted at 11:25 am · 18 Comments

Nice!! Thank you!!

Alexandre

11/21/2012 1:48 PM

Wow! Just wow!

Wow!!!

I think I’ll use it in one of my apps…

That’s funny. Really nice idea. Thanks for the sample app.

Paul D. Waite

11/22/2012 3:12 AM

So now we know what a child born of Logan and Cabel would look like?

And no matter how much we drink, we can’t get its image out of our memory?

This would be a great way to ensure only MY cats get back in the house through the cat door.

Well, of course. Cats are people, too, y’know.

Pierre Lebeaupin

11/25/2012 6:36 AM

Back when I worked as a contract programmer for NXP Software, before I even worked on iPhone stuff there, I implemented a demo for Mobile World Congress which used face positioning to insert the user’s face into a movie poster. I can’t claim credit for the idea; all credit for that goes to NXP Software marketing (and it might have been done before that), but I got to be the one having fun (some ironic, some non-ironic) with the Windows Mobile 6 phone and the face detection library. Some of the results, such as this: https://picasaweb.google.com/LifeVibesStudio/LifeVibesStudiosMWC2009Monday16th#5303385994183361234 , can still be seen at https://picasaweb.google.com/LifeVibesStudio .

This was a neat-o part of the demo; the only real issue is that the face coordinates were not positioned consistently enough around the face for the insertion to be perfect every time: if you go look, the position of the mouth is too low in some of these, for instance. But maybe Apple’s stuff is better in this regard.

Where are the Cyber Monday deals? :-) I was fixin’ to pick up Coda 2 for $55.

يوغا الضحك

12/12/2012 4:01 PM

Holistic, multi-disciplinary therapist; laughter yoga teacher and facilitator of laughter, humor, and creative-thinking workshops. Runs training workshops for laughter yoga facilitators. Leads laughter workshops for professional teams, adult groups, and children, combining dance, play, and creativity.

Ha this is awesome!! :D

This is really great, but can you help me with the same on a webcam? Help would be greatly appreciated. I am using VS2010 and I need it for C#.

I had been trying to do this in real time video, but wasn’t able to bump the speed above ~10fps. Did you have better luck with real time?

Alexander Kachkaev

5/3/2013 3:12 AM

Cool stuff! For those who want to use Core Image to detect faces in photographs, here is a simple command line tool you might find useful: https://github.com/kachkaev/CICommandLineFaceDetector You can integrate it with all kinds of apps if you want; that’s what it’s actually designed for.

Thanks, guys, for revealing Core Image to me!

Diogenes

9/3/2013 6:16 PM

Hi, do you have the code to change faces? Or how can I do that?

Hi

Hi,

Is it possible to use this face detection algorithm with improper lighting…