iOS Camera Overlay Example Using AVCaptureSession

I made a post back in 2009 on how to overlay images, buttons and labels on a live camera view using UIImagePicker. Well, since iOS 4 came out there has been a better way to do this, and it is high time I showed you how.

You can get the new project’s source code on GitHub.

This new example project is functionally the same as the other. It looks like this:


AR Overlay Example App

All the app does is allow you to push the “scan” button, which then shows the “scanning” label for two seconds. That’s it. Not very exciting, I know, but it nicely demonstrates how to layer UI objects on top of a live camera view.

This time, instead of a UIImagePicker, we are using AVCaptureSession’s preview ability to show the live camera feed, which makes things a little bit easier and much more powerful.

In the project you will see the app is a standard view-based app created in Xcode 4. The app delegate creates an instance of the view controller, which is an AROverlayViewController. Here is the header and implementation code:

#import <UIKit/UIKit.h>
#import "CaptureSessionManager.h"

@interface AROverlayViewController : UIViewController {
}

@property (retain) CaptureSessionManager *captureManager;
@property (nonatomic, retain) UILabel *scanningLabel;

@end

#import "AROverlayViewController.h"

@implementation AROverlayViewController

@synthesize captureManager;
@synthesize scanningLabel;

- (void)viewDidLoad {
	[super viewDidLoad];

	[self setCaptureManager:[[[CaptureSessionManager alloc] init] autorelease]];
	[[self captureManager] addVideoInput];
	[[self captureManager] addVideoPreviewLayer];
	CGRect layerRect = [[[self view] layer] bounds];
	[[[self captureManager] previewLayer] setBounds:layerRect];
	[[[self captureManager] previewLayer] setPosition:CGPointMake(CGRectGetMidX(layerRect), CGRectGetMidY(layerRect))];
	[[[self view] layer] addSublayer:[[self captureManager] previewLayer]];

	UIImageView *overlayImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"overlaygraphic.png"]];
	[overlayImageView setFrame:CGRectMake(30, 100, 260, 200)];
	[[self view] addSubview:overlayImageView];
	[overlayImageView release];

	UIButton *overlayButton = [UIButton buttonWithType:UIButtonTypeCustom];
	[overlayButton setImage:[UIImage imageNamed:@"scanbutton.png"] forState:UIControlStateNormal];
	[overlayButton setFrame:CGRectMake(130, 320, 60, 30)];
	[overlayButton addTarget:self action:@selector(scanButtonPressed) forControlEvents:UIControlEventTouchUpInside];
	[[self view] addSubview:overlayButton];

	UILabel *tempLabel = [[UILabel alloc] initWithFrame:CGRectMake(100, 50, 120, 30)];
	[self setScanningLabel:tempLabel];
	[tempLabel release];
	[scanningLabel setBackgroundColor:[UIColor clearColor]];
	[scanningLabel setFont:[UIFont fontWithName:@"Courier" size:18.0]];
	[scanningLabel setTextColor:[UIColor redColor]];
	[scanningLabel setText:@"Scanning..."];
	[scanningLabel setHidden:YES];
	[[self view] addSubview:scanningLabel];

	[[captureManager captureSession] startRunning];
}

- (void)scanButtonPressed {
	[[self scanningLabel] setHidden:NO];
	[self performSelector:@selector(hideLabel:) withObject:[self scanningLabel] afterDelay:2];
}

- (void)hideLabel:(UILabel *)label {
	[label setHidden:YES];
}

- (void)didReceiveMemoryWarning {
	[super didReceiveMemoryWarning];
}

- (void)dealloc {
	[captureManager release], captureManager = nil;
	[scanningLabel release], scanningLabel = nil;
	[super dealloc];
}

@end


Mostly basic stuff here. The one interesting note is that we are creating an instance of a CaptureSessionManager class, which is a custom class made to handle the AVCaptureSession. While it is not required to do this work in a separate class, it is a clean way to do it and provides a good place to expand the capture session to do other things, like analyzing the live video output data stream, for example.

Using the methods of the CaptureSessionManager class we turn on the video input and then turn on the preview layer and add it to the view controller’s view’s layer (man, is that a mouthful). The rest of the code just adds the image, button and label, and a button method that shows the label for two seconds when pressed.

Here is the header and implementation for the CaptureSessionManager:

#import <CoreMedia/CoreMedia.h>
#import <AVFoundation/AVFoundation.h>

@interface CaptureSessionManager : NSObject {
}

@property (retain) AVCaptureVideoPreviewLayer *previewLayer;
@property (retain) AVCaptureSession *captureSession;

- (void)addVideoPreviewLayer;
- (void)addVideoInput;

@end

#import "CaptureSessionManager.h"

@implementation CaptureSessionManager

@synthesize captureSession;
@synthesize previewLayer;

#pragma mark Capture Session Configuration

- (id)init {
	if ((self = [super init])) {
		[self setCaptureSession:[[[AVCaptureSession alloc] init] autorelease]];
	}
	return self;
}

- (void)addVideoPreviewLayer {
	[self setPreviewLayer:[[[AVCaptureVideoPreviewLayer alloc] initWithSession:[self captureSession]] autorelease]];
	[[self previewLayer] setVideoGravity:AVLayerVideoGravityResizeAspectFill];
}

- (void)addVideoInput {
	AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
	if (videoDevice) {
		NSError *error;
		AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
		if (!error) {
			if ([[self captureSession] canAddInput:videoIn])
				[[self captureSession] addInput:videoIn];
			else
				NSLog(@"Couldn't add video input");
		}
		else
			NSLog(@"Couldn't create video input");
	}
	else
		NSLog(@"Couldn't create video capture device");
}

- (void)dealloc {

	[[self captureSession] stopRunning];

	[previewLayer release], previewLayer = nil;
	[captureSession release], captureSession = nil;

	[super dealloc];
}

@end


This is a pretty basic class on which you could build a lot, but the basics are all we need for this example. In the header we define a couple of properties, one for the session and one for the preview layer, and a couple of methods, one to start video input and one to turn on the preview layer.

In the init method all we do is initialize the session with:

[self setCaptureSession:[[[AVCaptureSession alloc] init] autorelease]];

The addVideoPreviewLayer and addVideoInput methods are similarly simple. The addVideoPreviewLayer method just sets up the preview layer. The addVideoInput method gets the device (with some error checking which is always a good idea, right?) and then sets the input using that device.

There is really not much to it and I think the best advice if you are trying to do something like this is to just download the source code and take a look at how it works for yourself.

I mentioned that this method gives you more power. That power comes from the ability to get and process the video output data stream using AVCaptureVideoDataOutput, rather than having to take screenshots using an undocumented API and then process the resulting images, as you had to do in the past. A great example of this is shown in the WWDC 2010 Session 409 video, where they show you how to detect a certain color in the video output. The sample project used in that video, called FindMyiCone, is available in the source code for WWDC 2010. Everyone in the iOS developer program should have access to both of those through the dev center.
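To give a flavor of that power, here is a minimal sketch (not part of the sample project, so treat the method name addVideoDataOutput and the queue name as my own inventions) of how you might attach an AVCaptureVideoDataOutput to the CaptureSessionManager’s session and receive raw frames:

// Sketch only: a hypothetical addVideoDataOutput method you could add to
// CaptureSessionManager. The class would also need to adopt the
// AVCaptureVideoDataOutputSampleBufferDelegate protocol in its header.
- (void)addVideoDataOutput {
	AVCaptureVideoDataOutput *videoOut = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
	[videoOut setAlwaysDiscardsLateVideoFrames:YES]; // drop frames rather than queue them if we fall behind
	dispatch_queue_t queue = dispatch_queue_create("videoDataOutputQueue", NULL);
	[videoOut setSampleBufferDelegate:self queue:queue]; // callbacks arrive on this queue, not the main thread
	dispatch_release(queue); // the output retains the queue
	if ([[self captureSession] canAddOutput:videoOut])
		[[self captureSession] addOutput:videoOut];
}

// Delegate callback: invoked once per captured frame.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
	CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
	// Analyze pixelBuffer here (color tracking, OCR, etc.). Remember this
	// runs off the main thread, so hop back to it before touching any UI.
}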


About jjob

I'm a technologist and a music producer. I make sound, write code and build things with electronics and microcontrollers.
This entry was posted in AR, code, iOS, iPad, iPhone. Bookmark the permalink.

137 Responses to iOS Camera Overlay Example Using AVCaptureSession

  1. Pingback: iPhone Camera Overlay App With Custom Button Example | musicalgeometry

  2. Pingback: Still Image Capture Using AVCaptureSession | musicalgeometry

  3. Ricardo says:

    Excellent post, congratulations.

  4. Tate Allen says:


    Thanks for the wonderful tutorial! One question though. How would I take a picture when the user taps the scan button, then save it to the photo album?


  5. Jorge says:

    Great tutorial JJob!

    — Jorge

  6. Nathan says:

    Great post!! I am a total nooooob to xcode. If i deleted the AR Overlay View Controller in the MainWindow.xib file how do I replace the referencing outlets?

  7. kshitij says:

    Hey Jason,

    I wanted to overlay a button, using your technique but instead of taking still images, let UIImagePicker take a video and while the user is browsing the frames of the video give him the option to tap on the overlayed button/control and capture that frame.

    Do you think this is doable?

    I have asked this question here.

    Thanks for these tutorials.

    Appreciate it.

    • jjob says:

      I think you are better off using AVCaptureSession not UIImagePicker to record a video. Not sure how to grab a frame out of a previously recorded video but I would think it should be possible.

  8. Luke says:

    Great stuff, Do you happen to know what I would need to enter into my button action to change the camera device to rear/front once the camera is active ?..


  9. Jouni says:

    This is a great tutorial which actually works! Thnx!

  10. Paciencia says:

    hi, thanks for the post!
    I’m working on this right now, however I have my view on landscape, but my camera seems to be inverted.

    Any suggestion on how to fix this?


    • Marsman250 says:

      Add this to your CaptureSessionManager.m file (in – (void)addVideoPreviewLayer)

      previewLayer.orientation = UIInterfaceOrientationLandscapeRight; // Orient for Landscape Right

  11. Roger says:

    Hy.. great tutorial. i’m a newbie in iphone programming and i want to modify this … to actually grab the image within that rect and process it.. without closing the camera.i don’t know if u understand well what i want :).i think what i want is grab a picture while camera is still running and process it. can u tell me or indicate a place where to look so i can do this ? thanks in advance

    • jjob says:

Yes, you can do this fairly easily using AVFoundation to grab the raw image data from a live stream. A really good example to look at for doing this is available in last year’s WWDC sample code and videos. I think the sample app is called FindMyiCone. You can access all the WWDC videos and sample code by logging into your developer account and clicking on Resources at the top of the page. You will then see the WWDC 2010 link at the bottom of the page. Click the link to open the videos in iTunes and you will then be able to grab all the videos and the sample code from iTunes.

      • Roger says:

        thanks for reply. I’ll take a look on that stuff in the morning … i hope i’ll be able to finish this part of my app tomorrow.

  12. Roger says:

    hello again .. i’ve finally managed to grab the image, but i don’t know how to process it for saving just what is within that rect .. :|. Can u give me a hand, please?

    • jjob says:

      If you are using AVCaptureStillImageOutput and already have a UIImage and all you want to do is crop the image then do the following:

CGImageRef image = [yourCapturedUIImage CGImage]; // convert your UIImage to a CGImageRef
CGRect rect = CGRectMake(x, y, width, height); // set the rect to crop (insert real values)
CGImageRef croppedImage = CGImageCreateWithImageInRect(image, rect); // crop the image and save as a new CGImageRef
UIImage *croppedUIImage = [UIImage imageWithCGImage:croppedImage]; // convert the CGImage to a UIImage
CGImageRelease(croppedImage); // the create function returns a reference we own, so release it

      If instead you are capturing live video data with AVCaptureVideoDataOutputSampleBuffer (and its delegate method – captureOutput:didOutputSampleBuffer:fromConnection:) let me know and I might be able to help you capture and crop an image from that as well.

      • Roger says:

I’m using AVCaptureStillImageOutput, but i don’t know what “x”,“y” to set … and the image it creates is rotated 90 degrees.
what corner is (0,0)?

        • jjob says:

          CGRectMake is a C function that takes the origin (x,y) and the size (width,height) of the rect that you want to define. In UIKit the origin is in the top left corner with the axes extending down and to the right. (FYI when using CoreGraphics the origin is in the bottom left corner)

          Not sure what your app is doing but the sample app I wrote was set up for portrait. Maybe you are trying to capture a landscape image in which case you would need to change some things to support that.

      • developer says:

        hey, I found this very very useful. But I am stuck at the point where I want to crop the image with respect to overlay rectangle on the center of preview layer. I am using didOutputSampleBuffer method but I want to pass cropped image to tesseract. Can you please help.

  13. Deep says:


    Your code is really helpful and clear to understand.Thanks for sharing it with us.
    Now what i have done is that i am displaying frame on camera but when i click picture my frame(which is png image) does not get included into picture , as frame is displayed on screen and camera takes picture of view in front of it so understood, but need to achieve it but no idea how can i add frame to picture taken.
    please guide me on this.

    • jjob says:

      You can combine images by drawing in a context and then saving the resulting context to an image.

      Assuming UIImage *frameImage is your frame and UIImage *photoImage is your photo, you would need to do something like this:

      CGSize size = CGSizeMake(320, 480); // define the size of your context
      UIGraphicsBeginImageContextWithOptions(size, NO, 0.0); // create the context with opaque = NO so there is an alpha channel and scale = 0.0 so that scale factor is scale of the device's main screen

      CGPoint photoPoint = CGPointMake(x, y); // define the placement of the photo in your context at x,y
      [photoImage drawAtPoint:photoPoint]; // draw the photo image in your context

      CGPoint framePoint = CGPointMake(x, y); // define the placement of the frame in your context at x,y
      [frameImage drawAtPoint:framePoint]; // draw the frame image in your context

      UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext(); // save the context to a new image which you can now save

      UIGraphicsEndImageContext(); // close the context

      • Claire says:

        hi jjob,

        can i know where do you place the above code at? under which method?

        and referring to your codes, what/where is the *photoImage defined at?

        Would appreciate very much for your prompt reply!

        • jjob says:

          I recommend you download the sample app and explore that. You should be able to use that to inform any of your own projects.

  14. Barry Taylor says:

    Hi Jason.
    Excellent coding and info’, thanks very much, you are a star!

    The question I have, if you don’t mind, is this:

    I wish to place images on top of the live camera feed. The images are thumbnails of real photographs. I don’t need the camera to function as a camera at all, I am merely using it to display the live feed/camera view for ‘cosmetic’ purposes only.
    I need to be able to touch the thumbnail images (i.e. the thumbnails of the photos on top of the camera feed) to bring up info’ on those images, so my question is this:

    Is it possible to overlay ‘touchable/tapable’ small thumbnail images of real photos and have the ability to be be able to tap on them to bring up info’ in say a bubble text box?
    Plus is it possible to remove all the camera functionality such as the zoom slider and other buttons and the ‘touch screen to focus’ feature to leave just a clean uncluttered camera view with no camera functionality except to display the camera view/feed?

    I really appreciate your time and help. You are very kind and generous to offer the help you do.

    Best regards and thanks again for anything you can offer.


    • jjob says:

      Yes, this is all possible and my sample app demonstrates everything you want to do; placing buttons, images, and labels on top of a live camera view. Grab the code from GitHub and read this post. Everything you need should be there.

  15. suvarna says:

how to capture a picture on 3G, or you can say iOS 3.1.3? can plz help me out?

    • jjob says:

      If you are trying to do something similar to this sample app, I still have an iPhone 3G which I have updated to iOS 4.2 and it runs this sample code fine. It has been a long time since I built something for iOS 3.1, and I don’t really understand why you would want to do so. If you can’t update and use the AVCaptureSession method then you are out of luck for anything you could put in the store. You can look at my previous post that shows you a way that should work on iOS 3.1 but it uses methods that are no longer public.

If instead you are just wondering how to take a picture on iOS 3.x then look at UIImagePicker. There are lots of tutorials on how to do this.

  16. JHil says:

    Awesome info! Thanks! Is it possible to place an editable textbox on the screen which could receive input from the overlay button(s)?

  17. David says:

    Excellent. I will probably be using parts of this code, including the CaptureSessionManager custom class, as part of an app I am developing. I’ll certainly cite this website, if only because you provide excellent, good quality, straight-to-the-point, easy to follow tutorials. Thanks a lot!

    • jjob says:

      Great to hear. Thanks for reading.

    • David says:

      That being said, I’m running into some memory leaking issues here… I am trying to, at the push of a button, present the user with a modal view controller that creates an instance of CaptureSessionManager and runs all the code above… the difference is that the button I used is a “cancel” button that, for now, simply calls [self dismissModalViewController]. I’m getting memory warnings if I repeatedly call and dismiss the modal view controller during a test… If this error is solely related to the modal view controller, then I will find it eventually, but are there any nuances with regard to your code that I should be aware of in terms of memory management? Is everything accounted for? Is there a most likely cause of leakage? I’d appreciate any help! Thanks.

      • jjob says:

        After you have made the viewController that you are presenting modally are you releasing the view? When you present it, the view controller gets retained so you need to release it as soon as you present it.

        You should be doing something like this:

        MyViewController *viewController = [[MyViewController alloc] init];
        [self presentModalViewController:viewController animated:YES];
        [viewController release];

  18. .sA.r says:

    Hi ! TY For this excellent tuto, very helpful !

    Is it possible to implement a zoom in the Camera Overlay ?


  19. Mo says:

    i don´t want the camera all over my display how can i change that ?

    • jjob says:

      I think you should just have to set the bounds of the previewLayer to something other than the entire view bounds.

      [[[self captureManager] previewLayer] setBounds:CGRectMake(x,y,width,height)];

      • Mo says:

        And when i just want the camera with a take picture button without the scanning label what should i remove ?

        Ps:Great tut! 😉

  20. jjob says:

    You can add or remove any UI elements on top of the previewLayer. If you don’t want the scanning label just look through the code for where it is added and everywhere it is referenced and remove all those references.

  21. iChathura says:

    Hi jjob,

    I used your tutorial to understand the AVCam source codes, however I have this simple question, in order to make the AVCaptureSession -startRunning method non blocking the UI thread, Apple sample code specified the following codes.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [[[self captureManager] session] startRunning];
});

    However, when I call this, it won’t execute the callback method,
    – (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection

    And that will block my objective to capture the frames coming out.
    Will be very helpful if you can point me to a right direction. Thanks again for the great tutorial.

    • jjob says:

      Hi @iChathura,

      Have you implemented the AVCaptureVideoDataOutputSampleBufferDelegate on your class? Have you provided a queue for the callbacks to be invoked on?

      The AVFoundation Programming Guide says:

      “The data output object uses delegation to send the video frames. The delegate must adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol. When you set the data output’s delegate, you must also provide a queue on which callbacks should be invoked.”

      Try something like this:

      dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
      [output setSampleBufferDelegate:self queue:queue];

      Then, because your captureOutput:didOutputSampleBuffer:fromConnection: method will be called on a different thread make sure you create (and drain) an autorelease pool in it.

      Now in the callback method if you need to do anything on the main thread you can use performSelectorOnMainThread:withObject:waitUntilDone:

      Let me know if that works for you.

  22. Joseph says:

    Hi, I love your tutorials. And thank you for sharing this.

    I’m still pretty new to xcode and have much to learn. I have a little problem and I hope you can help me. I have a “mainView” and a button when clicked switch view to the camera view. Using your method above, how would I got back to the “mainView”? The code below is what I would use to switch view but requires using the interface builder:

    – (IBAction)mainView {
    mainView *main = [[mainView alloc] initWithNibName:nil bundle:nil];
    main.modalTransitionStyle = UIModalTransitionStyleCoverVertical;
    [self presentModalViewController:main animated:YES];
[main release];
}

    I would like to have a button on the camera view to go back to my “mainView” How would I go about doing that? Thanks.

    • jjob says:

      Glad you like the tutorial!

      If you present a modal view controller all you have to do to get back to the previous view controller is [self dismissModalViewControllerAnimated:YES];

      By the way, it looks like your mainView class is actually a UIViewController subclass. You should really be naming it as such, as naming it a view is misleading. MainViewController would be a clearer name for your class.

      Hope that helps.

  23. Joseph says:

    Thank you very much. It worked.

    Yes, it is a UIViewController subclass and thank you for your advice. I will do that from now on. It’s great to see people like yourself taking the time to help others. And I’m learning more and more each day. Thank you again. 😀

  24. James says:

    Hi! Thanks so much for the tutorial. I’m trying to put OpenGL (3D lines) on top but I can’t figure out how to get the OpenGL stuff to appear in front of the camera feed. I can get other things, like images and buttons to be in front but the OpenGL renders to a layer beneath the video (which I can see when I’ve reduced the video size). Would you have any ideas how I could put 3D stuff on top of the live camera video? Thanks!

  25. Stanislav says:

CaptureSessionManager is leaking. I was getting terrible memory leaks until I changed
[self setCaptureManager:[[CaptureSessionManager alloc] init]];
to
[self setCaptureManager:[[[CaptureSessionManager alloc] init] autorelease]];

  26. taylor says:

    hi, I’m still new to xcode. I want to hide the overlayButton after it’s pressed. I could do it with interface bulider but I don’t know how to programmatically. could you please show me how?

    I tried to do it the same way you hide the label but it just won’t work. I tried it with

    – (void)scanButtonPressed {
[[self overlayButton] setHidden:YES];
}


    – (void)scanButtonPressed {
[self performSelector:@selector(hideButton) withObject:nil afterDelay:0.0];
}
    – (void)hideButton {
[[self overlayButton] setHidden:YES];
}

    Both ways don’t work. It gave me method not found (return type default to ‘id’) warnings.

    so i tried adding to the .h
    @property (nonatomic, retain) UIButton *overlayButton;

    and to the .m
    @synthesize overlayButton;

    it would silence the warning but also brakes other part of your code.

    I still have much to learn and I would be really grateful if you would teach me how to fix my problem. Thank you.

    • jjob says:

      The problem you are having is that the overlay button is not declared as a property on the AROverlayViewController class. Because of this there is no such thing as [self overlayButton] and outside of its local scope you cannot reference it. To do what you want add a property to your header (this also automatically creates an instance variable). I am going to give the button a less generic name, I’ll call it scanButton

      @property (nonatomic, retain) UIButton *scanButton;

      Then in your implementation make sure you synthesize the property with:

      @synthesize scanButton;

      and do your memory management correctly by releasing in your dealloc:

      [scanButton release];

      Then you need to change where you set up the button. Here is one way to do that:

      [self setScanButton: [UIButton buttonWithType:UIButtonTypeCustom]];
      [[self scanButton] setImage:[UIImage imageNamed:@"scanbutton.png"] forState:UIControlStateNormal];
      [[self scanButton] setFrame:CGRectMake(130, 320, 60, 30)];
      [[self scanButton] addTarget:self action:@selector(scanButtonPressed) forControlEvents:UIControlEventTouchUpInside];
      [[self view] addSubview:[self scanButton]];

      Now, if you want the scanButton to hide when it is pressed you can modify the scanButtonPressed method to be:

      - (void) scanButtonPressed {
      [[self scanningLabel] setHidden:NO];
      [[self scanButton] setHidden:NO];
[self performSelector:@selector(hideLabel:) withObject:[self scanningLabel] afterDelay:2];
}

      If you want the button to reappear when the scanning is done you could modify the hideLabel method to be:

      - (void)hideLabel:(UILabel *)label {
      [label setHidden:YES];
[[self scanButton] setHidden:NO];
}

      Hope that helps answer your question.

  27. taylor says:

    Thank you very much! I just had to change [[self scanButton] setHidden:NO]; to YES in the scanButtonPressed. Other than that your code works perfectly. Thank you again. I really appreciate the help and I have learn a lot from your tutorials.

    • jjob says:

      If you really don’t want the button to reappear you should just remove the line I added to the hideLabel method. The button is already hidden so you don’t need to tell it to hide again. Glad to have helped. :-)

  28. jonnyH says:

    Hi, i was wondering instead of having the scanning label to appear after the button is pressed each time, how can I play an animation instead? I have 6 images called 1.png, 2.png etc..

    I have IBOutlet UIImageView *animate; in the .h

    and in the .m
    – (void) scanButtonPressed {
animate.animationImages = [NSArray arrayWithObjects:
[UIImage imageNamed:@"1.png"],
[UIImage imageNamed:@"2.png"],
[UIImage imageNamed:@"3.png"],
[UIImage imageNamed:@"4.png"],
[UIImage imageNamed:@"5.png"],
[UIImage imageNamed:@"6.png"], nil];
animate.animationDuration = 1.50;
animate.animationRepeatCount = 0;
[animate stopAnimating];
}

    I don’t know how to set it up for the images to appear in the view. Could you help me please? 😀 Thanks.

  29. Pingback: Capturing Video Frames as Image on iOS | GQAdonis

  30. SteveS says:

    Thanks for this tutorial!

    I’m trying to implement this in a tab bar application, and I would like to dismiss the overlay when the view is switched to another tab. I add the preview layer on an IBACTION method like this:

    [[self view] addSubview:self.overlayImageView];

    and try to remove it in viewWillDisappear like this:

    [self.overlayImageView removeFromSuperview];

    But a still image of the camera preview remains when I switch back to that tab.

    Any idea how to ‘restore’ the view to the state it was in before the previewLayer is added?


  31. Rick says:

    Hey guys – all awesome stuff here… I saw that some people asked if it were possible to set up the zooming function in this example… and it was said that it should be possible, but can you point me in the right direction to find the correct commands to enable zoom?


  32. Don Turner says:

    Hi jjob,

    I just wanted to say how much you rule for posting this code. It has been extremely helpful to me. All the other examples I’d looked at used UIImagePickerController which didn’t allow me the level of control I needed. Thanks so much.

    Just a small thing (and please feel free to ignore!) Since you’re using properties, you might want to consider using dot notation when referencing those properties e.g.

[[[self captureManager] previewLayer] setBounds:layerRect];

becomes:

self.captureManager.previewLayer.bounds = layerRect;

    It just makes things a little easier to read imho.

    Anyway, thanks again!


    • jjob says:

      @Don You’re welcome. Glad the code was helpful. :-)

      As for using the dot syntax, it is a matter of personal preference. I prefer the Objective-C messaging syntax. I find it is less ambiguous and more aesthetically pleasing.

      Marcus Zarra has a good article on the subject on his Cocoa Is My Girlfriend blog in a post titled A Case Against Dot Syntax.

  33. amardeep says:

    Want to capture video. Please let me know how to capture video using the same code?

  34. Brandon says:

    Hi jjob,

    Great tutorial. I found it very concise and easy to follow. I was a little confused by the addition of the capture session manager class and took a little while to figure out how to get everything to talk to one another.

    I was able to get video preview and still image capture working with an image overlayed properly to the still. Next I need to capture the same video preview but as video instead of a still. I then need to overlay the same overlay image onto the entire duration of video. I must also capture audio. Can you point me in the right direction, I’ve searched all over. I was working from the AVCamDemo sample project from Apple, but it is too confusing for me with all the additional features. I tried extracting just the recording functions, but couldn’t get it to work.

    I assume the best way to go about what I need is to record a video file and audio file and then merge the two while adding the overlay image.

    I was able to do a similar thing in previous apps using the UIImagePicker and simply taking screenshots into an array and adding them together with audio, but this is limited as far as the framerate that I can get to without losing audio sync and I was unable to have screenshots include the live video preview layer.

    Any help you can offer would be great. Thanks

  35. Pingback: Tesseract OCR scannning « CopenhagenIOS

  36. Pingback: Tesseract OCR | AIPOPS

  37. Mohit says:

    Hi!
    Is there any way I can convert the frames to UIImage, do some processing on the UIImage, and then display the button at the coordinates retrieved after processing?

    • jjob says:

      Sounds perfectly possible. Though I haven’t done it before, you should be able to create a graphics context from a frame and do whatever you want to it, create a CGImage and then convert it to a UIImage. I’m sure there are lots of examples of this on the web.
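      A rough sketch of what that conversion could look like, assuming your frames arrive as BGRA sample buffers from an AVCaptureVideoDataOutput delegate (untested here; the method name and pixel-format assumption are just illustrative):

      ```objc
      // Hypothetical helper: convert a CMSampleBufferRef into a UIImage.
      // Assumes the video data output is configured for kCVPixelFormatType_32BGRA.
      // Needs AVFoundation, CoreMedia and CoreVideo linked into the project.
      - (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
          CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
          CVPixelBufferLockBaseAddress(imageBuffer, 0);

          void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
          size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
          size_t width = CVPixelBufferGetWidth(imageBuffer);
          size_t height = CVPixelBufferGetHeight(imageBuffer);

          // Wrap the raw pixel data in a bitmap context, then snapshot it.
          CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
          CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
              colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
          CGImageRef cgImage = CGBitmapContextCreateImage(context);

          UIImage *image = [UIImage imageWithCGImage:cgImage];

          CGImageRelease(cgImage);
          CGContextRelease(context);
          CGColorSpaceRelease(colorSpace);
          CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
          return image;
      }
      ```

      Once you have the UIImage you can process it however you like and position your button from the results.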

  38. Tishan says:

    Hi Jason,

    Why is using AVCaptureSession better? In your tutorials, it seems like both methods can achieve the same thing in almost the same amount of code.

    • jjob says:

      AVCaptureSession is better because capturing audio-visual data and giving you real time access to the data and the built in ability to process that data in any way you wish is its actual intended purpose. Using the UIImagePicker can work in simple cases but is very limited. There are no sessions at WWDC on capturing AV or doing AR using UIImagePicker. The right thing to do is to use AVCaptureSession because it is what Apple built for us to do this. :-)

  39. You wrote a while ago that wanted some code to switch the cameras?
    Try this:

    - (BOOL)hasMultipleCameras {
        NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
        if (devices != nil && [devices count] > 1) return YES;
        return NO;
    }

    - (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position {
        NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
        for (AVCaptureDevice *device in devices) {
            if (device.position == position) return device;
        }
        return nil;
    }

    - (void)swapFrontAndBackCameras {
        // check for available cameras!
        if (![self hasMultipleCameras]) return;

        // assumes session is running
        NSArray *inputs = self.captureSession.inputs; // should only be one value!
        for (AVCaptureDeviceInput *captureDeviceInput in inputs) {
            AVCaptureDevice *device = captureDeviceInput.device;
            if ([device hasMediaType:AVMediaTypeVideo]) {
                AVCaptureDevicePosition position = device.position;
                AVCaptureDevice *newCamera = nil;
                AVCaptureDeviceInput *newInput = nil;

                if (position == AVCaptureDevicePositionFront)
                    newCamera = [self cameraWithPosition:AVCaptureDevicePositionBack];
                else
                    newCamera = [self cameraWithPosition:AVCaptureDevicePositionFront];

                newInput = [AVCaptureDeviceInput deviceInputWithDevice:newCamera error:nil];

                // beginConfiguration ensures that pending changes are not applied immediately
                [self.captureSession beginConfiguration];

                [self.captureSession removeInput:captureDeviceInput]; // remove current
                [self.captureSession addInput:newInput]; // add new

                // Changes take effect once the outermost commitConfiguration is invoked.
                [self.captureSession commitConfiguration];
                break;
            }
        }
    }

  40. Rajesh says:

    Thanks a lot jjob…..Great post!

  41. Pingback: Decisions, Decisions, Decisions… « Augment Japan

  42. Anirudh says:

    Hi Jason,

    You have done a great job. I tried your code. I made some changes to have the camera record a video. Also I am able to overlay custom layers over the preview layer while recording. But I wanted to know if there is any way that I can record the video as well the custom layers that are being overlaid into a video file similar to watermarking the video with custom layers that can be saved in camera roll.


    • jjob says:

      Thanks @Anirudh, I think you can definitely do what you are describing but I haven’t ever done any work with recording video. I believe you should be able to process every single frame of the video as you record it and adding custom layers should be the same as adding them to a single image. This is one of the really big advantages of using AVCapture rather than UIImagePicker for this. It would be great if you could let us know if you figure this out.
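      For anyone digging into this: one possible post-processing approach (a sketch only, not something I have shipped; `movieURL` and `outputURL` are placeholders, and `videoCompositionWithPropertiesOfAsset:` requires iOS 6) is to export the recorded movie with AVVideoCompositionCoreAnimationTool so a watermark layer is rendered into every frame:

      ```objc
      // Hedged sketch: burn an overlay layer into an already-recorded movie.
      AVAsset *asset = [AVAsset assetWithURL:movieURL]; // movieURL is a placeholder
      AVMutableVideoComposition *composition =
          [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:asset];

      CGSize size = composition.renderSize;
      CALayer *overlayLayer = [CALayer layer];
      overlayLayer.contents = (id)[UIImage imageNamed:@"overlaygraphic.png"].CGImage;
      overlayLayer.frame = CGRectMake(0, 0, size.width, size.height);

      // The video frames are rendered into videoLayer; overlayLayer sits on top.
      CALayer *parentLayer = [CALayer layer];
      CALayer *videoLayer = [CALayer layer];
      parentLayer.frame = CGRectMake(0, 0, size.width, size.height);
      videoLayer.frame = parentLayer.frame;
      [parentLayer addSublayer:videoLayer];
      [parentLayer addSublayer:overlayLayer];

      composition.animationTool = [AVVideoCompositionCoreAnimationTool
          videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer
                                                                 inLayer:parentLayer];

      AVAssetExportSession *export = [[AVAssetExportSession alloc]
          initWithAsset:asset presetName:AVAssetExportPresetHighestQuality];
      export.videoComposition = composition;
      export.outputURL = outputURL; // placeholder destination
      export.outputFileType = AVFileTypeQuickTimeMovie;
      [export exportAsynchronouslyWithCompletionHandler:^{
          // Check export.status here before saving to the camera roll.
      }];
      ```

      Audio tracks pass through the export untouched, so this keeps the recorded sound in sync.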

    • Marty says:

      Have you succeeded in capturing the preview video with layers overlaid in the video?

  43. Marsman says:

    This implementation is an AWESOME replacement for your first offering. However, we seem to have lost some camera real estate. The substitution of the video camera (used in AVCaptureSession) vs the still camera (used in UIImagePicker) creates a size difference. Create separate apps (using your old implementation and the new one) and run them to see what I mean.

    Is there any way to utilize the still camera in the same fashion and get the real estate back?

    • jjob says:

      Strange, I thought that both apps used the full screen bounds for the camera view. Regardless, you have full control over the size of the camera view by setting the bounds of the preview layer. Currently that happens on line 22 of AROverlayViewController.m.

      • Marsman says:

        Adjusting the bounds will successfully scale the camera view, but the amount of camera real estate doesn’t change.

        To see the difference in what I’m referring to, open Apple’s Camera app. Set it to take still images and note the size of your room. Then switch to video mode. Big real estate difference.

        The Apple still image setting is representative of what the first incarnation of your app achieved. The Apple Video setting is representative of what your current app achieves.

        I suspect that the difference lies in the fact that your first app utilizes the cameraView (for still images) and the new incarnation utilizes AVCaptureVideoPreviewLayer (for video input). Obviously Apple treats video input and still image input size differently.

        • jjob says:

          Ah, I see what you are saying. I’m not sure what can be done about that. Perhaps looking into the zoom capabilities? I honestly don’t know but if you discover anything it would be great if you could post it back here.

  44. Vakul Saini says:

    @jjob: Thanks for helping us…….. nice job, buddy!

  45. Vakul Saini says:

    @jjob: thanks for helping us… good job, buddy!!

  46. Vakul Saini says:

    I am new to iPhone development, so excuse me if I make any foolish mistake.
    When I used this code it gave the following error; can you tell me what is the reason behind this?

    Undefined symbols for architecture i386:
    “_OBJC_CLASS_$_AVCaptureSession”, referenced from:
    objc-class-ref in CaptureSession.o
    “_OBJC_CLASS_$_AVCaptureVideoPreviewLayer”, referenced from:
    objc-class-ref in CaptureSession.o
    “_OBJC_CLASS_$_AVCaptureDevice”, referenced from:
    objc-class-ref in CaptureSession.o
    “_OBJC_CLASS_$_AVCaptureDeviceInput”, referenced from:
    objc-class-ref in CaptureSession.o
    “_AVLayerVideoGravityResizeAspectFill”, referenced from:
    -[CaptureSession addVideoPreviewLayer] in CaptureSession.o
    “_AVMediaTypeVideo”, referenced from:
    -[CaptureSession addVideoInput] in CaptureSession.o
    ld: symbol(s) not found for architecture i386
    clang: error: linker command failed with exit code 1 (use -v to see invocation)

    • jjob says:

      I think you forgot to add one of the frameworks. Take a look at the sample project and make sure you have included all the same frameworks as I used there. That should sort you out.

      • Vakul Saini says:

        These are the frameworks which I added.


        Now I have added two more, AVFoundation and CoreMedia, and it is working correctly.
        Sir, I want one more bit of help: I want to resize the camera. It is full screen now, but I want a camera view at the coordinates (43.0, 20.0, 235.0, 308.0), and below the camera I want to put my own buttons.

        I kindly request you please help me.
        Thanks !!

        • jjob says:

          Take a look at line 15 of AROverlayViewController. This is where you set the size and position of the preview layer.

          CGRect layerRect = [[[self view] layer] bounds];
          [[[self captureManager] previewLayer] setBounds:layerRect];
          [[[self captureManager] previewLayer] setPosition:CGPointMake(CGRectGetMidX(layerRect), CGRectGetMidY(layerRect))];
          [[[self view] layer] addSublayer:[[self captureManager] previewLayer]];

          • Vakul Saini says:

            thanks jjob thanks a lot … you are awesome :-)

          • Vakul Saini says:

            One thing to ask: if I use UIImagePickerController instead of AVCaptureSession, what do I have to do to make a custom camera view?

            picker=[[UIImagePickerController alloc]init];
            [self presentModalViewController:picker animated:YES];

            That is the code to capture a picture. How can I give a size to this picker view controller?

          • jjob says:

            As far as I know you cannot size the preview when using UIImagePickerController. This is another reason that using AVCaptureSession is the recommended way to capture images and video. It is far more flexible and far more powerful.

          • Vakul Saini says:

            One more question sir, please don’t be angry with me :-)
            I want to add another pic on the AVCapture video. When we click the scan button both pics become one single pic; I mean the second pic is overwritten onto the first pic.
            For this I used the get-current-context method, but I don’t want to use UIImagePicker, so please tell me how I can take a snapshot on the scan button’s click.

            Thanks !!!
            your fan :-)

          • WK says:

            Hi jjob, I am relatively new to Objective-C and Xcode and I need some clarification. How would I position/resize the AROverlayViewController’s preview? I am confused about where I can put coordinates and sizes.

          • jjob says:

            In viewDidLoad in AROverlayViewController you should see:

            [[[self captureManager] previewLayer] setBounds:layerRect];
            [[[self captureManager] previewLayer] setPosition:CGPointMake(CGRectGetMidX(layerRect),CGRectGetMidY(layerRect))];

            You set the size with the first method and the centre point with the second.
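            For example, a sketch of pinning the preview to an arbitrary rectangle (the numbers are only illustrative, not from the project):

            ```objc
            // Illustrative only: show the preview in a 235x308 region
            // whose origin is (43, 20) in the view's coordinate space.
            CGRect cameraRect = CGRectMake(43.0, 20.0, 235.0, 308.0);
            [[[self captureManager] previewLayer] setBounds:CGRectMake(0.0, 0.0,
                cameraRect.size.width, cameraRect.size.height)];
            [[[self captureManager] previewLayer] setPosition:CGPointMake(CGRectGetMidX(cameraRect),
                CGRectGetMidY(cameraRect))];
            ```

            The bounds give the layer its size, and the position places the layer’s centre point, so together they pin the preview to cameraRect.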

  47. Prats says:

    Hi jjob,

    Thanks for the awesome tutorial; it has helped me a lot.
    I am able to take images and store them in the photo gallery. I was using UIImagePickerController to record videos; however, I am stuck with recording videos to a directory using AVCapture. If you can help me with this it would be a great help (I am a newbie to iPhone development). Hope you are able to understand my problem.

    Thanks in advance

    • Prats says:

      Please point me to any of your old posts where I can understand AVCamDemo (Apple) in detail, or some tutorial about how to use it. I also want to record audio and video separately.


  48. Alessandro Bava says:

    Ehi nice tutorial!!:)

    But how about adding new views and using this code with a Storyboard?
    If I try to use this code with the storyboard, the app is black, without any layer or view!
    How can I solve this problem? I want to reach a similar view (camera + watermark layer) after a series of options and if-conditions.
    Can you help me ? :)

    • jjob says:

      There is no reason that you couldn’t set my example app up as one view in a multi view app. Whether you use Storyboard or not should be irrelevant. I don’t have time to make an example right now but if I find the time to do it I will post it.

      • Alessandro Bava says:

        Thank you so much! In this example you rightly use only code, without using IB. I’ve had some difficulty transposing this example to a multiview app using the storyboard. I’ll wait, wishing for new replies! I’m following you on Twitter too!! :)

  49. james says:

    Thank you very much for a very nice tutorial. I was able to get this up and running and also turn on the flash when needing extra light. I am very interested in how to zoom, to say 2x or 4x for example.

    Could you point me in the direction that I should be looking to make this possible?

    thanks again … I especially like the programmatic way to add buttons etc.

  50. manikandan says:

    hi, I am planning to develop an iPhone application based on AR using the Layar SDK () that implements the following functionality:

    1. When the user opens the app, the phone’s camera is turned on.
    2. When the user hovers the camera over a movie poster, say “Transformer” or “Am the legend”, a 3D image of the movie name is rendered on the poster (using OpenGL to create the images), visible on the screen.
    3. The dataset for the movies should not be saved in the app. It must be downloaded from a web server at run time.

    I didn’t find sample code for the project described above. Please could someone guide me to some sample code of this type.

    Thanks in advance.

  51. Michael says:

    Instead of using the back camera… how do I use the front camera? What lines do I change?

    I really like your tutorials and I’m learning a lot.

    • jjob says:

      Change addVideoInput to the following:

      - (void)addVideoInput {
          NSArray *devices = [AVCaptureDevice devices];
          AVCaptureDevice *frontCamera = nil;
          AVCaptureDevice *backCamera = nil;

          for (AVCaptureDevice *device in devices) {
              NSLog(@"Device name: %@", [device localizedName]);
              if ([device hasMediaType:AVMediaTypeVideo]) {
                  if ([device position] == AVCaptureDevicePositionBack) {
                      NSLog(@"Device position : back");
                      backCamera = device;
                  }
                  else {
                      NSLog(@"Device position : front");
                      frontCamera = device;
                  }
              }
          }

          NSError *error = nil;
          AVCaptureDeviceInput *frontFacingCameraDeviceInput =
              [AVCaptureDeviceInput deviceInputWithDevice:frontCamera error:&error];
          if (!error) {
              if ([[self captureSession] canAddInput:frontFacingCameraDeviceInput]) {
                  [[self captureSession] addInput:frontFacingCameraDeviceInput];
              }
              else {
                  NSLog(@"Couldn't add front facing video input");
              }
          }
      }
  52. Will Kortum says:

    Sorry to bother you again, but the images that I have taken with this code seem to be at very low resolution. Do you have any thoughts on how to take full-resolution stills?

  53. Pingback: How To Use The Front Facing Camera With AVCaptureSession | musicalgeometry

  54. Vakul Saini says:

    Hi Jjob,

    Can you tell me one thing please.
    When I capture a picture it does not save at the correct size; I want to save it at the same size as the iPhone Camera. So where do I give the size, and how?

    Kindly help me.

  55. Pingback: Overlaying Controls On Camera View « iOS Developers

  56. Vakul Saini says:

    Hi Jjob,
    You’ve helped me before, i am grateful to you for that.
    I am bothering you again :-( Sorry for that.

    Can you tell me about AirPlay functionality on iPhone? I am making an app in which I have to include it. Please help me with how I can implement AirPlay in the app.

    Thanks in advance!
    your fan :-)
    Vakul Saini

  57. macroswang says:

    In my application I need to capture a video and put a watermark on that video. The watermark should be text (time and notes). Can you help me? Thanks in advance!

  58. Pingback: Video Composition: Adding a product image to a video? - Quora

  59. Can Ürek says:

    That sample saved my life :) I hate UIImagePicker… Thanks for that.

  60. Prasanth says:

    Great job.
    A couple of typos
    In the init method all we do it -> In the init method all we do is
    There is really not much too it -> There is really not much to it

  61. otitserip says:

    hi, thanks for the code. Is there any way of recording video (not a still image) and saving the image overlay with the video? For example, I want to add an overlay to put an image on the camera preview, but when recording I need that image to be recorded also. Thanks,

    • jjob says:

      That is definitely possible with AVCaptureSession. I have never done that but I recommend you take some time to digest the AVCaptureSession documentation. I’m sure that in there it will point you to what you need to know to do this. Shouldn’t be too different from doing it to a still image, you just need to add your overlay image to every frame of the video. If you figure it out it would be great for you to share your solution back here for everyone. :-)

  62. Corbin says:

    Your google rank for the keyword “AVCaptureSession” is higher than the iOS reference manual. That is so awesome. :)

  63. Klaus says:

    thanks for the code. I want to take a picture every 5 minutes; with an NSTimer and your code I can do that. Very cool!
    But if I try to take a picture in the background, the app throws an exception. The NSTimer runs in the background fine.
    When the timer calls the method to take the picture, this code
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
    throws this exception
    *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** +[AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:] - NULL sample buffer.'

    in Info.plist I inserted two rows:
    Application does not run in background = NO
    Required background modes -> Item 0 = App plays audio

    What can I do to take a picture in the background? Big thanks!

    • jjob says:

      I’m not sure if you can take a picture while your app is in the background but maybe someone else knows otherwise. I’d be curious to know if you find out one way or another.

  64. Bullet says:


    Is it possible to scan images or pictures and grab the text we have in the images or pictures? If possible please let me know how, and send any code or URLs you have to my id………


  65. Hello jj0b!

    Thank you for this tutorial.

    I have been trying to patch what I have ‘read’ into my rather ambitious application. I did not want to SPAM your comment site with all that code, so I asked on stackoverflow. Could you have a look at my question, please?

    • Angelika Ophagen says:

      Big push for my ego: I have found my own mistake. Big let-down: it was of the kind called “stupid”. I have admitted to my shame in stackoverflow so you can have a look if you wish to.

      So what! I have a camera preview again and inside my absurdly complex storyboarding thingie. Now *hey ho!* on to the overlay view, first XIB in my short Xcode history.

      Thanks to you in any case. You made this possible!

    • jjob says:

      Will take a look.

  66. Mirco says:

    How can I add a label to the video while it is recording, please?
    I want to add a label with the time.


  67. Himanshu Dhameja says:

    I am developing an iPad app which supports only landscape orientation. The app needs to access the gallery and camera accordingly, but whenever I try to access the UIImagePickerController the app just gives an error that the orientation is not supported.

    Please help

  68. muruganandham says:

    Is there any possible way to add an overlay view to a recorded video, or to a video while recording, on iPhone using AVFoundation?

  69. JD Gauchat says:

    Thank you. I didn’t use your code but I took some ideas that helped me solve some issues that were driving me crazy! Good job!

  70. Hello jjob.
    Hope you are doing good.

    Can you tell me how to start and stop a capture session on a UIButton click, as well as automatically? Actually I want to start the AVCaptureSession on a button click, take a pic from it and show it in another view controller; when I come back to the AVCaptureSession view it should restart the session so I can take the pic again from the same session. I am new to iOS development. Can you suggest what to do?

    Ashutosh Mishra

  71. Dinesh Sharma says:

    I need some help with the camera overlay: my custom necklace image does not adjust to images captured by the front camera. The custom image shows up at the ear rather than in the proper location; I need to put the image on the throat.

    Please help me with this, and as a reference check the link below:
    Dinesh Sharma

  72. Valentine says:

    Thank you very much jjob, for the nice tutorial and the sample app.

    Worked like a charm.

  73. Ginofalaci says:

    Hi sir, I’m very interested in your tutorial.
    What I’m looking to do is to have the same thing (I don’t need to record any photo or video, just to view it) but with an image (that I can choose in the Photo Library) in the background, with the ability to change its transparency.
    The idea is to take a photo of a scene, and then compare it in real time with the live camera.
    So I just need to put a picker view in place of the scan image…
    Hope you understand.
    Thank you for your help and tutorials.
