Search Results

Search found 779 results on 32 pages for 'uiimage'.

  • UIImage resize (scale proportionally)

    - by Mustafa
    The following piece of code is resizing the image, but the problem is that it messes up the aspect ratio, resulting in a skewed image. Any pointers?

        // Change image resolution (auto-resize to fit)
        + (UIImage *)scaleImage:(UIImage *)image toResolution:(int)resolution {
            CGImageRef imgRef = [image CGImage];
            CGFloat width = CGImageGetWidth(imgRef);
            CGFloat height = CGImageGetHeight(imgRef);
            CGRect bounds = CGRectMake(0, 0, width, height);

            // If already at or below the target resolution, return the original image; otherwise scale
            if (width <= resolution && height <= resolution) {
                return image;
            } else {
                CGFloat ratio = width / height;
                if (ratio > 1) {
                    bounds.size.width = resolution;
                    bounds.size.height = bounds.size.width / ratio;
                } else {
                    bounds.size.height = resolution;
                    bounds.size.width = bounds.size.height * ratio;
                }
            }

            UIGraphicsBeginImageContext(bounds.size);
            [image drawInRect:CGRectMake(0.0, 0.0, bounds.size.width, bounds.size.height)];
            UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            return imageCopy;
        }
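
    One way to rule out skew entirely is to derive a single scale factor and apply it to both axes — a minimal aspect-fit sketch for comparison, not the poster's code:

        // Hypothetical helper: aspect-fit using one scale factor for both dimensions
        + (UIImage *)aspectFitImage:(UIImage *)image toResolution:(CGFloat)resolution {
            CGSize size = image.size;
            if (size.width <= resolution && size.height <= resolution) return image;

            // A single factor applied to both axes cannot change the aspect ratio
            CGFloat scale = MIN(resolution / size.width, resolution / size.height);
            CGSize target = CGSizeMake(floorf(size.width * scale), floorf(size.height * scale));

            UIGraphicsBeginImageContext(target);
            [image drawInRect:CGRectMake(0, 0, target.width, target.height)];
            UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            return result;
        }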

  • iOS: saving a UILabel and UIImageView as a UIImage

    - by Ashraf Hussein
    I'm trying to add text to an image by adding a UILabel as a subview of a UIImageView. That part works, but now I want to save the combined result as a single image. I'm using renderInContext: but it's not working. Here's my code:

        UIImage *img = [UIImage imageNamed:@"IMG_1650.JPG"];
        float x = (img.size.width / imageView.frame.size.width) * touchPoint.x;
        float y = (img.size.height / imageView.frame.size.height) * touchPoint.y;
        CGPoint tpoint = CGPointMake(x, y);
        UIFont *font = [UIFont boldSystemFontOfSize:30];
        context = UIGraphicsGetCurrentContext();
        UIGraphicsBeginImageContextWithOptions(img.size, YES, 0.0);
        [[UIColor redColor] set];
        for (UIView *view in [imageView subviews]) {
            [view removeFromSuperview];
        }
        UILabel *lbl = [[UILabel alloc] init];
        [lbl setText:txt];
        [lbl setBackgroundColor:[UIColor clearColor]];
        CGSize sz = [txt sizeWithFont:lbl.font];
        [lbl setFrame:CGRectMake(touchPoint.x, touchPoint.y, sz.width, sz.height)];
        lbl.transform = CGAffineTransformMakeRotation(-M_PI / 4);
        [imageView addSubview:lbl];
        [imageView bringSubviewToFront:lbl];
        [imageView setImage:img];
        [imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
        [lbl.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *nImg = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        UIImageWriteToSavedPhotosAlbum(nImg, nil, nil, nil);

    Thanks.
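
    Two things stand out in the code above (observations, not the poster's conclusions): UIGraphicsGetCurrentContext() is captured before the image context is begun, and the label's layer is rendered twice — once as a subview of imageView and once directly. A minimal sketch of the usual order, beginning the context first and rendering the hierarchy once:

        // Sketch: begin the image context before fetching the current context
        UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, YES, 0.0);
        CGContextRef ctx = UIGraphicsGetCurrentContext();

        // renderInContext: draws the whole layer tree, so the label subview is included
        [imageView.layer renderInContext:ctx];

        UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        UIImageWriteToSavedPhotosAlbum(flattened, nil, nil, nil);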

  • I'm stumped: why is UIImage/Texture2D memory not being freed?

    - by howsyourface
    I've been looking everywhere trying to find a solution to this problem; nothing seems to help. I set up this basic test to try to find out why my memory wasn't being freed:

        if (texture != nil) {
            [texture release];
            texture = nil;
        } else {
            UIImage *ui = [UIImage imageWithContentsOfFile:
                              [[NSBundle mainBundle] pathForResource:@"image" ofType:@"png"]];
            texture = [[Texture2D alloc] initWithImage:ui];
        }

    I placed this in touchesBegan and tested by monitoring memory usage with Instruments. At the start it is normally 11.5–12 MB. After the first touch, with no texture existing yet, the texture is created and memory jumps to 13.5–14 MB. After the second touch the memory does decrease, but only to around 12.5–13 MB — a noticeable chunk of memory is still occupied. I tested this on a much larger scale, loading 10 of these large textures at a time: memory jumps to over 30 MB and remains there, but on the second touch, after releasing the textures, it only falls to around 22 MB. I also tried loading the images with [UIImage imageNamed:], but because of the caching that method performs, the full 30 MB simply remains in memory.

  • Can I load a UIImage from a URL?

    - by progrmr
    I have a URL for an image (I got it from UIImagePickerController), but I no longer have the image in memory (the URL was saved from a previous run of the app). Can I reload the UIImage from the URL? I see that UIImage has imageWithContentsOfFile:, but I have a URL. Can I use NSData's dataWithContentsOfURL: to read the URL?

    EDIT: based on @Daniel's answer I tried the following code, but it doesn't work:

        NSLog(@"%s %@", __PRETTY_FUNCTION__, photoURL);
        if (photoURL) {
            NSURL *aURL = [NSURL URLWithString:photoURL];
            NSData *data = [[NSData alloc] initWithContentsOfURL:aURL];
            self.photoImage = [UIImage imageWithData:data];
            [data release];
        }

    When I ran it, the console showed:

        -[PhotoBox willMoveToWindow:] file://localhost/Users/gary/Library/Application%20Support/iPhone%20Simulator/3.2/Media/DCIM/100APPLE/IMG_0004.JPG
        *** -[NSURL length]: unrecognized selector sent to instance 0x536fbe0
        *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSURL length]: unrecognized selector sent to instance 0x536fbe0'

    Looking at the call stack, I'm calling URLWithString:, which calls URLWithString:relativeToURL:, then initWithString:relativeToURL:, then _CFStringIsLegalURLString, then CFStringGetLength, then forwarding_prep_0, then forwarding, then -[NSObject doesNotRecognizeSelector:]. Any ideas why my NSString (photoURL's address is 0x536fbe0) doesn't respond to length? Why does it say it doesn't respond to -[NSURL length]? Doesn't it know that the parameter is an NSString, not an NSURL?
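
    The exception text itself hints at the answer: the runtime reports the receiver as an NSURL, which suggests photoURL already is an NSURL rather than an NSString — and passing an NSURL to URLWithString: (which expects a string) produces exactly this -[NSURL length] crash. A defensive sketch under that assumption:

        // Sketch: tolerate photoURL being either an NSString or an NSURL
        NSURL *aURL = nil;
        if ([photoURL isKindOfClass:[NSURL class]]) {
            aURL = (NSURL *)photoURL;                      // already a URL; use it directly
        } else if ([photoURL isKindOfClass:[NSString class]]) {
            aURL = [NSURL URLWithString:(NSString *)photoURL];
        }
        if (aURL) {
            NSData *data = [[NSData alloc] initWithContentsOfURL:aURL];
            self.photoImage = [UIImage imageWithData:data];
            [data release];
        }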

  • Can't process UIImage from UIImagePickerController, and app crashes

    - by eimaikala
    Hello guys, I am new to the iPhone SDK and can't figure out why my application crashes. In the .h I have:

        UIImage *myimage; // so it can be used as a global
        - (IBAction)save;
        @property (nonatomic, retain) UIImage *myimage;

    In the .m I have:

        @synthesize myimage;

        - (void)viewDidLoad {
            self.imgPicker = [[UIImagePickerController alloc] init];
            self.imgPicker.allowsImageEditing = YES;
            self.imgPicker.delegate = self;
            self.imgPicker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
        }

        - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
            myimage = [[info objectForKey:UIImagePickerControllerOriginalImage] retain];
            [picker dismissModalViewControllerAnimated:YES];
        }

        - (IBAction)process {
            myimage = [self process:myimage var2:Val2 var3:Val3 var4:Val4];
            UIImageWriteToSavedPhotosAlbum(myimage, nil, nil, nil);
            [myimage release];
        }

    When the process button is tapped, the application crashes, and I really have no idea why this happens. When I change it to:

        - (IBAction)process {
            myimage = [UIImage imageNamed:@"im1.jpg"];
            myimage = [self process:myimage var2:Val2 var3:Val3 var4:Val4];
            UIImageWriteToSavedPhotosAlbum(myimage, nil, nil, nil);
            [myimage release];
        }

    the process button works perfectly. Any help would be appreciated. Thanks in advance.
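
    A likely culprit (an assumption — the poster's process:var2:var3:var4: method isn't shown): assigning its result straight to the ivar leaks the retained picker image, and the trailing [myimage release] then over-releases an object the code likely never owned. Routing the assignment through the retain property setter sidesteps both problems — a sketch:

        // Sketch: let the retain property manage ownership instead of releasing manually
        - (IBAction)process {
            // the synthesized setter releases the old image and retains the new one
            self.myimage = [self process:self.myimage var2:Val2 var3:Val3 var4:Val4];
            UIImageWriteToSavedPhotosAlbum(self.myimage, nil, nil, nil);
            // no [myimage release] here; the property owns the reference now
        }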

  • Crop circular or elliptical image from original UIImage

    - by vikas ojha
    I am working with OpenCV to detect a face, and I want the face to be cropped once it's detected. So far I have detected the face and drawn a rect/ellipse around it on the iPhone. Please help me crop the face in a circular/elliptical pattern.

        - (UIImage *)opencvFaceDetect:(UIImage *)originalImage {
            cvSetErrMode(CV_ErrModeParent);
            IplImage *image = [self CreateIplImageFromUIImage:originalImage];

            // Scaling down. cvCreateImage creates an IPL image (header and data):
            // CVAPI(IplImage*) cvCreateImage(CvSize size, int depth, int channels);
            IplImage *small_image = cvCreateImage(cvSize(image->width/2, image->height/2), IPL_DEPTH_8U, 3);

            // Smooth down the Gaussian surface: cvPyrDown
            cvPyrDown(image, small_image, CV_GAUSSIAN_5x5);
            int scale = 2;

            // Load the XML cascade
            NSString *path = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_default" ofType:@"xml"];
            CvHaarClassifierCascade *cascade = (CvHaarClassifierCascade *)cvLoad([path cStringUsingEncoding:NSASCIIStringEncoding], NULL, NULL, NULL);

            // Check whether the cascade loaded successfully; otherwise report an error and quit
            if (!cascade) {
                NSLog(@"ERROR: Could not load classifier cascade\n");
                //return;
            }

            // Allocate the memory storage and clear anything used before
            CvMemStorage *storage = cvCreateMemStorage(0);
            cvClearMemStorage(storage);

            CGColorSpaceRef colorSpace;
            CGContextRef contextRef;
            CGRect face_rect;

            // If the cascade is loaded, find the faces
            if (cascade) {
                CvSeq *faces = cvHaarDetectObjects(small_image, cascade, storage, 1.1f, 3, 0, cvSize(20, 20));
                cvReleaseImage(&small_image);

                // Create a canvas to show the results
                CGImageRef imageRef = originalImage.CGImage;
                colorSpace = CGColorSpaceCreateDeviceRGB();
                contextRef = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height, 8,
                                                   originalImage.size.width * 4, colorSpace,
                                                   kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
                CGContextDrawImage(contextRef, CGRectMake(0, 0, originalImage.size.width, originalImage.size.height), imageRef);
                CGContextSetLineWidth(contextRef, 4);
                CGContextSetRGBStrokeColor(contextRef, 1.0, 1.0, 1.0, 0.5);

                // Draw the results on the image: loop over the faces found, marking each with a rectangle/ellipse
                for (int i = 0; i < faces->total; i++) {
                    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

                    // Calculate the rect of this face
                    CvRect cvrect = *(CvRect *)cvGetSeqElem(faces, i);
                    // face_rect = CGContextConvertRectToDeviceSpace(contextRef,
                    //     CGRectMake(cvrect.x * scale, cvrect.y * scale, cvrect.width * scale, cvrect.height * scale));
                    face_rect = CGContextConvertRectToDeviceSpace(contextRef,
                                    CGRectMake(cvrect.x * scale, cvrect.y, cvrect.width * scale, cvrect.height * scale * 1.25));

                    facedetectapp = (FaceDetectAppDelegate *)[[UIApplication sharedApplication] delegate];
                    facedetectapp.grabcropcoordrect = face_rect;
                    NSLog(@" FACE off %f %f %f %f",
                          facedetectapp.grabcropcoordrect.origin.x, facedetectapp.grabcropcoordrect.origin.y,
                          facedetectapp.grabcropcoordrect.size.width, facedetectapp.grabcropcoordrect.size.height);

                    CGContextStrokeRect(contextRef, face_rect);
                    //CGContextFillEllipseInRect(contextRef, face_rect);
                    CGContextStrokeEllipseInRect(contextRef, face_rect);

                    [pool release];
                }
            }

            CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage], face_rect);
            UIImage *returnImage = [UIImage imageWithCGImage:imageRef];
            CGImageRelease(imageRef);
            CGContextRelease(contextRef);
            CGColorSpaceRelease(colorSpace);

            cvReleaseMemStorage(&storage);
            cvReleaseHaarClassifierCascade(&cascade);

            return returnImage;
        }

    Thanks, Vikas
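
    CGImageCreateWithImageInRect can only produce a rectangular crop. One common way to get a circular or elliptical result (a sketch, not the poster's code) is to clip a fresh image context to an ellipse before drawing the face region into it:

        // Sketch: crop a UIImage to the ellipse inscribed in faceRect
        - (UIImage *)ellipticalCropOfImage:(UIImage *)source inRect:(CGRect)faceRect {
            UIGraphicsBeginImageContext(faceRect.size);
            CGContextRef ctx = UIGraphicsGetCurrentContext();

            // Clip to an ellipse filling the crop area; pixels outside it stay transparent
            CGContextAddEllipseInRect(ctx, CGRectMake(0, 0, faceRect.size.width, faceRect.size.height));
            CGContextClip(ctx);

            // Shift the source so that faceRect lands at the context's origin
            [source drawAtPoint:CGPointMake(-faceRect.origin.x, -faceRect.origin.y)];

            UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            return cropped;
        }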

  • How does UIImage work in low-memory situations?

    - by kubi
    According to the UIImage documentation: "In low-memory situations, image data may be purged from a UIImage object to free up memory on the system." Does anyone know how this works? It appears that this process is completely transparent and occurs in the background with no input from me, but I can't find any definitive documentation one way or the other. Second, will this data purge occur when the image is not loaded by me? (I'm getting the image from UIImagePicker.) Here's the situation: I'm taking a picture with the UIImagePickerController and immediately sending that image to a new UIViewController for display. Sending the raw image to the new controller crashes my app with memory warnings about 30% of the time. Resizing the image takes a few moments — time I'd rather not spend if there's a third option available to me.

  • UIImage from NSDocumentDirectory leaking memory

    - by Emil
    Hey. I currently have this code:

        UIImage *image = [[UIImage alloc] initWithContentsOfFile:
                             [imagesPath stringByAppendingPathComponent:
                                 [NSString stringWithFormat:@"/%@.png", [postsArrayID objectAtIndex:indexPath.row]]]];

    It loads an image to set on a UITableViewCell. This obviously leaks a lot of memory (I do release it, two lines down, after setting the cell's image to it), and I'm not sure if it caches the image at all. Is there another way, that doesn't leak so much, to load images multiple times — as in a table view — from the Documents directory of my app? Thanks.
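
    One common pattern — a sketch, assuming the same images reappear as cells are reused — is to cache decoded images in a dictionary keyed by post ID, so each file is read and decoded only once:

        // Sketch: a simple in-memory cache for Documents-directory images.
        // `imageCache` is a hypothetical NSMutableDictionary ivar created up front.
        - (UIImage *)cachedImageForKey:(NSString *)postID {
            UIImage *image = [imageCache objectForKey:postID];
            if (image == nil) {
                NSString *path = [imagesPath stringByAppendingPathComponent:
                                     [NSString stringWithFormat:@"%@.png", postID]];
                image = [UIImage imageWithContentsOfFile:path]; // autoreleased; nothing to release
                if (image) [imageCache setObject:image forKey:postID];
            }
            return image;
        }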

  • Release a retained UIImage property loaded via imageNamed?

    - by user158103
    In my class I've defined a (nonatomic, retain) property for a UIImage, and I assigned this property an image loaded via [UIImage imageNamed:@"file.png"]. If at some point I want to reassign this property to another image, should I release the prior reference first? I am confused because, by the retain property rule, I know I should release it; but because imageNamed: is a convenience method (it does not use alloc), I'm not sure what rule to apply here. Thanks for the insight!
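
    Under manual reference counting, what matters is that the retain setter took ownership when the property was assigned — not how the image was originally created. A minimal sketch of the two idiomatic options, assuming the property is named image:

        // Option 1: assign through the synthesized setter; it releases the old value
        self.image = [UIImage imageNamed:@"other.png"];

        // Option 2: manage the ivar directly
        [image release];
        image = [[UIImage imageNamed:@"other.png"] retain];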

  • UIImage in UIView and display image parts out of bounds

    - by Mpampinos Holmens
    I have an image in a subview that is bigger than the subview itself. Is it possible to display the parts of the image that fall outside the subview? I have tried view.clipsToBounds = NO; but no luck so far. Here is part of my code:

        xRayView = [[XRayView alloc] initWithFrame:CGRectMake((1024 - 800) / 2, self.device.frame.origin.y - 26, 800, 530)];
        xRayView.clipsToBounds = NO;
        [xRayView setForgroundImage:[UIImage imageWithContentsOfFile:
            [[NSBundle mainBundle] pathForResource:@"xray_foreground_01" ofType:@"png"]]];
        [xRayView setBackgroundImage:[UIImage imageWithContentsOfFile:
            [[NSBundle mainBundle] pathForResource:@"xray_background_01" ofType:@"png"]]];

    The foreground and background images are bigger than the x-ray view.

  • Downloading image from server without file extension (NSData to UIImage)

    - by Msencenb
    From my server I'm pulling down a URL that is supposed to return a profile image. The relevant code for pulling down the image is:

        NSString *urlString = [NSString stringWithFormat:@"%@%@", kBaseURL, profile_image_url];
        profilePic = [UIImage imageWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:urlString]]];

    My URL is in the format (note there is no file extension on the end, since the image is rendered dynamically): localhost:8000/people/1/profile_image. If I load the URL in my browser, the image displays; however, the code above does not produce a UIImage. I've verified that the code does pull an image from a random site on the web. Any thoughts on why this is happening?
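
    Two things worth checking (assumptions, since kBaseURL isn't shown): a URL without an explicit http:// scheme won't fetch through NSData, and imageWithData: returns nil whenever the server responds with something other than raw image bytes (an HTML error page, for example). A small diagnostic sketch:

        // Sketch: verify the URL parses as expected and inspect the response body
        NSURL *url = [NSURL URLWithString:urlString];
        NSLog(@"scheme=%@ url=%@", [url scheme], url);   // expect "http" here, not "localhost"

        NSData *data = [NSData dataWithContentsOfURL:url];
        NSLog(@"received %lu bytes", (unsigned long)[data length]);

        UIImage *pic = [UIImage imageWithData:data];
        if (pic == nil && data != nil) {
            // If this prints HTML, the server isn't sending image bytes to the app
            NSLog(@"body=%@", [[[NSString alloc] initWithData:data
                                                     encoding:NSUTF8StringEncoding] autorelease]);
        }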

  • How to disable multiple touches on a ScrollView and UIImage

    - by Rob
    I have a scroll view that I load images into; the user can touch an image to play a sound. However, the program gets confused when I press one image with one finger and then another image with a different finger: it thinks I pushed the same button again and plays the sound a second time (so two copies of the same sound play at once, even though I may have pressed a different sound button). I tried setting exclusiveTouch on each image view, but that didn't seem to work here for some reason. What am I missing, or is there a better way to do this? Here is some code. For creating the buttons:

        - (void)createButtons {
            CGRect myFrame = [self.outletScrollView bounds];
            CGFloat gapX, gapY, x, y;
            int columns = 3;
            int myIndex = 0;
            int viewWidth = myFrame.size.width;
            int buttonsCount = [g_AppsList count];
            float actualRows = (float)buttonsCount / columns;
            int rows = buttonsCount / columns;
            int buttonWidth = 100;
            int buttonHeight = 100;

            if (actualRows > rows) rows++;

            // Set the scroll view content size to hold the whole glitter icons library
            gapX = (viewWidth - columns * buttonWidth) / (columns + 1);
            gapY = gapX;
            y = gapY;
            int contentHeight = (rows * (buttonHeight + gapY)) + gapY;
            [outletScrollView setContentSize:CGSizeMake(viewWidth, contentHeight)];

            UIImage *myImage;
            NSString *buttonName;

            // Center all buttons in the view
            int i = 1, j = 1;
            for (i; i <= rows; i++) {
                // Calculate the gap between buttons
                gapX = (viewWidth - (buttonWidth * columns)) / (columns + 1);
                if (i == rows) {
                    // This is the last row; recalculate gap and pitch
                    gapX = (viewWidth - (buttonWidth * buttonsCount)) / (buttonsCount + 1);
                    columns = buttonsCount;
                }
                x = gapX;
                j = 1;
                for (j; j <= columns; j++) {
                    // Get the shape name
                    buttonName = [g_AppsList objectAtIndex:myIndex];
                    buttonName = [NSString stringWithFormat:@"%@.png", buttonName];
                    myImage = [UIImage imageNamed:buttonName];

                    TapDetectingImageView *imageView = [[TapDetectingImageView alloc] initWithImage:myImage];
                    [imageView setFrame:CGRectMake(x, y, buttonWidth, buttonHeight)];
                    [imageView setTag:myIndex];
                    [imageView setContentMode:UIViewContentModeScaleToFill];
                    [imageView setUserInteractionEnabled:YES];
                    [imageView setMultipleTouchEnabled:NO];
                    [imageView setExclusiveTouch:YES];
                    [imageView setDelegate:self];

                    // Add the button to the current view
                    [outletScrollView addSubview:imageView];
                    [imageView release];

                    x = x + buttonWidth + gapX;
                    myIndex++;
                }
                y = y + buttonHeight + gapY;
                buttonsCount = buttonsCount - columns;
            }
        }

    And for playing the sounds:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            // Stop any sound already playing
            [theAudio stop];
            // Cancel any pending handleSingleTap messages
            [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(handleSingleTap) object:nil];

            UITouch *touch = [[event allTouches] anyObject];
            NSString *filename = [g_AppsList objectAtIndex:[touch view].tag];
            NSString *path = [[NSBundle mainBundle] pathForResource:filename ofType:@"m4a"];
            theAudio = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path] error:NULL];
            theAudio.delegate = self;
            [theAudio prepareToPlay];
            [theAudio setNumberOfLoops:-1];
            [theAudio setVolume:g_Volume];
            [theAudio play];
        }

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            BOOL allTouchesEnded = ([touches count] == [[event touchesForView:self] count]);
            if (allTouchesEnded) {
                [theAudio stop];
            }
            [theAudio stop];
        }

  • iPhone: How to Determine Average Light/Dark of an Area of an UIImage

    - by TechZen
    I need to place labels with a transparent background over a variable-content UIImage. Readability will vary significantly depending on the relationship between the color of the label's text and the color/luminosity of the area of the image displayed under the label. Since the image will be constantly changing, the color of the label's text needs to change in sync.

    I have found several techniques for determining the color, perceived luminosity, etc. of a single pixel. However, I need to determine — rather quickly, while a view loads — the rough perceived color/luminosity of the area of the UIImage under the frame of the UILabel. I presume I will also need to measure the alpha, because the same color/luminosity looks different at different alpha values.

    Is there a way to calculate such a value for an area, or will I be reduced to simply summing pixels? If it comes to that, is there an algorithm to accomplish this? I've thought of two possible approaches:

    1. Perform some "folding" operations, i.e. combining pixels from one half of the area with the other half, then repeating until I get a single value. Would this be practical? How would you logically combine pixels to average their perceived color/luminosity?
    2. Sample a statistically significant number of pixels in the area and then combine them (somehow) to get a rough measure.

    I think this problem comes up a lot these days, with people being so fond of customizing backgrounds. It seems like it would be worth my time to bang out a category or class to handle this and then share it around.
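
    A third option worth noting (a sketch, not from the question): Core Graphics will do the averaging itself if you draw the region of interest into a 1x1 bitmap context with interpolation enabled — the single resulting pixel is approximately the mean color and alpha of the area:

        // Sketch: average color/alpha of `area` (in image coordinates) within a UIImage
        - (void)logAverageOfImage:(UIImage *)image inRect:(CGRect)area {
            unsigned char rgba[4] = {0, 0, 0, 0};
            CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
            CGContextRef ctx = CGBitmapContextCreate(rgba, 1, 1, 8, 4, space,
                                                     kCGImageAlphaPremultipliedLast);
            CGColorSpaceRelease(space);

            // Crop to the label's area, then let CG scale it down to a single pixel
            CGImageRef cropped = CGImageCreateWithImageInRect(image.CGImage, area);
            CGContextSetInterpolationQuality(ctx, kCGInterpolationMedium);
            CGContextDrawImage(ctx, CGRectMake(0, 0, 1, 1), cropped);
            CGImageRelease(cropped);
            CGContextRelease(ctx);

            // rgba now holds the (premultiplied) average; a simple luminosity estimate:
            float luminosity = (0.299 * rgba[0] + 0.587 * rgba[1] + 0.114 * rgba[2]) / 255.0;
            NSLog(@"avg rgba = %d %d %d %d, luminosity = %f",
                  rgba[0], rgba[1], rgba[2], rgba[3], luminosity);
        }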

  • Rendering a CGPDFPage into a UIImage

    - by James Antrobus
    I'm trying to render a CGPDFPage (selected from a CGPDFDocument) into a UIImage to display in a view. I have the following code in MonoTouch, which gets me part of the way there:

        RectangleF PDFRectangle = new RectangleF(0, 0, UIScreen.MainScreen.Bounds.Width, UIScreen.MainScreen.Bounds.Height);

        public override void ViewDidLoad ()
        {
            UIGraphics.BeginImageContext(new SizeF(PDFRectangle.Width, PDFRectangle.Height));
            CGContext context = UIGraphics.GetCurrentContext();
            context.SaveState();

            CGPDFDocument pdfDoc = CGPDFDocument.FromFile("test.pdf");
            CGPDFPage pdfPage = pdfDoc.GetPage(1);
            context.DrawPDFPage(pdfPage);

            UIImage testImage = UIGraphics.GetImageFromCurrentImageContext();
            pdfDoc.Dispose();
            context.RestoreState();

            UIImageView imageView = new UIImageView(testImage);
            UIGraphics.EndImageContext();
            View.AddSubview(imageView);
        }

    A section of the CGPDFPage is displayed, but rotated back-to-front and upside down. My question is: how do I select the full PDF page and flip it round so it displays correctly? I have seen a few examples using ScaleCTM and TranslateCTM but couldn't get them working. Any examples in Objective-C are fine — I'll take all the help I can get :) Thanks.
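
    Since the asker welcomes Objective-C: PDF pages use a bottom-left origin while UIKit contexts use top-left, so the usual fix is to flip the context before drawing. A sketch assuming pdfPage (a CGPDFPageRef) and a target pageRect are already in hand — the MonoTouch equivalents are context.TranslateCTM, context.ScaleCTM and context.DrawPDFPage:

        // Sketch: flip the context's y-axis so the PDF page draws upright
        UIGraphicsBeginImageContext(pageRect.size);
        CGContextRef ctx = UIGraphicsGetCurrentContext();

        // Move the origin to the bottom-left corner and flip vertically
        CGContextTranslateCTM(ctx, 0.0, pageRect.size.height);
        CGContextScaleCTM(ctx, 1.0, -1.0);

        // Fit the full page into the target rect, then draw it
        CGContextConcatCTM(ctx, CGPDFPageGetDrawingTransform(pdfPage, kCGPDFMediaBox, pageRect, 0, true));
        CGContextDrawPDFPage(ctx, pdfPage);

        UIImage *pageImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();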

  • How to reduce UIImage size as much as possible

    - by Tharindu Madushanka
    I am using the code from "Resize a UIImage Right Way" to resize the image, with the interpolation quality set to kCGInterpolationLow. I then use UIImageJPEGRepresentation(image, 0.0) to get the NSData for the image, but it is still a little big — around 100 KB — when I send it over the network. Can I reduce it further? If I am to reduce it more, what could I do? Thanks and kind regards.
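
    With the JPEG quality already at its floor, the remaining lever is pixel count: the encoded size scales roughly with width x height, so halving both dimensions cuts the payload to roughly a quarter. A minimal sketch (the target size is an assumption):

        // Sketch: shrink pixel dimensions before JPEG-encoding for upload
        CGSize target = CGSizeMake(image.size.width / 2, image.size.height / 2); // assumed target
        UIGraphicsBeginImageContext(target);
        [image drawInRect:CGRectMake(0, 0, target.width, target.height)];
        UIImage *smaller = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        NSData *payload = UIImageJPEGRepresentation(smaller, 0.1);
        NSLog(@"payload: %lu bytes", (unsigned long)[payload length]);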

  • Issue with UIImage files not being found on phone

    - by Driss Zouak
    Note: Using MonoTouch. I have a directory called Images where I keep all my PNG files. In my code I have the following:

        _imgMinusDark = UIImage.FromFile("images/MinusDark.png");

    On the simulator it runs fine; on the phone it's null. The Images folder content (all the PNGs) is marked as Content in MonoDevelop in terms of Build Action. What am I missing? Thanks.
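
    One likely explanation (an assumption, but a classic simulator-versus-device difference): the iPhone's file system is case-sensitive, while the Mac file system backing the simulator typically is not — so "images/MinusDark.png" only matches a folder named "Images" on the simulator. Matching the case exactly, or resolving the path through the bundle, avoids the problem; in Objective-C terms:

        // Sketch: let the bundle resolve the path, using the folder's exact case
        NSString *path = [[NSBundle mainBundle] pathForResource:@"MinusDark"
                                                         ofType:@"png"
                                                    inDirectory:@"Images"]; // capital I, as on disk
        UIImage *minusDark = [UIImage imageWithContentsOfFile:path];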

  • How to save multiple UIImage to a file iPad

    - by aron
    I have a PDF reader that displays the pages of a document. What I want to do is allow the user to draw over the PDF in a transparent view, and then save the drawing (a UIImage) to disk. If at all possible, I don't want the Documents folder to fill up with files like documentName_page01.png, documentName_page02.png for every page that is drawn over. However, I can't figure out how to store these UIImages in a single file without it becoming unwieldy and memory-intensive. Any ideas appreciated.
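
    One way to keep a whole document's drawings in a single file (a sketch; the names are illustrative) is to archive a dictionary that maps page numbers to PNG data:

        // Sketch: store all page drawings for one document in one archive file
        NSMutableDictionary *drawings = [NSMutableDictionary dictionary];
        [drawings setObject:UIImagePNGRepresentation(pageDrawing)        // pageDrawing: this page's UIImage
                     forKey:[NSNumber numberWithInt:pageNumber]];
        [NSKeyedArchiver archiveRootObject:drawings toFile:archivePath]; // e.g. one file per document

        // Later: pull one page's drawing back out
        NSDictionary *loaded = [NSKeyedUnarchiver unarchiveObjectWithFile:archivePath];
        UIImage *restored = [UIImage imageWithData:
                                [loaded objectForKey:[NSNumber numberWithInt:pageNumber]]];

    Note that the archive is read and written as a whole, so for documents with many large drawings the memory cost may still argue for one file per page.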

  • Uploading UIImage to server using UIImageJPEGRepresentation

    - by Thomas Joos
    Hi all, I'm writing an app which uploads a UIImage to my server. This works — I can see the pictures being added. I use UIImageJPEGRepresentation for the image data and configure an NSMutableURLRequest (setting the URL, HTTP method, boundary, content types and parameters). I want to display a UIAlertView when the file has been uploaded, and I wrote this code:

        // Now let's make the connection to the web
        NSData *returnData = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil];
        NSString *returnString = [[NSString alloc] initWithData:returnData encoding:NSUTF8StringEncoding];
        NSLog(@"return info: %@", returnString);

        if (returnString == @"OK") {
            NSLog(@"you are snapped!");
            // Show message: image successfully saved
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"You are snapped!"
                                                            message:@"BBC snapped your picture and will send it to your email address!"
                                                           delegate:self
                                                  cancelButtonTitle:@"OK!"
                                                  otherButtonTitles:nil];
            [alert show];
            [alert release];
        }
        [returnString release];

    The returnString outputs:

        2010-04-22 09:49:56.226 bbc_iwh_v1[558:207] return info: OK

    The problem is that my if statement never matches returnString == @"OK", as I don't get the alert view. How should I check this returnString value?
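
    The == here compares pointers, not characters: returnString is a freshly allocated object, so it can never be pointer-equal to the @"OK" literal. String contents are compared with isEqualToString: — and trimming whitespace guards against a trailing newline from the server (an assumption about the response):

        // Sketch: compare string contents, not pointers
        NSString *trimmed = [returnString stringByTrimmingCharactersInSet:
                                [NSCharacterSet whitespaceAndNewlineCharacterSet]];
        if ([trimmed isEqualToString:@"OK"]) {
            NSLog(@"you are snapped!");
            // show the alert as before
        }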

  • Reason why UIImage gives me a 'distorted' image sometimes

    - by Cedric Vandendriessche
    I have a custom UIView with a UILabel and a UIImageView subview (I tried using a UIImageView subclass as well). I assign an image to it and add the view to the screen. I wrote a function which adds a number of LetterBoxes (my custom class) to the screen:

        - (void)drawBoxesForWord:(NSString *)word {
            if (boxesContainer == nil) {
                // Create a container for the LetterBoxes (for animation purposes)
                boxesContainer = [[UIView alloc] initWithFrame:CGRectMake(0, 205, 320, 50)];
                [self.view addSubview:boxesContainer];
            }

            // Calculate the width of the letter boxes
            NSInteger numberOfCharacters = [word length];
            CGFloat totalWidth = numberOfCharacters * 28 + (numberOfCharacters - 1) * 3;
            CGFloat leftCap = (320 - totalWidth) / 2;

            [letters removeAllObjects];

            // Draw the boxes on the screen
            for (int i = 0; i < numberOfCharacters; i++) {
                LetterBox *letter = [[LetterBox alloc] initWithFrame:CGRectMake(leftCap + i * 31, 0, 28, 40)];
                [letters addObject:letter];
                [boxesContainer addSubview:letter];
                [letter release];
            }
        }

    This gives me the image below: http://www.imgdumper.nl/uploads2/4ba3b2c72bb99/4ba3b2c72abfd-Goed.png

    But sometimes it gives me this: imgdumper.nl/uploads2/4ba3b2d888226/4ba3b2d88728a-Fout.png

    I add them to the same boxesContainer, but they first remove themselves from the superview, so it's not that you see them doubled or something. What I find weird is that they are all good or all bad. This is the init function for my LetterBox:

        if (self == [super initWithFrame:aRect]) {
            // Create the box image with the same frame
            boxImage = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, self.bounds.size.width, self.bounds.size.height)];
            boxImage.contentMode = UIViewContentModeScaleAspectFit;
            boxImage.image = [UIImage imageNamed:@"SpaceOpen.png"];
            [self addSubview:boxImage];

            // Create the label with the same frame
            letterLabel = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, self.bounds.size.width, self.bounds.size.height)];
            letterLabel.backgroundColor = [UIColor clearColor];
            letterLabel.font = [UIFont fontWithName:@"ArialRoundedMTBold" size:26];
            letterLabel.textColor = [UIColor blackColor];
            letterLabel.textAlignment = UITextAlignmentCenter;
            [self addSubview:letterLabel];
        }
        return self;

    Does anyone have an idea why this could be? I'd rather have them display correctly every time :)

  • Add shadow to UIImage drawn in rounded path

    - by Tom Irving
    I'm drawing a rounded image in a UITableView cell like so:

        CGRect imageRect = CGRectMake(8, 8, 40, 40);
        CGFloat radius = 3;
        CGFloat minx = CGRectGetMinX(imageRect);
        CGFloat midx = CGRectGetMidX(imageRect);
        CGFloat maxx = CGRectGetMaxX(imageRect);
        CGFloat miny = CGRectGetMinY(imageRect);
        CGFloat midy = CGRectGetMidY(imageRect);
        CGFloat maxy = CGRectGetMaxY(imageRect);

        CGContextMoveToPoint(context, minx, midy);
        CGContextAddArcToPoint(context, minx, miny, midx, miny, radius);
        CGContextAddArcToPoint(context, maxx, miny, maxx, midy, radius);
        CGContextAddArcToPoint(context, maxx, maxy, midx, maxy, radius);
        CGContextAddArcToPoint(context, minx, maxy, minx, midy, radius);
        CGContextClosePath(context);
        CGContextClip(context);

        [self.theImage drawInRect:imageRect];

    This looks great, but I'd like to add a shadow to it for added effect. I've tried using something along the lines of:

        CGContextSetShadowWithColor(context, CGSizeMake(2, 2), 2, [[UIColor grayColor] CGColor]);
        CGContextFillPath(context);

    But this only works when the image has transparent areas; if the image isn't transparent at all, it won't even draw a shadow around the border. I'm wondering if there is something I'm doing wrong?
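
    Once the context is clipped to the rounded rect, nothing can draw outside it — including the shadow. One common workaround (a sketch; roundedPath is a hypothetical CGPathRef built from the same arcs as above) is to cast the shadow from a filled copy of the path before clipping:

        // Sketch: cast the shadow first, then clip and draw the image
        CGContextSaveGState(context);
        CGContextSetShadowWithColor(context, CGSizeMake(2, 2), 2, [[UIColor grayColor] CGColor]);

        // Filling the (unclipped) rounded path casts the shadow, even for opaque images
        CGContextAddPath(context, roundedPath);   // roundedPath: same rounded rect as a CGPathRef
        CGContextFillPath(context);
        CGContextRestoreGState(context);          // discard the shadow setting

        // Now clip to the path and draw the image on top of the filled shape
        CGContextAddPath(context, roundedPath);
        CGContextClip(context);
        [self.theImage drawInRect:imageRect];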

  • UIImage Rotation

    - by Kamchatka
    I display an image in a UIImageView (within a UIScrollView); the image is also stored in Core Data. In the interface, I want the user to be able to rotate the picture by 90 degrees, and I also want the rotation to be saved in Core Data. What should I rotate in the display: the scroll view, the UIImageView, or the image itself? (If possible, I would like the rotation to be animated.) But then I also have to save the picture to Core Data. I thought about changing the image's orientation, but that property is read-only.
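
    One approach that covers both needs (a sketch): animate the image view's transform for the on-screen rotation, then separately render a rotated copy of the UIImage to store in Core Data:

        // Sketch, part 1: animate the on-screen rotation
        [UIView beginAnimations:nil context:NULL];
        imageView.transform = CGAffineTransformRotate(imageView.transform, M_PI_2);
        [UIView commitAnimations];

        // Sketch, part 2: bake a 90-degree rotation into a new UIImage for storage
        - (UIImage *)imageRotated90:(UIImage *)image {
            CGSize rotatedSize = CGSizeMake(image.size.height, image.size.width); // swap width/height
            UIGraphicsBeginImageContext(rotatedSize);
            CGContextRef ctx = UIGraphicsGetCurrentContext();

            // Rotate around the center of the new canvas, then draw the image centered
            CGContextTranslateCTM(ctx, rotatedSize.width / 2, rotatedSize.height / 2);
            CGContextRotateCTM(ctx, M_PI_2);
            [image drawInRect:CGRectMake(-image.size.width / 2, -image.size.height / 2,
                                         image.size.width, image.size.height)];

            UIImage *rotated = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            return rotated;
        }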

  • Fastest way to edit alpha of CGImage (or UIImage) with touch and then display?

    - by Pankaj
    I have two image views, one on top of the other, with two different images. As the user touches the image and moves his/her finger, the top image should become transparent along the touch points, with a fixed radius (like the PhotoChop app). Currently I am doing it this way, for each touch:

    1. Get a copy of the image buffer from the CGImage of the top image.
    2. Edit the alpha channel of the buffer to create a transparent circle centered at the touch point.
    3. Create a new CGImage from the buffer.
    4. Create a UIImage from the CGImage and use it as the top image view's image.

    This works, but as you can see there are too many copies and creates involved, and it is slow. Can somebody please suggest a faster way of doing the same thing?
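
    One common speed-up (a sketch, not from the question): keep a single CGBitmapContext alive for the top image and punch holes into it with the clear blend mode, so per-touch work is one small erase plus one image refresh rather than a full buffer copy. Here topContext and topImageView are hypothetical ivars, with topContext created once from the top image:

        // Sketch: erase a circle in a persistent bitmap context on each touch
        - (void)eraseAtPoint:(CGPoint)p radius:(CGFloat)r {
            CGRect hole = CGRectMake(p.x - r, p.y - r, 2 * r, 2 * r);

            // The clear blend mode writes alpha = 0 instead of compositing
            CGContextSetBlendMode(topContext, kCGBlendModeClear);
            CGContextFillEllipseInRect(topContext, hole);

            // Refresh the view from the (mutated) context
            CGImageRef snapshot = CGBitmapContextCreateImage(topContext);
            topImageView.image = [UIImage imageWithCGImage:snapshot];
            CGImageRelease(snapshot);
        }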
