Search Results

Search found 114 results on 5 pages for 'cgimage'.


  • Can't draw UIImage in UIView::drawRect

    - by Joel
    I know this seems like a simple task, which is why I don't understand why I can't get the image to render. When I set up my UIView, I do the following:

        myUiView.backgroundColor = [UIColor clearColor];
        myUiView.opaque = NO;

    I create and retain the UIImage in the init method of my UIView:

        image = [[UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"test" ofType:@"png"]] retain];

    Then my drawRect: looks like this:

        - (void)drawRect:(CGRect)rect {
            [image drawInRect:self.bounds];
        }

    Ultimately I'll be manipulating that UIImage via a bitmap context, and then in drawRect: creating a CGImage out of the context and rendering that, but for now I'm just trying to get it to render a known image. I've been digging through this site as well as the documentation. I've gone down the Core Graphics path and tried drawing it with CGContextDrawImage by following the numerous examples other people have posted, but that didn't work either. So I've come back to what seems to be the most straightforward way to draw an image, but it isn't working. Any help would be greatly appreciated. Thanks in advance.
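    A sketch of the usual checklist for this symptom, assuming the view is created in code via initWithFrame: (a view loaded from a nib runs initWithCoder: instead, so an image loaded only in initWithFrame: would stay nil). The two things to verify are that the image actually loaded and that the view has a non-empty frame, since drawRect: is never called for a zero-sized view:

        // Hypothetical minimal subclass; names are illustrative.
        @interface ImageView : UIView {
            UIImage *image;
        }
        @end

        @implementation ImageView

        - (id)initWithFrame:(CGRect)frame {
            if ((self = [super initWithFrame:frame])) {
                NSString *path = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"png"];
                image = [[UIImage imageWithContentsOfFile:path] retain];
                NSAssert(image != nil, @"image is nil -- wrong file name or target membership");
                self.backgroundColor = [UIColor clearColor];
                self.opaque = NO;
            }
            return self;
        }

        - (void)drawRect:(CGRect)rect {
            [image drawInRect:self.bounds]; // only reached if the frame is non-empty
        }

        - (void)dealloc {
            [image release];
            [super dealloc];
        }

        @end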


  • Is it a good choice to move a sublayer around a view using UIPanGestureRecognizer?

    - by Pranjal Bikash Das
    I have a CALayer, and as a sublayer to this CALayer I have added an imageLayer which contains an image with a resolution of 276x183. I added a UIPanGestureRecognizer to the main view and calculate the coordinates of the CALayer as follows:

        - (void)panned:(UIPanGestureRecognizer *)sender {
            subLayer.frame = CGRectMake([sender locationInView:self.view].x - 138,
                                        [sender locationInView:self.view].y - 92,
                                        276, 183);
        }

    In viewDidLoad I have:

        subLayer.backgroundColor = [UIColor whiteColor].CGColor;
        subLayer.frame = CGRectMake(22, 33, 276, 183);
        imageLayer.contents = (id)[UIImage imageNamed:@"A.jpg"].CGImage;
        imageLayer.frame = subLayer.bounds;
        imageLayer.masksToBounds = YES;
        imageLayer.cornerRadius = 15.0;
        subLayer.shadowColor = [UIColor blackColor].CGColor;
        subLayer.cornerRadius = 15.0;
        subLayer.shadowOpacity = 0.8;
        subLayer.shadowOffset = CGSizeMake(0, 3);
        [subLayer addSublayer:imageLayer];
        [self.view.layer addSublayer:subLayer];

    It gives the desired output, but is a bit slow in the simulator. I have not yet tested it on a device. So my question is: is it OK to move a CALayer containing an image?
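    Moving a CALayer with an image in it is normally fine; two side costs commonly make it feel sluggish. Every frame change triggers a 0.25 s implicit animation that fights the finger, and a shadow without a shadowPath forces Core Animation to recompute the shadow from the layer contents on every move. A sketch addressing both, assuming the geometry from the question:

        - (void)panned:(UIPanGestureRecognizer *)sender {
            CGPoint p = [sender locationInView:self.view];
            [CATransaction begin];
            [CATransaction setDisableActions:YES];  // suppress the implicit animation per move
            // With the default anchorPoint (0.5, 0.5), position is the layer's center,
            // which matches the -138 / -92 offsets in the frame-based version above.
            subLayer.position = p;
            [CATransaction commit];
        }

        // Once, after configuring subLayer in viewDidLoad:
        subLayer.shadowPath = [UIBezierPath bezierPathWithRoundedRect:subLayer.bounds
                                                         cornerRadius:15.0].CGPath;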


  • Issues converting an image to black & white with the iPhone SDK

    - by KfirS
    Hello, I've used a familiar piece of code to convert an image to black and white, which I found on several forums. The code is:

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGContextRef context = CGBitmapContextCreate(nil, originalImage.size.width, originalImage.size.height,
                                                     8, originalImage.size.width, colorSpace, kCGImageAlphaNone);
        CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
        CGContextSetShouldAntialias(context, NO);
        CGContextDrawImage(context, CGRectMake(0, 0, originalImage.size.width, originalImage.size.height),
                           [originalImage CGImage]);
        CGImageRef bwImage = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        UIImage *resultImage = [UIImage imageWithCGImage:bwImage]; // This is the resulting B/W image.
        CGImageRelease(bwImage);
        return resultImage;

    When I use this code on a horizontal image it works fine, but when I try it on a vertical image, the result is a disproportionate image rotated 90 degrees to the left. Does anyone know what the problem could be? Thanks, Kfir.
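    A plausible explanation: UIImage.size is orientation-corrected, while the underlying CGImage is stored unrotated (a portrait camera shot is a landscape bitmap tagged UIImageOrientationRight), so drawing the raw CGImage into a context sized from UIImage.size both distorts and "rotates" the result. A sketch under that diagnosis: size the context from the CGImage itself and re-attach the orientation when wrapping the result (imageWithCGImage:scale:orientation: requires iOS 4):

        CGImageRef src = [originalImage CGImage];
        size_t w = CGImageGetWidth(src);   // raw bitmap size, ignores the orientation tag
        size_t h = CGImageGetHeight(src);
        CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
        CGContextRef ctx = CGBitmapContextCreate(NULL, w, h, 8, w, gray, kCGImageAlphaNone);
        CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), src);
        CGImageRef bw = CGBitmapContextCreateImage(ctx);
        CGContextRelease(ctx);
        CGColorSpaceRelease(gray);
        // Re-attach the orientation the source image carried:
        UIImage *result = [UIImage imageWithCGImage:bw
                                              scale:originalImage.scale
                                        orientation:originalImage.imageOrientation];
        CGImageRelease(bw);
        return result;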


  • How to clip and fill a canvas using an alpha mask

    - by Jay Koutavas
    I have some .png icons that are alpha masks. I need to render them as a drawable image using the Android SDK. On the iPhone, I use the following to get this result, converting the 'image' alpha mask to the 'imageMasked' image using black as a fill:

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL, thumbWidth, thumbHeight, 8,
                                                     4 * thumbWidth, colorSpace,
                                                     kCGImageAlphaPremultipliedFirst);
        CGRect frame = CGRectMake(0, 0, thumbWidth, thumbHeight);
        CGContextClipToMask(context, frame, [image CGImage]);
        CGContextFillRect(context, frame);
        CGImageRef imageMasked = CGBitmapContextCreateImage(context);
        CGContextRelease(context);

    How do I accomplish the above in the Android SDK? I've started to write the following:

        Drawable image = myPngImage;
        final int width = image.getMinimumWidth();
        final int height = image.getMinimumHeight();
        Bitmap imageMasked = Bitmap.createBitmap(width, height, Config.ARGB_8888);
        Canvas canvas = new Canvas(imageMasked);
        image.draw(canvas);
        // ???

    I'm not finding how to do the clipping on imageMasked using image.


  • How to reset the context to the original rectangle after clipping it for drawing?

    - by mystify
    I'm trying to draw a sequence of pattern images (different repeated patterns in one view). So what I did is this, in a loop:

        CGContextRef context = UIGraphicsGetCurrentContext();
        // clip to the drawing rectangle to draw the pattern for this portion of the view
        CGContextClipToRect(context, drawingRect);
        // the first call here works fine... but for the next, nothing is drawn
        CGContextDrawTiledImage(context, CGRectMake(0, 0, 2, 31), [img CGImage]);

    I think that after I've clipped the context to draw the pattern in the specific rectangle, I've cut a snippet out of the big canvas, and the next time my canvas is gone; I can't cut out another snippet. So I must somehow reset that clipping in order to be able to draw another pattern somewhere else?

    Edit: In the documentation for CGContextClip I found this: "... Therefore, to re-enlarge the paintable area by restoring the clipping path to a prior state, you must save the graphics state before you clip and restore the graphics state after you've completed any clipped drawing. ..." Well then, how do I store the graphics state before clipping, and how do I restore it afterwards?
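    The save and restore the documentation refers to are CGContextSaveGState and CGContextRestoreGState; wrapping each clip in such a pair resets the clipping path for the next iteration. A minimal sketch of the loop body:

        CGContextRef context = UIGraphicsGetCurrentContext();
        // inside the loop, once per pattern:
        CGContextSaveGState(context);                // push clip path, CTM, colors, ...
        CGContextClipToRect(context, drawingRect);   // the clip now applies only inside the pair
        CGContextDrawTiledImage(context, CGRectMake(0, 0, 2, 31), [img CGImage]);
        CGContextRestoreGState(context);             // paintable area restored for the next pass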


  • Loading .png image from array of uint8_t into OpenGL ES texture

    - by unknownthreat
    Normally, when we want to load a texture for OpenGL ES from a .png, we simply add the .png image to Xcode. The .png files are altered for optimization by Xcode, and these altered files can be loaded into an OpenGL ES texture at runtime. However, what I am trying to do is quite different. I am trying to load a .png file that does not come from the prebuilt/compiled bundle. The png file is transmitted externally over UDP, and it arrives as an array of bytes. I am very sure the png is transferred correctly, but when it comes to displaying it as an OpenGL ES texture, the image somehow shows incorrectly. The colors that are sent are present, but their positions are very wrong, though the positions still retain some aspects of the original layout. (The original post includes two screenshots: the original .png on the left, and the same .png displayed on the iPhone as an OpenGL ES texture on the right.) It looks as if the png data is not being decoded, or is being processed incorrectly. Below is the OpenGL ES code for turning the image into a texture:

        - (void)setTextureFromImageByte:(uint8_t *)imageByte {
            if (self = [super init]) {
                NSData *imageData = [[NSData alloc] initWithBytes:imageByte length:imageLength];
                UIImage *img = [[UIImage alloc] initWithData:imageData];
                CGImageRef image = img.CGImage;
                int width = 512;
                int height = 512;
                if (image) {
                    int tempWidth = (int)width, tempHeight = (int)height;
                    if ((tempWidth & (tempWidth - 1)) != 0) {
                        NSLog(@"CAUTION! width is not power of 2. width == %d", tempWidth);
                    } else if ((tempHeight & (tempHeight - 1)) != 0) {
                        NSLog(@"CAUTION! height is not power of 2. height == %d", tempHeight);
                    } else {
                        void *spriteData = calloc(width * 4, height * 4);
                        CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width * 4,
                                                                           CGImageGetColorSpace(image),
                                                                           kCGImageAlphaPremultipliedLast);
                        CGContextDrawImage(spriteContext, CGRectMake(0.0, 0.0, width, height), image);
                        CGContextRelease(spriteContext);
                        glBindTexture(GL_TEXTURE_2D, 1);
                        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 320, 435, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
                        free(spriteData);
                    }
                } else NSLog(@"ERROR: Image not loaded...");
                [img release];
                [imageData release];
            }
        }

    So does anyone know how to deal with this? Is it because the iPhone only accepts png files altered by Xcode? What can we do in this case to make the png image display correctly?
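    The garbled-but-recognizable look is consistent with a row-stride mismatch rather than a decoding failure: glTexSubImage2D is told to read tightly packed 320-pixel rows out of a buffer whose rows are actually 512 pixels wide, so every row after the first is sheared (OpenGL ES 1.x has no GL_UNPACK_ROW_LENGTH to compensate). Note also that glTexSubImage2D requires storage already allocated by a prior glTexImage2D. A sketch under that diagnosis, uploading the whole 512x512 buffer and selecting the 320x435 region via texture coordinates:

        // Sketch, assuming the 512x512 premultiplied-RGBA buffer built above.
        // Incidentally, calloc(width * height, 4) is enough here;
        // calloc(width * 4, height * 4) over-allocates by a factor of 16.
        GLuint tex;
        glGenTextures(1, &tex);              // avoid hard-coding texture name 1
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // Allocate storage and upload in one call; the buffer's row length
        // (512 pixels) now matches what GL expects for a 512-wide texture.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
        // The real image occupies s in [0, 320/512.0] and t in [0, 435/512.0];
        // use those texture coordinates on the quad instead of [0, 1].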


  • Drawing a texture with an alpha channel doesn't work -- draws black

    - by DevDevDev
    I am modifying GLPaint to use a different background; in this case it is white. The existing stamp they use assumes the background is black, so I made a new brush image with an alpha channel. When I draw on the canvas it is still black. What gives? When I actually draw, I just bind the texture and it works, so something is wrong in this initialization. Here is the code:

        - (id)initWithCoder:(NSCoder *)coder {
            CGImageRef brushImage;
            CGContextRef brushContext;
            GLubyte *brushData;
            size_t width, height;
            if (self = [super initWithCoder:coder]) {
                CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
                eaglLayer.opaque = YES;
                // In this application, we want to retain the EAGLDrawable contents after a call to presentRenderbuffer.
                eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                    [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
                    kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
                context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
                if (!context || ![EAGLContext setCurrentContext:context]) {
                    [self release];
                    return nil;
                }
                // Create a texture from an image.
                // First create a UIImage object from the data in an image file, and then extract the Core Graphics image.
                brushImage = [UIImage imageNamed:@"test.png"].CGImage;
                // Get the width and height of the image.
                width = CGImageGetWidth(brushImage);
                height = CGImageGetHeight(brushImage);
                // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
                // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.
                // Make sure the image exists.
                if (brushImage) {
                    brushData = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));
                    brushContext = CGBitmapContextCreate(brushData, width, width, 8, width * 4,
                                                         CGImageGetColorSpace(brushImage),
                                                         kCGImageAlphaPremultipliedLast);
                    CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
                    CGContextRelease(brushContext);
                    glGenTextures(1, &brushTexture);
                    glBindTexture(GL_TEXTURE_2D, brushTexture);
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
                    free(brushData);
                }
                // Set up OpenGL states.
                glMatrixMode(GL_PROJECTION);
                CGRect frame = self.bounds;
                glOrthof(0, frame.size.width, 0, frame.size.height, -1, 1);
                glViewport(0, 0, frame.size.width, frame.size.height);
                glMatrixMode(GL_MODELVIEW);
                glDisable(GL_DITHER);
                glEnable(GL_TEXTURE_2D);
                glEnable(GL_BLEND);
                glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA);
                glEnable(GL_POINT_SPRITE_OES);
                glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
                glPointSize(width / kBrushScale);
            }
            return self;
        }
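    One detail worth suspecting: the blend function. CGBitmapContextCreate with kCGImageAlphaPremultipliedLast yields premultiplied alpha, and the destination-alpha factor used here (GL_ONE_MINUS_DST_ALPHA) makes the result depend on what is already in the framebuffer, which would explain behaving on a black background but not a white one. A hedged sketch of the conventional "source over" setup:

        // Premultiplied source over destination (matches kCGImageAlphaPremultipliedLast):
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        // If the brush texture were NOT premultiplied, the classic form would be:
        // glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);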


  • Running out of memory with UIImage creation on an offscreen Bitmap Context by NSOperation

    - by sigsegv
    I have an app with multiple UIView subclasses that act as pages for a UIScrollView. The UIViews are moved back and forth to provide a seamless experience to the user. Since the content of the views is rather slow to draw, it's rendered onto a single shared CGBitmapContext (guarded by locks) by NSOperation subclasses, executed one at a time in an NSOperationQueue, wrapped up in a UIImage, and then used by the main thread to update the content of the views.

        - (void)main {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            if ([self isCancelled]) {
                return;
            }
            if (nil == data) {
                return;
            }
            // Buffer is the shared instance of a CG bitmap context wrapper class;
            // data is a dictionary
            CGImageRef img = [buffer imageCreateWithData:data];
            UIImage *image = [[UIImage alloc] initWithCGImage:img];
            CGImageRelease(img);
            if ([self isCancelled]) {
                [image release];
                return;
            }
            NSDictionary *result = [[NSDictionary alloc] initWithObjectsAndKeys:image, @"image", id, @"id", nil];
            // target is the instance of the UIView subclass that will use the image
            [target performSelectorOnMainThread:@selector(updateContentWithData:) withObject:result waitUntilDone:NO];
            [result release];
            [image release];
            [pool release];
        }

    The updateContentWithData: of the UIView subclass, performed on the main thread, is just as simple:

        - (void)updateContentWithData:(NSDictionary *)someData {
            NSDictionary *data = [someData retain];
            if ([[data valueForKey:@"id"] isEqualToString:[self pendingRequestId]]) {
                UIImage *image = [data valueForKey:@"image"];
                [self setCurrentImage:image];
                [self setNeedsDisplay];
            }
            // If the image has not been retained, it should be released together
            // with the dictionary retaining it
            [data release];
        }

    The drawLayer:inContext: method of the subclass just gets the CGImage from the UIImage and uses it to update the backing layer or part of it. No retain or release is involved in that process.

    The problem is that after a while I run out of memory. The number of UIViews is static. The CGImageRef and UIImage are created, retained, and released correctly (or so it seems to me). Instruments does not show any leaks, just the free memory available dipping constantly, rising a few times, and then dipping even lower until the application is terminated. The app cycles through about 200-300 of the aforementioned pages before that, but I would expect memory usage to reach a more or less stable level after a bunch of pages have been skimmed at fast speed, or, since the images are up to 3 MB in size, to deplete much earlier. Any suggestion will be greatly appreciated.
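    One thing that stands out in -main as quoted: both early returns skip [pool release], so every cancelled or data-less operation abandons its NSAutoreleasePool and everything autoreleased into it. Whether or not that is the whole story here, a sketch that drains the pool on every path (requestId is a hypothetical stand-in for the value that appears elided as 'id' in the question):

        - (void)main {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            if (![self isCancelled] && data != nil) {
                CGImageRef img = [buffer imageCreateWithData:data];
                UIImage *image = [[UIImage alloc] initWithCGImage:img];
                CGImageRelease(img);
                if (![self isCancelled]) {
                    NSDictionary *result = [[NSDictionary alloc] initWithObjectsAndKeys:
                                            image, @"image",
                                            requestId, @"id", // hypothetical; the original value is elided
                                            nil];
                    [target performSelectorOnMainThread:@selector(updateContentWithData:)
                                             withObject:result
                                          waitUntilDone:NO];
                    [result release];
                }
                [image release];
            }
            [pool release];  // now drained on every path
        }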


  • Core Animation performance on iphone

    - by nico
    I'm trying to do some animations using Core Animation on the iPhone. I'm using CABasicAnimation on CALayer. It's a straightforward animation from a random place at the top of the screen to the bottom of the screen at a random speed; I have 30 elements performing the same animation continuously until another action happens. But performance on the iPhone 3G is very sluggish when the animations start. The image is only 8 KB. Is this the right approach? What should I change so it performs better?

        // image cached somewhere else
        CGImageRef imageRef = [[UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:name ofType:@"png"]] CGImage];

        - (void)animate:(NSTimer *)timer {
            int startX = round(random() % 320);
            float speed = 1 / round(random() % 100 + 2);
            CALayer *layer = [CALayer layer];
            layer.name = @"layer";
            layer.contents = imageRef; // cached image
            layer.frame = CGRectMake(0, 0, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef));
            int width = layer.frame.size.width;
            int height = layer.frame.size.height;
            layer.frame = CGRectMake(startX, self.view.frame.origin.y, width, height);
            [effectLayer addSublayer:layer];
            CGPoint start = CGPointMake(startX, 0);
            CGPoint end = CGPointMake(startX, self.view.frame.size.height);
            float repeatCount = 1e100;
            CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"position"];
            animation.delegate = self;
            animation.fromValue = [NSValue valueWithCGPoint:start];
            animation.toValue = [NSValue valueWithCGPoint:end];
            animation.duration = speed;
            animation.repeatCount = repeatCount;
            animation.autoreverses = NO;
            animation.removedOnCompletion = YES;
            animation.fillMode = kCAFillModeForwards;
            [layer addAnimation:animation forKey:@"position"];
        }

    The animations are fired off using an NSTimer:

        animationTimer = [NSTimer timerWithTimeInterval:0.2 target:self selector:@selector(animate:) userInfo:nil repeats:YES];
        [[NSRunLoop currentRunLoop] addTimer:animationTimer forMode:NSDefaultRunLoopMode];
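    As written, the timer adds a brand-new layer with an effectively endless animation every 0.2 s, and nothing ever removes them, so the growing layer count, not the 8 KB image, is the likely bottleneck. A sketch of one common remedy, reusing a fixed pool of layers (layers and nextIndex are hypothetical ivars; effectLayer and imageRef are from the question):

        // The 30 layers are created once, added to effectLayer, and stored in `layers`.
        - (void)animate:(NSTimer *)timer {
            CALayer *layer = [layers objectAtIndex:nextIndex];   // round-robin reuse
            nextIndex = (nextIndex + 1) % [layers count];
            int startX = random() % 320;
            // The original 1/round(...) form yields durations of 0.01-0.5 s,
            // i.e. most falls finish within a frame or two; pick something visible:
            float duration = 2.0 + random() % 3;
            [layer removeAnimationForKey:@"position"];           // drop the previous animation first
            CABasicAnimation *fall = [CABasicAnimation animationWithKeyPath:@"position"];
            fall.fromValue = [NSValue valueWithCGPoint:CGPointMake(startX, 0)];
            fall.toValue = [NSValue valueWithCGPoint:CGPointMake(startX, self.view.frame.size.height)];
            fall.duration = duration;
            fall.repeatCount = HUGE_VALF;                        // the documented "repeat forever" value
            [layer addAnimation:fall forKey:@"position"];
        }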


  • Changing an RGB color image to grayscale using Objective-C

    - by user567167
    I was developing an application that changes a color image to a grayscale image. However, somehow the picture comes out wrong. I don't know what is wrong with the code; maybe one of the parameters I pass in is wrong. Please help.

        UIImage *c = [UIImage imageNamed:@"downRed.png"];
        CGImageRef cRef = CGImageRetain(c.CGImage);
        NSData *pixelData = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(cRef));
        size_t w = CGImageGetWidth(cRef);
        size_t h = CGImageGetHeight(cRef);
        unsigned char *pixelBytes = (unsigned char *)[pixelData bytes];
        unsigned char *greyPixelData = (unsigned char *)malloc(w * h);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int iter = 4 * (w * y + x);
                int red = pixelBytes[iter];
                int green = pixelBytes[iter + 1];
                int blue = pixelBytes[iter + 2];
                greyPixelData[w * y + x] = (unsigned char)(red * 0.3 + green * 0.59 + blue * 0.11);
                int value = greyPixelData[w * y + x];
            }
        }
        CFDataRef imgData = CFDataCreate(NULL, greyPixelData, w * h);
        CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(imgData);
        size_t width = CGImageGetWidth(cRef);
        size_t height = CGImageGetHeight(cRef);
        size_t bitsPerComponent = 8;
        size_t bitsPerPixel = 8;
        size_t bytesPerRow = CGImageGetWidth(cRef);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGBitmapInfo info = kCGImageAlphaNone;
        CGFloat *decode = NULL;
        BOOL shouldInteroplate = NO;
        CGColorRenderingIntent intent = kCGRenderingIntentDefault;
        CGDataProviderRelease(imgDataProvider);
        CGImageRef throughCGImage = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                                  colorSpace, info, imgDataProvider, decode, shouldInteroplate, intent);
        UIImage *newImage = [UIImage imageWithCGImage:throughCGImage];
        CGImageRelease(throughCGImage);
        newImageView.image = newImage;
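    Two things stand out in the code as posted: the data provider is released before CGImageCreate consumes it, and the per-pixel loop assumes tightly packed 4-byte RGBA rows, while CGImageGetBytesPerRow(cRef) may report padding (and the channel order may be BGRA or ARGB depending on the source). A sketch of the tail end with the ordering fixed and the stride taken from the image:

        size_t srcBytesPerRow = CGImageGetBytesPerRow(cRef);  // don't assume w * 4
        // ... in the loop, index with: int iter = srcBytesPerRow * y + 4 * x;
        // (channel order still assumed RGBA; check CGImageGetBitmapInfo(cRef))

        CFDataRef imgData = CFDataCreate(NULL, greyPixelData, w * h);
        CGDataProviderRef provider = CGDataProviderCreateWithCFData(imgData);
        CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
        CGImageRef grayImage = CGImageCreate(w, h, 8, 8, w, gray, kCGImageAlphaNone,
                                             provider, NULL, NO, kCGRenderingIntentDefault);
        UIImage *newImage = [UIImage imageWithCGImage:grayImage];
        // Release after the image is made, in reverse order of creation:
        CGImageRelease(grayImage);
        CGColorSpaceRelease(gray);
        CGDataProviderRelease(provider);   // was released too early in the original
        CFRelease(imgData);
        free(greyPixelData);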


  • Is there anything wrong with my texture loading method?

    - by José Joel.
    I'm a noob at OpenGL, trying to learn as much as possible. I'm using this method to load my OpenGL textures, loading every .png as RGBA4444. Am I doing anything incorrectly?

        - (void)loadTexture:(NSString *)nombre {
            CGImageRef textureImage = [UIImage imageWithContentsOfFile:
                [[NSBundle mainBundle] pathForResource:nombre ofType:nil]].CGImage;
            if (textureImage == nil) {
                NSLog(@"Failed to load texture image");
                return;
            }
            textureWidth = NextPowerOfTwo(CGImageGetWidth(textureImage));
            textureHeight = NextPowerOfTwo(CGImageGetHeight(textureImage));
            imageSizeX = CGImageGetWidth(textureImage);
            imageSizeY = CGImageGetHeight(textureImage);
            GLubyte *textureData = (GLubyte *)calloc(1, textureWidth * textureHeight * 4); // times 4 because each pixel needs 4 bytes, RGBA
            CGContextRef textureContext = CGBitmapContextCreate(textureData, textureWidth, textureHeight, 8,
                                                                textureWidth * 4, CGImageGetColorSpace(textureImage),
                                                                kCGImageAlphaPremultipliedLast);
            CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)textureWidth, (float)textureHeight), textureImage);
            // Convert "RRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA" to "RRRRGGGGBBBBAAAA"
            void *tempData = malloc(textureWidth * textureHeight * 2);
            unsigned int *inPixel32 = (unsigned int *)textureData;
            unsigned short *outPixel16 = (unsigned short *)tempData;
            for (int i = 0; i < textureWidth * textureHeight; ++i, ++inPixel32)
                *outPixel16++ =
                    ((((*inPixel32 >> 0)  & 0xFF) >> 4) << 12) | // R
                    ((((*inPixel32 >> 8)  & 0xFF) >> 4) << 8)  | // G
                    ((((*inPixel32 >> 16) & 0xFF) >> 4) << 4)  | // B
                    ((((*inPixel32 >> 24) & 0xFF) >> 4) << 0);   // A
            free(textureData);
            textureData = tempData;
            CGContextRelease(textureContext);
            glGenTextures(1, &textures[0]);
            glBindTexture(GL_TEXTURE_2D, textures[0]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0,
                         GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, textureData);
            free(textureData);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        }

    And this is my dealloc method:

        - (void)dealloc {
            glDeleteTextures(1, textures);
            [super dealloc];
        }
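    The method looks broadly reasonable for RGBA4444 loading. Two small ordering points, sketched below as hygiene rather than a definite fix: release the context before freeing the buffer it was created on (the original frees textureData while textureContext still points at it), and the pixel buffer may be freed as soon as glTexImage2D returns, since GL copies the data:

        CGContextRelease(textureContext);   // release the context BEFORE freeing its backing store
        free(textureData);                  // (the context was created on top of textureData)

        glGenTextures(1, &textures[0]);
        glBindTexture(GL_TEXTURE_2D, textures[0]);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0,
                     GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, tempData);
        free(tempData);   // safe: GL has copied the pixels by the time glTexImage2D returns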


  • Errors with CGBitmapContextCreate, CGContextDrawImage, CGBitmapContextCreateImage

    - by wsidell
    Error: CGBitmapContextCreate: invalid data bytes/row: should be at least 400 for 8 integer bits/component, 3 components, kCGImageAlphaNoneSkipFirst.
    Error: CGContextDrawImage: invalid context
    Error: CGBitmapContextCreateImage: invalid context

    Currently, I have an application that runs perfectly on OS 4.0, but I have been trying to get it to work properly on 3.1.3 and I keep getting the errors mentioned above. I am fairly new to iPhone development and am not exactly sure what the problem is. I am using image-resize code that I found in another post on Stack Overflow. Here is the code:

        - (UIImage *)imageWithImage:(UIImage *)sourceImage scaledToSizeWithSameAspectRatio:(CGSize)targetSize {
            CGSize imageSize = sourceImage.size;
            CGFloat width = imageSize.width;
            CGFloat height = imageSize.height;
            CGFloat targetWidth = targetSize.width;
            CGFloat targetHeight = targetSize.height;
            CGFloat scaleFactor = 0.0;
            CGFloat scaledWidth = targetWidth;
            CGFloat scaledHeight = targetHeight;
            CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);

            if (CGSizeEqualToSize(imageSize, targetSize) == NO) {
                CGFloat widthFactor = targetWidth / width;
                CGFloat heightFactor = targetHeight / height;
                if (widthFactor > heightFactor) {
                    scaleFactor = widthFactor; // scale to fit height
                } else {
                    scaleFactor = heightFactor; // scale to fit width
                }
                scaledWidth = width * scaleFactor;
                scaledHeight = height * scaleFactor;
                // center the image
                if (widthFactor > heightFactor) {
                    thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
                } else if (widthFactor < heightFactor) {
                    thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
                }
            }

            CGImageRef imageRef = [sourceImage CGImage];
            CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
            CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
            if (bitmapInfo == kCGImageAlphaNone) {
                bitmapInfo = kCGImageAlphaNoneSkipLast;
            }

            CGContextRef bitmap;
            if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) {
                bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef),
                                               CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);
            } else {
                bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef),
                                               CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);
            }

            // In the right or left cases, we need to switch scaledWidth and scaledHeight,
            // and also the thumbnail point
            if (sourceImage.imageOrientation == UIImageOrientationLeft) {
                thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
                CGFloat oldScaledWidth = scaledWidth;
                scaledWidth = scaledHeight;
                scaledHeight = oldScaledWidth;
                CGContextRotateCTM(bitmap, radians(90));
                CGContextTranslateCTM(bitmap, 0, -targetHeight);
            } else if (sourceImage.imageOrientation == UIImageOrientationRight) {
                thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
                CGFloat oldScaledWidth = scaledWidth;
                scaledWidth = scaledHeight;
                scaledHeight = oldScaledWidth;
                CGContextRotateCTM(bitmap, radians(-90));
                CGContextTranslateCTM(bitmap, -targetWidth, 0);
            } else if (sourceImage.imageOrientation == UIImageOrientationUp) {
                // NOTHING
            } else if (sourceImage.imageOrientation == UIImageOrientationDown) {
                CGContextTranslateCTM(bitmap, targetWidth, targetHeight);
                CGContextRotateCTM(bitmap, radians(-180.));
            }

            CGContextDrawImage(bitmap, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), imageRef);
            CGImageRef ref = CGBitmapContextCreateImage(bitmap);
            UIImage *newImage = [UIImage imageWithCGImage:ref];
            CGContextRelease(bitmap);
            CGImageRelease(ref);
            return newImage;
        }

    Any help would be appreciated. If you need more info, I will gladly post it.
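    The first error message points at the likely cause: bytes-per-row is copied from the source image (CGImageGetBytesPerRow(imageRef)) while the context is created at the target size, so whenever the target rows need more bytes than the source rows had, context creation fails and every subsequent call sees a NULL context (4.0 was apparently more forgiving of the mismatch). A sketch of the context creation with the stride left to Core Graphics (passing 0 for bytesPerRow is documented for NULL-data contexts):

        size_t bpc = CGImageGetBitsPerComponent(imageRef);
        if (sourceImage.imageOrientation == UIImageOrientationUp ||
            sourceImage.imageOrientation == UIImageOrientationDown) {
            bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight,
                                           bpc, 0 /* stride computed for us */,
                                           colorSpaceInfo, bitmapInfo);
        } else {
            bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth,
                                           bpc, 0, colorSpaceInfo, bitmapInfo);
        }
        if (bitmap == NULL) {
            NSLog(@"CGBitmapContextCreate failed");  // avoids the cascade of 'invalid context' errors
            return sourceImage;
        }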


  • How to get rid of previous reflection when reflecting a UIImageView (with changing pictures)?

    - by epale
    Hi everyone, I have managed to use the reflection sample app from Apple to create a reflection from a UIImageView. But the problem is that when I change the picture inside the UIImageView, the reflection from the previously displayed picture remains on the screen; the reflection of the next picture then overlaps the previous one. How do I ensure that the previous reflection is removed when I change to the next picture? Thank you so much. I hope my question is not too basic. Here is the code I have used so far:

        // reflection
        self.view.autoresizesSubviews = YES;
        self.view.userInteractionEnabled = YES;
        // create the reflection view
        CGRect reflectionRect = currentView.frame;
        // the reflection is a fraction of the size of the view being reflected
        reflectionRect.size.height = reflectionRect.size.height * kDefaultReflectionFraction;
        // and is offset to be at the bottom of the view being reflected
        reflectionRect = CGRectOffset(reflectionRect, 0, currentView.frame.size.height);
        reflectionView = [[UIImageView alloc] initWithFrame:reflectionRect];
        // determine the size of the reflection to create
        NSUInteger reflectionHeight = currentView.bounds.size.height * kDefaultReflectionFraction;
        // create the reflection image, assign it to the UIImageView and add the image view to the containerView
        reflectionView.image = [self reflectedImage:currentView withHeight:reflectionHeight];
        reflectionView.alpha = kDefaultReflectionOpacity;
        [self.view addSubview:reflectionView];

    Then the code below is used to form the reflection:

        CGImageRef CreateGradientImage(int pixelsWide, int pixelsHigh)
        {
            CGImageRef theCGImage = NULL;
            // gradient is always black-white and the mask must be in the gray colorspace
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
            // create the bitmap context
            CGContextRef gradientBitmapContext = CGBitmapContextCreate(nil, pixelsWide, pixelsHigh,
                                                                       8, 0, colorSpace, kCGImageAlphaNone);
            // define the start and end grayscale values (with the alpha, even though
            // our bitmap context doesn't support alpha the gradient requires it)
            CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};
            // create the CGGradient and then release the gray color space
            CGGradientRef grayScaleGradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
            CGColorSpaceRelease(colorSpace);
            // create the start and end points for the gradient vector (straight down)
            CGPoint gradientStartPoint = CGPointZero;
            CGPoint gradientEndPoint = CGPointMake(0, pixelsHigh);
            // draw the gradient into the gray bitmap context
            CGContextDrawLinearGradient(gradientBitmapContext, grayScaleGradient,
                                        gradientStartPoint, gradientEndPoint,
                                        kCGGradientDrawsAfterEndLocation);
            CGGradientRelease(grayScaleGradient);
            // convert the context into a CGImageRef and release the context
            theCGImage = CGBitmapContextCreateImage(gradientBitmapContext);
            CGContextRelease(gradientBitmapContext);
            // return the imageref containing the gradient
            return theCGImage;
        }

        CGContextRef MyCreateBitmapContext(int pixelsWide, int pixelsHigh)
        {
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            // create the bitmap context
            CGContextRef bitmapContext = CGBitmapContextCreate(nil, pixelsWide, pixelsHigh, 8, 0, colorSpace,
                                                               // this will give us an optimal BGRA format for the device:
                                                               (kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst));
            CGColorSpaceRelease(colorSpace);
            return bitmapContext;
        }

        - (UIImage *)reflectedImage:(UIImageView *)fromImage withHeight:(NSUInteger)height
        {
            if (!height) return nil;
            // create a bitmap graphics context the size of the image
            CGContextRef mainViewContentContext = MyCreateBitmapContext(fromImage.bounds.size.width, height);
            // offset the context -
            // This is necessary because, by default, the layer created by a view for caching its content is flipped.
            // But when you actually access the layer content and have it rendered, it is inverted. Since we're only creating
            // a context the size of our reflection view (a fraction of the size of the main view) we have to translate the
            // context the delta in size, and render it.
            CGFloat translateVertical = fromImage.bounds.size.height - height;
            CGContextTranslateCTM(mainViewContentContext, 0, -translateVertical);
            // render the layer into the bitmap context
            CALayer *layer = fromImage.layer;
            [layer renderInContext:mainViewContentContext];
            // create CGImageRef of the main view bitmap content, and then release that bitmap context
            CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
            CGContextRelease(mainViewContentContext);
            // create a CGImage containing a gradient that will be used for masking the
            // main view content to create the 'fade' of the reflection. The CGImageCreateWithMask
            // function will stretch the bitmap image as required, so we can create a 1 pixel wide gradient
            CGImageRef gradientMaskImage = CreateGradientImage(1, height);
            // create an image by masking the bitmap of the mainView content with the gradient view,
            // then release the pre-masked content bitmap and the gradient bitmap
            CGImageRef reflectionImage = CGImageCreateWithMask(mainViewContentBitmapContext, gradientMaskImage);
            CGImageRelease(mainViewContentBitmapContext);
            CGImageRelease(gradientMaskImage);
            // convert the finished reflection image to a UIImage
            UIImage *theImage = [UIImage imageWithCGImage:reflectionImage];
            // image is retained by the property setting above, so we can release the original
            CGImageRelease(reflectionImage);
            return theImage;
        }
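    Because the setup code allocates a fresh UIImageView each time it runs, every picture change stacks a new reflection on top of the old one. A sketch of the usual fix, keeping the question's names: create reflectionView once and only regenerate its image on each picture change:

        // Sketch: build reflectionView exactly once, then on every picture change
        // just swap its image instead of adding another subview.
        if (reflectionView == nil) {
            reflectionView = [[UIImageView alloc] initWithFrame:reflectionRect];
            reflectionView.alpha = kDefaultReflectionOpacity;
            [self.view addSubview:reflectionView];
        }
        reflectionView.image = [self reflectedImage:currentView withHeight:reflectionHeight];
        // (alternatively: [reflectionView removeFromSuperview]; [reflectionView release];
        //  before re-running the original setup, matching its alloc under manual retain/release)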


  • Core Animation bad access on device

    - by user1595102
    I'm trying to do a frame-by-frame animation with CALayers, following this tutorial: http://mysterycoconut.com/blog/2011/01/cag1/. Everything works with ARC disabled; when I try to rewrite the code with ARC, it works perfectly on the simulator, but on the device I get a bad memory access.

    Layer class interface:

        #import <Foundation/Foundation.h>
        #import <QuartzCore/QuartzCore.h>

        @interface MCSpriteLayer : CALayer {
            unsigned int sampleIndex;
        }

        // sampleIndex needs to be > 0
        @property (readwrite, nonatomic) unsigned int sampleIndex;

        // For use with sample rects set by the delegate
        + (id)layerWithImage:(CGImageRef)img;
        - (id)initWithImage:(CGImageRef)img;

        // If all samples are the same size
        + (id)layerWithImage:(CGImageRef)img sampleSize:(CGSize)size;
        - (id)initWithImage:(CGImageRef)img sampleSize:(CGSize)size;

        // Use this method instead of sprite.sampleIndex to obtain the index currently displayed on screen
        - (unsigned int)currentSampleIndex;

        @end

    Layer class implementation:

        @implementation MCSpriteLayer

        @synthesize sampleIndex;

        - (id)initWithImage:(CGImageRef)img;
        {
            self = [super init];
            if (self != nil) {
                self.contents = (__bridge id)img;
                sampleIndex = 1;
            }
            return self;
        }

        + (id)layerWithImage:(CGImageRef)img;
        {
            MCSpriteLayer *layer = [(MCSpriteLayer *)[self alloc] initWithImage:img];
            return layer;
        }

        - (id)initWithImage:(CGImageRef)img sampleSize:(CGSize)size;
        {
            self = [self initWithImage:img];
            if (self != nil) {
                CGSize sampleSizeNormalized = CGSizeMake(size.width / CGImageGetWidth(img),
                                                         size.height / CGImageGetHeight(img));
                self.bounds = CGRectMake(0, 0, size.width, size.height);
                self.contentsRect = CGRectMake(0, 0, sampleSizeNormalized.width, sampleSizeNormalized.height);
            }
            return self;
        }

        + (id)layerWithImage:(CGImageRef)img sampleSize:(CGSize)size;
        {
            MCSpriteLayer *layer = [[self alloc] initWithImage:img sampleSize:size];
            return layer;
        }

        + (BOOL)needsDisplayForKey:(NSString *)key;
        {
            return [key isEqualToString:@"sampleIndex"];
        }

        // contentsRect or bounds changes are not animated
        + (id<CAAction>)defaultActionForKey:(NSString *)aKey;
        {
            if ([aKey isEqualToString:@"contentsRect"] || [aKey isEqualToString:@"bounds"])
                return (id<CAAction>)[NSNull null];
            return [super defaultActionForKey:aKey];
        }

        - (unsigned int)currentSampleIndex;
        {
            return ((MCSpriteLayer *)[self presentationLayer]).sampleIndex;
        }

        // Implement displayLayer: on the delegate to override how sample rectangles are calculated;
        // remember to use currentSampleIndex, ignore sampleIndex == 0, and set the layer's bounds
        - (void)display;
        {
            if ([self.delegate respondsToSelector:@selector(displayLayer:)]) {
                [self.delegate displayLayer:self];
                return;
            }
            unsigned int currentSampleIndex = [self currentSampleIndex];
            if (!currentSampleIndex)
                return;
            CGSize sampleSize = self.contentsRect.size;
            self.contentsRect = CGRectMake(
                ((currentSampleIndex - 1) % (int)(1 / sampleSize.width)) * sampleSize.width,
                ((currentSampleIndex - 1) / (int)(1 / sampleSize.width)) * sampleSize.height,
                sampleSize.width, sampleSize.height);
        }

        @end

    I create the layer in viewDidAppear: and start animating on a button tap, but after init I get a bad access error:

        - (void)viewDidAppear:(BOOL)animated {
            [super viewDidAppear:animated];
            NSString *path = [[NSBundle mainBundle] pathForResource:@"mama_default.png" ofType:nil];
            CGImageRef richterImg = [UIImage imageWithContentsOfFile:path].CGImage;
            CGSize fixedSize = animacja.frame.size;
            NSLog(@"wid: %f, heigh: %f", animacja.frame.size.width, animacja.frame.size.height);
            NSLog(@"%f", animacja.frame.size.width);
            richter = [MCSpriteLayer layerWithImage:richterImg sampleSize:fixedSize];
            animacja.hidden = 1;
            richter.position = animacja.center;
            [self.view.layer addSublayer:richter];
        }

        - (IBAction)animacja:(id)sender {
            if ([richter animationForKey:@"sampleIndex"]) {
                NSLog(@"jest");
            }
            if (![richter animationForKey:@"sampleIndex"]) {
                CABasicAnimation *anim = [CABasicAnimation animationWithKeyPath:@"sampleIndex"];
                anim.fromValue = [NSNumber numberWithInt:0];
                anim.toValue = [NSNumber numberWithInt:22];
                anim.duration = 4;
                anim.repeatCount = 1;
                [richter addAnimation:anim forKey:@"sampleIndex"];
            }
        }

    Have you got any idea how to fix it? Thanks a lot.
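    A common diagnosis for exactly this symptom under ARC: nothing owns the UIImage returned by imageWithContentsOfFile: once viewDidAppear: returns, so the CGImageRef taken from it can end up dangling unless something keeps it alive, and the device (under real memory pressure) punishes that where the simulator may not. A hedged sketch of one remedy, keeping the question's names (richterImage is a hypothetical strong property added for this):

        @property (nonatomic, strong) UIImage *richterImage;  // hypothetical; keeps the CGImage's owner alive

        // in viewDidAppear:
        self.richterImage = [UIImage imageWithContentsOfFile:path];
        richter = [MCSpriteLayer layerWithImage:self.richterImage.CGImage
                                     sampleSize:fixedSize];
        // (an alternative is CGImageRetain(...) on the CGImageRef, paired with a
        //  CGImageRelease once the layer no longer needs it)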

