Search Results

Search found 79 results on 4 pages for 'colorspace'.

Page 1/4 | 1 2 3 4  | Next Page >

  • Converting JPEG colorspace (Adobe RGB to sRGB) on Windows (.Net)

    - by Imageree
    I need to generate thumbnail and medium-sized images from large photos. These smaller images are for display in an online gallery. Many of the photographers submit JPEG images using Adobe RGB. I would like to use sRGB for all thumbnail and medium-sized images. I am using .NET (ASP.NET) and need a way to convert from Adobe RGB to sRGB without losing any quality. Thanks in advance.
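
    For what it's worth, the general approach is an ICC profile conversion. Below is a minimal sketch, written in Java purely for illustration (the asker wants .NET, where WPF's ColorConvertedBitmap covers the same ground), and assuming a copy of the Adobe RGB profile file is available at a hypothetical path:

        import java.awt.color.ColorSpace;
        import java.awt.color.ICC_ColorSpace;
        import java.awt.color.ICC_Profile;
        import java.awt.image.BufferedImage;
        import java.awt.image.ColorConvertOp;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class AdobeRgbToSrgb {
            public static void main(String[] args) throws Exception {
                BufferedImage src = ImageIO.read(new File("photo-adobergb.jpg")); // hypothetical input
                // The Adobe RGB profile does not ship with the JDK; the path below is
                // an assumption. Supply your own copy of AdobeRGB1998.icc.
                ICC_Profile adobeRgb = ICC_Profile.getInstance("AdobeRGB1998.icc");
                ColorConvertOp op = new ColorConvertOp(
                        new ICC_ColorSpace(adobeRgb),
                        ColorSpace.getInstance(ColorSpace.CS_sRGB), null);
                BufferedImage dst = new BufferedImage(
                        src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
                // Filtering the rasters (rather than the BufferedImages) makes the op
                // use the two color spaces given above, ignoring the tag on the source.
                op.filter(src.getRaster(), dst.getRaster());
                ImageIO.write(dst, "jpg", new File("photo-srgb.jpg"));
            }
        }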

    Read the article

  • Why does order matter for ImageMagick's -colorspace operation?

    - by Mark Trapp
    Starting with ImageMagick 6, the command style changed to solve a number of problems outlined in the Basic Usage document. That document implies that, for simple operations, one only needs to move the options from before the source file to between the source and output files to convert from the "old" style to the "new" style. However, this doesn't seem to work for the -colorspace operation. When I use the following command, I get an output file with the correct colors:

        convert -colorspace rgb input.pdf output.png

    But when I try to use the new command style, the -colorspace operation is never applied:

        convert input.pdf -colorspace rgb output.png

    Why does this occur?

    Read the article

  • OpenGL Colorspace Conversion

    - by Steven Behnke
    Does anyone know how to create a texture with a YUV colorspace so that we can get hardware-based YUV-to-RGB colorspace conversion without having to use a fragment shader? I'm using an NVidia 9400, and I don't see an obvious GL extension that seems to do the trick. I've found examples of how to use a fragment shader, but the project I'm working on currently only supports OpenGL 1.1, and I don't have time to convert it to 2.0 and perform all the necessary regression testing. This also targets Linux. On other platforms I've been using a MESA extension, but it doesn't function on the NVidia card.

    Read the article

  • jQuery - color cycle animation with hsv colorspace?

    - by Sgt. Floyd Pepper
    Hi there, I am working on a project for which I need a special body background. The background color should cycle through the color spectrum like a mood light. So I found this: http://buildinternet.com/2009/09/its-a-rainbow-color-changing-text-and-backgrounds/ But this snippet works in the RGB colorspace, which includes some very light and dark colors. Getting only bright colors works best in the HSV colorspace (e.g. keeping S and V fixed at 100 and letting H cycle). I don't know how to convert, and in fact how to adapt this snippet to my needs. Does anyone have an idea? Thanks in advance. Best, Floyd
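
    The core of the idea is independent of jQuery: keep saturation and value pinned at 100% and advance only the hue. A minimal sketch of that sweep, shown in Java for illustration (java.awt.Color exposes the HSV model under the name HSB):

        import java.awt.Color;

        public class HueCycle {
            public static void main(String[] args) {
                // S = V = 1.0 throughout, so every color in the sweep is
                // fully saturated and fully bright; only the hue changes.
                for (float hue = 0f; hue < 1f; hue += 0.01f) {
                    Color c = new Color(Color.HSBtoRGB(hue, 1f, 1f));
                    System.out.printf("h=%.2f -> rgb(%d, %d, %d)%n",
                            hue, c.getRed(), c.getGreen(), c.getBlue());
                }
            }
        }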

    Read the article

  • Java: Convert ColorSpace of BufferedImage to CS_GRAY without using ColorConvertOp

    - by gmatt
    I know it's possible to convert an image to CS_GRAY using

        public static BufferedImage getGrayBufferedImage(BufferedImage image) {
            BufferedImageOp op = new ColorConvertOp(
                    ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
            BufferedImage sourceImgGray = op.filter(image, null);
            return sourceImgGray;
        }

    However, this is a chokepoint of my entire program. I need to do this often on 800x600-pixel images, and the operation takes about 200-300 ms to complete on average. I know I can do this a lot faster by using one for loop to walk the image data and set the values directly. The code above, on the other hand, constructs a brand-new 800x600 BufferedImage that is grayscale. I would rather just transform the image I pass in. Does anyone know how to do this with a for loop, given that the image is in the RGB color space?
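
    For comparison, a minimal sketch of the loop-based, in-place approach described above, assuming the input really is an RGB BufferedImage; it writes the luma back into all three channels instead of allocating a new image:

        import java.awt.image.BufferedImage;

        public final class FastGray {
            // Gray out an RGB image in place, one scanline at a time.
            public static void grayInPlace(BufferedImage image) {
                int w = image.getWidth(), h = image.getHeight();
                int[] row = new int[w];
                for (int y = 0; y < h; y++) {
                    image.getRGB(0, y, w, 1, row, 0, w);   // read one scanline
                    for (int x = 0; x < w; x++) {
                        int argb = row[x];
                        int r = (argb >> 16) & 0xFF;
                        int g = (argb >> 8) & 0xFF;
                        int b = argb & 0xFF;
                        // Rec. 601 luma weights in integer arithmetic.
                        int luma = (299 * r + 587 * g + 114 * b) / 1000;
                        row[x] = (argb & 0xFF000000) | (luma << 16) | (luma << 8) | luma;
                    }
                    image.setRGB(0, y, w, 1, row, 0, w);   // write it back
                }
            }
        }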

    Read the article

  • How to work with BufferedImage and YCbCr colorspace?

    - by Onlyx
    Hi everyone! I need to translate the colors in a bitmap loaded into a BufferedImage from RGB to YCbCr (luminance and two chrominance channels) and back after processing. I did it with functions like rgb2ycbcr() called in the main method for each pixel, but that isn't a very smart solution. I should use the ColorSpace and ColorModel classes to get a BufferedImage with the correct color space. That would be a more flexible method, but I don't know how to do it. I'm lost and need some tips. Can somebody help me?
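
    For reference, the per-pixel math the asker already has would look roughly like this; a minimal sketch using the full-range Rec. 601 (JFIF) equations, with hypothetical rgb2ycbcr()/ycbcr2rgb() names mirroring the question:

        public final class YCbCrConverter {
            // Full-range Rec. 601: all inputs and outputs in 0..255.
            public static int[] rgb2ycbcr(int r, int g, int b) {
                int y  = clamp((int) Math.round( 0.299000 * r + 0.587000 * g + 0.114000 * b));
                int cb = clamp((int) Math.round(-0.168736 * r - 0.331264 * g + 0.500000 * b + 128));
                int cr = clamp((int) Math.round( 0.500000 * r - 0.418688 * g - 0.081312 * b + 128));
                return new int[] { y, cb, cr };
            }

            // Inverse transform, YCbCr -> RGB.
            public static int[] ycbcr2rgb(int y, int cb, int cr) {
                int r = clamp((int) Math.round(y + 1.402000 * (cr - 128)));
                int g = clamp((int) Math.round(y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)));
                int b = clamp((int) Math.round(y + 1.772000 * (cb - 128)));
                return new int[] { r, g, b };
            }

            private static int clamp(int v) {
                return v < 0 ? 0 : (v > 255 ? 255 : v);
            }
        }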

    Read the article

  • Generate JPEG-compressed TIFF using RGB colorspace (using Java + JAI)

    - by nOiSe gaTe
    I'm trying to make tiled pyramidal TIFFs from master TIFF images using Java (JAI 1.1.3 + ImageIO). The problem is: I NEED to make TIFF files compressed as JPEG with RGB photometric interpretation, and no matter what I try, the image still gets the YCbCr photometric interpretation. Note: if I change the compression type (e.g. LZW or Deflate) I get the RGB colorspace but, as I said, I need JPEG compression. Apart from this detail, the tiled pyramidal TIFF I create is fine, so I think it's better to focus on the simple compression step (uncompressed TIFF to JPEG TIFF). This is a basic example (with checks removed to make it more readable):

        // reading MASTER
        BufferedImage img = ImageIO.read(uncompressedTiff);
        ImageOutputStream ios = null;
        ImageWriter writer = null;
        ios = ImageIO.createImageOutputStream(outFile);
        Iterator<ImageWriter> writers = ImageIO.getImageWritersByFormatName("tiff");
        writer = writers.next();
        // saving and compression params
        writer.setOutput(ios);
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionType("JPEG");
        param.setCompressionQuality(0.8f);
        param.setTilingMode(ImageWriteParam.MODE_EXPLICIT);
        param.setTiling(256, 256, 0, 0);
        IIOMetadata metadata = getWriteMD(writer, param);
        // writing
        writer.write(metadata, new IIOImage(img, null, metadata), param);

    In the getWriteMD() method:

        ....
        TIFFTag photoInterp = base.getTag(BaselineTIFFTagSet.TAG_PHOTOMETRIC_INTERPRETATION);
        TIFFField fieldPhotoInter = new TIFFField(photoInterp,
                BaselineTIFFTagSet.PHOTOMETRIC_INTERPRETATION_RGB);
        ....

    Here is the full version of getWriteMD().

    Read the article

  • How to read and modify the colorspace of an image in C#

    - by Matthias
    I'm loading a Bitmap from a JPG file. If the image is not 24-bit RGB, I'd like to convert it. The conversion should be fairly fast. The images I'm loading can be huge (9000x9000 pixels with a compressed size of 40-50 MB). How can this be done? By the way: I don't want to use any external libraries if possible. But if you know of an open-source utility class that performs the most common imaging tasks, I'd be happy to hear about it. Thanks in advance.
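
    The usual trick, sketched here in Java for illustration (the C# analogue allocates a Bitmap with PixelFormat.Format24bppRgb and draws the source into it with a Graphics object), is to let the toolkit do the pixel-format conversion by drawing into a freshly allocated RGB image:

        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;

        public final class To24BitRgb {
            // Return the image unchanged if it already holds packed RGB;
            // otherwise redraw it into a new RGB image, converting the format.
            public static BufferedImage toRgb(BufferedImage src) {
                if (src.getType() == BufferedImage.TYPE_INT_RGB) {
                    return src;
                }
                BufferedImage dst = new BufferedImage(
                        src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
                Graphics2D g = dst.createGraphics();
                try {
                    g.drawImage(src, 0, 0, null);  // pixel-format conversion happens here
                } finally {
                    g.dispose();
                }
                return dst;
            }
        }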

    Read the article

  • CIE XYZ colorspace: is it RGBA or XYZA?

    - by Tronic
    I plan to write a painting program based on linear combinations of the xy-plane points (0,1), (1,0) and (0,0). Such a system works identically to RGB, except that the primaries are not within the gamut but at the corners of a triangle that encloses the entire gamut, therefore allowing all colors to be reproduced. I have seen the three points referred to as X, Y and Z (upper case) somewhere, but I cannot find the page anymore. My pixel format stores the intensity of each of those three components the same way RGB does, together with an alpha value. This allows using pretty much any image-manipulation operation designed for RGBA without modifying the code. What is my format called? Is it XYZA, RGBA or something else? Google doesn't seem to know of XYZA, and RGBA would get confused with sRGB + alpha (which I also need to use in the same program). Notice that the primaries X, Y and Z and their intensities have little to do with the x, y and z coordinates (lower case) that are more commonly used.

    Read the article

  • Magick++ Read Image with ICC colorspace

    - by FlashFan
    Hi guys, I need to know how I can read an image which uses a separate ICC color profile. The image consists of 26,099,520 bytes, which is the result of 2480 width * 3508 height * 3 components per pixel. I tried it with the following code:

        Image * image = new Image();
        Blob * blob = new Blob(imagedata.c_str(), imagedata.length());
        image->read(*blob, Geometry(2480, 3508), 8, "RGB");
        Blob * iccblob = new Blob(iccdata.c_str(), iccdata.length());
        image->iccColorProfile(*iccblob);
        image->write("result.jpg");

    But the colors are the same as when I don't set the ICC profile on the image. And the colors are wrong in both cases. Thanks for your help!

    Read the article

  • App using MonoTouch Core Graphics mysteriously crashes

    - by Stephen Ashley
    My app launches with a view controller and a simple view consisting of a button and a subview. When the user touches the button, the subview is populated with scrollviews that display the column headers, row headers, and cells of a spreadsheet. To draw the cells, I use a CGBitmapContext to generate an image, and then put the image into the image view contained in the scrollview that displays the cells. When I run the app on the iPad, it displays the cells just fine, and the scrollview lets the user scroll around in the spreadsheet without any problems. If the user touches the button a second time, the spreadsheet redraws and continues to work perfectly. If, however, the user touches the button a third time, the app crashes. There is no exception information displayed in the Application Output window. My first thought was that the successive button pushes were using up all the available memory, so I overrode the DidReceiveMemoryWarning method in the view controller and used a breakpoint to confirm that this method was not getting called. My next thought was that the CGBitmapContext was not getting released, and I looked for a MonoTouch equivalent of Objective-C's CGContextRelease() function. The closest I could find was the CGBitmapContext instance method Dispose(), which I called, without solving the problem. In order to free up as much memory as possible (in case I was somehow running out of memory without tripping a warning), I tried forcing garbage collection each time I finished using a CGBitmapContext. This made the problem worse: now the program would crash moments after displaying the spreadsheet the first time. This caused me to wonder whether the garbage collector was somehow collecting something necessary to the continued display of graphics on the screen. I would be grateful for any suggestions on further avenues to investigate for the cause of these crashes. I have included the source code for the SpreadsheetView class. The relevant method is DrawSpreadsheet(), which is called when the button is touched. Thank you for your assistance on this matter.
    Stephen Ashley

        public class SpreadsheetView : UIView
        {
            public ISpreadsheetMessenger spreadsheetMessenger = null;
            public UIScrollView cellsScrollView = null;
            public UIImageView cellsImageView = null;

            public SpreadsheetView(RectangleF frame) : base()
            {
                Frame = frame;
                BackgroundColor = Constants.backgroundBlack;
                AutosizesSubviews = true;
            }

            public void DrawSpreadsheet()
            {
                UInt16 RowHeaderWidth = spreadsheetMessenger.RowHeaderWidth;
                UInt16 RowHeaderHeight = spreadsheetMessenger.RowHeaderHeight;
                UInt16 RowCount = spreadsheetMessenger.RowCount;
                UInt16 ColumnHeaderWidth = spreadsheetMessenger.ColumnHeaderWidth;
                UInt16 ColumnHeaderHeight = spreadsheetMessenger.ColumnHeaderHeight;
                UInt16 ColumnCount = spreadsheetMessenger.ColumnCount;

                // Add the corner
                UIImageView cornerView = new UIImageView(new RectangleF(0f, 0f, RowHeaderWidth, ColumnHeaderHeight));
                cornerView.BackgroundColor = Constants.headingColor;
                CGColorSpace cornerColorSpace = null;
                CGBitmapContext cornerContext = null;
                IntPtr buffer = Marshal.AllocHGlobal(RowHeaderWidth * ColumnHeaderHeight * 4);
                if (buffer == IntPtr.Zero)
                    throw new OutOfMemoryException("Out of memory.");
                try
                {
                    cornerColorSpace = CGColorSpace.CreateDeviceRGB();
                    cornerContext = new CGBitmapContext(buffer, RowHeaderWidth, ColumnHeaderHeight,
                        8, 4 * RowHeaderWidth, cornerColorSpace, CGImageAlphaInfo.PremultipliedFirst);
                    cornerContext.SetFillColorWithColor(Constants.headingColor.CGColor);
                    cornerContext.FillRect(new RectangleF(0f, 0f, RowHeaderWidth, ColumnHeaderHeight));
                    cornerView.Image = UIImage.FromImage(cornerContext.ToImage());
                }
                finally
                {
                    Marshal.FreeHGlobal(buffer);
                    if (cornerContext != null) { cornerContext.Dispose(); cornerContext = null; }
                    if (cornerColorSpace != null) { cornerColorSpace.Dispose(); cornerColorSpace = null; }
                }
                cornerView.Image = DrawBottomRightCorner(cornerView.Image);
                AddSubview(cornerView);

                // Add the cellsScrollView
                cellsScrollView = new UIScrollView(new RectangleF(RowHeaderWidth, ColumnHeaderHeight,
                    Frame.Width - RowHeaderWidth, Frame.Height - ColumnHeaderHeight));
                cellsScrollView.ContentSize = new SizeF(ColumnCount * ColumnHeaderWidth, RowCount * RowHeaderHeight);
                Size iContentSize = new Size((int)cellsScrollView.ContentSize.Width, (int)cellsScrollView.ContentSize.Height);
                cellsScrollView.BackgroundColor = UIColor.Black;
                AddSubview(cellsScrollView);

                CGColorSpace colorSpace = null;
                CGBitmapContext context = null;
                CGGradient gradient = null;
                UIImage image = null;
                int bytesPerRow = 4 * iContentSize.Width;
                int byteCount = bytesPerRow * iContentSize.Height;
                buffer = Marshal.AllocHGlobal(byteCount);
                if (buffer == IntPtr.Zero)
                    throw new OutOfMemoryException("Out of memory.");
                try
                {
                    colorSpace = CGColorSpace.CreateDeviceRGB();
                    context = new CGBitmapContext(buffer, iContentSize.Width, iContentSize.Height,
                        8, 4 * iContentSize.Width, colorSpace, CGImageAlphaInfo.PremultipliedFirst);
                    float[] components = new float[] { .75f, .75f, .75f, 1f, .25f, .25f, .25f, 1f };
                    float[] locations = new float[] { 0f, 1f };
                    gradient = new CGGradient(colorSpace, components, locations);
                    PointF startPoint = new PointF(0f, (float)iContentSize.Height);
                    PointF endPoint = new PointF((float)iContentSize.Width, 0f);
                    context.DrawLinearGradient(gradient, startPoint, endPoint, 0);
                    context.SetLineWidth(Constants.lineWidth);
                    context.BeginPath();
                    for (UInt16 i = 1; i <= RowCount; i++)
                    {
                        context.MoveTo(0f, iContentSize.Height - i * RowHeaderHeight + (Constants.lineWidth / 2));
                        context.AddLineToPoint((float)iContentSize.Width,
                            iContentSize.Height - i * RowHeaderHeight + (Constants.lineWidth / 2));
                    }
                    for (UInt16 j = 1; j <= ColumnCount; j++)
                    {
                        context.MoveTo((float)j * ColumnHeaderWidth - Constants.lineWidth / 2, (float)iContentSize.Height);
                        context.AddLineToPoint((float)j * ColumnHeaderWidth - Constants.lineWidth / 2, 0f);
                    }
                    context.StrokePath();
                    image = UIImage.FromImage(context.ToImage());
                }
                finally
                {
                    Marshal.FreeHGlobal(buffer);
                    if (gradient != null) { gradient.Dispose(); gradient = null; }
                    if (context != null) { context.Dispose(); context = null; }
                    if (colorSpace != null) { colorSpace.Dispose(); colorSpace = null; }
                    // GC.Collect();
                    // GC.WaitForPendingFinalizers();
                }
                UIImage finalImage = ActivateCell(1, 1, image);
                finalImage = ActivateCell(0, 0, finalImage);
                cellsImageView = new UIImageView(finalImage);
                cellsImageView.Frame = new RectangleF(0f, 0f, iContentSize.Width, iContentSize.Height);
                cellsScrollView.AddSubview(cellsImageView);
            }

            private UIImage ActivateCell(UInt16 column, UInt16 row, UIImage backgroundImage)
            {
                UInt16 ColumnHeaderWidth = (UInt16)spreadsheetMessenger.ColumnHeaderWidth;
                UInt16 RowHeaderHeight = (UInt16)spreadsheetMessenger.RowHeaderHeight;
                CGColorSpace cellColorSpace = null;
                CGBitmapContext cellContext = null;
                UIImage cellImage = null;
                IntPtr buffer = Marshal.AllocHGlobal(4 * ColumnHeaderWidth * RowHeaderHeight);
                if (buffer == IntPtr.Zero)
                    throw new OutOfMemoryException("Out of memory: ActivateCell()");
                try
                {
                    cellColorSpace = CGColorSpace.CreateDeviceRGB();
                    // Create a bitmap the size of a cell
                    cellContext = new CGBitmapContext(buffer, ColumnHeaderWidth, RowHeaderHeight,
                        8, 4 * ColumnHeaderWidth, cellColorSpace, CGImageAlphaInfo.PremultipliedFirst);
                    // Paint it white
                    cellContext.SetFillColorWithColor(UIColor.White.CGColor);
                    cellContext.FillRect(new RectangleF(0f, 0f, ColumnHeaderWidth, RowHeaderHeight));
                    // Convert it to an image
                    cellImage = UIImage.FromImage(cellContext.ToImage());
                }
                finally
                {
                    Marshal.FreeHGlobal(buffer);
                    if (cellContext != null) { cellContext.Dispose(); cellContext = null; }
                    if (cellColorSpace != null) { cellColorSpace.Dispose(); cellColorSpace = null; }
                    // GC.Collect();
                    // GC.WaitForPendingFinalizers();
                }
                // Draw the border on the cell image
                cellImage = DrawBottomRightCorner(cellImage);

                CGColorSpace colorSpace = null;
                CGBitmapContext context = null;
                Size iContentSize = new Size((int)backgroundImage.Size.Width, (int)backgroundImage.Size.Height);
                buffer = Marshal.AllocHGlobal(4 * iContentSize.Width * iContentSize.Height);
                if (buffer == IntPtr.Zero)
                    throw new OutOfMemoryException("Out of memory: ActivateCell().");
                try
                {
                    colorSpace = CGColorSpace.CreateDeviceRGB();
                    // Set up a bitmap context the size of the whole grid
                    context = new CGBitmapContext(buffer, iContentSize.Width, iContentSize.Height,
                        8, 4 * iContentSize.Width, colorSpace, CGImageAlphaInfo.PremultipliedFirst);
                    // Draw the original grid into the bitmap
                    context.DrawImage(new RectangleF(0f, 0f, iContentSize.Width, iContentSize.Height),
                        backgroundImage.CGImage);
                    // Draw the cell image into the bitmap
                    context.DrawImage(new RectangleF(column * ColumnHeaderWidth,
                        iContentSize.Height - (row + 1) * RowHeaderHeight,
                        ColumnHeaderWidth, RowHeaderHeight), cellImage.CGImage);
                    // Convert the bitmap back to an image
                    backgroundImage = UIImage.FromImage(context.ToImage());
                }
                finally
                {
                    Marshal.FreeHGlobal(buffer);
                    if (context != null) { context.Dispose(); context = null; }
                    if (colorSpace != null) { colorSpace.Dispose(); colorSpace = null; }
                    // GC.Collect();
                    // GC.WaitForPendingFinalizers();
                }
                return backgroundImage;
            }

            private UIImage DrawBottomRightCorner(UIImage image)
            {
                int width = (int)image.Size.Width;
                int height = (int)image.Size.Height;
                float lineWidth = Constants.lineWidth;
                CGColorSpace colorSpace = null;
                CGBitmapContext context = null;
                UIImage returnImage = null;
                IntPtr buffer = Marshal.AllocHGlobal(4 * width * height);
                if (buffer == IntPtr.Zero)
                    throw new OutOfMemoryException("Out of memory: DrawBottomRightCorner().");
                try
                {
                    colorSpace = CGColorSpace.CreateDeviceRGB();
                    context = new CGBitmapContext(buffer, width, height, 8, 4 * width,
                        colorSpace, CGImageAlphaInfo.PremultipliedFirst);
                    context.DrawImage(new RectangleF(0f, 0f, width, height), image.CGImage);
                    context.BeginPath();
                    context.MoveTo(0f, (int)(lineWidth / 2f));
                    context.AddLineToPoint(width - (int)(lineWidth / 2f), (int)(lineWidth / 2f));
                    context.AddLineToPoint(width - (int)(lineWidth / 2f), height);
                    context.SetLineWidth(Constants.lineWidth);
                    context.SetStrokeColorWithColor(UIColor.Black.CGColor);
                    context.StrokePath();
                    returnImage = UIImage.FromImage(context.ToImage());
                }
                finally
                {
                    Marshal.FreeHGlobal(buffer);
                    if (context != null) { context.Dispose(); context = null; }
                    if (colorSpace != null) { colorSpace.Dispose(); colorSpace = null; }
                    // GC.Collect();
                    // GC.WaitForPendingFinalizers();
                }
                return returnImage;
            }
        }

    Read the article

  • iPhone: CMYK Supported Pixel Formats woes

    - by user219393
    I am trying to create an image sized 1x1 in the CMYK color space. As a first step I am creating the bitmap context:

        CGColorSpaceRef cmykcolorSpace = CGColorSpaceCreateDeviceCMYK();
        CGContextRef currentcontext = CGBitmapContextCreate(NULL,
            1, 1,
            8,  // bitsPerComponent
            4,  // bytesPerRow
            cmykcolorSpace,
            kCGImageAlphaNone);

    No matter what combination of bitsPerComponent (8, 16 or 32) I use from the supported CMYK formats, there's always the same error:

        Unsupported pixel description - 4 components, 8 bits-per-component, 32 bits-per-pixel

    Any help is appreciated. These are the supported pixel formats:

        32 bpp,   8 bpc, kCGImageAlphaNone
        64 bpp,  16 bpc, kCGImageAlphaNone
        128 bpp, 32 bpc, kCGImageAlphaNone | kCGBitmapFloatComponents

    Read the article

  • Why does UIImageView "darken"/saturate PNG images, and can I stop it?

    - by Ben
    I have a PNG file in a UIImageView, and next to that I have an EAGLView which displays the continuation of that same image (long story) as a texture, carved from the same original PNG. The point is that these images, which should match up flawlessly, actually have somewhat differing color saturation. Normally I'd blame my handling of the PNG texture load in GL, but when I hold Preview (with the PNG) up to the iPhone simulator, it's GL that's spot on, and the UIImageView that's wrong! It's taken the image and made it ever-so-slightly more saturated. The image view is opaque with 100% alpha. I verified this on a clean UIImageView with another PNG file when put next to Preview. Anyone know what's up?

    Read the article

  • How can I use the HSL colorspace in Java?

    - by Eric
    I've had a look at the ColorSpace class, and found the constant TYPE_HLS (which presumably is just HSL in a different order). Can I use this constant to create a Color from hue, saturation, and luminosity? If not, are there any Java classes for this, or do I need to write my own?
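
    A note of caution: TYPE_HLS is only a color-space type descriptor; ColorSpace.getInstance() cannot hand you an HLS instance, and the JDK's built-in convenience, Color.getHSBColor(), is HSB/HSV rather than HSL. A minimal hand-rolled sketch of HSL to RGB, assuming h in degrees and s, l in 0..1:

        import java.awt.Color;

        public final class Hsl {
            // Standard piecewise HSL -> RGB conversion.
            public static Color toColor(float h, float s, float l) {
                float c = (1f - Math.abs(2f * l - 1f)) * s;   // chroma
                float hp = (h % 360f) / 60f;                  // hue sector
                float x = c * (1f - Math.abs(hp % 2f - 1f));
                float r = 0f, g = 0f, b = 0f;
                if      (hp < 1f) { r = c; g = x; }
                else if (hp < 2f) { r = x; g = c; }
                else if (hp < 3f) { g = c; b = x; }
                else if (hp < 4f) { g = x; b = c; }
                else if (hp < 5f) { r = x; b = c; }
                else              { r = c; b = x; }
                float m = l - c / 2f;                         // lightness offset
                return new Color(r + m, g + m, b + m);
            }

            public static void main(String[] args) {
                System.out.println(toColor(120f, 1f, 0.5f)); // pure green
            }
        }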

    Read the article

  • How can we change the color of text programmatically?

    - by user297535
    My code is:

        - (UIImage *)addText:(UIImage *)img text:(NSString *)text1 {
            int w = img.size.width;
            int h = img.size.height;
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w,
                colorSpace, kCGImageAlphaPremultipliedFirst);
            CGContextDrawImage(context, CGRectMake(0, 0, w, h), img.CGImage);
            CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 1);
            char *text = (char *)[text1 cStringUsingEncoding:NSASCIIStringEncoding];
            CGContextSelectFont(context, "Arial", 18, kCGEncodingMacRoman);
            CGContextSetTextDrawingMode(context, kCGTextFill);
            CGContextSetRGBFillColor(context, 255, 255, 255, 2);
            CGContextShowTextAtPoint(context, 10, 170, text, strlen(text));
            CGImageRef imageMasked = CGBitmapContextCreateImage(context);
            CGContextRelease(context);
            CGColorSpaceRelease(colorSpace);
            return [UIImage imageWithCGImage:imageMasked];
        }

    How can we change the color of the text programmatically? Answers will be greatly appreciated!

    Read the article

  • CIE XYZ colorspace: do I have RGBA or XYZA?

    - by Tronic
    I plan to write a painting program based on linear combinations of the xy-plane points (0,1), (1,0) and (0,0). Such a system works identically to RGB, except that the primaries are not within the gamut but at the corners of a triangle that encloses the entire gamut. I have seen the three points referred to as X, Y and Z (upper case) somewhere, but I cannot find the page anymore (I marked them on the picture myself). My pixel format stores the intensity of each of those three components the same way RGB does, together with an alpha value. This allows using pretty much any image-manipulation operation designed for RGBA without modifying the code. What is my format called? Is it XYZA, RGBA or something else? Google doesn't seem to know of XYZA. RGBA would get confused with sRGB + alpha (which I also need to use in the same program). Notice that the primaries X, Y and Z and their intensities have little to do with the x, y and z coordinates (lower case) that are more commonly used.

    Read the article

  • How to get rid of previous reflection when reflecting a UIImageView (with changing pictures)?

    - by epale
    Hi everyone, I have managed to use the reflection sample app from Apple to create a reflection from a UIImageView. But the problem is that when I change the picture inside the UIImageView, the reflection from the previously displayed picture remains on the screen. The new reflection of the next picture then overlaps the previous reflection. How do I ensure that the previous reflection is removed when I change to the next picture? Thank you so much. I hope my question is not too basic. Here is the code I have used so far:

        // reflection
        self.view.autoresizesSubviews = YES;
        self.view.userInteractionEnabled = YES;
        // create the reflection view
        CGRect reflectionRect = currentView.frame;
        // the reflection is a fraction of the size of the view being reflected
        reflectionRect.size.height = reflectionRect.size.height * kDefaultReflectionFraction;
        // and is offset to be at the bottom of the view being reflected
        reflectionRect = CGRectOffset(reflectionRect, 0, currentView.frame.size.height);
        reflectionView = [[UIImageView alloc] initWithFrame:reflectionRect];
        // determine the size of the reflection to create
        NSUInteger reflectionHeight = currentView.bounds.size.height * kDefaultReflectionFraction;
        // create the reflection image, assign it to the UIImageView
        // and add the image view to the containerView
        reflectionView.image = [self reflectedImage:currentView withHeight:reflectionHeight];
        reflectionView.alpha = kDefaultReflectionOpacity;
        [self.view addSubview:reflectionView];

    Then the code below is used to form the reflection:

        CGImageRef CreateGradientImage(int pixelsWide, int pixelsHigh)
        {
            CGImageRef theCGImage = NULL;
            // gradient is always black-white and the mask must be in the gray colorspace
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
            // create the bitmap context
            CGContextRef gradientBitmapContext = CGBitmapContextCreate(nil, pixelsWide, pixelsHigh,
                8, 0, colorSpace, kCGImageAlphaNone);
            // define the start and end grayscale values (with the alpha, even though
            // our bitmap context doesn't support alpha the gradient requires it)
            CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};
            // create the CGGradient and then release the gray color space
            CGGradientRef grayScaleGradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
            CGColorSpaceRelease(colorSpace);
            // create the start and end points for the gradient vector (straight down)
            CGPoint gradientStartPoint = CGPointZero;
            CGPoint gradientEndPoint = CGPointMake(0, pixelsHigh);
            // draw the gradient into the gray bitmap context
            CGContextDrawLinearGradient(gradientBitmapContext, grayScaleGradient,
                gradientStartPoint, gradientEndPoint, kCGGradientDrawsAfterEndLocation);
            CGGradientRelease(grayScaleGradient);
            // convert the context into a CGImageRef and release the context
            theCGImage = CGBitmapContextCreateImage(gradientBitmapContext);
            CGContextRelease(gradientBitmapContext);
            // return the imageref containing the gradient
            return theCGImage;
        }

        CGContextRef MyCreateBitmapContext(int pixelsWide, int pixelsHigh)
        {
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            // create the bitmap context
            CGContextRef bitmapContext = CGBitmapContextCreate(nil, pixelsWide, pixelsHigh, 8, 0, colorSpace,
                // this will give us an optimal BGRA format for the device:
                (kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst));
            CGColorSpaceRelease(colorSpace);
            return bitmapContext;
        }

        - (UIImage *)reflectedImage:(UIImageView *)fromImage withHeight:(NSUInteger)height
        {
            if (!height) return nil;
            // create a bitmap graphics context the size of the image
            CGContextRef mainViewContentContext = MyCreateBitmapContext(fromImage.bounds.size.width, height);
            // offset the context -
            // This is necessary because, by default, the layer created by a view for caching its
            // content is flipped. But when you actually access the layer content and have it
            // rendered, it is inverted. Since we're only creating a context the size of our
            // reflection view (a fraction of the size of the main view), we have to translate
            // the context the delta in size, and render it.
            CGFloat translateVertical = fromImage.bounds.size.height - height;
            CGContextTranslateCTM(mainViewContentContext, 0, -translateVertical);
            // render the layer into the bitmap context
            CALayer *layer = fromImage.layer;
            [layer renderInContext:mainViewContentContext];
            // create CGImageRef of the main view bitmap content, and then release that bitmap context
            CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
            CGContextRelease(mainViewContentContext);
            // create a 2 bit CGImage containing a gradient that will be used for masking the
            // main view content to create the 'fade' of the reflection. The CGImageCreateWithMask
            // function will stretch the bitmap image as required, so we can create a 1 pixel wide gradient
            CGImageRef gradientMaskImage = CreateGradientImage(1, height);
            // create an image by masking the bitmap of the mainView content with the gradient view,
            // then release the pre-masked content bitmap and the gradient bitmap
            CGImageRef reflectionImage = CGImageCreateWithMask(mainViewContentBitmapContext, gradientMaskImage);
            CGImageRelease(mainViewContentBitmapContext);
            CGImageRelease(gradientMaskImage);
            // convert the finished reflection image to a UIImage
            UIImage *theImage = [UIImage imageWithCGImage:reflectionImage];
            // image is retained by the property setting above, so we can release the original
            CGImageRelease(reflectionImage);
            return theImage;
        }

    Read the article

  • iPhone Image Processing--matrix convolution

    - by James
    I am implementing a matrix convolution blur on the iPhone. The following code converts the UIImage supplied as an argument of the blur function into a CGImageRef, and then stores the RGBA values in a standard C char array:

        CGImageRef imageRef = imgRef.CGImage;
        int width = imgRef.size.width;
        int height = imgRef.size.height;
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        unsigned char *pixels = malloc((height) * (width) * 4);
        NSUInteger bytesPerPixel = 4;
        NSUInteger bytesPerRow = bytesPerPixel * (width);
        NSUInteger bitsPerComponent = 8;
        CGContextRef context = CGBitmapContextCreate(pixels, width, height,
            bitsPerComponent, bytesPerRow, colorSpace,
            kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
        CGContextRelease(context);

    Then the pixel values stored in the pixels array are convolved and stored in another array:

        unsigned char *results = malloc((height) * (width) * 4);

    Finally, these augmented pixel values are changed back into a CGImageRef, converted to a UIImage, and returned at the end of the function with the following code:

        context = CGBitmapContextCreate(results, width, height,
            bitsPerComponent, bytesPerRow, colorSpace,
            kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGImageRef finalImage = CGBitmapContextCreateImage(context);
        UIImage *newImage = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
        CGImageRelease(finalImage);
        NSLog(@"edges found");
        free(results);
        free(pixels);
        CGColorSpaceRelease(colorSpace);
        return newImage;

    This works perfectly, once. But when the image is put through the filter again, very odd pixel values, representing input pixel values that don't exist, are returned. Is there any reason why this should work the first time but not afterward? Below is the entirety of the function:

        - (UIImage *)blur:(UIImage *)imgRef {
            CGImageRef imageRef = imgRef.CGImage;
            int width = imgRef.size.width;
            int height = imgRef.size.height;
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            unsigned char *pixels = malloc((height) * (width) * 4);
            NSUInteger bytesPerPixel = 4;
            NSUInteger bytesPerRow = bytesPerPixel * (width);
            NSUInteger bitsPerComponent = 8;
            CGContextRef context = CGBitmapContextCreate(pixels, width, height,
                bitsPerComponent, bytesPerRow, colorSpace,
                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
            CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
            CGContextRelease(context);
            height = imgRef.size.height;
            width = imgRef.size.width;
            float matrix[] = {0, 0, 0, 0, 1, 0, 0, 0, 0};
            float divisor = 1;
            float shift = 0;
            unsigned char *results = malloc((height) * (width) * 4);
            for (int y = 1; y < height; y++) {
                for (int x = 1; x < width; x++) {
                    float red = 0;
                    float green = 0;
                    float blue = 0;
                    int multiplier = 1;
                    if (y > 0 && x > 0) {
                        int index = (y - 1) * width + x;
                        red = matrix[0] * multiplier * (float)pixels[4 * (index - 1)]
                            + matrix[1] * multiplier * (float)pixels[4 * (index)]
                            + matrix[2] * multiplier * (float)pixels[4 * (index + 1)];
                        green = matrix[0] * multiplier * (float)pixels[4 * (index - 1) + 1]
                              + matrix[1] * multiplier * (float)pixels[4 * (index) + 1]
                              + matrix[2] * multiplier * (float)pixels[4 * (index + 1) + 1];
                        blue = matrix[0] * multiplier * (float)pixels[4 * (index - 1) + 2]
                             + matrix[1] * multiplier * (float)pixels[4 * (index) + 2]
                             + matrix[2] * multiplier * (float)pixels[4 * (index + 1) + 2];
                        index = (y) * width + x;
                        red = red + matrix[3] * multiplier * (float)pixels[4 * (index - 1)]
                                  + matrix[4] * multiplier * (float)pixels[4 * (index)]
                                  + matrix[5] * multiplier * (float)pixels[4 * (index + 1)];
                        green = green + matrix[3] * multiplier * (float)pixels[4 * (index - 1) + 1]
                                      + matrix[4] * multiplier * (float)pixels[4 * (index) + 1]
                                      + matrix[5] * multiplier * (float)pixels[4 * (index + 1) + 1];
                        blue = blue + matrix[3] * multiplier * (float)pixels[4 * (index - 1) + 2]
                                    + matrix[4] * multiplier * (float)pixels[4 * (index) + 2]
                                    + matrix[5] * multiplier * (float)pixels[4 * (index + 1) + 2];
                        index = (y + 1) * width + x;
                        red = red + matrix[6] * multiplier * (float)pixels[4 * (index - 1)]
                                  + matrix[7] * multiplier * (float)pixels[4 * (index)]
                                  + matrix[8] * multiplier * (float)pixels[4 * (index + 1)];
                        green = green + matrix[6] * multiplier * (float)pixels[4 * (index - 1) + 1]
                                      + matrix[7] * multiplier * (float)pixels[4 * (index) + 1]
                                      + matrix[8] * multiplier * (float)pixels[4 * (index + 1) + 1];
                        blue = blue + matrix[6] * multiplier * (float)pixels[4 * (index - 1) + 2]
                                    + matrix[7] * multiplier * (float)pixels[4 * (index) + 2]
                                    + matrix[8] * multiplier * (float)pixels[4 * (index + 1) + 2];
                        red = red / divisor + shift;
                        green = green / divisor + shift;
                        blue = blue / divisor + shift;
                        if (red < 0) { red = 0; }
                        if (green < 0) { green = 0; }
                        if (blue < 0) { blue = 0; }
                        if (red > 255) { red = 255; }
                        if (green > 255) { green = 255; }
                        if (blue > 255) { blue = 255; }
                        int realPos = 4 * (y * imgRef.size.width + x);
                        results[realPos] = red;
                        results[realPos + 1] = green;
                        results[realPos + 2] = blue;
                        results[realPos + 3] = 1;
                    } else {
                        int realPos = 4 * ((y) * (imgRef.size.width) + (x));
                        results[realPos] = 0;
                        results[realPos + 1] = 0;
                        results[realPos + 2] = 0;
                        results[realPos + 3] = 1;
                    }
                }
            }
            context = CGBitmapContextCreate(results, width, height,
                bitsPerComponent, bytesPerRow, colorSpace,
                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
            CGImageRef finalImage = CGBitmapContextCreateImage(context);
            UIImage *newImage = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
            CGImageRelease(finalImage);
            free(results);
            free(pixels);
            CGColorSpaceRelease(colorSpace);
            return newImage;
        }

    Thanks!
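
    As an aside for readers coming from the Java questions above: the same kind of 3x3 kernel convolution is available there out of the box, which sidesteps this class of buffer-management bug entirely. A minimal sketch with java.awt.image.ConvolveOp:

        import java.awt.image.BufferedImage;
        import java.awt.image.ConvolveOp;
        import java.awt.image.Kernel;

        public final class KernelBlur {
            // 3x3 box blur; each weight is 1/9, so no separate divisor pass is needed.
            public static BufferedImage blur(BufferedImage src) {
                float k = 1f / 9f;
                float[] weights = { k, k, k, k, k, k, k, k, k };
                ConvolveOp op = new ConvolveOp(new Kernel(3, 3, weights),
                        ConvolveOp.EDGE_NO_OP, null);
                return op.filter(src, null);
            }
        }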

    Read the article

  • Flipping OpenGL texture

    - by Mk12
    When I load textures from images normally, they are upside down because of OpenGL's coordinate system. What would be the best way to flip them? Options I can think of (see the code below):

    - glScalef(1.0f, -1.0f, 1.0f)
    - vertically flipping the image files manually (in Photoshop)
    - flipping them programmatically after loading them (I don't know how)

    This is the method I'm using to load PNG textures, in my Utilities.m file (Objective-C):

        + (GLuint)loadPngTexture:(NSString *)name {
            CFURLRef textureURL = CFBundleCopyResourceURL(CFBundleGetMainBundle(),
                (CFStringRef)name, CFSTR("png"), CFSTR("Textures"));
            NSAssert(textureURL, @"Texture name invalid");
            CGImageSourceRef imageSource = CGImageSourceCreateWithURL(textureURL, NULL);
            NSAssert(imageSource, @"Invalid Image Path.");
            NSAssert((CGImageSourceGetCount(imageSource) > 0), @"No Image in Image Source.");
            CFRelease(textureURL);
            CGImageRef image = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
            NSAssert(image, @"Image not created.");
            CFRelease(imageSource);
            NSUInteger width = CGImageGetWidth(image);
            NSUInteger height = CGImageGetHeight(image);
            void *data = malloc(width * height * 4);
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            NSAssert(colorSpace, @"Colorspace not created.");
            CGContextRef context = CGBitmapContextCreate(data, width, height, 8, width * 4,
                colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
            NSAssert(context, @"Context not created.");
            CGColorSpaceRelease(colorSpace);
            CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
            CGImageRelease(image);
            CGContextRelease(context);
            GLuint textureId;
            glGenTextures(1, &textureId);
            glBindTexture(GL_TEXTURE_2D, textureId);
            glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, data);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE_SGIS);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE_SGIS);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
            free(data);
            return textureId;
        }

    Also, another thing I was wondering about: if I made a simple 2D game with pixels mapped to units, would it be all right to set it up so that the origin is in the top-left corner, or would I run into problems with other things (e.g. text rendering)? Thanks.

    Read the article

  • Image Masking Problem

    - by Adnan Sohail
    Hi, I am doing image masking. First I was using the CGImageMaskCreate() function for masking, but now I am using the solution you had posted for masking. My problem is that it works fine in the simulator, but on the device it is not transparent, and it gives a rectangle of the mask image in the background. The following is my code. Now tell me what I have to do. Thanks.

        CGContextRef mainViewContentContext;
        CGColorSpaceRef colorSpace;
        UIImage *orgImage = [UIImage imageNamed:@"orgimage.png"];
        CGImageRef img = [orgImage CGImage];
        colorSpace = CGColorSpaceCreateDeviceRGB();
        mainViewContentContext = CGBitmapContextCreate(NULL, width, height, 8, 0,
            colorSpace, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);
        if (mainViewContentContext == NULL)
            return NULL;
        CGRect subImageRect;
        CGImageRef tileImage;
        subImageRect = CGRectMake(xCord, yCord, width, height);
        tileImage = CGImageCreateWithImageInRect(img, subImageRect);
        CGImageRef maskImage = mask_image.CGImage;
        CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, width, height), maskImage);
        CGContextDrawImage(mainViewContentContext, CGRectMake(0, 0, width, height), tileImage);
        CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
        CGContextRelease(mainViewContentContext);
        UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];
        CGImageRelease(mainViewContentBitmapContext);

    Read the article

  • iPhone - UIImage Leak, CGBitmapContextCreateImage Leak

    - by bbullis21
    Alright, I am having a world of difficulty tracking down this memory leak. When running this code I do not see any memory leaking, but my object allocations keep climbing. Instruments points to CGBitmapContextCreateImage > create_bitmap_data_provider > malloc, which takes up 60% of my object allocations. This code is called several times with an NSTimer.

        // GET IMAGE FROM RESOURCE DIR
        NSString *fileLocation = [[NSBundle mainBundle] pathForResource:imgMain ofType:@"jpg"];
        NSData *imageData = [NSData dataWithContentsOfFile:fileLocation];
        UIImage *blurMe = [UIImage imageWithData:imageData];
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        UIImage *scaledImage = [blurMe _imageScaledToSize:CGSizeMake(blurMe.size.width / dblBlurLevel,
            blurMe.size.width / dblBlurLevel) interpolationQuality:3.0];
        UIImage *labelImage = [scaledImage _imageScaledToSize:blurMe.size interpolationQuality:3.0];
        UIImage *imageCopy = [[UIImage alloc] initWithCGImage:labelImage.CGImage];
        [pool drain]; // deallocates scaledImage and labelImage
        imgView.image = imageCopy;
        [imageCopy release];

    Below is the blur function. I believe the object-allocation issue is located in here. Maybe I just need a pair of fresh eyes. It would be great if someone could figure this out. Sorry it is kind of long... I'll try to shorten it.

        @implementation UIImage(Blur)

        - (UIImage *)blurredCopy:(int)pixelRadius {
            // VARS
            unsigned char *srcData, *destData, *finalData;
            CGContextRef context = NULL;
            CGColorSpaceRef colorSpace;
            void *bitmapData;
            int bitmapByteCount;
            int bitmapBytesPerRow;
            // IMAGE SIZE
            size_t pixelsWide = CGImageGetWidth(self.CGImage);
            size_t pixelsHigh = CGImageGetHeight(self.CGImage);
            bitmapBytesPerRow = (pixelsWide * 4);
            bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
            colorSpace = CGColorSpaceCreateDeviceRGB();
            if (colorSpace == NULL) {
                return NULL;
            }
            bitmapData = malloc(bitmapByteCount);
            if (bitmapData == NULL) {
                CGColorSpaceRelease(colorSpace);
            }
            context = CGBitmapContextCreate(bitmapData, pixelsWide, pixelsHigh, 8,
                bitmapBytesPerRow, colorSpace, kCGImageAlphaPremultipliedFirst);
            if (context == NULL) {
                free(bitmapData);
            }
            CGColorSpaceRelease(colorSpace);
            free(bitmapData);
            if (context == NULL) {
                return NULL;
            }
            // PREPARE BLUR
            size_t width = CGBitmapContextGetWidth(context);
            size_t height = CGBitmapContextGetHeight(context);
            size_t bpr = CGBitmapContextGetBytesPerRow(context);
            size_t bpp = (CGBitmapContextGetBitsPerPixel(context) / 8);
            CGRect rect = {{0, 0}, {width, height}};
            CGContextDrawImage(context, rect, self.CGImage);
            // Now we can get a pointer to the image data associated with the bitmap context.
            srcData = (unsigned char *)CGBitmapContextGetData(context);
            if (srcData != NULL) {
                size_t dataSize = bpr * height;
                finalData = malloc(dataSize);
                destData = malloc(dataSize);
                memcpy(finalData, srcData, dataSize);
                memcpy(destData, srcData, dataSize);
                int sums[5];
                int i, x, y, k;
                int gauss_sum = 0;
                int radius = pixelRadius * 2 + 1;
                int *gauss_fact = malloc(radius * sizeof(int));
                for (i = 0; i < pixelRadius; i++) {
                    // ... long code that creates the int factors, elided in the question ...
                }
                if (gauss_fact) {
                    free(gauss_fact);
                }
            }
            size_t bitmapByteCount2 = bpr * height;
            // CREATE DATA PROVIDER
            CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, srcData,
                bitmapByteCount2, NULL);
            // CREATE IMAGE
            CGImageRef cgImage = CGImageCreate(width, height,
                CGBitmapContextGetBitsPerComponent(context),
                CGBitmapContextGetBitsPerPixel(context),
                CGBitmapContextGetBytesPerRow(context),
                CGBitmapContextGetColorSpace(context),
                CGBitmapContextGetBitmapInfo(context),
                dataProvider, NULL, true, kCGRenderingIntentDefault);
            // RELEASE INFORMATION
            CGDataProviderRelease(dataProvider);
            CGContextRelease(context);
            if (destData) { free(destData); }
            if (finalData) { free(finalData); }
            if (srcData) { free(srcData); }
            UIImage *retUIImage = [UIImage imageWithCGImage:cgImage];
            CGImageRelease(cgImage);
            return retUIImage;
        }

    The only thing I can think of that is holding on to the allocations is this line: UIImage *retUIImage = [UIImage imageWithCGImage:cgImage]; ...but how do I release that after it has been returned? Hopefully someone can help.

    Read the article
