Search Results

Search found 19 results on 1 page for 'affinetransform'.


  • Java: Line appears when using AffineTransform to scale image

    - by Malakim
    Hi, I'm having a problem with image scaling. When I use the following code to scale an image, it ends up with a line either at the bottom or on the right side of the image.

        double scale = 1;
        if (scaleHeight >= scaleWidth) {
            scale = scaleWidth;
        } else {
            scale = scaleHeight;
        }
        AffineTransform af = new AffineTransform();
        af.scale(scale, scale);
        AffineTransformOp operation = new AffineTransformOp(af,
                AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
        BufferedImage bufferedThumb = operation.filter(img, null);

    The original image is here: http://tinyurl.com/yzv6r7h
    The scaled image: http://tinyurl.com/yk6e8ga
    Does anyone know why the line appears? Thanks!
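
    A possible cause, offered as an assumption rather than a confirmed diagnosis: when scale * width is not an integer, AffineTransformOp rounds the destination size up, and nearest-neighbor sampling can leave the final row or column unfilled. A minimal sketch of one workaround, snapping the scale so the destination dimensions come out exact (img and scale are the variables from the question):

        // Round the target size first, then derive the scale from it so the
        // transform maps the source exactly onto the destination raster.
        int targetW = (int) Math.round(img.getWidth() * scale);
        int targetH = (int) Math.round(img.getHeight() * scale);
        AffineTransform exact = AffineTransform.getScaleInstance(
                (double) targetW / img.getWidth(),
                (double) targetH / img.getHeight());
        AffineTransformOp op = new AffineTransformOp(exact,
                AffineTransformOp.TYPE_BILINEAR);
        // An explicitly sized destination stops the op choosing its own bounds.
        BufferedImage dest = new BufferedImage(targetW, targetH,
                BufferedImage.TYPE_INT_RGB);
        BufferedImage thumb = op.filter(img, dest);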

    Read the article

  • How can I draw on JPanel using another quadrant for the coordinates?

    - by Sanoj
    I would like to draw some shapes on a JPanel by overriding paintComponent, and I would like to be able to pan and zoom. Panning and zooming are easy to do with AffineTransform and the setTransform method on the Graphics2D object. After doing that I can easily draw the shapes with g2.draw(myShape). The shapes are defined in "world coordinates", so panning works fine and I have to translate them to the canvas/JPanel coordinates before drawing. Now I would like to change the quadrant of the coordinates: from the 4th quadrant that JPanel and computers commonly use to the 1st quadrant that users are most familiar with. The X axis is the same, but the Y axis should increase upwards instead of downwards. It is easy to redefine origo by new Point(origo.x, -origo.y); but how can I draw the shapes in this quadrant? I would like to keep the coordinates of the shapes (defined in world coordinates) rather than have them in canvas coordinates. So I need to transform them in some way, or transform the Graphics2D object, and I would like to do it efficiently. Can I do this with AffineTransform too?
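
    A common approach, sketched here as a guess at what the asker needs rather than as the thread's accepted answer: flip the Y axis in the Graphics2D transform before drawing, so shapes can stay in world coordinates. The fields panX, panY, zoom and myShape are assumptions for illustration:

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            Graphics2D g2 = (Graphics2D) g.create();
            // Move the origin to the bottom-left corner and make Y grow
            // upwards, turning the panel into a first-quadrant system.
            g2.translate(0, getHeight());
            g2.scale(1, -1);
            // Pan and zoom concatenate on top of the flip.
            g2.translate(panX, panY);
            g2.scale(zoom, zoom);
            g2.draw(myShape); // the shape stays in world coordinates
            g2.dispose();
        }

    One caveat worth noting: any text drawn under this transform comes out mirrored, so labels should be drawn either before the flip or with a locally re-flipped transform.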

    Read the article

  • Java error on bilinear interpolation of 16 bit data

    - by Jon
    I'm having an issue using bilinear interpolation for 16 bit data. I have two images, origImage and displayImage. I want to use AffineTransformOp to filter origImage through an AffineTransform into displayImage, which is the size of the display area. origImage is of type BufferedImage.TYPE_USHORT_GRAY and has a raster of type sun.awt.image.ShortInterleavedRaster. Here is the code I have right now:

        displayImage = new BufferedImage(getWidth(), getHeight(), origImage.getType());
        try {
            op = new AffineTransformOp(atx, AffineTransformOp.TYPE_BILINEAR);
            op.filter(origImage, displayImage);
        } catch (Exception e) {
            e.printStackTrace();
        }

    In order to show the error I have created two gradient images. One has values in the 15 bit range (max of 32767) and one in the 16 bit range (max of 65535). Below are the two images: [15 bit image] [16 bit image]. These two images were created in identical fashion and should look identical, but notice the line across the middle of the 16 bit image. At first I thought this was an overflow problem; however, it is weird that it manifests itself in the center of the gradient instead of at the end where the pixel values are higher. Also, if it were an overflow issue, I would suspect that the 15 bit image would have been affected as well. Any help on this would be greatly appreciated.
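
    One possible explanation, an assumption not confirmed in the thread: the artifact sits exactly where the gradient crosses 32768, which is where an unsigned 16-bit value stops fitting in a signed short, so the interpolation arithmetic may be treating samples as signed. A hedged workaround sketch: compress the data into the 15 bit range before filtering and expand it afterwards, at the cost of one bit of precision (this also mutates origImage in place):

        // Halve the dynamic range so every sample fits in a signed short.
        // Assumes origImage is TYPE_USHORT_GRAY backed by a DataBufferUShort.
        short[] px = ((java.awt.image.DataBufferUShort)
                origImage.getRaster().getDataBuffer()).getData();
        for (int i = 0; i < px.length; i++) {
            px[i] = (short) ((px[i] & 0xFFFF) >>> 1); // 16 bit -> 15 bit
        }
        BufferedImage filtered = op.filter(origImage, null);
        short[] out = ((java.awt.image.DataBufferUShort)
                filtered.getRaster().getDataBuffer()).getData();
        for (int i = 0; i < out.length; i++) {
            out[i] = (short) ((out[i] & 0xFFFF) << 1); // restore the 16 bit range
        }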

    Read the article

  • How to directly rotate CVImageBuffer image in IOS 4 without converting to UIImage?

    - by Ian Charnas
    I am using OpenCV 2.2 on the iPhone to detect faces. I'm using iOS 4's AVCaptureSession to get access to the camera stream, as seen in the code that follows. My challenge is that the video frames come in as CVBufferRef (pointers to CVImageBuffer) objects, and they come in oriented as landscape, 480px wide by 300px high. This is fine if you are holding the phone sideways, but when the phone is held in the upright position I want to rotate these frames 90 degrees clockwise so that OpenCV can find the faces correctly. I could convert the CVBufferRef to a CGImage, then to a UIImage, and then rotate, as this person is doing: Rotate CGImage taken from video frame. However that wastes a lot of CPU. I'm looking for a faster way to rotate the images coming in, ideally using the GPU to do this processing if possible. Any ideas? Ian

    Code sample:

        -(void) startCameraCapture {
            // Start up the face detector
            faceDetector = [[FaceDetector alloc] initWithCascade:@"haarcascade_frontalface_alt2"
                                               withFileExtension:@"xml"];

            // Create the AVCapture session
            session = [[AVCaptureSession alloc] init];

            // Create a preview layer to show the output from the camera
            AVCaptureVideoPreviewLayer *previewLayer =
                [AVCaptureVideoPreviewLayer layerWithSession:session];
            previewLayer.frame = previewView.frame;
            previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
            [previewView.layer addSublayer:previewLayer];

            // Get the default camera device
            AVCaptureDevice *camera =
                [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

            // Create an AVCaptureInput with the camera device
            NSError *error = nil;
            AVCaptureInput *cameraInput =
                [[AVCaptureDeviceInput alloc] initWithDevice:camera error:&error];
            if (cameraInput == nil) {
                NSLog(@"Error to create camera capture:%@", error);
            }

            // Set the output
            AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
            videoOutput.alwaysDiscardsLateVideoFrames = YES;

            // Create a queue besides the main thread queue to run the capture on
            dispatch_queue_t captureQueue = dispatch_queue_create("catpureQueue", NULL);

            // Set up our delegate
            [videoOutput setSampleBufferDelegate:self queue:captureQueue];

            // Release the queue. I still don't entirely understand why we're
            // releasing it here, but the code examples I've found indicate this
            // is the right thing. Hmm...
            dispatch_release(captureQueue);

            // Configure the pixel format
            videoOutput.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA],
                (id)kCVPixelBufferPixelFormatTypeKey,
                nil];

            // ... and the size of the frames we want.
            // Try AVCaptureSessionPresetLow if this is too slow...
            [session setSessionPreset:AVCaptureSessionPresetMedium];

            // If you wish to cap the frame rate to a known value, such as
            // 10 fps, set minFrameDuration.
            videoOutput.minFrameDuration = CMTimeMake(1, 10);

            // Add the input and output
            [session addInput:cameraInput];
            [session addOutput:videoOutput];

            // Start the session
            [session startRunning];
        }

        - (void)captureOutput:(AVCaptureOutput *)captureOutput
          didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
                 fromConnection:(AVCaptureConnection *)connection {
            // Only run if we're not already processing an image
            if (!faceDetector.imageNeedsProcessing) {
                // Get the CVImage from the sample buffer
                CVImageBufferRef cvImage = CMSampleBufferGetImageBuffer(sampleBuffer);

                // Send the CVImage to the FaceDetector for later processing
                [faceDetector setImageFromCVPixelBufferRef:cvImage];

                // Trigger the image processing on the main thread
                [self performSelectorOnMainThread:@selector(processImage)
                                       withObject:nil
                                    waitUntilDone:NO];
            }
        }

    Read the article

  • View results of affine transform

    - by stckjp
    I am trying to find out why, when I apply affine transformations to an image in OpenCV, the result is not visible in the preview window; the entire window is black. How can I work around this problem so that I can always view my transformed image (the result of the affine transform) in the window, no matter the applied transformation?

    Update: I think this happens because all the transformations are calculated with respect to the origin of the coordinate system (the top-left corner of the image). While for rotation I can specify the center of the rotation, and I am able to view the result, when I perform scaling I am not able to control where the transformed image goes. Is it possible to somehow move the coordinate system to make the image fit in the window?

    Update 2: I have an image which contains only an ROI at some position in it (the rest of the image is black), and I need to apply a set of affine transforms to it. To make things simpler and to see the effect of each individual transform, I applied them one by one. What I noticed is that whenever I translate the image such that the center of the ROI is at the center of the coordinate system (the top-left corner of the view window), all the affine transforms perform correctly without moving. However, with the ROI's center at the coordinate origin, the upper and left parts of the ROI remain cut out of the current view window. If I move the ROI's central point to another point in the view window (for example the window center), an affine transform of the type A = [a 0 0; 0 b 0] (A is the 2x3 matrix parameter of the warpAffine function) moves the image (ROI) outside of the view window, which doesn't happen if the ROI's center is in the top-left corner. How can I modify the affine transform so the image doesn't move out of its place (i.e. behaves the same way as when the ROI center is at the origin of the coordinate system)?
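
    One standard remedy, offered as a sketch rather than the thread's answer: conjugate the transform with translations so it acts about the ROI centre, i.e. pass T(c) * A * T(-c) to warpAffine instead of A. Illustrated here with Java's AffineTransform, since it composes the same 2x3 matrix (a, b, cx and cy stand in for the scale factors and the assumed ROI centre):

        // Scale about (cx, cy) instead of about the top-left origin.
        AffineTransform t = new AffineTransform();
        t.translate(cx, cy);    // 3. move the origin back to the ROI centre
        t.scale(a, b);          // 2. apply the original transform A
        t.translate(-cx, -cy);  // 1. bring the ROI centre to the origin
        // The six entries of the resulting 2x3 warp matrix:
        double[] m = new double[6];
        t.getMatrix(m);         // {m00, m10, m01, m11, m02, m12}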

    Read the article

  • How to retrieve a pixel in a tiff image (loaded with JAI)?

    - by Ed Taylor
    I'm using a class (DisplayContainer) to hold a RenderedOp image that should be displayed to the user:

        RenderedOp image1 = JAI.create("tiff", params);
        DisplayContainer d = new DisplayContainer(image1);
        JScrollPane jsp = new JScrollPane(d);

        // Create a frame to contain the panel.
        Frame window = new Frame();
        window.add(jsp);
        window.pack();
        window.setVisible(true);

    The class DisplayContainer looks like this:

        import java.awt.event.MouseEvent;
        import java.awt.geom.AffineTransform;

        import javax.media.jai.RenderedOp;

        import com.sun.media.jai.widget.DisplayJAI;

        public class DisplayContainer extends DisplayJAI {

            private static final long serialVersionUID = 1L;
            private RenderedOp img;

            // Affine transform
            private final float ratio = 1f;
            private AffineTransform scaleForm =
                    AffineTransform.getScaleInstance(ratio, ratio);

            public DisplayContainer(RenderedOp img) {
                super(img);
                this.img = img;
                addMouseListener(this);
            }

            public void mouseClicked(MouseEvent e) {
                System.out.println("Mouseclick at: (" + e.getX() + ", " + e.getY() + ")");
                // How to retrieve the RGB value of the pixel where the click
                // took place?
            }

            // OMISSIONS
        }

    What I would like to know is how the RGB value of the clicked pixel can be obtained.
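
    A hedged sketch of one way to answer the closing question, using the Raster accessors on RenderedImage; this is not code from the original post, and it assumes the click coordinates map straight onto image pixels at the 1:1 ratio above:

        public void mouseClicked(MouseEvent e) {
            // Undo the display scale so we index the underlying image.
            int x = (int) (e.getX() / ratio);
            int y = (int) (e.getY() / ratio);
            // getData() materialises the pixels; getPixel returns one sample
            // per band, e.g. {R, G, B} for a 3-band TIFF.
            int[] pixel = img.getData().getPixel(x, y, (int[]) null);
            System.out.println("RGB = " + java.util.Arrays.toString(pixel));
        }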

    Read the article

  • How can I transform a Point2f with a matrix on Android?

    - by Vivendi
    I'm developing for Android and I'm using the android.renderscript.Matrix3f class to do some calculations. What I need to do now is something like:

        mat.transform(pointIn, pointOut);

    So I need to transform a point with a given matrix. In AWT I would simply do this:

        AffineTransform t = new AffineTransform();
        Point2D.Float p = new Point2D.Float();
        t.transform(p, p);

    But in Android I now have this:

        Matrix3f t = new Matrix3f();
        PointF p = new PointF();
        // Now I need to transform it somehow...

    The Matrix3f class in Android doesn't have a Matrix.transform(Point2D ptSrc, Point2D ptDst) method, so I guess I have to do the transformation manually, but I'm not really sure how that works. From what I've seen it's something like a translate and then a rotate? Could anyone please tell me how to do this in code?
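
    A sketch of the manual multiply, with the caveat that Matrix3f's get(column, row) indexing convention is an assumption worth verifying against the docs: treating the point as a homogeneous [x, y, 1] vector, it maps as x' = m00*x + m01*y + m02 and y' = m10*x + m11*y + m12.

        // Transform a 2D point by an affine 3x3 matrix.
        // Swap the get() arguments if the convention turns out to be row-first.
        static PointF transform(Matrix3f m, PointF p) {
            float x = m.get(0, 0) * p.x + m.get(1, 0) * p.y + m.get(2, 0);
            float y = m.get(0, 1) * p.x + m.get(1, 1) * p.y + m.get(2, 1);
            return new PointF(x, y);
        }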

    Read the article

  • Using drawAtPoint with my CIImage not doing anything on screen

    - by Adam
    Stuck again. :( I have the following code crammed into a procedure invoked when I click on a button on my application's main window. I'm just trying to tweak a CIImage and then display the results. At this point I'm not even worried about exactly where / how to display it. I'm just trying to slam it up on the window to make sure my transform worked. This code seems to work down through the drawAtPoint message, but I never see anything on the screen. What's wrong?

    Also, as far as displaying it in a particular location on the window: is the best technique to put a frame of some sort on the window, then get the coordinates of that frame and "draw into" that rectangle? Or use a specific control from IB? Or what? Thanks again.

        // Earlier I initialize an NSImage from a JPG file on disk,
        // then create an NSBitmapImageRep from the NSImage. This all works fine.
        // Then ...
        CIImage *inputCIimage = [[CIImage alloc] initWithBitmapImageRep:inputBitmap];
        if (inputCIimage == Nil)
            NSLog(@"could not create CI Image");
        else {
            NSLog(@"CI Image created. working on transform");
            CIFilter *transform = [CIFilter filterWithName:@"CIAffineTransform"];
            [transform setDefaults];
            [transform setValue:inputCIimage forKey:@"inputImage"];
            NSAffineTransform *affineTransform = [NSAffineTransform transform];
            [affineTransform rotateByDegrees:3];
            [transform setValue:affineTransform forKey:@"inputTransform"];
            CIImage *myResult = [transform valueForKey:@"outputImage"];
            if (myResult == Nil)
                NSLog(@"Transformation failed");
            else {
                NSLog(@"Created transformation successfully ... now render it");
                [myResult drawAtPoint:NSMakePoint(0, 0)
                             fromRect:NSMakeRect(0, 0, 128, 128)
                            operation:NSCompositeSourceOver
                             fraction:1.0]; // 100% opaque
                [inputCIimage release];
            }
        }

    Read the article

  • Rotating a NetBeans Visual Library Widget

    - by Geertjan
    Trying to create a widget which, when clicked, rotates slightly further on each subsequent click. (In the screenshot that accompanied this post, the bird under the mouse has been clicked a few times and so has rotated a bit further on each click.) The code isn't quite right yet and I'm hoping someone will take this code, try it out, and help with a nice solution!

        public class BirdScene extends Scene {

            public BirdScene() {
                addChild(new LayerWidget(this));
                getActions().addAction(ActionFactory.createAcceptAction(new AcceptProvider() {
                    public ConnectorState isAcceptable(Widget widget, Point point, Transferable transferable) {
                        Image dragImage = getImageFromTransferable(transferable);
                        if (dragImage != null) {
                            JComponent view = getView();
                            Graphics2D g2 = (Graphics2D) view.getGraphics();
                            Rectangle visRect = view.getVisibleRect();
                            view.paintImmediately(visRect.x, visRect.y, visRect.width, visRect.height);
                            g2.drawImage(dragImage,
                                    AffineTransform.getTranslateInstance(point.getLocation().getX(),
                                    point.getLocation().getY()),
                                    null);
                            return ConnectorState.ACCEPT;
                        } else {
                            return ConnectorState.REJECT;
                        }
                    }
                    public void accept(Widget widget, final Point point, Transferable transferable) {
                        addChild(new BirdWidget(getScene(), getImageFromTransferable(transferable), point));
                    }
                }));
            }

            private Image getImageFromTransferable(Transferable transferable) {
                Object o = null;
                try {
                    o = transferable.getTransferData(DataFlavor.imageFlavor);
                } catch (IOException ex) {
                } catch (UnsupportedFlavorException ex) {
                }
                return o instanceof Image ? (Image) o : null;
            }

            private class BirdWidget extends IconNodeWidget {

                private int theta = 0;

                public BirdWidget(Scene scene, Image imageFromTransferable, Point point) {
                    super(scene);
                    setImage(imageFromTransferable);
                    setPreferredLocation(point);
                    setCheckClipping(true);
                    getActions().addAction(ActionFactory.createMoveAction());
                    getActions().addAction(ActionFactory.createSelectAction(new SelectProvider() {
                        public boolean isAimingAllowed(Widget widget, Point localLocation, boolean invertSelection) {
                            return true;
                        }
                        public boolean isSelectionAllowed(Widget widget, Point localLocation, boolean invertSelection) {
                            return true;
                        }
                        public void select(final Widget widget, Point localLocation, boolean invertSelection) {
                            theta = (theta + 100) % 360;
                            repaint();
                            getScene().validate();
                        }
                    }));
                }

                @Override
                public void paintWidget() {
                    final Image image = getImageWidget().getImage();
                    Graphics2D g = getGraphics();
                    g.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
                    Rectangle bounds = getClientArea();
                    AffineTransform newXform = g.getTransform();
                    int xRot = image.getWidth(null) / 2;
                    int yRot = image.getHeight(null) / 2; // the original used getWidth here, likely a typo
                    newXform.rotate(theta * Math.PI / 180, xRot, yRot);
                    g.setTransform(newXform);
                    g.drawImage(image, bounds.x, bounds.y, null);
                }
            }
        }

    The problem relates to refreshing the scene after the rotation. But it would help if someone would just take the code above, add it to their own application, try it out, see the problem for themselves, and develop it a bit further!
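
    One hedged observation, not a confirmed fix: paintWidget mutates the shared Graphics2D without restoring its transform, so every widget painted after a rotated bird inherits the rotation, which can look like a refresh problem. The usual pattern is to save and restore the transform around the custom painting:

        @Override
        public void paintWidget() {
            final Image image = getImageWidget().getImage();
            Graphics2D g = getGraphics();
            AffineTransform saved = g.getTransform(); // the scene's own transform
            try {
                g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                        RenderingHints.VALUE_ANTIALIAS_ON);
                Rectangle bounds = getClientArea();
                AffineTransform xform = new AffineTransform(saved);
                xform.rotate(theta * Math.PI / 180,
                        image.getWidth(null) / 2, image.getHeight(null) / 2);
                g.setTransform(xform);
                g.drawImage(image, bounds.x, bounds.y, null);
            } finally {
                g.setTransform(saved); // don't leak the rotation to other widgets
            }
        }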

    Read the article

  • Java paint speed relative to color model

    - by Jon
    I have a BufferedImage with an IndexColorModel. I need to paint that image onto the screen, but I've noticed that this is slow when using an IndexColorModel. However, if I run the BufferedImage through an identity affine transform, it creates an image with a DirectColorModel and the painting is significantly faster. Here's the code I'm using:

        AffineTransformOp identityOp = new AffineTransformOp(new AffineTransform(),
                AffineTransformOp.TYPE_BILINEAR);
        displayImage = identityOp.filter(displayImage, null);

    I have three questions:
    1. Why is painting slower with an IndexColorModel?
    2. Is there any way to speed up the painting of an IndexColorModel?
    3. If the answer to 2 is no, is this the most efficient way to convert from an IndexColorModel to a DirectColorModel? I've noticed that this conversion is dependent on the size of the image, and I'd like to remove that dependency.

    Thanks for the help
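
    On question 3, a hedged alternative not taken from the original thread: drawing the indexed image once into a direct-color image skips the filter machinery entirely. It is still inherently O(pixels), so the dependence on image size cannot really be removed, only its constant factor reduced. Here indexed is assumed to be the IndexColorModel-backed source image:

        // Convert an IndexColorModel image to a direct-color image in one pass.
        BufferedImage direct = new BufferedImage(indexed.getWidth(),
                indexed.getHeight(), BufferedImage.TYPE_INT_RGB);
        Graphics2D g = direct.createGraphics();
        g.drawImage(indexed, 0, 0, null); // the palette lookup happens here, once
        g.dispose();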

    Read the article

  • iPhone OS: Rotate just the button images, not the views

    - by Jongsma
    Hi, I am developing an iPad application which is basically a big drawing canvas with a couple of buttons at the side. (Doesn't sound very original, does it? :P) No matter how the user holds the device, the canvas should remain in place and should not be rotated. The simplest way to achieve this would be to support just one orientation. However, I would like the images on the buttons to rotate (like in the iPhone camera app) when the device is rotated. UIPopoverControllers should also use the user's current orientation (and not appear sideways). What is the best way to achieve this? (I figured I could rotate the canvas back into place with an affineTransform, but I don't think that is ideal.) Thanks in advance!

    Read the article

  • How can I rotate an image using Java/Swing and then set its origin to 0,0?

    - by JT
    I'm able to rotate an image that has been added to a JLabel. The only problem is that if the height and width are not equal, the rotated image will no longer appear at the JLabel's origin (0,0). Here's what I'm doing (I've also tried using AffineTransform and rotating the image itself, but with the same results):

        Graphics2D g2d = (Graphics2D) g;
        g2d.rotate(Math.toRadians(90), image.getWidth() / 2, image.getHeight() / 2);
        super.paintComponent(g2d);

    If I have an image whose width is greater than its height, rotating that image using this method and then painting it will result in the image being painted vertically above the point 0,0, and horizontally to the right of the point 0,0.
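
    A hedged sketch of one common fix, not taken from the original post: rotating about the image centre keeps the centre fixed but shifts the corners of a non-square image, so instead rotate about the origin and translate so the rotated bounding box starts at (0,0). For a 90 degree clockwise rotation:

        Graphics2D g2d = (Graphics2D) g;
        AffineTransform old = g2d.getTransform();
        // Shift right by the image height, then rotate about (0,0):
        // the rotated image now occupies (0,0) to (height, width).
        g2d.translate(image.getHeight(), 0);
        g2d.rotate(Math.toRadians(90));
        g2d.drawImage(image, 0, 0, null);
        g2d.setTransform(old); // restore for any further painting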

    Read the article

  • How to rotate an image properly in JPanel (Java)

    - by Bruce
    Hi guys, I'm working on rotating a loaded image. I set the graphics on a JPanel and then use a standard AffineTransform to rotate it, say, 45 degrees. Unfortunately, the image is cut off if it exceeds the panel area. How can I force the JPanel to add scrollbars to itself? (While loading an image file, I would like to adjust the size of the JPanel by adding the scrollbars, without adjusting the size of the JFrame.) Or, in other words, how do I correctly rotate the whole image? Thank you in advance for the reply!
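
    A hedged sketch of one approach, which assumes the panel is already wrapped in a JScrollPane and image is the loaded BufferedImage: report the rotated image's axis-aligned bounding box as the preferred size and revalidate, so the scroll pane shows scrollbars instead of clipping.

        // Size the panel to the rotated image's bounding box.
        double theta = Math.toRadians(45);
        int w = image.getWidth(), h = image.getHeight();
        int newW = (int) Math.ceil(w * Math.abs(Math.cos(theta))
                + h * Math.abs(Math.sin(theta)));
        int newH = (int) Math.ceil(w * Math.abs(Math.sin(theta))
                + h * Math.abs(Math.cos(theta)));
        setPreferredSize(new Dimension(newW, newH));
        revalidate(); // prompts the enclosing JScrollPane to re-examine the size
        repaint();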

    Read the article

  • Rotate a Swing JLabel

    - by Johannes Rössel
    I am currently trying to implement a Swing component, inheriting from JLabel, which should simply represent a label that can be oriented vertically. Beginning with this:

        public class RotatedLabel extends JLabel {

            public enum Direction {
                HORIZONTAL, VERTICAL_UP, VERTICAL_DOWN
            }

            private Direction direction;

    I thought it'd be a nice idea to just alter the results from getPreferredSize():

            @Override
            public Dimension getPreferredSize() {
                // swap size for vertical alignments
                switch (getDirection()) {
                    case VERTICAL_UP:
                    case VERTICAL_DOWN:
                        return new Dimension(super.getPreferredSize().height,
                                super.getPreferredSize().width);
                    default:
                        return super.getPreferredSize();
                }
            }

    and then simply transform the Graphics object before I offload painting to the original JLabel:

            @Override
            protected void paintComponent(Graphics g) {
                Graphics2D gr = (Graphics2D) g.create();
                switch (getDirection()) {
                    case VERTICAL_UP:
                        gr.translate(0, getPreferredSize().getHeight());
                        gr.transform(AffineTransform.getQuadrantRotateInstance(-1));
                        break;
                    case VERTICAL_DOWN:
                        // TODO
                        break;
                    default:
                }
                super.paintComponent(gr);
            }
        }

    It seems to work, somehow, in that the text is now displayed vertically. However, placement and size are off: the width of the background (orange in this case) is identical to the height of the surrounding JFrame, which is ... not quite what I had in mind. Any ideas how to solve that in a proper way? Is delegating rendering to superclasses even encouraged?
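
    A hedged guess at the size mismatch, not a confirmed fix: the translation uses getPreferredSize(), but after layout the component's actual bounds can be anything the layout manager chooses, so both the rotated text and the painted background are measured against the wrong box. Translating by the live height is one small step toward correctness:

        case VERTICAL_UP:
            // Use the component's real height: after layout the actual size
            // can differ from the preferred size, which throws the origin off.
            gr.translate(0, getHeight());
            gr.transform(AffineTransform.getQuadrantRotateInstance(-1));
            break;

    The background would still fill the unrotated bounds, because super.paintComponent believes the component is width x height; a complete fix usually also swaps the dimensions the superclass sees while it paints.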

    Read the article

  • Using java2d user space measurements with TextLayout and LineBreakMeasurer

    - by Andrew Wheeler
    I have a java2d image defined in user space (mm) to print an identity card. The transformation to pixels is done with an AffineTransform for the required DPI (screen or print). I want to wrap text across several lines, but the TextLayout does not respect user space co-ordinates.

        private void drawParagraph(Graphics2D g2d, Rectangle2D area, String text) {
            LineBreakMeasurer lineMeasurer;
            AttributedString string = new AttributedString(text);
            AttributedCharacterIterator paragraph = string.getIterator();
            int paragraphStart = paragraph.getBeginIndex();
            int paragraphEnd = paragraph.getEndIndex();
            FontRenderContext frc = g2d.getFontRenderContext();
            lineMeasurer = new LineBreakMeasurer(paragraph, frc);

            float breakWidth = (float) area.getWidth();
            float drawPosY = (float) area.getY();
            float drawPosX = (float) area.getX();

            lineMeasurer.setPosition(paragraphStart);
            while (lineMeasurer.getPosition() < paragraphEnd) {
                TextLayout layout = lineMeasurer.nextLayout(breakWidth);
                drawPosY += layout.getAscent();
                layout.draw(g2d, drawPosX, drawPosY);
                drawPosY += layout.getDescent() + layout.getLeading();
            }
        }

    The above code determines font metrics using the user-space size of the Font, so the text turns out rather large. The font size is calculated as the best vertical fit for the number of lines in an area, with the calculation as below:

        attr.put(TextAttribute.SIZE, (getTextArea().getHeight() / noOfLines - LINE_SPACING));

    When using g2d.drawString("Some text to display", x, y); the font size appears correct. Does anyone have a better way of doing text layout in user space co-ords?
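
    One hedged observation from the snippets above: drawParagraph builds its AttributedString without any attributes, so the size computed into attr never reaches the LineBreakMeasurer, and the default 12-point font is used, which in a millimetre user space reads as roughly 12 mm tall glyphs. A sketch of wiring the user-space size through (the attribute map here stands in for the attr of the question; the font family is an assumption):

        // Build the attributed text with an explicit user-space font size.
        Map<TextAttribute, Object> attr = new HashMap<>();
        attr.put(TextAttribute.FAMILY, "SansSerif");
        attr.put(TextAttribute.SIZE,
                (float) (area.getHeight() / noOfLines - LINE_SPACING));
        AttributedString string = new AttributedString(text, attr);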

    Read the article

  • Pythagoras tree with g2d

    - by owca
    I'm trying to build my first fractal (a Pythagoras tree) in Java using Graphics2D. Here's what I have now:

        import java.awt.*;
        import java.awt.geom.*;
        import javax.swing.*;
        import java.util.Scanner;

        public class Main {
            public static void main(String[] args) {
                int i = 0;
                Scanner scanner = new Scanner(System.in);
                System.out.println("Give amount of steps: ");
                i = scanner.nextInt();
                new Pitagoras(i);
            }
        }

        class Pitagoras extends JFrame {

            private int powt, counter;

            public Pitagoras(int i) {
                super("Pythagoras Tree.");
                setSize(1000, 1000);
                setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                setVisible(true);
                powt = i;
            }

            private void paintIt(Graphics2D g) {
                double p1 = 450, p2 = 800, size = 200;
                for (int i = 0; i < powt; i++) {
                    if (i == 0) {
                        g.drawRect((int) p1, (int) p2, (int) size, (int) size);
                        counter++;
                    } else {
                        if (i % 2 == 0) {
                            // here I must draw two squares
                        } else {
                            // here I must draw a right triangle
                        }
                    }
                }
            }

            @Override
            public void paint(Graphics graph) {
                Graphics2D g = (Graphics2D) graph;
                paintIt(g);
            }
        }

    So basically I set the number of steps and then draw the first square (p1, p2 and size). Then, if the step is odd, I need to build a right triangle on top of the square. If the step is even, I need to build two squares on the free sides of the triangle. What method should I choose now for drawing both the triangle and the squares? I was thinking about drawing the triangle with simple lines and transforming them with AffineTransform, but I'm not sure if it's doable, and it doesn't solve drawing the squares.
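
    A hedged sketch of one way to attack the drawing, not from the original thread: instead of alternating square/triangle steps, keep a per-branch AffineTransform and recurse; each square is a Rectangle2D pushed through createTransformedShape, and the "triangle" emerges as the gap between the two child squares. The 45 degree split and the starting coordinates are assumptions chosen to match the code above:

        // Recursive Pythagoras tree: each square spawns two children scaled
        // by cos/sin of the branch angle, seated on its top edge.
        void drawTree(Graphics2D g, AffineTransform t, double size, int depth) {
            if (depth == 0) return;
            g.draw(t.createTransformedShape(
                    new Rectangle2D.Double(0, -size, size, size)));

            double angle = Math.toRadians(45);
            double leftSize = size * Math.cos(angle);
            double rightSize = size * Math.sin(angle);

            AffineTransform left = new AffineTransform(t);
            left.translate(0, -size); // top-left corner of this square
            left.rotate(-angle);
            drawTree(g, left, leftSize, depth - 1);

            AffineTransform right = new AffineTransform(t);
            right.translate(0, -size);
            right.rotate(-angle);
            right.translate(leftSize, 0); // apex of the right triangle
            right.rotate(Math.PI / 2);
            drawTree(g, right, rightSize, depth - 1);
        }

    Called as drawTree(g, AffineTransform.getTranslateInstance(450, 1000), 200, powt), this grows the tree upwards from the base square; doing the painting in a JPanel's paintComponent rather than JFrame.paint also avoids drawing under the frame decorations.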

    Read the article

  • Analyzing bitmaps produced by NSAffineTransform and CILineOverlay filters

    - by Adam
    I am trying to manipulate an image using a chain of CIFilters and then examine each byte of the resulting image (bitmap). Long term, I do not need to display the resulting image (bitmap); I just need to "analyze" it in memory. But near-term I am displaying it on screen to help with debugging.

    I have some "bitmap examination" code that works as expected when examining the NSImage (bitmap representation) I use as my input (loaded from a JPG file into an NSImage). And it SOMETIMES works as expected when I use it on the outputBitmap produced by the code below. More specifically, when I use an NSAffineTransform filter to create outputBitmap, then outputBitmap contains the data I would expect. But if I use a CILineOverlay filter to create the outputBitmap, none of the bytes in the bitmap have any data in them. I believe both of these filters are working as expected, because when I display their results on screen (via outputImageView), they look "correct". Yet when I examine the outputBitmaps, the one created from the CILineOverlay filter is "empty" while the one created from NSAffineTransform contains data. Furthermore, if I chain the two filters together, the final resulting bitmap only seems to contain data if I run the AffineTransform last. Seems very strange, to me???

    My understanding (from reading the CI programming guide) is that the CIImage should be considered an "image recipe" rather than an actual image, because the image isn't actually created until the image is "drawn". Given that, it would make sense that the CIImage bitmap doesn't have data -- but I don't understand why it has data after I run the NSAffineTransform but doesn't have data after running the CILineOverlay transform?

    Basically, I am trying to determine if creating the NSCIImageRep (ir in the code below) from the CIImage (myResult) is equivalent to "drawing" the CIImage -- in other words, if that should force the bitmap to be populated. If someone knows the answer to this please let me know -- it will save me a few hours of trial and error experimenting!

    Finally, if the answer is "you must draw to a graphics context" ... then I have another question: would I need to do something along the lines of what is described in the Quartz 2D Programming Guide: Graphics Contexts, listings 2-7 and 2-8: drawing to a bitmap graphics context? That is the path down which I am about to head ... but it seems like a lot of code just to force the bitmap data to be dumped into an array where I can get at it. So if there is an easier or better way, please let me know. I just want to take the data (that should be) in myResult and put it into a bitmap array where I can access it at the byte level. And since I already have code that works with an NSBitmapImageRep, unless doing it that way is a bad idea for some reason that is not readily apparent to me, I would prefer to "convert" myResult into an NSBitmapImageRep.

        CIImage *myResult = [transform valueForKey:@"outputImage"];
        NSImage *outputImage;
        // imageRepWithCIImage: returns an (autoreleased) instance by itself;
        // no separate alloc is needed.
        NSCIImageRep *ir = [NSCIImageRep imageRepWithCIImage:myResult];
        outputImage = [[[NSImage alloc] initWithSize:
                NSMakeSize(inputImage.size.width, inputImage.size.height)]
                autorelease];
        [outputImage addRepresentation:ir];
        [outputImageView setImage:outputImage];
        NSBitmapImageRep *outputBitmap =
                [[NSBitmapImageRep alloc] initWithCIImage:myResult];

    Thanks, Adam

    Read the article

  • Wrong background colors in Swing ListCellRenderer

    - by Johannes Rössel
    I'm currently trying to write a custom ListCellRenderer for a JList. Unfortunately, nearly all examples simply use DefaultListCellRenderer as a JLabel and are done with it; I needed a JPanel, however (since I need to display a little more info than just an icon and one line of text). Now I have a problem with the background colors, specifically with the Nimbus PLAF. Seemingly the background color I get from list.getBackground() is white, but it paints as a shade of gray (or blueish gray). Outputting the color I get yields the following:

        Background color: DerivedColor(color=255,255,255 parent=nimbusLightBackground offsets=0.0,0.0,0.0,0 pColor=255,255,255

    However, as can be seen, this isn't what gets painted. It obviously works fine for the selected item. Currently I even have every component I put into the JPanel the cell renderer returns set to opaque and with the correct foreground and background colors, to no avail. Any ideas what I'm doing wrong here?

    ETA: Example code which hopefully runs.

        public class ParameterListCellRenderer implements ListCellRenderer {

            @Override
            public Component getListCellRendererComponent(JList list, Object value,
                    int index, boolean isSelected, boolean cellHasFocus) {
                // some values we need
                Border border = null;
                Color foreground, background;
                if (isSelected) {
                    background = list.getSelectionBackground();
                    foreground = list.getSelectionForeground();
                } else {
                    background = list.getBackground();
                    foreground = list.getForeground();
                }
                if (cellHasFocus) {
                    if (isSelected) {
                        border = UIManager.getBorder("List.focusSelectedCellHighlightBorder");
                    }
                    if (border == null) {
                        border = UIManager.getBorder("List.focusCellHighlightBorder");
                    }
                } else {
                    border = UIManager.getBorder("List.cellNoFocusBorder");
                }

                System.out.println("Background color: " + background.toString());

                JPanel outerPanel = new JPanel(new BorderLayout());
                setProperties(outerPanel, foreground, background);
                outerPanel.setBorder(border);

                JLabel nameLabel = new JLabel("Factory name here");
                setProperties(nameLabel, foreground, background);
                outerPanel.add(nameLabel, BorderLayout.PAGE_START);

                Box innerPanel = new Box(BoxLayout.PAGE_AXIS);
                setProperties(innerPanel, foreground, background);
                innerPanel.setAlignmentX(Box.LEFT_ALIGNMENT);
                innerPanel.setBorder(BorderFactory.createEmptyBorder(0, 10, 0, 0));

                JLabel label = new JLabel("param: value");
                label.setFont(label.getFont().deriveFont(
                        AffineTransform.getScaleInstance(0.95, 0.95)));
                setProperties(label, foreground, background);
                innerPanel.add(label);

                outerPanel.add(innerPanel, BorderLayout.CENTER);
                return outerPanel;
            }

            private void setProperties(JComponent component, Color foreground,
                    Color background) {
                component.setOpaque(true);
                component.setForeground(foreground);
                component.setBackground(background);
            }
        }

    The weird thing is, if I do

        if (isSelected) {
            background = new Color(list.getSelectionBackground().getRGB());
            foreground = new Color(list.getSelectionForeground().getRGB());
        } else {
            background = new Color(list.getBackground().getRGB());
            foreground = new Color(list.getForeground().getRGB());
        }

    it magically works. So maybe the DerivedColor with nimbusLightBackground I'm getting there may have trouble?
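
    A hedged note on why the RGB copy works (an interpretation, not from the thread): Nimbus hands out DerivedColor instances that are also UIResource-flavored, and Nimbus component UIs may substitute their own defaults when they see a UIResource background; wrapping the value in a plain java.awt.Color hides the UIResource marker, so the color that was set is honored. A tiny helper capturing the workaround, which also keeps the alpha channel that the getRGB() copy above drops:

        // Strip the Nimbus DerivedColor/UIResource wrapper from a color.
        private static Color plain(Color c) {
            return new Color(c.getRGB(), true); // true keeps the alpha byte
        }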

    Read the article
