Search Results

Search found 6679 results on 268 pages for 'cube processing'.

Page 39/268 | < Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >

  • Why do I get completely different results when saving a BitmapSource to bmp, jpeg, and png in WPF

    - by DanM
    I wrote a little utility class that saves BitmapSource objects to image files. The image files can be either bmp, jpeg, or png. Here is the code: public class BitmapProcessor { public void SaveAsBmp(BitmapSource bitmapSource, string path) { Save(bitmapSource, path, new BmpBitmapEncoder()); } public void SaveAsJpg(BitmapSource bitmapSource, string path) { Save(bitmapSource, path, new JpegBitmapEncoder()); } public void SaveAsPng(BitmapSource bitmapSource, string path) { Save(bitmapSource, path, new PngBitmapEncoder()); } private void Save(BitmapSource bitmapSource, string path, BitmapEncoder encoder) { using (var stream = new FileStream(path, FileMode.Create)) { encoder.Frames.Add(BitmapFrame.Create(bitmapSource)); encoder.Save(stream); } } } Each of the three Save methods works, but I get unexpected results with bmp and jpeg. Png is the only format that produces an exact reproduction of what I see if I show the BitmapSource on screen using a WPF Image control. Here are the results: BMP - too dark; JPEG - too saturated; PNG - correct. Why am I getting completely different results for different file types? I should note that the BitmapSource in my example uses an alpha value of 0.1 (which is why it appears very desaturated), but it should be possible to show the resulting colors in any image format. I know that if I take a screen capture using something like HyperSnap, it will look correct regardless of what file type I save to; a HyperSnap screen capture saved as a bmp shows no such problem, so there's definitely something strange about WPF's image encoders. Do I have a setting wrong? Am I missing something?
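
    A plausible explanation is that bmp and jpeg cannot store an alpha channel, so each encoder flattens (or drops) the 0.1 alpha differently. The sketch below uses Python with Pillow rather than WPF (purely an illustration of the idea, not the WPF fix itself): flatten the alpha onto an explicit background before encoding, so all three formats see the same pixels. File names here are placeholders.

    from PIL import Image

    # Hypothetical input: a semi-transparent RGBA image (the rendered source)
    img = Image.open("source_rgba.png").convert("RGBA")

    # Composite onto the background you actually want (white here) before encoding,
    # so formats without alpha (bmp, jpeg) see the same flattened pixels as png.
    background = Image.new("RGBA", img.size, (255, 255, 255, 255))
    flat = Image.alpha_composite(background, img).convert("RGB")

    flat.save("out.bmp")
    flat.save("out.jpg", quality=95)
    flat.save("out.png")

    The equivalent step in WPF would be compositing the BitmapSource onto an opaque background before handing it to the encoders.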

    Read the article

  • OpenCV: Shift/Align face image relative to reference Image (Image Registration)

    - by Abhischek
    I am new to OpenCV2 and working on a project in emotion recognition and would like to align a facial image in relation to a reference facial image. I would like to get the image translation working before moving on to rotation. The current idea is to run a search within a limited range on both x and y coordinates and use the sum of squared differences as the error metric to select the optimal x/y parameters to align the image. I'm using the OpenCV face_cascade function to detect the faces, and all images are resized to a fixed size (128x128). Question: Which parameters of the Mat image do I need to modify to shift the image in a positive/negative direction on both the x and y axes? I believe setImageROI is no longer supported by Mat datatypes? I have the ROIs for both faces available, but I am unsure how to use them. void alignImage(vector<Rect> faceROIstore, vector<Mat> faceIMGstore) { Mat refimg = faceIMGstore[1]; //reference image Mat dispimg = faceIMGstore[52]; // "displaced" version of reference image //Rect refROI = faceROIstore[1]; //Bounding box for face in reference image //Rect dispROI = faceROIstore[52]; //Bounding box for face in displaced image Mat aligned; matchTemplate(dispimg, refimg, aligned, CV_TM_SQDIFF_NORMED); imshow("Aligned image", aligned); } The idea for this approach is based on the Image Alignment tutorial by Richard Szeliski. I am working on Windows with OpenCV 2.4. Any suggestions are much appreciated.
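
    For what it is worth, a translation is normally applied with a 2x3 affine matrix rather than by editing fields of the Mat. A minimal sketch of the x/y search plus shift, written in Python for brevity (the same calls exist in the C++ API); the 16-pixel search range and uint8 grayscale crops are assumptions:

    import cv2
    import numpy as np

    def align_by_translation(ref, disp, max_shift=16):
        # ref, disp: equally sized grayscale face crops (e.g. 128x128 uint8 arrays)
        # Use the centre of the reference as a template and search for it in the
        # displaced image; TM_SQDIFF_NORMED is the normalised sum of squared differences.
        templ = ref[max_shift:-max_shift, max_shift:-max_shift]
        res = cv2.matchTemplate(disp, templ, cv2.TM_SQDIFF_NORMED)
        _, _, min_loc, _ = cv2.minMaxLoc(res)        # best match = minimum for SQDIFF
        dx = min_loc[0] - max_shift                  # how far disp is displaced in x
        dy = min_loc[1] - max_shift                  # and in y
        # Shifting is done with a 2x3 translation matrix and warpAffine:
        M = np.float32([[1, 0, -dx], [0, 1, -dy]])
        h, w = disp.shape[:2]
        return cv2.warpAffine(disp, M, (w, h))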

    Read the article

  • Calculating length of objects in binary image - algorithm

    - by Jacek
    I need to calculate the length of each object in a binary image (the maximum distance between the pixels inside the object). As it is a binary image, we might consider it a 2D array with values 0 (white) and 1 (black). The thing I need is a clever (and preferably simple) algorithm to perform this operation. Keep in mind there are many objects in the image.
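
    One simple approach, sketched below in Python (an illustration, not the only way): label the connected components, then for each object take the maximum pairwise distance between its pixels. Restricting the pairs to the convex hull keeps the last step cheap, since the diameter of a point set is always attained on its hull.

    import numpy as np
    from scipy import ndimage
    from scipy.spatial import ConvexHull
    from scipy.spatial.distance import pdist

    def object_lengths(binary):
        # binary: 2D array with 1 for object pixels, 0 for background (assumption)
        labels, n = ndimage.label(binary)
        lengths = []
        for k in range(1, n + 1):
            pts = np.argwhere(labels == k).astype(float)   # pixel coordinates of object k
            if len(pts) < 2:
                lengths.append(0.0)
                continue
            try:
                pts = pts[ConvexHull(pts).vertices]        # diameter lies on the hull
            except Exception:                              # degenerate (e.g. collinear) object
                pass
            lengths.append(float(pdist(pts).max()))        # max pairwise distance
        return lengths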

    Read the article

  • TMS320C64x Quick start reference for programmers

    - by osgx
    Hello. Is there any quick-start guide for programmers on writing DSP-accelerated applications for the TMS320C64x? I have a program with a custom algorithm (not FFT or the usual filtering) and I want to accelerate it using the multi-DSP coprocessor. So, how should I modify the source to move computation from the main CPU to the DSPs? What limitations are there for DSP-running code? I have some experience with CUDA. In CUDA I should mark every function as being host, device, or an entry point for the device (a kernel). There are also functions to start kernels and to upload/download data to/from the GPU, as well as some limitations for device code, described in the CUDA Reference Manual. I hope there is a similar interface and documentation for the DSP.

    Read the article

  • How to use ImageMagick to batch desaturate images?

    - by Zando
    I have an image that is black text with some gray and pale yellowish background. I basically want to keep the text as black as possible, and make the gray and yellow comparatively lighter...at the very least, turn yellow into a light gray. What's the most efficient way to do that in ImageMagick?
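
    One possible ImageMagick recipe is to drop the colour information and then push everything above a chosen brightness towards white, so the black text stays black while the grey/yellow background fades. A batch sketch driving the convert command from Python; the folder name and the 60% cut-off are assumptions to tune.

    import glob
    import subprocess

    for path in glob.glob("scans/*.jpg"):                   # hypothetical input folder
        out = path.rsplit(".", 1)[0] + "_clean.png"
        subprocess.run(["convert", path,
                        "-colorspace", "Gray",              # desaturate
                        "-level", "0%,60%",                 # >= 60% brightness becomes white
                        out], check=True)

    (ImageMagick's mogrify accepts the same options and can rewrite a whole directory in place.)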

    Read the article

  • C# manipulating video

    - by grey88
    I want to take a folder of pictures and turn it into a slideshow video with music in the background. I have no idea how to do this or where to get help, because this isn't the kind of thing you can easily search for on Google. I don't know if there are APIs for it, or if it can even be done in C#. Maybe I'll have to move the project to C++ or something, but first I need to know where to start. Thanks.
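
    One practical starting point is to not encode the video yourself at all but to drive ffmpeg as an external process; the sketch below uses Python only for brevity, and the same command line could be launched from C# with Process.Start. It assumes the pictures are numbered img001.jpg, img002.jpg, ... and that ffmpeg is on the PATH.

    import subprocess

    subprocess.run([
        "ffmpeg",
        "-framerate", "1/3",          # one input picture every 3 seconds
        "-i", "img%03d.jpg",          # numbered slideshow frames (assumption)
        "-i", "music.mp3",            # background music track (assumption)
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",        # widest player compatibility
        "-shortest",                  # stop when the shorter stream ends
        "slideshow.mp4",
    ], check=True)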

    Read the article

  • Copy & rename file using .bat file language?

    - by flyout
    I want to make a .bat to copy & rename a file multiple times. I want to have a list of names and an original file; then I want to copy that file and rename it for each name on the list. How can I do this using a .bat file? Also, is it possible to run WinRAR from the .bat to .rar or .zip every file after copying/renaming?
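
    In case a .bat file is not a hard requirement, here is the same workflow sketched in Python, with the standard zipfile module standing in for WinRAR; the file names (original.txt, names.txt) are placeholders.

    import shutil
    import zipfile

    with open("names.txt") as f:                       # one target name per line
        names = [line.strip() for line in f if line.strip()]

    for name in names:
        copy_path = name + ".txt"
        shutil.copyfile("original.txt", copy_path)     # copy + rename in one step
        with zipfile.ZipFile(name + ".zip", "w", zipfile.ZIP_DEFLATED) as z:
            z.write(copy_path)                         # zip the freshly made copy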

    Read the article

  • Biometric implementation in a Java application and Image Comparison

    - by harigm
    How do I compare two images in a Java-based web application? I have installed a biometric thumb reader, and I need to read the user's thumb and compare it with the thumb image captured during the registration process. Initially I am storing the image in MySQL as a Blob, though I could store the image in a separate folder as well. Please suggest which is the best way to do this: 1) Shall I use JavaScript? 2) Is there any built-in Java API to perform this?

    Read the article

  • Is using the .NET Image Conversion enough?

    - by contactmatt
    I've seen a lot of people try to code their own image conversion techniques. It often seems to be very complicated, and ends up using GDI+ function calls, and manipulating bits of the image. This has got me wondering if I am missing something in the simplicity of .NET's image conversion call when saving an image. Here's the code I have: Bitmap tempBmp = new Bitmap("c:\temp\img.jpg"); Bitmap bmp = new Bitmap(tempBmp, 800, 600); bmp.Save(c:\temp\img.bmp, //extension depends on format ImageFormat.Bmp) //These are all the ImageFormats I allow conversion to within the program. Ignore the syntax for a second ;) ImageFormat.Gif) //or ImageFormat.Jpeg) //or ImageFormat.Png) //or ImageFormat.Tiff) //or ImageFormat.Wmf) //or ImageFormat.Bmp)//or ); This is all I'm doing in my image conversion. Just setting the location of where the image should be saved, and passing it an ImageFormat type. I've tested it the best I can, but I'm wondering if I am missing anything in this simple format conversion, or if this is sufficient?

    Read the article

  • perl multiple tasks problem

    - by Alice Wozownik
    I have finished my earlier multithreaded program that uses Perl threads, and it works on my system. The problem is that on some systems it needs to run on, thread support is not compiled into Perl and I cannot install additional packages. I therefore need to use something other than threads, and I am moving my code to using fork(). This works on my Windows system for starting the subtasks. A few problems: How do I determine when a child process exits? I created new threads when the thread count was below a certain value, so I need to keep track of how many threads are running. For processes, how do I know when one exits, so I can keep track of how many exist at the time, incrementing a counter when one is created and decrementing when one exits? Is file I/O using handles obtained with open() by the parent process safe in the child process? I need to append to a file from each of the child processes; is this safe on Unix as well? Is there any alternative to fork and threads? I tried Parallel::ForkManager, but that isn't installed on my system (use Parallel::ForkManager; gave an error) and I absolutely require that my Perl script work on all Unix/Windows systems without installing any additional modules.

    Read the article

  • convert RGB values to equivalent HSV values using python

    - by sree01
    Hi, I want to convert RGB values to HSV using Python. I found some code samples, but they gave results with S and V values greater than 100 (example: http://code.activestate.com/recipes/576554-covert-color-space-from-hsv-to-rgb-and-rgb-to-hsv/ ). Does anybody have better code that converts RGB to HSV and vice versa? Thanks
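
    The standard library's colorsys module already does this; it works in the 0..1 range, which is likely why other snippets appeared to return S and V above 100 - they were just scaled differently. A small wrapper that takes 0..255 RGB and returns H in degrees and S, V as percentages:

    import colorsys

    def rgb_to_hsv(r, g, b):
        # r, g, b in 0..255  ->  h in 0..360, s and v in 0..100
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        return h * 360.0, s * 100.0, v * 100.0

    def hsv_to_rgb(h, s, v):
        # h in 0..360, s and v in 0..100  ->  r, g, b in 0..255
        r, g, b = colorsys.hsv_to_rgb(h / 360.0, s / 100.0, v / 100.0)
        return round(r * 255), round(g * 255), round(b * 255)

    print(rgb_to_hsv(255, 0, 0))      # (0.0, 100.0, 100.0)
    print(hsv_to_rgb(240, 100, 100))  # (0, 0, 255)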

    Read the article

  • selecting the pixels from an image in opencv

    - by ajith
    Hi everyone, this is a refined version of my previous question http://stackoverflow.com/questions/2602628/computing-matting-laplacian-matrix-of-an-image . What I actually want to compute is the summation over all k such that (i,j) ∈ w_k of (I_i - µ_k)*(I_j - µ_k), where w_k is a 3x3 window and µ_k is the mean of w_k. Here I don't know how to select I_i and I_j separately from an image, which is two-dimensional [I_ij]. Or does the equation mean something else? Please, can someone help me?
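
    Reading the formula as written above: within one 3x3 window w_k, i and j both index pixels of that window, so I_i and I_j are simply two (possibly equal) intensities taken from the same nine pixels. A numpy sketch for a single window centred at (r, c), assuming a float grayscale image and that (r, c) is not on the image border:

    import numpy as np

    def window_term(img, r, c):
        # the nine intensities I_i for pixels i in the 3x3 window w_k centred at (r, c)
        w = img[r - 1:r + 2, c - 1:c + 2].astype(float).ravel()
        mu = w.mean()                          # µ_k, the window mean
        d = w - mu
        # entry [i, j] of the result is (I_i - µ_k) * (I_j - µ_k) for the pixel pair (i, j)
        return np.outer(d, d)

    The full entry for an image pixel pair (i, j) then sums this window term over every window w_k that contains both pixels, exactly as the summation states.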

    Read the article

  • How to write PIL image filter for plain pgm format?

    - by Juha
    How can I write a filter for the Python Imaging Library for the PGM plain ASCII format (P2)? The problem here is that the basic PIL filter assumes a constant number of bytes per pixel. My goal is to open feep.pgm with Image.open(). See http://netpbm.sourceforge.net/doc/pgm.html or below. An alternative solution would be to find another well-documented ASCII grayscale format that is supported by PIL and all major graphics programs. Any suggestions? br, Juha feep.pgm: P2 # feep.pgm 24 7 15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 3 3 3 0 0 7 7 7 7 0 0 11 11 11 11 0 0 15 15 15 15 0 0 3 0 0 0 0 0 7 0 0 0 0 0 11 0 0 0 0 0 15 0 0 15 0 0 3 3 3 0 0 0 7 7 7 0 0 0 11 11 11 0 0 0 15 15 15 15 0 0 3 0 0 0 0 0 7 0 0 0 0 0 11 0 0 0 0 0 15 0 0 0 0 0 3 0 0 0 0 0 7 7 7 7 0 0 11 11 11 11 0 0 15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 edit: Thanks for the answer, it works... but I need a solution that uses Image.open(). Most Python programs out there use PIL for graphics manipulation (google: python image open). Thus, I need to be able to register a filter with PIL. Then I can use any software that uses PIL - I am mostly thinking of scipy-, pylab-, etc. dependent programs.
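
    This does not hook Image.open() (that needs a registered PIL plugin), but as a stopgap the P2 format is simple enough to parse by hand and wrap in a PIL image; a sketch assuming numpy is available:

    import numpy as np
    from PIL import Image

    def open_plain_pgm(path):
        # minimal P2 (plain ASCII PGM) reader; returns a PIL "L" mode image
        with open(path) as f:
            tokens = []
            for line in f:
                tokens.extend(line.split("#", 1)[0].split())   # drop comments
        assert tokens[0] == "P2", "not a plain PGM file"
        w, h, maxval = int(tokens[1]), int(tokens[2]), int(tokens[3])
        pixels = np.array(tokens[4:4 + w * h], dtype=float)
        pixels = (pixels * 255.0 / maxval).astype(np.uint8).reshape(h, w)
        return Image.fromarray(pixels, mode="L")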

    Read the article

  • How would I write the following applescript in Obj-C AppScript? ASTranslate was of no help =(

    - by demonslayer319
    The translation tool isn't able to translate this working code. I copied it out of a working script. set pathToTemp to (POSIX path of ((path to desktop) as string)) -- change jpg to pict tell application "Image Events" try launch set albumArt to open file (pathToTemp & "albumart.jpg") save albumArt as PICT in file (pathToTemp & "albumart.pict") --the first 512 bytes are the PICT header, so it reads from byte 513 --this is to allow the image to be added to an iTunes track later. set albumArt to (read file (pathToTemp & "albumart.pict") from 513 as picture) close end try end tell The code is taking a jpg image, converting it to a PICT file, and then reading the file minus the header (the first 512 bytes). Later in the script, albumArt will be added to an iTunes track. I tried translating the code (minus the comments), but ASTranslate froze for a good 2 minutes before giving me this: Untranslated event 'earsffdr' #import "IEGlue/IEGlue.h" IEApplication *imageEvents = [IEApplication applicationWithName: @"Image Events"]; IELaunchCommand *cmd = [[imageEvents launch] ignoreReply]; id result = [cmd send]; #import "IEGlue/IEGlue.h" IEApplication *imageEvents = [IEApplication applicationWithName: @"Image Events"]; IEReference *ref = [[imageEvents files] byName: @"/Users/Doom/Desktop/albumart.jpg"]; id result = [[ref open] send]; #import "IEGlue/IEGlue.h" IEApplication *imageEvents = [IEApplication applicationWithName: @"Image Events"]; IEReference *ref = [[imageEvents images] byName: @"albumart.jpg"]; IESaveCommand *cmd = [[[ref save] in: [[imageEvents files] byName: @"/Users/Doom/Desktop/albumart.pict"]] as: [IEConstant PICT]]; id result = [cmd send]; 'crdwrread' Traceback (most recent call last): File "objcrenderer.pyc", line 283, in renderCommand KeyError: 'crdwrread' 'cascrgdut' Traceback (most recent call last): File "objcrenderer.pyc", line 283, in renderCommand KeyError: 'cascrgdut' 'crdwrread' Traceback (most recent call last): File "objcrenderer.pyc", line 283, in renderCommand KeyError: 'crdwrread' Untranslated event 'rdwrread' OK I have no clue how to make sense of this. Thanks for any and all help!

    Read the article

  • How to overwrite i-frames of a video?

    - by Dominik
    I want to destroy all i-frames of a video. Doing this I want to check if encrypting only the i-frames of a video would be sufficient for making it unwatchable. How can I do this? Only removing them and recompressing the video would not be the same as really overwriting the i-frame in the stream without recalculating b-frames etc.

    Read the article

  • Stripe suppression algorithm needed

    - by maximus
    I have images with text. There are dark stripes in the image that still exist in the binary image too, and they leave characters connected to the stripe - it can be vertical or horizontal (or at some angle). I need to remove the stripes from the image first, and then binarize. I've seen the bandpass filter in the ImageJ program, which has options like "suppress horizontal stripes"; it works well, but it also applies bandpass filtering. So, any idea how to do it? I think it should be done in the frequency domain.
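
    A sketch of the frequency-domain idea in Python/numpy (the band widths are guesses to tune): horizontal stripes put almost all of their energy in a narrow vertical band of the shifted spectrum where the horizontal frequency is about zero, so zeroing that band while keeping the lowest frequencies around DC suppresses the stripes without a full bandpass; for stripes at another angle, the band is rotated accordingly.

    import numpy as np

    def suppress_horizontal_stripes(gray, band_half_width=2, keep_low=4):
        # gray: 2D float array (the grayscale image, before binarisation)
        F = np.fft.fftshift(np.fft.fft2(gray))
        h, w = gray.shape
        cy, cx = h // 2, w // 2
        mask = np.ones((h, w))
        # kill the vertical band through the centre (horizontal frequency ~ 0) ...
        mask[:, cx - band_half_width:cx + band_half_width + 1] = 0.0
        # ... but keep DC and the lowest frequencies so overall brightness survives
        mask[cy - keep_low:cy + keep_low + 1, cx - band_half_width:cx + band_half_width + 1] = 1.0
        return np.fft.ifft2(np.fft.ifftshift(F * mask)).real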

    Read the article

  • Shared value in parallel python

    - by Jonathan
    Hey all- I'm using ParallelPython to develop a performance-critical script. I'd like to share one value between the 8 processes running on the system. Please excuse the trivial example but this illustrates my question. def findMin(listOfElements): for el in listOfElements: if el < min: min = el import pp min = 0 myList = range(100000) job_server = pp.Server() f1 = job_server.submit(findMin, myList[0:25000]) f2 = job_server.submit(findMin, myList[25000:50000]) f3 = job_server.submit(findMin, myList[50000:75000]) f4 = job_server.submit(findMin, myList[75000:100000]) The pp docs don't seem to describe a way to share data across processes. Is it possible? If so, is there a standard locking mechanism (like in the threading module) to confirm that only one update is done at a time? l = Lock() if(el < min): l.acquire if(el < min): min = el l.release I understand I could keep a local min and compare the 4 in the main thread once returned, but by sharing the value I can do some better pruning of my BFS binary tree and potentially save a lot of loop iterations. Thanks- Jonathan
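
    pp itself does not appear to expose shared state, but the idea in the question - a shared value plus a lock so only one update happens at a time - is what the standard multiprocessing module provides. A sketch with four worker processes (a named substitute for pp, not a pp feature):

    from multiprocessing import Process, Value

    def find_min(chunk, shared_min):
        local = min(chunk)                 # prune locally, take the lock only once
        with shared_min.get_lock():
            if local < shared_min.value:
                shared_min.value = local

    if __name__ == "__main__":
        data = list(range(100000))
        shared_min = Value("i", data[0])   # "i" = C int; pick a ctype that fits the data
        procs = [Process(target=find_min, args=(data[i::4], shared_min))
                 for i in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(shared_min.value)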

    Read the article

  • How can I overlay one image onto another?

    - by Edward Tanguay
    I would like to display an image composed of two images. I want image rectangle.png to show with image sticker.png on top of it with its left-hand corner at pixel 10, 10. Here is as far as I got, but how do I combine the images? Image image = new Image(); image.Source = new BitmapImage(new Uri(@"c:\test\rectangle.png")); image.Stretch = Stretch.None; image.HorizontalAlignment = HorizontalAlignment.Left; Image imageSticker = new Image(); imageSticker.Source = new BitmapImage(new Uri(@"c:\test\sticker.png")); image.OverlayImage(imageSticker, 10, 10); //how to do this? TheContent.Content = image;
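
    As a conceptual sketch only (Pillow in Python rather than WPF, purely for illustration): composing the two bitmaps into one means pasting the sticker at (10, 10) using its own alpha channel as the mask. In WPF the usual routes are layering two Image elements in a Grid or Canvas, or drawing both into a RenderTargetBitmap.

    from PIL import Image

    base = Image.open(r"c:\test\rectangle.png").convert("RGBA")
    sticker = Image.open(r"c:\test\sticker.png").convert("RGBA")

    # paste at (10, 10); passing the sticker again as the third argument
    # uses its alpha channel as the paste mask
    base.paste(sticker, (10, 10), sticker)
    base.save(r"c:\test\combined.png")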

    Read the article

  • In a digital photo, how can I detect if a mountain is obscured by clouds?

    - by Gavin Brock
    The problem: I have a collection of digital photos of a mountain in Japan. However, the mountain is often obscured by clouds or fog. What techniques can I use to detect that the mountain is visible in the image? I am currently using Perl with the Imager module, but I am open to alternatives. All the images are taken from the exact same position - these are some samples. My naïve solution: I started by taking several horizontal pixel samples of the mountain cone and comparing the brightness values to other samples from the sky. This worked well for differentiating good image 1 and bad image 2. However, in the autumn it snowed and the mountain became brighter than the sky, like image 3, and my simple brightness test started to fail. Image 4 is an example of an edge case. I would classify this as a good image since some of the mountain is clearly visible. UPDATE 1: Thank you for the suggestions - I am happy you all vastly over-estimated my competence. Based on the answers, I have started trying the ImageMagick edge-detect transform, which gives me a much simpler image to analyze: convert sample.jpg -edge 1 edge.jpg I assume I should use some kind of masking to get rid of the trees and most of the clouds. Once I have the masked image, what is the best way to compare its similarity to a 'good' image? I guess the "compare" command is suited for this job? How do I get a numeric 'similarity' value from this?
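
    A possible next step after the edge transform, sketched in Python/numpy rather than Imager (an assumption for illustration): restrict attention to a hand-drawn mask of the mountain cone and compare the edge energy there against a known-good reference shot; a clear ridge gives strong gradients, fog gives almost none. The file names and the mask are placeholders.

    import numpy as np
    from PIL import Image

    def edge_energy(path, mask):
        g = np.asarray(Image.open(path).convert("L"), dtype=float)
        gy, gx = np.gradient(g)                      # simple edge strength
        return float(np.sqrt(gx ** 2 + gy ** 2)[mask].mean())

    # white-on-black mask of the mountain cone, same size as the photos (assumption)
    mask = np.asarray(Image.open("mountain_mask.png").convert("L")) > 128

    good = edge_energy("reference_clear_day.jpg", mask)
    test = edge_energy("latest.jpg", mask)
    print("visibility score:", test / good)          # near 1 = visible, near 0 = obscured

    For the ImageMagick route, "compare -metric RMSE good_edges.png test_edges.png null:" should print a single numeric distance (on stderr), which can serve as the similarity value.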

    Read the article

  • Loading .png image from array of uint8_t into OpenGL ES texture

    - by unknownthreat
    Normally, when we want to load a texture for OpenGL ES with .png, we simply add the .png images into Xcode. The .png files will be altered for optimization by Xcode, and these altered png files can be loaded into an OpenGL ES texture at runtime. However, what I am trying to do is quite different. I am trying to load a .png file that is not prebuilt/compiled in. The png file will be transmitted externally over UDP, and it will be in the form of an array of bytes. I am very sure that the png is transferred correctly, but when it comes to displaying the png image in the form of an OpenGL ES texture, the image somehow shows incorrectly. The colors that are being sent are presented, but the positions are very incorrect. However, the position of the colors still retains some aspects of the original positions. Here: The left image shows the original .png, while the right shows the png being displayed on the iPhone using an OpenGL ES texture. It looks as if the png data is not being decoded or is being incorrectly processed. Below is the OpenGL ES code for turning the image into a texture: - (void) setTextureFromImageByte: (uint8_t*)imageByte{ if (self = [super init]){ NSData* imageData = [[NSData alloc] initWithBytes: imageByte length: imageLength]; UIImage* img = [[UIImage alloc] initWithData: imageData]; CGImageRef image = img.CGImage; int width = 512; int height = 512; if (image){ int tempWidth = (int)width, tempHeight = (int)height; if ((tempWidth & (tempWidth - 1)) != 0 ){ NSLog(@"CAUTION! width is not power of 2. width == %d", tempWidth); }else if ((tempHeight & (tempHeight - 1)) != 0 ){ NSLog(@"CAUTION! height is not power of 2. height == %d", tempHeight); }else{ void *spriteData = calloc(width * 4, height * 4); CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width*4, CGImageGetColorSpace(image), kCGImageAlphaPremultipliedLast); CGContextDrawImage(spriteContext, CGRectMake(0.0, 0.0, width, height), image); CGContextRelease(spriteContext); glBindTexture(GL_TEXTURE_2D, 1); glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 320, 435, GL_RGBA, GL_UNSIGNED_BYTE, spriteData); free(spriteData); } }else NSLog(@"ERROR: Image not loaded..."); [img release]; [imageData release]; } } So does anyone know how to deal with this? Is it because the iPhone only accepts the altered png from Xcode? What can we do in this case to make the png image display correctly?

    Read the article
