Search Results

Search found 40168 results on 1607 pages for 'text processing'.

Page 146/1607

  • Creating new image in a loop using OpenCV

    - by user565415
    I am programming some image conversion code with OpenCV and I don't know how I can create an image memory buffer to load an image on every iteration. I have a number of iterations (maxImNumber) and an input image. In every loop the program must create an image that is a resized and modified version of the input image. Here is some basic code (concept):

        for (int imageIndex = 0; imageIndex < maxImNumber; imageIndex++) {
            cvCopy(inputImage, images[imageIndex], 0);
            cvReleaseImage(&inputImage);
            images[imageIndex+1] = cvCreateImage(cvSize((images[imageIndex]->width)/2, images[imageIndex]->height), IPL_DEPTH_8U, 1);
            for (i=1; i < images[imageIndex]->height; i++) {
                index = 0;
                for (j=0; j < images[imageIndex]->width; j=j+2) {
                    // doing some basic mathematical operation on the image content and storing it in the new image
                    images[imageIndex+1][i][index] = (images[imageIndex][i][j] + images[imageIndex][i][j+2])/2;
                    index++;
                }
            }
            inputImage = cvCreateImage(cvSize((images[imageIndex+1]->width), images[imageIndex]->height), IPL_DEPTH_8U, 1);
            cvCopy(images[imageIndex+1], inputImage, 0);
        }

    Can somebody please explain how I can create this image buffer (images[]) and allocate memory for it? Also, how can I access any image in this buffer? Thank you very much in advance!
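
    In the old C API the buffer is simply an array of IplImage pointers (e.g. IplImage* images[N]), where each slot is allocated with cvCreateImage before it is written to and released with cvReleaseImage afterwards. As a rough analogue of the same idea, here is a minimal Python/cv2 sketch (the column-averaging step is replaced by cv2.resize just to keep it short; names such as max_im_number are placeholders, not from the question):

        import cv2

        def build_image_buffer(input_image, max_im_number):
            """Keep every intermediate image in a list; each level halves the width."""
            images = [input_image]                              # images[0] is the source image
            for k in range(max_im_number):
                h, w = images[k].shape[:2]
                half = cv2.resize(images[k], (max(w // 2, 1), h))   # a new, separately allocated image
                images.append(half)
            return images                                       # images[i] gives the i-th image in the buffer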

    Read the article

  • Why are Asynchronous processes not called Synchronous?

    - by Balk
    So I'm a little confused by this terminology. Everyone refers to "Asynchronous" computing as running different processes on separate threads, which gives the illusion that these processes are running at the same time. This is not the definition of the word asynchronous.

        a·syn·chro·nous – adjective
        1. not occurring at the same time.
        2. (of a computer or other electrical machine) having each operation started only after the preceding operation is completed.

    What am I not understanding here?

    Read the article

  • using Hibernate to load 20K products, modifying the entities and updating the db

    - by Blankman
    I am using Hibernate to update 20K products in my database. As of now I pull in the 20K products, loop through them, modify some properties and then update the database. So:

        load products
        foreach product
            session begin transaction
            productDao.MakePersistant(p);
            session commit();

    As of now things are pretty slow compared to standard JDBC. What can I do to speed things up? I am sure I am doing something wrong here.

    Read the article

  • Which library should I use for server-side image manipulation on Node.JS?

    - by Andrew
    I found a quite large list of available libraries on Node.JS wiki but I'm not sure which of those are more mature and provide better performance. Basically I want to do the following:

        - load some images to a server from external sources
        - put them onto one big canvas
        - crop and mask them a bit
        - apply a filter or two
        - resize the final image and give a link to it

    Big plus if the node package works on both Linux and Windows.

    Read the article

  • efficient video format/codec for sparse & binary blob tracking

    - by user391339
    I am working on a blob tracking project and have many high-definition videos that I would like to reduce in size for storage and downstream tracking/shape-analysis. I want to use a lossless method that takes advantage of the black and white nature of the video as well as the fact that not much is moving between individual frames. The videos are quite sparse, with 5 to 10 b&w blobs per frame occupying <30% of the space in total, with each blob moving <5-10% of the field of view between frames and not changing shape too much between 2-3 frames. I will work in Python, Matlab, or LabView for this project, and could use a batch utility if available. It may be worthwhile to export the files as compressed image stacks if a proper video format can't be found. What are the pros and cons of this? A video codec uses correlations between neighboring frames, so it should be more efficient, but not if the wrong one is chosen or if it is improperly configured.

    Read the article

  • Parallel doseq for Clojure

    - by andrew cooke
    I haven't used multithreading in Clojure at all, so I am unsure where to start. I have a doseq whose body can run in parallel. What I'd like is for there always to be 3 threads running (leaving 1 core free) that evaluate the body in parallel until the range is exhausted. There's no shared state, nothing complicated - the equivalent of Python's multiprocessing would be just fine. So something like:

        (dopar 3 [i (range 100)]
          ; repeated 100 times across 3 parallel threads...
          ...)

    Where should I start looking? Is there a command for this? A standard package? A good reference? So far I have found pmap, and could use that (how do I restrict it to 3 at a time? it looks like it uses 32 at a time - no, the source says 2 + number of processors), but it seems like this is a basic primitive that should already exist somewhere.
    Clarification: I really would like to control the number of threads. I have processes that are long-running and use a fair amount of memory, so creating a large number and hoping things work out OK isn't a good approach (for example, one that uses a significant chunk of available memory).
    Update: I am starting to write a macro that does this, and I need a semaphore (or a mutex, or an atom I can wait on). Do semaphores exist in Clojure? Or should I use a ThreadPoolExecutor? It seems odd to have to pull so much in from Java - I thought parallel programming in Clojure was supposed to be easy... Maybe I am thinking about this completely the wrong way? Hmmm. Agents?
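
    Since the question says Python's multiprocessing semantics would be acceptable, this is roughly what that looks like there - a minimal sketch, with the body function standing in for the real per-item work:

        from multiprocessing import Pool

        def body(i):
            # stand-in for the real doseq body; anything long-running or memory-heavy goes here
            return i * i

        if __name__ == "__main__":
            with Pool(processes=3) as pool:      # never more than 3 workers alive at once
                results = pool.map(body, range(100))

    On the JVM side the analogous primitive is a fixed-size pool (java.util.concurrent's Executors/newFixedThreadPool), which Clojure code can call directly.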

    Read the article

  • Action<T> synchronous and asynchronous

    - by raffaeu
    Hi everybody, I have a ContextMenuStrip control that allows you to execute an action in two different flavours, sync and async. I am trying to convert everything to use generics, so I did this:

        public class BaseContextMenu<T> : IContextMenu
        {
            private T executor ...

            public void Exec(Action<T> action)
            {
                action.Invoke(this.executor);
            }

            public void ExecAsync(Action<T> asyncAction)
            {
                ...
            }

    How can I write the async method in order to execute the generic action and 'do something' with the menu in the meanwhile? I saw that the signature of BeginInvoke is something like:

        asyncAction.BeginInvoke(this.executor, IAsyncCallback, object);

    Read the article

  • How do I prevent ImageMagick convert from scaling images *up*?

    - by Kyle
    I'm using ImageMagick's convert tool to generate image thumbnails for a web application. I'm using notation like so: 600x600> The images are indeed scaled to 600px wide/tall (depending on the longer side) and proportions are properly maintained, however images less than 600px in either direction are scaled up — this behavior is not desired. Is there a way to prevent convert from scaling images up if the destination dimensions both exceed the original image size?

    Read the article

  • Why are some pictures crooked after using my function?

    - by Miko Kronn
        struct BitmapDataAccessor
        {
            private readonly byte[] data;
            private readonly int[] rowStarts;
            public readonly int Height;
            public readonly int Width;

            public BitmapDataAccessor(byte[] data, int width, int height)
            {
                this.data = data;
                this.Height = height;
                this.Width = width;
                rowStarts = new int[height];
                for (int y = 0; y < Height; y++)
                    rowStarts[y] = y * width;
            }

            public byte this[int x, int y, int color] // Maybe use an enum with Red = 0, Green = 1, and Blue = 2 members?
            {
                get { return data[(rowStarts[y] + x) * 3 + color]; }
                set { data[(rowStarts[y] + x) * 3 + color] = value; }
            }

            public byte[] Data
            {
                get { return data; }
            }
        }

        public static byte[, ,] Bitmap2Byte(Bitmap obraz)
        {
            int h = obraz.Height;
            int w = obraz.Width;
            byte[, ,] wynik = new byte[w, h, 3];
            BitmapData bd = obraz.LockBits(new Rectangle(0, 0, w, h), ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
            int bytes = Math.Abs(bd.Stride) * h;
            byte[] rgbValues = new byte[bytes];
            IntPtr ptr = bd.Scan0;
            System.Runtime.InteropServices.Marshal.Copy(ptr, rgbValues, 0, bytes);
            BitmapDataAccessor bda = new BitmapDataAccessor(rgbValues, w, h);
            for (int i = 0; i < h; i++)
            {
                for (int j = 0; j < w; j++)
                {
                    wynik[j, i, 0] = bda[j, i, 2];
                    wynik[j, i, 1] = bda[j, i, 1];
                    wynik[j, i, 2] = bda[j, i, 0];
                }
            }
            obraz.UnlockBits(bd);
            return wynik;
        }

        public static Bitmap Byte2Bitmap(byte[, ,] tablica)
        {
            if (tablica.GetLength(2) != 3)
            {
                throw new NieprawidlowyWymiarTablicyException();
            }
            int w = tablica.GetLength(0);
            int h = tablica.GetLength(1);
            Bitmap obraz = new Bitmap(w, h, PixelFormat.Format24bppRgb);
            for (int i = 0; i < w; i++)
            {
                for (int j = 0; j < h; j++)
                {
                    Color kol = Color.FromArgb(tablica[i, j, 0], tablica[i, j, 1], tablica[i, j, 2]);
                    obraz.SetPixel(i, j, kol);
                }
            }
            return obraz;
        }

    Now, if I do:

        private void btnLoad_Click(object sender, EventArgs e)
        {
            if (dgOpenFile.ShowDialog() == DialogResult.OK)
            {
                try
                {
                    Bitmap img = new Bitmap(dgOpenFile.FileName);
                    byte[, ,] tab = Grafika.Bitmap2Byte(img);
                    picture.Image = Grafika.Byte2Bitmap(tab);
                    picture.Size = img.Size;
                }
                catch (Exception ex)
                {
                    MessageBox.Show(ex.Message);
                }
            }
        }

    most pictures are handled correctly, but some are not. Here is an example of a picture that doesn't work: it produces the following result (this is only a fragment of the picture). Why is that?

    Read the article

  • output image is not displayed

    - by gerry chocolatos
    So, I am doing image detection in which I have to process each of the red, green and blue elements to get an edge map, and then combine them into one image to show the output. But it doesn't show my output image. Would anyone please be kind enough to help me? Here is my code so far.

        // get the red element
        process_red = new int[width * height];
        counter = 0;
        for (int i = 0; i < 256; i++) {
            for (int j = 0; j < 256; j++) {
                int clr = buff_red.getRGB(j, i);
                int red = (clr & 0x00ff0000) >> 16;
                red = (0xFF << 24) | (red << 16) | (red << 8) | red;
                process_red[counter] = red;
                counter++;
            }
        }

        // set threshold value for red element
        int threshold = 100;
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                int bin = (buff_red.getRGB(x, y) & 0x000000ff);
                if (bin < threshold)
                    bin = 0;
                else
                    bin = 255;
                buff_red.setRGB(x, y, 0xff000000 | bin << 16 | bin << 8 | bin);
            }
        }

    I do the same for the green and blue elements. Then I want to combine the three by doing it this way:

        // combine the three elements
        process_combine = new int[width * height];
        counter = 0;
        for (int i = 0; i < 256; i++) {
            for (int j = 0; j < 256; j++) {
                int clr_a = buff_red.getRGB(j, i);
                int ar = clr_a & 0x000000ff;
                int clr_b = buff_green.getRGB(j, i);
                int bg = clr_b & 0x000000ff;
                int clr_c = buff_blue.getRGB(j, i);
                int cb = clr_b & 0x000000ff;
                int alpha = 0xff000000;
                int combine = alpha | (ar << 16) | (bg << 8) | cb;
                process_combine[counter] = combine;
                counter++;
            }
        }

        buff_rgb = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        Graphics rgb;
        rgb = buff_rgb.getGraphics();
        rgb.drawImage(output_rgb, 0, 0, null);
        rgb.dispose();
        repaint();

    To show the output, which comes from the combining process, I use a draw method:

        g.drawImage(buff_rgb, 800, 100, this);

    But still it doesn't show the image. Can anyone please help me? Your help is really appreciated. Thanks.

    Read the article

  • Java for loop with multiple incrementers

    - by user2517280
    I'm writing a program which combines the RGB pixel values of 3 images, e.g. the red pixel of image 1, the green pixel of image 2 and the blue pixel of image 3, and I then want to create a final image from them. I'm using the code below, but this seems to be incrementing x2 and x3 while x stays the same, i.e. not giving the right pixel value for the same coordinate in each image.

        for (int x = 0; x < image.getWidth(); x++) {
            for (int x2 = 0; x2 < image2.getWidth(); x2++) {
                for (int x3 = 0; x3 < image3.getWidth(); x3++) {
                    for (int y = 0; y < image.getHeight(); y++) {
                        for (int y2 = 0; y2 < image2.getHeight(); y2++) {
                            for (int y3 = 0; y3 < image3.getHeight(); y3++) {

    So I was wondering if anyone can tell me how to iterate through each of the 3 images at the same coordinate, so for example read (1, 1) of each image and record the red, green and blue values accordingly. Apologies if it doesn't make complete sense; it's a bit hard to explain. I can iterate the values for one image fine, but when I add in another, things start to go a bit wrong, as obviously it's quite a bit more complicated! I was thinking it might be easier to create an array and replace the according values in that, but I'm not sure how to do that effectively either. Thanks
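
    The usual shape of the fix is a single pair of loops over x and y that, inside the body, reads all three images at the same (x, y) - in Java that means one getRGB call per BufferedImage with identical coordinates. A hedged sketch of the same idea in Python with Pillow (file names are placeholders, not from the question):

        from PIL import Image

        img1 = Image.open("image1.png").convert("RGB")
        img2 = Image.open("image2.png").convert("RGB")
        img3 = Image.open("image3.png").convert("RGB")

        w = min(img1.width, img2.width, img3.width)
        h = min(img1.height, img2.height, img3.height)
        out = Image.new("RGB", (w, h))

        for x in range(w):                        # one pair of loops; the same (x, y) indexes all three images
            for y in range(h):
                r, _, _ = img1.getpixel((x, y))   # red from image 1
                _, g, _ = img2.getpixel((x, y))   # green from image 2
                _, _, b = img3.getpixel((x, y))   # blue from image 3
                out.putpixel((x, y), (r, g, b))

        out.save("final.png")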

    Read the article

  • How to unblend two images from a blended image

    - by gavishna
    I have blended/merged 2 images, img1 and img2, with the code below, which is working fine. What I want to know is how to obtain the original two images img1 and img2. The code for blending is as follows:

        img1 = imread('C:\MATLAB7\Picture5.jpg');
        img2 = imread('C:\MATLAB7\Picture6.jpg');
        for i = 1:size(img1,1)
            for j = 1:size(img1,2)
                for k = 1:size(img1,3)
                    output(i,j,k) = (img1(i,j,k) + img2(i,j,k)) / 2;
                end
            end
        end
        imshow(output, [0 255]);
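
    Note that the blend alone is not enough to recover both originals: each output pixel gives one equation, output = (img1 + img2) / 2, with two unknowns, so infinitely many pairs produce the same blend. If one of the originals is still available, the other follows directly (up to the rounding and clipping losses of uint8 arithmetic). A small, self-contained numpy sketch of that recovery, with stand-in data:

        import numpy as np

        # stand-ins for the real data: two images and their blend, in float to avoid uint8 rounding
        img1 = np.random.randint(0, 256, (4, 4, 3)).astype(np.float64)
        img2 = np.random.randint(0, 256, (4, 4, 3)).astype(np.float64)
        output = (img1 + img2) / 2.0

        # with the blend and ONE original in hand, the other follows directly
        img2_recovered = 2.0 * output - img1    # equals img2 exactly in float arithmetic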

    Read the article

  • Automatic people counting + twittering.

    - by c2h2
    I want to develop a system that accurately counts people going through a normal 1-2 m wide door, tweets whenever someone goes in or out, and reports how many people remain inside. Now, the Twitter part is easy, but the people counting is difficult. There are some semi-existing counting solutions, but they do not quite fit my needs. My idea/algorithm: should I mount an infrared camera on top of the door, monitor it constantly, divide the camera image into a grid, and work out who has entered and who has left? Can you give me some suggestions and a starting point?

    Read the article

  • Matlab - Propagate unit vectors on to the edge of shape boundaries

    - by Graham
    Hi, I have a set of unit vectors which I want to propagate onto the edge of a shape boundary defined by a binary image. The shape boundary is defined by a 1 px wide white edge. I also have the coordinates of these points stored in a 2-row by n-column matrix. The shape forms a concave boundary with no holes within itself, made of around 2500 points. What would be the best method to do this? Are there some sort of ray-tracing algorithms that could be used? Or would it be a case of taking the unit vector, multiplying it by a scalar, and testing after the multiplication whether the end point of the vector is outside the shape boundary - and when the end point is outside the shape, just finding the point of intersection? Thank you very much in advance for any help!
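
    One straightforward option is exactly that scalar-stepping idea: march along each unit vector in sub-pixel steps until the first white edge pixel is hit. A hedged Python/numpy sketch of the idea (a MATLAB version would be structurally identical; the half-pixel step is just to avoid skipping the 1 px wide edge):

        import numpy as np

        def propagate_to_boundary(start, direction, boundary, step=0.5, max_steps=10000):
            """March from 'start' along unit vector 'direction' until a boundary pixel (value 1) is hit.

            boundary is a 2-D binary array indexed as boundary[row, col], i.e. boundary[y, x].
            Returns the (x, y) pixel of first contact, or None if the ray leaves the image.
            """
            p = np.asarray(start, dtype=float)
            d = np.asarray(direction, dtype=float)
            h, w = boundary.shape
            for _ in range(max_steps):
                p = p + step * d
                xi, yi = int(round(p[0])), int(round(p[1]))
                if not (0 <= xi < w and 0 <= yi < h):
                    return None
                if boundary[yi, xi]:
                    return xi, yi
            return None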

    Read the article

  • Asynchronous SQL Operations

    - by Paul Hatcherian
    I've got a problem I'm not sure how best to solve. I have an application which updates a database in response to ad hoc requests. One request in particular is quite common. The request is an update that by itself is quite simple, but has some complex preconditions. For this request the business layer first requests a set of data from the data layer. The business logic layer evaluates the data from the database and the parameters from the request; from this the action to be performed is determined and the request's response message(s) are created. The business layer then executes the actual update command that is the purpose of the request. This last step is the problem: the command is dependent on the state of the database, which might have changed since the business logic ran. Locking down the data read in this operation across several round trips to the database doesn't seem like a good idea either. Is there a 'best-practice' way to accomplish something like this? Thanks!
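
    A common pattern for this situation (not necessarily the only one) is optimistic concurrency: the business layer remembers a version value read together with the data, and the final update only applies if that version is unchanged, otherwise the request is re-evaluated or rejected. A minimal sketch in Python with sqlite3, using a hypothetical items table that carries a version column:

        import sqlite3

        def update_if_unchanged(conn, item_id, new_value, expected_version):
            """Apply the update only if the row is still in the state the business logic evaluated."""
            cur = conn.execute(
                "UPDATE items SET value = ?, version = version + 1 "
                "WHERE id = ? AND version = ?",
                (new_value, item_id, expected_version),
            )
            conn.commit()
            return cur.rowcount == 1   # False -> the row changed since it was read; retry or abort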

    Read the article

  • Linear Interpolation. How to implement this algorithm in C ? (Python version is given)

    - by psihodelia
    There exists one very good linear interpolation method. It performs linear interpolation requiring at most one multiply per output sample. I found its description in the third edition of Understanding DSP by Lyons. This method involves a special hold buffer. Given a number of samples to be inserted between any two input samples, it produces output points using linear interpolation. Here, I have rewritten this algorithm in Python:

        temp1, temp2 = 0, 0
        iL = 1.0 / L
        for i in x:
            hold = [i - temp1] * L
            temp1 = i
            for j in hold:
                temp2 += j
                y.append(temp2 * iL)

    where x contains the input samples, L is the number of points to be inserted, and y will contain the output samples. My question is how to implement this algorithm in ANSI C in the most efficient way, e.g. is it possible to avoid the second loop?
    NOTE: the Python code presented is just to show how the algorithm works.
    UPDATE: here is an example of how it works in Python:

        x = []
        y = []
        hold = []
        num_points = 20
        points_inbetween = 2

        temp1, temp2 = 0, 0

        for i in range(num_points):
            x.append(sin(i * 2.0 * pi * 0.1))

        L = points_inbetween
        iL = 1.0 / L
        for i in x:
            hold = [i - temp1] * L
            temp1 = i
            for j in hold:
                temp2 += j
                y.append(temp2 * iL)

    Let's say x = [.... 10, 20, 30 ....]. Then, if L = 2, it will produce [... 10, 15, 20, 25, 30 ...]
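
    The second loop cannot disappear entirely - L output samples must still be emitted per input sample - but the hold buffer and the per-output multiply can. Every element of hold is the same difference, so the running sum simply advances by a constant step. A hedged restructuring in Python that produces the same output as the code above and translates line for line into a pair of plain C for loops over float arrays:

        def upsample_linear(x, L):
            """Emit L linearly interpolated samples per input sample (implicit initial value 0, as above)."""
            y = []
            prev = 0.0
            for cur in x:
                step = (cur - prev) / L   # one divide (or multiply by a precomputed 1/L) per input sample
                val = prev
                for _ in range(L):
                    val += step           # pure accumulation: no multiply per output sample
                    y.append(val)
                prev = cur
            return y

    For example, upsample_linear([10, 20, 30], 2) gives [5.0, 10.0, 15.0, 20.0, 25.0, 30.0], matching the hold-buffer version.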

    Read the article

  • Filtering spectrum using FIR filters

    - by Alex Hoppus
    If I have signal values x[T] and filter coefficients b[i], I can perform the filtering using convolution. Suppose I have the spectrum of x (after an FFT) and I need to perform the filtering using the filter coefficients; how can I do this? I heard that in the frequency domain it becomes multiplication, rather than convolution as in the time domain, but I can't find the equation to use. I have 614000 values in the y = fft(x[T]) vector and 119 filter coefficients (generated using fdatool), so I can't multiply them directly... Thanks.
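
    The relation is Y(k) = X(k) * B(k), where B is the FFT of the filter coefficients taken at the same length as the signal - i.e. the 119 taps are zero-padded up to the 614000-point FFT length before the point-wise multiply. Strictly this implements circular convolution, which for a signal this long only affects roughly the first 118 output samples (scipy.signal.lfilter(b, 1, x), or an overlap-add FFT scheme, gives the exact linear-convolution result). A minimal numpy sketch with stand-in data:

        import numpy as np

        x = np.random.randn(614000)      # stand-in for the real 614000-sample signal
        b = np.ones(119) / 119.0         # stand-in for the real 119 FIR coefficients

        N = len(x)
        B = np.fft.fft(b, n=N)           # zero-pad the 119 coefficients to the signal length
        Y = np.fft.fft(x) * B            # point-wise multiply = convolution in the time domain
        y = np.real(np.fft.ifft(Y))      # imaginary part is only round-off for real x and b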

    Read the article

  • Display a loading message / gif

    - by Pankaj
    I am facing a problem loading a PDF in the browser: it works fine, but it is slow to load. That's why I want to display a loading message / GIF until it has fully loaded, so the user isn't looking at a blank screen. My question is the same as this previous question: http://stackoverflow.com/questions/1138232/html-embedded-pdfs-onload

    Read the article

  • How to convert Bitmap to byte[,,] faster?

    - by Miko Kronn
    I wrote this function:

        public static byte[, ,] Bitmap2Byte(Bitmap image)
        {
            int h = image.Height;
            int w = image.Width;
            byte[, ,] result = new byte[w, h, 3];
            for (int i = 0; i < w; i++)
            {
                for (int j = 0; j < h; j++)
                {
                    Color c = image.GetPixel(i, j);
                    result[i, j, 0] = c.R;
                    result[i, j, 1] = c.G;
                    result[i, j, 2] = c.B;
                }
            }
            return result;
        }

    But it takes almost 6 seconds to convert a 1800x1800 image. Can I do this faster?

    Read the article
