Search Results

Search found 9901 results on 397 pages for 'audio processing'.


  • Parallelizing some LINQ to XML

    - by Lol coder
    How can I make this code run in parallel?

        List<Crop> crops = new List<Crop>();
        // Get up to 10 pages of data.
        for (int i = 1; i < 10; i++)
        {
            // i is basically used for paging.
            XDocument document = XDocument.Load(string.Format(url, i));
            crops.AddRange(from c in document.Descendants("CropType")
                           select new Crop
                           {
                               // The data here.
                           });
        }
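
    One way to parallelize this, assuming the pages can be fetched independently and merged in any order (a PLINQ sketch, not the question's own code; it reuses the question's `url` and Crop projection):

        // Sketch: fetch pages 1..9 (the question's loop bounds) in parallel
        // and merge the results into one list.
        List<Crop> crops = Enumerable.Range(1, 9)
            .AsParallel()
            .SelectMany(i =>
            {
                XDocument document = XDocument.Load(string.Format(url, i));
                return from c in document.Descendants("CropType")
                       select new Crop { /* the data here */ };
            })
            .ToList();

    Each XDocument.Load call blocks on the network, so running them concurrently mostly hides I/O latency rather than consuming extra CPU.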

    Read the article

  • What is the best way to detect white color?

    - by dnul
    I'm trying to detect white objects in a video. The first step is to filter the image so that only white pixels remain. My first approach was to convert to the HSV color space and check for a high value in the V (brightness) channel. Here is the code:

        // convert image to HSV
        cvCvtColor(src, hsv, CV_BGR2HSV);
        cvCvtPixToPlane(hsv, h_plane, s_plane, v_plane, 0);
        for (int x = 0; x < srcSize.width; x++) {
            for (int y = 0; y < srcSize.height; y++) {
                uchar *hue = &((uchar *)(h_plane->imageData + h_plane->widthStep * y))[x];
                uchar *sat = &((uchar *)(s_plane->imageData + s_plane->widthStep * y))[x];
                uchar *val = &((uchar *)(v_plane->imageData + v_plane->widthStep * y))[x];
                if (*val > 170)
                    *hue = 255;
                else
                    *hue = 0;
            }
        }

    leaving the result in the hue channel. Unfortunately, this approach is very sensitive to lighting. I'm sure there is a better way. Any suggestions?
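
    White is characterized by low saturation as well as high brightness, so testing the S channel alongside V is usually more robust (a sketch in the question's own C API; the 60/170 thresholds are illustrative assumptions, not from the question):

        /* Sketch: treat a pixel as white when saturation is low AND value is high. */
        if (*val > 170 && *sat < 60)
            *hue = 255;
        else
            *hue = 0;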

    Read the article

  • What is the problem in the following MATLAB code?

    - by raju
        img = imread('img27.jpg');
        % function rectangle = rect_test(img)

        % edge detection
        [gx, gy] = gradient(img);
        gx = abs(gx);
        gy = abs(gy);
        g = gx + gy;
        g = abs(g);

        % make it 300x300
        img = zeros(300);
        s = size(g);
        img(1:s(1), 1:s(2)) = g;
        figure; imagesc(img); colormap(gray); title('Edge detection')

        % take the DCT of the image
        IM = abs(dct2(img));
        figure; imagesc(IM); colormap(gray); title('DCT')

        % normalize
        m = max(max(IM));
        IM2 = IM ./ m * 100;

        % get rid of the peak
        size_IM2 = size(IM2);
        IM2(1:round(.05*size_IM2(1)), 1:round(.05*size_IM2(2))) = 0;

        % threshold
        L = length(find(IM2 > 20));
        if (L > 60)
            ret = 1;
        else
            ret = 0;
        end
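
    Since the question gives no error message, one likely problem (an assumption from reading the code, not a confirmed diagnosis): imread returns a uint8 array, and a 3-D one for a color JPEG, while gradient and dct2 expect a 2-D double array. Converting first may be all that is needed (assumes the Image Processing Toolbox for rgb2gray/im2double):

        % Sketch: convert to a 2-D double image before taking gradients
        % (assumes img27.jpg is a color JPEG, so imread returns an MxNx3 uint8 array).
        img = im2double(rgb2gray(imread('img27.jpg')));
        [gx, gy] = gradient(img);

    Note also that img(1:s(1), 1:s(2)) = g fails whenever the input image is larger than 300x300.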

    Read the article

  • Understanding The Convolution Matrix

    - by Ryan Naddy
    I am learning about convolution matrices, and I understand how they work, but I don't understand how to know beforehand what the output of a matrix will look like. For example, let's say I want to add a blur to an image: I could guess 10,000+ different combinations of numbers before I get the correct one. I do know that this formula will give me a blur effect, but I have no idea why:

        // Despite its name, this kernel is a 3x3 box blur: all weights are 1/9.
        float[] sharpen = new float[] {
            1/9f, 1/9f, 1/9f,
            1/9f, 1/9f, 1/9f,
            1/9f, 1/9f, 1/9f
        };

    Can anyone either explain to me how this works or point me to some article that explains it? I would like to know beforehand what a possible output of the matrix will be, without guessing. Basically, why do we put that number in each field, and not some other number?
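
    The intuition (not from the question itself): each output pixel is the weighted sum of the 3x3 neighborhood around the corresponding input pixel, with the matrix supplying the weights. Nine equal weights of 1/9 average the neighborhood, which is exactly a box blur, and because the weights sum to 1 the overall brightness is preserved. A minimal sketch of the mechanism in plain Java (grayscale image, borders skipped, names illustrative):

        // Apply a 3x3 kernel k to a grayscale image: out[y][x] is the
        // weighted sum of img's 3x3 neighborhood centered on (y, x).
        static float[][] convolve(float[][] img, float[] k) {
            int h = img.length, w = img[0].length;
            float[][] out = new float[h][w];
            for (int y = 1; y < h - 1; y++) {
                for (int x = 1; x < w - 1; x++) {
                    float sum = 0;
                    for (int dy = -1; dy <= 1; dy++)
                        for (int dx = -1; dx <= 1; dx++)
                            sum += k[(dy + 1) * 3 + (dx + 1)] * img[y + dy][x + dx];
                    out[y][x] = sum;
                }
            }
            return out;
        }

    Reading a kernel this way predicts its effect: weights that average the neighbors blur, while weights that amplify the center against negative neighbors (e.g. 5 in the middle, -1 around it) sharpen.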

    Read the article

  • Looping through variable POST vars and adding them to the database

    - by Neil Hickman
    I have been given the task of devising a custom forms manager with a MySQL backend. The problem I have now encountered, after setting up all the front end, is how to process a form that is dynamic. For example, form one could contain 6 fields, all with different name attributes in the input tags; form two could contain 20 fields, again all with different name attributes. How would I process the forms without using up oodles of resources?
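
    One common approach (a sketch assuming PHP with PDO, since the question names no language; the table layout and names are illustrative, not from the question): store each submission as name/value rows, so the schema never has to know a form's field list, and reuse a single prepared statement for the whole loop.

        <?php
        // Sketch: persist an arbitrary form as (form_id, field_name, field_value)
        // rows; $formId identifying the submitted form is hypothetical.
        $pdo = new PDO('mysql:host=localhost;dbname=forms', 'user', 'pass');
        $stmt = $pdo->prepare(
            'INSERT INTO form_submissions (form_id, field_name, field_value)
             VALUES (?, ?, ?)'
        );
        foreach ($_POST as $name => $value) {
            $stmt->execute([$formId, $name, $value]);
        }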

    Read the article

  • byte[] to WAV file

    - by John
    Hi, it would be great if you could tell me how I can save a byte[] to a WAV file. Sometimes I need to set a different sample rate, number of bits, and number of channels. Thanks for your help.
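
    A minimal sketch, assuming Java SE (the question names no platform) and that the byte[] holds raw PCM samples; the method name and parameters are illustrative:

        import javax.sound.sampled.*;
        import java.io.*;

        // Sketch: wrap the raw PCM bytes in an AudioInputStream describing the
        // desired format, then let AudioSystem write the WAV header and data.
        static void writeWav(byte[] pcm, float sampleRate, int bits,
                             int channels, File out) throws IOException {
            AudioFormat format = new AudioFormat(sampleRate, bits, channels,
                    true /* signed */, false /* little-endian */);
            long frames = pcm.length / format.getFrameSize();
            try (AudioInputStream ais = new AudioInputStream(
                    new ByteArrayInputStream(pcm), format, frames)) {
                AudioSystem.write(ais, AudioFileFormat.Type.WAVE, out);
            }
        }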

    Read the article

  • Django Photologue - use photo with original compression

    - by 123
    Hi, I'm uploading photos with Django Photologue. Is it possible to leave the JPEGs as they are? Even if I tell the photosize to use highest-quality compression, the files end up having half as many KB as the originals. I must admit that the visible loss of quality is small, but as I am a photographer I would like the images to appear exactly as I edited them (in Photoshop). I don't need any of the photosize's cropping and effects tools. Can it be turned off completely? Thanks for your answers.

    Read the article

  • SoundPool.load() and FileDescriptor from file

    - by Hans
    I tried using the load function of SoundPool that takes a FileDescriptor, because I wanted to be able to set the offset and length. The file is not stored in the resources but is a file on the storage card. Even though neither the load nor the play function of the SoundPool throws any exception or prints anything to the console, the sound is not played. The same code, but passing the file path string to SoundPool.load(), works perfectly. This is how I tried the loading (start equals 0 and length is the length of the file in milliseconds):

        FileInputStream fileIS = new FileInputStream(new File(mFile));
        mStreamID = mSoundPool.load(fileIS.getFD(), start, length, 0);
        mPlayingStreamID = mSoundPool.play(mStreamID, 1f, 1f, 1, 0, 1f);

    If I use this instead, it works:

        mStreamID = mSoundPool.load(mFile, 0);
        mPlayingStreamID = mSoundPool.play(mStreamID, 1f, 1f, 1, 0, 1f);

    Any ideas, anyone? Thanks
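
    Two likely causes, judging from the SoundPool documentation (an assumption, not a tested diagnosis): load(FileDescriptor, long, long, int) takes the offset and length in bytes within the file, not milliseconds, and load() is asynchronous, so playing immediately after it returns can silently do nothing. A sketch with both addressed:

        // Sketch: pass the byte offset/length (here, the whole file) and wait
        // for the asynchronous load to complete before playing.
        File file = new File(mFile);
        FileInputStream fileIS = new FileInputStream(file);
        mSoundPool.setOnLoadCompleteListener(new SoundPool.OnLoadCompleteListener() {
            public void onLoadComplete(SoundPool pool, int sampleId, int status) {
                if (status == 0) {
                    mPlayingStreamID = pool.play(sampleId, 1f, 1f, 1, 0, 1f);
                }
            }
        });
        mStreamID = mSoundPool.load(fileIS.getFD(), 0, file.length(), 1);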

    Read the article

  • AudioRecord problems with non-HTC devices

    - by Marc
    I'm having trouble using AudioRecord. An example, using some code derived from the splmeter project:

        private static final int FREQUENCY = 8000;
        private static final int CHANNEL = AudioFormat.CHANNEL_CONFIGURATION_MONO;
        private static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT;
        private int BUFFSIZE = 50;
        private AudioRecord recordInstance = null;
        ...
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
        recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC,
                FREQUENCY, CHANNEL, ENCODING, 8000);
        recordInstance.startRecording();
        short[] tempBuffer = new short[BUFFSIZE];
        int retval = 0;
        while (this.isRunning) {
            for (int i = 0; i < BUFFSIZE - 1; i++) {
                tempBuffer[i] = 0;
            }
            retval = recordInstance.read(tempBuffer, 0, BUFFSIZE);
            ... // process the data
        }

    This works perfectly on the HTC Dream and the HTC Magic, without any log warnings or errors, but causes problems on the emulators and on a Nexus One. On the Nexus One, it simply never returns useful data; I cannot provide any other useful information, as I'm having a remote friend do the testing. On the emulators (Android 1.5, 2.1 and 2.2), I get weird errors from the AudioFlinger and buffer overflows from the AudioRecordThread. I also get a major slowdown in UI responsiveness (even though the recording takes place in a separate thread from the UI). Is there something apparent that I'm doing incorrectly? Do I have to do anything special for the Nexus One hardware?
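
    One device-dependent detail worth checking (an assumption, not a confirmed diagnosis): the constructor's last argument is the internal buffer size in bytes, and the hard-coded 8000 may be below the minimum that non-HTC hardware requires. Android exposes the per-device minimum, as in this sketch:

        // Sketch: size the internal buffer from the device's reported minimum
        // instead of a hard-coded 8000 bytes; bail out if the parameters are
        // unsupported on this hardware.
        int minBuf = AudioRecord.getMinBufferSize(FREQUENCY, CHANNEL, ENCODING);
        if (minBuf == AudioRecord.ERROR_BAD_VALUE || minBuf == AudioRecord.ERROR) {
            throw new IllegalStateException("8 kHz mono PCM16 not supported here");
        }
        recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC,
                FREQUENCY, CHANNEL, ENCODING, minBuf * 2 /* headroom */);

    Reading much larger chunks than BUFFSIZE = 50 shorts per call would also reduce the per-call overhead that can starve the UI thread.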

    Read the article

  • I/O Asynchronous Completion

    - by lockedscope
    In the following, it is said that an I/O handle must be associated with the thread pool, but I could not find where in the given example a handle is associated with the thread pool. Which function or piece of code binds the file handle in that example?

        Using asynchronous I/O completion events, a thread from the thread pool processes data only when the data is received, and once the data has been processed, the thread returns to the thread pool. To make an asynchronous I/O call, an operating-system I/O handle must be associated with the thread pool and a callback method must be specified. When the I/O operation completes, a thread from the thread pool invokes the callback method.

    http://msdn.microsoft.com/en-us/library/aa720215(VS.71).aspx
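
    For what it's worth, the API that performs this association is ThreadPool.BindHandle, and FileStream calls it internally when opened for asynchronous I/O, which is likely why the example never does so explicitly. A sketch (the file name is illustrative):

        using System;
        using System.IO;

        class AsyncReadExample
        {
            static void Main()
            {
                // Opening with useAsync: true makes FileStream bind its OS handle
                // to the thread pool's I/O completion port (via ThreadPool.BindHandle),
                // so the callback below runs on an I/O thread-pool thread.
                using (var fs = new FileStream("data.bin", FileMode.Open,
                        FileAccess.Read, FileShare.Read, 4096, useAsync: true))
                {
                    var buffer = new byte[4096];
                    fs.BeginRead(buffer, 0, buffer.Length, ar =>
                    {
                        int n = fs.EndRead(ar); // completion callback
                        Console.WriteLine("Read " + n + " bytes");
                    }, null);
                    Console.ReadLine(); // keep the process alive for the callback
                }
            }
        }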

    Read the article

  • What is wrong with this converted code?

    - by Gum Slashy
    I'm developing a shape-identification project using JavaCV, and I have found some OpenCV code to identify U shapes in a particular image. I have tried to convert it into JavaCV, but it doesn't produce the same output. Can you please help me convert this OpenCV code into JavaCV?

    This is the OpenCV (Python) code:

        import cv2
        import numpy as np

        img = cv2.imread('sofud.jpg')
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        ret, thresh = cv2.threshold(gray, 127, 255, 1)
        contours, hierarchy = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

        for cnt in contours:
            x, y, w, h = cv2.boundingRect(cnt)
            if 10 < w / float(h) or w / float(h) < 0.1:
                cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

        cv2.imshow('res', img)
        cv2.waitKey(0)
        cv2.destroyAllWindows()

    This is the expected output.

    This is the code that I have converted:

        import com.googlecode.javacpp.Loader;
        import com.googlecode.javacv.CanvasFrame;
        import static com.googlecode.javacpp.Loader.*;
        import static com.googlecode.javacv.cpp.opencv_core.*;
        import static com.googlecode.javacv.cpp.opencv_imgproc.*;
        import static com.googlecode.javacv.cpp.opencv_highgui.*;
        import java.io.File;
        import javax.swing.JFileChooser;

        public class TestBeam {
            public static void main(String[] args) {
                CvMemStorage storage = CvMemStorage.create();
                CvSeq squares = new CvContour();
                squares = cvCreateSeq(0, sizeof(CvContour.class), sizeof(CvSeq.class), storage);
                JFileChooser f = new JFileChooser();
                int result = f.showOpenDialog(f); // show dialog box to choose a file
                File myfile = null;
                String path = "";
                if (result == 0) {
                    myfile = f.getSelectedFile();    // selected file
                    path = myfile.getAbsolutePath(); // path of the file
                }
                IplImage src = cvLoadImage(path); // here path is the actual path to the image
                IplImage grayImage = IplImage.create(src.width(), src.height(), IPL_DEPTH_8U, 1);
                cvCvtColor(src, grayImage, CV_RGB2GRAY);
                cvThreshold(grayImage, grayImage, 127, 255, CV_THRESH_BINARY);
                CvSeq cvSeq = new CvSeq();
                CvMemStorage memory = CvMemStorage.create();
                cvFindContours(grayImage, memory, cvSeq, Loader.sizeof(CvContour.class),
                        CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
                System.out.println(cvSeq.total());
                for (int i = 0; i < cvSeq.total(); i++) {
                    CvRect rect = cvBoundingRect(cvSeq, i);
                    int x = rect.x(), y = rect.y(), h = rect.height(), w = rect.width();
                    if (10 < (w / h) || (w / h) < 0.1) {
                        cvRectangle(src, cvPoint(x, y), cvPoint(x + w, y + h),
                                CvScalar.RED, 1, CV_AA, 0);
                        // cvSeqPush(squares, rect);
                    }
                }
                CanvasFrame cnvs = new CanvasFrame("Beam");
                cnvs.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
                cnvs.showImage(src);
                // cvShowImage("Final ", src);
            }
        }

    This is the output that I got. Can someone please help me solve this problem?
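
    Two differences from the Python version stand out (assumptions from reading the code, not a tested fix): in Java, w / h is integer division, so the ratio test never sees fractional values, and cvBoundingRect's second argument is an update flag, not an index; with the C API, contours are walked through h_next(). A sketch of the loop with both changed:

        // Sketch: iterate the contour list via h_next() and compare
        // floating-point aspect ratios, mirroring the Python w / float(h).
        for (CvSeq c = cvSeq; c != null && !c.isNull(); c = c.h_next()) {
            CvRect rect = cvBoundingRect(c, 0);
            int x = rect.x(), y = rect.y(), h = rect.height(), w = rect.width();
            double ratio = w / (double) h;
            if (ratio > 10 || ratio < 0.1) {
                cvRectangle(src, cvPoint(x, y), cvPoint(x + w, y + h),
                        CvScalar.RED, 1, CV_AA, 0);
            }
        }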

    Read the article

  • CUDA small-kernel 2D convolution - how to do it

    - by paulAl
    I've been experimenting with CUDA kernels for days to perform a fast 2D convolution between a 500x500 image (though I could also vary the dimensions) and a very small 2D kernel (a 3x3 Laplacian kernel, so it's too small to take real advantage of all the CUDA threads). I created a classic CPU implementation (two for loops, as straightforward as you would think) and then I started writing CUDA kernels. After a few disappointing attempts to perform a faster convolution, I ended up with this code: http://www.evl.uic.edu/sjames/cs525/final.html (see the Shared Memory section). It basically lets a 16x16 thread block load all the convolution data it needs into shared memory and then performs the convolution. Still, the CPU is a lot faster. I didn't try the FFT approach because the CUDA SDK states that it is efficient only with large kernel sizes. Whether or not you read everything I wrote, my question is: how can I perform a fast 2D convolution between a relatively large image and a very small kernel (3x3) with CUDA?
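
    For a kernel this small, a minimal sketch worth timing against the shared-memory version (the constant-memory placement and clamped borders are my assumptions, not from the linked code): one thread per output pixel, with the nine weights in constant memory where every thread reads the same value cheaply.

        // Sketch: 3x3 convolution, one thread per output pixel.
        // Kernel weights live in constant memory; borders are clamped.
        __constant__ float d_kernel[9];

        __global__ void conv3x3(const float *in, float *out, int width, int height)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x >= width || y >= height) return;

            float sum = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int xx = min(max(x + dx, 0), width - 1);
                    int yy = min(max(y + dy, 0), height - 1);
                    sum += d_kernel[(dy + 1) * 3 + (dx + 1)] * in[yy * width + xx];
                }
            out[y * width + x] = sum;
        }

    When timing, exclude the host-to-device transfers: for a single 500x500 image, the copies alone can cost more than the CPU convolution, which would explain the CPU appearing faster.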

    Read the article

  • How to scale a JPEG image down so that text is as clear as possible?

    - by Juha Syrjälä
    I have some JPEG images that I need to scale down to about 80% of the original size. The original image dimensions are about 700px × 1000px. The images contain some computer-generated text and possibly some graphics (similar to what you would find in corporate Word documents). How can I scale the images so that the text is as legible as possible? Currently we are scaling them down using bicubic interpolation, but that makes the text blurry and foggy.
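
    One combination worth trying (a sketch with Pillow, chosen only because the question names no language; the filter choice and unsharp parameters are illustrative assumptions): Lanczos resampling tends to keep edges crisper than bicubic at modest downscales, and a light unsharp mask afterwards restores some text contrast.

        # Sketch: Lanczos downscale to 80%, then a mild unsharp mask.
        from PIL import Image, ImageFilter

        img = Image.open("page.jpg")
        w, h = img.size
        out = img.resize((int(w * 0.8), int(h * 0.8)), Image.LANCZOS)
        out = out.filter(ImageFilter.UnsharpMask(radius=1, percent=80, threshold=2))
        out.save("page_small.jpg", quality=90)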

    Read the article

  • OpenAL device, buffer and context relationship

    - by Markus
    I'm trying to create an object-oriented model to wrap OpenAL and have a little problem understanding the devices, buffers and contexts. From what I can see in the Programmer's Guide, there are multiple devices, each of which can have multiple contexts as well as multiple buffers. Each context has a listener, and the alListener*() functions all operate on the listener of the active context. (Meaning that I have to make another context active first if I want to change its listener, if I got that right.) So far, so good. What irritates me, though, is that I need to pass a device to the alcCreateContext() function, but none to alGenBuffers(). How does this work then? When I open multiple devices, on which device are the buffers created? Are the buffers shared between all devices? What happens to the buffers if I close all open devices? (Or is there something I missed?)
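
    A short sketch of the relationship as I understand it (worth verifying against the Programmer's Guide): alGenBuffers has no device parameter because it acts on the device behind whichever context is current at the time of the call; buffers then belong to that device, are shared by all of its contexts (but not across devices), and are destroyed with the device.

        #include <AL/al.h>
        #include <AL/alc.h>

        void setup(void)
        {
            /* The current context (set via alcMakeContextCurrent) implicitly
               selects the device that alGenBuffers creates buffers on. */
            ALCdevice  *device  = alcOpenDevice(NULL);            /* default device */
            ALCcontext *context = alcCreateContext(device, NULL);
            alcMakeContextCurrent(context);

            ALuint buffer;
            alGenBuffers(1, &buffer); /* created on `device`, usable by any of its contexts */
        }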

    Read the article
