Search Results

Search found 340 results on 14 pages for 'opencv'.

  • Will the labels and their correspondences change if the image is rotated?

    - by Vinayak Agarwal
    Hi all, I have an image containing text such as "VINAYAK 123", written at a certain angle, say 30 degrees. When I extract the labels of the connected components, I get V-1, I-2, N-3, A-4, Y-5, A-6, K-7, 1-8, 2-9, 3-10 (format: character - label number). Now I rotate the image 30 degrees clockwise to make the text horizontal. If I extract the labels of the connected components again, will the character-to-label correspondence remain the same? Thanks in advance!
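
    A quick way to answer this is to label, rotate, relabel, and compare. Below is a minimal sketch of that experiment, assuming a modern OpenCV with cv::connectedComponents ("text.png" is a hypothetical input). Note that implementations typically assign labels in raster-scan order, so the numbering is not guaranteed to be stable under rotation.

        #include <opencv2/opencv.hpp>
        #include <iostream>

        int main() {
            cv::Mat img = cv::imread("text.png", cv::IMREAD_GRAYSCALE);  // hypothetical input
            cv::Mat bin;
            cv::threshold(img, bin, 128, 255, cv::THRESH_BINARY);

            cv::Mat labelsBefore;
            int nBefore = cv::connectedComponents(bin, labelsBefore);

            // Rotate 30 degrees clockwise about the image centre
            // (getRotationMatrix2D treats positive angles as counter-clockwise).
            cv::Point2f c(bin.cols / 2.0f, bin.rows / 2.0f);
            cv::Mat rot = cv::getRotationMatrix2D(c, -30.0, 1.0);
            cv::Mat rotated;
            cv::warpAffine(bin, rotated, rot, bin.size());

            cv::Mat labelsAfter;
            int nAfter = cv::connectedComponents(rotated, labelsAfter);

            // The component count should match (barring clipping artifacts),
            // but each character may well receive a different label number.
            std::cout << "components before: " << nBefore
                      << ", after: " << nAfter << std::endl;
            return 0;
        }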

  • What causes "Invalid Address specified to RtlFreeHeap"?

    - by carl
    The development environment is VS2008 and the language is C++. When I run the release build, it works fine at first, but after several minutes it stops and shows an error like this: HEAP[guessModel.exe]: Invalid Address specified to RtlFreeHeap( 003E0000, 7D7C737B ). Can anyone tell me the reason for this error? Thank you very much.
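
    For readers hitting the same message: the Windows heap raises this error when free/delete is handed an address the heap does not own, typically after a double free or a buffer overrun that corrupted the heap's bookkeeping. A sketch of the two most common culprits (this code is deliberately broken, for illustration only):

        #include <cstdlib>

        int main() {
            char* p = static_cast<char*>(malloc(16));
            free(p);
            free(p);       // double free: the second call hands the heap an
                           // address it no longer owns -> RtlFreeHeap complains

            int* q = new int[4];
            q[4] = 42;     // off-by-one overrun: corrupts heap metadata, often
                           // detected much later, inside some unrelated free()
            delete[] q;
            return 0;
        }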

  • How do I traverse a matrix returned by the OpenCV 2.0 Canny() function?

    - by user290613
    Hi all, I've been browsing the web for an answer but haven't been able to put my hand on one. I call the Canny function, which fills a matrix for me:

        Canny(src, dst, 220, 299, 3);

    The dst variable is now a matrix which is, as far as I can tell, of type CV_8UC1: 8-bit, 1 channel. Now I would like to traverse this matrix by rows and columns, and there was a trick I saw on Stack Overflow:

        dst.at<uchar*>(3, 5)

    In my case it crashes (Assertion failed, unknown function, cxmat.hpp line 450). My matrix information is: myMatrix.width() = 181, myMatrix.height() = 65. One more thing: when I do std::cout << dst.depth() << std::endl; it always prints 0 and not 8 (which I guess I'm supposed to get). Thank you very much to all who reply.
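
    For reference, two things are going on here: the element type of a CV_8UC1 matrix is uchar (not uchar*), and depth() returns the depth constant CV_8U, whose numeric value happens to be 0, so printing 0 is expected. A sketch of a traversal that should not trip the assertion ("input.png" is a stand-in input):

        #include <opencv2/opencv.hpp>
        #include <iostream>

        int main() {
            cv::Mat src = cv::imread("input.png", cv::IMREAD_GRAYSCALE);  // stand-in input
            cv::Mat dst;
            cv::Canny(src, dst, 220, 299, 3);

            for (int y = 0; y < dst.rows; ++y) {
                for (int x = 0; x < dst.cols; ++x) {
                    uchar v = dst.at<uchar>(y, x);  // at<uchar>, in (row, col) order
                    if (v > 0) {
                        // (x, y) lies on an edge
                    }
                }
            }
            std::cout << (dst.depth() == CV_8U) << std::endl;  // prints 1
            return 0;
        }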

  • Template Matching 2 ROI in a single video capture in real time

    - by YS
    Hi, I am working on a project to perform template matching on video captured via my webcam. I am able to create two templates from the webcam capture, but I am unable to perform template matching for both: the program runs with either template alone, but not with both. My program sequence is: capture from webcam; get template 1; get template 2; match template 1 against the webcam capture; then match template 2 against the webcam capture; if either fails, stop. Can any expert advise me on this?
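
    A common pitfall here is reusing a single result buffer (or template variable) for both matches. Below is a minimal sketch of the sequence described, assuming the C++ API; the 50x50 template ROIs and the 0.8 score threshold are illustrative stand-ins:

        #include <opencv2/opencv.hpp>

        // Returns true plus the best match location if the template clears the threshold.
        static bool matchOne(const cv::Mat& frame, const cv::Mat& templ, cv::Point& loc) {
            cv::Mat result;                       // separate buffer per call
            cv::matchTemplate(frame, templ, result, cv::TM_CCOEFF_NORMED);
            double maxVal;
            cv::minMaxLoc(result, NULL, &maxVal, NULL, &loc);
            return maxVal > 0.8;
        }

        int main() {
            cv::VideoCapture cap(0);
            cv::Mat frame;
            if (!cap.read(frame)) return 1;
            // Stand-in templates grabbed from the first frame.
            cv::Mat templ1 = frame(cv::Rect(0, 0, 50, 50)).clone();
            cv::Mat templ2 = frame(cv::Rect(60, 0, 50, 50)).clone();

            while (cap.read(frame)) {
                cv::Point p1, p2;
                bool ok1 = matchOne(frame, templ1, p1);   // independent matches,
                bool ok2 = matchOne(frame, templ2, p2);   // independent buffers
                if (!ok1 || !ok2) break;                  // "if fail, stop"
                cv::rectangle(frame, p1, p1 + cv::Point(50, 50), cv::Scalar(0, 255, 0));
                cv::rectangle(frame, p2, p2 + cv::Point(50, 50), cv::Scalar(0, 0, 255));
                cv::imshow("matches", frame);
                if (cv::waitKey(10) == 27) break;
            }
            return 0;
        }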

  • Extract co-ordinates from a vector of co-ordinates and save to file

    - by barsil sil
    I have a vector which contains a list of co-ordinates: x1,y1; x2,y2; ... xn,yn. I am trying to extract each individual co-ordinate pair and save them to a file in a nicely delimited form that can easily be read, or, even nicer, in a form I can plot in Excel etc. (as columns of x and y values). My original vector has size 31 and was constructed as vector<vector<Point> > myvector( previousVector.size() ); Thanks!
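
    A minimal sketch of one way to do this, flattening a vector<vector<cv::Point>> into a CSV file with one x,y pair per row (a format Excel and most plotting tools read directly); the file name is arbitrary:

        #include <opencv2/core/core.hpp>
        #include <fstream>
        #include <string>
        #include <vector>

        void savePoints(const std::vector<std::vector<cv::Point> >& pts,
                        const std::string& path) {
            std::ofstream out(path.c_str());
            out << "x,y\n";                              // header row, two columns
            for (size_t i = 0; i < pts.size(); ++i)
                for (size_t j = 0; j < pts[i].size(); ++j)
                    out << pts[i][j].x << "," << pts[i][j].y << "\n";
        }

        int main() {
            std::vector<std::vector<cv::Point> > v(1);
            v[0].push_back(cv::Point(10, 20));
            v[0].push_back(cv::Point(30, 40));
            savePoints(v, "points.csv");
            return 0;
        }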

  • What is the best way to detect white color?

    - by dnul
    I'm trying to detect white objects in a video. The first step is to filter the image so that only white pixels remain. My first approach was to use the HSV color space and check for a high level in the V channel. Here is the code:

        // convert image to HSV
        cvCvtColor( src, hsv, CV_BGR2HSV );
        cvCvtPixToPlane( hsv, h_plane, s_plane, v_plane, 0 );
        for (int x = 0; x < srcSize.width; x++) {
            for (int y = 0; y < srcSize.height; y++) {
                uchar* hue = &((uchar*)(h_plane->imageData + h_plane->widthStep * y))[x];
                uchar* sat = &((uchar*)(s_plane->imageData + s_plane->widthStep * y))[x];
                uchar* val = &((uchar*)(v_plane->imageData + v_plane->widthStep * y))[x];
                if (*val > 170)
                    *hue = 255;
                else
                    *hue = 0;
            }
        }

    leaving the result in the hue channel. Unfortunately, this approach is very sensitive to lighting. I'm sure there is a better way. Any suggestions?
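
    One common refinement: white is not just bright (high V) but also unsaturated (low S), so thresholding both channels together is usually less sensitive to illumination than V alone. A sketch using cv::inRange; the 0-60 saturation and 170-255 value bounds are illustrative and would need tuning per scene:

        #include <opencv2/opencv.hpp>

        cv::Mat whiteMask(const cv::Mat& bgr) {
            cv::Mat hsv, mask;
            cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
            // hue: anything; saturation: low; value: high
            cv::inRange(hsv, cv::Scalar(0, 0, 170), cv::Scalar(180, 60, 255), mask);
            return mask;  // 255 where the pixel counts as white, 0 elsewhere
        }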

  • cvWarpPerspective: given the transformation matrix, how do I extract the quad points?

    - by Stevecao
    I have the 3x3 transformation matrix that goes into cvWarpPerspective, and I would like to extract the four corner coordinates of the warped quad:

        CvMat* M;
        M = xxxxxxxxxxx; // matrix was generated by a certain process
        cvWarpPerspective( img, transformed, M,
                           CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS,
                           cvScalarAll( 0 ) );

    This creates a completely new image, transformed, on a black background; from this image I would like to know the four corner coordinates.
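
    Since the warp is defined entirely by the homography, the output quad can be computed by mapping the source image's four corners through M directly; there is no need to search the warped image. A minimal sketch assuming M is CV_32FC1 (cv::perspectiveTransform does the same job in one call):

        #include <opencv2/opencv.hpp>
        #include <stdio.h>

        void printWarpedCorners(CvMat* M, int width, int height) {
            float pts[4][2] = { {0, 0}, {(float)width, 0},
                                {(float)width, (float)height}, {0, (float)height} };
            for (int i = 0; i < 4; ++i) {
                float x = pts[i][0], y = pts[i][1];
                // multiply the homogeneous vector [x y 1] by the 3x3 matrix
                float X = CV_MAT_ELEM(*M, float, 0, 0) * x
                        + CV_MAT_ELEM(*M, float, 0, 1) * y
                        + CV_MAT_ELEM(*M, float, 0, 2);
                float Y = CV_MAT_ELEM(*M, float, 1, 0) * x
                        + CV_MAT_ELEM(*M, float, 1, 1) * y
                        + CV_MAT_ELEM(*M, float, 1, 2);
                float W = CV_MAT_ELEM(*M, float, 2, 0) * x
                        + CV_MAT_ELEM(*M, float, 2, 1) * y
                        + CV_MAT_ELEM(*M, float, 2, 2);
                printf("corner %d -> (%.1f, %.1f)\n", i, X / W, Y / W);
            }
        }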

  • Convolve a column vector

    - by Geoff
    This is an OpenCV2 question. I have a matrix representing a space curve:

        cv::Mat_<Point3f> points;

    I want to smooth it (using, for example, a Gaussian kernel). I have tried:

        cv::Mat_<Point3f> result;
        cv::GaussianBlur(points, result, cv::Size(4 * sigma, 1), sigma, sigma, cv::BORDER_WRAP);

    But I get the error: Assertion failed (columnBorderType != BORDER_WRAP)
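
    The filtering engine rejects BORDER_WRAP, but cv::copyMakeBorder supports it, so one workaround is to pad the curve cyclically yourself, blur with the default border, and crop the padding off. A sketch, assuming the curve is laid out as a single row (as the cv::Size(4 * sigma, 1) kernel suggests; for a column, swap the kernel dimensions and pad top/bottom instead). Note GaussianBlur also requires odd kernel sizes, so 4 * sigma is rounded up to the next odd value here:

        #include <opencv2/opencv.hpp>

        cv::Mat smoothClosedCurve(const cv::Mat& points, double sigma) {
            int k = 4 * (int)sigma + 1;            // odd kernel width
            int pad = k / 2;
            cv::Mat padded, blurred;
            cv::copyMakeBorder(points, padded, 0, 0, pad, pad, cv::BORDER_WRAP);
            cv::GaussianBlur(padded, blurred, cv::Size(k, 1), sigma, sigma);
            return blurred(cv::Rect(pad, 0, points.cols, points.rows)).clone();
        }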

  • Why do I get a segmentation fault?

    - by frx08
    Why do I get a segmentation fault in the following code?

        int main() {
            int height, width, step, step_mono, channels;
            int y, x;
            char str[15];
            uchar *data, *data_mono;
            CvMemStorage* storage = cvCreateMemStorage(0);
            CvSeq* contour = 0;
            CvPoint* p;
            CvFont font;
            CvCapture *capture;
            IplImage *frame = 0, *mono_thres = 0;

            capture = cvCaptureFromAVI("source.avi");  // capture video
            if (!cvGrabFrame(capture)) exit(0);
            frame = cvRetrieveFrame(capture);  // capture the first frame from video source
            cvNamedWindow("Result", 1);

            while (1) {
                mono_thres = cvCreateImage(cvGetSize(frame), 8, 1);
                height = frame->height;
                width = frame->width;
                step = frame->widthStep;
                step_mono = mono_thres->widthStep;
                channels = frame->nChannels;
                data = (uchar *)frame->imageData;
                data_mono = (uchar *)mono_thres->imageData;

                // converts the image to a binary highlighting the lightest zone
                for (y = 0; y < height; y++)
                    for (x = 0; x < width; x++)
                        data_mono[y*step_mono + x*1 + 0] = data[y*step + x*channels + 0];
                cvThreshold(mono_thres, mono_thres, 230, 255, CV_THRESH_BINARY);
                cvFindContours(mono_thres, storage, &contour, sizeof(CvContour),
                               CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);

                // gets the coordinates of the contours and draws a circle
                // and the coordinates in that point
                p = CV_GET_SEQ_ELEM(CvPoint, contour, 1);
                cvCircle(frame, *p, 1, CV_RGB(0,0,0), 2);
                cvInitFont(&font, CV_FONT_HERSHEY_PLAIN, 1.1, 1.1, 0, 1);
                sprintf(str, "(%d ,%d)", p->x, p->y);
                cvPutText(frame, str, cvPoint(p->x+5, p->y-5), &font, CV_RGB(0,0,0));
                cvShowImage("Result", mono_thres);

                // next frame
                if (!cvGrabFrame(capture)) break;
                frame = cvRetrieveFrame(capture);
                if ((cvWaitKey(10) & 255) == 27) break;
            }
            cvReleaseCapture(&capture);
            cvDestroyWindow("Result");
            return 0;
        }
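
    A hedged guess at the cause, for readers hitting the same crash: cvFindContours leaves contour as NULL whenever a frame contains no blobs above the threshold, and CV_GET_SEQ_ELEM(CvPoint, contour, 1) then dereferences a null (or too-short) sequence; cvCaptureFromAVI returning NULL for a missing file would crash the same way. A minimal guard around the drawing step:

        int n = cvFindContours(mono_thres, storage, &contour, sizeof(CvContour),
                               CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
        if (contour != NULL && contour->total > 1) {  // element 1 needs >= 2 points
            p = CV_GET_SEQ_ELEM(CvPoint, contour, 1);
            // ... draw the circle and text as before ...
        }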

  • Memory leaks getting sub-images from video (cvGetSubRect)

    - by dnul
    Hi, I'm trying to do video windowing, that is: show all frames from a video and also a sub-image of each frame. This sub-image can change size and be taken from a different position in the original frame. The code I've written basically does this:

    1. cvQueryFrame to get a new image from the video
    2. Create a new IplImage (img) with the sub-image dimensions (window.height, window.width)
    3. Create a new CvMat (mat) with the sub-image dimensions
    4. CvGetSubRect(originalImage, mat, window) seizes the sub-image
    5. Transform mat (CvMat) to img (IplImage) using cvGetImage

    My problem is that for each frame I create a new IplImage and CvMat, which takes a lot of memory, and when I try to free the allocated memory I get a segmentation fault, or, in the case of the CvMat, the allocated space never gets freed (valgrind keeps telling me it is definitely lost). The following code does it:

        int main(void) {
            CvCapture* capture;
            CvRect window;
            CvMat* tmp;

            // window size
            window.x = 0; window.y = 0; window.height = 100; window.width = 100;
            IplImage *src = NULL, *bk = NULL, *sub = NULL;
            capture = cvCreateFileCapture("somevideo.wmv");

            while ((src = cvQueryFrame(capture)) != NULL) {
                cvShowImage("common", src);

                // get sub-image
                sub = cvCreateImage(cvSize(window.height, window.width), 8, 3);
                tmp = cvCreateMat(window.height, window.width, CV_8UC1);
                cvGetSubRect(src, tmp, window);
                sub = cvGetImage(tmp, sub);
                cvShowImage("Window", sub);

                // free space
                if (bk != NULL) cvReleaseImage(&bk);
                bk = sub;
                cvReleaseMat(&tmp);
                cvWaitKey(20);

                // window dimensions change
                window.width++;
                window.height++;
            }
        }

    cvReleaseMat(&tmp); does not seem to have any effect on the total amount of lost memory: valgrind reports the same amount of "definitely lost" memory whether or not I comment out that line. cvReleaseImage(&bk); produces a segmentation fault. Notice I'm trying to free the previous sub-frame, which I back up in the bk variable. If I comment this line out, the program runs smoothly, but with lots of memory leaks. I really need to get rid of the memory leaks. Can anyone explain how to correct this, or, even better, how to correctly perform image windowing? Thank you
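
    If I read the C API semantics right, the root cause is that neither cvGetSubRect nor cvGetImage copies pixels: they only fill headers that point at existing data. So the buffer cvCreateImage allocated for sub is orphaned when cvGetImage overwrites its header (the "definitely lost" blocks), and cvReleaseImage(&bk) later tries to free pixels owned by the already-released CvMat (the segfault). A sketch of a copy-based loop body that avoids both, using a stack-allocated header so there is nothing extra to release:

        CvMat header;                                  // stack header: nothing to free
        CvMat* view = cvGetSubRect(src, &header, window);
        IplImage* sub = cvCreateImage(cvSize(window.width, window.height), 8, 3);
        cvCopy(view, sub, NULL);                       // deep copy into sub's own buffer
        cvShowImage("Window", sub);
        cvReleaseImage(&sub);                          // safe now: sub owns its pixels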

  • Threshold of blurry image - part 2

    - by 1''
    How can I threshold this blurry image to make the digits as clear as possible? In a previous post, I tried adaptively thresholding a blurry image (left), which resulted in distorted and disconnected digits (right). Since then, I've tried using a morphological closing operation as described in this post to make the brightness of the image uniform. If I adaptively threshold this image, I don't get significantly better results. However, because the brightness is approximately uniform, I can now use an ordinary threshold. This is a lot better than before, but I have two problems:

    1. I had to choose the threshold value manually. Although the closing operation results in uniform brightness, the level of brightness might be different for other images.
    2. Different parts of the image would do better with slight variations in the threshold level. For instance, the 9 and 7 in the top left come out partially faded and need a lower threshold, while some of the 6s have fused into 8s and need a higher threshold.

    I thought that going back to an adaptive threshold, but with a very large block size (1/9th of the image), would solve both problems. Instead, I end up with a weird "halo effect" where the centre of the image is a lot brighter, but the edges are about the same as the normally thresholded image. Edit: remi suggested morphologically opening the thresholded image at the top right of this post. This doesn't work too well: using elliptical kernels, only a 3x3 is small enough to avoid obliterating the image entirely, and even then there are significant breakages in the digits.
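
    For readers following along, a sketch of the two variants discussed, assuming the C++ API: Otsu's method picks a global threshold automatically from the histogram (which addresses problem 1 once the brightness has been made uniform), and adaptiveThreshold with a large odd blockSize reproduces the "1/9th of the image" experiment. The constants are illustrative:

        #include <opencv2/opencv.hpp>

        void thresholdVariants(const cv::Mat& gray) {
            cv::Mat otsu, adaptive;

            // Global threshold chosen automatically (the 0 is ignored with THRESH_OTSU).
            cv::threshold(gray, otsu, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

            // Locally adaptive threshold; blockSize must be odd, hence the | 1.
            int blockSize = (gray.cols / 9) | 1;
            cv::adaptiveThreshold(gray, adaptive, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                                  cv::THRESH_BINARY, blockSize, 2 /* offset C */);
        }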

  • Image rotation OpenCV error

    - by avd
    When I use this code to rotate an image, the destination image size stays the same, so the image gets clipped. Please show me a way or a code snippet to resize the destination accordingly (as MATLAB does in imrotate), so that the image does not get clipped and the outlier pixels are filled with white instead of black.

        void imrotate(std::string imgPath, std::string angleStr, std::string outPath) {
            size_t found1, found2;
            found1 = imgPath.find_last_of('/');
            found2 = imgPath.size() - 4;

            IplImage* src = cvLoadImage(imgPath.c_str(), -1);
            IplImage* dst;
            dst = cvCloneImage( src );

            int angle = atoi(angleStr.c_str());
            CvMat* rot_mat = cvCreateMat(2, 3, CV_32FC1);
            CvPoint2D32f center = cvPoint2D32f( src->width/2, src->height/2 );
            double scale = 1;
            cv2DRotationMatrix( center, angle, scale, rot_mat );
            cvWarpAffine( src, dst, rot_mat );

            char angStr[4];
            sprintf(angStr, "%d", angle);
            cvSaveImage(std::string(outPath + imgPath.substr(found1+1, found2-found1-1)
                                    + "_" + angStr + ".jpg").c_str(), dst);

            cvReleaseImage(&src);
            cvReleaseImage(&dst);
            cvReleaseMat( &rot_mat );
        }

    (The original and rotated example images were attached to the post.)
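
    A sketch of the "loose" rotation (what MATLAB's imrotate does): grow the destination to the rotated bounding box, shift the matrix's translation column so the result stays centred, and fill the outliers with white:

        #include <opencv/cv.h>
        #include <math.h>

        IplImage* rotateLoose(IplImage* src, double angleDeg) {
            double rad = angleDeg * CV_PI / 180.0;
            double c = fabs(cos(rad)), s = fabs(sin(rad));
            int newW = (int)(src->width * c + src->height * s + 0.5);
            int newH = (int)(src->width * s + src->height * c + 0.5);

            CvMat* rot = cvCreateMat(2, 3, CV_32FC1);
            cv2DRotationMatrix(cvPoint2D32f(src->width / 2.0f, src->height / 2.0f),
                               angleDeg, 1.0, rot);
            // shift so the old centre maps onto the centre of the larger canvas
            CV_MAT_ELEM(*rot, float, 0, 2) += (newW - src->width) / 2.0f;
            CV_MAT_ELEM(*rot, float, 1, 2) += (newH - src->height) / 2.0f;

            IplImage* dst = cvCreateImage(cvSize(newW, newH), src->depth, src->nChannels);
            cvWarpAffine(src, dst, rot,
                         CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS,
                         cvScalarAll(255));  // white fill for outlier pixels
            cvReleaseMat(&rot);
            return dst;
        }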

  • Image processing problem

    - by riyana
    I'm working on detecting the shape of an object. I have a binary image where the background is white pixels and the foreground/object is black pixels. Now I need to detect the shape of the area covered by black pixels. How can I do it? The shape may be that of a man, a car, a box, etc. Please help.
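
    A common starting point is to trace the outline of the black region with findContours (which expects a white foreground, hence the inversion below) and then summarise the outline, for example with a polygon approximation or Hu moments that can be matched against known shapes. A minimal sketch assuming the C++ API, with "binary.png" as a stand-in input:

        #include <opencv2/opencv.hpp>
        #include <vector>

        int main() {
            cv::Mat bin = cv::imread("binary.png", cv::IMREAD_GRAYSCALE);
            cv::Mat inverted = 255 - bin;  // make the object white, background black

            std::vector<std::vector<cv::Point> > contours;
            cv::findContours(inverted, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

            for (size_t i = 0; i < contours.size(); ++i) {
                std::vector<cv::Point> poly;
                cv::approxPolyDP(contours[i], poly, 3.0, true);  // simplified outline
                cv::Moments m = cv::moments(contours[i]);
                double hu[7];
                cv::HuMoments(m, hu);  // rotation/scale-invariant shape descriptor
            }
            return 0;
        }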

  • What happens if I give more inputs to estimateRigidTransform or getAffineTransform?

    - by user3531608
    I am using estimateRigidTransform with two vectors of about 100 points each, and it works FINE. But somehow getAffineTransform doesn't work. I know that findHomography finds the best matrix using RANSAC and that getPerspectiveTransform needs only 4 points. My question is: what happens if I give more inputs to estimateRigidTransform or getAffineTransform? Does it take only 4 points from the input matrix, or does it do some kind of RANSAC?
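
    For what it's worth, my understanding (worth verifying against your OpenCV version) is that getAffineTransform is an exact solver that takes precisely 3 point pairs, which would explain why it fails on larger vectors, while estimateRigidTransform consumes whole point vectors and fits robustly, discarding outliers. A sketch of both, with synthetic data standing in for real matches:

        #include <opencv2/opencv.hpp>
        #include <vector>

        int main() {
            std::vector<cv::Point2f> src(100), dst(100);
            for (int i = 0; i < 100; ++i) {          // synthetic correspondences:
                src[i] = cv::Point2f((float)(i % 10), (float)(i / 10));
                dst[i] = src[i] + cv::Point2f(5.0f, 3.0f);   // a pure translation
            }

            // Exact solve from exactly 3 correspondences -- no more, no fewer.
            cv::Point2f s3[3] = { src[0], src[10], src[25] };
            cv::Point2f d3[3] = { dst[0], dst[10], dst[25] };
            cv::Mat exact = cv::getAffineTransform(s3, d3);

            // Robust fit over all 100 pairs (fullAffine = true for 6 DOF).
            // Later OpenCV versions replace this with cv::estimateAffine2D.
            cv::Mat robust = cv::estimateRigidTransform(src, dst, true);
            return 0;
        }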

  • Would opencv be a good choice for image colour summarization?

    - by codecowboy
    I would like to analyse a set of hundreds of thousands of product images (clothing, electronic goods, etc.) and retrieve the dominant colours in each. I'm only interested in the top 3 or 4 colours. The aim is to achieve a degree of certainty that image x is mostly red, or that image y is mostly orange and blue. The images are likely to be colour JPEGs of reasonable quality, approximately 100 KB in size. I would like to use C#, and the solution should run on a Linux server, preferably using open source libraries. Would OpenCV be a good choice for this? What other libraries or specific algorithms might be helpful?
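
    One standard approach is to cluster the pixels with k-means and read off the top clusters as the dominant colours. Sketched below in C++ for brevity (wrappers such as Emgu CV expose the same calls from C#); k = 4 and the file name are illustrative:

        #include <opencv2/opencv.hpp>
        #include <iostream>

        int main() {
            cv::Mat img = cv::imread("product.jpg");                 // stand-in input
            cv::Mat samples = img.reshape(1, img.rows * img.cols);   // one pixel per row
            samples.convertTo(samples, CV_32F);                      // kmeans wants floats

            int k = 4;
            cv::Mat labels, centers;
            cv::kmeans(samples, k, labels,
                       cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
                       3, cv::KMEANS_PP_CENTERS, centers);

            // Each row of 'centers' is one dominant colour in BGR; rank the clusters
            // by how many pixels they own to get the top 3 or 4.
            for (int i = 0; i < k; ++i)
                std::cout << "colour " << i << ": " << centers.row(i) << std::endl;
            return 0;
        }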

  • Depending on a fixed version of a library and ignoring its updates

    - by Moataz Elmasry
    I was talking to a technical boss yesterday about a C++ project that depends on OpenCV. He wanted to check a specific OpenCV version into the SVN and keep using that version, ignoring any updates, which I disagreed with. We had a heated discussion about it.

    His arguments:
    - Everything has to be delivered in one package, and we can't ask the client to install external libraries.
    - We depend on a fixed version so that new updates of OpenCV won't break our code. We can't guarantee that within a version update, e.g. from 3.2.buildx to 3.2.buildy, the function signatures won't change.

    My arguments:
    - True, everything has to be delivered to the client as one package, but that's what build scripts are for: they download the external libraries and create a bundle.
    - Within updates of the same version, 3.2.buildx to 3.2.buildy, it is practically impossible for a signature to change, unless it is a really crappy framework, which isn't the case with OpenCV.
    - We deprive ourselves of new updates and features of the library, and if there's a bug in the version we took, then even when a fix is released later, we won't be able to get it.
    - It is simply inefficient and anti-design to depend on a specific version/build of an external library, as it makes it difficult for the project to adapt to future changes.

    So I'd like to know what you guys think: does it really make sense to include a specific version of an external library in our SVN and keep using it, ignoring all updates?

  • Video capture performance

    - by volting
    I have noticed high CPU utilization in a number of applications (except mplayer) which read from the embedded webcam on my laptop. Bizarrely, CPU utilization varies proportionally to the level of illumination present. I know that the high CPU usage has nothing to do with rendering the video, as I have written a simple app using the OpenCV library to simply grab frames from the webcam, and CPU usage is still high. I think that mplayer might be using my GPU (and the other apps aren't), but since it's not an issue with rendering, I don't think this explains anything.

        Cheese:      low light ~12% CPU | bright light ~63% CPU
        Camorama:    low light  ~7% CPU | bright light ~30% CPU
        OpenCV C++:  low light ~13% CPU | bright light ~40% CPU
                     (display in a single highgui window; same test on Windows 7: 4-9%)
        Mplayer:     1-2% regardless of light levels

    Note: if all I wanted to do was capture a feed from my webcam, I would use mplayer and forget about it, but I'm developing an application which uses OpenCV to capture a video feed among other things, so performance is important.
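
    For reference, a minimal frame-grab loop of the kind described above: no processing at all, so any CPU cost comes purely from capture plus display.

        #include <opencv2/opencv.hpp>

        int main() {
            cv::VideoCapture cap(0);           // embedded webcam
            if (!cap.isOpened()) return 1;
            cv::Mat frame;
            while (cap.read(frame)) {
                cv::imshow("feed", frame);     // single highgui window
                if (cv::waitKey(10) == 27) break;  // Esc quits
            }
            return 0;
        }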

  • Linking Libraries in iOS?

    - by Joey Green
    This is probably a totally noob question, but I have missing links in my mind when thinking about linking libraries in iOS. I usually just add a new library that's been cross-compiled and set the build and linker paths without really knowing what I'm doing. I'm hoping someone can help me fill in some gaps. Let's take the OpenCV library, for instance. I have this totally working, btw, thanks to a really well written tutorial ( http://niw.at/articles/2009/03/14/using-opencv-on-iphone/en ), but I just want to know what exactly is going on. What I think is happening: when I build OpenCV for iOS, I'm creating object code that gets placed in the .a files. This object code is just the implementation files (.m) compiled. One reason you would want to do this is to make it hard to see the source code, and so that you don't have to compile that source code every time. The .h files won't be put in the library (.a). You include the .h in your source files, and these header files communicate with the object code library (.a) in some way. You also have to include the header files for your library in the build path and the library itself in the linker path. So, is the way I view linking libraries correct? If not, can someone correct me?
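
    A toy illustration of the header/.a split described above (all names and paths are made up for the example; Xcode drives the same steps through its build and linker settings):

        // ---- mylib.h: shipped alongside the .a and added to the header search path
        int add(int a, int b);

        // ---- mylib.cpp: compiled once into the library, never seen by clients
        int add(int a, int b) { return a + b; }

        // ---- app.cpp: the client includes the header and links against the archive
        #include "mylib.h"
        int main() { return add(2, 3); }

        // Typical command-line equivalents of the build steps:
        //   clang++ -c mylib.cpp -o mylib.o          # implementation -> object code
        //   ar rcs libmylib.a mylib.o                # object code -> static archive (.a)
        //   clang++ app.cpp -I. -L. -lmylib -o app   # header path + linker path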

  • Embedded computer vision platforms

    - by Egon
    Hi folks, I am planning to start a computer-vision-based project on a smartphone platform. I know iPhone ( http://niw.at/articles/2009/03/14/using-opencv-on-iphone/en ) and Android ( http://github.com/billmccord/OpenCV-Android ) have OpenCV support. I am interested in hearing about your experience with the level of integration, support, and ease of building good apps on either platform. I also want to consider Windows Phone 7 (and Zune) as a platform: are there any computer vision libraries or good development tools for it (does AForge.NET work, or is there any other good suggestion)? Also, can you suggest some popular augmented reality apps which use cutting-edge technology? (I am aware of http://www.pranavmistry.com/projects/sixthsense/ ) Thanks in advance!
