Search Results

Search found 34280 results on 1372 pages for 'image search'.


  • Lucene.Net PrefixQuery

    - by Sole
    Hi, I'm developing a suggest box for my site's search service. It has to search fields like these:

        Visual Basic Enterprise Edition
        Visual C++
        Visual J++

    My code is:

        Directory dir = Lucene.Net.Store.FSDirectory.GetDirectory("Index", false);
        IndexSearcher searcher = new Lucene.Net.Search.IndexSearcher(dir, true);
        Term term = new Term("nombreAnalizado", _que);
        PrefixQuery query = new PrefixQuery(term);
        TopDocs topDocs = searcher.Search(query, 10000);

    This code works in one case: "Enterprise" will match "Visual Basic Enterprise Edition". But "Enterprise E" doesn't match anything. I removed white space at indexing time and when the user searches. Thanks.
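
    A single PrefixQuery treats the entire input, spaces and all, as one prefix, which is consistent with "Enterprise E" matching nothing. The usual shape of a suggest-box match is to treat only the last token as a prefix (in Lucene terms, term queries for the leading tokens plus a prefix query for the last one, combined in a boolean query). Below is a tiny, library-free Python sketch of that matching rule, just to pin down the intended behaviour; the function and variable names are mine, not part of the asker's code.

        def prefix_phrase_match(query, document):
            # Every token but the last must match a whole word; the last token is a prefix.
            q_tokens = query.lower().split()
            if not q_tokens:
                return False
            d_tokens = document.lower().split()
            whole, prefix = q_tokens[:-1], q_tokens[-1]
            return (all(t in d_tokens for t in whole)
                    and any(t.startswith(prefix) for t in d_tokens))

        # prefix_phrase_match("enterprise e", "Visual Basic Enterprise Edition")  -> True
        # prefix_phrase_match("enterprise",   "Visual Basic Enterprise Edition")  -> True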

    Read the article

  • Yii - Custom GridView with Multiple Tables

    - by savinger
    So, I've extended GridView to include an Advanced Search feature tailored to the needs of my organization.

        - Filter: lets you show/hide columns in the table, and you can also reorder columns by dragging the little drag icon to the left of each item.
        - Sort: allows the selection of multiple columns, ascending or descending.
        - Search: select your column and enter search parameters, with operators tailored to the data type of the selected column.

    Version 1 works, albeit slowly. Basically, I had my hands in the inner workings of CGridView, where I snatch the results from the DataProvider and do the searching and sorting in PHP before rendering the table contents. Now I'm writing Version 2, where I aim to focus on clever CDbCriteria creation, letting MySQL do the heavy lifting so it will run quicker. The implementation is trivial when dealing with a single database table. The difficulty arises when I'm dealing with 2 or more tables. For example, if the user intends to search on a field that is a STAT relation, I need that relation to be present in my query.

    Here's the question: how do I ensure that Yii includes all the "with" relations in my query so that I can add comparisons against them? I've included all my relations in the criteria in the model's search function and I've tried CDbCriteria's "together":

        public function search() {
            $criteria = new CDbCriteria;
            $criteria->compare('id', $this->id);
            $criteria->compare( ...
            ...
            $criteria->with = array('relation1', 'relation2', 'relation3');
            $criteria->together = true;
            return new CActiveDataProvider(get_class($this), array(
                'criteria' => $criteria,
                'pagination' => array('pageSize' => 50)
            ));
        }

    But I still get errors like this:

        CDbCommand failed to execute the SQL statement: SQLSTATE[42S22]: Column not found:
        1054 Unknown column 't.relation3' in 'where clause'. The SQL statement executed was:
        SELECT COUNT(DISTINCT `t`.`id`) FROM `table` `t`
        LEFT OUTER JOIN `relation_table` `relation0` ON (`t`.`id`=`relation0`.`id`)
        LEFT OUTER JOIN `relation_table` `relation1` ON (`t`.`id`=`relation1`.`id`)
        WHERE (`t`.`relation3` < 1234567890)

    Here relation0 and relation1 are BELONGS_TO relations, but any STAT relations are missing. Furthermore, why is the query a SELECT COUNT(DISTINCT `t`.`id`)?

    Read the article

  • How to Display a Bmp in an RTF Control in VB.NET

    - by Gerolkae
    I started with this C# question. I'm trying to display a bmp image inside an RTF box for a bot program I'm making. The function below is supposed to convert a bitmap to RTF code, which is then inserted into another RTF-formatted string with additional text, kind of like smilies being used in a chat program. For some reason the output of this function gets rejected by the RTF box and vanishes completely. I'm not sure if it's the way I'm converting the bmp to a binary string or if it's tied in with the header tags.

        'returns the RTF string representation of our picture
        Public Shared Function PictureToRTF(ByVal Bmp As Bitmap) As String
            Dim stream As New MemoryStream()
            Bmp.Save(stream, System.Drawing.Imaging.ImageFormat.Bmp)
            Dim bytes As Byte() = stream.ToArray()
            Dim str As String = BitConverter.ToString(bytes, 0).Replace("-", String.Empty)
            'header to string we want to insert
            Using g As Graphics = Main.CreateGraphics()
                xDpi = g.DpiX
                yDpi = g.DpiY
            End Using
            Dim _rtf As New StringBuilder()
            ' Calculate the current width of the image in (0.01)mm
            Dim picw As Integer = CInt(Math.Round((Bmp.Width / xDpi) * HMM_PER_INCH))
            ' Calculate the current height of the image in (0.01)mm
            Dim pich As Integer = CInt(Math.Round((Bmp.Height / yDpi) * HMM_PER_INCH))
            ' Calculate the target width of the image in twips
            Dim picwgoal As Integer = CInt(Math.Round((Bmp.Width / xDpi) * TWIPS_PER_INCH))
            ' Calculate the target height of the image in twips
            Dim pichgoal As Integer = CInt(Math.Round((Bmp.Height / yDpi) * TWIPS_PER_INCH))
            ' Append values to RTF string
            _rtf.Append("{\pict\wbitmap0")
            _rtf.Append("\picw")
            _rtf.Append(Bmp.Width.ToString)   ' _rtf.Append(picw.ToString)
            _rtf.Append("\pich")
            _rtf.Append(Bmp.Height.ToString)  ' _rtf.Append(pich.ToString)
            _rtf.Append("\wbmbitspixel24\wbmplanes1")
            _rtf.Append("\wbmwidthbytes40")
            _rtf.Append("\picwgoal")
            _rtf.Append(picwgoal.ToString)
            _rtf.Append("\pichgoal")
            _rtf.Append(pichgoal.ToString)
            _rtf.Append("\bin ")
            _rtf.Append(str.ToLower & "}")
            Return _rtf.ToString
        End Function
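
    One thing worth noting, as a guess rather than a verified diagnosis: in RTF, hex-encoded picture data is the default and needs no \bin keyword, while \binN means raw binary bytes follow, so combining \bin with a hex string is a plausible reason for the control dropping the group. A commonly used alternative to the \wbitmap0 header is a \pngblip picture with hex data; here is a hedged Python sketch of building such a group (whether a given RTF control renders \pngblip depends on its RTF support, and the function name is mine):

        import binascii

        def png_to_rtf_pict(png_bytes, width_px, height_px, dpi=96):
            # \picwgoal/\pichgoal are in twips (1/1440 inch); the hex data follows
            # directly, with no \bin keyword, because hex is RTF's default encoding.
            twips_w = int(round(width_px * 1440.0 / dpi))
            twips_h = int(round(height_px * 1440.0 / dpi))
            hex_data = binascii.hexlify(png_bytes).decode("ascii")
            return ("{\\pict\\pngblip"
                    "\\picw%d\\pich%d"
                    "\\picwgoal%d\\pichgoal%d "
                    "%s}" % (width_px, height_px, twips_w, twips_h, hex_data))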

    Read the article

  • Django select max id

    - by pistacchio
    Hi, given a standard model (called Image) with an auto-set 'id', how do I get the max id? So far I've tried:

        max_id = Image.objects.all().aggregate(Max('id'))

    but I get an 'id__max' KeyError. Trying

        max_id = Image.objects.order_by('id')[0].id

    gives an 'argument 2 to map() must support iteration' exception. Any help?
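
    A small sketch of the usual ways to read the maximum id, assuming the asker's Image model is importable; note that aggregate() returns a dictionary, so the value is read out by its default 'id__max' key or by an explicit alias:

        from django.db.models import Max

        max_id = Image.objects.aggregate(Max('id'))['id__max']
        # or, naming the key yourself:
        max_id = Image.objects.aggregate(highest=Max('id'))['highest']
        # or, without aggregation (note the minus sign for descending order):
        max_id = Image.objects.order_by('-id')[0].id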

    Read the article

  • Proper snowball analyzer configuration when using Grails Searchable Plugin

    - by Wirsbro
    To improve stemming we want to switch from the default analyzer to snowball; however, we're having a lot of difficulty with the proper settings and would appreciate any help.

    Environment:
        - Sun's Java 1.6.16
        - Grails 1.2.2
        - Searchable Plug-In 0.5.5

    Config.groovy (have tried both settings):

        compassSettings = ['compass.engine.analyzer.stemmed.type': 'snowball',
                           'compass.engine.analyzer.stemmed.name': 'English']

        compassSettings = ['compass.engine.analyzer.snowball.type': 'snowball',
                           'compass.engine.analyzer.snowball.name': 'English',
                           'compass.engine.analyzer.search.type': 'snowball',
                           'compass.engine.analyzer.search.name': 'English']

    Search.groovy (the invocation):

        def searchResult = searchableService.search(params.q, withHighlighter: { highlighter, index, sr ->
            if (!sr.highlights) { sr.highlights = [] }
            try {
                sr.highlights[index] = highlighter.fragments("content")[0..2].join(" ")
            } catch (IndexOutOfBoundsException ex) {
                sr.highlights[index] = highlighter.fragment("content")
            }
        })
        def suggestion = searchableService.suggestQuery(params.q)
        if (suggestion != params.q) {
            searchResult.suggestedQuery = suggestion
        }

    Read the article

  • Struts2: Reading from database and populating JSP with results

    - by teehoo
    For a school project I am creating a simple search engine (using Struts2), where I read from a database and redirect the user to a new page that shows the results. My struts.xml file is as follows:

        <action name="searchRooms" class="cz.vutbr.fit.Search" method="execute">
            <result name="success">/pages/showSearchResults.jsp</result>
            <result name="input">/pages/search.jsp</result>
        </action>

    I have no idea what to search for on Google to achieve this. I'm looking for a simple answer or some keywords to use for searching on Google.

    Read the article

  • GTK: Setting a GtkWindow Background from a gtkrc File

    - by PP
    I am trying to set a background image on a GtkWindow through a gtkrc file using the pixmap engine, but it is not working out. Following is the rc file:

        style "theme-window" = "default" {
            xthickness = 1
            ythickness = 1
            GtkButton::inner-border = {10, 10, 10, 10}
            text[NORMAL] = "#FFFFFF"
            text[ACTIVE] = "#000000"
            text[PRELIGHT] = "#FFFFFF"
            text[INSENSITIVE] = "#787878"
            text[SELECTED] = "#FFFFFF"
            engine "pixmap" {
                image {
                    function = FLAT_BOX
                    state = NORMAL
                    recolorable = TRUE
                    file = "NarrowVideo.png"
                    border = { 0, 0, 0, 0 }
                    stretch = TRUE
                }
                image {
                    function = FLAT_BOX
                    state = ACTIVE
                    recolorable = TRUE
                    file = "NarrowVideo.png"
                    border = { 0, 0, 0, 0 }
                    stretch = TRUE
                }
            }
        }
        class "GtkWindow" style "theme-window"

    Read the article

  • Code golf: the Mandelbrot set

    - by Stefano Borini
    Usual rules for the code golf. Here is an implementation in Python as an example:

        from PIL import Image
        im = Image.new("RGB", (300, 300))
        for i in xrange(300):
            print "i = ", i
            for j in xrange(300):
                x0 = float(4.0 * float(i - 150) / 300.0 - 1.0)
                y0 = float(4.0 * float(j - 150) / 300.0 + 0.0)
                x = 0.0
                y = 0.0
                iteration = 0
                max_iteration = 1000
                while (x * x + y * y <= 4.0 and iteration < max_iteration):
                    xtemp = x * x - y * y + x0
                    y = 2.0 * x * y + y0
                    x = xtemp
                    iteration += 1
                if iteration == max_iteration:
                    value = 255
                else:
                    value = iteration * 10 % 255
                print value
                im.putpixel((i, j), (value, value, value))
        im.save("image.png", "PNG")

    The result should look like this. Use of an image library is allowed. Alternatively, you can use ASCII art. This code does the same:

        for i in xrange(40):
            line = []
            for j in xrange(80):
                x0 = float(4.0 * float(i - 20) / 40.0 - 1.0)
                y0 = float(4.0 * float(j - 40) / 80.0 + 0.0)
                x = 0.0
                y = 0.0
                iteration = 0
                max_iteration = 1000
                while (x * x + y * y <= 4.0 and iteration < max_iteration):
                    xtemp = x * x - y * y + x0
                    y = 2.0 * x * y + y0
                    x = xtemp
                    iteration += 1
                if iteration == max_iteration:
                    line.append(" ")
                else:
                    line.append("*")
            print "".join(line)

    The result: 40 rows of 80 characters, with '*' outside the set and blanks inside.
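
    Not a golf entry, but as a reference point the same escape-time computation can be written without explicit pixel loops; a minimal NumPy sketch using the same grid and iteration limit as the 300x300 example above (NumPy assumed):

        import numpy as np

        n, max_iteration = 300, 1000
        x0 = 4.0 * (np.arange(n) - n // 2) / n - 1.0     # same x0 as the reference code
        y0 = 4.0 * (np.arange(n) - n // 2) / n           # same y0 as the reference code
        c = x0[:, None] + 1j * y0[None, :]
        z = np.zeros_like(c)
        counts = np.zeros(c.shape, dtype=int)
        for _ in range(max_iteration):
            inside = np.abs(z) <= 2.0                    # |z| <= 2  is  x*x + y*y <= 4
            z[inside] = z[inside] ** 2 + c[inside]
            counts[inside] += 1
        # Same grey-level mapping as the reference implementation.
        value = np.where(counts == max_iteration, 255, (counts * 10) % 255)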

    Read the article

  • UISearchBar: turning off case sensitivity?

    - by Flodev03
    Hi all, in a table view I have set up a UISearchBar, set the delegate and added the protocol. When the user types a word everything is OK, except that searching for "tennis" and "Tennis" is not the same. How can I make the search bar case-insensitive? I have searched a lot; please help, it would be great. Here is my code, where I think everything happens:

        - (void)searchBar:(UISearchBar *)searchBar textDidChange:(NSString *)searchText {
            [tableData removeAllObjects]; // remove all data that belongs to previous search
            if ([searchText isEqualToString:@""] || searchText == nil) {
                [myTableView reloadData];
                return;
            }
            NSInteger counter = 0;
            for (NSString *name in dataSource) {
                NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
                NSRange r = [name rangeOfString:searchText];
                if (r.location != NSNotFound)
                    [tableData addObject:name];
                counter++;
                [pool release];
            }
            [myTableView reloadData];
        }

    Thanks to all!

    Read the article

  • Can I have a macro run whenever I save a file in Visual Studio 2005?

    - by Mark
    When I save a file in Visual Studio 2005, I'd like to have a macro also run that updates a copyright (through a regular expression search and replace). I'm not new to regular expressions, but I am new to VB/VBA and Visual Studio macros, so what I need help with specifically is:

        1. getting a macro to run upon save, preferably after I press CTRL-S but before it actually writes the file, so that the results of the search and replace are actually saved without having to save twice
        2. calling search and replace for a regular expression from inside the VB/VBA macro

    Thanks, Mark

    Read the article

  • Shorthand for nested null checking C#

    - by Myster
    As far as I know there is not a significantly more elegant way to write the following:

        string src;
        if ((ParentContent != null)
            && (ParentContent.Image("thumbnail") != null)
            && (ParentContent.Image("thumbnail").Property("src") != null))
            src = ParentContent.Image("thumbnail").Property("src").Value;

    Do you think there should be a C# language feature to make this shorter? And if so, what should it look like? For example, something like extending the ?? operator:

        string src = ParentContent??.Image("thumbnail")??.Property("width")??.Value;

    Apologies for the rather contrived example, and my over-simplified solution.
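
    Purely as an illustration of the "stop at the first null" idea in another language (the object and method names below mirror the asker's example but are hypothetical), here is a short Python sketch of a helper that short-circuits a chain of lookups:

        def maybe(value, *steps):
            """Apply each step in turn, returning None as soon as anything is None."""
            for step in steps:
                if value is None:
                    return None
                value = step(value)
            return value

        # Hypothetical objects mirroring ParentContent.Image(...).Property(...).Value:
        src = maybe(parent_content,
                    lambda c: c.image("thumbnail"),
                    lambda img: img.property("src"),
                    lambda prop: prop.value)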

    Read the article

  • What is the proper way to align UITableViewCells when only some have an imageView?

    - by Topher Fangio
    Hello all, I am new to iPhone programming and working on my first real application (i.e. one not written in a book or online), and I've run into a small problem which I could solve a multitude of ways, but I feel like there should be a good solution that perhaps I am just missing. Here is the scenario: I have a UITableView with a bunch of standard UITableViewCells in it. What I want to do is toggle a green check mark when the cell is selected, and I have that part working (note: I'm already using the accessoryType for something else, so I can't use it for the checkmark... besides, it's not as pretty). Unfortunately, when I toggle the checkmark like so:

        if (...) {
            cell.imageView.image = [UIImage imageNamed:@"checkmark.png"];
        } else {
            cell.imageView.image = nil;
        }

    it makes the cell's label bounce back and forth depending on whether it is checked or not. What is the proper way to align the cell's text (set via cell.textLabel.text) regardless of whether or not it has an image set? The solutions I have come up with are:

        1. Create a blank 40x40 png image in Photoshop and set unchecked cells to that
        2. Create a blank 40x40 image solely in code
        3. Set some setting that I don't know about that will align it for me
        4. Create a subclass of UITableViewCell that does what I need (which would be stupid, I'd just go with option 1...)

    Suggestions? Thoughts? Comments? Thank you very much :-) P.S. I'd like the solution to work with OS 3.0 and 4.0 if that makes any sort of difference.

    Read the article

  • Bug when drawing a QImage on a widget with PIL and PyQt

    - by oulipo
    I'm trying to write a small graphics application, and I need to construct an image using PIL that I show in a widget. The image is correctly constructed (I can check with im.show()), and I can convert it to a QImage that I can save normally to disk (using QImage.save), but if I try to draw it directly on my QWidget, it only shows a white square. Here I commented out the code that is not working (converting the Image into a QImage and then a QPixmap results in a white square), and I made a dirty hack to save the image to a temporary file and load it directly into a QPixmap, which works but is not what I want to do: https://gist.github.com/f6d479f286ad75bf72b7 Does someone have an idea? If it helps, when I try to save my QImage to a BMP file I can access its content, but if I try to save it to a PNG it is completely white.
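
    One common route from a PIL image to something a QWidget can draw, sketched below assuming PyQt4-era APIs (PIL's tostring and QImage's raw-buffer constructor); the function name is mine. The classic pitfall with this conversion is the raw byte buffer being garbage-collected while the QImage still points at it; here QPixmap.fromImage copies the pixels before the local buffer goes out of scope.

        from PyQt4 import QtGui

        def pil_to_pixmap(im):
            im = im.convert("RGBA")                # normalize the pixel mode
            data = im.tostring("raw", "BGRA")      # byte order Format_ARGB32 expects on little-endian
            qimage = QtGui.QImage(data, im.size[0], im.size[1],
                                  QtGui.QImage.Format_ARGB32)
            # fromImage deep-copies, so 'data' only needs to live until this call returns.
            return QtGui.QPixmap.fromImage(qimage)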

    Read the article

  • Storing images from UIGetScreenImage() in an NSMutableArray

    - by sujyanarayan
    Hi, I'm getting images from UIGetScreenImage() and storing them directly in a mutable array like:

        image = [UIImage imageWithScreenContents];
        [array addObject:image];
        [image release];

    I've set this code in a timer, so I can't use UIImagePNGRepresentation() to store the data as NSData, as it reduces performance. I want to use this array directly after some time, i.e. after capturing 1000 images in 100 seconds. When I use the code below:

        UIImage *im = [[UIImage alloc] init];
        im = [array objectAtIndex:i];
        UIImageWriteToSavedPhotosAlbum(im, nil, nil, nil);

    the application crashes. And I don't want to use UIImagePNGRepresentation() or UIImageJPEGRepresentation() in the timer, as it reduces performance. My problem is how to use this array so that it is converted back into images. If anybody has an idea related to it, please share it with me. Thanks in advance.

    Read the article

  • Analyzing bitmaps produced by NSAffineTransform and CILineOverlay filters

    - by Adam
    I am trying to manipulate an image using a chain of CIFilters, and then examine each byte of the resulting image (bitmap). Long term, I do not need to display the resulting image (bitmap) -- I just need to "analyze" it in memory. But near-term I am displaying it on screen, to help with debugging.

    I have some "bitmap examination" code that works as expected when examining the NSImage (bitmap representation) I use as my input (loaded from a JPG file into an NSImage). And it SOMETIMES works as expected when I use it on the outputBitmap produced by the code below. More specifically, when I use an NSAffineTransform filter to create outputBitmap, then outputBitmap contains the data I would expect. But if I use a CILineOverlay filter to create the outputBitmap, none of the bytes in the bitmap have any data in them.

    I believe both of these filters are working as expected, because when I display their results on screen (via outputImageView), they look "correct." Yet when I examine the outputBitmaps, the one created from the CILineOverlay filter is "empty" while the one created from NSAffineTransform contains data. Furthermore, if I chain the two filters together, the final resulting bitmap only seems to contain data if I run the AffineTransform last. Seems very strange to me?

    My understanding (from reading the CI programming guide) is that the CIImage should be considered an "image recipe" rather than an actual image, because the image isn't actually created until it is "drawn." Given that, it would make sense that the CIImage bitmap doesn't have data -- but I don't understand why it has data after I run the NSAffineTransform but doesn't have data after running the CILineOverlay transform.

    Basically, I am trying to determine whether creating the NSCIImageRep (ir in the code below) from the CIImage (myResult) is equivalent to "drawing" the CIImage -- in other words, whether that should force the bitmap to be populated. If someone knows the answer to this please let me know -- it will save me a few hours of trial and error experimenting!

    Finally, if the answer is "you must draw to a graphics context"... then I have another question: would I need to do something along the lines of what is described in the Quartz 2D Programming Guide: Graphics Contexts, listings 2-7 and 2-8 (drawing to a bitmap graphics context)? That is the path down which I am about to head... but it seems like a lot of code just to force the bitmap data to be dumped into an array where I can get at it. So if there is an easier or better way please let me know. I just want to take the data (that should be) in myResult and put it into a bitmap array where I can access it at the byte level. And since I already have code that works with an NSBitmapImageRep, unless doing it that way is a bad idea for some reason that is not readily apparent to me, I would prefer to "convert" myResult into an NSBitmapImageRep.

        CIImage *myResult = [transform valueForKey:@"outputImage"];
        NSImage *outputImage;
        NSCIImageRep *ir = [NSCIImageRep alloc];
        ir = [NSCIImageRep imageRepWithCIImage:myResult];
        outputImage = [[[NSImage alloc] initWithSize:
            NSMakeSize(inputImage.size.width, inputImage.size.height)] autorelease];
        [outputImage addRepresentation:ir];
        [outputImageView setImage:outputImage];
        NSBitmapImageRep *outputBitmap = [[NSBitmapImageRep alloc] initWithCIImage:myResult];

    Thanks, Adam

    Read the article

  • Overlaying 2D paths on UIImage without scaling artifacts

    - by tat0
    I need to draw a path along the shape of an image in a way that it always matches its position on the image, independent of the image scale. Think of this like the hybrid view of Google Maps, where street names and roads are superimposed on top of the aerial pictures. Furthermore, this path will be drawn by the user's finger movements, and I need to be able to retrieve the path keypoints in the image's pixel coordinates. The user zooms in in order to set the path's location more precisely. I managed to somehow make it work using this approach:

        - Create a custom UIView called CanvasView that handles touch interaction and delivers scaling, rotation and translation values to either the UIImageView or the PathsView (see below) depending on a flag, deliverToImageOrPaths.
        - Create a UIImageView holding the base image. This is set as a child of CanvasView.
        - Create a custom UIView called PathsView that keeps track of the 2D path geometry and draws itself with a custom drawRect. This is set as a child of the UIImageView.

    So the hierarchy is: CanvasView > UIImageView > PathsView. In this way, when deliverToImageOrPaths is YES, finger gestures transform both the UIImageView and its child PathsView. When deliverToImageOrPaths is NO, the gestures affect only the PathsView, altering its geometry. So far so good.

    QUESTION: The problem I have is that when scaling the base UIImageView (via its .transform property), the PathsView is scaled with aliasing artifacts. drawRect is still being called on the PathsView, but I guess it's performing the drawing using the original buffer size and then interpolating. How can I solve this issue? Are there better ways to implement these features?

    P.S.: I tried changing the PathsView layer class to CATiledLayer with levelsOfDetailBias 4 and levelsOfDetail 4. It solves the aliasing problem to some extent, but it's unacceptably slow to render.

    Read the article

  • Java error on bilinear interpolation of 16 bit data

    - by Jon
    I'm having an issue using bilinear interpolation for 16-bit data. I have two images, origImage and displayImage. I want to use AffineTransformOp to filter origImage through an AffineTransform into displayImage, which is the size of the display area. origImage is of type BufferedImage.TYPE_USHORT_GRAY and has a raster of type sun.awt.image.ShortInterleavedRaster. Here is the code I have right now:

        displayImage = new BufferedImage(getWidth(), getHeight(), origImage.getType());
        try {
            op = new AffineTransformOp(atx, AffineTransformOp.TYPE_BILINEAR);
            op.filter(origImage, displayImage);
        } catch (Exception e) {
            e.printStackTrace();
        }

    In order to show the error I created two gradient images: one with values in the 15-bit range (max of 32767) and one in the 16-bit range (max of 65535). (The two images, "15 bit image" and "16 bit image", are linked in the original post.) These two images were created in identical fashion and should look identical, but notice the line across the middle of the 16-bit image. At first I thought this was an overflow problem; however, it is weird that it manifests itself in the center of the gradient instead of at the end where the pixel values are higher. Also, if it were an overflow issue, I would suspect that the 15-bit image would have been affected as well. Any help on this would be greatly appreciated.
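
    The location of the artifact is at least consistent with intermediate arithmetic treating the unsigned 16-bit samples as signed shorts: values just above 32767 wrap negative, which is exactly the middle of a 0-65535 gradient and never happens in a 0-32767 one. A small Python/NumPy illustration of that wraparound follows (this is a hypothesis about the symptom, not a verified diagnosis of AffineTransformOp):

        import numpy as np

        # Two neighbouring samples that straddle the signed-short boundary.
        samples = np.array([32766, 32770], dtype=np.uint16)

        as_signed = samples.astype(np.int16)  # reinterpretation wraps the larger value
        print(as_signed)                      # [ 32766 -32766 ]
        print(as_signed.mean())               # 0.0 -- an average computed in signed
                                              # 16-bit space is badly wrong here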

    Read the article

  • Problem using a WPF DLL assembly in ASP.NET

    - by liimur
    Hello, I have a C++ project that compiles as a DLL assembly in .NET 3.5 SP1. The project is used for image rendering/processing with WPF (it loads two images from a local folder, applies one image to the other, and saves the output file in the same folder). I want to use that project as a reference in an ASP.NET project to do the rendering on the website. So I created a simple ASP.NET C# web project that uses the C++ project as a reference. Everything works great in the ASP.NET Web Development Server (the built-in web server in VS2008). But once I publish this project to IIS on the same machine, or use IIS for debugging instead of the built-in web server, the image rendering no longer works. I'm not getting any exceptions or error messages; the output image is just not processed as it is supposed to be. If anyone knows what could cause this, I would really appreciate your insight!

    Read the article

  • Why are there 3 conflicting OpenCV camera calibration formulas?

    - by John
    I'm having a problem with OpenCV's various parameterizations of the coordinates used for camera calibration purposes. The problem is that three different sources of information on image distortion formulae apparently give three non-equivalent descriptions of the parameters and equations involved:

    (1) In their book "Learning OpenCV..." Bradski and Kaehler write regarding lens distortion (page 376):

        xcorrected = x * ( 1 + k1 * r^2 + k2 * r^4 + k3 * r^6 ) + [ 2 * p1 * x * y + p2 * ( r^2 + 2 * x^2 ) ],
        ycorrected = y * ( 1 + k1 * r^2 + k2 * r^4 + k3 * r^6 ) + [ p1 * ( r^2 + 2 * y^2 ) + 2 * p2 * x * y ],

    where r = sqrt( x^2 + y^2 ). Presumably, (x, y) are the coordinates of pixels in the uncorrected captured image corresponding to world-point objects with coordinates (X, Y, Z), camera-frame referenced, for which

        xcorrected = fx * ( X / Z ) + cx   and   ycorrected = fy * ( Y / Z ) + cy,

    where fx, fy, cx, and cy are the camera's intrinsic parameters. So, having (x, y) from a captured image, we can obtain the desired coordinates (xcorrected, ycorrected) to produce an undistorted image of the captured world scene by applying the first two correction expressions above. However...

    (2) The complication arises as we look at the OpenCV 2.0 C Reference entry under the Camera Calibration and 3D Reconstruction section. For ease of comparison we start with all world-point (X, Y, Z) coordinates being expressed with respect to the camera's reference frame, just as in #1. Consequently, the transformation matrix [ R | t ] is of no concern. In the C reference, it is expressed that:

        x' = X / Z,
        y' = Y / Z,
        x'' = x' * ( 1 + k1 * r'^2 + k2 * r'^4 + k3 * r'^6 ) + [ 2 * p1 * x' * y' + p2 * ( r'^2 + 2 * x'^2 ) ],
        y'' = y' * ( 1 + k1 * r'^2 + k2 * r'^4 + k3 * r'^6 ) + [ p1 * ( r'^2 + 2 * y'^2 ) + 2 * p2 * x' * y' ],

    where r' = sqrt( x'^2 + y'^2 ), and finally that

        u = fx * x'' + cx,
        v = fy * y'' + cy.

    As one can see, these expressions are not equivalent to those presented in #1, with the result that the two sets of corrected coordinates (xcorrected, ycorrected) and (u, v) are not the same. Why the contradiction? It seems to me the first set makes more sense, as I can attach physical meaning to each and every x and y in there, while I find no physical meaning in x' = X / Z and y' = Y / Z when the camera focal length is not exactly 1. Furthermore, one cannot compute x' and y', for we don't know (X, Y, Z).

    (3) Unfortunately, things get even murkier when we refer to the writings in Intel's Open Source Computer Vision Library Reference Manual, section Lens Distortion (page 6-4), which states in part:

        "Let ( u, v ) be true pixel image coordinates, that is, coordinates with ideal projection, and ( u~, v~ ) be corresponding real observed (distorted) image coordinates. Similarly, ( x, y ) are ideal (distortion-free) and ( x~, y~ ) are real (distorted) image physical coordinates. Taking into account two expansion terms gives the following:

            x~ = x * ( 1 + k1 * r^2 + k2 * r^4 ) + [ 2 * p1 * x * y + p2 * ( r^2 + 2 * x^2 ) ]
            y~ = y * ( 1 + k1 * r^2 + k2 * r^4 ) + [ 2 * p2 * x * y + p1 * ( r^2 + 2 * y^2 ) ],

        where r = sqrt( x^2 + y^2 ). ...

        "Because u~ = cx + fx * u and v~ = cy + fy * v, ... the resultant system can be rewritten as follows:

            u~ = u + ( u - cx ) * [ k1 * r^2 + k2 * r^4 + 2 * p1 * y + p2 * ( r^2 / x + 2 * x ) ]
            v~ = v + ( v - cy ) * [ k1 * r^2 + k2 * r^4 + 2 * p2 * x + p1 * ( r^2 / y + 2 * y ) ]

        The latter relations are used to undistort images from the camera."

    Well, it would appear that the expressions involving x~ and y~ coincide with the two expressions given at the top of this writing involving xcorrected and ycorrected. However, x~ and y~ do not refer to corrected coordinates, according to the given description. I don't understand the distinction between the meaning of the coordinates (x~, y~) and (u~, v~), or for that matter, between the pairs (x, y) and (u, v). From their descriptions it appears their only distinction is that (x~, y~) and (x, y) refer to 'physical' coordinates while (u~, v~) and (u, v) do not. What is this distinction all about? Aren't they all physical coordinates? I'm lost! Thanks for any input!
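
    For concreteness, here is a short Python/NumPy transcription of formulation #2 exactly as quoted above (an illustration of the question's notation, not an authoritative statement of OpenCV's model): the primed quantities are normalized, focal-length-independent coordinates, and the intrinsics enter only in the final step.

        import numpy as np

        def project_with_distortion(X, Y, Z, fx, fy, cx, cy, k1, k2, k3, p1, p2):
            # Normalized (pinhole, focal length 1) coordinates -- formulation #2's x', y'.
            xp, yp = X / Z, Y / Z
            r2 = xp * xp + yp * yp
            radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
            # Distorted normalized coordinates -- x'', y''.
            xpp = xp * radial + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp * xp)
            ypp = yp * radial + p1 * (r2 + 2 * yp * yp) + 2 * p2 * xp * yp
            # Intrinsics applied last: these are the pixel coordinates the camera records.
            return fx * xpp + cx, fy * ypp + cy

    Under this reading, the (x, y) in formulation #1 play the role of the normalized (x', y'), which may be why the two sets of equations look alike yet do not operate on the same quantities.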

    Read the article

  • How do I get apache RewriteRule working correctly for a subdomain?

    - by mike
    I just set up a subdomain with the following rewrite rules:

        RewriteCond $1 !^search.php$
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^/?([^/]+)$ search.php?q=$1 [L,NS]

    I'm using the same rewrite conditions on my main domain and they work perfectly. However, when I set them up on the subdomain, it simply outputs "index.php" when going to http://sub.domain.com. Every page on the subdomain outputs the page name in the body instead of processing the code, except for the search page, which appears to be working correctly. What can I do to correct this issue?

    Read the article

  • Android: displaying a /res/drawable image in a WebView

    - by MikeNereson
    I am throwing HTML at a WebView to render. In the HTML I need to load an image that I have in /res/drawable. I have /res/drawable/my_image.png and code such as this:

        final WebView browser = (WebView) findViewById(R.id.my_webview);
        String html = new MyHelper(myObject).getHtml();
        browser.loadDataWithBaseURL("", html, "text/html", "UTF-8", "");

    where the String html has something like:

        <html><head>
            <h1>Here is the image</h1>
            <img src="my_image.png" />
        </head></html>

    The question is, what should that image src attribute be to refer to the image in /res/drawable? Thanks.

    Read the article

  • How do I add an icon as a classpath resource to an SWT window created with WindowBuilder?

    - by Zoot
    I'm trying to add an external icon from an *.ico file to a window that I'm creating using the WindowBuilder design window. I can select the shell, which brings up an "image" property field, and that brings up the image chooser dialog box (shown as a screenshot in the original post). How do I make my icon show up in this dialog as a classpath resource? The image works if an absolute path is given, but I don't want to use that option in my application. Thanks!

    Read the article

  • Display last picture

    - by steve
    Hi, I'm inserting an image from the camera (taking a picture) into the MediaStore.Images.Media data store. Does anyone know how I can go about displaying the last picture taken? I used

        Uri image = ContentUris.withAppendedId(externalContentUri, 45);

    to display an image from the data store, but obviously 45 is not the correct image. I tried to pass the information from the previous activity (the camera) to the display activity, but I'm assuming that because the photo callback runs on its own thread, the value never gets set. The photo code is as follows:

        Camera.PictureCallback photoCallback = new Camera.PictureCallback() {
            public void onPictureTaken(byte[] data, Camera camera) {
                // TODO Auto-generated method stub
                FileOutputStream fos;
                try {
                    Bitmap bm = BitmapFactory.decodeByteArray(data, 0, data.length);
                    fileUrl = MediaStore.Images.Media.insertImage(getContentResolver(), bm,
                            "LastTaken", "Picture");
                    if (fileUrl == null) {
                        Log.d("Still", "Image Insert Failed");
                        return;
                    } else {
                        picUri = Uri.parse(fileUrl);
                        sendBroadcast(new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE, picUri));
                    }
                } catch (Exception e) {
                    Log.d("Picture", "Error Picture: ", e);
                }
                camera.startPreview();
            }
        };

    Read the article

  • searchlogic and virtual attributes

    - by Ermin
    Let's say I have the following model:

        class Person < AR
          def name
            [self.first_name, self.middle_name, self.last_name].select { |n| n.present? }.join(' ')
          end
        end

    How could I do a search on the virtual attribute with searchlogic, something like:

        Person.search.name_like 'foo'

    Of course I could construct a large statement like:

        Person.search.first_name_like_or_last_name_like_or_... 'argh'

    but surely there is a more elegant way.

    Read the article
