Search Results

Search found 2009 results on 81 pages for 'transform'.

Page 8/81

  • Wrapping/warping a CALayer/UIView (or OpenGL) in 3D (iPhone)

    - by jbrennan
    I've got a UIView (and thus a CALayer) which I'm trying to warp or bend slightly in 3D space. That is, imagine my UIView is a flat label which I want to partially wrap around a beer bottle (not 360 degrees around, just on one "side"). I figured this would be possible by applying a transform to the view's layer, but as far as I can tell, this transform is limited to rotation, scale and translation of the layer uniformly. I could be wrong here, as my linear algebra is foggy at this point, to say the least. How can I achieve this?

    Read the article

  • -webkit-transition-property for translation

    - by sexyprout
    Hi. What is the transition property for translations in CSS3? I'm currently using all, but I get a bug in iOS, so I want to test another property.

        -webkit-transform: translate(-320px, 0);
        -webkit-transition: ??? .5s ease-in-out;

    See the bug with an iOS device here (swipe horizontally); there's a kind of flash. Update: for anyone interested, I found a way to fix it thanks to Duopixel:

        E {
          -webkit-transition: all .5s ease-in-out;
          -webkit-transform: translate3d(0, 0, 0); /* perform an "invisible" translation */
        }

        // Then you can translate with translate3d(), no bug!
        document.querySelector('E').style.webkitTransform = 'translate3d(-320px, 0, 0)';

    Read the article

  • CSS3 Continuous Rotate Animation (Just like a loading sundial)

    - by Gcoop
    Hi, I am trying to replicate an Apple-style activity indicator (sundial loading icon) by using a PNG and CSS3 animation. I have the image rotating and doing it continuously, but there seems to be a delay after the animation has finished before it does the next rotation.

        @-webkit-keyframes rotate {
          from { -webkit-transform: rotate(0deg); }
          to   { -webkit-transform: rotate(360deg); }
        }

        #loading img {
          -webkit-animation-name: rotate;
          -webkit-animation-duration: 0.5s;
          -webkit-animation-iteration-count: infinite;
          -webkit-transition-timing-function: linear;
        }

    I have tried changing the animation duration but it makes no difference; if you slow it right down, say to 5s, it's just more apparent that after the first rotation there is a pause before it rotates again. It's this pause I want to get rid of. Any help is much appreciated, thanks.

    Read the article

  • WPF / C#: Transforming coordinates from an image control to the image source

    - by Gabriel
    I'm trying to learn WPF, so here's a simple question, I hope: I have a window that contains an Image element bound to a separate data object with a user-configurable Stretch property:

        <Image Name="imageCtrl" Source="{Binding MyImage}" Stretch="{Binding ImageStretch}" />

    When the user moves the mouse over the image, I would like to determine the coordinates of the mouse with respect to the original image (before the stretching/cropping that occurs when it is displayed in the control), and then do something with those coordinates (update the image). I know I can add an event handler to the MouseMove event over the Image control, but I'm not sure how best to transform the coordinates:

        void imageCtrl_MouseMove(object sender, MouseEventArgs e)
        {
            Point locationInControl = e.GetPosition(imageCtrl);
            Point locationInImage = ???
            updateImage(locationInImage);
        }

    Now I know I could compare the size of Source to the ActualSize of the control, and then switch on imageCtrl.Stretch to compute the scale factors and offsets on X and Y, and do the transform myself. But WPF has all the information already, and this seems like functionality that might be built into the WPF libraries somewhere. So I'm wondering: is there a short and sweet solution? Or do I need to write this myself?

    EDIT: I'm appending my current, not-so-short-and-sweet solution. It's not that bad, but I'd be somewhat surprised if WPF didn't provide this functionality automatically:

        Point ImgControlCoordsToPixelCoords(Point locInCtrl,
            double imgCtrlActualWidth, double imgCtrlActualHeight)
        {
            if (ImageStretch == Stretch.None)
                return locInCtrl;

            Size renderSize = new Size(imgCtrlActualWidth, imgCtrlActualHeight);
            Size sourceSize = bitmap.Size;

            double xZoom = renderSize.Width / sourceSize.Width;
            double yZoom = renderSize.Height / sourceSize.Height;

            if (ImageStretch == Stretch.Fill)
                return new Point(locInCtrl.X / xZoom, locInCtrl.Y / yZoom);

            double zoom;
            if (ImageStretch == Stretch.Uniform)
                zoom = Math.Min(xZoom, yZoom);
            else // (imageCtrl.Stretch == Stretch.UniformToFill)
                zoom = Math.Max(xZoom, yZoom);

            return new Point(locInCtrl.X / zoom, locInCtrl.Y / zoom);
        }

    Read the article

  • Remove parent of matched locator

    - by Ilan
    Is there a way to locate a node based on its child properties? I need to run a web.config transform to remove the 2nd <dependentAssembly> in the following:

        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <!-- Don't want to delete this one -->
            <dependentAssembly>
              <assemblyIdentity name="System.Web.Helpers" publicKeyToken="31bf3856ad364e35"/>
              <bindingRedirect oldVersion="1.0.0.0-2.0.0.0" newVersion="2.0.0.0"/>
            </dependentAssembly>
            <!-- This is the one I want to delete -->
            <dependentAssembly>
              <assemblyIdentity name="Microsoft.VisualStudio.Enterprise.AspNetHelper" publicKeyToken="b03f5f7f11d50a3a" culture="neutral"/>
              <codeBase version="11.0.0.0" href="file:///C:/Program%20Files%20(x86)/Microsoft%20Visual%20Studio%2011.0/Common7/IDE/PrivateAssemblies/Microsoft.VisualStudio.Enterprise.AspNetHelper.DLL"/>
            </dependentAssembly>
          </assemblyBinding>
        </runtime>

    Finding the <assemblyIdentity> is easy enough, but I need to delete the parent <dependentAssembly> (and the <codeBase>). If there were an xdt:Transform="RemoveParent", that would do the trick, but AFAIK there isn't. Alternatively, if there were a Locator I could use on the <dependentAssembly> that matched on children, that could work too.

    Read the article

  • scaling svg paths in Raphael 2.1

    - by user1229001
    I'm using SVG paths from a wikimedia commons map of the US. I've singled out Pennsylvania with its counties. I'm feeding the paths out of a database and using Raphael 2.1 to put them on the page. Because in the original map, Pennsylvania was so small and set at an angle, I'd like to scale up the paths and rotate Pa. so that it isn't on an angle. When I try to use Raphael's transform method, all the counties look strange and overlapped. I gave up on setting the viewBox when I heard that it doesn't work in all browsers. Anyone have any ideas? Here is my code:

        $(document).ready(function() {
            var $paths = [];   // array of paths
            var $thisPath;     // variable to hold whichever path we're drawing

            $.post('getmapdata.php', function(data) {
                var objData = jQuery.parseJSON(data);
                for (var i = 0; i < objData.length; i++) {
                    $paths.push(objData[i].path);
                    //$counties.push(objData[i].name);
                } //end for
                drawMap($paths);
            })

            function drawMap(data) {
                // var map = new Raphael(document.getElementById('map_div_id'),0, 0);
                var map = new Raphael(0, 0, 520, 320);
                //map.setViewBox(0,0,500,309, true);
                for (var i = 0; i < data.length; i++) {
                    var thisPath = map.path(data[i]);
                    thisPath.transform(s2);
                    thisPath.attr({stroke:"#FFFFFF", fill:"#CBCBCB","stroke-width":"0.5"});
                } //end cycling through i
            } //end drawMap
        }); //end program

    Read the article

  • Auto scale and rotate images

    - by Dave Jarvis
    Given: two images of the same subject matter; the images have the same resolution, colour depth, and file format; the images differ in size and rotation; and two lists of (x, y) co-ordinates that correlate the images. I would like to know:

    1. How do you transform the larger image so that it visually aligns to the second image?
    2. (Optional.) What is the minimum number of points needed to get an accurate transformation?
    3. (Optional.) How far apart do the points need to be to get an accurate transformation?

    The transformation would need to rotate, scale, and possibly shear the larger image. Essentially, I want to create (or find) a program that does the following: input two images (e.g., TIFFs); click several anchor points on the small image; click the corresponding anchor points on the large image; transform the large image such that it maps to the small image by aligning the anchor points. This would help align pictures of the same stellar object. (For example, a hand-drawn picture from 1855 mapped to a photograph taken by Hubble in 2000.) Many thanks in advance for any algorithms (preferably Java or similar pseudo-code), ideas or links to related open-source software packages.
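
    A least-squares fit covers the first two questions: a full affine transform (rotation, scale, shear, translation) has six unknowns, so three non-collinear point pairs determine it exactly, and extra, widely spaced pairs average out click error. Below is a minimal sketch of that fit in Python with numpy and OpenCV for brevity (the same normal-equations math ports directly to Java); the file names and point coordinates are placeholders, not taken from the question.

        import cv2
        import numpy as np

        def fit_affine(src_pts, dst_pts):
            # Least-squares 2x3 affine matrix mapping src_pts onto dst_pts.
            # src_pts/dst_pts: matching (x, y) pairs, at least 3, not all collinear.
            src = np.asarray(src_pts, dtype=np.float64)
            dst = np.asarray(dst_pts, dtype=np.float64)
            A = np.hstack([src, np.ones((len(src), 1))])    # rows of [x, y, 1]
            coeffs, _, _, _ = np.linalg.lstsq(A, dst, rcond=None)
            return coeffs.T                                 # 2x3, as cv2.warpAffine expects

        # Placeholder anchor points clicked in each image.
        large_pts = [(120, 80), (640, 95), (400, 500)]
        small_pts = [(30, 20), (160, 25), (100, 130)]

        large = cv2.imread("large_1855_drawing.tif")
        small = cv2.imread("small_hubble_2000.tif")
        M = fit_affine(large_pts, small_pts)
        aligned = cv2.warpAffine(large, M, (small.shape[1], small.shape[0]))
        cv2.imwrite("aligned.tif", aligned)

    (Newer OpenCV builds also expose cv2.estimateAffine2D, which performs the same fit with outlier rejection.)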

    Read the article

  • iPhone: How to use CGContextConcatCTM for saving a transformed image properly?

    - by Irene
    I am making an iPhone application that loads an image from the camera, and then the user can select a second image from the library, move/scale/rotate that second image, and then save the result. I use two UIImageViews in IB as placeholders, and then apply transformations while touching/pinching. The problem comes when I have to save both images together. I use a rect of the size of the first image and pass it to UIGraphicsBeginImageContext. Then I tried to use CGContextConcatCTM but I can't understand how it works:

        CGRect rect = CGRectMake(0, 0, img1.size.width, img1.size.height); // img1 from camera
        UIGraphicsBeginImageContext(rect.size); // Start drawing
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextClearRect(ctx, rect); // Clear whole thing
        [img1 drawAtPoint:CGPointZero]; // Draw background image at 0,0
        CGContextConcatCTM(ctx, img2.transform); // Apply the transformations of the 2nd image

    But what do I need to do next? What information is held in the img2.transform matrix? The documentation for CGContextConcatCTM doesn't help me that much, unfortunately. Right now I'm trying to solve it by calculating the points and the angle using trigonometry (with the help of this answer), but since the transformation is there, there has to be an easier and more elegant way to do this, right?

    Read the article

  • Push, parse & import "selected" data, text, info blobs from Webpages/ Emails as Event/ Appointment to standard Calendar directly or as .ics file?

    - by Alex S
    Is there any tool, plugin, extension, or script/code to push "selected" data, text, or information blobs from web pages, emails, etc., have them parsed, and import them as a structured Event or Appointment (e.g. .ics) on a standard calendar like Outlook, Google, or iCal? If not, what scripting, coding, or existing tools/extensions could I build on top of to do this? I come across a lot of unstructured information on webpages, emails, FB events, etc. where I just want to add that information to my calendar. Instead of entering all the information by hand every time, there should be an easy enough way to have the information parsed, organized and imported to a calendar, either directly from the source or translated to a standard format such as .ICS that can be imported and saved easily. I would love to see some suggestions for this incorporating one or more of the following: on Windows with Chrome & Outlook; on iPhone/iPad to its Calendar. PS: I'll come back and see if I can add more information to this question, and to answer it as well. I have not found a solution yet.
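
    If nothing off-the-shelf turns up, the calendar half of this is the easy part: an .ics file is only a few lines of text, so a small script just has to parse the selected blob (a date-parsing library handles most of that) and then write the file. A minimal Python 3 sketch using only the standard library; the event values are purely illustrative.

        import uuid
        from datetime import datetime, timedelta

        def make_ics(summary, start, minutes, location="", description=""):
            # Build a minimal VEVENT that Outlook, Google Calendar and iCal can import.
            fmt = "%Y%m%dT%H%M%S"
            end = start + timedelta(minutes=minutes)
            lines = [
                "BEGIN:VCALENDAR",
                "VERSION:2.0",
                "PRODID:-//clip-to-calendar sketch//EN",
                "BEGIN:VEVENT",
                "UID:" + str(uuid.uuid4()),
                "DTSTAMP:" + datetime.utcnow().strftime(fmt) + "Z",
                "DTSTART:" + start.strftime(fmt),
                "DTEND:" + end.strftime(fmt),
                "SUMMARY:" + summary,
                "LOCATION:" + location,
                "DESCRIPTION:" + description,
                "END:VEVENT",
                "END:VCALENDAR",
            ]
            return "\r\n".join(lines) + "\r\n"   # iCalendar wants CRLF line endings

        # Hypothetical event pulled out of a selected web page or email.
        with open("event.ics", "w", newline="") as f:
            f.write(make_ics("Dinner with Alex", datetime(2014, 6, 12, 19, 0), 90,
                             location="Main St Cafe"))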

    Read the article

  • Generating text file from database

    - by Goldmember
    I have a requirement to hand-code a text file from data residing in a SQL table. Just wondering if there are any best practices here. Should I write it as an XmlDocument first and transform it using XSL, or just use StreamWriter and skip the transformation altogether? The generated text file will be in EDIFACT format, so the layout is very specific.

    Read the article

  • XSLT 2.0 Header Leaks into Transformed XML

    - by user1303797
    First, a thank you in advance. Second, this is my first post, so apologies for any errors or wrongdoings. I am a noob with XML and XSLT, and can't seem to figure this out. When I transform some XML using XSLT 2.0, some of the headers (namespace declarations) from the XSLT leak into the new XML. It doesn't seem to do it in XSLT 1.0 (granted, the XSLT is a little different). Here is the XML:

        <?xml version="1.0" encoding="ISO-8859-1" ?>
        <xml_content>
          <feed_name>feed</feed_name>
          <feed_info>
            <entry_1>
              <id>1</id>
              <pub_date>1320814800</pub_date>
            </entry_1>
          </feed_info>
        </xml_content>

    Here is the XSLT:

        <xsl:stylesheet version="2.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns="http://www.w3.org/TR/xhtml1/strict">
          <xsl:output method="xml" indent="yes" />
          <xsl:template match="xml_content">
            <Records>
              <xsl:for-each select="feed_info/entry_1">
                <Record>
                  <ID><xsl:value-of select="id" /></ID>
                  <PublicationDate><xsl:value-of select='xs:dateTime("1970-01-01T00:00:00") + xs:integer(pub_date) * xs:dayTimeDuration("PT1S")'/></PublicationDate>
                </Record>
              </xsl:for-each>
            </Records>
          </xsl:template>
        </xsl:stylesheet>

    Here is the new XML. Look specifically at the first "Records" element.

        <?xml version="1.0" encoding="UTF-8"?>
        <Records xmlns:xs="http://www.w3.org/2001/XMLSchema"
                 xmlns="http://www.w3.org/TR/xhtml1/strict">
          <Record>
            <ID>1</ID>
            <PublicationDate>2011-11-09T05:00:00</PublicationDate>
          </Record>
        </Records>

    Read the article

  • Cisco ASA dropping IPsec VPN between itself and CentOS server

    - by sebelk
    Currently we're trying to set up an IPsec VPN between a Cisco ASA Version 8.0(4) and a CentOS Linux server. The tunnel comes up successfully, but for some reason that we can't figure out, the firewall is dropping packets from the VPN. The IPsec settings in the ASA are as follows:

        crypto ipsec transform-set up-transform-set esp-3des esp-md5-hmac
        crypto ipsec transform-set up-transform-set2 esp-3des esp-sha-hmac
        crypto ipsec transform-set up-transform-set3 esp-aes esp-md5-hmac
        crypto ipsec transform-set up-transform-set4 esp-aes esp-sha-hmac
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        crypto map linuxserver 10 match address filtro-encrypt-linuxserver
        crypto map linuxserver 10 set peer linuxserver
        crypto map linuxserver 10 set transform-set up-transform-set2 up-transform-set3 up-transform-set4
        crypto map linuxserver 10 set security-association lifetime seconds 28800
        crypto map linuxserver 10 set security-association lifetime kilobytes 4608000
        crypto map linuxserver interface outside
        crypto isakmp enable outside
        crypto isakmp policy 1
         authentication pre-share
         encryption aes
         hash sha
         group 2
         lifetime 28800
        crypto isakmp policy 2
         authentication pre-share
         encryption aes-256
         hash sha
         group 2
         lifetime 86400
        crypto isakmp policy 3
         authentication pre-share
         encryption aes-256
         hash md5
         group 2
         lifetime 86400
        crypto isakmp policy 4
         authentication pre-share
         encryption aes-192
         hash sha
         group 2
         lifetime 86400
        crypto isakmp policy 5
         authentication pre-share
         encryption aes-192
         hash md5
         group 2
        group-policy linuxserverip internal
        group-policy linuxserverip attributes
         vpn-filter value filtro-linuxserverip
        tunnel-group linuxserverip type ipsec-l2l
        tunnel-group linuxserverip general-attributes
         default-group-policy linuxserverip
        tunnel-group linuxserverip ipsec-attributes
         pre-shared-key *

    Does anyone know where the problem is and how to fix it?

    Read the article

  • "Genie" animation effect in XAML

    - by ScottCate
    What would the XAML look like to create a Genie animation effect? Take an object of any size/shape, and "Genie" it to another Minimized location. Kind of like the OSX window minimize. Maybe even a little fancier through a smoke-like effect where the path is more switch back, instead of a simple funnel (if that makes any sense). I'm guessing that there is some sort of Path that could be drawn, and the shape could move and transform along that path. Just a wild guess. Thanks for your ideas.

    Read the article

  • How to detect circles accurately

    - by user1767798
    Is there any way to accurately detect circles in OpenCV? I was using the Hough transform, which gives me good results, but most of the time shadows on the object, the surroundings, lighting, etc. give bad results, so I am looking for options other than Hough circles; accurate detection is very important for my project. My basic approach so far is to find some spheres in an image taken in real time. I am using HoughCircles to find the spheres and base later calculations on the radius I am getting from that. If the background is plain and empty, the spheres are detected without problem; however, if I take the image in my room, where the background has other objects, detection is often difficult. So I am looking for some other approach.
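
    For what it's worth, HoughCircles usually needs heavy smoothing and a constrained radius range before it stops locking onto shadow edges and background clutter. A rough Python sketch of that kind of tuning, assuming OpenCV 3.x+ constant names; every parameter value here is a guess to adjust for the actual camera and sphere size.

        import cv2
        import numpy as np

        img = cv2.imread("frame.png")                    # hypothetical captured frame
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        gray = cv2.medianBlur(gray, 7)                   # median blur suppresses speckle and soft shadow edges

        circles = cv2.HoughCircles(
            gray, cv2.HOUGH_GRADIENT, 2, gray.shape[0] // 4,
            param1=120,                                  # internal Canny high threshold
            param2=60,                                   # accumulator threshold: raise it to drop weak circles
            minRadius=20, maxRadius=120)                 # bound the expected sphere size

        if circles is not None:
            for x, y, r in np.round(circles[0]).astype(int):
                cv2.circle(img, (x, y), r, (0, 255, 0), 2)
        cv2.imwrite("detected.png", img)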

    Read the article

  • Is there a transformation matrix that can scale the x and/or y axis logarithmically?

    - by Dave M
    I'm using .net WPF geometry classes to graph waveforms. I've been using the matrix transformations to convert from the screen coordinate space to my coordinate space for the waveform. Everything works great and it's really simple to keep track of my window and scaling, etc. I can even use the inverse transform to calculate the mouse position in terms of the coordinate space. I use the built in Scaling and Translation classes and then a custom matrix to do the y-axis flipping (there's not a prefab matrix for flipping). I want to be able to graph these waveforms on a log scale as well (either x axis or y axis or both), but I'm not sure if this is even possible to do with a matrix transformation. Does anyone know if this is possible, and if it is, what is the matrix?
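
    For reference, a short sketch of why a plain matrix cannot express this, assuming the standard System.Windows.Media.Matrix layout. Transforming a point gives

        x' = M11*x + M21*y + OffsetX
        y' = M12*x + M22*y + OffsetY

    Both outputs are linear in x and y plus a constant offset, so no choice of coefficients yields x' = log(x); a logarithmic axis has to be applied to the data (or to the geometry) before it enters the matrix pipeline.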

    Read the article

  • OpenCV Python HoughCircles error

    - by Dan
    Hi, I'm working on a program that detects circular shapes in images. I decided a Hough Transform would be the best approach, and I found one in the OpenCV library. The problem is that when I try to use it I get an error that I have no idea how to fix. Is OpenCV for Python not fully implemented? Is there a fix to the library I need for the program to work? Here's the code:

        import cv

        #cv.NamedWindow("camera", 1)
        capture = cv.CaptureFromCAM(0)

        while True:
            img = cv.QueryFrame(capture)

            gray = cv.CreateImage(cv.GetSize(img), 8, 1)
            edges = cv.CreateImage(cv.GetSize(img), 8, 1)

            cv.CvtColor(img, gray, cv.CV_BGR2GRAY)
            cv.Canny(gray, edges, 50, 200, 3)
            cv.Smooth(gray, gray, cv.CV_GAUSSIAN, 9, 9)

            storage = cv.CreateMat(1, 2, cv.CV_32FC3)

            # This is the line that throws the error
            cv.HoughCircles(edges, storage, cv.CV_HOUGH_GRADIENT, 2, gray.height/4, 200, 100)

            #cv.ShowImage("camera", img)
            if cv.WaitKey(10) == 27:
                break

    And here is the error I'm getting:

        OpenCV Error: Null pointer () in unknown function, file ..\..\..\..\ocv\openc\src\cxcore\cxdatastructs.cpp, line 408
        Traceback (most recent call last):
          File "ellipse-detect-webcam.py", line 20, in
            cv.HoughCircles(edges, storage, cv.CV_HOUGH_GRADIENT, 2, gray.height/4, 200, 100)
        cv.error

    Thanks in advance for the help.
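
    For comparison, the newer cv2 bindings sidestep the pre-allocated storage argument that the old cv wrapper needs on this call; a rough, untested equivalent of the same loop, assuming an OpenCV build that ships the cv2 module (3.x+ constant names):

        import cv2
        import numpy as np

        capture = cv2.VideoCapture(0)
        while True:
            ok, img = capture.read()
            if not ok:
                break

            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            gray = cv2.GaussianBlur(gray, (9, 9), 0)

            # cv2.HoughCircles allocates and returns its result; no storage matrix is needed.
            circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 2, gray.shape[0] // 4,
                                       param1=200, param2=100)
            if circles is not None:
                for x, y, r in np.round(circles[0]).astype(int):
                    cv2.circle(img, (x, y), r, (0, 255, 0), 2)

            cv2.imshow("camera", img)
            if cv2.waitKey(10) == 27:
                break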

    Read the article

  • dximagetransform.matrix, absolutely positioned child elements not rotating in IE 8 standards mode

    - by davydka
    I've looked all over for more information on this, and would like to know why it happens. Here's the code:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <head>
        </head>
        <body>
            <div style="position:absolute; top:200px; left:200px; height:200px; width:200px; border:1px solid black;
                        filter: progid:DXImageTransform.Microsoft.Matrix(sizingMethod='auto expand',
                        M11=0.9886188373396114, M12=-0.15044199698646263,
                        M21=0.15044199698646263, M22=0.9886188373396114);">
                <div style="position:absolute; top:10px; left:10px; border:1px solid darkblue;">
                    I do not rotate in IE 8.
                </div>
            </div>
        </body>
        </html>

    The problem is that absolutely or relatively positioned elements within a div that has been rotated using MS's dximagetransform.matrix do not inherit the transformation in IE 8. IE 6 & 7 render correctly, and I can solve the IE 8 problem by triggering compatibility mode, but I'd rather not do that. Does anyone have any experience with this? I'm using the CSS3 transform on other browsers and using dximagetransform.matrix to achieve this effect in IE. EDIT: Added the opening html tag. Problem still exists. http://i45.tinypic.com/nf4gmq.png

    Read the article

  • getting bone base and tip positions from a transform matrix?

    - by ddos
    I need this for a Blender3d script, but you don't really need to know Blender to answer this. I need to get bone head and tip positions from a transform matrix read from a file. The position of the base is the location (translation) part of the matrix; the length of the bone (distance from base to tip) is the scale; and the position of the tip is calculated from that scale (distance from the bone base) and the rotation part of the matrix. So how do I calculate these?

        bone.base([x,y,z])  # x, y, z - floats
        bone.tip([x,y,z])
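
    A sketch of that calculation in plain Python, assuming a 4x4 row-major matrix (nested lists with the translation in the last column; flip the indexing if the file stores column-major data) and Blender's convention that a bone's rest direction is its local +Y axis with unit length:

        import math

        def bone_head_tip(m):
            # Head/base = the translation part of the matrix.
            head = [m[0][3], m[1][3], m[2][3]]

            # Second column of the 3x3 block = rotation * scale applied to local +Y,
            # i.e. the vector pointing from the head to the tip.
            y_col = [m[0][1], m[1][1], m[2][1]]

            length = math.sqrt(sum(c * c for c in y_col))   # bone length = scale along Y
            tip = [head[i] + y_col[i] for i in range(3)]
            return head, tip, length

        # Hypothetical matrix: identity rotation, scale 2 along Y, moved to (1, 0, 0).
        matrix = [[1, 0, 0, 1],
                  [0, 2, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]]
        head, tip, length = bone_head_tip(matrix)   # ([1, 0, 0], [1, 2, 0], 2.0)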

    Read the article

  • Per-pixel collision detection - why does XNA transform matrix return NaN when adding scaling?

    - by JasperS
    I looked at the TransformCollision sample on MSDN and added the Matrix.CreateTranslation part to a property in my collision detection code, but I wanted to add scaling. The code works fine when I leave scaling commented out, but when I add it and then do a Matrix.Invert() on the created translation matrix, the result is NaN ({NaN,NaN,NaN},{NaN,NaN,NaN},...). Can anyone tell me why this is happening, please? Here's the code from the sample:

        // Build the block's transform
        Matrix blockTransform =
            Matrix.CreateTranslation(new Vector3(-blockOrigin, 0.0f)) *
            // Matrix.CreateScale(block.Scale) * would go here
            Matrix.CreateRotationZ(blocks[i].Rotation) *
            Matrix.CreateTranslation(new Vector3(blocks[i].Position, 0.0f));

        public static bool IntersectPixels(
            Matrix transformA, int widthA, int heightA, Color[] dataA,
            Matrix transformB, int widthB, int heightB, Color[] dataB)
        {
            // Calculate a matrix which transforms from A's local space into
            // world space and then into B's local space
            Matrix transformAToB = transformA * Matrix.Invert(transformB);

            // When a point moves in A's local space, it moves in B's local space with a
            // fixed direction and distance proportional to the movement in A.
            // This algorithm steps through A one pixel at a time along A's X and Y axes
            // Calculate the analogous steps in B:
            Vector2 stepX = Vector2.TransformNormal(Vector2.UnitX, transformAToB);
            Vector2 stepY = Vector2.TransformNormal(Vector2.UnitY, transformAToB);

            // Calculate the top left corner of A in B's local space
            // This variable will be reused to keep track of the start of each row
            Vector2 yPosInB = Vector2.Transform(Vector2.Zero, transformAToB);

            // For each row of pixels in A
            for (int yA = 0; yA < heightA; yA++)
            {
                // Start at the beginning of the row
                Vector2 posInB = yPosInB;

                // For each pixel in this row
                for (int xA = 0; xA < widthA; xA++)
                {
                    // Round to the nearest pixel
                    int xB = (int)Math.Round(posInB.X);
                    int yB = (int)Math.Round(posInB.Y);

                    // If the pixel lies within the bounds of B
                    if (0 <= xB && xB < widthB && 0 <= yB && yB < heightB)
                    {
                        // Get the colors of the overlapping pixels
                        Color colorA = dataA[xA + yA * widthA];
                        Color colorB = dataB[xB + yB * widthB];

                        // If both pixels are not completely transparent,
                        if (colorA.A != 0 && colorB.A != 0)
                        {
                            // then an intersection has been found
                            return true;
                        }
                    }

                    // Move to the next pixel in the row
                    posInB += stepX;
                }

                // Move to the next row
                yPosInB += stepY;
            }

            // No intersection found
            return false;
        }

    Read the article

  • MSBuild: Items + Batching + CreateItem + Transforms Question

    - by KeithCS
    I have this bit of an MSBuild project that is making me wonder why the outcome is the way it is. Not that it is causing an issue or anything of the sort, but I would like to try and better my understanding of it.

        <?xml version="1.0" encoding="utf-8" ?>
        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="TestTarget1;TestTarget2" ToolsVersion="3.5">
          <ItemGroup>
            <PathDir Include="C:\RootDir\UniqueDir1"/>
            <PathDir Include="C:\RootDir\UniqueDir2" />
          </ItemGroup>

          <Target Name="TestTarget1" Outputs="%(PathDir.Identity)">
            <PropertyGroup>
              <RootPath>%(PathDir.Identity)</RootPath>
            </PropertyGroup>
            <ItemGroup>
              <SubDirectory Include="Common1"/>
              <SubDirectory Include="Common2"/>
            </ItemGroup>
            <CreateItem Include="@(SubDirectory->'$(RootPath)\%(Identity)')">
              <Output TaskParameter="Include" ItemName="FullPath"/>
            </CreateItem>
            <Message Text="@(FullPath)"/>
          </Target>

          <Target Name="TestTarget2">
            <Message Text="@(FullPath)"/>
          </Target>
        </Project>

    So I have two main paths that are unique, and within each I have two directories with the same names in each of the unique paths. In target1, I am batching against the identity of the items in PathDir, and then performing a transform on the item SubDirectory, which contains the common folder names found in the unique directories, to create a new item containing the full paths. So anyway, after that, the output for the targets is as follows:

        Target 1:
        C:\RootDir\UniqueDir1\Common1;C:\RootDir\UniqueDir1\Common2
        C:\RootDir\UniqueDir2\Common1;C:\RootDir\UniqueDir2\Common2

        Target 2:
        C:\RootDir\UniqueDir1\Common1;C:\RootDir\UniqueDir1\Common2;C:\RootDir\UniqueDir2\Common1;C:\RootDir\UniqueDir2\Common2

    So my question, I guess, is: why does target1 only display the directories containing the directory it is batching against? I know it probably has to do with batching, but that's all I know.

    Read the article

  • How can I stop the iPhone from displaying a transform change until after the screen is redrawn

    - by Ed Marty
    I have found the UIScrollView's zooming mechanism to be clunky and essentially unusable, so instead I'm rolling my own. I have a UIView that resizes itself with the pinch-zoom, and that's working fine. When the zoom is complete, the view needs to reset its transform and redraw the images. The zoom works essentially the same way the UIScrollView does: it sets the transform property of the UIView until complete. Then, when the zoom finishes, I want to reset the transform to CGAffineTransformIdentity, resize the frame to be the size it was before, and tell the view to redraw itself at the new size. It all works pretty well, except that when I change the transform to identity and then redraw the image, there is a slight flicker before the image completely redraws. This is due to the fact that I'm using a subclass of CATiledLayer, since the view can be of arbitrary size. I've overridden the fadeDuration to be zero, but there is still a flicker while the transform is reset before the redraw is finished. Is there any simple way to overcome this without creating another view to draw with and then replacing it?

    Read the article

  • Hough transformation for iris detection in opencv

    - by iva123
    Hi, I wrote code for iris detection and it works well. I can also crop the eye region of a face. Now I want to detect the iris in the cropped image by applying the Hough transformation (cvHoughCircle). However, when I try this procedure, the system is not able to find any circle in the image. Maybe the reason is that there is noise in the image, but I don't think that's it. So, how can I detect the iris? I have code for binary thresholding; maybe I can use it, but I don't know how. If anyone can help I'd really appreciate it. thx :)
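
    Since the binary thresholding code is already written, one way to put it to work (instead of, or as a cross-check on, cvHoughCircle) is to treat the iris/pupil as the darkest round blob in the cropped eye and fit a circle to it. A rough sketch with the Python cv2 bindings; the threshold and kernel sizes are guesses to tune per camera.

        import cv2
        import numpy as np

        eye = cv2.imread("eye_crop.png", cv2.IMREAD_GRAYSCALE)   # hypothetical cropped eye
        eye = cv2.GaussianBlur(eye, (7, 7), 0)

        # Dark pixels (iris/pupil) become white in the mask; opening removes small specks.
        _, mask = cv2.threshold(eye, 60, 255, cv2.THRESH_BINARY_INV)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

        # findContours returns (contours, hierarchy) or (image, contours, hierarchy)
        # depending on the OpenCV version, so take the second-to-last element.
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
        if contours:
            biggest = max(contours, key=cv2.contourArea)
            (cx, cy), r = cv2.minEnclosingCircle(biggest)
            print("iris candidate: centre", (int(cx), int(cy)), "radius", int(r))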

    Read the article

  • App.Config Transformation for Visual Studio 2010?

    - by Amitabh
    For Visual Studio 2010 web-based applications we have the Config Transformation feature, by which we can maintain multiple configuration files for different environments. But the same feature is not available for App.Config files for Windows Services/WinForms or Console Applications. There is a workaround available, as suggested at the following link: http://vishaljoshi.blogspot.com/2010/05/applying-xdt-magic-to-appconfig.html However, it is not straightforward and requires a number of steps. Is there an easier way to achieve the same for App.Config files?

    Read the article
