Search Results

Search found 22128 results on 886 pages for 'point in polygon'.


  • Adding tolerance to a point in polygon test

    - by David Gouveia
    I've been using this method, taken from Game Coding Complete, to detect whether a point is inside a polygon. It works in almost every case, but fails on a few edge cases, and I can't figure out the reason. For example, given a polygon with vertices at (0,0), (0,100) and (100,100), the algorithm returns: true for any point strictly inside the polygon; false for any of the vertices; false for (0, 50), which lies on one of the edges of the polygon; and true (?) for (50,50), which is also on one of the edges of the polygon. I'd actually like to relax the algorithm so that it returns true in all of these cases. In other words, it should return true for points that are strictly inside, for the vertices themselves, and for points on the edges of the polygon. If possible I'd also like to give it enough tolerance so that it always tends towards "true" in the face of floating-point fluctuations. For example, I have another method that, given a line segment and a point, returns the closest location on the line segment to the given point. Currently, given a point outside the polygon and one of its edges, there are cases where that result is categorized as inside by the method above, while other points are considered outside. I'd like to give it enough tolerance so that it always returns true in this situation. The way I've currently solved the problem is a hack, which consists of using an external library to inflate the polygon by a few pixels and performing the tests on the inflated polygon, but I'd really like to replace this with a proper solution.
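
    A minimal sketch, in Python for illustration, of one way to relax the test: treat any point within eps of an edge or vertex as inside, and fall back to a standard even-odd ray cast for strictly interior points. The eps value is an assumption to be tuned to your coordinate scale (and is deliberately generous here so closest-point-on-segment results land on the "true" side).

        import math

        def dist_point_to_segment(p, a, b):
            # distance from point p to the segment a-b
            ax, ay = a; bx, by = b; px, py = p
            dx, dy = bx - ax, by - ay
            if dx == 0 and dy == 0:
                return math.hypot(px - ax, py - ay)
            t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
            t = max(0.0, min(1.0, t))                    # clamp onto the segment
            return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

        def point_in_polygon(p, poly, eps=0.5):
            # boundary first: vertices and edges count as "inside"
            n = len(poly)
            for i in range(n):
                if dist_point_to_segment(p, poly[i], poly[(i + 1) % n]) <= eps:
                    return True
            # even-odd ray cast (to the right) for the strict interior
            x, y = p
            inside = False
            for i in range(n):
                (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
                if (y1 > y) != (y2 > y):
                    if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                        inside = not inside
            return inside

        # the failing cases from the question, on the triangle (0,0) (0,100) (100,100)
        tri = [(0, 0), (0, 100), (100, 100)]
        for q in [(0, 0), (0, 50), (50, 50), (10, 60), (60, 10)]:
            print(q, point_in_polygon(q, tri))           # all True except (60, 10)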

    Read the article

  • Oracle Retail Point-of-Service with Mobile Point-of-Service, Release 13.4.1

    - by Oracle Retail Documentation Team
    Oracle Retail Mobile Point-of-Service was previously released as a standalone product. Oracle Retail Mobile Point-of-Service is now a supported extension of Oracle Retail Point-of-Service, Release 13.4.1. Oracle Retail Mobile Point-of-Service provides support for using a mobile device to perform tasks such as scanning items, applying price adjustments, tendering, and looking up item information.

    Integration with Oracle Retail Store Inventory Management (SIM)
    If Oracle Retail Mobile Point-of-Service is implemented with Oracle Retail Store Inventory Management (SIM), the following Oracle Retail Store Inventory Management functionality is supported: inventory lookup at the current store, inventory lookup at buddy stores, and validation of serial numbers.

    Technical Overview
    The Oracle Retail Mobile Point-of-Service server application runs in a domain on Oracle WebLogic. The server supports the mobile devices in the store. On each mobile device, the Mobile POS application is downloaded and then installed.

    Highlighted End User Documentation Updates and List of Documents
    Oracle Retail Point-of-Service with Mobile Point-of-Service Release Notes: A high-level overview is included about the release's functional, technical, and documentation enhancements. In addition, a section has been written that addresses Product Support considerations.
    Oracle Retail Mobile Point-of-Service Java API Reference: Java API documentation for Oracle Retail Mobile Point-of-Service is included as part of the Oracle Retail Mobile Point-of-Service Release 13.4.1 documentation set.
    Oracle Retail Point-of-Service with Mobile Point-of-Service Installation Guide - Volume 1, Oracle Stack: A new chapter is included with information on installing the Mobile Point-of-Service server and setting up the Mobile POS application. The installer screens for installing the server are included in a new appendix.
    Oracle Retail Point-of-Service with Mobile Point-of-Service User Guide: A new chapter describes the functionality available on a mobile device and how to use Oracle Retail Mobile Point-of-Service on a mobile device.
    Oracle Retail POS Suite with Mobile Point-of-Service Configuration Guide: The Configuration Guide is updated to indicate which parameters are used for Oracle Retail Mobile Point-of-Service.
    Oracle Retail POS Suite with Mobile Point-of-Service Implementation Guide - Volume 5, Mobile Point-of-Service: This new Implementation Guide volume contains information for extending and customizing both the Mobile POS application for the mobile device and the Oracle Retail Mobile Point-of-Service server.
    Oracle Retail POS Suite with Mobile Point-of-Service Licensing Information: The Licensing Information document is updated with the list of third-party open-source software used by Oracle Retail Mobile Point-of-Service.
    Oracle Retail POS Suite with Mobile Point-of-Service Security Guide: The Security Guide is updated with information on security for mobile devices.
    Oracle Retail Enhancements Summary (My Oracle Support Doc ID 1088183.1): This enterprise level document captures the major changes for all the products that are part of releases 13.2, 13.3, and 13.4. The functional, integration, and technical enhancements in the Release Notes for each product are listed in this document.

    Read the article

  • Floating point conversion from Fixed point algorithm

    - by Viks
    Hi, I have an application which is using 24-bit fixed-point calculation. I am porting it to hardware which does support floating point, so for speed optimization I need to convert all fixed-point based calculation to floating-point based calculation. This code snippet calculates the mantissa: for(i=0;i<8207;i++) { // Do n^8/7 calculation and store // it in mantissa and exponent, scaled to // fixed point precision. } So this calculation converts an integer to a mantissa and exponent scaled to fixed-point precision (23 bits). When I tried converting it to float, by dividing the mantissa part by the precision bits and subtracting the precision bits from the exponent part, it really doesn't work. Please suggest a better way of doing it.
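
    The exact fixed-point format isn't fully specified above, so as an illustration only: assuming the table stores a 23-bit fractional mantissa plus a base-2 exponent, the float value is mantissa * 2^(exponent - 23), which is a single scaling step rather than separate divide-and-subtract passes. A hedged Python sketch (FRAC_BITS and the example values are assumptions):

        import math

        FRAC_BITS = 23   # assumed number of fractional bits in the fixed-point mantissa

        def fixed_to_float(mantissa, exponent, frac_bits=FRAC_BITS):
            # value = (mantissa / 2**frac_bits) * 2**exponent, done in one ldexp call
            return math.ldexp(mantissa, exponent - frac_bits)

        # example: mantissa 0x600000 is 0.75 in Q.23; with exponent 3 the value is 6.0
        print(fixed_to_float(0x600000, 3))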

    Read the article

  • Polygon count budget

    - by Lautaro
    Is there any smart way to think about polygon budgets for PC gaming today? My game will have one static 3D background scene and two fighters. No more enemies. I am thinking about having animated 3D models in the background for atmosphere, like spectators. So how could I find out what the polygon count for the player models and background scenes could be? I guess the question is: what is a typical polygon count today that most PCs can handle?

    Read the article

  • Algorithm for approximating silhouette image as polygon

    - by jack
    I want to be able to analyze a texture in real time and approximate a polygon to represent a silhouette. Imagine a person standing in front of a green screen and I want to approximately trace around their outline and get a 2D polygon as the result. Are there algorithms to do this and are they fast enough to work frame-to-frame in a game? (I have found algorithms to triangulate polygons, but I am having trouble knowing what to search for that describes my goal.)
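
    One common pipeline is: threshold the green-screen texture, trace the boundary (e.g. with marching squares), then reduce the traced points to a coarse polygon with Ramer-Douglas-Peucker; each step is roughly linear in the number of boundary pixels and is usually fast enough per frame for modest texture sizes. A Python sketch of the last step only, assuming the ordered outline points already exist:

        import math

        def perp_dist(p, a, b):
            # perpendicular distance from p to the infinite line through a and b
            (ax, ay), (bx, by), (px, py) = a, b, p
            dx, dy = bx - ax, by - ay
            norm = math.hypot(dx, dy)
            if norm == 0:
                return math.hypot(px - ax, py - ay)
            return abs(dy * px - dx * py + bx * ay - by * ax) / norm

        def rdp(points, tolerance):
            # Ramer-Douglas-Peucker: keep the point farthest from the chord, recurse
            if len(points) < 3:
                return points[:]
            dmax, index = 0.0, 0
            for i in range(1, len(points) - 1):
                d = perp_dist(points[i], points[0], points[-1])
                if d > dmax:
                    dmax, index = d, i
            if dmax <= tolerance:
                return [points[0], points[-1]]
            left = rdp(points[:index + 1], tolerance)
            right = rdp(points[index:], tolerance)
            return left[:-1] + right         # drop the duplicated split point

        outline = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8), (9, 9)]
        print(rdp(outline, tolerance=0.5))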

    Read the article

  • How can I move a polygon edge 1 unit away from the center?

    - by Stephen
    Let's say I have a polygon class that is represented by a list of vector classes as vertices, like so: var Vector = function(x, y) { this.x = x; this.y = y; }, Polygon = function(vectors) { this.vertices = vectors; }; Now I make a polygon (in this case, a square) like so: var poly = new Polygon([ new Vector(2, 2), new Vector(5, 2), new Vector(5, 5), new Vector(2, 5) ]); So, the top edge would be [poly.vertices[0], poly.vertices[1]]. I need to stretch this polygon by moving each edge away from the center of the polygon by one unit, along that edge's normal. The following example shows the first edge, the top, moved one unit up: The final polygon should look like this new one: var finalPoly = new Polygon([ new Vector(1, 1), new Vector(6, 1), new Vector(6, 6), new Vector(1, 6) ]); It is important that I iterate, moving one edge at a time, because I will be doing some collision tests after moving each edge. Here is what I tried so far (simplified for clarity), which fails triumphantly: for(var i = 0; i < vertices.length; i++) { var a = vertices[i], b = vertices[i + 1] || vertices[0]; // in case of final vertex var ax = a.x, ay = a.y, bx = b.x, by = b.y; // get some new perpendicular vectors var a2 = new Vector(-ay, ax), b2 = new Vector(-by, bx); // make into unit vectors a2.convertToUnitVector(); b2.convertToUnitVector(); // add the new vectors to the original ones a.add(a2); b.add(b2); // the rest of the code, collision tests, etc. } This makes my polygon start slowly rotating and sliding to the left, instead of what I need. Finally, the example shows a square, but the polygons in question could be anything. They will always be convex, and always with vertices in clockwise order.
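
    A minimal Python sketch of the edge offset (the same arithmetic ports directly to the Vector/Polygon classes above). The key difference from the attempt shown is that the normal comes from the edge direction b - a, not from perpendiculars of the two position vectors; the centroid check below is just a winding-agnostic way to pick the outward one of the two candidate normals.

        def offset_edge_outward(vertices, i, distance=1.0):
            # move edge i (vertices[i] -> vertices[i+1]) along its outward normal, in place
            n = len(vertices)
            (ax, ay), (bx, by) = vertices[i], vertices[(i + 1) % n]
            dx, dy = bx - ax, by - ay
            length = (dx * dx + dy * dy) ** 0.5
            nx, ny = dy / length, -dx / length            # one of the two edge normals
            cx = sum(x for x, _ in vertices) / n          # rough centroid of the polygon
            cy = sum(y for _, y in vertices) / n
            mx, my = (ax + bx) / 2, (ay + by) / 2         # edge midpoint
            # if stepping along the normal moves the midpoint closer to the centroid, flip it
            if (mx + nx - cx) ** 2 + (my + ny - cy) ** 2 < (mx - cx) ** 2 + (my - cy) ** 2:
                nx, ny = -nx, -ny
            vertices[i] = (ax + nx * distance, ay + ny * distance)
            vertices[(i + 1) % n] = (bx + nx * distance, by + ny * distance)

        square = [(2, 2), (5, 2), (5, 5), (2, 5)]
        for i in range(len(square)):
            offset_edge_outward(square, i)   # collision tests could run between iterations
        print(square)                        # [(1, 1), (6, 1), (6, 6), (1, 6)], as in finalPoly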

    Read the article

  • Any reliable polygon normal calculation code?

    - by Jenko
    I'm currently calculating the normal vector of a polygon using this code, but for some faces here and there it calculates a wrong normal. I don't really know what's going on or where it fails but its not reliable. Do you have any polygon normal calculation that's tested and found to be reliable? // calculate normal of a polygon using all points var n:int = points.length; var x:Number = 0; var y:Number = 0; var z:Number = 0 // ensure all points above 0 var minx:Number = 0, miny:Number = 0, minz:Number = 0; for (var p:int = 0, pl:int = points.length; p < pl; p++) { var po:_Point3D = points[p] = points[p].clone(); if (po.x < minx) { minx = po.x; } if (po.y < miny) { miny = po.y; } if (po.z < minz) { minz = po.z; } } for (p = 0; p < pl; p++) { po = points[p]; po.x -= minx; po.y -= miny; po.z -= minz; } var cur:int = 1, prev:int = 0, next:int = 2; for (var i:int = 1; i <= n; i++) { // using Newell method x += points[cur].y * (points[next].z - points[prev].z); y += points[cur].z * (points[next].x - points[prev].x); z += points[cur].x * (points[next].y - points[prev].y); cur = (cur+1) % n; next = (next+1) % n; prev = (prev+1) % n; } // length of the normal var length:Number = Math.sqrt(x * x + y * y + z * z); // turn large values into a unit vector if (length != 0){ x = x / length; y = y / length; z = z / length; }else { throw new Error("Cannot calculate normal since triangle has an area of 0"); }
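
    For reference, a compact version of Newell's method (shown in Python; the loop translates directly to ActionScript). It does not need the "shift all points above 0" pass, since the normal is unaffected by translation, and it avoids tracking cur/prev/next counters separately, which is one place the code above could drift out of sync.

        import math

        def newell_normal(points):
            # unit normal of a planar 3D polygon; vertex order decides which way it faces
            nx = ny = nz = 0.0
            n = len(points)
            for i in range(n):
                x1, y1, z1 = points[i]
                x2, y2, z2 = points[(i + 1) % n]
                nx += (y1 - y2) * (z1 + z2)
                ny += (z1 - z2) * (x1 + x2)
                nz += (x1 - x2) * (y1 + y2)
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            if length == 0.0:
                raise ValueError("degenerate polygon (zero area)")
            return (nx / length, ny / length, nz / length)

        # quad in the z = 5 plane, counterclockwise when viewed from +z -> (0, 0, 1)
        print(newell_normal([(0, 0, 5), (1, 0, 5), (1, 1, 5), (0, 1, 5)]))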

    Read the article

  • Subdividing a polygon into boxes of varying size

    - by Michael Trouw
    I would like to be pointed to information / resources for creating algorithms like the one illustrated on this blog, which is a subdivision of a polygon (in my case a Voronoi cell) into several boxes of varying size: http://procworld.blogspot.nl/2011/07/city-lots.html In the comments a paper by, among others, the author of the blog can be found; however, the only formula listed is about candidate location suitability: http://www.groenewegen.de/delft/thesis-final/ProceduralCityLayoutGeneration-Preprint.pdf Any language will do, but if examples can be given JavaScript is preferred (as it is the language I am currently working with). A similar question is this one: What is an efficient packing algorithm for packing rectangles into a polygon?

    Read the article

  • Calculate pixels within a polygon

    - by DoomStone
    In an assignment for school do we need to do some image recognizing, where we have to find a path for a robot. So far have we been able to find all the polygons in the image, but now we need to generate a pixel map, that be used for an astar algorithm later. We have found a way to do this, show below, but the problem is that is very slow, as we go though each pixel and test if it is inside the polygon. So my question is, are there a way that we can generate this pixel map faster? We have a list of cordinates for the polygon private List<IntPoint> hull; The fuction "getMap" is called to get the pixel map public Point[] getMap() { List<Point> points = new List<Point>(); lock (hull) { Rectangle rect = getRectangle(); for (int x = rect.X; x <= rect.X + rect.Width; x++) { for (int y = rect.Y; y <= rect.Y + rect.Height; y++) { if (inPoly(x, y)) points.Add(new Point(x, y)); } } } return points.ToArray(); } Get Rectangle is used to limit the search, se we don't have to go thoug the whole image public Rectangle getRectangle() { int x = -1, y = -1, width = -1, height = -1; foreach (IntPoint item in hull) { if (item.X < x || x == -1) x = item.X; if (item.Y < y || y == -1) y = item.Y; if (item.X > width || width == -1) width = item.X; if (item.Y > height || height == -1) height = item.Y; } return new Rectangle(x, y, width-x, height-y); } And atlast this is how we check to see if a pixel is inside the polygon public bool inPoly(int x, int y) { int i, j = hull.Count - 1; bool oddNodes = false; for (i = 0; i < hull.Count; i++) { if (hull[i].Y < y && hull[j].Y >= y || hull[j].Y < y && hull[i].Y >= y) { try { if (hull[i].X + (y - hull[i].X) / (hull[j].X - hull[i].X) * (hull[j].X - hull[i].X) < x) { oddNodes = !oddNodes; } } catch (DivideByZeroException e) { if (0 < x) { oddNodes = !oddNodes; } } } j = i; } return oddNodes; }
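
    The usual way to speed this up is to rasterize the polygon with a scanline fill instead of running a point-in-polygon test for every pixel of the bounding box: for each y, intersect the scanline with the edges, sort the crossings, and fill between pairs. A rough Python sketch of the idea (the loop structure carries over directly to the C# version with List<Point>); the half-open comparison is one common way to avoid counting shared vertices twice:

        def polygon_pixels(hull):
            # all integer (x, y) points covered by the polygon, one scanline at a time
            ys = [y for _, y in hull]
            n = len(hull)
            points = []
            for y in range(min(ys), max(ys) + 1):
                xs = []
                for i in range(n):
                    (x1, y1), (x2, y2) = hull[i], hull[(i + 1) % n]
                    if (y1 <= y < y2) or (y2 <= y < y1):      # edge crosses this scanline
                        xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
                xs.sort()
                for j in range(0, len(xs) - 1, 2):            # fill between crossing pairs
                    for x in range(int(round(xs[j])), int(round(xs[j + 1])) + 1):
                        points.append((x, y))
            return points

        print(len(polygon_pixels([(0, 0), (10, 0), (0, 10)])))   # 65 pixels for this triangle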

    Read the article

  • adding onTap method on path direction between 2 points

    - by idham
    I have a problem in my Android application I have a path direction on my application and I want to add an onTap method for the path, so if I touch that path my application will display information with alert dialog. This my activity code: hasilrute hr = new hasilrute(); for (int k = 0;k < hr.r2.size(); k++){ String angkot = hr.r2.get(i).angkot; Cursor c = db.getLatLong(hasilrute.a); Cursor cc = db.getLatLong(hasilrute.b); String x = (c.getString(3)+","+c.getString(2)); String xx = (cc.getString(3)+","+cc.getString(2)); String pairs[] = getDirectionData(x, xx); String[] lnglat = pairs[0].split(","); GeoPoint point = new GeoPoint((int) (Double.parseDouble(lnglat[1]) *1E6),(int)(Double.parseDouble(lnglat[0]) * 1E6)); GeoPoint gp1; GeoPoint gp2 = point; for (int j = 1;j < pairs.length; j++){ lnglat = pairs[j].split(","); gp1 = gp2; gp2 = new GeoPoint((int) (Double.parseDouble(lnglat[1]) *1E6),(int) (Double.parseDouble(lnglat[0]) * 1E6)); mapView.getOverlays().add(new jalur(gp1, gp2,angkot)); } } and it's my jalur.java code public class jalur extends Overlay { private GeoPoint gp1; private GeoPoint gp2; private String angkot; private Context mContext; public jalur(GeoPoint gp1, GeoPoint gp2, String angkot){ this.gp1 = gp1; this.gp2 = gp2; this.angkot = angkot; } @Override public boolean draw(Canvas canvas, MapView mapView, boolean shadow, long when){ Projection projection = mapView.getProjection(); if (shadow == false){ if (angkot.equals("Cimahi-Leuwipanjang")){ Paint paint = new Paint(); paint.setAntiAlias(true); Point point = new Point(); projection.toPixels(gp1,point); Point point2 = new Point(); projection.toPixels(gp2, point2); paint.setColor(Color.rgb(118,171,127)); paint.setStrokeWidth(2); canvas.drawLine((float) point.x, (float) point.y, (float) point2.x, (float) point2.y, paint); }if (angkot.equals("Cimahi-Cangkorah")){ Paint paint = new Paint(); paint.setAntiAlias(true); Point point = new Point(); projection.toPixels(gp1,point); Point point2 = new Point(); projection.toPixels(gp2, point2); paint.setColor(Color.rgb(67,204,255)); paint.setStrokeWidth(2); canvas.drawLine((float) point.x, (float) point.y, (float) point2.x, (float) point2.y, paint); }if (angkot.equals("Cimindi-Cipatik")){ Paint paint = new Paint(); paint.setAntiAlias(true); Point point = new Point(); projection.toPixels(gp1,point); Point point2 = new Point(); projection.toPixels(gp2, point2); paint.setColor(Color.rgb(42,82,0)); paint.setStrokeWidth(2); canvas.drawLine((float) point.x, (float) point.y, (float) point2.x, (float) point2.y, paint); }if (angkot.equals("Jalan Kaki")){ Paint paint = new Paint(); paint.setAntiAlias(true); Point point = new Point(); projection.toPixels(gp1,point); Point point2 = new Point(); projection.toPixels(gp2, point2); paint.setColor(Color.rgb(0,0,0)); paint.setStrokeWidth(2); canvas.drawLine((float) point.x, (float) point.y, (float) point2.x, (float) point2.y, paint); }if (angkot.equals("Cimahi-Padalarang")){ Paint paint = new Paint(); paint.setAntiAlias(true); Point point = new Point(); projection.toPixels(gp1,point); Point point2 = new Point(); projection.toPixels(gp2, point2); paint.setColor(Color.rgb(229,66,66)); paint.setStrokeWidth(2); canvas.drawLine((float) point.x, (float) point.y, (float) point2.x, (float) point2.y, paint); } if (angkot.equals("Pasantren-Sarijadi")){ Paint paint = new Paint(); paint.setAntiAlias(true); Point point = new Point(); projection.toPixels(gp1,point); Point point2 = new Point(); projection.toPixels(gp2, point2); paint.setColor(Color.rgb(4,39,255)); 
paint.setStrokeWidth(2); canvas.drawLine((float) point.x, (float) point.y, (float) point2.x, (float) point2.y, paint); }if (angkot.equals("Cimahi-Parongpong")){ Paint paint = new Paint(); paint.setAntiAlias(true); Point point = new Point(); projection.toPixels(gp1,point); Point point2 = new Point(); projection.toPixels(gp2, point2); paint.setColor(Color.rgb(141,0,200)); paint.setStrokeWidth(2); canvas.drawLine((float) point.x, (float) point.y, (float) point2.x, (float) point2.y, paint); }if (angkot.equals("Cimahi-Cibeber")){ Paint paint = new Paint(); paint.setAntiAlias(true); Point point = new Point(); projection.toPixels(gp1,point); Point point2 = new Point(); projection.toPixels(gp2, point2); paint.setColor(Color.rgb(255,246,0)); paint.setStrokeWidth(2); canvas.drawLine((float) point.x, (float) point.y, (float) point2.x, (float) point2.y, paint); }if (angkot.equals("Cimahi-Cimindi")){ Paint paint = new Paint(); paint.setAntiAlias(true); Point point = new Point(); projection.toPixels(gp1,point); Point point2 = new Point(); projection.toPixels(gp2, point2); paint.setColor(Color.rgb(220,145,251)); paint.setStrokeWidth(2); canvas.drawLine((float) point.x, (float) point.y, (float) point2.x, (float) point2.y, paint); }if (angkot.equals("Cimahi-Contong")){ Paint paint = new Paint(); paint.setAntiAlias(true); Point point = new Point(); projection.toPixels(gp1,point); Point point2 = new Point(); projection.toPixels(gp2, point2); paint.setColor(Color.rgb(242,138,138)); paint.setStrokeWidth(2); canvas.drawLine((float) point.x, (float) point.y, (float) point2.x, (float) point2.y, paint); }if (angkot.equals("Cimahi-Soreang")){ Paint paint = new Paint(); paint.setAntiAlias(true); Point point = new Point(); projection.toPixels(gp1,point); Point point2 = new Point(); projection.toPixels(gp2, point2); paint.setColor(Color.rgb(0,255,78)); paint.setStrokeWidth(2); canvas.drawLine((float) point.x, (float) point.y, (float) point2.x, (float) point2.y, paint); }if (angkot.equals("Cimahi-Batujajar")){ Paint paint = new Paint(); paint.setAntiAlias(true); Point point = new Point(); projection.toPixels(gp1,point); Point point2 = new Point(); projection.toPixels(gp2, point2); paint.setColor(Color.rgb(137,217,51)); paint.setStrokeWidth(2); canvas.drawLine((float) point.x, (float) point.y, (float) point2.x, (float) point2.y, paint); } } return super.draw(canvas, mapView, shadow, when); } @Override public void draw(Canvas canvas, MapView mapView, boolean shadow){ super.draw(canvas, mapView, shadow); } } thanks for your attention :)

    Read the article

  • Any reliable polygon normal calculation code?

    - by Jenko
    Do you have any reliable face normal calculation code? I'm using this but it fails when faces are 90 degrees upright or similar. // the normal point var x:Number = 0; var y:Number = 0; var z:Number = 0; // if is a triangle with 3 points if (points.length == 3) { // read vertices of triangle var Ax:Number, Bx:Number, Cx:Number; var Ay:Number, By:Number, Cy:Number; var Az:Number, Bz:Number, Cz:Number; Ax = points[0].x; Bx = points[1].x; Cx = points[2].x; Ay = points[0].y; By = points[1].y; Cy = points[2].y; Az = points[0].z; Bz = points[1].z; Cz = points[2].z; // calculate normal of a triangle x = (By - Ay) * (Cz - Az) - (Bz - Az) * (Cy - Ay); y = (Bz - Az) * (Cx - Ax) - (Bx - Ax) * (Cz - Az); z = (Bx - Ax) * (Cy - Ay) - (By - Ay) * (Cx - Ax); // if is a polygon with 4+ points }else if (points.length > 3){ // calculate normal of a polygon using all points var n:int = points.length; x = 0; y = 0; z = 0 // ensure all points above 0 var minx:Number = 0, miny:Number = 0, minz:Number = 0; for (var p:int = 0, pl:int = points.length; p < pl; p++) { var po:_Point3D = points[p] = points[p].clone(); if (po.x < minx) { minx = po.x; } if (po.y < miny) { miny = po.y; } if (po.z < minz) { minz = po.z; } } if (minx > 0 || miny > 0 || minz > 0){ for (p = 0; p < pl; p++) { po = points[p]; po.x -= minx; po.y -= miny; po.z -= minz; } } var cur:int = 1, prev:int = 0, next:int = 2; for (var i:int = 1; i <= n; i++) { // using Newell method x += points[cur].y * (points[next].z - points[prev].z); y += points[cur].z * (points[next].x - points[prev].x); z += points[cur].x * (points[next].y - points[prev].y); cur = (cur+1) % n; next = (next+1) % n; prev = (prev+1) % n; } } // length of the normal var length:Number = Math.sqrt(x * x + y * y + z * z); // if area is 0 if (length == 0) { return null; }else{ // turn large values into a unit vector x = x / length; y = y / length; z = z / length; }

    Read the article

  • Robust line of sight test on the inside of a polygon with tolerance

    - by David Gouveia
    Foreword: This is a followup to this question and the main problem I'm trying to solve. My current solution is a hack which involves inflating the polygon and doing most calculations on the inflated polygon instead. My goal is to remove this step completely and correctly solve the problem with calculations only.

    Problem: Given a concave polygon, and treating all of its edges as if they were walls in a level, determine whether two points A and B are in line of sight of each other, while accounting for some degree of floating-point error. I'm currently basing my solution on a series of line-segment intersection tests. In other words: if either of the end points is outside the polygon, they are not in line of sight; if both end points are inside the polygon and the line segment from A to B crosses any of the edges of the polygon, they are not in line of sight; if both end points are inside the polygon and the line segment from A to B does not cross any of the edges of the polygon, they are in line of sight. But the problem is dealing correctly with all the edge cases. In particular, it must be able to deal with all the situations depicted below, where red lines are examples that should be rejected and green lines are examples that should be accepted. I probably missed a few other situations, such as when the line segment from A to B is collinear with an edge but one of the end points is outside the polygon. One point of particular interest is the difference between 1 and 9. In both cases, both end points are vertices of the polygon and there are no edges being intersected, but 1 should be rejected while 9 should be accepted. How do I distinguish these two? I could check some middle point within the segment to see if it falls inside or not, but it's easy to come up with situations in which that would fail. Point 7 was also pretty tricky and I had to treat it as a special case, which checks directly whether the two points are adjacent vertices of the polygon. But there are also other cases of line segments being collinear with the edges of the polygon, and I'm still not entirely sure how I should handle those. Is there any well-known solution to this problem?

    Read the article

  • Polygon count target range for MMO being released in 2 years

    - by classer
    What would a realistic poly count target range be for NPC and player models in a 3D MMO that will be released in 2 years? What about poly count target range for the entire camera view (environment, NPC and player meshes)? I read in some places that one should not aim too low if the game will come out in a couple years because technology is always advancing. If you can give some mesh poly stats on what other current MMOs / MMORPGs are running and future projections, that would be great. Thank you.

    Read the article

  • Obtaining a world point from a screen point with an orthographic projection

    - by vargonian
    I assumed this was a straightforward problem but it has been plaguing me for days. I am creating a 2D game with an orthographic camera. I am using a 3D camera rather than just hacking it because I want to support rotating, panning, and zooming. Unfortunately the math overwhelms me when I'm trying to figure out how to determine if a clicked point intersects a bounds (let's say rectangular) in the game. I was under the impression that I could simply transform the screen point (the clicked point) by the inverse of the camera's View * Projection matrix to obtain the world coordinates of the clicked point. Unfortunately this is not the case at all; I get some point that seems to be in some completely different coordinate system. So then as a sanity check I tried taking an arbitrary world point and transforming it by the camera's View*Projection matrices. Surely this should get me the corresponding screen point, but even that didn't work, and it is quickly shattering any illusion I had that I understood 3D coordinate systems and the math involved. So, if I could form this into a question: How would I use my camera's state information (view and projection matrices, for instance) to transform a world point to a screen point, and vice versa? I hope the problem will be simpler since I'm using an orthographic camera and can make several assumptions from that. I very much appreciate any help. If it makes a difference, I'm using XNA Game Studio.
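
    This is probably not the author's exact bug, but the step most often missed in this situation is converting the pixel to normalized device coordinates (x and y in [-1, 1], with y flipped) before applying the inverse of view * projection; XNA's Viewport.Unproject wraps this up for you. A sketch of the math with numpy, using row-vector matrices in the XNA convention (the specific matrices below are assumptions for the demo):

        import numpy as np

        def ortho_projection(width, height, z_near, z_far):
            # row-vector orthographic projection (v_clip = v_world @ M), XNA-style
            return np.array([
                [2.0 / width, 0.0,          0.0,                       0.0],
                [0.0,         2.0 / height, 0.0,                       0.0],
                [0.0,         0.0,          1.0 / (z_near - z_far),    0.0],
                [0.0,         0.0,          z_near / (z_near - z_far), 1.0],
            ])

        def screen_to_world(px, py, vw, vh, view, proj):
            # pixel -> NDC in [-1, 1] -> back through inverse(view @ proj)
            ndc = np.array([2.0 * px / vw - 1.0,
                            1.0 - 2.0 * py / vh,    # pixel y grows down, NDC y grows up
                            0.0, 1.0])
            world = ndc @ np.linalg.inv(view @ proj)
            return world[:3] / world[3]

        view = np.eye(4)                                   # identity camera for the demo
        proj = ortho_projection(800, 600, 0.0, 1.0)        # 1 world unit per pixel
        print(screen_to_world(200, 150, 800, 600, view, proj))   # [-200. 150. 0.]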

    Read the article

  • Can SpriteBatch be used to fill a polygon with a texture?

    - by can poyrazoglu
    I basically need to fill a texture into a polygon using the SpriteBatch. I've done some research but couldn't find anything useful except the polygon triangulation method, which works well only with convex polygons (without diving into heavy math, which is definitely not something I'm good at). Are there any solutions for filling in a polygon in a basic way? I of course need something dynamic (I'll have a map editor in which you can define polygons, and the game will render them; collision detection will also use them, but that's off topic), so basically I can't accept solutions like "pre-calculated" bitmaps or anything like that. I need to draw a polygon with the segments provided, to the screen, using the SpriteBatch.

    Read the article

  • Polygon with different line width (in R)

    - by Tal Galili
    Hi all, I would like to use a command like this: plot(c(1,8), 1:2, type="n") polygon(1:7, c(2,1,2,NA,2,1,2), col=c("red", "blue"), # border=c("green", "yellow"), border=c(1,10), lwd=c(1:10)) To create two triangles, with different line widths. But the polygon command doesn't seem to recycle the "lwd" parameter as it does the col or the border parameters. I would like the resulting plot to look like what the following code will produce: plot(c(1,8), 1:2, type="n") polygon(1:3, c(2,1,2), col=c("red"), # border=c("green", "yellow"), border=c(1,10), lwd=c(1)) polygon(5:7, c(2,1,2), col=c( "blue"), # border=c("green", "yellow"), border=c(1,10), lwd=c(10)) So my questions are: Is there something like polygon that does what I asked for? (If not, I would do it by creating a new polygon function that will break the original x,y by their NA's, although I am not yet sure what is the smartest way to do that...) Thanks, Tal

    Read the article

  • Diagonal of polygon is inside or outside?

    - by Himadri
    I have three consecutive points of a polygon, say p1, p2, p3. I want to know whether the diagonal between p1 and p3 is inside the polygon or outside the polygon. I am doing it by taking three vectors v1, v2 and v3, and the point before p1 in the polygon, say p0: v1 = (p0 - p1), v2 = (p2 - p1), v3 = (p3 - p1). With reference to this question, I am using the method shown in its accepted answer. It only works for counterclockwise polygons. What if my points are clockwise? I do know whether my whole polygon is clockwise or counterclockwise, and I select the vectors v1 and v2 accordingly, but I am still getting a problem. I am showing one case where the problem occurs: this polygon is counterclockwise, and it starts from the origin of v1 and v2.
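
    For the local part of the test (is the diagonal on the interior side of the corner at p1?), a winding-agnostic approach is to compute the polygon's signed area once and swap the roles of the previous and next neighbours when the polygon is clockwise; the rest is the standard "InCone" test from O'Rourke's Computational Geometry in C. A Python sketch (note this only checks the corner at p1; a full diagonal test also has to check the segment against the polygon's edges):

        def cross(o, a, b):
            # 2D cross product of (a - o) x (b - o); > 0 means b is left of ray o->a
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def signed_area(poly):
            n = len(poly)
            return 0.5 * sum(poly[i][0] * poly[(i + 1) % n][1]
                             - poly[(i + 1) % n][0] * poly[i][1] for i in range(n))

        def diagonal_locally_inside(poly, i, j):
            # does the diagonal from vertex i towards vertex j leave i on the interior side?
            n = len(poly)
            prev_v, cur, next_v = poly[(i - 1) % n], poly[i], poly[(i + 1) % n]
            if signed_area(poly) < 0:                      # clockwise: swap neighbour roles
                prev_v, next_v = next_v, prev_v
            target = poly[j]
            if cross(cur, next_v, prev_v) >= 0:            # convex corner at i
                return cross(cur, target, prev_v) > 0 and cross(target, cur, next_v) > 0
            # reflex corner at i
            return not (cross(cur, target, next_v) >= 0 and cross(target, cur, prev_v) >= 0)

        ccw = [(0, 0), (4, 0), (4, 4), (2, 1), (0, 4)]     # concave, counterclockwise
        cw = list(reversed(ccw))
        print(diagonal_locally_inside(ccw, 1, 3), diagonal_locally_inside(cw, 3, 1))  # True True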

    Read the article

  • JBox2D Polygon Collisions Acting Strange

    - by andy
    I have been playing around with JBox2D and Slick2D and made a little demo with a ground object, a box object, and two different polygons. The problem I am facing is that the collision-detection for the polygons seems to be off (see picture below), but the box's collision works fine. My Code: Main Class package main; import org.jbox2d.common.Vec2; import org.jbox2d.dynamics.BodyType; import org.jbox2d.dynamics.World; import org.newdawn.slick.GameContainer; import org.newdawn.slick.Graphics; import org.newdawn.slick.SlickException; import org.newdawn.slick.state.BasicGameState; import org.newdawn.slick.state.StateBasedGame; import shapes.Box; import shapes.Polygon; public class State1 extends BasicGameState{ World world; int velocityIterations; int positionIterations; float pixelsPerMeter; int state; Box ground; Box box1; Polygon poly1; Polygon poly2; Renderer renderer; public State1(int state) { this.state = state; } @Override public void init(GameContainer gc, StateBasedGame game) throws SlickException { velocityIterations = 10; positionIterations = 10; pixelsPerMeter = 1f; world = new World(new Vec2(0.f, -9.8f)); renderer = new Renderer(gc, gc.getGraphics(), pixelsPerMeter, world); box1 = new Box(-100f, 200f, 40, 50, BodyType.DYNAMIC, world); ground = new Box(-14, -275, 50, 900, BodyType.STATIC, world); poly1 = new Polygon(50f, 10f, new Vec2[] { new Vec2(-6f, -14f), new Vec2(0f, -20f), new Vec2(6f, -14f), new Vec2(10f, 10f), new Vec2(-10f, 10f) }, BodyType.DYNAMIC, world); poly2 = new Polygon(0f, 10f, new Vec2[] { new Vec2(10f, 0f), new Vec2(20f, 0f), new Vec2(30f, 10f), new Vec2(30f, 20f), new Vec2(20f, 30f), new Vec2(10f, 30f), new Vec2(0f, 20f), new Vec2(0f, 10f) }, BodyType.DYNAMIC, world); } @Override public void update(GameContainer gc, StateBasedGame game, int delta) throws SlickException { world.step((float)delta / 180f, velocityIterations, positionIterations); } @Override public void render(GameContainer gc, StateBasedGame game, Graphics g) throws SlickException { renderer.render(); } @Override public int getID() { return this.state; } } Polygon Class package shapes; import org.jbox2d.collision.shapes.PolygonShape; import org.jbox2d.common.Vec2; import org.jbox2d.dynamics.Body; import org.jbox2d.dynamics.BodyDef; import org.jbox2d.dynamics.BodyType; import org.jbox2d.dynamics.FixtureDef; import org.jbox2d.dynamics.World; import org.newdawn.slick.Color; public class Polygon { public float x, y; public Color color; public BodyType bodyType; org.newdawn.slick.geom.Polygon poly; BodyDef def; PolygonShape ps; FixtureDef fd; Body body; World world; Vec2[] verts; public Polygon(float x, float y, Vec2[] verts, BodyType bodyType, World world) { this.verts = verts; this.x = x; this.y = y; this.bodyType = bodyType; this.world = world; init(); } public void init() { def = new BodyDef(); def.type = bodyType; def.position.set(x, y); ps = new PolygonShape(); ps.set(verts, verts.length); fd = new FixtureDef(); fd.shape = ps; fd.density = 2.0f; fd.friction = 0.7f; fd.restitution = 0.5f; body = world.createBody(def); body.createFixture(fd); } } Rendering Class package main; import org.jbox2d.collision.shapes.PolygonShape; import org.jbox2d.collision.shapes.ShapeType; import org.jbox2d.common.MathUtils; import org.jbox2d.common.Vec2; import org.jbox2d.dynamics.Body; import org.jbox2d.dynamics.Fixture; import org.jbox2d.dynamics.World; import org.newdawn.slick.Color; import org.newdawn.slick.GameContainer; import org.newdawn.slick.Graphics; import org.newdawn.slick.geom.Polygon; import 
org.newdawn.slick.geom.Transform; public class Renderer { World world; float pixelsPerMeter; GameContainer gc; Graphics g; public Renderer(GameContainer gc, Graphics g, float ppm, World world) { this.world = world; this.pixelsPerMeter = ppm; this.g = g; this.gc = gc; } public void render() { Body current = world.getBodyList(); Vec2 center = current.getLocalCenter(); while(current != null) { Vec2 pos = current.getPosition(); g.pushTransform(); g.translate(pos.x * pixelsPerMeter + (0.5f * gc.getWidth()), -pos.y * pixelsPerMeter + (0.5f * gc.getHeight())); Fixture f = current.getFixtureList(); while(f != null) { ShapeType type = f.getType(); g.setColor(getColor(current)); switch(type) { case POLYGON: { PolygonShape shape = (PolygonShape)f.getShape(); Vec2[] verts = shape.getVertices(); int count = shape.getVertexCount(); Polygon p = new Polygon(); for(int i = 0; i < count; i++) { p.addPoint(verts[i].x, verts[i].y); } p.setCenterX(center.x); p.setCenterY(center.y); p = (Polygon)p.transform(Transform.createRotateTransform(current.getAngle() + MathUtils.PI, center.x, center.y)); p = (Polygon)p.transform(Transform.createScaleTransform(pixelsPerMeter, pixelsPerMeter)); g.draw(p); break; } case CIRCLE: { f.getShape(); } default: } f = f.getNext(); } g.popTransform(); current = current.getNext(); } } public Color getColor(Body b) { Color c = new Color(1f, 1f, 1f); switch(b.m_type) { case DYNAMIC: if(b.isActive()) { c = new Color(255, 123, 0); } else { c = new Color(99, 99, 99); } break; case KINEMATIC: break; case STATIC: c = new Color(111, 111, 111); break; default: break; } return c; } } Any help with fixing the collisions would be greatly appreciated, and if you need any other code snippets I would be happy to provide them.

    Read the article

  • Point in polygon OR point on polygon using LINQ

    - by wageoghe
    As noted in an earlier question, How to Zip enumerable with itself, I am working on some math algorithms based on lists of points. I am currently working on point in polygon. I have the code for how to do that and have found several good references here on SO, such as this link Hit test. So, I can figure out whether or not a point is in a polygon. As part of determining that, I want to determine if the point is actually on the polygon. This I can also do. If I can do all of that, what is my question you might ask? Can I do it efficiently using LINQ? I can already do something like the following (assuming a Pairwise extension method as described in my earlier question as well as in links to which my question/answers links, and assuming a Position type that has X and Y members). I have not tested much, so the lambda might not be 100% correct. Also, it does not take very small differences into account. public static PointInPolygonLocation PointInPolygon(IEnumerable<Position> pts, Position pt) { int numIntersections = pts.Pairwise( (p1, p2) => { if (p1.Y != p2.Y) { if ((p1.Y >= pt.Y && p2.Y < pt.Y) || (p1.Y < pt.Y && p2.Y >= pt.Y)) { if (p1.X < p1.X && p2.X < pt.X) { return 1; } if (p1.X < pt.X || p2.X < pt.X) { if (((pt.Y - p1.Y) * ((p1.X - p2.X) / (p1.Y - p2.Y)) * p1.X) < pt.X) { return 1; } } } } return 0; }).Sum(); if (numIntersections % 2 == 0) { return PointInPolygonLocation.Outside; } else { return PointInPolygonLocation.Inside; } } This function, PointInPolygon, takes the input Position, pt, iterates over the input sequence of position values, and uses the Jordan Curve method to determine how many times a ray extended from pt to the left intersects the polygon. The lambda expression will yield, into the "zipped" list, 1 for every segment that is crossed, and 0 for the rest. The sum of these values determines if pt is inside or outside of the polygon (odd == inside, even == outside). So far, so good. Now, for any consecutive pairs of position values in the sequence (i.e. in any execution of the lambda), we can also determine if pt is ON the segment p1, p2. If that is the case, we can stop the calculation because we have our answer. Ultimately, my question is this: Can I perform this calculation (maybe using Aggregate?) such that we will only iterate over the sequence no more than 1 time AND can we stop the iteration if we encounter a segment that pt is ON? In other words, if pt is ON the very first segment, there is no need to examine the rest of the segments because we have the answer. It might very well be that this operation (particularly the requirement/desire to possibly stop the iteration early) does not really lend itself well to the LINQ approach. It just occurred to me that maybe the lambda expression could yield a tuple, the intersection value (1 or 0 or maybe true or false) and the "on" value (true or false). Maybe then I could use TakeWhile(anontype.PointOnPolygon == false). If I Sum the tuples and if ON == 1, then the point is ON the polygon. Otherwise, the oddness or evenness of the sum of the other part of the tuple tells if the point is inside or outside.
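
    On the early-exit part: one option is to have the lambda classify each edge as (crossed, on-edge) and stop the enumeration at the first on-edge hit (TakeWhile as suggested, or an iterator method with yield return and an early return). The single-pass shape of the computation, sketched in Python as a language-neutral outline (an eps tolerance is assumed for the on-edge check):

        def classify(pt, polygon, eps=1e-9):
            # one pass over the edges; returns "on", "inside" or "outside"
            x, y = pt
            crossings = 0
            n = len(polygon)
            for i in range(n):
                (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
                # on-edge: pt collinear with the edge and inside its bounding box
                cr = (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)
                if (abs(cr) <= eps
                        and min(x1, x2) - eps <= x <= max(x1, x2) + eps
                        and min(y1, y2) - eps <= y <= max(y1, y2) + eps):
                    return "on"                            # early exit, no need to finish
                # Jordan-curve crossing count for a ray cast to the right of pt
                if (y1 > y) != (y2 > y):
                    if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                        crossings += 1
            return "inside" if crossings % 2 else "outside"

        square = [(0, 0), (10, 0), (10, 10), (0, 10)]
        print(classify((5, 5), square), classify((10, 5), square), classify((20, 5), square))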

    Read the article

  • Will fixed-point arithmetic be worth my trouble?

    - by Thomas
    I'm working on a fluid dynamics Navier-Stokes solver that should run in real time. Hence, performance is important. Right now, I'm looking at a number of tight loops that each account for a significant fraction of the execution time: there is no single bottleneck. Most of these loops do some floating-point arithmetic, but there's a lot of branching in between. The floating-point operations are mostly limited to additions, subtractions, multiplications, divisions and comparisons. All this is done using 32-bit floats. My target platform is x86 with at least SSE1 instructions. (I've verified in the assembler output that the compiler indeed generates SSE instructions.) Most of the floating-point values that I'm working with have a reasonably small upper bound, and precision for near-zero values isn't very important. So the thought occurred to me: maybe switching to fixed-point arithmetic could speed things up? I know the only way to be really sure is to measure it, that might take days, so I'd like to know the odds of success beforehand. Fixed-point was all the rage back in the days of Doom, but I'm not sure where it stands anno 2010. Considering how much silicon is nowadays pumped into floating-point performance, is there a chance that fixed-point arithmetic will still give me a significant speed boost? Does anyone have any real-world experience that may apply to my situation?

    Read the article

  • ray collision with rectangle and floating point accuracy

    - by phq
    I'm trying to solve a problem with a ray bouncing off a box. Actually it is a sphere, but for simplicity the box dimensions are expanded by the sphere radius when doing the collision test, reducing the sphere to a single ray. The test is done by projecting the ray onto all faces of the box and picking the one that is closest. However, because I'm using floating-point variables, I fear that the point projected onto the surface might be interpreted as being below it in the next iteration; I will also later allow the sphere to move, which might make that scenario more likely. Also, the bounce coefficient might be as low as zero, making the sphere continue along the surface. So my naive solution is to project not only forwards but also backwards, to catch those cases. That is where I run into the problems shown in the figure: In the first iteration the first black arrow is calculated and we end up at a point on the surface of the box. In the second iteration the "back projection" hits the other surface, making the second black arrow bounce on the wrong surface. If there are several boxes close to each other this has further consequences, making the sphere fall through them all. So my main question is how to handle possible floating-point inaccuracy when placing the sphere on the box surface so that it does not fall through. In writing this question I got the idea of having a threshold, accepting back projections only up to a distance much smaller than the box but larger than the possible accuracy limitation; this would only cause a "false" back projection when the sphere hits the box on an edge, which would appear naturally. To clarify my original approach: the arrows shown in the image are not only the path the sphere travels but also represent a single time step in the simulation. In reality the time step is much smaller, about 0.05 of the box size. The path traveled is projected onto possible sides to avoid traveling past a thinner object at higher speeds. In normal situations the floating-point accuracy is not an issue, but there are two situations where I have the concern: when the new position at the end of the time step is located very close to the surface (very unlikely, though), and when using a bounce factor of 0 (here it happens every time the sphere hits a box). What adds to the loss of accuracy, and motivates my concern, is that the sphere and box are in different coordinate systems, and thus the sphere's location is transformed for every test. This last point is why I'm not willing to trust to luck that a floating-point value lying on top of the box will always be interpreted the same way. I did not know Voronoi regions by name, but looking at them I'm not sure how they would be used in the projection scenario I'm using here.

    Read the article

  • Greiner-Hormann clipping problem

    - by Belgin
    I have a set of planar polygons in 3D space defined by their vertices in counterclockwise order. Let's define the 'positive face' as the face of the 3D polygon from which, when observed, the vertices appear in counterclockwise order, and the 'negative face' as the face from which, when observed, the vertices appear in clockwise order. I'm doing a perspective projection of the set of polygons onto a projection polygon defined by the points in this order: (0, h, 0), (0, 0, 0), (w, 0, 0), and (w, h, 0), where w and h are strictly positive integers. The positive face of this projection polygon is oriented towards positive Z, and the camera point is somewhere at (0, 0, d), where d is a strictly negative number. In order to 'clip' the projected polygons against the projection polygon, I'm applying the Greiner-Hormann (PDF) clipping algorithm, which requires that the clipper and the to-be-clipped polygons be in the same order (i.e. clockwise or counterclockwise). My question is the following: how can I determine whether the projected face of the 3D polygon is the negative or the positive one? In other words, how do I find out whether I have to work with the vertices in normal or inverted order for the algorithm to work? I noticed that both polygons are in the same (counterclockwise) order only if the 3D polygon is facing the projection polygon with its negative face; otherwise, a modification needs to be made. Here is a picture (PNG) that illustrates this. Note that the planes described by a polygon from the set and the projection polygon may not always be parallel.
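
    Whichever face is being seen shows up, after projection, as the winding of the projected 2D vertices: one face projects to counterclockwise order and the other to clockwise (which is which depends on the 2D axis convention chosen on the projection plane, so it is easiest to just measure it). Computing the signed area of each projected polygon and reversing the vertex order when its sign doesn't match the clipper's gives both inputs a consistent order for Greiner-Hormann; equivalently, the sign of the dot product between the 3D polygon's normal and the direction from the camera to the polygon tells you which face is visible. A short Python sketch:

        def signed_area_2d(points):
            # shoelace formula: positive for counterclockwise, negative for clockwise
            n = len(points)
            area = 0.0
            for i in range(n):
                x1, y1 = points[i]
                x2, y2 = points[(i + 1) % n]
                area += x1 * y2 - x2 * y1
            return 0.5 * area

        def orient_ccw(projected):
            # hand every projected polygon to the clipper with counterclockwise winding
            return projected if signed_area_2d(projected) > 0 else list(reversed(projected))

        print(orient_ccw([(0, 0), (0, 2), (3, 0)]))   # clockwise input comes back reversed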

    Read the article

  • how to subtract a circle from an arbitrary polygon

    - by George
    Given an arbitrary polygon with vertices stored in either clockwise or counterclockwise fashion (depicted as a black rectangle in the diagram), I need to be able to subtract an arbitrary number of circles (in red on the diagram) from that polygon. Removing a circle could possibly split the polygon into two separate polygons (as depicted by the second line in the diagram). I'm not sure where to start. http://www.freeimagehosting.net/image.php?89a0276d9d.jpg
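
    If a library is an option, general polygon-clipping code (e.g. Clipper, GPC, or Python's Shapely) already implements boolean difference, and a circle can simply be fed in as a finely tessellated polygon; splitting into multiple result polygons then falls out automatically. A hedged sketch with Shapely (the specific shapes are made up for the demo):

        # pip install shapely
        from shapely.geometry import Point, Polygon

        rect = Polygon([(0, 0), (10, 0), (10, 4), (0, 4)])
        circles = [Point(2, 4).buffer(1.0),     # bites a notch out of the top edge
                   Point(5, 2).buffer(2.5)]     # tall enough to split the rectangle in two

        result = rect
        for c in circles:
            result = result.difference(c)       # boolean subtraction

        # the result may be a single Polygon or a MultiPolygon if a circle split it
        pieces = list(result.geoms) if result.geom_type == "MultiPolygon" else [result]
        print(len(pieces), "piece(s)")          # 2 pieces here
        for piece in pieces:
            print([(round(x, 2), round(y, 2)) for x, y in piece.exterior.coords][:4], "...")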

    Read the article

  • Problem drawing a polygon on data clusters in MATLAB

    - by Hossein
    Hi, I have some data points which I have divided into clusters with some clustering algorithms, as in the picture below (it might take some time for the image to appear): Each color represents a different cluster. I have to draw polygons around each cluster. I use convhull for this purpose, but as you can see the polygon for the red cluster is very big and covers a lot of area, which is not what I am looking for. I need to draw lines (polygons) exactly around my data sets. For example, in the picture above I want a polygon that is drawn exactly around the red cluster and follows its 3 branches. In other words, in this case I need a polygon with 3 branches to cover my red cluster, not that big polygon that covers the whole area. Can anyone help me with this? Please note that the solution should be general, because the clusters will change in each run of the algorithm.
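
    One general trick that follows concave branches: take the union of small disks around every point in the cluster and use its outline, instead of the convex hull; the disk radius controls how tightly the outline hugs the data. Newer MATLAB releases have boundary() with a shrink factor and alphaShape() for the same purpose; the idea itself is sketched below in Python with Shapely (package choice and radius are assumptions):

        # pip install shapely
        from shapely.geometry import Point
        from shapely.ops import unary_union

        def tight_outline(points, radius=0.3, simplify_tol=0.05):
            # union of disks around the points -> an outline that follows concave shapes
            blob = unary_union([Point(p).buffer(radius) for p in points]).simplify(simplify_tol)
            parts = blob.geoms if blob.geom_type == "MultiPolygon" else [blob]
            return [list(part.exterior.coords) for part in parts]

        # an L-shaped cluster: its convex hull would be a triangle, this outline is not
        cluster = [(x * 0.2, 0.0) for x in range(20)] + [(0.0, y * 0.2) for y in range(20)]
        print(len(tight_outline(cluster)), "outline(s)")   # 1 connected outline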

    Read the article
