Search Results

Search found 438 results on 18 pages for 'plane'.

Page 10 of 18

  • Mouse pointer position to screen space

    - by Ylisar
    If I have a mouse pointer position in pixels of the canvas, I can easily convert it to the -1..1 range for both X & Y by dividing by the canvas dimensions and remapping. However, the problem is what I should put in Z & W if I want my screen-space position to lie on the near plane. The step afterwards would be to multiply by the inverse of the view-projection matrix to take me to world space, where I can easily construct a ray from the camera's world-space position.
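
    In standard OpenGL clip conventions the near plane sits at NDC z = -1, so one answer is Z = -1 and W = 1, then multiply by the inverse view-projection and divide by the resulting w. A minimal sketch assuming Android's android.opengl.Matrix helpers and column-major matrices (the method name and parameters are illustrative, not from the original post):

        import android.opengl.Matrix;

        // Sketch: unproject a -1..1 mouse position onto the near plane in world space.
        // ndcX/ndcY are the remapped mouse coords; view and proj are column-major float[16].
        static float[] nearPlaneWorldPoint(float ndcX, float ndcY, float[] view, float[] proj) {
            float[] viewProj = new float[16];
            float[] invViewProj = new float[16];
            Matrix.multiplyMM(viewProj, 0, proj, 0, view, 0);      // P * V
            Matrix.invertM(invViewProj, 0, viewProj, 0);
            float[] clip = { ndcX, ndcY, -1.0f, 1.0f };            // near plane: z = -1, w = 1
            float[] world = new float[4];
            Matrix.multiplyMV(world, 0, invViewProj, 0, clip, 0);
            // Perspective divide to get the actual world-space point.
            return new float[] { world[0] / world[3], world[1] / world[3], world[2] / world[3] };
        }

    The ray direction is then this point minus the camera's world-space position, normalized.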

    Read the article

  • OpenGL ES 2.0 gluUnProject

    - by secheung
    I've spent more time than I should trying to get my ray-picking program working. I'm pretty convinced my math is solid with respect to line-plane intersection, but I believe the problem lies in converting the mouse screen touch into 3D world space. Here's my code:

        public void passTouchEvents(MotionEvent e) {
            int[] viewport = { 0, 0, viewportWidth, viewportHeight };
            float x = e.getX(), y = viewportHeight - e.getY();
            float[] pos1 = new float[4];
            float[] pos2 = new float[4];
            GLU.gluUnProject(x, y, 0.0f, mViewMatrix, 0, mProjectionMatrix, 0, viewport, 0, pos1, 0);
            GLU.gluUnProject(x, y, 1.0f, mViewMatrix, 0, mProjectionMatrix, 0, viewport, 0, pos2, 0);
        }

    Just as a reference, I've tried transforming the coordinates (0, 0, 0) and got an offset. It would be appreciated if you would answer using OpenGL ES 2.0 code. Thanks
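
    One way to finish the ray construction from those two unprojected points is sketched below (a sketch only: on some Android versions GLU.gluUnProject leaves its output in homogeneous form, so dividing by the fourth component is worth trying if the results look offset):

        // Continuing inside passTouchEvents(...) from the code above.
        // Divide by w in case gluUnProject returned homogeneous coordinates.
        float[] near = { pos1[0] / pos1[3], pos1[1] / pos1[3], pos1[2] / pos1[3] };
        float[] far  = { pos2[0] / pos2[3], pos2[1] / pos2[3], pos2[2] / pos2[3] };
        float dx = far[0] - near[0], dy = far[1] - near[1], dz = far[2] - near[2];
        float len = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
        float[] rayOrigin = near;                          // point on the near plane
        float[] rayDir = { dx / len, dy / len, dz / len }; // unit direction toward the far plane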

    Read the article

  • Hello Again, San Francisco

    - by Geertjan
    From the moment I got to the airport in Amsterdam, I've been bumping into JavaOne pilgrims today. Finally got to my hotel, after a pretty good flight (and KLM provides great meals, which helps a lot), and a rather long wait at customs (serves me right for getting seat 66C in a plane with 68 rows). And, best of all, on Twitter I've been seeing a few remarks around the Duke's Choice Awards for this year. The references all point to the September - October issue of the Java Magazine, where page 24 shows this year's winners. So, from page 24 onwards, you can read all about those applications. What's especially cool is that three of them are applications created on top of the NetBeans Platform! That's AgroSense (farm management software), MICE (NATO system for defense and battle-space operations), and Level One Registration Tool (UN Refugee Agency software for managing refugees). Congratulations to all the winners; I'm looking forward to learning more about them all during the coming days here at the conference.

    Read the article

  • How to render a texture partly transparent?

    - by megamoustache
    Good morning StackOverflow, I'm having a bit of a problem right now, as I can't seem to find a way to render part of a texture transparently with OpenGL. Here is my setting: I have a quad, representing a wall, covered with this texture (converted to PNG for uploading purposes). Obviously, I want the wall to be opaque, except for the panes of glass. There is another plane behind the wall which is supposed to show a landscape. I want to see the landscape from behind the window. Each texture is a TGA with an alpha channel. The "landscape" is rendered first, then the wall. I thought that would be sufficient to achieve this effect, but apparently it's not the case. The part of the window that is supposed to be transparent is black, and the landscape only appears when I move past the wall. I tried to fiddle with glBlendFunc() after having enabled blending, but it doesn't seem to do the trick. Am I forgetting an important step? Thank you :)
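
    The usual recipe is sketched below in GLES20-style calls to match the rest of this page (the original post is desktop GL, and drawLandscape/drawWall are hypothetical helpers): upload the texture as GL_RGBA so the alpha channel survives, draw the opaque landscape first, then draw the wall with alpha blending enabled.

        // Sketch: blend the wall's transparent window panes over the already-drawn landscape.
        void drawScene() {
            drawLandscape();                              // opaque geometry first
            GLES20.glEnable(GLES20.GL_BLEND);
            GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
            drawWall();                                   // glass panes use the texture's alpha
            GLES20.glDisable(GLES20.GL_BLEND);
        }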

    Read the article

  • How can I calculate the angle between two 2D vectors?

    - by Error 454
    I am working on some movement AI where there are no obstacles and movement is restricted to the XY plane. I am calculating two vectors, v, the facing direction of ship 1, and w, the vector pointing from the position of ship 1 to ship 2. I am then calculating the angle between these two vectors using the formula arccos((v · w) / (|v| · |w|)) The problem I'm having is that arccos only returns values between 0° and 180°. This makes it impossible to determine whether I should turn left or right to face the other ship. Is there a better way to do this?
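
    The standard workaround is to use the sign of the 2D cross product together with the dot product (atan2 of the two), which gives a signed angle in -180°..180° and therefore tells you which way to turn. A small sketch (the method name is illustrative):

        // Signed angle from v to w in the XY plane, in radians (-PI..PI).
        // Positive: w is counter-clockwise from v (turn left); negative: clockwise (turn right).
        static double signedAngle(double vx, double vy, double wx, double wy) {
            double dot = vx * wx + vy * wy;      // |v||w| cos(theta)
            double cross = vx * wy - vy * wx;    // |v||w| sin(theta)
            return Math.atan2(cross, dot);
        }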

    Read the article

  • How to Use Text in Unity3D

    - by ZiG-ZaG
    How can I create text in Unity3D? I use "3D Text", but it's always on top of everything. Can you suggest anything? I'm creating a 2D game, so it doesn't necessarily have to be 3D text. Edit: Because I'm building a 2D game, my scene is full of planes in front of the camera, and I want my text to sit over one of the planes; when the plane moves, my text should appear behind it. But when I use "3D Text", it's always in front of everything. Sorry for my bad English...

    Read the article

  • Infinite Flight: behind the scenes of the best mobile flight simulator, a French success story on American soil

    Infinite Flight: behind the scenes of the best flight simulator for smartphones and tablets. A French "success story" on American soil. Microsoft had more than a hundred employees working on the now-defunct "Flight Simulator" series. "X-Plane" is built by roughly ten people. "Infinite Flight", for its part, was created by just two developers. Enthusiasts of both code and aviation. And, as is often the case with talented people, these two professionals, humble and modest, will never tell you more than what they have done; very few are capable of that. [IMG]http://ftp-developpez.com/gordon-fowler/Infinite%2...

    Read the article

  • Can I randomly generate an endless road?

    - by y26jin
    So suppose we stand at a position (x0, y0) on a map. We can only move in the horizontal plane (no jumping and such), but we can move forward, left, or right (in a discrete-math way, i.e. integer movement). As soon as we move to the next position (x1, y1), everything around us is generated randomly by a program. We could be surrounded by any combination of mountain, lake, and road. We can only move on the road. The road is always 2D, like the map itself. My question is: are we able to play this game endlessly? "End" means that we come across a dead end and the only way out is to go backward.

    Read the article

  • Graphical sandbox for pathfinding

    - by vrode
    If you needed a clean and consistent sandbox for pathfinding, what would you use? I want to experiment with different pathfinding algorithms by sending virtual units (robots) around obstacles on a geometric plane. But I don't need the feature overkill that a game engine or Flash might have, just an animated report and a native collision detector. I'd prefer it to be scripted in Python, but if there are Java or C++ alternatives I would appreciate them as well.

    Read the article

  • Fragment Shader Eye-Space unscaled depth coordinate

    - by Ben Jones
    I'm trying to use the unscaled (true distance from the front clipping plane) distance to objects in my scene in a GLSL fragment shader. The gl_FragCoord.z value is smaller than I expect. In my vertex shader, I just use ftransform() to set gl_Position. I'm seeing values between 2 and 3 when I expect them to be between 15 and 20. How can I get the real eye-space depth? Thanks!
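
    For reference, one common way to recover the eye-space distance from the window-space depth, assuming a standard perspective projection with near plane n, far plane f, and the default [0, 1] depth range (a sketch of the math only; it may not match every projection setup):

        ndc_z = 2 * gl_FragCoord.z - 1
        eye_z = (2 * n * f) / (f + n - ndc_z * (f - n))

    Here eye_z is the distance from the camera along the view direction; subtracting n gives the distance from the near clipping plane. Alternatively, the vertex shader can pass the eye-space position as a varying and the fragment shader can read its negated z directly.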

    Read the article

  • Ray-Box Intersection during Scene traversal with matrix transforms

    - by Myx
    Hello: There are a few ways that I'm testing my ray-box intersections:

    1. Using the ComputeIntersectionBox(...) method, which takes a ray and a box as arguments and computes the closest intersection of the ray and the box. This method works by forming a plane from each face of the box and finding an intersection with each plane. Once an intersection is found, it checks whether the point lies on the surface of the box by verifying that the intersection point is between the corner points. When I look at rays after running this algorithm on two different boxes, I obtain the correct intersections.

    2. Using the ComputeIntersectionScene(...) method without the matrix transformations, on a scene that has two spheres, a dodecahedron (a triangular mesh), and two boxes. ComputeIntersectionScene(...) recursively traverses all of the nodes of the scene graph and computes the closest intersection with the given ray. This test in particular does not apply any transformations that parent nodes may have that also need to be applied to their children. With this test, I also obtain the correct intersections.

    3. Using the ComputeIntersectionScene(...) method WITH the matrix transformations. This test works like the one above, except that before finding an intersection between the ray and a node in the scene, the ray is transformed into the node's coordinate frame using the inverse of the node's transformation matrix, and after the intersection has been computed, it is transformed back into world coordinates by applying the transformation matrix to the intersection point.

    When testing with the third method on the same scene file as described in 2, using 4 rays (so one ray intersects one sphere, one ray the other sphere, one ray one box, and one ray the other box), only the two spheres get intersected and the two boxes do not. When I debug inside my ComputeIntersectionBox(...) method, it actually tells me that the ray intersects every plane of the box, but each intersection point does not lie on the box. This seems like strange behavior, since when using test 2 without transformations I obtain the correct box intersections (thus I believe my ray-box intersection to be correct), and when using test 3 WITH transformations I obtain the correct sphere intersections (thus I believe my transformed ray should be OK). Any suggestions where I could be going wrong? Thank you in advance.
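
    A frequent cause of the symptom described (every face plane reports a hit, but the point then fails the corner check) is mixing coordinate frames: the plane intersection is computed in the node's local frame while the corner bounds being compared against are still in world space, or the ray direction is transformed as a point (w = 1) instead of a vector (w = 0) and left unnormalized. A sketch of the ray transform, using android.opengl.Matrix purely as an illustrative stand-in for whatever matrix type the scene graph uses:

        import android.opengl.Matrix;

        // Sketch: transform a world-space ray into a node's local frame.
        // invNodeTransform is the inverse of the node's model matrix (column-major float[16]).
        static void rayToLocal(float[] invNodeTransform,
                               float[] worldOrigin3, float[] worldDir3,
                               float[] localOrigin3, float[] localDir3) {
            float[] o = { worldOrigin3[0], worldOrigin3[1], worldOrigin3[2], 1.0f }; // point: w = 1
            float[] d = { worldDir3[0], worldDir3[1], worldDir3[2], 0.0f };          // vector: w = 0
            float[] to = new float[4];
            float[] td = new float[4];
            Matrix.multiplyMV(to, 0, invNodeTransform, 0, o, 0);
            Matrix.multiplyMV(td, 0, invNodeTransform, 0, d, 0);
            float len = Matrix.length(td[0], td[1], td[2]);   // re-normalize in case of scale
            for (int i = 0; i < 3; i++) {
                localOrigin3[i] = to[i];
                localDir3[i] = td[i] / len;
            }
        }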

    Read the article

  • iPad GLSL: from within a fragment shader, how do I get the surface (not vertex) normal?

    - by dugla
    Is it possible to access the surface normal - the normal associated with the plane of a fragment - from within a fragment shader? Or perhaps this can be done in the vertex shader? Is all knowledge of the associated geometry lost when we go down the shader pipeline, or is there some clever way of recovering that information in either the vertex or fragment shader? Thanks in advance. Cheers, Doug twitter: @dugla

    Read the article

  • Java2D: Set gradient for lines

    - by Algorist
    Hi, I have multiple points in a plane, and some hundreds of lines pass through those points. Some points have more lines passing through them than others. I want to show some kind of gradient or extra brightness for lines that are crowded together. Is this possible to do in Java2D? Please refer to this: http://ft.ornl.gov/doku/_media/ft/projects/paraxis.jpg Thank you.
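
    One way to get a density effect in plain Java2D is sketched below (the data and method name are illustrative): draw every line with a low alpha, so that regions where many lines overlap accumulate more opacity and therefore look brighter.

        import java.awt.*;
        import java.awt.geom.Line2D;
        import java.awt.image.BufferedImage;
        import java.util.List;

        // Sketch: crowded lines accumulate alpha and appear brighter.
        static BufferedImage renderDensity(List<Line2D> lines, int w, int h) {
            BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
            Graphics2D g = img.createGraphics();
            g.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
            g.setColor(Color.BLACK);
            g.fillRect(0, 0, w, h);
            g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.08f));
            g.setColor(Color.CYAN);
            g.setStroke(new BasicStroke(1.5f));
            for (Line2D line : lines) {
                g.draw(line);        // each pass adds roughly 8% opacity
            }
            g.dispose();
            return img;
        }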

    Read the article

  • How to optimize this SQL query for a rectangular region?

    - by Andrew B.
    I'm trying to optimize the following query, but it's not clear to me what index or indexes would be best. I'm storing tiles in a two-dimensional plane and querying for rectangular regions of that plane. The table has, for the purposes of this question, the following columns:

        id: a primary key integer
        world_id: an integer foreign key which acts as a namespace for a subset of tiles
        tileY: the Y-coordinate integer
        tileX: the X-coordinate integer
        value: the contents of this tile, a varchar if it matters

    I have the following indexes:

        "ywot_tile_pkey" PRIMARY KEY, btree (id)
        "ywot_tile_world_id_key" UNIQUE, btree (world_id, "tileY", "tileX")
        "ywot_tile_world_id" btree (world_id)

    And this is the query I'm trying to optimize:

        ywot=> EXPLAIN ANALYZE SELECT * FROM "ywot_tile"
               WHERE ("world_id" = 27685 AND "tileY" <= 6 AND "tileX" <= 9 AND "tileX" >= -2 AND "tileY" >= -1);

        QUERY PLAN
        Bitmap Heap Scan on ywot_tile  (cost=11384.13..149421.27 rows=65989 width=168) (actual time=79.646..80.075 rows=96 loops=1)
          Recheck Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2)))
          ->  Bitmap Index Scan on ywot_tile_world_id_key  (cost=0.00..11367.63 rows=65989 width=0) (actual time=79.615..79.615 rows=125 loops=1)
                Index Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2)))
        Total runtime: 80.194 ms

    So the world is fixed, and we are querying for a rectangular region of tiles. Some more information that might be relevant:

        - All the tiles for a queried region may or may not be present.
        - The height and width of a queried rectangle are typically about 10x10 to 20x20.
        - For any given (world, X) or (world, Y) pair, there may be an unbounded number of matching tiles, but the worst case is currently around 10,000, and typically there are far fewer.
        - New tiles are created far less frequently than existing ones are updated (changing the 'value'), and that itself is far less frequent than just reading, as in the query above.

    The only thing I can think of would be to index on (world, X) and (world, Y). My guess is that the database would be able to take those two sets and intersect them. The problem is that there is a potentially unbounded number of matches for either of those. Is there some other kind of index that would be more appropriate?

    Read the article

  • Generate 2D cross-section polygon from 3D mesh

    - by nornagon
    I'm writing a game which uses 3D models to draw a scene (top-down orthographic projection), but a 2D physics engine to calculate response to collisions, etc. I have a few 3D assets for which I'd like to be able to automatically generate a hitbox by 'slicing' the 3D mesh with the X-Y plane and creating a polygon from the resultant edges. Google is failing me on this one (and not much helpful material on SO either). Suggestions?
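
    A sketch of the slicing step, assuming the mesh is available as a flat triangle list (chaining the resulting segments into closed polygons is a follow-up step, and all names here are illustrative): for each triangle, find the points where its edges cross the z = 0 plane; every crossed triangle contributes one segment of the cross-section outline.

        import java.util.ArrayList;
        import java.util.List;

        // Sketch: intersect each triangle edge with the plane z = 0.
        // tris is a flat array: x,y,z per vertex, 9 floats per triangle.
        static List<float[]> sliceSegments(float[] tris) {
            List<float[]> segments = new ArrayList<>();        // each entry: {x1, y1, x2, y2}
            for (int t = 0; t < tris.length; t += 9) {
                List<float[]> hits = new ArrayList<>();
                for (int e = 0; e < 3; e++) {
                    int a = t + 3 * e;
                    int b = t + 3 * ((e + 1) % 3);
                    float za = tris[a + 2], zb = tris[b + 2];
                    if ((za > 0) != (zb > 0)) {                // edge crosses the plane
                        float s = za / (za - zb);              // interpolation factor along the edge
                        hits.add(new float[] {
                            tris[a] + s * (tris[b] - tris[a]),
                            tris[a + 1] + s * (tris[b + 1] - tris[a + 1]) });
                    }
                }
                if (hits.size() == 2) {
                    segments.add(new float[] { hits.get(0)[0], hits.get(0)[1],
                                               hits.get(1)[0], hits.get(1)[1] });
                }
            }
            return segments;   // chain segments sharing endpoints to form the polygon(s)
        }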

    Read the article

  • Is there any algorithm for determining 3D position in such a case? (images below)

    - by Ole Jak
    So first of all I have such an image (and of course I have all the points' coordinates in 2D, so I can regenerate the lines and check where they cross each other). But then I have another image of the same lines (I know they are the same) and new coordinates for my points, as in the second image. So... now, having the point coordinates from the first image, how can I determine the plane rotation and Z depth in the second image (assuming the first one's center was at point (0,0,0) with no rotation)?

    Read the article

  • AJAX Loader/Progress Bar for Flash File

    - by atif089
    Hi, I want to implement a progress bar using AJAX for a Flash file. Please see the demo here: http://www.freeplaynow.com/online-games/play/1729/park-my-plane.html I tried to debug their page, but the JavaScript is obfuscated and I'm not that good at JS. Any ideas? Thanks

    Read the article

  • Coloring close points

    - by ooboo
    I have a dense set of points in the plane. I want them colored so that points that are close to each other get the same color, and a different color if they're far apart. For simplicity, assume that there are, say, 5 different colors to choose from. It turns out I haven't the slightest idea how to do that... I'm using Tkinter with Python, by the way.
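
    One common approach is to cluster the points (for example, a few iterations of k-means with k = 5) and give each cluster one of the colors. A rough sketch below, in Java rather than the asker's Python/Tkinter, with all names illustrative:

        import java.util.Random;

        // Sketch: naive k-means over 2D points; returns a cluster index (0..k-1) per point,
        // which can then be mapped to one of the k colors.
        static int[] kmeans(double[][] pts, int k, int iters) {
            Random rnd = new Random(42);
            double[][] centers = new double[k][2];
            for (int c = 0; c < k; c++) {
                centers[c] = pts[rnd.nextInt(pts.length)].clone();   // random initial centers
            }
            int[] label = new int[pts.length];
            for (int it = 0; it < iters; it++) {
                // Assignment step: attach each point to its nearest center.
                for (int i = 0; i < pts.length; i++) {
                    double best = Double.MAX_VALUE;
                    for (int c = 0; c < k; c++) {
                        double dx = pts[i][0] - centers[c][0], dy = pts[i][1] - centers[c][1];
                        double d = dx * dx + dy * dy;
                        if (d < best) { best = d; label[i] = c; }
                    }
                }
                // Update step: move each center to the mean of its assigned points.
                double[][] sum = new double[k][2];
                int[] count = new int[k];
                for (int i = 0; i < pts.length; i++) {
                    sum[label[i]][0] += pts[i][0];
                    sum[label[i]][1] += pts[i][1];
                    count[label[i]]++;
                }
                for (int c = 0; c < k; c++) {
                    if (count[c] > 0) {
                        centers[c][0] = sum[c][0] / count[c];
                        centers[c][1] = sum[c][1] / count[c];
                    }
                }
            }
            return label;
        }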

    Read the article
