Search Results

Search found 1507 results on 61 pages for 'coordinates'.

  • OpenGL ES 2.0 gluUnProject

    - by secheung
    I've spent more time than I should trying to get my ray picking program working. I'm fairly convinced my math is solid with respect to line-plane intersection, but I believe the problem lies in converting the mouse/touch screen position into 3D world space. Here's my code:

        public void passTouchEvents(MotionEvent e) {
            int[] viewport = {0, 0, viewportWidth, viewportHeight};
            float x = e.getX(), y = viewportHeight - e.getY();
            float[] pos1 = new float[4];
            float[] pos2 = new float[4];
            GLU.gluUnProject(x, y, 0.0f, mViewMatrix, 0, mProjectionMatrix, 0, viewport, 0, pos1, 0);
            GLU.gluUnProject(x, y, 1.0f, mViewMatrix, 0, mProjectionMatrix, 0, viewport, 0, pos2, 0);
        }

    Just as a reference, I've tried transforming the coordinates (0, 0, 0) and got an offset. It would be appreciated if you could answer using OpenGL ES 2.0 code. Thanks
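
    One common gotcha here: Android's GLU.gluUnProject writes a 4-component homogeneous result, so each output must be divided by its w component before it is a usable world-space point. A minimal sketch of that perspective divide and the resulting pick ray (pos1/pos2 are from the question's code; the helper name is made up):

        // Hypothetical helper: gluUnProject returns (x, y, z, w); divide by w.
        private static float[] toWorld(float[] v) {
            return new float[] { v[0] / v[3], v[1] / v[3], v[2] / v[3] };
        }

        // After the two gluUnProject calls in passTouchEvents:
        float[] near = toWorld(pos1);   // point on the near plane
        float[] far  = toWorld(pos2);   // point on the far plane
        float[] dir  = {                // un-normalized ray direction
            far[0] - near[0], far[1] - near[1], far[2] - near[2]
        };

    Intersecting that ray with the plane should then behave; if (0, 0, 0) still comes back offset, the next suspects are the matrix offsets and whether viewportHeight matches the actual GL viewport.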

    Read the article

  • How to detect and collide two elastic line segments?

    - by Tautrimas
    There are 4 moving physical nodes in 3D space. They are paired with two elastic line segments / strings (1 <- 2; 3 <- 4). Part I: how do I detect a collision between the two segments? Part II: at the moment of collision, a fifth node is created at the intersection point, and here you have the force-based graph. The 5th node (the bend point) can slide along the strings, as in the real world. Given the new coordinates of the 4 nodes, how do I calculate the position of the 5th node on the next frame? I assume the string force on the nodes to be F = -k * x, where x is the string length. All I've come up with is that the force between 5 and 1 equals the force between 5 and 2 (the same with 3 and 4). What are the other properties?
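
    For Part I, a common approach is to treat each frame as static and report a collision when the minimum distance between the two segments drops below a small threshold. A sketch in Java of the standard closest-points-between-segments computation (assuming neither segment is degenerate, i.e. both have nonzero length):

        // Distance between segments (p1,q1) and (p2,q2) in 3D.
        static double segmentDistance(double[] p1, double[] q1, double[] p2, double[] q2) {
            double[] d1 = sub(q1, p1), d2 = sub(q2, p2), r = sub(p1, p2);
            double a = dot(d1, d1), e = dot(d2, d2), f = dot(d2, r);
            double c = dot(d1, r), b = dot(d1, d2);
            double denom = a * e - b * b;                 // >= 0; 0 when parallel
            double s = denom != 0 ? clamp((b * f - c * e) / denom) : 0;
            double t = (b * s + f) / e;
            if (t < 0)      { t = 0; s = clamp(-c / a); }        // clamp t, recompute s
            else if (t > 1) { t = 1; s = clamp((b - c) / a); }
            double[] c1 = add(p1, mul(d1, s)), c2 = add(p2, mul(d2, t));
            double[] diff = sub(c1, c2);
            return Math.sqrt(dot(diff, diff));
        }
        static double[] sub(double[] u, double[] v) { return new double[] { u[0]-v[0], u[1]-v[1], u[2]-v[2] }; }
        static double[] add(double[] u, double[] v) { return new double[] { u[0]+v[0], u[1]+v[1], u[2]+v[2] }; }
        static double[] mul(double[] u, double s) { return new double[] { u[0]*s, u[1]*s, u[2]*s }; }
        static double dot(double[] u, double[] v) { return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]; }
        static double clamp(double x) { return x < 0 ? 0 : (x > 1 ? 1 : x); }

    Because the nodes move, fast segments can tunnel past each other between frames; substepping (or a swept test) is the usual remedy.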

    Read the article

  • SmartSync Printing In ASP.NET Scheduler Reporting v2010.1

    Check out this new SmartSync printing feature of the ASPxScheduler that helps you print a scheduler report in a tri-fold style. How's It Work? If several scheduler report controls are placed on the same report, the scheduler adapter on the report coordinates how the controls will iterate through the schedule data. The view control on the report that has the smallest period becomes the 'principal' or 'driving' control. It starts the iteration, and other controls on the page are...

    Read the article

  • Implementing a high-level function in a script to call a low-level function in the game engine

    - by eat_a_lemon
    In my 2D game engine I have a function that does pathfinding, find_shortest_path. It executes once per time step of the game loop and calculates the next coordinate pair in the series of coordinates leading to the destination object. Now I want to call this function from a scripting language and have it return only the last coordinate pair of the result. I want the game engine to go about the business of rendering the incremental steps, but I don't want the high-level script to care about the rendering; the high-level script is only for AI game logic. I know how to bind a method from C to Python, but how can I signal and coordinate the wait time between the incremental steps without the high-level function returning until it's time for the last step?
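
    One way to structure the handoff, sketched in Java for brevity (the real code would be the C engine plus the Python binding, and every name below is hypothetical): let the engine own the per-frame stepping and rendering, and have the script-facing call register a callback that fires only when the final coordinate pair is produced, so the script never blocks the game loop.

        // Engine-side step interface and the completion callback the script sees.
        interface Pathfinder {
            int[] findShortestPathStep();   // next coordinate pair
            boolean isDone();
        }
        interface PathCallback {
            void onArrived(int x, int y);   // only the last pair crosses into script land
        }

        class PathRun {
            private final PathCallback callback;
            PathRun(PathCallback cb) { callback = cb; }

            // Called by the game loop once per time step; rendering happens elsewhere.
            void tick(Pathfinder finder) {
                int[] next = finder.findShortestPathStep();
                if (finder.isDone()) {
                    callback.onArrived(next[0], next[1]);
                }
            }
        }

    In Python terms the callback could equally be a coroutine that the binding resumes on arrival; either way, the script-visible function returns immediately and delivers the result later, rather than spinning while the engine renders.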

    Read the article

  • How do I duplicate a Box2d simulation, mid-simulation?

    - by Whyte
    I want to serialize the state mid-game, send it over the network to an identical computer (same CPU, same OS, same binary), load it there, and have the two games run in tandem doing the exact same simulation, without one of them drifting off and going haywire. In short: I want pop-in, pop-out networking support in my HIGHLY physics-intensive game, where sending object coordinates every few seconds is impossible due to having thousands of objects and many clients. I tried this with Box2D, and saving an object's location/velocity/etc. wasn't enough; there's internal state that's not accessible through any public methods. My current workaround is to force EVERY client to save its entire world state and reload it from scratch whenever a new player connects... but this is obviously bad practice, because it hangs the game for everyone whenever someone new connects. However, it works, with zero desynchronization. So, does anyone know of any other techniques that can help me? Or should I just kiss my project goodbye?

    Read the article

  • Converting obj data to CSS3D

    - by Don Boots
    I found a ton of formulae and whatnot, but 3D isn't my forte, so I'm at a loss as to what specifically to use. My goal is to convert the data in a 3D .obj file (vertices, normals, faces) to CSS3D (width, height, rotateX/Y/Z and/or similar transforms). For example, two simple planes:

        g plane1  # simple plane along the Z axis
        v 0.0 0.0 0.0
        v 0.0 0.0 1.0
        v 0.0 1.0 1.0
        v 0.0 1.0 0.0
        g plane2  # plane rotated 90 degrees along the Y-axis
        v 0.0 0.0 0.0
        v 0.0 1.0 0.0
        v 1.0 1.0 0.0
        v 1.0 0.0 0.0
        f 1 2 3 4
        f 5 6 7 8

    Could this data be converted to:

        #plane1 {
            width: X; height: Y;
            transform: rotateX(Xdeg) rotateY(Ydeg) rotateZ(Zdeg) translateZ(Zpx);
        }
        #plane2 {
            width: X; height: Y;
            transform: rotateX(Xdeg) rotateY(Ydeg) rotateZ(Zdeg) translateZ(Zpx);
        }
        /* Or something equivalent, such as transform: matrix3d() */

    In summary, while this may be too HTML/CSS-y for game development, the core question is: how do I get the X/Y/Z rotation of a four-point plane from its matrix of x, y, z coordinates?
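
    A starting point, sketched in Java under the assumption that each face is planar: take the face normal from the cross product of two edge vectors, then decompose it into the rotateY/rotateX pair that carries CSS's default +Z-facing plane onto that normal (rotateZ only spins the quad in place, so it can stay 0 unless the texture orientation matters).

        // Sketch: face normal -> rotateX/rotateY angles for a CSS3D plane.
        // a, b, c are the first three vertices of the face, as {x, y, z}.
        static double[] faceRotationDegrees(double[] a, double[] b, double[] c) {
            double[] e1 = { b[0]-a[0], b[1]-a[1], b[2]-a[2] };   // edge vectors
            double[] e2 = { c[0]-a[0], c[1]-a[1], c[2]-a[2] };
            double nx = e1[1]*e2[2] - e1[2]*e2[1];               // cross product
            double ny = e1[2]*e2[0] - e1[0]*e2[2];
            double nz = e1[0]*e2[1] - e1[1]*e2[0];
            double len = Math.sqrt(nx*nx + ny*ny + nz*nz);
            nx /= len; ny /= len; nz /= len;
            // With transform: rotateY(yaw) rotateX(pitch), rotateX is applied
            // first, so the default normal (0,0,1) lands on:
            //   (cos(pitch)*sin(yaw), -sin(pitch), cos(pitch)*cos(yaw))
            double pitch = Math.toDegrees(Math.asin(-ny));
            double yaw   = Math.toDegrees(Math.atan2(nx, nz));
            return new double[] { pitch, yaw };   // degrees for rotateX, rotateY
        }

    For plane1 above the normal is (-1, 0, 0) or (1, 0, 0) depending on winding, giving yaw = ±90° and pitch = 0, which matches the intuition of a quad standing along the Z axis.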

    Read the article

  • C# XNA 4.0 multitextured cube

    - by chron
    So I am following this tutorial on how to draw a cube in XNA, and I ran into a problem. My shader can only have one texture, right? I need a texture on the front and back of my cube, so I thought I could just store both textures in one texture. The problem is I do not know how to map out my UV coordinates to do so (I tried dividing by 2 and such, with no luck). I am really new to this (not programming, just game development and some concepts), but how could I get UV coordinates for both halves of the texture?

        private void ConstructCube(Vector3 Position, Vector3 Size)
        {
            _vertices = new VertexPositionNormalTexture[50];

            // Calculate the position of the vertices on the top face.
            Vector3 topLeftFront = Position + new Vector3(-1.0f, 1.0f, -1.0f) * Size;
            Vector3 topLeftBack = Position + new Vector3(-1.0f, 1.0f, 1.0f) * Size;
            Vector3 topRightFront = Position + new Vector3(1.0f, 1.0f, -1.0f) * Size;
            Vector3 topRightBack = Position + new Vector3(1.0f, 1.0f, 1.0f) * Size;

            // Calculate the position of the vertices on the bottom face.
            Vector3 btmLeftFront = Position + new Vector3(-1.0f, -1.0f, -1.0f) * Size;
            Vector3 btmLeftBack = Position + new Vector3(-1.0f, -1.0f, 1.0f) * Size;
            Vector3 btmRightFront = Position + new Vector3(1.0f, -1.0f, -1.0f) * Size;
            Vector3 btmRightBack = Position + new Vector3(1.0f, -1.0f, 1.0f) * Size;

            // Normal vectors for each face (needed for lighting / display).
            Vector3 normalFront = new Vector3(0.0f, 0.0f, 1.0f) * Size;
            Vector3 normalBack = new Vector3(0.0f, 0.0f, -1.0f) * Size;
            Vector3 normalTop = new Vector3(0.0f, 1.0f, 0.0f) * Size;
            Vector3 normalBottom = new Vector3(0.0f, -1.0f, 0.0f) * Size;
            Vector3 normalLeft = new Vector3(-1.0f, 0.0f, 0.0f) * Size;
            Vector3 normalRight = new Vector3(1.0f, 0.0f, 0.0f) * Size;

            // UV texture coordinates.
            Vector2 textureTopLeft = new Vector2(1.0f * Size.X, 0.0f * Size.Y);
            Vector2 textureTopRight = new Vector2(0.0f * Size.X, 0.0f * Size.Y);
            Vector2 textureBottomLeft = new Vector2(1.0f * Size.X, 1.0f * Size.Y);
            Vector2 textureBottomRight = new Vector2(0.0f * Size.X, 1.0f * Size.Y);

            // Add the vertices for the FRONT face.
            _vertices[0] = new VertexPositionNormalTexture(topLeftFront, normalFront, textureTopLeft);
            _vertices[1] = new VertexPositionNormalTexture(btmLeftFront, normalFront, textureBottomLeft);
            _vertices[2] = new VertexPositionNormalTexture(topRightFront, normalFront, textureTopRight);
            _vertices[3] = new VertexPositionNormalTexture(btmLeftFront, normalFront, textureBottomLeft);
            _vertices[4] = new VertexPositionNormalTexture(btmRightFront, normalFront, textureBottomRight);
            _vertices[5] = new VertexPositionNormalTexture(topRightFront, normalFront, textureTopRight);

            // Add the vertices for the BACK face.
            _vertices[6] = new VertexPositionNormalTexture(topLeftBack, normalBack, textureTopRight);
            _vertices[7] = new VertexPositionNormalTexture(topRightBack, normalBack, textureTopLeft);
            _vertices[8] = new VertexPositionNormalTexture(btmLeftBack, normalBack, textureBottomRight);
            _vertices[9] = new VertexPositionNormalTexture(btmLeftBack, normalBack, textureBottomRight);
            _vertices[10] = new VertexPositionNormalTexture(topRightBack, normalBack, textureTopLeft);
            _vertices[11] = new VertexPositionNormalTexture(btmRightBack, normalBack, textureBottomLeft);

            // Add the vertices for the TOP face.
            _vertices[12] = new VertexPositionNormalTexture(topLeftFront, normalTop, textureBottomLeft);
            _vertices[13] = new VertexPositionNormalTexture(topRightBack, normalTop, textureTopRight);
            _vertices[14] = new VertexPositionNormalTexture(topLeftBack, normalTop, textureTopLeft);
            _vertices[15] = new VertexPositionNormalTexture(topLeftFront, normalTop, textureBottomLeft);
            _vertices[16] = new VertexPositionNormalTexture(topRightFront, normalTop, textureBottomRight);
            _vertices[17] = new VertexPositionNormalTexture(topRightBack, normalTop, textureTopRight);

            // Add the vertices for the BOTTOM face.
            _vertices[18] = new VertexPositionNormalTexture(btmLeftFront, normalBottom, textureTopLeft);
            _vertices[19] = new VertexPositionNormalTexture(btmLeftBack, normalBottom, textureBottomLeft);
            _vertices[20] = new VertexPositionNormalTexture(btmRightBack, normalBottom, textureBottomRight);
            _vertices[21] = new VertexPositionNormalTexture(btmLeftFront, normalBottom, textureTopLeft);
            _vertices[22] = new VertexPositionNormalTexture(btmRightBack, normalBottom, textureBottomRight);
            _vertices[23] = new VertexPositionNormalTexture(btmRightFront, normalBottom, textureTopRight);

            // Add the vertices for the LEFT face.
            _vertices[24] = new VertexPositionNormalTexture(topLeftFront, normalLeft, textureTopRight);
            _vertices[25] = new VertexPositionNormalTexture(btmLeftBack, normalLeft, textureBottomLeft);
            _vertices[26] = new VertexPositionNormalTexture(btmLeftFront, normalLeft, textureBottomRight);
            _vertices[27] = new VertexPositionNormalTexture(topLeftBack, normalLeft, textureTopLeft);
            _vertices[28] = new VertexPositionNormalTexture(btmLeftBack, normalLeft, textureBottomLeft);
            _vertices[29] = new VertexPositionNormalTexture(topLeftFront, normalLeft, textureTopRight);

            // Add the vertices for the RIGHT face.
            _vertices[30] = new VertexPositionNormalTexture(topRightFront, normalRight, textureTopLeft);
            _vertices[31] = new VertexPositionNormalTexture(btmRightFront, normalRight, textureBottomLeft);
            _vertices[32] = new VertexPositionNormalTexture(btmRightBack, normalRight, textureBottomRight);
            _vertices[33] = new VertexPositionNormalTexture(topRightBack, normalRight, textureTopRight);
            _vertices[34] = new VertexPositionNormalTexture(topRightFront, normalRight, textureTopLeft);
            _vertices[35] = new VertexPositionNormalTexture(btmRightBack, normalRight, textureBottomRight);

            done = true;
        }
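
    For the question itself: with the two images packed side by side in one atlas, split the U axis at 0.5 and leave V alone; the front face samples U in [0, 0.5] and the back face U in [0.5, 1]. Note also that the code above multiplies UVs by Size, which will shift or tile the texture whenever Size isn't 1; UVs live in [0, 1] regardless of cube size. A hedged sketch of the halves, written as plain Java arrays since only the numbers matter (in the XNA code they become the Vector2 texture constants):

        float[][] frontUV = {       // left half of the atlas: U in [0.0, 0.5]
            { 0.0f, 0.0f },         // top left
            { 0.5f, 0.0f },         // top right
            { 0.0f, 1.0f },         // bottom left
            { 0.5f, 1.0f },         // bottom right
        };
        float[][] backUV = {        // right half of the atlas: U in [0.5, 1.0]
            { 0.5f, 0.0f },
            { 1.0f, 0.0f },
            { 0.5f, 1.0f },
            { 1.0f, 1.0f },
        };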

    Read the article

  • Building a touch event driven UI from scratch: what algorithms or data types?

    - by user1717079
    I have a touch display. As input I can receive the coordinates and the number of touch points in use; basically I just get an (X, Y) pair for every touch event/activated point, at a customizable rate. I need to start from this and build my own callback system, to achieve something like Object.onUp().doSomething(), meaning that I would like to abstract just the detection of some particular movements and not have to deal with raw data. What algorithms can be useful in this case? What data structures? Is there some C++ library that I can dissect to get some useful info? Would you suggest the use of a heuristic algorithm?
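
    A common starting point, sketched in Java (a C++ version would be structurally identical; all names and thresholds here are made up): keep a small per-pointer state machine that records where each touch went down and classifies the gesture when it comes up. Recognizers for drags, pinches, and so on are the same idea with more states.

        import java.util.HashMap;
        import java.util.Map;

        // Minimal per-pointer gesture classifier: down/up samples in,
        // tap or swipe callbacks out.
        class GestureDetector {
            interface Listener {
                void onTap(float x, float y);
                void onSwipe(float dx, float dy);
            }
            private static final float TAP_SLOP = 10f;   // max travel for a tap, px
            private final Map<Integer, float[]> downAt = new HashMap<>();
            private final Listener listener;
            GestureDetector(Listener l) { listener = l; }

            void onDown(int pointerId, float x, float y) {
                downAt.put(pointerId, new float[] { x, y });
            }
            void onUp(int pointerId, float x, float y) {
                float[] start = downAt.remove(pointerId);
                if (start == null) return;               // unknown pointer
                float dx = x - start[0], dy = y - start[1];
                if (Math.hypot(dx, dy) < TAP_SLOP) listener.onTap(x, y);
                else listener.onSwipe(dx, dy);
            }
        }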

    Read the article

  • How do you set the movement speed of a sprite?

    - by rphello101
    I'm using Slick2D/Java to play around with graphics. Getting an image to move is easy:

        Input input = gc.getInput();
        if (input.isKeyDown(sprite.up)) {
            sprite.y--;
        } else if (input.isKeyDown(sprite.down)) {
            sprite.y++;
        } else if (input.isKeyDown(sprite.left)) {
            sprite.x--;
        } else if (input.isKeyDown(sprite.right)) {
            sprite.x++;
        }

    However, this is called on every update, so if you hold up, the sprite moves to the edge of the screen in a few hundred milliseconds. Since the coordinates are integers, I can't add less than 1 to slow the sprite down. I'm assuming I must have to implement a timer of some sort. Any advice?
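
    The usual fix is to make movement a function of elapsed time rather than of update calls: store the position as floats, scale the speed by the frame's delta milliseconds (Slick2D passes it to update(GameContainer, int delta)), and round to integers only when drawing. A sketch, where the speed constant and the float fields are assumptions:

        private static final float SPEED = 0.15f;   // pixels per millisecond

        public void update(GameContainer gc, int delta) {
            Input input = gc.getInput();
            float step = SPEED * delta;              // frame-rate independent distance
            if (input.isKeyDown(sprite.up))         sprite.y -= step;   // x, y are floats now
            else if (input.isKeyDown(sprite.down))  sprite.y += step;
            else if (input.isKeyDown(sprite.left))  sprite.x -= step;
            else if (input.isKeyDown(sprite.right)) sprite.x += step;
        }

        // When rendering: draw at Math.round(sprite.x), Math.round(sprite.y).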

    Read the article

  • Vertex Array Object (OpenGL)

    - by user5140
    I've just started out with OpenGL, and I still haven't really understood what Vertex Array Objects are and how they can be employed. If Vertex Buffer Objects are used to store vertex data (such as positions and texture coordinates) and VAOs only contain status flags, where can they be used? What's their purpose? As far as I understood from the (very incomplete and unclear) GL Wiki, VAOs are used to set the flags/status for every vertex, following the order described in the Element Array Buffer, but the wiki was really ambiguous about it, and I'm not really sure what VAOs really do or how I could employ them.
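
    A concrete picture may help; this is a hedged sketch in Java with LWJGL bindings (the calls mirror the C API), assuming a current GL 3.x context, a FloatBuffer vertexData of interleaved position + texture coordinates, and a vertexCount. The VAO records which buffer feeds which attribute slot and with what layout, so a single bind restores the whole configuration at draw time:

        import static org.lwjgl.opengl.GL11.*;
        import static org.lwjgl.opengl.GL15.*;
        import static org.lwjgl.opengl.GL20.*;
        import static org.lwjgl.opengl.GL30.*;

        // Setup, once: everything between bind and unbind is remembered by the VAO.
        int vao = glGenVertexArrays();
        glBindVertexArray(vao);

        int vbo = glGenBuffers();
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vertexData, GL_STATIC_DRAW);

        glEnableVertexAttribArray(0);                            // position: 3 floats
        glVertexAttribPointer(0, 3, GL_FLOAT, false, 5 * 4, 0);
        glEnableVertexAttribArray(1);                            // texcoord: 2 floats
        glVertexAttribPointer(1, 2, GL_FLOAT, false, 5 * 4, 3 * 4);

        glBindVertexArray(0);

        // Draw, every frame: one bind instead of re-specifying the layout.
        glBindVertexArray(vao);
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);

    So the VBO holds the bytes; the VAO holds the wiring (which attribute arrays are enabled, their formats, and the element array buffer binding), which is exactly the state you would otherwise have to reset before every draw.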

    Read the article

  • Large Sprite Performance

    - by Iansen
    I've got a large Sprite generated using a set of vertices (x, y coordinates) and a bitmap pattern (using moveTo, lineTo, beginBitmapFill, endFill, etc.). It's about 15000 pixels wide and between 1500 and 2000 pixels high, depending on the level; it's the terrain for a 2D game. My question is: what is the best way to display/move it on the stage, performance-wise? Currently I'm just adding it to the stage as is. I get a decent frame rate / memory / CPU usage, but I want to optimize it for slower PCs. Any ideas? I've been reading a little about blitting, but I'm not sure how to implement it in my case. Thanks.
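
    For reference, blitting here would mean keeping the whole terrain as one offscreen bitmap and copying only the camera's window of it to the screen each frame; in ActionScript 3 that is BitmapData.copyPixels with a Rectangle that follows the camera. A sketch of the idea in Java (all names are made up):

        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;

        // Copy just the visible sub-rectangle of the big terrain image,
        // instead of moving a 15000-pixel-wide display object around.
        void drawVisibleTerrain(Graphics2D g, BufferedImage terrain,
                                int cameraX, int cameraY, int viewW, int viewH) {
            g.drawImage(terrain,
                0, 0, viewW, viewH,                                  // destination rect
                cameraX, cameraY, cameraX + viewW, cameraY + viewH,  // source rect
                null);
        }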

    Read the article

  • OpenLayers - redraw() a layer, add / remove a layer

    - by Ozaki
    TLDR: I have an OpenLayers map with a layer called 'track'. I want to remove 'track' and add it back in. I have an image 'imageFeature' on a layer that rotates on load to the direction being set, and I want it to update this rotation, which is set in the 'styleMap' of a layer called 'tracking'. I set the var 'styleMap' to apply the external image and rotation. The 'imageFeature' is added to the layer at the coords specified, then removed, then added again in its new location, but the rotation is not applied. As the 'styleMap' applies to the layer, I think I have to remove the layer and add it again, rather than just the 'imageFeature'.

    The layer:

        var tracking = new OpenLayers.Layer.GML("Tracking", "coordinates.json", {
            format: OpenLayers.Format.GeoJSON,
            styleMap: styleMap
        });

    The styleMap:

        var styleMap = new OpenLayers.StyleMap({
            fillOpacity: 1,
            pointRadius: 10,
            rotation: heading,
        });

    Now, wrapped in a timed function, the imageFeature:

        map.layers[3].addFeatures(new OpenLayers.Feature.Vector(
            new OpenLayers.Geometry.Point(longitude, latitude),
            {rotation: heading, type: parseInt(Math.random() * 3)}
        ));

    'type' refers to a lookup of one of three images:

        styleMap.addUniqueValueRules("default", "type", lookup);
        var lookup = {
            0: {externalGraphic: "Image1.png", rotation: heading},
            1: {externalGraphic: "Image2.png", rotation: heading},
            2: {externalGraphic: "Image3.png", rotation: heading}
        }

    I have tried the redraw() function, but it returns "tracking is undefined" or "map.layers[2] is undefined":

        tracking.redraw(true);
        map.layers[2].redraw(true);

    'heading' is a variable from a JSON feed:

        var heading = 13.542;

    So far I can't get anything to work; it will only rotate the image on load. The image does move in coordinates as it should, though. So what am I doing wrong with the redraw function, and how can I get this image to rotate live? Thanks in advance -Ozaki

    Added: I managed to get map.layers[2].redraw(true); to successfully redraw layer 2, but it still does not update the rotation. I suspect it is because the styleMap is not updating: it runs through the style map every n seconds, but the rotation never changes, even though the heading variable is updating correctly when I put a watch on it in Firebug.

    Read the article

  • iPhone app: custom images inside UILabel

    - by Ayaz Alavi
    Hi, I have a scenario where I would like to find where the text in a UILabel ends and get its coordinates in terms of x and y. Then I need to insert an image right after the last word of the UILabel text. This event will be fired when I tap on a particular image in the app. How can I find the x and y of the ending character of the UILabel text? Regards, Ayaz Alavi

    Read the article

  • WPF: Viewbox and TranslatePoint

    - by Samir Sabri
    Hello, I have a Viewbox with a Canvas child, the Stretch property of the Viewbox set to Fill, and I have changed the width and height of the Viewbox. I need to get the location of the Canvas's children with respect to the Viewbox parent. I tried:

        Point p = viewboxInstance.TranslatePoint(new Point(Canvas.GetLeft(child), Canvas.GetTop(child)), viewboxInstanceParent);

    but it gets the wrong coordinates! Is there a solution or workaround? Thanks

    Read the article

  • iPhone OpenGL ES - How to Pick

    - by Ali Nadalizadeh
    I'm working on an OpenGL ES 1 app which displays a 2D grid and allows the user to navigate and scale/rotate it. I need to know the exact translation of view touch coordinates into my OpenGL world and grid cell. Are there any helpers to do the reverse of the last few transforms I apply for navigation, or should I calculate and do the matrix math by hand?
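
    If the navigation is only pan, rotate, and scale, one option is to invert those transforms by hand, applying the inverses in the reverse of the order used when rendering. A sketch (assuming the render order is translate, then rotate, then scale; all names are made up):

        // Map a touch point back into world space, then into a grid cell.
        float[] screenToWorld(float touchX, float touchY,
                              float panX, float panY, float angleRad, float zoom) {
            float x = touchX - panX, y = touchY - panY;            // undo pan
            float c = (float) Math.cos(-angleRad), s = (float) Math.sin(-angleRad);
            float rx = x * c - y * s, ry = x * s + y * c;          // undo rotation
            return new float[] { rx / zoom, ry / zoom };           // undo zoom
        }

        // cell = (int) Math.floor(world[0] / cellSize), likewise for y.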

    Read the article

  • C# - Shortest path map finding

    - by nXqd
    I'm trying to write a simple program in C#; it's like map finding. I have a picture of a city/district (it's constant), and I'll add a database to this program to store variables and points. I use Floyd's algorithm to find the shortest path, and I'll draw the path on the image (by coordinates, I think). This is the first time I've written a real program in C#, so how should I implement this? Thanks so much for reading!
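
    For the Floyd part, the classic Floyd-Warshall loop is only a few lines; this sketch is Java, but the loops port directly to C#. dist is seeded with edge weights (infinity where there is no edge, 0 on the diagonal), and next[i][j] starts as j wherever an edge exists, so the actual path can be walked afterwards and drawn onto the image:

        static final double INF = Double.POSITIVE_INFINITY;

        // All-pairs shortest paths; dist and next are modified in place.
        static void floydWarshall(double[][] dist, int[][] next) {
            int n = dist.length;
            for (int k = 0; k < n; k++)
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        if (dist[i][k] + dist[k][j] < dist[i][j]) {
                            dist[i][j] = dist[i][k] + dist[k][j];
                            next[i][j] = next[i][k];   // route i -> j via k
                        }
        }

    Following next[i][j] from i until j is reached yields the vertex sequence, whose stored pixel coordinates can then be drawn as lines over the city picture.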

    Read the article

  • Algorithm for heat map?

    - by eshan
    I have a list of values, each with a latitude and longitude. I'm looking to create a translucent heatmap image to overlay on Google Maps. I know there are server-side and Flash-based solutions already, but I want to build this in JavaScript using the canvas tag. However, I can't seem to find a concise description of the algorithm used to turn coordinates and values into a heatmap. Can anyone provide or link to one? Thanks.
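
    The core algorithm is short: splat each point into a density grid with a radial falloff, then map the accumulated density through a translucent color ramp. A sketch in Java (the question targets canvas/JS, but the steps are identical; it assumes the lat/lon pairs have already been projected to pixel positions):

        import java.awt.image.BufferedImage;

        // points[p] = {x, y} in pixels; values[p] is the weight of that point.
        static BufferedImage heatmap(int w, int h, int[][] points, double[] values, int radius) {
            double[][] density = new double[h][w];
            double max = 0;
            for (int p = 0; p < points.length; p++) {
                int cx = points[p][0], cy = points[p][1];
                for (int y = Math.max(0, cy - radius); y < Math.min(h, cy + radius); y++)
                    for (int x = Math.max(0, cx - radius); x < Math.min(w, cx + radius); x++) {
                        double d = Math.hypot(x - cx, y - cy);
                        if (d < radius) {
                            density[y][x] += values[p] * (1 - d / radius);  // linear falloff
                            max = Math.max(max, density[y][x]);
                        }
                    }
            }
            BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++) {
                    double t = max > 0 ? density[y][x] / max : 0;   // normalize to 0..1
                    int alpha = (int) (t * 200);                    // translucency
                    int red = (int) (t * 255);                      // crude blue-to-red ramp
                    img.setRGB(x, y, (alpha << 24) | (red << 16) | (255 - red));
                }
            return img;
        }

    A Gaussian falloff and a nicer multi-stop gradient are the usual upgrades, but they don't change the structure.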

    Read the article

  • MKMapView refresh after pin moves

    - by slatvick
    A custom AnnotationView is updated with new coordinates, but the problem is that it visually updates only after some manipulation of the MKMapView, e.g. zooming or panning. What should I do to manually update its visual position on the map? PS: I've tried setting the region to the current map's region, but that changes the zoom, which is strange:

        [mapView setRegion:[mapView region] animated:YES];

    Read the article

  • Google Earth geodata?

    - by Quandary
    Question: is it possible to use/retrieve geodata from Google Earth? What I want to do is take a small area, get its terrain information (coordinates, height, elevation), and simulate how the selected area would be flooded by a specified amount of rain over a specified number of hours.

    Read the article

  • OpenGL ES 2.0 equivalent of glOrtho()?

    - by Zippo
    In my iPhone app, I need to project a 3D scene into the 2D coordinates of the screen for some calculations. My objects go through various rotations, translations, and scalings, so I figured I need to multiply the vertices by the model-view matrix first, then multiply the result by the orthographic projection matrix. First of all, am I on the right track? I have the model-view matrix but need the projection matrix: is there a glOrtho() equivalent in ES 2.0?
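
    There is no glOrtho in ES 2.0; the matrix is built on the CPU and passed to the vertex shader as a uniform. The glOrtho matrix itself is standard; a sketch in Java (the arithmetic is identical in Objective-C), column-major as GL expects:

        // Orthographic projection with the same layout glOrtho produced.
        static float[] ortho(float l, float r, float b, float t, float n, float f) {
            return new float[] {
                2 / (r - l), 0, 0, 0,
                0, 2 / (t - b), 0, 0,
                0, 0, -2 / (f - n), 0,
                -(r + l) / (r - l), -(t + b) / (t - b), -(f + n) / (f - n), 1
            };
        }

    And yes, multiplying projection * modelView * vertex (in that order) and then applying the viewport mapping is the right track for recovering screen coordinates.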

    Read the article

  • Distance Between GIS Points

    - by Paul
    I have a large number of GIS (latitude, longitude) coordinates, and I'd like to get the distances between them. Is there a service that will calculate the shortest path for me? I know about Google Maps, but I'd like something I can use from Python that can handle a large batch of requests at once. I'm looking for the driving distance, so a straight-line distance won't do. Thanks

    Read the article

  • The Skyline Problem.

    - by zeroDivisible
    I just came across this little problem on UVA's Online Judge and thought it might be a good candidate for a little code golf.

    The problem: You are to design a program to assist an architect in drawing the skyline of a city, given the locations of the buildings in the city. To make the problem tractable, all buildings are rectangular in shape and share a common bottom (the city they are built in is very flat). The city is also viewed as two-dimensional. A building is specified by an ordered triple (Li, Hi, Ri), where Li and Ri are the left and right coordinates, respectively, of building i, and Hi is the height of the building. In the diagram below, buildings are shown on the left with the triples (1,11,5), (2,6,7), (3,13,9), (12,7,16), (14,3,25), (19,18,22), (23,13,29), (24,4,28), and the skyline, shown on the right, is represented by the sequence: 1, 11, 3, 13, 9, 0, 12, 7, 16, 3, 19, 18, 22, 3, 23, 13, 29, 0.

    The output should consist of the vector that describes the skyline, as shown in the example above. In the skyline vector (v1, v2, v3, ..., vn), the vi with an even i represent a horizontal line (height), and the vi with an odd i represent a vertical line (x-coordinate). The skyline vector should represent the "path" taken, for example, by a bug starting at the minimum x-coordinate and traveling horizontally and vertically over all the lines that define the skyline. Thus the last entry in the skyline vector will be 0. The coordinates must be separated by a blank space.

    If I don't count the declaration of the provided (test) buildings, and do count all spaces and tab characters, my solution in Python is 223 characters long. Here is the condensed version:

        B = [[1,11,5],[2,6,7],[3,13,9],[12,7,16],[14,3,25],[19,18,22],[23,13,29],[24,4,28]]
        # Solution.
        R = range
        v = [0 for e in R(max([y[2] for y in B]) + 1)]
        for b in B:
            for x in R(b[0], b[2]):
                if b[1] > v[x]:
                    v[x] = b[1]
        p = 1
        k = 0
        for x in R(len(v)):
            V = v[x]
            if p and V == 0:
                continue
            elif V != k:
                p = 0
                print "%s %s" % (str(x), str(V)),
                k = V

    I think I didn't make any mistakes, but if so, feel free to criticize me.

    EDIT: I don't have much reputation, so I will pay only 100 for a bounty. I am curious whether anyone could solve this in less than, let's say, 80 characters. The solution posted by cobbal is 101 characters long and is currently the best one.

    ANOTHER EDIT: I thought 80 characters was a sick limit for this kind of problem. cobbal, with his 46-character solution, totally amazed me, though I must admit I spent some time reading his explanation before I partially understood what he had written.

    Read the article
