Search Results

Search found 1507 results on 61 pages for 'coordinates'.

Page 3 of 61

  • Transform OpenGL coordinates to lower UIView coordinates

    - by John Qualis
    Hi, I am new to OpenGL on the iPhone. I am developing an iPhone app similar to a barcode reader, but with an extra OpenGL layer. The bottommost layer is a UIImagePickerController; on top of that I use a UIView and draw a rectangle at certain coordinates on the iPhone screen. So far everything is OK. Then I try to draw an OpenGL 3-D model in that rectangle. I am able to load a 3-D model on the iPhone based on the code here: http://iphonedevelopment.blogspot.com/2008/12/start-of-wavefront-obj-file-loader.html What I am not able to do is transform the coordinates of the rectangle into OpenGL coordinates. Appreciate any help. Do I need to use a matrix to translate the currentPosition of the 3-D model so it is drawn within myRect? The code is given below. Appreciate any help/pointers in this regard. John

        - (void)drawView:(GLView*)view {
            static GLfloat rotation = 0.0;
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity();
            glColor4f(0.0, 0.5, 1.0, 1.0);
            // The coordinates of the rectangle are myRect.x,
            // myRect.y, myRect.width, myRect.height
            // Do I need a transform matrix here?
            //glOrthof(-160.0f, 160.0f, -240.0f, 240.0f, -1.0f, 1.0f);
            [plane drawSelf];
            ....
        }

        - (void)setupView:(GLView*)view {
            const GLfloat zNear = 0.01, zFar = 1000.0, fieldOfView = 45.0;
            GLfloat size;
            glEnable(GL_DEPTH_TEST);
            glMatrixMode(GL_PROJECTION);
            //glMatrixMode(GL_MODELVIEW);
            size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
            CGRect rect = view.bounds;
            glFrustumf(-size, size,
                       -size / (rect.size.width / rect.size.height),
                       size / (rect.size.width / rect.size.height),
                       zNear, zFar);
            glViewport(0, 0, rect.size.width, rect.size.height);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        }
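
    One minimal way to confine GL drawing to a UIKit rectangle is to point the viewport at it rather than transforming the model. The sketch below is illustrative, not from the post: the function name is made up and screenHeight is assumed to be the full view height in the same units as the rect. Note that UIKit's origin is top-left while the GL viewport origin is bottom-left, so y must be flipped:

        #include <OpenGLES/ES1/gl.h>

        /* Sketch: aim the GL viewport at a UIKit rect. UIKit measures y from
           the top, GL from the bottom, hence the flip. */
        void setViewportForRect(float x, float y, float w, float h, float screenHeight) {
            float glY = screenHeight - (y + h);   /* flip the y origin */
            glViewport((GLint)x, (GLint)glY, (GLsizei)w, (GLsizei)h);
        }

    The projection set up in setupView: then applies within that rectangle instead of the whole screen.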

  • Update MKAnnotation coordinates

    - by user345711
    Hi everyone, I'm having problems trying to update an annotation's location with different coordinates. Is there any way I can change the location property without having to create another annotation? I've tried the code below with no luck; the annotation I'm trying to get is not updating its location. Please help!

        CLLocationCoordinate2D location;
        location.latitude = -36.560976;
        location.longitude = -59.455807;

        for (id annotation in self.mapView.annotations) {
            if ([annotation isKindOfClass:[MyAnnotation class]]) {
                [annotation setCoords:location]; // setCoords is defined in MyAnnotation class
            }
        }

    Thank you all!

  • Code coordinates to match compass bearings

    - by pinnacler
    Right now in MATLAB, (0,0) is the origin, 0 degrees / 2pi is to the right of the Cartesian plane, and angles are measured counterclockwise, with 90 degrees at the top. I'm trying to write a simulator where the coordinates would match a compass bearing: 0/360 degrees (2pi) at the top and 90 degrees on the right. Any idea how to code this in MATLAB or C++? I'd imagine it'd be a matrix flipped about the x-axis and rotated 90 degrees, but I'm at a total loss. Phil
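
    The mapping between the two conventions needs no matrix flip, just an angle substitution: bearing = 90 - angle, wrapped into [0, 360), and the formula happens to be its own inverse. A minimal sketch of that standard identity in C (not from the original post; the same one-liner works in MATLAB with mod):

        #include <math.h>

        /* Convert between a math angle (0 = +x axis, counterclockwise) and a
           compass bearing (0 = north/up, clockwise). Identical in both
           directions; input assumed to lie in [0, 360). */
        double math_to_compass(double deg) { return fmod(90.0 - deg + 360.0, 360.0); }
        double compass_to_math(double deg) { return fmod(90.0 - deg + 360.0, 360.0); }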

  • Get countries within range of LAT/LONG coordinates

    - by ptrn
    Trouble! I'm looking for a way to find the countries within a given range of LAT/LONG coordinates. E.g., when viewing a place in Africa in Google Maps, I want to find out which countries are in my current view. This is a bit ambitious, and I think the main problem will be dealing with the accuracy of the needed polygons. The accuracy doesn't need to be all that great; the borders can probably be tens of miles off, or even more. This will be needed for the entire world.
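
    Given that tens-of-miles accuracy is acceptable, a table of per-country bounding boxes tested against the viewport may be enough. A sketch, assuming the boxes come from some external dataset; the struct and names here are made up, and longitude wraparound at +/-180 is ignored:

        /* Hypothetical country record: a name plus a lat/long bounding box. */
        typedef struct {
            const char *name;
            double minLat, maxLat, minLng, maxLng;
        } Country;

        /* Nonzero if the view box overlaps the country's box. Coarse, but
           within the stated tens-of-miles tolerance for most countries. */
        int in_view(double vMinLat, double vMaxLat, double vMinLng, double vMaxLng,
                    const Country *c) {
            return vMinLat <= c->maxLat && vMaxLat >= c->minLat &&
                   vMinLng <= c->maxLng && vMaxLng >= c->minLng;
        }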

  • Reading Matrices in MATLAB and assigning coordinates to the entries

    - by Michael Schofield
    Hi, I'm a bit new to MATLAB. Basically, I have a 25x25 matrix with various random entries ranging from 0 to 3. I need to write a program that reads this matrix and assigns x-y coordinates to the entries, so that when I input a particular x-y coordinate whose entry is, say, 3, it results in an error. I'm a bit overwhelmed, but I understand the general concept of what I'm supposed to be finding. I'm wondering if I should use a plot instead to help me.
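
    No plot is needed: treating the row/column indices as y/x coordinates already gives every entry a coordinate. A sketch of the lookup, in C for illustration; in MATLAB the equivalent check is roughly: if M(y,x) == 3, error('hit a 3'); end, with 1-based indices.

        #define N 25

        /* Look up grid(y, x) with 1-based coordinates; return -1 as the
           "error" signal when the coordinate is out of range or holds a 3. */
        int check_cell(const int grid[N][N], int x, int y) {
            if (x < 1 || x > N || y < 1 || y > N) return -1;
            if (grid[y - 1][x - 1] == 3) return -1;
            return grid[y - 1][x - 1];
        }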

  • Why does multiplying texture coordinates scale the texture?

    - by manning18
    I'm having trouble visualizing this geometrically: why does multiplying the U,V coordinates of a texture coordinate have the effect of scaling that texture by that factor? E.g., if you scaled the texture coordinates by a factor of 3, doesn't this mean that if you had texture coordinates (0,1) and (0,2), you'd be sampling (0,3) and (0,6) in the U,V texture space of 0..1? How does that make it bigger? E.g., in HLSL: tex2D(textureSampler, TexCoords*3). Integers make it smaller, decimals make it bigger. I mean, I understand intuitively if you added to the U,V coordinates, as that is simply an offset into the sampling range, but what's the case with multiplication? I have a feeling when someone explains this to me I'm going to be feeling mighty stupid.
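
    The key is that the quad's corners now span 3 units of UV space, and with wrap-style addressing the sampler folds everything back into [0,1), so the whole texture repeats three times across the same quad; each repeat covers a third of the surface, which reads as "smaller". A sketch of what the wrap does, illustrative only:

        #include <math.h>

        /* Wrap-mode sampling of a scaled coordinate: scaling u by 3 sweeps the
           full 0..1 texture three times across the quad, shrinking each copy. */
        double wrap_sample(double u, double scale) {
            double s = u * scale;      /* u in [0,1], scale 3 -> s in [0,3] */
            return s - floor(s);       /* back into [0,1): three full periods */
        }

    Multiplying by a fraction does the opposite: the quad spans only part of the texture, so that part is stretched (magnified).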

  • Can get coordinates from iPhone simulator, but can't get coordinates from iPhone device

    - by iPhoneARguy
    Hi everyone, I've run into something of a mysterious bug (to me). I have some code to pick out the user's current location on the iPhone SDK. It works fine on the iPhone simulator, but when I try to run it on the actual device, I get a weird error. Here is the code (I am using ASIFormDataRequest to create a POST request):

        ASIFormDataRequest *request = [ASIFormDataRequest requestWithURL:url];
        [request setPostValue:@"testauthor" forKey:@"author"];
        [request setPostValue:[[NSNumber numberWithDouble:datum.location.coordinate.latitude] stringValue] forKey:@"latitude"];
        NSLog(@"%f", datum.location.coordinate.latitude);
        NSLog(@"%f", datum.location.coordinate.longitude);
        [request setPostValue:[[NSNumber numberWithDouble:datum.location.coordinate.longitude] stringValue] forKey:@"longitude"];
        [request setPostValue:datum.comment forKey:@"comment"];

    On the simulator, NSLog does log both the latitude and longitude, but on the iPhone, it does not. Even stranger, when I step through with the debugger on the device and try "po datum.location", I get

        <+###, -###> +/- 223.10m (speed 0.00 mps / course -1.00) @ 2010-05-02 22:18:37 -0400

    (### replaced by my location), but when I do "(gdb) po datum.location.coordinate" I get: "There is no member named coordinate." Do you guys have any idea why this might happen? Thanks in advance for your help!

  • ActionScript Local X And Y Coordinates Of Display Object?

    - by TheDarkIn1978
    I'm trying to trace the x and y coordinates from within a sprite. I've added a rectangle to the stage:

        var rect:Rectangle = new Rectangle(10, 10, 200, 200);
        addChild(rect);

    Adding a Mouse_Move event to the rect, I can trace mouseX and mouseY to receive the coordinates of the stage while moving over the rect, but how do I get the local x and y coordinates? So if I mouse over the very top left of the rect sprite, mouseX and mouseY return 10 as the global coordinates, but how do I make it return 0, the local coordinates of the sprite? I assumed localX and localY were what I was looking for, but this doesn't work:

        function mouseOverTraceCoords(evt:MouseEvent):void {
            trace(mouseX, mouseY, evt.localX, evt.localY);
        }

  • Android-Java: Constructing a triangle based on Coordinates on a map and your bearing

    - by Aidan
    Hi guys, I'm constructing a geolocation-based application and I'm trying to figure out a way to make my application realise when a user is facing the direction of a given location (a particular long/lat coordinate). I've got the math figured; I just have the triangle to construct.

    UPDATE: So I've figured out a good bit of this. Below is a method which takes in a long/lat value and attempts to compute a triangle, finding a point 700 meters away and one to its left and right. It'd then use these to construct the triangle. It computes the correct longitude, but the latitude ends up somewhere off the coast of east Africa (I'm in Ireland!).

        public void drawtri(double currlng, double currlat, double bearing) {
            bearing = (bearing < 0 ? -bearing : bearing);
            System.out.println("RUNNING THE DRAW TRIANGLE METHOD!!!!!");
            System.out.println("CURRENT LNG" + currlng);
            System.out.println("CURRENT LAT" + currlat);
            System.out.println("CURRENT BEARING" + bearing);
            //Find point X(x,y)
            double distance = 0.7; //700 meters.
            double R = 6371.0; //The radius of the earth.
            //Finding X's y value.
            Math.toRadians(currlng);
            Math.toRadians(currlat);
            Math.toRadians(bearing);
            distance = distance/R;
            Global.Alat = Math.asin(Math.sin(currlat)*Math.cos(distance)
                    + Math.cos(currlat)*Math.sin(distance)*Math.cos(bearing));
            System.out.println("CURRENT ALAT!!: " + Global.Alat);
            //Finding X's x value.
            Global.Alng = currlng + Math.atan2(Math.sin(bearing)*Math.sin(distance)*Math.cos(currlat),
                    Math.cos(distance) - Math.sin(currlat)*Math.sin(Global.Alat));
            Math.toDegrees(Global.Alat);
            Math.toDegrees(Global.Alng);
            //Co-ord of Point B(x,y)
            // Note: Lng = X axis, Lat = Y axis.
            Global.Blat = Global.Alat + 00.007931;
            Global.Blng = Global.Alng;
            //Co-ord of Point C(x,y)
            Global.Clat = Global.Alat - 00.007931;
            Global.Clng = Global.Alng;
        }

    From debugging, I've determined the problem lies with the computation of the latitude done here:

        Global.Alat = Math.asin(Math.sin(currlat)*Math.cos(distance)
                + Math.cos(currlat)*Math.sin(distance)*Math.cos(bearing));

    I have no idea why, though, and don't know how to fix it. I got the formula from this site: http://www.movable-type.co.uk/scripts/latlong.html It appears correct and I've tested multiple things. I've tried converting to radians, then post-computation back to degrees, etc. Anyone got any ideas how to fix this method so that it will map the triangle only 700 meters from my current location in the direction that I am facing? Thanks.

    EDIT: Converting the outcome to radians gives me a lat of 5.6xxxxxxxxxxxxxx. I have a feeling this bug has something to do with conversions, but it's not THAT simple. The equation is correct; it just outputs wrong.
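
    For reference, a minimal sketch of the same destination-point formula in C with the unit conversions actually applied. Note that Java's Math.toRadians returns the converted value rather than modifying its argument, so statements like "Math.toRadians(currlat);" on their own have no effect; that matches the symptom of a latitude landing near the equator:

        #include <math.h>

        #define EARTH_R_KM 6371.0

        /* Destination point given a start (degrees), an initial bearing
           (degrees) and a distance (km): the great-circle formula from the
           Movable Type page. */
        void destination(double latDeg, double lngDeg, double bearingDeg, double distKm,
                         double *outLatDeg, double *outLngDeg) {
            double phi1 = latDeg * M_PI / 180.0;
            double lam1 = lngDeg * M_PI / 180.0;
            double brng = bearingDeg * M_PI / 180.0;
            double d    = distKm / EARTH_R_KM;        /* angular distance */

            double phi2 = asin(sin(phi1) * cos(d) + cos(phi1) * sin(d) * cos(brng));
            double lam2 = lam1 + atan2(sin(brng) * sin(d) * cos(phi1),
                                       cos(d) - sin(phi1) * sin(phi2));

            *outLatDeg = phi2 * 180.0 / M_PI;
            *outLngDeg = lam2 * 180.0 / M_PI;
        }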

  • OpenGL coordinates question

    - by Chonch
    Hey, I have a simple OpenGL drawing. When the user changes the window's size, I want the drawing to maintain its aspect ratio. I accomplished that by setting the glViewport to the maximum rectangle with the appropriate aspect ratio whenever the reshape method is called. My problem is that I want to draw a square that will always remain in the top right corner of the window, no matter what the size or shape of the window is. Right now, that square moves around the screen whenever the window is reshaped. Can anyone please explain how to do this? Thank you.
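
    One common pattern, sketched below with assumed winW/winH parameters: render the scene through the aspect-preserving viewport as before, then switch to a full-window viewport and a pixel-space orthographic projection for anything pinned to a window corner:

        #include <GL/gl.h>

        /* Sketch: draw a 50-pixel square pinned to the top-right corner by
           temporarily drawing in window (pixel) coordinates. */
        void drawCornerSquare(int winW, int winH) {
            glViewport(0, 0, winW, winH);         /* cover the whole window */
            glMatrixMode(GL_PROJECTION);
            glPushMatrix();
            glLoadIdentity();
            glOrtho(0, winW, 0, winH, -1, 1);     /* 1 unit == 1 pixel */
            glMatrixMode(GL_MODELVIEW);
            glPushMatrix();
            glLoadIdentity();
            glRectf(winW - 60, winH - 60, winW - 10, winH - 10);
            glPopMatrix();
            glMatrixMode(GL_PROJECTION);
            glPopMatrix();
            glMatrixMode(GL_MODELVIEW);
        }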

  • Orthogonal projection and texture coordinates in opengl

    - by knuck
    I'm writing a 2D game in OpenGL. I already set up the orthogonal projection, so I can easily know where a quad will end up on screen. The problem is, I also want to be able to map pixels directly to texture coords, so I also applied an orthogonal transformation (using gluOrtho2D) to the texture. Now I can map pixels directly using integers and glTexCoord2i. The thing is, after googling/reading/asking, I found out no one really knows (apparently) the behavior of glTexCoord2i, but it works just fine the way I'm using it. Some sample test code I wrote follows:

        glBegin(GL_QUADS);
            glTexCoord2i(16, 0);  glVertex2f(X, Y);
            glTexCoord2i(16, 16); glVertex2f(X, Y+32);
            glTexCoord2i(32, 16); glVertex2f(X+32, Y+32);
            glTexCoord2i(32, 0);  glVertex2f(X+32, Y);
        glEnd();

    So, is there any problem with what I'm doing, or is what I'm doing correct?
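
    glTexCoord2i itself is unremarkable: every glTexCoord variant feeds the same coordinate through the texture matrix, and here the ortho on the texture matrix is performing the pixel-to-0..1 division. For comparison, a sketch of the same quad with plain normalized coordinates and no texture-matrix ortho; the 256x256 size is an assumption standing in for the real texture dimensions:

        #include <GL/gl.h>

        /* Same quad with normalized texture coordinates; the texture is
           assumed to be 256x256 (substitute the real size). */
        void drawQuadNormalized(float X, float Y) {
            const float texW = 256.0f, texH = 256.0f;
            glBegin(GL_QUADS);
                glTexCoord2f(16.0f/texW, 0.0f);       glVertex2f(X, Y);
                glTexCoord2f(16.0f/texW, 16.0f/texH); glVertex2f(X, Y + 32);
                glTexCoord2f(32.0f/texW, 16.0f/texH); glVertex2f(X + 32, Y + 32);
                glTexCoord2f(32.0f/texW, 0.0f);       glVertex2f(X + 32, Y);
            glEnd();
        }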

  • iPhone current user location coordinates showing as (0,0)

    - by ennuikiller
    I'm trying to get the user's current latitude and longitude with this viewDidLoad method. The resulting map correctly indicates the current location; however, the NSLog consistently shows:

        2009-09-19 16:45:29.765 Mapper[671:207] user latitude = 0.000000
        2009-09-19 16:45:29.772 Mapper[671:207] user longitude = 0.000000

    Anyone know what I am missing here? Thanks in advance for your help!

        - (void)viewDidLoad {
            [super viewDidLoad];
            [mapView setMapType:MKMapTypeStandard];
            [mapView setZoomEnabled:YES];
            [mapView setScrollEnabled:YES];
            [mapView setShowsUserLocation:YES];
            CLLocation *userLoc = mapView.userLocation.location;
            CLLocationCoordinate2D userCoordinate = userLoc.coordinate;
            NSLog(@"user latitude = %f", userCoordinate.latitude);
            NSLog(@"user longitude = %f", userCoordinate.longitude);
        }

  • How do I open Google Maps for directions using coordinates on the iPhone

    - by Aran Mulholland
    I am using MKMapView to display locations on the iPhone. I want directions from the current location to the location of interest. I don't think that's possible using MapKit (but if it is, please let me know), so I will open either the Google Maps application or Safari to display it. Can I do this by specifying coordinates from the current location to the location of interest? I have these longitudes and latitudes. Or do I have to use street addresses? If I do have to use street addresses, can I get them from the latitude and longitude?
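
    A sketch of the URL hand-off approach: build a maps.google.com directions URL with saddr/daddr coordinate pairs and open it, which on older iPhones launches the built-in Maps app; no street addresses are required. Treat the exact parameter names as an assumption to verify against current documentation:

        #include <stdio.h>

        /* Format a Google Maps directions URL from two lat/long pairs,
           e.g. for use with -[UIApplication openURL:]. */
        void build_directions_url(char *buf, size_t n,
                                  double fromLat, double fromLng,
                                  double toLat, double toLng) {
            snprintf(buf, n, "http://maps.google.com/maps?saddr=%f,%f&daddr=%f,%f",
                     fromLat, fromLng, toLat, toLng);
        }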

  • matlab: how to transform screen pixels into specific coordinates

    - by user3137385
    I have to draw a curve captured on an image, using screen pixels (mouse clicks), into a coordinate system. E.g.: pixels on the screen from left to right (130 px to 970 px) correspond to the x-axis of my coordinate system (1000 to 6000); pixels from bottom to top (670 to 99) correspond to the y-axis of the coordinate system (0 to 1.2). How can this be done? Maybe there's a function in MATLAB doing something like that? Some more explanation: I have a jpg image of a curve on a coordinate system. I've got pixel positions (x,y) of several points on that curve. Now I want to plot the same curve into a MATLAB figure with the same x and y axes as on the jpg image.
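
    Two reference pixels per axis fully determine the linear mapping. A sketch with the constants from the question, in C for illustration; in MATLAB, interp1([130 970], [1000 6000], px) computes the same thing for the x-axis:

        /* Pixel -> data coordinates by linear interpolation.
           x: pixel 130 -> 1000, pixel 970 -> 6000
           y: pixel 670 -> 0.0,  pixel  99 -> 1.2 (screen y grows downward) */
        double px_to_x(double px) {
            return 1000.0 + (px - 130.0) * (6000.0 - 1000.0) / (970.0 - 130.0);
        }
        double py_to_y(double py) {
            return 0.0 + (py - 670.0) * (1.2 - 0.0) / (99.0 - 670.0);
        }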

  • How can I draw on JPanel using another quadrant for the coordinates?

    - by Sanoj
    I would like to draw some shapes on a JPanel by overriding paintComponent, and I would like to be able to pan and zoom. Panning and zooming are easy to do with AffineTransform and the setTransform method on the Graphics2D object. After doing that, I can easily draw the shapes with g2.draw(myShape). The shapes are defined in "world coordinates", so it works fine when panning, and I have to translate them to the canvas/JPanel coordinates before drawing. Now I would like to change the quadrant of the coordinates: from the 4th quadrant that JPanel and computers often use to the 1st quadrant that users are most familiar with. The x-axis is the same, but the y-axis should increase upwards instead of downwards. It is easy to redefine the origin with new Point(origo.x, -origo.y); but how can I draw the shapes in this quadrant? I would like to keep the coordinates of the shapes (defined in world coordinates) rather than have them in the canvas coordinates, so I need to transform them in some way, or transform the Graphics2D object, and I would like to do it efficiently. Can I do this with AffineTransform too?
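
    Yes; the usual trick is to fold the flip into the same transform: translate the origin to the bottom edge of the panel and scale y by -1, ahead of the pan/zoom, so the shapes themselves stay in world coordinates. A sketch of the combined mapping, in C for illustration; in Java these correspond to translate/scale calls on the AffineTransform:

        typedef struct { double x, y; } Vec2;

        /* World (y-up) -> canvas (y-down) for a panel of the given height,
           with pan and uniform zoom applied first. */
        Vec2 world_to_canvas(Vec2 w, double panX, double panY,
                             double zoom, double panelHeight) {
            Vec2 c;
            c.x = w.x * zoom + panX;
            c.y = panelHeight - (w.y * zoom + panY);  /* y-flip */
            return c;
        }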

  • How to correctly export UV coordinates from Blender

    - by KlashnikovKid
    Alright, so I'm just now getting around to texturing some assets. After much trial and error, I feel I'm pretty good at UV unwrapping now, and my work looks good in Blender. However, either I'm using the UV data incorrectly (I really doubt it) or Blender doesn't seem to export the correct UV coordinates into the obj file, because the texture is mapped differently in my game engine. And in Blender I've played with the texture panel and its mapping options and have noticed it doesn't appear to affect the exported obj file's UV coordinates. So I guess my question is: is there something I need to do prior to exporting in order to bake the correct UV coordinates into the obj file? Or something else that needs to be done to massage the texture coordinates for sampling? Or any thoughts at all on what could be going wrong? (Also, here are screenshots of my diffuse texture in Blender and the game engine. As you can see, I have the same problem with a simple test cube not getting correct UVs either.) http://www.digitalinception.net/blenderSS.png http://www.digitalinception.net/gameSS.png
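
    One frequent culprit worth ruling out (an assumption, since the screenshots don't confirm it): OBJ puts the V origin at the bottom-left of the texture, while D3D-style engines sample from the top-left, so every V needs flipping at import time. A sketch:

        /* Flip the V of imported (u,v) pairs for engines whose texture
           origin is the top-left corner. count = number of pairs. */
        void flip_v(float *uvs, int count) {
            for (int i = 0; i < count; ++i)
                uvs[i * 2 + 1] = 1.0f - uvs[i * 2 + 1];
        }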

  • Java - Using Linear Coordinates to Check Against AI [closed]

    - by Oliver Jones
    I'm working on some artificial intelligence, and I want my AI not to run into given coordinates, as these are references to a wall/boundary. To begin with, every time my AI hits a wall, it makes a reference to that position (x,y). When it hits the same wall three times, it uses linear checkpoints to 'imagine' there is a wall going through these coordinates. I want to now prevent my AI from going into that wall again. To detect whether my coordinates make a straight line, I use:

        private boolean collinear(double x1, double y1, double x2, double y2,
                                  double x3, double y3) {
            return (y1 - y2) * (x1 - x3) == (y1 - y3) * (x1 - x2);
        }

    This returns true if the given points are linear to one another. So my problems are: How do I determine whether my robot is approaching the wall from its current trajectory? And how do I make the program 'imagine' a line not just from point 1 to point 3, but all the way through these linear coordinates, out to infinity (or close to it)? I have a feeling this is going to require some confusing trigonometry. (REPOST: http://stackoverflow.com/questions/13542592/java-using-linear-coordinates-to-check-against-ai)
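
    No trigonometry is actually required for the infinite-line part: two of the recorded points define the line, and a cross product gives any point's distance to it. A sketch, in C for illustration:

        #include <math.h>

        /* Distance from (px,py) to the infinite line through (x1,y1)-(x2,y2). */
        double line_distance(double x1, double y1, double x2, double y2,
                             double px, double py) {
            double dx = x2 - x1, dy = y2 - y1;
            double cross = dx * (py - y1) - dy * (px - x1); /* 2x signed area */
            return fabs(cross) / sqrt(dx * dx + dy * dy);
        }

    Comparing line_distance at the current position and at a small step ahead along the heading tells you whether the trajectory is closing in on the wall.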

  • 3D texture coordinates for a cube

    - by Roshan
    I want to use glTexImage3D with a cube. What will the texture coordinates for it be? I am using GL_TEXTURE_3D as the target. I tried u,v coordinates, the same as 2D texture coordinates, with the z component set to 0..depth for each face, but that goes wrong. How do I apply each layer to each face of the cube with target = GL_TEXTURE_3D? Let's assume I have 8 layers of 2D images in my 3D texture. I want all 8 layers to apply to each face of the cube, not 1 layer on 1 face of the cube.

  • Coordinate spaces and transformation matrices

    - by Belgin
    I'm trying to get an object from object space into projected space using these intermediate matrices: The first matrix (I) is the one that transforms from object space into inertial space, but since my object is not rotated or translated in any way inside the object space, this matrix is the 4x4 identity matrix. The second matrix (W) is the one that transforms from inertial space into world space, which is just a scale transform matrix of factor a = 14.1 on all coordinates, since the inertial space origin coincides with the world space origin.

            /a 0 0 0\
        W = |0 a 0 0|
            |0 0 a 0|
            \0 0 0 1/

    The third matrix (C) is the one that transforms from world space into camera space. This matrix is a translation matrix with a translation of (0, 0, 10), because I want the camera to be located behind the object, so the object must be positioned 10 units into the z axis.

            /1 0 0 0 \
        C = |0 1 0 0 |
            |0 0 1 10|
            \0 0 0 1 /

    And finally, the fourth matrix is the projection matrix (P). Bearing in mind that the eye is at the origin of the world space and the projection plane is defined by z = 1, the projection matrix is:

            /1 0 0   0\
        P = |0 1 0   0|
            |0 0 1   0|
            \0 0 1/d 0/

    where d is the distance from the eye to the projection plane, so d = 1. I'm multiplying them like this: (((P x C) x W) x I) x V, where V is the vertex's coordinates in column vector form:

            /x\
        V = |y|
            |z|
            \1/

    After I get the result, I divide the x and y coordinates by w to get the actual screen coordinates. Apparently, I'm doing something wrong or missing something completely here, because it's not rendering properly. Here's a picture of what is supposed to be the bottom side of the Stanford Dragon: [screenshot omitted] Also, I should add that this is a software renderer, so no DirectX or OpenGL stuff here.

  • Defining Light Coordinates

    - by Zachary
    I took a Computer Graphics exam a couple of days ago which had an extra credit question like the following: "A light can be defined in one of two ways. It can be defined in world coordinates, e.g. a street light, or in viewer (eye) coordinates, e.g. a head-lamp worn by a miner. In either case the viewpoint can freely change. Describe how the light should be transformed differently in these two cases." Since I won't get to see the results of this until after spring break, I thought I would ask here. It seems like the analogies being used are misleading: could you not define a light source that is located at the viewer's eye in world coordinates just as well as in eye coordinates? I've been doing some research on how OpenGL handles light, and it seems as though it always uses eye coordinates; the ModelView matrix would be applied to any light in world coordinates. In that case the answer may just be that you would have to transform a light defined in world coordinates into eye coordinates using something like the ModelView matrix, while a light defined in eye coordinates would only need to be transformed by the projection matrix. Then again, I could be totally underthinking (or overthinking) this. Another thought I had is that it determines which way you render shadows, but that has more to do with the location of the light and its type (point, directional, emission, etc.) than with what coordinates it is represented in. Any ideas?

  • 3d environments and managing them on iOS

    - by alJaree
    I would like to start learning 3D game development; currently I only develop 2D games. A few basic questions I am interested in:

    - What is used to create the 3D environments? Are they all done in 3D modeling software such as Maya or Lightwave? What is the output format for these models, and how are they manipulated in iOS? Is it all done using OpenGL (GL ES on iOS)?
    - E.g., a monster needs to be spawned in the game world. What coordinates are used? Are the concepts the same as in 2D in terms of collision on the coordinates and movement on the coordinates of the game world?
    - How are 3D games managed in iOS given the low available memory (e.g. FPS games)?

    Lastly, can someone please recommend a good book that is up to date and can be applied to today's techniques? Thanks.

  • how to transform child elements position into a world position

    - by MrGreg
    So I'm making a 2D space game, and I have a bunch of spaceships that have turrets. Objects have a position and orientation; the ships are in world coordinates, while the turrets are children whose coordinates are relative to their parents. How do I efficiently calculate the position of a turret in world coordinates (i.e. when it fires and I need to know where to place a bullet in the world)? Calculating the turret's orientation is trivial: I just add the turret's relative angle to its parent's. For position, though, I guess I could do a bunch of trigonometry, but this MUST be a common problem with a good/fast general solution? Should I be relearning how to do matrix math again? :) BTW, I'm creating the game in JavaScript+canvas, but it's the math/algorithm I'm interested in here. Cheers, Greg
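
    The standard answer is rotate-then-translate: rotate the turret's local offset by the ship's orientation, then add the ship's position; no matrix library is needed for a single parent-child level. A sketch, in C for illustration (the same four lines work in JavaScript with Math.cos/Math.sin):

        #include <math.h>

        typedef struct { double x, y; } Vec2;

        /* World position of a child from the parent's world position/angle
           and the child's offset in the parent's local frame. */
        Vec2 child_world_pos(Vec2 parentPos, double parentAngle, Vec2 localOffset) {
            double c = cos(parentAngle), s = sin(parentAngle);
            Vec2 w = { parentPos.x + localOffset.x * c - localOffset.y * s,
                       parentPos.y + localOffset.x * s + localOffset.y * c };
            return w;
        }

    For deeper hierarchies (turrets on turrets), applying this repeatedly parent-by-parent is exactly what a matrix stack would do for you.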

  • HTML5 Canvas Converting between cartesian and isometric coordinates

    - by Amir
    I'm having issues wrapping my head around the Cartesian-to-isometric coordinate conversion in HTML5 canvas. As I understand it, the process is twofold:

    (1) Scale down the y-axis by 0.5, i.e. ctx.scale(1, 0.5); or ctx.setTransform(1, 0, 0, 0.5, 0, 0); This supposedly produces the following matrix: [x; y] x [1, 0; 0, 0.5]

    (2) Rotate the context by 45 degrees, i.e. ctx.rotate(Math.PI/4); This should produce the following matrix: [x; y] x [cos(45), -sin(45); sin(45), cos(45)]

    This (somehow) results in the final matrix of ctx.setTransform(2, -1, 1, 0.5, 0, 0); which I cannot seem to understand... How is this matrix derived? I cannot seem to produce it by multiplying the scaling and rotation matrices produced earlier. Also, if I write out the equation for the final transformation matrix, I get:

        newX = 2x + y
        newY = -x + y/2

    But this doesn't seem to be correct. For example, the following code draws an isometric tile at Cartesian coordinates (500, 100):

        ctx.setTransform(2, -1, 1, 0.5, 0, 0);
        ctx.fillRect(500, 100, width*2, height);

    When I check the result on the screen, the actual coordinates are (285, 215), which do not satisfy the equations I produced earlier... So what is going on here? I would be very grateful if you could: (1) help me understand how the final isometric transformation matrix is derived; (2) help me produce the correct equation for finding the on-screen coordinates of an isometric projection. Many thanks and kind regards.
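
    One way to check the derivation is simply to multiply the two matrices out. The sketch below composes them in canvas order (scale first, then rotate) and prints entries of about +/-0.707 and +/-0.354, so the integer matrix (2, -1, 1, 0.5) is not the product of that scale and rotation, nor a scalar multiple of it; it appears to come from a different, integer-friendly 2:1 isometric convention rather than the literal scale-plus-rotate composition:

        #include <stdio.h>
        #include <math.h>

        /* Compose scale(1, 0.5) with rotate(45 deg) the way canvas does
           (successive calls multiply on the right: M = S * R). */
        int main(void) {
            double c = cos(M_PI / 4), s = sin(M_PI / 4);
            double S[2][2] = {{1, 0}, {0, 0.5}};
            double R[2][2] = {{c, -s}, {s, c}};
            double M[2][2];
            for (int i = 0; i < 2; i++)
                for (int j = 0; j < 2; j++)
                    M[i][j] = S[i][0] * R[0][j] + S[i][1] * R[1][j];
            printf("%.3f %.3f\n%.3f %.3f\n", M[0][0], M[0][1], M[1][0], M[1][1]);
            return 0; /* prints 0.707 -0.707 / 0.354 0.354 */
        }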

  • How to import UTM coordinates into Google Earth?

    - by Florian Jenn
    I have some points in UTM coordinates that I'd like to import into Google Earth. Google Earth is able to show UTM coordinates, but I have not found a way to import points, or to enter coordinates in a placemark's properties, using UTM coordinates. Is it possible to do this with just Google Earth, or do I need to convert my data set externally? I do have separate tools for coordinate conversion, but I'd like to spare the extra step.
