Search Results

Search found 1507 results on 61 pages for 'coordinates'.

Page 12 of 61

  • How to get the left-top and right-bottom latitude and longitude of the map in MapKit for iPad?

    - by damniatx
    Hi, I'm looking for a way to get the latitude and longitude of the visible corners of a map view in MapKit. The same problem has been answered before, but the answer didn't give the right coordinates. http://yit.me/3ddp73 This is the code from the thread I'm linking above:

        CLLocationCoordinate2D topLeft, bottomRight;
        topLeft = [mapView convertPoint:CGPointMake(0,0) toCoordinateFromView:mapView];
        CGPoint pointBottomRight = CGPointMake(mapView.frame.size.width, mapView.frame.size.height);
        bottomRight = [mapView convertPoint:pointBottomRight toCoordinateFromView:mapView];
        NSLog(@"topleft = %f", topLeft);
        NSLog(@"bottom right = %f", bottomRight);

    Any idea how to fix this issue? Thanks

    Read the article

  • Java/Swing: JTable.rowAtPoint doesn't work correctly for points outside the table?

    - by Jason S
    I have this code to get a row from a JTable that is generated by a DragEvent for a DropTarget in some component C which may or may not be the JTable: public int getRowFromDragEvent(DropTargetDragEvent event) { Point p = event.getLocation(); if (event.getSource() != this.table) { SwingUtilities.convertPointToScreen(p, event.getDropTargetContext().getComponent()); SwingUtilities.convertPointFromScreen(p, this.table); if (!this.table.contains(p)) { System.out.println("outside table, would be row "+this.table.rowAtPoint(p)); } } return this.table.rowAtPoint(p); } The System.out.println is just a hack right now. What I'm wondering, is why the System.out.println doesn't always print "row -1". When the table is empty it does, but if I drag something onto the table's header row, I get the println with "row 0". This seems like a bug... or am I not understanding how rowAtPoint() works?

    Read the article

  • Google Maps: How to add HTML elements to specific coordinates?

    - by Christian Toma
    I would like to know whether and how I can add standard HTML elements (div, button) to a specific set of coordinates on the map. For example, I have a set of coordinates and I would like to attach a custom balloon notification to them; when I pan away from the coordinates the element should disappear, and when I pan back to them the element should reappear. Is it possible to do this with Google Maps?

    Read the article

  • Missing Coordinates. Basic Trigonometry Help.

    - by TheDarkIn1978
    Please refer to my quick diagram attached below. What I'm trying to do is get the coordinates of the yellow dots by using the angle from the red dots' known coordinates. Assuming each yellow dot is about 20 pixels away from the red dot at x:50/y:250, at a right angle (I think that's what it's called), how do I get their coordinates? I believe this is very basic trigonometry and that I should use Math.tan(), but they didn't teach us much math in art school.
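
    A minimal sketch of the underlying math, assuming the distance (about 20 pixels) and the angle of each yellow dot relative to the red dot are known; it uses cos/sin rather than tan, and because screen y usually grows downward the sign on the sine term may need flipping:

        import math

        def point_at(origin_x, origin_y, angle_deg, distance):
            # Walk `distance` pixels away from the origin along `angle_deg`
            # (0 degrees points right, angles increase counter-clockwise).
            rad = math.radians(angle_deg)
            return (origin_x + distance * math.cos(rad),
                    origin_y - distance * math.sin(rad))  # minus: screen y points down

        # e.g. a dot 20 pixels from the red dot at (50, 250), straight "up" on screen
        print(point_at(50, 250, 90, 20))   # -> (50.0, 230.0)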

    Read the article

  • How can I get node coordinates from a graph, using Perl?

    - by jonny
    OK, I have a flowchart definition (basically, an array of nodes and the edges for each node). Now I want to calculate coordinates for every task in the flow, preferably in a hierarchical style. I need something like Graph::Easy::Layout, but I have no idea how to get node coordinates: I render the nodes myself and I only want to retrieve box coordinates/sizes. Any suggestions? What I need is a CPAN module that is available even in the Debian repository.

    Read the article

  • How can I ease the work of getting pixel coordinates from a spritesheet?

    - by ThePlan
    When it comes to spritesheets, they're usually easier to use and they're very efficient memory-wise, but the problem I always have is getting the actual position of a sprite within the sheet. Usually I have to throw in some approximated values and modify them several times until I get it right. My question: is there a tool which can basically show you the coordinates of the mouse relative to the image you have opened? Or is there a simpler method of getting the exact rectangle that a sprite is contained in?
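
    If the sheet happens to be laid out as a uniform grid (a common case, though not stated in the question), the rectangle can be computed from the sprite's index instead of being read off with a tool; a small sketch, where the margin and spacing parameters are assumptions for sheets with padding:

        def sprite_rect(index, columns, cell_w, cell_h, margin=0, spacing=0):
            # Rectangle (x, y, w, h) of sprite number `index` in a grid spritesheet,
            # counting left-to-right, top-to-bottom.
            col = index % columns
            row = index // columns
            x = margin + col * (cell_w + spacing)
            y = margin + row * (cell_h + spacing)
            return (x, y, cell_w, cell_h)

        # e.g. the 6th sprite (index 5) of a sheet with 4 columns of 32x32 cells
        print(sprite_rect(5, 4, 32, 32))   # -> (32, 32, 32, 32)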

    Read the article

  • How do I change a Matlab program that solves an equation with the finite element method?

    - by DSblizzard
    I don't know is this question more related to mathematics or programming and I'm absolute newbie in Matlab. Program FEM_50 applies the finite element method to Laplace's equation -Uxx(x, y) - Uyy(x, y) = F(x, y) in Omega. How to change it to apply FEM to equation -Uxx(x, y) - Uyy(x, y) + U(x, y) = F(x, y)? At this page: http://sc.fsu.edu/~burkardt/m_src/fem_50/fem_50.html additional code files in case you need them. function fem_50 ( ) %% FEM_50 applies the finite element method to Laplace's equation. % % Discussion: % % FEM_50 is a set of MATLAB routines to apply the finite % element method to solving Laplace's equation in an arbitrary % region, using about 50 lines of MATLAB code. % % FEM_50 is partly a demonstration, to show how little it % takes to implement the finite element method (at least using % every possible MATLAB shortcut.) The user supplies datafiles % that specify the geometry of the region and its arrangement % into triangular and quadrilateral elements, and the location % and type of the boundary conditions, which can be any mixture % of Neumann and Dirichlet. % % The unknown state variable U(x,y) is assumed to satisfy % Laplace's equation: % -Uxx(x,y) - Uyy(x,y) = F(x,y) in Omega % with Dirichlet boundary conditions % U(x,y) = U_D(x,y) on Gamma_D % and Neumann boundary conditions on the outward normal derivative: % Un(x,y) = G(x,y) on Gamma_N % If Gamma designates the boundary of the region Omega, % then we presume that % Gamma = Gamma_D + Gamma_N % but the user is free to determine which boundary conditions to % apply. Note, however, that the problem will generally be singular % unless at least one Dirichlet boundary condition is specified. % % The code uses piecewise linear basis functions for triangular elements, % and piecewise isoparametric bilinear basis functions for quadrilateral % elements. % % The user is required to supply a number of data files and MATLAB % functions that specify the location of nodes, the grouping of nodes % into elements, the location and value of boundary conditions, and % the right hand side function in Laplace's equation. Note that the % fact that the geometry is completely up to the user means that % just about any two dimensional region can be handled, with arbitrary % shape, including holes and islands. % clear % % Read the nodal coordinate data file. % load coordinates.dat; % % Read the triangular element data file. % load elements3.dat; % % Read the quadrilateral element data file. % load elements4.dat; % % Read the Neumann boundary condition data file. % I THINK the purpose of the EVAL command is to create an empty NEUMANN array % if no Neumann file is found. % eval ( 'load neumann.dat;', 'neumann=[];' ); % % Read the Dirichlet boundary condition data file. % load dirichlet.dat; A = sparse ( size(coordinates,1), size(coordinates,1) ); b = sparse ( size(coordinates,1), 1 ); % % Assembly. % for j = 1 : size(elements3,1) A(elements3(j,:),elements3(j,:)) = A(elements3(j,:),elements3(j,:)) ... + stima3(coordinates(elements3(j,:),:)); end for j = 1 : size(elements4,1) A(elements4(j,:),elements4(j,:)) = A(elements4(j,:),elements4(j,:)) ... + stima4(coordinates(elements4(j,:),:)); end % % Volume Forces. % for j = 1 : size(elements3,1) b(elements3(j,:)) = b(elements3(j,:)) ... + det( [1,1,1; coordinates(elements3(j,:),:)'] ) * ... f(sum(coordinates(elements3(j,:),:))/3)/6; end for j = 1 : size(elements4,1) b(elements4(j,:)) = b(elements4(j,:)) ... + det([1,1,1; coordinates(elements4(j,1:3),:)'] ) * ... 
f(sum(coordinates(elements4(j,:),:))/4)/4; end % % Neumann conditions. % if ( ~isempty(neumann) ) for j = 1 : size(neumann,1) b(neumann(j,:)) = b(neumann(j,:)) + ... norm(coordinates(neumann(j,1),:) - coordinates(neumann(j,2),:)) * ... g(sum(coordinates(neumann(j,:),:))/2)/2; end end % % Determine which nodes are associated with Dirichlet conditions. % Assign the corresponding entries of U, and adjust the right hand side. % u = sparse ( size(coordinates,1), 1 ); BoundNodes = unique ( dirichlet ); u(BoundNodes) = u_d ( coordinates(BoundNodes,:) ); b = b - A * u; % % Compute the solution by solving A * U = B for the remaining unknown values of U. % FreeNodes = setdiff ( 1:size(coordinates,1), BoundNodes ); u(FreeNodes) = A(FreeNodes,FreeNodes) \ b(FreeNodes); % % Graphic representation. % show ( elements3, elements4, coordinates, full ( u ) ); return end
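
    For reference, the change asked about is mathematical rather than cosmetic: the extra +U(x, y) term adds the integral of u*v over Omega to the weak form, so a mass matrix M has to be assembled alongside the stiffness matrix A and the solved system becomes (A + M) * U = b. A minimal sketch of the local (consistent) mass matrix for the linear triangular elements, written in Python/NumPy rather than Matlab; the quadrilateral elements would need an analogous term:

        import numpy as np

        def local_mass_tri(coords):
            """Consistent mass matrix of a linear triangle.

            coords -- 3x2 array with the triangle's vertex coordinates,
                      i.e. coordinates(elements3(j,:),:) in the code above.
            """
            a, b, c = np.asarray(coords, float)
            area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
            return (area / 12.0) * np.array([[2.0, 1.0, 1.0],
                                             [1.0, 2.0, 1.0],
                                             [1.0, 1.0, 2.0]])

        # In the assembly loop this would be accumulated next to stima3, e.g.
        #   M(elements3(j,:), elements3(j,:)) += local_mass_tri(...)
        # and the free-node solve then uses (A + M) instead of A.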

    Read the article

  • Is it possible to map mouse coordinates to isometric tiles with this coordinate system?

    - by plukich
    I'm trying to implement mouse interaction in a 2D isometric game, but I'm not sure if it's possible given the coordinate system used for tile maps in the game. I've read some helpful things like this. However, this game's coordinate system is "jagged" (for lack of a better word), and looks like this: Is it even possible to map mouse coordinates to this successfully, since the y-axis can't be drawn on this tile-map as a straight line? I've thought about doing odd-y-value translations and even-y-value translations with two different matrices, but that only makes sense going from tile to screen.

    Read the article

  • Why are there 3 conflicting OpenCV camera calibration formulas?

    - by John
    I'm having a problem with OpenCV's various parameterization of coordinates used for camera calibration purposes. The problem is that three different sources of information on image distortion formulae apparently give three non-equivalent description of the parameters and equations involved: (1) In their book "Learning OpenCV…" Bradski and Kaehler write regarding lens distortion (page 376): xcorrected = x * ( 1 + k1 * r^2 + k2 * r^4 + k3 * r^6 ) + [ 2 * p1 * x * y + p2 * ( r^2 + 2 * x^2 ) ], ycorrected = y * ( 1 + k1 * r^2 + k2 * r^4 + k3 * r^6 ) + [ p1 * ( r^2 + 2 * y^2 ) + 2 * p2 * x * y ], where r = sqrt( x^2 + y^2 ). Assumably, (x, y) are the coordinates of pixels in the uncorrected captured image corresponding to world-point objects with coordinates (X, Y, Z), camera-frame referenced, for which xcorrected = fx * ( X / Z ) + cx and ycorrected = fy * ( Y / Z ) + cy, where fx, fy, cx, and cy, are the camera's intrinsic parameters. So, having (x, y) from a captured image, we can obtain the desired coordinates ( xcorrected, ycorrected ) to produced an undistorted image of the captured world scene by applying the above first two correction expressions. However... (2) The complication arises as we look at OpenCV 2.0 C Reference entry under the Camera Calibration and 3D Reconstruction section. For ease of comparison we start with all world-point (X, Y, Z) coordinates being expressed with respect to the camera's reference frame, just as in #1. Consequently, the transformation matrix [ R | t ] is of no concern. In the C reference, it is expressed that: x' = X / Z, y' = Y / Z, x'' = x' * ( 1 + k1 * r'^2 + k2 * r'^4 + k3 * r'^6 ) + [ 2 * p1 * x' * y' + p2 * ( r'^2 + 2 * x'^2 ) ], y'' = y' * ( 1 + k1 * r'^2 + k2 * r'^4 + k3 * r'^6 ) + [ p1 * ( r'^2 + 2 * y'^2 ) + 2 * p2 * x' * y' ], where r' = sqrt( x'^2 + y'^2 ), and finally that u = fx * x'' + cx, v = fy * y'' + cy. As one can see these expressions are not equivalent to those presented in #1, with the result that the two sets of corrected coordinates ( xcorrected, ycorrected ) and ( u, v ) are not the same. Why the contradiction? It seems to me the first set makes more sense as I can attach physical meaning to each and every x and y in there, while I find no physical meaning in x' = X / Z and y' = Y / Z when the camera focal length is not exactly 1. Furthermore, one cannot compute x' and y' for we don't know (X, Y, Z). (3) Unfortunately, things get even murkier when we refer to the writings in Intel's Open Source Computer Vision Library Reference Manual's section Lens Distortion (page 6-4), which states in part: "Let ( u, v ) be true pixel image coordinates, that is, coordinates with ideal projection, and ( u ~, v ~ ) be corresponding real observed (distorted) image coordinates. Similarly, ( x, y ) are ideal (distortion-free) and ( x ~, y ~ ) are real (distorted) image physical coordinates. Taking into account two expansion terms gives the following: x ~ = x * ( 1 + k1 * r^2 + k2 * r^4 ) + [ 2 p1 * x * y + p2 * ( r^2 + 2 * x^2 ) ] y ~ = y * ( 1 + k1 * r^2 + k2 * r^4 ] + [ 2 p2 * x * y + p2 * ( r^2 + 2 * y^2 ) ], where r = sqrt( x^2 + y^2 ). ... "Because u ~ = cx + fx * u and v ~ = cy + fy * v , … the resultant system can be rewritten as follows: u ~ = u + ( u – cx ) * [ k1 * r^2 + k2 * r^4 + 2 * p1 * y + p2 * ( r^2 / x + 2 * x ) ] v ~ = v + ( v – cy ) * [ k1 * r^2 + k2 * r^4 + 2 * p2 * x + p1 * ( r^2 / y + 2 * y ) ] The latter relations are used to undistort images from the camera." 
Well, it would appear that the expressions involving x ~ and y ~ coincided with the two expressions given at the top of this writing involving xcorrected and ycorrected. However, x ~ and y ~ do not refer to corrected coordinates, according to the given description. I don't understand the distinction between the meaning of the coordinates ( x ~, y ~ ) and ( u ~, v ~ ), or for that matter, between the pairs ( x, y ) and ( u, v ). From their descriptions it appears their only distinction is that ( x ~, y ~ ) and ( x, y ) refer to 'physical' coordinates while ( u ~, v ~ ) and ( u, v ) do not. What is this distinction all about? Aren't they all physical coordinates? I'm lost! Thanks for any input!
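
    For concreteness, here is the model exactly as quoted from the C reference in (2), written as a small Python function; note that the distortion acts on the normalized coordinates x' = X/Z and y' = Y/Z, and the intrinsics fx, fy, cx, cy only enter at the final pixel-mapping step:

        def project_point(X, Y, Z, fx, fy, cx, cy, k1, k2, k3, p1, p2):
            # Normalized pinhole coordinates -- the x', y' of the C reference.
            xp, yp = X / Z, Y / Z
            r2 = xp * xp + yp * yp
            radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
            # Radial plus tangential distortion, giving x'', y''.
            xpp = xp * radial + 2.0 * p1 * xp * yp + p2 * (r2 + 2.0 * xp * xp)
            ypp = yp * radial + p1 * (r2 + 2.0 * yp * yp) + 2.0 * p2 * xp * yp
            # Only now do the intrinsics map to pixel coordinates (u, v).
            return fx * xpp + cx, fy * ypp + cy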

    Read the article

  • Understanding normal maps on terrain

    - by JohnB
    I'm having trouble understanding some of the math behind normal map textures. I've got it working using borrowed code, but I want to understand it. I have a terrain based on a heightmap. I'm generating a mesh of triangles at load time and rendering that mesh. Now for each vertex I need to calculate a normal, a tangent, and a bitangent. My understanding is as follows; have I got this right? The normal is a unit vector facing outwards from the surface of the triangle. For a vertex I take the average of the normals of the triangles using that vertex. The tangent is a unit vector in the direction of the 'u' coordinates of the texture map. As my texture u,v coordinates follow the x and y coordinates of the terrain, my understanding is that this vector is simply the vector along the surface in the x direction, so I should be able to calculate it as the difference between vertices in the x direction (and normalize it). The bitangent is a unit vector in the direction of the 'v' coordinates of the texture map. Again, as my texture u,v coordinates follow the x and y coordinates of the terrain, this should simply be the vector along the surface in the y direction, calculated as the difference between vertices in the y direction (and normalized). However, the code I have borrowed seems much more complicated than this and takes into account the actual values of u and v at each vertex, which I don't understand the need for, as they increase in exactly the same direction as x and y. I implemented what I thought from the above, and it simply doesn't work; the normals are clearly not working for lighting. Have I misunderstood something? Or can someone explain the physical meaning of the tangent and bitangent vectors when applied to a mesh generated from a heightmap like this, where the u and v texture coordinates map along the x and y directions? Thanks for any help understanding this.
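
    A minimal sketch of the reasoning described above, assuming the terrain lies in the x/y plane with the height stored in z and with u, v increasing along x, y; more general mesh code has to work from the per-vertex u, v values precisely because that assumption does not hold for arbitrary meshes, which is likely why the borrowed code looks more involved:

        import numpy as np

        def tbn_at(height, x, y, spacing=1.0):
            """Tangent, bitangent and normal at grid cell (x, y) of a heightmap."""
            # Central differences of the height field.
            dhdx = (height[x + 1, y] - height[x - 1, y]) / (2.0 * spacing)
            dhdy = (height[x, y + 1] - height[x, y - 1]) / (2.0 * spacing)

            tangent   = np.array([1.0, 0.0, dhdx])    # surface direction in which u grows
            bitangent = np.array([0.0, 1.0, dhdy])    # surface direction in which v grows
            normal    = np.cross(tangent, bitangent)  # = (-dhdx, -dhdy, 1)

            unit = lambda v: v / np.linalg.norm(v)
            return unit(tangent), unit(bitangent), unit(normal)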

    Read the article

  • How can I improve my isometric tile-picking algorithm?

    - by Cypher
    I've spent the last few days researching isometric tile-picking algorithms (converting screen-coordinates to tile-coordinates), and have obviously found a lot of the math beyond my grasp. I have come fairly close and what I have is workable, but I would like to improve on this algorithm as it's a little off and seems to pick down and to the right of the mouse pointer. I've uploaded a video to help visualize the current implementation: http://youtu.be/EqwWcq1zuaM My isometric rendering algorithm is based on what is found at this stackoverflow question's answer, with the exception that my x and y axis' are inverted (x increased down-right, while y increased up-right). Here is where I am converting from screen to tiles: // these next few lines convert the mouse pointer position from screen // coordinates to tile-grid coordinates. cameraOffset captures the current // mouse location and takes into consideration the camera's position on screen. System.Drawing.Point cameraOffset = new System.Drawing.Point( 0, 0 ); cameraOffset.X = mouseLocation.X + (int)camera.Left; cameraOffset.Y = ( mouseLocation.Y + (int)camera.Top ); // the camera-aware mouse coordinates are then further converted in an attempt // to select only the "tile" portion of the grid tiles, instead of the entire // rectangle. this algorithm gets close, but could use improvement. mouseTileLocation.X = ( cameraOffset.X + 2 * cameraOffset.Y ) / Global.TileWidth; mouseTileLocation.Y = -( ( 2 * cameraOffset.Y - cameraOffset.X ) / Global.TileWidth ); Things to make note of: mouseLocation is a System.Drawing.Point that represents the screen coordinates of the mouse pointer. cameraOffset is the screen position of the mouse pointer that includes the position of the game camera. mouseTileLocation is a System.Drawing.Point that is supposed to represent the tile coordinates of the mouse pointer. If you check out the above link to youtube, you'll notice that the picking algorithm is off a bit. How can I improve on this?
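
    A sketch of the inverse mapping for the usual diamond projection, assuming the forward transform is screenX = (tileX - tileY) * TileWidth / 2 and screenY = (tileX + tileY) * TileHeight / 2 (the signs would change for the inverted axes described above). Two details that commonly cause a "down and to the right" bias: TileHeight has to appear in the formula alongside TileWidth, and the result needs math.floor rather than integer division, which truncates toward zero:

        import math

        def screen_to_tile(mouse_x, mouse_y, tile_w, tile_h, cam_left=0.0, cam_top=0.0):
            # Camera-aware mouse position, as with cameraOffset above.
            sx = mouse_x + cam_left
            sy = mouse_y + cam_top
            # Invert screenX = (tx - ty) * tile_w / 2, screenY = (tx + ty) * tile_h / 2.
            half_w, half_h = tile_w / 2.0, tile_h / 2.0
            tx = (sx / half_w + sy / half_h) / 2.0
            ty = (sy / half_h - sx / half_w) / 2.0
            return math.floor(tx), math.floor(ty)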

    Read the article

  • How can I determine the first visible tile in an isometric perspective?

    - by alekop
    I am trying to render the visible portion of a diamond-shaped isometric map. The "world" coordinate system is a 2D Cartesian system, with the coordinates increasing diagonally (in terms of the view coordinate system) along the axes. The "view" coordinates are simply mouse offsets relative to the upper left corner of the view. My rendering algorithm works by drawing diagonal spans, starting from the upper right corner of the view and moving diagonally to the right and down, advancing to the next row when it reaches the right view edge. When the rendering loop reaches the lower left corner, it stops. There are functions to convert a point from view coordinates to world coordinates and then to map coordinates. Everything works when rendering from tile 0,0, but as the view scrolls around the rendering needs to start from a different tile. I can't figure out how to determine which tile is closest to the upper right corner. At the moment I am simply converting the coordinates of the upper right corner to map coordinates. This works as long as the view origin (upper right corner) is inside the world, but when approaching the edges of the map the starting tile coordinate obviously become invalid. I guess this boils down to asking "how can I find the intersection between the world X axis and the view X axis?"

    Read the article

  • Creating a voxel chunk with a VBO - How to translate the coordinates of each block and add it to the VBO chunk?

    - by sunsunsunsunsun
    Im trying to make a voxel engine similar to minecraft as a little learning experience and a way to learn some opengl. I have created a chunk class and I want to put all of the vertices for the whole chunk into a single VBO. I was previously only putting each block into a vbo and making a call to render each block. Anyways, I am a bit confused about how I can translate the coordinates of each block in the chunk when I'm putting all vertices into one vbo. This is what I have at the moment. public void putVertices(float tx, float ty, float tz) { float l_length = 1.0f; float l_height = 1.0f; float l_width = 1.0f; vertexPositionData.put(new float[]{ xOffset + l_length + tx, l_height + ty, zOffset + -l_width + tz, xOffset + -l_length + tx, l_height + ty, zOffset + -l_width + tz, xOffset + -l_length + tx, l_height + ty, zOffset + l_width + tz, xOffset + l_length + tx, l_height + ty, zOffset + l_width + tz, xOffset + l_length + tx, -l_height + ty, zOffset + l_width + tz, xOffset + -l_length + tx, -l_height + ty, zOffset + l_width + tz, xOffset + -l_length + tx, -l_height + ty, zOffset + -l_width + tz, xOffset + l_length + tx, -l_height + ty, zOffset + -l_width + tz, xOffset + l_length + tx, l_height + ty, zOffset + l_width + tz, xOffset + -l_length + tx, l_height + ty,zOffset + l_width + tz, xOffset + -l_length + tx, -l_height + ty,zOffset + l_width + tz, xOffset + l_length + tx, -l_height + ty, zOffset + l_width + tz, xOffset + l_length + tx, -l_height + ty, zOffset + -l_width + tz, xOffset + -l_length + tx, -l_height + ty,zOffset + -l_width + tz, xOffset + -l_length + tx, l_height + ty, zOffset + -l_width + tz, xOffset + l_length + tx, l_height + ty, zOffset + -l_width + tz, xOffset + -l_length + tx, l_height + ty, zOffset + l_width + tz, xOffset + -l_length + tx, l_height + ty, zOffset + -l_width + tz, xOffset + -l_length + tx, -l_height + ty, zOffset + -l_width + tz, xOffset + -l_length + tx, -l_height + ty,zOffset + l_width + tz, xOffset + l_length + tx, l_height + ty,zOffset + -l_width + tz, xOffset + l_length + tx, l_height + ty, zOffset + l_width + tz, xOffset + l_length + tx, -l_height + ty, zOffset + l_width + tz, xOffset + l_length + tx, -l_height + ty, zOffset + -l_width + tz }); } public void createChunk() { vertexPositionData = BufferUtils.createFloatBuffer((24*3)*activateBlocks); Random random = new Random(); for (int x = 0; x < CHUNK_SIZE; x++) { for (int y = 0; y < CHUNK_SIZE; y++) { for (int z = 0; z < CHUNK_SIZE; z++) { if(blocks[x][y][z].getActive()) { putVertices(x*2.0f, y*2.0f, z*2.0f); } } } } Whats any easy way to translate the vertices of each block into its correct position? I was previously using glTranslatef with each call to render block but this wont work now. What I am doing now also does not work, the blocks all render in stacks on top of each other and it looks like this: http://i.imgur.com/NyFtBTI.png Thanks
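
    A minimal sketch of the general pattern, independent of the code above: every block appends the same local cube vertices, shifted by its block index times the block size (the x*2.0f passed to putVertices plays that role for a cube spanning -1..1). Written in Python for brevity, with cube_verts standing in for the 24 positions listed above:

        def build_chunk_vertices(active, cube_verts, block_size=2.0):
            """Flatten one chunk into a single vertex list for one VBO.

            active     -- active[x][y][z] booleans, like blocks[x][y][z].getActive()
            cube_verts -- (x, y, z) tuples of one cube centered at the origin
            block_size -- world-space size of one cube (2.0 if it spans -1..1)
            """
            out = []
            for x, plane in enumerate(active):
                for y, row in enumerate(plane):
                    for z, is_active in enumerate(row):
                        if not is_active:
                            continue
                        tx, ty, tz = x * block_size, y * block_size, z * block_size
                        # Same local vertices every time, just moved to the block's spot.
                        out.extend((vx + tx, vy + ty, vz + tz) for vx, vy, vz in cube_verts)
            return out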

    Read the article

  • I'm storing click coordinates in my db and then reloading them later and showing them on the site wh

    - by trainbolt
    That's it, basically. Storing the click coordinates is obviously the simple step, but once I have them, if the user comes back and their window is smaller or larger, the coordinates are wrong. Am I going about this in the wrong way? Should I also store an element id/DOM reference or something of that nature? Also, this script will be run over many different websites with more than one layout. Is there a way to do this where the layout is independent of how the coordinates are stored? Thanks.
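
    One common layout-independent approach, sketched here as plain functions (the language is incidental; the point is the data model): store an identifier for the containing element plus the click position as a fraction of that element's size, and rebuild the absolute position from the element's current geometry when redisplaying:

        def to_stored(click_x, click_y, elem_id, elem_left, elem_top, elem_w, elem_h):
            # Persist the element id and the click as a fraction of the element's box.
            return (elem_id, (click_x - elem_left) / elem_w, (click_y - elem_top) / elem_h)

        def to_screen(stored, elem_left, elem_top, elem_w, elem_h):
            # Re-derive absolute coordinates from the element's *current* geometry.
            _, fx, fy = stored
            return (elem_left + fx * elem_w, elem_top + fy * elem_h)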

    Read the article

  • How can I print text fields at the right coordinates?

    - by Milad
    I don't have any background in programming and this is my first shot. I wrote a Delphi program that is supposed to print on a result sheet. I work in an institute and we have to fill out hundreds of result sheets every 2 months. It's really difficult to do that by hand, and inconsistent handwriting is also an issue. I have this code:

        if PrintDialog.Execute() then
        begin
          with MyPrinter do
          begin
            MyPrinter.BeginDoc(); // Start printing
            // Prints first name
            MyPrinter.Canvas.TextOut(FirstNameX, FirstNameY, EditFirstName.Text);
            // Prints last name
            MyPrinter.Canvas.TextOut(LastNameX, LastNameY, EditLastName.Text);
            // Prints level
            MyPrinter.Canvas.TextOut(LevelX, LevelY, EditLevel.Text);
            // Prints date
            MyPrinter.Canvas.TextOut(DateX, DateY, MEditDate.Text);
            // Prints student number
            MyPrinter.Canvas.TextOut(StdNumX, StdNumY, EditStdnumber.Text);
            ....
            MyPrinter.EndDoc(); // End printing
          end;
        end;

    I can't get the right coordinates to print properly. Am I missing something? How can I set the right coordinates? As far as I know, TPrinter uses pixels for its coordinates, but papers are measured in inches or centimeters. I'm really confused. I appreciate any help.
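
    A worked sketch of the unit conversion itself, independent of Delphi: the printer canvas is addressed in device pixels (dots), so a position measured on the paper in centimetres or inches has to be scaled by the printer's resolution in dots per inch (on Windows this can be queried with GetDeviceCaps and LOGPIXELSX/LOGPIXELSY; here it is just a parameter):

        def cm_to_device_pixels(x_cm, y_cm, dpi_x, dpi_y):
            # 1 inch = 2.54 cm; TextOut-style calls expect device pixels.
            return (round(x_cm / 2.54 * dpi_x),
                    round(y_cm / 2.54 * dpi_y))

        # e.g. a field 2.5 cm from the left and 5 cm from the top on a 600 dpi printer
        print(cm_to_device_pixels(2.5, 5.0, 600, 600))   # -> (591, 1181)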

    Read the article

  • How can I find the right UV coordinates for interpolating a bezier curve?

    - by ssb
    I'll let this picture do the talking. I'm trying to create a mesh from a bezier curve and then add a texture to it. The problem here is that the interpolation points along the curve do not increase linearly, so points farther from the control point (near the endpoints) stretch and those in the bend contract, causing the texture to be uneven across the curve, which can be problematic when using a pattern like stripes on a road. How can I determine how far along the curve the vertices actually are so I can give a proper UV coordinate? EDIT: Allow me to clarify that I'm not talking about the trapezoidal distortion of the roads. That I know is normal and I'm not concerned about. I've updated the image to show more clearly where my concerns are. Interpolating over the curve I get 10 segments, but each of these 10 segments is not spaced at an equal point along the curve, so I have to account for this in assigning UV data to vertices or else the road texture will stretch/shrink depending on how far apart vertices are at that particular part of the curve.
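
    A common fix for this, sketched below under the assumption that the curve is sampled at the same parameter values used to build the mesh: accumulate the chord lengths between successive samples and divide by the total length, which gives each vertex a 0..1 "distance along the curve" value to use as the along-the-road texture coordinate instead of the raw parameter t:

        import math

        def arc_length_uvs(points):
            """points -- ordered (x, y) samples along the curve.

            Returns one value in [0, 1] per point: its share of the total
            arc length, usable as the along-the-curve texture coordinate."""
            lengths = [0.0]
            for (x0, y0), (x1, y1) in zip(points, points[1:]):
                lengths.append(lengths[-1] + math.hypot(x1 - x0, y1 - y0))
            total = lengths[-1]
            return [d / total for d in lengths]

        def quad_bezier(p0, p1, p2, t):
            u = 1.0 - t
            return (u*u*p0[0] + 2*u*t*p1[0] + t*t*p2[0],
                    u*u*p0[1] + 2*u*t*p1[1] + t*t*p2[1])

        samples = [quad_bezier((0, 0), (5, 10), (10, 0), i / 10.0) for i in range(11)]
        print(arc_length_uvs(samples))   # evenly "metered" even though t is unevenly spaced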

    Read the article

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas Tool( D3DXUVAtlasCreate() ). I've succeeded in generating an atlas, however, when I try to render the mesh object using the atlas the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a uv atlas for a cube: struct sVertexPosNormTex { D3DXVECTOR3 vPos, vNorm; D3DXVECTOR2 vUV; sVertexPosNormTex(){} sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv ) { vPos = v; vNorm = n; vUV = uv; } ~sVertexPosNormTex() { } }; // create a light map texture to fill programatically hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &pLightmap ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr ); return hr; } // get the zero level surface from the texture IDirect3DSurface9 *pS = NULL; pLightmap->GetSurfaceLevel( 0, &pS ); // clear surface pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) ); // load a sample mesh DWORD dwcMaterials = 0; LPD3DXBUFFER pMaterialBuffer = NULL; V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice, &pAdjacency, &pMaterialBuffer, NULL, &dwcMaterials, &g_pMesh ) ); // generate adjacency DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ]; g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency ); // create light map coordinates LPD3DXMESH pMesh = NULL; LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL; FLOAT resultStretch = 0; UINT numCharts = 0; hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency, NULL, NULL, NULL, NULL, NULL, 0, &pMesh, &pFacePartitioning, &pVertexRemapArray, &resultStretch, &numCharts ); if( SUCCEEDED( hr ) ) { // release and set mesh SAFE_RELEASE( g_pMesh ); g_pMesh = pMesh; // write mesh to file hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0, ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(), NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr ); } // fill the the light map hr = BuildLightmap( pS, g_pMesh ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr ); } } else { DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr ); } SAFE_RELEASE( pS ); SAFE_DELETE_ARRAY( pdwAdjacency ); SAFE_RELEASE( pFacePartitioning ); SAFE_RELEASE( pVertexRemapArray ); SAFE_RELEASE( pMaterialBuffer ); Here is code to fill lightmap texture: HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh ) { HRESULT hr = S_OK; // validate lightmap texture surface and mesh if( !pS || !pMesh ) return E_POINTER; // lock the mesh vertex buffer sVertexPosNormTex *pV = NULL; pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV ); // lock the mesh index buffer WORD *pI = NULL; pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI ); // get the lightmap texture surface description D3DSURFACE_DESC desc; pS->GetDesc( &desc ); // lock the surface rect to fill with color data D3DLOCKED_RECT rct; hr = pS->LockRect( &rct, NULL, 0 ); if( FAILED( hr ) ) { DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr ); return hr; } // iterate the pixels of the lightmap texture // check each pixel to see if it lies between the uv coordinates of a cube face BYTE *pBuffer = ( BYTE* )rct.pBits; for( UINT y = 0; y < desc.Height; ++y ) { BYTE* pBufferRow = ( BYTE* )pBuffer; for( UINT x = 0; x 
< desc.Width * 4; x+=4 ) { // determine the pixel's uv coordinate D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f, y / ( float )desc.Height + 0.5f / 128.0f ); // for each face of the mesh // check to see if the pixel lies within the face's uv coordinates for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i +=3 ) { sVertexPosNormTex v[ 3 ]; v[ 0 ] = pV[ pI[ i + 0 ] ]; v[ 1 ] = pV[ pI[ i + 1 ] ]; v[ 2 ] = pV[ pI[ i + 2 ] ]; if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) ) { // the pixel lies b/t the uv coordinates of a cube face // light contribution functions aren't needed yet //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos, v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ); //D3DXVECTOR3 vNormal = v[ 0 ].vNorm; // set the color of this pixel red( for demo ) BYTE ba[] = { 0, 0, 255, 255, }; //ComputeContribution( vPos, vNormal, g_sLight, ba ); // copy the byte array into the light map texture memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) ); } } } // go to next line of the texture pBuffer += rct.Pitch; } // unlock the surface rect pS->UnlockRect(); // unlock mesh vertex and index buffers pMesh->UnlockIndexBuffer(); pMesh->UnlockVertexBuffer(); // write the surface to file hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL ); if( FAILED( hr ) ) DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr ); return hr; } bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1, const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p ) { // compute vectors D3DXVECTOR2 v0 = t1 - t0, v1 = t2 - t0, v2 = p - t0; float f00 = D3DXVec2Dot( &v0, &v0 ); float f01 = D3DXVec2Dot( &v0, &v1 ); float f02 = D3DXVec2Dot( &v0, &v2 ); float f11 = D3DXVec2Dot( &v1, &v1 ); float f12 = D3DXVec2Dot( &v1, &v2 ); // Compute barycentric coordinates float invDenom = 1 / ( f00 * f11 - f01 * f01 ); float fU = ( f11 * f02 - f01 * f12 ) * invDenom; float fV = ( f00 * f12 - f01 * f02 ) * invDenom; // Check if point is in triangle if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) ) return true; return false; } Screenshot Lightmap I believe the problem comes from the difference between the lightmap uv coordinates and the pixel center coordinates...for example, here are the lightmap uv coordinates( generated by D3DXUVAtlasCreate() ) for a specific face( tri ) within the mesh, keep in mind that I'm using the mesh uv coordinates to write the pixels for the texture: v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 ); v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 ); v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 ); the lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are: float halfPixel = 0.5 / 128 = 0.00390625; D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel ); will the mapping and sampling of the lightmap texture will require that an offset be taken into account or that the uv coordinates are snapped to the pixel centers..? ...Any ideas on the best way to approach this situation would be appreciated...What are the common practices?
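
    A small sketch of the pixel-center convention being discussed, with the 128 texture size as a parameter: texel i covers the interval [i/size, (i+1)/size) in UV space, so its center sits at (i + 0.5)/size, and mapping a UV value back to the texel that owns it is a floor, not a round:

        def texel_center_uv(x, y, size=128):
            # UV coordinate of the center of texel (x, y) in a size-by-size texture.
            return ((x + 0.5) / size, (y + 0.5) / size)

        def uv_to_texel(u, v, size=128):
            # Texel that a UV coordinate falls into (nearest-neighbour lookup).
            return (int(u * size), int(v * size))

        print(texel_center_uv(0, 0))             # -> (0.00390625, 0.00390625)
        print(uv_to_texel(0.003581, 0.295631))   # the chart corner quoted above -> (0, 37)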

    Read the article

  • When can a freely moving sphere escape from a ‘cage’ defined by a set of impassable coordinates?

    - by RGrey
    Hopefully there are some computational geometry folks here who can help me out with the following problem. Please imagine that I take a freely moving ball in 3-space and create a 'cage' around it by defining a set of impassable coordinates, Sc (i.e. points in 3-space that no part of the diffusing ball is allowed to overlap). These points reside within the volume, V(cage), of some larger sphere, where V(cage) > V(ball). Given the set of impassable coordinates, Sc, is there a computationally efficient and/or nice way to determine whether the ball can ever escape the cage?

    Read the article

  • CodeIgniter Image Library: cropping an image with 2 sets of coordinates to make a square anywhere?

    - by chris
    Spending some time reading through the docs and searching for examples, I understand that cropping an image from top 0 and left 0 is pretty straightforward. However, I would like to pass 2 sets of coordinates, a starting point and an ending point: four points, a square that is defined anywhere. From the examples I am finding, and from what I gather, the CodeIgniter image library is not going to let me do this. So I am seeking confirmation of this thought: is it true that I can only provide end points from 0, and it crops a square based on said end points, or can I actually specify an x start, an x end, and similar for y? Or is there some other technique I can use within CodeIgniter that will allow me to pass coordinates for starting and ending points?

    Read the article

  • Is there a way to make Google store coordinates in the Android map?

    - by Pavel
    Hi. I'm in the middle of developing an Android app using Google Maps. My question is: how do I store coordinates with the Google map and display them later on? I know how to use the canvas, how to draw on it, etc., but I want to store those coordinates with the Google map, just like with the Google Maps feature on the internet: when I log in, I can see the points I've put on the map straight away. Can someone please tell me how to do that? There must be a way. Any answers greatly appreciated!

    Read the article

  • Decal implementation

    - by dreta
    I had issues finding information about decals, so maybe this question will help others. The implementation is for a forward renderer. Could somebody confirm whether I've got the decal implementation right? You define a cube of any dimension that defines the projection volume in common space. You check for triangle intersection with the defined cube to receive the triangles that the projection will affect. You clip these triangles and save them. You then use matrix tricks to calculate UV coordinates for the saved triangles that reference the texture you're projecting. To do this you take the vectors representing the height, width and depth of the cube in common space, so that e.g. the bottom left corner is the origin. You put those in a matrix as the i, j, k unit vectors, set the translation for the cube, then invert this matrix. You multiply the vertices of the saved triangles by this matrix; that way you get their coordinates inside a 0-to-1-sized cube that you use as the UV coordinates. This way you have the original triangles you're projecting onto, and you have UV coordinates for them (the UV coordinates reference the texture you're projecting). Then you re-render the saved triangles onto the scene and they overwrite the area of projection with the projected image. Now the questions that I couldn't find answers for: Is the last point right? I've never done software clipping, but it seems error-prone enough, due to limited precision, that there'll be some z-fighting occurring for the projected texture. Also, is the way of getting the UV coordinates correct?
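
    For reference, a small sketch of the matrix step described above, assuming the cube is given by one corner plus its three edge vectors; rather than building the 4x4 matrix and inverting it explicitly, the sketch solves the equivalent linear system:

        import numpy as np

        def decal_uvw(corner, width_vec, height_vec, depth_vec, vertex):
            """Coordinates of `vertex` inside the projection cube.

            Returns values in [0, 1]^3 for points inside the cube; two of them
            are the decal UVs, the third can drive clipping or a depth fade.
            """
            # Columns are the cube's edge vectors (the i, j, k of the text),
            # the corner is the translation; applying the inverse is a solve.
            m = np.column_stack((width_vec, height_vec, depth_vec)).astype(float)
            return np.linalg.solve(m, np.asarray(vertex, float) - np.asarray(corner, float))

        # e.g. a 4 x 2 x 2 cube whose origin corner sits at (10, 0, 5)
        print(decal_uvw((10, 0, 5), (4, 0, 0), (0, 2, 0), (0, 0, 2), (12, 1, 6)))
        # -> [0.5 0.5 0.5]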

    Read the article

  • Split up a screen into regions

    - by nexen
    My task: I want to split up the screen into 3 regions: a buffs bar (with picked items), score info, and a game map. It doesn't matter whether the regions intersect each other or not. For example: I have a screen with width=1, height=1 and the origin of coordinates (0;0) at the left bottom point. I have 3 functions: draw items, draw info, draw map. If I use them without any matrix transformations, each draws fullscreen, because its vertex coordinates run from 0;0 to 1;1. (pseudo-code)

        drawItems();
        drawInfo();
        drawMap();

    And after that I see only the map, on top of the info, on top of the items. My goal: I have some matrices for transforming vertices with 0;0-1;1 coordinates into strict regions. There is only one thing I need to do: set the matrix before drawing. So my call of the drawItems function looks like this: (pseudo-code)

        adjustViewMatrixes_andSomethingElse(items.position_of_the_region_there_it_should_be_drawn, items.sizes_of_region_to_draw);
        setItemsMatrix();
        drawItems(); // the same function with vertex coordinates 0;0->1;1,
                     // but it draws in other coordinates,
                     // because I have just set the matrix for the region

    I know only some people will understand me, so there is a picture with the regions I need to make. Every region has 0;0 - 1;1 inner coordinates.
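
    A minimal sketch of the per-region transform being asked for, ignoring the rendering API: each region is a scale by its size followed by a translation to its position, applied to the 0..1 vertex coordinates before drawing (with OpenGL the same effect is often achieved by simply setting glViewport to the region instead):

        def region_matrix(x, y, w, h):
            # Row-major 3x3 matrix mapping the unit square onto the screen region
            # with bottom-left corner (x, y) and size (w, h), in 0..1 screen space.
            return [[w,   0.0, x],
                    [0.0, h,   y],
                    [0.0, 0.0, 1.0]]

        def transform(m, px, py):
            # Apply the matrix to a point given in 0..1 region-local coordinates.
            return (m[0][0] * px + m[0][1] * py + m[0][2],
                    m[1][0] * px + m[1][1] * py + m[1][2])

        # e.g. a map region filling the right 70% x bottom 80% of the screen
        map_m = region_matrix(0.3, 0.0, 0.7, 0.8)
        print(transform(map_m, 0.0, 0.0), transform(map_m, 1.0, 1.0))  # (0.3, 0.0) (1.0, 0.8)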

    Read the article
