Search Results

Search found 25518 results on 1021 pages for 'iterative development'.


  • How to determine where on a path my object will be at a given point in time?

    - by Dave
    I have a map and an object that is meant to move from start to end in X amount of time. The movements are all straight lines, as curves are beyond my ability at the moment. So I am trying to get the object to move between these points, but along the way there are waypoints which keep it on a given path. The speed of the object is determined by how long it will take to get from start to end (based on X). This is what I have so far: //get_now() returns seconds since epoch var timepassed = get_now() - myObj[id].start; //seconds since epoch for departure var timeleft = myObj[id].end - get_now(); //seconds since epoch for arrival var journey_time = 60; //this means 60 minutes total journey time var array = [[650,250]]; //way points along the straight paths if(step == 0 || step <= array.length){ var destinationx = array[step][0]; var destinationy = array[step][1]; }else if( step == array.length){ var destinationx = 250; var destinationy = 100; } else { var destinationx = myObj[id].startx; var destinationy = myObj[id].starty; } step++; When the user logs in at any given time, the object needs to be drawn in the correct place on the path, almost as if it had been travelling along the path while the user was away from the PC, using the available information I have above. How do I do this? Note: The camera angle in the game is a bird's eye view, so it's straightforward X:Y rather than isometric angles.
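
    A minimal sketch of one way to answer this, written in Python for brevity and with made-up coordinates: convert the elapsed time into a distance along the whole waypoint path, then walk the segments until the remaining distance falls inside one of them and interpolate.

    # Sketch: where is the object after `elapsed` seconds of a journey that takes
    # `journey_time` seconds in total? Waypoints and times below are hypothetical.
    import math

    def position_at(elapsed, journey_time, points):
        # points = [start, waypoint1, ..., end], each an (x, y) pair
        seg_lengths = [math.dist(a, b) for a, b in zip(points, points[1:])]
        total = sum(seg_lengths)
        travelled = max(0.0, min(1.0, elapsed / journey_time)) * total
        for (ax, ay), (bx, by), length in zip(points, points[1:], seg_lengths):
            if length > 0 and travelled <= length:
                t = travelled / length
                return (ax + (bx - ax) * t, ay + (by - ay) * t)
            travelled -= length
        return points[-1]                     # journey already finished

    # e.g. 10 minutes into a 60-minute trip through one waypoint
    print(position_at(10 * 60, 60 * 60, [(100, 400), (650, 250), (250, 100)]))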

    Read the article

  • Ouya / Android: bitwise button mapping

    - by scorvi
    I am programming a game with the Gameplay3D engine, but the Android side has no gamepad support, and that is what I need to port my game to the Ouya. So I implemented simple gamepad support of my own, handling 2 gamepads, and I store the button states in a float array for every gamepad. The Gameplay3D engine, however, saves its button states in an unsigned int _buttons variable, which is set with bitwise operations, and I have no clue how to translate my array to this.
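
    A rough sketch of the general idea, in Python and independent of the engine: pack the per-button array into an integer with one bit per button, and read single buttons back out with a mask. Which bit the engine expects for which button is an assumption here and would need to be taken from the Gameplay3D headers.

    # Sketch: pack per-button values into one bitmask, and query single buttons.
    # The bit index used for each button is a placeholder, not the engine's real mapping.

    def pack_buttons(states, threshold=0.5):
        mask = 0
        for bit, value in enumerate(states):  # states: one float (or bool) per button
            if value > threshold:
                mask |= 1 << bit              # set this button's bit
        return mask

    def is_button_down(mask, bit):
        return (mask & (1 << bit)) != 0       # test one button's bit

    snapshot = [0.0, 1.0, 0.0, 1.0]           # hypothetical gamepad state
    mask = pack_buttons(snapshot)             # 0b1010
    print(mask, is_button_down(mask, 1), is_button_down(mask, 2))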

    Read the article

  • Lighting-Reflectance Models & Licensing Issues

    - by codey
    Generally, or specifically, is there any licensing issue with using any of the well known lighting/reflectance models (i.e. the BRDFs or other distribution or approximation functions): Phong, Blinn–Phong, Cook–Torrance, Blinn-Torrance-Sparrow, Lambert, Minnaert, Oren–Nayar, Ward, Strauss, Ashikhmin-Shirley and common modifications where applicable, such as: Beckmann distribution, Blinn distribution, Schlick's approximation, etc. in your shader code utilised in a commercial product? Or is it a non-issue?

    Read the article

  • 3D BSP rendering for maps made in 2d platform style

    - by Dev Joy
    I wish to render a 3D map which is always seen from the top; the camera is in the sky, always looking down at the earth. Sample of a floor layout: I don't think I need complex structures like BSP trees to render it. I mean, I can divide the map into a grid and render the cells like it's done in 2D platform games. I just want to know if this is a good idea and what may go wrong if I don't choose BSP tree rendering here. Please also mention if any better-known rendering techniques are available for such situations.
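
    For reference, a rough sketch (Python, hypothetical numbers) of the grid approach mentioned above: with a fixed top-down camera, the set of tiles to draw is just the rectangle of tile indices covered by the camera's view rectangle.

    # Sketch: which tiles does a top-down camera see? Just a rectangle of indices.
    def visible_tiles(cam_x, cam_y, view_w, view_h, tile_size, grid_w, grid_h):
        x0 = max(0, int(cam_x // tile_size))
        y0 = max(0, int(cam_y // tile_size))
        x1 = min(grid_w - 1, int((cam_x + view_w) // tile_size))
        y1 = min(grid_h - 1, int((cam_y + view_h) // tile_size))
        return [(x, y) for y in range(y0, y1 + 1) for x in range(x0, x1 + 1)]

    # e.g. an 800x600 view over 32-pixel tiles: only these tiles get drawn this frame
    print(len(visible_tiles(96, 64, 800, 600, 32, 100, 100)))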

    Read the article

  • How to refactor and improve this XNA mouse input code?

    - by Andrew Price
    Currently I have something like this: public bool IsLeftMouseButtonDown() { return currentMouseState.LeftButton == ButtonState.Pressed && previousMouseState.LeftButton == ButtonState.Pressed; } public bool IsLeftMouseButtonPressed() { return currentMouseState.LeftButton == ButtonState.Pressed && previousMouseState.LeftButton == ButtonState.Released; } public bool IsLeftMouseButtonUp() { return currentMouseState.LeftButton == ButtonState.Released && previousMouseState.LeftButton == ButtonState.Released; } public bool IsLeftMouseButtonReleased() { return currentMouseState.LeftButton == ButtonState.Released && previousMouseState.LeftButton == ButtonState.Pressed; } This is fine. In fact, I kind of like it. However, I'd hate to have to repeat this same code for the other buttons (right, middle, X1, X2). Is there any way to pass in the button I want to the function so I could have something like this? public bool IsMouseButtonDown(MouseButton button) { return currentMouseState.IsPressed(button) && previousMouseState.IsPressed(button); } public bool IsMouseButtonPressed(MouseButton button) { return currentMouseState.IsPressed(button) && !previousMouseState.IsPressed(button); } public bool IsMouseButtonUp(MouseButton button) { return !currentMouseState.IsPressed(button) && !previousMouseState.IsPressed(button); } public bool IsMouseButtonReleased(MouseButton button) { return !currentMouseState.IsPressed(button) && previousMouseState.IsPressed(button); } I suppose I could create some custom enumeration and switch on it in each function, but I'd like to first see if there is a built-in solution or a better way. Thanks!
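
    One way to sketch the pattern being asked for, shown here in Python rather than C# since it is language-agnostic: map each value of a button enumeration to a small accessor that pulls that button out of a mouse-state snapshot, so the Down/Pressed/Up/Released logic is written only once. The MouseState shape below is hypothetical.

    # Sketch: one accessor per button, so each state check is written only once.
    from enum import Enum

    class MouseButton(Enum):
        LEFT = 0
        RIGHT = 1
        MIDDLE = 2
        X1 = 3
        X2 = 4

    # In XNA these lambdas would read e.g. state.LeftButton == ButtonState.Pressed.
    IS_PRESSED = {
        MouseButton.LEFT:   lambda s: s["left"],
        MouseButton.RIGHT:  lambda s: s["right"],
        MouseButton.MIDDLE: lambda s: s["middle"],
        MouseButton.X1:     lambda s: s["x1"],
        MouseButton.X2:     lambda s: s["x2"],
    }

    def is_button_down(current, previous, button):
        return IS_PRESSED[button](current) and IS_PRESSED[button](previous)

    def is_button_pressed(current, previous, button):
        return IS_PRESSED[button](current) and not IS_PRESSED[button](previous)

    cur = {"left": True, "right": False, "middle": False, "x1": False, "x2": False}
    prev = {"left": False, "right": False, "middle": False, "x1": False, "x2": False}
    print(is_button_pressed(cur, prev, MouseButton.LEFT))   # True: went down this frame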

    Read the article

  • Animation file format

    - by Paul
    I'm trying to make a simple 2D animation file format. It'll be very rudimentary: only an XML file containing some parameters (such as frame duration) and metadata, and some images, each representing a frame. I'd like to have the whole animation (frames and XML document) packed in a single file. How do you suggest I do that? What libraries are there that would allow easy access to the files inside the animation file itself? The language I'm using is C++ and the platform is Windows, but I'd rather not use a platform dependent library, if possible.
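
    One common approach, sketched below in Python for brevity, is to make the animation file an ordinary zip archive holding animation.xml plus one image per frame; in C++ a library such as minizip or libzip could play the same role (that pairing is a suggestion, not something prescribed by the question). The entry names are made up.

    # Sketch: the animation "file" is a zip archive; names inside it are hypothetical.
    import zipfile

    def write_animation(path, xml_text, frame_paths):
        with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as z:
            z.writestr("animation.xml", xml_text)        # parameters and metadata
            for i, frame in enumerate(frame_paths):
                z.write(frame, f"frames/{i:04d}.png")    # one entry per frame image

    def read_animation(path):
        with zipfile.ZipFile(path) as z:
            xml_text = z.read("animation.xml").decode("utf-8")
            frames = [z.read(name) for name in sorted(z.namelist())
                      if name.startswith("frames/")]
        return xml_text, frames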

    Read the article

  • (LWJGL) Pixel Unpack Buffer Object is Disabled? (glTexImage2D)

    - by OstlerDev
    I am trying to create a render target for my game so that I can re-render at a different screen size. But I am receiving the following error: Exception in thread "main" org.lwjgl.opengl.OpenGLException: Cannot use offsets when Pixel Unpack Buffer Object is disabled Here is the source code for my Render method: // clear screen GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT); // Start FBO Rendering Code // The framebuffer, which regroups 0, 1, or more textures, and 0 or 1 depth buffer. int FramebufferName = GL30.glGenFramebuffers(); GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, FramebufferName); // The texture we're going to render to int renderedTexture = glGenTextures(); // "Bind" the newly created texture : all future texture functions will modify this texture glBindTexture(GL_TEXTURE_2D, renderedTexture); // Give an empty image to OpenGL ( the last "0" ) glTexImage2D(GL_TEXTURE_2D, 0,GL_RGB, 1024, 768, 0,GL_RGB, GL_UNSIGNED_BYTE, 0); // Poor filtering. Needed ! glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // Set "renderedTexture" as our colour attachement #0 GL32.glFramebufferTexture(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0, renderedTexture, 0); // Set the list of draw buffers. IntBuffer drawBuffer = BufferUtils.createIntBuffer(20 * 20); GL20.glDrawBuffers(drawBuffer); // Always check that our framebuffer is ok if(GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) != GL30.GL_FRAMEBUFFER_COMPLETE){ System.out.println("Framebuffer was not created successfully! Exiting!"); return; } // Resets the current viewport GL11.glViewport(0, 0, scaleWidth*scale, scaleHeight*scale); GL11.glMatrixMode(GL11.GL_MODELVIEW); GL11.glLoadIdentity(); // let subsystem paint if (callback != null) { callback.frameRendering(); } // update window contents Display.update(); It is crashing on this line: glTexImage2D(GL_TEXTURE_2D, 0,GL_RGB, 1024, 768, 0,GL_RGB, GL_UNSIGNED_BYTE, 0); I am not really sure why it is crashing and looking around I have not been able to find out why. Any help or insight would be greatly welcome.

    Read the article

  • 2D game collision response: SAT & minimum displacement along a given axis?

    - by Archagon
    I'm trying to implement a collision system in a 2D game I'm making. The separating axis theorem (as described by metanet's collision tutorial) seems like an efficient and robust way of handling collision detection, but I don't quite like the collision response method they use. By blindly displacing along the axis of least overlap, the algorithm simply ignores the previous position of the moving object, which means that it doesn't collide with the stationary object so much as it enters it and then bounces out. Here's an example of a situation where this would matter: According to the SAT method described above, the rectangle would simply pop out of the triangle perpendicular to its hypotenuse: However, realistically, the rectangle should stop at the lower right corner of the triangle, as that would be the point of first collision if it were moving continuously along its displacement vector: Now, this might not actually matter during gameplay, but I'd love to know if there's a way of efficiently and generally attaining accurate displacements in this manner. I've been racking my brains over it for the past few days, and I don't want to give up yet! (Cross-posted from StackOverflow, hope that's not against the rules!)
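
    For reference, a minimal sketch (Python, polygons as lists of points) of the "swept" variant of SAT: keep the axis projections, but also project the relative velocity onto each axis and compute when the intervals start and stop overlapping; the latest entry time over all axes is the first time of impact, and the axis it came from is the contact normal.

    import math

    def edge_normals(poly):
        # one perpendicular per edge; for SAT the normal's sign does not matter
        return [(-(y2 - y1), x2 - x1)
                for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1])]

    def project(poly, axis):
        dots = [px * axis[0] + py * axis[1] for px, py in poly]
        return min(dots), max(dots)

    def swept_sat(moving, velocity, static):
        # returns (time_of_impact in [0, 1], contact_axis) or None;
        # contact_axis is None if the shapes already overlap at t = 0
        t_enter, t_exit, hit_axis = 0.0, 1.0, None
        for ax, ay in edge_normals(moving) + edge_normals(static):
            length = math.hypot(ax, ay)
            ax, ay = ax / length, ay / length
            min_a, max_a = project(moving, (ax, ay))
            min_b, max_b = project(static, (ax, ay))
            speed = velocity[0] * ax + velocity[1] * ay
            if abs(speed) < 1e-9:
                if max_a < min_b or max_b < min_a:
                    return None              # separated and not closing on this axis
                continue
            t0 = (min_b - max_a) / speed     # overlap starts on this axis
            t1 = (max_b - min_a) / speed     # overlap ends on this axis
            if t0 > t1:
                t0, t1 = t1, t0
            if t0 > t_enter:
                t_enter, hit_axis = t0, (ax, ay)
            t_exit = min(t_exit, t1)
            if t_enter > t_exit or t_enter > 1.0:
                return None                  # the axes never overlap at the same time
        return t_enter, hit_axis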

    Read the article

  • Pathfinding results in false path costs that are too high

    - by user2144536
    I'm trying to implement pathfinding in a game I'm programming using this method. I'm implementing it with recursion but some of the values after the immediate circle of tiles around the player are way off. For some reason I cannot find the problem with it. This is a screen cap of the problem: The pathfinding values are displayed in the center of every tile. Clipped blocks are displayed with the value of 'c' because the values were too high and were covering up the next value. The red circle is the first value that is incorrect. The code below is the recursive method. //tileX is the coordinates of the current tile, val is the current pathfinding value, used[][] is a boolean //array to keep track of which tiles' values have already been assigned public void pathFind(int tileX, int tileY, int val, boolean[][] used) { //increment pathfinding value int curVal = val + 1; //set current tile to true if it hasn't been already used[tileX][tileY] = true; //booleans to know which tiles the recursive call needs to be used on boolean topLeftUsed = false, topUsed = false, topRightUsed = false, leftUsed = false, rightUsed = false, botomLeftUsed = false, botomUsed = false, botomRightUsed = false; //set value of top left tile if necessary if(tileX - 1 >= 0 && tileY - 1 >= 0) { //isClipped(int x, int y) returns true if the coordinates givin are in a tile that can't be walked through (IE walls) //occupied[][] is an array that keeps track of which tiles have an enemy in them // //if the tile is not clipped and not occupied set the pathfinding value if(isClipped((tileX - 1) * 50 + 25, (tileY - 1) * 50 + 25) == false && occupied[tileX - 1][tileY - 1] == false && !(used[tileX - 1][tileY - 1])) { pathFindingValues[tileX - 1][tileY - 1] = curVal; topLeftUsed = true; used[tileX - 1][tileY - 1] = true; } //if it is occupied set it to an arbitrary high number so enemies find alternate routes if the best is clogged if(occupied[tileX - 1][tileY - 1] == true) pathFindingValues[tileX - 1][tileY - 1] = 1000000000; //if it is clipped set it to an arbitrary higher number so enemies don't travel through walls if(isClipped((tileX - 1) * 50 + 25, (tileY - 1) * 50 + 25) == true) pathFindingValues[tileX - 1][tileY - 1] = 2000000000; } //top middle if(tileY - 1 >= 0 ) { if(isClipped(tileX * 50 + 25, (tileY - 1) * 50 + 25) == false && occupied[tileX][tileY - 1] == false && !(used[tileX][tileY - 1])) { pathFindingValues[tileX][tileY - 1] = curVal; topUsed = true; used[tileX][tileY - 1] = true; } if(occupied[tileX][tileY - 1] == true) pathFindingValues[tileX][tileY - 1] = 1000000000; if(isClipped(tileX * 50 + 25, (tileY - 1) * 50 + 25) == true) pathFindingValues[tileX][tileY - 1] = 2000000000; } //top right if(tileX + 1 <= used.length && tileY - 1 >= 0) { if(isClipped((tileX + 1) * 50 + 25, (tileY - 1) * 50 + 25) == false && occupied[tileX + 1][tileY - 1] == false && !(used[tileX + 1][tileY - 1])) { pathFindingValues[tileX + 1][tileY - 1] = curVal; topRightUsed = true; used[tileX + 1][tileY - 1] = true; } if(occupied[tileX + 1][tileY - 1] == true) pathFindingValues[tileX + 1][tileY - 1] = 1000000000; if(isClipped((tileX + 1) * 50 + 25, (tileY - 1) * 50 + 25) == true) pathFindingValues[tileX + 1][tileY - 1] = 2000000000; } //left if(tileX - 1 >= 0) { if(isClipped((tileX - 1) * 50 + 25, (tileY) * 50 + 25) == false && occupied[tileX - 1][tileY] == false && !(used[tileX - 1][tileY])) { pathFindingValues[tileX - 1][tileY] = curVal; leftUsed = true; used[tileX - 1][tileY] = true; } if(occupied[tileX - 1][tileY] == true) 
pathFindingValues[tileX - 1][tileY] = 1000000000; if(isClipped((tileX - 1) * 50 + 25, (tileY) * 50 + 25) == true) pathFindingValues[tileX - 1][tileY] = 2000000000; } //right if(tileX + 1 <= used.length) { if(isClipped((tileX + 1) * 50 + 25, (tileY) * 50 + 25) == false && occupied[tileX + 1][tileY] == false && !(used[tileX + 1][tileY])) { pathFindingValues[tileX + 1][tileY] = curVal; rightUsed = true; used[tileX + 1][tileY] = true; } if(occupied[tileX + 1][tileY] == true) pathFindingValues[tileX + 1][tileY] = 1000000000; if(isClipped((tileX + 1) * 50 + 25, (tileY) * 50 + 25) == true) pathFindingValues[tileX + 1][tileY] = 2000000000; } //botom left if(tileX - 1 >= 0 && tileY + 1 <= used[0].length) { if(isClipped((tileX - 1) * 50 + 25, (tileY + 1) * 50 + 25) == false && occupied[tileX - 1][tileY + 1] == false && !(used[tileX - 1][tileY + 1])) { pathFindingValues[tileX - 1][tileY + 1] = curVal; botomLeftUsed = true; used[tileX - 1][tileY + 1] = true; } if(occupied[tileX - 1][tileY + 1] == true) pathFindingValues[tileX - 1][tileY + 1] = 1000000000; if(isClipped((tileX - 1) * 50 + 25, (tileY + 1) * 50 + 25) == true) pathFindingValues[tileX - 1][tileY + 1] = 2000000000; } //botom middle if(tileY + 1 <= used[0].length) { if(isClipped((tileX) * 50 + 25, (tileY + 1) * 50 + 25) == false && occupied[tileX][tileY + 1] == false && !(used[tileX][tileY + 1])) { pathFindingValues[tileX][tileY + 1] = curVal; botomUsed = true; used[tileX][tileY + 1] = true; } if(occupied[tileX][tileY + 1] == true) pathFindingValues[tileX][tileY + 1] = 1000000000; if(isClipped((tileX) * 50 + 25, (tileY + 1) * 50 + 25) == true) pathFindingValues[tileX][tileY + 1] = 2000000000; } //botom right if(tileX + 1 <= used.length && tileY + 1 <= used[0].length) { if(isClipped((tileX + 1) * 50 + 25, (tileY + 1) * 50 + 25) == false && occupied[tileX + 1][tileY + 1] == false && !(used[tileX + 1][tileY + 1])) { pathFindingValues[tileX + 1][tileY + 1] = curVal; botomRightUsed = true; used[tileX + 1][tileY + 1] = true; } if(occupied[tileX + 1][tileY + 1] == true) pathFindingValues[tileX + 1][tileY + 1] = 1000000000; if(isClipped((tileX + 1) * 50 + 25, (tileY + 1) * 50 + 25) == true) pathFindingValues[tileX + 1][tileY + 1] = 2000000000; } //call the method on the tiles that need it if(tileX - 1 >= 0 && tileY - 1 >= 0 && topLeftUsed) pathFind(tileX - 1, tileY - 1, curVal, used); if(tileY - 1 >= 0 && topUsed) pathFind(tileX , tileY - 1, curVal, used); if(tileX + 1 <= used.length && tileY - 1 >= 0 && topRightUsed) pathFind(tileX + 1, tileY - 1, curVal, used); if(tileX - 1 >= 0 && leftUsed) pathFind(tileX - 1, tileY, curVal, used); if(tileX + 1 <= used.length && rightUsed) pathFind(tileX + 1, tileY, curVal, used); if(tileX - 1 >= 0 && tileY + 1 <= used[0].length && botomLeftUsed) pathFind(tileX - 1, tileY + 1, curVal, used); if(tileY + 1 <= used[0].length && botomUsed) pathFind(tileX, tileY + 1, curVal, used); if(tileX + 1 <= used.length && tileY + 1 <= used[0].length && botomRightUsed) pathFind(tileX + 1, tileY + 1, curVal, used); }
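
    For what it's worth, this kind of distance map is usually built with an iterative breadth-first flood fill from the player's tile rather than with recursion; a queue guarantees every tile is first reached with its smallest possible value, which is exactly the property the recursive version loses. A rough sketch, assuming a walkable[x][y] grid:

    # Sketch: breadth-first flood fill producing a path-finding value for every tile.
    # walkable[x][y] is True for tiles an enemy may enter; blocked tiles keep a sentinel.
    from collections import deque

    BLOCKED = 2_000_000_000

    def build_distance_map(walkable, player_x, player_y):
        w, h = len(walkable), len(walkable[0])
        dist = [[BLOCKED] * h for _ in range(w)]
        dist[player_x][player_y] = 0
        queue = deque([(player_x, player_y)])
        while queue:
            x, y = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nx, ny = x + dx, y + dy
                    if (dx or dy) and 0 <= nx < w and 0 <= ny < h \
                            and walkable[nx][ny] and dist[nx][ny] == BLOCKED:
                        dist[nx][ny] = dist[x][y] + 1   # first visit is the shortest
                        queue.append((nx, ny))
        return dist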

    Read the article

  • Keeping the camera from going through walls in a first person game in Unity?

    - by Timothy Williams
    I'm using a modified version of the standard Unity First Person Controller. At the moment when I stand near walls, the camera clips through and lets me see through the wall. I know about camera occlusion and have implemented it in 3rd person games, but I have no clue how I'd accomplish this in a first person game, since the camera doesn't move from the player at all. How do other people accomplish this?

    Read the article

  • Is there a simpler way to create a borderless window with XNA 4.0?

    - by Cypher
    When looking into making my XNA game's window border-less, I found no properties or methods under Game.Window that would provide this, but I did find a window handle to the form. I was able to accomplish what I wanted by doing this: IntPtr hWnd = this.Window.Handle; var control = System.Windows.Forms.Control.FromHandle( hWnd ); var form = control.FindForm(); form.FormBorderStyle = System.Windows.Forms.FormBorderStyle.None; I don't know why but this feels like a dirty hack. Is there a built-in way to do this in XNA that I'm missing?

    Read the article

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas Tool( D3DXUVAtlasCreate() ). I've succeeded in generating an atlas, however, when I try to render the mesh object using the atlas the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a uv atlas for a cube: struct sVertexPosNormTex { D3DXVECTOR3 vPos, vNorm; D3DXVECTOR2 vUV; sVertexPosNormTex(){} sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv ) { vPos = v; vNorm = n; vUV = uv; } ~sVertexPosNormTex() { } }; // create a light map texture to fill programatically hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &pLightmap ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr ); return hr; } // get the zero level surface from the texture IDirect3DSurface9 *pS = NULL; pLightmap->GetSurfaceLevel( 0, &pS ); // clear surface pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) ); // load a sample mesh DWORD dwcMaterials = 0; LPD3DXBUFFER pMaterialBuffer = NULL; V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice, &pAdjacency, &pMaterialBuffer, NULL, &dwcMaterials, &g_pMesh ) ); // generate adjacency DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ]; g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency ); // create light map coordinates LPD3DXMESH pMesh = NULL; LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL; FLOAT resultStretch = 0; UINT numCharts = 0; hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency, NULL, NULL, NULL, NULL, NULL, 0, &pMesh, &pFacePartitioning, &pVertexRemapArray, &resultStretch, &numCharts ); if( SUCCEEDED( hr ) ) { // release and set mesh SAFE_RELEASE( g_pMesh ); g_pMesh = pMesh; // write mesh to file hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0, ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(), NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr ); } // fill the the light map hr = BuildLightmap( pS, g_pMesh ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr ); } } else { DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr ); } SAFE_RELEASE( pS ); SAFE_DELETE_ARRAY( pdwAdjacency ); SAFE_RELEASE( pFacePartitioning ); SAFE_RELEASE( pVertexRemapArray ); SAFE_RELEASE( pMaterialBuffer ); Here is code to fill lightmap texture: HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh ) { HRESULT hr = S_OK; // validate lightmap texture surface and mesh if( !pS || !pMesh ) return E_POINTER; // lock the mesh vertex buffer sVertexPosNormTex *pV = NULL; pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV ); // lock the mesh index buffer WORD *pI = NULL; pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI ); // get the lightmap texture surface description D3DSURFACE_DESC desc; pS->GetDesc( &desc ); // lock the surface rect to fill with color data D3DLOCKED_RECT rct; hr = pS->LockRect( &rct, NULL, 0 ); if( FAILED( hr ) ) { DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr ); return hr; } // iterate the pixels of the lightmap texture // check each pixel to see if it lies between the uv coordinates of a cube face BYTE *pBuffer = ( BYTE* )rct.pBits; for( UINT y = 0; y < desc.Height; ++y ) { BYTE* pBufferRow = ( BYTE* )pBuffer; for( UINT x = 0; x 
< desc.Width * 4; x+=4 ) { // determine the pixel's uv coordinate D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f, y / ( float )desc.Height + 0.5f / 128.0f ); // for each face of the mesh // check to see if the pixel lies within the face's uv coordinates for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i +=3 ) { sVertexPosNormTex v[ 3 ]; v[ 0 ] = pV[ pI[ i + 0 ] ]; v[ 1 ] = pV[ pI[ i + 1 ] ]; v[ 2 ] = pV[ pI[ i + 2 ] ]; if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) ) { // the pixel lies b/t the uv coordinates of a cube face // light contribution functions aren't needed yet //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos, v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ); //D3DXVECTOR3 vNormal = v[ 0 ].vNorm; // set the color of this pixel red( for demo ) BYTE ba[] = { 0, 0, 255, 255, }; //ComputeContribution( vPos, vNormal, g_sLight, ba ); // copy the byte array into the light map texture memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) ); } } } // go to next line of the texture pBuffer += rct.Pitch; } // unlock the surface rect pS->UnlockRect(); // unlock mesh vertex and index buffers pMesh->UnlockIndexBuffer(); pMesh->UnlockVertexBuffer(); // write the surface to file hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL ); if( FAILED( hr ) ) DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr ); return hr; } bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1, const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p ) { // compute vectors D3DXVECTOR2 v0 = t1 - t0, v1 = t2 - t0, v2 = p - t0; float f00 = D3DXVec2Dot( &v0, &v0 ); float f01 = D3DXVec2Dot( &v0, &v1 ); float f02 = D3DXVec2Dot( &v0, &v2 ); float f11 = D3DXVec2Dot( &v1, &v1 ); float f12 = D3DXVec2Dot( &v1, &v2 ); // Compute barycentric coordinates float invDenom = 1 / ( f00 * f11 - f01 * f01 ); float fU = ( f11 * f02 - f01 * f12 ) * invDenom; float fV = ( f00 * f12 - f01 * f02 ) * invDenom; // Check if point is in triangle if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) ) return true; return false; } Screenshot Lightmap I believe the problem comes from the difference between the lightmap uv coordinates and the pixel center coordinates...for example, here are the lightmap uv coordinates( generated by D3DXUVAtlasCreate() ) for a specific face( tri ) within the mesh, keep in mind that I'm using the mesh uv coordinates to write the pixels for the texture: v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 ); v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 ); v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 ); the lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are: float halfPixel = 0.5 / 128 = 0.00390625; D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel ); will the mapping and sampling of the lightmap texture will require that an offset be taken into account or that the uv coordinates are snapped to the pixel centers..? ...Any ideas on the best way to approach this situation would be appreciated...What are the common practices?
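
    As a general note on common practice (not specific to D3DX): lightmap seams are usually hidden by sampling at texel centers, u = (x + 0.5) / width, and by dilating each chart so its border colors bleed a few texels into the unused gutter before the texture is sampled with bilinear filtering. A rough sketch of one dilation pass, in Python over a plain pixel grid rather than the D3DX surface itself:

    # Sketch: one dilation pass; `filled[x][y]` marks texels some chart actually wrote.
    # Empty texels that touch a filled neighbour copy its color (and count as filled
    # on the next pass), widening each chart into the gutter.
    def dilate(pixels, filled, width, height):
        new_pixels = [row[:] for row in pixels]
        new_filled = [row[:] for row in filled]
        for x in range(width):
            for y in range(height):
                if filled[x][y]:
                    continue
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < width and 0 <= ny < height and filled[nx][ny]:
                        new_pixels[x][y] = pixels[nx][ny]
                        new_filled[x][y] = True
                        break
        return new_pixels, new_filled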

    Read the article

  • Unity-Animation parameters are not being set

    - by user1814893
    I have the following animation controller, with two parameters, walkingSpeed and Jump. I have the following code which should change the values: animator.SetFloat("walkingSpeed",0.9f); animator.SetBool("Jump",true); and animator is the correctly referenced Animator object. However, the values that the parameters are set to do not appear to change in the Animator window, nor do they appear to affect what is happening on the screen. They do, however, seem to affect the values obtained when doing the following: animator.GetFloat("walkingSpeed"); The animator consists of the shown blend tree, which works correctly and is always active; however, because the values are not being changed it does not blend, and always acts as if walkingSpeed were 0. What is going on?

    Read the article

  • How do I get my character to move after adding it to a JFrame?

    - by A.K.
    So this is kind of a follow up on my other JPanel question that got resolved by playing around with the Layout... Now my MouseListener allows me to add a new Board(); object from its class, which is the actual game map and animator itself. But since my Board() takes Key Events from a Player Object inside the Board Class, I'm not sure if they are being started. Here's my Frame Class, where SideScroller S is the player object: package OurPackage; //Made By A.K. 5/24/12 //Contains Frame. import java.awt.BorderLayout; import java.awt.Button; import java.awt.CardLayout; import java.awt.Color; import java.awt.Container; import java.awt.Dimension; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.GridBagLayout; import java.awt.GridLayout; import java.awt.Image; import java.awt.Rectangle; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.awt.event.KeyEvent; import java.awt.event.MouseAdapter; import java.awt.event.MouseEvent; import java.awt.event.MouseListener; import javax.swing.*; import javax.swing.plaf.basic.BasicOptionPaneUI.ButtonActionListener; public class Frame implements MouseListener { public static boolean StartGame = false; JFrame frm = new JFrame("Action-Packed Jack"); ImageIcon img = new ImageIcon(getClass().getResource("/Images/ActionJackTitle.png")); ImageIcon StartImg = new ImageIcon(getClass().getResource("/Images/JackStart.png")); public Image Title; JLabel TitleL = new JLabel(img); public JPanel TitlePane = new JPanel(); public JPanel BoardPane = new JPanel(); JPanel cards; JButton StartB = new JButton(StartImg); Board nBoard = new Board(); static Sound nSound; public Frame() { frm.setLayout(new GridBagLayout()); cards = new JPanel(new CardLayout()); nSound = new Sound("/Sounds/BunchaJazz.wav"); TitleL.setPreferredSize(new Dimension(970, 420)); frm.add(TitleL); frm.add(cards); cards.setSize(new Dimension(150, 45)); cards.setLayout(new GridBagLayout ()); cards.add(StartB); StartB.addMouseListener(this); StartB.setPreferredSize(new Dimension(150, 45)); frm.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frm.setSize(1200, 420); frm.setVisible(true); frm.setResizable(false); frm.setLocationRelativeTo(null); frm.pack(); } public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable() { public void run() { new Frame(); } }); } public void mouseClicked(MouseEvent e) { nSound.play(); StartB.setContentAreaFilled(false); cards.remove(StartB); frm.remove(TitleL); frm.remove(cards); frm.setLayout(new GridLayout(1, 1)); frm.add(nBoard); //Add Game "Tiles" Or Content. x = 1200 nBoard.setPreferredSize(new Dimension(1200, 420)); cards.revalidate(); frm.validate(); } @Override public void mouseEntered(MouseEvent arg0) { // TODO Auto-generated method stub } @Override public void mouseExited(MouseEvent arg0) { // TODO Auto-generated method stub } @Override public void mousePressed(MouseEvent arg0) { // TODO Auto-generated method stub } @Override public void mouseReleased(MouseEvent arg0) { // TODO Auto-generated method stub } }

    Read the article

  • Question about mipmaps + anisotropic filtering

    - by Telanor
    I'm a bit confused here and maybe someone can explain this to me. I created a simple test texture for my terrain which is nothing more than a solid green color with a black grid overlayed on top of it. If I look at the terrain in the distance with mipmapping on and linear filtering, the grid lines become blurry fairly quickly and further back the grid is pretty much invisible. With these settings, I don't get any moire patterns at all. If I turn on anisotropic filtering, however, the higher the anisotropic level, the more the terrain looks like it did with without mipmapping. The lines are much crisper nearby but in the distance I start to see terrible moire patterns. My understanding was that mipmapping is supposed to get rid of moire patterns. I've always had anisotropic filtering on in every game I play and I've never noticed any moire patterns as a result, so I don't understand why it's happening in my game. I am using logarithmic depth however, could that be causing any problems? And if it is, how do I resolve it? I've created my sampler state like so (I'm using slimdx): ssa = SamplerState.FromDescription(Engine.Device, new SamplerDescription { AddressU = TextureAddressMode.Clamp, AddressV = TextureAddressMode.Clamp, AddressW = TextureAddressMode.Clamp, Filter = Filter.Anisotropic, MaximumAnisotropy = anisotropicLevel, MinimumLod = 0, MaximumLod = float.MaxValue });

    Read the article

  • Implementing automatic navigation mesh generation for a 2D top-down map?

    - by J2V
    I am currently in the middle of implementing A* pathfinding for enemies. In order to implement the actual A* logic, I need a navigation mesh for my map. I am working on a 2D top-down RPG map. The world is static, meaning there is no requirement for dynamic runtime mesh generation. My world objects are pixel based, not tile based, and have associated data with them such as scale, rotation, origin, etc. I will obviously need some vertex data generated from my world objects; maybe generate polygons from color data? I could create a colormap with objects for my whole map, but I have no idea how to begin creating nav mesh polygons. What would actual navigation mesh generation look like with this kind of information available? Can anyone maybe point to some good resources? I have looked into some 3D nav mesh tools, but they seem overly complex for my situation and also get a lot of their required data from models. Thanks a lot in advance! I have been trying to get my head around it for some time now.
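
    One low-tech possibility, sketched below on the assumption that the map can first be rasterised into a walkable/blocked grid (for instance from that colormap), is a greedy rectangle decomposition: grow axis-aligned walkable rectangles and treat each rectangle as a convex nav polygon, with shared edges becoming the links the A* graph runs over.

    # Sketch: greedy decomposition of a walkable grid into rectangles usable as nav polygons.
    def decompose(walkable):
        w, h = len(walkable), len(walkable[0])
        used = [[False] * h for _ in range(w)]
        rects = []
        for y in range(h):
            for x in range(w):
                if not walkable[x][y] or used[x][y]:
                    continue
                x2 = x                       # grow right as far as possible
                while x2 + 1 < w and walkable[x2 + 1][y] and not used[x2 + 1][y]:
                    x2 += 1
                y2 = y                       # then grow down while the whole row fits
                while y2 + 1 < h and all(walkable[i][y2 + 1] and not used[i][y2 + 1]
                                         for i in range(x, x2 + 1)):
                    y2 += 1
                for i in range(x, x2 + 1):
                    for j in range(y, y2 + 1):
                        used[i][j] = True
                rects.append((x, y, x2, y2))  # one convex nav "polygon"
        return rects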

    Read the article

  • Why does glGetString return a NULL string

    - by snape
    I am trying my hands at GLFW library. I have written a basic program to get OpenGL renderer and vendor string. Here is the code #include <GL/glew.h> #include <GL/glfw.h> #include <cstdio> #include <cstdlib> #include <string> using namespace std; void shutDown(int returnCode) { printf("There was an error in running the code with error %d\n",returnCode); GLenum res = glGetError(); const GLubyte *errString = gluErrorString(res); printf("Error is %s\n", errString); glfwTerminate(); exit(returnCode); } int main() { // start GL context and O/S window using GLFW helper library if (glfwInit() != GL_TRUE) shutDown(1); if (glfwOpenWindow(0, 0, 0, 0, 0, 0, 0, 0, GLFW_WINDOW) != GL_TRUE) shutDown(2); // start GLEW extension handler glewInit(); // get version info const GLubyte* renderer = glGetString (GL_RENDERER); // get renderer string const GLubyte* version = glGetString (GL_VERSION); // version as a string printf("Renderer: %s\n", renderer); printf("OpenGL version supported %s\n", version); // close GL context and any other GLFW resources glfwTerminate(); return 0; } I googled this error and found out that we have to initialize the OpenGL context before calling glGetString(). Although I have initialized OpenGL context using glfwInit() but still the function returns a NULL string. Any ideas? Edit I have updated the code with error checking mechanisms. This code on running outputs the following There was an error in running the code with error 2 Error is no error

    Read the article

  • Parent variable inheritance methods Unity3D/C#

    - by Timothy Williams
    I'm creating a system where there is a base "Hero" class and each hero inherits from it with their own stats and abilities. What I'm wondering is: how could I access a variable from one of the child scripts in the parent script (something like maxMP = MP), or call a function from the parent class that is specified in each child class (the parent's update calls alarms(), and each child class specifies what alarms() does)? Is this possible at all? Or not? Thanks.

    Read the article

  • Doing powerups in a component-based system

    - by deft_code
    I'm just starting really getting my head around component based design. I don't know what the "right" way to do this is. Here's the scenario. The player can equip a shield. The the shield is drawn as bubble around the player, it has a separate collision shape, and reduces the damage the player receives from area effects. How is such a shield architected in a component based game? Where I get confused is that the shield obviously has three components associated with it. Damage reduction / filtering A sprite A collider. To make it worse different shield variations could have even more behaviors, all of which could be components: boost player maximum health health regen projectile deflection etc Am I overthinking this? Should the shield just be a super component? I really think this is wrong answer. So if you think this is the way to go please explain. Should the shield be its own entity that tracks the location of the player? That might make it hard to implement the damage filtering. It also kinda blurs the lines between attached components and entities. Should the shield be a component that houses other components? I've never seen or heard of anything like this, but maybe it's common and I'm just not deep enough yet. Should the shield just be a set of components that get added to the player? Possibly with an extra component to manage the others, e.g. so they can all be removed as a group. (accidentally leave behind the damage reduction component, now that would be fun). Something else that's obvious to someone with more component experience?
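
    A tiny sketch (Python, with hypothetical component names) of the last option listed above: the shield is just a bundle of ordinary components plus one bookkeeping component that remembers what it added, so the whole group can be removed together.

    # Sketch: a shield as a group of components, added and removed as one unit.
    class Entity:
        def __init__(self):
            self.components = []
        def add(self, c):
            self.components.append(c)
            return c
        def remove(self, c):
            self.components.remove(c)

    class DamageFilter:                 # hypothetical pieces the shield contributes
        def __init__(self, factor): self.factor = factor
    class Sprite:
        def __init__(self, name): self.name = name
    class Collider:
        def __init__(self, radius): self.radius = radius

    class ShieldGroup:
        """Bookkeeping component: owns the pieces so they come off together."""
        def __init__(self, owner):
            self.owner = owner
            self.parts = [owner.add(DamageFilter(0.5)),
                          owner.add(Sprite("shield_bubble")),
                          owner.add(Collider(24))]
        def detach(self):
            for part in self.parts:
                self.owner.remove(part)
            self.owner.remove(self)

    player = Entity()
    shield = player.add(ShieldGroup(player))
    shield.detach()                     # everything the shield added is gone again
    print(len(player.components))       # 0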

    Read the article

  • OpenGL Shading Program Object Memory Requirement

    - by Hans Wurst
    gDEbugger states that OpenGL's program objects only occupy an insignificant amount of memory. How much is this actually? I don't know if the stuff I looked up in Mesa is actually what I was looking for, but it requires 16KB [Edit: false, confusing struct names; less than 1KB immediately, some more behind pointers] per program object. Not quite insignificant. So is it recommended to create a unique program object for each object in the scene? Or to share a single program object and set each scene object's custom variables just before its draw call?

    Read the article

  • How or why would this mechanic (not) work to bring game balance to a singleplayer RPG? [closed]

    - by 0xFFF1
    Mechanic details The player, the monsters, and the merchants act as three separate parties. The player needs to beat up monsters for exp points and resources to sell and to buy potions from merchants to continue to fight. The monsters need healing and reviving to survive (also bought from merchants) and the merchants need potion ingredients from the player and the monsters to make potions to sell. These potions are only able to be processed in such bulk by merchants thus their potions would be cheaper than making them yourself. Only the monsters can farm ingredients in bulk. Only the player is or has to be overly aggressive (in bulk). Monsters can farm and produce "Level up candies" that do the work of exp. they are eaten right away after they are made and are never stockpiled or held for fear of the player and merchants who want to sell to the player. The monsters will defend themselves. Reviving is very expensive. The merchants can be found either with a concerned expression or a grinning expression based on how much profit they are making compared to their morale standing. The economies of each monster town and merchant city are distinct but interconnected. Magic Swords are worth a lot. So what I need to know is what concerns would there be to design a game around this mechanic and/or design this mechanic around a developing game. which would fare better? Is game balance an issue here? (how strong the monsters get or how quickly they die off based on the player's input into the system), Or is game balance solely in the hands of the player? (he decides if he overkills monsters or get underleveled.) What do I need to think about to make sure it isn't too easy or too hard to swing the amount/strength of monsters compared to the player and the amount of profit the merchants get vs the player. Would indicating how out of whack things are getting in game help with this?

    Read the article

  • Render an image with separate layers for shadows/reflections in 3D Studio Max?

    - by Bernd Plontsch
    I have a scene with a simple object standing on a ground in the center. Caused by lights and the ground material there is some shadow and reflection on the ground surrounding the object. How can I render an image containing 3 separate layers for the object the ground the reflection / shadow on the ground Which format to use for this (it should include all 3 layers + I should be able to enable/disable them in Photoshop)? How do I define or prepare those layers for being rendering as image layers?

    Read the article

  • Incomplete mesh using DrawIndexedPrimitives after rotating mesh

    - by user1278255
    Through help on this site I was able to draw the triangles of an unrotated, nonscaled nontransformed mesh created in Blender and exported to OBJ, accurately imported through Assimp and rendered in XNA Graphics. However after applying rotation on a single axis in Blender(Z) and adding materials(I wanted to test loading of materials through Assimp) the same mesh appears incomplete. Is something wrong with my view matrix or is it something else? This is what the unrotated mesh looks like: http://www.4shared.com/photo/qXNUSvxtba/okcube.html Here is the rotated mesh: http://www.4shared.com/photo/HAys2rWvba/badcube.html Camera, View and Projection are defined as follows: cameraPos = new Vector3(0, 5, 9); viewMatrix = Matrix.CreateLookAt(cameraPos, new Vector3(0, 0, 1), new Vector3(0, 1, 0)); projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, device.Viewport.AspectRatio, 1.0f, 200.0f); Rendering is done through this code: device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.DarkSlateBlue, 1.0f, 0); effect = new BasicEffect(GraphicsDevice); effect.VertexColorEnabled = true; effect.View = viewMatrix; effect.Projection = projectionMatrix; effect.World = Matrix.Identity; foreach (EffectPass pass in effect.CurrentTechnique.Passes) { pass.Apply(); device.SetVertexBuffer(vertexBuffer); device.Indices = indexBuffer; device.DrawIndexedPrimitives(Microsoft.Xna.Framework.Graphics.PrimitiveType.TriangleList, 0, 0, oScene.Meshes[0].VertexCount, 0, mMesh.FaceCount); } base.Draw(gameTime);

    Read the article

  • 3D Camera Problem

    - by Chris
    I allow the user to look around the scene by holding down the left mouse button and moving the mouse. The problem I have is this: when facing one direction, I move the mouse up and the view tilts up, I move it down and the view tilts down. If I spin around 180 degrees, my left and right still work fine, but when I move the mouse up the view tilts down, and when I move the mouse down the view tilts up. This is the code I am using; can anyone see what the problem with the logic is? var viewDir = g_math.subVector(target, g_eye); var rotatedViewDir = []; rotatedViewDir[0] = (Math.cos(g_mouseXDelta * g_rotationDelta) * viewDir[0]) - (Math.sin(g_mouseXDelta * g_rotationDelta) * viewDir[2]); rotatedViewDir[1] = viewDir[1]; rotatedViewDir[2] = (Math.cos(g_mouseXDelta * g_rotationDelta) * viewDir[2]) + (Math.sin(g_mouseXDelta * g_rotationDelta) * viewDir[0]); viewDir = rotatedViewDir; rotatedViewDir[0] = viewDir[0]; rotatedViewDir[1] = (Math.cos(g_mouseYDelta * g_rotationDelta * -1) * viewDir[1]) - (Math.sin(g_mouseYDelta * g_rotationDelta * -1) * viewDir[2]); rotatedViewDir[2] = (Math.cos(g_mouseYDelta * g_rotationDelta * -1) * viewDir[2]) + (Math.sin(g_mouseYDelta * g_rotationDelta * -1) * viewDir[1]); g_lookingDir = rotatedViewDir; var newtarget = g_math.addVector(rotatedViewDir, g_eye);
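
    For reference, the classic cause of exactly this symptom is pitching around the world X axis instead of the camera's own right axis; after a 180-degree yaw, the world X axis points the "wrong" way relative to the view. A rough sketch of yaw-then-pitch using the camera's right vector, in plain Python vector math rather than the g_math helpers from the question:

    # Sketch: yaw about world up, then pitch about the camera's current right vector.
    import math

    def rotate_about(v, axis, angle):
        # Rodrigues' rotation formula; axis must be unit length
        c, s = math.cos(angle), math.sin(angle)
        ax, ay, az = axis
        dot = ax * v[0] + ay * v[1] + az * v[2]
        cross = (ay * v[2] - az * v[1], az * v[0] - ax * v[2], ax * v[1] - ay * v[0])
        return tuple(v[i] * c + cross[i] * s + axis[i] * dot * (1 - c) for i in range(3))

    def update_view_dir(view_dir, yaw, pitch, world_up=(0.0, 1.0, 0.0)):
        view_dir = rotate_about(view_dir, world_up, yaw)          # left/right
        right = (view_dir[1] * world_up[2] - view_dir[2] * world_up[1],
                 view_dir[2] * world_up[0] - view_dir[0] * world_up[2],
                 view_dir[0] * world_up[1] - view_dir[1] * world_up[0])
        length = math.sqrt(sum(c * c for c in right)) or 1.0
        right = tuple(c / length for c in right)
        return rotate_about(view_dir, right, pitch)               # up/down stays consistent

    print(update_view_dir((0.0, 0.0, -1.0), math.pi, 0.1))        # pitch still correct after 180-degree yaw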

    Read the article

  • Doubling the DPI with a shader?

    - by Mathias Lykkegaard Lorenzen
    I'm developing a game where the map is generated with Perlin noise, but on the CPU. I am generating some Perlin noise onto a small texture, and then I stretch it out over the whole screen to simulate a map. The reason for generating the noise on the CPU is that I want it to look the same on all devices. Now, here's the end result. Please ignore the bullets and the explosion in the picture. What matters is the background (the black/gray pixels) and the ground (the brown-ish pixels). They are rendered to the same texture through Perlin noise. However, this doesn't look very pretty. So I was wondering: would it be possible to double the amount of pixels using a shader, rounding off edges at the same time? In other words, improve the effective DPI. I'm using SharpDX with DirectX 11, through its toolkit feature. Any pointers in the right direction (for instance via HLSL) would be a great help. Thanks in advance.

    Read the article
