Search Results

Search found 19338 results on 774 pages for 'game loop'.

  • Calculating 2D (screen) coordinates from 3D positions in XNA 4.0

    - by NDraskovic
    I have a program that draws some items to the scene by loading their positions from a file. Now I want to place a Ray at the same location where the items are drawn. So my question is: how can I calculate the position of the ray (its 2D components) from the 3D coordinates of each particular item? The items don't move anywhere, so once they are placed they stay until the end of the program's execution. Thanks.
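
    A minimal sketch of the standard XNA approach, assuming you have the view and projection matrices used to draw the items: Viewport.Project maps a 3D world position into screen space, and the resulting X/Y components are the 2D screen coordinates.

        // 'itemPosition' is the 3D position loaded from the file;
        // 'projection' and 'view' are the matrices used when drawing.
        Vector3 screen = GraphicsDevice.Viewport.Project(
            itemPosition, projection, view, Matrix.Identity);
        Vector2 screenPos = new Vector2(screen.X, screen.Y); // 2D screen coordinates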

  • Unity3D: default parameters in C# script

    - by Heisenbug
    According to this thread, it seems that default parameters aren't supported by C# scripts in the Unity3D environment. Declaring an optional parameter in a C# script makes the Mono IDE complain about it: void Foo(int first, int second = 10) // this line is marked as wrong inside Mono IDE Anyway, if I ignore Mono's error message and run the script in Unity, it works without reporting any error in the Unity Console. Could anyone clarify this issue a little? Particularly: Are default parameters allowed inside C# scripts? If yes, are they supported by all platforms? Why does Mono complain about them if they actually work?
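
    A hedged workaround sketch: if a given Unity/Mono version flags default parameters, a plain overload compiles everywhere and behaves identically.

        void Foo(int first) { Foo(first, 10); } // forwards the default value
        void Foo(int first, int second) { /* original body */ }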

  • 3D BSP rendering for maps made in 2d platform style

    - by Dev Joy
    I wish to render a 3D map which is always seen from the top; the camera is in the sky, always looking down at the earth. Sample of a floor layout: I don't think I need complex structures like BSP trees to render it. I mean, I can divide the map into a grid and render it the way 2D platform games do, as sketched below. I just want to know if this is a good idea and what may go wrong if I don't choose BSP tree rendering here. Please also mention if any better-known rendering techniques are available for such situations.
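
    A minimal sketch of the grid approach (hypothetical names): with a fixed top-down camera you can clamp the visible tile range to the camera rectangle and draw only those cells, with no BSP traversal involved.

        int firstX = Math.Max(0, (int)(camera.Left / tileSize));
        int firstY = Math.Max(0, (int)(camera.Top  / tileSize));
        int lastX  = Math.Min(mapWidth  - 1, (int)(camera.Right  / tileSize));
        int lastY  = Math.Min(mapHeight - 1, (int)(camera.Bottom / tileSize));
        for (int y = firstY; y <= lastY; y++)
            for (int x = firstX; x <= lastX; x++)
                DrawTile(map[x, y], x * tileSize, y * tileSize); // hypothetical draw call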

  • Time calculation between OpenGL update calls

    - by Vijayendra
    In XNA, the system calls the update and draw functions with timing information, such as how much time has passed since update was last called. This makes it easy to integrate over time and do animation calculations accordingly. But I don't see any such mechanism in OpenGL. OpenGL seems to require programmers to roll their own implementation, which could be buggy or inefficient. Is there any standard (and efficient) code that demonstrates this practice in OpenGL?
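
    A minimal sketch of the usual pattern, in C# for brevity: OpenGL itself has no clock, so you measure the frame delta with a high-resolution timer and feed it to your update function, much as XNA's GameTime does.

        var timer = System.Diagnostics.Stopwatch.StartNew();
        double last = 0.0;
        while (running) // your existing render loop
        {
            double now = timer.Elapsed.TotalSeconds;
            float delta = (float)(now - last); // seconds since the previous frame
            last = now;
            Update(delta); // integrate animation with the measured delta
            Draw();
        }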

  • libgdx - removing the circle outline rendered on Box2d CircleShape

    - by Brett
    How can I remove the outline on the CircleShape below? CircleShape circle = new CircleShape(); circle.setRadius(1f); ... using ... batch.draw(textureRegion, position.x - 1, position.y - 1, 1f, 1f, 2, 2, 1, 1, angle); I use this to set the body for a Box2D collision, but I get a silly circle shape drawn over my texture in libGDX, i.e. my textured sprite (ball) has a circle over the top of it with a line running from the center along the radius. Any ideas on how to remove the overlying circle lines?

  • Maximum number of controllers Unity3D can handle

    - by N0xus
    I've been trying to find out the maximum number of Xbox controllers Unity3D can handle in one instance. I know that through networking, Unity is capable of handling as many players as your hardware allows, but I want to avoid networking as much as possible. So, on a single computer and a single screen (think Bomberman or Super Smash Bros.), how many Xbox controllers can Unity3D support? I have done work in XNA and remember it supporting only 4, but for the life of me I can't find any information on how many Unity supports.
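
    A minimal sketch for checking at runtime rather than guessing: Unity enumerates connected joysticks by name, so you can log how many the player actually has plugged in.

        string[] pads = Input.GetJoystickNames();
        Debug.Log(pads.Length + " joystick(s) connected");
        for (int i = 0; i < pads.Length; i++)
            Debug.Log("Joystick " + (i + 1) + ": " + pads[i]);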

  • Calculating the position of an object with regard to the current position using OpenGL-like matrices

    - by spartan2417
    I have a first-person camera that collides with walls, and a small sphere in front of the camera, placed at the camera position plus a distance ahead. I have the position of my camera, but I cannot get the position of the sphere; I need to find the position of that point, or at the very least a way of calculating it from the camera position. Code: static Float P_z = 0; P_z = -15; PushMatrix(); LoadMatrix(&Inv); Material(SCEGU_AMBIENT, 0x00000066); TranslateXYZ(0,0,P_z); ScaleXYZ(0.1f,0.1f,0.1f); pointer.Render(); PopMatrix(); where Inv holds the camera position (Inv.w.x, Inv.w.z) and pointer is the sphere.
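
    A minimal math sketch (hypothetical names, C# syntax): a point a fixed distance in front of the camera is the camera position plus the normalized forward (view) vector scaled by that distance, all in world space.

        Vector3 forward = Vector3.Normalize(cameraTarget - cameraPosition);
        Vector3 spherePos = cameraPosition + forward * distance; // world-space sphere center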

  • Unable to access class A's variables in class B - Unity/MonoDevelop

    - by Syed
    I have made a class including some variables in MonoDevelop: public class GridInfo : MonoBehaviour { public float initPosX; public float initPosY; public bool inUse; public int f; public int g; public int h; public GridInfo parent; public int y,x; } Now I am using its class variables in another class, Map.cs: public class Map : MonoBehaviour { public static GridInfo[,] Tile = new GridInfo[17, 23]; void Start() { Tile[0,0].initPosX = initPosX; //Line 49 } } I am not getting any compile error, but when I press Play in Unity it gives me NullReferenceException: Object reference not set to an instance of an object Map.Start () (at Assets/Scripts/Map.cs:49) I am not attaching this script to any GameObject, as Map.cs only builds a GridInfo-typed array. I have also tried accessing the variables using GetComponent. Where is the problem?

  • Light following me around the room. Something is wrong with my shader!

    - by Robinson
    I'm trying to do a spot (Blinn) light, with falloff and attenuation. It seems to be working OK except I have a bit of a space problem. That is, whenever I move the camera the light moves to maintain the same relative position, rather than changing with the camera. This results in the light moving around, i.e. not always falling on the same surfaces. It's as if there's a flashlight attached to the camera. I'm transforming the lights beforehand into view space, so Light_Position and Light_Direction are already in eye space (I hope!). I made a little movie of what it looks like here: My camera rotating around a point inside a box. The light is fixed in the centre up and its "look at" point in a fixed position in front of it. As you can see, as the camera rotates around the origin (always looking at the centre), so don't think the box is rotating (!). The lighting follows it around. To start, some code. This is how I'm transforming the light into view space (it gets passed into the shader already in view space): // Compute eye-space light position. Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition; MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition); // Compute eye-space light direction vector. Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition); MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection); MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection); Can anyone give me a clue as to what I'm doing wrong here? I think the light should remain looking at a fixed point on the box, regardless of the camera orientation. Here are the vertex and pixel shaders: /////////////////////////////////////////////////// // Vertex Shader /////////////////////////////////////////////////// #version 420 /////////////////////////////////////////////////// // Uniform Buffer Structures /////////////////////////////////////////////////// // Camera. layout (std140) uniform Camera { mat4 Camera_View; mat4 Camera_ViewInverseTranspose; mat4 Camera_Projection; }; // Matrices per model. layout (std140) uniform Model { mat4 Model_World; mat4 Model_WorldView; mat4 Model_WorldViewInverseTranspose; mat4 Model_WorldViewProjection; }; // Spotlight. 
layout (std140) uniform OmniLight { float Light_Intensity; vec3 Light_Position; vec3 Light_Direction; vec4 Light_Ambient_Colour; vec4 Light_Diffuse_Colour; vec4 Light_Specular_Colour; float Light_Attenuation_Min; float Light_Attenuation_Max; float Light_Cone_Min; float Light_Cone_Max; }; /////////////////////////////////////////////////// // Streams (per vertex) /////////////////////////////////////////////////// layout(location = 0) in vec3 attrib_Position; layout(location = 1) in vec3 attrib_Normal; layout(location = 2) in vec3 attrib_Tangent; layout(location = 3) in vec3 attrib_BiNormal; layout(location = 4) in vec2 attrib_Texture; /////////////////////////////////////////////////// // Output streams (per vertex) /////////////////////////////////////////////////// out vec3 attrib_Fragment_Normal; out vec4 attrib_Fragment_Position; out vec2 attrib_Fragment_Texture; out vec3 attrib_Fragment_Light; out vec3 attrib_Fragment_Eye; /////////////////////////////////////////////////// // Main /////////////////////////////////////////////////// void main() { // Transform normal into eye space attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz; // Transform vertex into eye space (world * view * vertex = eye) vec4 position = Model_WorldView * vec4(attrib_Position, 1.0); // Compute vector from eye space vertex to light (light is in eye space already) attrib_Fragment_Light = Light_Position - position.xyz; // Compute vector from the vertex to the eye (which is now at the origin). attrib_Fragment_Eye = -position.xyz; // Output texture coord. attrib_Fragment_Texture = attrib_Texture; // Compute vertex position by applying camera projection. gl_Position = Camera_Projection * position; } and the pixel shader: /////////////////////////////////////////////////// // Pixel Shader /////////////////////////////////////////////////// #version 420 /////////////////////////////////////////////////// // Samplers /////////////////////////////////////////////////// uniform sampler2D Map_Diffuse; /////////////////////////////////////////////////// // Global Uniforms /////////////////////////////////////////////////// // Material. layout (std140) uniform Material { vec4 Material_Ambient_Colour; vec4 Material_Diffuse_Colour; vec4 Material_Specular_Colour; vec4 Material_Emissive_Colour; float Material_Shininess; float Material_Strength; }; // Spotlight. layout (std140) uniform OmniLight { float Light_Intensity; vec3 Light_Position; vec3 Light_Direction; vec4 Light_Ambient_Colour; vec4 Light_Diffuse_Colour; vec4 Light_Specular_Colour; float Light_Attenuation_Min; float Light_Attenuation_Max; float Light_Cone_Min; float Light_Cone_Max; }; /////////////////////////////////////////////////// // Input streams (per vertex) /////////////////////////////////////////////////// in vec3 attrib_Fragment_Normal; in vec3 attrib_Fragment_Position; in vec2 attrib_Fragment_Texture; in vec3 attrib_Fragment_Light; in vec3 attrib_Fragment_Eye; /////////////////////////////////////////////////// // Result /////////////////////////////////////////////////// out vec4 Out_Colour; /////////////////////////////////////////////////// // Main /////////////////////////////////////////////////// void main(void) { // Compute N dot L. vec3 N = normalize(attrib_Fragment_Normal); vec3 L = normalize(attrib_Fragment_Light); vec3 E = normalize(attrib_Fragment_Eye); vec3 H = normalize(L + E); float NdotL = clamp(dot(L,N), 0.0, 1.0); float NdotH = clamp(dot(N,H), 0.0, 1.0); // Compute ambient term. 
vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour; // Diffuse. vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture) * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL; // Specular. float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength; vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity; // Light attenuation (so we don't have to use 1 - x, we step between Max and Min). float d = length(-attrib_Fragment_Light); float attenuation = smoothstep(Light_Attenuation_Max, Light_Attenuation_Min, d); // Adjust attenuation based on light cone. float LdotS = dot(-L, Light_Direction), CosI = Light_Cone_Min - Light_Cone_Max; attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0); // Final colour. Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation; }

  • Rotating a sprite and shooting bullets from the end of a cannon

    - by Alberto
    Hi all i have a problem in my Andengine code, I need , when I touch the screen, shoot a bullet from the cannon (in the same direction of the cannon) The cannon rotates perfectly but when I touch the screen the bullet is not created at the end of the turret This is my code: private void shootProjectile(final float pX, final float pY){ int offX = (int) (pX-canon.getSceneCenterCoordinates()[0]); int offY = (int) (pY-canon.getSceneCenterCoordinates()[1]); if (offX <= 0) return ; if(offY>=0) return; double X=canon.getX()+canon.getWidth()*0,5; double Y=canon.getY()+canon.getHeight()*0,5 ; final Sprite projectile; projectile = new Sprite( (float) X, (float) Y, mProjectileTextureRegion,this.getVertexBufferObjectManager() ); mMainScene.attachChild(projectile); int realX = (int) (mCamera.getWidth()+ projectile.getWidth()/2.0f); float ratio = (float) offY / (float) offX; int realY = (int) ((realX*ratio) + projectile.getY()); int offRealX = (int) (realX- projectile.getX()); int offRealY = (int) (realY- projectile.getY()); float length = (float) Math.sqrt((offRealX*offRealX)+(offRealY*offRealY)); float velocity = (float) 480.0f/1.0f; float realMoveDuration = length/velocity; MoveModifier modifier = new MoveModifier(realMoveDuration,projectile.getX(), realX, projectile.getY(), realY); projectile.registerEntityModifier(modifier); } @Override public boolean onSceneTouchEvent(Scene pScene, TouchEvent pSceneTouchEvent) { if (pSceneTouchEvent.getAction() == TouchEvent.ACTION_MOVE){ double dx = pSceneTouchEvent.getX() - canon.getSceneCenterCoordinates()[0]; double dy = pSceneTouchEvent.getY() - canon.getSceneCenterCoordinates()[1]; double Radius = Math.atan2(dy,dx); double Angle = Radius * 180 / Math.PI; canon.setRotation((float)Angle); return true; } else if (pSceneTouchEvent.getAction() == TouchEvent.ACTION_DOWN){ final float touchX = pSceneTouchEvent.getX(); final float touchY = pSceneTouchEvent.getY(); double dx = pSceneTouchEvent.getX() - canon.getSceneCenterCoordinates()[0]; double dy = pSceneTouchEvent.getY() - canon.getSceneCenterCoordinates()[1]; double Radius = Math.atan2(dy,dx); double Angle = Radius * 180 / Math.PI; canon.setRotation((float)Angle); shootProjectile(touchX, touchY); } return false; } Anyone know how to calculate the coordinates (X,Y) of the end of the barrel to draw the bullet?
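
    A minimal math sketch in C# syntax (hypothetical names): the end of the barrel is the cannon's rotation center offset by the barrel length along the current angle. AndEngine rotations are in degrees, so convert to radians first.

        float rad = angleDegrees * (float)Math.PI / 180f;
        float muzzleX = centerX + barrelLength * (float)Math.Cos(rad);
        float muzzleY = centerY + barrelLength * (float)Math.Sin(rad);
        // spawn the projectile centered on (muzzleX, muzzleY) instead of the sprite origin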

  • TileMapRenderer in libGDX not drawing anything

    - by Benjamin Dengler
    So I followed the tutorial on the libGDX wiki for drawing Tiled maps, but it doesn't seem to render anything. Here's how I set up my OrthographicCamera and load the map: camera = new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight()); map = TiledLoader.createMap(Gdx.files.internal("maps/test.tmx")); atlas = new TileAtlas(map, Gdx.files.internal("maps")); tileMapRenderer = new TileMapRenderer(map, atlas, 8, 8); And here is my render function: Gdx.gl.glClearColor(0, 0, 0, 1); Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); camera.update(); tileMapRenderer.render(camera); I also packed the tile map using the TiledMapPacker. I'm completely stumped... am I missing anything obvious here? EDIT: While debugging I noticed that the TileAtlas seems to be empty, which I guess shouldn't be the case, but I have no idea why it's empty.

  • Translating an Object to a certain Vector 3 in OpenGL and Java LWJGL

    - by aliasmk
    So after almost two hours, I got the hang of using glTranslated() (with Java and LWJGL). If I am correct, applying glTranslated to an object moves that object by x,y,z relative to the previously applied transform. I believe the correct terms for this are local vs. global, global being the one I want. I was wondering if there is a way to translate an object to a specific XYZ position, i.e. relative to the origin. Thanks! (Code or other details can be supplied if it helps, just let me know. Also, sorry if this is a noob question; I'm very new to OpenGL.)
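
    A minimal sketch, assuming OpenTK-style C# bindings for the same fixed-function calls: resetting the modelview matrix before translating makes the translation absolute (relative to the origin) rather than relative to the previous transform.

        GL.MatrixMode(MatrixMode.Modelview);
        GL.LoadIdentity();                        // discard the previous transform
        GL.Translate(targetX, targetY, targetZ);  // now an absolute world position
        DrawObject();                             // hypothetical draw call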

  • X-axis collision detection issues in the platformer starter kit

    - by dbomb101
    I've come across a problem with the collision detection code in the platformer starter kit for xna.It will send up the impassible flag on the x axis despite being nowhere near a wall in either direction on the x axis, could someone could tell me why this happens ? Here is the collision method. /// <summary> /// Detects and resolves all collisions between the player and his neighboring /// tiles. When a collision is detected, the player is pushed away along one /// axis to prevent overlapping. There is some special logic for the Y axis to /// handle platforms which behave differently depending on direction of movement. /// </summary> private void HandleCollisions() { // Get the player's bounding rectangle and find neighboring tiles. Rectangle bounds = BoundingRectangle; int leftTile = (int)Math.Floor((float)bounds.Left / Tile.Width); int rightTile = (int)Math.Ceiling(((float)bounds.Right / Tile.Width)) - 1; int topTile = (int)Math.Floor((float)bounds.Top / Tile.Height); int bottomTile = (int)Math.Ceiling(((float)bounds.Bottom / Tile.Height)) - 1; // Reset flag to search for ground collision. isOnGround = false; // For each potentially colliding tile, for (int y = topTile; y <= bottomTile; ++y) { for (int x = leftTile; x <= rightTile; ++x) { // If this tile is collidable, TileCollision collision = Level.GetCollision(x, y); if (collision != TileCollision.Passable) { // Determine collision depth (with direction) and magnitude. Rectangle tileBounds = Level.GetBounds(x, y); Vector2 depth = RectangleExtensions.GetIntersectionDepth(bounds, tileBounds); if (depth != Vector2.Zero) { float absDepthX = Math.Abs(depth.X); float absDepthY = Math.Abs(depth.Y); // Resolve the collision along the shallow axis. if (absDepthY < absDepthX || collision == TileCollision.Platform) { // If we crossed the top of a tile, we are on the ground. if (previousBottom <= tileBounds.Top) isOnGround = true; // Ignore platforms, unless we are on the ground. if (collision == TileCollision.Impassable || IsOnGround) { // Resolve the collision along the Y axis. Position = new Vector2(Position.X, Position.Y + depth.Y); // Perform further collisions with the new bounds. bounds = BoundingRectangle; } } //This is the section which deals with collision on the x-axis else if (collision == TileCollision.Impassable) // Ignore platforms. { // Resolve the collision along the X axis. Position = new Vector2(Position.X + depth.X, Position.Y); // Perform further collisions with the new bounds. bounds = BoundingRectangle; } } } } } // Save the new bounds bottom. previousBottom = bounds.Bottom; }

  • OpenGL flickering near the edges

    - by Daniel
    I am trying to simulate particles moving around the scene with OpenCL for computation and OpenGL for rendering with GLUT. There is no OpenCL-OpenGL interop yet, so the drawing is done in the older fixed pipeline way. Whenever circles get close to the edges, they start to flicker. The drawing should draw a part of the circle on the top of the scene and a part on the bottom. The effect is the following: The balls you see on the bottom should be one part on the bottom and one part on the top. Wrapping around the scene, so to say, but they constantly flicker. The code for drawing them is: void Scene::drawCircle(GLuint index){ glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glTranslatef(pos.at(2*index),pos.at(2*index+1), 0.0f); glBegin(GL_TRIANGLE_FAN); GLfloat incr = (2.0 * M_PI) / (GLfloat) slices; glColor3f(0.8f, 0.255f, 0.26f); glVertex2f(0.0f, 0.0f); glColor3f(1.0f, 0.0f, 0.0f); for(GLint i = 0; i <=slices; ++i){ GLfloat x = radius * sin((GLfloat) i * incr); GLfloat y = radius * cos((GLfloat) i * incr); glVertex2f(x, y); } glEnd(); } If it helps, this is the reshape method: void Scene::reshape(GLint width, GLint height){ if(0 == height) height = 1; //Prevent division by zero glViewport(0, 0, width, height); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluOrtho2D(xmin, xmax, ymin, ymax); std::cout << xmin << " " << xmax << " " << ymin << " " << ymax << std::endl; }
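
    A minimal sketch of the usual wrap-around fix, in C# syntax (hypothetical wrapper names): when a circle overlaps an edge, draw it a second time shifted by the scene size, so both halves appear solidly instead of the draw position jumping (flickering) between the two sides.

        float sceneWidth = xmax - xmin, sceneHeight = ymax - ymin;
        DrawCircle(x, y);                                      // hypothetical wrapper around drawCircle
        if (x - radius < xmin) DrawCircle(x + sceneWidth, y);  // partner copy on the right
        if (x + radius > xmax) DrawCircle(x - sceneWidth, y);  // partner copy on the left
        if (y - radius < ymin) DrawCircle(x, y + sceneHeight);
        if (y + radius > ymax) DrawCircle(x, y - sceneHeight);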

  • problem adding bumpmap to textured gluSphere in JOGL

    - by ChocoMan
    I currently have one texture on a gluSphere that represents the Earth being displayed perfectly, but having trouble figuring out how to implement a bumpmap as well. The bumpmap resides in "res/planet/earth/earthbump1k.jpg".Here is the code I have for the regular texture: gl.glTranslatef(xPath, 0, yPath + zPos); gl.glColor3f(1.0f, 1.0f, 1.0f); // base color for earth earthGluSphere = glu.gluNewQuadric(); colorTexture.enable(); // enable texture colorTexture.bind(); // bind texture // draw sphere... glu.gluDeleteQuadric(earthGluSphere); colorTexture.disable(); // texturing public void loadPlanetTexture(GL2 gl) { InputStream colorMap = null; try { colorMap = new FileInputStream("res/planet/earth/earthmap1k.jpg"); TextureData data = TextureIO.newTextureData(colorMap, false, null); colorTexture = TextureIO.newTexture(data); colorTexture.getImageTexCoords(); colorTexture.setTexParameteri(GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR); colorTexture.setTexParameteri(GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_NEAREST); colorMap.close(); } catch(IOException e) { e.printStackTrace(); System.exit(1); } // Set material properties gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR); gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_NEAREST); colorTexture.setTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_WRAP_S); colorTexture.setTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_WRAP_T); } How would I add the bumpmap as well to the same gluSphere?

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas Tool( D3DXUVAtlasCreate() ). I've succeeded in generating an atlas, however, when I try to render the mesh object using the atlas the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a uv atlas for a cube: struct sVertexPosNormTex { D3DXVECTOR3 vPos, vNorm; D3DXVECTOR2 vUV; sVertexPosNormTex(){} sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv ) { vPos = v; vNorm = n; vUV = uv; } ~sVertexPosNormTex() { } }; // create a light map texture to fill programatically hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &pLightmap ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr ); return hr; } // get the zero level surface from the texture IDirect3DSurface9 *pS = NULL; pLightmap->GetSurfaceLevel( 0, &pS ); // clear surface pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) ); // load a sample mesh DWORD dwcMaterials = 0; LPD3DXBUFFER pMaterialBuffer = NULL; V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice, &pAdjacency, &pMaterialBuffer, NULL, &dwcMaterials, &g_pMesh ) ); // generate adjacency DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ]; g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency ); // create light map coordinates LPD3DXMESH pMesh = NULL; LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL; FLOAT resultStretch = 0; UINT numCharts = 0; hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency, NULL, NULL, NULL, NULL, NULL, 0, &pMesh, &pFacePartitioning, &pVertexRemapArray, &resultStretch, &numCharts ); if( SUCCEEDED( hr ) ) { // release and set mesh SAFE_RELEASE( g_pMesh ); g_pMesh = pMesh; // write mesh to file hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0, ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(), NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr ); } // fill the the light map hr = BuildLightmap( pS, g_pMesh ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr ); } } else { DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr ); } SAFE_RELEASE( pS ); SAFE_DELETE_ARRAY( pdwAdjacency ); SAFE_RELEASE( pFacePartitioning ); SAFE_RELEASE( pVertexRemapArray ); SAFE_RELEASE( pMaterialBuffer ); Here is code to fill lightmap texture: HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh ) { HRESULT hr = S_OK; // validate lightmap texture surface and mesh if( !pS || !pMesh ) return E_POINTER; // lock the mesh vertex buffer sVertexPosNormTex *pV = NULL; pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV ); // lock the mesh index buffer WORD *pI = NULL; pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI ); // get the lightmap texture surface description D3DSURFACE_DESC desc; pS->GetDesc( &desc ); // lock the surface rect to fill with color data D3DLOCKED_RECT rct; hr = pS->LockRect( &rct, NULL, 0 ); if( FAILED( hr ) ) { DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr ); return hr; } // iterate the pixels of the lightmap texture // check each pixel to see if it lies between the uv coordinates of a cube face BYTE *pBuffer = ( BYTE* )rct.pBits; for( UINT y = 0; y < desc.Height; ++y ) { BYTE* pBufferRow = ( BYTE* )pBuffer; for( UINT x = 0; x 
< desc.Width * 4; x+=4 ) { // determine the pixel's uv coordinate D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f, y / ( float )desc.Height + 0.5f / 128.0f ); // for each face of the mesh // check to see if the pixel lies within the face's uv coordinates for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i +=3 ) { sVertexPosNormTex v[ 3 ]; v[ 0 ] = pV[ pI[ i + 0 ] ]; v[ 1 ] = pV[ pI[ i + 1 ] ]; v[ 2 ] = pV[ pI[ i + 2 ] ]; if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) ) { // the pixel lies b/t the uv coordinates of a cube face // light contribution functions aren't needed yet //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos, v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ); //D3DXVECTOR3 vNormal = v[ 0 ].vNorm; // set the color of this pixel red( for demo ) BYTE ba[] = { 0, 0, 255, 255, }; //ComputeContribution( vPos, vNormal, g_sLight, ba ); // copy the byte array into the light map texture memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) ); } } } // go to next line of the texture pBuffer += rct.Pitch; } // unlock the surface rect pS->UnlockRect(); // unlock mesh vertex and index buffers pMesh->UnlockIndexBuffer(); pMesh->UnlockVertexBuffer(); // write the surface to file hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL ); if( FAILED( hr ) ) DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr ); return hr; } bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1, const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p ) { // compute vectors D3DXVECTOR2 v0 = t1 - t0, v1 = t2 - t0, v2 = p - t0; float f00 = D3DXVec2Dot( &v0, &v0 ); float f01 = D3DXVec2Dot( &v0, &v1 ); float f02 = D3DXVec2Dot( &v0, &v2 ); float f11 = D3DXVec2Dot( &v1, &v1 ); float f12 = D3DXVec2Dot( &v1, &v2 ); // Compute barycentric coordinates float invDenom = 1 / ( f00 * f11 - f01 * f01 ); float fU = ( f11 * f02 - f01 * f12 ) * invDenom; float fV = ( f00 * f12 - f01 * f02 ) * invDenom; // Check if point is in triangle if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) ) return true; return false; } Screenshot Lightmap I believe the problem comes from the difference between the lightmap uv coordinates and the pixel center coordinates...for example, here are the lightmap uv coordinates( generated by D3DXUVAtlasCreate() ) for a specific face( tri ) within the mesh, keep in mind that I'm using the mesh uv coordinates to write the pixels for the texture: v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 ); v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 ); v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 ); the lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are: float halfPixel = 0.5 / 128 = 0.00390625; D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel ); will the mapping and sampling of the lightmap texture will require that an offset be taken into account or that the uv coordinates are snapped to the pixel centers..? ...Any ideas on the best way to approach this situation would be appreciated...What are the common practices?

  • Wikipedia A* pathfinding algorithm takes a lot of time

    - by Vee
    I've successfully implemented A* pathfinding in C# but it is very slow, and I don't understand why. I even tried not sorting the openNodes list but it's still the same. The map is 80x80, and there are 10-11 nodes. I took the pseudocode from here Wikipedia And this is my implementation: public static List<PGNode> Pathfind(PGMap mMap, PGNode mStart, PGNode mEnd) { mMap.ClearNodes(); mMap.GetTile(mStart.X, mStart.Y).Value = 0; mMap.GetTile(mEnd.X, mEnd.Y).Value = 0; List<PGNode> openNodes = new List<PGNode>(); List<PGNode> closedNodes = new List<PGNode>(); List<PGNode> solutionNodes = new List<PGNode>(); mStart.G = 0; mStart.H = GetManhattanHeuristic(mStart, mEnd); solutionNodes.Add(mStart); solutionNodes.Add(mEnd); openNodes.Add(mStart); // 1) Add the starting square (or node) to the open list. while (openNodes.Count > 0) // 2) Repeat the following: { openNodes.Sort((p1, p2) => p1.F.CompareTo(p2.F)); PGNode current = openNodes[0]; // a) We refer to this as the current square.) if (current == mEnd) { while (current != null) { solutionNodes.Add(current); current = current.Parent; } return solutionNodes; } openNodes.Remove(current); closedNodes.Add(current); // b) Switch it to the closed list. List<PGNode> neighborNodes = current.GetNeighborNodes(); double cost = 0; bool isCostBetter = false; for (int i = 0; i < neighborNodes.Count; i++) { PGNode neighbor = neighborNodes[i]; cost = current.G + 10; isCostBetter = false; if (neighbor.Passable == false || closedNodes.Contains(neighbor)) continue; // If it is not walkable or if it is on the closed list, ignore it. if (openNodes.Contains(neighbor) == false) { openNodes.Add(neighbor); // If it isn’t on the open list, add it to the open list. isCostBetter = true; } else if (cost < neighbor.G) { isCostBetter = true; } if (isCostBetter) { neighbor.Parent = current; // Make the current square the parent of this square. neighbor.G = cost; neighbor.H = GetManhattanHeuristic(current, neighbor); } } } return null; } Here's the heuristic I'm using: private static double GetManhattanHeuristic(PGNode mStart, PGNode mEnd) { return Math.Abs(mStart.X - mEnd.X) + Math.Abs(mStart.Y - mEnd.Y); } What am I doing wrong? It's an entire day I keep looking at the same code.
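
    A hedged sketch of the usual fixes: List.Contains is O(n) and re-sorting the open list on every iteration is O(n log n), which dominates on an 80x80 grid. A HashSet gives O(1) membership tests for the closed list, and a SortedSet (or a binary heap) pops the cheapest open node directly; the unique counter breaks F-score ties so the PGNode element is never compared.

        var closed = new HashSet<PGNode>();
        var open = new SortedSet<(double F, int Tick, PGNode Node)>();
        int tick = 0;
        open.Add((mStart.F, tick++, mStart));
        while (open.Count > 0)
        {
            var best = open.Min;        // cheapest node, no full sort required
            open.Remove(best);
            PGNode current = best.Node;
            closed.Add(current);        // closed.Contains(...) is now O(1)
            // ... expand neighbours as before, re-adding them with their updated F ...
        }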

  • How is this lighting effect done?

    - by Mike
    This is the most beautiful 2d lighting I have ever seen. Does anyone know how he went about doing it? http://www.youtube.com/watch?v=BIQRhOFkvQY http://www.youtube.com/watch?v=tnTYXPuecMs http://www.youtube.com/watch?v=rhC_jVM8IYU http://www.youtube.com/watch?v=_Aw5BdjWqqU Or download it here: http://grantkot.com/PollutedPlanet/publish.htm edit: I am not asking how the particles are simulated; I don't care about the physics.

  • loading 3d model data into buffers

    - by mulletdevil
    I am using Assimp to load 3D model data. I have noticed that each loaded model is made up of several meshes. I was wondering: should each mesh have its own vertex/index buffer, or should there just be one for the whole model? Looking through the loaded index data seems to suggest that I will need a vertex buffer per mesh, but I'm not 100% sure. I am using C++ and DirectX9. Thank you, Mark

  • Looking for a simple web interface with Subversion support and a ticket/issue tracker [closed]

    - by Stefan Andre Brannfjell
    I am working on a small project and we have a few programmers on the job. We are using Subversion to commit updates and keep all developers up to date on their workstations. However, we have yet to find a suitable web interface for it. I have tried Redmine, but the installation process was extremely bothersome and involved. Once I got it to work, I found that it was slow and did not meet my expectations, and it seems a bit complex for our needs. I would prefer a solution that supports the lighttpd web server; however, that seems to be very hard to come by, as those I have found seem to support only Apache. Functionality I wish for the website: - login to an SVN account - view SVN logs - view & create issues, todo lists etc. - view SVN diffs Do you have any open source recommendations that I can try out? I will appreciate any kind of reply. :) Edit: I wish to host the website on our own servers.

  • Texture and Lighting Issue in 3D world

    - by noah
    Im using OpenGL ES 1.1 for iPhone. I'm attempting to implement a skybox in my 3d world and started out by following one of Jeff Lamarches tutorials on creating textures. Heres the tutorial: iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-part-6_25.html Ive successfully added the image to my 3d world but am not sure why the lighting on the other shapes has changed so much. I want the shapes to be the original color and have the image in the background. Before: https://www.dropbox.com/s/ojmb8793vj514h0/Screen%20Shot%202012-10-01%20at%205.34.44%20PM.png After: https://www.dropbox.com/s/8v6yvur8amgudia/Screen%20Shot%202012-10-01%20at%205.35.31%20PM.png Heres the init OpenGL: - (void)initOpenGLES1 { glShadeModel(GL_SMOOTH); // Enable lighting glEnable(GL_LIGHTING); // Turn the first light on glEnable(GL_LIGHT0); const GLfloat lightAmbient[] = {0.2, 0.2, 0.2, 1.0}; const GLfloat lightDiffuse[] = {0.8, 0.8, 0.8, 1.0}; const GLfloat matAmbient[] = {0.3, 0.3, 0.3, 0.5}; const GLfloat matDiffuse[] = {1.0, 1.0, 1.0, 1.0}; const GLfloat matSpecular[] = {1.0, 1.0, 1.0, 1.0}; const GLfloat lightPosition[] = {0.0, 0.0, 1.0, 0.0}; const GLfloat lightShininess = 100.0; //Configure OpenGL lighting glEnable(GL_LIGHTING); glEnable(GL_LIGHT0); glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, matAmbient); glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, matDiffuse); glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, matSpecular); glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, lightShininess); glLightfv(GL_LIGHT0, GL_AMBIENT, lightAmbient); glLightfv(GL_LIGHT0, GL_DIFFUSE, lightDiffuse); glLightfv(GL_LIGHT0, GL_POSITION, lightPosition); // Define a cutoff angle glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 40.0); // Set the clear color glClearColor(0, 0, 0, 1.0f); // Projection Matrix config glMatrixMode(GL_PROJECTION); glLoadIdentity(); CGSize layerSize = self.view.layer.frame.size; // Swapped height and width for landscape mode gluPerspective(45.0f, (GLfloat)layerSize.height / (GLfloat)layerSize.width, 0.1f, 750.0f); [self initSkyBox]; // Modelview Matrix config glMatrixMode(GL_MODELVIEW); glLoadIdentity(); // This next line is not really needed as it is the default for OpenGL ES glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); glDisable(GL_BLEND); // Enable depth testing glEnable(GL_DEPTH_TEST); glDepthFunc(GL_LESS); glDepthMask(GL_TRUE); } Heres the drawSkybox that gets called in the drawFrame method: -(void)drawSkyBox { glDisable(GL_LIGHTING); glDisable(GL_DEPTH_TEST); glEnableClientState(GL_VERTEX_ARRAY); glEnableClientState(GL_NORMAL_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); static const SSVertex3D vertices[] = { {-1.0, 1.0, -0.0}, { 1.0, 1.0, -0.0}, {-1.0, -1.0, -0.0}, { 1.0, -1.0, -0.0} }; static const SSVertex3D normals[] = { {0.0, 0.0, 1.0}, {0.0, 0.0, 1.0}, {0.0, 0.0, 1.0}, {0.0, 0.0, 1.0} }; static const GLfloat texCoords[] = { 0.0, 0.5, 0.5, 0.5, 0.0, 0.0, 0.5, 0.0 }; glLoadIdentity(); glTranslatef(0.0, 0.0, -3.0); glBindTexture(GL_TEXTURE_2D, texture[0]); glVertexPointer(3, GL_FLOAT, 0, vertices); glNormalPointer(GL_FLOAT, 0, normals); glTexCoordPointer(2, GL_FLOAT, 0, texCoords); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); glDisableClientState(GL_VERTEX_ARRAY); glDisableClientState(GL_NORMAL_ARRAY); glDisableClientState(GL_TEXTURE_COORD_ARRAY); glEnable(GL_LIGHTING); glEnable(GL_DEPTH_TEST); } Heres the init Skybox: -(void)initSkyBox { // Turn necessary features on glEnable(GL_TEXTURE_2D); glEnable(GL_BLEND); glBlendFunc(GL_ONE, 
GL_SRC_COLOR); // Bind the number of textures we need, in this case one. glGenTextures(1, &texture[0]); // create a texture obj, give unique ID glBindTexture(GL_TEXTURE_2D, texture[0]); // load our new texture name into the current texture glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR); NSString *path = [[NSBundle mainBundle] pathForResource:@"space" ofType:@"jpg"]; NSData *texData = [[NSData alloc] initWithContentsOfFile:path]; UIImage *image = [[UIImage alloc] initWithData:texData]; GLuint width = CGImageGetWidth(image.CGImage); GLuint height = CGImageGetHeight(image.CGImage); CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); void *imageData = malloc( height * width * 4 ); // times 4 because will write one byte for rgb and alpha CGContextRef cgContext = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big ); // Flip the Y-axis CGContextTranslateCTM (cgContext, 0, height); CGContextScaleCTM (cgContext, 1.0, -1.0); CGColorSpaceRelease( colorSpace ); CGContextClearRect( cgContext, CGRectMake( 0, 0, width, height ) ); CGContextDrawImage( cgContext, CGRectMake( 0, 0, width, height ), image.CGImage ); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData); CGContextRelease(cgContext); free(imageData); [image release]; [texData release]; } Any help is greatly appreciated.

  • convert orientation vec3 to a rotation matrix

    - by lapin
    I've got a normalized vec3 that represents an orientation. Each frame of animation, an object's orientation changes slightly, so I add a delta vector to the orientation vector and then normalize to find the new orientation. I'd like to convert the vec3 that represents an orientation into a rotation matrix that I can use to orient my object. If it helps, my object is a cone, and I'd like to rotate it about the pointy end, not from its center :) PS I know I should use quaternions because of the gimbal lock problem. If someone can explain quats too, that'd be great :)
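
    A minimal math sketch, in C# syntax: treat the orientation vector as "forward", pick a world up vector (with a fallback when the two are nearly parallel), and build an orthonormal basis with cross products; the three axes are the columns of the rotation matrix. To rotate about the cone's tip rather than its center, translate the tip to the origin, apply this rotation, then translate back.

        Vector3 forward = Vector3.Normalize(orientation);
        Vector3 up = new Vector3(0, 1, 0);
        if (Math.Abs(Vector3.Dot(forward, up)) > 0.99f)
            up = new Vector3(1, 0, 0); // fallback when forward is nearly vertical
        Vector3 right = Vector3.Normalize(Vector3.Cross(up, forward));
        Vector3 newUp = Vector3.Cross(forward, right);
        // rotation matrix columns: [ right | newUp | forward ]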

  • Scrolling background stops after a while?

    - by Lewis
    Can anyone tell me where my maths is wrong please, it stops scrolling after awhile. if (background.position.y < background2.position.y) { background.position = ccp(background.contentSize.width / 2, background.position.y - 50 * delta); background2.position = ccp(background.contentSize.width / 2, background.position.y + background.contentSize.height); } else { background.position = ccp(background2.contentSize.width / 2, background2.position.y - 50 * delta); background.position = ccp(background2.contentSize.width / 2, background2.position.y + background.contentSize.height); } //reset if (background.position.y <-background.contentSize.height / 2) { background.position = ccp(background.contentSize.width / 2 ,background2.position.y + background2.contentSize.height); } else if (background2.position.y < -background2.contentSize.height / 2) { background2.position = ccp(background2.contentSize.width / 2, background.position.y + background.contentSize.height); }

  • 2D Tile Map for Platformer, XML or SQLite?

    - by Stephen Tierney
    I'm developing a 2D platformer with some uni friends. We've based it on the XNA Platformer Starter Kit, which uses .txt files to store the tile map. While this is simple, it doesn't give us enough control and flexibility with level design. I'm doing some research into whether to store level data in an XML file or in a database like SQLite. Which would be best for this situation? Does either have any drawbacks (performance etc.) compared to the other?
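
    A hedged sketch of the XML option (hypothetical schema): a tile layer stored as rows of characters stays human-editable like the .txt format, while attributes add the flexibility the starter kit lacks, and LINQ to XML loads it in a few lines.

        // level1.xml (hypothetical):
        // <level width="20" height="15">
        //   <row>....................</row>
        //   <row>.....#######...X....</row>
        // </level>
        using System.Linq;
        using System.Xml.Linq;
        var doc = XDocument.Load("level1.xml");
        string[] rows = doc.Root.Elements("row").Select(r => r.Value).ToArray();
        char tile = rows[y][x]; // same per-cell lookup the .txt loader used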

  • Tool for creating complex paths?

    - by TerryB
    I want to create some fairly complex predefined paths for my AI sprites to follow. I'll need to use curves, splines etc to get the effect I want. Is there a drawing tool out there that will allow me to draw such curves, "mesh" them by placing lots of points along them at some defined density and then output the coordinates of all of those points for me? I could write this tool myself but hopefully one of the drawing packages can do this? Cheers!
