Search Results

Search found 24037 results on 962 pages for 'game design'.

Page 512/962

  • Checkers AI in Visual Basic not working [on hold]

    - by Eugene Galkine
    I am trying to make checkers in Visual Basic with AI. I am using the minimax algorithm (or at least what I understand of it) and it works, except the AI plays as if it is trying to lose, and when I switch the min and the max around the results are IDENTICAL. I have been trying to fix it for over a week now and would really appreciate it if someone could help me out here. I have 3 years of programming experience (in Java; only about a month of VB experience) and I am normally able to solve all my errors on my own, so I don't know why I can't get this to work. The program is not at all optimized at this point and is over 1.2K lines long, so here is the entire VB project instead: https://www.dropbox.com/sh/evii0jendn93ir2/9fntwH2dNW I would really appreciate any help I could get.
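
    Since the asker mentions a Java background, here is a hedged, minimal minimax sketch in Java for comparison against the VB version. Board, Move, isGameOver(), evaluate(), legalMoves() and apply() are hypothetical placeholders, not types from the project; the point is only the min/max alternation.

    ```java
    // Minimal minimax sketch. Board, Move, evaluate(), legalMoves() and apply()
    // are hypothetical placeholders, not types from the linked project.
    static int minimax(Board board, int depth, boolean maximizing) {
        if (depth == 0 || board.isGameOver()) {
            return board.evaluate();          // score from the AI player's point of view
        }
        int best = maximizing ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (Move m : board.legalMoves()) {
            Board next = board.apply(m);      // apply() returns a copy with the move played
            int score = minimax(next, depth - 1, !maximizing);
            best = maximizing ? Math.max(best, score)
                              : Math.min(best, score);
        }
        return best;
    }
    ```

    A common source of "plays to lose" behaviour is evaluating the board from the side to move instead of always from the AI's side; the sketch keeps evaluate() fixed to the AI's perspective and only flips the maximizing flag.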

    Read the article

  • Unity3D: default parameters in C# script

    - by Heisenbug
    According to this thread, it seems that default parameters aren't supported by C# scripts in the Unity3D environment. Declaring an optional parameter in a C# script makes the Mono IDE complain about it: void Foo(int first, int second = 10) // this line is marked as wrong inside the Mono IDE Anyway, if I ignore the error message from Mono and run the script in Unity, it works without reporting any error in the Unity Console. Could anyone clarify this issue a little? Particularly: Are default parameters allowed inside C# scripts? If yes, are they supported on all platforms? Why does Mono complain about them if they actually work?

    Read the article

  • Can't get LWJGL lighting to work

    - by Zarkonnen
    I'm trying to enable lighting in lwjgl according to the method described by NeHe and this post. However, no matter what I try, all faces of my shapes always receive the same amount of light, or, in the case of a spinning shape, the amount of lighting seems to oscillate. All faces are lit up by the same amount, which changes as the pyramid rotates. Concrete example (apologies for the length): Note how all panels are always the same brightness, but the brightness varies with the pyramid's rotation. This is using lwjgl 2.8.3 on Mac OS X. package com; import com.zarkonnen.lwjgltest.Main; import org.lwjgl.opengl.Display; import org.lwjgl.opengl.DisplayMode; import org.lwjgl.opengl.GL11; import org.newdawn.slick.opengl.Texture; import org.newdawn.slick.opengl.TextureLoader; import org.lwjgl.util.glu.*; import org.lwjgl.input.Keyboard; import java.nio.FloatBuffer; import java.nio.ByteBuffer; import java.nio.ByteOrder; /** * * @author penguin */ public class main { public static void main(String[] args) { try { Display.setDisplayMode(new DisplayMode(800, 600)); Display.setTitle("3D Pyramid"); Display.create(); } catch (Exception e) { } initGL(); float rtri = 0.0f; Texture texture = null; try { texture = TextureLoader.getTexture("png", Main.class.getResourceAsStream("tex.png")); } catch (Exception ex) { ex.printStackTrace(); } while (!Display.isCloseRequested()) { // Draw a Triangle :D GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT); GL11.glLoadIdentity(); GL11.glTranslatef(0.0f, 0.0f, -10.0f); GL11.glRotatef(rtri, 0.0f, 1.0f, 0.0f); texture.bind(); GL11.glBegin(GL11.GL_TRIANGLES); GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(0.0f, 1.0f, 0.0f); GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, 1.0f); GL11.glTexCoord2f(1.0f, -1.0f); GL11.glVertex3f(1.0f, -1.0f, 1.0f); GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(0.0f, 1.0f, 0.0f); GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(1.0f, -1.0f, 1.0f); GL11.glTexCoord2f(1.0f, -1.0f); GL11.glVertex3f(1.0f, -1.0f, -1.0f); GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(0.0f, 1.0f, 0.0f); GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, -1.0f); GL11.glTexCoord2f(1.0f, -1.0f); GL11.glVertex3f(1.0f, -1.0f, -1.0f); GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(0.0f, 1.0f, 0.0f); GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, -1.0f); GL11.glTexCoord2f(1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, 1.0f); GL11.glEnd(); GL11.glBegin(GL11.GL_QUADS); GL11.glVertex3f(1.0f, -1.0f, 1.0f); GL11.glVertex3f(1.0f, -1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, 1.0f); GL11.glEnd(); Display.update(); rtri += 0.05f; // Exit-Key = ESC boolean exitPressed = Keyboard.isKeyDown(Keyboard.KEY_ESCAPE); if (exitPressed) { System.out.println("Escape was pressed!"); Display.destroy(); } } Display.destroy(); } private static void initGL() { GL11.glEnable(GL11.GL_LIGHTING); GL11.glMatrixMode(GL11.GL_PROJECTION); GL11.glLoadIdentity(); GLU.gluPerspective(45.0f, ((float) 800) / ((float) 600), 0.1f, 100.0f); GL11.glMatrixMode(GL11.GL_MODELVIEW); GL11.glLoadIdentity(); GL11.glEnable(GL11.GL_TEXTURE_2D); GL11.glShadeModel(GL11.GL_SMOOTH); GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f); GL11.glClearDepth(1.0f); GL11.glEnable(GL11.GL_DEPTH_TEST); GL11.glDepthFunc(GL11.GL_LEQUAL); GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST); float lightAmbient[] = {0.5f, 0.5f, 0.5f, 1.0f}; // Ambient Light Values float lightDiffuse[] = {1.0f, 1.0f, 1.0f, 1.0f}; // Diffuse Light Values float 
lightPosition[] = {0.0f, 0.0f, 2.0f, 1.0f}; // Light Position ByteBuffer temp = ByteBuffer.allocateDirect(16); temp.order(ByteOrder.nativeOrder()); GL11.glLight(GL11.GL_LIGHT1, GL11.GL_AMBIENT, (FloatBuffer) temp.asFloatBuffer().put(lightAmbient).flip()); // Setup The Ambient Light GL11.glLight(GL11.GL_LIGHT1, GL11.GL_DIFFUSE, (FloatBuffer) temp.asFloatBuffer().put(lightDiffuse).flip()); // Setup The Diffuse Light GL11.glLight(GL11.GL_LIGHT1, GL11.GL_POSITION, (FloatBuffer) temp.asFloatBuffer().put(lightPosition).flip()); // Position The Light GL11.glEnable(GL11.GL_LIGHT1); // Enable Light One } }
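
    One hedged observation on the code above: it never calls glNormal3f, and fixed-function lighting needs a normal per vertex (or per face). With only the default normal, every face gets the same brightness, and that brightness changes as the modelview matrix rotates, which matches the oscillation described. A minimal sketch of supplying face normals (the normal value shown is computed for the first pyramid face; the rest of the faces would need their own):

    ```java
    // Hedged sketch: supply a face normal before each face's vertices so the
    // fixed-function pipeline can shade faces differently.
    GL11.glEnable(GL11.GL_NORMALIZE);   // keep normals unit length under rotation

    GL11.glBegin(GL11.GL_TRIANGLES);
    // front face (0,1,0), (-1,-1,1), (1,-1,1): its normal is normalize(cross(v1 - v0, v2 - v0))
    GL11.glNormal3f(0.0f, 0.4472f, 0.8944f);
    GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f( 0.0f,  1.0f, 0.0f);
    GL11.glTexCoord2f(0.0f, 0.0f); GL11.glVertex3f(-1.0f, -1.0f, 1.0f);
    GL11.glTexCoord2f(1.0f, 0.0f); GL11.glVertex3f( 1.0f, -1.0f, 1.0f);
    // ...and similarly one normal for each remaining face of the pyramid and the base quad
    GL11.glEnd();
    ```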

    Read the article

  • Transform coordinates from 3d to 2d without matrix or built in methods

    - by Thomas
    Not too long ago I started to create a small 3D engine in JavaScript to combine with an HTML5 canvas. One of the issues I run into is how to transform 3D coordinates to 2D coordinates. Since I cannot use matrices or built-in transformation methods, I need another way. I've tried implementing the following explanation + pseudo code: http://freespace.virgin.net/hugo.elias/routines/3d_to_2d.htm Unfortunately no luck there. I've replaced all the input variables with data from my own camera and object classes. I have the following data: an object with a rotation, a position vector and an array of four 3D coords (it's just a plane); a camera with a position and rotation vector; the viewport - a square 600 x 600 surface. The example uses a zoom factor, which I've set to 1. Most hits on Google either use matrix calculations or don't implement camera rotation. The basic transformation should be like this: screen.x = x / z * zoom screen.y = y / z * zoom Can anyone point me in the right direction or explain to me how to achieve this? edit: Thanks for all your posts, I haven't been able to apply all this to my project yet but I hope to do so soon.
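
    A hedged sketch of the full chain the linked article implies, written out as equations. Here c is the camera position, θ_y and θ_x the camera yaw and pitch, W and H the viewport size, and zoom the zoom factor; the exact signs depend on the axis convention, so treat this as one possible convention rather than the only one.

    $$(d_x,\, d_y,\, d_z) = (x - c_x,\; y - c_y,\; z - c_z)$$
    $$x_1 = d_x\cos\theta_y - d_z\sin\theta_y,\qquad z_1 = d_x\sin\theta_y + d_z\cos\theta_y$$
    $$y_1 = d_y\cos\theta_x - z_1\sin\theta_x,\qquad z_2 = d_y\sin\theta_x + z_1\cos\theta_x$$
    $$x_{screen} = \frac{W}{2} + \mathrm{zoom}\cdot\frac{x_1}{z_2}\cdot\frac{W}{2},\qquad y_{screen} = \frac{H}{2} - \mathrm{zoom}\cdot\frac{y_1}{z_2}\cdot\frac{H}{2}$$

    With the 600 x 600 viewport and zoom = 1 this reduces to the screen.x = x / z * zoom form above once the point is expressed in camera space.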

    Read the article

  • Incomplete mesh using DrawIndexedPrimitives after rotating mesh

    - by user1278255
    Through help on this site I was able to draw the triangles of an unrotated, nonscaled nontransformed mesh created in Blender and exported to OBJ, accurately imported through Assimp and rendered in XNA Graphics. However after applying rotation on a single axis in Blender(Z) and adding materials(I wanted to test loading of materials through Assimp) the same mesh appears incomplete. Is something wrong with my view matrix or is it something else? This is what the unrotated mesh looks like: http://www.4shared.com/photo/qXNUSvxtba/okcube.html Here is the rotated mesh: http://www.4shared.com/photo/HAys2rWvba/badcube.html Camera, View and Projection are defined as follows: cameraPos = new Vector3(0, 5, 9); viewMatrix = Matrix.CreateLookAt(cameraPos, new Vector3(0, 0, 1), new Vector3(0, 1, 0)); projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, device.Viewport.AspectRatio, 1.0f, 200.0f); Rendering is done through this code: device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.DarkSlateBlue, 1.0f, 0); effect = new BasicEffect(GraphicsDevice); effect.VertexColorEnabled = true; effect.View = viewMatrix; effect.Projection = projectionMatrix; effect.World = Matrix.Identity; foreach (EffectPass pass in effect.CurrentTechnique.Passes) { pass.Apply(); device.SetVertexBuffer(vertexBuffer); device.Indices = indexBuffer; device.DrawIndexedPrimitives(Microsoft.Xna.Framework.Graphics.PrimitiveType.TriangleList, 0, 0, oScene.Meshes[0].VertexCount, 0, mMesh.FaceCount); } base.Draw(gameTime);

    Read the article

  • Implementing automatic navigation mesh generation for a 2D top-down map?

    - by J2V
    I am currently in the middle of implementing A* pathfinding for enemies. In order to implement the actual A* logic, I need a navigation mesh for my map. I am working on a 2D top-down RPG map. The world is static, meaning there is no requirement for dynamic runtime mesh generation. My world objects are pixel based, not tile based, and have associated data with them such as scale, rotation, origin etc. I will obviously need some vertex data generated from my world objects - maybe polygon generation from color data? I could create a colormap with objects for my whole map, but I have no idea how to begin creating nav mesh polygons. What would actual navigation mesh generation look like with this kind of information available? Can anyone maybe point to some great resources? I have looked into some 3D nav mesh tools, but they seem overly complex for my situation and also have a lot of their required data available from models. Thanks a lot in advance! I have been trying to get my head around it for some time now.

    Read the article

  • OpenGL flickering near the edges

    - by Daniel
    I am trying to simulate particles moving around the scene with OpenCL for computation and OpenGL for rendering with GLUT. There is no OpenCL-OpenGL interop yet, so the drawing is done in the older fixed pipeline way. Whenever circles get close to the edges, they start to flicker. The drawing should draw a part of the circle on the top of the scene and a part on the bottom. The effect is the following: The balls you see on the bottom should be one part on the bottom and one part on the top. Wrapping around the scene, so to say, but they constantly flicker. The code for drawing them is: void Scene::drawCircle(GLuint index){ glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glTranslatef(pos.at(2*index),pos.at(2*index+1), 0.0f); glBegin(GL_TRIANGLE_FAN); GLfloat incr = (2.0 * M_PI) / (GLfloat) slices; glColor3f(0.8f, 0.255f, 0.26f); glVertex2f(0.0f, 0.0f); glColor3f(1.0f, 0.0f, 0.0f); for(GLint i = 0; i <=slices; ++i){ GLfloat x = radius * sin((GLfloat) i * incr); GLfloat y = radius * cos((GLfloat) i * incr); glVertex2f(x, y); } glEnd(); } If it helps, this is the reshape method: void Scene::reshape(GLint width, GLint height){ if(0 == height) height = 1; //Prevent division by zero glViewport(0, 0, width, height); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluOrtho2D(xmin, xmax, ymin, ymax); std::cout << xmin << " " << xmax << " " << ymin << " " << ymax << std::endl; }
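
    One hedged way to get true wrap-around instead of popping at the edges is to draw extra copies of any circle that overlaps an edge, shifted by the scene extents. The sketch below is written in Java-flavoured syntax to keep one language across these notes, but the GL logic is identical in C++; xmin, xmax, ymin, ymax are the same extents passed to gluOrtho2D above, and drawCircleAt is a hypothetical stand-in for the existing triangle-fan code translated to the given centre.

    ```java
    // Hedged sketch: draw wrapped copies of a circle whose centre is (x, y),
    // so both halves of an edge-straddling circle appear in the same frame.
    void drawCircleWrapped(float x, float y, float radius) {
        float w = xmax - xmin, h = ymax - ymin;   // scene extents from reshape()
        for (int ix = -1; ix <= 1; ix++) {
            for (int iy = -1; iy <= 1; iy++) {
                float cx = x + ix * w, cy = y + iy * h;
                // skip copies that cannot intersect the viewport
                if (cx + radius < xmin || cx - radius > xmax) continue;
                if (cy + radius < ymin || cy - radius > ymax) continue;
                drawCircleAt(cx, cy, radius);     // hypothetical: the existing fan code at (cx, cy)
            }
        }
    }
    ```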

    Read the article

  • Parent variable inheritance methods Unity3D/C#

    - by Timothy Williams
    I'm creating a system where there is a base "Hero" class and each hero inherits from it with their own stats and abilities. What I'm wondering is: how can I access a variable from one of the child scripts in the parent script (something like maxMP = MP), or call a function in the parent class that is specified in each child class (the parent's update calls alarms(), and each child class specifies what alarms() does)? Is this possible at all, or not? Thanks.
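
    This is the standard virtual/abstract method pattern. A hedged sketch of it in Java follows (Hero, FireMage, maxMP, update() and alarms() are hypothetical names); Unity's C# supports the same idea with abstract or virtual members and the override keyword.

    ```java
    // Hedged sketch of the pattern in Java; in Unity C# the same shape uses
    // abstract/virtual methods and override. All names here are hypothetical.
    abstract class Hero {
        protected int maxMP;                 // child classes set this in their constructor

        public final void update() {
            alarms();                        // parent code calls the child's implementation
        }

        protected abstract void alarms();    // each hero specifies its own behaviour
    }

    class FireMage extends Hero {
        FireMage() { maxMP = 120; }          // child-specific stat visible to parent code

        @Override
        protected void alarms() {
            // hero-specific logic here
        }
    }
    ```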

    Read the article

  • Maximum number of controllers Unity3D can handle

    - by N0xus
    I've been trying to find out the maximum number of Xbox controllers Unity3D can handle in one editor. I know that through networking Unity is capable of having as many players as your hardware can handle, but I want to avoid networking as much as possible. So, on a single computer and a single screen (think Bomberman and Super Smash Brothers), how many Xbox controllers can Unity3D support? I have done work in XNA and remember it only being capable of supporting 4, but for the life of me I can't find any information that tells me how many Unity can support.

    Read the article

  • How do I implement collision detection with a sprite walking up a rocky-terrain hill?

    - by detectivecalcite
    I'm working in SDL and have bounding rectangles for collisions set up for each frame of the sprite's animation. However, I recently stumbled upon the issue of putting together collisions for characters walking up and down hills/slopes with irregularly curved or rocky terrain - what's a good way to do collisions for that type of situation? Per-pixel? Loading up the points of the incline and doing player-line collision checking? Should I use bounding rectangles in general or circle collision detection?

    Read the article

  • Animation file format

    - by Paul
    I'm trying to make a simple 2D animation file format. It'll be very rudimentary: only an XML file containing some parameters (such as frame duration) and metadata, and some images, each representing a frame. I'd like to have the whole animation (frames and XML document) packed in a single file. How do you suggest I do that? What libraries are there that would allow easy access to the files inside the animation file itself? The language I'm using is C++ and the platform is Windows, but I'd rather not use a platform dependent library, if possible.

    Read the article

  • Transformation matrix that maps a window

    - by gbhall
    I'm currently learning OpenGL at uni, and they give us questions to help us learn (these are not worth any marks), however I'm stuck on this one question and would have to travel over an hour and a half to uni for an answer. How do I do this question? Please include as many steps as you can; I want to be able to follow exactly how to do it. Find the transformation that maps a window whose lower left corner is at (1,1) and upper right corner is at (3,5) onto: 1) the entire device screen, whose dimension is (600, 500); 2) a viewport that has its lower left corner at (100,100) and upper right corner at (400,400). Edit: Sorry, I should have added that I am meant to find the matrix, so no code.
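
    A hedged worked version, assuming the usual window-to-viewport mapping with no y-flip: scale by the ratio of the extents, then translate so the window's lower-left corner lands on the viewport's lower-left corner.

    $$x' = x_{v\min} + \frac{x_{v\max}-x_{v\min}}{x_{w\max}-x_{w\min}}\,(x - x_{w\min}),\qquad y' = y_{v\min} + \frac{y_{v\max}-y_{v\min}}{y_{w\max}-y_{w\min}}\,(y - y_{w\min})$$

    For the full screen (0,0) to (600,500): scale factors 600/2 = 300 and 500/4 = 125, giving in homogeneous form

    $$M_1 = \begin{pmatrix} 300 & 0 & -300 \\ 0 & 125 & -125 \\ 0 & 0 & 1 \end{pmatrix}$$

    For the viewport (100,100) to (400,400): scale factors 300/2 = 150 and 300/4 = 75, giving

    $$M_2 = \begin{pmatrix} 150 & 0 & -50 \\ 0 & 75 & 25 \\ 0 & 0 & 1 \end{pmatrix}$$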

    Read the article

  • Looking for a simple web interface with Subversion support and ticket/issue tracker [closed]

    - by Stefan Andre Brannfjell
    I am working on a small project and we have a few programmers on the job. We are using Subversion to commit updates and keep all developers up to date on their workstations. However, we have yet to find a suitable web interface for it. I have tried Redmine, but the installation process was extremely bothersome and advanced, and once I got it to work I found it slow; it did not meet my expectations and seems a bit complex for our needs. I would prefer a solution that supports the lighttpd web server, but that seems to be very hard to come by - those I have found seem to only have Apache support. Functionality I wish for the website: login to an svn account; view svn logs; view & create issues, todo lists etc; view svn diffs. Do you have any open source recommendations that I can try out? I will appreciate any kind of reply. :) Edit: I wish to host the website on our own servers.

    Read the article

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas Tool( D3DXUVAtlasCreate() ). I've succeeded in generating an atlas, however, when I try to render the mesh object using the atlas the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a uv atlas for a cube: struct sVertexPosNormTex { D3DXVECTOR3 vPos, vNorm; D3DXVECTOR2 vUV; sVertexPosNormTex(){} sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv ) { vPos = v; vNorm = n; vUV = uv; } ~sVertexPosNormTex() { } }; // create a light map texture to fill programatically hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &pLightmap ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr ); return hr; } // get the zero level surface from the texture IDirect3DSurface9 *pS = NULL; pLightmap->GetSurfaceLevel( 0, &pS ); // clear surface pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) ); // load a sample mesh DWORD dwcMaterials = 0; LPD3DXBUFFER pMaterialBuffer = NULL; V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice, &pAdjacency, &pMaterialBuffer, NULL, &dwcMaterials, &g_pMesh ) ); // generate adjacency DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ]; g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency ); // create light map coordinates LPD3DXMESH pMesh = NULL; LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL; FLOAT resultStretch = 0; UINT numCharts = 0; hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency, NULL, NULL, NULL, NULL, NULL, 0, &pMesh, &pFacePartitioning, &pVertexRemapArray, &resultStretch, &numCharts ); if( SUCCEEDED( hr ) ) { // release and set mesh SAFE_RELEASE( g_pMesh ); g_pMesh = pMesh; // write mesh to file hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0, ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(), NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr ); } // fill the the light map hr = BuildLightmap( pS, g_pMesh ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr ); } } else { DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr ); } SAFE_RELEASE( pS ); SAFE_DELETE_ARRAY( pdwAdjacency ); SAFE_RELEASE( pFacePartitioning ); SAFE_RELEASE( pVertexRemapArray ); SAFE_RELEASE( pMaterialBuffer ); Here is code to fill lightmap texture: HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh ) { HRESULT hr = S_OK; // validate lightmap texture surface and mesh if( !pS || !pMesh ) return E_POINTER; // lock the mesh vertex buffer sVertexPosNormTex *pV = NULL; pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV ); // lock the mesh index buffer WORD *pI = NULL; pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI ); // get the lightmap texture surface description D3DSURFACE_DESC desc; pS->GetDesc( &desc ); // lock the surface rect to fill with color data D3DLOCKED_RECT rct; hr = pS->LockRect( &rct, NULL, 0 ); if( FAILED( hr ) ) { DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr ); return hr; } // iterate the pixels of the lightmap texture // check each pixel to see if it lies between the uv coordinates of a cube face BYTE *pBuffer = ( BYTE* )rct.pBits; for( UINT y = 0; y < desc.Height; ++y ) { BYTE* pBufferRow = ( BYTE* )pBuffer; for( UINT x = 0; x 
< desc.Width * 4; x+=4 ) { // determine the pixel's uv coordinate D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f, y / ( float )desc.Height + 0.5f / 128.0f ); // for each face of the mesh // check to see if the pixel lies within the face's uv coordinates for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i +=3 ) { sVertexPosNormTex v[ 3 ]; v[ 0 ] = pV[ pI[ i + 0 ] ]; v[ 1 ] = pV[ pI[ i + 1 ] ]; v[ 2 ] = pV[ pI[ i + 2 ] ]; if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) ) { // the pixel lies b/t the uv coordinates of a cube face // light contribution functions aren't needed yet //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos, v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ); //D3DXVECTOR3 vNormal = v[ 0 ].vNorm; // set the color of this pixel red( for demo ) BYTE ba[] = { 0, 0, 255, 255, }; //ComputeContribution( vPos, vNormal, g_sLight, ba ); // copy the byte array into the light map texture memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) ); } } } // go to next line of the texture pBuffer += rct.Pitch; } // unlock the surface rect pS->UnlockRect(); // unlock mesh vertex and index buffers pMesh->UnlockIndexBuffer(); pMesh->UnlockVertexBuffer(); // write the surface to file hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL ); if( FAILED( hr ) ) DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr ); return hr; } bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1, const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p ) { // compute vectors D3DXVECTOR2 v0 = t1 - t0, v1 = t2 - t0, v2 = p - t0; float f00 = D3DXVec2Dot( &v0, &v0 ); float f01 = D3DXVec2Dot( &v0, &v1 ); float f02 = D3DXVec2Dot( &v0, &v2 ); float f11 = D3DXVec2Dot( &v1, &v1 ); float f12 = D3DXVec2Dot( &v1, &v2 ); // Compute barycentric coordinates float invDenom = 1 / ( f00 * f11 - f01 * f01 ); float fU = ( f11 * f02 - f01 * f12 ) * invDenom; float fV = ( f00 * f12 - f01 * f02 ) * invDenom; // Check if point is in triangle if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) ) return true; return false; } Screenshot Lightmap I believe the problem comes from the difference between the lightmap uv coordinates and the pixel center coordinates...for example, here are the lightmap uv coordinates( generated by D3DXUVAtlasCreate() ) for a specific face( tri ) within the mesh, keep in mind that I'm using the mesh uv coordinates to write the pixels for the texture: v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 ); v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 ); v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 ); the lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are: float halfPixel = 0.5 / 128 = 0.00390625; D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel ); will the mapping and sampling of the lightmap texture will require that an offset be taken into account or that the uv coordinates are snapped to the pixel centers..? ...Any ideas on the best way to approach this situation would be appreciated...What are the common practices?

    Read the article

  • Constructive criticism on my linear sampling Gaussian blur

    - by Aequitas
    I've been attempting to implement a gaussian blur utilising linear sampling, I've come across a few articles presented on the web and a question posed here which dealt with the topic. I've now attempted to implement my own Gaussian function and pixel shader drawing reference from these articles. This is how I'm currently calculating my weights and offsets: int support = int(sigma * 3.0) weights.push_back(exp(-(0*0)/(2*sigma*sigma))/(sqrt(2*pi)*sigma)); total += weights.back(); offsets.push_back(0); for (int i = 1; i <= support; i++) { float w1 = exp(-(i*i)/(2*sigma*sigma))/(sqrt(2*pi)*sigma); float w2 = exp(-((i+1)*(i+1))/(2*sigma*sigma))/(sqrt(2*pi)*sigma); weights.push_back(w1 + w2); total += 2.0f * weights[i]; offsets.push_back(w1 / weights[i]); } for (int i = 0; i < support; i++) { weights[i] /= total; } Here is an example of my vertical pixel shader: vec3 acc = texture2D(tex_object, v_tex_coord.st).rgb*weights[0]; vec2 pixel_size = vec2(1.0 / tex_size.x, 1.0 / tex_size.y); for (int i = 1; i < NUM_SAMPLES; i++) { acc += texture2D(tex_object, (v_tex_coord.st+(vec2(0.0, offsets[i])*pixel_size))).rgb*weights[i]; acc += texture2D(tex_object, (v_tex_coord.st-(vec2(0.0, offsets[i])*pixel_size))).rgb*weights[i]; } gl_FragColor = vec4(acc, 1.0); Am I taking the correct route with this? Any criticism or potential tips to improving my method would be much appreciated.
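
    For reference, a hedged restatement of the linear-sampling identity from the articles mentioned above: a single bilinear fetch between texels i and i+1, whose discrete Gaussian weights are w_1 and w_2, reproduces both taps if the combined weight and offset are

    $$w_{12} = w_1 + w_2,\qquad o_{12} = \frac{i\,w_1 + (i+1)\,w_2}{w_1 + w_2}$$

    so the offset carries the texel indices, not just the ratio of the two weights, and the loop advances two texels per combined sample.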

    Read the article

  • Translating an Object to a certain Vector 3 in OpenGL and Java LWJGL

    - by aliasmk
    So after almost two hours, I got the hang of using glTranslated() (with Java and LWJGL). If I am correct, applying glTranslated to an object moves that object relative to the previously moved object. I believe the correct terms for this are local vs global, global being the one I want. I was wondering if there is a way to translate an object to a specific XYZ position, i.e. relative to the origin. Thanks! (Code or other details can be supplied if it helps, just let me know. Also sorry if this is a noob question, I'm very new to OpenGL.)
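
    A hedged sketch of two common ways to get "global" placement with the fixed pipeline: either reset the modelview matrix before translating, or save and restore it around each object so transforms don't accumulate. x, y, z and drawObject() below are hypothetical placeholders.

    ```java
    import static org.lwjgl.opengl.GL11.*;

    // Option 1: reset the modelview matrix, then place the object absolutely.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslated(x, y, z);        // now relative to the world origin, not the last object
    drawObject();                 // hypothetical draw call

    // Option 2: keep a shared camera/world matrix and isolate each object's transform.
    glPushMatrix();
    glTranslated(x, y, z);
    drawObject();
    glPopMatrix();                // restores the matrix for the next object
    ```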

    Read the article

  • Need ideas for an algorithm to draw irregular blotchy shapes

    - by Yttermayn
    I'm looking to draw irregular shapes on an x,y grid, and I'd like to come up with a simple, fast method if possible. My only idea so far is to draw a bunch of circles of random sizes very near each other, but at a random distance apart from a more or less central coordinate, then fill in any blank spaces. I realize this is a clunky, inelegant method; hopefully it gives you a rough idea of the kinds of rounded, random, blotchy shapes I'm shooting for. Please suggest methods to accomplish this - I'm not so much interested in code, I can noodle that part out myself.
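
    One hedged method, sketched in Java only to make the idea concrete: walk around a circle and perturb the radius with a few low-frequency sinusoids of random amplitude and phase, then fill the resulting closed polygon. All names below are hypothetical.

    ```java
    import java.util.Random;

    // Hedged sketch: build a closed "blob" outline with a smoothly varying radius.
    // Filling the returned polygon gives a rounded, irregular blotch.
    static float[][] makeBlob(float cx, float cy, float baseRadius, int points, long seed) {
        Random rng = new Random(seed);
        int harmonics = 4;
        float[] amp = new float[harmonics], phase = new float[harmonics];
        for (int h = 0; h < harmonics; h++) {
            amp[h] = baseRadius * 0.25f * rng.nextFloat() / (h + 1); // less energy at higher frequencies
            phase[h] = (float) (rng.nextFloat() * 2 * Math.PI);
        }
        float[][] outline = new float[points][2];
        for (int i = 0; i < points; i++) {
            double a = 2 * Math.PI * i / points;
            double r = baseRadius;
            for (int h = 0; h < harmonics; h++) {
                r += amp[h] * Math.sin((h + 1) * a + phase[h]);
            }
            outline[i][0] = cx + (float) (r * Math.cos(a));
            outline[i][1] = cy + (float) (r * Math.sin(a));
        }
        return outline;
    }
    ```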

    Read the article

  • Strange rendering in XNA/MonoGame

    - by Gerhman
    I am trying to render G-Code generated for a 3d-printer as the printed product by reading the file as line segments and the drawing cylinders with the diameter of the filament around the segment. I think I have managed to do this part right because the vertex I am sending to the graphics device appear to have been processed correctly. My problem I think lies somewhere in the rendering. What basically happens is that when I start rotating my model in the X or Y axis then it renders perfectly for half of the rotation but then for the other half it has this weird effect where you start seeing through the outer filament into some of the shapes inside. This effect is the strongest with X rotations though. Here is a picture of the part of the rotation that looks correct: And here is one that looks horrible: I am still quite new to XNA and/Monogame and 3d programming as a whole. I have no idea what could possibly be causing this and even less of an idea of what this type of behavior is called. I am guessing this has something to do with rendering so have added the code for that part: protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.Black); basicEffect.World = world; basicEffect.View = view; basicEffect.Projection = projection; basicEffect.VertexColorEnabled = true; basicEffect.EnableDefaultLighting(); GraphicsDevice.SetVertexBuffer(vertexBuffer); RasterizerState rasterizerState = new RasterizerState(); rasterizerState.CullMode = CullMode.CullClockwiseFace; rasterizerState.ScissorTestEnable = true; GraphicsDevice.RasterizerState = rasterizerState; foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes) { pass.Apply(); GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, vertexBuffer.VertexCount); } base.Draw(gameTime); } I don't know if it could be because I am shading something that does not really have a texture. I am using this custom vertex declaration I found on some tutorial that allows me to store a vertex with a position, color and normal: public struct VertexPositionColorNormal { public Vector3 Position; public Color Color; public Vector3 Normal; public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration ( new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0), new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0), new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0) ); } If any of you have ever seen this type of thing please help. Also, if you think that the problem might lay somewhere else in my code then please just request what part you would like to see in the comments section.

    Read the article

  • Multiple volumetric lights

    - by notabene
    I recently read the GPU Gems 3 article Volumetric Light Scattering as a Post-Process. I like the idea of adding a volumetric light property to the realtime renderer I'm working on. The question is: will it work for multiple lights? Our renderer uses one render pass per light and uses additive blending to sum incoming light. I'm mostly convinced that it should work nicely. Do you agree? Maybe there could be a problem where light rays cross each other.

    Read the article

  • How to draw a global day/night curve

    - by Lumis
    I see many applications which have a world-clock map, and I would like to make my own to enhance some of my mobile apps. I wonder if anybody has any knowledge of where to start, and how to draw the curved shadow representing dawn and sunset on the globe. See the example: http://aa.usno.navy.mil/imagery/earth/map?year=2012&month=6&day=19&hour=14&minute=47 I think that this curve goes up and down over the year and creates an arctic day/night, etc. Perhaps there is some acceptable approximation formula without a need to load data for each hour and each global parallel and meridian...
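
    A hedged approximation that avoids per-hour data tables: for day-of-year N, estimate the solar declination δ, then for each hour angle h (the longitude measured from the point where the Sun is overhead, about 15° per hour from local solar noon) compute the latitude φ at which the Sun sits exactly on the horizon, which is the terminator:

    $$\delta \approx -23.44^{\circ}\cos\!\left(\frac{360^{\circ}}{365}\,(N + 10)\right),\qquad \tan\varphi = -\frac{\cos h}{\tan\delta}$$

    Plotting φ against longitude gives the day/night curve for that date and time. The curve never reaches latitudes poleward of 90° minus |δ|, which is exactly the arctic continuous day or night mentioned above.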

    Read the article

  • New way of integrating Openfeint in Cocos2d-x 0.12.0

    - by Ef Es
    I am trying to implement OpenFeint for Android in my cocos2d-x project. My approach so far has been creating a button that calls a static java method in class Bridge using jnihelper functions (jnihelper only accepts statics). Bridge has one singleton attribute of type OFAndroid, that is the class dynamically calling the Openfeint Api methods, and every method in the bridge just forwards it to the OFAndroid object. What I am trying to do now is to initialize the openfeint libraries in the main java class that is the one calling the static C++ libraries. My problem right now is that the initializing function void com.openfeint.api.OpenFeint.initialize(Context ctx, OpenFeintSettings settings, OpenFeintDelegate delegate) is not accepting the context parameter that I am giving him, which is a "this" reference to the main class. Main class extends from Cocos2dxActivity but I don't have any other that extends from Application. Any suggestions on fixing it or how to improve the architecture? EDIT: I am trying a new solution. Make the bridge class into an Application child, is called from Main object, initializes OpenFeint when created and it can call the OpenFeint functions instead of needing an additional class. The problem is I still get the error. 03-30 14:39:22.661: E/AndroidRuntime(9029): Caused by: java.lang.NullPointerException 03-30 14:39:22.661: E/AndroidRuntime(9029): at android.content.ContextWrapper.getPackageManager(ContextWrapper.java:85) 03-30 14:39:22.661: E/AndroidRuntime(9029): at com.openfeint.internal.OpenFeintInternal.validateManifest(OpenFeintInternal.java:885) 03-30 14:39:22.661: E/AndroidRuntime(9029): at com.openfeint.internal.OpenFeintInternal.initializeWithoutLoggingIn(OpenFeintInternal.java:829) 03-30 14:39:22.661: E/AndroidRuntime(9029): at com.openfeint.internal.OpenFeintInternal.initialize(OpenFeintInternal.java:852) 03-30 14:39:22.661: E/AndroidRuntime(9029): at com.openfeint.api.OpenFeint.initialize(OpenFeint.java:47) 03-30 14:39:22.661: E/AndroidRuntime(9029): at nurogames.fastfish.NuroFeint.onCreate(NuroFeint.java:23) 03-30 14:39:22.661: E/AndroidRuntime(9029): at nurogames.fastfish.FastFish.onCreate(FastFish.java:47) 03-30 14:39:22.661: E/AndroidRuntime(9029): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1069) 03-30 14:39:22.661: E/AndroidRuntime(9029): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2751)

    Read the article

  • LWJGL - OpenGL - Texture shading

    - by Trixmix
    I want to use LWJGL to create a shader that does nothing more than change the color of a given texture. For example, I tell it to draw the letter A using a sprite sheet, then I tell the shader to draw the letter in a certain color. How would you do something like this without needing to create differently colored letter sprite sheets? Task for the shader: simply change all pixels in the texture to a certain color. Input: color, texture. Output: it draws the newly colored texture onto the screen. How do I accomplish such a thing?
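
    A hedged note: if the glyphs in the sprite sheet are white on a transparent background, the fixed pipeline can already tint them without a custom shader, because the default texture environment (GL_MODULATE) multiplies the texture by the current colour. fontTexture and drawLetterQuad() below are hypothetical placeholders for the Slick-Util texture and quad-drawing code.

    ```java
    import static org.lwjgl.opengl.GL11.*;

    // Hedged sketch: tint a white sprite-sheet glyph via texture modulation.
    glEnable(GL_TEXTURE_2D);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); // the default, shown for clarity
    fontTexture.bind();                 // hypothetical Slick-Util texture
    glColor3f(1.0f, 0.2f, 0.2f);        // draw the letter in red
    drawLetterQuad('A');                // hypothetical quad draw using the sheet's UVs
    ```

    The equivalent in a fragment shader is simply multiplying the sampled texel by a uniform tint colour before writing it out.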

    Read the article

  • How do I generate terrain like that of Scorched Earth?

    - by alex
    Hi, I'm a web developer and I am keen to start writing my own games. For familiarity, I've chosen JavaScript and the canvas element for now. I want to generate some terrain like that in Scorched Earth. My first attempt made me realise I couldn't just randomise the y value; there had to be some sanity in the peaks and troughs. I have Googled around a bit, but either I can't find something simple enough for me or I am using the wrong keywords. Can you please show me what sort of algorithm I would use to generate something like the example, keeping in mind that I am completely new to games programming (since making Breakout in 2003 with Visual Basic, anyway)?
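
    A hedged sketch of one classic answer, 1D midpoint displacement, shown here in Java to keep one language across these notes; the same logic ports directly to JavaScript for a canvas heightmap. generateTerrain and its parameters are hypothetical names.

    ```java
    import java.util.Random;

    // Hedged sketch of 1D midpoint displacement. size should be a power of two;
    // the returned heights are roughly in 0..1 and can be scaled to pixel heights.
    static float[] generateTerrain(int size, float roughness, long seed) {
        float[] h = new float[size + 1];
        Random rng = new Random(seed);
        h[0] = 0.5f;
        h[size] = 0.5f;
        float displacement = 0.5f;
        for (int step = size; step > 1; step /= 2) {
            for (int i = step / 2; i < size; i += step) {
                float mid = (h[i - step / 2] + h[i + step / 2]) / 2.0f;
                h[i] = mid + (rng.nextFloat() * 2 - 1) * displacement;
            }
            displacement *= roughness;   // e.g. 0.5: smaller bumps at finer scales
        }
        return h;
    }
    ```

    The roughness parameter controls how jagged the peaks and troughs are, which is exactly the "sanity" constraint mentioned above: neighbouring columns can only differ by the ever-shrinking displacement.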

    Read the article

  • Rotation based on x coordinate and x velocity?

    - by Lewis
    -(void) accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration { float deceleration = 0.3f, sensitivity = 8.0f, maxVelocity = 150; // adjust velocity based on current accelerometer acceleration playerVelocity.x = playerVelocity.x * deceleration + acceleration.x * sensitivity; // we must limit the maximum velocity of the player sprite, in both directions (positive & negative values) playerVelocity.x = fmaxf(fminf(playerVelocity.x, maxVelocity), -maxVelocity); } Hi, I want to rotate my sprite based on the velocity and accelerometer input. My sprite can move along the X axis like so: <--------- sprite ----------- But it always faces forwards; if it is moving left I want it to point slightly to the left, with the degree of tilt judged from the velocity. This should also work for the right. I tried using atan, but as the y velocity and position are always the same the function returns 0, which doesn't rotate it at all. Any ideas? Regards, Lewis.
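
    One hedged option that avoids atan entirely: make the tilt a simple proportion of the horizontal velocity, clamped by the same maxVelocity used above (θ_max is an assumed maximum lean angle, e.g. 30°):

    $$\theta = \theta_{\max}\,\frac{v_x}{v_{\max}},\qquad \text{e.g. } \theta = 30^{\circ}\cdot\frac{\texttt{playerVelocity.x}}{150}$$

    so the sprite leans further the faster it moves and returns upright as the velocity decays back toward zero.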

    Read the article

  • Integration error at high velocity

    - by Elektito
    I've implemented a simple simulation of two planets (simple 2D disks really) in which the only force is gravity, plus collision detection/response (collisions are completely elastic). I can launch one planet into orbit around the other just fine. The collision detection code, though, does not work so well. I noticed that when one planet hits the other in free fall it speeds backwards and goes much higher than its original position. Some poking around convinced me that the simplistic Euler integration is causing the error. Consider this case: one object has a mass of 1 kg and the other has a mass equal to Earth's. Say the object is 10 meters above ground, and assume that our dt (delta t) is 1 second. The object goes to a height of 9 meters at the end of the first iteration, 7 at the end of the second, 4 at the end of the third and 0 at the end of the fourth iteration. At this point it hits the ground and bounces back with a speed of 10 meters per second. The problem is that with dt=1, on the first iteration it bounces back to a height of 10. It takes several more steps to make the object change its course. So my question is, what integration method can I use to fix this problem? Should I split dt into smaller pieces when velocity is high? Or should I use another method altogether? What method do you suggest? EDIT: You can see the source code here at github: https://github.com/elektito/diskworld/
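
    For reference, a hedged comparison of the two common update orders, with a_n the gravitational acceleration at the current position: explicit Euler advances position with the old velocity, while semi-implicit (symplectic) Euler uses the freshly updated velocity, which tends to keep orbits and bounces much better behaved at the same step size.

    $$\text{explicit: } x_{n+1} = x_n + v_n\,\Delta t,\quad v_{n+1} = v_n + a_n\,\Delta t \qquad\quad \text{semi-implicit: } v_{n+1} = v_n + a_n\,\Delta t,\quad x_{n+1} = x_n + v_{n+1}\,\Delta t$$

    Sub-stepping, i.e. splitting Δt into k smaller steps whenever speeds get high or a collision is detected within the step, helps either variant and is a common companion fix.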

    Read the article
