Search Results

Search found 24037 results on 962 pages for 'game design'.

Page 509/962

  • Render an image with separate layers for shadows/reflections in 3D Studio Max?

    - by Bernd Plontsch
    I have a scene with a simple object standing on a ground plane in the center. The lights and the ground material produce some shadow and reflection on the ground surrounding the object. How can I render an image containing three separate layers: the object, the ground, and the reflection/shadow on the ground? Which format should I use for this (it should include all three layers, and I should be able to enable/disable them in Photoshop)? How do I define or prepare those layers so that they are rendered as image layers?

    Read the article

  • Generating geometry when using VBO

    - by onedayitwillmake
    Currently I am working on a project in which I generate geometry based on the player's movement: a glorified, very long trail composed of quads. I am doing this by storing the vertices in a std::vector, removing the oldest vertices once enough exist, and then calling glDrawArrays. I am interested in switching to a shader-based model; in the examples I usually see, the VBO is generated at start-up and then that's basically it. What is the best route for creating geometry in real time using a shader/VBO approach?
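
    One common pattern for geometry that changes every frame is to allocate a fixed-size VBO with GL_DYNAMIC_DRAW once, then re-upload only the current vertex data each frame, optionally orphaning the buffer first so the driver does not stall on in-flight draws. A minimal sketch under those assumptions (TrailVertex, kMaxVerts and the trail vector are hypothetical names, not from the question):

        // Assumes an OpenGL loader header (e.g. GLEW or glad) is already included.
        #include <vector>

        struct TrailVertex { float x, y, z; float u, v; };   // hypothetical layout

        GLuint trailVbo = 0;
        const size_t kMaxVerts = 4096;                       // fixed capacity (assumption)

        void createTrailBuffer() {
            glGenBuffers(1, &trailVbo);
            glBindBuffer(GL_ARRAY_BUFFER, trailVbo);
            // Allocate once; contents will be streamed in every frame.
            glBufferData(GL_ARRAY_BUFFER, kMaxVerts * sizeof(TrailVertex),
                         nullptr, GL_DYNAMIC_DRAW);
        }

        void updateAndDrawTrail(const std::vector<TrailVertex>& verts) {
            glBindBuffer(GL_ARRAY_BUFFER, trailVbo);
            // Orphan the old storage, then upload the current trail vertices.
            glBufferData(GL_ARRAY_BUFFER, kMaxVerts * sizeof(TrailVertex),
                         nullptr, GL_DYNAMIC_DRAW);
            glBufferSubData(GL_ARRAY_BUFFER, 0,
                            verts.size() * sizeof(TrailVertex), verts.data());
            // ... set up attribute pointers / VAO as usual, then:
            glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)verts.size());
        }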

    Read the article

  • Pictures rendered from above and below using an orthographic camera do not match

    - by Roy T.
    I'm using an orthographic camera to render slices of a model (in order to voxelize it). I render each slice both from above and below in order to determine what is inside each slice. The model I render is a simple 'T' shape constructed from two cubes. The cubes have the same dimensions and the same Y (height) coordinate. See figure 1 for a render of it in Blender. I render this model once directly from above and once directly from below. My expectation was that I would get exactly the same image (except for mirroring over the y-axis). However, when I render using a very low resolution render target (25x25), the position (in pixels) of the 'T' is different when rendered from above as opposed to rendered from below. See figures 2 and 3. The pink blocks are not part of the original rendering, but I've added them so you can easily count/see the differences. (Figure 2: the T rendered from above. Figure 3: the T rendered from below.) This is probably due to what I've read about pixel and texel coordinates, which might be biased to the top-left as seen from the camera. Since I'm using the same 'up' vector for both of my cameras, the bias only shows on the x-axis. I've tried to change the position of the camera and its look-at by what I thought should be half a pixel. I've tried both shifting a single camera and shifting both cameras, and while I see some effect, I am not able to get a pixel-by-pixel perfect copy from both cameras. Here I initialize the camera and compute what I believe to be half a pixel. boundsDimX and boundsDimZ describe a slightly enlarged bounding box around the model, which I also use as the width and height of the view volume of the orthographic camera.

        Matrix projection = Matrix.CreateOrthographic(boundsDimX, boundsDimZ, 0.5f, sliceHeight + 0.5f);
        Vector3 halfPixel = new Vector3(boundsDimX / (float)renderTarget.Width, 0,
                                        boundsDimY / (float)renderTarget.Height) * 0.5f;

    This is the code where I set the camera position and camera look-ats:

        // Position camera
        if (downwards)
        {
            float cameraHeight = bounds.Max.Y + 0.501f - (sliceHeight * i);
            Vector3 cameraPosition = new Vector3
            (
                boundsCentre.X, // possibly adjust by half a pixel?
                cameraHeight,
                boundsCentre.Z
            );
            camera.Position = cameraPosition;
            camera.LookAt = new Vector3(cameraPosition.X, cameraHeight - 1.0f, cameraPosition.Z);
        }
        else
        {
            float cameraHeight = bounds.Max.Y - 0.501f - (sliceHeight * i);
            Vector3 cameraPosition = new Vector3
            (
                boundsCentre.X,
                cameraHeight,
                boundsCentre.Z
            );
            camera.Position = cameraPosition;
            camera.LookAt = new Vector3(cameraPosition.X, cameraHeight + 1.0f, cameraPosition.Z);
        }

    Main question: now that you've seen all the problems and code, how do I align both cameras so that they each render exactly the same image (mirrored along the Y axis)? Figure 1: the original model rendered in Blender.

    Read the article

  • How can I specify interleaved vertex attributes and vertex indices?

    - by freefallr
    I'm writing a generic ShaderProgram class that compiles a set of Shader objects, passes args to the shader (like vertex position, vertex normal, tex coords etc), then links the shader components into a shader program, for use with glDrawArrays. My vertex data already exists in a VertexBufferObject that uses the following data structure to create a vertex buffer:

        class CustomVertex
        {
        public:
            float m_Position[3];  // x, y, z        // offset 0, size = 3*sizeof(float)
            float m_TexCoords[2]; // u, v           // offset 3*sizeof(float), size = 2*sizeof(float)
            float m_Normal[3];    // nx, ny, nz;
            float colour[4];      // r, g, b, a
            float padding[20];    // padded for performance
        };

    I've already written a working VertexBufferObject class that creates a vertex buffer object from an array of CustomVertex objects. This array is said to be interleaved. It renders successfully with the following code:

        void VertexBufferObject::Draw()
        {
            if( ! m_bInitialized )
                return;

            glBindBuffer( GL_ARRAY_BUFFER, m_nVboId );
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, m_nVboIdIndex );

            glEnableClientState( GL_VERTEX_ARRAY );
            glEnableClientState( GL_TEXTURE_COORD_ARRAY );
            glEnableClientState( GL_NORMAL_ARRAY );
            glEnableClientState( GL_COLOR_ARRAY );

            glVertexPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 0) );
            glTexCoordPointer(3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 12));
            glNormalPointer(GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 20));
            glColorPointer(3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 32));

            glDrawElements( GL_TRIANGLES, m_nNumIndices, GL_UNSIGNED_INT, ((char*)NULL + 0) );

            glDisableClientState( GL_VERTEX_ARRAY );
            glDisableClientState( GL_TEXTURE_COORD_ARRAY );
            glDisableClientState( GL_NORMAL_ARRAY );
            glDisableClientState( GL_COLOR_ARRAY );

            glBindBuffer( GL_ARRAY_BUFFER, 0 );
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, 0 );
        }

    Back to the Vertex Array Object though. My code for creating the Vertex Array Object is as follows. This is performed before the ShaderProgram runtime linking stage, and no glErrors are reported after its steps.

        // Specify the shader arg locations (e.g. their order in the shader code)
        for( int n = 0; n < vShaderArgs.size(); n ++)
            glBindAttribLocation( m_nProgramId, n, vShaderArgs[n].sFieldName.c_str() );

        // Create and bind to a vertex array object, which stores the relationship between
        // the buffer and the input attributes
        glGenVertexArrays( 1, &m_nVaoHandle );
        glBindVertexArray( m_nVaoHandle );

        // Enable the vertex attribute array (we're using interleaved array, since its faster)
        glBindBuffer( GL_ARRAY_BUFFER, vShaderArgs[0].nVboId );
        glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, vShaderArgs[0].nVboIndexId );

        // vertex data
        for( int n = 0; n < vShaderArgs.size(); n ++ )
        {
            glEnableVertexAttribArray(n);
            glVertexAttribPointer(
                n,
                vShaderArgs[n].nFieldSize,
                GL_FLOAT,
                GL_FALSE,
                vShaderArgs[n].nStride,
                (GLubyte *) NULL + vShaderArgs[n].nFieldOffset
            );
            AppLog::Ref().OutputGlErrors();
        }

    This doesn't render correctly at all. I get a pattern of white specks onscreen, in the shape of the terrain rectangle, but there are no regular lines etc. Here's the code I use for rendering:

        void ShaderProgram::Draw()
        {
            using namespace AntiMatter;
            if( ! m_nShaderProgramId || ! m_nVaoHandle )
            {
                AppLog::Ref().LogMsg("ShaderProgram::Draw() Couldn't draw object, as initialization of ShaderProgram is incomplete");
                return;
            }

            glUseProgram( m_nShaderProgramId );
            glBindVertexArray( m_nVaoHandle );
            glDrawArrays( GL_TRIANGLES, 0, m_nNumTris );
            glBindVertexArray(0);
            glUseProgram(0);
        }

    Can anyone see errors or omissions in either the VAO creation code or rendering code? Thanks!
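
    For reference, one way to wire up interleaved attributes for a struct like CustomVertex in a VAO is to compute the offsets with offsetof and keep the element array buffer bound in the VAO; indexed data is then drawn with glDrawElements, since glDrawArrays expects a plain vertex count and ignores any bound index buffer. A minimal sketch (attribute locations 0-3 and the names vboId, indexVboId and numIndices are assumptions, not taken from the code above):

        // Sketch only: assumes a VAO is currently bound and <cstddef> is included for offsetof.
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVboId);   // recorded in the VAO state

        const GLsizei stride = sizeof(CustomVertex);
        glEnableVertexAttribArray(0);  // position
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride,
                              (const void*)offsetof(CustomVertex, m_Position));
        glEnableVertexAttribArray(1);  // texcoords (2 components)
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride,
                              (const void*)offsetof(CustomVertex, m_TexCoords));
        glEnableVertexAttribArray(2);  // normal
        glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride,
                              (const void*)offsetof(CustomVertex, m_Normal));
        glEnableVertexAttribArray(3);  // colour
        glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, stride,
                              (const void*)offsetof(CustomVertex, colour));

        // Later, with the VAO bound, draw using the index count (not a triangle count):
        glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_INT, (const void*)0);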

    Read the article

  • Painting with pixel shaders

    - by Gustavo Maciel
    I have an almost complete understanding of how 2D lighting works; I saw this post and was tempted to try implementing it in HLSL. I planned to paint each of the layers with shaders and then combine them, either by just drawing one on top of another or by passing the 3 textures to a shader and finding a better way to combine them. It is working almost as planned, but I have a question about one detail. I'm drawing each layer this way:

        GraphicsDevice.SetRenderTarget(lighting);
        GraphicsDevice.Clear(Color.Transparent);
        //... Setup shader
        SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp,
                          DepthStencilState.None, RasterizerState.CullNone, lightingShader);
        SpriteBatch.Draw(texture, fullscreen, Color.White);
        SpriteBatch.End();

        GraphicsDevice.SetRenderTarget(darkMask);
        GraphicsDevice.Clear(Color.Transparent);
        //... Setup shader
        SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp,
                          DepthStencilState.None, RasterizerState.CullNone, darkMaskShader);
        SpriteBatch.Draw(texture, fullscreen, Color.White);
        SpriteBatch.End();

    Here lightingShader and darkMaskShader are shaders that, given parameters (view and projection matrices, light position, color and range, etc.), generate a texture meant to be that layer. It works fine, but I'm not sure if drawing a transparent quad on top of a transparent render target is the best way of doing it, because I actually just need the position and params. Concluding: can I paint a texture with shaders without having to clear it and then draw a transparent texture on top of it?

    Read the article

  • Switching my collision detection to array lists caused it to stop working

    - by Charlton Santana
    I have made a collision detection system which worked before I switched to array lists and block generation. It is weird that it's not working now, but here's the code; if anyone could help I would be very grateful :) The first piece of code is the block generation:

        private static final List<Block> BLOCKS = new ArrayList<Block>();
        Random rnd = new Random(System.currentTimeMillis());
        int randomx = 400;
        int randomy = 400;
        int blocknum = 100;
        String Title = "blocktitle" + blocknum;
        private Block block;

        public void generateBlocks(){
            if(blocknum > 0){
                int offset = rnd.nextInt(250) + 100; //500 is the maximum offset, this is a constant
                randomx += offset; //offset will be between 100 and 400
                int randomyoff = rnd.nextInt(80); //500 is the maximum offset, this is a constant
                randomy = platformheighttwo - 6 - randomyoff; //offset will be between 100 and 400
                block = new Block(BitmapFactory.decodeResource(getResources(), R.drawable.block2), randomx, randomy);
                BLOCKS.add(block);
                blocknum -= 1;
            }

    The second piece is where the collision detection takes place. Note: the block.draw(canvas); call works perfectly; it's the collision with the blocks that doesn't work.

        for(Block block : BLOCKS) {
            block.draw(canvas);
            // bottom right touching block?
            if (sprite.bottomrx < block.bottomrx && sprite.bottomrx > block.bottomlx && sprite.bottomry < block.bottommy && sprite.bottomry > block.topry ){
                Log.d(TAG, "Collided!!!!!!!!!!!!1");
            }
            // bottom left touching block?
            if (sprite.bottomlx < block.bottomrx && sprite.bottomlx > block.bottomlx && sprite.bottomly < block.bottommy && sprite.bottomly > block.topry ){
                Log.d(TAG, "Collided!!!!!!!!!!!!1");
            }
            // top right touching block?
            if (sprite.toprx < block.bottomrx && sprite.toprx > block.bottomlx && sprite.topry < block.bottommy && sprite.topry > block.topry ){
                Log.d(TAG, "Collided!!!!!!!!!!!!1");
            }
            //top left touching block?
            if (sprite.toprx < block.bottomrx && sprite.toprx > block.bottomlx && sprite.topry < block.bottommy && sprite.topry > block.topry ){
                Log.d(TAG, "Collided!!!!!!!!!!!!1");
            }
        }

    The values, e.g. bottomrx, are in the block.java file.
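
    As a point of comparison, corner-in-box checks like the ones above miss cases where neither rectangle has a corner inside the other (for example when the sprite spans the block across its middle). A rectangle-overlap test that compares the extents directly avoids that. A minimal sketch in C++-style code (the member names are hypothetical, mirroring the left/right/top/bottom values used above):

        struct Rect {
            float left, top, right, bottom;   // assumes right > left and bottom > top (screen coordinates)
        };

        // True when the two rectangles overlap at all.
        bool intersects(const Rect& a, const Rect& b) {
            return a.left < b.right && a.right > b.left &&
                   a.top < b.bottom && a.bottom > b.top;
        }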

    Read the article

  • 2D object-aligned bounding-box intersection test

    - by AshleysBrain
    Hi all, I have two object-aligned bounding boxes (i.e. not axis aligned, they rotate with the object). I'd like to know if two object-aligned boxes overlap. (Edit: note - I'm using an axis-aligned bounding box test to quickly discard distant objects, so it doesn't matter if the quad routine is a little slower.) My boxes are stored as four x,y points. I've searched around for answers, but I can't make sense of the variable names and algorithms in examples to apply them to my particular case. Can someone help show me how this would be done, in a clear and simple way? Thanks. (The particular language isn't important, C-style pseudo code is OK.)
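
    Since C-style pseudo code is acceptable: the usual approach for convex shapes like rotated rectangles is the separating axis theorem (SAT). Project both quads onto the perpendicular of each edge; if the projected intervals are disjoint on any axis, the boxes do not overlap, otherwise they do. A sketch in C++ (Vec2 and the four-corner array representation are assumptions matching the description above):

        #include <algorithm>
        #include <cfloat>

        struct Vec2 { float x, y; };

        static float dot(const Vec2& a, const Vec2& b) { return a.x * b.x + a.y * b.y; }

        // Returns true if the projections of the two quads onto 'axis' overlap.
        static bool overlapOnAxis(const Vec2 a[4], const Vec2 b[4], const Vec2& axis) {
            float minA = FLT_MAX, maxA = -FLT_MAX, minB = FLT_MAX, maxB = -FLT_MAX;
            for (int i = 0; i < 4; ++i) {
                float pa = dot(a[i], axis), pb = dot(b[i], axis);
                minA = std::min(minA, pa); maxA = std::max(maxA, pa);
                minB = std::min(minB, pb); maxB = std::max(maxB, pb);
            }
            return minA <= maxB && minB <= maxA;
        }

        // SAT test for two convex quads given as 4 corners each, in order around the quad.
        bool obbIntersect(const Vec2 a[4], const Vec2 b[4]) {
            const Vec2* quads[2] = { a, b };
            for (int q = 0; q < 2; ++q) {
                for (int i = 0; i < 4; ++i) {
                    const Vec2& p0 = quads[q][i];
                    const Vec2& p1 = quads[q][(i + 1) % 4];
                    Vec2 axis = { -(p1.y - p0.y), p1.x - p0.x };   // edge normal
                    if (!overlapOnAxis(a, b, axis))
                        return false;                              // found a separating axis
                }
            }
            return true;                                           // no separating axis found
        }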

    Read the article

  • Anisotropic and trilinear filtering?

    - by fedab
    I'm confused about the usage of trilinear filtering and anisotropic filtering in SharpDX. As far as I understand it, trilinear filtering does linear filtering of the textures and, when the LOD changes, it also interpolates between the two LODs to smooth the transition. Anisotropic filtering makes the texture bigger. Now it is possible to use trilinear filtering to do the same thing, due to anisotropic filtering with bigger textures. This causes a less blurred image when you use anisotropy, because the interpolation is better. Now, it should be possible to use trilinear filtering and anisotropic filtering at the same time. But in the SamplerState I can only choose Filter.Anisotropy or Filter.MinMagMipLinear (that should be trilinear, right?). You can see all possible filters here: D3D11 Filter Enumeration. So my question: can you use both techniques together, and if yes, how can I achieve that in SharpDX with SamplerState?
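
    For reference, in the underlying Direct3D 11 API anisotropic filtering is a single filter mode that already applies to minification, magnification and mip sampling, so there is no separate trilinear flag to combine it with; the sample count is controlled separately via MaxAnisotropy. A sketch of the equivalent C++ sampler description, which SharpDX's SamplerStateDescription mirrors field for field (error handling omitted, device assumed to exist):

        #include <d3d11.h>

        D3D11_SAMPLER_DESC desc = {};
        desc.Filter         = D3D11_FILTER_ANISOTROPIC;   // covers min/mag/mip sampling
        desc.AddressU       = D3D11_TEXTURE_ADDRESS_WRAP;
        desc.AddressV       = D3D11_TEXTURE_ADDRESS_WRAP;
        desc.AddressW       = D3D11_TEXTURE_ADDRESS_WRAP;
        desc.MaxAnisotropy  = 16;                         // 1..16; higher = sharper at glancing angles
        desc.ComparisonFunc = D3D11_COMPARISON_NEVER;
        desc.MinLOD         = 0.0f;
        desc.MaxLOD         = D3D11_FLOAT32_MAX;

        ID3D11SamplerState* sampler = nullptr;
        device->CreateSamplerState(&desc, &sampler);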

    Read the article

  • Transforming a primitive tetrahedron into a primitive icosahedron?

    - by Djentleman
    I've created a tetrahedron by creating a BoundingBox and building the faces of the tetrahedron within the bounding box as follows (see image as well):

        VertexPositionNormalTexture[] vertices = new VertexPositionNormalTexture[12];
        BoundingBox box = new BoundingBox(new Vector3(-1f, 1f, 1f), new Vector3(1f, -1f, -1f));
        vertices[0].Position = box.GetCorners()[0];
        vertices[1].Position = box.GetCorners()[2];
        vertices[2].Position = box.GetCorners()[7];
        vertices[3].Position = box.GetCorners()[0];
        vertices[4].Position = box.GetCorners()[5];
        vertices[5].Position = box.GetCorners()[2];
        vertices[6].Position = box.GetCorners()[5];
        vertices[7].Position = box.GetCorners()[7];
        vertices[8].Position = box.GetCorners()[2];
        vertices[9].Position = box.GetCorners()[5];
        vertices[10].Position = box.GetCorners()[0];
        vertices[11].Position = box.GetCorners()[7];

    What would I then have to do to transform this tetrahedron into an icosahedron? Similar to this image: I understand the concept, but applying it is another thing entirely for me.
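
    For context, an icosahedron is usually not built by deforming a tetrahedron; its 12 vertices can be generated directly from three mutually orthogonal golden-ratio rectangles, and its 20 triangular faces come from a fixed index table over those vertices. A sketch of the vertex generation in C++ (the standard construction, positions only):

        #include <cmath>
        #include <vector>

        struct Vec3 { float x, y, z; };

        // 12 icosahedron vertices: corners of three mutually orthogonal golden rectangles.
        std::vector<Vec3> icosahedronVertices(float scale = 1.0f) {
            const float t = (1.0f + std::sqrt(5.0f)) / 2.0f;   // golden ratio
            std::vector<Vec3> v = {
                {-1,  t,  0}, { 1,  t,  0}, {-1, -t,  0}, { 1, -t,  0},
                { 0, -1,  t}, { 0,  1,  t}, { 0, -1, -t}, { 0,  1, -t},
                { t,  0, -1}, { t,  0,  1}, {-t,  0, -1}, {-t,  0,  1},
            };
            for (auto& p : v) { p.x *= scale; p.y *= scale; p.z *= scale; }
            return v;
        }
        // The 20 faces are then a fixed list of indices into these 12 vertices; each vertex
        // can also be normalized onto a sphere if a spherical mesh is wanted.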

    Read the article

  • How to implement a motion blur effect?

    - by PlayerOne
    I wanted to know how one would implement this motion blur or fade effect behind the soccer ball. Here is what I was thinking: you have the ball's current position and you also keep its previous position (from a couple of seconds back), and you draw a "streak" sprite between the two points. I have seen this effect implemented lots of times in various 2D games and wanted to know if there is a standard technique. http://i45.tinypic.com/2n24j7r.png

    Read the article

  • Is it possible to construct a cube with fewer than 24 vertices?

    - by Telanor
    I have a cube-based world like Minecraft and I'm wondering if there's a way to construct a cube with fewer than 24 vertices so I can reduce memory usage. It doesn't seem possible to me for two reasons: the normals wouldn't come out right and per-face textures wouldn't work. Is this the case, or am I wrong? Maybe there's some fancy new DX11 tech that can help? Edit: Just to clarify, I have two requirements: I need surface normals for each cube face in order to do proper lighting, and I need a way to address a different index in a texture array for each cube face.

    Read the article

  • Camera closes in on the fixed point

    - by V1ncam
    I've been trying to create a camera that is controlled by the mouse and rotates around a fixed point (read: (0,0,0)), both vertically and horizontally. This is what I've come up with:

        camera.Eye = Vector3.Transform(camera.Eye, Matrix.CreateRotationY(camRotYFloat));
        Vector3 customAxis = new Vector3(-camera.Eye.Z, 0, camera.Eye.X);
        camera.Eye = Vector3.Transform(camera.Eye, Matrix.CreateFromAxisAngle(customAxis, camRotXFloat * 0.0001f));

    This works quite well, except that when I 'use' the second transformation (going up and down with the mouse) the camera not only goes up and down, it also closes in on the point: it zooms in. How do I prevent this? Thanks in advance.
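
    One thing to check (an observation, not a confirmed diagnosis): axis-angle rotation helpers generally expect a unit-length axis, and customAxis above is built from the eye position, so its length varies with distance; an unnormalized axis can scale the vector instead of purely rotating it. An alternative that cannot drift is to track yaw and pitch angles plus a fixed radius, and rebuild the eye position from spherical coordinates each frame. A C++ sketch (names are illustrative):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Orbit camera around the origin: the radius stays constant by construction.
        Vec3 orbitEye(float yaw, float pitch, float radius) {
            // Clamp pitch so the camera never flips over the poles.
            const float limit = 1.55f;                    // roughly 89 degrees, in radians
            if (pitch >  limit) pitch =  limit;
            if (pitch < -limit) pitch = -limit;

            Vec3 eye;
            eye.x = radius * std::cos(pitch) * std::sin(yaw);
            eye.y = radius * std::sin(pitch);
            eye.z = radius * std::cos(pitch) * std::cos(yaw);
            return eye;                                   // look-at target stays at (0,0,0)
        }

        // Per frame: yaw += mouseDeltaX * sensitivity; pitch += mouseDeltaY * sensitivity;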

    Read the article

  • Why, on iOS, is glRenderbufferStorage appearing to fail?

    - by dugla
    On an iOS device (iPad) I decided to change the storage for my renderbuffer from the CAEAGLLayer that backs the view to explicit storage via glRenderbufferStorage. Sadly, the following code fails to result in a valid FBO. Can someone please tell me what I missed?

        glGenFramebuffers(1, &m_framebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, m_framebuffer);

        glGenRenderbuffers(1, &m_colorbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, m_colorbuffer);

        GLsizei width = (GLsizei)layer.bounds.size.width;
        GLsizei height = (GLsizei)layer.bounds.size.height;

        glRenderbufferStorage(m_colorbuffer, GL_RGBA8_OES, width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, m_colorbuffer);

    Note: the layer size is valid and correct. This is solid, working production rendering code. The only change I am making is the glRenderbufferStorage(...) line; previously I did:

        [m_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer]

    Thanks, Doug
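
    One detail worth double-checking: glRenderbufferStorage takes a target enum as its first argument rather than a renderbuffer name; in OpenGL ES 2.0 the signature is glRenderbufferStorage(GLenum target, GLenum internalformat, GLsizei width, GLsizei height), so storage is normally requested against GL_RENDERBUFFER for the currently bound renderbuffer. A sketch of that variant, with a completeness check added (this slots into the code above; the status handling is illustrative):

        // Storage for the renderbuffer currently bound to GL_RENDERBUFFER.
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, m_colorbuffer);

        // Always worth verifying after attaching:
        GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        if (status != GL_FRAMEBUFFER_COMPLETE) {
            // e.g. GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT or GL_FRAMEBUFFER_UNSUPPORTED;
            // handle or log the status value here.
        }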

    Read the article

  • Matrix 4x4 position data

    - by freefallr
    I understand that a 4x4 matrix holds rotation and position data. The rotation data is held in the 3x3 sub-matrix at the top left of the matrix. The position data is held in the last column of the matrix, e.g.

        glm::vec3 vParentPos( mParent[3][0], mParent[3][1], mParent[3][2] );

    My question is: am I accessing the parent matrix correctly in the example above? I know that OpenGL uses a different matrix ordering than DirectX (row order instead of column order, or something), so should mParent be accessed as follows instead?

        glm::vec3 vParentPos( mParent[0][3], mParent[1][3], mParent[2][3] );

    Thanks!
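
    For what it's worth, glm stores matrices column-major and operator[] returns a column, so mParent[3] is the fourth column, which holds the translation of a standard OpenGL-style transform; the first form above matches that convention. A small sketch:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // In glm, m[c][r] indexes column c, row r; the translation lives in column 3.
        glm::mat4 m = glm::translate(glm::mat4(1.0f), glm::vec3(10.0f, 20.0f, 30.0f));
        glm::vec3 parentPos(m[3][0], m[3][1], m[3][2]);   // (10, 20, 30)
        glm::vec3 sameThing(m[3]);                        // truncating the column directly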

    Read the article

  • Pathfinding in a multi-goal, multi-agent environment

    - by Rohan Agrawal
    I have an environment in which I have multiple agents (a), multiple goals (g) and obstacles (o):

        . . . a o . . .
        . . . . o . g .
        . a . . . . . .
        . . . . o . . .
        . o o o o . g .
        . o . . . . . .
        . o . . . . o .
        . . . o o o o a

    What would be an appropriate algorithm for pathfinding in this environment? The only thing I can think of right now is to run a separate A* search for each goal, but I don't think that's very efficient.
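
    One common alternative to running A* once per goal is a single multi-source search: seed a BFS (or Dijkstra, if moves have costs) with every goal at distance 0 and flood outward over the grid. The result is a distance field giving each walkable cell its distance to the nearest goal, which every agent can then follow by stepping downhill. A C++ sketch for a uniform-cost grid (the character encoding of the grid is an assumption matching the diagram above):

        #include <queue>
        #include <vector>
        #include <utility>

        // dist[y][x] = steps to the nearest goal; -1 means blocked or unreachable.
        std::vector<std::vector<int>> goalDistanceField(
                const std::vector<std::vector<char>>& grid)   // 'o' = obstacle, 'g' = goal
        {
            const int h = (int)grid.size(), w = (int)grid[0].size();
            std::vector<std::vector<int>> dist(h, std::vector<int>(w, -1));
            std::queue<std::pair<int,int>> q;

            // Seed the search with every goal at distance 0 (multi-source BFS).
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x)
                    if (grid[y][x] == 'g') { dist[y][x] = 0; q.push({x, y}); }

            const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
            while (!q.empty()) {
                auto [x, y] = q.front(); q.pop();
                for (int d = 0; d < 4; ++d) {
                    int nx = x + dx[d], ny = y + dy[d];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (grid[ny][nx] == 'o' || dist[ny][nx] != -1) continue;
                    dist[ny][nx] = dist[y][x] + 1;
                    q.push({nx, ny});
                }
            }
            return dist;   // each agent moves to the neighbour with the smallest distance
        }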

    Read the article

  • Looking for literature about graphics pipeline optimization

    - by zacharmarz
    I am looking for some books, articles or tutorials about graphics architecture and graphics pipeline optimizations. They shouldn't be too old (2008 or newer) - the newer, the better. I have found something in [Optimising the Graphics Pipeline, NVIDIA, Koji Ashida] - too old, [Real-time rendering, Akenine Moller], [OpenGL Bindless Extensions, NVIDIA, Jeff Bolz], [Efficient multifragment effects on graphics processing units, Louis Frederic Bavoil] and some internet discussions. But there is not much information there and I want to read more. It should contain something about application, driver, memory and shader unit communication and data transfers; about vertices and attributes; and also about the pre- and post-T&L caches (if they still exist in today's architectures), etc. I don't need anything about textures, frame buffers and rasterization. It can also be about OpenGL (not about DirectX) and optimizing extensions (not old extensions like VBOs, but newer ones like vertex_buffer_unified_memory).

    Read the article

  • Triangle Strips and Tangent Space Normal Mapping

    - by Koarl
    Short: Do triangle strips and Tangent Space Normal mapping go together? According to quite a lot of tutorials on bump mapping, it seems common practice to derive tangent space matrices in a vertex program and transform the light direction vector(s) to tangent space and then pass them on to a fragment program. However, if one was using triangle strips or index buffers, it is a given that the vertex buffer contains vertices that sit at border edges and would thus require more than one normal to derive tangent space matrices to interpolate between in fragment programs. Is there any reasonable way to not have duplicate vertices in your buffer and still use tangent space normal mapping? Which one do you think is better: Having normal and tangent encoded in the assets and just optimize the geometry handling to alleviate the cost of duplicate vertices or using triangle strips and computing normals/tangents completely at run time? Thinking about it, the more reasonable answer seems to be the first one, but why might my professor still be fussing about triangle strips when it seems so obvious?

    Read the article

  • Blur gets displaced compared to original image

    - by user1294203
    I have implemented an SSAO pass and I'm using a blur step to smooth it out. The problem is that the blurred texture is slightly displaced compared to the original. I'm blurring using a 4x4 kernel since that was my noise kernel in SSAO. The following is the blurring shader:

        float result = 0.0;
        for(int i = 0; i < 4; i++){
            for(int j = 0; j < 4; j++){
                vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j);
                result += texture(aoSampler, TexCoord + offset).r;
            }
        }
        out_AO = vec4(vec3(0.0), result * 0.0625);

    Here TEXEL_SIZE is one over my window resolution. I was thinking that this was an error based on how OpenGL counts the texel center, so I tried displacing the texture coordinate I was using by 0.5 * TEXEL_SIZE, but there was still a slight displacement. The texture input to my blur shader has these wrap parameters:

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

    When I tell the blur shader to just output the value of the pixel, the result is not displaced, so it must have something to do with how neighboring pixels are sampled. Any thoughts?

    Read the article

  • How can I calculate a vertex normal for a hard edge?

    - by K.G.
    Here is a picture of a lovely polygon: Circled is a vertex, and numbered are its adjacent faces. I have calculated the normals of those faces as such (not yet normalized, 0-indexed):

        Vertex 1 normal 0:  0.000000  0.000000 -0.250000
        Vertex 1 normal 1:  0.000000  0.000000 -0.250000
        Vertex 1 normal 2: -0.250000  0.000000  0.000000
        Vertex 1 normal 3: -0.250000  0.000000  0.000000
        Vertex 1 normal 4:  0.250000  0.000000  0.000000

    What I'm wondering is: how can I determine, taking as given that I want this vertex to represent a hard edge, whether its normal should be the normal of 1/2 or 3/4? My plan after I glanced at the sketch I used to put this together was "Ha! I'll just use whichever two faces have the same normal!" and now I see that there are two sets of two faces for which this is true. Is there a rule I can apply, based on the face winding, angle of the adjacent edges, moon phase, or coin flip, to consistently choose a normal direction for this box? For the record, all of the other polygons I plan to use will have their normals dictated in Maya, but after encountering this problem, it made me really curious.

    Read the article

  • Rotation based on x coordinate and x velocity?

    - by Lewis
    -(void) accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
        {
            float deceleration = 0.3f, sensitivity = 8.0f, maxVelocity = 150;
            // adjust velocity based on current accelerometer acceleration
            playerVelocity.x = playerVelocity.x * deceleration + acceleration.x * sensitivity;
            // we must limit the maximum velocity of the player sprite, in both directions (positive & negative values)
            playerVelocity.x = fmaxf(fminf(playerVelocity.x, maxVelocity), -maxVelocity);
        }

    Hi, I want to rotate my sprite based on the velocity and accelerometer input. My sprite can move along the X axis like so: <--------- sprite ----------- But it always faces forwards; if it is moving left I want it to point slightly to the left, with the degree of tilt judged from the velocity. This should also work for the right. I tried using atan, but as the Y velocity and position are always the same, the function returns 0, which doesn't rotate the sprite at all. Any ideas? Regards, Lewis.
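
    Since the motion is purely horizontal, atan of the velocity vector will indeed always come out as 0; a simpler option is to map the horizontal velocity linearly onto a tilt angle and clamp it to a maximum lean. A small C++ sketch of that mapping (maxLeanDegrees is an invented tuning constant):

        #include <algorithm>

        // Maps horizontal velocity to a sprite tilt angle in degrees.
        float tiltFromVelocity(float velocityX, float maxVelocity, float maxLeanDegrees = 30.0f) {
            float t = std::max(-1.0f, std::min(1.0f, velocityX / maxVelocity));   // clamp to -1..1
            return t * maxLeanDegrees;   // moving left leans left, moving right leans right
        }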

    Read the article

  • Bitmap font rendering, UV generation and vertex placement

    - by jack
    I am generating a bitmap; however, I am not sure how to generate the UVs and do the placement when rendering. I had a thread like this once before, but it was too loosely worded as to what I was looking to do. What I am doing right now is creating a large 1024x1024 image with characters evenly placed every 64 pixels. Here is an example of what I mean. I then save the bitmap X/Y information to a file (which is all multiples of 64). However, I am not sure how to properly use this information and bitmap to render. This falls into two different categories: UV generation and kerning. Now I believe I know how to do both of these; however, when I attempt to couple them together I get horrendous results. For example, I am trying to render two different text arrays, "123" and "njfb". While ignoring the texture quality (I will be increasing the texture to provide more detail once I fix this issue), here is what it looks like when I try to render them: http://img64.imageshack.us/img64/599/badfontrendering.png Now for the algorithm. I am doing my letter placement with both GetABCWidth and GetKerningPairs. I am using GetABCWidth for the width of the characters, then I am getting the kerning information to adjust the characters. Does anyone have any suggestions on how I can implement my own bitmap font renderer? I am trying to do this without using external libraries such as angel bitmap tool or freetype. I also want to stick to the way the bitmap font sheet is generated so I can do extra effects in the future. Rendering algorithm:

        for(U32 c = 0, vertexID = 0, i = 0; c < numberOfCharacters; ++c, vertexID += 4, i += 6)
        {
            ObtainCharInformation(fontName, m_Text[c]);
            letterWidth = (charInfo.A + charInfo.B + charInfo.C) * scale;
            if(c != 0)
            {
                DWORD BytesReq = GetGlyphOutlineW(dc, m_Text[c], GGO_GRAY8_BITMAP, &gm, 0, 0, &mat);
                U8* glyphImg = new U8[BytesReq];
                DWORD r = GetGlyphOutlineW(dc, m_Text[c], GGO_GRAY8_BITMAP, &gm, BytesReq, glyphImg, &mat);
                for (int k=0; k<nKerningPairs; k++)
                {
                    if ((kerningpairs[k].wFirst == previousCharIndex) && (kerningpairs[k].wSecond == m_Text[c]))
                    {
                        letterBottomLeftX += (kerningpairs[k].iKernAmount * scale);
                        break;
                    }
                }
                letterBottomLeftX -= (gm.gmCellIncX * scale);
            }

            SetVertex(letterBottomLeftX, 0.0f, zFight, vertexID);
            SetVertex(letterBottomLeftX, letterHeight, zFight, vertexID + 1);
            SetVertex(letterBottomLeftX + letterWidth, letterHeight, zFight, vertexID + 2);
            SetVertex(letterBottomLeftX + letterWidth, 0.0f, zFight, vertexID + 3);
            zFight -= 0.001f;

            float BottomLeftX = (F32)(charInfo.bitmapXOrigin) / (float)m_BitmapWidth;
            float BottomLeftY = (F32)(charInfo.bitmapYOrigin + charInfo.charBitmapHeight) / (float)m_BitmapWidth;
            float TopLeftX = BottomLeftX;
            float TopLeftY = (F32)(charInfo.bitmapYOrigin) / (float)m_BitmapWidth;
            float TopRightX = (F32)(charInfo.bitmapXOrigin + charInfo.B - charInfo.C) / (float)m_BitmapWidth;
            float TopRightY = TopLeftY;
            float BottomRightX = TopRightX;
            float BottomRightY = BottomLeftY;

            SetTextureCoordinate(TopLeftX, TopLeftY, vertexID + 1);
            SetTextureCoordinate(BottomLeftX, BottomLeftY, vertexID + 0);
            SetTextureCoordinate(BottomRightX, BottomRightY, vertexID + 3);
            SetTextureCoordinate(TopRightX, TopRightY, vertexID + 2);

            /// index setting
            letterBottomLeftX += letterWidth;
            previousCharIndex = m_Text[c];
        }
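
    For the UV side specifically, with glyphs packed on a fixed 64-pixel grid in a 1024x1024 atlas, the UV rectangle of a glyph can be derived purely from the saved X/Y origin and the glyph (or cell) size, independent of kerning; kerning then only moves the quad, never the UVs. A minimal sketch (the struct and field names are illustrative, not the ones used above):

        struct GlyphUV { float u0, v0, u1, v1; };   // top-left and bottom-right in texture space

        // atlasX/atlasY: pixel origin of the glyph cell (multiples of 64 here);
        // glyphW/glyphH: pixel size of the drawn glyph inside the cell.
        GlyphUV glyphUVs(int atlasX, int atlasY, int glyphW, int glyphH, float atlasSize = 1024.0f) {
            GlyphUV uv;
            uv.u0 = atlasX / atlasSize;
            uv.v0 = atlasY / atlasSize;
            uv.u1 = (atlasX + glyphW) / atlasSize;
            uv.v1 = (atlasY + glyphH) / atlasSize;
            return uv;   // flip v0/v1 if the API's texture origin is bottom-left
        }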

    Read the article

  • Implementing automatic navigation mesh generation for a 2D top-down map?

    - by J2V
    I am currently in the middle of implementing A* pathfinding for enemies. In order to implement the actual A* logic, I need a navigation mesh for my map. I am working on a 2D top-down RPG map. The world is static, meaning there is no requirement for dynamic runtime mesh generation. My world objects are pixel based, not tile based, and have associated data with them such as scale, rotation, origin, etc. I will obviously need some vertex data generated from my world objects; maybe I could generate polygons from color data? I could create a color map with objects for my whole map, but I have no idea how to begin creating nav mesh polygons from it. What would an actual navigation mesh generation process look like with this kind of information available? Can anyone maybe point to some great resources? I have looked into some 3D nav mesh tools, but they seem overly complex for my situation and also derive a lot of their required data from models. Thanks a lot in advance! I have been trying to get my head around it for some time now.

    Read the article

  • Fast pixelshader 2D raytracing

    - by heishe
    I'd like to implement a simple 2D shadow calculation algorithm by rendering my environment into a texture and then using raytracing to determine which pixels of the texture are not visible to the point light (simply handed to the shader as a vec2 position). A simple brute-force algorithm per pixel would look like this:

        line_segment = line segment between current pixel of texture and light source
        For each pixel in the texture:
        {
            if pixel is not just empty space && pixel is on line_segment
                output = black
            else
                output = normal color of the pixel
        }

    This is, of course, probably not the fastest way to do it. Question is: what are faster ways to do it, or what optimizations can be applied to this technique?
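
    A common per-pixel alternative that avoids testing every texel against the segment is to march along the ray from the shaded pixel toward the light and stop at the first occluder; the cost then scales with the distance to the light rather than with the whole texture. A CPU-side C++ sketch of the idea (isOccupied is an assumed lookup into the environment texture; a pixel shader version would sample the texture inside the same loop):

        #include <cmath>

        // Returns true if the straight path from (px, py) to the light is blocked.
        // isOccupied(x, y) is assumed to return true for solid texels of the environment.
        template <typename OccupancyFn>
        bool inShadow(float px, float py, float lightX, float lightY, OccupancyFn isOccupied) {
            float dx = lightX - px, dy = lightY - py;
            float dist = std::sqrt(dx * dx + dy * dy);
            if (dist < 1.0f) return false;

            // Step roughly one texel at a time along the ray, skipping the starting texel.
            float stepX = dx / dist, stepY = dy / dist;
            float x = px + stepX, y = py + stepY;
            for (float t = 1.0f; t < dist; t += 1.0f) {
                if (isOccupied((int)x, (int)y))
                    return true;                           // early out at the first occluder
                x += stepX; y += stepY;
            }
            return false;
        }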

    Read the article

  • Draw multiple objects with textures

    - by Simplex
    I want to draw cubes using textures.

        void OperateWithMainMatrix(ESContext* esContext, GLfloat offsetX, GLfloat offsetY, GLfloat offsetZ)
        {
            UserData *userData = (UserData*) esContext->userData;
            ESMatrix modelview;
            ESMatrix perspective;
            //Manipulation with matrix
            ...
            glVertexAttribPointer(userData->positionLoc, 3, GL_FLOAT, GL_FALSE, 0, cubeFaces); //cubeFaces holds the cube's vertex coordinates
            glVertexAttribPointer(userData->normalLoc, 3, GL_FLOAT, GL_FALSE, 0, cubeFaces); //for normals (used in the fragment shader for textures)
            glEnableVertexAttribArray(userData->positionLoc);
            glEnableVertexAttribArray(userData->normalLoc);
            // Load the MVP matrix
            glUniformMatrix4fv(userData->mvpLoc, 1, GL_FALSE, (GLfloat*)&userData->mvpMatrix.m[0][0]);
            //Bind base map
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_CUBE_MAP, userData->baseMapTexId);
            //Set the base map sampler to texture unit 0
            glUniform1i(userData->baseMapLoc, 0);
            // Draw the cube
            glDrawArrays(GL_TRIANGLES, 0, 36);
        }

    (The coordinate transformation is done in OperateWithMainMatrix().) Then the Draw() function is called:

        void Draw(ESContext *esContext)
        {
            UserData *userData = esContext->userData;
            // Set the viewport
            glViewport(0, 0, esContext->width, esContext->height);
            // Clear the color buffer
            glClear(GL_COLOR_BUFFER_BIT);
            // Use the program object
            glUseProgram(userData->programObject);
            OperateWithMainMatrix(esContext, 0.0f, 0.0f, 0.0f);
            eglSwapBuffers(esContext->eglDisplay, esContext->eglSurface);
        }

    This works fine, but if I try to draw multiple cubes (the following code, for example):

        void Draw(ESContext *esContext)
        {
            ...
            // Use the program object
            glUseProgram(userData->programObject);
            OperateWithMainMatrix(esContext, 2.0f, 0.0f, 0.0f);
            OperateWithMainMatrix(esContext, 1.0f, 0.0f, 0.0f);
            OperateWithMainMatrix(esContext, 0.0f, 0.0f, 0.0f);
            OperateWithMainMatrix(esContext, -1.0f, 0.0f, 0.0f);
            OperateWithMainMatrix(esContext, -2.0f, 0.0f, 0.0f);
            eglSwapBuffers(esContext->eglDisplay, esContext->eglSurface);
        }

    then the side faces overlap the front face. The side face of the right cube overlaps the front face of the center cube. How can I remove this effect and display multiple cubes without it?
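
    One thing that stands out (a guess, not a confirmed diagnosis): the Draw() above clears only the color buffer and never enables depth testing, so faces are painted in submission order and nearer faces can be overwritten by farther ones from another cube. A sketch of the usual OpenGL ES 2.0 setup, assuming the EGL config was created with a depth buffer (e.g. EGL_DEPTH_SIZE of 16 or more):

        // Once at initialisation:
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);          // the default, listed for clarity

        // Every frame, clear depth together with color:
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // ... draw all cubes as before, then call eglSwapBuffers(...)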

    Read the article

  • Checkers AI in Visual Basic not working [on hold]

    - by Eugene Galkine
    I am trying to make checkers in Visual Basic with AI. I am using the minimax algorithm (or at least what I understand of it) and it works, except that the AI plays terribly, as if it is trying to lose, and when I tried to switch around the min and the max the results were IDENTICAL. I am frustrated and have been trying to fix this for over a week now; I would really appreciate it if someone could help me out here. I have 3 years of programming experience (in Java; only about a month of VB experience) and I am usually able to solve all my errors on my own, so I don't know why I can't get this to work. The program is not at all optimized at this point and is over 1.2K lines long, so here is the entire VB project instead: https://www.dropbox.com/sh/evii0jendn93ir2/9fntwH2dNW I would really appreciate any help I could get.
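
    For comparison when debugging, this is the bare shape of minimax for a two-player, zero-sum game like checkers, written here as a C++-style sketch (Board, Move, generateMoves, applyMove and evaluate are placeholders for the game-specific parts, so it compiles but will not link without them). Two classic symptoms of an AI that seems to "play to lose" are evaluating leaf positions from the wrong player's point of view, or forgetting to alternate min and max as the depth increases:

        #include <vector>
        #include <algorithm>
        #include <limits>

        struct Board;                                        // game state (placeholder)
        struct Move;                                         // a single move (placeholder)
        std::vector<Move> generateMoves(const Board&, bool maximizingPlayer);
        Board applyMove(const Board&, const Move&);
        int evaluate(const Board&);                          // positive favours the maximizing player

        int minimax(const Board& board, int depth, bool maximizingPlayer) {
            std::vector<Move> moves = generateMoves(board, maximizingPlayer);
            if (depth == 0 || moves.empty())
                return evaluate(board);                      // always from the same player's perspective

            if (maximizingPlayer) {
                int best = std::numeric_limits<int>::min();
                for (const Move& m : moves)
                    best = std::max(best, minimax(applyMove(board, m), depth - 1, false));
                return best;
            } else {
                int best = std::numeric_limits<int>::max();
                for (const Move& m : moves)
                    best = std::min(best, minimax(applyMove(board, m), depth - 1, true));
                return best;
            }
        }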

    Read the article
