Search Results

Search found 3077 results on 124 pages for 'rendering'.

Page 13/124 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • 2D scene graph not transforming relative to parent

    - by Dr.Denis McCracleJizz
    I am currently in the process of coding my own 2D scene graph, which is basically a port of Flash's render engine. The problem I have right now is that my rendering doesn't seem to be working properly. This code creates the localTransform property for each DisplayObject. Matrix m_transform = Matrix.CreateRotationZ(rotation) * Matrix.CreateScale(scaleX, scaleY, 1) * Matrix.CreateTranslation(new Vector3(x, y, z)); This is my render code. float dRotation; Vector2 dPosition, dScale; Matrix transform; transform = this.localTransform; if (parent != null) transform = localTransform * parent.localTransform; DecomposeMatrix(ref transform, out dPosition, out dRotation, out dScale); spriteBatch.Draw(this.texture, dPosition, null, Color.White, dRotation, new Vector2(originX, originY), dScale, SpriteEffects.None, 0.0f); Here is the result when I add the Stage, then a first DisplayObjectContainer to the stage, and then a second one inside it. It may look fine, but the problem lies in the fact that I add the first DisplayObjectContainer at (400,400) and the second one within it (that's the smallest one) at position (0,0). So it should sit right over its parent, but instead it gets rendered within the parent at the same offset the parent itself has (400, 400), for some reason. It's just as if I doubled the parent's localMatrix and then rendered the second cat there. This is the code I use to loop through every child. base.Draw(spriteBatch); foreach (DisplayObject childs in _childs) { childs.Draw(spriteBatch); }
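
    A note on the transform chain, since the render code above multiplies only by the parent's local matrix: in most scene graphs a node's world transform is composed from every ancestor, not just the immediate parent. A minimal sketch in XNA-style C# (property names are illustrative, not from the post):

        // Compose the world transform recursively so nested children pick up
        // every ancestor's transform. XNA multiplies row vectors, so the
        // child's local matrix is applied first.
        public Matrix WorldTransform
        {
            get
            {
                if (parent == null)
                    return localTransform;
                return localTransform * parent.WorldTransform;
            }
        }

    The render code would then decompose WorldTransform instead of localTransform * parent.localTransform, which stays correct at any nesting depth.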

    Read the article

  • How can I use the dualforward parameter in my Unity shader to use lightmaps and normal maps together?

    - by Raphaeltm
    I'm using the free version of Unity and I would like to combine lightmaps with specularity and normal maps. After doing a -bunch- of research, I've figured out that there doesn't seem to be any easy way to do this in the free version of Unity, which doesn't support deferred rendering/easy use of dual lightmaps. However, it looks like it's possible, by writing a custom shader, using the "dualforward" parameter in a shader, switching the lightmapping mode to "dual lightmaps" and turning on "Use in forward ren." (basically, writing a shader that specifies the use of dual lightmaps, which should allow for a combination of lightmaps and normal maps) So I downloaded the source code for the default shaders (because all I need is a normal specular bumped shader) and added "dualforward" to the parameters: Shader "Bumped Specular Dual Lightmaps" { Properties { _Color ("Main Color", Color) = (1,1,1,1) _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 1) _Shininess ("Shininess", Range (0.03, 1)) = 0.078125 _MainTex ("Base (RGB) Gloss (A)", 2D) = "white" {} _BumpMap ("Normalmap", 2D) = "bump" {} } SubShader { Tags { "RenderType"="Opaque" } LOD 400 CGPROGRAM #pragma surface surf BlinnPhong dualforward sampler2D _MainTex; sampler2D _BumpMap; fixed4 _Color; half _Shininess; struct Input { float2 uv_MainTex; float2 uv_BumpMap; }; void surf (Input IN, inout SurfaceOutput o) { fixed4 tex = tex2D(_MainTex, IN.uv_MainTex); o.Albedo = tex.rgb * _Color.rgb; o.Gloss = tex.a; o.Alpha = tex.a * _Color.a; o.Specular = _Shininess; o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap)); } ENDCG } FallBack "Specular" } This, however, doesn't seem to work. When I keep the "dualforward" param, every object that uses it seems to be lit by the one directional light in the scene. When I remove the "dualforward" param, they look like normal lightmapped objects with no normal maps or specularity. I noticed that the support for "dualforward" seems to be new in v.3.4.2, so I made sure to download it (I was running 3.4.1), but it still doesn't work. Anybody have any advice for me?

    Read the article

  • Effectively implementing a game view using Java

    - by kdavis8
    I am writing a 2d game in Java. The game mechanics are similar to the Pokémon Game Boy Advance series, e.g. FireRed, Ruby, Diamond and so on. I need a way to draw a huge map, maybe 5000 by 5000 pixels, and then load individual in-game sprites across the entirety of the map, like rendering a scene. Game sprites would be things like terrain objects, trees, rocks, bushes, also houses, castles, NPCs and so on. But I also need to implement some kind of camera view class that focuses on the player. The camera view class needs to follow the character's movements throughout the game map, but it also needs to clip the rest of the map away from the user's field of view, so that the user can only see the arbitrary proximity adjacent to the player's sprite. The proximity's range could be something like 500 pixels in every direction around the player's sprite. On top of this, I need to implement an independent resolution for the game world so that the game view will be uniform on all screen sizes and screen resolutions. I know that this does sound like a handful and may fall under the category of multiple questions, but the questions are all related and any advice would be very much appreciated. I don't need a full source code listing, but maybe some pointers to effective Java API classes that could make doing what I need to do a lot simpler. Also, any algorithmic/design advice would greatly benefit me as well. Example of what I am trying to do, in source code form, below. package myPackage; /** * The Purpose of GameView is to: Render a scene using Scene class, Create a * clipping pane using CameraView class, and finally instantiate a coordinate * grid using Path class. * * Once all of these things have been done, GameView class should then be * instantiated and used jointly with its helper classes. CameraView should be * used as the main drawing image. CameraView is the window to the game * world. Scene passes data constantly to CameraView so that the entire map flows * smoothly. Path uses the x and y coordinates from camera view to construct * cells for path finding algorithms. */ public class GameView { // Scene is a helper class to game view. it renders the entire map to memory // for the camera view. Scene scene; // Camera View is a helper class to game view. It clips the Scene into a // small image that follows the players coordinates. CameraView Camera; // Path is a helper class to game view. It observes and calculates the // coordinates of camera view and divides them into Grids/Cells for Path // finding. Path path; // this represents the player and has a getSprite() method that will return // the current frame column row combination of the passed sprite sheet. Sprite player; }
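
    For the camera part specifically, the usual pattern is small enough to sketch in Java (all names here are illustrative, not from the post): center the view on the player, clamp it to the map bounds, and draw everything offset by the camera position.

        // Minimal camera-follow sketch: viewWidth/viewHeight is the visible
        // window, mapWidth/mapHeight the full world in pixels.
        int camX = player.getX() - viewWidth / 2;
        int camY = player.getY() - viewHeight / 2;
        camX = Math.max(0, Math.min(camX, mapWidth - viewWidth));
        camY = Math.max(0, Math.min(camY, mapHeight - viewHeight));
        // every sprite at world position (wx, wy) is drawn at (wx - camX, wy - camY)
        g.drawImage(playerImage, player.getX() - camX, player.getY() - camY, null);

    Independent resolution can then be layered on top by rendering this view into a fixed-size off-screen BufferedImage and scaling that one image to the actual window size.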

    Read the article

  • Why won't my vertex buffer render in GLFW3?

    - by sm81095
    I have started to try to learn OpenGL, and I decided to use GLFW to assist in window creation. The problem is, since GLFW3 is so new, there are no tutorials on it or how to use it with modern OpenGL (3.3, specifically). Using the GLFW3 tutorial found on the website, which uses older OpenGL rendering (glBegin(GL_TRIANGLES), glVertex3f(), and such), I can get a triangle to render to the screen. The problem is, using new OpenGL, I can't get the same triangle to render to the screen. I am new to OpenGL, and GLFW3 is new to most people, so I may be completely missing something obvious, but here is my code: static const GLuint g_vertex_buffer_data[] = { -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; int main(void) { GLFWwindow* window; if(!glfwInit()) { fprintf(stderr, "Failed to initialize GLFW."); return -1; } glfwWindowHint(GLFW_SAMPLES, 4); glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); window = glfwCreateWindow(800, 600, "Test Window", NULL, NULL); if(!window) { glfwTerminate(); fprintf(stderr, "Failed to create a GLFW window"); return -1; } glfwMakeContextCurrent(window); glewExperimental = GL_TRUE; GLenum err = glewInit(); if(err != GLEW_OK) { glfwTerminate(); fprintf(stderr, "Failed to initialize GLEW"); fprintf(stderr, (char*)glewGetErrorString(err)); return -1; } GLuint VertexArrayID; glGenVertexArrays(1, &VertexArrayID); glBindVertexArray(VertexArrayID); GLuint programID = LoadShaders("SimpleVertexShader.glsl", "SimpleFragmentShader.glsl"); GLuint vertexBuffer; glGenBuffers(1, &vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW); while(!glfwWindowShouldClose(window)) { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glUseProgram(programID); glEnableVertexAttribArray(0); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0); glDrawArrays(GL_TRIANGLES, 0, 3); glDisableVertexAttribArray(0); glfwSwapBuffers(window); glfwPollEvents(); } glDeleteBuffers(1, &vertexBuffer); glDeleteProgram(programID); glfwDestroyWindow(window); glfwTerminate(); exit(EXIT_SUCCESS); } I know it is not my shaders, they are super simple and I've checked them against GLFW 2.7 so I know that they work. I'm assuming that I've missed something crucial to using the OpenGL context with GLFW3, so any help locating the problem would be greatly appreciated.
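
    One general debugging aid for a silent failure like this (a generic sketch, not tied to a specific bug in the code above) is to drain glGetError() after each setup stage, since core-profile mistakes often fail without any message:

        /* Minimal GL error drain; call with a label after each setup step,
           e.g. checkGL("glVertexAttribPointer"). */
        static void checkGL(const char *label)
        {
            GLenum e;
            while ((e = glGetError()) != GL_NO_ERROR)
                fprintf(stderr, "GL error 0x%04x after %s\n", e, label);
        }

    Note that glewInit() itself commonly leaves a stray GL_INVALID_ENUM behind on core profiles, so one drain right after GLEW initialization keeps later readings meaningful.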

    Read the article

  • Any ideas on reducing lag in terrain generation?

    - by l5p4ngl312
    OK, so here's the deal. I've written an isometric engine that generates terrain based on camera values using 2D Perlin noise. I planned on doing 3D, but first I need to work out the lag issues I'm having. I will try to explain how I am doing this so that maybe someone can spot where I am going wrong. I know it should not be this laggy. There is the abstract class Block which right now just contains render(). BlockGrass, etc. extend this class and each has code in the render function to create a textured quad at the given position. Then there is the class Chunk which has the functions Generate() and setBlocksInArea(). Generate uses 2D Perlin noise to make a height map and stores the heights in a 2D array. It stores the positions of each block it generates in blockarray[x][y][z]. The chunks are 8x8x128. In the main game class there is a 3D array called blocksInArea. The blocks in this array are what gets rendered. When a chunk generates, it adds its blocks to this array at the correct index. It is like this so chunks can be saved to the hard drive (even though they aren't yet) but there can still be optimization with the rendering that you wouldn't have if you rendered each chunk separately. Here's where the laggy part comes in: when the camera moves to a new chunk, a row of chunks generates on the end of the axis that the camera moved on. But it still has to move the other chunks up/down in the blocksInArea (render) array. It does this by calculating the new position in the array and doing Chunk.setBlocksInArea(): for(int x = 0; x < 8; x++){ for(int y = 0; y < 8; y++){ nx = x+(coordX - camCoordX)*8 ny = y+(coordY - camCoordY)*8 for(int z = 0; z < height[x][y]; z++){ blockarray[x][y][z] = Game.blocksInArea[nx][ny][z]; } } } My reasoning was that this would be much faster than doing the Perlin noise all over again, but there are still little spikes of lag when you move in between chunks. Edit: Would it be possible to create a 3-dimensional array list so that shifting of chunks within the array would not be necessary?
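
    On the closing question: one common alternative (a sketch, not from the post) is toroidal addressing, where chunk storage never shifts at all; the indices wrap around, and only the row or column that scrolled out of range is regenerated in place:

        // Wrap-around chunk lookup (Java 8+ for Math.floorMod; names illustrative).
        Chunk[][] chunks = new Chunk[GRID][GRID];

        Chunk chunkAt(int chunkX, int chunkY) {
            int ix = Math.floorMod(chunkX, GRID);
            int iy = Math.floorMod(chunkY, GRID);
            return chunks[ix][iy]; // regenerate this slot when a new chunk claims it
        }

    This removes the per-move copy loop entirely; spreading the regeneration of the incoming row over a few frames (or a worker thread) usually removes the remaining spike as well.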

    Read the article

  • Telugu (Unicode) font rendering in Emacs

    - by Prakash K
    I sometimes edit text in the Telugu language. However, when I open the file (UTF-8 encoded) in GNU Emacs (version 23.1.50.1 on Ubuntu Jaunty) the text rendering is incorrect. The same text file opened in gedit is rendered correctly. Here's a snippet: ????????? ???? ???? ???????? rendered in gedit: And, the Emacs rendering of the same text: Wherever glyphs need to be composited (not sure if it's the right word), Emacs (or whatever library it uses) is not doing it right. Is there any way to fix this? Perhaps tuning some setting in my configuration? Any ideas, please?

    Read the article

  • Color Picking Troubles - LWJGL/OpenGL

    - by Tom Johnson
    I'm attempting to check which object the user is hovering over. While everything seems to be just how I'd think it should be, I'm not able to get the correct color, apparently because of the second draw pass (the one without picking colors). Here is my rendering code: public void render() { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); camera.applyTranslations(); scene.pick(); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); camera.applyTranslations(); scene.render(); } And here is what gets called on each block/tile on "scene.pick()": public void pick() { glColor3ub((byte) pickingColor.x, (byte) pickingColor.y, (byte) pickingColor.z); draw(); glReadBuffer(GL_FRONT); ByteBuffer buffer = BufferUtils.createByteBuffer(4); glReadPixels(Mouse.getX(), Mouse.getY(), 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, buffer); int r = buffer.get(0) & 0xFF; int g = buffer.get(1) & 0xFF; int b = buffer.get(2) & 0xFF; if(r == pickingColor.x && g == pickingColor.y && b == pickingColor.z) { hovered = true; } else { hovered = false; } } I believe the problem is that in the method of each tile/block called by scene.pick(), it is somehow reading the color from the regular drawing state after that method is called. I believe this because when I remove the "glReadBuffer(GL_FRONT)" line from the pick method, it seems to almost fix it, but then it will also select blocks behind the one you are hovering over, as it is not only looking at the front. If you have any ideas of what to do, please be sure to reply! EDIT: Adding scene.render(), tile.render(), and tile.draw() scene.render: public void render() { for(int x = 0; x < tiles.length; x++) { for(int z = 0; z < tiles.length; z++) { tiles[x][z].render(); } } } tile.render: public void render() { glColor3f(color.x, color.y, color.z); draw(); if(hovered) { glColor3f(1, 1, 1); glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); draw(); glPolygonMode(GL_FRONT_AND_BACK, GL_FILL); } } tile.draw: public void draw() { float x = position.x, y = position.y, z = position.z; //Top glBegin(GL_QUADS); glVertex3f(x, y + size, z); glVertex3f(x + size, y + size, z); glVertex3f(x + size, y + size, z + size); glVertex3f(x, y + size, z + size); glEnd(); //Left glBegin(GL_QUADS); glVertex3f(x, y, z); glVertex3f(x + size, y, z); glVertex3f(x + size, y + size, z); glVertex3f(x, y + size, z); glEnd(); //Right glBegin(GL_QUADS); glVertex3f(x + size, y, z); glVertex3f(x + size, y + size, z); glVertex3f(x + size, y + size, z + size); glVertex3f(x + size, y, z + size); glEnd(); } (The game is like an isometric game. That's why I only draw 3 faces.)
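
    For reference, one common arrangement (a sketch with illustrative method names, not the poster's code) reads the pick result back from the back buffer, before it is cleared, so the pixel always comes from the pick pass of the current frame; keeping the depth test on during the pick pass also ensures only the front-most tile's ID color survives at each pixel:

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        scene.renderPickColors();   // flat ID colors, depth test enabled
        glReadBuffer(GL_BACK);      // read what was just drawn, not last frame's front buffer
        glReadPixels(Mouse.getX(), Mouse.getY(), 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        scene.render();             // normal, visible pass
        Display.update();           // swap

    Reading a single pixel at the mouse position once per frame, rather than once per tile, also avoids the pipeline stall that a glReadPixels inside every pick() call causes.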

    Read the article

  • Not getting desired results with SSAO implementation

    - by user1294203
    After having implemented deferred rendering, I tried my luck with a SSAO implementation using this Tutorial. Unfortunately, I'm not getting anything that looks like SSAO, you can see my result below. You can see there is some weird pattern forming and there is no occlusion shading where there needs to be (i.e. in between the objects and on the ground). The shaders I implemented follow: #VS #version 330 core uniform mat4 invProjMatrix; layout(location = 0) in vec3 in_Position; layout(location = 2) in vec2 in_TexCoord; noperspective out vec2 pass_TexCoord; smooth out vec3 viewRay; void main(void){ pass_TexCoord = in_TexCoord; viewRay = (invProjMatrix * vec4(in_Position, 1.0)).xyz; gl_Position = vec4(in_Position, 1.0); } #FS #version 330 core uniform sampler2D DepthMap; uniform sampler2D NormalMap; uniform sampler2D noise; uniform vec2 projAB; uniform ivec3 noiseScale_kernelSize; uniform vec3 kernel[16]; uniform float RADIUS; uniform mat4 projectionMatrix; noperspective in vec2 pass_TexCoord; smooth in vec3 viewRay; layout(location = 0) out float out_AO; vec3 CalcPosition(void){ float depth = texture(DepthMap, pass_TexCoord).r; float linearDepth = projAB.y / (depth - projAB.x); vec3 ray = normalize(viewRay); ray = ray / ray.z; return linearDepth * ray; } mat3 CalcRMatrix(vec3 normal, vec2 texcoord){ ivec2 noiseScale = noiseScale_kernelSize.xy; vec3 rvec = texture(noise, texcoord * noiseScale).xyz; vec3 tangent = normalize(rvec - normal * dot(rvec, normal)); vec3 bitangent = cross(normal, tangent); return mat3(tangent, bitangent, normal); } void main(void){ vec2 TexCoord = pass_TexCoord; vec3 Position = CalcPosition(); vec3 Normal = normalize(texture(NormalMap, TexCoord).xyz); mat3 RotationMatrix = CalcRMatrix(Normal, TexCoord); int kernelSize = noiseScale_kernelSize.z; float occlusion = 0.0; for(int i = 0; i < kernelSize; i++){ // Get sample position vec3 sample = RotationMatrix * kernel[i]; sample = sample * RADIUS + Position; // Project and bias sample position to get its texture coordinates vec4 offset = projectionMatrix * vec4(sample, 1.0); offset.xy /= offset.w; offset.xy = offset.xy * 0.5 + 0.5; // Get sample depth float sample_depth = texture(DepthMap, offset.xy).r; float linearDepth = projAB.y / (sample_depth - projAB.x); if(abs(Position.z - linearDepth ) < RADIUS){ occlusion += (linearDepth <= sample.z) ? 1.0 : 0.0; } } out_AO = 1.0 - (occlusion / kernelSize); } I draw a full screen quad and pass Depth and Normal textures. Normals are in RGBA16F with the alpha channel reserved for the AO factor in the blur pass. I store depth in a non linear Depth buffer (32F) and recover the linear depth using: float linearDepth = projAB.y / (depth - projAB.x); where projAB.y is calculated as: and projAB.x as: These are derived from the glm::perspective(gluperspective) matrix. z_n and z_f are the near and far clip distance. As described in the link I posted on the top, the method creates samples in a hemisphere with higher distribution close to the center. It then uses random vectors from a texture to rotate the hemisphere randomly around the Z direction and finally orients it along the normal at the given pixel. Since the result is noisy, a blur pass follows the SSAO pass. Anyway, my position reconstruction doesn't seem to be wrong since I also tried doing the same but with the position passed from a texture instead of being reconstructed. I also tried playing with the Radius, noise texture size and number of samples and with different kinds of texture formats, with no luck. 
For some reason when changing the Radius, nothing changes. Does anyone have any suggestions? What could be going wrong?
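
    For reference, the two constants referenced above can be reconstructed from the glm::perspective conventions; assuming the default [0,1] depth buffer range, the usual derivation gives

        \[ \mathrm{projAB}.x = \frac{z_f}{z_f - z_n}, \qquad \mathrm{projAB}.y = \frac{z_n \, z_f}{z_n - z_f}, \qquad z_{\mathrm{linear}} = \frac{\mathrm{projAB}.y}{d - \mathrm{projAB}.x} \]

    which maps a stored depth d = 0 back to z_n and d = 1 back to z_f. If these are the values in use, the linearization itself is consistent, which fits the observation that passing the position in through a texture changes nothing.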

    Read the article

  • OpenGL problem with FBO integer texture and color attachment

    - by Grieverheart
    In my simple renderer, I have 2 FBOs: one that contains diffuse, normals, instance ID and depth, in that order, and one that I use to store the SSAO result. The textures I use for the first FBO are RGB8, RGBA16F, R32I and GL_DEPTH_COMPONENT32F for the depth. For the second FBO I use an R16F texture. My rendering process is to first render everything I mentioned into the first FBO, then bind the depth and normals textures for reading for the SSAO pass and write to the second FBO. After that I bind the second FBO's texture for reading in my blur shader and bind the first FBO for writing. What I intend to do is to write the blurred SSAO value to the alpha component of the Normals texture. Here is where the problems start. First of all, I use shading language 3.3, which my graphics card does support. I manage outputs in my shaders using layout(location = #). Now, the normals texture should be bound to color attachment 1, but when I use 1, it seems to write to my diffuse texture which should be in color attachment 0. When I instead use layout(location = 0), it gets correctly written to my normals texture. Besides this, my instance ID texture also gets reset after running the blur shader, which is weird because if I use a float texture and write instanceID / nInstances to it, the texture doesn't get reset after the blur shader has run. Here is how I prepare my first FBO: bool CGBuffer::Init(unsigned int WindowWidth, unsigned int WindowHeight){ //Create FBO glGenFramebuffers(1, &m_fbo); glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo); //Create gbuffer and Depth Buffer Textures glGenTextures(GBUFF_NUM_TEXTURES, &m_textures[0]); glGenTextures(1, &m_depthTexture); //prepare gbuffer for(unsigned int i = 0; i < GBUFF_NUM_TEXTURES; i++){ glBindTexture(GL_TEXTURE_2D, m_textures[i]); if(i == GBUFF_TEXTURE_TYPE_NORMAL) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, WindowWidth, WindowHeight, 0, GL_RGBA, GL_FLOAT, NULL); else if(i == GBUFF_TEXTURE_TYPE_DIFFUSE) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, WindowWidth, WindowHeight, 0, GL_RGB, GL_FLOAT, NULL); else if(i == GBUFF_TEXTURE_TYPE_ID) glTexImage2D(GL_TEXTURE_2D, 0, GL_R32I, WindowWidth, WindowHeight, 0, GL_RED_INTEGER, GL_INT, NULL); else{ std::cout << "Error in FBO initialization" << std::endl; return false; } glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); } //prepare depth buffer glBindTexture(GL_TEXTURE_2D, m_depthTexture); glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, WindowWidth, WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL); glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); GLenum DrawBuffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2}; glDrawBuffers(GBUFF_NUM_TEXTURES, DrawBuffers); GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER); if(Status != GL_FRAMEBUFFER_COMPLETE){ std::cout << "FB error, status 0x" << std::hex << Status << std::endl; return false; } //Restore default framebuffer glBindFramebuffer(GL_FRAMEBUFFER, 0); return true; } where I use an enum defined as enum GBUFF_TEXTURE_TYPE{ GBUFF_TEXTURE_TYPE_DIFFUSE, GBUFF_TEXTURE_TYPE_NORMAL, GBUFF_TEXTURE_TYPE_ID, GBUFF_NUM_TEXTURES }; Am I missing some kind of restriction? Does the color attachment of the FBO's textures somehow get reset? I'm using a re-size function which re-sizes the textures of the FBO; should I perhaps call glFramebufferTexture2D again too? EDIT: Here is the shader in question: #version 330 core uniform sampler2D aoSampler; uniform vec2 TEXEL_SIZE; // x = 1/res x, y = 1/res y uniform bool use_blur; noperspective in vec2 TexCoord; layout(location = 0) out vec4 out_AO; void main(void){ if(use_blur){ float result = 0.0; for(int i = -1; i < 2; i++){ for(int j = -1; j < 2; j++){ vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j); result += texture(aoSampler, TexCoord + offset).r; // -0.004 because the texture seems to be a bit displaced } } out_AO = vec4(vec3(0.0), result / 9); } else out_AO = vec4(vec3(0.0), texture(aoSampler, TexCoord).r); }
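
    One mechanism worth spelling out here (stated as a general rule; whether it explains the behavior above is a guess): layout(location = N) selects the N-th entry of the currently set glDrawBuffers array, not GL_COLOR_ATTACHMENT<N> itself. A pass that should land only in the normals texture would therefore remap the draw buffers for its duration, sketched as:

        // Route fragment output 0 to color attachment 1 for the blur pass.
        GLenum blurTarget[] = { GL_COLOR_ATTACHMENT1 };
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);
        glDrawBuffers(1, blurTarget);
        // ... draw the blur quad ...
        // restore the full MRT set before the next geometry pass
        GLenum all[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
        glDrawBuffers(3, all);

    Writing only the alpha channel would additionally need glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE) around the pass, so the blur does not clobber the stored normals in RGB.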

    Read the article

  • SSAO Distortion

    - by Robert Xu
    I'm currently (attempting) to add SSAO to my engine, except it's...not really working, to say the least. I use a deferred renderer to render my scene. I have four render targets: Albedo, Light, Normal, and Depth. Here are the parameters for all of them (Surface Format, Depth Format): Albedo: 32-bit ARGB, Depth24Stencil8 Light: 32-bit ARGB, None Normal: 32-bit ARGB, None Depth: 8-bit R (Single), Depth24Stencil8 To generate my random noise map for the SSAO, I do the following for each pixel in the noise map: Vector3 v3 = Vector3.Zero; double z = rand.NextDouble() * 2.0 - 1.0; double r = Math.Sqrt(1.0 - z * z); double angle = rand.NextDouble() * MathHelper.TwoPi; v3.X = (float)(r * Math.Cos(angle)); v3.Y = (float)(r * Math.Sin(angle)); v3.Z = (float)z; v3 += offset; v3 *= 0.5f; result[i] = new Color(v3); This is my GBuffer rendering effect: PixelInput RenderGBufferColorVertexShader(VertexInput input) { PixelInput pi = ( PixelInput ) 0; pi.Position = mul(input.Position, WorldViewProjection); pi.Normal = mul(input.Normal, WorldInverseTranspose); pi.Color = input.Color; pi.TPosition = pi.Position; pi.WPosition = input.Position; return pi; } GBufferTarget RenderGBufferColorPixelShader(PixelInput input) { GBufferTarget output = ( GBufferTarget ) 0; float3 position = input.TPosition.xyz / input.TPosition.w; output.Albedo = lerp(float4(1.0f, 1.0f, 1.0f, 1.0f), input.Color, ColorFactor); output.Normal = EncodeNormal(input.Normal); output.Depth = position.z; return output; } And here is the SSAO effect: float4 EncodeNormal(float3 normal) { return float4((normal.xyz * 0.5f) + 0.5f, 0.0f); } float3 DecodeNormal(float4 encoded) { return encoded * 2.0 - 1.0f; } float Intensity; float Size; float2 NoiseOffset; float4x4 ViewProjection; float4x4 ViewProjectionInverse; texture DepthMap; texture NormalMap; texture RandomMap; const float3 samples[16] = { float3(0.01537562, 0.01389096, 0.02276565), float3(-0.0332658, -0.2151698, -0.0660736), float3(-0.06420016, -0.1919067, 0.5329634), float3(-0.05896204, -0.04509097, -0.03611697), float3(-0.1302175, 0.01034653, 0.01543675), float3(0.3168565, -0.182557, -0.01421785), float3(-0.02134448, -0.1056605, 0.00576055), float3(-0.3502164, 0.281433, -0.2245609), float3(-0.00123525, 0.00151868, 0.02614773), float3(0.1814744, 0.05798516, -0.02362876), float3(0.07945167, -0.08302628, 0.4423518), float3(0.321987, -0.05670302, -0.05418307), float3(-0.00165138, -0.00410309, 0.00537362), float3(0.01687791, 0.03189049, -0.04060405), float3(-0.04335613, -0.00530749, 0.06443053), float3(0.8474263, -0.3590308, -0.02318038), }; sampler DepthSampler = sampler_state { Texture = DepthMap; MipFilter = Point; MinFilter = Point; MagFilter = Point; AddressU = Clamp; AddressV = Clamp; AddressW = Clamp; }; sampler NormalSampler = sampler_state { Texture = NormalMap; MipFilter = Linear; MinFilter = Linear; MagFilter = Linear; AddressU = Clamp; AddressV = Clamp; AddressW = Clamp; }; sampler RandomSampler = sampler_state { Texture = RandomMap; MipFilter = Linear; MinFilter = Linear; MagFilter = Linear; }; struct VertexInput { float4 Position : POSITION0; float2 TextureCoordinates : TEXCOORD0; }; struct PixelInput { float4 Position : POSITION0; float2 TextureCoordinates : TEXCOORD0; }; PixelInput SSAOVertexShader(VertexInput input) { PixelInput pi = ( PixelInput ) 0; pi.Position = input.Position; pi.TextureCoordinates = input.TextureCoordinates; return pi; } float3 GetXYZ(float2 uv) { float depth = tex2D(DepthSampler, uv); float2 xy = uv * 2.0f - 1.0f; xy.y *= -1; float4 p = float4(xy, depth, 1);
float4 q = mul(p, ViewProjectionInverse); return q.xyz / q.w; } float3 GetNormal(float2 uv) { return DecodeNormal(tex2D(NormalSampler, uv)); } float4 SSAOPixelShader(PixelInput input) : COLOR0 { float depth = tex2D(DepthSampler, input.TextureCoordinates); float3 position = GetXYZ(input.TextureCoordinates); float3 normal = GetNormal(input.TextureCoordinates); float occlusion = 1.0f; float3 reflectionRay = DecodeNormal(tex2D(RandomSampler, input.TextureCoordinates + NoiseOffset)); for (int i = 0; i < 16; i++) { float3 sampleXYZ = position + reflect(samples[i], reflectionRay) * Size; float4 screenXYZW = mul(float4(sampleXYZ, 1.0f), ViewProjection); float3 screenXYZ = screenXYZW.xyz / screenXYZW.w; float2 sampleUV = float2(screenXYZ.x * 0.5f + 0.5f, 1.0f - (screenXYZ.y * 0.5f + 0.5f)); float frontMostDepthAtSample = tex2D(DepthSampler, sampleUV); if (frontMostDepthAtSample < screenXYZ.z) { occlusion -= 1.0f / 16.0f; } } return float4(occlusion * Intensity * float3(1.0, 1.0, 1.0), 1.0); } technique SSAO { pass Pass0 { VertexShader = compile vs_3_0 SSAOVertexShader(); PixelShader = compile ps_3_0 SSAOPixelShader(); } } However, when I use the effect, I get some pretty bad distortion: Here's the light map that goes with it -- is the static-like effect supposed to be like that? I've noticed that even if I'm looking at nothing, I still get the static-like effect. (you can see it in the screenshot; the top half doesn't have any geometry yet it still has the static-like effect) Also, does anyone have any advice on how to effectively debug shaders?
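
    A general XNA/Direct3D 9 note that may or may not apply here: the "-0.004 because the texture seems to be a bit displaced" comment looks like the classic D3D9 half-texel offset, where texel centers sit half a pixel off from screen pixel centers in full-screen passes. The usual correction is exact rather than a tuned constant,

        \[ \mathbf{uv}' = \mathbf{uv} + \left( \frac{0.5}{w}, \; \frac{0.5}{h} \right) \]

    with w and h the render-target dimensions in pixels (the sign flips if the correction is applied to vertex positions instead of texture coordinates).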

    Read the article

  • projection / view matrix: the object is bigger than it should and depth does not affect vertices

    - by Francesco Noferi
    I'm currently trying to write a C 3D software rendering engine from scratch just for fun and to have an insight on what OpenGL does behind the scenes and what 90's programmers had to do on DOS. I have written my own matrix library and tested it without noticing any issues, but when I tried projecting the vertices of a simple 2x2 cube at 0,0 as seen by a basic camera at 0,0,10, the cube seems to appear way bigger than the application's window. If I scale the vertices' coordinates down by 8 times I can see a proper cube centered on the screen. This cube doesn't seem to be in perspective: when seen from the front, the back vertices perfectly overlap with the front ones, so I'm quite sure it's not correct. This is how I create the view and projection matrices (vec4_initd initializes the vectors with w=0, vec4_initw initializes the vectors with w=1): void mat4_lookatlh(mat4 *m, const vec4 *pos, const vec4 *target, const vec4 *updirection) { vec4 fwd, right, up; // fwd = norm(pos - target) fwd = *target; vec4_sub(&fwd, pos); vec4_norm(&fwd); // right = norm(cross(updirection, fwd)) vec4_cross(updirection, &fwd, &right); vec4_norm(&right); // up = cross(right, forward) vec4_cross(&fwd, &right, &up); // orientation and translation matrices combined vec4_initd(&m->a, right.x, up.x, fwd.x); vec4_initd(&m->b, right.y, up.y, fwd.y); vec4_initd(&m->c, right.z, up.z, fwd.z); vec4_initw(&m->d, -vec4_dot(&right, pos), -vec4_dot(&up, pos), -vec4_dot(&fwd, pos)); } void mat4_perspectivefovrh(mat4 *m, float fovdegrees, float aspectratio, float near, float far) { float h = 1.f / tanf(ftoradians(fovdegrees / 2.f)); float w = h / aspectratio; vec4_initd(&m->a, w, 0.f, 0.f); vec4_initd(&m->b, 0.f, h, 0.f); vec4_initw(&m->c, 0.f, 0.f, -far / (near - far)); vec4_initd(&m->d, 0.f, 0.f, (near * far) / (near - far)); } This is how I project my vertices: void device_project(device *d, const vec4 *coord, const mat4 *transform, int *projx, int *projy) { vec4 result; mat4_mul(transform, coord, &result); *projx = result.x * d->w + d->w / 2; *projy = result.y * d->h + d->h / 2; } void device_rendervertices(device *d, const camera *camera, const mesh meshes[], int nmeshes, const rgba *color) { int i, j; mat4 view, projection, world, transform, projview; mat4 translation, rotx, roty, rotz, transrotz, transrotzy; int projx, projy; // vec4_unity = (0.f, 1.f, 0.f, 0.f) mat4_lookatlh(&view, &camera->pos, &camera->target, &vec4_unity); mat4_perspectivefovrh(&projection, 45.f, (float)d->w / (float)d->h, 0.1f, 1.f); for (i = 0; i < nmeshes; i++) { // world matrix = translation * rotz * roty * rotx mat4_translatev(&translation, meshes[i].pos); mat4_rotatex(&rotx, ftoradians(meshes[i].rotx)); mat4_rotatey(&roty, ftoradians(meshes[i].roty)); mat4_rotatez(&rotz, ftoradians(meshes[i].rotz)); mat4_mulm(&translation, &rotz, &transrotz); // transrotz = translation * rotz mat4_mulm(&transrotz, &roty, &transrotzy); // transrotzy = transrotz * roty = translation * rotz * roty mat4_mulm(&transrotzy, &rotx, &world); // world = transrotzy * rotx = translation * rotz * roty * rotx // transform matrix mat4_mulm(&projection, &view, &projview); // projview = projection * view mat4_mulm(&projview, &world, &transform); // transform = projview * world = projection * view * world for (j = 0; j < meshes[i].nvertices; j++) { device_project(d, &meshes[i].vertices[j], &transform, &projx, &projy); device_putpixel(d, projx, projy, color); } } } This is how the cube and camera are initialized: // test mesh cube = &meshlist[0]; mesh_init(cube, "Cube", 8);
cube->rotx = 0.f; cube->roty = 0.f; cube->rotz = 0.f; vec4_initw(&cube->pos, 0.f, 0.f, 0.f); vec4_initw(&cube->vertices[0], -1.f, 1.f, 1.f); vec4_initw(&cube->vertices[1], 1.f, 1.f, 1.f); vec4_initw(&cube->vertices[2], -1.f, -1.f, 1.f); vec4_initw(&cube->vertices[3], -1.f, -1.f, -1.f); vec4_initw(&cube->vertices[4], -1.f, 1.f, -1.f); vec4_initw(&cube->vertices[5], 1.f, 1.f, -1.f); vec4_initw(&cube->vertices[6], 1.f, -1.f, 1.f); vec4_initw(&cube->vertices[7], 1.f, -1.f, -1.f); // main camera vec4_initw(&maincamera.pos, 0.f, 0.f, 10.f); maincamera.target = vec4_zerow; and, just to be sure, this is how I compute matrix multiplications: void mat4_mul(const mat4 *m, const vec4 *va, vec4 *vb) { vb->x = m->a.x * va->x + m->b.x * va->y + m->c.x * va->z + m->d.x * va->w; vb->y = m->a.y * va->x + m->b.y * va->y + m->c.y * va->z + m->d.y * va->w; vb->z = m->a.z * va->x + m->b.z * va->y + m->c.z * va->z + m->d.z * va->w; vb->w = m->a.w * va->x + m->b.w * va->y + m->c.w * va->z + m->d.w * va->w; } void mat4_mulm(const mat4 *ma, const mat4 *mb, mat4 *mc) { mat4_mul(ma, &mb->a, &mc->a); mat4_mul(ma, &mb->b, &mc->b); mat4_mul(ma, &mb->c, &mc->c); mat4_mul(ma, &mb->d, &mc->d); }
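
    For comparison, a typical final projection step (a sketch reusing the post's own types; the w handling is the point, the viewport mapping is illustrative) divides by the w produced by the projection matrix before mapping to the screen; without that divide there is no foreshortening, and the raw clip-space values also come out at a much larger scale:

        void device_project(device *d, const vec4 *coord, const mat4 *transform,
                            int *projx, int *projy)
        {
            vec4 r;
            mat4_mul(transform, coord, &r);
            float invw = 1.f / r.w;                        /* perspective divide */
            *projx = (int)((r.x * invw * 0.5f + 0.5f) * d->w);
            *projy = (int)((r.y * invw * 0.5f + 0.5f) * d->h); /* flip y here if needed */
        }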

    Read the article

  • Turning on collision crashes game

    - by MomentumGaming
    I am getting a null pointer exception on both my sprite and level. I am working on my mob class, and when I try to move him and the move function is called, the game crashes while checking collision, with a null pointer exception. Taking out the one line that actually checks the tile located in front of it fixes the problem. Also, if I keep collision ON but don't move the position of the mob (the spider), the game works fine. I will have collision, and the spider appears on the screen; the only problem is, getting it to move causes this nasty error that I just can't fix. true Exception in thread "Display" java.lang.NullPointerException at com.apcompsci.game.entity.mob.Mob.collision(Mob.java:67) at com.apcompsci.game.entity.mob.Mob.move(Mob.java:38) at com.apcompsci.game.entity.mob.spider.update(spider.java:58) at com.apcompsci.game.level.Level.update(Level.java:55) at com.apcompsci.game.Game.update(Game.java:128) at com.apcompsci.game.Game.run(Game.java:106) at java.lang.Thread.run(Unknown Source) Here is my renderMob method: public void renderMob(int xp,int yp,Sprite sprite,int flip) { xp -= xOffset; yp-=yOffset; for(int y = 0; y<32; y++) { int ya = y + yp; int ys = y; if(flip == 2||flip == 3)ys = 31-y; for(int x = 0; x<32; x++) { int xa = x + xp; int xs = x; if(flip == 1||flip == 3)xs = 31-x; if(xa < -32 || xa >=width || ya<0||ya>=height) break; if(xa<0) xa =0; int col = sprite.pixels[xs+ys*32]; if(col!= 0x000000) pixels[xa+ya*width] = col; } } } Here is my spider class, which determines the sprite, controls movement, and renders the spider onto the screen. When I increment ya to move the sprite, I get the crash; without ya++, it runs flawlessly with a spider sprite on screen: package com.apcompsci.game.entity.mob; import com.apcompsci.game.entity.mob.Mob.Direction; import com.apcompsci.game.graphics.Screen; import com.apcompsci.game.graphics.Sprite; import com.apcompsci.game.level.Level; public class spider extends Mob{ Direction dir; private Sprite sprite; private boolean walking; public spider(int x, int y) { this.x = x <<4; this.y = y <<4; sprite = sprite.spider_forward; } public void update() { int xa = 0, ya = 0; ya++; if(ya<0) { sprite = sprite.spider_forward; dir = Direction.UP; } if(ya>0) { sprite = sprite.spider_back; dir = Direction.DOWN; } if(xa<0) { sprite = sprite.spider_side; dir = Direction.LEFT; } if(xa>0) { sprite = sprite.spider_side; dir = Direction.LEFT; } if(xa!= 0 || ya!= 0) { System.out.println("true"); move(xa,ya); walking = true; } else{ walking = false; } } public void render(Screen screen) { screen.renderMob(x, y, sprite, 0); } } This is the Mob class, which contains the move() method called in the spider class above. The move method calls the collision method; tile and sprite come up null in the debugger: package com.apcompsci.game.entity.mob; import java.util.ArrayList; import java.util.List; import com.apcompsci.game.entity.Entity; import com.apcompsci.game.entity.projectile.DemiGodProjectile; import com.apcompsci.game.entity.projectile.Projectile; import com.apcompsci.game.graphics.Sprite; public class Mob extends Entity{ protected Sprite sprite; protected boolean moving = false; protected enum Direction { UP,DOWN,LEFT,RIGHT } protected Direction dir; public void move(int xa,int ya) { if(xa != 0 && ya != 0) { move(xa,0); move(0,ya); return; } if(xa>0) dir = Direction.RIGHT; if(xa<0) dir = Direction.LEFT; if(ya>0)dir = Direction.DOWN; if(ya<0)dir = Direction.UP; if(!collision(xa,ya)){ x+= xa; y+=ya; } } public void update() { } public void shoot(int x, int y, double dir) { //dir = Math.toDegrees(dir); Projectile p = new DemiGodProjectile(x, y,dir); level.addProjectile(p); } public boolean collision(int xa,int ya) { boolean solid = false; for(int c = 0; c<4; c++) { int xt = ((x+xa) + c % 2 * 14 - 8 )/16; int yt = ((y+ya) + c / 2 * 12 +3 )/16; if(level.getTile(xt, yt).solid()) solid = true; } return solid; } public void render() { } } Finally, here is the method in which I call the add() method for the spider to add it to the level: protected void loadLevel(String path) { try{ BufferedImage image = ImageIO.read(SpawnLevel.class.getResource(path)); int w = width =image.getWidth(); int h = height = image.getHeight(); tiles = new int[w*h]; image.getRGB(0, 0, w,h, tiles,0, w); } catch(IOException e){ e.printStackTrace(); System.out.println("Exception! Could not load level file!"); } add(new spider(20,45)); } I don't think I need to include the level class, but just in case, I have provided a GitHub gist link for better context. It contains all of the full classes listed above, plus my entity class and maybe another. Thanks for the help if you decide to do so, much appreciated! Also, please tell me if I'm in the wrong section of Stack Overflow; I figured that since this is the gaming section it belonged here, but debugging code normally goes into the general section.

    Read the article

  • Rendering a frame is producing noise from speakers in Windows and Linux

    - by Robber
    When any hardware accelerated application is rendering a frame (or many of them) a very short noise is coming from my speakers. This can be a game, a WebGL application or XBMC. When the application/game is rendering many frames per second (like most of them do) the noise is a continuous buzzing that gets higher pitched with higher framerates. This applies to Linux and Windows, so I'd assume it's a hardware problem. The current hardware in the PC is: CPU: Core2Quad Q9550 GPU: Radeon HD 5770 RAM: 2x2GB DDR2 Motherboard: Asus P5QLD PRO PSU: be quiet! Pure Power 530W Screen and speakers: Old 720p LCD TV connected via VGA and aux cable Muting the TV stops the noise, muting Windows doesn't. I tried replacing the PSU first (used a Tagan 700W PSU before) because I thought it was a power problem. It wasn't. I tried replacing the motherboard (used a ASUS P5B SE before) next because I thought it was a sound card problem. It wasn't. I tried the GPU in a different PC because I thought it was a broken graphics card. It worked perfectly fine in the other PC. I thought it might be interference, but moving the audio cable around changes absolutely nothing. I tried using an HDMI cable instead and that did work, but is not an option since my TV has only one HDMI input and I need that for my PS3.

    Read the article

  • Remote offscreen rendering

    - by redmoskito
    My research lab recently added a server that has a beefy NVIDIA graphics card, which we would like to use to do scientific computations. Since it isn't a workstation, we'll have to run our jobs remotely, over an ssh connection. Most of our applications require doing OpenGL rendering to an offscreen buffer, then doing image analysis on the result in CUDA. My initial investigation suggests that X11 forwarding is a bad idea, because OpenGL rendering will occur on the client machine (or rather the X11 server--what a confusing naming convention!) and will suffer network bottlenecks when sending our massive textures. We will never need to display the output, so it seems like X11 forwarding shouldn't be necessary, but OpenGL needs the $DISPLAY to be set to something valid or our applications won't run. I'm sure render farms exist that do this, but how is it accomplished? I think this is probably a simple X11 configuration issue, but I'm too unfamiliar with it to know where to start. We're running Ubuntu server 10.04, with no gdm, gnome, etc. installed. However, the xserver-xorg package is installed.

    Read the article

  • AJAX Partial Rendering issues for the default page in IIS 7 when using custom http module

    - by WiseGuyEh
    The problem: When I try to make an AJAX partial update request (using the UpdatePanel control) from the default page of an IIS7 web site, it fails- instead of returning the html to be updated, it returns the entire page, which then causes the MS AJAX JavaScript to throw a parsing shit-fit. The suspected cause: I have narrowed the cause down to two issues- making an AJAX request to the default page when I have a certain custom HTTP module registered. A partial rendering request to http://localhost will fail, but a partial rendering request to http://localhost/default.aspx will work fine. Also, if I remove the following line in my custom HttpModule: _application.PreRequestHandlerExecute += OnPreRequestHandlerExecute; the AJAX partial render will work correctly. Weird, huh? Another weird thing... If I look at trace.axd, I can see that when a partial rendering request fails, two POST requests are logged for the one partial rendering request- one where the default.aspx page executes successfully (trace information such as page_load is logged) but no content is produced, and a second that doesn't seem to actually execute (no trace information is logged) but produces content (HTTP_CONTENT_LENGTH is greater than 0). Please help! If anyone with a good knowledge of HTTP modules or the MS AJAX HTTP module could explain why this is occurring, I would be very grateful. As it is, the obvious workaround is to just redirect to default.aspx if the request URL is "/", but I would really like to understand why this is occurring.

    Read the article

  • What is the difference between "rendering a view" and sending the response using the Response's "sendResponse()" method?

    - by Green
    I've asked a question about what "rendering a view" is. I got some answers: "Rendering a view means showing up a View eg html part to user or browser." and "So by rendering a view, the MVC framework has handled the data in the controller and done the backend work in the model, and then sends that data to the View to be output to the user." and "render just means to emit. To print. To echo. To write to some source (probably stdout)." But I still don't understand the difference between rendering a view and using the Response class to send the output to the user via its sendResponse() method. If rendering a view means echoing the output to the user, then why does sendResponse() exist, and vice versa? sendResponse() exactly sends headers and, after the headers, outputs the body. Do they solve the same task, just differently? What is the difference?

    Read the article

  • Mapping a Vertex Buffer in DirectX11

    - by judeclarke
    I have a VertexBuffer that I am remapping on a per-frame basis for a bunch of quads that are constantly updated, sharing the same material/index buffer but with different widths/heights. However, right now there is a really bad flicker on this geometry. Although it is flickering, the geometry looks correct whenever it does appear. I know it is the vertex buffer mapping, because if I recreate the entire VB then it will render fine. However, as an optimization I figured I would just remap it. Does anyone know what the problem is? The length (width, size) of the vertex buffer is always the same. One might think it is double buffering; however, it would not be double buffering, because it only happens when I map/unmap the buffer, so that leads me to believe that I am setting some parameters wrong on the creation or mapping. I am using DirectX11; my initialization and remap code are: Initialization code D3D11_BUFFER_DESC bd; ZeroMemory( &bd, sizeof(bd) ); bd.Usage = D3D11_USAGE_DYNAMIC; bd.ByteWidth = vertCount * vertexTypeWidth; bd.BindFlags = D3D11_BIND_VERTEX_BUFFER; //bd.CPUAccessFlags = 0; bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; D3D11_SUBRESOURCE_DATA InitData; ZeroMemory( &InitData, sizeof(InitData) ); InitData.pSysMem = vertices; mVertexType = vertexType; HRESULT hResult = device->CreateBuffer( &bd, &InitData, &m_pVertexBuffer ); // This will be S_OK if(hResult != S_OK) return false; Remap code D3D11_MAPPED_SUBRESOURCE resource; HRESULT hResult = deviceContext->Map(m_pVertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource); // This will be S_OK if(hResult != S_OK) return false; resource.pData = vertices; deviceContext->Unmap(m_pVertexBuffer, 0);
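
    For reference, the usual D3D11 dynamic-buffer update copies the new data into the pointer returned by Map; assigning to resource.pData only overwrites a field of the local struct and sends nothing to the buffer. A minimal sketch using the same names as above (memcpy from <cstring>):

        D3D11_MAPPED_SUBRESOURCE resource;
        HRESULT hr = deviceContext->Map(m_pVertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource);
        if (FAILED(hr))
            return false;
        memcpy(resource.pData, vertices, vertCount * vertexTypeWidth); // copy into mapped memory
        deviceContext->Unmap(m_pVertexBuffer, 0);

    With D3D11_MAP_WRITE_DISCARD every Map hands back a fresh region of memory, which is also why any bytes not rewritten that frame hold garbage rather than last frame's data.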

    Read the article

  • UV texture mapping with perspective correct interpolation

    - by Twodordan
    I am working on a software rasterizer for educational purposes and I am having issues with the texturing. The problem is, only one face of the cube gets correctly textured. The rest are stretched edges: You can see the running program online here. I have used cartesian coordinates, and all I do is interpolate the uv values along the scanlines. The general formula I use for interpolating the uv coordinates is pretty much the one I use for the z-buffering interpolation and looks like this (in this case for horizontal scanlines): u_Slope = (right.u - left.u) / (triangleRight_x - triangleLeft_x); v_Slope = (right.v - left.v) / (triangleRight_x - triangleLeft_x); //[...] new_u = left.u + ((currentX_onScanLine - triangleLeft_x) * u_Slope); new_v = left.v + ((currentX_onScanLine - triangleLeft_x) * v_Slope); Then, when I add each point to the pixel buffer, I restore z and uv: z = (1/z); uv.u = Math.round(uv.u * z *100);//*100 because my texture is 100x100px uv.v = Math.round(uv.v * z *100); Then I turn the u,v indices into one index in order to fetch the correct pixel from the image data (which is a 1-dimensional pixel array): var index = texture.width * uv.u + uv.v; //and the rest is unimportant imagedata[index].RGBA bla bla The interpolation formula is correct considering the consistency of the texture (including the straight stripes). However, I seem to get quite a lot of 0 values for either u or v, which is probably why I only get one face right. Furthermore, why is the texture flipped horizontally? (the "1" is flipped) I must get some sleep now, but before I get into further dissecting of every single value to see what goes wrong, can someone more experienced guess why this might be happening, just by looking at the cube? "I have no idea what I'm doing" (it's my first time implementing a rasterizer). Did I miss an important stage? Thanks for any insight. PS: My UV values are as follows: { u:0, v:0 }, { u:0, v:0.5 }, { u:0.5, v:0.5 }, { u:0.5, v:0 }, { u:0, v:0 }, { u:0, v:0.5 }, { u:0.5, v:0.5 }, { u:0.5, v:0 }
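
    For reference, the textbook statement of perspective-correct interpolation along a span, with t the linear fraction across the span in screen space and z the view-space depth at the endpoints (the general formula, not code from the post):

        \[ \frac{1}{z(t)} = \frac{1-t}{z_0} + \frac{t}{z_1}, \qquad u(t) = \left( \frac{(1-t)\,u_0}{z_0} + \frac{t\,u_1}{z_1} \right) \bigg/ \frac{1}{z(t)} \]

    In other words, u/z and 1/z are the quantities that interpolate linearly in screen space, and the division back by 1/z happens per pixel; interpolating plain u and v, as the scanline code above does, is only correct for faces parallel to the screen, which matches seeing one good face.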

    Read the article

  • How to use caching to increase render performance?

    - by Christian Ivicevic
    First of all, I am going to cover the basic design of my 2d tile-based engine written with SDL in C++, then I will point out what I am up to and where I need some hints. Concept of my engine: My engine uses the concept of GameScreens which are stored on a stack in the main game class. The main methods of a screen are usually LoadContent, Render, Update and InitMultithreading. (I use the last one because I am using v8 as a JavaScript bridge to the engine.) The main game loop then renders the top screen on the stack (if there is one; otherwise, it exits the game) - actually it calls the render methods, but stores all items to be rendered in a list. After gathering all this information, methods like SDL_BlitSurface are called by my GameUIRenderer, which draws the enqueued content and then draws some overlay. The code looks like this: while(Game is running) { Handle input if(Screens on stack == 0) exit Update timer etc. Clear the screen Peek the screen on the stack and collect information on what to render Actually render the enqueued screen stuff and some overlay etc. Flip the screen } The GameUIRenderer uses, as hinted, a std::vector<std::shared_ptr<ImageToRender>> to hold all necessary information, described by this class: class ImageToRender { private: SDL_Surface* image; int x, y, w, h, xOffset, yOffset; }; This bunch of attributes is usually needed if I have a texture atlas with all tiles in one SDL_Surface and then the engine should crop one specific area and draw this to the screen. The GameUIRenderer::Render() method then just iterates over all elements and renders them something like this: std::for_each( this->m_vImageVector.begin(), this->m_vImageVector.end(), [this](std::shared_ptr<ImageToRender> pCurrentImage) { SDL_Rect rc = { pCurrentImage->x, pCurrentImage->y, 0, 0 }; // For the sake of simplicity ignore offsets... SDL_Rect srcRect = { 0, 0, pCurrentImage->w, pCurrentImage->h }; SDL_BlitSurface(pCurrentImage->pImage, &srcRect, g_pFramework->GetScreen(), &rc); } ); this->m_vImageVector.clear(); Current ideas which need to be reviewed: The approach described above works really well and IMHO has a good structure; however, the performance could definitely be increased. I would like to know what you suggest: how should I implement efficient caching of surfaces etc. so that there is no need to redraw the same scene over and over again? The map itself would be almost static; only when the player moves would we need to move the map. Furthermore, animated entities would either require updates of the whole map or updates of only the specific areas the entities are currently moving in. My first approach was to include a flag IsTainted which should be used by the GameUIRenderer to decide whether to redraw everything or use the cached version (or to not render anything, so that we do not have to Clear the screen and let the last frame persist). However, this seems to be quite messy if I have to manually handle in the Render method of the screen class whether something has changed or not.
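
    As one concrete shape for the caching (a sketch assuming SDL 1.2 as used above; DrawAllTiles and the dirty flag are illustrative): render the static tile layer into an offscreen surface once, blit only the visible window of it each frame, and invalidate the cache only when the map really changes.

        SDL_Surface *mapCache = SDL_CreateRGBSurface(SDL_HWSURFACE,
            mapPixelWidth, mapPixelHeight, 32, 0, 0, 0, 0);
        bool mapDirty = true;

        void RenderMapLayer(SDL_Surface *screen, int camX, int camY) {
            if (mapDirty) {
                DrawAllTiles(mapCache);  // hypothetical full redraw, done rarely
                mapDirty = false;
            }
            SDL_Rect src = { (Sint16)camX, (Sint16)camY,
                             (Uint16)screen->w, (Uint16)screen->h };
            SDL_BlitSurface(mapCache, &src, screen, NULL);
        }

    Animated entities then composite on top of the cached layer every frame, so a moving sprite costs only its own blit instead of forcing a full map redraw, and the IsTainted decision collapses to one flag per layer instead of per screen.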

    Read the article

  • Inter Quake Model IQM render Directx9

    - by Andrew_0
    I'm trying to render an Inter Quake Model (http://lee.fov120.com/iqm/), exported from Blender, in DirectX9. I want to display animations, which IQM supports and my model format does not. The model is a cylinder. It loads fine in the IQM SDK OpenGL viewer, but when I try to render it in DirectX9 using, for example (this is just to render the vertices): IDirect3DDevice9 * device; HRESULT hr = S_OK; for(int i = 0; i < nummeshes; i++) { iqmmesh &m = meshes[0]; hr = device->DrawIndexedPrimitiveUP(D3DPT_TRIANGLELIST, 0, 3*m.num_triangles, m.num_triangles ,&tris[m.first_triangle] ,D3DFMT_INDEX32 ,inposition ,sizeof(unsigned int)); } It renders like this: Incorrect The light grey bit that looks like two triangles in the middle is what is rendered (ignore the other stuff). Whereas it is meant to look like this (using a custom importer I designed, which matches what is displayed in Blender): Correct Anyone have any suggestions on what might be going wrong?
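
    One detail worth checking against the documentation (a general note, not a confirmed diagnosis): the last parameter of IDirect3DDevice9::DrawIndexedPrimitiveUP is the stride of one vertex in the user-pointer vertex stream, while the call above passes sizeof(unsigned int). A call shaped like the following matches the documented signature, assuming inposition holds tightly packed float3 positions:

        hr = device->DrawIndexedPrimitiveUP(
            D3DPT_TRIANGLELIST,
            0,                        // MinVertexIndex
            m.num_vertexes,           // number of vertices spanned (field name assumed)
            m.num_triangles,          // PrimitiveCount
            &tris[m.first_triangle],  // index data
            D3DFMT_INDEX32,
            inposition,               // vertex stream zero
            sizeof(float) * 3);       // stride of one position vertex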

    Read the article

  • Why is my Tiled map distorted when rendered with LibGDX?

    - by Sean
    I have a Tiled map that looks like this in the editor: But when I load it using an AssetManager (full static source available on GitHub) it appears completely askew. I believe the relevant portion of the code is below. This is the entire method; the others are either empty or might as well be. private OrthographicCamera camera; private AssetManager assetManager; private BitmapFont font; private SpriteBatch batch; private TiledMap map; private TiledMapRenderer renderer; @Override public void create() { float w = Gdx.graphics.getWidth(); float h = Gdx.graphics.getHeight(); camera = new OrthographicCamera(); assetManager = new AssetManager(); batch = new SpriteBatch(); font = new BitmapFont(); camera.setToOrtho(false, (w / h) * 10, 10); camera.update(); assetManager.setLoader(TiledMap.class, new TmxMapLoader( new InternalFileHandleResolver())); assetManager.load(AssetInfo.ICE_CAVE.assetPath, TiledMap.class); assetManager.finishLoading(); map = assetManager.get(AssetInfo.ICE_CAVE.assetPath); renderer = new IsometricTiledMapRenderer(map, 1f/64f); }
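
    One thing that stands out (a guess from the two screenshots rather than a verified fix): the map is drawn with IsometricTiledMapRenderer, which re-projects tile coordinates onto a diamond grid, so a map authored as a plain orthogonal map in Tiled would come out skewed in exactly this way. The orthogonal counterpart takes the same arguments:

        renderer = new OrthogonalTiledMapRenderer(map, 1f / 64f);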

    Read the article

  • Drawing territory borders in a 2D map

    - by Gabriel A. Zorrilla
    I'm programming a little web strategy game. In the country map I intend to display each country with a national color. The issue is how to render the borders in a simple and efficient way. Right now I'm planning to give each tile a field called "border" with values from 0 to 8. The algorithm would check, for EVERY tile, whether an adjacent tile has a different "owner". If the tile is inside the territory, the border value would be 0, because it would not have any adjacent tile with a different owner; if not, it would vary from 1 (north) clockwise to 8 (north-west), and then the border would be drawn. I find this simple but too processor-intensive. Are there any other "pro" choices for rendering territory borders?
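
    A variant worth considering, sketched below with a hypothetical ownerAt accessor: store the border as a bitmask with one bit per direction, since a tile can border foreign territory on several sides at once, which a single 0-8 value cannot express; recompute it only when ownership changes, not every frame.

        enum { N = 1, E = 2, S = 4, W = 8 }; /* diagonals extend the same way */

        int ownerAt(int x, int y); /* hypothetical: owner id of the tile at (x, y) */

        int borderMask(int x, int y) {
            int owner = ownerAt(x, y);
            int mask = 0;
            if (ownerAt(x, y - 1) != owner) mask |= N;
            if (ownerAt(x + 1, y) != owner) mask |= E;
            if (ownerAt(x, y + 1) != owner) mask |= S;
            if (ownerAt(x - 1, y) != owner) mask |= W;
            return mask; /* 0 = interior tile, nothing to draw */
        }

    The renderer then draws one edge segment per set bit, and because the mask is only recomputed for tiles whose owner actually changed, the cost is proportional to conquests rather than to map size.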

    Read the article

  • Depth interpolation for z-buffer, with scanline

    - by Twodordan
    I have to write my own software 3d rasterizer, and so far I am able to project my 3d model made of triangles into 2d space: I rotate, translate and project my points to get a 2d space representation of each triangle. Then, I take the 3 triangle points and I implement the scanline algorithm (using linear interpolation) to find all points[x][y] along the edges(left and right) of the triangles, so that I can scan the triangle horizontally, row by row, and fill it with pixels. This works. Except I have to also implement z-buffering. This means that knowing the rotated&translated z coordinates of the 3 vertices of the triangle, I must interpolate the z coordinate for all other points I find with my scanline algorithm. The concept seems clear enough, I first find Za and Zb with these calculations: var Z_Slope = (bottom_point_z - top_point_z) / (bottom_point_y - top_point_y); var Za = top_point_z + ((current_point_y - top_point_y) * Z_Slope); Then for each Zp I do the same interpolation horizontally: var Z_Slope = (right_z - left_z) / (right_x - left_x); var Zp = left_z + ((current_point_x - left_x) * Z_Slope); And of course I add to the zBuffer, if current z is closer to the viewer than the previous value at that index. (my coordinate system is x: left - right; y: top - bottom; z: your face - computer screen;) The problem is, it goes haywire. The project is here and if you select the "Z-Buffered" radio button, you'll see the results... (note that the rest of the options before "Z-Buffered" use the Painter's algorithm to correctly order the triangles. I also use the painter's algorithm -only- to draw the wireframe in "Z-Buffered" mode for debugging purposes) PS: I've read here that you must turn the z's into their reciprocals (meaning z = 1/z) before you interpolate. I tried that, and it appears that there's no change. What am I missing? (could anyone clarify, precisely where you must turn z into 1/z and where to turn it back?)
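
    On the PS: the standard result is that 1/z, not z, varies linearly in screen space, so the recipe is to convert once right after projection, run the existing edge and scanline interpolation on the reciprocals, and compare the reciprocals directly in the z-buffer (a larger 1/z means closer):

        \[ \frac{1}{z_p} = (1-t)\,\frac{1}{z_a} + t\,\frac{1}{z_b} \]

    Converting back with z = 1/(1/z) is only needed where the true depth itself is used later (for example, perspective-correct texturing); for pure visibility tests the reciprocal comparison is enough, so "no visible change" after switching usually means the conversion happened in the wrong place rather than that it does not matter.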

    Read the article
