Search Results

Search found 1784 results on 72 pages for 'depth'.


  • Direct3D9 application won't write to depth buffer

    - by DeadMG
    I've got an application written in D3D9 which will not write any values to the depth buffer, resulting in incorrect values for the depth test. Things I've checked so far:
    - D3DRS_ZENABLE, set to TRUE
    - D3DRS_ZWRITEENABLE, set to TRUE
    - D3DRS_ZFUNC, set to D3DCMP_LESSEQUAL
    - The depth buffer is definitely bound to the pipeline at the relevant time
    - The depth buffer was correctly cleared before use
    I've used PIX to confirm that all of these things occurred as expected. For example, if I clear the depth buffer to 0 instead of 1, then correctly nothing is drawn, and PIX confirms that all the pixels failed the depth test. But I've also used PIX to confirm that my submitted geometry does not write to the depth buffer and so is not correctly rendered. Any other suggestions?
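
    Since PIX confirms the listed states, the remaining usual suspects sit outside those render states. A minimal C++ checklist sketch (the helper name is made up; it only re-asserts what the post already lists plus the viewport depth range, which can quietly break depth writes):

        #include <d3d9.h>

        // Hypothetical helper: re-assert the states above and inspect the viewport.
        void VerifyDepthSetup(IDirect3DDevice9* device)
        {
            device->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
            device->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
            device->SetRenderState(D3DRS_ZFUNC, D3DCMP_LESSEQUAL);

            // Depth is never written for pixels rejected earlier in the pipeline,
            // so alpha test and stencil states are worth re-checking as well.

            // The viewport depth range is easy to overlook: MinZ == MaxZ collapses
            // every fragment onto the same depth.
            D3DVIEWPORT9 vp;
            device->GetViewport(&vp);
            // expect vp.MinZ == 0.0f and vp.MaxZ == 1.0f in a standard setup
        }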

    Read the article

  • D3D11 how to simulate multiple depth channels

    - by Nock
    Here's what I'd like to achieve:
    - Rendering a first pass of objects in my scene, using standard depth comparison
    - Rendering another pass of objects in the same scene, but with the following rules: a pixel of the 2nd pass always overrides the first pass (no depth compare between them), and depth comparison is used between pixels written in the second pass.
    In plain words, I want depth comparison applied inside each pass, but I always want the second-pass pixels to override the first-pass ones. Some things I've considered: I tried to think about using stencil to solve this, but I couldn't find a way. I know I could render the second pass into a separate target and then composite the result into the first, but I'd like to avoid that. I could use two separate depth buffers, one dedicated to each pass. (I've never tried it, but I figure it's possible to switch the depth buffer of a render target "on the fly".) Any idea of the best solution? Thanks
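
    One way to get exactly the described behaviour with a single depth buffer is to clear only the depth channel between the two passes. A minimal D3D11 sketch; `context` and `dsv` are placeholders for the immediate context and the bound depth-stencil view:

        #include <d3d11.h>

        void DrawScene(ID3D11DeviceContext* context, ID3D11DepthStencilView* dsv)
        {
            // Pass 1: draw the first set of objects with the usual depth test.
            // ...

            // Resetting depth to the far plane means every pixel written in pass 2
            // wins against pass 1, while pass-2 pixels still depth-test against
            // each other. Colour and stencil are left untouched.
            context->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 1.0f, 0);

            // Pass 2: draw the second set of objects with the same depth state.
            // ...
        }

    The trade-off is that the first pass's depth values are gone afterwards, which only matters if later passes still need them.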

    Read the article

  • Use depth bias for shadows in deferred shading

    - by cubrman
    We are building a deferred shading engine and we have a problem with shadows. To add shadows we use two maps: the first one stores the depth of the scene captured by the player's camera and the second one stores the depth of the scene captured by the light's camera. We then run a shader that analyzes the two maps and outputs a third one with the ready shadow areas for the current frame. The problem we face is a classic one: self-shadowing. A standard way to solve this is to use the slope-scale depth bias and depth offsets; however, as we are doing things in a deferred way, we cannot employ this algorithm. Any attempt to set a depth bias when capturing the light's view depth produced no or unsatisfying results. So here is my question: the MSDN article has a convoluted explanation of the slope-scale bias: bias = (m × SlopeScaleDepthBias) + DepthBias, where m is the maximum depth slope of the triangle being rendered, defined as: m = max( abs(delta z / delta x), abs(delta z / delta y) ). Could you explain how I can implement this algorithm manually in a shader? Maybe there are better ways to fix this problem for deferred shadows?
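
    The formula itself is small. A sketch of it as plain C++, since the engine's shader language isn't stated here; in HLSL the two derivatives would come from ddx()/ddy() of the light-space depth, and all names below are illustrative:

        #include <algorithm>
        #include <cmath>

        // m is the maximum screen-space depth slope; steeply sloped (grazing-angle)
        // surfaces get a larger bias, which is what suppresses the acne.
        float SlopeScaledBias(float dzdx, float dzdy,
                              float slopeScaleDepthBias, float depthBias)
        {
            float m = std::max(std::fabs(dzdx), std::fabs(dzdy));
            return m * slopeScaleDepthBias + depthBias;
        }

        // Usage sketch: treat the pixel as lit if its depth from the light, minus
        // the bias, is still in front of the value stored in the shadow map:
        //   bool lit = (pixelLightDepth - SlopeScaledBias(...)) <= shadowMapDepth;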

    Read the article

  • Best approach to depth streaming via existing codec

    - by Kevin
    I'm working on a development system (and game) intended for games set mostly in static third-person views. We produce our scenery by CG and photographic techniques. Our background art is rendered off-line by a production-grade renderer. To allow the runtime imagery to properly interact with the background art, I wrote a program to convert from depth output by Mental Ray into a texture, and a pixel shader to draw a quad such that the Z data comes from the texture. This technique is working out very well, but now we've decided that some of the camera angle changes between scenes should be animated. The animation itself is straightforward to produce from our CG models. We intend to encode it to some HD video codec such as H.264. The problem is that in order to maintain our runtime imagery on the screen, the depth buffer will need to be loaded for each video frame. Due to the bandwidth, the video's depth data will need to be compressed efficiently. I've looked into methods for performing temporal compression of depth info and found an interesting research paper here: http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/depth-streaming.pdf The method establishes a mapping between 16-bit depth values and YCbCr values. The mapping is tuned to the properties of existing video codecs in order to maximize precision of the decoded depths after the YCbCr has undergone video compression. It allows an existing, unmodified video codec to be used on the backend. I'm looking at how to pull this off with the least possible work. (This design change was unplanned.) Our game engine itself is native C++, presently for Win32 and DirectX, although we've worked hard to keep platform dependence segregated because we intend other ports. We don't have motion video facilities in the engine yet but will ultimately need that anyway for cinematics. I was planning on using some off-the-shelf motion video solution we can plug into our engine, and haven't chosen one yet. This new added requirement makes selecting one harder since, among other things, we'll now need to bypass colourspace conversion on one of the streams, and also will need to be playing two streams simultaneously in lockstep, on top of in some cases audio on one of them (for the cinematics). I'm also wondering if it's possible (or even useful) to do the conversion from YCbCr to depth in a pixel shader, or if it's better to just do it in CPU and separately load the resulting depth values into a locked tex. The conversion unfortunately does involve branching logic per-pixel. (There are more naive mappings that don't need branching, but they produce inferior results.) It could be reduced to a table lookup but the table would be 32MB. Programming is second-nature to me but I'm not that experienced with pix shaders and have zero knowledge of off-the-shelf video solutions. I'd therefore be interested in advice from others who may have dealt more with depth streaming, pixel shaders, and/or off-the-shelf codecs, regarding how feasible the proposed application is and what off-the-shelf video systems out there would best get along with this usage case.
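
    For orientation only, here is the shape of the naive split the post mentions and rejects (high byte to luma, low byte to one chroma channel); the paper replaces this with smoother periodic functions so that chroma subsampling and quantisation cost far less precision. This is not the paper's mapping, just an illustration of the encode/decode pair:

        #include <cstdint>

        struct YCbCr { uint8_t y, cb, cr; };

        // Naive 16-bit depth <-> YCbCr packing (fragile under video compression).
        YCbCr EncodeDepthNaive(uint16_t depth)
        {
            YCbCr c;
            c.y  = static_cast<uint8_t>(depth >> 8);    // coarse depth, compresses well
            c.cb = static_cast<uint8_t>(depth & 0xFF);  // fine depth, easily corrupted
            c.cr = 128;                                 // unused channel
            return c;
        }

        uint16_t DecodeDepthNaive(const YCbCr& c)
        {
            return static_cast<uint16_t>((c.y << 8) | c.cb);
        }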

    Read the article

  • Get Specific depth values in Kinect (XNA)

    - by N0xus
    I'm currently trying to make a hand / finger tracking with a Kinect in XNA. For this, I need to be able to specify the depth range I want my program to render. I've looked about, and I cannot see how this is done. As far as I can tell, the Kinect's depth values only work with the pre-set ranges found in the DepthStream. What I would like to do is make it modular so that I can change the depth range my Kinect renders. I know this has been done before, but I can't find anything online that can show me how to do this. Could someone please help me out? I have made it possible to render the standard depth view with the Kinect, and the method that I have made for converting the depth frame is as follows (I've a feeling it's something in here I need to set):
        private byte[] ConvertDepthFrame(short[] depthFrame, DepthImageStream depthStream, int depthFrame32Length)
        {
            int tooNearDepth = depthStream.TooNearDepth;
            int tooFarDepth = depthStream.TooFarDepth;
            int unknownDepth = depthStream.UnknownDepth;
            byte[] depthFrame32 = new byte[depthFrame32Length];

            for (int i16 = 0, i32 = 0; i16 < depthFrame.Length && i32 < depthFrame32.Length; i16++, i32 += 4)
            {
                int player = depthFrame[i16] & DepthImageFrame.PlayerIndexBitmask;
                int realDepth = depthFrame[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth;

                // transform 13-bit depth information into an 8-bit intensity appropriate
                // for display (we disregard information in most significant bit)
                byte intensity = (byte)(~(realDepth >> 8));

                if (player == 0 && realDepth == 0)
                {
                    // white
                    depthFrame32[i32 + RedIndex] = 255;
                    depthFrame32[i32 + GreenIndex] = 255;
                    depthFrame32[i32 + BlueIndex] = 255;
                }
                // omitted other if statements; they simply change the color of the pixels if they fall outside the pre-set depth values
                else
                {
                    // tint the intensity by dividing by per-player values
                    depthFrame32[i32 + RedIndex] = (byte)(intensity >> IntensityShiftByPlayerR[player]);
                    depthFrame32[i32 + GreenIndex] = (byte)(intensity >> IntensityShiftByPlayerG[player]);
                    depthFrame32[i32 + BlueIndex] = (byte)(intensity >> IntensityShiftByPlayerB[player]);
                }
            }
            return depthFrame32;
        }
    I have a strong hunch it's something I need to change in the int player and int realDepth values, but I can't be sure.
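
    The fixed `realDepth >> 8` shift is what ties the rendering to the sensor's full range; swapping it for a user-set window is the modular part. The mapping is sketched here as plain C++ arithmetic (the handler above is C#, where the same expression would replace the shift; the window values are caller-chosen):

        #include <cstdint>

        // Maps a raw depth in millimetres to an 8-bit intensity over a chosen
        // [near, far] window, so the rendered range can be changed at runtime.
        uint8_t DepthToIntensity(int depthMm, int windowNearMm, int windowFarMm)
        {
            if (depthMm <= windowNearMm) return 255;            // closer than the window
            if (depthMm >= windowFarMm)  return 0;              // farther than the window
            float t = float(depthMm - windowNearMm) / float(windowFarMm - windowNearMm);
            return static_cast<uint8_t>((1.0f - t) * 255.0f);   // near = bright, far = dark
        }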

    Read the article

  • Disabling depth write trashes the frame buffer on some GPUs

    - by EboMike
    I sometimes disable depth buffer writing via glDepthMask(GL_FALSE) during the alpha rendering of a frame. That works perfectly fine on some GPUs (like the Motorola Droid's PowerVR), but on the HTC EVO with the Adreno GPU for example, I end up with the frame buffer being complete garbage (I see traces of the meshes I rendered somewhere, but the entire screen is mostly trashed). If I force glDepthMask to be true the entire time, everything works fine. I need glDepthMask to be off during parts of the alpha rendering. What can cause the framebuffer to get destroyed by turning the depth writing off? I do clear the depth buffer initially, and the majority of the screen has pixels rendered with depth writing turned on first before I do additional drawing with it turned off.
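
    One interaction worth ruling out before blaming the driver: glClear obeys glDepthMask, so if the mask is still GL_FALSE when the next frame's clear happens, the depth buffer keeps stale contents and the depth test can produce exactly this kind of garbage. A small sketch of the scoped usage, assuming nothing else touches the mask:

        #include <GLES/gl.h>   // or <GLES2/gl2.h>, depending on the context in use

        void DrawFrame()
        {
            glDepthMask(GL_TRUE);                                // clears are masked too
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            // ... opaque geometry, depth writes on ...

            glDepthMask(GL_FALSE);
            // ... alpha geometry, depth test still on, writes off ...

            glDepthMask(GL_TRUE);   // restore before the next frame's clear
        }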

    Read the article

  • Android depth buffer issue: Advice for anyone experiencing problem

    - by Andrew Smith
    I've wasted around 30 hours this week writing and re-writing code, believing that I had misunderstood how the OpenGL depth buffer works. Everything I tried failed. I have now resolved my problem by finding what may be an error in the Android implementation of OpenGL. See this API entry: http://www.opengl.org/sdk/docs/man/xhtml/glClearDepth.xml
    void glClearDepth(GLclampd depth): specifies the depth value used when the depth buffer is cleared. The initial value is 1.
    Android's implementation has two versions of this command:
    - glClearDepthx, which takes an integer value, clamped 0-1
    - glClearDepthf, which takes a floating point value, clamped 0-1
    If you use glClearDepthf(1) then you get the results you would expect. If you use glClearDepthx(1), as I was doing, then you get different results. (Note that 1 is the default value, but calling the command with the argument 1 produces different results than not calling it at all.) Quite what is happening I do not know, but the depth buffer was being cleared to a value different from what I had specified.
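
    There is a likely explanation that does not involve an Android bug: the `x` suffix marks the fixed-point variant, so its parameter is a GLfixed (16.16 fixed point) rather than an integer. Passing the literal 1 therefore requests a clear value of 1/65536, which is almost 0 and consistent with the behaviour described. A two-line sketch of the equivalent call:

        #include <GLES/gl.h>

        void ClearDepthToFarPlane()
        {
            // Equivalent to glClearDepthf(1.0f): GLfixed encodes 1.0 as 1 << 16,
            // so glClearDepthx(1) would request a clear value of about 0.0000153.
            glClearDepthx(1 << 16);
            glClear(GL_DEPTH_BUFFER_BIT);
        }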

    Read the article

  • Writing the correct value in the depth buffer when using ray-casting

    - by hidayat
    I am doing ray-casting in a 3d texture until I hit a correct value. I am doing the ray-casting in a cube, and the cube corners are already in world coordinates, so I don't have to multiply the vertices with the modelview matrix to get the correct position.
    Vertex shader:
        world_coordinate_ = gl_Vertex;
    Fragment shader:
        vec3 direction = (world_coordinate_.xyz - cameraPosition_);
        direction = normalize(direction);
        for (float k = 0.0; k < steps; k += 1.0) {
            ....
            pos += direction * delta_step;
            float thisLum = texture3D(texture3_, pos).r;
            if (thisLum > surface_)
            ...
        }
    Everything works as expected; what I now want is to write the correct value to the depth buffer. The value that is now written to the depth buffer is the cube coordinate, but I want the value of pos in the 3d texture to be written. So let's say the cube is placed 10 away from the origin in -z and the size is 10*10*10. My solution that does not work correctly is this:
        pos *= 10;
        pos.z += 10;
        pos.z *= -1;
        vec4 depth_vec = gl_ProjectionMatrix * vec4(pos.xyz, 1.0);
        float depth = ((depth_vec.z / depth_vec.w) + 1.0) * 0.5;
        gl_FragDepth = depth;
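
    The manual `pos *= 10; pos.z += 10; pos.z *= -1` step is re-deriving the view transform by hand, which is where this kind of approach usually goes wrong. A sketch of the full chain, written with GLM on the CPU side for clarity; in the fragment shader the same chain would apply the camera's modelview and projection matrices to the world-space hit point:

        #include <glm/glm.hpp>

        // world-space hit point from the ray march -> window-space depth in [0, 1]
        float DepthForHit(const glm::vec3& hitWorld,
                          const glm::mat4& view, const glm::mat4& projection)
        {
            glm::vec4 clip = projection * view * glm::vec4(hitWorld, 1.0f);
            float ndcZ = clip.z / clip.w;   // [-1, 1] after the perspective divide
            return ndcZ * 0.5f + 0.5f;      // assumes the default glDepthRange(0, 1)
        }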

    Read the article

  • Position Reconstruction from Depth by inverting Perspective Projection

    - by user1294203
    I had some trouble reconstructing position from depth sampled from the depth buffer. I use the equivalent of gluPerspective in GLM. The code in GLM is:
        template <typename valType>
        GLM_FUNC_QUALIFIER detail::tmat4x4<valType> perspective
        (
            valType const & fovy,
            valType const & aspect,
            valType const & zNear,
            valType const & zFar
        )
        {
            valType range = tan(radians(fovy / valType(2))) * zNear;
            valType left = -range * aspect;
            valType right = range * aspect;
            valType bottom = -range;
            valType top = range;

            detail::tmat4x4<valType> Result(valType(0));
            Result[0][0] = (valType(2) * zNear) / (right - left);
            Result[1][1] = (valType(2) * zNear) / (top - bottom);
            Result[2][2] = - (zFar + zNear) / (zFar - zNear);
            Result[2][3] = - valType(1);
            Result[3][2] = - (valType(2) * zFar * zNear) / (zFar - zNear);
            return Result;
        }
    There doesn't seem to be any errors in the code. So I tried to invert the projection; the formula for the z and w coordinates after projection are: and dividing z' with w' gives the post-projective depth (which lies in the depth buffer), so I need to solve for z, which finally gives: Now, the problem is I don't get the correct position (I have compared the one reconstructed with a rendered position). I then tried using the respective formula I get by doing the same for this Matrix. The corresponding formula is: For some reason, using the above formula gives me the correct position. I really don't understand why this is the case. Have I done something wrong? Could someone enlighten me please?
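
    For reference, the matrix above gives z' = -((zFar + zNear)/(zFar - zNear)) * z_eye - (2 * zFar * zNear)/(zFar - zNear) and w' = -z_eye, which inverts to z_eye = (2 * zFar * zNear) / ((zFar - zNear) * d - (zFar + zNear)) with d being NDC depth in [-1, 1]; mixing up [0, 1] buffer depth and [-1, 1] NDC depth is a common reason one derivation "works" and another doesn't. As a cross-check that avoids the scalar algebra entirely, a GLM sketch of reconstruction via the inverse projection (uv and depth are the [0, 1] screen position and buffer value):

        #include <glm/glm.hpp>

        glm::vec3 ViewPositionFromDepth(const glm::vec2& uv, float depth,
                                        const glm::mat4& projection)
        {
            // [0,1] window coordinates -> [-1,1] normalised device coordinates
            glm::vec4 ndc(uv * 2.0f - 1.0f, depth * 2.0f - 1.0f, 1.0f);
            glm::vec4 view = glm::inverse(projection) * ndc;
            return glm::vec3(view) / view.w;   // perspective divide undoes the projection
        }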

    Read the article

  • When does depth testing happen?

    - by Utkarsh Sinha
    I'm working with 2D sprites - and I want to do 3D style depth testing with them. When writing a pixel shader for them, I get access to the semantic DEPTH0. Would writing to this value help? It seems it doesn't. Maybe it's done before the pixel shader step? Or is depth testing only done when drawing 3D things (I'm using SpriteBatch)? Any links/articles/topics to read/search for would be appreciated.

    Read the article

  • Why are my scene's depth values not being written to my DepthStencilView?

    - by dotminic
    I'm rendering to a depth map in order to use it as a shader resource view, but when I sample the depth map in my shader, the red component has a value of 1 while all other channels have a value of 0. The Texture2D I use to create the DepthStencilView is bound with the D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE flags, the DepthStencilView has the DXGI_FORMAT_D32_FLOAT format, and the ShaderResourceView's format is D3D11_SRV_DIMENSION_TEXTURE2D. I'm setting the depth map render target, then i'm drawing my scene, and once that is done, I'm the back buffer render target and depth stencil are set on the output merger, and I'm using the depth map shader resource view as a texture in my shader, but the depth value in the red channel is constantly 1. I'm not getting any runtime errors from D3D, and no compile time warning or anything. I'm not sure what I'm missing here at all. I have the impression the depth value is always being set to 1. I have not set any depth/stencil states, and AFAICT depth writing is enabled by default. The geometry is being rendered correctly so I'm pretty sure depth writing is enabled. The device is created with the appropriate debug flags; #if defined(DEBUG) || defined(_DEBUG) deviceFlags |= D3D11_CREATE_DEVICE_DEBUG | D3D11_RLDO_DETAIL; #endif This is how I create my depth map. I've omitted error checking for the sake of brevity D3D11_TEXTURE2D_DESC td; td.Width = width; td.Height = height; td.MipLevels = 1; td.ArraySize = 1; td.Format = DXGI_FORMAT_R32_TYPELESS; td.SampleDesc.Count = 1; td.SampleDesc.Quality = 0; td.Usage = D3D11_USAGE_DEFAULT; td.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE; td.CPUAccessFlags = 0; td.MiscFlags = 0; _device->CreateTexture2D(&texDesc, 0, &this->_depthMap); D3D11_DEPTH_STENCIL_VIEW_DESC dsvd; ZeroMemory(&dsvd, sizeof(dsvd)); dsvd.Format = DXGI_FORMAT_D32_FLOAT; dsvd.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D; dsvd.Texture2D.MipSlice = 0; _device->CreateDepthStencilView(this->_depthMap, &dsvd, &this->_dmapDSV); D3D11_SHADER_RESOURCE_VIEW_DESC srvd; srvd.Format = DXGI_FORMAT_R32_FLOAT; srvd.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D; srvd.Texture2D.MipLevels = texDesc.MipLevels; srvd.Texture2D.MostDetailedMip = 0; _device->CreateShaderResourceView(this->_depthMap, &srvd, &this->_dmapSRV);
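
    Two things are commonly behind "the depth texture reads as 1 everywhere": the texture cannot be sampled while it is still bound as the depth-stencil view (the runtime forces one of the bindings to null, and the debug layer warns about it), and raw D32 depth is non-linear, so with typical near/far planes most of the visible scene lands extremely close to 1.0 anyway. A sketch of both checks; all names are placeholders:

        #include <d3d11.h>

        // Before sampling the depth map, make sure it is no longer bound as the DSV.
        void BindDepthMapForSampling(ID3D11DeviceContext* context,
                                     ID3D11RenderTargetView* backBufferRTV,
                                     ID3D11DepthStencilView* otherDSV,   // not the depth map's
                                     ID3D11ShaderResourceView* depthMapSRV)
        {
            context->OMSetRenderTargets(1, &backBufferRTV, otherDSV);
            context->PSSetShaderResources(0, 1, &depthMapSRV);
        }

        // Hardware depth is non-linear; converting a sampled value to eye-space
        // distance makes it obvious whether real values were written.
        // Assumes a standard D3D projection with depth in [0, 1].
        float LinearizeDepth(float d, float zNear, float zFar)
        {
            return zNear * zFar / (zFar - d * (zFar - zNear));
        }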

    Read the article

  • Depth interpolation for z-buffer, with scanline

    - by Twodordan
    I have to write my own software 3d rasterizer, and so far I am able to project my 3d model made of triangles into 2d space: I rotate, translate and project my points to get a 2d space representation of each triangle. Then, I take the 3 triangle points and I implement the scanline algorithm (using linear interpolation) to find all points[x][y] along the edges(left and right) of the triangles, so that I can scan the triangle horizontally, row by row, and fill it with pixels. This works. Except I have to also implement z-buffering. This means that knowing the rotated&translated z coordinates of the 3 vertices of the triangle, I must interpolate the z coordinate for all other points I find with my scanline algorithm. The concept seems clear enough, I first find Za and Zb with these calculations: var Z_Slope = (bottom_point_z - top_point_z) / (bottom_point_y - top_point_y); var Za = top_point_z + ((current_point_y - top_point_y) * Z_Slope); Then for each Zp I do the same interpolation horizontally: var Z_Slope = (right_z - left_z) / (right_x - left_x); var Zp = left_z + ((current_point_x - left_x) * Z_Slope); And of course I add to the zBuffer, if current z is closer to the viewer than the previous value at that index. (my coordinate system is x: left - right; y: top - bottom; z: your face - computer screen;) The problem is, it goes haywire. The project is here and if you select the "Z-Buffered" radio button, you'll see the results... (note that the rest of the options before "Z-Buffered" use the Painter's algorithm to correctly order the triangles. I also use the painter's algorithm -only- to draw the wireframe in "Z-Buffered" mode for debugging purposes) PS: I've read here that you must turn the z's into their reciprocals (meaning z = 1/z) before you interpolate. I tried that, and it appears that there's no change. What am I missing? (could anyone clarify, precisely where you must turn z into 1/z and where to turn it back?)
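
    On the 1/z question: the reciprocal is what gets interpolated, both down the edges and across each span, because 1/z (unlike z) is linear in screen space; you only invert back if you need the real z, and for a pure depth test you can keep everything as 1/z and flip the comparison. A small C++ sketch of one span, with hypothetical names:

        #include <vector>

        // Interpolate 1/z across one horizontal span; invZLeft/invZRight were themselves
        // interpolated (as reciprocals) down the triangle edges by the scanline step.
        void DrawSpan(int y, int xLeft, int xRight, float invZLeft, float invZRight,
                      std::vector<float>& zBuffer, int width)
        {
            for (int x = xLeft; x <= xRight; ++x)
            {
                float t = (xRight == xLeft) ? 0.0f
                                            : float(x - xLeft) / float(xRight - xLeft);
                float invZ = invZLeft + t * (invZRight - invZLeft);  // linear in screen space
                // With z growing into the screen, larger 1/z means closer to the viewer,
                // so the buffer comparison flips relative to a plain z-buffer.
                if (invZ > zBuffer[y * width + x])
                {
                    zBuffer[y * width + x] = invZ;
                    // PutPixel(x, y, colour);   // hypothetical rasteriser call
                }
            }
        }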

    Read the article

  • Getting Depth Value on Kinect SDK 1.6

    - by AlexanderPD
    This is my first try with the Kinect and the Kinect SDK, so I'm having a lot of "newbie issues" :) My goal is to point my mouse at the Kinect standard video output and get the depth value. I already have both the normal video and depth video outputs by using the two "Color Basic-WPF" and "Depth Basic-WPF" samples, and handling mouse events or position is not a problem. In fact I already did all that and I already got a depth value, but this value is always HIGHLY imprecise. It jumps from 500 to 4000 by just moving to the next pixel on a plane surface. So... I'm pretty sure I'm reading the depth value in the wrong way. This is how I read it: short debugValue = depthPixels[x*y].Depth; debug.Text = "X = "+x+", Y = "+y+", value = "+debugValue.ToString(); I know it's pretty out of context; this little piece of code is inside the same SensorDepthFrameReady function in "Depth Basic-WPF"! "x" and "y" are the mouse coordinates, and depthPixels is of type DepthImagePixel[], a temporary array filled with the "depthFrame.CopyDepthImagePixelDataTo(this.depthPixels);" instruction. The depth frame is filled here: DepthImageFrame depthFrame = e.OpenDepthImageFrame() The "e" comes from here: private void SensorDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e) and this last one is called here: this.sensor.DepthFrameReady += this.SensorDepthFrameReady; How must I handle the depth value I get? I know the value must be between 800 and 4000, but I get values between about 500 and about 8000. I already googled a lot (here on SO too) and I still can't understand whether the depth value is 11 or 13 bits. The SDK examples shrink this value to 8 bits, and this is making even more confusion in my head :(
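
    Before worrying about bit depths, the index arithmetic above deserves a look: `depthPixels[x*y]` collapses whole diagonals of the image onto the same element, which alone would explain wildly jumping values. The pixel at (x, y) of a row-major frame lives at `y * frameWidth + x`, and the Depth field of an SDK 1.6 DepthImagePixel is already a distance in millimetres. The arithmetic, sketched in plain C++ (the actual handler is C#; 640 assumes the 640x480 depth stream, and mouse coordinates from a scaled WPF image must first be mapped back to frame pixels):

        // Index of the pixel under the mouse in a row-major depth frame.
        int DepthIndex(int x, int y, int frameWidth)
        {
            return y * frameWidth + x;
        }

        // C# usage sketch: depthPixels[DepthIndex(x, y, 640)].Depth gives millimetres;
        // valid values sit roughly between the sensor's limits (about 800-4000 mm in
        // default range mode).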

    Read the article

  • Deferred rendering with VSM - Scaling light depth loses moments

    - by user1423893
    I'm calculating my shadow term using a VSM method. This works correctly when using forward rendered lights but fails with deferred lights. // Shadow term (1 = no shadow) float shadow = 1; // [Light Space -> Shadow Map Space] // Transform the surface into light space and project // NB: Could be done in the vertex shader, but doing it here keeps the // "light shader" abstraction and doesn't limit the number of shadowed lights float4x4 LightViewProjection = mul(LightView, LightProjection); float4 surf_tex = mul(position, LightViewProjection); // Re-homogenize // 'w' component is not used in later calculations so no need to homogenize (it will equal '1' if homogenized) surf_tex.xyz /= surf_tex.w; // Rescale viewport to be [0,1] (texture coordinate system) float2 shadow_tex; shadow_tex.x = surf_tex.x * 0.5f + 0.5f; shadow_tex.y = -surf_tex.y * 0.5f + 0.5f; // Half texel offset //shadow_tex += (0.5 / 512); // Scaled distance to light (instead of 'surf_tex.z') float rescaled_dist_to_light = dist_to_light / LightAttenuation.y; //float rescaled_dist_to_light = surf_tex.z; // [Variance Shadow Map Depth Calculation] // No filtering float2 moments = tex2D(ShadowSampler, shadow_tex).xy; // Flip the moments values to bring them back to their original values moments.x = 1.0 - moments.x; moments.y = 1.0 - moments.y; // Compute variance float E_x2 = moments.y; float Ex_2 = moments.x * moments.x; float variance = E_x2 - Ex_2; variance = max(variance, Bias.y); // Surface is fully lit if the current pixel is before the light occluder (lit_factor == 1) // One-tailed inequality valid if float lit_factor = (rescaled_dist_to_light <= moments.x - Bias.x); // Compute probabilistic upper bound (mean distance) float m_d = moments.x - rescaled_dist_to_light; // Chebychev's inequality float p = variance / (variance + m_d * m_d); p = ReduceLightBleeding(p, Bias.z); // Adjust the light color based on the shadow attenuation shadow *= max(lit_factor, p); This is what I know for certain so far: The lighting is correct if I do not try and calculate the shadow term. (No shadows) The shadow term is correct when calculated using forward rendered lighting. (VSM works with forward rendered lights) With the current rescaled light distance (lightAttenuation.y is the far plane value): float rescaled_dist_to_light = dist_to_light / LightAttenuation.y; The light is correct and the shadow appears to be zoomed in and misses the blurring: When I do not rescale the light and use the homogenized 'surf_tex': float rescaled_dist_to_light = surf_tex.z; the shadows are blurred correctly but the lighting is incorrect and the cube model is no longer lit Why is scaling by the far plane value (LightAttenuation.y) zooming in too far? The only other factor involved is my world pixel position, which is calculated as follows: // [Position] float4 position; // [Screen Position] position.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above position.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component position.z = 1.0 - position.z; position.w = 1.0; // 1.0 = position.w / position.w // [World Position] position = mul(position, CameraViewProjectionInverse); // Re-homogenize position (xyz AND w, otherwise shadows will bend when camera is close) position.xyz /= position.w; position.w = 1.0; Using the inverse matrix of the camera's view x projection matrix does work for lighting but maybe it is incorrect for shadow calculation? 
EDIT: Light calculations for shadow including 'dist_to_light' // Work out the light position and direction in world space float3 light_position = float3(LightViewInverse._41, LightViewInverse._42, LightViewInverse._43); // Direction might need to be negated float3 light_direction = float3(-LightViewInverse._31, -LightViewInverse._32, -LightViewInverse._33); // Unnormalized light vector float3 dir_to_light = light_position - position; // Direction from vertex float dist_to_light = length(dir_to_light); // Normalise 'toLight' vector for lighting calculations dir_to_light = normalize(dir_to_light); EDIT2: These are the calculations for the moments (depth) //============================================= //---[Vertex Shaders]-------------------------- //============================================= DepthVSOutput depth_VS( float4 Position : POSITION, uniform float4x4 shadow_view, uniform float4x4 shadow_view_projection) { DepthVSOutput output = (DepthVSOutput)0; // First transform position into world space float4 position_world = mul(Position, World); output.position_screen = mul(position_world, shadow_view_projection); output.light_vec = mul(position_world, shadow_view).xyz; return output; } //============================================= //---[Pixel Shaders]--------------------------- //============================================= DepthPSOutput depth_PS(DepthVSOutput input) { DepthPSOutput output = (DepthPSOutput)0; // Work out the depth of this fragment from the light, normalized to [0, 1] float2 depth; depth.x = length(input.light_vec) / FarPlane; depth.y = depth.x * depth.x; // Flip depth values to avoid floating point inaccuracies depth.x = 1.0f - depth.x; depth.y = 1.0f - depth.y; output.depth = depth.xyxy; return output; } EDIT 3: I have tried the folloiwng: float4 pp; pp.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above pp.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component pp.z = 1.0 - pp.z; pp.w = 1.0; // 1.0 = position.w / position.w // Determine the depth of the pixel with respect to the light float4x4 LightViewProjection = mul(LightView, LightProjection); float4x4 matViewToLightViewProj = mul(CameraViewProjectionInverse, LightViewProjection); float4 vPositionLightCS = mul(pp, matViewToLightViewProj); float fLightDepth = vPositionLightCS.z / vPositionLightCS.w; // Transform from light space to shadow map texture space. float2 vShadowTexCoord = 0.5 * vPositionLightCS.xy / vPositionLightCS.w + float2(0.5f, 0.5f); vShadowTexCoord.y = 1.0f - vShadowTexCoord.y; // Offset the coordinate by half a texel so we sample it correctly vShadowTexCoord += (0.5f / 512); //g_vShadowMapSize This suffers the same problem as the second picture. I have tried storing the depth based on the view x projection matrix: output.position_screen = mul(position_world, shadow_view_projection); //output.light_vec = mul(position_world, shadow_view); output.light_vec = output.position_screen; depth.x = input.light_vec.z / input.light_vec.w; This gives a shadow that has lots surface acne due to horrible floating point precision errors. Everything is lit correctly though. EDIT 4: Found an OpenGL based tutorial here I have followed it to the letter and it would seem that the uv coordinates for looking up the shadow map are incorrect. The source uses a scaled matrix to get the uv coordinates for the shadow map sampler /// <summary> /// The scale matrix is used to push the projected vertex into the 0.0 - 1.0 region. 
/// Similar in role to a * 0.5 + 0.5, where -1.0 < a < 1.0. /// <summary> const float4x4 ScaleMatrix = float4x4 ( 0.5, 0.0, 0.0, 0.0, 0.0, -0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.5, 1.0 ); I had to negate the 0.5 for the y scaling (M22) in order for it to work but the shadowing is still not correct. Is this really the correct way to scale? float2 shadow_tex; shadow_tex.x = surf_tex.x * 0.5f + 0.5f; shadow_tex.y = surf_tex.y * -0.5f + 0.5f; The depth calculations are exactly the same as the source code yet they still do not work, which makes me believe something about the uv calculation above is incorrect.

    Read the article

  • ActionScript Measuring Depth

    - by TheDarkIn1978
    I'm having a difficult time understanding how to control the z property of display objects. I know how depth works, but what I don't understand is how I can get the maximum depth, i.e. the value at which the display object just disappears into the background. I assume depth is based on the stage's width and height, and that is why assigning the same depth to the same display object appears mismatched with different stage sizes. So how can I appropriately measure depth?

    Read the article

  • ActionScript Measuring 3D Depth

    - by TheDarkIn1978
    I'm having a difficult time understanding how to control the z property of display objects in a 3D space. I know how depth works, but what I don't understand is how I can get the maximum depth, i.e. the value at which the display object just disappears into the background. I assume depth is based on the stage's width and height, and that is why assigning the same depth to the same display object appears mismatched with different stage sizes. So how can I appropriately measure depth?

    Read the article

  • OpenGL es 2.0 Read depth buffer

    - by Brian
    Hi! As far as I know, we can't read the Z (depth) value in OpenGL ES 2.0. So I am wondering how we can get the 3D world coordinates from a point on the 2D screen? Actually I have some random thoughts that might work. Since we can read the RGBA value by using glReadPixels, how about we duplicate the depth buffer and store it in a color buffer (say ColorforDepth)? Of course there needs to be some nice convention so that we don't lose any information from the depth buffer. Then, when we need a point's world coordinates, we attach this ColorforDepth color buffer to the framebuffer, render it, and use glReadPixels to read the depth information for that frame. However, this will lead to a 1-frame flash, since the color buffer is a weird buffer translated from the depth buffer. I am still wondering if there is some standard way to get the depth in OpenGL ES 2.0? Thx in advance! :)
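
    The readback half of that idea, sketched in C++ against ES 2.0. It assumes the depth was packed into the colour target most-significant byte first across R, G and B, which is just one common convention rather than anything the GL defines:

        #include <GLES2/gl2.h>

        // Read one pixel of the depth-encoded colour attachment and rebuild a
        // normalised depth in [0, 1]. Assumes an RGBA8 pack: R holds the top 8 bits.
        float ReadEncodedDepth(int x, int y)
        {
            unsigned char rgba[4];
            glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
            return rgba[0] / 255.0f
                 + rgba[1] / (255.0f * 255.0f)
                 + rgba[2] / (255.0f * 255.0f * 255.0f);
        }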

    Read the article

  • Depth is disabled - How to turn on?

    - by marc wellman
    In XNA 3.1, is there any other way to disable depth in 3D worlds using DirectX models, other than GraphicsDevice.RenderState.DepthBufferEnable = false;? The reason for my question is that I have quite a huge program which offers a 3D world with a couple of 3D DirectX models inside. Depth was never an issue since it always worked fine, but since a few days ago, after doing some modifications, my models are all depth-translucent, i.e. depth buffering and/or culling seems to be disabled. But in my whole source code I never touch any of the options related to depth or culling, which means I never turn these settings on explicitly nor turn them off somewhere. So I am searching for some other statement, maybe related to the GraphicsDevice, that implicitly turns depth off - but I can't find it. (Sorry that I don't post any source code, but I have too much source code and I simply don't know where to search.) UPDATE: These are a couple of simple objects seen with correct depth. These are the same objects in their current state.

    Read the article

  • What would be a good filter to create 'magnetic deformers' from a depth map?

    - by sebf
    In my project, I am creating a system for deforming a highly detailed mesh (clothing) so that it 'fits' a convex mesh. To do this I use depth maps of the item and the 'hull' to determine at what point in world space the deviation occurs and the extent. Simply transforming all occluded vertices to the depths as defined by the 'hull' is fairly effective, and has good performance, but it suffers the problem of not preserving the features of the mesh and requires extensive culling to avoid false-positives. I would like instead to generate from the depth deviation map a set of simple 'deformers' which will 'push'* all vertices of the deformed mesh outwards (in world space). This way, all features of the mesh are preserved and there is no need to have complex heuristics to cull inappropriate vertices. I am not sure how to go about generating this deformer set however. I am imagining something like an algorithm that attempts to match a spherical surface to each patch of contiguous deviations within a certain range, but do not know where to start doing this. Can anyone suggest a suitable filter or algorithm for generating deformers? Or to put it another way 'compressing' a depth map? (*Push because its fitting to a convex 'bulgy' humanoid so transforms are likely to be 'spherical' from the POV of the surface.)

    Read the article

  • Java OpenGL color material darkens when depth test is disabled

    - by Jeff Storey
    I've been working with the depth buffer in OpenGL (JOGL) to ensure certain items are rendered in front of others by disabling the depth buffer (detailed in my previous question http://stackoverflow.com/questions/2516086/java-opengl-saving-depth-buffer). This works, except when I set the color of an item that is being drawn when the depth test is disabled, none of the material shininess shows up. The item is rendered as a darker version of the original color (it seems like there is no lighting effect really applied to it). Is there a reason why this would be happening? and how might I prevent this? thanks, Jeff

    Read the article

  • How to write to the OpenGL Depth Buffer

    - by Mikepote
    I'm trying to implement an old-school technique where a rendered background image AND preset depth information are used to occlude other objects in the scene. So for instance, if you have a picture of a room with some wires hanging from the ceiling in the foreground, these are given a shallow depth value in the depth map, and, when rendered correctly, they allow the character to walk "behind" the wires but in front of other objects in the room. So far I've tried creating a depth texture using: glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, Image.GetWidth(), Image.GetHeight(), 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, pixels); Then just binding it to a quad and rendering that over the screen, but it doesn't write the depth values from the texture. I've also tried: glDrawPixels(Image.GetWidth(), Image.GetHeight(), GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, pixels); But this slows down my framerate to about 0.25 fps... I know that you can do this in a pixel shader by setting gl_FragDepth to a value from the texture, but I wanted to know if I could achieve this on hardware without pixel shader support?

    Read the article

  • Depth First Search Basics

    - by cam
    I'm trying to improve my current algorithm for the 8 Queens problem, and this is the first time I'm really dealing with algorithm design/algorithms. I want to implement a depth-first search combined with a permutation of the different Y values described here: http://en.wikipedia.org/wiki/Eight_queens_puzzle#The_eight_queens_puzzle_as_an_exercise_in_algorithm_design I've implemented the permutation part to solve the problem, but I'm having a little trouble wrapping my mind around the depth-first search. It is described as a way of traversing a tree/graph, but does it generate the tree/graph? It seems this method would only be more efficient if the depth-first search generates the tree structure to be traversed, by implementing some logic to only generate certain parts of the tree. So essentially, I would have to create an algorithm that generates a pruned tree of lexicographic permutations. I know how to implement the pruning logic, but I'm just not sure how to tie it in with the permutation generator, since I've been using next_permutation. Are there any resources that could help me with the basics of depth-first searches or creating lexicographic permutations in tree form?
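
    On the "does it generate the tree?" question: the tree is never built as a data structure; it exists implicitly in the recursion, and pruning just means returning early so an entire subtree of permutations is never visited. A C++ sketch of that idea for N queens (one queen per column, rows chosen as a growing partial permutation; names are illustrative):

        #include <cstdlib>
        #include <iostream>
        #include <vector>

        // rows[c] is the row of the queen in column c; columns 0..col-1 are placed.
        bool Safe(const std::vector<int>& rows, int col, int row)
        {
            for (int c = 0; c < col; ++c)
                if (rows[c] == row || std::abs(rows[c] - row) == col - c)
                    return false;   // same row or same diagonal as an earlier queen
            return true;
        }

        int Solve(std::vector<int>& rows, int col, int n)
        {
            if (col == n) return 1;                  // a full, valid placement (a leaf)
            int count = 0;
            for (int row = 0; row < n; ++row)
            {
                if (!Safe(rows, col, row)) continue; // prune: this whole subtree is skipped
                rows[col] = row;
                count += Solve(rows, col + 1, n);    // descend one level, depth-first
            }
            return count;
        }

        int main()
        {
            std::vector<int> rows(8);
            std::cout << Solve(rows, 0, 8) << "\n";  // prints 92 for the 8x8 board
        }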

    Read the article

  • write to depth buffer while using multiple render targets

    - by DocSeuss
    Presently my engine is set up to use deferred shading. My pixel shader output struct is as follows: struct GBuffer { float4 Depth : DEPTH0; //depth render target float4 Normal : COLOR0; //normal render target float4 Diffuse : COLOR1; //diffuse render target float4 Specular : COLOR2; //specular render target }; This works fine for flat surfaces, but I'm trying to implement relief mapping which requires me to manually write to the depth buffer to get correct silhouettes. MSDN suggests doing what I'm already doing to output to my depth render target - however, this has no impact on z culling. I think it might be because XNA uses a different depth buffer for every RenderTarget2D. How can I address these depth buffers from the pixel shader?

    Read the article

  • mysql to get depth of record, count parent and ancestor records

    - by Nate
    Hey All, Say I have a post table containing the fields post_id and parent_post_id. I want to return every record in the post table with a count of the "depth" of the post. By depth, I mean how many parent and ancestor records exist. Take this data for example...
        post_id   parent_post_id
        -------   --------------
        1         null
        2         1
        3         1
        4         2
        5         4
    The data represents this hierarchy...
        1
        |_ 2
        |  |_ 4
        |     |_ 5
        |_ 3
    The result of the query should be...
        post_id   depth
        -------   -----
        1         0
        2         1
        3         1
        4         2
        5         3
    Thanks in advance!
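
    For reference, the walk-up-the-parents logic itself, sketched in C++ over an in-memory copy of the sample table; this is what a recursive query or a chain of self-joins has to express on the MySQL side, since the depth is not stored anywhere:

        #include <iostream>
        #include <map>

        int main()
        {
            // post_id -> parent_post_id (0 standing in for null)
            std::map<int, int> parent = { {1, 0}, {2, 1}, {3, 1}, {4, 2}, {5, 4} };

            for (const auto& p : parent)
            {
                int depth = 0;
                for (int cur = p.second; cur != 0; cur = parent[cur])
                    ++depth;                        // count ancestors up to the root
                std::cout << p.first << " " << depth << "\n";
            }
        }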

    Read the article

  • OpenGL ES depth buffer

    - by Istvan
    Hi! I was wondering if I can deallocate the depth buffer in iPhone OpenGL ES to conserve memory, or does it stay until the application finishes? I only need depth testing at the beginning of the application.
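
    It doesn't free itself; on the iPhone the depth buffer is just a renderbuffer attached to the framebuffer, so once depth testing is no longer needed it can be detached and deleted. A sketch, assuming the usual OES-suffixed ES 1.1 setup and a `depthRenderbuffer` handle kept from the original creation code:

        #include <OpenGLES/ES1/gl.h>
        #include <OpenGLES/ES1/glext.h>

        void ReleaseDepthBuffer(GLuint depthRenderbuffer)
        {
            glDisable(GL_DEPTH_TEST);
            // Detach from the currently bound framebuffer, then free the storage.
            glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES,
                                         GL_RENDERBUFFER_OES, 0);
            glDeleteRenderbuffersOES(1, &depthRenderbuffer);
        }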

    Read the article
