Search Results

Search found 68 results on 3 pages for 'directx9'.

Page 2/3 | < Previous Page | 1 2 3  | Next Page >

  • About Alpha blending sprites in Direct3D9

    - by ambrozija
    I have a Direct3D9 application that is rendering ID3DXSprites. The problem I am experiencing is best described in this situation: I have a texture that is totally opaque. On top of it I draw a rectangle filled with a solid color and an alpha of 128. On top of the rectangle I have text that is totally opaque. I draw all of this and get the resulting image through a GetRenderTarget call. The problem is that on the resulting image, in the area where the transparent rectangle is, I have semi-transparent pixels. It is not a problem that the rectangle is transparent; the problem is that the resulting image is. The question is how to set up the blending so that in this situation I don't get transparent pixels in the resulting image. I use the sprite with D3DXSPRITE_ALPHABLEND, which sets the device state to D3DBLEND_SRCALPHA and D3DBLEND_INVSRCALPHA. I tried a couple of combinations of SetRenderState, like D3DBLEND_SRCALPHA, D3DBLEND_DESTALPHA etc., but couldn't make it work. Thanks.
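    One thing that may be worth trying, if the hardware supports it, is separate alpha blending: blend the colour channels with source alpha as usual, but leave the render target's alpha channel untouched so the copied-out image stays opaque. A rough, untested sketch of the render states involved (device is the IDirect3DDevice9):

        // Colour: normal alpha blending.
        device->SetRenderState( D3DRS_ALPHABLENDENABLE, TRUE );
        device->SetRenderState( D3DRS_SRCBLEND, D3DBLEND_SRCALPHA );
        device->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA );
        // Alpha: ignore the sprite's alpha and keep whatever is already in the target,
        // so an opaque render target stays opaque.
        device->SetRenderState( D3DRS_SEPARATEALPHABLENDENABLE, TRUE );
        device->SetRenderState( D3DRS_SRCBLENDALPHA, D3DBLEND_ZERO );
        device->SetRenderState( D3DRS_DESTBLENDALPHA, D3DBLEND_ONE );

    Since ID3DXSprite::Begin( D3DXSPRITE_ALPHABLEND ) sets its own blend states, these calls would probably have to go after Begin(), and whether the sprite interface later overrides them is worth checking.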

    Read the article

  • My vertex shader doesn't affect texture coords or diffuse info but works for position

    - by tina nyaa
    I am new to 3D and DirectX - in the past I have only used abstractions for 2D drawing. Over the past month I've been studying really hard and I'm trying to modify and adapt some of the shaders as part of my personal 'study project'. Below I have a shader, modified from one of the Microsoft samples. I set diffuse and tex0 vertex shader outputs to zero, but my model still shows the full texture and lighting as if I hadn't changed the values from the vertex buffer. Changing the position of the model works, but nothing else. Why is this? // // Skinned Mesh Effect file // Copyright (c) 2000-2002 Microsoft Corporation. All rights reserved. // float4 lhtDir = {0.0f, 0.0f, -1.0f, 1.0f}; //light Direction float4 lightDiffuse = {0.6f, 0.6f, 0.6f, 1.0f}; // Light Diffuse float4 MaterialAmbient : MATERIALAMBIENT = {0.1f, 0.1f, 0.1f, 1.0f}; float4 MaterialDiffuse : MATERIALDIFFUSE = {0.8f, 0.8f, 0.8f, 1.0f}; // Matrix Pallette static const int MAX_MATRICES = 100; float4x3 mWorldMatrixArray[MAX_MATRICES] : WORLDMATRIXARRAY; float4x4 mViewProj : VIEWPROJECTION; /////////////////////////////////////////////////////// struct VS_INPUT { float4 Pos : POSITION; float4 BlendWeights : BLENDWEIGHT; float4 BlendIndices : BLENDINDICES; float3 Normal : NORMAL; float3 Tex0 : TEXCOORD0; }; struct VS_OUTPUT { float4 Pos : POSITION; float4 Diffuse : COLOR; float2 Tex0 : TEXCOORD0; }; float3 Diffuse(float3 Normal) { float CosTheta; // N.L Clamped CosTheta = max(0.0f, dot(Normal, lhtDir.xyz)); // propogate scalar result to vector return (CosTheta); } VS_OUTPUT VShade(VS_INPUT i, uniform int NumBones) { VS_OUTPUT o; float3 Pos = 0.0f; float3 Normal = 0.0f; float LastWeight = 0.0f; // Compensate for lack of UBYTE4 on Geforce3 int4 IndexVector = D3DCOLORtoUBYTE4(i.BlendIndices); // cast the vectors to arrays for use in the for loop below float BlendWeightsArray[4] = (float[4])i.BlendWeights; int IndexArray[4] = (int[4])IndexVector; // calculate the pos/normal using the "normal" weights // and accumulate the weights to calculate the last weight for (int iBone = 0; iBone < NumBones-1; iBone++) { LastWeight = LastWeight + BlendWeightsArray[iBone]; Pos += mul(i.Pos, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone]; Normal += mul(i.Normal, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone]; } LastWeight = 1.0f - LastWeight; // Now that we have the calculated weight, add in the final influence Pos += (mul(i.Pos, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight); Normal += (mul(i.Normal, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight); // transform position from world space into view and then projection space //o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj); o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj); o.Diffuse.x = 0.0f; o.Diffuse.y = 0.0f; o.Diffuse.z = 0.0f; o.Diffuse.w = 0.0f; o.Tex0 = float2(0,0); return o; } technique t0 { pass p0 { VertexShader = compile vs_3_0 VShade(4); } } I am currently using the SlimDX .NET wrapper around DirectX, but the API is extremely similar: public void Draw() { var device = vertexBuffer.Device; device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.White, 1.0f, 0); device.SetRenderState(RenderState.Lighting, true); device.SetRenderState(RenderState.DitherEnable, true); device.SetRenderState(RenderState.ZEnable, true); device.SetRenderState(RenderState.CullMode, Cull.Counterclockwise); device.SetRenderState(RenderState.NormalizeNormals, true); device.SetSamplerState(0, SamplerState.MagFilter, TextureFilter.Anisotropic); 
device.SetSamplerState(0, SamplerState.MinFilter, TextureFilter.Anisotropic); device.SetTransform(TransformState.World, Matrix.Identity * Matrix.Translation(0, -50, 0)); device.SetTransform(TransformState.View, Matrix.LookAtLH(new Vector3(-200, 0, 0), Vector3.Zero, Vector3.UnitY)); device.SetTransform(TransformState.Projection, Matrix.PerspectiveFovLH((float)Math.PI / 4, (float)device.Viewport.Width / device.Viewport.Height, 10, 10000000)); var material = new Material(); material.Ambient = material.Diffuse = material.Emissive = material.Specular = new Color4(Color.White); material.Power = 1f; device.SetStreamSource(0, vertexBuffer, 0, vertexSize); device.VertexDeclaration = vertexDeclaration; device.Indices = indexBuffer; device.Material = material; device.SetTexture(0, texture); var param = effect.GetParameter(null, "mWorldMatrixArray"); var boneWorldTransforms = bones.OrderedBones.OrderBy(x => x.Id).Select(x => x.CombinedTransformation).ToArray(); effect.SetValue(param, boneWorldTransforms); effect.SetValue(effect.GetParameter(null, "mViewProj"), Matrix.Identity);// Matrix.PerspectiveFovLH((float)Math.PI / 4, (float)device.Viewport.Width / device.Viewport.Height, 10, 10000000)); effect.SetValue(effect.GetParameter(null, "MaterialDiffuse"), material.Diffuse); effect.SetValue(effect.GetParameter(null, "MaterialAmbient"), material.Ambient); effect.Technique = effect.GetTechnique(0); var passes = effect.Begin(FX.DoNotSaveState); for (var i = 0; i < passes; i++) { effect.BeginPass(i); device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, skin.Vertices.Length, 0, skin.Indicies.Length / 3); effect.EndPass(); } effect.End(); } Again, I set diffuse and tex0 vertex shader outputs to zero, but my model still shows the full texture and lighting as if I hadn't changed the values from the vertex buffer. Changing the position of the model works, but nothing else. Why is this? Also, whatever I set in the bone transformation matrices doesn't seem to have an effect on my model. If I set every bone transformation to a zero matrix, the model still shows up as if nothing had happened, but changing the Pos field in shader output makes the model disappear. I don't understand why I'm getting this kind of behaviour. Thank you!
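    One thing that might be worth ruling out: in Direct3D 9 a vs_3_0 vertex shader generally has to be paired with a ps_3_0 pixel shader rather than the fixed-function pixel pipeline, so a technique whose only pass compiles VShade may fail validation, and the draw may fall back to fixed-function processing of the original vertex data. That would explain both the unchanged texture/lighting and the bone matrices having no effect. A minimal check, sketched in C++/D3DX terms (handle names assumed; SlimDX exposes the same calls):

        // If this fails, the technique (vs_3_0 with no pixel shader) is not actually runnable
        // and the fixed-function pipeline is probably what is drawing the model.
        D3DXHANDLE hTech = pEffect->GetTechnique( 0 );
        HRESULT hr = pEffect->ValidateTechnique( hTech );
        if( FAILED( hr ) )
        {
            // Adding a trivial ps_3_0 pass that just returns the interpolated COLOR0
            // would make the zeroed Diffuse/Tex0 outputs visible.
        }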

    Read the article

  • Direct3d - Code structure

    - by marcg11
    I'm learning DirectX in a master's degree and they taught us to have a GraphicsLayer class which is the one connecting with the Direct3D library. That way this class is completely independent from the other classes (my game classes), meaning changing the renderer to OpenGL wouldn't require much effort, only changing the GraphicsLayer. This class has its LoadAssets and Paint methods, but I have a question: they told us to load all the assets inside this class. This means all of these calls will be in the LoadAssets method: D3DXCreateTextureFromFileEx(g_pD3DDevice,"tiles.png",0,0,1,0,D3DFMT_UNKNOWN,D3DPOOL_DEFAULT,D3DX_FILTER_NONE,D3DX_FILTER_NONE,NULL,NULL,NULL,&texTiles); // And more resources to load //... texTiles, as you see, is a LPDIRECT3DTEXTURE9 instance which is declared in graphicLayer.h. So my question is, how do you manage all the resources? Do I have to declare all my game textures in the .h even if I'm not using them? How would you load only the resources that are in a scene and draw them in a code-structured way?
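    One way this is often handled, offered as a sketch rather than the course's approach: let the graphics layer own a small cache keyed by file name, so nothing has to be declared per texture in the header and only the assets a scene actually requests get loaded. The names below are illustrative only (g_pD3DDevice is taken from the snippet above):

        #include <d3dx9.h>
        #include <map>
        #include <string>

        class GraphicsLayer
        {
        public:
            // Returns a cached texture, loading it from disk on first use.
            LPDIRECT3DTEXTURE9 GetTexture( const std::string& file )
            {
                std::map<std::string, LPDIRECT3DTEXTURE9>::iterator it = m_textures.find( file );
                if( it != m_textures.end() )
                    return it->second;

                LPDIRECT3DTEXTURE9 tex = NULL;
                if( FAILED( D3DXCreateTextureFromFile( g_pD3DDevice, file.c_str(), &tex ) ) )
                    return NULL;

                m_textures[ file ] = tex;
                return tex;
            }

            // Called when a scene is unloaded.
            void ReleaseTextures()
            {
                for( std::map<std::string, LPDIRECT3DTEXTURE9>::iterator it = m_textures.begin();
                     it != m_textures.end(); ++it )
                    it->second->Release();
                m_textures.clear();
            }

        private:
            std::map<std::string, LPDIRECT3DTEXTURE9> m_textures;
        };

    Scene code would then ask for "tiles.png" by name when it needs it, and the layer stays the only place that touches Direct3D.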

    Read the article

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas Tool( D3DXUVAtlasCreate() ). I've succeeded in generating an atlas, however, when I try to render the mesh object using the atlas the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a uv atlas for a cube: struct sVertexPosNormTex { D3DXVECTOR3 vPos, vNorm; D3DXVECTOR2 vUV; sVertexPosNormTex(){} sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv ) { vPos = v; vNorm = n; vUV = uv; } ~sVertexPosNormTex() { } }; // create a light map texture to fill programatically hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &pLightmap ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr ); return hr; } // get the zero level surface from the texture IDirect3DSurface9 *pS = NULL; pLightmap->GetSurfaceLevel( 0, &pS ); // clear surface pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) ); // load a sample mesh DWORD dwcMaterials = 0; LPD3DXBUFFER pMaterialBuffer = NULL; V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice, &pAdjacency, &pMaterialBuffer, NULL, &dwcMaterials, &g_pMesh ) ); // generate adjacency DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ]; g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency ); // create light map coordinates LPD3DXMESH pMesh = NULL; LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL; FLOAT resultStretch = 0; UINT numCharts = 0; hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency, NULL, NULL, NULL, NULL, NULL, 0, &pMesh, &pFacePartitioning, &pVertexRemapArray, &resultStretch, &numCharts ); if( SUCCEEDED( hr ) ) { // release and set mesh SAFE_RELEASE( g_pMesh ); g_pMesh = pMesh; // write mesh to file hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0, ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(), NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr ); } // fill the the light map hr = BuildLightmap( pS, g_pMesh ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr ); } } else { DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr ); } SAFE_RELEASE( pS ); SAFE_DELETE_ARRAY( pdwAdjacency ); SAFE_RELEASE( pFacePartitioning ); SAFE_RELEASE( pVertexRemapArray ); SAFE_RELEASE( pMaterialBuffer ); Here is code to fill lightmap texture: HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh ) { HRESULT hr = S_OK; // validate lightmap texture surface and mesh if( !pS || !pMesh ) return E_POINTER; // lock the mesh vertex buffer sVertexPosNormTex *pV = NULL; pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV ); // lock the mesh index buffer WORD *pI = NULL; pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI ); // get the lightmap texture surface description D3DSURFACE_DESC desc; pS->GetDesc( &desc ); // lock the surface rect to fill with color data D3DLOCKED_RECT rct; hr = pS->LockRect( &rct, NULL, 0 ); if( FAILED( hr ) ) { DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr ); return hr; } // iterate the pixels of the lightmap texture // check each pixel to see if it lies between the uv coordinates of a cube face BYTE *pBuffer = ( BYTE* )rct.pBits; for( UINT y = 0; y < desc.Height; ++y ) { BYTE* pBufferRow = ( BYTE* )pBuffer; for( UINT x = 0; x 
< desc.Width * 4; x+=4 ) { // determine the pixel's uv coordinate D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f, y / ( float )desc.Height + 0.5f / 128.0f ); // for each face of the mesh // check to see if the pixel lies within the face's uv coordinates for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i +=3 ) { sVertexPosNormTex v[ 3 ]; v[ 0 ] = pV[ pI[ i + 0 ] ]; v[ 1 ] = pV[ pI[ i + 1 ] ]; v[ 2 ] = pV[ pI[ i + 2 ] ]; if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) ) { // the pixel lies b/t the uv coordinates of a cube face // light contribution functions aren't needed yet //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos, v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ); //D3DXVECTOR3 vNormal = v[ 0 ].vNorm; // set the color of this pixel red( for demo ) BYTE ba[] = { 0, 0, 255, 255, }; //ComputeContribution( vPos, vNormal, g_sLight, ba ); // copy the byte array into the light map texture memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) ); } } } // go to next line of the texture pBuffer += rct.Pitch; } // unlock the surface rect pS->UnlockRect(); // unlock mesh vertex and index buffers pMesh->UnlockIndexBuffer(); pMesh->UnlockVertexBuffer(); // write the surface to file hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL ); if( FAILED( hr ) ) DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr ); return hr; } bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1, const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p ) { // compute vectors D3DXVECTOR2 v0 = t1 - t0, v1 = t2 - t0, v2 = p - t0; float f00 = D3DXVec2Dot( &v0, &v0 ); float f01 = D3DXVec2Dot( &v0, &v1 ); float f02 = D3DXVec2Dot( &v0, &v2 ); float f11 = D3DXVec2Dot( &v1, &v1 ); float f12 = D3DXVec2Dot( &v1, &v2 ); // Compute barycentric coordinates float invDenom = 1 / ( f00 * f11 - f01 * f01 ); float fU = ( f11 * f02 - f01 * f12 ) * invDenom; float fV = ( f00 * f12 - f01 * f02 ) * invDenom; // Check if point is in triangle if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) ) return true; return false; } Screenshot Lightmap I believe the problem comes from the difference between the lightmap uv coordinates and the pixel center coordinates...for example, here are the lightmap uv coordinates( generated by D3DXUVAtlasCreate() ) for a specific face( tri ) within the mesh, keep in mind that I'm using the mesh uv coordinates to write the pixels for the texture: v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 ); v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 ); v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 ); the lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are: float halfPixel = 0.5 / 128 = 0.00390625; D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel ); will the mapping and sampling of the lightmap texture will require that an offset be taken into account or that the uv coordinates are snapped to the pixel centers..? ...Any ideas on the best way to approach this situation would be appreciated...What are the common practices?
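    For what it's worth, the usual D3D9 convention is that the centre of pixel x lies at u = (x + 0.5) / width, and the common cures for lightmap seams are to snap chart borders to texel centres and to dilate (gutter) each chart by a texel or two so bilinear filtering never pulls in the unwritten background. A small sketch of the mapping, assuming a 128x128 lightmap:

        // Pixel column/row -> texture coordinate of that pixel's centre.
        float u = ( x + 0.5f ) / ( float )desc.Width;
        float v = ( y + 0.5f ) / ( float )desc.Height;

        // Texture coordinate -> pixel that contains it.
        int px = ( int )( u * desc.Width );
        int py = ( int )( v * desc.Height );

    Filling a one- or two-pixel border around every chart with the colour of the nearest covered pixel (a simple dilation pass over the locked surface) is usually what removes the visible seams, rather than any change to the UVs themselves.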

    Read the article

  • Radiosity using a hemisphere

    - by P. Avery
    I'm working on a radiosity processor. I'm projecting scene geometry onto a hemisphere at a high order of tessellation during a visibility pass onto a 1024x1024 render target. The problem is that the edges of certain triangles are not being rendered to the item buffer (the render target), so when I test certain edges (or pixels, in the pixel shader) for visibility during the reconstruction pass, visible edges are not identified, and as a result the pixels for those edges are discarded. One solution was to increase the resolution of the item buffer (up to 4096x4096); this helped and more edges became visible, but it was not foolproof. How do I increase visibility? Here is a screenshot of a scene after radiosity is applied: the seams are edges along a triangle face that were not visible due to the resolution of the item buffer. I eventually fixed the problem by sampling the item buffer with 8 points.

    Read the article

  • DX9 Deferred Rendering, GBuffer displays as clear color only

    - by Fire31
    I'm trying to implement Catalin Zima's deferred renderer in a very lightweight C++ DirectX 9 app (it only renders a skydome and a model). At the moment I'm trying to render the GBuffer, but I'm having a problem: the screen shows only the clear color, no matter how much I move the camera around. However, removing all the render target operations lets the app render the scene normally, even when the renderGBuffer effect is applied to the models. Any ideas of what I'm doing wrong?
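    A couple of things commonly cause exactly this symptom, offered as guesses: the original backbuffer surface is not saved and restored after the GBuffer pass, or the extra render targets are left bound during the final combine. A rough outline of the expected flow (the GBuffer surface names are assumptions):

        // Keep hold of the backbuffer before redirecting output to the GBuffer.
        IDirect3DSurface9* backBuffer = NULL;
        device->GetRenderTarget( 0, &backBuffer );

        device->SetRenderTarget( 0, gbufferColorSurface );
        device->SetRenderTarget( 1, gbufferNormalSurface );
        device->SetRenderTarget( 2, gbufferDepthSurface );
        device->Clear( 0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0, 1.0f, 0 );
        // ... draw the skydome and model with the renderGBuffer effect ...

        // Put the screen back before the lighting / combine pass.
        device->SetRenderTarget( 0, backBuffer );
        device->SetRenderTarget( 1, NULL );
        device->SetRenderTarget( 2, NULL );
        backBuffer->Release();
        // ... full-screen pass that samples the GBuffer textures ...

    Running the frame through PIX and checking what the GBuffer targets actually contain would quickly show which half of that is going wrong.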

    Read the article

  • Getting FEATURE_LEVEL_9_3 to work in DX11

    - by Dominic
    Currently I'm going through some tutorials and learning DX11 on a DX10 machine (though I just ordered a new DX11 compatible computer) by means of setting the D3D_FEATURE_LEVEL_ setting to 10_0 and switching the vertex and pixel shader versions in D3DX11CompileFromFile to "vs_4_0" and "ps_4_0" respectively. This works fine as I'm not using any DX11-only features yet. I'd like to make it compatible with DX9.0c, which naively I thought I could do by changing the feature level setting to 9_3 or something and taking the vertex/pixel shader versions down to 3 or 2. However, no matter what I change the vertex/pixel shader versions to, it always fails when I try to call D3DX11CompileFromFile to compile the vertex/pixel shader files when I have D3D_FEATURE_LEVEL_9_3 enabled. Maybe this is due to the the vertex/pixel shader files themselves being incompatible for the lower vertex/pixel shader versions, but I'm not expert enough to say. My shader files are listed below: Vertex shader: cbuffer MatrixBuffer { matrix worldMatrix; matrix viewMatrix; matrix projectionMatrix; }; struct VertexInputType { float4 position : POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; PixelInputType LightVertexShader(VertexInputType input) { PixelInputType output; // Change the position vector to be 4 units for proper matrix calculations. input.position.w = 1.0f; // Calculate the position of the vertex against the world, view, and projection matrices. output.position = mul(input.position, worldMatrix); output.position = mul(output.position, viewMatrix); output.position = mul(output.position, projectionMatrix); // Store the texture coordinates for the pixel shader. output.tex = input.tex; // Calculate the normal vector against the world matrix only. output.normal = mul(input.normal, (float3x3)worldMatrix); // Normalize the normal vector. output.normal = normalize(output.normal); return output; } Pixel Shader: Texture2D shaderTexture; SamplerState SampleType; cbuffer LightBuffer { float4 ambientColor; float4 diffuseColor; float3 lightDirection; float padding; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; float4 LightPixelShader(PixelInputType input) : SV_TARGET { float4 textureColor; float3 lightDir; float lightIntensity; float4 color; // Sample the pixel color from the texture using the sampler at this texture coordinate location. textureColor = shaderTexture.Sample(SampleType, input.tex); // Set the default output color to the ambient light value for all pixels. color = ambientColor; // Invert the light direction for calculations. lightDir = -lightDirection; // Calculate the amount of light on this pixel. lightIntensity = saturate(dot(input.normal, lightDir)); if(lightIntensity > 0.0f) { // Determine the final diffuse color based on the diffuse color and the amount of light intensity. color += (diffuseColor * lightIntensity); } // Saturate the final light color. color = saturate(color); // Multiply the texture pixel and the final diffuse color to get the final pixel color result. color = color * textureColor; return color; }
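    For what it's worth, when the device is created at D3D_FEATURE_LEVEL_9_3 the shaders are expected to be compiled with the down-level Shader Model 4 profiles rather than the old "vs_3_0"/"ps_2_0"-style names; the compile call itself otherwise looks the same. A sketch (file and blob variable names assumed):

        // Feature level 9_3 pairs with the *_4_0_level_9_3 profiles.
        hr = D3DX11CompileFromFile( vsFilename, NULL, NULL, "LightVertexShader",
                                    "vs_4_0_level_9_3", D3D10_SHADER_ENABLE_STRICTNESS, 0,
                                    NULL, &vertexShaderBuffer, &errorMessage, NULL );

        hr = D3DX11CompileFromFile( psFilename, NULL, NULL, "LightPixelShader",
                                    "ps_4_0_level_9_3", D3D10_SHADER_ENABLE_STRICTNESS, 0,
                                    NULL, &pixelShaderBuffer, &errorMessage, NULL );

    If those profiles still reject the shaders, the error blob should say which construct is not allowed at 9_x, which narrows things down quickly.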

    Read the article

  • D3D9 Alpha Blending on the surfaces

    - by Indeera
    I have a surface (OffScreenPlain or RenderTarget with D3DFMT_A8R8G8B8) which I copy pixels (ARGB) to from a third-party function. Before the pixel copy, the bits are accessed with LockRect. This surface is then StretchRect'ed to the backbuffer, which is also D3DFMT_A8R8G8B8. The surface and backbuffer are different dimensions, and filtering is set to D3DTEXF_NONE. Just after creating the D3D device I set the following render states: D3DRS_ALPHABLENDENABLE -> TRUE D3DRS_BLENDOP -> D3DBLENDOP_ADD D3DRS_SRCBLEND -> D3DBLEND_SRCALPHA D3DRS_DESTBLEND -> D3DBLEND_INVSRCALPHA But I see no alpha blending happening. I've verified that alpha is specified in the pixels. I've done a simple test by creating a vertex buffer and drawing a triangle (DrawPrimitive), which displays with alpha blending. In this test the surface was StretchRect'ed first and then DrawPrimitive was called; the surface content displays without alpha blending and the triangle displays with alpha blending. What am I missing here? Thanks
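    A likely explanation, stated as an assumption rather than a verified answer: StretchRect is a plain surface copy and does not pass through the blending stage at all, so the alpha-blend render states never apply to it; the triangle blends because DrawPrimitive does go through that stage. Getting the offscreen image blended over the backbuffer generally means drawing it as a textured quad (or through ID3DXSprite) instead, roughly:

        // Assumes the third-party pixels are written into the level-0 surface of a texture
        // instead of a standalone surface, so it can be drawn rather than StretchRect'ed.
        LPD3DXSPRITE sprite = NULL;
        D3DXCreateSprite( device, &sprite );

        sprite->Begin( D3DXSPRITE_ALPHABLEND );
        sprite->Draw( texture, NULL, NULL, NULL, D3DCOLOR_ARGB( 255, 255, 255, 255 ) );
        sprite->End();
        sprite->Release();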

    Read the article

  • How to get warnings when compiling fx files

    - by jdv-Jan de Vaan
    When I compile DirectX shaders (.fx files), I don't see any compiler warnings unless there was an error in the effect. This happens both when using the offline FXC compiler and when calling SlimDX's CompileEffect (which is what we normally do). I could force warnings to be treated as errors (/WX), but if you enable that, you just get an error that compilation failed, without the warning that caused the problem. So how can I output warnings for shaders that compile properly?
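    One possible place the warnings end up, offered as a guess: the D3DX effect-compilation calls can fill the error buffer with warnings as well as errors, even when the call itself succeeds, so the text may simply never be printed on the success path. In C++/D3DX terms that would look roughly like:

        LPD3DXBUFFER messages = NULL;
        HRESULT hr = D3DXCreateEffectFromFile( device, "shader.fx", NULL, NULL,
                                               0, NULL, &effect, &messages );
        if( messages )   // may contain warnings even when hr succeeded
        {
            OutputDebugStringA( ( const char* )messages->GetBufferPointer() );
            messages->Release();
        }

    If SlimDX's CompileEffect exposes a compilation-output string as well, checking it on success (not just on failure) may be all that is needed.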

    Read the article

  • Isometric smooth fog

    - by marcg11
    I'm working on a simple 2D game with Direct3D 9. It's an isometric game with diamond tiles and a staggered map. This is what I have: as you see, I have some kind of fog, which is accomplished by having a fog matrix that is true (clear terrain) or false (obscured terrain). But the result is very chunky: the fog moves as the player moves, by tiles rather than by pixels. Basically I check every tile for fog and change that tile's color accordingly: if(scene->fog[i+mapx][j+mapy] == FOG_NONE) { tile_color = 0x666666FF; } I would also like the fog to be smoother. For that I followed this "tutorial", but I haven't managed to work it out. http://www.appsizematters.com/2010/07/how-to-implement-a-fog-of-war-part-2-smooth/
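    A sketch of one way to get the smoothing, under the assumption that tiles are drawn with per-vertex colours: compute a visibility value per tile corner by averaging the fog state of the (up to) four tiles that share that corner, and put it in each corner's vertex colour, so the rasteriser shades across the tile instead of colouring it uniformly. The names below are illustrative only, and the two shade endpoints may need swapping to match however tile_color is applied:

        // 0.0f = fully fogged corner, 1.0f = fully clear corner.
        float CornerVisibility( const Scene* scene, int cx, int cy )
        {
            int clear = 0;
            clear += ( scene->fog[ cx - 1 ][ cy - 1 ] == FOG_NONE );
            clear += ( scene->fog[ cx     ][ cy - 1 ] == FOG_NONE );
            clear += ( scene->fog[ cx - 1 ][ cy     ] == FOG_NONE );
            clear += ( scene->fog[ cx     ][ cy     ] == FOG_NONE );
            return clear / 4.0f;
        }

        // Per-corner tint between the dimmed 0x66 grey and full brightness.
        DWORD shade = 0x66 + ( DWORD )( ( 0xFF - 0x66 ) * CornerVisibility( scene, cx, cy ) );
        DWORD corner_color = D3DCOLOR_ARGB( 0xFF, shade, shade, shade );

    (Bounds checks at the map edges are omitted.) The fog matrix itself can stay per tile; the interpolation alone removes most of the chunkiness.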

    Read the article

  • Implementing invisible bones

    - by DeadMG
    I suddenly have the feeling that I have absolutely no idea how to implement invisible objects/bones. Right now, I use hardware instancing to store the world matrix of every bone in a vertex buffer, and then send them all to the pipeline. But frustum culling, or bones set to invisible by my simulation for other reasons, means that some of them will be invisible on any given frame. Does this mean I effectively need to re-fill the buffer from scratch every frame with only the visible units' matrices? This seems to me like it would involve a lot of wasted bandwidth.
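    Re-filling the instance buffer every frame is in fact the normal pattern: with a D3DUSAGE_DYNAMIC buffer locked with D3DLOCK_DISCARD the driver hands back fresh memory each time, and uploading a few thousand matrices per frame is usually cheap compared with the draw itself. A rough per-frame sketch (buffer and instance names assumed):

        // Copy only the matrices of bones that survived culling / visibility checks.
        D3DXMATRIX* dst = NULL;
        instanceVB->Lock( 0, visibleCount * sizeof( D3DXMATRIX ),
                          ( void** )&dst, D3DLOCK_DISCARD );
        for( UINT i = 0; i < visibleCount; ++i )
            dst[ i ] = visibleWorldMatrices[ i ];
        instanceVB->Unlock();

        // Draw exactly visibleCount instances.
        device->SetStreamSourceFreq( 0, D3DSTREAMSOURCE_INDEXEDDATA | visibleCount );
        device->SetStreamSourceFreq( 1, D3DSTREAMSOURCE_INSTANCEDATA | 1 );

    Keeping every matrix resident and skipping invisible ones on the GPU tends to cost more than the re-upload, so the bandwidth worry is usually smaller than it looks.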

    Read the article

  • How to blend the sprite into background?

    - by optimisez
    I am trying to blend the character into the game, but I still cannot remove the blue color in the sprite sheet, and I have discovered that the white area of the sprite is semi-transparent. Before that, the color D3DCOLOR_XRGB(255, 255, 255) was set in D3DXCreateTextureFromFileEx, and you could see the fireball through the sprite. After I changed the color to D3DCOLOR_XRGB(0, 255, 255) the result changed, but now I am trying to remove the blue color of the sprite sheet as well, so that only the character remains. Until now, I still cannot figure out how to do that. Any ideas? void initPlayer() { // Create texture. hr = D3DXCreateTextureFromFileEx(d3dDevice, "player.png", 169, 44, D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT, D3DCOLOR_XRGB(0, 255, 255), NULL, NULL, &player); } void renderPlayer() { sprite->Draw(player, &playerRect, NULL, &D3DXVECTOR3(playerDest.X, playerDest.Y, 0),D3DCOLOR_XRGB(255, 255, 255)); } void initFireball() { hr = D3DXCreateTextureFromFileEx(d3dDevice, "fireball.png", 512, 512, D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT, D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &fireball); } void renderFireball() { sprite->Draw(fireball, &fireballRect, NULL, &D3DXVECTOR3(fireballDest.X, fireballDest.Y, 0), D3DCOLOR_XRGB(255,255, 255)); }
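    A guess at the cause: the ColorKey argument only knocks out pixels that match it exactly, so it has to be the sheet's actual background colour. D3DCOLOR_XRGB(0, 255, 255) is cyan; if the background really is pure blue, the key would be D3DCOLOR_XRGB(0, 0, 255) (the XRGB macro already supplies the required 0xFF alpha), for example:

        hr = D3DXCreateTextureFromFileEx( d3dDevice, "player.png", 169, 44,
                                          D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED,
                                          D3DX_DEFAULT, D3DX_DEFAULT,
                                          D3DCOLOR_XRGB( 0, 0, 255 ),   // must equal the background pixels exactly
                                          NULL, NULL, &player );

    If the sheet's blue is not exactly (0, 0, 255), reading the colour off the image in an editor and using that exact value is the safer route; the semi-transparent white areas also suggest the PNG may carry its own alpha channel worth checking.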

    Read the article

  • Partial Shader Signatures HLSL D3D11 C++

    - by ThePhD
    I had been debugging a problem I was having in a single shader file with 2 functions in it. I'm using DirectX 11, vs_5_0 and ps_5_0. I have stripped it down to its basic components to understand what was going wrong with the shaders, because the different named components of the Pixel and Vertex shaders were swapping the data being input: void QuadVertex ( inout float4 position : SV_Position, inout float4 color : COLOR0, inout float2 tex : TEXCOORD0 ) { // ViewProject is a 4x4 matrix, // just included here to show the simple passthrough of the data position = mul(position, ViewProjection); } And a Pixel Shader: float4 QuadPixel ( float4 color : COLOR0, float2 tex : TEXCOORD0 ) : SV_Target0 { // Color is filled with position data and tex is // filled with color values from the Vertex Shader return color; } The ID3D11InputLayout and associated C++ code correctly compiles the shaders and sets them up with some simple primitive data: data[0].Position.x = 0.0f * 210; data[0].Position.y = 1.0f * 160; data[0].Position.z = 0.0f; data[1].Position.x = 0.0f * 210; data[1].Position.y = 0.0f * 160; data[1].Position.z = 0.0f; data[2].Position.x = 1.0f * 210; data[2].Position.y = 1.0f * 160; data[2].Position.z = 0.0f; data[0].Colour = Colors::Red; data[1].Colour = Colors::Red; data[2].Colour = Colors::Red; data[0].Texture = Vector2::Zero; data[1].Texture = Vector2::Zero; data[2].Texture = Vector2::Zero; When used with the shader, the float4 color always ended up with the position data, and the float2 tex always ended up with the color data. After a moment, I figured out that the shader's input and output signatures needed to be in the correct order and the correct format and be laid out in the exact order of the output from the Vertex Shader, regardless of the semantics: float4 QuadPixel ( float4 pos : SV_Position, float4 color : COLOR0, float2 tex : TEXCOORD0 ) : SV_Target0 { return color; } After finding this out, My question is: Why don't the semantics map the appropriate components when going from Vertex Shader to Pixel Shader? Is there any way that I can make it so certain semantics are always mapped to other semantics, or do I always have to follow the rigid Shader Signature (in this case, Position, Color, and Texture) ? As a side note for why I'm asking: I know that when using XNA, my shader signatures for functions could differ in position and even drop items from Vertex Shader to Pixel Shader function parameters, having only the COLOR0 and TEXCOORD0 components being used (and it would still match up correctly). However, I also know that XNA relied on DX9 (and maybe a little DX10) implementation, and that maybe this kind of flexibility no longer exists in DX11?

    Read the article

  • How to capture the screen in DirectX 9 to a raw bitmap in memory without using D3DXSaveSurfaceToFile

    - by cloudraven
    I know that in OpenGL I can do something like this: glReadBuffer( GL_FRONT ); glReadPixels( 0, 0, _width, _height, GL_RGB, GL_UNSIGNED_BYTE, _buffer ); It's pretty fast, and I get the raw bitmap in _buffer. When I try to do this in DirectX, assuming that I have a D3DDevice object, I can do something like this: if (SUCCEEDED(D3DDevice->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pBackbuffer))) { HRESULT hr = D3DXSaveSurfaceToFileA(filename, D3DXIFF_BMP, pBackbuffer, NULL, NULL); } But D3DXSaveSurfaceToFile is pretty slow, and I don't need to write the capture to disk anyway, so I was wondering if there is a faster way to do this.
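    A faster route that stays in memory, sketched under the usual D3D9 constraints: pull the backbuffer into a system-memory surface with GetRenderTargetData and lock that to reach the raw pixels, which is essentially the D3D9 counterpart of glReadPixels:

        IDirect3DSurface9* backBuffer = NULL;
        IDirect3DSurface9* sysMem = NULL;
        D3DSURFACE_DESC desc;

        D3DDevice->GetBackBuffer( 0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer );
        backBuffer->GetDesc( &desc );
        D3DDevice->CreateOffscreenPlainSurface( desc.Width, desc.Height, desc.Format,
                                                D3DPOOL_SYSTEMMEM, &sysMem, NULL );

        D3DDevice->GetRenderTargetData( backBuffer, sysMem );   // GPU -> system memory

        D3DLOCKED_RECT rect;
        sysMem->LockRect( &rect, NULL, D3DLOCK_READONLY );
        // rect.pBits points at the raw pixels, rect.Pitch bytes per row.
        sysMem->UnlockRect();

        sysMem->Release();
        backBuffer->Release();

    GetRenderTargetData requires the destination to match in size and format and the source not to be multisampled; with a multisampled backbuffer, a StretchRect to a non-multisampled render target first is the usual workaround.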

    Read the article

  • How can I make a 32 bit render target with a 16 bit alpha channel in DirectX?

    - by J Junker
    I want to create a render target that is 32-bit, with 16 bits each for alpha and luminance. The closest surface formats I can find in the DirectX SDK are: D3DFMT_A8L8 // 16-bit using 8 bits each for alpha and luminance. D3DFMT_G16R16F // 32-bit float format using 16 bits for the red channel and 16 bits for the green channel. But I don't think either of these will work, since D3DFMT_A8L8 doesn't have the precision and D3DFMT_G16R16F doesn't have an alpha channel (I need a separate blend state for alpha). How can I create a render target that allows a separate blend state for luminance and alpha, with 16 bit precision on each channel, that doesn't exceed 32 bits per pixel?

    Read the article

  • Why my collision detection is not accurate?

    - by optimisez
    After trying and trying, I still cannot understand why the leg of character exceeds the wall but no clipping issue when I hit the wall from below. How should I fix it to make him standstill on the wall? void initPlayer() { // Create texture. hr = D3DXCreateTextureFromFileEx(d3dDevice, "player.png", 169, 44, D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT, D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &player); playerRect.left = playerRect.top = 0; playerRect.right = 29; playerRect.bottom = 36; playerDest.X = 0; playerDest.Y = 564; playerDest.length = playerRect.right - playerRect.left; playerDest.height = playerRect.bottom - playerRect.top; } void initBox() { hr = D3DXCreateTextureFromFileEx(d3dDevice, "brock.png", 330, 132, D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT, D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &box); boxRect.left = 33; boxRect.top = 0; boxRect.right = 63; boxRect.bottom = 30; boxDest.X = boxDest.Y = 300; boxDest.length = boxRect.right - boxRect.left; boxDest.height = boxRect.bottom - boxRect.top; } bool spriteCollide(Entity player, Entity target) { float left1, left2; float right1, right2; float top1, top2; float bottom1, bottom2; left1 = player.X; left2 = target.X; right1 = player.X + player.length; right2 = target.X + target.length; top1 = player.Y; top2 = target.Y; bottom1 = player.Y + player.height; bottom2 = target.Y + target.height; if (bottom1 < top2) return false; if (top1 > bottom2) return false; if (right1 < left2) return false; if (left1 > right2) return false; return true; } void collideWithBox() { if ( spriteCollide(playerDest, boxDest) && keyArr[VK_UP]) //playerDest.Y += 50; playerDest.Y = boxDest.Y + boxDest.height; else if ( spriteCollide(playerDest, boxDest) && !keyArr[VK_UP]) playerDest.Y = boxDest.Y - boxDest.height; }
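    A guess at the culprit, reading collideWithBox(): when the player lands on top of the box, it is snapped to boxDest.Y - boxDest.height, i.e. offset by the box's height (30) instead of the player's own height (36), which leaves the feet 6 pixels inside the box; hitting it from below uses the box's height correctly, which is why that case looks fine. The hedged fix would be:

        void collideWithBox()
        {
            if( spriteCollide( playerDest, boxDest ) && keyArr[ VK_UP ] )
                playerDest.Y = boxDest.Y + boxDest.height;       // pushed below the box: unchanged
            else if( spriteCollide( playerDest, boxDest ) && !keyArr[ VK_UP ] )
                playerDest.Y = boxDest.Y - playerDest.height;    // rest the player's feet on the box top
        }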

    Read the article

  • Getting the front buffer into a gfx mem surface (Dx9)

    - by lapin
    I'm using DirectX 9 to acquire the front buffer. There are a couple of ways I know of to get at the front buffer: GetRenderTargetData() GetFrontBufferData() The MSDN pages for both of these API calls state that the data is copied from device memory to system memory. I'd like to copy the front buffer surface directly to another graphics-memory surface, as I have other manipulations to perform on the acquired surface before returning it to system memory. I'm creating a D3DUSAGE_DYNAMIC texture (a graphics-memory texture) and calling GetFrontBufferData() to write the front buffer to my texture's surface 0. Is this valid? Will the operation remain in graphics memory, or will it need to move to system memory and then back to graphics memory? If it does, is what I'm trying to achieve possible?
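    For what it's worth, GetFrontBufferData() is documented to require a destination surface created in D3DPOOL_SYSTEMMEM, so writing it straight into a D3DUSAGE_DYNAMIC default-pool texture is unlikely to be valid; the copy has to land in system memory first and can then be pushed back to a default-pool surface for the GPU-side manipulation. A rough sketch (gpuSurface is an assumed default-pool surface used for the later work):

        IDirect3DSurface9* sysMem = NULL;
        device->CreateOffscreenPlainSurface( width, height, D3DFMT_A8R8G8B8,
                                             D3DPOOL_SYSTEMMEM, &sysMem, NULL );
        device->GetFrontBufferData( 0, sysMem );                    // front buffer -> system memory

        device->UpdateSurface( sysMem, NULL, gpuSurface, NULL );    // system memory -> default pool
        sysMem->Release();

    If the content actually needed is your own backbuffer rather than the whole desktop, rendering the frame to a texture render target keeps everything in graphics memory and avoids the round trip entirely.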

    Read the article

  • How to set orthgraphic matrix for a 2d camera with zooming?

    - by MahanGM
    I'm using ID3DXSprite to draw my sprites and haven't set any kind of camera projection matrix. How do I set up an orthographic projection matrix for a camera in DirectX that supports zoom functionality? D3DXMATRIX orthographicMatrix; D3DXMATRIX identityMatrix; D3DXMatrixOrthoLH(&orthographicMatrix, nScreenWidth, nScreenHeight, 0.0f, 1.0f); D3DXMatrixIdentity(&identityMatrix); device->SetTransform(D3DTS_PROJECTION, &orthographicMatrix); device->SetTransform(D3DTS_WORLD, &identityMatrix); device->SetTransform(D3DTS_VIEW, &identityMatrix); This code is the initial setup. Then, for zooming, I multiply the zoom factor into nScreenWidth and nScreenHeight.
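    For the zoom itself, one common approach is to shrink the orthographic view volume as the zoom factor grows (divide rather than multiply), so zoom = 2 shows half as much world and everything appears twice as large; a sketch:

        // zoom = 1.0f is 1:1 pixels; larger zoom values magnify.
        D3DXMATRIX orthographicMatrix;
        D3DXMatrixOrthoLH( &orthographicMatrix,
                           nScreenWidth  / zoom,
                           nScreenHeight / zoom,
                           0.0f, 1.0f );
        device->SetTransform( D3DTS_PROJECTION, &orthographicMatrix );

    One caveat to verify: by default ID3DXSprite draws in screen space and ignores the device's projection matrix, so the sprites may need to be drawn with the D3DXSPRITE_OBJECTSPACE flag (or the zoom baked into ID3DXSprite::SetTransform) for the matrix above to have any visible effect.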

    Read the article

  • Is there any guarantee about the graphical output of different GPUs in DirectX?

    - by cloudraven
    Let's say that I run the same game on two different computers with different GPUs, and for example both are certified for DirectX 10. Is there a guarantee that the output for a given program (game) will be the same regardless of the manufacturer or model of the GPU? I am assuming the configurable settings are exactly the same in both cases. I heard that this is not the case for DirectX 9 and older, but that it is true for DirectX 10. If someone could provide a source confirming or denying it, that would be great. Also, what exactly is the guarantee offered? Will the output be exactly the same, or just perceptually the same to the human eye?

    Read the article

  • IDirect3DDevice9::GetRenderTargetData() returns no data

    - by P. Avery
    I've got a simple function to get the render target data of an RT (in the default pool). This particular RT has a resolution of 1x1 (it's the 10th and final mip of a texture). Here is my code to get the data from IDirect3DSurface9 *pTargetSurface: IDirect3DSurface9 *pSOS = NULL; pd3dDevice->CreateOffScreenPlainSurface( 1, 1, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &pSOS, NULL ); // get residual energy if( FAILED( hr = pd3dDevice->GetRenderTargetData( pTargetSurface, pSOS ) ) ) { DebugStringDX( ClassName, "Failed to IDirect3DDevice9::GetRenderTargetData() at DownsampleArea()", __LINE__, hr ); goto Exit; } // lock surface if( FAILED( hr = pSOS->LockRect( &rct, NULL, D3DLOCK_READONLY ) ) ) { DebugStringDX( ClassName, "Failed to IDirect3DSurface9::LockRect() at DownsampleArea()", __LINE__, hr ); goto Exit; } // get residual energy from downsampled texture pByte = ( BYTE* )rct.pBits; D3DXVECTOR4 vEnergy; vEnergy.z = ( float )pByte[ 0 ] / 255.0f; vEnergy.y = ( float )pByte[ 1 ] / 255.0f; vEnergy.x = ( float )pByte[ 2 ] / 255.0f; vEnergy.w = ( float )pByte[ 3 ] / 255.0f; V( pSOS->UnlockRect() ); All formatting and settings are correct, and DirectX in debug mode shows no errors. The problem is that the 4 bytes above are 0. I know this to be incorrect from debugging with PIX: PIX shows that the RGB values are 0.078 and alpha is 1. These values are not smaller than what can be represented by a single byte (1/255). Any ideas? Am I copying the render target data correctly?

    Read the article

  • How do I make camera move at same speed when rotating and moving forward

    - by dez
    I made a camera in DX9. To move forward I press the Up arrow. To rotate on the Y axis I use the mouse. When I perform these movements on their own the camera moves at the speed I want. However, if I hold down Up and move the mouse at the same time then the camera moves a lot faster than it should. I want it to move at the same speed as it does when only the Up arrow is pressed. I think I need to normalize something somewhere but not sure what and not sure where. Have tried various combinations without success so if anyone can point me in the right direction that would be great. Thanks. I've post code below. #define KEY_DOWN(vk_code) ((GetAsyncKeyState(vk_code) & 0x8000) ? 1 : 0) LRESULT WINAPI MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam ) { if( KEY_DOWN(VK_UP)) MovePlayer(D3DXVECTOR3(0, 0, -1.0f)); if( KEY_DOWN(VK_DOWN)) MovePlayer(D3DXVECTOR3(0, 0, 1.0f)); switch( msg ) { case WM_MOUSEMOVE: ProcessMouseInput(); } } void MovePlayer( D3DXVECTOR3 in_vec ) { D3DXMATRIX CameraRot; D3DXMatrixRotationY(&CameraRot,D3DXToRadian(AngleY)); D3DXVECTOR3 CameraRotTarget; D3DXVec3TransformNormal(&CameraRotTarget,&in_vec,&CameraRot); CameraPos += (m_timeElapsed * CameraRotTarget); } void ProcessMouseInput() { GetCursorPos( &CurrentMouseState ); if ((CurrentMouseState.x != GameMouseState.x) || (CurrentMouseState.y != GameMouseState.y)) { int dx = CurrentMouseState.x - GameMouseState.x; int dy = CurrentMouseState.y - GameMouseState.y; AngleY+=m_timeElapsed*dx*7.0f; } GameMouseState = CurrentMouseState; // Set back to window center in Render function } VOID UpdateCamera() { D3DXVECTOR3 CameraOrigTarget(0, 0, -1); D3DXVECTOR3 CameraOrigUp(0, 1, 0); D3DXMATRIX CameraRot; D3DXMATRIX CameraRotX; D3DXMatrixRotationX(&CameraRotX,D3DXToRadian(AngleX)); D3DXMATRIX CameraRotY; D3DXMatrixRotationY(&CameraRotY,D3DXToRadian(AngleY)); CameraRot = CameraRotX * CameraRotY; D3DXVECTOR3 CameraRotTarget; D3DXVec3TransformNormal(&CameraRotTarget,&CameraOrigTarget,&CameraRot); D3DXVECTOR3 CameraTarget; CameraTarget = CameraPos + CameraRotTarget; D3DXVECTOR3 vUpVec( 0.0f, 1.0f, 0.0f ); D3DXMatrixLookAtLH( &matView, &CameraPos, &CameraTarget, &vUpVec ); g_pd3dDevice->SetTransform( D3DTS_VIEW, &matView ); D3DXMatrixPerspectiveFovLH( &matProj, D3DX_PI / 4, 1.0f, 1.0f, 100.0f ); g_pd3dDevice->SetTransform( D3DTS_PROJECTION, &matProj ); }
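    A plausible cause, looking at where the input is read: MovePlayer() runs inside MsgProc, so it executes once for every message the window receives, and moving the mouse floods the queue with WM_MOUSEMOVE messages, each of which re-checks the arrow keys and moves the camera again. Polling the keys once per frame from the update path keeps the speed tied to m_timeElapsed alone; a rough sketch:

        // Called exactly once per frame from the main loop, not from the window procedure.
        void UpdateInput()
        {
            if( KEY_DOWN( VK_UP ) )
                MovePlayer( D3DXVECTOR3( 0, 0, -1.0f ) );
            if( KEY_DOWN( VK_DOWN ) )
                MovePlayer( D3DXVECTOR3( 0, 0, 1.0f ) );
        }

    MsgProc would then only handle WM_MOUSEMOVE (or the mouse could also be polled per frame), and holding Up while turning should no longer speed the camera up.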

    Read the article

  • How can I keep the correct alpha during rendering particles?

    - by April
    Recently, I was trying to save textures of 3D particles so that I can reuse them in 2D rendering. Now I have a problem with the alpha channel. An artist told me that my textures should have an unpremultiplied alpha channel. When I try to get the RGB values back, I get strange results: some areas go lighter and even turn totally white. I mainly focus on the additive and blend modes, that is: ADDITIVE: srcAlpha vs 1 BLEND: srcAlpha vs 1-srcAlpha I tried a technique called premultiplied alpha. This technique gives you the right RGB values, which is all you need on screen. As for the alpha value, it works well with BLEND mode, but not ADDITIVE mode. As you can see in the parameters, BLEND mode always keeps its value within 1, while ADDITIVE mode cannot guarantee that. I want a proper alpha, but it just gets too big or too small relative to the RGB. What can I do? Any help would be greatly appreciated. PS: If you don't understand what I am trying to do, there is a commercial software package called "Particle Illusion". You can create various particles and then save the scene to a texture, where you can choose to remove the background of the particles.
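    If the saved texture comes out premultiplied, recovering the straight (unpremultiplied) colour the artists asked for is a divide by alpha, and pixels with very small alpha are exactly where that divide blows up and turns white, which may be what is being seen here. A per-pixel sketch, assuming float components in 0..1:

        #include <algorithm>

        // Premultiplied -> straight alpha for one pixel.
        void Unpremultiply( float& r, float& g, float& b, float a )
        {
            if( a <= 0.001f )          // below this the colour is unrecoverable; leave it alone
                return;
            r = std::min( r / a, 1.0f );
            g = std::min( g / a, 1.0f );
            b = std::min( b / a, 1.0f );
        }

    For the additive case there is no meaningful straight alpha to recover, since that blend ignores destination coverage entirely; storing those particles premultiplied with alpha near 0 and compositing them additively is the usual compromise.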

    Read the article

  • How can I render multiple windows with DirectX 9 in C++?

    - by Friso1990
    I'm trying to render multiple windows, using DirectX 9 and swap chains, but even though I create 2 windows, I only see the first one that I've created. My RendererDX9 header is this: #include <d3d9.h> #include <Windows.h> #include <vector> #include "RAT_Renderer.h" namespace RAT_ENGINE { class RAT_RendererDX9 : public RAT_Renderer { public: RAT_RendererDX9(); ~RAT_RendererDX9(); void Init(RAT_WindowManager* argWMan); void CleanUp(); void ShowWin(); private: LPDIRECT3D9 renderInterface; // Used to create the D3DDevice LPDIRECT3DDEVICE9 renderDevice; // Our rendering device LPDIRECT3DSWAPCHAIN9* swapChain; // Swapchain to make multi-window rendering possible WNDCLASSEX wc; std::vector<HWND> hwindows; void Render(int argI); }; } And my .cpp file is this: #include "RAT_RendererDX9.h" static LRESULT CALLBACK MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam ); namespace RAT_ENGINE { RAT_RendererDX9::RAT_RendererDX9() : renderInterface(NULL), renderDevice(NULL) { } RAT_RendererDX9::~RAT_RendererDX9() { } void RAT_RendererDX9::Init(RAT_WindowManager* argWMan) { wMan = argWMan; // Register the window class WNDCLASSEX windowClass = { sizeof( WNDCLASSEX ), CS_CLASSDC, MsgProc, 0, 0, GetModuleHandle( NULL ), NULL, NULL, NULL, NULL, "foo", NULL }; wc = windowClass; RegisterClassEx( &wc ); for (int i = 0; i< wMan->getWindows().size(); ++i) { HWND hWnd = CreateWindow( "foo", argWMan->getWindow(i)->getName().c_str(), WS_OVERLAPPEDWINDOW, argWMan->getWindow(i)->getX(), argWMan->getWindow(i)->getY(), argWMan->getWindow(i)->getWidth(), argWMan->getWindow(i)->getHeight(), NULL, NULL, wc.hInstance, NULL ); hwindows.push_back(hWnd); } // Create the D3D object, which is needed to create the D3DDevice. renderInterface = (LPDIRECT3D9)Direct3DCreate9( D3D_SDK_VERSION ); // Set up the structure used to create the D3DDevice. Most parameters are // zeroed out. We set Windowed to TRUE, since we want to do D3D in a // window, and then set the SwapEffect to "discard", which is the most // efficient method of presenting the back buffer to the display. And // we request a back buffer format that matches the current desktop display // format. D3DPRESENT_PARAMETERS deviceConfig; ZeroMemory( &deviceConfig, sizeof( deviceConfig ) ); deviceConfig.Windowed = TRUE; deviceConfig.SwapEffect = D3DSWAPEFFECT_DISCARD; deviceConfig.BackBufferFormat = D3DFMT_UNKNOWN; deviceConfig.BackBufferHeight = 1024; deviceConfig.BackBufferWidth = 768; deviceConfig.EnableAutoDepthStencil = TRUE; deviceConfig.AutoDepthStencilFormat = D3DFMT_D16; // Create the Direct3D device. Here we are using the default adapter (most // systems only have one, unless they have multiple graphics hardware cards // installed) and requesting the HAL (which is saying we want the hardware // device rather than a software one). Software vertex processing is // specified since we know it will work on all cards. On cards that support // hardware vertex processing, though, we would see a big performance gain // by specifying hardware vertex processing. 
renderInterface->CreateDevice( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwindows[0], D3DCREATE_SOFTWARE_VERTEXPROCESSING, &deviceConfig, &renderDevice ); this->swapChain = new LPDIRECT3DSWAPCHAIN9[wMan->getWindows().size()]; this->renderDevice->GetSwapChain(0, &swapChain[0]); for (int i = 0; i < wMan->getWindows().size(); ++i) { renderDevice->CreateAdditionalSwapChain(&deviceConfig, &swapChain[i]); } renderDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW); // Set cullmode to counterclockwise culling to save resources renderDevice->SetRenderState(D3DRS_AMBIENT, 0xffffffff); // Turn on ambient lighting renderDevice->SetRenderState(D3DRS_ZENABLE, TRUE); // Turn on the zbuffer } void RAT_RendererDX9::CleanUp() { renderDevice->Release(); renderInterface->Release(); } void RAT_RendererDX9::Render(int argI) { // Clear the backbuffer to a blue color renderDevice->Clear( 0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB( 0, 0, 255 ), 1.0f, 0 ); LPDIRECT3DSURFACE9 backBuffer = NULL; // Set draw target this->swapChain[argI]->GetBackBuffer(0, D3DBACKBUFFER_TYPE_MONO, &backBuffer); this->renderDevice->SetRenderTarget(0, backBuffer); // Begin the scene renderDevice->BeginScene(); // End the scene renderDevice->EndScene(); swapChain[argI]->Present(NULL, NULL, hwindows[argI], NULL, 0); } void RAT_RendererDX9::ShowWin() { for (int i = 0; i < wMan->getWindows().size(); ++i) { ShowWindow( hwindows[i], SW_SHOWDEFAULT ); UpdateWindow( hwindows[i] ); // Enter the message loop MSG msg; while( GetMessage( &msg, NULL, 0, 0 ) ) { if (PeekMessage( &msg, NULL, 0U, 0U, PM_REMOVE ) ) { TranslateMessage( &msg ); DispatchMessage( &msg ); } else { Render(i); } } } } } LRESULT CALLBACK MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam ) { switch( msg ) { case WM_DESTROY: //CleanUp(); PostQuitMessage( 0 ); return 0; case WM_PAINT: //Render(); ValidateRect( hWnd, NULL ); return 0; } return DefWindowProc( hWnd, msg, wParam, lParam ); } I've made a sample function to make multiple windows: void RunSample1() { //Create the window manager. RAT_ENGINE::RAT_WindowManager* wMan = new RAT_ENGINE::RAT_WindowManager(); //Create the render manager. RAT_ENGINE::RAT_RenderManager* rMan = new RAT_ENGINE::RAT_RenderManager(); //Create a window. //This is currently needed to initialize the render manager and create a renderer. wMan->CreateRATWindow("Sample 1 - 1", 10, 20, 640, 480); wMan->CreateRATWindow("Sample 1 - 2", 150, 100, 480, 640); //Initialize the render manager. rMan->Init(wMan); //Show the window. rMan->getRenderer()->ShowWin(); } How do I get the multiple windows to work?
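    One structural issue stands out as a likely cause: ShowWin() runs a blocking GetMessage() loop for window 0 before window 1 is ever shown, and Render() is only reached when PeekMessage finds nothing, so the second swap chain never gets presented while the first window is alive. Showing all the windows first and driving a single non-blocking loop that presents every swap chain each iteration is the usual shape; a sketch:

        void RAT_RendererDX9::ShowWin()
        {
            // Show every window up front.
            for( size_t i = 0; i < hwindows.size(); ++i )
            {
                ShowWindow( hwindows[ i ], SW_SHOWDEFAULT );
                UpdateWindow( hwindows[ i ] );
            }

            // One message loop drives all swap chains.
            MSG msg;
            ZeroMemory( &msg, sizeof( msg ) );
            while( msg.message != WM_QUIT )
            {
                if( PeekMessage( &msg, NULL, 0U, 0U, PM_REMOVE ) )
                {
                    TranslateMessage( &msg );
                    DispatchMessage( &msg );
                }
                else
                {
                    for( size_t i = 0; i < hwindows.size(); ++i )
                        Render( ( int )i );
                }
            }
        }

    A smaller detail worth checking afterwards: the loop that calls CreateAdditionalSwapChain also runs for i = 0, overwriting the implicit swap chain fetched with GetSwapChain(0, ...) and creating one more chain than needed.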

    Read the article

  • How can I keep straight alpha during rendering particles?

    - by April
    Recently, I was trying to save textures of 3D particles so that I can reuse them in 2D rendering. Now I have a problem with the alpha channel. An artist told me that my textures should have an unpremultiplied alpha channel. When I try to get the RGB values back, I get strange results: some areas go lighter and even turn totally white. I mainly focus on the additive and blend modes, that is: ADDITIVE: srcAlpha vs 1 BLEND: srcAlpha vs 1-srcAlpha I tried a technique called premultiplied alpha. This technique gives you the right RGB values, which is all you need on screen. As for the alpha value, it works well with BLEND mode, but not ADDITIVE mode. As you can see in the parameters, BLEND mode always keeps its value within 1, while ADDITIVE mode cannot guarantee that. I want a proper alpha, but it just gets too big or too small relative to the RGB. What can I do? Any help would be greatly appreciated. PS: If you don't understand what I am trying to do, there is a commercial software package called "Particle Illusion". You can create various particles and then save the scene to a texture, where you can choose to remove the background of the particles. Now, I have changed the title. For software like Maya or After Effects, what I want is called [straight alpha].

    Read the article

  • How to update a mesh position base on a pressed key?

    - by steven166
    I have a mesh loaded from a file, like a tiger mesh. At first it is located at position A; then if I press the left key, it moves to position B. The problem is that if I press the left key one more time, it needs to move from position B to position C. In other words, the amount I want to move the mesh is based on its current position instead of the position where it was first rendered. I could do this if I had an array of vertices, because then I would just update the vertex buffer, but a mesh loaded from a file does not expose an array of vertices, so how do I do it? Can anybody help me, please?
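    A mesh loaded with D3DXLoadMeshFromX does not need its vertices touched to move: the usual approach is to keep a position vector that the key handler updates, rebuild a world matrix from it each frame, and set that matrix before drawing. A sketch with assumed variable names:

        // Updated by the input handling (e.g. on the left key).
        D3DXVECTOR3 meshPos( 0.0f, 0.0f, 0.0f );
        meshPos.x -= 1.0f;                      // move relative to the current position

        // Each frame, before drawing:
        D3DXMATRIX world;
        D3DXMatrixTranslation( &world, meshPos.x, meshPos.y, meshPos.z );
        device->SetTransform( D3DTS_WORLD, &world );
        mesh->DrawSubset( 0 );

    Because meshPos persists between key presses, the second press naturally moves the mesh from B to C rather than back to the load-time position.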

    Read the article

< Previous Page | 1 2 3  | Next Page >