Search Results

  • HLSL Pixel Shader that does palette swap

    - by derrace
    I have implemented a simple pixel shader which can replace a particular colour in a sprite with another colour. It looks something like this: sampler input : register(s0); float4 PixelShaderFunction(float2 coords: TEXCOORD0) : COLOR0 { float4 colour = tex2D(input, coords); if(colour.r == sourceColours[0].r && colour.g == sourceColours[0].g && colour.b == sourceColours[0].b) return targetColours[0]; return colour; } What I would like to do is have the function take in 2 textures, a default table and a lookup table (both the same dimensions). Grab the current pixel, find the location XY (coords) of the matching RGB in the default table, and then substitute it with the colour found in the lookup table at XY. I have figured out how to pass the textures from C# into the function, but I am not sure how to find the coords in the default table by matching the colour. Could someone kindly assist? Thanks in advance.
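
    One hedged note on the search step: a pixel shader cannot efficiently scan a texture for a matching colour, so the lookup is usually precomputed instead. The HLSL sketch below assumes a hypothetical index texture, built offline or on the CPU, in which each sprite texel stores the UV of its colour's cell in the default table; the shader then needs only a single dependent read:

        // Hypothetical sketch: spriteIndex is an assumed precomputed map whose
        // RG channels hold the UV of this pixel's colour in the tables.
        sampler spriteIndex : register(s1);
        sampler lookupTable : register(s2);

        float4 PixelShaderFunction(float2 coords : TEXCOORD0) : COLOR0
        {
            float2 tableUV = tex2D(spriteIndex, coords).rg; // where the colour sits in the default table
            return tex2D(lookupTable, tableUV);             // same cell, read from the lookup table
        }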

    Read the article

  • CSM DX11 issues

    - by KaiserJohaan
    I got CSM to work in OpenGL, and now I'm trying to do the same in DirectX. I'm using the same math library and I'm pretty much using the algorithm straight off. I am using right-handed, column-major matrices from GLM. The light is looking along (-1, -1, -1). The problem I have is twofold: for some reason, the ground floor is causing a lot of (false) shadow artifacts, like the vast shadowed area you see. I confirmed this when I disabled the ground for the depth pass, but that's a hack more than anything else. Also, the shadows are inverted compared to the shadowmap; if you squint you can see the chair's shadows should be mirrored instead. This is the first cascade shadow map, in range of the alien and the chair: I can't figure out why this is. This is the depth pass: for (uint32_t cascadeIndex = 0; cascadeIndex < NUM_SHADOWMAP_CASCADES; cascadeIndex++) { mShadowmap.BindDepthView(context, cascadeIndex); CameraFrustrum cameraFrustrum = CalculateCameraFrustrum(degreesFOV, aspectRatio, nearDistArr[cascadeIndex], farDistArr[cascadeIndex], cameraViewMatrix); lightVPMatrices[cascadeIndex] = CreateDirLightVPMatrix(cameraFrustrum, lightDir); mVertexTransformPass.RenderMeshes(context, renderQueue, meshes, lightVPMatrices[cascadeIndex]); lightVPMatrices[cascadeIndex] = gBiasMatrix * lightVPMatrices[cascadeIndex]; farDistArr[cascadeIndex] = -farDistArr[cascadeIndex]; } CameraFrustrum CalculateCameraFrustrum(const float fovDegrees, const float aspectRatio, const float minDist, const float maxDist, const Mat4& cameraViewMatrix) { CameraFrustrum ret = { Vec4(1.0f, 1.0f, -1.0f, 1.0f), Vec4(1.0f, -1.0f, -1.0f, 1.0f), Vec4(-1.0f, -1.0f, -1.0f, 1.0f), Vec4(-1.0f, 1.0f, -1.0f, 1.0f), Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f), }; const Mat4 perspectiveMatrix = PerspectiveMatrixFov(fovDegrees, aspectRatio, minDist, maxDist); const Mat4 invMVP = glm::inverse(perspectiveMatrix * cameraViewMatrix); for (Vec4& corner : ret) { corner = invMVP * corner; corner /= corner.w; } return ret; } Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir) { Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, -1.0f, 0.0f)); Vec4 transf = lightViewMatrix * cameraFrustrum[0]; float maxZ = transf.z, minZ = transf.z; float maxX = transf.x, minX = transf.x; float maxY = transf.y, minY = transf.y; for (uint32_t i = 1; i < 8; i++) { transf = lightViewMatrix * cameraFrustrum[i]; if (transf.z > maxZ) maxZ = transf.z; if (transf.z < minZ) minZ = transf.z; if (transf.x > maxX) maxX = transf.x; if (transf.x < minX) minX = transf.x; if (transf.y > maxY) maxY = transf.y; if (transf.y < minY) minY = transf.y; } Mat4 viewMatrix(lightViewMatrix); viewMatrix[3][0] = -(minX + maxX) * 0.5f; viewMatrix[3][1] = -(minY + maxY) * 0.5f; viewMatrix[3][2] = -(minZ + maxZ) * 0.5f; viewMatrix[0][3] = 0.0f; viewMatrix[1][3] = 0.0f; viewMatrix[2][3] = 0.0f; viewMatrix[3][3] = 1.0f; Vec3 halfExtents((maxX - minX) * 0.5, (maxY - minY) * 0.5, (maxZ - minZ) * 0.5); return OrthographicMatrix(-halfExtents.x, halfExtents.x, -halfExtents.y, halfExtents.y, halfExtents.z, -halfExtents.z) * viewMatrix; } And this is the pixel shader used for the lighting stage: #define DEPTH_BIAS 0.0005 #define NUM_CASCADES 4 cbuffer DirectionalLightConstants : register(CBUFFER_REGISTER_PIXEL) { float4x4 gSplitVPMatrices[NUM_CASCADES]; float4x4 gCameraViewMatrix; float4 gSplitDistances; float4 gLightColor; float4 gLightDirection; }; Texture2D gPositionTexture : register(TEXTURE_REGISTER_POSITION); Texture2D gDiffuseTexture : register(TEXTURE_REGISTER_DIFFUSE); Texture2D gNormalTexture : register(TEXTURE_REGISTER_NORMAL); Texture2DArray gShadowmap : register(TEXTURE_REGISTER_DEPTH); SamplerComparisonState gShadowmapSampler : register(SAMPLER_REGISTER_DEPTH); float4 ps_main(float4 position : SV_Position) : SV_Target0 { float4 worldPos = gPositionTexture[uint2(position.xy)]; float4 diffuse = gDiffuseTexture[uint2(position.xy)]; float4 normal = gNormalTexture[uint2(position.xy)]; float4 camPos = mul(gCameraViewMatrix, worldPos); uint index = 3; if (camPos.z > gSplitDistances.x) index = 0; else if (camPos.z > gSplitDistances.y) index = 1; else if (camPos.z > gSplitDistances.z) index = 2; float3 projCoords = (float3)mul(gSplitVPMatrices[index], worldPos); float viewDepth = projCoords.z - DEPTH_BIAS; projCoords.z = float(index); float visibility = gShadowmap.SampleCmpLevelZero(gShadowmapSampler, projCoords, viewDepth); float angleNormal = clamp(dot(normal, gLightDirection), 0, 1); return visibility * diffuse * angleNormal * gLightColor; } As you can see, I am using a depth bias and a bias matrix. Any hints on why this behaves so weirdly?
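
    Two hedged things to check: glm::lookAt with an up vector of (0.0f, -1.0f, 0.0f) mirrors light space vertically, which could account for the flipped shadows; and a common mitigation for large false-shadow regions cast by big ground planes is to cull front faces during the depth pass, so only back faces reach the shadow map. A D3D11 sketch of the latter, assuming device and context are your ID3D11Device/ID3D11DeviceContext:

        // Rasterizer state used only for the depth pass: render back faces into
        // the shadow map, which moves self-shadowing off the lit surfaces.
        D3D11_RASTERIZER_DESC rsDesc = {};
        rsDesc.FillMode = D3D11_FILL_SOLID;
        rsDesc.CullMode = D3D11_CULL_FRONT;   // cull front faces, keep back faces
        rsDesc.DepthClipEnable = TRUE;
        ID3D11RasterizerState* shadowRasterState = nullptr;
        device->CreateRasterizerState(&rsDesc, &shadowRasterState);
        context->RSSetState(shadowRasterState);  // set before the depth pass, restore after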

    Read the article

  • How should I unbind and delete OpenAL buffers?

    - by Joe Wreschnig
    I'm using OpenAL to play sounds. I'm trying to implement a fire-and-forget play function that takes a buffer ID, assigns it to a source from a pool I have previously allocated, and plays it. However, there is a problem with object lifetimes. In OpenGL, delete functions either automatically unbind things (e.g. textures), or automatically delete the thing when it eventually is unbound (e.g. shaders), and so it's usually easy to manage deletion. However, alDeleteBuffers instead simply fails with AL_INVALID_OPERATION if the buffer is still bound to a source. Is there an idiomatic way to "delete" OpenAL buffers that allows them to finish playing, and then automatically unbinds and really deletes them? Do I need to tie buffer management more deeply into the source pool (e.g. deleting a buffer requires checking all the allocated sources also)? Similarly, is there an idiomatic way to unbind (but not delete) buffers when they are finished playing? It would be nice if, when I was looking for a free source, I only needed to see if a buffer was attached at all and not bother checking the source state. (I'm using C++, although approaches for C are also fine. Approaches assuming a GC'd language and using finalizers are probably not applicable.)
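
    A minimal sketch of one common approach, assuming a per-frame poll of the source pool: detach buffers from stopped sources by setting AL_BUFFER to AL_NONE, then retry deletion of any buffers queued for destruction (the function name and containers here are made up):

        #include <AL/al.h>
        #include <vector>

        // Called once per frame. 'sources' is the allocated pool; 'doomed' holds
        // buffers whose owners have already requested deletion.
        void CollectFinishedBuffers(const std::vector<ALuint>& sources, std::vector<ALuint>& doomed)
        {
            for (ALuint src : sources)
            {
                ALint state = 0;
                alGetSourcei(src, AL_SOURCE_STATE, &state);
                if (state == AL_STOPPED)
                    alSourcei(src, AL_BUFFER, AL_NONE); // unbind so the buffer becomes deletable
            }
            for (auto it = doomed.begin(); it != doomed.end();)
            {
                alGetError();                 // clear the error state
                alDeleteBuffers(1, &*it);
                if (alGetError() == AL_NO_ERROR)
                    it = doomed.erase(it);    // gone for real
                else
                    ++it;                     // still bound to some source; retry next frame
            }
        }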

    Read the article

  • Strange if-else branching behavior in a fragment shader

    - by Winged
    In my fragment shader I have passed a uniform int uLightType variable, which indicates what type of light is in use right now. The problem is that the if-else branching does not work correctly - the fragment shader performs the instructions in every if statement block. if (uLightType == 1) { // Spotlight light type vec3 depthTextureCoord = vDepthPosition.xyz / vDepthPosition.w; shadowDepth = unpack(texture2D(uDepthMapSampler, depthTextureCoord.xy)); } else if (uLightType == 2) { // Omni-directional light type shadowDepth = unpack(textureCube(uDepthCubemapSampler, -lightVec)); } In the case when uLightType equals 1, unless I comment out the content of the second if block, it assigns another value to shadowDepth. Also, while uLightType equals 1, when I remove the second 'if' block and change == to != like in the sample code below, nothing happens (which means that uLightType really equals 1). if (uLightType != 1) { // Spotlight light type vec3 depthTextureCoord = vDepthPosition.xyz / vDepthPosition.w; shadowDepth = unpack(texture2D(uDepthMapSampler, depthTextureCoord.xy)); } Also, when I manually create an int variable (which is not a uniform) like this: var lightType = 1; and replace uLightType with it in the if-else branch, everything works fine, so I guess it has something to do with the fact that uLightType is a uniform.
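
    One hedged workaround: some GLSL implementations do evaluate both sides of a branch, and texture fetches with implicit derivatives inside non-uniform control flow are problematic on many GPUs. A common pattern is to sample both maps unconditionally and select afterwards; a sketch reusing the question's names:

        // Sketch: hoist both fetches out of the branch, then pick one result.
        vec3 depthTextureCoord = vDepthPosition.xyz / vDepthPosition.w;
        float spotDepth = unpack(texture2D(uDepthMapSampler, depthTextureCoord.xy));
        float omniDepth = unpack(textureCube(uDepthCubemapSampler, -lightVec));
        shadowDepth = (uLightType == 1) ? spotDepth : omniDepth;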

    Read the article

  • Multiple Sprites using foreach Collision Detection in XNA (C#)

    - by Bradley Kreuger
    Back again after my last question. I was curious, since I use a foreach statement to reuse the same shot class: how would I go about doing collision detection? I used the tutorial here on how to shoot a fireball http://www.xnadevelopment.com/tutorials.shtml. I tried putting a foreach in several places to look at all of the shots and see if they have reached the borders of my sprite hero, but it doesn't seem to do anything. Again, if someone knows of a good site with tutorials that explain collision detection a little better, that would be appreciated.
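
    A minimal sketch of the usual XNA pattern, assuming each shot exposes an axis-aligned bounding Rectangle (the Fireball type and the property names here are made up):

        // Rectangle.Intersects is XNA's built-in axis-aligned overlap test.
        foreach (Fireball fireball in fireballs)
        {
            if (fireball.BoundingBox.Intersects(hero.BoundingBox))
            {
                fireball.Visible = false;  // despawn the shot
                hero.TakeDamage();         // hypothetical reaction to the hit
            }
        }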

    Read the article

  • Normal map applied as diffuse texture looks wrong

    - by KaiserJohaan
    Diffuse textures work fine, but I am having problems with normal maps, so I thought I'd try applying the normal map as the diffuse map in my fragment shader so I could see that everything is OK. I commented out my normal map code and just set the diffuse map to the normal map, and I get this: http://postimg.org/image/j9gudjl7r/ Looks like a smurf! This is the actual normal map of the main body: http://postimg.org/image/sbkyr6fg9/ Here is my fragment shader; notice I commented out the normal map code so I could debug the normal map as a diffuse texture "#version 330 \n \ \n \ layout(std140) uniform; \n \ \n \ const int MAX_LIGHTS = 8; \n \ \n \ struct Light \n \ { \n \ vec4 mLightColor; \n \ vec4 mLightPosition; \n \ vec4 mLightDirection; \n \ \n \ int mLightType; \n \ float mLightIntensity; \n \ float mLightRadius; \n \ float mMaxDistance; \n \ }; \n \ \n \ uniform UnifLighting \n \ { \n \ vec4 mGamma; \n \ vec3 mViewDirection; \n \ int mNumLights; \n \ \n \ Light mLights[MAX_LIGHTS]; \n \ } Lighting; \n \ \n \ uniform UnifMaterial \n \ { \n \ vec4 mDiffuseColor; \n \ vec4 mAmbientColor; \n \ vec4 mSpecularColor; \n \ vec4 mEmissiveColor; \n \ \n \ bool mHasDiffuseTexture; \n \ bool mHasNormalTexture; \n \ bool mLightingEnabled; \n \ float mSpecularShininess; \n \ } Material; \n \ \n \ uniform sampler2D unifDiffuseTexture; \n \ uniform sampler2D unifNormalTexture; \n \ \n \ in vec3 frag_position; \n \ in vec3 frag_normal; \n \ in vec2 frag_texcoord; \n \ in vec3 frag_tangent; \n \ in vec3 frag_bitangent; \n \ \n \ out vec4 finalColor; " " \n \ \n \ void CalcGaussianSpecular(in vec3 dirToLight, in vec3 normal, out float gaussianTerm) \n \ { \n \ vec3 viewDirection = normalize(Lighting.mViewDirection); \n \ vec3 halfAngle = normalize(dirToLight + viewDirection); \n \ \n \ float angleNormalHalf = acos(dot(halfAngle, normalize(normal))); \n \ float exponent = angleNormalHalf / Material.mSpecularShininess; \n \ exponent = -(exponent * exponent); \n \ \n \ gaussianTerm = exp(exponent); \n \ } \n \ \n \ vec4 CalculateLighting(in Light light, in vec4 diffuseTexture, in vec3 normal) \n \ { \n \ if (light.mLightType == 1) // point light \n \ { \n \ vec3 positionDiff = light.mLightPosition.xyz - frag_position; \n \ float dist = max(length(positionDiff) - light.mLightRadius, 0); \n \ \n \ float attenuation = 1 / ((dist/light.mLightRadius + 1) * (dist/light.mLightRadius + 1)); \n \ attenuation = max((attenuation - light.mMaxDistance) / (1 - light.mMaxDistance), 0); \n \ \n \ vec3 dirToLight = normalize(positionDiff); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (attenuation * angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (attenuation * gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if (light.mLightType == 2) // directional light \n \ { \n \ vec3 dirToLight = normalize(light.mLightDirection.xyz); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if (light.mLightType == 4) // ambient light \n \ return diffuseTexture * Material.mAmbientColor * light.mLightIntensity * light.mLightColor; \n \ else \n \ return vec4(0.0); \n \ } \n \ \n \ void main() \n \ { \n \ vec4 diffuseTexture = vec4(1.0); \n \ if (Material.mHasDiffuseTexture) \n \ diffuseTexture = texture(unifDiffuseTexture, frag_texcoord); \n \ \n \ vec3 normal = frag_normal; \n \ if (Material.mHasNormalTexture) \n \ { \n \ diffuseTexture = vec4(normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0), 1.0); \n \ // vec3 normalTangentSpace = normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0); \n \ //mat3 tangentToWorldSpace = mat3(normalize(frag_tangent), normalize(frag_bitangent), normalize(frag_normal)); \n \ \n \ // normal = tangentToWorldSpace * normalTangentSpace; \n \ } \n \ \n \ if (Material.mLightingEnabled) \n \ { \n \ vec4 accumLighting = vec4(0.0); \n \ \n \ for (int lightIndex = 0; lightIndex < Lighting.mNumLights; lightIndex++) \n \ accumLighting += Material.mEmissiveColor * diffuseTexture + \n \ CalculateLighting(Lighting.mLights[lightIndex], diffuseTexture, normal); \n \ \n \ finalColor = pow(accumLighting, Lighting.mGamma); \n \ } \n \ else { \n \ finalColor = pow(diffuseTexture, Lighting.mGamma); \n \ } \n \ } \n"; Here is my wrapper around a texture OpenGLTexture::OpenGLTexture(const std::vector<uint8_t>& textureData, uint32_t textureWidth, uint32_t textureHeight, TextureFormat textureFormat, TextureType textureType, Logger& logger) : mLogger(logger), mTextureID(gNextTextureID++), mTextureType(textureType) { glGenTextures(1, &mTexture); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, mTexture); CHECK_GL_ERROR(mLogger); GLint glTextureFormat = (textureFormat == TextureFormat::TEXTURE_FORMAT_RGB ? GL_RGB : textureFormat == TextureFormat::TEXTURE_FORMAT_RGBA ? GL_RGBA : GL_RED); glTexImage2D(GL_TEXTURE_2D, 0, glTextureFormat, textureWidth, textureHeight, 0, glTextureFormat, GL_UNSIGNED_BYTE, &textureData[0]); CHECK_GL_ERROR(mLogger); glGenerateMipmap(GL_TEXTURE_2D); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, 0); CHECK_GL_ERROR(mLogger); } OpenGLTexture::~OpenGLTexture() { glDeleteBuffers(1, &mTexture); CHECK_GL_ERROR(mLogger); } And here is the sampler I create, which is shared between diffuse and normal textures // texture sampler setup glGenSamplers(1, &mTextureSampler); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_REPEAT); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_REPEAT); CHECK_GL_ERROR(mLogger); glSamplerParameterf(mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy); CHECK_GL_ERROR(mLogger); glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifDiffuseTexture"), OpenGLTexture::TEXTURE_UNIT_DIFFUSE); CHECK_GL_ERROR(mLogger); glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifNormalTexture"), OpenGLTexture::TEXTURE_UNIT_NORMAL); CHECK_GL_ERROR(mLogger); glBindSampler(OpenGLTexture::TEXTURE_UNIT_DIFFUSE, mTextureSampler); CHECK_GL_ERROR(mLogger); glBindSampler(OpenGLTexture::TEXTURE_UNIT_NORMAL, mTextureSampler); CHECK_GL_ERROR(mLogger); SetAnisotropicFiltering(mCurrentAnisotropy); The diffuse textures look like they should, but the normal map looks weird. Why is this?
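
    For what it's worth, a tangent-space normal map is mostly pale blue, around (0.5, 0.5, 1.0), because most normals point out of the surface; sampling it as a diffuse texture is therefore expected to look blue, and the remapping to [-1, 1] in the debug path makes it look stranger still. One incidental bug worth fixing regardless: textures are not buffer objects, so the destructor needs glDeleteTextures. A sketch:

        OpenGLTexture::~OpenGLTexture()
        {
            // glDeleteBuffers targets VBO/IBO names; texture names need glDeleteTextures.
            glDeleteTextures(1, &mTexture);
            CHECK_GL_ERROR(mLogger);
        }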

    Read the article

  • Collision filtering techniques

    - by Griffin
    I was wondering what efficient techniques are out there for mapping collision filtering between various bodies, sub-bodies, and so forth. I'm familiar with the simple idea of having different layers of 2D bodies, but this is not sufficient for more complex mapping. (Think of having sub-bodies of a body, such as limbs, collide with each other by placing them on the same layer, and then wanting only the legs to collide with the ground while the arms would not.) This can be solved with a multidimensional layer setup, but I would probably end up creating more and more layers, to the point where the simplicity and efficiency of layer filtering would be gone. Are there any more complex ways to solve even more complex situations than this?
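
    One standard technique, essentially what Box2D's collision filtering does: give each body a category bit and a mask of categories it may touch, so the limb/ground case needs no layer explosion. A minimal sketch:

        #include <cstdint>

        struct Filter {
            std::uint32_t category; // what this body is (one bit)
            std::uint32_t mask;     // what this body may collide with
        };

        bool ShouldCollide(const Filter& a, const Filter& b)
        {
            // Both sides must agree before the pair is even narrow-phase tested.
            return (a.category & b.mask) != 0 && (b.category & a.mask) != 0;
        }

        enum : std::uint32_t { GROUND = 1 << 0, ARM = 1 << 1, LEG = 1 << 2 };
        // Filter leg{ LEG, GROUND | ARM | LEG }; // legs hit ground and other limbs
        // Filter arm{ ARM, ARM | LEG };          // arms hit limbs but never the ground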

    Read the article

  • Corona SDK: Quality of support and resources?

    - by Nick Wiggill
    I've not used Corona SDK before and am looking into it for a friend. (He is also considering Unity.) I wonder what the support is like for Corona? While Unity has a great many customers and so the Unity team can often not address issues directly, the community at large is very helpful and there are many excellent resources: tutorials, forum posts, code resources. What is Corona like in this regard, and by comparison?

    Read the article

  • How to make unit selection circles merge?

    - by MaT
    I would like to know how to make this effect of merged circle selection. Here are images to illustrate: Basically I'm looking for this effect: How can the merge effect of the circles be achieved? I haven't found any explanation of this effect. I know that to project those textures I can develop a decal system, but I don't know how to create the merging effect. If possible, I'm looking for a purely shader-based solution.
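
    A hedged sketch of the usual shader-side trick: render every selection circle as a soft radial falloff into an offscreen target, then threshold the summed field in a second pass; wherever two circles overlap, the field stays above the threshold and the outlines fuse, metaball style. The uniform names and threshold values below are illustrative assumptions:

        // Second pass: uAccum holds the additive sum of per-unit radial falloffs.
        uniform sampler2D uAccum;
        varying vec2 vUV;

        void main()
        {
            float field = texture2D(uAccum, vUV).r;
            // Keep a band of the field as the ring; the interior stays transparent.
            float ring = step(0.5, field) - step(0.8, field);
            gl_FragColor = vec4(0.2, 0.9, 0.3, ring);
        }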

    Read the article

  • Algorithm to find average position

    - by Simran kaur
    In the given diagram, I have the extreme left and right points, that is -2 and 4 in this case. So, obviously, I can calculate the width, which is 6 in this case. What we know: the number of partitions (3 in this case), and the partition number at any point, i.e. which one is the first, second or third partition (numbered starting from the left). What I want: the position of the purple line drawn, which is the position of the average of a particular partition. So, basically, I just want a generalized formula to calculate the position of the average of any partition.
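
    Assuming equal-width partitions and that the "average" is the midpoint of a partition, the position follows directly from the width. A small sketch using the question's numbers:

        #include <cstdio>

        // Midpoint ("average") of the i-th of n equal partitions spanning [left, right].
        double partitionAverage(double left, double right, int n, int i /* 1-based */)
        {
            const double width = (right - left) / n;  // 6 / 3 = 2 here
            return left + (i - 0.5) * width;
        }

        int main()
        {
            for (int i = 1; i <= 3; ++i)
                std::printf("partition %d average: %g\n", i, partitionAverage(-2, 4, 3, i));
            // prints -1, 1 and 3
        }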

    Read the article

  • CSM shadow errors when models are split

    - by KaiserJohaan
    I'm getting closer to fixing CSM, but there seems to be one more issue at hand. At certain angles, the models will be caught/split between two shadow map cascades, like below. first depth split second depth split - here you can see the model is caught between the splits How does one fix this? Increase the overlapping boundaries between the splits? Or is the frustum erroneous? CameraFrustrum CalculateCameraFrustrum(const float fovDegrees, const float aspectRatio, const float minDist, const float maxDist, const Mat4& cameraViewMatrix, Mat4& outFrustrumMat) { CameraFrustrum ret = { Vec4(1.0f, -1.0f, 0.0f, 1.0f), Vec4(1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, -1.0f, 0.0f, 1.0f), Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f), }; const Mat4 perspectiveMatrix = PerspectiveMatrixFov(fovDegrees, aspectRatio, minDist, maxDist); const Mat4 invMVP = glm::inverse(perspectiveMatrix * cameraViewMatrix); outFrustrumMat = invMVP; for (Vec4& corner : ret) { corner = invMVP * corner; corner /= corner.w; } return ret; } Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir) { Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, -1.0f, 0.0f)); Vec4 transf = lightViewMatrix * cameraFrustrum[0]; float maxZ = transf.z, minZ = transf.z; float maxX = transf.x, minX = transf.x; float maxY = transf.y, minY = transf.y; for (uint32_t i = 1; i < 8; i++) { transf = lightViewMatrix * cameraFrustrum[i]; if (transf.z > maxZ) maxZ = transf.z; if (transf.z < minZ) minZ = transf.z; if (transf.x > maxX) maxX = transf.x; if (transf.x < minX) minX = transf.x; if (transf.y > maxY) maxY = transf.y; if (transf.y < minY) minY = transf.y; } Mat4 viewMatrix(lightViewMatrix); viewMatrix[3][0] = -(minX + maxX) * 0.5f; viewMatrix[3][1] = -(minY + maxY) * 0.5f; viewMatrix[3][2] = -(minZ + maxZ) * 0.5f; viewMatrix[0][3] = 0.0f; viewMatrix[1][3] = 0.0f; viewMatrix[2][3] = 0.0f; viewMatrix[3][3] = 1.0f; Vec3 halfExtents((maxX - minX) * 0.5, (maxY - minY) * 0.5, (maxZ - minZ) * 0.5); return OrthographicMatrix(-halfExtents.x, halfExtents.x, halfExtents.y, -halfExtents.y, halfExtents.z, -halfExtents.z) * viewMatrix; }
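
    One hedged direction: cascade selection and cascade coverage have to agree, and most CSM implementations simply let neighbouring cascades overlap so that a mesh straddling a split is fully inside at least one map. A sketch of padding the split distances, where the 15% factor is an assumption to tune:

        // When computing per-cascade near/far slices, extend each cascade's far
        // plane a little past the next cascade's near plane so they overlap.
        const float kSplitOverlap = 1.15f;  // 15% overlap between slices (tune)
        for (uint32_t i = 0; i + 1 < NUM_SHADOWMAP_CASCADES; ++i)
            farDistArr[i] = nearDistArr[i + 1] * kSplitOverlap;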

    Read the article

  • Convert rotation from right-handed system to left-handed

    - by Hector Llanos
    I have Euler angles from a right-handed system that I am trying to convert to a left-handed system. All the information that I have read online says that to convert it, simply multiply the axes and the angles in the correct order and it should work. In other words, Z * Y * X. When I do this, what I see in Maya and in the engine still does not match up. This is what I have so far: static Quaternion ConvertToRightHand(Vector3 Euler) { Quaternion x = Quaternion.AngleAxis(-Euler.x, Vector3.right); Quaternion y = Quaternion.AngleAxis(Euler.y, Vector3.up); Quaternion z = Quaternion.AngleAxis(Euler.z, Vector3.forward); return (z * y * x); } Keeping the -Euler.x helps keep the object pointing up correctly, but when I pass (0,0,0) to face in the -z, it faces in the +z. Help :/
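
    A hedged note on the math: a handedness change is a mirror, and conjugating a rotation by a mirror reflects the axis and reverses the angle. In quaternion terms that means negating the two components perpendicular to the mirrored axis. A sketch, assuming Unity-style Quaternion:

        // Mirroring the Z axis (flipped forward): negate x and y.
        static Quaternion FlipZHandedness(Quaternion q)
        {
            return new Quaternion(-q.x, -q.y, q.z, q.w);
        }

        // Mirroring the X axis instead (what some importers do): negate y and z.
        static Quaternion FlipXHandedness(Quaternion q)
        {
            return new Quaternion(q.x, -q.y, -q.z, q.w);
        }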

    Read the article

  • DirectX10 How to use Constant Buffers

    - by schnozzinkobenstein
    I'm trying to access some variables in my shader, but I think I'm doing this wrong. Say I have a constant buffer that looks like this: cbuffer perFrame { float foo; float bar; }; I got an ID3D10EffectConstantBuffer reference to it, and I can get a specific index by calling GetMemberByIndex, but how can I figure out how many members perFrame has so that I can get each member without going out of bounds?
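
    A hedged sketch of one way that avoids hard-coding the count: member variables returned past the end are invalid, so you can walk members until IsValid() fails (alternatively, the buffer's effect type description carries a Members count):

        // Walk the members of 'perFrame' without knowing the count up front.
        ID3D10EffectConstantBuffer* cb = effect->GetConstantBufferByName("perFrame");
        for (UINT i = 0; ; ++i)
        {
            ID3D10EffectVariable* member = cb->GetMemberByIndex(i);
            if (!member->IsValid())
                break;                       // ran past the last member
            // ... read or update 'member' here ...
        }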

    Read the article

  • Randomly placing items script not working - sometimes items spawn in walls, sometimes items spawn in weird locations

    - by Timothy Williams
    I'm trying to figure out a way to randomly spawn items throughout my level; however, I need to make sure they won't spawn inside another object (walls, etc.). Here's the code I'm currently using; it's based on the Physics.CheckSphere() function. This runs in OnLevelWasLoaded(). It spawns the items perfectly fine, but sometimes items spawn partway in walls, and sometimes items will spawn outside of the SpawnBox range (no clue why it does that). //This is what randomly generates all the items. void SpawnItems () { if (Application.loadedLevelName == "Menu" || Application.loadedLevelName == "End Demo") return; //The bottom corner of the box we want to spawn items in. Vector3 spawnBoxBot = Vector3.zero; //Top corner. Vector3 spawnBoxTop = Vector3.zero; //If we're in the dungeon, set the box to the dungeon box and tell the items we want to spawn. if (Application.loadedLevelName == "dungeonScene") { spawnBoxBot = new Vector3 (8.857f, 0, 9.06f); spawnBoxTop = new Vector3 (-27.98f, 2.4f, -15); itemSpawn = dungeonSpawn; } //Spawn all the items. for (i = 0; i != itemSpawn.Length; i ++) { spawnedItem = null; //Zeroes out our random location Vector3 randomLocation = Vector3.zero; //Gets the meshfilter of the item we'll be spawning MeshFilter mf = itemSpawn[i].GetComponent<MeshFilter>(); //Gets its bounds (see how big it is) Bounds bounds = mf.sharedMesh.bounds; //Get its radius float maxRadius = new Vector3 (bounds.extents.x + 10f, bounds.extents.y + 10f, bounds.extents.z + 10f).magnitude * 5f; //Set which layer is the no walls layer var NoWallsLayer = 1 << LayerMask.NameToLayer("NoWallsLayer"); //Use that layer as your layermask. LayerMask layerMask = ~(1 << NoWallsLayer); //If we're in the dungeon, certain items need to spawn on certain halves. if (Application.loadedLevelName == "dungeonScene") { if (itemSpawn[i].name == "key2" || itemSpawn[i].name == "teddyBearLW" || itemSpawn[i].name == "teddyBearLW_Admiration" || itemSpawn[i].name == "radio") randomLocation = new Vector3(Random.Range(spawnBoxBot.x, -26.96f), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(spawnBoxBot.z, -2.141f)); else randomLocation = new Vector3(Random.Range(spawnBoxBot.x, spawnBoxTop.x), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(-2.374f, spawnBoxTop.z)); } //Otherwise just spawn them in the box. else randomLocation = new Vector3(Random.Range(spawnBoxBot.x, spawnBoxTop.x), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(spawnBoxBot.z, spawnBoxTop.z)); //This is what actually spawns the item. It checks to see if the spot where we want to instantiate it is clear, and if so it instantiates it. Otherwise we have to repeat the whole process again. if (Physics.CheckSphere(randomLocation, maxRadius, layerMask)) spawnedItem = Instantiate(itemSpawn[i], randomLocation, Random.rotation); else i --; //If we spawned something, set its name to what it's supposed to be. Removes the (clone) addon. if (spawnedItem != null) spawnedItem.name = itemSpawn[i].name; } } What I'm asking is whether you know what's going wrong with this code that it would spawn stuff in walls, or if you could provide me with links/code/ideas for a better way to check if an item will spawn in a wall (some other function than Physics.CheckSphere). I've been working on this for a long time, and nothing I try seems to work. Any help is appreciated.
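
    A hedged reading of the Unity API that may explain the wall spawns: Physics.CheckSphere returns true when the sphere overlaps a collider, so the condition above spawns precisely when the spot is blocked. Note also that the mask is shifted twice (NoWallsLayer already holds a shifted bit). A sketch of the corrected test:

        // CheckSphere == true means "something is here", so negate it for spawning.
        int noWallsLayer = LayerMask.NameToLayer("NoWallsLayer");
        int layerMask = ~(1 << noWallsLayer);   // shift the raw layer index once

        if (!Physics.CheckSphere(randomLocation, maxRadius, layerMask))
            spawnedItem = Instantiate(itemSpawn[i], randomLocation, Random.rotation);
        else
            i--;   // spot blocked; roll a new random location next iteration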

    Read the article

  • Drawing lines in 3D space

    - by DeadMG
    When attempting to draw a line in 3D space with D3DPT_LINELIST, then Direct3D gives me an error about an invalid vertex declaration, saying that it cannot be converted to an FVF. I am using the same vertex declaration and shader/stream setup as for my D3DPT_TRIANGLELIST rendering which works absolutely correctly. How can I use D3DPT_LINELIST to render some lines in 3D space? Edit: Oopsie, forgot my codeses. Here's my raw Draw call. D3DCALL(device->SetStreamSource(1, PerBoneBuffer.get(), 0, sizeof(PerInstanceData))); D3DCALL(device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1)); D3DCALL(device->SetStreamSource(0, LineVerts, 0, sizeof(D3DXVECTOR3))); D3DCALL(device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | lines.size())); D3DCALL(device->SetIndices(LineIndices)); PerInstanceData* data; std::vector<Wide::Render::Line*> lines_vec(lines.begin(), lines.end()); D3DCALL(PerBoneBuffer->Lock(0, lines.size() * sizeof(PerInstanceData), reinterpret_cast<void**>(&data), D3DLOCK_DISCARD)); std::for_each(lines.begin(), lines.end(), [&](Wide::Render::Line* ptr) { data->Color = D3DXColor(ptr->Colour); D3DXMATRIXA16 Translate, Scale, Rotate; D3DXMatrixTranslation(&Translate, ptr->Start.x, ptr->Start.y, ptr->Start.z); D3DXMatrixScaling(&Scale, ptr->Scale, 1, 1); D3DXMatrixRotationQuaternion(&Rotate, &D3DQuaternion(ptr->Rotation)); data->World = Scale * Rotate * Translate; }); D3DCALL(PerBoneBuffer->Unlock()); D3DCALL(device->DrawIndexedPrimitive(D3DPRIMITIVETYPE::D3DPT_LINELIST, 0, 0, 2, 0, 1)); Here's my vertex declaration: D3DVERTEXELEMENT9 BasicMeshVertices[] = { {0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0}, {1, 0, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0}, {1, 16, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 1}, {1, 32, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 2}, {1, 48, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 3}, {1, 64, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR, 0}, D3DDECL_END() }; The LineIndices are just 0, 1 and the LineVerts are just {0,0,0} and {1,0,0}, to represent a unit 3D line along the X axis.
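
    A hedged possibility: D3D9's stream-frequency instancing is documented for indexed triangle primitives, so the line list plus D3DSTREAMSOURCE_INSTANCEDATA combination itself may be what the runtime rejects. A sketch of falling back to one draw per line, assuming the per-line data moves out of stream 1:

        // Reset stream frequencies to their defaults (no instancing).
        device->SetStreamSourceFreq(0, 1);
        device->SetStreamSourceFreq(1, 1);

        for (UINT lineIndex = 0; lineIndex < lines.size(); ++lineIndex)
        {
            // Upload this line's world matrix and colour through shader
            // constants here (e.g. SetVertexShaderConstantF) instead of stream 1.
            device->DrawIndexedPrimitive(D3DPT_LINELIST, 0, 0, 2, 0, 1);
        }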

    Read the article

  • FrameBuffer Render to texture not working all the way

    - by brainydexter
    I am learning to use Frame Buffer Objects. For this purpose, I chose to render a triangle to a texture and then map that to a quad. When I render the triangle, I clear the color to something blue. So, when I render the texture on the quad from the FBO, everything renders blue, but the triangle doesn't show up. I can't seem to figure out why this is happening. Can someone please help me out with this? I'll post the rendering code here, since glCheckFramebufferStatus doesn't complain when I set up the FBO. I've pasted the setup code at the end. Here is my rendering code: void FrameBufferObject::Render(unsigned int elapsedGameTime) { glBindFramebuffer(GL_FRAMEBUFFER, m_FBO); glClearColor(0.0, 0.6, 0.5, 1); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // adjust viewport and projection matrices to texture dimensions glPushAttrib(GL_VIEWPORT_BIT); glViewport(0,0, m_FBOWidth, m_FBOHeight); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(0, m_FBOWidth, 0, m_FBOHeight, 1.0, 100.0); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); DrawTriangle(); glPopAttrib(); // setting FrameBuffer back to window-specified Framebuffer glBindFramebuffer(GL_FRAMEBUFFER, 0); //unbind // back to normal viewport and projection matrix //glViewport(0, 0, 1280, 768); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.0, 1.33, 1.0, 1000.0); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glClearColor(0, 0, 0, 0); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); render(elapsedGameTime); } void FrameBufferObject::DrawTriangle() { glPushMatrix(); glBegin(GL_TRIANGLES); glColor3f(1, 0, 0); glVertex2d(0, 0); glVertex2d(m_FBOWidth, 0); glVertex2d(m_FBOWidth, m_FBOHeight); glEnd(); glPopMatrix(); } void FrameBufferObject::render(unsigned int elapsedTime) { glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, m_TextureID); glPushMatrix(); glTranslated(0, 0, -20); glBegin(GL_QUADS); glColor4f(1, 1, 1, 1); glTexCoord2f(1, 1); glVertex3f(1,1,1); glTexCoord2f(0, 1); glVertex3f(-1,1,1); glTexCoord2f(0, 0); glVertex3f(-1,-1,1); glTexCoord2f(1, 0); glVertex3f(1,-1,1); glEnd(); glPopMatrix(); glBindTexture(GL_TEXTURE_2D, 0); glDisable(GL_TEXTURE_2D); } void FrameBufferObject::Initialize() { // Generate FBO glGenFramebuffers(1, &m_FBO); glBindFramebuffer(GL_FRAMEBUFFER, m_FBO); // Add depth buffer as a renderbuffer to fbo // create depth buffer id glGenRenderbuffers(1, &m_DepthBuffer); glBindRenderbuffer(GL_RENDERBUFFER, m_DepthBuffer); // allocate space to render buffer for depth buffer glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, m_FBOWidth, m_FBOHeight); // attaching renderBuffer to FBO // attach depth buffer to FBO at depth_attachment glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_DepthBuffer); // Adding a texture to fbo // Create a texture glGenTextures(1, &m_TextureID); glBindTexture(GL_TEXTURE_2D, m_TextureID); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, m_FBOWidth, m_FBOHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0); // only allocating space glBindTexture(GL_TEXTURE_2D, 0); // attach texture to FBO glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_TextureID, 0); // Check FBO Status if( glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) std::cout << "\n Error:: FrameBufferObject::Initialize() :: FBO loading not complete \n"; // switch back to window system Framebuffer glBindFramebuffer(GL_FRAMEBUFFER, 0); } Thanks!
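
    A hedged guess at the cause: the FBO pass sets glOrtho(0, w, 0, h, 1.0, 100.0), which places the near plane at z = -1 in eye space, while DrawTriangle issues vertices at z = 0 through glVertex2d; those vertices sit in front of the near plane and get clipped, leaving only the blue clear colour in the texture. A one-line sketch of the fix:

        // Let z = 0 fall inside the depth range so the 2D triangle survives clipping.
        glOrtho(0, m_FBOWidth, 0, m_FBOHeight, -1.0, 1.0);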

    Read the article

  • Why does my terrain turn white when I get close to it?

    - by Starkers
    When I zoom in on my terrain it goes white: The further in I zoom, the greater the whiteness becomes. Is this normal? Is this to speed up rendering or something? Can I turn it off? I'm also getting these error messages in the console over and over again: rc.right != m_GfxWindow->GetWidth() || rc.bottom != m_GfxWindow->GetHeight() and GUI Window tries to begin rendering while something else has not finished rendering! Either you have a recursive OnGUI rendering, or previous OnGUI did not clean up properly. Do these bear any relation to the issue? Update I create virtual desktops to flit between using the program Deskpot. Turning this program off and restarting has stopped the above errors appearing in the console. However, I still get white terrain when I zoom in. Not a single error message. I've restarted my computer to no avail. I have an Asus NVidia GeForce GTX 760 2GB DDR5 Direct CU II OC Edition Graphics Card. Any known issues? Update I don't think it's fog...

    Read the article

  • Velocity control of the player, why doesn't this work?

    - by Dominic Grenier
    I have the following code inside a while True loop: if abs(playerx) < MAXSPEED: if moveLeft: playerx -= 1 if moveRight: playerx += 1 if abs(playery) < MAXSPEED: if moveDown: playery += 1 if moveUp: playery -= 1 if moveLeft == False and abs(playerx) > 0: playerx += 1 if moveRight == False and abs(playerx) > 0: playerx -= 1 if moveUp == False and abs(playery) > 0: playery += 1 if moveDown == False and abs(playery) > 0: playery -= 1 player.x += playerx player.y += playery if player.left < 0 or player.right > 1000: player.x -= playerx if player.top < 0 or player.bottom > 600: player.y -= playery The intended result is that while an arrow key is pressed, playerx or playery increments by one at every iteration until it reaches MAXSPEED and stays at MAXSPEED. And that when the player stops pressing that arrow key, his speed decreases until it reaches 0. To me, this code explicitly says that... But what actually happens is that playerx or playery keeps incrementing regardless of MAXSPEED and continues moving even after the player stops pressing the arrow key. I keep rereading but I'm completely baffled by this weird behavior. Any insights? Thanks.
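
    A hedged diagnosis from the code as posted: when no key is held, the two decay lines both fire (playerx += 1 and playerx -= 1) and cancel out, so the speed never decays; and while moving left (playerx negative), the moveRight == False line keeps subtracting 1 every frame, accelerating past MAXSPEED because the decay lines carry no speed cap. A sketch of sign-aware decay for the x axis (y is analogous):

        # Decay toward zero by sign, and only when neither direction is held.
        if not moveLeft and not moveRight:
            if playerx > 0:
                playerx -= 1
            elif playerx < 0:
                playerx += 1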

    Read the article

  • Texture the quad with different parts of texture

    - by PolGraphic
    I have a 2D quad. Let's say its position is (5,10) and its size is (7,11). I want to texture it with one texture, but using three different parts of it. I want to texture the part of the quad from x = 5 to x = 7 with the part of the texture from U = 0 to U = 0.5 (repeating it after reaching 0.5, so I will have four identical 0.5-length fragments). The second part with some other part of the texture (also repeating it), and the third in the same style. But how can I achieve this? I know that: float2 tc = fmod(input.TexCoord, textureCoordinates.zw - textureCoordinates.xy) + textureCoordinates.xy; //textureCoordinates.xy = fragments' offset will give me the texture part repeating.
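
    A hedged HLSL sketch of the general pattern: decide which region of the quad the texel falls in, then wrap the coordinate inside that region's UV sub-range with frac(). The parameter layout below is an illustrative assumption:

        // uvStart/uvSize describe one sub-range of the texture, e.g. U in [0, 0.5].
        float2 WrapIntoRange(float2 uv, float2 uvStart, float2 uvSize, float repeats)
        {
            return uvStart + frac(uv * repeats) * uvSize;  // frac() does the repeating
        }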

    Read the article

  • Setting up collision using a tilemap and cocos2d

    - by James
    I'm building my first platformer using cocos2d and a tilemap. I'm having trouble coming up with a decent way of determining if the character is colliding with an object; more specifically, in which direction the character is colliding with it. Following the tutorial here, I have made a separate "meta" layer of collidable tiles. The problem is that unless the character is inside the tile, you can't detect the collision. Also, there's no way of telling WHERE the collision is occurring. The best solution would be one that could tell me if a character is up against a wall, or walking on top of a platform. However, I can't seem to figure out a good technique for this.
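
    A hedged sketch of the standard tilemap approach: move one axis at a time and test the meta layer after each move; the axis you just moved tells you which side you hit. TileBlockedAt is a hypothetical query against the meta layer, and CCPoint stands in for whatever point type your cocos2d flavour uses:

        void MoveCharacter(CCPoint& position, CCPoint& velocity, float dt)
        {
            position.x += velocity.x * dt;           // horizontal first
            if (TileBlockedAt(position)) {
                position.x -= velocity.x * dt;       // hit a wall to the left/right
                velocity.x = 0;
            }
            position.y += velocity.y * dt;           // then vertical
            if (TileBlockedAt(position)) {
                if (velocity.y < 0) { /* landed on top of a platform */ }
                position.y -= velocity.y * dt;       // hit floor or ceiling
                velocity.y = 0;
            }
        }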

    Read the article

  • Applying effects to an existing program that uses BasicEffect

    - by Fibericon
    Using the finished product from the tutorial here. Is it possible to apply the grayscale effect from here: Making entire scene fade to grayscale Or would you basically have to rewrite everything? EDIT: It's doing something now, but the whole grayscale seems extremely blue. It's like I'm looking at it through dark blue sunglasses. Here's my draw function: protected override void Draw(GameTime gameTime) { device.SetRenderTarget(renderTarget); graphics.GraphicsDevice.Clear(Color.CornflowerBlue); //Drawing models, bullets, etc. device.SetRenderTarget(null); spriteBatch.Begin(0, BlendState.Additive, SamplerState.PointWrap, DepthStencilState.Default, RasterizerState.CullNone, grayScale); Texture2D temp = (Texture2D)renderTarget; grayScale.Parameters["coloredTexture"].SetValue(temp); grayScale.CurrentTechnique = grayScale.Techniques["Grayscale"]; foreach (EffectPass pass in grayScale.CurrentTechnique.Passes) { pass.Apply(); } spriteBatch.Draw(temp, new Vector2(GraphicsDevice.PresentationParameters.BackBufferWidth/2, GraphicsDevice.PresentationParameters.BackBufferHeight/2), null, Color.White, 0f, new Vector2(renderTarget.Width/2, renderTarget.Height/2), 1.0f, SpriteEffects.None, 0f); spriteBatch.End(); base.Draw(gameTime); } Another edit: figured out what I was doing wrong. I had BlendState.Additive in the spriteBatch.Begin() call. It should be BlendState.Opaque, or it literally tries to add the blank blue image to the grayscale image.

    Read the article

  • Convenience of MySQL over XML

    - by Bonechilla
    Currently I use XML to store specific information to correctly load a few things, such as a list of specified characters, scenes and music. I also use JAXB in combination with standard compression/decompression (ZIP) functionality to store a list of extraneous data. This data is called on to add functionality to the character, somewhat like skills in an RPG. Each skill is separated into its own XML file, with a grand list which contains the names of each file with their extensions omitted, zipped in a folder that gets encrypted. At first using XML was working fine; however, as the skill list grows, I worry about its stability. I was wondering if I should begin storing the data in MySQL. Originally I planned to simply convert everything to JSON instead of XML, but I think MySQL would possibly be a better move. Can anyone inform me of the key differences and the pros and cons of each? I guess I'm looking for the way to store the data that is most convenient and easiest to operate on. The data is mostly primitives and strings, and the only ArrayList of values I have I can just concatenate into a single field and parse later. Edit: If I am going in the right direction with XML, would it make sense to convert it to JSON and use maybe Kryo or EclipseLink JAXB (MOXy)?

    Read the article

  • First Person Camera strafing at angle

    - by Linkandzelda
    I have a simple camera class working in directx 11 allowing moving forward and rotating left and right. I'm trying to implement strafing into it but having some problems. The strafing works when there's no camera rotation, so when the camera starts at 0, 0, 0. But after rotating the camera in either direction it seems to strafe at an angle or inverted or just some odd stuff. Here is a video uploaded to Dropbox showing this behavior. https://dl.dropboxusercontent.com/u/2873587/IncorrectStrafing.mp4 And here is my camera class. I have a hunch that it's related to the calculation for camera position. I tried various different calculations in strafe and they all seem to follow the same pattern and same behavior. Also the m_camera_rotation represents the Y rotation, as pitching isn't implemented yet. #include "camera.h" camera::camera(float x, float y, float z, float initial_rotation) { m_x = x; m_y = y; m_z = z; m_camera_rotation = initial_rotation; updateDXZ(); } camera::~camera(void) { } void camera::updateDXZ() { m_dx = sin(m_camera_rotation * (XM_PI/180.0)); m_dz = cos(m_camera_rotation * (XM_PI/180.0)); } void camera::Rotate(float amount) { m_camera_rotation += amount; updateDXZ(); } void camera::Forward(float step) { m_x += step * m_dx; m_z += step * m_dz; } void camera::strafe(float amount) { float yaw = (XM_PI/180.0) * m_camera_rotation; m_x += cosf( yaw ) * amount; m_z += sinf( yaw ) * amount; } XMMATRIX camera::getViewMatrix() { updatePosition(); return XMMatrixLookAtLH(m_position, m_lookat, m_up); } void camera::updatePosition() { m_position = XMVectorSet(m_x, m_y, m_z, 0.0); m_lookat = XMVectorSet(m_x + m_dx, m_y, m_z + m_dz, 0.0); m_up = XMVectorSet(0.0, 1.0, 0.0, 0.0); }
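
    A hedged observation about the math: Forward moves along (sin(yaw), cos(yaw)), but strafe moves along (cos(yaw), sin(yaw)), and those two are only perpendicular when sin(2 * yaw) is zero, i.e. at multiples of 90 degrees; everywhere else the strafe drifts at an angle, which matches the video. The perpendicular of (sin, cos) is (cos, -sin), so a sketch of the fix (flip the sign of amount if left and right come out swapped):

        void camera::strafe(float amount)
        {
            const float yaw = (XM_PI / 180.0f) * m_camera_rotation;
            m_x += cosf(yaw) * amount;
            m_z -= sinf(yaw) * amount;   // minus keeps the strafe axis perpendicular to forward
        }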

    Read the article

  • Dynamic Environment Creation

    - by Jack
    I was wondering, on a more small-scale, abstracted level: how does one create a dynamic environment a la Minecraft? Specifically, I'm thinking of the world as a 3-dimensional array of block objects; how is it made so that large features such as oceans are created? The language isn't important, I'm thinking on a conceptual level, but if it helps, I use C# or C++. Thanks for any help!
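
    A hedged conceptual sketch: large features fall out of low-frequency noise. Sample a smooth 2D noise at several octaves to get a height per column, then call every column whose height sits below a chosen sea level an ocean. The hash-based value noise here is just a stand-in for Perlin/simplex noise:

        #include <cmath>
        #include <cstdint>

        // Stand-in smooth noise: hash the integer lattice, then bilinearly blend.
        float Hash(int x, int z) {
            std::uint32_t h = std::uint32_t(x) * 374761393u + std::uint32_t(z) * 668265263u;
            h = (h ^ (h >> 13)) * 1274126177u;
            return (h & 0xFFFFu) / 65535.0f;  // [0, 1]
        }

        float SmoothNoise(float x, float z) {
            const int xi = int(std::floor(x)), zi = int(std::floor(z));
            const float tx = x - xi, tz = z - zi;
            const float a = Hash(xi, zi),     b = Hash(xi + 1, zi);
            const float c = Hash(xi, zi + 1), d = Hash(xi + 1, zi + 1);
            const float ab = a + (b - a) * tx, cd = c + (d - c) * tx;
            return ab + (cd - ab) * tz;
        }

        // Octaves: the first, lowest-frequency layer shapes continents and oceans;
        // later layers only add hills and surface roughness on top.
        int ColumnHeight(int x, int z) {
            float height = 0.0f, amplitude = 32.0f, frequency = 1.0f / 256.0f;
            for (int octave = 0; octave < 4; ++octave) {
                height += SmoothNoise(x * frequency, z * frequency) * amplitude;
                amplitude *= 0.5f;
                frequency *= 2.0f;
            }
            return int(height);  // fill blocks below this with stone/dirt, and
        }                        // anything under the sea level with water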

    Read the article

  • Mobile 3D engine renders alpha as full-object transparency

    - by Nils Munch
    I am running an iOS project using the isgl3d framework for showing pod files. I have a stylish car with 0.5-alpha windows that I wish to render on a camera background, seeking some augmented reality goodness. The alpha on the windows looks okay, but when I add the object, I notice that it renders the entire object transparently where the windows are, including the interior of the car. Like so (in the example, the keyboard can be seen through the dashboard, seats and so on, which should be solid): The car interior is a separate object with alpha 1.0. I would rather not show a "ghost car" in my project, but I haven't found a way around this. Has anyone encountered the same issue and eventually reached a solution?
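
    A hedged sketch of the usual cure, in raw GL terms since the isgl3d-specific knobs vary: draw opaque geometry first with depth writes on, then draw the glass last with blending enabled and depth writes off, so the interior is already in the framebuffer when the windows blend over it (drawOpaque/drawWindows are hypothetical helpers):

        glDepthMask(GL_TRUE);            // opaque pass writes depth normally
        drawOpaque(carBody, carInterior);

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(GL_FALSE);           // glass tests depth but doesn't write it
        drawWindows(carWindows);         // ideally sorted back-to-front

        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);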

    Read the article
