Search Results

Search found 28230 results on 1130 pages for 'embedded development'.

Page 380/1130 | < Previous Page | 376 377 378 379 380 381 382 383 384 385 386 387  | Next Page >

  • How do I implement camera axis aligned billboards?

    - by user19787
    I am trying to make an axis-aligned billboard with Pyglet. I have looked at several tutorials, but they only show me how to get the up, right, and look vectors. So far this is what I have:

        target = cam.pos
        look = norm(target - billboard.pos)
        right = norm(Vector3(0,1,0) * look)
        up = look * right
        gluLookAt( look.x, look.y, look.z,
                   self.pos.x, self.pos.y, self.pos.z,
                   up.x, up.y, up.z )

    This does nothing for me visibly. Any idea what I'm doing wrong?
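
    For reference, the usual construction of an axis-constrained, camera-facing billboard builds the right vector as the cross product of the world up axis and the look vector, derives up from look and right, and then puts all three into the billboard's model matrix rather than into gluLookAt (which positions the camera). A minimal sketch in C++ with GLM, assuming a Y-up world; the function and variable names are illustrative, not taken from the question:

        #include <glm/glm.hpp>

        // Build a model matrix that makes a quad at `pos` face the camera,
        // constrained to rotate around the world Y axis (axis-aligned billboard).
        glm::mat4 AxisAlignedBillboard(const glm::vec3& pos, const glm::vec3& camPos)
        {
            glm::vec3 look = camPos - pos;
            look.y = 0.0f;                       // constrain rotation to the Y axis
            look = glm::normalize(look);

            const glm::vec3 worldUp(0.0f, 1.0f, 0.0f);
            glm::vec3 right = glm::normalize(glm::cross(worldUp, look));
            glm::vec3 up    = glm::cross(look, right);

            // Column-major: the basis vectors become the rotation part of the matrix.
            glm::mat4 model(1.0f);
            model[0] = glm::vec4(right, 0.0f);
            model[1] = glm::vec4(up,    0.0f);
            model[2] = glm::vec4(look,  0.0f);
            model[3] = glm::vec4(pos,   1.0f);
            return model;
        }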

    Read the article

  • Game Engine which can provide 360 degree projection for PC

    - by Never Quit
    I'm searching for a game engine that can provide 360-degree real-time projection. I've already achieved this with the VBS2 game engine (Ref.: http://products.bisimulations.com/products/vbs2/vbs2-multi-channel), but I'm not satisfied with its graphics. So I'm looking for another game engine that can do the same while providing better graphics and a better user experience, like Frostbite 2 or Unreal Engine 3. As in the image, I want a full 360-degree view. Is there any game engine that can provide 360-degree projection for PC? Thanks in advance...

    Read the article

  • Objective-c Cocos2d moving a sprite

    - by marcg11
    I hope someone knows how to do the following with cocos2d: I want a sprite to move, but not in a single straight line as produced by [cocosGuy runAction: [CCMoveTo actionWithDuration:1 position:location]]; What I want is for the sprite to follow movements that I establish in advance. For example, at some point I want the sprite to move up and then down, but along a curve. Do I have to do this with Flash, as this document says? http://www.cocos2d-iphone.org/wiki/doku.php/prog_guide:animation Does "animation" on this page mean moving sprites, or something else? Thanks
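
    For curved movement the path is usually expressed as a Bezier curve and sampled over time; cocos2d itself ships Bezier actions (CCBezierBy/CCBezierTo) for exactly this. As a language-neutral illustration of what such an action evaluates each frame, here is a quadratic Bezier sampler, sketched in C++ rather than Objective-C; the names are illustrative:

        struct Vec2 { float x, y; };

        // Evaluate a quadratic Bezier curve at t in [0, 1].
        // p0 = start, p1 = control point (pulls the path into a curve), p2 = end.
        Vec2 QuadraticBezier(Vec2 p0, Vec2 p1, Vec2 p2, float t)
        {
            const float u = 1.0f - t;
            return Vec2{
                u * u * p0.x + 2.0f * u * t * p1.x + t * t * p2.x,
                u * u * p0.y + 2.0f * u * t * p1.y + t * t * p2.y
            };
        }

        // Each frame: advance t by dt / duration and set the sprite's
        // position to QuadraticBezier(start, control, end, t).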

    Read the article

  • XNA 4 Deferred Rendering deforms the model

    - by Tomáš Bezouška
    I have a problem when rendering a model of my World - when rendered using BasicEffect, it looks just peachy. Problem is when I render it using deferred rendering. See for yourselves: what it looks like: http://imageshack.us/photo/my-images/690/survival.png/ what it should look like: http://imageshack.us/photo/my-images/521/survival2.png/ (Please ignora the cars, they shouldn't be there. Nothing changes when they are removed) Im using Deferred renderer from www.catalinzima.com/tutorials/deferred-rendering-in-xna/introduction-2/ except very simplified, without the custom content processor. Here's the code for the GBuffer shader: float4x4 World; float4x4 View; float4x4 Projection; float specularIntensity = 0.001f; float specularPower = 3; texture Texture; sampler diffuseSampler = sampler_state { Texture = (Texture); MAGFILTER = LINEAR; MINFILTER = LINEAR; MIPFILTER = LINEAR; AddressU = Wrap; AddressV = Wrap; }; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; float2 TexCoord : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 TexCoord : TEXCOORD0; float3 Normal : TEXCOORD1; float2 Depth : TEXCOORD2; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4 worldPosition = mul(input.Position, World); float4 viewPosition = mul(worldPosition, View); output.Position = mul(viewPosition, Projection); output.TexCoord = input.TexCoord; //pass the texture coordinates further output.Normal = mul(input.Normal,World); //get normal into world space output.Depth.x = output.Position.z; output.Depth.y = output.Position.w; return output; } struct PixelShaderOutput { half4 Color : COLOR0; half4 Normal : COLOR1; half4 Depth : COLOR2; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; output.Color = tex2D(diffuseSampler, input.TexCoord); //output Color output.Color.a = specularIntensity; //output SpecularIntensity output.Normal.rgb = 0.5f * (normalize(input.Normal) + 1.0f); //transform normal domain output.Normal.a = specularPower; //output SpecularPower output.Depth = input.Depth.x / input.Depth.y; //output Depth return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_2_0 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } And here are the rendering parts in XNA: public void RednerModel(Model model, Matrix world) { Matrix[] boneTransforms = new Matrix[model.Bones.Count]; model.CopyAbsoluteBoneTransformsTo(boneTransforms); Game.GraphicsDevice.DepthStencilState = DepthStencilState.Default; Game.GraphicsDevice.BlendState = BlendState.Opaque; Game.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; foreach (ModelMesh mesh in model.Meshes) { foreach (ModelMeshPart meshPart in mesh.MeshParts) { GBufferEffect.Parameters["View"].SetValue(Camera.Instance.ViewMatrix); GBufferEffect.Parameters["Projection"].SetValue(Camera.Instance.ProjectionMatrix); GBufferEffect.Parameters["World"].SetValue(boneTransforms[mesh.ParentBone.Index] * world); GBufferEffect.Parameters["Texture"].SetValue(meshPart.Effect.Parameters["Texture"].GetValueTexture2D()); GBufferEffect.Techniques[0].Passes[0].Apply(); RenderMeshpart(mesh, meshPart); } } } private void RenderMeshpart(ModelMesh mesh, ModelMeshPart part) { Game.GraphicsDevice.SetVertexBuffer(part.VertexBuffer); Game.GraphicsDevice.Indices = part.IndexBuffer; Game.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, part.NumVertices, part.StartIndex, 
part.PrimitiveCount); } I import the model using the built-in content processor for FBX. The FBX is created in 3DS Max. I don't know the exact details of that export, but if you think it might be relevant, I will get them from my colleague who does them. What confuses me, though, is why the BasicEffect approach works... it seems the FBX shouldn't be the problem. Any thoughts? They will be greatly appreciated :)

    Read the article

  • How do I limit the game loop?

    - by user1758938
    How do I make a game update at a consistent speed? For example, this would loop too fast:

        while(true) { GetInput(); Render(); }

    And this just won't work well (it's hard to explain why):

        while(true) { GetInput(); Render(); Sleep(16); }

    How do I sync my game to a certain framerate and still have input and update logic running at a consistent rate?
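
    The usual answer is a fixed-timestep loop: accumulate the real time that has passed and run the simulation in fixed increments, independent of how fast the loop spins. A minimal sketch in C++ using std::chrono; GetInput/Update/Render are stand-ins for the functions in the question:

        #include <chrono>

        void GetInput()        { /* poll input here */ }
        void Update(double dt) { /* advance the simulation by dt seconds */ }
        void Render()          { /* draw the current state */ }

        void RunGameLoop()
        {
            using clock = std::chrono::steady_clock;
            const double fixedDt = 1.0 / 60.0;     // 60 simulation steps per second
            double accumulator = 0.0;
            auto previous = clock::now();
            bool running = true;

            while (running)
            {
                auto now = clock::now();
                accumulator += std::chrono::duration<double>(now - previous).count();
                previous = now;

                GetInput();

                // Run as many fixed steps as the elapsed real time allows.
                while (accumulator >= fixedDt)
                {
                    Update(fixedDt);
                    accumulator -= fixedDt;
                }

                Render();   // optionally interpolate between the last two states
            }
        }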

    Read the article

  • Atmospheric Scattering

    - by Lawrence Kok
    I'm trying to implement atmospheric scattering based on Sean O`Neil algorithm that was published in GPU Gems 2. But I have some trouble getting the shader to work. My latest attempts resulted in: http://img253.imageshack.us/g/scattering01.png/ I've downloaded sample code of O`Neil from: http://http.download.nvidia.com/developer/GPU_Gems_2/CD/Index.html. Made minor adjustments to the shader 'SkyFromAtmosphere' that would allow it to run in AMD RenderMonkey. In the images it is see-able a form of banding occurs, getting an blueish tone. However it is only applied to one half of the sphere, the other half is completely black. Also the banding appears to occur at Zenith instead of Horizon, and for a reason I managed to get pac-man shape. I would appreciate it if somebody could show me what I'm doing wrong. Vertex Shader: uniform mat4 matView; uniform vec4 view_position; uniform vec3 v3LightPos; const int nSamples = 3; const float fSamples = 3.0; const vec3 Wavelength = vec3(0.650,0.570,0.475); const vec3 v3InvWavelength = 1.0f / vec3( Wavelength.x * Wavelength.x * Wavelength.x * Wavelength.x, Wavelength.y * Wavelength.y * Wavelength.y * Wavelength.y, Wavelength.z * Wavelength.z * Wavelength.z * Wavelength.z); const float fInnerRadius = 10; const float fOuterRadius = fInnerRadius * 1.025; const float fInnerRadius2 = fInnerRadius * fInnerRadius; const float fOuterRadius2 = fOuterRadius * fOuterRadius; const float fScale = 1.0 / (fOuterRadius - fInnerRadius); const float fScaleDepth = 0.25; const float fScaleOverScaleDepth = fScale / fScaleDepth; const vec3 v3CameraPos = vec3(0.0, fInnerRadius * 1.015, 0.0); const float fCameraHeight = length(v3CameraPos); const float fCameraHeight2 = fCameraHeight * fCameraHeight; const float fm_ESun = 150.0; const float fm_Kr = 0.0025; const float fm_Km = 0.0010; const float fKrESun = fm_Kr * fm_ESun; const float fKmESun = fm_Km * fm_ESun; const float fKr4PI = fm_Kr * 4 * 3.141592653; const float fKm4PI = fm_Km * 4 * 3.141592653; varying vec3 v3Direction; varying vec4 c0, c1; float scale(float fCos) { float x = 1.0 - fCos; return fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25)))); } void main( void ) { // Get the ray from the camera to the vertex, and its length (which is the far point of the ray passing through the atmosphere) vec3 v3FrontColor = vec3(0.0, 0.0, 0.0); vec3 v3Pos = normalize(gl_Vertex.xyz) * fOuterRadius; vec3 v3Ray = v3CameraPos - v3Pos; float fFar = length(v3Ray); v3Ray = normalize(v3Ray); // Calculate the ray's starting position, then calculate its scattering offset vec3 v3Start = v3CameraPos; float fHeight = length(v3Start); float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fCameraHeight)); float fStartAngle = dot(v3Ray, v3Start) / fHeight; float fStartOffset = fDepth*scale(fStartAngle); // Initialize the scattering loop variables float fSampleLength = fFar / fSamples; float fScaledLength = fSampleLength * fScale; vec3 v3SampleRay = v3Ray * fSampleLength; vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5; // Now loop through the sample rays for(int i=0; i<nSamples; i++) { float fHeight = length(v3SamplePoint); float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fHeight)); float fLightAngle = dot(normalize(v3LightPos), v3SamplePoint) / fHeight; float fCameraAngle = dot(normalize(v3Ray), v3SamplePoint) / fHeight; float fScatter = (-fStartOffset + fDepth*( scale(fLightAngle) - scale(fCameraAngle)))/* 0.25f*/; vec3 v3Attenuate = exp(-fScatter * (v3InvWavelength * fKr4PI + fKm4PI)); v3FrontColor += 
v3Attenuate * (fDepth * fScaledLength); v3SamplePoint += v3SampleRay; } // Finally, scale the Mie and Rayleigh colors and set up the varying variables for the pixel shader vec4 newPos = vec4( (gl_Vertex.xyz + view_position.xyz), 1.0); gl_Position = gl_ModelViewProjectionMatrix * vec4(newPos.xyz, 1.0); gl_Position.z = gl_Position.w * 0.99999; c1 = vec4(v3FrontColor * fKmESun, 1.0); c0 = vec4(v3FrontColor * (v3InvWavelength * fKrESun), 1.0); v3Direction = v3CameraPos - v3Pos; } Fragment Shader: uniform vec3 v3LightPos; varying vec3 v3Direction; varying vec4 c0; varying vec4 c1; const float g =-0.90f; const float g2 = g * g; const float Exposure =2; void main(void){ float fCos = dot(normalize(v3LightPos), v3Direction) / length(v3Direction); float fMiePhase = 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + fCos*fCos) / pow(1.0 + g2 - 2.0*g*fCos, 1.5); gl_FragColor = c0 + fMiePhase * c1; gl_FragColor.a = 1.0; }

    Read the article

  • Testing for Auto Save and Load Game

    - by David Dimalanta
    I'm trying to make a simple app that tests saving and loading state. Is it a good idea to have the game auto-save and auto-load, so that when players open the app they can continue where they left off another day? My test app has a simple moving block sprite that starts at the center coordinate. After I move the sprite to the top by touch and drag, I press the back key to close the app. I expected that when I re-open the app the block sprite would still be at the top, but instead it goes back to the center. Where can I learn more about using preferences, or about telling the dispose method to dispose only of temporary data while keeping the important records (fastest time, the sprite's last coordinates, etc.)? Is there really an easy way, or are there no shortcuts, just a most effective approach? I need help expanding on these ideas. Thank you. Here are the links I'm trying to figure out how to use: http://www.youtube.com/watch?v=gER5GGQYzGc http://www.badlogicgames.com/wordpress/?p=1585 http://www.youtube.com/watch?v=t0PtLexfBCA&feature=relmfu Note that these links are code examples, but I'm not looking for code answers so much as for how to get started and how to use them. Tell me if I'm wrong.

    Read the article

  • how many times can i buy swtor credits with Extra 100% Bonus in Father’s Day at swtor2credits?

    - by user46860
    When you buy swtor credits, the most important factor is the price. How can you get cheap swtor credits? A big surprise for you: for Father's Day, swtor2credits is running a special promotion for all swtor players. During the three-day promotion you can buy 1600 credits for only $5.04, 2000 credits for only $6.30, and 3000K credits for only $9.44. The promotion runs June 16 to June 18, 2014, 02:00-03:00 a.m. GMT! swtor2credits is selling these credits at a big loss, which means 50% off your order - a crazy promotion! Note that you cannot use the 8% discount code and get double swtor credits at the same time: everyone has only one chance to get double swtor credits at swtor2credits during the promotion, and as long as your order uses an extra discount code or voucher, you lose the chance to get the exclusive 100% bonus. Don't miss the chance to buy such cheap swtor credits. Liking the swtor2credits Facebook page brings more surprises: from May 29, 2014 to June 12, 2014 (GMT), you can get a free cash coupon, with up to $16 in giveaways for swtor credits, if you like the swtor2credits Facebook page. http://www.swtor2credits.com/

    Read the article

  • Inverted textures

    - by brainydexter
    I'm trying to draw textures aligned with this physics body whose coordinate system's origin is at the center of the screen. (XNA)Spritebatch has its default origin set to top-left corner. I got the textures to be positioned correctly, but I noticed my textures are vertically inverted. That is, an arrow texture pointing Up , when rendered points down. I'm not sure where I am going wrong with the math. My approach is to convert everything in physic's meter units and draw accordingly. Matrix proj = Matrix.CreateOrthographic(scale * graphics.GraphicsDevice.Viewport.AspectRatio, scale, 0, 1); Matrix view = Matrix.Identity; effect.World = Matrix.Identity; effect.View = view; effect.Projection = proj; effect.TextureEnabled = true; effect.VertexColorEnabled = true; effect.Techniques[0].Passes[0].Apply(); SpriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend, null, DepthStencilState.Default, RasterizerState.CullNone, effect); m_Paddles[1].Draw(gameTime); SpriteBatch.End(); where Paddle::Draw looks like: SpriteBatch.Draw(paddleTexture, mBody.Position, null, Color.White, 0f, new Vector2(16f, 16f), // origin of the texture 0.1875f, SpriteEffects.None, // width of box is 3*2 = 6 meters. texture is 32 pixels wide. to make it 6 meters wide in world space: 6/32 = 0.1875f 0); The orthographic projection matrix seem fine to me, but I am obviously doing something wrong somewhere! Can someone please help me figure out what am i doing wrong here ? Thanks

    Read the article

  • 2D Procedural Terrain with box2d Assets

    - by Alex
    I'm making a game that involves a tire moving through terrain that is generated randomly over 1000 points. The terrain is always a downwards incline. The actual box2d terrain extends one screen width behind and in front of the circular character. I'm now at a stage where I need to add gameplay elements to the terrain, such as chasms or physical objects (like the demo polygon in the picture), and am not sure of the best way to structure the procedural generation of the terrain and objects. I currently have a very simple for loop like so:

        for(int i = 0; i < kMaxHillPoints; i++) {
            hillKeyPoints[i] = CGPointMake(terrainX, terrainY);
            if(i%50 == 0) {
                i += [self generateCasmAtIndex:i];
            }
            terrainX += winsize.width/20;
            terrainY -= random() % ((int) winsize.height/20);
        }

    The generateCasmAtIndex function adds points to the hillKeyPoints array and increments the for loop by the required amount. If I want to generate box2d objects as well for specific terrain elements, I'll also have to keep track of the current position of the player and have some sort of array of box2d objects that need to be created at certain locations. I am not sure of an efficient way to accomplish this procedural generation of terrain elements with accompanying box2d objects. My thoughts are: 1) Have a function for each terrain element (chasm, jump, etc.) which adds elements to be drawn to an array that is checked on each game step - similar to what I've shown above. 2) Create an array of terrain element objects that string together and are looped over to create the terrain and generate the box2d objects. Each object would hold an array of points to be drawn and an array of accompanying box2d objects. Any help on this is much appreciated, as I cannot see a 'clean' solution.
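
    One common structure is to generate the terrain as a queue of self-contained elements, each owning its points and the physics objects that belong to it, and to keep only the elements within a window around the player alive. A rough sketch of that shape in C++, independent of cocos2d and Box2D; the types and names are illustrative, and physics-body creation is left as a callback so each element decides what it spawns:

        #include <deque>
        #include <functional>
        #include <vector>

        struct Point { float x, y; };

        // One gameplay chunk of terrain: a strip of surface points plus an
        // optional hook that creates the physics objects belonging to it.
        struct TerrainElement {
            std::vector<Point> points;                 // appended to the surface
            std::function<void()> spawnPhysics;        // e.g. create b2 bodies here
            float startX = 0.0f, endX = 0.0f;
        };

        class TerrainStream {
        public:
            // Keep generating elements until the terrain extends one screen
            // width ahead of the player; drop elements one screen width behind.
            void update(float playerX, float screenWidth) {
                while (elements_.empty() || elements_.back().endX < playerX + screenWidth)
                    pushElement(makeNextElement());
                while (!elements_.empty() && elements_.front().endX < playerX - screenWidth)
                    elements_.pop_front();             // also destroy its bodies here
            }
        private:
            TerrainElement makeNextElement() {         // placeholder: flat ground segment;
                TerrainElement e;                      // a real version picks chasm/jump/hill
                float x0 = elements_.empty() ? 0.0f : elements_.back().endX;
                e.startX = x0; e.endX = x0 + 100.0f;
                e.points = { {x0, 0.0f}, {e.endX, 0.0f} };
                return e;
            }
            void pushElement(TerrainElement e) {
                if (e.spawnPhysics) e.spawnPhysics();
                elements_.push_back(std::move(e));
            }
            std::deque<TerrainElement> elements_;
        };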

    Read the article

  • Raycasting tutorial / vector math question

    - by mattboy
    I'm checking out this nice raycasting tutorial at http://lodev.org/cgtutor/raycasting.html and have a probably very simple math question. In the DDA algorithm I'm having trouble understanding the calculation of the deltaDistX and deltaDistY variables, which are the distances that the ray has to travel from one x-side to the next x-side, or from one y-side to the next y-side, in the square grid that makes up the world map (see the screenshot below). In the tutorial they are calculated as follows, but without much explanation:

        //length of ray from one x or y-side to next x or y-side
        double deltaDistX = sqrt(1 + (rayDirY * rayDirY) / (rayDirX * rayDirX));
        double deltaDistY = sqrt(1 + (rayDirX * rayDirX) / (rayDirY * rayDirY));

    rayDirX and rayDirY are the direction of a ray that has been cast. How do you get these formulas? It looks like the Pythagorean theorem is part of it, but somehow there's division involved here. Can anyone clue me in as to what mathematical knowledge I'm missing here, or "prove" the formula by showing how it's derived?
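
    For reference, a short derivation (a sketch of the standard argument, not taken from the tutorial text itself): to cross from one vertical grid line to the next, the ray's x coordinate must change by exactly 1. Moving along the ray by a parameter t changes x by t * rayDirX, so crossing one x-side takes t = 1 / |rayDirX|, and the distance travelled over that step is t * |rayDir|. Hence

        deltaDistX = |rayDir| / |rayDirX|
                   = sqrt(rayDirX^2 + rayDirY^2) / |rayDirX|
                   = sqrt(1 + (rayDirY^2) / (rayDirX^2))

    and symmetrically deltaDistY = sqrt(1 + (rayDirX^2) / (rayDirY^2)). The Pythagorean theorem supplies the length sqrt(rayDirX^2 + rayDirY^2); the division appears when 1 / rayDirX^2 is pulled inside the square root. If rayDir happens to be normalized, this simplifies to deltaDistX = 1 / |rayDirX|.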

    Read the article

  • Entity System with C++

    - by Dono
    I'm working on a game engine using the Entity System pattern and I have some questions. How I see the Entity System: Components: a class with attributes and getters/setters (Sprite, PhysicBody, SpaceShip, ...). System: a class with a list of components, holding the component logic (EntityManager, Renderer, Input, Camera, ...). Entity: just an empty class with a list of components. What I've done: currently, I've got a program that allows me to do this:

        // Create a new entity.
        Entity* entity = game.createEntity();
        // Add some components.
        entity->addComponent( new TransformableComponent() )
              ->setPosition( 15, 50 )
              ->setRotation( 90 )
              ->addComponent( new PhysicComponent() )
              ->setMass( 70 )
              ->addComponent( new SpriteComponent() )
              ->setTexture( "name.png" )
              ->addToSystem( new RendererSystem() );

    My question: does a system store a list of components or a list of entities? If I store a list of entities, I need to look up the components of those entities on each frame, and that's probably heavy, isn't it?
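
    One common answer, sketched below in C++ (an illustration, not the engine in the question): the system keeps direct pointers to the components it operates on, registered once when the entity is added, so there is no per-frame component lookup. The class and member names are illustrative:

        #include <vector>

        struct SpriteComponent {
            const char* texture = nullptr;
            float x = 0.0f, y = 0.0f;
        };

        // The system stores the components it needs, not whole entities,
        // so update() is a tight loop with no per-frame lookups.
        class RendererSystem {
        public:
            void registerComponent(SpriteComponent* c) { sprites_.push_back(c); }
            void update() {
                for (SpriteComponent* c : sprites_) {
                    // draw c->texture at (c->x, c->y)
                }
            }
        private:
            std::vector<SpriteComponent*> sprites_;
        };

        // When an entity gains a SpriteComponent, it (or the entity manager)
        // calls rendererSystem.registerComponent(&component) once; removal
        // unregisters it. The alternative -- storing entities and calling
        // something like entity->getComponent<Sprite>() every frame -- costs a
        // lookup per entity per frame, which is the overhead the question asks about.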

    Read the article

  • Modular Open MMO RPG

    - by Chris Valentine
    Has there been an MMORPG-type attempt at some kind of open universe where you could host a server on your own if you wish, and it would simply be added to the collective of possible places to travel within the MMO? Two types come to mind: a D&D Neverwinter Nights kind of place, or something like EVE Online, where there is a "universe", each hosted space is a planet, solar system, or galaxy, players can travel between them using the same characters/ships/portal system, and each new server is then just a new adventure or place to go. I would also assume there would be dedicated/replicated servers that house the characters/inventory themselves, so that the environment is decentralized and always expandable. Not sure that's clear, but have there been any such attempts or works in progress? Thanks

    Read the article

  • Flood fill algorithm for Game of Go

    - by Jackson Borghi
    I'm having a hell of a time trying to figure out how to make captured stones disappear. I've read everywhere that I should use the flood fill algorithm, but I haven't had any luck with that so far. Any help would be amazing! Here is my code: package Go; import static java.lang.Math.*; import static stdlib.StdDraw.*; import java.awt.Color; public class Go2 { public static Color opposite(Color player) { if (player == WHITE) { return BLACK; } return WHITE; } public static void drawGame(Color[][] board) { Color[][][] unit = new Color[400][19][19]; for (int h = 0; h < 400; h++) { for (int x = 0; x < 19; x++) { for (int y = 0; y < 19; y++) { unit[h][x][y] = YELLOW; } } } setXscale(0, 19); setYscale(0, 19); clear(YELLOW); setPenColor(BLACK); line(0, 0, 0, 19); line(19, 19, 19, 0); line(0, 19, 19, 19); line(0, 0, 19, 0); for (double i = 0; i < 19; i++) { line(0.0, i, 19, i); line(i, 0.0, i, 19); } for (int x = 0; x < 19; x++) { for (int y = 0; y < 19; y++) { if (board[x][y] != YELLOW) { setPenColor(board[x][y]); filledCircle(x, y, 0.47); setPenColor(GRAY); circle(x, y, 0.47); } } } int h = 0; } public static void main(String[] args) { int px; int py; Color[][] temp = new Color[19][19]; Color[][] board = new Color[19][19]; Color player = WHITE; for (int i = 0; i < 19; i++) { for (int h = 0; h < 19; h++) { board[i][h] = YELLOW; temp[i][h] = YELLOW; } } while (true) { drawGame(board); while (!mousePressed()) { } px = (int) round(mouseX()); py = (int) round(mouseY()); board[px][py] = player; while (mousePressed()) { } floodFill(px, py, player, board, temp); System.out.print("XXXXX = "+ temp[px][py]); if (checkTemp(temp, board, px, py)) { for (int x = 0; x < 19; x++) { for (int y = 0; y < 19; y++) { if (temp[x][y] == GRAY) { board[x][y] = YELLOW; } } } } player = opposite(player); } } private static boolean checkTemp(Color[][] temp, Color[][] board, int x, int y) { if (x < 19 && x > -1 && y < 19 && y > -1) { if (temp[x + 1][y] == YELLOW || temp[x - 1][y] == YELLOW || temp[x][y - 1] == YELLOW || temp[x][y + 1] == YELLOW) { return false; } } if (x == 18) { if (temp[x - 1][y] == YELLOW || temp[x][y - 1] == YELLOW || temp[x][y + 1] == YELLOW) { return false; } } if (y == 18) { if (temp[x + 1][y] == YELLOW || temp[x - 1][y] == YELLOW || temp[x][y - 1] == YELLOW) { return false; } } if (y == 0) { if (temp[x + 1][y] == YELLOW || temp[x - 1][y] == YELLOW || temp[x][y + 1] == YELLOW) { return false; } } if (x == 0) { if (temp[x + 1][y] == YELLOW || temp[x][y - 1] == YELLOW || temp[x][y + 1] == YELLOW) { return false; } } else { if (x < 19) { if (temp[x + 1][y] == GRAY) { checkTemp(temp, board, x + 1, y); } } if (x >= 0) { if (temp[x - 1][y] == GRAY) { checkTemp(temp, board, x - 1, y); } } if (y < 19) { if (temp[x][y + 1] == GRAY) { checkTemp(temp, board, x, y + 1); } } if (y >= 0) { if (temp[x][y - 1] == GRAY) { checkTemp(temp, board, x, y - 1); } } } return true; } private static void floodFill(int x, int y, Color player, Color[][] board, Color[][] temp) { if (board[x][y] != player) { return; } else { temp[x][y] = GRAY; System.out.println("x = " + x + " y = " + y); if (x < 19) { floodFill(x + 1, y, player, board, temp); } if (x >= 0) { floodFill(x - 1, y, player, board, temp); } if (y < 19) { floodFill(x, y + 1, player, board, temp); } if (y >= 0) { floodFill(x, y - 1, player, board, temp); } } } }
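
    The standard approach is a flood fill that collects a whole group of same-colored stones while checking whether the group touches any empty point (a liberty); if it has no liberties, every stone in the group is removed. A compact sketch of that algorithm in C++ (an illustration of the idea, not a fix of the Java code above); the board representation is illustrative:

        #include <utility>
        #include <vector>

        const int N = 19;
        enum Cell { EMPTY, BLACK, WHITE };

        // Collect the group containing (x, y) and report whether it has a liberty.
        bool collectGroup(Cell board[N][N], int x, int y, Cell color,
                          std::vector<std::pair<int,int>>& group, bool visited[N][N])
        {
            if (x < 0 || x >= N || y < 0 || y >= N) return false;
            if (board[x][y] == EMPTY) return true;          // found a liberty
            if (board[x][y] != color || visited[x][y]) return false;

            visited[x][y] = true;
            group.push_back({x, y});

            bool liberty = false;                            // |= keeps exploring the
            liberty |= collectGroup(board, x + 1, y, color, group, visited);   // whole group
            liberty |= collectGroup(board, x - 1, y, color, group, visited);
            liberty |= collectGroup(board, x, y + 1, color, group, visited);
            liberty |= collectGroup(board, x, y - 1, color, group, visited);
            return liberty;
        }

        // After a move at (px, py), check each enemy neighbor: if its group has
        // no liberties, clear every stone in that group.
        void removeCapturedAround(Cell board[N][N], int px, int py, Cell enemy)
        {
            const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
            for (int d = 0; d < 4; ++d) {
                int nx = px + dx[d], ny = py + dy[d];
                if (nx < 0 || nx >= N || ny < 0 || ny >= N) continue;
                if (board[nx][ny] != enemy) continue;
                bool visited[N][N] = {};
                std::vector<std::pair<int,int>> group;
                if (!collectGroup(board, nx, ny, enemy, group, visited))
                    for (auto& s : group) board[s.first][s.second] = EMPTY;
            }
        }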

    Read the article

  • Design pattern for procedural terrain assets

    - by Alex
    I'm developing a procedural terrain class at the moment and am stuck on the correct design pattern. The terrain is 2D and is constructed from a series of (x,y) points. I currently have a method that just randomly adds points to an array to generate a random spread of points. However, I need a more elaborate system for generating the terrain. The terrain will be built from a series of recurring terrain structures, e.g. a pit, jump, hill, etc. Each structure will have some randomness assigned to it: each hill height will be random, each pit size will be random, and so on. Each terrain structure will have: a property detailing the number of points making up that structure, and a method for generating the points (not absolutely necessary). My current thinking is to have a class for each terrain structure, create a fixed number of terrain elements ahead of the player, loop over these, and add the corresponding points to the game. What is the best way to create these procedural terrain structures when they are ultimately just a set of functions for generating terrain elements? Is a class for each terrain element excessive? I'm developing the game for iPhone, so any Objective-C-related answers would be welcome.
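
    If the structures really are just "functions that emit points plus some random parameters", a small class hierarchy with one virtual generate method keeps them interchangeable without much ceremony. A sketch in C++ rather than Objective-C (the shape translates directly); the names and numbers are illustrative:

        #include <cstdlib>
        #include <memory>
        #include <vector>

        struct Point { float x, y; };

        // Every terrain structure knows how many points it contributes and how
        // to append them, starting from wherever the previous structure ended.
        class TerrainStructure {
        public:
            virtual ~TerrainStructure() = default;
            virtual int pointCount() const = 0;
            virtual void generate(std::vector<Point>& out, Point start) const = 0;
        };

        class Hill : public TerrainStructure {
        public:
            int pointCount() const override { return 10; }
            void generate(std::vector<Point>& out, Point start) const override {
                float height = 20.0f + std::rand() % 40;        // randomized height
                for (int i = 0; i < pointCount(); ++i) {
                    float t = i / float(pointCount() - 1);
                    out.push_back({ start.x + i * 8.0f,
                                    start.y + height * (1.0f - (2*t - 1)*(2*t - 1)) });
                }
            }
        };

        // A Pit, Jump, etc. follow the same interface. The terrain builder keeps
        // a list of structures (picked at random) and calls generate() on each,
        // chaining the start point from the last emitted point.
        std::vector<Point> buildTerrain(const std::vector<std::unique_ptr<TerrainStructure>>& plan)
        {
            std::vector<Point> points;
            Point cursor{0.0f, 0.0f};
            for (const auto& s : plan) {
                s->generate(points, cursor);
                if (!points.empty()) cursor = points.back();
            }
            return points;
        }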

    Read the article

  • Direct3D11 and SharpDX - How to pass a model instance's world matrix as an input to a vertex shader

    - by Nathan Ridley
    Using Direct3D11, I'm trying to pass a matrix into my vertex shader from the instance buffer that is associated with a given model's vertices and I can't seem to construct my InputLayout without throwing an exception. The shader looks like this: cbuffer ConstantBuffer : register(b0) { matrix World; matrix View; matrix Projection; } struct VIn { float4 position: POSITION; matrix instance: INSTANCE; float4 color: COLOR; }; struct VOut { float4 position : SV_POSITION; float4 color : COLOR; }; VOut VShader(VIn input) { VOut output; output.position = mul(input.position, input.instance); output.position = mul(output.position, View); output.position = mul(output.position, Projection); output.color = input.color; return output; } The input layout looks like this: var elements = new[] { new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0, InputClassification.PerVertexData, 0), new InputElement("INSTANCE", 0, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 12, 0) }; InputLayout = new InputLayout(device, signature, elements); The buffer initialization looks like this: public ModelDeviceData(Model model, Device device) { Model = model; var vertices = Helpers.CreateBuffer(device, BindFlags.VertexBuffer, model.Vertices); var instances = Helpers.CreateBuffer(device, BindFlags.VertexBuffer, Model.Instances.Select(m => m.WorldMatrix).ToArray()); VerticesBufferBinding = new VertexBufferBinding(vertices, Utilities.SizeOf<ColoredVertex>(), 0); InstancesBufferBinding = new VertexBufferBinding(instances, Utilities.SizeOf<Matrix>(), 0); IndicesBuffer = Helpers.CreateBuffer(device, BindFlags.IndexBuffer, model.Triangles); } The buffer creation helper method looks like this: public static Buffer CreateBuffer<T>(Device device, BindFlags bindFlags, params T[] items) where T : struct { var len = Utilities.SizeOf(items); var stream = new DataStream(len, true, true); foreach (var item in items) stream.Write(item); stream.Position = 0; var buffer = new Buffer(device, stream, len, ResourceUsage.Default, bindFlags, CpuAccessFlags.None, ResourceOptionFlags.None, 0); return buffer; } The line that instantiates the InputLayout object throws this exception: *HRESULT: [0x80070057], Module: [General], ApiCode: [E_INVALIDARG/Invalid Arguments], Message: The parameter is incorrect.* Note that the data for each model instance is simply an instance of SharpDX.Matrix. 
EDIT Based on Tordin's answer, it sems like I have to modify my code like so: var elements = new[] { new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0, InputClassification.PerVertexData, 0), new InputElement("INSTANCE0", 0, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("INSTANCE1", 1, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("INSTANCE2", 2, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("INSTANCE3", 3, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 12, 0) }; and in the shader: struct VIn { float4 position: POSITION; float4 instance0: INSTANCE0; float4 instance1: INSTANCE1; float4 instance2: INSTANCE2; float4 instance3: INSTANCE3; float4 color: COLOR; }; VOut VShader(VIn input) { VOut output; matrix world = { input.instance0, input.instance1, input.instance2, input.instance3 }; output.position = mul(input.position, world); output.position = mul(output.position, View); output.position = mul(output.position, Projection); output.color = input.color; return output; } However I still get an exception.

    Read the article

  • Glenn Fiedler's fixed timestep with fake threads

    - by kaoD
    I've implemented Glenn Fiedler's Fix Your Timestep! quite a few times in single-threaded games. Now I'm facing a different situation: I'm trying to do this in JavaScript. I know JS is single-threaded, but I plan on using requestAnimationFrame for the rendering part. This leaves me with two independent fake threads: simulation and rendering (I suppose requestAnimationFrame isn't really threaded, is it? I don't think so, it would BREAK JS.) Timing in these threads is independent too: dt for simulation and render is not the same. If I'm not mistaken, simulation should be up to Fiedler's while loop end. After the while loop, accumulator < dt so I'm left with some unspent time (dt) in the simulation thread. The problem comes in the draw/interpolation phase: const double alpha = accumulator / dt; State state = currentState*alpha + previousState * ( 1.0 - alpha ); render( state ); In my render callback, I have the current timestamp to which I can subtract the last-simulated-in-physics-timestamp to have a dt for the current frame. Should I just forget about this dt and draw using the physics thread's dt? It seems weird, since, well, I want to interpolate for the unspent time between simulation and render too, right? Of course, I want simulation and rendering to be completely independent, but I can't get around the fact that in Glenn's implementation the renderer produces time and the simulation consumes it in discrete dt sized chunks. A similar question was asked in Semi Fixed-timestep ported to javascript but the question doesn't really get to the point, and answers there point to removing physics from the render thread (which is what I'm trying to do) or just keeping physics in the render callback too (which is what I'm trying to avoid.)

    Read the article

  • How can I load .FBX files?

    - by gardian06
    I am looking into options for the model assets for my game. I have gotten pretty good with Blender, and I want to use C++/DirectX9 (I don't need all the extras from 10+), but Blender 2.6 natively exports .fbx, not .x, and supposedly what Blender exports to .x is not entirely stable. In short, how do I import .fbx models into DirectX9 (I can work around not having animations if I must)? Is there middleware, or a conversion tool, that will maintain stability?
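
    One route is to load the .fbx yourself with the Autodesk FBX SDK and feed the mesh data into your own DirectX 9 vertex/index buffers. A minimal import sketch, assuming the C++ FBX SDK (2013-era API); error handling and the DirectX buffer creation are omitted:

        #include <fbxsdk.h>

        // Load a scene from an .fbx file.
        FbxScene* LoadFbxScene(FbxManager* manager, const char* path)
        {
            FbxImporter* importer = FbxImporter::Create(manager, "");
            if (!importer->Initialize(path, -1, manager->GetIOSettings()))
                return nullptr;                    // check importer->GetStatus() in real code

            FbxScene* scene = FbxScene::Create(manager, "scene");
            importer->Import(scene);
            importer->Destroy();
            return scene;
        }

        // Walk the node hierarchy looking for meshes.
        void ReadMeshes(FbxNode* node)
        {
            if (FbxMesh* mesh = node->GetMesh()) {
                int vertexCount = mesh->GetControlPointsCount();
                FbxVector4* positions = mesh->GetControlPoints();
                // Copy positions (and normals/UVs via the mesh's layer elements)
                // into a D3D9 vertex buffer here.
                (void)vertexCount; (void)positions;
            }
            for (int i = 0; i < node->GetChildCount(); ++i)
                ReadMeshes(node->GetChild(i));
        }

        int main()
        {
            FbxManager* manager = FbxManager::Create();
            manager->SetIOSettings(FbxIOSettings::Create(manager, IOSROOT));
            if (FbxScene* scene = LoadFbxScene(manager, "model.fbx"))
                ReadMeshes(scene->GetRootNode());
            manager->Destroy();
        }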

    Read the article

  • how to re-use a sprite in cocos2d-x

    - by zinking
    Sometimes it takes a while to create the sprite structures in a scene; I might need to set up structures inside a sprite to meet a requirement, so I would like to reuse such structures throughout the game again and again. I tried to do that: remove the child from its parent, detach it from the parent, and clean up the parent. But when I try to add the sprite to another scene, it won't pass the assertion that says the sprite already has a parent. Did I miss some step? An example: I have a sprite A which involves quite a few steps to construct, so I use it in scene A, layer A, and then I want to use it in scene A, layer B, scene B, layer A1, etc. Generally speaking, I don't want to reconstruct the sprite again.
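
    The usual pattern in cocos2d-x (2.x-style API shown here; a sketch of the idea, not a verified fix for this exact assertion) is to retain the sprite, remove it from its current parent without cleanup so its actions and children survive, and only then add it to the new parent:

        #include "cocos2d.h"
        USING_NS_CC;

        // Keep an expensive-to-build sprite alive across scenes/layers.
        void MoveSpriteToNewParent(CCSprite* sprite, CCNode* newParent)
        {
            sprite->retain();                               // keep it alive while unparented
            if (sprite->getParent())
                sprite->removeFromParentAndCleanup(false);  // detach, keep actions/children
            newParent->addChild(sprite);                    // no parent now, so this passes
            sprite->release();                              // newParent holds its own reference
        }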

    Read the article

  • C# XNA Handle mouse events?

    - by user406470
    I'm making a 2D game engine called Clixel over on GitHub. The problem I have relates to two classes, ClxMouse and ClxButton. In it I have a mouse class - the code for that can be viewed here. ClxMouse using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; namespace org.clixel { public class ClxMouse : ClxSprite { private MouseState _curmouse, _lastmouse; public int Sensitivity = 3; public bool Lock = true; public Vector2 Change { get { return new Vector2(_curmouse.X - _lastmouse.X, _curmouse.Y - _lastmouse.Y); } } private int _scrollwheel; public int ScrollWheel { get { return _scrollwheel; } } public bool LeftDown { get { if (_curmouse.LeftButton == ButtonState.Pressed) return true; else return false; } } public bool RightDown { get { if (_curmouse.RightButton == ButtonState.Pressed) return true; else return false; } } public bool MiddleDown { get { if (_curmouse.MiddleButton == ButtonState.Pressed) return true; else return false; } } public bool LeftPressed { get { if (_curmouse.LeftButton == ButtonState.Pressed && _lastmouse.LeftButton == ButtonState.Released) return true; else return false; } } public bool RightPressed { get { if (_curmouse.RightButton == ButtonState.Pressed && _lastmouse.RightButton == ButtonState.Released) return true; else return false; } } public bool MiddlePressed { get { if (_curmouse.MiddleButton == ButtonState.Pressed && _lastmouse.MiddleButton == ButtonState.Released) return true; else return false; } } public bool LeftReleased { get { if (_curmouse.LeftButton == ButtonState.Released && _lastmouse.LeftButton == ButtonState.Pressed) return true; else return false; } } public bool RightReleased { get { if (_curmouse.RightButton == ButtonState.Released && _lastmouse.RightButton == ButtonState.Pressed) return true; else return false; } } public bool MiddleReleased { get { if (_curmouse.MiddleButton == ButtonState.Released && _lastmouse.MiddleButton == ButtonState.Pressed) return true; else return false; } } public MouseState CurMouse { get { return _curmouse; } } public MouseState LastMouse { get { return _lastmouse; } } public ClxMouse() : base(ClxG.Textures.Default.Cursor) { _curmouse = Mouse.GetState(); _lastmouse = _curmouse; CollisionBox = new Rectangle(ClxG.Screen.Center.X, ClxG.Screen.Center.Y, Texture.Width, Texture.Height); this.Solid = false; DefaultPosition = new Vector2(CollisionBox.X, CollisionBox.Y); Mouse.SetPosition(CollisionBox.X, CollisionBox.Y); } public ClxMouse(Texture2D _texture) : base(_texture) { _curmouse = Mouse.GetState(); _lastmouse = _curmouse; CollisionBox = new Rectangle(ClxG.Screen.Center.X, ClxG.Screen.Center.Y, Texture.Width, Texture.Height); DefaultPosition = new Vector2(CollisionBox.X, CollisionBox.Y); } public override void Update() { _lastmouse = _curmouse; _curmouse = Mouse.GetState(); if (_curmouse != _lastmouse) { if (ClxG.Game.IsActive) { _scrollwheel = _curmouse.ScrollWheelValue; Velocity = new Vector2(Change.X / Sensitivity, Change.Y / Sensitivity); if (Lock) Mouse.SetPosition(ClxG.Screen.Center.X, ClxG.Screen.Center.Y); _curmouse = Mouse.GetState(); } base.Update(); } } public override void Draw(SpriteBatch _sb) { base.Draw(_sb); } } } ClxButton using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; namespace org.clixel { public class ClxButton : ClxSprite { /// <summary> /// The color when the mouse is over the button /// </summary> public Color HoverColor; /// <summary> /// The color when the color is being clicked /// </summary> public Color 
ClickColor; /// <summary> /// The color when the button is inactive /// </summary> public Color InactiveColor; /// <summary> /// The color when the button is active /// </summary> public Color ActiveColor; /// <summary> /// The color after the button has been clicked. /// </summary> public Color ClickedColor; /// <summary> /// The text to be displayed on the button, set to "" if no text is needed. /// </summary> public string Text; /// <summary> /// The ClxText object to be displayed. /// </summary> public ClxText TextRender; /// <summary> /// The ClxState that should be ResetAndShow() when the button is clicked. /// </summary> public ClxState ClickState; /// <summary> /// Collision check to make sure onCollide() only runs once per frame, /// since only the mouse needs to be collision checked. /// </summary> private bool _runonce = false; /// <summary> /// Gets a value indicating whether this instance is colliding. /// </summary> /// <value> /// <c>true</c> if this instance is colliding; otherwise, <c>false</c>. /// </value> public bool IsColliding { get { return _runonce; } } /// <summary> /// Initializes a new instance of the <see cref="ClxButton"/> class. /// </summary> public ClxButton() : base(ClxG.Textures.Default.Button) { HoverColor = Color.Red; ClickColor = Color.Blue; InactiveColor = Color.Gray; ActiveColor = Color.White; ClickedColor = Color.Yellow; Text = Name + ID + " Unset!"; TextRender = new ClxText(); TextRender.Text = Text; TextRender.TextPadding = new Vector2(5, 5); ClickState = null; CollideObjects(ClxG.Mouse); } /// <summary> /// Initializes a new instance of the <see cref="ClxButton"/> class. /// </summary> /// <param name="_texture">The button texture.</param> public ClxButton(Texture2D _texture) : base(_texture) { HoverColor = Color.Red; ClickColor = Color.Blue; InactiveColor = Color.Gray; ActiveColor = Color.White; ClickedColor = Color.Yellow; Texture = _texture; Text = Name + ID; TextRender = new ClxText(); TextRender.Name = this.Name + ".TextRender"; TextRender.Text = Text; TextRender.TextPadding = new Vector2(5, 5); TextRender.Reset(); ClickState = null; CollideObjects(ClxG.Mouse); } /// <summary> /// Draws the debug information, run from ClxG.DrawDebug unless manual control is assumed. /// </summary> /// <param name="_sb">SpriteBatch used for drawing.</param> public override void DrawDebug(SpriteBatch _sb) { _runonce = false; TextRender.DrawDebug(_sb); _sb.Draw(Texture, ActualRectangle, new Rectangle(0, 0, Texture.Width, Texture.Height), DebugColor, Rotation, Origin, Flip, Layer); _sb.Draw(ClxG.Textures.Default.DebugBG, new Rectangle(ActualRectangle.X - DebugLineWidth, ActualRectangle.Y - DebugLineWidth, ActualRectangle.Width + DebugLineWidth * 2, ActualRectangle.Height + DebugLineWidth * 2), new Rectangle(0, 0, ClxG.Textures.Default.DebugBG.Width, ClxG.Textures.Default.DebugBG.Height), DebugOutline, Rotation, Origin, Flip, Layer - 0.1f); _sb.Draw(ClxG.Textures.Default.DebugBG, ActualRectangle, new Rectangle(0, 0, ClxG.Textures.Default.DebugBG.Width, ClxG.Textures.Default.DebugBG.Height), DebugBGColor, Rotation, Origin, Flip, Layer - 0.01f); } /// <summary> /// Draws using the SpriteBatch, run from ClxG.Draw unless manual control is assumed. 
/// </summary> /// <param name="_sb">SpriteBatch used for drawing.</param> public override void Draw(SpriteBatch _sb) { _runonce = false; TextRender.Draw(_sb); if (Visible) if (Debug) { DrawDebug(_sb); } else _sb.Draw(Texture, ActualRectangle, new Rectangle(0, 0, Texture.Width, Texture.Height), Color, Rotation, Origin, Flip, Layer); } /// <summary> /// Updates this instance. /// </summary> public override void Update() { if (this.Color != ActiveColor) this.Color = ActiveColor; TextRender.Layer = this.Layer + 0.03f; TextRender.Text = Text; TextRender.Scale = .5f; TextRender.Name = this.Name + ".TextRender"; TextRender.Origin = new Vector2(TextRender.CollisionBox.Center.X, TextRender.CollisionBox.Center.Y); TextRender.Center(this); TextRender.Update(); this.CollisionBox.Width = (int)(TextRender.CollisionBox.Width * TextRender.Scale) + (int)(TextRender.TextPadding.X * 2); this.CollisionBox.Height = (int)(TextRender.CollisionBox.Height * TextRender.Scale) + (int)(TextRender.TextPadding.Y * 2); base.Update(); } /// <summary> /// Collide event, takes the colliding object to call it's proper collision code. /// You'd want to use something like if(typeof(collider) == typeof(ClxObject) /// </summary> /// <param name="collider">The colliding object.</param> public override void onCollide(ClxObject collider) { if (!_runonce) { _runonce = true; UpdateEvents(); base.onCollide(collider); } } /// <summary> /// Updates the mouse based events. /// </summary> public void UpdateEvents() { onHover(); if (ClxG.Mouse.LeftReleased) { onLeftReleased(); return; } if (ClxG.Mouse.RightReleased) { onRightReleased(); return; } if (ClxG.Mouse.MiddleReleased) { onMiddleReleased(); return; } if (ClxG.Mouse.LeftPressed) { onLeftClicked(); return; } if (ClxG.Mouse.RightPressed) { onRightClicked(); return; } if (ClxG.Mouse.MiddlePressed) { onMiddleClicked(); return; } if (ClxG.Mouse.LeftDown) { onLeftClick(); return; } if (ClxG.Mouse.RightDown) { onRightClick(); return; } if (ClxG.Mouse.MiddleDown) { onMiddleClick(); return; } } /// <summary> /// Shows the state of the click. /// </summary> public void ShowClickState() { if (ClickState != null) { ClickState.ResetAndShow(); } } /// <summary> /// Hover event /// </summary> virtual public void onHover() { this.Color = HoverColor; } /// <summary> /// Left click event /// </summary> virtual public void onLeftClick() { this.Color = ClickColor; } /// <summary> /// Right click event /// </summary> virtual public void onRightClick() { } /// <summary> /// Middle click event /// </summary> virtual public void onMiddleClick() { } /// <summary> /// Left click event, called once per click /// </summary> virtual public void onLeftClicked() { ShowClickState(); } /// <summary> /// Right click event, called once per click /// </summary> virtual public void onRightClicked() { this.Reset(); } /// <summary> /// Middle click event, called once per click /// </summary> virtual public void onMiddleClicked() { } /// <summary> /// Ons the left released. /// </summary> virtual public void onLeftReleased() { this.Color = ClickedColor; } virtual public void onRightReleased() { } virtual public void onMiddleReleased() { } } } The issue I have is that I have all these have event styled methods, especially in ClxButton with all the onLeftClick, onRightClick, etc, etc. Is there a better way for me to handle these events to be a lot more easier for a programmer to use? I was looking at normal events on some other sites, (I'd post them but I need more rep.) 
and didn't really see a good way to implement delegate events in my framework. I'm not really sure how these events work; could someone possibly lay out how they are processed for me? TL;DR: Is there a better way to handle events like this? Are events a viable solution to this problem? Thanks in advance for any help.
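
    The underlying pattern the question is reaching for is simply a list of callbacks invoked when something happens, which is what C# delegates/events provide for free. As a language-neutral illustration (sketched in C++ with std::function, not in the framework's own C#), a button that exposes a clicked event might look like this:

        #include <functional>
        #include <vector>

        class Button {
        public:
            using Handler = std::function<void()>;

            // Subscribers register callbacks instead of the button hard-coding
            // a virtual onLeftClicked/onRightClicked/... method for every case.
            void onClicked(Handler h) { clickedHandlers_.push_back(std::move(h)); }

            // Called from the input/update code when a click on this button is detected.
            void notifyClicked() {
                for (auto& h : clickedHandlers_) h();
            }
        private:
            std::vector<Handler> clickedHandlers_;
        };

        // Usage:
        //   Button play;
        //   play.onClicked([]{ /* switch state, play a sound, ... */ });
        //   ... later, when the mouse is released over the button:
        //   play.notifyClicked();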

    Read the article

  • Box2d too much for Circle/Circle collision detection?

    - by Joey Green
    I'm using cocos2d to program a game and am using box2d for collision detection. Everything in my game is a circle, and for some reason I'm having a problem where sometimes things are not detected as colliding when they should be. I'm thinking of rolling my own collision detection, since I don't think it would be too hard. My questions are: Would this approach work for collision detection between circles? a. Get the radius of circle A and circle B. b. Get the distance between the centers of circle A and circle B. c. If the distance is greater than or equal to the sum of circle A's radius and circle B's radius, then we have a hit. Should box2d be used for such simple collision detection? There is no physics in this game.
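
    The approach is sound, with one detail reversed: two circles overlap when the distance between their centers is less than or equal to the sum of the radii. Comparing squared values avoids the square root entirely. A minimal sketch in C++:

        struct Circle {
            float x, y;    // center
            float r;       // radius
        };

        // True when the circles touch or overlap. Compare squared values so no
        // sqrt is needed; the test direction is "distance <= sum of radii".
        bool CirclesOverlap(const Circle& a, const Circle& b)
        {
            const float dx = b.x - a.x;
            const float dy = b.y - a.y;
            const float radiusSum = a.r + b.r;
            return dx * dx + dy * dy <= radiusSum * radiusSum;
        }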

    Read the article

  • Keystone Correction using 3D-Points of Kinect

    - by philllies
    With XNA, I am displaying a simple rectangle which is projected onto the floor. The projector can be placed at an arbitrary position. Obviously, the projected rectangle gets distorted according to the projector's position and angle. A Kinect scans the floor looking for the four corners. My goal is to transform the original rectangle so that the projection is no longer distorted, by basically pre-warping the rectangle. My first approach was to do everything in 2D: first compute a perspective transformation (using OpenCV's warpPerspective()) from the scanned points to the internal rectangle's points, and apply the inverse to the rectangle. This seemed to work, but it was too slow because it couldn't be rendered on the GPU. The second approach was to do everything in 3D in order to use XNA's rendering features. First, I would display a plane, scan its corners with the Kinect, and map the received 3D points to the original plane. Theoretically, I could apply the inverse of the perspective transformation to the plane, as I did in the 2D approach. However, since XNA works with a view and projection matrix, I can't just call a function such as warpPerspective() and get the desired result; I would need to compute new parameters for the camera's view and projection matrices. Question: Is it possible to compute these parameters and split them into two matrices (view and projection)? If not, is there another approach I could use?
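
    For reference, the 2D version described above boils down to a 4-point homography. A minimal OpenCV sketch in C++ of that first approach (an illustration only; it does not address the XNA view/projection decomposition being asked about, and the function and parameter names are illustrative):

        #include <opencv2/imgproc.hpp>

        // Estimate the homography that maps the corners the Kinect observed onto
        // the rectangle's ideal corners, then apply its inverse to the rectangle
        // image so the physical projection cancels out - the pre-warp step.
        cv::Mat PreWarp(const cv::Mat& rectImage,
                        const cv::Point2f scannedCorners[4],   // corners measured by the Kinect
                        const cv::Point2f idealCorners[4])     // corners of the undistorted rectangle
        {
            cv::Mat H = cv::getPerspectiveTransform(scannedCorners, idealCorners);
            cv::Mat preWarped;
            cv::warpPerspective(rectImage, preWarped, H.inv(), rectImage.size());
            return preWarped;
        }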

    Read the article

  • Interleaving Arrays in OpenGL

    - by Benjamin Danger Johnson
    In my pursuit to write code that matches todays OpenGL standards I have found that I am completely clueless about interleaving arrays. I've tried and debugged just about everywhere I can think of but I can't get my model to render using interleaved arrays (It worked when it was configuered to use multiple arrays) Now I know that all the data is properly being parsed from an obj file and information is being copied properly copied into the Vertex object array, but I still can't seem to get anything to render. Below is the code for initializing a model and drawing it (along with the Vertex struct for reference.) Vertex: struct Vertex { glm::vec3 position; glm::vec3 normal; glm::vec2 uv; glm::vec3 tangent; glm::vec3 bitangent; }; Model Constructor: Model::Model(const char* filename) { bool result = loadObj(filename, vertices, indices); glGenVertexArrays(1, &vertexArrayID); glBindVertexArray(vertexArrayID); glGenBuffers(1, &vertexbuffer); glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer); glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex), &vertices[0], GL_STATIC_DRAW); glGenBuffers(1, &elementbuffer); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementbuffer); glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned short), &indices[0], GL_STATIC_DRAW); } Draw Model: Model::Draw(ICamera camera) { GLuint matrixID = glGetUniformLocation(programID, "mvp"); GLuint positionID = glGetAttribLocation(programID, "position_modelspace"); GLuint uvID = glGetAttribLocation(programID, "uv"); GLuint normalID = glGetAttribLocation(programID, "normal_modelspace"); GLuint tangentID = glGetAttribLocation(programID, "tangent_modelspace"); GLuint bitangentID = glGetAttribLocation(programID, "bitangent_modelspace"); glm::mat4 projection = camera->GetProjectionMatrix(); glm::mat4 view = camera->GetViewMatrix(); glm::mat4 model = glm::mat4(1.0f); glm::mat4 mvp = projection * view * model; glUniformMatrix4fv(matrixID, 1, GL_FALSE, &mvp[0][0]); glBindVertexArray(vertexArrayID); glEnableVertexAttribArray(positionID); glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer); glVertexAttribPointer(positionID, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), &vertices[0].position); glEnableVertexAttribArray(uvID); glVertexAttribPointer(uvID, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), &vertices[0].uv); glEnableVertexAttribArray(normalID); glVertexAttribPointer(normalID, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), &vertices[0].normal); glEnableVertexAttribArray(tangentID); glVertexAttribPointer(tangentID, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), &vertices[0].tangent); glEnableVertexAttribArray(bitangentID); glVertexAttribPointer(bitangentID, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), &vertices[0].bitangent); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementbuffer); glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_SHORT, (void*)0); glDisableVertexAttribArray(positionID); glDisableVertexAttribArray(uvID); glDisableVertexAttribArray(normalID); glDisableVertexAttribArray(tangentID); glDisableVertexAttribArray(bitangentID); }

    Read the article

  • How do you create a .XNB Font file for use with CocosSharp?

    - by Chris Pietschmann
    I know the .XNB file format is an XNA thing and that CocosSharp inherits it from its MonoGame roots. However, there doesn't seem to be any information on how to create your own .XNB fonts to use with CocosSharp. I've tried searching but can't find any information. Could someone explain it here, or point me to a tutorial on how to create a .XNB font file for use with CocosSharp? A site to download already-compiled .XNB fonts would also be acceptable. Update: Another thing that makes this tricky is that I guess XNA Game Studio could be used, but it's not compatible with Windows 8.1, which is what I currently use on my dev machine...

    Read the article

  • Correct way to drive Main Loop in Cocoa

    - by Kyle
    I'm writing a game that currently runs on both Windows and Mac OS X. My main game loop looks like this:

        while(running)
        {
            ProcessOSMessages(); // Using Peek/Translate message in Win32
                                 // and nextEventMatchingMask in Cocoa
            GameUpdate();
            GameRender();
        }

    That's obviously simplified a bit, but that's the gist of it. On Windows, where I have full control over the application, it works great. Unfortunately, Apple has its own way of doing things in Cocoa apps. When I first tried to implement my main loop in Cocoa, I couldn't figure out where to put it, so I created my own NSApplication per this post. I threw my GameFrame() right into my run function and everything worked correctly. However, I don't feel like it's the "right" way to do it. I would like to play nicely within Apple's ecosystem rather than hack together a solution that works. This article from Apple describes the old way to do it, with an NSTimer, and the "new" way using CVDisplayLink. I've hooked up the CVDisplayLink version, but it just feels... odd. I don't like the idea of my game being driven by the display rather than the other way around. Are my only two options to use a CVDisplayLink or to roll my own NSApplication? Neither of those solutions feels quite right.

    Read the article
