Search Results

Search found 53087 results on 2124 pages for 'database development'.


  • Get the onended event for an AudioBuffer in HTML5/Chrome

    - by Matthew James Davis
    I am playing an audio file in Chrome and I want to detect when playback has ended so I can delete references to it. Here is my code:

        var source = context.createBufferSource();
        source.buffer = sound.buffer;
        source.loop = sound.loop;
        source.onended = function() { delete playingSounds[soundName]; };
        source.connect(mainNode);
        source.start(0, sound.start, sound.length);

    However, the event handler doesn't fire. Is this not yet supported, even though it is in the W3C specification? Or am I doing something wrong?

    Read the article

  • Pyglet: How to use second screen's vsync

    - by BaldDude
    Does anybody know if it's possible to use the vsync of the second monitor instead of the first one with pyglet? I have two monitors, one running at 60 Hz and the other at 120 Hz. I want to be able to put my application on whichever monitor I like and have it use that monitor's refresh rate to swap the buffers. This needs to be cross-platform. I found this information: pyglet.window. But I was wondering if anybody knows a way to do this. Thanks for your help.

    Read the article

  • Incorrect colour blending when using a pixel shader with XNA

    - by MazK
    I'm using XNA 4.0 to create a 2D game, and while implementing a layer-tinting pixel shader I noticed that whenever the texture's alpha value is anywhere between 0 and 1, the end result differs from what I expect. The tinting works by selecting a colour and setting the amount of tint. The shader first works out the starting colour (for each of r, g, b and a):

        float red = texCoord.r * vertexColour.r;

    and then the final tinted colour:

        output.r = red + (tintColour.r - red) * tintAmount;

    The alpha value isn't tinted and is left as:

        output.a = texCoord.a * vertexColour.a;

    The picture in the link below shows different backdrops behind an energy ball object whose outer glow hasn't blended as I would like it to. The middle two are incorrect: the second (non-tinted) one should not show a glow against a white background, and the third should be entirely invisible. The blending function is NonPremultiplied. Why is the alpha value interfering with the final colour?

    Read the article

  • LWJGL Voxel game, glDrawArrays

    - by user22015
    I've been learning about 3D for a couple days now. I managed to create a chunk (8x8x8). Add optimization so it only renders the active and visible blocks. Then I added so it only draws the faces which don't have a neighbor. Next what I found from online research was that it is better to use glDrawArrays to increase performance. So I restarted my little project. Render an entire chunck, add optimization so it only renders active and visible blocks. But now I want to add so it only draws the visible faces while using glDrawArrays. This is giving me some trouble with calling glDrawArrays because I'm passing a wrong count parameter. > # A fatal error has been detected by the Java Runtime Environment: > # > # EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x0000000006e31a03, pid=1032, tid=3184 > # Stack: [0x00000000023a0000,0x00000000024a0000], sp=0x000000000249ef70, free space=1019k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) C [ig4icd64.dll+0xa1a03] Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) j org.lwjgl.opengl.GL11.nglDrawArrays(IIIJ)V+0 j org.lwjgl.opengl.GL11.glDrawArrays(III)V+20 j com.vox.block.Chunk.render()V+410 j com.vox.ChunkManager.render()V+30 j com.vox.Game.render()V+11 j com.vox.GameHandler.render()V+12 j com.vox.GameHandler.gameLoop()V+15 j com.vox.Main.main([Ljava/lang/StringV+13 v ~StubRoutines::call_stub public class Chunk { public final static int[] DIM = { 8, 8, 8}; public final static int CHUNK_SIZE = (DIM[0] * DIM[1] * DIM[2]); Block[][][] blocks; private int index; private int vBOVertexHandle; private int vBOColorHandle; public Chunk(int index) { this.index = index; vBOColorHandle = GL15.glGenBuffers(); vBOVertexHandle = GL15.glGenBuffers(); blocks = new Block[DIM[0]][DIM[1]][DIM[2]]; for(int x = 0; x < DIM[0]; x++){ for(int y = 0; y < DIM[1]; y++){ for(int z = 0; z < DIM[2]; z++){ blocks[x][y][z] = new Block(); } } } } public void render(){ Block curr; FloatBuffer vertexPositionData2 = BufferUtils.createFloatBuffer(CHUNK_SIZE * 6 * 12); FloatBuffer vertexColorData2 = BufferUtils.createFloatBuffer(CHUNK_SIZE * 6 * 12); int counter = 0; for(int x = 0; x < DIM[0]; x++){ for(int y = 0; y < DIM[1]; y++){ for(int z = 0; z < DIM[2]; z++){ curr = blocks[x][y][z]; boolean[] neightbours = validateNeightbours(x, y, z); if(curr.isActive() && !neightbours[6]) { float[] arr = curr.createCube((index*DIM[0]*Block.BLOCK_SIZE*2) + x*2, y*2, z*2, neightbours); counter += arr.length; vertexPositionData2.put(arr); vertexColorData2.put(createCubeVertexCol(curr.getCubeColor())); } } } } vertexPositionData2.flip(); vertexPositionData2.flip(); FloatBuffer vertexPositionData = BufferUtils.createFloatBuffer(vertexColorData2.position()); FloatBuffer vertexColorData = BufferUtils.createFloatBuffer(vertexColorData2.position()); for(int i = 0; i < vertexPositionData2.position(); i++) vertexPositionData.put(vertexPositionData2.get(i)); for(int i = 0; i < vertexColorData2.position(); i++) vertexColorData.put(vertexColorData2.get(i)); vertexColorData.flip(); vertexPositionData.flip(); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vBOVertexHandle); GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vertexPositionData, GL15.GL_STATIC_DRAW); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vBOColorHandle); GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vertexColorData, GL15.GL_STATIC_DRAW); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0); GL11.glPushMatrix(); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vBOVertexHandle); GL11.glVertexPointer(3, 
GL11.GL_FLOAT, 0, 0L); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vBOColorHandle); GL11.glColorPointer(3, GL11.GL_FLOAT, 0, 0L); System.out.println("Counter " + counter); GL11.glDrawArrays(GL11.GL_LINE_LOOP, 0, counter); GL11.glPopMatrix(); //blocks[r.nextInt(DIM[0])][2][r.nextInt(DIM[2])].setActive(false); } //Random r = new Random(); private float[] createCubeVertexCol(float[] CubeColorArray) { float[] cubeColors = new float[CubeColorArray.length * 4 * 6]; for (int i = 0; i < cubeColors.length; i++) { cubeColors[i] = CubeColorArray[i % CubeColorArray.length]; } return cubeColors; } private boolean[] validateNeightbours(int x, int y, int z) { boolean[] bools = new boolean[7]; bools[6] = true; bools[6] = bools[6] && (bools[0] = y > 0 && y < DIM[1]-1 && blocks[x][y+1][z].isActive());//top bools[6] = bools[6] && (bools[1] = y > 0 && y < DIM[1]-1 && blocks[x][y-1][z].isActive());//bottom bools[6] = bools[6] && (bools[2] = z > 0 && z < DIM[2]-1 && blocks[x][y][z+1].isActive());//front bools[6] = bools[6] && (bools[3] = z > 0 && z < DIM[2]-1 && blocks[x][y][z-1].isActive());//back bools[6] = bools[6] && (bools[4] = x > 0 && x < DIM[0]-1 && blocks[x+1][y][z].isActive());//left bools[6] = bools[6] && (bools[5] = x > 0 && x < DIM[0]-1 && blocks[x-1][y][z].isActive());//right return bools; } } public class Block { public static final float BLOCK_SIZE = 1f; public enum BlockType { Default(0), Grass(1), Dirt(2), Water(3), Stone(4), Wood(5), Sand(6), LAVA(7); int BlockID; BlockType(int i) { BlockID=i; } } private boolean active; private BlockType type; public Block() { this(BlockType.Default); } public Block(BlockType type){ active = true; this.type = type; } public float[] getCubeColor() { switch (type.BlockID) { case 1: return new float[] { 1, 1, 0 }; case 2: return new float[] { 1, 0.5f, 0 }; case 3: return new float[] { 0, 0f, 1f }; default: return new float[] {0.5f, 0.5f, 1f}; } } public float[] createCube(float x, float y, float z, boolean[] neightbours){ int counter = 0; for(boolean b : neightbours) if(!b) counter++; float[] array = new float[counter*12]; int offset = 0; if(!neightbours[0]){//top array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; } if(!neightbours[1]){//bottom array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; } if(!neightbours[2]){//front array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = 
x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; } if(!neightbours[3]){//back array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; } if(!neightbours[4]){//left array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; } if(!neightbours[5]){//right array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE; } return Arrays.copyOf(array, offset); } public boolean isActive() { return active; } public void setActive(boolean active) { this.active = active; } public BlockType getType() { return type; } public void setType(BlockType type) { this.type = type; } } I highlighted the code I'm concerned about in this following screenshot: - http://imageshack.us/a/img820/7606/18626782.png - (Not allowed to upload images yet) I know the code is a mess but I'm just testing stuff so I wasn't really thinking about it.
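
    One thing that stands out in the code above: glDrawArrays expects the number of vertices to draw, while counter accumulates arr.length, which is the number of floats written. With a 3-component position pointer those differ by a factor of three, which can easily push the driver past the end of the buffer. A minimal sketch of the relationship (DrawCountHelper and drawPositions are hypothetical names; only the count calculation is the point):

        import java.nio.FloatBuffer;

        import org.lwjgl.opengl.GL11;

        public final class DrawCountHelper {
            private DrawCountHelper() {}

            // Draws a flipped buffer of tightly packed (x, y, z) positions.
            public static void drawPositions(FloatBuffer vertexPositionData, int mode) {
                int floatsWritten = vertexPositionData.limit(); // limit() after flip() == floats actually put()
                int vertexCount = floatsWritten / 3;            // glDrawArrays wants vertices, not floats
                GL11.glDrawArrays(mode, 0, vertexCount);
            }
        }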

    Read the article

  • 3D Dice using Maya - for integration with iOS game app

    - by Anil
    My designer is building a 3D model of a dice in Maya. I want to integrate this into my iOS app so that the user can spin the dice and get a number, and then play the game using that number. So I have two questions: 1) How can I make the dice spin and stop at a random position so that a number is presented to the user? 2) Once it stops spinning, how can I detect programmatically which number is displayed to the user? Many thanks. -Anil

    Read the article

  • An issue with tessellating a model with DirectX11

    - by Paul Ske
    I took the hardware tessellation tutorial from Rastertek and implemended texturing instead of color. This is great, so I wanted to implemended the same techique to a model inside my game editor and I noticed it doesn't draw anything. I compared the detailed tessellation from DirectX SDK sample. Inside the shader file - if I replace the HullInputType with PixelInputType it draws. So, I think because when I compiled the shaders inside the program it compiles VertexShader, PixelShader, HullShader then DomainShader. Isn't it suppose to be VertexShader, HullSHader, DomainShader then PixelShader or does it really not matter? I am just curious why wouldn't the model even be drawn when HullInputType but renders fine with PixelInputType. Shader Code: [code] cbuffer ConstantBuffer { float4x4 WVP; float4x4 World; // the rotation matrix float3 lightvec; // the light's vector float4 lightcol; // the light's color float4 ambientcol; // the ambient light's color bool isSelected; } cbuffer cameraBuffer { float3 cameraDirection; float padding; } cbuffer TessellationBuffer { float tessellationAmount; float3 padding2; } struct ConstantOutputType { float edges[3] : SV_TessFactor; float inside : SV_InsideTessFactor; }; Texture2D Texture; Texture2D NormalTexture; SamplerState ss { MinLOD = 5.0f; MipLODBias = 0.0f; }; struct HullOutputType { float3 position : POSITION; float2 texcoord : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; }; struct HullInputType { float4 position : POSITION; float2 texcoord : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; }; struct VertexInputType { float4 position : POSITION; float2 texcoord : TEXCOORD; float3 normal : NORMAL; float3 tangent : TANGENT; uint uVertexID : SV_VERTEXID; }; struct PixelInputType { float4 position : SV_POSITION; float2 texcoord : TEXCOORD0; // texture coordinates float3 normal : NORMAL; float3 tangent : TANGENT; float4 color : COLOR; float3 viewDirection : TEXCOORD1; float4 depthBuffer : TEXTURE0; }; HullInputType VShader(VertexInputType input) { HullInputType output; output.position.w = 1.0f; output.position = mul(input.position,WVP); output.texcoord = input.texcoord; output.normal = input.normal; output.tangent = input.tangent; //output.normal = mul(normal,World); //output.tangent = mul(tangent,World); //output.color = output.color; //output.texcoord = texcoord; // set the texture coordinates, unmodified return output; } ConstantOutputType TexturePatchConstantFunction(InputPatch inputPatch,uint patchID : SV_PrimitiveID) { ConstantOutputType output; output.edges[0] = tessellationAmount; output.edges[1] = tessellationAmount; output.edges[2] = tessellationAmount; output.inside = tessellationAmount; return output; } [domain("tri")] [partitioning("integer")] [outputtopology("triangle_cw")] [outputcontrolpoints(3)] [patchconstantfunc("TexturePatchConstantFunction")] HullOutputType HShader(InputPatch patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID) { HullOutputType output; // Set the position for this control point as the output position. output.position = patch[pointId].position; // Set the input color as the output color. 
output.texcoord = patch[pointId].texcoord; output.normal = patch[pointId].normal; output.tangent = patch[pointId].tangent; return output; } [domain("tri")] PixelInputType DShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch patch) { float3 vertexPosition; float2 uvPosition; float4 worldposition; PixelInputType output; // Interpolate world space position with barycentric coordinates float3 vWorldPos = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; // Determine the position of the new vertex. vertexPosition = vWorldPos; // Calculate the position of the new vertex against the world, view, and projection matrices. output.position = mul(float4(vertexPosition, 1.0f),WVP); // Send the input color into the pixel shader. output.texcoord = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; output.normal = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; output.tangent = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; //output.depthBuffer = output.position; //output.depthBuffer.w = 1.0f; //worldposition = mul(output.position,WVP); //output.viewDirection = cameraDirection.xyz - worldposition.xyz; //output.viewDirection = normalize(output.viewDirection); return output; } [/code] Somethings are commented out but will be in place when fixed. I'm probably not connecting something correctly.

    Read the article

  • Bitwise operators in DX9 ps_2_0 shader

    - by lapin
    I've got the following code in a shader:

        // v & y are both floats
        nPixel = v;
        nPixel << 8;
        nPixel |= y;

    and this gives me the following error in compilation:

        shader.fx(80,10): error X3535: Bitwise operations not supported on legacy targets.
        shader.fx(92,18): ID3DXEffectCompiler::CompileEffect: There was an error compiling expression
        ID3DXEffectCompiler: Compilation failed

    The error is on the following line:

        nPixel |= y;

    What am I doing wrong here?
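
    The X3535 error means the ps_2_0 profile simply has no bitwise instructions, so the shift-and-or has to be expressed arithmetically instead. A small illustration of the equivalent arithmetic (written in Java only because the maths is language-independent; in HLSL it is the same expression on floats), assuming v and y hold whole numbers from 0 to 255:

        // Arithmetic equivalent of (v << 8) | y for whole values 0..255:
        // shifting left by 8 is multiplying by 256, and OR-ing in a value that
        // fits below bit 8 is just adding it.
        public final class PackBytes {
            public static float pack(float v, float y) {
                return v * 256.0f + y;
            }
        }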

    Read the article

  • glsl shader to allow color change of skydome ogre3d

    - by Tim
    I'm still very new to all this but learning a lot. I'm putting together an application using Ogre3d as the rendering engine. So far I've got it running, with a simple scene, a day/night cycle system which is working okay. I'm now moving on to looking at changing the color of the skydome material based on the time of day. What I've done so far is to create a struct to hold the ColourValues for the different aspects of the scene. struct todColors { Ogre::ColourValue sky; Ogre::ColourValue ambient; Ogre::ColourValue sun; }; I created an array to store all the colours todColors sceneColours [4]; I populated the array with the colours I want to use for the various times of the day. For instance DayTime (when the sun is high in the sky) sceneColours[2].sky = Ogre::ColourValue(135/255, 206/255, 235/255, 255); sceneColours[2].ambient = Ogre::ColourValue(135/255, 206/255, 235/255, 255); sceneColours[2].sun = Ogre::ColourValue(135/255, 206/255, 235/255, 255); I've got code to work out the time of the day using a float currentHours to store the current hour of the day 10.5 = 10:30 am. This updates constantly and updates the sun as required. I am then calculating the appropriate colours for the time of day when relevant using else if( currentHour >= 4 && currentHour < 7) { // Lerp from night to morning Ogre::ColourValue lerp = Ogre::Math::lerp<Ogre::ColourValue, float>(sceneColours[GT_TOD_NIGHT].sky , sceneColours[GT_TOD_MORNING].sky, (currentHour - 4) / (7 - 4)); } My original attempt to get this to work was to dynamically generate a material with the new colour and apply that material to the skydome. This, as you can probably guess... didn't go well. I know it's possible to use shaders where you can pass information such as colour to the shader from the code but I am unsure if there is an existing simple shader to change a colour like this or if I need to create one. What is involved in creating a shader and material definition that would allow me to change the colour of a material without the overheads of dynamically generating materials all the time? EDIT : I've created a glsl vertex and fragment shaders as follows. Vertex uniform vec4 newColor; void main() { gl_FrontColor = newColor; gl_Position = ftransform(); } Fragment void main() { gl_FragColor = gl_Color; } I can pass a colour to it using ShaderDesigner and it seems to work. I now need to investigate how to use it within Ogre as a material. EDIT : I created a material file like this : vertex_program colour_vs_test glsl { source test.vert default_params { param_named newColor float4 0.0 0.0 0.0 1 } } fragment_program colour_fs_glsl glsl { source test.frag } material Test/SkyColor { technique { pass { lighting off fragment_program_ref colour_fs_glsl { } vertex_program_ref colour_vs_test { } } } } In the code I have tried : Ogre::MaterialPtr material = Ogre::MaterialManager::getSingleton().getByName("Test/SkyColor"); Ogre::GpuProgramParametersSharedPtr params = material->getTechnique(0)->getPass(0)->getVertexProgramParameters(); params->setNamedConstant("newcolor", Ogre::Vector4(0.7, 0.5, 0.3, 1)); I've set that as the Skydome material which seems to work initially. I am doing the same with the code that is attempting to lerp between colours, but when I include it there, it all goes black. Seems like there is now a problem with my colour lerping.

    Read the article

  • AS3: StageWidth for BOX2D?

    - by Gabriel Meono
    I know BOX2D uses meters, and AS3 uses pixels. I'm trying to create objects which are limited to the stageWidth. If I do this variable: for (var i:int = 0; i<(stage.stageWidth); i++){...} The animation will freeze, and this output appears: TypeError: Error #1009: Cannot access a property or method of a null object reference. at Box2D.Collision::b2BroadPhase/CreateProxy() at Box2D.Collision.Shapes::b2Shape/CreateProxy() at Box2D.Dynamics::b2Body/CreateShape() at com.actionsnippet.qbox.objects::CircleObject/build() at com.actionsnippet.qbox::QuickObject/init() at com.actionsnippet.qbox::QuickObject() at com.actionsnippet.qbox.objects::CircleObject() at com.actionsnippet.qbox::QuickBox2D/create() at com.actionsnippet.qbox::QuickBox2D/addCircle() at BOX2D_Test_Tutorial_fla::MainTimeline/frame1() Does anyone know how to fix this? Full Code: [SWF(width = 350, height = 600, frameRate = 60)] import com.actionsnippet.qbox.*; var sim:QuickBox2D = new QuickBox2D(this); sim.createStageWalls(); // make a heavy circle sim.addCircle({x:3, y:3, radius:0.4, density:1}); // create a few platforms // make pins for (var i:int = 0; i<(stage.stageWidth); i++){ //End sim.addCircle({x:1 + i * 1.5, y:18, radius:0.1, density:0}); sim.addCircle({x:2 + i * 1.5, y:17, radius:0.1, density:0}); sim.addCircle({x:1 + i * 1.5, y:16, radius:0.1, density:0}); sim.addCircle({x:2 + i * 1.5, y:15, radius:0.1, density:0}); //Mid end sim.addCircle({x:0 + i * 2, y:14, radius:0.1, density:0}); sim.addCircle({x:0 + i * 2, y:13, radius:0.1, density:0}); sim.addCircle({x:0 + i * 2, y:12, radius:0.1, density:0}); sim.addCircle({x:0 + i * 2, y:11, radius:0.1, density:0}); sim.addCircle({x:0 + i * 2, y:10, radius:0.1, density:0}); } sim.start(); sim.mouseDrag();

    Read the article

  • Ogre 3d and bullet physics interaction

    - by Tim
    I have been playing around with Ogre3d and trying to integrate bullet physics. I have previously somewhat successfully got this functionality working with irrlicht and bullet and I am trying to base this on what I had done there, but modifying it to fit with Ogre. It is working but not correctly and I would like some help to understand what it is I am doing wrong. I have a state system and when I enter the "gamestate" I call some functions such as setting up a basic scene, creating the physics simulation. I am doing that as follows. void GameState::enter() { ... // Setup Physics btBroadphaseInterface *BroadPhase = new btAxisSweep3(btVector3(-1000,-1000,-1000), btVector3(1000,1000,1000)); btDefaultCollisionConfiguration *CollisionConfiguration = new btDefaultCollisionConfiguration(); btCollisionDispatcher *Dispatcher = new btCollisionDispatcher(CollisionConfiguration); btSequentialImpulseConstraintSolver *Solver = new btSequentialImpulseConstraintSolver(); World = new btDiscreteDynamicsWorld(Dispatcher, BroadPhase, Solver, CollisionConfiguration); ... createScene(); } In the createScene method I add a light and try to setup a "ground" plane to act as the ground for things to collide with.. as follows. I expect there is issues with this as I get objects colliding with the ground but half way through it and they glitch around like crazy on collision. void GameState::createScene() { m_pSceneMgr->createLight("Light")->setPosition(75,75,75); // Physics // As a test we want a floor plane for things to collide with Ogre::Entity *ent; Ogre::Plane p; p.normal = Ogre::Vector3(0,1,0); p.d = 0; Ogre::MeshManager::getSingleton().createPlane( "FloorPlane", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, p, 200000, 200000, 20, 20, true, 1, 9000,9000,Ogre::Vector3::UNIT_Z); ent = m_pSceneMgr->createEntity("floor", "FloorPlane"); ent->setMaterialName("Test/Floor"); Ogre::SceneNode *node = m_pSceneMgr->getRootSceneNode()->createChildSceneNode(); node->attachObject(ent); btTransform Transform; Transform.setIdentity(); Transform.setOrigin(btVector3(0,1,0)); // Give it to the motion state btDefaultMotionState *MotionState = new btDefaultMotionState(Transform); btCollisionShape *Shape = new btStaticPlaneShape(btVector3(0,1,0),0); // Add Mass btVector3 LocalInertia; Shape->calculateLocalInertia(0, LocalInertia); // CReate the rigid body object btRigidBody *RigidBody = new btRigidBody(0, MotionState, Shape, LocalInertia); // Store a pointer to the Ogre Node so we can update it later RigidBody->setUserPointer((void *) (node)); // Add it to the physics world World->addRigidBody(RigidBody); Objects.push_back(RigidBody); m_pNumEntities++; // End Physics } I then have a method to create a cube and give it rigid body physics properties. I know there will be errors here as I get the items colliding with the ground but not with each other properly. So I would appreciate some input on what I am doing wrong. 
void GameState::CreateBox(const btVector3 &TPosition, const btVector3 &TScale, btScalar TMass) { Ogre::Vector3 size = Ogre::Vector3::ZERO; Ogre::Vector3 pos = Ogre::Vector3::ZERO; Ogre::Vector3 scale = Ogre::Vector3::ZERO; pos.x = TPosition.getX(); pos.y = TPosition.getY(); pos.z = TPosition.getZ(); scale.x = TScale.getX(); scale.y = TScale.getY(); scale.z = TScale.getZ(); Ogre::Entity *entity = m_pSceneMgr->createEntity( "Box" + Ogre::StringConverter::toString(m_pNumEntities), "cube.mesh"); entity->setCastShadows(true); Ogre::AxisAlignedBox boundingB = entity->getBoundingBox(); size = boundingB.getSize(); //size /= 2.0f; // Only the half needed? //size *= 0.96f; // Bullet margin is a bit bigger so we need a smaller size entity->setMaterialName("Test/Cube"); Ogre::SceneNode *node = m_pSceneMgr->getRootSceneNode()->createChildSceneNode(); node->attachObject(entity); node->setPosition(pos); //node->scale(scale); // Physics btTransform Transform; Transform.setIdentity(); Transform.setOrigin(TPosition); // Give it to the motion state btDefaultMotionState *MotionState = new btDefaultMotionState(Transform); btVector3 HalfExtents(TScale.getX()*0.5f,TScale.getY()*0.5f,TScale.getZ()*0.5f); btCollisionShape *Shape = new btBoxShape(HalfExtents); // Add Mass btVector3 LocalInertia; Shape->calculateLocalInertia(TMass, LocalInertia); // CReate the rigid body object btRigidBody *RigidBody = new btRigidBody(TMass, MotionState, Shape, LocalInertia); // Store a pointer to the Ogre Node so we can update it later RigidBody->setUserPointer((void *) (node)); // Add it to the physics world World->addRigidBody(RigidBody); Objects.push_back(RigidBody); m_pNumEntities++; } Then in the GameState::update() method which which runs every frame to handle input and render etc I call an UpdatePhysics method to update the physics simulation. void GameState::UpdatePhysics(unsigned int TDeltaTime) { World->stepSimulation(TDeltaTime * 0.001f, 60); btRigidBody *TObject; for(std::vector<btRigidBody *>::iterator it = Objects.begin(); it != Objects.end(); ++it) { // Update renderer Ogre::SceneNode *node = static_cast<Ogre::SceneNode *>((*it)->getUserPointer()); TObject = *it; // Set position btVector3 Point = TObject->getCenterOfMassPosition(); node->setPosition(Ogre::Vector3((float)Point[0], (float)Point[1], (float)Point[2])); // set rotation btVector3 EulerRotation; QuaternionToEuler(TObject->getOrientation(), EulerRotation); node->setOrientation(1,(Ogre::Real)EulerRotation[0], (Ogre::Real)EulerRotation[1], (Ogre::Real)EulerRotation[2]); //node->rotate(Ogre::Vector3(EulerRotation[0], EulerRotation[1], EulerRotation[2])); } } void GameState::QuaternionToEuler(const btQuaternion &TQuat, btVector3 &TEuler) { btScalar W = TQuat.getW(); btScalar X = TQuat.getX(); btScalar Y = TQuat.getY(); btScalar Z = TQuat.getZ(); float WSquared = W * W; float XSquared = X * X; float YSquared = Y * Y; float ZSquared = Z * Z; TEuler.setX(atan2f(2.0f * (Y * Z + X * W), -XSquared - YSquared + ZSquared + WSquared)); TEuler.setY(asinf(-2.0f * (X * Z - Y * W))); TEuler.setZ(atan2f(2.0f * (X * Y + Z * W), XSquared - YSquared - ZSquared + WSquared)); TEuler *= RADTODEG; } I seem to have issues with the cubes not colliding with each other and colliding strangely with the ground. I have tried to capture the effect with the attached image. I would appreciate any help in understanding what I have done wrong. Thanks. EDIT : Solution The following code shows the changes I made to get accurate physics. 
void GameState::createScene() { m_pSceneMgr->createLight("Light")->setPosition(75,75,75); // Physics // As a test we want a floor plane for things to collide with Ogre::Entity *ent; Ogre::Plane p; p.normal = Ogre::Vector3(0,1,0); p.d = 0; Ogre::MeshManager::getSingleton().createPlane( "FloorPlane", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, p, 200000, 200000, 20, 20, true, 1, 9000,9000,Ogre::Vector3::UNIT_Z); ent = m_pSceneMgr->createEntity("floor", "FloorPlane"); ent->setMaterialName("Test/Floor"); Ogre::SceneNode *node = m_pSceneMgr->getRootSceneNode()->createChildSceneNode(); node->attachObject(ent); btTransform Transform; Transform.setIdentity(); // Fixed the transform vector here for y back to 0 to stop the objects sinking into the ground. Transform.setOrigin(btVector3(0,0,0)); // Give it to the motion state btDefaultMotionState *MotionState = new btDefaultMotionState(Transform); btCollisionShape *Shape = new btStaticPlaneShape(btVector3(0,1,0),0); // Add Mass btVector3 LocalInertia; Shape->calculateLocalInertia(0, LocalInertia); // CReate the rigid body object btRigidBody *RigidBody = new btRigidBody(0, MotionState, Shape, LocalInertia); // Store a pointer to the Ogre Node so we can update it later RigidBody->setUserPointer((void *) (node)); // Add it to the physics world World->addRigidBody(RigidBody); Objects.push_back(RigidBody); m_pNumEntities++; // End Physics } void GameState::CreateBox(const btVector3 &TPosition, const btVector3 &TScale, btScalar TMass) { Ogre::Vector3 size = Ogre::Vector3::ZERO; Ogre::Vector3 pos = Ogre::Vector3::ZERO; Ogre::Vector3 scale = Ogre::Vector3::ZERO; pos.x = TPosition.getX(); pos.y = TPosition.getY(); pos.z = TPosition.getZ(); scale.x = TScale.getX(); scale.y = TScale.getY(); scale.z = TScale.getZ(); Ogre::Entity *entity = m_pSceneMgr->createEntity( "Box" + Ogre::StringConverter::toString(m_pNumEntities), "cube.mesh"); entity->setCastShadows(true); Ogre::AxisAlignedBox boundingB = entity->getBoundingBox(); // The ogre bounding box is slightly bigger so I am reducing it for // use with the rigid body. size = boundingB.getSize()*0.95f; entity->setMaterialName("Test/Cube"); Ogre::SceneNode *node = m_pSceneMgr->getRootSceneNode()->createChildSceneNode(); node->attachObject(entity); node->setPosition(pos); node->showBoundingBox(true); //node->scale(scale); // Physics btTransform Transform; Transform.setIdentity(); Transform.setOrigin(TPosition); // Give it to the motion state btDefaultMotionState *MotionState = new btDefaultMotionState(Transform); // I got the size of the bounding box above but wasn't using it to set // the size for the rigid body. This now does. 
btVector3 HalfExtents(size.x*0.5f,size.y*0.5f,size.z*0.5f); btCollisionShape *Shape = new btBoxShape(HalfExtents); // Add Mass btVector3 LocalInertia; Shape->calculateLocalInertia(TMass, LocalInertia); // CReate the rigid body object btRigidBody *RigidBody = new btRigidBody(TMass, MotionState, Shape, LocalInertia); // Store a pointer to the Ogre Node so we can update it later RigidBody->setUserPointer((void *) (node)); // Add it to the physics world World->addRigidBody(RigidBody); Objects.push_back(RigidBody); m_pNumEntities++; } void GameState::UpdatePhysics(unsigned int TDeltaTime) { World->stepSimulation(TDeltaTime * 0.001f, 60); btRigidBody *TObject; for(std::vector<btRigidBody *>::iterator it = Objects.begin(); it != Objects.end(); ++it) { // Update renderer Ogre::SceneNode *node = static_cast<Ogre::SceneNode *>((*it)->getUserPointer()); TObject = *it; // Set position btVector3 Point = TObject->getCenterOfMassPosition(); node->setPosition(Ogre::Vector3((float)Point[0], (float)Point[1], (float)Point[2])); // Convert the bullet Quaternion to an Ogre quaternion btQuaternion btq = TObject->getOrientation(); Ogre::Quaternion quart = Ogre::Quaternion(btq.w(),btq.x(),btq.y(),btq.z()); // use the quaternion with setOrientation node->setOrientation(quart); } } The QuaternionToEuler function isn't needed so that was removed from code and header files. The objects now collide with the ground and each other appropriately.

    Read the article

  • Drawing an arrow cursor on user dragging in XNA/MonoGame

    - by adrin
    I am writing a touch-enabled game in MonoGame (an XNA-like API) and would like to display an arrow 'cursor' as the user makes a drag gesture from point A to point B. I am not sure how to approach this problem correctly. It seems it's easiest to just draw a sprite from A to B and scale it as required. However, this would mean it gets stretched as the user continues the drag gesture in one direction. Or maybe it's better to dynamically render the arrow so it looks better?
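
    If the sprite route is taken, a common trick is to keep the arrow head at a fixed size and stretch only the shaft: rotate both by the drag angle and scale the shaft by the drag length, so nothing looks smeared. The two numbers needed are just the angle and the distance; a tiny sketch (Java purely for illustration, DragArrow is a hypothetical name):

        public final class DragArrow {
            // Angle of the drag in radians, measured from A to B.
            public static float rotation(float ax, float ay, float bx, float by) {
                return (float) Math.atan2(by - ay, bx - ax);
            }

            // Length of the drag; use it as the scale factor for a 1-pixel-long shaft sprite.
            public static float length(float ax, float ay, float bx, float by) {
                return (float) Math.hypot(bx - ax, by - ay);
            }
        }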

    Read the article

  • Modelling photo-realistic grass in realtime

    - by sebf
    Hello, I see a number of tutorials on how to create good-looking grass for 3D renders, but I can't work out how to model it for real-time use in a game's scenery. Sure, simple models with alpha cutouts can be used to create plants and trees in really awesome scenery, but what about a lawn? Are there any good tricks to achieve this effect? I tried with a simple four-sided box and a small texture, and the number of objects needed for a decent appearance brought Max to a crawl. (I am thinking it may be possible with a shader, but that is a whole other area, so I thought I would just ask about anyone's experience with modelling it here.) Thanks!

    Read the article

  • Compiling SDL under Windows with sdl-config

    - by DarrenVortex
    I have downloaded NXEngine (the open-source version of Cave Story). There is a makefile in the directory, which I execute using MSYS. However, the makefile uses sdl-config:

        g++ -g -O2 -c main.cpp -D DEBUG `sdl-config --cflags` -Wreturn-type -Wformat -Wno-multichar -o main.o
        /bin/sh: sdl-config: command not found

    Apparently sdl-config does not exist under Windows, since there's no SDL installation providing it. There's also no documentation on the official SourceForge website about this. What do I do?

    Read the article

  • 45° Slopes in a Tile based 2D platformer

    - by xNidhogg
    I want to have simple 45° slopes in my tile based platformer, however I just cant seem to get the algorithm down. Please take a look at the code and video, maybe I'm missing the obvious? //collisionRectangle is the collision rectangle of the player with //origin at the top left and width and height //wantedPosition is the new position the player will be set to. //this is determined elsewhere by checking the bottom center point of the players rect if(_leftSlope || _rightSlope) { //Test bottom center point var calculationPoint = new Vector2(collisionRectangle.Center.X, collisionRectangle.Bottom); //Get the collision rectangle of the tile, origin is top-left Rectangle cellRect = _tileMap.CellWorldRectangle( _tileMap.GetCellByPixel(calculationPoint)); //Calculate the new Y coordinate depending on if its a left or right slope //CellSize = 8 float newY = _leftSlope ? (calculationPoint.X % CellSize) + cellRect.Y : (-1 * (calculationPoint.X % CellSize) - CellSize) + cellRect.Y; //reset variables so we dont jump in here next frame _leftSlope = false; _rightSlope = false; //now change the players Y according to the difference of our calculation wantedPosition.Y += newY - calculationPoint.Y; } Video of what it looks like: http://youtu.be/EKOWgD2muoc
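
    For reference, a height-map view of the slope tile is often easier to reason about than nudging the player's Y by a delta: work out the surface height directly under the player's foot point and clamp the player to it. A small sketch of that calculation (in Java purely for illustration, since the code above is C#; SlopeMath and its parameters are hypothetical, and CELL_SIZE matches the 8-pixel cells from the question):

        public final class SlopeMath {
            public static final int CELL_SIZE = 8;

            // Y coordinate (growing downward) of the slope surface under worldX,
            // for a 45° tile whose top-left corner is at (cellX, cellY).
            public static float floorY(float worldX, float cellX, float cellY, boolean risesToTheRight) {
                float dx = worldX - cellX;                            // horizontal offset inside the tile
                dx = Math.max(0, Math.min(CELL_SIZE, dx));            // clamp to the tile
                float filled = risesToTheRight ? dx : CELL_SIZE - dx; // how much of this column is solid
                return cellY + (CELL_SIZE - filled);                  // surface height for that column
            }
        }

    The player's bottom-centre point is then snapped to floorY whenever it dips below it.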

    Read the article

  • How to apply numerical integration on a graph layout

    - by Cumatru
    I've done some basic 1D integration, but I can't wrap my head around applying it to my graph layout. So, consider the picture below: if I drag the red node to the right, I'm forcing its position to my mouse position, and the other nodes will "follow" it, but how? For Verlet, to compute the new position I need the acceleration and the current position of every node. That is what I don't understand. How do I compute the acceleration and the current position? Is the current position the position of the red node? If yes, doesn't that mean that they will all overlap? http://i.stack.imgur.com/NCKmO.jpg
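
    For what it's worth, in a force-directed layout every node integrates from its own current and previous positions, not from the red node's; the dragged node is simply pinned to the mouse each step, and its neighbours feel a spring force toward it, which is where their acceleration comes from. A minimal position-Verlet sketch (all names are hypothetical):

        // Spring forces between connected nodes plus position Verlet per node.
        final class Node {
            double x, y;          // current position
            double prevX, prevY;  // position from the previous step
            double forceX, forceY;
        }

        final class VerletStep {
            // Hooke's-law spring between two connected nodes; call once per edge, every step.
            static void accumulateSpringForce(Node a, Node b, double restLength, double stiffness) {
                double dx = b.x - a.x, dy = b.y - a.y;
                double dist = Math.sqrt(dx * dx + dy * dy);
                if (dist < 1e-9) return;                      // avoid dividing by zero
                double f = stiffness * (dist - restLength);
                a.forceX += f * dx / dist;  a.forceY += f * dy / dist;
                b.forceX -= f * dx / dist;  b.forceY -= f * dy / dist;
            }

            // Position Verlet: the acceleration is force / mass for THIS node.
            static void integrate(Node n, double mass, double dt) {
                double ax = n.forceX / mass, ay = n.forceY / mass;
                double newX = 2 * n.x - n.prevX + ax * dt * dt;
                double newY = 2 * n.y - n.prevY + ay * dt * dt;
                n.prevX = n.x;  n.prevY = n.y;
                n.x = newX;     n.y = newY;
                n.forceX = 0;   n.forceY = 0;
            }
        }

    The dragged node skips integrate() and is assigned the mouse position instead; everything else follows through the spring forces, so the nodes do not collapse onto one point.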

    Read the article

  • Triangle - Rectangle Intersection in 2D

    - by Kevin Boyd
    I had previously asked this for 3D, but I have now changed my strategy and would like to do the intersection in 2D. The rectangle is axis-aligned, will always be in a fixed position, and has a constant shape and size; basically, I want to clip away the red areas of the triangles that extend outside the bounds of the rectangle. The triangles can be in any position and have any shape or size. In my code I have a loop where I check the triangles one by one, but I am still clueless about the math. I have identified five cases of triangle-rectangle intersection, as shown here. How do I find the intersection points of the triangle and the rectangle?
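
    One standard way to get exactly those intersection points is Sutherland-Hodgman clipping: clip the triangle against each of the rectangle's four half-planes in turn, and whatever polygon is left is the part of the triangle inside the rectangle, with the new vertices being the intersection points. A compact sketch (Java, with hypothetical names; each point is a double[] {x, y} and the rectangle is given by left/top/right/bottom):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.function.BinaryOperator;
        import java.util.function.Predicate;

        public final class TriRectClip {

            // Clip a polygon against a single half-plane.
            private static List<double[]> clipEdge(List<double[]> in, Predicate<double[]> inside,
                                                   BinaryOperator<double[]> intersect) {
                List<double[]> out = new ArrayList<>();
                for (int i = 0; i < in.size(); i++) {
                    double[] cur = in.get(i);
                    double[] prev = in.get((i + in.size() - 1) % in.size());
                    boolean curIn = inside.test(cur), prevIn = inside.test(prev);
                    if (curIn) {
                        if (!prevIn) out.add(intersect.apply(prev, cur));
                        out.add(cur);
                    } else if (prevIn) {
                        out.add(intersect.apply(prev, cur));
                    }
                }
                return out;
            }

            // Point where segment a-b crosses the vertical line x = value.
            private static double[] atX(double[] a, double[] b, double value) {
                double t = (value - a[0]) / (b[0] - a[0]);
                return new double[] { value, a[1] + t * (b[1] - a[1]) };
            }

            // Point where segment a-b crosses the horizontal line y = value.
            private static double[] atY(double[] a, double[] b, double value) {
                double t = (value - a[1]) / (b[1] - a[1]);
                return new double[] { a[0] + t * (b[0] - a[0]), value };
            }

            // Returns the clipped polygon (0 to 7 vertices) of the triangle inside the rectangle.
            public static List<double[]> clip(List<double[]> triangle,
                                              double left, double top, double right, double bottom) {
                List<double[]> p = triangle;
                p = clipEdge(p, v -> v[0] >= left,   (a, b) -> atX(a, b, left));
                p = clipEdge(p, v -> v[0] <= right,  (a, b) -> atX(a, b, right));
                p = clipEdge(p, v -> v[1] >= top,    (a, b) -> atY(a, b, top));
                p = clipEdge(p, v -> v[1] <= bottom, (a, b) -> atY(a, b, bottom));
                return p;
            }
        }

    The five cases in the question all fall out of the same routine; an empty result means the triangle lies entirely outside the rectangle.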

    Read the article

  • UBJsonReader (Libgdx) unable to read UBJson from Python (Blender)

    - by daniel
    I am working on an export tool from Blender to Libgdx that exports things like custom attributes and other information (it is almost complete). It is a very useful tool that will speed up your workflow a lot, and once I have completed it I will post it to the contributions forum. The export format uses Python's standard json module and readable text, and that of course works fine, but I also want a binary JSON export for faster loading, so users can export straight to Libgdx. From my research, the UBJson encoding produced by draft9.py (simpleubjson 0.6.1) seems to match the one used by FBXConverter's UBJsonWriter (which Xoppa wrote), but when I export, I am not able to read the file and I get the error below (Java heap space). It seems there is a difference in byte sizes between the Python UBJson and UBJsonReader. How can I write an encoder in Python that matches Libgdx's UBJsonReader and is cross-platform?

        Exception in thread "LWJGL Application" com.badlogic.gdx.utils.GdxRuntimeException: java.lang.OutOfMemoryError: Java heap space
            at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:120)
        Caused by: java.lang.OutOfMemoryError: Java heap space
            at com.badlogic.gdx.utils.UBJsonReader.readString(UBJsonReader.java:162)
            at com.badlogic.gdx.utils.UBJsonReader.parseString(UBJsonReader.java:150)
            at com.badlogic.gdx.utils.UBJsonReader.parseObject(UBJsonReader.java:112)
            at com.badlogic.gdx.utils.UBJsonReader.parse(UBJsonReader.java:59)
            at com.badlogic.gdx.utils.UBJsonReader.parse(UBJsonReader.java:52)
            at com.badlogic.gdx.utils.UBJsonReader.parse(UBJsonReader.java:36)
            at com.badlogic.gdx.utils.UBJsonReader.parse(UBJsonReader.java:45)
            at com.me.gdximportexport.GdxImportExport.create(GdxImportExport.java:43)
            at com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop(LwjglApplication.java:136)
            at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:114)

    Tested on Ubuntu Studio 13.10 with OpenJDK 7, and on Windows 7 with JDK 7. Thanks for any guidance.

    Read the article

  • Finding the normal of OBB face with an OBB penetrating

    - by Milo
    Below is an illustration: I have an OBB in an OBB (see below for OBB2D code if needed). What I need to determine is, what face it is in, and what direction do I point the normal? The goal is to get the OBB out of the OBB so the normal needs to face outward of the OBB. How could I go about: Finding what face the line is penetrating given the 4 corners of the OBB and the class below: if we define dx=x2-x1 and dy=y2-y1, then the normals are (-dy, dx) and (dy, -dx). Which normal points outward of the OBB? Thanks public class OBB2D { // Corners of the box, where 0 is the lower left. private Vector2D corner[] = new Vector2D[4]; private Vector2D center = new Vector2D(); private Vector2D extents = new Vector2D(); private RectF boundingRect = new RectF(); private float angle; //Two edges of the box extended away from corner[0]. private Vector2D axis[] = new Vector2D[2]; private double origin[] = new double[2]; public OBB2D(Vector2D center, float w, float h, float angle) { set(center,w,h,angle); } public OBB2D(float left, float top, float width, float height) { set(new Vector2D(left + (width / 2), top + (height / 2)),width,height,0.0f); } public void set(Vector2D center,float w, float h,float angle) { Vector2D X = new Vector2D( (float)Math.cos(angle), (float)Math.sin(angle)); Vector2D Y = new Vector2D((float)-Math.sin(angle), (float)Math.cos(angle)); X = X.multiply( w / 2); Y = Y.multiply( h / 2); corner[0] = center.subtract(X).subtract(Y); corner[1] = center.add(X).subtract(Y); corner[2] = center.add(X).add(Y); corner[3] = center.subtract(X).add(Y); computeAxes(); extents.x = w / 2; extents.y = h / 2; computeDimensions(center,angle); } private void computeDimensions(Vector2D center,float angle) { this.center.x = center.x; this.center.y = center.y; this.angle = angle; boundingRect.left = Math.min(Math.min(corner[0].x, corner[3].x), Math.min(corner[1].x, corner[2].x)); boundingRect.top = Math.min(Math.min(corner[0].y, corner[1].y),Math.min(corner[2].y, corner[3].y)); boundingRect.right = Math.max(Math.max(corner[1].x, corner[2].x), Math.max(corner[0].x, corner[3].x)); boundingRect.bottom = Math.max(Math.max(corner[2].y, corner[3].y),Math.max(corner[0].y, corner[1].y)); } public void set(RectF rect) { set(new Vector2D(rect.centerX(),rect.centerY()),rect.width(),rect.height(),0.0f); } // Returns true if other overlaps one dimension of this. private boolean overlaps1Way(OBB2D other) { for (int a = 0; a < axis.length; ++a) { double t = other.corner[0].dot(axis[a]); // Find the extent of box 2 on axis a double tMin = t; double tMax = t; for (int c = 1; c < corner.length; ++c) { t = other.corner[c].dot(axis[a]); if (t < tMin) { tMin = t; } else if (t > tMax) { tMax = t; } } // We have to subtract off the origin // See if [tMin, tMax] intersects [0, 1] if ((tMin > 1 + origin[a]) || (tMax < origin[a])) { // There was no intersection along this dimension; // the boxes cannot possibly overlap. return false; } } // There was no dimension along which there is no intersection. // Therefore the boxes overlap. return true; } //Updates the axes after the corners move. Assumes the //corners actually form a rectangle. private void computeAxes() { axis[0] = corner[1].subtract(corner[0]); axis[1] = corner[3].subtract(corner[0]); // Make the length of each axis 1/edge length so we know any // dot product must be less than 1 to fall within the edge. 
for (int a = 0; a < axis.length; ++a) { axis[a] = axis[a].divide((axis[a].length() * axis[a].length())); origin[a] = corner[0].dot(axis[a]); } } public void moveTo(Vector2D center) { Vector2D centroid = (corner[0].add(corner[1]).add(corner[2]).add(corner[3])).divide(4.0f); Vector2D translation = center.subtract(centroid); for (int c = 0; c < 4; ++c) { corner[c] = corner[c].add(translation); } computeAxes(); computeDimensions(center,angle); } // Returns true if the intersection of the boxes is non-empty. public boolean overlaps(OBB2D other) { if(right() < other.left()) { return false; } if(bottom() < other.top()) { return false; } if(left() > other.right()) { return false; } if(top() > other.bottom()) { return false; } if(other.getAngle() == 0.0f && getAngle() == 0.0f) { return true; } return overlaps1Way(other) && other.overlaps1Way(this); } public Vector2D getCenter() { return center; } public float getWidth() { return extents.x * 2; } public float getHeight() { return extents.y * 2; } public void setAngle(float angle) { set(center,getWidth(),getHeight(),angle); } public float getAngle() { return angle; } public void setSize(float w,float h) { set(center,w,h,angle); } public float left() { return boundingRect.left; } public float right() { return boundingRect.right; } public float bottom() { return boundingRect.bottom; } public float top() { return boundingRect.top; } public RectF getBoundingRect() { return boundingRect; } public boolean overlaps(float left, float top, float right, float bottom) { if(right() < left) { return false; } if(bottom() < top) { return false; } if(left() > right) { return false; } if(top() > bottom) { return false; } return true; } };
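
    On the normal question: of the two candidates (-dy, dx) and (dy, -dx) for a face from (x1, y1) to (x2, y2), the outward one is whichever has a positive dot product with the vector from the box centre to the face's midpoint, since for a convex shape like an OBB the centre lies on the inner side of every face. A small sketch using plain doubles rather than the Vector2D class above (FaceNormal and outwardNormal are hypothetical names):

        public final class FaceNormal {
            // Unit normal of the face (x1, y1)-(x2, y2) that points away from the box centre.
            public static double[] outwardNormal(double x1, double y1, double x2, double y2,
                                                 double centerX, double centerY) {
                double dx = x2 - x1, dy = y2 - y1;
                double nx = -dy, ny = dx;                        // one of the two candidates
                double midX = (x1 + x2) * 0.5, midY = (y1 + y2) * 0.5;
                double toFaceX = midX - centerX, toFaceY = midY - centerY;
                if (nx * toFaceX + ny * toFaceY < 0) {           // it points inward, so flip it
                    nx = -nx;
                    ny = -ny;
                }
                double len = Math.sqrt(nx * nx + ny * ny);
                return new double[] { nx / len, ny / len };
            }
        }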

    Read the article

  • Adding VFACE semantic causes overlapping output semantics error

    - by user1423893
    My pixel shader input is as follows:

        struct VertexShaderOut
        {
            float4 Position : POSITION0;
            float2 TextureCoordinates : TEXCOORD0;
            float4 PositionClone : TEXCOORD1; // Final position values must be cloned to be used in PS calculations
            float3 Normal : TEXCOORD2;
            //float3x3 TBN : TEXCOORD3;
            float CullFace : VFACE; // A negative value faces backwards (-1), while a positive value (+1) faces the camera (requires ps_3_0)
        };

    I'm using ps_3_0 and I wish to utilise the VFACE semantic for correct lighting of normals depending on the cull mode. If I add the VFACE semantic then I get the following errors:

        error X5639: dcl usage+index: position,0 has already been specified for an output register
        error X4504: overlapping output semantics

    Why would this occur? I can't see why there would be too much data.

    Read the article

  • Coordinates from 3DS Max to XNA 3.5

    - by David Conde
    Hello, my problem is this: I have a simple box made in 3DS Max 2009; the box is 10x10x10. I've tried to load it in XNA and translate the camera by 15 units, but I can't seem to find the values needed to see the box properly. Can anyone point me to a good resource with an introduction to the XNA coordinate system, and to how a simple box made in 3DS Max is imported properly? Best regards, David

    Read the article

  • HTML5/JS - Choppy Game Loop

    - by Rikonator
    I have been experimenting with HTML5/JS, trying to create a simple game when I hit a wall. My choice of game loop is too choppy to be actually of any use in a game. I'm trying for a fixed time step loop, rendering only when required. I simply use a requestAnimationFrame to run Game.update which finds the elapsed time since the last update, and calls State.update to update and render the current state. State.prototype.update = function(ms) { this.ticks += ms; var updates = 0; while(this.ticks >= State.DELTA_TIME && updates < State.MAX_UPDATES) { this.updateState(); this.updateFrameTicks += State.DELTA_TIME; this.updateFrames++; if(this.updateFrameTicks >= 1000) { this.ups = this.updateFrames; this.updateFrames = 0; this.updateFrameTicks -= 1000; } this.ticks -= State.DELTA_TIME; updates++; } if(updates > 0) { this.renderFrameTicks += updates*State.DELTA_TIME; this.renderFrames++; if(this.renderFrameTicks >= 1000) { this.rps = this.renderFrames; this.renderFrames = 0; this.renderFrameTicks -= 1000; } this.renderState(updates*State.DELTA_TIME); } }; But this strategy does not work very well. This is the result: http://jsbin.com/ukosuc/1 (Edit). As it is apparent, the 'game' has fits of lag, and when you tab out for a long period and come back, the 'game' behaves unexpectedly - updates faster than intended. This is either a problem due to something about game loops that I don't quite understand yet, or a problem due to implementation which I can't pinpoint. I haven't been able to solve this problem despite attempting several variations using setTimeout and requestAnimationFrame. (One such example is http://jsbin.com/eyarod/1/edit). Some help and insight would really be appreciated!

    Read the article

  • Passing elapsed time to the update function from the game loop

    - by Sri Harsha Chilakapati
    I want to pass the elapsed time to the update() method, as this would make it easy to implement animations and other time-related concepts. Here's my game loop:

        public void gameLoop(){
            boolean running = true;
            long gameTime = getCurrentTime();
            long elapsedTime = 0;
            long lastUpdateTime = 0;
            int loops;
            while (running){
                loops = 0;
                while(getCurrentTime()>gameTime && loops<Global.MAX_FRAMESKIP){
                    elapsedTime = getCurrentTime() - lastUpdateTime;
                    lastUpdateTime = getCurrentTime();
                    update(elapsedTime);
                    gameTime += SKIP_STEPS;
                    loops++;
                }
                displayGame();
            }
        }

    The getCurrentTime() method:

        public long getCurrentTime(){
            return (System.nanoTime()/1000000);
        }

    The update() method:

        long time = 0;

        public void update(long elapsedTime){
            time += elapsedTime;
            if (time>=1000){
                System.out.println("A second elapsed");
                time -= 1000;
            }
        }

    But this is printing the message for 3 seconds. Thanks.
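
    One likely culprit in the loop above is that lastUpdateTime starts at 0, so the very first elapsedTime is the whole System.nanoTime()-based timestamp rather than one frame's worth of time, and update() then spends a while draining that backlog a second at a time. A hedged sketch of the same loop with the timer seeded just before it starts (it reuses getCurrentTime(), update() and displayGame() from the question; the local SKIP_STEPS and MAX_FRAMESKIP constants stand in for the question's SKIP_STEPS and Global.MAX_FRAMESKIP):

        public void gameLoop() {
            final long SKIP_STEPS = 1000 / 60;      // fixed update interval in ms (assumed 60 updates/s)
            final int MAX_FRAMESKIP = 5;            // assumed value
            boolean running = true;
            long gameTime = getCurrentTime();
            long lastUpdateTime = getCurrentTime(); // seed with "now" instead of 0
            while (running) {
                int loops = 0;
                while (getCurrentTime() > gameTime && loops < MAX_FRAMESKIP) {
                    long now = getCurrentTime();
                    long elapsedTime = now - lastUpdateTime; // real time since the previous update
                    lastUpdateTime = now;
                    update(elapsedTime);
                    gameTime += SKIP_STEPS;
                    loops++;
                }
                displayGame();
            }
        }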

    Read the article

  • What to send to server in real time FPS game?

    - by syloc
    What is the right way to tell the server the position of our local player? Some documents say it is better to send the inputs whenever they are produced, and some say the client should send its position at a fixed interval. With the send-the-inputs approach: what should I do if the player is holding down the direction keys? It means I need to send a packet to the server every frame. Isn't that too much? And there is also the rotation of the player from the mouse input. Here is an example: http://www.gabrielgambetta.com/fpm_live.html What about the send-the-position-at-a-fixed-interval approach? It sends fewer messages to the server, but it also reduces responsiveness. So which way is better?
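
    For what it's worth, many shooters take a middle road between the two: sample the input every frame, but batch the resulting commands and send them at a fixed rate with sequence numbers, so holding a movement key does not translate into one packet per rendered frame. A rough sketch of that idea (Java purely for illustration; all class names, fields and the 20 Hz rate are assumptions, and the serialisation/UDP code is left as a placeholder):

        import java.util.ArrayList;
        import java.util.List;

        // One command per rendered frame: buttons held plus the view angles.
        final class InputCommand {
            int sequence;
            boolean forward, back, left, right;
            float yaw, pitch;
        }

        final class InputSender {
            private static final long SEND_INTERVAL_MS = 50;   // 20 packets per second
            private final List<InputCommand> pending = new ArrayList<>();
            private long lastSend = System.currentTimeMillis();

            void onFrame(InputCommand cmd) {
                pending.add(cmd);                               // record this frame's input
                long now = System.currentTimeMillis();
                if (now - lastSend >= SEND_INTERVAL_MS && !pending.isEmpty()) {
                    sendToServer(new ArrayList<>(pending));     // the server replays all of them
                    pending.clear();
                    lastSend = now;
                }
            }

            private void sendToServer(List<InputCommand> commands) {
                // placeholder: serialise the commands and send them over UDP
            }
        }

    The client still predicts its own movement locally from the same commands, which is what keeps it responsive despite the lower send rate.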

    Read the article

  • Optimizing collision engine bottleneck

    - by Vittorio Romeo
    Foreword: I'm aware that optimizing this bottleneck is not a necessity - the engine is already very fast. I, however, for fun and educational purposes, would love to find a way to make the engine even faster. I'm creating a general-purpose C++ 2D collision detection/response engine, with an emphasis on flexibility and speed. Here's a very basic diagram of its architecture: Basically, the main class is World, which owns (manages memory) of a ResolverBase*, a SpatialBase* and a vector<Body*>. SpatialBase is a pure virtual class which deals with broad-phase collision detection. ResolverBase is a pure virtual class which deals with collision resolution. The bodies communicate to the World::SpatialBase* with SpatialInfo objects, owned by the bodies themselves. There currenly is one spatial class: Grid : SpatialBase, which is a basic fixed 2D grid. It has it's own info class, GridInfo : SpatialInfo. Here's how its architecture looks: The Grid class owns a 2D array of Cell*. The Cell class contains two collection of (not owned) Body*: a vector<Body*> which contains all the bodies that are in the cell, and a map<int, vector<Body*>> which contains all the bodies that are in the cell, divided in groups. Bodies, in fact, have a groupId int that is used for collision groups. GridInfo objects also contain non-owning pointers to the cells the body is in. As I previously said, the engine is based on groups. Body::getGroups() returns a vector<int> of all the groups the body is part of. Body::getGroupsToCheck() returns a vector<int> of all the groups the body has to check collision against. Bodies can occupy more than a single cell. GridInfo always stores non-owning pointers to the occupied cells. After the bodies move, collision detection happens. We assume that all bodies are axis-aligned bounding boxes. How broad-phase collision detection works: Part 1: spatial info update For each Body body: Top-leftmost occupied cell and bottom-rightmost occupied cells are calculated. If they differ from the previous cells, body.gridInfo.cells is cleared, and filled with all the cells the body occupies (2D for loop from the top-leftmost cell to the bottom-rightmost cell). body is now guaranteed to know what cells it occupies. For a performance boost, it stores a pointer to every map<int, vector<Body*>> of every cell it occupies where the int is a group of body->getGroupsToCheck(). These pointers get stored in gridInfo->queries, which is simply a vector<map<int, vector<Body*>>*>. body is now guaranteed to have a pointer to every vector<Body*> of bodies of groups it needs to check collision against. These pointers are stored in gridInfo->queries. Part 2: actual collision checks For each Body body: body clears and fills a vector<Body*> bodiesToCheck, which contains all the bodies it needs to check against. Duplicates are avoided (bodies can belong to more than one group) by checking if bodiesToCheck already contains the body we're trying to add. const vector<Body*>& GridInfo::getBodiesToCheck() { bodiesToCheck.clear(); for(const auto& q : queries) for(const auto& b : *q) if(!contains(bodiesToCheck, b)) bodiesToCheck.push_back(b); return bodiesToCheck; } The GridInfo::getBodiesToCheck() method IS THE BOTTLENECK. The bodiesToCheck vector must be filled for every body update because bodies could have moved meanwhile. It also needs to prevent duplicate collision checks. The contains function simply checks if the vector already contains a body with std::find. 
Collision is checked and resolved for every body in bodiesToCheck. That's it. So, I've been trying to optimize this broad-phase collision detection for quite a while now. Every time I try something else than the current architecture/setup, something doesn't go as planned or I make assumption about the simulation that later are proven to be false. My question is: how can I optimize the broad-phase of my collision engine maintaining the grouped bodies approach? Is there some kind of magic C++ optimization that can be applied here? Can the architecture be redesigned in order to allow for more performance? Actual implementation: SSVSCollsion Body.h, Body.cpp World.h, World.cpp Grid.h, Grid.cpp Cell.h, Cell.cpp GridInfo.h, GridInfo.cpp
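
    One technique that often removes exactly this kind of contains()/std::find cost is a per-body query stamp: bump a global counter at the start of every getBodiesToCheck() call and store it on each body as it is gathered, so the duplicate test becomes a single integer comparison instead of a linear scan over bodiesToCheck. A sketch of the idea (written in Java only for brevity; it maps directly onto the Body/GridInfo pair above, and every name here is hypothetical):

        import java.util.ArrayList;
        import java.util.List;

        final class Body {
            long lastGathered = -1;   // stamp of the last query that collected this body
        }

        final class BroadPhaseQuery {
            private long currentStamp = 0;
            private final List<Body> bodiesToCheck = new ArrayList<>();

            // queries plays the role of GridInfo::queries: one list per group to test against.
            List<Body> gather(List<List<Body>> queries) {
                currentStamp++;                               // a fresh stamp per query
                bodiesToCheck.clear();
                for (List<Body> group : queries) {
                    for (Body b : group) {
                        if (b.lastGathered != currentStamp) { // O(1) duplicate check
                            b.lastGathered = currentStamp;
                            bodiesToCheck.add(b);
                        }
                    }
                }
                return bodiesToCheck;
            }
        }

    In the C++ version the stamp would live on Body next to the group data and the counter on the World or the Grid, so no allocation or search is added to the hot loop.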

    Read the article

  • How can I resize a set of sprite images?

    - by Tyler J Fisher
    Hey Stack Exchange GameDev community, I'm attempting to resize a series of sprites upon instantiation of the class they're located in. I've attempted to use the following code to resize the images, but my attempts have been unsuccessful; I have been unable to write an implementation that even compiles, so no error codes yet.

        wLeft.getScaledInstance(wLeft.getWidth()*2, wLeft.getHeight()*2, Image.SCALE_FAST);

    I've heard that Graphics2D is the best option. Any suggestions? I think I'm probably best off loading the images into a Java project, resizing them, and then outputting them to a new directory, so as not to have to resize each sprite upon class instantiation. What do you think? Photoshopping each individual sprite is out of the question, unless I used a macro. Code:

        package game;

        //Import
        import java.awt.Image;
        import javax.swing.ImageIcon;

        public class Mario extends Human {

            Image wLeft = new ImageIcon("sprites\\mario\\wLeft.PNG").getImage();

            //Constructor
            public Mario(){
                super("Mario", 50);
                wLeft = wLeft.getScaledInstance(wLeft.getWidth()*2, wLeft.getHeight()*2, Image.SCALE_FAST);
            }
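
    A minimal Graphics2D approach, assuming the sprite is loaded as a BufferedImage (for example with ImageIO.read(new File(...)), which also provides the argument-free getWidth()/getHeight() that plain Image lacks); SpriteScaler and scale are hypothetical names:

        import java.awt.Graphics2D;
        import java.awt.RenderingHints;
        import java.awt.image.BufferedImage;

        public final class SpriteScaler {
            // Returns a new image "factor" times larger than src.
            public static BufferedImage scale(BufferedImage src, int factor) {
                BufferedImage out = new BufferedImage(src.getWidth() * factor,
                                                      src.getHeight() * factor,
                                                      BufferedImage.TYPE_INT_ARGB);
                Graphics2D g = out.createGraphics();
                // NEAREST_NEIGHBOR keeps pixel-art sprites crisp; swap in BILINEAR for smooth scaling.
                g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                                   RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
                g.drawImage(src, 0, 0, out.getWidth(), out.getHeight(), null);
                g.dispose();
                return out;
            }
        }

    Running this once over the sprite directory and saving the results with ImageIO.write would also cover the pre-scaling idea mentioned in the question, so nothing needs to be resized at class instantiation time.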

    Read the article
