Search Results

Search found 25377 results on 1016 pages for 'development 4 0'.


  • How to translate along Z axis in OpenTK

    - by JeremyJAlpha
    I am playing around with an OpenGL sample application I downloaded for Xamarin-Android. The sample application produces a rotating colored cube. I would simply like to edit it so that the rotating cube is translated along the Z axis and disappears into the distance. I modified the code by: adding a cumulative variable to store my Z distance, adding GL.Enable(All.DepthBufferBit) - unsure if I put it in the right place, adding GL.Translate(0.0f, 0.0f, Depth) - before the rotate functions. Result: the cube rotates a couple of times, then disappears; it seems to be getting clipped out of the frustum. So my question is: what is the correct way to use and initialize the Z buffer and get the cube to travel along the Z axis? I am sure I am missing some function calls but am unsure of what they are and where to put them. I apologise in advance as this is very basic stuff, but I am still learning :P. I would appreciate it if anyone could show me the best way to get the cube to still rotate but also move along the Z axis. I have commented all my modifications in the code: // This gets called when the drawing surface is ready protected override void OnLoad (EventArgs e) { // this call is optional, and meant to raise delegates // in case any are registered base.OnLoad (e); // UpdateFrame and RenderFrame are called // by the render loop. This is takes effect // when we use 'Run ()', like below UpdateFrame += delegate (object sender, FrameEventArgs args) { // Rotate at a constant speed for (int i = 0; i < 3; i ++) rot [i] += (float) (rateOfRotationPS [i] * args.Time); }; RenderFrame += delegate { RenderCube (); }; GL.Enable(All.DepthBufferBit); //Added by Noob GL.Enable(All.CullFace); GL.ShadeModel(All.Smooth); GL.Hint(All.PerspectiveCorrectionHint, All.Nicest); // Run the render loop Run (30); } void RenderCube () { GL.Viewport(0, 0, viewportWidth, viewportHeight); GL.MatrixMode (All.Projection); GL.LoadIdentity (); if ( viewportWidth > viewportHeight ) { GL.Ortho(-1.5f, 1.5f, 1.0f, -1.0f, -1.0f, 1.0f); } else { GL.Ortho(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f); } GL.MatrixMode (All.Modelview); GL.LoadIdentity (); Depth -= 0.02f; //Added by Noob GL.Translate(0.0f,0.0f,Depth); //Added by Noob GL.Rotate (rot[0], 1.0f, 0.0f, 0.0f); GL.Rotate (rot[1], 0.0f, 1.0f, 0.0f); GL.Rotate (rot[2], 0.0f, 1.0f, 0.0f); GL.ClearColor (0, 0, 0, 1.0f); GL.Clear (ClearBufferMask.ColorBufferBit); GL.VertexPointer(3, All.Float, 0, cube); GL.EnableClientState (All.VertexArray); GL.ColorPointer (4, All.Float, 0, cubeColors); GL.EnableClientState (All.ColorArray); GL.DrawElements(All.Triangles, 36, All.UnsignedByte, triangles); SwapBuffers (); }
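
    One likely culprit, for what it's worth: the sample's GL.Ortho call sets the near and far planes to -1.0 and 1.0, so anything translated beyond |z| = 1 is clipped, and All.DepthBufferBit is a clear-mask value rather than a capability you can pass to GL.Enable. Below is a minimal sketch of the usual setup, assuming the same OpenTK ES 1.1 bindings as the sample (the All enum, and GL.Frustum for glFrustumf); viewportWidth, Depth and rot are names from the question, and the numeric values are only illustrative. A perspective projection is what makes the cube shrink into the distance; with GL.Ortho it would keep its size until it crossed the far plane.

        // Enable depth testing once, e.g. in OnLoad (All.DepthTest is the capability;
        // the depth *buffer bit* is only used when clearing).
        GL.Enable (All.DepthTest);

        void RenderCube ()
        {
            GL.Viewport (0, 0, viewportWidth, viewportHeight);

            GL.MatrixMode (All.Projection);
            GL.LoadIdentity ();
            // Perspective frustum with a far plane distant enough for the travel range.
            GL.Frustum (-1.5f, 1.5f, -1.0f, 1.0f, 1.0f, 100.0f);

            GL.MatrixMode (All.Modelview);
            GL.LoadIdentity ();
            Depth -= 0.02f;
            GL.Translate (0.0f, 0.0f, Depth - 3.0f);   // keep the cube past the near plane
            GL.Rotate (rot[0], 1.0f, 0.0f, 0.0f);
            GL.Rotate (rot[1], 0.0f, 1.0f, 0.0f);

            GL.ClearColor (0, 0, 0, 1.0f);
            // Clear the depth buffer as well as the color buffer every frame.
            GL.Clear (ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

            // ... vertex/color pointers and GL.DrawElements exactly as in the sample ...
            SwapBuffers ();
        }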

    Read the article

  • Bouncing ball slowing down over time

    - by user46610
    I use Unreal Engine 4 to bounce a ball off of walls in a 2D space, but over time the ball gets slower and slower. Movement happens in the tick function of the ball: FVector location = GetActorLocation(); location.X += this->Velocity.X * DeltaSeconds; location.Y += this->Velocity.Y * DeltaSeconds; SetActorLocation(location, true); When a wall gets hit I get a Hit Event with the normal of the collision. This is how I calculate the new velocity of the ball: FVector2D V = this->Velocity; FVector2D N = FVector2D(HitNormal.X, HitNormal.Y); FVector2D newVelocity = -2 * (V.X * N.X + V.Y * N.Y) * N + V; this->Velocity = newVelocity; The more the ball bounces around, the smaller its velocity gets. How do I prevent speed loss when bouncing off walls like that? It's supposed to be a perfect bounce without friction or anything.
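
    The reflection formula R = V - 2(V·N)N only preserves speed when N is exactly unit length, so one defensive fix is to normalize the normal and then scale the reflected vector back to the incoming speed. Below is a hedged sketch of that idea in plain C# (System.Numerics, not UE4 API); it may also be worth checking whether the swept SetActorLocation call is eating the remainder of the frame's movement on impact frames.

        using System.Numerics;

        static class Bounce
        {
            // Reflect a velocity about a surface normal and force the outgoing speed
            // to equal the incoming speed, so nothing is lost to rounding or to a
            // slightly non-unit normal.
            public static Vector2 Reflect(Vector2 velocity, Vector2 hitNormal)
            {
                Vector2 n = Vector2.Normalize(hitNormal);
                Vector2 reflected = velocity - 2f * Vector2.Dot(velocity, n) * n;
                float speed = velocity.Length();             // speed before the bounce
                return Vector2.Normalize(reflected) * speed; // same speed, new direction
            }
        }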

    Read the article

  • Isometric algorithm producing tiles in wrong draw order

    - by David
    I've been toying with isometric rendering and I just can't get the tiles to be in the right order. I'm probably missing something obvious and I just can't see it. Even at the risk of looking stupid, here's my code: for (int i = 0; i < Tile.MapSize; i++) { for (int j = 0; j < Tile.MapSize; j++) { spriteBatch.Draw( Tile.TileSetTexture, new Rectangle( (-j * Tile.TileWidth / 2) + (i * Tile.TileWidth / 2), (i * (Tile.TileHeight - 9) / 2) - (-j * (Tile.TileHeight - 9) / 2), Tile.TileWidth, Tile.TileHeight), Tile.GetSourceRectangle(tileID), Color.White, 0.0f, new Vector2(-350, -60), SpriteEffects.None, 1.0f); } } And here's what I end up with: a messed-up map. Yep, bit of an issue. If anyone could help, I'd appreciate it.
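
    Worth noting: every tile above is drawn with the same layerDepth (the final 1.0f), so a sorting SpriteBatch has nothing to order the tiles by. Below is a hedged sketch of one common fix: derive layerDepth from the tile's map position and begin the batch with a sorting mode (SpriteSortMode.BackToFront here). It assumes the Tile class and spriteBatch from the question and may not match the asker's actual Begin call.

        // Tiles nearer the "front" of the diamond (larger i + j) get a smaller depth
        // value, so BackToFront sorting draws them on top of the tiles behind them.
        spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
        for (int i = 0; i < Tile.MapSize; i++)
        {
            for (int j = 0; j < Tile.MapSize; j++)
            {
                float depth = 1f - ((i + j) / (float)(2 * Tile.MapSize));

                spriteBatch.Draw(
                    Tile.TileSetTexture,
                    new Rectangle(
                        (-j * Tile.TileWidth / 2) + (i * Tile.TileWidth / 2),
                        (i + j) * (Tile.TileHeight - 9) / 2,   // same y as the original
                        Tile.TileWidth,
                        Tile.TileHeight),
                    Tile.GetSourceRectangle(tileID),
                    Color.White,
                    0.0f,
                    new Vector2(-350, -60),
                    SpriteEffects.None,
                    depth);   // per-tile depth instead of a constant 1.0f
            }
        }
        spriteBatch.End();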

    Read the article

  • A good way to build a game loop in OpenGL

    - by Jeff
    I'm currently beginning to learn OpenGL at school, and the other day I started making a simple game (on my own, not for school). I'm using freeglut and am building it in C, so for my game loop I had really just been using a function of mine passed to glutIdleFunc to update all the drawing and physics in one pass. This was fine for simple animations where I didn't care too much about the frame rate, but since the game is mostly physics-based, I really want (need) to tie down how fast it's updating. So my first attempt was to have the function I pass to glutIdleFunc (myIdle()) keep track of how much time has passed since the previous call to it, and update the physics (and currently the graphics) every so many milliseconds. I used timeGetTime() to do this (by using <windows.h>). And this got me thinking: is using the idle function really a good way of driving the game loop? My question is, what is a better way to implement the game loop in OpenGL? Should I avoid using the idle function?
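
    For what it's worth, the pattern usually recommended here is a fixed-timestep loop with an accumulator: physics advances in constant steps no matter how fast frames arrive. Below is a hedged, self-contained sketch of the pattern in C# (the question's code is C/freeglut, but the structure is the same; UpdatePhysics and Render are placeholder stubs).

        using System.Diagnostics;

        class FixedStepLoop
        {
            const double FixedStep = 1.0 / 60.0;     // 60 physics updates per second

            static void UpdatePhysics(double dt) { /* integrate velocities, positions, ... */ }
            static void Render() { /* issue draw calls */ }

            static void Main()
            {
                var clock = Stopwatch.StartNew();
                double previous = clock.Elapsed.TotalSeconds;
                double accumulator = 0.0;
                bool running = true;

                while (running)
                {
                    double now = clock.Elapsed.TotalSeconds;
                    accumulator += now - previous;
                    previous = now;

                    // Consume the elapsed time in fixed-size physics steps.
                    while (accumulator >= FixedStep)
                    {
                        UpdatePhysics(FixedStep);
                        accumulator -= FixedStep;
                    }

                    Render();   // render once per loop pass, as fast as the loop spins
                }
            }
        }

    With freeglut the same structure fits inside the idle (or a timer) callback: accumulate the elapsed time, run zero or more fixed physics steps, then post a redisplay.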

    Read the article

  • 3D rotation matrices deform object while rotating

    - by Kevin
    I'm writing a small 3D renderer (using an orthographic projection right now). I've run into some trouble with my 3D rotation matrices. They seem to squeeze my 3D object (a box primitive) at certain angles. Here's a live demo (only tested in Google Chrome): http://dl.dropbox.com/u/109400107/3D/index.html The box is viewed from the top along the Y axis and is rotating around the X and Z axes. These are my 3 rotation matrices (only rX and rZ are being used): var rX = new Matrix([ [1, 0, 0], [0, Math.cos(radiants), -Math.sin(radiants)], [0, Math.sin(radiants), Math.cos(radiants)] ]); var rY = new Matrix([ [Math.cos(radiants), 0, Math.sin(radiants)], [0, 1, 0], [-Math.sin(radiants), 0, Math.cos(radiants)] ]); var rZ = new Matrix([ [Math.cos(radiants), -Math.sin(radiants), 0], [Math.sin(radiants), Math.cos(radiants), 0], [0, 0, 1] ]); Before projecting the vertices I multiply them by rZ and rX like so: vert1.multiply(rZ); vert1.multiply(rX); vert2.multiply(rZ); vert2.multiply(rX); vert3.multiply(rZ); vert3.multiply(rX); The projection itself looks like this: bX = (pos.x + (vert1.x*scale)); bY = (pos.y + (vert1.z*scale)); Where "pos.x" and "pos.y" are offsets for centering the box on the screen. I just can't seem to find a solution to this and I'm still relatively new to working with matrices. You can view the source code of the demo page if you want to see the whole thing.
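
    One common cause of exactly this kind of squeezing is rotating the stored vertices in place every frame, so rounding error accumulates and the box shears over time. A more robust pattern is to keep the unrotated model-space vertices and rebuild a single combined rotation from the current angles each frame. Below is a hedged sketch of that pattern using System.Numerics types (not the demo's hand-rolled Matrix class).

        using System.Numerics;

        class BoxRenderer
        {
            readonly Vector3[] modelVertices;   // original vertices, never modified

            public BoxRenderer(Vector3[] modelVertices) => this.modelVertices = modelVertices;

            // Build the full rotation from the current angles every frame and apply it
            // to fresh copies of the original vertices (rotate about Z, then X, matching
            // the demo's multiply order).
            public Vector3[] RotatedVertices(float angleX, float angleZ)
            {
                Matrix4x4 rotation = Matrix4x4.CreateRotationZ(angleZ) *
                                     Matrix4x4.CreateRotationX(angleX);

                var rotated = new Vector3[modelVertices.Length];
                for (int i = 0; i < modelVertices.Length; i++)
                    rotated[i] = Vector3.Transform(modelVertices[i], rotation);
                return rotated;
            }
        }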

    Read the article

  • Obtain rectangle indicating 2D world space camera can see

    - by Gareth
    I have a 2D tile based game in XNA, with a moveable camera that can scroll around and zoom. I'm trying to obtain a rectangle which indicates the area, in world space, that my camera is looking at, so I can render anything this rectangle intersects with (currently, everything is rendered). So, I'm drawing the world like this: _SpriteBatch.Begin( SpriteSortMode.FrontToBack, null, SamplerState.PointClamp, // Don't smooth null, null, null, _Camera.GetTransformation()); The GetTransformation() method on my Camera object does this: public Matrix GetTransformation() { _transform = Matrix.CreateTranslation(new Vector3(-_pos.X, -_pos.Y, 0)) * Matrix.CreateRotationZ(Rotation) * Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) * Matrix.CreateTranslation(new Vector3(_viewportWidth * 0.5f, _viewportHeight * 0.5f, 0)); return _transform; } The camera properties in the method above should be self explanatory. How can I get a rectangle indicating what the camera is looking at in world space?
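
    A hedged sketch of one way to get that rectangle: since GetTransformation() maps world space to screen space, its inverse maps screen space back to world space, so transforming the viewport corners through the inverse gives the visible world area. The method name and parameters below are assumptions layered on the question's camera; with a non-zero Rotation you would transform all four corners and take the min/max rather than just two.

        public Rectangle GetVisibleArea(Matrix cameraTransform, int viewportWidth, int viewportHeight)
        {
            Matrix inverse = Matrix.Invert(cameraTransform);

            // Screen-space corners mapped back into world space.
            Vector2 topLeft = Vector2.Transform(Vector2.Zero, inverse);
            Vector2 bottomRight = Vector2.Transform(
                new Vector2(viewportWidth, viewportHeight), inverse);

            return new Rectangle(
                (int)topLeft.X,
                (int)topLeft.Y,
                (int)(bottomRight.X - topLeft.X),
                (int)(bottomRight.Y - topLeft.Y));
        }

    Anything whose bounds intersect this rectangle gets drawn; everything else can be skipped before it ever reaches the SpriteBatch.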

    Read the article

  • How can I generate a texture that looks like left-over tea leaves?

    - by Jedidja
    We are working on a project for iPhone and Windows Phone 7 where we'd like to be able to generate tea leaves at the bottom of a cup. It doesn't have to look photo-realistic, and actually cartoon-y is ok. What sort of techniques should we research to accomplish this? Are there any libraries (preferably in C, but we can translate) that would be helpful? Here are some samples pulled from a Google Image search

    Read the article

  • Using Appendbuffers in unity for terrain generation

    - by Wardy
    Like many others I figured I would try and make the most of the monster processing power of the GPU but I'm having trouble getting the basics in place. CPU code: using UnityEngine; using System.Collections; public class Test : MonoBehaviour { public ComputeShader Generator; public MeshTopology Topology; void OnEnable() { var computedMeshPoints = ComputeMesh(); CreateMeshFrom(computedMeshPoints); } private Vector3[] ComputeMesh() { var size = (32*32) * 4; // 4 points added for each x,z pos var buffer = new ComputeBuffer(size, 12, ComputeBufferType.Append); Generator.SetBuffer(0, "vertexBuffer", buffer); Generator.Dispatch(0, 1, 1, 1); var results = new Vector3[size]; buffer.GetData(results); buffer.Dispose(); return results; } private void CreateMeshFrom(Vector3[] generatedPoints) { var filter = GetComponent<MeshFilter>(); var renderer = GetComponent<MeshRenderer>(); if (generatedPoints.Length > 0) { var mesh = new Mesh { vertices = generatedPoints }; var colors = new Color[generatedPoints.Length]; var indices = new int[generatedPoints.Length]; //TODO: build this different based on topology of the mesh being generated for (int i = 0; i < indices.Length; i++) { indices[i] = i; colors[i] = Color.blue; } mesh.SetIndices(indices, Topology, 0); mesh.colors = colors; mesh.RecalculateNormals(); mesh.Optimize(); mesh.RecalculateBounds(); filter.sharedMesh = mesh; } else { filter.sharedMesh = null; } } } GPU code: #pragma kernel Generate AppendStructuredBuffer<float3> vertexBuffer : register(u0); void genVertsAt(uint2 xzPos) { //TODO: put some height generation code here. // could even run marching cubes / dual contouring code. float3 corner1 = float3( xzPos[0], 0, xzPos[1] ); float3 corner2 = float3( xzPos[0] + 1, 0, xzPos[1] ); float3 corner3 = float3( xzPos[0], 0, xzPos[1] + 1); float3 corner4 = float3( xzPos[0] + 1, 0, xzPos[1] + 1 ); vertexBuffer.Append(corner1); vertexBuffer.Append(corner2); vertexBuffer.Append(corner3); vertexBuffer.Append(corner4); } [numthreads(32, 1, 32)] void Generate (uint3 threadId : SV_GroupThreadID, uint3 groupId : SV_GroupID) { uint2 currentXZ = uint2( groupId.x * 32 + threadId.x, groupId.z * 32 + threadId.z); genVertsAt(currentXZ); } Can anyone explain why, when I call "buffer.GetData(results);" on the CPU after the compute dispatch call, my buffer is full of Vector3(0,0,0)? I'm not expecting any y values yet, but I would expect a bunch of thread indexes in the x,z values for the Vector3 array. I'm not getting any errors in any of this code, which suggests it's correct syntax-wise, but maybe the issue is a logical bug. Also: Yes, I know I'm generating 4,000 Vector3's and then basically round tripping them. However, the purpose of this code is purely to learn how round tripping works between CPU and GPU in Unity.
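
    A hedged guess at the missing piece on the CPU side: an append buffer keeps a hidden element counter that is not automatically reset, and the code also assumes the kernel appended exactly size elements. The sketch below resets the counter before dispatch and reads the real count back afterwards. SetCounterValue and CopyCount are the Unity ComputeBuffer calls I believe cover this; their availability in the asker's Unity version is an assumption worth checking.

        private Vector3[] ComputeMesh()
        {
            var size = (32 * 32) * 4;
            var buffer = new ComputeBuffer(size, 12, ComputeBufferType.Append);
            buffer.SetCounterValue(0);                      // start appending at element 0

            Generator.SetBuffer(0, "vertexBuffer", buffer);
            Generator.Dispatch(0, 1, 1, 1);

            // Copy the append counter into a tiny 1-int buffer so we know how many
            // vertices the kernel actually produced, instead of assuming it is full.
            var countBuffer = new ComputeBuffer(1, sizeof(int), ComputeBufferType.Raw);
            ComputeBuffer.CopyCount(buffer, countBuffer, 0);
            var countArray = new int[1];
            countBuffer.GetData(countArray);

            var results = new Vector3[countArray[0]];
            buffer.GetData(results);

            countBuffer.Dispose();
            buffer.Dispose();
            return results;
        }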

    Read the article

  • How to use the float value from Noise function in voxel terrain?

    - by therealjohn
    I'm using Unity, although this question is not really specific to that engine. I'm also using an asset from the store called Coherent Noise, which has some neat noise functionality built in. I am using those functions to produce noise values between 0 and 1 (floats). I have an array of blocks (for Minecraft-like voxel terrain) and I am confused about how to use this float value for terrain. Do I do something like value <= 0 == solid block, etc.? I am confused about how to turn the floating-point values the noise functions produce into height values for an array with a height of, say, 16. Thanks for any guidance.
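
    A hedged sketch of the usual mapping: treat the 0..1 noise value at each (x, z) column as a fraction of the maximum column height, then mark every block at or below that height as solid. Mathf.PerlinNoise stands in for the Coherent Noise calls purely so the example is self-contained; the height of 16 matches the question.

        using UnityEngine;

        public static class TerrainFill
        {
            public static bool[,,] Build(int sizeX, int sizeZ, int maxHeight, float scale)
            {
                var solid = new bool[sizeX, maxHeight, sizeZ];
                for (int x = 0; x < sizeX; x++)
                {
                    for (int z = 0; z < sizeZ; z++)
                    {
                        float noise = Mathf.PerlinNoise(x * scale, z * scale);   // 0..1
                        int height = Mathf.Clamp(Mathf.FloorToInt(noise * maxHeight), 0, maxHeight - 1);
                        for (int y = 0; y <= height; y++)
                            solid[x, y, z] = true;   // everything at or below the surface is solid
                    }
                }
                return solid;
            }
        }

    Usage would be something like TerrainFill.Build(16, 16, 16, 0.1f); the scale parameter just stretches the noise so neighboring columns get related heights.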

    Read the article

  • In esenthel engine how can I remove some object from Gui class?

    - by Gajet
    I know many people on this site may not know the Esenthel engine at all, and my question may be better answered on the engine's forum, but I'm putting it here to share the name of a really easy-to-code game engine with all of you: you can easily add a Button, for example, to your GUI class (gui is its shared instance) with Gui += buttonInstance.create("click on me"), but I'm just wondering how you can remove an object from the Gui members. As far as I know there is no such method as removeChild or getChildren or anything similar.

    Read the article

  • D3D9 Alpha Blending on the surfaces

    - by Indeera
    I have a surface (OffScreenPlain or RenderTarget with D3DFMT_A8R8G8B8) to which I copy pixels (ARGB) from a third-party function. Before the pixel copy, the bits are accessed via LockRect. This surface is then copied with StretchRect to the back buffer, which is D3DFMT_A8R8G8B8. The surface and back buffer have different dimensions. Filtering is set to D3DTEXF_NONE. Just after creating the d3d device I've set the following RenderState settings: D3DRS_ALPHABLENDENABLE -> TRUE D3DRS_BLENDOP -> D3DBLENDOP_ADD D3DRS_SRCBLEND -> D3DBLEND_SRCALPHA D3DRS_DESTBLEND -> D3DBLEND_INVSRCALPHA But I see no alpha blending happening. I've verified that alpha is specified in the pixels. I've done a simple test by creating a vertex buffer and drawing a triangle (DrawPrimitive), which displays with alpha blending. In this test the surface was copied with StretchRect first and then DrawPrimitive was called; the surface content displays without alpha blending and the triangle displays with alpha blending. What am I missing here? Thanks

    Read the article

  • Understanding Box2d Restitution & Bouncing

    - by layzrr
    I'm currently trying to implement basketball bouncing into my game using Box2d (jBox2d technically), but I'm a bit confused about restitution. While trying to create the ball in the testbed first, I've run into infinite bouncing, as described in this question, however obviously not using my own implementation. The Box2d manual describes restitution as follows: Restitution is used to make objects bounce. The restitution value is usually set to be between 0 and 1. Consider dropping a ball on a table. A value of zero means the ball won't bounce. This is called an inelastic collision. A value of one means the ball's velocity will be exactly reflected. This is called a perfectly elastic collision. My confusion lies in that I am still getting infinite bouncing with restitution values at 0.75/0.8. The same behavior can be seen in the testbed under Collision Watching - Varying Restitution, on the 6th and 7th balls. I believe the last one has restitution of 1, which makes sense, but I don't understand why the second to last ball bounces infinitely (as is happening with my working basketball I've created). I am looking to understand the restitution concept more fully, as well as look for a solution to infinite bouncing with the Box2d framework. My instinct was to sleep objects that appeared to be moving in very small increments, but this seems like a misuse of the engine. Should I just work with lower restitution values altogether?

    Read the article

  • Can I use PBOs for textures in iOS?

    - by Radu
    As far as I can see, there is no GL_PIXEL_UNPACK_BUFFER. Also, the OpenGL ES 2.0 specification (and as far as I know, no iOS device currently supports anything newer than OpenGL ES 2.0) states that glMapBufferOES() can only use GL_ARRAY_BUFFER as a target, yet glTexImage2D() and glTexSubImage2D() only seem to use PBOs if GL_PIXEL_UNPACK_BUFFER is bound. The OpenGL documentation for glBindBuffer() also states that: GL_PIXEL_PACK_BUFFER and GL_PIXEL_UNPACK_BUFFER are available only if the GL version is 2.1 or greater. So, can I use PBOs for textures? Am I missing something obvious?

    Read the article

  • Drawing a random x,y grid of objects within a prespective

    - by T Reddy
    I'm wrapping my head around OpenGL ES 2.0 and I think I'm trying to do something very simple, but I think the math may be eluding me. I created a simple, flat-ish cylinder in Blender that is 2 units in diameter. I want to create an arbitrary grid of these edge to edge (think of a checker board). I'm using a 3D perspective with GLKit: CGSize size = [[self view] bounds].size; _projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(45.0f), size.width/size.height, 0.1f, 100.0f); So, I managed to manually get all of these cylinders drawn on the screen just fine. However, I would like to understand how I can programmatically "fit" all of these cylinders on the screen at the same time given the camera location, screen size, cylinder diameter, and the number of rows/columns. So the net effect is that for small grids (i.e., 5x5) the objects are closer to the camera, but for large grids (i.e., 30x30) the objects are farther away. In either case, all of the cylinders are visible.
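
    A hedged sketch of the math, mirroring the GLKMatrix4MakePerspective call above: the camera has to sit far enough back that the grid's height fits the vertical field of view and its width fits the horizontal one (the vertical FOV scaled by the aspect ratio). The function below is plain C# rather than GLKit, and all names are illustrative.

        using System;

        static class GridFitting
        {
            // Distance from the grid's plane at which an N x M grid of tiles of the given
            // diameter fills, but does not overflow, a perspective view.
            public static float CameraDistanceToFitGrid(
                float verticalFovRadians, float aspect, int rows, int columns, float tileDiameter)
            {
                float halfHeight = rows * tileDiameter * 0.5f;     // grid half-extent up/down
                float halfWidth  = columns * tileDiameter * 0.5f;  // grid half-extent left/right

                // Distance needed so the grid's height fits the vertical FOV...
                float distForHeight = halfHeight / (float)Math.Tan(verticalFovRadians * 0.5);

                // ...and so its width fits the horizontal FOV (vertical FOV widened by the aspect).
                float halfHorizontalFov = (float)Math.Atan(Math.Tan(verticalFovRadians * 0.5) * aspect);
                float distForWidth = halfWidth / (float)Math.Tan(halfHorizontalFov);

                return Math.Max(distForHeight, distForWidth);      // the tighter constraint wins
            }
        }

    In practice you would add a small margin (half a tile or so) so the outer cylinders do not touch the screen edges.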

    Read the article

  • 2D Platform Game Jumping

    - by Bradley Kreuger
    I'm currently writing a game in XNA (C#) for fun. I have my sprites loaded, and when the character moves right he looks like he is running right, and when he moves left he looks like he is running left. I've been looking everywhere for a good coding example of how to create a jumping ability. I have read all the physics material I can stand, and it doesn't help that I can't figure out how to use, say, the space bar to jump while also keeping the player from pressing space to jump again until they land.
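
    A hedged sketch of the usual approach: jump only when space is pressed while the character is grounded, apply gravity every frame, and set the grounded flag again only on landing. The fields and the flat ground check are hypothetical stand-ins for the asker's own character and collision code.

        Vector2 position;
        Vector2 velocity;
        bool isOnGround = true;

        const float JumpSpeed = 600f;    // pixels per second, upward
        const float Gravity = 1500f;     // pixels per second squared, downward
        const float GroundY = 400f;      // placeholder floor height

        void UpdateJump(GameTime gameTime)
        {
            float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
            KeyboardState keyboard = Keyboard.GetState();

            if (keyboard.IsKeyDown(Keys.Space) && isOnGround)
            {
                velocity.Y = -JumpSpeed;   // up is negative Y in screen space
                isOnGround = false;        // no more jumps until we land
            }

            velocity.Y += Gravity * dt;
            position.Y += velocity.Y * dt;

            if (position.Y >= GroundY)     // crude ground test; replace with real collision
            {
                position.Y = GroundY;
                velocity.Y = 0f;
                isOnGround = true;
            }
        }

    Holding space has no effect once isOnGround is false, which is what keeps the player from jumping again mid-air.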

    Read the article

  • What are the reasons for MMOs to have level caps [on hold]

    - by SamStephens
    In many MMOs a player's character progression is artificially capped, e.g. at level 60 or 90 or 100 or whatever. Why do MMOs have these level caps in the first place? Why not just allow characters to continue to arbitrary levels with a mathematically designed leveling system that keeps the leveling experience interesting and endless? Answers to this question may help us see the reasons behind the feature and decide if and how it should be implemented in our MMOs.

    Read the article

  • OpenGLES GLSL Shader attributes always bound to 0

    - by codemonkey
    So I have a very simple vertex shader as follows #version 120 attribute vec3 position; attribute vec3 inColor; uniform mat4 mvp; varying vec3 fragColor; void main(void){ fragColor = inColor; gl_Position = mvp * vec4(position, 1.0); } Which I load, as well as the fragment shader: #version 120 varying vec3 fragColor; void main(void) { gl_FragColor = vec4(fragColor,1.0); } Which I then load, compile, and link to my shader program. I check for link status using glGetProgramiv(shaderProgram, GL_LINK_STATUS, &shaderSuccess); which returns GL_TRUE so I think its ok. However, when I query the active attributes and uniforms using #ifdef DEBUG int totalAttributes = -1; glGetProgramiv(shaderProgram, GL_ACTIVE_ATTRIBUTES, &totalAttributes); for(int i=0; i<totalAttributes; ++i) { int name_len=-1, num=-1; GLenum type = GL_ZERO; char name[100]; glGetActiveAttrib(shaderProgram, GLuint(i), sizeof(name)-1, &name_len, &num, &type, name ); name[name_len] = 0; GLuint location = glGetAttribLocation(shaderProgram, name); fprintf(stderr, "Attribute %s is bound at %d\n", name, location); } int totalUniforms = -1; glGetProgramiv(shaderProgram, GL_ACTIVE_UNIFORMS, &totalUniforms); for(int i=0; i<totalUniforms; ++i) { int name_len=-1, num=-1; GLenum type = GL_ZERO; char name[100]; glGetActiveUniform(shaderProgram, GLuint(i), sizeof(name)-1, &name_len, &num, &type, name ); name[name_len] = 0; GLuint location = glGetUniformLocation(shaderProgram, name); fprintf(stderr, "Uniform %s is bound at %d\n", name, location); } #endif I get: Attribute inColor is bound at 0 Attribute position is bound at 1 Uniform mvp is bound at 0 Which leads to failure when trying to use the shader to render the objects. I have tried switching the order of declaration of position & inColor, but still, only position is bound with the other two giving 0 Can someone please explain why this is happening? Thanks

    Read the article

  • Efficiency concerning thread granularity

    - by MaelmDev
    Lately, I've been thinking of ways to use multithreading to improve the speed of different parts of a game engine. What confuses me is the appropriate granularity of threads, especially when dealing with single-instruction-multiple-data (SIMD) tasks. Let's use line-of-sight detection as an example. Each AI actor must be able to detect objects of interest around them and mark them. There are three basic ways to go about this with multithreading: Don't use threading at all. Create a thread for each actor. Create a thread for each actor-object combination. Option 1 is obviously going to be the least efficient method. However, choosing between the next two options is more difficult. Only using one thread per actor still runs through every object in series instead of in parallel. However, can CPUs create and join threads efficiently at the granularity posed in Option 3? It seems like that many calls to the OS could be really slow, and the cost could vary enormously between different hardware.
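
    A middle ground worth considering between Options 2 and 3 is to express the per-actor scans as work items for a thread pool rather than creating and joining raw OS threads, so the thread cost is amortized across all actors. Below is a hedged sketch using C#'s task parallel library; Actor and WorldObject are hypothetical stand-ins.

        using System.Collections.Generic;
        using System.Threading.Tasks;

        class WorldObject { }

        class Actor
        {
            public bool CanSee(WorldObject o) { /* ray cast / distance test here */ return true; }
            public void MarkInteresting(WorldObject o) { /* record the sighting */ }
        }

        static class LineOfSight
        {
            // One work item per actor; the pool decides how many threads actually run.
            public static void Update(IReadOnlyList<Actor> actors, IReadOnlyList<WorldObject> objects)
            {
                Parallel.ForEach(actors, actor =>
                {
                    foreach (var obj in objects)      // each actor scans its objects serially
                        if (actor.CanSee(obj))
                            actor.MarkInteresting(obj);
                });
            }
        }

    Going all the way to one work item per actor-object pair usually isn't worth it: the per-item scheduling overhead starts to rival the cost of a single visibility test.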

    Read the article

  • Sharing VBO with multiple objects and fixed size buffer data

    - by Mark Ingram
    I'm just messing around with OpenGL and getting some basic structures in place, and my first attempt resulted in each SceneObject class (which just contains vertex information right now) having its own VBO inside it; however, I've read that it might be better to share VBOs across multiple objects. Also, I read that you should avoid resizing a VBO (repeated calls to glBufferData with different size parameters), and instead choose a fixed size for a VBO and just use a range of the buffer. I don't think changing the size of the buffer data would happen too often, but surely it would be better to only allocate the data you need? Choosing an arbitrary value seems risky. I'm looking for some advice on working with individual objects in a scene and their associated buffer data.

    Read the article

  • strange behavior in Box2D+LibGDX when applying impulse

    - by Z0lenDer
    I have been playing around with Box2D and LibGDX and have been using sample code from DecisionTreeGames as the testing ground. Now I have a screen with four walls and a rectangle shape, let's call it a brick. When I use applyLinearImpulse on the brick, it starts bouncing right and left without any pattern and won't stop! I tried adding friction and increasing the density, but the behavior still remains the same. Here is some of the code that might be useful: Method for applying the impulse: center = brick.getWorldCenter(); brick.applyLinearImpulse(20, 0, center.x, center.y); Defining the brick: brick_bodyDef.type = BodyType.DynamicBody; brick_bodyDef.position.set(pos); // brick is initially on the ground brick_bodyDef.angle = 0; brick_body = world.createBody(brick_bodyDef); brick_body.setBullet(true); brick_bodyShape.setAsBox(w,h); brick_fixtureDef.density = 0.9f; brick_fixtureDef.restitution = 1; brick_fixtureDef.shape = brick_bodyShape; brick_fixtureDef.friction=1; brick_body.createFixture(fixtureDef); Walls are defined the same way, except their bullet value is set to false. I would really appreciate it if you could help me change this code to get realistic behavior (i.e. when I apply an impulse to the brick it should trip a few times and then stop completely).

    Read the article

  • How exactly does XNA's SpriteBatch work?

    - by David Gouveia
    To be more precise, if I needed to recreate this functionality from scratch in another API (e.g. in OpenGL) what would it need to be capable of doing? I do have a general idea of some of the steps, such as how it prepares an orthographic projection matrix and creates a quad for each draw call. I'm not too familiar, however, with the batching process itself. Are all quads stored in the same vertex buffer? Does it need an index buffer? How are different textures handled? If possible I'd be grateful if you could guide me through the process from when SpriteBatch.Begin() is called until SpriteBatch.End(), at least when using the default Deferred mode.

    Read the article

  • How to convert pitch and yaw to x, y, z rotations?

    - by Aaron Anodide
    I'm a beginner using XNA to try and make a 3D Asteroids game. I'm really close to having my space ship drive around as if it had thrusters for pitch and yaw. The problem is I can't quite figure out how to translate the rotations, for instance, when I pitch forward 45 degrees and then start to turn - in this case there should be rotation being applied to all three directions to get the "diagonal yaw" - right? I thought I had it right with the calculations below, but they cause a partly pitched forward ship to wobble instead of turn.... :( So my question is: how do you calculate the X, Y, and Z rotations for an object in terms of pitch and yaw? Here are the current (almost working) calculations for the rotation acceleration: float accel = .75f; // Thrust +Y / Forward if (currentKeyboardState.IsKeyDown(Keys.I)) { this.ship.AccelerationY += (float)Math.Cos(this.ship.RotationZ) * accel; this.ship.AccelerationX += (float)Math.Sin(this.ship.RotationZ) * -accel; this.ship.AccelerationZ += (float)Math.Sin(this.ship.RotationX) * accel; } // Rotation +Z / Yaw if (currentKeyboardState.IsKeyDown(Keys.J)) { this.ship.RotationAccelerationZ += (float)Math.Cos(this.ship.RotationX) * accel; this.ship.RotationAccelerationY += (float)Math.Sin(this.ship.RotationX) * accel; this.ship.RotationAccelerationX += (float)Math.Sin(this.ship.RotationY) * accel; } // Rotation -Z / Yaw if (currentKeyboardState.IsKeyDown(Keys.K)) { this.ship.RotationAccelerationZ += (float)Math.Cos(this.ship.RotationX) * -accel; this.ship.RotationAccelerationY += (float)Math.Sin(this.ship.RotationX) * -accel; this.ship.RotationAccelerationX += (float)Math.Sin(this.ship.RotationY) * -accel; } // Rotation +X / Pitch if (currentKeyboardState.IsKeyDown(Keys.F)) { this.ship.RotationAccelerationX += accel; } // Rotation -X / Pitch if (currentKeyboardState.IsKeyDown(Keys.D)) { this.ship.RotationAccelerationX -= accel; } I'm combining that with drawing code that does a rotation to the model: public void Draw(Matrix world, Matrix view, Matrix projection, TimeSpan elsapsedTime) { float seconds = (float)elsapsedTime.TotalSeconds; // update velocity based on acceleration this.VelocityX += this.AccelerationX * seconds; this.VelocityY += this.AccelerationY * seconds; this.VelocityZ += this.AccelerationZ * seconds; // update position based on velocity this.PositionX += this.VelocityX * seconds; this.PositionY += this.VelocityY * seconds; this.PositionZ += this.VelocityZ * seconds; // update rotational velocity based on rotational acceleration this.RotationVelocityX += this.RotationAccelerationX * seconds; this.RotationVelocityY += this.RotationAccelerationY * seconds; this.RotationVelocityZ += this.RotationAccelerationZ * seconds; // update rotation based on rotational velocity this.RotationX += this.RotationVelocityX * seconds; this.RotationY += this.RotationVelocityY * seconds; this.RotationZ += this.RotationVelocityZ * seconds; Matrix translation = Matrix.CreateTranslation(PositionX, PositionY, PositionZ); Matrix rotation = Matrix.CreateRotationX(RotationX) * Matrix.CreateRotationY(RotationY) * Matrix.CreateRotationZ(RotationZ); model.Root.Transform = rotation * translation * world; model.CopyAbsoluteBoneTransformsTo(boneTransforms); foreach (ModelMesh mesh in model.Meshes) { foreach (BasicEffect effect in mesh.Effects) { effect.World = boneTransforms[mesh.ParentBone.Index]; effect.View = view; effect.Projection = projection; effect.EnableDefaultLighting(); } mesh.Draw(); } }
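
    For what it's worth, a common way to sidestep deriving per-axis Euler accelerations is to stop tracking RotationX/Y/Z separately and instead accumulate a single orientation quaternion, applying pitch and yaw about the ship's current local axes each frame; a yaw while pitched 45 degrees then produces the "diagonal" rotation automatically. Below is a hedged sketch with XNA types; orientation, pitchInput and yawInput are hypothetical names, not the poster's ship fields.

        Quaternion orientation = Quaternion.Identity;

        void UpdateOrientation(float pitchInput, float yawInput, float dt)
        {
            // The ship's local right/up axes in world space, given its current orientation.
            Vector3 localRight = Vector3.Transform(Vector3.Right, orientation);
            Vector3 localUp = Vector3.Transform(Vector3.Up, orientation);

            Quaternion pitch = Quaternion.CreateFromAxisAngle(localRight, pitchInput * dt);
            Quaternion yaw = Quaternion.CreateFromAxisAngle(localUp, yawInput * dt);

            // Apply the existing orientation, then this frame's pitch, then its yaw.
            orientation = Quaternion.Concatenate(orientation, pitch);
            orientation = Quaternion.Concatenate(orientation, yaw);
            orientation.Normalize();
        }

        Matrix GetWorld(Vector3 position)
        {
            // Drop-in replacement for the CreateRotationX/Y/Z chain in Draw().
            return Matrix.CreateFromQuaternion(orientation) * Matrix.CreateTranslation(position);
        }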

    Read the article

  • Basic AI FSM - Handling state transition

    - by Galvanize
    I'm starting to study on how to implement game AI, and it seems to me that a very simple FSM for my Pong demo would be a nice way to start. My vision on implementing this would be to have a basic state interface and a class for each state, then the NPC would have an instance of the current state. The class should have an update method and directions on wich state to go next, depending on the event received. The question is: How do I handle this event? Should I have a regular addEventListener and a costum event system? Or should I check on update for the things that could change the current state? I'm feeling a bit lost, I feel I have a good grasp on the FSM concept but a good implementation seems tricky, thanks in advance.
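
    A hedged sketch of the "check during update" variant mentioned in the question: each state's Update looks at the world and returns the state to run next (itself when nothing changes), so no separate event system is required. PongWorld, ChaseBall and ReturnToCenter are hypothetical placeholders for the demo's own classes.

        // A snapshot of whatever the states need to read (positions, ball direction, ...).
        class PongWorld
        {
            public float BallY, PaddleY;
            public bool BallMovingTowardsPaddle;
        }

        interface IState
        {
            IState Update(PongWorld world, float dt);   // return the state to use next frame
        }

        class ChaseBall : IState
        {
            public IState Update(PongWorld world, float dt)
            {
                // steer toward the ball (movement code omitted) ...
                if (!world.BallMovingTowardsPaddle)
                    return new ReturnToCenter();        // transition decided inside Update
                return this;
            }
        }

        class ReturnToCenter : IState
        {
            public IState Update(PongWorld world, float dt)
            {
                // drift back toward the middle (movement code omitted) ...
                if (world.BallMovingTowardsPaddle)
                    return new ChaseBall();
                return this;
            }
        }

        class NpcPaddle
        {
            IState current = new ChaseBall();
            public void Update(PongWorld world, float dt) => current = current.Update(world, dt);
        }

    An event-driven version works too; returning the next state from Update just keeps all transition logic in one place, which is usually enough for something Pong-sized.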

    Read the article

  • Collision with CCSprite

    - by Coder404
    I'm making an iOS app based off the code from here In the .m file of the tutorial is this: -(void)update:(ccTime)dt { NSMutableArray *projectilesToDelete = [[NSMutableArray alloc] init]; for (CCSprite *projectile in _projectiles) { CGRect projectileRect = CGRectMake( projectile.position.x - (projectile.contentSize.width/2), projectile.position.y - (projectile.contentSize.height/2), projectile.contentSize.width, projectile.contentSize.height); NSMutableArray *targetsToDelete = [[NSMutableArray alloc] init]; for (CCSprite *target in _targets) { CGRect targetRect = CGRectMake( target.position.x - (target.contentSize.width/2), target.position.y - (target.contentSize.height/2), target.contentSize.width, target.contentSize.height); if (CGRectIntersectsRect(projectileRect, targetRect)) { [targetsToDelete addObject:target]; } } for (CCSprite *target in targetsToDelete) { [_targets removeObject:target]; [self removeChild:target cleanup:YES]; } if (targetsToDelete.count > 0) { [projectilesToDelete addObject:projectile]; } [targetsToDelete release]; } for (CCSprite *projectile in projectilesToDelete) { [_projectiles removeObject:projectile]; [self removeChild:projectile cleanup:YES]; } [projectilesToDelete release]; } I am trying to take away the projectiles and have the app know when the CCSprite "Player" and the targets collide. Could someone help me with this? Thanks

    Read the article

  • How to manage drawing loop when changing render targets

    - by George Duckett
    I'm managing my game state by having a base GameScreen class with a Draw method. I then have (basically) a stack of GameScreens that I render. I render the bottom one first, as screens above might not completely cover the ones below. I now have a problem where one GameScreen changes render targets while doing its rendering. Anything the previous screens have drawn to the backbuffer is lost (as XNA emulates what happens on the xbox). I don't want to just set the backbuffer to preserve its contents as I want this to work on the xbox as well as PC. How should I manage this problem? A few ideas I've had: Render every GameScreen to its own render target, then render them all to the backbuffer. Create some kind of RenderAction queue where a game screen (and anything else I guess) could queue something to be rendered to the back buffer. They'd render whatever they wanted to any render target as normal, but if they wanted to render to the backbuffer they'd stick that in a queue which would get processed once all rendertarget rendering was done. Abstract away from render targets and backbuffers and have some way of representing the way graphics flows and transforms between render targets and have something manage/work out the correct rendering order (and render targets) given what rendering process needs as input and what it produces as output. I think each of my ideas have pros and cons and there are probably several other ways of approaching this general problem so I'm interested in finding out what solutions are out there.
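
    A hedged sketch of the first idea listed in the question (every GameScreen renders into its own target, and the stack is composited at the end); GameScreen.Draw and the screen stack are the question's own concepts, while the Output render target property is an assumption added for the sketch.

        // Pass 1: let every screen produce its own full-screen image.
        foreach (GameScreen screen in screens)
        {
            graphicsDevice.SetRenderTarget(screen.Output);   // a screen-sized RenderTarget2D
            graphicsDevice.Clear(Color.Transparent);
            screen.Draw(gameTime);   // if it binds intermediate targets internally, it must
                                     // re-bind and re-fill screen.Output before returning
        }

        // Pass 2: composite the stack onto the back buffer, bottom screen first.
        graphicsDevice.SetRenderTarget(null);
        graphicsDevice.Clear(Color.Black);
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
        foreach (GameScreen screen in screens)
            spriteBatch.Draw(screen.Output, Vector2.Zero, Color.White);
        spriteBatch.End();

    The cost is one screen-sized render target per stacked screen, which is the main trade-off against the render-queue idea.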

    Read the article
