Search Results

Search found 25518 results on 1021 pages for 'iterative development'.


  • (Unity) Getting a mirrored mesh from my data structure

    - by Steve
    Here's the background: I'm in the beginning stages of an RTS game in Unity. I have a procedurally generated terrain with a perlin-noise height map, as well as a function to generate a river. The problem is that the graphical creation of the map is taking the data structure of the map and rotating it by 180 degrees. I noticed this problem when i was creating my rivers. I would set the River's height to flat, and noticed that the actual tiles that were flat in the graphical representation were flipped and mirrored. Here's 3 screenshots of the map from different angles: http://imgur.com/a/VLHHq As you can see, if you flipped (graphically) the river by 180 degrees on the z axis, it would fit where the terrain is flattened. I have a suspicion it is being caused by a misunderstanding on my part of how vertices work. Alas, here is a snippet of the code that is used: This code here creates a new array of Tile objects, which hold the information for each tile, including its type, coordinate, height, and it's 4 vertices public DTileMap (int size_x, int size_y) { this.size_x = size_x; this.size_y = size_y; //Initialize Map_Data Array of Tile Objects map_data = new Tile[size_x, size_y]; for (int j = 0; j < size_y; j++) { for (int i = 0; i < size_x; i++) { map_data [i, j] = new Tile (); map_data[i,j].coordinate.x = (int)i; map_data[i,j].coordinate.y = (int)j; map_data[i,j].vertices[0] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -j * GTileMap.TileMap.tileSize); map_data[i,j].vertices[1] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j) * GTileMap.TileMap.tileSize); map_data[i,j].vertices[2] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize); map_data[i,j].vertices[3] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize); } } This code sets the river tiles to height 0 foreach (Tile t in map_data) { if (t.realType == "Water") { t.vertices[0].y = 0f; t.vertices[1].y = 0f; t.vertices[2].y = 0f; t.vertices[3].y = 0f; } } And below is the code to generate the actual graphics from the data: public void BuildMesh () { DTileMap.DTileMap map = new DTileMap.DTileMap (size_x, size_z); int numTiles = size_x * size_z; int numTris = numTiles * 2; int vsize_x = size_x + 1; int vsize_z = size_z + 1; int numVerts = vsize_x * vsize_z; // Generate the mesh data Vector3[] vertices = new Vector3[ numVerts ]; Vector3[] normals = new Vector3[numVerts]; Vector2[] uv = new Vector2[numVerts]; int[] triangles = new int[ numTris * 3 ]; int x, z; for (z=0; z < vsize_z; z++) { for (x=0; x < vsize_x; x++) { normals [z * vsize_x + x] = Vector3.up; uv [z * vsize_x + x] = new Vector2 ((float)x / size_x, 1f - (float)z / size_z); } } for (z=0; z < vsize_z; z+=1) { for (x=0; x < vsize_x; x+=1) { if (x == vsize_x - 1 && z == vsize_z - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z - 1].vertices [3]; } else if (z == vsize_z - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z - 1].vertices [2]; } else if (x == vsize_x - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z].vertices [1]; } else { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [0]; vertices [z * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [1]; vertices [(z+1) * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [2]; vertices [(z+1) * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [3]; } } } } for (z=0; 
z < size_z; z++) { for (x=0; x < size_x; x++) { int squareIndex = z * size_x + x; int triOffset = squareIndex * 6; triangles [triOffset + 0] = z * vsize_x + x + 0; triangles [triOffset + 2] = z * vsize_x + x + vsize_x + 0; triangles [triOffset + 1] = z * vsize_x + x + vsize_x + 1; triangles [triOffset + 3] = z * vsize_x + x + 0; triangles [triOffset + 5] = z * vsize_x + x + vsize_x + 1; triangles [triOffset + 4] = z * vsize_x + x + 1; } } // Create a new Mesh and populate with the data Mesh mesh = new Mesh (); mesh.vertices = vertices; mesh.triangles = triangles; mesh.normals = normals; mesh.uv = uv; // Assign our mesh to our filter/renderer/collider MeshFilter mesh_filter = GetComponent<MeshFilter> (); MeshCollider mesh_collider = GetComponent<MeshCollider> (); mesh_filter.mesh = mesh; mesh_collider.sharedMesh = mesh; calculateMeshTangents (mesh); BuildTexture (map); } If this looks familiar to you, its because i got most of it from Quill18. I've been slowly adapting it for my uses. And please include any suggestions you have for my code. I'm still in the very early prototyping stage.
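    One thing that stands out (a guess, not a confirmed diagnosis): the tile data stores its vertices at -j * tileSize on the z axis, while BuildMesh() indexes rows, UVs and triangles as if z grows with j, and mixing those two conventions is exactly the kind of thing that produces a map flipped by 180 degrees. Below is a minimal sketch of building the vertex grid with a single convention in one place; GridBuilder, tileSize and heightAt are hypothetical names, not part of the original code.

    ```csharp
    using System;
    using UnityEngine;

    // Sketch only: build the vertex grid with ONE z convention (+j) so the tile data
    // and the rendered mesh cannot end up mirrored relative to each other.
    // tileSize and heightAt are stand-ins for your own fields/lookups.
    public static class GridBuilder
    {
        public static Vector3[] BuildVertexGrid(int sizeX, int sizeY, float tileSize,
                                                Func<int, int, float> heightAt)
        {
            int vsizeX = sizeX + 1;
            int vsizeY = sizeY + 1;
            var verts = new Vector3[vsizeX * vsizeY];

            for (int j = 0; j < vsizeY; j++)
                for (int i = 0; i < vsizeX; i++)
                    // +j everywhere; if you prefer -j, flip the sign here AND in the
                    // code that builds UVs and triangles, never in just one place.
                    verts[j * vsizeX + i] = new Vector3(i * tileSize, heightAt(i, j), j * tileSize);

            return verts;
        }
    }
    ```

    If the -z convention is the one you actually want, keep it, but apply it consistently in the UV and triangle loops as well so the data structure and the graphics agree.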

    Read the article

  • Shadowmap first phase and shaders

    - by KaiserJohaan
    I am using OpenGL 3.3 and am trying to implement shadow mapping using cube maps. I have a framebuffer with a depth attachment and a cube map texture. My question is how to design the shaders for the first pass, when creating the shadow map. This is my vertex shader: in vec3 position; uniform mat4 lightWVP; void main() { gl_Position = lightWVP * vec4(position, 1.0); } Now, do I even need a fragment shader in this pass? From what I understand after reading http://www.opengl.org/wiki/Fragment_Shader, gl_FragCoord.z is written by default to the currently attached depth component (to which my cube map texture is bound). Thus I shouldn't even need a fragment shader for this pass, and from what I understand there is no other work to do in the fragment shader other than writing this value. Is this correct?

    Read the article

  • Switching between levels, re-initialize existing structure or create new one?

    - by Martino Wullems
    This is something I've been wondering about for quite a while. When building games that consist of multiple levels (platformers, shmups, etc.), what is the preferred method of switching between levels? Let's say we have a level class that does the following: load data for the level design (tiles), enemies, graphics etc.; set up all these elements in their appropriate locations and display them; start physics and game logic. I'm stuck between the following 2 methods: 1: Throw away everything in the level class and make a new one - we have to load an entirely new level anyway! 2: Pause the game logic and physics, unload all current assets, then re-initialize those components with the level data for the new level. They both have their pros and cons. Method 1 is a lot easier and seems to make sense since we have to redo everything anyway. But method 2 allows you to re-use existing elements, which might save resources and allow for a smoother transition to the new level.

    Read the article

  • What causes Box2D revolute joints to separate?

    - by nbolton
    I have created a rag doll using dynamic bodies (rectangles) and simple revolute joints (with lower and upper angles). When my rag doll hits the ground (which is a static body) the bodies seem to fidget and the joints separate. It looks like the bodies are sticking to the ground, and the momentum of the rag doll pulls the joint apart (see screenshot below). I'm not sure if it's related, but I'm using the Badlogic GDX Java wrapper for Box2D. Here's some snippets of what I think is the most relevant code: private RevoluteJoint joinBodyParts( Body a, Body b, Vector2 anchor, float lowerAngle, float upperAngle) { RevoluteJointDef jointDef = new RevoluteJointDef(); jointDef.initialize(a, b, a.getWorldPoint(anchor)); jointDef.enableLimit = true; jointDef.lowerAngle = lowerAngle; jointDef.upperAngle = upperAngle; return (RevoluteJoint)world.createJoint(jointDef); } private Body createRectangleBodyPart( float x, float y, float width, float height) { PolygonShape shape = new PolygonShape(); shape.setAsBox(width, height); BodyDef bodyDef = new BodyDef(); bodyDef.type = BodyType.DynamicBody; bodyDef.position.y = y; bodyDef.position.x = x; Body body = world.createBody(bodyDef); FixtureDef fixtureDef = new FixtureDef(); fixtureDef.shape = shape; fixtureDef.density = 10; fixtureDef.filter.groupIndex = -1; fixtureDef.filter.categoryBits = FILTER_BOY; fixtureDef.filter.maskBits = FILTER_STUFF | FILTER_WALL; body.createFixture(fixtureDef); shape.dispose(); return body; } I've skipped the method for creating the head, as it's pretty much the same as the rectangle method (just using a cricle shape). Those methods are used like so: torso = createRectangleBodyPart(x, y + 5, 0.25f, 1.5f); Body head = createRoundBodyPart(x, y + 7.4f, 1); Body leftLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1); Body rightLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1); Body leftLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1); Body rightLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1); Body leftArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f); Body rightArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f); joinBodyParts(torso, head, new Vector2(0, 1.6f), headAngle); leftLegTopJoint = joinBodyParts(torso, leftLegTop, new Vector2(0, -1.2f), 0.1f, legAngle); rightLegTopJoint = joinBodyParts(torso, rightLegTop, new Vector2(0, -1.2f), 0.1f, legAngle); leftLegBottomJoint = joinBodyParts(leftLegTop, leftLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0); rightLegBottomJoint = joinBodyParts(rightLegTop, rightLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0); leftArmJoint = joinBodyParts(torso, leftArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle); rightArmJoint = joinBodyParts(torso, rightArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle);

    Read the article

  • How do I create a camera?

    - by Morphex
    I am trying to create a generic camera class for a game engine that works for different types of cameras (orbital, 6DoF, FPS), but I have no idea how to go about it. I have read about quaternions and matrices, but I do not understand how to implement them. In particular, it seems you need "Up", "Forward" and "Right" vectors, a quaternion for rotations, and View and Projection matrices. For example, an FPS camera only rotates around the world Y axis and the camera's Right axis; a 6DoF camera always rotates around its own axes; and an orbital camera just translates out to a set distance and always looks at a fixed target point. The concepts are there; implementing this is not trivial for me. SharpDX already seems to have matrices and quaternions implemented, but I don't know how to use them to create a camera. Can anyone point out what I am missing or what I got wrong? I would really appreciate a tutorial, some piece of code, or just a plain explanation of the concepts.
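    A minimal sketch of how these pieces usually fit together, assuming SharpDX's math types (Matrix.LookAtLH, Quaternion.RotationYawPitchRoll and friends; double-check the exact names against your SharpDX version). The point is that every camera type shares the same view/projection construction and only differs in how Position and Orientation are updated each frame:

    ```csharp
    using System;
    using SharpDX;

    // Camera sketch: FPS, 6DoF and orbital cameras all reuse View/Projection below
    // and differ only in how Position and Orientation are driven.
    public class Camera
    {
        public Vector3 Position = Vector3.Zero;
        public Quaternion Orientation = Quaternion.Identity;

        Vector3 Rotate(Vector3 v) =>
            Vector3.TransformCoordinate(v, Matrix.RotationQuaternion(Orientation));

        public Vector3 Forward => Rotate(Vector3.UnitZ);   // +Z forward, left-handed
        public Vector3 Up      => Rotate(Vector3.UnitY);
        public Vector3 Right   => Rotate(Vector3.UnitX);

        public Matrix View => Matrix.LookAtLH(Position, Position + Forward, Up);
        public Matrix Projection { get; set; } =
            Matrix.PerspectiveFovLH((float)(Math.PI / 4), 16f / 9f, 0.1f, 1000f);

        // FPS-style look: yaw around world Y, pitch around the local X axis.
        public void SetFpsLook(float yaw, float pitch)
        {
            Orientation = Quaternion.RotationYawPitchRoll(yaw, pitch, 0f);
        }

        // Orbital: sit at a fixed distance from a target, always looking at it.
        public void Orbit(Vector3 target, float distance, float yaw, float pitch)
        {
            SetFpsLook(yaw, pitch);
            Position = target - Forward * distance;
        }
    }
    ```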

    Read the article

  • Want to develop my own primitive physics engine, don't know how to start with its high-level architecture. Suggestions?

    - by Violet Giraffe
    A few years ago I tried to make a simple 3D game - billiards. I completed about 50% of it and got stuck on the physics. Basically, I only need to calculate balls rolling over a flat surface, but it would be nice to make something more flexible. I know all the formulas and laws (most of them, anyway). The problem is I have no idea how to structure a good physics engine architecture-wise. I tried Google and other forums but didn't find what I was looking for. The only suggestion was to look at an open-source engine, but I'm not a good enough programmer to make heads or tails of one...
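    For the architecture question, a rough skeleton (an assumed structure, not taken from any particular engine) is often enough to get started: bodies hold state, and a World.Step() drives integration, broad-phase pruning, narrow-phase tests and contact resolution, in that order. For billiards the shapes can be limited to spheres plus a static plane for the table. The sketch below is C# with stubbed method bodies:

    ```csharp
    using System.Collections.Generic;
    using System.Numerics;

    // Architectural skeleton only, not a working engine.
    public class RigidBody
    {
        public Vector3 Position, Velocity;
        public float Mass = 1f, Radius = 0.05f;   // sphere-only bodies for a start
    }

    public struct Contact
    {
        public RigidBody A, B;      // B may be null for body-vs-table contacts
        public Vector3 Normal;
        public float Penetration;
    }

    public class World
    {
        public List<RigidBody> Bodies = new List<RigidBody>();

        public void Step(float dt)
        {
            Integrate(dt);                       // apply gravity/friction, move bodies
            var pairs = BroadPhase();            // cheap pruning, e.g. a uniform grid
            var contacts = NarrowPhase(pairs);   // exact sphere/sphere, sphere/plane tests
            Resolve(contacts);                   // impulses + positional correction
        }

        void Integrate(float dt) { /* v += a*dt; p += v*dt; rolling friction here */ }
        List<(RigidBody, RigidBody)> BroadPhase() { return new List<(RigidBody, RigidBody)>(); }
        List<Contact> NarrowPhase(List<(RigidBody, RigidBody)> pairs) { return new List<Contact>(); }
        void Resolve(List<Contact> contacts) { /* impulse-based response */ }
    }
    ```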

    Read the article

  • How to move a line of sprites in a sine wave?

    - by electroflame
    So, I'm spawning a horizontal line of enemies that I would like to have move in a nice wave. Currently I tried: Enemy.position.X += Enemy.velocity.X; Enemy.position.Y += -(float)Math.Cos(Enemy.position.X / 200) * 5; This...kind of works. But the wave is not a true wave. The top and bottom of one pass are not the same (e.g. 5 for the top, and -5 for the bottom (I don't mean literal points, I just meant that it's not symmetrical)). Is there a better way to do this? I would like the whole line to move in a wave, so it looks fluid. By that, I mean that it should look like each enemy is "following" the one in front of it. The code I posted does have this fluidity to it, but like I said, it's not a perfect wave. Any ideas? Thanks in advance.
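    A common way to get a symmetric, fluid wave is to keep the spawn height as a baseline and set Y from a sine of elapsed time plus a per-enemy phase offset, instead of accumulating Cos() offsets onto the current position. The sketch below uses assumed names (baselineY, phaseOffset, amplitude, frequency are not fields from the question):

    ```csharp
    using System;
    using Microsoft.Xna.Framework;

    // Sketch: compute the wave Y directly from elapsed time. baselineY is the Y the
    // enemy spawned at; phaseOffset differs per enemy (e.g. index * 0.5f) so the line
    // appears to follow itself. All parameter names here are assumptions.
    public static class WaveMotion
    {
        public static Vector2 Advance(Vector2 position, float velocityX, float baselineY,
                                      float phaseOffset, float totalSeconds,
                                      float amplitude = 40f, float frequency = 3f)
        {
            float x = position.X + velocityX;
            float y = baselineY + amplitude * (float)Math.Sin(frequency * totalSeconds + phaseOffset);
            return new Vector2(x, y);
        }
    }
    ```

    Usage would look something like Enemy.position = WaveMotion.Advance(Enemy.position, Enemy.velocity.X, Enemy.spawnY, enemyIndex * 0.5f, (float)gameTime.TotalGameTime.TotalSeconds); the result is a true sine with equal peaks above and below the baseline.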

    Read the article

  • How to make the Angry Birds "shot arch" dotted line? [duplicate]

    - by unexpected62
    This question already has an answer here: "Show path of a body of where it should go after linear impulse is applied" (2 answers). I am making a game that includes 2D projectile flight paths like those in Angry Birds. Angry Birds shows the previous shot as a dotted-line "arc" so the player can see where the last shot went. I think recording that data is simple enough once a shot is fired, but in my game I want to show it preemptively, i.e. before the shot. How would I go about calculating this dotted line? The other caveat is that I have wind in my game. How can you determine a projectile's path preemptively when wind will affect it too? This seems like a pretty tough problem. My wind right now just applies a constant force every animation step in the direction of the wind flow. I'm using Box2D and AndEngine, if it matters.
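    Since gravity and the wind are both known before the shot, one way to get the preview is to integrate the launch velocity with the same constant accelerations the physics step will apply, and keep every Nth point for the dots. The sketch below is plain C# with assumed parameter names, not Box2D/AndEngine API; if your wind is a force rather than an acceleration, divide it by the body's mass first:

    ```csharp
    using System.Collections.Generic;
    using System.Numerics;

    // Sketch: pre-compute the dotted arc by stepping position and velocity under
    // gravity plus the constant wind acceleration, mirroring what the physics
    // engine will do once the shot is actually fired.
    public static class TrajectoryPreview
    {
        public static List<Vector2> Predict(Vector2 start, Vector2 launchVelocity,
                                            Vector2 gravity, Vector2 windAccel,
                                            float dt = 1f / 60f, int steps = 90)
        {
            var points = new List<Vector2>();
            Vector2 p = start, v = launchVelocity;

            for (int i = 0; i < steps; i++)
            {
                v += (gravity + windAccel) * dt;   // same constant forces every step
                p += v * dt;
                if (i % 5 == 0) points.Add(p);     // keep every 5th point for the dots
            }
            return points;
        }
    }
    ```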

    Read the article

  • Collision and Graphics integration

    - by Shlomi Atia
    I'm a little confused about the integration between collision and graphics. They both need to share the same position in the world. The most obvious choice is the center of the entity, which is fine for bounding volumes and fixed-size sprites. However, for characters with sprites of varying heights, like this: http://gamemedia.wcgame.ru/data/2011-07-17/game-sprite-sheet.jpg this is no longer good: the character won't align with the ground if I draw it from the center. I could just make all the sprites the same height, but that would be a waste of memory (the largest sprite is 4 times larger than the smallest one). Even then, it is not an option at all with skeletal sprites like this one: http://user-generated-content.java-gaming.org/img-vault/212a171fc1ebb27ab77608fb9b2dd9bd9205361ce6300b21a7f8d06d025fbbd8.png It seems that the graphics need to be drawn from the ground for characters, but not for other images such as scenery and obstacles. The only solution I could think of was having another position, called the draw-position, which is the entity center for images and the bottom of the collision volume for characters. Then, when I draw relative to that position, it should work properly. I haven't found any references for something like that, so I'm a little unsure about it. Does anyone know of a better approach to this problem? Thanks
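    The draw-position idea described above is a reasonable approach. A small illustrative sketch (XNA-style, with made-up member names) of keeping the collision volume centred while anchoring the sprite to the bottom-centre of that volume:

    ```csharp
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    // Sketch of the "draw-position" idea: physics uses the centre of Bounds,
    // rendering anchors the sprite to the bottom-centre of Bounds, so sprites of
    // any height stay planted on the ground. Names are illustrative only.
    public class Entity
    {
        public Rectangle Bounds;     // collision volume, centred on the body
        public Texture2D Sprite;     // current frame, arbitrary height

        Vector2 FeetPosition => new Vector2(Bounds.Center.X, Bounds.Bottom);

        public void Draw(SpriteBatch batch)
        {
            // Shift left by half the sprite width and up by its full height, so the
            // sprite's feet sit exactly on the bottom edge of the collision box.
            var topLeft = FeetPosition - new Vector2(Sprite.Width / 2f, Sprite.Height);
            batch.Draw(Sprite, topLeft, Color.White);
        }
    }
    ```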

    Read the article

  • Is it ok to initialize an RB_ConstraintActor in PostBeginPlay?

    - by Almo
    I have a KActorSpawnable subclass that acts weird. In PostBeginPlay, I initialize an RB_ConstraintActor; the default is not to allow rotation. If I create one in the editor, it's fine, and won't rotate. If I spawn one, it rotates. Here's the class: class QuadForceKActor extends KActorSpawnable placeable; var(Behavior) bool bConstrainRotation; var(Behavior) bool bConstrainX; var(Behavior) bool bConstrainY; var(Behavior) bool bConstrainZ; var RB_ConstraintActor PhysicsConstraintActor; simulated event PostBeginPlay() { Super.PostBeginPlay(); PhysicsConstraintActor = Spawn(class'RB_ConstraintActorSpawnable', self, '', Location, rot(0, 0, 0)); if(bConstrainRotation) { PhysicsConstraintActor.ConstraintSetup.bSwingLimited = true; PhysicsConstraintActor.ConstraintSetup.bTwistLimited = true; } SetLinearConstraints(bConstrainX, bConstrainY, bConstrainZ); PhysicsConstraintActor.InitConstraint(self, None); } function SetLinearConstraints(bool InConstrainX, bool InConstrainY, bool InConstrainZ) { if(InConstrainX) { PhysicsConstraintActor.ConstraintSetup.LinearXSetup.bLimited = 1; } else { PhysicsConstraintActor.ConstraintSetup.LinearXSetup.bLimited = 0; } if(InConstrainY) { PhysicsConstraintActor.ConstraintSetup.LinearYSetup.bLimited = 1; } else { PhysicsConstraintActor.ConstraintSetup.LinearYSetup.bLimited = 0; } if(InConstrainZ) { PhysicsConstraintActor.ConstraintSetup.LinearZSetup.bLimited = 1; } else { PhysicsConstraintActor.ConstraintSetup.LinearZSetup.bLimited = 0; } } DefaultProperties { bConstrainRotation=true bConstrainX=false bConstrainY=false bConstrainZ=false bSafeBaseIfAsleep=false bNoEncroachCheck=false } Here's the code I use to spawn one. It's a subclass of the one above, but it doesn't reference the constraint at all. local QuadForceKCreateBlock BlockActor; BlockActor = spawn(class'QuadForceKCreateBlock', none, 'PowerCreate_Block', BlockLocation(), m_PreparedRotation, , false); BlockActor.SetDuration(m_BlockDuration); BlockActor.StaticMeshComponent.SetNotifyRigidBodyCollision(true); BlockActor.StaticMeshComponent.ScriptRigidBodyCollisionThreshold = 0.001; BlockActor.StaticMeshComponent.SetStaticMesh(m_ValidCreationBlock.StaticMesh); BlockActor.StaticMeshComponent.AddImpulse(m_InitialVelocity); I used to initialize an RB_ConstraintActor where I spawned it from the outside. This worked, which is why I'm pretty sure it has nothing to do with the other code in QuadForceKCreateBlock. I then added the internal constraint in QuadForceKActor for other purposes. When I realized I had two constraints on the CreateBlock doing the same thing, I removed the constraint code from the place where I spawn it. Then it started rotating. Is there a reason I should not be initializing an RB_ConstraintActor in PostBeginPlay? I feel like there's some basic thing about how the engine works that I'm missing.

    Read the article

  • Shooter in iOS and a visible Aim line before shooting

    - by London2423
    I have two questions. I am developing a game for iOS, but I built it on my computer first so I could test it there. I was able to do most of it for PC, but I am having a very hard time with the iOS port. The problem I have is that I don't know how to shoot in iOS - more specifically, how to do line rendering in iOS. This is the script I use on my computer: using UnityEngine; using System.Collections; public class NewBehaviourScript : MonoBehaviour { LineRenderer line; void Start () { line = gameObject.GetComponent<LineRenderer>(); line.enabled = false; } void Update () { if (Input.GetButtonDown ("Fire1")) { StopCoroutine ("FireLaser"); StartCoroutine ("FireLaser"); } } IEnumerator FireLaser () { line.enabled = true; while (Input.GetButton("Fire1")) { Ray ray = new Ray(transform.position, transform.forward); RaycastHit hit; line.SetPosition (0, ray.origin); if (Physics.Raycast (ray, out hit,100)) { line.SetPosition(1,hit.point); if (hit.rigidbody) { hit.rigidbody.AddForceAtPosition(transform.forward * 5, hit.point); } } else line.SetPosition (1, ray.GetPoint (100)); yield return null; } line.enabled = false; { } } } Which parts do I have to change for iOS? I have already set up the touch GUI events in iOS, so my player moves around in Xcode/on the iPhone, but I need some help with the shooting part. The second part of the question is where I have to insert or change the script so that I first aim, actually see the aim line, and then shoot. Right now the player can only shoot; it cannot aim at the GameObject, show the line coming out of the gun aiming at the object, and then shoot. How can I do that? Everyone tells me "line renderer", but that's what I did. Thank you
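    On iOS the "Fire1" button generally has to be replaced with Unity's touch API, and the aim line can be redrawn every frame while a finger is held down, with the force applied only when the touch ends. The sketch below is one way to arrange that; the class name and tuning values are assumptions, and it reuses a LineRenderer on the same object:

    ```csharp
    using UnityEngine;

    // Sketch: show the aim line while a finger is held down, apply the force only
    // when the touch ends. Uses Unity's touch API instead of "Fire1".
    public class TouchAimAndShoot : MonoBehaviour
    {
        LineRenderer line;

        void Start()
        {
            line = GetComponent<LineRenderer>();
            line.enabled = false;
        }

        void Update()
        {
            if (Input.touchCount == 0) { line.enabled = false; return; }

            Touch touch = Input.GetTouch(0);
            Ray ray = new Ray(transform.position, transform.forward);
            RaycastHit hit;
            bool didHit = Physics.Raycast(ray, out hit, 100f);
            Vector3 end = didHit ? hit.point : ray.GetPoint(100f);

            // While the finger is down: just show where the shot would go.
            line.enabled = true;
            line.SetPosition(0, ray.origin);
            line.SetPosition(1, end);

            // When the finger lifts: actually shoot.
            if (touch.phase == TouchPhase.Ended && didHit && hit.rigidbody != null)
                hit.rigidbody.AddForceAtPosition(transform.forward * 5f, hit.point);
        }
    }
    ```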

    Read the article

  • How to create a reasonably sized urban area manually but efficiently

    - by Overv
    I have a game concept that only really works in an urban area that is of reasonable scale and diversity. In terms of what it should look like, think GTA, in terms of the size think more like a small neighbourhood with residents and a few local shops, perhaps a supermarket. I'm mostly experienced in programming and not at all with modelling, texturing or drawing, but I've found that SketchUp allows me to design interesting looking buildings that I model after real world buildings in my own neighbourhood. Designing these buildings and other objects can take from a few tens of minutes to a few hours. My question is: what is the best approach for a one man army like me who does manage to model buildings to create an interesting city environment in a reasonable amount of time? My game will not be based on procedural generation, the environment will actually be modelled like GTA cities.

    Read the article

  • Strategies to Defeat Memory Editors for Cheating - Desktop Games

    - by ashes999
    I'm assuming we're talking about desktop games -- something the player downloads and runs on their local computer. There are many memory editors that let a player find and freeze values, like your player's health. How do you prevent cheating? What strategies are effective against this kind of cheating? I'm looking for some good ones. Two mediocre ones I use are: displaying values as a percentage instead of the raw number (e.g. 46/50 = 92% health), and a low-level class that holds values in an array and moves them with each change.
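    A third common technique, sketched below, is to never keep the plain value in memory at all: store it XOR'd with a per-instance random key, optionally with a cheap checksum, so scanning RAM for the number shown on screen finds nothing to freeze. This only raises the bar; a determined attacker can still defeat it.

    ```csharp
    using System;

    // Sketch: the stored field is always plainValue ^ key, so a scan for the
    // on-screen number (e.g. health = 46) finds nothing. The checksum catches
    // naive edits of the obfuscated field.
    public struct ObfuscatedInt
    {
        static readonly Random Rng = new Random();

        readonly int key;
        int stored;       // always plainValue ^ key
        int checksum;     // cheap tamper check

        public ObfuscatedInt(int value)
        {
            key = Rng.Next();
            stored = value ^ key;
            checksum = value * 31 + 7;
        }

        public int Value
        {
            get
            {
                int plain = stored ^ key;
                if (plain * 31 + 7 != checksum)
                    throw new InvalidOperationException("Memory tampering detected.");
                return plain;
            }
            set { stored = value ^ key; checksum = value * 31 + 7; }
        }
    }
    ```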

    Read the article

  • Designing rules to fight smallpox in Civ-style TBS games

    - by Williham Totland
    TL;DR: How do you design a ruleset for a Civ-style TBS game that prevents city smallpox from being a profitable or viable strategy? Long version: Civ-style games are pretty great. Bringing a civilization from cradle to grave is a great endeavor, and practicing diplomacy with hard-line human players is fun and challenging. In theory. In practice, however, many of these games have, especially in multiplayer, exactly one viable strategy: city smallpox, a.k.a. infinite city spread, a.k.a. covering all available space with 1-citizen cities, packed as tight as they will go. I suppose this could count as emergent gameplay, but still, it could hardly be considered to be in the spirit of this class of game. The Civilization series, of course, is stuck with its more or less fixed rule sets, established with the original Civilization. Yes, there have been major changes in some respects, but the rules pertaining to city building and maintenance have stayed pretty similar. So the question, then: if you build a ruleset for a TBS from the ground up, what rules should be in place to prevent infinite city sprawl from being a viable strategy? Or should ICS be a viable strategy?
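    One family of rules designers reach for here is making per-city overhead grow faster than linearly with the number of cities (and with distance from the capital), so the Nth one-citizen city eventually costs more than it produces. A purely illustrative sketch of such a maintenance formula, with made-up constants not taken from any existing Civ ruleset:

    ```csharp
    using System;

    // Illustrative only: each additional city costs more than the last (superlinear
    // in city count, plus a distance term), so spamming tiny cities stops paying off.
    public static class Maintenance
    {
        public static double CityUpkeep(int cityIndex, double distanceFromCapital)
        {
            double baseCost = 2.0;
            double crowding = Math.Pow(cityIndex, 1.4);   // 1st city cheap, 20th painful
            double logistics = 0.3 * distanceFromCapital;
            return baseCost + crowding + logistics;
        }
    }
    ```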

    Read the article

  • Character jump animation is not working when I hit the space bar

    - by muzzy
    i am having an issue with my game in XNA. My jump sprite sheet for my character does not trigger when i hit the space bar. I cant seem to find the problem. Please help me. I am also put the code below to make things easier. namespace WindowsGame4 { public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; // start of new code Texture2D playerWalk; // sprite sheet of walk cycle (14 frames) Texture2D idle; // idle animation Texture2D jump; // jump animation Vector2 playerPos; // to hold x and y position info for the player Point frameDimensions; // to hold width and height values for the frames int presentFrame; // to record which frame we are on at any given time int noOfFrames; // to hold the total number of frames in the spritesheet int elapsedTime; // to know how long each frame has been shown int frameDuration; // to hold info about how long each frame should be shown SpriteEffects flipDirection; // SpriteEffects object int speed; //rate of movement int upMovement; int downMovement; int rightMovement; int leftMovement; int jumpApex; string state; //this is going to be "idle","walking" or "jumping". KeyboardState previousKeyboardState; Vector2 originalPlayerPos; Vector2 movementDirection; Vector2 movementSpeed; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } protected override void Initialize() { // textures will be defined in the LoadContent() method playerPos = new Vector2(0, 200); // starting position for the player is at the left of the screen, and a Y position of 200 frameDimensions = new Point(55, 65); // each frame in the idle sprite sheet is 55 wide by 65 high presentFrame = 0; // start at frame 0 noOfFrames = 5; // there are 5 frames in the idle cycle elapsedTime = 0; // set elapsed time to start at 0 frameDuration = 80; // 80 milliseconds is how long each frame will show for (the higher the number, the slower the animation) flipDirection = SpriteEffects.None; // set the value of flipDirection to none speed = 200; upMovement = -2; downMovement = 2; rightMovement = 1; leftMovement = -1; jumpApex = 100; state = "idle"; previousKeyboardState = Keyboard.GetState(); originalPlayerPos = Vector2.Zero; movementDirection = Vector2.Zero; movementSpeed = Vector2.Zero; base.Initialize(); } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); playerWalk = Content.Load<Texture2D>("sprites/walkSmall"); // load the walk cycle spritesheet idle = Content.Load<Texture2D>("sprites/idleCycle"); // load the idle cycle sprite sheet jump = Content.Load<Texture2D>("sprites/jump"); // load the jump cycle sprite sheet } protected override void UnloadContent() // we're not using this method at the moment { } protected override void Update(GameTime gameTime) // Update method - used it to call a number of other methods { if (Keyboard.GetState().IsKeyDown(Keys.Escape)) { this.Exit(); // Exit the game if the Escape key is pressed } KeyboardState presentKeyboardState = Keyboard.GetState(); UpdateMovement(presentKeyboardState, gameTime); UpdateIdle(presentKeyboardState, gameTime); UpdateJump(presentKeyboardState); UpdateAnimation(gameTime); playerPos += movementDirection * movementSpeed * (float)gameTime.ElapsedGameTime.TotalSeconds; previousKeyboardState = presentKeyboardState; base.Update(gameTime); } private void UpdateAnimation(GameTime gameTime) { elapsedTime += gameTime.ElapsedGameTime.Milliseconds; if (elapsedTime > frameDuration) { elapsedTime -= frameDuration; 
elapsedTime = elapsedTime - frameDuration; presentFrame++; if (presentFrame > noOfFrames) if (state != "jumping") { presentFrame = 0; } else { presentFrame = 8; } } } protected void UpdateMovement(KeyboardState presentKeyboardState, GameTime gameTime) { if (state == "idle") { movementSpeed = Vector2.Zero; movementDirection = Vector2.Zero; if (presentKeyboardState.IsKeyDown(Keys.Left)) { state = "walking"; movementSpeed.X = speed; movementDirection.X = leftMovement; flipDirection = SpriteEffects.FlipHorizontally; } if (presentKeyboardState.IsKeyDown(Keys.Right)) { state = "walking"; movementSpeed.X = speed; movementDirection.X = rightMovement; flipDirection = SpriteEffects.None; } } } private void UpdateIdle(KeyboardState presentKeyboardState, GameTime gameTime) { if ((presentKeyboardState.IsKeyUp(Keys.Left) && previousKeyboardState.IsKeyDown(Keys.Left) || presentKeyboardState.IsKeyUp(Keys.Right) && previousKeyboardState.IsKeyDown(Keys.Right) && state != "jumping")) { state = "idle"; } } private void UpdateJump(KeyboardState presentKeyboardState) { if (state == "walking" || state == "idle") { if (presentKeyboardState.IsKeyDown(Keys.Space) && !presentKeyboardState.IsKeyDown(Keys.Space)) { presentFrame = 1; DoJump(); } } if (state == "jumping") { if (originalPlayerPos.Y - playerPos.Y > jumpApex) { movementDirection.Y = downMovement; } if (playerPos.Y > originalPlayerPos.Y) { playerPos.Y = originalPlayerPos.Y; state = "idle"; movementDirection = Vector2.Zero; } } } private void DoJump() { if (state != "jumping") { state = "jumping"; originalPlayerPos = playerPos; movementDirection.Y = upMovement; movementSpeed = new Vector2(speed, speed); } } protected override void Draw(GameTime gameTime) // Draw method { GraphicsDevice.Clear(Color.CornflowerBlue); spriteBatch.Begin(); // begin the spritebatch if (state == "walking") { noOfFrames = 14; frameDimensions = new Point(55, 65); Vector2 playerWalkPos = new Vector2(playerPos.X, playerPos.Y - 28); spriteBatch.Draw(playerWalk, playerWalkPos, new Rectangle((presentFrame * frameDimensions.X), 0, frameDimensions.X, frameDimensions.Y), Color.White, 0, Vector2.Zero, 1, flipDirection, 0); } if (state == "idle") { noOfFrames = 5; frameDimensions = new Point(55, 65); Vector2 idlePos = new Vector2(playerPos.X, playerPos.Y - 28); spriteBatch.Draw(idle, idlePos, new Rectangle((presentFrame * frameDimensions.X), 0, frameDimensions.X, frameDimensions.Y), Color.White, 0, Vector2.Zero, 1, flipDirection, 0); } if (state == "jumping") { noOfFrames = 9; frameDimensions = new Point(55, 92); Vector2 jumpPos = new Vector2(playerPos.X, playerPos.Y - 28); spriteBatch.Draw(jump, jumpPos, new Rectangle((presentFrame * frameDimensions.X), 0, frameDimensions.X, frameDimensions.Y), Color.White, 0, Vector2.Zero, 1, flipDirection, 0); } spriteBatch.End(); // end the spritebatch commands base.Draw(gameTime); } } }
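    One likely culprit, reading the code as posted: the guard in UpdateJump() tests presentKeyboardState.IsKeyDown(Keys.Space) && !presentKeyboardState.IsKeyDown(Keys.Space), which can never be true, so DoJump() is never reached. The usual XNA pattern is to compare the current state with the previous frame's state. The sketch below is a guess at the intended check and reuses the fields from the question rather than being standalone code:

    ```csharp
    // Likely fix (sketch): trigger on the frame Space goes from up to down by
    // comparing against previousKeyboardState instead of asking the same state
    // whether the key is both down and not down.
    private void UpdateJump(KeyboardState presentKeyboardState)
    {
        if (state == "walking" || state == "idle")
        {
            bool spacePressedThisFrame =
                presentKeyboardState.IsKeyDown(Keys.Space) &&
                previousKeyboardState.IsKeyUp(Keys.Space);   // previous frame, not current

            if (spacePressedThisFrame)
            {
                presentFrame = 1;
                DoJump();
            }
        }
        // ... rest of the method unchanged ...
    }
    ```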

    Read the article

  • Why can't a blendShader sample anything but the current coordinate of the background image?

    - by Triynko
    In Flash, you can set a DisplayObject's blendShader property to a pixel shader (flash.shaders.Shader class). The mechanism is nice, because Flash automatically provides your Shader with two input images, including the background surface and the foreground display object's bitmap. The problem is that at runtime, the shader doesn't allow you to sample the background anywhere but under the current output coordinate. If you try to sample other coordinates, it just returns the color of the current coordinate instead, ignoring the coordinates you specified. This seems to occur only at runtime, because it works properly in the Pixel Bender toolkit. This limitation makes it impossible to simulate, for example, the Aero Glass effect in Windows Vista/7, because you cannot sample the background properly for blurring. I must mention that it is possible to create the effect in Flash through manual composition techniques, but it's hard to determine when it actually needs updated, because Flash does not provide information about when a particular area of the screen or a particular display object needs re-rendered. For example, you may have a fixed glass surface with objects moving underneath it that don't dispatch events when they move. The only alternative is to re-render the glass bar every frame, which is inefficient, which is why I am trying to do it through a blendShader so Flash determines when it needs rendered automatically. Is there a technical reason for this limitation, or is it an oversight of some sort? Does anyone know of a workaround, or a way I could provide my manual composition implementation with information about when it needs re-rendered? The limitation is mentioned with no explanation in the last note in this page: http://help.adobe.com/en_US/as3/dev/WSB19E965E-CCD2-4174-8077-8E5D0141A4A8.html It says: "Note: When a Pixel Bender shader program is run as a blend in Flash Player or AIR, the sampling and outCoord() functions behave differently than in other contexts.In a blend, a sampling function will always return the current pixel being evaluated by the shader. You cannot, for example, use add an offset to outCoord() in order to sample a neighboring pixel. Likewise, if you use the outCoord() function outside a sampling function, its coordinates always evaluate to 0. You cannot, for example, use the position of a pixel to influence how the blended images are combined."

    Read the article

  • Variables in static library are never initialized. Why?

    - by Coyote
    I have a bunch of variables that should be initialized then my game launches, but must of them are never initialized. Here is an example of the code: MyClass.h class MyClass : public BaseObject { DECLARE_CLASS_RTTI(MyClass, BaseObject); ... }; MyClass.cpp REGISTER_CLASS(MyClass) Where REGISTER_CLASS is a macro defined as follow #define REGISTER_CLASS(className)\ class __registryItem##className : public __registryItemBase {\ virtual className* Alloc(){ return NEW className(); }\ virtual BaseObject::RTTI& GetRTTI(){ return className::RTTI; }\ }\ \ const __registryItem##className __registeredItem##className(#className); and __registryItemBase looks like this: class __registryItemBase { __registryItemBase(const _string name):mName(name){ ClassRegistry::Register(this); } const _string mName; virtual BaseObject* Alloc() = 0; virtual BaseObject::RTTI& GetRTTI() = 0; } Now the code is similar to what I currently have and what I have works flawlessly, all the registered classes are registered to a ClassManager before main(...) is called. I'm able to instantiate and configure components from scripts and auto-register them to the right system etc... The problem arrises when I create a static library (currently for the iPhone, but I fear it will happen with android as well). In that case the code in the .cpp files is never registered. Why is the resulting code not executed when it is in the library while the same code in the program's binary is always executed? Bonus questions: For this to work in the static library, what should I do? Is there something I am missing? Do I need to pass a flag when building the lib? Should I create another structure and init all the __registeredItem##className using that structure?

    Read the article

  • How do I stop one animation on a model in XNA?

    - by Mehdi Bugnard
    I have run into a difficulty stopping an animation. Starting the animation works great, but I cannot see how to stop it and then resume the paused animation later. "animationPlayer.StartClip(clip)" is used to start the animation, but I cannot find a way to stop it. Thanks a lot. Here is the code I use. protected override void LoadContent() { //Model - Player model_player = Content.Load<Model>("Models\\Player\\models"); // Look up our custom skinning information. SkinningData skinningData = model_player.Tag as SkinningData; if (skinningData == null) throw new InvalidOperationException ("This model does not contain a SkinningData tag."); // Create an animation player, and start decoding an animation clip. animationPlayer = new AnimationPlayer(skinningData); AnimationClip clip = skinningData.AnimationClips["ArmLowAction_006"]; animationPlayer.StartClip(clip); } protected overide update(GameTime gameTime) { KeyboardState key = Keyboard.GetState(); // If player don't move -> stop anim if (!key.IsKeyDown(Keys.W) && !keyStateOld.IsKeyUp(Keys.S) && !keyStateOld.IsKeyUp(Keys.A) && !keyStateOld.IsKeyUp(Keys.D)) { //animation stop ? not exist ? animationPlayer.Stop(); isPlayerStop = true; } else { if(isPlayerStop == true) { isPlayerStop = false; animationPlayer.StartClip(Clip); } }
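    Assuming this AnimationPlayer is the one from the XNA Skinned Model sample, it has no built-in Stop(), but the same effect is usually achieved by simply not advancing it: keep a paused flag and skip its Update call while paused, and it resumes from the same frame when updates continue. A sketch, with isPaused as a new field and the Update signature as in that sample (verify it against your own AnimationPlayer):

    ```csharp
    // Sketch: "pause" the skinned-model animation by not advancing the player.
    bool isPaused = false;

    protected override void Update(GameTime gameTime)
    {
        KeyboardState key = Keyboard.GetState();

        bool moving = key.IsKeyDown(Keys.W) || key.IsKeyDown(Keys.S) ||
                      key.IsKeyDown(Keys.A) || key.IsKeyDown(Keys.D);

        isPaused = !moving;   // freeze the walk cycle while the player stands still

        if (!isPaused)
            animationPlayer.Update(gameTime.ElapsedGameTime, true, Matrix.Identity);

        base.Update(gameTime);
    }
    ```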

    Read the article

  • How much server bandwidth does an average RTS game require per month?

    - by Nat Weiss
    My friend and I are going to write a multiplayer, multiplatform RTS game and are currently analyzing the costs of going with a client-server architecture. The game will have a small map with mostly characters, not buildings (think of DotA or League of Legends). The authoritative game logic will run on the server and message packet sizes will be highly optimized. We'd like to know approximately how much server bandwidth our proposed RTS game would use on a monthly basis, considering these theoretical constants: 100 concurrent users maximum 8 players maximum per game 10 ticks per second Bonus: If you can tell us approximately how much server RAM this kind of game would use that would also help a great deal. Thanks in advance.
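    A back-of-envelope bound is outbound traffic ≈ players × ticks per second × bytes per update. The 100-bytes-per-update figure below is a pure assumption; plug in your real optimized packet size plus UDP/IP overhead:

    ```csharp
    using System;

    // Back-of-envelope only; replace bytesPerUpdate with your measured packet size.
    class BandwidthEstimate
    {
        static void Main()
        {
            int concurrentPlayers = 100;
            int ticksPerSecond = 10;
            int bytesPerUpdate = 100;   // assumed average state packet, incl. headers

            double bytesPerSecond = concurrentPlayers * ticksPerSecond * (double)bytesPerUpdate;
            double gbPerMonth = bytesPerSecond * 60 * 60 * 24 * 30 / 1e9;

            // 100 * 10 * 100 B/s = 100 KB/s outbound, roughly 260 GB/month at constant
            // full load; real usage is lower because the server is rarely full.
            Console.WriteLine($"{bytesPerSecond / 1000.0} KB/s  ~=  {gbPerMonth:F0} GB/month");
        }
    }
    ```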

    Read the article

  • What 2D game engines are there available for C++?

    - by dysoco
    I just realized there are no C++ 2D game engines that I know of - for example, something like Pygame in Python, or Slick2D in Java. We have the following: SDL - too low-level, not a game engine. SFML - handles more than SDL and is more modern, but still not a game engine; I like it, but I have found the 2.0 version a little buggy. Irrlicht - a game engine, but 3D-focused. Ogre3D - same as Irrlicht. Allegro - a game engine, but it's C-based, and I'd like a modern C++ library. Monocle Engine - this looks like what I need... but sadly there is no documentation, no community... nothing; all I have is the GitHub repo. So, do you know of any? I'd like to use C++, not C# and not Java: I'm just more comfortable with C++.

    Read the article

  • How do I draw video frames onto the screen permanently using XNA?

    - by izb
    I have an app that plays back a video and draws the video onto the screen at a moving position. When I run the app, the video moves around the screen as it plays. Here is my Draw method... protected override void Draw(GameTime gameTime) { Texture2D videoTexture = null; if (player.State != MediaState.Stopped) videoTexture = player.GetTexture(); if (videoTexture != null) { spriteBatch.Begin(); spriteBatch.Draw( videoTexture, new Rectangle(x++, 0, 400, 300), /* Where X is a class member */ Color.White); spriteBatch.End(); } base.Draw(gameTime); } The video moves horizontally across the screen. This is not exactly what I expected, since I have no lines of code that clear the screen. My question is: why does it not leave a trail behind? Also, how would I make it leave a trail behind?
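    The backbuffer's contents are generally not guaranteed to survive between frames in XNA 4 (it typically behaves as if discarded), which is the usual explanation for no trail appearing even without a clear. To keep a trail deliberately, one approach is to draw into a RenderTarget2D created with RenderTargetUsage.PreserveContents, never clear it, and then draw that target to the screen. A sketch, reusing the question's player, spriteBatch and x fields and an assumed 800x480 size:

    ```csharp
    // Sketch: accumulate frames in a render target that is never cleared, then draw
    // that target to the backbuffer each frame.
    RenderTarget2D trail;   // new field

    protected override void LoadContent()
    {
        // ... existing LoadContent code (spriteBatch, player) stays as it is ...
        trail = new RenderTarget2D(GraphicsDevice, 800, 480, false,
                                   SurfaceFormat.Color, DepthFormat.None, 0,
                                   RenderTargetUsage.PreserveContents);
    }

    protected override void Draw(GameTime gameTime)
    {
        Texture2D videoTexture = player.State != MediaState.Stopped ? player.GetTexture() : null;

        GraphicsDevice.SetRenderTarget(trail);      // draw onto the persistent target
        if (videoTexture != null)
        {
            spriteBatch.Begin();
            spriteBatch.Draw(videoTexture, new Rectangle(x++, 0, 400, 300), Color.White);
            spriteBatch.End();
        }

        GraphicsDevice.SetRenderTarget(null);       // back to the screen
        spriteBatch.Begin();
        spriteBatch.Draw(trail, Vector2.Zero, Color.White);
        spriteBatch.End();

        base.Draw(gameTime);
    }
    ```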

    Read the article

  • How to emulate Mode 13h in a modern 3D renderer?

    - by David Gouveia
    I was indulging in nostalgia and remembered the first game I created, which used Mode 13h. This mode was really simple to work with, since it was essentially just an array of bytes with an element for each pixel on the screen (using an indexed color scheme). So I thought it might be fun to create something nowadays under these restrictions, but on modern hardware. The API could be as simple as: public class Mode13h { public byte[] VideoMemory = new byte[320 * 200]; public Color[] Palette = new Color[256]; } Now I'm wondering what would be the best way to get this data on the screen, using something like XNA / DirectX / OpenGL. The only solution I could think of was to create a texture with the same size as the VideoMemory array, write the contents of VideoMemory to it every frame, then render that texture in a full screen quad with the correct aspect ratio and using point texture filtering for that retro look. Is there a better way?
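    The texture approach described above is the usual one. A sketch of what it can look like in XNA terms (the Mode13h class is the one from the question; everything else here is assumed): resolve the palette on the CPU into a Color[] each frame, upload it with SetData, and draw the 320x200 texture scaled up with point sampling for the retro look.

    ```csharp
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    // Sketch: convert indexed VideoMemory through the palette and upload once per frame.
    public class Mode13hRenderer
    {
        readonly Texture2D screen;
        readonly Color[] scratch = new Color[320 * 200];

        public Mode13hRenderer(GraphicsDevice device)
        {
            screen = new Texture2D(device, 320, 200);
        }

        public void Present(Mode13h mode, SpriteBatch batch, Rectangle destination)
        {
            for (int i = 0; i < scratch.Length; i++)
                scratch[i] = mode.Palette[mode.VideoMemory[i]];   // index -> RGB

            // If XNA complains the texture is still set on the device, clear
            // GraphicsDevice.Textures[0] before calling SetData.
            screen.SetData(scratch);

            batch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
                        SamplerState.PointClamp, null, null);
            batch.Draw(screen, destination, Color.White);
            batch.End();
        }
    }
    ```

    Pick the destination rectangle so it preserves the 4:3 aspect ratio of the original mode; PointClamp keeps the pixels crisp instead of smearing them.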

    Read the article

  • EaseFunction in LoopEntityModifier

    - by Siddharth
    For my game, I need an EaseFunction in a LoopEntityModifier. I am rotating a ball around a certain object and I want the ball to go around it about 4 to 5 times, with an easing effect so that it looks good. But if I put the EaseFunction on the RotationModifier itself, the easing is applied on every loop, whereas I want it to occur only once, either at the start or at the end of the whole rotation. So if I could provide an EaseFunction to the LoopEntityModifier, that would be good for me, or something similar would also work. At present my code looks something like this: new LoopEntityModifier(new RotationModifier(...)); I hope someone has an idea about this.

    Read the article

  • How to code Time Stop or Bullet Time in a game?

    - by David Miler
    I am developing a single-player RPG platformer in XNA 4.0. I would like to add an ability that makes time "stop" or slow down, with only the player character moving at the original speed (similar to the Time Stop spell from the Baldur's Gate series). I am not looking for an exact implementation, rather some general ideas and design patterns. EDIT: Thanks all for the great input. I have come up with the following solution: public void Update(GameTime gameTime) { GameTime newGameTime = new GameTime(gameTime.TotalGameTime, new TimeSpan(gameTime.ElapsedGameTime.Ticks / DESIRED_TIME_MODIFIER)); gameTime = newGameTime; or something along these lines. This way I can set a different time for the player component and a different one for the rest. It certainly is not universal enough to work for a game where warping time like this would be a central element, but I hope it should work for this case. I kinda dislike the fact that it litters the main Update loop, but it certainly is the easiest way to implement it. I guess that is essentially the same as tesselode suggested, so I'm going to give him the green tick :)
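    An alternative sketch that keeps the scaling out of the main Update loop: give each updatable thing a TimeScale and hand it pre-scaled elapsed seconds, with the player left at 1.0 while a time stop drops everything else to 0 (or 0.1f for bullet time). The interface and names below are assumptions, not XNA API:

    ```csharp
    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    // Sketch: entities receive already-scaled delta time, so the main loop never
    // special-cases the player.
    public interface IScaledUpdatable
    {
        float TimeScale { get; set; }
        void Update(float scaledSeconds);
    }

    public static class TimeStop
    {
        public static void UpdateAll(IEnumerable<IScaledUpdatable> entities, GameTime gameTime)
        {
            float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
            foreach (var e in entities)
                e.Update(dt * e.TimeScale);   // 0 = frozen, 1 = normal speed
        }
    }
    ```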

    Read the article

  • Basic game architecture best practices in Cocos2D on iOS

    - by MrDatabase
    Consider the following simple game: 20 squares floating around an iPhone's screen. Tapping a square causes that square to disappear. What's the "best practices" way to set this up in Cocos2D? Here's my plan so far: One Objective-c GameState singleton class (maintains list of active squares) One CCScene (since there's no menus etc) One CCLayer (child node of the scene) Many CCSprite nodes (one for each square, all child nodes of the layer) Each sprite listens for a tap on itself. Receive tap = remove from GameState Since I'm relatively new to Cocos2D I'd like some feedback on this design. For example I'm unsure of the GameState singleton. Perhaps it's unnecessary.

    Read the article
