Search Results

Search found 26774 results on 1071 pages for 'distributed development'.


  • HUD layer not being added to my scene

    - by Shailesh_ios
    I have a CCScene which already holds my gameLayer, and I am trying to add a HUD layer to it. But the HUD layer is not getting added to my scene. I can say that because I have set up a CCLabel on the HUD layer, and when I run my project I cannot see that label. Here's what I am doing, in my gameLayer:

      +(id) scene
      {
          CCScene *scene = [CCScene node];
          GameScreen *layer = [GameScreen node];
          [scene addChild: layer];

          HUDclass *otherLayer = [HUDclass node];
          [scene addChild: otherLayer];
          layer.HC = otherLayer; // HC is a reference to my HUD layer in the @interface of gameLayer

          return scene;
      }

    And then in my HUD layer I have just added a CCLabelTTF in its init method, like this:

      -(id) init
      {
          if ((self = [super init]))
          {
              CCLabelTTF *label = [CCLabelTTF labelWithString:@"IN WEAPON CLASS" fontName:@"Arial" fontSize:15];
              label.position = ccp(240, 160);
              [self addChild: label];
          }
          return self;
      }

    But when I run my project I don't see that label. What am I doing wrong here? Any ideas? Thanks in advance for your time.

    Read the article

  • Explaining Asteroids Movement code

    - by Moaz ELdeen
    I'm writing an Asteroids Atari clone, and I want to figure out how the AI for the asteroids is done. I have come across this piece of code, but I can't work out 100% what it does:

      if ((float)rand()/(float)RAND_MAX < 0.5)
      {
          m_Pos.x = -app::getWindowWidth() / 2;
          if ((float)rand()/(float)RAND_MAX < 0.5)
              m_Pos.x = app::getWindowWidth() / 2;
          m_Pos.y = (int) ((float)rand()/(float)RAND_MAX * app::getWindowWidth());
      }
      else
      {
          m_Pos.x = (int) ((float)rand()/(float)RAND_MAX * app::getWindowWidth());
          m_Pos.y = -app::getWindowHeight() / 2;
          if (rand() < 0.5)
              m_Pos.y = app::getWindowHeight() / 2;
      }

      m_Vel.x = (float)rand()/(float)RAND_MAX * 2;
      if ((float)rand()/(float)RAND_MAX < 0.5)
      {
          m_Vel.x = -m_Vel.x;
      }
      m_Vel.y = (float)rand()/(float)RAND_MAX * 2;
      if ((float)rand()/(float)RAND_MAX < 0.5)
          m_Vel.y = -m_Vel.y;
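
    The snippet is a spawn routine rather than AI: it flips a coin to pick either a left/right or a top/bottom screen edge, places the asteroid on that edge with a random coordinate along it, and gives it a random velocity of up to 2 units per axis, with each component negated half the time. Two things look like slips in the original: the first branch scales the random y coordinate by the window width rather than the height, and the bare rand() < 0.5 in the second branch compares an integer to 0.5, so it is true only when rand() returns exactly 0. A minimal restatement of the same strategy (assuming, like the original appears to, a coordinate system centred on the screen):

      #include <cstdlib>

      struct Vec2 { float x, y; };

      static float randf() { return (float)rand() / (float)RAND_MAX; } // uniform 0..1

      // place an asteroid on a random screen edge, with a small random velocity
      void spawnOnEdge(Vec2& pos, Vec2& vel, int w, int h)
      {
          if (randf() < 0.5f) {                               // left or right edge
              pos.x = (randf() < 0.5f) ? w / 2.0f : -w / 2.0f;
              pos.y = randf() * h;                            // anywhere along that edge
          } else {                                            // top or bottom edge
              pos.x = randf() * w;
              pos.y = (randf() < 0.5f) ? h / 2.0f : -h / 2.0f;
          }
          vel.x = randf() * 2.0f * ((randf() < 0.5f) ? -1.0f : 1.0f); // up to 2 units/axis,
          vel.y = randf() * 2.0f * ((randf() < 0.5f) ? -1.0f : 1.0f); // sign flipped half the time
      }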

    Read the article

  • How does this snippet of code create a ray direction vector?

    - by Isaac Waller
    In the Minecraft source code, this code is used to create a direction vector for a ray from pitch and yaw:

      float f1 = MathHelper.cos(-rotationYaw * 0.01745329F - 3.141593F);
      float f3 = MathHelper.sin(-rotationYaw * 0.01745329F - 3.141593F);
      float f5 = -MathHelper.cos(-rotationPitch * 0.01745329F);
      float f7 = MathHelper.sin(-rotationPitch * 0.01745329F);
      return Vec3D.createVector(f3 * f5, f7, f1 * f5);

    I was wondering how it works, and what the constant 0.01745329F is.
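
    The magic numbers are both standard: 0.01745329 is pi/180 (the degrees-to-radians factor) and 3.141593 is pi, so the code converts both angles to radians, flips the yaw by half a turn, and builds a unit vector in spherical coordinates. The pitch sets the vertical (y) component, and cos(pitch) scales the horizontal circle traced out by the yaw. A standalone sketch of the same construction:

      #include <cmath>

      struct Vec3 { float x, y, z; };

      Vec3 rayFromYawPitch(float yawDeg, float pitchDeg)
      {
          const float DEG2RAD = 3.14159265f / 180.0f;    // = 0.01745329f
          float yaw   = -yawDeg * DEG2RAD - 3.14159265f; // yaw, rotated 180 degrees
          float pitch = -pitchDeg * DEG2RAD;
          float horiz = -std::cos(pitch);                // radius of the horizontal circle
          return { std::sin(yaw) * horiz,                // x  (f3 * f5)
                   std::sin(pitch),                      // y  (f7): vertical component
                   std::cos(yaw) * horiz };              // z  (f1 * f5)
      }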

    Read the article

  • Hashing 3D position into 2D position

    - by notabene
    I am doing volumetric raycasting and am currently working on depth jitter. I have a 3D position on the ray and want to sample a 2D noise texture to jitter the depth. The function for converting (or hashing) a 3D position to a 2D one has to produce completely different numbers for very small changes (especially because I am sampling in texture space, so sample values differ only very slightly), and it has to be "shader-wise", so forget about branches, loops, etc. I'm looking forward to your nice and fast solutions.
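
    One widely used branch-free pattern for this is the sine-based fract hash: dot the position against large, unrelated constants, take the sine, multiply by a large number, and keep the fractional part, so tiny input changes land far apart in the output. Shown below in C++ for clarity (in GLSL/HLSL, replace std::sin with sin and fract with frac); the 12.9898/78.233/43758.5453 constants come from a well-known shader one-liner, and the others are arbitrary values in the same spirit:

      #include <cmath>

      struct Vec2 { float x, y; };

      static float fract(float v) { return v - std::floor(v); }

      // branch-free 3D -> 2D hash; nearby inputs produce decorrelated outputs
      Vec2 hash3to2(float x, float y, float z)
      {
          float a = std::sin(x * 12.9898f + y * 78.2330f + z * 37.7190f) * 43758.5453f;
          float b = std::sin(x * 39.3460f + y * 11.1350f + z * 83.1550f) * 43758.5453f;
          return { fract(a), fract(b) };
      }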

    Read the article

  • How to shift a vector based on the rotation of another vector?

    - by bpierre
    I’m learning 2D programming, so excuse my approximations, and please don’t hesitate to correct me. I am just trying to fire a bullet from a player. I’m using HTML canvas (top-left origin). Here is a representation of my problem: the black vector represents the position of the player (the grey square). The green vector represents its direction. The red disc represents the target. The red vector represents the direction of a bullet, which will move in the direction of the target (red and dotted line). The blue cross represents the point from which I really want to fire the bullet (and the blue and dotted line represents its movement). This is how I draw the player (this is the player object; position, direction and dimensions are 2D vectors):

      ctx.save();
      ctx.translate(this.position.x, this.position.y);
      ctx.rotate(this.direction.getAngle());
      ctx.drawImage(this.image,
                    Math.round(-this.dimensions.x/2), Math.round(-this.dimensions.y/2),
                    this.dimensions.x, this.dimensions.y);
      ctx.restore();

    This is how I instantiate a new bullet:

      var bulletPosition = playerPosition.clone(); // copy of the player position
      var bulletDirection = Vector2D.substract(targetPosition, playerPosition).normalize(); // player-to-target difference, normalized
      new Bullet(bulletPosition, bulletDirection);

    This is how I move the bullet (this is the bullet object):

      var speed = 5;
      this.position.add(Vector2D.multiply(this.direction, speed));

    And this is how I draw the bullet (this is the bullet object):

      ctx.save();
      ctx.translate(this.position.x, this.position.y);
      ctx.rotate(this.direction.getAngle());
      ctx.fillRect(0, 0, 3, 3);
      ctx.restore();

    How can I change the direction and position vectors of the bullet to ensure it is on the blue dotted line? I think I should represent the shift with a vector, but I can’t see how to use it.
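
    The usual answer is exactly the shift vector the question guesses at: express the muzzle point (the blue cross) as an offset in the player's local frame, rotate that offset by the player's facing angle, and add it to the player's position when spawning the bullet; the direction vector can stay aimed at the target. A sketch of the rotation, written in C++ with illustrative names (the same two lines translate directly to the question's Vector2D class):

      #include <cmath>

      struct Vec2 { float x, y; };

      // standard 2D rotation by 'angle' radians; this is the same transform
      // that ctx.rotate applies when the player is drawn
      Vec2 rotated(Vec2 v, float angle)
      {
          float c = std::cos(angle), s = std::sin(angle);
          return { v.x * c - v.y * s,
                   v.x * s + v.y * c };
      }

      // at spawn time (muzzleOffset is the blue cross in the player's local frame):
      //   bulletPos = playerPos + rotated(muzzleOffset, player.direction.getAngle())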

    Read the article

  • Import 3ds into JMonkeyEngine 3

    - by Yanick Rochon
    I have asked this question on SO, but I think it will be more suitable here. Basically, we are trying to import an animated character body (with skeleton) from 3D Studio Max into JMonkeyEngine 3, but while we have succeeded at importing some animations, we cannot seem to export the skeleton to .skeleton.xml using the OgreXML format. Since OgreXML seems to be the favored way to import models into JME, we dropped .obj files and such. Any help appreciated.

    Read the article

  • Avoid overriding all the methods in the child class

    - by Heckel
    The context: I am making a game in C++ using SFML. I have a class that controls what is displayed on the screen (the manager on the image below). It has a list of all the things to draw, like images, text, etc. To be able to store them in one list I created a Drawable class from which all the other drawable classes inherit. The image below represents how I would organize each class. Drawable has a virtual method Draw that will be called by the manager. Image and Text override this method. My problem is that I would like the Image::draw method to work for Circle, Polygon, etc., since sf::CircleShape and sf::ConvexShape inherit from sf::Shape. I thought of two ways to do that. My first idea would be for Image to have a pointer to an sf::Shape, and the subclasses would make it point to their sf::CircleShape or sf::ConvexShape members (like on the image below). In the Polygon constructor I would write something like

      ptr_shape = &polygon_shape;

    This doesn't look very elegant, because I have two variables that are, in fact, just one. My second idea is to store the sf::CircleShape or sf::ConvexShape inside ptr_shape, like

      ptr_shape = new sf::ConvexShape(...);

    and, to use a function that exists only in ConvexShape, to cast it like so:

      ((sf::ConvexShape*)ptr_shape)->convex_method();

    But that doesn't look very elegant either, and I am not even sure I am allowed to do that. My question: I added details about the whole thing because I thought that maybe my whole architecture was wrong. I would like to know how I could design my program to be safe without overriding all the Image methods. I apologize if this question has already been asked; I have no idea what to google.
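
    For what it's worth, the second idea is legal C++: downcasting is safe as long as the pointer really does point at an sf::ConvexShape, which each subclass can guarantee because it constructed the object itself. It becomes tidy if the base pointer owns the object and each subclass exposes one typed accessor instead of scattering casts. A minimal sketch along those lines, assuming SFML's sf::Shape hierarchy as described (class names follow the question, not SFML):

      #include <SFML/Graphics.hpp>
      #include <memory>

      // Image owns its shape through a single base-class pointer; subclasses
      // pick the concrete type, so there is no duplicate member variable.
      class Image
      {
      public:
          explicit Image(std::unique_ptr<sf::Shape> shape) : m_shape(std::move(shape)) {}
          virtual ~Image() = default;
          void draw(sf::RenderTarget& target) { target.draw(*m_shape); } // works for any shape
      protected:
          std::unique_ptr<sf::Shape> m_shape;
      };

      class Polygon : public Image
      {
      public:
          Polygon() : Image(std::make_unique<sf::ConvexShape>()) {}
          // safe: this class constructed m_shape as a ConvexShape
          sf::ConvexShape& convex() { return static_cast<sf::ConvexShape&>(*m_shape); }
      };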

    Read the article

  • NVidia control panel SSAO not working

    - by János Turánszki
    I am just about to implement screen space ambient occlusion in my game, but first I wanted to try enabling it from the NVidia control panel, only to find out that it is greyed out so that I cannot enable it. I could enable SSAO this way for some other games, but not for every one. I know this technique requires the depth buffer and (optionally) a normal map texture to sample information from, which I already have access to, given that I have a deferred renderer working. After that I actually rolled back to a previous version of my game which still uses forward rendering, so the depth buffer is bound alongside the backbuffer I render to from the get-go, hoping that the NVidia control panel would somehow make use of it. It was not working with forward rendering either. (I also tried FXAA in the control panel, and that works, but FXAA doesn't need any depth or normal texture.) So my question is: how can I get this option working when I enable it in the NVidia control panel?

    Read the article

  • HLSL 5 interpolation issues

    - by metredigm
    I'm having issues with the depth components of my shadow-mapping shaders. The shadow map rendering shader is fine and works very well. The world rendering shader is more problematic. The only value which seems to be definitely off is the pixel's position from the light's perspective, which I pass in parallel to the position:

      struct Pixel
      {
          float4 position  : SV_Position;
          float4 light_pos : TEXCOORD2;
          float3 normal    : NORMAL;
          float2 texcoord  : TEXCOORD;
      };

    The reason I used the semantic TEXCOORD2 on the light's pixel position is that I believe the problem lies with Direct3D's interpolation of values between shaders, and I started trying random semantics and also forcing linear and noperspective interpolation. In the world rendering shader, I observed in the pixel shader that the Z value of light_pos was always extremely close to, but less than, the W value. This resulted in a depth result of 0.999 or similar for every pixel. Here is the vertex shader code:

      struct Vertex
      {
          float3 position : POSITION;
          float3 normal   : NORMAL;
          float2 texcoord : TEXCOORD;
      };

      struct Pixel
      {
          float4 position  : SV_Position;
          float4 light_pos : TEXCOORD2;
          float3 normal    : NORMAL;
          float2 texcoord  : TEXCOORD;
      };

      cbuffer Camera : register (b0)
      {
          matrix world;
          matrix view;
          matrix projection;
      };

      cbuffer Light : register (b1)
      {
          matrix light_world;
          matrix light_view;
          matrix light_projection;
      };

      Pixel RenderVertexShader(Vertex input)
      {
          Pixel output;
          output.position = mul(float4(input.position, 1.0f), world);
          output.position = mul(output.position, view);
          output.position = mul(output.position, projection);
          output.light_pos = mul(float4(input.position, 1.0f), world); // the posted code writes to
          output.light_pos = mul(output.light_pos, light_view);        // world_pos here, but the struct
          output.light_pos = mul(output.light_pos, light_projection);  // declares the field as light_pos
          output.texcoord = input.texcoord;
          output.normal = input.normal;
          return output;
      }

    I suspect interpolation to be the culprit, as I used the camera matrices in place of the light matrices in the vertex shader and had the same problem. The problem is evident, as both of the same vectors were passed to a pixel from the VS, but only one of them showed a change in the PS. I have already thoroughly debugged the validity of the matrices, the cbuffers, and the multiplications. I'm very stumped and have been trying to solve this for quite some time. Misc info: the light projection matrix and the camera projection matrix are the same, generated from D3DXMatrixPerspectiveFovLH(), with an FOV of 60.0f * 3.141f / 180.0f, a near clipping plane of 0.1f, and a far clipping plane of 1000.0f. Any ideas on what is happening? (This is a repost of my question on Stack Overflow.)
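
    One observation worth checking before blaming the interpolators: with a 0.1 near / 1000 far clip range, z/w values around 0.999 are exactly what a correct perspective projection produces over most of the view distance, because post-divide depth is hyperbolic in view-space z. For D3DXMatrixPerspectiveFovLH, clip.z = z*f/(f-n) - f*n/(f-n) and clip.w = z, so depth = f*(z-n) / (z*(f-n)). A quick sanity check of that formula:

      #include <cstdio>
      #include <initializer_list>

      int main()
      {
          const float n = 0.1f, f = 1000.0f;
          for (float z : {1.0f, 10.0f, 100.0f, 999.0f}) {
              // post-divide depth for a left-handed perspective projection
              float depth = f * (z - n) / (z * (f - n));
              std::printf("view-space z = %6.1f -> depth = %f\n", z, depth);
          }
          return 0;
      }

    Already at z = 100 this yields roughly 0.9991, so near-constant 0.999 readings can simply mean the geometry sits in the far 99% of the clip range rather than that interpolation is broken.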

    Read the article

  • How many achievements should I include, and of what challenge?

    - by stephelton
    I know this question is fairly broad and subjective, but I'm wondering if there's been any published research into what an optimal number of achievements is and what kind of challenge they should present. The game this question directly relates to is a shoot-em-up, but an ideal answer is fairly theoretical. If there are too few achievements, or they are not challenging, I would expect them to fail in their goal of keeping people playing. If there are too many, or they are unreasonably difficult, I would expect people to quickly give up. I personally witnessed the latter happening in StarCraft 2: a section of the achievements would have you win hundreds of games against the AI opponents (boring!).

    Read the article

  • Optimizing hierarchical transform

    - by Geotarget
    I'm transforming objects in 3D space by transforming each vector with the object's 4x4 transform matrix. In order to achieve hierarchical transforms, I transform the child by its own matrix, and then by the parent's matrix. This becomes costly because objects deeper in the display tree have to be transformed by all their parent objects. This is what's happening, in summary:

      Root   -- transform its verts by the Root matrix
      Parent -- transform its verts by the Parent, then the Root matrix
      Child  -- transform its verts by the Child, Parent, then Root matrix

    Is there a faster way to transform vertices to achieve hierarchical transforms? What if I first concatenated each transform matrix with the parent matrices, and then transformed the verts by that final resulting matrix? Would that work, and wouldn't that be faster?

      Root   -- transform its verts by the Root matrix
      Parent -- concat the Parent and Root matrices, transform its verts by the concatenated matrix
      Child  -- concat the Child, Parent and Root matrices, transform its verts by the concatenated matrix
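
    Yes, that works, and it is the standard approach: matrix multiplication is associative, so concatenating first gives identical results, and it is faster whenever a node has more vertices than ancestors (almost always), since each node pays one 4x4 concatenation and each vertex then needs exactly one matrix multiply regardless of tree depth. A minimal sketch with a cached world matrix per node (updated root-to-leaf, so the parent's cache is valid when the child reads it):

      #include <array>

      using Mat4 = std::array<float, 16>; // row-major 4x4

      Mat4 mul(const Mat4& a, const Mat4& b)
      {
          Mat4 r{};
          for (int i = 0; i < 4; ++i)
              for (int j = 0; j < 4; ++j)
                  for (int k = 0; k < 4; ++k)
                      r[i * 4 + j] += a[i * 4 + k] * b[k * 4 + j];
          return r;
      }

      struct Node
      {
          Mat4 local{}, world{};   // 'world' caches local concatenated with all ancestors
          Node* parent = nullptr;

          void updateWorld()       // call in root-to-leaf order, once per node
          {
              world = parent ? mul(parent->world, local) : local;
              // every vertex is now transformed by 'world' alone: one
              // matrix-vector multiply instead of one per ancestor
          }
      };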

    Read the article

  • Updates for IOS AppStore Multiplayer Game

    - by TobiHeidi
    I am developing a multiplayer game for the web, Android and iOS. For the web and Android I can instantly push out new versions of my game, because those platforms support executing remotely loaded code. But with iOS I need to wait for Apple approval, which takes about 10 days, and I want to push updates more than weekly. What if my server code changes so that the client MUST update? Do I run an old version of the server code just for iOS? How do other multiplayer devs handle this?
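
    One common pattern (an assumption on my part, not a report of what any particular studio does) is to version the wire protocol explicitly: the server keeps serving the oldest version still in review or in the wild, old clients get either a legacy code path or a forced-update response, and the legacy path is retired only once the approved iOS build has been out long enough. A sketch of the handshake decision:

      enum class Reply { Serve, ServeLegacy, MustUpdate };

      // decide how to treat a connecting client based on its protocol version
      Reply handshake(int clientVersion, int oldestSupported, int current)
      {
          if (clientVersion >= current)         return Reply::Serve;
          if (clientVersion >= oldestSupported) return Reply::ServeLegacy; // e.g. the last approved iOS build
          return Reply::MustUpdate;                                        // too old: force an update
      }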

    Read the article

  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24 bpp/32 bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need graphics-card-compatible bitmap formats. My questions: What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)? How should bitmaps be stored for high-performance algorithms, such that read/write times are the fastest (fixed array? with/without padding? 24 bpp or 32 bpp?) How should bitmaps be stored in applications handling a lot of bitmap data, to minimize memory usage (JPEG, or a faster [de]compression algorithm?) Some possible methods: use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access, with all pixels allocated in one contiguous memory chunk (could be 1-10 MB); use a form of "sparse" data storage where each line of the bitmap is allocated separately, allowing memory to be reused and requiring smaller contiguous memory segments; or store bitmaps in compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used, deleting the unpacked data if it's not used for 10 seconds.
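
    For the first two questions, the near-universal answer in C++ image code is the first option: a single contiguous allocation with an explicit row stride, which also answers the padding question, since the stride may exceed the width when rows are aligned (to 4 or 16 bytes, say) for SIMD or API interop. A minimal 32-bpp sketch:

      #include <cstdint>
      #include <vector>

      struct Bitmap32
      {
          int width = 0, height = 0, stride = 0;  // stride in pixels; may exceed width for alignment
          std::vector<uint32_t> pixels;           // one contiguous, cache-friendly allocation

          Bitmap32(int w, int h) : width(w), height(h), stride(w), pixels(size_t(w) * h) {}

          // row-major addressing: y * stride + x
          uint32_t& at(int x, int y)       { return pixels[size_t(y) * stride + x]; }
          uint32_t  at(int x, int y) const { return pixels[size_t(y) * stride + x]; }
      };

    32 bpp is usually preferred over 24 bpp for processing even when alpha is unused, because 4-byte pixels keep accesses aligned and simply indexed; 24 bpp trades that speed for 25% less memory.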

    Read the article

  • Problem with boundary collision

    - by James Century
    The problem: when the player hits the left boundary he stops (this is exactly what I want), but when he hits the right boundary, he keeps going until his rectangle's left edge meets the right boundary. Outcome: https://www.youtube.com/watch?v=yuJfIWZ_LL0&feature=youtu.be My code:

      public class Player extends GameObject {
          BufferedImageLoader loader;
          Texture tex = Game.getInstance();
          BufferedImage image;
          Animation playerWalkLeft;
          private HealthBarManager healthBar;
          private String username;
          private int width;
          private ManaBarManager manaBar;

          public Player(float x, float y, ObjectID ID) {
              super(x, y, ID, null);
              loader = new BufferedImageLoader();
              playerWalkLeft = new Animation(5, tex.player[10], tex.player[11], tex.player[12],
                      tex.player[13], tex.player[14], tex.player[15], tex.player[17], tex.player[18]);
          }

          public void tick(LinkedList<GameObject> object) {
              setX(getX() + velX);
              setY(getY() + velY);
              playerWalkLeft.runAnimation();
          }

          public void render(Graphics g) {
              g.setColor(Color.BLACK);
              FontMetrics fm = g.getFontMetrics(g.getFont());
              if (username != null)
                  width = fm.stringWidth(username);
              if (username != null) {
                  g.drawString(username, (int) x - width / 2 + 15, (int) y);
              }
              if (velX != 0) {
                  playerWalkLeft.drawAnimation(g, (int) x, (int) y);
              } else {
                  g.drawImage(tex.player[16], (int) x, (int) y, null);
              }
              g.setColor(Color.PINK);
              g.drawRect((int) x, (int) y, 33, 48);
              g.drawRect(0, 0, (int) Game.getWalkableBounds().getWidth(),
                         (int) Game.getWalkableBounds().getHeight());
          }

          @SuppressWarnings("unused")
          private Image getCurrentImage() { return image; }

          public float getX() { return x; }
          public float getY() { return y; }

          public void setX(float x) {
              Rectangle gameBoundry = Game.getWalkableBounds();
              // note: both tests clamp the sprite's LEFT edge; the right-hand test
              // never accounts for the sprite's width (33), which matches the symptom
              if (x >= gameBoundry.getMinX() && x <= gameBoundry.getMaxX()) {
                  this.x = x;
              }
          }

          public void setY(float y) { // ignore the setY, please
              this.y = y;
          }

          public float getVelX() { return velX; }
          public void setHealthBar(HealthBarManager healthBar) { this.healthBar = healthBar; }
          public HealthBarManager getHealthBar() { return healthBar; }
          public float getVelY() { return velY; }
          public void setVelX(float velX) { this.velX = velX; }
          public void setVelY(float velY) { this.velY = velY; }
          public ObjectID getID() { return ID; }
          public void setUsername(String playerName) { this.username = playerName; }
          public String getUsername() { return this.username; }
          public void setManaBar(ManaBarManager manaBar) { this.manaBar = manaBar; }
          public ManaBarManager getManaBar() { return manaBar; }
          public int getLevel() { return 1; }

          public boolean isPlayerInsideBoundry(float x, float y) {
              Rectangle boundry = Game.getWalkableBounds();
              return boundry.contains(x, y);
          }
      }

    What I've tried: using a method that checks whether the game boundary contains the player's boundary rectangle. This gave me the same result as the check in my setX did.

    Read the article

  • this.BoundingBox.Intersects(Wall[0].BoundingBox) not working properly

    - by Pieter
    I seem to be having this problem a lot. I'm still learning XNA/C# and, well, trying to make a classic paddle-and-ball game. The problem I run into (and after debugging have no answer for) is that every time I run my game and press either of the movement keys, the paddle won't move. Debugging shows that it never gets to the movement part, but I can't understand why not. Here's my code:

      // This is the if statement for checking Left movement.
      if (keyboardState.IsKeyDown(Keys.Left) || keyboardState.IsKeyDown(Keys.A))
      {
          if (!CheckCollision(walls[0]))
          {
              Location.X -= Velocity;
          }
      }

      // This is the CheckCollision(Wall wall) boolean.
      public bool CheckCollision(Wall wall)
      {
          if (this.BoundingBox.Intersects(wall.BoundingBox))
          {
              return true;
          }
          return false;
      }

    As far as I can tell there should be absolutely no problem with this. I initialize the bounding box in the constructor whenever a new instance of Walls and Paddle is created:

      // note: as posted, the box is created once at (0, 0) and nothing updates it
      // when Location changes later, so overlapping boxes would stay overlapped
      this.BoundingBox = new Rectangle(0, 0, Sprite.Width, Sprite.Height);

    Any idea as to why this isn't working? I have previously succeeded using the whole Location.X < Wall.Location.X + Wall.Texture.Width approach, but to me that seems like too much coding if a simple boolean check could do it.

    Read the article

  • Need some help implementing VBOs with Frustum Culling

    - by Isracg
    I'm currently developing my first 3D game for a school project; the game world is completely inspired by Minecraft (a world made entirely of cubes). I'm currently seeking to improve performance by trying to implement vertex buffer objects, but I'm stuck. I already have these methods implemented: frustum culling, drawing only exposed faces, and distance culling, but I have the following doubts: I currently have about 2^24 cubes in my world, divided into 1024 chunks of 16*16*64 cubes, and right now I'm doing immediate-mode rendering, which works well with frustum culling. If I implement one VBO per chunk, do I have to update that VBO each time I move the camera (to update the frustum)? Is there a performance hit with this? Can I dynamically change the size of each VBO, or do I have to make each one the biggest possible size (the chunk completely filled with objects)? Would I have to keep each visited chunk in memory, or could I efficiently remove that VBO and recreate it when needed?
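
    On the first doubt, a point that may save some work: a VBO stores geometry (typically in world or chunk-local space), so moving the camera never requires touching its contents; frustum culling only decides which chunks receive a draw call each frame. A sketch of that per-chunk loop, assuming legacy-GL vertex arrays to match the immediate-mode code, and assuming a Frustum helper with an AABB test (both are placeholders for whatever the project already has):

      #include <vector>
      // GL headers / extension loading assumed available, e.g. <GL/glew.h>

      struct Chunk {
          unsigned vbo = 0;        // GL buffer name
          int vertexCount = 0;     // vertices actually filled in; may differ per chunk
          float min[3], max[3];    // world-space bounds, used only for culling
      };

      template <class Frustum>
      void drawVisibleChunks(const std::vector<Chunk>& chunks, const Frustum& frustum)
      {
          glEnableClientState(GL_VERTEX_ARRAY);
          for (const Chunk& c : chunks) {
              if (!frustum.intersectsAABB(c.min, c.max)) // assumed culling helper
                  continue;                              // culled: no GL work at all
              glBindBuffer(GL_ARRAY_BUFFER, c.vbo);
              glVertexPointer(3, GL_FLOAT, 0, nullptr);  // positions only, tightly packed
              glDrawArrays(GL_QUADS, 0, c.vertexCount);
          }
          glBindBuffer(GL_ARRAY_BUFFER, 0);
          glDisableClientState(GL_VERTEX_ARRAY);
      }

    The varying vertexCount also answers the sizing doubt in part: the buffer can be allocated generously (or reallocated with glBufferData) while only the filled portion is drawn.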

    Read the article

  • Sprite/Tile Sheets Vs Single Textures

    - by Reanimation
    I'm making a race circuit which is constructed using various textures. To provide some background, I'm writing it in C++ and creating quads with OpenGL, to which I assign loaded .raw textures. Currently I use 23 textures of 500 x 500 px, which are all loaded and freed individually. I have now combined them all into a single sprite/tile sheet, making it 3000 x 2000 pixels, since the number of textures/tiles I'm using is increasing. Now I'm wondering if it's more efficient to load them individually or to write extra code to extract a certain tile from the sheet. Is it better to load the sheet, then extract all 23 tiles and store them from one sheet, or to load the sheet each time and crop it to the correct tile? There seem to be a number of ways to implement it... Thanks in advance.
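
    A third option, and the one atlases are designed for, is to extract nothing at all: upload the sheet once, keep it bound, and select a tile per quad purely through texture coordinates, which also avoids texture rebinds between quads. A small sketch of the UV computation for a fixed-size grid (names and the 500 px tile size follow the question):

      struct UVRect { float u0, v0, u1, v1; };

      // texture coordinates of tile 'index' in a sheet laid out as a grid,
      // counting tiles left-to-right, top-to-bottom
      UVRect tileUV(int index, int tileW, int tileH, int sheetW, int sheetH)
      {
          int cols = sheetW / tileW;
          int x = (index % cols) * tileW;
          int y = (index / cols) * tileH;
          return { float(x)         / sheetW, float(y)         / sheetH,
                   float(x + tileW) / sheetW, float(y + tileH) / sheetH };
      }

      // usage: UVRect r = tileUV(7, 500, 500, 3000, 2000); then pass r's corners
      // as the quad's texture coordinates while the sheet stays bound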

    Read the article

  • Working out of a vertex array for destructible objects

    - by bobobobo
    I have diamond-shaped polygonal bullets, and there are lots of them on the screen. I did not want to create a vertex array for each, so I packed them into a single vertex array and they're all drawn at once:

      | bullet1.xyz | bullet1.rgb | bullet2.xyz | bullet2.rgb | ...

    This is great for performance. There is:

      struct Bullet
      {
          vector<Vector3f*> verts; // pointers into the vertex buffer
      };

    This works fine: the bullets can move and do collision detection, all while having their data in one place. Except when a bullet "dies", you have to clear a slot and pack all the bullets towards the beginning of the array. Is this a good approach to handling lots of low-poly objects? How else would you do it?
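
    The packing step doesn't have to touch every bullet: the common trick is swap-with-last, which makes each death O(1) by moving only the final bullet's vertices into the vacated slot. It also suggests storing base indices rather than raw pointers in Bullet, since a moved bullet's pointers would otherwise go stale. A sketch, assuming a fixed vertex count per bullet:

      #include <algorithm>
      #include <cstddef>
      #include <vector>

      struct Vertex { float x, y, z, r, g, b; };  // matches the xyz | rgb layout above

      constexpr std::size_t kVertsPerBullet = 4;  // a diamond; adjust to the real count

      // O(1) removal from the packed array: overwrite the dead bullet's slot with
      // the last bullet's vertices, then shrink; only one bullet is ever moved
      void removeBullet(std::vector<Vertex>& verts, std::size_t dead)
      {
          std::size_t last = verts.size() / kVertsPerBullet - 1;
          if (dead != last)
              std::copy_n(verts.begin() + last * kVertsPerBullet, kVertsPerBullet,
                          verts.begin() + dead * kVertsPerBullet);
          verts.resize(last * kVertsPerBullet);   // draw count shrinks by one bullet
          // remember to update whichever Bullet referenced slot 'last': it now lives
          // at slot 'dead' (trivial if bullets store their base index)
      }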

    Read the article

  • OpenGL Get Rotated X and Y of quad

    - by matejkramny
    I am developing a 2D game using the LWJGL library. So far I have a rotating box. I have done basic rectangle collision, but it doesn't work for rotated rectangles. Does OpenGL have a function that returns the vertices of a rotated rectangle? Or is there another way of doing this using trigonometry? I researched how to do this, and everything I found used some matrix that I don't understand, so I am asking if there is another way. For clarification, I am trying to find out the true (rotated) X,Y of each point of the rectangle. Let's say the first point of a rectangle (top left) has x=10, y=10, and the width and height are 100 pixels. When I rotate the rectangle using glRotatef(), the x and y stay the same; the rotation happens inside OpenGL. I need to extract the x,y of each corner of the rectangle so I can detect collisions properly.
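
    As far as I know, fixed-function OpenGL offers no practical query for this (glRotatef only edits the current matrix; it never hands transformed vertices back), so the usual route is to apply the same rotation yourself with two lines of trigonometry per corner: x' = cx + dx*cos(a) - dy*sin(a) and y' = cy + dx*sin(a) + dy*cos(a), where (dx, dy) is the corner's offset from the rotation centre (cx, cy). A sketch in C++ (the same arithmetic translates directly to Java):

      #include <cmath>

      struct P { float x, y; };

      // corners of a w*h rectangle whose top-left is (x, y), rotated by 'a' radians
      // about the rectangle's centre; matches a translate-to-centre + glRotatef draw
      void rotatedCorners(float x, float y, float w, float h, float a, P out[4])
      {
          float cx = x + w / 2, cy = y + h / 2;
          float c = std::cos(a), s = std::sin(a);
          P local[4] = { {-w/2, -h/2}, {w/2, -h/2}, {w/2, h/2}, {-w/2, h/2} };
          for (int i = 0; i < 4; ++i)
              out[i] = { cx + local[i].x * c - local[i].y * s,
                         cy + local[i].x * s + local[i].y * c };
      }

    With the four rotated corners in hand, collision against another rotated rectangle is then typically done with the separating axis theorem rather than a plain AABB overlap test.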

    Read the article

  • How was 20Q made?

    - by Dan the Man
    Ever since I was a kid, I've wondered how they made the 20Q electronic game. In this game, which is its own device, you think of an object, thing, or animal (e.g. a potato or a donkey). Once you mentally choose your thing, the device goes through a series of questions such as: Is it larger than a loaf of bread? Is it found outdoors? Is it used for recreation? For each of the questions you can answer yes, no, maybe, or unknown. The way I've always imagined it working was with immense, nested conditionals (if statements), but I don't think that is very likely, as it would be terribly difficult to understand while coding it. I'm not looking for a discussion, as SE doesn't allow it; I'm looking for concrete knowledge or solutions.
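
    One standard design for this kind of guesser (a sketch of the general technique, not a claim about the actual 20Q firmware) replaces nested conditionals with data: a matrix of learned weights linking each candidate object to each question. Every answer adjusts the objects' scores, the next question is chosen to best split the remaining front-runners, and "maybe"/"unknown" simply contribute weaker or zero evidence, which is why fuzzy answers don't break it:

      #include <string>
      #include <vector>

      struct Candidate
      {
          std::string name;
          std::vector<float> expected; // learned typical answer per question, in [-1, 1]
          float score = 0.0f;
      };

      // answer: +1 = yes, -1 = no, +-0.5 = maybe, 0 = unknown
      void applyAnswer(std::vector<Candidate>& pool, int question, float answer)
      {
          for (Candidate& c : pool)
              c.score += c.expected[question] * answer; // agreement raises the score
      }

    After enough questions, the highest-scoring candidate becomes the guess, and the weights can be nudged toward the player's answers when the game wins or loses, which is how such systems appear to "learn".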

    Read the article

  • How to perform game object smoothing in multiplayer games

    - by spaceOwl
    We're developing an infrastructure to support multiplayer games for our game engine. In simple terms, each client (player) engine sends some pieces of data regarding the relevant game objects at a given time interval. On the receiving end, we step the incoming data to the current time (to compensate for latency), followed by a smoothing step (which is the subject of this question). I was wondering how smoothing should be performed. Currently the algorithm is similar to this: receive the incoming state for an object (position, velocity, acceleration, rotation, custom data like visual properties, etc.); calculate the diff between the local object position and the position we have after the previous prediction steps; if the diff doesn't exceed some threshold value, start a smoothing step: mark the object's CURRENT POSITION and the TARGET POSITION, and linearly interpolate between these values for 0.3 seconds. I wonder if this scheme is any good, or if there is another common implementation or algorithm that should be used. (For example, should I only smooth out the position, or also other values, such as speed?) Any help will be appreciated.
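
    A minimal positional-smoothing sketch matching the steps described (2D for brevity; the same applies per component in 3D). The elapsed time resets to zero whenever a new corrected target arrives, so the displayed position glides to the target over the chosen window:

      struct Vec2 { float x, y; };

      Vec2 lerp(Vec2 a, Vec2 b, float t)
      {
          return { a.x + (b.x - a.x) * t,
                   a.y + (b.y - a.y) * t };
      }

      // called every render frame; 'elapsed' counts up from 0 when a correction
      // arrives, and the displayed position reaches the target after 'window'
      Vec2 smoothedPosition(Vec2 shown, Vec2 target, float elapsed, float window = 0.3f)
      {
          float t = (elapsed >= window) ? 1.0f : elapsed / window;
          return lerp(shown, target, t);
      }

    A common refinement is to smooth only the rendered position while simulating (collision, prediction) against the raw corrected state, so visual smoothing never feeds back into the physics.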

    Read the article

  • Multiplayer tile based movement synchronization

    - by Mars
    I have to synchronize the movement of multiple players over the Internet, and I'm trying to figure out the safest way to do that. The game is tile-based; you can only move in 4 directions, and every move shifts the sprite 32 px (over time, of course). Now, if I simply sent each move action to the server, which would broadcast it to all players, then while the walk key is held down I would have to get each successive command to the server and to all clients in time, or the movement won't be smooth anymore. I saw this in other games, and it can get ugly pretty quickly, even without lag, so I'm wondering if this is even a viable option. (This method does seem very good for single player, though, since it's easy and straightforward: just take the next movement action in time and add it to a list. You can also easily add mouse movement, clicking on some tile to add a path to a queue that's walked along.) The other idea that came to my mind was sending the information that someone started moving in some direction, and again once they stopped or changed direction, together with the position, so that the sprite appears at the correct position, or rather so that the position can be fixed if it's wrong. This should (hopefully) only cause problems if someone really is lagging, in which case it's to be expected. For this to work out I'd need some kind of queue where incoming direction changes and such are saved, so the sprite knows where to go after the current movement to the next tile is finished (see the sketch below). This could actually work, but it sounds overcomplicated, although it might be the only way to do this without the risk of stuttering: if a stop or direction change is received on the client side, it's saved in a queue, and the character keeps moving to the specified coordinates before stopping or changing direction. If the new command comes in too late, there'll be stuttering as well, of course... I'm having a hard time deciding on a method, and I couldn't really find any examples of this yet. My main problem is keeping the tile movement smooth, which is why other topics regarding the synchronization of pixel-based movement aren't helping much. What is the "standard" way to do this?
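
    A sketch of the queued-moves idea from the question: movement commands are buffered as they arrive, and a new direction is consumed only when the sprite finishes its current 32 px tile step, so motion stays smooth as long as the next command arrives within one step's travel time:

      #include <queue>

      enum class Dir { Up, Down, Left, Right, Stop };

      struct Mover
      {
          std::queue<Dir> pending; // commands received from the network
          Dir current = Dir::Stop;
          float progress = 0.0f;   // pixels walked of the current 32 px step

          void update(float dt, float speed) // speed in px/s
          {
              if (current == Dir::Stop) {
                  if (!pending.empty()) { current = pending.front(); pending.pop(); progress = 0.0f; }
                  return;
              }
              progress += speed * dt;
              if (progress >= 32.0f) {             // tile boundary reached
                  progress -= 32.0f;               // carry remainder into the next step
                  if (!pending.empty()) { current = pending.front(); pending.pop(); }
                  else current = Dir::Stop;        // queue drained: wait on the tile
              }
          }
      };

    The periodic absolute positions from the second idea slot in naturally: when one arrives, it can correct the tile the mover is standing on (or snap it) before the queue continues.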

    Read the article

  • Recommended 2D Game Engine for prototyping

    - by Thomas Dufour
    What high-level game engine would you recommend to develop a 2D game prototype on Windows? (Or Mac/Linux, if you wish.) The kind of things I mean by "high-level" include (but are definitely not limited to): not having to manage low-level stuff like screen buffers or graphics contexts; having an API to draw geometric shapes; and (well, I was going to omit it, but) I guess being based on an actual "high-level" language is a plus (automatic resource management and the existence of a reasonable set of data structures in the standard library come to mind). It seems to me that Flash is the proverbial elephant in the room for this query, but I'd very much like to see different answers based on all kinds of languages or SDKs.

    Read the article

  • Ease Rotate RigidBody2D toward arbitrary angle

    - by Plastic Sturgeon
    I'm trying to make a Rigidbody2D circle return to an orientation after a collision, but there is a weird behavior I did not expect: it always orients to the same direction. This is what I call in FixedUpdate():

      rotationdifference = -halfPI + rigidbody2D.rotation; // note: Rigidbody2D.rotation is measured
      rigidbody2D.AddTorque(rotationdifference * ease);    // in degrees, while halfPI is in radians

    I would expect this to rotate 90 degrees (1/2 pi radians) off the neutral axis, but it does not. In fact it performs exactly the same as:

      rotationdifference = rigidbody2D.rotation;
      rigidbody2D.AddTorque(rotationdifference * ease);

    What is going on? How would I be able to set an angle I want it to ease towards, and then have it ease towards that angle when it's not colliding with some other force?

    Read the article

  • generating maps

    - by gardian06
    This is a conglomeration question; when answering, please specify which part you are addressing. I am looking at creating a maze-type game that utilizes elevation. I have a few features I would like to have, but am unsure about some of the implementation. I have done work on file-I/O maze generation (using a key to read the file, and then generating the level based on that file), but I am unsure how to think about this with elevation in the mix. I think height maps might be a good approach, but I don't know how to represent them effectively. For a height map, which is more beneficial: XML (containing h[u,v] data and a key definition), CSV (item1 is the key reference, item2 is the elevation), or another approach that I have not thought of yet? When it comes to placing the elevation values themselves, what kind of delta-h values are appropriate to make slopes noticeable at about a 60-degree viewing angle while not really affecting gravity-driven physics (assuming some effect while moving up/down hill)? I am thinking of maybe going to procedural generation at some point, but am wondering if it is practical to have a procedurally generated grid (wall squares possibly the same dimensions as the open-space squares), or if designing with thin walls between open spaces is better; this decision will affect the amount of work needed on the graphics end for uniform vs. irregular walls. EDIT: the game will be an elevation maze shooter. Levels/maps will be mazes with elevation that the player has to negotiate. Elevation will have effects on "combat" vision and movement.
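
    As one concrete reading of the CSV option (an illustration of the format, not a recommendation over XML): each cell carries a tile key plus an elevation, which keeps the file hand-editable and trivially diffable. A sketch of a loader for rows of comma-separated "key:elevation" cells:

      #include <fstream>
      #include <sstream>
      #include <string>
      #include <vector>

      struct Cell { char key; int elevation; };  // e.g. "W:3" = wall type W at height 3

      std::vector<std::vector<Cell>> loadMap(const std::string& path)
      {
          std::vector<std::vector<Cell>> grid;
          std::ifstream in(path);
          std::string line;
          while (std::getline(in, line)) {
              std::vector<Cell> row;
              std::stringstream ss(line);
              std::string item;
              while (std::getline(ss, item, ','))
                  row.push_back({ item[0], std::stoi(item.substr(2)) }); // assumes "K:n" cells
              grid.push_back(std::move(row));
          }
          return grid;
      }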

    Read the article
