Search Results

Search found 25482 results on 1020 pages for 'development methodologies'.

Page 421/1020

  • Height Map Mapping to "Chunked" Quadrilateralized Spherical Cube

    - by user3684950
    I have been working on a procedural spherical terrain generator for a few months which has a quadtree LOD system. The system splits the six faces of a quadrilateralized spherical cube into smaller "quads" or "patches" as the player approaches those faces. What I can't figure out is how to generate height maps for these patches.

    To generate the heights I am using a 3D ridged-multifractal algorithm. For now I can only displace the vertices of the patches directly using the output from the ridged multifractal. I don't understand how to generate height maps that allow the vertices of a terrain patch to be mapped to pixels in the height map. The only thing I can think of is taking each vertex in a patch, plugging it into the RMF, translating the resulting position into u,v coordinates, determining the pixel position directly from those u,v coordinates, and determining the grayscale color based on the height. I feel as if this is the right approach, but there are a few other things that may further complicate my problem.

    First of all, I intend to use height maps with a pixel resolution of 192x192, while the vertex "resolution" of each terrain patch is only 16x16 - meaning that I don't have any vertices to sample for the RMF for most of the pixels. The main reason the height map resolution is higher is so that I can use it to generate a normal map (otherwise the height maps serve little purpose, as I can just directly displace vertices as I currently am). I am pretty much following this paper very closely. This is, essentially, the part I am having trouble with:

        Using the cube-to-sphere mapping and the ridged multifractal algorithm previously described, a normalized height value ([0, 1]) is calculated. Using this height value, the terrain position is calculated and stored in the first three channels of the position map (RGB) - this will be used to calculate the normal map. The fourth channel (A) is used to store the height value itself, to be used in the heightmap.

    The steps in the first sentence are my primary problem. I don't understand how the pixel positions correspond to positions on the sphere, and what positions are sampled for the RMF to generate the pixels if the vertices alone cannot be used.
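
    One way to make the quoted step concrete (a sketch under assumptions, not the paper's code): sample the noise once per heightmap pixel rather than per vertex, reconstructing each pixel's own position on the sphere. Below, cubeFaceToSphere and ridgedMultifractal are hypothetical stand-ins for the poster's existing mapping and noise routines, and the patch's extent (u0,v0)-(u1,v1) on its cube face is assumed known from the quadtree:

        // Hypothetical stand-ins for the poster's own cube->sphere mapping and
        // ridged-multifractal noise; only the per-pixel sampling pattern matters here.
        float ridgedMultifractal(float x, float y, float z);
        void cubeFaceToSphere(int face, float u, float v,
                              float& sx, float& sy, float& sz);

        void buildPatchHeightmap(int face,
                                 float u0, float v0, float u1, float v1,
                                 float heightmap[192][192])
        {
            for (int py = 0; py < 192; ++py) {
                for (int px = 0; px < 192; ++px) {
                    // Pixel centre -> face coordinates, independent of the
                    // 16x16 vertex grid: the noise is evaluated per pixel.
                    float u = u0 + (u1 - u0) * (px + 0.5f) / 192.0f;
                    float v = v0 + (v1 - v0) * (py + 0.5f) / 192.0f;
                    float sx, sy, sz;
                    cubeFaceToSphere(face, u, v, sx, sy, sz);
                    heightmap[py][px] = ridgedMultifractal(sx, sy, sz);
                }
            }
        }

    The 16x16 mesh vertices can then sample their heights back from this map, instead of the map being derived from the vertices.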

    Read the article

  • How do I get the correct values from glReadPixels in OpenGL 3.0?

    - by NoobScratcher
    I'm currently trying to implement mouse selection in my game editor, and I ran into a little problem when I look at the values stored in &pixel[0], &pixel[1], &pixel[2], &pixel[3]: I get r: 0 g: 0 b: 0 a: 0. As you can see, I'm not able to get the correct values from glReadPixels(). My 3D models are colored red using glColor3f(255,0,0). I was hoping someone could help me figure this out. Here is the source code:

        case WM_LBUTTONDOWN:
        {
            GetCursorPos(&pos);
            ScreenToClient(hwnd, &pos);

            GLenum err = glGetError();
            while (glGetError() != GL_NO_ERROR) { cerr << err << endl; }

            glReadPixels(pos.x, SCREEN_HEIGHT - 1 - pos.y, 1, 1,
                         GL_RGB, GL_UNSIGNED_BYTE, &pixel[0]);

            cerr << "r: " << (int)pixel[0] << endl;
            cerr << "g: " << (int)pixel[1] << endl;
            cerr << "b: " << (int)pixel[2] << endl;
            cerr << "a: " << (int)pixel[3] << endl;

            cout << pos.x << endl;
            cout << pos.y << endl;
        }
        break;

    I use: Win32 API, OpenGL 3.0, C++.
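
    For reference, a minimal shape such a read often takes (a hedged sketch, not a diagnosis of the code above; x, y, and viewportHeight are assumed to be supplied by the caller). The key points are that the read targets the buffer that was just rendered, happens after drawing and before SwapBuffers, and requests all four components that are inspected afterwards:

        void readPixelUnderCursor(int x, int y, int viewportHeight,
                                  unsigned char pixel[4])
        {
            glReadBuffer(GL_BACK);               // read the buffer just drawn to
            glPixelStorei(GL_PACK_ALIGNMENT, 1); // tightly pack the single pixel
            glReadPixels(x, viewportHeight - 1 - y, // flip: window y is top-down
                         1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
        }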

    Read the article

  • Making a perfect map (not tile-based)

    - by Sri Harsha Chilakapati
    I would like to make a map system as in GameMaker; the latest code is here. I've searched a lot on Google, and everything I found resulted in tutorials about tile maps. As tile maps do not fit every type of game, and GameMaker uses tiles for a different purpose, I want to make a "sprite based" map. The major problem I had experienced was collision detection being slow for large maps. So I wrote a QuadTree class here, and the collision detection is fine up to 50000 objects in the map without pixel-perfect collision detection, and 30000 objects with pixel-perfect collisions enabled. Now I need to implement the method isObjectCollisionFree(float x, float y, boolean solid, GObject obj). The existing implementation is becoming slow in platformer games, and I need suggestions for improvement. The current implementation:

        /**
         * Checks if a specific position is collision free in the map.
         *
         * @param x The x-position of the object
         * @param y The y-position of the object
         * @param solid Whether to check only for solid objects
         * @param object The object (used for width and height)
         * @return True if no collision and false if it collides.
         */
        public static boolean isObjectCollisionFree(float x, float y, boolean solid, GObject object) {
            boolean bool = true;
            Rectangle bounds = new Rectangle(Math.round(x), Math.round(y),
                                             object.getWidth(), object.getHeight());
            ArrayList<GObject> collidables = quad.retrieve(bounds);
            for (int i = 0; i < collidables.size(); i++) {
                GObject obj = collidables.get(i);
                if (obj.isSolid() == solid && obj != object) {
                    if (obj.isAlive()) {
                        if (bounds.intersects(obj.getBounds())) {
                            bool = false;
                            if (Global.USE_PIXELPERFECT_COLLISION) {
                                bool = !GUtil.isPixelPerfectCollision(x, y,
                                        object.getAnimation().getBufferedImage(),
                                        obj.getX(), obj.getY(),
                                        obj.getAnimation().getBufferedImage());
                            }
                            break;
                        }
                    }
                }
            }
            return bool;
        }

    Thanks.

    Read the article

  • Problem with DirectX scene-graph

    - by Alex
    I'm trying to implement a basic scene graph in DirectX using C++. I am using a left child-right sibling binary tree to do this. I'm having trouble updating each node's world transformation relative to its parent (and its parent's parent, etc.). I'm struggling to get it to work recursively, though I can get it to work like this:

        for (int i = 0; i < NUM_OBJECTS; i++)
        {
            // Initialize to identity matrix.
            D3DXMatrixIdentity(&mObject[i].toWorldXForm);

            int k = i;
            while (k != -1)
            {
                mObject[i].toWorldXForm *= mObject[k].toParentXForm;
                k = mObject[k].parent;
            }
        }

    toWorldXForm is the object's world transform and toParentXForm is the object's transform relative to the parent. I want to do this using a method within my object class (the code above is in my main class). This is what I've tried, but it doesn't work (it only works for nodes one generation away from the root):

        if (this->sibling != NULL)
            this->sibling->update(toParentXForm);

        D3DXMatrixIdentity(&toWorldXForm);
        this->toWorldXForm *= this->toParentXForm;
        this->toWorldXForm *= toParentXForm;
        toParentXForm *= this->toParentXForm;

        if (this->child != NULL)
            this->child->update(toParentXForm);

    Sorry if I've not been clear; please tell me if there's anything else you need to know. I've no doubt it's merely a silly mistake on my part; hopefully an outside view will be able to spot the problem.
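
    For contrast, here is the shape the recursion conventionally takes in a left child-right sibling tree (a sketch under the question's naming, not the poster's actual class; D3DXMATRIX and its operator* are assumed from <d3dx9.h>): the child inherits this node's freshly computed world transform, while the sibling shares the same parent and therefore receives the caller's transform unchanged.

        struct SceneNode
        {
            D3DXMATRIX toParentXForm, toWorldXForm;
            SceneNode* child = nullptr;
            SceneNode* sibling = nullptr;

            void update(const D3DXMATRIX& parentWorld)
            {
                // Same composition as the working iterative loop:
                // world = local * parent's world.
                toWorldXForm = toParentXForm * parentWorld;
                if (child)   child->update(toWorldXForm);   // child inherits our world
                if (sibling) sibling->update(parentWorld);  // sibling shares our parent
            }
        };

    The root would be started with an identity matrix as parentWorld.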

    Read the article

  • Flickering when accessing texture by offset

    - by TravisG
    I have this simple compute shader that basically just takes the input from one image and writes it to another. Both images are 128x128x128 in size, and glDispatchCompute is called with (128/8, 128/8, 128/8). The source images are cleared to 0 before this compute shader is executed, so no undefined values should be floating around in there. (I have the appropriate memory barrier set on the C++ side before the 3D texture is accessed.) This version works fine:

        #version 430

        layout (location = 0, rgba16f) uniform image3D ping;
        layout (location = 1, rgba16f) uniform image3D pong;

        layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in;

        void main()
        {
            ivec3 sampleCoord = ivec3(gl_GlobalInvocationID.xyz);
            imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord));
        }

    Reading values from pong shows that it's just a copy, as intended. However, when I load data from ping with an offset:

        #version 430

        layout (location = 0, rgba16f) uniform image3D ping;
        layout (location = 1, rgba16f) uniform image3D pong;

        layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in;

        void main()
        {
            ivec3 sampleCoord = ivec3(gl_GlobalInvocationID.xyz);
            imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord + ivec3(1, 0, 0)));
        }

    the data that is written to pong seems to depend on the order of execution of the threads within the work groups, which makes no sense to me. When reading from the pong texture, visible flickering occurs in some spots on the texture. What am I doing wrong here?

    Read the article

  • Game engine IDE template [on hold]

    - by Spencer Killen
    Hey, so I'm working on a fairly basic JavaScript game, and it's beginning to get to the point where the 'engine' I wrote is difficult to manage in an all-text environment. I've already thought of using a JavaScript IDE like JetBrains', but I was wondering if I could go one step further and use a piece of software that serves as an IDE but has a customizable GUI that I could use to automate class construction and such. For example, I have it set up right now so that every time I want to create a new block (it's a platformer) I must copy a text file and fill in all the settings such as bounding box, sprite, etc. It would be a lot easier if I could press a button and have a menu appear where I would fill in these values (I have a GameMaker background). Is there software like this? If not, what are some similar solutions to my problem?

    Read the article

  • How do game engines implement certain features?

    - by Milo
    I have always wondered how modern game engines do things such as realistic water, ambient occlusion, eye adaptation, global illumination, etc. I'm not so much interested in the implementation details, but more in what parts of a graphics API such as D3D or OpenGL allow adding such functionality. The only thing I can think of is shaders, but I do not think shaders alone can do all that. So really what I'm asking is: what functions or capabilities of graphics APIs enable developers to implement these types of features in their engines? Thanks

    Read the article

  • How can I implement KeyListeners/ActionListeners in a JFrame?

    - by A.K.
    I'll get to the point: I have a player in my game that you control with the keyboard, yet the key methods in the player class and the ActionListener/KeyAdapter in the Board class don't seem to fire. So far I've tried adding these key methods to the JFrame; that doesn't seem to let me move him, even though other objects that I have (enemies) can move fine. Here's part of the JFrame class with the event listeners:

        frm.addKeyListener(KeyBoardListener);

        public void mouseClicked(MouseEvent e) {
            nSound.play();
            StartB.setContentAreaFilled(false);
            cards.remove(StartB);
            frm.remove(TitleL);
            frm.remove(cards);
            frm.setLayout(new GridLayout(1, 1));
            frm.add(nBoard); // Add game "tiles" or content. x = 1200
            nBoard.setPreferredSize(new Dimension(1200, 420));
            cards.revalidate();
            frm.validate();
        }

        public KeyListener KeyBoardListener = new KeyListener() {
            @Override
            public void keyPressed(KeyEvent args0) {
                int key = args0.getKeyCode();
                if (key == KeyEvent.VK_LEFT)  { nBoard.S.vx = -4; }
                if (key == KeyEvent.VK_RIGHT) { nBoard.S.vx = 4; }
                if (key == KeyEvent.VK_UP)    { nBoard.S.vy = -4; }
                if (key == KeyEvent.VK_DOWN)  { nBoard.S.vy = 4; }
                if (key == KeyEvent.VK_SPACE) { nBoard.S.fire(); }
            }

            @Override
            public void keyReleased(KeyEvent args0) {
                int key = args0.getKeyCode();
                if (key == KeyEvent.VK_LEFT)  { nBoard.S.vx = 0; }
                if (key == KeyEvent.VK_RIGHT) { nBoard.S.vx = 0; }
                if (key == KeyEvent.VK_UP)    { nBoard.S.vy = 0; }
                if (key == KeyEvent.VK_DOWN)  { nBoard.S.vy = 0; }
            }

            @Override
            public void keyTyped(KeyEvent args0) {
                // TODO Auto-generated method stub
            }
        };

    Read the article

  • How to simulate pressure with particles?

    - by BeachRunnerJoe
    I'm trying to simulate pressure with a collection of spherical particles in a Unity game I'm building. A couple of notes about the problem:

    - The goal is to fill a constantly changing 2D space/void with small, frictionless spheres. The game is trying to simulate the ever-growing pressure of more objects being shoved into this space.
    - The level itself is constantly scrolling from left to right, meaning that if the space's dimensions are not changed by the user, it will automatically get smaller (the leftmost part of the space will continually scroll off-screen).

    I'm wondering what some approaches are that I can take to tackle these problems:

    1. Knowing when to detect that there is space to fill, and then adding spheres to the space.
    2. Removing spheres from the space when it is shrinking.
    3. Strategies to simulate pressure on the spheres such that they "explode outwards" when more space is created.

    The current approach I am contemplating is using a constantly moving wall that is off screen and moves with the screen, as an image in the original post illustrates. This moving wall will push and trap the spheres in the space. As for adding new spheres, I was going to have either (1) spheres replicate themselves upon detecting free space, or (2) spawn them at the left side of the space (where the wall is), pushing the rest of the spheres to fill the space. I foresee problems with idea #1 because it likely wouldn't really create/simulate pressure; idea #2 seems more promising, but raises the question of how to provide a location for these new sphere particles to spawn (and the ramifications of spawning them when there IS no space). Thanks so much in advance for your wisdom!
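
    One generic way to get the "shoving" behaviour (an illustrative sketch in plain code, not Unity's API): every frame, push each overlapping pair of spheres apart along the line between their centres. Packed spheres then transmit pushes outward through the pile, and they visibly burst into any space that opens up:

        #include <cmath>
        #include <cstddef>
        #include <vector>

        struct Ball { float x, y, r; };

        // Positional relaxation: split each overlap correction between the pair.
        void relaxOverlaps(std::vector<Ball>& balls)
        {
            for (std::size_t i = 0; i < balls.size(); ++i) {
                for (std::size_t j = i + 1; j < balls.size(); ++j) {
                    float dx = balls[j].x - balls[i].x;
                    float dy = balls[j].y - balls[i].y;
                    float dist = std::sqrt(dx * dx + dy * dy);
                    float minDist = balls[i].r + balls[j].r;
                    if (dist > 0.0f && dist < minDist) {
                        float push = 0.5f * (minDist - dist);
                        dx /= dist; dy /= dist;
                        balls[i].x -= dx * push; balls[i].y -= dy * push;
                        balls[j].x += dx * push; balls[j].y += dy * push;
                    }
                }
            }
        }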

    Read the article

  • What features does D3D have that OpenGL does not (and vice versa)?

    - by Tom
    Are there any feature comparisons between Direct3D 11 and the newest OpenGL versions? Simply put, Direct3D 11 introduced these main features (taken from Wikipedia):

    - Tessellation
    - Multithreaded rendering
    - Compute shaders
    - Increased texture cache

    Now I'm wondering: how do the newest versions of OpenGL cope with these features? And since I have this feeling that there are features that Direct3D lacks from OpenGL's side, what are those?

    Read the article

  • Collision with half semi-circle

    - by heitortsergent
    I am trying to port a game I made using Flash/AS3 to the Windows Phone using C#/XNA 4.0. You can see it here: http://goo.gl/gzFiE

    In the Flash version I used pixel-perfect collision between meteors (each is a rectangle, usually rotated) that spawn outside the screen and move towards the center, and a shield in the center of the screen (which is half of a semi-circle, also rotated by the player). A collision made the meteor bounce back in the opposite direction it came from. My goal now is to make the meteors bounce at different angles depending on the position at which they hit the shield (much like Pong, where hitting the paddle's edges changes the ball's angle). So, these are the 3 options I thought of:

    - Pixel-perfect collision (Microsoft has a sample: http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel_transformed), but then I wouldn't know how to change the meteor's angle after the collision.
    - 3 BoundingCircles to represent the half semi-circle shield, but then I would have to somehow move them as I rotate the shield.
    - Farseer Physics. I could make a shape composed of 3 lines and use that as the collision object for the shield.

    Is there any other way besides those? Which would be the best way to do it (it's aimed at mobile devices, so pixel-perfect is probably not a good choice)? Most of the time there's always an easier/better way than what we think of...
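
    For the angle-varying bounce itself, the Pong-like behaviour falls out of reflecting the meteor's velocity about the shield's surface normal at the contact point (generic vector math, not XNA or Farseer API; a sketch assuming the contact point is already known):

        #include <cmath>

        // v' = v - 2(v.n)n, with n the outward normal of the circular shield
        // at the hit point. Because n swings with the hit position, the exit
        // angle varies across the shield like it does across a Pong paddle.
        void bounceOffShield(float shieldX, float shieldY, // shield centre
                             float hitX, float hitY,       // contact point
                             float& vx, float& vy)         // meteor velocity, in/out
        {
            float nx = hitX - shieldX, ny = hitY - shieldY;
            float len = std::sqrt(nx * nx + ny * ny);
            if (len == 0.0f) return;
            nx /= len; ny /= len;
            float d = vx * nx + vy * ny;
            vx -= 2.0f * d * nx;
            vy -= 2.0f * d * ny;
        }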

    Read the article

  • Can SpriteBatch be used to fill a polygon with a texture?

    - by can poyrazoglu
    I basically need to fill a texture into a polygon using the SpriteBatch. I've done some research but couldn't find anything useful except the polygon triangulation method, which works well only with convex polygons (without diving into super math, which is definitely not something I'm pretty good at). Are there any solutions for filling in a polygon in a basic way? I of course need something dynamic (I'll have a map editor where you can define polygons, and the game will render them; collision detection will also use them, but that's off topic). Basically, I can't accept solutions like "pre-calculated" bitmaps or anything like that. I need to draw a polygon with the segments provided, to the screen, using the SpriteBatch.

    Read the article

  • Android Live Testing

    - by Matthew Dockerty
    I am making a game for Android, and in it I am using sensors, which are not available in the emulator. At the moment I am connecting my device and transferring the apk, then installing it to test, but that is a pain to do, and I have gotten to the stage where I need to start logging values for debugging. I have gone into the run configurations of my app and set it to prompt me to pick a device, but my device is never in the list when it is connected to my PC and I try to run the app. How am I supposed to set it up to work properly? Thanks for the help.

    Read the article

  • How do I implement smooth movement in a Box2D platform game?

    - by Romeo
    I have implemented a character in JBox2D which moves with the help of a wheel rotating at the bottom of it. The movement is the best result I've had till now, but it's a little glitchy when the character stands on an edge. So I am thinking: should I use five smaller wheels instead of one big wheel? The wheel/wheels will not be visible in the finished product; for now they are drawn for debugging. Here is a video. Is there a better way to do this using JBox2D?

    Read the article

  • Blender: 3D model from guide images

    - by Stefan
    In an effort to learn the Blender interface, which is confusing to say the least, I've chosen to model from reference pictures easily found on the web. The problem is that I can't (and won't) get perfect "right", "front" and "top" pictures. Blender only allows you to see the background pictures when in orthographic mode, and only from right|front|top, which doesn't help me. How do I proceed to model from non-perfect guide images?

    Read the article

  • Turn-based JRPG battle system architecture resources

    - by BenoitRen
    For the past months I've been busy programming a 2D JRPG (Japanese-style RPG) in C++ using the SDL library. The exploration mode is more or less done; now I'm tackling the battle mode. I have been unable to find any resources about how a classic turn-based JRPG battle system is structured; all I find are discussions about damage formulas. I've tried googling, searching gamedev.net's message board, and crawling through C++-related questions here on Stack Exchange. I've also tried reading the source code of existing open source RPGs, but without a guide of some sort it's like trying to find a needle in a haystack. I'm not looking for a set of rules like D&D or anything similar; I'm talking purely about code and object structure design. A battle system asks the player for input using menus. Next, the battle turn is executed as the heroes and the enemies execute their actions. Can anyone point me in the right direction? Thanks in advance.
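
    For what it's worth, those two sentences already outline the usual skeleton: a small state machine that alternates between collecting commands through menus and draining a queue of actions. A bare-bones sketch (illustrative only, not taken from any particular JRPG; the helper functions are hypothetical):

        #include <deque>

        enum class BattlePhase { SelectCommands, ExecuteActions, Victory, Defeat };

        struct Action { int actorId, targetId, skillId; };

        struct Battle
        {
            BattlePhase phase = BattlePhase::SelectCommands;
            std::deque<Action> queue;

            // Hypothetical hooks the real game would supply:
            bool allCombatantsHaveChosen();
            void perform(const Action& a);  // animate + apply effects
            BattlePhase checkOutcome();     // back to menus, or battle over

            void update()
            {
                switch (phase) {
                case BattlePhase::SelectCommands:
                    // Menus enqueue one Action per hero; the AI enqueues one
                    // per enemy. When everyone has chosen, start executing.
                    if (allCombatantsHaveChosen())
                        phase = BattlePhase::ExecuteActions;
                    break;
                case BattlePhase::ExecuteActions:
                    if (!queue.empty()) {
                        perform(queue.front());
                        queue.pop_front();
                    } else {
                        phase = checkOutcome();
                    }
                    break;
                default:
                    break; // Victory / Defeat handled by the surrounding game
                }
            }
        };

    Turn ordering by an agility stat, targeting menus, and so on can hang off this frame without changing its shape.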

    Read the article

  • Vertex Array Object (OpenGL)

    - by Shin
    I've just started out with OpenGL, and I still haven't really understood what Vertex Array Objects are and how they can be employed. If Vertex Buffer Objects are used to store vertex data (such as positions and texture coordinates) and VAOs only contain status flags, where can they be used? What's their purpose? As far as I understood from the (very incomplete and unclear) GL Wiki, VAOs are used to set the flags/status for every vertex, following the order described in the Element Array Buffer, but the wiki was really ambiguous about it, and I'm not really sure what VAOs really do and how I could employ them.
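
    Concretely, here is the usual division of labour in plain GL 3.x calls (a sketch; the interleaved triangle data is invented for illustration). The VBO owns the bytes; the VAO records which attributes read those bytes and how, so a single bind restores the whole vertex setup at draw time:

        // Interleaved vertex: x, y, z, u, v
        float vertices[] = {
            -0.5f, -0.5f, 0.0f,  0.0f, 0.0f,
             0.5f, -0.5f, 0.0f,  1.0f, 0.0f,
             0.0f,  0.5f, 0.0f,  0.5f, 1.0f,
        };

        GLuint vao, vbo;
        glGenVertexArrays(1, &vao);
        glGenBuffers(1, &vbo);

        glBindVertexArray(vao);                  // start recording vertex state
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

        // These pointer/enable settings are what the VAO actually captures.
        glEnableVertexAttribArray(0);            // position: 3 floats
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,
                              5 * sizeof(float), (void*)0);
        glEnableVertexAttribArray(1);            // texcoord: 2 floats
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE,
                              5 * sizeof(float), (void*)(3 * sizeof(float)));
        glBindVertexArray(0);                    // done recording

        // Per draw call, no pointers re-specified:
        glBindVertexArray(vao);
        glDrawArrays(GL_TRIANGLES, 0, 3);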

    Read the article

  • How do I adjust the origin of rotation for a group of sprites?

    - by Jon
    I am currently grouping sprites together, then applying a rotation transformation on draw:

        private void UpdateMatrix(ref Vector2 origin, float radians)
        {
            Vector3 matrixorigin = new Vector3(origin, 0);
            _rotationMatrix = Matrix.CreateTranslation(-matrixorigin)
                            * Matrix.CreateRotationZ(radians)
                            * Matrix.CreateTranslation(matrixorigin);
        }

    where the origin is the centermost point of my group of sprites. I apply this transformation to each sprite in the group. My problem is that when I adjust the point of origin, my entire sprite group will re-position itself on screen. How could I differentiate the point of rotation used in the transformation from the position of the sprite group? Is there a better way of creating this transformation matrix?

    EDIT: Here is the relevant part of the Draw() function:

        Matrix allTransforms = _rotationMatrix * camera.GetTransformation();

        spriteBatch.Begin(SpriteSortMode.BackToFront, null, null, null, null, null, allTransforms);
        for (int i = 0; i < _map.AllParts.Count; i++)
        {
            for (int j = 0; j < _map.AllParts[0].Count; j++)
            {
                spriteBatch.Draw(_map.AllParts[i][j].Texture, _map.AllParts[i][j].Position,
                                 null, Color.White, 0, _map.AllParts[i][j].Origin,
                                 1.0f, SpriteEffects.None, 0f);
            }
        }

    This all works fine. Again, the problem is that when a rotation is set and the point of origin is changed, the sprite group's position is offset on screen. I am trying to figure out a way to adjust the point of origin without causing a shift in position.

    EDIT 2: At this point, I'm looking for workarounds, as this is not working. Does anyone know of a better way to rotate a group of sprites in XNA? I need a method that will allow me to modify the point of rotation (origin) without affecting the position of the sprite group on screen.
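
    A small illustration of the invariant behind that offset (generic 2D math, not XNA's API): rotation about a pivot leaves exactly the pivot where it was, so moving the pivot moves every rotated point's orbit with it, and the group shifts unless a compensating translation is applied at the same time:

        #include <cmath>

        struct Vec2 { float x, y; };

        // Rotate p about an arbitrary pivot. The pivot is the transform's only
        // fixed point: change the pivot and every rotated position changes too.
        Vec2 rotateAbout(Vec2 p, Vec2 pivot, float radians)
        {
            float c = std::cos(radians), s = std::sin(radians);
            float dx = p.x - pivot.x, dy = p.y - pivot.y;
            return { pivot.x + dx * c - dy * s,
                     pivot.y + dx * s + dy * c };
        }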

    Read the article

  • Prevent oversteering catastrophe in racing games

    - by jdm
    When playing GTA III on Android I noticed something that has been annoying me in almost every racing game I've played (except maybe Mario Kart): driving straight ahead is easy, but curves are really hard. When I switch lanes or pass somebody, the car starts swiveling back and forth, and any attempt to correct it makes it only worse. The only thing I can do is hit the brakes. I think this is some kind of oversteering. What makes it so irritating is that it never happens to me in real life (thank god :-)), so 90% of the games with vehicles in them feel unreal to me (despite probably having really good physics engines).

    I've talked to a couple of people about this, and it seems either you 'get' racing games, or you don't. With a lot of practice, I did manage to get semi-good at some games (e.g. from the Need for Speed series), by driving very cautiously, braking a lot (and usually getting a cramp in my fingers).

    What can you do as a game developer to prevent the oversteering resonance catastrophe and make driving feel right? (For a casual racing game that doesn't strive for 100% realistic physics.) I also wonder what games like Super Mario Kart do differently so that they don't have so much oversteering. I guess one problem is that if you play with a keyboard or a touchscreen (but not wheels and pedals), you only have digital input: gas pressed or not, steering left/right or not, and it's much harder to steer appropriately for a given speed. The other thing is that you probably don't have a good sense of speed, and drive much faster than you would (safely) in reality. Off the top of my head, one solution might be to vary the steering response with speed.
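
    To make that last idea concrete, one common shape for it (an illustrative formula, not taken from any shipped game): scale the maximum steering angle down as speed grows, so a full digital "left" at high speed turns the wheels far less than the same input at a crawl:

        #include <algorithm>

        // input is -1..1 (digital input lands on the extremes); falloff tunes
        // how quickly steering authority drains away with speed.
        float steeringAngle(float input, float speed,
                            float maxSteerRad = 0.6f, float falloff = 0.05f)
        {
            float limit = maxSteerRad / (1.0f + falloff * speed);
            return std::clamp(input, -1.0f, 1.0f) * limit;
        }

    Paired with a touch of automatic counter-steer or velocity-aligned damping, this is one of the standard ways to tame the back-and-forth swiveling.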

    Read the article

  • How to visually "connect" skybox edges with terrain model

    - by David
    I'm working on a simple airplane game where I use a skybox cube rendered with the depth test disabled. Very close to the bottom side of the skybox is my terrain model. What bothers me is that the terrain is not connected to the skybox bottom. This is not visible while the plane flies low, but as it gains altitude, the terrain looks smaller because of the perspective. Since the skybox center is always the same as the camera position, the skybox moves with the plane, but the terrain recedes into the distance. Ok, I think you understand the problem. My question is how to fix it. It's an airplane game, so limiting the maximum altitude is not possible. I thought about some way to stretch the terrain to always cover the whole bottom side of the skybox cube, but that doesn't feel right, and I don't even know how I would calculate the new terrain dimensions every frame. Here are some screenshots of games where you can clearly see the problem (oops, I cannot post images yet):

    - darker brown is the skybox bottom here: http://i.stack.imgur.com/iMsAf.png
    - untextured brown is the skybox bottom here: http://i.stack.imgur.com/9oZr7.png

    Read the article

  • Using MVC with a retained mode renderer

    - by David Gouveia
    I am using a retained-mode renderer similar to the display lists in Flash. In other words, I have a scene graph data structure called the Stage to which I add the graphical primitives I would like to see rendered, such as images, animations, and text. For simplicity I'll refer to them as Sprites.

    Now I'm implementing an architecture which is becoming very similar to MVC, but I feel that instead of having to create View classes, the Sprites already behave pretty much like Views (except for not being explicitly connected to the Model). And since the Model is only changed through the Controller, I could simply update the view together with the Model in the controller, as in the example below.

    Example 1

        class Controller
        {
            Model model;
            Sprite view;

            void TeleportTo(Vector2 position)
            {
                model.Position = view.Position = position;
            }
        }

    The alternative, I think, would be to create View classes that wrap the sprites, make the model observable, and make the view react to changes in the model. This seems like a lot of extra work and boilerplate code, and I'm not seeing the benefits if I'm just going to have one view per controller.

    Example 2

        class Controller
        {
            Model model;
            View view;

            void TeleportTo(Vector2 position)
            {
                model.Position = position;
            }
        }

        class View
        {
            Model model;
            Sprite sprite;

            View()
            {
                model.PropertyChanged += UpdateView;
            }

            void UpdateView()
            {
                sprite.Position = model.Position;
            }
        }

    So, how is MVC (or more specifically, the View) usually implemented when using a retained-mode renderer? And is there any reason why I shouldn't stick with Example 1?

    Read the article

  • Representing heightmaps, on disk and when drawing

    - by gardian06
    This is a conglomeration question; when answering, please specify which part you are addressing.

    I am looking at creating a maze-type game that utilizes elevation. I have a few features I would like to have, but am unsure as to some of the implementation. I have done file-IO maze generation (using a key to read the file, and then generating the level based on that file), but I am unsure how to think about this with elevation in the mix. I think height maps might be a good approach, but I don't know how to represent them effectively.

    1. For a height map, which is more beneficial: XML (containing h[u,v] data and a key definition), CSV (item1 is the key reference, item2 is the elevation), or another approach that I have not thought of yet?
    2. When it comes to placing the elevation values themselves, what kind of delta-h values are appropriate to make slopes noticeable at about a 60-degree viewing angle while not really affecting gravity-driven physics (assuming some effect while moving up/down hill)?
    3. I am thinking of maybe going to procedural generation at some point, but am wondering if it is practical to have a procedurally generated grid (wall squares possibly the same dimensions as the open-space squares), or if designing to thin walls between open spaces is better? This decision will affect the amount of work needed on the graphics end for uniform vs. irregular walls.

    EDIT: The game will be an elevation maze shooter. Levels/maps will be mazes with elevation the player has to negotiate. Elevation will have effects on "combat" vision and movement.
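
    To make the CSV option concrete, a minimal loader sketch (the one-row-of-heights-per-line layout is invented here for illustration, not taken from the question); the file is then itself the h[u,v] grid, with no key or schema beyond its own dimensions:

        #include <fstream>
        #include <sstream>
        #include <string>
        #include <vector>

        std::vector<std::vector<float>> loadHeightmapCsv(const std::string& path)
        {
            std::vector<std::vector<float>> grid; // grid[v][u] = elevation
            std::ifstream in(path);
            std::string line;
            while (std::getline(in, line)) {
                std::vector<float> row;
                std::stringstream ss(line);
                std::string cell;
                while (std::getline(ss, cell, ','))
                    row.push_back(std::stof(cell));
                grid.push_back(std::move(row));
            }
            return grid;
        }

    XML buys self-description (the key definition can live next to the data) at the cost of a parser dependency; a bare grid like this is trivial to load and diff.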

    Read the article

  • Vertex fog producing black artifacts

    - by Nick
    I originally posted this question on the XNA forums but got no replies, so maybe someone here can help. I am rendering a textured model using the XNA BasicEffect. When I enable fog, the model outline is still visible as many small black dots when it should be "in the fog". Why is this happening? Here's what it looks like for me: http://tinypic.com/r/fnh440/6

    Here is a minimal example showing my problem (the ship model that this example uses is from the chase camera sample on this site -- http://xbox.create.msdn.com/en-US/education/catalog/sample/chasecamera -- in case anyone wants to try it out ;)):

        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            Model model;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void LoadContent()
            {
                // Create a new SpriteBatch, which can be used to draw textures.
                spriteBatch = new SpriteBatch(GraphicsDevice);

                // TODO: use this.Content to load your game content here
                model = Content.Load<Model>("ship");
                foreach (ModelMesh mesh in model.Meshes)
                {
                    foreach (BasicEffect be in mesh.Effects)
                    {
                        be.EnableDefaultLighting();
                        be.FogEnabled = true;
                        be.FogColor = Color.CornflowerBlue.ToVector3();
                        be.FogStart = 10;
                        be.FogEnd = 30;
                    }
                }
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);

                model.Draw(Matrix.Identity
                               * Matrix.CreateScale(0.01f)
                               * Matrix.CreateRotationY(3 * MathHelper.PiOver4),
                           Matrix.CreateLookAt(new Vector3(0, 0, 30), Vector3.Zero, Vector3.Up),
                           Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, 16f / 9f, 1, 100));

                base.Draw(gameTime);
            }
        }

    Read the article

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game server architectures that use GAE. The types of games I am considering are turn-based, where the world status would need to be updated about once per minute. I am looking for an answer that persuades me to either perform the world update on the Google servers OR on an authoritative server that syncs with the datastore. The main goal here would be to minimize GAE daily quotas.

    For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require:

    - Reading 5 private entity variables (fetched from the datastore)
    - Fetching as many as 20 static variables (from the datastore or persisted in server memory)
    - Writing 5 entity variables

    Clients of the game would authenticate and set state directly against GAE, as well as pull the latest world state from GAE.

    Running the update on GAE would consist of a cron job launched every minute. This would update all of the entities and save the results to the datastore. This would be more CPU intensive for GAE.

    Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states, and pushing the new state variables back to the datastore. This would be more bandwidth intensive for the datastore.
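
    A rough back-of-the-envelope from the numbers given (my arithmetic, not part of the original question): 10,000 entities updated once per minute is 10,000 x 1,440 = 14.4 million entity updates per day. At 5 datastore reads and 5 writes each, that is on the order of 72 million reads and 72 million writes per day before the static variables are counted, so whether those 20 static variables come from the datastore or stay cached in server memory is likely to dominate the quota comparison between the two architectures.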

    Read the article

  • How would I balance a multiplayer competitive game?

    - by Simon
    I'm looking at my first foray into developing a game, and would love to know whether you have any thoughts on game balancing in limited multiplayer games. The game I have in mind involves a neutral player who has to achieve a goal, with two supporting "deity" players, one 'good' and one 'evil': one of the deity players would try to help the neutral player achieve their goal, while the other would try to thwart them. Any thoughts or pointers on how I can ensure the deities are balanced? If you want me to expand, I will; I just didn't want to give away too much of the gameplay before I finish it.

    Read the article
