Search Results

Search found 29201 results on 1169 pages for 'game development'.


  • Elliptical orbit modeling

    - by Nathon
    I'm playing with orbits in a simple 2-d game where a ship flies around in space and is attracted to massive things. The ship's velocity is stored in a vector and acceleration is applied to it every frame as appropriate given Newton's law of universal gravitation. The point masses don't move (there's only 1 right now) so I would expect an elliptical orbit. Instead, I see this: I've tried with nearly circular orbits, and I've tried making the masses vastly different (a factor of a million) but I always get this rotated orbit. Here's some (D) code, for context:

        void accelerate(Vector delta)
        {
            velocity = velocity + delta; // Velocity is a member of the ship class.
        }

        // This function is called every frame with the fixed mass. It's a
        // method of the ship's.
        void fall(Well well)
        {
            // f=(m1 * m2)/(r**2)
            // a=f/m
            // Ship mass is 1, so a = f.
            float mass = 1;
            Vector delta = well.position - loc;
            float rSquared = delta.magSquared;
            float force = well.mass/rSquared;
            accelerate(delta * force * mass);
        }

    Read the article
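    One thing worth checking (an assumption, not a confirmed diagnosis): the gravitational pull has magnitude G*M/r^2 along the unit vector toward the mass, so the offset vector needs to be normalized before it is scaled, and the update should also be scaled by the frame time. A minimal Java sketch of that version, with Vec2 and the field names as illustrative stand-ins for the asker's types:

        // Hedged sketch in Java: gravitational acceleration toward a fixed point mass.
        final class Vec2 {
            double x, y;
            Vec2(double x, double y) { this.x = x; this.y = y; }
        }

        final class Ship {
            Vec2 position = new Vec2(200, 0);
            Vec2 velocity = new Vec2(0, 1.5);

            // Called once per frame; dt is the frame time in seconds.
            void fall(Vec2 wellPosition, double wellMass, double dt) {
                double dx = wellPosition.x - position.x;
                double dy = wellPosition.y - position.y;
                double r2 = dx * dx + dy * dy;
                double r = Math.sqrt(r2);
                // a = G*M / r^2 along the *unit* vector (dx, dy) / r toward the mass,
                // so the overall divisor is r^3, not r^2.
                double a = wellMass / r2;          // G folded into wellMass here
                velocity.x += (dx / r) * a * dt;   // scale by dt so the orbit is frame-rate independent
                velocity.y += (dy / r) * a * dt;
                position.x += velocity.x * dt;
                position.y += velocity.y * dt;
            }
        }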

  • JavaScript 3D space ship rotation

    - by user36202
    I am working with a fairly low-level JavaScript 3D API (not Three.js) which uses Euler angles for rotation. In most cases, Euler angles work quite well for doing things like aligning buildings, operating a hovercraft, or looking around in the first person. However, in space there is no up or down. I want to control the ship's roll, pitch, and yaw. To do that, some people would use a local coordinate system or a permanent matrix or quaternion or whatever to represent the ship's angle. However, since I am not most people and am using a library that deals exclusively in Euler angles, I will be using relative angles to represent how to rotate the ship in space and getting the resulting non-relative Euler angles. For you math nerds, that means I need to convert 3 Euler angles (with Y being the vertical axis, X representing the pitch, and Z representing a roll which is unaffected by the other angles, left-handed system) into a 3x3 orientation matrix, do something fancy with the matrix, and convert it back into the 3 Euler angles. Euler to matrix to Euler. Somebody has posted something similar to this on SO (http://stackoverflow.com/questions/1217775/rotating-a-spaceship-model-for-a-space-simulator-game) but he ended up just working with a matrix. This will not do for me. Also, I am using JavaScript, not C++. What I want essentially are the functions do_roll, do_pitch, and do_yaw which only take in and put out Euler angles. Many thanks.

    Read the article
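    A minimal sketch of the Euler-to-matrix-to-Euler round trip, written in Java rather than JavaScript as an illustration; it assumes a right-handed composition world = Ry(yaw) * Rx(pitch) * Rz(roll), so the question's left-handed convention would need adjusted signs, and gimbal lock (pitch near ±90°) is not handled:

        // Hedged sketch: apply a *local* rotation to an orientation stored as yaw/pitch/roll,
        // by converting to a 3x3 matrix, post-multiplying, and decomposing back.
        final class EulerRotator {

            static double[][] eulerToMatrix(double yaw, double pitch, double roll) {
                double cy = Math.cos(yaw),   sy = Math.sin(yaw);
                double cx = Math.cos(pitch), sx = Math.sin(pitch);
                double cz = Math.cos(roll),  sz = Math.sin(roll);
                // Ry * Rx * Rz, written out element by element.
                return new double[][] {
                    { cy * cz + sy * sx * sz, -cy * sz + sy * sx * cz, sy * cx },
                    { cx * sz,                 cx * cz,               -sx      },
                    {-sy * cz + cy * sx * sz,  sy * sz + cy * sx * cz, cy * cx }
                };
            }

            static double[] matrixToEuler(double[][] m) {
                double pitch = Math.asin(-m[1][2]);
                double yaw   = Math.atan2(m[0][2], m[2][2]);
                double roll  = Math.atan2(m[1][0], m[1][1]);
                return new double[] { yaw, pitch, roll };
            }

            static double[][] mul(double[][] a, double[][] b) {
                double[][] r = new double[3][3];
                for (int i = 0; i < 3; i++)
                    for (int j = 0; j < 3; j++)
                        for (int k = 0; k < 3; k++)
                            r[i][j] += a[i][k] * b[k][j];
                return r;
            }

            // do_pitch / do_yaw / do_roll all reduce to this: rotate about the ship's local axes.
            static double[] rotateLocal(double yaw, double pitch, double roll,
                                        double dYaw, double dPitch, double dRoll) {
                double[][] world = eulerToMatrix(yaw, pitch, roll);
                double[][] local = eulerToMatrix(dYaw, dPitch, dRoll);
                return matrixToEuler(mul(world, local)); // post-multiply = local-frame rotation
            }
        }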

  • How do I prevent my platformer's character from clipping on wall tiles?

    - by Jonathan Hobbs
    Currently, I have a platformer with tiles for terrain (graphics borrowed from Cave Story). The game is written from scratch using XNA, so I'm not using an existing engine or physics engine. The tile collisions are handled pretty much exactly as described in this answer (with simple SAT for rectangles and circles), and everything works fine - except when the player runs into a wall whilst falling/jumping. In that case, they'll catch on a tile and begin thinking they've hit a floor or ceiling that isn't actually there. The player is moving right and falling downwards, so after movement, collisions are checked - and first, it turns out the player character is colliding with the tile third from the floor and is pushed upwards. Second, he's found to be colliding with the tile beside him and is pushed sideways - the end result being that the player character thinks he's on the ground and isn't falling, and 'catches' on the tile for as long as he's running into it. I could solve this by defining the tiles from top to bottom instead, which makes him fall smoothly, but then the inverse case happens and he'll hit a ceiling that isn't there when jumping upwards against the wall. How should I approach resolving this, so that the player character can just fall along the wall as it should?

    Read the article
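    One common approach (an assumption about the cause, not a verified fix for this project) is to move and resolve one axis at a time, so a wall beside the player is only ever resolved horizontally and is never mistaken for ground. A minimal Java-flavoured sketch, with Rect and the tile query as illustrative stand-ins:

        // Hedged sketch: resolve tile collisions per axis so wall tiles never act as floors.
        final class Rect {
            double x, y, w, h;
            Rect(double x, double y, double w, double h) { this.x = x; this.y = y; this.w = w; this.h = h; }
            boolean overlaps(Rect o) {
                return x < o.x + o.w && x + w > o.x && y < o.y + o.h && y + h > o.y;
            }
        }

        final class PlayerMover {
            Rect bounds;
            java.util.List<Rect> solidTilesNear; // whatever tile query the game already has

            void move(double dx, double dy) {
                // Horizontal step first: only correct along X here.
                bounds.x += dx;
                for (Rect t : solidTilesNear) {
                    if (bounds.overlaps(t)) {
                        bounds.x = dx > 0 ? t.x - bounds.w : t.x + t.w;
                    }
                }
                // Then the vertical step: only correct along Y, so a wall beside the
                // player can never push him up onto a floor that isn't there.
                bounds.y += dy;
                for (Rect t : solidTilesNear) {
                    if (bounds.overlaps(t)) {
                        bounds.y = dy > 0 ? t.y - bounds.h : t.y + t.h;
                    }
                }
            }
        }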

  • Dynamically load images inside jar

    - by Rahat Ahmed
    I'm using Slick2d for a game, and while it runs fine in Eclipse, I'm trying to figure out how to make it work when exported to a runnable .jar. I have it set up so that I load every image located in the res/ directory. Here's the code:

        /**
         * Loads all .png images located in source folders.
         * @throws SlickException
         */
        public static void init() throws SlickException {
            loadedImages = new HashMap<>();
            try {
                URI uri = new URI(ResourceLoader.getResource("res").toString());
                File[] files = new File(uri).listFiles(new FilenameFilter() {
                    @Override
                    public boolean accept(File dir, String name) {
                        if (name.endsWith(".png"))
                            return true;
                        return false;
                    }
                });
                System.out.println("Naming filenames now.");
                for (File f : files) {
                    System.out.println(f.getName());
                    FileInputStream fis = new FileInputStream(f);
                    Image image = new Image(fis, f.getName(), false);
                    loadedImages.put(f.getName(), image);
                }
            } catch (URISyntaxException | FileNotFoundException e) {
                System.err.println("UNABLE TO LOAD IMAGES FROM RES FOLDER!");
                e.printStackTrace();
            }
            font = new AngelCodeFont("res/bitmapfont.fnt", Art.get("bitmapfont.png"));
        }

    Now the obvious problem is the line URI uri = new URI(ResourceLoader.getResource("res").toString()); - if I pack the res folder into the .jar there will not be a res folder on the filesystem. How can I iterate through all the images in the compiled .jar itself, or what is a better system to automatically load all images?

    Read the article
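    A minimal sketch of one way to enumerate .png entries inside the running jar with java.util.jar.JarFile, falling back to the filesystem when running unpacked (e.g. in Eclipse); the anchor class used to locate the code source and the res/ prefix are assumptions standing in for the game's own setup:

        import java.io.File;
        import java.net.URL;
        import java.util.ArrayList;
        import java.util.Enumeration;
        import java.util.List;
        import java.util.jar.JarEntry;
        import java.util.jar.JarFile;

        final class ResourceLister {
            /** Lists "res/...png" entry names, whether the game runs from a jar or from a folder. */
            static List<String> listPngs(Class<?> anchor) throws Exception {
                List<String> names = new ArrayList<>();
                URL source = anchor.getProtectionDomain().getCodeSource().getLocation();
                File location = new File(source.toURI());
                if (location.isFile()) {                       // running from the packed jar
                    try (JarFile jar = new JarFile(location)) {
                        Enumeration<JarEntry> entries = jar.entries();
                        while (entries.hasMoreElements()) {
                            String name = entries.nextElement().getName();
                            if (name.startsWith("res/") && name.endsWith(".png")) {
                                names.add(name);
                            }
                        }
                    }
                } else {                                       // running unpacked, e.g. in the IDE
                    File[] files = new File(location, "res").listFiles((dir, n) -> n.endsWith(".png"));
                    if (files != null) {
                        for (File f : files) names.add("res/" + f.getName());
                    }
                }
                // Each name can then be opened with anchor.getResourceAsStream("/" + name)
                // and handed to the Image(InputStream, String, boolean) constructor already used above.
                return names;
            }
        }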

  • XNA 4.0, Combining model draw calls

    - by MayContainNuts
    I have the following problem: the levels in my game are made up of a large quantity of small models, and because of that I am experiencing frame rate problems. I already did some research and came to the conclusion that the number of draw calls I am making must be the root of my problems. I've looked around for a while now and couldn't quite find a satisfying solution. I can't cull any of those models; in a worst case scenario there could be 1000 of them visible at the same time. I also looked at hardware geometry instancing, but I don't think that's quite what I'm looking for, because the level consists of a lot of different parts. So, what I'd like to do is combine 100 or 200 of these models into a single large one and draw it as a whole 'chunk'. The whole geometry is static so it wouldn't have to be changed after combining, but different parts of it would have to use different textures (I think I can accomplish that with a texture atlas). But I have no idea how to do that, so does anybody have any suggestions?

    Read the article

  • Best way to blend colors in tile lighting? (XNA)

    - by Lemoncreme
    I have made a decent, fast, recursive, color tile lighting system in my game. It does everything I need except one thing: different colors are not blended at all. Here is my color blend code:

        return (new Color(
            (byte)MathHelper.Clamp(color.R / factor, 0, 255),
            (byte)MathHelper.Clamp(color.G / factor, 0, 255),
            (byte)MathHelper.Clamp(color.B / factor, 0, 255)));

    As you can see, it does not take the already-in-place color into account. color is the color of the previous light, which is weakened by the above code by factor. If I wanted to blend using the color already in place, I would use the variable blend. Here is an example of a blend that I tried that failed, using blend:

        return (new Color(
            (byte)MathHelper.Clamp(((color.R + blend.R) / 2) / factor, 0, 255),
            (byte)MathHelper.Clamp(((color.G + blend.G) / 2) / factor, 0, 255),
            (byte)MathHelper.Clamp(((color.B + blend.B) / 2) / factor, 0, 255)));

    This color blend produces inaccurate and strange results. I need a blend that is accurate, like the first example, that blends the two colors together. What is the best way to do this?

    Read the article
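    One common convention for combining overlapping light colors (an assumption about what "accurate" should mean here) is to take the per-channel maximum, or to add channels and clamp, rather than averaging, since averaging a bright light with a darker stored color dims the light. A small Java sketch of both:

        // Hedged sketch: two standard ways to merge an incoming light color with the
        // color already stored on a tile. Values are 0-255 ints for clarity.
        final class LightBlend {

            // Per-channel max: a tile lit by two lights keeps the brighter contribution
            // of each channel, so lights never darken each other.
            static int[] maxBlend(int[] incoming, int[] existing) {
                return new int[] {
                    Math.max(incoming[0], existing[0]),
                    Math.max(incoming[1], existing[1]),
                    Math.max(incoming[2], existing[2])
                };
            }

            // Additive with clamp: overlapping lights reinforce each other and saturate to white.
            static int[] addBlend(int[] incoming, int[] existing) {
                return new int[] {
                    Math.min(255, incoming[0] + existing[0]),
                    Math.min(255, incoming[1] + existing[1]),
                    Math.min(255, incoming[2] + existing[2])
                };
            }

            // The falloff by 'factor' from the question would be applied to the incoming
            // light *before* blending, e.g. incoming[i] = (int) (incoming[i] / factor).
        }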

  • Delaying a Foreach loop half a second

    - by Sigh-AniDe
    I have created a game that has a ghost that mimics the movement of the player after 10 seconds. The movements are stored in a list and I use a foreach loop to go through the commands. The ghost mimics the movements, but it does the movements way too fast; within a split second of spawn time it catches up to my current movement. How do I slow down the foreach so that it only does a command every half a second? I don't know how else to do it. Please help. This is what I tried (the foreach runs inside the update method):

        DateTime dt = DateTime.Now;
        foreach ( string commandDirection in ghostMovements )
        {
            int mapX = ( int )( ghostPostition.X / scalingFactor );
            int mapY = ( int )( ghostPostition.Y / scalingFactor );

            // If the dt is the same as current time
            if ( dt == DateTime.Now )
            {
                if ( commandDirection == "left" )
                {
                    switch ( ghostDirection )
                    {
                        case ghostFacingUp:
                            angle = 1.6f;
                            ghostDirection = ghostFacingRight;
                            Program.form.direction = "";
                            dt.AddMilliseconds( 500 ); // add half a second to dt
                            break;
                        case ghostFacingRight:
                            angle = 3.15f;
                            ghostDirection = ghostFacingDown;
                            Program.form.direction = "";
                            dt.AddMilliseconds( 500 );
                            break;
                        case ghostFacingDown:
                            angle = -1.6f;
                            ghostDirection = ghostFacingLeft;
                            Program.form.direction = "";
                            dt.AddMilliseconds( 500 );
                            break;
                        case ghostFacingLeft:
                            angle = 0.0f;
                            ghostDirection = ghostFacingUp;
                            Program.form.direction = "";
                            dt.AddMilliseconds( 500 );
                            break;
                    }
                }
            }
        }

    Read the article
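    A minimal sketch of the usual pattern, shown in Java as an illustration rather than the asker's C# code: instead of walking the whole command list every update, accumulate the frame's elapsed time and consume one recorded command each time half a second has passed:

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Hedged sketch: replay recorded movement commands at a fixed rate (one per 0.5 s).
        final class GhostReplayer {
            private final Deque<String> pending = new ArrayDeque<>();
            private double accumulator = 0.0;          // seconds since the last replayed command
            private static final double STEP = 0.5;    // half a second per command

            void record(String commandDirection) {
                pending.addLast(commandDirection);
            }

            // Called once per frame from the game's update with the frame time in seconds.
            void update(double deltaSeconds) {
                accumulator += deltaSeconds;
                while (accumulator >= STEP && !pending.isEmpty()) {
                    accumulator -= STEP;
                    apply(pending.removeFirst());      // exactly one command per elapsed step
                }
            }

            private void apply(String commandDirection) {
                // turn the ghost, move it one tile, etc. -- whatever one command means in the game
                System.out.println("ghost does: " + commandDirection);
            }
        }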

  • Create a thread in the XNA Update method to find a path?

    - by Dan
    I am trying to create a separate thread for my enemy's A* pathfinder, which will give me a list of points to get to the player. I have placed the thread in the update method of my enemy. However, this seems to cause jittering in the game every time the thread is called. I have tried calling just the method and that works fine. Is there any way I can sort this out so that I can have the pathfinder on its own thread? Do I need to remove the thread start from the update and start it in the constructor? Is there any way this can work? Here is the code at the moment:

        bool running = false;
        bool threadstarted;
        System.Threading.Thread thread;

        public void update()
        {
            if (running == false && threadstarted == false)
            {
                thread = new System.Threading.Thread(PathThread);
                //thread.Priority = System.Threading.ThreadPriority.Lowest;
                thread.IsBackground = true;
                thread.Start(startandendobj);
                //PathThread(startandendobj);
                threadstarted = true;
            }
        }

        public void PathThread(object Startandend)
        {
            object[] Startandendarray = (object[])Startandend;
            Point startpoint = (Point)Startandendarray[0];
            Point endpoint = (Point)Startandendarray[1];
            bool runnable = true;

            // Path find from 255, 255 to 0,0 on the map
            foreach(Tile tile in Map)
            {
                if(tile.Color == Color.Red)
                {
                    if (tile.Position.Contains(endpoint))
                    {
                        runnable = false;
                    }
                }
            }

            if(runnable == true)
            {
                running = true;
                Pathfinder p = new Pathfinder(Map);
                pathway = p.FindPath(startpoint, endpoint);
                running = false;
                threadstarted = false;
            }
        }

    Read the article
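    A minimal sketch of one common alternative, written in Java as an illustration and not verified against the asker's project: keep a single long-lived worker, submit path requests to it, and poll the result from the update loop instead of constructing a new thread inside update:

        import java.awt.Point;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        // Hedged sketch: one background worker owns all pathfinding; update() only polls.
        final class AsyncPathfinder {
            private final ExecutorService worker = Executors.newSingleThreadExecutor(r -> {
                Thread t = new Thread(r, "pathfinder");
                t.setDaemon(true);                       // don't keep the game alive on exit
                return t;
            });
            private Future<List<Point>> pending;         // the request currently in flight
            private List<Point> latestPath;              // last completed path, read by the enemy

            void requestPath(Point start, Point end) {
                if (pending == null || pending.isDone()) {
                    pending = worker.submit(() -> findPath(start, end));  // runs off the game thread
                }
            }

            // Called from the enemy's update; never blocks the game loop.
            void update() throws Exception {
                if (pending != null && pending.isDone()) {
                    latestPath = pending.get();
                    pending = null;
                }
            }

            List<Point> latestPath() { return latestPath; }

            private List<Point> findPath(Point start, Point end) {
                // placeholder for the actual A* search over the map
                return List.of(start, end);
            }
        }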

  • Moving 2d camera in the y direction

    - by Alex
    I'm developing a simple game for the iPhone and am struggling to work out the best way for the camera to follow the main character. The following picture highlights the three main components:
    Circle - the main character
    Green line - terrain
    Black background
    The terrain is simply made from an array of points (approx 20 points per screen width). The terrain is moved in the x direction relative to the black background in order to keep the circle in the position shown. The distance to move the terrain is simply movex = circle.position.x - terrain.position.x, with a constant to fix the circle at some distance from the left of the screen. I am struggling to determine the best way to position the terrain in the y plane to keep the focus on the character. I want to move the terrain in the y direction smoothly and not fix it to the position of the circle, so the circle can move in the y plane. If I take the same approach as the x positioning, the character is fixed at a point on the screen and the terrain moves. I could sample some terrain points either side of the character and produce an average, but in my implementation this was not smooth. I thought another approach might be to create a camera 'line' that is a smooth version of the terrain line and make the camera follow this, but I'm not sure if this is the optimum solution. Any advice is much appreciated!

    Read the article
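    A minimal sketch of a common smoothing approach (an assumption, not tied to the asker's engine): let the camera's y chase a target y with an exponential lerp each frame, so the circle can move vertically on screen while the view catches up smoothly:

        // Hedged sketch: frame-rate independent exponential smoothing of the camera's y.
        final class FollowCamera {
            private double cameraY;                        // current camera center in world units
            private static final double STIFFNESS = 4.0;   // higher = snappier follow

            // targetY could be the character's y, or a sampled average of nearby terrain points.
            void update(double targetY, double deltaSeconds) {
                // Move a fraction of the remaining distance each frame; the exp() form keeps
                // the behaviour identical at different frame rates.
                double t = 1.0 - Math.exp(-STIFFNESS * deltaSeconds);
                cameraY += (targetY - cameraY) * t;
            }

            double offsetY() { return cameraY; }           // subtract from world y when drawing terrain
        }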

  • Alpha From PNGs Butchered

    - by ashes999
    I have a pretty vanilla MonoGame game. I'm using PNGs for all my sprites (made in Photoshop). I noticed that XNA is butchering the aliasing; no matter what I do, my graphics appear jaggedy. Below is a screenshot. The bottom half is what XNA shows me when I zoom in 2X using a Matrix on my GraphicsDevice (to make the effect more obvious). The top is when I pasted the same sprites from Photoshop and scaled them to 200%. Note that partially transparent pixels are turning whitish. Is there a way to fix this? What am I doing wrong? Here's the relevant call to draw to the SpriteBatch:

        spriteBatch.Draw(this.texture, this.positionVector, null, Color.White, this.Angle, this.originVector, 1f, SpriteEffects.None, 0f);

    (this.positionVector can easily be Vector.Zero; Color.White is 100% alpha, I think; this.Angle can be a real angle (the small > in the image) or zero (the orb itself).)

    Read the article

  • Recasting and Drawing in SDL

    - by user1078123
    I have some code that essentially draws a column on the screen of a wall in a raycasting-type 3d engine. I am trying to optimize it, as it takes about 10 milliseconds to draw a million pixels this way, and the vast majority of game time is spent in this loop. However, I don't quite understand what's occurring, particularly the recasting (I modified the "pixel manipulation" sample code from the SDL documentation). "canvas" is the surface I am drawing to, and "hello" is the surface containing the texture for the column.

        int c = (curcol)* canvas->format->BytesPerPixel;
        void *canvaspixels = canvas->pixels;
        Uint16 texpitch = hello->pitch;
        int lim = (drawheight +startdraw) * canvpitch +c + (int) canvaspixels;
        Uint8 *k = (Uint8 *)hello->pixels + (hit)* hello->format->BytesPerPixel;

        for (int j= (startdraw)*(canvpitch)+c + (int) canvaspixels; (j< lim); j+= canvpitch){
            Uint8 *q = (Uint8 *) ((int(h))*(texpitch)+k);
            *(Uint32 *)j = *(Uint32 *)q;
            h += s;
        }

    We have void pointers (not sure how those are even represented), 8, 16, and 32 bit ints (h and s are floats), all being intermingled, and while it works, it is quite confusing.

    Read the article

  • Create an indefinitely oscillating pendulum in Farseer Physics 3.3.1/Box2D

    - by GONeale
    I am new to Farseer Physics and am using version 3.3.1. I am after some help and would even be happy to receive Box2D answers, just to ensure I get a response, as I should then be able to convert them! -- Thanks. After a lot of tinkering around I have managed to produce a thin vertical rectangle shape on the screen, and I wish this to swing back and forth, pinned at the top, up to an angle I set (90 degrees would be fine for this sample). When it is approaching the top I wish it to slow down, then fall back the way it just came, increase speed, then obviously slow to a stop at the top again - almost how a swinging pirate ship ride at a theme park works. This is the code I have so far, which swings the shape, but it seems to lose speed on each swing, eventually grinding to a halt:

        float playerWidth = ConvertUnits.ToSimUnits(5), playerHeight = ConvertUnits.ToSimUnits(68);
        playerPosition = ConvertUnits.ToSimUnits(-350, 0);
        playerBody = BodyFactory.CreateRectangle(World, playerWidth, playerHeight, 2f, playerPosition);
        playerBody.BodyType = BodyType.Dynamic;

        // create player sprite based on player body
        _rectangleSprite = new Sprite(ScreenManager.Assets.TextureFromShape(playerBody.FixtureList[0].Shape, MaterialType.Player, Color.Orange, 1f));

        // Create swinging joint
        var joint = JointFactory.CreateFixedRevoluteJoint(World, playerBody, ConvertUnits.ToSimUnits(0, -34), playerBody.Position);

    If somebody could also provide the command I would need to pause the shape on a mouse click or keyboard command at its current angle, and then continue when I let go of the mouse click, that would be super fantastic! (I actually posted this on StackOverflow as well, before I realised there was a dedicated game development forum.) Cheers

    Read the article

  • What is the most efficient way to add and remove Slick2D sprites?

    - by kirchhoff
    I'm making a game in Java with Slick2D and I want to create planes which shoot:

        int maxBullets = 40;
        static int bullet = 0;
        Missile missile[] = new Missile[maxBullets];

    I want to create/move my missiles in the most efficient way; I would appreciate your advice:

        public void shoot() throws SlickException {
            if (bullet < maxBullets) {
                if (missile[bullet] != null) {
                    missile[bullet].resetLocation(plane.getCentreX(), plane.getCentreY(), plane.image.getRotation());
                } else {
                    missile[bullet] = new Missile("resources/missile.png", plane.getCentreX(), plane.getCentreY(), plane.image.getRotation());
                }
            } else {
                bullet = 0;
                missile[bullet].resetLocation(plane.getCentreX(), plane.getCentreY(), plane.image.getRotation());
            }
            bullet++;
        }

    I created the method resetLocation in my Missile class in order to avoid loading the resource again. Is it correct? In the update method I've got this to move all the missiles:

        if (bullet > 0 && bullet < maxBullets) {
            float hyp = 0.4f * delta;
            if (bullet == 1) {
                missile[0].move(hyp);
            } else {
                for (int x = 0; x < bullet; x++) {
                    missile[x].move(hyp);
                }
            }
        }

    Read the article
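    A minimal sketch of a plain object pool, with Missile reduced to a hypothetical interface standing in for the asker's class: every missile is created (and its image loaded) once up front, dead slots are reused on each shot, and only live missiles are moved:

        // Hedged sketch: fixed-size missile pool; Missile here is a stand-in for the asker's class.
        final class MissilePool {
            interface Missile {
                boolean isAlive();
                void reset(float x, float y, float rotation); // re-arm a dead missile
                void move(float distance);
            }

            private final Missile[] pool;

            MissilePool(Missile[] preloadedMissiles) {
                this.pool = preloadedMissiles;   // all created (and images loaded) once at startup
            }

            void shoot(float x, float y, float rotation) {
                for (Missile m : pool) {
                    if (!m.isAlive()) {          // first free slot wins; drop the shot if none is free
                        m.reset(x, y, rotation);
                        return;
                    }
                }
            }

            void update(float delta) {
                float distance = 0.4f * delta;
                for (Missile m : pool) {
                    if (m.isAlive()) {
                        m.move(distance);
                    }
                }
            }
        }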

  • Javascript Canvas Drawing Efficiency

    - by jujumbura
    I have just recently started some experiments with game development in JavaScript/HTML5, and so far it has been going pretty well. I have a simple test scene running with some basic input handling, and a hundred-ish drawImage() calls with a few transforms. This all runs great on Chrome, but unfortunately, it already chugs on Firefox. I am using a very large canvas (1920 x 1080), but it doesn't seem like I should be hitting my limit already. So on that note, I was hoping to ask a few questions:
    1) What exactly is done on the CPU vs. the GPU in terms of canvas and drawImage()? I'm afraid the answer is probably "it depends on the browser", but can anybody give me some rules of thumb? I naively imagined that each drawImage call results in a textured quad on the GPU with the canvas effectively being a render target, but I'm wondering if I'm pretty far off base there...
    2) I have seen posts here and there with people saying not to use the translate(), rotate(), scale() functions when drawing on the canvas. Am I adding a lot of overhead just by adding a translate() call, as opposed to passing the x,y to drawImage()? Some people suggest using "translate3d", etc., which are CSS properties, but I'm not sure how to use them within a scene. Can they be used for animated sprites within a single canvas?
    3) I have also seen a lot of posts with people mentioning that pre-building canvases and then re-using them is a lot faster than issuing all the individual draw calls again. I am guessing that my background should definitely be pre-built into a canvas, but how far should I take this? Should I maintain an individual canvas for each sprite, to cache all static image data when not animating?
    Thank you much for your advice!

    Read the article

  • Calculating a child object's Position, Rotation and Scale values?

    - by Sergio Plascencia
    I am making my own game editor, but have encountered the following problem: I have two objects, A and B.
    A's initial values: Position (3,3,3), Rotation (45,10,0), Scale (1,2,2.5)
    B's initial values: Position (1,1,1), Rotation (10,34,18), Scale (1.5,2,1)
    If I now make B a child of A, I need to re-calculate B's Position, Rotation and Scale relative to A such that it maintains its current position, rotation and scale in world coordinates. So B's position would now be (-2, -2, -2), since A is now its center and (-2, -2, -2) will keep B in the same position. I think I have the Position and Scale figured out, but not the Rotation. So I opened Unity and ran the same example, and I noticed that when making a child object, the child object did not move at all, but had its Position, Rotation and Scale values changed relative to the parent. For example:
    Unity (Parent Object "A"): Position (0,0,0), Rotation (45,10,0), Scale (1,1,1)
    Unity (Child Object "B"): Position (0,0,0), Rotation (0,0,0), Scale (1,1,1)
    When B becomes a child of A, its rotation values become X: -44.13605, Y: -14.00195, Z: 9.851074. If I plug the same rotation values into the B object in my editor, the object does not move at all. How did Unity arrive at those rotation values for the child? What are the calculations? If you can put up all the equations for the Position, Rotation and Scale, then I can double-check I am doing it correctly, but the Rotation is what I really need.

    Read the article

  • C++: How to use angular velocity derived from inertia and force (torque) in 3D

    - by user1217203
    I am relatively new to game development. My terminology and description may not be appropriate; please excuse my poor phrasing, and help me by giving advice on how to question better if this question seems less fitting. I really appreciate your efforts. Hi. I am having a hard time interpreting the set of values I have. I have inertia and force (torque) in terms of x, y, z. FYI, I used the x and y coordinates as my ground, flat coordinates and z as my up/down. I am assuming that since f = ma, angular acceleration must be a = f / m. So I divide my torque by inertia. Then I add those x, y, z values to my angular velocity variable's x, y, z. However, these x, y, z values confuse me. Don't I need angle/sec or radian/sec sort of values in order to apply rotation? The x, y, z values I have seem to not say anything about radians or angular movement. Question: If I have (1, 2, 3) or any (x, y, z) as my angular velocity, how do I actually apply it as angular movement? FYI, here I am pasting my code:

        float mass = 100;
        float devidedMass = 1.0/12 * mass;
        Vec3 innertia(
            devidedMass* (_box._size.z*_box._size.z + _box._size.x*_box._size.x),
            devidedMass* (_box._size.y*_box._size.y + _box._size.x*_box._size.x),
            devidedMass* (_box._size.y*_box._size.y + _box._size.z*_box._size.z ));

        box._angAccel += forceAng/innertia;
        box._angVelo += box._angAccel;
        box._angAccel.allZero();

    Source of my inertia calculation: http://www.health.uottawa.ca/biomech/courses/apa4311/solids.pdf

    Read the article
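    A small sketch of how such a vector is usually interpreted (an assumption about conventions, not the asker's engine): torque divided by inertia is angular acceleration in rad/s² about each axis, so after integrating into angular velocity (rad/s), the rotation applied in a frame is simply the angular velocity times the frame time:

        // Hedged sketch: explicit integration of torque -> angular velocity -> orientation.
        // All angles are in radians; dt is the frame time in seconds.
        final class RigidBoxRotation {
            double[] inertia;                      // diagonal inertia terms (Ix, Iy, Iz) of the box
            double[] angVel = new double[3];       // rad/s about x, y, z
            double[] orientation = new double[3];  // accumulated Euler angles, rad

            RigidBoxRotation(double mass, double sx, double sy, double sz) {
                double k = mass / 12.0;
                inertia = new double[] {
                    k * (sy * sy + sz * sz),   // about x
                    k * (sx * sx + sz * sz),   // about y
                    k * (sx * sx + sy * sy)    // about z
                };
            }

            void step(double[] torque, double dt) {
                for (int i = 0; i < 3; i++) {
                    double angAccel = torque[i] / inertia[i]; // rad/s^2
                    angVel[i] += angAccel * dt;               // rad/s
                    orientation[i] += angVel[i] * dt;         // radians rotated this frame
                }
            }
        }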

  • Central renderer for a given scene

    - by Loggie
    When creating a central rendering system for all game objects in a given scene I am trying to work out the best way to go about passing the scene to the render system to be rendered. If I have a scene managed by an arbitrary structure, i.e., an octree, bsp trees, quad-tree, kd tree, etc. What is the best way to pass this to the render system? The obvious problem is that if simply given the root node of the structure, the render system would require an intrinsic knowledge of the structure in order to traverse the structure. My solution to this is to clip all objects outside the frustum in the scene manager and then create a list of the objects which are left and pass this simple list to the render system, be it an array, a vector, a linked list, etc. (This would be a structure required by the render system as a means to know which objects should be rendered). The list would of course attempt to minimise OpenGL state changes by grouping objects that require the same rendering operations to be performed on them. I have been thinking a lot about this and started searching various terms on here and followed any additional information/links but I have not really found a definitive answer. The case may be that there is no definitive answer but I would appreciate some advice and tips. My question is, is this a reasonable solution to the problem? Are there any improvements that I could make? Are there any caveats I should know about? Side question: Am I right in assuming that octrees, bsp trees, etc are all forms of BVH?

    Read the article
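    A minimal sketch of the "simple list" handed to the render system, with all names being illustrative assumptions: each object that survives culling is reduced to an item carrying a sort key built from its shader and texture ids, so sorting the list groups state changes and keeps any knowledge of the scene structure out of the renderer:

        import java.util.ArrayList;
        import java.util.List;

        // Hedged sketch: a flat, state-sorted render list produced by the scene manager.
        final class RenderQueue {
            static final class Item {
                final long sortKey;     // shader id in the high bits, texture id in the low bits
                final Object drawable;  // whatever the render system needs to issue the draw

                Item(int shaderId, int textureId, Object drawable) {
                    this.sortKey = ((long) shaderId << 32) | (textureId & 0xffffffffL);
                    this.drawable = drawable;
                }
            }

            private final List<Item> items = new ArrayList<>();

            // The scene manager calls this for every object that survives frustum culling.
            void submit(int shaderId, int textureId, Object drawable) {
                items.add(new Item(shaderId, textureId, drawable));
            }

            // The render system only ever sees this ordered list; it binds a shader/texture
            // when the key changes and otherwise just draws, minimising state changes.
            List<Item> sorted() {
                items.sort((a, b) -> Long.compare(a.sortKey, b.sortKey));
                return items;
            }

            void clear() { items.clear(); }
        }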

  • Two graphical entities, smooth blending between them (e.g. asphalt and grass)

    - by Gabriel Conrad
    Suppose that in a scenario there are, among other things, a tarmac strip and a meadow. The tarmac has an asphalt texture and its model is a long triangle strip that might bifurcate at some point into other tinier strips, and suppose that the meadow is covered with grass. What can be done to make the two graphical entities seem less like they were cut out from a photo and pasted one on top of the other at the edges? To better understand the problem, picture a strip of asphalt and a plane covered with grass. The grass texture should also "enter" the tarmac strip a little bit at the edges (i.e. a feathering effect). My ideas involve two approaches: put two textures on the tarmac entity, but that involves a serious restriction in how the strip is modeled and how its texture coordinates are mapped; or try to apply a post-processing filter that mimics a bloom effect where "grass" is used instead of light, which could be a terrible failure to achieve correct results. So, is there a better or at least a more obvious way that's widely used in the game dev industry?

    Read the article

  • How can I improve my isometric tile-picking algorithm?

    - by Cypher
    I've spent the last few days researching isometric tile-picking algorithms (converting screen coordinates to tile coordinates), and have obviously found a lot of the math beyond my grasp. I have come fairly close and what I have is workable, but I would like to improve on this algorithm, as it's a little off and seems to pick down and to the right of the mouse pointer. I've uploaded a video to help visualize the current implementation: http://youtu.be/EqwWcq1zuaM My isometric rendering algorithm is based on what is found at this stackoverflow question's answer, with the exception that my x and y axes are inverted (x increases down-right, while y increases up-right). Here is where I am converting from screen to tiles:

        // these next few lines convert the mouse pointer position from screen
        // coordinates to tile-grid coordinates. cameraOffset captures the current
        // mouse location and takes into consideration the camera's position on screen.
        System.Drawing.Point cameraOffset = new System.Drawing.Point( 0, 0 );
        cameraOffset.X = mouseLocation.X + (int)camera.Left;
        cameraOffset.Y = ( mouseLocation.Y + (int)camera.Top );

        // the camera-aware mouse coordinates are then further converted in an attempt
        // to select only the "tile" portion of the grid tiles, instead of the entire
        // rectangle. this algorithm gets close, but could use improvement.
        mouseTileLocation.X = ( cameraOffset.X + 2 * cameraOffset.Y ) / Global.TileWidth;
        mouseTileLocation.Y = -( ( 2 * cameraOffset.Y - cameraOffset.X ) / Global.TileWidth );

    Things to make note of: mouseLocation is a System.Drawing.Point that represents the screen coordinates of the mouse pointer. cameraOffset is the screen position of the mouse pointer that includes the position of the game camera. mouseTileLocation is a System.Drawing.Point that is supposed to represent the tile coordinates of the mouse pointer. If you check out the above link to YouTube, you'll notice that the picking algorithm is off a bit. How can I improve on this?

    Read the article

  • Should components have sub-components in a component-based system like Artemis?

    - by Daniel Ingraham
    I am designing a game using Artemis, although this is more of a philosophical question about component-based design in general. Let's say I have non-primitive data which applies to a given component (a Component "animal" may have qualities such as "teeth" or "diet"). There are three ways to approach this in data-driven design, as I see it:
    1) Generate classes for these qualities using "traditional" OOP. I imagine this has negative implications for performance, as systems then must be made aware of these qualities in order to process them. It also seems counter to the overall philosophy of data-driven design.
    2) Include these qualities as sub-components. This seems off, in that we are now confusing the role of components with that of entities. Moreover, out of the box Artemis isn't capable of mapping these sub-components onto their parent components.
    3) Add "teeth", "diet", etc. as components to the overall entity alongside "animal". While this feels odd hierarchically, it may simply be a peculiarity of component-based systems.
    I suspect 3 is the correct way to think about things, but I was curious about other ideas.

    Read the article

  • How can I make permanent death in a MUD seem acceptable and fair to players?

    - by Luke Laupheimer
    I have considered writing a MUD for years, and I have a lot of ideas my friends think are really cool (and that's how I'd hope to get anywhere -- word of mouth). Thing is, there's one thing I have always wanted, that my friends and strangers hated: permanent death. Now, the emotional response I get to this is visceral revulsion, every time. I'm pretty sure I am the only person that wants this, or if I'm not, I'm a tiny minority. Now, the reason I want it is because I want the actions of the players to matter. Unlike a lot of other MUDs, which have a set of static city-states and social institutions etc, I want the things my players do, should I get any, to actually change the situation. And that includes killing people. If you kill someone, you didn't send them to time out, you killed them. What happens when you kill people? They go away. They don't come back in half an hour to smack talk you some more. They're gone. Forever. By making death non-permanent, you make death not matter. It would be similar if the climax to a character's arc were getting a speeding ticket. It cheapens it. Non-permanent death cheapens death. How can I:
    1) Convince my players (and random people!) that this is actually a good idea? Or
    2) Find some other way to make death and violence matter as much as it does in real life (except within the game, of course) sans character deletion?
    What alternatives are there out there?

    Read the article

  • How should I load level data in java?

    - by Matthew G.
    I'm setting up my engine for a certain action/arcade game to have a set of commands that would look something like this:
    Set landscape to grass
    Create rocks at ...
    Create player at X, Y
    Set goal to "Get to point X Y"
    Spawn enemy at X, Y
    I'd then have each object knowing what it has to do, and acting on its own. I've been thinking about how to store this data. External data files could be parsed by a level class, and certain objects could be spawned through that. I could also create a base level class and extend it for each level, but that'd create a large number of classes. Another idea is to have one level parser class, but have a case for each level. This would be extremely silly and bulky, but I mention it because I found that I did this at 2 AM last night. I'm finally getting why I have to plan out my inheritances, though. RIP project. I might be completely missing another option.

    Read the article
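    A minimal sketch of the external-data-file route; the file format, command words, and spawning hooks below are all assumptions rather than the engine's real API. One loader reads a plain text level script line by line and dispatches on the first word, so adding a level means adding a data file rather than a class:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;

        // Hedged sketch: one parser for every level; levels are plain text command files.
        final class LevelLoader {

            void load(Path levelFile) throws IOException {
                try (BufferedReader in = Files.newBufferedReader(levelFile)) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        line = line.trim();
                        if (line.isEmpty() || line.startsWith("#")) continue;  // blanks and comments
                        dispatch(line);
                    }
                }
            }

            private void dispatch(String line) {
                String[] parts = line.split("\\s+");
                switch (parts[0].toLowerCase()) {
                    case "landscape": setLandscape(parts[1]); break;                  // "landscape grass"
                    case "player":    spawnPlayer(Integer.parseInt(parts[1]),
                                                  Integer.parseInt(parts[2])); break; // "player 4 7"
                    case "enemy":     spawnEnemy(Integer.parseInt(parts[1]),
                                                 Integer.parseInt(parts[2])); break;  // "enemy 10 3"
                    default: System.err.println("Unknown level command: " + line);
                }
            }

            // These would call into the actual engine; printouts stand in for that here.
            private void setLandscape(String kind) { System.out.println("landscape: " + kind); }
            private void spawnPlayer(int x, int y) { System.out.println("player at " + x + "," + y); }
            private void spawnEnemy(int x, int y)  { System.out.println("enemy at " + x + "," + y); }
        }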

  • How do I apply 2 rotations about different points to a single primitive using OpenGL

    - by Fenoec
    I'm working on a 2D top-down shooter game that has a rotation feature like Realm Of The Mad God such that if you press e the camera rotates around the character in a clockwise direction and q rotates the camera around the character in a counterclockwise direction. I have this working with my floors and walls by translating to the character, doing the screen rotation, and drawing everything with the character as the origin. The problem arises when I shoot projectiles which need to both rotate around the character and rotate around themselves (shooting uses the mouse cursor so I can shoot at any angle). For example, if the screen is not rotated and I'm shooting rectangular projectiles, I want them to face in the direction I'm shooting (rotation around themselves). However if I only do this rotation, when I then rotate the screen the projectiles will always shoot at the same position because my cursor position does not change. Therefore I need to also either rotate the projectiles around the character or rotate the mouse cursor position to get the correct position (which would then totally screw up all of the collision detection). Likewise if I only do the screen rotation on projectiles, the rectangles will always be facing the same way and they would only look correct if I were shooting straight up or straight down. So my question is, how can I perform 2 rotations on a primitive around 2 different points? The only way I can think of is to translate to the character and do the screen rotation, then somehow calculate the translation required to move back to the middle of the projectile (seeing as how my axes are now rotated) and do its rotation. Or am I thinking about this in the wrong way and there is a different solution to accomplishing this effect?

    Read the article

  • Why can't I compare two Texture2D's?

    - by Fiona
    I am trying to use an accessor, as it seems to me that that is the only way to accomplish what I want to do. Here is my code (Game1.cs):

        public class GroundTexture
        {
            private Texture2D dirt;

            public Texture2D Dirt
            {
                get { return dirt; }
                set { dirt = value; }
            }
        }

        public class Main : Game
        {
            public static Texture2D texture = tile.Texture;
            GroundTexture groundTexture = new GroundTexture();
            public static Texture2D dirt;

            protected override void LoadContent()
            {
                Tile tile = (Tile)currentLevel.GetTile(20, 20);
                dirt = Content.Load<Texture2D>("Dirt");
                groundTexture.Dirt = dirt;
                Texture2D texture = tile.Texture;
            }

            protected override void Update(GameTime gameTime)
            {
                if (texture == groundTexture.Dirt)
                {
                    player.TileCollision(groundBounds);
                }
                base.Update(gameTime);
            }
        }

    I removed irrelevant information from the LoadContent and Update functions. On the line if (texture == groundTexture.Dirt) I am getting the error: Operator '==' cannot be applied to operands of type 'Microsoft.Xna.Framework.Graphics.Texture2D' and 'Game1.GroundTexture'. Am I using the accessor correctly? And why do I get this error? "Dirt" is a Texture2D, so they should be comparable. This uses a few functions from a program called Realm Factory, which is a tile editor. The numbers "20, 20" are just a sample of the level I made below: tile.Texture returns the sprite, which here is the content item Dirt.png. Thank you very much! (I posted this on the main Stackoverflow site, but after several days didn't get a response. Since it has to do mainly with Texture2D, I figured I'd ask here.)

    Read the article

  • LOD in modern games

    - by Firas Assaad
    I'm currently working on my master's thesis about LOD and mesh simplification, and I've been reading many academic papers and articles about the subject. However, I can't find enough information about how LOD is being used in modern games. I know many games use some sort of dynamic LOD for terrain, but what about elsewhere? Level of Detail for 3D Graphics for example points out that discrete LOD (where artists prepare several models in advance) is widely used because of the performance overhead of continuous LOD. That book was published in 2002 however, and I'm wondering if things are different now. There has been some research in performing dynamic LOD using the geometry shader (this paper for example, with its implementation in ShaderX6), would that be used in a modern game? To summarize, my question is about the state of LOD in modern video games, what algorithms are used and why? In particular, is view dependent continuous simplification used or does the runtime overhead make using discrete models with proper blending and impostors a more attractive solution? If discrete models are used, is an algorithm used (e.g. vertex clustering) to generate them offline, do artists manually create the models, or perhaps a combination of both methods is used?

    Read the article
