Search Results

Search found 21563 results on 863 pages for 'game testing'.

Page 325/863 | < Previous Page | 321 322 323 324 325 326 327 328 329 330 331 332  | Next Page >

  • Recasting and Drawing in SDL

    - by user1078123
    I have some code that essentially draws a column on the screen of a wall in a raycasting-type 3D engine. I am trying to optimize it, as it takes about 10 milliseconds to draw a million pixels this way, and the vast majority of game time is spent in this loop. However, I don't quite understand what's occurring, particularly the recasting (I modified the "pixel manipulation" sample code from the SDL documentation). "canvas" is the surface I am drawing to, and "hello" is the surface containing the texture for the column.

        int c = (curcol) * canvas->format->BytesPerPixel;
        void *canvaspixels = canvas->pixels;
        Uint16 texpitch = hello->pitch;
        int lim = (drawheight + startdraw) * canvpitch + c + (int) canvaspixels;
        Uint8 *k = (Uint8 *)hello->pixels + (hit) * hello->format->BytesPerPixel;
        for (int j = (startdraw) * (canvpitch) + c + (int) canvaspixels; (j < lim); j += canvpitch) {
            Uint8 *q = (Uint8 *) ((int(h)) * (texpitch) + k);
            *(Uint32 *)j = *(Uint32 *)q;
            h += s;
        }

    We have void pointers (I'm not sure how those are even represented), 8-, 16-, and 32-bit ints (h and s are floats), all being intermingled, and while it works, it is quite confusing.

    Read the article

  • Automatically triggering standard spaceship controls to stop its motion

    - by Garan
    I have been working on a 2D top-down space strategy/shooting game. Right now it is only in the prototyping stage (I have gotten basic movement), but now I am trying to write a function that will stop the ship based on its velocity. This is being written in Lua, using the Love2D engine. My code is as follows (note: object.dx is the x-velocity, object.dy is the y-velocity, object.acc is the acceleration, and object.r is the rotation in radians):

        function stopMoving(object, dt)
            local targetr = math.atan2(object.dy, object.dx)
            if targetr == object.r + math.pi then
                local currentspeed = math.sqrt(object.dx*object.dx + object.dy*object.dy)
                if currentspeed ~= 0 then
                    object.dx = object.dx + object.acc*dt*math.cos(object.r)
                    object.dy = object.dy + object.acc*dt*math.sin(object.r)
                end
            else
                if (targetr - object.r) >= math.pi then
                    object.r = object.r - object.turnspeed*dt
                else
                    object.r = object.r + object.turnspeed*dt
                end
            end
        end

    It is implemented in the update function as:

        if love.keyboard.isDown("backspace") then
            stopMoving(player, dt)
        end

    The problem is that when I am holding down backspace, it spins the player clockwise (though I am trying to have it turn in whichever direction reaches the required angle most efficiently), and then it never starts to accelerate the player in the direction opposite to its velocity. What should I change in this code to get that to work? EDIT: I'm not trying to just stop the player in place; I'm trying to get it to use its normal commands to neutralize its existing velocity. I also changed math.atan to math.atan2, since apparently it's better. I noticed no difference when running it, though.
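    For reference, here is a minimal plain-Java sketch of the "turn toward the reverse of the velocity, then thrust" logic the question describes. All names are illustrative (this is not Love2D/Lua code); the key detail is computing the shortest signed angle difference rather than testing angles for equality.

        // Sketch of "turn toward the retro direction, then burn" steering.
        final class Ship {
            double dx, dy, r;                 // velocity and heading (radians)
            double acc = 50, turnSpeed = 3;   // illustrative tuning values

            void stopMoving(double dt) {
                double speed = Math.hypot(dx, dy);
                if (speed < 1e-3) return;                    // already (nearly) stopped

                double retro = Math.atan2(-dy, -dx);         // direction opposite to velocity
                // shortest signed difference in (-pi, pi], robust to wrap-around
                double diff = Math.atan2(Math.sin(retro - r), Math.cos(retro - r));

                if (Math.abs(diff) > 0.05) {
                    // still need to rotate: turn in the sign of the difference
                    r += Math.signum(diff) * Math.min(turnSpeed * dt, Math.abs(diff));
                } else {
                    // roughly aligned: thrust, capping the burn so speed never overshoots zero
                    double burn = Math.min(acc * dt, speed);
                    dx += burn * Math.cos(r);
                    dy += burn * Math.sin(r);
                }
            }
        }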

    Read the article

  • Javascript Canvas Drawing Efficiency

    - by jujumbura
    I have just recently started some experiments with game development in Javascript/HTML5, and so far it has been going pretty well. I have a simple test scene running with some basic input handling, and a hundred-ish drawImage() calls with a few transforms. This all runs great on Chrome, but unfortunately, it already chugs on Firefox. I am using a very large canvas (1920 x 1080), but it doesn't seem like I should be hitting my limit already. So on that note, I was hoping to ask a few questions:

    1) What exactly is done on the CPU vs. the GPU in terms of canvas and drawImage()? I'm afraid the answer is probably "it depends on the browser", but can anybody give me some rules of thumb? I naively imagined that each drawImage call results in a textured quad on the GPU with the canvas effectively being a render target, but I'm wondering if I'm pretty far off base there...

    2) I have seen posts here and there with people saying not to use the translate(), rotate(), scale() functions when drawing on the canvas. Am I adding a lot of overhead just by adding a translate() call, as opposed to passing the x,y to drawImage()? Some people suggest using "translate3d", etc., which are CSS properties, but I'm not sure how to use them within a scene. Can they be used for animated sprites within a single canvas?

    3) I have also seen a lot of posts with people mentioning that pre-building canvases and then re-using them is a lot faster than issuing all the individual draw calls again. I am guessing that my background should definitely be pre-built into a canvas, but how far should I take this? Should I maintain an individual canvas for each sprite, to cache all static image data when not animating?

    Thank you much for your advice!

    Read the article

  • Create a thread in xna Update method to find path?

    - by Dan
    I am trying to create a separate thread for my enemy's A* pathfinder, which will give me a list of points to get to the player. I have placed the thread in the update method of my enemy. However, this seems to cause jittering in the game every time the thread is called. I have tried calling just the method and this works fine. Is there any way I can sort this out so that I can have the pathfinder on its own thread? Do I need to remove the thread start from the update and start it in the constructor? Is there any way this can work? Here is the code at the moment:

        bool running = false;
        bool threadstarted;
        System.Threading.Thread thread;

        public void update()
        {
            if (running == false && threadstarted == false)
            {
                thread = new System.Threading.Thread(PathThread);
                //thread.Priority = System.Threading.ThreadPriority.Lowest;
                thread.IsBackground = true;
                thread.Start(startandendobj);
                //PathThread(startandendobj);
                threadstarted = true;
            }
        }

        public void PathThread(object Startandend)
        {
            object[] Startandendarray = (object[])Startandend;
            Point startpoint = (Point)Startandendarray[0];
            Point endpoint = (Point)Startandendarray[1];
            bool runnable = true;

            // Path find from 255, 255 to 0,0 on the map
            foreach (Tile tile in Map)
            {
                if (tile.Color == Color.Red)
                {
                    if (tile.Position.Contains(endpoint))
                    {
                        runnable = false;
                    }
                }
            }

            if (runnable == true)
            {
                running = true;
                Pathfinder p = new Pathfinder(Map);
                pathway = p.FindPath(startpoint, endpoint);
                running = false;
                threadstarted = false;
            }
        }
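    The general pattern being asked about (kick off one path request on a worker, keep updating every frame, and only consume the result once it is ready) can be sketched outside of XNA. The following plain-Java version uses an ExecutorService and a Future; the Pathfinder interface and all names are stand-ins, not part of any engine.

        import java.awt.Point;
        import java.util.List;
        import java.util.concurrent.*;

        // Illustrative only: Pathfinder and the map it searches are stand-ins.
        interface Pathfinder {
            List<Point> findPath(Point start, Point end);
        }

        class EnemyBrain {
            private final ExecutorService worker = Executors.newSingleThreadExecutor();
            private final Pathfinder pathfinder;
            private Future<List<Point>> pending;   // in-flight request, if any
            private List<Point> pathway;           // last completed path

            EnemyBrain(Pathfinder pathfinder) { this.pathfinder = pathfinder; }

            /** Called once per frame; never blocks the game loop. */
            void update(Point start, Point end) {
                if (pending == null) {
                    Callable<List<Point>> request = () -> pathfinder.findPath(start, end);
                    pending = worker.submit(request);
                } else if (pending.isDone()) {
                    try {
                        pathway = pending.get();   // instant, since isDone() was true
                    } catch (InterruptedException | ExecutionException e) {
                        e.printStackTrace();
                    }
                    pending = null;                // ready for the next request
                }
                // ... follow 'pathway' here ...
            }

            void shutdown() { worker.shutdownNow(); }
        }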

    Read the article

  • Dynamically load images inside jar

    - by Rahat Ahmed
    I'm using Slick2D for a game, and while it runs fine in Eclipse, I'm trying to figure out how to make it work when exported to a runnable .jar. I have it set up so that I load every image located in the res/ directory. Here's the code:

        /**
         * Loads all .png images located in source folders.
         * @throws SlickException
         */
        public static void init() throws SlickException {
            loadedImages = new HashMap<>();
            try {
                URI uri = new URI(ResourceLoader.getResource("res").toString());
                File[] files = new File(uri).listFiles(new FilenameFilter() {
                    @Override
                    public boolean accept(File dir, String name) {
                        if (name.endsWith(".png"))
                            return true;
                        return false;
                    }
                });
                System.out.println("Naming filenames now.");
                for (File f : files) {
                    System.out.println(f.getName());
                    FileInputStream fis = new FileInputStream(f);
                    Image image = new Image(fis, f.getName(), false);
                    loadedImages.put(f.getName(), image);
                }
            } catch (URISyntaxException | FileNotFoundException e) {
                System.err.println("UNABLE TO LOAD IMAGES FROM RES FOLDER!");
                e.printStackTrace();
            }
            font = new AngelCodeFont("res/bitmapfont.fnt", Art.get("bitmapfont.png"));
        }

    Now the obvious problem is the line

        URI uri = new URI(ResourceLoader.getResource("res").toString());

    If I pack the res folder into the .jar, there will not be a res folder on the filesystem. How can I iterate through all the images in the compiled .jar itself, or what is a better system to automatically load all images?
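    One commonly used approach, sketched below in plain Java, is to enumerate the entries of the jar the class was loaded from and open each matching entry as a stream rather than as a File. The res/ prefix and the jar-vs-directory distinction are assumptions about the project layout; the directory fallback for running inside the IDE is omitted.

        import java.io.File;
        import java.io.InputStream;
        import java.util.ArrayList;
        import java.util.Enumeration;
        import java.util.List;
        import java.util.jar.JarEntry;
        import java.util.jar.JarFile;

        public final class ResourceLister {

            /** Lists e.g. "res/player.png" for every .png under res/ inside the running jar. */
            public static List<String> listPngsInJar(Class<?> anchor) throws Exception {
                File source = new File(anchor.getProtectionDomain()
                                             .getCodeSource().getLocation().toURI());
                List<String> names = new ArrayList<>();
                if (source.isFile()) {                       // running from a jar
                    try (JarFile jar = new JarFile(source)) {
                        Enumeration<JarEntry> entries = jar.entries();
                        while (entries.hasMoreElements()) {
                            String name = entries.nextElement().getName();
                            if (name.startsWith("res/") && name.endsWith(".png")) {
                                names.add(name);
                            }
                        }
                    }
                }
                return names;                                // (directory fallback omitted)
            }

            /** Entries found this way are then opened as streams, not files. */
            public static InputStream open(Class<?> anchor, String entryName) {
                return anchor.getResourceAsStream("/" + entryName);
            }
        }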

    Read the article

  • Two graphical entities, smooth blending between them (e.g. asphalt and grass)

    - by Gabriel Conrad
    Suppose that in a scene there are, among other things, a tarmac strip and a meadow. The tarmac has an asphalt texture and its model is a long triangle strip that might bifurcate at some point into other, tinier strips, and suppose that the meadow is covered with grass. What can be done to make the two graphical entities seem less like they were cut out from a photo and pasted one on top of the other at the edges? To better understand the problem, picture a strip of asphalt and a plane covered with grass. The grass texture should also "enter" the tarmac strip a little bit at the edges (i.e. a feathering effect). My ideas involve two approaches: put two textures on the tarmac entity, but that involves a serious restriction in how the strip is modeled and how its texture coordinates are mapped; or try to apply a post-processing filter that mimics a bloom effect where "grass" is used instead of light, which could be a terrible failure to achieve correct results. So, is there a better, or at least a more obvious, way that's widely used in the game dev industry?

    Read the article

  • OnTrigger not firing consistently

    - by Lautaro
    I have a Prefab called Player which has a Body and a Sword. The game uses two instances of Player: Player1 and Player2. I use Player1 to strike Player2. This is code on the sword. My hope is that the Sword of Player1 will log on contact with the Body of Player2. It happens, but only on the first hit, and then I have to hit several times before another strike is logged. But when I look at the log from OnTriggerStay, it looks like the TriggerExit is never detected until long after the sword is gone.

        void OnTriggerEnter(Collider other)
        {
            // Play sound to confirm collision
            var sm = ObjectDirectory.soundManager;
            sm.PlaySoundClip(sm.gui_02);
            Debug.Log(other.name + " - ENTER");
        }

        void OnTriggerStay(Collider other)
        {
            Debug.Log(other.name + " - collision");
        }

        void OnTriggerExit(Collider other)
        {
            Debug.Log(other.name + " - HAS LEFT");
        }

    DEBUG LOG:

        Player2 - ENTER
        UnityEngine.Debug:Log(Object)
        SwordControl:OnTriggerEnter(Collider) (at Assets/Scripts/SwordControl.cs:28)

        Player2 - collision
        UnityEngine.Debug:Log(Object)
        SwordControl:OnTriggerStay(Collider) (at Assets/Scripts/SwordControl.cs:34)

    (The last debug log was then repeated hundreds of times, long after the sword of Player1 had withdrawn and was in no contact with Player2.) EDIT: Further tests show that if I move Player1 backwards, away from Player2, I trigger the OnTriggerExit, even though the sword has not been touching Player2 since the blow. However, even after OnTriggerExit it takes many tries until I can get another blow registered.

    Read the article

  • Collision and Graphics integration

    - by Shlomi Atia
    I'm a little confused about the integration between collision and graphics. They both need to share the same position in the world. The most obvious choice is the center of the entity, which is good for bounding volumes and fixed-size sprites. However, for characters with variable-height sprites like this: http://gamemedia.wcgame.ru/data/2011-07-17/game-sprite-sheet.jpg this is no longer good. The character won't align to the ground if I draw it from the center. I could just make the sprites the same height, but it would be a waste of memory (the largest sprite is 4 times larger than the smallest one). Even then, this is not an option at all with skeletal sprites like this one: http://user-generated-content.java-gaming.org/img-vault/212a171fc1ebb27ab77608fb9b2dd9bd9205361ce6300b21a7f8d06d025fbbd8.png It seems that the graphics need to be drawn from the ground for characters, but not for other images such as scenery and obstacles. The only solution I could think of was having another position called the draw-position, which is the entity center for images and the bottom of the collision volume for characters. Then when I draw relative to that position, it should work properly. I haven't found any references for something like that, so I'm kind of insecure about it. Does anyone know of a better approach for this problem? Thanks

    Read the article

  • Elliptical orbit modeling

    - by Nathon
    I'm playing with orbits in a simple 2-d game where a ship flies around in space and is attracted to massive things. The ship's velocity is stored in a vector and acceleration is applied to it every frame as appropriate given Newton's law of universal gravitation. The point masses don't move (there's only 1 right now) so I would expect an elliptical orbit. Instead, I see this: I've tried with nearly circular orbits, and I've tried making the masses vastly different (a factor of a million) but I always get this rotated orbit. Here's some (D) code, for context:

        void accelerate(Vector delta)
        {
            velocity = velocity + delta; // Velocity is a member of the ship class.
        }

        // This function is called every frame with the fixed mass. It's a
        // method of the ship's.
        void fall(Well well)
        {
            // f = (m1 * m2) / (r**2)
            // a = f / m
            // Ship mass is 1, so a = f.
            float mass = 1;
            Vector delta = well.position - loc;
            float rSquared = delta.magSquared;
            float force = well.mass / rSquared;
            accelerate(delta * force * mass);
        }
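    As a point of comparison, here is a minimal 2-D gravity step in plain Java (names and the gravitational constant are illustrative, not taken from the game above). It differs from the snippet above in two places that commonly affect orbit shape: the direction vector is normalized before scaling by mass/r², and position is advanced with the freshly updated velocity (semi-implicit Euler). Whether that removes the precession seen here depends on the rest of the simulation loop.

        // Minimal 2-D point-mass gravity step; purely illustrative.
        final class Orbiter {
            double x, y;       // ship position
            double vx, vy;     // ship velocity

            void fall(double wellX, double wellY, double wellMass, double dt) {
                double dxToWell = wellX - x;
                double dyToWell = wellY - y;
                double r2 = dxToWell * dxToWell + dyToWell * dyToWell;
                double r = Math.sqrt(r2);

                // acceleration magnitude G*M/r^2 applied along the *unit* direction
                double g = 1.0;                      // gravitational constant (arbitrary units)
                double a = g * wellMass / r2;
                vx += a * (dxToWell / r) * dt;
                vy += a * (dyToWell / r) * dt;

                // semi-implicit Euler: move with the updated velocity
                x += vx * dt;
                y += vy * dt;
            }
        }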

    Read the article

  • What is the most efficient way to add and remove Slick2D sprites?

    - by kirchhoff
    I'm making a game in Java with Slick2D and I want to create planes which shoot:

        int maxBullets = 40;
        static int bullet = 0;
        Missile missile[] = new Missile[maxBullets];

    I want to create/move my missiles in the most efficient way; I would appreciate your advice:

        public void shoot() throws SlickException {
            if (bullet < maxBullets) {
                if (missile[bullet] != null) {
                    missile[bullet].resetLocation(plane.getCentreX(), plane.getCentreY(), plane.image.getRotation());
                } else {
                    missile[bullet] = new Missile("resources/missile.png", plane.getCentreX(), plane.getCentreY(), plane.image.getRotation());
                }
            } else {
                bullet = 0;
                missile[bullet].resetLocation(plane.getCentreX(), plane.getCentreY(), plane.image.getRotation());
            }
            bullet++;
        }

    I created the resetLocation method in my Missile class in order to avoid loading the resource again. Is that correct? In the update method I've got this to move all the missiles:

        if (bullet > 0 && bullet < maxBullets) {
            float hyp = 0.4f * delta;
            if (bullet == 1) {
                missile[0].move(hyp);
            } else {
                for (int x = 0; x < bullet; x++) {
                    missile[x].move(hyp);
                }
            }
        }
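    One common way to structure this kind of reuse is an explicit pool: create every missile once, mark each one active or inactive, and recycle inactive ones on each shot. A plain-Java sketch follows; the Missile fields, movement math, and deactivation rules are placeholders, not Slick2D API.

        import java.util.ArrayList;
        import java.util.List;

        // Illustrative pool; rendering and asset loading intentionally left out.
        final class MissilePool {
            static final class Missile {
                float x, y, rotation;
                boolean active;
            }

            private final List<Missile> pool = new ArrayList<>();

            MissilePool(int capacity) {
                for (int i = 0; i < capacity; i++) pool.add(new Missile());  // allocate once
            }

            /** Reuses an inactive missile if one exists; otherwise the shot is dropped. */
            void shoot(float x, float y, float rotation) {
                for (Missile m : pool) {
                    if (!m.active) {
                        m.x = x; m.y = y; m.rotation = rotation;
                        m.active = true;
                        return;
                    }
                }
            }

            /** Moves every live missile; deactivation (off-screen, hit, ...) not shown. */
            void update(float delta) {
                float step = 0.4f * delta;
                for (Missile m : pool) {
                    if (m.active) {
                        m.x += step * Math.cos(Math.toRadians(m.rotation));
                        m.y += step * Math.sin(Math.toRadians(m.rotation));
                    }
                }
            }
        }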

    Read the article

  • Alpha From PNGs Butchered

    - by ashes999
    I have a pretty vanilla MonoGame game. I'm using PNGs for all my sprites (made in Photoshop). I noticed that XNA is butchering the anti-aliased edges; no matter what I do, my graphics appear jagged. Below is a screenshot. The bottom half is what XNA shows me when I zoom in 2X using a Matrix on my GraphicsDevice (to make the effect more obvious). The top is when I pasted the same sprites from Photoshop and scaled them to 200%. Note that partially transparent pixels are turning whitish. Is there a way to fix this? What am I doing wrong? Here's the relevant call to draw to the SpriteBatch:

        spriteBatch.Draw(this.texture, this.positionVector, null, Color.White, this.Angle, this.originVector, 1f, SpriteEffects.None, 0f);

    (this.positionVector can easily be Vector.Zero; Color.White is 100% alpha, I think; this.Angle can be a real angle (the small > in the image) or zero (the orb itself).)

    Read the article

  • How do I separate code into classes?

    - by Trycon
    I have this main class:

        package javagame;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.SlickException;
        import org.newdawn.slick.state.BasicGameState;
        import org.newdawn.slick.state.StateBasedGame;

        public class tests extends BasicGameState {

            public boolean render = false;
            tests1 test = new tests1();

            public tests(int test) {
                // TODO Auto-generated constructor stub
            }

            @Override
            public void init(GameContainer arg0, StateBasedGame arg1) throws SlickException {
                // TODO Auto-generated method stub
            }

            @Override
            public void render(GameContainer arg0, StateBasedGame arg1, Graphics g) throws SlickException {
                // TODO Auto-generated method stub
                if (render == true) {
                    g.drawString("Hello", 100, 100);
                }
            }

            @Override
            public void update(GameContainer gc, StateBasedGame s, int delta) throws SlickException {
                // TODO Auto-generated method stub
                test.render = render;
                test.update(gc, s, delta);
            }

            @Override
            public int getID() {
                // TODO Auto-generated method stub
                return 1000;
            }
        }

    and its sub-class:

        package javagame;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Input;
        import org.newdawn.slick.state.StateBasedGame;

        public class tests1 {

            public boolean render;

            public void update(GameContainer gc, StateBasedGame s, int delta) {
                Input input = gc.getInput();
                if (input.isKeyPressed(Input.KEY_X)) {
                    render = true;
                }
            }
        }

    I was looking for a way to avoid putting too much code in one class; I'm new to Java. When I try running my game and then press X, it does not work. How am I supposed to fix that?

    Read the article

  • Should components have sub-components in a component-based system like Artemis?

    - by Daniel Ingraham
    I am designing a game using Artemis, although this is more of a philosophical question about component-based design in general. Let's say I have non-primitive data which applies to a given component (a Component "animal" may have qualities such as "teeth" or "diet"). There are three ways to approach this in data-driven design, as I see it:

    1) Generate classes for these qualities using "traditional" OOP. I imagine this has negative implications for performance, as systems then must be made aware of these qualities in order to process them. It also seems counter to the overall philosophy of data-driven design.

    2) Include these qualities as sub-components. This seems off, in that we are now confusing the role of components with that of entities. Moreover, out of the box Artemis isn't capable of mapping these sub-components onto their parent components.

    3) Add "teeth", "diet", etc. as components to the overall entity alongside "animal". While this feels odd hierarchically, it may simply be a peculiarity of component-based systems.

    I suspect 3 is the correct way to think about things, but I was curious about other ideas.

    Read the article

  • C++: How to use angular velocity derived from inertia and force (torque) in 3D

    - by user1217203
    I am relatively new to game development, so my terminology and description may not be appropriate. Please excuse my poor phrasing and help me by giving advice on how to question better if this question seems less fitting. I really appreciate your efforts. I am having a hard time interpreting the set of values I have. I have inertia and force (torque) in terms of x, y, z. FYI, I used the x and y coordinates as my flat ground coordinates and z as my up/down. I am assuming that since f = ma, angular acceleration must be a = f / m. So I divide my torque by inertia. Then I add those x, y, z values to my angular velocity variable's x, y, z. However, these x, y, z values confuse me. Don't I need angle/sec or radian/sec sort of values in order to apply rotation? The x, y, z values I have seem to not say anything about radians or angular movement. Question: if I have (1, 2, 3) or any (x, y, z) as my angular velocity, how do I actually apply it as angular movement? FYI, here I am pasting my code:

        float mass = 100;
        float devidedMass = 1.0/12 * mass;
        Vec3 innertia(
            devidedMass * (_box._size.z*_box._size.z + _box._size.x*_box._size.x),
            devidedMass * (_box._size.y*_box._size.y + _box._size.x*_box._size.x),
            devidedMass * (_box._size.y*_box._size.y + _box._size.z*_box._size.z));

        box._angAccel += forceAng / innertia;
        box._angVelo += box._angAccel;
        box._angAccel.allZero();

    Source of my inertia calculation: http://www.health.uottawa.ca/biomech/courses/apa4311/solids.pdf
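    For what the (x, y, z) triple usually means: treated as an angular-velocity vector, its direction is the rotation axis and its length is the rotation rate in radians per second, so each frame you rotate by |ω|·dt about ω/|ω|. A small plain-Java sketch of that conversion follows (the surrounding physics names are hypothetical); the resulting axis and angle can then be fed into whatever rotation representation the engine uses (matrix, quaternion, or an Euler update).

        // Convert an angular-velocity vector (rad/s about each axis) into an
        // axis + per-frame angle that a rotation routine can consume.
        final class AngularStep {
            final double axisX, axisY, axisZ;  // unit rotation axis
            final double angle;                // radians to rotate this frame

            AngularStep(double wx, double wy, double wz, double dt) {
                double mag = Math.sqrt(wx * wx + wy * wy + wz * wz); // |omega| in rad/s
                if (mag < 1e-9) {
                    axisX = 1; axisY = 0; axisZ = 0; angle = 0;      // no rotation this frame
                } else {
                    axisX = wx / mag;
                    axisY = wy / mag;
                    axisZ = wz / mag;
                    angle = mag * dt;                                // radians this frame
                }
            }
        }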

    Read the article

  • How can I improve my isometric tile-picking algorithm?

    - by Cypher
    I've spent the last few days researching isometric tile-picking algorithms (converting screen coordinates to tile coordinates), and have obviously found a lot of the math beyond my grasp. I have come fairly close and what I have is workable, but I would like to improve on this algorithm, as it's a little off and seems to pick down and to the right of the mouse pointer. I've uploaded a video to help visualize the current implementation: http://youtu.be/EqwWcq1zuaM My isometric rendering algorithm is based on what is found in this Stack Overflow question's answer, with the exception that my x and y axes are inverted (x increases down-right, while y increases up-right). Here is where I am converting from screen to tiles:

        // these next few lines convert the mouse pointer position from screen
        // coordinates to tile-grid coordinates. cameraOffset captures the current
        // mouse location and takes into consideration the camera's position on screen.
        System.Drawing.Point cameraOffset = new System.Drawing.Point(0, 0);
        cameraOffset.X = mouseLocation.X + (int)camera.Left;
        cameraOffset.Y = (mouseLocation.Y + (int)camera.Top);

        // the camera-aware mouse coordinates are then further converted in an attempt
        // to select only the "tile" portion of the grid tiles, instead of the entire
        // rectangle. this algorithm gets close, but could use improvement.
        mouseTileLocation.X = (cameraOffset.X + 2 * cameraOffset.Y) / Global.TileWidth;
        mouseTileLocation.Y = -((2 * cameraOffset.Y - cameraOffset.X) / Global.TileWidth);

    Things to make note of:

    - mouseLocation is a System.Drawing.Point that represents the screen coordinates of the mouse pointer.
    - cameraOffset is the screen position of the mouse pointer that includes the position of the game camera.
    - mouseTileLocation is a System.Drawing.Point that is supposed to represent the tile coordinates of the mouse pointer.

    If you check out the above link to YouTube, you'll notice that the picking algorithm is off a bit. How can I improve on this?
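    For reference, the usual diamond-map conversion is sketched below in plain Java. It assumes a 2:1 tile (width twice the height) and the conventional axis orientation, so the signs would need adjusting for the inverted axes described above; the point it illustrates is doing the division in floating point and flooring the result, which avoids the down-and-right bias that integer division introduces.

        // Screen -> tile for a diamond (2:1) isometric layout; illustrative only.
        final class IsoPicker {
            static int[] screenToTile(double screenX, double screenY,
                                      double tileWidth, double tileHeight) {
                // Standard layout: tileX grows down-right on screen, tileY grows down-left.
                double tx = (screenX / (tileWidth / 2.0) + screenY / (tileHeight / 2.0)) / 2.0;
                double ty = (screenY / (tileHeight / 2.0) - screenX / (tileWidth / 2.0)) / 2.0;
                return new int[] { (int) Math.floor(tx), (int) Math.floor(ty) };
            }
        }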

    Read the article

  • Can't detect collision properly using Rectangle.Intersects()

    - by Daniel Ribeiro
    I'm using a single sprite sheet image as the main texture for my breakout game. The image is this: My code is a little confusing, since I'm creating two elements from the same Texture using a Point to represent the element size and its position on the sheet, a Vector to represent its position on the viewport, and a Rectangle that represents the element itself.

        Texture2D sheet;

        Point paddleSize = new Point(112, 24);
        Point paddleSheetPosition = new Point(0, 240);
        Vector2 paddleViewportPosition;
        Rectangle paddleRectangle;

        Point ballSize = new Point(24, 24);
        Point ballSheetPosition = new Point(160, 240);
        Vector2 ballViewportPosition;
        Rectangle ballRectangle;

        Vector2 ballVelocity;

    My initialization is a little confusing as well, but it works as expected:

        paddleViewportPosition = new Vector2((GraphicsDevice.Viewport.Bounds.Width - paddleSize.X) / 2, GraphicsDevice.Viewport.Bounds.Height - (paddleSize.Y * 2));
        paddleRectangle = new Rectangle(paddleSheetPosition.X, paddleSheetPosition.Y, paddleSize.X, paddleSize.Y);

        Random random = new Random();
        ballViewportPosition = new Vector2(random.Next(GraphicsDevice.Viewport.Bounds.Width), random.Next(GraphicsDevice.Viewport.Bounds.Top, GraphicsDevice.Viewport.Bounds.Height / 2));
        ballRectangle = new Rectangle(ballSheetPosition.X, ballSheetPosition.Y, ballSize.X, ballSize.Y);

        ballVelocity = new Vector2(3f, 3f);

    The problem is I can't detect the collision properly using this code:

        if (ballRectangle.Intersects(paddleRectangle))
        {
            ballVelocity.Y = -ballVelocity.Y;
        }

    What am I doing wrong?
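    One thing worth separating out: an overlap test is only meaningful if both rectangles are expressed in the same space, which for gameplay collision is the viewport, not the sprite sheet. As an illustration, here is a plain-Java axis-aligned overlap check built from on-screen position and size; all field names are placeholders.

        // Axis-aligned overlap test using screen-space rectangles; illustrative only.
        final class Aabb {
            float x, y, width, height;          // position on the viewport, not on the sheet

            Aabb(float x, float y, float width, float height) {
                this.x = x; this.y = y; this.width = width; this.height = height;
            }

            boolean intersects(Aabb other) {
                return x < other.x + other.width
                    && other.x < x + width
                    && y < other.y + other.height
                    && other.y < y + height;
            }
        }

        // Usage sketch: rebuild (or move) each rectangle from its viewport position
        // every frame, e.g. new Aabb(ballViewportX, ballViewportY, 24, 24).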

    Read the article

  • How should I load level data in Java?

    - by Matthew G.
    I'm setting up my engine for a certain action/arcade game to have a set of commands that would look something like this:

        Set landscape to grass
        Create rocks at ...
        Create player at X, Y
        Set goal to "Get to point X Y"
        Spawn enemy at X, Y

    I'd then have each object knowing what it has to do, and acting on its own. I've been thinking about how to store this data. External data files could be parsed by a level class, and certain objects could be spawned through that. I could also create a base level class and extend it for each level, but that'd create a large number of classes. Another idea is to have one level parser class, but with a case for each level. This would be extremely silly and bulky, but I mention it because I found that I did this at 2 AM last night. I'm finally getting why I have to plan out my inheritances, though. RIP project. I might be completely missing another option.
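    A tiny sketch of the "external data file parsed by a level class" option, in plain Java. The simplified command format ("set landscape grass", "spawn enemy 4 7") and the World interface are hypothetical, meant only to show the shape of the loader.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.Reader;

        // Parses lines such as "spawn enemy 4 7" or "set landscape grass".
        final class LevelLoader {
            interface World {                       // stand-in for the game's own interfaces
                void setLandscape(String type);
                void spawn(String kind, int x, int y);
            }

            static void load(Reader source, World world) throws IOException {
                try (BufferedReader in = new BufferedReader(source)) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] parts = line.trim().toLowerCase().split("\\s+");
                        if (parts.length == 0 || parts[0].isEmpty()) continue;
                        switch (parts[0]) {
                            case "set":                       // "set landscape grass"
                                world.setLandscape(parts[2]);
                                break;
                            case "spawn":                     // "spawn enemy 4 7"
                                world.spawn(parts[1],
                                            Integer.parseInt(parts[2]),
                                            Integer.parseInt(parts[3]));
                                break;
                            default:
                                // unknown command: ignore or log
                                break;
                        }
                    }
                }
            }
        }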

    Read the article

  • Why can't I compare two Texture2D's?

    - by Fiona
    I am trying to use an accessor, as it seems to me that that is the only way to accomplish what I want to do. Here is my code (Game1.cs):

        public class GroundTexture
        {
            private Texture2D dirt;

            public Texture2D Dirt
            {
                get { return dirt; }
                set { dirt = value; }
            }
        }

        public class Main : Game
        {
            public static Texture2D texture = tile.Texture;
            GroundTexture groundTexture = new GroundTexture();
            public static Texture2D dirt;

            protected override void LoadContent()
            {
                Tile tile = (Tile)currentLevel.GetTile(20, 20);
                dirt = Content.Load<Texture2D>("Dirt");
                groundTexture.Dirt = dirt;
                Texture2D texture = tile.Texture;
            }

            protected override void Update(GameTime gameTime)
            {
                if (texture == groundTexture.Dirt)
                {
                    player.TileCollision(groundBounds);
                }
                base.Update(gameTime);
            }
        }

    I removed irrelevant information from the LoadContent and Update functions. On the following line:

        if (texture == groundTexture.Dirt)

    I am getting the error

        Operator '==' cannot be applied to operands of type 'Microsoft.Xna.Framework.Graphics.Texture2D' and 'Game1.GroundTexture'

    Am I using the accessor correctly? And why do I get this error? Dirt is a Texture2D, so they should be comparable. This uses a few functions from a program called Realm Factory, which is a tile editor. The numbers "20, 20" are just a sample of the level I made below; tile.Texture returns the sprite, which here is the content item Dirt.png. Thank you very much! (I posted this on the main Stack Overflow site, but after several days didn't get a response. Since it has to do mainly with Texture2D, I figured I'd ask here.)

    Read the article

  • Calculating a child object's Position, Rotation and Scale values?

    - by Sergio Plascencia
    I am making my own game editor, but have encountered the following problem. I have two objects, A and B.

        A's initial values: Position: (3, 3, 3), Rotation: (45, 10, 0), Scale: (1, 2, 2.5)
        B's initial values: Position: (1, 1, 1), Rotation: (10, 34, 18), Scale: (1.5, 2, 1)

    If I now make B a child of A, I need to re-calculate B's Position, Rotation and Scale relative to A such that it maintains its current position, rotation and scale in world coordinates. So B's position would now be (-2, -2, -2), since A is now its center and (-2, -2, -2) will keep B in the same position. I think I have the Position and Scale figured out, but not the Rotation. So I opened Unity and ran the same example, and I noticed that when making a child object, the child object did not move at all, but had its Position, Rotation and Scale values changed relative to the parent. For example:

        Unity (Parent Object "A"): Position: (0, 0, 0), Rotation: (45, 10, 0), Scale: (1, 1, 1)
        Unity (Child Object "B"):  Position: (0, 0, 0), Rotation: (0, 0, 0),  Scale: (1, 1, 1)

    When B becomes a child of A, its rotation values become:

        X: -44.13605  Y: -14.00195  Z: 9.851074

    If I plug the same rotation values into the B object in my editor, the object does not move at all. How did Unity arrive at those rotation values for the child? What are the calculations? If you can put down all the equations for the Position, Rotation and Scale, I can double-check that I am doing it correctly, but the Rotation is what I really need.
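    The relationship involved can be written down directly (a sketch, assuming transforms compose as world = parent · local with standard translation-rotation-scale matrices):

        $$M^{\text{world}}_{\text{child}} = M^{\text{world}}_{\text{parent}}\,M^{\text{local}}_{\text{child}}
        \quad\Longrightarrow\quad
        M^{\text{local}}_{\text{child}} = \bigl(M^{\text{world}}_{\text{parent}}\bigr)^{-1} M^{\text{world}}_{\text{child}}$$

    The child's local position, rotation and scale then come from decomposing that local matrix: translation from its translation column, scale from the lengths of its basis vectors, and rotation from those basis vectors after normalization. In particular, the local rotation cannot in general be obtained by simply subtracting Euler angles, which is consistent with the values reported above.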

    Read the article

  • How can I design good continuous (seamless) tiles?

    - by Mikalichov
    I have trouble designing tiles so that when assembled, they don't look like tiles but instead look like one homogeneous surface. For example, see the image below: even though the main part of the grass is only one tile, you don't "see" the grid; you know where it is if you look a bit carefully, but it is not obvious. Whereas when I design tiles, you can only see "oh, jeez, 64 times the same tile," like in this image: (I took this from another GDSE question, sorry; I'm not being critical of the game, but it proves my point. And it actually has better tile design than what I manage, anyway.) I think the main problem is that I design them so they are independent: there is no junction between two tiles when they are placed next to each other. I think having the tiles be more "continuous" would have a smoother effect, but I can't manage to do it; it seems overly complex to me. It is probably simpler than I think once you know how to do it, but I couldn't find a tutorial on that specific point. Is there a known method for designing continuous / homogeneous tiles? (My terminology might be totally wrong; don't hesitate to correct me.)

    Read the article

  • Central renderer for a given scene

    - by Loggie
    When creating a central rendering system for all game objects in a given scene, I am trying to work out the best way to pass the scene to the render system to be rendered. If I have a scene managed by an arbitrary structure, i.e., an octree, BSP tree, quadtree, kd-tree, etc., what is the best way to pass this to the render system? The obvious problem is that if simply given the root node of the structure, the render system would require intrinsic knowledge of the structure in order to traverse it.

    My solution to this is to clip all objects outside the frustum in the scene manager, then create a list of the objects which are left and pass this simple list to the render system, be it an array, a vector, a linked list, etc. (this would be a structure required by the render system as a means to know which objects should be rendered). The list would of course attempt to minimise OpenGL state changes by grouping objects that require the same rendering operations to be performed on them.

    I have been thinking a lot about this, and have searched various terms on here and followed any additional information/links, but I have not really found a definitive answer. It may be that there is no definitive answer, but I would appreciate some advice and tips. My question is: is this a reasonable solution to the problem? Are there any improvements that I could make? Are there any caveats I should know about?

    Side question: Am I right in assuming that octrees, BSP trees, etc. are all forms of BVH?

    Read the article

  • Bitmap Font Displays in Center Always Without Coding it Manually (Fix Coordinate Problem on Text)

    - by David Dimalanta
    Is there a way to keep the text centered without coding it manually, especially when making an update? I'm making a display for the highest score. Let's say the score is 9. However, if the score is 9,999,999, the text still displays only at the fixed X and Y coordinates. Is there really a way to keep the text centered, especially when a player beats the world record? Here's my code inside the SpriteBatch:

        font.setScale(1.5f);
        font.draw(batch, "HIGHEST SCORE:", (900/10)*1 + 60, (1280/16)*10);
        font.draw(batch, "" + 9999999 + "", (900/10)*4, (1280/16)*8);
        batch.draw(grid_guide, 0, 0, 900, 1280); // --> For testing purposes only.
        // Where 9999999 is a new record score, for example.

    Here's the image shown as an example. I added a red grid so that I could check whether the score, when updated, will always display in the center no matter how many digits it has. However, it is fixed, so I have to figure out how to display it centered automatically, regardless of the number of digits, while updating for the new high score. I have used the LibGDX Preferences well enough, though, to save and load records for the high score.
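    The usual recipe is to measure the rendered width of the current string and derive the x coordinate from it each time the score changes, rather than hard-coding the position. A minimal plain-Java sketch of that calculation follows; how the width is measured depends on the font API in use (in the LibGDX generation this question targets it is usually something like font.getBounds(text).width, but treat that exact call as an assumption).

        // Minimal sketch: horizontally centre a score string in a fixed-width viewport.
        public final class HudLayout {
            private HudLayout() {}

            /** Returns the x coordinate at which to draw text so it is centred. */
            public static float centeredX(float viewportWidth, float textWidth) {
                return (viewportWidth - textWidth) / 2f;
            }

            public static void main(String[] args) {
                float viewportWidth = 900f;          // same virtual width as in the question
                float shortScore = 20f;              // measured width of "9" (example value)
                float longScore = 180f;              // measured width of "9,999,999" (example value)
                System.out.println(centeredX(viewportWidth, shortScore));  // 440.0
                System.out.println(centeredX(viewportWidth, longScore));   // 360.0
            }
        }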

    Read the article

  • How can I ensure my Collada model fits on an iPhone screen?

    - by rakeshNS
    Hi, I am new to game development. I have seen many examples and tried things myself, like displaying a triangle, a cube, etc. Now I am looking to render a Collada object, so I created one using Google SketchUp and am trying to render it now. But the thing I am not understanding is that in all the examples the vertices are between -1.0 and +1.0, while when I looked into the Collada file, the vertices ranged from -30.0 to 90.0. I know any vertices greater than 1.0 will not display on the iPhone. So can you please tell me the secret behind converting object coordinates to normalized coordinates? My previous triangle was defined as:

        struct Vertex {
            float Position[3];
            float Color[4];
        };

        const Vertex Vertices[] = {
            {{-0.5, -0.866}, {1, 1, 0.5f, 1}},
            {{0.5, -0.866}, {1, 1, 0.5, 1}},
            {{0, 1}, {1, 1, 0.5, 1}},
            {{-0.5, -0.866}, {0.5f, 0.5f, 0.5f}},
            {{0.5, -0.866}, {0.5f, 0.5f, 0.5f}},
            {{0, -0.4f}, {0.5f, 0.5f, 0.5f}},
        };

    And now the triangle from the Collada file is:

        const Vertex Vertices[] = {
            {{39.4202092, 90.1263924, 0.0000000}, {1, 1, 0.5f, 1}},
            {{-20.2205588, 90.1263924, 0.0000000}, {1, 1, 0.5, 1}},
            {{-20.2205588, 176.3763924, 0.0000000}, {1, 1, 0.5, 1}},
            {{-20.2205588, 176.3763924, 0.0000000}, {1, 1, 0.5, 1}},
            {{-20.2205588, 90.1263924, 0.0000000}, {1, 1, 0.5, 1}},
            {{39.4202092, 90.1263924, 0.0000000}, {1, 1, 0.5, 1}},
        };
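    There is no secret constant involved: one usual step is to compute the model's bounding box and then translate and scale the positions so the model fits a chosen range, or, equivalently, leave the vertices alone and fold the same scale into the modelview/projection transforms. A plain-Java sketch of the offline normalization, assuming positions arrive as a flat x,y,z array:

        // Rescales raw positions so the model fits inside [-1, 1] on every axis,
        // preserving proportions (uniform scale about the bounding-box centre).
        final class ModelNormalizer {
            static void normalize(float[] xyz) {       // layout: x0,y0,z0, x1,y1,z1, ...
                float[] min = { Float.MAX_VALUE, Float.MAX_VALUE, Float.MAX_VALUE };
                float[] max = { -Float.MAX_VALUE, -Float.MAX_VALUE, -Float.MAX_VALUE };
                for (int i = 0; i < xyz.length; i++) {
                    int axis = i % 3;
                    min[axis] = Math.min(min[axis], xyz[i]);
                    max[axis] = Math.max(max[axis], xyz[i]);
                }
                float extent = 0;
                float[] centre = new float[3];
                for (int axis = 0; axis < 3; axis++) {
                    centre[axis] = (min[axis] + max[axis]) / 2f;
                    extent = Math.max(extent, max[axis] - min[axis]);
                }
                float scale = extent > 0 ? 2f / extent : 1f;   // largest side maps to length 2
                for (int i = 0; i < xyz.length; i++) {
                    xyz[i] = (xyz[i] - centre[i % 3]) * scale;
                }
            }
        }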

    Read the article

  • andEngine dynamic sprites

    - by Blucreation
    I've just started with andEngine this past week, and I only started learning Java/Android three weeks ago. I can use a for loop to add multiple sprites to the screen, but when I try to check collisions on them, it only does it for one and not the rest. I want to be able to add a specific number of sprites made from the same texture to the scene, add collision detection to them, and also make them slide across the screen (I'm making a game where you avoid the obstacles). My simple code:

        private void createobstacle(float pX, float pY) {
            obstacle = new AnimatedSprite(pX, pY, this.mObjTextureRegion.deepCopy(), getVertexBufferObjectManager());
            obstacle.setScale(MathUtils.random(0.5f, 3f));
            scene.attachChild(obstacle);
        }

        private void createobstacle(int num) {
            for (int i = 0; i <= num; i++) {
                final float xPos = MathUtils.random(30.0f, (CAMERA_WIDTH - 30.0f));
                final float yPos = MathUtils.random(30.0f, (CAMERA_HEIGHT - 30.0f));
                createobstacle(xPos, yPos);
            }
        }

    I've read about arrays, but I cannot find any tutorials about what I'm stuck with. Any help would be great!
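    The "arrays" the tutorials mention usually amount to keeping every created sprite in a collection and looping over it each frame, instead of reusing a single obstacle field. A plain-Java sketch follows, with the andEngine-specific parts reduced to a placeholder Obstacle type; when an obstacle is dropped from the list you would also detach its sprite from the scene, whatever that call is in the engine version in use.

        import java.util.ArrayList;
        import java.util.Iterator;
        import java.util.List;

        // Illustrative only: Obstacle stands in for the AnimatedSprite created above.
        final class ObstacleManager {
            static final class Obstacle {
                float x, y, halfSize;
                boolean collides(float px, float py) {          // crude square check
                    return Math.abs(px - x) < halfSize && Math.abs(py - y) < halfSize;
                }
            }

            private final List<Obstacle> obstacles = new ArrayList<>();

            void add(Obstacle o) { obstacles.add(o); }          // called from createobstacle(...)

            /** Per-frame pass: slide everything left and test against the player. */
            boolean update(float dx, float playerX, float playerY) {
                boolean hit = false;
                for (Iterator<Obstacle> it = obstacles.iterator(); it.hasNext(); ) {
                    Obstacle o = it.next();
                    o.x -= dx;                                   // slide across the screen
                    if (o.x < -o.halfSize) { it.remove(); continue; }  // off-screen: drop it
                    if (o.collides(playerX, playerY)) hit = true;
                }
                return hit;
            }
        }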

    Read the article

  • How to make a continuous machine-gun sound effect

    - by Jan
    I am trying to make an entity fire one or more machine guns. For each gun I store the time between shots (1.0 / firing rate) and the time since the last shot. I've also loaded ~10 different gun-shot sound effects. Now, for each gun I do the following:

        function update(deltatime):
            timeSinceLastShot += deltatime
            if timeSinceLastShot >= timeBetweenShots + verySmallRandomValue():
                timeSinceLastShot -= timeBetweenShots
                if gunIsFiring:
                    displayMuzzleFlash()
                    spawnBullet()
                    selectRandomSound().play()

    But now I often get a crackling noise (which I assume is when two or more guns are firing at the same time and confuse the sound device). My question is whether A) this is a common problem and there is a well-known solution, maybe to do with the channels or something, or B) I am using a completely wrong approach to the task. I had a look at some sound assets for other games and they used complete bursts with multiple shots. I suppose I could try that, but I would like to have organic little hiccups in the gunfire (that's what the random value is for) to make the game more gritty and dirty. I am using Panda3D, but I had the exact same problem in PyGame and SDL. [edit] Thanks a lot for the answers so far! One more problem with faking it, though: now how do I stop the sound? Let's say I have an effect with 5 bangs... *bang* *bang* *bang* *bang* *bang* ...and I magically manage to loop it so that there's no gap or overlap if the player fires more than 5 shots. Now, what do I do if the player stops firing halfway through the third bang? How do I know how long to keep playing the sample so that the third bang is completed and I can start playing the rumbling echo of the last shot? Of course I can look up the shot/pause timing of that sound sample and code accordingly, but it feels extremely hacky.

    Read the article
