Search Results

Search found 33291 results on 1332 pages for 'development environment'.


  • Two graphical entities, smooth blending between them (e.g. asphalt and grass)

    - by Gabriel Conrad
    Suppose a scene contains, among other things, a tarmac strip and a meadow. The tarmac has an asphalt texture, and its model is a long triangle strip that might bifurcate at some point into other, smaller strips; the meadow is covered with grass. What can be done to make the two graphical entities look less like they were cut out of a photo and pasted one on top of the other at the edges? To better understand the problem, picture a strip of asphalt crossing a plane covered with grass. The grass texture should also "enter" the tarmac strip a little at the edges (i.e. a feathering effect). My ideas involve two approaches: put two textures on the tarmac entity, but that imposes a serious restriction on how the strip is modeled and how its texture coordinates are mapped; or try to apply a post-processing filter that mimics a bloom effect where "grass" is used instead of light, which could fail badly to produce correct results. So, is there a better, or at least a more obvious, way that's widely used in the game dev industry?
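
    For context, this kind of transition is usually handled with texture splatting: a blend weight, stored per vertex or in a small blend map, falls off near the road edge, and the fragment shader lerps between asphalt and grass. A minimal CPU-side sketch of just the falloff math, with every name hypothetical:

        // Distance-based edge feathering, assuming you can query the distance
        // from a texel to the nearest road edge. A sketch, not a full system.
        #include <algorithm>

        struct Color { float r, g, b; };

        Color lerp(const Color& a, const Color& b, float t) {
            return { a.r + (b.r - a.r) * t,
                     a.g + (b.g - a.g) * t,
                     a.b + (b.b - a.b) * t };
        }

        // distanceToEdge: world-space distance, positive inside the tarmac.
        // featherWidth: how far the grass is allowed to "enter" the strip.
        Color blendRoadGrass(const Color& asphalt, const Color& grass,
                             float distanceToEdge, float featherWidth) {
            float t = std::clamp(distanceToEdge / featherWidth, 0.0f, 1.0f);
            t = t * t * (3.0f - 2.0f * t);   // smoothstep for a softer falloff
            return lerp(grass, asphalt, t);
        }

    In a real renderer the same math runs in the fragment shader, with the weight delivered as a vertex color or sampled from a low-resolution blend map, which avoids the texture-coordinate restrictions mentioned above.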

  • Are there any open source projects for car engine sound simulation?

    - by Petteri Hietavirta
    I have been thinking about how to create realistic sound for a car. The main sound is the engine, then all kinds of wind, road and suspension sounds. Are there any open source projects for engine sound simulation? Simply pitching up a sample does not sound too great. The ideal would be something that allows me to pick the type of engine (e.g. inline-4 vs V8), add extras like turbo/supercharger whine, and finally set the load and RPM. Edit: Something like http://www.sonory.org/examples.html
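
    For a sense of why plain pitch-shifting falls flat, most procedural approaches start from the firing frequency instead: in a four-stroke engine each cylinder fires once every two revolutions, so the fundamental already depends on both RPM and cylinder count. A toy additive sketch, not a real engine model, with all parameters invented:

        // Sums a few harmonics of the firing frequency into a sample buffer.
        #include <cmath>
        #include <cstddef>
        #include <vector>

        std::vector<float> engineTone(float rpm, int cylinders, float seconds,
                                      float sampleRate = 44100.0f) {
            const float firingHz = (rpm / 60.0f) * (cylinders / 2.0f); // four-stroke
            const float twoPi = 6.2831853f;
            std::vector<float> out(static_cast<std::size_t>(seconds * sampleRate));
            for (std::size_t i = 0; i < out.size(); ++i) {
                const float t = i / sampleRate;
                float s = 0.0f;
                for (int h = 1; h <= 4; ++h)          // first few harmonics
                    s += std::sin(twoPi * firingHz * h * t) / h;
                out[i] = 0.25f * s;
            }
            return out;
        }

    A believable engine needs much more on top (per-cylinder pulses, intake/exhaust resonance, load-dependent timbre), but the firing-frequency base is the usual starting point.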

  • Strange behavior of RigidBody with gravity and impulse applied

    - by Heisenbug
    I'm doing some experiments trying to figure out how physics works in Unity. I created a cube mesh with a BoxCollider and a RigidBody. The cube is lying on a mesh plane with a BoxCollider. I'm trying to update the object's position by applying a force to its RigidBody. Inside the script's FixedUpdate function I'm doing the following:

        public void FixedUpdate() {
            if (leftButtonPressed())
                this.rigidbody.AddForce(
                    this.transform.forward * this.forceStrength,
                    ForceMode.Impulse);
        }

    Although the object is aligned with the world axes and the force is applied along the Z axis, it performs a fairly large rotation around its Y axis. Since I didn't modify the center of mass or the BoxCollider's position and dimensions, all values should be fine. If I remove gravity and let the object fly without touching the plane, the problem doesn't show up, so I suppose it's related to the friction between the objects, but I can't understand exactly what the problem is. Why does this happen? What's my mistake? How can I fix it, and what's the right way to move an object on a plane with a force impulse?

  • Many sources of movement in an entity system

    - by Sticky
    I'm fairly new to the idea of entity systems, having read a bunch of stuff (most usefully, this great blog and this answer). But I'm having a little trouble understanding how something as simple as the position of an object can be manipulated by an undefined number of sources. That is, I have my entity, which has a position component. I then have some event in the game which tells this entity to move a given distance, in a given time. These events can happen at any time, and will have different values for position and time. The result is that they'd be compounded together. In a traditional OO solution, I'd have some sort of MoveBy class that contains the distance/time, and an array of those inside my game object class. Each frame, I'd iterate through all the MoveBys and apply each to the position; if a MoveBy has reached its finish time, I'd remove it from the array. With the entity system, I'm a little confused as to how I should replicate this sort of behavior. If there were just one of these at a time, instead of being able to compound them together, it'd be fairly straightforward (I believe) and look something like this:

        PositionComponent containing x, y
        MoveByComponent containing x, y, time
        Entity which has both a PositionComponent and a MoveByComponent
        MoveBySystem that looks for an entity with both these components and adds
        the value of the MoveByComponent to the PositionComponent, removing the
        component from the entity when its time is reached.

    I'm a bit confused as to how I'd do the same thing with many MoveBys. My initial thoughts are that I would have the same PositionComponent and MoveByComponent as above, plus a MoveByCollectionComponent containing an array of MoveByComponents, and a MoveByCollectionSystem that looks for an entity with a PositionComponent and a MoveByCollectionComponent, iterating through the MoveByComponents inside it, applying/removing as necessary. I guess this is a more general problem of having many components of the same type and wanting a corresponding system to act on each one. My entities store their components in a hash of component type to component, so strictly they have only one component of a particular type per entity. Is this the right way to be looking at this? Should an entity only ever have one component of a given type at all times?
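
    For what it's worth, the collection-component route stays small in practice. A minimal sketch, assuming a plain struct-based entity system (all names invented here):

        #include <algorithm>
        #include <vector>

        struct Position { float x = 0, y = 0; };

        struct MoveBy {
            float dx, dy;       // total displacement
            float duration;     // seconds
            float elapsed = 0;  // seconds applied so far
        };

        struct MoveByCollection {
            std::vector<MoveBy> moves;
        };

        // Called each frame for every entity holding both components.
        void moveBySystem(Position& pos, MoveByCollection& coll, float dt) {
            for (auto it = coll.moves.begin(); it != coll.moves.end();) {
                const float step = std::min(dt, it->duration - it->elapsed);
                pos.x += it->dx * (step / it->duration);  // proportional share
                pos.y += it->dy * (step / it->duration);
                it->elapsed += step;
                it = (it->elapsed >= it->duration) ? coll.moves.erase(it) : ++it;
            }
        }

    Nothing about the one-component-of-each-type rule is violated: there is still exactly one MoveByCollection per entity, and the list inside it is ordinary data.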

  • Wavefront mesh: determine which face a point belongs to?

    - by Mina Samy
    I have a 3D mesh in a Wavefront .obj file. Is there any algorithm that takes arbitrary point coordinates as input and determines which face of the mesh that point belongs to? The mesh is rendered on the screen, then the user clicks on it, and I want to determine which part of the mesh the user clicked on. Here's the code using LibGDX:

        Vector3 intersection = new Vector3();
        Ray ray = camera.getPickRay(x, y);
        // vertices is an array that holds the coordinates of the mesh
        boolean ok = Intersector.intersectRayTriangles(ray, vertices, intersection);

    Thanks
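
    The usual answer is to run the pick ray against every triangle and keep the nearest hit; the index of the winning triangle is the face that was clicked. LibGDX's Intersector does this internally but only reports the intersection point, so if the face index itself is needed, a self-contained ray/triangle test (Moller-Trumbore) is easy to carry around; a C++ sketch:

        #include <array>
        #include <cmath>

        struct Vec3 { float x, y, z; };

        static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static Vec3 cross(Vec3 a, Vec3 b) {
            return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
        }
        static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // Returns true and the ray parameter t if (orig, dir) hits the triangle.
        bool rayTriangle(Vec3 orig, Vec3 dir, const std::array<Vec3, 3>& tri, float& t) {
            const float eps = 1e-7f;
            const Vec3 e1 = sub(tri[1], tri[0]), e2 = sub(tri[2], tri[0]);
            const Vec3 p = cross(dir, e2);
            const float det = dot(e1, p);
            if (std::fabs(det) < eps) return false;        // parallel to the plane
            const float inv = 1.0f / det;
            const Vec3 s = sub(orig, tri[0]);
            const float u = dot(s, p) * inv;
            if (u < 0.0f || u > 1.0f) return false;
            const Vec3 q = cross(s, e1);
            const float v = dot(dir, q) * inv;
            if (v < 0.0f || u + v > 1.0f) return false;
            t = dot(e2, q) * inv;
            return t > eps;                                // hit in front of origin
        }

    Loop over the faces, remember the smallest t, and you get both the face index and the hit point (orig + dir * t).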

  • Sending Graphics to the C drive [on hold]

    - by CodeOfGenius
    I'm trying to create image files on the user's desktop. Let's say I have a picture of an orange in my Eclipse workspace, in the resource folder. When somebody downloads the project, I want to take that image of an orange and put it in a folder called fruit on their desktop. Whenever I export my game it can't read the images anymore, which is weird, so I'd prefer to try this method. Just like Minecraft keeps its stuff in %AppData%, I want to put a folder with the images the game uses on the desktop. There aren't any errors; I'm just asking how to do this.

  • libgdx draw issue and animation

    - by johnny-b
    It seems as though I cannot get the draw method to work. bullet.draw(batcher) does nothing, and I cannot understand why, as the bullet is a Sprite. I have made a Sprite[] and added them as an animation; could that be it? I tried

        batcher.draw(AssetLoader.bulletAnimation.getKeyFrame(runTime),
                bullet.getX(), bullet.getY(),
                bullet.getOriginX() / 2, bullet.getOriginY() / 2,
                bullet.getWidth(), bullet.getHeight(),
                1, 1, bullet.getRotation());

    but that doesn't work; the only way it draws is this:

        batcher.draw(AssetLoader.bulletAnimation.getKeyFrame(runTime),
                bullet.getX(), bullet.getY());

    Below is the code.

        // this is in an Asset class
        texture = new Texture(Gdx.files.internal("SpriteN1.png"));
        texture.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);

        bullet1 = new Sprite(texture, 380, 350, 45, 20);
        bullet1.flip(false, true);
        bullet2 = new Sprite(texture, 425, 350, 45, 20);
        bullet2.flip(false, true);

        Sprite[] bullets = { bullet1, bullet2 };
        bulletAnimation = new Animation(0.06f, bullets);
        bulletAnimation.setPlayMode(Animation.PlayMode.LOOP);

        // this is the GameRenderer class
        public class GameRenderer {
            private Bullet bullet;
            private Ball ball;

            public GameRenderer(GameWorld world) {
                myWorld = world;
                cam = new OrthographicCamera();
                cam.setToOrtho(true, 480, 320);
                batcher = new SpriteBatch();
                // Attach batcher to camera
                batcher.setProjectionMatrix(cam.combined);
                shapeRenderer = new ShapeRenderer();
                shapeRenderer.setProjectionMatrix(cam.combined);
                // Call helper methods to initialize instance variables
                initGameObjects();
                initAssets();
            }

            private void initGameObjects() {
                ball = GameWorld.getBall();
                bullet = myWorld.getBullet();
                scroller = myWorld.getScroller();
            }

            private void initAssets() {
                ballAnimation = AssetLoader.ballAnimation;
                bulletAnimation = AssetLoader.bulletAnimation;
            }

            public void render(float runTime) {
                Gdx.gl.glClearColor(0, 0, 0, 1);
                Gdx.gl.glClear(GL30.GL_COLOR_BUFFER_BIT);

                batcher.begin();
                // Disable transparency: good for performance when drawing
                // images that do not require it.
                batcher.disableBlending();
                // The ball needs transparency, so we enable that again.
                batcher.enableBlending();
                batcher.draw(AssetLoader.ballAnimation.getKeyFrame(runTime),
                        ball.getX(), ball.getY(), ball.getWidth(), ball.getHeight());
                batcher.draw(AssetLoader.bulletAnimation.getKeyFrame(runTime),
                        bullet.getX(), bullet.getY());
                // End SpriteBatch
                batcher.end();
            }
        }

        // this is the GameWorld class
        public class GameWorld {
            public static Ball ball;
            private Bullet bullet;
            private ScrollHandler scroller;

            public GameWorld() {
                ball = new Ball(480, 273, 32, 32);
                bullet = new Bullet(10, 10);
                scroller = new ScrollHandler(0);
            }

            public void update(float delta) {
                ball.update(delta);
                bullet.update(delta);
                scroller.update(delta);
            }

            public static Ball getBall() { return ball; }
            public ScrollHandler getScroller() { return scroller; }
            public Bullet getBullet() { return bullet; }
        }

    Is there any way to make the sprite work?

  • Point Light Soft Shadows

    - by notabene
    How do you implement soft shadows for an omnidirectional (point) light? We use the typical shadow mapping technique: depth is rendered to a cube texture, and addressing is then pretty simple, just using the vector from the light to the fragment's world position. It works perfectly, until you want soft shadows. In our engine we use the PCSS technique for spot lights, but for point lights the trouble begins: how do you sample in 3D? I developed a technique where an orthonormal basis is created from the light direction and an up vector (0,1,0), and then the sampling vector (something like (1.0, i/depthMapSize, j/depthMapSize)) is multiplied with this basis. But this (of course :)) looks pretty bad for directions near (0,1,0) and (0,-1,0). I will appreciate any help on this.
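
    One standard fix for the degenerate directions is to choose the helper axis per sample direction instead of hard-coding (0,1,0). A minimal sketch:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static Vec3 cross(Vec3 a, Vec3 b) {
            return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
        }
        static Vec3 normalize(Vec3 v) {
            const float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
            return {v.x / len, v.y / len, v.z / len};
        }

        // Builds a tangent frame around dir that never degenerates: the helper
        // axis switches away from Y when dir gets close to (0,+-1,0).
        void buildBasis(Vec3 dir, Vec3& tangent, Vec3& bitangent) {
            const Vec3 up = (std::fabs(dir.y) < 0.99f) ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
            tangent = normalize(cross(up, dir));
            bitangent = cross(dir, tangent);
        }

    The PCSS taps then become dir + tangent * (i / depthMapSize) + bitangent * (j / depthMapSize), with no special cases at the poles.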

  • Understanding IDAT chunk of PNG file format

    - by DRapp
    From the sample image below, I have a border in yellow for display purposes only. The actual .png file is a simple black/white image, 3 pixels by 3 pixels. I was originally thinking to try a 2x2, but that would not help in interpreting a low/high vs. high/low drawing stream; at least this way I have two black, one white from the top, or one white, two black from the bottom. So I read the chunks of data, get to the IDAT chunk, decode it (zlib), and come up with the following bytes:

        00 20 00 40 00 80

    So, my question: how does the above get broken down into the 3x3 black and white sample? Also, it is saved in palette format and properly recognizes the bit depth of 1 and a color palette of 2: palette[0] is RGBA all zeros; palette[1] has RGBA of 255, 255, 255, 0. I'll eventually get into the other depth formats later; I just wanted to start with what I expect to be the easiest. Part II: any guidance on handling the other depth formats would help, if there is anything special to be considered, especially regarding the alpha channel (which I am already looking for in the palette) that might trip me up.
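
    For what it's worth, the breakdown follows directly from the PNG spec: at bit depth 1, each scanline is one filter byte (here 00, the None filter) followed by ceil(3 / 8) = 1 byte of pixel bits, packed most-significant-bit first. A sketch of the unpacking:

        #include <cstdint>
        #include <cstdio>

        int main() {
            const std::uint8_t idat[] = {0x00, 0x20, 0x00, 0x40, 0x00, 0x80};
            const int width = 3, height = 3;
            for (int y = 0; y < height; ++y) {
                const std::uint8_t filter = idat[y * 2];  // 0 = "None" filter
                const std::uint8_t bits = idat[y * 2 + 1];
                (void)filter;                             // nothing to undo for None
                for (int x = 0; x < width; ++x)
                    std::printf("%d", (bits >> (7 - x)) & 1);  // MSB-first packing
                std::printf("\n");
            }
        }

    That prints 001 / 010 / 100, i.e. with the palette above (index 0 black, index 1 white) a white anti-diagonal on black. The same filter-byte-per-scanline rule applies at the other bit depths; only the bits per pixel, and therefore the bytes per scanline, change.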

  • Game Input mouse filtering

    - by aaron
    I'm having a problem with filtering mouse input: the method I'm using right now moves the cursor back to the center of the screen each frame, but I can't do this because it messes with other things. Does anyone know how to implement this with delta mouse movement? Here is the relevant code:

        void update() {
            static float oldX = 0;
            static float oldY = 0;
            static float walkSpeed = .05f;
            static float sensitivity = 0.002f; // mouse sensitivity
            static float smooth = 0.7f;        // mouse smoothing (0.0 - 0.99)

            float w = ScreenResolution.x / 2.0f;
            float h = ScreenResolution.y / 2.0f;
            Vec2f scrc(w, h);
            Vec2f mpos(getMouseX(), getMouseY());

            float x = scrc.x - mpos.x;
            float y = scrc.y - mpos.y;

            oldX = (oldX * smooth + x * (1.0 - smooth));
            oldY = (oldY * smooth + y * (1.0 - smooth));
            x = oldX * sensitivity;
            y = oldY * sensitivity;

            camera->rotate(Vec3f(y, 0, 0));
            transform->setRotation(transform->getRotation()
                * Quaternionf::fromAxisAngle(0.0f, 1.0f, 0.0f, -x));

            setMousePosition((int)scrc.x, (int)scrc.y); // THIS IS THE PROBLEM LINE, HOW CAN I AVOID THIS
            ...
        }
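
    How to get the deltas depends on the windowing layer, which the post doesn't name. If it happens to be SDL2 (an assumption), relative mouse mode delivers per-event deltas directly, so no re-centering is needed:

        #include <SDL.h>

        void initMouse() {
            SDL_SetRelativeMouseMode(SDL_TRUE);  // hide cursor, report raw deltas
        }

        void pollMouse(float& yaw, float& pitch, float sensitivity) {
            SDL_Event e;
            while (SDL_PollEvent(&e)) {
                if (e.type == SDL_MOUSEMOTION) {
                    yaw   += e.motion.xrel * sensitivity;  // xrel/yrel are deltas,
                    pitch += e.motion.yrel * sensitivity;  // not absolute positions
                }
            }
        }

    Win32 raw input (WM_INPUT) and GLFW's disabled-cursor mode offer the same thing, and the smoothing from the snippet above can then be applied to the accumulated deltas unchanged.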

  • Writing Game Engine from scratch with OpenGL [on hold]

    - by Wazery
    I want to start writing a game engine from scratch for learning purposes. What are the prerequisites, and how do I go about it? What programming languages and tools do you recommend? Also, if you have good articles and books on the topic, that would be great. Thanks in advance!

    My programming languages and tools are:

        C/C++ (is it good to use only C?)
        Python
        OpenGL
        Git
        GDB

    What I want to learn from it:

        Core game engine
        Rendering / graphics
        Gameplay / rules
        Input (keyboard/mouse/controllers, etc.)

    In rendering/graphics:

        3D
        Shading
        Lighting
        Texturing

  • Avoiding lag when rendering Texture2D for first time

    - by Emir Lima
    I have found a similar question here, but it is about playing sounds. I am using 2048 x 2048 textures for sprite sheets, and every time I call spriteBatch.Draw using a sheet for the first time in the game's execution, it causes considerable lag. The lag doesn't appear the next times. Has anyone faced this problem before? What can I do to overcome it? Update: I inserted code at the end of the content-load routine that draws EVERY Texture2D loaded into the ContentManager before proceeding to the game screen. This works well: no lag occurs when different textures are rendered over time, EXCEPT if IsFullScreen is changed. Apparently, changing this property makes the textures loaded on the GPU go away. Is that correct?

  • What is the difference between Constant Vertex Attributes and Uniforms?

    - by Samaursa
    According to the OpenGL ES 2.0 Programming Guide: "A constant vertex attribute is the same for all vertices of a primitive, and therefore only one value needs to be specified for all the vertices of a primitive." For uniforms, the book states: "...any parameter to a shader that is constant across either all vertices or fragments (but that is not known at compile time) should be passed in as a uniform." I've always used uniforms for data that is constant for a primitive, but now it appears that attributes can also be used in the same way. Is there more to constant vertex attributes than simply "they are the same as uniforms"?
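
    One practical difference worth keeping in mind: a constant vertex attribute is per-context state that feeds a vertex shader input (and only applies while no array is enabled for that location), whereas a uniform is per-program state that is also visible to the fragment shader. In the ES 2.0 C API, the two mechanisms look like this (handles assumed valid):

        #include <GLES2/gl2.h>

        void setPerDrawColor(GLuint program, GLint attribLoc, GLint uniformLoc) {
            // Constant vertex attribute: same value for every vertex of the draw.
            glDisableVertexAttribArray(attribLoc);
            glVertexAttrib4f(attribLoc, 1.0f, 0.0f, 0.0f, 1.0f);

            // Uniform: also one value per draw, but owned by the program object
            // and usable from both the vertex and the fragment shader.
            glUseProgram(program);
            glUniform4f(uniformLoc, 1.0f, 0.0f, 0.0f, 1.0f);
        }

    A common rule of thumb follows from that: if the fragment shader needs the value, or the value logically belongs to the program, use a uniform; constant attributes mainly let the same vertex shader be fed either per-vertex arrays or a fixed value.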

  • Confusion about Rotation matrices from Euler Angles

    - by xEnOn
    I am trying to learn more about Euler angles so as to help myself understand how I can control my camera better in the game. I came across the following formula that converts Euler angles to rotation matrices. In the equation, I can see that the first matrix from the left is the rotation matrix about the x-axis, the second is about the y-axis and the third is about the z-axis. From my understanding of ordinary matrix transformations, the later transformation is always applied on the right-hand side. If I'm right about this, then the above equation should have a rotation order starting with the rotation about the z-axis, then the y-axis, then finally the x-axis. But from the symbols it seems that the rotation order starts with the x-axis, then the y-axis, then finally the z-axis. What should the actual order of the rotation be? Also, I am confused about whether the input vector would, in this case, be a row vector on the left or a column vector on the right.
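
    From the description, the formula is the composition below. Assuming column vectors multiplied on the right (the common convention), the rightmost factor acts first, so the product does rotate about z first, then y, then x, exactly as the reasoning in the question says:

        R = R_x(\alpha)\, R_y(\beta)\, R_z(\gamma), \qquad
        v' = R\, v = R_x(\alpha)\, \bigl( R_y(\beta)\, ( R_z(\gamma)\, v ) \bigr)

    With row vectors on the left (v' = v R), the leftmost factor acts first and the order reads x, then y, then z, matching the symbol order. So the two questions have one answer: the effective order depends on which side the vector sits, and the two readings are transposes of each other.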

  • Collision in PyGame for spinning rectangular object touching circles

    - by OverAchiever
    I'm creating a variation of Pong. One of the differences is that I use a rectangular structure as the object being bounced around, and circles as paddles. So far, all the collision handling I've worked with used simple math (I wasn't using the collision "feature" in PyGame). The game is contained within a 2-dimensional continuous space. The idea is that the rectangular structure will spin at different speeds depending on how far from the center you touch it with the circle. Also, any extremity of the rectangular structure should be able to touch any extremity of the circle, so I need to keep track of where it has been touched on both the circle and the rectangle to figure out the direction it will be bounced in. I intend to have basically 8 possible directions (up, down, left, right, and the half points between each of those). I can work out the calculation of how the object will be displaced once I get the direction, based on where it has been touched. I also need to keep track of where it has been touched to decide whether the rectangular structure will spin clockwise or counter-clockwise after the collision. Before I started coding, I read the resources available on the PyGame website about the collision class they have (and its respective functions). I tried to work out the logic of what I was trying to achieve based on those resources and on how the game will function. The only thing I could figure out was to make each of these objects a group of rectangular objects, and depending on which rectangle was touched, the others would behave accordingly, giving the illusion of a single object. However, not only do I not know if this will work, I also don't know if it is going to look convincing, given how PyGame redraws the objects. Is there a way I can use PyGame to handle these collision detections while still having a single object? Can I figure out the point of collision on both objects using functions within PyGame precisely enough to achieve what I'm looking for? P.S.: I hope the question was specific and clear enough. I apologize for any grammar mistakes; English is not my native language.
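
    PyGame's built-in rect collision is axis-aligned only, so a spinning rectangle against a circle is normally done with plain geometry regardless of library. The standard test (shown here as a language-neutral C++ sketch of the math) also hands back the contact point, which is exactly what the spin-direction logic needs:

        #include <algorithm>
        #include <cmath>

        struct Vec2 { float x, y; };

        // Rotates the circle centre into the rectangle's local frame, clamps it
        // to the half-extents, then compares the distance to the radius. On a
        // hit, contactLocal is the contact point in the rectangle's own frame.
        bool circleVsRotatedRect(Vec2 circle, float radius,
                                 Vec2 rectCenter, float halfW, float halfH,
                                 float rectAngle, Vec2& contactLocal) {
            const float c = std::cos(-rectAngle), s = std::sin(-rectAngle);
            const float dx = circle.x - rectCenter.x, dy = circle.y - rectCenter.y;
            const Vec2 local = { dx * c - dy * s, dx * s + dy * c };

            contactLocal = { std::clamp(local.x, -halfW, halfW),
                             std::clamp(local.y, -halfH, halfH) };

            const float ex = local.x - contactLocal.x;
            const float ey = local.y - contactLocal.y;
            return ex * ex + ey * ey <= radius * radius;
        }

    contactLocal measured from the rectangle's centre is the lever arm, so its sign and magnitude give both the spin direction and the spin speed; no grouping of sub-rectangles is needed.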

  • World of Warcraft like C++/C# server (highload)

    - by Edward83
    I know this is a very big topic and maybe my question is well-worn, but I'm interested in the basics of how to write a high-load server for UDP/TCP client-server communication in an MMO-like game in C++/C#. I mean: what is the logic for receiving hundreds or thousands of packets at the same time and sending updates to clients? Please advise me on architecture solutions, your experience, and ready-to-use libraries. Maybe you know some interesting details of how WoW servers work. Thank you! Edit: my question is about development, not hardware/software tools.
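
    Architecture advice aside, the receive side usually starts smaller than it sounds: one non-blocking UDP socket drained completely each tick, with raw packets queued for the simulation instead of processed inline. A minimal POSIX sketch (error handling elided; the hand-off function is hypothetical):

        #include <arpa/inet.h>
        #include <fcntl.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <cstdint>

        int makeUdpSocket(std::uint16_t port) {
            const int fd = socket(AF_INET, SOCK_DGRAM, 0);
            fcntl(fd, F_SETFL, O_NONBLOCK);
            sockaddr_in addr{};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = INADDR_ANY;
            addr.sin_port = htons(port);
            bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
            return fd;
        }

        void drainPackets(int fd) {
            char buf[1500];                       // one MTU-sized datagram
            sockaddr_in from{};
            socklen_t fromLen = sizeof(from);
            // Read until the kernel buffer is empty, then return to the tick.
            while (recvfrom(fd, buf, sizeof(buf), 0,
                            reinterpret_cast<sockaddr*>(&from), &fromLen) > 0) {
                // enqueuePacket(buf, from);      // hypothetical hand-off to logic
            }
        }

    Scaling past one core is then a matter of sharding the world (by zone or map, the way WoW realms are commonly described as doing) rather than of the socket loop itself.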

  • Cocos2d v2.0 and OpenGL 2.0/1.0: where to start

    - by mm24
    I started developing my very first game 3 months ago using Cocos2d 2.0 for iPhone. I am now at the stage where I'd like to add some cool effects to the bullets and some special weapons (see my waveforms question here). I got a good answer in the cocos2d-iphone forum (see this one). Unfortunately I am a bit paralyzed now: I don't know if I would be overdoing it by learning OpenGL 2.0, or if I should just stick to the old 1.0. There is a good intro to various tutorials in Steffen Itterheim's blog (see this post). I would like to add to my game: a blur effect on the bullets (here is a tutorial for OpenGL 1.0), a waveform (see above), and some realistic water ripples (here is a nice sample code). So, given that I don't want to overdo things but at the same time I want to achieve those effects, where should I start? Should I discard the OpenGL 1.0 tutorials, or should I use only OpenGL 1.0 code? How can I avoid confusion? It seems that the compiler recognizes both, but that there are conflicting calls in some circumstances; I am fairly sure this has some explanation. Is there a reference to this somewhere?

  • Continuous Collision Detection Techniques

    - by Griffin
    I know there are quite a few continuous collision detection algorithms out there, but I can't find a list or summary of different 2D techniques, only tutorials on specific algorithms. What techniques are out there for calculating when different 2D bodies will collide, and what are the advantages/disadvantages of each? I say techniques and not algorithms because I have not yet decided how I will store different polygons, which might be concave or even have holes. I plan to make that decision based on what the algorithm requires (for instance, if an algorithm breaks a polygon down into triangles or convex shapes, I will simply store the polygon data in that form).
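
    As a concrete taste of the family, the simplest closed-form case is the time of impact of two moving circles over one frame, found by solving |p + v t| = r1 + r2 for the earliest t in [0, 1]:

        #include <cmath>
        #include <optional>

        struct Vec2 { float x, y; };

        std::optional<float> circleTimeOfImpact(Vec2 p1, Vec2 v1, float r1,
                                                Vec2 p2, Vec2 v2, float r2) {
            const Vec2 p = {p2.x - p1.x, p2.y - p1.y};  // relative position
            const Vec2 v = {v2.x - v1.x, v2.y - v1.y};  // relative motion per frame
            const float r = r1 + r2;
            const float a = v.x * v.x + v.y * v.y;
            const float b = 2.0f * (p.x * v.x + p.y * v.y);
            const float c = p.x * p.x + p.y * p.y - r * r;
            if (c <= 0.0f) return 0.0f;                 // already overlapping
            const float disc = b * b - 4.0f * a * c;
            if (a == 0.0f || disc < 0.0f) return std::nullopt;
            const float t = (-b - std::sqrt(disc)) / (2.0f * a);
            return (t >= 0.0f && t <= 1.0f) ? std::optional<float>(t) : std::nullopt;
        }

    Polygons don't have such a closed form, which is why the usual techniques there are swept AABBs (per-axis overlap intervals), conservative advancement, and GJK-based sweeps; the trade-off is roughly closed-form speed versus shape generality.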

  • How do I get the point coords of a rotated SFML shaperect?

    - by user15498
    I am trying to get bullet collisions working, and I am using SFML. I currently use my own code to compute the positions of the rectangle's corners for collisions; however, since the shape is a rectangle and its points are stored by SFML, I think there should be a way to simply get the points from SFML. Is there a way to do that, through a combination of getPoint() and getGlobalBounds() maybe? While on this topic, is it better to use rectangle shapes or sprites? I used to only use sprites, but with the addition of textures and more low-level stuff, I think it would be best to switch to using rectangles and setting their size.
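
    In SFML 2.x this does exist, though not through getGlobalBounds(): that returns the axis-aligned box around the rotated shape, not its corners. getPoint() returns the untransformed local corner, and the shape's transform maps it to world space, rotation included:

        #include <SFML/Graphics.hpp>
        #include <array>

        std::array<sf::Vector2f, 4> worldCorners(const sf::RectangleShape& rect) {
            std::array<sf::Vector2f, 4> out;
            const sf::Transform& t = rect.getTransform();  // position/rotation/scale
            for (std::size_t i = 0; i < rect.getPointCount(); ++i)  // 4 for a rectangle
                out[i] = t.transformPoint(rect.getPoint(i));
            return out;
        }

    The same getTransform() trick works for sprites too (transform the texture-rect corners), so the shapes-versus-sprites choice can be made on rendering grounds alone.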

  • Heightmap and Textures

    - by Robert
    I'm trying to find the "best way" to apply a texture to a heightmap with OpenGL 3.x. It's really hard to find anything on Google because the tutorials are old and they all use different methods; I'm really lost and I don't know what to use. Here is the (basic) code that generates the heightmap:

        float[] vertexes = null;
        float[] textureCoords = null;

        for (int x = 0; x < this.m_size.width; x++) {
            for (int y = 0; y < this.m_size.height; y++) {
                vertexes ~= [x, 1.0f, y];
                textureCoords ~= [cast(float)x / 50, cast(float)y / 50];
            }
        }

    As you can see, I don't know how to apply the texture at all (I was using / 50 for my tests). The result of that code is shown in the first screenshot; I would like to have something very basic like the second one (you can find more pics in his blog). Edit: my texture size is 1024x1024.
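
    Hard to diagnose without the screenshots, but with coordinates like x / 50 the usual missing piece is the wrap mode: for tiled ground textures, the coordinates are allowed to run past 1.0 and the texture is set to repeat, so the divisor only controls how many quads share one copy of the tile. A sketch of the GL side (the texture object is assumed to already exist):

        #include <GL/gl.h>

        void setRepeatWrap(GLuint tex) {
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        }

    With clamping left on instead, everything past the first tile smears the texture's edge pixels across the rest of the terrain.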

  • Tiled Editor: How is this Map Handling Collision?

    - by user2736286
    BrowserQuest map in question. From what I understand, with Tiled there are two main ways to specify collision:

        1. Create an object layer, and interpret the shapes in the engine as collision objects.
        2. Create a tile layer, make all tiles in the layer have a collision property, and interpret all tiles in the layer as collision objects.

    I'm using BrowserQuest as a big source of inspiration for my project, and I want to know how they handled collision on the level-editing side. I've checked through all their layers, expecting an object layer to be handling cliff collision like the cliffs in the screenshot, but there are no such object layers to be found. Furthermore, the tile layers containing the tiles for those cliffs have no properties at all, meaning they didn't just mark "collision" on those tile layers. I especially need to know how they handled less rectangular shapes. I could imagine that they are not using explicit collision layers at all, but instead determine collision in the engine based on the presence of specific tile-layer sprites, and only because BrowserQuest has whole-tile movement: there, it wouldn't look too odd if a small apple, taking up only a fraction of the tile, prevented movement over the entire tile. But I'm creating a game with more precise movement, so collision has to be tight to the apple, and I really want to know how BrowserQuest approached defining collision. If anyone knowledgeable with Tiled could take a quick look at the map, I'd appreciate it! I'm tearing my hair out here :). Thanks

  • Tips on how to notify a user of new features in your game

    - by brent777
    I have noticed a problem when releasing new features for a game that I wrote for Android and published on the Google Play Store. Because my game is "stage-based" (and not a game like Hay Day, for example, where users go into the game every day since it can't really be finished), my users are not aware of new features that I release for the game. For example, if I publish a new version containing a couple of new stages, most devices will just auto-update the game; users don't even notice, so they never think to check what's new. This is why an approach like popping up a dialog that showcases the new features the first time the game is opened after an update is not sufficient on its own. I am looking for tips on an approach that will draw my users back into the game, where they could then read more detail about the new features in such a dialog. I was thinking of something like a notification telling them to check out the new features after an update is done, but I am not sure if this is a good idea. Any suggestions to help me solve this problem would be awesome.

  • why are my players drawn to the side of my viewport

    - by Jetbuster
    Following this admittedly brilliant and clean 2D camera class, I have a camera on each player, which works for multiplayer, and I've divided the screen into two sections for split screen by giving each camera a viewport. However, in the game it looks like the screenshot; I'm not sure if that's their position relative to the screen or what. Here is the relevant game-screen code; makePlayers is set up so it could theoretically work for up to 4 players:

        private void makePlayers() {
            int rowCount = 1;
            if (NumberOfPlayers > 2)
                rowCount = 2;
            players = new Player[NumberOfPlayers];
            for (int i = 0; i < players.Length; i++) {
                int xSize = GameRef.Window.ClientBounds.Width / 2;
                int ySize = GameRef.Window.ClientBounds.Height / rowCount;
                int col = i % rowCount;
                int row = i / rowCount;
                int xPoint = 0 + xSize * row;
                int yPoint = 0 + ySize * col;
                Viewport viewport = new Viewport(xPoint, yPoint, xSize, ySize);
                Vector2 playerPosition = new Vector2(
                    viewport.TitleSafeArea.X + viewport.TitleSafeArea.Width / 2,
                    viewport.TitleSafeArea.Y + viewport.TitleSafeArea.Height / 2);
                players[i] = new Player(playerPosition, playerSprites[i], GameRef, viewport);
            }
            //players[1].Keyboard = true;
        }

        public override void Draw(GameTime gameTime) {
            base.Draw(gameTime);
            foreach (Player player in players) {
                GraphicsDevice.Viewport = player.PlayerCamera.ViewPort;
                GameRef.spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                    SamplerState.PointClamp, null, null, null, player.PlayerCamera.Transform);
                map.Draw(GameRef.spriteBatch);
                // Draw the Player
                player.Draw(GameRef.spriteBatch);
                // Draw UI screen elements
                GraphicsDevice.Viewport = Viewport;
                ControlManager.Draw(GameRef.spriteBatch);
                GameRef.spriteBatch.End();
            }
        }

    The player's Initialize and Draw methods are as follows:

        internal void Initialize() {
            this.score = 0;
            this.angle = (float)(Math.PI * 0 / 180); // start sprite at its default rotation
            int width = utils.scaleInt(picture.Width, imageScale);
            int height = utils.scaleInt(picture.Height, imageScale);
            this.hitBox = new HitBox(
                new Vector2(centerPos.X - width / 2, centerPos.Y - height / 2),
                width, height, Color.Black, game.Window.ClientBounds);
            playerCamera.Initialize();
        }

        #region Methods
        public void Draw(SpriteBatch spriteBatch) {
            //Console.WriteLine("Hitbox: X({0}),Y({1})", hitBox.Points[0].X, hitBox.Points[0].Y);
            //Console.WriteLine("Image: X({0}),Y({1})", centerPos.X, centerPos.Y);
            Vector2 orgin = new Vector2(picture.Width / 2, picture.Height / 2);
            hitBox.Draw(spriteBatch);
            utils.DrawCrosshair(spriteBatch, Position, game.Window.ClientBounds, Color.Red);
            spriteBatch.Draw(picture, Position, null, Color.White, angle, orgin,
                imageScale, SpriteEffects.None, 0.1f);
        }

    As I said, I think I'm going to need to do something with the render position, but I'm not entirely sure what, or how to do it elegantly, to say the least.

  • Application toolkits like Qt versus traditional game/multimedia libraries like SFML

    - by Aaron
    I currently intend to use SFML for my next game project. I'll need a substantial GUI, though (it's an RPG/strategy type), so I'll either have to implement my own or find an appropriate third-party library, which seems to boil down to CEGUI, libRocket, and GWEN. At the same time, I do not anticipate doing many advanced graphical effects: my game will be 2D and primarily sprite-based, with some sprite animations. I've recently discovered that Qt applications can have their appearance styled so that they don't have to look like plain OS apps. Given that, I am beginning to consider Qt a valid alternative to SFML: I wouldn't have to implement the GUI functionality I need, and I may not be taking advantage of SFML's lower-level access anyway. The only drawbacks I can think of immediately are the learning curve for Qt and figuring out how to fit game logic inside such a framework after getting used to the input/update/render loop of traditional game libraries. When would an application toolkit like Qt be more appropriate for a game than a traditional game or multimedia library like SFML?

  • 2D Tile based Game Collision problem

    - by iNbdy
    I've been trying to program a tile-based game, and I'm stuck on the collision detection. Here is my code (not the best ^^):

        void checkTile(Character *c, int **map) {
            int x1, x2, y1, y2;

            /* Character position in the map */
            c->upY    = (c->y) / TILE_SIZE;          /* top left corner     */
            c->downY  = (c->y + c->h) / TILE_SIZE;   /* bottom left corner  */
            c->leftX  = (c->x) / TILE_SIZE;          /* top right corner    */
            c->rightX = (c->x + c->w) / TILE_SIZE;   /* bottom right corner */

            x1 = (c->x + 10) / TILE_SIZE;            /* 10px from left side point   */
            x2 = (c->x + c->w - 10) / TILE_SIZE;     /* 10px from right side point  */
            y1 = (c->y + 10) / TILE_SIZE;            /* 10px from top side point    */
            y2 = (c->y + c->h - 10) / TILE_SIZE;     /* 10px from bottom side point */

            /* Top */
            if (map[c->upY][x1] > 2 || map[c->upY][x2] > 2)
                c->topCollision = 1;
            else
                c->topCollision = 0;

            /* Bottom */
            if (map[c->downY][x1] > 2 || map[c->downY][x2] > 2)
                c->downCollision = 1;
            else
                c->downCollision = 0;

            /* Left */
            if (map[y1][c->leftX] > 2 || map[y2][c->leftX] > 2)
                c->leftCollision = 1;
            else
                c->leftCollision = 0;

            /* Right */
            if (map[y1][c->rightX] > 2 || map[y2][c->rightX] > 2)
                c->rightCollision = 1;
            else
                c->rightCollision = 0;
        }

    That calculates 8 collision points. My moving function is like this:

        void movePlayer(Character *c, int **map) {
            if ((c->dirX == LEFT && !c->leftCollision) ||
                (c->dirX == RIGHT && !c->rightCollision))
                c->x += c->vx;
            if ((c->dirY == UP && !c->topCollision) ||
                (c->dirY == DOWN && !c->downCollision))
                c->y += c->vy;
            checkPosition(c, map);
        }

    and here is checkPosition:

        void checkPosition(Character *c, int **map) {
            checkTile(c, map);

            if (c->downCollision) {
                if (c->state != JUMPING) {
                    c->vy = 0;
                    c->y = (c->downY * TILE_SIZE - c->h);
                }
            }
            if (c->leftCollision) {
                c->vx = 0;
                c->x = (c->leftX) * TILE_SIZE + TILE_SIZE;
            }
            if (c->rightCollision) {
                c->vx = 0;
                c->x = c->rightX * TILE_SIZE - c->w;
            }
        }

    This works, but sometimes, when the player lands on the ground, the right and left collision points become equal to 1, so it's as if there were a collision coming from the left or right. Does anyone know why it does this?
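
    A likely culprit worth checking: the side sample points y1/y2 sit only 10px inside the box, and movePlayer moves before checkPosition pushes the player back out, so on the landing frame y2 can end up inside the ground row, and the left/right tests then read those solid floor tiles. One common technique that sidesteps this whole class of bug (a generic sketch, not the poster's code, reusing the same "tile value > 2 means solid" convention) is to move and resolve one axis at a time:

        enum { TILE_PX = 32 };  /* hypothetical tile size */

        typedef struct { float x, y, w, h, vx, vy; } Body;

        static int solidAt(int **map, float x, float y) {
            return map[(int)(y / TILE_PX)][(int)(x / TILE_PX)] > 2;
        }

        static int bodyHits(int **map, const Body *b) {
            return solidAt(map, b->x,        b->y)        ||
                   solidAt(map, b->x + b->w, b->y)        ||
                   solidAt(map, b->x,        b->y + b->h) ||
                   solidAt(map, b->x + b->w, b->y + b->h);
        }

        void moveAxisSeparated(Body *b, int **map) {
            b->x += b->vx;                                  /* horizontal first */
            if (bodyHits(map, b)) { b->x -= b->vx; b->vx = 0; }

            b->y += b->vy;                                  /* then vertical */
            if (bodyHits(map, b)) { b->y -= b->vy; b->vy = 0; }
        }

    Because the vertical step sees a position that is already valid horizontally, a landing can never be misread as a left or right collision.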
