Search Results

Search found 25518 results on 1021 pages for 'iterative development'.

  • Ray Tracing Shadows in deferred rendering

    - by Grieverheart
    Recently I programmed a raytracer for fun and found it beautifully simple how shadows are created, compared to a rasterizer. Now I can't help but wonder whether something similar could be done for ray tracing of shadows in a deferred renderer. The way I thought this could work is: after drawing to the g-buffer, in a separate pass and for each pixel, calculate rays to the lights and draw them as lines of unique color together with the geometry (with color 0). The lines will be cut off where there is occlusion, and this fact could be used in a fragment shader to determine which rays are occluded. I guess there must be something I'm missing; for example, I'm not sure how the fragment shader could save the occlusion results for each ray so that they are available to the pixel at the ray's origin. Has this method been tried before, is it possible to implement it as I described, and if so, what would the performance drawbacks of calculating shadows this way be?

  • Inconsistent movement / line-of-sight around obstacles on a hexagonal grid

    - by Darq
    In a roguelike game I've been working on, one of my core design goals has been to let the player "play the game, not the grid." In essence, I want the player's positioning to be tactical because of elements in the game world, not simply because some grid tiles are more advantageous than others in relation to enemies. I am fine with world geometry not being realistic, but it needs to be consistent. In this process I have run into most of the common problems (square tiles? diagonal movement, LOS, corner cases, etc.) and have moved to a hexagonal tile grid. For the most part this has been great, and I've not had too many inconsistencies. Recently, however, I have been stumped by the following: points A and B are both distance 4 from the player (red lines). Line-of-sight to both is blocked by walls (black tiles). However, due to the hexagonal grid, A can be reached in 4 moves, whereas B requires 5 moves (blue lines). On a hex grid, "shortest path" seems divorced from "direct path": there may be multiple shortest paths to any point, but there is only one direct path (or two in some situations). This is fine; geometry need not be realistic. However, this also seems inconsistent: similar obstacles are more effective in some positions than in others. A player running away from an enemy should be able to run in any direction, increasing the distance between the two actors. However, when placing obstacles or traps between themselves and enemies, the player is best served by running in one of the six directions that don't have multiple shortest paths. Is there a way to rationalise this? Am I missing something that makes this behaviour consistent? Or is there a way to make this behaviour consistent? I am most certainly over-thinking this, but as it is one of my goals, I should do it due diligence.
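    For reference, a minimal C++ sketch (names invented) of the usual cube-coordinate hex distance. The point it illustrates: the metric counts steps, and several different step sequences can realise the same count, which is exactly why "shortest path" and "direct path" come apart on a hex grid:

        // Hex distance in cube coordinates (invariant: q + r + s == 0).
        // Many pairs of hexes admit several distinct shortest paths of
        // this length; only pairs lying on one of the six axis directions
        // have a unique one.
        #include <cstdlib>

        struct Hex { int q, r, s; };

        int hexDistance(const Hex& a, const Hex& b) {
            return (std::abs(a.q - b.q) + std::abs(a.r - b.r) + std::abs(a.s - b.s)) / 2;
        }

    The six axis directions with a unique shortest path are the same six "safe" escape directions described above.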

  • Obtaining a HBITMAP/HICON from D2D Bitmap

    - by Tom
    Is there any way to obtain an HBITMAP or HICON from an ID2D1Bitmap* using Direct2D? I am using the following function to load a bitmap: http://msdn.microsoft.com/en-us/library/windows/desktop/dd756686%28v=vs.85%29.aspx The reason I ask is that I am creating my level editor tool and would like to draw a PNG image on a standard button control. I know that you can do this using GDI+:

        HBITMAP hBitmap;
        Gdiplus::Bitmap b(L"a.png");
        b.GetHBITMAP(NULL, &hBitmap);
        SendMessage(GetDlgItem(hDlg, IDC_BUTTON1), BM_SETIMAGE, IMAGE_BITMAP, (LPARAM)hBitmap);

    Is there an equivalent, simple solution using Direct2D? If possible, I would like to render multiple PNG files (some with transparency) on a single button.
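    A hedged note rather than a definitive answer: an ID2D1Bitmap is a device resource, and the desktop Direct2D API has no direct read-back into GDI, so one common workaround is to decode the PNG again with WIC straight into an HBITMAP. A C++ sketch under that assumption (error handling and COM initialisation elided):

        // Sketch: decode a PNG with WIC into a 32bpp HBITMAP (premultiplied BGRA).
        // Assumes CoInitialize has already been called; all checks elided.
        #include <wincodec.h>
        #include <windows.h>

        HBITMAP LoadPngAsHBITMAP(const wchar_t* path) {
            IWICImagingFactory* factory = nullptr;
            CoCreateInstance(CLSID_WICImagingFactory, nullptr, CLSCTX_INPROC_SERVER,
                             IID_PPV_ARGS(&factory));

            IWICBitmapDecoder* decoder = nullptr;
            factory->CreateDecoderFromFilename(path, nullptr, GENERIC_READ,
                                               WICDecodeMetadataCacheOnLoad, &decoder);
            IWICBitmapFrameDecode* frame = nullptr;
            decoder->GetFrame(0, &frame);

            // Convert to the pixel format GDI expects for a 32bpp DIB.
            IWICBitmapSource* converted = nullptr;
            WICConvertBitmapSource(GUID_WICPixelFormat32bppPBGRA, frame, &converted);

            UINT w = 0, h = 0;
            converted->GetSize(&w, &h);

            BITMAPINFO bmi = {};
            bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
            bmi.bmiHeader.biWidth = (LONG)w;
            bmi.bmiHeader.biHeight = -(LONG)h;  // negative height = top-down DIB
            bmi.bmiHeader.biPlanes = 1;
            bmi.bmiHeader.biBitCount = 32;
            bmi.bmiHeader.biCompression = BI_RGB;

            void* bits = nullptr;
            HBITMAP hbmp = CreateDIBSection(nullptr, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);
            converted->CopyPixels(nullptr, w * 4, w * 4 * h, (BYTE*)bits);

            converted->Release(); frame->Release(); decoder->Release(); factory->Release();
            return hbmp;
        }

    The decoded HBITMAP can then be handed to BM_SETIMAGE exactly as in the GDI+ snippet above.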

  • How can I make video games if I don't like programming?

    - by hoper
    I am studying C++ in school (my major is computer programming). Honestly, my grades are not so good, and the assignments are really hard. Sometimes I feel sad that, for my future job, I will spend 8-10 hours per day coding (which is stressful). But I still want to make video games. Maybe this is the only reason I am taking all of these stressful courses. I always write down plots, stories, characters, fictional gaming worlds... Once, I thought I should study artistic technology such as game design, and not computer technology such as C++, C#, etc. However, most popular game designers (or directors) such as Kojima, Miyamoto, etc. used to be good programmers. Companies actually appoint programmers as directors because they understand how a game is made. I've tried to find other colleges or universities that teach game design programs. However, one article listing the top 10 game design schools in North America seems untrustworthy, because the survey company scored them only from interviews with students. Once, I tried to attend the Art Institute of Vancouver, which is ranked 7th according to that article. However, one programmer who used to be an instructor there told me the truth: the employment rate of its graduates is low. How can I have a future making games if I don't like programming?

  • Finite state machine in C++

    - by Electro
    So, I've read a lot about using FSMs to do game state management: things like what an FSM is, and using a stack or set of states for building one. I've gone through all that. But I'm stuck at writing an actual, well-designed implementation of an FSM for that purpose. Specifically, how does one cleanly resolve the problem of transitioning between states, (how) should a state be able to use data from other states, and so on? Does anyone have any tips on designing and writing an implementation in C++, or better yet, code examples?
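    For what it's worth, here is a minimal hedged sketch (all names invented, not from any particular engine) of a stack-based machine. Transitions are push/pop requests made through the machine reference each state receives; shared data is best passed through the machine, or a context object it owns, rather than read out of sibling states:

        #include <memory>
        #include <vector>

        class StateMachine;

        class State {
        public:
            virtual ~State() = default;
            virtual void onEnter(StateMachine&) {}  // acquire resources here
            virtual void onExit(StateMachine&) {}   // release them here
            virtual void update(StateMachine&, double dt) = 0;
        };

        class StateMachine {
        public:
            void push(std::unique_ptr<State> s) {
                stack_.push_back(std::move(s));
                stack_.back()->onEnter(*this);
            }
            void pop() {
                stack_.back()->onExit(*this);
                stack_.pop_back();
            }
            void update(double dt) {
                if (!stack_.empty()) stack_.back()->update(*this, dt);
            }
        private:
            std::vector<std::unique_ptr<State>> stack_;  // top = active state
        };

    A pause menu then becomes a push on top of the play state, and closing it a pop; states never need pointers to each other. (A production version would defer pops requested during update so a state never destroys itself mid-call.)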

  • Computing a normal matrix in conjunction with gluLookAt

    - by Chris Smith
    I have a hand-rolled camera class that converts yaw, pitch, and roll angles into forward, side, and up vectors suitable for calling gluLookAt. Using this camera class I can modify the model-view matrix to move about the 3D world just fine. However, I am having trouble when using this camera class (and the associated model-view matrix) when trying to perform directional lighting in my vertex shader.

    The problem is that the light direction, (0, 1, 0) for example, is relative to where the camera is looking and not to the actual world coordinates. (Or is this eye coordinates vs. model coordinates?) I would like the light direction to be unaffected by the camera's viewing direction. For example, when the camera is looking down the Z axis, the ground is lit correctly. However, if I point the camera straight at the ground, then it goes dark. This is (I think) because the light direction is parallel with the camera's 'up' vector, which is perpendicular to the ground's normal vector. I tried computing the normal matrix without taking the camera's model-view into account, but then none of my objects were rotated correctly.

    Sorry if this sounds vague. I suspect there is a straightforward answer, but I'm not 100% clear on how the normal matrix should be used for transforming vertex normals in my vertex shader. For reference, here is pseudocode for my rendering loop:

        pMatrix = new Matrix();
        pMatrix = makePerspective(...);

        mvMatrix = new Matrix();
        camera.apply(mvMatrix);  // Calls gluLookAt

        // Move the object into position.
        mvMatrix.translatev(position);
        mvMatrix.rotatef(rotation.x, 1, 0, 0);
        mvMatrix.rotatef(rotation.y, 0, 1, 0);
        mvMatrix.rotatef(rotation.z, 0, 0, 1);

        var nMatrix = new Matrix();
        nMatrix.set(mvMatrix.get().getInverse().getTranspose());

        // Set vertex shader uniforms.
        gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, new Float32Array(pMatrix.getFlattened()));
        gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, new Float32Array(mvMatrix.getFlattened()));
        gl.uniformMatrix4fv(shaderProgram.nMatrixUniform, false, new Float32Array(nMatrix.getFlattened()));

        // ...
        gl.drawElements(gl.TRIANGLES, this.vertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);

    And the corresponding vertex shader:

        // Attributes
        attribute vec3 aVertexPosition;
        attribute vec4 aVertexColor;
        attribute vec3 aVertexNormal;

        // Uniforms
        uniform mat4 uMVMatrix;
        uniform mat4 uNMatrix;
        uniform mat4 uPMatrix;

        // Varyings
        varying vec4 vColor;

        // Constants
        const vec3 LIGHT_DIRECTION = vec3(0, 1, 0);  // Opposite direction of photons.
        const vec4 AMBIENT_COLOR = vec4(0.2, 0.2, 0.2, 1.0);

        float ComputeLighting() {
            vec4 transformedNormal = vec4(aVertexNormal.xyz, 1.0);
            transformedNormal = uNMatrix * transformedNormal;
            float base = dot(normalize(transformedNormal.xyz), normalize(LIGHT_DIRECTION));
            return max(base, 0.0);
        }

        void main(void) {
            gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
            float lightWeight = ComputeLighting();
            vColor = vec4(aVertexColor.xyz * lightWeight, 1.0) + AMBIENT_COLOR;
        }

    Note that I am using WebGL, so if the answer is "use glFixThisProblem(...)", any pointers on how to re-implement that in WebGL, if it's missing, would be appreciated.

  • Mobile 3D engine renders alpha as full-object transparency

    - by Nils Munch
    I am running an iOS project using the isgl3d framework for showing POD files. I have a stylish car with 0.5-alpha windows that I wish to render on a camera background, seeking some augmented reality goodness. The alpha on the windows looks okay, but when I add the object, I notice that it renders the entire object transparently where the windows are, including the interior of the car. (In the example, the keyboard can be seen through the dashboard, seats and so on; these should be solid.) The car interior is a separate object with alpha 1.0. I would rather not show a "ghost car" in my project, but I haven't found a way around this. Has anyone encountered the same issue and eventually reached a solution?

  • Game physics / 2D Collision detection AS3

    - by Jery
    I know there are some methods you can use, like hitTestPoint and so on, but I want to see where my movieclip collides with another movieclip. Are there any other methods I can use? Also, by any chance, does somebody know a good introduction to game physics? I'm asking because I coded a small engine and pretty much the whole code is spaghetti code; that's why I would like to know how to set something like this up properly.
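    As a hedged illustration (in C++ rather than AS3; names invented): the usual step beyond point tests is to intersect the two clips' axis-aligned bounding boxes; the overlap rectangle tells you where the collision happened:

        // Minimal AABB overlap test returning the intersection rectangle
        // (returns false if the boxes do not touch).
        #include <algorithm>

        struct Rect { float x, y, w, h; };

        bool intersect(const Rect& a, const Rect& b, Rect& out) {
            float x1 = std::max(a.x, b.x);
            float y1 = std::max(a.y, b.y);
            float x2 = std::min(a.x + a.w, b.x + b.w);
            float y2 = std::min(a.y + a.h, b.y + b.h);
            if (x2 <= x1 || y2 <= y1) return false;  // no overlap
            out = { x1, y1, x2 - x1, y2 - y1 };      // where they collide
            return true;
        }

    Within that overlap region, a per-pixel test (or a shape-level test, as a physics library would do) can then refine the contact point.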

  • Collision Detection, player correction

    - by DoomStone
    I am having some problems with collision detection. I have two types of objects, excluding the player: tiles and what I call MapObjects. The tiles are all 16x16; the MapObjects can be any size, but in my case they are all 16x16 too. When my player runs along the MapObjects or tiles, the movement gets very jerky: the player is unable to move right, and gets warped forward when moving left.

    I have found the problem, and it is that my collision detection will move the player left/right when colliding with an object from the side, and up/down when colliding from above/below. Now imagine that my player is standing on two tiles, at (10,12) and (11,12), mostly on the (11,12) tile. The collision detection will first run on the (10,12) tile; it calculates the collision depth, finds that it is a collision from the side, and therefore moves the object to the right. After that, it will do the collision detection with (11,12) and move the character up. So the player will not fall down, but is unable to move right. And when moving left, the same problem makes the player warp forward.

    This problem has been bugging me for a few days now, and I just can't find a solution! Here is my code that does the collision detection:

        public void ApplyObjectCollision(IPhysicsObject obj, List<IComponent> mapObjects, TileMap map)
        {
            PhysicsVariables physicsVars = GetPhysicsVariables();
            Rectangle bounds = ((IComponent)obj).GetBound();
            int leftTile = (int)Math.Floor((float)bounds.Left / map.GetTileSize());
            int rightTile = (int)Math.Ceiling(((float)bounds.Right / map.GetTileSize())) - 1;
            int topTile = (int)Math.Floor((float)bounds.Top / map.GetTileSize());
            int bottomTile = (int)Math.Ceiling(((float)bounds.Bottom / map.GetTileSize())) - 1;

            // Reset flag to search for ground collision.
            obj.IsOnGround = false;

            // For each potentially colliding tile,
            for (int y = topTile; y <= bottomTile; ++y)
            {
                for (int x = leftTile; x <= rightTile; ++x)
                {
                    IComponent tile = map.Get(x, y);
                    if (tile != null)
                    {
                        bounds = HandelCollision(obj, tile, bounds, physicsVars);
                    }
                }
            }

            // Handle collision for all moving objects.
            foreach (IComponent mo in mapObjects)
            {
                if (mo == obj)
                    continue;
                if (mo.GetBound().Intersects(((IComponent)obj).GetBound()))
                {
                    bounds = HandelCollision(obj, mo, bounds, physicsVars);
                }
            }
        }

        private Rectangle HandelCollision(IPhysicsObject obj, IComponent objb, Rectangle bounds, PhysicsVariables physicsVars)
        {
            // If this tile is collidable,
            SpriteCollision collision = ((IComponent)objb).GetCollisionType();
            if (collision != SpriteCollision.Passable)
            {
                // Determine collision depth (with direction) and magnitude.
                Rectangle tileBounds = ((IComponent)objb).GetBound();
                Vector2 depth = bounds.GetIntersectionDepth(tileBounds);
                if (depth != Vector2.Zero)
                {
                    float absDepthX = Math.Abs(depth.X);
                    float absDepthY = Math.Abs(depth.Y);

                    // Resolve the collision along the shallow axis.
                    if (absDepthY <= absDepthX || collision == SpriteCollision.Platform)
                    {
                        // If we crossed the top of a tile, we are on the ground.
                        if (obj.PreviousBound.Bottom <= tileBounds.Top)
                            obj.IsOnGround = true;

                        // Ignore platforms, unless we are on the ground.
                        if (collision == SpriteCollision.Impassable || obj.IsOnGround)
                        {
                            // Resolve the collision along the Y axis.
                            ((IComponent)obj).Position = new Vector2(((IComponent)obj).Position.X, ((IComponent)obj).Position.Y + depth.Y);

                            // If we hit something above us, remove all upward velocity.
                            if (depth.Y > 0 && obj.IsJumping)
                            {
                                obj.Velocity = new Vector2(obj.Velocity.X, 0);
                                obj.JumpTime = physicsVars.MaxJumpTime;
                            }

                            // Perform further collisions with the new bounds.
                            return ((IComponent)obj).GetBound();
                        }
                    }
                    else if (collision == SpriteCollision.Impassable) // Ignore platforms.
                    {
                        // Resolve the collision along the X axis.
                        ((IComponent)obj).Position = new Vector2(((IComponent)obj).Position.X + depth.X, ((IComponent)obj).Position.Y);

                        // Perform further collisions with the new bounds.
                        return ((IComponent)obj).GetBound();
                    }
                }
            }
            return bounds;
        }

    Update: I have uploaded the source code, if you want to look it through. I think my general approach might be wrong when I am working with small tiles; I have also been unable to find any good information on physics and collision detection in platform games. http://dl.dropbox.com/u/3181816/Sogaard.Games.SuperMario.rar

  • Why does multiplying texture coordinates scale the texture?

    - by manning18
    I'm having trouble visualizing this geometrically: why does multiplying the U,V coordinates of a texture coordinate have the effect of scaling that texture by that factor? E.g. if you scaled the texture coordinates by a factor of 3, then doesn't this mean that if you had texture coordinates (0, 1) and (0, 2), you'd be sampling (0, 3) and (0, 6) in the U,V texture space of 0..1? How does that make it bigger? E.g. HLSL: tex2D(textureSampler, TexCoords*3). Integers make it smaller, decimals make it bigger. I mean, I understand intuitively if you added to the U,V coordinates, as that is simply an offset into the sampling range, but what's the case with multiplication? I have a feeling that when someone explains this to me I'm going to feel mighty stupid.
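    A hedged way to see it, assuming wrap addressing (coordinates outside [0, 1] repeat the image): multiplying the coordinates rescales the texture's period across the surface. As a one-line derivation in LaTeX:

        u' = s\,u \quad\Longrightarrow\quad u' \text{ spans } [0, s] \text{ while } u \text{ spans } [0, 1]

    With wrapping, the interval [0, s] contains s full copies of the image. For s = 3 the texture therefore tiles three times across the same surface and each copy appears a third of the size; for s = 0.5 only half the image covers the whole surface, so what you see is magnified.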

  • UIView vs CCLayer and Making gestures work in Cocos2d

    - by Lewis
    Now, I've been searching for an answer to this question for at least 3 days, trying many cocos2d forums, including the official one, and have heard nothing back. I've found a project which uses custom gestures: https://github.com/melle/OneFingerRotationGestureDemo Explanation here: http://blog.mellenthin.de/archives/2012/02/13/an-one-finger-rotation-gesture-recognizer/ Now I want to implement that behaviour on a sprite in a cocos2d application. I've tried to do this but it fails to work. It uses a view controller which inherits like this:

        @interface OneFingerRotationGestureViewController : UIViewController <OneFingerRotationGestureRecognizerDelegate>

    Now my question is: how would I implement the OneFingerRotationGesture behaviour on a CCSprite in cocos2d 2.0? Interfaces in cocos2d look like this:

        @interface HelloWorldLayer : CCLayer

    I have asked a similar question on Stack Overflow and a user directed me to this link: https://github.com/krzysztofzablocki/CCNode-SFGestureRecognizers which I believe makes use of gestures (like the first linked project) but not custom gestures. I lack the Objective-C skills to work out and implement the functionality into my game, so I would appreciate it if someone could explain the differences between CCLayer and UIViewController, and help me implement the OneFingerRotation gesture into a cocos2d 2.0 project. Regards, Lewis.

  • Random map generation

    - by Thomas Owers
    I'm starting (or rather, have started) a 2D tile-map RPG in Java and I want to implement random map generation. I have a list of different tiles (dirt/sand/stone/grass/gravel etc.) along with water tiles and path tiles. The problem is that I have no idea where to start on generating a map randomly. It would need to have terrain sections (part of it sand, part dirt, etc.), similar to Minecraft, where you have different biomes that seamlessly transform into each other. Lastly, I would also need to add random paths going in different directions all over the map. I'm not asking anyone to write all the code for me, just point me in the right direction, please. tl;dr - Generate a tile map with biomes and paths, and make sure the biomes seamlessly blend into each other.
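    As a hedged starting point (C++, everything invented; real projects usually reach for Perlin or simplex noise): smooth white noise, then threshold it into biome bands. The smoothing is what makes neighbouring tiles similar, so biomes blend instead of speckling. Paths can then be carved separately, for example as random walks between points of interest:

        #include <cstdlib>
        #include <vector>

        enum Biome { Sand, Dirt, Grass, Stone };

        std::vector<std::vector<Biome>> generateBiomes(int w, int h, unsigned seed) {
            std::srand(seed);
            std::vector<std::vector<float>> n(h, std::vector<float>(w));
            for (auto& row : n)
                for (auto& v : row) v = std::rand() / (float)RAND_MAX;

            // Box-blur several times so nearby tiles get similar values.
            for (int pass = 0; pass < 8; ++pass) {
                auto s = n;
                for (int y = 1; y < h - 1; ++y)
                    for (int x = 1; x < w - 1; ++x)
                        s[y][x] = (n[y][x] + n[y-1][x] + n[y+1][x]
                                 + n[y][x-1] + n[y][x+1]) / 5.0f;
                n = s;
            }

            // Threshold the smoothed field into biome bands.
            std::vector<std::vector<Biome>> map(h, std::vector<Biome>(w));
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x)
                    map[y][x] = n[y][x] < 0.45f ? Sand
                              : n[y][x] < 0.50f ? Dirt
                              : n[y][x] < 0.55f ? Grass : Stone;
            return map;
        }

    The thresholds here are guesses to tune by eye; proper gradient noise gives better-shaped regions but needs more code or a library.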

  • Android-Libgdx-ProGuard: Usefulness without DexGuard? [on hold]

    - by Rico Pablo Mince
    So I'm developing a game for Android - using LibGDX - and noticed that the Android SDK (HDK, MDK, WhatTheHellEvarDK) has ProGuard built-in. Browsing the ProGuard page is like searching Google: you get that the idea is to sell some product (in this case, it's DexGuard). That leaves me wondering what features are left out of ProGuard that a game developer targeting Android should worry about. For instance, the ProGuard FAQs answer the question: "Does ProGuard encrypt string constants?" by saying: "No. String encryption in program code has to be perfectly reversible by definition, so it only improves the obfuscation level. It increases the footprint of the code. However, by popular demand, ProGuard's closed-source sibling for Android, DexGuard, does provide string encryption, along with more protection techniques against static and dynamic analysis." Alright. OK. But isn't "...improves the obfuscation level" EXACTLY what ProGuard is supposed to do? Are there better options that can be implemented at build-time in Eclipse using the Gradle options and Libgdx? In particular, the assets folder and res-specific folders will need some protection. The code itself doesn't cure cancer, but I'd prefer if nobody could copy/paste it with different game art and call it "IhAxEdUrGamE"....

  • How to balance a non-symmetric "extension" based game?

    - by Klaim
    Most strategy games have fixed units and possible behaviours. However, think of a game like Magic: The Gathering: each card is a set of rules. Regularly, new sets of card types are created. I remember that the first editions of the game were said to be prohibited in official tournaments because the cards were often too powerful. Later extensions of the game provided more subtle effects/rules in cards, and they managed to balance the game apparently effectively, even though there are thousands of different possible cards. I'm working on a strategy game that is a bit in the same position: all units are provided by extensions, and the game is meant to be extended for some years, at least. The variety of unit effects is very large, even with some basic design limitations set to make sure it's manageable. Each player chooses a set of units to play with (defining their global strategy) before playing (like choosing a themed deck of Magic cards). As it's a strategy game (you can think of Magic as a strategy game too, from some points of view), it's essentially skirmish-based, so the game has to be fair, even if the players don't choose the same units before starting to play. So, how do you proceed to balance this type of non-symmetric (strategy) game when you know it will always be extended? For the moment, I'm trying to apply the following rules, but I'm not sure they are right because I don't have enough design experience to know: each unit should provide one unique effect; each unit should have an opposite unit with an opposite effect that cancels it out; some limitations based on the gameplay; try to get a lot of beta testing before each extension release? It looks like I'm in the most complex case?

  • Should I be using Lua for game logic on mobile devices?

    - by Rob Ashton
    As above, really. I'm writing an Android-based game in my spare time (Android because it's free and I have no real aspirations to do anything commercial). The game logic comes from a very typical component-based model whereby entities exist and have components attached to them, and messages are sent to and fro in order to make things happen. Obviously the layer for actually performing that is thin, and if I were to write an iPhone version of this app, I'd have to re-write the renderer and core driver (of this component-based system) in Objective-C. The entities are just flat files determining the names of the components to be added, and the components themselves are simple, single-purpose objects containing the logic for the entity. Now, if I write all the logic for those components in Java, then I'd have to re-write them in Objective-C if I decided to do an iPhone port. As the bulk of the application logic is contained within these components, they would, in an ideal world, be written in some platform-agnostic language/script/DSL which could then just be loaded into the app on whatever platform. I've been led to believe, however, that this is not an ideal world: that Lua performance etc. on mobile devices still isn't up to scratch, that the overhead is too much, and that I'd run into trouble later if I went down that route. Is this actually the case? Obviously this is just a hypothetical question; I'm happy writing them all in Java, as it's simple and easy to get things off the ground. But say I actually enjoy making this game (unlikely, given how much I'm currently disliking having to deal with all those different mobile devices) and I wanted to make a commercially viable game: would I use Lua, or would I just take the hit when it came to porting and re-write all the code?

  • How to apply Data Oriented Design with Object Oriented Programming?

    - by Pombal
    I've read lots of articles about Data-Oriented Design (DOD) and I understand it, but I can't design an Object-Oriented Programming (OOP) system with DOD in mind; I think my OOP education is blocking me. How should I think, to mix the two? The objective is to have a nice OOP interface while using DOD behind the scenes. I saw this too, but it didn't help much: http://stackoverflow.com/questions/3872354/how-to-apply-dop-and-keep-a-nice-user-interface
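    One hedged sketch of the usual compromise (C++, names invented): keep a struct-of-arrays layout internally, which is the data-oriented part, and expose it through a small class interface so callers still see an object with methods:

        #include <cstddef>
        #include <vector>

        class ParticleSystem {
        public:
            std::size_t spawn(float x, float y) {        // OOP-style interface
                xs_.push_back(x);   ys_.push_back(y);
                vxs_.push_back(0);  vys_.push_back(0);
                return xs_.size() - 1;                   // handle, not pointer
            }
            void update(float dt) {
                // The hot loop walks contiguous arrays: cache-friendly.
                for (std::size_t i = 0; i < xs_.size(); ++i) {
                    xs_[i] += vxs_[i] * dt;
                    ys_[i] += vys_[i] * dt;
                }
            }
        private:
            // One array per field (SoA), not one object per particle (AoS).
            std::vector<float> xs_, ys_, vxs_, vys_;
        };

    Callers hold integer handles instead of pointers to per-particle objects, which is typically the main mental shift coming from classic OOP.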

  • Auto-tiling with Yoshi's Island style tiles

    - by Boreal
    I'm creating a 2D platformer and I'd like to implement an auto-tiling system. Normally, this wouldn't be particularly difficult. However, I'd like to have tiles like in Yoshi's Island, where the graphics extend past the actual collidable tile's boundaries. Consider this image: although the eggs and the Piranha Plant are clearly resting on the ground, the flower tiles continue behind them, out of the collidable tile. I know that it would be simple to do by hand, but extremely time-consuming. Using an auto-tiling algorithm would save me a lot of time and boredom, but I'm not sure where to start.
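    A common starting point is bitmask auto-tiling, sketched here in C++ (names invented). Each tile's variant index is computed from which of its four neighbours are solid; the Yoshi's Island-style overhang is then just art baked into variant images that are drawn larger than the 16x16 collision tile:

        const int W = 32, H = 24;
        bool solid[H][W];  // the collidable grid

        bool isSolid(int x, int y) {
            return x >= 0 && x < W && y >= 0 && y < H && solid[y][x];
        }

        // Returns 0..15; used as an index into a 16-variant tileset.
        int autotileIndex(int x, int y) {
            int mask = 0;
            if (isSolid(x, y - 1)) mask |= 1;  // north
            if (isSolid(x + 1, y)) mask |= 2;  // east
            if (isSolid(x, y + 1)) mask |= 4;  // south
            if (isSolid(x - 1, y)) mask |= 8;  // west
            return mask;
        }

    An 8-neighbour variant (including diagonals, 47 distinct cases after symmetry) gives smoother corners if the art budget allows it.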

  • 2D Skeletal Animation Transformations

    - by Brad Zeis
    I have been trying to build a 2D skeletal animation system for a while, and I believe that I'm fairly close to finishing. Currently, I have the following data structures:

        struct Bone {
            Bone *parent;
            int child_count;
            Bone **children;
            double x, y;
        };

        struct Vertex {
            double x, y;
            int bone_count;
            Bone **bones;
            double *weights;
        };

        struct Mesh {
            int vertex_count;
            Vertex **vertices;
            Vertex **tex_coords;
        };

    Bone->x and Bone->y are the coordinates of the end point of the bone. The starting point is given by (bone->parent->x, bone->parent->y) or (0, 0). Each entity in the game has a Mesh, and Mesh->vertices is used as the bounding area for the entity. Mesh->tex_coords are texture coordinates. In the entity's update function, the position of the Bone is used to change the coordinates of the Vertices that are bound to it. Currently what I have is:

        void Mesh_update(Mesh *mesh) {
            int i, j;
            double sx, sy;
            for (i = 0; i < mesh->vertex_count; i++) {
                if (mesh->vertices[i]->bone_count == 0) {
                    continue;
                }
                sx = sy = 0;
                for (j = 0; j < mesh->vertices[i]->bone_count; j++) {
                    sx += (/* ??? */) * mesh->vertices[i]->weights[j];
                    sy += (/* ??? */) * mesh->vertices[i]->weights[j];
                }
                mesh->vertices[i]->x = sx;
                mesh->vertices[i]->y = sy;
            }
        }

    I think I have everything I need; I just don't know how to apply the transformations to the final mesh coordinates. What transformations do I need here? Or is my approach just completely wrong?
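    For reference, a hedged sketch (C++, translation-only bones mirroring the structs above, names invented) of the linear-blend-skinning pattern that would fill the /* ??? */ slots: store each vertex's bind-pose offset from every influencing bone, then blend (bone world position + offset) by the weights. Rotating bones would need a full per-bone transform, but the blending shape is the same:

        #include <cstddef>

        struct Bone2 { Bone2* parent; double x, y; };  // mirrors Bone above
        struct Vec2  { double x, y; };

        // World position of a bone: sum of offsets up the parent chain.
        Vec2 boneWorld(const Bone2* b) {
            Vec2 p = {0.0, 0.0};
            for (; b != nullptr; b = b->parent) { p.x += b->x; p.y += b->y; }
            return p;
        }

        // Linear blend skinning with translation-only bones: the animated
        // vertex is the weighted sum of (bone world + bind-pose offset).
        Vec2 skinVertex(Bone2* const* bones, const Vec2* bindOffsets,
                        const double* weights, std::size_t boneCount) {
            Vec2 out = {0.0, 0.0};
            for (std::size_t j = 0; j < boneCount; ++j) {
                Vec2 w = boneWorld(bones[j]);
                out.x += (w.x + bindOffsets[j].x) * weights[j];
                out.y += (w.y + bindOffsets[j].y) * weights[j];
            }
            return out;
        }

    The bindOffsets array is the piece missing from the structs in the question: it records where each vertex sat relative to each bone before any animation was applied.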

  • Rotate sphere in Javascript / three.js while moving on x/z axes

    - by kaipr
    I have a sphere/ball in three.js which I want to "roll" around on the x/z plane. For the z axis I could simply do this, no matter what the current x and y rotation is:

        sphere.roll_z = function(distance) {
            sphere.position.z += distance;
            sphere.rotation.x += distance > 0 ? 0.05 : -0.05;
        }

    But how can I roll it along the x axis? And how could I do roll_z properly? I've found a lot about quaternions and matrices, but I can't figure out how to use them properly to achieve my (rather simple) goal. I'm aware that I have to update multiple rotations and that I have to calculate how far to rotate the sphere to match the distance, but the "how" is the question. It's probably just a lack of mathematical skills which I should train, but a working example/short explanation would help a lot to start with.
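    For reference, a hedged sketch of the general relation (plain C++ math, no three.js API assumed): a ball of radius r that moves by vector d in the ground plane rolls about the axis up x d, by angle |d| / r. Applying that axis-angle rotation in world space each frame handles x, z, and diagonal motion uniformly:

        #include <cmath>

        struct Vec3 { double x, y, z; };

        Vec3 cross(const Vec3& a, const Vec3& b) {
            return { a.y * b.z - a.z * b.y,
                     a.z * b.x - a.x * b.z,
                     a.x * b.y - a.y * b.x };
        }

        // Incremental roll for a move of `d` along the ground.
        void rollDelta(const Vec3& up, const Vec3& d, double radius,
                       Vec3& axisOut, double& angleOut) {
            axisOut = cross(up, d);  // perpendicular to the travel direction
            angleOut = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z) / radius;
        }

    In three.js this would be applied as a quaternion built from the normalised axis and the angle, pre-multiplied onto the sphere's current orientation, rather than by adding to the Euler rotation fields.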

  • The best tile based level design [on hold]

    - by ReallyGoodPie
    My current method for tile-based levels is to put everything in an array like the following:

        grass = g
        sky = s
        house = h
        ...

        """
        ["SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS"],
        ["SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS"],
        ["HHHHHHHHHHHHHHSSSSSSSSSSSSSSSS"],
        ["GGGGGGGGGGGGGGGGGGGGGGGGGGGGGG"],
        """

    I would then run a for loop to pass these on to a sprite class, blitting the images to the screen. This is what I'd generally do in pygame when I am creating levels for tile-based RPGs. However, as I've gone on, I have added a lot more sprites and images, and it is seriously becoming more and more confusing to work with this, and a lot of mistakes are being made. What are the best alternatives or other methods for doing this? (A sketch of one option follows below.)
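    One hedged sketch of an incremental improvement (C++ for illustration, though the question is pygame; all names invented): centralise the character-to-tile mapping in one legend table, so adding a sprite is a single new entry rather than another scattered variable. Beyond that, a dedicated map editor that exports this kind of text is the usual next step:

        #include <string>
        #include <unordered_map>
        #include <vector>

        struct TileInfo { std::string image; bool solid; };

        // The whole mapping lives in one place.
        const std::unordered_map<char, TileInfo> legend = {
            {'G', {"grass.png", true}},
            {'S', {"sky.png",   false}},
            {'H', {"house.png", true}},
        };

        const std::vector<std::string> level = {
            "SSSSSSSSSS",
            "HHHHSSSSSS",
            "GGGGGGGGGG",
        };
        // Rendering: for each row y and column x, look up legend.at(level[y][x]).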

  • What exactly is UV and UVW Mapping?

    - by Michael Stum
    I'm trying to understand some basic 3D concepts; at the moment I'm trying to figure out how textures actually work. I know that UV and UVW mapping are techniques that map 2D textures onto 3D objects - Wikipedia told me as much. I googled for explanations but only found tutorials that assumed I already knew what it is. From my understanding, each 3D model is made out of points, and several points create a face? Does each point or face have a secondary coordinate that maps to an x/y position in the 2D texture? Or how does unwrapping manipulate the model? Also, what does the W in UVW really do? What does it offer over UV? As I understand it, W maps to the Z coordinate, but in what situation would I have different textures for the same X/Y and different Z? Wouldn't the Z part be invisible? Or am I completely misunderstanding this?
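    On the first half of the question: yes, in the common case each vertex carries its own texture coordinates, chosen when the model is unwrapped. A hedged C++ sketch of the idea (names invented):

        // Each vertex pairs a 3D position with a 2D location in the image.
        // Unwrapping a model is the process of assigning these (u, v) values;
        // faces then interpolate them across their surface when texturing.
        struct TexturedVertex {
            float x, y, z;  // position in model space
            float u, v;     // texture coordinates in [0, 1] image space
        };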

  • car crash android game

    - by Axarydax
    I'd like to make a simple 2D car-crashing game, where the player would drive his car into moving traffic and try to cause as much damage as possible in each level (some Burnout games had a mode like this). The physics part of the game is the most important; I can worry about graphics later. Would an engine like emini or Box2D work for this kind of game? Would Android devices have enough power to handle it? For example, if there were about 20 cars colliding, along with some buildings, it would be nice if I could get 20 fps.

  • How do I get a rotated sprite to move left or right?

    - by rphello101
    Using Java/Slick2D, I'm using the mouse to rotate a sprite on the screen and the directional keys (in this case, WASD) to move the sprite. Forwards and backwards is easy: just position += cos(ang)*speed or position -= cos(ang)*speed. But how do I get the sprite to move left or right? I'm thinking it has something to do with adding 90 degrees to the angle or something. Any ideas? Rotation code:

        int mX = Mouse.getX();
        int mY = HEIGHT - Mouse.getY();
        int pX = sprite.x + sprite.image.getWidth()/2;
        int pY = sprite.y + sprite.image.getHeight()/2;
        double mAng;
        if (mX != pX) {
            mAng = Math.toDegrees(Math.atan2(mY - pY, mX - pX));
            if (mAng == 0 && mX <= pX)
                mAng = 180;
        } else {
            if (mY > pY) mAng = 90;
            else mAng = 270;
        }
        sprite.angle = mAng;
        sprite.image.setRotation((float) mAng);

    And the movement code (delta is the change in time):

        Input input = gc.getInput();
        Vector2f direction = new Vector2f();
        Vector2f velocity = new Vector2f();
        direction.x = (float) Math.cos(Math.toRadians(sprite.angle));
        direction.y = (float) Math.sin(Math.toRadians(sprite.angle));
        if (direction.length() > 0)
            direction = direction.normalise(); // On a separate note, what does this line of code do?
        velocity.x = (float) (direction.x * sprite.moveSpeed);
        velocity.y = (float) (direction.y * sprite.moveSpeed);
        if (input.isKeyDown(sprite.up)) {
            sprite.x += velocity.x * delta;
            sprite.y += velocity.y * delta;
        }
        if (input.isKeyDown(sprite.down)) {
            sprite.x -= velocity.x * delta;
            sprite.y -= velocity.y * delta;
        }
        if (input.isKeyDown(sprite.left)) {
            // ???
        }
        if (input.isKeyDown(sprite.right)) {
            // ???
        }
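    The 90-degree intuition is the standard approach. As a hedged sketch (plain C++ math, invented names): rotating the facing vector (cos a, sin a) by 90 degrees gives (-sin a, cos a), and moving along that vector strafes; negating it strafes the other way:

        #include <cmath>

        const double kDegToRad = 3.14159265358979323846 / 180.0;

        // Moves (x, y) sideways relative to a facing angle given in degrees.
        // sign = +1 strafes one way, -1 the other (which one is "left"
        // depends on your screen's y direction).
        void strafe(double angleDeg, double speed, double dt, int sign,
                    double& x, double& y) {
            double a = angleDeg * kDegToRad;
            x += sign * -std::sin(a) * speed * dt;
            y += sign *  std::cos(a) * speed * dt;
        }

    (And to the in-code aside: normalise() rescales the vector to length 1, so the movement speed doesn't depend on the vector's original magnitude.)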

  • Registering InputListener in libGDX

    - by JPRO
    I'm just getting started with libGDX and have run into a snag registering an InputListener for a button. I've gone through many examples, and this code appears correct to me, but the associated callback never triggers ("touched" is not printed to the console). I'm just posting the code for the abstract game screen and the implementing screen. The application starts successfully with a label of "Exit" in the bottom left-hand corner, but clicking the button/label does nothing. I'm guessing the fix is something simple. What am I overlooking?

        public abstract class GameScreen<T> implements Screen {
            protected final T game;
            protected final SpriteBatch batch;
            protected final Stage stage;

            public GameScreen(T game) {
                this.game = game;
                this.batch = new SpriteBatch();
                this.stage = new Stage(0, 0, true);
            }

            @Override
            public final void render(float delta) {
                update(delta);
                // Clear the screen with the given RGB color (black).
                Gdx.gl.glClearColor(0f, 0f, 0f, 1f);
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
                stage.act(delta);
                stage.draw();
            }

            public abstract void update(float delta);

            @Override
            public void resize(int width, int height) {
                stage.setViewport(width, height, true);
            }

            @Override
            public void show() {
                Gdx.input.setInputProcessor(stage);
            }

            // hide, pause, resume, dispose
        }

        public class ExampleScreen extends GameScreen<MyGame> {
            private TextButton exitButton;

            public ExampleScreen(MyGame game) {
                super(game);
            }

            @Override
            public void show() {
                super.show();
                TextButton.TextButtonStyle buttonStyle = new TextButton.TextButtonStyle();
                buttonStyle.font = Font.getFont("Origicide", 32);
                buttonStyle.fontColor = Color.WHITE;
                exitButton = new TextButton("Exit", buttonStyle);
                // Note: scene2d only delivers touchUp to a listener whose
                // touchDown returned true; this listener overrides only touchUp.
                exitButton.addListener(new InputListener() {
                    @Override
                    public void touchUp(InputEvent event, float x, float y, int pointer, int button) {
                        System.out.println("touched");
                    }
                });
                stage.addActor(exitButton);
            }

            @Override
            public void update(float delta) {
            }
        }

  • How to determine character's foot contact point on a uniform triangle mesh terrain?

    - by xenon
    For a terrain that is modelled by a heightmap with a uniform triangle mesh, what are some techniques I could use to determine the contact point of the foot of a character standing on the terrain? Since the terrain's Y values are altered by the heightmap, the ground isn't flat any more. As the character moves over the terrain, it has to know what Y value its foot should be at. Conceptually, what are some methods and techniques to determine the contact point of the character's foot standing on the terrain?
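    One standard technique, sketched in C++ under stated assumptions (uniform grid, every cell split along the same diagonal, non-negative world coordinates, all names invented): find the cell under the foot, decide which of its two triangles contains the point, and barycentrically interpolate that triangle's heights:

        const int   GRID = 64;     // heightmap resolution (assumption)
        const float CELL = 1.0f;   // world size of one cell (assumption)
        float height[GRID][GRID];  // per-vertex heights

        float terrainHeight(float wx, float wz) {
            int gx = (int)(wx / CELL), gz = (int)(wz / CELL);
            float fx = wx / CELL - gx;  // fractional position inside the cell
            float fz = wz / CELL - gz;
            float h00 = height[gz][gx],     h10 = height[gz][gx + 1];
            float h01 = height[gz + 1][gx], h11 = height[gz + 1][gx + 1];
            if (fx + fz <= 1.0f)  // triangle touching the (0,0) corner
                return h00 + (h10 - h00) * fx + (h01 - h00) * fz;
            else                  // triangle touching the (1,1) corner
                return h11 + (h01 - h11) * (1.0f - fx) + (h10 - h11) * (1.0f - fz);
        }

    The foot's contact point is then (wx, terrainHeight(wx, wz), wz); for a foot with area, sampling a few points under the sole and taking the maximum is a common refinement.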
