Search Results

Search found 39726 results on 1590 pages for 'iphone development'.


  • How to fix issue with my 3D first person camera?

    - by dxCUDA
    My camera moves and rotates, but relative to the world's origin instead of the player's. I am having difficulty rotating the camera and then translating it in the direction the camera is facing. I have been able to translate the camera and rotate relative to the player's origin, but not to rotate and then translate along the camera's facing direction. My goal is to have a standard FPS-style camera.

        float yaw, pitch, roll;
        D3DXMATRIX rotationMatrix;
        D3DXVECTOR3 Direction;
        D3DXMATRIX matRotAxis, matRotZ;
        D3DXVECTOR3 RotAxis;

        // Set the yaw (Y axis), pitch (X axis), and roll (Z axis) rotations in radians.
        pitch = m_rotationX * 0.0174532925f;
        yaw = m_rotationY * 0.0174532925f;
        roll = m_rotationZ * 0.0174532925f;

        up = D3DXVECTOR3(0.0f, 1.0f, 0.0f); // Create the up vector

        // Build eye, lookat and rotation vectors from player input data
        eye = D3DXVECTOR3(m_fCameraX, m_fCameraY, m_fCameraZ);
        lookat = D3DXVECTOR3(m_fLookatX, m_fLookatY, m_fLookatZ);
        rotation = D3DXVECTOR3(m_rotationX, m_rotationY, m_rotationZ);

        D3DXVECTOR3 camera[3] = { eye, lookat, up };

        RotAxis.x = pitch;
        RotAxis.y = yaw;
        RotAxis.z = roll;

        D3DXVec3Normalize(&Direction, &(camera[1] - camera[0])); // Direction vector
        D3DXVec3Cross(&RotAxis, &Direction, &camera[2]); // Strafe vector
        D3DXVec3Normalize(&RotAxis, &RotAxis);

        // Create the rotation matrix from the yaw, pitch, and roll values.
        D3DXMatrixRotationYawPitchRoll(&matRotAxis, pitch, yaw, roll);

        // Rotate the direction vector
        D3DXVec3TransformCoord(&Direction, &Direction, &matRotAxis);
        // Rotate the up vector
        D3DXVec3TransformCoord(&camera[2], &camera[2], &matRotAxis);
        // Rotate the eye position in the direction of player rotation
        D3DXVec3TransformCoord(&camera[0], &camera[0], &matRotAxis);

        camera[1] = Direction + camera[0]; // Avoid gimbal lock

        D3DXMatrixLookAtLH(&in_viewMatrix, &camera[0], &camera[1], &camera[2]);
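
    One thing worth checking: D3DXMatrixRotationYawPitchRoll takes its angles in (yaw, pitch, roll) order, and the snippet above passes pitch first. Beyond that, the usual FPS-camera structure is to keep only a position plus yaw/pitch angles, rebuild the forward vector from those angles every frame, and translate the position along that forward (or the derived right vector for strafing). A minimal sketch in the same D3DX style; position, viewMatrix, yaw, pitch, moveSpeed and the movement flags are assumed names, not from the original post:

        // Rebuild orientation from the angles alone; never rotate the stored position.
        D3DXMATRIX rot;
        D3DXMatrixRotationYawPitchRoll(&rot, yaw, pitch, 0.0f); // note: yaw first

        // Rotate the reference axes into camera space.
        D3DXVECTOR3 forward(0.0f, 0.0f, 1.0f), up(0.0f, 1.0f, 0.0f);
        D3DXVec3TransformCoord(&forward, &forward, &rot);
        D3DXVec3TransformCoord(&up, &up, &rot);

        // Right vector for strafing (left-handed: right = up x forward).
        D3DXVECTOR3 right;
        D3DXVec3Cross(&right, &up, &forward);

        // Translate relative to where the camera faces.
        if (movingForward)  position += forward * moveSpeed;
        if (strafingRight)  position += right * moveSpeed;

        D3DXVECTOR3 lookAt = position + forward;
        D3DXMatrixLookAtLH(&viewMatrix, &position, &lookAt, &up);

    Because the position is never fed through the rotation matrix, the camera spins in place and then moves along its own facing, instead of orbiting the world origin.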


  • Building a Flash Platformer

    - by Jonathan O
    I am basically making a game where the whole game is run in the onEnterFrame method. This is causing a delay in my code that makes debugging and testing difficult. Should programming an entire platformer in this method be efficient enough for me to run hundreds of lines of code? Also, do variables in Flash get updated immediately? Are there just lots of threads listening at the same time? Here is the code...

        stage.addEventListener(Event.ENTER_FRAME, onEnter);

        function onEnter(e:Event):void {
            // Jumping
            if (Yoshi.y > groundBaseLevel) {
                dy = 0;
                canJump = true;
                onGround = true; // This line is not updated in time
            }
            if (Key.isDown(Key.UP) && canJump) {
                dy = -10;
                canJump = false;
                onGround = false; // This line is not updated in time
            }
            if (!onGround) {
                dy += gravity;
                Yoshi.y += dy;
            }

            // limit screen boundaries
            // character movement
            if (!Yoshi.hitTestObject(Platform)) { // no collision detected
                if (Key.isDown(Key.RIGHT)) {
                    speed += 4;
                    speed *= friction;
                    Yoshi.x = Yoshi.x + movementIncrement + speed;
                    Yoshi.scaleX = 1;
                    Yoshi.gotoAndStop('Walking');
                } else if (Key.isDown(Key.LEFT)) {
                    speed -= 4;
                    speed *= friction;
                    Yoshi.x = Yoshi.x - movementIncrement + speed;
                    Yoshi.scaleX = -1;
                    Yoshi.gotoAndStop('Walking');
                } else {
                    speed *= friction;
                    Yoshi.x = Yoshi.x + speed;
                    Yoshi.gotoAndStop('Still');
                }
            } else { // bounce back, collision detected
                if (Yoshi.hitTestPoint(Platform.x - Platform.width/2, Platform.y - Platform.height/2, false)) {
                    trace('collision left');
                    Yoshi.x -= 20;
                }
                if (Yoshi.hitTestPoint(Platform.x, Platform.y - Platform.height/2, false)) {
                    trace('collision top');
                    onGround = true; // This update is not happening in time
                    speed = 0;
                }
            }
        }


  • Car-like Physics - Basic Maths to Simulate Steering

    - by Reanimation
    As my program stands, I have a cube which I can control using keyboard input. I can make it move left, right, up, down, and back and forth along the axes only. I can also rotate the cube either left or right; all the translations and rotations are implemented using glm.

        if (keys[VK_LEFT]) { // move cube along x axis, negative
            globalPos.x -= moveCube;
            keys[VK_RIGHT] = false;
        }
        if (keys[VK_RIGHT]) { // move cube along x axis, positive
            globalPos.x += moveCube;
            keys[VK_LEFT] = false;
        }
        if (keys[VK_UP]) { // move cube along y axis, positive
            globalPos.y += moveCube;
            keys[VK_DOWN] = false;
        }
        if (keys[VK_DOWN]) { // move cube along y axis, negative
            globalPos.y -= moveCube;
            keys[VK_UP] = false;
        }
        if (FORWARD) { // W - move cube along z axis, positive
            globalPos.z += moveCube;
            BACKWARD = false;
        }
        if (BACKWARD) { // S - move cube along z axis, negative
            globalPos.z -= moveCube;
            FORWARD = false;
        }
        if (ROT_LEFT) { // rotate cube left
            rotX += 0.01f;
            ROT_LEFT = false;
        }
        if (ROT_RIGHT) { // rotate cube right
            rotX -= 0.01f;
            ROT_RIGHT = false;
        }

    I render the cube using this function, which handles the shader and the position on screen:

        void renderMovingCube() {
            glUseProgram(myShader.handle());
            GLuint matrixLoc4MovingCube = glGetUniformLocation(myShader.handle(), "ProjectionMatrix");
            glUniformMatrix4fv(matrixLoc4MovingCube, 1, GL_FALSE, &ProjectionMatrix[0][0]);

            glm::mat4 viewMatrixMovingCube;
            viewMatrixMovingCube = glm::lookAt(camOrigin, camLookingAt, camNormalXYZ);
            ModelViewMatrix = glm::translate(viewMatrixMovingCube, globalPos);
            ModelViewMatrix = glm::rotate(ModelViewMatrix, rotX, glm::vec3(0, 1, 0)); // manually rotate
            glUniformMatrix4fv(glGetUniformLocation(myShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]);

            movingCube.render();
            glUseProgram(0);
        }

    The glm::lookAt function always points at the screen's centre (0,0,0), and globalPos is a glm::vec3 globalPos(0,0,0); so when the program executes, the cube is rendered at the centre of the screen's viewing matrix, and the keyboard inputs above adjust the globalPos of the moving cube. glm::rotate is the function used to rotate manually. My question is: how can I make the cube go forwards depending on what direction it is facing? Once I've rotated the cube a few degrees using glm, the forwards direction relative to the cube is no longer along the z axis. How can I store the forwards direction and then use it to navigate forwards no matter which way the cube is facing (either using vectors that can be applied to my code or some handy maths)? Thanks.
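
    A common approach with glm is to derive the forward vector from the cube's current heading each frame and add that to globalPos, instead of moving along the fixed world axes. A minimal sketch reusing the names above (it assumes rotX is in radians, which is what glm::rotate expects in recent GLM versions; older versions took degrees):

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Rotate the model-space forward axis (+z, the cube's initial facing)
        // by the current heading around the y axis.
        glm::vec3 forwardFor(float heading) {
            glm::mat4 rot = glm::rotate(glm::mat4(1.0f), heading, glm::vec3(0, 1, 0));
            return glm::vec3(rot * glm::vec4(0, 0, 1, 0)); // w = 0: a direction, not a position
        }

        // In the input handling, replace the axis-aligned z moves with:
        if (FORWARD) {
            globalPos += forwardFor(rotX) * moveCube;
            BACKWARD = false;
        }
        if (BACKWARD) {
            globalPos -= forwardFor(rotX) * moveCube;
            FORWARD = false;
        }

    Since the render function keeps applying the same rotX rotation, the cube's visual heading and its movement direction stay in sync; a side vector for strafing falls out of the cross product of the forward vector with the up axis.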


  • How to properly render a Frame Buffer to the BackBuffer in Stage3D / AGAL

    - by bigp
    After doing a render pass with RenderToTarget (RTT), how do you properly render that texture buffer to the screen while maintaining the original scale / proportions, so it doesn't stretch or lose quality? Can an AGAL vertex shader and fragment shader be written so they adapt to any texture size and viewport dimensions? I find I'm getting some "blocky" effects in my first attempts at "ping-ponging" between two texture buffers (to create trailing effects). Perhaps I'm not using the UVs correctly between the render-to-target pass and/or the backbuffer? Is there a simpler way just to "splash" the texture onto the backbuffer, or is a quad absolutely necessary (4 vertices, 2 triangles)? If it needs the quad, should the texture buffer be drawn fully (0.0 to 1.0 for the vertical and horizontal UVs), or only a percentage of it, like the example below?

        Texture Buffer U: 0.0 to viewport.width / texturebuffer.width
        Texture Buffer V: 0.0 to viewport.height / texturebuffer.height

    Thanks!


  • How to stop a tap event from propagating in a XNA / Silverlight game

    - by Mech0z
    I have a Silverlight / XNA game where text and buttons are created in Silverlight while the 3D is done in XNA. The Silverlight controls are drawn on top of the 3D, and I don't want a click on a button to interact with the 3D underneath. So I have:

        private void ButtonPlaceBrick_Tap(object sender, GestureEventArgs e)
        {
            e.Handled = true;

    But my gesture handling on the 3D objects still runs, even though I have set Handled to true:

        private void OnUpdate(object sender, GameTimerEventArgs e)
        {
            while (TouchPanel.IsGestureAvailable)
            {
                // Read the next gesture
                GestureSample gesture = TouchPanel.ReadGesture();
                switch (gesture.GestureType)

    How am I supposed to stop it from propagating?


  • What are the valid DepthBuffer Texture formats in DirectX 11? And which are also valid for a staging resource?

    - by sebf
    I am trying to read the contents of the depth buffer into main memory so that my CPU-side code can do Some Stuff™ with it. I am attempting to do this by creating a staging resource which can be read by the CPU, into which I copy the contents of the depth buffer before reading it. I keep encountering errors, however, because of what I believe are incompatibilities between the resource format and the view formats. Threads like these lead me to believe it is possible in DX11 to access the depth buffer as a resource, and that I can create a resource with a typeless format and have it interpreted in the view as another, but I cannot get it to work. What are the valid formats for the resource to be used as the depth buffer? Which of these are also valid for a CPU-accessible staging resource?
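
    For reference, the combination that generally works in D3D11 is a typeless resource with typed views: create the texture as DXGI_FORMAT_R24G8_TYPELESS (or R32_TYPELESS / R16_TYPELESS for pure depth), the depth-stencil view as D24_UNORM_S8_UINT, and, if it is also sampled, a shader-resource view as R24_UNORM_X8_TYPELESS. The staging copy can use the same typeless format, since CopyResource requires the source and destination formats to match. A sketch, assuming a valid device and context and omitting error handling:

        // Depth buffer resource: typeless, so it can back both a DSV and an SRV.
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = width;
        desc.Height = height;
        desc.MipLevels = 1;
        desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_R24G8_TYPELESS;
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
        ID3D11Texture2D* depthTex = nullptr;
        device->CreateTexture2D(&desc, nullptr, &depthTex);

        // Typed view for writing depth.
        D3D11_DEPTH_STENCIL_VIEW_DESC dsv = {};
        dsv.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
        dsv.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
        ID3D11DepthStencilView* depthView = nullptr;
        device->CreateDepthStencilView(depthTex, &dsv, &depthView);

        // Staging copy the CPU can map: same typeless format, no bind flags.
        D3D11_TEXTURE2D_DESC staging = desc;
        staging.Usage = D3D11_USAGE_STAGING;
        staging.BindFlags = 0;
        staging.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
        ID3D11Texture2D* stagingTex = nullptr;
        device->CreateTexture2D(&staging, nullptr, &stagingTex);

        // Later, after rendering:
        context->CopyResource(stagingTex, depthTex);
        D3D11_MAPPED_SUBRESOURCE mapped;
        context->Map(stagingTex, 0, D3D11_MAP_READ, 0, &mapped);
        // mapped.pData holds rows of 32-bit texels: 24 bits depth, 8 bits stencil.
        context->Unmap(stagingTex, 0);

    Note that the staging texture has no bind flags at all; D3D11 rejects staging resources that are bindable to the pipeline.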


  • My raycaster is putting out strange results, how do I fix it?

    - by JamesK89
    I'm working on a raycaster in ActionScript 3.0 for the fun of it, and as a learning experience. I've got it up and running and it displays the output I expect, but I'm getting a strange bug where rays go through the corners of blocks and the edges of blocks appear through walls. Maybe somebody with more experience can point out what I'm doing wrong, or maybe a fresh pair of eyes can spot a tiny bug I haven't noticed. Thank you so much for your help!

    Screenshots: http://i55.tinypic.com/25koebm.jpg http://i51.tinypic.com/zx5jq9.jpg

    Relevant code:

        function drawScene() {
            rays.graphics.clear();
            rays.graphics.lineStyle(1, rgba(0x00, 0x66, 0x00));
            var halfFov = (player.fov / 2);
            var numRays:int = (stage.stageWidth / COLUMN_SIZE);
            var prjDist = (stage.stageWidth / 2) / Math.tan(toRad(halfFov));
            var angStep = (player.fov / numRays);
            for (var i:int = 0; i < numRays; i++) {
                var rAng = ((player.angle - halfFov) + (angStep * i)) % 360;
                if (rAng < 0) rAng += 360;
                var ray:Object = castRay(player.position, rAng);
                drawRaySlice(i * COLUMN_SIZE, prjDist, player.angle, ray);
            }
        }

        function drawRaySlice(sx:int, prjDist, angle, ray:Object) {
            if (ray.distance >= MAX_DIST) return;
            var height:int = int((TILE_SIZE / (ray.distance * Math.cos(toRad(angle - ray.angle)))) * prjDist);
            if (!height) return;
            var yTop = int((stage.stageHeight / 2) - (height / 2));
            if (yTop < 0) yTop = 0;
            var yBot = int((stage.stageHeight / 2) + (height / 2));
            if (yBot > stage.stageHeight) yBot = stage.stageHeight;
            rays.graphics.moveTo((ray.origin.x / TILE_SIZE) * MINI_SIZE, (ray.origin.y / TILE_SIZE) * MINI_SIZE);
            rays.graphics.lineTo((ray.hit.x / TILE_SIZE) * MINI_SIZE, (ray.hit.y / TILE_SIZE) * MINI_SIZE);
            for (var x:int = 0; x < COLUMN_SIZE; x++) {
                for (var y:int = yTop; y < yBot; y++) {
                    buffer.setPixel(sx + x, y, clrTable[ray.tile - 1] >> (ray.horz ? 1 : 0));
                }
            }
        }

        function castRay(origin:Point, angle):Object {
            // Return values
            var rTexel = 0;
            var rHorz = false;
            var rTile = 0;
            var rDist = MAX_DIST + 1;
            var rMap:Point = new Point();
            var rHit:Point = new Point();
            // Ray angle and slope
            var ra = toRad(angle) % ANGLE_360;
            if (ra < ANGLE_0) ra += ANGLE_360;
            var rs = Math.tan(ra);
            var rUp = (ra > ANGLE_0 && ra < ANGLE_180);
            var rRight = (ra < ANGLE_90 || ra > ANGLE_270);
            // Ray position
            var rx = 0;
            var ry = 0;
            // Ray step values
            var xa = 0;
            var ya = 0;
            // Ray position, in map coordinates
            var mx:int = 0;
            var my:int = 0;
            var mt:int = 0;
            // Distance
            var dx = 0;
            var dy = 0;
            var ds = MAX_DIST + 1;

            // Horizontal intersection
            if (ra != ANGLE_180 && ra != ANGLE_0 && ra != ANGLE_360) {
                ya = (rUp ? TILE_SIZE : -TILE_SIZE);
                xa = ya / rs;
                ry = int(origin.y / TILE_SIZE) * TILE_SIZE + (rUp ? TILE_SIZE : -1);
                rx = origin.x + (ry - origin.y) / rs;
                mx = 0;
                my = 0;
                while (mx >= 0 && my >= 0 && mx < world.size.x && my < world.size.y) {
                    mx = int(rx / TILE_SIZE);
                    my = int(ry / TILE_SIZE);
                    mt = getMapTile(mx, my);
                    if (mt > 0 && mt < 9) {
                        dx = rx - origin.x;
                        dy = ry - origin.y;
                        ds = (dx * dx) + (dy * dy);
                        if (rDist >= MAX_DIST || ds < rDist) {
                            rDist = ds;
                            rTile = mt;
                            rMap.x = mx;
                            rMap.y = my;
                            rHit.x = rx;
                            rHit.y = ry;
                            rHorz = true;
                            rTexel = int(rx % TILE_SIZE);
                        }
                        break;
                    }
                    rx += xa;
                    ry += ya;
                }
            }

            // Vertical intersection
            if (ra != ANGLE_90 && ra != ANGLE_270) {
                xa = (rRight ? TILE_SIZE : -TILE_SIZE);
                ya = xa * rs;
                rx = int(origin.x / TILE_SIZE) * TILE_SIZE + (rRight ? TILE_SIZE : -1);
                ry = origin.y + (rx - origin.x) * rs;
                mx = 0;
                my = 0;
                while (mx >= 0 && my >= 0 && mx < world.size.x && my < world.size.y) {
                    mx = int(rx / TILE_SIZE);
                    my = int(ry / TILE_SIZE);
                    mt = getMapTile(mx, my);
                    if (mt > 0 && mt < 9) {
                        dx = rx - origin.x;
                        dy = ry - origin.y;
                        ds = (dx * dx) + (dy * dy);
                        if (rDist >= MAX_DIST || ds < rDist) {
                            rDist = ds;
                            rTile = mt;
                            rMap.x = mx;
                            rMap.y = my;
                            rHit.x = rx;
                            rHit.y = ry;
                            rHorz = false;
                            rTexel = int(ry % TILE_SIZE);
                        }
                        break;
                    }
                    rx += xa;
                    ry += ya;
                }
            }

            return {
                angle: angle,
                distance: Math.sqrt(rDist),
                hit: rHit,
                map: rMap,
                tile: rTile,
                horz: rHorz,
                origin: origin,
                texel: rTexel
            };
        }


  • SFML programs fails to debug with glslDevil

    - by Zhen
    I'm testing the glslDevil debugger with a simple (and working) SFML application on Linux with an NVIDIA card. But it always fails at the window creation step:

        W! Program Start
        | glXGetConfig(0x86a50b0, 0x86acef8, 4, 0xbf8228c4)
        | glXGetConfig(0x86a50b0, 0x86acef8, 5, 0xbf8228c8)
        | glXGetConfig(0x86a50b0, 0x86acef8, 8, 0xbf8228cc)
        | glXGetConfig(0x86a50b0, 0x86acef8, 9, 0xbf8228d0)
        | glXGetConfig(0x86a50b0, 0x86acef8, 10, 0xbf8228d4)
        | glXGetConfig(0x86a50b0, 0x86acef8, 11, 0xbf8228d8)
        | glXGetConfig(0x86a50b0, 0x86acef8, 12, 0xbf8228dc)
        | glXGetConfig(0x86a50b0, 0x86acef8, 13, 0xbf8228e0)
        | glXGetConfig(0x86a50b0, 0x86acef8, 100000, 0xbf8228e4)
        | glXGetConfig(0x86a50b0, 0x86acef8, 100001, 0xbf8228e8)
        | glXCreateContext(0x86a50b0, 0x86acef8, (nil), 1)
        E! Child process exited
        W! Program termination forced!

    And the code that fails:

        #include <SFML/Graphics.hpp>
        #define GL_GLEXT_PROTOTYPES 1
        #define GL3_PROTOTYPES 1
        #include <GL/gl.h>
        #include <GL/glu.h>
        #include <GL/glext.h>

        int main() {
            sf::RenderWindow window{ sf::VideoMode(800, 600), "Test SFML+GL" };
            bool running = true;
            while (running) {
                sf::Event event;
                while (window.pollEvent(event)) {
                    if (event.type == sf::Event::Closed) {
                        running = false;
                    } else if (event.type == sf::Event::Resized) {
                        glViewport(0, 0, event.size.width, event.size.height);
                    }
                }
                window.display();
            }
            return 0;
        }

    Is it possible to solve this problem, or to work around it so I can continue using glslDevil?


  • Level editor event system, how to translate event to game action

    - by Martino Wullems
    Hello, I've been busy trying to create a level editor for a tile-based game I'm working on. It's all going pretty well; the only thing I'm having trouble with is creating a simple event system. Let's say the player steps on a particular tile that had the action "teleport" assigned to it in the editor. The teleport string is saved in the tile object as a variable. When creating the tile grid, an ActionManager class scans the action variable and assigns actions to it.

        public static class ActionManager {
            public static function ParseTileAction(tile:Tile) {
                switch (tile.action) {
                    case "TELEPORT":
                        // assign action here
                        break;
                }
            }
        }

    Now this is a collision event, so I guess I should also provide an object that collides with the tile. But what if it would have to count for collisions with all objects in the world? Also, checking for collisions in the ActionManager class doesn't seem very efficient. Am I even on the right track here? I'm new to game design, so I could be completely off track. Any tips on how handling and creating events using an editor is usually done would be great. Thanks in advance.
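
    One pattern that keeps the ActionManager out of the collision loop (sketched here in C++ purely for illustration; all names are hypothetical) is to parse the action string once, at load time, into a callback stored on the tile. The collision system then stays generic: whenever any object overlaps a tile, it fires that tile's stored callback and passes the colliding object in, so it works for every object in the world without the action code scanning for collisions itself:

        #include <functional>
        #include <map>
        #include <string>

        struct Entity { /* player, NPC, anything that can touch a tile */ };

        struct Tile {
            std::string action;                    // e.g. "TELEPORT", set in the editor
            std::function<void(Entity&)> onEnter;  // bound once at load time
        };

        // Translate the editor's action string into a callback, once per tile.
        void bindTileAction(Tile& tile) {
            static const std::map<std::string, std::function<void(Entity&)>> actions = {
                { "TELEPORT", [](Entity& e) { /* move e to the teleport target */ } },
                { "DAMAGE",   [](Entity& e) { /* hurt e */ } },
            };
            auto it = actions.find(tile.action);
            if (it != actions.end())
                tile.onEnter = it->second;
        }

        // In the collision pass: fires for *any* entity that steps on the tile.
        void onEntityEnteredTile(Entity& e, Tile& tile) {
            if (tile.onEnter)
                tile.onEnter(e);
        }

    The string comparison happens once per tile at load, not once per frame, and the editor stays decoupled from the runtime: it only ever writes strings.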


  • box2d tween what am I missing

    - by philipp
    I have a Box2D project and I want to tween a kinematic body from position A to position B. The tween function (got it from this blog):

        function easeInOut(t, b, c, d) {
            if ((t /= d / 2) < 1) {
                return c/2 * t * t * t * t + b;
            }
            return -c/2 * ((t -= 2) * t * t * t - 2) + b;
        }

    where t is the current value, b the start, c the end, and d the total amount of frames (in my case). I am using the method introduced by this lesson of Todd's b2d tutorials to move the body by setting its linear velocity, so here is the relevant update code of the sprite:

        if (moveData.current == moveData.total) {
            this._body.SetLinearVelocity(new b2Vec2());
            return;
        }

        var t = easeNone(moveData.current, 0, 1, moveData.total);
        var step = moveData.length / moveData.total * t;

        var dir = moveData.direction.Copy();
        // this is the line that I think might be corrected
        dir.Multiply(t * moveData.length * fps / moveData.total);

        var bodyPosition = this._body.GetWorldCenter();
        var idealPosition = bodyPosition.Copy();
        idealPosition.Add(dir);
        idealPosition.Subtract(bodyPosition.Copy());

        moveData.current++;
        this._body.SetLinearVelocity(idealPosition);

    moveData is an object that holds the global values of the tween, namely: the current frame (int), the total frames (int), the length of the total distance to travel (float), the direction vector (targetposition - bodyposition) (b2Vec2), and the start of the tween (the body's position) (b2Vec2). The goal is to tween the body over a fixed number of frames: moveData.total. The value of t is always between 0 and 1, and the only thing that is not working correctly is the resulting distance the body travels. I need to calculate the multiplier for the direction vector. What am I missing to make it work? Greetings, philipp
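
    Since t already runs from 0 to 1, one way to make the distance come out right is to treat the ease as a position curve rather than a velocity multiplier: each frame, evaluate where along the path the body should be at the next frame, and set the linear velocity to the difference between that point and the current position, scaled by the frame rate. A sketch of the idea in C++/Box2D terms (MoveData and its fields are assumed names; the same arithmetic ports directly back to the JavaScript above):

        #include <Box2D/Box2D.h>

        struct MoveData {
            b2Vec2 start;      // body position when the tween began
            b2Vec2 direction;  // target - start
            int current = 0;
            int total;         // tween length in frames
            float fps;         // step rate, e.g. 60
        };

        // Eased progress in [0, 1] after `frame` of `total` frames
        // (same curve as easeInOut above, with b = 0 and c = 1).
        float easeInOut01(float frame, float total) {
            float t = frame / (total / 2.0f);
            if (t < 1.0f) return 0.5f * t * t * t * t;
            t -= 2.0f;
            return -0.5f * (t * t * t * t - 2.0f);
        }

        void updateTween(b2Body* body, MoveData& m) {
            if (m.current >= m.total) {
                body->SetLinearVelocity(b2Vec2(0.0f, 0.0f));
                return;
            }
            // Where the body should be one frame from now...
            float tNext = easeInOut01(float(m.current + 1), float(m.total));
            b2Vec2 wanted = m.start + tNext * m.direction;
            // ...and the velocity that reaches that point in exactly one frame.
            b2Vec2 v = m.fps * (wanted - body->GetWorldCenter());
            body->SetLinearVelocity(v);
            m.current++;
        }

    Because the velocity is recomputed from the absolute eased position every frame, rounding errors never accumulate, and the body lands exactly on the target at frame total.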


  • Efficient skeletal animation

    - by Will
    I am looking at adopting a skeletal animation format (as prompted here) for an RTS game. The individual representation of each model on-screen will be small, but there will be lots of them! In skeletal animation, e.g. MD5 files, each individual vertex can be attached to an arbitrary number of joints. How can you support this efficiently while doing the interpolation in GLSL? Or do engines do their animation on the CPU? Or do engines set arbitrary limits on the maximum joints per vertex and invoke no-op multiplies for those joints that don't use the maximum number? Are there games that use skeletal animation in an RTS-like setting, thus proving that on integrated graphics cards I have nothing to worry about in going the bones route?
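
    For what it's worth, the common GPU-skinning compromise is exactly the option described last: cap the influences at four joints per vertex (dropping the smallest weights and renormalising at export time), so every vertex carries a fixed-size set of joint indices and weights, and padded slots with weight 0 cost only a few wasted multiplies. A sketch of the vertex layout and the CPU-side form of the blend; the GLSL version is the same arithmetic against a uniform array of joint matrices (names illustrative):

        #include <array>
        #include <cstdint>

        struct Vec3 { float x, y, z; };
        struct Mat4 { float m[16]; };   // column-major

        Vec3 transformPoint(const Mat4& M, const Vec3& p) {
            return { M.m[0] * p.x + M.m[4] * p.y + M.m[8]  * p.z + M.m[12],
                     M.m[1] * p.x + M.m[5] * p.y + M.m[9]  * p.z + M.m[13],
                     M.m[2] * p.x + M.m[6] * p.y + M.m[10] * p.z + M.m[14] };
        }

        // Fixed-size influence set: pad with weight 0 when a vertex uses fewer joints.
        struct SkinnedVertex {
            Vec3 position;
            std::array<std::uint8_t, 4> joints;  // indices into the joint matrix palette
            std::array<float, 4> weights;        // sum to 1; unused slots are 0
        };

        // Linear-blend skinning: a zero weight contributes nothing, so the padded
        // slots are harmless no-op work rather than a correctness problem.
        Vec3 skin(const SkinnedVertex& v, const Mat4* palette) {
            Vec3 out{0, 0, 0};
            for (int i = 0; i < 4; ++i) {
                Vec3 p = transformPoint(palette[v.joints[i]], v.position);
                out.x += v.weights[i] * p.x;
                out.y += v.weights[i] * p.y;
                out.z += v.weights[i] * p.z;
            }
            return out;
        }

    Four influences also fit neatly into one ivec4/vec4 vertex-attribute pair, which is a large part of why engines settle on that limit.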


  • How to Export Flash Animation Data

    - by charliep
    I'd love for my partner, the artist, to be able to animate using Flash movieclips and timelines. Then I, the programmer, would like to read the raw Flash info and re-program it into my engine of choice (which happens to be Torque2D). The data I'd want is:

    - the bitmap images that were used in Flash, like the head and body
    - the links between the images, like where the head connects to the body
    - the motion data from the Flash animation, like move, rotate (at what speed), shear, etc. for the head or arms or whatever

    Is there any way to get this data? Here's what I know so far. There are tools like SWFSheet and Spriteloq that convert the entire Flash animation into a frame-by-frame sprite animation (in a sprite sheet). This would take too much space in my case, so I'd like to avoid that; re-animating on the fly would take much less texture memory. There is a PDF that describes the SWF file format, but NOT the individual components like the movieclips. So does anyone know of a library I can use, or how I can learn more about the movieclip components and whatnot?


  • xna download website source code

    - by Emre Canbazoglu
    I have to download the HTML code of a web site during the game. I take the poster URL of a movie (and other information) from the IMDb web site by scraping the HTML. I have to repeat this download many times during the game, for different movies. I can download and scrape the HTML, but downloading it takes too much time and causes the game to slow down (it freezes while downloading). How can I solve this problem? One approach is to download and scrape all the information before the game, store it in a database, and access it from the database during the game. I think this would work properly, but it is not exactly what I want; it would be better if it were dynamic. I have also thought of using multi-threading, but I am a bit confused about how to implement threading in XNA. I read some articles about it, but it is not so clear: when should I start the thread, and what about the update function, etc.? I need your help, guys.
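
    The usual pattern, whatever the engine, is to kick the download off on a worker thread and let the update loop keep running, polling each frame to see whether the result has arrived. Sketched below in C++ with std::async (the XNA equivalent would be a Task or BackgroundWorker polled from Update); fetchHtml is a hypothetical blocking download function standing in for whatever HTTP client is actually used:

        #include <chrono>
        #include <future>
        #include <string>

        std::string fetchHtml(const std::string& url);  // hypothetical blocking download

        struct PosterRequest {
            std::future<std::string> pending;

            void start(const std::string& url) {
                // Runs fetchHtml on a worker thread; the game loop never blocks on it.
                pending = std::async(std::launch::async, fetchHtml, url);
            }

            // Call once per frame from the update loop.
            bool poll(std::string& htmlOut) {
                if (!pending.valid())
                    return false;
                // Zero-timeout check: is the result ready yet?
                if (pending.wait_for(std::chrono::seconds(0)) != std::future_status::ready)
                    return false;
                htmlOut = pending.get();  // safe: already ready, will not block
                return true;
            }
        };

    Update keeps running at full rate while the request is in flight; when poll() returns true, the HTML is scraped on the game thread, so no locking is needed around the game state.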


  • XNA - Error while rendering a texture to a 2D render target via SpriteBatch

    - by Jared B
    I've got this simple code that uses SpriteBatch to draw a texture onto a RenderTarget2D:

        private void drawScene(GameTime g)
        {
            GraphicsDevice.Clear(skyColor);
            GraphicsDevice.SetRenderTarget(targetScene);
            drawSunAndMoon();
            effect.Fog = true;
            GraphicsDevice.SetVertexBuffer(line);
            effect.MainEffect.CurrentTechnique.Passes[0].Apply();
            GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);
            GraphicsDevice.SetRenderTarget(null);
            SceneTexture = targetScene;
        }

        private void drawPostProcessing(GameTime g)
        {
            effect.SceneTexture = SceneTexture;
            GraphicsDevice.SetRenderTarget(targetBloom);
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null);
            {
                if (Bloom)
                    effect.BlurEffect.CurrentTechnique.Passes[0].Apply();
                spriteBatch.Draw(
                    targetScene,
                    new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height),
                    Color.White);
            }
            spriteBatch.End();
            BloomTexture = targetBloom;
            GraphicsDevice.SetRenderTarget(null);
        }

    Both methods are called from my Draw(GameTime gameTime) function: first drawScene, then drawPostProcessing. The thing is, when I run this code I get an error on the spriteBatch.Draw call: "The render target must not be set on the device when it is used as a texture." I already found the solution, which is to draw the actual render target (targetScene) to a texture so it doesn't create a reference to the loaded render target. However, to my knowledge, the only way of doing this is to write:

        GraphicsDevice.SetRenderTarget(outputTarget);
        SpriteBatch.Draw(inputTarget, ...);
        GraphicsDevice.SetRenderTarget(null);

    which runs into exactly the problem I'm having right now. So, the question I'm asking is: how would I render inputTarget to outputTarget without reference issues?


  • Simple heart container script for 2D game (Unity)?

    - by N1ghtshade3
    I'm attempting to create a simple mobile game (C#) that involves a simple three-heart life system. After searching for hours online, many of the solutions use OnGUI (which is apparently horrible for performance) and the rest are too complicated for me to understand and add to my code. The other solutions involve using a single texture and just hiding part of it when damage is taken. In my game, however, the player should be able to go over three hearts (for example, every 100 points). Sebastian Lague's Zelda-style health is what I'm looking for, but even though it's a tutorial, there is way too much going on that I don't need or can't customize to fit my game. What I have so far is a script called HealthScript.cs which contains a variable lives, and another script, PlayerPhysics.cs, which calls HealthScript and subtracts a life when an enemy is hit. The part I don't get is actually drawing the hearts. I think I understand what needs to happen; I'm just not experienced enough with Unity to know how.

    - The Start function should draw three (or whatever lives is set to) hearts in the top right corner. Since the game should be resolution-independent to accommodate the various sizes of Android devices, I'd rather use scaling than PixelInset.
    - When the player hits an enemy, as detected by PlayerPhysics.cs, it should subtract from lives. I think I have this working using this.GetComponent<HealthScript>().lives -= 1, but I'm not sure it actually works. This should trigger a redraw of the hearts so that there are now two hearts.
    - The same principle would apply for adding hearts when a score is reached, except when lives > maxHeartsPerRow, the new hearts should be drawn below the old ones.

    I realise I don't have much code to show, but believe me, I've tried for quite some time to figure this out and have little to show for it. Any help at all would be welcome; it seems like it shouldn't take much code to put an image on the screen for each life there is, but I haven't found anything yet. Thanks!


  • Gamemaker: Making a bullet Spawn at the enemy it was called from

    - by Strokes
    I'm making a GameMaker game with GML. In this game I have multiple enemies (instances of the same object) on screen at the same time, and I want them all to spawn a bullet at their own location. Instead, every enemy spawns its bullet at one single enemy's position: they all shoot, but the bullets appear in the wrong location. I want the bullet to spawn at the location of the instance it was called from. How do I do this? Thank you for reading my question.

    Code: obj_carrier is the enemy I want to spawn from, and obj_carrier_bullet is the bullet I want to spawn at the carrier's location. There are multiple carriers around the stage. In the step event of the carrier, inside an if statement:

        instance_create(obj_carrier.x, obj_carrier.y, obj_carrier_bullet)


  • Creating blur with an alpha channel, incorrect inclusion of black

    - by edA-qa mort-ora-y
    I'm trying to do a blur on a texture with an alpha channel. Using a typical approach (two-pass, gaussian weighting) I end up with a very dark blur. The reason is that the blurring does not properly account for the alpha channel: it happily blurs in the invisible part of the image, which happens to be black, and thus produces a very dark result. Is there a technique to blur that properly accounts for the alpha channel?
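
    The standard fix is to blur premultiplied colour: multiply each texel's RGB by its alpha before summing, run the weighted sum over all four channels, and divide the result's RGB by the blurred alpha afterwards. Fully transparent black then contributes nothing to the colour. A CPU-side sketch of one horizontal tap loop to show the arithmetic (floating-point RGBA in [0,1]; the same per-sample trick applies inside a shader):

        struct RGBA { float r, g, b, a; };

        // Blur one pixel of `src` with precomputed gaussian weights centred on x.
        // Premultiplying by alpha keeps invisible (black) texels from darkening it.
        RGBA blurTapPremultiplied(const RGBA* src, int width, int x,
                                  const float* weights, int radius) {
            float r = 0, g = 0, b = 0, a = 0;
            for (int i = -radius; i <= radius; ++i) {
                int xi = x + i;
                if (xi < 0) xi = 0;
                if (xi >= width) xi = width - 1;   // clamp at the edges
                const RGBA& s = src[xi];
                float w = weights[i + radius];
                r += w * s.r * s.a;                // premultiply each tap
                g += w * s.g * s.a;
                b += w * s.b * s.a;
                a += w * s.a;                      // alpha is blurred as-is
            }
            // Un-premultiply so the output is straight alpha again.
            if (a > 0.0001f) { r /= a; g /= a; b /= a; }
            return { r, g, b, a };
        }

    If the texture can be authored or converted to premultiplied alpha up front, the per-tap multiply and the final divide disappear, and an ordinary blur becomes correct on its own.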


  • How can I move a polygon edge 1 unit away from the center?

    - by Stephen
    Let's say I have a polygon class that is represented by a list of vector classes as vertices, like so:

        var Vector = function(x, y) {
            this.x = x;
            this.y = y;
        },
        Polygon = function(vectors) {
            this.vertices = vectors;
        };

    Now I make a polygon (in this case, a square) like so:

        var poly = new Polygon([
            new Vector(2, 2),
            new Vector(5, 2),
            new Vector(5, 5),
            new Vector(2, 5)
        ]);

    So, the top edge would be [poly.vertices[0], poly.vertices[1]]. I need to stretch this polygon by moving each edge away from the center of the polygon by one unit, along that edge's normal. For example, the first edge, the top, would move one unit up. The final polygon should look like this new one:

        var finalPoly = new Polygon([
            new Vector(1, 1),
            new Vector(6, 1),
            new Vector(6, 6),
            new Vector(1, 6)
        ]);

    It is important that I iterate, moving one edge at a time, because I will be doing some collision tests after moving each edge. Here is what I tried so far (simplified for clarity), which fails triumphantly:

        for (var i = 0; i < vertices.length; i++) {
            var a = vertices[i],
                b = vertices[i + 1] || vertices[0]; // in case of final vertex
            var ax = a.x, ay = a.y, bx = b.x, by = b.y;

            // get some new perpendicular vectors
            var a2 = new Vector(-ay, ax),
                b2 = new Vector(-by, bx);

            // make into unit vectors
            a2.convertToUnitVector();
            b2.convertToUnitVector();

            // add the new vectors to the original ones
            a.add(a2);
            b.add(b2);

            // the rest of the code, collision tests, etc.
        }

    This makes my polygon start slowly rotating and sliding to the left instead of what I need. Finally, the example shows a square, but the polygons in question could be anything. They will always be convex, and always have their vertices in clockwise order.
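
    The bug in the loop above is that each vertex is offset along a perpendicular built from that vertex's own coordinates, (-y, x), which acts like a rotation about the origin, rather than along the normal of the edge being moved. The perpendicular has to come from the edge direction b - a, and that one normal is then applied to both endpoints of the edge. A sketch of one edge step (in C++ for illustration; it ports straight back to the Vector/Polygon classes above):

        #include <cmath>

        struct Vec2 { double x, y; };

        // Move the edge (a, b) `amount` units along its outward normal, in place.
        // Assumes clockwise winding with y pointing down (screen coordinates).
        void offsetEdge(Vec2& a, Vec2& b, double amount) {
            double dx = b.x - a.x, dy = b.y - a.y;
            double len = std::sqrt(dx * dx + dy * dy);
            if (len == 0) return;                  // degenerate edge
            Vec2 n = { dy / len, -dx / len };      // outward normal of this edge
            a.x += n.x * amount; a.y += n.y * amount;
            b.x += n.x * amount; b.y += n.y * amount;
        }

    With clockwise winding in screen coordinates, (dy, -dx) is the outward side; flip the sign if the convention differs, or check it against the direction from the polygon's centroid to the edge midpoint. After all edges have been processed, each vertex has been shifted once by each of its two incident edges, which for the square above yields exactly the (1,1)…(6,6) result.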


  • Unity: Spin wheels to move vehicle

    - by Paul Manta
    I am just getting started with Unity and I'd like to ask a question. If I have a "Vehicle" object that has two children, "FrontWheel" and "BackWheel" (both wheels are cylinders), how should I set everything up so that I can move the entire vehicle by turning its wheels? When I apply a torque to "FrontWheel", the vehicle starts to move, but instead of the whole thing moving together, the chassis rolls on the cylinders and eventually falls off. How can I prevent it from doing that?


  • Packing jar files into library jar files

    - by Hillel
    Firstly, this question is not about packing a simple jar file (e.g. lwjgl) into a runnable jar file. I know how to do that using JarSplice. So if I have a game which uses JInput, I will pack my game jar and jinput.jar using JarSplice and enter the natives in the process. The problem arises when I want to create a custom library that uses JInput, and then pack that into my games. See, the whole idea of writing a game library is that I don't ever have to copy code like the wrapper I wrote for JInput's Controller, and I always have a definitive version inside a library jar. Basically, what I want to do is create a jar file of my library, pack jinput.jar into it using JarSplice, possibly with the natives as well, and then, when I want to export a jar of my game, either export it automatically through Eclipse with the library jar or, if that doesn't work, use JarSplice. I've tried several solutions, and nothing works. When I try to pack the game jar and the library jar using JarSplice, I get an error saying that there's either a duplicate .project or .classpath. When I try to export my game through Eclipse with the library jar, it won't run (which is to be expected), but then, if I try to attach the natives with JarSplice, it doesn't give me any errors and the jar still doesn't run. I'm not expecting anyone to solve this, but if anyone has an idea, something that will allow me to never look at the gamepad code ever again, that would be awesome. I don't care if I have to package my library jar using JarSplice five times, and then do the same with the game jar, as long as it works. Otherwise I'll just have to copy the Gamepad class into every project alongside the library jar. :(


  • How can I draw an arrow at the edge of the screen pointing to an object that is off screen?

    - by Adam Henderson
    I am wishing to do what is described in this topic: http://www.allegro.cc/forums/print-thread/283220

    I have attempted a variety of the methods mentioned there. First I tried the method described by Carrus85:

        Just take the ratio of the two triangle hypotenuses (doesn't matter which
        triangle you use for the other, I suggest point 1 and point 2 as the distance
        you calculate). This will give you the aspect ratio percentage of the triangle
        in the corner from the larger triangle. Then you simply multiply deltax by
        that value to get the x-coordinate offset, and deltay by that value to get
        the y-coordinate offset.

    But I could not find a way to calculate how far the object is from the edge of the screen. I then tried ray casting (which I have never done before), as suggested by 23yrold3yrold:

        Fire a ray from the center of the screen to the offscreen object. Calculate
        where on the rectangle the ray intersects. There's your coordinates.

    I first calculated the hypotenuse of the triangle formed by the difference in x and y positions of the two points. I used this to create a unit vector along that line. I looped along that vector until either the x coordinate or the y coordinate was off the screen; the two current x and y values then form the x and y of the arrow. Here is the code for my ray-casting method (written in C++ and Allegro 5):

        void renderArrows(Object* i)
        {
            float x1 = i->getX() + (i->getWidth() / 2);
            float y1 = i->getY() + (i->getHeight() / 2);
            float x2 = screenCentreX;
            float y2 = ScreenCentreY;

            float dx = x2 - x1;
            float dy = y2 - y1;

            float hypotSquared = (dx * dx) + (dy * dy);
            float hypot = sqrt(hypotSquared);

            float unitX = dx / hypot;
            float unitY = dy / hypot;

            float rayX = x2 - view->getViewportX();
            float rayY = y2 - view->getViewportY();

            float arrowX = 0;
            float arrowY = 0;

            bool posFound = false;
            while (posFound == false)
            {
                rayX += unitX;
                rayY += unitY;
                if (rayX <= 0 || rayX >= screenWidth ||
                    rayY <= 0 || rayY >= screenHeight)
                {
                    arrowX = rayX;
                    arrowY = rayY;
                    posFound = true;
                }
            }

            al_draw_bitmap(sprite, arrowX - spriteWidth, arrowY - spriteHeight, 0);
        }

    This was relatively successful. However, arrows are displayed in the bottom-right section of the screen when objects are located above and to the left of the screen, as if the locations where the arrows are drawn have been rotated 180 degrees around the center of the screen. I assumed this was because the hypotenuse of the triangle is always positive, regardless of whether the difference in x or in y is negative. Thinking about it, ray casting does not seem like a good way of solving the problem (due to the fact that it involves using sqrt() and a large loop). Any help finding a suitable solution would be greatly appreciated. Thanks, Adam
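
    A way to avoid both the per-step loop with sqrt() and the 180-degree mirroring is to clip the centre-to-object line against the screen rectangle analytically. Write the line as p(t) = centre + t * d with d = objectPos - centre; compute the t at which it crosses each border, and keep the smallest positive one. Because t keeps its sign, objects above and to the left no longer land in the bottom-right. A sketch (screen coordinates, hypothetical names; assumes the centre lies inside the screen and the object is not exactly at the centre):

        #include <cmath>

        struct Point { float x, y; };

        // Where the ray from the screen centre toward `target` (both in screen
        // space) exits the rectangle [0, w] x [0, h].
        Point arrowPosition(Point centre, Point target, float w, float h)
        {
            float dx = target.x - centre.x;
            float dy = target.y - centre.y;

            float tMin = INFINITY;
            // t at which the ray crosses each border; only forward hits qualify.
            if (dx > 0) tMin = std::fmin(tMin, (w - centre.x) / dx);  // right edge
            if (dx < 0) tMin = std::fmin(tMin, (0 - centre.x) / dx);  // left edge
            if (dy > 0) tMin = std::fmin(tMin, (h - centre.y) / dy);  // bottom edge
            if (dy < 0) tMin = std::fmin(tMin, (0 - centre.y) / dy);  // top edge

            return { centre.x + tMin * dx, centre.y + tMin * dy };
        }

    The returned point is where to draw the arrow, and atan2(dy, dx) gives the angle to rotate the sprite so it points at the object.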


  • Why isn't my lighting working properly? Are my normals messed up?

    - by Radek Slupik
    I'm relatively new to OpenGL, and I am trying to draw a 3D model (loaded from a 3ds file using lib3ds) with lighting, but about half of it is drawn in black. I set up the light like this:

        glEnable(GL_LIGHTING);
        glShadeModel(GL_SMOOTH);
        GLfloat ambientColor[] = {0.2f, 0.2f, 0.2f, 1.0f};
        glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientColor);

        glEnable(GL_LIGHT0);
        GLfloat lightColor0[] = {1.0f, 1.0f, 1.0f, 1.0f};
        GLfloat lightPos0[] = {4.0f, 0.0f, 8.0f, 0.0f};
        glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor0);
        glLightfv(GL_LIGHT0, GL_POSITION, lightPos0);

    The model is in a VBO and drawn using glDrawArrays. The normals are in a separate VBO and are calculated using lib3ds_mesh_calculate_vertex_normals:

        std::vector<std::array<float, 3>> normals;
        for (std::size_t i = 0; i < model->nmeshes; ++i) {
            auto& mesh = *model->meshes[i];
            std::vector<float[3]> vertex_normals(mesh.nfaces * 3);
            lib3ds_mesh_calculate_vertex_normals(&mesh, vertex_normals.data());
            for (std::size_t j = 0; j < mesh.nfaces; ++j) {
                auto& face = mesh.faces[j];
                normals.push_back(make_array(vertex_normals[j]));
            }
        }
        glBindBuffer(GL_ARRAY_BUFFER, normal_vbo_);
        glBufferData(GL_ARRAY_BUFFER,
                     normals.size() * sizeof(decltype(normals)::value_type),
                     normals.data(), GL_STATIC_DRAW);

    The problem isn't the vertices; the model is drawn correctly as a wireframe. I also fixed the normals in Blender using Ctrl+N. What could be the problem? Should I store the normals in a different order?
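
    A detail worth a second look (an observation, not a certain diagnosis): the buffer is sized nfaces * 3 because lib3ds_mesh_calculate_vertex_normals fills one normal per face corner, yet the loop pushes only vertex_normals[j], one entry per face, so most corners end up paired with the wrong normal, which fits a model that lights correctly only in patches. The indexing that matches the per-corner layout would be:

        // Entries 3*j, 3*j+1 and 3*j+2 hold the normals of face j's three corners.
        for (std::size_t j = 0; j < mesh.nfaces; ++j) {
            for (int k = 0; k < 3; ++k) {
                normals.push_back(make_array(vertex_normals[3 * j + k]));
            }
        }

    Separately, std::vector<float[3]> is not a valid element type on conforming compilers; std::vector<std::array<float, 3>> with its .data() reinterpret_cast to float(*)[3] is the portable spelling.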


  • Understanding normal maps on terrain

    - by JohnB
    I'm having trouble understanding some of the math behind normal-map textures. Even though I've got it working using borrowed code, I want to understand it. I have a terrain based on a heightmap; I'm generating a mesh of triangles at load time and rendering that mesh. For each vertex I need to calculate a normal, a tangent, and a bitangent. My understanding is as follows; have I got this right?

    - The normal is a unit vector facing outwards from the surface of the triangle. For a vertex, I take the average of the normals of the triangles using that vertex.
    - The tangent is a unit vector in the direction of the u coordinates of the texture map. As my texture u,v coordinates follow the x and y coordinates of the terrain, my understanding is that this is simply the vector along the surface in the x direction. So I should be able to calculate it as the difference between vertices in the x direction, normalized.
    - The bitangent is a unit vector in the direction of the v coordinates of the texture map. By the same reasoning, it should be the difference between vertices in the y direction, normalized.

    However, the code I have borrowed seems much more complicated than this, and takes into account the actual values of u and v at each vertex, which I don't understand the need for, as they increase in exactly the same direction as x and y. I implemented what I described above, and it simply doesn't work; the normals are clearly not working for lighting. Have I misunderstood something? Or can someone explain the physical meaning of the tangent and bitangent vectors when applied to a mesh generated from a heightmap like this, where the u and v texture coordinates map along the x and y directions? Thanks for any help understanding this.
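
    For a grid where u,v track x,y one-to-one, that reading is essentially right, and central differences give all three vectors directly; the general per-triangle tangent-space math in borrowed code only becomes necessary when the UVs are not aligned with the grid axes. A sketch under those assumptions (z stores the height, grid spacing of 1, and hL/hR/hD/hU are the sampled heights at x-1, x+1, y-1, y+1; all names illustrative):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        Vec3 normalize(Vec3 v) {
            float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
            return { v.x / len, v.y / len, v.z / len };
        }

        // Central differences on the height grid: u follows +x, v follows +y.
        void terrainBasis(float hL, float hR, float hD, float hU,
                          Vec3& tangent, Vec3& bitangent, Vec3& normal) {
            tangent   = normalize({ 2.0f, 0.0f, hR - hL });  // along +x / +u
            bitangent = normalize({ 0.0f, 2.0f, hU - hD });  // along +y / +v
            // The normal is their cross product (tangent x bitangent).
            normal = normalize({
                tangent.y * bitangent.z - tangent.z * bitangent.y,
                tangent.z * bitangent.x - tangent.x * bitangent.z,
                tangent.x * bitangent.y - tangent.y * bitangent.x,
            });
        }

    If lighting is still wrong with vectors this simple, the usual suspect is handedness: the (tangent, bitangent, normal) basis must match the normal map's convention, and flipping the bitangent (or the map's green channel) is often the whole fix.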


  • Cocos2dx- Draw primitives(polygons) on Update

    - by Haider
    In my game I'm trying to draw polygons on each step, i.e. in the update method. I call the draw() method to draw a new polygon with dynamic vertices. Following is my code:

        void HelloWorld::draw()
        {
            glLineWidth(1);
            CCPoint filledVertices[] = {
                ccp(drawX1, drawY1),
                ccp(drawX2, drawY2),
                ccp(drawX3, drawY3),
                ccp(drawX4, drawY4)
            };
            ccDrawSolidPoly(filledVertices, 4, ccc4f(0.5f, 0.5f, 1, 1));
        }

    I call the draw() method from the update(float dt) method. The engine is behaving inconsistently: sometimes it displays the polygons and sometimes it does not. Is this the right approach for such a task? If not, what is the best way to display a large number of primitives?
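
    For what it's worth, in cocos2d-x the draw() method is meant to be invoked by the engine's render pass once per frame, not called by hand; issuing ccDrawSolidPoly from update() runs GL commands outside that pass, which would explain polygons appearing only sometimes. The usual split, sketched under that assumption (scheduleUpdate() must have been called so update() runs):

        // update(): change only the data; issue no GL calls here.
        void HelloWorld::update(float dt)
        {
            drawX1 += 10.0f * dt;  // example: animate the polygon's vertices
            // ...update the remaining vertices as the game requires...
        }

        // The engine calls draw() each frame at the right point in the render pass.
        void HelloWorld::draw()
        {
            CCPoint filledVertices[] = {
                ccp(drawX1, drawY1),
                ccp(drawX2, drawY2),
                ccp(drawX3, drawY3),
                ccp(drawX4, drawY4)
            };
            ccDrawSolidPoly(filledVertices, 4, ccc4f(0.5f, 0.5f, 1, 1));
        }

    For large numbers of polygons, building one vertex array and drawing it in a single call (or using CCDrawNode, if the cocos2d-x version in use has it) scales better than one ccDrawSolidPoly call per polygon.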


  • Car engine sound simulation

    - by Petteri Hietavirta
    I have been thinking about how to create realistic sound for a car. The main sound is the engine; then there are all kinds of wind, road and suspension sounds. Are there any open-source projects for engine-sound simulation? Simply pitching up a sample does not sound too great. The ideal would be something that allows me to pick the type of engine (i.e. inline-4 vs V8), add extras like turbo/supercharger whine, and finally set the load and RPM. Edit: something like http://www.sonory.org/examples.html
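
    Short of a full simulator, the technique most games use is banked crossfading: record or synthesise short engine loops at several fixed RPMs, then at runtime play the two loops bracketing the current RPM, pitch each toward the exact RPM, and crossfade between them (load typically just selects a different bank, e.g. on-throttle vs off-throttle recordings). A sketch of the per-frame mixing math; all names are illustrative, and the gain/pitch values would be fed to whatever audio engine is in use:

        #include <cstddef>
        #include <vector>

        struct EngineVoice {     // one looped sample, recorded at a known RPM
            float recordedRpm;
            float gain;          // 0..1
            float pitch;         // playback-rate multiplier
        };

        // Given the current RPM, set gain/pitch across a bank sorted by RPM.
        // (Clamp rpm into the bank's range first in real use.)
        void mixEngineLoops(std::vector<EngineVoice>& bank, float rpm) {
            for (auto& v : bank) { v.gain = 0.0f; v.pitch = 1.0f; }
            for (std::size_t i = 0; i + 1 < bank.size(); ++i) {
                EngineVoice& lo = bank[i];
                EngineVoice& hi = bank[i + 1];
                if (rpm < lo.recordedRpm || rpm > hi.recordedRpm) continue;
                // Linear crossfade between the two bracketing loops...
                float t = (rpm - lo.recordedRpm) / (hi.recordedRpm - lo.recordedRpm);
                lo.gain = 1.0f - t;
                hi.gain = t;
                // ...and pitch each loop so both play at the target RPM.
                lo.pitch = rpm / lo.recordedRpm;
                hi.pitch = rpm / hi.recordedRpm;
                break;
            }
        }

    An equal-power fade (taking the square root of each gain) usually sounds smoother than the linear ramp shown here, and because each loop is only ever pitched a small fraction of an octave toward its neighbour, the "sped-up sample" artifact largely disappears.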

