Search Results

Search found 19338 results on 774 pages for 'game loop'.


  • Bitmap rotation jitter around pivot

    - by Manderin87
    I am working on an Asteroids clone, and I have the ship graphic loaded as a 96x96 bitmap. When the player rotates the ship, I rotate the bitmap by a degree value (float).

    Rotation function:

        if (m_Matrix == null) {
            m_Matrix = new Matrix();
        } else {
            m_Matrix.reset();
        }
        m_Matrix.setRotate(degree, m_BaseImage.getWidth() / 2, m_BaseImage.getHeight() / 2);
        m_RotatedImage = Bitmap.createBitmap(m_BaseImage, 0, 0, m_BaseImage.getWidth(),
                m_BaseImage.getHeight(), m_Matrix, true);

    Draw function:

        m_Paint.setAntiAlias(true);
        m_Paint.setFilterBitmap(true);
        m_Paint.setDither(true);
        canvas.drawBitmap(m_RotatedImage,
                (int) posX - m_RotatedImage.getWidth() / 2,
                (int) posY - m_RotatedImage.getHeight() / 2,
                m_Paint);

    When the bitmap is drawn, it jitters slightly around the pivot. Can anyone tell me why it is jittering, and how to make it smooth?
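
    A note on a likely cause: Bitmap.createBitmap() with a rotation matrix returns a bitmap whose bounding box grows and shrinks with the angle, and the (int) casts truncate the centering offsets, so the drawn center can shift by up to a pixel from frame to frame. Below is a minimal sketch of an alternative that skips the per-frame createBitmap() and rotates the base image at draw time, keeping every coordinate in floats (drawShip() is a made-up name; degree, posX, posY, m_BaseImage and m_Paint are assumed to be the fields from the code above):

        // Rotate the unscaled base image around its center at draw time.
        private final Matrix m_DrawMatrix = new Matrix();

        void drawShip(Canvas canvas) {
            m_DrawMatrix.reset();
            // Pivot on the bitmap's own center, using float math throughout.
            m_DrawMatrix.setRotate(degree,
                    m_BaseImage.getWidth() / 2f, m_BaseImage.getHeight() / 2f);
            // Place that center exactly on (posX, posY).
            m_DrawMatrix.postTranslate(posX - m_BaseImage.getWidth() / 2f,
                    posY - m_BaseImage.getHeight() / 2f);
            canvas.drawBitmap(m_BaseImage, m_DrawMatrix, m_Paint);
        }

    Because the destination is computed by the matrix in floating point, there is no per-frame re-rasterization or integer snapping left to cause jitter.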

  • IDirect3DDevice9::GetRenderTargetData() returns no data

    - by P. Avery
    I've got a simple function to get the render-target data of an RT (in the default pool). This particular RT has a resolution of 1x1 (it's the 10th and final mip of a texture). Here is my code to get the data for IDirect3DSurface9 *pTargetSurface:

        IDirect3DSurface9 *pSOS = NULL;
        pd3dDevice->CreateOffscreenPlainSurface( 1, 1, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &pSOS, NULL );

        // get residual energy
        if( FAILED( hr = pd3dDevice->GetRenderTargetData( pTargetSurface, pSOS ) ) )
        {
            DebugStringDX( ClassName, "Failed to IDirect3DDevice9::GetRenderTargetData() at DownsampleArea()", __LINE__, hr );
            goto Exit;
        }

        // lock surface
        if( FAILED( hr = pSOS->LockRect( &rct, NULL, D3DLOCK_READONLY ) ) )
        {
            DebugStringDX( ClassName, "Failed to IDirect3DSurface9::LockRect() at DownsampleArea()", __LINE__, hr );
            goto Exit;
        }

        // get residual energy from downsampled texture
        pByte = ( BYTE* )rct.pBits;
        D3DXVECTOR4 vEnergy;
        vEnergy.z = ( float )pByte[ 0 ] / 255.0f;
        vEnergy.y = ( float )pByte[ 1 ] / 255.0f;
        vEnergy.x = ( float )pByte[ 2 ] / 255.0f;
        vEnergy.w = ( float )pByte[ 3 ] / 255.0f;

        V( pSOS->UnlockRect() );

    All formatting and settings are correct, and DirectX in debug mode shows no errors. The problem is that the four bytes above are all 0. I know this to be incorrect from debugging with PIX: it shows that the RGB channels are 0.078 and alpha is 1. These values are not below what a single byte can represent (1/255). Any ideas? Am I copying the render-target data correctly?

  • Render 2D textures on a 3D object's face

    - by www.Sillitoy.com
    I am not familiar with 3D graphics, and I'd like to know the right way to render some 2D figures at different points on one wide face of a 3D object. My 3D object is just a cube representing a poker table. I have a 2D PNG for the players' placeholders, and I'd like to render that figure on the 3D object wherever it is needed. An alternative solution would be to render the whole face with one big picture containing all the placeholder figures, but that would waste memory and thus be less efficient. What do you suggest?

  • TGA loader: reverse y-axis

    - by aVoX
    I've written a TGA image loader in Java which is working perfectly for files created with GIMP as long as they are saved with the option origin set to Top Left (Note: Actually TGA files are meant to be stored upside down - Bottom Left in GIMP). My problem is that I want my image loader to be capable of reading all different kinds of TGA, so my question is: How do I flip the image upside down? Note that I store all image data inside a one-dimensional byte array, because OpenGL (glTexImage2D to be specific) requires it that way. Thanks in advance.
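
    The usual fix is to swap rows of the one-dimensional array: row y exchanges with row (height - 1 - y). Below is a minimal sketch assuming a tightly packed array with no row padding (flipVertically is a made-up name); whether a flip is needed at all can be read from bit 5 of the TGA image-descriptor byte at offset 17, which is set for top-left-origin files.

        // Swap rows top-to-bottom in place; works for 24- and 32-bit TGA data alike.
        static void flipVertically(byte[] pixels, int width, int height, int bytesPerPixel) {
            int rowSize = width * bytesPerPixel;
            byte[] tempRow = new byte[rowSize];
            for (int y = 0; y < height / 2; y++) {
                int top = y * rowSize;
                int bottom = (height - 1 - y) * rowSize;
                System.arraycopy(pixels, top, tempRow, 0, rowSize);     // save top row
                System.arraycopy(pixels, bottom, pixels, top, rowSize); // bottom -> top
                System.arraycopy(tempRow, 0, pixels, bottom, rowSize);  // saved -> bottom
            }
        }

    The flipped array can then be handed to glTexImage2D unchanged.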

  • How would I use JBox2d in Java?

    - by BluFire
    So I did some research and found Box2D. I then proceeded to download it along with the testbed. Now that I have it, I don't know how to properly use it, and I'm looking for a clear, simple answer on how to use the engine. What I did was put it into a lib folder and reference the JBox2D jar file. After that I got stuck. How can I use this to program games for Android? I'm very confused, since Box2D was intended for C++.
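
    For orientation, the Java port mirrors the C++ API almost one to one. Below is a minimal sketch of a JBox2D "hello world" (a dynamic box falling under gravity); class and method names follow JBox2D 2.x, and older versions may take an extra doSleep flag in the World constructor, so treat the exact signatures as assumptions:

        import org.jbox2d.collision.shapes.PolygonShape;
        import org.jbox2d.common.Vec2;
        import org.jbox2d.dynamics.*;

        public class HelloJBox2d {
            public static void main(String[] args) {
                World world = new World(new Vec2(0f, -10f)); // gravity in m/s^2

                BodyDef def = new BodyDef();
                def.type = BodyType.DYNAMIC;
                def.position.set(0f, 10f);                   // 10 m above the origin
                Body body = world.createBody(def);

                PolygonShape box = new PolygonShape();
                box.setAsBox(0.5f, 0.5f);                    // half-extents in meters
                body.createFixture(box, 1.0f);               // shape + density

                for (int i = 0; i < 60; i++) {
                    world.step(1f / 60f, 6, 2);              // dt, velocity/position iterations
                    System.out.println(body.getPosition()); // watch it fall
                }
            }
        }

    On Android the same loop runs unchanged; only the rendering layer on top of it differs.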

  • Making a Living Developing Games

    - by cable729
    I'm in my last year of high school, and I've been looking at colleges. I'm taking a C++ class at a local community college and I don't feel that it's worth it; I could have learned everything in that class in a week. This had me thinking: would a CS degree even be worth it? How much can it teach me if I can learn everything on my own? Even if I do need to learn more advanced subjects, many colleges put their material online, and I can buy a book. Will companies hire me if I don't have a CS degree? If I have a portfolio, will I stand a chance? What kinds of things are needed in the portfolio? I want to make a living doing what I love: programming. So I will do it. I'm just not sure that a CS degree will do anything for me. In addition, if there is a benefit to getting a CS degree, which places are the best?

  • What is the purpose of the canonical view volume?

    - by breadjesus
    I'm currently learning OpenGL and haven't been able to find an answer to this question. After the projection matrix is applied to the view space, the view space is "normalized" so that all the points lie within the range [-1, 1]. This is generally referred to as the "canonical view volume" or "normalized device coordinates". While I've found plenty of resources telling me about how this happens, I haven't seen anything about why it happens. What is the purpose of this step?

  • Drawing application on OpenGL for iOS (iPad)

    - by Alesia
    Some help is needed. I'm developing a drawing application in OpenGL (deployment target 4.0) for iOS (iPad). We have 3 drawing tools: pen, marker (with alpha) and eraser. I draw with textures, using blending in an orthographic projection. I can't use z-ordering, because in that case I would have to deal with a lot of trouble around cutting and erasing. What I need is to make the pen always stay on top. When I first use the marker and then the pen, it's fine. But if I use the pen first and the marker over the pen, I can't see the pen color under the marker. I'd appreciate any help or advice. Thank you very much!

  • Randomly spawning bitmaps on canvas

    - by Toystoj
    I need some ideas in order to finish an algorithm. I'm randomly placing objects (bitmaps) on a canvas without overlapping, and the time needed to finish is my problem: when I need to fill, for example, 80% of the canvas, it takes too long. So I was thinking I should change the approach once the bitmaps take up 50% of the canvas, and have the algorithm generate new locations (x, y) only where there is free space. My question is: how do I generate a new location (x, y) that is guaranteed to be in free space? In summary, the things I know are: each object's location (x, y), the 4 corners (x, y) of each object, the object width and height, and the canvas width and height. Any suggestions?
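
    One way to guarantee that every pick lands in free space is to stop sampling coordinates and sample cells instead. Below is a minimal sketch under the assumption that all bitmaps share one size (CellSpawner is a made-up name): divide the canvas into bitmap-sized cells, shuffle them once, and pop one per spawn. There is no retry loop, so the cost stays flat even at 80% coverage.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;
        import java.util.Random;

        class CellSpawner {
            private final List<int[]> freeCells = new ArrayList<>();

            CellSpawner(int canvasW, int canvasH, int objW, int objH, Random rng) {
                // Pre-compute every non-overlapping slot once.
                for (int y = 0; y + objH <= canvasH; y += objH)
                    for (int x = 0; x + objW <= canvasW; x += objW)
                        freeCells.add(new int[] { x, y });
                Collections.shuffle(freeCells, rng);   // random order, no repeats
            }

            // Top-left corner of a free spot, or null when the canvas is full.
            int[] nextSpawnPoint() {
                return freeCells.isEmpty() ? null : freeCells.remove(freeCells.size() - 1);
            }
        }

    For mixed bitmap sizes the same idea works with a coarser occupancy grid, marking every cell a placed bitmap touches.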

  • What is the point in using real time?

    - by bobobobo
    I understand that a lot of frameworks provide the real elapsed frame time (which should average around 16-17 ms): GetTimeElapsedSinceLastFrame, which gives you the wall-clock time. But should we use this information in a basic physics simulation? It looks to me like a bad idea. Say there is a slight lag on the machine, for whatever reason (say a virus scanner starts up). The calculations all jump, and there is no need for this. Why not use a virtual second and ignore wall-clock time? For gameplay on the level of Commander Keen, shouldn't you always use the virtual second and not real time (besides stopwatch timing for racing games)? I don't see a need to use real time instead of a fixed 16 ms time step.
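
    For reference, the common middle ground is a fixed timestep driven by an accumulator: the simulation always advances in exact 16 ms virtual steps, and the wall clock only decides how many steps to run each frame, so a hiccup produces a few catch-up steps rather than one huge jump. A minimal Java sketch of the idea (running, updatePhysics and render are placeholders, not real APIs):

        // Fixed-timestep loop: the dt handed to the simulation never varies.
        static final long STEP_NANOS = 16_000_000L;    // one virtual step of 16 ms

        static void gameLoop() {
            long accumulator = 0;
            long previous = System.nanoTime();

            while (running) {
                long now = System.nanoTime();
                accumulator += now - previous;         // wall-clock time since last frame
                previous = now;

                while (accumulator >= STEP_NANOS) {
                    updatePhysics(0.016f);             // always the same virtual-second slice
                    accumulator -= STEP_NANOS;
                }
                render();                              // draw as fast as the machine allows
            }
        }

    This keeps the determinism of the virtual second while still staying in sync with real time on average.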

  • Why is my collision detection not accurate?

    - by optimisez
    After trying and trying, I still cannot understand why the character's leg pokes through the wall, yet there is no clipping issue when I hit the wall from below. How should I fix it so that he stands still on the wall? From the collideWithBox() function below, playerDest.Y = boxDest.Y - boxDest.height; should give the position where the character stands still on the wall. In theory no clipping should happen, since hitting the box from below works with the equation playerDest.Y = boxDest.Y + boxDest.height;.

        void collideWithBox()
        {
            if ( spriteCollide(playerDest, boxDest) && keyArr[VK_UP] )
                //playerDest.Y += 50;
                playerDest.Y = boxDest.Y + boxDest.height;
            else if ( spriteCollide(playerDest, boxDest) && !keyArr[VK_UP] )
                playerDest.Y = boxDest.Y - boxDest.height;
        }

        void initPlayer()
        {
            // Create texture.
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "player.png", 169, 44, D3DX_DEFAULT, NULL,
                D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT,
                D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &player);

            playerRect.left = playerRect.top = 0;
            playerRect.right = 29;
            playerRect.bottom = 36;

            playerDest.X = 0;
            playerDest.Y = 564;
            playerDest.length = playerRect.right - playerRect.left;
            playerDest.height = playerRect.bottom - playerRect.top;
        }

        void initBox()
        {
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "brock.png", 330, 132, D3DX_DEFAULT, NULL,
                D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT,
                D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &box);

            boxRect.left = 33;
            boxRect.top = 0;
            boxRect.right = 63;
            boxRect.bottom = 30;

            boxDest.X = boxDest.Y = 300;
            boxDest.length = boxRect.right - boxRect.left;
            boxDest.height = boxRect.bottom - boxRect.top;
        }

        bool spriteCollide(Entity player, Entity target)
        {
            float left1 = player.X,                   left2 = target.X;
            float right1 = player.X + player.length,  right2 = target.X + target.length;
            float top1 = player.Y,                    top2 = target.Y;
            float bottom1 = player.Y + player.height, bottom2 = target.Y + target.height;

            if (bottom1 < top2) return false;
            if (top1 > bottom2) return false;
            if (right1 < left2) return false;
            if (left1 > right2) return false;
            return true;
        }
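
    One detail that may explain the overlap, offered as an observation rather than a verified fix: the player is 36 px tall but the box is only 30 px, so subtracting boxDest.height leaves the player's feet 6 px inside the box. A one-line sketch of the standing case, using the question's own fields:

        // Stand on top: the player's bottom edge should meet the box's top edge.
        playerDest.Y = boxDest.Y - playerDest.height;  // player height, not box height

    The from-below case happens to look right because only the box's height matters there.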

  • Certain grid lines not rendering as expected

    - by row1
    I am drawing a simple quad (a triangle strip with 4 vertices) as the floor and then drawing an 8x8 grid over top (a collection of vertex pairs for a line list). The vertical grid lines work fine (apart from being very aliased), but some of the horizontal lines do not get rendered. The grid renders fine if I do not draw the quad.

        foreach (EffectPass pass in _Effect.CurrentTechnique.Passes)
        {
            pass.Apply();

            CurrentGraphicsDevice.SetVertexBuffer(_VertexFloorBuffer);
            _Engine.CurrentGraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);

            // Some of the horizontal lines seem to disappear if we draw the above quad.
            CurrentGraphicsDevice.SetVertexBuffer(_VertexGridBuffer);
            CurrentGraphicsDevice.DrawPrimitives(PrimitiveType.LineList, 0, _VertexGridBuffer.VertexCount / 2);
        }

    What could be causing these lines to not be rendered?

    Update: I added the code below after drawing my quad and grid, and it started working. But I am not sure why that works, as I thought this code was only for drawing the WPF controls.

        elementRenderer.Render();
        spriteBatch.Begin();
        spriteBatch.Draw(elementRenderer.Texture, Vector2.Zero, Color.White);
        spriteBatch.End();

  • HTML5 Canvas Tileset Animation

    - by Veyha
    How do I animate images on an HTML5 canvas? I have this code now: http://jsfiddle.net/WnjB6/1/ Here I can add animations with something like Animation.add('stand', [0, 1, 2, 3, 4, 5]); but how do I play this animation? My image drawing function is drawTile(canvasX, canvasY, tile, tileWidth, tileHeight); and Animation['stand'] returns 0, 1, 2, 3, 4, 5. I need something like: when I call Animation.play('stand'), it plays the animation frames from the 'stand' array. I have been trying to do this for about a day, but I have run out of ideas. :( Thanks, and sorry for my bad English.

  • Exporting spritesheet for Cocos2d

    - by Terko
    I would like to know how people usually save animations in order to load them easily in Cocos2d, with as little hard-coding as possible. E.g. the solution I thought of is to have one plist file containing information about each frame, and a second plist containing information about each animation (the name of the animation, which frames to play, and the delay, probably). If this is the correct approach, how can I generate such plist files for a spritesheet automatically?

  • Using texture() in combination with JBox2D

    - by Valentino Ru
    I'm having some trouble using the texture() method inside a beginShape()/endShape() clause. In the display() method of my class TowerElement (a bar which is DYNAMIC), I draw the object as follows:

        void display() {
            Vec2 pos = level.getLevel().getBodyPixelCoord(body);
            float a = body.getAngle(); // needed for rotation

            pushMatrix();
            translate(pos.x, pos.y);
            rotate(-a);
            fill(temp); // temp is a color defined in the constructor
            stroke(0);
            beginShape();
            vertex(-w/2, -h/2);
            vertex(w/2, -h/2);
            vertex(w/2, h-h/2);
            vertex(-w/2, h-h/2);
            endShape(CLOSE);
            popMatrix();
        }

    Now, according to the API, I can use the texture() method inside the shape definition. But when I remove the fill(temp) and put texture(img) (img is a PImage defined in the constructor), the stroke gets drawn, the bar isn't filled, and I get the warning

        texture() is not available with this renderer

    What can I do in order to use textures anyway? I don't even understand the error message, since I don't know much about the different renderers.
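
    That warning is what Processing's default JAVA2D renderer emits, since it cannot texture shapes; texture() needs one of the OpenGL-backed renderers (P2D or P3D) plus UV coordinates on each vertex. Below is a minimal standalone Processing sketch of the idea (bar.png is a hypothetical file name):

        PImage img;

        void setup() {
          size(640, 480, P3D);          // P2D works too; JAVA2D does not support texture()
          img = loadImage("bar.png");   // hypothetical texture file
        }

        void draw() {
          background(255);
          translate(width/2, height/2);
          stroke(0);
          beginShape();
          texture(img);                 // must come right after beginShape()
          vertex(-50, -15, 0, 0);                      // x, y, u, v (UVs in image pixels)
          vertex( 50, -15, img.width, 0);
          vertex( 50,  15, img.width, img.height);
          vertex(-50,  15, 0, img.height);
          endShape(CLOSE);
        }

    In an existing sketch, switching the renderer argument of size() and adding the u/v arguments to each vertex() call should be the only changes needed.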

  • C# Collision Math Help

    - by user36037
    I am making my own collision detection in MonoGame. I have a PolyLine class that has a property to return the normal of that PolyLine instance. I have a ConvexPolySprite class that has a List<PolyLine> LineSegments. I have a CircleSprite class that has a Center property and a Radius property. I am using a static class for the collision detection method, and I am testing it on a single line segment from (200, 0) to (300, 200). The problem is that it detects a collision anywhere along the extension of the line out into space, and I cannot figure out why. Thanks in advance.

        public class PolyLine
        {
            /// <summary>
            /// Property for the upper left-hand corner of the owner of this instance
            /// </summary>
            public Vector2 ParentPosition { get; set; }

            /// <summary>
            /// Relative start point of the line segment
            /// </summary>
            public Vector2 RelativeStartPoint { get; set; }

            /// <summary>
            /// Relative end point of the line segment
            /// </summary>
            public Vector2 RelativeEndPoint { get; set; }

            /// <summary>
            /// Absolute position of the starting point of the line segment
            /// </summary>
            public Vector2 AbsoluteStartPoint
            {
                get { return ParentPosition + RelativeStartPoint; }
            }

            /// <summary>
            /// Absolute position of the end point of the line segment
            /// </summary>
            public Vector2 AbsoluteEndPoint
            {
                get { return ParentPosition + RelativeEndPoint; }
            }

            public Vector2 NormalizedLeftNormal
            {
                get
                {
                    Vector2 P = AbsoluteEndPoint - AbsoluteStartPoint;
                    P.Normalize();
                    return new Vector2(-P.Y, P.X);
                }
            }

            /// <summary>
            /// Sole ctor
            /// </summary>
            public PolyLine(Vector2 parentPosition, Vector2 relStart, Vector2 relEnd)
            {
                ParentPosition = parentPosition;
                RelativeEndPoint = relEnd;
                RelativeStartPoint = relStart;
            }
        }

        public static bool Collided(CircleSprite circle, ConvexPolygonSprite poly)
        {
            var distance = Vector2.Dot(circle.Position - poly.LineSegments[0].AbsoluteEndPoint,
                               poly.LineSegments[0].NormalizedLeftNormal) + circle.Radius;
            return distance > 0;
        }
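
    The dot product above measures only the signed distance to the infinite line through the segment, which is why hits register anywhere along its extension: the projection also has to be clamped to the segment's endpoints. Below is a minimal sketch of the standard closest-point-on-segment test, written here in Java with a made-up name (circleHitsSegment); the same math translates directly to Vector2 operations:

        // Clamp the projection of the circle center onto segment AB,
        // then compare the distance to that closest point against the radius.
        static boolean circleHitsSegment(float cx, float cy, float r,
                                         float ax, float ay, float bx, float by) {
            float abx = bx - ax, aby = by - ay;
            float t = ((cx - ax) * abx + (cy - ay) * aby) / (abx * abx + aby * aby);
            t = Math.max(0f, Math.min(1f, t));      // clamp to the segment, not the line
            float px = ax + t * abx, py = ay + t * aby;
            float dx = cx - px, dy = cy - py;
            return dx * dx + dy * dy <= r * r;      // squared distances avoid the sqrt
        }

    With the clamp in place, points beyond either endpoint are measured against the nearest endpoint instead of the infinite line.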

  • How to move a rectangle properly?

    - by bodycountPP
    I recently started to learn OpenGL, and I just finished the first chapter of the "OpenGL SuperBible". There were two examples. The first had the complete code and showed how to draw a simple triangle. The second example is supposed to show how to move a rectangle using SpecialKeys, but the only code provided for it was the SpecialKeys method. I still tried to implement it, and I hit two problems.

    First: in the previous example I declared and instantiated vVerts in the SetupRC() method. Now, as it is also used in the SpecialKeys() method, I moved the declaration and instantiation to the top of the file. Is this proper C++ practice?

    Second: I copied the part where vertex positions are recalculated from the book, but I had to pick the vertices for the rectangle on my own. Now, the first time I press a key, the rectangle's upper left vertex is moved to (-0.5, -0.5). This is because of

        GLfloat blockX = vVerts[0]; // Upper left X
        GLfloat blockY = vVerts[7]; // Upper left Y

    and I also think this is the reason why my rectangle is shifted in the beginning. After the first key press, everything works just fine. Here is my complete code; I hope you can help me on those two points.

        GLBatch squareBatch;
        GLShaderManager shaderManager;

        // Load up a rectangle
        GLfloat vVerts[] = { -0.5f,  0.5f, 0.0f,
                              0.5f,  0.5f, 0.0f,
                              0.5f, -0.5f, 0.0f,
                             -0.5f, -0.5f, 0.0f };

        // Window has changed size, or has just been created.
        // We need to use the window dimensions to set the viewport and the projection matrix.
        void ChangeSize(int w, int h)
        {
            glViewport(0, 0, w, h);
        }

        // Called to draw the scene.
        void RenderScene(void)
        {
            // Clear the window with the current clearing color
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

            GLfloat vRed[] = { 1.0f, 0.0f, 0.0f, 1.0f };
            shaderManager.UseStockShader(GLT_SHADER_IDENTITY, vRed);
            squareBatch.Draw();

            // Perform the buffer swap to display the back buffer
            glutSwapBuffers();
        }

        // This function does any needed initialization on the rendering context.
        // This is the first opportunity to do any OpenGL related tasks.
        void SetupRC()
        {
            // Blue background
            glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
            shaderManager.InitializeStockShaders();
            squareBatch.Begin(GL_QUADS, 4);
            squareBatch.CopyVertexData3f(vVerts);
            squareBatch.End();
        }

        // Respond to arrow keys by moving the camera frame of reference
        void SpecialKeys(int key, int x, int y)
        {
            GLfloat stepSize = 0.025f;
            GLfloat blockSize = 0.5f;
            GLfloat blockX = vVerts[0]; // Upper left X
            GLfloat blockY = vVerts[7]; // Upper left Y

            if (key == GLUT_KEY_UP)    { blockY += stepSize; }
            if (key == GLUT_KEY_DOWN)  { blockY -= stepSize; }
            if (key == GLUT_KEY_LEFT)  { blockX -= stepSize; }
            if (key == GLUT_KEY_RIGHT) { blockX += stepSize; }

            // Recalculate vertex positions
            vVerts[0] = blockX;
            vVerts[1] = blockY - blockSize * 2;
            vVerts[3] = blockX + blockSize * 2;
            vVerts[4] = blockY - blockSize * 2;
            vVerts[6] = blockX + blockSize * 2;
            vVerts[7] = blockY;
            vVerts[9] = blockX;
            vVerts[10] = blockY;

            squareBatch.CopyVertexData3f(vVerts);
            glutPostRedisplay();
        }

        // Main entry point for GLUT based programs
        int main(int argc, char** argv)
        {
            // Sets the working directory. Not really needed
            gltSetWorkingDirectory(argv[0]);

            // Passes along the command-line parameters and initializes the GLUT library.
            glutInit(&argc, argv);

            // Tells the GLUT library what type of display mode to use when creating the window:
            // double buffered window, RGBA color mode, depth buffer and stencil buffer available
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_STENCIL);

            // Window size
            glutInitWindowSize(800, 600);
            glutCreateWindow("MoveRect");
            glutReshapeFunc(ChangeSize);
            glutDisplayFunc(RenderScene);
            glutSpecialFunc(SpecialKeys);

            // Initialize GLEW library
            GLenum err = glewInit();

            // Check that nothing goes wrong with the driver initialization
            // before we try and do any rendering.
            if (GLEW_OK != err)
            {
                fprintf(stderr, "Glew Error: %s\n", glewGetErrorString(err));
                return 1;
            }

            SetupRC();
            glutMainLoop();
            return 0;
        }

  • Calculate initial velocity of a 3d vector-based projectile

    - by Frotty
    Okay, so I have a projectile with 2 vectors, position and velocity. I now want to calculate the initial velocity needed for it to reach a specific point on the map. Or actually: how high does the starting z-velocity have to be (x and y are already defined by a speed variable) in order for the projectile to hit the marked position? The projectile is influenced by a constant gravity vector, and all calculations are done 32 times per second. I want this because I don't want to use a parabola function; that way the projectile can still be influenced by other sources that simply add some velocity. I didn't really find anything on this topic and would be glad for every helping answer. Thanks.
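
    For a constant gravity of magnitude g pulling along -z, the horizontal speed fixes the flight time t, and solving dz = vz*t - g*t^2/2 for vz gives the launch velocity. Below is a minimal Java sketch of that calculation (initialZVelocity is a made-up name); at 32 steps per second the discrete integration lands close to this continuous solution, so a small per-tick correction may still be wanted:

        // dx, dy, dz: offset from launch point to target.
        // horizontalSpeed: the fixed x/y speed; gravity: positive magnitude of g.
        static float initialZVelocity(float dx, float dy, float dz,
                                      float horizontalSpeed, float gravity) {
            float horizontalDistance = (float) Math.sqrt(dx * dx + dy * dy);
            float t = horizontalDistance / horizontalSpeed; // time until the target is reached
            return dz / t + 0.5f * gravity * t;             // solve dz = vz*t - g*t^2/2
        }

    Once launched with this vz, any extra velocity added mid-flight simply changes where the projectile lands, which is exactly the behavior a parabola function would rule out.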

  • Data structures for a 3D array

    - by Smallbro
    Currently I've been using a 3D array for my tiles in a 2D world; the third dimension comes in when moving down into caves and whatnot. That is not memory efficient, so I switched over to a 2D array and can now have much larger maps. The only issue I'm having now is that a tile can no longer occupy the same (x, y) space as a tile on another z level. My current structure means that each block has its own z variable. This is what it used to look like:

        map.blockData[x][y][z] = new Block();

    However, now it works like this:

        map.blockData[x][y] = new Block(z);

    I'm not sure why, but if I decide to use the same space on, say, the floor below, it won't allow me to. Does anyone have any ideas on how I can add a z-axis to my 2D array? I'm using Java, but I reckon the concept carries across different languages.
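
    One way to keep the memory win and still get the z-axis back is to make each (x, y) entry a small column that stores only the occupied levels. A minimal sketch of the idea (Column is a made-up name; Block stands in for the existing tile class):

        import java.util.HashMap;
        import java.util.Map;

        // One column of the world: only z levels that actually hold a block
        // consume memory, so deep caves stay cheap.
        class Column {
            private final Map<Integer, Block> levels = new HashMap<>();

            void set(int z, Block block) { levels.put(z, block); }
            Block get(int z)             { return levels.get(z); } // null = empty
        }

        // usage: map.blockData[x][y] = new Column();
        //        map.blockData[x][y].set(z, new Block());

    Memory then scales with the number of placed blocks rather than the full width x height x depth volume of the old 3D array.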

  • How does HDR work?

    - by dotminic
    I'm trying to understand what HDR is and how it works. I understand the basic concepts and have a slight idea of how it is implemented with D3D/HLSL, but it's still pretty foggy. Say I'm rendering a sphere with a texture of the earth and a small point list of vertices to act as stars; how would I render this in HDR? Here are a few things I'm confused about: I'm guessing I can't use just any basic image format for the texture, as the values would be limited to [0, 255] and clamped to [0, 1] in a shader. The same goes for the back buffer; I take it the format needs to be a floating-point format? What are the other steps involved? Surely there has to be more than just using floating-point formats to render to a render target and then applying some bloom as a post-process? (Considering the output will be 8 bpp anyway.) Basically, what are the steps for HDR? How does it work? I can't seem to find any good papers or articles that describe the process, other than this one, but it seems to skim over the basics a little, so it's confusing.

  • Simulate 'Shock absorption' with tire rubber in PhysX (2.8.x)

    - by Mungoid
    This is a kinda tricky question and I fear there is no easy enough solution, but I figured I'd hit SE up before giving up on it and just doing what I can. A machine I am working on has no suspension, shocks or springs of any sort in the real machine, so you would think that when it drives over bumps it would shake like crazy, but because its tires (6 of them) are quite large they seem to absorb a lot of shock from the bumps. Part of this is because the machine is around 30k lbs and it just smashes/compresses any bumps in the ground down (this is another issue I'm still working on), and the other part is that the tires seem to have a lot of flex to them, with a lot of air as well. So my current task is to simulate shock absorption in PhysX without visibly separating the tires from the spindle/axle. I have been messing with all kinds of NxMaterial, NxSpring, joints, etc. and have had no luck getting this to work. The main problem is that the spindle attached to the tire is directly in the center and the axle is basically solidly attached to the chassis, so if I give it any spring or suspension travel, the spindle on the tires will move upwards or downwards, looking very odd because it is no longer in the center of the tire. I tried giving it a higher restitution, but that just makes it bouncy without any shock absorption. Another avenue I am exploring is to actively smooth the terrain in front of the tires, so that before the machine hits a bumpy patch, that patch is smoothed and it doesn't bounce. The only issue with this is that it is pretty expensive to do with 6 tires, high tessellation of the terrain and other complex things going on at the same time in this simulation. I am still working on this, but I am hoping to mix and match a few different aspects to get the best possible outcome. This is a bit of a complex issue, so I'm not expecting anyone to have a definitive answer, just hoping someone may think of something I haven't =-) Side note: yes, I know PhysX 2.8.x is quite outdated, but we have to stick with it for this implementation. We are in the process of moving to another physics engine, but it is out of scope to apply that engine to this project.

  • How to display consistent background image

    - by Tofu_Craving_Redish_BlueDragon
    Drawing a large background is relatively slow in PyGame. In order to avoid drawing the BG every frame, you can draw it once and then do nothing. However, if something is drawn over the surface and keeps moving, you need to redraw the background in order to "erase" the pixels left by the moving object; otherwise you get "traces" of it. I have a moving object in my PyGame program, but I do not want to clear the whole buffer by redrawing the full background image every frame, since that is slow. My solution: I will "clear" only the required portions of the buffer (where the "traces" of the moving object are left) by redrawing just those portions of the background. Is there any better way to keep a consistent background?

  • GLSL custom interpolation filter

    - by Cyan
    I'm currently building a fragment shader which uses several textures to render the final pixel color. The textures are not really textures; they are in fact "input data" to be used in the formula that generates the final color. The problem I've got is that the textures are getting bilinearly filtered, and therefore so is the input data. This results in many unwanted side effects, especially when the final rendered texture is "zoomed" compared to the original resolution. Removing the side effects is a complex task and only results in "average" rendering. I was thinking: well, all my problems seem to come from the default bilinear filtering on this input data. I can't move to GL_NEAREST either, since that would create "blocky" rendering. So I guess the better way to proceed is to be fully in charge of the interpolation. For this to work, I would need the input data at its "natural" resolution (so that means 4 samples) and a relative position between the sampled points. Is that possible, and if yes, how?

    [EDIT] Since I started this question, I found this internet entry, which seems to (mostly) answer my needs: http://www.gamerendering.com/2008/10/05/bilinear-interpolation/ One aspect of the solution worries me, though: the dimensions of the texture must be provided as an argument, and there seems to be no way to "find this information transparently". Adding an argument to the rendering pipeline is unwelcome, since it's not under my responsibility, and it translates into added complexity for others.

  • AndEngine: put a bullet back in the pool when it leaves the screen

    - by Ashot
    I'm creating a bullet with a physics body. The Bullet class (which extends the Sprite class) has a die() method, which unregisters the physics connector, hides the sprite and puts it in a pool:

        public void die() {
            Log.d("bulletDie", "See you in hell!");
            if (this.isVisible()) {
                this.setVisible(false);
                mPhysicsWorld.unregisterPhysicsConnector(physicsConnector);
                physicsConnector.setUpdatePosition(false);
                body.setActive(false);
                this.setIgnoreUpdate(true);
                bulletsPool.recyclePoolItem(this);
            }
        }

    In the onUpdate method of the PhysicsConnector I execute the die() method when the sprite leaves the screen:

        physicsConnector = new PhysicsConnector(this, body, true, false) {
            @Override
            public void onUpdate(final float pSecondsElapsed) {
                super.onUpdate(pSecondsElapsed);
                if (!camera.isRectangularShapeVisible(_bullet)) {
                    Log.d("bulletDie", "Dead?");
                    _bullet.die();
                }
            }
        };

    It works as I expected, except that _bullet.die() executes TWICE. What am I doing wrong, and is this the right way to hide sprites? Here is the full code of the Bullet class (it is an inner class of the class that represents the player):

        private class Bullet extends Sprite implements PhysicsConstants {
            private final Body body;
            private final PhysicsConnector physicsConnector;
            private final Bullet _bullet;
            private int id;

            public Bullet(float x, float y, ITextureRegion texture,
                          VertexBufferObjectManager vertexBufferObjectManager) {
                super(x, y, texture, vertexBufferObjectManager);
                _bullet = this;
                id = bulletId++;

                body = PhysicsFactory.createCircleBody(mPhysicsWorld, this,
                        BodyDef.BodyType.DynamicBody, bulletFixture);
                physicsConnector = new PhysicsConnector(this, body, true, false) {
                    @Override
                    public void onUpdate(final float pSecondsElapsed) {
                        super.onUpdate(pSecondsElapsed);
                        if (!camera.isRectangularShapeVisible(_bullet)) {
                            Log.d("bulletDie", "Dead?");
                            Log.d("bulletDie", id + "");
                            _bullet.die();
                        }
                    }
                };
                mPhysicsWorld.registerPhysicsConnector(physicsConnector);
                $this.getParent().attachChild(this);
            }

            public void reset() {
                final float angle = canon.getRotation();
                final float x = (float) ((Math.cos(MathUtils.degToRad(angle)) * radius) + centerX)
                        / PIXEL_TO_METER_RATIO_DEFAULT;
                final float y = (float) ((Math.sin(MathUtils.degToRad(angle)) * radius) + centerY)
                        / PIXEL_TO_METER_RATIO_DEFAULT;

                this.setVisible(true);
                this.setIgnoreUpdate(false);
                body.setActive(true);
                mPhysicsWorld.registerPhysicsConnector(physicsConnector);
                body.setTransform(new Vector2(x, y), 0);
            }

            public Body getBody() {
                return body;
            }

            public void setLinearVelocity(Vector2 velocity) {
                body.setLinearVelocity(velocity);
            }

            public void die() {
                Log.d("bulletDie", "See you in hell!");
                if (this.isVisible()) {
                    this.setVisible(false);
                    mPhysicsWorld.unregisterPhysicsConnector(physicsConnector);
                    physicsConnector.setUpdatePosition(false);
                    body.setActive(false);
                    this.setIgnoreUpdate(true);
                    bulletsPool.recyclePoolItem(this);
                }
            }
        }
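
    A plausible explanation, offered as an assumption: unregistering a physics connector while the physics world is mid-update typically only takes effect on the next tick, so onUpdate fires one more time and re-enters die(). The isVisible() check does guard the pool recycling, but the log line sits before it and therefore prints twice. A minimal sketch of a dedicated guard flag (isDead is a made-up field):

        private boolean isDead = false;

        public void die() {
            if (isDead) return;   // swallow the extra tick that arrives before
            isDead = true;        // the connector is actually removed
            Log.d("bulletDie", "See you in hell!");
            this.setVisible(false);
            this.setIgnoreUpdate(true);
            body.setActive(false);
            mPhysicsWorld.unregisterPhysicsConnector(physicsConnector);
            physicsConnector.setUpdatePosition(false);
            bulletsPool.recyclePoolItem(this);
        }

        public void reset() {
            isDead = false;
            // ... re-register the connector and reactivate the body as before
        }

    An explicit flag also avoids tying the bullet's lifecycle to its visibility, which reset() toggles for unrelated reasons.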
