Search Results

Search found 16410 results on 657 pages for 'game component'.


  • Staggered Isometric Map: Calculate map coordinates for point on screen

    - by Chris
    I know there are already a lot of resources about this, but I haven't found one that matches my coordinate system and I'm having massive trouble adjusting any of those solutions to my needs. What I learned is that the best way to do this is to use a transformation matrix. Implementing that is no problem, but I don't know in which way I have to transform the coordinate space. Here's an image that shows my coordinate system: How do I transform a point on screen to this coordinate system?
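
    Since the coordinate-system image doesn't survive in this text, the sketch below can only show the generic matrix approach: express one map step along each axis as a screen-space vector, build the 2x2 matrix from those columns, and apply its inverse to the screen point. (Caveat: if the layout staggers alternate rows, a linear transform alone cannot express that; the usual trick is to convert to plain diamond axes first and re-apply the stagger afterwards.) The axis values in this C# sketch are placeholders, not taken from the question:

        // Screen -> map via the inverse of the 2x2 axis matrix [axisX | axisY].
        // AxisX*/AxisY* = screen offset of one step along map X / map Y (placeholders).
        using System;

        static class IsoPicker
        {
            static readonly float AxisXx = 32f,  AxisXy = 16f;  // map +X in screen space
            static readonly float AxisYx = -32f, AxisYy = 16f;  // map +Y in screen space
            static readonly float OriginX = 0f,  OriginY = 0f;  // screen position of tile (0,0)

            public static void ScreenToMap(float screenX, float screenY,
                                           out float mapX, out float mapY)
            {
                float sx = screenX - OriginX, sy = screenY - OriginY;
                float det = AxisXx * AxisYy - AxisYx * AxisXy;   // must be non-zero
                mapX = ( AxisYy * sx - AxisYx * sy) / det;
                mapY = (-AxisXy * sx + AxisXx * sy) / det;
                // floor mapX/mapY to get integer tile indices
            }
        }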

    Read the article

  • If statement causing XNA sprites to draw frame by frame

    - by user1489599
    I'm a bit new to XNA, but I wanted to write a simple program that fires a cannon ball from a cannon at a 45-degree angle. It works fine outside of my keyboard I/O if statement, but when I wrap the code in an if statement that checks whether the user hits the space bar, the sprite draws one frame at a time, advancing every time the space bar is hit. This is the code in question:

        if (currentKeyboardState.IsKeyUp(Keys.Space) &&
            previousKeyboardState.IsKeyDown(Keys.Space) &&
            !skullBall.Alive)
        {
            // this block works when not wrapped in the keyboard check
            skullBall.Position = cannon.Position;
            skullBall.DeltaY = -(float)(Math.Sin(MathHelper.ToRadians(45)) * 50 /*39.7577*/ * time
                               + 0.5 * (gravity * (time * time)));
            skullBall.DeltaX = (float)(Math.Cos(MathHelper.ToRadians(45)) * 50 /*39.7577*/ * time);
            skullBall.Alive = true;
        }

    The skull ball represents the cannon ball, and the cannon is just the starting point. DeltaX and DeltaY are the values I'm using to update the cannon ball's position per update. I know it's dumb to have the cannon ball start at the cannon's position every time the update is called, but it's not really noticeable right now. After examining my code, does anyone see any errors that would cause the sprite to display frame by frame instead of drawing a full animation of the cannon ball leaving the cannon and moving from there?
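
    For comparison, a hedged sketch of the usual split (names such as muzzleSpeed, Velocity and gameTime are illustrative, not from the question): the Space release only initializes state, and motion is integrated in Update from per-frame elapsed time, so drawing never depends on further key presses:

        // Inside Update(GameTime gameTime):
        KeyboardState current = Keyboard.GetState();
        if (current.IsKeyUp(Keys.Space) && previousKeyboardState.IsKeyDown(Keys.Space)
            && !skullBall.Alive)
        {
            // Fire once: set an initial velocity, not a precomputed offset.
            float angle = MathHelper.ToRadians(45);
            skullBall.Position = cannon.Position;
            skullBall.Velocity = new Vector2(
                (float)Math.Cos(angle) * muzzleSpeed,
                -(float)Math.Sin(angle) * muzzleSpeed);   // screen y points down
            skullBall.Alive = true;
        }
        if (skullBall.Alive)
        {
            float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
            skullBall.Velocity += new Vector2(0f, gravity) * dt;  // gravity pulls down (+y assumed)
            skullBall.Position += skullBall.Velocity * dt;
        }
        previousKeyboardState = current;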

    Read the article

  • Rendering different materials in a voxel terrain

    - by MaelmDev
    Each voxel datapoint in my terrain model is made up of two properties: density and material type. Each is stored as an unsigned integer value (but the density is interpreted as a decimal value between 0 and 1). My current idea for rendering these different materials on the terrain mesh is to store eleven extra attributes in each vertex: six material values corresponding to the materials of the voxels that the vertices lie between, three decimal values that correspond to the interpolation each vertex has between each voxel, and two decimal values that are used to determine where the fragment lies on the triangle. The material and interpolation attributes are the exact same for each vertex in the triangle. The fragment shader samples each texture that corresponds to each material and then uses the aforementioned couple of decimal values to interpolate between these samples and obtain the final textured color of the fragment. It should work fine, but it seems like a big memory hog. I won't be able to reuse vertices in the mesh with indexing, and each vertex will have a lot of data associated with it. It also seems pretty slow. What are some ways to improve or replace this technique for drawing materials on a voxel terrain mesh?

    Read the article

  • Most efficient AABB - Ray intersection algorithm for input/output distance calculation

    - by Tobbey
    Thanks to the following thread, most efficient AABB vs Ray collision algorithms, I have seen very fast algorithms for computing the ray/AABB intersection point. Unfortunately, most of the recent algorithms are accelerated by omitting the "output" intersection point of the box. In my application, I would be interested in getting both the distance from the ray's source to the entry point of the bounding box (t0) and to its exit point (t1). Eisemann, for instance, designed a very fast version compared against the Plucker, Smits, ... methods, but the comparison does not cover the case where both the entry and exit distances must be computed; see: http://www.cg.cs.tu-bs.de/publications/Eisemann07FRA/ Does someone know where I can find more information on algorithm performance for this specific entry/exit problem? Thank you in advance
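
    For reference, the classic slab test already yields both distances; dropping t1 is purely an optimization. A minimal C# sketch (XNA-style Vector3; branchless and SIMD variants exist, this is the readable form):

        // Slab test: returns true plus entry (t0) and exit (t1) distances.
        // Assumes invDir = 1/dir componentwise (IEEE infinities handle zero components).
        static bool RayAabb(Vector3 origin, Vector3 invDir,
                            Vector3 boxMin, Vector3 boxMax,
                            out float t0, out float t1)
        {
            float tx1 = (boxMin.X - origin.X) * invDir.X;
            float tx2 = (boxMax.X - origin.X) * invDir.X;
            t0 = Math.Min(tx1, tx2);
            t1 = Math.Max(tx1, tx2);

            float ty1 = (boxMin.Y - origin.Y) * invDir.Y;
            float ty2 = (boxMax.Y - origin.Y) * invDir.Y;
            t0 = Math.Max(t0, Math.Min(ty1, ty2));
            t1 = Math.Min(t1, Math.Max(ty1, ty2));

            float tz1 = (boxMin.Z - origin.Z) * invDir.Z;
            float tz2 = (boxMax.Z - origin.Z) * invDir.Z;
            t0 = Math.Max(t0, Math.Min(tz1, tz2));
            t1 = Math.Min(t1, Math.Max(tz1, tz2));

            return t1 >= Math.Max(t0, 0f);   // hit if the interval is non-empty and ahead
        }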

    Read the article

  • Most efficient way to implement delta time

    - by Starkers
    Here's one way to implement delta time:

        /// init ///
        var duration = 5000,
            currentTime = Date.now();
        // and create cube, scene, camera etc.

        function animate() {
            /// determine delta ///
            var now = Date.now(),
                deltat = now - currentTime,
                scalar = deltat / duration,
                angle = (Math.PI * 2) * scalar;
            currentTime = now;

            /// animate ///
            cube.rotation.y += angle;

            /// update ///
            requestAnimationFrame(animate);
        }

    Could someone confirm I know how it works? Here's what I think is going on.

    Firstly, we set duration to 5000, which is how long the loop should take to complete in an ideal world. With a computer that is slow or busy, let's say the animation loop takes twice as long as it should, so 10000. When this happens, the scalar is set to 2.0:

        scalar = deltat / duration
               = 10000 / 5000
               = 2.0

    We now multiply all animation by twice as much:

        angle = (Math.PI * 2) * scalar
              = (Math.PI * 2) * 2.0
              = Math.PI * 4   // which is 2 rotations

    When we do this, the cube's rotation will appear to 'jump', but this is good because the animation remains real-time. With a computer that is going too quickly, let's say the animation loop takes half as long as it should, so 2500. When this happens, the scalar is set to 0.5:

        scalar = deltat / duration
               = 2500 / 5000
               = 0.5

    We now multiply all animation by a half:

        angle = (Math.PI * 2) * scalar
              = (Math.PI * 2) * 0.5
              = Math.PI * 1   // which is half a rotation

    When we do this, the cube won't jump at all, the animation remains real-time, and it doesn't speed up. However, would I be right in thinking this doesn't alter how hard the computer is working? I mean, it still goes through the loop as fast as it can, and it still has to render the whole scene, just with different, smaller angles! So this is a bad way to implement delta time, right?

    Now let's pretend the computer is taking exactly as long as it should, so 5000. When this happens, the scalar is set to 1.0:

        angle = (Math.PI * 2) * scalar
              = (Math.PI * 2) * 1
              = Math.PI * 2   // which is 1 rotation

    When we do this, everything is multiplied by 1, so nothing is changed. We'd get the same result if we weren't using delta time at all! My questions are as follows:

    - Most importantly, have I got the right end of the stick here?
    - How do we know to set the duration to 5000? Or can it be any number?
    - I'm a bit vague about the "computer going too quickly". Is there a way to loop less often rather than reduce the animation steps? That seems like a better idea.
    - Using this method, do all of our animations need to be multiplied by the scalar? Do we have to hunt down every last one and multiply it?
    - Is this the best way to implement delta time? I think not, due to the fact that the computer can go nuts and all we do is shrink each animation step, and because we need to hunt down every step and multiply it by the scalar. Not a very nice DSL, as it were. So what is the best way to implement delta time?

    Below is one way that I do not really get, but which may be a better way to implement delta time. Could someone explain it, please?

        // Globals
        INV_MAX_FPS = 1 / 60;
        frameDelta = 0;
        clock = new THREE.Clock();

        // In the animation loop (the requestAnimationFrame callback)…
        frameDelta += clock.getDelta(); // API: "Get the seconds passed since the last call to this method."
        while (frameDelta >= INV_MAX_FPS) {
            update(INV_MAX_FPS); // calculate physics
            frameDelta -= INV_MAX_FPS;
        }

    How I think this works: firstly, we set INV_MAX_FPS to 0.01666666666; how we will use this number does not jump out at me. We then initialize frameDelta, which stores how long the last loop took to run. Come the first loop, frameDelta is not greater than INV_MAX_FPS, so the while loop is not run (0 < 0.01666666666), and nothing happens. Now I really don't know what would cause this to happen, but let's pretend that the loop we just went through took 2 seconds to complete. We set frameDelta to 2:

        frameDelta += clock.getDelta();
        // frameDelta is now 2.00

    Now we run an animation step thanks to update(0.01666666666). Again, what is the relevance of 0.01666666666? And then we take away 0.01666666666 from frameDelta:

        frameDelta -= INV_MAX_FPS;
        // frameDelta = 2 - 0.01666666666 = 1.98333333334

    So let's go into the second loop. Let's say it took 2 seconds again (why 2? Or 12? I am a bit confused):

        frameDelta += clock.getDelta();
        // frameDelta = 1.98333333334 + 2 = 3.98333333334

    This time we enter the while loop, because 3.98333333334 >= 0.01666666666. We run update, and we take away 0.01666666666 from frameDelta again:

        frameDelta -= INV_MAX_FPS;
        // frameDelta = 3.98333333334 - 0.01666666666 = 3.96666666668

    Now let's pretend the loop is super quick and runs in just 0.1 seconds, and continues to do this (because the computer isn't busy any more). Basically, the update function will be run, and every loop we take away 0.01666666666 from frameDelta until frameDelta is less than 0.01666666666. And then nothing happens until the computer runs slowly again? Could someone shed some light, please? Does update() update a scalar or something like that, and do we still have to multiply everything by the scalar like in the first example?
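
    For what it's worth, that second snippet is the standard fixed-timestep accumulator: INV_MAX_FPS (1/60, about 0.0167 s) is the fixed physics step, and the while loop runs however many whole steps fit into the real time that elapsed, carrying the remainder over to the next frame. A hedged C# rendering of the same pattern, with illustrative names:

        // Fixed-timestep accumulator: physics advances in exact 1/60 s steps
        // no matter how fast or slow rendered frames arrive.
        class FixedStepLoop
        {
            const float FixedStep = 1f / 60f;
            float accumulator;

            public void Frame(float frameSeconds)      // time the last frame took
            {
                accumulator += frameSeconds;
                while (accumulator >= FixedStep)
                {
                    UpdatePhysics(FixedStep);          // always a constant dt
                    accumulator -= FixedStep;
                }
                Render();                              // draw once per frame
            }

            void UpdatePhysics(float dt) { /* integrate movement here */ }
            void Render() { /* draw the current state */ }
        }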

    Read the article

  • Unity - Invert Movement Direction

    - by m41n
    I am currently developing a 2.5D sidescroller in Unity (just starting to get to know it). I added a turn script to have my character face the appropriate direction of movement, but something about the movement itself is behaving oddly now. When I press the right arrow key, the character moves and faces towards the right. If I press the left arrow key, the character faces towards the left, but "moon-walks" to the right. I already had enough trouble getting the turning to work, so what I am looking for is a simple solution, if possible without too much reworking of the rest of my project. I was thinking of just inverting the movement direction for a specific input key / facing direction. If anyone knows how to do something like that, I'd be thankful for the help. If it helps, the following is the current part of my "AnimationChooser" script that handles the turning:

        Quaternion targetf = Quaternion.Euler(0, 270, 0); // rotation when facing the front way
        Quaternion targetb = Quaternion.Euler(0, 90, 0);  // rotation when facing the opposite way

        if (Input.GetAxisRaw("Vertical") < 0.0f) // if input is lower than 0, turn to targetf
        {
            transform.rotation = Quaternion.Lerp(transform.rotation, targetf, Time.deltaTime * smooth);
        }
        if (Input.GetAxisRaw("Vertical") > 0.0f) // if input is higher than 0, turn to targetb
        {
            transform.rotation = Quaternion.Lerp(transform.rotation, targetb, Time.deltaTime * smooth);
        }

    The values (270 and 90) and the axis are what they are because I had to rotate the model itself in the very first place to face towards either movement direction.
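
    A hedged guess at the moon-walk, plus a sketch (moveSpeed is assumed; the axis name follows the code above): if the movement code uses transform.Translate with the default Space.Self, the character's own rotation is applied to the movement vector, so a character turned 180 degrees walks its input backwards. Driving translation from the raw input sign in world space sidesteps that, and the turning code can stay exactly as it is:

        // Movement decoupled from facing: translate in world space from input alone.
        float input = Input.GetAxisRaw("Vertical");   // same axis the turn code reads
        transform.position += Vector3.right * input * moveSpeed * Time.deltaTime;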

    Read the article

  • OpenGL ES 2.0 texture distortion on large geometry

    - by Spruce
    OpenGL ES 2.0 has serious precision issues with texture sampling - I've seen topics with a similar problem, but I haven't seen a real solution to this "distorted OpenGL ES 2.0 texture" problem yet. This is not related to the texture's image format or OpenGL color buffers; it seems like a precision error. I don't know what specifically causes the precision to fail - it doesn't seem to be just the size of the geometry, because simply scaling the vertex positions passed to the vertex shader does not solve the issue. Here are some examples of the texture distortion:

    Distorted texture (on OpenGL ES 2.0): http://i47.tinypic.com/3322h6d.png
    What the texture normally looks like (also on OpenGL ES 2.0): http://i49.tinypic.com/b4jc6c.png

    What I have observed and tried:

    - The texture issue is limited to small-scale geometry on OpenGL ES 2.0; otherwise the texture sampling appears normal, but the grainy effect gradually worsens the further the vertex data is from the origin XYZ(0,0,0).
    - These texture issues do not occur on desktop OpenGL (it works fine under Windows XP, Windows 7, and Mac OS X). I've only seen the problem occur on Android, iPhone, or WebGL (which is similar to OpenGL ES 2.0).
    - All textures are powers of 2, but the problem still occurs.
    - Scaling the vertex data: the values of a vertex's X/Y/Z location are in the range of -65536 to +65536 floating point. I realized this was large, so I tried dividing the vertex positions by 1024 to shrink the geometry and hopefully get more accurate floating-point precision, but this didn't fix or lessen the texture distortion issue.
    - Scaling the modelview or the projection matrix does not help.
    - Changing texture filtering options does not help: disabling mipmapping, or using GL_NEAREST/GL_LINEAR, does nothing, and enabling/disabling anisotropic filtering does nothing.
    - The banding effect still occurs even when using GL_CLAMP.
    - Dividing the texture coords passed to the vertex shader and then multiplying them back to the correct values in the fragment shader also does not work.
    - precision highp sampler2D, highp float, highp int - in the fragment or the vertex shader - didn't change anything (lowp/mediump did not work either).

    I'm thinking this problem has to have been solved at some point, seeing that OpenGL ES 2.0-based games have been able to render large-scale, highly detailed geometry.
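
    A hedged aside, since the question never pins down the cause: on many mobile GPUs, varyings are interpolated at lower precision than on desktop, so texcoords and positions derived from coordinates as large as +/-65536 can jitter, which matches the symptom of worsening with distance from the origin. One standard mitigation is to rebase vertex data around the camera on the CPU each frame so the GPU only ever sees small numbers (illustrative C# sketch; the names are assumptions):

        // "Relative-to-eye" rebasing: subtract the camera position on the CPU,
        // then render with a view matrix whose translation is zero.
        Vector3 eye = camera.Position;
        for (int i = 0; i < worldPositions.Length; i++)
            localPositions[i] = worldPositions[i] - eye;   // small values near the viewer
        // upload localPositions; the camera now sits at the origin of the data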

    Read the article

  • Implementing RLE in a tilemap, or how to create a large 3D array?

    - by Smallbro
    Currently I've been using a 3D array for my tiles in a 2D world; the third dimension comes in when moving down into caves and whatnot. This is not memory efficient, so I switched over to a 2D array and can now have much larger maps. The only issue I'm having now is that a tile cannot occupy the same x/y space as a tile on a different z level. My current structure means that each block has its own z variable. This is what it used to look like:

        map.blockData[x][y][z] = new Block();

    However, now it works like this:

        map.blockData[x][y] = new Block(z);

    I'm not sure why, but if I decide to use the same x/y space on, say, the floor below, it won't allow me to. Does anyone have any ideas on how I can add a z-axis to my 2D array? I'm using Java, but I reckon the concept carries across different languages.

    Edit: As Will posted, RLE sounds like the best method for achieving a fast 3D array. However, I'm struggling to understand how I would even start to implement it. Would I create a 4D array, the 4th dimension being something which controls how many tiles to skip? Or would the x-axis simply change altogether and have large gaps in between - for example, [5][y][z] would skip 5 tiles? Is there something really obvious here which I am missing? The number of z levels I'm trying to have is around 66, and it would be preferable to have up to or more than 1000 in x and y.
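
    Not from the thread, just a sketch of the usual shape of the idea: RLE doesn't call for a 4D array. Instead, each (x, y) column stores a short list of (count, blockId) runs along z, so 40 levels of stone, 1 of dirt and 25 of air cost three entries instead of 66. Sketched in C# since the asker notes the concept carries across languages (the names are illustrative):

        using System.Collections.Generic;

        struct Run { public int Count; public byte BlockId; }

        class Column
        {
            readonly List<Run> runs = new List<Run>();

            public void Append(int count, byte blockId)
            {
                runs.Add(new Run { Count = count, BlockId = blockId });
            }

            public byte Get(int z)   // walk the runs until z falls inside one
            {
                int depth = 0;
                foreach (Run r in runs)
                {
                    depth += r.Count;
                    if (z < depth) return r.BlockId;
                }
                return 0;            // treat anything beyond the runs as air
            }
        }

        // The world is then a plain 2D array of compressed columns:
        // Column[,] map = new Column[1024, 1024];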

    Read the article

  • Gap in parallaxing background loop

    - by CinetiK
    The bug here is that my background offsets itself a bit from where it should draw, so I get a visible line (a gap in the loop). I have some trouble understanding why I get this bug when I set a speed that is different from 1, 2, 4, 8, 16, ... In the main class I set the speed depending on the player speed:

        bgSpeed = -(int)playerMoveSpeed.X / 10;

    And here's my background class:

        class ParallaxingBackground
        {
            Texture2D texture;
            Vector2[] positions;

            public int Speed { get; set; }

            public void Initialize(ContentManager content, String texturePath, int screenWidth, int speed)
            {
                texture = content.Load<Texture2D>(texturePath);
                this.Speed = speed;
                positions = new Vector2[screenWidth / texture.Width + 2];
                for (int i = 0; i < positions.Length; i++)
                {
                    positions[i] = new Vector2(i * texture.Width, 0);
                }
            }

            public void Update()
            {
                for (int i = 0; i < positions.Length; i++)
                {
                    positions[i].X += Speed;
                    if (Speed <= 0)
                    {
                        if (positions[i].X <= -texture.Width)
                        {
                            positions[i].X = texture.Width * (positions.Length - 1);
                        }
                    }
                    else
                    {
                        if (positions[i].X >= texture.Width * (positions.Length - 1))
                        {
                            positions[i].X = -texture.Width;
                        }
                    }
                }
            }

            public void Draw(SpriteBatch spriteBatch)
            {
                for (int i = 0; i < positions.Length; i++)
                {
                    spriteBatch.Draw(texture, positions[i], Color.White);
                }
            }
        }
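
    A hedged reading of why powers of two happen to work: the wrap snaps a tile to a fixed X, throwing away whatever fraction of Speed the tile had overshot the threshold by; that lost remainder is the gap, and it only vanishes when Speed divides texture.Width evenly. A sketch of a wrap that keeps the overshoot (same fields as the class above):

        // Wrap by shifting a full loop width instead of snapping to a fixed X,
        // so the overshoot past the threshold is preserved.
        float loopWidth = texture.Width * positions.Length;
        for (int i = 0; i < positions.Length; i++)
        {
            positions[i].X += Speed;
            if (positions[i].X <= -texture.Width)
                positions[i].X += loopWidth;
            else if (positions[i].X >= loopWidth - texture.Width)
                positions[i].X -= loopWidth;
        }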

    Read the article

  • Move projectile in direction the gun is facing

    - by Manderin87
    I am attempting to have a projectile follow the direction a gun is facing. With the following code, I am unable to make the projectile go in the right direction:

        float speed = .5f;
        float dX = (float) -Math.cos(Math.toRadians(degree)) * speed;
        float dY = (float) Math.sin(Math.toRadians(degree)) * speed;

    Can anyone tell me what I am doing wrong? degree is the direction the gun is facing, in degrees.
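
    For comparison, the textbook form has no negation on the cosine; negating the X component mirrors the direction across the vertical axis. Whether the sine needs a sign flip instead depends on whether the framework's y-axis points up or down, which the question doesn't state. A C# sketch assuming y-down screen coordinates, 0 degrees pointing right, and angles increasing counter-clockwise:

        double radians = degree * Math.PI / 180.0;
        float dX = (float)(Math.Cos(radians) * speed);    // no negation on X
        float dY = (float)(-Math.Sin(radians) * speed);   // flip sine only because y is down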

    Read the article

  • Flickering when accessing texture by offset

    - by TravisG
    I have this simple compute shader that basically just takes the input from one image and writes it to another. Both images are 128x128x128 in size, and glDispatchCompute is called with (128/8, 128/8, 128/8). The source images are cleared to 0 before this compute shader is executed, so no undefined values should be floating around in there. (I have the appropriate memory barrier set on the C++ side before the 3D texture is accessed.) This version works fine:

        #version 430
        layout (location = 0, rgba16f) uniform image3D ping;
        layout (location = 1, rgba16f) uniform image3D pong;
        layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in;

        void main()
        {
            ivec3 sampleCoord = gl_GlobalInvocationID.xyz;
            imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord));
        }

    Reading values from pong shows that it's just a copy, as intended. However, when I load data from ping with an offset:

        #version 430
        layout (location = 0, rgba16f) uniform image3D ping;
        layout (location = 1, rgba16f) uniform image3D pong;
        layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in;

        void main()
        {
            ivec3 sampleCoord = gl_GlobalInvocationID.xyz;
            imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord + ivec3(1, 0, 0)));
        }

    the data that is written to pong seems to depend on the order of execution of the threads within the work groups, which makes no sense to me. When reading from the pong texture, visible flickering occurs in some spots. What am I doing wrong here?

    Read the article

  • Are there any preexisting maps for a Minecraft-like level I could use in my engine?

    - by Rishav Sharan
    I am working on a tiny cube-based engine like Minecraft. I was wondering if there is a way for me to get large blocky terrain in a text format that I can use for rendering in my engine. I don't want to start on procedural generation now; I just want a resource where I can get the coordinate list for a pretty-looking terrain. Alternatively, is it possible for me to parse the Minecraft world files and use that data to generate terrain/buildings in my code?

    Read the article

  • How can I prevent seams from showing up on objects using lower mipmap levels?

    - by Shivan Dragon
    I made a simple Blender model: a cylinder with the top cap removed. I exported the UVs, imported them into Photoshop, and painted the inner area yellow and the outer area red, making sure I covered the UV lines well. I then saved the image and loaded it as a texture on the model in Blender (actually, I just reload it as the image the UVs were exported to, and change the viewport view mode to textured).

    When I look at the mesh up close, there's yellow everywhere and everything seems fine. However, as I start zooming out, I start seeing red (literally and metaphorically) where the texture edges are, and the more I zoom, the more I see it. The same thing happens in Unity, though the effect seems less pronounced: up close it's fine and yellow; zoom out and you see red at the seams.

    Now, obviously, for this simple example a workaround is to spread the yellow well outside the UV margins, and then it's fine from all distances. However, this is an issue when you try making a complex texture that should tile seamlessly at the edges. In that situation I either make a few lines of pixels overlap (in which case it looks bad up close and OK from far away), or I leave them seamless and then I have those seams when seen from far away. So my question is: is there something I'm missing, or some extra step I must take to have my texture look seamless from all distances?
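
    As a side note, the manual workaround described above (painting past the UV margins) is exactly what texture bakers automate as "edge padding" or gutters: lower mip levels average texels across the island border, so the background color bleeds in unless the border color is extended outward. A hedged C# sketch of a naive padding pass (the Color type and the coverage mask are assumptions, not from the question):

        // Naive edge padding: each pass pushes island colors one texel outward
        // into uncovered background, so mipmaps stop averaging in the background.
        static void PadEdges(Color[,] pixels, bool[,] covered, int passes)
        {
            int w = pixels.GetLength(0), h = pixels.GetLength(1);
            for (int p = 0; p < passes; p++)
            {
                bool[,] next = (bool[,])covered.Clone();
                for (int x = 0; x < w; x++)
                for (int y = 0; y < h; y++)
                {
                    if (covered[x, y]) continue;
                    foreach (var (nx, ny) in new[] { (x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1) })
                    {
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h || !covered[nx, ny]) continue;
                        pixels[x, y] = pixels[nx, ny];   // copy a covered neighbour outward
                        next[x, y] = true;
                        break;
                    }
                }
                covered = next;
            }
        }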

    Read the article

  • Can't get sprite to rotate correctly?

    - by rphello101
    I'm attempting to play with graphics using Java/Slick2D. I'm trying to get my sprite to rotate to wherever the mouse is on the screen and then move accordingly. I figured the best way to do this was to keep track of the angle the sprite is at, since I have to multiply the cosine/sine of the angle by the move speed in order to get the sprite to go "forwards" even if it is, say, facing 45 degrees into quadrant 3. However, before I even worry about that, I'm having trouble getting my sprite to rotate in the first place. Preliminary console tests showed that this code worked, but when applied to the sprite, it just kind of twitches. Does anyone know what's wrong?

        int mX = Mouse.getX();
        int mY = HEIGHT - Mouse.getY();
        int pX = sprite.x;
        int pY = sprite.y;
        int tempY, tempX;
        double mAng, pAng = sprite.angle;
        double angRotate = 0;

        if (mX != pX) {
            tempY = pY - mY;
            tempX = mX - pX;
            mAng = Math.toDegrees(Math.atan2(Math.abs(tempY), Math.abs(tempX)));
            if (mAng == 0 && mX <= pX)
                mAng = 180;
        } else {
            if (mY > pY) mAng = 270;
            else mAng = 90;
        }

        // Quadrant corrections
        if (mX < pX && mY < pY) { // Q2
            mAng = 180 - mAng;
        }
        if (mX < pX && mY > pY) { // Q3
            mAng = 180 + mAng;
        }
        if (mX > pX && mY > pY) { // Q4
            mAng = 360 - mAng;
        }

        angRotate = mAng - pAng;
        sprite.angle = mAng;
        sprite.image.setRotation((float) angRotate);
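
    A hedged side note: the Math.abs calls plus the three quadrant fix-ups can normally collapse into a single atan2 of the signed components, which already returns a full-circle angle. Sketch in C# (and since Slick2D's setRotation sets an absolute angle, passing it the delta mAng - pAng may be worth a second look; that detail is from memory, so verify against the docs):

        // One Atan2 call replaces the abs() plus the quadrant corrections.
        double dx = mX - pX;
        double dy = mY - pY;
        double mAng = Math.Atan2(dy, dx) * 180.0 / Math.PI;   // (-180, 180]
        if (mAng < 0) mAng += 360.0;                          // optional: map to [0, 360)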

    Read the article

  • 3D Model not translating correctly (visually)

    - by ChocoMan
    In my first image, my model displays correctly. But when I move the model's position along the Z-axis (forward), the model appears to sink even though its Y value doesn't change, and if I keep going, the model disappears into the ground. Any suggestions as to how I can get the model to translate properly, visually? Here is how I'm drawing the model and the terrain in Draw():

        cameraPosition = new Vector3(camX, camY, camZ);

        // Copy any parent transforms.
        Matrix[] transforms = new Matrix[mShockwave.Bones.Count];
        mShockwave.CopyAbsoluteBoneTransformsTo(transforms);
        Matrix[] ttransforms = new Matrix[terrain.Bones.Count];
        terrain.CopyAbsoluteBoneTransformsTo(ttransforms);

        // Draw the model. A model can have multiple meshes, so loop.
        foreach (ModelMesh mesh in mShockwave.Meshes)
        {
            // This is where the mesh orientation is set, as well
            // as our camera and projection.
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.PreferPerPixelLighting = true;
                effect.World = transforms[mesh.ParentBone.Index]
                    * Matrix.CreateRotationY(modelRotation)
                    * Matrix.CreateTranslation(modelPosition);
                // Looking at the model (picture shouldn't change other than rotation)
                effect.View = Matrix.CreateLookAt(cameraPosition, modelPosition, Vector3.Up);
                effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                effect.TextureEnabled = true;
            }
            // Draw the mesh, using the effects set above.
            prepare3d();
            mesh.Draw();
        }

        // Terrain test
        foreach (ModelMesh meshT in terrain.Meshes)
        {
            foreach (BasicEffect effect in meshT.Effects)
            {
                effect.EnableDefaultLighting();
                effect.PreferPerPixelLighting = true;
                effect.World = ttransforms[meshT.ParentBone.Index]
                    * Matrix.CreateRotationY(0)
                    * Matrix.CreateTranslation(terrainPosition);
                effect.View = Matrix.CreateLookAt(cameraPosition, terrainPosition, Vector3.Up);
                effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                effect.TextureEnabled = true;
            }
            // Draw the mesh, using the effects set above.
            prepare3d();
            meshT.Draw();
            DrawText();
        }
        base.Draw(gameTime);

    I suspect there may be something wrong with how I'm handling my camera. The model rotates fine on its Y-axis.
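
    One hedged observation rather than a confirmed fix: the model and the terrain are drawn with two different view matrices, one aimed at modelPosition and one at terrainPosition, so the camera pivots between the two draws and the pair can appear to slide relative to each other as the model moves. A sketch of computing the matrices once per frame and sharing them (cameraTarget is an assumed fixed look-at point):

        // One view/projection per frame, shared by every mesh, so the model
        // and the terrain are seen from the same camera.
        Matrix view = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
        Matrix projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);

        // ...then inside each mesh/effect loop:
        // effect.View = view;
        // effect.Projection = projection;
        // effect.World stays per-object.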

    Read the article

  • Using multiple indexes with buffer objects in OpenTK

    - by Rushyo
    I've got multiple buffers in OpenGL holding data on positions, normals and texcoords. I also have an equal number of buffers holding distinct index data for each of those buffers. I quite like this format (individual indices for each buffer), utilised by COLLADA, since it strikes me as optimally efficient at accessing each buffer. I've set up pointers to the relevant data arrays using VertexPointer, NormalPointer, etc. However, I have no way to assign pointers to the index buffers, since DrawElements appears to only look at one ElementArrayBuffer. Can I utilise multiple indices in some way, or would I be better off using a different technique that supports this? I'd prefer to keep the distinct indices if at all possible.
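
    For background, and this part is a plain GL fact rather than advice about the poster's code: core OpenGL only supports one index stream per draw, so COLLADA-style multi-index data is normally welded at load time into unique (position, normal, texcoord) combinations with a single index buffer. A hedged C# sketch of that rebuild (Vertex and the input arrays are assumptions):

        using System.Collections.Generic;

        // Weld parallel index streams into single-index form: every distinct
        // (posIdx, normIdx, uvIdx) triple becomes one output vertex.
        var remap = new Dictionary<(int, int, int), int>();
        var outIndices = new List<int>();
        var outVertices = new List<Vertex>();     // your interleaved vertex struct

        for (int i = 0; i < posIndices.Length; i++)
        {
            var key = (posIndices[i], normIndices[i], uvIndices[i]);
            if (!remap.TryGetValue(key, out int v))
            {
                v = outVertices.Count;
                outVertices.Add(new Vertex(positions[key.Item1],
                                           normals[key.Item2],
                                           uvs[key.Item3]));
                remap[key] = v;
            }
            outIndices.Add(v);
        }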

    Read the article

  • How to translate along Z axis in OpenTK

    - by JeremyJAlpha
    I am playing around with an OpenGL sample application I downloaded for Xamarin.Android. The sample application produces a rotating colored cube, and I would simply like to edit it so that the rotating cube is translated along the Z axis and disappears into the distance. I modified the code by:

    - adding a cumulative variable to store my Z distance,
    - adding GL.Enable(All.DepthBufferBit) - unsure if I put it in the right place,
    - adding GL.Translate(0.0f, 0.0f, Depth) - before the rotate functions.

    Result: the cube rotates a couple of times and then disappears; it seems to be getting clipped out of the frustum. So my question is: what is the correct way to use and initialize the Z buffer and get the cube to travel along the Z axis? I am sure I am missing some function calls but am unsure of what they are and where to put them. I apologise in advance, as this is very basic stuff and I am still learning :P. I would appreciate it if anyone could show me the best way to get the cube to still rotate but also move along the Z axis. I have commented all my modifications in the code:

        // This gets called when the drawing surface is ready
        protected override void OnLoad (EventArgs e)
        {
            // this call is optional, and meant to raise delegates
            // in case any are registered
            base.OnLoad (e);

            // UpdateFrame and RenderFrame are called
            // by the render loop. This takes effect
            // when we use 'Run ()', like below
            UpdateFrame += delegate (object sender, FrameEventArgs args) {
                // Rotate at a constant speed
                for (int i = 0; i < 3; i++)
                    rot[i] += (float)(rateOfRotationPS[i] * args.Time);
            };

            RenderFrame += delegate {
                RenderCube ();
            };

            GL.Enable(All.DepthBufferBit); // Added by Noob
            GL.Enable(All.CullFace);
            GL.ShadeModel(All.Smooth);
            GL.Hint(All.PerspectiveCorrectionHint, All.Nicest);

            // Run the render loop
            Run (30);
        }

        void RenderCube ()
        {
            GL.Viewport(0, 0, viewportWidth, viewportHeight);

            GL.MatrixMode (All.Projection);
            GL.LoadIdentity ();
            if (viewportWidth > viewportHeight) {
                GL.Ortho(-1.5f, 1.5f, 1.0f, -1.0f, -1.0f, 1.0f);
            } else {
                GL.Ortho(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
            }

            GL.MatrixMode (All.Modelview);
            GL.LoadIdentity ();

            Depth -= 0.02f;                  // Added by Noob
            GL.Translate(0.0f, 0.0f, Depth); // Added by Noob

            GL.Rotate (rot[0], 1.0f, 0.0f, 0.0f);
            GL.Rotate (rot[1], 0.0f, 1.0f, 0.0f);
            GL.Rotate (rot[2], 0.0f, 1.0f, 0.0f);

            GL.ClearColor (0, 0, 0, 1.0f);
            GL.Clear (ClearBufferMask.ColorBufferBit);

            GL.VertexPointer(3, All.Float, 0, cube);
            GL.EnableClientState (All.VertexArray);
            GL.ColorPointer (4, All.Float, 0, cubeColors);
            GL.EnableClientState (All.ColorArray);
            GL.DrawElements(All.Triangles, 36, All.UnsignedByte, triangles);

            SwapBuffers ();
        }
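
    A hedged note on why the cube vanishes so soon: the sample's GL.Ortho call sets the near and far planes to -1 and 1, so anything translated past z = -1 is clipped no matter what the depth buffer does, and All.DepthBufferBit is a clear-mask constant rather than a capability (depth testing is enabled with the DepthTest cap). A sketch of a setup that leaves room along Z, assuming the ES 1.1 bindings expose Frustum as on the desktop:

        // Perspective projection with usable Z range, plus depth testing.
        GL.Enable(All.DepthTest);                 // capability, not DepthBufferBit

        GL.MatrixMode(All.Projection);
        GL.LoadIdentity();
        float near = 0.1f, far = 100.0f;
        float aspect = (float)viewportWidth / viewportHeight;
        float top = near * (float)Math.Tan(MathHelper.DegreesToRadians(45f) / 2f);
        GL.Frustum(-top * aspect, top * aspect, -top, top, near, far); // 45 degree FOV

        GL.MatrixMode(All.Modelview);
        GL.LoadIdentity();
        GL.Translate(0.0f, 0.0f, -2.0f + Depth);  // start safely inside the frustum

        // and clear depth along with color each frame:
        GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);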

    Read the article

  • glColor3f Setting colour

    - by Aaron
    This draws a white vertical line from y = 640 to y = 768 at x = 512:

        glDisable(GL_TEXTURE_2D);
        glBegin(GL_LINES);
        glColor3f((double)R/255, (double)G/255, (double)B/255);
        glVertex3f(SX, -SPosY, 0); // origin of the line
        glVertex3f(SX, -EPosY, 0); // ending point of the line
        glEnd();
        glEnable(GL_TEXTURE_2D);

    This works, but after having a problem where it wouldn't draw the line white (or in any colour passed), I discovered that disabling GL_TEXTURE_2D before drawing the line, and re-enabling it afterwards for other things, fixed it. I want to know: is this a normal step a programmer might take, or is it highly inefficient? I don't want to be causing any slowdowns due to a mistake =) Thanks

    Read the article

  • strange behavior in Box2D+LibGDX when applying impulse

    - by Z0lenDer
    I have been playing around with Box2D and LibGDX and have been using sample code from DecisionTreeGames as the testing ground. Now I have a screen with four walls and a rectangle shape; let's call it a brick. When I use applyLinearImpulse on the brick, it starts bouncing right and left without any pattern and won't stop! I tried adding friction and increasing the density, but the behavior remains the same. Here is some of the code that might be useful.

    Applying the impulse:

        center = brick.getWorldCenter();
        brick.applyLinearImpulse(20, 0, center.x, center.y);

    Defining the brick:

        brick_bodyDef.type = BodyType.DynamicBody;
        brick_bodyDef.position.set(pos); // brick is initially on the ground
        brick_bodyDef.angle = 0;
        brick_body = world.createBody(brick_bodyDef);
        brick_body.setBullet(true);
        brick_bodyShape.setAsBox(w, h);
        brick_fixtureDef.density = 0.9f;
        brick_fixtureDef.restitution = 1;
        brick_fixtureDef.shape = brick_bodyShape;
        brick_fixtureDef.friction = 1;
        brick_body.createFixture(brick_fixtureDef);

    The walls are defined the same way, only with their bullet value set to false. I would really appreciate it if you could help me change this code to get realistic behavior (i.e. when I apply an impulse to the brick, it should tumble a few times and then stop completely).
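
    A pointer on the physics itself: in Box2D, restitution is a fixture's bounciness, and 1.0 means every collision conserves all of its energy, so the body keeps rebounding forever regardless of friction or density. Inert objects usually sit somewhere around 0.0-0.3. Sketch in C# syntax; the field names are the same in the libGDX/Java port:

        brickFixtureDef.density = 0.9f;
        brickFixtureDef.restitution = 0.2f;  // was 1.0: perfectly elastic, never settles
        brickFixtureDef.friction = 0.8f;     // lets sliding damp out too
        // setBullet(true) enables continuous collision detection for fast movers;
        // it is probably unnecessary for a brick that should come to rest.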

    Read the article

  • Split vector vs matrix notation for transformation

    - by seahorse
    Some rendering engines, like Ogre, prefer an individual vector-based notation for transformations, like the following.

    Split vector notation - the net transformation is represented by:

    - Scale vector = sx, sy, sz
    - Translation vector = tx, ty, tz
    - Rotation quaternion = w, x, y, z

    Matrix notation: other engines simply use a net combined transformation matrix.

    What are the advantages of the first notation over the second? Also, for animation interpolation, does the first notation work by interpolating the individual components and using the interpolated parts to get the net transformation? Is this another advantage?
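
    A sketch of the interpolation the question alludes to, in C# with XNA-style types: each component blends with the operation appropriate to it (Lerp for scale and translation, Slerp for rotation), and the matrix is rebuilt afterwards. Blending a combined matrix directly has no equivalent of Slerp; linearly mixed rotation columns shear and shrink the basis, which is the usual argument for the split representation in animation:

        // Interpolate each component separately, then recompose.
        Vector3 scale = Vector3.Lerp(scaleA, scaleB, t);
        Vector3 translation = Vector3.Lerp(translationA, translationB, t);
        Quaternion rotation = Quaternion.Slerp(rotationA, rotationB, t); // shortest arc

        Matrix world = Matrix.CreateScale(scale)
                     * Matrix.CreateFromQuaternion(rotation)
                     * Matrix.CreateTranslation(translation);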

    Read the article

  • Can anyone recommend an AI sandbox?

    - by user19433
    I'm a passionate person who has been around AI for a long time, but never going deep enough. Now it's time! I've been really looking for some way to concentrate on AI coding, but I haven't succeeded in finding an AI environment I can focus on. I just want to use an AI sandbox environment which would give me tools like:

    - visibility information
    - a character controller
    - an easy way to define a level, with obstacles of course
    - physics and collider management
    - trigger management
    - no need for a shiny, eye-candy graphical render: this is about pathfinding, tactical reasoning, etc.

    I have tried:

    - Unreal Dev Kit: while the new release announcement is about C++ coding, this is about external tools and will be released in 2013.
    - CryEngine: really interesting, as AI is present here, but coding with it appears to be hell. Did I get it wrong?
    - Half-Life Source, C4, Torque, DX Studio: either quite old, not very useful, or costly; these imply digging into the documentation (when provided) to code everything, graphics included.
    - Unity 3D: the most promising platform. While you also need to create your own environment, there are a lot of examples. The disadvantage, in addition to the time spent getting this environment working, is the language choice: C#, JavaScript or Boo. C# is not that hard, but it implies you'll always have to convert code from papers (I love the ones from Lars Linden), from books, or from anything you can find on AiGameDev, which is most often in C++. This is extra work. I've looked at "Simple Path", the very good Arong Greenberg work (but no source provided), and AngryAnt's work.
    - AI Sandbox: this seems to be exactly what, as an AI coder, I would want to use. I saw some previews, but since 2009 we still don't know precisely what it will be, whether it will be open source or free (I strongly doubt it), or whether I will be able to buy it. Will it really provide the tools I need to focus on AI?

    That being said, what is the best environment for focusing on AI coding only? Is it even possible?

    Read the article

  • How can I choose the depth of a quadtree?

    - by Evpok
    In a 2D world, using a quadtree to prune pairs in collision detection, how can I choose the depth of said quadtree? The world I am dealing with is mostly made of moving objects¹, so the cost of dispatching the objects between the quadtree cells matters. What I am interested in is the balance between the gain from less collision checking and the loss from more dispatching.

    1. To be completely explicit: autonomous self-replicating cells competing for food sources, in an attempt to show my pupils predator-prey dynamics and genetic evolution at work.
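
    Not from the question, just a common rule of thumb for anchoring the trade-off: keep leaf cells a few times larger than a typical object, so most objects land in a single cell, and stop subdividing once leaves would hold only a handful of objects. A hedged C# sketch; the constants are tuning knobs to profile, not laws:

        // Depth heuristic: subdivide while cells stay comfortably larger than
        // objects and leaves would still contain a non-trivial share of them.
        static int ChooseDepth(float worldSize, float typicalObjectSize, int objectCount)
        {
            int depth = 0;
            float cellSize = worldSize;
            while (cellSize / 2f >= typicalObjectSize * 4f &&
                   objectCount / Math.Pow(4, depth + 1) >= 4)
            {
                cellSize /= 2f;
                depth++;
            }
            return depth;
        }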

    Read the article

  • Convex Hull for Concave Objects

    - by Lighthink
    I want to implement GJK, and I want it to handle concave shapes too (almost all my shapes are concave). I've thought of decomposing each concave shape into convex shapes and then building a hierarchical tree out of the convex pieces, but I do not know how to do it. Nothing I could find on the Internet satisfied my needs, so maybe someone can point me in the right direction or give a full explanation.

    Read the article

  • Understanding how texCUBE works and writing cubemaps properly into a cube rendertarget

    - by cubrman
    My goal is to create accurate reflections, sampled from a dynamic cubemap, for specific 3D objects (mostly lights) in XNA 4.0. To sample the cubemap I compute the 3D reflection vector in the classic way:

        half3 ReflectionVec = reflect(-directionToCamera, Normal.rgb);

    I then use the vector to get the actual reflected color:

        half3 ReflectionCol = texCUBElod(ReflectionSampler, float4(ReflectionVec, 0));

    The cubemap I am sampling from is a render target with six flat faces. So my question is: given the 3D world position of an arbitrary 3D object, how can I make sure that I get accurate reflections of this object when I re-render the cubemap? Should I build the ViewProjection matrix in a specific way? Or is there any other approach?
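
    For the re-render side, the usual recipe (a sketch, not verified against the question's setup): render the scene six times from the reflective object's position, each with a 90-degree FOV, aspect-1 projection and one of six axis-aligned view matrices. The up vectors below follow the common D3D cubemap convention; treat them as a starting point to validate rather than gospel:

        // Six views from the probe position, 90 degree FOV, square aspect.
        Vector3 p = probePosition;   // world position of the reflective object
        Matrix proj = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver2, 1f, 0.1f, 1000f);

        var faces = new (CubeMapFace Face, Vector3 Look, Vector3 Up)[]
        {
            (CubeMapFace.PositiveX, Vector3.Right,    Vector3.Up),
            (CubeMapFace.NegativeX, Vector3.Left,     Vector3.Up),
            (CubeMapFace.PositiveY, Vector3.Up,       Vector3.Forward),
            (CubeMapFace.NegativeY, Vector3.Down,     Vector3.Backward),
            (CubeMapFace.PositiveZ, Vector3.Backward, Vector3.Up),
            (CubeMapFace.NegativeZ, Vector3.Forward,  Vector3.Up),
        };

        foreach (var f in faces)
        {
            Matrix view = Matrix.CreateLookAt(p, p + f.Look, f.Up);
            // set the render target to the cube face, then draw the scene with view * proj
        }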

    Read the article

  • Handling different screen densities in Android Devices?

    - by DevilWithin
    Well, I know there are plenty of different-sized screens on devices that run Android. The SDK I code with deploys to all major desktop platforms and Android. I am aware I must take special care to handle the different screen sizes and densities, but I just had an idea that would work in theory, and my question is exactly about that method: how could it FAIL? What I do is have an orthographic camera of the same size for all devices, with possible tweaks, but anyway that would guarantee the proper positioning of all elements on all devices, right? We can assume everything is drawn in OpenGL ES and input handling is converted to the proper camera coordinates. If you need me to improve the question, please tell me.
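
    One concrete failure mode worth checking (my note, not the poster's): a fixed-size virtual camera only maps cleanly onto screens with the same aspect ratio; on anything else you must either stretch (distortion) or letterbox (unused bars). A hedged C# sketch of the usual letterbox fit, with illustrative numbers:

        // Fit a fixed virtual resolution into any physical screen: uniform scale,
        // centered viewport, bars on the leftover axis.
        const float VirtualW = 800f, VirtualH = 480f;

        static void ComputeViewport(int screenW, int screenH,
                                    out int x, out int y, out int w, out int h)
        {
            float scale = Math.Min(screenW / VirtualW, screenH / VirtualH);
            w = (int)(VirtualW * scale);
            h = (int)(VirtualH * scale);
            x = (screenW - w) / 2;   // center horizontally
            y = (screenH - h) / 2;   // center vertically
        }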

    Read the article
