Search Results

Search found 21199 results on 848 pages for 'game controller'.

Page 343/848

  • Collision detection - Smooth wall sliding, no bounce effect

    - by Joey
    I'm working on a basic collision detection system that provides point-OBB collision detection. I have around 200 cubes in my environment and I check (for now) each of them in turn to see if it collides. If it does, I return the colliding face's normal, save the old player position and do some trigonometry to return a new player position for my wall sliding. Edit: I'll define my meaning of wall sliding: if a player walks into a vertical wall with a slight horizontal rotation to the left or the right and keeps walking forward into the wall, the player should slide a little to the right/left while continually walking towards the wall, until he leaves the wall. Thus, sliding along the wall. Everything works fine, with multiple objects as well, but I still have one problem I can't seem to figure out: smooth wall sliding. In my current implementation, sliding along the walls makes my player bounce like a madman (especially noticeable with gravity on and moving forward). I have a velocity/direction vector, a normal vector from the collided plane and an old and new player position. First I negate the normal vector and get my new velocity vector by subtracting the inverted normal from my direction vector (which gives the vector to slide along the wall), then I add this vector to my new player position and recalculate the direction vector (in case I have multiple collisions). I know I am missing some step but I can't seem to figure it out. Here is my code for the collision detection (run every frame): Vector direction; Vector newPos(camera.GetOriginX(), camera.GetOriginY(), camera.GetOriginZ()); direction = newPos - oldPos; // Direction vector // Check for collision with new position for(int i = 0; i < NUM_OBJECTS; i++) { Vector normal = objects[i].CheckCollision(newPos.x, newPos.y, newPos.z, direction.x, direction.y, direction.z); if(normal != Vector::NullVector()) { // Get inverse normal (direction STRAIGHT INTO wall) Vector invNormal = normal.Negative(); Vector wallDir = direction - invNormal; // We know the direction INTO the wall and the direction TO the wall. Subtract these and you get the slide direction along the wall newPos = oldPos + wallDir; direction = newPos - oldPos; } } Any help would be greatly appreciated! Fix: I eventually got things up and running how they should, thanks to Krazy. I'll post the updated code listing in case someone else comes upon this problem! for(int i = 0; i < NUM_OBJECTS; i++) { Vector normal = objects[i].CheckCollision(newPos.x, newPos.y, newPos.z, direction.x, direction.y, direction.z); if(normal != Vector::NullVector()) { Vector invNormal = normal.Negative(); invNormal = invNormal * (direction * normal).Length(); // Give the normal the length of the direction projected onto the normal's axis Vector wallDir = direction - invNormal; newPos = oldPos + wallDir; direction = newPos - oldPos; } }
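
    A standard way to get smooth sliding is to project the movement vector onto the wall plane, i.e. remove only the component that points into the wall: slide = direction - dot(direction, n) * n. A minimal sketch of that idea, written here with XNA's Vector3 (the CheckCollisionNormal helper is a stand-in for the asker's own collision query, not a real API):

        // Sketch: slide by removing the into-wall component of the movement.
        Vector3 direction = newPos - oldPos;
        Vector3 normal = CheckCollisionNormal(newPos, direction);   // hypothetical helper returning the hit face normal
        if (normal != Vector3.Zero)
        {
            normal.Normalize();
            // Subtract the part of the movement that pushes into the wall; what remains is parallel to the wall.
            Vector3 slide = direction - normal * Vector3.Dot(direction, normal);
            newPos = oldPos + slide;
        }

    Because the projection never adds length along the normal, the corrected position stays on the wall instead of oscillating in and out of it, which is what shows up as bouncing.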

    Read the article

  • Camera won't stay behind model after pitch, then rotation

    - by ChocoMan
    I have a camera positioned behind a model. Currently, if I push the left thumbstick, making my model move forward, backward, or strafe, the camera stays with the model. If I push the right thumbstick left or right, the model rotates fine in those directions, with the camera rotating along and maintaining its position behind the model. But when I pitch the model up or down and then rotate the model afterwards, the camera drifts slightly, rotating in a clock-like fashion behind the model. If I do a few rotations of the model and then try to pitch the camera, the camera will eventually be looking at the side, and then the front, of the model, while still rotating in that clock-like fashion. My question is: how do I keep the camera pitching up and down behind the model no matter how much the model has rotated? Here is what I've got: // Rotates model and pitches camera on its own axis public void modelRotMovement(GamePadState pController) { // Rotates Camera with model Yaw = pController.ThumbSticks.Right.X * MathHelper.ToRadians(angularSpeed); // Pitches Camera around model Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(angularSpeed); AddRotation = Quaternion.CreateFromYawPitchRoll(Yaw, 0, 0); ModelLoad.MRotation *= AddRotation; MOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation); } // Orbit (yaw) Camera around with model (only seeing back of model) public void cameraYaw(Vector3 axisYaw, float yaw) { ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget, Matrix.CreateFromAxisAngle(axisYaw, yaw)) + ModelLoad.camTarget; } // Raise camera above or below model's shoulders public void cameraPitch(Vector3 axisPitch, float pitch) { ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget, Matrix.CreateFromAxisAngle(axisPitch, pitch)) + ModelLoad.camTarget; } // Call in update method public void updateCamera() { cameraYaw(Vector3.Up, Yaw); cameraPitch(Vector3.Right, Pitch); } NOTE: I tried to use addPitch just like addRotation but it didn't work...
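
    A likely cause (offered here as an assumption, not a confirmed diagnosis) is that cameraPitch is always given the world-space Vector3.Right, while after the model has yawed its own right axis no longer matches the world's; pitching around the wrong axis then shows up as the clock-like drift. A sketch of pitching around the model's current right axis instead, reusing the asker's names:

        // Sketch: derive the pitch axis from the model's current orientation each frame.
        public void updateCamera()
        {
            Vector3 modelRight = Vector3.Transform(Vector3.Right, ModelLoad.MRotation);
            modelRight.Normalize();

            cameraYaw(Vector3.Up, Yaw);
            cameraPitch(modelRight, Pitch);   // pitch around the model's right axis, not the world's
        }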

    Read the article

  • SlimDX and Parsing .X Files

    - by P. Avery
    I'm trying to parse a .x file using SlimDX. I can create the XFile object and register templates but I'm having problems with the enumeration object. The enumeration object has a child count of 0 for a file I know to have valid data. Here is code to create file, enumeration, and data objects: public void Parse(string filename, string templates, ref Frame aParam) { XFile xfile = null; XFileEnumerationObject enumObj = null; XFileData dataObj = null; // create file object xfile = new XFile(); // register templates if (xfile.RegisterTemplates(XFile.DefaultTemplates).IsFailure) { Console.WriteLine(Result.Last); xfile.Dispose(); return; } // create enumeration object enumObj = xfile.CreateEnumerationObject(filename, System.Runtime.InteropServices.CharSet.Auto); if (enumObj == null) { xfile.Dispose(); return; } // get child count( returns 0 here ) long ncElements = enumObj.ChildCount; for (int i = 0; i < ncElements; ++i) { // never reached... dataObj = enumObj.GetChild(i); if (dataObj.IsReference) continue; try { Parse(dataObj, ref aParam); } catch (Exception e) { e.Write(); } finally { dataObj.Dispose(); } } enumObj.Dispose(); xfile.Dispose(); } ...There are no exceptions thrown by this function...the child count is 0 so the conditional loop breaks right away, the file objects are disposed of and the function returns... Here is .x file...a simple cube: xof 0303txt 0032 Frame Root { FrameTransformMatrix { 1.000000, 0.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 0.000000, 1.000000;; } Frame Cube { FrameTransformMatrix { 1.000000, 0.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 0.000000, 1.000000;; } Mesh Cube{ //Cube Mesh 36; -1.000000; 1.000000; 1.000000;, -1.000000;-1.000000; 1.000000;, 0.999999;-1.000001; 1.000000;, -1.000000;-1.000000;-1.000000;, 1.000000;-1.000000;-1.000000;, 0.999999;-1.000001; 1.000000;, 1.000000; 0.999999; 1.000000;, -1.000000; 1.000000; 1.000000;, 0.999999;-1.000001; 1.000000;, -1.000000; 1.000000;-1.000000;, -1.000000;-1.000000;-1.000000;, -1.000000; 1.000000; 1.000000;, -1.000000; 1.000000; 1.000000;, 1.000000; 0.999999; 1.000000;, 1.000000; 1.000000;-1.000000;, 1.000000; 0.999999; 1.000000;, 0.999999;-1.000001; 1.000000;, 1.000000;-1.000000;-1.000000;, -1.000000;-1.000000;-1.000000;, -1.000000;-1.000000; 1.000000;, -1.000000; 1.000000; 1.000000;, 1.000000; 1.000000;-1.000000;, 1.000000;-1.000000;-1.000000;, -1.000000; 1.000000;-1.000000;, 1.000000; 1.000000;-1.000000;, 1.000000; 0.999999; 1.000000;, 1.000000;-1.000000;-1.000000;, -1.000000; 1.000000;-1.000000;, -1.000000; 1.000000; 1.000000;, 1.000000; 1.000000;-1.000000;, -1.000000;-1.000000; 1.000000;, -1.000000;-1.000000;-1.000000;, 0.999999;-1.000001; 1.000000;, 1.000000;-1.000000;-1.000000;, -1.000000;-1.000000;-1.000000;, -1.000000; 1.000000;-1.000000;; 12; 3;0;1;2;, 3;3;4;5;, 3;6;7;8;, 3;9;10;11;, 3;12;13;14;, 3;15;16;17;, 3;18;19;20;, 3;21;22;23;, 3;24;25;26;, 3;27;28;29;, 3;30;31;32;, 3;33;34;35;; MeshNormals { //Mesh Normals 36; 0.000000;-0.000000; 1.000000;, 0.000000;-0.000000; 1.000000;, 0.000000;-0.000000; 1.000000;, -0.000000;-1.000000;-0.000000;, -0.000000;-1.000000;-0.000000;, -0.000000;-1.000000;-0.000000;, -0.000000;-0.000000; 1.000000;, -0.000000;-0.000000; 1.000000;, -0.000000;-0.000000; 1.000000;, -1.000000; 0.000000;-0.000000;, -1.000000; 0.000000;-0.000000;, -1.000000; 0.000000;-0.000000;, 0.000000; 1.000000; 0.000000;, 0.000000; 
1.000000; 0.000000;, 0.000000; 1.000000; 0.000000;, 1.000000;-0.000001; 0.000000;, 1.000000;-0.000001; 0.000000;, 1.000000;-0.000001; 0.000000;, -1.000000; 0.000000;-0.000000;, -1.000000; 0.000000;-0.000000;, -1.000000; 0.000000;-0.000000;, 0.000000; 0.000000;-1.000000;, 0.000000; 0.000000;-1.000000;, 0.000000; 0.000000;-1.000000;, 1.000000; 0.000000;-0.000000;, 1.000000; 0.000000;-0.000000;, 1.000000; 0.000000;-0.000000;, 0.000000; 1.000000; 0.000000;, 0.000000; 1.000000; 0.000000;, 0.000000; 1.000000; 0.000000;, -0.000000;-1.000000; 0.000000;, -0.000000;-1.000000; 0.000000;, -0.000000;-1.000000; 0.000000;, 0.000000;-0.000000;-1.000000;, 0.000000;-0.000000;-1.000000;, 0.000000;-0.000000;-1.000000;; 12; 3;0;1;2;, 3;3;4;5;, 3;6;7;8;, 3;9;10;11;, 3;12;13;14;, 3;15;16;17;, 3;18;19;20;, 3;21;22;23;, 3;24;25;26;, 3;27;28;29;, 3;30;31;32;, 3;33;34;35;; } //End of Mesh Normals MeshMaterialList { //Mesh Material List 1; 12; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0;; Material Material { 0.640000; 0.640000; 0.640000; 1.000000;; 96.078431; 0.500000; 0.500000; 0.500000;; 0.000000; 0.000000; 0.000000;; TextureFilename {"Yellow.jpg";} } } //End of Mesh Material List MeshTextureCoords UVMap{ //Mesh UV Coordinates 36; 0.000000; 1.000000;, 1.000000; 1.000000;, 1.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 1.000000;, 1.000000; 0.000000;, 0.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 1.000000;, 0.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 1.000000;, 1.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 1.000000;, 1.000000; 0.000000;, 1.000000; 1.000000;, 1.000000; 0.000000;, 0.000000; 0.000000;, 0.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 0.000000;, 0.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 0.000000;, 0.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 0.000000;, 0.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 1.000000;, 1.000000; 0.000000;; } //End of Mesh UV Coordinates } //End of Mesh Mesh } //End of Cube } //End of Root Frame

    Read the article

  • State of the art Culling and Batching techniques in rendering

    - by Kristian Skarseth
    I'm currently working on upgrading and restructuring an OpenGL render engine. The engine is used for visualising large scenes of architectural data (buildings with interiors), and the number of objects can become rather large. As is the case with any building, there are a lot of occluded objects within walls, and you naturally only see the objects that are in the same room as you, or the exterior if you are on the outside. This leaves a large number of objects that should be removed through occlusion culling and frustum culling. At the same time there is a lot of repetitive geometry that can be batched into render batches, and also a lot of objects that can be rendered with instanced rendering. The way I see it, it can be difficult to combine render batching and culling in an optimal fashion. If you batch too many objects in the same VBO it's difficult to cull the objects on the CPU in order to skip rendering that batch. At the same time, if you skip the culling on the CPU, a lot of objects will be processed by the GPU while they are not visible. If you skip batching completely in order to more easily cull on the CPU, there will be an unwantedly high number of render calls. I have done some research into existing techniques and theories as to how these problems are solved in modern graphics, but I have not been able to find any concrete solution. An idea a colleague and I came up with was restricting batches to objects relatively close to each other, e.g. all chairs in a room or everything within a radius of n meters. This could be simplified and optimized through the use of octrees. Does anyone have any pointers to techniques used for scene management, culling, batching etc. in state-of-the-art modern graphics engines?
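
    One pattern that addresses both constraints is to make the spatial partition the unit of batching: each octree leaf (or room) owns its own merged, static batches, culling is done per cell on the CPU, and only the batches belonging to visible cells are submitted. A rough sketch of the idea in C#-style code using XNA's bounding types (RenderCell, RenderBatch and the occlusion step are placeholders, not any particular engine's API):

        // Sketch: cull whole cells with one test each, then draw that cell's pre-merged batches.
        class RenderCell
        {
            public BoundingBox Bounds;              // world-space bounds of the cell
            public List<RenderBatch> Batches;       // cell geometry merged per material
        }

        void DrawVisible(List<RenderCell> cells, BoundingFrustum frustum)
        {
            foreach (var cell in cells)
            {
                if (frustum.Contains(cell.Bounds) == ContainmentType.Disjoint)
                    continue;                       // one CPU test skips the whole batch
                // An occlusion test per cell (e.g. against a room/portal graph) could go here.
                foreach (var batch in cell.Batches)
                    batch.Draw();
            }
        }

    This keeps draw-call counts low (one or a few calls per visible cell) while still letting the CPU discard large groups of hidden geometry with a single test.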

    Read the article

  • Blending textures together, texture fade over / fade in

    - by Deukalion
    What is the best way to render a texture overlap effect, like in this example? I want either the grass to fade into the snow texture, or the other way around, with no rough edges; somehow make them blend over, so the grass has a bit of snow or the snow has a bit of grass. How is this possible during runtime, if it's possible at all? I don't render this using the SpriteBatch, since the ground isn't made of rectangles (they can be moved). This is the way I render each shape (each one of those squares): // LoadTexture // Apply EffectPass device.DrawUserIndexedPrimitives<VertexPositionNormalTexture> ( PrimitiveType.TriangleList, render.Item.Points, // Array of VertexPositionNormalTexture 0, render.Item.Points.Length, render.Item.Indexes, // Array of int indexes (triangulation) 0, render.Item.Indexes.Length / 3, VertexPositionNormalTexture.VertexDeclaration );
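
    If baking the transition tiles ahead of time is acceptable, one simple runtime-friendly option is to generate the border textures by lerping grass and snow through a grayscale blend mask (a pixel shader doing the same lerp per pixel is the dynamic alternative). A sketch of the CPU-side version with XNA's Texture2D; the mask texture and the assumption that all three textures share a size are mine, not the asker's:

        // Sketch: build a transition texture by blending two tiles through a mask.
        Texture2D BlendTextures(GraphicsDevice device, Texture2D grass, Texture2D snow, Texture2D mask)
        {
            int count = grass.Width * grass.Height;
            var g = new Color[count]; var s = new Color[count]; var m = new Color[count];
            grass.GetData(g); snow.GetData(s); mask.GetData(m);

            var blended = new Color[count];
            for (int i = 0; i < count; i++)
            {
                float t = m[i].R / 255f;                 // 0 = all grass, 1 = all snow
                blended[i] = Color.Lerp(g[i], s[i], t);
            }

            var result = new Texture2D(device, grass.Width, grass.Height);
            result.SetData(blended);
            return result;
        }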

    Read the article

  • How to store a shmup level?

    - by pek
    I am developing a 2D shmup (i.e. Aero Fighters) and I was wondering what are the various ways to store a level. Assuming that enemies are defined in their own xml file, how would you define when an enemy spawns in the level? Would it be based on time? Updates? Distance? Currently I do this based on "level time" (the amount of time the level is running - pausing doesn't update the time). Here is an example (the serialization was done by XNA): <?xml version="1.0" encoding="utf-8"?> <XnaContent xmlns:level="pekalicious.xanor.XanorContentShared.content.level"> <Asset Type="level:Level"> <Enemies> <Enemy> <EnemyType>data/enemies/smallenemy</EnemyType> <SpawnTime>PT0S</SpawnTime> <NumberOfSpawns>60</NumberOfSpawns> <SpawnOffset>PT0.2S</SpawnOffset> </Enemy> <Enemy> <EnemyType>data/enemies/secondenemy</EnemyType> <SpawnTime>PT0S</SpawnTime> <NumberOfSpawns>10</NumberOfSpawns> <SpawnOffset>PT0.5S</SpawnOffset> </Enemy> <Enemy> <EnemyType>data/enemies/secondenemy</EnemyType> <SpawnTime>PT20S</SpawnTime> <NumberOfSpawns>10</NumberOfSpawns> <SpawnOffset>PT0.5S</SpawnOffset> </Enemy> <Enemy> <EnemyType>data/enemies/boss1</EnemyType> <SpawnTime>PT30S</SpawnTime> <NumberOfSpawns>1</NumberOfSpawns> <SpawnOffset>PT0S</SpawnOffset> </Enemy> </Enemies> </Asset> </XnaContent> Each Enemy element is basically a wave of specific enemy types. The type is defined in EnemyType while SpawnTime is the "level time" this wave should appear. NumberOfSpawns and SpawnOffset is the number of enemies that will show up and the time it takes between each spawn respectively. This could be a good idea or there could be better ones out there. I'm not sure. I would like to see some opinions and ideas. I have two problems with this: spawning an enemy correctly and creating a level editor. The level editor thing is an entirely different problem (which I will probably post in the future :P). As for spawning correctly, the problem lies in the fact that I have a variable update time and so I need to make sure I don't miss an enemy spawn because the spawn offset is too small, or because the update took a little more time. I kinda fixed it for the most part, but it seems to me that the problem is with how I store the level. So, any ideas? Comments? Thank you in advance.
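
    For the variable-update-time concern specifically, a catch-up loop that spawns every enemy whose scheduled time has already passed (rather than testing a single window per frame) cannot miss spawns, no matter how long an update took. A sketch against the asker's wave format (the EnemyWave type, SpawnedCount field and SpawnEnemy call are illustrative):

        // Sketch: spawn all enemies of a wave whose scheduled time is <= current level time.
        void UpdateWave(EnemyWave wave, TimeSpan levelTime)
        {
            while (wave.SpawnedCount < wave.NumberOfSpawns)
            {
                TimeSpan due = wave.SpawnTime + TimeSpan.FromTicks(wave.SpawnOffset.Ticks * wave.SpawnedCount);
                if (levelTime < due)
                    break;                    // next spawn is still in the future
                SpawnEnemy(wave.EnemyType);   // several spawns can happen in one update if needed
                wave.SpawnedCount++;
            }
        }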

    Read the article

  • Skyrim Nexus Mods on Xbox 360 by use of dawnguard?

    - by user17895
    I think it's possible. I opened up the Dawnguard marketplace content and it consists of 3 files: dawnguard.bsa <- the mod, dawnguard.esp <- the mod's installing file, and spa.bin <- I don't know what this is for. It has been confirmed that you can use the top 2 files on PC for a not fully functional Dawnguard (barely functional, to be exact), and if we could just replace or add a few other .bsa and .esp files to this marketplace content we could get mods up and running on Xbox, although I need confirmation on this. I also have no clue what the spa.bin file is for; I need to examine it further. Furthermore, this is adding a few non-distributed files to marketplace content and won't get you booted from XBL. Also, if anyone wants to examine these files for further information I will gladly share them with you. If you have any information or answers please email me at [email protected]. Thanks

    Read the article

  • Decal implementation

    - by dreta
    I had issues finding information about decals, so maybe this question will help others. The implementation is for a forward renderer. Could somebody confirm whether I got the decal implementation right? You define a cube of any dimensions that defines the projection volume in common space. You check for triangle intersection with the defined cube to receive the triangles that the projection will affect. You clip these triangles and save them. You then use matrix tricks to calculate UV coordinates for the saved triangles that reference the texture you're projecting. To do this you take the vectors representing the height, width and depth of the cube in common space, so that e.g. the bottom left corner is the origin. You put those in a matrix as the i, j, k unit vectors, set the translation for the cube, then you invert this matrix. You multiply the vertices of the saved triangles by this matrix; that way you get their coordinates inside a 0-to-1 sized cube that you use as the UV coordinates. This way you have the original triangles you're projecting onto and you have UV coordinates for them (the UV coordinates reference the texture you're projecting). Then you re-render the saved triangles onto the scene and they overwrite the area of projection with the projected image. Now the questions that I couldn't find answers for: Is the last point right? I've never done software clipping, but it seems error-prone enough, due to limited precision, that there'll be some z-fighting occurring for the projected texture. Also, is the way of getting the UV coordinates correct?
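
    The UV step described above can be written compactly: put the cube's width/height/depth vectors and its corner into a matrix, invert it, and transform each saved vertex into the cube's 0-to-1 space. A sketch using XNA-style row-vector matrices (the choice of which two axes become U and V depends on the projection direction and is an assumption here):

        // Sketch: transform a world-space vertex into the decal cube's unit space and read UVs from it.
        Vector2 DecalUv(Vector3 worldVertex, Vector3 corner, Vector3 width, Vector3 height, Vector3 depth)
        {
            Matrix cubeSpace = new Matrix(
                width.X,  width.Y,  width.Z,  0,
                height.X, height.Y, height.Z, 0,
                depth.X,  depth.Y,  depth.Z,  0,
                corner.X, corner.Y, corner.Z, 1);

            Matrix toUnit = Matrix.Invert(cubeSpace);
            Vector3 local = Vector3.Transform(worldVertex, toUnit);  // inside the cube this lands in [0,1]^3
            return new Vector2(local.X, local.Y);                    // two of the axes serve as the UVs
        }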

    Read the article

  • Something other than Vertex Welding with Texture Atlas?

    - by Tim Winter
    What options (in C# with XNA) would there be for texture usage in a procedurally generated 3D world made of cubes to increase performance? Yes, it's like Minecraft. I've been doing a texture atlas and rendering faces individually (4 vertices per face), but I've also read in a couple of places about using texture wrapping with two 1D atlases to merge adjacent faces with the same texture. If two or more adjacent faces share the same image, it'd be quite easy to wrap in this way, reducing vertices by a large amount. My problem with this is having too many textures, swapping too often, and many image-related things like non-power-of-2 images. Is there a middle-ground option between the 1D texture atlas trick and rendering 4 vertices per cube face? This is a picture of what I have currently (in wireframe). 4 vertices per face seems extremely inefficient to me.

    Read the article

  • How to get tilemap transparency color working with TiledLib's Demo implementation?

    - by Adam LaBranche
    So the problem I'm having is that when using Nick Gravelyn's tiledlib pipeline for reading and drawing tmx maps in XNA, the transparency color I set in Tiled's editor will work in the editor, but when I draw it the color that's supposed to become transparent still draws. The closest things to a solution that I've found are - 1) Change my sprite batch's BlendState to NonPremultiplied (found this in a buried Tweet). 2) Get the pixels that are supposed to be transparent at some point then Set them all to transparent. Solution 1 didn't work for me, and solution 2 seems hacky and not a very good way to approach this particular problem, especially since it looks like the custom pipeline processor reads in the transparent color and sets it to the color key for transparency according to the code, just something is going wrong somewhere. At least that's what it looks like the code is doing. TileSetContent.cs if (imageNode.Attributes["trans"] != null) { string color = imageNode.Attributes["trans"].Value; string r = color.Substring(0, 2); string g = color.Substring(2, 2); string b = color.Substring(4, 2); this.ColorKey = new Color((byte)Convert.ToInt32(r, 16), (byte)Convert.ToInt32(g, 16), (byte)Convert.ToInt32(b, 16)); } ... TiledHelpers.cs // build the asset as an external reference OpaqueDataDictionary data = new OpaqueDataDictionary(); data.Add("GenerateMipMaps", false); data.Add("ResizetoPowerOfTwo", false); data.Add("TextureFormat", TextureProcessorOutputFormat.Color); data.Add("ColorKeyEnabled", tileSet.ColorKey.HasValue); data.Add("ColorKeyColor", tileSet.ColorKey.HasValue ? tileSet.ColorKey.Value : Microsoft.Xna.Framework.Color.Magenta); tileSet.Texture = context.BuildAsset<Texture2DContent, Texture2DContent>( new ExternalReference<Texture2DContent>(path), null, data, null, asset); ... I can share more code as well if it helps to understand my problem. Thank you.
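
    For what it's worth, the asker's option 2 (a load-time color-key pass) is only a few lines in XNA and can serve as a stopgap while the pipeline-side color key is being debugged; it is a workaround, not the proper fix:

        // Sketch: force every pixel matching the transparent color to fully transparent after loading.
        void ApplyColorKey(Texture2D texture, Color key)
        {
            var pixels = new Color[texture.Width * texture.Height];
            texture.GetData(pixels);
            for (int i = 0; i < pixels.Length; i++)
            {
                if (pixels[i].R == key.R && pixels[i].G == key.G && pixels[i].B == key.B)
                    pixels[i] = Color.Transparent;
            }
            texture.SetData(pixels);
        }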

    Read the article

  • Away3D & Directional Light w/ Rotating Meshes

    - by seethru
    This is likely a stupid error but I can't seem to find what I've done wrong. I've got a simple scene with 10 cylinders rotating at a default speed. If I grab one of these cylinders I can rotate it in the opposite direction or at a greater speed. I have a single directional light in the scene. It would appear that the directional light is only calculated at initialization and not on further frames. The shadow created by the light rotates with the cylinder giving the impression that the light is rotating when it isn't. Camera & Light Initialization _view = new View3D(); addChild(_view); _view.antiAlias = 4; _view.backgroundColor = 0xFFFFFF; _view.camera.z = -850; _view.camera.y = 0; _view.camera.x = 0; _view.camera.lookAt(new Vector3D()); _view.camera.lens = new PerspectiveLens(15); _view.mousePicker = PickingType.RAYCAST_BEST_HIT; _light = new DirectionalLight(); _light.z = -850; _light.direction = new Vector3D(1, 1, 1); _light.color = 0xFFFFFF; _light.ambient = 0.1; _light.diffuse = 0.7; _view.scene.addChild(_light); Mesh and Material creation var material:TextureMaterial = new TextureMaterial(createPow2Texture(sprite, _colors[i]) , true, false, true); material.animateUVs = true; material.lightPicker = _lightPicker; cylinder = new Mesh(new CylinderGeometry(radius, radius, 13, 70, 1, true, true), material); cylinder.subMeshes[0].scaleU = spriteWidth / sprite.width; cylinder.y = y; cylinder.mouseEnabled = true; cylinder.pickingCollider = PickingColliderType.AS3_BEST_HIT; cylinder.addEventListener(MouseEvent3D.MOUSE_OVER, onMouseOverMesh); cylinder.addEventListener(MouseEvent3D.MOUSE_MOVE, onMouseOverMesh); cylinder.addEventListener(MouseEvent3D.MOUSE_OUT, onMouseOutMesh); _cylinders.push(cylinder); Frame private function onEnterFrame(event:Event):void { for each (var mesh:Mesh in _cylinders) { if (mesh == _mouseOverMesh) continue; mesh.rotationY += 0.25; } _view.render(); }

    Read the article

  • How can I estimate the cost of creating a tile-set similar to HoM&M 2?

    - by Alexey Petrushin
    How do I estimate the cost of creating a tile-set similar to HoM&M 2? I'm mostly interested in the tile-set graphics only; no animation is needed, and the big images of towns and creatures can be done as quick and dirty pencil sketches. The quality of the tiles and their amount should be roughly the same as in HoM&M 2. Can you please give a rough estimate of how many man-hours it will take and how much it will cost?

    Read the article

  • Having a problem with texturing vertices in WebGL, think parameters are off in the image?

    - by mathacka
    I'm having a problem texturing a simple rectangle in my WebGL program, I have the parameters set as follows: gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, textureImage); I'm using this image: On the properties of this image it says it's 32 bit depth, so that should take care of the gl.UNSIGNED_BYTE, and I've tried both gl.RGBA and gl.RGB to see if it's not reading the transparency. It is a 32x32 pixel image, so it's power of 2. And I've tried almost all the combinations of formats and types, but I'm not sure if this is the answer or not. I'm getting these two errors in the chrome console: INVALID_VALUE: texImage2D: invalid image (index):101 WebGL: drawArrays: texture bound to texture unit 0 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering or is not 'texture complete'. Or the texture is Float or Half Float type with linear filtering while OES_float_linear or OES_half_float_linear extension is not enabled. the drawArrays function is simply: "gl.drawArrays(gl.TRIANGLES, 0, 6);" using 6 vertices to make a rectangle.

    Read the article

  • Taking a fixed direction on the hemisphere and projecting it onto the normal (OpenGL)

    - by Maik Xhani
    I am trying to perform sampling using hemisphere around a surface normal. I want to experiment with fixed directions (and maybe jitter slightly between frames). So I have those directions: vec3 sampleDirections[6] = {vec3(0.0f, 1.0f, 0.0f), vec3(0.0f, 0.5f, 0.866025f), vec3(0.823639f, 0.5f, 0.267617f), vec3(0.509037f, 0.5f, -0.700629f), vec3(-0.509037f, 0.5f, -0.700629), vec3(-0.823639f, 0.5f, 0.267617f)}; now I want the first direction to be projected on the normal and the others accordingly. I tried these 2 codes, both failing. This is what I used for random sampling (it doesn't seem to work well, the samples seem to be biased towards a certain direction) and I just used one of the fixed directions instead of s (here is the code of the random sample, when i used it with the fixed direction i didn't use theta and phi). vec3 CosWeightedRandomHemisphereDirection( vec3 n, float rand1, float rand2 ) float theta = acos(sqrt(1.0f-rand1)); float phi = 6.283185f * rand2; vec3 s = vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta)); vec3 v = normalize(cross(n,vec3(0.0072, 1.0, 0.0034))); vec3 u = cross(v, n); u = s.x*u; v = s.y*v; vec3 w = s.z*n; vec3 direction = u+v+w; return normalize(direction); } ** EDIT ** This is the new code vec3 FixedHemisphereDirection( vec3 n, vec3 sampleDir) { vec3 x; vec3 z; if(abs(n.x) < abs(n.y)){ if(abs(n.x) < abs(n.z)){ x = vec3(1.0f,0.0f,0.0f); }else{ x = vec3(0.0f,0.0f,1.0f); } }else{ if(abs(n.y) < abs(n.z)){ x = vec3(0.0f,1.0f,0.0f); }else{ x = vec3(0.0f,0.0f,1.0f); } } z = normalize(cross(x,n)); x = cross(n,z); mat3 M = mat3( x.x, n.x, z.x, x.y, n.y, z.y, x.z, n.z, z.z); return M*sampleDir; } So if my n = (0,0,1); and my sampleDir = (0,1,0); shouldn't the M*sampleDir be (0,0,1)? Cause that is what I was expecting.
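
    A convention-free sanity check is to build the basis as three explicit vectors and combine them without a matrix, so row/column order can't get in the way; with n = (0, 0, 1) and sampleDir = (0, 1, 0) the result is n, as expected. A sketch of that, written here in C#-style vector math with XNA's Vector3 rather than GLSL (the helper-axis choice is the usual trick for avoiding a degenerate cross product):

        // Sketch: orient a Y-up hemisphere sample around the surface normal n.
        Vector3 OrientSample(Vector3 n, Vector3 sampleDir)
        {
            Vector3 helper = Math.Abs(n.X) < 0.99f ? Vector3.UnitX : Vector3.UnitZ;
            Vector3 tangent = Vector3.Normalize(Vector3.Cross(helper, n));
            Vector3 bitangent = Vector3.Cross(n, tangent);

            // sampleDir.Y is the hemisphere's "up", so it scales the normal itself.
            return sampleDir.X * tangent + sampleDir.Y * n + sampleDir.Z * bitangent;
        }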

    Read the article

  • How can I support objects larger than a single tile in a 2D tile engine?

    - by Yheeky
    I'm currently working on a 2D engine containing an isometric tile map. It's running quite well, but I'm not sure if I've chosen the best approach for that kind of engine. To give you an idea of what I'm thinking about right now, let's have a look at a basic object for a tile map and its objects: public class TileMap { public List<MapRow> Rows = new List<MapRow>(); public int MapWidth = 50; public int MapHeight = 50; } public class MapRow { public List<MapCell> Columns = new List<MapCell>(); } public class MapCell { public int TileID { get; set; } } With those objects it's only possible to assign a tile to a single MapCell. What I want my engine to support is something like groups of MapCells, since I would like to add objects to my tile map (e.g. a house with a size of 2x2 tiles). How should I do that? Should I edit my MapCell object so that it has a reference to other related tiles, and how can I find an object when clicking on a single MapCell? Or should I take another approach, using a global container with all objects in it?
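
    One common approach is to keep the grid exactly as it is, but let every covered cell hold a reference back to the object it belongs to, while the object remembers its anchor cell and footprint; picking works on single cells and resolves to the shared object. A rough sketch extending the classes above (the MapObject type and Place helper are assumptions):

        // Sketch: multi-tile objects referenced from every MapCell they cover.
        public class MapObject
        {
            public int AnchorX, AnchorY;   // top-left cell of the footprint
            public int Width, Height;      // e.g. 2 x 2 for the house
        }

        public class MapCell
        {
            public int TileID { get; set; }
            public MapObject Occupant;     // null when no large object covers this cell
        }

        public void Place(MapObject obj, TileMap map)
        {
            for (int y = obj.AnchorY; y < obj.AnchorY + obj.Height; y++)
                for (int x = obj.AnchorX; x < obj.AnchorX + obj.Width; x++)
                    map.Rows[y].Columns[x].Occupant = obj;   // every covered cell points at the same object
        }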

    Read the article

  • Create edges in Blender

    - by Mikey
    I've worked with 3DS Max at university and am trying to learn Blender. My problem is that I know a lot of simple techniques from 3DS Max that I'm having trouble translating into Blender. So my question is: say I have a poly in the middle of a mesh and I want to split it in two, simply adding an edge between two edges. This would create two 5-gons, one on either side. It's a simple technique I use every now and then when I want to modify geometry. It's called "Edge connect" in 3DS Max. In Blender the only edge-connect method I can find is to create edge loops, which is not helpful when aiming at low-poly iPhone games. Is there an equivalent in Blender?

    Read the article

  • Extract derived 3D scaling from a 3D Sprite to set to a 2D billboard

    - by Bill Kotsias
    I am trying to get the derived position and scaling of a 3D Sprite and set them on a 2D Sprite. I have managed to do the first part like this: var p:Point = sprite3d.local3DToGlobal(new Vector3D(0,0,0)); billboard.x = p.x; billboard.y = p.y; But I can't get the scaling part right. I am trying this: var mat:Matrix3D = sprite3d.transform.getRelativeMatrix3D(stage); // get derived matrix(?) var scaleV:Vector3D = mat.decompose()[2]; // get scaling vector from derived matrix var scale:Number = scaleV.length; billboard.scaleX = scale; billboard.scaleY = scale; ...but the result is apparently wrong. PS: One might ask what I am trying to achieve. I am trying to create "billboard" 3D sprites, i.e. sprites which are affected by all 3D transformations except rotations, so that they always face the "camera".
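
    An alternative to decompose() that is easy to reason about: the derived scale along each axis is just the length of the corresponding basis row of the object's global matrix. The arithmetic is the same in any 4x4 matrix layout; a sketch in C#-style code with XNA's Matrix (for an AS3 Matrix3D the same values sit in rawData):

        // Sketch: read the accumulated scale straight from the matrix's basis axes.
        Vector3 ExtractScale(Matrix world)
        {
            return new Vector3(
                new Vector3(world.M11, world.M12, world.M13).Length(),   // X axis
                new Vector3(world.M21, world.M22, world.M23).Length(),   // Y axis
                new Vector3(world.M31, world.M32, world.M33).Length());  // Z axis
        }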

    Read the article

  • Atmospheric scattering sky from space artifacts

    - by ollipekka
    I am in the process of implementing atmospheric scattering for planets seen from space. I have been using Sean O'Neil's shaders from http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter16.html as a starting point. I have pretty much the same problem related to fCameraAngle, except with the SkyFromSpace shader as opposed to the GroundFromSpace shader, as here: http://www.gamedev.net/topic/621187-sean-oneils-atmospheric-scattering/ I get strange artifacts with the sky-from-space shader when not using fCameraAngle = 1 in the inner loop. What is the cause of these artifacts? The artifacts disappear when fCameraAngle is limited to 1. I also seem to lack the hue that is present in O'Neil's sandbox (http://sponeil.net/downloads.htm). Camera position X=0, Y=0, Z=500. GroundFromSpace on the left, SkyFromSpace on the right. Camera position X=500, Y=500, Z=500. GroundFromSpace on the left, SkyFromSpace on the right. I've found that the camera angle seems to be handled very differently depending on the source: in the original shaders the camera angle in SkyFromSpaceShader is calculated as: float fCameraAngle = dot(v3Ray, v3SamplePoint) / fHeight; whereas in the ground-from-space shader the camera angle is calculated as: float fCameraAngle = dot(-v3Ray, v3Pos) / length(v3Pos); However, various sources online tinker with negating the ray. Why is this? Here is a C# Windows.Forms project that demonstrates the problem and that I've used to generate the images: https://github.com/ollipekka/AtmosphericScatteringTest/ Update: I have found out from the ScatterCPU project found on O'Neil's site that the camera ray is negated when the camera is above the point being shaded, so that the scattering is calculated from the point to the camera. Changing the ray direction indeed does remove the artifacts, but introduces other problems, as illustrated here: Furthermore, in the ScatterCPU project, O'Neil guards against situations where the optical depth for light is less than zero: float fLightDepth = Scale(fLightAngle, fScaleDepth); if (fLightDepth < float.Epsilon) { continue; } As pointed out in the comments, along with these new artifacts this still leaves the question: what is wrong with the images where the camera is positioned at 500, 500, 500? It feels like the halo is focused on a completely wrong part of the planet. One would expect the light to be closer to the spot where the sun hits the planet, rather than where it changes from day to night. The GitHub project has been updated to reflect the changes in this update.

    Read the article

  • MD5 vertex skinning problem extending to multi-jointed skeleton (GPU Skinning)

    - by Soapy
    Currently I'm trying to implement GPU skinning in my project. So far I have achieved single joint translation and rotation, and multi-jointed translation. The problem arises when I try to rotate a multi-jointed skeleton. The image above shows the current progress. The left image shows how the model should deform. The middle image shows how it deforms in my project. The right shows a better deform (still not right) inverting a certain value, which I will explain below. The way I get my animation data is by exporting it to the MD5 format (MD5mesh for mesh data and MD5anim for animation data). When I come to parse the animation data, for each frame, I check if the bone has a parent, if not, the data is passed in as is from the MD5anim file. If it does have a parent, I transform the bones position by the parents orientation, and the add this with the parents translation. Then the parent and child orientations get concatenated. This is covered at this website. if (Parent < 0){ ... // Save this data without editing it } else { Math3::vec3 rpos; Math3::quat pq = Parent.Quaternion; Math3::quat pqi(pq); pqi.InvertUnitQuat(); pqi.Normalise(); Math3::quat::RotateVector3(rpos, pq, jv); Math3::vec3 npos(rpos + Parent.Pos); this->Translation = npos; Math3::quat nq = pq * jq; nq.Normalise(); this->Quaternion = nq; } And to achieve the image to the right, all I need to do is to change Math3::quat::RotateVector3(rpos, pq, jv); to Math3::quat::RotateVector3(rpos, pqi, jv);, why is that? And this is my skinning shader. SkinningShader.vert #version 330 core smooth out vec2 vVaryingTexCoords; smooth out vec3 vVaryingNormals; smooth out vec4 vWeightColor; uniform mat4 MV; uniform mat4 MVP; uniform mat4 Pallete[55]; uniform mat4 invBindPose[55]; layout(location = 0) in vec3 vPos; layout(location = 1) in vec2 vTexCoords; layout(location = 2) in vec3 vNormals; layout(location = 3) in int vSkeleton[4]; layout(location = 4) in vec3 vWeight; void main() { vec4 wpos = vec4(vPos, 1.0); vec4 norm = vec4(vNormals, 0.0); vec4 weight = vec4(vWeight, (1.0f-(vWeight[0] + vWeight[1] + vWeight[2]))); normalize(weight); mat4 BoneTransform; for(int i = 0; i < 4; i++) { if(vSkeleton[i] != -1) { if(i == 0) { // These are interchangable for some reason // BoneTransform = ((invBindPose[vSkeleton[i]] * Pallete[vSkeleton[i]]) * weight[i]); BoneTransform = ((Pallete[vSkeleton[i]] * invBindPose[vSkeleton[i]]) * weight[i]); } else { // These are interchangable for some reason // BoneTransform += ((invBindPose[vSkeleton[i]] * Pallete[vSkeleton[i]]) * weight[i]); BoneTransform += ((Pallete[vSkeleton[i]] * invBindPose[vSkeleton[i]]) * weight[i]); } } } wpos = BoneTransform * wpos; vWeightColor = weight; vVaryingTexCoords = vTexCoords; vVaryingNormals = normalize(vec3(vec4(vNormals, 0.0) * MV)); gl_Position = wpos * MVP; } The Pallete matrices are the matrices calculated using the above code (a rotation and translation matrix get created from the translation and quaternion). The invBindPose matrices are simply the inverted matrices created from the joints in the MD5mesh file. Update 1 I looked at GLM to compare the values I get with my own implementation. They turn out to be exactly the same. So now i'm checking if there's a problem with matrix creation... Update 2 Looked at GLM again to compare matrix creation using quaternions. Turns out that's not the problem either.

    Read the article

  • C# XNA: Efficient mesh building algorithm for voxel-based terrain ("top" outside layer only, non-destructible)

    - by Tim Hatch
    To put this bluntly, for non-destructible/non-constructible voxel style terrain, are generated meshes handled much better than instancing? Is there another method to achieve millions of visible quad faces per scene with ease? If generated meshes per chunk is the way to go, what kind of algorithm might I want to use based on only EVER needing the outer layer rendered? I'm using 3D Perlin Noise for terrain generation (for overhangs/caves/etc). The layout is fantastic, but even for around 20k visible faces, it's quite slow using instancing (whether it's one big draw call or multiple smaller chunks). I've simplified it to the point of removing non-visible cubes and only having the top faces of my cube-like terrain be rendered, but with 20k quad instances, it's still pretty sluggish (30fps on my machine). My goal is for the world to be made using quite small cubes. Where multiple games (IE: Minecraft) have the player 1x1 cube in width/length and 2 high, I'm shooting for 6x6 width/length and 9 high. With a lot of advantages as far as gameplay goes, it also means I could quite easily have a single scene with millions of truly visible quads. So, I have been trying to look into changing my method from instancing to mesh generation on a chunk by chunk basis. Do video cards handle this type of processing better than separate quads/cubes through instancing? What kind of existing algorithms should I be looking into? I've seen references to marching cubes a few times now, but I haven't spent much time investigating it since I don't know if it's the better route for my situation or not. I'm also starting to doubt my need of using 3D Perlin noise for terrain generation since I won't want the kind of depth it would seem best at. I just like the idea of overhangs and occasional cave-like structures, but could find no better 'surface only' algorithms to cover that. If anyone has any better suggestions there, feel free to throw them at me too. Thanks, Mythics
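
    For static voxel terrain the usual answer is indeed one generated mesh per chunk: append only the faces that survive the visibility pass into a single vertex/index list, upload it once into a VertexBuffer/IndexBuffer, and draw each visible chunk with one call. A minimal sketch of assembling just the top faces (XNA-flavoured; HeightAt and IsTopFaceVisible stand in for the asker's own terrain queries):

        // Sketch: build one chunk's top-face geometry into shared vertex/index lists.
        void BuildChunkMesh(int chunkSize, float cubeSize, List<VertexPositionTexture> verts, List<int> indices)
        {
            for (int x = 0; x < chunkSize; x++)
            for (int z = 0; z < chunkSize; z++)
            {
                int y = HeightAt(x, z);                     // hypothetical: topmost solid cube in this column
                if (!IsTopFaceVisible(x, y, z)) continue;   // hypothetical visibility test

                int b = verts.Count;
                verts.Add(new VertexPositionTexture(new Vector3(x, y + 1, z) * cubeSize, new Vector2(0, 0)));
                verts.Add(new VertexPositionTexture(new Vector3(x + 1, y + 1, z) * cubeSize, new Vector2(1, 0)));
                verts.Add(new VertexPositionTexture(new Vector3(x + 1, y + 1, z + 1) * cubeSize, new Vector2(1, 1)));
                verts.Add(new VertexPositionTexture(new Vector3(x, y + 1, z + 1) * cubeSize, new Vector2(0, 1)));

                indices.Add(b); indices.Add(b + 1); indices.Add(b + 2);   // two triangles per face
                indices.Add(b); indices.Add(b + 2); indices.Add(b + 3);
            }
            // The filled lists are uploaded once per chunk into a VertexBuffer/IndexBuffer and reused every frame.
        }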

    Read the article

  • How can I compile SM 3.0 effects in D3D11 in slimdx?

    - by jacker
    var bytecode = ShaderBytecode.CompileFromFile("shaders\\testShader.fx", "fx_5_0", ShaderFlags.None, SlimDX.D3DCompiler.EffectFlags.None, null, null, out str); var effect = new SlimDX.Direct3D11.Effect(gpu.Device, bytecode); Works fine but if I try to use another shader model like 4.0 or 3.0 it throws an error on the new effect creation: E_FAIL: An undetermined error occurred (-2147467259) How do I compile older shaders? And I've read about device context but I can't find any information on how to use them to maintain DX9 compatibility.

    Read the article

  • Vertex buffer acting strange? [on hold]

    - by Ryan Capote
    I'm having a strange problem, and I don't know what could be causing it. My current code is identical to how I've done this before. I'm trying to render a rectangle using VBO and orthographic projection.   My results:     What I expect: 3x3 rectangle in the top left corner   #include <stdio.h> #include <GL\glew.h> #include <GLFW\glfw3.h> #include "lodepng.h"   static const int FALSE = 0; static const int TRUE = 1;   static const char* VERT_SHADER =     "#version 330\n"       "layout(location=0) in vec4 VertexPosition; "     "layout(location=1) in vec2 UV;"     "uniform mat4 uProjectionMatrix;"     /*"out vec2 TexCoords;"*/       "void main(void) {"     "    gl_Position = uProjectionMatrix*VertexPosition;"     /*"    TexCoords = UV;"*/     "}";   static const char* FRAG_SHADER =     "#version 330\n"       /*"uniform sampler2D uDiffuseTexture;"     "uniform vec4 uColor;"     "in vec2 TexCoords;"*/     "out vec4 FragColor;"       "void main(void) {"    /* "    vec4 texel = texture2D(uDiffuseTexture, TexCoords);"     "    if(texel.a <= 0) {"     "         discard;"     "    }"     "    FragColor = texel;"*/     "    FragColor = vec4(1.f);"     "}";   static int g_running; static GLFWwindow *gl_window; static float gl_projectionMatrix[16];   /*     Structures */ typedef struct _Vertex {     float x, y, z, w;     float u, v; } Vertex;   typedef struct _Position {     float x, y; } Position;   typedef struct _Bitmap {     unsigned char *pixels;     unsigned int width, height; } Bitmap;   typedef struct _Texture {     GLuint id;     unsigned int width, height; } Texture;   typedef struct _VertexBuffer {     GLuint bufferObj, vertexArray; } VertexBuffer;   typedef struct _ShaderProgram {     GLuint vertexShader, fragmentShader, program; } ShaderProgram;   /*   http://en.wikipedia.org/wiki/Orthographic_projection */ void createOrthoProjection(float *projection, float width, float height, float far, float near)  {       const float left = 0;     const float right = width;     const float top = 0;     const float bottom = height;          projection[0] = 2.f / (right - left);     projection[1] = 0.f;     projection[2] = 0.f;     projection[3] = -(right+left) / (right-left);     projection[4] = 0.f;     projection[5] = 2.f / (top - bottom);     projection[6] = 0.f;     projection[7] = -(top + bottom) / (top - bottom);     projection[8] = 0.f;     projection[9] = 0.f;     projection[10] = -2.f / (far-near);     projection[11] = (far+near)/(far-near);     projection[12] = 0.f;     projection[13] = 0.f;     projection[14] = 0.f;     projection[15] = 1.f; }   /*     Textures */ void loadBitmap(const char *filename, Bitmap *bitmap, int *success) {     int error = lodepng_decode32_file(&bitmap->pixels, &bitmap->width, &bitmap->height, filename);       if (error != 0) {         printf("Failed to load bitmap. 
");         printf(lodepng_error_text(error));         success = FALSE;         return;     } }   void destroyBitmap(Bitmap *bitmap) {     free(bitmap->pixels); }   void createTexture(Texture *texture, const Bitmap *bitmap) {     texture->id = 0;     glGenTextures(1, &texture->id);     glBindTexture(GL_TEXTURE_2D, texture);       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);       glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bitmap->width, bitmap->height, 0,              GL_RGBA, GL_UNSIGNED_BYTE, bitmap->pixels);       glBindTexture(GL_TEXTURE_2D, 0); }   void destroyTexture(Texture *texture) {     glDeleteTextures(1, &texture->id);     texture->id = 0; }   /*     Vertex Buffer */ void createVertexBuffer(VertexBuffer *vertexBuffer, Vertex *vertices) {     glGenBuffers(1, &vertexBuffer->bufferObj);     glGenVertexArrays(1, &vertexBuffer->vertexArray);     glBindVertexArray(vertexBuffer->vertexArray);       glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer->bufferObj);     glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * 6, (const GLvoid*)vertices, GL_STATIC_DRAW);       const unsigned int uvOffset = sizeof(float) * 4;       glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);     glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)uvOffset);       glEnableVertexAttribArray(0);     glEnableVertexAttribArray(1);       glBindBuffer(GL_ARRAY_BUFFER, 0);     glBindVertexArray(0); }   void destroyVertexBuffer(VertexBuffer *vertexBuffer) {     glDeleteBuffers(1, &vertexBuffer->bufferObj);     glDeleteVertexArrays(1, &vertexBuffer->vertexArray); }   void bindVertexBuffer(VertexBuffer *vertexBuffer) {     glBindVertexArray(vertexBuffer->vertexArray);     glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer->bufferObj); }   void drawVertexBufferMode(GLenum mode) {     glDrawArrays(mode, 0, 6); }   void drawVertexBuffer() {     drawVertexBufferMode(GL_TRIANGLES); }   void unbindVertexBuffer() {     glBindVertexArray(0);     glBindBuffer(GL_ARRAY_BUFFER, 0); }   /*     Shaders */ void compileShader(ShaderProgram *shaderProgram, const char *vertexSrc, const char *fragSrc) {     GLenum err;     shaderProgram->vertexShader = glCreateShader(GL_VERTEX_SHADER);     shaderProgram->fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);       if (shaderProgram->vertexShader == 0) {         printf("Failed to create vertex shader.");         return;     }       if (shaderProgram->fragmentShader == 0) {         printf("Failed to create fragment shader.");         return;     }       glShaderSource(shaderProgram->vertexShader, 1, &vertexSrc, NULL);     glCompileShader(shaderProgram->vertexShader);     glGetShaderiv(shaderProgram->vertexShader, GL_COMPILE_STATUS, &err);       if (err != GL_TRUE) {         printf("Failed to compile vertex shader.");         return;     }       glShaderSource(shaderProgram->fragmentShader, 1, &fragSrc, NULL);     glCompileShader(shaderProgram->fragmentShader);     glGetShaderiv(shaderProgram->fragmentShader, GL_COMPILE_STATUS, &err);       if (err != GL_TRUE) {         printf("Failed to compile fragment shader.");         return;     }       shaderProgram->program = glCreateProgram();     glAttachShader(shaderProgram->program, shaderProgram->vertexShader);     glAttachShader(shaderProgram->program, shaderProgram->fragmentShader);     
glLinkProgram(shaderProgram->program);          glGetProgramiv(shaderProgram->program, GL_LINK_STATUS, &err);       if (err != GL_TRUE) {         printf("Failed to link shader.");         return;     } }   void destroyShader(ShaderProgram *shaderProgram) {     glDetachShader(shaderProgram->program, shaderProgram->vertexShader);     glDetachShader(shaderProgram->program, shaderProgram->fragmentShader);       glDeleteShader(shaderProgram->vertexShader);     glDeleteShader(shaderProgram->fragmentShader);       glDeleteProgram(shaderProgram->program); }   GLuint getUniformLocation(const char *name, ShaderProgram *program) {     GLuint result = 0;     result = glGetUniformLocation(program->program, name);       return result; }   void setUniformMatrix(float *matrix, const char *name, ShaderProgram *program) {     GLuint loc = getUniformLocation(name, program);       if (loc == -1) {         printf("Failed to get uniform location in setUniformMatrix.\n");         return;     }       glUniformMatrix4fv(loc, 1, GL_FALSE, matrix); }   /*     General functions */ static int isRunning() {     return g_running && !glfwWindowShouldClose(gl_window); }   static void initializeGLFW(GLFWwindow **window, int width, int height, int *success) {     if (!glfwInit()) {         printf("Failed it inialize GLFW.");         *success = FALSE;        return;     }          glfwWindowHint(GLFW_RESIZABLE, 0);     *window = glfwCreateWindow(width, height, "Alignments", NULL, NULL);          if (!*window) {         printf("Failed to create window.");         glfwTerminate();         *success = FALSE;         return;     }          glfwMakeContextCurrent(*window);       GLenum glewErr = glewInit();     if (glewErr != GLEW_OK) {         printf("Failed to initialize GLEW.");         printf(glewGetErrorString(glewErr));         *success = FALSE;         return;     }       glClearColor(0.f, 0.f, 0.f, 1.f);     glViewport(0, 0, width, height);     *success = TRUE; }   int main(int argc, char **argv) {          int err = FALSE;     initializeGLFW(&gl_window, 480, 320, &err);     glDisable(GL_DEPTH_TEST);     if (err == FALSE) {         return 1;     }          createOrthoProjection(gl_projectionMatrix, 480.f, 320.f, 0.f, 1.f);          g_running = TRUE;          ShaderProgram shader;     compileShader(&shader, VERT_SHADER, FRAG_SHADER);     glUseProgram(shader.program);     setUniformMatrix(&gl_projectionMatrix, "uProjectionMatrix", &shader);       Vertex rectangle[6];     VertexBuffer vbo;     rectangle[0] = (Vertex){0.f, 0.f, 0.f, 1.f, 0.f, 0.f}; // Top left     rectangle[1] = (Vertex){3.f, 0.f, 0.f, 1.f, 1.f, 0.f}; // Top right     rectangle[2] = (Vertex){0.f, 3.f, 0.f, 1.f, 0.f, 1.f}; // Bottom left     rectangle[3] = (Vertex){3.f, 0.f, 0.f, 1.f, 1.f, 0.f}; // Top left     rectangle[4] = (Vertex){0.f, 3.f, 0.f, 1.f, 0.f, 1.f}; // Bottom left     rectangle[5] = (Vertex){3.f, 3.f, 0.f, 1.f, 1.f, 1.f}; // Bottom right       createVertexBuffer(&vbo, &rectangle);            bindVertexBuffer(&vbo);          while (isRunning()) {         glClear(GL_COLOR_BUFFER_BIT);         glfwPollEvents();                    drawVertexBuffer();                    glfwSwapBuffers(gl_window);     }          unbindVertexBuffer(&vbo);       glUseProgram(0);     destroyShader(&shader);     destroyVertexBuffer(&vbo);     glfwTerminate();     return 0; }

    Read the article

  • Collision Detection algorithms with early Collision exit

    - by Grieverheart
    I'm using collision detection in Monte Carlo simulations, and at the moment I'm using GJK, which is quite fast. I can't help thinking it could be done even faster, though. In the simulations, about 70% of the time GJK is run, it detects a collision; thus collisions outnumber non-collisions in my case. Most collision detection algorithms I know have an early non-collision exit test. Are there any collision detection algorithms that have an early collision exit instead of a non-collision one, and that could potentially be faster than GJK in the case of a collision?
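
    One cheap early accept that fits this case: if each convex shape carries a precomputed inscribed ("inner") sphere as well as its bounding sphere, overlapping inner spheres prove a collision outright, so GJK only runs when neither the accept nor the reject test fires. A sketch (the ConvexShape fields and RunGjk call are assumptions about what the simulation already has):

        // Sketch: sphere-based early accept / early reject wrapped around GJK.
        bool Collides(ConvexShape a, ConvexShape b)
        {
            float d2 = Vector3.DistanceSquared(a.Center, b.Center);

            float rIn = a.InnerRadius + b.InnerRadius;     // spheres fully contained in each shape
            if (d2 <= rIn * rIn) return true;              // definite collision, skip GJK

            float rOut = a.OuterRadius + b.OuterRadius;    // spheres fully containing each shape
            if (d2 > rOut * rOut) return false;            // definitely separated, skip GJK

            return RunGjk(a, b);                           // only the ambiguous band reaches GJK
        }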

    Read the article

  • Semi Fixed-timestep ported to javascript

    - by abernier
    In Gaffer's "Fix Your Timestep!" article, the author explains how to free your physics' loop from the paint one. Here is the final code, written in C: double t = 0.0; const double dt = 0.01; double currentTime = hires_time_in_seconds(); double accumulator = 0.0; State previous; State current; while ( !quit ) { double newTime = time(); double frameTime = newTime - currentTime; if ( frameTime > 0.25 ) frameTime = 0.25; // note: max frame time to avoid spiral of death currentTime = newTime; accumulator += frameTime; while ( accumulator >= dt ) { previousState = currentState; integrate( currentState, t, dt ); t += dt; accumulator -= dt; } const double alpha = accumulator / dt; State state = currentState*alpha + previousState * ( 1.0 - alpha ); render( state ); } I'm trying to implement this in JavaScript but I'm quite confused about the second while loop... Here is what I have for now (simplified): ... (function animLoop(){ ... while (accumulator >= dt) { // While? In a requestAnimation loop? Maybe if? ... } ... // render requestAnimationFrame(animLoop); // stand for the 1st while loop [OK] }()) As you can see, I'm not sure about the while loop inside the requestAnimation one... I thought replacing it with a if but I'm not sure it will be equivalent... Maybe some can help me.

    Read the article

  • 2D Tile-Based Concept Art App

    - by ashes999
    I'm making a bunch of 2D games (now and in the near future) that use a 2D, RPG-like interface. I would like to be able to quickly paint tiles down and drop character sprites to create concept art. Sure, I could do it in GIMP or Photoshop. But that would require manually adding each tile, layering on more tiles, cutting and pasting particular character sprites, etc. and I really don't need that level of granularity; I need a quick and fast way to churn out concept art. Is there a tool that I can use for this? Perhaps some sort of 2D tile editor which lets me draw sprites and tiles given that I can provide the graphics files.

    Read the article
