Search Results

Search found 25377 results on 1016 pages for 'development 4 0'.

Page 399/1016 | < Previous Page | 395 396 397 398 399 400 401 402 403 404 405 406  | Next Page >

  • Displaying performance data per engine subsystem

    - by liortal
    Our game (Android based) traces how long the world logic updates take and how long it takes to render a frame to the device screen. These traces are collected every frame and displayed at a constant interval (currently every second). I've seen games that display on-screen data for various engine subsystems, showing the time each one consumes either as text or as horizontal colored bars. How could I implement such a feature?
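    One way to build this (a sketch only; the class and method names here are my own, not from the post) is to wrap each subsystem in a stopwatch, accumulate the per-frame times, and report averages at the display interval, e.g. in C#:

      using System;
      using System.Collections.Generic;
      using System.Diagnostics;

      // Accumulates per-subsystem times and produces a summary once per interval.
      class SubsystemProfiler
      {
          private readonly Dictionary<string, double> accumulatedMs = new Dictionary<string, double>();
          private readonly Stopwatch intervalTimer = Stopwatch.StartNew();
          private readonly Stopwatch sectionTimer = new Stopwatch();
          private string current;
          private int frames;

          public void Begin(string subsystem) { current = subsystem; sectionTimer.Restart(); }

          public void End()
          {
              sectionTimer.Stop();
              double total;
              accumulatedMs.TryGetValue(current, out total);
              accumulatedMs[current] = total + sectionTimer.Elapsed.TotalMilliseconds;
          }

          // Call once per frame; returns a summary every 'intervalSeconds', otherwise null.
          public string EndFrame(double intervalSeconds)
          {
              frames++;
              if (intervalTimer.Elapsed.TotalSeconds < intervalSeconds) return null;
              var lines = new List<string>();
              foreach (var kv in accumulatedMs)
                  lines.Add(string.Format("{0}: {1:0.00} ms/frame", kv.Key, kv.Value / frames));
              accumulatedMs.Clear();
              frames = 0;
              intervalTimer.Restart();
              return string.Join("\n", lines);
          }
      }

    Each frame you would wrap the logic update in Begin("Logic")/End() and the render in Begin("Render")/End(), call EndFrame(1.0) once per frame, and either draw the returned text or draw horizontal bars whose lengths are proportional to the per-frame averages.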

    Read the article

  • Determinism in a multiplayer simulation with Box2D on a single computer

    - by Jake
    I wrote a small multiplayer car-driving test game with Box2D using TCP server-client communications. I ran 1 instance of server.exe and 2 instances of client.exe on the same machine on which I code and compile the executables. I type inputs (WASD for simple car movement) into one of the 2 clients and both clients update the simulation. There are 2 cars in the simulation. As long as the cars do not collide, I get identical output on both client.exe instances. I can drive the cars around for as long as I like and they still update the same. However, if I start colliding the cars, they very quickly go out of sync. My tools: Windows 7, C++, MSVS 2010, Box2D, freeGlut. My pseudocode: // client.exe void timer(int value) { tcpServer.send(my_inputs); foreach(i = player including myself) inputs[i] = tcpServer.receive(); foreach(i = player including myself) players[i].process(inputs[i]); myb2World.step(33, 8, 6); // Box2D world step simulation foreach(i = player including myself) renderer.render(player[i]); glutTimerFunc(33, timer, 0); } // server.exe void serviceloop { while(all clients alive) { foreach(c = clients) tcpClients[c].receive(&inputs[c]); // send input of each client to all clients foreach(source = clients) { foreach(dest = clients) { tcpClients[dest].send(inputs[source]); } } } } I have read all over the internet and SE the following claims (paraphrased): Box2D is deterministic as long as the floating point architecture/implementation is the same. (For any deterministic engine) Determinism is guaranteed if playback of recorded inputs is on the same machine with an exe compiled using the same compiler and machine. Additionally, my server.exe and client.exe game loops are single threaded with blocking socket calls and a fixed time step. Question: Can anyone explain what I did wrong to get different Box2D output?
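    For reference, one common way to keep clients like these in lockstep is to tag every input with the frame number it belongs to and only step the world once inputs from all players for that frame have arrived; a rough C# sketch of the idea (all names are placeholders, not from the post):

      using System;
      using System.Collections.Generic;

      // Frame-indexed lockstep: the world is only stepped once inputs from every
      // player for that frame have arrived, so all clients step with identical data.
      class LockstepLoop
      {
          public class Input { public bool W, A, S, D; }

          private readonly int playerCount;
          private readonly Dictionary<int, Input[]> inputsByFrame = new Dictionary<int, Input[]>();
          private int simulatedFrame;

          public LockstepLoop(int playerCount) { this.playerCount = playerCount; }

          // Called whenever an input message (frame, player, input) arrives from the server.
          public void OnNetworkInput(int frame, int playerId, Input input)
          {
              Input[] slot;
              if (!inputsByFrame.TryGetValue(frame, out slot))
                  inputsByFrame[frame] = slot = new Input[playerCount];
              slot[playerId] = input;
          }

          // Called from the timer/render loop; steps as many complete frames as possible.
          public void TryStep(Action<int, Input> applyInput, Action stepWorld)
          {
              Input[] frameInputs;
              while (inputsByFrame.TryGetValue(simulatedFrame, out frameInputs) &&
                     Array.TrueForAll(frameInputs, i => i != null))
              {
                  for (int p = 0; p < playerCount; p++)
                      applyInput(p, frameInputs[p]);    // applied in the same order on every client
                  stepWorld();                          // fixed dt, e.g. world.Step(1f / 30f, 8, 6)
                  inputsByFrame.Remove(simulatedFrame);
                  simulatedFrame++;
              }
          }
      }

    The point is that every client applies exactly the same inputs on exactly the same simulation frame, in the same order, which collision resolution is very sensitive to.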

    Read the article

  • Is there an algorithm for a pool game?

    - by Dmitri
    Hello! I am looking for an algorithm to calculate the direction and speed of balls in a pool game. I am sure there has to be some open source code for this, since pool games are some of the oldest computer games I can remember. When one ball hits another, I need an algorithm to calculate the resulting direction of both of them. It will depend on the exact angle at which they hit each other and on their speed. I want to practice Java coding, so I am looking for Java code or a package that does this.
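    For two equal-mass balls, the textbook response is to exchange the velocity components along the line between the centres and keep the tangential components. A minimal sketch of that math (written in C# here, but it translates almost line for line to Java):

      using System;

      struct Vec2
      {
          public double X, Y;
          public Vec2(double x, double y) { X = x; Y = y; }
          public static Vec2 operator -(Vec2 a, Vec2 b) { return new Vec2(a.X - b.X, a.Y - b.Y); }
          public static Vec2 operator +(Vec2 a, Vec2 b) { return new Vec2(a.X + b.X, a.Y + b.Y); }
          public static Vec2 operator *(Vec2 a, double s) { return new Vec2(a.X * s, a.Y * s); }
          public static double Dot(Vec2 a, Vec2 b) { return a.X * b.X + a.Y * b.Y; }
          public double Length() { return Math.Sqrt(X * X + Y * Y); }
      }

      static class Billiards
      {
          // Resolves an elastic collision between two equal-mass balls
          // (positions p1/p2, velocities v1/v2, updated in place).
          public static void Collide(Vec2 p1, ref Vec2 v1, Vec2 p2, ref Vec2 v2)
          {
              Vec2 delta = p2 - p1;
              double dist = delta.Length();
              if (dist == 0) return;
              Vec2 normal = delta * (1.0 / dist);   // unit vector from ball 1 to ball 2

              // Velocity components along the collision normal.
              double a1 = Vec2.Dot(v1, normal);
              double a2 = Vec2.Dot(v2, normal);

              // Equal masses: the normal components are simply exchanged,
              // while the tangential components stay unchanged.
              v1 = v1 + normal * (a2 - a1);
              v2 = v2 + normal * (a1 - a2);
          }
      }

    For ball-versus-cushion hits you instead reflect the velocity component along the cushion normal.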

    Read the article

  • Why do meshes show up as bones in the Model class?

    - by Itamar Marom
    Right now I'm working on a 3D game and I've come across something very weird. When I created the model in Blender, I added an armature named "MyBone" to the stage and attached a cube ("MyCube") to it, so that when I move the armature, the cube moves with it. I exported this as an FBX and loaded it as a Model object. What I expected to see was: But what I got was this: I'm really confused. Why is the mesh I created showing up in the bone list? And what's Root Node? Here are the .blend and .fbx files: here or here. Thanks.

    Read the article

  • Calculate an AABB for bone animated model

    - by Byte56
    I have a model that has its initial bounding box calculated by finding the maximum and minimum on the x, y and z axes, producing a correct result like so: The vertices are then stored in a VBO and only altered with matrices for rotation and bone animation. Currently the bounds are not updated when the model is altered. So the animated and rotated model has bounds like so: (Maybe it's hard to tell, but the bounds are the same as before, and don't accurately represent the rotated/animated model) So my question is, how can I calculate the bounding box using the armature matrices and rotation/translation matrices for each model? Keep in mind the modified vertex data is not available because those calculations are performed on the GPU in the shader. The end result I want is to have an accurate AABB that represents the animated model for picking/basic collision checks.
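    One common approach, sketched below, is to keep the bind-pose box (or a small box per bone) on the CPU, transform its eight corners by the same world/bone matrices you already upload to the shader, and take the min/max of the results each frame. This sketch assumes an XNA-style math library; the idea is the same elsewhere:

      using Microsoft.Xna.Framework;

      static class BoundsHelper
      {
          // Recomputes an axis-aligned box by transforming the 8 corners of the
          // bind-pose box with the model's current world (or world * bone) matrix.
          public static BoundingBox Transform(BoundingBox bindPoseBox, Matrix transform)
          {
              Vector3[] corners = bindPoseBox.GetCorners();
              Vector3 min = new Vector3(float.MaxValue);
              Vector3 max = new Vector3(float.MinValue);
              foreach (Vector3 corner in corners)
              {
                  Vector3 p = Vector3.Transform(corner, transform);
                  min = Vector3.Min(min, p);
                  max = Vector3.Max(max, p);
              }
              return new BoundingBox(min, max);
          }
      }

    For a skinned model you would transform a per-bone box by each bone's matrix and merge the results; this stays cheap because only eight points per bone are involved, and the GPU-side skinned vertices are never needed.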

    Read the article

  • Bitmap to Texture2D problem with colors

    - by xnaNewbie89
    I have a small problem with converting a bitmap to a Texture2D. The resulting image of the conversion has the red channel switched with the blue channel :/ I don't know why, because the pixel formats are the same. If someone can help me I will be very happy :) System.Drawing.Image image = System.Drawing.Bitmap.FromFile(ImageFileLoader.filename); System.Drawing.Bitmap bitmap = new System.Drawing.Bitmap(image); Texture2D mapTexture = new Texture2D(Screen.Game.GraphicsDevice, bitmap.Width, bitmap.Height,false,SurfaceFormat.Color); System.Drawing.Imaging.BitmapData data = bitmap.LockBits(new System.Drawing.Rectangle( 0, 0, bitmap.Width, bitmap.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly,System.Drawing.Imaging.PixelFormat.Format32bppArgb); byte[] bytes = new byte[data.Height * data.Width*4]; System.Runtime.InteropServices.Marshal.Copy(data.Scan0, bytes, 0, bytes.Length); mapTexture.SetData<byte>(bytes, 0, data.Height * data.Width * 4); bitmap.UnlockBits(data); bitmap.Dispose(); image.Dispose();
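    GDI+'s Format32bppArgb stores pixels in memory as B,G,R,A on little-endian systems, while SurfaceFormat.Color expects R,G,B,A, so the channels really are in a different order even though both are 32-bit ARGB-style formats. Swapping the red and blue bytes before SetData should fix it; a small sketch that can stand in for the existing SetData call:

      // The byte[] filled by Marshal.Copy is in B,G,R,A order; SurfaceFormat.Color
      // wants R,G,B,A, so swap the first and third byte of every pixel before SetData.
      for (int i = 0; i < bytes.Length; i += 4)
      {
          byte b = bytes[i];       // blue
          bytes[i] = bytes[i + 2]; // move red into the first slot
          bytes[i + 2] = b;        // move blue into the third slot
      }
      mapTexture.SetData<byte>(bytes, 0, bytes.Length);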

    Read the article

  • What is the best way to check for overlap between the player and static, non-collidable items in the Bullet physics engine

    - by tigrou
    I'd like to add non-collidable objects (e.g. power-ups, items, ...) to a game world using the Bullet physics engine and to know when there is a collision between the player and them. Some info: there are a lot of items (1000), all are box shapes and they don't overlap. Here are the things I have tried: btDbvt* bvtItems = new btDbvt(); //btDbvt is a hierarchical AABB tree, used by Bullet foreach(var item ...) { btDbvtVolume volume = ... //compute item AABB; bvtItems->insert(volume, (void*)someExtraData); } Then, to find collisions between items and the player: playerRigidBody->getAabb(min, max); btDbvtVolume playervolume = ... //compute player AABB bvtItems->collideTV(bvtItems->m_root, playervolume, *someCollisionHandler); This works fairly well (and it's very fast), however, there is a problem: it only checks item AABBs against the player AABB. That loss of precision is acceptable for items but not for the player, which is not a box. It would actually need another check to make sure the player really collides with the item, but I don't know how to do this in Bullet. It would have been nice to have a function like this: playerRigidBody->checkCollisionWithAABB(); After trying that, I discovered that a btGhostObject exists and seems to have been made for this. I changed my code like this: foreach(var item...) { btCollisionObject* ghostObject = new btGhostObject(); ghostObject->setCollisionShape(boxShape); ghostObject->setCollisionFlags(ghostObject->getCollisionFlags() | btCollisionObject::CF_NO_CONTACT_RESPONSE); startTransform.setOrigin(...); //item position ghostObject->setWorldTransform(startTransform); dynamicsWorld->addCollisionObject(ghostObject, btBroadphaseProxy::SensorTrigger, btBroadphaseProxy::CharacterFilter); } It also works OK, but there is a huge fps drop (almost ten times slower) which is not acceptable. Maybe there is something missing (forgot to set a flag) and Bullet is doing extra work for nothing, or maybe all those ghostObjects are polluting the broad phase and ghostObject is not the right tool for this. Any help would be appreciated.

    Read the article

  • Jumping with Mecanim synchronization

    - by Abhishek Deb
    I am using Unity3D 4.1 for one of my projects. I have a robot character which is always running, and I am using the Mecanim animation system. What I really want: when I press the space bar, the character should jump up in the air, triggering an animation clip, and by the time it reaches the ground the animation clip should end too. What actually happens: when I press the space bar, the character jumps in the air. The animation clip plays as it should, but ends way before the character reaches the ground, so it looks like he is running in mid air. What I have done: I have this humanoid robot set up with a jump animation bound to the space bar key. Also, instead of using root motion, I am moving the robot directly from code. //Jumping if(Input.GetKeyDown(KeyCode.Space)){ rigidbody.AddForce(Vector3.up*jumpVelocity); anim.SetBool("Jump",true); } else anim.SetBool("Jump",false); Character details: Rigidbody = Mass: 30, Freeze rotation: x,y,z. Capsule Collider = Material: metal, center (0,4.5,0), radius: 1, height: 11. Script = jumpVelocity: 20000. Jump animation clip: ~2 seconds. I am really out of ideas on how to synchronize everything. Should I make the character jump in some other way so that it comes down and touches the ground quickly enough to match the animation clip? If so, please point me in a direction.
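    One way to keep the clip and the physics in sync (a sketch of the idea only; the clip length, speed and parameter names are assumptions) is to compute the expected airtime from the take-off velocity (roughly t = 2·v/g for an impulse-style jump) and scale the Animator's playback speed so the clip finishes in that time:

      using UnityEngine;

      public class JumpSync : MonoBehaviour
      {
          public float jumpClipLength = 2f;  // length of the jump animation clip, in seconds
          public float takeOffSpeed = 8f;    // upward velocity given to the rigidbody (made-up value)
          private Animator anim;

          void Start() { anim = GetComponent<Animator>(); }

          void Update()
          {
              if (Input.GetKeyDown(KeyCode.Space))
              {
                  // Set an upward velocity directly so the airtime is predictable.
                  rigidbody.velocity = new Vector3(rigidbody.velocity.x, takeOffSpeed, rigidbody.velocity.z);

                  // Time to go up and come back down under gravity.
                  float airTime = 2f * takeOffSpeed / Mathf.Abs(Physics.gravity.y);

                  // Stretch or shrink the clip so it ends roughly when the character lands.
                  anim.speed = jumpClipLength / airTime;
                  anim.SetBool("Jump", true);
                  Invoke("EndJump", airTime);
              }
          }

          void EndJump()
          {
              anim.speed = 1f;
              anim.SetBool("Jump", false);
          }
      }

    Note that Animator.speed scales every clip, so it has to be reset when the jump ends; giving the body a known take-off velocity rather than a very large AddForce is what makes the airtime predictable enough to match against the clip.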

    Read the article

  • How can I test if my rotated rectangle intersects a corner?

    - by Raven Dreamer
    I have a square, tile-based collision map. To check if one of my (square) entities is colliding, I get the vertices of its 4 corners and test those 4 points against my collision map. If none of those points are intersecting, I know I'm good to move to the new position. I'd like to allow entities to rotate. I can still calculate the 4 corners of the square, but once you factor in rotation, those 4 corners alone don't seem to be enough information to determine whether the entity is trying to move to a valid state. For example: in the picture below, black is unwalkable terrain and red is the player's hitbox. The left scenario is allowed because the 4 corners of the red square are not in the black terrain. The right scenario would also be (incorrectly) allowed, because the player, cleverly turned at a 45° angle, has its corners in valid spaces, even though it is (quite literally) cutting the corner. How can I detect scenarios similar to the situation on the right?
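    One simple fix is to test not only the four corners but also points sampled along each edge, spaced no farther apart than one tile, so a blocked tile lying between two corners can't be skipped over. A sketch (C#; the map query is a placeholder):

      using System;

      static class CollisionCheck
      {
          public struct Pt { public float X, Y; public Pt(float x, float y) { X = x; Y = y; } }

          // Samples points along each edge of the rotated rectangle, no farther apart
          // than one tile, and asks the map whether any of them is blocked.
          public static bool OverlapsBlockedTile(Pt[] corners, float tileSize,
                                                 Func<float, float, bool> isBlocked)
          {
              for (int i = 0; i < 4; i++)
              {
                  Pt a = corners[i];
                  Pt b = corners[(i + 1) % 4];
                  float dx = b.X - a.X, dy = b.Y - a.Y;
                  int samples = Math.Max(1, (int)Math.Ceiling(Math.Sqrt(dx * dx + dy * dy) / tileSize));
                  for (int s = 0; s <= samples; s++)
                  {
                      float t = (float)s / samples;
                      if (isBlocked(a.X + dx * t, a.Y + dy * t)) return true;
                  }
              }
              return false;
          }
      }

    If an entity can be larger than a tile you also need interior samples (or a proper OBB-versus-tile separating-axis test), since a blocked tile could then sit entirely inside the rectangle without touching an edge.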

    Read the article

  • How should I make 3D games with Java on a Mac?

    - by Steven Rogers
    I have been teaching myself Java on the Mac, mostly because the language is cross-platform. So far I have only been able to develop 2D games using the Graphics2D class. Now I want to learn how to make 3D games in Java. I used to model and animate in 3D, so my knowledge of 3-dimensional concepts is okay. I have spent the last 3 hours using Google to look up ways of making 3D games in Java. Apparently the best option is OpenGL, so I looked for a tutorial on it, but I cannot find one that shows how to (if there is a way) install JOGL on the Mac platform. Should I continue to use Java? How can I make 3D games using Java? What is the best way to make 3D games on a Mac?

    Read the article

  • How to refactor and improve this XNA mouse input code?

    - by Andrew Price
    Currently I have something like this: public bool IsLeftMouseButtonDown() { return currentMouseState.LeftButton == ButtonState.Pressed && previousMouseState.LeftButton == ButtonState.Pressed; } public bool IsLeftMouseButtonPressed() { return currentMouseState.LeftButton == ButtonState.Pressed && previousMouseState.LeftButton == ButtonState.Released; } public bool IsLeftMouseButtonUp() { return currentMouseState.LeftButton == ButtonState.Released && previousMouseState.LeftButton == ButtonState.Released; } public bool IsLeftMouseButtonReleased() { return currentMouseState.LeftButton == ButtonState.Released && previousMouseState.LeftButton == ButtonState.Pressed; } This is fine. In fact, I kind of like it. However, I'd hate to have to repeat this same code five times (for right, middle, X1, X2). Is there any way to pass in the button I want to the function so I could have something like this? public bool IsMouseButtonDown(MouseButton button) { return currentMouseState.IsPressed(button) && previousMouseState.IsPressed(button); } public bool IsMouseButtonPressed(MouseButton button) { return currentMouseState.IsPressed(button) && !previousMouseState.IsPressed(button); } public bool IsMouseButtonUp(MouseButton button) { return !currentMouseState.IsPressed(button) && !previousMouseState.IsPressed(button); } public bool IsMouseButtonReleased(MouseButton button) { return !currentMouseState.IsPressed(button) && previousMouseState.IsPressed(button); } I suppose I could create some custom enumeration and switch through it in each function, but I'd like to first see if there is a built-in solution or a better way. Thanks!
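    There is no built-in MouseButton enum in XNA, but the shape above is easy to get with a small enum and one extension method that switches over it (a sketch, not framework code):

      using Microsoft.Xna.Framework.Input;

      public enum MouseButton { Left, Right, Middle, X1, X2 }

      public static class MouseStateExtensions
      {
          // Extension method so you can write currentMouseState.IsPressed(button).
          public static bool IsPressed(this MouseState state, MouseButton button)
          {
              switch (button)
              {
                  case MouseButton.Left:   return state.LeftButton == ButtonState.Pressed;
                  case MouseButton.Right:  return state.RightButton == ButtonState.Pressed;
                  case MouseButton.Middle: return state.MiddleButton == ButtonState.Pressed;
                  case MouseButton.X1:     return state.XButton1 == ButtonState.Pressed;
                  case MouseButton.X2:     return state.XButton2 == ButtonState.Pressed;
                  default:                 return false;
              }
          }
      }

    With that in place, the four single-parameter methods in the question work as written.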

    Read the article

  • Painting with pixel shaders

    - by Gustavo Maciel
    I have an almost full understanding of how 2D Lighting works, saw this post and was tempted to try implementing this in HLSL. I planned to paint each of the layers with shaders, and then, combine them just drawing one on top of another, or just pass the 3 textures to the shader and getting a better way to combine them. Working almost as planned, but I got a little question in the matter. I'm drawing each layer this way: GraphicsDevice.SetRenderTarget(lighting); GraphicsDevice.Clear(Color.Transparent); //... Setup shader SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.None, RasterizerState.CullNone, lightingShader); SpriteBatch.Draw(texture, fullscreen, Color.White); SpriteBatch.End(); GraphicsDevice.SetRenderTarget(darkMask); GraphicsDevice.Clear(Color.Transparent); //... Setup shader SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.None, RasterizerState.CullNone, darkMaskShader); SpriteBatch.Draw(texture, fullscreen, Color.White); SpriteBatch.End(); Where lightingShader and darkMaskShader are shaders that, with parameters (view and proj matrices, light pos, color and range, etc) generate a texture meant to be that layer. It works fine, but I'm not sure if drawing a transparent quad on top of a transparent render target is the best way of doing it. Because I actually just need the position and params. Concluding: Can I paint a texture with shaders without having to clear it and then draw a transparent texture on top of it?

    Read the article

  • Good book or tutorial for learning how to apply integration methods

    - by Cumatru
    I'm looking to animate a graph layout using edges as springs and nodes as weights (a node with more links will have a bigger weight). I can't quite wrap my head around how to apply the mathematical and physical relations in my application. From what I've read, Runge-Kutta 4 (preferably) or Verlet would be a good choice, but I have trouble understanding how they really work and which physics equations I should apply. If I can't understand them, I can't use them. I'm looking for a book or a tutorial that describes these things.
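    For a force-directed layout, plain position Verlet is usually enough and is much easier to get right than RK4; the velocity is implicit in the difference between the current and previous positions. A small sketch of the integration step (C#; the damping constant and the force model are assumptions):

      // Position Verlet: x(t+dt) = x(t) + damping*(x(t) - x(t-dt)) + a*dt^2
      class Node
      {
          public double X, Y;          // current position
          public double PrevX, PrevY;  // position one step ago
          public double ForceX, ForceY;
          public double Mass = 1.0;    // e.g. 1 + number of links
      }

      static class Layout
      {
          public static void Integrate(Node[] nodes, double dt, double damping = 0.98)
          {
              foreach (Node n in nodes)
              {
                  double ax = n.ForceX / n.Mass;
                  double ay = n.ForceY / n.Mass;

                  double newX = n.X + damping * (n.X - n.PrevX) + ax * dt * dt;
                  double newY = n.Y + damping * (n.Y - n.PrevY) + ay * dt * dt;

                  n.PrevX = n.X; n.PrevY = n.Y;
                  n.X = newX; n.Y = newY;
                  n.ForceX = n.ForceY = 0;   // forces are re-accumulated each step
              }
          }
      }

    Each step you first accumulate the forces, typically a Hooke spring along every edge plus a pairwise repulsion between nodes, and then call Integrate.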

    Read the article

  • Looking for 2D Cross platform suggestions based on requirements specified

    - by MannyG
    I am an intermediate developer with minor experience in enterprise mobile applications for iPhone, Android and BlackBerry, looking to build my first ever mobile game. I did a Google search for game dev forums and this popped up, so I thought I would try posting here as I've had no luck elsewhere. If you have ever heard of the iPhone/Android game Avatar Fight, you will have an idea of the graphics capabilities I require: basically automated battles, one sprite attacking another with cool animations, all in 2D. My buddy and I have two motivations. One is to jump into mobile development, as my experience is limited and so is his, so we would like some current knowledge (HTML5 would be nice to learn). The other is to make some money on the side; we don't expect much, but polishing the game and putting our all into it will hopefully reward us a bit. We have looked into the Corona engine, however a lot of people say it is limited in the graphics department. We are open to learning new languages like Lua, C++, Python etc. Others we have looked at include PhoneGap, Rhomobile, Unity, and the list goes on. I really have no idea what the pros and cons of these are, but for a basic battle sequence and some mini games we want to choose the right one. Some other things we will be doing include card games, side-scrolling flying-object games, maybe fishing. We want to start small with these minigames and work our way up to the idea we would like to implement in the future. We only want to work in 2D. So, with these requirements, please help me choose a platform to work on (cross-platform is what we are ideally leaning towards). Please also feel free to throw in any advice you have for newbie game developers like myself. Thank you for reading!

    Read the article

  • Light following me around the room. Something is wrong with my shader!

    - by Robinson
    I'm trying to do a spot (Blinn) light, with falloff and attenuation. It seems to be working OK except I have a bit of a space problem. That is, whenever I move the camera the light moves to maintain the same relative position, rather than changing with the camera. This results in the light moving around, i.e. not always falling on the same surfaces. It's as if there's a flashlight attached to the camera. I'm transforming the lights beforehand into view space, so Light_Position and Light_Direction are already in eye space (I hope!). I made a little movie of what it looks like here: My camera rotating around a point inside a box. The light is fixed in the centre up and its "look at" point in a fixed position in front of it. As you can see, as the camera rotates around the origin (always looking at the centre), so don't think the box is rotating (!). The lighting follows it around. To start, some code. This is how I'm transforming the light into view space (it gets passed into the shader already in view space): // Compute eye-space light position. Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition; MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition); // Compute eye-space light direction vector. Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition); MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection); MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection); Can anyone give me a clue as to what I'm doing wrong here? I think the light should remain looking at a fixed point on the box, regardless of the camera orientation. Here are the vertex and pixel shaders: /////////////////////////////////////////////////// // Vertex Shader /////////////////////////////////////////////////// #version 420 /////////////////////////////////////////////////// // Uniform Buffer Structures /////////////////////////////////////////////////// // Camera. layout (std140) uniform Camera { mat4 Camera_View; mat4 Camera_ViewInverseTranspose; mat4 Camera_Projection; }; // Matrices per model. layout (std140) uniform Model { mat4 Model_World; mat4 Model_WorldView; mat4 Model_WorldViewInverseTranspose; mat4 Model_WorldViewProjection; }; // Spotlight. 
layout (std140) uniform OmniLight { float Light_Intensity; vec3 Light_Position; vec3 Light_Direction; vec4 Light_Ambient_Colour; vec4 Light_Diffuse_Colour; vec4 Light_Specular_Colour; float Light_Attenuation_Min; float Light_Attenuation_Max; float Light_Cone_Min; float Light_Cone_Max; }; /////////////////////////////////////////////////// // Streams (per vertex) /////////////////////////////////////////////////// layout(location = 0) in vec3 attrib_Position; layout(location = 1) in vec3 attrib_Normal; layout(location = 2) in vec3 attrib_Tangent; layout(location = 3) in vec3 attrib_BiNormal; layout(location = 4) in vec2 attrib_Texture; /////////////////////////////////////////////////// // Output streams (per vertex) /////////////////////////////////////////////////// out vec3 attrib_Fragment_Normal; out vec4 attrib_Fragment_Position; out vec2 attrib_Fragment_Texture; out vec3 attrib_Fragment_Light; out vec3 attrib_Fragment_Eye; /////////////////////////////////////////////////// // Main /////////////////////////////////////////////////// void main() { // Transform normal into eye space attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz; // Transform vertex into eye space (world * view * vertex = eye) vec4 position = Model_WorldView * vec4(attrib_Position, 1.0); // Compute vector from eye space vertex to light (light is in eye space already) attrib_Fragment_Light = Light_Position - position.xyz; // Compute vector from the vertex to the eye (which is now at the origin). attrib_Fragment_Eye = -position.xyz; // Output texture coord. attrib_Fragment_Texture = attrib_Texture; // Compute vertex position by applying camera projection. gl_Position = Camera_Projection * position; } and the pixel shader: /////////////////////////////////////////////////// // Pixel Shader /////////////////////////////////////////////////// #version 420 /////////////////////////////////////////////////// // Samplers /////////////////////////////////////////////////// uniform sampler2D Map_Diffuse; /////////////////////////////////////////////////// // Global Uniforms /////////////////////////////////////////////////// // Material. layout (std140) uniform Material { vec4 Material_Ambient_Colour; vec4 Material_Diffuse_Colour; vec4 Material_Specular_Colour; vec4 Material_Emissive_Colour; float Material_Shininess; float Material_Strength; }; // Spotlight. layout (std140) uniform OmniLight { float Light_Intensity; vec3 Light_Position; vec3 Light_Direction; vec4 Light_Ambient_Colour; vec4 Light_Diffuse_Colour; vec4 Light_Specular_Colour; float Light_Attenuation_Min; float Light_Attenuation_Max; float Light_Cone_Min; float Light_Cone_Max; }; /////////////////////////////////////////////////// // Input streams (per vertex) /////////////////////////////////////////////////// in vec3 attrib_Fragment_Normal; in vec3 attrib_Fragment_Position; in vec2 attrib_Fragment_Texture; in vec3 attrib_Fragment_Light; in vec3 attrib_Fragment_Eye; /////////////////////////////////////////////////// // Result /////////////////////////////////////////////////// out vec4 Out_Colour; /////////////////////////////////////////////////// // Main /////////////////////////////////////////////////// void main(void) { // Compute N dot L. vec3 N = normalize(attrib_Fragment_Normal); vec3 L = normalize(attrib_Fragment_Light); vec3 E = normalize(attrib_Fragment_Eye); vec3 H = normalize(L + E); float NdotL = clamp(dot(L,N), 0.0, 1.0); float NdotH = clamp(dot(N,H), 0.0, 1.0); // Compute ambient term. 
vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour; // Diffuse. vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture) * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL; // Specular. float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength; vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity; // Light attenuation (so we don't have to use 1 - x, we step between Max and Min). float d = length(-attrib_Fragment_Light); float attenuation = smoothstep(Light_Attenuation_Max, Light_Attenuation_Min, d); // Adjust attenuation based on light cone. float LdotS = dot(-L, Light_Direction), CosI = Light_Cone_Min - Light_Cone_Max; attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0); // Final colour. Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation; }

    Read the article

  • Convert an orientation vec3 to a rotation matrix

    - by lapin
    I've got a normalized vec3 that represents an orientation. Each frame of animation, an object's orientation changes slightly, so I add a delta vector to the orientation vector and then normalize to find the new orientation. I'd like to convert the vec3 that represents an orientation into a rotation matrix that I can use to orient my object. If it helps, my object is a cone, and I'd like to rotate it about the pointy end, not from its center :) PS I know I should use quaternions because of the gimbal lock problem. If someone can explain quats too, that'd be great :)
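    A direction alone doesn't pin down the roll, so the usual recipe is to pick a reference up vector and build an orthonormal basis with two cross products; the three axes become the columns of the rotation matrix. A sketch of that construction (C#-style, column-vector convention assumed; the math maps directly onto a GLSL/GLM mat3):

      using System;

      static class Orientation
      {
          // Builds a 3x3 rotation matrix whose third column points along 'forward'.
          // 'forward' must be normalized; 'worldUp' is just a reference, e.g. (0,1,0).
          public static double[,] FromDirection(double[] forward, double[] worldUp)
          {
              double[] right = Normalize(Cross(worldUp, forward));
              double[] up = Cross(forward, right);   // already unit length

              return new double[,]
              {
                  { right[0], up[0], forward[0] },
                  { right[1], up[1], forward[1] },
                  { right[2], up[2], forward[2] },
              };
          }

          static double[] Cross(double[] a, double[] b) => new[]
          {
              a[1] * b[2] - a[2] * b[1],
              a[2] * b[0] - a[0] * b[2],
              a[0] * b[1] - a[1] * b[0],
          };

          static double[] Normalize(double[] v)
          {
              double len = Math.Sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
              return new[] { v[0] / len, v[1] / len, v[2] / len };
          }
      }

    Note that this degenerates when the forward vector is parallel to the chosen up vector, and that to rotate about the cone's tip rather than its centre you translate the tip to the origin, apply the rotation, then translate back.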

    Read the article

  • How do I get the source code from a Google Code game project?

    - by BluFire
    I'm trying to get the Hedgewars source code. When I went to the downloads tab, it doesn't specify which one is the actual game. I tried downloading it using SVN Checkout in Tortoise, but that doesn't seem to work on the browse section of Source (Hgproject_filesAndroid_buildSDL-android-project). I then proceeded to the wiki but got stuck at step two because I don't know anything about Mercurial. Some other things from the wiki I don't know are "FreePascal", the "Android NDK" and "Tar" files. They are new to me, so I am really confused. So my question is: how can I download the Hedgewars source code for Android without having to browse the source code inside the source tab?

    Read the article

  • Parent/child variable and method inheritance in Unity3D/C#

    - by Timothy Williams
    I'm creating a system where there is a base "Hero" class and each hero inherits from it with their own stats and abilities. What I'm wondering is: how could I use a variable from one of the child scripts in the parent script (something like maxMP = MP), or call a function in the parent class that is specified in each child class (the parent's update calls alarms(), and in the child classes alarms() is specified to do something)? Is this possible at all, or not? Thanks.
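    Yes: make the shared pieces protected fields and virtual (or abstract) methods on the base class; each child assigns its own values and overrides the methods, and the parent's code automatically picks up the child's versions. A minimal sketch (the FireMage child and its numbers are made up; in Unity each class would live in its own file):

      using UnityEngine;

      public class Hero : MonoBehaviour
      {
          protected float maxMP = 50f;   // each child assigns its own value
          protected float MP;

          protected virtual void Start() { MP = maxMP; }   // parent code uses the child's value

          protected virtual void Alarms() { }              // children fill this in

          protected virtual void Update()
          {
              Alarms();   // calls the child's override, not this empty version
          }
      }

      // A made-up example child.
      public class FireMage : Hero
      {
          protected override void Start()
          {
              maxMP = 120f;    // the parent's MP = maxMP now sees 120
              base.Start();
          }

          protected override void Alarms()
          {
              // hero-specific behaviour here
          }
      }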

    Read the article

  • Fair dice over network w/o trusted 3rd party

    - by Kay
    Though it should be a pretty basic problem, I have not found a solution for it: how do you play dice over a network without a trusted third party? The M players shall roll N dice, one player after another. No player may "cheat", i.e. change the outcome to his advantage, or "look into the future" before the next roll. Is that possible? I guess the solution would be something like public-key crypto, where each player turns in an encrypted message. After all messages are collected, you exchange the keys to decode the messages. Then sha1(joined string of all decrypted messages) mod 6 + 1 is used to determine the die. The major problem I have: since the message could be anything, I don't know how to prevent tampering via the private keys. Especially the last player to turn in his key could easily cheat (I guess). The game should stay fair even if all players "conspire" against one player.
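    What you are describing is essentially a commit-reveal scheme: each player first publishes only a hash of a random secret, and reveals the secret after every commitment is in, so nobody can change their value after seeing the others'. A hedged C# illustration of the three phases:

      using System;
      using System.Linq;
      using System.Security.Cryptography;

      static class FairDice
      {
          // Phase 1: each player generates a secret and publishes only its hash.
          public static (byte[] secret, byte[] commitment) Commit()
          {
              byte[] secret = new byte[32];
              using (var rng = RandomNumberGenerator.Create()) rng.GetBytes(secret);
              using (var sha = SHA256.Create()) return (secret, sha.ComputeHash(secret));
          }

          // Phase 2: after every commitment has been received, secrets are revealed.
          // Each player checks hash(secret) == commitment before accepting it.
          public static bool Verify(byte[] secret, byte[] commitment)
          {
              using (var sha = SHA256.Create())
                  return sha.ComputeHash(secret).SequenceEqual(commitment);
          }

          // Phase 3: the roll combines everyone's secret, so it is unpredictable as
          // long as at least one player chose theirs at random.
          public static int Roll(byte[][] allSecrets)
          {
              byte[] joined = allSecrets.SelectMany(s => s).ToArray();
              using (var sha = SHA256.Create())
              {
                  byte[] digest = sha.ComputeHash(joined);
                  return (int)(BitConverter.ToUInt32(digest, 0) % 6) + 1;
              }
          }
      }

    The secrets must be combined in a fixed order (e.g. by player id) on every machine, and taking the digest mod 6 has a vanishingly small bias that can be removed with rejection sampling if it matters.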

    Read the article

  • 2D basic map system

    - by Cyril
    I'm currently coding a 2D game in Java, and I would like some clues on how to build this system: the screen moves over a larger map; for instance, the screen represents 800*600 units of a 100K*100K map. When you command your unit to go to another position, the screen moves over this map, AND when you move your mouse to one side of the screen or another, you move the screen over the map. Not sure that I'm clear, but you can find this system in most RTS games (Warcraft/StarCraft for example). I'm currently using Slick 2D. Any ideas? Thanks.
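    The usual structure is a camera offset in map coordinates: everything is drawn at (worldPosition - cameraPosition), and the camera pans when the mouse sits near a screen edge. A small sketch of that logic (C#-style; the constants are arbitrary, and the same idea drops straight into a Slick2D update/render pair):

      using System;

      class Camera
      {
          public float X, Y;                      // top-left corner of the view, in map units
          private const float ScrollSpeed = 600f; // map units per second (arbitrary)
          private const int EdgeMargin = 20;      // pixels from the screen edge that trigger scrolling

          public void Update(int mouseX, int mouseY, int screenW, int screenH,
                             float mapW, float mapH, float dt)
          {
              if (mouseX < EdgeMargin)           X -= ScrollSpeed * dt;
              if (mouseX > screenW - EdgeMargin) X += ScrollSpeed * dt;
              if (mouseY < EdgeMargin)           Y -= ScrollSpeed * dt;
              if (mouseY > screenH - EdgeMargin) Y += ScrollSpeed * dt;

              // Keep the view inside the map.
              X = Math.Max(0, Math.Min(X, mapW - screenW));
              Y = Math.Max(0, Math.Min(Y, mapH - screenH));
          }

          // World -> screen: subtract the camera position when drawing anything.
          public float ToScreenX(float worldX) { return worldX - X; }
          public float ToScreenY(float worldY) { return worldY - Y; }
      }

    Centering the camera on a selected unit is then just setting X and Y to the unit's position minus half the screen size.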

    Read the article

  • 3D Ball Physics Theory: collision response on ground and against walls?

    - by David
    I'm really struggling to get a strong grasp on how I should be handling collision response in a game engine I'm building around a 3D ball physics concept. Think Monkey Ball as an example of the type of gameplay. I am currently using sphere-to-sphere broad phase, then AABB to OBB testing (the final test I am using right now is one that checks if one of the 8 OBB points crosses the planes of the object it is testing against). This seems to work pretty well, and I am getting back: Plane that object is colliding against (with a point on the plane, the plane's normal, and the exact point of intersection. I've tried what feels like dozens of different high-level strategies for handling these collisions, without any real success. I think my biggest problem is understanding how to handle collisions against walls in the x-y axes (left/right, front/back), which I want to have elasticity, and the ground (z-axis) where I want an elastic reaction if the ball drops down, but then for it to eventually normalize and be kept "on the ground" (not go into the ground, but also not continue bouncing). Without kluging something together, I'm positive there is a good way to handle this, my theories just aren't getting me all the way there. For physics modeling and movement, I am trying to use a Euler based setup with each object maintaining a position (and destination position prior to collision detection), a velocity (which is added onto the position to determine the destination position), and an acceleration (which I use to store any player input being put on the ball, as well as gravity in the z coord). Starting from when I detect a collision, what is a good way to approach the response to get the expected behavior in all cases? Thanks in advance to anyone taking the time to assist... I am grateful for any pointers, and happy to post any additional info or code if it is useful. UPDATE Based on Steve H's and eBusiness' responses below, I have adapted my collision response to what makes a lot more sense now. It was close to right before, but I didn't have all the right pieces together at the right time! I have one problem left to solve, and that is what is causing the floor collision to hit every frame. Here's the collision response code I have now for the ball, then I'll describe the last bit I'm still struggling to understand. // if we are moving in the direction of the plane (against the normal)... if (m_velocity.dot(intersection.plane.normal) <= 0.0f) { float dampeningForce = 1.8f; // eventually create this value based on mass and acceleration // Calculate the projection velocity PVRTVec3 actingVelocity = m_velocity.project(intersection.plane.normal); m_velocity -= actingVelocity * dampeningForce; } // Clamp z-velocity to zero if we are within a certain threshold // -- NOTE: this was an experimental idea I had to solve the "jitter" bug I'll describe below float diff = 0.2f - abs(m_velocity.z); if (diff > 0.0f && diff <= 0.2f) { m_velocity.z = 0.0f; } // Take this object to its new destination position based on... // -- our pre-collision position + vector to the collision point + our new velocity after collision * time // -- remaining after the collision to finish the movement m_destPosition = m_position + intersection.diff + (m_velocity * intersection.tRemaining * GAMESTATE->dt); The above snippet is run after a collision is detected on the ball (collider) with a collidee (floor in this case). 
With a dampening force of 1.8f, the ball's reflected "upward" velocity will eventually be overcome by gravity, so the ball will essentially be stuck on the floor. THIS is the problem I have now... the collision code is running every frame (since the ball's z-velocity is constantly pushing it a collision with the floor below it). The ball is not technically stuck, I can move it around still, but the movement is really goofy because the velocity and position keep getting affected adversely by the above snippet. I was experimenting with an idea to clamp the z-velocity to zero if it was "close to zero", but this didn't do what I think... probably because the very next frame the ball gets a new gravity acceleration applied to its velocity regardless (which I think is good, right?). Collisions with walls are as they used to be and work very well. It's just this last bit of "stickiness" to deal with. The camera is constantly jittering up and down by extremely small fractions too when the ball is "at rest". I'll keep playing with it... I like puzzles like this, especially when I think I'm close. Any final ideas on what I could be doing wrong here? UPDATE 2 Good news - I discovered I should be subtracting the intersection.diff from the m_position (position prior to collision). The intersection.diff is my calculation of the difference in the vector of position to destPosition from the intersection point to the position. In this case, adding it was causing my ball to always go "up" just a little bit, causing the jitter. By subtracting it, and moving that clamper for the velocity.z when close to zero to being above the dot product (and changing the test from <= 0 to < 0), I now have the following: // Clamp z-velocity to zero if we are within a certain threshold float diff = 0.2f - abs(m_velocity.z); if (diff > 0.0f && diff <= 0.2f) { m_velocity.z = 0.0f; } // if we are moving in the direction of the plane (against the normal)... float dotprod = m_velocity.dot(intersection.plane.normal); if (dotprod < 0.0f) { float dampeningForce = 1.8f; // eventually create this value based on mass and acceleration? // Calculate the projection velocity PVRTVec3 actingVelocity = m_velocity.project(intersection.plane.normal); m_velocity -= actingVelocity * dampeningForce; } // Take this object to its new destination position based on... // -- our pre-collision position + vector to the collision point + our new velocity after collision * time // -- remaining after the collision to finish the movement m_destPosition = m_position - intersection.diff + (m_velocity * intersection.tRemaining * GAMESTATE->dt); UpdateWorldMatrix(m_destWorldMatrix, m_destOBB, m_destPosition, false); This is MUCH better. No jitter, and the ball now "rests" at the floor, while still bouncing off the floor and walls. The ONLY thing left is that the ball is now virtually "stuck". He can move but at a much slower rate, likely because the else of my dot product test is only letting the ball move at a rate multiplied against the tRemaining... I think this is a better solution than I had previously, but still somehow not the right idea. BTW, I'm trying to journal my progress through this problem for anyone else with a similar situation - hopefully it will serve as some help, as many similar posts have for me over the years.

    Read the article

  • Open Source Errors on Apple Crunch

    - by BluFire
    I've been looking around and I finally got the full source code called Apple-Crunch from Google Code. But when I put it into my project, the source code produced many errors in the class files, such as "cannot be resolved to a type", "the constructor is undefined" and "the method method() is undefined for the type Sprite" in the .java class files. I downloaded the source directly from the command line and noticed errors popping up in my project. Since I couldn't figure out how to import the actual folder into my workspace (it wouldn't show up under existing projects), I decided to copy the folders into the project, overwriting what was there. The errors were still there, so I looked at the class files and noticed that the classes with errors extended 'RokonActivity'. I then added the Rokon library to the libs folder in the hope of fixing the errors. Sadly it didn't work and now I don't know what to do to fix the errors. How do I fix the errors without having to manually change the code? The source code should be fully functional, so why are there errors?

    Read the article

  • Can't get LWJGL lighting to work

    - by Zarkonnen
    I'm trying to enable lighting in lwjgl according to the method described by NeHe and this post. However, no matter what I try, all faces of my shapes always receive the same amount of light, or, in the case of a spinning shape, the amount of lighting seems to oscillate. All faces are lit up by the same amount, which changes as the pyramid rotates. Concrete example (apologies for the length): Note how all panels are always the same brightness, but the brightness varies with the pyramid's rotation. This is using lwjgl 2.8.3 on Mac OS X. package com; import com.zarkonnen.lwjgltest.Main; import org.lwjgl.opengl.Display; import org.lwjgl.opengl.DisplayMode; import org.lwjgl.opengl.GL11; import org.newdawn.slick.opengl.Texture; import org.newdawn.slick.opengl.TextureLoader; import org.lwjgl.util.glu.*; import org.lwjgl.input.Keyboard; import java.nio.FloatBuffer; import java.nio.ByteBuffer; import java.nio.ByteOrder; /** * * @author penguin */ public class main { public static void main(String[] args) { try { Display.setDisplayMode(new DisplayMode(800, 600)); Display.setTitle("3D Pyramid"); Display.create(); } catch (Exception e) { } initGL(); float rtri = 0.0f; Texture texture = null; try { texture = TextureLoader.getTexture("png", Main.class.getResourceAsStream("tex.png")); } catch (Exception ex) { ex.printStackTrace(); } while (!Display.isCloseRequested()) { // Draw a Triangle :D GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT); GL11.glLoadIdentity(); GL11.glTranslatef(0.0f, 0.0f, -10.0f); GL11.glRotatef(rtri, 0.0f, 1.0f, 0.0f); texture.bind(); GL11.glBegin(GL11.GL_TRIANGLES); GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(0.0f, 1.0f, 0.0f); GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, 1.0f); GL11.glTexCoord2f(1.0f, -1.0f); GL11.glVertex3f(1.0f, -1.0f, 1.0f); GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(0.0f, 1.0f, 0.0f); GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(1.0f, -1.0f, 1.0f); GL11.glTexCoord2f(1.0f, -1.0f); GL11.glVertex3f(1.0f, -1.0f, -1.0f); GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(0.0f, 1.0f, 0.0f); GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, -1.0f); GL11.glTexCoord2f(1.0f, -1.0f); GL11.glVertex3f(1.0f, -1.0f, -1.0f); GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(0.0f, 1.0f, 0.0f); GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, -1.0f); GL11.glTexCoord2f(1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, 1.0f); GL11.glEnd(); GL11.glBegin(GL11.GL_QUADS); GL11.glVertex3f(1.0f, -1.0f, 1.0f); GL11.glVertex3f(1.0f, -1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, 1.0f); GL11.glEnd(); Display.update(); rtri += 0.05f; // Exit-Key = ESC boolean exitPressed = Keyboard.isKeyDown(Keyboard.KEY_ESCAPE); if (exitPressed) { System.out.println("Escape was pressed!"); Display.destroy(); } } Display.destroy(); } private static void initGL() { GL11.glEnable(GL11.GL_LIGHTING); GL11.glMatrixMode(GL11.GL_PROJECTION); GL11.glLoadIdentity(); GLU.gluPerspective(45.0f, ((float) 800) / ((float) 600), 0.1f, 100.0f); GL11.glMatrixMode(GL11.GL_MODELVIEW); GL11.glLoadIdentity(); GL11.glEnable(GL11.GL_TEXTURE_2D); GL11.glShadeModel(GL11.GL_SMOOTH); GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f); GL11.glClearDepth(1.0f); GL11.glEnable(GL11.GL_DEPTH_TEST); GL11.glDepthFunc(GL11.GL_LEQUAL); GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST); float lightAmbient[] = {0.5f, 0.5f, 0.5f, 1.0f}; // Ambient Light Values float lightDiffuse[] = {1.0f, 1.0f, 1.0f, 1.0f}; // Diffuse Light Values float 
lightPosition[] = {0.0f, 0.0f, 2.0f, 1.0f}; // Light Position ByteBuffer temp = ByteBuffer.allocateDirect(16); temp.order(ByteOrder.nativeOrder()); GL11.glLight(GL11.GL_LIGHT1, GL11.GL_AMBIENT, (FloatBuffer) temp.asFloatBuffer().put(lightAmbient).flip()); // Setup The Ambient Light GL11.glLight(GL11.GL_LIGHT1, GL11.GL_DIFFUSE, (FloatBuffer) temp.asFloatBuffer().put(lightDiffuse).flip()); // Setup The Diffuse Light GL11.glLight(GL11.GL_LIGHT1, GL11.GL_POSITION, (FloatBuffer) temp.asFloatBuffer().put(lightPosition).flip()); // Position The Light GL11.glEnable(GL11.GL_LIGHT1); // Enable Light One } }

    Read the article

  • Game Clock Precision

    - by Philip
    I'm reading a fantastic article about game timer precision and here is a quote about 2/3 of the way into the article: If you start your game clock at about 4 billion (more precisely 2^32, or any large power of two) then your exponent, and hence your precision, will remain constant for the next ~4 billion seconds, or ~136 years. He doesn't give a concrete example of this though. Does this mean I would want to add 2^32 to the game clock value that I store at the beginning of each frame? Or is there a way to actually set the clock in Windows so that the numbers start at 2^32?
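    My reading of the article (so treat this as an assumption, not the author's code) is that you simply bias the time value you keep in a double by 2^32 seconds, so its exponent, and with it the size of one representable tick, stays the same for the program's whole lifetime; deltas between two such values are unaffected because the bias cancels. A tiny sketch:

      using System.Diagnostics;

      class GameClock
      {
          // Starting near 2^32 seconds keeps the double's exponent, and therefore the
          // absolute precision of each tick, constant for roughly the next 136 years.
          private const double Bias = 4294967296.0;   // 2^32
          private readonly Stopwatch stopwatch = Stopwatch.StartNew();

          // Use this value for all frame-to-frame arithmetic (deltas, interpolation).
          public double Now { get { return Bias + stopwatch.Elapsed.TotalSeconds; } }
      }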

    Read the article

  • Per-pixel displacement mapping GLSL

    - by Chris
    Im trying to implement a per-pixel displacement shader in GLSL. I read through several papers and "tutorials" I found and ended up with trying to implement the approach NVIDIA used in their Cascade Demo (http://www.slideshare.net/icastano/cascades-demo-secrets) starting at Slide 82. At the moment I am completly stuck with following problem: When I am far away the displacement seems to work. But as more I move closer to my surface, the texture gets bent in x-axis and somehow it looks like there is a little bent in general in one direction. EDIT: I added a video: click I added some screen to illustrate the problem: Well I tried lots of things already and I am starting to get a bit frustrated as my ideas run out. I added my full VS and FS code: VS: #version 400 layout(location = 0) in vec3 IN_VS_Position; layout(location = 1) in vec3 IN_VS_Normal; layout(location = 2) in vec2 IN_VS_Texcoord; layout(location = 3) in vec3 IN_VS_Tangent; layout(location = 4) in vec3 IN_VS_BiTangent; uniform vec3 uLightPos; uniform vec3 uCameraDirection; uniform mat4 uViewProjection; uniform mat4 uModel; uniform mat4 uView; uniform mat3 uNormalMatrix; out vec2 IN_FS_Texcoord; out vec3 IN_FS_CameraDir_Tangent; out vec3 IN_FS_LightDir_Tangent; void main( void ) { IN_FS_Texcoord = IN_VS_Texcoord; vec4 posObject = uModel * vec4(IN_VS_Position, 1.0); vec3 normalObject = (uModel * vec4(IN_VS_Normal, 0.0)).xyz; vec3 tangentObject = (uModel * vec4(IN_VS_Tangent, 0.0)).xyz; //vec3 binormalObject = (uModel * vec4(IN_VS_BiTangent, 0.0)).xyz; vec3 binormalObject = normalize(cross(tangentObject, normalObject)); // uCameraDirection is the camera position, just bad named vec3 fvViewDirection = normalize( uCameraDirection - posObject.xyz); vec3 fvLightDirection = normalize( uLightPos.xyz - posObject.xyz ); IN_FS_CameraDir_Tangent.x = dot( tangentObject, fvViewDirection ); IN_FS_CameraDir_Tangent.y = dot( binormalObject, fvViewDirection ); IN_FS_CameraDir_Tangent.z = dot( normalObject, fvViewDirection ); IN_FS_LightDir_Tangent.x = dot( tangentObject, fvLightDirection ); IN_FS_LightDir_Tangent.y = dot( binormalObject, fvLightDirection ); IN_FS_LightDir_Tangent.z = dot( normalObject, fvLightDirection ); gl_Position = (uViewProjection*uModel) * vec4(IN_VS_Position, 1.0); } The VS just builds the TBN matrix, from incoming normal, tangent and binormal in world space. Calculates the light and eye direction in worldspace. And finally transforms the light and eye direction into tangent space. 
FS: #version 400 // uniforms uniform Light { vec4 fvDiffuse; vec4 fvAmbient; vec4 fvSpecular; }; uniform Material { vec4 diffuse; vec4 ambient; vec4 specular; vec4 emissive; float fSpecularPower; float shininessStrength; }; uniform sampler2D colorSampler; uniform sampler2D normalMapSampler; uniform sampler2D heightMapSampler; in vec2 IN_FS_Texcoord; in vec3 IN_FS_CameraDir_Tangent; in vec3 IN_FS_LightDir_Tangent; out vec4 color; vec2 TraceRay(in float height, in vec2 coords, in vec3 dir, in float mipmap){ vec2 NewCoords = coords; vec2 dUV = - dir.xy * height * 0.08; float SearchHeight = 1.0; float prev_hits = 0.0; float hit_h = 0.0; for(int i=0;i<10;i++){ SearchHeight -= 0.1; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = coords + dUV * (1.0-hit_h) * 10.0f - dUV; vec2 Temp = NewCoords; SearchHeight = hit_h+0.1; float Start = SearchHeight; dUV *= 0.2; prev_hits = 0.0; hit_h = 0.0; for(int i=0;i<5;i++){ SearchHeight -= 0.02; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = Temp + dUV * (Start - hit_h) * 50.0f; return NewCoords; } void main( void ) { vec3 fvLightDirection = normalize( IN_FS_LightDir_Tangent ); vec3 fvViewDirection = normalize( IN_FS_CameraDir_Tangent ); float mipmap = 0; vec2 NewCoord = TraceRay(0.1,IN_FS_Texcoord,fvViewDirection,mipmap); //vec2 ddx = dFdx(NewCoord); //vec2 ddy = dFdy(NewCoord); vec3 BumpMapNormal = textureLod(normalMapSampler, NewCoord.xy, mipmap).xyz; BumpMapNormal = normalize(2.0 * BumpMapNormal - vec3(1.0, 1.0, 1.0)); vec3 fvNormal = BumpMapNormal; float fNDotL = dot( fvNormal, fvLightDirection ); vec3 fvReflection = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection ); float fRDotV = max( 0.0, dot( fvReflection, fvViewDirection ) ); vec4 fvBaseColor = textureLod( colorSampler, NewCoord.xy,mipmap); vec4 fvTotalAmbient = fvAmbient * fvBaseColor; vec4 fvTotalDiffuse = fvDiffuse * fNDotL * fvBaseColor; vec4 fvTotalSpecular = fvSpecular * ( pow( fRDotV, fSpecularPower ) ); color = ( fvTotalAmbient + (fvTotalDiffuse + fvTotalSpecular) ); } The FS implements the displacement technique in TraceRay method, while always using mipmap level 0. Most of the code is from NVIDIA sample and another paper I found on the web, so I guess there cannot be much wrong in here. At the end it uses the modified UV coords for getting the displaced normal from the normal map and the color from the color map. I looking forward for some ideas. Thanks in advance! Edit: Here is the code loading the heightmap: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mImageData); glGenerateMipmap(GL_TEXTURE_2D); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); Maybe something wrong in here?

    Read the article
