Search Results

Search found 13565 results on 543 pages for '3d game'.

  • How can I select an audio output device in directshow

    - by Vibhore Tanwer
    I was wondering how I can select the output device for audio in DirectShow. I am able to enumerate the available audio output devices, but how can I make one of them the active output device? Audio always goes to the default device. I want to be able to output audio on the device of my choice. I have been searching through Google but couldn't find anything useful. All I could get was this link, but it doesn't really solve my problem. Any help would be appreciated.
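
    A minimal sketch of what usually works (error handling, COM initialization and the friendly-name check omitted; graphBuilder is assumed to be your existing IGraphBuilder): enumerate CLSID_AudioRendererCategory and add the chosen renderer filter to the graph yourself before building it, so Intelligent Connect uses that filter instead of the default device.

        ICreateDevEnum *devEnum = NULL;
        CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                         IID_ICreateDevEnum, (void**)&devEnum);

        IEnumMoniker *enumMoniker = NULL;
        devEnum->CreateClassEnumerator(CLSID_AudioRendererCategory, &enumMoniker, 0);

        IMoniker *moniker = NULL;
        while (enumMoniker->Next(1, &moniker, NULL) == S_OK) {
            // Read the FriendlyName property from the moniker's property bag
            // here and skip devices that don't match the user's choice.
            IBaseFilter *audioRenderer = NULL;
            moniker->BindToObject(NULL, NULL, IID_IBaseFilter, (void**)&audioRenderer);
            graphBuilder->AddFilter(audioRenderer, L"Selected audio renderer");
            moniker->Release();
            break;
        }
        // When the graph is built afterwards (e.g. RenderFile), filters already
        // in the graph are preferred over the default renderer.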

  • Moving while doing loop animation in RPGMaker

    - by AzDesign
    I made a custom class to display a character portrait in RPGMaker XP. Here is the class:

        class Poser
          attr_accessor :images
          def initialize
            @images = Sprite.new
            @images.bitmap = RPG::Cache.picture('Character.png') # 100x300
            @images.x = 540 # place it on the bottom right corner of the screen
            @images.y = 180
          end
        end

    Create an event on the map to create an instance as a global variable, tada! The image popped out. OK, nice. But I'm not satisfied. Now I want it to have a bobbing-head animation (just like when we breathe, we sometimes bob our head up and down), so I added more methods:

        def move(x, y)
          @images.x += x
          @images.y += y
        end

        def animate(x, y, step, delay)
          forward = true
          2.times {
            step.times {
              wait(delay)
              if forward
                move(x / step, y / step)
              else
                move(-x / step, -y / step)
              end
            }
            wait(delay * 3)
            forward = false
          }
        end

        def wait(time)
          while time > 0
            time -= 1
            Graphics.update
          end
        end

    I called the method to run the animation and it works, so far so good. But the problem is that WHILE the portrait goes up and down, I cannot move my character until the animation is finished. So that's it: I'm stuck in the loop block. What I want is to watch the portrait move up and down while I walk around the village, talk to NPCs, etc. Can anyone solve my problem, or suggest a better solution? Thanks in advance
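
    A sketch of the usual non-blocking approach (method names are my own invention): drop the wait/Graphics.update loop, keep the animation's state in instance variables, and advance it one step per frame from an update method that the current scene already calls every frame.

        class Poser
          def start_animation(x, y, step, delay)
            @anim_x = x
            @anim_y = y
            @anim_step = step
            @anim_delay = delay
            @anim_counter = 0
            @anim_index = 0
            @anim_forward = true
          end

          # Call this once per frame (e.g. from Scene_Map#update); it never
          # blocks, so the player keeps control while the portrait bobs.
          def update
            return if @anim_step.nil?
            @anim_counter += 1
            return if @anim_counter < @anim_delay
            @anim_counter = 0
            dir = @anim_forward ? 1 : -1
            move(dir * @anim_x / @anim_step, dir * @anim_y / @anim_step)
            @anim_index += 1
            if @anim_index == @anim_step
              @anim_index = 0
              @anim_forward = !@anim_forward
            end
          end
        end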

  • How to implement explosion in OpenGL with a particle effect?

    - by Chan
    I'm relatively new to OpenGL and I'm clueless about how to implement an explosion, so could anyone give me some ideas on how to start? Suppose the explosion occurs at location (x, y, z). I'm thinking of randomly generating a collection of vectors with (x, y, z) as the origin, then drawing a particle (glutSolidCube) that moves along each vector for some period of time, say after 1000 updates, at which point it disappears. Is this approach feasible? A minimal example would be greatly appreciated.
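
    The approach is feasible; a minimal CPU-side sketch (illustrative names, not from any particular engine) might look like this, using a lifetime in seconds rather than a fixed number of updates so the effect is frame-rate independent:

        #include <cmath>
        #include <cstdlib>
        #include <vector>

        struct Particle {
            float pos[3];
            float vel[3];
            float life; // seconds remaining
        };

        std::vector<Particle> particles;

        float frand() { return std::rand() / (float)RAND_MAX; } // 0..1

        void spawnExplosion(float x, float y, float z, int count) {
            for (int i = 0; i < count; ++i) {
                Particle p;
                p.pos[0] = x; p.pos[1] = y; p.pos[2] = z;
                // Random direction on the unit sphere, scaled by a random speed.
                float theta = 6.2831853f * frand();
                float phi   = std::acos(2.0f * frand() - 1.0f);
                float speed = 1.0f + 2.0f * frand();
                p.vel[0] = speed * std::sin(phi) * std::cos(theta);
                p.vel[1] = speed * std::sin(phi) * std::sin(theta);
                p.vel[2] = speed * std::cos(phi);
                p.life = 1.0f;
                particles.push_back(p);
            }
        }

        void updateParticles(float dt) {
            for (std::size_t i = 0; i < particles.size(); ) {
                Particle& p = particles[i];
                for (int k = 0; k < 3; ++k) p.pos[k] += p.vel[k] * dt;
                p.life -= dt;
                if (p.life <= 0.0f) { // dead: swap-and-pop, order doesn't matter
                    particles[i] = particles.back();
                    particles.pop_back();
                } else {
                    ++i;
                }
            }
        }

        // Each frame: updateParticles(dt), then draw a glutSolidCube (or a
        // point sprite) at each live particle's position.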

  • Why does my code dividing a 2D array into chunks fail?

    - by Borog
    I have a 2D array representing my world. I want to divide this huge thing into smaller chunks to make collision detection easier. I have a Chunk class that consists only of another 2D array with a specific width and height, and I want to iterate through the world, create new Chunks and add them to a list (or maybe a Map with coordinates as the key; we'll see about that).

        world = new World(8192, 1024);
        Integer[][] chunkArray;

        for (int a = 0; a < map.getHeight() / Chunk.chunkHeight; a++) {
            for (int b = 0; b < map.getWidth() / Chunk.chunkWidth; b++) {
                Chunk chunk = new Chunk();
                chunkArray = new Integer[Chunk.chunkWidth][Chunk.chunkHeight];
                for (int x = Chunk.chunkHeight * a; x < Chunk.chunkHeight * (a + 1); x++) {
                    for (int y = Chunk.chunkWidth * b; y < Chunk.chunkWidth * (b + 1); y++) {
                        // Yes, the tileMap actually is [height][width] I'll have
                        // to fix that somewhere down the line -.-
                        chunkArray[y][x] = map.getTileMap()[x*a][y*b]; // TODO: Attach to chunk
                    }
                }
                chunkList.add(chunk);
            }
        }
        System.out.println(chunkList.size());

    The two outer loops get a new chunk in a specific row and column. I do that by dividing the overall size of the map by the chunk size. The inner loops then fill a new chunkArray and attach it to the chunk. But somehow my maths is broken here. Let's assume chunkHeight = chunkWidth = 64. For the first array I want to start at [0][0] and go until [63][63]. For the next I want to start at [64][64] and go until [127][127], and so on. But I get an out-of-bounds exception and can't figure out why. Any help appreciated! Actually, I think I know where the problem lies: chunkArray[y][x] can't work, because y goes from 0-63 only in the first iteration. Afterwards it goes from 64-127, so of course it is out of bounds. Still no nice solution though :/ EDIT:

        if (y < Chunk.chunkWidth && x < Chunk.chunkHeight)
            chunkArray[y][x] = map.getTileMap()[y][x];

    This works for the first iteration... now I need the commonly accepted formula.
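
    A hedged sketch of that formula (variable names as in the post): keep iterating in world coordinates, but subtract the chunk's origin when indexing the chunk-local array, so every chunk indexes from 0. Note the original world lookup [x*a][y*b] also multiplied by the chunk index, which reads the wrong tile even when it stays in bounds.

        for (int x = Chunk.chunkHeight * a; x < Chunk.chunkHeight * (a + 1); x++) {
            for (int y = Chunk.chunkWidth * b; y < Chunk.chunkWidth * (b + 1); y++) {
                int localX = x - Chunk.chunkHeight * a; // always 0 .. chunkHeight-1
                int localY = y - Chunk.chunkWidth * b;  // always 0 .. chunkWidth-1
                // tileMap is [height][width] as noted above, so the row
                // coordinate x comes first.
                chunkArray[localY][localX] = map.getTileMap()[x][y];
            }
        }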

  • 2D Animation Smoothness - Delta time vs. Kinematics

    - by viperld002
    I'm animating a sprite in 2D with keyframes of rotation and xy-positions. I've recently had a discussion with someone saying that when the device (an iPad, using cocos2d) hits a performance bump due to whatever else the user may be doing, lag will arise, and that the best way to fight it is to not use actual positions, but velocities, accelerations and torques with kinematics. His message is to evaluate the positions and rotations from these speeds at the current point in time. I've never heard of using kinematics to stem lag in 2D animations and am not sure how effective it could be. Also, it seems to be overkill. The application is not networked, so it's all running on a local device. The desired effect is that the animation always plays as closely as it can to the target frame rate. Wouldn't the technique suffer the same problems as just using the time since the last frame, or a fixed time step, since the kinematics would also require some time value to perform the calculation? What techniques could you suggest to best achieve the desired effect?

    EDIT 1: Thank you for your responses; they are very illuminating. I want to clarify my question before choosing an answer, however, to make sure that this post really serves its purpose. I have a sprite of a ball, and a text file with three arrays' worth of information (rotation, translations x, translations y), with each unit of information existing as a keyframe to be stepped through (0 to 49 and back to 0 to replay it again). I have this playing by interpolating from the current keyframe to the next, every n units of time. The animation is visibly correct when compared to a video I was given of it, and it is smooth because of the interpolation between the keyframes. This is the existing state of the project. There are no physics simulated, only a static animation of a ball moving in a way an artist specifically designed. Should I, instead of rotation in degrees and translations by positions in space, derive velocities, accelerations and torques to express this static animation as a function of time? As in, position now = foo(time now), where foo uses kinematics.
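
    For reference, a keyframe track can already be expressed as a function of time without introducing kinematics; a sketch (my own names, assuming a non-empty track and a wrap-around loop; the 0-49-0 ping-pong would mirror the index instead of wrapping it):

        #include <vector>

        // keys holds the keyframe values for one channel (rotation, x or y).
        float sampleTrack(const std::vector<float>& keys, float keysPerSecond, float timeSeconds)
        {
            float frame = timeSeconds * keysPerSecond;
            int i = static_cast<int>(frame) % static_cast<int>(keys.size());
            int j = (i + 1) % static_cast<int>(keys.size()); // wraps the last key to the first
            float t = frame - static_cast<int>(frame);       // fraction between the two keys
            return keys[i] + (keys[j] - keys[i]) * t;        // linear interpolation
        }

    Because the sample position depends only on elapsed time, a slow frame simply advances further along the track instead of falling behind, which is the property the kinematics suggestion was after.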

  • Balancing Player vs. Monsters: Level-Up Curves

    - by ashes999
    I've written a fair number of games that have RPG-like "levelling up", where the player gains experience for killing monsters/enemies and eventually reaches a new level, where their stats increase. How do you find a balance between player growth, monster strength, and difficulty? The extreme ends of this spectrum are:

    - The player levels up really fast and blows away monsters without much effort
    - Monsters are incredibly strong and, even at low levels, are very difficult to beat

    I've also tried a strange situation of making enemies relative to players, i.e. an enemy will always be at 50% or 100% or 150% of player stats (thus requiring the player to use other techniques instead of brute strength to succeed). But where's the balance, and how do you find it?

    Edit: For example, I am expecting to hear things like:

    - Balance high instead of balance low (200 HP and 20 str is easier to balance than 20 HP and 2 str)
    - Look at the easiest vs. hardest monsters, and see what you have in terms of a range

  • Outline Shader Effect for Orthogonal Geometry in XNA

    - by Griffin
    I just recently started learning the art of shading, but I can't give an outline width to 2D, concave geometry when restrained to a single vertex/pixel shader technique (thanks to XNA). The shape I need to give an outline to has smooth, per-vertex coloring, as well as opacity. The outline, which has smooth, per-vertex coloring, variable width, and opacity, cannot interfere with the original shape's colors. A pixel-depth border detection algorithm won't work, because pixel depth isn't a Shader Model 3.0 semantic. Expanding the geometry / redrawing won't work, because it interferes with the original shape's colors. I'm wondering if I can do something with the stencil/depth buffer outside of the shader functions, since I have access to that through the graphics device, but I don't believe I'm able to manipulate actual values. How might I do this?
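
    A sketch of the stencil route, assuming XNA 4.0's DepthStencilState API (DrawShape and DrawExpandedShape are placeholders for your own draw calls): mask the shape's pixels first, then let the outline pass fail the stencil test wherever the original shape already drew, so the outline can never disturb those colors.

        DepthStencilState writeStencil = new DepthStencilState
        {
            StencilEnable = true,
            StencilFunction = CompareFunction.Always,
            StencilPass = StencilOperation.Replace,
            ReferenceStencil = 1,
            DepthBufferEnable = false,
        };
        DepthStencilState drawWhereClear = new DepthStencilState
        {
            StencilEnable = true,
            StencilFunction = CompareFunction.Equal,
            ReferenceStencil = 0,
            DepthBufferEnable = false,
        };

        device.Clear(ClearOptions.Stencil, Color.Transparent, 0, 0);
        device.DepthStencilState = writeStencil;
        DrawShape();          // original shape, writes stencil = 1
        device.DepthStencilState = drawWhereClear;
        DrawExpandedShape();  // expanded, outline-colored copy; rejected where stencil = 1

    This stays outside the shader functions entirely, which is the part the graphics device does give you control over.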

  • Calculating vertex normals on the GPU

    - by Etan
    I have a height-map sampled on a regular grid, stored in an array. Now I want to use the normals at the sampled vertices for a smoothing algorithm. The way I'm currently doing it is as follows. For each vertex, generate triangles to all its neighbours. This results in eight neighbours when using the 1-neighbourhood, for all vertices except at the borders:

        +---+---+
        ¦ \ ¦ / ¦
        +---o---+
        ¦ / ¦ \ ¦
        +---+---+

    For each adjacent triangle, calculate its normal by taking the cross product of the two edge vectors. As the triangles all have the same size when projected onto the xy-plane, I simply average over all eight normals and store the result for the vertex. However, as my data grows larger, this approach takes too much time, and I would prefer doing it on the GPU in shader code. Is there an easy method, maybe if I could store my height-map as a texture?
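
    A sketch of the texture route (uniform names are mine, not from the post): store the height-map in a texture and reconstruct each normal with central differences, which closely approximates the averaged cross products on a regular grid.

        uniform sampler2D heightMap; // height in the red channel
        uniform vec2 texel;          // 1.0 / texture dimensions
        uniform float gridScale;     // world-space spacing between samples

        vec3 computeNormal(vec2 uv)
        {
            float hL = texture2D(heightMap, uv - vec2(texel.x, 0.0)).r;
            float hR = texture2D(heightMap, uv + vec2(texel.x, 0.0)).r;
            float hD = texture2D(heightMap, uv - vec2(0.0, texel.y)).r;
            float hU = texture2D(heightMap, uv + vec2(0.0, texel.y)).r;
            // Central differences in x and y; the middle component scales
            // with the grid spacing so slopes come out in world units.
            return normalize(vec3(hL - hR, 2.0 * gridScale, hD - hU));
        }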

  • what's wrong with this Lua code (creating text inside listener in Corona)

    - by Greg
    If you double/triple click on myObject here, the text does NOT disappear. Why is this not working when there are multiple events being fired? That is, are there actually multiple "text" objects, with some still existing but no longer having a reference to them held by the local "myText" variable? Do I have to manually call removeSelf() on the local "myText" field before assigning it another display.newText(...)?

        display.setStatusBar( display.HiddenStatusBar )

        local myText

        local function hideMyText(event)
            print("hideMyText")
            myText.isVisible = false
        end

        local function showTextListener(event)
            if event.phase == "began" then
                print("showTextListener")
                myText = display.newText("Hello World!", 0, 0, native.systemFont, 30)
                timer.performWithDelay(1000, hideMyText, 1)
            end
        end

        -- Display object to press to show text
        local myObject = display.newImage( "inventory_button.png", display.contentWidth/2, display.contentHeight/2 )
        myObject:addEventListener("touch", showTextListener)

    Question 2 - Also, why is it the case that if I add a line BEFORE "myText = ..." of:

    a) "if myText then myText:removeSelf() end" - THIS FIXES THINGS, whereas
    b) "if myText then myText = nil end" - DOES NOT FIX THINGS?

    I'm interested in hearing how Lua works here in the answer...
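
    A sketch of the reasoning, in the post's own terms: display.newText() puts a brand-new display object on the stage every time it runs. Reassigning or nil-ing the local myText only drops the variable's reference; the stage still holds (and draws) the previous object, which is why (b) changes nothing while (a)'s removeSelf() actually takes the old object off the stage. A guarded version of the listener:

        local function showTextListener(event)
            if event.phase == "began" then
                if myText then
                    myText:removeSelf()  -- remove the previous text from the stage
                    myText = nil         -- then drop the reference
                end
                myText = display.newText("Hello World!", 0, 0, native.systemFont, 30)
                timer.performWithDelay(1000, hideMyText, 1)
            end
        end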

  • Examples of good Javascript/HTML5 based games

    - by Zuch
    Now that Flash is largely being replaced with HTML5 elements (video, audio, canvas, etc.), are there any good examples of web-based games built on completely open standards (meaning JavaScript, HTML and CSS)? I see a lot of examples of pure HTML5 implementations of what was once only possible in Flash (like the stuff here: http://www.html5rocks.com/) but not many games, a domain which still seems dominated by Flash. I'm curious what's possible and what the limitations are.

  • Behavior tree implementation details

    - by angryInsomniac
    I have been looking around for implementation details of behavior trees. The best descriptions I found were by Alex Champandard and some of Damian Isla's talk about the AI in Halo 2 (the video of which is sadly locked up in the GDC vault). However, both descriptions fall short of helping one actually create a BT, and one particular question has been bugging me for a while: when is the tree in a behavior tree evaluated? Furthermore: if the tree is in the middle of executing a sequence of actions (patrolling waypoints) and a higher-priority impulse comes in (a distraction sound), how do you switch to that side of the tree seamlessly, without resorting to a state-machine-like system? And if it is decided that the impulse was irrelevant (the distraction is too far away to affect this guard), how do you go back to the last thing the guard was doing? I have quite a few questions like this and I don't wish to flood the board with separate queries, so if you know of any resource where questions like these can be answered I would be very grateful.
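
    A sketch of one common answer (my own minimal node set, not from the talks cited above): tick the whole tree from the root every AI update. Priority is encoded purely by child order under a selector, so a higher-priority impulse branch is always tried first; if its relevance check fails (the distraction is too far away), the selector falls through and the patrol sequence resumes from the running child it remembered. No separate state machine is needed.

        #include <memory>
        #include <vector>

        enum class Status { Success, Failure, Running };

        struct Node {
            virtual ~Node() {}
            virtual Status tick() = 0;
        };

        struct Selector : Node { // children ordered highest priority first
            std::vector<std::unique_ptr<Node>> children;
            Status tick() {
                for (std::size_t i = 0; i < children.size(); ++i) {
                    Status s = children[i]->tick();
                    if (s != Status::Failure) return s; // pre-empts lower branches
                }
                return Status::Failure;
            }
        };

        struct Sequence : Node {
            std::vector<std::unique_ptr<Node>> children;
            std::size_t current = 0; // the sequence resumes here on the next tick
            Status tick() {
                while (current < children.size()) {
                    Status s = children[current]->tick();
                    if (s == Status::Running) return Status::Running;
                    if (s == Status::Failure) { current = 0; return Status::Failure; }
                    ++current;
                }
                current = 0;
                return Status::Success;
            }
        };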

  • What causes player box/world geometry glitches in old games?

    - by Alexander
    I'm looking to understand, and find the terminology for, what causes (or allows) players to interfere with geometry in old games. Famously, id Software's Quake 3 gave birth to a whole community of people breaking the physics by jumping, sliding, getting stuck and launching themselves off points in geometry. Some months ago (though I'd be darned if I can find it again!) I saw a conference talk held by Bungie's Vic DeLeon and a colleague in which Vic briefly discussed the issues he ran into while attempting to wrap 'collision' objects (please correct my terminology) around environment objects, so that players could appear as though they were walking on organic surfaces without clipping through them or appearing to walk on air at certain points, due to complexities in the modeling. My aim is to compose a case-study essay for university in which I can tackle this issue in games, drawing on early exploits and on how techniques have changed, both to address such exploits and to aid the gameplay itself. I have 3 current-day examples of where exploits still exist, and specifically targeting id Software clearly shows they massively improved their techniques between Q3 and Q4. So in summary, with your help please, I'd like to gain a slightly better understanding of this issue as a whole (its terminology mainly) so I can use terms and ask the right questions within the right contexts. In practical application I know what it is and how to do it, but I don't have the benefit of level-design knowledge yet, nor its technical widgety knick-knack terms =) Many thanks in advance AJ

  • Incorrect results for frustum cull

    - by DeadMG
    Previously, I had a problem with my frustum culling producing too optimistic results, that is, including many objects that were not in the view volume. Now I have refactored that code and produced a cull that should be accurate to the actual frustum, instead of an axis-aligned box approximation. The problem is that now it never returns anything to be in the view volume. As the mathematical support library I'm using does not provide plane support functions, I had to code much of this functionality myself, and I'm not really the mathematical type, so it's likely that I've made some silly error somewhere. As follows is the relevant code:

        class Plane
        {
        public:
            Plane()
            {
                r0 = Math::Vector(0,0,0);
                normal = Math::Vector(0,1,0);
            }
            Plane(Math::Vector p1, Math::Vector p2, Math::Vector p3)
            {
                r0 = p1;
                normal = Math::Cross((p2 - p1), (p3 - p1));
            }
            Math::Vector r0;
            Math::Vector normal;
        };

    This class represents one plane as a point and a normal vector.

        class Frustum
        {
        public:
            Frustum( const std::array<Math::Vector, 8>& points )
            {
                planes[0] = Plane(points[0], points[1], points[2]);
                planes[1] = Plane(points[4], points[5], points[6]);
                planes[2] = Plane(points[0], points[1], points[4]);
                planes[3] = Plane(points[2], points[3], points[6]);
                planes[4] = Plane(points[0], points[2], points[4]);
                planes[5] = Plane(points[1], points[3], points[5]);
            }
            Plane planes[6];
        };

    The points are passed in an order where (the inverse of) each bit of the index of each point indicates whether it's the left, top, and back of the frustum, respectively. As such, I just picked any three points where they all shared one bit in common to define the planes. My intersection test is as follows (based on this):

        bool Intersects(Math::AABB lhs, const Frustum& rhs) const
        {
            for(int i = 0; i < 6; i++)
            {
                Math::Vector pvertex = lhs.TopRightFurthest;
                Math::Vector nvertex = lhs.BottomLeftClosest;
                if (rhs.planes[i].normal.x <= -0.0f) {
                    std::swap(pvertex.x, nvertex.x);
                }
                if (rhs.planes[i].normal.y <= -0.0f) {
                    std::swap(pvertex.y, nvertex.y);
                }
                if (rhs.planes[i].normal.z <= -0.0f) {
                    std::swap(pvertex.z, nvertex.z);
                }
                if (Math::Dot(rhs.planes[i].r0, nvertex) < 0.0f) {
                    return false;
                }
            }
            return true;
        }

    Also of note is that because I'm using a left-handed co-ordinate system, I wrote my Cross function to return the negative of the formula given on Wikipedia. Any suggestions as to where I've made a mistake?
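
    A hedged guess at the silly error: Math::Dot(rhs.planes[i].r0, nvertex) dots the plane's anchor point with a box corner, which is not a distance test at all. The signed distance of a point p from the plane (r0, normal) is dot(normal, p - r0). Assuming the winding gives inward-facing normals, the box is completely outside a plane exactly when its positive vertex (the corner the swaps select into pvertex) lies behind that plane, so the rejection test would read:

        if (Math::Dot(rhs.planes[i].normal, pvertex - rhs.planes[i].r0) < 0.0f) {
            return false;
        }

    If the normals turn out to face outwards instead, flip the comparison (or the cross-product argument order).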

  • GLSL: Strange light reflections

    - by Tom
    According to this tutorial I'm trying to do normal mapping using GLSL, but something is wrong and I can't find the solution. The output render is in this image: Image1. In the image is a plane made of two triangles, and each is illuminated differently (that is bad). The plane has 6 vertices; in the upper-left side of the plane there are 2 identical vertices (same in the lower right). Here are some vectors that are the same for each vertex:

        normal vector    =  0, 1,  0   (red lines in the image)
        tangent vector   =  0, 0, -1   (green lines in the image)
        bitangent vector = -1, 0,  0   (blue lines in the image)

    Here I have one question: do the two identical vertices need to have the same tangent and bitangent? I have tried giving the tangents other values, but the effect was still similar. Here are my shaders.

    Vertex shader:

        #version 130

        // Input vertex data, different for all executions of this shader.
        in vec3 vertexPosition_modelspace;
        in vec2 vertexUV;
        in vec3 vertexNormal_modelspace;
        in vec3 vertexTangent_modelspace;
        in vec3 vertexBitangent_modelspace;

        // Output data; will be interpolated for each fragment.
        out vec2 UV;
        out vec3 Position_worldspace;
        out vec3 EyeDirection_cameraspace;
        out vec3 LightDirection_cameraspace;
        out vec3 LightDirection_tangentspace;
        out vec3 EyeDirection_tangentspace;

        // Values that stay constant for the whole mesh.
        uniform mat4 MVP;
        uniform mat4 V;
        uniform mat4 M;
        uniform mat3 MV3x3;
        uniform vec3 LightPosition_worldspace;

        void main(){
            // Output position of the vertex, in clip space : MVP * position
            gl_Position = MVP * vec4(vertexPosition_modelspace,1);

            // Position of the vertex, in worldspace : M * position
            Position_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz;

            // Vector that goes from the vertex to the camera, in camera space.
            // In camera space, the camera is at the origin (0,0,0).
            vec3 vertexPosition_cameraspace = ( V * M * vec4(vertexPosition_modelspace,1)).xyz;
            EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace;

            // Vector that goes from the vertex to the light, in camera space.
            // M is omitted because it's identity.
            vec3 LightPosition_cameraspace = ( V * vec4(LightPosition_worldspace,1)).xyz;
            LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace;

            // UV of the vertex. No special space for this one.
            UV = vertexUV;

            // model to camera = ModelView
            vec3 vertexTangent_cameraspace = MV3x3 * vertexTangent_modelspace;
            vec3 vertexBitangent_cameraspace = MV3x3 * vertexBitangent_modelspace;
            vec3 vertexNormal_cameraspace = MV3x3 * vertexNormal_modelspace;

            // You can use dot products instead of building this matrix and
            // transposing it. See References for details.
            mat3 TBN = transpose(mat3(
                vertexTangent_cameraspace,
                vertexBitangent_cameraspace,
                vertexNormal_cameraspace
            ));

            LightDirection_tangentspace = TBN * LightDirection_cameraspace;
            EyeDirection_tangentspace = TBN * EyeDirection_cameraspace;
        }

    Fragment shader:

        #version 130

        // Interpolated values from the vertex shaders
        in vec2 UV;
        in vec3 Position_worldspace;
        in vec3 EyeDirection_cameraspace;
        in vec3 LightDirection_cameraspace;
        in vec3 LightDirection_tangentspace;
        in vec3 EyeDirection_tangentspace;

        // Output data
        out vec3 color;

        // Values that stay constant for the whole mesh.
        uniform sampler2D DiffuseTextureSampler;
        uniform sampler2D NormalTextureSampler;
        uniform sampler2D SpecularTextureSampler;
        uniform mat4 V;
        uniform mat4 M;
        uniform mat3 MV3x3;
        uniform vec3 LightPosition_worldspace;

        void main(){
            // Light emission properties
            // You probably want to put them as uniforms
            vec3 LightColor = vec3(1,1,1);
            float LightPower = 40.0;

            // Material properties
            vec3 MaterialDiffuseColor = texture2D( DiffuseTextureSampler, vec2(UV.x,-UV.y) ).rgb;
            vec3 MaterialAmbientColor = vec3(0.1,0.1,0.1) * MaterialDiffuseColor;
            //vec3 MaterialSpecularColor = texture2D( SpecularTextureSampler, UV ).rgb * 0.3;
            vec3 MaterialSpecularColor = vec3(0.5,0.5,0.5);

            // Local normal, in tangent space. V tex coordinate is inverted
            // because normal map is in TGA (not in DDS) for better quality
            vec3 TextureNormal_tangentspace = normalize(texture2D( NormalTextureSampler, vec2(UV.x,-UV.y) ).rgb*2.0 - 1.0);

            // Distance to the light
            float distance = length( LightPosition_worldspace - Position_worldspace );

            // Normal of the computed fragment, in camera space
            vec3 n = TextureNormal_tangentspace;
            // Direction of the light (from the fragment to the light)
            vec3 l = normalize(LightDirection_tangentspace);
            // Cosine of the angle between the normal and the light direction,
            // clamped above 0
            //  - light is at the vertical of the triangle -> 1
            //  - light is perpendicular to the triangle -> 0
            //  - light is behind the triangle -> 0
            float cosTheta = clamp( dot( n,l ), 0,1 );

            // Eye vector (towards the camera)
            vec3 E = normalize(EyeDirection_tangentspace);
            // Direction in which the triangle reflects the light
            vec3 R = reflect(-l,n);
            // Cosine of the angle between the Eye vector and the Reflect vector,
            // clamped to 0
            //  - Looking into the reflection -> 1
            //  - Looking elsewhere -> < 1
            float cosAlpha = clamp( dot( E,R ), 0,1 );

            color =
                // Ambient : simulates indirect lighting
                MaterialAmbientColor +
                // Diffuse : "color" of the object
                MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance*distance) +
                // Specular : reflective highlight, like a mirror
                MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha,5) / (distance*distance);

            //color.xyz = E;
            //color.xyz = LightDirection_tangentspace;
            //color.xyz = EyeDirection_tangentspace;
        }

    I have replaced the original color value with the EyeDirection_tangentspace vector and then got another strange effect, but I cannot link the image (not enough reputation). Is it possible that something is wrong with these shaders, or maybe somewhere else in my code, e.g. with my matrices?

    SOLVED

    Solved... 3 days needed for changing one letter, from this:

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(
            4,                          // attribute
            3,                          // size
            GL_FLOAT,                   // type
            GL_FALSE,                   // normalized?
            sizeof(VboVertex),          // stride
            (void*)(12*sizeof(float))   // array buffer offset
        );

    to this:

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(
            4,                          // attribute
            3,                          // size
            GL_FLOAT,                   // type
            GL_FALSE,                   // normalized?
            sizeof(VboVertex),          // stride
            (void*)(11*sizeof(float))   // array buffer offset
        );

    See the difference? :)

  • When I shoot a gun while walking, the bullet is off-center, but when standing still it's fine

    - by Vlad1k
    I am making a small project in Unity, and whenever I walk with the gun and shoot at the same time, the bullets seem to curve and fly off 2-3 cm from the center. When I stand still this doesn't happen. This is my main JavaScript code:

        @script RequireComponent(AudioSource)

        var projectile : Rigidbody;
        var speed = 500;
        var ammo = 30;
        var fireRate = 0.1;
        private var nextFire = 0.0;

        function Update() {
            if (Input.GetButton("Fire1") && Time.time > nextFire) {
                if (ammo != 0) {
                    nextFire = Time.time + fireRate;
                    var clone = Instantiate(projectile, transform.position, transform.root.rotation);
                    clone.velocity = transform.TransformDirection(Vector3(0, 0, speed));
                    ammo = ammo - 1;
                    audio.Play();
                }
                else {
                }
            }
        }

    I assume that these two lines need to be tweaked:

        var clone = Instantiate(projectile, transform.position, transform.root.rotation);
        clone.velocity = transform.TransformDirection(Vector3(0, 0, speed));

    Thanks in advance, and please remember that I just started Unity, and I might have a difficult time understanding some things. Thanks!
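
    A hedged guess at the tweak (it assumes the player root has a Rigidbody; a CharacterController exposes .velocity the same way): while walking, the shooter and the spawn point keep moving after the bullet is created, so a bullet given only its own muzzle velocity appears to drift off-center. Adding the shooter's current velocity at spawn time is the usual fix:

        var clone = Instantiate(projectile, transform.position, transform.root.rotation);
        clone.velocity = transform.root.rigidbody.velocity
                       + transform.TransformDirection(Vector3(0, 0, speed));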

  • How to create a copy of an instance without having access to private variables

    - by Jamie
    I'm having a bit of a problem. Let me show you the code first:

        public class Direction {
            private CircularList xSpeed, zSpeed;
            private int[] dirSquare = {-1, 0, 1, 0};

            public Direction(int xSpeed, int zSpeed){
                this.xSpeed = new CircularList(dirSquare, xSpeed);
                this.zSpeed = new CircularList(dirSquare, zSpeed);
            }

            public Direction(Point dirs){
                this(dirs.x, dirs.y);
            }

            public void shiftLeft(){
                xSpeed.shiftLeft();
                zSpeed.shiftRight();
            }

            public void shiftRight(){
                xSpeed.shiftRight();
                zSpeed.shiftLeft();
            }

            public int getXSpeed(){
                return this.xSpeed.currentValue();
            }

            public int getZSpeed(){
                return this.zSpeed.currentValue();
            }
        }

    Now let's say I have an instance of Direction:

        Direction dir = new Direction(0, 0);

    As you can see in the code of Direction, the arguments fed to the constructor are passed directly to some other class. One cannot be sure they stay the same, because the methods shiftRight() and shiftLeft() could have been called, which change those numbers. My question is: how do I create a completely new instance of Direction that is basically a copy (not by reference) of dir? The only way I see is to create public methods in both CircularList (I can post the code of this class, but it's not relevant) and Direction that return the variables needed to create a copy of the instance, but this solution seems really dirty, since those numbers are not supposed to be touched after being fed to the constructor, and therefore they are private.
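
    A sketch of the copy-constructor route (it assumes CircularList gets a matching copy constructor, which isn't shown in the post): in Java, private is class-level rather than instance-level, so a Direction can read another Direction's private fields directly, and CircularList can copy its own internals. No public getters are needed.

        public Direction(Direction other) {
            // Legal: same class, so other's private fields are accessible here.
            this.xSpeed = new CircularList(other.xSpeed);
            this.zSpeed = new CircularList(other.zSpeed);
        }

    Usage would then be simply: Direction copy = new Direction(dir);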

  • Implementing a wheeled character controller

    - by Lazlo
    I'm trying to implement Boxycraft's character controller in XNA (with Farseer), as Bryan Dysmas did (minus the jumping part, yet). My current implementation seems to sometimes glitch in between two parallel planes, and fails to climb 45-degree slopes. (YouTube videos in links; the plane glitch is subtle.) How can I fix it? From the textual description, I seem to be doing it right. Here is my implementation (it seems like a huge wall of text, but it's easy to read. I wish I could simplify and isolate the problem more, but I can't):

        public Body TorsoBody { get; private set; }
        public PolygonShape TorsoShape { get; private set; }
        public Body LegsBody { get; private set; }
        public Shape LegsShape { get; private set; }
        public RevoluteJoint Hips { get; private set; }
        public FixedAngleJoint FixedAngleJoint { get; private set; }
        public AngleJoint AngleJoint { get; private set; }

        ...

        this.TorsoBody = BodyFactory.CreateRectangle(this.World, 1, 1.5f, 1);
        this.TorsoShape = new PolygonShape(1);
        this.TorsoShape.SetAsBox(0.5f, 0.75f);
        this.TorsoBody.CreateFixture(this.TorsoShape);
        this.TorsoBody.IsStatic = false;

        this.LegsBody = BodyFactory.CreateCircle(this.World, 0.5f, 1);
        this.LegsShape = new CircleShape(0.5f, 1);
        this.LegsBody.CreateFixture(this.LegsShape);
        this.LegsBody.Position -= 0.75f * Vector2.UnitY;
        this.LegsBody.IsStatic = false;

        this.Hips = JointFactory.CreateRevoluteJoint(this.TorsoBody, this.LegsBody, Vector2.Zero);
        this.Hips.MotorEnabled = true;
        this.AngleJoint = new AngleJoint(this.TorsoBody, this.LegsBody);
        this.FixedAngleJoint = new FixedAngleJoint(this.TorsoBody);
        this.Hips.MaxMotorTorque = float.PositiveInfinity;
        this.World.AddJoint(this.Hips);
        this.World.AddJoint(this.AngleJoint);
        this.World.AddJoint(this.FixedAngleJoint);

        ...

        public void Move(float m) // -1, 0, +1
        {
            this.Hips.MotorSpeed = 0.5f * m;
        }
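
    A guess rather than a confirmed fix: since the hip motor drives the leg wheel, the wheel needs traction to climb, and fixtures default to fairly low friction. Raising it on the stored fixture is cheap to try (a sketch in Farseer 3.x style; the variable name is mine):

        Fixture legsFixture = this.LegsBody.CreateFixture(this.LegsShape);
        legsFixture.Friction = 1.0f; // defaults are much lower; slopes need grip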

  • SDL mouse wheel not picking up

    - by Chris
    Running Ubuntu 11.04 with SDL 1.2, trying to pick up mouse wheel up/down movement with this (stripped down) code:

        int main( int argc, char **argv )
        {
            SDL_MouseButtonEvent *mousebutton = NULL;

            while ( !done )
            {
                if (mousebutton != NULL && mousebutton->button == SDL_BUTTON_LEFT)
                    yrot += 0.75f;
                else if (mousebutton != NULL && mousebutton->button == SDL_BUTTON_RIGHT)
                    yrot -= 0.75f;
                else if (mousebutton != NULL && mousebutton->button == SDL_BUTTON_WHEELUP) {
                    xrot += 0.75f;
                } else if (mousebutton != NULL && mousebutton->button == SDL_BUTTON_WHEELDOWN) {
                    xrot -= 0.75f;
                }

                while ( SDL_PollEvent( &event ) )
                {
                    switch( event.type )
                    {
                    case SDL_MOUSEBUTTONDOWN:
                        mousebutton = &event.button;
                        break;
                    case SDL_MOUSEBUTTONUP:
                        mousebutton = NULL;
                        break;
                    default:
                        break;
                    }
                }
            }
            return 0;
        }

    The strange thing is that scrolling with the mouse wheel does nothing, but if I hold down a mouse button or two and then move the mouse, it hits the SDL_BUTTON_WHEEL code occasionally. This honestly reeks of a pointer issue, which would make sense since I've been spoiled with C# for the past couple of years, but I am just not seeing it. How do I correctly detect mouse scroll events in SDL?
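
    A hedged sketch of the fix: two things conspire here. First, mousebutton points into the single SDL_Event struct, so every SDL_PollEvent overwrites whatever it points at. Second, SDL 1.2 reports a wheel step as an instantaneous SDL_MOUSEBUTTONDOWN immediately followed by SDL_MOUSEBUTTONUP, so a "currently held" flag never observes it. Handle wheel steps at the moment the down event arrives, and copy the button value instead of storing a pointer (heldbutton is a hypothetical Uint8 replacing the pointer):

        while (SDL_PollEvent(&event)) {
            switch (event.type) {
            case SDL_MOUSEBUTTONDOWN:
                if (event.button.button == SDL_BUTTON_WHEELUP)
                    xrot += 0.75f;          /* one step per wheel click */
                else if (event.button.button == SDL_BUTTON_WHEELDOWN)
                    xrot -= 0.75f;
                else
                    heldbutton = event.button.button; /* copy, don't keep a pointer */
                break;
            case SDL_MOUSEBUTTONUP:
                heldbutton = 0;
                break;
            default:
                break;
            }
        }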

  • Build a view frustum from angles

    - by MulletDevil
    I have 4 angles: left, right, top & bottom. These angles are in degrees, and they define the angle between the forward vector and the corresponding side of the frustum. I am trying to use these to calculate the required values for the PerspectiveOffCenter function found here: http://docs.unity3d.com/Documentation/ScriptReference/Camera-projectionMatrix.html. I tried doing (near plane - far plane) * Tan(angle), but that didn't give the correct results.
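
    A sketch of what usually works (PerspectiveOffCenter being the helper defined in the linked docs example; the angle variable names are mine): the off-center extents are measured on the near plane, so each one is the near distance (not near minus far) times the tangent of its angle, negated on the left and bottom sides.

        float left   = -near * Mathf.Tan(leftDeg   * Mathf.Deg2Rad);
        float right  =  near * Mathf.Tan(rightDeg  * Mathf.Deg2Rad);
        float bottom = -near * Mathf.Tan(bottomDeg * Mathf.Deg2Rad);
        float top    =  near * Mathf.Tan(topDeg    * Mathf.Deg2Rad);
        camera.projectionMatrix = PerspectiveOffCenter(left, right, bottom, top, near, far);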

  • What would be a good filter to create 'magnetic deformers' from a depth map?

    - by sebf
    In my project, I am creating a system for deforming a highly detailed mesh (clothing) so that it 'fits' a convex mesh. To do this I use depth maps of the item and the 'hull' to determine at what point in world space the deviation occurs, and its extent. Simply transforming all occluded vertices to the depths defined by the 'hull' is fairly effective and has good performance, but it suffers from not preserving the features of the mesh and requires extensive culling to avoid false positives. I would like instead to generate from the depth-deviation map a set of simple 'deformers' which will 'push'* all vertices of the deformed mesh outwards (in world space). This way, all features of the mesh are preserved and there is no need for complex heuristics to cull inappropriate vertices. I am not sure how to go about generating this deformer set, however. I am imagining something like an algorithm that attempts to fit a spherical surface to each patch of contiguous deviations within a certain range, but I do not know where to start. Can anyone suggest a suitable filter or algorithm for generating deformers? Or, to put it another way, for 'compressing' a depth map? (*Push, because it's fitting to a convex 'bulgy' humanoid, so transforms are likely to be 'spherical' from the POV of the surface.)

  • Can I animate render targets or the swap chain?

    - by Eric F.
    I want to animate some synthetic video bits to fullscreen without tearing. Can I set up D3D 9/10/11 in exclusive mode and have it present a series of buffers that I'm writing to? I know how to copy system-memory bits into a texture and then draw that texture as a fullscreen quad, but it seems like overkill. Why should I use the triangle rasterizer when I want to do something so simple? All I want to do is set up a long (4-8 buffer) swap chain and set the bits of the back buffer that is about to be displayed. Or I want to allocate 4-8 render targets and, on each frame, copy the bits from system memory to the render target, then set it as the next thing to display. I've never seen or heard of anybody doing this, but it seems so dead simple!

  • How do I simulate the mouse and keyboard using C# or C++?

    - by Art
    I want to start developing for Kinect, but the hardest part is how to send keyboard and mouse input to an arbitrary application. In a previous question I got the advice to develop my own driver for these devices, but that will take a while. I imagine an application like a gate that can translate SendMessage calls into system-wide input, or a driver application with an API for sending these inputs. So I wonder, are there drivers or simulators that can interact with C# or C++? Small edit: SendMessage, PostMessage and keybd_event will only work on a Windows application with a common message loop, so I need a driver application that will work at a low (kernel) level.
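
    Before committing to a kernel driver, one user-mode option may be worth testing (a sketch on my part, not something from the linked question): SendInput, unlike SendMessage/PostMessage, injects events into the system-wide input stream, so the foreground application receives them as if they came from the real keyboard. Keyboard-only declarations shown; mouse input adds a MOUSEINPUT entry in the same union.

        using System;
        using System.Runtime.InteropServices;

        static class Injector
        {
            const uint INPUT_KEYBOARD = 1;
            const uint KEYEVENTF_KEYUP = 0x0002;

            [StructLayout(LayoutKind.Sequential)]
            struct MOUSEINPUT
            {
                public int dx, dy;
                public uint mouseData, dwFlags, time;
                public IntPtr dwExtraInfo;
            }

            [StructLayout(LayoutKind.Sequential)]
            struct KEYBDINPUT
            {
                public ushort wVk, wScan;
                public uint dwFlags, time;
                public IntPtr dwExtraInfo;
            }

            // Native INPUT is a DWORD tag followed by a union; overlaying the
            // members at offset 0 reproduces that union in C#.
            [StructLayout(LayoutKind.Explicit)]
            struct InputUnion
            {
                [FieldOffset(0)] public MOUSEINPUT mi;
                [FieldOffset(0)] public KEYBDINPUT ki;
            }

            [StructLayout(LayoutKind.Sequential)]
            struct INPUT
            {
                public uint type;
                public InputUnion U;
            }

            [DllImport("user32.dll", SetLastError = true)]
            static extern uint SendInput(uint nInputs, INPUT[] pInputs, int cbSize);

            // Taps a key system-wide: one key-down event followed by one key-up.
            public static void TapKey(ushort virtualKey)
            {
                var inputs = new INPUT[2];
                inputs[0].type = INPUT_KEYBOARD;
                inputs[0].U.ki.wVk = virtualKey;
                inputs[1].type = INPUT_KEYBOARD;
                inputs[1].U.ki.wVk = virtualKey;
                inputs[1].U.ki.dwFlags = KEYEVENTF_KEYUP;
                SendInput((uint)inputs.Length, inputs, Marshal.SizeOf(typeof(INPUT)));
            }
        }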

  • Bullet Physics - Casting a ray straight down from a rigid body (first person camera)

    - by Hydrocity
    I've implemented a first-person camera using Bullet: it's a rigid body with a capsule shape. I've only been using Bullet for a few days and physics engines are new to me. I use btRigidBody::setLinearVelocity() to move it, and it collides perfectly with the world. The only problem is that the Y-value moves freely, which I temporarily solved by setting the Y-value of the translation vector to zero before the body is moved. This works for all cases except when falling from a height. When the body drops off a tall object, you can still glide around, since the translation vector's Y-value is being set to zero, until you stop moving and fall to the ground (the velocity is only set when moving). So to solve this I would like to try casting a ray down from the body to determine the Y-value of the world, checking the difference between that value and the Y-value of the camera body, and disabling or slowing down movement if the difference is large enough. I'm a bit stuck on simply casting a ray and determining the Y-value of the world where it struck. I've implemented this callback:

        struct AllRayResultCallback : public btCollisionWorld::RayResultCallback
        {
            AllRayResultCallback(const btVector3& rayFromWorld, const btVector3& rayToWorld)
                : m_rayFromWorld(rayFromWorld),
                  m_rayToWorld(rayToWorld),
                  m_closestHitFraction(1.0) {}

            btVector3 m_rayFromWorld;
            btVector3 m_rayToWorld;
            btVector3 m_hitNormalWorld;
            btVector3 m_hitPointWorld;
            float m_closestHitFraction;

            virtual btScalar addSingleResult(btCollisionWorld::LocalRayResult& rayResult,
                                             bool normalInWorldSpace)
            {
                if (rayResult.m_hitFraction < m_closestHitFraction)
                    m_closestHitFraction = rayResult.m_hitFraction;

                m_collisionObject = rayResult.m_collisionObject;
                if (normalInWorldSpace) {
                    m_hitNormalWorld = rayResult.m_hitNormalLocal;
                } else {
                    m_hitNormalWorld = m_collisionObject->getWorldTransform().getBasis()
                                       * rayResult.m_hitNormalLocal;
                }
                m_hitPointWorld.setInterpolate3(m_rayFromWorld, m_rayToWorld, m_closestHitFraction);
                return 1.0f;
            }
        };

    And in the movement function, I have this code:

        btVector3 from(pos.x, pos.y + 1000, pos.z); // pos is the camera's rigid body position
        btVector3 to(pos.x, 0, pos.z);              // not sure if 0 is correct for Y

        AllRayResultCallback callback(from, to);
        Base::getSingletonPtr()->m_btWorld->rayTest(from, to, callback);

    So I have the callback.m_hitPointWorld vector, which seems to just show the position of the camera each frame. I've searched Google for examples of casting rays, as well as the Bullet documentation, and it's been hard to find even a single example. An example is really all I need. Or perhaps there is some method in Bullet to keep the rigid body on the ground? I'm using Ogre3D as a rendering engine, and casting a ray down is quite straightforward with that, but I want to keep all the ray casting within Bullet for simplicity. Could anyone point me in the right direction? Thanks.
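
    For a ground probe, Bullet already ships a ready-made closest-hit callback, which may be simpler than the custom one; a sketch (castDistance is an assumed tuning value):

        btVector3 from = body->getCenterOfMassPosition();
        btVector3 to = from - btVector3(0, castDistance, 0); // straight down

        btCollisionWorld::ClosestRayResultCallback cb(from, to);
        world->rayTest(from, to, cb);
        if (cb.hasHit()) {
            btScalar groundY = cb.m_hitPointWorld.y();
            // Compare from.y() - groundY against the capsule's half-height to
            // decide whether the camera body is airborne.
        }

    Two things worth checking in the callback as posted: casting from pos.y + 1000 means the ray can hit geometry above the camera first (including the camera body itself), and the redeclared m_closestHitFraction shadows the field that btCollisionWorld::RayResultCallback already provides.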

  • Does anybody know of any resources to achieve this particular "2.5D" isometric engine effect?

    - by Craig Whitley
    I understand this is a little vague, but I was hoping somebody might be able to describe a high-level workflow, or link to a resource, for achieving a specific isometric "2.5D" tile-engine effect. I fell in love with this engine: http://www.youtube.com/watch?v=-Q6ISVaM5Ww. Especially with the lighting and the shaders! He has a brief description of how he achieved what he did, but I could really use a brief flow of where you would start, what you would read up on and learn, and the logical order in which to implement these things. A few specific questions:

    1) Is there a heightmap on the ground texture that lets the light reflect more brightly on certain parts of it?
    2) "..using a special material which calculates the world-space normal vectors of every pixel.." - is this some "magic" special material he has created himself, or can you hazard a guess at what he means?
    3) With relation to the above quote: what does he mean by 'world-space normal vectors of every pixel'?
    4) I'm guessing I'm being a little optimistic when I ask if there's any 'all-in-one' tutorial out there? :)

  • Well-tested libraries for player ratings?

    - by Lucky
    It's common in games to implement some sort of numerical rating system; the Elo system is usually used in chess. I could implement this system naively using Wikipedia's description, but I suspect that this would open up a whole box of problems that have already been solved: rating inflation, etc. For instance, the Elo system has a K constant that's 'fudged' according to rating, duration, pairings, statistics, ... What are some libraries (I'm looking at Python, but anything is okay) that implement rating systems? It also doesn't have to be Elo.
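
    For reference, the naive update being avoided is only a few lines; a sketch in Python of plain Elo with a fixed K, with none of the inflation or K-fudging handling, which is exactly the part a library does better:

        def elo_update(rating_a, rating_b, score_a, k=32):
            """score_a is 1.0 if A won, 0.5 for a draw, 0.0 if A lost."""
            expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
            expected_b = 1.0 - expected_a
            rating_a += k * (score_a - expected_a)
            rating_b += k * ((1.0 - score_a) - expected_b)
            return rating_a, rating_b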
