Search Results

Search found 26774 results on 1071 pages for 'distributed development'.

Page 415/1071

  • Render an image with separate layers for shadows/reflections in 3D Studio Max?

    - by Bernd Plontsch
    I have a scene with a simple object standing on a ground in the center. Caused by lights and the ground material, there is some shadow and reflection on the ground surrounding the object. How can I render an image containing three separate layers: the object, the ground, and the reflection/shadow on the ground? Which format should I use for this (it should include all three layers, and I should be able to enable/disable them in Photoshop)? How do I define or prepare those layers for rendering as image layers?

    Read the article

  • Need help dragging a sprite with animation (Cocos2d)

    - by Zishan
    I want to play an animation when someone drags a sprite from its default position to another selected position. If he drags halfway to the selected position, then the animation should play halfway. For example, I have 15 frames of an animation and a projectile arm. The projectile arm can rotate a maximum of 30°; if someone rotates the arm 2°, the animation sprite should show the 2nd frame, if rotated 12° then the 6th frame... and so on. Also, when he releases the arm, the arm should return to its default position, and the animation frames should also reverse back to the first frame. I am new to cocos2d. I know how to make an animation and how to drag a sprite, but I have no idea how to do this. Can anyone please give me any idea or any tutorial on how to do this? It would be very helpful for me. Thank you in advance.
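
    A minimal sketch of the frame-mapping step (in Java for illustration; the frame count and rotation range are the question's numbers, everything else is hypothetical): clamp the drag-driven rotation into its range, then scale it linearly onto the frame index.

        // Map the arm's current rotation onto an animation frame index.
        // Assumes 15 frames spread evenly over a 0-30 degree range.
        public class FrameMapper {
            public static int frameForRotation(float degrees, int frameCount, float maxDegrees) {
                float t = Math.max(0f, Math.min(degrees / maxDegrees, 1f)); // clamp to [0, 1]
                return Math.round(t * (frameCount - 1));                    // 0 deg -> frame 0, 30 deg -> last frame
            }

            public static void main(String[] args) {
                System.out.println(frameForRotation(12f, 15, 30f)); // prints 6, matching the example above
            }
        }

    Running the mapping on every touch-move event, and again while the arm springs back, gives the scrubbing behaviour described above in both directions.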

    Read the article

  • Odd Android touch event problem

    - by user22241
    Overview: When testing my game I came across a bizarre problem with my touch controls. Note this isn't related to multi-touch, as I completely removed my ACTION_POINTER_UP and ACTION_POINTER_DOWN handling along with my ACTION_MOVE code. So I'm simply working with ACTION_UP and ACTION_DOWN now and still get the problem.

    The problem: I have a left and right button on the left of the screen and a jump button on the right. Everything works as it should, but if I touch a large area of my hand (the fleshy part at the base of the thumb, for instance) onto the screen, then release it and then press one of my arrows, the sprite moves in that direction for a few seconds, and then ACTION_UP is mysteriously triggered. The sprite stops, and then if I release my finger and re-apply it to an arrow, the same thing happens. This goes on and on and eventually (randomly??) stops and everything works OK again.

    Test device & OS: Google Nexus 10 tablet running Jelly Bean 4.2.2

    Code:

        // Action upon which to switch
        actionMask = event.getActionMasked();
        // Pointer index of the currently touching pointer
        pointerIndex = event.getActionIndex();
        // Number of pointers (for multi-touch)
        pointerCount = event.getPointerCount();
        // ID of the pointer currently being processed (multi-touch)
        pointerID = event.getPointerId(pointerIndex);

        switch (actionMask) {
            // Primary pointer down
            case MotionEvent.ACTION_DOWN: {
                // If pressing left button then set moving left
                if (isLeftPressed(event.getX(), event.getY())) {
                    renderer.setSpriteLeft();
                }
                // If pressing right button then set moving right
                else if (isRightPressed(event.getX(), event.getY())) {
                    renderer.setSpriteRight();
                }
                // If pressing jump button then set sprite jumping
                else if (isJumpPressed(event.getX(), event.getY())) {
                    renderer.setSpriteState('j', true);
                }
                break;
            } // End of case

            // Primary pointer up
            case MotionEvent.ACTION_UP: {
                // When finger leaves the screen, stop sprite's horizontal movement
                renderer.setSpriteStopped();
                break;
            }
        }
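
    Not a diagnosis of the bug above, but a sketch of the usual defence: remember which pointer ID pressed a button, and only react to the UP of that same pointer, so a stray palm contact releasing cannot stop the sprite. This assumes a View subclass; isLeftPressed and renderer are the question's own helpers, and the right/jump buttons would follow the same pattern.

        // Track which pointer "owns" the movement button.
        private int movePointerId = MotionEvent.INVALID_POINTER_ID;

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            switch (event.getActionMasked()) {
                case MotionEvent.ACTION_DOWN:
                case MotionEvent.ACTION_POINTER_DOWN: {
                    int index = event.getActionIndex();
                    if (isLeftPressed(event.getX(index), event.getY(index))) {
                        movePointerId = event.getPointerId(index); // remember who owns the button
                        renderer.setSpriteLeft();
                    }
                    break;
                }
                case MotionEvent.ACTION_UP:
                case MotionEvent.ACTION_POINTER_UP: {
                    // Only stop if the pointer that owned the button was lifted.
                    if (event.getPointerId(event.getActionIndex()) == movePointerId) {
                        movePointerId = MotionEvent.INVALID_POINTER_ID;
                        renderer.setSpriteStopped();
                    }
                    break;
                }
            }
            return true;
        }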

    Read the article

  • TileMapRenderer in libGDX not drawing anything

    - by Benjamin Dengler
    So I followed the tutorial on the libGDX wiki to draw Tiled maps, but it doesn't seem to render anything. Here's how I set up my OrthographicCamera and load the map:

        camera = new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        map = TiledLoader.createMap(Gdx.files.internal("maps/test.tmx"));
        atlas = new TileAtlas(map, Gdx.files.internal("maps"));
        tileMapRenderer = new TileMapRenderer(map, atlas, 8, 8);

    And here is my render function:

        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        camera.update();
        tileMapRenderer.render(camera);

    Also, I did pack the tile map using the TiledMapPacker. I'm completely stumped... am I missing anything obvious here? EDIT: While debugging I noticed that the TileAtlas seems to be empty, which I guess shouldn't be the case, but I have no idea why it's empty.

    Read the article

  • Rendering Unity across multiple monitors

    - by N0xus
    At the moment I am trying to get Unity to run across 2 monitors. I've done some research and know that this is, strictly, possible. There is a workaround where you basically have to fluff your window size in order to get Unity to render across both monitors. What I've done is create a new custom screen resolution that takes in the width of both of my monitors (3840 x 1080). However, when I go to run my Unity game exe, that size isn't available; my custom size should be at the very bottom of the resolution list, but isn't. Is there something I haven't done, or missed, that will get Unity to take in my custom screen size when it comes to running my game through its exe? Oddly enough, inside the Unity editor my custom screen size is picked up, and I can have the game window set to it. Is there something that I have forgotten to do when I build and run the game from the file menu? Has someone ever beaten this issue before?

    Read the article

  • What is the best way to check for overlap between the player and static, non-collidable items in the Bullet physics engine?

    - by tigrou
    I'd like to add non-collidable objects (e.g. power-ups, items, ...) to a game world using the Bullet physics engine, and to know when there is a collision between the player and them. Some info: there are a lot of items (1000), all are box shapes, and they don't overlap. Here is what I have tried:

        btDbvt* bvtItems = new btDbvt(); // btDbvt is a hierarchical AABB tree, used by Bullet
        foreach(var item ...)
        {
            btDbvtVolume volume = ... // compute item AABB
            bvtItems->insert(volume, (void*)someExtraData);
        }

    Then, to find collisions between items and the player:

        playerRigidBody->getAabb(min, max);
        btDbvtVolume playervolume = ... // compute player AABB
        bvtItems->collideTV(bvtItems->m_root, playervolume, *someCollisionHandler);

    This works fairly well (and it's very fast); however, there is a problem: it only checks item AABBs against the player AABB. That loss of precision is acceptable for the items but not for the player, which is not a box. It would actually need another check to make sure the player really collides with the item, but I don't know how to do this in Bullet. It would have been nice to have a function like this:

        playerRigidBody->checkCollisionWithAABB();

    After trying that, I discovered that btGhostObject exists and seems to have been made for this. I changed my code like so:

        foreach(var item...)
        {
            btCollisionObject* ghostObject = new btGhostObject();
            ghostObject->setCollisionShape(boxShape);
            ghostObject->setCollisionFlags(ghostObject->getCollisionFlags() | btCollisionObject::CF_NO_CONTACT_RESPONSE);
            startTransform.setOrigin(...); // item position
            ghostObject->setWorldTransform(startTransform);
            dynamicsWorld->addCollisionObject(ghostObject, btBroadphaseProxy::SensorTrigger, btBroadphaseProxy::CharacterFilter);
        }

    It also works OK, but there is a huge FPS drop (almost ten times slower), which is not acceptable. Maybe there is something missing (forgot to set a flag) and Bullet is doing extra work for nothing, or maybe all those ghost objects are polluting the broad phase and btGhostObject is not the right thing for this. Any help would be appreciated.
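
    For illustration only (generic Java, not Bullet's API): the usual fix for the precision problem is a second, exact narrow-phase test run only on the candidates the AABB tree reported, here with the player approximated by a sphere.

        // Exact sphere-vs-AABB test: find the closest point on the box to the
        // sphere centre and compare the squared distance to the radius.
        public class NarrowPhase {
            public static boolean sphereOverlapsBox(float cx, float cy, float cz, float r,
                                                    float minX, float minY, float minZ,
                                                    float maxX, float maxY, float maxZ) {
                float px = Math.max(minX, Math.min(cx, maxX)); // clamp centre into the box
                float py = Math.max(minY, Math.min(cy, maxY));
                float pz = Math.max(minZ, Math.min(cz, maxZ));
                float dx = cx - px, dy = cy - py, dz = cz - pz;
                return dx * dx + dy * dy + dz * dz <= r * r;
            }
        }

    Run this only on the handful of items the broad-phase tree returned; the cost per frame then stays proportional to the candidate count, not the 1000 items.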

    Read the article

  • GLSL Shader Texture Performance

    - by Austin
    I currently have a project that renders OpenGL video using a vertex and fragment shader. The shaders work fine as-is, but in trying to add in texturing, I am running into performance issues and can't figure out why. Before adding texturing, my program ran just fine and loaded my CPU between 0% and 4%. When adding texturing (specifically textures AND color -- noted by the comment below), my CPU is 100% loaded. The only code I have added is the relevant texturing code to the shader, and the glBindTexture() calls to the rendering code. Here are my shaders and the relevant rendering code.

    Vertex shader:

        #version 150

        uniform mat4 mvMatrix;
        uniform mat4 mvpMatrix;
        uniform mat3 normalMatrix;
        uniform vec4 lightPosition;
        uniform float diffuseValue;

        layout(location = 0) in vec3 vertex;
        layout(location = 1) in vec3 color;
        layout(location = 2) in vec3 normal;
        layout(location = 3) in vec2 texCoord;

        smooth out VertData {
            vec3 color;
            vec3 normal;
            vec3 toLight;
            float diffuseValue;
            vec2 texCoord;
        } VertOut;

        void main(void)
        {
            gl_Position = mvpMatrix * vec4(vertex, 1.0);
            VertOut.normal = normalize(normalMatrix * normal);
            VertOut.toLight = normalize(vec3(mvMatrix * lightPosition - gl_Position));
            VertOut.color = color;
            VertOut.diffuseValue = diffuseValue;
            VertOut.texCoord = texCoord;
        }

    Fragment shader:

        #version 150

        smooth in VertData {
            vec3 color;
            vec3 normal;
            vec3 toLight;
            float diffuseValue;
            vec2 texCoord;
        } VertIn;

        uniform sampler2D tex;

        layout(location = 0) out vec3 colorOut;

        void main(void)
        {
            float diffuseComp = max(dot(normalize(VertIn.normal), normalize(VertIn.toLight)), 0.0);
            vec4 color = texture2D(tex, VertIn.texCoord);
            colorOut = color.rgb * diffuseComp * VertIn.diffuseValue + color.rgb * (1 - VertIn.diffuseValue);
            // FOLLOWING LINE CAUSES PERFORMANCE ISSUES
            colorOut *= VertIn.color;
        }

    Relevant rendering code:

        // 3 textures have been successfully pre-loaded, and can be used
        // texture[0] is a 1x1 white texture to effectively turn off texturing
        glUseProgram(program);

        // Draw squares
        glBindTexture(GL_TEXTURE_2D, texture[1]);
        // Set attributes, uniforms, etc
        glDrawArrays(GL_QUADS, 0, 6*4);

        // Draw triangles
        glBindTexture(GL_TEXTURE_2D, texture[0]);
        // Set attributes, uniforms, etc
        glDrawArrays(GL_TRIANGLES, 0, 3*4);

        // Draw reference planes
        glBindTexture(GL_TEXTURE_2D, texture[0]);
        // Set attributes, uniforms, etc
        glDrawArrays(GL_LINES, 0, 4*81*2);

        // Draw terrain
        glBindTexture(GL_TEXTURE_2D, texture[2]);
        // Set attributes, uniforms, etc
        glDrawArrays(GL_TRIANGLES, 0, 501*501*6);

        // Release
        glBindTexture(GL_TEXTURE_2D, 0);
        glUseProgram(0);

    Any help is greatly appreciated!

    Read the article

  • UDP code for a client-server architecture

    - by GameBuilder
    Hi, I have developed a game on Android. Now I want to play it over WiFi or 3G. I have game packets which I want to send from client 1 (mobile) to the server, and then on to client 2 (mobile). I don't know how to write the Java code to send the play packets continuously to the server and receive the play packets continuously from the server on the clients. I guess I have to use two threads, one for sending and one for receiving. Can someone help me with the code, or the procedure for writing it? Thanks in advance.
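
    A minimal sketch of the two-thread client (the host name, port number, and payload bytes are placeholders): one thread blocks on receive while the main thread sends this player's packets.

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;

        public class UdpClient {
            public static void main(String[] args) throws Exception {
                final DatagramSocket socket = new DatagramSocket();
                final InetAddress server = InetAddress.getByName("example.com"); // placeholder host

                // Receiver thread: blocks until a packet arrives from the server.
                new Thread(() -> {
                    byte[] buf = new byte[512];
                    try {
                        while (true) {
                            DatagramPacket packet = new DatagramPacket(buf, buf.length);
                            socket.receive(packet);
                            // ... decode the other player's play packet here ...
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }).start();

                // Sender: serialise and send this player's state each game tick.
                byte[] payload = "playPacket".getBytes();
                DatagramPacket out = new DatagramPacket(payload, payload.length, server, 9999);
                socket.send(out);
            }
        }

    The server side is the mirror image: one socket bound to a known port, a receive loop, and a send back to whichever client addresses it has seen.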

    Read the article

  • 2D basic map system

    - by Cyril
    I'm currently coding a 2D game in Java, and I would like some clues on how to build this system: the screen moves over a larger map; for instance, the screen represents 800*600 units on a 100K*100K map. When you command your unit to go to another position, the screen moves over this map, AND when you move your mouse to one side of the screen or another, you move the screen over the map. Not sure that I'm clear, but you can find this system in most RTS games (Warcraft/StarCraft, for example). I'm currently using Slick 2D. Any idea? Thanks.
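
    A minimal camera sketch (all names are placeholders): the camera is just an (x, y) offset into the larger map, clamped to the map bounds; everything is drawn relative to it.

        public class Camera {
            private float x, y;               // top-left corner of the view, in map units
            private final float viewW, viewH; // e.g. 800 x 600
            private final float mapW, mapH;   // e.g. 100000 x 100000

            public Camera(float viewW, float viewH, float mapW, float mapH) {
                this.viewW = viewW; this.viewH = viewH;
                this.mapW = mapW; this.mapH = mapH;
            }

            // Called when the mouse touches a screen edge, or to follow a unit.
            public void move(float dx, float dy) {
                x = Math.max(0, Math.min(x + dx, mapW - viewW)); // clamp to map bounds
                y = Math.max(0, Math.min(y + dy, mapH - viewH));
            }

            // Convert a map coordinate to a screen coordinate before drawing.
            public float screenX(float mapX) { return mapX - x; }
            public float screenY(float mapY) { return mapY - y; }
        }

    If Slick 2D's Graphics supports translation (g.translate(-camera.x, -camera.y), if memory serves), the same effect can be had without converting every coordinate by hand.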

    Read the article

  • Is it possible to construct a cube with fewer than 24 vertices?

    - by Telanor
    I have a cube-based world like Minecraft, and I'm wondering if there's a way to construct a cube with fewer than 24 vertices so I can reduce memory usage. It doesn't seem possible to me for 2 reasons: the normals wouldn't come out right, and per-face textures wouldn't work. Is this the case, or am I wrong? Maybe there's some fancy new DX11 tech that can help? Edit: Just to clarify, I have 2 requirements: I need surface normals for each cube face in order to do proper lighting, and I need a way to address a different index in a texture array for each cube face.

    Read the article

  • How is an HTML5 game sold?

    - by Bane
    (I know this site doesn't give legal advice, but what I'm dealing with here isn't anything serious at all. Also, I apologize to JP for being annoying over this.) Someone found a game I made on the Internet and expressed interest in buying it. We agreed upon a price, and, in the meantime, I removed the game's source from the Internet, just to be sure. Now, I'm wondering what to do next. These are the terms: he gets the game's source code, and only that, without the graphics (which weren't made by me); he gets the right to develop and sell the game; I get to keep ownership of the original game, meaning that I can use it in my portfolio when applying for jobs, for example; and the game gets to stay on its original site. But I am not sure how I can legally realize this. Which license can I use?

    Read the article

  • XNA RTS A* pathfinding issues

    - by Slayter
    I'm starting to develop an RTS game using the XNA framework in C# and am still in the very early prototyping stage. I'm working on the basics. I've got unit selection down and am currently working on moving multiple units. I've implemented an A* pathfinding algorithm which works fine for moving a single unit. However, when moving multiple units they stack on top of each other. I tried fixing this with a variation of the boids flocking algorithm, but this has caused units to sometimes freeze and get stuck, trying to move but going nowhere. I'll post the related methods for moving the units below, but I'll only post a link to the pathfinding class, because it's really long and I don't want to clutter up the page. These parts of the code are in the update method of the main controlling class:

        if (selectedUnits.Count > 0)
        {
            int indexOfLeader = 0;
            for (int i = 0; i < selectedUnits.Count; i++)
            {
                if (i == 0)
                {
                    indexOfLeader = 0;
                }
                else
                {
                    if (Vector2.Distance(selectedUnits[i].position, destination) < Vector2.Distance(selectedUnits[indexOfLeader].position, destination))
                        indexOfLeader = i;
                }
                selectedUnits[i].leader = false;
            }
            selectedUnits[indexOfLeader].leader = true;

            foreach (Unit unit in selectedUnits)
                unit.FindPath(destination);
        }

        foreach (Unit unit in units)
        {
            unit.Update(gameTime, selectedUnits);
        }

    These three methods control movement in the Unit class:

        public void FindPath(Vector2 destination)
        {
            if (path != null)
                path.Clear();

            Point startPoint = new Point((int)position.X / 32, (int)position.Y / 32);
            Point endPoint = new Point((int)destination.X / 32, (int)destination.Y / 32);
            path = pathfinder.FindPath(startPoint, endPoint);
            pointCounter = 0;

            if (path != null)
                nextPoint = path[pointCounter];

            dX = 0.0f;
            dY = 0.0f;
            stop = false;
        }

        private void Move(List<Unit> units)
        {
            if (nextPoint == position && !stop)
            {
                pointCounter++;
                if (pointCounter <= path.Count - 1)
                {
                    nextPoint = path[pointCounter];
                    if (nextPoint == position)
                        stop = true;
                }
                else if (pointCounter >= path.Count)
                {
                    path.Clear();
                    pointCounter = 0;
                    stop = true;
                }
            }
            else
            {
                if (!stop)
                {
                    map.occupiedPoints.Remove(this);
                    Flock(units);

                    // Move in X ********* TOOK OUT SPEED **********
                    if ((int)nextPoint.X > (int)position.X)
                    {
                        position.X += dX;
                    }
                    else if ((int)nextPoint.X < (int)position.X)
                    {
                        position.X -= dX;
                    }

                    // Move in Y
                    if ((int)nextPoint.Y > (int)position.Y)
                    {
                        position.Y += dY;
                    }
                    else if ((int)nextPoint.Y < (int)position.Y)
                    {
                        position.Y -= dY;
                    }

                    if (position == nextPoint && pointCounter >= path.Count - 1)
                        stop = true;

                    map.occupiedPoints.Add(this, position);
                }

                if (stop)
                {
                    path.Clear();
                    pointCounter = 0;
                }
            }
        }

        private void Flock(List<Unit> units)
        {
            float distanceToNextPoint = Vector2.Distance(position, nextPoint);

            foreach (Unit unit in units)
            {
                float distance = Vector2.Distance(position, unit.position);
                if (unit != this)
                {
                    if (distance < space && !leader && (nextPoint != position))
                    {
                        // create space
                        dX += (position.X - unit.position.X) * 0.1f;
                        dY += (position.Y - unit.position.Y) * 0.1f;

                        if (dX > .05f)
                            nextPoint.X = nextPoint.X - dX;
                        else if (dX < -.05f)
                            nextPoint.X = nextPoint.X + dX;

                        if (dY > .05f)
                            nextPoint.Y = nextPoint.Y - dY;
                        else if (dY < -.05f)
                            nextPoint.Y = nextPoint.Y + dY;

                        if ((dX < .05f && dX > -.05f) && (dY < .05f && dY > -.05f))
                            stop = true;

                        path[pointCounter] = nextPoint;
                        Console.WriteLine("Make Space: " + dX + ", " + dY);
                    }
                    else if (nextPoint != position && !stop)
                    {
                        dX = speed;
                        dY = speed;
                        Console.WriteLine(dX + ", " + dY);
                    }
                }
            }
        }

    And here's the link to the pathfinder: https://docs.google.com/open?id=0B_Cqt6txUDkddU40QXBMeTR1djA

    I hope this post wasn't too long. Also, please excuse the messiness of the code. As I said before, this is early prototyping. Any help would be appreciated. Thanks!
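
    One common alternative to flocking, sketched in Java for illustration (none of these names come from the question's code, and map-boundary checks are omitted): give each selected unit its own free goal tile near the clicked destination, found by a breadth-first ring search, so no two paths share an endpoint and units cannot stack.

        import java.awt.Point;
        import java.util.*;

        public class GoalAssigner {
            // Returns one distinct, unblocked goal tile per unit, nearest first.
            public static List<Point> assignGoals(Point clicked, int unitCount, Set<Point> blocked) {
                List<Point> goals = new ArrayList<>();
                Deque<Point> frontier = new ArrayDeque<>();
                Set<Point> seen = new HashSet<>();
                frontier.add(clicked);
                seen.add(clicked);
                int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
                while (!frontier.isEmpty() && goals.size() < unitCount) {
                    Point p = frontier.poll();
                    if (!blocked.contains(p)) {
                        goals.add(p);              // claim this tile for one unit
                    }
                    for (int[] d : dirs) {         // expand outward, ring by ring
                        Point n = new Point(p.x + d[0], p.y + d[1]);
                        if (seen.add(n)) {
                            frontier.add(n);
                        }
                    }
                }
                return goals;                      // then run A* once per unit, to its own goal
            }
        }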

    Read the article

  • Aw, Snap! in Google Chrome [on hold]

    - by D. S. Schneider
    Just wondering if anyone else is experiencing the "Aw, Snap!" bug in Google Chrome. I'm developing a brand new engine which occasionally triggers this bug, and as far as I know there's nothing one can do to find out what actually triggered the issue. I've also tested my engine with Firefox, which runs just fine. Anyway, I just wanted to know if someone else is facing this while developing games for Google Chrome and has a clue about what can be done to avoid it. I'm using plain JavaScript and the audio and canvas elements from HTML5. Thanks!

    Read the article

  • Who owns the intellectual property to Fragile Allegiance?

    - by analytik
    Fragile Allegiance was developed by Gremlin Interactive, which was later acquired by Infogrames (Atari). I couldn't find any details of the acquisition though. The only interesting thing I have found online is that the owner of the registered trademark Fragile Allegiance is Interplay, who published Fragile Allegiance. However, the only copyright note I've found was in one installation .ini file, claiming it for Gremlin. What are the common business practices when it comes to old, unused IPs? What do publishers/developers actually need to legally claim an intellectual property? Does anyone have an experience with contacting big publishers with copyright/IP inquiries? Related legal question.

    Read the article

  • Which data structure would you use for a witness list?

    - by mateen
    I'm making a game where the plot is a bank robbery. Lots of people witnessed the robbery. The game will load a list of suspects, while the players (witnesses) will have to identify the suspects of this robbery. The game should load the list of suspects and let a player identify one as quickly as possible. An admin can add/remove suspects in the lists, and two or more lists of suspects can also be merged into one (to show to the player). The question is: which data structure would be suitable for these lists?
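
    A minimal sketch of one natural fit (all names hypothetical, with suspects reduced to string IDs): a hash set gives O(1) add and remove, and merging two lists is just a set union, which also collapses duplicates.

        import java.util.HashSet;
        import java.util.Set;

        public class SuspectList {
            private final Set<String> suspects = new HashSet<>();

            public void add(String suspect)    { suspects.add(suspect); }    // admin add
            public void remove(String suspect) { suspects.remove(suspect); } // admin remove

            // Merge another witness's list into this one (duplicates collapse).
            public void merge(SuspectList other) { suspects.addAll(other.suspects); }

            public boolean contains(String suspect) { return suspects.contains(suspect); }
        }

    If the player-facing list must be shown in a stable order, a LinkedHashSet (insertion order) or TreeSet (sorted) swaps in with the same operations.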

    Read the article

  • What's a good data structure solution for a scene manager in XNA?

    - by tunnuz
    Hello, I'm playing with XNA for a game project of mine; I had previous exposure to OpenGL and worked a bit with Ogre, so I'm trying to get the same concepts working in XNA. Specifically, I'm trying to add to XNA a scene manager to handle hierarchical transforms, frustum (maybe even occlusion) culling, and transparent object sorting. My plan was to build a tree scene manager to handle hierarchical transforms and lighting, and then use an octree for frustum culling and object sorting. The problem is how to do geometry sorting to support transparency correctly. I know that sorting is very expensive if done on a per-polygon basis, so expensive that it is not even attempted by Ogre. But still, images from Ogre look right. Any ideas on how to do it, and on which data structures to use and their capabilities? I know people are using: octrees; kd-trees (someone on the GameDev forum said that these are far better than octrees); BSP trees (which should handle per-polygon ordering but are very expensive); and BVHs (but just for frustum and occlusion culling). Thank you, Tunnuz
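
    For reference, a sketch of the per-object (not per-polygon) sorting that most engines settle for, in Java for illustration with hypothetical types: draw opaque geometry first, then the transparent objects back-to-front by distance from the camera.

        import java.util.Comparator;
        import java.util.List;

        class TransparencySort {
            // cam{X,Y,Z} is the camera position; each object exposes its centre.
            static void sortBackToFront(List<SceneObject> transparent,
                                        float camX, float camY, float camZ) {
                transparent.sort(Comparator.comparingDouble((SceneObject o) -> {
                    float dx = o.x - camX, dy = o.y - camY, dz = o.z - camZ;
                    return dx * dx + dy * dy + dz * dz; // squared distance is enough for ordering
                }).reversed());                         // farthest first
            }
        }

        class SceneObject { float x, y, z; }

    Opaque geometry can be drawn in any order with depth writes on; the sorted transparent list then renders last with depth writes off, which is why per-object sorting usually "looks right" despite being approximate.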

    Read the article

  • Light following me around the room. Something is wrong with my shader!

    - by Robinson
    I'm trying to do a spot (Blinn) light with falloff and attenuation. It seems to be working OK, except I have a bit of a space problem. That is, whenever I move the camera, the light moves to maintain the same relative position rather than changing with the camera. This results in the light moving around, i.e. not always falling on the same surfaces. It's as if there's a flashlight attached to the camera. I'm transforming the lights beforehand into view space, so Light_Position and Light_Direction are already in eye space (I hope!). I made a little movie of what it looks like: my camera rotating around a point inside a box, with the light fixed in the centre, up, and its "look at" point in a fixed position in front of it. As the camera rotates around the origin (always looking at the centre), the lighting follows it around; the box itself is not rotating. To start, some code. This is how I'm transforming the light into view space (it gets passed into the shader already in view space):

        // Compute eye-space light position.
        Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition;
        MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition);

        // Compute eye-space light direction vector.
        Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition);
        MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection);
        MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection);

    Can anyone give me a clue as to what I'm doing wrong here? I think the light should remain looking at a fixed point on the box, regardless of the camera orientation. Here are the vertex and pixel shaders:

        ///////////////////////////////////////////////////
        // Vertex Shader
        ///////////////////////////////////////////////////

        #version 420

        // Camera.
        layout (std140) uniform Camera
        {
            mat4 Camera_View;
            mat4 Camera_ViewInverseTranspose;
            mat4 Camera_Projection;
        };

        // Matrices per model.
        layout (std140) uniform Model
        {
            mat4 Model_World;
            mat4 Model_WorldView;
            mat4 Model_WorldViewInverseTranspose;
            mat4 Model_WorldViewProjection;
        };

        // Spotlight.
        layout (std140) uniform OmniLight
        {
            float Light_Intensity;
            vec3 Light_Position;
            vec3 Light_Direction;
            vec4 Light_Ambient_Colour;
            vec4 Light_Diffuse_Colour;
            vec4 Light_Specular_Colour;
            float Light_Attenuation_Min;
            float Light_Attenuation_Max;
            float Light_Cone_Min;
            float Light_Cone_Max;
        };

        // Streams (per vertex).
        layout(location = 0) in vec3 attrib_Position;
        layout(location = 1) in vec3 attrib_Normal;
        layout(location = 2) in vec3 attrib_Tangent;
        layout(location = 3) in vec3 attrib_BiNormal;
        layout(location = 4) in vec2 attrib_Texture;

        // Output streams (per vertex).
        out vec3 attrib_Fragment_Normal;
        out vec4 attrib_Fragment_Position;
        out vec2 attrib_Fragment_Texture;
        out vec3 attrib_Fragment_Light;
        out vec3 attrib_Fragment_Eye;

        void main()
        {
            // Transform normal into eye space.
            attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;

            // Transform vertex into eye space (world * view * vertex = eye).
            vec4 position = Model_WorldView * vec4(attrib_Position, 1.0);

            // Compute vector from eye-space vertex to light (light is in eye space already).
            attrib_Fragment_Light = Light_Position - position.xyz;

            // Compute vector from the vertex to the eye (which is now at the origin).
            attrib_Fragment_Eye = -position.xyz;

            // Output texture coord.
            attrib_Fragment_Texture = attrib_Texture;

            // Compute vertex position by applying camera projection.
            gl_Position = Camera_Projection * position;
        }

    And the pixel shader:

        ///////////////////////////////////////////////////
        // Pixel Shader
        ///////////////////////////////////////////////////

        #version 420

        // Samplers.
        uniform sampler2D Map_Diffuse;

        // Material.
        layout (std140) uniform Material
        {
            vec4 Material_Ambient_Colour;
            vec4 Material_Diffuse_Colour;
            vec4 Material_Specular_Colour;
            vec4 Material_Emissive_Colour;
            float Material_Shininess;
            float Material_Strength;
        };

        // Spotlight.
        layout (std140) uniform OmniLight
        {
            float Light_Intensity;
            vec3 Light_Position;
            vec3 Light_Direction;
            vec4 Light_Ambient_Colour;
            vec4 Light_Diffuse_Colour;
            vec4 Light_Specular_Colour;
            float Light_Attenuation_Min;
            float Light_Attenuation_Max;
            float Light_Cone_Min;
            float Light_Cone_Max;
        };

        // Input streams (per vertex).
        in vec3 attrib_Fragment_Normal;
        in vec3 attrib_Fragment_Position;
        in vec2 attrib_Fragment_Texture;
        in vec3 attrib_Fragment_Light;
        in vec3 attrib_Fragment_Eye;

        // Result.
        out vec4 Out_Colour;

        void main(void)
        {
            // Compute N dot L.
            vec3 N = normalize(attrib_Fragment_Normal);
            vec3 L = normalize(attrib_Fragment_Light);
            vec3 E = normalize(attrib_Fragment_Eye);
            vec3 H = normalize(L + E);
            float NdotL = clamp(dot(L, N), 0.0, 1.0);
            float NdotH = clamp(dot(N, H), 0.0, 1.0);

            // Compute ambient term.
            vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour;

            // Diffuse.
            vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture) * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;

            // Specular.
            float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength;
            vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity;

            // Light attenuation (so we don't have to use 1 - x, we step between Max and Min).
            float d = length(-attrib_Fragment_Light);
            float attenuation = smoothstep(Light_Attenuation_Max, Light_Attenuation_Min, d);

            // Adjust attenuation based on light cone.
            float LdotS = dot(-L, Light_Direction), CosI = Light_Cone_Min - Light_Cone_Max;
            attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0);

            // Final colour.
            Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation;
        }
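
    For reference, a standard identity worth checking against code like the above (not a diagnosis of this particular bug): points and directions transform differently into eye space,

        p_eye = V [p_x, p_y, p_z, 1]^T        d_eye = V [d_x, d_y, d_z, 0]^T

    or in LaTeX form, $p_{eye} = V \begin{pmatrix} p \\ 1 \end{pmatrix}$ while $d_{eye} = V \begin{pmatrix} d \\ 0 \end{pmatrix}$. For a rigid view matrix (rotation plus translation), the inverse transpose of the upper-left 3x3 equals that 3x3 itself, so transforming a direction by the view rotation and by the "inverse transpose" should agree; any discrepancy between the two in code usually points at a stale or mismatched matrix.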

    Read the article

  • Pathfinding in multi goal, multi agent environment

    - by Rohan Agrawal
    I have an environment in which I have multiple agents (a), multiple goals (g), and obstacles (o):

        . . . a o . . .
        . . . . o . g .
        . a . . . . . .
        . . . . o . . .
        . o o o o . g .
        . o . . . . . .
        . o . . . . o .
        . . . o o o o a

    What would be an appropriate algorithm for pathfinding in this environment? The only thing I can think of right now is to run a separate version of A* for each goal, but I don't think that's very efficient.
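
    A standard alternative to per-goal A*, sketched in Java for illustration (the names are hypothetical): one multi-source BFS seeded with every goal gives, in a single pass over the grid, each cell's distance to its nearest goal; every agent then just steps to the neighbour with the smallest value.

        import java.util.ArrayDeque;
        import java.util.Deque;

        class FlowField {
            // goals are {row, col} pairs; obstacle[r][c] marks blocked cells.
            static int[][] distances(boolean[][] obstacle, int[][] goals) {
                int h = obstacle.length, w = obstacle[0].length;
                int[][] dist = new int[h][w];
                for (int[] row : dist) java.util.Arrays.fill(row, Integer.MAX_VALUE);
                Deque<int[]> queue = new ArrayDeque<>();
                for (int[] g : goals) {            // seed the BFS with all goals at once
                    dist[g[0]][g[1]] = 0;
                    queue.add(g);
                }
                int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
                while (!queue.isEmpty()) {
                    int[] c = queue.poll();
                    for (int[] d : dirs) {
                        int r = c[0] + d[0], col = c[1] + d[1];
                        if (r < 0 || col < 0 || r >= h || col >= w) continue;
                        if (obstacle[r][col] || dist[r][col] != Integer.MAX_VALUE) continue;
                        dist[r][col] = dist[c[0]][c[1]] + 1;
                        queue.add(new int[]{r, col});
                    }
                }
                return dist; // each agent moves downhill toward its nearest goal
            }
        }

    The cost is one BFS for the whole map regardless of how many agents or goals there are, which is exactly the efficiency the per-goal approach lacks.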

    Read the article

  • HLSL - Creating Shadows in 2D

    - by richard
    The way that I create shadows is by the technique described at http://www.catalinzima.com/2010/07/my-technique-for-the-shader-based-dynamic-2d-shadows/ but I have questions about the HLSL. The way that I currently do it is: I have a black and white image, where black means 'object' and white means 'nothing'. I then distort the image as in the tutorial. I do this with a pixel shader, but instead of rendering to the screen, I render to a texture, back to my application. I then take this, create the shadows, and send it back to the graphics card to undo the distortion after the shadow has been added; this comes back, and I have a stencil of the shadow. I can put this on top of the original image and send them back to the graphics card, which then puts them on the screen. To me this is a lot of back and forth. Is there a way I can avoid this? The problem that I am having is that I basically need to go through all positions in the texture 3 times, using the new texture every time instead of the original one. I tried to read up on passes, but I don't think I am heading in the right direction there. Help?

    Read the article

  • Strange 3D game engine camera with X,Y,Zoom instead of X,Y,Z

    - by Jenko
    I'm using a 3D game engine that uses a 4x4 matrix to modify the camera projection, in this format:

        r r r x
        r r r y
        r r r z
        - - - zoom

    Strangely though, the camera does not respond to the Z translation parameter, and so you're forced to use X, Y, and Zoom to move the camera around. Technically this is plausible for isometric-style games such as Age of Empires III. But this is a 3D engine, so why would they have designed the camera to ignore Z and respond only to zoom? Am I missing something here? I've tried every method of setting the camera, and it really seems to ignore Z. So currently I have to resort to moving the main object in the scene graph instead of moving the camera in relation to the objects. My question: do you have any idea why the engine would use such a scheme? Is it common? Why? Or does it seem like I'm missing something, and the SetProjection(Matrix) function is broken and somehow ignores the Z translation in the matrix? (Unlikely, but possible.) Anyhow, what are the workarounds? Is moving objects around the only way? Edit: I'm sorry I cannot reveal much about the engine because we're under a binding contract. It's a locally developed engine (Australia) written in managed C#, used for data visualizations. Edit: The default mode of the engine is orthographic, although I've switched it into perspective mode. It's probably more effective to use X, Y, Zoom in orthographic mode, but I need perspective mode to render everyday objects as well.
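
    One consistent reading of that bottom-right element (an observation about homogeneous coordinates, not the engine's documented behaviour): if the last row is $(0, 0, 0, s)$, then for a point $v$,

        [ R t ] [ v ]   [ Rv + t ]   --perspective divide-->   (Rv + t) / s
        [ 0 s ] [ 1 ] = [   s    ]

    i.e. $s$ divides the whole projected result uniformly, which is exactly a zoom, while a Z placed in $t$ can be cancelled elsewhere in the pipeline. An orthographic-first engine exposing "X, Y, Zoom" would match that behaviour.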

    Read the article

  • Where to store shaders

    - by Mark Ingram
    I have an OpenGL renderer which has a Scene member variable. The Scene object can contain N SceneObjects. I use these SceneObjects for storing the vertex positions and any transforms. My question is: where should shaders be stored in this arrangement? I guess they need to be in a central location, because multiple objects can use the same shader. But then each object needs access to the shader, because it needs to set attributes on the shader. Does anyone have any advice?
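
    A common arrangement, sketched in Java for illustration (all names hypothetical): one central cache owns the compiled programs, keyed by name; each SceneObject stores only a key, looks the program up at draw time, and sets its own uniforms then.

        import java.util.HashMap;
        import java.util.Map;

        class ShaderCache {
            private final Map<String, Integer> programs = new HashMap<>(); // name -> GL program id

            // Returns the cached program, compiling it on first request.
            int get(String name) {
                return programs.computeIfAbsent(name, this::compile);
            }

            private int compile(String name) {
                // ... load source, compile, link; return the program handle ...
                return 0; // placeholder
            }
        }

    With this split, the cache is the single owner (so cleanup happens in one place), while per-object state such as transforms stays on the SceneObject, which binds the shared program and sets its uniforms each frame.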

    Read the article

  • Finding Z given X & Y coordinates on terrain?

    - by mrky
    I need to know the most efficient way of finding Z given the X & Y coordinates on terrain. My terrain is set up as a grid, with each grid block consisting of two triangles, which may be flipped in any direction. I want to move game objects smoothly along the floor of the terrain without "stepping." I'm currently using the following method, with unexpected results:

        double mapClass::getZ(double x, double y)
        {
            int vertexIndex = ((floor(y))*width*2)+((floor(x))*2);
            vec3ray ray = {glm::vec3(x, y, 2), glm::vec3(x, y, 0)};
            vec3triangle tri1 = {
                glmFrom(vertices[vertexIndex].v1),
                glmFrom(vertices[vertexIndex].v2),
                glmFrom(vertices[vertexIndex].v3)
            };
            vec3triangle tri2 = {
                glmFrom(vertices[vertexIndex+1].v1),
                glmFrom(vertices[vertexIndex+1].v2),
                glmFrom(vertices[vertexIndex+1].v3)
            };
            glm::vec3 intersect;
            if (!intersectRayTriangle(tri1, ray, intersect)) {
                intersectRayTriangle(tri2, ray, intersect);
            }
            return intersect.z;
        }

    intersectRayTriangle() and glmFrom() are as follows:

        bool intersectRayTriangle(vec3triangle tri, vec3ray ray, glm::vec3 &worldIntersect)
        {
            glm::vec3 barycentricIntersect;
            if (glm::intersectLineTriangle(ray.origin, ray.direction, tri.p0, tri.p1, tri.p2, barycentricIntersect)) {
                // Convert barycentric to world coordinates
                double u, v, w;
                u = barycentricIntersect.x;
                v = barycentricIntersect.y;
                w = 1 - (u+v);
                worldIntersect.x = (u * tri.p0.x + v * tri.p1.x + w * tri.p2.x);
                worldIntersect.y = (u * tri.p0.y + v * tri.p1.y + w * tri.p2.y);
                worldIntersect.z = (u * tri.p0.z + v * tri.p1.z + w * tri.p2.z);
                return true;
            } else {
                return false;
            }
        }

        glm::vec3 glmFrom(s_point3f point)
        {
            return glm::vec3(point.x, point.y, point.z);
        }

    My convenience structures are defined as:

        struct s_point3f { GLfloat x, y, z; };
        struct s_triangle3f { s_point3f v1, v2, v3; };
        struct vec3ray { glm::vec3 origin, direction; };
        struct vec3triangle { glm::vec3 p0, p1, p2; };

    vertices is defined as:

        std::vector<s_triangle3f> vertices;

    Basically, I'm trying to get the intersection of a ray (positioned at the specified x and y coordinates, pointing downwards toward the terrain) and one of the two triangles of the grid block. getZ() rarely returns anything but 0. Other times, the numbers it generates seem to be completely off. Am I taking the wrong approach? Can anyone see a problem with my code? Any help or critique is appreciated!
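
    For a strictly vertical query like this one, no ray cast is needed: the 2D barycentric coordinates of $(x, y)$ in the triangle's footprint already give the height. Using the triangle's vertices $(x_i, y_i, z_i)$, the standard identity is

        det = (y2 - y3)(x1 - x3) + (x3 - x2)(y1 - y3)

        lambda1 = [ (y2 - y3)(x - x3) + (x3 - x2)(y - y3) ] / det
        lambda2 = [ (y3 - y1)(x - x3) + (x1 - x3)(y - y3) ] / det
        lambda3 = 1 - lambda1 - lambda2

        z = lambda1 * z1 + lambda2 * z2 + lambda3 * z3

    If any $\lambda_i$ falls outside $[0, 1]$, the point lies in the other triangle of the grid block; otherwise $z$ interpolates smoothly across the face, with no stepping and no intersection routine at all.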

    Read the article

  • Physics timestep questions

    - by SSL
    I've got a projectile working perfectly using the code below:

        // Initialised in the loading screen; 60 is the FPS.
        // projectilePosition and projectileVelocity are Vector3 types.
        gravity = new Vector3(0, -(float)9.81 / 60, 0);

        // Called every frame.
        projectilePosition += projectileVelocity;

    This seems to work fine, but I've noticed in various projectile examples I've seen that the elapsed time per update is taken into account. What's the difference between the two, and how can I convert the above to take the elapsed time into account? (I'm using XNA -- do I use ElapsedGameTime.TotalSeconds or TotalMilliseconds?) Edit: Forgot to add my attempt at using elapsed time, which seemed to break the physics:

        projectileVelocity.Y += -(float)((9.81 * gameTime.ElapsedGameTime.TotalSeconds * gameTime.ElapsedGameTime.TotalSeconds) * 0.5f);

    Thanks for the help
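
    A minimal sketch of time-based integration (in Java for illustration; in XNA terms dtSeconds would be gameTime.ElapsedGameTime.TotalSeconds): gravity is an acceleration, so it scales the velocity by dt once, and the velocity scales the position by dt once. The attempt above breaks because it applies dt twice (dt squared) to the velocity in a single step.

        class Projectile {
            float posY, velY;
            static final float GRAVITY = -9.81f;   // units per second^2

            void update(float dtSeconds) {
                velY += GRAVITY * dtSeconds;       // acceleration -> velocity: one factor of dt
                posY += velY * dtSeconds;          // velocity -> position: one factor of dt
            }
        }

    At a steady 60 FPS this reproduces the original hard-coded 9.81/60 behaviour, but it stays correct when the frame rate varies.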

    Read the article

  • Game thread, render thread, animation/inverse kinematics, and synchronization

    - by user782220
    In a multithreaded setup with a game logic thread and a render thread, and some kind of skinned mesh animation with inverse kinematics and so on, how does animation work? Does the game logic thread just update a number saying time T in the animation, with the render thread inferring the rest? Who owns the skin mesh animation, the game logic thread or the render thread? How is it stored in the scene graph, if it is stored there at all? When the game logic updates, does it do the computation of the skin mesh animation and the computation of the inverse kinematics and then store the result directly in the scene graph, or is it stored indirectly, with the render thread doing the computation?
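
    One common pattern, sketched in Java for illustration (a sketch of the "logic publishes time T, render infers the pose" option, not a claim about any particular engine): the logic thread publishes an immutable snapshot each tick; the render thread reads whichever snapshot is current and evaluates the skinning itself.

        import java.util.concurrent.atomic.AtomicReference;

        class AnimationChannel {
            static final class Snapshot {
                final float animTime;           // could also carry IK targets, etc.
                Snapshot(float t) { animTime = t; }
            }

            private final AtomicReference<Snapshot> current =
                    new AtomicReference<>(new Snapshot(0f));

            void publish(float t) { current.set(new Snapshot(t)); }  // logic thread, once per tick
            Snapshot latest()     { return current.get(); }          // render thread, once per frame
        }

    With this split, the logic thread owns anything gameplay needs (e.g. IK targets used for hit detection), while purely visual pose computation can live on the render side; the snapshot keeps the two threads from ever sharing mutable state.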

    Read the article

  • Bullet Physics: Transform body after adding

    - by Mathias Hölzl
    I would like to transform a rigid body after adding it to the btDiscreteDynamicsWorld. When I use the CF_KINEMATIC_OBJECT flag, I am able to transform it, but it's static (no collision response/gravity). When I don't use the CF_KINEMATIC_OBJECT flag, the transform doesn't get applied. So how do I transform non-static objects in Bullet? Demo code:

        btBoxShape* colShape = new btBoxShape(btVector3(SCALING*1, SCALING*1, SCALING*1));

        // Create dynamic objects
        btTransform startTransform;
        startTransform.setIdentity();

        btScalar mass(1.f);

        // Rigid body is dynamic if and only if mass is non-zero, otherwise static
        bool isDynamic = (mass != 0.f);

        btVector3 localInertia(0, 0, 0);
        if (isDynamic)
            colShape->calculateLocalInertia(mass, localInertia);

        btDefaultMotionState* myMotionState = new btDefaultMotionState();
        btRigidBody::btRigidBodyConstructionInfo rbInfo(mass, myMotionState, colShape, localInertia);
        btRigidBody* body = new btRigidBody(rbInfo);
        body->setCollisionFlags(body->getCollisionFlags() | btCollisionObject::CF_KINEMATIC_OBJECT);
        body->setActivationState(DISABLE_DEACTIVATION);

        m_dynamicsWorld->addRigidBody(body);

        startTransform.setOrigin(SCALING*btVector3(
            btScalar(0),
            btScalar(20),
            btScalar(0)
        ));

        body->getMotionState()->setWorldTransform(startTransform);

    Read the article
