Search Results

Search found 19281 results on 772 pages for 'blender game engine'.

Page 410/772 | < Previous Page | 406 407 408 409 410 411 412 413 414 415 416 417  | Next Page >

  • Implementing RLE into a tilemap, or how to create a large 3D array?

    - by Smallbro
    Currently I've been using a 3D array for my tiles in a 2D world, where the third dimension comes in when moving down into caves and whatnot. That was not memory efficient, so I switched over to a 2D array and can now have much larger maps. The only issue I'm having now is that it seems my tiles cannot occupy the same space as a tile on the same z-level. My current structure means that each block has its own z variable. This is what it used to look like: map.blockData[x][y][z] = new Block(); however, now it works like this: map.blockData[x][y] = new Block(z); I'm not sure why, but if I decide to use the same space on, say, the floor below, it won't allow me to. Does anyone have any ideas on how I can add a z-axis to my 2D array? I'm using Java, but I reckon the concept carries across different languages. Edit: As Will posted, RLE sounds like the best method for achieving a fast 3D array. However, I'm struggling to understand how I would even start to implement it. Would I create a 4D array, the fourth dimension being something which controls how many tiles to skip? Or would the x-axis change altogether and have large gaps in between; for example, would [5][y][z] skip 5 tiles? Is there something really obvious here which I am missing? The number of z-levels I'm trying to have is around 66, and preferably I could have up to or more than 1000 in x and y.
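    A minimal sketch of how the RLE idea could look in Java, on the question's own terms: instead of a 4D array, each (x, y) cell holds a run-length-encoded column, so 66 mostly-uniform z-levels collapse into a few (count, blockId) runs. The Column and Run classes here are hypothetical, not from the asker's code.

        import java.util.ArrayList;
        import java.util.List;

        // One vertical column of tiles at a fixed (x, y), stored run-length encoded.
        class Column {
            private static class Run {
                int count;    // how many consecutive z-levels this run covers
                int blockId;  // the tile type shared by the whole run
                Run(int count, int blockId) { this.count = count; this.blockId = blockId; }
            }

            private final List<Run> runs = new ArrayList<Run>();

            // Append `count` copies of blockId at the bottom of the column.
            void append(int blockId, int count) {
                Run last = runs.isEmpty() ? null : runs.get(runs.size() - 1);
                if (last != null && last.blockId == blockId) {
                    last.count += count;   // merge with the previous run
                } else {
                    runs.add(new Run(count, blockId));
                }
            }

            // Decode on demand: walk the runs until depth z is reached.
            int blockAt(int z) {
                int depth = 0;
                for (Run r : runs) {
                    depth += r.count;
                    if (z < depth) return r.blockId;
                }
                return -1;                 // z lies below everything defined
            }
        }

    The world then stays two-dimensional (blockData[x][y] = new Column()); the trade-off is that writing into the middle of a column means splitting a run in two, so this pays off most when tiles change rarely.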

    Read the article

  • What exactly can shaders be used for?

    - by Bane
    I'm not really a 3D person, and I've only used shaders a little in some Three.js examples, and so far I've got the impression that they are only used for the graphical part of the equation. However, the (quite cryptic) Wikipedia article and some other sources lead me to believe that they can be used for more than just graphical effects, i.e., to program the GPU (Wikipedia). So the GPU is still a processor, right? With a larger and different instruction set for easier and faster vector manipulation, but still a processor. Can I use shaders to make regular programs (provided I've got access to the video memory, which is probable)? Edit: regular programs == "Applications", i.e., creating window/console programs, or at least having some way of drawing things on the screen, maybe even taking user input.

    Read the article

  • Vertex Array Object (OpenGL)

    - by Shin
    I've just started out with OpenGL, and I still haven't really understood what Vertex Array Objects are and how they can be employed. If Vertex Buffer Objects are used to store vertex data (such as positions and texture coordinates) and VAOs only contain status flags, where can they be used? What's their purpose? As far as I understood from the (very incomplete and unclear) GL Wiki, VAOs are used to set the flags/status for every vertex, following the order described in the Element Array Buffer, but the wiki was really ambiguous about it and I'm not really sure what VAOs actually do and how I could employ them.
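    For what it's worth, a VAO is less a per-vertex flag set than a container object that remembers which buffers feed which attribute slots, so a draw call needs one bind instead of re-specifying every pointer. A sketch of the usual pattern, written against the LWJGL Java binding (an assumption; the question names no language, and the VBO ids passed in are hypothetical):

        import static org.lwjgl.opengl.GL11.*;
        import static org.lwjgl.opengl.GL15.*;
        import static org.lwjgl.opengl.GL20.*;
        import static org.lwjgl.opengl.GL30.*;

        class VaoExample {
            // Record the attribute setup once, at load time.
            static int buildVao(int positionVbo, int texCoordVbo, int indexVbo) {
                int vao = glGenVertexArrays();
                glBindVertexArray(vao);

                glBindBuffer(GL_ARRAY_BUFFER, positionVbo);
                glEnableVertexAttribArray(0);
                glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0L);

                glBindBuffer(GL_ARRAY_BUFFER, texCoordVbo);
                glEnableVertexAttribArray(1);
                glVertexAttribPointer(1, 2, GL_FLOAT, false, 0, 0L);

                // The element-array binding is captured by the VAO too.
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVbo);

                glBindVertexArray(0);
                return vao;
            }

            // Per frame: one bind replaces all of the pointer calls above.
            static void draw(int vao, int indexCount) {
                glBindVertexArray(vao);
                glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0L);
                glBindVertexArray(0);
            }
        }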

    Read the article

  • Render full-screen gradient or texture

    - by Filip Skakun
    What's the simplest way to fill the background of the screen with a gradient or a texture in Direct3D 10/11? I'm building a Windows 8 Metro app in which the camera never moves and I render some content in D3D, but I need to fill the background with something other than a solid color. Do I need to figure out the size and position of a rectangle and position it in 3D space, or is there a simpler solution? I don't care about depth at all, and I don't use a depth buffer since all my content is sorted back to front, so I could simply draw the background first.

    Read the article

  • OpenGL fovx question

    - by Nick
    To boil my question down to its simplest form, I fear I am oversimplifying how mat4 perspective works. I am using mat4.perspective(45, 2, 0.1, 1000.0) (the binding is WebGL, fwiw). With a fovy of 45 and an aspect ratio of 2, I expect to have a fovx of 90. Thus, if I position my camera at (0, 0, 50), looking towards the origin, I expect to see a cube positioned at (50, 0, 0) (45 degrees) right at the very periphery of my screen, half on, half off. Instead, a cube at (50, 0, 0) is totally off screen, and the actual periphery occurs at about (41.1, 0, 0). What am I missing here? Thanks, nick
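    For reference, the horizontal field of view follows from the vertical one through the tangent, not linearly, which matches the ~41.1 observation:

        \tan(\mathrm{fov}_x/2) = \mathrm{aspect} \cdot \tan(\mathrm{fov}_y/2)
        \mathrm{fov}_x = 2\arctan(2 \tan 22.5^\circ) \approx 79.3^\circ \neq 90^\circ

    At z = 50 the visible half-width is therefore 50 * tan(39.6°) ≈ 41.4, so a cube at (50, 0, 0) sits outside the frustum.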

    Read the article

  • Deformation of Sphere using Transformations

    - by Mert Toka
    I have a graphics-related question. I need to find a transformation matrix, and I have no idea what it should be. The problem is to produce the deformed sphere in the right image from the original sphere. I created those images in Maya, but I need the matrices for a graphics course. Here is the image: Our professor told us to use some sines and cosines in our transformations, but I have no idea what he meant. I thought of intersecting a plane from the grid (that is, the xz plane) with the sphere, and then scaling down the resulting circle. Would that work? I also checked this paper, but it looks a bit advanced for me, and I suspect it isn't about the same kind of problem anyway. It would be great if you could help me.

    Read the article

  • Remove Box2D bodies after collision detection in Android?

    - by jubin
    Can anyone explain how to destroy a Box2D body on collision? I have tried, but my application crashes. First I check all collisions, then add every body I want to destroy to an array. I am following this tutorial. All the bodies are falling, and I want each one destroyed when it collides with my actor (a monkey); the body is destroyed, but then the application crashes. From searching I learned that you should not destroy a body inside the step function, so I remove bodies at the end of the tick method instead. Could anyone check my code and tell me what I am doing wrong when removing bodies? The setup: multiple Box2D objects fall onto my monkey actor and should be destroyed when they land on it. They are destroyed, but the application crashes. This is my code: static class Box2DLayer extends CCLayer { protected static final float PTM_RATIO = 32.0f; protected static final float WALK_FACTOR = 3.0f; protected static final float MAX_WALK_IMPULSE = 0.2f; protected static final float ANIM_SPEED = 0.3f; int isLeft=0; String dir=""; int x =0; float direction; CCColorLayer objectHint; // protected static final float PTM_RATIO = 32.0f; protected World _world; protected static Body spriteBody; CGSize winSize = CCDirector.sharedDirector().winSize(); private static int count = 200; protected static Body monkey_body; private static Body bodies; CCSprite monkey; float animDelay; int animPhase; CCSpriteSheet danceSheet = CCSpriteSheet.spriteSheet("phases.png"); CCSprite _block; List<Body> toDestroy = new ArrayList<Body>(); //CCSpriteSheet _spriteSheet; private static MyContactListener _contactListener = new MyContactListener(); public Box2DLayer() { this.setIsAccelerometerEnabled(true); CCSprite bg = CCSprite.sprite("jungle.png"); addChild(bg,0); bg.setAnchorPoint(0,0); bg.setPosition(0,0); CGSize s = CCDirector.sharedDirector().winSize(); // Use scaled width and height so that our boundaries always match the current screen float scaledWidth = s.width/PTM_RATIO; float scaledHeight = s.height/PTM_RATIO; Vector2 gravity = new Vector2(0.0f, -30.0f); boolean doSleep = false; _world = new World(gravity, doSleep); // Create edges around the entire screen // Define the ground body. BodyDef bxGroundBodyDef = new BodyDef(); bxGroundBodyDef.position.set(0.0f, 0.0f); // The body is also added to the world. Body groundBody = _world.createBody(bxGroundBodyDef); // Register our contact listener // Define the ground box shape.
PolygonShape groundBox = new PolygonShape(); Vector2 bottomLeft = new Vector2(0f,0f); Vector2 topLeft = new Vector2(0f,scaledHeight); Vector2 topRight = new Vector2(scaledWidth,scaledHeight); Vector2 bottomRight = new Vector2(scaledWidth,0f); // bottom groundBox.setAsEdge(bottomLeft, bottomRight); groundBody.createFixture(groundBox,0); // top groundBox.setAsEdge(topLeft, topRight); groundBody.createFixture(groundBox,0); // left groundBox.setAsEdge(topLeft, bottomLeft); groundBody.createFixture(groundBox,0); // right groundBox.setAsEdge(topRight, bottomRight); groundBody.createFixture(groundBox,0); CCSprite floorbg = CCSprite.sprite("grassbehind.png"); addChild(floorbg,1); floorbg.setAnchorPoint(0,0); floorbg.setPosition(0,0); CCSprite floorfront = CCSprite.sprite("grassfront.png"); floorfront.setTag(2); this.addBoxBodyForSprite(floorfront); addChild(floorfront,3); floorfront.setAnchorPoint(0,0); floorfront.setPosition(0,0); addChild(danceSheet); //CCSprite monkey = CCSprite.sprite(danceSheet, CGRect.make(0, 0, 48, 73)); //addChild(danceSprite); monkey = CCSprite.sprite("arms_up.png"); monkey.setTag(2); monkey.setPosition(200,100); BodyDef spriteBodyDef = new BodyDef(); spriteBodyDef.type = BodyType.DynamicBody; spriteBodyDef.bullet=true; spriteBodyDef.position.set(200 / PTM_RATIO, 300 / PTM_RATIO); monkey_body = _world.createBody(spriteBodyDef); monkey_body.setUserData(monkey); PolygonShape spriteShape = new PolygonShape(); spriteShape.setAsBox(monkey.getContentSize().width/PTM_RATIO/2, monkey.getContentSize().height/PTM_RATIO/2); FixtureDef spriteShapeDef = new FixtureDef(); spriteShapeDef.shape = spriteShape; spriteShapeDef.density = 2.0f; spriteShapeDef.friction = 0.70f; spriteShapeDef.restitution = 0.0f; monkey_body.createFixture(spriteShapeDef); //Vector2 force = new Vector2(10, 10); //monkey_body.applyLinearImpulse(force, spriteBodyDef.position); addChild(monkey,10000); this.schedule(tickCallback); this.schedule(createobjects, 2.0f); objectHint = CCColorLayer.node(ccColor4B.ccc4(255,0,0,128), 200f, 100f); addChild(objectHint, 15000); objectHint.setVisible(false); _world.setContactListener(_contactListener); } private UpdateCallback tickCallback = new UpdateCallback() { public void update(float d) { tick(d); } }; private UpdateCallback createobjects = new UpdateCallback() { public void update(float d) { secondUpdate(d); } }; private void secondUpdate(float dt) { this.addNewSprite(); } public void addBoxBodyForSprite(CCSprite sprite) { BodyDef spriteBodyDef = new BodyDef(); spriteBodyDef.type = BodyType.StaticBody; //spriteBodyDef.bullet=true; spriteBodyDef.position.set(sprite.getPosition().x / PTM_RATIO, sprite.getPosition().y / PTM_RATIO); spriteBody = _world.createBody(spriteBodyDef); spriteBody.setUserData(sprite); Vector2 verts[] = { new Vector2(-11.8f / PTM_RATIO, -24.5f / PTM_RATIO), new Vector2(11.7f / PTM_RATIO, -24.0f / PTM_RATIO), new Vector2(29.2f / PTM_RATIO, -14.0f / PTM_RATIO), new Vector2(28.7f / PTM_RATIO, -0.7f / PTM_RATIO), new Vector2(8.0f / PTM_RATIO, 18.2f / PTM_RATIO), new Vector2(-29.0f / PTM_RATIO, 18.7f / PTM_RATIO), new Vector2(-26.3f / PTM_RATIO, -12.2f / PTM_RATIO) }; PolygonShape spriteShape = new PolygonShape(); spriteShape.set(verts); //spriteShape.setAsBox(sprite.getContentSize().width/PTM_RATIO/2, //sprite.getContentSize().height/PTM_RATIO/2); FixtureDef spriteShapeDef = new FixtureDef(); spriteShapeDef.shape = spriteShape; spriteShapeDef.density = 2.0f; spriteShapeDef.friction = 0.70f; spriteShapeDef.restitution = 0.0f; spriteShapeDef.isSensor=true; 
spriteBody.createFixture(spriteShapeDef); } public void addNewSprite() { count=0; Random rand = new Random(); int Number = rand.nextInt(10); switch(Number) { case 0: _block = CCSprite.sprite("banana.png"); break; case 1: _block = CCSprite.sprite("backpack.png");break; case 2: _block = CCSprite.sprite("statue.png");break; case 3: _block = CCSprite.sprite("pineapple.png");break; case 4: _block = CCSprite.sprite("bananabunch.png");break; case 5: _block = CCSprite.sprite("hat.png");break; case 6: _block = CCSprite.sprite("canteen.png");break; case 7: _block = CCSprite.sprite("banana.png");break; case 8: _block = CCSprite.sprite("statue.png");break; case 9: _block = CCSprite.sprite("hat.png");break; } int padding=20; //_block.setPosition(CGPoint.make(100, 100)); // Determine where to spawn the target along the Y axis CGSize winSize = CCDirector.sharedDirector().displaySize(); int minY = (int)(_block.getContentSize().width / 2.0f); int maxY = (int)(winSize.width - _block.getContentSize().width / 2.0f); int rangeY = maxY - minY; int actualY = rand.nextInt(rangeY) + minY; // Create block and add it to the layer float xOffset = padding+_block.getContentSize().width/2+((_block.getContentSize().width+padding)*count); _block.setPosition(CGPoint.make(actualY, 750)); _block.setTag(1); float w = _block.getContentSize().width; objectHint.setVisible(true); objectHint.changeWidth(w); objectHint.setPosition(actualY-w/2, 460); this.addChild(_block,10000); // Create ball body and shape BodyDef ballBodyDef1 = new BodyDef(); ballBodyDef1.type = BodyType.DynamicBody; ballBodyDef1.position.set(actualY/PTM_RATIO, 480/PTM_RATIO); bodies = _world.createBody(ballBodyDef1); bodies.setUserData(_block); PolygonShape circle1 = new PolygonShape(); Vector2 verts[] = { new Vector2(-11.8f / PTM_RATIO, -24.5f / PTM_RATIO), new Vector2(11.7f / PTM_RATIO, -24.0f / PTM_RATIO), new Vector2(29.2f / PTM_RATIO, -14.0f / PTM_RATIO), new Vector2(28.7f / PTM_RATIO, -0.7f / PTM_RATIO), new Vector2(8.0f / PTM_RATIO, 18.2f / PTM_RATIO), new Vector2(-29.0f / PTM_RATIO, 18.7f / PTM_RATIO), new Vector2(-26.3f / PTM_RATIO, -12.2f / PTM_RATIO) }; circle1.set(verts); FixtureDef ballShapeDef1 = new FixtureDef(); ballShapeDef1.shape = circle1; ballShapeDef1.density = 10.0f; ballShapeDef1.friction = 0.0f; ballShapeDef1.restitution = 0.1f; bodies.createFixture(ballShapeDef1); count++; //Remove(); } @Override public void ccAccelerometerChanged(float accelX, float accelY, float accelZ) { //Apply the directional impulse /*float impulse = monkey_body.getMass()*accelY*WALK_FACTOR; Vector2 force = new Vector2(impulse, 0); monkey_body.applyLinearImpulse(force, monkey_body.getWorldCenter());*/ walk(accelY); //Remove(); } private void walk(float accelY) { // TODO Auto-generated method stub direction = accelY; } private void Remove() { for (Iterator<MyContact> it1 = _contactListener.mContacts.iterator(); it1.hasNext();) { MyContact contact = it1.next(); Body bodyA = contact.fixtureA.getBody(); Body bodyB = contact.fixtureB.getBody(); // See if there's any user data attached to the Box2D body // There should be, since we set it in addBoxBodyForSprite if (bodyA.getUserData() != null && bodyB.getUserData() != null) { CCSprite spriteA = (CCSprite) bodyA.getUserData(); CCSprite spriteB = (CCSprite) bodyB.getUserData(); // Is sprite A a cat and sprite B a car? If so, push the cat // on a list to be destroyed... 
if (spriteA.getTag() == 1 && spriteB.getTag() == 2) { //Log.v("dsfds", "dsfsd"+bodyA); //_world.destroyBody(bodyA); // removeChild(spriteA, true); toDestroy.add(bodyA); } // Is sprite A a car and sprite B a cat? If so, push the cat // on a list to be destroyed... else if (spriteA.getTag() == 2 && spriteB.getTag() == 1) { //Log.v("dsfds", "dsfsd"+bodyB); toDestroy.add(bodyB); } } } // Loop through all of the box2d bodies we want to destroy... for (Iterator<Body> it1 = toDestroy.iterator(); it1.hasNext();) { Body body = it1.next(); // See if there's any user data attached to the Box2D body // There should be, since we set it in addBoxBodyForSprite if (body.getUserData() != null) { // We know that the user data is a sprite since we set // it that way, so cast it... CCSprite sprite = (CCSprite) body.getUserData(); // Remove the sprite from the scene _world.destroyBody(body); removeChild(sprite, true); } // Destroy the Box2D body as well // _contactListener.mContacts.remove(0); } } public synchronized void tick(float delta) { synchronized (_world) { _world.step(delta, 8, 3); //_world.clearForces(); //addNewSprite(); } CCAnimation danceAnimation = CCAnimation.animation("dance", 1.0f); // Iterate over the bodies in the physics world Iterator<Body> it = _world.getBodies(); while(it.hasNext()) { Body b = it.next(); Object userData = b.getUserData(); if (userData != null && userData instanceof CCSprite) { //Synchronize the Sprites position and rotation with the corresponding body CCSprite sprite = (CCSprite)userData; if(sprite.getTag()==1) { //b.applyLinearImpulse(force, pos); sprite.setPosition(b.getPosition().x * PTM_RATIO, b.getPosition().y * PTM_RATIO); sprite.setRotation(-1.0f * ccMacros.CC_RADIANS_TO_DEGREES(b.getAngle())); } else { //Apply the directional impulse float impulse = monkey_body.getMass()*direction*WALK_FACTOR; Vector2 force = new Vector2(impulse, 0); b.applyLinearImpulse(force, b.getWorldCenter()); sprite.setPosition(b.getPosition().x * PTM_RATIO, b.getPosition().y * PTM_RATIO); animDelay -= 1.0f/60.0f; if(animDelay <= 0) { animDelay = ANIM_SPEED; animPhase++; if(animPhase > 2) { animPhase = 1; } } if(direction < 0 ) { isLeft=1; } else { isLeft=0; } if(isLeft==1) { dir = "left"; } else { dir = "right"; } float standingLimit = (float) 0.1f; float vX = monkey_body.getLinearVelocity().x; if((vX > -standingLimit)&& (vX < standingLimit)) { // Log.v("sasd", "standing"); } else { } } } } Remove(); } } Sorry for my english. Thanks in advance.
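    For comparison, a sketch of how the deferred destruction is usually structured, using the question's own class names. The key points: a Set so no body is queued twice, destruction only after step() returns, and clearing both the queue and the stored contacts every tick. In the posted code, toDestroy is never cleared and mContacts keeps old contacts, so a later tick can call destroyBody on an already-destroyed body, which is a classic cause of this crash.

        // Inside the asker's Box2DLayer; needs java.util.HashSet / java.util.Set.
        Set<Body> toDestroy = new HashSet<Body>();  // a Set cannot hold the same body twice

        public void tick(float delta) {
            synchronized (_world) {
                _world.step(delta, 8, 3);           // never destroy during the step
            }
            Remove();                               // fills toDestroy from stored contacts
            for (Body body : toDestroy) {
                Object userData = body.getUserData();
                if (userData instanceof CCSprite) {
                    removeChild((CCSprite) userData, true);
                }
                _world.destroyBody(body);           // safe here: the step has finished
            }
            toDestroy.clear();                      // otherwise the next tick re-destroys them
            _contactListener.mContacts.clear();     // drop contacts referencing dead bodies
        }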

    Read the article

  • What is UVIndex and how do I use it in OpenGL?

    - by Delta
    I am a noob in OpenGL ES 2.0 (for WebGL) and I'm trying to draw a simple model I've made with a 3D tool and exported to the .fbx format. I've been able to draw models that only have a vertex buffer, an index buffer for the vertices, a normal buffer and a texture coordinate buffer, but this model also has a "UVIndex" and I'm not sure where I'm supposed to put it. My code looks like this: GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.VertexBuffer); GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vPosition"],3,GL.FLOAT, false, 0, 0); GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.NormalBuffer); GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vNormal"], 3, GL.FLOAT, false, 0, 0); GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.TexCoordBuffer); GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["TexCoord"], 2, GL.FLOAT, false, 0, 0); GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, this.Model.House.IndexBuffer); GL.bindTexture(GL.TEXTURE_2D, this.Texture.HTex1); GL.activeTexture(GL.TEXTURE0); GL.drawElements(GL.TRIANGLES, this.Model.House.IndexBuffer.Length, GL.UNSIGNED_SHORT, 0); But my model renders totally incorrectly, and I think it has to do with the fact that I am ignoring this "UVIndex" in the .fbx file; since I've never drawn a model that uses a UVIndex, I really have no clue what to do with it. This is the JSON file containing the model's data: http://pastebin.com/raw.php?i=G294TVmz
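    In .fbx data, UVIndex is a second index list: positions and UVs are indexed independently, while glDrawElements allows exactly one index per vertex. The usual fix is to weld the two lists into one, duplicating vertices wherever a position meets different UVs. A sketch of that re-indexing in Java (language chosen for readability; names are hypothetical, and the real arrays would come from the JSON):

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        class IndexWelder {
            // positions holds xyz triples, uvs holds uv pairs; posIndex and uvIndex
            // run in parallel, one entry per triangle corner.
            static int[] weld(float[] positions, float[] uvs,
                              int[] posIndex, int[] uvIndex,
                              List<Float> outPositions, List<Float> outUvs) {
                Map<Long, Integer> seen = new HashMap<Long, Integer>();
                int[] indices = new int[posIndex.length];
                for (int i = 0; i < posIndex.length; i++) {
                    long key = ((long) posIndex[i] << 32) | (uvIndex[i] & 0xffffffffL);
                    Integer slot = seen.get(key);
                    if (slot == null) {                  // first time this pair appears:
                        slot = seen.size();              // emit a brand-new vertex
                        seen.put(key, slot);
                        int p = posIndex[i] * 3, t = uvIndex[i] * 2;
                        outPositions.add(positions[p]);
                        outPositions.add(positions[p + 1]);
                        outPositions.add(positions[p + 2]);
                        outUvs.add(uvs[t]);
                        outUvs.add(uvs[t + 1]);
                    }
                    indices[i] = slot;
                }
                return indices;                          // the one index list GL needs
            }
        }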

    Read the article

  • How to efficiently map tokens to code in a script interpreter?

    - by lithander
    I'm writing an interpreter for a simple scripting language where each line is a complete, executable command (like the instructions in assembler). When parsing a line I have to map the requested command to actual code. My current solution looks like this: std::string op, param1, param2; //parse line, identify op, param1, param2 ... //call command specific code if(op == "MOV") ExecuteMov(AsNumber(param1)); else if(op == "ROT") ExecuteRot(AsNumber(param1)); else if(op == "SZE") ExecuteSze(AsNumber(param1)); else if(op == "POS") ExecutePos(AsNumber(param1), AsNumber(param2)); else if(op == "DIR") ExecuteDir(AsNumber(param1), AsNumber(param2)); else if(op == "SET") ExecuteSet(param1, AsNumber(param2)); else if(op == "EVL") ... The more commands are supported, the more string comparisons I have to do to identify and call the associated method. Can you point me to a more efficient implementation for the described scenario?
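    The standard replacement for the if/else chain is a hash table from opcode to handler, built once at startup; in the question's C++ that would be a std::unordered_map<std::string, std::function<...>>, and the same shape in Java looks like this (handler names are hypothetical):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.BiConsumer;

        class Dispatcher {
            // One O(1)-average lookup instead of a chain of string comparisons.
            private final Map<String, BiConsumer<String, String>> ops = new HashMap<>();

            Dispatcher() {
                ops.put("MOV", (p1, p2) -> executeMov(asNumber(p1)));
                ops.put("ROT", (p1, p2) -> executeRot(asNumber(p1)));
                ops.put("POS", (p1, p2) -> executePos(asNumber(p1), asNumber(p2)));
                // ... one entry per opcode
            }

            void dispatch(String op, String p1, String p2) {
                BiConsumer<String, String> handler = ops.get(op);
                if (handler == null) throw new IllegalArgumentException("unknown op: " + op);
                handler.accept(p1, p2);
            }

            private double asNumber(String s) { return Double.parseDouble(s); }
            private void executeMov(double d) { /* ... */ }
            private void executeRot(double d) { /* ... */ }
            private void executePos(double a, double b) { /* ... */ }
        }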

    Read the article

  • Mapping dynamic buffers in Direct3D11 in Windows Store apps

    - by Donnie
    I'm trying to make instanced geometry in Direct3D11, and the ID3D11DeviceContext1->Map() call is failing with the very helpful error of "Invalid Parameter" when I'm attempting to update the instance buffer. The buffer is declared as a member variable: Microsoft::WRL::ComPtr<ID3D11Buffer> m_instanceBuffer; Then I create it (which succeeds): D3D11_BUFFER_DESC instanceDesc; ZeroMemory(&instanceDesc, sizeof(D3D11_BUFFER_DESC)); instanceDesc.Usage = D3D11_USAGE_DYNAMIC; instanceDesc.ByteWidth = sizeof(InstanceData) * MAX_INSTANCE_COUNT; instanceDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; instanceDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; instanceDesc.MiscFlags = 0; instanceDesc.StructureByteStride = 0; DX::ThrowIfFailed(d3dDevice->CreateBuffer(&instanceDesc, NULL, &m_instanceBuffer)); However, when I try to map it: D3D11_MAPPED_SUBRESOURCE inst; DX::ThrowIfFailed(d3dContext->Map(m_instanceBuffer.Get(), 0, D3D11_MAP_WRITE, 0, &inst)); The map call fails with E_INVALIDARG. Nothing is NULL incorrectly, and this being one of my first D3D apps I'm currently stumped on what to do next to track it down. I'm thinking I must be creating the buffer incorrectly, but I can't see how. Any input would be appreciated.

    Read the article

  • Implementing top view physics using box2D

    - by humbleBee
    How can top-view physics games be done in Box2D? One idea I have is to set the linear velocity of an object manually, or to alter the linear and angular damping as my object moves over different surfaces. For example, if my object is over a wet surface it'll have less linear damping, and if it is over a rough surface it'll have more damping. And to see if my object has fallen over an edge, I'll try to use an AABB and check if it's still inside, or manually check whether object.x > boundary.x, etc. Is there any better way?
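    That approach is the common one: zero gravity plus damping that varies with the surface under the body. A small sketch with the libgdx Box2D wrapper (an assumption; the damping values are placeholders to tune):

        import com.badlogic.gdx.math.Vector2;
        import com.badlogic.gdx.physics.box2d.Body;
        import com.badlogic.gdx.physics.box2d.World;

        class TopDownPhysics {
            // Top-down view: gravity off; floor friction is faked with damping.
            final World world = new World(new Vector2(0f, 0f), true);

            // Called whenever the body crosses onto a different surface tile.
            void applySurface(Body body, boolean wet) {
                body.setLinearDamping(wet ? 0.5f : 4.0f);   // slide further when wet
                body.setAngularDamping(wet ? 0.5f : 4.0f);
            }
        }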

    Read the article

  • Rendering oily/polluted water?

    - by Fraser
    Any shader wizards out there have an idea of how to achieve an oily/polluted water effect, similar to this: Ideally, the water would not be uniformly oily; instead the oil could be generated from some source (such as a polluting drain from a chemical plant) and then diffuse throughout the water body. My thought for this part would be to keep an "oil map" as a 2D texture that determines the density of oil at each point on the water surface. It would diffuse and move naturally with the water velocity at that point (I have a wave-particle simulation for dynamic waves, and am already doing something similar for foam on the water surface). However, I'm not sure how physically correct that would be, since oil might not move at the same velocity as the water. And I have no idea how to make all those trippy colors :-). Thoughts?

    Read the article

  • Processing component pools problem - Entity Subsystem

    - by mani3xis
    Architecture description I'm creating (designing) an entity system and I ran into many problems. I'm trying to keep it data-oriented and as efficient as possible. My components are POD structures (arrays of bytes, to be precise) allocated in homogeneous pools. Each pool has a ComponentDescriptor - it just contains the component name, field types and field names. An Entity is just a pointer to an array of components (where the address acts like an entity ID). An EntityPrototype contains the entity name and an array of component names. Finally, a Subsystem (System or Processor) works on the component pools. Actual problem The problem is that some components depend on others (Model, Sprite, PhysicalBody, Animation depend on the Transform component), which causes a lot of problems when it comes to processing them. For example, let's define some entities using [S]prite, [P]hysicalBody and [H]ealth: Tank: Transform, Sprite, PhysicalBody BgTree: Transform, Sprite House: Transform, Sprite, Health and create 4 Tanks, 5 BgTrees and 2 Houses, so my pools will look like: TTTTTTTTTTT // Transform pool SSSSSSSSSSS // Sprite pool PPPP // PhysicalBody pool HH // Health pool There is no way to process them using indices. I spent 3 days working on it and I still don't have any ideas. In previous designs the TransformComponent was bound to the entity, but it wasn't a good idea. Can you give me some advice on how to process them? Or maybe I should change the overall design? Maybe I should create pools of entities (pools of component pools), but I guess that would be a nightmare for CPU caches. Thanks
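    One way out that keeps the pools dense is to drop positional correspondence entirely: give each entity an integer id and each pool a sparse id-to-slot table, so a subsystem iterates its own dense pool and joins the others by id. A minimal Java sketch (generic here, where the real pools are byte arrays; names are hypothetical):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.function.BiConsumer;

        class ComponentPool<T> {
            private final ArrayList<T> dense = new ArrayList<T>();          // cache-friendly iteration
            private final ArrayList<Integer> denseToEntity = new ArrayList<Integer>();
            private final HashMap<Integer, Integer> sparse = new HashMap<Integer, Integer>();

            void add(int entityId, T component) {
                sparse.put(entityId, dense.size());
                dense.add(component);
                denseToEntity.add(entityId);
            }

            T get(int entityId) {                       // random access for joins
                Integer slot = sparse.get(entityId);
                return slot == null ? null : dense.get(slot);
            }

            void forEach(BiConsumer<Integer, T> visitor) {
                for (int i = 0; i < dense.size(); i++) {
                    visitor.accept(denseToEntity.get(i), dense.get(i));
                }
            }
        }

    The physics subsystem then walks the dense PhysicalBody pool and fetches each entity's Transform by id (physicalBodies.forEach((id, body) -> move(body, transforms.get(id)))), so Tanks, BgTrees and Houses can interleave freely in the Transform pool.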

    Read the article

  • How to move a directional light according to the camera movement?

    - by Andrea Benedetti
    Given a light direction, how can I move it according to the camera movement in a shader? Suppose an artist has set up a scene (e.g., in 3DSMax) with a mesh at the center and a directional light with a position and a target. From this position and target I've calculated the light direction. Now I want to use the same direction in my lighting equation, but, obviously, I want this light to move correctly with the camera. Thanks.
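    The standard transform here is to rotate the direction into view space with the camera's view matrix, using a homogeneous w of 0 so the camera translation drops out:

        \mathbf{d}_{view} = \big( V \cdot (\mathbf{d}_{world},\ 0) \big)_{xyz}

    Only the upper-left 3x3 (rotation) part of V touches a direction; this can be done once per frame on the CPU, or per vertex in the shader, before the usual N · L term.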

    Read the article

  • Drag Gestures - fractional delta values

    - by Den
    I have an issue with objects moving roughly twice as far as expected when dragging them. I am comparing my application to the standard TouchGestureSample sample from MSDN. For some reason, in my application gesture samples have fractional positions and deltas. Both are using the same Microsoft.Xna.Framework.Input.Touch.dll, v4.0.30319. I am running both apps in the standard Windows Phone Emulator. I am setting my breakpoint immediately after this line of code in a simple Update method: GestureSample gesture = TouchPanel.ReadGesture(); Typical values in my app: Delta = {X:-13.56522 Y:4.166667} Position = {X:184.6956 Y:417.7083} Typical values in the sample app: Delta = {X:7 Y:16} Position = {X:497 Y:244} Has anyone seen this issue? Does anyone have any suggestions? Thank you.

    Read the article

  • How to create an extensible rope in Box2D?

    - by Thomas
    Let's say I'm trying to create a ninja lowering himself down a rope, or pulling himself back up, all whilst he might be swinging from side to side or hit by objects. Basically like http://ninja.frozenfractal.com/ but with Box2D instead of hacky JavaScript. Ideally I would like to use a rope joint in Box2D that allows me to change the length after construction. The standard Box2D RopeJoint doesn't offer that functionality. I've considered a PulleyJoint, connecting the other end of the "pulley" to an invisible kinematic body that I can control to change the length, but PulleyJoint is more like a rod than a rope: it constrains maximum length, but unlike RopeJoint it constrains the minimum as well. Re-creating a RopeJoint every frame using a new length is rather inefficient, and I'm not even sure it would work properly in the simulation. I could create a "chain" of bodies connected by RotationJoints but that is also less efficient, and less robust. I also wouldn't be able to change the length arbitrarily, but only by adding and removing a whole number of links, and it's not obvious how I would connect the remainder without violating existing joints. This sounds like something that should be straightforward to do. Am I overlooking something?
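    If the Box2D binding in use lets the rope joint's limit change after construction (recent libgdx versions, for instance, expose RopeJoint.setMaxLength; older builds and other wrappers may not), lowering and raising becomes a per-frame clamp. A sketch under that assumption:

        import com.badlogic.gdx.physics.box2d.joints.RopeJoint;

        class RopeControl {
            // Shorten or lengthen the rope a little each frame; Box2D keeps
            // enforcing the (new) maximum length as a hard constraint.
            static void update(RopeJoint rope, boolean climbingUp, float dt) {
                float speed = 2.0f;                      // metres of rope per second
                float len = rope.getMaxLength() + (climbingUp ? -speed : speed) * dt;
                rope.setMaxLength(Math.max(0.5f, len));  // clamp: don't climb into the anchor
            }
        }

    Failing that, re-creating the joint once per length change (not per frame) is usually acceptable, since joint creation is cheap compared to stepping the world.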

    Read the article

  • Trouble exporting Maya models to Panda3D?

    - by Aerovistae
    Having issues here. I added the Panda3D exporter script thing to Maya. 2 things: When I go to export to a .egg, no .egg is formed. Instead a fileToBeExported_temp.mb appears next to the original fileToBeExported.ma. My models use curving meshes with many subdivisions, easily in the thousands, like on the smoothed tentacles of an octopus. Will Panda be able to handle this in the first place? I can't find out on my own since it won't export.

    Read the article

  • Map and fill texture using PBO (OpenGL 3.3)

    - by NtscCobalt
    I'm learning OpenGL 3.3 trying to do the following (as it is done in D3D)... Create Texture of Width, Height, Pixel Format Map texture memory Loop write pixels Unmap texture memory Set Texture Render Right now though it renders as if the entire texture is black. I can't find a reliable source for information on how to do this though. Almost every tutorial I've found just uses glTexSubImage2D and passes a pointer to memory. Here is basically what my code does... (In this case it is generating an 1-byte Alpha Only texture but it is rendering it as the red channel for debugging) GLuint pixelBufferID; glGenBuffers(1, &pixelBufferID); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID); glBufferData(GL_PIXEL_UNPACK_BUFFER, 512 * 512 * 1, nullptr, GL_STREAM_DRAW); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); GLuint textureID; glGenTextures(1, &textureID); glBindTexture(GL_TEXTURE_2D, textureID); glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 512, 512, 0, GL_RED, GL_UNSIGNED_BYTE, nullptr); glBindTexture(GL_TEXTURE_2D, 0); glBindTexture(GL_TEXTURE_2D, textureID); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID); void *Memory = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY); // Memory copied here, I know this is valid because it is the same loop as in my working D3D version glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); And then here is the render loop. // This chunk left in for completeness glUseProgram(glProgramId); glBindVertexArray(glVertexArrayId); glBindBuffer(GL_ARRAY_BUFFER, glVertexBufferId); glEnableVertexAttribArray(0); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 20, 0); glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 20, 12); GLuint transformLocationID = glGetUniformLocation(3, 'transform'); glUniformMatrix4fv(transformLocationID , 1, true, somematrix) // Not sure if this is all I need to do glBindTexture(GL_TEXTURE_2D, pTex->glTextureId); GLuint textureLocationID = glGetUniformLocation(glProgramId, "texture"); glUniform1i(textureLocationID, 0); glDrawArrays(GL_TRIANGLES, Offset*3, Triangles*3); Vertex Shader #version 330 core in vec3 Position; in vec2 TexCoords; out vec2 TexOut; uniform mat4 transform; void main() { TexOut = TexCoords; gl_Position = vec4(Position, 1.0) * transform; } Pixel Shader #version 330 core uniform sampler2D texture; in vec2 TexCoords; out vec4 fragColor; void main() { // Output color fragColor.r = texture2D(texture, TexCoords).r; fragColor.g = 0.0f; fragColor.b = 0.0f; fragColor.a = 1.0; }
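    One step the description above skips is the transfer itself: unmapping only finalizes the PBO's contents; the texture is not touched until a glTexSubImage2D is issued while the PBO is still bound, with a byte offset in place of the usual pointer. The missing tail end, sketched with the LWJGL Java binding (an assumption; it maps one-to-one onto the C-style calls in the question):

        import static org.lwjgl.opengl.GL11.*;
        import static org.lwjgl.opengl.GL15.*;
        import static org.lwjgl.opengl.GL21.*;

        class PboUpload {
            // After glUnmapBuffer the pixels sit in the PBO, not in the texture.
            static void copyPboToTexture(int textureID, int pixelBufferID) {
                glBindTexture(GL_TEXTURE_2D, textureID);
                glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID);
                // With an unpack PBO bound, the last argument is an offset into
                // the PBO (here 0), not a client-memory pointer.
                glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512,
                                GL_RED, GL_UNSIGNED_BYTE, 0L);
                glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
            }
        }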

    Read the article

  • Queries regarding Geometry Shaders

    - by maverick9888
    I am dealing with geometry shaders using GL_ARB_geometry_shader4 extension. My code goes like : GLfloat vertices[] = { 0.5,0.25,1.0, 0.5,0.75,1.0, -0.5,0.75,1.0, -0.5,0.25,1.0, 0.6,0.35,1.0, 0.6,0.85,1.0, -0.6,0.85,1.0, -0.6,0.35,1.0 }; glProgramParameteriEXT(psId, GL_GEOMETRY_INPUT_TYPE_EXT, GL_TRIANGLES); glProgramParameteriEXT(psId, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP); glLinkProgram(psId); glBindAttribLocation(psId,0,"Position"); glEnableVertexAttribArray (0); glVertexAttribPointer(0, 3, GL_FLOAT, 0, 0, vertices); glDrawArrays(GL_TRIANGLE_STRIP,0,4); My vertex shader is : #version 150 in vec3 Position; void main() { gl_Position = vec4(Position,1.0); } Geometry shader is : #version 150 #extension GL_EXT_geometry_shader4 : enable in vec4 pos[3]; void main() { int i; vec4 vertex; gl_Position = pos[0]; EmitVertex(); gl_Position = pos[1]; EmitVertex(); gl_Position = pos[2]; EmitVertex(); gl_Position = pos[0] + vec4(0.3,0.0,0.0,0.0); EmitVertex(); EndPrimitive(); } Nothing is rendered with this code. What exactly should be the mode in glDrawArrays() ? How does the GL_GEOMETRY_OUTPUT_TYPE_EXT parameter will affect glDrawArrays() ? What I expect is 3 vertices will be passed on to Geometry Shader and using those we construct a primitive of size 4 (assuming GL_TRIANGLE_STRIP requires 4 vertices). Can somebody please throw some light on this ?

    Read the article

  • Not getting desired results with SSAO implementation

    - by user1294203
    After having implemented deferred rendering, I tried my luck with a SSAO implementation using this Tutorial. Unfortunately, I'm not getting anything that looks like SSAO, you can see my result below. You can see there is some weird pattern forming and there is no occlusion shading where there needs to be (i.e. in between the objects and on the ground). The shaders I implemented follow: #VS #version 330 core uniform mat4 invProjMatrix; layout(location = 0) in vec3 in_Position; layout(location = 2) in vec2 in_TexCoord; noperspective out vec2 pass_TexCoord; smooth out vec3 viewRay; void main(void){ pass_TexCoord = in_TexCoord; viewRay = (invProjMatrix * vec4(in_Position, 1.0)).xyz; gl_Position = vec4(in_Position, 1.0); } #FS #version 330 core uniform sampler2D DepthMap; uniform sampler2D NormalMap; uniform sampler2D noise; uniform vec2 projAB; uniform ivec3 noiseScale_kernelSize; uniform vec3 kernel[16]; uniform float RADIUS; uniform mat4 projectionMatrix; noperspective in vec2 pass_TexCoord; smooth in vec3 viewRay; layout(location = 0) out float out_AO; vec3 CalcPosition(void){ float depth = texture(DepthMap, pass_TexCoord).r; float linearDepth = projAB.y / (depth - projAB.x); vec3 ray = normalize(viewRay); ray = ray / ray.z; return linearDepth * ray; } mat3 CalcRMatrix(vec3 normal, vec2 texcoord){ ivec2 noiseScale = noiseScale_kernelSize.xy; vec3 rvec = texture(noise, texcoord * noiseScale).xyz; vec3 tangent = normalize(rvec - normal * dot(rvec, normal)); vec3 bitangent = cross(normal, tangent); return mat3(tangent, bitangent, normal); } void main(void){ vec2 TexCoord = pass_TexCoord; vec3 Position = CalcPosition(); vec3 Normal = normalize(texture(NormalMap, TexCoord).xyz); mat3 RotationMatrix = CalcRMatrix(Normal, TexCoord); int kernelSize = noiseScale_kernelSize.z; float occlusion = 0.0; for(int i = 0; i < kernelSize; i++){ // Get sample position vec3 sample = RotationMatrix * kernel[i]; sample = sample * RADIUS + Position; // Project and bias sample position to get its texture coordinates vec4 offset = projectionMatrix * vec4(sample, 1.0); offset.xy /= offset.w; offset.xy = offset.xy * 0.5 + 0.5; // Get sample depth float sample_depth = texture(DepthMap, offset.xy).r; float linearDepth = projAB.y / (sample_depth - projAB.x); if(abs(Position.z - linearDepth ) < RADIUS){ occlusion += (linearDepth <= sample.z) ? 1.0 : 0.0; } } out_AO = 1.0 - (occlusion / kernelSize); } I draw a full screen quad and pass Depth and Normal textures. Normals are in RGBA16F with the alpha channel reserved for the AO factor in the blur pass. I store depth in a non linear Depth buffer (32F) and recover the linear depth using: float linearDepth = projAB.y / (depth - projAB.x); where projAB.y is calculated as: and projAB.x as: These are derived from the glm::perspective(gluperspective) matrix. z_n and z_f are the near and far clip distance. As described in the link I posted on the top, the method creates samples in a hemisphere with higher distribution close to the center. It then uses random vectors from a texture to rotate the hemisphere randomly around the Z direction and finally orients it along the normal at the given pixel. Since the result is noisy, a blur pass follows the SSAO pass. Anyway, my position reconstruction doesn't seem to be wrong since I also tried doing the same but with the position passed from a texture instead of being reconstructed. I also tried playing with the Radius, noise texture size and number of samples and with different kinds of texture formats, with no luck. 
For some reason when changing the Radius, nothing changes. Does anyone have any suggestions? What could be going wrong?

    Read the article

  • Object detection in bitmap JavaScript canvas

    - by fallenAngel
    I want to detect clicks on canvas elements which are drawn using paths. So far I have stored the element paths in a JavaScript data structure, and I check whether the hit coordinates match an element's coordinates. Rendering each element path and hit-testing it would be inefficient when there are a lot of elements. I believe there must be an algorithm for this kind of coordinate search; can anyone help me with this?
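    The generic structure for this search is a spatial index: register each element in every grid cell its bounding box touches, then a click only tests the few elements in one cell. A sketch of the idea (Java for concreteness; the same shape ports directly to JavaScript, and it assumes non-negative canvas coordinates):

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        class SpatialGrid<T> {
            private final int cellSize;
            private final Map<Long, List<T>> cells = new HashMap<Long, List<T>>();

            SpatialGrid(int cellSize) { this.cellSize = cellSize; }

            private long key(int cx, int cy) {
                return ((long) cx << 32) | (cy & 0xffffffffL);
            }

            // Register an element in every cell its bounding box overlaps.
            void insert(T element, int minX, int minY, int maxX, int maxY) {
                for (int cx = minX / cellSize; cx <= maxX / cellSize; cx++) {
                    for (int cy = minY / cellSize; cy <= maxY / cellSize; cy++) {
                        cells.computeIfAbsent(key(cx, cy), k -> new ArrayList<T>()).add(element);
                    }
                }
            }

            // A click now tests only the handful of elements sharing its cell.
            List<T> candidatesAt(int x, int y) {
                List<T> list = cells.get(key(x / cellSize, y / cellSize));
                return list == null ? Collections.<T>emptyList() : list;
            }
        }

    Only the returned candidates then need the exact path test (in canvas terms, isPointInPath); everything else is skipped.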

    Read the article

  • Wheel rotation, to change velocity of vehicle

    - by Lewis
    I update the velocity of my vehicle like so: [v setVelocity: ((2 * 3.14 * 100 * (wheel.getRotationValue / 360) / 30)) * gameSpeed]; // updated at 60 fps; this gives the per-second velocity, so divide by 60 for one frame. This is done in the update method of my world class. Now, wheel.getRotationValue returns the rotation value, which is worked out like this: - (void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; CGPoint location = [touch locationInView:[touch view]]; location = [[CCDirector sharedDirector] convertToGL:location]; if (CGRectContainsPoint(wheel.boundingBox, location)) { CGPoint firstLocation = [touch previousLocationInView:[touch view]]; CGPoint location = [touch locationInView:[touch view]]; CGPoint touchingPoint = [[CCDirector sharedDirector] convertToGL:location]; CGPoint firstTouchingPoint = [[CCDirector sharedDirector] convertToGL:firstLocation]; CGPoint firstVector = ccpSub(firstTouchingPoint, wheel.position); CGFloat firstRotateAngle = -ccpToAngle(firstVector); CGFloat previousTouch = CC_RADIANS_TO_DEGREES(firstRotateAngle); CGPoint vector = ccpSub(touchingPoint, wheel.position); CGFloat rotateAngle = -ccpToAngle(vector); CGFloat currentTouch = CC_RADIANS_TO_DEGREES(rotateAngle); float limit = 0.5; rotationValue += (currentTouch - previousTouch) * limit; } touching = YES; } Say I steer the vehicle to the far right of the screen and want to move it to the far left: it won't start moving to the left of the screen until the rotationValue passes 0 degrees again (the wheel is in its center position) and is dragged past this value. Is there any way to change the code I have above so that movement on the wheel is recognised instantly and updates the velocity of v instantly too?

    Read the article

  • How can I downsample a texture using FBOs?

    - by snape
    I am rendering a scene to an FBO as my render target whose size is 8 times the size of the original screen in OpenGL. Now I want to downsample the texture generated by the FBO to the size of the screen, so as to achieve spatial anti-aliasing. How do I achieve the downsampling? Please provide implementation details. Note: if there is a better way of doing anti-aliasing with FBOs, please mention that too. I am trying to remove the aliasing in the image attached below.
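    If the FBO has a plain color attachment, glBlitFramebuffer can do the resolve with linear filtering; a sketch via the LWJGL Java binding (an assumption). Note that a single linear blit only samples a 2x2 footprint, so for an 8x reduction it is better to blit in halving steps, or to render to a multisampled FBO and resolve that instead.

        import static org.lwjgl.opengl.GL11.*;
        import static org.lwjgl.opengl.GL30.*;

        class Downsample {
            // Scale the offscreen FBO down onto the default framebuffer.
            static void resolveToScreen(int sceneFbo, int srcW, int srcH, int dstW, int dstH) {
                glBindFramebuffer(GL_READ_FRAMEBUFFER, sceneFbo);
                glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);         // 0 = default framebuffer
                glBlitFramebuffer(0, 0, srcW, srcH,
                                  0, 0, dstW, dstH,
                                  GL_COLOR_BUFFER_BIT, GL_LINEAR); // filtered downscale
                glBindFramebuffer(GL_FRAMEBUFFER, 0);
            }
        }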

    Read the article

  • Foreach loop with 2d array of objects

    - by Jacob Millward
    I'm using a 2D array of objects to store data about tiles, or "blocks" in my gameworld. I initialise the array, fill it with data and then attempt to invoke the draw method of each object. foreach (Block block in blockList) { block.Draw(spriteBatch); } I end up with an exception being thrown "Object reference is not set to an instance of an object". What have I done wrong? EDIT: This is the code used to define the array Block[,] blockList; Then blockList = new Block[screenRectangle.Width, screenRectangle.Height]; // Fill with dummy data for (int x = 0; x <= screenRectangle.Width / texture.Width; x++) { for (int y = 0; y <= screenRectangle.Height / texture.Width; y++) { if (y >= screenRectangle.Height / (texture.Width*2)) { blockList[x, y] = new Block(1, new Rectangle(x * 16, y * 16, texture.Width, texture.Height), texture); } else { blockList[x, y] = new Block(0, new Rectangle(x * 16, y * 16, texture.Width, texture.Height), texture); } } }

    Read the article

  • How to manage my model

    - by Christophe Debove
    I have in my model a list of classes: Player, NonPlayerCharacter, Monster, Item, NonMovableItem, etc. With AndEngine I have a list of sprites, one for each piece of my model. How can I manage the relationship between my model's classes and the graphical elements, and what degree of abstraction is recommended for my problem? One sprite for one model object, one model object for one sprite, or n to n? For example, if I implement drag & drop, do I have to abstract away the Sprite class? Another example: is a map a list of sprites, or a list of elements of my model?
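    One defensible split is one sprite per model object, owned by a thin view layer keyed by the model instance, so the model never references engine types. A hedged sketch (Sprite stands in for AndEngine's sprite class; the package path is from the GLES2 branch, and the rest of the names are hypothetical):

        import java.util.HashMap;
        import java.util.Map;

        import org.andengine.entity.sprite.Sprite;

        class WorldView {
            // The model stays engine-free; this layer owns the 1:1 mapping.
            private final Map<Object, Sprite> sprites = new HashMap<Object, Sprite>();

            void register(Object modelObject, Sprite sprite) {
                sprites.put(modelObject, sprite);
            }

            interface Positioned { float getX(); float getY(); }

            // Once per frame: push model state into the sprites.
            void sync(Iterable<? extends Positioned> modelObjects) {
                for (Positioned m : modelObjects) {
                    Sprite s = sprites.get(m);
                    if (s != null) {
                        s.setPosition(m.getX(), m.getY());
                    }
                }
            }
        }

    Drag & drop then lives entirely in the view layer, which can translate a touched sprite back to its model object through a reverse map if needed.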

    Read the article
