Search Results

Search found 19281 results on 772 pages for 'blender game engine'.

  • GLSL compile error when accessing an array with compile-time constant index

    - by Benlitz
    I have a shader that works well on my computer (an ATI HD 5700). It contains a loop iterating between two compile-time constant bounds, which is, as far as I know, acceptable in a GLSL shader. I write into two arrays inside this loop:

        #define NB_POINT_LIGHT 2
        ...
        varying vec3 vVertToLight[NB_POINT_LIGHT];
        varying vec3 vVertToLightWS[NB_POINT_LIGHT];
        ...
        void main()
        {
            ...
            for (int i = 0; i < NB_POINT_LIGHT; ++i)
            {
                if (bPointLightUse[i])
                {
                    vVertToLight[i] = ConvertToTangentSpace(ShPointLightData[i].Position - WorldPos.xyz);
                    vVertToLightWS[i] = ShPointLightData[i].Position - WorldPos.xyz;
                }
            }
            ...
        }

    I tried my program on another computer equipped with an NVIDIA GTX 560 Ti, and there it fails to compile the shader. When calling glLinkProgram I get the following errors (94 and 95 are the lines of the two assignments):

        Vertex info
        -----------
        0(94) : error C5025: lvalue in assignment too complex
        0(95) : error C5025: lvalue in assignment too complex

    I think my code is valid, so I don't know whether this comes from a compiler bug, from the compiler converting my shader to another format (NVIDIA appears to convert it to Cg), or whether I just missed something. I already tried removing the if (bPointLightUse[i]) statement and I still get the same error. However, if I unroll the loop by hand:

        vVertToLight[0] = ConvertToTangentSpace(ShPointLightData[0].Position - WorldPos.xyz);
        vVertToLightWS[0] = ShPointLightData[0].Position - WorldPos.xyz;
        vVertToLight[1] = ConvertToTangentSpace(ShPointLightData[1].Position - WorldPos.xyz);
        vVertToLightWS[1] = ShPointLightData[1].Position - WorldPos.xyz;

    then the error disappears. That is really inconvenient, though, so I would prefer to keep something loop-based. Here is the detailed configuration that works:

        Vendor: ATI Technologies Inc.
        Renderer: ATI Radeon HD 5700 Series
        Version: 4.1.10750 Compatibility Profile Context
        Shading Language version: 4.10

    And here is the detailed configuration that doesn't work (it should also be a compatibility profile, although that isn't indicated):

        Vendor: NVIDIA Corporation
        Renderer: GeForce GTX 560 Ti/PCI/SSE2
        Version: 4.1.0
        Shading Language version: 4.10 NVIDIA via Cg compiler

  • Collision in Tiled Map - LibGDX

    - by user43353
    I have collision code that deals with left, right, top, or bottom. I am using a Tiled map with LibGDX. The question is: how do I detect collision with other cells on all 4 sides at once, not just left/right or top/bottom? Here is my collision code:

        private boolean isCellBlocked(float x, float y) {
            Cell cell = collisionLayer.getCell(
                    (int) (x / collisionLayer.getTileWidth()),
                    (int) (y / collisionLayer.getTileHeight()));
            return cell != null && cell.getTile() != null
                    && cell.getTile().getProperties().containsKey(blockedKey);
        }

        public boolean collidesRight() {
            for (float step = 0; step < getHeight(); step += collisionLayer.getTileHeight() / 2)
                if (isCellBlocked(getX() + getWidth(), getY() + step))
                    return true;
            return false;
        }

        public boolean collidesLeft() {
            for (float step = 0; step < getHeight(); step += collisionLayer.getTileHeight() / 2)
                if (isCellBlocked(getX(), getY() + step))
                    return true;
            return false;
        }

        public boolean collidesTop() {
            for (float step = 0; step < getWidth(); step += collisionLayer.getTileWidth() / 2)
                if (isCellBlocked(getX() + step, getY() + getHeight()))
                    return true;
            return false;
        }

        public boolean collidesBottom() {
            for (float step = 0; step < getWidth(); step += collisionLayer.getTileWidth() / 2)
                if (isCellBlocked(getX() + step, getY()))
                    return true;
            return false;
        }

    What I'm trying to achieve is simple: code that detects collisions on all 4 sides (collidesRight + collidesLeft + collidesTop + collidesBottom) in one boolean. For some reason, I can't seem to figure it out. I tried using Rectangles (the Java class) on the specific tiles I want detected, but it got messy, and I have multiple maps. Having a Rectangle around the player is no problem; it's the tiles to be detected that cause messy code when handled with the Rectangle class. I'm trying to minimize the amount of code.
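
    If one boolean is all that's needed, a minimal sketch is to scan every half-tile point covering the player's bounding box. This reuses isCellBlocked and the getters from the code above; nothing else is assumed:

        public boolean collidesAny() {
            float stepX = collisionLayer.getTileWidth() / 2f;
            float stepY = collisionLayer.getTileHeight() / 2f;
            // sample the whole box, not just one edge, so any side registers
            for (float x = 0; x <= getWidth(); x += stepX)
                for (float y = 0; y <= getHeight(); y += stepY)
                    if (isCellBlocked(getX() + x, getY() + y))
                        return true;
            return false;
        }

    If the per-side information is still useful elsewhere, the even shorter version is simply collidesLeft() || collidesRight() || collidesTop() || collidesBottom().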

  • Unity - Mecanim & Rigidbody on Third Person Controller - Gravity bug?

    - by Celtc
    I'm working on a third person controller which uses PhysX to interact with other objects (via the Rigidbody component) and Mecanim to animate the character. All the animations used are baked on Y, and movement on that axis is controlled by the gravity applied by the Rigidbody component. (Screenshots in the original post showed the falling animation's configuration and the character's component setup.) Since the falling animation doesn't have root motion on XZ, I move the character on XZ in code, like this:

        // On the ground
        if (IsGrounded()) {
            GroundedMovementMgm();
            // Store the velocity
            velocityPreFalling = rigidbody.velocity;
        }
        // Mid-air
        else {
            // Continue with the pre-falling velocity
            rigidbody.velocity = new Vector3(velocityPreFalling.x, rigidbody.velocity.y, velocityPreFalling.z);
        }

    The problem is that when the character starts falling and hits a wall in mid-air, it gets stuck to the wall. (Screenshots in the original post illustrated the problem.) Hope someone can help me. Thanks, and sorry for my bad English! P.S.: I was asked for the IsGrounded() function, so I'm adding it:

        void OnCollisionEnter(Collision collision) {
            if (!grounded)
                TrackGrounded(collision);
        }

        void OnCollisionStay(Collision collision) {
            TrackGrounded(collision);
        }

        void OnCollisionExit() {
            grounded = false;
        }

        public bool IsGrounded() {
            return grounded;
        }

        private void TrackGrounded(Collision collision) {
            var maxHeight = capCollider.bounds.min.y + capCollider.radius * .9f;
            foreach (var contact in collision.contacts) {
                if (contact.point.y < maxHeight && Vector3.Angle(contact.normal, Vector3.up) < maxSlopeAngle) {
                    grounded = true;
                    break;
                }
            }
        }

    I'll also add a LINK to download the project if someone wants it.

  • Android OpenGL ES 2 framebuffer not working properly

    - by user16547
    I'm trying to understand how framebuffers work. To that end, I want to draw a very basic triangle to a framebuffer texture and then draw the resulting texture onto a quad on the default framebuffer. However, I only get a fraction of the triangle (a screenshot in the original post showed this). Edit: the triangle's coordinates should be (1) -0.5f, -0.5f, 0; (2) 0.5f, -0.5f, 0; (3) 0, 0.5f, 0. Here's the rendering code:

        @Override
        public void onDrawFrame(GL10 gl) {
            renderNormalStuff();
            renderFramebufferTexture();
        }

        protected void renderNormalStuff() {
            GLES20.glViewport(0, 0, texWidth, texHeight);
            GLUtils.updateProjectionMatrix(mProjectionMatrix, texWidth, texHeight);
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
            GLES20.glUseProgram(mProgram);
            GLES20.glClearColor(.5f, .5f, .5f, 1);
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
            Matrix.setIdentityM(mModelMatrix, 0);
            Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
            Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
            GLES20.glUniformMatrix4fv(u_MVPMatrix, 1, false, mMVPMatrix, 0);
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
            GLES20.glVertexAttribPointer(a_Position, 3, GLES20.GL_FLOAT, false, 12, 0);
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[1]);
            GLES20.glVertexAttribPointer(a_Color, 4, GLES20.GL_FLOAT, false, 16, 0);
            GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, ibo[0]);
            GLES20.glDrawElements(GLES20.GL_TRIANGLES, indexBuffer.capacity(), GLES20.GL_UNSIGNED_BYTE, 0);
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
            GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
            GLES20.glUseProgram(0);
        }

        private void renderFramebufferTexture() {
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
            GLES20.glUseProgram(fboProgram);
            GLES20.glClearColor(.0f, .5f, .25f, 1);
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
            GLES20.glViewport(0, 0, width, height);
            GLUtils.updateProjectionMatrix(mProjectionMatrix, width, height);
            Matrix.setIdentityM(mModelMatrix, 0);
            Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
            Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
            GLES20.glUniformMatrix4fv(fbo_u_MVPMatrix, 1, false, mMVPMatrix, 0);
            // draw the texture
            GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture[0]);
            GLES20.glUniform1i(fbo_u_Texture, 0);
            GLUtils.sendBufferData(fbo_a_Position, 3, quadPositionBuffer);
            GLUtils.sendBufferData(fbo_a_TexCoordinate, 2, quadTexCoordinate);
            GLES20.glDrawElements(GLES20.GL_TRIANGLES, quadIndexBuffer.capacity(), GLES20.GL_UNSIGNED_BYTE, quadIndexBuffer);
            GLES20.glUseProgram(0);
        }

  • How do multipass shaders work in OpenGL?

    - by Boreal
    In Direct3D, multipass shaders are simple to use because you can literally define passes within a program. In OpenGL, it seems a bit more complex because it is possible to give a shader program as many vertex, geometry, and fragment shaders as you want. A popular example of a multipass shader is a toon shader. One pass does the actual cel-shading effect and the other creates the outline. If I have two vertex shaders, "cel.vert" and "outline.vert", and two fragment shaders, "cel.frag" and "outline.frag" (similar to the way you do it in HLSL), how can I combine them to create the full toon shader? I don't want you saying that a geometry shader can be used for this because I just want to know the theory behind multipass GLSL shaders ;)
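
    A hedged sketch of how this usually looks on the application side (LWJGL-style Java; linkProgram is a hypothetical helper that compiles and links the named vertex/fragment pair): each pass is its own linked program object, and the geometry is drawn once per program with whatever pipeline state that pass needs:

        int celProgram = linkProgram("cel.vert", "cel.frag");
        int outlineProgram = linkProgram("outline.vert", "outline.frag");

        void drawToon(int vao, int vertexCount) {
            // Pass 1: outline (e.g. an extruded, front-face-culled hull)
            GL20.glUseProgram(outlineProgram);
            GL30.glBindVertexArray(vao);
            GL11.glCullFace(GL11.GL_FRONT);
            GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, vertexCount);

            // Pass 2: the cel-shading itself
            GL20.glUseProgram(celProgram);
            GL11.glCullFace(GL11.GL_BACK);
            GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, vertexCount);

            GL20.glUseProgram(0);
        }

    Core OpenGL has no pass construct of its own (outside of effect frameworks), so "multipass" is simply several program objects used in sequence; attaching all four shaders to one program would instead link them into a single pass.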

  • Error in destroying object in Box2D/LibGDX

    - by Crypted
    I'm trying to delete an object when a collision happens. I have put the following code in the render method of the object so that it runs outside of the physics calculations:

        public void render(SpriteBatch spriteBatch) {
            // some other code...
            body.setActive(false);
            body.getWorld().destroyBody(body);

    But I'm getting a run-time error which crashes the JVM and shows:

        AL lib: alc_cleanup: 1 device not closed
        Assertion failed!
        Program: C:\Program Files\Java\jre6\bin\javaw.exe
        File: /var/lib/hudson/jobs/libgdx-git/workspace/gdx/jni/Box2D/Dynamics/b2World.cpp, Line 133
        Expression: m_bodyCount > 0

    Can anyone help me here?
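
    Two things commonly trip this assertion (hedged, since the question doesn't show where render() is called from): destroying a body while the world is locked (inside World.step() or a collision callback), and destroying the same body twice, which render() will do because it runs every frame. A minimal LibGDX-style sketch of the usual pattern is to queue doomed bodies and destroy each exactly once, after the step:

        import com.badlogic.gdx.physics.box2d.Body;
        import com.badlogic.gdx.physics.box2d.World;
        import com.badlogic.gdx.utils.Array;

        public class PhysicsWorld {
            private final World world;
            private final Array<Body> doomed = new Array<Body>();

            public PhysicsWorld(World world) {
                this.world = world;
            }

            // Call this instead of world.destroyBody(); safe to call repeatedly.
            public void markForRemoval(Body body) {
                if (!doomed.contains(body, true))
                    doomed.add(body);
            }

            public void update(float delta) {
                world.step(delta, 6, 2);      // never destroy while stepping
                for (Body body : doomed)
                    world.destroyBody(body);  // world is unlocked here
                doomed.clear();
            }
        }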

  • Algorithm to make groups of units

    - by M28
    In Age of Mythology and some other strategy games, when you select multiple units and order them to move somewhere, they form a "group" when they reach the desired location (the original post showed a screenshot of this). I have a Vector with several sprites, which are the selected units, and the variables tarX and tarY are the target x and y. I just want an example, so you can simply set the x and y positions and I can adapt it to my code. I would also ask that the algorithm call "isWalkable" for each x and y position, to determine whether it's a valid position for each unit.
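
    A rough sketch of one way to do it (only isWalkable, tarX and tarY come from the question; Unit, moveTo and spacing are hypothetical): walk outward in square rings around the target and drop each unit on the first walkable cell. It assumes enough walkable cells exist near the target.

        import java.util.List;

        void groupAt(List<Unit> units, float tarX, float tarY, float spacing) {
            int placed = 0;
            for (int ring = 0; placed < units.size(); ring++) {
                for (int dx = -ring; dx <= ring && placed < units.size(); dx++) {
                    for (int dy = -ring; dy <= ring && placed < units.size(); dy++) {
                        // visit only the perimeter of the current ring
                        if (Math.max(Math.abs(dx), Math.abs(dy)) != ring) continue;
                        float x = tarX + dx * spacing;
                        float y = tarY + dy * spacing;
                        if (isWalkable(x, y))
                            units.get(placed++).moveTo(x, y);
                    }
                }
            }
        }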

  • How to apply programmatic changes to the Terrain SplatPrototype

    - by Shivan Dragon
    I have a script to which I add a Terrain object (I drag and drop the terrain into the public Terrain field). The Terrain is already set up in Unity with 2 PaintTextures: the first is a square (set up with a tile size so that it forms a checkered pattern) and the second is a grass image. The Target Strength of the first PaintTexture is also lowered so that the checkered pattern reveals some of the grass underneath. Now I want, at run time, to change the Tile Size of the first PaintTexture, i.e. have more or fewer checkers depending on various run-time conditions. I've looked through Unity's documentation and seen that Terrain.terrainData has a SplatPrototype array which allows you to change this; there is also a RefreshPrototypes() method on the terrainData object and a Flush() method on the Terrain object. So I made a script like this:

        public class AStarTerrain : MonoBehaviour {

            public int aStarCellColumns, aStarCellRows;
            public GameObject aStarCellHighlightPrefab;
            public GameObject aStarPathMarkerPrefab;
            public GameObject utilityRobotPrefab;
            public Terrain aStarTerrain;

            void Start () {
                // I've also tried NOT drag and dropping the Terrain on the public field
                // and instead just using the commented line below, but I get the same results
                //aStarTerrain = this.GetComponents<Terrain>()[0];
                Debug.Log ("Got terrain "+aStarTerrain.name);
                SplatPrototype[] splatPrototypes = aStarTerrain.terrainData.splatPrototypes;
                Debug.Log("Terrain has "+splatPrototypes.Length+" splat prototypes");
                SplatPrototype aStarCellSplat = splatPrototypes[0];
                Debug.Log("Re-tyling splat prototype "+aStarCellSplat.texture.name);
                aStarCellSplat.tileSize = new Vector2(2000,2000);
                Debug.Log("Tyling is now "+aStarCellSplat.tileSize.x+"/"+aStarCellSplat.tileSize.y);
                aStarTerrain.terrainData.RefreshPrototypes();
                aStarTerrain.Flush();
            }
            //...

    The problem is that nothing changes; the checker map is not re-tiled. The console output correctly tells me that I've got the Terrain object with the right name, that it has the right number of splat prototypes, and that I'm modifying the tileSize on the SplatPrototype corresponding to the right texture. It also tells me the value has changed. But nothing gets updated in the actual graphical view. So please, what am I missing?

  • Understanding MotionEvent to implement a virtual DPad and Buttons on Android (Multitouch)

    - by Fabio Gomes
    I once implemented a DPad in XNA and now I'm trying to port it to Android, but I still don't get how the touch events work in Android; the more I read, the more confused I get. Here is the code I've written so far. It works, but I guess it will only handle one touch point.

        public boolean onTouchEvent(MotionEvent event) {
            if (event.getPointerCount() == 0)
                return true;

            int touchX = -1;
            int touchY = -1;
            pressedDirection = DPadDirection.None;
            int actionCode = event.getAction() & MotionEvent.ACTION_MASK;

            if (actionCode == MotionEvent.ACTION_UP) {
                if (event.getPointerId(0) == idDPad) {
                    pressedDirection = DPadDirection.None;
                    idDPad = -1;
                }
            } else if (actionCode == MotionEvent.ACTION_DOWN || actionCode == MotionEvent.ACTION_MOVE) {
                touchX = (int)event.getX();
                touchY = (int)event.getY();
                if (rightRect.contains(touchX, touchY))
                    pressedDirection = DPadDirection.Right;
                else if (leftRect.contains(touchX, touchY))
                    pressedDirection = DPadDirection.Left;
                else if (upRect.contains(touchX, touchY))
                    pressedDirection = DPadDirection.Up;
                else if (downRect.contains(touchX, touchY))
                    pressedDirection = DPadDirection.Down;

                if (pressedDirection != DPadDirection.None)
                    idDPad = event.getPointerId(0);
            }
            return true;
        }

    The logic is: test whether there is a DOWN or MOVE event, and if one of these events hits one of the 4 rectangles of my DPad, set the pressedDirection variable to that side; the DPad's actual pressed direction is then read in the Update() method of another class. What I'm not sure about is how to keep track of the touch points. I store the ID of the touch point that generated the current direction (the last one), so when that ID is released I set the direction to None, but I'm really confused about how to handle this in Android. Here is the code I had in XNA:

        public override void Update(GameTime gameTime) {
            PressedDirection = DpadDirection.None;
            foreach (TouchLocation _touchLocation in TouchPanel.GetState()) {
                if (_touchLocation.State == TouchLocationState.Released) {
                    if (_touchLocation.Id == _idDPad) {
                        PressedDirection = DpadDirection.None;
                        _idDPad = -1;
                    }
                }
                else if (_touchLocation.State == TouchLocationState.Pressed || _touchLocation.State == TouchLocationState.Moved) {
                    _intersectRect.X = (int)_touchLocation.Position.X;
                    _intersectRect.Y = (int)_touchLocation.Position.Y;
                    _intersectRect.Width = 1;
                    _intersectRect.Height = 1;
                    if (_intersectRect.Intersects(_rightRect))
                        PressedDirection = DpadDirection.Right;
                    else if (_intersectRect.Intersects(_leftRect))
                        PressedDirection = DpadDirection.Left;
                    else if (_intersectRect.Intersects(_upRect))
                        PressedDirection = DpadDirection.Up;
                    else if (_intersectRect.Intersects(_downRect))
                        PressedDirection = DpadDirection.Down;

                    if (PressedDirection != DpadDirection.None) {
                        _idDPad = _touchLocation.Id;
                        continue;
                    }
                }
            }
            base.Update(gameTime);
        }

    So, first of all: am I doing this correctly? If not, why not? I don't want my DPad to handle multiple directions, but I still haven't understood how to handle multiple touch points: is the event called once per touch point, or do all touch points come in a single call?
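
    A hedged sketch of the multitouch part (the iteration is standard Android API; the DPad wiring is only suggested in comments): unlike XNA's TouchPanel loop, all active pointers arrive in a single MotionEvent, so you iterate them by index. getActionMasked() and getActionIndex() identify which pointer changed, and getPointerId() gives the stable id to remember:

        public boolean onTouchEvent(MotionEvent event) {
            int action = event.getActionMasked();      // ACTION_DOWN, ACTION_POINTER_DOWN, ACTION_POINTER_UP, ...
            int changedIndex = event.getActionIndex(); // index of the pointer that went down/up
            for (int i = 0; i < event.getPointerCount(); i++) {
                int id = event.getPointerId(i);        // stable across events, unlike the index
                float x = event.getX(i);
                float y = event.getY(i);
                // hit-test (x, y) against the DPad rectangles here, remembering
                // `id` so that a later ACTION_POINTER_UP with this id releases
                // the direction (compare against idDPad as in the code above).
            }
            return true;
        }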

  • Is OpenTK Dead?

    - by ashes999
    Looking at OpenTK, I notice some disturbing signs:

        - The last news item was posted on December 31st, 2010.
        - The main forum gets about one post a day.
        - On SourceForge, the last nightly build was in March, and the last release was in 2010.

    Does OpenTK exist anymore, or is it abandonware now?

    Edit: Some people have expressed concern at my use of "ambiguous" and "loaded" terms like "dead" and "abandonware". What I'm asking is this: software projects comprise many pieces:

        - The actual software project (such as OpenTK)
        - A group of people who maintain the software (project leads, core developers)
        - Some vehicle by which users can find and consume the latest versions (such as daily builds)
        - A community (can I ask questions about it? Get answers?)
        - Updates (are there new features? New releases? Active development? A roadmap?)

    Some projects have all of these things. Most have a few. Some have nothing other than the software itself. Is OpenTK one of these? Because it seems like:

        - The actual software project is stable.
        - The maintainers don't contribute to it anymore.
        - There are no more latest versions (daily builds): none since 2010 (2+ years).
        - The community is very low-traffic (nobody is asking or answering questions; who is actually using this anyway?).
        - There have been no updates since 2010.

  • How could you parallelise a 2D boids simulation

    - by Sycren
    How could you program a 2D boids simulation in such a way that it could use processing power from different sources (clusters, GPUs)? In the example animation (shown in the original post), the non-coloured particles move around until they cluster (yellow) and stop moving. The problem is that all the entities could potentially interact with each other, although an entity in the top left is unlikely to interact with one in the bottom right. If the domain were split into different segments, that might speed the whole thing up, but if an entity wanted to cross into another segment there could be problems. At the moment the simulation runs 5,000 entities at a good frame rate; I would like to try it with millions if possible. Would it be possible to use quadtrees to further optimise this? Any other suggestions?
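
    As a sketch of the usual first step (assumptions: a Boid class with x and y fields, and a cell size on the order of the interaction radius), a uniform spatial grid limits each boid's neighbour search to its own and adjacent cells, and the cells can then be farmed out to threads, cluster nodes, or a GPU kernel:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        Map<Long, List<Boid>> buildGrid(List<Boid> boids, float cellSize) {
            Map<Long, List<Boid>> grid = new HashMap<>();
            for (Boid b : boids) {
                long cx = (long) Math.floor(b.x / cellSize);
                long cy = (long) Math.floor(b.y / cellSize);
                long key = (cx << 32) ^ (cy & 0xffffffffL); // pack the 2D cell into one key
                grid.computeIfAbsent(key, k -> new ArrayList<>()).add(b);
            }
            return grid;
        }

    Rebuilding the grid each frame means boids that cross a cell boundary are simply re-binned, which sidesteps the hand-off problem of fixed domain segments.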

  • Alpha interpolation in a pixel shader

    - by c4sh
    How does the interpolation in a fragment shader work when it comes to the alpha component? I'm programming a shader with SharpDX (DirectX 11). My idea is to interpolate between the 2 3D points of a segment, so that I have the in-between position in the pixel shader. But I want to know what happens to the alpha value when that position is blocked by another polygon. For instance, if alpha is 1.0 at the left end of my segment and 0.0 at the other end, what is the value of alpha in the middle: 0.5? Or does it depend on the visibility at that point (meaning it could be, for instance, 1.0 OR 0.0 depending on whether that part of the segment is hidden by a polygon)?

  • Box2D simulation doesn't work

    - by shadow_of__soul
    It has been a while since I last used Box2D, and when I came back to it for some new work I found that my simulation doesn't work (it compiles, but does nothing). I haven't even been able to get the examples working, or this simple test case, pasted below:

        package {
            import flash.display.Sprite;
            import flash.events.Event;
            import Box2D.Common.Math.b2Vec2;
            import Box2D.Dynamics.b2World;
            import Box2D.Dynamics.b2BodyDef;
            import Box2D.Dynamics.b2Body;
            import Box2D.Collision.Shapes.b2CircleShape;
            import Box2D.Dynamics.b2Fixture;
            import Box2D.Dynamics.b2FixtureDef;
            import org.flashdevelop.utils.FlashConnect;
            import flash.events.TimerEvent;
            import flash.utils.Timer;

            public class Main extends Sprite {
                public var world:b2World;
                public var wheelBody:b2Body;
                public var stepTimer:Timer;

                public function Main():void {
                    if (stage) init();
                    else addEventListener(Event.ADDED_TO_STAGE, init);
                }

                private function init(e:Event = null):void {
                    removeEventListener(Event.ADDED_TO_STAGE, init);
                    var gravity:b2Vec2 = new b2Vec2(0, 10);
                    world = new b2World(gravity, true);
                    var wheelBodyDef:b2BodyDef = new b2BodyDef();
                    wheelBodyDef.type = b2Body.b2_dynamicBody;
                    wheelBody = world.CreateBody(wheelBodyDef);
                    var circleShape:b2CircleShape = new b2CircleShape(5);
                    var wheelFixtureDef:b2FixtureDef = new b2FixtureDef();
                    wheelFixtureDef.shape = circleShape;
                    var wheelFixture:b2Fixture = wheelBody.CreateFixture(wheelFixtureDef);
                    stepTimer = new Timer(0.025 * 1000);
                    stepTimer.addEventListener(TimerEvent.TIMER, onTick);
                    FlashConnect.trace(wheelBody.GetPosition().x, wheelBody.GetPosition().y);
                    stepTimer.start();
                    // entry point
                }

                private function onTick(a_event:TimerEvent):void {
                    world.Step(0.025, 10, 10);
                    FlashConnect.trace(wheelBody.GetPosition().x, wheelBody.GetPosition().y);
                }
            }
        }

    Here the object should fall down, but the positions reported by the trace method are always 0. So it's not a display problem where everything merely looks frozen; the simulation itself is not running, and I have no idea why. Can anyone point me in the right direction of where to look for the problem? My setup: Windows 7, FlashDevelop 4.2.1, SDK 4.6.0, compiling for Flash 10 (but I've tried every target available, up to Flash 11.5), project set at 30 fps.

  • Points on lines where the two lines are closest together

    - by James Bedford
    Hey guys, I'm trying to find the points on two lines where the two lines are closest to each other. I've implemented the following method (Points and Vectors are as you'd expect, and a Line consists of a Point on the line and a non-normalized direction Vector from that point):

        void CDClosestPointsOnTwoLines(Line line1, Line line2, Point* closestPoints)
        {
            closestPoints[0] = line1.pointOnLine;
            closestPoints[1] = line2.pointOnLine;

            Vector d1 = line1.direction;
            Vector d2 = line2.direction;

            float a = d1.dot(d1);
            float b = d1.dot(d2);
            float e = d2.dot(d2);

            float d = a*e - b*b;

            if (d != 0) // If the two lines are not parallel.
            {
                Vector r = Vector(line1.pointOnLine) - Vector(line2.pointOnLine);
                float c = d1.dot(r);
                float f = d2.dot(r);

                float s = (b*f - c*e) / d;
                float t = (a*f - b*c) / d;

                closestPoints[0] = line1.positionOnLine(s);
                closestPoints[1] = line2.positionOnLine(t);
            }
            else
            {
                printf("Lines were parallel.\n");
            }
        }

    I'm using OpenGL to draw three lines that move around the world, the third of which should be the line that most closely connects the other two; its two end points are calculated with this function. The problem is that after this function is called, the first point of closestPoints lies on line1, but the second point doesn't lie on line2, let alone at the closest point on line2! I've checked the function many times and I can't see where the mistake in my implementation is. I've checked my dot product, scalar multiplication, subtraction, positionOnLine(), etc., so my assumption is that the problem is within this method. If it helps, this function is supposed to be an implementation of section 5.1.8 from 'Real-Time Collision Detection' by Christer Ericson. Many thanks for any help!
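
    For reference, the closed form this code implements (Ericson, section 5.1.8), with $p_1, d_1$ and $p_2, d_2$ the line origins and directions:

        a = d_1 \cdot d_1, \quad b = d_1 \cdot d_2, \quad e = d_2 \cdot d_2, \quad
        r = p_1 - p_2, \quad c = d_1 \cdot r, \quad f = d_2 \cdot r, \quad d = ae - b^2

        s = \frac{bf - ce}{d}, \qquad t = \frac{af - bc}{d}

    The listed code matches this, so the bug may well lie outside the function, for instance in how the direction vectors or positionOnLine() are defined.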

  • Exporting 3DS Max animated biped character into Assimp

    - by Doug Kavendek
    I've been having some trouble with MD5 meshes exported from 3DS Max into my C++ program, which uses Assimp to import the model and its skeletal animation. If a model was rigged manually with bones, the export and animations work perfectly; but if it was rigged as a biped character, the animation hits a "deadly import error" and all the bones appear to get smooshed together into a big pile. This seems like it might just be a limitation of the MD5 exporter (we're currently using the one found here). Our plan is to try a different MD5 exporter (this one), and if that still has problems, to switch from MD5 to COLLADA. Our modeler won't have time to try these other exporters for a few days, so in the meantime I wanted to see if there are any better methods for getting biped-rigged models from 3DS Max into our app via Assimp. Out of Assimp's supported formats, I need to figure out which will support the following: skeletal animations, and models exportable from 3DS Max biped rigs. Failing that, an alternative would be a way to convert a biped character to its corresponding bones before exporting. We did find one script to do that, but it only seems meant as a starting point for modeling: it doesn't carry over any hierarchy, skinning, or animation, so it can't be used solely during export.

  • How can I bend an object in OpenGL?

    - by mindnoise
    Is there a way to bend an object, like a cylinder or a plane, using OpenGL? I'm an OpenGL beginner (I'm using OpenGL ES 2.0, if that matters, although I suspect the math matters most in this case, so it's somewhat version independent). I understand the basics: translate, rotate, matrix transformations, etc. I was wondering if there is a technique that allows you to actually change the geometry of your objects (in this case by bending them)? Any links, tutorials or other references are welcome!
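
    One common approach, as a hedged sketch (it is not the only technique): deform the vertex positions yourself, on the CPU or in the vertex shader, before the usual matrix transforms. Here a mesh laid out along X is bent into an arc of the given radius around the Z axis; the packed x,y,z layout of `positions` is an assumption:

        void bendAroundZ(float[] positions, float radius) {
            for (int i = 0; i < positions.length; i += 3) {
                float x = positions[i];
                float y = positions[i + 1];
                float angle = x / radius; // arc length along X becomes an angle
                positions[i]     = (float) ((radius - y) * Math.sin(angle));
                positions[i + 1] = (float) (radius - (radius - y) * Math.cos(angle));
                // z (positions[i + 2]) is unchanged; recompute normals afterwards
            }
        }

    At angle 0 the vertex is unmoved, so the bend blends smoothly from the unbent shape; a subdivided mesh is needed for the result to look curved.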

  • Implementing algorithms via compute shaders vs. pipeline shaders

    - by TravisG
    With the availability of compute shaders for both DirectX and OpenGL it's now possible to implement many algorithms without going through the rasterization pipeline and instead use general purpose computing on the GPU to solve the problem. For some algorithms this seems to become the intuitive canonical solution because they're inherently not rasterization based, and rasterization-based shaders seemed to be a workaround to harness GPU power (simple example: creating a noise texture. No quad needs to be rasterized here). Given an algorithm that can be implemented both ways, are there general (potential) performance benefits over using compute shaders vs. going the normal route? Are there drawbacks that we should watch out for (for example, is there some kind of unusual overhead to switching from/to compute shaders at runtime)? Are there perhaps other benefits or drawbacks to consider when choosing between the two?
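
    For concreteness, a hedged LWJGL-style sketch of the compute path for the noise-texture example above: no quad is rasterized, and one shader invocation runs per texel. The program handle, the 16x16 local work-group size, and the image binding slot are all assumptions:

        import org.lwjgl.opengl.GL11;
        import org.lwjgl.opengl.GL15;
        import org.lwjgl.opengl.GL20;
        import org.lwjgl.opengl.GL42;
        import org.lwjgl.opengl.GL43;

        // computeProgram: a linked compute shader declaring
        // layout(local_size_x = 16, local_size_y = 16) and an image2D at binding 0.
        static void fillNoiseTexture(int computeProgram, int noiseTex, int width, int height) {
            GL42.glBindImageTexture(0, noiseTex, 0, false, 0, GL15.GL_WRITE_ONLY, GL11.GL_RGBA8);
            GL20.glUseProgram(computeProgram);
            GL43.glDispatchCompute(width / 16, height / 16, 1); // one work group per 16x16 tile
            GL42.glMemoryBarrier(GL42.GL_SHADER_IMAGE_ACCESS_BARRIER_BIT); // before sampling the result
        }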

  • cocos2d-x and handling touch events

    - by Jason
    I have my sprites on screen, and a vector that stores each sprite. Can a CCSprite* handle a touch event, or only the CCLayer*? What is the best way to decide which sprite was touched? Should I store the coordinates of each sprite (in the sprite class) and, when I get the event, check whether the touch location matches a sprite by looking through the vector and reading each sprite's current coordinates? UPDATE: I subclass CCSprite:

        class Field : public cocos2d::CCSprite, public cocos2d::CCTargetedTouchDelegate

    and I implement these functions:

        cocos2d::CCRect rect();
        virtual void onEnter();
        virtual void onExit();
        bool containsTouchLocation(cocos2d::CCTouch* touch);
        virtual bool ccTouchBegan(cocos2d::CCTouch* touch, cocos2d::CCEvent* event);
        virtual void ccTouchMoved(cocos2d::CCTouch* touch, cocos2d::CCEvent* event);
        virtual void ccTouchEnded(cocos2d::CCTouch* touch, cocos2d::CCEvent* event);
        virtual void touchDelegateRetain();
        virtual void touchDelegateRelease();

    I put CCLOG statements in each one, and I don't hit them! When I touch the CCLayer this sprite is on, though, I do hit the handlers in the class that implements the layer and puts these sprites on it.

  • UV texture mapping with perspective-correct interpolation

    - by Twodordan
    I am working on a software rasterizer for educational purposes and I am having issues with the texturing. The problem is that only one face of the cube gets textured correctly; the rest have stretched edges (the original post showed a screenshot). You can see the running program online here. I have used Cartesian coordinates, and all I do is interpolate the uv values along the scanlines. The general formula I use for interpolating the uv coordinates is pretty much the one I use for the z-buffer interpolation, and looks like this (in this case for horizontal scanlines):

        u_Slope = (right.u - left.u) / (triangleRight_x - triangleLeft_x);
        v_Slope = (right.v - left.v) / (triangleRight_x - triangleLeft_x);
        //[...]
        new_u = left.u + ((currentX_onScanLine - triangleLeft_x) * u_Slope);
        new_v = left.v + ((currentX_onScanLine - triangleLeft_x) * v_Slope);

    Then, when I add each point to the pixel buffer, I restore z and uv:

        z = (1/z);
        uv.u = Math.round(uv.u * z * 100); // *100 because my texture is 100x100px
        uv.v = Math.round(uv.v * z * 100);

    Then I turn the uv indices into one index in order to fetch the correct pixel from the image data (which is a one-dimensional pixel array):

        var index = texture.width * uv.u + uv.v;
        // and the rest is unimportant: imagedata[index].RGBA bla bla

    The interpolation formula is correct judging by the consistency of the texture (including the straight stripes). However, I seem to get quite a lot of 0 values for either u or v, which is probably why I only get one face right. Furthermore, why is the texture flipped horizontally (the "1" is flipped)? I must get some sleep now, but before I start dissecting every single value to see what goes wrong: can someone more experienced guess why this might be happening, just by looking at the cube? "I have no idea what I'm doing" (it's my first time implementing a rasterizer). Did I miss an important stage? Thanks for any insight. PS: My UV values are as follows:

        { u:0, v:0 }, { u:0, v:0.5 }, { u:0.5, v:0.5 }, { u:0.5, v:0 },
        { u:0, v:0 }, { u:0, v:0.5 }, { u:0.5, v:0.5 }, { u:0.5, v:0 }
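
    For reference, the standard perspective-correct formula (this is general rasterizer math, not a diagnosis of the code above): along a scanline with parameter $t \in [0, 1]$, interpolate $u/w$ and $1/w$ linearly in screen space and divide per pixel,

        u(t) = \frac{(1-t)\,u_0/w_0 + t\,u_1/w_1}{(1-t)/w_0 + t/w_1}

    and likewise for $v$. Interpolating $u$ and $v$ directly in screen space is only correct when the polygon has constant depth, which is why a single cube face (the one facing the camera) can look fine while the others stretch.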

  • Follow point of interest by applying torque

    - by azymm
    Given a body with an orientation angle and a point of interest or targetAngle, is there an elegant solution for keeping the body oriented towards the point of interest by applying torque or impulses? I have a naive solution working below, but the effect is pretty wobbly: it overshoots each time and gets closer to the target angle only slowly, which is undesirable in my case. I'd like to find a solution that is more intelligent: one that can accelerate to near the target angle, then decelerate and stop right at the target angle (or within a small range). If it helps, I'm using Box2D and the body is a rectangle.

        def gameloop(dt):
            targetAngle = get_target_angle()
            bodyAngle = get_body_angle()

            deltaAngle = targetAngle - bodyAngle
            if deltaAngle > PI:
                deltaAngle = targetAngle - (bodyAngle + 2.0 * PI)
            if deltaAngle < -PI:
                deltaAngle = targetAngle - (bodyAngle - 2.0 * PI)

            # multiply by 2, for stronger reaction
            deltaAngle = deltaAngle * 2.0
            body.apply_torque(deltaAngle)

    One other thing: when the body has no linear velocity, the above solution works OK, but when the body has some linear velocity, it causes really wonky movement. I'm not sure why, and I'd appreciate any hints as to why that might be.
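
    A hedged sketch of the usual fix: a PD (proportional-derivative) controller. The proportional term pulls toward the target, and the derivative term (the current angular velocity) damps the motion, so the body decelerates into the target instead of overshooting. The gains kP and kD are made up and need tuning; wrapToPi is the same [-PI, PI] wrapping as in the question; the Box2D method names follow the LibGDX bindings:

        float kP = 10f; // spring toward the target angle
        float kD = 3f;  // damping against angular velocity

        void update(Body body, float targetAngle) {
            float error = wrapToPi(targetAngle - body.getAngle());
            float torque = kP * error - kD * body.getAngularVelocity();
            body.applyTorque(torque, true); // 'true' wakes the body if asleep
        }

    Because the damping term reacts to the body's actual spin, this also behaves much better while the body is translating than a pure proportional push does.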

  • Smooth terrain rendering

    - by __dominic
    I'm trying to render a smooth terrain with Direct3D. I've got a 50*50 grid with all y values = 0, and a set of 3D points that indicate the location on the grid and the depth or height of each "valley" or "hill". I need to raise or lower the y values of the grid vertices depending on how close they are to each 3D point, so that in the end I have a smooth terrain. I'm not sure at all how best to do this. I've tried changing the height of the vertices based on the distance to each point, using just the basic squared-distance formula dist = a² + b² + c², where a, b and c are the x, y, and z distances from a vertex to a 3D point. The result I get is not smooth at all, so I'm thinking there is probably a better way. Here is a screenshot of what I've got at the moment: https://dl.dropbox.com/u/2562049/terrain.jpg
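
    One option, as a hedged sketch (the Gaussian falloff is an assumption, not something from the question): let every feature point contribute a smooth bump whose influence decays with distance in the grid plane. FeaturePoint is a hypothetical holder of grid position (x, z) and height; sigma controls how wide each hill or valley is:

        import java.util.List;

        float heightAt(float x, float z, List<FeaturePoint> features, float sigma) {
            float y = 0f;
            for (FeaturePoint p : features) {
                float dx = x - p.x;
                float dz = z - p.z;
                float distSq = dx * dx + dz * dz; // distance in the grid plane only
                y += p.height * (float) Math.exp(-distSq / (2f * sigma * sigma));
            }
            return y;
        }

    Note that only the XZ distance is used: the vertex heights are what is being solved for, so including the y difference (as in the formula above) feeds the unknown back into the weighting.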

  • Homebrew development for 7th gen home consoles

    - by Brian McKenna
    I'm looking to do some homebrew development for the Wii, Xbox 360, or PS3. I'll be developing from a Linux system; the programming language doesn't matter.

        - Wii: devkitPPC and libogc look fairly easy and complete.
        - Xbox 360: Mono.XNA looks interesting but not very feature complete.
        - PS3: psl1ght seems interesting, but I haven't been able to find out much.

    How homebrew-friendly is each of these consoles? Can someone give a comparison of each of these scenes?

  • Ambient occlusion shader just shows models as all white

    - by dvds414
    Okay, so I have this shader for ambient occlusion. It loads into the world correctly, but it just shows all the models as white, and I do not know why. I am simply running the shader while the model is rendering; is that correct, or do I need to make a render target or something? If so, how? I'm using C++. Here is my shader:

        float sampleRadius;
        float distanceScale;
        float4x4 xProjection;
        float4x4 xView;
        float4x4 xWorld;
        float3 cornerFustrum;

        struct VS_OUTPUT
        {
            float4 pos : POSITION;
            float2 TexCoord : TEXCOORD0;
            float3 viewDirection : TEXCOORD1;
        };

        VS_OUTPUT VertexShaderFunction(
            float4 Position : POSITION,
            float2 TexCoord : TEXCOORD0)
        {
            VS_OUTPUT Out = (VS_OUTPUT)0;
            float4 WorldPosition = mul(Position, xWorld);
            float4 ViewPosition = mul(WorldPosition, xView);
            Out.pos = mul(ViewPosition, xProjection);
            Position.xy = sign(Position.xy);
            Out.TexCoord = (float2(Position.x, -Position.y) + float2(1.0f, 1.0f)) * 0.5f;
            float3 corner = float3(-cornerFustrum.x * Position.x,
                                   cornerFustrum.y * Position.y, cornerFustrum.z);
            Out.viewDirection = corner;
            return Out;
        }

        texture depthTexture;
        texture randomTexture;

        sampler2D depthSampler = sampler_state
        {
            Texture = <depthTexture>;
            ADDRESSU = CLAMP;
            ADDRESSV = CLAMP;
            MAGFILTER = LINEAR;
            MINFILTER = LINEAR;
        };

        sampler2D RandNormal = sampler_state
        {
            Texture = <randomTexture>;
            ADDRESSU = WRAP;
            ADDRESSV = WRAP;
            MAGFILTER = LINEAR;
            MINFILTER = LINEAR;
        };

        float4 PixelShaderFunction(VS_OUTPUT IN) : COLOR0
        {
            float4 samples[16] =
            {
                float4(0.355512, -0.709318, -0.102371, 0.0),
                float4(0.534186, 0.71511, -0.115167, 0.0),
                float4(-0.87866, 0.157139, -0.115167, 0.0),
                float4(0.140679, -0.475516, -0.0639818, 0.0),
                float4(-0.0796121, 0.158842, -0.677075, 0.0),
                float4(-0.0759516, -0.101676, -0.483625, 0.0),
                float4(0.12493, -0.0223423, -0.483625, 0.0),
                float4(-0.0720074, 0.243395, -0.967251, 0.0),
                float4(-0.207641, 0.414286, 0.187755, 0.0),
                float4(-0.277332, -0.371262, 0.187755, 0.0),
                float4(0.63864, -0.114214, 0.262857, 0.0),
                float4(-0.184051, 0.622119, 0.262857, 0.0),
                float4(0.110007, -0.219486, 0.435574, 0.0),
                float4(0.235085, 0.314707, 0.696918, 0.0),
                float4(-0.290012, 0.0518654, 0.522688, 0.0),
                float4(0.0975089, -0.329594, 0.609803, 0.0)
            };

            IN.TexCoord.x += 1.0/1600.0;
            IN.TexCoord.y += 1.0/1200.0;

            normalize(IN.viewDirection);
            float depth = tex2D(depthSampler, IN.TexCoord).a;
            float3 se = depth * IN.viewDirection;
            float3 randNormal = tex2D(RandNormal, IN.TexCoord * 200.0).rgb;
            float3 normal = tex2D(depthSampler, IN.TexCoord).rgb;
            float finalColor = 0.0f;

            for (int i = 0; i < 16; i++)
            {
                float3 ray = reflect(samples[i].xyz, randNormal) * sampleRadius;

                //if (dot(ray, normal) < 0)
                //    ray += normal * sampleRadius;

                float4 sample = float4(se + ray, 1.0f);
                float4 ss = mul(sample, xProjection);
                float2 sampleTexCoord = 0.5f * ss.xy/ss.w + float2(0.5f, 0.5f);
                sampleTexCoord.x += 1.0/1600.0;
                sampleTexCoord.y += 1.0/1200.0;
                float sampleDepth = tex2D(depthSampler, sampleTexCoord).a;

                if (sampleDepth == 1.0)
                {
                    finalColor++;
                }
                else
                {
                    float occlusion = distanceScale * max(sampleDepth - depth, 0.0f);
                    finalColor += 1.0f / (1.0f + occlusion * occlusion * 0.1);
                }
            }

            return float4(finalColor/16, finalColor/16, finalColor/16, 1.0f);
        }

        technique SSAO
        {
            pass P0
            {
                VertexShader = compile vs_3_0 VertexShaderFunction();
                PixelShader = compile ps_3_0 PixelShaderFunction();
            }
        }

  • Problem loading shaders with SlimDX

    - by Levi
    I'm attempting to load an FX file in SlimDX. I've got this exact FX file loading and compiling fine with XNA 4.0, but I'm getting errors with SlimDX. Here's my loading code:

        using SlimDX.Direct3D11;
        using SlimDX.D3DCompiler;

        public static Effect LoadFXShader(string path)
        {
            Effect shader;
            using (var bytecode = ShaderBytecode.CompileFromFile(path, null, "fx_2_0", ShaderFlags.None, EffectFlags.None))
                shader = new Effect(Devices.GPU.GraphicsDevice, bytecode);
            return shader;
        }

    Here's the shader:

        #define TEXTURE_TILE_SIZE 16

        struct VertexToPixel
        {
            float4 Position : POSITION;
            float2 TextureCoords : TEXCOORD1;
        };

        struct PixelToFrame
        {
            float4 Color : COLOR0;
        };

        //------- Constants --------
        float4x4 xView;
        float4x4 xProjection;
        float4x4 xWorld;
        float4x4 preViewProjection;
        //float random;

        //------- Texture Samplers --------
        Texture TextureAtlas;
        sampler TextureSampler = sampler_state {
            texture = <TextureAtlas>;
            magfilter = Point;
            minfilter = point;
            mipfilter = linear;
            AddressU = mirror;
            AddressV = mirror;
        };

        //------- Technique: Textured --------
        VertexToPixel TexturedVS(byte4 inPos : POSITION, float2 inTexCoords : TEXCOORD0)
        {
            inPos.w = 1;
            VertexToPixel Output = (VertexToPixel)0;
            float4x4 preViewProjection = mul(xView, xProjection);
            float4x4 preWorldViewProjection = mul(xWorld, preViewProjection);
            Output.Position = mul(inPos, preWorldViewProjection);
            Output.TextureCoords = inTexCoords / TEXTURE_TILE_SIZE;
            return Output;
        }

        PixelToFrame TexturedPS(VertexToPixel PSIn)
        {
            PixelToFrame Output = (PixelToFrame)0;
            Output.Color = tex2D(TextureSampler, PSIn.TextureCoords);
            if (Output.Color.a != 1)
                clip(-1);
            return Output;
        }

        technique Textured
        {
            pass Pass0
            {
                VertexShader = compile vs_2_0 TexturedVS();
                PixelShader = compile ps_2_0 TexturedPS();
            }
        }

    Now this exact shader works fine in XNA, but in SlimDX I get the error:

        ChunkDefault.fx(28,27): error X3000: unrecognized identifier 'byte4'

  • OBJ model loaded in LWJGL has a black area with no texture

    - by gambiting
    I have a problem with loading an .obj file and its textures in LWJGL. The object is a tree (it's a paid model from TurboSquid, so I can't post it here, but here's the link if you want to see how it should look): http://www.turbosquid.com/FullPreview/Index.cfm/ID/701294. I wrote a custom OBJ loader using the LWJGL tutorial from their wiki. It looks like this:

        public class OBJLoader {
            public static Model loadModel(File f) throws FileNotFoundException, IOException {
                BufferedReader reader = new BufferedReader(new FileReader(f));
                Model m = new Model();
                String line;
                Texture currentTexture = null;
                while ((line = reader.readLine()) != null) {
                    if (line.startsWith("v ")) {
                        float x = Float.valueOf(line.split(" ")[1]);
                        float y = Float.valueOf(line.split(" ")[2]);
                        float z = Float.valueOf(line.split(" ")[3]);
                        m.verticies.add(new Vector3f(x, y, z));
                    } else if (line.startsWith("vn ")) {
                        float x = Float.valueOf(line.split(" ")[1]);
                        float y = Float.valueOf(line.split(" ")[2]);
                        float z = Float.valueOf(line.split(" ")[3]);
                        m.normals.add(new Vector3f(x, y, z));
                    } else if (line.startsWith("vt ")) {
                        float x = Float.valueOf(line.split(" ")[1]);
                        float y = Float.valueOf(line.split(" ")[2]);
                        m.texVerticies.add(new Vector2f(x, y));
                    } else if (line.startsWith("f ")) {
                        Vector3f vertexIndicies = new Vector3f(
                                Float.valueOf(line.split(" ")[1].split("/")[0]),
                                Float.valueOf(line.split(" ")[2].split("/")[0]),
                                Float.valueOf(line.split(" ")[3].split("/")[0]));
                        Vector3f textureIndicies = new Vector3f(
                                Float.valueOf(line.split(" ")[1].split("/")[1]),
                                Float.valueOf(line.split(" ")[2].split("/")[1]),
                                Float.valueOf(line.split(" ")[3].split("/")[1]));
                        Vector3f normalIndicies = new Vector3f(
                                Float.valueOf(line.split(" ")[1].split("/")[2]),
                                Float.valueOf(line.split(" ")[2].split("/")[2]),
                                Float.valueOf(line.split(" ")[3].split("/")[2]));
                        m.faces.add(new Face(vertexIndicies, textureIndicies, normalIndicies, currentTexture.getTextureID()));
                    } else if (line.startsWith("g ")) {
                        if (line.length() > 2) {
                            String name = line.split(" ")[1];
                            currentTexture = TextureLoader.getTexture("PNG",
                                    ResourceLoader.getResourceAsStream("res/" + name + ".png"));
                            System.out.println(currentTexture.getTextureID());
                        }
                    }
                }
                reader.close();
                System.out.println(m.verticies.size() + " verticies");
                System.out.println(m.normals.size() + " normals");
                System.out.println(m.texVerticies.size() + " texture coordinates");
                System.out.println(m.faces.size() + " faces");
                return m;
            }
        }

    Then I create a display list for my model using this code:

        objectDisplayList = GL11.glGenLists(1);
        GL11.glNewList(objectDisplayList, GL11.GL_COMPILE);
        Model m = null;
        try {
            m = OBJLoader.loadModel(new File("res/untitled4.obj"));
        } catch (Exception e1) {
            e1.printStackTrace();
        }
        int currentTexture = 0;
        for (Face face : m.faces) {
            if (face.texture != currentTexture) {
                currentTexture = face.texture;
                GL11.glBindTexture(GL11.GL_TEXTURE_2D, currentTexture);
            }
            GL11.glColor3f(1f, 1f, 1f);
            GL11.glBegin(GL11.GL_TRIANGLES);
            Vector3f n1 = m.normals.get((int) face.normal.x - 1);
            GL11.glNormal3f(n1.x, n1.y, n1.z);
            Vector2f t1 = m.texVerticies.get((int) face.textures.x - 1);
            GL11.glTexCoord2f(t1.x, t1.y);
            Vector3f v1 = m.verticies.get((int) face.vertex.x - 1);
            GL11.glVertex3f(v1.x, v1.y, v1.z);
            Vector3f n2 = m.normals.get((int) face.normal.y - 1);
            GL11.glNormal3f(n2.x, n2.y, n2.z);
            Vector2f t2 = m.texVerticies.get((int) face.textures.y - 1);
            GL11.glTexCoord2f(t2.x, t2.y);
            Vector3f v2 = m.verticies.get((int) face.vertex.y - 1);
            GL11.glVertex3f(v2.x, v2.y, v2.z);
            Vector3f n3 = m.normals.get((int) face.normal.z - 1);
            GL11.glNormal3f(n3.x, n3.y, n3.z);
            Vector2f t3 = m.texVerticies.get((int) face.textures.z - 1);
            GL11.glTexCoord2f(t3.x, t3.y);
            Vector3f v3 = m.verticies.get((int) face.vertex.z - 1);
            GL11.glVertex3f(v3.x, v3.y, v3.z);
            GL11.glEnd();
        }
        GL11.glEndList();

    The currentTexture is an int containing the ID of the texture currently in use. My model looks absolutely fine without textures (sorry, I cannot post hyperlinks since I am a new user): i.imgur.com/VtoK0.png. But look what happens if I enable GL_TEXTURE_2D: i.imgur.com/z8Kli.png i.imgur.com/5e9nn.png i.imgur.com/FAHM9.png. As you can see, an entire side of the tree appears to be missing, and it's not transparent, since it's not the colour of the background; it's rendered black. It's not a problem with the model: if I load it using Kanji's OBJ loader it works fine (but the thing is, I need to write my own OBJ loader): i.imgur.com/YDATo.png. This is my OpenGL init section:

        // init display
        try {
            Display.setDisplayMode(new DisplayMode(Support.SCREEN_WIDTH, Support.SCREEN_HEIGHT));
            Display.create();
            Display.setVSyncEnabled(true);
        } catch (LWJGLException e) {
            e.printStackTrace();
            System.exit(0);
        }
        GL11.glLoadIdentity();
        GL11.glEnable(GL11.GL_TEXTURE_2D);
        GL11.glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
        GL11.glShadeModel(GL11.GL_SMOOTH);
        GL11.glEnable(GL11.GL_DEPTH_TEST);
        GL11.glDepthFunc(GL11.GL_LESS);
        GL11.glDepthMask(true);
        GL11.glEnable(GL11.GL_NORMALIZE);
        GL11.glMatrixMode(GL11.GL_PROJECTION);
        GLU.gluPerspective(90.0f, 800f/600f, 1f, 500.0f);
        GL11.glMatrixMode(GL11.GL_MODELVIEW);
        GL11.glEnable(GL11.GL_CULL_FACE);
        GL11.glCullFace(GL11.GL_BACK);

        // enable lighting
        GL11.glEnable(GL11.GL_LIGHTING);
        ByteBuffer temp = ByteBuffer.allocateDirect(16);
        temp.order(ByteOrder.nativeOrder());
        GL11.glMaterial(GL11.GL_FRONT, GL11.GL_DIFFUSE, (FloatBuffer)temp.asFloatBuffer().put(lightDiffuse).flip());
        GL11.glMaterialf(GL11.GL_FRONT, GL11.GL_SHININESS, (int)material_shinyness);
        GL11.glLight(GL11.GL_LIGHT2, GL11.GL_DIFFUSE, (FloatBuffer)temp.asFloatBuffer().put(lightDiffuse2).flip()); // set up the diffuse light
        GL11.glLight(GL11.GL_LIGHT2, GL11.GL_POSITION, (FloatBuffer)temp.asFloatBuffer().put(lightPosition2).flip());
        GL11.glLight(GL11.GL_LIGHT2, GL11.GL_AMBIENT, (FloatBuffer)temp.asFloatBuffer().put(lightAmbient).flip());
        GL11.glLight(GL11.GL_LIGHT2, GL11.GL_SPECULAR, (FloatBuffer)temp.asFloatBuffer().put(lightDiffuse2).flip());
        GL11.glLightf(GL11.GL_LIGHT2, GL11.GL_CONSTANT_ATTENUATION, 0.1f);
        GL11.glLightf(GL11.GL_LIGHT2, GL11.GL_LINEAR_ATTENUATION, 0.0f);
        GL11.glLightf(GL11.GL_LIGHT2, GL11.GL_QUADRATIC_ATTENUATION, 0.0f);
        GL11.glEnable(GL11.GL_LIGHT2);

    Could somebody please help me?
