Search Results

Search found 29201 results on 1169 pages for 'game development'.

Page 574/1169

  • Cocos2d rotating sprite while moving with CCBezierBy

    - by marcg11
    I've set up my movement as sequences of CCBezierBy actions. However, I would like the sprite to rotate to follow the direction of the movement (like an airplane). How should I do this with cocos2d? I've done the following to test it out:

        CCSprite *green = [CCSprite spriteWithFile:@"enemy_green.png"];
        [green setPosition:ccp(50, 160)];
        [self addChild:green];

        ccBezierConfig bezier;
        bezier.controlPoint_1 = ccp(100, 200);
        bezier.controlPoint_2 = ccp(400, 200);
        bezier.endPosition = ccp(300, 160);

        [green runAction:[CCAutoBezier actionWithDuration:4.0 bezier:bezier]];

    In my subclass:

        @interface CCAutoBezier : CCBezierBy
        @end

        @implementation CCAutoBezier

        - (id)init
        {
            self = [super init];
            if (self) {
                // Initialization code here.
            }
            return self;
        }

        - (void)update:(ccTime)t
        {
            CGPoint oldpos = [self.target position];
            [super update:t];
            CGPoint newpos = [self.target position];
            float angle = atan2(newpos.y - oldpos.y, newpos.x - oldpos.x);
            [self.target setRotation:angle];
        }

        @end

    The sprite is rotating, but it does not follow the path...
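
    A note for readers hitting the same symptom (a common cause, not confirmed from the post): atan2 returns radians measured counter-clockwise, while cocos2d's setRotation expects degrees and treats positive values as clockwise, so the raw angle has to be converted and negated. A minimal sketch of that conversion, written here as plain C++ math:

        #include <cmath>

        // Convert a per-frame movement delta into a cocos2d-style rotation:
        // atan2 gives radians, counter-clockwise positive; CCNode rotation is
        // degrees, clockwise positive.
        float rotationFromDelta(float dx, float dy)
        {
            const float radians = std::atan2(dy, dx);
            const float degrees = radians * 180.0f / 3.14159265f;
            return -degrees; // flip sign for cocos2d's clockwise convention
        }

    In the update: override above, the value passed to setRotation would be this converted angle rather than the raw atan2 result.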

    Read the article

  • Finding vectors with two points

    - by Christian Careaga
    We are trying to get the direction of a projectile, but we can't work out how. For example: [1, 1] will go SE, [1, -1] will go NE, [-1, -1] will go NW and [-1, 1] will go SW. We need an equation of some sort that will take the player position and the mouse position and find which direction the projectile needs to go. Here is where we are plugging in the vector:

        def update(self):
            self.rect.x += self.vector[0]
            self.rect.y += self.vector[1]

    Then we are blitting the projectile at the rect's coordinates.
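
    One way to get a direction that is not limited to the diagonal cases above is to normalize the player-to-mouse vector and scale it by the projectile speed. A sketch of that math (the Vec2 struct and the speed parameter are illustrative, not from the original code):

        #include <cmath>

        struct Vec2 { float x, y; };

        // Velocity pointing from the player toward the mouse, with a fixed speed.
        Vec2 projectileVelocity(Vec2 player, Vec2 mouse, float speed)
        {
            const float dx = mouse.x - player.x;
            const float dy = mouse.y - player.y;
            const float length = std::sqrt(dx * dx + dy * dy);
            if (length == 0.0f)
                return Vec2{0.0f, 0.0f};   // player and mouse coincide
            return Vec2{dx / length * speed, dy / length * speed};
        }

    The resulting components would then be stored in self.vector so the update method above keeps working unchanged.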

    Read the article

  • Deferred contexts and inheriting state from the immediate context

    - by dreijer
    I took my first stab at using deferred contexts in DirectX 11 today. Basically, I created my deferred context using CreateDeferredContext() and then drew a simple triangle strip with it. Early on in my test application, I call OMSetRenderTargets() on the immediate context in order to render to the swap chain's back buffer. Now, after having read the documentation on MSDN about deferred contexts, I assumed that calling ExecuteCommandList() on the immediate context would execute all of the deferred commands as "an extension" to the commands that had already been executed on the immediate context, i.e. the triangle strip I rendered in the deferred context would be rendered to the swap chain's back buffer. That didn't seem to be the case, however. Instead, I had to manually pull out the immediate context's render target (using OMGetRenderTargets()) and then set it on the deferred context with OMSetRenderTargets(). Am I doing something wrong or is that the way deferred contexts work?
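
    For reference, this is documented behaviour rather than a bug: a deferred context starts recording from the default pipeline state and does not inherit the immediate context's bindings, so the render target has to be set on the deferred context before the draw calls are recorded. A minimal sketch, assuming the views come from the application's swap chain setup:

        #include <d3d11.h>

        // Record a command list that explicitly binds the back-buffer views first.
        void RecordScene(ID3D11DeviceContext* deferred,
                         ID3D11RenderTargetView* rtv,
                         ID3D11DepthStencilView* dsv,
                         ID3D11CommandList** commandList)
        {
            deferred->OMSetRenderTargets(1, &rtv, dsv);   // not inherited, bind again
            // ... IASetVertexBuffers / Draw calls for the triangle strip ...
            deferred->FinishCommandList(FALSE, commandList);
        }

        // Later, on the immediate context:
        // immediate->ExecuteCommandList(commandList, FALSE);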

    Read the article

  • Locomotion-system with irregular IK

    - by htaunay
    I'm having some trouble with Locomotion's (a Unity3D asset) IK feet placement. I wouldn't call it "very bad", but it definitely isn't as smooth as the Locomotion System Examples. The strangest behavior (which is probably linked to the problem) is the rendered foot markers that "guess" where the character's next step will be. In the demo they are smooth and stable. However, in my project they keep flickering, as if Locomotion changed its "guess" every frame, and sometimes the automatically defined step is too close to the previous step, or sometimes too distant, creating a very irregular pattern. The configuration is (apparently) identical to the human example in the demo, so I'm guessing the problem is my model and/or animation. Problem is, I can't figure out what it is. Has anyone experienced the same problem? I uploaded a video of the bug to help interpret the issue (excuse the horrible quality, I was in a hurry).

    Read the article

  • How do the Rectangle bounds (x, y, width, height) in libgdx work?

    - by JG22
    I can't work out how to use the Rectangle bounds in libgdx. I am currently using the SuperJumper example, which has two or three examples, such as:

        pauseBounds = new Rectangle(320 - 64, 480 - 64, 64, 64);

    This is the pause button in the top-right corner.

        resumeBounds = new Rectangle(160 - 96, 240, 192, 36);

    This is a rectangular resume button in the middle of the page, in the menu that comes up when the pause button is pressed. Basically, my question is aimed at the "320 - 64" and "160 - 96", because I don't know why they are used. I need to create a rectangle that covers the left side of the screen and the same on the right, because I want to create on-screen buttons. I have already created the code for these buttons and managed to get them to work, but I can't move the rectangles to where I want them. Thank you if you can help.
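
    The arithmetic in those constructors is just "screen edge minus button size", so the rectangle hugs the corner: in the SuperJumper sample the GUI camera is 320x480 with the origin in the bottom-left, so new Rectangle(320 - 64, 480 - 64, 64, 64) pins a 64x64 button to the top-right. A sketch of the same layout math for two half-screen touch areas, written as plain C++ (the numbers assume that same 320x480 view):

        struct Rect { float x, y, width, height; };

        const float screenW = 320.0f;
        const float screenH = 480.0f;

        // Pause button pinned to the top-right corner, as in the sample.
        Rect pauseBounds = {screenW - 64.0f, screenH - 64.0f, 64.0f, 64.0f};

        // Full-height touch areas covering the left and right halves of the screen.
        Rect leftButton  = {0.0f,           0.0f, screenW / 2.0f, screenH};
        Rect rightButton = {screenW / 2.0f, 0.0f, screenW / 2.0f, screenH};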

    Read the article

  • How to control an actor's movement in UDK

    - by Mikalichov
    This might be very basic, but I couldn't find anything relevant to what I need (see below). I am working on a very basic thing: a 3D environment with some buildings, and actors walking inside it. I mainly want to have one actor standing around, idling, and another walking around the area. Right now, this is done through Matinee plus skeletal mesh groups, forcing a looped animation on the actors, but I realize this is super caveman-level. So I've built an AnimTree, linking the idle and directional animations to the corresponding nodes. But then I'm stuck. I added the AnimTree in the actor's properties, but nothing happens. I've tried MoveToActor, but with no success - is there something to set to allow an actor to move? Also, I place the actors on the map manually (they are supposed to be unique); should I spawn them instead? Every tutorial I find explains how to use an AnimTree for the player character, which is not what I want; I need a way to move the actors. I tried to look for AI tutorials, but only found UT3 bot modifications, which is not what I need either. Since I have so much trouble finding how to do this through Kismet, I'm starting to suspect it has to be done through scripting/coding, but I would like to be sure there is no way to do it through Kismet before going that route. Any bit of an answer about how to tell an actor something along the lines of "go in that direction as much as you can, then when you hit a wall turn 45° and continue" would be awesome. I'll be happy to move/edit the question if there is any problem with it.

    Read the article

  • OpenGL Vertex Attributes - Normalisation

    - by Daniel
    Alas, I have searched and have found no definitive answer. When would you normalize the vertex data in OpenGL using the following command:

        glVertexAttribPointer(index, size, type, normalize, stride, pointer);

    That is, when would normalize == GL_TRUE? In what situations, and why would you choose to let the GPU do the calculation instead of preprocessing it? Every example I have ever seen has this set to GL_FALSE, and I cannot personally see a use for it. But Khronos aren't stupid, so it must be there for something useful (and probably common).
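
    For context, the flag matters when the attribute is stored as an integer type: with normalize == GL_TRUE the attribute fetch rescales the stored integers into [0, 1] (unsigned) or [-1, 1] (signed), so quantized colors or normals can be uploaded in one or two bytes per component yet still arrive in the shader as floats. A hedged sketch (attribute locations 0 and 1 are arbitrary, and a loader such as GLEW is assumed to provide the GL entry points):

        #include <GL/glew.h>
        #include <cstddef>

        // Vertex with the color packed as four unsigned bytes instead of four floats.
        struct PackedVertex
        {
            GLfloat position[3];
            GLubyte color[4];   // 0..255 per channel, 4 bytes instead of 16
        };

        void setupAttribs()
        {
            // Position: plain floats, no normalization needed.
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(PackedVertex),
                                  (const void*)offsetof(PackedVertex, position));
            // Color: bytes normalized on fetch, the shader still sees values in 0.0..1.0.
            glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(PackedVertex),
                                  (const void*)offsetof(PackedVertex, color));
        }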

    Read the article

  • How are trajectories calculated and transmitted to other players in multiplayer?

    - by giulio
    I play a lot of COD4 and can see tracers for gunfire, missiles, care packages falling from helicopters, etc. There is a lot of activity. I am curious to know the algorithm (at a high level) that manages all this action when you have 20 people on a map shooting each other to death. This question touches on the subject but doesn't ask for a more in-depth answer as to how developers go about calculating and transmitting movement and collision detection for projectiles, be they missiles, bullets or any other object flying through the air in real time.

    Read the article

  • Interaction using Kinect in XNA

    - by Sweta Dwivedi
    So I have written a program to play a sound file whenever my RightHand joint touches the 3D model. The code works, somehow, but it is not very accurate: for example, it will play the sound when my hand is slightly under my 3D object, not exactly on it. How do I make it more accurate? Here is the code (handX and handY are the values coming from the skeleton data, RightHand.Joint.X etc.); this calculation also doesn't work with animated sprites, which I need:

        foreach (_3DModel s in Solar)
        {
            float x = (float)Math.Floor(((handX * 0.5f) + 0.5f) * (resolution.X));
            float y = (float)Math.Floor(((handY * -0.5f) + 0.5f) * (resolution.Y));
            float z = (float)Math.Floor((handZ) / 4 * 20000);

            if (Math.Sqrt(Math.Pow(x - s.modelPosition.X, 2) + Math.Pow(y - s.modelPosition.Y, 2)) < 15)
            {
                //Exit();
                PlaySound("hyperspace_activate");
                Console.WriteLine("1" + "handx:" + x + "," + " " + "modelPos.X:" + s.modelPosition.X + "," + " " + "handY:" + y + "modelPos.Y:" + s.modelPosition.Y);
            }
            else
            {
                Console.WriteLine("2" + "handx:" + x + "," + " " + "modelPos.X:" + s.modelPosition.X + "," + " " + "handY:" + y + "modelPos.Y:" + s.modelPosition.Y);
            }
        }

    Read the article

  • Switching between Discrete and Integrated GPUs

    - by void-pointer
    Hello everyone, I develop CUDA applications on my Alienware M17x portable back-breaker, which has two discrete GTX 285M GPUs and one integrated GeForce 9400M GPU. I can currently switch between them using NVIDIA's software, but I would like the ability to do so within my applications for purposes of benchmarking and general convenience. Apparently this requires the "NDA version" of NVIDIA's Driver API, which I know not how to obtain. Would using this API be the only way to accomplish what I seek, and if so, how would I obtain it? A solution using Windows APIs would also be acceptable, though less preferable to one which would leverage a cross-platform API. I have created a similar thread concerning the matter on NVIDIA's forum, which is down at the time of this writing. Thanks for reading my question; it is much appreciated!
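
    One partial workaround for the benchmarking half of the question that needs no NDA material: the public CUDA runtime already lets an application choose which of the currently visible CUDA devices it uses. This only selects among devices the driver is exposing at the time; it does not perform the driver-level toggle between the integrated and discrete GPUs. A sketch:

        #include <cuda_runtime.h>
        #include <cstdio>

        // List the visible CUDA devices and select one by index before any
        // allocations or kernel launches are made.
        void selectCudaDevice(int requested)
        {
            int count = 0;
            cudaGetDeviceCount(&count);
            for (int i = 0; i < count; ++i)
            {
                cudaDeviceProp prop;
                cudaGetDeviceProperties(&prop, i);
                std::printf("Device %d: %s\n", i, prop.name);
            }
            if (requested >= 0 && requested < count)
                cudaSetDevice(requested);
        }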

    Read the article

  • How to implement item state operations with binary flags in C++?

    - by Piperoman
    I have the following problem. An item can have a lot of states:

        NORMAL   = 0000000
        DRY      = 0000001
        HOT      = 0000010
        BURNING  = 0000100
        WET      = 0001000
        COLD     = 0010000
        FROZEN   = 0100000
        POISONED = 1000000

    An item can have several states at the same time, but not all of them: it is impossible to be dry and wet at the same time; if you COLD a WET item, it turns into FROZEN; if you HOT a WET item, it turns into NORMAL; an item can be BURNING and POISONED; etc. I have tried to set binary flags for the states and use AND to combine different states, checking beforehand whether the combination is possible or whether it should change to another state. Does there exist a concrete approach to solve this problem efficiently, without an interminable switch that checks every state against every new state? It is relatively easy to check two different states, but once a third state exists it is not trivial to do.
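
    One common pattern, sketched below with only the rules listed above: keep the flags as bit values and resolve each incoming effect against the current mask in a single function, masking out the conflicting bits there. A new effect is then checked against the current state, not against every possible combination of states.

        #include <cstdint>

        enum State : std::uint8_t
        {
            NORMAL   = 0,
            DRY      = 1 << 0,
            HOT      = 1 << 1,
            BURNING  = 1 << 2,
            WET      = 1 << 3,
            COLD     = 1 << 4,
            FROZEN   = 1 << 5,
            POISONED = 1 << 6
        };

        // Apply one incoming effect to the current state mask, resolving the
        // interactions described in the question.
        std::uint8_t applyEffect(std::uint8_t state, State effect)
        {
            switch (effect)
            {
            case COLD:
                if (state & WET)                    // cold on a wet item -> frozen
                    return (state & ~WET) | FROZEN;
                return state | COLD;
            case HOT:
                if (state & WET)                    // hot on a wet item -> back to normal
                    return state & ~(WET | COLD);
                return state | HOT;
            case WET:
                return (state & ~DRY) | WET;        // wet cancels dry
            case DRY:
                return (state & ~WET) | DRY;        // dry cancels wet
            default:
                return state | effect;              // e.g. BURNING and POISONED can coexist
            }
        }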

    Read the article

  • How to set an orthographic matrix for a 2D camera with zooming?

    - by MahanGM
    I'm using ID3DXSprite to draw my sprites and haven't set any kind of camera projection matrix. How do I set up an orthographic projection matrix for the camera in DirectX that can support zoom functionality?

        D3DXMATRIX orthographicMatrix;
        D3DXMATRIX identityMatrix;

        D3DXMatrixOrthoLH(&orthographicMatrix, nScreenWidth, nScreenHeight, 0.0f, 1.0f);
        D3DXMatrixIdentity(&identityMatrix);

        device->SetTransform(D3DTS_PROJECTION, &orthographicMatrix);
        device->SetTransform(D3DTS_WORLD, &identityMatrix);
        device->SetTransform(D3DTS_VIEW, &identityMatrix);

    This code is for the initial setup. Then, for zooming, I multiply the zoom factor into nScreenWidth and nScreenHeight.
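
    A sketch of the divide-by-zoom variant (the function name and parameters are illustrative): shrinking the width and height passed to D3DXMatrixOrthoLH shows less of the world, which makes everything appear larger, so dividing by the zoom factor zooms in and multiplying zooms out. One caveat worth checking: ID3DXSprite sets up its own screen-space transform unless it is asked to respect the device transforms (the D3DXSPRITE_OBJECTSPACE flag in Begin), so these matrices only take effect in that mode.

        #include <d3dx9.h>

        // Rebuild the projection whenever the zoom factor changes.
        void setOrthoZoom(IDirect3DDevice9* device, float screenW, float screenH, float zoom)
        {
            D3DXMATRIX ortho, identity;
            D3DXMatrixOrthoLH(&ortho, screenW / zoom, screenH / zoom, 0.0f, 1.0f);
            D3DXMatrixIdentity(&identity);

            device->SetTransform(D3DTS_PROJECTION, &ortho);
            device->SetTransform(D3DTS_VIEW, &identity);
            device->SetTransform(D3DTS_WORLD, &identity);
        }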

    Read the article

  • Repairing back-facing triangles without user input

    - by LTR
    My 3D application works with user-imported 3D models. Frequently, those models have a few triangles facing in the wrong direction (for example, a 3D roof where a few triangles face the inside of the building). I want to repair those automatically. We can make several assumptions about these 3D models: they are completely closed, without holes, and the camera is always on the outside. My idea: shoot 500 rays from every triangle outwards in all directions. From the back side of the triangle, all rays will hit another part of the model; from the front side, at least one ray will hit nothing. Is there a better algorithm? Are there any papers about something like this?
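
    For what it's worth, a structural sketch of the ray-probing idea described above, with the mesh queries left as clearly hypothetical helpers (they are not from any particular library): a triangle whose probe rays along its normal are all blocked by the rest of the closed mesh is facing the inside, and flipping it is just swapping two of its vertices.

        #include <utility>
        #include <vector>

        struct Vec3 { float x, y, z; };
        struct Triangle { Vec3 v0, v1, v2; };

        // Hypothetical helpers, declarations only:
        Vec3 faceNormal(const Triangle& t);                 // normalized cross(v1-v0, v2-v0)
        Vec3 centroid(const Triangle& t);
        Vec3 jitterAroundNormal(Vec3 normal, int i);        // i-th probe direction near the normal
        bool rayHitsMesh(const std::vector<Triangle>& mesh, // any-hit query, skipping 'ignore'
                         Vec3 origin, Vec3 direction, const Triangle* ignore);

        void repairWinding(std::vector<Triangle>& mesh, int raysPerTriangle)
        {
            for (Triangle& t : mesh)
            {
                int blocked = 0;
                for (int i = 0; i < raysPerTriangle; ++i)
                    if (rayHitsMesh(mesh, centroid(t), jitterAroundNormal(faceNormal(t), i), &t))
                        ++blocked;

                if (blocked == raysPerTriangle)   // nothing escapes: the front faces inward
                    std::swap(t.v1, t.v2);        // reverse the winding
            }
        }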

    Read the article

  • Circle vs Edge collision detection / resolution

    - by topheman
    I made a JavaScript class, Ball.js, that handles physics interactions between balls as well as painting. In v1.0, ball-vs-ball collision detection and resolution is handled well. In the next version (v2) I'm trying to add edge-collision handling, and I'm having some problems; maybe you will be able to help me. All the v2 branch source code is in the GitHub repository: https://github.com/topheman/Ball.js/tree/v2 The v2 demos (where you can see the bug I will be talking about): http://labs.topheman.com/Ball-v2/#help As you will see in the demo, I have two major problems that I'm having a really hard time solving in Ball.js:

        - method resolveEdgeCollision: the bounce angle is inconsistent
        - method checkEdgeCollision: if the ball's velocity (the distance it covers each frame) is higher than its diameter, it will eventually pass through an edge without triggering any collision

    Any ideas?
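
    Since the library itself is JavaScript, the snippet below is only a language-neutral sketch (written as C++) of the usual closest-point-on-segment test between a circle and an edge; the vector from that closest point to the centre also gives a consistent normal to reflect the velocity about, and the tunnelling case needs either sub-stepping the motion or a swept test on top of this.

        #include <algorithm>
        #include <cmath>

        struct Vec2 { float x, y; };

        // Static test: does a circle at p with the given radius touch segment AB?
        bool circleHitsSegment(Vec2 p, float radius, Vec2 a, Vec2 b)
        {
            const float abx = b.x - a.x, aby = b.y - a.y;
            const float apx = p.x - a.x, apy = p.y - a.y;
            const float lengthSq = abx * abx + aby * aby;
            float t = lengthSq > 0.0f ? (apx * abx + apy * aby) / lengthSq : 0.0f;
            t = std::max(0.0f, std::min(1.0f, t));         // clamp to the segment
            const float cx = a.x + t * abx - p.x;          // closest point minus centre
            const float cy = a.y + t * aby - p.y;
            return cx * cx + cy * cy <= radius * radius;
        }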

    Read the article

  • XNA Drag Gestures - fractional delta values

    - by Den
    I have an issue with objects moving roughly twice as far as expected when dragging them. I am comparing my application to the standard TouchGestureSample from MSDN. For some reason, in my application gesture samples have fractional positions and deltas. Both apps use the same Microsoft.Xna.Framework.Input.Touch.dll, v4.0.30319, and I am running both using the standard Windows Phone emulator. I am setting my breakpoint immediately after this line of code in a simple Update method:

        GestureSample gesture = TouchPanel.ReadGesture();

    Typical values in my app: Delta = {X:-13.56522 Y:4.166667}, Position = {X:184.6956 Y:417.7083}. Typical values in the sample app: Delta = {X:7 Y:16}, Position = {X:497 Y:244}. Has anyone seen this issue? Does anyone have any suggestions? Thank you.

    Read the article

  • Where have the Direct3D 11 tutorials on MSDN gone?

    - by Cam Jackson
    I've had this tutorial bookmarked for ages. I've just decided to give DX11 a real go, so I've gone through that tutorial, but I can't find where the next one in the series is! There are no links from that page to either the next in the series or back up to a table of contents that lists all of the tutorials. These are just companion tutorials to the samples that come with the SDK, but I find them very helpful. Searching MSDN from Google and the MSDN Bing search box has turned up nothing; it's as if they've removed all links to these tutorials, but the pages are still there if you have the URLs. Unfortunately, MSDN URLs are akin to YouTube URLs, so I can't just guess the URL of the next tutorial. Does anyone have any idea what happened to these tutorials, or how I can find the others?

    Read the article

  • Different bounding volumes for culling and collision detection

    - by Serthy
    Should an object in a 3D engine use different bounding volumes for collision detection (broad phase) and culling? Basically, class renderBounds and class physBounds versus a single class boundingVolume? Each of these classes could then either contain the same type of volume (AABBs, kDOPs, spheres, etc.) or one that specially fits the particular object. (Note: this is without considering the use of an external physics engine.)

    Read the article

  • Why is my collision detection not accurate?

    - by optimisez
    After trying and trying, I still cannot understand why the character's legs go through the wall, while there is no clipping issue when I hit the wall from below. How should I fix it so that he stands still on the wall?

        void initPlayer()
        {
            // Create texture.
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "player.png", 169, 44, D3DX_DEFAULT, NULL,
                D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT,
                D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &player);

            playerRect.left = playerRect.top = 0;
            playerRect.right = 29;
            playerRect.bottom = 36;

            playerDest.X = 0;
            playerDest.Y = 564;
            playerDest.length = playerRect.right - playerRect.left;
            playerDest.height = playerRect.bottom - playerRect.top;
        }

        void initBox()
        {
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "brock.png", 330, 132, D3DX_DEFAULT, NULL,
                D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT,
                D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &box);

            boxRect.left = 33;
            boxRect.top = 0;
            boxRect.right = 63;
            boxRect.bottom = 30;

            boxDest.X = boxDest.Y = 300;
            boxDest.length = boxRect.right - boxRect.left;
            boxDest.height = boxRect.bottom - boxRect.top;
        }

        bool spriteCollide(Entity player, Entity target)
        {
            float left1, left2;
            float right1, right2;
            float top1, top2;
            float bottom1, bottom2;

            left1 = player.X;
            left2 = target.X;
            right1 = player.X + player.length;
            right2 = target.X + target.length;
            top1 = player.Y;
            top2 = target.Y;
            bottom1 = player.Y + player.height;
            bottom2 = target.Y + target.height;

            if (bottom1 < top2) return false;
            if (top1 > bottom2) return false;
            if (right1 < left2) return false;
            if (left1 > right2) return false;

            return true;
        }

        void collideWithBox()
        {
            if ( spriteCollide(playerDest, boxDest) && keyArr[VK_UP])
                //playerDest.Y += 50;
                playerDest.Y = boxDest.Y + boxDest.height;
            else if ( spriteCollide(playerDest, boxDest) && !keyArr[VK_UP])
                playerDest.Y = boxDest.Y - boxDest.height;
        }
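
    A likely fix, assuming Y grows downward here (the player starts at Y = 564 and the box sits higher up the screen at Y = 300): when the player lands on top of the box, the offset has to be the player's height, not the box's. With a 36-pixel-tall player and a 30-pixel-tall box, subtracting the box height leaves 6 pixels of leg inside the box, which matches the symptom described above.

        // Sketch of the corrected resolution, reusing the names from the question.
        void collideWithBox()
        {
            if (spriteCollide(playerDest, boxDest) && keyArr[VK_UP])
                playerDest.Y = boxDest.Y + boxDest.height;      // hit from below: unchanged
            else if (spriteCollide(playerDest, boxDest) && !keyArr[VK_UP])
                playerDest.Y = boxDest.Y - playerDest.height;   // land on top: use the player's height
        }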

    Read the article

  • How to display image in second layer in Cocos2d

    - by PeterK
    I am very new at Cocos2d and is testing to displaying an image over the "Hello World" text on a second layer and need help to get it work. I guess it is some basic stuff here and appreciate any tips etc. with this. I know that if i put the display-code (myLayer1) in the "init" it work or do the call [self goHere] from the "init" in myLayer1 it works but i want to call the "goHere" directly. I have the following code: HelloWorld.m: #import "HelloWorldLayer.h" #import "myLayer1.h" // HelloWorldLayer implementation @implementation HelloWorldLayer +(CCScene *) scene { // 'scene' is an autorelease object. CCScene *scene = [CCScene node]; // 'layer' is an autorelease object. HelloWorldLayer *layer = [HelloWorldLayer node]; myLayer1 *layer1 = [myLayer1 node]; // add layer as a child to scene [scene addChild: layer]; [scene addChild: layer1]; // return the scene return scene; } // on "init" you need to initialize your instance -(id) init { // always call "super" init // Apple recommends to re-assign "self" with the "super" return value if( (self=[super init])) { // create and initialize a Label CCLabelTTF *label = [CCLabelTTF labelWithString:@"Hello World" fontName:@"Marker Felt" fontSize:64]; // ask director the the window size CGSize size = [[CCDirector sharedDirector] winSize]; // position the label on the center of the screen label.position = ccp( size.width /2 , size.height/2 ); // add the label as a child to this Layer [self addChild: label]; myLayer1 *a1 = [myLayer1 new]; [a1 goHere]; [myLayer1 release]; } return self; } myLayer1.m: #import "myLayer1.h" @implementation myLayer1 -(void)goHere { NSLog(@">>>>goHere<<<<"); CGSize size = [[CCDirector sharedDirector] winSize]; CCSprite *vv = [CCSprite spriteWithFile:@"hand.png"]; vv.position = ccp( size.width /2 , size.height/2 ); [self addChild:vv z:3]; } -(id) init { // always call "super" init // Apple recommends to re-assign "self" with the "super" return value if( (self=[super init])) { } return self; } @end

    Read the article

  • Line Intersection from parametric equation

    - by Sidar
    I'm sure this question has been asked before; however, I'm trying to connect the dots by translating an equation on paper into an actual function. I thought it would be interesting to ask here instead of on the math sites (since it's going to be used for games anyway). Let's say we have our vector equation: x = s + Lr, where x is the resulting vector, s our starting point/vector, L our parameter and r our direction vector. The (not sure it's called like this, please correct me) normal equation is: x.n = c. If we substitute our vector equation we get (s + Lr).n = c. We now need to isolate L, which results in L = (c - s.n) / (r.n). L needs to satisfy 0 < L < 1, meaning it needs to be between 0 and 1. My question: I want to know what L is, so that if I were to substitute L into both vector equations (or two lines) they would give me the same intersection coordinates - that is, if they intersect. But I can't wrap my head around how to use this for two lines and find the parameter that fits the intersection point. Could someone show, with a simple example, how I could translate this into a function/method?
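
    A sketch of the two-segment version of that derivation, in C++: write each segment in the same parametric form, set them equal, and take the 2D cross product with each direction in turn to isolate the two parameters. Both parameters have to land in [0, 1] for the segments (rather than the infinite lines) to intersect; substituting either parameter back gives the same intersection point.

        #include <cmath>

        struct Vec2 { float x, y; };

        static float cross2(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

        // Segment 1: p = s1 + t * r1.  Segment 2: p = s2 + u * r2.
        bool segmentIntersection(Vec2 s1, Vec2 r1, Vec2 s2, Vec2 r2, Vec2* out)
        {
            const float denom = cross2(r1, r2);
            if (std::fabs(denom) < 1e-6f)
                return false;                               // parallel (or collinear)

            const Vec2 d = {s2.x - s1.x, s2.y - s1.y};
            const float t = cross2(d, r2) / denom;          // parameter along segment 1
            const float u = cross2(d, r1) / denom;          // parameter along segment 2
            if (t < 0.0f || t > 1.0f || u < 0.0f || u > 1.0f)
                return false;                               // lines cross outside the segments

            *out = Vec2{s1.x + t * r1.x, s1.y + t * r1.y};  // same point as s2 + u * r2
            return true;
        }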

    Read the article

  • Making efficient voxel engines using "chunks"

    - by Wardy
    Concept: I'm currently looking into how voxel engines work, with a view to possibly making one myself. I see a lot of material like this... https://sites.google.com/site/letsmakeavoxelengine/home/chunks ...which talks about how to go about reducing draw calls. What I can't seem to understand is how it actually saves draw calls, given that the logic appears to be something like this:

        Without chunks:
            foreach voxel in myvoxels
                DrawIfVisible()

        With chunks:
            foreach chunk in mychunks
                DrawIfVisible()
            which then does:
                foreach voxel in myvoxels
                    DrawIfVisible()

    So surely you saved nothing?! You still make a draw call for each visible voxel in either scenario; a visible voxel needs a draw call either way. The only real saving I can see is that the logic that evaluates a chunk can determine whether a large number of voxels are visible or not, saving a bit of "is this chunk visible" CPU time. But it's the draw calls that interest me: the fewer of those, the faster the application. EDIT: In case it makes any difference, I will probably be using XNA (DirectX, not OpenGL) for my engine, so don't consider my choice of example in the link above my choice of technology. But this question is such that I doubt it would matter.
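
    The part the pseudocode above leaves out is that a chunked engine does not issue a draw call per visible voxel at all: the per-voxel work happens once, at mesh-build time, when all exposed faces of a chunk are baked into a single vertex buffer. Drawing the chunk afterwards is one draw call regardless of how many voxels it contains, and the buffer is only rebuilt when a block in that chunk changes. A rough sketch (Chunk, isSolid and appendFace are illustrative names, not from any particular engine):

        #include <vector>

        struct Vertex { float x, y, z, u, v; };

        struct Chunk
        {
            static const int SIZE = 16;
            unsigned char blocks[SIZE][SIZE][SIZE];
            std::vector<Vertex> mesh;                 // rebuilt only when a block changes

            bool isSolid(int x, int y, int z) const
            {
                if (x < 0 || y < 0 || z < 0 || x >= SIZE || y >= SIZE || z >= SIZE)
                    return false;                     // treat out-of-chunk as empty
                return blocks[x][y][z] != 0;
            }

            void appendFace(int x, int y, int z, int side);   // pushes one quad into 'mesh'

            void rebuildMesh()
            {
                mesh.clear();
                for (int x = 0; x < SIZE; ++x)
                    for (int y = 0; y < SIZE; ++y)
                        for (int z = 0; z < SIZE; ++z)
                        {
                            if (!isSolid(x, y, z)) continue;
                            // Only faces exposed to air end up in the mesh.
                            if (!isSolid(x + 1, y, z)) appendFace(x, y, z, 0);
                            if (!isSolid(x - 1, y, z)) appendFace(x, y, z, 1);
                            if (!isSolid(x, y + 1, z)) appendFace(x, y, z, 2);
                            if (!isSolid(x, y - 1, z)) appendFace(x, y, z, 3);
                            if (!isSolid(x, y, z + 1)) appendFace(x, y, z, 4);
                            if (!isSolid(x, y, z - 1)) appendFace(x, y, z, 5);
                        }
                // Upload 'mesh' to one vertex buffer; rendering it later is a single draw call.
            }
        };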

    Read the article

  • OpenGL directional light creating black spots

    - by AnonymousDeveloper
    I probably ought to start by saying that I suspect the problem is that one of my vectors is not in the correct "space", but I don't know for sure. I am having a strange problem with a directional light. When I move the camera away from (0.0, 0.0, 0.0) it creates tiny black spots that grow larger as the distance increases. I apologize ahead of time for the length of the code. Vertex shader: #version 410 core in vec3 vf_normal; in vec3 vf_bitangent; in vec3 vf_tangent; in vec2 vf_textureCoordinates; in vec3 vf_vertex; out vec3 tc_normal; out vec3 tc_bitangent; out vec3 tc_tangent; out vec2 tc_textureCoordinates; out vec3 tc_vertex; uniform mat3 vf_m_normal; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform float vf_te_inner; uniform float vf_te_outer; void main() { tc_normal = vf_normal; tc_bitangent = vf_bitangent; tc_tangent = vf_tangent; tc_textureCoordinates = vf_textureCoordinates; tc_vertex = vf_vertex; gl_Position = vf_m_mvp * vec4(vf_vertex, 1.0); } Tessellation Control shader: #version 410 core layout (vertices = 3) out; in vec3 tc_normal[]; in vec3 tc_bitangent[]; in vec3 tc_tangent[]; in vec2 tc_textureCoordinates[]; in vec3 tc_vertex[]; out vec3 te_normal[]; out vec3 te_bitangent[]; out vec3 te_tangent[]; out vec2 te_textureCoordinates[]; out vec3 te_vertex[]; uniform float vf_te_inner; uniform float vf_te_outer; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; #define ID gl_InvocationID float getTessLevelInner(float distance0, float distance1) { float avgDistance = (distance0 + distance1) / 2.0; return clamp((vf_te_inner - avgDistance), 1.0, vf_te_inner); } float getTessLevelOuter(float distance0, float distance1) { float avgDistance = (distance0 + distance1) / 2.0; return clamp((vf_te_outer - avgDistance), 1.0, vf_te_outer); } void main() { te_normal[gl_InvocationID] = tc_normal[gl_InvocationID]; te_bitangent[gl_InvocationID] = tc_bitangent[gl_InvocationID]; te_tangent[gl_InvocationID] = tc_tangent[gl_InvocationID]; te_textureCoordinates[gl_InvocationID] = tc_textureCoordinates[gl_InvocationID]; te_vertex[gl_InvocationID] = tc_vertex[gl_InvocationID]; float eyeToVertexDistance0 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[0], 1.0)).xyz); float eyeToVertexDistance1 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[1], 1.0)).xyz); float eyeToVertexDistance2 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[2], 1.0)).xyz); gl_TessLevelOuter[0] = getTessLevelOuter(eyeToVertexDistance1, eyeToVertexDistance2); gl_TessLevelOuter[1] = getTessLevelOuter(eyeToVertexDistance2, eyeToVertexDistance0); gl_TessLevelOuter[2] = getTessLevelOuter(eyeToVertexDistance0, eyeToVertexDistance1); gl_TessLevelInner[0] = getTessLevelInner(eyeToVertexDistance2, eyeToVertexDistance0); } Tessellation Evaluation shader: #version 410 core layout (triangles, equal_spacing, cw) in; in vec3 te_normal[]; in vec3 te_bitangent[]; in vec3 te_tangent[]; in vec2 te_textureCoordinates[]; in vec3 te_vertex[]; out vec3 g_normal; out vec3 g_bitangent; out vec4 g_patchDistance; out vec3 g_tangent; out vec2 g_textureCoordinates; out vec3 g_vertex; uniform float vf_te_inner; uniform float vf_te_outer; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; 
uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat3 vf_m_normal; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_displace; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2) { return vec2(gl_TessCoord.x) * v0 + vec2(gl_TessCoord.y) * v1 + vec2(gl_TessCoord.z) * v2; } vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2) { return vec3(gl_TessCoord.x) * v0 + vec3(gl_TessCoord.y) * v1 + vec3(gl_TessCoord.z) * v2; } float amplify(float d, float scale, float offset) { d = scale * d + offset; d = clamp(d, 0, 1); d = 1 - exp2(-2*d*d); return d; } float getDisplacement(vec2 t0, vec2 t1, vec2 t2) { float displacement = 0.0; vec2 textureCoordinates = interpolate2D(t0, t1, t2); vec2 vector = ((t0 + t1 + t2) / 3.0); float sampleDistance = sqrt((vector.x * vector.x) + (vector.y * vector.y)); sampleDistance /= ((vf_te_inner + vf_te_outer) / 2.0); displacement += texture(vf_t_displace, textureCoordinates).x; displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, -sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, -sampleDistance)).x; return (displacement / 5.0); } void main() { g_normal = normalize(interpolate3D(te_normal[0], te_normal[1], te_normal[2])); g_bitangent = normalize(interpolate3D(te_bitangent[0], te_bitangent[1], te_bitangent[2])); g_patchDistance = vec4(gl_TessCoord, (1.0 - gl_TessCoord.y)); g_tangent = normalize(interpolate3D(te_tangent[0], te_tangent[1], te_tangent[2])); g_textureCoordinates = interpolate2D(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]); g_vertex = interpolate3D(te_vertex[0], te_vertex[1], te_vertex[2]); float displacement = getDisplacement(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]); float d2 = min(min(min(g_patchDistance.x, g_patchDistance.y), g_patchDistance.z), g_patchDistance.w); d2 = amplify(d2, 50, -0.5); g_vertex += g_normal * displacement * 0.1 * d2; gl_Position = vf_m_mvp * vec4(g_vertex, 1.0); } Geometry shader: #version 410 core layout (triangles) in; layout (triangle_strip, max_vertices = 3) out; in vec3 g_normal[3]; in vec3 g_bitangent[3]; in vec4 g_patchDistance[3]; in vec3 g_tangent[3]; in vec2 g_textureCoordinates[3]; in vec3 g_vertex[3]; out vec3 f_tangent; out vec3 f_bitangent; out vec3 f_eyeDirection; out vec3 f_lightDirection; out vec3 f_normal; out vec4 f_patchDistance; out vec4 f_shadowCoordinates; out vec2 f_textureCoordinates; out vec3 f_vertex; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat3 vf_m_normal; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; void main() { int index = 0; while (index < 3) { vec3 vertexNormal_cameraspace = vf_m_normal * normalize(g_normal[index]); vec3 vertexTangent_cameraspace = vf_m_normal * normalize(f_tangent); vec3 vertexBitangent_cameraspace = vf_m_normal * normalize(f_bitangent); mat3 TBN = transpose(mat3( vertexTangent_cameraspace, vertexBitangent_cameraspace, vertexNormal_cameraspace )); 
vec3 eyeDirection = -(vf_m_view * vf_m_model * vec4(g_vertex[index], 1.0)).xyz; vec3 lightDirection = normalize(-(vf_m_view * vec4(vf_l_position, 1.0)).xyz); f_eyeDirection = TBN * eyeDirection; f_lightDirection = TBN * lightDirection; f_normal = normalize(g_normal[index]); f_patchDistance = g_patchDistance[index]; f_shadowCoordinates = vf_m_depthBias * vec4(g_vertex[index], 1.0); f_textureCoordinates = g_textureCoordinates[index]; f_vertex = (vf_m_model * vec4(g_vertex[index], 1.0)).xyz; gl_Position = gl_in[index].gl_Position; EmitVertex(); index ++; } EndPrimitive(); } Fragment shader: #version 410 core in vec3 f_bitangent; in vec3 f_eyeDirection; in vec3 f_lightDirection; in vec3 f_normal; in vec4 f_patchDistance; in vec4 f_shadowCoordinates; in vec3 f_tangent; in vec2 f_textureCoordinates; in vec3 f_vertex; out vec4 fragColor; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; vec2 poissonDisk[16] = vec2[]( vec2(-0.94201624, -0.39906216), vec2( 0.94558609, -0.76890725), vec2(-0.09418410, -0.92938870), vec2( 0.34495938, 0.29387760), vec2(-0.91588581, 0.45771432), vec2(-0.81544232, -0.87912464), vec2(-0.38277543, 0.27676845), vec2( 0.97484398, 0.75648379), vec2( 0.44323325, -0.97511554), vec2( 0.53742981, -0.47373420), vec2(-0.26496911, -0.41893023), vec2( 0.79197514, 0.19090188), vec2(-0.24188840, 0.99706507), vec2(-0.81409955, 0.91437590), vec2( 0.19984126, 0.78641367), vec2( 0.14383161, -0.14100790) ); float random(vec3 seed, int i) { vec4 seed4 = vec4(seed,i); float dot_product = dot(seed4, vec4(12.9898, 78.233, 45.164, 94.673)); return fract(sin(dot_product) * 43758.5453); } float amplify(float d, float scale, float offset) { d = scale * d + offset; d = clamp(d, 0, 1); d = 1 - exp2(-2.0 * d * d); return d; } void main() { vec3 lightColor = vf_l_color.xyz; float lightPower = vf_l_color.w; vec3 materialDiffuseColor = texture(vf_t_diffuse, f_textureCoordinates).xyz; vec3 materialAmbientColor = vec3(0.1, 0.1, 0.1) * materialDiffuseColor; vec3 materialSpecularColor = texture(vf_t_specular, f_textureCoordinates).xyz; vec3 n = normalize(texture(vf_t_normal, f_textureCoordinates).rgb * 2.0 - 1.0); vec3 l = normalize(f_lightDirection); float cosTheta = clamp(dot(n, l), 0.0, 1.0); vec3 E = normalize(f_eyeDirection); vec3 R = reflect(-l, n); float cosAlpha = clamp(dot(E, R), 0.0, 1.0); float visibility = 1.0; float bias = 0.005 * tan(acos(cosTheta)); bias = clamp(bias, 0.0, 0.01); for (int i = 0; i < 4; i ++) { float shading = (0.5 / 4.0); int index = i; visibility -= shading * (1.0 - texture(vf_t_shadow, vec3(f_shadowCoordinates.xy + poissonDisk[index] / 3000.0, (f_shadowCoordinates.z - bias) / f_shadowCoordinates.w))); }\n" fragColor.xyz = materialAmbientColor + visibility * materialDiffuseColor * lightColor * lightPower * cosTheta + visibility * materialSpecularColor * lightColor * lightPower * pow(cosAlpha, 5); fragColor.w = texture(vf_t_diffuse, f_textureCoordinates).w; } The following images should be enough to give you an idea of the problem. Before moving the camera: Moving the camera just a little. Moving it to the center of the scene.

    Read the article

  • Bridge made out of blocks at an angle

    - by Pozzuh
    I'm having a bit of trouble with the math behind my project. I want the player to be able to select two points (vectors). With these two points, a floor should be created. When the points are parallel to the x-axis it's easy: just calculate the number of blocks needed by a simple division, loop through that amount (in x and y) and keep increasing the coordinate by the size of a block. The trouble starts when the two points aren't parallel to an axis, for example at an angle of 45 degrees. How do I handle the math behind this? If I wasn't completely clear, I made this awesome drawing in Paint to demonstrate what I want to achieve. The two red dots would be the player-selected locations. (The blocks indeed aren't square.) http://i.imgur.com/pzhFMEs.png
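
    One way to handle the angled case, sketched as plain math in C++ (placeBridge and blockLength are illustrative names): walk from the first selected point toward the second in steps of one block length along the normalized direction, and rotate every block by the atan2 of that direction.

        #include <cmath>
        #include <cstdio>

        struct Vec2 { float x, y; };

        void placeBridge(Vec2 a, Vec2 b, float blockLength)
        {
            const float dx = b.x - a.x, dy = b.y - a.y;
            const float span = std::sqrt(dx * dx + dy * dy);
            if (span < 1e-6f)
                return;                                       // both points coincide

            const int count = (int)std::ceil(span / blockLength);
            const float angle = std::atan2(dy, dx);           // rotation for every block, in radians
            const float stepX = dx / span * blockLength;
            const float stepY = dy / span * blockLength;

            for (int i = 0; i < count; ++i)
            {
                const float cx = a.x + stepX * (i + 0.5f);    // centre of block i
                const float cy = a.y + stepY * (i + 0.5f);
                std::printf("block %d at (%.2f, %.2f), rotation %.2f rad\n", i, cx, cy, angle);
            }
        }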

    Read the article

  • HLSL: What do you get when you subtract the world position from InvertViewProjection.Transform?

    - by cubrman
    In one of NVIDIA's Vertex shaders (the metal one) I found the following code: // transform object normals, tangents, & binormals to world-space: float4x4 WorldITXf : WorldInverseTranspose < string UIWidget="None"; >; // provide tranform from "view" or "eye" coords back to world-space: float4x4 ViewIXf : ViewInverse < string UIWidget="None"; >; ... float4 Po = float4(IN.Position.xyz,1); // homogeneous location coordinates float4 Pw = mul(Po,WorldXf); // convert to "world" space OUT.WorldView = normalize(ViewIXf[3].xyz - Pw.xyz); The term OUT.WorldView is subsequently used in a Pixel Shader to compute lighting: float3 Ln = normalize(IN.LightVec.xyz); float3 Nn = normalize(IN.WorldNormal); float3 Vn = normalize(IN.WorldView); float3 Hn = normalize(Vn + Ln); float4 litV = lit(dot(Ln,Nn),dot(Hn,Nn),SpecExpon); DiffuseContrib = litV.y * Kd * LightColor + AmbiColor; SpecularContrib = litV.z * LightColor; Can anyone tell me what exactly is WorldView here? And why do they add it to the normal?
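
    For readers skimming past this one: in that shader, ViewIXf[3].xyz is the camera's world-space position (the translation row of the inverse view matrix, given the row-vector mul convention used there), so WorldView is the direction from the surface point toward the eye. In the pixel shader it is combined with the light direction, not the normal, to form the Blinn-Phong half-vector for the specular term. A plain C++ restatement of that math:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static Vec3 normalize(Vec3 v)
        {
            const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
            return Vec3{v.x / len, v.y / len, v.z / len};
        }

        // Hn = normalize(Vn + Ln), where Vn points from the surface to the camera.
        Vec3 halfVector(Vec3 cameraWorldPos, Vec3 surfaceWorldPos, Vec3 lightDirTowardLight)
        {
            const Vec3 v = normalize(Vec3{cameraWorldPos.x - surfaceWorldPos.x,
                                          cameraWorldPos.y - surfaceWorldPos.y,
                                          cameraWorldPos.z - surfaceWorldPos.z});
            const Vec3 l = normalize(lightDirTowardLight);
            return normalize(Vec3{v.x + l.x, v.y + l.y, v.z + l.z});
        }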

    Read the article

  • Rendering Texture Quad to Screen or FBO (OpenGL ES)

    - by Usman.3D
    I need to render the texture on the iOS device's screen or a render-to-texture frame buffer object. But it does not show any texture. It's all black. (I am loading texture with image myself for testing purpose) //Load texture data UIImage *image=[UIImage imageNamed:@"textureImage.png"]; GLuint width = FRAME_WIDTH; GLuint height = FRAME_HEIGHT; //Create context void *imageData = malloc(height * width * 4); CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big); CGColorSpaceRelease(colorSpace); //Prepare image CGContextClearRect(context, CGRectMake(0, 0, width, height)); CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage); glGenTextures(1, &texture); glBindTexture(GL_TEXTURE_2D, texture); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); Simple Texture Quad drawing code mentioned here //Bind Texture, Bind render-to-texture FBO and then draw the quad const float quadPositions[] = { 1.0, 1.0, 0.0, -1.0, 1.0, 0.0, -1.0, -1.0, 0.0, -1.0, -1.0, 0.0, 1.0, -1.0, 0.0, 1.0, 1.0, 0.0 }; const float quadTexcoords[] = { 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0 }; // stop using VBO glBindBuffer(GL_ARRAY_BUFFER, 0); // setup buffer offsets glVertexAttribPointer(ATTRIB_VERTEX, 3, GL_FLOAT, GL_FALSE, 3*sizeof(float), quadPositions); glVertexAttribPointer(ATTRIB_TEXCOORD0, 2, GL_FLOAT, GL_FALSE, 2*sizeof(float), quadTexcoords); // ensure the proper arrays are enabled glEnableVertexAttribArray(ATTRIB_VERTEX); glEnableVertexAttribArray(ATTRIB_TEXCOORD0); //Bind Texture and render-to-texture FBO. glBindTexture(GL_TEXTURE_2D, GLid); //Actually wanted to render it to render-to-texture FBO, but now testing directly on default FBO. //glBindFramebuffer(GL_FRAMEBUFFER, textureFBO[pixelBuffernum]); // draw glDrawArrays(GL_TRIANGLES, 0, 2*3); What am I doing wrong in this code? P.S. I'm not familiar with shaders yet, so it is difficult for me to make use of them right now.

    Read the article
