Search Results

Search found 1507 results on 61 pages for 'coordinates'.

Page 21/61 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • How can I perform 2D side-scroller collision checks in a tile-based map?

    - by bill
    I am trying to create a game where you have a player that can move horizontally and jump. It's kind of like Mario but it isn't a side scroller. I'm using a 2D array to implement a tile map. My problem is that I don't understand how to check for collisions using this implementation. After spending about two weeks thinking about it, I've got two possible solutions, but both of them have some problems. Let's say that my map is defined by the following tiles: 0 = sky 1 = player 2 = ground The data for the map itself might look like: 00000 10002 22022 For solution 1, I'd move the player (the 1) a complete tile and update the map directly. This makes the collision check easy because you can see if the player is touching the ground simply by looking at the tile directly below the player: // x and y are the tile coordinates of the player. The tile origin is the upper-left. if (grid[x][y+1] == 2){ // The player is standing on top of a ground tile. } The problem with this approach is that the player moves in discrete tile steps, so the animation isn't smooth. For solution 2, I thought about moving the player via pixel coordinates and not updating the tile map. This will make the animation much smoother because I have a smaller movement unit per frame. However, this means I can't really accurately store the player in the tile map because sometimes he would logically be between two tiles. But the bigger problem here is that I think the only way to check for collision is to use Java's intersection method, which means the player would need to be at least a single pixel "into" the ground to register collision, and that won't look good. How can I solve this problem?
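
    A third option keeps pixel coordinates for movement (as in solution 2) but converts them to tile indices only when testing collisions, snapping the player onto the tile top on landing so he never sinks a pixel into the ground. A minimal sketch in Java, assuming a grid indexed as grid[x][y] like the snippet above and a hypothetical 32-pixel tile size:

      class TileCollision {
          static final int TILE = 32;   // assumed tile size in pixels
          static final int GROUND = 2;

          // pos/vel are pixel coordinates of the player's top-left corner;
          // playerH is the sprite height in pixels
          static void resolveVertical(float[] pos, float[] vel, int[][] grid, int playerH) {
              float newY = pos[1] + vel[1];
              int tileX = (int) (pos[0] / TILE);
              int tileBelow = (int) ((newY + playerH) / TILE); // tile under the feet
              if (vel[1] > 0 && grid[tileX][tileBelow] == GROUND) {
                  pos[1] = tileBelow * TILE - playerH; // snap onto the tile top
                  vel[1] = 0;                          // landed: zero penetration
              } else {
                  pos[1] = newY;                       // still airborne
              }
          }
      }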

    Read the article

  • What is causing these visual artifacts on my OpenGL sprites?

    - by Amplify91
    What could be the cause of the defects in my character's sprite? I am using OpenGL ES 2.0. I draw my sprites in a sprite batch that uses UV coordinates from one large texture atlas. If you look around the character's edges, you'll see two noticeable problems: The invisible alpha background is not invisible, but shows a strange static-like background. There are unwanted streaks where the character nears the edge of the frame (but only in some frames of the animation, this happened to be one of them). Any idea what could be causing these? I will provide related code if asked for, but I'll try to avoid just dumping the entire project and expecting someone to look through it all. EDIT: Here's a bit of code: This is how I generate my UV coordinates: private float[] createFrameUV(int frameWidth, int frameHeight, int x, int y){ float[] uv = new float[4]; if(numberOfFrames>1){ float width = (float)frameWidth / (float)mBitmap.getWidth(); float height = (float)frameHeight / (float)mBitmap.getHeight(); float u = (float)x / (float)mBitmap.getWidth(); float v = (float)y / (float)mBitmap.getHeight(); uv[0] = u; uv[1] = v; uv[2] = u + width; uv[3] = v + height; }else{ uv[0] = 0f; uv[1] = 0f; uv[2] = 1f; uv[3] = 1f; } return uv; } These are some OpenGL settings: GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR); GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR); GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE); GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
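
    Both symptoms are consistent with texture-atlas bleeding: with GL_LINEAR filtering, samples at a frame's border average in texels from the neighbouring frame (or from uninitialised atlas padding, which would explain the static). A common mitigation, shown as a hedged variant of the createFrameUV method above (single-frame branch omitted), is to inset each frame's UVs by half a texel; adding a border of duplicated pixels around each frame in the atlas helps further:

      // Hypothetical variant of createFrameUV with a half-texel inset.
      private float[] createFrameUV(int frameWidth, int frameHeight, int x, int y) {
          float texW = (float) mBitmap.getWidth();
          float texH = (float) mBitmap.getHeight();
          float halfU = 0.5f / texW; // half a texel in U
          float halfV = 0.5f / texH; // half a texel in V
          float[] uv = new float[4];
          uv[0] = x / texW + halfU;                 // left edge, inset
          uv[1] = y / texH + halfV;                 // top edge, inset
          uv[2] = (x + frameWidth) / texW - halfU;  // right edge, inset
          uv[3] = (y + frameHeight) / texH - halfV; // bottom edge, inset
          return uv;
      }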

    Read the article

  • c++ How to use angular velocity derived from inertia and force (torque) in 3D

    - by user1217203
    I am relatively new to game development, so my terminology and description may not be appropriate. Please excuse my poor phrasing, and help me by giving advice on how to ask better if this question seems ill-fitting. I really appreciate your efforts. Hi. I am having a hard time interpreting the set of values I have. I have inertia and force (torque) in terms of x y z. FYI, I used the x and y coordinates as my flat ground coordinates and z as my up/down. I am assuming that since f = ma, angular acceleration must be a = f / m. So I divide my torque by inertia. Then I add those x y z values to my angular velocity variable's x y z. However, these x y z values confuse me. Don't I need angle/sec or radian/sec sort of values in order to apply rotation? The x y z values I have seem to say nothing about radians or angular movement. Question: If I have ( 1, 2, 3 ) or any ( x, y, z ) as my angular velocity, how do I actually apply it as angular movement? FYI, here I am pasting my code: float mass = 100; float devidedMass = 1.0/12 * mass; Vec3 innertia( devidedMass* (_box._size.z*_box._size.z + _box._size.x*_box._size.x), devidedMass* (_box._size.y*_box._size.y + _box._size.x*_box._size.x), devidedMass* (_box._size.y*_box._size.y + _box._size.z*_box._size.z )); box._angAccel += forceAng/innertia; box._angVelo += box._angAccel; box._angAccel.allZero(); source of my inertia calculation http://www.health.uottawa.ca/biomech/courses/apa4311/solids.pdf
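
    Assuming SI units throughout, the x y z values are an angular velocity vector: its direction is the rotation axis and its magnitude the rotation rate in radians per second. To apply it, integrate over the frame time dt, e.g. by turning ω·dt into an axis-angle rotation and accumulating it into an orientation quaternion. A hedged sketch (in Java for illustration; the entry's code is C++), with quaternions stored as float[]{w, x, y, z}:

      class RotationIntegrator {
          // q: current unit orientation quaternion {w,x,y,z}; angVel in rad/s; dt in seconds
          static float[] integrate(float[] q, float[] angVel, float dt) {
              float wx = angVel[0] * dt, wy = angVel[1] * dt, wz = angVel[2] * dt;
              float angle = (float) Math.sqrt(wx*wx + wy*wy + wz*wz); // radians this frame
              if (angle < 1e-8f) return q;                            // nothing to rotate
              float s = (float) Math.sin(angle / 2) / angle;          // gives axis * sin(angle/2)
              float[] d = { (float) Math.cos(angle / 2), wx * s, wy * s, wz * s };
              // quaternion product d * q applies this frame's rotation;
              // renormalise the result occasionally to fight drift
              return new float[] {
                  d[0]*q[0] - d[1]*q[1] - d[2]*q[2] - d[3]*q[3],
                  d[0]*q[1] + d[1]*q[0] + d[2]*q[3] - d[3]*q[2],
                  d[0]*q[2] - d[1]*q[3] + d[2]*q[0] + d[3]*q[1],
                  d[0]*q[3] + d[1]*q[2] - d[2]*q[1] + d[3]*q[0]
              };
          }
      }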

    Read the article

  • Algorithm to shoot at a target in a 3d game

    - by Sebastian Bugiu
    For those of you remembering Descent Freespace, it had a nice feature to help you aim at the enemy when shooting non-homing missiles or lasers: it showed a crosshair in front of the ship you chased, telling you where to shoot in order to hit the moving target. I tried using the answer from http://stackoverflow.com/questions/4107403/ai-algorithm-to-shoot-at-a-target-in-a-2d-game?lq=1 but it's for 2D, so I tried adapting it. I first decomposed the calculation to solve the intersection point for the XoZ plane and saved the x and z coordinates, then solved the intersection point for the XoY plane and added the y coordinate to a final xyz that I then transformed to clip space, and put a texture at those coordinates. But of course it doesn't work as it should, or else I wouldn't have posted the question. From what I notice, the x found in the XoZ plane and the x found in the XoY plane are not the same, so something must be wrong. float a = ENG_Math.sqr(targetVelocity.x) + ENG_Math.sqr(targetVelocity.y) - ENG_Math.sqr(projectileSpeed); float b = 2.0f * (targetVelocity.x * targetPos.x + targetVelocity.y * targetPos.y); float c = ENG_Math.sqr(targetPos.x) + ENG_Math.sqr(targetPos.y); ENG_Math.solveQuadraticEquation(a, b, c, collisionTime); The first time, targetVelocity.y is actually targetVelocity.z (the same for targetPos), and the second time it's actually targetVelocity.y. The final position after XoZ is crossPosition.set(minTime * finalEntityVelocity.x + finalTargetPos4D.x, 0.0f, minTime * finalEntityVelocity.z + finalTargetPos4D.z); and after XoY crossPosition.y = minTime * finalEntityVelocity.y + finalTargetPos4D.y; Is my approach of separating the calculation into 2 planes any good? Or is there a whole different approach for 3D? sqr() is square, not sqrt, to avoid confusion.
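
    The plane-splitting is what breaks: each 2D solve produces its own collision time, and the two times disagree, which is why the x values differ. The same quadratic works directly in 3D with dot products, giving a single time t. A minimal sketch in Java under those assumptions:

      class Intercept {
          // p: target position relative to the shooter; v: target velocity; s: projectile speed
          static float[] aimPoint(float[] p, float[] v, float s) {
              float a = dot(v, v) - s * s;
              float b = 2f * dot(v, p);
              float c = dot(p, p);
              float t;
              if (Math.abs(a) < 1e-6f) {         // speeds match: quadratic degenerates to linear
                  t = -c / b;
              } else {
                  float disc = b * b - 4f * a * c;
                  if (disc < 0) return null;     // target can never be hit
                  float sq = (float) Math.sqrt(disc);
                  float t1 = (-b + sq) / (2f * a), t2 = (-b - sq) / (2f * a);
                  t = Math.min(t1, t2) > 0 ? Math.min(t1, t2) : Math.max(t1, t2);
              }
              if (t < 0) return null;
              // aim where the target will be after t seconds
              return new float[] { p[0] + v[0]*t, p[1] + v[1]*t, p[2] + v[2]*t };
          }

          static float dot(float[] a, float[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
      }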

    Read the article

  • Sampling Heightmap Edges for Normal map

    - by pl12
    I use a Sobel filter to generate normal maps from procedural height maps. The heightmaps are 258x258 pixels. I scale my texture coordinates like so: texCoord = (texCoord * (256/258)) + (1/258) Yet even with this I am left with the following problem: As you can see, the edges of the normal map still prove to be problematic. Setting the texture wrap mode to "clamp" also proved no help. EDIT: The Sobel filter works by sampling the 8 pixels surrounding a given pixel so that a derivative can be calculated in order to find the "normal" of the given pixel. The texture coordinates are instanced once per quad (for the quadtree that makes up the world) and are created as follows (it is quite possible that the problem results from the way I scale and offset the texCoords as seen above): Java: for(int i = 0; i<vertices.length; i++){ Vector2f coord = new Vector2f((vertices[i].x)/(worldSize), (vertices[i].z)/( worldSize)); texCoords[i] = coord; } The quad used for input here rests on the XOZ plane. 'worldSize' is the diameter of the planet. No negative texCoords are seen, as the quad used for input for this method is not centered around the origin. Is there something I am missing here? Thanks.
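
    Two things worth double-checking (guesses, not a confirmed diagnosis): written with integer literals, 256/258 and 1/258 both evaluate to 0 in Java and GLSL, and edge sampling usually wants the inset computed to texel centres rather than texel edges. A hedged sketch of a centre-aligned mapping for a 258x258 map with a 1-texel border:

      class HeightmapUV {
          // Map t in [0,1] onto the centres of the interior texels 1..256 of a
          // 258-texel-wide heightmap (float literals avoid integer division).
          static float insetCoord(float t) {
              final float n = 258.0f;                 // full size including the border
              return (1.0f + 0.5f + t * 255.0f) / n;  // ranges over 1.5/258 .. 256.5/258
          }
      }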

    Read the article

  • eSTEP Newsletter for the technical EMEA partner community

    - by mseika
    We are pleased to present to you the first issue of the eSTEP Newsletter, which is dedicated to supporting the technical EMEA partner community with more information on what is going on within the corporation, technical news regarding hardware, events, and all the other things we think may be of interest to you. Invitation: STEP TechCast: Oracle Solaris 11 Express Get an insight into how Oracle Solaris 11 Express has raised the bar on the innovation introduced in Oracle Solaris 10. Learn about new integrated features such as network-based package management tools, improvements to built-in virtualization, a new virtualised network architecture, security enhancements, and file system evolution. Learn how Oracle Solaris 11 Express greatly decreases planned system downtime, performs completely safe system upgrades, achieves an unprecedented level of flexibility for application consolidation, and provides the highest levels of security in your datacenter. Date and time: Thursday, 7 July 2011, 13:00 - 14:00 CEST Speaker: Joost Pronk van Hoogeveen Target audience: Tech Presales Webcast coordinates: You will find the coordinates in the eSTEP portal under the Events tab. Use your email address and PIN: eSTEP_2011 to get access. We are happy to get your comments and feedback.

    Read the article

  • I don't understand why one of my VBOs is overwritten by another

    - by Alays
    To create a VBO I use this function: public void loadVBO(){ vboID = GL15.glGenBuffers(); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboID); GL15.glBufferData(GL15.GL_ARRAY_BUFFER, buf, GL15.GL_STATIC_DRAW); // Put the position coordinates in attribute list 0 GL20.glVertexAttribPointer(0, 4, GL11.GL_FLOAT, false,4*4+4*4+4*4+2*4 , 0); // Put the color components in attribute list 1 GL20.glVertexAttribPointer(1, 4, GL11.GL_FLOAT, false,4*4+4*4+4*4+2*4 , 4*4); GL20.glVertexAttribPointer(2, 4, GL11.GL_FLOAT, false,4*4+4*4+4*4+2*4 , 4*4+4*4); // Put the texture coordinates in attribute list 2 GL20.glVertexAttribPointer(3, 4, GL11.GL_FLOAT, false,4*4+4*4+4*4+2*4 , 4*4+4*4+4*4); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0); } To display a VBO I use this function: public void displayVBO(){ GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, vboID); GL20.glEnableVertexAttribArray(0); GL20.glEnableVertexAttribArray(1); GL20.glEnableVertexAttribArray(2); GL20.glEnableVertexAttribArray(3); GL11.glDrawArrays(GL_TRIANGLES, 0, buf.capacity()); GL20.glDisableVertexAttribArray(0); GL20.glDisableVertexAttribArray(1); GL20.glDisableVertexAttribArray(2); GL20.glDisableVertexAttribArray(3); GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0); } So when I call map.loadVBO() and then ocean.loadVBO(), I think the second call overwrites the first VBO, though I don't know how... When I call map.display() and ocean.display(), the ocean is drawn 2 times... Thanks.
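
    A likely explanation (inferred from the snippets, so treat it as a guess): glVertexAttribPointer records the buffer currently bound to GL_ARRAY_BUFFER, so the second loadVBO() re-points all four attributes at the ocean's buffer, and both display() calls then source the ocean's data. Note also that displayVBO() binds GL_ELEMENT_ARRAY_BUFFER although glDrawArrays reads vertex attributes. A hedged sketch of displayVBO() that re-specifies the pointers against this object's VBO at draw time (in core GL, one VAO per object is the more idiomatic fix; vertexCount is an assumed field, since glDrawArrays wants a vertex count, not a float count):

      public void displayVBO() {
          GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboID); // ARRAY_BUFFER for glDrawArrays
          int stride = 4*4 + 4*4 + 4*4 + 2*4;
          // re-point the attributes at *this* VBO before drawing
          GL20.glVertexAttribPointer(0, 4, GL11.GL_FLOAT, false, stride, 0);
          GL20.glVertexAttribPointer(1, 4, GL11.GL_FLOAT, false, stride, 4*4);
          GL20.glVertexAttribPointer(2, 4, GL11.GL_FLOAT, false, stride, 4*4 + 4*4);
          GL20.glVertexAttribPointer(3, 4, GL11.GL_FLOAT, false, stride, 4*4 + 4*4 + 4*4);
          for (int i = 0; i < 4; i++) GL20.glEnableVertexAttribArray(i);
          GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, vertexCount);
          for (int i = 0; i < 4; i++) GL20.glDisableVertexAttribArray(i);
          GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
      }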

    Read the article

  • C# XNA 2D Multiple boxes collision detection and movement

    - by zini
    Hi, I've been making a simple game where you shoot boxes that are coming towards you. All game objects are simple rectangles. Now I have a problem with collision detection: how do I check where the collision comes from, so I can change the coordinates correctly? I have this kind of situation: http://imgur.com/8yjfW Imagine that all of those blocks are moving towards you (green box). If those orange boxes collide with each other, they should "avoid" each other and not go through each other. I have a class Enemy which has properties x, y and such. Now I'm doing the collision like this: // os.Count is the number of other enemies colliding with this enemy if (os.Count == 0) { // If enemy doesn't collide with other enemy lasty = y; lastx = x; slope = (x - player.x) / (y - player.y); x += slope * l; // l is "movement speed" of enemy (float) if (y > player.y) { y = lasty; } else if (y < player.y) { y += l; } } else { foreach(Enemy b in os) { if (b.y > this.y) { // If some colliding enemy is closer to the player than this enemy, that closer one will be moved towards the player b.lasty = b.y; if (!BiggestY(os)) { // BiggestY returns true if this enemy has the biggest Y b.y += b.l; } b.x = b.lastx; } } } But this is a very, very bad way to do it. I know it, but I just can't figure out another way. And as a matter of fact, this method doesn't even work very well; if multiple enemies are colliding with the same enemy, they go through each other. I explained this pretty badly, but I hope that you understand it. And to sum up, as I said: how do I check where the collision comes from, so I can change the coordinates correctly?
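
    A more scalable pattern than special-casing who is closest: treat every enemy as an axis-aligned box and, for each overlapping pair, push one box out along the axis of least overlap (the minimum translation vector), repeating over all pairs each frame. An illustrative Java sketch (the entry is C#/XNA; the arithmetic carries over directly, assuming all boxes share one size):

      class AabbSeparation {
          // a, b: {x, y} top-left corners of two w-by-h boxes; moves b out of a
          static void separate(float[] a, float[] b, float w, float h) {
              float dx = b[0] - a[0];
              float dy = b[1] - a[1];
              float overlapX = w - Math.abs(dx);
              float overlapY = h - Math.abs(dy);
              if (overlapX <= 0 || overlapY <= 0) return; // not actually colliding
              if (overlapX < overlapY) {
                  b[0] += dx >= 0 ? overlapX : -overlapX; // penetration shallower on X
              } else {
                  b[1] += dy >= 0 ? overlapY : -overlapY; // penetration shallower on Y
              }
          }
      }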

    Read the article

  • Is there a standard way to track 2d tile positions both locally and on screen?

    - by Magicked
    I'm building a 2D engine based on 32x32 tiles with OpenGL. OpenGL draws from the top left, so Y coordinates go down the screen as they increase. Obviously this is different from a standard graph, where Y coordinates move up as they increase. I'm having trouble determining how I want to track positions for both sprites and tile objects (objects that are collections of tiles). My brain wants to set the world position as the bottom left of the object and track every object this way. The problem with this is that I would have to translate it to an on-screen position when rendering. The upside is that I could easily visualize (especially in the case of objects made of multiple tiles) how something is structured and needs to be built. Are there standard ways of doing this? Should I just suck it up and get used to positions beginning in the top left? Here are the OpenGL calls to start rendering: // enable textures since we're going to use these for our sprites glEnable(GL_TEXTURE_2D); glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // enable alpha blending glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // disable the OpenGL depth test since we're rendering 2D graphics glDisable(GL_DEPTH_TEST); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(0, WIDTH, HEIGHT, 0, 1, -1); glMatrixMode(GL_MODELVIEW); I assume I need to change: glOrtho(0, WIDTH, HEIGHT, 0, 1, -1); to: glOrtho(0, WIDTH, 0, HEIGHT, 1, -1);
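
    Either convention works; the usual advice is to keep whichever world origin you can reason about (bottom-left here) and funnel every draw through one explicit conversion, rather than scattering flips through the code. A minimal hedged sketch of that conversion for the glOrtho(0, WIDTH, HEIGHT, 0, ...) setup above:

      class ScreenSpace {
          // world origin bottom-left with Y up; screen origin top-left with Y down
          static int[] worldToScreen(int worldX, int worldY, int objectHeightPx, int windowHeightPx) {
              int screenX = worldX;
              // flip Y: returns the object's top edge in screen space
              int screenY = windowHeightPx - worldY - objectHeightPx;
              return new int[] { screenX, screenY };
          }
      }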

    Read the article

  • OpenGL: Move camera regardless of rotation

    - by Markus
    For a 2D board game I'd like to move and rotate an orthographic camera in coordinates given in a reference system (window space), but simply can't get it to work. The idea is that the user can drag the camera over a surface, rotate and scale it. Rotation and scaling should always be around the center of the current viewport. The camera is set up as: gl.glMatrixMode(GL2.GL_PROJECTION); gl.glLoadIdentity(); gl.glOrtho(-width/2, width/2, -height/2, height/2, nearPlane, farPlane); where width and height are equal to the viewport's width and height, so that 1 unit is one pixel when no zoom is applied. Since these transformations usually mean (scaling and) translating the world, then rotating it, the implementation is: gl.glMatrixMode(GL2.GL_MODELVIEW); gl.glLoadIdentity(); gl.glRotatef(rotation, 0, 0, 1); // e.g. 45° gl.glTranslatef(x, y, 0); // e.g. +10 for 10px right, -2 for 2px down gl.glScalef(zoomFactor, zoomFactor, zoomFactor); // e.g. scale by 1.5 That, however, has the nasty side effect that translations are transformed as well, i.e. applied in world coordinates. If I rotate by 90° and translate again, the X and Y axes are swapped. If I reorder the transformations so they read gl.glTranslatef(x, y, 0); gl.glScalef(zoomFactor, zoomFactor, zoomFactor); gl.glRotatef(rotation, 0, 0, 1); the translation is applied correctly (in reference space, so translation along x always visually moves the camera sideways), but rotation and scaling are now performed around the origin. It shouldn't be too hard, so what is it I'm missing?
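
    Because glOrtho here is centred on the origin, the viewport centre is the eye-space origin, so one scheme covers both requirements: keep the camera as an accumulated matrix and apply each new gesture on the left of it. A left-multiplied translation pans along window axes regardless of prior rotation, and a left-multiplied rotation or scale acts about the viewport centre. A hedged JOGL-style sketch (acc is an assumed float[16] camera matrix, initialised to identity):

      // apply a gesture delta in window space by left-multiplying the camera
      void applyGesture(GL2 gl, float dx, float dy, float dAngle, float dZoom) {
          gl.glMatrixMode(GL2.GL_MODELVIEW);
          gl.glLoadIdentity();
          gl.glTranslatef(dx, dy, 0);    // pan along window axes
          gl.glRotatef(dAngle, 0, 0, 1); // rotate about the viewport centre
          gl.glScalef(dZoom, dZoom, 1);  // zoom about the viewport centre
          gl.glMultMatrixf(acc, 0);      // then everything the camera did before
          gl.glGetFloatv(GL2.GL_MODELVIEW_MATRIX, acc, 0); // read back the product
      }
      // each frame: glLoadIdentity(); glMultMatrixf(acc, 0); then draw the board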

    Read the article

  • An issue with tessellating a model with DirectX11

    - by Paul Ske
    I took the hardware tessellation tutorial from Rastertek and implemented texturing instead of color. This worked, so I wanted to apply the same technique to a model inside my game editor, and I noticed it doesn't draw anything. I compared it against the detailed tessellation sample from the DirectX SDK. Inside the shader file, if I replace the HullInputType with PixelInputType, it draws. So I think it's because when I compile the shaders inside the program, it compiles VertexShader, PixelShader, HullShader, then DomainShader. Isn't it supposed to be VertexShader, HullShader, DomainShader, then PixelShader, or does it really not matter? I am just curious why the model isn't drawn at all with HullInputType but renders fine with PixelInputType. Shader code: [code] cbuffer ConstantBuffer { float4x4 WVP; float4x4 World; // the rotation matrix float3 lightvec; // the light's vector float4 lightcol; // the light's color float4 ambientcol; // the ambient light's color bool isSelected; } cbuffer cameraBuffer { float3 cameraDirection; float padding; } cbuffer TessellationBuffer { float tessellationAmount; float3 padding2; } struct ConstantOutputType { float edges[3] : SV_TessFactor; float inside : SV_InsideTessFactor; }; Texture2D Texture; Texture2D NormalTexture; SamplerState ss { MinLOD = 5.0f; MipLODBias = 0.0f; }; struct HullOutputType { float3 position : POSITION; float2 texcoord : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; }; struct HullInputType { float4 position : POSITION; float2 texcoord : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; }; struct VertexInputType { float4 position : POSITION; float2 texcoord : TEXCOORD; float3 normal : NORMAL; float3 tangent : TANGENT; uint uVertexID : SV_VERTEXID; }; struct PixelInputType { float4 position : SV_POSITION; float2 texcoord : TEXCOORD0; // texture coordinates float3 normal : NORMAL; float3 tangent : TANGENT; float4 color : COLOR; float3 viewDirection : TEXCOORD1; float4 depthBuffer : TEXTURE0; }; HullInputType VShader(VertexInputType input) { HullInputType output; output.position.w = 1.0f; output.position = mul(input.position,WVP); output.texcoord = input.texcoord; output.normal = input.normal; output.tangent = input.tangent; //output.normal = mul(normal,World); //output.tangent = mul(tangent,World); //output.color = output.color; //output.texcoord = texcoord; // set the texture coordinates, unmodified return output; } ConstantOutputType TexturePatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchID : SV_PrimitiveID) { ConstantOutputType output; output.edges[0] = tessellationAmount; output.edges[1] = tessellationAmount; output.edges[2] = tessellationAmount; output.inside = tessellationAmount; return output; } [domain("tri")] [partitioning("integer")] [outputtopology("triangle_cw")] [outputcontrolpoints(3)] [patchconstantfunc("TexturePatchConstantFunction")] HullOutputType HShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID) { HullOutputType output; // Set the position for this control point as the output position. output.position = patch[pointId].position; // Set the input color as the output color. output.texcoord = patch[pointId].texcoord; output.normal = patch[pointId].normal; output.tangent = patch[pointId].tangent; return output; } [domain("tri")] PixelInputType DShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<HullOutputType, 3> patch) { float3 vertexPosition; float2 uvPosition; float4 worldposition; PixelInputType output; // Interpolate world space position with barycentric coordinates float3 vWorldPos = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; // Determine the position of the new vertex. vertexPosition = vWorldPos; // Calculate the position of the new vertex against the world, view, and projection matrices. output.position = mul(float4(vertexPosition, 1.0f),WVP); // Send the input color into the pixel shader. output.texcoord = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; output.normal = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; output.tangent = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; //output.depthBuffer = output.position; //output.depthBuffer.w = 1.0f; //worldposition = mul(output.position,WVP); //output.viewDirection = cameraDirection.xyz - worldposition.xyz; //output.viewDirection = normalize(output.viewDirection); return output; } [/code] Some things are commented out but will be put back when fixed. I'm probably not connecting something correctly.

    Read the article

  • Render 2 images that use different shaders

    - by Code Vader
    Based on the giawa/NeHe tutorials, how can I render 2 images with different shaders? I'm pretty new to OpenGL and shaders, so I'm not completely sure what's happening in my code, but I think the shader that is called last overwrites the first one. private static void OnRenderFrame() { // calculate how much time has elapsed since the last frame watch.Stop(); float deltaTime = (float)watch.ElapsedTicks / System.Diagnostics.Stopwatch.Frequency; watch.Restart(); // use the deltaTime to adjust the angle of the cube angle += deltaTime; // set up the OpenGL viewport and clear both the color and depth bits Gl.Viewport(0, 0, width, height); Gl.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit); // use our shader program and bind the crate texture Gl.UseProgram(program); //<<<<<<<<<<<< TOP PYRAMID // set the transformation of the top_pyramid program["model_matrix"].SetValue(Matrix4.CreateRotationY(angle * rotate_cube)); program["enable_lighting"].SetValue(lighting); // bind the vertex positions, UV coordinates and element array Gl.BindBufferToShaderAttribute(top_pyramid, program, "vertexPosition"); Gl.BindBufferToShaderAttribute(top_pyramidNormals, program, "vertexNormal"); Gl.BindBufferToShaderAttribute(top_pyramidUV, program, "vertexUV"); Gl.BindBuffer(top_pyramidTrianlges); // draw the textured top_pyramid Gl.DrawElements(BeginMode.Triangles, top_pyramidTrianlges.Count, DrawElementsType.UnsignedInt, IntPtr.Zero); //<<<<<<<<<< CUBE // set the transformation of the cube program["model_matrix"].SetValue(Matrix4.CreateRotationY(angle * rotate_cube)); program["enable_lighting"].SetValue(lighting); // bind the vertex positions, UV coordinates and element array Gl.BindBufferToShaderAttribute(cube, program, "vertexPosition"); Gl.BindBufferToShaderAttribute(cubeNormals, program, "vertexNormal"); Gl.BindBufferToShaderAttribute(cubeUV, program, "vertexUV"); Gl.BindBuffer(cubeQuads); // draw the textured cube Gl.DrawElements(BeginMode.Quads, cubeQuads.Count, DrawElementsType.UnsignedInt, IntPtr.Zero); //<<<<<<<<<<<< BOTTOM PYRAMID // set the transformation of the bottom_pyramid program["model_matrix"].SetValue(Matrix4.CreateRotationY(angle * rotate_cube)); program["enable_lighting"].SetValue(lighting); // bind the vertex positions, UV coordinates and element array Gl.BindBufferToShaderAttribute(bottom_pyramid, program, "vertexPosition"); Gl.BindBufferToShaderAttribute(bottom_pyramidNormals, program, "vertexNormal"); Gl.BindBufferToShaderAttribute(bottom_pyramidUV, program, "vertexUV"); Gl.BindBuffer(bottom_pyramidTrianlges); // draw the textured bottom_pyramid Gl.DrawElements(BeginMode.Triangles, bottom_pyramidTrianlges.Count, DrawElementsType.UnsignedInt, IntPtr.Zero); //<<<<<<<<<<<<< STAR Gl.Disable(EnableCap.DepthTest); Gl.Enable(EnableCap.Blend); Gl.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.One); Gl.BindTexture(starTexture); //calculate the camera position using some fancy polar co-ordinates Vector3 position = 20 * new Vector3(Math.Cos(phi) * Math.Sin(theta), Math.Cos(theta), Math.Sin(phi) * Math.Sin(theta)); Vector3 upVector = ((theta % (Math.PI * 2)) > Math.PI) ? Vector3.Up : Vector3.Down; program_2["view_matrix"].SetValue(Matrix4.LookAt(position, Vector3.Zero, upVector)); // make sure the shader program and texture are being used Gl.UseProgram(program_2); // loop through the stars, drawing each one for (int i = 0; i < stars.Count; i++) { // set the position and color of this star program_2["model_matrix"].SetValue(Matrix4.CreateTranslation(new Vector3(stars[i].dist, 0, 0)) * Matrix4.CreateRotationZ(stars[i].angle)); program_2["color"].SetValue(stars[i].color); Gl.BindBufferToShaderAttribute(star, program_2, "vertexPosition"); Gl.BindBufferToShaderAttribute(starUV, program_2, "vertexUV"); Gl.BindBuffer(starQuads); Gl.DrawElements(BeginMode.Quads, starQuads.Count, DrawElementsType.UnsignedInt, IntPtr.Zero); // update the position of the star stars[i].angle += (float)i / stars.Count * deltaTime * 2 * rotate_stars; stars[i].dist -= 0.2f * deltaTime * rotate_stars; // if we've reached the center then move this star outwards and give it a new color if (stars[i].dist < 0f) { stars[i].dist += 5f; stars[i].color = new Vector3(generator.NextDouble(), generator.NextDouble(), generator.NextDouble()); } } Glut.glutSwapBuffers(); } The same goes for the textures: whichever one I mention last gets applied to both objects?
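
    One probable culprit (a guess from the snippet, not a verified diagnosis): program_2["view_matrix"] is set before Gl.UseProgram(program_2), so depending on how the wrapper implements uniform setting, the value may land while program is still current. The robust ordering per object is: make its program current, then set its uniforms, then bind buffers/textures, then draw. An illustrative LWJGL-3-style Java sketch of that ordering (the entry's code is C#; names here are placeholders):

      import java.nio.FloatBuffer;
      import org.lwjgl.BufferUtils;
      import org.lwjgl.opengl.GL20;

      class DrawOrder {
          // select the program first so the uniform goes to the right place
          static void drawObject(int program, int viewMatrixLoc, float[] viewMatrix, Runnable bindAndDraw) {
              GL20.glUseProgram(program);                        // 1. program current
              FloatBuffer fb = BufferUtils.createFloatBuffer(16);
              fb.put(viewMatrix).flip();
              GL20.glUniformMatrix4fv(viewMatrixLoc, false, fb); // 2. its uniforms
              bindAndDraw.run();                                 // 3. buffers + draw call
          }
      }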

    Read the article

  • OpenGL 3 and the Radeon HD 4850x2

    - by rotard
    A while ago, I picked up a copy of the OpenGL SuperBible fifth edition and slowly and painfully started teaching myself OpenGL the 3.3 way, after having been used to the 1.0 way from school way back when. Making things more challenging, I am primarily a .NET developer, so I was working in Mono with the OpenTK OpenGL wrapper. On my laptop, I put together a program that let the user walk around a simple landscape using a couple of shaders that implemented per-vertex coloring and lighting and texture mapping. Everything was working brilliantly until I ran the same program on my desktop. Disaster! Nothing would render! I have chopped my program down to the point where the camera sits near the origin, pointing at the origin, and renders a square (technically, a triangle fan). The quad renders perfectly on my laptop, coloring, lighting, texturing and all, but the desktop renders a small distorted non-square quadrilateral that is colored incorrectly, not affected by the lights, and not textured. I suspect the graphics card is at fault, because I get the same result whether I am booted into Ubuntu 10.10 or Win XP. I did find that if I pare the vertex shader down to ONLY outputting the positional data and the fragment shader to ONLY outputting a solid color (white), the quad renders correctly. But as SOON as I start passing in color data (whether or not I use it in the fragment shader) the output from the vertex shader is distorted again. The shaders follow. I left the pre-existing code in, but commented out, so you can get an idea what I was trying to do. I'm a noob at GLSL so the code could probably be a lot better. My laptop is an old Lenovo T61p with a Centrino (Core 2) Duo and an nVidia Quadro graphics card running Ubuntu 10.10. My desktop has an i7 with a Radeon HD 4850 x2 (single card, dual GPU) from Sapphire dual booting into Ubuntu 10.10 and Windows XP. The problem occurs in both XP and Ubuntu. Can anyone see something wrong that I am missing? What is "special" about my HD 4850x2? string vertexShaderSource = @" #version 330 precision highp float; uniform mat4 projection_matrix; uniform mat4 modelview_matrix; //uniform mat4 normal_matrix; //uniform mat4 cmv_matrix; //Camera modelview. Light sources are transformed by this matrix. //uniform vec3 ambient_color; //uniform vec3 diffuse_color; //uniform vec3 diffuse_direction; in vec4 in_position; in vec4 in_color; //in vec3 in_normal; //in vec3 in_tex_coords; out vec4 varyingColor; //out vec3 varyingTexCoords; void main(void) { //Get surface normal in eye coordinates //vec4 vEyeNormal = normal_matrix * vec4(in_normal, 0); //Get vertex position in eye coordinates //vec4 vPosition4 = modelview_matrix * vec4(in_position, 0); //vec3 vPosition3 = vPosition4.xyz / vPosition4.w; //Get vector to light source in eye coordinates //vec3 lightVecNormalized = normalize(diffuse_direction); //vec3 vLightDir = normalize((cmv_matrix * vec4(lightVecNormalized, 0)).xyz); //Dot product gives us diffuse intensity //float diff = max(0.0, dot(vEyeNormal.xyz, vLightDir.xyz)); //Multiply intensity by diffuse color, force alpha to 1.0 //varyingColor.xyz = in_color * diff * diffuse_color.xyz; varyingColor = in_color; //varyingTexCoords = in_tex_coords; gl_Position = projection_matrix * modelview_matrix * in_position; }"; string fragmentShaderSource = @" #version 330 //#extension GL_EXT_gpu_shader4 : enable precision highp float; //uniform sampler2DArray colorMap; //in vec4 varyingColor; //in vec3 varyingTexCoords; out vec4 out_frag_color; void main(void) { out_frag_color = vec4(1,1,1,1); //out_frag_color = varyingColor; //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, varyingTexCoords.st); //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, vec3(varyingTexCoords.st, 0)); //out_frag_color = vec4(varyingColor, 1) * texture2DArray(colorMap, varyingTexCoords); }"; Note that in this code the color data is accepted but not actually used. The geometry is outputted the same (wrong) whether the fragment shader uses varyingColor or not. Only if I comment out the line varyingColor = in_color; does the geometry output correctly. Originally the shaders took in vec3 inputs; I only modified them to take vec4s while troubleshooting.
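
    A classic cause of exactly this symptom (offered as a hypothesis, since it cannot be verified from here): with no explicit locations, GLSL 3.30 lets the driver assign attribute slots, and AMD and NVIDIA number them differently, so on one vendor in_color can end up receiving the data meant for in_position. Binding locations before linking (or layout(location = n) qualifiers in the shader) pins them to the indices the client code uses. An LWJGL-style Java sketch of the idea (OpenTK has the equivalent GL.BindAttribLocation):

      import org.lwjgl.opengl.GL20;

      class ProgramSetup {
          static int link(int vertexShader, int fragmentShader) {
              int program = GL20.glCreateProgram();
              GL20.glAttachShader(program, vertexShader);
              GL20.glAttachShader(program, fragmentShader);
              // pin attribute slots so every driver agrees with the client-side indices
              GL20.glBindAttribLocation(program, 0, "in_position");
              GL20.glBindAttribLocation(program, 1, "in_color");
              GL20.glBindAttribLocation(program, 2, "in_normal");
              GL20.glBindAttribLocation(program, 3, "in_tex_coords");
              GL20.glLinkProgram(program); // bindings take effect at link time
              return program;
          }
      }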

    Read the article

  • 2D isometric picking

    - by Bikonja
    I'm trying to implement picking in my isometric 2D game; however, I am failing. First of all, I've searched for a solution and came across several different equations and even a solution using matrices. I tried implementing every single one, but none of them seem to work for me. The idea is that I have an array of tiles, with each tile having its x and y coordinates specified (in this simplified example, by its position in the array). I'm thinking that the tile (0, 0) should be on the left, (max, 0) on top, (0, max) on the bottom and (max, max) on the right. I came up with this loop for drawing, which googling seems to have verified as the correct solution, as has the rendered scene (of course, it could still be wrong; also, forgive the messy names and stuff, it's just WIP proof-of-concept code) // Draw code int col = 0; int row = 0; for (int i = 0; i < nrOfTiles; ++i) { // XOffset and YOffset are currently hardcoded values, but will represent camera offset combined with HUD offset Point tile = IsoToScreen(col, row, TileWidth / 2, TileHeight / 2, XOffset, YOffset); int x = tile.X; int y = tile.Y; spriteBatch.Draw(_tiles[i], new Rectangle(tile.X, tile.Y, TileWidth, TileHeight), Color.White); col++; if (col >= Columns) // Columns is the number of tiles in a single row { col = 0; row++; } } // Get selection overlay location (removed check if selection exists for simplicity sake) Point tile = IsoToScreen(_selectedTile.X, _selectedTile.Y, TileWidth / 2, TileHeight / 2, XOffset, YOffset); spriteBatch.Draw(_selectionTexture, new Rectangle(tile.X, tile.Y, TileWidth, TileHeight), Color.White); // End of draw code public Point IsoToScreen(int isoX, int isoY, int widthHalf, int heightHalf, int xOffset, int yOffset) { Point newPoint = new Point(); newPoint.X = widthHalf * (isoX + isoY) + xOffset; newPoint.Y = heightHalf * (-isoX + isoY) + yOffset; return newPoint; } This code draws the tiles correctly. Now I wanted to do picking to select the tiles. For this, I tried coming up with equations of my own (including reversing the drawing equation) and I tried multiple solutions I found on the internet, and none of these solutions worked. Trying out lots of solutions, I came upon one that didn't work, but it seemed like an axis was just inverted. I fiddled around with the equations and somehow managed to get it to actually work (but have no idea why it works), but while it's close, it still doesn't work. I'm not really sure how to describe the behaviour, but it changes the selection in the wrong places, while being fairly close (sometimes spot on, sometimes a tile off; I believe never more off than the adjacent tile). This is the code I have for getting which tile coordinates are selected: public Point? ScreenToIso(int screenX, int screenY, int tileHeight, int offsetX, int offsetY) { Point? newPoint = null; int nX = -1; int nY = -1; int tX = screenX - offsetX; int tY = screenY - offsetY; nX = -(tY - tX / 2) / tileHeight; nY = (tY + tX / 2) / tileHeight; newPoint = new Point(nX, nY); return newPoint; } I have no idea why this code is so close, especially considering it doesn't even use the tile width, and all my attempts to write an equation myself or use a solution I googled failed. Also, I don't think this code accounts for the area outside the "tile" (the transparent part of the tile image), for which I intend to add a color map, but even if that's true, it's not the problem, as the selection sometimes switches at approx 25% or 75% of width or height. I'm thinking I've stumbled upon a wrong path and need to backtrack, but at this point, I'm not sure what to do, so I hope someone can shed some light on my error or point me to the right path. It may be worth mentioning that my goal is not only to pick the tile. Each main tile will be divided into 5x5 smaller tiles which won't be drawn separately from the whole main tile, but they will need to be picked out. I think a color map of a main tile with different colors for different coordinates within the main tile should take care of that, though, which would fall within using a color map for the main tile (for the transparent parts of the tile, meaning parts that possibly belong to other tiles).
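
    Rather than hunting for screen-to-iso formulas, invert the exact IsoToScreen above. With tX = (screenX - offsetX) / widthHalf and tY = (screenY - offsetY) / heightHalf, the draw equations say tX = isoX + isoY and tY = -isoX + isoY, so isoX = (tX - tY) / 2 and isoY = (tX + tY) / 2. A hedged Java translation (the entry's code is C#; keep the division in floating point and floor at the end):

      class IsoPicking {
          static int[] screenToIso(int screenX, int screenY, int widthHalf, int heightHalf, int offsetX, int offsetY) {
              float tX = (screenX - offsetX) / (float) widthHalf;
              float tY = (screenY - offsetY) / (float) heightHalf;
              // exact inverse of IsoToScreen; floor keeps clicks inside a tile stable
              int isoX = (int) Math.floor((tX - tY) / 2f);
              int isoY = (int) Math.floor((tX + tY) / 2f);
              return new int[] { isoX, isoY };
          }
      }

    The diamond-shape refinement via the colour map then only has to distinguish positions within a single tile.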

    Read the article

  • Drawing a clamped uniform cubic B-spline using Cairo

    - by Tamás
    I have a bunch of coordinates which are the control points of a clamped uniform cubic B-spline on the 2D plane. I would like to draw this curve using Cairo calls (in Python, using Cairo's Python bindings), but as far as I know, Cairo supports Bézier curves only. I also know that the segments of a B-spline between two control points can be drawn using Bézier curves, but I can't find the exact formulae anywhere. Given the coordinates of the control points, how can I derive the control points of the corresponding Bézier curves? Is there any efficient algorithm for that?
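
    For a uniform cubic B-spline, every segment is an exact cubic Bézier whose control points are fixed affine combinations of four consecutive spline control points: Q0 = (P0 + 4P1 + P2)/6, Q1 = (2P1 + P2)/3, Q2 = (P1 + 2P2)/3, Q3 = (P1 + 4P2 + P3)/6. For the clamped ends, the usual trick is to repeat the first and last control points twice more before sliding this window. A hedged sketch of the conversion (in Java for illustration; in Cairo you would move_to(Q0) and curve_to(Q1, Q2, Q3)):

      class BsplineToBezier {
          // p0..p3: four consecutive 2D control points of one spline segment
          static double[][] segmentToBezier(double[] p0, double[] p1, double[] p2, double[] p3) {
              double[] q0 = mix(p0, 1/6.0, p1, 4/6.0, p2, 1/6.0);
              double[] q1 = mix(p1, 2/3.0, p2, 1/3.0, null, 0);
              double[] q2 = mix(p1, 1/3.0, p2, 2/3.0, null, 0);
              double[] q3 = mix(p1, 1/6.0, p2, 4/6.0, p3, 1/6.0);
              return new double[][] { q0, q1, q2, q3 };
          }

          // weighted combination of up to three 2D points
          static double[] mix(double[] a, double wa, double[] b, double wb, double[] c, double wc) {
              double[] r = new double[2];
              for (int i = 0; i < 2; i++)
                  r[i] = wa * a[i] + wb * b[i] + (c != null ? wc * c[i] : 0);
              return r;
          }
      }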

    Read the article

  • Sub Rectangle or ROI problem in opencv

    - by iva123
    Hi, I'm trying to convert this C code (http://nashruddin.com/OpenCV_Eye_Detection) to Python. In the C version, he used OpenCV's ROI facility; since the ROI functions are not supported by python-opencv, I tried cvGetSubRect instead. So here is the eye detection part of the code: eye_region = cvGetSubRect(image,cvRect(face.x,int(face.y + (face.height/4)),face.width,int(face.height/2))) eyes = cvHaarDetectObjects(eye_region,eyeCascade,memo,1.15,3,0,cvSize(25,15)) for e in eyes: cvRectangle(image, cvPoint( int(e.x), int(e.y)), cvPoint(int(e.x + e.width), int(e.y + e.height)), CV_RGB(0, 255, 0), 1, 8, 0) return image; When I run this code, it draws rectangles in irrelevant places. I thought the eye_region coordinates were wrong and tried some other coordinates, but it didn't work. Any idea? Note: the face detection method works very well, and its code is the same as the eye detection method's.
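
    One detail that commonly causes exactly this (a guess, since the code cannot be run from here): detections found inside a sub-rectangle are reported relative to that region's origin, so the rectangles must be shifted by the region's offset before being drawn on the full image. A small Java sketch of just the offset arithmetic:

      class RoiOffset {
          // convert a detection (ex, ey, ew, eh) from ROI-relative to full-image coordinates
          static int[] roiToImage(int ex, int ey, int ew, int eh, int roiX, int roiY) {
              int x1 = roiX + ex; // shift by the ROI origin
              int y1 = roiY + ey;
              return new int[] { x1, y1, x1 + ew, y1 + eh };
          }
      }

    Here roiX would be face.x and roiY would be face.y + face.height / 4, matching the eye_region rectangle above.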

    Read the article

  • OpenGL es 2.0 Read depth buffer

    - by Brian
    Hi! As far as I know, we can't read the Z (depth) value in OpenGL ES 2.0, so I am wondering how we can get the 3D world coordinates of a point on the 2D screen. Actually, I have some random thoughts that might work. Since we can read the RGBA value by using glReadPixels, how about we duplicate the depth buffer and store it in a color buffer (say, ColorforDepth)? Of course, there needs to be some nice convention so that we don't lose any information from the depth buffer. Then, when we need a point's world coordinates, we attach this ColorforDepth color buffer to the framebuffer, render it, and use glReadPixels to read the depth information for that frame. However, this will lead to a 1-frame flash, since the color buffer is a weird buffer translated from the depth buffer. I am still wondering if there is some standard way to get the depth in OpenGL ES 2.0? Thanks in advance! :)
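
    Packing depth into an RGBA8 colour target is indeed the standard ES 2.0 workaround: the fragment shader writes successive base-255 "digits" of gl_FragCoord.z into the four channels (rendering to an offscreen FBO avoids the visible flash), and glReadPixels plus a decode recovers the value. A hedged Java sketch of the CPU-side decode and the unprojection step (shader-side packing omitted):

      class DepthReadback {
          // r, g, b, a: bytes (0..255) read back with glReadPixels from the depth-encoding pass
          static float decodeDepth(int r, int g, int b, int a) {
              return r / 255f
                   + g / (255f * 255f)
                   + b / (255f * 255f * 255f)
                   + a / (255f * 255f * 255f * 255f);
          }
          // world position: ndc = (2*sx/width - 1, 1 - 2*sy/height, 2*depth - 1),
          // then multiply by inverse(projection * view) and divide by w
      }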

    Read the article

  • iPad: Tables in Popover Views do not Scroll to Show Selected Row

    - by mahboudz
    I am having two problems with view controllers in landscape orientation on the iPad. (1) I have two popups which hold tables. The tables should scroll to a specific row to reflect a selection in the main view. Instead, the tables do scroll down some but the actual selected row remains off screen. (2) All my action sheets come up with a width of 320. In Interface Builder, all my views are created in landscape orientation. Only the main Window is not, but I don't see a way to change that. My Configuration: Upon launch, I get the following coordinates for my main window and the main view controller view: Window frame {{0, 0}, {768, 1024}} mainView frame {{0, 0}, {748, 1024}} All other views after that show these coordinates when summoned (when loaded but before being presented): frame of keysig {{0, 0}, {1024, 768}} frame of instrumentSelect {{20, 0}, {1024, 768}} frame of settings {{0, 0}, {467, 300}} In all my view controllers, I respond to shouldAutorotateToInterfaceOrientation with: return ((interfaceOrientation == UIInterfaceOrientationLandscapeLeft) || (interfaceOrientation == UIInterfaceOrientationLandscapeRight)); Everything (almost) functions as expected. The app launches into one of the two landscape modes. The views (and view controllers) display everything where it belongs and taps work all across the screen as expected. However, I still have the two problems. Problem 1: I have two popups containing tables long enough to run off screen. The tables should scroll to a selected row. They do scroll, i.e. they don't start visually at row 1, but they don't scroll enough to actually show the selected row. It almost seems like a UITable internal rect gets created with the wrong number and stays that way, but I've checked both of the UITableView's scrollView content coordinates and they seemed reasonable. Problem 2: I think this is related to problem 1 because my action sheets come up with a width of 320. I can only assume that the iPad allows action sheets only in 320 or 480 widths, and since it somehow thinks that the screen is oriented in portrait mode, it uses the narrower width. There you have it. I can't believe I am still getting hung up on orientation issues. I swear Apple doesn't make it easy to have a landscape app. Any ideas?

    Read the article

  • current location from CoreLocation

    - by Christian
    I have an app which launches the Google Maps app. The code is: UIApplication *app = [UIApplication sharedApplication]; [app openURL:[[NSURL alloc] initWithString: @"http://maps.google.com/maps?daddr=Obere+Laube,+Konstanz,+Germany&saddr="]]; The saddr= should be the current location. I get the current location with -(void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation *)newLocation fromLocation:(CLLocation *)oldLocation { NSLog(@"%f,%f", [newLocation coordinate]); The log displays the correct coordinates, like 2010-04-05 15:33:25.436 deBordeaux[60657:207] 37.331689,-122.030731 I just haven't found the right way to put the coordinates into the URL string. Can someone give me a hint?
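
    The missing piece is just string formatting: append "latitude,longitude" from the callback to saddr=. In the app itself this would be NSString's stringWithFormat:, but as an illustrative Java sketch of the URL construction (Locale.US keeps the decimal separator a dot, as the URL requires):

      import java.util.Locale;

      class MapsUrl {
          static String directionsFrom(double lat, double lng) {
              return String.format(Locale.US,
                  "http://maps.google.com/maps?daddr=Obere+Laube,+Konstanz,+Germany&saddr=%f,%f",
                  lat, lng);
          }
      }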

    Read the article

  • Soft Shadows in Raytracing 3D to 2D

    - by Myx
    Hello: I wish to implement soft shadows produced by area lights in my raytracer. I'm having trouble generating the random samples. So I have a scene in which I have an area light (represented as a circle) whose world (x,y,z) coordinates of the center are given, the radius is given, the normal of the plane on which the circle lies is given, as well as the color and attenuation factors. The sampling scheme I wish to use is the following: generate samples on the quadrilateral that encompasses the circle and discard points outside the circle until the required number of samples within the circle have been found. I'm having trouble understanding how I can transform the 3D coordinates of the center of the circle to its 2D representation (I don't think I can assume that the circle lies in the x-y plane and thus just get rid of the z-component). I think the plane normal information should be used, but I'm not sure how. Any and all suggestions are appreciated.
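
    There is no need to project into 2D at all: build an orthonormal basis (t, b) spanning the light's plane from its normal, rejection-sample the unit disk in (u, v), and map each sample to center + radius * (u*t + v*b). A hedged Java sketch (assumes the normal is unit length):

      import java.util.Random;

      class AreaLightSampler {
          // c: disk centre, n: unit plane normal, radius: disk radius
          static double[] sample(double[] c, double[] n, double radius, Random rng) {
              // pick any helper vector not parallel to n, then build in-plane axes
              double[] helper = Math.abs(n[0]) > 0.9 ? new double[]{0, 1, 0} : new double[]{1, 0, 0};
              double[] t = normalize(cross(n, helper)); // first in-plane axis
              double[] b = cross(n, t);                 // second axis, already unit length
              double u, v;
              do { // rejection-sample the unit disk, exactly as described above
                  u = 2 * rng.nextDouble() - 1;
                  v = 2 * rng.nextDouble() - 1;
              } while (u * u + v * v > 1);
              return new double[] {
                  c[0] + radius * (u * t[0] + v * b[0]),
                  c[1] + radius * (u * t[1] + v * b[1]),
                  c[2] + radius * (u * t[2] + v * b[2]) };
          }

          static double[] cross(double[] a, double[] b) {
              return new double[] { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
          }

          static double[] normalize(double[] a) {
              double l = Math.sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]);
              return new double[] { a[0]/l, a[1]/l, a[2]/l };
          }
      }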

    Read the article

  • Android Rotate Matrix

    - by jitm
    Hello, I have a matrix. This matrix represents an array of x and y coordinates. For example float[] src = {7,1,7,2,7,3,7,4}; I need to rotate these coordinates by 90 degrees. I use android.graphics.Matrix like this: float[] src = {7,1,7,2,7,3,7,4}; float[] dist = new float[8]; Matrix matrix = new Matrix(); matrix.preRotate(90.0f); matrix.mapPoints(dist,src); After the rotate operation I have an array with these values: -1.0 7.0 -2.0 7.0 -3.0 7.0 -4.0 7.0 That is fine for rotation about the origin. But how do I rotate within the area from 0 to 90? I need to set up the center of the circle in this area, but how? Thanks.
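
    android.graphics.Matrix has pivot-point overloads, so rotating about the centre of the area instead of the origin is a single call. A minimal sketch, where (cx, cy) is the assumed centre of your region:

      import android.graphics.Matrix;

      float[] src = {7, 1, 7, 2, 7, 3, 7, 4};
      float[] dst = new float[8];
      Matrix matrix = new Matrix();
      matrix.setRotate(90.0f, cx, cy); // rotate about the pivot (cx, cy), not the origin
      matrix.mapPoints(dst, src);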

    Read the article

  • How do I draw a texture-mapped triangle in MATLAB?

    - by Petter
    I have a triangle in (u,v) coordinates in an image. I would like to draw this triangle at 3D coordinates (X,Y,Z) texture-mapped with the triangle in the image. Here, u,v,X,Y,Z are all vectors with three elements representing the three corners of the triangle. I have a very ugly, slow and unsatisfactory solution in which I (1) extract a rectangular part of the image, (2) transform it to 3D space with the transformation defined by the three points, (3) draw it with surface, and (4) finally masking out everything that is not part of the triangle with AlphaData. Surely there must be an easier way of doing this?

    Read the article

  • How to serialize a List<T> in Silverlight?

    - by Jurgen
    I have a struct called Coordinate which is contained in a list in another class called Segment. public struct Coordinate { public double Latitude { get; set; } public double Longtitude { get; set; } public double Altitude { get; set; } public DateTime Time { get; set; } } public class Segment { private List<Coordinate> coordinates; ... } I'd like to serialize the Segment class using the XmlSerializer in Silverlight (on Windows Phone 7). I understand from a linked discussion that XmlSerializer doesn't support List<T>. What is the advised way of serializing a resizable array of coordinates? Thanks, Jurgen

    Read the article
