Search Results

Search found 41789 results on 1672 pages for 'software development'.


  • 3D rotation tool. How can I add simple extrusion?

    - by Gerve
    The 3D rotation tool is excellent, but it only lets you rotate 2D objects, which means my object is wafer thin. Is there any way to add simple extrusion or depth to a symbol? I don't really want to use any 3rd-party libraries like Away3D or Papervision; they are overkill for my simple 2D game. I only want to do this by creating a couple of motion tweens if possible. More details: below is what my symbol looks like (just with a bit more color). The symbol does a little 3D rotation and then flies away; it's just for something like a scoreboard within the app.
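    One low-tech idea that may be enough for a scoreboard flip (a sketch only; "Card" is a placeholder for the symbol's library class): stack a few copies of the symbol along the display list's native z axis, which Flash Player 10+ supports without any 3D library, and tween the container as one object:

        // Hypothetical sketch: stack thin copies of the symbol along z to fake thickness.
        var slices:Sprite = new Sprite();
        for (var i:int = 0; i < 6; i++) {
            var slice:Card = new Card(); // "Card" stands in for your symbol's class
            slice.z = i * 2;             // small z offset per copy fakes extrusion
            slices.addChild(slice);
        }
        addChild(slices); // rotate/tween "slices" with the usual motion tweens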


  • Normal maps red in OpenGL?

    - by KaiserJohaan
    I am using Assimp to import 3D models, and FreeImage to parse textures. The problem I am having is that the normal maps actually come out red rather than blue when I try to render them as regular diffuse textures. http://i42.tinypic.com/289ing3.png When I open the images in an image-viewing program they do indeed show up as blue. Here's where I create the texture:

        OpenGLTexture::OpenGLTexture(const std::vector<uint8_t>& textureData, uint32_t textureWidth, uint32_t textureHeight, TextureType textureType, Logger& logger)
            : mLogger(logger), mTextureID(gNextTextureID++), mTextureType(textureType)
        {
            glGenTextures(1, &mTexture);
            CHECK_GL_ERROR(mLogger);
            glBindTexture(GL_TEXTURE_2D, mTexture);
            CHECK_GL_ERROR(mLogger);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0, glTextureFormat, GL_UNSIGNED_BYTE, &textureData[0]);
            CHECK_GL_ERROR(mLogger);
            glGenerateMipmap(GL_TEXTURE_2D);
            CHECK_GL_ERROR(mLogger);
            glBindTexture(GL_TEXTURE_2D, 0);
            CHECK_GL_ERROR(mLogger);
        }

    Here is my fragment shader. You can see I just commented out the normal-map parsing and treated the normal-map texture as the diffuse texture to display it and illustrate the problem. As for the rest of the code, it interacts as expected with the diffuse textures, so I don't see an obvious problem there.

        #version 330

        layout(std140) uniform;

        const int MAX_LIGHTS = 8;

        struct Light
        {
            vec4 mLightColor;
            vec4 mLightPosition;
            vec4 mLightDirection;

            int mLightType;
            float mLightIntensity;
            float mLightRadius;
            float mMaxDistance;
        };

        uniform UnifLighting
        {
            vec4 mGamma;
            vec3 mViewDirection;
            int mNumLights;

            Light mLights[MAX_LIGHTS];
        } Lighting;

        uniform UnifMaterial
        {
            vec4 mDiffuseColor;
            vec4 mAmbientColor;
            vec4 mSpecularColor;
            vec4 mEmissiveColor;

            bool mHasDiffuseTexture;
            bool mHasNormalTexture;
            bool mLightingEnabled;
            float mSpecularShininess;
        } Material;

        uniform sampler2D unifDiffuseTexture;
        uniform sampler2D unifNormalTexture;

        in vec3 frag_position;
        in vec3 frag_normal;
        in vec2 frag_texcoord;
        in vec3 frag_tangent;
        in vec3 frag_bitangent;

        out vec4 finalColor;

        void CalcGaussianSpecular(in vec3 dirToLight, in vec3 normal, out float gaussianTerm)
        {
            vec3 viewDirection = normalize(Lighting.mViewDirection);
            vec3 halfAngle = normalize(dirToLight + viewDirection);

            float angleNormalHalf = acos(dot(halfAngle, normalize(normal)));
            float exponent = angleNormalHalf / Material.mSpecularShininess;
            exponent = -(exponent * exponent);

            gaussianTerm = exp(exponent);
        }

        vec4 CalculateLighting(in Light light, in vec4 diffuseTexture, in vec3 normal)
        {
            if (light.mLightType == 1) // point light
            {
                vec3 positionDiff = light.mLightPosition.xyz - frag_position;
                float dist = max(length(positionDiff) - light.mLightRadius, 0);

                float attenuation = 1 / ((dist/light.mLightRadius + 1) * (dist/light.mLightRadius + 1));
                attenuation = max((attenuation - light.mMaxDistance) / (1 - light.mMaxDistance), 0);

                vec3 dirToLight = normalize(positionDiff);
                float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1);

                float gaussianTerm = 0.0;
                if (angleNormal > 0.0)
                    CalcGaussianSpecular(dirToLight, normal, gaussianTerm);

                return diffuseTexture * (attenuation * angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) +
                       (attenuation * gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor);
            }
            else if (light.mLightType == 2) // directional light
            {
                vec3 dirToLight = normalize(light.mLightDirection.xyz);
                float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1);

                float gaussianTerm = 0.0;
                if (angleNormal > 0.0)
                    CalcGaussianSpecular(dirToLight, normal, gaussianTerm);

                return diffuseTexture * (angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) +
                       (gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor);
            }
            else if (light.mLightType == 4) // ambient light
                return diffuseTexture * Material.mAmbientColor * light.mLightIntensity * light.mLightColor;
            else
                return vec4(0.0);
        }

        void main()
        {
            vec4 diffuseTexture = vec4(1.0);
            if (Material.mHasDiffuseTexture)
                diffuseTexture = texture(unifDiffuseTexture, frag_texcoord);

            vec3 normal = frag_normal;
            if (Material.mHasNormalTexture)
            {
                diffuseTexture = vec4(normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0), 1.0);
                // vec3 normalTangentSpace = normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0);
                // mat3 tangentToWorldSpace = mat3(normalize(frag_tangent), normalize(frag_bitangent), normalize(frag_normal));
                // normal = tangentToWorldSpace * normalTangentSpace;
            }

            if (Material.mLightingEnabled)
            {
                vec4 accumLighting = vec4(0.0);

                for (int lightIndex = 0; lightIndex < Lighting.mNumLights; lightIndex++)
                    accumLighting += Material.mEmissiveColor * diffuseTexture +
                                     CalculateLighting(Lighting.mLights[lightIndex], diffuseTexture, normal);

                finalColor = pow(accumLighting, Lighting.mGamma);
            }
            else {
                finalColor = pow(diffuseTexture, Lighting.mGamma);
            }
        }

    Why is this? Do normal-map textures need some sort of special treatment in OpenGL?
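    One detail worth double-checking here (an assumption on my part, since the value of glTextureFormat isn't shown above): FreeImage stores pixels in BGR(A) order on little-endian platforms, so uploading its buffer with a GL_RGBA source format swaps the red and blue channels, which would turn a predominantly blue normal map red. A minimal sketch of the idea:

        // Sketch: declare the source data as BGRA so OpenGL reorders the channels.
        // GL_BGRA has been a valid client pixel format since OpenGL 1.2.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0,
                     GL_BGRA, GL_UNSIGNED_BYTE, &textureData[0]);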


  • 2D Selective Gaussian Blur

    - by Joshua Thomas
    I am attempting to use Gaussian blur in a 2D platform game, selectively blurring specific types of platforms with different amounts. I am currently just messing around with simple test code, trying to get it to work correctly. What I eventually need to do is create three separate render targets: leave one normal, blur one slightly, and blur the last heavily, then recombine them on screen. Where I am now: I have successfully drawn into a new render target and performed the Gaussian blur on it, but when I draw it back to the screen everything is purple aside from the platforms I drew to the target. This is my .fx file:

        #define RADIUS 7
        #define KERNEL_SIZE (RADIUS * 2 + 1)

        //-----------------------------------------------------------------------------
        // Globals.
        //-----------------------------------------------------------------------------
        float weights[KERNEL_SIZE];
        float2 offsets[KERNEL_SIZE];

        //-----------------------------------------------------------------------------
        // Textures.
        //-----------------------------------------------------------------------------
        texture colorMapTexture;

        sampler2D colorMap = sampler_state
        {
            Texture = <colorMapTexture>;
            MipFilter = Linear;
            MinFilter = Linear;
            MagFilter = Linear;
        };

        //-----------------------------------------------------------------------------
        // Pixel Shaders.
        //-----------------------------------------------------------------------------
        float4 PS_GaussianBlur(float2 texCoord : TEXCOORD) : COLOR0
        {
            float4 color = float4(0.0f, 0.0f, 0.0f, 0.0f);

            for (int i = 0; i < KERNEL_SIZE; ++i)
                color += tex2D(colorMap, texCoord + offsets[i]) * weights[i];

            return color;
        }

        //-----------------------------------------------------------------------------
        // Techniques.
        //-----------------------------------------------------------------------------
        technique GaussianBlur
        {
            pass
            {
                PixelShader = compile ps_2_0 PS_GaussianBlur();
            }
        }

    This is the code I'm using for the Gaussian blur:

        public Texture2D PerformGaussianBlur(Texture2D srcTexture, RenderTarget2D renderTarget1, RenderTarget2D renderTarget2, SpriteBatch spriteBatch)
        {
            if (effect == null)
                throw new InvalidOperationException("GaussianBlur.fx effect not loaded.");

            Texture2D outputTexture = null;
            Rectangle srcRect = new Rectangle(0, 0, srcTexture.Width, srcTexture.Height);
            Rectangle destRect1 = new Rectangle(0, 0, renderTarget1.Width, renderTarget1.Height);
            Rectangle destRect2 = new Rectangle(0, 0, renderTarget2.Width, renderTarget2.Height);

            // Perform horizontal Gaussian blur.
            game.GraphicsDevice.SetRenderTarget(renderTarget1);

            effect.CurrentTechnique = effect.Techniques["GaussianBlur"];
            effect.Parameters["weights"].SetValue(kernel);
            effect.Parameters["colorMapTexture"].SetValue(srcTexture);
            effect.Parameters["offsets"].SetValue(offsetsHoriz);

            spriteBatch.Begin(0, BlendState.Opaque, null, null, null, effect);
            spriteBatch.Draw(srcTexture, destRect1, Color.White);
            spriteBatch.End();

            // Perform vertical Gaussian blur.
            game.GraphicsDevice.SetRenderTarget(renderTarget2);
            outputTexture = (Texture2D)renderTarget1;

            effect.Parameters["colorMapTexture"].SetValue(outputTexture);
            effect.Parameters["offsets"].SetValue(offsetsVert);

            spriteBatch.Begin(0, BlendState.Opaque, null, null, null, effect);
            spriteBatch.Draw(outputTexture, destRect2, Color.White);
            spriteBatch.End();

            // Return the Gaussian blurred texture.
            game.GraphicsDevice.SetRenderTarget(null);
            outputTexture = (Texture2D)renderTarget2;

            return outputTexture;
        }

    And this is the draw method affected:

        public void Draw(SpriteBatch spriteBatch)
        {
            device.SetRenderTarget(maxBlur);

            spriteBatch.Begin();
            foreach (Brick brick in blueBricks)
                brick.Draw(spriteBatch);
            spriteBatch.End();

            blue = gBlur.PerformGaussianBlur((Texture2D)maxBlur, helperTarget, maxBlur, spriteBatch);

            spriteBatch.Begin();
            device.SetRenderTarget(null);
            foreach (Brick brick in redBricks)
                brick.Draw(spriteBatch);
            foreach (Brick brick in greenBricks)
                brick.Draw(spriteBatch);
            spriteBatch.Draw(blue, new Rectangle(0, 0, blue.Width, blue.Height), Color.White);
            foreach (Brick brick in purpleBricks)
                brick.Draw(spriteBatch);
            spriteBatch.End();
        }

    I'm sorry about the massive brick of text and the missing images (new user; I tried to include them, but the site wouldn't let me), but I wanted to get my problem across clearly, as I have been searching for an answer to this for quite a while now. As a side note, I have seen the bloom sample. It's very well commented, but overly complicated since it deals in 3D, and I was unable to take what I needed to learn from it. Thanks for any and all help.
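    A note that may be relevant to the purple output (an assumption, since no clear calls appear in the code above): in XNA 4.0 a freshly set render target whose usage is DiscardContents starts out filled with an opaque purple, so any pixel you don't draw to stays purple unless you clear it yourself. A minimal sketch:

        // Sketch: clear each render target right after binding it, before drawing.
        device.SetRenderTarget(maxBlur);
        device.Clear(Color.Transparent); // replaces XNA's purple "discarded" fill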


  • Calculate the Intersection of Two Volumes

    - by igrad
    If you've ever played The Swapper, you'll have a good idea of what I'm asking about. I need to check for, and isolate, areas of a rectangle that may intersect with either a circle or another rectangle. These selected areas will receive special properties, and the areas will be non-static, since the intersecting shapes themselves will also be dynamic. My first thought was to use raycasting detection, though I've only seen that in use with circles, or even ellipses. I'm curious if there's a method of using raycasting with a more rectangular approach, or if there's a totally different method already in use to accomplish this task. I would like something more exact than checking in large chunks, and since I'm using SDL2 with a logical renderer size of 1920x1080, checking if each pixel is intersecting is out of the question, as it would slow things down past a playable speed. I already have a multi-shape collision function-template in place, and I could use that, though it only checks if sides or corners are intersecting; it does not compute the overlapping area, or even find the circle's secant line, though I can't imagine it would be overly complex to implement. TL;DR: I need to find and isolate areas of a rectangle that may intersect with a circle or another rectangle without checking every single pixel on-screen.
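    For reference, a sketch of the kind of shape-level test that avoids per-pixel checks (illustrative names only; SDL2 does ship the rect-rect overlap, while the circle test clamps the circle's center to the rectangle):

        // Rect vs. rect: SDL2 computes the exact overlapping sub-rectangle.
        SDL_Rect overlap;
        bool rectsIntersect = SDL_IntersectRect(&rectA, &rectB, &overlap) == SDL_TRUE;

        // Circle vs. rect: clamp the circle center to the rect, compare distances.
        float cx = std::max((float)rect.x, std::min(circleX, (float)(rect.x + rect.w)));
        float cy = std::max((float)rect.y, std::min(circleY, (float)(rect.y + rect.h)));
        float dx = circleX - cx, dy = circleY - cy;
        bool circleIntersects = (dx * dx + dy * dy) <= radius * radius;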


  • RenderTarget2D behavior in XNA

    - by Utkarsh Sinha
    I've been dabbling with XNA for a couple of days now. This chunk of code doesn't work as I expect. The goal is to render sprites individually and composite them on another render target.

        P = RenderTarget2D (with RenderTargetUsage.PreserveContents)
        D = RenderTarget2D (with RenderTargetUsage.DiscardContents)

        for all sprites:
            graphicsDevice.SetRenderTarget(D);
            <draw sprite i>
            graphicsDevice.SetRenderTarget(P);
            <Draw D>

        graphicsDevice.SetRenderTarget(null);
        <Draw P>

    The result I get is that only the last sprite is visible. I'm sure I'm missing some piece of information about RenderTarget2D. Any hints on what that might be? Cross-posted from http://stackoverflow.com/questions/9970349/weird-rendertarget2d-behaviour
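    For reference, the usage flags in the pseudocode above are chosen at construction time in XNA 4.0; a sketch of what the two targets might look like (the sizes are placeholders):

        // Sketch: P keeps its contents across SetRenderTarget calls, D does not.
        var P = new RenderTarget2D(graphicsDevice, 800, 480, false, SurfaceFormat.Color,
                                   DepthFormat.None, 0, RenderTargetUsage.PreserveContents);
        var D = new RenderTarget2D(graphicsDevice, 800, 480, false, SurfaceFormat.Color,
                                   DepthFormat.None, 0, RenderTargetUsage.DiscardContents);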


  • Skyrim Nexus Mods on Xbox 360 by use of dawnguard?

    - by user17895
    I think it's possible. I opened up the Dawnguard marketplace content and it consists of 3 files: dawnguard.bsa <- the mod, dawnguard.esp <- the mod's installing file, and spa.bin <- don't know what this one is for. It has been confirmed you can use the top 2 files on PC for a not fully functional Dawnguard (barely functional, to be exact). So if we could just replace or add a few other .bsa and .esp files to this marketplace content, we could get mods up and running on Xbox, although I need confirmation on this. I also have no clue what the spa.bin file is for; I need to examine it some further. Furthermore, this only adds a few non-distributed files to marketplace content and won't get you booted from XBL. Also, if anyone wants to examine these files for further information, I will gladly share them with you. If you have any information or answers please email me at [email protected] Thanks


  • State of the art Culling and Batching techniques in rendering

    - by Kristian Skarseth
    I'm currently working on upgrading and restructuring an OpenGL render engine. The engine is used for visualising large scenes of architectural data (buildings with interiors), and the number of objects can become rather large. As is the case with any building, there are a lot of occluded objects within walls, and you naturally only see the objects that are in the same room as you, or the exterior if you are outside. This leaves a large number of objects that should be removed through occlusion culling and frustum culling. At the same time there is a lot of repetitive geometry that can be batched into render batches, and also a lot of objects that can be rendered with instanced rendering. The way I see it, it can be difficult to combine render batching and culling in an optimal fashion. If you batch too many objects into the same VBO, it's difficult to cull the objects on the CPU in order to skip rendering that batch. At the same time, if you skip the culling on the CPU, a lot of objects will be processed by the GPU while they are not visible. If you skip batching completely in order to more easily cull on the CPU, there will be an unwantedly high number of render calls. I have done some research into existing techniques and theories as to how these problems are solved in modern graphics, but I have not been able to find any concrete solution. An idea a colleague and I came up with was restricting batches to objects relatively close to each other, e.g. all chairs in a room, or everything within a radius of n meters; this could be simplified and optimized through the use of octrees (see the sketch below). Does anyone have any pointers to techniques used for scene management, culling, batching etc. in state-of-the-art modern graphics engines?
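    A sketch of the per-cell idea mentioned above (illustrative names, not taken from any particular engine): merge geometry statically per octree cell, then cull whole cells so a single visibility test skips or draws an entire batch:

        // Sketch: one merged VBO per octree leaf; cull per cell, draw per batch.
        struct CellBatch {
            AABB bounds;        // bounding box of everything merged into this cell
            GLuint vbo;         // pre-merged geometry of all objects in the cell
            GLsizei vertexCount;
        };

        for (const CellBatch& batch : cellBatches) {
            if (!frustum.intersects(batch.bounds))
                continue;                       // one test culls the whole batch
            glBindBuffer(GL_ARRAY_BUFFER, batch.vbo);
            // ...set up vertex attributes, then:
            glDrawArrays(GL_TRIANGLES, 0, batch.vertexCount);
        }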


  • AS3 Stage3D Mouse click problem?

    - by Martin K
    I have a problem with mouse interaction and Stage3D. The only way I have found to listen to mouse clicks and interact with Stage3D is to add a mouse event listener directly to the .stage. However, this results in the mouse click firing any time I click anywhere in the Flash application, even if there is an overlaid 2D menu where the user intended to click. I.e. I have a 3D application running in the background which listens to clicks, and I have some floating user interface elements in the foreground; ideally, if I clicked a button in the foreground, that would NOT fire a click event that the Stage3D would register. Any idea how to solve this problem?
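    One pattern that may help (a sketch, assuming the 2D UI lives in the normal display list above the Stage3D surface): clicks on UI elements bubble up to the stage with the UI element as their target, so the stage listener can ignore any click whose target is not the stage itself:

        // Sketch: only forward clicks that did not land on a 2D display object.
        stage.addEventListener(MouseEvent.CLICK, onStageClick);

        function onStageClick(e:MouseEvent):void {
            if (e.target != stage)
                return; // the click hit a foreground UI element
            // ...forward the click to the Stage3D picking code...
        }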


  • UDK game Prisoners/Guards

    - by RR_1990
    For school I need to make a little game with UDK. The concept of the game: the player is the head guard, and he will have some other guards (bots) who follow him. Between the other guards and the player are some prisoners who need to evade the guard bots. It needs to look like this. My idea was to let the guard bots follow the player at a certain distance and let the prisoner bots in the middle try to evade the guard bots. The problem is that I'm new to UnrealScript and the school doesn't support me that well. Until now I have only been able to make the guard bots follow me. I hope you guys can help me or show me something that will make this game work. Here is the class I'm using to let the bots follow me:

        class ChaseControllerAI extends AIController;

        var Pawn player;
        var float minimalDistance;
        var float speed;
        var float distanceToPlayer;
        var vector selfToPlayer;

        auto state Idle
        {
            function BeginState(Name PreviousStateName)
            {
                Super.BeginState(PreviousStateName);
            }

            event SeePlayer(Pawn p)
            {
                player = p;
                GotoState('Chase');
            }

        Begin:
            player = none;
            self.Pawn.Velocity.x = 0.0;
            self.Pawn.Velocity.Y = 0.0;
            self.Pawn.Velocity.Z = 0.0;
        }

        state Chase
        {
            function BeginState(Name PreviousStateName)
            {
                Super.BeginState(PreviousStateName);
            }

            event PlayerOutOfReach()
            {
                `Log("ChaseControllerAI CHASE Player out of reach.");
                GotoState('Idle');
            }

            // class ChaseController extends AIController; CONTINUED
            // State Chase (continued)
            event Tick(float deltaTime)
            {
                `Log("ChaseControllerAI in Event Tick.");
                selfToPlayer = self.player.Location - self.Pawn.Location;
                distanceToPlayer = Abs(VSize(selfToPlayer));

                if (distanceToPlayer > minimalDistance)
                {
                    PlayerOutOfReach();
                }
                else
                {
                    self.Pawn.Velocity = Normal(selfToPlayer) * speed;
                    //self.Pawn.Acceleration = Normal(selfToPlayer) * speed;
                    self.Pawn.SetRotation(rotator(selfToPlayer));
                    self.Pawn.Move(self.Pawn.Velocity * 0.001); // or *deltaTime
                }
            }

        Begin:
            `Log("Current state Chase:Begin: " @GetStateName()@"");
        }

        defaultproperties
        {
            bAdjustFromWalls=true;
            bIsPlayer=true;
            minimalDistance=1024; //org 1024
            speed=500;
        }


  • How can we view 3d objects from top down view in TD game

    - by Syed
    I am making a tower defense game. I am working in the x and y axes only. I have made a grid, snapped towers to it, and written a pathfinding algorithm to move the enemies. Initially I worked with cubes and spheres in place of towers and enemies. Now I am going to place real (3D) towers. Note that I haven't used the z axis up till now. The user will analyze the game from a top-down view. I want the user to see the tower placement with a little bit of a 3D look, but I have written all my code with 2D in mind. Is there any solution to my problem so that the tower placement would have a somewhat 3D feel, or as you could say, 2.5D (like Fieldrunners)? Or do I have to involve the z axis and ignore the y axis?


  • How to create a fountain in UDK

    - by user36425
    I'm trying to make a fountain in my level in UDK. I made the base of the fountain using a cylinder builder brush, and now I'm trying to put water in it. I went to use the FluidSurfaceActor, but I noticed that it is square while my fountain is a cylinder. Is there a way I can change the shape of the FluidSurfaceActor to fit the builder brush shape, or is there another way to do this? Or is it hopeless, and do I have to make my fountain a cube? Here is a link/picture to a screenshot of what I'm talking about:


  • Having a problem with texturing vertices in WebGL, think parameters are off in the image?

    - by mathacka
    I'm having a problem texturing a simple rectangle in my WebGL program. I have the parameters set as follows:

        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, textureImage);

    I'm using this image: the properties of the image say it's 32-bit depth, so that should take care of the gl.UNSIGNED_BYTE, and I've tried both gl.RGBA and gl.RGB to see if it's not reading the transparency. It is a 32x32 pixel image, so it's a power of 2. I've tried almost all the combinations of formats and types, but I'm not sure if that is the answer or not. I'm getting these two errors in the Chrome console:

        INVALID_VALUE: texImage2D: invalid image (index):101
        WebGL: drawArrays: texture bound to texture unit 0 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering or is not 'texture complete'. Or the texture is Float or Half Float type with linear filtering while OES_float_linear or OES_half_float_linear extension is not enabled.

    The drawArrays call is simply gl.drawArrays(gl.TRIANGLES, 0, 6); using 6 vertices to make a rectangle.
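    One common cause worth ruling out (an assumption, since the image-loading code isn't shown): "invalid image" is what WebGL reports when texImage2D runs before the Image has finished loading, so the upload usually belongs in the onload callback. A minimal sketch, with the file name as a placeholder:

        // Sketch: upload the texture only after the image has finished loading.
        var textureImage = new Image();
        textureImage.onload = function () {
            gl.bindTexture(gl.TEXTURE_2D, texture);
            gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, textureImage);
            gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
            gl.bindTexture(gl.TEXTURE_2D, null);
        };
        textureImage.src = "mytexture.png"; // hypothetical path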


  • How do I consistently re-size my game window and elements?

    - by Milo
    In my 2D game, I have a flow layout. Inside the flow layout are tables. I have a slider that lets the user make the tables larger or smaller, which makes the background larger or smaller too. Everything should scale proportionally, which means the background should stay at the same position when I make things larger, and it almost does. When the scrollbar is at 0, it does exactly this. As the scrollbar gets further down, problems arise: I'll toggle the slider maybe 3 times, and on the fourth time the background jumps a little lower on the Y axis. In order to be efficient, I only start rendering the background near the parent of the flow layout. Here it is:

        void LobbyTableManager::renderBG( GraphicsContext* g, agui::Rectangle& absRect, agui::Rectangle& childRect )
        {
            int cx, cy, cw, ch;
            g->getClippingRect(cx,cy,cw,ch);
            g->setClippingRect(absRect.getX(),absRect.getY(),absRect.getWidth(),absRect.getHeight());

            float scale = 0.35f;
            int w = m_bgSprite->getWidth() * getTableScale() * scale;
            int h = m_bgSprite->getHeight() * getTableScale() * scale;

            int numX = ceil(absRect.getWidth() / (float)w) + 2;
            int numY = ceil(absRect.getHeight() / (float)h) + 2;

            float offsetX = m_activeTables[0]->getLocation().getX() - w;
            float offsetY = m_activeTables[0]->getLocation().getY() - h;

            int startY = childRect.getY();
            if(moo)
            {
                std::cout << "S=" << startY << ",";
            }

            int numAttempts = 0;
            while(startY + h < absRect.getY() && numAttempts < 1000)
            {
                startY += h;
                if(moo)
                {
                    std::cout << startY << ",";
                }
                numAttempts++;
            }
            if(moo)
            {
                std::cout << "\n";
                moo = false;
            }

            g->holdDrawing();
            for(int i = 0; i < numX; ++i)
            {
                for(int j = 0; j < numY; ++j)
                {
                    g->drawScaledSprite(m_bgSprite,0,0,m_bgSprite->getWidth(),m_bgSprite->getHeight(),
                        absRect.getX() + (i * w) + (offsetX),absRect.getY() + (j * h) + startY,w,h,0);
                }
            }
            g->unholdDrawing();

            g->setClippingRect(cx,cy,cw,ch);
        }

    The numeric problem seems to be in the value of startY. I logged startY to figure out its value. The log below is from zooming in only; pay attention to the final number before the next S=. What should happen is that the numbers are linear, e.g. -40, -38, -36, -34, -32, -30, etc. The start numbers do correlate linearly, e.g. 62k, 64k, 66k, 68k, 70k, etc., but the end result is wrong every third or fourth time. Here is most of the resize code:

        void LobbyTableManager::setTableScale( float scale )
        {
            scale += 0.3f;
            scale *= 2.0f;

            agui::Gui* gotGui = getGui();
            float scrollRel = m_vScroll->getRelativeValue();
            setScale(scale);

            rescaleTables();
            resizeFlow();

            if(gotGui)
            {
                gotGui->toggleWidgetLocationChanged(false);
            }

            updateScrollBars();
            float newVal = scrollRel * m_vScroll->getMaxValue();
            if((int)(newVal + 0.5f) > (int)newVal)
            {
                newVal++;
            }
            m_vScroll->setValue(newVal);

            static int x = 0;
            x++;
            moo = true;
            //std::cout << m_vScroll->getValue() << std::endl;

            if(gotGui)
            {
                gotGui->toggleWidgetLocationChanged(true);
            }
            if(gotGui)
            {
                gotGui->_widgetLocationChanged();
            }
        }

        void LobbyTableManager::valueChanged( agui::VScrollBar* source, int val )
        {
            if(getGui())
            {
                getGui()->toggleWidgetLocationChanged(false);
            }

            m_flow->setLocation(0, -val);

            if(getGui())
            {
                getGui()->toggleWidgetLocationChanged(true);
                getGui()->_widgetLocationChanged();
            }
        }


  • Visualization tools for physical simulations

    - by Nick
    I'm interested in starting some physics simulations, and I'm getting hung up on the visualization side of things. I have lots of resources on how to implement the simulation itself, but I'd rather not learn two things at once: the simulation part and a new, complex visualization API. Are there any high-level visualization tools that are language independent? I understand that I'll have to learn some new code for visualization, but I'd like to start at a high level; OpenGL is my long-term goal, not my prototype goal.


  • Extract derived 3D scaling from a 3D Sprite to set to a 2D billboard

    - by Bill Kotsias
    I am trying to get the derived position and scaling of a 3D Sprite and set them to a 2D Sprite. I have managed to do the first part like this:

        var p:Point = sprite3d.local3DToGlobal(new Vector3D(0,0,0));
        billboard.x = p.x;
        billboard.y = p.y;

    But I can't get the scaling part correctly. I am trying this:

        var mat:Matrix3D = sprite3d.transform.getRelativeMatrix3D(stage); // get derived matrix(?)
        var scaleV:Vector3D = mat.decompose()[2]; // get scaling vector from derived matrix
        var scale:Number = scaleV.length;
        billboard.scaleX = scale;
        billboard.scaleY = scale;

    ...but the result is apparently wrong. PS. One might ask what I am trying to achieve: I am trying to create "billboard" 3D sprites, i.e. sprites which are affected by all 3D transformations except rotations, so that they always face the "camera".
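    One detail that might matter here (a guess, not a verified fix): Matrix3D.decompose() returns per-axis scale factors in its third Vector3D, so taking that vector's length mixes all three axes (it comes out as sqrt(3), about 1.73, even for an unscaled sprite). Reading the axes directly would look like this:

        // Sketch: use the per-axis scale components rather than the vector length.
        var components:Vector.<Vector3D> = mat.decompose();
        var scaleV:Vector3D = components[2];
        billboard.scaleX = scaleV.x;
        billboard.scaleY = scaleV.y;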


  • Encode two integers into colour values and compare them in a HLSL shader

    - by Ben Slinger
    I am writing a 2D point-and-click adventure game in MonoGame, and I'd like to be able to create an image mask for every room which defines which parts of the background a character can walk behind, and at which Y value a character needs to be for the background to be drawn above the character. I haven't done any shader work before, but after doing some reading I thought the following solution should work:

    1. Create a mask for the room with the different walk-behind areas painted in a colour that defines the baseline Y value (the Walk Behind Mask).
    2. Render all objects to a RenderTarget2D (the Base Texture).
    3. Render all objects to a different RenderTarget2D, but change every pixel of each object to a colour that defines its Y value (the Position Mask).
    4. Pass these two textures plus the image mask into the shader, and for each pixel compare the colour of the Position Mask to the colour of the Walk Behind Mask: if the Position Mask pixel is larger (thus lower on the screen and closer to the camera) than the Walk Behind Mask, draw the pixel from the Base Texture; otherwise draw a transparent pixel (allowing the background to show through).

    I've got it mostly working, but I'm having trouble packing and unpacking the Y values into colours and retrieving them correctly in the shader. Here are some code examples of how I'm doing it so far. When drawing to the Position Mask RenderTarget2D:

        Color posColor = new Color(((int)Position.Y >> 16) & 255, ((int)Position.Y >> 8) & 255, (int)Position.Y & 255);

    So as far as I can tell, this should be taking the first 3 bytes of the position integer and encoding them into a 4-byte colour (ignoring the alpha as the 4th byte). This seems to work fine: when my character is at Y = 600, the resulting Color from this is {[Color: R=0, G=2, B=88, A=255, PackedValue=4283957760]}. I then have an area in my Walk Behind Mask that I only want the character to be displayed behind if his Y value is lower than 655, so I've painted it with R=0, G=2, B=143, A=255. Now, I think I have the shader mostly OK as well; here's what I have:

        sampler BaseTexture : register(s0);
        sampler MaskTexture : register(s1);
        sampler PositionTexture : register(s2);

        float4 mask( float2 coords : TEXCOORD0 ) : COLOR0
        {
            float4 color = tex2D(BaseTexture, coords);
            float4 maskColor = tex2D(MaskTexture, coords);
            float4 positionColor = tex2D(PositionTexture, coords);

            float maskCompare = (maskColor.r * pow(2,24)) + (maskColor.g * pow(2,16)) + (maskColor.b * pow(2,8));
            float positionCompare = (positionColor.r * pow(2,24)) + (positionColor.g * pow(2,16)) + (positionColor.b * pow(2,8));

            return positionCompare < maskCompare ? float4(0,0,0,0) : color;
        }

        technique Technique1
        {
            pass NoEffect
            {
                PixelShader = compile ps_3_0 mask();
            }
        }

    This isn't working, however; currently all characters are displayed behind the walk-behind area, regardless of their Y value. I tried printing out some debug info by grabbing the pixel from both the Position Mask and the Walk Behind Mask under the current mouse position, and it seems like maybe the colours aren't being rendered to the Position Mask correctly. When calculating the colour in the code above I'm getting R=0, G=2, B=88, A=255, but when I mouse over my character I get R=0, G=0, B=30, A=255. Any ideas what I'm doing wrong? It seems like maybe I'm losing some information when rendering to the RenderTarget2D, but I'm not knowledgeable enough to figure out what's happening. Also, I should probably ask: is this an efficient way to do this? Will there be a performance impact?

    Edit: Whoops, turns out there was a bug that I'd introduced myself: I was drawing out the Position Mask with the position Color left over from some early testing I was doing. So this solution is working perfectly, though I'm still interested in whether this is an efficient solution performance-wise.
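    One subtlety in the unpacking math above, for anyone adapting it (an observation, not part of the original post): tex2D returns channels normalized to 0..1, so the pow(2,24)/pow(2,16)/pow(2,8) weighting produces a value that is proportional to the packed integer rather than equal to it, which happens to be all an ordering comparison needs. To recover the exact integer, each channel needs a *255 before shifting; a sketch:

        // Sketch: recover the packed integer Y from a sampled colour.
        float3 bytes = round(positionColor.rgb * 255.0);
        float y = bytes.r * 65536.0 + bytes.g * 256.0 + bytes.b;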


  • Vertex buffer acting strange? [on hold]

    - by Ryan Capote
    I'm having a strange problem, and I don't know what could be causing it. My current code is identical to how I've done this before. I'm trying to render a rectangle using a VBO and an orthographic projection.

    My results: What I expect: a 3x3 rectangle in the top left corner.

        #include <stdio.h>
        #include <GL\glew.h>
        #include <GLFW\glfw3.h>
        #include "lodepng.h"

        static const int FALSE = 0;
        static const int TRUE = 1;

        static const char* VERT_SHADER =
            "#version 330\n"
            "layout(location=0) in vec4 VertexPosition; "
            "layout(location=1) in vec2 UV;"
            "uniform mat4 uProjectionMatrix;"
            /*"out vec2 TexCoords;"*/
            "void main(void) {"
            "    gl_Position = uProjectionMatrix*VertexPosition;"
            /*"    TexCoords = UV;"*/
            "}";

        static const char* FRAG_SHADER =
            "#version 330\n"
            /*"uniform sampler2D uDiffuseTexture;"
            "uniform vec4 uColor;"
            "in vec2 TexCoords;"*/
            "out vec4 FragColor;"
            "void main(void) {"
            /*"    vec4 texel = texture2D(uDiffuseTexture, TexCoords);"
            "    if(texel.a <= 0) {"
            "         discard;"
            "    }"
            "    FragColor = texel;"*/
            "    FragColor = vec4(1.f);"
            "}";

        static int g_running;
        static GLFWwindow *gl_window;
        static float gl_projectionMatrix[16];

        /*
            Structures
        */
        typedef struct _Vertex {
            float x, y, z, w;
            float u, v;
        } Vertex;

        typedef struct _Position {
            float x, y;
        } Position;

        typedef struct _Bitmap {
            unsigned char *pixels;
            unsigned int width, height;
        } Bitmap;

        typedef struct _Texture {
            GLuint id;
            unsigned int width, height;
        } Texture;

        typedef struct _VertexBuffer {
            GLuint bufferObj, vertexArray;
        } VertexBuffer;

        typedef struct _ShaderProgram {
            GLuint vertexShader, fragmentShader, program;
        } ShaderProgram;

        /*
            http://en.wikipedia.org/wiki/Orthographic_projection
        */
        void createOrthoProjection(float *projection, float width, float height, float far, float near)
        {
            const float left = 0;
            const float right = width;
            const float top = 0;
            const float bottom = height;

            projection[0] = 2.f / (right - left);
            projection[1] = 0.f;
            projection[2] = 0.f;
            projection[3] = -(right+left) / (right-left);
            projection[4] = 0.f;
            projection[5] = 2.f / (top - bottom);
            projection[6] = 0.f;
            projection[7] = -(top + bottom) / (top - bottom);
            projection[8] = 0.f;
            projection[9] = 0.f;
            projection[10] = -2.f / (far-near);
            projection[11] = (far+near)/(far-near);
            projection[12] = 0.f;
            projection[13] = 0.f;
            projection[14] = 0.f;
            projection[15] = 1.f;
        }

        /*
            Textures
        */
        void loadBitmap(const char *filename, Bitmap *bitmap, int *success)
        {
            int error = lodepng_decode32_file(&bitmap->pixels, &bitmap->width, &bitmap->height, filename);

            if (error != 0) {
                printf("Failed to load bitmap.\n");
                printf(lodepng_error_text(error));
                success = FALSE;
                return;
            }
        }

        void destroyBitmap(Bitmap *bitmap)
        {
            free(bitmap->pixels);
        }

        void createTexture(Texture *texture, const Bitmap *bitmap)
        {
            texture->id = 0;
            glGenTextures(1, &texture->id);
            glBindTexture(GL_TEXTURE_2D, texture);

            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bitmap->width, bitmap->height, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, bitmap->pixels);

            glBindTexture(GL_TEXTURE_2D, 0);
        }

        void destroyTexture(Texture *texture)
        {
            glDeleteTextures(1, &texture->id);
            texture->id = 0;
        }

        /*
            Vertex Buffer
        */
        void createVertexBuffer(VertexBuffer *vertexBuffer, Vertex *vertices)
        {
            glGenBuffers(1, &vertexBuffer->bufferObj);
            glGenVertexArrays(1, &vertexBuffer->vertexArray);
            glBindVertexArray(vertexBuffer->vertexArray);

            glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer->bufferObj);
            glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * 6, (const GLvoid*)vertices, GL_STATIC_DRAW);

            const unsigned int uvOffset = sizeof(float) * 4;

            glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
            glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)uvOffset);

            glEnableVertexAttribArray(0);
            glEnableVertexAttribArray(1);

            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glBindVertexArray(0);
        }

        void destroyVertexBuffer(VertexBuffer *vertexBuffer)
        {
            glDeleteBuffers(1, &vertexBuffer->bufferObj);
            glDeleteVertexArrays(1, &vertexBuffer->vertexArray);
        }

        void bindVertexBuffer(VertexBuffer *vertexBuffer)
        {
            glBindVertexArray(vertexBuffer->vertexArray);
            glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer->bufferObj);
        }

        void drawVertexBufferMode(GLenum mode)
        {
            glDrawArrays(mode, 0, 6);
        }

        void drawVertexBuffer()
        {
            drawVertexBufferMode(GL_TRIANGLES);
        }

        void unbindVertexBuffer()
        {
            glBindVertexArray(0);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }

        /*
            Shaders
        */
        void compileShader(ShaderProgram *shaderProgram, const char *vertexSrc, const char *fragSrc)
        {
            GLenum err;
            shaderProgram->vertexShader = glCreateShader(GL_VERTEX_SHADER);
            shaderProgram->fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);

            if (shaderProgram->vertexShader == 0) {
                printf("Failed to create vertex shader.");
                return;
            }

            if (shaderProgram->fragmentShader == 0) {
                printf("Failed to create fragment shader.");
                return;
            }

            glShaderSource(shaderProgram->vertexShader, 1, &vertexSrc, NULL);
            glCompileShader(shaderProgram->vertexShader);
            glGetShaderiv(shaderProgram->vertexShader, GL_COMPILE_STATUS, &err);

            if (err != GL_TRUE) {
                printf("Failed to compile vertex shader.");
                return;
            }

            glShaderSource(shaderProgram->fragmentShader, 1, &fragSrc, NULL);
            glCompileShader(shaderProgram->fragmentShader);
            glGetShaderiv(shaderProgram->fragmentShader, GL_COMPILE_STATUS, &err);

            if (err != GL_TRUE) {
                printf("Failed to compile fragment shader.");
                return;
            }

            shaderProgram->program = glCreateProgram();
            glAttachShader(shaderProgram->program, shaderProgram->vertexShader);
            glAttachShader(shaderProgram->program, shaderProgram->fragmentShader);

            glLinkProgram(shaderProgram->program);

            glGetProgramiv(shaderProgram->program, GL_LINK_STATUS, &err);

            if (err != GL_TRUE) {
                printf("Failed to link shader.");
                return;
            }
        }

        void destroyShader(ShaderProgram *shaderProgram)
        {
            glDetachShader(shaderProgram->program, shaderProgram->vertexShader);
            glDetachShader(shaderProgram->program, shaderProgram->fragmentShader);

            glDeleteShader(shaderProgram->vertexShader);
            glDeleteShader(shaderProgram->fragmentShader);

            glDeleteProgram(shaderProgram->program);
        }

        GLuint getUniformLocation(const char *name, ShaderProgram *program)
        {
            GLuint result = 0;
            result = glGetUniformLocation(program->program, name);

            return result;
        }

        void setUniformMatrix(float *matrix, const char *name, ShaderProgram *program)
        {
            GLuint loc = getUniformLocation(name, program);

            if (loc == -1) {
                printf("Failed to get uniform location in setUniformMatrix.\n");
                return;
            }

            glUniformMatrix4fv(loc, 1, GL_FALSE, matrix);
        }

        /*
            General functions
        */
        static int isRunning()
        {
            return g_running && !glfwWindowShouldClose(gl_window);
        }

        static void initializeGLFW(GLFWwindow **window, int width, int height, int *success)
        {
            if (!glfwInit()) {
                printf("Failed it inialize GLFW.");
                *success = FALSE;
                return;
            }

            glfwWindowHint(GLFW_RESIZABLE, 0);
            *window = glfwCreateWindow(width, height, "Alignments", NULL, NULL);

            if (!*window) {
                printf("Failed to create window.");
                glfwTerminate();
                *success = FALSE;
                return;
            }

            glfwMakeContextCurrent(*window);

            GLenum glewErr = glewInit();
            if (glewErr != GLEW_OK) {
                printf("Failed to initialize GLEW.");
                printf(glewGetErrorString(glewErr));
                *success = FALSE;
                return;
            }

            glClearColor(0.f, 0.f, 0.f, 1.f);
            glViewport(0, 0, width, height);
            *success = TRUE;
        }

        int main(int argc, char **argv)
        {
            int err = FALSE;
            initializeGLFW(&gl_window, 480, 320, &err);
            glDisable(GL_DEPTH_TEST);

            if (err == FALSE) {
                return 1;
            }

            createOrthoProjection(gl_projectionMatrix, 480.f, 320.f, 0.f, 1.f);

            g_running = TRUE;

            ShaderProgram shader;
            compileShader(&shader, VERT_SHADER, FRAG_SHADER);
            glUseProgram(shader.program);
            setUniformMatrix(&gl_projectionMatrix, "uProjectionMatrix", &shader);

            Vertex rectangle[6];
            VertexBuffer vbo;
            rectangle[0] = (Vertex){0.f, 0.f, 0.f, 1.f, 0.f, 0.f}; // Top left
            rectangle[1] = (Vertex){3.f, 0.f, 0.f, 1.f, 1.f, 0.f}; // Top right
            rectangle[2] = (Vertex){0.f, 3.f, 0.f, 1.f, 0.f, 1.f}; // Bottom left
            rectangle[3] = (Vertex){3.f, 0.f, 0.f, 1.f, 1.f, 0.f}; // Top left
            rectangle[4] = (Vertex){0.f, 3.f, 0.f, 1.f, 0.f, 1.f}; // Bottom left
            rectangle[5] = (Vertex){3.f, 3.f, 0.f, 1.f, 1.f, 1.f}; // Bottom right

            createVertexBuffer(&vbo, &rectangle);

            bindVertexBuffer(&vbo);

            while (isRunning()) {
                glClear(GL_COLOR_BUFFER_BIT);
                glfwPollEvents();

                drawVertexBuffer();

                glfwSwapBuffers(gl_window);
            }

            unbindVertexBuffer(&vbo);

            glUseProgram(0);
            destroyShader(&shader);
            destroyVertexBuffer(&vbo);
            glfwTerminate();
            return 0;
        }
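    One mismatch that may be worth ruling out (an observation about the code above, not a confirmed diagnosis): createOrthoProjection writes the translation terms into indices 3, 7, and 11, which is row-major layout, while glUniformMatrix4fv with transpose set to GL_FALSE interprets the array as column-major. Letting GL transpose it is a one-flag experiment:

        /* Sketch: upload the row-major array as-is and have GL transpose it. */
        glUniformMatrix4fv(loc, 1, GL_TRUE, matrix);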


  • Taking fixed direction on hemisphere and project to normal (openGL)

    - by Maik Xhani
    I am trying to perform sampling using a hemisphere around a surface normal. I want to experiment with fixed directions (and maybe jitter slightly between frames). So I have these directions:

        vec3 sampleDirections[6] = {vec3(0.0f, 1.0f, 0.0f),
                                    vec3(0.0f, 0.5f, 0.866025f),
                                    vec3(0.823639f, 0.5f, 0.267617f),
                                    vec3(0.509037f, 0.5f, -0.700629f),
                                    vec3(-0.509037f, 0.5f, -0.700629),
                                    vec3(-0.823639f, 0.5f, 0.267617f)};

    Now I want the first direction to be projected onto the normal and the others accordingly. I tried these 2 pieces of code, both failing. This is what I used for random sampling (it doesn't seem to work well; the samples seem to be biased towards a certain direction), and I just used one of the fixed directions instead of s (when I used it with a fixed direction I didn't use theta and phi):

        vec3 CosWeightedRandomHemisphereDirection( vec3 n, float rand1, float rand2 )
        {
            float theta = acos(sqrt(1.0f-rand1));
            float phi = 6.283185f * rand2;

            vec3 s = vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta));

            vec3 v = normalize(cross(n,vec3(0.0072, 1.0, 0.0034)));
            vec3 u = cross(v, n);

            u = s.x*u;
            v = s.y*v;
            vec3 w = s.z*n;

            vec3 direction = u+v+w;
            return normalize(direction);
        }

    ** EDIT ** This is the new code:

        vec3 FixedHemisphereDirection( vec3 n, vec3 sampleDir)
        {
            vec3 x;
            vec3 z;

            if(abs(n.x) < abs(n.y)){
                if(abs(n.x) < abs(n.z)){
                    x = vec3(1.0f,0.0f,0.0f);
                }else{
                    x = vec3(0.0f,0.0f,1.0f);
                }
            }else{
                if(abs(n.y) < abs(n.z)){
                    x = vec3(0.0f,1.0f,0.0f);
                }else{
                    x = vec3(0.0f,0.0f,1.0f);
                }
            }

            z = normalize(cross(x,n));
            x = cross(n,z);

            mat3 M = mat3( x.x, n.x, z.x,
                           x.y, n.y, z.y,
                           x.z, n.z, z.z);

            return M*sampleDir;
        }

    So if my n = (0,0,1) and my sampleDir = (0,1,0), shouldn't M*sampleDir be (0,0,1)? Because that is what I was expecting.
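    For what it's worth (an observation about the construction above, not about the rest of the pipeline): GLSL's mat3 constructor consumes its scalars in column-major order, so the M above is the transpose of a basis whose columns are (x, n, z). Building it from the vectors directly makes each vector one column, which gives the expected result:

        // Sketch: each vector becomes one column of the matrix.
        mat3 M = mat3(x, n, z); // now M * vec3(0.0, 1.0, 0.0) == n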


  • OpenGL camera moves faster than player

    - by opiop65
    I have a side-scroller game made in OpenGL, and I'm trying to center the player in the viewport when he moves. I know how to do it:

        cameraX = Width / 2 / TileSize - playerPosX
        cameraY = Height / 2 / TileSize - playerPosY

    However, I have a problem. The player and "camera" move, but the player moves faster than the "camera" scrolls, so the player can actually move out of the screen. Some code; this is how I translate the camera:

        public Camera(){
        }

        public void update(Player p){
            glTranslatef(-p.getPos().x - Main.WIDTH / 64 / 2, -p.getPos().y - Main.HEIGHT / 64 / 2, 1);
        }

    Here's how I move the player:

        public void update(){
            if(Keyboard.isKeyDown(Keyboard.KEY_D)){
                this.move(MOVESPEED, 0);
            }
            if(Keyboard.isKeyDown(Keyboard.KEY_A)){
                this.move(-MOVESPEED, 0);
            }
        }

    The move method:

        public void move(float x, float y){
            this.getPos().set(this.getPos().x + x, this.getPos().y + y);
        }

    And then after I move the player, I update the player's geometry, which shouldn't matter. What am I doing wrong here? This seems like such a simple problem, yet it doesn't work!
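    One thing worth checking (an assumption, since the rest of the render loop isn't shown): glTranslatef multiplies onto the current matrix, so calling it every frame without resetting first accumulates the translation instead of setting it, which makes camera movement drift away from the player's actual position. A sketch of resetting first:

        public void update(Player p){
            glLoadIdentity(); // start from a clean modelview matrix each frame
            glTranslatef(-p.getPos().x - Main.WIDTH / 64 / 2,
                         -p.getPos().y - Main.HEIGHT / 64 / 2, 1);
        }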


  • 2D Tile Game - Smooth Biome Terrain Transitions

    - by Cyral
    While working on my 2D tile-based game, I encountered a problem. I use Perlin noise to generate the terrain. Some biomes (desert, forest, etc.) have different flatness values depending on the terrain type, which causes the end/start of a new biome to have a big cliff because the terrain height differs. When 2 biomes have the same flatness they are fine, but if they are different, this can happen. Is there any way to keep this from happening? Example (in programmer art):
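    A common way to handle this (a sketch of the general idea only; all names here are made up): blend the two biomes' generation parameters over a transition zone instead of switching abruptly, e.g. interpolate the flatness across the last N columns before a border and feed the blended value into the existing height calculation:

        // Sketch: ease between biome flatness values near the boundary.
        float t = Math.Min(columnsIntoNewBiome / (float)transitionWidth, 1f);
        float flatness = oldFlatness + (newFlatness - oldFlatness) * t;
        // use "flatness" in the Perlin-based height function as usual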


  • Sliding collision response

    - by dbostream
    I have been reading plenty of tutorials about sliding collision responses, yet I am not able to implement one properly in my project. What I want to do is make a puck slide along the rounded corner boards of a hockey rink. In my latest attempt the puck does slide along the boards, but there are some strange velocity behaviours. First of all, the puck slows down a lot pretty much right away, and then it slides for a while and stops before exiting the corner. Even if I double the speed I get similar behaviour, and the puck does not make it out of the corner. I used some ideas from this document: http://www.peroxide.dk/papers/collision/collision.pdf. This is what I have:

    The Update method is called from the game loop when it is time to update the puck (I removed some irrelevant parts). I use two states (current, previous), which are used to interpolate the position during rendering.

        public override void Update(double fixedTimeStep)
        {
            /* Acceleration is set to 0 for now. */
            Acceleration.Zero();

            PreviousState = CurrentState;

            _collisionRecursionDepth = 0;

            CurrentState.Position = SlidingCollision(CurrentState.Position,
                CurrentState.Velocity * fixedTimeStep + 0.5 * Acceleration * fixedTimeStep * fixedTimeStep);

            /* Should not this be affected by a sliding collision, and not only the position? */
            CurrentState.Velocity = CurrentState.Velocity + Acceleration * fixedTimeStep;

            Heading = Vector2.NormalizeRet(CurrentState.Velocity);
        }

        private Vector2 SlidingCollision(Vector2 position, Vector2 velocity)
        {
            if(_collisionRecursionDepth > 5)
                return position;

            bool collisionFound = false;
            Vector2 futurePosition = position + velocity;
            Vector2 intersectionPoint = new Vector2();
            Vector2 intersectionPointNormal = new Vector2();

            /* I did not include the collision detection code; if a collision is detected,
               the intersection point and the normal at that point are returned. */

            if(!collisionFound)
                return futurePosition; /* If no collision was detected it is safe to move to the future position. */

            /* It is not exactly the intersection point, but slightly before. */
            Vector2 newPosition = intersectionPoint;

            /* oldVelocity is set to the distance from the newPosition (intersection point)
               to the position it would have moved to had it not collided. */
            Vector2 oldVelocity = futurePosition - newPosition;

            /* Project the distance left to move along the intersection normal. */
            Vector2 newVelocity = oldVelocity - intersectionPointNormal * oldVelocity.DotProduct(intersectionPointNormal);

            if(newVelocity.LengthSq() < 0.001)
                return newPosition; /* If almost no speed, no need to continue. */

            _collisionRecursionDepth++;

            return SlidingCollision(newPosition, newVelocity);
        }

    What am I doing wrong with the velocity? I have been staring at this for so long that I have gone blind. I have tried different values of recursion depth, but it does not seem to make things better. Let me know if you need more information; I appreciate any help.

    EDIT: A combination of Patrick Hughes' and teodron's answers solved the velocity problem (I think), thanks a lot! This is the new code. I decided to use a separate recursion method now too, since I don't want to recalculate the acceleration in each recursion:

        public override void Update(double fixedTimeStep)
        {
            Acceleration.Zero(); // = CalculateAcceleration(fixedTimeStep);

            PreviousState = new MovingEntityState(CurrentState.Position, CurrentState.Velocity);

            CurrentState = SlidingCollision(CurrentState, fixedTimeStep);

            Heading = Vector2.NormalizeRet(CurrentState.Velocity);
        }

        private MovingEntityState SlidingCollision(MovingEntityState state, double timeStep)
        {
            bool collisionFound = false;

            /* Calculate the next position given no detected collision. */
            Vector2 futurePosition = state.Position + state.Velocity * timeStep;

            Vector2 intersectionPoint = new Vector2();
            Vector2 intersectionPointNormal = new Vector2();

            /* I did not include the collision detection code; if a collision is detected,
               the intersection point and the normal at that point are returned. */

            /* If no collision was detected it is safe to move to the future position. */
            if (!collisionFound)
                return new MovingEntityState(futurePosition, state.Velocity);

            /* Set new position to the intersection point (slightly before). */
            Vector2 newPosition = intersectionPoint;

            /* Project the new velocity along the intersection normal. */
            Vector2 newVelocity = state.Velocity - 1.90 * intersectionPointNormal * state.Velocity.DotProduct(intersectionPointNormal);

            /* Calculate the time of collision. */
            double timeOfCollision = Math.Sqrt((newPosition - state.Position).LengthSq() / (futurePosition - state.Position).LengthSq());

            /* Calculate the new time step: the fraction of the full step remaining after the collision, times the current time step. */
            double newTimeStep = timeStep * (1 - timeOfCollision);

            return SlidingCollision(new MovingEntityState(newPosition, newVelocity), newTimeStep);
        }

    Even though the code above seems to slide the puck correctly, please have a look at it. I have a few questions. If I don't multiply by 1.90 in the newVelocity calculation, it doesn't work (I get a stack overflow when the puck enters the corner, because the timeStep decreases very slowly; a collision is found early in every recursion). Why is that? What does 1.90 really do, and why 1.90? Also, I have a new problem: the puck does not move parallel to the short side after exiting the curve; to be more exact, it moves outside the rink (I am not checking for any collisions with the short side at the moment). When I perform the collision detection I first check that the puck is in the correct quadrant. For example, the bottom-right corner is quadrant four, i.e. circleCenter.X < puck.X && circleCenter.Y > puck.Y. Is this a problem? Or should the short side of the rink be the one to make the puck go parallel to it, and not the last collision in the corner?

    EDIT2: This is the code I use for collision detection; maybe it has something to do with the fact that I can't make the puck slide (-1.0) but only reflect (-2.0):

        /* Point is the current position (not the predicted one), and quadrant is 4
           for the bottom-right corner, for example. */
        if (GeometryHelper.PointInCircleQuadrant(circleCenter, circleRadius, state.Position, quadrant))
        {
            /* The line is: from = state.Position, to = futurePosition. So a collision is
               detected when "from" is inside the circle and "to" is outside. */
            if (GeometryHelper.LineCircleIntersection2d(state.Position, futurePosition, circleCenter, circleRadius, intersectionPoint, quadrant))
            {
                collisionFound = true;

                /* Set the intersection point to slightly before the real intersection point
                   (I read somewhere this was good to do because of floating point precision,
                   not sure exactly how much though). */
                intersectionPoint = intersectionPoint - Vector2.NormalizeRet(state.Velocity) * 0.001;

                /* Normal at the intersection point. */
                intersectionPointNormal = Vector2.NormalizeRet(circleCenter - intersectionPoint);
            }
        }

    When I set the intersection point, if I for example use 0.1 instead of 0.001, the puck travels further before it gets stuck, but for all values I have tried (including 0, the real intersection point) it gets stuck somewhere (though I don't necessarily get a stack overflow). Can something in this part be the cause of my problem? I can see why I get the stack overflow when using -1.0 when calculating the new velocity vector, but not how to solve it. I traced the time steps used in the recursion (the initial time step is always 1/60 ~ 0.016666):

        Recursion depth | Time step for next recursive call
        [Start recursion, time step ~ 0.016666]
        0               | 0.000985806527246773
        [No collision, stop recursion]
        [Start recursion, time step ~ 0.016666]
        0               | 0.0149596704364629
        1               | 0.0144883449376379
        2               | 0.0143155612984837
        3               | 0.014224925727213
        4               | 0.0141673917461608
        5               | 0.0141265435314026
        6               | 0.0140953966184117
        7               | 0.0140704653746625
        ...and so on.

    As you can see, a collision is detected early in every recursive call, which means the next time step decreases very slowly and thus the recursion depth gets very big: stack overflow.


  • 2D tower defense - A bullet to an enemy

    - by Tashu
    I'm trying to find a good solution for a bullet to hit the enemy. The game is a 2D tower defense; the tower is supposed to shoot a bullet and hit the enemy, guaranteed. I tried this solution: http://blog.wolfire.com/2009/07/linear-algebra-for-game-developers-part-1/ The link mentions subtracting the bullet's origin from the enemy's position (vector subtraction). I tried that, but the bullet just follows the enemy around:

        float diffX = enemy.position.x - position.x;
        float diffY = enemy.position.y - position.y;

        velocity.x = diffX;
        velocity.y = diffY;

        position.add(velocity.x * deltaTime, velocity.y * deltaTime);

    I'm familiar with vectors, but I'm not sure what vector math operations are needed to get this working.
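    For reference, a sketch of the usual fix (names match the snippet above; bulletSpeed is a made-up constant): normalize the difference vector so it only carries direction, then scale it by the bullet's speed. Recomputing it every frame gives a homing bullet, which is the standard way to guarantee a hit:

        float diffX = enemy.position.x - position.x;
        float diffY = enemy.position.y - position.y;
        float length = (float) Math.sqrt(diffX * diffX + diffY * diffY);

        if (length > 0f) {
            velocity.x = diffX / length * bulletSpeed; // unit direction * speed
            velocity.y = diffY / length * bulletSpeed;
        }

        position.add(velocity.x * deltaTime, velocity.y * deltaTime);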


  • Google play game services and Facebook integration in one game

    - by Ineentho
    We are creating a cross-platform game for iOS and Android. We have thought about how, and with which services, we should integrate achievements and scoreboards. For the iOS part, we are pretty sure this is how we want to do it, in order from when the user opens the app for the first time:

    1. Connect with Game Center (this should be automatic; the user shouldn't even notice). We will also get the player's nickname for public scoreboards here.
    2. Ask if the user wants to connect with Facebook so that we can compare the player's high scores with their friends'.

    We could add Google Play game services there as well, but I don't feel like that adds anything to the experience for the end user. Now comes the tricky part: Android. We thought that we could do just like on iOS, except that we replace Game Center with Google Play game services. However, unlike Game Center, Game Services will ask the user to log in to their Google+ account and allow us to access their account. So now what we have is a double login: first with Google+ and then with Facebook. What will users think about that? Should we scrap Play Services entirely and just ask the user for a nickname within our app and use Facebook for achievements?


  • Good baseline size for an A* Search grid?

    - by Jo-Herman Haugholt
    I'm working on a grid-based game/prototype with a continuous open map, and am currently considering what size to make each segment. I've seen some articles mention different sizes, but most of them are really old, so I'm unsure how well they map to the platforms and performance demands common today. As for the project, it's a hybrid of 2D and 3D, but for pathfinding purposes the majority of searches would be approximately 2D. From a graphics perspective, the minimum segment size would be 64x64 in the XZ plane, to minimize loaded segments while ensuring full-screen coverage. I figure pathfinding would be an important indicator of the maximum practical size.


  • Movement prediction for non-shooters

    - by ShadowChaser
    I'm working on an isometric 2D game with moderate-scale multiplayer: approximately 20-30 players connected at once to a persistent server. I've had some difficulty getting a good movement prediction implementation in place.

    Physics/Movement

    The game doesn't have a true physics implementation, but uses the basic principles to implement movement. Rather than continually polling input, state changes (ie/ mouse down/up/move events) are used to change the state of the character entity the player is controlling. The player's direction (ie/ north-east) is combined with a constant speed and turned into a true 3D vector - the entity's velocity. In the main game loop, "Update" is called before "Draw". The update logic triggers a "physics update task" that tracks all entities with a non-zero velocity and uses very basic integration to change the entity's position. For example:

        entity.Position += entity.Velocity.Scale(ElapsedTime.Seconds)

    (where "Seconds" is a floating point value, but the same approach would work for millisecond integer values). The key point is that no interpolation is used for movement - the rudimentary physics engine has no concept of a "previous state" or "current state", only a position and velocity.

    State Change and Update Packets

    When the velocity of the character entity the player is controlling changes, a "move avatar" packet is sent to the server containing the entity's action type (stand, walk, run), direction (north-east), and current position. This is different from how 3D first-person games work. In a 3D game the velocity (direction) can change frame to frame as the player moves around. Sending every state change would effectively transmit a packet per frame, which would be too expensive. Instead, 3D games seem to ignore state changes and send "state update" packets on a fixed interval - say, every 80-150ms. Since speed and direction updates occur much less frequently in my game, I can get away with sending every state change. Although all of the physics simulations occur at the same speed and are deterministic, latency is still an issue. For that reason, I send out routine position update packets (similar to a 3D game) but much less frequently - right now every 250ms, but I suspect with good prediction I can easily push that towards 500ms. The biggest problem is that I've now deviated from the norm - all other documentation, guides, and samples online send routine updates and interpolate between the two states. That seems incompatible with my architecture, and I need to come up with a better movement prediction algorithm that is closer to a (very basic) "networked physics" architecture. The server then receives the packet and determines the player's speed from its movement type based on a script (Is the player able to run? Get the player's running speed). Once it has the speed, it combines it with the direction to get a vector - the entity's velocity. Some cheat detection and basic validation occur, and the entity on the server side is updated with the current velocity, direction, and position. Basic throttling is also performed to prevent players from flooding the server with movement requests. After updating its own entity, the server broadcasts an "avatar position update" packet to all other players within range. The position update packet is used to update the client-side physics simulations (world state) of the remote clients and perform prediction and lag compensation.
    Prediction and Lag Compensation

    As mentioned above, clients are authoritative for their own position. Except in cases of cheating or anomalies, the client's avatar will never be repositioned by the server. No extrapolation ("move now and correct later") is required for the client's avatar - what the player sees is correct. However, some sort of extrapolation or interpolation is required for all remote entities that are moving. Some sort of prediction and/or lag compensation is clearly required within the client's local simulation / physics engine.

    Problems

    I've been struggling with various algorithms, and have a number of questions and problems:

    1. Should I be extrapolating, interpolating, or both? My "gut feeling" is that I should be using pure extrapolation based on velocity. A state change is received by the client, the client computes a "predicted" velocity that compensates for lag, and the regular physics system does the rest. However, it feels at odds with all other sample code and articles - they all seem to store a number of states and perform interpolation without a physics engine.

    2. When a packet arrives, I've tried interpolating the packet's position with the packet's velocity over a fixed time period (say, 200ms). I then take the difference between the interpolated position and the current "error" position to compute a new vector, and place that on the entity instead of the velocity that was sent. However, the assumption is that another packet will arrive in that time interval, and it's incredibly difficult to "guess" when the next packet will arrive - especially since they don't all arrive on fixed intervals (ie/ state changes as well). Is the concept fundamentally flawed, or is it correct but in need of some fixes / adjustments?

    3. What happens when a remote player stops? I can immediately stop the entity, but it will be positioned in the "wrong" spot until it moves again. If I estimate a vector or try to interpolate, I have an issue because I don't store the previous state - the physics engine has no way to say "you need to stop after you reach position X". It simply understands a velocity, nothing more complex. I'm reluctant to add the "packet movement state" information to the entities or physics engine, since it violates basic design principles and bleeds network code across the rest of the game engine.

    4. What should happen when entities collide? There are three scenarios - the controlling player collides locally, two entities collide on the server during a position update, or a remote entity update collides on the local client. In all cases I'm uncertain how to handle the collision - aside from cheating, both states are "correct" but at different time periods. In the case of a remote entity it doesn't make sense to draw it walking through a wall, so I perform collision detection on the local client and cause it to "stop". Based on point #2 above, I might compute a "corrected vector" that continually tries to move the entity "through the wall", which will never succeed - the remote avatar is stuck there until the error gets too high and it "snaps" into position. How do games work around this?
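    For question 1 above, a sketch of what pure velocity-based extrapolation with error smoothing amounts to in this architecture (hypothetical names in the style of the snippets above; blendSeconds is a made-up tuning constant): on receiving a state packet for a remote entity, fold the positional error into the velocity for a short window rather than snapping, then let the existing physics integration run as usual:

        // Sketch: blend out the positional error over blendSeconds instead of snapping.
        Vector3 error = packet.Position - entity.Position;
        entity.Velocity = packet.Velocity + error.Scale(1.0f / blendSeconds);
        // once blendSeconds have elapsed, revert to entity.Velocity = packet.Velocity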

