Search Results

Search found 13494 results on 540 pages for 'board game'.


  • Dynamic Environment Creation

    - by Jack
    I was wondering how one creates a dynamic environment a la Minecraft; I'm thinking on a small-scale, abstracted, conceptual level. Specifically, if I think of the world as a three-dimensional array of block objects, how is it made so that large features such as oceans are created? The language isn't important, since I'm asking conceptually, but if it helps, I use C# or C++. Thanks for any help!
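
    A minimal conceptual sketch (in C#, with an illustrative stand-in noise function; real generators use Perlin or simplex noise): sum a few octaves of 2D noise into a height function and fill the 3D block array against it. Large features such as oceans fall out of the low-frequency octaves, wherever the summed height dips below sea level.

        // Sketch: fill a 3D block array from a layered height function.
        // Noise2 is a placeholder for a proper gradient-noise function.
        static float Noise2(float x, float z)
        {
            return (float)(Math.Sin(x * 0.8) * Math.Cos(z * 0.6)); // illustration only
        }

        static byte[,,] GenerateWorld(int sx, int sy, int sz, int seaLevel)
        {
            var blocks = new byte[sx, sy, sz]; // 0 = air, 1 = stone, 2 = water
            for (int x = 0; x < sx; x++)
                for (int z = 0; z < sz; z++)
                {
                    // low frequencies shape continents and ocean basins,
                    // higher frequencies add small bumps
                    float height = sy * 0.5f, freq = 0.01f, amp = sy * 0.25f;
                    for (int octave = 0; octave < 4; octave++)
                    {
                        height += Noise2(x * freq, z * freq) * amp;
                        freq *= 2f;
                        amp *= 0.5f;
                    }
                    for (int y = 0; y < sy; y++)
                        blocks[x, y, z] = y <= height ? (byte)1
                                        : y <= seaLevel ? (byte)2 : (byte)0;
                }
            return blocks;
        }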

    Read the article

  • HLSL Pixel Shader that does palette swap

    - by derrace
    I have implemented a simple pixel shader which can replace a particular colour in a sprite with another colour. It looks something like this:

        sampler input : register(s0);

        float4 PixelShaderFunction(float2 coords: TEXCOORD0) : COLOR0
        {
            float4 colour = tex2D(input, coords);
            if (colour.r == sourceColours[0].r &&
                colour.g == sourceColours[0].g &&
                colour.b == sourceColours[0].b)
                return targetColours[0];
            return colour;
        }

    What I would like to do is have the function take in two textures: a default table and a lookup table (both the same dimensions). Grab the current pixel, find the location XY (coords) of the matching RGB in the default table, and then substitute it with the colour found in the lookup table at XY. I have figured out how to pass the textures from C# into the function, but I am not sure how to find the coords in the default table by matching the colour. Could someone kindly assist? Thanks in advance.
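
    One hedged approach (the function name is mine, but Texture2D.GetData/SetData are standard XNA calls): searching the default table per pixel in the shader is expensive, so precompute an index texture on the CPU that stores, for each sprite pixel, the UV at which its colour sits in the default table. The shader then needs only two reads: fetch the UV from the index texture, then sample the lookup table there.

        // Bake an "index texture" for a sprite: each texel packs the UV of the
        // matching colour in the default table into its R/G channels.
        Texture2D BakeIndexTexture(GraphicsDevice device, Texture2D sprite, Texture2D defaultTable)
        {
            var tableData = new Color[defaultTable.Width * defaultTable.Height];
            defaultTable.GetData(tableData);

            // colour -> (u, v) centre of its texel in the default table
            var uvByColour = new Dictionary<Color, Vector2>();
            for (int i = 0; i < tableData.Length; i++)
            {
                int x = i % defaultTable.Width, y = i / defaultTable.Width;
                uvByColour[tableData[i]] = new Vector2(
                    (x + 0.5f) / defaultTable.Width,
                    (y + 0.5f) / defaultTable.Height);
            }

            var spriteData = new Color[sprite.Width * sprite.Height];
            sprite.GetData(spriteData);
            var indexData = new Color[spriteData.Length];
            for (int i = 0; i < spriteData.Length; i++)
            {
                Vector2 uv = uvByColour[spriteData[i]]; // assumes every sprite colour is in the table
                indexData[i] = new Color(uv.X, uv.Y, 0f, 1f); // pack UV into R/G
            }

            var indexTexture = new Texture2D(device, sprite.Width, sprite.Height);
            indexTexture.SetData(indexData);
            return indexTexture;
        }

    With 8-bit channels this limits the table to 256x256; beyond that you would pack the UV across more channels.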

    Read the article

  • Why aren't tangent space normal maps completely blue?

    - by seahorse
    Why aren't normal maps just blue? I would think that normal maps should be predominantly blue, because the Z component of the normal is represented by blue, and normals point out of the surface in the Z direction, so blue should dominate. By definition, the tangent-space normal is perpendicular to the surface: at any point the normal should point straight along Z (the blue direction), with no X (red direction) or Y (green direction) component. Thus the normal map (since it is a "normal map") should have the colour of the normal, which is just blue (R = x = 0, G = y = 0, B = z = 1), with no shades in between. But normal maps are not like that; they have gradients of shades in them. Why is this so?
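
    The flat-blue intuition only holds where the baked surface detail is itself flat: a normal map stores the normal of the detailed surface relative to the base surface, remapped from [-1, 1] to [0, 1] per channel. A small sketch of that encoding (XNA-style types) shows where the non-blue texels come from:

        // Tangent-space normals are remapped from [-1, 1] to [0, 1] per channel.
        static Color EncodeNormal(Vector3 n)
        {
            n = Vector3.Normalize(n);
            return new Color(n.X * 0.5f + 0.5f, n.Y * 0.5f + 0.5f, n.Z * 0.5f + 0.5f);
        }
        // EncodeNormal(new Vector3(0, 0, 1))        -> (128, 128, 255): the familiar pale blue
        // EncodeNormal(new Vector3(0.5f, 0, 0.87f)) -> reddish: a bump sloping toward +X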

    Read the article

  • How to apply Data Oriented Design with Object Oriented Programming?

    - by Pombal
    I've read lots of articles about Data Oriented Design (DOD) and I understand it, but I can't design an Object Oriented Programming (OOP) system with DOD in mind; I think my OOP education is blocking me. How should I think in order to mix the two? The objective is to have a nice OOP interface while using DOD behind the scenes. I saw this too, but it didn't help much: http://stackoverflow.com/questions/3872354/how-to-apply-dop-and-keep-a-nice-user-interface
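
    One common compromise, as a hedged sketch (the names here are illustrative, not from any particular engine): keep the hot data in contiguous arrays (the DOD part) and hand out lightweight handles that index into them (the OOP part). Client code sees objects with properties; the update loop sees tightly packed floats.

        class ParticleSystem
        {
            // struct-of-arrays storage: cache-friendly batch updates
            float[] posX, posY, velX, velY;
            int count;

            public ParticleSystem(int capacity)
            {
                posX = new float[capacity]; posY = new float[capacity];
                velX = new float[capacity]; velY = new float[capacity];
            }

            // hot loop touches only tightly packed data
            public void Update(float dt)
            {
                for (int i = 0; i < count; i++)
                {
                    posX[i] += velX[i] * dt;
                    posY[i] += velY[i] * dt;
                }
            }

            public Particle Spawn() { return new Particle(this, count++); }

            // the "nice OOP interface": a handle, not an object that owns data
            public struct Particle
            {
                readonly ParticleSystem s;
                readonly int i;
                internal Particle(ParticleSystem s, int i) { this.s = s; this.i = i; }
                public float X { get { return s.posX[i]; } set { s.posX[i] = value; } }
                public float Y { get { return s.posY[i]; } set { s.posY[i] = value; } }
            }
        }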

    Read the article

  • Drawing lines in 3D space

    - by DeadMG
    When attempting to draw a line in 3D space with D3DPT_LINELIST, Direct3D gives me an error about an invalid vertex declaration, saying that it cannot be converted to an FVF. I am using the same vertex declaration and shader/stream setup as for my D3DPT_TRIANGLELIST rendering, which works absolutely correctly. How can I use D3DPT_LINELIST to render some lines in 3D space? Edit: Oopsie, forgot my codeses. Here's my raw Draw call:

        D3DCALL(device->SetStreamSource(1, PerBoneBuffer.get(), 0, sizeof(PerInstanceData)));
        D3DCALL(device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1));
        D3DCALL(device->SetStreamSource(0, LineVerts, 0, sizeof(D3DXVECTOR3)));
        D3DCALL(device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | lines.size()));
        D3DCALL(device->SetIndices(LineIndices));
        PerInstanceData* data;
        std::vector<Wide::Render::Line*> lines_vec(lines.begin(), lines.end());
        D3DCALL(PerBoneBuffer->Lock(0, lines.size() * sizeof(PerInstanceData), reinterpret_cast<void**>(&data), D3DLOCK_DISCARD));
        std::for_each(lines.begin(), lines.end(), [&](Wide::Render::Line* ptr) {
            data->Color = D3DXColor(ptr->Colour);
            D3DXMATRIXA16 Translate, Scale, Rotate;
            D3DXMatrixTranslation(&Translate, ptr->Start.x, ptr->Start.y, ptr->Start.z);
            D3DXMatrixScaling(&Scale, ptr->Scale, 1, 1);
            D3DXMatrixRotationQuaternion(&Rotate, &D3DQuaternion(ptr->Rotation));
            data->World = Scale * Rotate * Translate;
        });
        D3DCALL(PerBoneBuffer->Unlock());
        D3DCALL(device->DrawIndexedPrimitive(D3DPRIMITIVETYPE::D3DPT_LINELIST, 0, 0, 2, 0, 1));

    Here's my vertex declaration:

        D3DVERTEXELEMENT9 BasicMeshVertices[] = {
            {0, 0,  D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},
            {1, 0,  D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0},
            {1, 16, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 1},
            {1, 32, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 2},
            {1, 48, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 3},
            {1, 64, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR, 0},
            D3DDECL_END()
        };

    The LineIndices are just 0, 1 and the LineVerts are just {0,0,0} and {1,0,0}, to represent a unit 3D line along the X axis.

    Read the article

  • Cannot compute wNear and wFar from projection matrix

    - by DeadMG
    I've got the following error from Direct3D when attempting to render in 3D:

        Direct3D9: (WARN) :Cannot compute WNear and WFar from the supplied projection matrix
        Direct3D9: (WARN) :Setting wNear to 0.0 and wFar to 1.0

    My projection matrix is as follows:

        D3DXMatrixPerspectiveFovLH(
            &Projection,
            D3DXToRadian(90),
            (float)GetDimensions().x / (float)GetDimensions().y,
            NearPlane,
            FarPlane
        );
        D3DCALL(device->SetTransform( D3DTS_PROJECTION, &Projection ));

    The NearPlane is 0.1f, the FarPlane is 40.0f, and the dimensions are 1920x1018. This code was working earlier, but I appear to have broken it and I'm not sure where the fault is. Previously I've only encountered this warning when NearPlane was 0, and Google hasn't suggested any other causes either. Any suggestions?

    Read the article

  • Xna Equivalent of Viewport.Unproject in a draw call as a matrix transformation

    - by Nick Crowther
    I am making a 2D sidescroller and I would like to draw my sprite in world space instead of client space, so I do not have to lock it to the center of the screen; when the camera stops, the sprite will walk off screen instead of being stuck at the center. In order to do this I wanted to make a transformation matrix that goes in my draw call. I have seen something like this: http://stackoverflow.com/questions/3570192/xna-viewport-projection-and-spritebatch I have seen Matrix.CreateOrthographic() used to go from world space to client space, but how would I go about using it to go from client space to world space? I was going to try putting the returns from the Viewport.Unproject method I have into a scale matrix, such as:

        blah = Matrix.CreateScale(unproject.X, unproject.Y, 0);

    however, that doesn't seem to work correctly. Here is what I'm calling in my draw method (where X is the coordinate my camera should follow):

        Vector3 test = screentoworld(X, graphics);
        var clienttoworld = Matrix.CreateScale(test.X, test.Y, 0);
        animationPlayer.Draw(theSpriteBatch, new Vector2(X.X, X.Y), false, false, 0, Color.White, new Vector2(1, 1), clienttoworld);

    And here is my unproject method:

        Vector3 screentoworld(Vector2 some, GraphicsDevice graphics)
        {
            Vector2 Position = new Vector2(some.X, some.Y);
            var project = Matrix.CreateOrthographic(5 * graphicsdevice.Viewport.Width, graphicsdevice.Viewport.Height, 0, 1);
            var viewMatrix = Matrix.CreateLookAt(
                new Vector3(0, 0, -4.3f),
                new Vector3(X.X, X.Y, 0),
                Vector3.Up);
            // I have also tried substituting (cam.Position.X, cam.Position.Y, 0) for the (0, 0, -4.3f)
            Vector3 nearSource = new Vector3(Position, 0f);
            Vector3 nearPoint = graphicsdevice.Viewport.Unproject(nearSource, project, viewMatrix, Matrix.Identity);
            return nearPoint;
        }

    Read the article

  • Retrieving model position after applying modeltransforms in XNA

    - by Glen Dekker
    For this method that the goingBeyond XNA tutorial provides, it would be really convenient if I could retrieve the new position of the model after I apply all the transforms to the mesh. I have edited the method a little for what I need. Does anyone know a way I can do this?

        public void DrawModel( Camera camera )
        {
            Matrix scaleY = Matrix.CreateScale(new Vector3(1, 2, 1));
            Matrix temp = Matrix.CreateScale(100f) * scaleY * rotationMatrix * translationMatrix
                        * Matrix.CreateRotationY(MathHelper.Pi / 6) * translationMatrix2;
            Matrix[] modelTransforms = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(modelTransforms);

            if (camera.getDistanceFromPlayer(position + position1) > 3000)
                return;

            foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.EnableDefaultLighting();
                    effect.World = modelTransforms[mesh.ParentBone.Index] * temp * worldMatrix;
                    effect.View = camera.viewMatrix;
                    effect.Projection = camera.projectionMatrix;
                }
                mesh.Draw();
            }
        }

    Read the article

  • Java - 2d Array Tile Map Collision

    - by Corey
    How would I go about making certain tiles in my array collide with my player? Say I want every number 2 in the array to collide. I am reading my array from a txt file, if that matters, and I am using the slick2d library. Here is my code if needed.

        public class Tiles {
            Image[] tiles = new Image[3];
            int[][] map = new int[500][500];
            Image grass, dirt, mound;
            SpriteSheet tileSheet;
            int tileWidth = 32;
            int tileHeight = 32;

            public void init() throws IOException, SlickException {
                tileSheet = new SpriteSheet("assets/tiles.png", tileWidth, tileHeight);
                grass = tileSheet.getSprite(0, 0);
                dirt = tileSheet.getSprite(7, 7);
                mound = tileSheet.getSprite(2, 6);
                tiles[0] = grass;
                tiles[1] = dirt;
                tiles[2] = mound;

                int x = 0, y = 0;
                BufferedReader in = new BufferedReader(new FileReader("assets/map.txt"));
                String line;
                while ((line = in.readLine()) != null) {
                    String[] values = line.split(",");
                    for (String str : values) {
                        int str_int = Integer.parseInt(str);
                        map[x][y] = str_int;
                        //System.out.print(map[x][y] + " ");
                        y = y + 1;
                    }
                    //System.out.println("");
                    x = x + 1;
                    y = 0;
                }
                in.close();
            }

            public void update() {
            }

            public void render(GameContainer gc) {
                for (int x = 0; x < 50; x++) {
                    for (int y = 0; y < 50; y++) {
                        int textureIndex = map[y][x];
                        Image texture = tiles[textureIndex];
                        texture.draw(x * tileWidth, y * tileHeight);
                    }
                }
            }
        }

    I tried something like this, where x and y are my player's position, but it never "collides":

        if (tiles.map[(int)x/32][(int)y/32] == 2) {
            System.out.println("Collided");
        }
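
    As a hedged sketch (written in C# for brevity; the Java is one-to-one): mark tile id 2 as solid and test the tiles under the corners of the player's bounding box. Note the index order: the loader above fills map[row][column] (first index vertical), exactly as render() reads it with map[y][x], so a check written map[x/32][y/32] samples the transposed tile, which would explain it never colliding.

        // Solid-tile test for a tile map stored map[row][column].
        static bool IsSolid(int[][] map, float worldX, float worldY)
        {
            int col = (int)(worldX / 32);
            int row = (int)(worldY / 32);
            return map[row][col] == 2;
        }

        // Check all four corners of the player's bounding box.
        static bool Collides(int[][] map, float x, float y, float w, float h)
        {
            return IsSolid(map, x, y)
                || IsSolid(map, x + w, y)
                || IsSolid(map, x, y + h)
                || IsSolid(map, x + w, y + h);
        }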

    Read the article

  • Normals vs Normal maps

    - by KaiserJohaan
    I am using the Assimp asset importer (http://assimp.sourceforge.net/lib_html/index.html) to parse 3D models. So far, I've simply pulled out the normal vectors which are defined for each vertex in my meshes. Yet I have also found various tutorials on normal maps. As I understand it, for normal maps the normal vectors are stored in each texel of a normal map, and you pull these out of the normal texture in the shader. Why are there two ways to get the normals, which one is considered best practice, and why?

    Read the article

  • Randomly placing items script not working - sometimes items spawn in walls, sometimes items spawn in weird locations

    - by Timothy Williams
    I'm trying to figure out a way to randomly spawn items throughout my level; however, I need to make sure they won't spawn inside another object (walls, etc.). Here's the code I'm currently using; it's based on Physics.CheckSphere() and runs in OnLevelWasLoaded(). It spawns the items perfectly fine, but sometimes items spawn partway into walls, and sometimes items spawn outside of the spawn box range (no clue why it does that).

        //This is what randomly generates all the items.
        void SpawnItems ()
        {
            if (Application.loadedLevelName == "Menu" || Application.loadedLevelName == "End Demo")
                return;

            //The bottom corner of the box we want to spawn items in.
            Vector3 spawnBoxBot = Vector3.zero;
            //Top corner.
            Vector3 spawnBoxTop = Vector3.zero;

            //If we're in the dungeon, set the box to the dungeon box and tell the items we want to spawn.
            if (Application.loadedLevelName == "dungeonScene")
            {
                spawnBoxBot = new Vector3 (8.857f, 0, 9.06f);
                spawnBoxTop = new Vector3 (-27.98f, 2.4f, -15);
                itemSpawn = dungeonSpawn;
            }

            //Spawn all the items.
            for (i = 0; i != itemSpawn.Length; i ++)
            {
                spawnedItem = null;
                //Zeroes out our random location
                Vector3 randomLocation = Vector3.zero;
                //Gets the meshfilter of the item we'll be spawning
                MeshFilter mf = itemSpawn[i].GetComponent<MeshFilter>();
                //Gets its bounds (see how big it is)
                Bounds bounds = mf.sharedMesh.bounds;
                //Get its radius
                float maxRadius = new Vector3 (bounds.extents.x + 10f, bounds.extents.y + 10f, bounds.extents.z + 10f).magnitude * 5f;
                //Set which layer is the no walls layer
                var NoWallsLayer = 1 << LayerMask.NameToLayer("NoWallsLayer");
                //Use that layer as your layermask.
                LayerMask layerMask = ~(1 << NoWallsLayer);

                //If we're in the dungeon, certain items need to spawn on certain halves.
                if (Application.loadedLevelName == "dungeonScene")
                {
                    if (itemSpawn[i].name == "key2" || itemSpawn[i].name == "teddyBearLW" || itemSpawn[i].name == "teddyBearLW_Admiration" || itemSpawn[i].name == "radio")
                        randomLocation = new Vector3(Random.Range(spawnBoxBot.x, -26.96f), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(spawnBoxBot.z, -2.141f));
                    else
                        randomLocation = new Vector3(Random.Range(spawnBoxBot.x, spawnBoxTop.x), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(-2.374f, spawnBoxTop.z));
                }
                //Otherwise just spawn them in the box.
                else
                    randomLocation = new Vector3(Random.Range(spawnBoxBot.x, spawnBoxTop.x), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(spawnBoxBot.z, spawnBoxTop.z));

                //This is what actually spawns the item. It checks to see if the spot where we want to instantiate it is clear, and if so it instantiates it. Otherwise we have to repeat the whole process again.
                if (Physics.CheckSphere(randomLocation, maxRadius, layerMask))
                    spawnedItem = Instantiate(itemSpawn[i], randomLocation, Random.rotation);
                else
                    i --;

                //If we spawned something, set its name to what it's supposed to be. Removes the (clone) addon.
                if (spawnedItem != null)
                    spawnedItem.name = itemSpawn[i].name;
            }
        }

    What I'm asking is whether you know what's going wrong with this code such that it spawns items in walls, or whether you could provide links/code/ideas for a better way to check if an item will spawn in a wall (some other function than Physics.CheckSphere). I've been working on this for a long time, and nothing I try seems to work. Any help is appreciated.
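
    One thing worth checking, as a hedged reading of the Unity docs: Physics.CheckSphere returns true when any collider overlaps the sphere, so the code above instantiates precisely when the spot is blocked and retries when it is clear, which would put items inside walls. The layer bit is also shifted twice (NoWallsLayer is already 1 << layer, then it is shifted again inside the mask). A corrected sketch of just that part:

        // CheckSphere is true when something overlaps the sphere,
        // so spawn only when it returns false.
        int noWallsLayer = LayerMask.NameToLayer("NoWallsLayer");
        int layerMask = ~(1 << noWallsLayer); // shift the layer index once

        if (!Physics.CheckSphere(randomLocation, maxRadius, layerMask))
            spawnedItem = (GameObject)Instantiate(itemSpawn[i], randomLocation, Random.rotation);
        else
            i--; // spot blocked: retry this item at a new random location

    Note too that maxRadius inflates the mesh bounds by +10 on each axis and then multiplies the magnitude by 5, which can easily dwarf the spawn box; it is worth printing to sanity-check.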

    Read the article

  • Early Z culling - Ogre

    - by teodron
    This question is about how one can enable this "pixel filter" within an Ogre-based app. Simply put, one can write two passes, the first without writing any colour values to the frame buffer:

        lighting off
        colour_write off
        shading flat

    The second pass is the one that employs heavy pixel shader computations, hence it would be really nice to get rid of those hidden surface patches and not process them pixel-wise. This approach works, except for one thing: objects with alpha, such as billboard trees, suffer in a peculiar way. From one side, they seem to capture the sky/background within their alpha region and ignore other trees/houses behind them, while viewed from the other side, they exhibit the desired behavior. To tackle the issue, I thought I could write a custom vertex shader in the first pass and offset the projected Z component of the vertex a little further away from its actual position, so that in the second pass the pixels of the objects closest to the camera would have to be recomputed correctly. This doesn't work at all: all surfaces are processed in the pixel shader and there is no performance gain. So, if anyone has done a similar trick with Ogre and alpha objects, kindly please help.

    Read the article

  • Shadows shimmer when camera moves

    - by Chad Layton
    I've implemented shadow maps in my simple block engine as an exercise. I'm using one directional light and using the view volume to create the shadow matrices. I'm experiencing some problems with the shadows shimmering when the camera moves, and I'd like to know if it's an issue with my implementation or just an issue with basic/naive shadow mapping itself. Here's a video: http://www.youtube.com/watch?v=vyprATt5BBg&feature=youtu.be

    Here's the code I use to create the shadow matrices. The commented-out code is my original attempt to perfectly fit the view frustum. You can also see my attempt to clamp movement to texels in the shadow map, which didn't seem to make any difference. Then I tried using a bounding sphere instead, also to no apparent effect.

        public void CreateViewProjectionTransformsToFit(Camera camera, out Matrix viewTransform, out Matrix projectionTransform, out Vector3 position)
        {
            BoundingSphere cameraViewFrustumBoundingSphere = BoundingSphere.CreateFromFrustum(camera.ViewFrustum);
            float lightNearPlaneDistance = 1.0f;
            Vector3 lookAt = cameraViewFrustumBoundingSphere.Center;
            float distanceFromLookAt = cameraViewFrustumBoundingSphere.Radius + lightNearPlaneDistance;
            Vector3 directionFromLookAt = -Direction * distanceFromLookAt;
            position = lookAt + directionFromLookAt;
            viewTransform = Matrix.CreateLookAt(position, lookAt, Vector3.Up);
            float lightFarPlaneDistance = distanceFromLookAt + cameraViewFrustumBoundingSphere.Radius;
            float diameter = cameraViewFrustumBoundingSphere.Radius * 2.0f;
            Matrix.CreateOrthographic(diameter, diameter, lightNearPlaneDistance, lightFarPlaneDistance, out projectionTransform);

            //Vector3 cameraViewFrustumCentroid = camera.ViewFrustum.GetCentroid();
            //position = cameraViewFrustumCentroid - (Direction * (camera.FarPlaneDistance - camera.NearPlaneDistance));
            //viewTransform = Matrix.CreateLookAt(position, cameraViewFrustumCentroid, Up);

            //Vector3[] cameraViewFrustumCornersWS = camera.ViewFrustum.GetCorners();
            //Vector3[] cameraViewFrustumCornersLS = new Vector3[8];
            //Vector3.Transform(cameraViewFrustumCornersWS, ref viewTransform, cameraViewFrustumCornersLS);
            //Vector3 min = cameraViewFrustumCornersLS[0];
            //Vector3 max = cameraViewFrustumCornersLS[0];
            //for (int i = 1; i < 8; i++)
            //{
            //    min = Vector3.Min(min, cameraViewFrustumCornersLS[i]);
            //    max = Vector3.Max(max, cameraViewFrustumCornersLS[i]);
            //}

            //// Clamp to nearest texel
            //float texelSize = 1.0f / Renderer.ShadowMapSize;
            //min.X -= min.X % texelSize;
            //min.Y -= min.Y % texelSize;
            //min.Z -= min.Z % texelSize;
            //max.X -= max.X % texelSize;
            //max.Y -= max.Y % texelSize;
            //max.Z -= max.Z % texelSize;

            //// We just use an orthographic projection matrix. The sun is so far away that its rays are essentially parallel.
            //Matrix.CreateOrthographicOffCenter(min.X, max.X, min.Y, max.Y, -max.Z, -min.Z, out projectionTransform);
        }

    And here's the relevant part of the shader:

        if (CastShadows)
        {
            float4 positionLightCS = mul(float4(position, 1.0f), LightViewProj);
            float2 texCoord = clipSpaceToScreen(positionLightCS) + 0.5f / ShadowMapSize;
            float shadowMapDepth = tex2D(ShadowMapSampler, texCoord).r;
            float distanceToLight = length(LightPosition - position);
            float bias = 0.2f;
            if (shadowMapDepth < (distanceToLight - bias))
            {
                return float4(0.0f, 0.0f, 0.0f, 0.0f);
            }
        }

    The shimmer is slightly better if I drastically reduce the view volume, but I think that's mostly just because the texels become smaller and it's harder to notice them flickering back and forth.

    I'd appreciate any insight; I'd very much like to understand what's going on before I try other techniques.
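
    For what it's worth, a hedged sketch of the standard fix from the cascaded-shadow-map literature: the shimmer usually comes from the light-space origin sliding by sub-texel amounts as the camera (and therefore the bounding-sphere centre) moves, so snap the look-at point to whole shadow-map texels in light space before building the matrices. Built on the bounding-sphere path above (Renderer.ShadowMapSize as in the question):

        // World-space size of one shadow-map texel under the ortho projection.
        float worldUnitsPerTexel = diameter / Renderer.ShadowMapSize;

        // Move the look-at point into light space, round it to the texel grid,
        // then bring it back and rebuild the view matrix from the snapped point.
        Vector3 centerLS = Vector3.Transform(lookAt, viewTransform);
        centerLS.X -= centerLS.X % worldUnitsPerTexel;
        centerLS.Y -= centerLS.Y % worldUnitsPerTexel;
        Vector3 snappedLookAt = Vector3.Transform(centerLS, Matrix.Invert(viewTransform));

        position = snappedLookAt + directionFromLookAt;
        viewTransform = Matrix.CreateLookAt(position, snappedLookAt, Vector3.Up);

    The earlier commented-out attempt clamps min/max with texelSize = 1 / ShadowMapSize, i.e. in normalized units rather than world units per texel, which would explain why it had no visible effect. The bounding sphere matters here too: snapping only works while the ortho width stays constant from frame to frame, which the sphere's fixed diameter provides.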

    Read the article

  • Algorithm for waypoint path following?

    - by Thierry Savard Saucier
    I have a world map with different cities on it. The player can choose a city from a menu, or click on an available city on the world map, and the toon should walk over there. I want him to follow a predefined path. Let's say our hero is in city 1. He clicks on city 4. I want him to follow the path to city 2 and from there to city 4. I was handling this easily with arrow movement (left, right, top, bottom) since it's a single check. Now I'm not sure how I should do this. Should I loop through each possible path and check which one leads to the destination fastest? And if I do, how do I avoid running in circles forever with cities 1-5-2?
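
    For a handful of cities, a plain breadth-first search over the city graph is enough, and the visited check is exactly what prevents walking circles like 1-5-2-1 forever. A hedged C# sketch (assumes the goal is reachable):

        // neighbours[city] lists the cities directly connected to it.
        // Returns the chain to walk, e.g. FindPath(n, 1, 4) -> [1, 2, 4].
        static List<int> FindPath(Dictionary<int, List<int>> neighbours, int start, int goal)
        {
            var cameFrom = new Dictionary<int, int> { { start, start } };
            var frontier = new Queue<int>();
            frontier.Enqueue(start);

            while (frontier.Count > 0)
            {
                int city = frontier.Dequeue();
                if (city == goal) break;
                foreach (int next in neighbours[city])
                {
                    if (cameFrom.ContainsKey(next)) continue; // visited: no cycles
                    cameFrom[next] = city;
                    frontier.Enqueue(next);
                }
            }

            // walk back from the goal to reconstruct the route
            var path = new List<int>();
            for (int c = goal; c != start; c = cameFrom[c]) path.Add(c);
            path.Add(start);
            path.Reverse();
            return path;
        }

    BFS finds the fewest-hops route; if the legs have different lengths or costs, the same structure with a priority queue becomes Dijkstra or A*.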

    Read the article

  • Smooth Camera Rotation around 90 degrees

    - by Nicholas
    I'm developing a third-person 3D platformer in XNA. My problem is when I try to rotate the camera around the player. I would like to rotate (and animate) the camera 90 degrees around the player, so the camera should rotate until it has reached 90 degrees from the starting position. I cannot figure out how to keep track of the rotation, and when the rotation has made the full 90 degrees. Currently my camera's update is:

        public void Update(Vector3 playerPosition)
        {
            if (rotateCamera)
            {
                position = Vector3.Transform(position - playerPosition, Matrix.CreateRotationY(0.1f)) + playerPosition;
            }
            this.viewMatrix = Matrix.CreateLookAt(position, playerPosition, Vector3.Up);
        }

    The initial position of the camera is set in the constructor. The "rotateCamera" bool is set on keypress. Thanks for the help in advance. Cheers.
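
    A hedged sketch of one way to track it: accumulate the angle applied so far and clamp the final step, so the camera lands exactly on 90 degrees (field names beyond the question's own are mine):

        float rotated = 0f;           // radians applied in the current turn
        const float turnSpeed = 2f;   // radians per second, tune to taste

        public void Update(Vector3 playerPosition, float elapsedSeconds)
        {
            if (rotateCamera)
            {
                // never step past the remaining angle
                float step = MathHelper.Min(turnSpeed * elapsedSeconds,
                                            MathHelper.PiOver2 - rotated);
                position = Vector3.Transform(position - playerPosition,
                               Matrix.CreateRotationY(step)) + playerPosition;
                rotated += step;
                if (rotated >= MathHelper.PiOver2)
                {
                    rotated = 0f;
                    rotateCamera = false; // the 90-degree turn is complete
                }
            }
            this.viewMatrix = Matrix.CreateLookAt(position, playerPosition, Vector3.Up);
        }

    Using elapsed time rather than a fixed 0.1f per frame also makes the animation speed independent of frame rate.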

    Read the article

  • changing body type without change of center of mass

    - by philipp
    I have a Box2D project with some bodies in it, which move around without any user interaction. But if the user selects one of the bodies, he should be able to move it around just like he wants to. To keep it short, I want to change the type of the body to "kinematic" while the user controls it and back to "dynamic" afterwards. If I do so, the rotation center of the body changes with the change of the type. How can I reset this? The body's fixture is created from a single b2PolygonShape, with its vertices set via SetAsArray(). Greetings, philipp

    EDIT: So I looked around about setting the local center of bodies, which brought me to this solution:

        var md = new b2MassData();
        this.body.GetMassData( md );
        this.body.SetType( b2Body.b2_kinematicBody );
        this.body.SetMassData( md );

    That did not work, so I had a look at the source and found that SetMassData() always returns if the body is not "dynamic". So I tried this:

        var md = new b2MassData();
        this._body.GetMassData( md );
        this._body.SetType( b2Body.b2_kinematicBody );
        this._body.m_sweep.localCenter.Set( md.center.x, md.center.y );

    This actually modifies the private data of the body. But it works and no errors appear. Can I really do this without the risk of breaking the application? In other words, under which circumstances might this solution cause errors? N.b.: I am using box2dweb of the latest release. Greetings, philipp

    Read the article

  • Missing z-axis rotation for transforming between two vectors

    - by Steve Baughman
    I'm trying to rotate a cube so that it's facing up, but am getting hung up on the final implementation details. It now reliably rotates the x and y axes to the correct side, but the z-axis is never rotating (see the before and after images referenced below). When I'm using the code below I always get '0' for my rotationVector.z. What am I missing here?

        // Define lookAt vector
        lookAtVector = GLKVector3Make(0, 0, 1);

        // Define axes vectors
        axes[0] = GLKVector3Make(0, 0, 1);
        axes[1] = GLKVector3Make(-1, 0, 0);
        axes[2] = GLKVector3Make(0, 1, 0);
        axes[3] = GLKVector3Make(1, 0, 0);
        axes[4] = GLKVector3Make(0, -1, 0);
        axes[5] = GLKVector3Make(0, 0, -1);

        CGFloat highest_dot = -1.0;
        GLKVector3 closest_axis;
        for (int i = 0; i < 6; i++) {
            // multiply cube's axes by existing matrix
            GLKVector3 axis = GLKMatrix4MultiplyVector3(matrix, axes[i]);
            CGFloat dot = GLKVector3DotProduct(axis, lookAtVector);
            if (dot > highest_dot) {
                closest_axis = axis;
                highest_dot = dot;
            }
        }
        GLKVector3 rotationVector = GLKVector3CrossProduct(closest_axis, lookAtVector);

        // Get angle between vectors
        CGFloat angle = atan2(GLKVector3Length(rotationVector), GLKVector3DotProduct(closest_axis, lookAtVector));

        // normalize the rotation vector
        rotationVector = GLKVector3Normalize(rotationVector);

        // Create transform
        CATransform3D rotationTransform = CATransform3DMakeRotation(angle, rotationVector.x, rotationVector.y, rotationVector.z);

        // add rotation transform to existing transformation
        baseTransform = CATransform3DConcat(baseTransform, rotationTransform);
        return baseTransform;

    [Images: the cube before and after the 3D rotation.]

    Implementation based on this post.

    Read the article

  • How can I implement 2D cel shading in XNA?

    - by Artii
    So I was just wondering how to give a scene I am rendering a hand-drawn look (like, say, Crayon Physics). I don't really want to preprocess the sprites and was thinking of using a shader. Cel shading supplies the effect I want to achieve, but I am only aware of 3D instances of it. So I wanted to ask if anyone knows a way to get this effect in 2D, or if cel shading would work just as well on 2D scenes.
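
    Cel shading is not inherently 3D: the banding step is just a quantization of brightness, which works on any colour source, including a 2D sprite sampled in a pixel shader. A hedged sketch of the math in C# (a shader would do the same per texel):

        // Collapse a continuous 0..1 luminance into a few flat bands.
        static float CelBand(float luminance, int bands)
        {
            float band = Math.Min(bands - 1, (float)Math.Floor(luminance * bands));
            return band / (bands - 1);
        }
        // CelBand(0.30f, 4) == CelBand(0.45f, 4): nearby tones flatten together,
        // which is what reads as flat, inked shading.

    The hand-drawn Crayon Physics look, though, comes mostly from the textures and wobbly outlines rather than lighting bands, so an edge-detection pass (darkening texels whose neighbours differ sharply) may get closer than cel banding alone.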

    Read the article

  • How can I pass an array of floats to the fragment shader using textures?

    - by James
    I want to map out a 2D array of depth elements for the fragment shader to use to check depth against, to create shadows. I want to be able to copy a float array into the GPU, but using large uniform arrays causes segfaults in OpenGL, so that is not an option. I tried texturing, but the best I got was to use GL_DEPTH_COMPONENT:

        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, smap);

    This doesn't work because it stores depth components (0.0 - 1.0), which I don't want, because I have no idea how to calculate them from the depth value produced by the light source's MVP matrix multiplied by the coordinate of each vertex. Is there any way to store and access large 2D arrays of floats in OpenGL?
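
    One hedged option (requires GL 3.0 or the texture-float extensions): use a single-channel floating-point internal format such as GL_R32F, which stores one unclamped float per texel, instead of a normalized format. Shown here in OpenTK-style C# bindings; the calls map one-to-one onto the C API:

        // Upload a 512x512 array of raw floats; R32f is not clamped to 0..1.
        float[] smap = new float[512 * 512]; // your precomputed depth values

        int tex = GL.GenTexture();
        GL.BindTexture(TextureTarget.Texture2D, tex);
        GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.R32f,
                      512, 512, 0, PixelFormat.Red, PixelType.Float, smap);
        // no mipmaps for a data texture; sample with nearest filtering
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter,
                        (int)TextureMinFilter.Nearest);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter,
                        (int)TextureMagFilter.Nearest);

    In the fragment shader, texture(sampler, uv).r then returns the raw float, so you can write light-space distances directly and compare them without ever mapping through 0.0-1.0 depth.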

    Read the article

  • First Person Camera strafing at angle

    - by Linkandzelda
    I have a simple camera class working in DirectX 11, allowing moving forward and rotating left and right. I'm trying to implement strafing into it but having some problems. The strafing works when there's no camera rotation, so when the camera starts at 0, 0, 0. But after rotating the camera in either direction it seems to strafe at an angle, or inverted, or just some odd stuff. Here is a video uploaded to Dropbox showing this behavior: https://dl.dropboxusercontent.com/u/2873587/IncorrectStrafing.mp4

    And here is my camera class. I have a hunch that it's related to the calculation for camera position. I tried various different calculations in strafe and they all seem to follow the same pattern and same behavior. Also, m_camera_rotation represents the Y rotation, as pitching isn't implemented yet.

        #include "camera.h"

        camera::camera(float x, float y, float z, float initial_rotation)
        {
            m_x = x;
            m_y = y;
            m_z = z;
            m_camera_rotation = initial_rotation;
            updateDXZ();
        }

        camera::~camera(void)
        {
        }

        void camera::updateDXZ()
        {
            m_dx = sin(m_camera_rotation * (XM_PI/180.0));
            m_dz = cos(m_camera_rotation * (XM_PI/180.0));
        }

        void camera::Rotate(float amount)
        {
            m_camera_rotation += amount;
            updateDXZ();
        }

        void camera::Forward(float step)
        {
            m_x += step * m_dx;
            m_z += step * m_dz;
        }

        void camera::strafe(float amount)
        {
            float yaw = (XM_PI/180.0) * m_camera_rotation;
            m_x += cosf( yaw ) * amount;
            m_z += sinf( yaw ) * amount;
        }

        XMMATRIX camera::getViewMatrix()
        {
            updatePosition();
            return XMMatrixLookAtLH(m_position, m_lookat, m_up);
        }

        void camera::updatePosition()
        {
            m_position = XMVectorSet(m_x, m_y, m_z, 0.0);
            m_lookat = XMVectorSet(m_x + m_dx, m_y, m_z + m_dz, 0.0);
            m_up = XMVectorSet(0.0, 1.0, 0.0, 0.0);
        }
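
    For reference, a hedged sketch of the direction math (C# syntax, but it is language-neutral): with forward = (sin yaw, cos yaw) as in updateDXZ(), the right-hand strafe vector is forward rotated a quarter turn, i.e. (cos yaw, -sin yaw). The strafe above uses +sin for z, which agrees with that only at zero rotation and drifts toward the angled, inverted behaviour in the video as the camera turns.

        // forward = (sin yaw, 0, cos yaw); right = (cos yaw, 0, -sin yaw)
        void Strafe(float amount)
        {
            float yaw = rotationDegrees * (float)Math.PI / 180f; // field names are illustrative
            x += (float)Math.Cos(yaw) * amount;
            z -= (float)Math.Sin(yaw) * amount; // note the minus: the original adds here
        }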

    Read the article

  • Pathfinding and BSP with Box2D

    - by Amplify91
    I'm looking into implementing AI in my 2D side-scrolling platformer, using algorithms such as A*. For many kinds of pathfinding we need some sort of grid, a system of nodes, or polygon areas. My problem is that I am using Box2D for physics, and I am not sure how best to create a structure that my AI can use, short of placing individual nodes manually (something I really want to avoid) and using some sort of steering behavior. My level design is tile-based, with each tile being about half the height/width of my main character. The tiles are not all square (some are sloped). I'd like to have a system that can see what the terrain looks like for pathfinding and also keep track of the positions of other actors such as enemies. I'd like to avoid directly placing any nodes into my level design except for possible endpoints or goals. This question is related: How do you do AI path following within a 2d physics engine like farseer/box2d? But it doesn't specify what kind of structure I could use instead of a list of nodes. I'm looking for some kind of grid or type of BSP that I can query for algorithms like A*.

    Read the article

  • Producing a smooth mesh from density cloud and marching cubes

    - by Wardy
    Based on my results from this question, I decided to build myself a 3D noise map containing float values in place of my existing boolean point values. The effect I'm trying to produce is something like this, rather than typical rolling hills; that should explain the "missing cubes" in the image below. If I render my density map in normal "minecraft mode" (1 block per point in the density map), varying the size of the cube based on the value in my density map (floats in the range 0 to 1), I get something like this:

    I'm now happy that I can produce a density map for the marching cubes algorithm (which will need a little tweaking), but for some reason when I run it through my implementation it's not producing what I expect. My problem is that I'm getting something like the first image in this answer to my previous question, when I want to achieve the effect in the second image.

    Upon further investigation I can't see how marching cubes does the "move vertex along the edge" type logic (i.e. the difference between the two images in my previous link). I see that it does do some interpolation, but I'm not convinced I have the correct understanding of what it should do, because the code in question appears to give the same result regardless of whether I use boolean or float values. I took the code from here, which is a C# implementation of marching cubes, but instead of using the MarchingCubesPrimitive I modified it to accept an object of type IDrawable, containing lists for the various collections (vertices, normals, UVs, indices); the logic was otherwise untouched. My understanding is that given a very low isovalue, the accuracy level of the surface being rendered should increase; in short, "fewer 45-degree slopes, more rolling hills" in the mesh output. However this isn't what I'm seeing. Have I missed something, or is the implementation flawed and in need of fixing?

    EDIT: A little more detail on what I am seeing when I "marching cube" the data. First, ignore the fact that the meshes created by the chunks don't "connect" (I'll probably raise another question about this later). Then look at the shaping of the island: it's too square. From the voxels rendered as boxes you get the impression there's a clean, soft, gradual hill, and yet in the meshed image there are sharp falling edges even in the most central areas, where the gradient in the first image looks the most smooth. The data is regenerated each time I run this, so no two islands come out the same, and it's purely random, not based on noise; but still, how can it look so smooth in one image and so unsmooth in the other?
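
    For the record, a hedged sketch of the "move vertex along the edge" step as it is usually written; this is also why boolean data cannot produce rolling hills. With boolean voxels the two corner densities are only ever 0 or 1, so the interpolant t is constant and every vertex lands at the same fraction of the edge, exactly as if no interpolation existed:

        // Place the vertex where the density crosses the isovalue along an edge.
        // p1, p2: corner positions; v1, v2: their density samples.
        static Vector3 VertexOnEdge(Vector3 p1, Vector3 p2, float v1, float v2, float iso)
        {
            float t = (iso - v1) / (v2 - v1); // 0..1 along the edge
            return p1 + t * (p2 - p1);
        }

    So if the meshes look identical for boolean and float data, it is worth checking that the real float densities (not a thresholded copy) actually reach this function, and that the isovalue sits inside the range the densities span.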

    Read the article

  • Automated texture mapping

    - by brandon
    I have a set of seamless tiling textures. I want to be able to take an arbitrary model and create a UV map with these properties:

    - No stretching (all textures tile appropriately, so there is no stretching or shearing of the texture)
    - The textures display on the correct axis relative to the model they map to (if you look at the example, you can see some of the letters on the front are tilted; the y axis of the texture should match up with the y axis of the object; some other faces have upside-down letters too)
    - The texture is as continuous as possible on the surface of the model (if two faces are adjacent, the texture continues on the adjacent face where it left off)
    - The model is closed (all faces are completely enclosed by other faces)

    A few notes. This mapping will occur before triangulation. I realize there are ways to do this by hand, and it's probably a hard problem to automatically map textures in general, but since these textures are seamless and I just need uniform coverage, it seems like an easier problem. I'm looking for an algorithmic approach to this that I can apply in general, not a tool that does it. What approach would work for this? Is there an existing one? (I assume so.)
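
    Not a complete answer, but a hedged sketch of the usual starting point for exactly these constraints: box (tri-planar) projection, which assigns UVs from the vertex position along the face's dominant normal axis. It gives no stretching and axis-correct orientation by construction, and continuity holds across adjacent faces that share a projection axis; the remaining hard part is the seams where the dominant axis changes.

        // Box/tri-planar UVs: project the position onto the two axes
        // perpendicular to the dominant component of the face normal.
        static Vector2 BoxUV(Vector3 position, Vector3 faceNormal, float tileSize)
        {
            float ax = Math.Abs(faceNormal.X);
            float ay = Math.Abs(faceNormal.Y);
            float az = Math.Abs(faceNormal.Z);

            Vector2 uv;
            if (ax >= ay && ax >= az)      uv = new Vector2(position.Z, position.Y); // X-facing
            else if (ay >= ax && ay >= az) uv = new Vector2(position.X, position.Z); // Y-facing
            else                           uv = new Vector2(position.X, position.Y); // Z-facing

            return uv / tileSize; // seamless tiles hide the wrap-around
        }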

    Read the article

  • Image with FadeIn effect blinks when added to scene

    - by Ef Es
    I am trying to add an image to the scene; it should be added invisible, fade in, and then be deleted when the effect finishes. My problem is that the images blink once when they are added to the scene, then do the intended effect. My best guess is that when they are added, they show on the scene for a split second before starting the animation. I thought of making them invisible for a split second before activating them, but I am not sure how to code it.

        const bool Sunbeams::add()
        {
            const CCSize kSceenSize = CCDirector::sharedDirector()->getWinSize();
            const int nRayType = random( m_kRays.size());
            const CCPoint kPosition( random( static_cast < int >( kSceenSize.width)), 0.0f);
            const float fDuration = random( m_fDurationVariance) + m_fDurationMin;

            CCSprite* pkLightBeam = CCSprite::spriteWithTexture( m_kRays[nRayType]);
            if ( !pkLightBeam)
            {
                msg::debug( "Sunbeams::add", "Failed to create sprite from ray '%d'!\n", m_kRays[nRayType]);
                return false;
            }

            pkLightBeam->setAnchorPoint( CCPointZero);
            pkLightBeam->setPosition( kPosition);
            m_kActiveBeams.push_back( pkLightBeam);
            CCDirector::sharedDirector()->getRunningScene()->addChild( pkLightBeam);

            CCActionInterval* pkAction = CCFadeIn::actionWithDuration( fDuration);
            CCActionInterval* pkActionBack = pkAction->reverse();
            pkLightBeam->runAction( CCSequence::actions( pkAction, pkActionBack, 0));

            return true;
        }

    Read the article

  • Unity: Assigning a key to perform an action in the inspector

    - by Marc Pilgaard
    I am trying to write a simple piece of code in JavaScript where a button toggles the activation of a shield, by dragging in a prefab with Resources.load("ActivateShieldPreFab") and destroying it again (I haven't implemented that yet). I wish to assign this button through the inspector, so I have created a string variable which appears as intended in the inspector. However, it doesn't seem to register the inspector input, even though I changed the value through the inspector. It only produces the error: "Input Key named: is unknown". When the button name is assigned within the code, there are no issues. Code as follows:

        var ShieldOn = false;
        var stringbutton : String;

        function Start(){
        }

        function Update () {
            if(Input.GetKey(stringbutton) && ShieldOn != true)
            {
                Instantiate(Resources.load("ActivateShieldPreFab"), Vector3 (0, 0, 0), Quaternion.identity);
                ShieldOn = true;
            }
        }

    Read the article
