Search Results

Search found 33291 results on 1332 pages for 'development environment'.


  • Xcode workspace with Unity3D as a sub-project?

    - by Di Wu
    Let's say we're developing a 2D game with Cocos2d-iPhone, UIKit, and Core Animation, but we're also considering leveraging the 3D capabilities of Unity 3D. Is it possible to add the Unity3D-generated Xcode project as a sub-project in the workspace and expose the 3D UI element as some kind of UIView subclass, so that the native UIKit and Core Animation code could use it without having to mess with its underlying Unity3D implementation?

    Read the article

  • morph a sphere to a cube and a cube to a sphere with GLSL

    - by nkint
    Hi, I'm getting started with GLSL in Quartz Composer. I have a patch with a particle system in which each particle is mapped onto a sphere with a blend value: with blend = 0 the particles are in random positions, with blend = 1 they are on the sphere. The code is here:

        vec3 sphere(vec2 domain) {
            vec3 range;
            range.x = radius * cos(domain.y) * sin(domain.x);
            range.y = radius * sin(domain.y) * sin(domain.x);
            range.z = radius * cos(domain.x);
            return range;
        }

        // in main:
        normal = sphere(p0) * blend + gl_Normal * (1.0 - blend);

    I'd like the particles to be on a cube when blend = 0. I've tried to find a parametric equation for the cube, but I can't figure one out. Maybe that is not the right way?
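
    One possible direction, sketched here in C# (XNA-style math that translates line for line to GLSL; the function name and signature are illustrative, not from the question): rather than hunting for a trigonometric parametric equation, reuse the sphere mapping and project each point onto the unit cube by dividing by its largest absolute component.

        using System;
        using Microsoft.Xna.Framework;

        static class CubeMapping
        {
            // Same spherical mapping as the question's sphere(), then pushed
            // outward: dividing a unit direction by its largest absolute
            // component lands the point on the surface of the cube.
            public static Vector3 Cube(Vector2 domain, float radius)
            {
                Vector3 p = new Vector3(
                    (float)(Math.Cos(domain.Y) * Math.Sin(domain.X)),
                    (float)(Math.Sin(domain.Y) * Math.Sin(domain.X)),
                    (float)Math.Cos(domain.X));
                float m = Math.Max(Math.Abs(p.X), Math.Max(Math.Abs(p.Y), Math.Abs(p.Z)));
                return p * (radius / m);
            }
        }

    In main, cube(p0) * (1.0 - blend) + sphere(p0) * blend then puts the particles on the cube at blend = 0 and on the sphere at blend = 1.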

    Read the article

  • Voxel terrain rendering with marching cubes

    - by JavaJosh94
    I was working on procedurally generated terrain using normal cube-ish voxels (like Minecraft), but then I read about marching cubes and decided to switch to those. I managed to create a working marching cubes class, cycled through the densities, and everything in it seemed to work, so I went on to actual terrain generation. I'm using XNA (C#) and a ported libnoise library to generate noise for the terrain generator. But instead of smooth terrain, I get a 64x64 chunk (I specified 64 but can change it) of seemingly random marching cubes using different triangles. This is the code I'm using to generate a "chunk":

        public MarchingCube[,,] getTerrainChunk(int size, float dMultiplyer, int stepsize)
        {
            MarchingCube[,,] temp = new MarchingCube[size / stepsize, size / stepsize, size / stepsize];
            for (int x = 0; x < size; x += stepsize)
            {
                for (int y = 0; y < size; y += stepsize)
                {
                    for (int z = 0; z < size; z += stepsize)
                    {
                        float[] densities =
                        {
                            (float)terrain.GetValue(x, y, z) * dMultiplyer,
                            (float)terrain.GetValue(x, y + stepsize, z) * dMultiplyer,
                            (float)terrain.GetValue(x + stepsize, y + stepsize, z) * dMultiplyer,
                            (float)terrain.GetValue(x + stepsize, y, z) * dMultiplyer,
                            (float)terrain.GetValue(x, y, z + stepsize) * dMultiplyer,
                            (float)terrain.GetValue(x, y + stepsize, z + stepsize) * dMultiplyer,
                            (float)terrain.GetValue(x + stepsize, y + stepsize, z + stepsize) * dMultiplyer,
                            (float)terrain.GetValue(x + stepsize, y, z + stepsize) * dMultiplyer
                        };
                        Vector3[] corners =
                        {
                            new Vector3(x, y, z),
                            new Vector3(x, y + stepsize, z),
                            new Vector3(x + stepsize, y + stepsize, z),
                            new Vector3(x + stepsize, y, z),
                            new Vector3(x, y, z + stepsize),
                            new Vector3(x, y + stepsize, z + stepsize),
                            new Vector3(x + stepsize, y + stepsize, z + stepsize),
                            new Vector3(x + stepsize, y, z + stepsize)
                        };
                        if (x == 0 && y == 0 && z == 0)
                        {
                            temp[x / stepsize, y / stepsize, z / stepsize] = new MarchingCube(densities, corners, device);
                        }
                        temp[x / stepsize, y / stepsize, z / stepsize] = new MarchingCube(densities, corners);
                    }
                }
            }
            return temp;
        }

    (terrain is Perlin noise generated using libnoise.) I'm sure there's probably an easy solution to this, but I've been drawing a blank for the past hour. I'm wondering whether the problem is how I'm reading the data from the noise, whether I'm generating the noise wrong, or whether the noise is just not suited to this kind of generation. If I'm reading it wrong, does anyone know the right way? The answers on Google were somewhat ambiguous, but I'm going to keep searching. Thanks in advance!
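
    A hedged guess at the cause, assuming the ported libnoise behaves like standard gradient noise (this is an assumption, not something visible in the code): Perlin-style noise evaluates to zero at every integer lattice point, and the loop samples GetValue only at whole-number coordinates, so all eight densities come out nearly identical and the extracted surface degenerates into near-random cubes. The usual fix is to sample at a fractional frequency; the helper and constant below are illustrative.

        // Hypothetical helper: sample the noise between lattice points.
        // terrain is the libnoise module from the question.
        float Density(int x, int y, int z, float dMultiplyer)
        {
            const float frequency = 0.05f; // illustrative; tune for feature size
            return (float)terrain.GetValue(x * frequency, y * frequency, z * frequency)
                * dMultiplyer;
        }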

    Read the article

  • Camera Projection back Into 3D world, offset error

    - by Anthony
    I'm using XNA to simulate a robot in a 3D world and then do image analysis on what the camera sees. The camera looks down in front of the direction the robot is heading, and the robot detects white pixels. I'm trying to take the white pixels it finds and project them back into the 3D world, so I can check whether it is actually detecting the correct pixels. I almost have it working, but there is an offset between where the white is in the world and where I put my orange triangles (which represent what the robot thinks is white).

        /// <summary>
        /// Takes a bool map and makes vertex positions based on the map.
        /// </summary>
        /// <param name="c">The bool map</param>
        private void ProjectBoolMapOnGroundAnthony2(bool[,] c)
        {
            float triangleSize = 0.04f;

            // Point of interest in the world (W) coordinate system.
            Vector3 pointOfInterest_W = Vector3.Zero;

            // Point of interest in the robot (R) coordinate system.
            Vector3 pointOfInterest_R = Vector3.Zero;

            // alpha is the angle from the robot camera to where it is looking in the center.
            //double alpha = Math.Atan(1.8f / 1);

            // View matrix determined by the position, target, and up direction.
            Matrix View = ((SimulationMain)Game).mainRobot.robotCameraView.View;

            // Projection matrix determined by the field of view (Pi/4), aspect ratio,
            // nearest visible plane (1), and farthest visible plane (1200).
            Matrix Projection = ((SimulationMain)Game).mainRobot.robotCameraView.Projection;

            // World matrix representing how real-world coordinates differ from those
            // of the camera rendering.
            Matrix World = ((SimulationMain)Game).mainRobot.robotCameraView.World;

            Plane groundPlan = new Plane(Vector3.UnitZ, 0.0f);

            for (int x = 0; x < this.screenWidth; x++)
            {
                for (int y = 0; y < this.screenHeight; )
                {
                    if (c[x, y] == true && this.count1D < 62000)
                    {
                        int j = 1;
                        Vector3 nearPlanePoint = Game.GraphicsDevice.Viewport.Unproject(
                            new Vector3(x, y, 0), Projection, View, World);
                        Vector3 farPlanePoint = Game.GraphicsDevice.Viewport.Unproject(
                            new Vector3(x, y, 1), Projection, View, World);

                        Ray ray = new Ray(nearPlanePoint, farPlanePoint);
                        pointOfInterest_W = ray.Position + ray.Direction * (float)ray.Intersects(groundPlan);

                        this.vertexArray2[this.count1D + 0].Position.X = pointOfInterest_W.X - triangleSize;
                        this.vertexArray2[this.count1D + 0].Position.Y = pointOfInterest_W.Y - triangleSize * j;
                        this.vertexArray2[this.count1D + 0].Position.Z = pointOfInterest_W.Z;
                        this.vertexArray2[this.count1D + 0].Color = Color.DarkOrange;

                        this.vertexArray2[this.count1D + 1].Position.X = pointOfInterest_W.X;
                        this.vertexArray2[this.count1D + 1].Position.Y = pointOfInterest_W.Y + triangleSize * j;
                        this.vertexArray2[this.count1D + 1].Position.Z = pointOfInterest_W.Z;
                        this.vertexArray2[this.count1D + 1].Color = Color.Red;

                        this.vertexArray2[this.count1D + 2].Position.X = pointOfInterest_W.X + triangleSize;
                        this.vertexArray2[this.count1D + 2].Position.Y = pointOfInterest_W.Y - triangleSize * j;
                        this.vertexArray2[this.count1D + 2].Position.Z = pointOfInterest_W.Z;
                        this.vertexArray2[this.count1D + 2].Color = Color.Orange;

                        this.count1D += 3;
                        y += j;
                    }
                    else
                    {
                        y++;
                    }
                }
            }
        }

    The world is a grass texture with lines on it. The ground plane's normal is (0, 0, 1). Any ideas on why there is an offset? Thanks for the help, Anthony G.
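
    A hedged observation that could explain a systematic offset (assuming XNA's Ray type, whose second constructor argument is a direction, not a second point): the code above passes farPlanePoint as the direction, so the intersection is computed along the wrong vector. A minimal corrected sketch, with Projection, View, World, and groundPlan as in the question:

        Vector3 nearPoint = Game.GraphicsDevice.Viewport.Unproject(
            new Vector3(x, y, 0f), Projection, View, World);
        Vector3 farPoint = Game.GraphicsDevice.Viewport.Unproject(
            new Vector3(x, y, 1f), Projection, View, World);

        // Direction must be far minus near, normalized.
        Ray ray = new Ray(nearPoint, Vector3.Normalize(farPoint - nearPoint));

        float? hit = ray.Intersects(groundPlan);
        if (hit.HasValue)
        {
            Vector3 pointOfInterest_W = ray.Position + ray.Direction * hit.Value;
            // ...emit the triangle vertices as before...
        }

    Checking HasValue also avoids the unguarded (float) cast of a nullable intersection result in the original.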

    Read the article

  • 2D soft-body physics engines?

    - by Griffin
    Hi, I've recently learned the SFML graphics library and would like to use or make a non-rigid-body 2D physics system to use with it. The definition of a rigid body in Box2D is "a chunk of matter that is so strong that the distance between any two bits of matter on the chunk is completely constant", and this is exactly what I don't want, as I would like to make elastic, deformable, breakable, and re-connectable bodies. I have three questions: 1. Are there any simple 2D physics engines with these kinds of characteristics out there, preferably free or open source? 2. If not, could I take Box2D and build on it to create this, even though it's based on rigid bodies? 3. Finally, if a simple physics engine like this exists, should I go through the process of creating a new one anyway, simply for the experience and to enhance my physics-math knowledge? I feel like it would help if I ever wanted to modify the code of an existing engine or create a game with really unique physics. Thanks!

    Read the article

  • farseer physics xbox samples not working

    - by Hugh
    I have downloaded a few of the sample projects from the official Farseer Physics website, and I just can't get them to run on my Xbox.

    - My connection to the Xbox is fine; other Xbox projects debug fine on it.
    - I have tried running both the Xbox versions of the samples (for example, the Farseer "hello world" sample project) and the Windows versions, by right-clicking the project and making a copy for Xbox.

    I get a bunch of errors, but the one I always get is "unreachable code detected", referring to code in the Farseer library. It seems to be a problem with referencing/linking the Farseer library from the main game project. Help please!

    Read the article

  • Why don't more games use vector art?

    - by Parris
    It would seem to me that vector art is more efficient in terms of resources and scalability; however, in most cases I have seen artists using bitmap/rasterized art. Is this a limitation placed on the artists by the game programmers/designers? As a programmer I think vector art would be ideal, since it allows scaling up the resolution without having to recreate the art, create really large graphics, or let graphics become blurry. The questions: why aren't more people using SVG/AI to create 2D game art? Would it actually be preferred (and who prefers it)? Are bitmap graphics a standard or a limitation (or maybe neither)? Background: I am working on an engine, and I had some kinda cool ideas for vector-based graphics; however, I don't want to piss off artists in the future. I guess this is more a question centered on pragmatism and developing games.

    Read the article

  • Slick and Timers?

    - by user3491043
    I'm making a game where I need events to happen at precise times. For example, I want event A to happen at 12000 ms and event B to happen every 10000 ms, so the "if"s should look like this:

        // event A
        if (Ticks == 12000) // do things

        // event B
        if (Ticks % 10000 == 0) // do stuff

    But how can I maintain this "Ticks" value? I tried declaring an int and increasing it in the update method, in two ways. Ticks++ doesn't work, because the update method is not called every millisecond. Ticks += delta is closer, but delta is not always equal to 1, so I can miss the precise values I need in the if statements. If you know how I can make events happen at a precise time, please tell me how.
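
    A hedged sketch of the standard fix, shown in C# (the pattern translates directly to Java and Slick; all names are illustrative): accumulate the delta into a running total and compare with >= instead of ==, subtracting the interval for repeating events so nothing is missed when a frame skips past the exact millisecond.

        class EventTimer
        {
            long elapsed;   // total milliseconds since start
            long sinceB;    // milliseconds since event B last fired
            bool firedA;

            // delta: milliseconds since the last update, as Slick passes to update()
            public void Update(int delta)
            {
                elapsed += delta;
                sinceB += delta;

                if (!firedA && elapsed >= 12000) // event A at ~12 s, fires once
                {
                    firedA = true;
                    // do things
                }
                while (sinceB >= 10000)          // event B every ~10 s
                {
                    sinceB -= 10000;
                    // do stuff
                }
            }
        }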

    Read the article

  • problem with loading in .FBX meshes in DirectX 10

    - by N0xus
    I'm trying to load meshes into DirectX 10. I've created a bunch of classes that handle it and allow me to call in a mesh with only a single line of code in my main game class. However, when I run the program the model renders incorrectly, and in the debug output window the following errors keep appearing:

        D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that Semantic 'TEXCOORD' is defined for mismatched hardware registers between the output stage and input stage. [ EXECUTION ERROR #343: DEVICE_SHADER_LINKAGE_REGISTERINDEX ]

        D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that the input stage requires Semantic/Index (POSITION,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]

    The thing is, I've no idea how to fix this. The code I'm using does work; I've simply brought it all into a new project of mine. There are no build errors, and this only appears while the game is running. The .fx file is as follows:

        float4x4 matWorld;
        float4x4 matView;
        float4x4 matProjection;

        struct VS_INPUT
        {
            float4 Pos : POSITION;
            float2 TexCoord : TEXCOORD;
        };

        struct PS_INPUT
        {
            float4 Pos : SV_POSITION;
            float2 TexCoord : TEXCOORD;
        };

        Texture2D diffuseTexture;
        SamplerState diffuseSampler
        {
            Filter = MIN_MAG_MIP_POINT;
            AddressU = WRAP;
            AddressV = WRAP;
        };

        // Vertex shader
        PS_INPUT VS(VS_INPUT input)
        {
            PS_INPUT output = (PS_INPUT)0;
            float4x4 viewProjection = mul(matView, matProjection);
            float4x4 worldViewProjection = mul(matWorld, viewProjection);
            output.Pos = mul(input.Pos, worldViewProjection);
            output.TexCoord = input.TexCoord;
            return output;
        }

        // Pixel shader
        float4 PS(PS_INPUT input) : SV_Target
        {
            return diffuseTexture.Sample(diffuseSampler, input.TexCoord);
            //return float4(1.0f, 1.0f, 1.0f, 1.0f);
        }

        RasterizerState NoCulling
        {
            FILLMODE = SOLID;
            CULLMODE = NONE;
        };

        technique10 Render
        {
            pass P0
            {
                SetVertexShader(CompileShader(vs_4_0, VS()));
                SetGeometryShader(NULL);
                SetPixelShader(CompileShader(ps_4_0, PS()));
                SetRasterizerState(NoCulling);
            }
        }

    In my game, the .fx file and model are loaded and set as follows:

        // Loading the shader file
        // Set the shader flags - BMD
        DWORD dwShaderFlags = D3D10_SHADER_ENABLE_STRICTNESS;
        #if defined( DEBUG ) || defined( _DEBUG )
            dwShaderFlags |= D3D10_SHADER_DEBUG;
        #endif

        ID3D10Blob* pErrorBuffer = NULL;
        if (FAILED(D3DX10CreateEffectFromFile(TEXT("TransformedTexture.fx"), NULL, NULL,
            "fx_4_0", dwShaderFlags, 0, md3dDevice, NULL, NULL, &m_pEffect, &pErrorBuffer, NULL)))
        {
            char* pErrorStr = (char*)pErrorBuffer->GetBufferPointer();
            // If the creation of the effect fails, a message box will be shown
            MessageBoxA(NULL, pErrorStr, "Error", MB_OK);
            return false;
        }

        // Get the technique called Render from the effect; we need this for rendering later on
        m_pTechnique = m_pEffect->GetTechniqueByName("Render");

        // Number of elements in the layout
        UINT numElements = TexturedLitVertex::layoutSize;

        // Get the pass description; we need this to bind the vertex to the pipeline
        D3D10_PASS_DESC PassDesc;
        m_pTechnique->GetPassByIndex(0)->GetDesc(&PassDesc);

        // Create the input layout to describe the incoming buffer to the input assembler
        if (FAILED(md3dDevice->CreateInputLayout(TexturedLitVertex::layout, numElements,
            PassDesc.pIAInputSignature, PassDesc.IAInputSignatureSize, &m_pVertexLayout)))
        {
            return false;
        }

        // Model loading
        m_pTestRenderable = new CRenderable();
        //m_pTestRenderable->create<TexturedVertex>(md3dDevice, 8, 6, vertices, indices);
        m_pModelLoader = new CModelLoader();
        m_pTestRenderable = m_pModelLoader->loadModelFromFile(md3dDevice, "armoredrecon.fbx");
        m_pGameObjectTest = new CGameObject();
        m_pGameObjectTest->setRenderable(m_pTestRenderable);

        // Set primitive topology: how we are going to interpret the vertices in the vertex buffer
        md3dDevice->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

        if (FAILED(D3DX10CreateShaderResourceViewFromFile(md3dDevice,
            TEXT("armoredrecon_diff.png"), NULL, NULL, &m_pTextureShaderResource, NULL)))
        {
            MessageBox(NULL, TEXT("Can't load Texture"), TEXT("Error"), MB_OK);
            return false;
        }

        m_pDiffuseTextureVariable = m_pEffect->GetVariableByName("diffuseTexture")->AsShaderResource();
        m_pDiffuseTextureVariable->SetResource(m_pTextureShaderResource);

    Finally, the draw function code:

        // All drawing will occur between the clear and present
        m_pViewMatrixVariable->SetMatrix((float*)m_matView);
        m_pWorldMatrixVariable->SetMatrix((float*)m_pGameObjectTest->getWorld());

        // Get the stride (size) of a vertex; we need this to tell the pipeline the size of one vertex
        UINT stride = m_pTestRenderable->getStride();
        // The offset from the start of the buffer to where our vertices are located
        UINT offset = m_pTestRenderable->getOffset();

        ID3D10Buffer* pVB = m_pTestRenderable->getVB();
        // Bind the vertex buffer to the input assembler stage
        md3dDevice->IASetVertexBuffers(0, 1, &pVB, &stride, &offset);
        md3dDevice->IASetIndexBuffer(m_pTestRenderable->getIB(), DXGI_FORMAT_R32_UINT, 0);

        // Get the description of the technique; we need this to loop through each pass
        D3D10_TECHNIQUE_DESC techDesc;
        m_pTechnique->GetDesc(&techDesc);

        // Loop through the passes in the technique
        for (UINT p = 0; p < techDesc.Passes; ++p)
        {
            // Get the pass at the current index and apply it
            m_pTechnique->GetPassByIndex(p)->Apply(0);
            // Draw call
            md3dDevice->DrawIndexed(m_pTestRenderable->getNumOfIndices(), 0, 0);
            //m_pD3D10Device->Draw(m_pTestRenderable->getNumOfVerts(), 0);
        }

    Is there anything I've clearly done wrong or am missing? I've spent two weeks trying to work out what on earth I've done wrong, to no avail. Any insight a fresh pair of eyes could give on this would be great.

    Read the article

  • C# item system design approach: should I use abstract classes, interfaces, or virtuals?

    - by vexe
    I'm working on a Resident Evil 1/2/3/0/Remake type of game. Currently I've done a big part of the inventory system (here's a link if you want to see my inventory; it's pretty outdated, I've added a lot of features and made a lot of enhancements since). Now I'm thinking about how to approach the item system. If you've played any Resident Evil game or any of its likes, you should be familiar with what I'm trying to achieve. Here's a very simple category tree I made for the items. You have different items, with different operations you could perform on them: there are usable items that you could use, like herbs and first-aid kits that affect your health when 'used', keys to unlock doors, and equipable items that you could 'equip', like weapons. Also, you can 'combine' two items together to get a new one; for example, mixing a green and a red herb gives you a new type of herb, combining a lighter with a paper gives you a burnt paper, and combining ammo with a gun reloads the gun. You know the usual RE drill. Not all items are 'transformable'. For example: lighter + paper = burnt paper (it's the paper that 'transforms' into burnt paper, not the lighter; the lighter is not transformable and will remain as it is); green herb + red herb = newHerb1 (both herbs vanish and transform into this new type of herb); ammo + gun = reloaded gun (the ammo's state doesn't change, its count just decreases; nothing happens to the gun, it just gets reloaded). Also, a key note: you can't just combine items randomly; each item has its 'mating' item(s). So to sum up: different items, and different operations on them. The question is, how do I approach this, design-wise? I've been learning about interfaces, but they just don't quite click for me. I mean, why not just use classes with good old inheritance? I know the technical details of interfaces, and that the cool thing about them is that they don't require an inheritance chain, but I just can't see how to use them properly here, or whether they are the right thing to use at all. So should I go with just classes and inheritance, like in the tree I showed you? Or should I think more about how to use interfaces (IUsable, IEquipable, ITransformable)? And why not just use classes UsableItem, EquipableItem, TransformableItem? I want something that won't give me headaches in the long run, something resilient and flexible to future changes. I'm OK using classes, but I smell something better here. The way I'm thinking is to possibly use both inheritance and interfaces, so that you have a branch like item - equipable - weapon. But then again, the weapon has methods like 'reload', 'examine', 'equip', and some weapons 'combine', so I'm thinking of making weapon implement ICombinable? Not all items get used the same way: using herbs increases your health, using a key opens a door, so IUsable maybe? Should I use a big database (XML, for example) for all the items: item names, mates, nRowsReq, nColsReq, etc.? Thanks so much for your answers in advance; note that demo 3 is coming after I'm done with items :D
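
    For what it's worth, a minimal sketch of the mixed approach the question is circling: a base class for shared state, plus small capability interfaces that concrete items opt into. All names are illustrative, not from any real codebase.

        public class Player { /* health, inventory, ... */ }

        public abstract class Item
        {
            public string Name { get; protected set; }
        }

        public interface IUsable     { void Use(Player player); }
        public interface IEquipable  { void Equip(Player player); }
        public interface ICombinable
        {
            bool CanCombineWith(Item other);
            Item CombineWith(Item other);
        }

        // A herb is usable on its own and combinable with other herbs.
        public class Herb : Item, IUsable, ICombinable
        {
            public void Use(Player player) { /* restore some health */ }
            public bool CanCombineWith(Item other) { return other is Herb; }
            public Item CombineWith(Item other) { return new Herb(); /* mixed herb */ }
        }

        public class Ammo : Item { }

        // A weapon is equipable; combining it with ammo reloads it.
        public class Weapon : Item, IEquipable, ICombinable
        {
            public void Equip(Player player) { /* set as active weapon */ }
            public bool CanCombineWith(Item other) { return other is Ammo; }
            public Item CombineWith(Item other) { /* add rounds */ return this; }
        }

    Inventory code can then test item is IUsable without caring about concrete types, and the 'mating' rules live in each item's CanCombineWith, which could just as well be data-driven from the XML database mentioned above.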

    Read the article

  • Graph Isomorphism > What kind of Graph is this?

    - by oodavid
    Essentially, this is a variation of Comparing Two Tree Structures; however, I do not have "trees" but another type of graph. I need to know what kind of graph I have in order to figure out whether a graph-isomorphism special case applies. As you can see, the graphs are: not directed; not trees; cyclic; with at most 4 connections per node. But I still don't know the correct terminology, nor which isomorphism algorithm to pursue. Guidance appreciated.

    Read the article

  • Spreadsheets in Game Design?

    - by Joey Green
    Twice in the past two weeks I've heard from well-known, successful game developers that they use spreadsheets when designing games. The first was David Whatley in this GDC Vault video: http://gdcvault.com/play/1012372/From-Zero-to-Time-Magazine The second was the guys who do Walled Garden Weekly: http://walledgardenweekly.com/ David said he models everything out and uses Excel models to see how everything plays out. What on earth is he talking about? Is it seeing how the game mechanics react to each other? Is there somewhere I can learn more about how to do this? Thanks

    Read the article

  • Game Maker Studio Gravity Problems

    - by Dusty
    I've started messing around with GameMaker: Studio. The problem I'm having is getting gravity code for orbiting. Here's how I did it in XNA:

        foreach (GravItem Item in StarSystem.ActiveItems.OfType<GravItem>())
        {
            if (this != Item)
            {
                Velocity += 10 * Vector2.Normalize(Item.Position - this.Position)
                    * (this.Mass * Item.Mass)
                    / Vector2.DistanceSquared(this.Position, Item.Position)
                    / this.Mass;
            }
        }

    Simple, and it works well: things orbit and everything is nice. But in Game Maker I don't have the luxury of Vector2s or a foreach loop to run through all the objects that have a mass. I've tried a few different things, but nothing seems to work:

        distance = distance_to_object(obj_moon);
        //--Gravity
        hspeed += (0.5 * distance * (Mass * obj_moon.Mass) / sqr(distance) / Mass);
        vspeed += (0.5 * distance * (Mass * obj_moon.Mass) / sqr(distance) / Mass);

    Thanks for the help.
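
    A hedged guess at what's missing, judging only from the two snippets: the XNA version multiplies by a normalized direction vector, while the GML version multiplies by the scalar distance and never by a direction at all, so the pull has no sense of where the moon is. Componentwise, the same math looks like this in C# (in GML the components would come from obj_moon.x - x and obj_moon.y - y); names are illustrative:

        using Microsoft.Xna.Framework;

        static Vector2 GravityPull(Vector2 position, Vector2 moonPosition,
                                   float moonMass, float G)
        {
            // The orbiting body's own mass cancels out: (m1 * m2 / d^2) / m1 = m2 / d^2.
            Vector2 toMoon = moonPosition - position;
            float distSq = toMoon.LengthSquared();
            if (distSq < 0.0001f)
                return Vector2.Zero; // avoid dividing by zero at the moon's center
            return Vector2.Normalize(toMoon) * (G * moonMass / distSq);
        }

    Each step would then do velocity += GravityPull(position, moonPos, moonMass, 10f), with hspeed/vspeed playing the role of velocity in GML.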

    Read the article

  • Ogre3D Fog with overlays

    - by Yourdoom
    I'm building a game with Ogre3D. I've got fog working properly with:

        scenemanager->setFog(Ogre::FOG_LINEAR, Ogre::ColourValue(0.23f, 0.725f, 1.0f), 0, 18, 20);

    However, I'm currently implementing a GUI system (libRocket) which is rendered on top of everything else, and this removes the fog. Does anyone know how to fix this? (I'm using the default libRocket rendering system for Ogre as included in the samples, but the problem also appears when using a semi-transparent overlay.)

    Read the article

  • Java Slick2d Animation not working

    - by user3558075
    Hello everyone. I am trying to make a simple 2D game using Java and the Slick2D library. This is my first time doing it, and I need some help. Right now I am trying to make the animations for the character, so that when you go right he turns right and when you go left he turns left, but I keep getting an error when drawing the character. I've tried re-downloading Slick, but that didn't work. When I get rid of the player.draw(x, y); line of code it doesn't crash, but the character isn't there. Here's my code; can anyone help?

        package enteties;

        import input.Keyinput;
        import org.newdawn.slick.Animation;
        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.Image;
        import org.newdawn.slick.Input;
        import org.newdawn.slick.SlickException;
        import org.newdawn.slick.state.BasicGameState;
        import org.newdawn.slick.state.StateBasedGame;
        import playerinfo.Playerinfo;

        public class Player extends BasicGameState {

            Playerinfo pi = new Playerinfo();
            Keyinput ki = new Keyinput();
            Animation player, up, down, left, right;

            public void init(GameContainer gc, StateBasedGame sbg) throws SlickException {
                Image[] goingUp = { new Image("res/buckysBack.png"), new Image("res/charBack.png") };
                Image[] goingDown = { new Image("res/buckysFront.png"), new Image("res/charFront.png") };
                Image[] goingLeft = { new Image("res/buckysLeft.png"), new Image("res/charLeft.png") };
                Image[] goingRight = { new Image("res/buckysRight.png"), new Image("res/charRight.png") };
                int[] duration = { 200, 200 };

                // Note: these local declarations shadow the Animation fields above,
                // so the fields up/down/left/right are never assigned.
                Animation up = new Animation(goingUp, duration, false);
                Animation down = new Animation(goingDown, duration, false);
                Animation left = new Animation(goingLeft, duration, false);
                Animation right = new Animation(goingRight, duration, false);
                player = up;
            }

            public void render(GameContainer gc, StateBasedGame sbg, Graphics g) throws SlickException {
                // The error happens here; when I remove this line it doesn't crash
                player.draw(720, 450);
            }

            public void update(GameContainer gc, StateBasedGame sbg, int delta) throws SlickException {
            }

            public int getID() {
                return 0;
            }
        }

    Read the article

  • How to render 2D particles as fluid?

    - by luke
    Suppose you have a nice way to move your 2D particles in order to simulate a fluid (like water). Any ideas on how to render it? This is for a 2D game, where the perspective is from the side, like this. The water will be contained in boxes that can be broken in order to let it fall down and interact with other objects. The simplest way that comes to my mind is to use a small image for each particle. I am interested in hearing more ways of rendering water.
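
    One common technique worth naming here, as a hedged sketch rather than the only answer: metaballs. Draw every particle as a soft radial-gradient blob into an offscreen render target, then draw that target with an alpha threshold so overlapping blobs merge into one smooth fluid surface. XNA-flavored; device, blobTarget, softCircleTexture, blobCenter, and the particle list are assumptions:

        // Pass 1: accumulate soft blobs into an offscreen target.
        device.SetRenderTarget(blobTarget);
        device.Clear(Color.Transparent);
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);
        foreach (Particle p in particles)
        {
            spriteBatch.Draw(softCircleTexture, p.Position, null, waterColor,
                             0f, blobCenter, 1f, SpriteEffects.None, 0f);
        }
        spriteBatch.End();
        device.SetRenderTarget(null);

        // Pass 2: draw blobTarget full-screen, keeping only pixels above a
        // density cutoff (e.g. AlphaTestEffect, or clip(color.a - 0.5) in a
        // one-line pixel shader), so merged blobs read as one body of water.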

    Read the article

  • Rendering of 2d water

    - by luke
    Suppose you have a nice way to move your 2D particles in order to simulate a fluid (like water). Any ideas on how to render it? Consider that the game is a 2D game with a side-on perspective, like this (the first image I found): an example of 2D water. The water will be contained in boxes that can be broken in order to let it fall down and interact with other objects. The simplest way that comes to my mind is to use a small image for each particle. I am interested in hearing more ways of rendering water. Thank you.

    Read the article

  • Exporting the frames in a Flash CS5.5 animation and possibly creating the spritesheet

    - by Adam Smith
    Some time ago, I asked a question here about the best way to create animations for an Android game, and I got great answers. I did what people suggested, exporting each frame from a Flash animation manually and creating the spritesheet manually, and it was very tedious. Now I've changed projects, this one is going to contain a lot more animations, and I feel there has to be a better way to export each frame and build the spritesheets in an automated manner. My designer is using Flash CS5.5, and I was wondering whether this is possible, as I can't find an option or code examples for saving each frame individually. If this is not possible in Flash, please recommend another program that can be used to create animations without having to author each frame on its own. I'd rather keep Flash, as my designer knows how to use it and it's giving great results.

    Read the article

  • 3d transformation of game world keeping gameplay 2d - COCOS2D 2.0

    - by samfisher
    Using: Cocos2D + iOS. I want to rotate the game world, maybe loading another .tmx file for the other dimension when the user wants to switch dimensions. The effect I am looking for is something like this: CLICK HERE. What I have thought of so far: rotating CCCamera will be mandatory. Question: how will I have the other part of the level in place while the camera rotates? I could load a CCSprite and rotate it accordingly for the third dimension. Question: when the camera and world are rotated, will the player controls still work properly? I think not. A better option might be to check out Cocos3D, where I could implement a 3D world, right? Question: I'm not sure how well 2D dynamics would work there, as I want to use Box2D as the physics engine. Could anyone provide suggestions? Regards, Sam

    Read the article

  • Best pathfinding for a 2D world made by CPU Perlin Noise, with random start- and destinationpoints?

    - by Mathias Lykkegaard Lorenzen
    I have a world made from Perlin noise. It's created on the CPU for consistency between several devices (yes, I know it takes time; I have techniques that make it fast enough). Now, in my game you play as a fighter-ship-thingy-blob or whatever it's going to be. What matters is that this "thing" you play as is placed in the middle of the screen and moves along with the camera. The white stuff in my world is walls; the black stuff is freely walkable. Now, as the player moves around, he will constantly see "monsters" spawning around him in a circle (a circle that's larger than the screen, though). These monsters move inwards and try to collide with the player. This is the part that's tricky: I want these monsters to constantly spawn and move towards the player while avoiding walls entirely. I've added a screenshot below that makes it easier to understand (excuse my bad drawing; I was using Paint). In the image, the following rules apply. The red dot in the middle is the player. The light-green rectangle is the boundary of the screen (in other words, what the player sees); it moves with the player. The blue circle is the spawning circle: monsters spawn constantly at its circumference, and it moves with the player and the screen boundary. Each spawned monster (shown as a yellow triangle) wants to collide with the player. The pink lines show the paths I want the monsters to move along (or something similar); what matters is that they reach the player without colliding with the walls. The map itself (the one generated from Perlin noise on the CPU) is saved in memory as a two-dimensional bit array: a 1 means a wall, and a 0 means open, walkable space. The current tile size is pretty small; I could easily make it a lot larger for better performance. I've done some path algorithms before, such as A*, but I don't think that's entirely optimal here.
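
    Given the setup described (many agents, one shared destination, a boolean wall grid), one approach that sidesteps per-monster A*: build a single breadth-first distance field outward from the player's tile whenever the player changes tile, then let each monster simply step to the neighboring tile with the smallest distance value. A hedged sketch over the bit-array world described above; names are illustrative:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        static int[,] BuildDistanceField(bool[,] walls, Point player)
        {
            int w = walls.GetLength(0), h = walls.GetLength(1);
            var dist = new int[w, h];
            for (int x = 0; x < w; x++)
                for (int y = 0; y < h; y++)
                    dist[x, y] = int.MaxValue;

            var queue = new Queue<Point>();
            dist[player.X, player.Y] = 0;
            queue.Enqueue(player);

            int[] dx = { 1, -1, 0, 0 };
            int[] dy = { 0, 0, 1, -1 };
            while (queue.Count > 0)
            {
                Point p = queue.Dequeue();
                for (int i = 0; i < 4; i++)
                {
                    int nx = p.X + dx[i], ny = p.Y + dy[i];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (walls[nx, ny] || dist[nx, ny] != int.MaxValue) continue;
                    dist[nx, ny] = dist[p.X, p.Y] + 1;
                    queue.Enqueue(new Point(nx, ny));
                }
            }
            return dist; // each monster walks downhill in this field
        }

    The field costs one flood fill regardless of how many monsters are alive, which fits the constant-spawning requirement better than running A* per monster.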

    Read the article

  • First time shadow mapping problems

    - by user1294203
    I have implemented basic shadow mapping for the first time in OpenGL using shaders, and I'm facing some problems. Below you can see an example of my rendered scene. The shadow-mapping process I'm following is: I render the scene to the framebuffer using a view matrix from the light's point of view and the projection and model matrices used for normal rendering. In the second pass, I send the above MVP matrix from the light's point of view to the vertex shader, which transforms the position into light space. The fragment shader does the perspective divide and converts the position into texture coordinates. Here is my vertex shader:

        #version 150 core

        uniform mat4 ModelViewMatrix;
        uniform mat3 NormalMatrix;
        uniform mat4 MVPMatrix;
        uniform mat4 lightMVP;
        uniform float scale;

        in vec3 in_Position;
        in vec3 in_Normal;
        in vec2 in_TexCoord;

        smooth out vec3 pass_Normal;
        smooth out vec3 pass_Position;
        smooth out vec2 TexCoord;
        smooth out vec4 lightspace_Position;

        void main(void) {
            pass_Normal = NormalMatrix * in_Normal;
            pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz;
            lightspace_Position = lightMVP * vec4(scale * in_Position, 1.0);
            TexCoord = in_TexCoord;
            gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0);
        }

    And my fragment shader:

        #version 150 core

        struct Light {
            vec3 direction;
        };

        uniform Light light;
        uniform sampler2D inSampler;
        uniform sampler2D inShadowMap;

        smooth in vec3 pass_Normal;
        smooth in vec3 pass_Position;
        smooth in vec2 TexCoord;
        smooth in vec4 lightspace_Position;

        out vec4 out_Color;

        float CalcShadowFactor(vec4 lightspace_Position) {
            vec3 ProjectionCoords = lightspace_Position.xyz / lightspace_Position.w;
            vec2 UVCoords;
            UVCoords.x = 0.5 * ProjectionCoords.x + 0.5;
            UVCoords.y = 0.5 * ProjectionCoords.y + 0.5;
            float Depth = texture(inShadowMap, UVCoords).x;
            if (Depth < (ProjectionCoords.z + 0.001))
                return 0.5;
            else
                return 1.0;
        }

        void main(void) {
            vec3 Normal = normalize(pass_Normal);
            vec3 light_Direction = -normalize(light.direction);
            vec3 camera_Direction = normalize(-pass_Position);
            vec3 half_vector = normalize(camera_Direction + light_Direction);
            float diffuse = max(0.2, dot(Normal, light_Direction));
            vec3 temp_Color = diffuse * vec3(1.0);
            float specular = max(0.0, dot(Normal, half_vector));
            float shadowFactor = CalcShadowFactor(lightspace_Position);
            if (diffuse != 0 && shadowFactor > 0.5) {
                float fspecular = pow(specular, 128.0);
                temp_Color += fspecular;
            }
            out_Color = vec4(shadowFactor * texture(inSampler, TexCoord).xyz * temp_Color, 1.0);
        }

    One of the problems is self-shadowing: as you can see in the picture, the crate casts a shadow on itself. I have tried enabling polygon offset (i.e. glEnable(GL_POLYGON_OFFSET_FILL) and glPolygonOffset(factor, units)), but it didn't change much. As you see in the fragment shader, I have put in a static offset value of 0.001, but I have to change the value depending on the distance of the light to get more desirable results, which is not very handy. I also tried using front-face culling when I render to the framebuffer; that didn't change much either. The other problem is that pixels outside the light's view frustum get shaded. The only object that is supposed to cast shadows is the crate. I guess I should pick more appropriate projection and view matrices, but I'm not sure how to do that. What are some common practices? Should I pick an orthographic projection? Does anyone have any easy-to-implement solutions to these problems? Could you give me some additional tips? Please ask me if you need more information on my code. A close-up comparison of the crate with and without shadow mapping makes the self-shadowing more visible.

    Read the article

  • How can I make an object's hitbox rotate with its texture?

    - by Matthew Optional Meehan
    In XNA, when you have a rectangular sprite that doesn't rotate, it's easy to use its four corners to make a hitbox. However, when you apply a rotation, the points get moved, and I assume there is some math I can use to acquire them. I am using the four points to draw a rectangle that visually represents the hitbox. I have seen some per-pixel collision examples, but I can foresee that they would be hard to draw a box/convex hull around. I have also seen physics engines like Farseer, but I'm not sure there is a quick tutorial to do what I want.
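
    The math in question is a standard 2D rotation of each corner around the sprite's center, sketched below under the assumption that the sprite rotates about its center (adjust the origin otherwise):

        using System;
        using Microsoft.Xna.Framework;

        static Vector2 RotatePoint(Vector2 point, Vector2 origin, float radians)
        {
            float cos = (float)Math.Cos(radians);
            float sin = (float)Math.Sin(radians);
            Vector2 d = point - origin;
            return new Vector2(
                origin.X + d.X * cos - d.Y * sin,
                origin.Y + d.X * sin + d.Y * cos);
        }

        // Feed it the four unrotated corners of the sprite's rectangle.
        static Vector2[] GetCorners(Rectangle rect, float rotation)
        {
            Vector2 center = new Vector2(rect.Center.X, rect.Center.Y);
            return new[]
            {
                RotatePoint(new Vector2(rect.Left,  rect.Top),    center, rotation),
                RotatePoint(new Vector2(rect.Right, rect.Top),    center, rotation),
                RotatePoint(new Vector2(rect.Right, rect.Bottom), center, rotation),
                RotatePoint(new Vector2(rect.Left,  rect.Bottom), center, rotation),
            };
        }

    The four returned points form an oriented bounding box; a separating-axis test (rather than Rectangle.Intersects) is then the usual way to collide two of them.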

    Read the article

  • Speeding up procedural texture generation

    - by FalconNL
    Recently I've begun working on a game that takes place in a procedurally generated solar system. After a bit of a learning curve (having neither worked with Scala, OpenGL 2 ES, nor Libgdx before), I have a basic tech demo going where you spin around a single procedurally textured planet. The problem I'm running into is the performance of the texture generation.

    A quick overview of what I'm doing: a planet is a cube that has been deformed to a sphere. To each side, an n x n (e.g. 256 x 256) texture is applied, and the six are bundled into one 8n x n texture that is sent to the fragment shader. The last two spaces are not used; they're only there to make sure the width is a power of 2. The texture is currently generated on the CPU, using the updated 2012 version of the simplex noise algorithm linked to in the paper 'Simplex noise demystified'. The scene I'm using to test the algorithm contains two spheres: the planet and the background. Both use a greyscale texture consisting of six octaves of 3D simplex noise, so for example if we choose 128x128 as the texture size there are 128 x 128 x 6 x 2 x 6 = about 1.2 million calls to the noise function.

    The closest you will get to the planet is about what's shown in the screenshot, and since the game's target resolution is 1280x720, I'd prefer to use 512x512 textures. Combine that with the fact that the actual textures will of course be more complicated than basic noise (there will be a day and a night texture, blended in the fragment shader based on sunlight, and a specular mask; I need noise for continents, terrain color variation, clouds, city lights, etc.) and we're looking at something like 512 x 512 x 6 x 3 x 15 = 70 million noise calls for the planet alone. In the final game, there will be activities when traveling between planets, so a wait of 5 or 10 seconds, possibly 20, would be acceptable, since I can calculate the texture in the background while traveling, though obviously the faster the better.

    Getting back to our test scene, performance on my PC isn't too terrible, though still too slow considering the final result is going to be about 60 times worse:

        128x128: 0.1 s
        256x256: 0.4 s
        512x512: 1.7 s

    This is after I moved all performance-critical code to Java, since trying to do so in Scala was a lot worse. Running this on my phone (a Samsung Galaxy S3), however, produces a more problematic result:

        128x128: 2 s
        256x256: 7 s
        512x512: 29 s

    Already far too long, and that's not even factoring in the fact that it'll be minutes instead of seconds in the final version. Clearly something needs to be done. Personally, I see a few potential avenues, though I'm not particularly keen on any of them yet:

    1. Don't precalculate the textures, but let the fragment shader calculate everything. Probably not feasible: at one point I had the background as a fullscreen quad with a pixel shader and I got about 1 fps on my phone.
    2. Use the GPU to render the texture once, store it, and use the stored texture from then on. Upside: might be faster than doing it on the CPU, since the GPU is supposed to be faster at floating-point calculations. Downside: effects that cannot (easily) be expressed as functions of simplex noise (e.g. gas-planet vortices, moon craters, etc.) are a lot more difficult to code in GLSL than in Scala/Java.
    3. Calculate a large number of noise textures and ship them with the application. I'd like to avoid this if at all possible.
    4. Lower the resolution. Buys me a 4x performance gain, which isn't really enough, plus I lose a lot of quality.
    5. Find a faster noise algorithm. If anyone has one I'm all ears, but simplex is already supposed to be faster than Perlin.
    6. Adopt a pixel-art style, allowing for lower-resolution textures and fewer noise octaves. While I originally envisioned the game in this style, I've come to prefer the realistic approach.
    7. I'm doing something wrong and the performance should already be one or two orders of magnitude better. If this is the case, please let me know.

    If anyone has any suggestions, tips, workarounds, or other comments regarding this problem, I'd love to hear them.
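
    One avenue not on the list above, offered as a hedged sketch (shown in C#; on Android the same row split maps onto a plain thread pool): since every texel's noise value is independent, the generation parallelizes trivially across cores, which on a multi-core phone could claw back a meaningful factor without touching the algorithm. GenerateNoise stands in for the six-octave simplex evaluation described above.

        using System;
        using System.Threading.Tasks;

        static float[] GenerateTexture(int width, int height,
                                       Func<int, int, float> generateNoise)
        {
            var texels = new float[width * height];
            // Rows are independent, so hand each one to the thread pool.
            Parallel.For(0, height, y =>
            {
                for (int x = 0; x < width; x++)
                    texels[y * width + x] = generateNoise(x, y);
            });
            return texels;
        }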

    Read the article

  • Admob banner not getting removed from superview

    - by Anil gupta
    I am developing a 2D game using the cocos2d framework, and in this game I am using AdMob for advertising. I declared AdMob in some classes, not in all of them, but the banner is visible in every class, and after some time the game crashes as well. I don't understand how the AdMob banner appears in every class; in fact, I have not declared it in the RootViewController class. Can anyone suggest how to integrate AdMob in a cocos2d game so the banner appears only in particular classes, not in every class? I am using the latest Google AdMob SDK. My code is below. Thanks in advance.

        - (void)AdMob {
            NSLog(@"ADMOB");
            CGSize winSize = [[CCDirector sharedDirector] winSize];
            // Create a view of the standard size at the bottom of the screen.
            if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
                bannerView_ = [[GADBannerView alloc] initWithFrame:CGRectMake(size.width/2 - 364,
                    size.height - GAD_SIZE_728x90.height, GAD_SIZE_728x90.width, GAD_SIZE_728x90.height)];
            } else {
                // It's an iPhone
                bannerView_ = [[GADBannerView alloc] initWithFrame:CGRectMake(size.width/2 - 160,
                    size.height - GAD_SIZE_320x50.height, GAD_SIZE_320x50.width, GAD_SIZE_320x50.height)];
            }
            if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
                bannerView_.adUnitID = @"a15062384653c9e";
            } else {
                bannerView_.adUnitID = @"a15062392a0aa0a";
            }
            bannerView_.rootViewController = self;
            [[[CCDirector sharedDirector] openGLView] addSubview:bannerView_];
            [bannerView_ loadRequest:[GADRequest request]];
            GADRequest *request = [[GADRequest alloc] init];
            request.testing = [NSArray arrayWithObjects:GAD_SIMULATOR_ID, nil]; // Simulator
            [bannerView_ loadRequest:request];
        }

        // Best practice for removing the bannerView_
        - (void)removeSubviews {
            NSArray *subviews = [[CCDirector sharedDirector] openGLView].subviews;
            for (id SUB in subviews) {
                [(UIView *)SUB removeFromSuperview];
                [SUB release];
            }
            NSLog(@"remove from view");
        }

        // This makes the refreshTimer count
        - (void)targetMethod:(NSTimer *)theTimer {
            // Increase the timer and seconds
            elapsedTime++;
            seconds++;
            // Increase the minutes every 60 seconds
            if (seconds >= 60) {
                seconds = 0;
                minutes++;
                [self removeSubviews];
                [self AdMob];
            }
            NSLog(@"TIME: %02d:%02d", minutes, seconds);
        }

    Read the article

  • Writing the correct value in the depth buffer when using ray-casting

    - by hidayat
    I am doing ray-casting in a 3D texture until I hit a correct value. I am doing the ray-casting in a cube, and the cube corners are already in world coordinates, so I don't have to multiply the vertices with the modelview matrix to get the correct position.

    Vertex shader:

        world_coordinate_ = gl_Vertex;

    Fragment shader:

        vec3 direction = (world_coordinate_.xyz - cameraPosition_);
        direction = normalize(direction);
        for (float k = 0.0; k < steps; k += 1.0) {
            ....
            pos += direction * delta_step;
            float thisLum = texture3D(texture3_, pos).r;
            if (thisLum > surface_)
                ...
        }

    Everything works as expected. What I now want is to write the correct value to the depth buffer. The value that is now written to the depth buffer is the cube coordinate, but I want the value of pos in the 3D texture to be written. So let's say the cube is placed 10 away from origin in -z and the size is 10*10*10. My solution, which does not work correctly, is this:

        pos *= 10;
        pos.z += 10;
        pos.z *= -1;
        vec4 depth_vec = gl_ProjectionMatrix * vec4(pos.xyz, 1.0);
        float depth = ((depth_vec.z / depth_vec.w) + 1.0) * 0.5;
        gl_FragDepth = depth;

    Read the article
