Search Results



  • How to detect GLSL warnings?

    - by msell
    After compiling a shader with glCompileShader, I can call glGetShaderiv with GL_COMPILE_STATUS to check whether the shader compiled successfully. I can also call glGetShaderInfoLog to get information about possible errors, warnings, or other issues, but the format of the info log is unspecified. In a tool where users can write their own shaders, I would like to print all errors and warnings from the compilation, but print nothing if none were found. The problem is that GL_COMPILE_STATUS only reports false when compilation failed and true otherwise. When there are no problems, some drivers return an empty info log from glGetShaderInfoLog, but others can return something else such as "No errors.", which I do not want to print to the user. How is this problem generally solved?
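    One common approach is a heuristic: always fetch the log, but only show it when the status is false or the log looks non-trivial. A minimal C sketch along those lines (assumes <stdio.h>, <stdlib.h> and a GL loader are included; isProbablyNoise() is a hypothetical filter you would write yourself, e.g. matching "No errors." and similar driver boilerplate):

        void reportShaderLog(GLuint shader)
        {
            GLint status = GL_FALSE, logLength = 0;
            glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
            glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength);
            if (logLength > 1) {  /* length includes the terminating NUL */
                char *log = malloc(logLength);
                glGetShaderInfoLog(shader, logLength, NULL, log);
                /* Heuristic: on success, suppress logs matching known benign messages. */
                if (status == GL_FALSE || !isProbablyNoise(log))
                    fprintf(stderr, "%s\n", log);
                free(log);
            }
        }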

    Read the article

  • Which isometric angles can be mirrored (and otherwise transformed) for optimization?

    - by Tom
    I am working on a basic isometric game and am struggling to find the correct mirrors (a mirror can be any form of transform). I have managed to get SE out of SW by scaling the sprite by -1 on the X axis, and the same applies for the NE angle. Something is bugging me: I feel I should also be able to mirror N to S, but I cannot manage to pull this one off. Am I just too sleepy and trying to do the impossible, or is a basic -1 scale on the Y axis not enough? What is the commonly used mirror table for optimizing 8-angle (N, NE, E, SE, S, SW, W, NW) isometric sprites?
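    For what it's worth, the mirror table most 2D isometric games use relies on horizontal flips only: NW/W/SW reuse the art of NE/E/SE, while N and S have no horizontal twin and need their own sprites. A vertical -1 scale does not turn N into S, because both views show the top of the object rather than a reflection of it. A small sketch of that table (plain Java, names illustrative):

        enum Dir { N, NE, E, SE, S, SW, W, NW }

        // Which direction's art to load for a given facing.
        static Dir sourceDir(Dir d) {
            switch (d) {
                case NW: return Dir.NE;
                case W:  return Dir.E;
                case SW: return Dir.SE;
                default: return d;  // N, NE, E, SE, S keep their own art
            }
        }

        // Whether to draw that art scaled by -1 on the X axis.
        static boolean flipX(Dir d) {
            return d == Dir.NW || d == Dir.W || d == Dir.SW;
        }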

    Read the article

  • How to achieve selection of a tile from a tile sheet based on an ID?

    - by Bugster
    Let's say I have a tile sheet that contains 8 sprites per sheet, where each sprite is a 30x30 tile. I wrote my own custom map parser/loader, but I'm having trouble extracting a certain tile sprite from the file. I'll describe my problem better so everyone understands. I wrote an enum of materials, and each material has a value according to its location in the tile sheet. For example, void is 1, grass is 2, rock is 3, etc. So in my tile sheet they are represented as such:

        +---+---+---+---+---+
        | 1 | 2 | 3 | 4 | 5 |
        +---+---+---+---+---+

    Which is equivalent to:

        +------+-------+-------+
        | void | grass | stone |
        +------+-------+-------+

    Basically, for rendering I created a tile class; each tile has two coordinates, X and Y (calculated automatically), and a material, which can be represented either as a number or as a value (ID). When rendering, I have a vector of sprites which are all taken from one file called tilesheet.png, but each of them must only draw a certain portion of the tile sheet, for example something like this: tile coordinateBounds(topLeftX, topLeftY, tileWidth, tileHeight); During the initialization of the map I calculate an array of tiles, and I give each of them their position, their material based on the values in a map file, and a few other variables such as collision. I need to apply the coordinateBounds to each of them according to their material value; for example, if the material is grass, it should only take the grass sprite from the tile sheet. I should also mention I'm using SFML, and there are no borders or spacing between the tiles.
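    Since the sheet is a single row of 30x30 tiles with no spacing, the sub-rectangle for a material can be computed directly from its ID. A minimal SFML sketch (assuming SFML 2.x, and that material IDs start at 1 as in the question):

        // Map a material ID to its source rectangle in tilesheet.png.
        sf::IntRect boundsForMaterial(int materialId)
        {
            const int tileSize = 30;
            int index = materialId - 1;  // IDs are 1-based, columns are 0-based
            return sf::IntRect(index * tileSize, 0, tileSize, tileSize);
        }

        // Usage: restrict a sprite to the grass tile (ID 2).
        sprite.setTextureRect(boundsForMaterial(2));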

    Read the article

  • Getting a mirrored mesh from my data structure

    - by Steve
    Here's the background: I'm in the beginning stages of an RTS game in Unity. I have a procedurally generated terrain with a perlin-noise height map, as well as a function to generate a river. The problem is that the graphical creation of the map is taking the data structure of the map and rotating it by 180 degrees. I noticed this problem when I was creating my rivers: I would set the river's height to flat, and noticed that the actual tiles that were flat in the graphical representation were flipped and mirrored. Here are 3 screenshots of the map from different angles: http://imgur.com/a/VLHHq As you can see, if you flipped (graphically) the river by 180 degrees on the z axis, it would fit where the terrain is flattened. I suspect it is being caused by a misunderstanding on my part of how vertices work. Here is a snippet of the code that is used. This code creates a new array of Tile objects, each of which holds the information for one tile, including its type, coordinate, height, and its 4 vertices:

        public DTileMap (int size_x, int size_y)
        {
            this.size_x = size_x;
            this.size_y = size_y;

            // Initialize map_data array of Tile objects
            map_data = new Tile[size_x, size_y];
            for (int j = 0; j < size_y; j++) {
                for (int i = 0; i < size_x; i++) {
                    map_data[i, j] = new Tile();
                    map_data[i, j].coordinate.x = (int)i;
                    map_data[i, j].coordinate.y = (int)j;
                    map_data[i, j].vertices[0] = new Vector3(i * GTileMap.TileMap.tileSize, map_data[i, j].Height, -j * GTileMap.TileMap.tileSize);
                    map_data[i, j].vertices[1] = new Vector3((i + 1) * GTileMap.TileMap.tileSize, map_data[i, j].Height, -j * GTileMap.TileMap.tileSize);
                    map_data[i, j].vertices[2] = new Vector3(i * GTileMap.TileMap.tileSize, map_data[i, j].Height, -(j - 1) * GTileMap.TileMap.tileSize);
                    map_data[i, j].vertices[3] = new Vector3((i + 1) * GTileMap.TileMap.tileSize, map_data[i, j].Height, -(j - 1) * GTileMap.TileMap.tileSize);
                }
            }
        }

    This code sets the river tiles to height 0:

        foreach (Tile t in map_data) {
            if (t.realType == "Water") {
                t.vertices[0].y = 0f;
                t.vertices[1].y = 0f;
                t.vertices[2].y = 0f;
                t.vertices[3].y = 0f;
            }
        }

    And below is the code to generate the actual graphics from the data:

        public void BuildMesh ()
        {
            DTileMap.DTileMap map = new DTileMap.DTileMap(size_x, size_z);

            int numTiles = size_x * size_z;
            int numTris = numTiles * 2;
            int vsize_x = size_x + 1;
            int vsize_z = size_z + 1;
            int numVerts = vsize_x * vsize_z;

            // Generate the mesh data
            Vector3[] vertices = new Vector3[numVerts];
            Vector3[] normals = new Vector3[numVerts];
            Vector2[] uv = new Vector2[numVerts];
            int[] triangles = new int[numTris * 3];

            int x, z;
            for (z = 0; z < vsize_z; z++) {
                for (x = 0; x < vsize_x; x++) {
                    normals[z * vsize_x + x] = Vector3.up;
                    uv[z * vsize_x + x] = new Vector2((float)x / size_x, 1f - (float)z / size_z);
                }
            }

            for (z = 0; z < vsize_z; z += 1) {
                for (x = 0; x < vsize_x; x += 1) {
                    if (x == vsize_x - 1 && z == vsize_z - 1) {
                        vertices[z * vsize_x + x] = DTileMap.DTileMap.map_data[x - 1, z - 1].vertices[3];
                    } else if (z == vsize_z - 1) {
                        vertices[z * vsize_x + x] = DTileMap.DTileMap.map_data[x, z - 1].vertices[2];
                    } else if (x == vsize_x - 1) {
                        vertices[z * vsize_x + x] = DTileMap.DTileMap.map_data[x - 1, z].vertices[1];
                    } else {
                        vertices[z * vsize_x + x] = DTileMap.DTileMap.map_data[x, z].vertices[0];
                        vertices[z * vsize_x + x + 1] = DTileMap.DTileMap.map_data[x, z].vertices[1];
                        vertices[(z + 1) * vsize_x + x] = DTileMap.DTileMap.map_data[x, z].vertices[2];
                        vertices[(z + 1) * vsize_x + x + 1] = DTileMap.DTileMap.map_data[x, z].vertices[3];
                    }
                }
            }

            for (z = 0; z < size_z; z++) {
                for (x = 0; x < size_x; x++) {
                    int squareIndex = z * size_x + x;
                    int triOffset = squareIndex * 6;
                    triangles[triOffset + 0] = z * vsize_x + x + 0;
                    triangles[triOffset + 2] = z * vsize_x + x + vsize_x + 0;
                    triangles[triOffset + 1] = z * vsize_x + x + vsize_x + 1;
                    triangles[triOffset + 3] = z * vsize_x + x + 0;
                    triangles[triOffset + 5] = z * vsize_x + x + vsize_x + 1;
                    triangles[triOffset + 4] = z * vsize_x + x + 1;
                }
            }

            // Create a new Mesh and populate with the data
            Mesh mesh = new Mesh();
            mesh.vertices = vertices;
            mesh.triangles = triangles;
            mesh.normals = normals;
            mesh.uv = uv;

            // Assign our mesh to our filter/renderer/collider
            MeshFilter mesh_filter = GetComponent<MeshFilter>();
            MeshCollider mesh_collider = GetComponent<MeshCollider>();
            mesh_filter.mesh = mesh;
            mesh_collider.sharedMesh = mesh;
            calculateMeshTangents(mesh);
            BuildTexture(map);
        }

    If this looks familiar to you, it's because I got most of it from Quill18; I've been slowly adapting it for my uses. Please also include any suggestions you have for my code. I'm still in the very early prototyping stage.

    Read the article

  • HTML5 Canvas Converting between cartesian and isometric coordinates

    - by Amir
    I'm having issues wrapping my head around the Cartesian-to-isometric coordinate conversion in HTML5 canvas. As I understand it, the process is twofold:

    (1) Scale down the y-axis by 0.5, i.e. ctx.scale(1,0.5); or ctx.setTransform(1,0,0,0.5,0,0); This supposedly produces the scaling matrix [1, 0; 0, 0.5].

    (2) Rotate the context by 45 degrees, i.e. ctx.rotate(Math.PI/4); This should produce the rotation matrix [cos(45), -sin(45); sin(45), cos(45)].

    This (somehow) results in the final matrix ctx.setTransform(2,-1,1,0.5,0,0); which I cannot seem to understand... How is this matrix derived? I cannot seem to produce it by multiplying the scaling and rotation matrices produced earlier. Also, if I write out the equations for the final transformation matrix, I get: newX = 2x + y, newY = -x + y/2. But this doesn't seem to be correct. For example, the following code draws an isometric tile at cartesian coordinates (500, 100): ctx.setTransform(2,-1,1,0.5,0,0); ctx.fillRect(500, 100, width*2, height); When I check the result on the screen, the actual coordinates are (285, 215), which do not satisfy the equations I produced earlier... So what is going on here? I would be very grateful if you could: (1) help me understand how the final isometric transformation matrix is derived; (2) help me produce the correct equation for finding the on-screen coordinates of an isometric projection. Many thanks and kind regards
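    For reference, multiplying the two steps out by hand does not give (2,-1,1,0.5). Canvas transforms post-multiply, so scale(1,0.5) followed by rotate(45 degrees) yields S*R, and setTransform(a,b,c,d,e,f) maps a point to (a*x + c*y + e, b*x + d*y + f). A sketch of the composition in plain canvas JavaScript (the numbers are just cos/sin of 45 degrees):

        const theta = Math.PI / 4;
        const a = Math.cos(theta);        //  0.7071
        const b = 0.5 * Math.sin(theta);  //  0.3536
        const c = -Math.sin(theta);       // -0.7071
        const d = 0.5 * Math.cos(theta);  //  0.3536
        ctx.setTransform(a, b, c, d, 0, 0);

        // A cartesian point (x, y) therefore lands on screen at:
        //   screenX = a*x + c*y = (x - y) * 0.7071
        //   screenY = b*x + d*y = (x + y) * 0.3536
        // The integer-friendly variant setTransform(1, 0.5, -1, 0.5, 0, 0)
        // gives the familiar screenX = x - y, screenY = (x + y) / 2.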

    Read the article

  • Maya is lagging in a specific way...?

    - by Aerovistae
    My Maya installation worked perfectly. It is not my computer. Something caused it to stop working overnight, somehow. When I try to drag a vertex or something like that, it moves the vertex, but then I have to click about 3 times somewhere outside the mesh before the actual mesh catches up and follows the vertex. Until I do that, it just stays as it was, with a floating vertex somewhere inside or outside it. It makes modeling borderline impossible and is completely infuriating. What ought to happen is what we're all used to: as I move the vertex, the mesh follows it actively, so I can see what it looks like at every moment until I release the vertex in its new position. Another weird thing: this only happens with complex meshes, around a couple thousand faces; a simple cube works fine. What gives? Anybody?

    Read the article

  • Level Representation in a 2D Game

    - by meszar.imola
    I would like to create a 2D game where a character moves through a stage/level. My stage would be static, constructed from little blocks, similar to the well-known Mario games: some of the elements represent places where the character can step, and where an element is missing, the character should fall. My problem is how to represent this programmatically. My first thought was to represent the stage with a vector of boolean elements, depending on whether each element of the stage is present or missing. But this means I have to check, on every change of my character's x or y position, whether there is a stage element underneath (and if not, simulate the character falling). I don't think that is the best practice; it's not an elegant solution. Can you give me some advice on how to represent the stage?
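    A grid of tile types (rather than a flat boolean vector) is one common representation; the "is there ground below me" check then becomes a single array lookup. A minimal sketch in plain Java (all names illustrative):

        enum Tile { EMPTY, SOLID }

        class Stage {
            private final Tile[][] tiles;  // tiles[row][col]
            private final int tileSize;    // pixels per tile, e.g. 16

            Stage(Tile[][] tiles, int tileSize) {
                this.tiles = tiles;
                this.tileSize = tileSize;
            }

            // True if there is ground directly under world position (x, y).
            boolean hasGroundAt(float x, float y) {
                int col = (int) (x / tileSize);
                int row = (int) (y / tileSize) + 1;  // the tile one row below the feet
                if (row < 0 || row >= tiles.length || col < 0 || col >= tiles[0].length)
                    return false;  // off the map: let the character fall
                return tiles[row][col] == Tile.SOLID;
            }
        }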

    Read the article

  • How can I spawn a sprite based on a condition after the game starts?

    - by teddyweedy
    How do I spawn a sprite at any time after the game starts? I'm using FlashDevelop with Flixel. I tried doing it in override public function create():void, but that only works at the beginning; guarding it with if (FlxG.score == 1) there doesn't work. I also tried it in override public function update():void. That works and the sprite moves, but it spawns a new sprite every frame, leaving multiple sprites behind. I also tried FlxGroup, but to no avail.
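    Since update() runs every frame, the condition stays true and keeps adding sprites; one fix is a flag that remembers the spawn already happened. A sketch in Flixel-style ActionScript 3 (the position and sprite are placeholders):

        private var spawned:Boolean = false;

        override public function update():void
        {
            super.update();
            if (FlxG.score == 1 && !spawned)
            {
                spawned = true;                // guard: only spawn once
                add(new FlxSprite(100, 100));  // hypothetical position
            }
        }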

    Read the article

  • How to handle different frame rates on different devices?

    - by Fenwick
    I am not quite sure how frames per second work on a web page. I have a Canvas game that involves moving an image from point A to B and measuring the time elapsed. The code can be as simple as:

        var timeStamp = Date.now();
        function update(){
            obj.y += obj.speed;
            text = "Time: " + (Date.now() - timeStamp) + "ms";
        }

    The function update() is called every frame. The problem is that the time elapsed differs from device to device: it is pretty short on my PC, but gets longer on my iPad, and is much longer on my cell phone. I thought it was because the FPS is lower on mobile devices, so instead of calling update() every frame, I called it every 1ms using setInterval. But this did not solve the problem. In my understanding, the function passed to setInterval is invoked based on the increment in system time, not on the frame rate, so it should fix the problem. Am I missing anything here? If the setInterval function is called based on FPS, is there any way to get around the FPS difference across devices? On a side note, I have sort of a "water simulator" on the same canvas. It involves redrawing about 60 objects, which can be 600x600 pixels, every frame, so it could be a frame-rate killer. I am using Phaser.js but not really using much of its functionality, if that helps.
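    The usual fix is to make movement a function of elapsed time instead of frame count, so the object covers the same distance per second regardless of FPS. A sketch in plain JavaScript (speedPerSecond replaces the per-frame obj.speed and is an assumed field):

        var last = Date.now();

        function update() {
            var now = Date.now();
            var dt = (now - last) / 1000;  // seconds since the previous frame
            last = now;
            obj.y += obj.speedPerSecond * dt;
        }

    Note also that browsers clamp timer delays (commonly to a 4ms minimum, and much more in background tabs), so a 1ms setInterval will not actually run 1000 times per second.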

    Read the article

  • Costs/profit of/when starting an indie company

    - by Jack
    In short, I want to start a game company. I do not have much coding experience (just a basic understanding and the ability to write basic programs), nor any graphic design, audio mixing, or other technical experience. However, I do have a lot of ideas, great analytical skills, and a very logical approach to life. I do not have any friends who are even remotely technical (or creative in regards to games, for that matter). So now that we've cleared that up, my question is this: how much, minimally, would it cost me to start such a company? I know that a game can be developed in under half a year, which means the company would have to operate for half a year beforehand, and that's assuming the people working on the first project do their jobs well, don't leave game-breaking bugs or a bunch of minor bugs, etc. So how much would it cost me, and what would be the likely profit in half a year? I'm looking at minimal costs here, since to do it I would have to sell my current apartment and buy a new, smaller one, pay taxes, and likely move to the US/CA/UK to be closer to technologically advanced people (and to speak the language, of course). EDIT: I'm looking at a small project for starters, not a huge AAA title.

    Read the article

  • What are my tool options to prototype a 2D multiplayer game?

    - by Asher Einhorn
    I'm looking for the best tool to let me quickly put together a 2D game that relies largely on networking. It's extremely likely that this game will require a server-side program to run constantly. I have little experience with these things, and since it's a prototype I'd like the easiest options for achieving this. I am looking to make this game for the web and mobile devices, although at present I only have access to iOS hardware (no Android etc.). I just want to get the bare bones of this set up so I can test it at the earliest opportunity to see if it's fun.

    Read the article

  • What are the steps taken by this GLSL code?

    - by user827992
        void main(void)
        {
            vec2 pos = mod(gl_FragCoord.xy, vec2(50.0)) - vec2(25.0);
            float dist_squared = dot(pos, pos);

            gl_FragColor = (dist_squared < 400.0)
                ? vec4(.90, .90, .90, 1.0)
                : vec4(.20, .20, .40, 1.0);
        }

    taken from http://people.freedesktop.org/~idr/OpenGL_tutorials/03-fragment-intro.html Now, this looks really trivial and simple, but my problem is with the mod function. This function is taking 2 vec2 inputs, but it is supposed to take just 2 atomic arguments according to the official documentation; also, this function makes implicit use of the floor function, which again only accepts 1 atomic argument. Can someone explain this to me step by step and point out what I'm not getting here? Is it some kind of OpenGL trick? An OpenGL math trick? In the GLSL docs I always find an explicit reference to the types accepted by a function, and vec2 isn't there.
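    For what it's worth, GLSL's built-in functions like mod() and floor() are declared for the vector types as well as for float, and they operate component-wise. So the line in question behaves as in this GLSL sketch spelling out the equivalence:

        // mod(vec2, vec2) applies mod to each component separately:
        vec2 pos = mod(gl_FragCoord.xy, vec2(50.0)) - vec2(25.0);

        // ...which is equivalent to:
        vec2 pos2 = vec2(mod(gl_FragCoord.x, 50.0) - 25.0,
                         mod(gl_FragCoord.y, 50.0) - 25.0);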

    Read the article

  • Converting Degrees to X and Y Coordinate change

    - by gopgop
    I am using a float positioning system in my game, i.e. float x, y, z. Now I want to get the location of the mouse and then fire an arrow at it. X0 = the player's X location, X1 = the mouse X location, Y0 = the player's Y location, Y1 = the mouse Y location. I want to make a method whose parameter is an angle in degrees and which sets my Yspeed and Xspeed accordingly, so the arrow travels from the player's x,y to the mouse's x,y. How would I accomplish this?
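    The standard trick is atan2 to get the angle from the player to the mouse, then cos/sin to split a scalar speed into X and Y components. A sketch in plain Java (Xspeed, Yspeed and speed are the question's names or assumed fields):

        // Angle in degrees from the player (x0, y0) toward the mouse (x1, y1).
        float angle = (float) Math.toDegrees(Math.atan2(y1 - y0, x1 - x0));

        // Convert degrees to per-tick velocity components.
        void setSpeedFromDegrees(float degrees, float speed) {
            double radians = Math.toRadians(degrees);
            Xspeed = (float) (Math.cos(radians) * speed);
            Yspeed = (float) (Math.sin(radians) * speed);
        }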

    Read the article

  • How to design a transparent screen in libGDX

    - by ved
    This question is for LibGDX geeks. I want to make a transparent screen in my game. For example, when a level completes, I want a transparent screen to pop up showing the player's high score, buttons to navigate to the next level, etc., like the level-complete screen in Angry Birds. This type of screen could also be used when the user clicks the pause button, to show a pause screen. Please guide me on how to design this kind of screen, or correct me if making screens transparent is the wrong approach for this kind of situation.
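    One common approach is not a separate "transparent screen" at all: keep rendering the game underneath, then draw a translucent full-screen rectangle and the menu widgets on top of it. A libGDX sketch (assumes a ShapeRenderer instance and that the game scene was just drawn):

        Gdx.gl.glEnable(GL20.GL_BLEND);
        Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
        shapeRenderer.begin(ShapeRenderer.ShapeType.Filled);
        shapeRenderer.setColor(0f, 0f, 0f, 0.6f);  // 60% black dims the scene behind
        shapeRenderer.rect(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        shapeRenderer.end();
        Gdx.gl.glDisable(GL20.GL_BLEND);
        // ...then draw the score label and buttons on top, e.g. with a scene2d Stage.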

    Read the article

  • How does this circle collision detection math work?

    - by Griffin
    I'm going through the wildbunny blog to learn about collision detection, and I'm confused about how the vectors he's talking about come into play. Here's the part that confuses me:

        p = ||A-B|| - (r1+r2)

    The two spheres are penetrating by distance p. We would also like the penetration vector so that we can correct the penetration once we discover it. This is the vector that moves both circles to the point where they just touch, correcting the penetration. Importantly, it is not just any vector that does this; it is the only vector which corrects the penetration by moving the minimum amount. This is important because we only want to correct the error, not introduce more by moving too much when we correct, or too little.

        N = (A-B) / ||A-B||
        P = N*p

    Here we have calculated the normalised vector N between the two centres and the penetration vector P by multiplying our unit direction by the penetration distance.

    I understand that p is the distance by which the circles penetrate, but I don't get what exactly N and P are. It seems to me N is just the coordinates of the third point of the right triangle formed by points A and B, with (A-B) then being divided by the hypotenuse of that triangle, i.e. the distance between A and B (||A-B||). What's the significance of this? Also, what is the penetration vector used for? It seems to me like a movement that one of the circles would perform to get un-penetrated.
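    In case it helps to see it as code: N is not a point, it's a direction (a length-1 vector pointing from B toward A), and P is that direction stretched to the penetration depth. A 2D sketch in plain Java:

        float dx = ax - bx, dy = ay - by;               // the vector A - B
        float dist = (float) Math.sqrt(dx*dx + dy*dy);  // ||A - B||
        float p = dist - (r1 + r2);                     // negative when overlapping

        float nx = dx / dist, ny = dy / dist;           // N: unit direction, B toward A
        float px = nx * p, py = ny * p;                 // P: N scaled by penetration

        // To separate the circles by the minimum amount, slide them apart along N,
        // e.g. move A by -P/2 and B by +P/2 (or move just one of them by the full P).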

    Read the article

  • Using Instance Nodes, worth it?

    - by Twitch
    I am making a 2D game with various environments containing lots and lots of objects. There is a forest scene with about 1200 objects in total (mainly trees), of which around 100 are visible on camera at any given time as you move through the level. These comprise around 20 different kinds of trees and other props, and each object is usually 2-6 triangles with a transparent texture. My developer asked me to replace each object in the scene with a node, keeping only a minimal number of actual objects (300+ or so?, since there are a few modified unique meshes) that he can instantiate, to keep the game light. Is this actually effective, and if so, by how much? I've read about draw calls and such, and I suppose that if I combine each texture (10 kinds of trees) into 1 mesh it will have the same effect?

    Read the article

  • Cube chunk via list ToArray()

    - by Christian Frantz
    I've created a list of vertices that I build up for each cube made in my array "cubes". When each cube is created, SetUpVertices is called, a method that stores the 8 vertices of the cube. At the end of the list creation, I create a vertex buffer and set the data of the list that contains the vertices of all 25 cubes to that vertex buffer, effectively creating a "chunk" of cubes. The problem is that an InvalidOperationException ("The array is not the correct size for the amount of data requested.") is thrown at the line vertices.ToArray(). I don't have an array for this, as the number of cubes will be changing and arrays aren't dynamic. What could be the cause of this?

        for (int x = 0; x < 5; x++)
        {
            for (int z = 0; z < 5; z++)
            {
                SetUpVertices();
                cubes.Add(new Cube(device, new Vector3(x, map[x, z], z), color));
            }
        }

        vertexBuffer = new VertexBuffer(device, typeof(VertexPositionColor), 8, BufferUsage.WriteOnly);
        vertexBuffer.SetData<VertexPositionColor>(vertices.ToArray());
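    One thing worth checking: the buffer is declared to hold 8 vertices, but after the loops the list holds the vertices of all 25 cubes, and SetData throws when the buffer size and the array size disagree. A sketch of sizing the buffer from the list instead (same XNA calls, just reordered):

        VertexPositionColor[] data = vertices.ToArray();
        vertexBuffer = new VertexBuffer(device, typeof(VertexPositionColor),
                                        data.Length, BufferUsage.WriteOnly);
        vertexBuffer.SetData<VertexPositionColor>(data);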

    Read the article

  • How do I accomplish fading texture trails in UDK?

    - by kdshay
    I would like to know how to leave a fading texture/material trail in UDK. For example (I'm not sure if there is a special name for this effect): a character may leave footprints that fade after x seconds, or a tank may leave a track trail as in Civilization IV. Here is another example of this type of effect; skip to 1:00 and watch the green slime texture: http://www.youtube.com/watch?v=KdJIauWjE8s How do I accomplish this effect in UDK? Any good tutorials? Thank you.

    Read the article

  • How does OpenGL ES 2 assemble primitives?

    - by stephelton
    Two things I'm quite confused about. 1) OpenGL ES 2.0 creates primitives before the vertex shader is invoked. Why, then, does it not automatically provide the vertex shader with the position of the vertex? 2) OpenGL ES 2.0 supports glDrawElements(), but it does not support glEnableClientState() or GL_VERTEX_ARRAY, so how can this call possibly be used to construct primitives? NOTE: this is OpenGL ES 2.0, NOT normal OpenGL! Thanks!
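    Regarding (2): in ES 2.0 the fixed-function vertex arrays are replaced by generic vertex attributes, which you bind to shader inputs yourself; glDrawElements then pulls from those. A minimal C sketch (the attribute name "a_position" is an assumption):

        GLint posLoc = glGetAttribLocation(program, "a_position");
        glEnableVertexAttribArray(posLoc);
        glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 0, vertices);
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);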

    Read the article

  • What knowledge would I need to make a good simulation game?

    - by Skeith
    I have an idea for a game like Theme Park, but I don't know how simulation games are made. I am not some noob on his first game, so I'd appreciate constructive answers instead of "it's hard, don't do it". What I want to know is how simulation game mechanics are put together. I figure it would be heavier on the AI than normal games, and not knowing much about AI, I would like to know some programming techniques I should look into for this style of game. Specific techniques please, not just a book on AI. What sort of architecture would be used? I guess it would have some sort of probability engine with pre-designed events that are triggered based on the AI state. Would it use an FSM or be purely event-driven? Any information on how a simulation game functions would be cool.

    Read the article

  • What is a good method for coloring textures based on a palette in XNA?

    - by Bob
    I've been trying to work on a game with the look of an 8-bit game using XNA, specifically using the NES as a guide. The NES has a very specific palette, and each sprite can use up to 4 colors from that palette. How could I emulate this? The way I currently accomplish it is with a texture whose values act as indexes into an array of colors I pass to the GPU. I imagine there must be a better way than this, but maybe this is the best way? I don't want to simply make sure I draw every sprite with the right colors, because I want to be able to dynamically alter the palette. I'd also prefer not to alter the texture directly using the CPU.
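    An index texture resolved against a palette in a pixel shader, much like what the question describes, is the usual XNA-era approach, and it keeps the palette dynamic without touching the texture on the CPU. A sketch of the shader side (HLSL as used by XNA effects; storing the palette in a 1 x N texture instead of a shader array is an assumed variation on the question's setup):

        texture IndexTexture;    // red channel stores a palette index
        texture PaletteTexture;  // 1 x N strip holding the actual colors

        sampler IndexSampler = sampler_state
        { Texture = <IndexTexture>; MinFilter = Point; MagFilter = Point; };
        sampler PaletteSampler = sampler_state
        { Texture = <PaletteTexture>; MinFilter = Point; MagFilter = Point; };

        float4 PalettizePS(float2 uv : TEXCOORD0) : COLOR0
        {
            float index = tex2D(IndexSampler, uv).r;         // 0..1 into the strip
            return tex2D(PaletteSampler, float2(index, 0));  // fetch the real color
        }

    Swapping palettes is then just binding a different (or updated) PaletteTexture.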

    Read the article

  • SpriteBatch.Begin() making my model not render correctly

    - by manning18
    I was trying to output some debug information using DrawString when I noticed my model was suddenly being rendered as if it were inside-out (as if culling had been disabled or something) and the texture maps weren't applied. I commented out the DrawString method until I had only SpriteBatch.Begin() and .End(), and that was enough to cause the model rendering corruption; when I commented those calls out, the model rendered correctly. What could this be a symptom of? I've stripped the code down to the bare minimum to isolate the problem, and this is what I noticed. Draw code below (as stripped down as possible):

        GraphicsDevice.Clear(Color.LightGray);

        foreach (ModelMesh mesh in TIEAdvanced.Meshes)
        {
            foreach (Effect effect in mesh.Effects)
            {
                if (effect is BasicEffect)
                    ((BasicEffect)effect).EnableDefaultLighting();
                effect.CurrentTechnique.Passes[0].Apply();
            }
        }

        spriteBatch.Begin();
        spriteBatch.DrawString(spriteFont, "Camera Position: " + cameraPosition.ToString(), new Vector2(10, 10), Color.Blue);
        spriteBatch.End();

        GraphicsDevice.DepthStencilState = DepthStencilState.Default;
        TIEAdvanced.Draw(Matrix.CreateScale(0.025f), viewMatrix, projectionMatrix);
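    For context: SpriteBatch.Begin() is documented to change several GraphicsDevice states (blend, depth-stencil, rasterizer, sampler) and does not restore them, so 3D geometry drawn afterwards inherits sprite-friendly states. A sketch of resetting all of them before the 3D draw, not just the depth-stencil state (XNA 4.0 state objects):

        GraphicsDevice.BlendState = BlendState.Opaque;
        GraphicsDevice.DepthStencilState = DepthStencilState.Default;
        GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
        GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;  // texture wrapping/filtering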

    Read the article

  • Can i change the order of these OpenGL / Win32 calls?

    - by Adam Naylor
    I've been adapting the NeHe OpenGL/Win32 code to be more object-oriented, and I don't like the way some of the calls are structured. The example has the following pseudo-structure:

        Register window class
        Change display settings with a DEVMODE *
        Adjust window rect
        Create window
        Get DC
        Find closest matching pixel format *
        Set the pixel format to closest match *
        Create rendering context *
        Make that context current *
        Show the window
        Set it to foreground
        Set it to having focus
        Resize the GL scene *
        Init GL *

    The starred points are what I want to move into a rendering class (the rest are what I see as pure Win32 calls), but I'm not sure if I can call them after the Win32 calls. Essentially what I'm aiming for is to encapsulate the Win32 calls into a Platform::Initiate() type method and the rest into a sort of Renderer::Initiate() method. So my question essentially boils down to: "Would OpenGL allow these methods to be called in this order?"

        Register window class
        Adjust window rect
        Create window
        Get DC
        Show the window
        Set it to foreground
        Set it to having focus
        Change display settings with a DEVMODE
        Find closest matching pixel format
        Set the pixel format to closest match
        Create rendering context
        Make that context current
        Resize the GL scene
        Init GL

    (obviously passing through the appropriate window handles and device contexts.) Thanks in advance.
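    For what it's worth, the two hard ordering constraints WGL imposes here are: SetPixelFormat on the DC before wglCreateContext, and wglMakeCurrent before any gl* call; the proposed order respects both (whether the window still matches the new display mode after ChangeDisplaySettings is a separate Win32 concern). A C++ sketch of the Renderer::Initiate() side under those assumptions (ResizeGLScene and InitGL stand for the NeHe-derived helpers):

        bool Renderer::Initiate(HDC hdc, DEVMODE& devmode, int width, int height)
        {
            ChangeDisplaySettings(&devmode, CDS_FULLSCREEN);  // independent of GL setup

            PIXELFORMATDESCRIPTOR pfd = { /* ...as in the NeHe example... */ };
            int format = ChoosePixelFormat(hdc, &pfd);  // find closest match
            SetPixelFormat(hdc, format, &pfd);          // must precede context creation

            HGLRC hrc = wglCreateContext(hdc);
            wglMakeCurrent(hdc, hrc);                   // must precede any gl* call

            ResizeGLScene(width, height);
            InitGL();
            return true;
        }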

    Read the article

  • Can you shade a specific section of a sprite? If so, how? [Java]

    - by l5p4ngl312
    I have been working on an isometric Minecraft-esque game engine for a strategy game I plan on making. As you can see from the screenshot, it really needs some sort of shading: it is difficult to distinguish between separate elevations when the camera is facing away from the slope, because everything is the same shade. So my question is: can I shade just a specific section of a sprite? If so, how? All of those blocks are just sprites, so if I shaded the entire image, it would shade the whole block. I am using LWJGL. Here's a link to a screenshot from the engine: http://i44.tinypic.com/qxqlix.jpg
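    One way to shade part of a sprite without editing the image is per-vertex tinting: draw the block's quad with darker colors on some vertices, and the tint interpolates across the textured quad. A sketch using LWJGL's legacy immediate mode (x, y, w, h and textureId are assumed variables):

        GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureId);
        GL11.glBegin(GL11.GL_QUADS);
        GL11.glColor3f(1.0f, 1.0f, 1.0f);  // top edge at full brightness
        GL11.glTexCoord2f(0, 0); GL11.glVertex2f(x, y);
        GL11.glTexCoord2f(1, 0); GL11.glVertex2f(x + w, y);
        GL11.glColor3f(0.6f, 0.6f, 0.6f);  // bottom edge darkened
        GL11.glTexCoord2f(1, 1); GL11.glVertex2f(x + w, y + h);
        GL11.glTexCoord2f(0, 1); GL11.glVertex2f(x, y + h);
        GL11.glEnd();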

    Read the article

sdl stencil buffer (SDL + OpenGL)

    - by noddy
    I am trying to use the stencil buffer to render a reflection, working with SDL and OpenGL. When I call SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8), I get a return value of 0 indicating success, but when I query the size actually allocated using SDL_GL_GetAttribute(SDL_GL_STENCIL_SIZE, &i), I get a value of 0 for my stencil buffer, due to which I am not getting the desired rendering. Can someone help me correct my mistake? Is there some other initialization also required? Thanks
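    One detail that commonly bites here: SDL_GL_SetAttribute only takes effect if it is called before the OpenGL window/context is created, and SDL_GL_GetAttribute is only meaningful afterwards. A sketch of the ordering with the SDL 1.2-era API (assuming that is the version in use):

        SDL_Init(SDL_INIT_VIDEO);
        SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8);  // request BEFORE creating the window
        SDL_Surface* screen = SDL_SetVideoMode(800, 600, 32, SDL_OPENGL);

        int stencilBits = 0;
        SDL_GL_GetAttribute(SDL_GL_STENCIL_SIZE, &stencilBits);  // query AFTER creation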

    Read the article
