Search Results

Search found 25377 results on 1016 pages for 'development'.

Page 533/1016

  • Fast lighting with multiple lights

    - by codymanix
    How can I implement fast lighting with multiple lights? I don't want to restrict the player: he can place an unlimited number of possibly overlapping (point) lights into the level. The problem is that shaders containing the dynamic loops needed to calculate the lighting tend to be very slow. My idea was to compile the shader n times, where n is the number of lights; if n is known at compile time, the loops can be unrolled automatically. Is it possible to generate n versions of the same shader that differ only in the number of lights? At runtime I could then decide which shader to use for which part of the level.
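
    A minimal sketch of what the runtime side of that idea could look like, assuming the variants have already been built offline (for example, several content entries of the same .fx file compiled with different NUM_LIGHTS defines); the content names, the MaxLights bound, and the method names are all assumptions, not code from the original question:

        // Inside the game class: pick a precompiled shader variant by light count.
        const int MaxLights = 8;                                   // assumed upper bound
        Effect[] lightingVariants = new Effect[MaxLights + 1];

        void LoadVariants(ContentManager content)
        {
            // Assumes "LightingEffect_1" .. "LightingEffect_8" were compiled offline,
            // each from the same source with a different NUM_LIGHTS define.
            for (int n = 1; n <= MaxLights; n++)
                lightingVariants[n] = content.Load<Effect>("LightingEffect_" + n);
        }

        Effect PickVariant(int lightCount)
        {
            // Clamp so an unexpected light count never indexes out of range.
            int n = Math.Min(Math.Max(lightCount, 1), MaxLights);
            return lightingVariants[n];
        }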

    Read the article

  • Why does RenderTarget2D overwrite other objects when I try to put text on a model?

    - by cad
    I am trying to draw an object composed of two cubes (A and B), one on top of the other (for now I have them a little farther apart). I am able to do it, and this is the result: cube A is the blue one and cube B is the one with brown text that comes from a PNG texture. But I want to be able to pass arbitrary text as a parameter for cube B. I have tried what @alecnash suggested in his question, but for some reason when I try to draw cube B, cube A disappears and everything turns purple. This is my draw code:

        public void Draw(GraphicsDevice graphicsDevice, SpriteBatch spriteBatch, Matrix viewMatrix, Matrix projectionMatrix)
        {
            graphicsDevice.BlendState = BlendState.Opaque;
            graphicsDevice.DepthStencilState = DepthStencilState.Default;
            graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
            graphicsDevice.SamplerStates[0] = SamplerState.LinearClamp;

            // CUBE A
            basicEffect.View = viewMatrix;
            basicEffect.Projection = projectionMatrix;
            basicEffect.World = Matrix.CreateTranslation(ModelPosition);
            basicEffect.VertexColorEnabled = true;

            foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
            {
                pass.Apply();
                drawCUBE_TOP(graphicsDevice);
                drawCUBE_Floor(graphicsDevice);
                DrawFullSquareStripesFront(graphicsDevice, _numStrips, Color.Red, Color.Blue, _levelPercentage);
                DrawFullSquareStripesLeft(graphicsDevice, _numStrips, Color.Red, Color.Blue, _levelPercentage);
                DrawFullSquareStripesRight(graphicsDevice, _numStrips, Color.Red, Color.Blue, _levelPercentage);
                DrawFullSquareStripesBack(graphicsDevice, _numStrips, Color.Red, Color.Blue, _levelPercentage);
            }

            // CUBE B
            // Set the World matrix which defines the position of the cube
            texturedCubeEffect.World = Matrix.CreateTranslation(ModelPosition);
            // Set the View matrix which defines the camera and what it's looking at
            texturedCubeEffect.View = viewMatrix;
            // Set the Projection matrix which defines how we see the scene (field of view)
            texturedCubeEffect.Projection = projectionMatrix;
            // Enable textures on the cube effect; this is necessary to texture the model
            texturedCubeEffect.TextureEnabled = true;
            Texture2D a = SpriteFontTextToTexture(graphicsDevice, spriteBatch, arialFont, "TEST ", Color.Black, Color.GhostWhite);
            texturedCubeEffect.Texture = a;
            //texturedCubeEffect.Texture = cubeTexture;
            // Enable some pretty lights
            texturedCubeEffect.EnableDefaultLighting();
            // apply the effect and render the cube
            foreach (EffectPass pass in texturedCubeEffect.CurrentTechnique.Passes)
            {
                pass.Apply();
                cubeToDraw.RenderToDevice(graphicsDevice);
            }
        }

        private Texture2D SpriteFontTextToTexture(GraphicsDevice graphicsDevice, SpriteBatch spriteBatch, SpriteFont font, string text, Color backgroundColor, Color textColor)
        {
            Vector2 Size = font.MeasureString(text);
            RenderTarget2D renderTarget = new RenderTarget2D(graphicsDevice, (int)Size.X, (int)Size.Y);
            graphicsDevice.SetRenderTarget(renderTarget);
            graphicsDevice.Clear(Color.Transparent);
            spriteBatch.Begin();
            // have to redo the ColorTexture
            //spriteBatch.Draw(ColorTexture.Create(graphicsDevice, 1024, 1024, backgroundColor), Vector2.Zero, Color.White);
            spriteBatch.DrawString(font, text, Vector2.Zero, textColor);
            spriteBatch.End();
            graphicsDevice.SetRenderTarget(null);
            return renderTarget;
        }

    The way I generate the texture with dynamic text is:

        Texture2D a = SpriteFontTextToTexture(graphicsDevice, spriteBatch, arialFont, "TEST ", Color.Black, Color.GhostWhite);

    After commenting out several parts to see what caused the problem, it seems to be located in this line:

        graphicsDevice.SetRenderTarget(renderTarget);
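
    A likely culprit, though the original post doesn't confirm it: in XNA 4.0 the back buffer defaults to RenderTargetUsage.DiscardContents, so switching render targets in the middle of Draw throws away everything already rendered (cube A) and leaves the purple debug clear color behind. A minimal sketch of one workaround, building the text texture before anything hits the back buffer and caching it; the cache field is an assumption added around the original code:

        private Texture2D cachedTextTexture;   // hypothetical field on the same class

        public void Draw(GraphicsDevice graphicsDevice, SpriteBatch spriteBatch,
                         Matrix viewMatrix, Matrix projectionMatrix)
        {
            // Build the text texture first, while nothing has been drawn to the back buffer yet,
            // and cache it so the render target is not re-created every frame.
            if (cachedTextTexture == null)
                cachedTextTexture = SpriteFontTextToTexture(graphicsDevice, spriteBatch, arialFont,
                                                            "TEST ", Color.Black, Color.GhostWhite);

            // ... draw cube A exactly as before ...

            texturedCubeEffect.Texture = cachedTextTexture;

            // ... draw cube B exactly as before ...
        }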

    Read the article

  • Instead of the specified Texture, black circles on a green background are getting rendered. Why?

    - by vinzBad
    I'm trying to render a texture via OpenGL, but instead of the texture, black circles on a green background are rendered (they scale depending on what the rotation of the texture is). Example: The texture I'm trying to render is the following: This is the code I use to render the texture; it's located in my Sprite class.

        public void Render()
        {
            Matrix4 matrix = Matrix4.CreateTranslation(-OriginX, -OriginY, 0)
                           * Matrix4.CreateRotationZ(Rotation)
                           * Matrix4.CreateTranslation(X, Y, 0);

            Vector2[] corners =
            {
                new Vector2(0, 0),          // top left
                new Vector2(Width, 0),      // top right
                new Vector2(Width, Height), // bottom right
                new Vector2(0, Height)      // bottom left
            };

            // copy the corners to the uv coordinates
            Vector2[] uv = corners.ToArray<Vector2>();

            // transform the coordinates
            for (int i = 0; i < 4; i++)
                corners[i] = new Vector2(Vector3.Transform(new Vector3(corners[i]), matrix));

            //GL.Color3(TintColor);
            GL.BindTexture(TextureTarget.Texture2D, _ID);
            GL.Begin(BeginMode.Quads);
            {
                for (int i = 0; i < 4; i++)
                {
                    GL.TexCoord2(uv[i]);
                    GL.Vertex3(corners[i].X, corners[i].Y, _layerDepth);
                }
            }
            GL.End();

            if (EnableDebugDraw)
            {
                GL.Color3(Color.Violet);
                GL.PointSize(3);
                GL.Begin(BeginMode.Points);
                {
                    for (int i = 0; i < 4; i++)
                        GL.Vertex2(corners[i]);
                }
                GL.End();

                GL.Color3(Color.Green);
                GL.Begin(BeginMode.Points);
                GL.Vertex2(X, Y);
                GL.End();
            }
        }

    This is how I set up OpenGL:

        public static void SetupGL()
        {
            GL.Enable(EnableCap.AlphaTest);
            GL.AlphaFunc(AlphaFunction.Greater, 0.1f);
            GL.Enable(EnableCap.Texture2D);
            GL.Hint(HintTarget.PerspectiveCorrectionHint, HintMode.Nicest);
        }

    With this function I load the texture:

        public static uint LoadTexture(string path)
        {
            uint id;
            GL.GenTextures(1, out id);
            GL.BindTexture(TextureTarget.Texture2D, id);

            Bitmap bitmap = new Bitmap(path);
            BitmapData data = bitmap.LockBits(
                new System.Drawing.Rectangle(0, 0, bitmap.Width, bitmap.Height),
                ImageLockMode.ReadOnly,
                System.Drawing.Imaging.PixelFormat.Format32bppArgb);
            GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0,
                OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);
            bitmap.UnlockBits(data);

            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
            return id;
        }

    And here I call Sprite.Render():

        protected override void OnRenderFrame(FrameEventArgs e)
        {
            GL.ClearColor(Color.MidnightBlue);
            GL.Clear(ClearBufferMask.ColorBufferBit);
            _sprite.Render();
            SwapBuffers();
            base.OnRenderFrame(e);
        }

    Since I took this code from the Textures example from OpenTK, I don't understand why it doesn't work.
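
    One thing that stands out (an observation, not confirmed by the poster): uv is copied from corners, so the texture coordinates run from (0,0) to (Width,Height) in pixels instead of from 0 to 1. With OpenGL's default GL_REPEAT wrap mode that tiles the texture Width by Height times, which can easily look like a field of small circles on a background color. A minimal sketch of normalized coordinates, keeping the poster's corner order:

        // Texture coordinates should span 0..1 regardless of the sprite's pixel size.
        Vector2[] uv =
        {
            new Vector2(0f, 0f), // top left
            new Vector2(1f, 0f), // top right
            new Vector2(1f, 1f), // bottom right
            new Vector2(0f, 1f)  // bottom left
        };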

    Read the article

  • Vertex data: split into separate buffers or kept in one structure?

    - by kiba2
    Is it better to have all vertex data in one structure, like this:

        class MyVertex
        {
            int x, y, z;
            int u, v;
            int normalx, normaly, normalz;
        }

    or to have each component (position, normal, texture coordinates) in separate arrays/buffers? To me it always seemed logical to keep the data grouped together in one structure, because the components are the same for each instance of a shared vertex, and that seems to be true for things like character models (for example, the normal should be an average of the adjacent normals for smooth lighting). One case where this doesn't seem to work is other kinds of meshes, say a cube, where the texture coordinates of each face may be the same, but sharing vertices between faces would force those shared vertices to carry different texture coordinates. Does everybody normally keep them separate? Won't that make the data less space efficient if there has to be a set of texture coordinates and normals for every triangle vertex (they won't be indexed)? Can OpenGL even handle this mix of indexed (for position) and non-indexed buffers in the same VBO?
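
    On the last question: glDrawElements uses a single index per vertex, so positions and texture coordinates cannot be indexed independently; a vertex whose position is shared but whose UV differs simply has to be duplicated. Whether the attributes then live in one interleaved buffer or in several separate ones is mostly a memory-layout choice. A minimal OpenTK-flavoured sketch of the interleaved layout (the attribute locations 0-2 and the sample data are assumptions):

        // One interleaved buffer: position (3 floats), normal (3 floats), uv (2 floats) per vertex.
        float[] vertices =
        {
        //   x,  y,  z,   nx, ny, nz,   u,  v
             0f, 0f, 0f,  0f, 0f, 1f,   0f, 0f,
             1f, 0f, 0f,  0f, 0f, 1f,   1f, 0f,
             1f, 1f, 0f,  0f, 0f, 1f,   1f, 1f,
        };
        int stride = 8 * sizeof(float);

        int vbo;
        GL.GenBuffers(1, out vbo);
        GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
        GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(vertices.Length * sizeof(float)),
                      vertices, BufferUsageHint.StaticDraw);

        // All three attributes read from the same buffer, distinguished only by stride and offset.
        GL.EnableVertexAttribArray(0);
        GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false, stride, 0);                 // position
        GL.EnableVertexAttribArray(1);
        GL.VertexAttribPointer(1, 3, VertexAttribPointerType.Float, false, stride, 3 * sizeof(float)); // normal
        GL.EnableVertexAttribArray(2);
        GL.VertexAttribPointer(2, 2, VertexAttribPointerType.Float, false, stride, 6 * sizeof(float)); // uv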

    Read the article

  • Unable to create suitable graphics device?

    - by kraze
    I've been following the Eyes of the Dragon tutorial, which is basically a guide to making a 2D RPG game. I recently finished the tutorial about making pop-up screens in the menu and changed the screen to load as a full screen. Whenever I try to load the game it just goes black and my mouse sits there; I cannot get out of it other than with Ctrl+Alt+Del. Once I do that, it says "No suitable graphics card found, unable to create graphics device." I read somewhere about XNA not allowing more than one screen when any one of them is full screen, but it wasn't very informative. Anyone have any ideas what's going on and/or how to fix this? Just in case it helps, this is the code for the graphics device:

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            graphics.PreferredBackBufferWidth = 900;
            graphics.PreferredBackBufferHeight = 768;
            graphics.IsFullScreen = true;
            this.Window.Title = "Eyes of the Dragon";
            Content.RootDirectory = "Content";
        }
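
    A possible explanation (an assumption, since the post doesn't confirm it): 900x768 is not a resolution most adapters expose as a full-screen display mode, and XNA refuses to create a full-screen device with an unsupported back-buffer size. A minimal sketch that picks a mode the adapter actually reports:

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);

            // Use the desktop resolution, which is always a valid full-screen mode.
            DisplayMode desktop = GraphicsAdapter.DefaultAdapter.CurrentDisplayMode;
            graphics.PreferredBackBufferWidth = desktop.Width;
            graphics.PreferredBackBufferHeight = desktop.Height;
            graphics.IsFullScreen = true;

            this.Window.Title = "Eyes of the Dragon";
            Content.RootDirectory = "Content";
        }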

    Read the article

  • What should I do if my text exceeds my text render target boundaries?

    - by user1423893
    I have a method for drawing strings in 3D that does the following:
    1. Set a render target.
    2. Draw each character as a quadrangle to the render target, using an orthographic projection.
    3. Unset the render target.
    4. Draw the render target texture using a perspective projection and a world transform.
    My problem is how to deal with strings whose length exceeds the render target dimensions. For example, if I have the string "This is a reallllllllllly long string" and the render target can't accommodate it, it will only capture "This is a realllll". The render target (and its size) could be set each frame, but wouldn't that be far too costly?
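
    One common approach (a sketch, not the poster's code): measure the string first and create a render target exactly big enough for it, then cache the resulting texture per string so the target is only recreated when the text actually changes. The field and method names here are assumptions:

        private readonly Dictionary<string, Texture2D> textCache = new Dictionary<string, Texture2D>();

        private Texture2D GetTextTexture(GraphicsDevice device, SpriteBatch batch, SpriteFont font, string text)
        {
            Texture2D cached;
            if (textCache.TryGetValue(text, out cached))
                return cached;   // no per-frame render-target churn

            // Size the target to the measured string so nothing is cut off.
            Vector2 size = font.MeasureString(text);
            RenderTarget2D target = new RenderTarget2D(device, (int)Math.Ceiling(size.X), (int)Math.Ceiling(size.Y));

            device.SetRenderTarget(target);
            device.Clear(Color.Transparent);
            batch.Begin();
            batch.DrawString(font, text, Vector2.Zero, Color.White);
            batch.End();
            device.SetRenderTarget(null);

            textCache[text] = target;
            return target;
        }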

    Read the article

  • Could I be going crazy with Event Handlers? Am I going the "wrong way" with my design?

    - by sensae
    I guess I've decided that I really like event handlers. I may be suffering a bit from analysis paralysis, but I'm concerned about making my design unwieldy or running into some other unforeseen consequence of my design decisions. My game engine currently does basic sprite-based rendering with a panning overhead camera. My design looks a bit like this:
    - SceneHandler: contains a list of classes that implement the SceneListener interface (currently only Sprites). Calls render() once per tick, and sends onCameraUpdate() messages to SceneListeners.
    - InputHandler: polls the input once per tick, and sends a simple onKeyPressed message to InputListeners. I have a Camera InputListener which holds a SceneHandler instance and triggers updateCamera() events based on what the input is.
    - AgentHandler: calls default actions on any Agents (AI) once per tick, and checks a stack for any new events that are registered, dispatching them to specific Agents as needed.
    So I have basic sprite objects that can move around a scene and use rudimentary steering behaviors to travel. I've gotten onto collision detection, and this is where I'm not sure the direction my design is going is good. Is it a good practice to have many small event handlers? I imagine, going the way I am, that I'd have to implement some kind of CollisionHandler. Would I be better off with a more consolidated EntityHandler which handles AI, collision updates, and other entity interactions in one class? Or will I be fine just implementing many different event-handling subsystems which pass messages to each other based on what kind of event it is? Should I write an EntityHandler which is simply responsible for coordinating all these sub event handlers? I realize in some cases, such as my InputHandler and SceneHandler, those are very specific types of events. A large portion of my game code won't care about input, and a large portion won't care about updates that happen purely in the rendering of the scene. Thus I feel my isolation of those systems is justified. However, I'm asking this question specifically about game-logic-type events.

    Read the article

  • Increase moving speed of body

    - by Siddharth
    How can I make the ball move faster on the screen using Box2D in libGDX?

        public class Box2DDemo implements ApplicationListener {
            private SpriteBatch batch;
            private TextureRegion texture;
            private World world;
            private Body groundDownBody, groundUpBody, groundLeftBody, groundRightBody, ballBody;
            private BodyDef groundBodyDef1, groundBodyDef2, groundBodyDef3, groundBodyDef4, ballBodyDef;
            private PolygonShape groundDownPoly, groundUpPoly, groundLeftPoly, groundRightPoly;
            private CircleShape ballPoly;
            private Sprite sprite;
            private FixtureDef fixtureDef;
            private Vector2 ballPosition;
            private Box2DDebugRenderer renderer;
            Vector2 vector2;

            @Override
            public void create() {
                texture = new TextureRegion(new Texture(Gdx.files.internal("img/red_ring.png")));
                sprite = new Sprite(texture);
                sprite.setOrigin(sprite.getWidth() / 2, sprite.getHeight() / 2);
                batch = new SpriteBatch();
                world = new World(new Vector2(0.0f, -10.0f), false);

                groundBodyDef1 = new BodyDef();
                groundBodyDef1.type = BodyType.StaticBody;
                groundBodyDef1.position.x = 0.0f;
                groundBodyDef1.position.y = 0.0f;
                groundDownBody = world.createBody(groundBodyDef1);

                groundBodyDef2 = new BodyDef();
                groundBodyDef2.type = BodyType.StaticBody;
                groundBodyDef2.position.x = 0f;
                groundBodyDef2.position.y = Gdx.graphics.getHeight();
                groundUpBody = world.createBody(groundBodyDef2);

                groundBodyDef3 = new BodyDef();
                groundBodyDef3.type = BodyType.StaticBody;
                groundBodyDef3.position.x = 0f;
                groundBodyDef3.position.y = 0f;
                groundLeftBody = world.createBody(groundBodyDef3);

                groundBodyDef4 = new BodyDef();
                groundBodyDef4.type = BodyType.StaticBody;
                groundBodyDef4.position.x = Gdx.graphics.getWidth();
                groundBodyDef4.position.y = 0f;
                groundRightBody = world.createBody(groundBodyDef4);

                groundDownPoly = new PolygonShape();
                groundDownPoly.setAsBox(480.0f, 10f);
                fixtureDef = new FixtureDef();
                fixtureDef.density = 0f;
                fixtureDef.restitution = 1f;
                fixtureDef.friction = 0f;
                fixtureDef.shape = groundDownPoly;
                fixtureDef.filter.groupIndex = 0;
                groundDownBody.createFixture(fixtureDef);

                groundUpPoly = new PolygonShape();
                groundUpPoly.setAsBox(480.0f, 10f);
                fixtureDef = new FixtureDef();
                fixtureDef.friction = 0f;
                fixtureDef.restitution = 0f;
                fixtureDef.density = 0f;
                fixtureDef.shape = groundUpPoly;
                fixtureDef.filter.groupIndex = 0;
                groundUpBody.createFixture(fixtureDef);

                groundLeftPoly = new PolygonShape();
                groundLeftPoly.setAsBox(10f, 320f);
                fixtureDef = new FixtureDef();
                fixtureDef.friction = 0f;
                fixtureDef.restitution = 0f;
                fixtureDef.density = 0f;
                fixtureDef.shape = groundLeftPoly;
                fixtureDef.filter.groupIndex = 0;
                groundLeftBody.createFixture(fixtureDef);

                groundRightPoly = new PolygonShape();
                groundRightPoly.setAsBox(10f, 320f);
                fixtureDef = new FixtureDef();
                fixtureDef.friction = 0f;
                fixtureDef.restitution = 0f;
                fixtureDef.density = 0f;
                fixtureDef.shape = groundRightPoly;
                fixtureDef.filter.groupIndex = 0;
                groundRightBody.createFixture(fixtureDef);

                ballPoly = new CircleShape();
                ballPoly.setRadius(16f);
                fixtureDef = new FixtureDef();
                fixtureDef.shape = ballPoly;
                fixtureDef.density = 1f;
                fixtureDef.friction = 1f;
                fixtureDef.restitution = 1f;
                ballBodyDef = new BodyDef();
                ballBodyDef.type = BodyType.DynamicBody;
                ballBodyDef.position.x = (int) 200;
                ballBodyDef.position.y = (int) 200;
                ballBody = world.createBody(ballBodyDef);
                // ballBody.setLinearVelocity(200f, 200f);
                // ballBody.applyLinearImpulse(new Vector2(250f, 250f), ballBody.getLocalCenter());
                ballBody.createFixture(fixtureDef);

                renderer = new Box2DDebugRenderer(true, false, false);
            }

            @Override
            public void dispose() {
                ballPoly.dispose();
                groundLeftPoly.dispose();
                groundUpPoly.dispose();
                groundDownPoly.dispose();
                groundRightPoly.dispose();
                world.destroyBody(ballBody);
                world.dispose();
            }

            @Override
            public void pause() {
            }

            @Override
            public void render() {
                world.step(1f / 30f, 3, 3);
                Gdx.gl.glClearColor(1f, 1f, 1f, 1f);
                Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
                batch.begin();
                vector2 = ballBody.getLinearVelocity();
                System.out.println("X=" + vector2.x + " Y=" + vector2.y);
                ballPosition = ballBody.getPosition();
                renderer.render(world, batch.getProjectionMatrix());
                // int preX = (int) (vector2.x / Math.abs(vector2.x));
                // int preY = (int) (vector2.y / Math.abs(vector2.y));
                //
                // if (Math.abs(vector2.x) == 0.0f)
                //     ballBody1.setLinearVelocity(1.4142137f, vector2.y);
                // else if (Math.abs(vector2.x) < 1.4142137f)
                //     ballBody1.setLinearVelocity(preX * 5, vector2.y);
                //
                // if (Math.abs(vector2.y) == 0.0f)
                //     ballBody1.setLinearVelocity(vector2.x, 1.4142137f);
                // else if (Math.abs(vector2.y) < 1.4142137f)
                //     ballBody1.setLinearVelocity(vector2.x, preY * 5);
                batch.draw(sprite, (ballPosition.x - (texture.getRegionWidth() / 2)),
                           (ballPosition.y - (texture.getRegionHeight() / 2)));
                batch.end();
            }

            @Override
            public void resize(int arg0, int arg1) {
            }

            @Override
            public void resume() {
            }
        }

    I implemented the above code, but I cannot achieve a higher movement speed for the ball.
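
    A detail worth checking (not stated in the original post): Box2D clamps how far a body may travel in a single step (b2_maxTranslation, 2.0 world units), so a world built in pixel units like the one above tops out at roughly 60 units per second with a 1/30 s step. The usual remedy is to treat Box2D coordinates as metres and convert only when drawing. A tiny sketch of that conversion, written in C# purely for consistency with the other examples on this page (the same arithmetic applies in libGDX; the scale value is an assumption):

        // Hypothetical pixels-per-metre scale; all Box2D positions/velocities stay in metres.
        const float PixelsPerMetre = 32f;

        static float ToMetres(float pixels) => pixels / PixelsPerMetre;
        static float ToPixels(float metres) => metres * PixelsPerMetre;

        // Example: a ball placed at pixel (200, 200) becomes (6.25, 6.25) in the physics world,
        // and a velocity of 20 m/s draws as 640 px/s on screen.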

    Read the article

  • How can I make this arcade-highscore game more fun/interesting?

    - by j-a
    I'm having difficulties getting the fun factor into this iPhone game, and I am looking for some ideas or advice. I was asked to generalize the question a bit. What are some techniques for arcade high-score games that can be applied to this game in order to:
    - Make each second of the game fun and challenging, from the first second to the end of the game, regardless of skill level.
    - Make the player want to try again and again to beat the high score.
    Briefly about the game: you aim with your finger, pull back the bowstring, and release by lifting your finger. That part feels quite nice, the way the bow interacts with the finger. The game idea: hearts fall down and you get 1 point for each heart you shoot. You start with a few arrows, and every now and then a bag of arrows comes down which, if you hit it, gives you more arrows. Once you're out of arrows the game is over. So it is all about beating your previous high score or your friends' high scores. Unfortunately I don't find it that fun. I'd be thankful for any ideas/suggestions/thoughts on how to make it more fun/interesting.

    Read the article

  • Simple moving object jitters every couple of seconds [on hold]

    - by Liam
    I'm trying to get smooth movement in my game; right now the moving square jitters every couple of seconds. I'm using C++ with SDL2. I made a very simple project to test different methods, so all that's happening is a box moving across the screen. Here's a pastebin of the code: http://pastebin.com/7YxxSw0D and here's a link to a Dropbox folder containing the 'game': https://www.dropbox.com/sh/0ygntl140qv8iv0/AABVuuk6khArOJmdBi1OaFlua?dl=0 Any input would be greatly appreciated, and let me know if you need any more info. Thanks!
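
    A common cause of this kind of periodic hitch (a guess; the pastebin isn't reproduced here) is advancing movement by a raw per-frame delta while the display refreshes at a slightly different rate, so small errors accumulate and then correct in a visible jump. A minimal fixed-timestep loop with render interpolation, sketched in C# for consistency with the other examples on this page; the structure carries straight over to C++/SDL2, and all names are illustrative:

        using System;
        using System.Diagnostics;

        class FixedStepLoop
        {
            // Placeholder state: a square moving right at 120 px/s.
            double prevX, currX, velocity = 120.0;
            bool running = true;

            public void Run(Action<double> render)   // render receives an interpolated x position
            {
                const double dt = 1.0 / 120.0;
                var clock = Stopwatch.StartNew();
                double previous = clock.Elapsed.TotalSeconds, accumulator = 0.0;

                while (running)
                {
                    double now = clock.Elapsed.TotalSeconds;
                    accumulator += now - previous;
                    previous = now;

                    // Advance the simulation in fixed slices, never by a raw frame delta.
                    while (accumulator >= dt)
                    {
                        prevX = currX;
                        currX += velocity * dt;
                        accumulator -= dt;
                    }

                    // Interpolate so the drawn position stays smooth even if frames are uneven.
                    double alpha = accumulator / dt;
                    render(prevX * (1.0 - alpha) + currX * alpha);
                }
            }
        }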

    Read the article

  • Problem when texturing triangles using glVertexPointer()

    - by tigrou
    I'm having a problem displaying a single quad; here is how I do it:

        float tex_coord[] =
        {
            0.0, 0.0,
            0.0, 1.0,
            1.0, 1.0,
            1.0, 1.0,
            1.0, 0.0,
            0.0, 0.0
        };  // how many coords should I give?

        int indexes[] = { 3, 2, 0, 0, 1, 3 };

        float vertexes[] =
        {
            -37, 0, 30,
            -38, 0, 29,
            -41, 0, 32,
            -42, 0, 31
        };

        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, vertexes);
        glTexCoordPointer(2, GL_FLOAT, 0, tex_coord);
        glDrawElements(GL_TRIANGLES, 2, GL_UNSIGNED_INT, indices);
        glDisableClientState(GL_VERTEX_ARRAY);
        glDisableClientState(GL_TEXTURE_COORD_ARRAY);

    The result: (with 2 triangles) (with 4 triangles)

    Read the article

  • Box2D random crash when adding joints

    - by user46624
    I'm currently working on a project which uses Box2D. When the player uses a certain key, the player should anchor to the ground. For that I use a weld joint, but when I add the joint the game will sometimes crash; it has about a 1 in 10 chance of crashing. The error I receive:

        Showing the controller
        Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 11
            at org.jbox2d.dynamics.Island.add(Island.java:577)
            at org.jbox2d.dynamics.World.solve(World.java:1073)
            at org.jbox2d.dynamics.World.step(World.java:598)

    The code for adding the joints:

        WeldJointDef def = new WeldJointDef();
        def.initialize(body, anchoredObject.body, body.getWorldCenter());
        weldJoint = (WeldJoint) world.createJoint(def);

    I still get the error if I synchronize it.
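
    A frequent cause of this kind of crash (an assumption; the post doesn't show where createJoint is called from): the joint is created while the world is stepping, for example from an input handler or contact callback on another thread, which corrupts the solver's island arrays. The usual fix is to queue the request and create the joint only once world.step() has returned. A minimal sketch of that pattern in C#, with the jbox2d calls represented by a placeholder delegate:

        using System;
        using System.Collections.Concurrent;

        class PhysicsCommandQueue
        {
            // Requests raised from input/contact handlers are stored, never executed immediately.
            private readonly ConcurrentQueue<Action> pending = new ConcurrentQueue<Action>();

            public void Enqueue(Action createJointOrBody) => pending.Enqueue(createJointOrBody);

            // Called once per frame, right after world.step(...), when the world is unlocked.
            public void Flush()
            {
                Action command;
                while (pending.TryDequeue(out command))
                    command();
            }
        }

    Each frame the game loop would step the world and then call Flush(), so every createJoint call runs between steps rather than during one.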

    Read the article

  • How do I debug an Android app in Eclipse?

    - by user2534694
    OK, so while this isn't strictly a programming question, I wanted to know: how do people debug apps? How do you view LogCat, see where exceptions are thrown, and so on? And do I need to run the app on the emulator to see all of this, or is there a way to view it after running the app on my phone (while not being connected to the computer)? Links to plugins and tips would be really helpful, as I'm going to start work on my next game; while the first one works fine, I had a lot of problems while debugging it.

    Read the article

  • Reasons to disable game save during combat (e.g. Mass Effect 2)

    - by Steve V.
    So I've been playing Mass Effect 2 (PC) and one of the things I've noticed is that you can only save your game when you're not engaged in combat. As soon as the first enemy shows up on your radar, the save button is disabled. Once combat is over, save functionality reappears. It seems reasonable to assume that Mass Effect 2 is a state machine, and therefore, the internal state of the program at any moment can be captured and reloaded later. This is basically a solved problem - games have been designed this way since the Half-Life era. It also seems reasonable to assume that BioWare knew what they were doing when they made the decision not to follow this model - it's a tried and true system; BioWare wouldn't have done it the way they did without some good reason. What reasons are there to disable game save functionality during combat?

    Read the article

  • How to manage enemy movement and shooting in a shmup?

    - by whatever
    I'm wondering what is the best (or at least a good) way of managing enemies in a shoot-em-up. Basically, what I'd do is write a class that manages displaying and updating the positions of all the enemies. But how do I create good movement paths for the enemies? A list of waypoints? Gravitating around some fixed points (with weighting, distance evaluation, etc.)? The same question applies to the shot patterns. Can you please point me in the right direction?
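
    One simple starting point (a sketch, not advice from the original thread): give each enemy a list of waypoints and steer toward the current one at a fixed speed; shot patterns can be driven the same way by per-enemy timers. All names here are illustrative:

        using System.Collections.Generic;
        using System.Numerics;

        class WaypointMover
        {
            private readonly List<Vector2> waypoints;
            private int target;                 // index of the waypoint we are heading for
            public Vector2 Position;
            public float Speed = 120f;          // pixels per second

            public WaypointMover(Vector2 start, List<Vector2> path)
            {
                Position = start;
                waypoints = path;
            }

            public void Update(float dt)
            {
                if (target >= waypoints.Count) return;            // path finished

                Vector2 toTarget = waypoints[target] - Position;
                float distance = toTarget.Length();
                float stepLength = Speed * dt;

                if (distance <= stepLength)
                {
                    Position = waypoints[target];                 // snap to the waypoint ...
                    target++;                                     // ... and aim for the next one
                }
                else
                {
                    Position += toTarget / distance * stepLength; // move straight toward it
                }
            }
        }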

    Read the article

  • After one has made many grid-based puzzles, how does one make them into a PDF ready for printing?

    - by alan ross
    Suppose one has generated many grid-based puzzles like sudoku, kakuro, or even plain crosswords, and now has to print them in a book. How does one make a PDF (book file) from them automatically? To explain the question better: one has each puzzle ready in a computer format like ..35.6.89 for each of the nine rows, the dot being an empty cell. How does one convert that into a picture on a PDF page, complete with the grid box, automatically, without doing each puzzle individually, and then print a book from the PDF file? As can be seen, other things are also printed on the page; all of this should be done automatically.
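
    One workable route (an illustration only; the question doesn't name a tool): feed each puzzle's row strings to a small routine that draws the grid with a PDF library such as PDFsharp, one page per puzzle, then save the whole document as the book file. Treat the exact API details below as approximate:

        using PdfSharp.Drawing;   // PDFsharp, one possible library choice
        using PdfSharp.Pdf;

        // Draws one 9x9 puzzle (rows like "..35.6.89") onto a page of the PDF book.
        static void DrawPuzzle(XGraphics gfx, string[] rows, double left, double top, double cell)
        {
            XFont font = new XFont("Verdana", cell * 0.6);
            for (int r = 0; r < 9; r++)
            {
                for (int c = 0; c < 9; c++)
                {
                    double x = left + c * cell, y = top + r * cell;
                    gfx.DrawRectangle(XPens.Black, x, y, cell, cell);   // cell border
                    char ch = rows[r][c];
                    if (ch != '.')                                      // '.' means an empty cell
                        gfx.DrawString(ch.ToString(), font, XBrushes.Black,
                                       new XRect(x, y, cell, cell), XStringFormats.Center);
                }
            }
        }

        // Usage sketch: one page per puzzle, saved as a single book file.
        // PdfDocument doc = new PdfDocument();
        // foreach (string[] puzzle in puzzles)
        //     DrawPuzzle(XGraphics.FromPdfPage(doc.AddPage()), puzzle, 60, 80, 24);
        // doc.Save("puzzle-book.pdf");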

    Read the article

  • How to swap row/column major order?

    - by 0X1A
    I'm trying to get a sprite sheet clipped in the right order, but I'm a bit stumped; every iteration I've tried has tended to be in the wrong order. This is my current implementation:

        Frames = (TempSurface->h / ClipHeight) * (TempSurface->w / ClipWidth);
        SDL_Rect Clips[Frames];
        for (i = 0; i < Frames; i++)
        {
            if (i != 0 && i % (TempSurface->h / ClipHeight) == 0)
                ColumnIndex++;
            Clips[i].x = ColumnIndex * ClipWidth;
            Clips[i].y = i % (TempSurface->h / ClipHeight) * ClipHeight;
            Clips[i].w = ClipWidth;
            Clips[i].h = ClipHeight;
        }

    Here TempSurface is the entire sheet loaded into an SDL_Surface and Clips[] is an array of SDL_Rects. What results from this is a set of SDL_Rects in the wrong order. For example, I need a sheet of dimensions 4x4 to look like this:

        |  0 |  1 |  2 |  3 |
        |  4 |  5 |  6 |  7 |
        |  8 |  9 | 10 | 11 |
        | 12 | 13 | 14 | 15 |

    But instead I get this order:

        |  0 |  4 |  8 | 12 |
        |  1 |  5 |  9 | 13 |
        |  2 |  6 | 10 | 14 |
        |  3 |  7 | 11 | 15 |

    The column and row order does not match. What should I do to fix the order?
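
    The row-major mapping is just integer division and modulo against the number of columns, which comes from the sheet width rather than its height. A minimal sketch, written in C# for consistency with the other examples on this page; the variable names are assumptions that mirror the original:

        int columns = sheetWidth / clipWidth;    // how many clips fit across one row
        int rows    = sheetHeight / clipHeight;
        var clips   = new Rectangle[columns * rows];

        for (int i = 0; i < clips.Length; i++)
        {
            int col = i % columns;   // walks left-to-right first ...
            int row = i / columns;   // ... then moves down a row
            clips[i] = new Rectangle(col * clipWidth, row * clipHeight, clipWidth, clipHeight);
        }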

    Read the article

  • Shader compile log depending on hardware

    - by dreta
    I'm done with the core of my graphics engine and I'm testing it on every platform I can get my hands on. What I noticed is that different drivers return different shader and program compile log content. For example, on my friend's laptop, if you successfully compile a shader then the log is simply empty. However, on my PC I get some useful information along with it. So if I compile a vertex shader, I'll get: "Vertex shader was successfully compiled to run on hardware." That isn't that impressive, but here is what happens when I compile a program. On my friend's computer the log is empty, since the program compiles. However, on my own computer I get: "Vertex shader(s) linked, fragment shader(s) linked." Which is awesome, because I'm attaching a geometry shader handle of 0 (I have a geometry shader file with garbage in it, so it doesn't compile and the handle is set to 0), and the compiler just tells me which shaders linked. It got me thinking: if I were going to buy a graphics card, is there a way for me to find out in advance whether I'll get this "extended" compile information? Maybe it's vendor specific? I don't expect a definitive answer, since this seems a bit obscure, but maybe somebody has experience with this and could post it.

    Read the article

  • How are dependent quests generated in Guild Wars 2?

    - by Aufziehvogel
    I recently read that Guild Wars 2 uses a system where the creation of quests depends on which actions users took when they were presented with another quest. An example was: there might be a quest to protect a person; if users do not take this action, the person might be kidnapped, and later there is a quest to rescue this person. Is there any information on whether the creation of these quests is somehow automatic? From the article it sounded automatic, but from the specific example you could also guess that people just created a task set with conditions attached (Task 1 taken: OK; Task 1 not taken: show Task 2). From what I have heard about AI, they might also have implemented some sort of huge neural network to make decisions?

    Read the article

  • Collision representation in game overworlds

    - by Akroy
    I'm implementing a 2D overworld where one can walk through an area that is not tile based. I was wondering the best way to implement collisions. In the past when I've done similar things, I've used one image (or set of images) to show an elaborately drawn world and then a second binary image that does nothing but differentiate "wall" and "not wall". Then, I'd use the first for all drawing to the screen, but the second for collision detection. Having another image of the same size to represent collisions seems like lots of overhead. Is there a better way to handle this? (I'm currently using C++ with SDL, although I'm more interested in general concepts)

    Read the article

  • Getting Started with Component Architecture: DI?

    - by ashes999
    I just moved away from MVC towards something more component-architecture-like. I have no concept of messages yet (it's rough prototype code); objects just read internal properties and values of other classes for now. That issue aside, it seems like this is turning into an aspect-oriented-programming challenge. I've noticed that all entities with, for example, a position component will have similar properties (get/set X/Y/Z, rotation, velocity). Is it common practice, and/or a good idea, to push these behind an interface and use dependency injection to inject a generic class (e.g. PositionComponent) which already has all the boilerplate code? (I'm sure the answer will affect the model I use for message passing.)
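
    A small sketch of the idea being asked about (illustrative names only, not an endorsement of the design): the shared position boilerplate lives in one concrete component behind an interface, and entities receive it through their constructor:

        public interface IPositionComponent
        {
            float X { get; set; }
            float Y { get; set; }
            float Rotation { get; set; }
            void Move(float dx, float dy);
        }

        // One reusable implementation holds the boilerplate every positioned entity needs.
        public class PositionComponent : IPositionComponent
        {
            public float X { get; set; }
            public float Y { get; set; }
            public float Rotation { get; set; }
            public void Move(float dx, float dy) { X += dx; Y += dy; }
        }

        public class Sprite
        {
            private readonly IPositionComponent position;

            // The component is injected, so tests or other entity types can swap it out.
            public Sprite(IPositionComponent position)
            {
                this.position = position;
            }

            public void Update(float dt) => position.Move(10f * dt, 0f);
        }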

    Read the article

  • Why are my sprite sheet's frames not visible in Cocos Builder?

    - by Ramy Al Zuhouri
    I have created a sprite sheet with Zwoptex. Then I just dragged the .plist and .png files into my CocosBuilder project. After this I wanted to take a sprite frame and set it on a sprite, but the sprite image is empty! At first I thought it was just empty in CocosBuilder, and that it would work correctly once imported into a project. But if I try to run that scene with CCBReader I still see an empty sprite. Why?

    Read the article

  • How do I consolidate the differences between iOS and Android update loops?

    - by kkan
    I'm currently working on moving some Android NDK code to the iPhone. From looking at some samples it seems that the main loop is handled for you, and all you've got to do is override the render method on the view to handle the rendering, then add a selector to handle the update methods. The render method itself looks like it's attached to the window's refresh. But on Android I've got my own game loop that controls the rendering and updates using C++ time.h. Is it possible to implement the same here, bypassing Apple's loop? I'd really like to keep the structure of the code similar.

    Read the article

  • C# XNA: render an entire frame to a Texture2D

    - by redcodefinal
    I asked a question here: C# XNA Make rendered screen a texture2d. But I ended up not getting the exact result I was looking for, since I didn't ask the question right. In the game I am writing, I render an extremely large city out of objects, and this can cause lag when moving the camera to view things that are off screen. I need a way to render the ENTIRE city, even the stuff that is off screen, and turn it into a Texture2D. The answer I chose for the last question didn't work entirely right, because it only captures what is on screen, not what is off.
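
    A minimal sketch of one way to do this (the city dimensions and DrawCity helper are assumptions, since the actual drawing code isn't shown): create a RenderTarget2D sized to the whole city in pixels, draw everything into it once in world coordinates, and afterwards treat the target as an ordinary Texture2D. Keep in mind XNA's texture-size limit (2048 px per side under the Reach profile, 4096 under HiDef), so a very large city may have to be tiled across several targets:

        // cityWidth/cityHeight in pixels and DrawCity(...) are assumptions about the existing game.
        RenderTarget2D cityTexture = new RenderTarget2D(GraphicsDevice, cityWidth, cityHeight);

        GraphicsDevice.SetRenderTarget(cityTexture);
        GraphicsDevice.Clear(Color.Transparent);

        // No camera transform here: draw in world (city) coordinates, including off-screen parts.
        spriteBatch.Begin();
        DrawCity(spriteBatch);            // draw every building/object at its world position
        spriteBatch.End();

        GraphicsDevice.SetRenderTarget(null);

        // From now on the whole city can be drawn as a single texture:
        // spriteBatch.Draw(cityTexture, -cameraPosition, Color.White);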

    Read the article
