Search Results


  • Writing to a structured buffer with a compute shader (D3D11)

    - by Vertexwahn
    I have some problems writing to a structured buffer. First I create a structured buffer that is filled with float values from 0 to 99. Afterwards, the structured buffer is copied to a CPU-accessible buffer so that its contents can be printed to the console. The output is as expected (the numbers 0 to 99 appear on the console). Then I run a compute shader that should change the contents of the structured buffer:

    ```hlsl
    RWStructuredBuffer<float> Result : register(u0);

    [numthreads(1, 1, 1)]
    void CS_main(uint3 GroupId : SV_GroupID)
    {
        Result[GroupId.x] = GroupId.x * 10;
    }
    ```

    But the compute shader does not change the contents of the structured buffer. The source code can be found here (main.cpp): https://bitbucket.org/Vertexwahn/cmakedemos/src/4abb067afd5781b87a553c4c720956668adca22a/D3D11ComputeShader/src/main.cpp?at=default FillCS.hlsl: https://bitbucket.org/Vertexwahn/cmakedemos/src/4abb067afd5781b87a553c4c720956668adca22a/D3D11ComputeShader/src/FillCS.hlsl?at=default
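
    A likely culprit, without stepping through the linked code: for a compute shader to write, the buffer must be created with UAV binding and the structured-buffer misc flag, and the UAV must be bound before Dispatch. A minimal sketch of the sequence (variable names are illustrative, not taken from main.cpp):

    ```cpp
    // Sketch: the buffer needs UAV binding and the structured misc flag;
    // the UAV must be bound *before* Dispatch and unbound before the
    // CopyResource readback into the staging buffer.
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth           = 100 * sizeof(float);
    desc.Usage               = D3D11_USAGE_DEFAULT;
    desc.BindFlags           = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    desc.StructureByteStride = sizeof(float);

    // ... CreateBuffer / CreateUnorderedAccessView as usual ...

    context->CSSetShader(fillCS, nullptr, 0);
    context->CSSetUnorderedAccessViews(0, 1, &uav, nullptr);
    context->Dispatch(100, 1, 1);   // one group per element for [numthreads(1,1,1)]

    ID3D11UnorderedAccessView* nullUAV = nullptr;
    context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);
    context->CopyResource(stagingBuffer, structuredBuffer); // then Map and print
    ```

    If any of those steps is missing (most often the bind flags or the CSSetUnorderedAccessViews call), the Dispatch silently does nothing and the readback shows the original fill values.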

  • Beginning OpenGL vs beginning DirectX, and a question about the philosophical difference between them

    - by jokoon
    I'm beginning with DirectX at school, and my teacher said it is harder to start with than OpenGL, but I have read several claims that DirectX is in fact more advanced than OpenGL in terms of support for recent graphics card features. Since I'm far from wanting to do top-notch effects, which can already be implemented with existing engines and/or shaders, I wanted your opinion: can OpenGL be considered a more basic, KISS, hardware-agnostic graphics library for doing accelerated 3D, and DirectX a top-notch, game-oriented graphics API that will always support the next generation of 3D chips? A quote from Wikipedia (http://en.wikipedia.org/wiki/Id_Tech_5): John Carmack mentioned in his keynote at QuakeCon 2007 that the id Tech 5 engine will not be using the DirectX 10 API. I don't want to seem like I'm only minding open source because Carmack does and because he is famous; it's just that Android and the iPhone are out there, and DirectX doesn't seem to me to be the necessary API to know, since Windows supports OpenGL, and since the 360 is just one console among others.

  • How do I tackle top down RPG movement?

    - by WarmWaffles
    I have a game that I am writing in Java. It is a top-down RPG and I am trying to handle movement in the world. The world is largely procedural, and I am having a difficult time working out how to handle character movement around the world and render the changes to the screen. I have the world loaded in blocks, which contain all the tiles. How do I tackle the character movement? I am stumped and can't figure out where I should go with it. EDIT: I was being abstract about the issue at hand. Right now I can only think of rigidly sticking everything into a 2D array and saving the block ID and the player offset in the Block, or I could just "float" everything and move smoothly between tiles, so to speak.
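
    One common pattern, as a sketch rather than the answer: store the player as integer tile coordinates plus a fractional offset, which gives smooth "floating" movement while keeping block/tile lookups as plain array indexing. All names here are hypothetical:

    ```java
    // Sketch: player position = integer tile coords + fractional offset in [0,1).
    // Rendering draws the world shifted by the fractional part, so movement
    // glides between tiles while world lookups stay simple integer indexing.
    public class Player {
        int tileX, tileY;        // which tile the player occupies
        double offX, offY;       // fractional offset within that tile

        void move(double dx, double dy) {
            offX += dx;
            offY += dy;
            // carry overflow into the tile coordinates
            while (offX >= 1) { offX -= 1; tileX++; }
            while (offX <  0) { offX += 1; tileX--; }
            while (offY >= 1) { offY -= 1; tileY++; }
            while (offY <  0) { offY += 1; tileY--; }
        }

        int pixelX(int tileSize) { return (int) ((tileX + offX) * tileSize); }
        int pixelY(int tileSize) { return (int) ((tileY + offY) * tileSize); }
    }
    ```

    The same tile coordinates then tell you which procedural block to load: blockX = tileX / tilesPerBlock, and the remainder indexes into the block's 2D tile array.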

  • Game programming basics under Windows

    - by dreta
    I've been trying to learn some Windows programming using the Win32 API. I'm used to working with the OS layer abstracted away, mostly thanks to libraries like SFML or Allegro. Could you help me out and tell me if I'm thinking right here? Is the place for my game loop where I'm reading the messages?

    ```cpp
    while (TRUE)
    {
        if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
            if (msg.message == WM_QUIT)
                break;
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        else
        {
            // my game loop goes here
        }
    }
    ```

    Now the slightly bigger issue: drawing. Do I run my drawing where I normally would, inside the game loop after the game logic? Or do I do it when WM_PAINT is handled, and just call InvalidateRect(hwnd, NULL, TRUE); when I want to draw? This feels weird: WM_PAINT is a queued message, so I don't know for sure when it'll be called. If I want to avoid this, do I just get the device handle inside the game loop and only call ValidateRect(hwnd, NULL); in the WM_PAINT case (besides the ValidateRect(hwnd, NULL); called after drawing in the game loop)? Actually, now that I think about it, do I even need WM_PAINT in this situation, or can I skip it and let DefWindowProc handle it (does it validate the screen if WM_PAINT isn't processed)? If it matters, I'm setting up my code for OpenGL.
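
    For an OpenGL-backed window, the usual arrangement (a sketch under that assumption, not the only valid structure) is to render every frame from the loop and make WM_PAINT a no-op that validates the window; DefWindowProc would also validate the update region if WM_PAINT were left unhandled:

    ```cpp
    // Sketch: render each frame in the loop; WM_PAINT just validates so
    // Windows stops queuing paint requests. OpenGL draws outside the
    // BeginPaint/EndPaint model anyway.
    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_PAINT:
            ValidateRect(hwnd, NULL);   // nothing to do; the loop redraws anyway
            return 0;
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }

    // in the message loop's else-branch:
    //   UpdateGame(dt);
    //   RenderGL();          // glClear, draw calls ...
    //   SwapBuffers(hdc);    // hdc from GetDC(hwnd) with a pixel format set
    ```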

  • Which Flash 3D particle engine generates such an XML file?

    - by Huang F. Lei
    I found some particle config files like the one below, but I don't know which Flash 3D particle engine uses them; they are different from Away3D's, which use 'root' as the root element of the XML.

    ```xml
    <effect pos="0 0 0">
        <property cache="1" lifetime="10000"/>
        <mesh blendmode="add">
            <path>
                <frame y="100" durtime="1000" x="0" z="0"/>
            </path>
            <scale>
                <frame y="0.2000000001" durtime="300" x="2.2" z="2.2"/>
                <frame y="0.4" durtime="300" x="2.7" z="2.7"/>
            </scale>
        </mesh>
        <vibrate delayTime="100" amplitude="10" durationTime="750" intension="50"/>
        <quad billboard="false">
        </quad>
        <particle global="false" pos="">
            <scale>
                <frame y="1" durtime="0" x="1" z="1"/>
                <frame y="1" durtime="2000" x="1.5" z="1.5"/>
            </scale>
        </particle>
    </effect>
    ```

  • Render graphics using Doubles in Graphics2D

    - by thedeadlybutter
    Currently, I have a JFrame for my game to render in, and I'm using Graphics2D for drawing (the game's graphics are fairly simple 2D sprites). However, my delta variable is a double, and all of the Graphics2D methods (and Graphics) take int coordinates. I tried to cast the delta to an int, but it just rounds down to 0. So my question is: how can I render graphics using Graphics2D in Java with coordinates that are doubles? Can I convert it to work with Graphics2D if there is no built-in way? Or is there a graphics library that supports doubles for coordinates?
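
    Graphics2D does accept double precision, just not through the int-based Graphics overloads: the java.awt.geom shapes and AffineTransform work in doubles. A small sketch:

    ```java
    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.geom.AffineTransform;
    import java.awt.geom.Rectangle2D;
    import java.awt.image.BufferedImage;

    // Sketch: two ways to draw at double-precision coordinates.
    void render(Graphics g, BufferedImage sprite, double x, double y) {
        Graphics2D g2 = (Graphics2D) g;

        // Option 1: the geom shape classes take doubles directly
        g2.fill(new Rectangle2D.Double(x, y, 32, 32));

        // Option 2: position an image with an AffineTransform (also doubles)
        AffineTransform at = AffineTransform.getTranslateInstance(x, y);
        g2.drawImage(sprite, at, null);
    }
    ```

    Note the pixels still land on an integer raster in the end, but accumulation happens in doubles, so a small per-frame delta no longer collapses to 0.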

  • Tips and Tools for creating Spritesheet animations

    - by Spooks
    I am looking for a tool I can use to create sprite sheets easily. Right now I am using Illustrator, but I can never get the center of the character in exactly the same position in every frame, so the character looks like it is moving around (even though it's always in one place) while the sprite sheet loops. Are there better tools I could be using? Also, what tips would you give for working with a sprite sheet? Should I create each part of the character in individual layers (left arm, right arm, body, etc.) or everything at once? Any other tips would also be helpful. Thank you.

  • Can you shade a specific section of a sprite? If so, how?

    - by l5p4ngl312
    I have been working on an isometric Minecraft-esque game engine for a strategy game I plan on making. As you can see in the linked screenshot, it really needs some sort of shading: it is difficult to distinguish between separate elevations when the camera is facing away from the slope, because everything is the same shade. So my question is: can I shade just a specific section of a sprite? All of those blocks are just sprites, so if I shaded the entire image, it would shade the whole block. I am using LWJGL. Here's a link to a screenshot from the engine: http://i44.tinypic.com/qxqlix.jpg
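
    One approach that avoids per-pixel work, assuming each visible face of a block is drawn as its own quad (a sketch, not a drop-in fix): tint the draw with the current color, which OpenGL multiplies into the texture under the default GL_MODULATE texture environment:

    ```java
    import static org.lwjgl.opengl.GL11.*;

    // Sketch: draw one face of a block darker or lighter by setting the
    // vertex color before the quad; with GL_MODULATE (the default texture
    // env mode) the color multiplies the texture.
    void drawFace(float x, float y, float size, float shade) {
        glColor3f(shade, shade, shade);   // 1.0f = unshaded, 0.6f = darker face
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(x,        y);
        glTexCoord2f(1, 0); glVertex2f(x + size, y);
        glTexCoord2f(1, 1); glVertex2f(x + size, y + size);
        glTexCoord2f(0, 1); glVertex2f(x,        y + size);
        glEnd();
        glColor3f(1f, 1f, 1f);            // reset so later draws are untinted
    }
    ```

    If the whole block really is one sprite, the texture atlas can be split so each face draws as a separate quad, or a fragment shader can apply the darkening to a region instead.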

  • Rotate 3D Model from a custom position

    - by Nipuna Silva
    I have a 3D model like the one above, which I want to rotate around a given location (marked in red), but I can only rotate it around the middle. How can I rotate it around a custom point? Edit: I was able to rotate the model around the position below by getting the radius of the model and applying it to the world matrix: Vector3 point = new Vector3(-radius, 0, 0); world = Matrix.CreateTranslation(-radius, 0, 0); But now I cannot change the position of the object; it is always centered in the middle of the screen. I think that's because I applied the above code. How can I place it anywhere I want?
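
    The usual fix, sketched here in XNA-style C# (variable names are illustrative): compose the world matrix as translate-to-pivot, rotate, translate-back, and only then translate to the object's world position. Order matters because XNA matrix products apply left to right:

    ```csharp
    // Sketch: rotate around a pivot given in model space, then place the
    // model anywhere in the world.
    Vector3 pivot    = new Vector3(-radius, 0, 0); // the red point, in model space
    Vector3 position = new Vector3(10, 0, -5);     // where the model should sit

    Matrix world =
          Matrix.CreateTranslation(-pivot)     // move the pivot to the origin
        * Matrix.CreateRotationY(angle)        // rotate around the pivot
        * Matrix.CreateTranslation(pivot)      // move back
        * Matrix.CreateTranslation(position);  // finally place in the world
    ```

    The earlier version worked for rotation but pinned the model to the origin because the final placement translation was missing.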

  • EaseFunction in LoopEntityModifier

    - by Siddharth
    For my game, I need an EaseFunction in a LoopEntityModifier. I am rotating a ball around a certain object, and to give it some flair I want to use an EaseFunction. The ball should make around 4 to 5 revolutions around the object, but if I put an EaseFunction in the RotationModifier, then every loop applies the ease, while I want the effect to occur only once, at the start or the end. So if I could apply an EaseFunction to the LoopEntityModifier itself, that would work for me, or something similar would also do. At present my code looks like this: new LoopEntityModifier(new RotationModifier(...)); I hope someone has some idea on this.
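
    LoopEntityModifier has no ease parameter, but the same effect can be assembled with a SequenceEntityModifier: ease only the first and/or last revolution and loop the linear ones between them. A sketch in AndEngine style; durations and ease classes are placeholders:

    ```java
    // Sketch: ease in on the first revolution, spin linearly for three more,
    // then ease out on the last one. Only the outer revolutions are eased.
    ball.registerEntityModifier(new SequenceEntityModifier(
        new RotationModifier(1.0f, 0, 360, EaseQuadIn.getInstance()),   // eased start
        new LoopEntityModifier(new RotationModifier(0.6f, 0, 360), 3),  // linear middle
        new RotationModifier(1.0f, 0, 360, EaseQuadOut.getInstance())   // eased finish
    ));
    ```

    Matching the angular speed at the seams (the ease-in's end speed vs. the linear loops' constant speed) is what makes the transition look continuous, so tune the durations accordingly.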

  • How do I set quad buffering with JOGL 2.0?

    - by tony danza
    I'm trying to create a 3D renderer for stereo vision with quad buffering in Processing/Java. The hardware I'm using is ready for this, so that's not the problem. I had a stereo.jar library for JOGL 1.0 working with Processing 1.5, but now I have to use Processing 2.0 and JOGL 2.0, so I have to adapt the library. Some things have changed in the source code of JOGL and Processing, and I'm having a hard time figuring out how to tell Processing I want to use quad buffering. Here's the previous code:

    ```java
    public class Theatre extends PGraphicsOpenGL {
        protected void allocate()
        {
            if (context == null)
            {
                // If OpenGL 2X or 4X smoothing is enabled, setup caps object for them
                GLCapabilities capabilities = new GLCapabilities();

                // Starting in release 0158, OpenGL smoothing is always enabled
                if (!hints[DISABLE_OPENGL_2X_SMOOTH])
                {
                    capabilities.setSampleBuffers(true);
                    capabilities.setNumSamples(2);
                }
                else if (hints[ENABLE_OPENGL_4X_SMOOTH])
                {
                    capabilities.setSampleBuffers(true);
                    capabilities.setNumSamples(4);
                }
                capabilities.setStereo(true);

                // get a rendering surface and a context for this canvas
                GLDrawableFactory factory = GLDrawableFactory.getFactory();
                drawable = factory.getGLDrawable(parent, capabilities, null);
                context = drawable.createContext(null);

                // need to get proper opengl context since will be needed below
                gl = context.getGL();

                // Flag defaults to be reset on the next trip into beginDraw().
                settingsInited = false;
            }
            else
            {
                // The following three lines are a fix for Bug #1176
                // http://dev.processing.org/bugs/show_bug.cgi?id=1176
                context.destroy();
                context = drawable.createContext(null);
                gl = context.getGL();
                reapplySettings();
            }
        }
    }
    ```

    This was the renderer of the old library. In order to use it, I needed to call size(100, 100, "stereo.Theatre"). Now I'm trying to do the stereo directly in my Processing sketch. Here's what I'm trying:

    ```java
    PGraphicsOpenGL pg = ((PGraphicsOpenGL) g);
    pgl = pg.beginPGL();
    gl = pgl.gl;
    glu = pg.pgl.glu;
    gl2 = pgl.gl.getGL2();

    GLProfile profile = GLProfile.get(GLProfile.GL2);
    GLCapabilities capabilities = new GLCapabilities(profile);
    capabilities.setSampleBuffers(true);
    capabilities.setNumSamples(4);
    capabilities.setStereo(true);
    GLDrawableFactory factory = GLDrawableFactory.getFactory(profile);
    ```

    If I go on, I should do something like drawable = factory.getGLDrawable(parent, capabilities, null); but drawable isn't a field anymore and I can't find a way to do it. How do I set quad buffering? If I try gl2.glDrawBuffer(GL.GL_BACK_RIGHT); it obviously doesn't work. Thanks.
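
    In JOGL 2 the drawable belongs to a window or canvas rather than being fetched from the factory. One hedged sketch, bypassing Processing's own surface: request a stereo-capable GLCanvas up front, since quad buffering must be granted at context creation and cannot be switched on later with glDrawBuffer alone. Package names are from the JOGL 2 of that era (javax.media.opengl; newer builds use com.jogamp.opengl):

    ```java
    import javax.media.opengl.GLCapabilities;
    import javax.media.opengl.GLProfile;
    import javax.media.opengl.awt.GLCanvas;

    // Sketch: request a quad-buffered (stereo) visual at creation time;
    // the framebuffer must be allocated with back-left and back-right buffers.
    GLProfile profile = GLProfile.get(GLProfile.GL2);
    GLCapabilities caps = new GLCapabilities(profile);
    caps.setStereo(true);

    GLCanvas canvas = new GLCanvas(caps);   // the canvas owns the drawable now

    // Inside the GLEventListener's display(GLAutoDrawable drawable):
    //   GL2 gl2 = drawable.getGL().getGL2();
    //   gl2.glDrawBuffer(GL2.GL_BACK_LEFT);   /* render left eye  */
    //   gl2.glDrawBuffer(GL2.GL_BACK_RIGHT);  /* render right eye */
    ```

    glDrawBuffer(GL_BACK_RIGHT) fails in the current attempt precisely because Processing's default surface was not created with setStereo(true), so no right back buffer exists.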

  • Weird rotation problem

    - by Phil
    I'm creating a simple tank game. No matter what I do, the turret keeps facing the target with its side. I just can't figure out how to turn it 90 degrees in Y once so it faces the target correctly. I've checked the pivot in Maya, and it doesn't matter how I change it. This is the code I use to calculate how to face the target:

    ```csharp
    void LookAt()
    {
        var forwardA = transform.forward;
        var forwardB = (toLookAt.transform.position - transform.position);

        var angleA = Mathf.Atan2(forwardA.x, forwardA.z) * Mathf.Rad2Deg;
        var angleB = Mathf.Atan2(forwardB.x, forwardB.z) * Mathf.Rad2Deg;
        var angleDiff = Mathf.DeltaAngle(angleA, angleB);

        if (angleDiff > 20)
        {
            transform.Rotate(new Vector3(0, -turretSpeed * Time.deltaTime, 0));
        }
        else if (angleDiff < 20)
        {
            transform.Rotate(new Vector3(0, turretSpeed * Time.deltaTime, 0));
        }
    }
    ```

    I'm using Unity3D and would appreciate any help I can get. Thanks!
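
    If the mesh was modeled facing down its local X axis rather than Z, one fix (a sketch assuming Unity; meshOffsetY is a hypothetical constant you tune to -90 or 90) is to aim with LookRotation and bake in a fixed yaw offset. Alternatively, parent the mesh under an empty pivot object rotated 90 degrees once, and run the aiming code on the pivot:

    ```csharp
    // Sketch: aim at the target with a constant yaw offset compensating for
    // a mesh that was modeled sideways.
    public float meshOffsetY = -90f;  // tune: -90 or 90, whichever side faces the target
    public float turretSpeed = 90f;   // degrees per second

    void LookAt()
    {
        Vector3 toTarget = toLookAt.transform.position - transform.position;
        toTarget.y = 0f;                       // keep the turret level

        Quaternion aim = Quaternion.LookRotation(toTarget)
                       * Quaternion.Euler(0f, meshOffsetY, 0f);

        transform.rotation = Quaternion.RotateTowards(
            transform.rotation, aim, turretSpeed * Time.deltaTime);
    }
    ```

    This also removes the oscillation risk in the original code, where the two branches of the 20-degree threshold can fight each other around the target angle.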

  • How do you pack resources in a game when you have too many of them?

    - by ThePlan
    I've recently made a basic Space Invaders clone in C++ using the Allegro 5 framework. It took me a long time, but after I finished I realized I had about 10 sprites and 13 MB worth of DLLs (some people didn't even have the MinGW DLLs), which made people who played the game very confused. How can I "pack" all my resources so that I can easily add and remove data in my game and reduce the size taken by the resources, basically placing them in one spot? I'm using Code::Blocks.
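
    Allegro 5 ships a PhysicsFS addon for exactly this: put the assets into a single zip and mount it at startup. A sketch (the archive name is made up); the DLL bloat is a separate issue, usually addressed by static linking or by shipping the monolith build of Allegro:

    ```cpp
    #include <allegro5/allegro.h>
    #include <allegro5/allegro_physfs.h>
    #include <physfs.h>

    // Sketch: mount one zip archive and route Allegro's file I/O through
    // PhysicsFS, so al_load_bitmap() etc. read from inside the archive.
    bool init_resources(const char* argv0)
    {
        if (!PHYSFS_init(argv0))
            return false;
        if (!PHYSFS_mount("data.zip", "/", 1))   // all assets live in data.zip
            return false;
        al_set_physfs_file_interface();          // from the allegro_physfs addon
        return true;
    }

    // afterwards: ALLEGRO_BITMAP* ship = al_load_bitmap("sprites/ship.png");
    ```

    Adding or removing an asset is then just editing the zip; nothing in the executable changes.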

  • OpenGL ES 1 Pixel Error?

    - by Beginner001
    I am developing a 2D game for Android using OpenGL ES 1.0. It uses a simple orthographic projection and textures for the sprites. One of these textures has a small line (it looks like 1 pixel) all the way across the top that has the same colors as the bottom 1-pixel line of the texture. It is almost as if the bottom row of the image raster was copied and pasted as the top row as well. Is anyone familiar with this type of error? What could the problem be?
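
    This is the classic symptom of the texture wrap mode: under the default GL_REPEAT, sampling at the top edge can blend in texels from the bottom row, which looks exactly like a copied 1-pixel line. A hedged sketch of the usual fix, applied when the texture is created/bound:

    ```java
    import javax.microedition.khronos.opengles.GL10;

    // Sketch: clamp both axes so edge samples never wrap around to the
    // opposite side of the texture.
    void setTextureParams(GL10 gl) {
        gl.glTexParameterf(GL10.GL_TEXTURE_2D,
                GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D,
                GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
    }
    ```

    If the line persists, the other usual suspect is UV coordinates landing exactly on the texture border; insetting the texture coordinates by half a texel has the same effect.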

  • Why does my model render differently than its preview?

    - by Raven Dreamer
    I've been importing .fbx files that I made with 3DS Max 2012 into Unity, and it's quite neat to see my models running around in game. However, I can't help but notice that the models, as they're rendered in game, vary substantially from their preview (and from what they looked like in 3DS Max). (Screenshots: in-game, Unity preview, 3DS Max.) My gut tells me that I'm not setting up Unity's lighting system properly. What, then, do I need to do to either my scene or my model to get the left-most picture to look like the middle one?

  • Looking for a small, light scene graph style abstraction lib for shader based OpenGL

    - by Pris
    I'm looking for a 'lean and mean' C/C++ scene graph library for OpenGL that doesn't use any deprecated functionality. It should be cross-platform (strictly speaking I just develop on Linux, so no love lost if it doesn't work on Windows), and it should be possible to deploy to mobile targets (i.e. OpenGL ES 2, and no crazy mandatory dependencies that wouldn't port well to modern mobile frameworks like iOS, Android, etc.), with a license that's compatible with closed-source software (LGPL or more liberal). Specific nice-to-haves would be: cameras and viewers (trackball, fly-by, etc.); object transform hierarchies (if B is a child of A and you move A, B has the same transform applied to it); simple animation; scene optimization (frustum culling, use of VBOs, minimizing state changes, etc.); text. I've played around with OpenSceneGraph a lot and it's pretty amazing for fixed-function pipeline stuff, but I've had a few problems using it with the programmable pipeline, and after going through their mailing list it seems several people have had similar issues (going back years). Kitware's VES looks neat (http://www.vtk.org/Wiki/VES), but VES + VTK is pretty heavy. VTK is also typically used for analyzing scientific data, and I've read that it's not that appropriate for the general use case (not that great at rendering a lot of objects on scene, etc.). I'm currently looking at VisualizationLibrary (http://www.visualizationlibrary.org/documentation/pag_gallery.html), which looks like it offers some of the functionality I'd like, but it doesn't explicitly support mobile targets. Other solutions like Ogre, Horde3D, Irrlicht, etc. tend to be full-on game engines, and that's not really what I'm looking for. I'd like some suggestions for other libraries that I may have missed... please note I'm not willing to roll my own solution from scratch.

  • Inverse projection: question about w coordinate

    - by fayeWilly
    I have to perform an inverse projection in a shader, from the u/v of a render target. What I do is: get NDC as 2*(u, v, depth) - 1, then world space as tmp = (P*V)^-1 * (NDC, 1.0); world_space = tmp / tmp.w. This apparently works, but I am confused about the w division there. Why does this work? Shouldn't there be a multiplication by w somewhere (as in the "forward" pipeline, where there is the perspective division)? Thank you, Faye
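
    Why the division works (a standard homogeneous-coordinates argument, independent of any particular API): the forward pipeline computes clip coordinates and then divides by w, so inverting from NDC recovers the world point only up to scale, and the result's own w component carries exactly the factor needed to normalize.

    ```latex
    % Forward pipeline: clip coords, then perspective division
    c = PV\,x_w, \qquad (\mathrm{ndc},\,1) = \frac{c}{c_w}
    % Applying the inverse projection-view matrix to the NDC point:
    (PV)^{-1}(\mathrm{ndc},\,1) \;=\; (PV)^{-1}\,\frac{c}{c_w} \;=\; \frac{x_w}{c_w}
    ```

    Since x_w = (x, y, z, 1), the result x_w / c_w has w component 1/c_w, and dividing the vector by its own w rescales it back to (x, y, z, 1). The "missing" multiplication by c_w and the final division by tmp.w are the same normalization step: c_w is never needed explicitly because it reappears in the w slot.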

  • Game state management (Game, Menu, Titlescreen, etc)

    - by munchor
    Basically, in every single game I've made so far, I always have a variable like current_state, which can be "game", "titlescreen", "gameoverscreen", etc. Then in my Update function I have a huge:

    ```python
    if current_state == "game":
        # game stuff ...
    elif current_state == "titlescreen":
        ...
    ```

    However, I don't feel like this is a professional/clean way of handling states. Any ideas on how to do this in a better way? Or is this the standard way?
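
    A common alternative is the state pattern: each screen is an object with its own update/draw, and a manager holds the current one (often as a stack, so a pause menu can sit on top of the game). A minimal sketch in Python; start_pressed, draw_title, etc. are hypothetical stand-ins for your input and rendering calls:

    ```python
    # Sketch of a state-pattern screen manager. Each state owns its own
    # logic; switching states replaces the object instead of branching.
    class GameState:
        def update(self, dt): ...
        def draw(self): ...

    class TitleScreen(GameState):
        def __init__(self, manager):
            self.manager = manager
        def update(self, dt):
            if start_pressed():                              # hypothetical
                self.manager.switch(PlayingState(self.manager))
        def draw(self):
            draw_title()                                     # hypothetical

    class PlayingState(GameState):
        def __init__(self, manager):
            self.manager = manager
        def update(self, dt):
            update_world(dt)                                 # hypothetical
        def draw(self):
            draw_world()                                     # hypothetical

    class StateManager:
        def __init__(self):
            self.current = None
        def switch(self, new_state):
            self.current = new_state
        def update(self, dt):
            if self.current:
                self.current.update(dt)   # no giant if/elif chain anywhere
                self.current.draw()

    # usage: mgr = StateManager(); mgr.switch(TitleScreen(mgr))
    # then each frame: mgr.update(dt)
    ```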

  • How to implement an explosion in OpenGL?

    - by Chan
    I'm relatively new to OpenGL and I'm clueless about how to implement an explosion. Could anyone give me some ideas on how to start? Suppose the explosion occurs at location (x, y, z). I'm thinking of randomly generating a collection of vectors with (x, y, z) as the origin, then drawing a particle (glutSolidCube) that moves along each vector for some period of time, say 1000 updates, after which it disappears. Is this approach feasible? A minimal example would be greatly appreciated.
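
    The approach is feasible; it is a basic CPU particle system. A minimal hedged sketch in GLUT-style immediate mode to match the question (the counts and speeds are arbitrary):

    ```cpp
    #include <cstdlib>
    #include <vector>
    #include <GL/glut.h>

    // Sketch: each particle gets a random direction from the blast point
    // and a lifetime; every update it advances until it expires.
    struct Particle {
        float px, py, pz;   // position
        float vx, vy, vz;   // velocity
        int   life;         // updates remaining
    };

    std::vector<Particle> particles;

    void spawnExplosion(float x, float y, float z, int count = 100) {
        for (int i = 0; i < count; ++i) {
            float dx = rand() / (float)RAND_MAX - 0.5f;
            float dy = rand() / (float)RAND_MAX - 0.5f;
            float dz = rand() / (float)RAND_MAX - 0.5f;
            particles.push_back({x, y, z, dx * 0.1f, dy * 0.1f, dz * 0.1f, 1000});
        }
    }

    void updateAndDrawParticles() {
        for (size_t i = 0; i < particles.size(); ) {
            Particle& p = particles[i];
            p.px += p.vx; p.py += p.vy; p.pz += p.vz;
            if (--p.life <= 0) {                 // expired: swap-erase
                particles[i] = particles.back();
                particles.pop_back();
                continue;
            }
            glPushMatrix();
            glTranslatef(p.px, p.py, p.pz);
            glutSolidCube(0.05);
            glPopMatrix();
            ++i;
        }
    }
    ```

    Refinements once this works: add gravity to the velocities, fade color/size with remaining life, and replace the cubes with textured point sprites for volume.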

  • SDL2 with OpenGL -- weird results, what's wrong?

    - by ber4444
    I'm porting an app to iOS and therefore need to upgrade it from SDL 1.2 to SDL2 (so far I'm testing it as an OS X desktop app only). However, when running the code with SDL2, I'm getting weird results, as shown in the second image below (the first image is how it looks with SDL 1.2, correctly). The single changeset that causes this is this one; do you see something obviously wrong there, or does SDL2 have some OpenGL nuances I'm unaware of? My SDL is built from changeset dd7e57847ea9 from HG (since then there is one "Allow specifying of OpenGL 3.2 Core Profile on Mac OS X" commit; not sure if that would help).

  • Precision loss when transforming from cartesian to isometric

    - by Justin Skiles
    My goal is to display a tile map in isometric projection. The tile map is 25 tiles across and 25 tiles down, and each tile is 32x32. See below for how I'm accomplishing this.

    World space to screen space rotation (45 degrees), using a 2D rotation matrix:

    ```csharp
    double rotation = Math.PI / 4;
    double rotatedX = (tileWorldX * Math.Cos(rotation)) - (tileWorldY * Math.Sin(rotation));
    double rotatedY = (tileWorldX * Math.Sin(rotation)) + (tileWorldY * Math.Cos(rotation));
    ```

    World space to screen space scale: I simply scale down the Y value by a factor of 0.5.

    Problem: it works, kind of. There are some tiny 1-2px gaps between some of the tiles when rendering. I think there's some precision loss somewhere, or I'm not understanding how to get these tiles to fit together perfectly. I'm not truncating or converting my values to non-decimal types until I absolutely have to (when I pass them to the render method, which only takes integers). I'm not sure how to guarantee pixel-perfect rendering precision when I'm rotating and scaling at a higher level of precision. Any advice? Do I need to supply more information?
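
    A common way to avoid the gaps entirely (a sketch; the tile-size constants are illustrative): skip the floating-point rotate-and-scale and use the standard integer 2:1 isometric mapping, which is the same 45-degree rotation plus 0.5 Y-scale folded into exact integer arithmetic, so adjacent tiles always share edges exactly:

    ```csharp
    // Sketch: exact integer isometric projection for a diamond (2:1) layout.
    // Equivalent to rotate-45-then-halve-Y, but with no rounding error.
    const int TileW = 64;   // on-screen diamond width
    const int TileH = 32;   // on-screen diamond height

    void TileToScreen(int tileX, int tileY, out int screenX, out int screenY)
    {
        screenX = (tileX - tileY) * (TileW / 2);
        screenY = (tileX + tileY) * (TileH / 2);
    }
    ```

    Because each tile's corners are computed from integers, two neighbors always produce bit-identical shared edge coordinates, which is exactly what independent float rounding cannot guarantee.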

  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup: I have an entity-component architecture where entities can have a set of attributes (which are pure data with no behavior), and there exist systems that run the entity logic and act on that data. Essentially, in somewhat pseudo-code:

    ```
    Entity {
        id;
        map<id_type, Attribute> attributes;
    }

    System {
        update();
        vector<Entity> entities;
    }
    ```

    A system that just moves all entities along at a constant rate might be:

    ```
    MovementSystem extends System {
        update() {
            for each entity in entities:
                position = entity.attributes["position"];
                position += vec3(1,1,1);
        }
    }
    ```

    Essentially, I'm trying to parallelise update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components so different threads can execute the update of the same system, but for a different subset of entities registered with that system.

    Problem: in reality, these systems sometimes require that entities interact (read/write data from/to each other), sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the currently processed entity), but sometimes between different systems that depend on each other (e.g. a movement system that requires data from a system that processes user input). Now, when trying to parallelize the update phases of entity/component systems, the phase in which data (components/attributes) from entities is read and used to compute something, and the phase in which the modified data is written back to entities, need to be separated in order to avoid data races. Otherwise the only way (not counting just "critical section"-ing everything) to avoid them is to serialize the parts of the update process that depend on other parts. This seems ugly. To me it would seem more elegant to (ideally) have all processing running in parallel, where a system may read data from all entities as it wishes but doesn't write modifications back until some later point. The fact that this is even possible rests on the assumption that modification write-backs are usually very small in complexity and cost little performance, whereas computations are very expensive (relatively). So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (threads working a larger share of the time instead of waiting). A concrete example might be a system that updates physics. The system needs to both read and write a lot of data to and from entities. Optimally, all available threads would update a subset of all entities registered with the physics system. In the case of the physics system this isn't trivially possible because of race conditions. So without a workaround, we would have to find other systems to run in parallel (which don't modify the same data as the physics system); otherwise the remaining threads are waiting and wasting time. However, that has disadvantages: practically, the L3 cache is pretty much always better utilized when updating a large system with multiple threads, as opposed to multiple systems at once which all act on different sets of data; and finding and assembling other systems to run in parallel can be extremely time-consuming to design well enough to optimize performance. Sometimes it might not even be possible at all, because a system may simply depend on data that is touched by all other systems.

    Solution? In my thinking, a possible solution would be a system where reading/updating and writing of data are separated: in one expensive phase, systems only read data and compute what they need to compute, and then in a separate, performance-wise cheap, write phase, attributes of entities that need to be modified are finally written back to the entities.

    The Question: how might such a system be implemented to achieve optimal performance, as well as making programmer life easier? What are the implementation details of such a system, and what might have to be changed in the existing EC architecture to accommodate this solution?
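
    One concrete way to implement the read/compute/write split (a sketch, assuming components live in contiguous arrays): double-buffer each component array. All systems read the front buffer in parallel, since it is immutable for the duration of the frame, write their results into the back buffer, and a cheap swap publishes the writes once every system has joined:

    ```cpp
    #include <vector>

    // Sketch: double-buffered component storage. Worker threads read
    // read() freely (immutable this frame) and write into write();
    // swap() publishes all writes at once after the systems finish.
    template <typename T>
    class DoubleBuffered {
        std::vector<T> buffers[2];
        int front = 0;
    public:
        const std::vector<T>& read()  const { return buffers[front]; }
        std::vector<T>&       write()       { return buffers[1 - front]; }
        void swap() {
            front = 1 - front;
            buffers[1 - front] = buffers[front]; // carry published state forward
        }
    };

    struct Position { float x, y, z; };

    // Frame outline (parallel_for is whatever task scheduler you use):
    //   parallel_for over entities:
    //       Position p = positions.read()[i];
    //       positions.write()[i] = integrate(p, dt);
    //   ... all systems join ...
    //   positions.swap();   // the cheap, single-threaded write phase
    ```

    The trade-off is memory (two copies of each buffered component) and one frame of latency in cross-system visibility, which is usually acceptable; components that are never contended can skip the buffering entirely.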

  • MonoGame: not all letters being drawn with DrawString

    - by Lex Webb
    I'm currently making a dynamic user interface for my game and am setting up text on my buttons. I'm having an odd issue where, when I use a specific piece of code to determine the text position, not all of the text passed to DrawString is rendered. Even weirder, if I insert another DrawString after this, drawing more text at a different place, different parts of the text will be drawn. The code for drawing my button with the text attached is:

    ```csharp
    public override void Draw(SpriteBatch sb, GameTime gt)
    {
        sb.Draw(currentImage, GetRelativeRectangle(), Color.White);
        sb.DrawString(font, text, new Vector2(
            this.GetRelativeDrawOffset().X + this.Width / 2 - font.MeasureString(text).X / 2,
            this.GetRelativeDrawOffset().Y + this.Height / 2 - font.MeasureString(text).Y / 2),
            textColor);
    }
    ```

    The methods in the creation of the Vector2 simply get the draw position of the button; I'm then doing some calculation to center the text. This produces this when the text is set to 'Test': [screenshot] And then when I enter this piece of code below the first DrawString:

    ```csharp
    sb.DrawString(font, "test", new Vector2(500, 50), Color.Pink);
    ```

    I should mention that the grey square is being drawn in the same SpriteBatch, before the button and the text. Any ideas as to what could be causing this? I have a feeling it may be due to draw order, but I have no idea how to control that.
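
    Hard to be sure without the screenshots, but one thing worth ruling out: fractional draw positions. The centering math above almost always produces non-integer coordinates, and with point sampling thin glyph columns can disappear. A hedged sketch that snaps the position and also makes the layering explicit via SpriteSortMode, since draw order was the other suspicion:

    ```csharp
    // Sketch: snap the computed position to whole pixels so glyphs are not
    // resampled across fractional coordinates, and draw in explicit order.
    Vector2 size = font.MeasureString(text);
    Vector2 pos = new Vector2(
        (float)Math.Round(GetRelativeDrawOffset().X + Width  / 2f - size.X / 2f),
        (float)Math.Round(GetRelativeDrawOffset().Y + Height / 2f - size.Y / 2f));

    sb.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
    sb.Draw(currentImage, GetRelativeRectangle(), Color.White);  // button first
    sb.DrawString(font, text, pos, textColor);                   // text on top
    sb.End();
    ```

    With SpriteSortMode.Deferred, sprites are drawn in the order the calls are made, so anything issued after the text in the same batch cannot be hiding it.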

  • How to get the Exact Collision Point and ignore the collision (from 2 "ghost bodies")

    - by Moritz
    I have a very basic problem with Box2D. For an arena-type game where you can throw scriptable "missiles" at other players, I decided to use Box2D for the collision detection between the players and the missiles. Players and missiles have their own circular shape with a specific (varying) size. But I don't want to use dynamic bodies, because the missiles need to move themselves any way they want (defined in the script) and shouldn't be resolved unless the script wants it. The behavior I am looking for is the following (for each time step): the velocity of each missile is set by the specific missile script; each missile is moved according to that velocity; if a collision occurs now, I want to get the exact position of the impact, and then I need a mechanism to decide whether the missile should ignore the collision (for example, a collision between two fireballs that shouldn't interact) or take it (so they are resolved and don't overlap anymore). So is there a way in Box2D to create ghost bodies and listen to collisions from them, then decide whether they should ignore the collision or take it and resolve their position? I hope I was clear enough and would be happy about any help!
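
    Box2D supports exactly this through b2ContactListener::PreSolve: the contact manifold is available there (so the impact points can be read via b2WorldManifold), and calling contact->SetEnabled(false) makes the engine skip resolution for that contact this step, turning an ordinary fixture into a per-collision "ghost". A sketch; ScriptWantsCollision is a hypothetical hook into your missile scripts:

    ```cpp
    #include <Box2D/Box2D.h>

    // Sketch: decide per contact, per step, whether Box2D should resolve it.
    // SetEnabled(false) only lasts for the current time step, so the
    // decision is naturally re-evaluated every frame.
    class MissileContactListener : public b2ContactListener {
        void PreSolve(b2Contact* contact, const b2Manifold* oldManifold) override {
            b2WorldManifold wm;
            contact->GetWorldManifold(&wm);   // wm.points[0..pointCount) = impact points

            void* a = contact->GetFixtureA()->GetBody()->GetUserData();
            void* b = contact->GetFixtureB()->GetBody()->GetUserData();

            // hypothetical script hook deciding if these two should interact
            if (!ScriptWantsCollision(a, b, wm.points[0])) {
                contact->SetEnabled(false);   // ignore: no resolution this step
            }
        }
    };

    // world.SetContactListener(&listener);
    // The bodies can stay ordinary dynamic bodies whose velocities the
    // scripts set each step; collisions only "take" when the script agrees.
    ```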

  • Is AGS outdated for Point & Click Adventures?

    - by Aidan Moore
    Is Adventure Game Studio (AGS) outdated? I am working on a point-and-click adventure game built on the AGS engine, and just recently the question of 'is this outdated?' has come up. I'll admit AGS is rather old, and it kind of went out of style along with the P&C genre itself, but I have not found anything quite like it that specializes in this specific format of games. So my big question is not only 'is this outdated?' but also 'is there a better alternative?'
