Search Results

Search found 19338 results on 774 pages for 'game loop'.


  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup: I have an entity-component architecture where entities can have a set of attributes (which are pure data with no behavior), and there exist systems that run the entity logic and act on that data. Essentially, in somewhat pseudo-code:

        Entity
        {
            id;
            map<id_type, Attribute> attributes;
        }

        System
        {
            update();
            vector<Entity> entities;
        }

    A system that just moves all entities along at a constant rate might be:

        MovementSystem extends System
        {
            update()
            {
                for each entity in entities
                    position = entity.attributes["position"];
                    position += vec3(1,1,1);
            }
        }

    Essentially, I'm trying to parallelize update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components, so that different threads can execute the update of the same system but for a different subset of the entities registered with that system.

    Problem: In reality, these systems sometimes require that entities interact (read/write data from/to each other), sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the currently processed entity), but sometimes between different systems that depend on each other (e.g. a movement system that requires data from a system that processes user input). Now, when trying to parallelize the update phases of entity/component systems, the phase in which data (components/attributes) is read from entities and used to compute something must be separated from the phase in which the modified data is written back to entities, in order to avoid data races. Otherwise the only way to avoid them (short of wrapping everything in critical sections) is to serialize the parts of the update process that depend on other parts. This seems ugly.

    To me it would seem more elegant to (ideally) have all processing run in parallel, where a system may read data from all entities as it wishes but doesn't write modifications back until some later point. The fact that this is even possible rests on the assumption that modification write-backs are usually very small in complexity and don't cost much performance, whereas computations are (relatively) very expensive. So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (threads spend a greater share of their time working instead of waiting).

    A concrete example might be a system that updates physics. The system needs to both read and write a lot of data to and from entities. Optimally, all available threads would each update a subset of the entities registered with the physics system. In the case of the physics system this isn't trivially possible because of race conditions. So without a workaround, we would have to find other systems to run in parallel (ones that don't modify the same data as the physics system); otherwise the remaining threads are waiting and wasting time. However, that has disadvantages:

    - Practically, the L3 cache is almost always better utilized when updating one large system with multiple threads than when running multiple systems at once that all act on different sets of data.
    - Finding and assembling other systems to run in parallel can be extremely time consuming to design well enough to optimize performance.
    - Sometimes it might not even be possible at all, because a system may depend on data that is touched by every other system.

    Solution? In my thinking, a possible solution would be a system where reading/updating and writing of data are separated, so that in one expensive phase systems only read data and compute what they need to compute, and then, in a separate and performance-wise cheap write phase, the attributes of entities that needed to be modified are finally written back.

    The Question: How might such a system be implemented to achieve optimal performance while also making the programmer's life easier? What are the implementation details of such a system, and what might have to be changed in an existing EC architecture to accommodate this solution?
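
    (A minimal C# sketch of the delayed-write idea -- WriteQueue and the entity accessors are illustrative names, not part of the question. During the parallel read/compute phase, systems record intended modifications as closures; a serial commit phase then applies them, which is cheap under the assumption above that write-backs are small compared to the computations:)

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class WriteQueue
        {
            // Thread-safe queue of write-backs recorded during the compute phase.
            readonly ConcurrentQueue<Action> pending = new ConcurrentQueue<Action>();

            public void Defer(Action write) { pending.Enqueue(write); }

            // Serial commit phase: apply all buffered writes in one place.
            public void Commit()
            {
                Action write;
                while (pending.TryDequeue(out write))
                    write();
            }
        }

        // Usage sketch: expensive reads run in parallel, writes are deferred.
        //
        //   Parallel.ForEach(entities, entity =>
        //   {
        //       var newPos = ComputeNewPosition(entity); // reads only
        //       queue.Defer(() => entity.Position = newPos);
        //   });
        //   queue.Commit();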

    Read the article

  • Precision loss when transforming from cartesian to isometric

    - by Justin Skiles
    My goal is to display a tile map in isometric projection. The tile map is 25 tiles across and 25 tiles down. Each tile is 32x32. Below is how I'm accomplishing this.

    World space to screen space rotation (45 degrees): using a 2D rotation matrix,

        double rotation = Math.PI / 4;
        double rotatedX = (tileWorldX * Math.Cos(rotation)) - (tileWorldY * Math.Sin(rotation));
        double rotatedY = (tileWorldX * Math.Sin(rotation)) + (tileWorldY * Math.Cos(rotation));

    World space to screen space scale (Y axis reduced by 50%): here I simply scale down the Y value by a factor of 0.5.

    Problem: And it works, kind of. There are some tiny 1px-2px gaps between some of the tiles when rendering. I think there's some precision loss somewhere, or I'm not understanding how to get these tiles to fit together perfectly. I'm not truncating or converting my values to non-decimal types until I absolutely have to (when I pass them to the render method, which only takes integers). I'm not sure how to guarantee pixel-perfect rendering precision when I'm rotating and scaling at a higher level of precision. Any advice? Do I need to supply more information?
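
    (One hedged way to eliminate the gaps entirely, assuming a standard 2:1 isometric layout -- the on-screen tile sizes below are illustrative: derive each tile's screen position from its integer tile indices instead of rotating per-tile world coordinates through trig. Adjacent tiles then share exactly the same corner pixels, so rounding seams cannot appear:)

        // Integer-only tile-to-screen mapping for a 2:1 isometric view.
        // Point here is Microsoft.Xna.Framework.Point.
        const int TileWidth = 64;  // assumed on-screen diamond width
        const int TileHeight = 32; // assumed on-screen diamond height

        Point TileToScreen(int tileX, int tileY)
        {
            // Neighbouring tiles share corners exactly: no 1-2 px seams.
            int screenX = (tileX - tileY) * (TileWidth / 2);
            int screenY = (tileX + tileY) * (TileHeight / 2);
            return new Point(screenX, screenY);
        }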

    Read the article

  • Strategies to Defeat Memory Editors for Cheating - Desktop Games

    - by ashes999
    I'm assuming we're talking about desktop games -- something the player downloads and runs on their local computer. There are many memory editors that allow players to find and freeze values, like your player's health. How do you prevent cheating? What strategies are effective in combating this kind of cheating? I'm looking for some good ones. Two that I use, which are mediocre, are:

    - Displaying values as a percentage instead of the raw number (e.g. 46/50 = 92% health)
    - A low-level class that holds values in an array and moves them with each change
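
    (One more strategy, sketched under the assumption of a C#/.NET game -- ObfuscatedInt is my name: never keep the raw value in memory, so a scan for "46, then 45" finds nothing, and re-key on every write so the stored bytes keep changing:)

        using System;

        class ObfuscatedInt
        {
            static readonly Random rng = new Random();
            int key;    // random mask
            int masked; // value ^ key

            public int Value
            {
                get { return masked ^ key; }
                set
                {
                    key = rng.Next(); // new mask on every write
                    masked = value ^ key;
                }
            }
        }

        // Usage: health.Value -= 10; the raw number never sits in memory,
        // which defeats naive search-and-freeze scans (but not a debugger).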

    Read the article

  • Is there a good way to get pixel-perfect collision detection in XNA?

    - by ashes999
    Is there a well-known way (or perhaps a reusable bit of code) for pixel-perfect collision detection in XNA? I assume this would also use polygons (boxes/triangles/circles) for a first-pass, quick test for collisions, and if that test indicated a collision, it would then search for a per-pixel collision. This can be complicated, because we have to account for scale, rotation, and transparency. WARNING: If you're using the sample code linked from the answer below, be aware that the scaling of the matrix is commented out for good reason. You don't need to uncomment it to get scaling to work.
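
    (For reference, a hedged sketch of the classic second pass for the unrotated, unscaled case -- it follows the widely copied XNA per-pixel sample. The Color arrays are assumed to be fetched once at load time with Texture2D.GetData, never per frame:)

        // Returns true if two opaque pixels of sprites A and B overlap.
        static bool IntersectPixels(Rectangle rectA, Color[] dataA,
                                    Rectangle rectB, Color[] dataB)
        {
            // Cheap first pass: the overlap of the two bounding rectangles.
            int top = Math.Max(rectA.Top, rectB.Top);
            int bottom = Math.Min(rectA.Bottom, rectB.Bottom);
            int left = Math.Max(rectA.Left, rectB.Left);
            int right = Math.Min(rectA.Right, rectB.Right);

            // Expensive second pass: scan only the overlap region.
            for (int y = top; y < bottom; y++)
            {
                for (int x = left; x < right; x++)
                {
                    Color a = dataA[(x - rectA.Left) + (y - rectA.Top) * rectA.Width];
                    Color b = dataB[(x - rectB.Left) + (y - rectB.Top) * rectB.Width];
                    if (a.A != 0 && b.A != 0)
                        return true; // two solid pixels touch
                }
            }
            return false;
        }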

    Read the article

  • Multiple textures on a mesh created in blender and imported in xna

    - by alecnash
    I created a cube in Blender which has multiple images applied to its faces. I am trying to import the model into XNA and get the same results as shown when rendering the model in Blender. I go through every mesh (for the cube there's only one) and through every part, but only the first image used in Blender is displayed on every face. The code I am using to fetch the texture looks like this:

        foreach (ModelMesh m in model.Meshes)
        {
            foreach (Effect e in m.Effects)
            {
                foreach (var part in m.MeshParts)
                {
                    e.CurrentTechnique = e.Techniques["Lambert"];
                    e.Parameters["view"].SetValue(camera.viewMatrix);
                    e.Parameters["projection"].SetValue(camera.projectionMatrix);
                    e.Parameters["colorMap"].SetValue(modelTextures[part.GetHashCode()]);
                }
            }
            m.Draw();
        }

    Am I missing something?
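
    (A hedged guess at the cause, not confirmed by the question: the nested loops set the texture parameter repeatedly on the shared effect, so whichever assignment runs last wins for the whole mesh when m.Draw() is called. In XNA each ModelMeshPart exposes its own Effect instance, so a per-part assignment might look like this:)

        foreach (ModelMesh mesh in model.Meshes)
        {
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                Effect e = part.Effect; // each part owns its effect instance
                e.CurrentTechnique = e.Techniques["Lambert"];
                e.Parameters["view"].SetValue(camera.viewMatrix);
                e.Parameters["projection"].SetValue(camera.projectionMatrix);
                // One texture per part instead of one per mesh.
                e.Parameters["colorMap"].SetValue(modelTextures[part.GetHashCode()]);
            }
            mesh.Draw();
        }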

    Read the article

  • I'm looking to learn how to apply traditional animation techniques to my graphics engine - are there any tutorials or online-resources that can help?

    - by blueberryfields
    There are many traditional animation techniques - such as blurring of motion, motion along an elliptical curve rather than a straight line, counter-motion before the beginning of a movement - which help create the appearance of a realistic 3D animated character. I'm looking to incorporate tools and shortcuts for some of these into my graphics engine, to make it easier for my end users to use these techniques in their animations. Is there a good resource listing the techniques and the principles behind them, especially as they might apply to a graphics engine or 3D animation?

    Read the article

  • How do I morph between meshes that have different vertex counts?

    - by elijaheac
    I am using MeshMorpher from the Unify wiki in my Unity project, and I want to be able to transform between arbitrary meshes. This utility works best when there are an equal number of vertices between the two meshes. Is there some way to equalize the vertex count between a set of meshes? I don't mean that this would reduce the vertex count of a mesh, but would rather add redundant vertices to any meshes with smaller counts. However, if there is an alternate method of handling this (other than increasing vertices), I would like to know.
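
    (A hedged sketch of the naive equalization described above, assuming Unity and UnityEngine.Vector3 -- PadVertices is my name. It pads the smaller array with redundant vertices until the counts match; morph quality depends heavily on which vertices get duplicated, and repeating the last one is just the simplest possible choice:)

        using UnityEngine;

        static class MeshMorphUtil
        {
            // Pad 'source' with degenerate duplicates until it has targetCount
            // entries, so two meshes can be morphed index-by-index.
            public static Vector3[] PadVertices(Vector3[] source, int targetCount)
            {
                if (source.Length >= targetCount)
                    return source;

                var padded = new Vector3[targetCount];
                source.CopyTo(padded, 0);
                for (int i = source.Length; i < targetCount; i++)
                    padded[i] = source[source.Length - 1]; // duplicate last vertex
                return padded;
            }
        }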

    Read the article

  • XNA Skinned Model - Keyframe.Bone out of range exception

    - by idlackage
    I'm getting an IndexOutOfRangeException on this line of AnimationPlayer.cs:

        boneTransforms[keyframe.Bone] = keyframe.Transform;

    I don't get what it's really referring to. The error happens when keyframe.Bone is 14, but I have no idea what that's supposed to mean. The 14th bone of my model? What would that even be? I read this thread, but nothing there seemed to work. I don't have many bones, stray edges/verts, unassigned verts, unparented/non-root bones, or bones with dots in the name. What else can I be missing? Thank you for any help!
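
    (For context: in the XNA Skinned Model sample that AnimationPlayer.cs comes from, keyframe.Bone is an index into the model's bone array, so this exception means the animation clip references a 15th bone that the exported skeleton doesn't have. A hedged sketch that validates the clip at load time and turns the crash into a diagnostic -- the type names follow the sample:)

        // Sketch: check every keyframe against the skeleton size once, at load
        // time, instead of crashing mid-playback.
        foreach (Keyframe keyframe in clip.Keyframes)
        {
            if (keyframe.Bone >= boneTransforms.Length)
                throw new InvalidOperationException(string.Format(
                    "Keyframe animates bone {0}, but the skeleton has only {1} bones; " +
                    "the clip and skeleton were probably exported from different rigs.",
                    keyframe.Bone, boneTransforms.Length));
        }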

    Read the article

  • Role of an entity state in a component based system?

    - by Paul
    Component-based entity systems are all the rage these days; everyone seems to agree they are the way to go, but no one really has a definitive implementation of such a system. I was wondering: what role do entity states (walking-left, standing, jumping, etc.) have in a CBS? Do they act like controllers (i.e. they handle events and change the entity's attributes based on those events)? What about cases where a state would, for example, require that the entity enter no-clip mode? Should that state, when entered, set the CollisionComponent of the entity to a null pointer or something? (Then, on exit, the state would restore the entity's CollisionComponent to its previous value.) Also, I guess it's the current state's job to change the entity's state to something else, right?
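
    (One hedged sketch of the "states are data, systems are behavior" answer -- Entity, Get<T>() and CollisionComponent.Enabled are hypothetical names, not from the question. Disabling collision through a flag, rather than nulling out the component, makes the restore-on-exit case trivial:)

        enum EntityState { Standing, WalkingLeft, Jumping, NoClip }

        class StateComponent
        {
            public EntityState Current;
            public EntityState Previous;
        }

        class StateSystem
        {
            public void Update(Entity entity)
            {
                var state = entity.Get<StateComponent>();
                if (state.Current == state.Previous)
                    return; // no transition this frame

                // On entering no-clip, disable collision; on leaving, restore it.
                var collision = entity.Get<CollisionComponent>();
                if (state.Current == EntityState.NoClip)
                    collision.Enabled = false;
                else if (state.Previous == EntityState.NoClip)
                    collision.Enabled = true;

                state.Previous = state.Current;
            }
        }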

    Read the article

  • Writing to a structured buffer with a compute shader (D3D11)

    - by Vertexwahn
    I have some problems writing to a structured buffer. First I create a structured buffer that is filled with float values from 0 to 99. Afterwards, a copy of the structured buffer is made into a CPU-accessible buffer in order to print the contents of the structured buffer to the console. The output is as expected (the numbers 0 to 99 appear on the console). Afterwards I use a compute shader that should change the contents of the structured buffer:

        RWStructuredBuffer<float> Result : register(u0);

        [numthreads(1, 1, 1)]
        void CS_main(uint3 GroupId : SV_GroupID)
        {
            Result[GroupId.x] = GroupId.x * 10;
        }

    But the compute shader does not change the contents of the structured buffer. The source code can be found here (main.cpp):
    https://bitbucket.org/Vertexwahn/cmakedemos/src/4abb067afd5781b87a553c4c720956668adca22a/D3D11ComputeShader/src/main.cpp?at=default

    FillCS.hlsl:
    https://bitbucket.org/Vertexwahn/cmakedemos/src/4abb067afd5781b87a553c4c720956668adca22a/D3D11ComputeShader/src/FillCS.hlsl?at=default

    Read the article

  • Error trying to display a semi-transparent rectangle

    - by scott lafoy
    I am trying to draw a semi-transparent rectangle, and I keep getting an error when setting the texture's data: "The size of the data passed in is too large or too small for this resource."

        dummyRectangle = new Rectangle(0, 0, 8, 8);
        Byte transparency_amount = 100; // 0 transparent; 255 opaque
        dummyTexture = new Texture2D(ScreenManager.GraphicsDevice, 8, 8);
        Color[] c = new Color[1];
        c[0] = Color.FromNonPremultiplied(255, 255, 255, transparency_amount);
        dummyTexture.SetData<Color>(0, dummyRectangle, c, 0, 1);

    The error is on the SetData line: "The size of the data passed in is too large or too small for this resource." Any help would be appreciated. Thank you.
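
    (A hedged reading of the error: the 8x8 rectangle covers 64 texels, but the data array holds a single Color and elementCount is 1. A sketch that fills the whole texture -- alternatively, create a 1x1 texture and let the draw call stretch it:)

        Color[] c = new Color[8 * 8]; // one Color per texel in the rectangle
        for (int i = 0; i < c.Length; i++)
            c[i] = Color.FromNonPremultiplied(255, 255, 255, transparency_amount);
        dummyTexture.SetData<Color>(0, dummyRectangle, c, 0, c.Length);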

    Read the article

  • 2D water with dynamic waves

    - by user1103457
    New Super Mario Bros has really cool 2D water that I'd like to learn how to create. Here's a video showing it. When something hits the water, it creates a wave. There are also constant "background" waves. You can get a good look at the constant waves just after 00:50, when the camera isn't moving.

    I assume the splashes in NSMB work as in the first part of this tutorial. But in NSMB the water also has constant waves on the surface, and the splashes look very different. Another difference is that in the tutorial, a splash first creates a deep "hole" in the water at its origin; in New Super Mario Bros this hole is absent or much smaller. I am referring to the splashes the player creates when jumping in and out of the water. How do they create the constant waves and the splashes? I am especially interested in the splashes, and in how they work together with the constant waves. I am programming in XNA. I've tried this myself, but couldn't really get it all to work well together.

    Bonus questions: How do they create the light spots just under the surface of the waves, and how do they texture the deeper parts of the water? This is the first time I've tried to create water like this.

    EDIT: I assume the constant waves are created using a sine function. The splashes are probably created the way they are in the tutorial. (But they are not the same, so I am still interested in how to make this kind of splash.) However, I have a lot of trouble combining those two things. I know I can use the sine function to set the height of a specific water column, but the splashes use the column's speed to determine its new height, and I can't figure out how to combine the two. Note that I am not asking exactly how the developers of New Super Mario Bros did this; I am just interested in ways to recreate a similar effect. This week is an exam week, so I don't have time to work on the code, but I will spend a lot of time on it afterwards. I am constantly thinking about it, which is why I will be checking comments, etc. I just won't be looking at code, since that might be too time-consuming.
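
    (A hedged sketch of one way to combine the two effects, in XNA-style C# -- all names and constants are illustrative. The splash columns are simulated as damped springs, as in the referenced tutorial, while the constant background wave is a sine that is only added at render time, so it never feeds back into the splash physics:)

        using System;

        class Water
        {
            const float RestLevel = 200f;   // assumed water line, in pixels

            public float[] Height;          // spring displacement per column
            public float[] Velocity;

            public Water(int columns)
            {
                Height = new float[columns];
                Velocity = new float[columns];
            }

            // Splash physics: damped springs pulled back toward zero
            // displacement (neighbour spreading omitted for brevity).
            public void UpdateSprings()
            {
                for (int i = 0; i < Height.Length; i++)
                {
                    float accel = -0.02f * Height[i] - 0.03f * Velocity[i];
                    Velocity[i] += accel;
                    Height[i] += Velocity[i];
                }
            }

            // Render height = rest level + splash displacement + cosmetic sine.
            public float SurfaceHeight(int i, float time)
            {
                float wave = 2f * (float)Math.Sin(i * 0.2f + time * 1.5f);
                return RestLevel + Height[i] + wave;
            }
        }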

    Read the article

  • Hedgewars code confusion

    - by BluFire
    I looked at an open source project (Hedgewars) that was built using many programming languages, such as C++ and Java. While I was looking through the code, I couldn't help noticing that all the math and physics were missing from the Java code. I imported the project folder called "SDL-android-project", which was a subfolder of the Android build and project files. My question is: where are all the math and physics inside the code? Do I have to look at the C++ code in order to see them? I think Hedgewars was originally programmed in C++, but the files are confusing because of the project's size and the fact that it has several programming languages inside.

    Read the article

  • GLSL compiler messages from different vendors [on hold]

    - by revers
    I'm writing a GLSL shader editor and I want to parse GLSL compiler messages to make hyperlinks to invalid lines in the shader code. I know that these messages are vendor-specific, but currently I only have access to AMD's video cards. I want to handle at least NVidia's and Intel's hardware, apart from AMD's. If you have a video card from a different vendor than AMD, could you please give me the output of the following C++ program:

        #include <GL/glew.h>
        #include <GL/freeglut.h>
        #include <cstring>
        #include <iostream>

        using namespace std;

        #define STRINGIFY(X) #X

        // Both shaders are deliberately broken to provoke compiler messages.
        static const char* fs = STRINGIFY(
            out vec4 out_Color;
            mat4 m;
            void main()
            {
                vec3 v3 = vec3(1.0);
                vec2 v2 = v3;
                out_Color = vec4(5.0 * v2.x, 1.0);
                vec3 k = 3.0;
                float = 5;
            }
        );

        static const char* vs = STRINGIFY(
            in vec3 in_Position;
            void main()
            {
                vec3 v(5);
                gl_Position = vec4(in_Position, 1.0);
            }
        );

        void printShaderInfoLog(GLint shader)
        {
            int infoLogLen = 0;
            int charsWritten = 0;
            GLchar* infoLog;
            glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLen);
            if (infoLogLen > 0)
            {
                infoLog = new GLchar[infoLogLen];
                glGetShaderInfoLog(shader, infoLogLen, &charsWritten, infoLog);
                cout << "Log:\n" << infoLog << endl;
                delete[] infoLog;
            }
        }

        void printProgramInfoLog(GLint program)
        {
            int infoLogLen = 0;
            int charsWritten = 0;
            GLchar* infoLog;
            glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infoLogLen);
            if (infoLogLen > 0)
            {
                infoLog = new GLchar[infoLogLen];
                glGetProgramInfoLog(program, infoLogLen, &charsWritten, infoLog);
                cout << "Program log:\n" << infoLog << endl;
                delete[] infoLog;
            }
        }

        void initShaders()
        {
            GLuint v = glCreateShader(GL_VERTEX_SHADER);
            GLuint f = glCreateShader(GL_FRAGMENT_SHADER);
            GLint vlen = strlen(vs);
            GLint flen = strlen(fs);
            glShaderSource(v, 1, &vs, &vlen);
            glShaderSource(f, 1, &fs, &flen);

            GLint compiled;
            bool succ = true;

            glCompileShader(v);
            glGetShaderiv(v, GL_COMPILE_STATUS, &compiled);
            if (!compiled)
            {
                cout << "Vertex shader not compiled." << endl;
                succ = false;
            }
            printShaderInfoLog(v);

            glCompileShader(f);
            glGetShaderiv(f, GL_COMPILE_STATUS, &compiled);
            if (!compiled)
            {
                cout << "Fragment shader not compiled." << endl;
                succ = false;
            }
            printShaderInfoLog(f);

            GLuint p = glCreateProgram();
            glAttachShader(p, v);
            glAttachShader(p, f);
            glLinkProgram(p);
            glUseProgram(p);
            printProgramInfoLog(p);

            if (!succ)
            {
                exit(-1);
            }
        }

        int main(int argc, char* argv[])
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
            glutInitWindowSize(600, 600);
            glutCreateWindow("Triangle Test");

            GLenum err = glewInit();
            if (GLEW_OK != err)
            {
                cout << "glewInit failed, aborting." << endl;
                exit(1);
            }
            cout << "Using GLEW " << glewGetString(GLEW_VERSION) << endl;

            const GLubyte* renderer = glGetString(GL_RENDERER);
            const GLubyte* vendor = glGetString(GL_VENDOR);
            const GLubyte* version = glGetString(GL_VERSION);
            const GLubyte* glslVersion = glGetString(GL_SHADING_LANGUAGE_VERSION);
            GLint major, minor;
            glGetIntegerv(GL_MAJOR_VERSION, &major);
            glGetIntegerv(GL_MINOR_VERSION, &minor);
            cout << "GL Vendor    : " << vendor << endl;
            cout << "GL Renderer  : " << renderer << endl;
            cout << "GL Version   : " << version << endl;
            cout << "GL Version   : " << major << "." << minor << endl;
            cout << "GLSL Version : " << glslVersion << endl;

            initShaders();
            return 0;
        }

    On my video card it gives:

        Using GLEW 1.7.0
        GL Vendor    : ATI Technologies Inc.
        GL Renderer  : ATI Radeon HD 4250
        GL Version   : 3.3.11631 Compatibility Profile Context
        GL Version   : 3.3
        GLSL Version : 3.30
        Vertex shader not compiled.
        Log:
        Vertex shader failed to compile with the following errors:
        ERROR: 0:1: error(#132) Syntax error: '5' parse error
        ERROR: error(#273) 1 compilation errors. No code generated
        Fragment shader not compiled.
        Log:
        Fragment shader failed to compile with the following errors:
        WARNING: 0:1: warning(#402) Implicit truncation of vector from size 3 to size 2.
        ERROR: 0:1: error(#174) Not enough data provided for construction constructor
        WARNING: 0:1: warning(#402) Implicit truncation of vector from size 1 to size 3.
        ERROR: 0:1: error(#132) Syntax error: '=' parse error
        ERROR: error(#273) 2 compilation errors. No code generated
        Program log:
        Vertex and Fragment shader(s) were not successfully compiled before glLinkProgram() was called. Link failed.

    Or, if you like, you could give me other compiler messages than the ones I proposed. To summarize, the question is: what are the GLSL compiler message formats (INFOs, WARNINGs, ERRORs) for the different vendors? Please give examples or a pattern explanation.

    EDIT: Ok, it seems this question is too broad, so, shortly: how do NVidia's and Intel's GLSL compilers present ERROR and WARNING messages? AMD/ATI uses patterns like this:

        ERROR: <position>:<line_number>: <message>
        WARNING: <position>:<line_number>: <message>

    (Examples are above.)

    Read the article

  • Error X3650 when compiling shader in XNA

    - by Saikai
    I'm attempting to convert the XBDEV.NET mosaic shader for use in my XNA project and having trouble. The compiler errors out because of the half globals. At first I tried replacing the globals and just writing the variables explicitly in the code, but that garbles the output. Next I tried replacing all the half types with float vars, but that still garbles the resulting image. I call the effect file from SpriteBatch.Begin(). Is there a way to convert this shader to the new pixel shader conventions? Are there any good tutorials for this topic? Here is the shader file for reference:

        /*********************************************************************/
        /*
            File: tiles.fx
            Details: Modified version of the NVIDIA Composer FX Demo Program
                     2004. Produces a tiled mosaic effect on the output.
            Requires: Vertex Shader 1.1, Pixel Shader 2.0
            Modified by: [email protected] (www.xbdev.net)
        */
        /*********************************************************************/

        float4 ClearColor : DIFFUSE = { 0.0f, 0.0f, 0.0f, 1.0f };
        float ClearDepth = 1.0f;

        /****************************** TWEAKABLES **************************/
        half NumTiles = 40.0;
        half Threshhold = 0.15;
        half3 EdgeColor = { 0.7f, 0.7f, 0.7f };

        /*********************************************************************/
        texture SceneMap : RENDERCOLORTARGET
        <
            float2 ViewportRatio = { 1.0f, 1.0f };
            int MIPLEVELS = 1;
            string format = "X8R8G8B8";
            string UIWidget = "None";
        >;

        sampler SceneSampler = sampler_state
        {
            texture = <SceneMap>;
            AddressU = CLAMP;
            AddressV = CLAMP;
            MIPFILTER = NONE;
            MINFILTER = LINEAR;
            MAGFILTER = LINEAR;
        };

        /*************************** DATA STRUCTS ***************************/
        struct vertexInput
        {
            half3 Position : POSITION;
            half3 TexCoord : TEXCOORD0;
        };

        /* data passed from vertex shader to pixel shader */
        struct vertexOutput
        {
            half4 HPosition : POSITION;
            half2 UV : TEXCOORD0;
        };

        /*************************** Vertex shader **************************/
        vertexOutput VS_Quad(vertexInput IN)
        {
            vertexOutput OUT = (vertexOutput)0;
            OUT.HPosition = half4(IN.Position, 1);
            OUT.UV = IN.TexCoord.xy;
            return OUT;
        }

        /*************************** Pixel shader ***************************/
        half4 tilesPS(vertexOutput IN) : COLOR
        {
            half size = 1.0 / NumTiles;
            half2 Pbase = IN.UV - fmod(IN.UV, size.xx);
            half2 PCenter = Pbase + (size / 2.0).xx;
            half2 st = (IN.UV - Pbase) / size;
            half4 c1 = (half4)0;
            half4 c2 = (half4)0;
            half4 invOff = half4((1 - EdgeColor), 1);
            if (st.x > st.y) { c1 = invOff; }
            half threshholdB = 1.0 - Threshhold;
            if (st.x > threshholdB) { c2 = c1; }
            if (st.y > threshholdB) { c2 = c1; }
            half4 cBottom = c2;
            c1 = (half4)0;
            c2 = (half4)0;
            if (st.x > st.y) { c1 = invOff; }
            if (st.x < Threshhold) { c2 = c1; }
            if (st.y < Threshhold) { c2 = c1; }
            half4 cTop = c2;
            half4 tileColor = tex2D(SceneSampler, PCenter);
            half4 result = tileColor + cTop - cBottom;
            return result;
        }

        /*********************************************************************/
        technique tiles
        {
            pass p0
            {
                VertexShader = compile vs_1_1 VS_Quad();
                ZEnable = false;
                ZWriteEnable = false;
                CullMode = None;
                PixelShader = compile ps_2_0 tilesPS();
            }
        }

    Read the article

  • Is it safe to use StringToHash() in Unity?

    - by Sebastian Krysmanski
    I'm currently browsing through the Unity tutorials and saw that they recommend using Animator.StringToHash("some string") to create unique ids for animation properties (see here). Since I'm a programmer, to me the word "hash" doesn't represent something unique. As the Java documentation for hashCode() states: "It is not required that if two objects are unequal [...], then calling the hashCode method on each of the two objects must produce distinct integer results." So, according to this (and my definition of "hash"), two strings may have the same hash value. (You can also argue that there is an infinite number of possible strings but only 2^32 possible int values.) So, is there a possibility that StringToHash() will give me an id that actually belongs to another property (than the one I requested the hash for)?
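
    (Collisions are theoretically possible for any 32-bit hash, but since you typically hash only a handful of known parameter names, you can verify at startup that none of yours collide and then cache the ids. A hedged Unity C# sketch -- the class and parameter names are mine:)

        using System.Collections.Generic;
        using UnityEngine;

        public static class AnimatorIds
        {
            static readonly string[] names = { "Speed", "Jump", "IsGrounded" };

            public static void VerifyNoCollisions()
            {
                var seen = new Dictionary<int, string>();
                foreach (var name in names)
                {
                    int id = Animator.StringToHash(name);
                    string existing;
                    if (seen.TryGetValue(id, out existing) && existing != name)
                        Debug.LogError("Hash collision: " + name + " vs " + existing);
                    else
                        seen[id] = name;
                }
            }
        }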

    Read the article

  • Marshalling C# Structs into DX11 cbuffers

    - by Craig
    I'm having some issues with the packing of my structures in C# and passing them through to cbuffers I have registered in HLSL. When I pack my struct in one manner, the information seems to pass through to the shader:

        [StructLayout(LayoutKind.Explicit, Size = 16)]
        internal struct TestStruct
        {
            [FieldOffset(0)]
            public Vector3 mEyePosition;

            [FieldOffset(12)]
            public int type;
        }

    This works perfectly against this HLSL fragment:

        cbuffer PerFrame : register(b0)
        {
            float3 eyePos;
            int type;
        };

        float3 GetColour()
        {
            float3 returnColour = float3(0.0f, 0.0f, 0.0f);
            switch (type)
            {
                case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break;
                case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break;
                case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break;
            }
            return returnColour;
        }

    However, when I use the following structure definitions...

        // Note this is 16 because HLSL packs in 4-float 'chunks'.
        // It is also simplified, but still demonstrates the problem.
        [StructLayout(LayoutKind.Explicit, Size = 16)]
        internal struct InternalTestStruct
        {
            [FieldOffset(0)]
            public int type;
        }

        [StructLayout(LayoutKind.Explicit, Size = 32)]
        internal struct TestStruct
        {
            [FieldOffset(0)]
            public Vector3 mEyePosition;

            // Missing 4 bytes here for correct packing.
            [FieldOffset(16)]
            public InternalTestStruct mInternal;
        }

    ... the following HLSL fragment no longer works:

        struct InternalType
        {
            int type;
        };

        cbuffer PerFrame : register(b0)
        {
            float3 eyePos;
            InternalType internalStruct;
        };

        float3 GetColour()
        {
            float3 returnColour = float3(0.0f, 0.0f, 0.0f);
            switch (internalStruct.type)
            {
                case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break;
                case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break;
                case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break;
            }
            return returnColour;
        }

    Is there a problem with the way I am packing the struct, or is it another issue? To reiterate: I can pass a struct in a cbuffer as long as it does not contain a nested struct.

    Read the article

  • Isometric Camera trouble - can't rotate or move correctly

    - by Deukalion
    I'm trying to create a 3D editor, but I've been having some trouble with the camera and understanding each component. I've created two cameras that work OK, but now I'm trying to implement an isometric camera in XNA, without success on the rotation and movement of the camera. All I can get working is zoom. (The cube is at x=3f, y=3f, z=1f, in the center.) This is the constructor for my IsometricCamera3D (it inherits from ICamera, with methods for rotation, movement and zoom, and properties for the World/View/Projection matrices):

        public IsometricCamera3D(GraphicsDevice device,
            float startClip = -1000f, float endClip = 1000f)
        {
            matrix_projection = Matrix.CreateOrthographic(device.Viewport.Width,
                device.Viewport.Height, startClip, endClip);
            rotation = Vector3.Zero;
            matrix_view = Matrix.CreateScale(zoom) *
                Matrix.CreateRotationY(MathHelper.ToRadians(45 + 180)) *
                Matrix.CreateRotationX(MathHelper.ToRadians(30)) *
                Matrix.CreateRotationZ(MathHelper.ToRadians(120)) *
                Matrix.CreateTranslation(rotation.X, rotation.Y, rotation.Z);
        }

    The problem is that when I rotate it, all that happens is that the cube gets more or less shiny; nothing moves. What is wrong, and how should I create my view matrix to move and rotate it correctly? Rotate, Move and Zoom look like MethodName(Vector3 rotation/movement) and Zoom(float value); they just increase the corresponding value and then trigger an update that recreates the view matrix according to the code in the constructor. Currently, in my editor, I use the middle mouse button plus mouse movement to rotate the camera, but it's not working like the other cameras. In my default camera I use the World matrix to move, but I guess that's not the best way to go, which is why I'm trying this.
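
    (A hedged alternative sketch, not the asker's code: build the view from an eye position orbiting a target with Matrix.CreateLookAt, so "rotate" means moving the eye around the target and "move" means shifting the target itself. device and zoom are assumed to exist in the surrounding class:)

        Vector3 target = Vector3.Zero;           // what the camera looks at
        float yaw = MathHelper.ToRadians(45f);   // rotation around the up axis
        float pitch = MathHelper.ToRadians(30f); // classic isometric elevation
        float distance = 100f;

        // Start behind the target on +Z, lift by pitch, then spin by yaw.
        Vector3 offset = Vector3.Transform(
            new Vector3(0f, 0f, distance),
            Matrix.CreateRotationX(-pitch) * Matrix.CreateRotationY(yaw));

        Matrix view = Matrix.CreateLookAt(target + offset, target, Vector3.Up);

        // Zoom an orthographic camera by shrinking the view volume,
        // instead of baking a scale into the view matrix.
        Matrix projection = Matrix.CreateOrthographic(
            device.Viewport.Width / zoom, device.Viewport.Height / zoom,
            -1000f, 1000f);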

    Read the article

  • Making more complicated systems (entity-component-system model question)

    - by winch
    I'm using a model where entities are collections of components and components are just data. All the logic goes into systems, which operate on components. Making basic systems (for rendering and handling collision) was easy. But how do I write more complicated systems? For example, in a CollisionSystem I can check if entity A collides with entity B. I have this code in the CollisionSystem for checking if B damages A:

        if (collides(a, b))
        {
            HealthComponent* hc = a->get<HealthComponent>();
            hc->reduceHealth(b->get<DamageComponent>()->getDamage());
        }

    But I feel that this code shouldn't belong to the CollisionSystem. Where should code like this live, and which additional systems should I create to make this code generic?
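
    (One hedged sketch of the usual answer -- the naming is mine: let the CollisionSystem only detect and record contacts, and let a separate DamageSystem consume them, so neither system needs to know about the other's components. Entity, Get<T>(), HealthComponent and DamageComponent are hypothetical, shown here in C#:)

        using System.Collections.Generic;

        struct CollisionEvent
        {
            public Entity A;
            public Entity B;
        }

        class CollisionSystem
        {
            public readonly Queue<CollisionEvent> Events = new Queue<CollisionEvent>();

            public void Update(IList<Entity> entities)
            {
                // Broad/narrow phase omitted; on contact, just record the pair:
                // Events.Enqueue(new CollisionEvent { A = a, B = b });
            }
        }

        class DamageSystem
        {
            // Only this system knows about health and damage.
            public void Update(Queue<CollisionEvent> events)
            {
                while (events.Count > 0)
                {
                    var e = events.Dequeue();
                    var health = e.A.Get<HealthComponent>();
                    var damage = e.B.Get<DamageComponent>();
                    if (health != null && damage != null)
                        health.ReduceHealth(damage.Damage);
                }
            }
        }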

    Read the article

  • Sprite animation system height recalculation has some issues

    - by Nicolas Martel
    Basically, the way it works is that the system updates the frame to show every, let's say, 24 ticks, and every time the frame updates it recalculates the height and width of the new sprite to render, so that my gravity logic and related systems work correctly. But the problem I'm having now is a bit hard to explain in words only, so I will use this picture to assist me: The picture. So what I need, basically, is this: say I froze the sprite at the first frame, then unfroze it and froze it at the second frame. I need the second frame's sprite (let's say it's a prone move) to simply stand on the foothold without gravity kicking in, and, when switching back, I need the first sprite to return to the foothold as normal without ending up under it. I had two ideas for doing this, but I'm not sure they're the most efficient ways, so I want to hear your input.
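
    (For what it's worth, a hedged sketch of the usual fix, assuming XNA -- the names are mine: anchor every frame at the sprite's feet instead of its top-left corner, so a frame-size change never moves the character relative to the foothold and never triggers gravity:)

        Vector2 footPosition; // world position of the character's feet

        void DrawCharacter(SpriteBatch batch, Texture2D frame)
        {
            // Derive the draw corner from the foot anchor and the current
            // frame, so taller or shorter frames stay planted on the foothold.
            var drawPosition = new Vector2(
                footPosition.X - frame.Width / 2f,
                footPosition.Y - frame.Height);
            batch.Draw(frame, drawPosition, Color.White);
        }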

    Read the article

  • Bounding volume hierarchy - linked nodes (linear model)

    - by teodron
    The scenario: a chain of points (Pi), i=0..N, where Pi is linked to its direct neighbours (Pi-1 and Pi+1). The goal: perform efficient collision detection between any two non-adjacent links, (Pi, Pi+1) vs. (Pj, Pj+1). The question: it's highly recommended, in all works treating the subject of collision detection, to use a broad phase and to implement it via a bounding volume hierarchy. For a chain made of Pi nodes, it can look like this: I imagine the big blue sphere containing all links, the green ones half of them, the red ones a quarter, and so on (the picture is not accurate, but it's there to help understand the question). What I do not understand is: how can such a hierarchy speed up computations between segment collision pairs if one has to update it every frame for a deformable linear object such as a chain/wire/etc.? More clearly, what is the actual principle of a collision detection broad phase in this particular case? How can it work when computing the bounding spheres is itself a time-consuming task that has to be done in each frame update (since the geometry changes)? I think I am missing a key point: if we look at the picture where the chain is in a spiral pose, we see that most spheres are already contained within half of the others, or intersect them. It seems odd if this is the way it's supposed to work.
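
    (The key point is usually called refitting: for a deformable chain you keep the tree topology fixed and only recompute sphere centres and radii bottom-up each frame, a single O(n) pass that is much cheaper than rebuilding, while queries can still skip whole subtrees. A hedged C# sketch with XNA-style Vector3; the node layout is hypothetical:)

        using System;
        using Microsoft.Xna.Framework;

        class BvhNode
        {
            public BvhNode Left, Right;  // null for leaves
            public int SegmentIndex;     // leaf: index i of segment (P[i], P[i+1])
            public Vector3 Center;
            public float Radius;
        }

        static class BvhRefit
        {
            public static void Refit(BvhNode node, Vector3[] points)
            {
                if (node.Left == null) // leaf: tight sphere around the segment
                {
                    Vector3 a = points[node.SegmentIndex];
                    Vector3 b = points[node.SegmentIndex + 1];
                    node.Center = (a + b) * 0.5f;
                    node.Radius = Vector3.Distance(a, b) * 0.5f;
                    return;
                }

                Refit(node.Left, points);
                Refit(node.Right, points);

                // Parent sphere encloses both children (not minimal, but cheap).
                node.Center = (node.Left.Center + node.Right.Center) * 0.5f;
                float half = Vector3.Distance(node.Left.Center, node.Right.Center) * 0.5f;
                node.Radius = half + Math.Max(node.Left.Radius, node.Right.Radius);
            }
        }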

    Read the article

  • Drawing of a huge model - How to regain performance?

    - by marc wellman
    I have a huge model I want to draw in my XNA application, but because of its size I am experiencing a tremendous loss of performance. The model has about ~50,000,000 edges and is 205 MB on disk in DirectX format. Please don't ask whether this model has to be that big - yes, it has! Is there a way to transfer the model directly to my GPU in order to let the GPU do the drawing, like when transferring a VertexBuffer like this:

        graphicsDevice.Vertices[1].SetSource(_instanceBuffers[i], 0, _sizeofMatrix);

    because when I try to fill a VertexBuffer with all the vertices I get an OutOfMemoryException.

    Read the article

  • Rotating multiple points at once in 2D

    - by Deukalion
    I currently have an editor that creates shapes out of (X, Y) coordinates and then triangulates those points to make up a shape. What do I have to do to rotate all of those points simultaneously? Say I click the screen in my editor; it locates the point where I've clicked, and if I move the mouse up or down from that point, it calculates rotation on the X and Y axes depending on the new position relative to the first position. For example, if I move up 10 on the Y axis, it rotates that way, and the same goes for X. Or, more simply, some way to enter a rotation in degrees: 90, 180, 270, 360, for example. I use VertexPositionColor at the moment. What are the best algorithms or methods I can look at to rotate multiple points in 2D at once? Also: since this is an editor, I do not want to rotate via the matrix; if I rotate the whole shape 180 degrees, that becomes the new "position" of all the points, so the new rotation = 0, for example. Later on I will probably use World matrix rotation for this, but not now.
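
    (A hedged sketch of the standard answer: rotate every vertex around a shared pivot -- the click point or the shape's centroid -- and bake the result back into the positions, so afterwards the shape's "rotation" is zero again, as an editor wants:)

        // Rotate all vertices of a shape around 'pivot' by 'degrees', in place.
        void RotatePoints(VertexPositionColor[] vertices, Vector2 pivot, float degrees)
        {
            float radians = MathHelper.ToRadians(degrees);
            float cos = (float)Math.Cos(radians);
            float sin = (float)Math.Sin(radians);

            for (int i = 0; i < vertices.Length; i++)
            {
                // Translate to the pivot, rotate, translate back.
                float x = vertices[i].Position.X - pivot.X;
                float y = vertices[i].Position.Y - pivot.Y;
                vertices[i].Position.X = pivot.X + x * cos - y * sin;
                vertices[i].Position.Y = pivot.Y + x * sin + y * cos;
            }
        }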

    Read the article

  • List of Open Source Java Games for Android

    - by BluFire
    I'm wondering if there are any more open-source games than the ones you can plainly see when you google a list of open-source games for Android. For example, is there a good website that has compiled open-source games? I don't want an answer of "go google it" or "en.wikipedia.org/wiki/List_of_open_source_Android_applications"; it gets really annoying on posts when people just give lazy answers.

    Read the article

  • How is constant buffer allocation handled in DX11?

    - by Marek
    I'm starting with DX11 and I'm not sure if I'm doing things right. I want to have both the pixel and vertex shader programs in one file. Both use some shared and some different constant buffers. So it looks like this:

    Shader.fx:

        cbuffer ForVS : register(b0)
        {
            float4x4 wvp;
        };

        cbuffer ForVSandPS : register(b1)
        {
            float4 stuff;
            float4 stuff2;
        };

        cbuffer ForVS2 : register(b2)
        {
            float4 stuff3;
            float4 stuff4;
        };

        cbuffer ForPS : register(b3)
        {
            float4 stuff5;
            float4 stuff6;
        };

        ...

    And in code I use:

        mContext->VSSetConstantBuffers(0, 1, bufferVS);
        mContext->VSSetConstantBuffers(1, 1, bufferVS_PS);
        mContext->VSSetConstantBuffers(2, 1, bufferVS2);
        mContext->PSSetConstantBuffers(1, 1, bufferVS_PS);
        mContext->PSSetConstantBuffers(3, 1, bufferPS);

    The numbering of the buffers in the PS is what bugs me: is it all right to bind non-contiguous slots to a shader (in this example, 1 and 3)? Does that mean the pixel shader still uses just two buffers, or does it initialize the slot 0 and 2 buffer pointers to empty? Thank you.

    Read the article
