Search Results

  • Marshalling C# Structs into DX11 cbuffers

    - by Craig
    I'm having some issues with the packing of my structures in C# and passing them through to cbuffers I have registered in HLSL. When I pack my struct in one manner, the information seems to pass through to the shader: [StructLayout(LayoutKind.Explicit, Size = 16)] internal struct TestStruct { [FieldOffset(0)] public Vector3 mEyePosition; [FieldOffset(12)] public int type; } This works perfectly when used against this HLSL fragment: cbuffer PerFrame : register(b0) { float3 eyePos; int type; } float3 GetColour() { float3 returnColour = float3(0.0f, 0.0f, 0.0f); switch(type) { case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break; case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break; case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break; } return returnColour; } However, when I use the following structure definitions... // Note this is 16 because HLSL packs in 4-float 'chunks'. // It is also simplified, but still demonstrates the problem. [StructLayout(LayoutKind.Explicit, Size = 16)] internal struct InternalTestStruct { [FieldOffset(0)] public int type; } [StructLayout(LayoutKind.Explicit, Size = 32)] internal struct TestStruct { [FieldOffset(0)] public Vector3 mEyePosition; //Missing 4 bytes here for correct packing. [FieldOffset(16)] public InternalTestStruct mInternal; } ... the following HLSL fragment no longer works. struct InternalType { int type; }; cbuffer PerFrame : register(b0) { float3 eyePos; InternalType internalStruct; } float3 GetColour() { float3 returnColour = float3(0.0f, 0.0f, 0.0f); switch(internalStruct.type) { case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break; case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break; case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break; } return returnColour; } Is there a problem with the way I am packing the struct, or is it another issue? To reiterate: I can pass a struct in a cbuffer as long as it does not contain a nested struct.
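
    A quick way to verify the C# side of the layout is to ask the marshaller what it actually produced; a minimal sketch reusing the question's TestStruct (the expected values follow from the offsets above):

        using System;
        using System.Runtime.InteropServices;

        class LayoutCheck
        {
            static void Main()
            {
                // If these don't print 32 and 16, the C# packing is the problem;
                // if they do, the mismatch is on the HLSL side.
                int size = Marshal.SizeOf(typeof(TestStruct));
                int offset = (int)Marshal.OffsetOf(typeof(TestStruct), "mInternal");
                Console.WriteLine("size={0}, mInternal at byte {1}", size, offset);
            }
        }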

  • Low-level game engine renderer design

    - by Mark Ingram
    I'm piecing together the beginnings of an extremely basic engine which will let me draw arbitrary objects (SceneObject). I've got to the point where I'm creating a few sensible-sounding classes, but as this is my first outing into game engines, I've got the feeling I'm overlooking things. I'm familiar with compartmentalising larger portions of the code so that individual sub-systems don't overly interact with each other, but I'm thinking more of the low-level stuff, starting from vertices and working up. So if I have a Vertex class, I can combine that with a list of indices to make a Mesh class. How does the engine determine identical meshes for objects? Or is that left to the level designer? Once we have a Mesh, that can be contained in the SceneObject class. And a list of SceneObjects can be placed into the Scene to be drawn. Right now I'm only using OpenGL, but I'm aware that I don't want to be tying OpenGL calls right into base classes (such as updating the vertices in the Mesh - I don't want to be calling glBufferData etc). Are there any good resources that discuss these issues? Are there any "common" hierarchies which should be used?
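
    One common way to keep GL calls out of the base classes is to leave Mesh as plain data and hide the API behind a small backend interface; a rough sketch (all names invented):

        // Mesh stays pure data; only a backend implementation touches OpenGL.
        public interface IMeshBackend
        {
            void Upload(float[] vertices, int[] indices); // glBufferData etc. lives here
            void Draw();
        }

        public class Mesh
        {
            public float[] Vertices;
            public int[] Indices;
            public IMeshBackend Backend; // an OpenGLMeshBackend today, another API later
        }

    As for identical meshes, engines typically don't detect them: a resource cache keyed by asset name hands the same Mesh instance to every SceneObject that asks for it.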

  • What is the best way to render a 2d game map?

    - by Deukalion
    I know efficiency is key in game programming, and I've had some experience with rendering a "map" before, though probably not in the best of ways. For a 2D top-down game (simply render the textures/tiles of the world, nothing else): say you have a map of 1000x1000 tiles. If a tile isn't in the view of the camera, it shouldn't be rendered - it's that simple. No need to render a tile that won't be seen. But since you have up to 1000x1000 objects in your map, you probably don't want to loop through all 1,000,000 tiles just to see whether they're supposed to be rendered or not. Question: What is the best way to implement this efficiently, so that it can determine quickly which tiles are supposed to be rendered? Also, I'm not building my game around tiles rendered with a SpriteBatch, so there are no rectangles; the shapes can be different sizes and have multiple points - say, a curved object of 10 points with a texture inside that shape. Question: How do you determine whether this kind of object is "inside" the view of the camera? It's easy with a 48x48 rectangle - just see whether its X+Width or Y+Height is in the view of the camera - but it's different with multiple points. Simply put: how do I manage the code and the data efficiently so as not to loop through a million objects at the same time?
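
    For a uniform grid, the visible range falls straight out of a division, so nothing is ever looped over except what is on screen; a sketch (camera fields and DrawTile are assumptions):

        // Draws only the tiles the camera rectangle overlaps: O(visible), not O(map).
        void DrawVisibleTiles(float camX, float camY, float viewW, float viewH,
                              int tileSize, int mapWidth, int mapHeight)
        {
            int firstX = Math.Max(0, (int)(camX / tileSize));
            int firstY = Math.Max(0, (int)(camY / tileSize));
            int lastX = Math.Min(mapWidth - 1, (int)((camX + viewW) / tileSize));
            int lastY = Math.Min(mapHeight - 1, (int)((camY + viewH) / tileSize));

            for (int y = firstY; y <= lastY; y++)
                for (int x = firstX; x <= lastX; x++)
                    DrawTile(x, y);
        }

    For the irregular multi-point shapes, the usual trick is to precompute each shape's axis-aligned bounding box once and test that rectangle against the camera; if there are many free-placed objects, a spatial structure such as a quadtree narrows the candidates further.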

  • GLSL compiler messages from different vendors [on hold]

    - by revers
    I'm writing a GLSL shader editor and I want to parse GLSL compiler messages to make hyperlinks to the invalid lines in the shader code. I know that these messages are vendor-specific, but currently I have access only to AMD's video cards. I want to handle at least NVidia's and Intel's hardware, apart from AMD's. If you have a video card from a vendor other than AMD, could you please give me the output of the following C++ program: #include <GL/glew.h> #include <GL/freeglut.h> #include <iostream> #include <cstring> using namespace std; #define STRINGIFY(X) #X static const char* fs = STRINGIFY( out vec4 out_Color; mat4 m; void main() { vec3 v3 = vec3(1.0); vec2 v2 = v3; out_Color = vec4(5.0 * v2.x, 1.0); vec3 k = 3.0; float = 5; } ); static const char* vs = STRINGIFY( in vec3 in_Position; void main() { vec3 v(5); gl_Position = vec4(in_Position, 1.0); } ); void printShaderInfoLog(GLint shader) { int infoLogLen = 0; int charsWritten = 0; GLchar *infoLog; glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLen); if (infoLogLen > 0) { infoLog = new GLchar[infoLogLen]; glGetShaderInfoLog(shader, infoLogLen, &charsWritten, infoLog); cout << "Log:\n" << infoLog << endl; delete [] infoLog; } } void printProgramInfoLog(GLint program) { int infoLogLen = 0; int charsWritten = 0; GLchar *infoLog; glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infoLogLen); if (infoLogLen > 0) { infoLog = new GLchar[infoLogLen]; glGetProgramInfoLog(program, infoLogLen, &charsWritten, infoLog); cout << "Program log:\n" << infoLog << endl; delete [] infoLog; } } void initShaders() { GLuint v = glCreateShader(GL_VERTEX_SHADER); GLuint f = glCreateShader(GL_FRAGMENT_SHADER); GLint vlen = strlen(vs); GLint flen = strlen(fs); glShaderSource(v, 1, &vs, &vlen); glShaderSource(f, 1, &fs, &flen); GLint compiled; glCompileShader(v); bool succ = true; glGetShaderiv(v, GL_COMPILE_STATUS, &compiled); if (!compiled) { cout << "Vertex shader not compiled." << endl; succ = false; } printShaderInfoLog(v); glCompileShader(f); glGetShaderiv(f, GL_COMPILE_STATUS, &compiled); if (!compiled) { cout << "Fragment shader not compiled." << endl; succ = false; } printShaderInfoLog(f); GLuint p = glCreateProgram(); glAttachShader(p, v); glAttachShader(p, f); glLinkProgram(p); glUseProgram(p); printProgramInfoLog(p); if (!succ) { exit(-1); } } int main(int argc, char* argv[]) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA); glutInitWindowSize(600, 600); glutCreateWindow("Triangle Test"); GLenum err = glewInit(); if (GLEW_OK != err) { cout << "glewInit failed, aborting." << endl; exit(1); } cout << "Using GLEW " << glewGetString(GLEW_VERSION) << endl; const GLubyte* renderer = glGetString(GL_RENDERER); const GLubyte* vendor = glGetString(GL_VENDOR); const GLubyte* version = glGetString(GL_VERSION); const GLubyte* glslVersion = glGetString(GL_SHADING_LANGUAGE_VERSION); GLint major, minor; glGetIntegerv(GL_MAJOR_VERSION, &major); glGetIntegerv(GL_MINOR_VERSION, &minor); cout << "GL Vendor : " << vendor << endl; cout << "GL Renderer : " << renderer << endl; cout << "GL Version : " << version << endl; cout << "GL Version : " << major << "." << minor << endl; cout << "GLSL Version : " << glslVersion << endl; initShaders(); return 0; } (The two shaders are deliberately broken so they produce compiler messages.) On my video card it gives: Status: Using GLEW 1.7.0 GL Vendor : ATI Technologies Inc. GL Renderer : ATI Radeon HD 4250 GL Version : 3.3.11631 Compatibility Profile Context GL Version : 3.3 GLSL Version : 3.30 Vertex shader not compiled.
Log: Vertex shader failed to compile with the following errors: ERROR: 0:1: error(#132) Syntax error: '5' parse error ERROR: error(#273) 1 compilation errors. No code generated Fragment shader not compiled. Log: Fragment shader failed to compile with the following errors: WARNING: 0:1: warning(#402) Implicit truncation of vector from size 3 to size 2. ERROR: 0:1: error(#174) Not enough data provided for construction constructor WARNING: 0:1: warning(#402) Implicit truncation of vector from size 1 to size 3. ERROR: 0:1: error(#132) Syntax error: '=' parse error ERROR: error(#273) 2 compilation errors. No code generated Program log: Vertex and Fragment shader(s) were not successfully compiled before glLinkProgram() was called. Link failed. Or, if you like, you could give me other compiler messages than the ones I proposed. To summarize, the question is: what are the GLSL compiler message formats (INFOs, WARNINGs, ERRORs) for the different vendors? Please give me examples or an explanation of the patterns. EDIT: OK, it seems that this question is too broad, so, in short: how do NVidia's and Intel's GLSL compilers present ERROR and WARNING messages? AMD/ATI uses patterns like this: ERROR: <position>:<line_number>: <message> WARNING: <position>:<line_number>: <message> (examples are above).
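
    For the editor use case, turning the AMD/ATI pattern above into clickable line numbers is a small regex job; a sketch in C# (NVidia and Intel would each need their own pattern once their formats are known):

        using System;
        using System.Text.RegularExpressions;

        static class ShaderLogParser
        {
            // Matches e.g. "ERROR: 0:1: error(#132) Syntax error: '5' parse error".
            // Groups: 1 = severity, 2 = position, 3 = shader line, 4 = message.
            static readonly Regex AmdMessage =
                new Regex(@"^(ERROR|WARNING):\s*(\d+):(\d+):\s*(.+)$", RegexOptions.Multiline);

            public static void Parse(string log)
            {
                foreach (Match m in AmdMessage.Matches(log))
                {
                    int line = int.Parse(m.Groups[3].Value);
                    // create the hyperlink to 'line' in the editor here
                    Console.WriteLine("{0} at shader line {1}: {2}",
                        m.Groups[1].Value, line, m.Groups[4].Value);
                }
            }
        }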

  • Initializing entities vs having a constructor parameter

    - by Vee
    I'm working on a turn-based, tile-based puzzle game, and to create new entities I use this code: Field.CreateEntity(10, 5, Factory.Player()); This creates a new Player at [10; 5]. I'm using a factory-like class to create entities via composition. This is what the CreateEntity method looks like: public void CreateEntity(int mX, int mY, Entity mEntity) { mEntity.Field = this; TileManager.AddEntity(mEntity, true); GetTile(mX, mY).AddEntity(mEntity); mEntity.Initialize(); InvokeOnEntityCreated(mEntity); } Since many of the entities' components (and much of their logic) need to know which tile they're in and which field they belong to, I need the mEntity.Initialize(); call to mark the point at which the entity knows its own field and tile. The Initialize(); method contains a call to an event handler, so that I can do stuff like this in the factory class: result.OnInitialize += () => result.AddTags(TDLibConstants.GroundWalkableTag, TDLibConstants.TrapdoorTag); result.OnInitialize += () => result.AddComponents(new RenderComponent(), new ElementComponent(), new DirectionComponent()); This works so far, but it is not elegant and it's very open to bugs. I'm also using the same idea with components: they have a parameterless constructor, and when you call the AddComponent(mComponent); method on an entity, it is the entity's job to set the component's entity to itself. The alternative would be having Field, int, int parameters in the factory class, to do stuff like: new Entity(Field, 10, 5); But I also don't like the fact that I have to create new entities like this. I would prefer creating entities via the Field object itself. How can I make entity/component creation more elegant and less prone to bugs?
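
    One way to keep creation on the Field while guaranteeing an entity never exists without its context is to pass a recipe rather than a ready-made entity; a sketch reusing the question's names (the Func-based recipe is an assumption, not the project's actual API):

        // Inside Field: the entity is built, wired and initialised in one place.
        public Entity CreateEntity(int mX, int mY, Func<Entity> recipe)
        {
            Entity entity = recipe();
            entity.Field = this;                 // context is set first...
            TileManager.AddEntity(entity, true);
            GetTile(mX, mY).AddEntity(entity);
            entity.Initialize();                 // ...so components may read Field/Tile
            InvokeOnEntityCreated(entity);
            return entity;
        }
        // Usage: Field.CreateEntity(10, 5, Factory.Player);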

  • Is chess-like AI really inapplicable in turn-based strategy games?

    - by Joh
    Obviously, trying to apply the min-max algorithm to the complete tree of moves works only for small games (I apologize to all chess enthusiasts; by "small" I do not mean "simplistic"). For typical turn-based strategy games, where the board is often wider than 100 tiles and all pieces on a side can move simultaneously, the min-max algorithm is inapplicable. I was wondering whether a partial min-max algorithm which limits itself to N board configurations at each depth couldn't be good enough? Using a genetic algorithm, it might be possible to find a number of board configurations that are good with respect to the evaluation function. Hopefully, these configurations might also be good with respect to long-term goals. I would be surprised if this hadn't been thought of before and tried. Has it? How does it work?
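
    The "N configurations per depth" idea is essentially beam search layered on minimax; a sketch of the shape (GameState, GenerateMoves and Evaluate are assumed to exist, non-terminal states are assumed to have moves, and System.Linq is required):

        // Keep only the beamWidth best-scored children at each ply.
        double Search(GameState s, int depth, bool maximizing, int beamWidth)
        {
            if (depth == 0) return Evaluate(s);
            var ordered = maximizing
                ? s.GenerateMoves().OrderByDescending(Evaluate)
                : s.GenerateMoves().OrderBy(Evaluate);
            var scores = ordered.Take(beamWidth)
                                .Select(c => Search(c, depth - 1, !maximizing, beamWidth))
                                .ToList();
            return maximizing ? scores.Max() : scores.Min();
        }

    The obvious caveat is that the evaluation function now also steers the pruning, so a weak evaluation can hide good long-term moves entirely.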

  • MonoGame not all letters being drawn with DrawString

    - by Lex Webb
    I'm currently making a dynamic user interface for my game and am setting up text on my buttons. I'm having an odd issue where, when I use a specific piece of code to determine the text position, it will not render all of the text passed to DrawString. Even weirder, if I insert another DrawString after this, drawing more text at a different place, different parts of the text will be drawn. The code for drawing my button with the text attached is: public override void Draw(SpriteBatch sb, GameTime gt) { sb.Draw(currentImage, GetRelativeRectangle(), Color.White); sb.DrawString(font, text, new Vector2(this.GetRelativeDrawOffset().X + this.Width / 2 - font.MeasureString(text).X / 2, this.GetRelativeDrawOffset().Y + this.Height / 2 - font.MeasureString(text).Y / 2), textColor); } The methods in the creation of the Vector2 simply get the draw position of the button. I'm then doing some calculation to center the text. This produces this when the text is set to 'Test': And when I enter this piece of code below the first DrawString: sb.DrawString(font, "test", new Vector2(500, 50), Color.Pink); I should mention that the grey square is being drawn in the same SpriteBatch, before the button and the text. Any ideas as to what could be causing this? I have a feeling it may be due to draw order, but I have no idea how to control that.
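
    One frequent culprit with centred text is drawing at fractional coordinates, since MeasureString rarely returns whole numbers; a sketch of snapping the position before the draw (this is a guess at the cause, but it is cheap to test):

        Vector2 size = font.MeasureString(text);
        Vector2 pos = new Vector2(
            GetRelativeDrawOffset().X + Width / 2f - size.X / 2f,
            GetRelativeDrawOffset().Y + Height / 2f - size.Y / 2f);

        // Snap to whole pixels; fractional positions can clip or blur glyphs
        // when the sprite font texture is point-sampled.
        pos.X = (float)Math.Round(pos.X);
        pos.Y = (float)Math.Round(pos.Y);
        sb.DrawString(font, text, pos, textColor);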

  • Game programming basics under Windows

    - by dreta
    I've been trying to learn some Windows programming using the Win32 API. Now, I'm used to working with the OS layer abstracted away, mostly thanks to libraries like SFML or Allegro. Could you guys help me out and tell me whether I'm thinking right here? The place for my game loop is where I'm reading the messages? while (TRUE) { if (PeekMessage (&msg, NULL, 0, 0, PM_REMOVE)) { if (msg.message == WM_QUIT) break ; TranslateMessage (&msg) ; DispatchMessage (&msg) ; } else { //my game loop goes here } } Now the slightly bigger issue, that is, drawing. Do I run my drawing where I normally do it, inside the game loop after the game logic? Or do I do it when WM_PAINT is being handled, and just call InvalidateRect (hwnd, NULL, TRUE); when I want to draw? This does feel weird; WM_PAINT is a queued message, so I don't know for sure when it'll be called. So if I wanted to avoid this, do I just get the device handle inside the game loop and only ValidateRect (hwnd, NULL); in the WM_PAINT case (besides the ValidateRect (hwnd, NULL); called after drawing in the game loop)? Actually, now that I think about it, do I even need WM_PAINT in this situation, or can I skip it and let DefWindowProc handle it (does it validate the screen if WM_PAINT isn't processed)? If this is at all important, I'm setting up my code for OpenGL.

  • What causes Box2D revolute joints to separate?

    - by nbolton
    I have created a rag doll using dynamic bodies (rectangles) and simple revolute joints (with lower and upper angles). When my rag doll hits the ground (which is a static body), the bodies seem to fidget and the joints separate. It looks like the bodies are sticking to the ground, and the momentum of the rag doll pulls the joint apart (see screenshot below). I'm not sure if it's related, but I'm using the Badlogic GDX Java wrapper for Box2D. Here are some snippets of what I think is the most relevant code: private RevoluteJoint joinBodyParts( Body a, Body b, Vector2 anchor, float lowerAngle, float upperAngle) { RevoluteJointDef jointDef = new RevoluteJointDef(); jointDef.initialize(a, b, a.getWorldPoint(anchor)); jointDef.enableLimit = true; jointDef.lowerAngle = lowerAngle; jointDef.upperAngle = upperAngle; return (RevoluteJoint)world.createJoint(jointDef); } private Body createRectangleBodyPart( float x, float y, float width, float height) { PolygonShape shape = new PolygonShape(); shape.setAsBox(width, height); BodyDef bodyDef = new BodyDef(); bodyDef.type = BodyType.DynamicBody; bodyDef.position.y = y; bodyDef.position.x = x; Body body = world.createBody(bodyDef); FixtureDef fixtureDef = new FixtureDef(); fixtureDef.shape = shape; fixtureDef.density = 10; fixtureDef.filter.groupIndex = -1; fixtureDef.filter.categoryBits = FILTER_BOY; fixtureDef.filter.maskBits = FILTER_STUFF | FILTER_WALL; body.createFixture(fixtureDef); shape.dispose(); return body; } I've skipped the method for creating the head, as it's pretty much the same as the rectangle method (just using a circle shape). Those methods are used like so: torso = createRectangleBodyPart(x, y + 5, 0.25f, 1.5f); Body head = createRoundBodyPart(x, y + 7.4f, 1); Body leftLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1); Body rightLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1); Body leftLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1); Body rightLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1); Body leftArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f); Body rightArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f); joinBodyParts(torso, head, new Vector2(0, 1.6f), headAngle); leftLegTopJoint = joinBodyParts(torso, leftLegTop, new Vector2(0, -1.2f), 0.1f, legAngle); rightLegTopJoint = joinBodyParts(torso, rightLegTop, new Vector2(0, -1.2f), 0.1f, legAngle); leftLegBottomJoint = joinBodyParts(leftLegTop, leftLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0); rightLegBottomJoint = joinBodyParts(rightLegTop, rightLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0); leftArmJoint = joinBodyParts(torso, leftArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle); rightArmJoint = joinBodyParts(torso, rightArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle);

  • Best way to implement an AI for Dominion? [on hold]

    - by j will
    I'm creating a desktop client and server backend for the game Dominion, by Donald X. Vaccarino. I've been reading up on AI techniques and algorithms, and I just wanted to know: what is the best way to implement an AI for such a game? Would it be better to look at neural networks, genetic algorithms, decision trees, fuzzy logic, or some other methodology? For those who do not know how Dominion works, check out this part of the Wikipedia article: http://en.wikipedia.org/wiki/Dominion_(card_game)#Gameplay

  • Deterministic Multiplayer RTS game questions?

    - by Martin K
    I am working on a cross-platform multiplayer RTS game where the different clients and a server (Flash and C#) all need to stay deterministically synchronised. To deal with floating-point inconsistencies, I've come across this method: http://joshblog.net/2007/01/30/flash-floating-point-number-errors/#comment-49912 which basically truncates off the nondeterministic part: return Math.round(1000 * float) / 1000; However, my concern is that every time there is a division, there is a further chance of creating additional floating-point errors, in essence making it worse. So it occurred to me: how about something like this? function floatSafe(number:Number) : Number { return Math.round(number * 1024) / 1024; } i.e. dividing only by powers of 2? What do you think? Ironically, with the above method I got less accurate results: trace( floatSafe(3/5) ) // 0.599609375 whereas with the other method (dividing by 1000), or just tracing the raw value, I am getting 3/5 = 0.6. Or maybe that's what 3/5 actually is - i.e. 0.6 cannot really be represented with a floating-point datatype, and the power-of-two value would be more consistent across different platforms?
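
    Since 0.6 has no exact binary representation, both variants only quantise the error rather than remove it; the fully deterministic route is to keep the simulation state in integer fixed-point and convert to floats only for display. A minimal sketch (C# side; the same integer math would be mirrored in the Flash client):

        using System;

        // 10-bit fraction: 1024 sub-steps per unit, matching the rounding
        // granularity above; all simulation math stays in integers.
        public struct Fixed
        {
            private const int Shift = 10;
            public int Raw;

            public static Fixed FromDouble(double d) =>
                new Fixed { Raw = (int)Math.Round(d * (1 << Shift)) };
            public double ToDouble() => Raw / (double)(1 << Shift);

            public static Fixed operator +(Fixed a, Fixed b) =>
                new Fixed { Raw = a.Raw + b.Raw };
            public static Fixed operator *(Fixed a, Fixed b) =>
                new Fixed { Raw = (int)(((long)a.Raw * b.Raw) >> Shift) };
        }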

  • Drawing order in XNA

    - by marc wellman
    When manually setting the drawing order of game components by setting int DrawableGameComponent.DrawOrder, can one use any integer numbers as long as an order is defined, like component1 = drawing order: 2 component2 = drawing order: 5 component3 = drawing order: 10 component4 = drawing order: 323 or do these integers have to be consecutive, starting with zero, like component1 = drawing order: 0 component2 = drawing order: 1 component3 = drawing order: 2 component4 = drawing order: 3 ?

  • Formula for three competing heroes, each has one they can beat and one they're beaten by

    - by Georgiadis Abraam
    I am trying to design a game for a project I have. The main idea: 3 types of heroes, 3 stats per hero. There are no levels involved, so the differences must come from the stats. Fight logic: type1hero has a good chance of beating type2hero, type2hero has a good chance of beating type3hero, and type3hero has a good chance of beating type1hero. For over a week I have been trying to find a stats-based formula that satisfies this, but I can't. I was meddling with numbers yesterday and got something decent, but I can't extract the formula out of it. Could you please guide me, or give me hints on how to start creating formulas for a game without levels that fulfil this fight logic?
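
    One simple way to encode the cycle is a flat advantage multiplier applied on top of whatever the three stats produce, so type effectiveness and stat strength stay independent knobs; a sketch (the numbers are placeholders to tune, and the damage formula is an invented example):

        // Type 0 beats type 1, type 1 beats type 2, type 2 beats type 0.
        static double AdvantageMultiplier(int attackerType, int defenderType)
        {
            if ((attackerType + 1) % 3 == defenderType) return 1.25; // favoured matchup
            if ((defenderType + 1) % 3 == attackerType) return 0.80; // unfavoured matchup
            return 1.0;                                              // mirror matchup
        }

        // e.g. damage = attackStat * AdvantageMultiplier(aType, dType) - defenseStat;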

  • How is constant buffer allocation handled in DX11?

    - by Marek
    I'm starting with DX11 and I'm not sure if I'm doing things right. I want to have both the pixel and vertex shader programs in one file. Both use some shared and some different constant buffers. So it looks like this: Shader.fx cbuffer ForVS : register(b0) { float4x4 wvp; }; cbuffer ForVSandPS : register(b1) { float4 stuff; float4 stuff2; }; cbuffer ForVS2 : register(b2) { float4 stuff3; float4 stuff4; }; cbuffer ForPS : register(b3) { float4 stuff5; float4 stuff6; }; .... And in code I use mContext->VSSetConstantBuffers( 0, 1, bufferVS); mContext->VSSetConstantBuffers( 1, 1, bufferVS_PS); mContext->VSSetConstantBuffers( 2, 1, bufferVS2); mContext->PSSetConstantBuffers( 1, 1, bufferVS_PS); mContext->PSSetConstantBuffers( 3, 1, bufferPS); The numbering of buffers in PS is what bugs me: is it alright to bind non-contiguous slots to shaders (in this example 1 and 3)? Does that mean it still uses just two buffers, or does it initialize the slot 0 and 2 buffer pointers to empty? Thank you.

  • Best practices of texture size

    - by psal
    I wanted to know: how should I determine a good texture size? Currently, I always create UV textures that are 1024x1024px, but if I create, for example, a big house with a 1024px texture, it will look pretty bad. So, should I create different texture sizes (512, 1024, ...) for different mesh sizes, or is it better to always make high-resolution textures and then reduce them in the software (i.e. increase the LODBias setting in UDK to reduce the size of the texture)? Thanks for your answer. PS: sorry for my English!

  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup I have an entity-component architecture where Entities can have a set of attributes (which are pure data with no behavior) and there exist systems that run the entity logic which act on that data. Essentially, in somewhat pseudo-code: Entity { id; map<id_type, Attribute> attributes; } System { update(); vector<Entity> entities; } A system that just moves all entities along at a constant rate might be MovementSystem extends System { update() { for each entity in entities position = entity.attributes["position"]; position += vec3(1,1,1); } } Essentially, I'm trying to parallelize update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components so different threads can execute the update of the same system, but for a different subset of entities registered with that system. Problem In reality, these systems sometimes require that entities interact with (read/write data from/to) each other, sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the currently processed entity), but sometimes between different systems that depend on each other (i.e. a movement system that requires data from a system that processes user input). Now, when trying to parallelize the update phases of entity/component systems, the phases in which data (components/attributes) from Entities are read and used to compute something, and the phase where the modified data is written back to entities, need to be separated in order to avoid data races. Otherwise the only way (not taking into account just "critical section"ing everything) to avoid them is to serialize parts of the update process that depend on other parts. This seems ugly. To me it would seem more elegant to be able to (ideally) have all processing running in parallel, where a system may read data from all entities as it wishes, but doesn't write modifications to that data back until some later point. The fact that this is even possible is based on the assumption that modification write-backs are usually very small in complexity, and don't require much performance, whereas computations are very expensive (relatively). So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (by having threads work a greater percentage of the time instead of waiting). A concrete example of this might be a system that updates physics. The system needs to both read and write a lot of data to and from entities. Optimally, there would be a system in place where all available threads update a subset of all entities registered with the physics system. In the case of the physics system this isn't trivially possible because of race conditions. So without a workaround, we would have to find other systems to run in parallel (which don't modify the same data as the physics system), otherwise the remaining threads are waiting and wasting time. However, that has disadvantages: Practically, the L3 cache is pretty much always better utilized when updating a large system with multiple threads, as opposed to multiple systems at once, which all act on different sets of data. Finding and assembling other systems to run in parallel can be extremely time-consuming to design well enough to optimize performance. Sometimes, it might even not be possible at all because a system just depends on data that is touched by all other systems. Solution?
In my thinking, a possible solution would be a system where reading/updating and writing of data is separated, so that in one expensive phase, systems only read data and compute what they need to compute, and then in a separate, performance-wise cheap, write phase, attributes of entities that needed to be modified are finally written back to the entities. The Question How might such a system be implemented to achieve optimal performance, as well as making programmer life easier? What are the implementation details of such a system and what might have to be changed in the existing EC-architecture to accommodate this solution?
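
    A sketch of that split: keep two copies of each attribute, let every thread read the front copy freely during the expensive compute phase, then publish the staged values in one cheap serial pass at the end of the frame (this assumes a single writing system per attribute; multiple writers would still need merging):

        public class Buffered<T>
        {
            private T front; // what every system reads this frame
            private T back;  // staged result of this frame's computation

            public T Read() => front;
            public void Write(T value) => back = value; // compute phase, any thread
            public void Swap() => front = back;         // serial write phase
        }

        // Frame outline:
        // Parallel.ForEach(systems, s => s.Update());     // reads + staged writes
        // foreach (var a in bufferedAttributes) a.Swap();  // cheap, single-threaded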

  • Facebook Game database design

    - by facebook-100000781341887
    Hi, I'm currently developing a Facebook mafia-like PHP game (a lightweight version, of course); here is a simplified database (MySQL) of the game: id-a <int3> <for index> uid <chr15> <facebook uid> HP <int3> <health point> exp <int3> <experience> money <int3> <money> list_inventory <chr5> <the inventory the user holds... something special here, explained next> ... and 20 other fields like reputation, number of combats... (the number next to the type is the size of the type in bytes). For list_inventory, there are 40 inventory items in my game (actually, I have 5 of these kinds of lists in my database), and each user can hold at most 1 of each item; therefore, I assign 5 chars to this field and use each bit of each char as 1 item (5 chars * 8 bits = 40 slots), and I do some manipulation in PHP to extract the data from these 5 bytes. OK, I was thinking about this: if this game has 100,000 users and only 10% are active, then with my method the space used is 5 bytes * 100,000 = 500 KB. The other method would be to create a table user_hold_inventory and insert a record whenever a user holds an item. For the 10,000 active users I assume they hold every item, and for the rest I assume they hold none. Here are the fields of the new table: id-b <int3> <for index> id-a <int3> <id of the user table> inv_no <int1> <inventory item that the user holds> For the space used, ([id] (3+3) bytes + [inv_no] 1 byte) * [active users] 10,000 * [all items] 40 = 2.8 MB. It seems method 2 uses more space but consumes less CPU power. Please comment on these 2 methods, or correct me if there is a better method than what I have in mind. Another question: my database contains 26 fields, but I counted 5 of them that do not change frequently; should I separate those into another table or not? So many words - thanks for reading :)
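
    A sketch of the bit-packing described above (shown in C# for clarity; the game itself does this in PHP): item n lives at byte n/8, bit n%8 of the 5-byte field.

        // 40 item slots packed into 5 bytes.
        static bool HasItem(byte[] inventory, int item) =>
            (inventory[item / 8] & (1 << (item % 8))) != 0;

        static void GiveItem(byte[] inventory, int item) =>
            inventory[item / 8] |= (byte)(1 << (item % 8));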

  • Is good practice to optimize FPS even when it's above the lower limit to give illusion of movement?

    - by rraallvv
    I started out over 50 FPS on the iPhone, but now I'm below 30 FPS. I've seen most iPhone games clamped to either 60 or 30 FPS, even when 24 or less would give the illusion of movement. I've considered my limit to be a little over 15 FPS; in fact my physics simulation is updated at that rate (15.84 steps/s), as that is the lowest that still gives fluid movement - a bit lower gives jerky motion. Is there a practical reason to clamp FPS well above the lower limit? Update: The following image could help to clarify. I can independently set the physics simulation step, the frame rate, and the simulation update interval. My concern is: why should I clamp any of those to values greater than the minimum? For instance, to conserve battery life I could just choose the lower limits, but it seems that 60 and 30 FPS are the most used values.
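
    For reference, the decoupling described above is usually done with a fixed physics timestep and a free-running render rate; a sketch (StepPhysics and Render are assumed to exist):

        const double Dt = 1.0 / 15.84;   // the fixed simulation rate from the question
        double accumulator = 0;

        void Tick(double frameSeconds)
        {
            accumulator += frameSeconds;
            while (accumulator >= Dt)
            {
                StepPhysics(Dt);         // fixed-rate, deterministic
                accumulator -= Dt;
            }
            Render(accumulator / Dt);    // interpolate between physics states
        }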

  • 2D Platform Game Jumping

    - by Bradley Kreuger
    I'm currently writing a game in XNA for fun, which uses C#. I have got my sprites loaded, and when the character moves right he looks like he is running right, and when he moves left he looks like he is running left. I've been looking everywhere for a good coding example of how to create a jumping ability. I have read all the physics stuff that I can stand, and it doesn't help when I can't figure out how to use, say, the space bar to jump, while keeping the player from pressing space to jump again in mid-air before they land.
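
    A minimal shape for this, as a sketch (the field names are assumptions): gate the jump on a grounded flag, so holding or re-pressing space in mid-air does nothing until the character lands.

        if (Keyboard.GetState().IsKeyDown(Keys.Space) && isOnGround)
        {
            velocity.Y = -jumpSpeed;   // negative Y is up in screen coordinates
            isOnGround = false;
        }

        velocity.Y += gravity * elapsed;       // gravity pulls back down each frame
        position += velocity * elapsed;

        if (position.Y >= groundY)             // crude ground collision test
        {
            position.Y = groundY;
            velocity.Y = 0;
            isOnGround = true;                 // jumping is allowed again
        }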

  • Custom extensible file format for 2d tiled maps

    - by Christian Ivicevic
    I have implemented much of my game logic by now, but I still create my maps with nasty for-loops on the fly just to have something to work with. Now I want to move on and do some research on how to (un)serialize this data. (I am not looking for a map editor - I am speaking of the map file itself.) For now I am looking for suggestions and resources on how to implement a custom file format for my maps which should provide the following functionality (based on the MoSCoW method): Must have: extensibility and backward compatibility; handling of different layers; metadata on whether a tile is solid or can be passed through; special serialization of entities/triggers with associated properties/metadata. Could have: some kind of inclusion of the tileset to prevent having scattered files/tilesets. I am developing in C++ (using SDL) and targeting only Windows. Any useful help, tips, suggestions, ... would be appreciated!
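
    One proven shape for the must-haves is a chunked container in the IFF/PNG style: a header carrying a format version, followed by length-prefixed tagged chunks, so old readers can skip chunks they do not understand. A sketch of the writer side (C# here for brevity; the engine in question is C++, and the tag names are invented):

        using System.IO;
        using System.Text;

        static class MapWriter
        {
            // Tag (4 ASCII chars) + payload length + payload. Readers skip unknown
            // tags by length, which is what buys extensibility and back-compat.
            static void WriteChunk(BinaryWriter w, string tag, byte[] payload)
            {
                w.Write(Encoding.ASCII.GetBytes(tag)); // e.g. "LAYR", "ENTS", "TILE"
                w.Write(payload.Length);
                w.Write(payload);
            }
        }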

  • Drawing of a huge model - How to regain performance?

    - by marc wellman
    I have a huge model I want to draw in my XNA application, but because of its size I am experiencing a tremendous loss of performance. The model has about ~50,000,000 edges and is 205 MB on disk in DirectX format. Please don't ask whether this model has to be that big - yes, it has! Is there a way to transfer the model directly to my GPU and let the GPU do the drawing, the way a VertexBuffer is transferred like this: graphicsDevice.Vertices[1].SetSource(_instanceBuffers[i], 0, _sizeofMatrix); because when I try to fill a VertexBuffer with all the vertices I get an OutOfMemoryException.

  • Game state management (Game, Menu, Titlescreen, etc)

    - by munchor
    Basically, in every single game I've made so far, I always have a variable like "current_state", which can be "game", "titlescreen", "gameoverscreen", etc. And then in my Update function I have a huge: if current_state == "game" game stuff ... else if current_state == "titlescreen" ... However, I don't feel like this is a professional/clean way of handling states. Any ideas on how to do this in a better way? Or is this the standard way?
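
    The usual cleaner alternative is the state pattern: each state owns its own Update/Draw and the game just delegates to whichever is current; a sketch:

        public interface IGameState
        {
            void Update(float dt);
            void Draw();
        }

        public class StateMachine
        {
            private IGameState current;  // TitleScreen, Playing, GameOver...
            public void Change(IGameState next) { current = next; }
            public void Update(float dt) { current.Update(dt); }
            public void Draw() { current.Draw(); }
        }

    Adding a new screen then means adding a class, not another branch in a growing if/else chain.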

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    I'm following the tutorials in 3D Graphics with XNA Game Studio 4.0 and I came up with a horrible effect when I tried to implement the light map: http://i.stack.imgur.com/BUWvU.jpg This effect shows up when I look towards the center of the house (and it moves with me). It has this shape because I'm using a sphere to represent the light; using other light shapes gives different results. I'm using a class PrelightingRenderer: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Dhpoware; using Microsoft.Xna.Framework.Content; namespace XNAFirstPersonCamera { public class PrelightingRenderer { // Normal, depth, and light map render targets RenderTarget2D depthTarg; RenderTarget2D normalTarg; RenderTarget2D lightTarg; // Depth/normal effect and light mapping effect Effect depthNormalEffect; Effect lightingEffect; // Point light (sphere) mesh Model lightMesh; // List of models, lights, and the camera public List<CModel> Models { get; set; } public List<PPPointLight> Lights { get; set; } public FirstPersonCamera Camera { get; set; } GraphicsDevice graphicsDevice; int viewWidth = 0, viewHeight = 0; public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content) { viewWidth = GraphicsDevice.Viewport.Width; viewHeight = GraphicsDevice.Viewport.Height; // Create the three render targets depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24); normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); // Load effects depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal"); lightingEffect = Content.Load<Effect>(@"Effects\PPLight"); // Set effect parameters to light mapping effect lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth); lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight); // Load point light mesh and set light mapping effect to it lightMesh = Content.Load<Model>(@"Models\PPLightMesh"); lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect; this.graphicsDevice = GraphicsDevice; } public void Draw() { drawDepthNormalMap(); drawLightMap(); prepareMainPass(); } void drawDepthNormalMap() { // Set the render targets to 'slots' 1 and 2 graphicsDevice.SetRenderTargets(normalTarg, depthTarg); // Clear the render target to 1 (infinite depth) graphicsDevice.Clear(Color.White); // Draw each model with the PPDepthNormal effect foreach (CModel model in Models) { model.CacheEffects(); model.SetModelEffect(depthNormalEffect, false); model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position); model.RestoreEffects(); } // Un-set the render targets graphicsDevice.SetRenderTargets(null); } void drawLightMap() { // Set the depth and normal map info to the effect lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg); lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg); // Calculate the view * projection matrix Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix; // Set the inverse of the view * projection matrix to the effect Matrix invViewProjection = Matrix.Invert(viewProjection); lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection); // Set the render target to the graphics device graphicsDevice.SetRenderTarget(lightTarg); // Clear the
render target to black (no light) graphicsDevice.Clear(Color.Black); // Set render states to additive (lights will add their influences) graphicsDevice.BlendState = BlendState.Additive; graphicsDevice.DepthStencilState = DepthStencilState.None; foreach (PPPointLight light in Lights) { // Set the light's parameters to the effect light.SetEffectParameters(lightingEffect); // Calculate the world * view * projection matrix and set it to // the effect Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection; lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp); // Determine the distance between the light and camera float dist = Vector3.Distance(Camera.Position, light.Position); // If the camera is inside the light-sphere, invert the cull mode // to draw the inside of the sphere instead of the outside if (dist < light.Attenuation) graphicsDevice.RasterizerState = RasterizerState.CullClockwise; // Draw the point-light-sphere lightMesh.Meshes[0].Draw(); // Revert the cull mode graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; } // Revert the blending and depth render states graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; // Un-set the render target graphicsDevice.SetRenderTarget(null); } void prepareMainPass() { foreach (CModel model in Models) foreach (ModelMesh mesh in model.Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) { // Set the light map and viewport parameters to each model's effect if (part.Effect.Parameters["LightTexture"] != null) part.Effect.Parameters["LightTexture"].SetValue(lightTarg); if (part.Effect.Parameters["viewportWidth"] != null) part.Effect.Parameters["viewportWidth"].SetValue(viewWidth); if (part.Effect.Parameters["viewportHeight"] != null) part.Effect.Parameters["viewportHeight"].SetValue(viewHeight); } } } } that uses three effect: PPDepthNormal.fx float4x4 World; float4x4 View; float4x4 Projection; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Depth : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 viewProjection = mul(View, Projection); float4x4 worldViewProjection = mul(World, viewProjection); output.Position = mul(input.Position, worldViewProjection); output.Normal = mul(input.Normal, World); // Position's z and w components correspond to the distance // from camera and distance of the far plane respectively output.Depth.xy = output.Position.zw; return output; } // We render to two targets simultaneously, so we can't // simply return a float4 from the pixel shader struct PixelShaderOutput { float4 Normal : COLOR0; float4 Depth : COLOR1; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; // Depth is stored as distance from camera / far plane distance // to get value between 0 and 1 output.Depth = input.Depth.x / input.Depth.y; // Normal map simply stores X, Y and Z components of normal // shifted from (-1 to 1) range to (0 to 1) range output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5; // Other components must be initialized to compile output.Depth.a = 1; output.Normal.a = 1; return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPLight.fx float4x4 
WorldViewProjection; float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; sampler2D depthSampler = sampler_state { texture = <DepthTexture>; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = <NormalTexture>; minfilter = point; magfilter = point; mipfilter = point; }; float3 LightColor; float3 LightPosition; float LightAttenuation; // Include shared functions #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 LightPosition : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WorldViewProjection); output.LightPosition = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Find the pixel coordinates of the input position in the depth // and normal textures float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; // Extract the normal from the normal map and move from // 0 to 1 range to -1 to 1 range float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2; // Perform the lighting calculations for a point light float3 lightDirection = normalize(LightPosition - position); float lighting = clamp(dot(normal, lightDirection), 0, 1); // Attenuate the light to simulate a point light float d = distance(LightPosition, position); float att = 1 - pow(d / LightAttenuation, 6); return float4(LightColor * lighting * att, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPShared.vsi has some common functions: float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } and finally from the Game class I set up in LoadContent with: effect = Content.Load<Effect>(@"Effects\PPModel"); models[0] = new CModel(Content.Load<Model>(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice); house = new CModel(Content.Load<Model>(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice); models[0].SetModelEffect(effect, true); house.SetModelEffect(effect, true); renderer = new PrelightingRenderer(GraphicsDevice, Content); renderer.Models = new List<CModel>(); renderer.Models.Add(house); renderer.Models.Add(models[0]); renderer.Lights = new List<PPPointLight>() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) }; where PPModel.fx is: float4x4 World; float4x4 View; float4x4 Projection; texture2D BasicTexture; sampler2D basicTextureSampler = sampler_state { texture = <BasicTexture>; addressU = wrap; addressV = wrap; minfilter = 
anisotropic; magfilter = anisotropic; mipfilter = linear; }; bool TextureEnabled = true; texture2D LightTexture; sampler2D lightSampler = sampler_state { texture = <LightTexture>; minfilter = point; magfilter = point; mipfilter = point; }; float3 AmbientColor = float3(0.15, 0.15, 0.15); float3 DiffuseColor; #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 worldViewProjection = mul(World, mul(View, Projection)); output.Position = mul(input.Position, worldViewProjection); output.PositionCopy = output.Position; output.UV = input.UV; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Sample model's texture float3 basicTexture = tex2D(basicTextureSampler, input.UV); if (!TextureEnabled) basicTexture = float4(1, 1, 1, 1); // Extract lighting value from light map float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); float3 light = tex2D(lightSampler, texCoord); light += AmbientColor; return float4(basicTexture * DiffuseColor * light, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } I have no idea what's wrong... Googling the web, I found that this tutorial may have some bugs, but I don't know whether the fault is in the light mesh (the sphere), in a shader, or in the PrelightingRenderer class. Any help is very appreciated, thank you for reading!

  • glColor3f Setting colour

    - by Aaron
    This draws a white vertical line from 640 to 768 at x = 512: glDisable(GL_TEXTURE_2D); glBegin(GL_LINES); glColor3f((double)R/255,(double)G/255,(double)B/255); glVertex3f(SX, -SPosY, 0); // origin of the line glVertex3f(SX, -EPosY, 0); // ending point of the line glEnd(); glEnable(GL_TEXTURE_2D); This works, but after having a problem where it wouldn't draw the line white (or in any colour passed), I discovered that disabling GL_TEXTURE_2D before drawing the line, and then re-enabling it afterwards for other things, fixed it. I want to know: is this a normal step a programmer might take, or is it highly inefficient? I don't want to be causing any slowdowns due to a mistake =) Thanks

  • libgdx arrays onTouch() method and delays for objects

    - by johnny-b
    I am trying to create random bullets but it is not working for some reason. Also, how can I make a delay so that the bullets come every 30 seconds or 1 minute? The onTouch() method also does not work - it is not taking the bullet away. Shall I put the array in the GameRender class? Thanks. public class GameWorld { public static Ball ball; private Bullet bullet1; private ScrollHandler scroller; private Array<Bullet> bullets = new Array<Bullet>(); public GameWorld() { ball = new Ball(280, 273, 32, 32); bullet = new Bullet(-300, 200); scroller = new ScrollHandler(0); bullets.add(new Bullet(bullet.getX(), bullet.getY())); bullets = new Array<Bullet>(); Bullet bullet = null; float bulletX = 0.0f; float bulletY = 0.0f; for (int i=0; i < 10; i++) { bulletX = MathUtils.random(-10, 10); bulletY = MathUtils.random(-10, 10); bullet = new Bullet(bulletX, bulletY); bullets.add(bullet); } } public void update(float delta) { ball.update(delta); bullet.update(delta); scroller.update(delta); } public static Ball getBall() { return ball; } public ScrollHandler getScroller() { return scroller; } public Bullet getBullet1() { return bullet1; } } I also tried this, and it is not working; I used this in the GameRender class: Array<Bullet> enemies=new Array<Bullet>(); //in the constructor of the class enemies.add(new Bullet(bullet.getX(), bullet.getY())); // this throws an exception for some reason This is in the render method: for(int i=0; i<bullet.size; i++) bullet.get(i).draw(batcher); // this I am using in any method that will let me update from the constructor through to render for(int i=0; i<bullet.size; i++) bullet.get(i).update(delta); This is not taking the bullet out: @Override public boolean touchDown(int screenX, int screenY, int pointer, int button) { for(int i=0; i<bullet.size; i++) if(bullet.get(i).getBounds().contains(screenX,screenY)) bullet.removeIndex(i--); return false; } Thanks for the help, anyone.
