Search Results

Search found 37616 results on 1505 pages for 'model driven development'.


  • Velocity collision detection (2D)

    - by ultifinitus
    Alright, so I have made a simple game engine (see youtube) And my current implementation of collision resolution has a slight problem, involving the velocity of a platform. Basically I run through all of the objects necessary to detect collisions on and resolve those collisions as I find them. Part of that resolution is setting the player's velocity = the platform's velocity. Which works great! Unless I have a row of platforms moving at different velocities or a platform between a stack of tiles.... (current system) bool player::handle_collisions() { collisions tcol; bool did_handle = false; bool thisObjectHandle = false; for (int temp = 0; temp < collideQueue.size(); temp++) { thisObjectHandle = false; tcol = get_collision(prevPos.x,y,get_img()->get_width(),get_img()->get_height(), collideQueue[temp]->get_position().x,collideQueue[temp]->get_position().y, collideQueue[temp]->get_img()->get_width(),collideQueue[temp]->get_img()->get_height()); if (prevPos.y >= collideQueue[temp]->get_prev_pos().y + collideQueue[temp]->get_img()->get_height()) if (tcol.top > 0) { add_pos(0,tcol.top); set_vel(get_vel().x,collideQueue[temp]->get_vel().y); thisObjectHandle = did_handle = true; } if (prevPos.y + get_img()->get_height() <= collideQueue[temp]->get_prev_pos().y) if (tcol.bottom > 0) { add_pos(collideQueue[temp]->get_vel().x,-tcol.bottom); set_vel(get_vel().x/*collideQueue[temp]->get_vel().x*/,collideQueue[temp]->get_vel().y); ableToJump = true; jumpTimes = maxjumpable; thisObjectHandle = did_handle = true; } /// /// ADD CODE FROM NEXT CODE BLOCK HERE (on forum, not in code) /// } for (int temp = 0; temp < collideQueue.size(); temp++) { thisObjectHandle = false; tcol = get_collision(x,y,get_img()->get_width(),get_img()->get_height(), collideQueue[temp]->get_position().x,collideQueue[temp]->get_position().y, collideQueue[temp]->get_img()->get_width(),collideQueue[temp]->get_img()->get_height()); if (prevPos.x + get_img()->get_width() <= collideQueue[temp]->get_prev_pos().x) if (tcol.left > 0) { add_pos(-tcol.left,0); set_vel(collideQueue[temp]->get_vel().x,get_vel().y); thisObjectHandle = did_handle = true; } if (prevPos.x >= collideQueue[temp]->get_prev_pos().x + collideQueue[temp]->get_img()->get_width()) if (tcol.right > 0) { add_pos(tcol.right,0); set_vel(collideQueue[temp]->get_vel().x,get_vel().y); thisObjectHandle = did_handle = true; } } return did_handle; } (if I add the following code {where the comment to do so is}, which is glitchy, the above problem doesn't happen, though it brings others) if (!thisObjectHandle) { if (tcol.bottom > tcol.top) { add_pos(collideQueue[temp]->get_vel().x,-tcol.bottom); set_vel(get_vel().x,collideQueue[temp]->get_vel().y); } else if (tcol.top > tcol.bottom) { add_pos(0,tcol.top); set_vel(get_vel().x,collideQueue[temp]->get_vel().y); } } How would you change my system to prevent this?
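
    One way to restructure the resolution (a hedged sketch, not the engine's actual classes — the AABB type, member names, and the notion of a single "supporting" platform below are assumptions) is to push the player out along the axis of least penetration for every overlapping platform, but to inherit a platform's velocity only from the contact under the player's feet. That way a row of platforms moving at different speeds can each correct the player's position, yet only the one the player is standing on writes to the player's velocity:

    ```cpp
    #include <algorithm>
    #include <vector>

    struct Vec2 { float x = 0, y = 0; };

    struct AABB {                 // hypothetical stand-in for player/platform objects
        Vec2 pos, size, vel;      // pos = top-left, assumes screen coords (y grows down)
    };

    // Resolve 'player' against every platform; returns true if any contact occurred.
    // Only the platform found under the player's feet overwrites the velocity.
    bool resolveCollisions(AABB& player, const std::vector<AABB>& platforms)
    {
        bool any = false;
        const AABB* ground = nullptr;

        for (const AABB& p : platforms) {
            float dx1 = (p.pos.x + p.size.x) - player.pos.x;        // push player right
            float dx2 = (player.pos.x + player.size.x) - p.pos.x;   // push player left
            float dy1 = (p.pos.y + p.size.y) - player.pos.y;        // push player down
            float dy2 = (player.pos.y + player.size.y) - p.pos.y;   // push player up

            if (dx1 <= 0 || dx2 <= 0 || dy1 <= 0 || dy2 <= 0)
                continue;                                           // no overlap

            float px = std::min(dx1, dx2);                          // x penetration
            float py = std::min(dy1, dy2);                          // y penetration
            any = true;

            if (py < px) {                                          // resolve on y axis
                if (dy2 < dy1) {                                    // player is on top
                    player.pos.y -= dy2;
                    ground = &p;                                    // supporting platform
                } else {                                            // bumped from below
                    player.pos.y += dy1;
                    player.vel.y = p.vel.y;
                }
            } else {                                                // resolve on x axis
                player.pos.x += (dx1 < dx2) ? dx1 : -dx2;
            }
        }

        if (ground) {                     // carried by exactly one platform per frame
            player.vel.x = ground->vel.x;
            player.vel.y = ground->vel.y;
        }
        return any;
    }
    ```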

    Read the article

  • How to import or "using" a custom class in Unity script?

    - by Bobbake4
    I have downloaded the JSONObject plugin for parsing JSON in Unity, but when I use it in a script I get an error indicating JSONObject cannot be found. My question is: how do I use a custom class defined inside another class? I know I need a using directive to solve this, but I am not sure of the path to these custom objects I have imported. They are in the root project folder, inside a JSONObject folder, and the class is called JSONObject. Thanks

    Read the article

  • (Abstract) Game engine design

    - by lukeluke
    I am writing a simple 2D game (for mobile platforms) for the first time. From an abstract point of view, I have the main player controlled by the human, the enemies, elements that will interact with the main player, and other living elements that will be controlled by a simple AI (both enemies and non-enemies). The main player will be totally controlled by the human player; the other actors will be controlled by AI. So I have a class CActor and a class CActorLogic to start with. I would define a CActor subclass CHero (the main player controlled with some input device). This class will probably implement some type of listener, in order to capture input events. The other players controlled by the AI will probably be a specific subclass of CActor (a subclass per type, obviously). This seems to be reasonable. The CActor class should have a reference to a method of CActorLogic, that we will call something like CActorLogic::Advance() or similar. Actors should have a visual representation. I would introduce a CActorRepresentation class, with a method like Render() that will draw the actor (that is, the right frame of the right animation). Where to change the animation? Well, the actor logic method Advance() should take care of checking collisions and other things. I would like to discuss the design of a game engine (actors, entities, objects, messages, input handling, visualization of object states (that is, rendering, sound output and so on)) not from a low-level point of view, but from a high-level point of view, as I have described above. My question is: is there any book/online resource that will help me organize things (using an object-oriented approach)? Thanks
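
    For what it's worth, a minimal sketch of how the classes described above might fit together (the class names follow the post; everything else is an assumption for illustration, not a recommendation from any particular book):

    ```cpp
    #include <vector>

    class CActor;

    class CActorLogic {                     // behaviour: AI- or input-driven
    public:
        virtual ~CActorLogic() = default;
        virtual void Advance(CActor& actor, float dt) = 0;   // collisions, state changes
    };

    class CActorRepresentation {            // visuals only
    public:
        virtual ~CActorRepresentation() = default;
        virtual void Render(const CActor& actor) = 0;        // draw current animation frame
    };

    class CActor {                          // ties logic and representation together
    public:
        CActor(CActorLogic* logic, CActorRepresentation* repr)
            : logic_(logic), repr_(repr) {}
        void Update(float dt) { if (logic_) logic_->Advance(*this, dt); }
        void Draw()           { if (repr_)  repr_->Render(*this); }
    private:
        CActorLogic* logic_;
        CActorRepresentation* repr_;
    };

    class CHero : public CActor {           // the player-controlled actor; would also
    public:                                 // register itself as an input listener
        using CActor::CActor;
    };
    ```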

    Read the article

  • Beginner question about vertex arrays in OpenGL

    - by MrDatabase
    Is there a special order in which vertices are entered into a vertex array? Currently I'm drawing single textures like this: glBindTexture(GL_TEXTURE_2D, texName); glVertexPointer(2, GL_FLOAT, 0, vertices); glTexCoordPointer(2, GL_FLOAT, 0, coordinates); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); where vertices has four "xy pairs". This is working fine. As a test I doubled the sizes of the vertices and coordinates arrays and changed the last line above to: glDrawArrays(GL_TRIANGLE_STRIP, 0, 8); since vertices now contains eight "xy pairs". I do see two textures (the second is intentionally offset from the first). However the textures are now distorted. I've tried passing GL_TRIANGLES to glDrawArrays instead of GL_TRIANGLE_STRIP but this doesn't work either. I'm so new to OpenGL that I thought it's best to just ask here :-) Cheers!
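
    One likely explanation (hedged, since the actual vertex data isn't shown): GL_TRIANGLE_STRIP treats all 8 vertices as a single strip, so extra "bridging" triangles are generated between the last vertex of the first quad and the first vertex of the second, which shows up as distortion. A simple fix is to issue one draw call per quad using the first/count arguments of glDrawArrays; the alternative is to insert degenerate (repeated) vertices between the quads so the bridging triangles have zero area. A sketch reusing the post's variables:

    ```cpp
    // Draw two textured quads from one 8-vertex array as two separate strips.
    // 'vertices' and 'coordinates' are assumed to hold 8 xy pairs each, as in the post.
    glBindTexture(GL_TEXTURE_2D, texName);
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 0, coordinates);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);   // first quad:  vertices 0..3
    glDrawArrays(GL_TRIANGLE_STRIP, 4, 4);   // second quad: vertices 4..7

    // Note: with GL_TRIANGLES each quad needs 6 vertices (two explicit triangles),
    // so passing the same 8 vertices would not produce two correct quads either.
    ```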

    Read the article

  • Material, Pass, Technique and shaders

    - by Papi75
    I'm trying to make a clean and advanced Material class for the rendering of my game; here is my architecture: class Material { void sendToShader() { program->sendUniform( nameInShader, valueInMaterialOrOther ); } private: Blend blendmode; ///< Alpha, Add, Multiply, … Color ambient; Color diffuse; Color specular; DrawingMode drawingMode; // Line Triangles, … Program* program; std::map<string, TexturePacket> textures; // List of textures with TexturePacket = { Texture*, vec2 offset, vec2 scale} }; How can I handle the link between the Shader and the Material? (the sendToShader method) If the user wants to send additional information to the shader (like elapsed time), how can I allow that? (The user can't edit the Material class.) Thanks!
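
    One common approach to the second question (a sketch under the assumption that Program::sendUniform is overloaded for the value types involved; the stub types and uniform names below are hypothetical): let the Material carry a generic bag of user-set parameters alongside its fixed ones. User code sets values through a small setter, and sendToShader walks the map, so the Material class itself never needs editing:

    ```cpp
    #include <map>
    #include <string>

    struct Color { float r, g, b, a; };
    struct Program {                                 // stand-in for the post's Program class
        void sendUniform(const std::string&, float) {}
        void sendUniform(const std::string&, const Color&) {}
    };

    class Material {
    public:
        // User-facing API: attach arbitrary extra uniforms without touching this class,
        // e.g. material.setUniform("u_time", elapsedSeconds);
        void setUniform(const std::string& name, float value) { extraFloats_[name] = value; }

        void sendToShader() {
            if (!program_) return;
            // Fixed, well-known material parameters.
            program_->sendUniform("u_ambient",  ambient_);
            program_->sendUniform("u_diffuse",  diffuse_);
            program_->sendUniform("u_specular", specular_);
            // User-supplied extras.
            for (const auto& kv : extraFloats_)
                program_->sendUniform(kv.first, kv.second);
        }

    private:
        Color ambient_{}, diffuse_{}, specular_{};
        Program* program_ = nullptr;
        std::map<std::string, float> extraFloats_;   // could be a variant for vec/mat types
    };
    ```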

    Read the article

  • Object-Oriented OpenGL

    - by Sullivan
    I have been using OpenGL for a while and have read a large number of tutorials. Aside from the fact that a lot of them still use the fixed pipeline, they usually throw all the initialisation, state changes and drawing in one source file. This is fine for the limited scope of a tutorial, but I’m having a hard time working out how to scale it up to a full game. How do you split your usage of OpenGL across files? Conceptually, I can see the benefits of having, say, a rendering class that purely renders stuff to screen, but how would stuff like shaders and lights work? Should I have separate classes for things like lights and shaders?

    Read the article

  • Managing constant buffers without FX interface

    - by xcrypt
    I am aware that there is a sample on working without FX in the sample browser, and I already checked that one. However, some questions arise. In the sample: D3DXMATRIXA16 mWorldViewProj; D3DXMATRIXA16 mWorld; D3DXMATRIXA16 mView; D3DXMATRIXA16 mProj; mWorld = g_World; mView = g_View; mProj = g_Projection; mWorldViewProj = mWorld * mView * mProj; VS_CONSTANT_BUFFER* pConstData; g_pConstantBuffer10->Map( D3D10_MAP_WRITE_DISCARD, NULL, ( void** )&pConstData ); pConstData->mWorldViewProj = mWorldViewProj; pConstData->fTime = fBoundedTime; g_pConstantBuffer10->Unmap(); They are copying their D3DXMATRIXes to D3DXMATRIXA16. According to MSDN, these matrices are 16-byte aligned and optimised for the Intel Pentium 4. So, my first question: 1) Is it necessary to copy matrices to D3DXMATRIXA16 before sending them to the constant buffer? And if not, why don't we just use D3DXMATRIXA16 all the time? I have another question about managing multiple constant buffers within one shader. Suppose that, within your shader, you have multiple constant buffers that need to be updated at different times: cbuffer cbNeverChanges { matrix View; }; cbuffer cbChangeOnResize { matrix Projection; }; cbuffer cbChangesEveryFrame { matrix World; float4 vMeshColor; }; Then how would I set these buffers all at different times? g_pd3dDevice->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer10 ); gives me the possibility to set multiple buffers, but that is within one call. 2) Is that okay even if my constant buffers are updated at different times? And I suppose I have to make sure the constant buffers are in the same positions in the array as the order in which they appear in the shader?
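
    On question 2, one way to structure it (a hedged sketch; the slot assignments, struct layout and variable names are assumptions): create one ID3D10Buffer per cbuffer, bind all of them with a single VSSetConstantBuffers call whose array order matches the order the cbuffers are declared in (or their register(b#) assignments), and then Map/Unmap each buffer only when its data actually changes. Binding and updating are independent, so buffers updated at different rates can stay bound the whole time:

    ```cpp
    #include <d3d10.h>
    #include <d3dx10.h>

    struct CBChangesEveryFrame { D3DXMATRIX World; D3DXVECTOR4 vMeshColor; };

    // Assumed to have been created elsewhere with D3D10_BIND_CONSTANT_BUFFER.
    ID3D10Buffer* g_pCBNeverChanges      = nullptr;  // slot b0: View
    ID3D10Buffer* g_pCBChangeOnResize    = nullptr;  // slot b1: Projection
    ID3D10Buffer* g_pCBChangesEveryFrame = nullptr;  // slot b2: World, vMeshColor

    void BindConstantBuffers(ID3D10Device* device)
    {
        // Bind all three once; the array position must match the cbuffer order in the
        // shader (or explicit register(b0..b2) assignments).
        ID3D10Buffer* buffers[3] = { g_pCBNeverChanges, g_pCBChangeOnResize, g_pCBChangesEveryFrame };
        device->VSSetConstantBuffers(0, 3, buffers);
    }

    void UpdatePerFrameBuffer(const D3DXMATRIX& world, const D3DXVECTOR4& meshColor)
    {
        // Only the per-frame buffer is mapped each frame; the other two are left untouched.
        CBChangesEveryFrame* data = nullptr;
        g_pCBChangesEveryFrame->Map(D3D10_MAP_WRITE_DISCARD, 0, reinterpret_cast<void**>(&data));
        data->World = world;
        data->vMeshColor = meshColor;
        g_pCBChangesEveryFrame->Unmap();
    }
    ```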

    Read the article

  • Strange and erratic transformations when using OpenGL VBOs to render scene

    - by janoside
    I have an existing iOS game with fairly simple scenes (all textured quads) and I'm using Apple's "Texture2D" class. I'm trying to convert this class to use VBOs since the vertices of my objects basically never change so I may as well not re-create them for every object every frame. I have the scene rendering using VBOs but the sizes and orientations of all rendered objects are strange and erratic - though locations seem generally correct. I've been toying with this code for a few days now, and I've found something odd: if I re-create all of my VBOs each frame, everything looks correct, even though I'm almost certain my vertices are not changing. Other notes I'm basing my work on this tutorial, and therefore am also using "IBOs" I create my buffers before rendering begins My buffers include vertex and texture data I'm using OpenGL ES 1.1 Fearing some strange effect of the current matrix GL state at the time of buffer creation I've also tried wrapping my buffer-setup code in a "pushMatrix-loadIdentity-popMatrix" block which (as expected) had no effect I'm aware that various articles have been published demonstrating that VBOs may not help performance, but I want to understand this problem and at least have the option to use them. I realize this is a shot in the dark, but has anyone else experienced this type of strange behavior? What might I be doing to result in this behavior? It's rather difficult for me to isolate the problem since I'm working in an existing, moderately complex project, so suggestions about how to approach the problem are also quite welcome.
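
    Hard to diagnose blind, but one very common cause of exactly these symptoms when porting client-side arrays to VBOs (offered as an assumption to check, not a diagnosis): once a buffer is bound to GL_ARRAY_BUFFER, the last argument of glVertexPointer/glTexCoordPointer is interpreted as a byte offset into that buffer rather than a CPU pointer, and the correct buffer must be bound when each pointer is set for each object's draw. Passing the old CPU pointers, or drawing with whichever buffer happens to still be bound, yields geometry that is positioned roughly right but scaled/oriented wrongly, and it "fixes itself" if the buffers are recreated in draw order. A sketch of the expected setup (placeholder data):

    ```cpp
    GLfloat quad[8] = { 0,0, 1,0, 0,1, 1,1 };   // xy pairs (placeholder data)
    GLfloat uvs[8]  = { 0,0, 1,0, 0,1, 1,1 };
    GLuint  vbos[2];

    // Creation (once per object):
    glGenBuffers(2, vbos);
    glBindBuffer(GL_ARRAY_BUFFER, vbos[0]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, vbos[1]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(uvs), uvs, GL_STATIC_DRAW);

    // Drawing (every frame, for every object): rebind and pass OFFSETS, not pointers.
    glBindBuffer(GL_ARRAY_BUFFER, vbos[0]);
    glVertexPointer(2, GL_FLOAT, 0, (const GLvoid*)0);
    glBindBuffer(GL_ARRAY_BUFFER, vbos[1]);
    glTexCoordPointer(2, GL_FLOAT, 0, (const GLvoid*)0);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    ```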

    Read the article

  • Dynamic body implementation

    - by ArturoVM
    I am writing a 2D game where one of the characters has some very particular requirements. This character is a body with no particular shape (similar to a fluid, but not quite); it has to be able to grow and shrink (as in actually growing, not just scaling), and it has to have collision detection (even if it's basic). Because of these requirements, it obviously can't be based on a sprite, so direct rendering of the shape would seem to be the logical thing to do. I assume this is no easy task, but I just couldn't find a good physics engine that covers these requirements (or at least no tutorial on how to do it; I particularly searched for Box2D tutorials). Is there a way of doing this with Box2D, SDL, or any other physics or game engine out there? If not, what's a good place to start? I am really clueless as far as soft-body physics are concerned.

    Read the article

  • I'm looking to learn how to apply traditional animation techniques to my graphics engine - are there any tutorials or online-resources that can help?

    - by blueberryfields
    There are many traditional animation techniques - such as blurring of motion, motion along an elliptical curve rather than a straight line, counter-motion before beginning of movement - which help with creating the appearance of a realistic 3D animated character. I'm looking to incorporate tools and short cuts for some of these into my graphics engine, to make it easier for my end users to use these techniques in their animations. Is there a good resource listing the techniques and the principles behind them, especially how they might apply to a graphics engine or 3D animation?

    Read the article

  • What's a viable way to get public properties from child objects?

    - by Raven Dreamer
    I have a GameObject (RoomOrganizer in the picture below) with a "RoomManager" script, and one or more child objects, each with a 'HasParallelogram' component attached, like so: I've also got the following in the aforementioned "RoomManager" void Awake () { Rect tempRect; HasParallelogram tempsc; foreach (Transform child in transform) { try { tempsc = child.GetComponent<HasParallelogram>(); tempRect = tempsc.myRect; blockedZoneList.Add(new Parallelogram(tempRect)); Debug.Log(tempRect.ToString()); } catch( System.NullReferenceException) { Debug.Log("Null Reference Caught"); } } } Unfortunately, attempting to assign tempRect = tempsc.myRect causes a null reference exception at run time. Am I missing some crucial step? HasParallelogram is an empty script with a public Rect set in the editor and nothing else. What's the proper way to get a child's component?

    Read the article

  • Pixmaps, ByteBuffers, and Textures....Oh my

    - by odaymichael
    My ultimate goal is to take a specific region of the screen and redraw it somewhere else. For example, take a square from the upper left-hand corner of the screen and redraw it in the lower right-hand corner, so that it is basically a copy of that screen section; kind of like a minimap, but at the same scale as the original. I have looked into pixmaps and ByteBuffers, and also maybe copying that region from the back buffer somehow. I'm wondering about the best way to go about this. Any help is appreciated. I am using OpenGL ES and libgdx, for what it's worth.

    Read the article

  • How does gluLookAt work?

    - by Chan
    From my understanding, gluLookAt( eye_x, eye_y, eye_z, center_x, center_y, center_z, up_x, up_y, up_z ); is equivalent to: glRotatef(B, 0.0, 0.0, 1.0); glRotatef(A, wx, wy, wz); glTranslatef(-eye_x, -eye_y, -eye_z); But when I print out the ModelView matrix, the call to glTranslatef() doesn't seem to work properly. Here is the code snippet: #include <stdlib.h> #include <stdio.h> #include <GL/glut.h> #include <iomanip> #include <iostream> #include <string> using namespace std; static const int Rx = 0; static const int Ry = 1; static const int Rz = 2; static const int Ux = 4; static const int Uy = 5; static const int Uz = 6; static const int Ax = 8; static const int Ay = 9; static const int Az = 10; static const int Tx = 12; static const int Ty = 13; static const int Tz = 14; void init() { glClearColor(0.0, 0.0, 0.0, 0.0); glEnable(GL_DEPTH_TEST); glShadeModel(GL_SMOOTH); glEnable(GL_LIGHTING); glEnable(GL_LIGHT0); GLfloat lmodel_ambient[] = { 0.8, 0.0, 0.0, 0.0 }; glLightModelfv(GL_LIGHT_MODEL_AMBIENT, lmodel_ambient); } void displayModelviewMatrix(float MV[16]) { int SPACING = 12; cout << left; cout << "\tMODELVIEW MATRIX\n"; cout << "--------------------------------------------------" << endl; cout << setw(SPACING) << "R" << setw(SPACING) << "U" << setw(SPACING) << "A" << setw(SPACING) << "T" << endl; cout << "--------------------------------------------------" << endl; cout << setw(SPACING) << MV[Rx] << setw(SPACING) << MV[Ux] << setw(SPACING) << MV[Ax] << setw(SPACING) << MV[Tx] << endl; cout << setw(SPACING) << MV[Ry] << setw(SPACING) << MV[Uy] << setw(SPACING) << MV[Ay] << setw(SPACING) << MV[Ty] << endl; cout << setw(SPACING) << MV[Rz] << setw(SPACING) << MV[Uz] << setw(SPACING) << MV[Az] << setw(SPACING) << MV[Tz] << endl; cout << setw(SPACING) << MV[3] << setw(SPACING) << MV[7] << setw(SPACING) << MV[11] << setw(SPACING) << MV[15] << endl; cout << "--------------------------------------------------" << endl; cout << endl; } void reshape(int w, int h) { float ratio = static_cast<float>(w)/h; glViewport(0, 0, w, h); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.0, ratio, 1.0, 425.0); } void draw() { float m[16]; glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glGetFloatv(GL_MODELVIEW_MATRIX, m); gluLookAt( 300.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f ); glColor3f(1.0, 0.0, 0.0); glutSolidCube(100.0); glGetFloatv(GL_MODELVIEW_MATRIX, m); displayModelviewMatrix(m); glutSwapBuffers(); } int main(int argc, char** argv) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH); glutInitWindowSize(400, 400); glutInitWindowPosition(100, 100); glutCreateWindow("Demo"); glutReshapeFunc(reshape); glutDisplayFunc(draw); init(); glutMainLoop(); return 0; } No matter what value I use for the eye vector: 300, 0, 0 or 0, 300, 0 or 0, 0, 300 the translation vector is the same, which doesn't make any sense because the order of code is in backward order so glTranslatef should run first, then the 2 rotations. Plus, the rotation matrix, is completely independent of the translation column (in the ModelView matrix), then what would cause this weird behavior? Here is the output with the eye vector is (0.0f, 300.0f, 0.0f) MODELVIEW MATRIX -------------------------------------------------- R U A T -------------------------------------------------- 0 0 0 0 0 0 0 0 0 1 0 -300 0 0 0 1 -------------------------------------------------- I would expect the T column to be (0, -300, 0)! 
So could anyone help me explain this? The implementation of gluLookAt from http://www.mesa3d.org void GLAPIENTRY gluLookAt(GLdouble eyex, GLdouble eyey, GLdouble eyez, GLdouble centerx, GLdouble centery, GLdouble centerz, GLdouble upx, GLdouble upy, GLdouble upz) { float forward[3], side[3], up[3]; GLfloat m[4][4]; forward[0] = centerx - eyex; forward[1] = centery - eyey; forward[2] = centerz - eyez; up[0] = upx; up[1] = upy; up[2] = upz; normalize(forward); /* Side = forward x up */ cross(forward, up, side); normalize(side); /* Recompute up as: up = side x forward */ cross(side, forward, up); __gluMakeIdentityf(&m[0][0]); m[0][0] = side[0]; m[1][0] = side[1]; m[2][0] = side[2]; m[0][1] = up[0]; m[1][1] = up[1]; m[2][1] = up[2]; m[0][2] = -forward[0]; m[1][2] = -forward[1]; m[2][2] = -forward[2]; glMultMatrixf(&m[0][0]); glTranslated(-eyex, -eyey, -eyez); }
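
    The key point, illustrated below with a small standalone check (not part of the program above): gluLookAt multiplies the rotation onto the current matrix first and the translation second, so the ModelView's translation column ends up being R·(−eye), not −eye itself. For eye = (300, 0, 0) that evaluates to (0, 0, −300), which matches the printout. Separately, eye = (0, 300, 0) with up = (0, 1, 0) is degenerate: the view direction is parallel to up, so the side and recomputed up vectors collapse to zero, which explains the zeroed rows in the output shown.

    ```cpp
    #include <cstdio>

    int main() {
        // View setup from the post: eye = (300,0,0), center = origin, up = (0,1,0).
        // gluLookAt's rotation rows are: side, recomputed up, -forward.
        float side[3]   = { 0.0f, 0.0f, -1.0f };   // normalize(forward x up)
        float up[3]     = { 0.0f, 1.0f,  0.0f };   // side x forward
        float minusF[3] = { 1.0f, 0.0f,  0.0f };   // -forward
        float negEye[3] = { -300.0f, 0.0f, 0.0f };

        // Because the matrix is R * T(-eye), the translation column is R * (-eye).
        float t[3] = {
            side[0]*negEye[0]   + side[1]*negEye[1]   + side[2]*negEye[2],
            up[0]*negEye[0]     + up[1]*negEye[1]     + up[2]*negEye[2],
            minusF[0]*negEye[0] + minusF[1]*negEye[1] + minusF[2]*negEye[2],
        };
        std::printf("T column = (%g, %g, %g)\n", t[0], t[1], t[2]);  // (0, 0, -300)
        return 0;
    }
    ```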

    Read the article

  • Incorrect lighting results with deferred rendering

    - by Lasse
    I am trying to render a light-pass to a texture which I will later apply on the scene. But I seem to calculate the light position wrong. I am working on view-space. In the image above, I am outputting the attenuation of a point light which is currently covering the whole screen. The light is at 0,10,0 position, and I transform it to view-space first: Vector4 pos; Vector4 tmp = new Vector4 (light.Position, 1); // Transform light position for shader Vector4.Transform (ref tmp, ref Camera.ViewMatrix, out pos); shader.SendUniform ("LightViewPosition", ref pos); Now to me that does not look as it should. What I think it should look like is that the white area should be on the center of the scene. The camera is at the corner of the scene, and it seems as if the light would move along with the camera. Here's the fragment shader code: void main(){ // default black color vec3 color = vec3(0); // Pixel coordinates on screen without depth vec2 PixelCoordinates = gl_FragCoord.xy / ScreenSize; // Get pixel position using depth from texture vec4 depthtexel = texture( DepthTexture, PixelCoordinates ); float depthSample = unpack_depth(depthtexel); // Get pixel coordinates on camera-space by multiplying the // coordinate on screen-space by inverse projection matrix vec4 world = (ImP * RemapMatrix * vec4(PixelCoordinates, depthSample, 1.0)); // Undo the perspective calculations vec3 pixelPosition = (world.xyz / world.w) * 3; // How far the light should reach from it's point of origin float lightReach = LightColor.a / 2; // Vector in between light and pixel vec3 lightDir = (LightViewPosition.xyz - pixelPosition); float lightDistance = length(lightDir); vec3 lightDirN = normalize(lightDir); // Discard pixels too far from light source //if(lightReach < lightDistance) discard; // Get normal from texture vec3 normal = normalize((texture( NormalTexture, PixelCoordinates ).xyz * 2) - 1); // Half vector between the light direction and eye, used for specular component vec3 halfVector = normalize(lightDirN + normalize(-pixelPosition)); // Dot product of normal and light direction float NdotL = dot(normal, lightDirN); float attenuation = pow(lightReach / lightDistance, LightFalloff); // If pixel is lit by the light if(NdotL > 0) { // I have moved stuff from here to above so I can debug them. // Diffuse light color color += LightColor.rgb * NdotL * attenuation; // Specular light color color += LightColor.xyz * pow(max(dot(halfVector, normal), 0.0), 4.0) * attenuation; } RT0 = vec4(color, 1); //RT0 = vec4(pixelPosition, 1); //RT0 = vec4(depthSample, depthSample, depthSample, 1); //RT0 = vec4(NdotL, NdotL, NdotL, 1); RT0 = vec4(attenuation, attenuation, attenuation, 1); //RT0 = vec4(lightReach, lightReach, lightReach, 1); //RT0 = depthtexel; //RT0 = 100 / vec4(lightDistance, lightDistance, lightDistance, 1); //RT0 = vec4(lightDirN, 1); //RT0 = vec4(halfVector, 1); //RT0 = vec4(LightColor.xyz,1); //RT0 = vec4(LightViewPosition.xyz/100, 1); //RT0 = vec4(LightPosition.xyz, 1); //RT0 = vec4(normal,1); } What am I doing wrong here?

    Read the article

  • 2D basic map system

    - by Cyril
    I'm currently coding a 2D game in Java, and I would like some clues on how to build this system: the screen moves over a larger map; for instance, the screen represents 800*600 units on a 100K*100K map. When you command your unit to go to another position, the screen moves over this map, AND when you move your mouse to one side or another of the screen, you move the screen over the map. Not sure that I'm clear, but you can find this system in most RTS games (Warcraft/StarCraft, for example). I'm currently using Slick 2D. Any ideas? Thanks.

    Read the article

  • Checking for alternate keys with XNA IsKeyDown

    - by jocull
    I'm working on picking up XNA and this was a confusing point for me. KeyboardState keyState = Keyboard.GetState(); if (keyState.IsKeyDown(Keys.Left) || keyState.IsKeyDown(Keys.A)) { //Do stuff... } The book I'm using (Learning XNA 4.0, O'Reilly) says that this method accepts a bitwise OR series of keys, which I think should look like this... KeyboardState keyState = Keyboard.GetState(); if (keyState.IsKeyDown(Keys.Left | Keys.A)) { //Do stuff... } But I can't get it to work. I also tried using !IsKeyUp(... | ...), as it said that all keys had to be down for it to be true, but had no luck with that either. Ideas? Thanks.

    Read the article

  • Interpolating between two networked states?

    - by Vaughan Hilts
    I have many entities on the client side that are simulated (their velocities are added to their positions on a per-frame basis) and I let them dead reckon themselves. They send updates about where they were last seen and their velocity changes. This works great, and other players see this work fine. However, these players begin to desync after some time. This is because of latency. I'd like to know how I can interpolate between states so they appear to be in the correct position. I know where the player was LAST seen and their current velocity, but interpolating to the last-seen state causes the player to actually move -backwards-. I could not use velocity at all for other clients and simply 'lerp' them towards the appropriate direction, but I feel this would cause jaggy movement. What are the alternatives?
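
    One widely used alternative is snapshot interpolation (sketched generically below; the buffer layout and the 100 ms delay are assumptions to tune): buffer the incoming (time, position) updates for each remote entity and render that entity a fixed delay in the past, interpolating between the two snapshots that bracket the render time. Velocity is then only needed for extrapolation when the buffer runs dry, so remote players never move backwards and motion stays smooth at the cost of a small, constant visual latency.

    ```cpp
    #include <deque>

    struct Vec2 { float x = 0, y = 0; };
    struct Snapshot { double time; Vec2 pos; };        // one received network update

    static Vec2 lerp(const Vec2& a, const Vec2& b, float t) {
        return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
    }

    // Where to draw a remote entity at 'now', rendering 'delay' seconds in the past.
    Vec2 interpolatedPosition(const std::deque<Snapshot>& buffer, double now, double delay = 0.1) {
        double renderTime = now - delay;
        for (size_t i = 1; i < buffer.size(); ++i) {
            if (buffer[i].time >= renderTime) {                      // bracketing pair found
                const Snapshot& a = buffer[i - 1];
                const Snapshot& b = buffer[i];
                float t = float((renderTime - a.time) / (b.time - a.time));
                if (t < 0.0f) t = 0.0f;
                if (t > 1.0f) t = 1.0f;
                return lerp(a.pos, b.pos, t);
            }
        }
        // No newer snapshot yet: hold (or extrapolate with the last known velocity).
        return buffer.empty() ? Vec2{} : buffer.back().pos;
    }
    ```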

    Read the article

  • Register Game Object Components in Game Subsystems? (Component-based Game Object design)

    - by topright
    I'm creating a component-based game object system. Some background: GameObject is simply a list of Components. There are GameSubsystems. For example, rendering, physics, etc. Each GameSubsystem contains pointers to some of the Components. GameSubsystem is a very powerful and flexible abstraction: it represents any slice (or aspect) of the game world. There is a need for a mechanism to register Components in GameSubsystems (when a GameObject is created and composed). There are 4 approaches: 1: Chain of responsibility pattern. Every Component is offered to every GameSubsystem. Each GameSubsystem decides which Components to register (and how to organize them). For example, GameSubsystemRender can register Renderable Components. pro. Components know nothing about how they are used. Low coupling. A. We can add a new GameSubsystem. For example, let's add a GameSubsystemTitles that registers every ComponentTitle, guarantees that every title is unique, and provides an interface for querying objects by title. Of course, ComponentTitle should not be rewritten or inherited in this case. B. We can reorganize existing GameSubsystems. For example, GameSubsystemAudio, GameSubsystemRender, and GameSubsystemParticleEmitter can be merged into GameSubsystemSpatial (to place all audio, emitter, and render Components in the same hierarchy and use parent-relative transforms). con. Every-to-every checks. Very inefficient. con. Subsystems know about Components. 2: Each Subsystem searches for Components of specific types. pro. Better performance than in Approach 1. con. Subsystems still know about Components. 3: Component registers itself in GameSubsystem(s). We know at compile time that there is a GameSubsystemRenderer, so let ComponentImageRender call something like GameSubsystemRenderer::register(ComponentRenderBase*). pro. Performance. No unnecessary checks as in Approach 1. con. Components are badly coupled with GameSubsystems. 4: Mediator pattern. GameState (which contains the GameSubsystems) can implement registerComponent(Component*). pro. Components and GameSubsystems know nothing about each other. con. In C++ it would look like an ugly and slow typeid switch. Questions: Which approach is better and most commonly used in component-based design? What does practice say? Any suggestions about the implementation of Approach 4? Thank you.
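
    On the Approach 4 question: one way to avoid the typeid switch (a hedged sketch; the family-id scheme and the names below are assumptions, not the canonical form of the pattern) is to give every Component a small integer "family" id and let each GameSubsystem tell the mediator which families it wants. registerComponent then becomes a map lookup instead of a type switch, and neither side knows about the other's concrete classes:

    ```cpp
    #include <map>
    #include <vector>

    class Component {
    public:
        virtual ~Component() = default;
        virtual int familyId() const = 0;        // e.g. RENDERABLE, AUDIBLE, TITLED ...
    };

    class GameSubsystem {
    public:
        virtual ~GameSubsystem() = default;
        virtual void onComponentAdded(Component* c) = 0;
    };

    class GameState {                            // the mediator
    public:
        // Called once at startup by each subsystem for each family it cares about.
        void subscribe(int familyId, GameSubsystem* s) { subscribers_[familyId].push_back(s); }

        // Called when a GameObject is composed: one map lookup, no typeid switch.
        void registerComponent(Component* c) {
            for (GameSubsystem* s : subscribers_[c->familyId()])
                s->onComponentAdded(c);
        }
    private:
        std::map<int, std::vector<GameSubsystem*>> subscribers_;
    };
    ```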

    Read the article

  • Getting isometric grid coordinates from standard X,Y coordinates

    - by RoryHarvey
    I'm currently trying to add sprites to an isometric Tiled TMX map using Objects in cocos2d. The problem is that the X and Y metadata from the TMX object are in standard 2D format (pixels x, pixels y), instead of isometric grid X and Y format. Usually you would just divide them by the tile size, but isometric needs some sort of transform. For example, on a 64x32 isometric tilemap of size 40 tiles by 40 tiles, an object at (20,21) has coordinates that come out as (640,584). So the question really is: what formula gets (20,21) from (640,584)?
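
    For reference, the usual inverse of the isometric projection (a generic sketch; whether it reproduces the (640, 584) → (20, 21) example depends on where Tiled/cocos2d put the map origin and how object-layer coordinates are stored, so treat the origin offset as something to calibrate):

    ```cpp
    struct GridPos { float x, y; };

    // screenX/screenY are pixel coordinates relative to the map origin (top of the diamond);
    // tileW/tileH are the tile dimensions, e.g. 64 and 32.
    GridPos screenToIso(float screenX, float screenY, float tileW, float tileH) {
        float halfW = tileW * 0.5f;
        float halfH = tileH * 0.5f;
        // Forward mapping: screenX = (mapX - mapY) * halfW, screenY = (mapX + mapY) * halfH.
        GridPos g;
        g.x = (screenX / halfW + screenY / halfH) * 0.5f;
        g.y = (screenY / halfH - screenX / halfW) * 0.5f;
        return g;
    }
    ```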

    Read the article

  • How to get location of sprite placed on rotating circle in cocos2d android?

    - by Real_steel4819
    I am developing a game using cocos2d and I am stuck on finding the location of a sprite placed on a circle that rotates in the background. When I hit a certain position on the circle, the hit does not register at the wanted position; instead it lands away from it and the target is placed there. I tried printing the position of the hit in spriteMoveFinished() and ccTouchesEnded(), but it gives the initial position, not the rotated position. CGPoint location = CCDirector.sharedDirector().convertToGL(CGPoint.ccp(event.getX(), event.getY())); This is what I am using to get the location.
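
    The usual fix when a touch is compared against the sprite's unrotated position (a generic sketch; the angle sign and anchor conventions depend on the node hierarchy and on cocos2d's clockwise-positive rotation, so treat them as assumptions to verify): recompute the sprite's world position from the circle's centre, the sprite's offset from that centre, and the circle's current rotation before doing the hit test.

    ```cpp
    #include <cmath>

    struct Point { float x, y; };

    // Position of a child placed at 'offset' from 'center' after the parent has rotated
    // by 'angleDegrees'. Flip the sign of the angle for clockwise-positive rotation
    // (cocos2d's rotation property increases clockwise).
    Point rotatedSpritePosition(Point center, Point offset, float angleDegrees) {
        float a = angleDegrees * 3.14159265f / 180.0f;
        float c = std::cos(a), s = std::sin(a);
        return { center.x + offset.x * c - offset.y * s,
                 center.y + offset.x * s + offset.y * c };
    }
    ```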

    Read the article

  • Unity3D Android - Move your character to a specific x position

    - by user3666251
    I'm making a new game for Android and I wanted to move my character (which is a cube for now) to a specific x location (on top of a flying floor/ground thingy), but I've been having some trouble with it. I've been using this script: var jumpSpeed: float = 3.5; var distToGround: float; function Start(){ // get the distance to ground distToGround = collider.bounds.extents.y; } function IsGrounded(): boolean { return Physics.Raycast(transform.position, -Vector3.up, distToGround + 0.1); } function Update () { // Move the object to the right relative to the camera 1 unit/second. transform.Translate(Vector3.forward * Time.deltaTime); if (Input.anyKeyDown && IsGrounded()){ rigidbody.velocity.x = jumpSpeed; } } And this is the result (which is not what I want): https://www.youtube.com/watch?v=Fj8B6eI4dbE&feature=youtu.be Does anyone have any idea how to do this? I'm new to Unity and scripting. I'm using UnityScript, by the way. Thank you.

    Read the article

  • How to follow object on CatmullRomSplines at constant speed (e.g. train and train carriage)?

    - by Simon
    I have a CatmullRomSpline, and using the very good example at https://github.com/libgdx/libgdx/wiki/Path-interface-%26-Splines I have my object moving at an even pace over the spline. Using a simple train and carriage example, I now want to have the carriage follow the train at the same speed as the train (not jolting along as it does with my code below). This leads into my main questions: How can I make the carriage have the same constant speed as the train and make it non jerky (it has something to do with the derivative I think, I don't understand how that part works)? Why do I need to divide by the line length to convert to metres per second, and is that correct? It wasn't done in the linked examples? I have used the example I linked to above, and modified for my specific example: private void process(CatmullRomSpline catmullRomSpline) { // Render path with precision of 1000 points renderPath(catmullRomSpline, 1000); float length = catmullRomSpline.approxLength(catmullRomSpline.spanCount * 1000); // Render the "train" Vector2 trainDerivative = new Vector2(); Vector2 trainLocation = new Vector2(); catmullRomSpline.derivativeAt(trainDerivative, current); // For some reason need to divide by length to convert from pixel speed to metres per second but I do not // really understand why I need it, it wasn't done in the examples??????? current += (Gdx.graphics.getDeltaTime() * speed / length) / trainDerivative.len(); catmullRomSpline.valueAt(trainLocation, current); renderCircleAtLocation(trainLocation); if (current >= 1) { current -= 1; } // Render the "carriage" Vector2 carriageLocation = new Vector2(); float carriagePercentageCovered = (((current * length) - 1f) / length); // I would like it to follow at 1 metre behind carriagePercentageCovered = Math.max(carriagePercentageCovered, 0); catmullRomSpline.valueAt(carriageLocation, carriagePercentageCovered); renderCircleAtLocation(carriageLocation); } private void renderPath(CatmullRomSpline catmullRomSpline, int k) { // catMulPoints would normally be cached when initialising, but for sake of example... Vector2[] catMulPoints = new Vector2[k]; for (int i = 0; i < k; ++i) { catMulPoints[i] = new Vector2(); catmullRomSpline.valueAt(catMulPoints[i], ((float) i) / ((float) k - 1)); } SHAPE_RENDERER.begin(ShapeRenderer.ShapeType.Line); SHAPE_RENDERER.setColor(Color.NAVY); for (int i = 0; i < k - 1; ++i) { SHAPE_RENDERER.line((Vector2) catMulPoints[i], (Vector2) catMulPoints[i + 1]); } SHAPE_RENDERER.end(); } private void renderCircleAtLocation(Vector2 location) { SHAPE_RENDERER.begin(ShapeRenderer.ShapeType.Filled); SHAPE_RENDERER.setColor(Color.YELLOW); SHAPE_RENDERER.circle(location.x, location.y, .5f); SHAPE_RENDERER.end(); } To create a decent sized CatmullRomSpline for testing this out: Vector2[] controlPoints = makeControlPointsArray(); CatmullRomSpline myCatmull = new CatmullRomSpline(controlPoints, false); .... 
private Vector2[] makeControlPointsArray() { Vector2[] pointsArray = new Vector2[78]; pointsArray[0] = new Vector2(1.681817f, 10.379999f); pointsArray[1] = new Vector2(2.045455f, 10.379999f); pointsArray[2] = new Vector2(2.663636f, 10.479999f); pointsArray[3] = new Vector2(3.027272f, 10.700000f); pointsArray[4] = new Vector2(3.663636f, 10.939999f); pointsArray[5] = new Vector2(4.245455f, 10.899999f); pointsArray[6] = new Vector2(4.736363f, 10.720000f); pointsArray[7] = new Vector2(4.754545f, 10.339999f); pointsArray[8] = new Vector2(4.518181f, 9.860000f); pointsArray[9] = new Vector2(3.790908f, 9.340000f); pointsArray[10] = new Vector2(3.172727f, 8.739999f); pointsArray[11] = new Vector2(3.300000f, 8.340000f); pointsArray[12] = new Vector2(3.700000f, 8.159999f); pointsArray[13] = new Vector2(4.227272f, 8.520000f); pointsArray[14] = new Vector2(4.681818f, 8.819999f); pointsArray[15] = new Vector2(5.081817f, 9.200000f); pointsArray[16] = new Vector2(5.463636f, 9.460000f); pointsArray[17] = new Vector2(5.972727f, 9.300000f); pointsArray[18] = new Vector2(6.063636f, 8.780000f); pointsArray[19] = new Vector2(6.027272f, 8.259999f); pointsArray[20] = new Vector2(5.700000f, 7.739999f); pointsArray[21] = new Vector2(5.300000f, 7.440000f); pointsArray[22] = new Vector2(4.645454f, 7.179999f); pointsArray[23] = new Vector2(4.136363f, 6.940000f); pointsArray[24] = new Vector2(3.427272f, 6.720000f); pointsArray[25] = new Vector2(2.572727f, 6.559999f); pointsArray[26] = new Vector2(1.900000f, 7.100000f); pointsArray[27] = new Vector2(2.336362f, 7.440000f); pointsArray[28] = new Vector2(2.590908f, 7.940000f); pointsArray[29] = new Vector2(2.318181f, 8.500000f); pointsArray[30] = new Vector2(1.663636f, 8.599999f); pointsArray[31] = new Vector2(1.209090f, 8.299999f); pointsArray[32] = new Vector2(1.118181f, 7.700000f); pointsArray[33] = new Vector2(1.045455f, 6.880000f); pointsArray[34] = new Vector2(1.154545f, 6.100000f); pointsArray[35] = new Vector2(1.281817f, 5.580000f); pointsArray[36] = new Vector2(1.700000f, 5.320000f); pointsArray[37] = new Vector2(2.190908f, 5.199999f); pointsArray[38] = new Vector2(2.900000f, 5.100000f); pointsArray[39] = new Vector2(3.700000f, 5.100000f); pointsArray[40] = new Vector2(4.372727f, 5.220000f); pointsArray[41] = new Vector2(4.827272f, 5.220000f); pointsArray[42] = new Vector2(5.463636f, 5.160000f); pointsArray[43] = new Vector2(5.554545f, 4.700000f); pointsArray[44] = new Vector2(5.245453f, 4.340000f); pointsArray[45] = new Vector2(4.445455f, 4.280000f); pointsArray[46] = new Vector2(3.609091f, 4.260000f); pointsArray[47] = new Vector2(2.718181f, 4.160000f); pointsArray[48] = new Vector2(1.990908f, 4.140000f); pointsArray[49] = new Vector2(1.427272f, 3.980000f); pointsArray[50] = new Vector2(1.609090f, 3.580000f); pointsArray[51] = new Vector2(2.136363f, 3.440000f); pointsArray[52] = new Vector2(3.227272f, 3.280000f); pointsArray[53] = new Vector2(3.972727f, 3.340000f); pointsArray[54] = new Vector2(5.027272f, 3.360000f); pointsArray[55] = new Vector2(5.718181f, 3.460000f); pointsArray[56] = new Vector2(6.100000f, 4.240000f); pointsArray[57] = new Vector2(6.209091f, 4.500000f); pointsArray[58] = new Vector2(6.118181f, 5.320000f); pointsArray[59] = new Vector2(5.772727f, 5.920000f); pointsArray[60] = new Vector2(4.881817f, 6.140000f); pointsArray[61] = new Vector2(5.318181f, 6.580000f); pointsArray[62] = new Vector2(6.263636f, 7.020000f); pointsArray[63] = new Vector2(6.645453f, 7.420000f); pointsArray[64] = new Vector2(6.681817f, 8.179999f); pointsArray[65] = new 
Vector2(6.627272f, 9.080000f); pointsArray[66] = new Vector2(6.572727f, 9.699999f); pointsArray[67] = new Vector2(6.263636f, 10.820000f); pointsArray[68] = new Vector2(5.754546f, 11.479999f); pointsArray[69] = new Vector2(4.536363f, 11.599998f); pointsArray[70] = new Vector2(3.572727f, 11.700000f); pointsArray[71] = new Vector2(2.809090f, 11.660000f); pointsArray[72] = new Vector2(1.445455f, 11.559999f); pointsArray[73] = new Vector2(0.936363f, 11.280000f); pointsArray[74] = new Vector2(0.754545f, 10.879999f); pointsArray[75] = new Vector2(0.700000f, 9.939999f); pointsArray[76] = new Vector2(0.918181f, 9.620000f); pointsArray[77] = new Vector2(1.463636f, 9.600000f); return pointsArray; } Disclaimer: My math is very rusty, so please explain in lay mans terms....

    Read the article

  • Alternatives to multiple sprite batches for achieving 2D particle system depth

    - by Ergwun
    In my 2D XNA game, I render all my sprites with a single sprite batch using SpriteSortMode.BackToFront and BlendState.AlphaBlend. I'm adding a particle system based on the App Hub particles sample. Since this uses SpriteSortMode.Deferred and BlendState.Additive, I will need to have two SpriteBatch.Begin / SpriteBatch.End pairs: one for 'regular' sprites, and one for particles. In my top-down shooter, if I want to have explosions appear under planes, but above the ground, then I believe I will have to have three Begin/End pairs: first to draw everything under the explosions, then to draw the explosions, then to draw everything above the explosions. If I want to have particle effects at multiple different depths, then I'm going to need even more Begin/End pairs. This is all easy to code, but I'm wondering if there is an alternative way to handle this?

    Read the article

  • How to do directional per fragment lighting in world space?

    - by user
    I am attempting to create a GLSL shader for a simple, per-fragment directional light. So far, after following many tutorials, I have continually run into the same issue: my light is specified in world coordinates; however, the shader treats the light's position as being in eye space, and thus the light direction changes when I move the camera. My question is: how do I transform a directional light position such as (50, 50, 50, 0) into eye space, or would doing things this way be the incorrect approach to the problem?
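
    A common answer (sketched here with a plain column-major 4x4 array so it stays library-neutral; the matrix layout is an assumption to match against your math library): a directional light is a direction, not a position, so transform it with w = 0, i.e. by the rotational part of the view matrix only, each frame before uploading the uniform. Keeping the light in world space and doing the lighting in world space is equally valid; the key is that the light, normals and positions all end up in the same space.

    ```cpp
    #include <cmath>

    // Multiply the upper-left 3x3 of a column-major 4x4 view matrix by a direction (w = 0).
    void directionToEyeSpace(const float view[16], const float worldDir[3], float eyeDir[3]) {
        for (int row = 0; row < 3; ++row)
            eyeDir[row] = view[0 + row] * worldDir[0]
                        + view[4 + row] * worldDir[1]
                        + view[8 + row] * worldDir[2];

        // Normalize, since the shader expects a unit light direction.
        float len = std::sqrt(eyeDir[0]*eyeDir[0] + eyeDir[1]*eyeDir[1] + eyeDir[2]*eyeDir[2]);
        for (int i = 0; i < 3; ++i) eyeDir[i] /= len;
    }
    ```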

    Read the article

  • How to convert from wav or mp3 to raw PCM [on hold]

    - by Komyg
    I am developing a game using Cocos2d-X and Marmalade SDK, and I am looking for any recommendations of programs that can convert audio files in mp3 or wav format to raw PCM 16 format. The problem is that I am using the SimpleAudioEngine class to play sounds in my game and in Marmalade it only supports files that are encoded as raw PCM 16. Unfortunately I've been having a very hard time finding a program that can do this type of conversion, so I am looking for a recommendation.

    Read the article
