Search Results

Search found 19338 results on 774 pages for 'game loop'.


  • Draw multiple objects with textures

    - by Simplex
    I want to draw cubes using textures.

        void OperateWithMainMatrix(ESContext* esContext, GLfloat offsetX, GLfloat offsetY, GLfloat offsetZ) {
            UserData *userData = (UserData*) esContext->userData;
            ESMatrix modelview;
            ESMatrix perspective;
            // Manipulation with matrix
            ...
            glVertexAttribPointer(userData->positionLoc, 3, GL_FLOAT, GL_FALSE, 0, cubeFaces); // cubeFaces holds the cube's vertex coordinates
            glVertexAttribPointer(userData->normalLoc, 3, GL_FLOAT, GL_FALSE, 0, cubeFaces);   // normals (used in the fragment shader for texturing)
            glEnableVertexAttribArray(userData->positionLoc);
            glEnableVertexAttribArray(userData->normalLoc);
            // Load the MVP matrix
            glUniformMatrix4fv(userData->mvpLoc, 1, GL_FALSE, (GLfloat*)&userData->mvpMatrix.m[0][0]);
            // Bind the base map
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_CUBE_MAP, userData->baseMapTexId);
            // Set the base map sampler to texture unit 0
            glUniform1i(userData->baseMapLoc, 0);
            // Draw the cube
            glDrawArrays(GL_TRIANGLES, 0, 36);
        }

    (The coordinate transformation happens in OperateWithMainMatrix().) Then Draw() is called:

        void Draw(ESContext *esContext) {
            UserData *userData = esContext->userData;
            // Set the viewport
            glViewport(0, 0, esContext->width, esContext->height);
            // Clear the color buffer
            glClear(GL_COLOR_BUFFER_BIT);
            // Use the program object
            glUseProgram(userData->programObject);
            OperateWithMainMatrix(esContext, 0.0f, 0.0f, 0.0f);
            eglSwapBuffers(esContext->eglDisplay, esContext->eglSurface);
        }

    This works fine for a single cube, but if I try to draw multiple cubes (for example):

        void Draw(ESContext *esContext) {
            ...
            // Use the program object
            glUseProgram(userData->programObject);
            OperateWithMainMatrix(esContext,  2.0f, 0.0f, 0.0f);
            OperateWithMainMatrix(esContext,  1.0f, 0.0f, 0.0f);
            OperateWithMainMatrix(esContext,  0.0f, 0.0f, 0.0f);
            OperateWithMainMatrix(esContext, -1.0f, 0.0f, 0.0f);
            OperateWithMainMatrix(esContext, -2.0f, 0.0f, 0.0f);
            eglSwapBuffers(esContext->eglDisplay, esContext->eglSurface);
        }

    then the side faces overlap the front faces: the side face of the right cube overlaps the front face of the center cube. How can I remove this effect and display multiple cubes correctly?
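
    A hedged diagnosis: faces drawing over one another regardless of distance is the classic symptom of a missing depth test - the code above clears only the color buffer and never enables GL_DEPTH_TEST. A minimal sketch of the likely fix, assuming the EGL config was created with a depth buffer (e.g. EGL_DEPTH_SIZE of 16 or more):

        #include <GLES2/gl2.h>

        /* Once, after context creation (assumes the EGL config includes a depth buffer). */
        void setupDepth(void)
        {
            glEnable(GL_DEPTH_TEST);   /* reject fragments behind already-drawn geometry */
            glDepthFunc(GL_LEQUAL);
        }

        /* Every frame, clear depth together with color before drawing the cubes. */
        void beginFrame(void)
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        }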

    Read the article

  • Pathfinding in multi goal, multi agent environment

    - by Rohan Agrawal
    I have an environment with multiple agents (a), multiple goals (g) and obstacles (o):

        . . . a o . . .
        . . . . o . g .
        . a . . . . . .
        . . . . o . . .
        . o o o o . g .
        . o . . . . . .
        . o . . . . o .
        . . . o o o o a

    What would be an appropriate pathfinding algorithm for this environment? The only thing I can think of right now is to run a separate A* search for each goal, but I don't think that's very efficient.
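
    One standard alternative to running A* once per goal, sketched below under the assumption of a uniform-cost grid: flood outward from all goals at once (multi-source BFS) to build a single distance field, then let every agent greedily step to the neighbouring cell with the smallest distance. One flood serves all agents and all goals. All names here are illustrative.

        #include <queue>
        #include <vector>
        #include <utility>

        // Minimal sketch: multi-source BFS over a uniform-cost grid.
        // 'grid' is true where walkable; 'goals' are (x, y) cells.
        std::vector<int> distanceField(const std::vector<std::vector<bool>>& grid,
                                       const std::vector<std::pair<int,int>>& goals)
        {
            const int h = (int)grid.size(), w = (int)grid[0].size();
            std::vector<int> dist(w * h, -1);             // -1 = unreached
            std::queue<std::pair<int,int>> frontier;
            for (const auto& g : goals) {                 // seed every goal at distance 0
                dist[g.second * w + g.first] = 0;
                frontier.push(g);
            }
            const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
            while (!frontier.empty()) {
                auto [x, y] = frontier.front(); frontier.pop();
                for (int i = 0; i < 4; ++i) {
                    int nx = x + dx[i], ny = y + dy[i];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (!grid[ny][nx] || dist[ny * w + nx] != -1) continue;
                    dist[ny * w + nx] = dist[y * w + x] + 1;
                    frontier.push({nx, ny});
                }
            }
            return dist;  // each agent walks to the neighbour with the smallest value
        }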

    Read the article

  • Physics timestep questions

    - by SSL
    I've got a projectile working perfectly using the code below:

        // initialised in the loading screen; 60 is the FPS.
        // projectilePosition and projectileVelocity are Vector3s.
        gravity = new Vector3(0, -(float)9.81 / 60, 0);

        // called every frame
        projectilePosition += projectileVelocity;

    This seems to work fine, but I've noticed in various projectile examples that the elapsed time per update is taken into account. What's the difference between the two, and how can I convert the above to take the elapsed time into account? (I'm using XNA - do I use ElapsedTime.TotalSeconds or TotalMilliseconds?) Edit: Forgot to add my attempt at using the elapsed time, which seemed to break the physics:

        projectileVelocity.Y += -(float)((9.81 * gameTime.ElapsedGameTime.TotalSeconds * gameTime.ElapsedGameTime.TotalSeconds) * 0.5f);

    Thanks for the help
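
    The difference is that the first version bakes the frame rate into the constants (gravity is pre-divided by 60), so the simulation is only correct at exactly 60 FPS; a time-based version multiplies by the measured elapsed seconds each update and stays consistent at any frame rate. (The broken attempt adds 0.5 * g * dt^2 to the velocity; that term belongs to position, not velocity.) A hedged sketch of semi-implicit Euler, with an illustrative Vector3 type - in XNA, dt would be (float)gameTime.ElapsedGameTime.TotalSeconds:

        // Semi-implicit Euler: update velocity first, then position, both scaled by dt.
        struct Vector3 { float x, y, z; };

        void updateProjectile(Vector3& position, Vector3& velocity, float dt)
        {
            const float g = -9.81f;          // m/s^2, full-strength gravity (not pre-divided)
            velocity.y += g * dt;            // velocity changes by acceleration * time
            position.x += velocity.x * dt;   // position changes by velocity * time
            position.y += velocity.y * dt;
            position.z += velocity.z * dt;
        }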

    Read the article

  • 2D object-aligned bounding-box intersection test

    - by AshleysBrain
    Hi all, I have two object-aligned bounding boxes (i.e. not axis aligned, they rotate with the object). I'd like to know if two object-aligned boxes overlap. (Edit: note - I'm using an axis-aligned bounding box test to quickly discard distant objects, so it doesn't matter if the quad routine is a little slower.) My boxes are stored as four x,y points. I've searched around for answers, but I can't make sense of the variable names and algorithms in examples to apply them to my particular case. Can someone help show me how this would be done, in a clear and simple way? Thanks. (The particular language isn't important, C-style pseudo code is OK.)
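
    For two convex quads stored as four corners each, the separating axis test reduces to: for each edge of either box, project all eight corners onto that edge's normal; if the projection intervals fail to overlap on any axis, the boxes don't intersect. A sketch in C-style code, assuming the corners are stored in consecutive order around each quad:

        #include <cfloat>

        struct Pt { float x, y; };

        // Project the 4 corners of a quad onto axis (ax, ay), returning [mn, mx].
        static void project(const Pt q[4], float ax, float ay, float& mn, float& mx)
        {
            mn = FLT_MAX; mx = -FLT_MAX;
            for (int i = 0; i < 4; ++i) {
                float d = q[i].x * ax + q[i].y * ay;   // dot product = signed distance along axis
                if (d < mn) mn = d;
                if (d > mx) mx = d;
            }
        }

        // True if the oriented quads a and b overlap (corners in consecutive order).
        bool quadsOverlap(const Pt a[4], const Pt b[4])
        {
            const Pt* quads[2] = { a, b };
            for (int q = 0; q < 2; ++q) {
                for (int i = 0; i < 4; ++i) {
                    const Pt& p0 = quads[q][i];
                    const Pt& p1 = quads[q][(i + 1) % 4];
                    float ax = -(p1.y - p0.y), ay = p1.x - p0.x;  // edge normal (unnormalized)
                    float amin, amax, bmin, bmax;
                    project(a, ax, ay, amin, amax);
                    project(b, ax, ay, bmin, bmax);
                    if (amax < bmin || bmax < amin)
                        return false;   // a separating axis exists, so no overlap
                }
            }
            return true;  // no separating axis found
        }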

    Read the article

  • Who owns the intellectual property to Fragile Allegiance?

    - by analytik
    Fragile Allegiance was developed by Gremlin Interactive, which was later acquired by Infogrames (Atari). I couldn't find any details of the acquisition though. The only interesting thing I have found online is that the owner of the registered trademark Fragile Allegiance is Interplay, who published Fragile Allegiance. However, the only copyright note I've found was in one installation .ini file, claiming it for Gremlin. What are the common business practices when it comes to old, unused IPs? What do publishers/developers actually need to legally claim an intellectual property? Does anyone have an experience with contacting big publishers with copyright/IP inquiries? Related legal question.

    Read the article

  • OpenGL Shading Program Object Memory Requirement

    - by Hans Wurst
    gDEbugger states that OpenGL program objects occupy only an insignificant amount of memory. How much is that actually? I don't know whether what I looked up in Mesa is really what I was looking for, but there it requires 16KB [Edit: false - confusing struct names; less than 1KB immediately, plus some more behind pointers] per program object. Not quite insignificant. So is it recommended to create a unique program object for each object in the scene? Or to share a single program object and set each scene object's custom variables just before its draw call?
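
    For context, the common practice is the second option: one program object per material/shader combination, shared by every scene object that uses it, with per-object state passed through uniforms just before each draw call. A hedged sketch (the uniform layout and names are illustrative):

        #include <GLES2/gl2.h>

        // Sketch: bind one shared program once, then redraw N objects by
        // updating only the per-object uniform between draw calls.
        void drawScene(GLuint sharedProgram, GLint mvpLoc,
                       const float* mvpMatrices, /* N column-major 4x4 matrices */
                       int objectCount)
        {
            glUseProgram(sharedProgram);                 // one program for all objects
            for (int i = 0; i < objectCount; ++i) {
                glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvpMatrices + 16 * i);
                glDrawArrays(GL_TRIANGLES, 0, 36);       // geometry assumed bound elsewhere
            }
        }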

    Read the article

  • How to move objects smoothly, like swimming around

    - by philipp
    I have a Box2D project that needs a view where the user looks from the sky down onto water, or perhaps onto a bathtub filled with water or something like that. The object which holds the fluid doesn't actually matter; what matters is the movement of the bodies, because they should move like drops of grease on a soup, or wood on water. I can even imagine the fluid being mercury: extremely heavy and "lazy". How can I manipulate the bodies (every frame, or from time to time) to make them move like this? I started by randomly manipulating their linear velocity, but it turned out that this isn't very smooth and looks quite harsh. Is it a better idea to check their velocity and apply impulses? Is there any example? Greetings philipp
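
    A common way to get that heavy, lazy look, sketched below: give the bodies high linear damping (so motion dies out as in a viscous fluid) and drive them with a force that varies smoothly over time instead of re-randomizing the velocity - for example, a sum of slow sine waves per body as a cheap stand-in for smoothed noise. The magnitudes and phase constants are illustrative; the calls are the Box2D 2.3 API.

        #include <Box2D/Box2D.h>
        #include <cmath>

        // Call once per body at setup.
        void makeLazy(b2Body* body)
        {
            body->SetLinearDamping(3.0f);   // high damping = viscous, "lazy" motion
        }

        // Call every step with the running time t and a per-body phase offset.
        void nudge(b2Body* body, float t, float phase)
        {
            // Smoothly varying force: no sudden velocity jumps, just gentle drift.
            b2Vec2 force(0.5f * std::sin(0.3f * t + phase),
                         0.5f * std::cos(0.2f * t + 1.7f * phase));
            body->ApplyForceToCenter(force, true);  // 'true' wakes the body
        }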

    Read the article

  • 3D BSP rendering for maps made in 2D platform style

    - by Dev Joy
    I wish to render a 3D map which is always seen from the top; the camera is in the sky, always looking down at the earth. Sample of a floor layout: I don't think I need complex structures like BSP trees to render it. I mean, I can divide the map into a grid and render the cells the way 2D platform games do. I just want to know if this is a good idea, and what may go wrong if I don't choose BSP-tree rendering here. Please also mention if any better-known rendering techniques are available for such situations.
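
    For a fixed top-down camera, the grid approach described above amounts to computing which tile range the camera's view rectangle covers and drawing only those tiles, exactly as a 2D platformer culls offscreen tiles. A sketch under that assumption (all names illustrative):

        #include <algorithm>

        void drawTile(int x, int y);   // assumed: renders the static geometry of tile (x, y)

        // Sketch: draw only the tiles inside the camera's ground-plane rectangle.
        // tileSize is in world units.
        void drawVisibleTiles(float camLeft, float camRight, float camBottom, float camTop,
                              float tileSize, int mapW, int mapH)
        {
            int x0 = std::max(0,        (int)(camLeft   / tileSize));
            int x1 = std::min(mapW - 1, (int)(camRight  / tileSize));
            int y0 = std::max(0,        (int)(camBottom / tileSize));
            int y1 = std::min(mapH - 1, (int)(camTop    / tileSize));
            for (int y = y0; y <= y1; ++y)
                for (int x = x0; x <= x1; ++x)
                    drawTile(x, y);
        }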

    Read the article

  • Triangle Strips and Tangent Space Normal Mapping

    - by Koarl
    Short: do triangle strips and tangent-space normal mapping go together? According to quite a lot of tutorials on bump mapping, it seems common practice to derive tangent-space matrices in a vertex program, transform the light direction vector(s) to tangent space, and then pass them on to a fragment program. However, if one were using triangle strips or index buffers, it is a given that the vertex buffer contains vertices that sit at border edges and would thus require more than one normal to derive the tangent-space matrices interpolated in fragment programs. Is there any reasonable way to avoid duplicate vertices in your buffer and still use tangent-space normal mapping? Which do you think is better: having normals and tangents encoded in the assets and just optimizing the geometry handling to alleviate the cost of duplicate vertices, or using triangle strips and computing normals/tangents completely at run time? Thinking about it, the more reasonable answer seems to be the first one, but why might my professor still be fussing about triangle strips when it seems so obvious?

    Read the article

  • Can't get LWJGL lighting to work

    - by Zarkonnen
    I'm trying to enable lighting in LWJGL following the method described by NeHe and this post. However, no matter what I try, all faces of my shapes always receive the same amount of light, and in the case of a spinning shape, that amount oscillates with the rotation. Concrete example below (apologies for the length) - note how all panels are always the same brightness, but the brightness varies with the pyramid's rotation. This is using LWJGL 2.8.3 on Mac OS X.

        package com;

        import com.zarkonnen.lwjgltest.Main;
        import org.lwjgl.opengl.Display;
        import org.lwjgl.opengl.DisplayMode;
        import org.lwjgl.opengl.GL11;
        import org.newdawn.slick.opengl.Texture;
        import org.newdawn.slick.opengl.TextureLoader;
        import org.lwjgl.util.glu.*;
        import org.lwjgl.input.Keyboard;
        import java.nio.FloatBuffer;
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        /**
         * @author penguin
         */
        public class main {

            public static void main(String[] args) {
                try {
                    Display.setDisplayMode(new DisplayMode(800, 600));
                    Display.setTitle("3D Pyramid");
                    Display.create();
                } catch (Exception e) {
                }
                initGL();
                float rtri = 0.0f;
                Texture texture = null;
                try {
                    texture = TextureLoader.getTexture("png", Main.class.getResourceAsStream("tex.png"));
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
                while (!Display.isCloseRequested()) {
                    // Draw a Triangle :D
                    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
                    GL11.glLoadIdentity();
                    GL11.glTranslatef(0.0f, 0.0f, -10.0f);
                    GL11.glRotatef(rtri, 0.0f, 1.0f, 0.0f);
                    texture.bind();
                    GL11.glBegin(GL11.GL_TRIANGLES);
                    GL11.glTexCoord2f(0.0f, 1.0f);   GL11.glVertex3f(0.0f, 1.0f, 0.0f);
                    GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, 1.0f);
                    GL11.glTexCoord2f(1.0f, -1.0f);  GL11.glVertex3f(1.0f, -1.0f, 1.0f);
                    GL11.glTexCoord2f(0.0f, 1.0f);   GL11.glVertex3f(0.0f, 1.0f, 0.0f);
                    GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(1.0f, -1.0f, 1.0f);
                    GL11.glTexCoord2f(1.0f, -1.0f);  GL11.glVertex3f(1.0f, -1.0f, -1.0f);
                    GL11.glTexCoord2f(0.0f, 1.0f);   GL11.glVertex3f(0.0f, 1.0f, 0.0f);
                    GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, -1.0f);
                    GL11.glTexCoord2f(1.0f, -1.0f);  GL11.glVertex3f(1.0f, -1.0f, -1.0f);
                    GL11.glTexCoord2f(0.0f, 1.0f);   GL11.glVertex3f(0.0f, 1.0f, 0.0f);
                    GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, -1.0f);
                    GL11.glTexCoord2f(1.0f, -1.0f);  GL11.glVertex3f(-1.0f, -1.0f, 1.0f);
                    GL11.glEnd();
                    GL11.glBegin(GL11.GL_QUADS);
                    GL11.glVertex3f(1.0f, -1.0f, 1.0f);
                    GL11.glVertex3f(1.0f, -1.0f, -1.0f);
                    GL11.glVertex3f(-1.0f, -1.0f, -1.0f);
                    GL11.glVertex3f(-1.0f, -1.0f, 1.0f);
                    GL11.glEnd();
                    Display.update();
                    rtri += 0.05f;
                    // Exit-Key = ESC
                    boolean exitPressed = Keyboard.isKeyDown(Keyboard.KEY_ESCAPE);
                    if (exitPressed) {
                        System.out.println("Escape was pressed!");
                        Display.destroy();
                    }
                }
                Display.destroy();
            }

            private static void initGL() {
                GL11.glEnable(GL11.GL_LIGHTING);
                GL11.glMatrixMode(GL11.GL_PROJECTION);
                GL11.glLoadIdentity();
                GLU.gluPerspective(45.0f, ((float) 800) / ((float) 600), 0.1f, 100.0f);
                GL11.glMatrixMode(GL11.GL_MODELVIEW);
                GL11.glLoadIdentity();
                GL11.glEnable(GL11.GL_TEXTURE_2D);
                GL11.glShadeModel(GL11.GL_SMOOTH);
                GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
                GL11.glClearDepth(1.0f);
                GL11.glEnable(GL11.GL_DEPTH_TEST);
                GL11.glDepthFunc(GL11.GL_LEQUAL);
                GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST);
                float lightAmbient[] = {0.5f, 0.5f, 0.5f, 1.0f};   // Ambient Light Values
                float lightDiffuse[] = {1.0f, 1.0f, 1.0f, 1.0f};   // Diffuse Light Values
                float lightPosition[] = {0.0f, 0.0f, 2.0f, 1.0f};  // Light Position
                ByteBuffer temp = ByteBuffer.allocateDirect(16);
                temp.order(ByteOrder.nativeOrder());
                GL11.glLight(GL11.GL_LIGHT1, GL11.GL_AMBIENT, (FloatBuffer) temp.asFloatBuffer().put(lightAmbient).flip());   // Setup The Ambient Light
                GL11.glLight(GL11.GL_LIGHT1, GL11.GL_DIFFUSE, (FloatBuffer) temp.asFloatBuffer().put(lightDiffuse).flip());   // Setup The Diffuse Light
                GL11.glLight(GL11.GL_LIGHT1, GL11.GL_POSITION, (FloatBuffer) temp.asFloatBuffer().put(lightPosition).flip()); // Position The Light
                GL11.glEnable(GL11.GL_LIGHT1);  // Enable Light One
            }
        }
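
    A hedged diagnosis: the pyramid code never calls glNormal, so every vertex uses OpenGL's default normal (0, 0, 1), which gets transformed along with the rotating modelview matrix - exactly the uniform, oscillating brightness described. The fix is to supply a face normal before each face's vertices. Sketch shown with C-style GL calls (each maps directly to GL11.glNormal3f etc. in LWJGL); the normal value is for the front face and is illustrative:

        #include <GL/gl.h>

        // Sketch: emit one pyramid face with its own face normal so lighting
        // can distinguish the faces. Compute each face's normal as
        // normalize(cross(v1 - v0, v2 - v0)) for its vertices v0, v1, v2.
        void frontFace(void)
        {
            glNormal3f(0.0f, 0.4472f, 0.8944f);  /* constant across the face, pre-normalized */
            glTexCoord2f(0.5f, 1.0f); glVertex3f( 0.0f,  1.0f,  0.0f);
            glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f);
            glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f,  1.0f);
        }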

    Read the article

  • Unable to create Xcode project again from Unity

    - by Engr Anum
    I am using Unity 4.5.3. I had already created/built an Xcode project from Unity, but there were some strange linking errors I was unable to solve, so I decided to delete the Xcode project and rebuild it from Unity. Unfortunately, whenever I try to build the project now, only an empty project folder is created; there is nothing inside it. I don't know why this is happening. Please tell me how I can create the Xcode project again; maybe I am missing something small. Thanks.

    Read the article

  • Mesh with quads to triangle mesh

    - by scape
    I want to use Blender for making models, yet I realize some of the polygons are not triangles but quads or larger n-gons (example: a cylinder's top and bottom caps). I could export the mesh as a basic mesh file, import it into an OpenGL application, and work out rendering the quads as tris, but anything with more than 4 vertex indices is beyond me. Is it typical to convert the mesh to a triangle-based mesh inside Blender before exporting it? I actually tried this through the quads_convert_to_tris method in a Blender Python script, and the top of the cylinder does not look symmetrical. What is typically done to render a loaded mesh as tris?
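
    For what it's worth, triangulation is also easy at load time: any convex polygon with n vertices splits into n - 2 triangles as a fan around its first vertex, which covers quads and small n-gons alike. A sketch, assuming faces arrive as lists of vertex indices:

        #include <vector>

        // Sketch: fan-triangulate one convex face given its vertex indices.
        // A quad {a, b, c, d} becomes triangles {a, b, c} and {a, c, d}.
        void triangulateFace(const std::vector<int>& face, std::vector<int>& triIndices)
        {
            for (size_t i = 1; i + 1 < face.size(); ++i) {
                triIndices.push_back(face[0]);
                triIndices.push_back(face[i]);
                triIndices.push_back(face[i + 1]);
            }
        }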

    Read the article

  • Painting with pixel shaders

    - by Gustavo Maciel
    I have an almost complete understanding of how 2D lighting works. I saw this post and was tempted to try implementing it in HLSL. I planned to paint each of the layers with shaders and then combine them by drawing one on top of another, or just pass the 3 textures to the shader and find a better way to combine them. It works almost as planned, but I have one question about the approach. I'm drawing each layer this way:

        GraphicsDevice.SetRenderTarget(lighting);
        GraphicsDevice.Clear(Color.Transparent);
        //... Setup shader
        SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp,
                          DepthStencilState.None, RasterizerState.CullNone, lightingShader);
        SpriteBatch.Draw(texture, fullscreen, Color.White);
        SpriteBatch.End();

        GraphicsDevice.SetRenderTarget(darkMask);
        GraphicsDevice.Clear(Color.Transparent);
        //... Setup shader
        SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp,
                          DepthStencilState.None, RasterizerState.CullNone, darkMaskShader);
        SpriteBatch.Draw(texture, fullscreen, Color.White);
        SpriteBatch.End();

    Here lightingShader and darkMaskShader are shaders that, given parameters (view and projection matrices, light position, color, range, etc.), generate a texture meant to be that layer. It works fine, but I'm not sure drawing a transparent quad on top of a transparent render target is the best way of doing it, since I actually only need the position and parameters. To conclude: can I paint a texture with shaders without having to clear it and then draw a transparent texture on top of it?

    Read the article

  • Why does glGetString return a NULL string

    - by snape
    I am trying my hand at the GLFW library. I have written a basic program to get the OpenGL renderer and vendor string. Here is the code:

        #include <GL/glew.h>
        #include <GL/glfw.h>
        #include <cstdio>
        #include <cstdlib>
        #include <string>

        using namespace std;

        void shutDown(int returnCode)
        {
            printf("There was an error in running the code with error %d\n", returnCode);
            GLenum res = glGetError();
            const GLubyte *errString = gluErrorString(res);
            printf("Error is %s\n", errString);
            glfwTerminate();
            exit(returnCode);
        }

        int main()
        {
            // start GL context and O/S window using GLFW helper library
            if (glfwInit() != GL_TRUE)
                shutDown(1);
            if (glfwOpenWindow(0, 0, 0, 0, 0, 0, 0, 0, GLFW_WINDOW) != GL_TRUE)
                shutDown(2);

            // start GLEW extension handler
            glewInit();

            // get version info
            const GLubyte* renderer = glGetString(GL_RENDERER); // get renderer string
            const GLubyte* version = glGetString(GL_VERSION);   // version as a string
            printf("Renderer: %s\n", renderer);
            printf("OpenGL version supported %s\n", version);

            // close GL context and any other GLFW resources
            glfwTerminate();
            return 0;
        }

    I googled this error and found out that the OpenGL context has to be initialized before calling glGetString(). Although I have initialized it using glfwInit(), the function still returns a NULL string. Any ideas? Edit: I have updated the code with error-checking mechanisms. When run, it outputs the following:

        There was an error in running the code with error 2
        Error is no error
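
    Worth noting: glfwInit() alone does not create an OpenGL context; the context only exists once glfwOpenWindow() succeeds, and the error output above ("error 2") shows it is the window creation that failed, so glGetString() is never reached with a valid context. A hedged sketch using explicit GLFW 2.x mode parameters instead of all zeros (the bit depths are illustrative):

        #include <GL/glfw.h>
        #include <cstdio>

        int main()
        {
            if (glfwInit() != GL_TRUE) {
                std::fprintf(stderr, "glfwInit failed\n");
                return 1;
            }
            // Ask for an explicit mode: 640x480, 8-bit RGBA, 24-bit depth, no stencil.
            if (glfwOpenWindow(640, 480, 8, 8, 8, 8, 24, 0, GLFW_WINDOW) != GL_TRUE) {
                std::fprintf(stderr, "glfwOpenWindow failed - no GL context exists\n");
                glfwTerminate();
                return 2;
            }
            // Only from here on is glGetString() valid.
            std::printf("Renderer: %s\n", (const char*)glGetString(GL_RENDERER));
            glfwTerminate();
            return 0;
        }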

    Read the article

  • convert orientation vec3 to a rotation matrix

    - by lapin
    I've got a normalized vec3 that represents an orientation. Each frame of animation, an object's orientation changes slightly, so I add a delta vector to the orientation vector and then normalize to find the new orientation. I'd like to convert the vec3 that represents an orientation into a rotation matrix that I can use to orient my object. If it helps, my object is a cone, and I'd like to rotate it about the pointy end, not from its center :) PS I know I should use quaternions because of the gimbal lock problem. If someone can explain quats too, that'd be great :)
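
    A direction vector fixes only two of three rotational degrees of freedom, so building a rotation matrix needs an extra convention - typically a world "up" reference: derive an orthonormal basis with two cross products and use the three axes as the matrix columns. A sketch (the degenerate case where the direction is nearly parallel to up is handled by switching reference vectors):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static Vec3 cross(Vec3 a, Vec3 b) {
            return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
        }
        static Vec3 normalize(Vec3 v) {
            float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
            return { v.x/len, v.y/len, v.z/len };
        }

        // Sketch: fill a 3x3 matrix whose columns are right, up, forward.
        // Column-major storage: out[col*3 + row]. 'dir' must be normalized.
        void orientationToMatrix(Vec3 dir, float out[9])
        {
            Vec3 ref   = (std::fabs(dir.y) < 0.99f) ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
            Vec3 right = normalize(cross(ref, dir));
            Vec3 up    = cross(dir, right);          // already unit length
            out[0] = right.x; out[1] = right.y; out[2] = right.z;  // column 0
            out[3] = up.x;    out[4] = up.y;    out[5] = up.z;     // column 1
            out[6] = dir.x;   out[7] = dir.y;   out[8] = dir.z;    // column 2 = forward
        }

    Rotating about the pointy end rather than the center is then a matter of translating the tip to the origin, applying this rotation, and translating back.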

    Read the article

  • How can I test if my rotated rectangle intersects a corner?

    - by Raven Dreamer
    I have a square, tile-based collision map. To check if one of my (square) entities is colliding, I get the vertices of its 4 corners and test those 4 points against the collision map. If none of those points are intersecting, I know it's safe to move to the new position. I'd like to allow entities to rotate. I can still calculate the 4 corners of the square, but once you factor in rotation, those 4 corners alone don't seem to be enough information to determine whether the entity is trying to move to a valid state. For example: in the picture below, black is unwalkable terrain, and red is the player's hitbox. The left scenario is allowed because the 4 corners of the red square are not in the black terrain. The right scenario would also be (incorrectly) allowed, because the player, cleverly turned at a 45° angle, has its corners in valid spaces, even though it is (quite literally) cutting the corner. How can I detect scenarios similar to the situation on the right?
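
    One simple extension that catches the cut-corner case, sketched below: instead of testing only the 4 corners, also test points sampled along each edge at intervals no larger than the tile size, so no blocked tile the square crosses can slip between test points. Names are illustrative; a full rotated-box-versus-tile SAT test is the heavier alternative.

        #include <cmath>

        struct P { float x, y; };

        bool tileBlockedAt(float x, float y);   // assumed: queries the tile collision map

        // Sketch: walk each edge of the rotated square corner-to-corner,
        // sampling every 'step' units (step <= tile size guarantees coverage).
        bool overlapsTerrain(const P corners[4], float step)
        {
            for (int i = 0; i < 4; ++i) {
                P a = corners[i], b = corners[(i + 1) % 4];
                float len = std::hypot(b.x - a.x, b.y - a.y);
                int samples = (int)std::ceil(len / step);
                if (samples < 1) samples = 1;
                for (int s = 0; s <= samples; ++s) {
                    float t = (float)s / samples;
                    if (tileBlockedAt(a.x + t * (b.x - a.x), a.y + t * (b.y - a.y)))
                        return true;
                }
            }
            return false;   // note: a tile entirely inside the square still needs a center test
        }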

    Read the article

  • Unity-Animation parameters are not being set

    - by user1814893
    I have the following animation controller, with two parameters, walkingSpeed and Jump. I have the following code, which should change the values:

        animator.SetFloat("walkingSpeed", 0.9f);
        animator.SetBool("Jump", true);

    and animator is the correctly referenced Animator object. However, the values the parameters are set to do not appear to change in the Animator window, nor do they appear to affect what is happening on the screen. They do, however, seem to affect the values obtained when doing the following:

        animator.GetFloat("walkingSpeed");

    The animator consists of the shown blend tree, which works correctly and is always active; however, because the values are not being changed, it does not blend, and always acts as if walkingSpeed were 0. What is going on?

    Read the article

  • Lighting-Reflectance Models & Licensing Issues

    - by codey
    Generally, or specifically, is there any licensing issue with using any of the well-known lighting/reflectance models (i.e. the BRDFs or other distribution or approximation functions): Phong, Blinn–Phong, Cook–Torrance, Blinn-Torrance-Sparrow, Lambert, Minnaert, Oren–Nayar, Ward, Strauss, Ashikhmin-Shirley, and common modifications where applicable (Beckmann distribution, Blinn distribution, Schlick's approximation, etc.), in shader code utilised in a commercial product? Or is it a non-issue?

    Read the article

  • SDL_image & OpenGL Problem

    - by Dylan
    I've been following tutorials online to load textures using SDL and display them on an OpenGL quad, but I've been getting weird results that no one else on the internet seems to be getting. When I render the texture in OpenGL I get something like this: http://www.kiddiescissors.com/after.png while the original .bmp file is this: http://www.kiddiescissors.com/before.bmp I've tried other images too, so it's not that this particular image is corrupt. It seems like my RGB channels are all jumbled. I'm pulling my hair out at this point. Here's the relevant code from my init() function:

        if (SDL_Init(SDL_INIT_VIDEO) != 0) {
            return 1;
        }
        SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
        SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
        SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
        SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
        SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
        SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
        SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);
        SDL_SetVideoMode(WINDOW_WIDTH, WINDOW_HEIGHT, 32, SDL_HWSURFACE | SDL_GL_DOUBLEBUFFER | SDL_OPENGL);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(50, (GLfloat)WINDOW_WIDTH/WINDOW_HEIGHT, 1, 50);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glEnable(GL_DEPTH_TEST);
        glEnable(GL_MULTISAMPLE);
        glEnable(GL_TEXTURE_2D);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_BLEND);

    Here's the code that is called when my main player object (the one with which this sprite is associated) is initialized:

        texture = 0;
        SDL_Surface* surface = IMG_Load("i.bmp");
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, surface->w, surface->h, 0, GL_RGB, GL_UNSIGNED_BYTE, surface->pixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        SDL_FreeSurface(surface);

    And here's the relevant code from my display function:

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        glColor4f(1, 1, 1, 1);
        glPushMatrix();
        glBindTexture(GL_TEXTURE_2D, texture);
        glTranslatef(getCenter().x, getCenter().y, 0);
        glRotatef(getAngle()*(180/M_PI), 0, 0, 1);
        glTranslatef(-getCenter().x, -getCenter().y, 0);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex3f(getTopLeft().x, getTopLeft().y, 0);
        glTexCoord2f(0, 1); glVertex3f(getTopLeft().x, getTopLeft().y + size.y, 0);
        glTexCoord2f(1, 1); glVertex3f(getTopLeft().x + size.x, getTopLeft().y + size.y, 0);
        glTexCoord2f(1, 0); glVertex3f(getTopLeft().x + size.x, getTopLeft().y, 0);
        glEnd();
        glPopMatrix();

    Let me know if I left out anything important, or if you need more info from me. Thanks a ton, -Dylan
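
    A hedged diagnosis: on little-endian machines SDL typically loads BMPs with the channels in BGR order, so telling glTexImage2D the source data is GL_RGB swaps red and blue - exactly the jumbled-channel look described. A sketch that picks the source format from the surface instead of assuming it (assumes desktop OpenGL 1.2 or newer for GL_BGR/GL_BGRA):

        #include <SDL/SDL.h>
        #include <SDL/SDL_image.h>
        #include <GL/gl.h>

        // Sketch: upload an SDL surface without assuming its channel order.
        GLuint uploadTexture(SDL_Surface* surface)
        {
            GLenum fmt;
            if (surface->format->BytesPerPixel == 4)
                fmt = (surface->format->Rmask == 0x000000ff) ? GL_RGBA : GL_BGRA;
            else
                fmt = (surface->format->Rmask == 0x000000ff) ? GL_RGB : GL_BGR;
            GLint internal = (surface->format->BytesPerPixel == 4) ? GL_RGBA : GL_RGB;

            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows may not be 4-byte aligned
            glTexImage2D(GL_TEXTURE_2D, 0, internal, surface->w, surface->h, 0,
                         fmt, GL_UNSIGNED_BYTE, surface->pixels);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            return tex;
        }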

    Read the article

  • Having trouble compiling UnrealScript with defaultproperties

    - by Enchanter
    I have just started scripting in UnrealScript and am following the instructions in the opening chapter of a book on the subject. I have set up the development tools as described in the book, so that it is now possible to compile scripts for UDK before adding them to a level. My issue: the very first script the book asks you to compile is the following:

        class AwesomeActor extends Actor
            placeable;

        defaultproperties
        {
            Begin Object Class=SpriteComponent Name = Sprite
                Sprite = Texture2D'EditorResources.S_NavP'
            End Object
            Components.Add(Sprite)
        }

    And when I hit F9 in the compiler I get the following error message: Error, BEGIN OBJECT: Must specify valid name for suboject/copmponent: Begin Object class=SpriteComponent name = Sprite My question: as far as I'm aware, I've copied the code from the book verbatim, including upper- and lower-case letters, but I'm getting this error. Does anyone know why, and if so, how I would fix it?

    Read the article

  • Looking for literature about graphics pipeline optimization

    - by zacharmarz
    I am looking for books, articles or tutorials about graphics architecture and graphics pipeline optimization. They shouldn't be too old (2008 or newer); the newer, the better. I have found something in [Optimising the Graphics Pipeline, NVIDIA, Koji Ashida] (too old), [Real-Time Rendering, Akenine-Möller], [OpenGL Bindless Extensions, NVIDIA, Jeff Bolz], [Efficient Multifragment Effects on Graphics Processing Units, Louis Frederic Bavoil], and some internet discussions. But that is not much information, and I want to read more. It should cover communication and data transfers between the application, the driver, memory, and the shader units; vertices and attributes; and the pre- and post-T&L caches (if they still exist in today's architectures), etc. I don't need anything about textures, frame buffers, or rasterization. It can also be about OpenGL (not DirectX) and optimization extensions (not old ones like VBOs, but newer ones like vertex_buffer_unified_memory).

    Read the article

  • importing animations in Blender, weird rotations/locations

    - by user975135
    This is for the Blender 2.6 API. There are two problems:

    1. When I import a single animation frame from my animation file into Blender, all bones look fine. But when I import multiple frames (all of them), only the first one looks right; newer frames seem to be affected by older ones, so you get slightly-off positions/rotations. This is true both when assigning PoseBone.matrix and when assigning PoseBone.matrix_basis.

        bone_index = 0
        # for each frame:
        for frame_index in range(frame_count):
            # for each pose bone: add a key
            for bone_name in bone_names:  # "bone_names" - a list of bone names I got earlier
                # "animation_matrices" - a nested list of matrices generated from reading a file
                pose.bones[bone_name].matrix = animation_matrices[frame_index][bone_index]
                # create the 'keys' for the Action from the poses
                pose.bones[bone_name].keyframe_insert('location', frame=frame_index + 1)
                pose.bones[bone_name].keyframe_insert('rotation_euler', frame=frame_index + 1)
                pose.bones[bone_name].keyframe_insert('scale', frame=frame_index + 1)
                bone_index += 1
            bone_index = 0

    Again, it seems like previous frames are affecting later ones, because if I import a single frame from the middle of the animation, it looks fine.

    2. I can't assign armature-space animation matrices read from a file to a skeleton with hierarchy (parenting). In Blender 2.4 you could just assign them to PoseBone.poseMatrix and bones would deform perfectly whether the bones had a hierarchy or none at all. In Blender 2.6 there are PoseBone.matrix_basis and PoseBone.matrix. While matrix_basis is relative to the parent bone, matrix isn't; the API says it's in object space. So it should have worked, but it doesn't. I guess we need to calculate a local-space matrix from the armature-space animation matrices in the files, so I tried multiplying PoseBone.matrix by PoseBone.parent.matrix.inverted(), in both possible orders, with no luck: still weird deformations.

    Read the article

  • 2D SAT How to find collision center or point or area?

    - by Felipe Cypriano
    I've just implemented collision detection using SAT, with this article as a reference for my implementation. The detection works as expected, but I need to know where the two rectangles are colliding: I need to find the center of the intersection, the black point in the image above. I've found some articles about this, but they all involve resolving the overlap or some kind of velocity, which I don't need. I just need to put an image on top of the collision point - like two crashed cars with a crash image placed over the collision. Any ideas? Update: the information I have about the rectangles is the four points that represent each of them (the upper-right, upper-left, lower-right and lower-left coordinates). I'm trying to find an algorithm that can give me the intersection of these points.
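
    One way to get that point, sketched under the assumption that the corners are stored in counter-clockwise order: clip one rectangle against the other (Sutherland-Hodgman), which yields the overlap polygon, then average its vertices to place the marker.

        #include <vector>

        struct V { float x, y; };

        // Sketch: clip 'poly' against the half-plane on the inside of edge a->b
        // (polygons assumed counter-clockwise). Standard Sutherland-Hodgman step.
        static std::vector<V> clipEdge(const std::vector<V>& poly, V a, V b)
        {
            std::vector<V> out;
            auto inside = [&](V p) {
                return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x) >= 0;
            };
            for (size_t i = 0; i < poly.size(); ++i) {
                V cur = poly[i], prev = poly[(i + poly.size() - 1) % poly.size()];
                bool cin = inside(cur), pin = inside(prev);
                if (cin != pin) {   // edge crosses the clip line: add intersection point
                    float dx = cur.x - prev.x, dy = cur.y - prev.y;
                    float t = ((b.y - a.y) * (prev.x - a.x) - (b.x - a.x) * (prev.y - a.y)) /
                              ((b.x - a.x) * dy - (b.y - a.y) * dx);
                    out.push_back({ prev.x + t * dx, prev.y + t * dy });
                }
                if (cin) out.push_back(cur);
            }
            return out;
        }

        // Intersection center: clip A by each edge of B, then average the result's vertices.
        bool collisionCenter(const V A[4], const V B[4], V& center)
        {
            std::vector<V> poly(A, A + 4);
            for (int i = 0; i < 4 && !poly.empty(); ++i)
                poly = clipEdge(poly, B[i], B[(i + 1) % 4]);
            if (poly.empty()) return false;           // no overlap
            center = {0, 0};
            for (V& p : poly) { center.x += p.x; center.y += p.y; }
            center.x /= poly.size(); center.y /= poly.size();
            return true;    // vertex average, not the true area centroid,
                            // but close enough for placing a crash sprite
        }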

    Read the article

  • Rotating a sprite and shooting bullets from the end of a cannon

    - by Alberto
    Hi all, I have a problem in my AndEngine code. When I touch the screen, I need to shoot a bullet from the cannon, in the same direction the cannon is pointing. The cannon rotates perfectly, but when I touch the screen the bullet is not created at the end of the barrel. This is my code:

        private void shootProjectile(final float pX, final float pY) {
            int offX = (int) (pX - canon.getSceneCenterCoordinates()[0]);
            int offY = (int) (pY - canon.getSceneCenterCoordinates()[1]);
            if (offX <= 0) return;
            if (offY >= 0) return;

            double X = canon.getX() + canon.getWidth() * 0.5;
            double Y = canon.getY() + canon.getHeight() * 0.5;

            final Sprite projectile;
            projectile = new Sprite((float) X, (float) Y, mProjectileTextureRegion, this.getVertexBufferObjectManager());
            mMainScene.attachChild(projectile);

            int realX = (int) (mCamera.getWidth() + projectile.getWidth() / 2.0f);
            float ratio = (float) offY / (float) offX;
            int realY = (int) ((realX * ratio) + projectile.getY());

            int offRealX = (int) (realX - projectile.getX());
            int offRealY = (int) (realY - projectile.getY());
            float length = (float) Math.sqrt((offRealX * offRealX) + (offRealY * offRealY));
            float velocity = (float) 480.0f / 1.0f;
            float realMoveDuration = length / velocity;

            MoveModifier modifier = new MoveModifier(realMoveDuration, projectile.getX(), realX, projectile.getY(), realY);
            projectile.registerEntityModifier(modifier);
        }

        @Override
        public boolean onSceneTouchEvent(Scene pScene, TouchEvent pSceneTouchEvent) {
            if (pSceneTouchEvent.getAction() == TouchEvent.ACTION_MOVE) {
                double dx = pSceneTouchEvent.getX() - canon.getSceneCenterCoordinates()[0];
                double dy = pSceneTouchEvent.getY() - canon.getSceneCenterCoordinates()[1];
                double Radius = Math.atan2(dy, dx);
                double Angle = Radius * 180 / Math.PI;
                canon.setRotation((float) Angle);
                return true;
            } else if (pSceneTouchEvent.getAction() == TouchEvent.ACTION_DOWN) {
                final float touchX = pSceneTouchEvent.getX();
                final float touchY = pSceneTouchEvent.getY();
                double dx = pSceneTouchEvent.getX() - canon.getSceneCenterCoordinates()[0];
                double dy = pSceneTouchEvent.getY() - canon.getSceneCenterCoordinates()[1];
                double Radius = Math.atan2(dy, dx);
                double Angle = Radius * 180 / Math.PI;
                canon.setRotation((float) Angle);
                shootProjectile(touchX, touchY);
            }
            return false;
        }

    Does anyone know how to calculate the (X, Y) coordinates of the end of the barrel, so I can spawn the bullet there?
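
    The missing piece is projecting the barrel length along the cannon's current angle: the muzzle sits at the sprite's rotation center plus (cos θ, sin θ) times the barrel length; since setRotation takes degrees, convert to radians first. A sketch of the math in C-style code (names illustrative):

        #include <cmath>

        // Sketch: world-space position of the barrel tip for a sprite rotated
        // about its center by angleDegrees (the same angle passed to setRotation).
        void muzzlePosition(float centerX, float centerY, float angleDegrees,
                            float barrelLength, float& outX, float& outY)
        {
            float rad = angleDegrees * 3.14159265f / 180.0f;
            outX = centerX + std::cos(rad) * barrelLength;   // along the barrel direction
            outY = centerY + std::sin(rad) * barrelLength;
        }

    Spawn the projectile centered on that point (i.e. subtract half the projectile's width and height from its top-left position), and reuse the same angle for its flight direction.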

    Read the article

  • Unity3D: default parameters in C# script

    - by Heisenbug
    According to this thread, it seems that default parameters aren't supported by C# scripts in the Unity3D environment. Declaring an optional parameter in a C# script makes the Mono IDE complain about it:

        void Foo(int first, int second = 10) // this line is marked as wrong inside the Mono IDE

    However, if I ignore Mono's error message and run the script in Unity, it works without reporting any error in the Unity Console. Could anyone clarify this issue a little? In particular: are default parameters allowed inside C# scripts? If yes, are they supported on all platforms? And why does Mono complain about them if they actually work?

    Read the article
