Search Results

Search found 24037 results on 962 pages for 'game design'.

Page 567/962 | < Previous Page | 563 564 565 566 567 568 569 570 571 572 573 574  | Next Page >

  • Jumping over non-stationary objects without problems ... 2-D platformer ... how could this be solved? [on hold]

    - by help bonafide pigeons
    You know this problem ... take Super Mario Bros. for example. When Mario approaches a pipe, an invisible boundary has to stop his forward movement. When you jump, though, you are moving in both x and y at the same time, and while gravity "pulls" you back down you do not want the character to get "stuck" between falling and moving forward. That part is solved, but here is the harder case: the player controls a ball that jumps and moves rightwards over a non-stationary block that moves up and down. How should we compare the ball's position against the block's top and side extents to decide whether the ball should fall back down, catch the ledge, get pushed down underneath it, and so on?
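
    One way to approach this, sketched below in Java (Ball, Block and every field name are invented for illustration, not taken from any engine): treat the ball as a circle, find the closest point on the block's rectangle to the ball's center, and push the ball out along that direction. Whether the ball lands on top, catches the ledge or gets shoved underneath falls out of which axis has the smaller penetration, and because the block moves, the check simply runs after both objects have been integrated each frame.

        // Minimal sketch, y grows upward. Resolves a ball (circle) against a moving block (AABB).
        final class Ball  { double x, y, vx, vy, radius; }
        final class Block { double x, y, width, height, vy; } // x,y = bottom-left corner; vy = vertical speed

        final class BallBlockResolver {
            static void resolve(Ball b, Block k) {
                // Closest point on the block to the ball's center.
                double closestX = Math.max(k.x, Math.min(b.x, k.x + k.width));
                double closestY = Math.max(k.y, Math.min(b.y, k.y + k.height));
                double dx = b.x - closestX, dy = b.y - closestY;
                double distSq = dx * dx + dy * dy;
                if (distSq > b.radius * b.radius) return;              // no contact this frame

                double dist = Math.sqrt(distSq);
                if (dist > 1e-9) {                                     // ball's center is outside the block
                    double push = b.radius - dist;
                    b.x += (dx / dist) * push;                         // separate along the contact normal
                    b.y += (dy / dist) * push;
                    if (Math.abs(dy) > Math.abs(dx)) {                 // vertical contact: top or bottom face
                        b.vy = (dy > 0) ? Math.max(b.vy, k.vy)         // standing on the block: ride it
                                        : Math.min(b.vy, k.vy);        // bumped from below: pushed down
                    } else {
                        b.vx = 0;                                      // side contact: stop horizontal motion
                    }
                } else {                                               // center ended up inside: use shallowest face
                    double left = b.x - k.x, right = k.x + k.width - b.x;
                    double down = b.y - k.y, up = k.y + k.height - b.y;
                    double min = Math.min(Math.min(left, right), Math.min(down, up));
                    if (min == up)        { b.y = k.y + k.height + b.radius; b.vy = Math.max(b.vy, k.vy); }
                    else if (min == down) { b.y = k.y - b.radius;            b.vy = Math.min(b.vy, k.vy); }
                    else if (min == left) { b.x = k.x - b.radius;            b.vx = 0; }
                    else                  { b.x = k.x + k.width + b.radius;  b.vx = 0; }
                }
            }
        }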

    Read the article

  • How to determine which cells in a grid intersect with a given triangle?

    - by Ray Dey
    I'm currently writing a 2D AI simulation, but I'm not completely certain how to check whether the position of an agent is within the field of view of another. Currently, my world partitioning is simple cell-space partitioning (a grid). I want to use a triangle to represent the field of view, but how can I calculate the cells that intersect with the triangle? Similar to this picture: The red areas are the cells I want to calculate, by checking whether the triangle intersects those cells. Thanks in advance. EDIT: Just to add to the confusion (or perhaps even make it easier). Each cell has a min and max vector where the min is the bottom left corner and the max is the top right corner.
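
    A separating-axis test against each candidate cell is one reasonable way to do this; the Java sketch below (class and method names are invented) only visits the cells inside the triangle's bounding box and keeps those whose axis-aligned box overlaps the triangle. Since each of your cells already stores a min and max vector, the per-cell test maps directly onto those corners.

        import java.util.ArrayList;
        import java.util.List;

        final class TriangleGridQuery {
            // All cells of size cellSize whose AABB intersects triangle (ax,ay)-(bx,by)-(cx,cy).
            static List<int[]> cellsIntersecting(double ax, double ay, double bx, double by,
                                                 double cx, double cy,
                                                 double cellSize, int cols, int rows) {
                List<int[]> hits = new ArrayList<>();
                int minCol = Math.max(0, (int) Math.floor(Math.min(ax, Math.min(bx, cx)) / cellSize));
                int maxCol = Math.min(cols - 1, (int) Math.floor(Math.max(ax, Math.max(bx, cx)) / cellSize));
                int minRow = Math.max(0, (int) Math.floor(Math.min(ay, Math.min(by, cy)) / cellSize));
                int maxRow = Math.min(rows - 1, (int) Math.floor(Math.max(ay, Math.max(by, cy)) / cellSize));
                for (int col = minCol; col <= maxCol; col++)
                    for (int row = minRow; row <= maxRow; row++)
                        if (overlaps(ax, ay, bx, by, cx, cy, col * cellSize, row * cellSize, cellSize))
                            hits.add(new int[]{col, row});
                return hits;
            }

            // 2D separating-axis test: box axes (x, y) plus the three triangle edge normals.
            private static boolean overlaps(double ax, double ay, double bx, double by,
                                            double cx, double cy,
                                            double minX, double minY, double size) {
                double maxX = minX + size, maxY = minY + size;
                double[][] axes = {
                    {1, 0}, {0, 1},
                    {-(by - ay), bx - ax}, {-(cy - by), cx - bx}, {-(ay - cy), ax - cx}
                };
                for (double[] n : axes) {
                    double t0 = ax * n[0] + ay * n[1];
                    double t1 = bx * n[0] + by * n[1];
                    double t2 = cx * n[0] + cy * n[1];
                    double triMin = Math.min(t0, Math.min(t1, t2));
                    double triMax = Math.max(t0, Math.max(t1, t2));
                    double boxMin = n[0] * (n[0] > 0 ? minX : maxX) + n[1] * (n[1] > 0 ? minY : maxY);
                    double boxMax = n[0] * (n[0] > 0 ? maxX : minX) + n[1] * (n[1] > 0 ? maxY : minY);
                    if (triMax < boxMin || boxMax < triMin) return false;   // found a separating axis
                }
                return true;                                                // no separating axis: they overlap
            }
        }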

    Read the article

  • I don't understand why one of my VBOs is overwritten by another

    - by Alays
    To create a VBO I use this function:

        public void loadVBO() {
            vboID = GL15.glGenBuffers();
            GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboID);
            GL15.glBufferData(GL15.GL_ARRAY_BUFFER, buf, GL15.GL_STATIC_DRAW);
            // Put the position coordinates in attribute list 0
            GL20.glVertexAttribPointer(0, 4, GL11.GL_FLOAT, false, 4*4+4*4+4*4+2*4, 0);
            // Put the color components in attribute list 1
            GL20.glVertexAttribPointer(1, 4, GL11.GL_FLOAT, false, 4*4+4*4+4*4+2*4, 4*4);
            GL20.glVertexAttribPointer(2, 4, GL11.GL_FLOAT, false, 4*4+4*4+4*4+2*4, 4*4+4*4);
            // Put the texture coordinates in attribute list 2
            GL20.glVertexAttribPointer(3, 4, GL11.GL_FLOAT, false, 4*4+4*4+4*4+2*4, 4*4+4*4+4*4);
            GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
        }

    To display a VBO I use this function:

        public void displayVBO() {
            GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, vboID);
            GL20.glEnableVertexAttribArray(0);
            GL20.glEnableVertexAttribArray(1);
            GL20.glEnableVertexAttribArray(2);
            GL20.glEnableVertexAttribArray(3);
            GL11.glDrawArrays(GL_TRIANGLES, 0, buf.capacity());
            GL20.glDisableVertexAttribArray(0);
            GL20.glDisableVertexAttribArray(1);
            GL20.glDisableVertexAttribArray(2);
            GL20.glDisableVertexAttribArray(3);
            GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
        }

    So when I call map.loadVBO() and then ocean.loadVBO(), I think the second call overwrites the first VBO, though I don't know how. When I call map.display() and ocean.display(), the ocean is drawn twice. Thanks.
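
    What is most likely happening is that glVertexAttribPointer records state globally (or into the currently bound vertex array object), so the second loadVBO call re-points attributes 0-3 at the ocean's buffer and both display calls then read from it; the display code also binds GL_ELEMENT_ARRAY_BUFFER, which glDrawArrays ignores, and never re-specifies the pointers. A minimal sketch of one fix in the same LWJGL style, giving each mesh its own vertex array object so its buffer and pointers are restored at draw time (vertexCount and the 2-component texture coordinate are assumptions based on the stride):

        import java.nio.FloatBuffer;
        import org.lwjgl.opengl.GL11;
        import org.lwjgl.opengl.GL15;
        import org.lwjgl.opengl.GL20;
        import org.lwjgl.opengl.GL30;

        public class Mesh {
            private static final int STRIDE = 4*4 + 4*4 + 4*4 + 2*4;   // bytes per vertex
            private final FloatBuffer buf;
            private final int vertexCount;
            private int vaoID, vboID;

            public Mesh(FloatBuffer buf, int vertexCount) {
                this.buf = buf;
                this.vertexCount = vertexCount;
            }

            public void loadVBO() {
                vaoID = GL30.glGenVertexArrays();
                GL30.glBindVertexArray(vaoID);               // pointers below are recorded in this VAO
                vboID = GL15.glGenBuffers();
                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboID);
                GL15.glBufferData(GL15.GL_ARRAY_BUFFER, buf, GL15.GL_STATIC_DRAW);
                GL20.glVertexAttribPointer(0, 4, GL11.GL_FLOAT, false, STRIDE, 0);                  // position
                GL20.glVertexAttribPointer(1, 4, GL11.GL_FLOAT, false, STRIDE, 4*4);                // color
                GL20.glVertexAttribPointer(2, 4, GL11.GL_FLOAT, false, STRIDE, 4*4 + 4*4);          // third 4-float attribute
                GL20.glVertexAttribPointer(3, 2, GL11.GL_FLOAT, false, STRIDE, 4*4 + 4*4 + 4*4);    // texture coordinates
                GL30.glBindVertexArray(0);
                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
            }

            public void displayVBO() {
                GL30.glBindVertexArray(vaoID);               // restores this mesh's buffer and pointers
                for (int i = 0; i <= 3; i++) GL20.glEnableVertexAttribArray(i);
                GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, vertexCount);
                for (int i = 0; i <= 3; i++) GL20.glDisableVertexAttribArray(i);
                GL30.glBindVertexArray(0);
            }
        }

    Without VAOs, the equivalent fix is to re-bind the mesh's GL_ARRAY_BUFFER and repeat the four glVertexAttribPointer calls inside displayVBO before drawing.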

    Read the article

  • My sprite moves off the screen

    - by IlNero
    If an action moves the sprite, how can I keep the CCSprite on the screen? This is my code:

        [enemy runAction:[CCSequence actions:
            [CCMoveBy actionWithDuration:2.0 position:ccp(-winSize.width*0.4, 0)],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(winSize.width*0.2, -winSize.width*0.2), randomValueBetween(winSize.height*0.2, -winSize.height*0.2))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(winSize.width*0.2, -winSize.width*0.2), randomValueBetween(winSize.height*0.2, -winSize.height*0.2))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(winSize.width*0.2, -winSize.width*0.2), randomValueBetween(winSize.height*0.2, -winSize.height*0.2))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(winSize.width*0.2, -winSize.width*0.2), randomValueBetween(winSize.height*0.2, -winSize.height*0.2))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(winSize.width*0.2, -winSize.width*0.2), randomValueBetween(winSize.height*0.2, -winSize.height*0.2))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(-winSize.width*0.3, winSize.width*0.3), randomValueBetween(winSize.height*0.3, -winSize.height*0.3))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(-winSize.width*0.2, winSize.width*0.2), randomValueBetween(winSize.height*0.2, -winSize.height*0.2))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(-winSize.width*0.3, winSize.width*0.3), randomValueBetween(winSize.height*0.3, -winSize.height*0.3))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(-winSize.width*0.2, winSize.width*0.2), randomValueBetween(winSize.height*0.2, -winSize.height*0.2))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(-winSize.width*0.3, winSize.width*0.3), randomValueBetween(winSize.height*0.3, -winSize.height*0.3))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:randomValueBetween(1.0, 0.3) position:ccp(randomValueBetween(-winSize.width*0.2, winSize.width*0.2), randomValueBetween(winSize.height*0.2, -winSize.height*0.2))],
            [CCDelayTime actionWithDuration:0.5],
            [CCMoveBy actionWithDuration:2.0 position:ccp(-winSize.width*1.5, 0)],
            [CCCallFuncN actionWithTarget:self selector:@selector(invisNode:)],
            nil]];

    But with this code the sprite sometimes moves off the screen; I need the sprite to move randomly within the screen without leaving it.
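
    A common way out, independent of cocos2d, is to pick an absolute destination, clamp it to the visible area, and only then convert it into the relative offset that a move-by action wants; since the whole CCSequence is built up front, the running position has to be accumulated while the steps are generated. A small Java sketch of the clamping step (all names are made up for illustration):

        import java.util.Random;

        final class ScreenClampedMove {
            private static final Random RNG = new Random();

            // Returns {dx, dy}: a random relative step that keeps the whole sprite on screen.
            static float[] randomStep(float spriteX, float spriteY, float halfW, float halfH,
                                      float winW, float winH, float maxStep) {
                float targetX = spriteX + (RNG.nextFloat() * 2f - 1f) * maxStep;   // tentative absolute target
                float targetY = spriteY + (RNG.nextFloat() * 2f - 1f) * maxStep;
                targetX = Math.max(halfW, Math.min(winW - halfW, targetX));        // clamp inside the screen
                targetY = Math.max(halfH, Math.min(winH - halfH, targetY));
                return new float[]{targetX - spriteX, targetY - spriteY};          // back to a relative move
            }
        }

    Feeding each CCMoveBy with an offset produced this way, and updating spriteX/spriteY by that offset before generating the next one, guarantees the sequence can never leave the screen.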

    Read the article

  • What is the standard technique for shifting the frames of a sprite according to user input?

    - by virtual__
    From my own experience, I developed two techniques for changing the sprites of a character that's reacting to user input -- this in the context of a classic 2D platformer. The first one is to store all character's pixmaps in a list, putting the index of the currently used pixmap in an ordinary variable. This way, every time the player presses a key -- say the right arrow for moving the character forward -- the graphics engine sees what's the next pixmap to draw, draws it, and increments the index counter. That's a pretty common approach I believe, the problem is that in this case the animation's quality depends not only on the number of sprites available but also on how often your engine listens to user input. The second technique is to actually play an animation every key press event. For this you can use any sort of animation framework you want. It's only necessary to set the timer, the animation steps and to call the animation's play() method on your key press event handler. The problem with that approach is that is lacks responsiveness, since the character won't react to any input while the current animation is still being played. What I want to know is whether you are using one of these techniques -- or something similar -- in your games, or whether there's a standard method for animating sprites out there that's widely known by everybody but me.
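
    A common middle ground between the two techniques is to make the frame a pure function of time spent in the current animation state and let input only switch the state: the character stays responsive because nothing blocks on the animation, and smoothness no longer depends on how often input is polled. A minimal Java sketch of that idea (all names invented):

        // Frame selection driven by elapsed time rather than by counting key presses.
        final class SpriteAnimation {
            enum State { IDLE, RUN, JUMP }

            private State state = State.IDLE;
            private double stateTime = 0.0;                  // seconds spent in the current state
            private final double frameDuration = 0.1;        // 10 animation frames per second
            private final int[] framesPerState = {2, 6, 4};  // frame counts for IDLE, RUN, JUMP

            // Called from the input handler; re-requesting the same state does not restart it.
            void setState(State next) {
                if (next != state) { state = next; stateTime = 0.0; }
            }

            // Called once per game update with the frame's delta time in seconds.
            void update(double dt) { stateTime += dt; }

            // Index of the sprite-sheet frame to draw for the current state.
            int currentFrame() {
                int count = framesPerState[state.ordinal()];
                return (int) (stateTime / frameDuration) % count;
            }
        }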

    Read the article

  • How do I make my character slide down high-angled slopes?

    - by keinabel
    I am currently working on my character's movement in Unity3D. I managed to make him move relative to the mouse cursor. I set a slope limit of 45°, which stops the character from walking up anything steeper, but he can still jump up such slopes. How do I make him slide back down again when he jumps onto a slope that is too steep? Thanks in advance. Edit: code snippet of my basic movement.

        using UnityEngine;
        using System.Collections;

        public class BasicMovement : MonoBehaviour
        {
            private float speed;
            private float jumpSpeed;
            private float gravity;
            private float slopeLimit;
            private Vector3 moveDirection = Vector3.zero;

            void Start()
            {
                PlayerSettings settings = GetComponent<PlayerSettings>();
                speed = settings.GetSpeed();
                jumpSpeed = settings.GetJumpSpeed();
                gravity = settings.GetGravity();
                slopeLimit = settings.GetSlopeLimit();
            }

            void Update()
            {
                CharacterController controller = GetComponent<CharacterController>();
                controller.slopeLimit = slopeLimit;
                if (controller.isGrounded)
                {
                    moveDirection = new Vector3(Input.GetAxis("Horizontal"), 0, Input.GetAxis("Vertical"));
                    moveDirection = transform.TransformDirection(moveDirection);
                    moveDirection *= speed;
                    if (Input.GetButton("Jump"))
                    {
                        moveDirection.y = jumpSpeed;
                    }
                }
                moveDirection.y -= gravity * Time.deltaTime;
                controller.Move(moveDirection * Time.deltaTime);
            }
        }
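
    One common answer is to read the ground normal (from a downward raycast or the controller's collision callback) and, when its angle exceeds the slope limit, add the component of gravity that lies in the slope's plane to the move direction so the character slips back down. The projection itself is plain vector math; here is a small Java sketch of just that step, with the Unity-specific calls left out and all names invented:

        // Project gravity onto the plane of a too-steep slope to get a slide direction.
        // Assumes y is up; n is the unit ground normal, g is the gravity vector, e.g. {0, -9.81, 0}.
        final class SlopeSlide {
            static double[] slideVector(double[] n, double[] g, double slopeLimitDeg) {
                double cosSlope = Math.max(-1.0, Math.min(1.0, n[1]));        // dot(n, up)
                double slopeDeg = Math.toDegrees(Math.acos(cosSlope));
                if (slopeDeg <= slopeLimitDeg) return new double[]{0, 0, 0};  // walkable: no slide
                double gDotN = g[0] * n[0] + g[1] * n[1] + g[2] * n[2];
                // g minus its component along n = the part of gravity pulling along the surface.
                return new double[]{g[0] - gDotN * n[0], g[1] - gDotN * n[1], g[2] - gDotN * n[2]};
            }
        }

    In the Update loop above, adding this slide vector to moveDirection whenever the character stands on a surface steeper than slopeLimit is what makes him slide back down instead of sticking where he landed.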

    Read the article

  • What Java class should I use to represent a Vector?

    - by user8363
    Does Java have a built-in Vector class suitable for handling collision detection / response? It should have methods like subtract(Vector v), normalize(), dotProduct(Vector v), ... It seems logical to use java.awt.Rectangle and java.awt.Polygon to calculate collisions. Would I be right to use these classes for this purpose? I understand collision detection; I'm only wondering what approach to it is idiomatic in Java. I'm new to the language and to application development in general.
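
    The core JDK has java.awt.geom.Point2D but nothing aimed at game math, so projects typically either pull in a small library (javax.vecmath from Java 3D, or something like JOML or libGDX's Vector2) or write a tiny class of their own. A minimal sketch of the latter:

        // Minimal immutable 2D vector with the operations collision code usually needs.
        public final class Vec2 {
            public final double x, y;

            public Vec2(double x, double y) { this.x = x; this.y = y; }

            public Vec2 add(Vec2 o)      { return new Vec2(x + o.x, y + o.y); }
            public Vec2 subtract(Vec2 o) { return new Vec2(x - o.x, y - o.y); }
            public Vec2 scale(double s)  { return new Vec2(x * s, y * s); }
            public double dot(Vec2 o)    { return x * o.x + y * o.y; }
            public double length()       { return Math.sqrt(dot(this)); }

            public Vec2 normalize() {
                double len = length();
                return len == 0 ? this : new Vec2(x / len, y / len);
            }

            // Perpendicular vector, handy for edge normals in separating-axis tests.
            public Vec2 perp() { return new Vec2(-y, x); }
        }

    As for java.awt.Rectangle and java.awt.Polygon: Rectangle.intersects is fine for coarse, integer-coordinate overlap checks, but the AWT shapes give you no collision response information, so they are usually kept to broad-phase tests at most.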

    Read the article

  • Modular building technique with angles? (A roof)

    - by Mungoid
    I've been spending a bit of time lately studying the modular buildings of many games and reading and viewing several tutorials about them, but almost every example I see uses a plain square building without an angled roof or anything similar. In all my applications (CS6, Blender/Max, UDK) I adhere to the same grid spacing and get pretty good results, but trying to make modular angled pieces is confusing me, as I'm not sure of the best way to approach it. Below are some shots of my template sheet and the workflow I have been using. Should I do the roof separately, or is it possible to keep it in the same texture sheet? The main issue is this: I have made a couple of modular roof pieces, but when I try to use them I end up needing to model multiple other parts to fill gaps, depending on what roof shape I want. Once I model those "filler" pieces I have that much less space left in my texture sheet, and those pieces are usually not very reusable for anything else. This is where I'm not sure how to proceed. If anyone has any links to documents or papers on this, or any advice, I would greatly appreciate it! Image captions: my main roof pieces with the gaps; my power-of-2 texture sheet, with 16x16 grid squares; the texture sheet loaded into Blender on a 16x16 plane, starting to separate and extrude.

    Read the article

  • 3D JS map rendering

    - by gotha
    In the past I've done a 2D tile map using HTML, CSS and JavaScript. Now I have the task of creating a 3D version using the same technologies: think of it as a space map where all planets have x/y/z positions. Currently, I have no idea how to do this. Is there an existing library, or something I can modify, to do the job? If not, what method of rendering the map should I use? It needs to be as browser-independent as possible, so I can't use WebGL, Flash or canvas. I'm considering plain JS and HTML, or SVG (using Raphael for compatibility).

    Read the article

  • How to calculate the intersection time and place of multiple moving arcs

    - by user20733
    I have rocks orbiting moons, moons orbiting planets, planets orbiting suns, and suns orbiting black holes, and the current system can have many, many layers of orbits. The position of any object is a function of time and is relative to the object it orbits (so far so good). Now, given two objects A and B, a start time and a travel speed, I want to work out when and where to go. I can work out where A and B are at any given time, so I just need: 1) the direction to travel in from A to B (remember B is moving, and not in a straight line); 2) the time to get to B travelling in a straight line. Travel must be in a straight line along the shortest possible distance. As an extension to this question, how will I know whether it is better to wait? For example, is it faster to stay on object A and wait an hour (when the objects may be closer) than to set off from A to B right away? Cheers, it hurt my brain.
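
    Because B's position is an arbitrary but computable function of time, there is rarely a closed form; the usual approach is a numeric root find on f(t) = |B(t) - A| - speed * t, i.e. "can I be where B will be at time t". A hedged Java sketch, assuming you already have a positionOf(body, time) routine like the one described:

        import java.util.function.BiFunction;

        final class Intercept {
            // Earliest t in (0, tMax] at which a ship leaving pointA now at 'speed' can be where
            // body B is; -1 if no intercept is found. positionOf.apply(body, t) returns {x, y}.
            static double interceptTime(double[] pointA, Object bodyB, double speed, double tMax,
                                        BiFunction<Object, Double, double[]> positionOf) {
                final int steps = 1000;
                double prevT = 0, prevF = miss(pointA, bodyB, speed, 0, positionOf);
                for (int i = 1; i <= steps; i++) {
                    double t = tMax * i / steps;
                    double f = miss(pointA, bodyB, speed, t, positionOf);
                    if (prevF > 0 && f <= 0) {                       // sign change: a root lies in (prevT, t)
                        for (int k = 0; k < 50; k++) {               // refine by bisection
                            double mid = 0.5 * (prevT + t);
                            if (miss(pointA, bodyB, speed, mid, positionOf) > 0) prevT = mid; else t = mid;
                        }
                        return t;
                    }
                    prevT = t;
                    prevF = f;
                }
                return -1;
            }

            // Positive while B is still out of reach at time t, negative once it could be reached.
            private static double miss(double[] a, Object b, double speed, double t,
                                       BiFunction<Object, Double, double[]> positionOf) {
                double[] p = positionOf.apply(b, t);
                return Math.hypot(p[0] - a[0], p[1] - a[1]) - speed * t;
            }
        }

    The travel direction is then simply from A toward B(t*). For the "is it better to wait" extension, evaluate the same function for a few candidate departure delays, add each delay to its travel time, and take the smallest total.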

    Read the article

  • OpenGL Fast-Object Instancing Error

    - by HJ Media Studios
    I have some code that loops through a set of objects and renders instances of those objects. The list of objects that needs to be rendered is stored as a std::map, where an object of class MeshResource contains the vertices and indices with the actual data, and an object of classMeshRenderer defines the point in space the mesh is to be rendered at. My rendering code is as follows: glDisable(GL_BLEND); glEnable(GL_CULL_FACE); glDepthMask(GL_TRUE); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnable(GL_DEPTH_TEST); for (std::map<MeshResource*, std::vector<MeshRenderer*> >::iterator it = renderables.begin(); it != renderables.end(); it++) { it->first->setupBeforeRendering(); cout << "<"; for (unsigned long i =0; i < it->second.size(); i++) { //Pass in an identity matrix to the vertex shader- used here only for debugging purposes; the real code correctly inputs any matrix. uniformizeModelMatrix(Matrix4::IDENTITY); /** * StartHere fix rendering problem. * Ruled out: * Vertex buffers correctly. * Index buffers correctly. * Matrices correct? */ it->first->render(); } it->first->cleanupAfterRendering(); } geometryPassShader->disable(); glDepthMask(GL_FALSE); glDisable(GL_CULL_FACE); glDisable(GL_DEPTH_TEST); The function in MeshResource that handles setting up the uniforms is as follows: void MeshResource::setupBeforeRendering() { glEnableVertexAttribArray(0); glEnableVertexAttribArray(1); glEnableVertexAttribArray(2); glEnableVertexAttribArray(3); glEnableVertexAttribArray(4); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID); glBindBuffer(GL_ARRAY_BUFFER, vboID); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0); // Vertex position glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 12); // Vertex normal glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 24); // UV layer 0 glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 32); // Vertex color glVertexAttribPointer(4, 1, GL_UNSIGNED_SHORT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 44); //Material index } The code that renders the object is this: void MeshResource::render() { glDrawElements(GL_TRIANGLES, geometry->numIndices, GL_UNSIGNED_SHORT, 0); } And the code that cleans up is this: void MeshResource::cleanupAfterRendering() { glDisableVertexAttribArray(0); glDisableVertexAttribArray(1); glDisableVertexAttribArray(2); glDisableVertexAttribArray(3); glDisableVertexAttribArray(4); } The end result of this is that I get a black screen, although the end of my rendering pipeline after the rendering code (essentially just drawing axes and lines on the screen) works properly, so I'm fairly sure it's not an issue with the passing of uniforms. If, however, I change the code slightly so that the rendering code calls the setup immediately before rendering, like so: void MeshResource::render() { setupBeforeRendering(); glDrawElements(GL_TRIANGLES, geometry->numIndices, GL_UNSIGNED_SHORT, 0); } The program works as desired. I don't want to have to do this, though, as my aim is to set up vertex, material, etc. data once per object type and then render each instance updating only the transformation information. The uniformizeModelMatrix works as follows: void RenderManager::uniformizeModelMatrix(Matrix4 matrix) { glBindBuffer(GL_UNIFORM_BUFFER, globalMatrixUBOID); glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(Matrix4), matrix.ptr()); glBindBuffer(GL_UNIFORM_BUFFER, 0); }

    Read the article

  • Estimating costs in a GOAP system

    - by fullwall
    I'm currently developing a GOAP system in Java. An explanation of GOAP can be found at http://web.media.mit.edu/~jorkin/goap.html. Essentially, it's using A* to plot between Actions that mutate the world state. To provide a fair chance for all Actions and Goals to execute, I'm using a heuristic function to estimate the cost of doing something. What is the best way to estimate this cost so that it is comparable to all the other costs? As an example, estimating the cost of running away from an enemy versus attacking it - how should the cost be calculated to be comparable?
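
    One practical answer is to express every heuristic cost in the same physical unit, usually estimated seconds to complete, with an optional risk penalty in the same unit, so fleeing and attacking become directly comparable. A small Java sketch of that idea (names invented, not tied to any particular GOAP implementation):

        // Every action reports an optimistic time estimate plus a risk factor; the planner
        // compares a single number measured in "effective seconds".
        interface GoapAction {
            double estimatedSeconds(WorldState state);   // optimistic time to finish from this state
            double riskFactor(WorldState state);         // 0 = safe, 1 = near-certain to go badly
        }

        final class WorldState { /* whatever facts the planner tracks */ }

        final class CostModel {
            private static final double RISK_WEIGHT = 5.0;   // tunable: seconds charged per unit of risk

            static double cost(GoapAction action, WorldState state) {
                return action.estimatedSeconds(state) + RISK_WEIGHT * action.riskFactor(state);
            }
        }

    Under this scheme, fleeing costs the time needed to reach cover, while attacking costs the expected time to kill plus a risk term for the damage likely taken along the way; tuning RISK_WEIGHT is what biases agents toward caution or aggression.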

    Read the article

  • XNA Framework HiDef profile requires TextureFilter to be Point when using texture format Vector4

    - by danbystrom
    Beginner question. Synopsis: my water effect does something that causes the drawing of my sky sphere to throw an exception when running in full screen. The exception is: XNA Framework HiDef profile requires TextureFilter to be Point when using texture format Vector4. This happens both when I start in full screen directly and when I switch to full screen from windowed mode. It does NOT happen, however, if I comment out the drawing of my water. So what in my water effect could possibly cause the drawing of my sky sphere to choke?

    Read the article

  • Help decide HTML5 library or framework

    - by aoi
    I need a library or framework for small HTML5 content and animation-centric applications. My priority isn't things like physics or networking. I need fast rendering, support for touch events and, most of all, maximum compatibility across platforms, including iOS and Android. I am pondering sprite.js, Crafty and KineticJS, but I can't really test the platform compatibility myself, so can someone tell me which one covers the most platforms, and whether there are any better free alternatives?

    Read the article

  • Why is my primitive XNA square not drawn/shown?

    - by Mech0z
    I have made this class to draw a rectangle, but I cant get it to be drawn, I have no issues displaying a 3d model created in 3dmax, but shown these primitives seems much harder I use this to create it board = new Board(Vector3.Zero, 1000, 1000, Color.Yellow); And here is the implementation using System; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Ink; using System.Windows.Input; using System.Windows.Shapes; using Quadro.Models; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; namespace Quadro { public class Board : IGraphicObject { //Private Fields private Vector3 modelPosition; private BasicEffect effect; private VertexPositionColor[] vertices; private Matrix rotationMatrix; private GraphicsDevice graphicsDevice; private Matrix cameraProjection; //Constructor public Board(Vector3 position, float length, float width, Color color) { var _color = color; vertices = new VertexPositionColor[6]; vertices[0].Position = new Vector3(position.X, position.Y, position.Z); vertices[1].Position = new Vector3(position.X, position.Y + width, position.Z); vertices[2].Position = new Vector3(position.X + length, position.Y, position.Z); vertices[3].Position = new Vector3(position.X + length, position.Y, position.Z); vertices[4].Position = new Vector3(position.X, position.Y + width, position.Z); vertices[5].Position = new Vector3(position.X + length, position.Y + width, position.Z); for(int i = 0; i < vertices.Length; i++) { vertices[i].Color = color; } initFields(); } private void initFields() { graphicsDevice = SharedGraphicsDeviceManager.Current.GraphicsDevice; effect = new BasicEffect(graphicsDevice); modelPosition = Vector3.Zero; float screenWidth = (float)graphicsDevice.Viewport.Width; float screenHeight = (float)graphicsDevice.Viewport.Height; float aspectRatio = screenWidth / screenHeight; this.cameraProjection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f); this.rotationMatrix = Matrix.Identity; } //Public Methods public void Update(GameTimerEventArgs e) { } public void Draw(Vector3 cameraPosition, GameTimerEventArgs e) { Matrix cameraView = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up); foreach (EffectPass pass in effect.CurrentTechnique.Passes) { pass.Apply(); effect.World = rotationMatrix * Matrix.CreateTranslation(modelPosition); effect.View = cameraView; effect.Projection = cameraProjection; graphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, 2, VertexPositionColor.VertexDeclaration); } } public void Rotate(Matrix rotationMatrix) { this.rotationMatrix = rotationMatrix; } public void Move(Vector3 moveVector) { this.modelPosition += moveVector; } } }

    Read the article

  • What's wrong with this OpenGL model picking code?

    - by openglNewbie
    I am making simple model viewer using OpenGL. When I want to pick an object OpenGL returns nothing or an object that is in another place. This is my code: GLuint buff[1024] = {0}; GLint hits,view[4]; glSelectBuffer(1024,buff); glGetIntegerv(GL_VIEWPORT, view); glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity(); gluPickMatrix(x,y,1.0,1.0,view); gluPerspective(45,(float)view[2]/(float)view[4],1.0,1500.0); glMatrixMode(GL_MODELVIEW); glRenderMode(GL_SELECT); glLoadIdentity(); //I make the same transformations for normal render glTranslatef(0, 0, -zoom); glMultMatrixf(transform.M); glInitNames(); glPushName(-1); for(int j=0;j<allNodes.size();j++) { glLoadName(allNodes.at(j)->id); allNodes.at(j)->Draw(textures); } glPopName(); glMatrixMode(GL_PROJECTION); glPopMatrix(); hits = glRenderMode(GL_RENDER);

    Read the article

  • What is involved with writing a lobby server?

    - by Kira
    So I'm writing a Chess matchmaking system based on a Lobby view with gaming rooms, general chat etc. So far I have a working prototype but I have big doubts regarding some things I did with the server. Writing a gaming lobby server is a new programming experience to me and so I don't have a clear nor precise programming model for it. I also couldn't find a paper that describes how it should work. I ordered "Java Network Programming 3rd edition" from Amazon and still waiting for shipment, hopefully I'll find some useful examples/information in this book. Meanwhile, I'd like to gather your opinions and see how you would handle some things so I can learn how to write a server correctly. Here are a few questions off the top of my head: (may be more will come) First, let's define what a server does. It's primary functionality is to hold TCP connections with clients, listen to the events they generate and dispatch them to the other players. But is there more to it than that? Should I use one thread per client? If so, 300 clients = 300 threads. Isn't that too much? What hardware is needed to support that? And how much bandwidth does a lobby consume then approx? What kind of data structure should be used to hold the clients' sockets? How do you protect it from concurrent modification (eg. a player enters or exists the lobby) when iterating through it to dispatch an event without hurting throughput? Is ConcurrentHashMap the correct answer here, or are there some techniques I should know? When a user enters the lobby, what mechanism would you use to transfer the state of the lobby to him? And while this is happening, where do the other events bubble up? Screenshot : http://imageshack.us/photo/my-images/695/sansrewyh.png/
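
    For the data-structure question in particular, a ConcurrentHashMap keyed by player name (or session id) is a common choice: its iteration is weakly consistent, so one thread can broadcast while another adds or removes a player without locking the whole lobby. A hedged Java sketch of that part only (blocking writes are used here for brevity; a real server would put a per-client outgoing queue or an NIO selector behind sendTo so one slow client cannot stall a broadcast, which also answers the one-thread-per-client worry):

        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.Socket;
        import java.nio.charset.StandardCharsets;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        final class Lobby {
            // Weakly consistent iteration: safe to broadcast while players join or leave.
            private final ConcurrentMap<String, Socket> players = new ConcurrentHashMap<>();

            void join(String name, Socket socket) {
                players.put(name, socket);
                broadcast("JOIN " + name);
                sendTo(socket, "STATE " + String.join(",", players.keySet()));  // lobby snapshot for the newcomer
            }

            void leave(String name) {
                Socket socket = players.remove(name);
                if (socket != null) broadcast("LEAVE " + name);
            }

            void broadcast(String line) {
                for (Socket socket : players.values()) sendTo(socket, line);
            }

            private void sendTo(Socket socket, String line) {
                try {
                    OutputStream out = socket.getOutputStream();
                    out.write((line + "\n").getBytes(StandardCharsets.UTF_8));
                    out.flush();
                } catch (IOException ex) {
                    // Treat a failed write as a disconnect; real code would remove the player here.
                }
            }
        }

    Events that arrive while a newcomer's STATE snapshot is being built can simply be queued for that client and replayed once the snapshot has been sent.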

    Read the article

  • Fast lighting with multiple lights

    - by codymanix
    How can I implement fast lighting with multiple lights? I don't want to restrict the player: he can place an unlimited number of (possibly overlapping) point lights in the level. The problem is that shaders containing the dynamic loops needed to calculate the lighting tend to be very slow. My idea is that if the number of lights n were known at compile time, the shader could be compiled n times, once per light count, and the loops could be unrolled automatically. Is it possible to generate n versions of the same shader that differ only in the number of lights? At runtime I could then decide which shader to use for which part of the level.
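
    Yes; generating the variants is usually nothing more than string substitution before compiling: prepend a #define NUM_LIGHTS n line to the shader source, compile once per light count, cache the results, and pick the right program per draw call. A hedged Java sketch of the caching side (compileProgram stands in for whatever GLSL/HLSL compile path the engine already has, and is not a real API call):

        import java.util.HashMap;
        import java.util.Map;

        // One compiled program per light count, so the lighting loop unrolls at compile time.
        final class LightShaderCache {
            private final Map<Integer, Integer> programsByLightCount = new HashMap<>();
            private final String sourceTemplate;   // contains "for (int i = 0; i < NUM_LIGHTS; ++i) ..."

            LightShaderCache(String sourceTemplate) {
                this.sourceTemplate = sourceTemplate;
            }

            int programFor(int lightCount) {
                return programsByLightCount.computeIfAbsent(lightCount, n -> {
                    String source = "#define NUM_LIGHTS " + n + "\n" + sourceTemplate;
                    return compileProgram(source);
                });
            }

            private int compileProgram(String source) {
                // Placeholder: hand the patched source to the engine's existing shader compiler
                // and return its program handle.
                throw new UnsupportedOperationException("wire this up to your shader compiler");
            }
        }

    Two caveats: if the source starts with a #version line, the define has to be inserted after it, and the cache should be capped (or fall back to multiple additive passes) once the light count exceeds what the hardware handles comfortably.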

    Read the article

  • How much time will it take to learn 3ds Max?

    - by Mirror51
    I am not a 3D developer, but I want to learn 3ds Max just for simple house building with two or three rooms. Actually, I don't want to develop from scratch: what I really want to do is take existing models of homes, rooms and hotels from the internet and add my name or my photo to them, just for fun. So I want to know roughly how much time it will take me to do that sort of thing. It's not my career, just a hobby. If it's going to take a long time I don't want to waste the effort, but if I can get going in a week or so, that would be fine. I wanted to ask experienced developers. Thanks.

    Read the article

  • OpenGL VBOs are slower than glDrawArrays.

    - by Arelius
    So, this seems odd to me. I upload a large buffer of vertices, then every frame I call glBindBuffer followed by the appropriate gl*Pointer functions with offsets into the buffer, and then use glDrawArrays to draw all of my triangles. I'm only drawing about 100K triangles, yet I'm getting about 15 FPS. Here is where it gets weird: if I remove the glBindBuffer call and change the gl*Pointer calls to use actual pointers into the array I keep in system memory, then call glDrawArrays the same way, my framerate jumps up to about 50 FPS. Any idea what weird thing I could be doing that would cause this? Did I maybe forget to call glEnable(GL_ALLOW_VBOS_TO_RUN_FAST) or something?

    Read the article

  • 3D collision physics. Response when hitting wall, floor or roof

    - by GlamCasvaluir
    I am having a problem with the most basic physics response when the player collides with a static wall, floor or roof. I have a simple 3D maze, where true means solid and false means air:

        bool bMap[100][100][100];

    The player is a sphere. I have keys for moving x++, x--, y++, y-- and diagonally, at speed 0.1f (0.1 * ftime). The player can also jump, and gravity pulls the player down. Relative movement is saved in relx, rely and relz. One solid cube on the map is exactly 1.0f in width, height and depth. The problem is adjusting the player position when colliding with solids: I don't want it to bounce or anything like that, just stop. But when moving diagonally left/up and hitting a solid above, the player should keep moving left, sliding along the wall. Before moving the player I save the old position:

        oxpos = xpos;
        oypos = ypos;
        ozpos = zpos;
        vec3 direction;
        direction = vec3(relx, rely, relz);
        xpos += direction.x*ftime;
        ypos += direction.y*ftime;
        zpos += direction.z*ftime;
        gx = floor(xpos+0.25);
        gy = floor(ypos+0.25);
        gz = floor(zpos+0.25);
        if (bMap[gx][gy][gz] == true)
        {
            vec3 normal = vec3(0.0, 0.0, 1.0); // <- Problem.
            vec3 invNormal = vec3(-normal.x, -normal.y, -normal.z) * length(direction * normal);
            vec3 wallDir = direction - invNormal;
            xpos = oxpos + wallDir.x;
            ypos = oypos + wallDir.y;
            zpos = ozpos + wallDir.z;
        }

    The problem with my version is that I do not know how to choose the correct normal for the cube side. I only have the bool array to look at, nothing else. One theory I have is to use the old values of gx, gy and gz, but I do not know how to use them to calculate the correct cube-side normal.
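
    One way to sidestep picking a face normal entirely is to move and test one axis at a time: if the X step on its own would land inside a solid cell, only X is cancelled, so a diagonal move keeps sliding along the wall, and the blocked axis plays the role of the normal. A rough Java sketch of the idea against a boolean grid like bMap (all names invented):

        // Axis-separated movement against a voxel grid of unit cubes: each axis is applied and
        // tested independently, so a blocked axis stops while the other axes keep sliding.
        final class GridMover {
            private final boolean[][][] solid;   // solid[x][y][z] == true means a filled cube

            GridMover(boolean[][][] solid) { this.solid = solid; }

            // pos = {x,y,z}, vel = desired movement this frame, radius = player half-size.
            // pos and vel are modified in place; there is no bounce, just a stop on the blocked axis.
            void move(double[] pos, double[] vel, double radius) {
                for (int axis = 0; axis < 3; axis++) {
                    double old = pos[axis];
                    pos[axis] += vel[axis];
                    if (collides(pos, radius)) {
                        pos[axis] = old;      // cancel only the blocked axis
                        vel[axis] = 0;
                    }
                }
            }

            private boolean collides(double[] p, double r) {
                // Sample the 8 corners of the player's bounding cube against the grid.
                for (int dx = -1; dx <= 1; dx += 2)
                    for (int dy = -1; dy <= 1; dy += 2)
                        for (int dz = -1; dz <= 1; dz += 2) {
                            int gx = (int) Math.floor(p[0] + dx * r);
                            int gy = (int) Math.floor(p[1] + dy * r);
                            int gz = (int) Math.floor(p[2] + dz * r);
                            if (inBounds(gx, gy, gz) && solid[gx][gy][gz]) return true;
                        }
                return false;
            }

            private boolean inBounds(int x, int y, int z) {
                return x >= 0 && y >= 0 && z >= 0
                    && x < solid.length && y < solid[0].length && z < solid[0][0].length;
            }
        }

    If you do want an explicit normal instead, the same trick provides it: whichever single axis causes the collision when applied alone determines the face, and its sign gives the normal's direction.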

    Read the article

  • How can I render multiple windows with DirectX 9 in C++?

    - by Friso1990
    I'm trying to render multiple windows, using DirectX 9 and swap chains, but even though I create 2 windows, I only see the first one that I've created. My RendererDX9 header is this: #include <d3d9.h> #include <Windows.h> #include <vector> #include "RAT_Renderer.h" namespace RAT_ENGINE { class RAT_RendererDX9 : public RAT_Renderer { public: RAT_RendererDX9(); ~RAT_RendererDX9(); void Init(RAT_WindowManager* argWMan); void CleanUp(); void ShowWin(); private: LPDIRECT3D9 renderInterface; // Used to create the D3DDevice LPDIRECT3DDEVICE9 renderDevice; // Our rendering device LPDIRECT3DSWAPCHAIN9* swapChain; // Swapchain to make multi-window rendering possible WNDCLASSEX wc; std::vector<HWND> hwindows; void Render(int argI); }; } And my .cpp file is this: #include "RAT_RendererDX9.h" static LRESULT CALLBACK MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam ); namespace RAT_ENGINE { RAT_RendererDX9::RAT_RendererDX9() : renderInterface(NULL), renderDevice(NULL) { } RAT_RendererDX9::~RAT_RendererDX9() { } void RAT_RendererDX9::Init(RAT_WindowManager* argWMan) { wMan = argWMan; // Register the window class WNDCLASSEX windowClass = { sizeof( WNDCLASSEX ), CS_CLASSDC, MsgProc, 0, 0, GetModuleHandle( NULL ), NULL, NULL, NULL, NULL, "foo", NULL }; wc = windowClass; RegisterClassEx( &wc ); for (int i = 0; i< wMan->getWindows().size(); ++i) { HWND hWnd = CreateWindow( "foo", argWMan->getWindow(i)->getName().c_str(), WS_OVERLAPPEDWINDOW, argWMan->getWindow(i)->getX(), argWMan->getWindow(i)->getY(), argWMan->getWindow(i)->getWidth(), argWMan->getWindow(i)->getHeight(), NULL, NULL, wc.hInstance, NULL ); hwindows.push_back(hWnd); } // Create the D3D object, which is needed to create the D3DDevice. renderInterface = (LPDIRECT3D9)Direct3DCreate9( D3D_SDK_VERSION ); // Set up the structure used to create the D3DDevice. Most parameters are // zeroed out. We set Windowed to TRUE, since we want to do D3D in a // window, and then set the SwapEffect to "discard", which is the most // efficient method of presenting the back buffer to the display. And // we request a back buffer format that matches the current desktop display // format. D3DPRESENT_PARAMETERS deviceConfig; ZeroMemory( &deviceConfig, sizeof( deviceConfig ) ); deviceConfig.Windowed = TRUE; deviceConfig.SwapEffect = D3DSWAPEFFECT_DISCARD; deviceConfig.BackBufferFormat = D3DFMT_UNKNOWN; deviceConfig.BackBufferHeight = 1024; deviceConfig.BackBufferWidth = 768; deviceConfig.EnableAutoDepthStencil = TRUE; deviceConfig.AutoDepthStencilFormat = D3DFMT_D16; // Create the Direct3D device. Here we are using the default adapter (most // systems only have one, unless they have multiple graphics hardware cards // installed) and requesting the HAL (which is saying we want the hardware // device rather than a software one). Software vertex processing is // specified since we know it will work on all cards. On cards that support // hardware vertex processing, though, we would see a big performance gain // by specifying hardware vertex processing. 
renderInterface->CreateDevice( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwindows[0], D3DCREATE_SOFTWARE_VERTEXPROCESSING, &deviceConfig, &renderDevice ); this->swapChain = new LPDIRECT3DSWAPCHAIN9[wMan->getWindows().size()]; this->renderDevice->GetSwapChain(0, &swapChain[0]); for (int i = 0; i < wMan->getWindows().size(); ++i) { renderDevice->CreateAdditionalSwapChain(&deviceConfig, &swapChain[i]); } renderDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW); // Set cullmode to counterclockwise culling to save resources renderDevice->SetRenderState(D3DRS_AMBIENT, 0xffffffff); // Turn on ambient lighting renderDevice->SetRenderState(D3DRS_ZENABLE, TRUE); // Turn on the zbuffer } void RAT_RendererDX9::CleanUp() { renderDevice->Release(); renderInterface->Release(); } void RAT_RendererDX9::Render(int argI) { // Clear the backbuffer to a blue color renderDevice->Clear( 0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB( 0, 0, 255 ), 1.0f, 0 ); LPDIRECT3DSURFACE9 backBuffer = NULL; // Set draw target this->swapChain[argI]->GetBackBuffer(0, D3DBACKBUFFER_TYPE_MONO, &backBuffer); this->renderDevice->SetRenderTarget(0, backBuffer); // Begin the scene renderDevice->BeginScene(); // End the scene renderDevice->EndScene(); swapChain[argI]->Present(NULL, NULL, hwindows[argI], NULL, 0); } void RAT_RendererDX9::ShowWin() { for (int i = 0; i < wMan->getWindows().size(); ++i) { ShowWindow( hwindows[i], SW_SHOWDEFAULT ); UpdateWindow( hwindows[i] ); // Enter the message loop MSG msg; while( GetMessage( &msg, NULL, 0, 0 ) ) { if (PeekMessage( &msg, NULL, 0U, 0U, PM_REMOVE ) ) { TranslateMessage( &msg ); DispatchMessage( &msg ); } else { Render(i); } } } } } LRESULT CALLBACK MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam ) { switch( msg ) { case WM_DESTROY: //CleanUp(); PostQuitMessage( 0 ); return 0; case WM_PAINT: //Render(); ValidateRect( hWnd, NULL ); return 0; } return DefWindowProc( hWnd, msg, wParam, lParam ); } I've made a sample function to make multiple windows: void RunSample1() { //Create the window manager. RAT_ENGINE::RAT_WindowManager* wMan = new RAT_ENGINE::RAT_WindowManager(); //Create the render manager. RAT_ENGINE::RAT_RenderManager* rMan = new RAT_ENGINE::RAT_RenderManager(); //Create a window. //This is currently needed to initialize the render manager and create a renderer. wMan->CreateRATWindow("Sample 1 - 1", 10, 20, 640, 480); wMan->CreateRATWindow("Sample 1 - 2", 150, 100, 480, 640); //Initialize the render manager. rMan->Init(wMan); //Show the window. rMan->getRenderer()->ShowWin(); } How do I get the multiple windows to work?

    Read the article

  • Implementing Camera Zoom in a 2D Engine

    - by Luke
    I'm currently trying to implement camera scaling/zoom in my 2D Engine. Normally I calculate the Sprite's drawing size and position similar to this pseudo code: render() { var x = sprite.x; var y = sprite.y; var sizeX = sprite.width * sprite.scaleX; // width of the sprite on the screen var sizeY = sprite.height * sprite.scaleY; // height of the sprite on the screen } To implement the scaling i changed the code to this: class Camera { var scaleX; var scaleY; var zoom; var finalScaleX; // = scaleX * zoom var finalScaleY; // = scaleY * zoom } render() { var x = sprite.x * Camera.finalScaleX; var y = sprite.y * Camera.finalScaleY; var sizeX = sprite.width * sprite.scaleX * Camera.finalScaleX; var sizeY = sprite.height * sprite.scaleY * Camera.finalScaleY; } The problem is that when the zoom is smaller than 1.0 all sprites are moved toward the top-left corner of the screen. This is expected when looking at the code but i want the camera to zoom on the center of the screen. Any tips on how to do that are welcome. :)
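
    The missing piece is that zooming has to happen about the camera's center, not the world origin: transform the sprite into camera space, scale there, then translate to the middle of the screen. The original pseudo code is the special case where the camera center is (0,0) and there is no recentering offset, which is why everything drifts toward the top-left when the zoom is below 1.0. A short Java sketch of the corrected transform (names invented):

        // Converts a world-space position to screen space, zooming about the screen center.
        final class Camera2D {
            double centerX, centerY;      // world point the camera looks at
            double zoom = 1.0;            // 1 = normal, < 1 = zoomed out
            double screenWidth, screenHeight;

            double[] worldToScreen(double worldX, double worldY) {
                double x = (worldX - centerX) * zoom + screenWidth * 0.5;
                double y = (worldY - centerY) * zoom + screenHeight * 0.5;
                return new double[]{x, y};
            }

            // Sprite sizes use the same factor, so objects shrink toward the screen center
            // rather than toward the top-left corner.
            double scaleSize(double size) { return size * zoom; }
        }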

    Read the article

  • XNA- Transforming children

    - by user1806687
    So, I have a Model stored in MyModel, that is made from three meshes. If you loop thrue MyModel.Meshes the first two are children of the third one. And was just wondering, if anyone could tell me where is the problem with my code. This method is called whenever I want to programmaticly change the position of a whole model: public void ChangePosition(Vector3 newPos) { Position = newPos; MyModel.Root.Transform = Matrix.CreateScale(VectorMathHelper.VectorMath(CurrentSize, DefaultSize, '/')) * Matrix.CreateFromAxisAngle(MyModel.Root.Transform.Up, MathHelper.ToRadians(Rotation.Y)) * Matrix.CreateFromAxisAngle(MyModel.Root.Transform.Right, MathHelper.ToRadians(Rotation.X)) * Matrix.CreateFromAxisAngle(MyModel.Root.Transform.Forward, MathHelper.ToRadians(Rotation.Z)) * Matrix.CreateTranslation(Position); Matrix[] transforms = new Matrix[MyModel.Bones.Count]; MyModel.CopyAbsoluteBoneTransformsTo(transforms); int count = transforms.Length - 1; foreach (ModelMesh mesh in MyModel.Meshes) { mesh.ParentBone.Transform = transforms[count]; count--; } } This is the draw method: foreach (ModelMesh mesh in MyModel.Meshes) { foreach (BasicEffect effect in mesh.Effects) { effect.View = camera.view; effect.Projection = camera.projection; effect.World = mesh.ParentBone.Transform; effect.EnableDefaultLighting(); } mesh.Draw(); } The thing is when I call ChangePosition() the first time everything works perfectlly, but as soon as I call it again and again. The first two meshes(children meshes) start to move away from the parent mesh. Another thing I wanted to ask, if I change the scale/rotation/position of a children mesh, and then do CopyAbsoluteBoneTransforms() will children meshes be positioned properlly(at the proper distance) or would achieving that require more math/methods? Thanks in advance

    Read the article

  • Using a Higher Precision (than 8-bit unsigned integer) Buffered Image for Heightmaps in Java

    - by pl12
    I am generating a heightmap for every quad in my quadtree in openCL. The way I was creating the image is as follows: DataBufferInt dataBuffer = (DataBufferInt)img.getRaster().getDataBuffer(); int data[] = dataBuffer.getData(); //img is a bufferedimage inputImageMem = CL.clCreateImage2D( context, CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR, new cl_image_format[]{imageFormat}, size, size, size * Sizeof.cl_uint, Pointer.to(data), null); This works ok but the major issue is that as the quads get smaller and smaller the 8-bit format of the buffered image starts to cause intolerable "stepping" issues as seen below: I was wondering if there was an alternate way I could go about doing this? Thanks for the time.
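
    One way around the 8-bit quantization is to skip BufferedImage's int raster entirely and upload a single-channel floating-point image (CL_R with CL_FLOAT), keeping the heights in a float[] on the Java side. A hedged JOCL-style sketch along the lines of the code above (the surrounding field and method names are assumptions, and clCreateImage2D is used just as in the question):

        import org.jocl.CL;
        import org.jocl.Pointer;
        import org.jocl.Sizeof;
        import org.jocl.cl_context;
        import org.jocl.cl_image_format;
        import org.jocl.cl_mem;

        final class FloatHeightmapUpload {
            // Uploads a size x size heightmap as a 32-bit float, single-channel OpenCL image.
            static cl_mem createHeightImage(cl_context context, float[] heights, int size) {
                cl_image_format format = new cl_image_format();
                format.image_channel_order = CL.CL_R;           // one channel: the height
                format.image_channel_data_type = CL.CL_FLOAT;   // full float precision, no 8-bit steps
                return CL.clCreateImage2D(
                        context,
                        CL.CL_MEM_READ_WRITE | CL.CL_MEM_COPY_HOST_PTR,
                        new cl_image_format[]{format},
                        size, size,
                        size * Sizeof.cl_float,                 // row pitch in bytes
                        Pointer.to(heights),
                        null);
            }
        }

    If something downstream still needs a BufferedImage, TYPE_USHORT_GRAY at least raises the precision to 16 bits, but keeping the data as floats end to end is what actually removes the stepping.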

    Read the article
