Search Results

Search found 21563 results on 863 pages for 'game testing'.

Page 417/863 | < Previous Page | 413 414 415 416 417 418 419 420 421 422 423 424  | Next Page >

  • Creating a voxel chunk with a VBO - How to translate the coordinates of each block and add it to the VBO chunk?

    - by sunsunsunsunsun
    I'm trying to make a voxel engine similar to Minecraft as a little learning experience and a way to learn some OpenGL. I have created a chunk class and I want to put all of the vertices for the whole chunk into a single VBO. I was previously putting each block into its own VBO and making a render call for each block. Anyway, I am a bit confused about how I can translate the coordinates of each block in the chunk when I'm putting all vertices into one VBO. This is what I have at the moment:

        public void putVertices(float tx, float ty, float tz) {
            float l_length = 1.0f;
            float l_height = 1.0f;
            float l_width = 1.0f;
            vertexPositionData.put(new float[]{
                xOffset + l_length + tx,  l_height + ty,  zOffset + -l_width + tz,
                xOffset + -l_length + tx, l_height + ty,  zOffset + -l_width + tz,
                xOffset + -l_length + tx, l_height + ty,  zOffset + l_width + tz,
                xOffset + l_length + tx,  l_height + ty,  zOffset + l_width + tz,
                xOffset + l_length + tx,  -l_height + ty, zOffset + l_width + tz,
                xOffset + -l_length + tx, -l_height + ty, zOffset + l_width + tz,
                xOffset + -l_length + tx, -l_height + ty, zOffset + -l_width + tz,
                xOffset + l_length + tx,  -l_height + ty, zOffset + -l_width + tz,
                xOffset + l_length + tx,  l_height + ty,  zOffset + l_width + tz,
                xOffset + -l_length + tx, l_height + ty,  zOffset + l_width + tz,
                xOffset + -l_length + tx, -l_height + ty, zOffset + l_width + tz,
                xOffset + l_length + tx,  -l_height + ty, zOffset + l_width + tz,
                xOffset + l_length + tx,  -l_height + ty, zOffset + -l_width + tz,
                xOffset + -l_length + tx, -l_height + ty, zOffset + -l_width + tz,
                xOffset + -l_length + tx, l_height + ty,  zOffset + -l_width + tz,
                xOffset + l_length + tx,  l_height + ty,  zOffset + -l_width + tz,
                xOffset + -l_length + tx, l_height + ty,  zOffset + l_width + tz,
                xOffset + -l_length + tx, l_height + ty,  zOffset + -l_width + tz,
                xOffset + -l_length + tx, -l_height + ty, zOffset + -l_width + tz,
                xOffset + -l_length + tx, -l_height + ty, zOffset + l_width + tz,
                xOffset + l_length + tx,  l_height + ty,  zOffset + -l_width + tz,
                xOffset + l_length + tx,  l_height + ty,  zOffset + l_width + tz,
                xOffset + l_length + tx,  -l_height + ty, zOffset + l_width + tz,
                xOffset + l_length + tx,  -l_height + ty, zOffset + -l_width + tz
            });
        }

        public void createChunk() {
            vertexPositionData = BufferUtils.createFloatBuffer((24 * 3) * activateBlocks);
            Random random = new Random();
            for (int x = 0; x < CHUNK_SIZE; x++) {
                for (int y = 0; y < CHUNK_SIZE; y++) {
                    for (int z = 0; z < CHUNK_SIZE; z++) {
                        if (blocks[x][y][z].getActive()) {
                            putVertices(x * 2.0f, y * 2.0f, z * 2.0f);
                        }
                    }
                }
            }
        }

    What's an easy way to translate the vertices of each block into their correct positions? I was previously using glTranslatef with each call to render a block, but that won't work now. What I am doing now also does not work: the blocks all render in stacks on top of each other, and it looks like this: http://i.imgur.com/NyFtBTI.png Thanks
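
    To make the single-VBO idea concrete, here is a minimal sketch in plain Java NIO (no LWJGL helpers; the ChunkMesher name and structure are illustrative) that bakes each block's offset into its vertices as they are appended, so the whole chunk needs one buffer and one draw call. Note the flip() at the end: forgetting to rewind the buffer before glBufferData is a common cause of geometry ending up in the wrong place.

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;

        public final class ChunkMesher {
            private static final int FLOATS_PER_BLOCK = 24 * 3;

            // 'cube' is a template of 24 vertices (24 * 3 floats) for one block
            // at the origin; 'active' flags which blocks in the chunk exist.
            public static FloatBuffer buildChunk(boolean[][][] active, float[] cube, float blockSize) {
                int count = 0;
                for (boolean[][] plane : active)
                    for (boolean[] row : plane)
                        for (boolean b : row)
                            if (b) count++;

                FloatBuffer buf = ByteBuffer
                        .allocateDirect(count * FLOATS_PER_BLOCK * Float.BYTES)
                        .order(ByteOrder.nativeOrder())
                        .asFloatBuffer();

                int size = active.length;
                for (int x = 0; x < size; x++)
                    for (int y = 0; y < size; y++)
                        for (int z = 0; z < size; z++)
                            if (active[x][y][z])
                                for (int i = 0; i < cube.length; i += 3) {
                                    // bake the block's translation into each vertex
                                    buf.put(cube[i]     + x * blockSize);
                                    buf.put(cube[i + 1] + y * blockSize);
                                    buf.put(cube[i + 2] + z * blockSize);
                                }

                buf.flip(); // rewind before handing the data to glBufferData
                return buf;
            }
        }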

    Read the article

  • How to create a copy of an instance without having access to private variables

    - by Jamie
    I'm having a bit of a problem. Let me show you the code first:

        public class Direction {
            private CircularList xSpeed, zSpeed;
            private int[] dirSquare = {-1, 0, 1, 0};

            public Direction(int xSpeed, int zSpeed) {
                this.xSpeed = new CircularList(dirSquare, xSpeed);
                this.zSpeed = new CircularList(dirSquare, zSpeed);
            }

            public Direction(Point dirs) {
                this(dirs.x, dirs.y);
            }

            public void shiftLeft() {
                xSpeed.shiftLeft();
                zSpeed.shiftRight();
            }

            public void shiftRight() {
                xSpeed.shiftRight();
                zSpeed.shiftLeft();
            }

            public int getXSpeed() { return this.xSpeed.currentValue(); }
            public int getZSpeed() { return this.zSpeed.currentValue(); }
        }

    Now let's say I have an instance of Direction:

        Direction dir = new Direction(0, 0);

    As you can see in the code of Direction, the arguments fed to the constructor are passed directly to some other class. One cannot be sure they stay the same, because the methods shiftRight() and shiftLeft() could have been called, which change those numbers. My question is: how do I create a completely new instance of Direction that is basically a copy (not by reference) of dir? The only way I see is to create public methods in both CircularList (I can post the code of this class, but it's not relevant) and Direction that return the variables needed to create a copy of the instance, but this solution seems really dirty, since those numbers are not supposed to be touched after being fed to the constructor, and that is why they are private.
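
    One detail worth knowing here: in Java, private access is class-private, not instance-private, so a copy constructor can read the private fields of another instance of the same class without adding any public accessors. A minimal sketch (the CircularList internals below are assumptions for illustration; the real class only needs its own copy constructor):

        class CircularList {
            private final int[] values;
            private int index;

            CircularList(int[] values, int start) {
                this.values = values.clone();
                this.index = start;
            }

            // Copy constructor: same-class private access is legal.
            CircularList(CircularList other) {
                this.values = other.values.clone();
                this.index = other.index;
            }

            int currentValue() {
                return values[Math.floorMod(index, values.length)];
            }
        }

        public class Direction {
            private final CircularList xSpeed, zSpeed;

            public Direction(CircularList xSpeed, CircularList zSpeed) {
                this.xSpeed = xSpeed;
                this.zSpeed = zSpeed;
            }

            // No getters required: Direction can read other.xSpeed directly.
            public Direction(Direction other) {
                this.xSpeed = new CircularList(other.xSpeed);
                this.zSpeed = new CircularList(other.zSpeed);
            }
        }

    Usage is then simply Direction copy = new Direction(dir); and the copy evolves independently of the original.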

    Read the article

  • Simple 3D Physics engine as a part of graduation project [on hold]

    - by Eugene Kolesnikov
    I am working on my graduation project, and one part of it is to simulate the motion of a rigid body in 3D space. I can use either an already-written physics engine or write one myself. It's quite an interesting challenge for me, so I would like to do it myself. I can use either C++ or Java for programming (I prefer C++). I am using Mac OS X and Debian 7. Could you suggest any guides or tutorials on how to do this? I can't find any anywhere... More precisely, I need a very simple engine, without collision detection and the many other things I do not know about; I just need to calculate the forces and move my body, depending on the resultant force. If you think this task is still very difficult, or that there is no such tutorial, please suggest a good, simple engine.
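
    For a sense of how small the core of such an engine can be, here is a hedged sketch (in Java; the structure carries over to C++ directly) of semi-implicit Euler integration for a single body under a resultant force. All names are illustrative, not from any existing engine.

        // One rigid body, linear motion only, integrated with semi-implicit Euler.
        final class Vec3 {
            double x, y, z;
            Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
            Vec3 addScaled(Vec3 v, double s) {
                return new Vec3(x + v.x * s, y + v.y * s, z + v.z * s);
            }
        }

        final class RigidBody {
            double mass = 1.0;
            Vec3 position = new Vec3(0, 0, 0);
            Vec3 velocity = new Vec3(0, 0, 0);
            Vec3 forceAccum = new Vec3(0, 0, 0); // resultant force this step

            void applyForce(Vec3 f) {
                forceAccum = forceAccum.addScaled(f, 1.0);
            }

            void integrate(double dt) {
                // a = F / m; update velocity first (semi-implicit Euler), then position
                Vec3 accel = new Vec3(forceAccum.x / mass, forceAccum.y / mass, forceAccum.z / mass);
                velocity = velocity.addScaled(accel, dt);
                position = position.addScaled(velocity, dt);
                forceAccum = new Vec3(0, 0, 0); // clear for the next step
            }
        }

    Gravity is then just applyForce(new Vec3(0, -9.81 * mass, 0)) each step; rotation adds an orientation quaternion, an angular velocity and an inertia tensor on top of the same pattern.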

    Read the article

  • Behaviour tree code example?

    - by jokoon
    http://altdevblogaday.org/2011/02/24/introduction-to-behavior-trees/ is obviously the most interesting article I found on this website. What do you think about it? It lacks code examples; do you know of any? I also read that state machines are not very flexible compared to behaviour trees... On top of that, I'm not sure whether there is a true link between state machines and the state pattern... is there?
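
    Since the article lacks code, here is a minimal behaviour-tree sketch in Java (illustrative, not taken from the linked article): leaves and composites all share one tick() contract, and the composites are where the flexibility over plain state machines comes from.

        enum Status { SUCCESS, FAILURE, RUNNING }

        interface Node {
            Status tick();
        }

        // Sequence: succeeds only if every child succeeds, in order.
        final class Sequence implements Node {
            private final Node[] children;
            Sequence(Node... children) { this.children = children; }
            public Status tick() {
                for (Node c : children) {
                    Status s = c.tick();
                    if (s != Status.SUCCESS) return s; // FAILURE or RUNNING bubbles up
                }
                return Status.SUCCESS;
            }
        }

        // Selector: succeeds on the first child that succeeds.
        final class Selector implements Node {
            private final Node[] children;
            Selector(Node... children) { this.children = children; }
            public Status tick() {
                for (Node c : children) {
                    Status s = c.tick();
                    if (s != Status.FAILURE) return s;
                }
                return Status.FAILURE;
            }
        }

    A guard AI might then be new Selector(new Sequence(canSeePlayer, attack), patrol) (those three leaves being hypothetical), and changing the behaviour is just reshaping the tree rather than rewiring state transitions.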

    Read the article

  • What data should be cached in a multiplayer server, relative to AI and players?

    - by DevilWithin
    In a virtual place, fully network driven, with an arbitrary number of players and an arbitrary number of enemies, what data should be cached in the server's memory in order to optimize smooth AI simulation? To explain: let's say player A sees players B to E, and enemies A to G. Each of those players sees player A, but not necessarily each other. The same applies to enemies. Please think of this question from a top-down perspective. In many cases, for example, when a player shoots his gun, the server handles the sound as a radial "signal" that every other entity within reach "hears" and reacts upon. Doing these searches all the time for a whole area, possibly containing a lot of unrelated players and enemies, seems to be an issue when the budget for each AI agent is so small. Should every entity cache whatever enters and exits its radius of awareness? Is there a good way to track the entities close by without flooding the memory with such caches? What about other AI-related problems that may arise, assuming the previous one works well? We're talking about environments with possibly hundreds of enemies, a swarm.
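
    One common building block for the radial-"signal" problem is a uniform grid (spatial hash), so a gunshot only queries nearby cells instead of every entity in the area; an entity's awareness cache then only needs to change when something crosses a cell boundary near it. A hedged 2D sketch in Java (a third index handles 3D; all names are illustrative):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        final class SpatialGrid<E> {
            private final float cellSize;
            private final Map<Long, List<E>> cells = new HashMap<>();

            SpatialGrid(float cellSize) { this.cellSize = cellSize; }

            private long key(long cx, long cy) {
                return (cx << 32) ^ (cy & 0xffffffffL); // pack two cell indices
            }

            void insert(E e, float x, float y) {
                long cx = (long) Math.floor(x / cellSize);
                long cy = (long) Math.floor(y / cellSize);
                cells.computeIfAbsent(key(cx, cy), k -> new ArrayList<>()).add(e);
            }

            // Collect entities in all cells overlapping a circle of the given radius.
            List<E> query(float x, float y, float radius) {
                List<E> out = new ArrayList<>();
                int r = (int) Math.ceil(radius / cellSize);
                long cx = (long) Math.floor(x / cellSize);
                long cy = (long) Math.floor(y / cellSize);
                for (long i = cx - r; i <= cx + r; i++)
                    for (long j = cy - r; j <= cy + r; j++) {
                        List<E> cell = cells.get(key(i, j));
                        if (cell != null) out.addAll(cell);
                    }
                return out;
            }
        }

    Moving entities re-insert only when their cell index changes, which doubles as the enter/exit notification for per-entity awareness sets.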

    Read the article

  • How can I get a rotation vector from a Matrix4x4 in XNA?

    - by mr.Smyle
    I want to get a rotation vector from a matrix, to build a parent-child system for models.

        Matrix bonePos = link.Bone.Transform * World;
        Matrix m = Matrix.CreateTranslation(link.Offset)
            * Matrix.CreateScale(link.gameObj.Scale.X, link.gameObj.Scale.Y, link.gameObj.Scale.Z)
            * Matrix.CreateFromYawPitchRoll(
                MathHelper.ToRadians(link.gameObj.Rotation.Y),
                MathHelper.ToRadians(link.gameObj.Rotation.X),
                MathHelper.ToRadians(link.gameObj.Rotation.Z))
            // need the rotation vector from the bone matrix here
            // (right now it's the global model rotation vector)
            * Matrix.CreateFromYawPitchRoll(
                MathHelper.ToRadians(Rotation.Y),
                MathHelper.ToRadians(Rotation.X),
                MathHelper.ToRadians(Rotation.Z))
            * Matrix.CreateTranslation(bonePos.Translation);
        link.gameObj.World = m;

    where link is a struct with the child model's settings (position, rotation, etc.) and link.Bone is the parent bone.

    Read the article

  • Drawing texture does not work anymore with a small amount of triangles

    - by Paul
    When I draw lines, the vertices are well connected. But when I draw the texture inside the triangles, it only works with i < 4 in the for loop; with i < 5, for example, there is an EXC_BAD_ACCESS message at @synthesize textureImage = _textureImage. I don't understand why. (The generatePolygons method seems to work fine, as I tried drawing lines with many vertices, as in the second image below. And textureImage remains the same for i < 4 or i < 5: it's a 512 px square image.) Here are the images. What I want to achieve is to place the red points, connect them to the y-axis (the green points) and color the area (the green triangles). If I only draw lines, it works fine. Then with a texture color, it works for i < 4 in the loop (the red points in my first image, plus the fifth one to connect the last y). But if I set i < 5, the debug tool says EXC_BAD_ACCESS at the synthesize of _textureImage. Here is my code. I set a texture color in HelloWordLayer.mm with:

        CCSprite *textureImage = [self spriteWithColor:color3 textureSize:512];
        _terrain.textureImage = textureImage;

    Then in the Terrain class, I create the vertices and draw the texture in the draw method:

        @implementation Terrain

        @synthesize textureImage = _textureImage; // EXC_BAD_ACCESS for i < 5

        - (void)generatePath2 {
            CGSize winSize = [CCDirector sharedDirector].winSize;
            float x = 40;
            float y = 0;
            for (int i = 0; i < kMaxKeyPoints + 1; ++i) {
                _hillKeyPoints[i] = CGPointMake(x, y);
                x = 150 + (random() % (int)30);
                y += 30;
            }
        }

        - (void)generatePolygons {
            _nPolyVertices = 0;
            float x1 = 0;
            float y1 = 0;
            int keyPoints = 0;
            for (int i = 0; i < 4; i++) { /* HERE: 4 = OK / 5 = crash */
                // V0: at (0,0)
                _polyVertices[_nPolyVertices] = CGPointMake(x1, y1);
                _polyTexCoords[_nPolyVertices++] = CGPointMake(x1, y1);
                // V1: to the first "point"
                _polyVertices[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                _polyTexCoords[_nPolyVertices++] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                keyPoints++; // from point at index 0 to 1
                // V2, same y as point n°2:
                _polyVertices[_nPolyVertices] = CGPointMake(0, _hillKeyPoints[keyPoints].y);
                _polyTexCoords[_nPolyVertices++] = CGPointMake(0, _hillKeyPoints[keyPoints].y);
                // V1 again
                _polyVertices[_nPolyVertices] = _polyVertices[_nPolyVertices - 2];
                _polyTexCoords[_nPolyVertices++] = _polyVertices[_nPolyVertices - 2];
                // V2 again
                _polyVertices[_nPolyVertices] = _polyVertices[_nPolyVertices - 2];
                _polyTexCoords[_nPolyVertices++] = _polyVertices[_nPolyVertices - 2];
                // V3 = same x,y as point at index 1
                _polyVertices[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                _polyTexCoords[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                y1 = _polyVertices[_nPolyVertices].y;
                _nPolyVertices++;
            }
        }

        - (id)init {
            if ((self = [super init])) {
                [self generatePath2];
                [self generatePolygons];
            }
            return self;
        }

        - (void)draw {
            //glDisable(GL_TEXTURE_2D);
            glDisableClientState(GL_COLOR_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);

            glBindTexture(GL_TEXTURE_2D, _textureImage.texture.name);
            glColor4f(1, 1, 1, 1);
            glVertexPointer(2, GL_FLOAT, 0, _polyVertices);
            glTexCoordPointer(2, GL_FLOAT, 0, _polyTexCoords);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)_nPolyVertices);

            glColor4f(1, 1, 1, 1);
            for (int i = 1; i < 40; ++i) {
                ccDrawLine(_polyVertices[i - 1], _polyVertices[i]);
            }

            // restore default GL states
            glEnable(GL_TEXTURE_2D);
            glEnableClientState(GL_COLOR_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        }

    Do you see anything wrong in this code? Thanks for your help.

    Read the article

  • What should I worry about when changing OpenGL origin to upper left of screen?

    - by derivative
    For self education, I'm writing a 2D platformer engine in C++ using SDL / OpenGL. I initially began with pure SDL using the tutorials on sdltutorials.com and lazyfoo.net, but I'm now rendering in an OpenGL context (specifically immediate mode but I'm learning about VAOs/VBOs) and using SDL for interface, audio, etc. SDL uses a coordinate system with the origin in the upper left of the screen and the positive y-axis pointing down. It's easy to set up my orthographic projection in OpenGL to mirror this. I know that texture coordinates are a right-hand system with values from 0 to 1 -- flipping the texture vertically before rendering (well, flip the file before loading) yields textures that render correctly... which is fine if I'm drawing the entire texture, but ultimately I'll be using tilesets and can imagine problems. What should I be concerned about in terms of rendering when I do this? If anybody has any advice or they've done this themselves and can point out future pitfalls, that would be great, but really any thoughts would be appreciated.
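
    For reference, a hedged sketch of the two pieces involved, written here with LWJGL-style Java bindings for brevity (the GL calls are the same in C): a top-left-origin orthographic projection, and tileset UVs addressed top-down so no file has to be flipped on disk.

        import static org.lwjgl.opengl.GL11.*;

        public final class TopLeftOrtho {
            public static void apply(int screenW, int screenH) {
                glMatrixMode(GL_PROJECTION);
                glLoadIdentity();
                // bottom = screenH, top = 0 puts the origin at the upper left
                // with +y pointing down, matching SDL's coordinates.
                glOrtho(0, screenW, screenH, 0, -1, 1);
                glMatrixMode(GL_MODELVIEW);
                glLoadIdentity();
            }

            // UVs for tile (tx, ty), counting ty from the TOP of the tileset image.
            // If the image is uploaded un-flipped, texel row 0 holds the top row of
            // the file, so addressing v top-down like this needs no flipping at all.
            public static float[] tileUV(int tx, int ty, int tileSize, int texW, int texH) {
                float u0 = tx * tileSize / (float) texW;
                float v0 = ty * tileSize / (float) texH;       // top edge of the tile
                float u1 = (tx + 1) * tileSize / (float) texW;
                float v1 = (ty + 1) * tileSize / (float) texH; // bottom edge
                return new float[] { u0, v0, u1, v1 };
            }
        }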

    Read the article

  • Scaling along an arbitrary axis (Dealing with non-uniform scale)

    - by Jon
    I'm trying to build my own little engine to get more familiar with the concepts of 3D programming. I have a transform class that, on each frame, creates a scaling matrix (S) and a rotation matrix from a quaternion (R), and concatenates them together (S*R). Once I have SR, I insert the translation values into the bottom of the three columns, so I end up with a transformation matrix that looks like:

        [SR SR SR 0]
        [SR SR SR 0]
        [SR SR SR 0]
        [tx ty tz 1]

    This works perfectly in all cases except when rotating an object that has a non-uniform scale. For example, a unit cube with ScaleX = 4, ScaleY = 2, ScaleZ = 1 will give me a rectangular box that is four times as wide as the depth and twice as high as the depth. If I then translate this around, the box stays the same and looks normal. The problem happens whenever I try to rotate this scaled box: the shape itself becomes distorted, and it appears as though the scale factors are affecting the object along the world X, Y, Z axes rather than the local X, Y, Z axes of the object. I've done some pretty extensive research through a variety of textbooks (Eberly, Möller/Hoffman, Pharr etc.) and there isn't a ton there to go off of. Online, most of the answers say to avoid non-uniform scaling; I understand the desire to avoid it, but I'd still like to figure out how to support it. The only thing I can think of is that when constructing a scale matrix:

        [sx  0  0 0]
        [ 0 sy  0 0]
        [ 0  0 sz 0]
        [ 0  0  0 1]

    the scaling is along the world axes instead of the object's local direction, up and right vectors, or its local Z, Y, X axes. Does anyone have any tips or ideas on how to construct a transformation matrix that allows for non-uniform scaling and rotation? Thanks!
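
    A small numeric sketch of the order-of-operations point, using the post's row-vector convention (v' = v * M): S*R scales in the object's local frame, R*S scales in the world frame, and scaling along an arbitrary axis is the conjugate A * S * A^T, where A rotates that axis onto a coordinate axis. Everything below is illustrative (3x3, row-major, Java).

        public final class ScaleOrderDemo {
            // C = A * B, 3x3 row-major
            static double[] mul(double[] a, double[] b) {
                double[] c = new double[9];
                for (int i = 0; i < 3; i++)
                    for (int j = 0; j < 3; j++)
                        for (int k = 0; k < 3; k++)
                            c[3 * i + j] += a[3 * i + k] * b[3 * k + j];
                return c;
            }

            // row vector times row-major matrix: v' = v * M
            static double[] xform(double[] v, double[] m) {
                return new double[] {
                    v[0] * m[0] + v[1] * m[3] + v[2] * m[6],
                    v[0] * m[1] + v[1] * m[4] + v[2] * m[7],
                    v[0] * m[2] + v[1] * m[5] + v[2] * m[8],
                };
            }

            static double[] scale(double sx, double sy, double sz) {
                return new double[] { sx, 0, 0, 0, sy, 0, 0, 0, sz };
            }

            static double[] rotZ(double rad) { // row-vector convention
                double c = Math.cos(rad), s = Math.sin(rad);
                return new double[] { c, s, 0, -s, c, 0, 0, 0, 1 };
            }

            static double[] transpose(double[] m) {
                return new double[] { m[0], m[3], m[6], m[1], m[4], m[7], m[2], m[5], m[8] };
            }

            public static void main(String[] args) {
                double[] S = scale(4, 2, 1);
                double[] R = rotZ(Math.PI / 4);
                double[] local = mul(S, R); // scale first, then rotate: local-axis scale
                double[] world = mul(R, S); // rotate first, then scale: world-axis scale
                // arbitrary-axis scale: A rotates the scale axis onto +x
                double[] axis = mul(mul(R, S), transpose(R));
                System.out.println(java.util.Arrays.toString(xform(new double[] {1, 0, 0}, local)));
                System.out.println(java.util.Arrays.toString(xform(new double[] {1, 0, 0}, world)));
            }
        }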

    Read the article

  • Maya Animated Character export for XNA 4.0 problem

    - by FahidK
    To begin with, I'm trying to export an animated character in .fbx format from Maya 2013 to XNA 4.0. In Maya, the model has a basic rig, and the animations are in clips made in the Trax editor. The issue I'm having: after selecting the model and the root joint and hitting export in .fbx format, for some reason when I open the exported .fbx file, the joint system is detached from the model, with no animation. By the way, I keep the animations in clips so that they can be called in code, for example "run", "walk", "attack". So, what can I do to solve this problem? Thank you.

    Read the article

  • Projective texture and deferred lighting

    - by Vodácek
    In my previous question, I asked whether it is possible to do projective texturing with deferred lighting. Now (more than half a year later) I have a problem with my implementation of the same thing. I am trying to apply this technique in the light pass (my projector doesn't affect albedo). I have this projector view and projection matrix:

        Matrix projection = Matrix.CreateOrthographicOffCenter(
            -halfWidth * Scale, halfWidth * Scale,
            -halfHeight * Scale, halfHeight * Scale,
            1, 100000);
        Matrix view = Matrix.CreateLookAt(Position, Target, Vector3.Up);

    where halfWidth and halfHeight are half of the texture's width and height, Position is the projector's position and Target is the projector's target. This seems to be OK. I am drawing a full-screen quad with this shader:

        float4x4 InvViewProjection;
        texture2D DepthTexture;
        texture2D NormalTexture;
        texture2D ProjectorTexture;
        float4x4 ProjectorViewProjection;

        sampler2D depthSampler = sampler_state {
            texture = <DepthTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D normalSampler = sampler_state {
            texture = <NormalTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D projectorSampler = sampler_state {
            texture = <ProjectorTexture>;
            AddressU = Clamp;
            AddressV = Clamp;
        };

        float viewportWidth;
        float viewportHeight;

        // Calculate the 2D screen position of a 3D position
        float2 postProjToScreen(float4 position) {
            float2 screenPos = position.xy / position.w;
            return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
        }

        // Calculate the size of one half of a pixel, to convert
        // between texels and pixels
        float2 halfPixel() {
            return 0.5f / float2(viewportWidth, viewportHeight);
        }

        struct VertexShaderInput {
            float4 Position : POSITION0;
        };

        struct VertexShaderOutput {
            float4 Position : POSITION0;
            float4 PositionCopy : TEXCOORD1;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input) {
            VertexShaderOutput output;
            output.Position = input.Position;
            output.PositionCopy = output.Position;
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 {
            float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();

            // Extract the depth for this pixel from the depth map
            float4 depth = tex2D(depthSampler, texCoord);
            //return float4(depth.r, 0, 0, 1);

            // Recreate the position with the UV coordinates and depth value
            float4 position;
            position.x = texCoord.x * 2 - 1;
            position.y = (1 - texCoord.y) * 2 - 1;
            position.z = depth.r;
            position.w = 1.0f;

            // Transform position from screen space to world space
            position = mul(position, InvViewProjection);
            position.xyz /= position.w;

            // compute projection
            float3 projection = tex2D(projectorSampler,
                postProjToScreen(mul(position, ProjectorViewProjection)) + halfPixel());
            return float4(projection, 1);
        }

    In the first part of the pixel shader, the position is recovered from the G-buffer (I use this code in other shaders without any problem) and then transformed into the projector's view-projection space. The problem is that the projection doesn't appear. Here is an image of my situation: the green lines are the rendered projector frustum. Where is my mistake hidden? I am using XNA 4. Thanks for advice, and sorry for my English. EDIT: The shader above is working, but the projection was too small. When I changed the Scale property to a large value (e.g. 100), the projection appears. But when the camera moves toward the projection, the projection expands, as can be seen in this YouTube video.

    Read the article

  • Where to start? (3D Modeling)

    - by herfus
    I'm looking for a good resource to start learning 3D modeling, something that starts with the basics (e.g. terminology: what quads and triangles are) before or while going into the actual modeling. A book, website or video, anything will do. I'm only concerned with the quality of the tutorials and how thorough they are. I have experience with texturing, level design and so on, but I've never created anything more than simple shapes or edits of existing assets.

    Read the article

  • Is there a way to use the Facebook SDK with libgdx?

    - by Rudy_TM
    I have tried to use the Facebook SDK in libgdx with callbacks, but it never enters the authentication listeners, so the user is never logged in. It permits the authorization for the Facebook app, but it never calls the authentication interfaces :( Is there a way to use it?

        public MyFbClass() {
            facebook = new Facebook(APPID);
            mAsyncRunner = new AsyncFacebookRunner(facebook);
            SessionStore.restore(facebook, this);
            FB.init(this, 0, facebook, this.permissions);
        }

        // Method to init the permissions and my authentication listener
        public void init(final Activity activity, final Facebook fb, final String[] permissions) {
            mActivity = activity;
            this.fb = fb;
            mPermissions = permissions;
            mHandler = new Handler();
            async = new AsyncFacebookRunner(mFb);
            params = new Bundle();
            SessionEvents.addAuthListener(auth);
        }

        // I call the authentication process with a callback from libgdx
        public void facebookAction() {
            fb.authenticate();
        }

        // It only grants the app permission; it doesn't register the events
        public void authenticate() {
            if (mFb.isSessionValid()) {
                SessionEvents.onLogoutBegin();
                AsyncFacebookRunner asyncRunner = new AsyncFacebookRunner(mFb);
                asyncRunner.logout(getContext(), new LogoutRequestListener());
                //SessionStore.save(this.mFb, getContext());
            } else {
                mFb.authorize(mActivity, mPermissions, 0, new DialogListener());
            }
        }

        public class SessionListener implements AuthListener, LogoutListener {
            @Override
            public void onAuthSucceed() {
                SessionStore.save(mFb, getContext());
            }

            @Override
            public void onAuthFail(String error) { }

            @Override
            public void onLogoutBegin() { }

            @Override
            public void onLogoutFinish() {
                SessionStore.clear(getContext());
            }
        }

        DialogListener() {
            @Override
            public void onComplete(Bundle values) {
                SessionEvents.onLoginSuccess();
            }

            @Override
            public void onFacebookError(FacebookError error) {
                SessionEvents.onLoginError(error.getMessage());
            }

            @Override
            public void onError(DialogError error) {
                SessionEvents.onLoginError(error.getMessage());
            }

            @Override
            public void onCancel() {
                SessionEvents.onLoginError("Action Canceled");
            }
        }
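
    The usual libgdx pattern for platform-specific SDKs is to keep the core project talking to an interface and implement it in the Android project, marshalling between the UI thread (where Facebook's dialogs must run) and the GL thread. A hedged sketch; the interface and method names are illustrative:

        import android.app.Activity;

        public interface SocialService {
            void login(LoginCallback cb);

            interface LoginCallback {
                void onSuccess(String accessToken);
                void onError(String message);
            }
        }

        // In the Android project:
        public class AndroidSocialService implements SocialService {
            private final Activity activity;

            public AndroidSocialService(Activity activity) {
                this.activity = activity;
            }

            @Override
            public void login(final LoginCallback cb) {
                // Facebook's dialogs must run on the Android UI thread:
                activity.runOnUiThread(new Runnable() {
                    public void run() {
                        // facebook.authorize(activity, permissions, new DialogListener() { ... });
                        // and from its onComplete(), hop back to the GL thread, e.g.:
                        // Gdx.app.postRunnable(new Runnable() {
                        //     public void run() { cb.onSuccess(token); }
                        // });
                    }
                });
            }
        }

    The game class receives the implementation in its constructor (new MyGame(new AndroidSocialService(this))), so a desktop build can pass a stub instead.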

    Read the article

  • How to implement an explosion in OpenGL with a particle effect?

    - by Chan
    I'm relatively new to OpenGL and I'm clueless about how to implement an explosion, so could anyone give me some ideas on how to start? Suppose the explosion occurs at location (x, y, z). I'm thinking of randomly generating a collection of vectors with (x, y, z) as the origin, then drawing some particle (glutSolidCube) which moves along each vector for some period of time, say 1000 updates, and then disappears. Is this approach feasible? A minimal example would be greatly appreciated.
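
    The approach described above is feasible; it is essentially a small particle system. A minimal sketch in Java (rendering omitted; all names illustrative), with a fade-out and a gravity term being the usual next refinements:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        final class Explosion {
            static final class Particle {
                float x, y, z, vx, vy, vz;
                int age;
            }

            private final List<Particle> particles = new ArrayList<>();
            private final Random rng = new Random();
            private static final int LIFETIME = 1000; // updates, as in the post

            Explosion(float x, float y, float z, int count, float speed) {
                for (int i = 0; i < count; i++) {
                    Particle p = new Particle();
                    p.x = x; p.y = y; p.z = z;
                    // random direction: normalise a random vector in the unit cube
                    float dx = rng.nextFloat() * 2 - 1;
                    float dy = rng.nextFloat() * 2 - 1;
                    float dz = rng.nextFloat() * 2 - 1;
                    float len = (float) Math.sqrt(dx * dx + dy * dy + dz * dz) + 1e-6f;
                    p.vx = dx / len * speed;
                    p.vy = dy / len * speed;
                    p.vz = dz / len * speed;
                    particles.add(p);
                }
            }

            void update(float dt) {
                particles.removeIf(p -> ++p.age > LIFETIME); // cull expired particles
                for (Particle p : particles) {
                    p.x += p.vx * dt;
                    p.y += p.vy * dt;
                    p.z += p.vz * dt;
                    // then draw each particle at (p.x, p.y, p.z), e.g. a small cube
                }
            }
        }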

    Read the article

  • What would be a good filter to create 'magnetic deformers' from a depth map?

    - by sebf
    In my project, I am creating a system for deforming a highly detailed mesh (clothing) so that it 'fits' a convex mesh. To do this I use depth maps of the item and the 'hull' to determine at what point in world space the deviation occurs and the extent. Simply transforming all occluded vertices to the depths as defined by the 'hull' is fairly effective, and has good performance, but it suffers the problem of not preserving the features of the mesh and requires extensive culling to avoid false-positives. I would like instead to generate from the depth deviation map a set of simple 'deformers' which will 'push'* all vertices of the deformed mesh outwards (in world space). This way, all features of the mesh are preserved and there is no need to have complex heuristics to cull inappropriate vertices. I am not sure how to go about generating this deformer set however. I am imagining something like an algorithm that attempts to match a spherical surface to each patch of contiguous deviations within a certain range, but do not know where to start doing this. Can anyone suggest a suitable filter or algorithm for generating deformers? Or to put it another way 'compressing' a depth map? (*Push because its fitting to a convex 'bulgy' humanoid so transforms are likely to be 'spherical' from the POV of the surface.)

    Read the article

  • Quaternion LookAt for camera

    - by Homar
    I am using the following code to rotate entities to look at points.

        glm::vec3 forwardVector = glm::normalize(point - position);
        float dot = glm::dot(glm::vec3(0.0f, 0.0f, 1.0f), forwardVector);
        float rotationAngle = (float)acos(dot);
        glm::vec3 rotationAxis = glm::normalize(glm::cross(glm::vec3(0.0f, 0.0f, 1.0f), forwardVector));
        rotation = glm::normalize(glm::quat(rotationAxis * rotationAngle));

    This works fine for my usual entities. However, when I use this on my Camera entity, I get a black screen. If I flip the subtraction in the first line, so that I take the forward vector to be the direction from the point to my camera's position, then my camera works, but naturally my entities rotate to look in the opposite direction of the point. I compute the transformation matrix for the camera and then take the inverse to be the view matrix, which I pass to my OpenGL shaders:

        glm::mat4 viewMatrix = glm::inverse(cameraTransform->GetTransformationMatrix());

    The orthographic projection matrix is created using glm::ortho. What's going wrong?

    Read the article

  • How to snap a 2D Quad to the mouse cursor using OpenGL 3.0/WIN32?

    - by NoobScratcher
    I've been having issues trying to snap a 2D quad to the mouse cursor position. I'm able:
    1.) to get values into posX, posY, posZ
    2.) to translate with the values from those three variables
    But I can't position the quad correctly, in such a way that it is near the mouse cursor, using the values from those three variables. I need the mouse cursor in the center of the 2D quad. I'm hoping someone can help me achieve this; I've tried searching around, to no avail. Here's the function that is meant to do the snapping but instead creates weird flicker, or shows nothing at all (only the 3D models show up):

        void display() {
            glClearColor(0.0, 0.0, 0.0, 1.0);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            for (std::vector<GLuint>::iterator I = cube.begin(); I != cube.end(); ++I) {
                glCallList(*I);
            }

            if (DrawArea == true) {
                glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
                cerr << winZ << endl;
                glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
                glGetDoublev(GL_PROJECTION_MATRIX, projection);
                glGetIntegerv(GL_VIEWPORT, viewport);
                gluUnProject(winX, winY, winZ, modelview, projection, viewport, &posX, &posY, &posZ);

                glBindTexture(GL_TEXTURE_2D, DrawAreaTexture);
                glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
                glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, DrawAreaSurface->w, DrawAreaSurface->h,
                             0, GL_RGBA, GL_UNSIGNED_BYTE, DrawAreaSurface->pixels);
                glEnable(GL_TEXTURE_2D);
                glBindTexture(GL_TEXTURE_2D, DrawAreaTexture);

                glTranslatef(posX, posY, posZ);

                glBegin(GL_QUADS);
                glTexCoord2f(0.0, 0.0); glVertex3f(0.5, 0.5, 0);
                glTexCoord2f(1.0, 0.0); glVertex3f(0, 0.5, 0);
                glTexCoord2f(1.0, 1.0); glVertex3f(0, 0, 0);
                glTexCoord2f(0.0, 1.0); glVertex3f(0.5, 0, 0);
                glEnd();
            }
            SwapBuffers(hDC);
        }

    I'm using: OpenGL 3.0, the WIN32 API, C++ and GLSL. If you really want the full source, here it is - http://pastebin.com/1Ncm9HNf . It's pretty messy.

    Read the article

  • Depth is disabled - How do I turn it on?

    - by marc wellman
    In XNA 3.1, is there any other way depth can be disabled in 3D worlds using DirectX models, other than GraphicsDevice.RenderState.DepthBufferEnable = false;? The reason for my question: I have quite a huge program which offers a 3D world with a couple of 3D DirectX models inside. Depth was never an issue, since it always worked fine, but for a few days now, after making some modifications, my models are all depth-translucent, i.e. depth buffering and/or culling seems to be disabled. But in my whole source code I never touch any of the options related to depth or culling, which means I never turn these settings on explicitly, nor turn them off somewhere. So I am searching for some other statement, maybe related to the GraphicsDevice, that implicitly turns depth off - but I can't find it. (Sorry that I don't post any source code, but I have too much of it and I simply don't know where to search.) UPDATE: These are a couple of simple objects seen with correct depth. These are the same objects in their current state.

    Read the article

  • Drawing 2D Grid in 3D View - Need help with method

    - by Deukalion
    I'm trying to draw a simple 2D grid for an editor, to be able to navigate more clearly around the 3D space, but I can't render it. The Grid2D class creates a grid of a certain size at a location and should just draw lines:

        public class Grid2D : IShape {
            private VertexPositionColor[] _vertices;
            private Vector2 _size;
            private Vector3 _location;
            private int _faces;

            public Grid2D(Vector2 size, Vector3 location, Color color) {
                float x = 0, y = 0;
                if (size.X < 1f) { size.X = 1f; }
                if (size.Y < 1f) { size.Y = 1f; }
                _size = size;
                _location = location;
                List<VertexPositionColor> vertices = new List<VertexPositionColor>();
                _faces = 0;
                for (y = -size.Y; y <= size.Y; y++) {
                    vertices.Add(new VertexPositionColor(location + new Vector3(-size.X, y, 0), color));
                    vertices.Add(new VertexPositionColor(location + new Vector3(size.X, y, 0), color));
                    _faces++;
                }
                for (x = -size.X; x <= size.X; x++) {
                    vertices.Add(new VertexPositionColor(location + new Vector3(x, -size.Y, 0), color));
                    vertices.Add(new VertexPositionColor(location + new Vector3(x, size.Y, 0), color));
                    _faces++;
                }
                _vertices = vertices.ToArray();
            }

            public void Render(GraphicsDevice device) {
                device.DrawUserPrimitives<VertexPositionColor>(PrimitiveType.LineList, _vertices, 0, _faces);
            }
        }

    Like this:

        +----+----+----+----+
        |    |    |    |    |
        +----+----+----+----+
        |    |    |    |    |
        +----+----+----+----+
        |    |    |    |    |
        +----+----+----+----+
        |    |    |    |    |
        +----+----+----+----+

    Does anyone know what I'm doing wrong? If I add a shape without a texture, it's set automatically to VertexColorEnabled = true and TextureEnabled = false. This is how I render it:

        foreach (RenderObject render in _renderObjects) {
            render.Effect.Projection = projection;
            render.Effect.View = view;
            render.Effect.World = world;
            foreach (EffectPass pass in render.Effect.CurrentTechnique.Passes) {
                pass.Apply();
                try {
                    // Could be a Grid2D
                    render.Shape.Render(_device);
                } catch {
                    throw;
                }
            }
        }

    An exception is thrown: "The current vertex shader declaration does not include all the elements required by the current Vertex Shader. Normal0 is missing." Simply put, I can't figure out how to draw a few lines. I want to draw them one at a time, and I guess that's the problem I haven't figured out; even when I tried rendering vertices[i] and vertices[i+1] with primitiveCount = 1, vertices = 2, and so on, it didn't work either. Any suggestions?

    Read the article

  • Java: Reflection Packet Builder using getField()

    - by Matchlighter
    So I just finished writing a packet builder that dynamically loads data into a data stream, which is then sent out. Each builder operates by finding fields in its class (and its superclasses) that are marked with an @data annotation. Upon finishing the builder, I remembered that getFields() does not return fields in "any specific order". I quite like my builder because it allows for quite simple, yet hard-typed packets. Could this implementation be a problem? What would be the best next step to keep the simplicity - alphabetical sorting of the fields?
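
    It can be a real problem: two JVMs, or even two runs, may disagree on field order, so sender and receiver could serialize in different orders. A hedged sketch of the deterministic fix - sort the fields yourself, ideally by an explicit index on the annotation, so that renaming or reordering fields in source can never silently change the wire format (the @Data annotation below is illustrative):

        import java.lang.annotation.*;
        import java.lang.reflect.Field;
        import java.util.Arrays;
        import java.util.Comparator;

        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.FIELD)
        @interface Data {
            int order(); // explicit wire position
        }

        final class PacketLayout {
            // getFields() includes inherited public fields, matching the
            // "class and its superclasses" behaviour described above.
            static Field[] orderedDataFields(Class<?> type) {
                return Arrays.stream(type.getFields())
                        .filter(f -> f.isAnnotationPresent(Data.class))
                        // sort by declared order, name as a tie-break
                        .sorted(Comparator
                                .comparingInt((Field f) -> f.getAnnotation(Data.class).order())
                                .thenComparing(Field::getName))
                        .toArray(Field[]::new);
            }
        }

    Plain alphabetical sorting also works and keeps the annotation attribute-free; the explicit index just survives refactoring better.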

    Read the article

  • Isometric algorithm [fixed]

    - by David
    So I've been toying with isometric rendering, and I just can't get the tiles to be in the right order. I'm probably missing something obvious and I just can't see it... but even at the risk of looking stupid, here's my code:

        for (int i = 0; i < Tile.MapSize; i++) {
            for (int j = 0; j < Tile.MapSize; j++) {
                spriteBatch.Draw(
                    Tile.TileSetTexture,
                    new Rectangle(
                        (-j * Tile.TileWidth / 2) + (i * Tile.TileWidth / 2),
                        (i * (Tile.TileHeight - 9) / 2) - (-j * (Tile.TileHeight - 9) / 2),
                        Tile.TileWidth,
                        Tile.TileHeight),
                    Tile.GetSourceRectangle(tileID),
                    Color.White,
                    0.0f,
                    new Vector2(-350, -60),
                    SpriteEffects.None,
                    1.0f);
            }
        }

    And here's what I end up with: delicious messed-up map. Yep, bit of an issue. So if anyone could help, I'd appreciate it. edit* works now <_<

    Read the article

  • Creating my own kill cam

    - by DalexL
    I plan on creating my own kill cam system for a sandbox tool set. After thinking about the mechanics of the kill cam itself, however, I'm quite lost. I'm trying to recreate the ones commonly seen in Call of Duty games, which show the actual killing scene from the view of the killer. My thoughts:
    - I can't just keep in memory when people kill others, because I wouldn't know when to start the 'recording process'. There is no way for me to accurately determine when somebody is about to kill someone.
    - My only real idea so far is to have a complete duplicate of everything loaded off to the side, copying all the movement from the original world but with a 10-second delay. That way, all the kill cams would be 10 seconds long, and the person's camera would just be moved to the second world of their killer.
    My questions: Is there already an accepted way to do this? Does anybody have any good ideas for something like this? Thanks if you can help!
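
    A hedged sketch of a middle ground between the two thoughts above (see the list they form): instead of mirroring the whole world with a delay, continuously record lightweight snapshots into a rolling 10-second window; when a kill happens, the last 10 seconds are already on hand, which sidesteps knowing when to start recording. All names are illustrative.

        import java.util.ArrayDeque;
        import java.util.Deque;

        final class KillCamRecorder {
            static final class Snapshot {
                final long timeMs;
                final float[] entityStates; // packed positions/orientations
                Snapshot(long timeMs, float[] entityStates) {
                    this.timeMs = timeMs;
                    this.entityStates = entityStates;
                }
            }

            private static final long WINDOW_MS = 10_000;
            private final Deque<Snapshot> window = new ArrayDeque<>();

            // Call every tick (or at a reduced rate, e.g. 10 Hz, interpolating
            // on playback); old frames are discarded as new ones arrive.
            void record(long nowMs, float[] packedStates) {
                window.addLast(new Snapshot(nowMs, packedStates));
                while (!window.isEmpty() && nowMs - window.peekFirst().timeMs > WINDOW_MS) {
                    window.removeFirst(); // drop frames older than the window
                }
            }

            Iterable<Snapshot> replay() {
                return window; // feed to the renderer with the killer's camera
            }
        }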

    Read the article

  • Making a body (Box2D) a sprite (AndEngine) in Android

    - by Kadir
    I can't make a body (Box2D) a sprite (AndEngine) and at the same time apply a MoveModifier to the sprite that is the body. If I make just bodies, it works: the sprites can collide. If I apply just MoveModifiers to the sprites, the sprites can move where I want. But I want to make them bodies (so they can collide) and apply MoveModifiers (so they can move where I want) at the same time. How can I do this? This is my code; it just runs the MoveModifier, not the body, at the same time:

        circles[i] = new Sprite(startX, startY, textRegCircle[i]);
        body[i] = PhysicsFactory.createCircleBody(physicsWorld, circles[i], BodyType.DynamicBody, FIXTURE_DEF);
        physicsWorld.registerPhysicsConnector(new PhysicsConnector(circles[i], body[i], true, true));
        circles[i].registerEntityModifier(
            (IEntityModifier) new SequenceEntityModifier(
                new MoveModifier(10.0f, circles[i].getX(), circles[i].getX(),
                                 circles[i].getY(), CAMERA_HEIGHT + 64.0f)));
        scene.getLastChild().attachChild(circles[i]);
        scene.registerUpdateHandler(physicsWorld);
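
    For what it's worth, the two systems fight because the PhysicsConnector copies the body's transform onto the sprite every tick, overwriting whatever the MoveModifier set. A hedged sketch of one common resolution - drive the motion through the body itself and let the connector move the sprite (values and the helper name are illustrative):

        import com.badlogic.gdx.math.Vector2;
        import com.badlogic.gdx.physics.box2d.Body;
        import com.badlogic.gdx.physics.box2d.BodyDef.BodyType;

        final class MoveThroughPhysics {
            static void startDescent(Body body, float pixelsPerSecond, float pixelToMeterRatio) {
                // Kinematic: still collides with dynamic bodies, but is script-driven
                // rather than pushed around by forces.
                body.setType(BodyType.KinematicBody);
                // Box2D works in metres, so convert the desired on-screen speed.
                body.setLinearVelocity(new Vector2(0, pixelsPerSecond / pixelToMeterRatio));
            }
        }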

    Read the article

  • Suggestions to start a cross-platform project

    - by Gabriele
    I have a big project in my head; it should be cross-platform (Win, Mac and Linux), online (client-server) and with 3D graphics. I would like some suggestions for starting with the right things. Currently I'm a PHP/MySQL coder; I used to code in C and Pascal in the DOS ages (Borland times ;)), so my C knowledge needs a refresh, but it's OK. I guess C++ is the right language. What platform and tools should I use to code? I can choose from all three platforms. My idea was to use Visual Studio 2010 C++, but I'm not sure if it supports native code. What kind of libraries should I use? I guessed OpenSSL for the login and OpenGL for the graphics part. What about the audio or the GUI? Any other suggestions are welcome. I know it's a "BIG DEAL", but I'm in no rush and it'll be a free-time project, only for my pleasure. Thank you in advance.

    Read the article

  • How to make room reflection using Cubemap

    - by MaT
    I am trying to use a cube map of the inside of a room to create some reflections on the walls, ceiling and floor. But when I use the cube map, the reflected image is not correct: the point of view seems to be false. To be correct, I have to use a different cube map for each wall, the floor and the ceiling, with each cube map calculated from the center of that plane looking into the room. Are there specialized techniques to achieve such an effect? Thanks a lot!
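
    There is a specialized technique for exactly this, often called parallax-corrected cubemaps: instead of sampling with the raw reflection vector (which assumes an infinitely distant environment), intersect the reflection ray with a box proxy of the room and re-aim the lookup from the cubemap's capture point to that hit, so a single cubemap per room suffices. A hedged sketch of the correction (in practice this runs in the fragment shader; written here in Java for illustration, names invented):

        final class ParallaxCubemap {
            // Vectors as float[3]; boxMin/boxMax bound the room, probePos is
            // the point the cubemap was rendered from.
            static float[] correctedLookup(float[] worldPos, float[] reflDir,
                                           float[] boxMin, float[] boxMax, float[] probePos) {
                // Slab test for a ray starting inside the box: the nearest
                // positive exit distance across the three axis pairs.
                float t = Float.POSITIVE_INFINITY;
                for (int i = 0; i < 3; i++) {
                    if (reflDir[i] == 0) continue;
                    float t1 = (boxMin[i] - worldPos[i]) / reflDir[i];
                    float t2 = (boxMax[i] - worldPos[i]) / reflDir[i];
                    float tFar = Math.max(t1, t2); // exit distance along this axis
                    t = Math.min(t, tFar);
                }
                float[] out = new float[3];
                for (int i = 0; i < 3; i++) {
                    float hit = worldPos[i] + reflDir[i] * t; // point on the room shell
                    out[i] = hit - probePos[i];               // direction to sample with
                }
                return out;
            }
        }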

    Read the article
