Search Results

Search found 13494 results on 540 pages for 'board game'.

  • Viewport.Unproject - Checking if a model intersects a large sprite

    - by Fibericon
    Let's say I have a sprite, drawn like this:

        spriteBatch.Draw(levelCannons[i].texture, levelCannons[i].position, null,
            alpha, levelCannons[i].rotation, Vector2.Zero, scale,
            SpriteEffects.None, 0);

    Picture levelCannon as a laser beam that goes across the entire screen. I need to see whether my 3D model intersects the screen space inhabited by the sprite. I managed to dig up Viewport.Unproject, but that seems to be useful only when dealing with a single point in 2D space, rather than an area. What can I do in my case?
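
    One hedged approach (a sketch, not the asker's code): go the other way with Viewport.Project, collapse the model to its bounding sphere in screen space, and test that circle against the beam treated as a thick 2D segment. Names such as cameraRight, beamStart/beamEnd, and beamHalfWidth are assumptions standing in for the sprite's actual geometry.

        static bool BeamHitsModel(Viewport viewport, Matrix view, Matrix projection,
                                  BoundingSphere sphere, Vector3 cameraRight,
                                  Vector2 beamStart, Vector2 beamEnd, float beamHalfWidth)
        {
            // project the sphere's center and a point one radius to the side
            Vector3 c = viewport.Project(sphere.Center, projection, view, Matrix.Identity);
            Vector3 e = viewport.Project(sphere.Center + cameraRight * sphere.Radius,
                                         projection, view, Matrix.Identity);
            float screenRadius = Vector2.Distance(new Vector2(c.X, c.Y), new Vector2(e.X, e.Y));

            // distance from the circle's center to the beam's center line (a 2D segment)
            Vector2 p = new Vector2(c.X, c.Y);
            Vector2 ab = beamEnd - beamStart;
            float t = MathHelper.Clamp(Vector2.Dot(p - beamStart, ab) / ab.LengthSquared(), 0f, 1f);
            float dist = Vector2.Distance(p, beamStart + ab * t);

            return dist <= screenRadius + beamHalfWidth;
        }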

  • How to detect GLSL warnings?

    - by msell
    After compiling a shader with glCompileShader, I can call glGetShaderiv with GL_COMPILE_STATUS to check whether the shader compiled successfully. I can also call glGetShaderInfoLog to get information about possible errors, warnings, or other info. The content of the log returned by this function is unspecified. In a tool where users can write their own shaders, I would like to print all errors and warnings from the compilation, but nothing if no warnings or errors were found. The problem is that GL_COMPILE_STATUS is false only if the compilation failed, and true otherwise. If no problems were found, some drivers return an empty info log from glGetShaderInfoLog, but some drivers can return something else, such as "No errors.", which I do not want to print to the user. How is this problem generally solved?
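
    Since the log's content is unspecified, there is no fully portable filter; a common compromise is to show the log only when it is non-trivial. A minimal sketch, assuming OpenTK's C# bindings (the asker doesn't name a language) and an illustrative blacklist of known "all clear" strings:

        using System;
        using OpenTK.Graphics.OpenGL;

        static void ReportShaderLog(int shader)
        {
            int status;
            GL.GetShader(shader, ShaderParameter.CompileStatus, out status);
            string log = (GL.GetShaderInfoLog(shader) ?? "").Trim();

            // treat empty logs and known no-op driver messages as "nothing to report"
            bool trivial = log.Length == 0
                || log.Equals("No errors.", StringComparison.OrdinalIgnoreCase)
                || log.IndexOf("successfully compiled", StringComparison.OrdinalIgnoreCase) >= 0;

            if (status == 0 || !trivial)
                Console.WriteLine(log.Length > 0 ? log : "Shader compilation failed.");
        }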

  • Unity mouse input not working in webplayer build

    - by Califer
    I have a button script with the following code:

        void OnMouseDown()
        {
            animation.Play("button-squish");
            enlarged = true;
            audio.PlayOneShot(buttonSound);
        }

        void OnMouseUpAsButton()
        {
            if (enlarged)
            {
                SelectThisButton();
                enlarged = false;
                animation.Play("button-return");
            }
        }

        void OnMouseExit()
        {
            if (enlarged)
            {
                enlarged = false;
                animation.Play("button-return");
            }
        }

    It works great in the editor, but when I made a build and tested it in Chrome, none of the buttons had any response. Further testing revealed that it did work in Firefox. Rather than telling people to change their browser if they want to play, I want to make the button code work. How else can I get the buttons to know when they're being pressed if the built-in callbacks aren't working?
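
    One hedged fallback sketch: poll the mouse in Update and raycast into the scene yourself, so the buttons no longer depend on Unity delivering OnMouseDown. This assumes each button has a collider; the method calls mirror the asker's own code.

        void Update()
        {
            if (Input.GetMouseButtonDown(0))
            {
                Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
                RaycastHit hit;
                // did the click land on this button's collider?
                if (Physics.Raycast(ray, out hit) && hit.collider.gameObject == gameObject)
                {
                    animation.Play("button-squish");
                    enlarged = true;
                    audio.PlayOneShot(buttonSound);
                }
            }
            if (Input.GetMouseButtonUp(0) && enlarged)
            {
                SelectThisButton();
                enlarged = false;
                animation.Play("button-return");
            }
        }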

  • How to obtain window handle in SDL 2.0.3

    - by Diorthotis
    I need to obtain the handle of the window for SDL 2.0.3. I got the suggestion to use info.window after initializing SDL and filling the info variable with data by calling SDL_GetWindowWMInfo(), declared in the header file SDL_syswm.h. My compiler (Visual Studio 2008 Professional Edition) gives the following error:

        error C2039: 'window' : is not a member of 'SDL_SysWMinfo'
        include\sdl_syswm.h(173) : see declaration of 'SDL_SysWMinfo'

    Any help appreciated. Thanks. Never mind, I needed to use "info.info.win.window". That seems a bit redundant, but whatever.

  • A flexible data structure for geometries

    - by AkiRoss
    What data structure would you use to represent meshes that are to be altered (e.g. adding or removing faces, vertices, and edges) and that have to be "studied" in different ways (e.g. finding all the triangles intersecting a certain ray, or finding all the triangles "visible" from a given point in space)? I need to consider multiple aspects of the mesh: its geometry, its topology, and spatial information. The meshes are rather big, say 500k triangles, so I am going to use the GPU when computations are heavy. I tried using arrays of vertices and arrays of indices, but I do not love adding and removing vertices from them. Also, using arrays totally ignores spatial and topological information, which I may need when studying the mesh. So I thought about using custom doubly-linked-list data structures, but I believe doing so would require me to copy the data to array buffers before going to the GPU. I also thought about using a BST, but I am not sure it fits. Any help is appreciated. If I have been too fuzzy and you require other information, feel free to ask.
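
    For reference, one classic structure for editable meshes is the half-edge representation; a minimal sketch follows (purely illustrative, not the only answer). Local edits and topology walks are cheap with it, flat vertex/index arrays can still be rebuilt from it for GPU upload, and spatial queries are usually layered on top with a separate index (e.g. a BVH or octree over the faces).

        using System.Collections.Generic;
        using System.Numerics;

        class HalfEdge
        {
            public int OriginVertex;   // index into Vertices
            public HalfEdge Twin;      // opposite half-edge, null on a boundary
            public HalfEdge Next;      // next half-edge around the same face
            public int Face;           // index of the face this edge borders
        }

        class EditableMesh
        {
            public List<Vector3> Vertices = new List<Vector3>();
            public List<HalfEdge> Edges = new List<HalfEdge>();
            // to render: walk each face's Next loop and emit an index buffer
        }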

  • How to detect a touch on transparent area of an image in a (libgdx) stage?

    - by Usman
    Can someone please help me detect a touch on an image which I am using as an actor in a stage? The image is actually a long diagonal brush which has plenty of transparent area. The problem is that when I touch the transparent area of the brush image, it also triggers the image's click listener. The click listener should only be called when the finger actually touches the visible image, not the area which is empty. I am using the libgdx-0.9.4 libraries. Here is my simple piece of code:

        import com.badlogic.gdx.scenes.scene2d.ui.Image;
        import com.badlogic.gdx.scenes.scene2d.ui.ClickListener;

        Image brushImg = new Image(ImageCache.getTexture("brush"));
        brushImg.width = mStage.width() * 0.75f;
        brushImg.height = mStage.height() * 0.75f;
        brushImg.setClickListener(new ClickListener() {
            @Override
            public void click(Actor actor, float x, float y) {
                SoundFactory.play("brush");
            }
        });
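
    The usual remedy is an alpha-based hit test: keep the image's pixels available on the CPU side (in libgdx, a Pixmap) and ignore touches that land on a texel whose alpha is below a threshold. A language-neutral sketch of the idea in C# (names are illustrative; localX/localY are the click listener's actor-local coordinates):

        static bool HitsOpaquePixel(byte[] rgba, int texWidth, int texHeight,
                                    float localX, float localY,
                                    float actorWidth, float actorHeight,
                                    byte alphaThreshold = 16)
        {
            // map the touch point from actor space into texel space
            int px = (int)(localX / actorWidth * texWidth);
            int py = (int)(localY / actorHeight * texHeight);
            if (px < 0 || py < 0 || px >= texWidth || py >= texHeight)
                return false;

            byte alpha = rgba[(py * texWidth + px) * 4 + 3];   // RGBA8 layout assumed
            return alpha >= alphaThreshold;
        }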

  • D3DXMatrixDecompose gives different quaternion than D3DXQuaternionRotationMatrix

    - by Fraser
    In trying to solve this problem, I tracked it down to the conversion of the rotation matrix to a quaternion. In particular, consider the following matrix:

        -0.02099178    0.9997436     -0.008475631   0
         0.995325      0.02009799    -0.09446743    0
         0.09427284    0.01041905     0.9954919     0
         0             0              0             1

    SlimDX.Quaternion.RotationMatrix (which calls D3DXQuaternionRotationMatrix) gives a different answer than SlimDX.Matrix.Decompose (which uses D3DXMatrixDecompose). The answers they give (after being normalized) are:

                                       X             Y            Z             W
        Quaternion.RotationMatrix     -0.05244324    0.05137424   0.002209336   0.9972991
        Matrix.Decompose               0.6989997     0.7135442   -0.03674842   -0.03006023

    These are totally different (note that the signs of X, Z, and W differ). Note that these aren't q/-q (two quaternions that represent the same rotation); they face completely different directions. I've noticed that with matrices for rotations very close to this one (successive frames in the animation), the Matrix.Decompose version gives a solution that flips around wildly and occasionally lands on the desired orientation, while the Quaternion.RotationMatrix version gives solutions that are stable but go in the wrong direction. This happens only for the right arm in my animation -- for the left arm, both functions give the correct solution, the same quaternion within error tolerances. This makes me think there's some sort of numeric instability or weird stuff with signs going on. I tried implementing this and then this, but both gave me a completely incorrect solution (even for the matrices where the SlimDX ones were working correctly) -- maybe the rows and columns are flipped?
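
    For comparison, a sketch of the standard branch-per-largest-diagonal conversion (often attributed to Shepperd), which avoids instability by dividing by the largest of the four quaternion components. Caveat, loudly: this assumes column-vector convention with m[row, col]; D3DX matrices are row-vector style, so transpose first (swap the paired off-diagonal indices), which is exactly the rows/columns flip the asker suspects.

        using System;

        static void MatrixToQuaternion(float[,] m,
                                       out float x, out float y, out float z, out float w)
        {
            float t = m[0, 0] + m[1, 1] + m[2, 2];
            if (t > 0f)
            {
                float s = (float)Math.Sqrt(t + 1f) * 2f;           // s = 4w
                w = 0.25f * s;
                x = (m[2, 1] - m[1, 2]) / s;
                y = (m[0, 2] - m[2, 0]) / s;
                z = (m[1, 0] - m[0, 1]) / s;
            }
            else if (m[0, 0] > m[1, 1] && m[0, 0] > m[2, 2])
            {
                float s = (float)Math.Sqrt(1f + m[0, 0] - m[1, 1] - m[2, 2]) * 2f;  // s = 4x
                x = 0.25f * s;
                w = (m[2, 1] - m[1, 2]) / s;
                y = (m[0, 1] + m[1, 0]) / s;
                z = (m[0, 2] + m[2, 0]) / s;
            }
            else if (m[1, 1] > m[2, 2])
            {
                float s = (float)Math.Sqrt(1f + m[1, 1] - m[0, 0] - m[2, 2]) * 2f;  // s = 4y
                y = 0.25f * s;
                w = (m[0, 2] - m[2, 0]) / s;
                x = (m[0, 1] + m[1, 0]) / s;
                z = (m[1, 2] + m[2, 1]) / s;
            }
            else
            {
                float s = (float)Math.Sqrt(1f + m[2, 2] - m[0, 0] - m[1, 1]) * 2f;  // s = 4z
                z = 0.25f * s;
                w = (m[1, 0] - m[0, 1]) / s;
                x = (m[0, 2] + m[2, 0]) / s;
                y = (m[1, 2] + m[2, 1]) / s;
            }
        }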

  • NPOT texture and video memory usage

    - by Eonil
    I read in this QA that an NPOT texture takes as much memory as the next POT-sized texture. That would mean it gives no benefit over a properly managed POT texture (maybe it's even worse, because NPOT sampling can be slower!). Is this true? Does an NPOT texture take (and waste) the same memory as a POT texture? I am considering NPOT textures for post-processing, so if there is no memory benefit, using them is meaningless to me. The answer may differ per platform; I am targeting mobile devices, such as iPhones and Android phones. Do NPOT textures take the same amount of memory on mobile GPUs?

  • Player Movement DirectX

    - by SullY
    I'm reading a book about game development with C++ and DirectX 9. Something in it interests me: it says that player movement speed increases with the power of the CPU, because a faster CPU moves the player every frame (better CPU = better FPS = faster movement). To bypass this, it says you just have to multiply by the frame time (time * movementFactor). I'd like to know: is there another way to bypass it?
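
    The time * movementFactor trick is the standard fix: position changes by speed times elapsed time, so the frame rate stops mattering. The other common answer is a fixed timestep: accumulate real time and advance the simulation in constant slices, so game logic runs at the same speed regardless of FPS. A compact C# sketch (UpdateSimulation and Render are assumed placeholder names):

        const float Step = 1f / 60f;        // simulate at a fixed 60 Hz
        float accumulator = 0f;

        void Frame(float deltaSeconds)
        {
            accumulator += deltaSeconds;
            while (accumulator >= Step)
            {
                UpdateSimulation(Step);     // always advances by the same slice
                accumulator -= Step;
            }
            Render();                       // render as fast as the machine allows
        }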

  • Camera placement sphere for an always fully visible object

    - by BengtR
    Given an object with the bounds [x, y, z, width, height, depth], and an orthographic projection [left, right, bottom, top, near, far], I want to determine the radius of a sphere on which I can randomly place my camera, so that: (1) the object is fully visible from all positions on this sphere, and (2) the sphere radius is the smallest possible value that still satisfies (1). Assume the object is centered around the origin. How can I find this radius? I'm currently using sqrt(width^2 + height^2 + depth^2), but I'm not sure that's the correct value, as it doesn't take the camera into account. Thanks for any advice. I'm sorry for confusing a few things here; my comments below should clarify what I'm actually trying to do.
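
    A hedged note on the geometry (assuming the camera always looks at the origin, and not accounting for the asker's unstated constraints): the circumscribed sphere of a $w \times h \times d$ box centered at the origin has radius

        r = \tfrac{1}{2}\sqrt{w^2 + h^2 + d^2}

    so $\sqrt{w^2 + h^2 + d^2}$ by itself is the box's full diagonal, twice that radius. With an orthographic projection, apparent size does not change with distance, so visibility constrains only the projection rectangle and the clip planes:

        r \le \tfrac{1}{2}\min(\text{right} - \text{left},\ \text{top} - \text{bottom}), \qquad \text{near} \le R - r, \qquad R + r \le \text{far}

    under which the smallest admissible camera-sphere radius would be $R = r + \text{near}$.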

  • Avoiding orbiting in pursuit steering behavior

    - by bobobobo
    I have a missile that performs pursuit behavior to track (and try to impact) its stationary target. It works fine as long as you are not strafing when you launch the missile. If you are strafing, the missile tends to orbit its target. I fixed this by first killing the tangential component of the velocity (accelerating along -vT until vT is nearly 0), then beelining for the target (accelerating along vN). While that works, I'm looking for a more elegant solution where the missile can impact the target without explicitly killing the tangential component first.
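
    A compact sketch of one such single-phase scheme, the classic "seek" from Reynolds-style steering behaviors (C#, System.Numerics; the max-speed and max-acceleration limits are illustrative). Because the commanded acceleration is the difference between the desired velocity (straight at the target at full speed) and the current velocity, the tangential component is cancelled continuously rather than in an explicit first phase:

        using System.Numerics;

        static Vector3 SeekAccel(Vector3 pos, Vector3 vel, Vector3 target,
                                 float maxSpeed, float maxAccel)
        {
            Vector3 desired = Vector3.Normalize(target - pos) * maxSpeed;
            Vector3 steer = desired - vel;          // kills vT and builds vN at once
            if (steer.Length() > maxAccel)          // clamp to the missile's thrust
                steer = Vector3.Normalize(steer) * maxAccel;
            return steer;
        }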

  • Checking for collisions on a 3D heightmap

    - by Piku
    I have a 3D heightmap drawn using OpenGL (which isn't important). It's represented by a 2D array of height data. To draw it, I go through the array using each point as a vertex. Three vertices are wound together to form a triangle, two triangles to make a quad. To stop the whole mesh from being tiny, I scale it by a certain amount called 'gridsize'. This produces fairly nice, lumpy, angular terrain, similar to something you'd see in old Atari/Amiga or DOS '3D' games (think Virus/Zarch on the Atari ST). I'm now trying to work out how to do collision with the terrain, testing whether the player is about to collide with a piece of scenery sticking upwards or fall into a hole. At the moment I simply divide the player's coordinates by the gridsize to find which vertex the player is on top of, and it works well when the player is exactly over the corner of a triangle of terrain. However... how can I make it more accurate for the bits between the vertices? I get confused, since those points don't exist in my heightmap data; they're a product of the GPU drawing a triangle between three points. I can calculate the height of the vertex closest to the player, but not the space between vertices. I.e., if the player is hovering over the centre of one of these 'quads', rather than over a corner vertex, how do I work out the height of the terrain below them? Later on I may want the player to slide down the slopes in the terrain.
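
    The standard answer is to interpolate inside the triangle under the player. A hedged C# sketch: it assumes each quad is split along the diagonal from (x+1, z) to (x, z+1), so flip the branch if your winding splits the other way; heights is the 2D array the terrain is built from (bounds checks omitted).

        static float TerrainHeight(float[,] heights, float gridSize,
                                   float worldX, float worldZ)
        {
            float gx = worldX / gridSize, gz = worldZ / gridSize;
            int x = (int)gx, z = (int)gz;
            float fx = gx - x, fz = gz - z;            // position inside the quad, 0..1

            float h00 = heights[x, z],     h10 = heights[x + 1, z];
            float h01 = heights[x, z + 1], h11 = heights[x + 1, z + 1];

            if (fx + fz <= 1f)                         // lower-left triangle
                return h00 + fx * (h10 - h00) + fz * (h01 - h00);
            else                                       // upper-right triangle
                return h11 + (1f - fx) * (h01 - h11) + (1f - fz) * (h10 - h11);
        }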

  • InputLayout handling

    - by Kikaimaru
    Where are you supposed to store an InputLayout? Suppose I have some basic structure like:

        class Mesh
        {
            List<MeshPart> MeshParts;
        }

        class MeshPart
        {
            Effect Effect;
            VertexBufferBinding VertexBuffer;
            ...
        }

    Where should I store the input layout? It's a connection between a vertex buffer and a specific pass. I can live with just one pass, but I still have different techniques, so I need at least an array with some connection to the effect techniques, and I would appreciate something not crazy like a dictionary. I could also create wrappers for Effect and EffectTechnique, but there must be some sensible solution.
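
    One common arrangement, as a hedged sketch assuming SharpDX-style Direct3D 11 Effects types: since an InputLayout is determined by the pass's input signature plus the vertex declaration, cache it inside the technique wrapper the asker is already considering and create it lazily. That keeps MeshPart free of layout bookkeeping and avoids a global dictionary.

        class TechniqueWrapper
        {
            readonly Device device;
            readonly EffectTechnique technique;
            InputLayout layout;                     // created on first use

            public TechniqueWrapper(Device device, EffectTechnique technique)
            {
                this.device = device;
                this.technique = technique;
            }

            public InputLayout GetLayout(InputElement[] elements)
            {
                if (layout == null)
                {
                    var signature = technique.GetPassByIndex(0).Description.Signature;
                    layout = new InputLayout(device, signature, elements);
                }
                return layout;
            }
        }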

  • Collision 2D Quads

    - by Vico Pelaez
    I want to detect collision between two 2D squares: one square is static and the other moves according to the keyboard arrows. I have implemented some code, but nothing happens when they overlap each other; what I tried to achieve was to detect an overlap between them. I think I either don't understand the concept well, or it fails because one of the squares is moving. I would really appreciate your help. Thank you!

        #include <GL/glut.h>

        float x1 = 0.05f, Y1 = 0.05f;        // half-extents of square 1
        float x2 = 0.05f, Y2 = 0.05f;        // half-extents of square 2
        float posX1 = 0.5f, posY1 = 0.5f;    // position of the static square
        float movX2 = 0.0f, movY2 = 0.0f;    // position of the moving square
        bool collisionB = false;             // was used but never declared

        struct box {
            float width, heigth;
            box() : width(0.1f), heigth(0.1f) {}   // was int: 0.1 truncated to 0
        };

        void init() {
            glClearColor(0.0, 0.0, 0.0, 0.0);
            glColor3f(1.0, 1.0, 1.0);
        }

        void quad1() {
            glTranslatef(posX1, posY1, 0.0);
            glBegin(GL_POLYGON);
            glColor3f(0.5, 1.0, 0.5);
            glVertex2f(-x1, -Y1);
            glVertex2f(-x1, Y1);
            glVertex2f(x1, Y1);
            glVertex2f(x1, -Y1);
            glEnd();
        }

        void quad2() {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glPushMatrix();
            glTranslatef(movX2, movY2, 0.0);
            glBegin(GL_POLYGON);
            glColor3f(1.5, 1.0, 0.5);
            glVertex2f(-x2, -Y2);
            glVertex2f(-x2, Y2);
            glVertex2f(x2, Y2);
            glVertex2f(x2, -Y2);
            glEnd();
            glPopMatrix();
        }

        void reset() {
            // reset the position of the moving square
            movX2 = 0.0f;
            movY2 = 0.0f;
            collisionB = false;
        }

        bool collision(box A, box B) {
            // sides must come from each square's position, not just its size
            float leftA   = posX1 - A.width  / 2, rightA = posX1 + A.width  / 2;
            float bottomA = posY1 - A.heigth / 2, topA   = posY1 + A.heigth / 2;
            float leftB   = movX2 - B.width  / 2, rightB = movX2 + B.width  / 2;
            float bottomB = movY2 - B.heigth / 2, topB   = movY2 + B.heigth / 2;
            if (topA <= bottomB) return false;   // A entirely below B
            if (bottomA >= topB) return false;   // A entirely above B
            if (rightA <= leftB) return false;   // A entirely left of B
            if (leftA >= rightB) return false;   // A entirely right of B
            return true;
        }

        float move_unit = 0.1f;

        void keyboardown(int key, int x, int y) {
            switch (key) {
                case GLUT_KEY_UP:    movY2 += move_unit; break;
                case GLUT_KEY_RIGHT: movX2 += move_unit; break;
                case GLUT_KEY_LEFT:  movX2 -= move_unit; break;
                case GLUT_KEY_DOWN:  movY2 -= move_unit; break;
                default: break;
            }
            glutPostRedisplay();
        }

        void display() {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            quad1();                       // was cuad1(), which doesn't exist
            box A, B;
            if (!collision(A, B))          // was `if (!collision)`, which tests
                quad2();                   // the function pointer, always true
            else
                reset();
            glFlush();
        }

        int main(int argc, char** argv) {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
            glutInitWindowSize(500, 500);
            glutInitWindowPosition(0, 0);
            glutCreateWindow("Collision Practice");
            glutSpecialFunc(keyboardown);
            glutDisplayFunc(display);
            init();
            glutMainLoop();
        }

  • How should I choose quadtree depth?

    - by Evpok
    I'm using a quadtree to prune collision-detection pairs in a 2D world. To what depth should the quadtree be calculated? The world consists mostly of moving objects[1], so the cost of dispatching the objects between the quadtree cells matters. What is the relationship between the gain from less collision checking and the loss from more dispatching? How can I strike a balance that performs optimally?

    [1] To be completely explicit: they are autonomous self-replicating cells competing for food sources. This is an attempt to show my pupils predator-prey dynamics and genetic evolution at work.
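
    A hedged rule of thumb, as a sketch (the numbers are illustrative, not prescriptive): stop subdividing once a cell is only a few times larger than a typical object, since deeper cells mostly add dispatch cost while objects increasingly straddle cell boundaries; then tune by profiling, because the optimum depends on object count and movement.

        static int SuggestedDepth(float worldSize, float avgObjectSize)
        {
            float minCell = 4f * avgObjectSize;    // keep cells ~4x object size
            int depth = 0;
            for (float cell = worldSize; cell > minCell; cell /= 2f)
                depth++;
            return depth;
        }

    An alternative that sidesteps the question entirely: split a node only when it holds more than some threshold of objects (say 8-16), so the depth adapts to local density.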

  • Black or White Border/Shadow around PNGs in SDL/OPENGL

    - by Dylan
    having the same issue as this: Why do my sprites have a dark shadow/line/frame surrounding the texture? However, when I apply the fix suggested there (changing GL_SRC_ALPHA to GL_ONE), it just replaces the black border with a white border on the images, and messes with my background color and some polygons I'm drawing (not all of them, weirdly) by making them much lighter... any ideas? Here's some of my relevant code. Init:

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glEnable(GL_DEPTH_TEST);
        glEnable(GL_MULTISAMPLE);
        glEnable(GL_TEXTURE_2D);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
        glAlphaFunc(GL_GREATER, 0.01);
        glEnable(GL_ALPHA_TEST);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_BLEND);

    When each texture is loaded:

        glGenTextures(1, &textureID);
        glBindTexture(GL_TEXTURE_2D, textureID);
        gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, surface->w, surface->h,
                          GL_BGRA, GL_UNSIGNED_BYTE, surface->pixels);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
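
    For context on why both variants show halos: linear filtering and mipmapping blend in the RGB of fully transparent texels, which is typically black (dark border) or white (light border). A robust fix is premultiplied alpha: multiply RGB by A once at load time and blend with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) for the textured sprites only; applying GL_ONE globally is likely what washes out the untextured polygons. A hedged C# sketch of the load-time step over raw RGBA8 pixels:

        static void PremultiplyAlpha(byte[] rgba)
        {
            for (int i = 0; i < rgba.Length; i += 4)
            {
                int a = rgba[i + 3];
                rgba[i + 0] = (byte)(rgba[i + 0] * a / 255);  // R
                rgba[i + 1] = (byte)(rgba[i + 1] * a / 255);  // G
                rgba[i + 2] = (byte)(rgba[i + 2] * a / 255);  // B
            }
        }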

  • Why isn't the lighting continuous on my model?

    - by nosferat
    I created a basic textured cube model with Blender to practice modeling, and then imported it into Unity. After I set up some lighting, it looks pretty ugly: the light is not continuous across a row of textured cubes. What is more odd, the light on the blocks that make up the floor is continuous. What am I doing wrong? UPDATE: This is how it looks without textures: https://dl.dropbox.com/u/45620018/without%20textures.PNG If I did not know these are perfect cubes, I'd say there is a slight curve on the surface. I also tried lightening the texture, but that didn't help either: https://dl.dropbox.com/u/45620018/lighter%20texture.PNG I simply exported the model from Blender and did not set up any normals or anything like that. However, I also did not do anything special with the floor brick model.

  • Edge flicker when moving Camera (2D)

    - by Matthias Reisner
    I have an orthographic camera, a fixed landscape texture, and a texture for a movable object. If the object moves to the right, the camera also moves with the object. When I also draw a score text that should have a fixed position on the screen, that text's position gets updated whenever the camera's position does, so that it looks fixed on the screen. But when I do that, I get some edge flicker on the text object. I'm using SpriteBatch! Is there another approach to implementing an object with a fixed position on the screen?
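
    A common fix, sketched here assuming an XNA-style SpriteBatch with a transform overload (the same two-camera pattern exists in libgdx): draw the HUD in a second pass whose transform never moves, instead of re-positioning the text by the scrolling camera each frame. Sub-pixel camera positions are the usual source of the flicker, and a HUD pass with stable integer screen coordinates avoids them entirely. DrawWorld, cameraTransform, font, and score are assumed names:

        // world pass: moves with the camera
        spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null,
                          cameraTransform);
        DrawWorld(spriteBatch);                 // placeholder for the scene
        spriteBatch.End();

        // HUD pass: identity transform, so screen coordinates never change
        spriteBatch.Begin();
        spriteBatch.DrawString(font, score.ToString(), new Vector2(10, 10), Color.White);
        spriteBatch.End();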

  • Want to build a replica of chartgame.com

    - by raj
    I want to develop a trading simulator based on technical analysis. My ideal application would be exactly chartgame.com. Currently chartgame.com doesn't have historical data for stocks beyond the year 2008; I would like to have data up to 2012, with the capability to extend beyond that if needed. What are the fundamentals needed to build an application like chartgame.com? If anyone here is willing to help, I can arrange for the finances. Let me know.

  • Threading on the iPhone

    - by bobobobo
    Say I have a group of large meshes that I have to intersect rays against. Assume also, for whatever reason, that I cannot further reduce the polygon-check count by spatial subdivision. I can do this in parallel:

        bool intersects( list of meshes )   // a mesh is a group of triangles
        {
            create n threads
            foreach mesh in meshes
                assign to a thread in threads
            wait until ( threads.run() ) ;  // run asynchronously
            // when they're all done,
            // pull out intersected triangles
            // from per-thread context data
        }

    Can you do this in iOS for games? Or is the overhead of thread creation and mutex waiting going to beat out the benefit of multithreading?
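
    On the overhead question, the usual answer is to keep a persistent pool and avoid creating threads per query. A platform-neutral C# sketch of the same loop running on pooled threads (on iOS the analogue would be a GCD queue or a reused worker pool; Mesh, Ray, Triangle, and Intersects are assumed placeholder types):

        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        static List<Triangle> IntersectAll(IEnumerable<Mesh> meshes, Ray ray)
        {
            var hits = new ConcurrentBag<Triangle>();    // no mutex around results
            Parallel.ForEach(meshes, mesh =>             // runs on pooled threads
            {
                foreach (var tri in mesh.Triangles)
                    if (Intersects(ray, tri))
                        hits.Add(tri);
            });
            return new List<Triangle>(hits);
        }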

  • What is going on in this SAT/vector projection code?

    - by ssb
    I'm looking at the example XNA SAT collision code presented here: http://www.xnadevelopment.com/tutorials/rotatedrectanglecollisions/rotatedrectanglecollisions.shtml See the following code:

        private int GenerateScalar(Vector2 theRectangleCorner, Vector2 theAxis)
        {
            //Using the formula for Vector projection. Take the corner being passed in
            //and project it onto the given Axis
            float aNumerator = (theRectangleCorner.X * theAxis.X) +
                               (theRectangleCorner.Y * theAxis.Y);
            float aDenominator = (theAxis.X * theAxis.X) + (theAxis.Y * theAxis.Y);
            float aDivisionResult = aNumerator / aDenominator;
            Vector2 aCornerProjected = new Vector2(aDivisionResult * theAxis.X,
                                                   aDivisionResult * theAxis.Y);

            //Now that we have our projected Vector, calculate a scalar of that projection
            //that can be used to more easily do comparisons
            float aScalar = (theAxis.X * aCornerProjected.X) +
                            (theAxis.Y * aCornerProjected.Y);
            return (int)aScalar;
        }

    I think the problems I'm having with this come mostly from translating physics concepts into data structures. For example, earlier in the code there is a calculation of the axes to be used, and these are stored as Vector2s; they are found by subtracting one point from another, but those points are also stored as Vector2s. So are the axes being stored as slopes in a single Vector2? Next, what exactly does the Vector2 produced by the vector projection code represent? That is, I know it represents the projected vector, but as a Vector2, what does it mean? A point on a line? Finally, what does the scalar at the end actually represent? It's fine to tell me that you're getting a scalar value of the projected vector, but none of the information I can find online tells me what a scalar of a vector means in this context. I don't see angles or magnitudes with these vectors, so I'm a little disoriented when it comes to thinking in terms of physics. If this final scalar calculation is just a dot product, how is that directly applicable to SAT from here on? Is this what I use to calculate maximum/minimum values for overlap? I guess I'm just having trouble figuring out exactly what the dot product represents in this particular context. Clearly I'm not quite up to date on my elementary physics, but any explanations would be greatly appreciated.
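
    A few observations that may untangle the representations, followed by a sketch. The axes are direction vectors (one corner minus another), not slopes; the projected Vector2 is indeed a point on the axis line; and the final scalar algebraically reduces to a plain dot product, since a . ((c.a / a.a) a) = c.a. So for SAT all you need per axis is the [min, max] of the corners' dot products for each rectangle, and a check that those intervals overlap; a hedged C# sketch:

        using System;
        using Microsoft.Xna.Framework;    // for Vector2, as in the tutorial

        static bool OverlapOnAxis(Vector2[] cornersA, Vector2[] cornersB, Vector2 axis)
        {
            float minA = float.MaxValue, maxA = float.MinValue;
            foreach (var c in cornersA)
            {
                float s = c.X * axis.X + c.Y * axis.Y;   // the tutorial's "scalar"
                minA = Math.Min(minA, s);
                maxA = Math.Max(maxA, s);
            }

            float minB = float.MaxValue, maxB = float.MinValue;
            foreach (var c in cornersB)
            {
                float s = c.X * axis.X + c.Y * axis.Y;
                minB = Math.Min(minB, s);
                maxB = Math.Max(maxB, s);
            }

            // a gap between the intervals on any axis means no collision
            return maxA >= minB && maxB >= minA;
        }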

  • glTranslate, how exactly does it work?

    - by mykk
    I have some trouble understanding how glTranslate works. At first I thought it would simply add values to an axis to do the transformation. But then I created two objects that load bitmaps; one has its matrix set to GL_TEXTURE:

        public class Background {
            float[] vertices = new float[] {
                0f, -1f, 0.0f,
                4f, -1f, 0.0f,
                0f,  1f, 0.0f,
                4f,  1f, 0.0f
            };
            ....
            private float backgroundScrolled = 0;
            public void scrollBackground(GL10 gl) {
                gl.glLoadIdentity();
                gl.glMatrixMode(GL10.GL_MODELVIEW);
                gl.glTranslatef(0f, 0f, 0f);
                gl.glPushMatrix();
                gl.glLoadIdentity();
                gl.glMatrixMode(GL10.GL_TEXTURE);
                gl.glTranslatef(backgroundScrolled, 0.0f, 0.0f);
                gl.glPushMatrix();
                this.draw(gl);
                gl.glPopMatrix();
                backgroundScrolled += 0.01f;
                gl.glLoadIdentity();
            }
        }

    and another to GL_MODELVIEW:

        public class Box {
            float[] vertices = new float[] {
                0.5f, 0f,   0.0f,
                1f,   0f,   0.0f,
                0.5f, 0.5f, 0.0f,
                1f,   0.5f, 0.0f
            };
            ....
            private float boxScrolled = 0;
            public void scrollBackground(GL10 gl) {
                gl.glMatrixMode(GL10.GL_MODELVIEW);
                gl.glLoadIdentity();
                gl.glTranslatef(0f, 0f, 0f);
                gl.glPushMatrix();
                gl.glMatrixMode(GL10.GL_MODELVIEW);
                gl.glLoadIdentity();
                gl.glTranslatef(boxScrolled, 0.0f, 0.0f);
                gl.glPushMatrix();
                this.draw(gl);
                gl.glPopMatrix();
                boxScrolled += 0.01f;
                gl.glLoadIdentity();
            }
        }

    Both are drawn in Renderer.OnDraw. However, the background moves exactly 5 times faster. If I multiply boxScrolled by 5, they are in sync and move together. If I modify the background's vertices to be

        float[] vertices = new float[] {
            1f, -1f, 0.0f,
            0f, -1f, 0.0f,
            1f,  1f, 0.0f,
            0f,  1f, 0.0f
        };

    it is also in sync with the box. So, what is going on under glTranslate?

  • Questions before I revamp my rendering engine to use shaders (GLSL)

    - by stephelton
    I've written a fairly robust rendering engine using OpenGL ES 1.1 (fixed-function). I've been looking into revamping the engine to use OpenGL ES 2.0, which necessitates that I use shaders. I've been absorbing information all day long and still have some questions. Firstly, lighting. The fixed-function pipeline is guaranteed to have at least 8 lights available. My current engine finds lights that are "close" to the primitives being drawn and enables them; I don't know how many lights will be enabled until I draw a given model. Nothing is dynamically allocated in GLSL, so I have to define some fixed number of lights in a shader, right? So if I want to stick with 8, should I write my general-purpose shader to take 8 lights and then use uniforms to tell it how many / which lights to use? Which brings me to another question: should I be concerned about the amount of data I'm allocating in a shader? Recent video cards have hundreds of "stream processors". If a fragment shader is being run on some number of fragments in a given triangle, I assume each must have its own stack to work on. Are read-only variables copied there, or read when needed? My initial goal is to rework my code so that it is virtually identical to the current implementation. What I have in mind is to create my own matrix stack so that I can implement something along the lines of push/popMatrix, apply all my translations, rotations, and scales to this matrix, and then provide the matrix to the vertex shader so that it can do very quick vertex transformations. Is this approach sound? Edit: My original intention was to ask whether there is a tutorial that explains the bare minimum necessary to jump from fixed-function to using shaders. Thanks!
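
    A hedged sketch of the matrix-stack plan (C# with System.Numerics as a stand-in math library; the GLSL side would receive the result as a single uniform mat4). It mirrors the fixed-function push/pop semantics the engine already relies on; the multiplication order below assumes row-vector convention, so flip it if your math library differs:

        using System.Collections.Generic;
        using System.Numerics;

        class MatrixStack
        {
            readonly Stack<Matrix4x4> stack = new Stack<Matrix4x4>();
            public Matrix4x4 Top { get; private set; } = Matrix4x4.Identity;

            public void Push() => stack.Push(Top);
            public void Pop() => Top = stack.Pop();

            public void Translate(float x, float y, float z)
                => Top = Matrix4x4.CreateTranslation(x, y, z) * Top;
            public void RotateY(float radians)
                => Top = Matrix4x4.CreateRotationY(radians) * Top;
            public void Scale(float s)
                => Top = Matrix4x4.CreateScale(s) * Top;

            // upload Top as the model-view uniform before each draw call
        }

    For the lighting question, the common pattern matches the asker's guess: declare a fixed-size uniform array of lights in the shader plus an int uniform for how many are active, and loop over that count.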

  • C++ DirectX 11 D3DXVECTOR3 doesn't allow me to divide it

    - by Miguel P
    If I have a simple vector3 like this:

        D3DXVECTOR3 inversevector = D3DXVECTOR3( (pos + lookat_pos) );

    it works perfectly! But let's say I wanted to multiply it by:

        Speed * (float)timeHandler.GetDelta()

    So:

        D3DXVECTOR3 inversevector = D3DXVECTOR3( (pos + lookat_pos) * Speed * (float)timeHandler.GetDelta() );

    Now this fails completely. I've used this snippet before, but for some weird reason it simply won't work (the vector's x, y, z somehow end up at or near 0, no idea why). Do you have any idea why?
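
    A hedged guess at the cause, since the symptom (x, y, z collapsing toward 0) matches scaling an absolute position by a small per-frame delta: (pos + lookat_pos) is a sum of positions, and multiplying the whole sum by Speed * delta (roughly 0.016 at 60 FPS) shrinks it toward the origin. If the intent is to step from pos toward lookat_pos, scale only the offset between them; a sketch in C# (System.Numerics), since the fix is about the arithmetic, not the API:

        using System.Numerics;

        // wrong: scales the absolute position, so a small delta yields a near-zero vector
        // Vector3 v = (pos + lookatPos) * speed * delta;

        // likely intent: move from pos toward lookatPos at `speed` units per second
        Vector3 dir = Vector3.Normalize(lookatPos - pos);
        Vector3 newPos = pos + dir * speed * delta;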

  • A way to store potentially infinite 2D map data?

    - by Blam
    I have a 2D platformer that can currently handle chunks of 100 by 100 tiles; chunk coordinates are stored as longs, so that is the only limit on map size (maxlong * maxlong). All entity positions etc. are chunk-relative, so there is no limit there. The problem I'm having is how to store and access these chunks without having thousands of files. Any ideas for a preferably quick, low-disk-cost archive format that doesn't need to load everything at once?
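
    One proven pattern is the region file (a hedged sketch; this is roughly what Minecraft-style region storage does): pack, say, 32x32 chunks per file, keyed by the chunk coordinate shifted right by 5, with fixed-size slots so one chunk can be read or rewritten with a single seek and no full-file load. The slot size and file naming below are assumptions; a production version would add compression and an offset table for variable-size chunks.

        using System.IO;

        class RegionStore
        {
            const int ChunksPerAxis = 32;            // 32 x 32 chunks per region file
            const int SlotBytes = 64 * 1024;         // fixed slot per chunk (assumption)

            static string RegionPath(long cx, long cy)
                => $"region.{cx >> 5}.{cy >> 5}.dat";

            static long SlotOffset(long cx, long cy)
            {
                long lx = cx & (ChunksPerAxis - 1);  // chunk position inside the region
                long ly = cy & (ChunksPerAxis - 1);
                return (ly * ChunksPerAxis + lx) * SlotBytes;
            }

            public static byte[] ReadChunk(long cx, long cy)
            {
                using var f = File.Open(RegionPath(cx, cy), FileMode.OpenOrCreate);
                f.Seek(SlotOffset(cx, cy), SeekOrigin.Begin);
                var buf = new byte[SlotBytes];
                f.Read(buf, 0, SlotBytes);           // short read => empty slot
                return buf;                          // deserialize tiles from buf
            }

            public static void WriteChunk(long cx, long cy, byte[] data)
            {
                using var f = File.Open(RegionPath(cx, cy), FileMode.OpenOrCreate);
                f.Seek(SlotOffset(cx, cy), SeekOrigin.Begin);
                f.Write(data, 0, System.Math.Min(data.Length, SlotBytes));
            }
        }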
