Search Results

Search found 19281 results on 772 pages for 'blender game engine'.


  • Playing NSF music in FMOD.net

    - by Tesserex
    So, as the title says, I want to be able to play NSF files using FMOD, because my project already uses FMOD and I'd rather not replace it. This will involve figuring out how existing players and emulators work and porting the relevant parts. I haven't yet found an existing player that uses FMOD. My starting point is the MyNes source from http://sourceforge.net/projects/mynes/. There are two big steps between here and what I'm looking for: MyNes plays from a ROM, not an NSF, so I have to rip out the APU and get it to play NSF files; and the MyNes APU uses SlimDX, so I have to convert that to FMOD.NET. I am really stuck on how to go about either of these, because I'm not that familiar with audio formats and it's hard to find resources online. So here are a few questions:
    1. From what I can tell from the NSF spec at http://kevtris.org/nes/nsfspec.txt, an NSF just contains the relevant memory section of the ROM, plus the header. If anyone can verify or correct this, that would be great.
    2. The emulator APU uses data from the rest of the emulator to play, including things like cycle counts. I'm not sure what replaces this in a standalone player. Can't I just load all the music data at once into a stream and play it?
    3. Joining #1 and #2, does the header data from the NSF substitute for some of the ROM data in the emulator code?
    4. Using FMOD, will I be following the usercreatedsound example for loading a stream? And does this format count as PCM? (MyNes specifically says PCM8.) Any tips on loading and playing the stream in FMOD are appreciated.
    As an aside, I don't really understand the loading/playing sections of the spec I linked at all; they seem to apply to 6502 systems and emulators only, not to my situation. I know it's a long shot for anyone here to have enough experience in this area to help, but anything you can provide is definitely appreciated. A link to an existing .NET library that does this would be even better, but I don't believe one exists.
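
    On the FMOD side, a minimal sketch of a user-created stream is below. It assumes the FMOD Ex .NET wrapper; the exact field names and callback signature are from memory and should be treated as assumptions, and FillFromApu is a hypothetical bridge to the ported APU. Note that an NSF itself is 6502 code plus data, not PCM; the emulated APU is what produces the PCM this stream would feed to FMOD.

        // Sketch: user-created PCM stream (assumed FMOD Ex .NET wrapper API).
        FMOD.System system = null;
        FMOD.Factory.System_Create(ref system);
        system.init(32, FMOD.INITFLAGS.NORMAL, IntPtr.Zero);

        FMOD.CREATESOUNDEXINFO exinfo = new FMOD.CREATESOUNDEXINFO();
        exinfo.cbsize = System.Runtime.InteropServices.Marshal.SizeOf(exinfo);
        exinfo.numchannels = 1;                   // NES APU output is mono
        exinfo.defaultfrequency = 44100;
        exinfo.format = FMOD.SOUND_FORMAT.PCM8;   // matches MyNes' PCM8 output
        exinfo.length = 44100;                    // one second of buffered data
        // Keep a reference to the delegate so the GC doesn't collect it.
        exinfo.pcmreadcallback = new FMOD.SOUND_PCMREADCALLBACK(PcmRead);

        FMOD.Sound sound = null;
        system.createSound((string)null, FMOD.MODE.OPENUSER | FMOD.MODE.CREATESTREAM,
                           ref exinfo, ref sound);

        FMOD.Channel channel = null;
        system.playSound(FMOD.CHANNELINDEX.FREE, sound, false, ref channel);

        // The callback is where the emulated APU runs: step it for enough
        // cycles to produce datalen bytes, then write them into 'data'.
        FMOD.RESULT PcmRead(IntPtr soundraw, IntPtr data, uint datalen)
        {
            FillFromApu(data, datalen); // hypothetical bridge to the ported APU
            return FMOD.RESULT.OK;
        }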


  • Sprite Kit - containsPoint for SKPhysicsBody?

    - by gj15987
    I have a ball bouncing around the screen. I can pick it up and drag it onto a "bucket". When my touches finish, I use the containsPoint function to check whether I have dropped the ball onto the bucket. This works fine; however, I actually want to check whether the ball was dropped onto the bucket node's physics body. My "bucket" is really just an oval, so I've applied a physics body of the same shape, so that the white space around the oval isn't included in the physics simulation. I can't seem to find a "containsPoint" function for physics bodies. Can anyone advise on how I'd check for this? To summarise, I want to drop a node onto a specific part of another node (or its physics body) and trigger an event. Thanks in advance.
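
    Since the bucket is an oval, one workaround that avoids querying the physics body at all is a plain point-in-ellipse test against the oval's bounding box. A minimal sketch of the math (plain C#; the caller supplies the oval's center and half-extents, e.g. taken from the node's frame):

        // Sketch: is (px, py) inside the ellipse centered at (cx, cy)
        // with half-width halfW and half-height halfH?
        static bool EllipseContains(float px, float py,
                                    float cx, float cy, float halfW, float halfH)
        {
            // Normalize into the ellipse's unit circle, then test the radius.
            float nx = (px - cx) / halfW;
            float ny = (py - cy) / halfH;
            return nx * nx + ny * ny <= 1f;
        }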


  • Collision filtering by object, team

    - by Bill Zimmerman
    Hi, I am looking for a good method to determine which objects will be considered for collision with other objects. My current idea is that each object has the following properties:
    alwaysCollidesWith = [list of objects that will always trigger a collision check]
    neverCollidesWith = [list of objects that will never be considered]
    teamCollidesWith = [list of objects that will be checked, provided they belong to a different team]
    For example:
    - projectiles never have to be checked for collisions with other projectiles
    - players are always checked for collisions with players, regardless of team
    - projectiles are only considered for collisions if they collide with another team's players
    Does anyone see any weaknesses with this approach? Can anyone recommend a better approach?
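
    A common alternative (it's the scheme Box2D uses for its collision filtering) is a pair of bitmasks plus a team id, which collapses the three lists into two integer compares per pair. A C# sketch of that idea:

        using System;

        [Flags]
        enum Category { None = 0, Player = 1, Projectile = 2, Wall = 4 }

        class Collider
        {
            public Category Self;         // what this object is
            public Category CollidesWith; // categories it is checked against
            public int Team;

            // A pair is considered only if each side accepts the other's
            // category; projectiles additionally ignore same-team players.
            public static bool ShouldCheck(Collider a, Collider b)
            {
                if ((a.CollidesWith & b.Self) == 0 || (b.CollidesWith & a.Self) == 0)
                    return false;
                bool projectileVsPlayer =
                    (a.Self == Category.Projectile && b.Self == Category.Player) ||
                    (b.Self == Category.Projectile && a.Self == Category.Player);
                return !projectileVsPlayer || a.Team != b.Team;
            }
        }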


  • Billboard shader without distortion

    - by Nick Wiggill
    I use the standard approach to billboarding within Unity, transform.LookAt(camera), which is OK but not ideal: it introduces distortion toward the edges of the viewport, especially as the field of view angle grows larger. This is unlike the perfect billboarding you'd see in e.g. Doom, where an enemy looks the same from any angle and irrespective of where it is located in screen space. Obviously there are ways to blit an image directly to the viewport, centred around a single vertex, but I'm not hot on shaders. Does anyone have any samples of this approach (GLSL if possible), or any suggestions as to why it isn't typically done this way (vs. the aforementioned quad transformation method)? EDIT: I was confused, thanks Nathan for the heads up. Of course, causing the quads to look at the camera does not cause them to be parallel to the view plane -- which is what I need.
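
    For reference, the view-plane-aligned variant needs no shader at all in Unity: give every quad the camera's rotation and they all stay parallel to the view plane, which removes the edge distortion that LookAt produces. A minimal sketch:

        using UnityEngine;

        // Sketch: view-plane-aligned billboard. Unlike transform.LookAt(camera),
        // all billboards share the camera's orientation, so every quad is
        // parallel to the view plane wherever it sits in screen space.
        public class Billboard : MonoBehaviour
        {
            void LateUpdate()
            {
                transform.rotation = Camera.main.transform.rotation;
            }
        }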


  • Why does setting a geometry shader cause my sprites to vanish?

    - by ChaosDev
    My application has multiple screens with different tasks. Once I set a geometry shader on the device context for my custom terrain, it works and I get the desired results. But then when I get back to the main menu, all sprites and text disappear. The sprites don't disappear when I use only pixel and vertex shaders. The sprites are drawn through D3D11, of course, with specified view and projection matrices as well as an input layout, vertex shader, and pixel shader. I'm trying DeviceContext->ClearState(), but it does not help. Any ideas?

        void gGeometry::DrawIndexedWithCustomEffect(gVertexShader* vs, gPixelShader* ps, gGeometryShader* gs = nullptr)
        {
            unsigned int offset = 0;
            auto context = mp_D3D->mp_Context;

            // set topology
            context->IASetPrimitiveTopology(m_Topology);
            // set input layout
            context->IASetInputLayout(mp_inputLayout);
            // set vertex and index buffers
            context->IASetVertexBuffers(0, 1, &mp_VertexBuffer->mp_Buffer, &m_VertexStride, &offset);
            context->IASetIndexBuffer(mp_IndexBuffer->mp_Buffer, mp_IndexBuffer->m_DXGIFormat, 0);

            // send constant buffers to shaders
            context->VSSetConstantBuffers(0, vs->m_CBufferCount, vs->m_CRawBuffers.data());
            context->PSSetConstantBuffers(0, ps->m_CBufferCount, ps->m_CRawBuffers.data());

            if (gs != nullptr)
            {
                context->GSSetConstantBuffers(0, gs->m_CBufferCount, gs->m_CRawBuffers.data());
                context->GSSetShader(gs->mp_D3DGeomShader, 0, 0); // after this call all sprites disappear
                // note: the geometry shader is never unbound afterwards, so it
                // stays active for every later draw call, including the sprite pass
            }

            // set shaders
            context->VSSetShader(vs->mp_D3DVertexShader, 0, 0);
            context->PSSetShader(ps->mp_D3DPixelShader, 0, 0);

            // draw
            context->DrawIndexed(m_indexCount, 0, 0);
        }

        // sprites
        void gSpriteDrawer::Draw(gTexture2D* texture, const RECT& dest, const RECT& source,
                                 const Matrix& spriteMatrix, const float& rotation,
                                 Vector2d& position, const Vector2d& origin, const Color& color)
        {
            VertexPositionColorTexture* verticesPtr;
            D3D11_MAPPED_SUBRESOURCE mappedResource;
            unsigned int TriangleVertexStride = sizeof(VertexPositionColorTexture);
            unsigned int offset = 0;

            float halfWidth  = (float)dest.right  / 2.0f;
            float halfHeight = (float)dest.bottom / 2.0f;
            float z = 0.1f;
            int w = texture->Width();
            int h = texture->Height();

            float tu = (float)source.right  / (w);
            float tv = (float)source.bottom / (h);
            float hu = (float)source.left   / (w);
            float hv = (float)source.top    / (h);

            Vector2d t0 = Vector2d(hu + tu, hv);
            Vector2d t1 = Vector2d(hu + tu, hv + tv);
            Vector2d t2 = Vector2d(hu,      hv + tv);
            Vector2d t3 = Vector2d(hu,      hv + tv);
            Vector2d t4 = Vector2d(hu,      hv);
            Vector2d t5 = Vector2d(hu + tu, hv);

            float ex = (dest.right  / 2) + (origin.x);
            float ey = (dest.bottom / 2) + (origin.y);

            Vector4d v4Color = Vector4d(color.r, color.g, color.b, color.a);
            VertexPositionColorTexture vertices[] =
            {
                { Vector3d(dest.right - ex, -ey,              z), v4Color, t0 },
                { Vector3d(dest.right - ex, dest.bottom - ey, z), v4Color, t1 },
                { Vector3d(-ex,             dest.bottom - ey, z), v4Color, t2 },
                { Vector3d(-ex,             dest.bottom - ey, z), v4Color, t3 },
                { Vector3d(-ex,             -ey,              z), v4Color, t4 },
                { Vector3d(dest.right - ex, -ey,              z), v4Color, t5 },
            };

            auto mp_context = mp_D3D->mp_Context;

            // Lock the vertex buffer so it can be written to.
            mp_context->Map(mp_vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
            // Get a pointer to the data in the vertex buffer.
            verticesPtr = (VertexPositionColorTexture*)mappedResource.pData;
            // Copy the data into the vertex buffer.
            memcpy(verticesPtr, (void*)vertices, (sizeof(VertexPositionColorTexture) * 6));
            // Unlock the vertex buffer.
            mp_context->Unmap(mp_vertexBuffer, 0);

            // set vertex buffer
            mp_context->IASetVertexBuffers(0, 1, &mp_vertexBuffer, &TriangleVertexStride, &offset);
            // set texture
            mp_context->PSSetShaderResources(0, 1, &texture->mp_SRV);
            // set matrix to shader
            mp_context->UpdateSubresource(mp_matrixBuffer, 0, 0, &spriteMatrix, 0, 0);
            mp_context->VSSetConstantBuffers(0, 1, &mp_matrixBuffer);
            // draw sprite
            mp_context->Draw(6, 0);
        }


  • How to achieve best performance in DirectX 9.0 while rendering on multiple monitors

    - by Vibhore Tanwer
    I am new to DirectX and trying to learn best practice. What are the best practices for rendering different things on multiple monitors at the same time, and how can I boost the performance of the application? I have gone through this article: http://msdn.microsoft.com/en-us/library/windows/desktop/bb147263%28v=vs.85%29.aspx. I am making use of some pixel shaders to achieve some effects; at most 4 effects (4 shader effects) can be applied at the same time. What are the best practices to achieve the best performance with DirectX 9.0? I read somewhere that DirectX 11 provides support for parallel rendering, but I am not able to find any working sample for DirectX 11.0. Please help me with this; any help would be of great value. Thanks


  • libgdx sprite position relative to body

    - by While-E
    Apologies if this is a reiteration, as I couldn't find another discussion of this over the past couple of days. Issue: I'm using libgdx and Box2D, and I'm currently updating the sprite's position to the body's current position every render call. Using a debugRenderer to see the bodies, I see that there is a fairly noticeable lag between the movement/position of the body and the sprite that is being moved relative to it. Question: Is this lag normal, possibly to perform collisions ahead of time? If not, should I be manipulating/relating the positions differently? Thanks in advance! [Solution] This was a coding error on my part. As pointed out by a good reply below, I was updating the position of the sprite relative to the body and then stepping the physics, thus never actually setting the sprite to the body's CURRENT position. Thanks!
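
    The ordering fix described in the [Solution], in a generic C# game loop (the names are illustrative, not libgdx API): step the world first, then copy the body's transform to the sprite, so the sprite can never trail the physics by one frame.

        // Sketch: physics first, then sync the visual.
        void Render(float delta)
        {
            world.Step(delta, 8, 3);          // advance physics

            sprite.Position = body.Position;  // copy the *post-step* transform
            sprite.Rotation = body.Angle;

            batch.Draw(sprite);               // draw with up-to-date data
        }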


  • OpenGL 3.0+ framebuffer to texture/images

    - by user827992
    I need a way to capture what is rendered on screen. I have read about glReadPixels, but it looks really slow. Can you suggest a more efficient or alternative way to copy what is rendered by OpenGL 3.0+ into local RAM, and in general to output it as an image or a data stream? How can I achieve the same goal with OpenGL ES 2.0? EDIT: I just forgot: with these OpenGL functions, how can I be sure that I'm actually reading a complete frame, meaning that there is no overlap between two frames or any other nasty side effect, and that the frame I'm reading is the one that comes right after the previous one, so I don't lose frames?
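
    A common alternative on desktop GL is an asynchronous read through a pixel buffer object: with a pack buffer bound, glReadPixels returns immediately, and you map the buffer a frame later, hiding the stall. A sketch using OpenTK's C# bindings (width, height, and the pixels byte array are assumed to exist; unextended OpenGL ES 2.0 has no pack buffers, so there you are limited to plain glReadPixels):

        using System;
        using OpenTK.Graphics.OpenGL;

        // Sketch: asynchronous framebuffer readback via a pixel pack buffer.
        int pbo = GL.GenBuffer();
        int size = width * height * 4;

        GL.BindBuffer(BufferTarget.PixelPackBuffer, pbo);
        GL.BufferData(BufferTarget.PixelPackBuffer, (IntPtr)size, IntPtr.Zero,
                      BufferUsageHint.StreamRead);

        // With a pack buffer bound, the last argument is an offset into the
        // buffer rather than a client pointer, so this call does not block.
        GL.ReadPixels(0, 0, width, height, PixelFormat.Bgra,
                      PixelType.UnsignedByte, IntPtr.Zero);

        // ...next frame: map the buffer and copy the completed pixels out.
        IntPtr src = GL.MapBuffer(BufferTarget.PixelPackBuffer, BufferAccess.ReadOnly);
        System.Runtime.InteropServices.Marshal.Copy(src, pixels, 0, size);
        GL.UnmapBuffer(BufferTarget.PixelPackBuffer);
        GL.BindBuffer(BufferTarget.PixelPackBuffer, 0);

    On frame completeness: issuing the read after all draw calls for a frame (and before swapping) yields that frame only, since GL orders the read after the preceding draws on the same context.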


  • How to set a target as an image [on hold]

    - by Zadalaxmi
    How do I set a target as an image in the given code?

        public void addListenerForImage(final Image roomImage) {
            final DragAndDrop dragAndDrop = new DragAndDrop();
            dragAndDrop.addSource(new DragAndDrop.Source(roomImage) {
                public DragAndDrop.Payload dragStart(InputEvent event, float x, float y, int pointer) {
                    DragAndDrop.Payload payload = new DragAndDrop.Payload();
                    payload.setDragActor(roomImage);
                    dragAndDrop.setDragActorPosition(-x, -y + roomImage.getHeight());
                    return payload;
                }

                public void dragStop(InputEvent event, float x, float y, int pointer, Target target) {
                    roomImage.setBounds(50, 125, roomImage.getWidth(), roomImage.getHeight());
                    if (target != null) {
                        roomImage.setPosition(target.getActor().getX(), target.getActor().getY());
                    }
                    System.out.println(target);
                    stage.addActor(roomImage);
                }
            });
        }

    My problem is that I can drag the images, but I am not able to set the target as an image; the target shows as null. One more thing: if I make some of the images in a group invisible, how can I test whether one is overlapped or not? Please give some links and suggestions.


  • Formula for replicating glTexGen in OpenGL ES 2.0 GLSL

    - by visualjc
    I also posted this on the main StackExchange, but this seems like a better place, so forgive me for the double post if it shows up twice. I have been trying for several hours to implement a GLSL replacement for glTexGen with GL_OBJECT_LINEAR for OpenGL ES 2.0. In desktop GLSL there is the gl_TextureMatrix that makes this easier, but that's not available in OpenGL ES 2.0 / OpenGL ES Shading Language 1.0. Several sites have mentioned that this should be "easy" to do in a GLSL vertex shader, but I just can not get it to work. My hunch is that I'm not setting the planes up correctly, or I'm missing something in my understanding. I've pored over the web, but most sites are talking about projected textures; I'm just looking to create UVs based on planar projection. The models are being built in Maya, have 50k polygons, and the modeler is using planar mapping, but Maya will not export the UVs, so I'm trying to figure this out. I've looked at the glTexGen manpage information:
        g = p1*xo + p2*yo + p3*zo + p4*wo
    What is g? Is g the value of s in the texture2D call? I've looked at http://www.opengl.org/wiki/Mathematics_of_glTexGen. Another site explains the same function as:
        coord = P1*X + P2*Y + P3*Z + P4*W
    I don't get how coord (a UV vec2 in my mind) can equal a dot product (a scalar value). Same problem I had before with g. And what do I set the plane to be? In my OpenGL 3.0 C++ code, I set it to [0, 0, 1, 0] (basically unit z) and glTexGen works great. I'm still missing something. My vertex shader looks basically like this (WVPMatrix is the world-view-projection matrix; POSITION is the model vertex position):

        varying vec4 kOutBaseTCoord;
        void main()
        {
            gl_Position = WVPMatrix * vec4(POSITION, 1.0);

            vec4 sPlane = vec4(1.0, 0.0, 0.0, 0.0);
            vec4 tPlane = vec4(0.0, 1.0, 0.0, 0.0);
            vec4 rPlane = vec4(0.0, 0.0, 0.0, 0.0);
            vec4 qPlane = vec4(0.0, 0.0, 0.0, 0.0);

            kOutBaseTCoord.s = dot(vec4(POSITION, 1.0), sPlane);
            kOutBaseTCoord.t = dot(vec4(POSITION, 1.0), tPlane);
            //kOutBaseTCoord.r = dot(vec4(POSITION, 1.0), rPlane);
            //kOutBaseTCoord.q = dot(vec4(POSITION, 1.0), qPlane);
        }

    The fragment shader:

        precision mediump float;
        uniform sampler2D BaseSampler;
        varying mediump vec4 kOutBaseTCoord;
        void main()
        {
            //gl_FragColor = vec4(kOutBaseTCoord.st, 0.0, 1.0);
            gl_FragColor = texture2D(BaseSampler, kOutBaseTCoord.st);
        }

    I've also tried texture2DProj in the fragment shader. Here is one of the other links I've looked up: http://www.gamedev.net/topic/407961-texgen-not-working-with-glsl-with-fixed-pipeline-is-ok/ Thank you in advance.
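
    On the question of how coord can equal a dot product: it can't as a vec2; g in the spec is a single scalar component, and each of s, t, r, q has its own plane, so object-linear texgen is up to four independent dot products. A CPU-side C# sketch of the same math (System.Numerics; the example plane choice mirrors the setup above):

        using System.Numerics;

        // Sketch: object-linear texture coordinates. Each UV component is a
        // separate dot product of the object-space position with one plane.
        static Vector2 ObjectLinearUV(Vector3 position, Vector4 sPlane, Vector4 tPlane)
        {
            var p = new Vector4(position, 1.0f);
            return new Vector2(Vector4.Dot(p, sPlane),  // s = p1*x + p2*y + p3*z + p4*w
                               Vector4.Dot(p, tPlane)); // t = its own plane's dot product
        }

        // Projecting along Z onto the XY plane: the s and t planes are simply
        // the X and Y axes (scale them to control tiling), e.g.
        // var uv = ObjectLinearUV(vertex, new Vector4(1, 0, 0, 0), new Vector4(0, 1, 0, 0));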


  • DirectX9 dynamic rendering

    - by gardian06
    What I am planning to do is have the models (or maybe just an identifier for the model to be used) stored outside of the DirectX 9 framework, and so by nature have completely dynamic rendering. All of the information that I have found covers static rendering (rendering models that are stored in memory at specific positions). I would like information on how to take a model (or an identifier for a model type) that is stored outside of the framework and render it to the screen. I am expected to take a container that holds all the relevant data to be rendered. The outside information would hold the position, orientation (a quaternion, though I am told I can also get a rotation matrix if I prefer), and dimensions (scale).
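
    Rendering from an external container mostly comes down to rebuilding each world matrix per frame from the stored position, quaternion, and scale. A sketch with System.Numerics (the container type and the D3D9 note are illustrative):

        using System.Numerics;

        // Sketch: per-object render data kept outside the renderer.
        struct RenderItem
        {
            public int ModelId;          // identifier used to look up the loaded mesh
            public Vector3 Position;
            public Quaternion Orientation;
            public Vector3 Scale;

            // Compose scale, then rotation, then translation (the usual SRT order).
            public Matrix4x4 World =>
                Matrix4x4.CreateScale(Scale) *
                Matrix4x4.CreateFromQuaternion(Orientation) *
                Matrix4x4.CreateTranslation(Position);
        }

        // Each frame, for every item: set item.World as the device's world
        // transform (SetTransform in D3D9 terms), look up the mesh by
        // item.ModelId, and issue the draw call.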


  • How to code UI / HUD in Entity System?

    - by Sylpheed
    I think I already got the idea of the Entity System inspired by Adam Martin (t-machine), and I want to start using it for my next project. I already know the basics of entities, components, and systems. My problem is how to handle the UI / HUD -- for example, a quest window, skill window, character info window, etc. How do you handle UI events (e.g. pressing a button)? This is stuff that doesn't need to be processed every frame. Currently I'm using MVC to code the UI, but I don't think that will be compatible with an Entity System. I've read that an Entity System is embedded in a larger OOP codebase, but I don't know whether the UI lives outside of the ES or not. How do I approach this one?
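
    One pragmatic split is to keep the UI as plain (MVC-style) objects outside the ES and let the two sides talk through a message queue, so button presses never need per-frame processing inside the ES. A small C# sketch of that boundary (all names are illustrative):

        using System.Collections.Generic;

        // Sketch: the UI lives outside the entity system, talking via messages.
        record UiMessage(string Type, object Payload);

        class MessageBus
        {
            readonly Queue<UiMessage> queue = new();
            public void Publish(UiMessage m) => queue.Enqueue(m);
            public IEnumerable<UiMessage> Drain()
            {
                while (queue.Count > 0) yield return queue.Dequeue();
            }
        }

        // A UI button is ordinary MVC code; it only touches the bus.
        class SkillButton
        {
            readonly MessageBus bus;
            public SkillButton(MessageBus bus) => this.bus = bus;
            public void OnPressed(int skillId) => bus.Publish(new UiMessage("UseSkill", skillId));
        }

        // An ES system drains the bus during its update and mutates components.
        class SkillSystem
        {
            public void Update(MessageBus bus /*, component storage */)
            {
                foreach (var m in bus.Drain())
                    if (m.Type == "UseSkill") { /* look up entity, apply the skill */ }
            }
        }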


  • Top Down RPG Movement w/ Correction?

    - by Corey Ogburn
    I would hope that we have all played Zelda: A Link to the Past (please correct me if I'm wrong), but I want to emulate that kind of 2D, top-down character movement with a touch of correction. It has been done in other games, but I feel this reference is the easiest to relate to. More specifically, the kind of movement and correction I'm talking about is:
    1. Floating movement not restricted to tile-based movement like Pokemon and other games, where one tap of the movement pad moves you one square in that cardinal direction. This floating movement should be able to achieve diagonal motion.
    2. If you're walking West and you come to a wall that runs diagonally North-East/South-West, you are corrected into a South-West movement even if you continue holding left (West) on the controller. This should work for both diagonals, correcting in both directions.
    3. If you're a few pixels off from walking squarely into a door or hallway, you are corrected into walking through it; i.e. bumping into the corner causes you to be pushed into the hall/door.
    I've hunted for efficient ways to achieve this and have had no luck. To be clear, I'm talking about the human character's movement, not an NPC's movement. Are there resources available on this kind of movement -- equations or algorithms explained on a wiki or something? I'm using the XNA Framework; is there anything in it to help with this?
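
    Points 2 and 3 usually fall out of axis-separated collision plus a small corner nudge: resolve X and Y independently (the unblocked axis keeps advancing, which gives the diagonal slide along walls), and when a move is blocked but a spot a few pixels to either side is clear, ease toward the opening. A C# sketch of the idea (IsBlocked is an assumed helper that tests the player's rectangle against the map):

        // Sketch: axis-separated movement with corner correction.
        const float Nudge = 4f; // how many pixels of misalignment to auto-correct

        void Move(ref float px, ref float py, float dx, float dy)
        {
            // Resolve each axis independently; sliding along walls (and the
            // diagonal case) falls out because the free axis still advances.
            if (dx != 0 && !IsBlocked(px + dx, py)) px += dx;
            if (dy != 0 && !IsBlocked(px, py + dy)) py += dy;

            // Corner correction: moving purely horizontally into a wall, but a
            // spot a few pixels up or down is clear (a doorway edge)?
            // Then ease toward the opening instead of stopping dead.
            if (dx != 0 && dy == 0 && IsBlocked(px + dx, py))
            {
                if (!IsBlocked(px + dx, py - Nudge)) py -= 1f;
                else if (!IsBlocked(px + dx, py + Nudge)) py += 1f;
            }
            // (mirror the same test for purely vertical movement)
        }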


  • XNA `tex2Dlod` always returns transparent black

    - by feralin
    I want to sample a texture in a vertex shader, so at first I just tried:
        float2 texcoords = ...;
        color = tex2D(texture, texcoords);
    But apparently I cannot use tex2D in a vertex shader and must use tex2Dlod, so I changed the above to:
        color = tex2Dlod(texture, float4(texcoords, 0, 0));
    But now color is always float4(0, 0, 0, 0) (i.e. transparent black). Why is this, and how can I fix it? EDIT: I know for a fact that the texture does not contain just transparent black pixels.
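
    Assuming XNA 4.0: vertex texture fetch only works on the HiDef profile, and the vertex shader samples from the vertex sampler stages, which are configured separately from the pixel ones; an unbound vertex sampler tends to read back zeros, i.e. exactly the transparent black described above. A hedged sketch of the device-side setup (property names per XNA 4.0; the shader must also compile as vs_3_0 or later):

        // Sketch: bind the texture to the *vertex* sampler stage (XNA 4.0, HiDef).
        GraphicsDevice.VertexTextures[0] = texture;
        GraphicsDevice.VertexSamplerStates[0] = SamplerState.PointClamp; // VTF generally requires point filtering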


  • Giving a Bomberman AI intelligent bomb placement

    - by Paul Manta
    I'm trying to implement an AI algorithm for Bomberman. Currently I have a working but not very smart rudimentary implementation (the current AI is overzealous in placing bombs). This is the first AI I've ever tried implementing and I'm a bit stuck. The more sophisticated algorithms I have in mind (the ones that I expect to make better decisions) are too convoluted to be good solutions. What general tips do you have for implementing a Bomberman AI? Are there radically different approaches for making the bot either more defensive or offensive? Edit: my current algorithm goes something like this (pseudo-code):
    1. Try to place a bomb and then find a cell that is safe from all the bombs, including the one that you just placed. To find that cell, iterate over the four directions; if you can find any safe divergent cell and reach it in time (e.g. if the direction is up or down, look for a cell that is found to the left or right of this path), then it's safe to place a bomb and move in that direction.
    2. If you can't find any safe divergent cells, try NOT placing a bomb and look again. This time you'll only need to look for a safe cell in one direction (you don't have to diverge from it).
    3. If you still can't find a safe cell, don't do anything.

        for $(direction) in (up, down, left, right):
            place bomb at current location
            if (can find and reach divergent safe cell in current $(direction)):
                bomb = true
                move = $(direction)
                return

        for $(direction) in (up, down, left, right):
            do not place bomb at current location
            if (any safe cell in the current $(direction)):
                bomb = false
                move = $(direction)
                return
        else:
            bomb = false
            move = stay_put

    This algorithm makes the bot very trigger-happy (it'll place bombs very frequently). It doesn't kill itself, but it does have a habit of making itself vulnerable by going into dead ends where it can be blocked and killed by the other players. Do you have any suggestions on how I might improve this algorithm? Or maybe I should try something completely different? One of the problems with this algorithm is that it tends to leave the bot with very few (frequently just one) safe cells on which it can stand. This is because the bot leaves a trail of bombs behind it, as long as it doesn't kill itself. However, leaving a trail of bombs behind leaves few places where you can hide. If one of the other players or bots decides to place a bomb somewhere near you, it often happens that you have no place to hide and you die. I need a better way to decide when to place bombs.
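
    One common way to make placement less trigger-happy is to precompute a danger map (time until each cell is covered by a blast) and flood-fill from the bot's cell, only allowing a bomb when the search still finds several reachable safe cells afterwards; that directly addresses the "only one safe cell left" failure mode. A compact C# sketch of the check (Walkable is an assumed grid helper; dangerTime would be rebuilt after hypothetically adding the new bomb):

        using System.Collections.Generic;

        // Sketch: count the safe cells still reachable if we drop a bomb here.
        // dangerTime[x, y] = seconds until cell (x, y) is covered by any blast
        // (float.MaxValue if never); speed is cells per second.
        int CountReachableSafeCells(int sx, int sy, float[,] dangerTime, float speed)
        {
            var seen = new HashSet<(int, int)> { (sx, sy) };
            var queue = new Queue<(int x, int y, float t)>();
            queue.Enqueue((sx, sy, 0f));
            int safe = 0;

            while (queue.Count > 0)
            {
                var (x, y, t) = queue.Dequeue();
                if (dangerTime[x, y] == float.MaxValue) safe++; // permanently safe cell
                foreach (var (nx, ny) in new[] { (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1) })
                {
                    float nt = t + 1f / speed; // arrival time at the neighbor
                    // Only walk through cells we reach before they explode.
                    if (Walkable(nx, ny) && nt < dangerTime[nx, ny] && seen.Add((nx, ny)))
                        queue.Enqueue((nx, ny, nt));
                }
            }
            return safe; // e.g. only place the bomb when this stays above a threshold
        }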


  • Popular genres in Asian (non-Japanese) markets?

    - by mummey
    Hello, from time to time I've wondered what kinds of games are popular in Asia (India, China, Korea, Singapore, etc...). I hear about developers in the US and UK who outsource work there, but what goes into the games they make for themselves? Related: you hear these days about how Japanese developers have been marketing their games more toward American audiences (with mixed success). In what ways could American developers aim their development toward Asian audiences?


  • Is the new Windows 8 SDK usable with Visual C++ Express 2010 on Windows 7?

    - by JohnB
    This is inspired by and related to "Is the June 2010 DX SDK really the latest?" asked recently, but it's a different question. I likely won't be purchasing the full Visual Studio 2012 for C++; I intend to use the free Visual C++ Express 2012 that targets desktop applications when it is released, so for now I'm using Visual C++ Express 2010 running on Windows 7. The latest DirectX 11 SDK is the one included in the Windows 8 SDK now; it's not a separate release any more. So my question is: can I use the Windows 8 SDK to build DirectX 11 programs that work on Windows 7, using Visual C++ Express 2010 running on Windows 7? Or do I need to stick to the final DirectX SDK release for now?


  • Keeping crosshairs & GUI onscreen - SFML

    - by nihohit
    I read this question, but didn't understand the implementation suggestions with SFML in C#. For example, right now I'm just trying to make sure that the mouse crosshairs stay onscreen constantly. I tried using this code:

        View lastView = this._mainWindow.GetView();
        this._mainWindow.SetView(this._mainWindow.DefaultView);
        this._mainWindow.Draw(crosshair);
        this._mainWindow.SetView(lastView);

    after drawing all other sprites and before calling this._mainWindow.Display(), where beforehand I set crosshair.Position based on its position relative to the window, not the view. This just keeps the screen locked and prevents screen scrolling. Any suggestions?
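
    One thing worth ruling out: in SFML.Net, View is a class, so lastView above is a reference to the window's live view object rather than a snapshot of it, and restoring it after SetView(DefaultView) may not restore the scrolled state you expect. Copying the view first is a cheap experiment (this assumes the View(View) copy constructor of SFML.Net 2.x; whether aliasing is the actual cause depends on the binding's internals):

        // Sketch: snapshot the world view by *copy*, draw the HUD in the
        // default (screen-space) view, then restore the snapshot.
        View lastView = new View(this._mainWindow.GetView()); // copy, not alias
        this._mainWindow.SetView(this._mainWindow.DefaultView);
        this._mainWindow.Draw(crosshair); // crosshair.Position in window pixels
        this._mainWindow.SetView(lastView);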


  • Random Movement for multiple entities

    - by opiop65
    I have this code for an ArrayList of entities. All the entities use the same random value, and so all of them move in the same direction. How can I change it so that it generates a new random number for each entity?

        public void moveFemale() {
            for (int i = 0; i < 1000; i++) {
                random = rand.nextInt(99);
            }
            if (random >= 0 && random <= 25) {
                posX -= enemyWalkSpeed; // right
            }
            if (random >= 26 && random <= 50) {
                posX += enemyWalkSpeed; // left
            }
            if (random >= 51 && random <= 75) {
                posY -= enemyWalkSpeed; // up
            }
            if (random >= 76 && random <= 100) {
                posY += enemyWalkSpeed; // down
            }
        }

    Is this correct?

        public void moveFemale() {
            for (Female female : GameFrame.females) {
                female.lastChangedDirectionTime += elapsedTime;
                if (female.lastChangedDirectionTime >= CHANGE_DIRECTION_TIME) {
                    female.lastChangedDirectionTime = 0;
                    random = rand.nextInt(100);
                    if (random >= 0 && random <= 25) {
                        posX -= enemyWalkSpeed; // right
                    }
                    if (random >= 26 && random <= 50) {
                        posX += enemyWalkSpeed; // left
                    }
                    if (random >= 51 && random <= 75) {
                        posY -= enemyWalkSpeed; // up
                    }
                    if (random >= 76 && random <= 100) {
                        posY += enemyWalkSpeed; // down
                    }
                }
            }
        }
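
    On the underlying issue: identical movement usually means every entity either shares one roll per frame (as in the first block, where random is drawn once) or constructs its own generator seeded from the same clock tick. One shared generator queried once per entity per decision fixes it; a C# sketch of the shape (Female, PosX/PosY, and WalkSpeed are illustrative stand-ins for the fields above):

        using System;
        using System.Collections.Generic;

        // Sketch: one shared RNG; each entity draws its own direction from it.
        static readonly Random Rng = new Random();

        static void MoveAll(IEnumerable<Female> females, float walkSpeed)
        {
            foreach (var f in females)
            {
                switch (Rng.Next(4)) // 0..3, one fresh roll per entity
                {
                    case 0: f.PosX -= walkSpeed; break; // left
                    case 1: f.PosX += walkSpeed; break; // right
                    case 2: f.PosY -= walkSpeed; break; // up
                    case 3: f.PosY += walkSpeed; break; // down
                }
            }
        }

    Note also that the second block still updates the shared posX/posY fields rather than female.posX/female.posY, so all entities would move in lockstep even with per-entity rolls.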


  • Disabling depth write trashes the frame buffer on some GPUs

    - by EboMike
    I sometimes disable depth buffer writing via glDepthMask(GL_FALSE) during the alpha rendering of a frame. That works perfectly fine on some GPUs (like the Motorola Droid's PowerVR), but on the HTC EVO with the Adreno GPU for example, I end up with the frame buffer being complete garbage (I see traces of the meshes I rendered somewhere, but the entire screen is mostly trashed). If I force glDepthMask to be true the entire time, everything works fine. I need glDepthMask to be off during parts of the alpha rendering. What can cause the framebuffer to get destroyed by turning the depth writing off? I do clear the depth buffer initially, and the majority of the screen has pixels rendered with depth writing turned on first before I do additional drawing with it turned off.


  • Why is my Tiled map distorted when rendered with LibGDX?

    - by Sean
    I have a Tiled map that looks like this in the editor: But when I load it using an AssetManager (full static source available on GitHub), it appears completely askew. I believe the relevant portion of the code is below. This is the entire method; the others are either empty or might as well be.

        private OrthographicCamera camera;
        private AssetManager assetManager;
        private BitmapFont font;
        private SpriteBatch batch;
        private TiledMap map;
        private TiledMapRenderer renderer;

        @Override
        public void create() {
            float w = Gdx.graphics.getWidth();
            float h = Gdx.graphics.getHeight();

            camera = new OrthographicCamera();
            assetManager = new AssetManager();
            batch = new SpriteBatch();
            font = new BitmapFont();

            camera.setToOrtho(false, (w / h) * 10, 10);
            camera.update();

            assetManager.setLoader(TiledMap.class, new TmxMapLoader(
                    new InternalFileHandleResolver()));
            assetManager.load(AssetInfo.ICE_CAVE.assetPath, TiledMap.class);
            assetManager.finishLoading();

            map = assetManager.get(AssetInfo.ICE_CAVE.assetPath);
            // note: this constructs an *isometric* renderer; a map authored as
            // an orthogonal map in Tiled will come out sheared if drawn with it
            renderer = new IsometricTiledMapRenderer(map, 1f / 64f);
        }


  • Isometric screen to 3D world coordinates efficiently

    - by Justin
    I've been having a difficult time transforming 2D screen coordinates to 3D isometric space. This is the situation where I am working in 3D but I have an orthographic camera, and my camera is positioned at (100, 200, 100), where the xz plane is flat and y is up and down. I've been able to get a sort of working solution, but I feel like there must be a better way. Here's what I'm doing. With my camera at (0, 1, 0) I can translate my screen coordinates directly to 3D coordinates by doing:

        mouse2D.z = ((event.clientX / window.innerWidth) * 2 - 1) * -(window.innerWidth / 2);
        mouse2D.x = ((event.clientY / window.innerHeight) * 2 + 1) * -(window.innerHeight);
        mouse2D.y = 0;

    Everything is okay so far. Now when I change my camera back to (100, 200, 100), my 3D space has been rotated 45 degrees around the y axis and then rotated about 54 degrees around a vector Q that runs along the xz plane at a 45 degree angle between the positive z axis and the negative x axis. So what I do to find the point is first rotate my point by 45 degrees using a matrix around the y axis. Now I'm close. So then I rotate my point around the vector Q. But my point is closer to the origin than it should be, since the Y value is not 0 anymore. What I want is that after the rotation my Y value is 0. So now I exchange the X and Z coordinates of my rotated vector with the X and Z coordinates of my non-rotated vector. So basically I have my old vector, but its Y value is at an appropriate rotated amount. Now I use another matrix to rotate my point around the vector Q in the opposite direction, and I end up with the point where I clicked. Is there a better way? I feel like I must be missing something. Also, my method isn't completely accurate: I feel like it's within 5-10 coordinates of where I click, maybe because of rounding from many calculations. Sorry for such a long question.
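
    A more direct route that avoids the hand-built rotations entirely is to unproject the mouse through the inverse view-projection matrix and intersect the resulting ray with the ground plane y = 0; this works for orthographic cameras too (the "ray" is simply perpendicular to the view plane). A self-contained C# sketch with System.Numerics, assuming GL-style NDC with z in [-1, 1]:

        using System.Numerics;

        // Sketch: screen point -> world point on the y = 0 ground plane.
        // viewProj is the camera's view * projection; sx, sy in [0, 1].
        static Vector3 ScreenToGround(Matrix4x4 viewProj, float sx, float sy)
        {
            Matrix4x4.Invert(viewProj, out Matrix4x4 inv);

            // Normalized device coordinates at the near and far planes.
            var ndc = new Vector2(sx * 2f - 1f, 1f - sy * 2f);
            Vector3 near = Unproject(inv, ndc, -1f);
            Vector3 far  = Unproject(inv, ndc,  1f);

            // Intersect the ray with the plane y = 0.
            Vector3 dir = far - near;
            float t = -near.Y / dir.Y;
            return near + dir * t;
        }

        static Vector3 Unproject(Matrix4x4 inv, Vector2 ndc, float z)
        {
            Vector4 p = Vector4.Transform(new Vector4(ndc.X, ndc.Y, z, 1f), inv);
            return new Vector3(p.X, p.Y, p.Z) / p.W;
        }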


  • How can I get the palette of an 8-bit surface in SDL.NET/Tao.SDL?

    - by lolmaster
    I'm looking to get the palette of an 8-bit surface in SDL.NET if possible, or (more than likely) using Tao.SDL. This is because I want to do palette swapping with the palette directly, instead of blitting surfaces together to replace colours like how you would do it with a 32-bit surface. I've gotten the SDL_Surface and the SDL_PixelFormat; however, when I go to get the palette in the same way, I get a System.ExecutionEngineException:

        private Tao.Sdl.Sdl.SDL_Palette GetPalette(Surface surf)
        {
            // Get surface.
            Tao.Sdl.Sdl.SDL_Surface sdlSurface = (Tao.Sdl.Sdl.SDL_Surface)
                System.Runtime.InteropServices.Marshal.PtrToStructure(
                    surf.Handle, typeof(Tao.Sdl.Sdl.SDL_Surface));

            // Get pixel format.
            Tao.Sdl.Sdl.SDL_PixelFormat pixelFormat = (Tao.Sdl.Sdl.SDL_PixelFormat)
                System.Runtime.InteropServices.Marshal.PtrToStructure(
                    sdlSurface.format, typeof(Tao.Sdl.Sdl.SDL_PixelFormat));

            // Execution exception here.
            Tao.Sdl.Sdl.SDL_Palette palette = (Tao.Sdl.Sdl.SDL_Palette)
                System.Runtime.InteropServices.Marshal.PtrToStructure(
                    pixelFormat.palette, typeof(Tao.Sdl.Sdl.SDL_Palette));

            return palette;
        }

    When I used unsafe code to get the palette, I got a compile-time error: "Cannot take the address of, get the size of, or declare a pointer to a managed type ('Tao.Sdl.Sdl.SDL_Palette')". My unsafe code to get the palette was this:

        unsafe
        {
            Tao.Sdl.Sdl.SDL_Palette* pal = (Tao.Sdl.Sdl.SDL_Palette*)pixelFormat.palette;
        }

    From what I've read, a managed type in this case is when a structure has some sort of reference inside it as a field. The SDL_Palette structure happens to have an array of SDL_Color's, so I'm assuming that's the reference type that is causing issues. However, I'm still not sure how to work around that to get the underlying palette. So if anyone knows how to get the palette from an 8-bit surface, whether it's through safe or unsafe code, the help would be greatly appreciated.
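
    One workaround that sidesteps the marshaling failure (the colors field is a pointer to a native array, which is what trips both PtrToStructure and the unsafe pointer) is to declare a blittable mirror of the struct with an IntPtr and walk the color array manually. A hedged sketch, assuming SDL 1.2's C layout of SDL_Palette { int ncolors; SDL_Color *colors; } and SDL_Color { Uint8 r, g, b, unused; }:

        using System;
        using System.Runtime.InteropServices;

        // Blittable mirrors of SDL's structs (layout per the SDL 1.2 headers).
        [StructLayout(LayoutKind.Sequential)]
        struct NativePalette { public int ncolors; public IntPtr colors; }

        [StructLayout(LayoutKind.Sequential)]
        struct NativeColor { public byte r, g, b, unused; }

        static NativeColor[] ReadPalette(IntPtr palettePtr) // e.g. pixelFormat.palette
        {
            var pal = (NativePalette)Marshal.PtrToStructure(palettePtr, typeof(NativePalette));
            var result = new NativeColor[pal.ncolors];
            int stride = Marshal.SizeOf(typeof(NativeColor));
            for (int i = 0; i < pal.ncolors; i++)
                result[i] = (NativeColor)Marshal.PtrToStructure(
                    IntPtr.Add(pal.colors, i * stride), typeof(NativeColor));
            return result;
        }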


  • How can I write only to the stencil buffer in OpenGL ES 2.0?

    - by stephelton
    I'd like to write to the stencil buffer without incurring the cost of my expensive shaders. As I understand it, I write to the stencil buffer as a 'side effect' of rendering something. In this first pass where I write to the stencil buffer, I don't want to write anything to the color or depth buffer, and I definitely don't want to run through my lighting equations in my shaders. Do I need to create no-op shaders for this (and can I just discard fragments), or is there a better way to do this? As the title says, I'm using OpenGL ES 2.0. I haven't used the stencil buffer before, so if I seem to be misunderstanding something, feel free to be verbose.
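
    The usual recipe: bind a trivial vertex/pixel shader pair for the stencil pass and mask off the color and depth writes. Rasterization still happens (stencil updates are a side effect of rasterization), but nothing expensive runs and nothing else is written. One caveat: discarding a fragment also skips its stencil update, so masking writes is preferable to discard. A sketch with OpenTK's C# GL bindings (desktop enum names shown; the ES 2.0 entry points are the same calls, and DrawOccluderGeometry is an assumed helper):

        using OpenTK.Graphics.OpenGL;

        // Sketch: stencil-only pass with a trivial shader program bound.
        GL.ColorMask(false, false, false, false); // no color writes
        GL.DepthMask(false);                      // no depth writes

        GL.Enable(EnableCap.StencilTest);
        GL.StencilFunc(StencilFunction.Always, 1, 0xFF);
        GL.StencilOp(StencilOp.Keep, StencilOp.Keep, StencilOp.Replace);

        DrawOccluderGeometry(); // issues the draw calls for the stencil shapes

        // Restore state for the normal, fully shaded pass.
        GL.ColorMask(true, true, true, true);
        GL.DepthMask(true);
        GL.StencilFunc(StencilFunction.Equal, 1, 0xFF);
        GL.StencilOp(StencilOp.Keep, StencilOp.Keep, StencilOp.Keep);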


  • How do I implement a quaternion based camera?

    - by kudor gyozo
    I looked at several tutorials about this, and when I thought I understood I tried to implement a quaternion based camera. The problem is it doesn't work correctly: after rotating for approx. 10 degrees, it jumps back to -10 degrees. I have no idea what's wrong. I'm using OpenTK, which already has a quaternion class. I'm a noob at OpenGL, I'm doing this just for fun, and I don't really understand quaternions, so probably I'm doing something stupid here. Here is some code (actually almost all the code, except the methods that load and draw a VBO, which are taken from an OpenTK sample that demonstrates VBOs). I load a cube into a VBO and initialize the quaternion for the camera:

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            cameraPos = new Vector3(0, 0, 7);
            cameraRot = Quaternion.FromAxisAngle(new Vector3(0, 0, -1), 0);
            GL.ClearColor(System.Drawing.Color.MidnightBlue);
            GL.Enable(EnableCap.DepthTest);
            vbo = LoadVBO(CubeVertices, CubeElements);
        }

    I load a perspective projection here. This is loaded at the beginning and every time I resize the window:

        protected override void OnResize(EventArgs e)
        {
            base.OnResize(e);
            GL.Viewport(0, 0, Width, Height);
            float aspect_ratio = Width / (float)Height;
            Matrix4 perpective = Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspect_ratio, 1, 64);
            GL.MatrixMode(MatrixMode.Projection);
            GL.LoadMatrix(ref perpective);
        }

    Here I get the last rotation value and create a new quaternion that represents only the last rotation, then multiply it with the camera quaternion. After this I transform it into axis-angle form so that OpenGL can use it. (This is how I understood it from several online quaternion tutorials.)

        protected override void OnRenderFrame(FrameEventArgs e)
        {
            base.OnRenderFrame(e);
            GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

            double speed = 1;
            double rx = 0, ry = 0;
            if (Keyboard[Key.A]) { ry = -speed * e.Time; }
            if (Keyboard[Key.D]) { ry = +speed * e.Time; }
            if (Keyboard[Key.W]) { rx = +speed * e.Time; }
            if (Keyboard[Key.S]) { rx = -speed * e.Time; }

            Quaternion tmpQuat = Quaternion.FromAxisAngle(new Vector3(0, 1, 0), (float)ry);
            cameraRot = tmpQuat * cameraRot;
            cameraRot.Normalize();

            GL.MatrixMode(MatrixMode.Modelview);
            GL.LoadIdentity();
            Vector3 axis;
            float angle;
            cameraRot.ToAxisAngle(out axis, out angle);
            GL.Rotate(angle, axis);
            GL.Translate(-cameraPos);

            Draw(vbo);
            SwapBuffers();
        }

    Here are two images to explain better: I rotate a while, and from this, it jumps into this. Any help is appreciated. Update 1: I add these to a StreamWriter that writes into a file:

        sw.WriteLine("camerarot: X:{0} Y:{1} Z:{2} W:{3} L:{4}", cameraRot.X, cameraRot.Y, cameraRot.Z, cameraRot.W, cameraRot.Length);
        sw.WriteLine("ry: {0}", ry);

    The log is available here: http://www.pasteall.org/26133/text. At line 770 the cube jumps from right to left, when camerarot.Y changes sign. I don't know if this is normal. Update 2: Here is the complete project.
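
    One concrete thing to check in OnRenderFrame above: OpenTK's Quaternion.ToAxisAngle returns the angle in radians, while the fixed-function GL.Rotate expects degrees. Feeding radians where degrees are expected badly distorts the accumulated rotation (whether or not it is the whole story behind the jump, it needs converting). A minimal sketch of the fix, assuming OpenTK's MathHelper:

        // Sketch: convert the axis-angle result to degrees before GL.Rotate.
        Vector3 axis;
        float angle;
        cameraRot.ToAxisAngle(out axis, out angle); // angle is in radians here
        GL.Rotate(MathHelper.RadiansToDegrees(angle), axis);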

