Search Results

Search found 19281 results on 772 pages for 'blender game engine'.


  • glm quaternion camera rotating on wrong axis

    - by Jarrett
    I'm trying to get my camera implemented with a glm::quat used to store the rotation. However, whenever I do circles with the mouse, the camera rotates around the axis I am viewing along (I think it's called the target axis). For example, if I move the mouse in a clockwise fashion, the camera rotates clockwise around that axis. I initialize my quaternion like so:

        void Camera::initialize() {
            orientationQuaternion_ = glm::quat();
            orientationQuaternion_ = glm::normalize(orientationQuaternion_);
        }

    I rotate like so:

        void Camera::rotate(const glm::detail::float32& degrees, const glm::vec3& axis) {
            orientationQuaternion_ = orientationQuaternion_ * glm::normalize(glm::angleAxis(degrees, axis));
        }

    and I set the view matrix like so:

        void Camera::render() {
            glm::quat temp = glm::conjugate(orientationQuaternion_);
            viewMatrix_ = glm::mat4_cast(temp);
            viewMatrix_ = glm::translate(viewMatrix_, glm::vec3(-pos_.x, -pos_.y, -pos_.z));
        }

    The only axes I actually try to rotate around are X and Y (i.e. (1,0,0) and (0,1,0)). Anyone have any idea why I see my camera rotating around the target axis?
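
    A hedged diagnosis: orientation * rotation applies every rotation in the camera's local frame, so alternating local yaws and pitches accumulates roll, which shows up as exactly this spin around the view axis. The common fix is to apply yaw as a pre-multiplication around the world up axis and keep pitch local. A sketch (the radians/worldSpace signature is an illustration, not the original API):

        #include <glm/glm.hpp>
        #include <glm/gtc/quaternion.hpp>

        // Multiplication order decides the rotation's frame:
        // r * orientation rotates about a world axis, orientation * r about a local one.
        void rotateCamera(glm::quat& orientation, float angleRad,
                          const glm::vec3& axis, bool worldSpace)
        {
            glm::quat r = glm::angleAxis(angleRad, glm::normalize(axis));
            orientation = worldSpace ? glm::normalize(r * orientation)   // yaw around world (0,1,0)
                                     : glm::normalize(orientation * r);  // pitch around local (1,0,0)
        }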

    Read the article

  • What to do with input during movement?

    - by user1895420
    In a concept I'm working on, the player can move from one position in a grid to the next. Once movement starts it can't be changed, and it takes a predetermined amount of time to finish (about a quarter of a second). Even though their movement can't be altered, the player can still press keys (perhaps in anticipation of their next move). What do I do with this input? Possibilities I've thought of:
    - Ignore all input during movement.
    - Log all input and loop through the entries one by one once movement finishes.
    - Log the first or last input and move when possible.
    I'm not really sure which is the most appropriate or most natural. Hence my question: what do I do with player input during movement?
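
    One option that tends to feel responsive is a one-slot buffer: remember only the most recent press made during a move and act on it the moment the move finishes, discarding anything older. A hypothetical sketch (names are illustrative):

        #include <optional>

        enum class Direction { Up, Down, Left, Right };

        struct GridMover {
            std::optional<Direction> buffered;   // last key pressed during a move
            bool moving = false;

            void onKeyPressed(Direction d) {
                if (moving) buffered = d;        // remember it for later
                else startMove(d);
            }
            void onMoveFinished() {
                moving = false;
                if (buffered) {                  // replay the buffered input
                    startMove(*buffered);
                    buffered.reset();
                }
            }
            void startMove(Direction d) { moving = true; /* begin tween toward d */ }
        };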

    Read the article

  • Taking fixed directions on a hemisphere and projecting onto the normal (OpenGL)

    - by Maik Xhani
    I am trying to perform sampling using a hemisphere around a surface normal. I want to experiment with fixed directions (and maybe jitter slightly between frames). So I have these directions:

        vec3 sampleDirections[6] = {
            vec3( 0.0f,      1.0f,  0.0f),
            vec3( 0.0f,      0.5f,  0.866025f),
            vec3( 0.823639f, 0.5f,  0.267617f),
            vec3( 0.509037f, 0.5f, -0.700629f),
            vec3(-0.509037f, 0.5f, -0.700629f),
            vec3(-0.823639f, 0.5f,  0.267617f)
        };

    Now I want the first direction to be projected onto the normal and the others oriented accordingly. I tried these two pieces of code, and both fail. This is what I used for random sampling (it doesn't seem to work well, the samples seem biased towards a certain direction), and I just used one of the fixed directions instead of s (when I used it with a fixed direction I didn't use theta and phi):

        vec3 CosWeightedRandomHemisphereDirection( vec3 n, float rand1, float rand2 )
        {
            float theta = acos(sqrt(1.0f-rand1));
            float phi = 6.283185f * rand2;
            vec3 s = vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta));
            vec3 v = normalize(cross(n,vec3(0.0072, 1.0, 0.0034)));
            vec3 u = cross(v, n);
            u = s.x*u;
            v = s.y*v;
            vec3 w = s.z*n;
            vec3 direction = u+v+w;
            return normalize(direction);
        }

    EDIT: This is the new code:

        vec3 FixedHemisphereDirection( vec3 n, vec3 sampleDir )
        {
            vec3 x;
            vec3 z;
            if(abs(n.x) < abs(n.y)){
                if(abs(n.x) < abs(n.z)){
                    x = vec3(1.0f,0.0f,0.0f);
                }else{
                    x = vec3(0.0f,0.0f,1.0f);
                }
            }else{
                if(abs(n.y) < abs(n.z)){
                    x = vec3(0.0f,1.0f,0.0f);
                }else{
                    x = vec3(0.0f,0.0f,1.0f);
                }
            }
            z = normalize(cross(x,n));
            x = cross(n,z);
            mat3 M = mat3( x.x, n.x, z.x,
                           x.y, n.y, z.y,
                           x.z, n.z, z.z );
            return M*sampleDir;
        }

    So if my n = (0,0,1) and my sampleDir = (0,1,0), shouldn't M*sampleDir be (0,0,1)? That is what I was expecting.
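
    A hedged observation on the EDIT code: GLSL's mat3(a, b, c, ...) constructor fills columns, so that call puts x, n and z into the rows of M. M*sampleDir therefore maps world directions into the local frame, the transpose of what's wanted, which is why (0,1,0) comes out as (1,0,0) rather than (0,0,1) in the n=(0,0,1) case. Building the basis as columns gives the local-to-world rotation; sketched here with glm, which follows the same column-major conventions:

        #include <glm/glm.hpp>

        // Basis vectors as *columns*: local +Y maps onto the normal n.
        glm::mat3 hemisphereBasis(const glm::vec3& n, const glm::vec3& x, const glm::vec3& z)
        {
            return glm::mat3(x, n, z);   // column constructor
        }
        // With n=(0,0,1), x=(0,1,0), z=(1,0,0):
        //   hemisphereBasis(n, x, z) * glm::vec3(0,1,0) == n, i.e. (0,0,1) as expected.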

    Read the article

  • Cannot remove cube in UDK

    - by user32228
    For some reason, I can't move or remove an 'invisible' cube which is on my map. I searched on Google for a solution, but somehow I still can't remove it. The cube looks like this: http://screencloud.net/v/uNyz In Brush Wireframe: http://screencloud.net/v/3C0c In Wireframe: screencloud.net/v/oGBj As you can see, I want to delete the brown cube, but selecting it and pressing the DEL key does nothing. So, how do you delete the brown cube? EDIT: Seriously, I wrote this post a few minutes ago and I found the solution. However, I still don't know how to delete the brown cube.

    Read the article

  • How do I import service references to Unity3D?

    - by Timothy Williams
    I'm attempting to access a service reference in Unity. I need two: the SOAP framework and a separate service called ContentVault. The respective service URLs are:
    SOAP: http://api.microsofttranslator.com/V2/Soap.svc
    ContentVault: http://ioun.wizards.com/ContentVault.svc
    Both services import fine into Visual Studio. I've tried everything I can think of, but they won't work with Unity; I just get various errors (which change depending on the solution I'm trying out). I've attempted using svcutil to export the services as external scripts, but all I got was a bunch of using errors. I've tried converting the code to work with .NET 2.0 to no avail, and I've even tried making the services into DLLs, with no success. How can I get these services working with Unity?

    Read the article

  • Platformer gravity where gravity is greater than tile size

    - by Sara
    I am making a simple grid-tile-based platformer with basic physics. I have 16px tiles, and after playing with gravity it seems that, to get a nice quick Mario-like jump feel, the player ends up moving faster than 16px per frame by the time they reach the ground. The problem is that they clip through the first layer of tiles before the collision is detected; then, when I move the player to the top of the colliding tile, they snap to the bottom-most tile. I have tried limiting their maximum velocity to less than 16px, but it does not look right. Are there any standard approaches to solving this? Thanks.
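
    A common remedy is sub-stepped movement: split each update's displacement into steps no larger than one tile, resolving collisions per step, so a fast fall can never skip past the first solid layer. A sketch with hypothetical per-axis resolvers:

        #include <algorithm>
        #include <cmath>

        struct Player { float x = 0, y = 0; };

        // Hypothetical per-axis collision resolvers; in a real game these
        // clamp the player against solid tiles along one axis.
        void resolveHorizontalCollisions(Player&) {}
        void resolveVerticalCollisions(Player&) {}

        void moveWithSubsteps(Player& p, float dx, float dy, float tileSize = 16.0f)
        {
            // Never advance more than one tile per sub-step.
            int steps = std::max(1, (int)std::ceil(std::max(std::fabs(dx), std::fabs(dy)) / tileSize));
            float sx = dx / steps, sy = dy / steps;
            for (int i = 0; i < steps; ++i) {
                p.x += sx; resolveHorizontalCollisions(p);
                p.y += sy; resolveVerticalCollisions(p);
            }
        }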

    Read the article

  • My raycaster is putting out strange results, how do I fix it?

    - by JamesK89
    I'm working on a raycaster in ActionScript 3.0 for the fun of it, and as a learning experience. I've got it up and running and it's displaying output as expected, however I'm getting this strange bug where rays go through the corners of blocks and the edges of blocks appear through walls. Maybe somebody with more experience can point out what I'm doing wrong, or maybe a fresh pair of eyes can spot a tiny bug I haven't noticed. Thank you so much for your help! Screenshots: http://i55.tinypic.com/25koebm.jpg http://i51.tinypic.com/zx5jq9.jpg Relevant code:

        function drawScene() {
            rays.graphics.clear();
            rays.graphics.lineStyle(1, rgba(0x00,0x66,0x00));
            var halfFov = (player.fov/2);
            var numRays:int = ( stage.stageWidth / COLUMN_SIZE );
            var prjDist = ( stage.stageWidth / 2 ) / Math.tan(toRad( halfFov ));
            var angStep = ( player.fov / numRays );
            for( var i:int = 0; i < numRays; i++ ) {
                var rAng = ( ( player.angle - halfFov ) + ( angStep * i ) ) % 360;
                if( rAng < 0 ) rAng += 360;
                var ray:Object = castRay(player.position, rAng);
                drawRaySlice(i*COLUMN_SIZE, prjDist, player.angle, ray);
            }
        }

        function drawRaySlice(sx:int, prjDist, angle, ray:Object) {
            if( ray.distance >= MAX_DIST ) return;
            var height:int = int(( TILE_SIZE / (ray.distance * Math.cos(toRad(angle-ray.angle))) ) * prjDist);
            if( !height ) return;
            var yTop = int(( stage.stageHeight / 2 ) - ( height / 2 ));
            if( yTop < 0 ) yTop = 0;
            var yBot = int(( stage.stageHeight / 2 ) + ( height / 2 ));
            if( yBot > stage.stageHeight ) yBot = stage.stageHeight;
            rays.graphics.moveTo( (ray.origin.x / TILE_SIZE) * MINI_SIZE, (ray.origin.y / TILE_SIZE) * MINI_SIZE );
            rays.graphics.lineTo( (ray.hit.x / TILE_SIZE) * MINI_SIZE, (ray.hit.y / TILE_SIZE) * MINI_SIZE );
            for( var x:int = 0; x < COLUMN_SIZE; x++ ) {
                for( var y:int = yTop; y < yBot; y++ ) {
                    buffer.setPixel(sx+x, y, clrTable[ray.tile-1] >> ( ray.horz ? 1 : 0 ));
                }
            }
        }

        function castRay(origin:Point, angle):Object {
            // Return values
            var rTexel = 0;
            var rHorz = false;
            var rTile = 0;
            var rDist = MAX_DIST + 1;
            var rMap:Point = new Point();
            var rHit:Point = new Point();
            // Ray angle and slope
            var ra = toRad(angle) % ANGLE_360;
            if( ra < ANGLE_0 ) ra += ANGLE_360;
            var rs = Math.tan(ra);
            var rUp = ( ra > ANGLE_0 && ra < ANGLE_180 );
            var rRight = ( ra < ANGLE_90 || ra > ANGLE_270 );
            // Ray position
            var rx = 0;
            var ry = 0;
            // Ray step values
            var xa = 0;
            var ya = 0;
            // Ray position, in map coordinates
            var mx:int = 0;
            var my:int = 0;
            var mt:int = 0;
            // Distance
            var dx = 0;
            var dy = 0;
            var ds = MAX_DIST + 1;
            // Horizontal intersection
            if( ra != ANGLE_180 && ra != ANGLE_0 && ra != ANGLE_360 ) {
                ya = ( rUp ? TILE_SIZE : -TILE_SIZE );
                xa = ya / rs;
                ry = int( origin.y / TILE_SIZE ) * ( TILE_SIZE ) + ( rUp ? TILE_SIZE : -1 );
                rx = origin.x + ( ry - origin.y ) / rs;
                mx = 0;
                my = 0;
                while( mx >= 0 && my >= 0 && mx < world.size.x && my < world.size.y ) {
                    mx = int( rx / TILE_SIZE );
                    my = int( ry / TILE_SIZE );
                    mt = getMapTile(mx,my);
                    if( mt > 0 && mt < 9 ) {
                        dx = rx - origin.x;
                        dy = ry - origin.y;
                        ds = ( dx * dx ) + ( dy * dy );
                        if( rDist >= MAX_DIST || ds < rDist ) {
                            rDist = ds;
                            rTile = mt;
                            rMap.x = mx;
                            rMap.y = my;
                            rHit.x = rx;
                            rHit.y = ry;
                            rHorz = true;
                            rTexel = int(rx % TILE_SIZE);
                        }
                        break;
                    }
                    rx += xa;
                    ry += ya;
                }
            }
            // Vertical intersection
            if( ra != ANGLE_90 && ra != ANGLE_270 ) {
                xa = ( rRight ? TILE_SIZE : -TILE_SIZE );
                ya = xa * rs;
                rx = int( origin.x / TILE_SIZE ) * ( TILE_SIZE ) + ( rRight ? TILE_SIZE : -1 );
                ry = origin.y + ( rx - origin.x ) * rs;
                mx = 0;
                my = 0;
                while( mx >= 0 && my >= 0 && mx < world.size.x && my < world.size.y ) {
                    mx = int( rx / TILE_SIZE );
                    my = int( ry / TILE_SIZE );
                    mt = getMapTile(mx,my);
                    if( mt > 0 && mt < 9 ) {
                        dx = rx - origin.x;
                        dy = ry - origin.y;
                        ds = ( dx * dx ) + ( dy * dy );
                        if( rDist >= MAX_DIST || ds < rDist ) {
                            rDist = ds;
                            rTile = mt;
                            rMap.x = mx;
                            rMap.y = my;
                            rHit.x = rx;
                            rHit.y = ry;
                            rHorz = false;
                            rTexel = int(ry % TILE_SIZE);
                        }
                        break;
                    }
                    rx += xa;
                    ry += ya;
                }
            }
            return {
                angle: angle,
                distance: Math.sqrt(rDist),
                hit: rHit,
                map: rMap,
                tile: rTile,
                horz: rHorz,
                origin: origin,
                texel: rTexel
            };
        }

    Read the article

  • How to create water like in New Super Mario Bros?

    - by user1103457
    I assume the water in New Super Mario Bros works the same as in the first part of this tutorial: http://gamedev.tutsplus.com/tutorials/implementation/make-a-splash-with-2d-water-effects/ But in New Super Mario Bros the water also has constant waves on the surface, and the splashes look very different. Another difference is that in the tutorial, creating a splash first produces a deep "hole" in the water at the origin of the splash; in New Super Mario Bros this hole is absent or much smaller. When I refer to the splashes in New Super Mario Bros, I am referring to the splashes the player creates when jumping into and out of the water. For reference you could use this video: http://www.ign.com/videos/2012/11/17/new-super-mario-bros-u-3-star-coin-walkthrough-sparkling-waters-1-waterspout-beach Just after 00:50, when the camera isn't moving, you can get a good look at the water and the constant waves; there are also some good examples of the splashes during that time. How do they create the constant waves and the splashes? I am programming in XNA. (I have tried this myself but couldn't really get it all to work well together.) (And as bonus questions: how do they create the light spots just under the surface of the waves, and how do they texture the deeper parts of the water? This is the first time I try to create water like this.)
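
    A hedged sketch of one common approach (shown in C++; it maps directly to C# in XNA): keep the tutorial's spring simulation for splashes, and add a few small scrolling sine waves on top purely at draw time, so the ambient motion never decays and never disturbs the spring state. All amplitudes, frequencies and speeds below are made-up tuning values:

        #include <cmath>
        #include <vector>

        struct SurfaceColumn { float height = 0; float velocity = 0; }; // spring state (from the tutorial)

        // Final surface height for column i: spring displacement plus ambient waves.
        float surfaceHeightAt(const std::vector<SurfaceColumn>& cols,
                              int i, float time, float baseLevel)
        {
            float h = baseLevel + cols[i].height;           // splash motion (springs)
            h += 1.5f * std::sin(0.8f * i + 2.0f * time);   // long, slow ambient wave
            h += 0.7f * std::sin(1.9f * i - 3.1f * time);   // shorter counter-moving wave
            return h;
        }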

    Read the article

  • How do I get the compression in a specific dynamic body?

    - by Mike JM
    Sorry, I could not find any tag that would suit my question. Let me first show you the image and then write what I want to do: I'm using Box2D. As you can see, there are three dynamic bodies connected to each other (think of it as a table seen from the front). LEG1 and LEG2 are connected to the static body (it's the ground body). Another dynamic body is falling onto the table. I need to get the compression in LEG1 and LEG2 separately. Joints have a GetReactionForce() function which returns a b2Vec2, which in turn has Length() and LengthSquared() functions. This will give the total sum of the forces in any given joint. But what I need is the forces in the individual bodies that are connected with joints. Once you connect several bodies with a single joint, it again shows the sum of the forces, which is not useful. Here's the case I'm talking about:
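
    A sketch under one assumption: each leg is attached through its own joint (no joint shared between several bodies). Then GetReactionForce() on a leg's joint already isolates that leg, and projecting the reaction onto the leg's long axis gives the compression alone. Variable names are illustrative:

        #include <Box2D/Box2D.h>

        float legCompression(b2Body* legBody, b2Joint* legTopJoint, float timeStep)
        {
            b2Vec2 axis = legBody->GetWorldVector(b2Vec2(0.0f, 1.0f)); // leg's long axis in world space
            b2Vec2 reaction = legTopJoint->GetReactionForce(1.0f / timeStep);
            return b2Dot(reaction, axis); // sign distinguishes compression from tension
        }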

    Read the article

  • Using a heavyweight ORM implementation for lightweight games

    - by Holland
    I'm just about to immerse myself in an MVC-based/component architecture in C#, using MySQL's Connector/NET for data storage, and probably some NHibernate/FluentNHibernate object-relational mapping to map out the data structure. The goal is to build a scalable 2D RPG. Then I think about it... and I can't help but feel this seems a little heavyweight for a 2D RPG, especially one which, while I plan to incorporate a lot of functionality and entertaining gameplay, may be ported to something like Windows Phone or Android in the future. Yet, on the other hand, even a two-dimensional RPG can become very complicated and must therefore incorporate a lot of functionality. While this can be accomplished with text/XML/JSON for data storage, is there a better way? Is something such as object-relational mapping useful in such an application? So, what do you think? Would you say there is a place for such technologies? I don't know what to think...

    Read the article

  • How to do reflective collisions with particles hitting background tiles?

    - by Shawn LeBlanc
    In my 2D old-school pixel platformer, I'm looking for methods for bouncing particles off of background tiles. Particles aren't affected by gravity, and collisions are "reflective": a particle hitting the side of a square tile at 45 degrees should bounce off at 45 degrees as well. We can assume that tiles will always be perfectly square, with no slopes or anything. What are efficient methods and algorithms to do this? I'd be implementing this on a Sega Genesis.
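
    Since the tiles are axis-aligned, the general reflection formula v' = v - 2(v·n)n collapses to flipping one velocity component, which is cheap enough for the Genesis. A minimal sketch in C++ (integer-friendly, no trig or square roots):

        struct Particle { int x, y, vx, vy; };   // fixed-point/integer friendly

        // Reflecting off a vertical tile face flips vx; off a horizontal face, vy.
        void bounce(Particle& p, bool hitVerticalFace)
        {
            if (hitVerticalFace) p.vx = -p.vx;   // side of a tile
            else                 p.vy = -p.vy;   // top or bottom of a tile
        }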

    Read the article

  • Efficient visualization of a large voxelized volume

    - by Alejandro Piad
    Let's consider a large voxelized volume stored in an octree or any other convenient structure. This volume represents, for instance, a landscape, where each block is either empty (air) or has a specific material that will later be used to apply a texture. Voxels that are next to each other represent connected sections of the surface. What I need is an algorithm to generate a mesh from these voxels that represents the volume, with the following characteristics:
    - All the "holes" in the voxelized volume are correct.
    - All the connections are correct, i.e. seamless.
    - The surface appears smooth.
    In a broad sense, I want to somehow preserve the surface topology, meaning that connected sections remain connected in the resulting mesh and that the surface has a curvature that responds to the voxel topology. Imagine trying to render the Minecraft world but getting the mountainsides to be smooth instead of blocky.

    Read the article

  • Does SWF provide a better compression rate than zlib for PNG images?

    - by Huang F. Lei
    Somebody told me that when a PNG image is stored in a SWF, it's separated into several layers, hence the alpha channel can be compressed better. Is that true? Or, once a PNG image is imported into a SWF, is its format changed, e.g. converted into bitmap data and then compressed by the SWF's compression algorithm? That is, it is not in PNG format anymore. I don't know how SWF packs its resources; please tell me if you know.

    Read the article

  • Changing Palette for Day/Night Mode using GIMP

    - by J.C.
    Hello, suppose I have a picture for which I want to achieve a day/night mode by changing its 8bpp color palette. I want the pixel indices of the picture to stay fixed for both day mode and night mode. For example, if the first pixel's index is 100, I can look up index 100 in the day-mode palette and in the night-mode palette. How can I use GIMP to do so? My goal is to not update the pixel indices of my picture. Also, as you can see in the two palettes, they are not a one-to-one mapping: index 1 of the day-mode palette and index 1 of the night-mode palette may not be used in the same pixel of the picture. How can I tackle this problem? Actually, my use case is as follows: I want to use one 8bpp picture to achieve day/night mode by updating only the color palette (without updating the pixel indices). The advantage is that I only have to prepare two 256-entry palettes rather than saving two big pictures in my limited data RAM. Thanks a lot
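
    One way to sidestep the mapping problem entirely (a pipeline suggestion, not a GIMP feature): derive the night palette from the day palette entry by entry, so index i always colors the same pixels in both modes and no pixel index ever changes. The tint factors below are made-up values:

        #include <cstdint>

        struct Rgb { uint8_t r, g, b; };

        // Night entry derived from the day entry at the same index.
        Rgb toNight(Rgb day)
        {
            Rgb n;
            n.r = (uint8_t)(day.r * 0.35f);
            n.g = (uint8_t)(day.g * 0.40f);
            n.b = (uint8_t)(day.b * 0.70f); // keep more blue for a moonlit tint
            return n;
        }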

    Read the article

  • How to copy depth buffer to CPU memory in DirectX?

    - by Ashwin
    I have code in OpenGL that uses glReadPixels to copy the depth buffer to a CPU memory buffer:

        glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, dbuf);

    How do I achieve the same in DirectX? I have looked at a similar question which gives the solution for copying the RGB buffer. I've tried to write similar code to copy the depth buffer:

        IDirect3DSurface9* d3dSurface;
        d3dDevice->GetDepthStencilSurface(&d3dSurface);

        D3DSURFACE_DESC d3dSurfaceDesc;
        d3dSurface->GetDesc(&d3dSurfaceDesc);

        IDirect3DSurface9* d3dOffSurface;
        d3dDevice->CreateOffscreenPlainSurface(
            d3dSurfaceDesc.Width, d3dSurfaceDesc.Height,
            D3DFMT_D32F_LOCKABLE, D3DPOOL_SCRATCH,
            &d3dOffSurface, NULL);

        // FAILS: D3DERR_INVALIDCALL
        D3DXLoadSurfaceFromSurface(
            d3dOffSurface, NULL, NULL,
            d3dSurface, NULL, NULL,
            D3DX_FILTER_NONE, 0);

        // Copy from offscreen surface to CPU memory
        ...

    The code fails on the call to D3DXLoadSurfaceFromSurface: it returns the error value D3DERR_INVALIDCALL. What is wrong with my code?
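
    A hedged sketch of the usual D3D9 workaround: the depth-stencil surface itself generally can't be locked or copied (which is why D3DXLoadSurfaceFromSurface rejects it), so write depth into a D3DFMT_R32F render target in a dedicated pass, then read that target back with GetRenderTargetData. The depthRT argument is assumed to be that render target's surface:

        #include <d3d9.h>
        #include <cstring>

        void readBackDepth(IDirect3DDevice9* d3dDevice, IDirect3DSurface9* depthRT,
                           UINT w, UINT h, float* dbuf)
        {
            IDirect3DSurface9* sysmem = NULL;
            d3dDevice->CreateOffscreenPlainSurface(w, h, D3DFMT_R32F,
                                                   D3DPOOL_SYSTEMMEM, &sysmem, NULL);
            d3dDevice->GetRenderTargetData(depthRT, sysmem);   // GPU -> system memory

            D3DLOCKED_RECT lr;
            sysmem->LockRect(&lr, NULL, D3DLOCK_READONLY);
            for (UINT y = 0; y < h; ++y)                       // copy row by row (pitch-aware)
                memcpy(dbuf + y * w, (BYTE*)lr.pBits + y * lr.Pitch, w * sizeof(float));
            sysmem->UnlockRect();
            sysmem->Release();
        }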

    Read the article

  • Omni-directional shadow mapping

    - by gridzbi
    What is a good (or the best) way to fill a cube map with depth values that will give me the least amount of trouble with floating-point imprecision? To get up and running I'm just writing the raw depth to the buffer; as you can imagine it's pretty terrible, and I need to improve it, but I'm not sure how. A few tutorials on directional lights divide the depth by w and store the z/w value in the cube map. How would I then perform the depth comparison in my shadow-mapping step? The NVIDIA article at http://http.developer.nvidia.com/GPUGems/gpugems_ch12.html appears to do something completely different and uses the dot product with the light vector, presumably to counter the depth precision worsening over distance. It also scales the geometry so that it fits into the range -0.5 to +0.5. The article looks a bit dated, though; is this technique still reasonable? Shader code: http://pastebin.com/kNBzX4xU Screenshot: http://imgur.com/54wFI
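
    One scheme that avoids the z/w question entirely, sketched here as plain C++ though both halves would live in shader code: store the fragment-to-light distance divided by the light's far range, and compare against the same quantity at lookup time. Precision then degrades evenly in every direction from the light. The bias value is a made-up starting point:

        #include <glm/glm.hpp>

        float storedCubeDepth(const glm::vec3& fragPos, const glm::vec3& lightPos, float lightFar)
        {
            return glm::length(fragPos - lightPos) / lightFar;   // written in the shadow pass
        }

        bool inShadow(const glm::vec3& fragPos, const glm::vec3& lightPos,
                      float lightFar, float sampledDepth, float bias = 0.005f)
        {
            float current = glm::length(fragPos - lightPos) / lightFar;
            return current - bias > sampledDepth;                // same units on both sides
        }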

    Read the article

  • Using Ogre particle point billboards with shaders

    - by Jay
    I'm learning about using Ogre particles and had some questions about how point-type particles work.
    Q. I believe point-type particles are implemented as a single position. Is a single vertex passed to the vertex shader?
    Q. If one vertex is passed to the vertex shader, then what gets sent to the fragment shader?
    Q. Can I pass the particle size to the shader? Perhaps with a custom parameter?

    Read the article

  • *DX11, HLSL* - Colour as 4 floats or one UINT

    - by Paul
    With the DX11 pipeline, would it be much quicker for the vertex buffer to pass a single UINT with one byte per channel to the input assembler, as opposed to four floats? Then the vertex shader would convert the four bytes to four floats, which I guess is the required colour format for the pipeline. In this instance, colour accuracy isn't an issue. The vertex buffer would need to be updated many times per frame, so using a single UINT and saving 12 bytes for every vertex could well be worth it: quicker uploads to VRAM and also less memory used. But the cost is the extra shader work for every vertex to convert each 8 bits of the input UINT into a float. Does anyone have an idea whether it might be worth doing? Or is it possible for the pipeline to be set to just internally use a four-byte colour format? The swap-chain buffer has been initialised as DXGI_FORMAT_R8G8B8A8_UNORM, so ultimately that's how the colour will be written. Thanks!
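
    A sketch of the usual answer: the input assembler can do the conversion for free. Declare the colour element as DXGI_FORMAT_R8G8B8A8_UNORM over a 4-byte field, and the vertex shader still receives a normalized float4 with no manual unpacking. The offsets below assume a 12-byte float3 position comes first:

        #include <d3d11.h>

        // 16-byte vertex: float3 position + one 4-byte RGBA colour.
        D3D11_INPUT_ELEMENT_DESC layout[] = {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
              D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "COLOR",    0, DXGI_FORMAT_R8G8B8A8_UNORM,  0, 12,
              D3D11_INPUT_PER_VERTEX_DATA, 0 },
        };
        // HLSL side stays unchanged:
        //   struct VSIn { float3 pos : POSITION; float4 color : COLOR; };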

    Read the article

  • Separate shaders from HTML file in WebGL

    - by Chris Smith
    I'm ramping up on WebGL and was wondering what the best way is to specify my vertex and fragment shaders. Looking at some tutorials, the shaders are embedded directly in the HTML (and referenced via an ID). For example:

        <script id="shader_1-fs" type="x-shader/x-fragment">
            precision highp float;
            void main(void) {
                // ...
            }
        </script>

        <script id="shader_1-vs" type="x-shader/x-vertex">
            attribute vec3 aVertexPosition;
            uniform mat4 uMVMatrix;
            // ...

    My question is: is it possible to have my shaders referenced in a separate file (ideally as plain text)? I presume this is straightforward in JavaScript. Is there essentially a way to do this:

        var shaderText = LoadRemoteFileOnServer('/shaders/shader_1.txt');

    Read the article

  • Can't load vector font in Nuclex Framework

    - by ProgrammerAtWork
    I've been trying to get this to work for the last 2 hours and I'm not seeing what I'm doing wrong... I've added Nuclex.TrueTypeImporter to the references in my content project, and I've added Nuclex.Fonts & Nuclex.Graphics to my main project. I've put Arial-24-Vector.spritefont & Lindsey.spritefont in the root of my content directory.

        _spriteFont = Content.Load<SpriteFont>("Lindsey");        // works
        _testFont = Content.Load<VectorFont>("Arial-24-Vector");  // crashes

    I get this error on the _testFont line: "File contains Microsoft.Xna.Framework.Graphics.SpriteFont but trying to load as Nuclex.Fonts.VectorFont." So I've searched around, and by the looks of it, it has something to do with the content importer & the content processor. For the content importer I have no new choices, so I leave it as it is, "Sprite Font Description - XNA Framework", and for the content processor I select "Vector Font - Nuclex Framework". Then I try to run it:

        _testFont = Content.Load<VectorFont>("Arial-24-Vector");  // crashes again

    I get the following error: Error loading "Arial-24-Vector". It does work if I load a sprite, so it's not a pathing problem. I've checked the samples; they do work, but I think they also use a different version of the XNA framework, because in my version the "Content" class starts with a capital letter. I'm at a loss, so I ask here.
    EDIT: Something super weird is going on. I've just added the following two lines to a method inside FreeTypeFontProcessor::FreeTypeFontProcessor(Microsoft::Xna::Framework::Content::Pipeline::Graphics::FontDescription ^fontDescription, FontHinter hinter, ...), just to check if the code would even get there:

        System::Console::WriteLine("I AM HEEREEE");
        System::Console::ReadLine();

    So, I compile it, put it in my project, I run it and... it works! What the hell?? This is weird, because I downloaded the binaries and they didn't work; I compiled the binaries myself and that didn't work either; but now I make a small change to the code and it works? So, now I remove the two lines, compile it again, and it works again. Would someone care to elaborate on what is going on? Probably some weird caching problem!

    Read the article

  • How do you author HDR content?

    - by Nathan Reed
    How do you make it easy for your artists to author content for an HDR renderer? What kinds of tools should you provide, and what workflows need to change, in going from LDR to HDR? Note that I'm not asking about the technical aspects of implementing an HDR renderer, but about best practices for creating materials and lighting in HDR. I've googled around a bit, but there doesn't seem to be much about this topic on the web. Can anyone point me to some good resources on this, or share their own experiences? Some specific points:
    - Lighting: how can lighting artists pick HDR light colors? Do they have a standard LDR color picker and then a multiplier? Is the multiplier in gamma or linear space? Maybe instead of a multiplier it's a log-luminance? Or a physical brightness level, like the number of lumens? How will they know what multiplier/luminance/brightness is "correct" for a given light?
    - Materials: how can texture artists make emissive color maps, such as neon signs, TV screens, skyboxes, etc.? Can you paint one as a regular LDR (8-bit-per-channel) image and apply a multiplier (or log-luminance, etc.)? Are there cases where it's necessary to actually paint HDR images? If so, how do you go about this in Photoshop (or other software)?
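
    For the lighting point, one convention that comes up often (offered as an assumption about a possible pipeline, not as standard practice) is an LDR color picker plus a separate intensity multiplier applied in linear space, since light arithmetic only behaves physically there:

        #include <glm/glm.hpp>

        // Artist picks an 8-bit sRGB color and a linear intensity;
        // convert gamma -> linear before scaling.
        glm::vec3 hdrLightColor(const glm::vec3& srgbColor /* 0..1 */, float intensity)
        {
            glm::vec3 linear = glm::pow(srgbColor, glm::vec3(2.2f));
            return linear * intensity;   // intensity in linear units (or derived from lumens)
        }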

    Read the article

  • Shadow Mapping and Transparent Quads

    - by CiscoIPPhone
    Shadow mapping uses the depth buffer to calculate where shadows should be drawn. My problem is that I'd like some semi-transparent textured quads to cast shadows, for example billboarded trees. As the depth value will be set across the whole quad, and not just the visible parts, it will cast a quad-shaped shadow, which is not what I want. How can I make my transparent quads cast correct shadows using shadow mapping?
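
    The widely used approach is alpha testing in the shadow-map pass: bind the quad's texture while rendering the shadow map and reject fragments whose alpha is below a cutoff, so transparent texels never write depth. A minimal CPU-side sketch of the per-texel test (in a shader this would be the discard/clip path); the RGBA8 layout is an assumption:

        // Alpha of one texel in a tightly packed RGBA8 texture.
        float sampleAlpha(const unsigned char* rgba, int texW, int u, int v)
        {
            return rgba[(v * texW + u) * 4 + 3] / 255.0f;
        }

        // Transparent texels cast no shadow (write no depth).
        bool castsShadow(const unsigned char* rgba, int texW, int u, int v,
                         float alphaCutoff = 0.5f)
        {
            return sampleAlpha(rgba, texW, u, v) >= alphaCutoff;
        }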

    Read the article

  • Loading a new instance of a class through XML not working quite right

    - by Thegluestickman
    I'm having trouble with XML and XNA. I want to be able to load weapon settings through XML, to make my weapons easier to make and to have less code in the actual project file. So I started out making a basic XML document, something to just assign variables with. But no matter what I changed, it gave me a new error every time. The code below gives me "XML element 'Tag' not found"; when I added it, it started to say the variables weren't found. What I also wanted to do in the XML file was load a texture. So I created a static class to hold my texture values; then, in the Texture tag of my XML document, I would set it to that instance too. I think that's where the problems are occurring, because that's where the "XML element 'Tag' not found" error is pointing me. My XML document:

        <XnaContent>
          <Asset Type="ConversationEngine.Weapon">
            <weaponStrength>0</weaponStrength>
            <damageModifiers>0</damageModifiers>
            <speed>0</speed>
            <magicDefense>0</magicDefense>
            <description>0</description>
            <identifier>0</identifier>
            <weaponTexture>LoadWeaponTextures.ironSword</weaponTexture>
          </Asset>
        </XnaContent>

    My class to load the weapon XML:

        public static class LoadWeaponXML
        {
            static Weapon Weapons;

            public static Weapon WeaponLoad(ContentManager content, int id)
            {
                Weapons = content.Load<Weapon>(@"Weapons/" + id);
                return Weapons;
            }
        }

        public static class LoadWeaponTextures
        {
            public static Texture2D ironSword;

            public static void TextureLoad(ContentManager content)
            {
                ironSword = content.Load<Texture2D>("Sword");
            }
        }

    I'm not entirely sure if you can load textures through XML, but any help would be greatly appreciated.

    Read the article

  • Creating blur with an alpha channel, incorrect inclusion of black

    - by edA-qa mort-ora-y
    I'm trying to do a blur on a texture with an alpha channel. Using a typical approach (two-pass, Gaussian weighting) I end up with a very dark blur. The reason is that the blurring does not properly account for the alpha channel: it happily blurs in the invisible part of the image, which happens to be black, and thus results in a very dark blur. Is there a technique to blur that properly accounts for the alpha channel?
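
    The standard fix is blurring premultiplied color: weight each tap's RGB by its own alpha so fully transparent (black) texels contribute nothing, then divide by the blurred alpha at the end. A CPU-side sketch with glm (in a shader the same arithmetic applies per sample):

        #include <glm/glm.hpp>

        glm::vec4 blurPremultiplied(const glm::vec4* taps, const float* weights, int n)
        {
            glm::vec4 sum(0.0f);
            for (int i = 0; i < n; ++i) {
                const glm::vec4& t = taps[i];
                // Premultiply: invisible texels add zero color.
                sum += weights[i] * glm::vec4(t.r * t.a, t.g * t.a, t.b * t.a, t.a);
            }
            if (sum.a > 0.0f)
                sum = glm::vec4(glm::vec3(sum) / sum.a, sum.a); // un-premultiply
            return sum;
        }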

    Read the article

  • Rotate sprite to face 3D camera

    - by omikun
    I am trying to rotate a sprite so it always faces a 3D camera.

        shaders->setUniform("camera", gCamera.matrix());
        glm::mat4 scale = glm::scale(glm::mat4(), glm::vec3(5e5, 5e5, 5e5));
        glm::vec3 look = gCamera.position();
        glm::vec3 right = glm::cross(gCamera.up(), look);
        glm::vec3 up = glm::cross(look, right);
        glm::mat4 newTransform = glm::lookAt(glm::vec3(0), gCamera.position(), up) * scale;
        shaders->setUniform("model", newTransform);

    In the vertex shader:

        gl_Position = camera * model * vec4(vert, 1);

    The object will track the camera if I move the camera up or down, but if I rotate the camera around it, it rotates in the other direction, so I end up seeing its front twice and its back twice as I rotate around it 360 degrees. What am I doing wrong?
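
    A hedged diagnosis: glm::lookAt builds a view matrix, i.e. the inverse of a placement, so using it as a model matrix yields rotation in the opposite direction. A sketch that builds the placement directly by writing the basis vectors into the matrix columns; spritePos is an assumed sprite position:

        #include <glm/glm.hpp>

        glm::mat4 billboardModel(const glm::vec3& spritePos,
                                 const glm::vec3& cameraPos,
                                 const glm::vec3& cameraUp)
        {
            glm::vec3 look  = glm::normalize(cameraPos - spritePos);
            glm::vec3 right = glm::normalize(glm::cross(cameraUp, look));
            glm::vec3 up    = glm::cross(look, right);

            glm::mat4 model(1.0f);
            model[0] = glm::vec4(right,     0.0f); // column 0: sprite's x axis
            model[1] = glm::vec4(up,        0.0f); // column 1: sprite's y axis
            model[2] = glm::vec4(look,      0.0f); // column 2: sprite's z axis
            model[3] = glm::vec4(spritePos, 1.0f); // column 3: translation
            return model; // multiply by the scale matrix afterwards, as before
        }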

    Read the article
