Search Results

Search found 29201 results on 1169 pages for 'game development'.

Page 507/1169 | < Previous Page | 503 504 505 506 507 508 509 510 511 512 513 514  | Next Page >

  • What are the GPU requirements for XNA 4.0?

    - by Nate Koppenhaver
    I tried to build a sample application using XNA, but I got an error saying that Pixel Shader 1.1 was required, so I got a used Radeon X300 GPU that supports pixel shaders. I tried to build it again, but I got another error saying that "Your current graphics card does not support the XNA HiDef profile", and the project would not build. Since that card seems to not be compatible, I guess I need to buy another one. What features should I look for to make sure that it's compatible with XNA?
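
    For illustration, a minimal runtime check of which profile the installed adapter supports; this is a sketch assuming a plain console entry point, and the comments reflect the usual rule of thumb (HiDef needs a DirectX 10-class card, Reach a Shader Model 2.0-class card):

        // Sketch: query which XNA 4.0 graphics profile the default adapter supports.
        using System;
        using Microsoft.Xna.Framework.Graphics;

        class ProfileCheck
        {
            static void Main()
            {
                GraphicsAdapter adapter = GraphicsAdapter.DefaultAdapter;

                if (adapter.IsProfileSupported(GraphicsProfile.HiDef))
                    Console.WriteLine("HiDef supported");
                else if (adapter.IsProfileSupported(GraphicsProfile.Reach))
                    Console.WriteLine("Only Reach supported - build against the Reach profile");
                else
                    Console.WriteLine("Neither XNA profile is supported on this adapter");
            }
        }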

    Read the article

  • Which K-factor should be used in case players have different K-factor values

    - by DmitryN
    I am implementing a ranking system based on the Elo rating and cannot figure out one point about the K-factor. If two players with different skills and, therefore, different K-factors are playing, which K-factor exactly should be used when changing their ratings? For example, player A has rating 2500 and K-factor 16 (win probability ~75%) while player B has rating 2300 and K-factor 24 (win probability ~25%). If player A wins, do I need to use 16 as the K-factor for both players, or 16 for player A and 24 for player B?
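
    For illustration, a minimal sketch assuming the common convention that each player's own K-factor scales only that player's rating change (so A would be updated with 16 and B with 24):

        // Sketch: each player's rating change uses that player's own K-factor.
        // scoreA is 1.0 if A wins, 0.5 for a draw, 0.0 if A loses.
        static (double newRatingA, double newRatingB) UpdateElo(
            double ratingA, double kA, double ratingB, double kB, double scoreA)
        {
            double expectedA = 1.0 / (1.0 + Math.Pow(10.0, (ratingB - ratingA) / 400.0));
            double expectedB = 1.0 - expectedA;

            double newRatingA = ratingA + kA * (scoreA - expectedA);
            double newRatingB = ratingB + kB * ((1.0 - scoreA) - expectedB);
            return (newRatingA, newRatingB);
        }

    With the numbers from the example (2500 vs. 2300, A wins), expectedA is about 0.76, so A would gain roughly 16 × 0.24 ≈ 3.8 points and B would lose roughly 24 × 0.24 ≈ 5.8.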

    Read the article

  • XNA matrix order problem

    - by user1990950
    I want a matrix that scales first and then rotates. I tried the code below, but it didn't work. zRotation, yRotation and xRotation are rotations that shouldn't be affected by the origin; allrot should be affected. xScale, yScale and zScale are the scaling variables. The code below works, except that it rotates and then scales.

        Matrix worldMatrix =
            (
                Matrix.CreateRotationZ(MathHelper.ToRadians(zRotation)) *
                Matrix.CreateRotationX(MathHelper.ToRadians(xRotation)) *
                Matrix.CreateRotationY(MathHelper.ToRadians(yRotation))
            ) *
            (
                Matrix.CreateTranslation(origin) *
                Matrix.CreateRotationY(MathHelper.ToRadians(allrot)) *
                Matrix.CreateScale(xScale, yScale, zScale)
            );
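
    A note on ordering, with a sketch: XNA transforms row vectors as v * M, so the leftmost factor in a matrix product is applied to the vertex first. To scale before any rotation, the scale matrix therefore has to be the leftmost term. One possible rearrangement using the same variable names (where exactly CreateTranslation(origin) belongs depends on what origin represents here):

        // Sketch: scale first, then the origin-independent rotations,
        // then the origin translation and the origin-relative rotation.
        Matrix worldMatrix =
            Matrix.CreateScale(xScale, yScale, zScale) *
            Matrix.CreateRotationZ(MathHelper.ToRadians(zRotation)) *
            Matrix.CreateRotationX(MathHelper.ToRadians(xRotation)) *
            Matrix.CreateRotationY(MathHelper.ToRadians(yRotation)) *
            Matrix.CreateTranslation(origin) *
            Matrix.CreateRotationY(MathHelper.ToRadians(allrot));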

    Read the article

  • How can I place a ProgressBar in Android using Cocos 2d?

    - by Laxmipriya
    I want to place a horizontal progress bar in my Android application and I want to change its progress color. I used the following code, but the progress bar is not being displayed.

        CCProgressTimer progressBar = CCProgressTimer.progress("progressbar.png");
        progressBar.setType(kCCProgressTimerTypeHorizontalBarLR);
        progressBar.setScale(5);
        progressBar.setAnchorPoint(CGPoint.ccp(0, 0));
        progressBar.setPosition(CGPoint.ccp(0, 0));
        addChild(progressBar);

    Read the article

  • Texture mapping on gluDisk

    - by Marnix
    I'm trying to map a brick texture on the edge of a fountain and I'm using gluDisk for that. How can I generate the right texture coordinates for the disk? The only approach I have found so far is sphere mapping, which makes the texture follow the camera. I want the brick texture to run along the side of the fountain, but gluDisk does a linear mapping. How do I get a circular mapping? My code looks like this:

        void Fountain::Draw() {
            glPushMatrix();          // push 1
            this->ApplyWorldMatrixGL();
            glEnable(GL_TEXTURE_2D); // enable texturing

            glPushMatrix();          // push 2
            glRotatef(90, -1, 0, 0); // rotate 90 for the quadric
            // also drawing more here...

            // stone texture
            glBindTexture(GL_TEXTURE_2D, texIDs[0]);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

            glPushMatrix();          // push 3
            glTranslatef(0, 0, height);

            // spherical texture generation
            // this piece of code doesn't work as I intended
            glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
            glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
            glEnable(GL_TEXTURE_GEN_S);
            glEnable(GL_TEXTURE_GEN_T);

            GLUquadric *tub = gluNewQuadric();
            gluQuadricTexture(tub, GL_TRUE);
            gluDisk(tub, radius, outerR, nrVertices, nrVertices);
            gluDeleteQuadric(tub);

            glDisable(GL_TEXTURE_GEN_S);
            glDisable(GL_TEXTURE_GEN_T);
            glPopMatrix();           // pop 3

            // more drawing here...
            glPopMatrix();           // pop 2

            // more drawing here...
            glPopMatrix();           // pop 1
        }

    To refine my question a bit: this is an image of what it looks like by default (left) and of what I want (right). The texture should fit within the border of the disk, repeated many times around the ring. If this is possible with the texture matrix, then that's fine with me as well.

    Read the article

  • 2D Rendering with OpenGL ES 2.0 on Android (matrices not working)

    - by TranquilMarmot
    So I'm trying to render two moving quads, each at a different location. My shaders are as simple as possible (vertices are only transformed by the modelview-projection matrix, and there's only one color). Whenever I try to render something, I only end up with slivers of color! I've only done 3D rendering in OpenGL before, so I'm having issues with 2D stuff. Here's my basic rendering loop, simplified a bit (I'm using the matrix manipulation methods provided by android.opengl.Matrix, and program is a custom class I created that just calls GLES20.glUniformMatrix4fv()):

        Matrix.orthoM(projection, 0, 0, windowWidth, 0, windowHeight, -1, 1);
        program.setUniformMatrix4f("Projection", projection);

    At this point, I render the quads (this is repeated for each quad):

        Matrix.setIdentityM(modelview, 0);
        Matrix.translateM(modelview, 0, quadX, quadY, 0);
        program.setUniformMatrix4f("ModelView", modelview);
        quad.render(); // calls glDrawArrays

    ...and all I see is a sliver of the color each quad is! I'm at my wits' end here; I've tried everything I can think of and I'm at the point where I'm screaming at my computer and tossing phones across the room. Anybody got any pointers? Am I using ortho wrong? I'm 100% sure I'm rendering everything at a Z value of 0. I tried using frustumM instead of orthoM, which made it so that I could see the quads, but they would get totally skewed whenever they got moved, which makes sense if I correctly understand the way frustum works (it's more for 3D rendering, anyway). If it makes any difference, I defined my viewport with

        GLES20.glViewport(0, 0, windowWidth, windowHeight);

    where windowWidth and windowHeight are the same values that are passed to orthoM. It might be worth noting that the android.opengl.Matrix methods take an offset as the second parameter so that multiple matrices can be packed into one array; that's what the first 0 is for. For reference, here's my vertex shader code:

        uniform mat4 ModelView;
        uniform mat4 Projection;
        attribute vec4 vPosition;

        void main() {
            mat4 mvp = Projection * ModelView;
            gl_Position = vPosition * mvp;
        }

    I tried swapping Projection * ModelView with ModelView * Projection, but now I just get some really funky looking shapes...

    EDIT: Okay, I finally figured it out! (Note: since I'm new here (longtime lurker!) I can't answer my own question for a few hours, so as soon as I can I'll move this into an actual answer to the question.) I changed

        Matrix.orthoM(projection, 0, 0, windowWidth, 0, windowHeight, -1, 1);

    to

        float ratio = windowWidth / windowHeight;
        Matrix.orthoM(projection, 0, 0, ratio, 0, 1, -1, 1);

    I then had to scale my projection matrix to make it a lot smaller with Matrix.scaleM(projection, 0, 0.05f, 0.05f, 1.0f);. I then added an offset to the modelview translations to simulate a camera, so that I could center on my action (so Matrix.translateM(modelview, 0, quadX, quadY, 0); was changed to Matrix.translateM(modelview, 0, quadX + camX, quadY + camY, 0);). Thanks for the help, all!

    Read the article

  • Importing an object from Blender into a scene, rotation on X axis?

    - by Arne
    This is my situation: I save the scene from Blender directly (no export, no processing steps). Blender has x right, y up, -z into the scene for the view coordinates (OpenGL). I have x right, y up, -z into the scene for the view coordinates (OpenGL). Blender has the x/y plane with z up as world coordinates. I have the x/y plane with z up as world coordinates. I load the mesh with Assimp directly from the .blend file with absolutely no post processing. The object is rotated by about π/2 on the X axis. Why?

    Read the article

  • How to Draw texture between 2 Vector3

    - by Sparky41
    My scenario: RTS combat style, one unit fires a beam at another unit. My problem is that I want to draw a flat texture between 2 Vector3 points. I have looked at various billboarding styles, but that doesn't give me a proper solution. I looked at this: http://msdn.microsoft.com/en-us/library/bb464051.aspx Are BasicEffect and DrawPrimitives the correct solution, just stretching the texture over the distance from the point of origin to the target? I used the quad class from that sample, but it seemed too inflexible. So my question to you is: how would I go about this sort of problem? Regards
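
    For illustration, a sketch of the usual approach: build a quad whose long axis runs from the firing unit to the target and whose width axis is perpendicular to both the beam and the view direction, then draw it with BasicEffect. All of the variables here (start, end, cameraPosition, beamWidth, beamTexture, basicEffect, graphicsDevice, viewMatrix, projectionMatrix) are illustrative assumptions:

        // Sketch: a textured quad stretched from 'start' to 'end', facing the camera.
        Vector3 dir = end - start;
        Vector3 toCamera = Vector3.Normalize(cameraPosition - start);
        // Width axis: perpendicular to both the beam direction and the view direction.
        Vector3 side = Vector3.Normalize(Vector3.Cross(dir, toCamera)) * (beamWidth * 0.5f);

        VertexPositionTexture[] verts =
        {
            new VertexPositionTexture(start - side, new Vector2(0f, 0f)),
            new VertexPositionTexture(start + side, new Vector2(0f, 1f)),
            new VertexPositionTexture(end - side,   new Vector2(1f, 0f)),
            new VertexPositionTexture(end + side,   new Vector2(1f, 1f)),
        };

        basicEffect.World = Matrix.Identity;
        basicEffect.View = viewMatrix;
        basicEffect.Projection = projectionMatrix;
        basicEffect.TextureEnabled = true;
        basicEffect.Texture = beamTexture;

        graphicsDevice.RasterizerState = RasterizerState.CullNone; // two-sided, ignore winding

        foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
        {
            pass.Apply();
            graphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, verts, 0, 2);
        }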

    Read the article

  • View matrix in opengl

    - by user5584
    Hi! Sorry for my clumsy question, but I don't know where I'm going wrong creating a view matrix. I have the following code:

        createMatrix(vec4f( xAxis.x,   xAxis.y,   xAxis.z,   dot(xAxis, eye)),
                     vec4f( yAxis.x_,  yAxis.y_,  yAxis.z_,  dot(yAxis, eye)),
                     vec4f(-zAxis.x_, -zAxis.y_, -zAxis.z_, -dot(zAxis, eye)),
                     vec4f( 0, 0, 0, 1)); // column1, column2, ...

    I have tried to transpose it, but with no success. I have also tried to use gluLookAt(...), which did work. On the reference page I looked at the remarks about the matrix it creates, and it seems the same as mine. Where am I wrong?

    Read the article

  • "Marching cubes" voxel terrain - triplanar texturing with depth?

    - by Dan the Man
    I am currently working on a voxel terrain that uses the marching cubes algorithm for polygonizing the scalar field of voxels. I am using a triplanar texturing shader. Say I have a grass texture set to the Y axis and a dirt texture for both the X and Z axes. Now, when my player digs downwards, it still appears as grass. How would I make it appear as dirt? I have been thinking about this for a while, and the only thing I can think of to achieve this effect would be to mark vertices that have been dug with a certain vertex color. When a vertex has that color, the shader would apply the dirt texture to it. Is there a better method?

    Read the article

  • AABB Sweeping, algorithm to solve "stacking box" problem

    - by Ivo Wetzel
    I'm currently working on a simple AABB collision system, and after some fiddling, the sweeping of a single box vs. another and the calculation of the response velocity needed to push them apart work flawlessly. Now on to the new problem: imagine I have a stack of boxes which are falling towards a ground box which isn't moving: Each of these boxes has a vertical velocity for the "gravity" value; let's say this velocity is 5. Now, the result is that they all fall into each other: The reason is obvious: since all the boxes have a downward velocity of 5, this results in no collisions when calculating the relative velocity between the boxes during sweeping. Note: the red ground box here is static (always 0 velocity, can utilize spatial partitioning), and all dynamic vs. static collisions are resolved first, hence the boxes stopping correctly at this ground box. So, this seems to be simply an issue with the order the boxes are swept against each other. I imagine that sorting the boxes based on their x and y velocities and then sweeping these groups correctly against each other may resolve this issue. So, I'm looking for algorithms / examples on how to implement such a system. The code can be found here: https://github.com/BonsaiDen/aabb The two files of interest are box/Dynamic.lua and box/Manager.lua. The project uses Love2D in case you want to run it.
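
    One possible direction, sketched in C# for illustration (the linked project is Lua, and Box, dynamicBoxes, Overlaps and ResolvePenetration below are hypothetical helpers): after the dynamic-vs-static pass, sort the dynamic boxes along the gravity direction and resolve each box only against boxes that have already been settled this step, treating those as immovable.

        // Sketch: resolve stacked boxes bottom-up (assuming Y increases upward) so
        // each box rests on boxes already resolved against the ground this frame.
        List<Box> sorted = dynamicBoxes.OrderBy(b => b.Position.Y).ToList();

        for (int i = 0; i < sorted.Count; i++)
        {
            for (int j = 0; j < i; j++) // only boxes below / already settled
            {
                if (Overlaps(sorted[i], sorted[j]))
                {
                    // Push the upper box out along the axis of least penetration
                    // and zero its velocity on that axis; the lower box stays put.
                    ResolvePenetration(sorted[i], sorted[j]);
                }
            }
        }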

    Read the article

  • Procedural... house with rooms generator

    - by pek
    I've been looking at some algorithms and articles about procedurally generating a dungeon. The problem is, I'm trying to generate a house with rooms, and they don't seem to fit my requirements. For one, dungeons have corridors, where houses have halls. And while initially they might seem the same, a hall is nothing more than the area that isn't a room, whereas a corridor is specifically designed to connect one area to another. Another important difference with a house is that you have a specific width and height, and you have to fill the entire thing with rooms and halls, whereas with a dungeon there is empty space. I think halls in a house are something in between a dungeon corridor (they get you to other rooms) and empty space in a dungeon (they're not explicitly defined in code). More specifically, the requirements are:

        - There is a set of predefined rooms: I cannot create walls and doors on the fly.
        - Rooms can be rotated but not resized: again, because I have a predefined set of rooms, I can only rotate them, not resize them.
        - The house dimensions are set and have to be entirely filled with rooms (or halls): i.e. I want to fill a 14x20 house with the available rooms, making sure there is no empty space.

    Here are some images to make this a little more clear: As you can see, in the house, the "empty space" is still walkable and it gets you from one room to another. So, having said all this, maybe a house is just a really, really tightly packed dungeon with corridors. Or it's something easier than a dungeon. Maybe there is something out there and I haven't found it because I don't really know what to search for. This is where I'd like your help: could you give me pointers on how to design this algorithm? Any thoughts on what steps it will take? If you have created a dungeon generator, how would you modify it to fit my requirements? You can be as specific or as generic as you like. I'm looking to pick your brains, really.
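
    As a very rough starting point, a sketch of a grid-based filler: place the predefined rooms greedily, largest first, trying both rotations, and treat every cell that stays unoccupied as hall space. The Room type, the grid size, and the greedy order are illustrative assumptions, not a complete generator:

        using System.Collections.Generic;
        using System.Linq;

        class Room { public int Width; public int Height; }

        class HouseGenerator
        {
            const int HouseWidth = 14, HouseHeight = 20;   // fixed house footprint
            readonly bool[,] occupied = new bool[HouseWidth, HouseHeight];

            // Returns the placed rooms; every cell still false in 'occupied' is hall space.
            public List<(Room room, int x, int y, bool rotated)> Fill(IEnumerable<Room> rooms)
            {
                var placed = new List<(Room room, int x, int y, bool rotated)>();
                // Try big rooms first so smaller ones can fill the leftover gaps.
                foreach (Room room in rooms.OrderByDescending(r => r.Width * r.Height))
                {
                    foreach (bool rotated in new[] { false, true })
                    {
                        int w = rotated ? room.Height : room.Width;
                        int h = rotated ? room.Width : room.Height;
                        if (TryPlace(room, w, h, rotated, placed)) break;
                    }
                }
                return placed;
            }

            bool TryPlace(Room room, int w, int h, bool rotated,
                          List<(Room room, int x, int y, bool rotated)> placed)
            {
                for (int x = 0; x + w <= HouseWidth; x++)
                    for (int y = 0; y + h <= HouseHeight; y++)
                        if (AreaFree(x, y, w, h))
                        {
                            for (int i = x; i < x + w; i++)
                                for (int j = y; j < y + h; j++)
                                    occupied[i, j] = true;
                            placed.Add((room, x, y, rotated));
                            return true;
                        }
                return false;
            }

            bool AreaFree(int x, int y, int w, int h)
            {
                for (int i = x; i < x + w; i++)
                    for (int j = y; j < y + h; j++)
                        if (occupied[i, j]) return false;
                return true;
            }
        }

    A purely greedy pass like this will usually leave awkward gaps, so a real generator would add backtracking or scoring (for example, preferring placements that keep the remaining hall area connected), but it shows the basic bookkeeping.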

    Read the article

  • Can frequent state changes decrease rendering performance?

    - by Miro
    Can frequent texture and shader binding decrease rendering performance?

    "Frequent" binding example:

        for each object
            for each material in the object
                render the part of the object that uses that material

    "Low count" binding example:

        for each material
            for each object that uses the material
                render the part of the object that uses that material

    I'm planning to use an octree later, and with the "low count" method of rendering it can drastically increase memory consumption. So is it a good idea?
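
    For illustration, a sketch of the usual middle ground: keep the per-object traversal (and the octree), but group the visible draw calls by material each frame so each shader/texture is bound once. The Material and MeshPart types and the Bind/Draw calls are illustrative assumptions:

        // Sketch: group draw calls by material so state changes happen once per
        // material instead of once per object part (requires System.Linq).
        var drawCalls = new List<(Material material, MeshPart part)>();
        foreach (var obj in visibleObjects)          // e.g. the result of an octree query
            foreach (var part in obj.Parts)
                drawCalls.Add((part.Material, part));

        foreach (var group in drawCalls.GroupBy(call => call.material))
        {
            group.Key.Bind();                        // bind shader + textures once
            foreach (var (_, part) in group)
                part.Draw();                         // only per-object uniforms change here
        }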

    Read the article

  • Unity Android: Truecolor texture performance hit and alternatives for truecolor

    - by Esa
    After integrating the graphics assets into my application, I noticed that when the textures are compressed they look very bad compared to truecolor. This happens to all the textures; changing the texture type to GUI did not seem to help, nor did switching 32-bit display buffering on. Does using truecolor textures make the application much heavier to run, or does it just increase the size of the .APK? Are there alternatives that give good texture quality at a smaller texture size instead of using truecolor?

    Read the article

  • 3DS Max 2012 OBJ file import missing polygons

    - by Vit
    I started learning OpenGL. I got to a point where I want to import some "real" objects. After Googling, I decided I will go with the OBJ file format for a start, since it is simple to understand and there are plenty of tutorials on how to read it properly. I have access to 3DS Max 2012 from university, so I tried to create a very simple model (just a deformed cube) and export it to an OBJ file, with just vertices and triangles for the moment, without textures, so I can examine its structure by myself. But when I imported it right back into 3DS from the OBJ file, it renders somewhat strangely, as if it is smoothed, and with a light source, even though I have none in the scene. But the geometry, its wireframe, is intact. So I thought maybe it is a problem of exporting only vertices and triangles, so I downloaded an Enterprise-D model from the internet, exported it with everything on (normals, textures, everything), and imported it again. Now, some polygons are missing. So, I want to ask: am I doing something terribly wrong, or is there some incompatibility issue between .max and .obj files, even for a simple textured model without any light sources, animation, etc.? Thanks. Edit: I tried the objects with MeshLab; the first one, the deformed cube, was absolutely OK. But it still bothers me that 3DS Max doesn't render it properly. In the Enterprise-D model, there are polygons missing even in MeshLab. I uploaded a rar archive with the .max model of the Enterprise, the same model exported from 3DS as .obj, and the obj model of the deformed cube. Download here (2.5 MB, filesonic).

    Read the article

  • Slick 2D first trial error

    - by pringlesinn
    I followed some advice to learn Slick2D, and when I started doing the "SimpleGame" I got my first error. Does anyone have any idea what it is and how to fix it?

        Sun Dec 26 23:09:12 GMT-03:00 2010 INFO:Slick Build #274
        Sun Dec 26 23:09:12 GMT-03:00 2010 INFO:LWJGL Version: 2.0b1
        Sun Dec 26 23:09:12 GMT-03:00 2010 INFO:OriginalDisplayMode: 1024 x 768 x 16 @60Hz
        Sun Dec 26 23:09:12 GMT-03:00 2010 INFO:TargetDisplayMode: 800 x 600 x 0 @0Hz
        Sun Dec 26 23:09:12 GMT-03:00 2010 ERROR:Could not find a valid pixel format
        org.lwjgl.LWJGLException: Could not find a valid pixel format
            at org.lwjgl.opengl.WindowsPeerInfo.nChoosePixelFormat(Native Method)
            at org.lwjgl.opengl.WindowsPeerInfo.choosePixelFormat(WindowsPeerInfo.java:52)
            at org.lwjgl.opengl.WindowsDisplayPeerInfo.initDC(WindowsDisplayPeerInfo.java:54)
            at org.lwjgl.opengl.WindowsDisplay.createWindow(WindowsDisplay.java:158)
            at org.lwjgl.opengl.Display.createWindow(Display.java:299)
            at org.lwjgl.opengl.Display.create(Display.java:848)
            at org.lwjgl.opengl.Display.create(Display.java:800)
            at org.newdawn.slick.AppGameContainer.tryCreateDisplay(AppGameContainer.java:299)
            at org.newdawn.slick.AppGameContainer.access$000(AppGameContainer.java:34)
            at org.newdawn.slick.AppGameContainer$2.run(AppGameContainer.java:364)
            at java.security.AccessController.doPrivileged(Native Method)
            at org.newdawn.slick.AppGameContainer.setup(AppGameContainer.java:345)
            at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:314)
            at SimpleGame.main(SimpleGame.java:38)
        Exception in thread "main" org.newdawn.slick.SlickException: Failed to initialise the LWJGL display
            at org.newdawn.slick.AppGameContainer.setup(AppGameContainer.java:375)
            at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:314)
            at SimpleGame.main(SimpleGame.java:38)

    Read the article

  • Beat detection, weird detection

    - by Quincy
    I made this sound analyzer class to detect beats in songs (the code is on pastebin because of its size; I'll post it here if people would rather have that): pastebin.com/8PdgZPP3 But for some reason it's only detecting beats from around 637 seconds to around 641 seconds, and I have no idea why. I know the beats are being inserted from multiple bands, since I am finding duplicates, and it seems as if it's assigning a beat to each instant energy value in between those values. It's modeled after this: http://www.flipcode.com/misc/BeatDetectionAlgorithms.pdf So why won't the beats register properly?
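
    For comparison, a minimal sketch of the simplest scheme from that paper: flag a beat when the instant energy of the newest sample block exceeds the recent average energy by a tuning constant. The block size, history length, and sensitivity constant below are assumptions to be tuned, not values from the question's code:

        // Sketch of simple energy-based beat detection (flipcode-style).
        const int BlockSize = 1024;      // ~23 ms of mono samples at 44.1 kHz
        const int HistoryBlocks = 43;    // ~1 second of energy history
        const float Sensitivity = 1.3f;  // the paper's constant "C"; tune per song

        Queue<float> history = new Queue<float>();

        bool IsBeat(float[] block)       // one block of BlockSize samples in [-1, 1]
        {
            float instant = 0f;
            for (int i = 0; i < block.Length; i++)
                instant += block[i] * block[i];

            // Average energy over the last second (requires System.Linq for Average()).
            float average = history.Count > 0 ? history.Average() : instant;

            history.Enqueue(instant);
            if (history.Count > HistoryBlocks)
                history.Dequeue();

            return instant > Sensitivity * average;
        }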

    Read the article

  • Convert vector interpolation to quaternion interpolation? (Catmull-Rom)

    - by edA-qa mort-ora-y
    I have some existing code which does Catmull-Rom interpolation on two vectors (facing and up). I'm converting this to use quaternions instead (to replace the two vectors). Is there a general way to convert the vector-based interpolation to a quaternion one? The approach I'm using now is to extract the axis and angle from the quaternion, interpolate each of those independently, and convert back to a quaternion. Is there a more direct method?
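
    One more direct alternative, sketched below: run the ordinary Catmull-Rom interpolation on the four keyframe quaternions' components and renormalize the result. This is only an approximation of a true spherical spline (squad), but it behaves well when neighboring keys are close together. The sketch assumes XNA's Quaternion and MathHelper types, and q0..q3 are the four keyframes around the segment being interpolated:

        // Sketch: component-wise Catmull-Rom on quaternions, then renormalize.
        Quaternion CatmullRom(Quaternion q0, Quaternion q1, Quaternion q2, Quaternion q3, float t)
        {
            // Keep all keys in the same hemisphere as their neighbor so we
            // interpolate the short way around.
            if (Quaternion.Dot(q0, q1) < 0) q0 = Quaternion.Negate(q0);
            if (Quaternion.Dot(q2, q1) < 0) q2 = Quaternion.Negate(q2);
            if (Quaternion.Dot(q3, q2) < 0) q3 = Quaternion.Negate(q3);

            Quaternion result = new Quaternion(
                MathHelper.CatmullRom(q0.X, q1.X, q2.X, q3.X, t),
                MathHelper.CatmullRom(q0.Y, q1.Y, q2.Y, q3.Y, t),
                MathHelper.CatmullRom(q0.Z, q1.Z, q2.Z, q3.Z, t),
                MathHelper.CatmullRom(q0.W, q1.W, q2.W, q3.W, t));

            result.Normalize();
            return result;
        }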

    Read the article

  • How to control in the vertex shader where pixel ends up in the renderTarget?

    - by cubrman
    What if I have an arbitrary renderTarget that is smaller than the screen (say it is 1x1 pixel) and I want to make sure in the VertexShaderFunction that all my pixels end up exactly in that 1-pixel region? No matter what I do, they all seem to get culled at some point, though GraphicsDevice.Clear() works OK. Where is the top left corner of the renderTarget, vertex-shader-wise? I tried output.Position = (0,0,0,0) / (0,0,0,1) / (1,1,1,1) / (-0.5,0.5,0,1); NOTHING works! A fullscreen quad is not an option because I actually need to process geometry in the shaders to get the results I need.

    Read the article

  • Voxel terrain rendering with marching cubes

    - by JavaJosh94
    I was working on making procedurally generated terrain using normal cube-ish voxels (like Minecraft), but then I read about marching cubes and decided to convert to those. I managed to create a working marching cubes class, and cycling through the densities and everything in it seemed to be working, so I went on to work on actual terrain generation. I'm using XNA (C#) and a ported libnoise library to generate noise for the terrain generator. But instead of rendering smooth terrain, I get a 64x64 chunk (I specified 64 but can change it) of seemingly random marching cubes using different triangles. This is the code I'm using to generate a "chunk":

        public MarchingCube[, ,] getTerrainChunk(int size, float dMultiplyer, int stepsize)
        {
            MarchingCube[, ,] temp = new MarchingCube[size / stepsize, size / stepsize, size / stepsize];
            for (int x = 0; x < size; x += stepsize)
            {
                for (int y = 0; y < size; y += stepsize)
                {
                    for (int z = 0; z < size; z += stepsize)
                    {
                        float[] densities = {
                            (float)terrain.GetValue(x, y, z) * dMultiplyer,
                            (float)terrain.GetValue(x, y + stepsize, z) * dMultiplyer,
                            (float)terrain.GetValue(x + stepsize, y + stepsize, z) * dMultiplyer,
                            (float)terrain.GetValue(x + stepsize, y, z) * dMultiplyer,
                            (float)terrain.GetValue(x, y, z + stepsize) * dMultiplyer,
                            (float)terrain.GetValue(x, y + stepsize, z + stepsize) * dMultiplyer,
                            (float)terrain.GetValue(x + stepsize, y + stepsize, z + stepsize) * dMultiplyer,
                            (float)terrain.GetValue(x + stepsize, y, z + stepsize) * dMultiplyer
                        };
                        Vector3[] corners = {
                            new Vector3(x, y, z),
                            new Vector3(x, y + stepsize, z),
                            new Vector3(x + stepsize, y + stepsize, z),
                            new Vector3(x + stepsize, y, z),
                            new Vector3(x, y, z + stepsize),
                            new Vector3(x, y + stepsize, z + stepsize),
                            new Vector3(x + stepsize, y + stepsize, z + stepsize),
                            new Vector3(x + stepsize, y, z + stepsize)
                        };
                        if (x == 0 && y == 0 && z == 0)
                        {
                            temp[x / stepsize, y / stepsize, z / stepsize] = new MarchingCube(densities, corners, device);
                        }
                        temp[x / stepsize, y / stepsize, z / stepsize] = new MarchingCube(densities, corners);
                    }
                }
            }

    (terrain is Perlin noise generated using libnoise.) I'm sure there's probably an easy solution to this, but I've been drawing a blank for the past hour. I'm just wondering if the problem is how I'm reading in the data from the noise, or if I may be generating the noise wrong. Or maybe the noise is just not good for this kind of generation? If I'm reading it wrong, does anyone know the right way? The answers on Google were somewhat ambiguous, but I'm going to keep searching. Thanks in advance!

    Read the article

  • Playing NSF music in FMOD.net

    - by Tesserex
    So, as the title says, I want to be able to play NSF files using FMOD, because my project already uses FMOD and I'd rather not replace it. This will involve figuring out how existing players and emulators work and porting that. I haven't yet found an existing player that uses FMOD. My starting point is the MyNes source from http://sourceforge.net/projects/mynes/. There are two big steps between here and what I'm looking for:

        - MyNes plays from a ROM, not an NSF. So, I have to rip out the APU and get it to play NSF files.
        - The MyNes APU uses SlimDX, so I have to convert that to FMOD.NET.

    I am really stuck on how to go about either of these, because I'm not that familiar with audio formats and it's hard finding resources online. So here are a few questions:

        1. From what I can tell from the NSF spec at http://kevtris.org/nes/nsfspec.txt, it just contains the relevant memory section of the ROM, plus the header. If anyone can verify or correct this, that would be great.
        2. The emulator APU uses data from the rest of the emulator to play, including things like cycle counts. I'm not sure what replaces this in a standalone player. Can't I just load all the music data at once into a stream and play it?
        3. Joining #1 and #2, does the header data from the NSF substitute for some of the ROM data in the emulator code?
        4. Using FMOD, will I be following the usercreatedsound example for loading a stream? And does this format count as PCM? Specifically, MyNes says PCM8. Any tips on loading / playing the stream in FMOD are appreciated.

    As an aside, I don't really understand the loading / playing sections of the spec I linked at all; they seem to apply to 6502 systems / emulators only and not to my situation. I know it's a long shot for anyone here to have enough experience in this area to help, but anything you can provide is definitely appreciated. A link to an existing .NET library that does this would be even better, but I don't believe one exists.

    Read the article

  • Role of systems in entity systems architecture

    - by bio595
    I've been reading a lot about entity components and systems and have thought that the idea of an entity just being an ID is quite interesting. However, I don't know how this completely works with the components aspect or the systems aspect. A component is just a data object managed by some relevant system. A collision system uses some BoundsComponent together with a spatial data structure to determine if collisions have happened. All good so far, but what if multiple systems need access to the same component? Where should the data live? An input system could modify an entity's BoundsComponent, but the physics system(s) need access to the same component, as does some rendering system. Also, how are entities constructed? One of the advantages I've read so much about is flexibility in entity construction. Are systems intrinsically tied to a component? If I want to introduce some new component, do I also have to introduce a new system or modify an existing one? Another thing that I've read often is that the 'type' of an entity is inferred by what components it has. If my entity is just an ID, how can I know that my robot entity needs to be moved or rendered and thus modified by some system? Sorry for the long post (or at least it seems so from my phone screen)!
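
    For illustration, a minimal sketch of one common answer to "where does the data live": components are stored once, in per-type stores keyed by entity ID, and every system reads or writes through that store rather than owning the component. All names here are illustrative assumptions:

        using System;
        using System.Collections.Generic;

        // Sketch: a single component store shared by all systems.
        class ComponentStore
        {
            // component type -> (entity id -> component instance)
            readonly Dictionary<Type, Dictionary<int, object>> data =
                new Dictionary<Type, Dictionary<int, object>>();

            public void Add<T>(int entity, T component) where T : class
            {
                if (!data.TryGetValue(typeof(T), out Dictionary<int, object> byEntity))
                    data[typeof(T)] = byEntity = new Dictionary<int, object>();
                byEntity[entity] = component;
            }

            public T Get<T>(int entity) where T : class
            {
                return data.TryGetValue(typeof(T), out Dictionary<int, object> byEntity)
                       && byEntity.TryGetValue(entity, out object c) ? (T)c : null;
            }

            // An entity's "type" is implicit: a system simply asks for the entities
            // that currently have the components it cares about.
            public IEnumerable<int> EntitiesWith<T>() where T : class
            {
                return data.TryGetValue(typeof(T), out Dictionary<int, object> byEntity)
                       ? (IEnumerable<int>)byEntity.Keys : Array.Empty<int>();
            }
        }

    Under this arrangement, adding a new component type does not force a new system: a system only has to exist if something should actually act on that data each frame.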

    Read the article

  • First time shadow mapping problems

    - by user1294203
    I have implemented basic shadow mapping for the first time in OpenGL using shaders and I'm facing some problems. Below you can see an example of my rendered scene: The shadow mapping process I'm following is: I render the scene to the framebuffer using a view matrix from the light's point of view and the projection and model matrices used for normal rendering. In the second pass, I send the above MVP matrix from the light's point of view to the vertex shader, which transforms the position to light space. The fragment shader does the perspective divide and changes the position to texture coordinates. Here is my vertex shader:

        #version 150 core

        uniform mat4 ModelViewMatrix;
        uniform mat3 NormalMatrix;
        uniform mat4 MVPMatrix;
        uniform mat4 lightMVP;
        uniform float scale;

        in vec3 in_Position;
        in vec3 in_Normal;
        in vec2 in_TexCoord;

        smooth out vec3 pass_Normal;
        smooth out vec3 pass_Position;
        smooth out vec2 TexCoord;
        smooth out vec4 lightspace_Position;

        void main(void) {
            pass_Normal = NormalMatrix * in_Normal;
            pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz;
            lightspace_Position = lightMVP * vec4(scale * in_Position, 1.0);
            TexCoord = in_TexCoord;
            gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0);
        }

    And my fragment shader:

        #version 150 core

        struct Light {
            vec3 direction;
        };

        uniform Light light;
        uniform sampler2D inSampler;
        uniform sampler2D inShadowMap;

        smooth in vec3 pass_Normal;
        smooth in vec3 pass_Position;
        smooth in vec2 TexCoord;
        smooth in vec4 lightspace_Position;

        out vec4 out_Color;

        float CalcShadowFactor(vec4 lightspace_Position) {
            vec3 ProjectionCoords = lightspace_Position.xyz / lightspace_Position.w;
            vec2 UVCoords;
            UVCoords.x = 0.5 * ProjectionCoords.x + 0.5;
            UVCoords.y = 0.5 * ProjectionCoords.y + 0.5;
            float Depth = texture(inShadowMap, UVCoords).x;
            if (Depth < (ProjectionCoords.z + 0.001))
                return 0.5;
            else
                return 1.0;
        }

        void main(void) {
            vec3 Normal = normalize(pass_Normal);
            vec3 light_Direction = -normalize(light.direction);
            vec3 camera_Direction = normalize(-pass_Position);
            vec3 half_vector = normalize(camera_Direction + light_Direction);
            float diffuse = max(0.2, dot(Normal, light_Direction));
            vec3 temp_Color = diffuse * vec3(1.0);
            float specular = max(0.0, dot(Normal, half_vector));
            float shadowFactor = CalcShadowFactor(lightspace_Position);
            if (diffuse != 0 && shadowFactor > 0.5) {
                float fspecular = pow(specular, 128.0);
                temp_Color += fspecular;
            }
            out_Color = vec4(shadowFactor * texture(inSampler, TexCoord).xyz * temp_Color, 1.0);
        }

    One of the problems is self-shadowing; as you can see in the picture, the crate has its own shadow cast on itself. What I have tried is enabling polygon offset (i.e. glEnable(GL_POLYGON_OFFSET_FILL) and glPolygonOffset(GLfloat, GLfloat)), but it didn't change much. As you see in the fragment shader, I have put a static offset value of 0.001, but I have to change the value depending on the distance of the light to get more desirable effects, which is not very handy. I also tried using front face culling when I render to the framebuffer; that didn't change much either. The other problem is that pixels outside the light's view frustum get shaded. The only object that is supposed to be able to cast shadows is the crate. I guess I should pick more appropriate projection and view matrices, but I'm not sure how to do that. What are some common practices? Should I pick an orthographic projection? From googling around a bit, I understand that these issues are not that trivial. Does anyone have any easy-to-implement solutions to these problems? Could you give me some additional tips? Please ask me if you need more information on my code. Here is a comparison of a close-up of the crate with and without shadow mapping; the self-shadowing is more visible.

    Read the article

  • Why do my LWJGL fonts have dots and lines around them?

    - by Jordan
    When we render fonts there are weird dots and lines around the text. I have no idea why this would happen. Here is an image of what it looks like: Our font class looks like this:

        package me.NJ.ComputerTycoon.Font;

        import me.NJ.ComputerTycoon.BaseObjects.UDim2;

        import org.lwjgl.opengl.Display;
        import org.newdawn.slick.Color;
        import org.newdawn.slick.TrueTypeFont;

        public class Font {
            public TrueTypeFont font;

            private int fontSize = 18;
            private String fontName = "Calibri";
            private int fontStyle = java.awt.Font.BOLD;

            public Font(String fontName, int fontStyle, int fontSize) {
                font = new TrueTypeFont(new java.awt.Font(fontName, fontStyle, fontSize), true);
                //font.
            }

            public Font(int fontStyle, int fontSize) {
                font = new TrueTypeFont(new java.awt.Font(fontName, fontStyle, fontSize), true);
            }

            public Font(int fontSize) {
                font = new TrueTypeFont(new java.awt.Font(fontName, fontStyle, fontSize), true);
            }

            public Font() {
                font = new TrueTypeFont(new java.awt.Font(fontName, fontStyle, fontSize), true);
            }

            public void drawString(int x, int y, String s, Color color) {
                this.font.drawString(x, y, s, color);
            }

            public void drawString(int x, int y, String s) {
                this.font.drawString(x, y, s);
            }

            public void drawString(float x, float y, String s, Color color) {
                this.font.drawString(x, y, s, color);
            }

            public void drawString(float x, float y, String s) {
                this.font.drawString(x, y, s);
            }

            public void drawString(UDim2 udim, String s) {
                this.font.drawString(
                        (Display.getWidth() * udim.getX().getScale()) + udim.getX().getOffset(),
                        (Display.getHeight() * udim.getY().getScale()) + udim.getY().getOffset(),
                        s);
            }

            public String getFontName() {
                return this.fontName;
            }

            public int getFontSize() {
                return this.fontSize;
            }

            public TrueTypeFont getFont() {
                return this.font;
            }
        }

    What could be causing this?

    Read the article

  • Voxel Face Crawling (Mesh simplification, possibly using greedy)

    - by Tim Winter
    This is in regards to a Minecraft-like terrain engine. I store blocks in chunks (16x256x16 blocks in a chunk). When I generate a chunk, I use multiple procedural techniques to set the terrain and to place objects. While generating, I keep one 1D array for the full chunk (solid or not) and a separate 1D array of solid blocks. After generation, I iterate through the solid blocks checking their neighbors, so I only generate block faces that don't have solid neighbors. I store which faces to generate in their own lists (that's 6 lists, one per possible face). When rendering a chunk, I render all lists in the camera's current chunk and only the lists facing the camera in all other chunks. Using a 2D atlas with this little shader trick Andrew Russell suggested, I want to merge similar faces together completely. That is, if they are in the same list (same normal), are adjacent to each other, have the same light level, etc. My assumption would be to have each of the 6 lists sorted by the axis they rest on, then by the other two axes (the list for the top of a block would be sorted by its Y value, then X, then Z). With this alone, I could quite easily merge strips of faces, but I'm looking to merge more than just strips together when possible. I've read up on this greedy meshing algorithm, but I am having a lot of trouble understanding it. To even use it, I would think I'd need to perform a type of flood fill per sorted list to get the groups of merge-able faces, then, per group, perform the greedy algorithm. It all sounds awfully expensive if I would ever want dynamic terrain/lighting after initial generation. So, my question: to perform merging of faces as described (ignoring whether it's a bad idea for dynamic terrain/lighting), is there perhaps an algorithm that is simpler to implement? I would also quite happily accept an answer that walks me through the greedy algorithm in a much simpler way (a link or explanation). I don't mind a slight performance decrease if it's easier to implement, or even if it's only a little better than just doing strips. I worry that most algorithms focus on triangles rather than quads, and using a 2D atlas the way I am, I don't know that I could implement something triangle-based with my current skills. PS: I already frustum cull per chunk and, as described, I also cull faces between solid blocks. I don't occlusion cull yet and may never.
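
    For illustration, a sketch of the core 2D step of the greedy approach, per face list: treat one slice of candidate faces (same normal, same light level, etc.) as a 2D boolean mask, then repeatedly grow the widest rectangle possible from each unvisited cell and emit it as one quad. The mask layout and the tuple output are illustrative assumptions:

        // Sketch: 2D greedy rectangle merge over one slice of merge-able faces.
        List<(int x, int y, int w, int h)> MergeSlice(bool[,] mask)
        {
            int width = mask.GetLength(0);
            int height = mask.GetLength(1);
            var quads = new List<(int x, int y, int w, int h)>();

            for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
            {
                if (!mask[x, y]) continue;

                // 1. Extend to the right as far as this row allows.
                int w = 1;
                while (x + w < width && mask[x + w, y]) w++;

                // 2. Extend downward while every cell of the next row is still set.
                int h = 1;
                while (y + h < height)
                {
                    bool rowOk = true;
                    for (int i = 0; i < w; i++)
                        if (!mask[x + i, y + h]) { rowOk = false; break; }
                    if (!rowOk) break;
                    h++;
                }

                // 3. Clear the covered cells and emit a single quad for the rectangle.
                for (int j = 0; j < h; j++)
                    for (int i = 0; i < w; i++)
                        mask[x + i, y + j] = false;

                quads.Add((x, y, w, h));
            }
            return quads;
        }

    Each rectangle then becomes a single quad; with a tiling-atlas shader trick like the one mentioned above, its UVs can simply run from (0, 0) to (w, h) and repeat in the shader.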

    Read the article

< Previous Page | 503 504 505 506 507 508 509 510 511 512 513 514  | Next Page >