Search Results

Search found 33194 results on 1328 pages for 'development approach'.

Page 404/1328 | < Previous Page | 400 401 402 403 404 405 406 407 408 409 410 411  | Next Page >

  • Demystifying "chunked level of detail"

    - by Caius Eugene
    Just recently I've been trying to make sense of implementing a chunked level of detail (LOD) system in Unity. I'm going to be generating four mesh planes, each with a height map, but I guess that isn't too important at the moment. I have a lot of questions after reading up about this technique; I hope this isn't too much to ask all in one go, but I would be extremely grateful for someone to help me make sense of it.
    1: I can't understand at which point in the chunked LOD pipeline the mesh gets split into chunks. Does this happen during the initial mesh generation, or is there a separate algorithm that does it?
    2: I understand that a quadtree data structure is used to store the chunked LOD data. I think I'm missing the point a bit, but does the quadtree store vertex and triangle data for each subdivision level?
    3a: How is the camera distance usually calculated? When reading up about quadtrees, axis-aligned bounding boxes are mentioned a lot. In this case, would each chunk have a collision bounding box to detect that the camera or player is nearby? Or is there a better way of doing this (a raycast, maybe)?
    3b: Do the chunks calculate the camera distance themselves?
    4: Does each chunk have the same "resolution"? For example, if the top-level mesh is 32x32, will each subdivided node also be 32x32? (The example image isn't included in this excerpt.)
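    A minimal sketch of how these pieces usually fit together (C#; ChunkNode and its fields are illustrative, not from the question). Chunks are built once per quadtree node, ahead of time, and at render time each node compares a distance-based screen-space error against a tolerance to decide whether to draw itself or recurse into its children:

        using System.Collections.Generic;
        using UnityEngine;

        // One quadtree node = one pre-built chunk; every level keeps the same
        // grid resolution (e.g. 32x32), each child covering a quarter of its parent.
        class ChunkNode
        {
            public Bounds Bounds;          // axis-aligned bounding box of the chunk
            public float GeometricError;   // world-space error of this LOD level
            public Mesh Mesh;              // generated when the tree is built, not per frame
            public ChunkNode[] Children;   // four children, or null at the finest level

            // k folds in FOV/viewport size; tau is the error tolerance in pixels.
            public void Select(Vector3 cameraPos, float k, float tau, List<ChunkNode> visible)
            {
                // Plain AABB-to-point distance: no colliders or raycasts needed.
                float distance = Mathf.Sqrt(Bounds.SqrDistance(cameraPos));
                float screenError = k * GeometricError / Mathf.Max(distance, 0.001f);
                if (screenError <= tau || Children == null)
                    visible.Add(this);     // this level is detailed enough from here
                else
                    foreach (var child in Children)
                        child.Select(cameraPos, k, tau, visible);
            }
        }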

    Read the article

  • Render on other render targets starting from one already rendered on

    - by JTulip
    I have to perform a double-pass convolution on a texture that is actually the color attachment of another render target, and store the result in the color attachment of ANOTHER render target. This must be done multiple times, but using the same texture as the starting point. What I do now is (a bit abstracted, but the parts I have abstracted are known to work individually):

        renderOnRT(firstTarget); // This works.
        for each other RT currRT {
            glBindFramebuffer(GL_FRAMEBUFFER, currRT.frameBufferID);
            programX.use();
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, firstTarget.colorAttachmentID);
            programX.setUniform1i("colourTexture", 0);
            glActiveTexture(GL_TEXTURE1);
            glBindTexture(GL_TEXTURE_2D, firstTarget.depthAttachmentID);
            programX.setUniform1i("depthTexture", 1);
            glBindBuffer(GL_ARRAY_BUFFER, quadBuffID); // quadBuffID is a VBO for a screen-aligned quad. It is fine.
            programX.vertexAttribPointer(POSITION_ATTRIBUTE, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
            glDrawArrays(GL_QUADS, 0, 4);

            programY.use();
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, currRT.colorAttachmentID); // The second pass reads the result of the first
            programY.setUniform1i("colourTexture", 0);
            glActiveTexture(GL_TEXTURE1);
            glBindTexture(GL_TEXTURE_2D, currRT.depthAttachmentID);
            programY.setUniform1i("depthTexture", 1);
            glBindBuffer(GL_ARRAY_BUFFER, quadBuffID);
            programY.vertexAttribPointer(POSITION_ATTRIBUTE, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
            glDrawArrays(GL_QUADS, 0, 4);
        }

    The problem is that I end up with black textures instead of the wanted result. The GLSL programs (programX, programY) work fine; they have already been tested on single targets. Is there something stupid I am missing? Even a hint is much appreciated, thanks!
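    One thing a reader might notice in the loop above (an observation about the posted code, not a confirmed diagnosis from the thread): the second pass samples currRT.colorAttachmentID while currRT is still the bound framebuffer, and in OpenGL, reading a texture that is simultaneously attached to the active framebuffer is undefined. The usual cure is to ping-pong between two targets so a pass never reads the attachment it writes. A minimal C# sketch of the shape of that fix; all types and helpers here are hypothetical stand-ins for the GL objects in the question:

        using System;

        // Sketch only: the interfaces stand in for the question's FBO/program objects.
        interface IRenderTarget
        {
            void BindAsTarget();                              // glBindFramebuffer(GL_FRAMEBUFFER, fboId)
            void BindAsSource(int colorUnit, int depthUnit);  // glActiveTexture + glBindTexture pairs
        }

        interface IShaderProgram { void Use(); }

        static class PingPong
        {
            // Each pass reads one target and writes a different one, so no texture
            // is ever sampled while it is attached to the active framebuffer.
            public static IRenderTarget Convolve(IRenderTarget source, IRenderTarget ping,
                                                 IRenderTarget pong, IShaderProgram passX,
                                                 IShaderProgram passY, Action drawQuad)
            {
                ping.BindAsTarget();
                passX.Use();
                source.BindAsSource(0, 1);
                drawQuad();

                pong.BindAsTarget();       // the second pass writes somewhere else...
                passY.Use();
                ping.BindAsSource(0, 1);   // ...while reading the first pass's output
                drawQuad();
                return pong;               // final result; ping is free for reuse
            }
        }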

    Read the article

  • Can one draw a cube using a different method/drawing mode?

    - by den-javamaniac
    Hi. I've just started learning gamedev (Android EGL-based, in particular) and have run across code from Pro Android Games 2 that looks as follows:

        /* Copyright (C) 2007 Google Inc.
         * Licensed under the Apache License, Version 2.0 (the "License");
         * http://www.apache.org/licenses/LICENSE-2.0 */
        package opengl.scenes.cubes;

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.IntBuffer;
        import javax.microedition.khronos.opengles.GL10;

        public class Cube {
            public Cube() {
                int one = 0x10000;
                int vertices[] = {
                    -one, -one, -one,   one, -one, -one,
                     one,  one, -one,  -one,  one, -one,
                    -one, -one,  one,   one, -one,  one,
                     one,  one,  one,  -one,  one,  one,
                };
                int colors[] = {
                    0,   0,   0,   one,   one, 0,   0,   one,
                    one, one, 0,   one,   0,   one, 0,   one,
                    0,   0,   one, one,   one, 0,   one, one,
                    one, one, one, one,   0,   one, one, one,
                };
                byte indices[] = {
                    0, 4, 5,  0, 5, 1,   1, 5, 6,  1, 6, 2,
                    2, 6, 7,  2, 7, 3,   3, 7, 4,  3, 4, 0,
                    4, 7, 6,  4, 6, 5,   3, 0, 1,  3, 1, 2
                };
                // Buffers to be passed to gl*Pointer() functions must be direct,
                // i.e., they must be placed on the native heap where the garbage
                // collector cannot move them.
                // Buffers with multi-byte datatypes (e.g., short, int, float)
                // must have their byte order set to native order.
                ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4);
                vbb.order(ByteOrder.nativeOrder());
                mVertexBuffer = vbb.asIntBuffer();
                mVertexBuffer.put(vertices);
                mVertexBuffer.position(0);

                ByteBuffer cbb = ByteBuffer.allocateDirect(colors.length * 4);
                cbb.order(ByteOrder.nativeOrder());
                mColorBuffer = cbb.asIntBuffer();
                mColorBuffer.put(colors);
                mColorBuffer.position(0);

                mIndexBuffer = ByteBuffer.allocateDirect(indices.length);
                mIndexBuffer.put(indices);
                mIndexBuffer.position(0);
            }

            public void draw(GL10 gl) {
                gl.glFrontFace(GL10.GL_CW);
                gl.glVertexPointer(3, GL10.GL_FIXED, 0, mVertexBuffer);
                gl.glColorPointer(4, GL10.GL_FIXED, 0, mColorBuffer);
                gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer);
            }

            private IntBuffer mVertexBuffer;
            private IntBuffer mColorBuffer;
            private ByteBuffer mIndexBuffer;
        }

    So it suggests drawing a cube using triangles. My question is: can I draw the same cube using GL_POLYGON? If so, isn't that an easier/more understandable way to do things?

    Read the article

  • XAudio2 - Multiple instances of the same sound

    - by Boreal
    Right now, I'm adding a rudimentary sound engine to my game. So far, I am able to load in a WAV file and play it once, then free the memory when I close the game. However, the game crashes with a nice ArgumentOutOfRangeException when I try to play another sound instance: "Specified argument was out of the range of valid values. Parameter name: readLength". I'm following this tutorial pretty much exactly, but I still keep getting the aforementioned error. Here's my sound-related code: http://pastebin.com/FgaqfXTs The exception occurs on line 156, when I am playing the sound: source.SubmitSourceBuffer(buffer);
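    For anyone hitting the same readLength error, a hunch worth checking (an assumption based on the symptom, not verified against the pastebin): after the first playback, the DataStream backing the AudioBuffer sits at its end, so resubmitting the same buffer hands XAudio2 a zero-length read. Rewinding the stream before each submit, and giving each overlapping instance its own SourceVoice, is the usual SlimDX pattern; a sketch, assuming the device/format/buffer setup from the linked tutorial:

        using SlimDX.Multimedia;
        using SlimDX.XAudio2;

        static class SoundPlayback
        {
            public static void PlayInstance(XAudio2 device, WaveFormat format, AudioBuffer buffer)
            {
                buffer.AudioData.Position = 0;   // rewind: a finished play leaves the stream
                                                 // at its end, one way to get the readLength
                                                 // complaint on the next submit
                var voice = new SourceVoice(device, format);  // one voice per overlapping instance
                voice.SubmitSourceBuffer(buffer);
                voice.Start();
                // Dispose the voice once playback ends (e.g. from its end-of-stream
                // callback); omitted here for brevity.
            }
        }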

    Read the article

  • Steady zoom on center in LWJGL (Modelview)

    - by l5p4ngl312
    I am having a problem in LWJGL with zooming in and out. I am using glScaled(zoom, zoom, 1) before glTranslated. There are two problems:
    1. The rate of zoom speeds up a lot when zooming out (lower zoom values).
    2. It zooms in on the bottom-left corner of the screen rather than the center.
    Eventually, I would like to have the zoom focused on the mouse position. I have tried to fix these problems by making it glScaled(zoom^12, zoom^12, 1), so that the greater the zoom value, the faster it zooms, in order to balance out the faster zoom at lower zoom values. To compensate for the zoom being focused on the bottom left, I have tried to subtract (zoom+1)^10 + 2^10 from the X and Y of each sprite. This results in a curved zoom path, first to the left and then to the right. It is a 2D game.
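    The standard fix (a general technique, not taken from the thread) is to make the transform explicit about the zoom focus, and to change zoom multiplicatively so it feels steady at every level. A small framework-neutral C# sketch of the math:

        // Zoom about an arbitrary focus point (screen center or mouse position),
        // where screen = world * zoom + offset.
        struct Camera2D
        {
            public float Zoom;            // e.g. 1 = 100%
            public float OffsetX, OffsetY;

            // Multiplicative steps (zoom *= factor) feel uniform at every level,
            // unlike additive steps (zoom += step), which race as zoom shrinks.
            public void ZoomAt(float focusX, float focusY, float factor)
            {
                float newZoom = Zoom * factor;
                // Keep the world point under (focusX, focusY) fixed on screen:
                // solve focus = world * newZoom + newOffset for the new offset.
                OffsetX = focusX - (focusX - OffsetX) * (newZoom / Zoom);
                OffsetY = focusY - (focusY - OffsetY) * (newZoom / Zoom);
                Zoom = newZoom;
            }
        }

    In glTranslated/glScaled terms this is the usual translate(focus), scale(zoom), translate(-focus) sandwich.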

    Read the article

  • Using polygons instead of quads on Cocos2d

    - by rraallvv
    I've been looking under the hood of Cocos2d, and I think (please correct me if I'm wrong) that although working with quads is a key feature of the engine, it shouldn't be difficult to make it work with arrays of vertices (a.k.a. polygons) instead of quads, a quad being just the special case of an array of four vertices. By the way, does anyone have any code that makes Cocos2d render a texture-filled polygon inside a batch node? The code posted here (http://www.cocos2d-iphone.org/forum/topic/8142/page/2#post-89393) does a nice job rendering a texture-filled polygon, but the class doesn't work with batch nodes.

    Read the article

  • Time/duration handling in a strategy game

    - by borg
    I'm considering developing a space opera game, and have already done some game design. Technically, though, I'm coming from a business-applications background, so I don't really know how I should handle time and duration. Let's state the matter clearly: what do I do when something is bound to happen in, say, 5 hours, and other events depend on it? For example, the arrival of a space ship in a system where defense systems are present, at which point a fight would start. Should I use some kind of scheduler (like Quartz, in my Java land) to trigger the corresponding event when due (I plan to use events for communication)? Something else?
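    A common in-engine alternative to an external scheduler like Quartz (offered as a sketch, not what the poster ended up doing) is a priority queue of timestamped events drained by the game's own clock; it also survives pausing and time acceleration for free. Illustrated here in C# with .NET 6's PriorityQueue, though the same shape works in Java with a PriorityQueue of scheduled events:

        using System;
        using System.Collections.Generic;

        // Game-clock scheduler: events fire when simulation time passes their due time.
        class EventScheduler
        {
            private readonly PriorityQueue<Action, double> queue = new();

            public void Schedule(double dueTime, Action handler) => queue.Enqueue(handler, dueTime);

            // Call once per tick with the current simulation time (hours, seconds, ...).
            public void Update(double now)
            {
                while (queue.TryPeek(out _, out double due) && due <= now)
                {
                    queue.TryDequeue(out Action handler, out _);
                    handler();   // e.g. "ship arrives" -> may Schedule() follow-up events
                }
            }
        }

        // Usage (handler name hypothetical):
        // scheduler.Schedule(now + 5.0, () => ResolveArrival(ship, system));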

    Read the article

  • How do I properly use String literals for loading content?

    - by Dave Voyles
    I've been using verbatim string literals for some time now, and never quite thought about using a regular string literal until I started to come across them in Microsoft-provided XNA samples. With that said, I'm trying to implement a new AudioManager class from the Net Rumble sample. I have two (2) issues here.

    Question 1: In the code for my GameplayScreen screen I have a folder location written as the following, and it works fine:

        menuButton = content.Load<SoundEffect>(@"sfx/menuButton");
        menuClose = content.Load<SoundEffect>(@"sfx/menuClose");

    If you notice, you'll see that I'm using a verbatim string, with a forward slash "/". In the AudioManager class, they use a regular string literal, but with two backslashes "\\". I understand that one is used as an escape, but why are they BACK instead of FORWARD? (See below.)

        soundList[soundName] = game.Content.Load<SoundEffect>("audio\\wav\\" + soundName);

    Question 2: I seem to be doing everything correctly in my AudioManager class, but I'm not sure what this line means:

        audioFileList = audioFolder.GetFiles("*.xnb");

    I suppose that the *.xnb means look for everything BUT files that end in .xnb? I can't figure out what I'm doing wrong with my file locations, as the sound effects are not playing. My code is not much different from what I've linked to above.

        private AudioManager(Game game, DirectoryInfo audioDirectory)
            : base(game)
        {
            try
            {
                audioFolder = audioDirectory;
                audioFileList = audioFolder.GetFiles("*.mp3");
                soundList = new Dictionary<string, SoundEffect>();
                for (int i = 0; i < audioFileList.Length; i++)
                {
                    string soundName = Path.GetFileNameWithoutExtension(audioFileList[i].Name);
                    soundList[soundName] = game.Content.Load<SoundEffect>(@"sfx\" + soundName);
                    soundList[soundName].Name = soundName;
                }
                // Plays this track while the GameplayScreen is active
                soundtrack = game.Content.Load<Song>("boomer");
            }
            catch (NoAudioHardwareException)
            {
                // silently fall back to silence
            }
        }
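    Two clarifications a reader can take away here (general C# facts, not quotes from the thread): a "*.xnb" pattern matches files that DO end in .xnb, not everything but; and the doubling in "audio\\wav\\" is only C# escaping, which verbatim strings do not need. In my experience the XNA content pipeline on Windows accepts either slash direction. It is also worth noticing that the posted constructor scans for *.mp3 while compiled XNA content on disk ends in .xnb, which would leave the file list empty; that may or may not be the poster's actual bug. A short sketch:

        using System.IO;

        // All four literals describe the same content path; only the C# spelling differs.
        string a = "audio\\wav\\menuButton";   // regular literal: backslash must be escaped
        string b = @"audio\wav\menuButton";    // verbatim literal: no escaping needed
        string c = "audio/wav/menuButton";     // forward slashes need no escaping at all
        string d = @"audio/wav/menuButton";    // ...with or without the @

        // GetFiles patterns are inclusive wildcards, not exclusions:
        // this matches menuButton.xnb, menuClose.xnb, ... and nothing else.
        FileInfo[] compiled = new DirectoryInfo("Content/sfx").GetFiles("*.xnb");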

    Read the article

  • Unity scaling instantiated GameObject at Start() doesn't "keep"

    - by Shivan Dragon
    I have a very simple scenario: a box-like prefab which is imported from Blender automatically (I have the .blend file in the Assets folder), and a script that has two public GameObject fields. In one I place the above prefab, and in the other I place a terrain object (which I've created in Unity's graphical view):

        public Collider terrain;
        public GameObject aStarCellHighlightPrefab;

    This script is attached to the camera. The idea is to have the Blender prefab instantiated, have the terrain set as its parent, and then scale said prefab instance up. I first did it like this, in the Start() method:

        void Start()
        {
            cursorPositionOnTerrain = new RaycastHit();
            aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab, new Vector3(300, 300, 300), terrain.transform.rotation);
            aStarCellHighlight.name = "cellHighlight";
            aStarCellHighlight.transform.parent = terrain.transform;
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
        }

    At first I thought it didn't work. However, later I noticed that it did in fact work, in the sense that the scale was applied right at the start, but then right after, the prefab instance came back to its initial scale. Putting the scale code in the Update() method fixes it, in the sense that now it stays scaled all the time:

        void Update()
        {
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
            //...
        }

    However, I've noticed that when I run this code, the object is first displayed without the scale being applied, and it takes about 5-10 seconds for the scale to happen. During this time everything works fine (like input and logging, etc.). The scene is very simple; it's not like it has a lot of stuff to load or anything (there's a ray cast from the camera onto the terrain, but that seems to happen without such delays). My (two-part) question is: Why doesn't it keep the scale transform when I set it at the beginning, in the Start() method, so that I have to keep scaling it in Update()? And why does it take so long for the scale to apply/show up?
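    A pattern worth ruling out (a guess from the symptoms, not a confirmed answer): something else is writing the transform after Start(), typically an Animation/Animator on the imported .blend prefab re-applying its bind scale, or the parenting call rewriting localScale. Assigning transform.parent keeps the child's world transform by adjusting its local values, so SetParent(parent, false) makes the intended order explicit. A minimal sketch:

        using UnityEngine;

        public class CellHighlightSpawner : MonoBehaviour
        {
            public Collider terrain;
            public GameObject aStarCellHighlightPrefab;
            private GameObject aStarCellHighlight;

            void Start()
            {
                aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab,
                                                             new Vector3(300, 300, 300),
                                                             terrain.transform.rotation);
                aStarCellHighlight.name = "cellHighlight";
                // false = keep the local values we are about to set, instead of
                // recomputing them to preserve the current world transform.
                aStarCellHighlight.transform.SetParent(terrain.transform, false);
                aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
                // If an Animation component on the imported prefab keeps resetting
                // the scale each frame, disable it (or delete its scale curve):
                var anim = aStarCellHighlight.GetComponent<Animation>();
                if (anim != null) anim.enabled = false;
            }
        }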

    Read the article

  • RenderState in XNA 4

    - by Shashwat
    I was going through this tutorial on transparency, which could be used to solve my problem here. The code is written for XNA 3, but I'm using XNA 4. What is the alternative for the following code in XNA 4?

        device.RenderState.AlphaTestEnable = true;
        device.RenderState.AlphaFunction = CompareFunction.GreaterEqual;
        device.RenderState.ReferenceAlpha = 200;
        device.RenderState.DepthBufferWriteEnable = false;

    I searched a lot but didn't find anything useful.
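    For readers landing here with the same migration question: XNA 4 removed the RenderState bag in favor of state objects, and the fixed-function alpha test moved into AlphaTestEffect. A sketch of the usual translation (assuming AlphaTestEffect fits the rendering path; with custom shaders, clip() in HLSL replaces the alpha test):

        // XNA 4 equivalents of the XNA 3 RenderState lines above.

        // Depth writes are now part of a DepthStencilState object:
        device.DepthStencilState = new DepthStencilState
        {
            DepthBufferEnable = true,
            DepthBufferWriteEnable = false   // was RenderState.DepthBufferWriteEnable = false
        };

        // The fixed-function alpha test became an effect:
        var alphaTest = new AlphaTestEffect(device)
        {
            AlphaFunction = CompareFunction.GreaterEqual, // was RenderState.AlphaFunction
            ReferenceAlpha = 200                          // was RenderState.ReferenceAlpha
            // World/View/Projection must also be set before drawing with this effect.
        };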

    Read the article

  • XNA - Render texture to a rendertarget 2d via SpriteBatch error

    - by Jared B
    I've got simple code that uses SpriteBatch to draw a texture onto a RenderTarget2D:

        private void drawScene(GameTime g)
        {
            GraphicsDevice.Clear(skyColor);
            GraphicsDevice.SetRenderTarget(targetScene);
            drawSunAndMoon();
            effect.Fog = true;
            GraphicsDevice.SetVertexBuffer(line);
            effect.MainEffect.CurrentTechnique.Passes[0].Apply();
            GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);
            GraphicsDevice.SetRenderTarget(null);
            SceneTexture = targetScene;
        }

        private void drawPostProcessing(GameTime g)
        {
            effect.SceneTexture = SceneTexture;
            GraphicsDevice.SetRenderTarget(targetBloom);
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null);
            {
                if (Bloom)
                    effect.BlurEffect.CurrentTechnique.Passes[0].Apply();
                spriteBatch.Draw(targetScene, new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height), Color.White);
            }
            spriteBatch.End();
            BloomTexture = targetBloom;
            GraphicsDevice.SetRenderTarget(null);
        }

    Both methods are called from Draw(GameTime gameTime): first drawScene, then drawPostProcessing. The thing is, I can't run the code, because "the render target must not be set on the device when it is used as a texture", thrown at the line

        spriteBatch.Draw(targetScene, new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height), Color.White);

    I already found the solution, which is to draw the actual render target (targetScene) to a texture so it doesn't create a reference to the loaded render target. However, to my knowledge, the only way of doing this is to write:

        GraphicsDevice.SetRenderTarget(OutputTarget);
        SpriteBatch.Draw(InputTarget, ...);
        GraphicsDevice.SetRenderTarget(null);

    which encounters the same exact problem I'm having right now. So, the question I'm asking is: how would I render InputTarget to OutputTarget without reference issues?
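    A distinction that may help here (offered as a sketch, not the thread's accepted answer): XNA only rejects sampling a target that is currently set on the device, so drawing InputTarget into OutputTarget is fine as long as they are different targets and the input one has been unset first. Chains of effects then ping-pong between two targets. Assuming the GraphicsDevice/spriteBatch fields from the question:

        // Safe pattern: the target being sampled is never the one currently set.
        RenderTarget2D ApplyPass(RenderTarget2D input, RenderTarget2D output, EffectPass pass)
        {
            GraphicsDevice.SetRenderTarget(output);  // 'input' is no longer set on the device...
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
            pass.Apply();
            spriteBatch.Draw(input, GraphicsDevice.Viewport.Bounds, Color.White); // ...so sampling it is legal
            spriteBatch.End();
            GraphicsDevice.SetRenderTarget(null);
            return output;  // feed this into the next pass, reusing 'input' as its output
        }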

    Read the article

  • Making a game preloader (Flash) [closed]

    - by Artemix
    Possible Duplicate: How do you create a single/internal pre-loader for a Flash game written using Flex?

    Hi guys, I'm trying to make a preloader for a Flash game. The thing is, I need some advice on this, since I've never made one. I have the game almost complete, but when, for example, I upload the game to a website, I get a white screen for a few seconds before I see the game. Is there a simple way, maybe using an API or something like that, to make a preloader screen? I'm using Flash Builder, FYI. Thanks!

    Read the article

  • Keeping Screen Aspect Ratio While Staying Centered

    - by David Dimalanta
    I saw and tried this suggestion on Pistachio Brainstorming* on how to keep a good, adaptive screen ratio across different screen sizes. Let's say I put a perfect circle as a Texture in LibGDX and played it on screen; the blueberry example image is perfectly rounded (screenshots not included in this excerpt). When I played it on the Google Nexus 7, the circle turned into a slightly oblong shape, as if it had been flattened a bit. Now, when I tried the suggested aspect-ratio code, the perfect circle was retained, but another problem occurred: the view, which I expected to be centered, was moved to the right, leaving half the screen black.

    Here is my code using the suggested screen-aspect-ratio approach.

    Class fields:

        // Ingredients needed for the screen aspect ratio
        private static final int VIRTUAL_WIDTH = 720;
        private static final int VIRTUAL_HEIGHT = 1280;
        private static final float ASPECT_RATIO = ((float) VIRTUAL_WIDTH) / ((float) VIRTUAL_HEIGHT);
        private Camera Mother_Camera;
        private Rectangle Viewport;

    render():

        // Camera updating...
        Mother_Camera.update();
        Mother_Camera.apply(Gdx.gl10);
        // Resetting viewport...
        Gdx.gl.glViewport((int) Viewport.x, (int) Viewport.y, (int) Viewport.width, (int) Viewport.height);
        // Clear previous frame.
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

    show():

        Mother_Camera = new OrthographicCamera(VIRTUAL_WIDTH, VIRTUAL_HEIGHT);

    Was this code useful for fixing the screen aspect ratio, or is it statically dependent on the actual device's width and height?

    *see http://blog.acamara.es/2012/02/05/keep-screen-aspect-ratio-with-different-resolutions-using-libgdx/#comment-317
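    The centering part of the linked technique comes down to computing a letterboxed viewport rectangle whose offset splits the leftover pixels evenly; if Viewport.x/y stay zero, the scene hugs one corner. A framework-neutral C# sketch of that computation (function name hypothetical):

        // Compute a letterboxed/pillarboxed viewport that preserves the target
        // aspect ratio and centers the scene on any screen.
        static (int x, int y, int w, int h) FitViewport(int screenW, int screenH, float targetAspect)
        {
            float screenAspect = (float)screenW / screenH;
            int w, h;
            if (screenAspect > targetAspect)
            {   // screen too wide: use full height, bars left and right
                h = screenH;
                w = (int)(screenH * targetAspect);
            }
            else
            {   // screen too tall: use full width, bars top and bottom
                w = screenW;
                h = (int)(screenW / targetAspect);
            }
            // The centering step: half the leftover space on each side.
            return ((screenW - w) / 2, (screenH - h) / 2, w, h);
        }

        // e.g. glViewport(x, y, w, h) with the returned values keeps a
        // 720x1280 scene round and centered on any device.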

    Read the article

  • Scene Graph as Object Container?

    - by Bunkai.Satori
    A scene graph contains nodes representing game objects. At first glance, it might seem practical to use the scene graph as the physical container for in-game objects, instead of a std::vector, for example. My question is: is it practical to use the scene graph to contain the game objects, or should it be used only to define the linkage between scene objects/nodes, while keeping the objects stored in a separate container, such as a std::vector?

    Read the article

  • Acceptable sound quality: stereo needed for an Android game?

    - by Thomas Calc
    I have various simple, short sound effects (damage sound, dying sound, thunderbolt, fanfare, breaking) for a game that is currently being developed for Android. I use OGG files: 96 kbps VBR, 44.1 kHz, 2 channels (that means stereo, right?). I read the other Stack Exchange topics about "acceptable sound quality", but they're too general and address too many things. My experience is that even at 80 kbps my effects sound OK, but I've only tested on a limited number of Android devices (including a Sony Ericsson Xperia Neo and an HTC Desire HD). My questions:
    - For mobile phones and tablets generally, what parameters are recommended? Won't my 80 kbps sounds be bad on a newer device (such as a modern tablet)?
    - I don't hear any difference between stereo and mono (2 channels vs. 1 channel, right?). Is there any noticeable difference at all for mobile phones/tablets, in terms of player experience?
    - Is it worth it at all? I assume that stereo sounds take much more memory (when they're decoded to PCM), despite the compressed OGG size being practically the same.
    Reacting to Roy T.'s great comment: Actually, I couldn't measure the PCM size (Android decodes OGG internally), but I thought that stereo would take more space than mono when uncompressed. After throwing out one of the WAV channels in Audacity and re-exporting: the new WAV file size is half what it was before, while the OGG file size is practically the same as before. The sound effects and game music were recorded by my friend, who is an experienced hobby musician/composer, but he knows little about computers and software, so he just gave me some high-quality WAV files generated via his hardware. These were stereo, but if I check them in Audacity, both channels appear to be exactly the same. Can I consider them the same (= move to mono), or might there be some differences imperceptible to the ear?
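    The memory intuition is easy to confirm with arithmetic: decoded PCM size is sampleRate x bytesPerSample x channels x seconds, so mono is exactly half of stereo no matter how small the compressed OGG is. A two-line check:

        // Decoded PCM footprint for a 2-second effect at 44.1 kHz, 16-bit:
        int sampleRate = 44100, bytesPerSample = 2, seconds = 2;
        int mono   = sampleRate * bytesPerSample * 1 * seconds;  // 176,400 bytes (~172 KB)
        int stereo = sampleRate * bytesPerSample * 2 * seconds;  // 352,800 bytes (~345 KB)

    And if both channels are bit-identical, downmixing to mono loses nothing.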

    Read the article

  • LWJGL Text Rendering

    - by Trixmix
    Currently, in my project, I am using LWJGL and the Slick2D library to render text onto the screen. Here is a more specific example:

        Font f = new Font("Times New Roman", Font.BOLD, 18);
        font = new UnicodeFont(f);
        font.getEffects().add(new ColorEffect(Color.white));
        font.addAsciiGlyphs();
        try {
            font.loadGlyphs();
        } catch (SlickException e) {
            e.printStackTrace();
        }

    Then I use font.drawString to write onto the screen. This is a quick, easy way, but it has a lot of disadvantages. For example, font.loadGlyphs takes a very long time, 1-3 seconds, so when I want to change a color or font type I have to wait 1-3 seconds, which means I cannot do it while rendering (i.e., I can't have differently colored text on the same screen). My question is: what is a better way of drawing multicolored text onto the screen? I use Slick2D only for the text rendering, so maybe I can fully get rid of the library and draw text some other way... If you have an answer, please leave a quick, short example. Thanks!

    Read the article

  • Will having many timers affect my game performance?

    - by iQue
    I'm making a game for Android, and earlier today I was trying to add some cool stuff to it. The problem is that this thing needs something like 5 timers. I build my timers like this:

        timer += deltaTime;
        if (timer >= 2.0f) {
            doStuff;
            timer -= 2.0f;
        }
        // this timer gets stuff done every 2 secs

    Will having many timers like this, checked every frame, hurt my game's performance? The effect I wanted to add was a crosshair every 2 seconds, then removing it after 2 seconds and playing a timed animation. So, an array of crosshairs dependent on a bunch of timers, to be exact. This caused my game to shut down when used, so that's why I'm wondering whether using that many timers causes my game to flip out.
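    For scale (a general observation, not from the thread): a float add and compare per timer per frame costs a few nanoseconds, so even hundreds of such timers are far below anything measurable, and a crash points at the code inside doStuff rather than at the timers. Keeping them in one collection keeps the bookkeeping tidy; a C# sketch:

        using System;

        // A pool of simple countdown timers, all advanced in one pass per frame.
        class Timer
        {
            public float Interval;   // seconds between firings
            public float Elapsed;    // accumulated time
            public Action OnFire;    // what to do when the interval elapses

            public void Update(float deltaTime)
            {
                Elapsed += deltaTime;
                while (Elapsed >= Interval)   // 'while' catches up after a long frame
                {
                    Elapsed -= Interval;
                    OnFire();
                }
            }
        }

        // Usage: updating 5 (or 500) of these per frame is negligible work.
        // foreach (var t in timers) t.Update(deltaTime);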

    Read the article

  • Auto Save and Auto Load Game onto the Device's Storage: Concept Question

    - by David Dimalanta
    I'm trying to make a simple app that will test save and load state. Is it a good idea to make an app that has only an auto-save and auto-load feature, so that a newcomer can open the app one day and continue where they left off on another? I tried making a sprite that moves, starting at the center. When I close and re-open the app, the sprite goes back to the center instead of the last coordinate where it landed (e.g., at the top). The sequence of saving and loading I want goes like this:
    1. I open the app.
    2. The sprite starts at the center. The app displays the sprite's coordinates plus the number of times the sprite has moved.
    3. I exit the app, which automatically saves the game without notice.
    4. When I re-open it, it automatically loads the game, retaining the number of times the sprite moved, its coordinates, and where it landed.
    These steps are similar (with a sprite-movement test app in place of the real thing) to the sequence of saving and loading the level and records in Jewel Stackers for Android. Also, by default, if there is no SD card in a tablet or phone running Android, does it automatically save/load to internal storage, or to the APK file itself? Is auto save/auto load also useful for protecting and fetching information (e.g., fastest time, the coordinates where the sprite was last located, etc.)?

    Read the article

  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's look at a picture of what I have (screenshots not included in this excerpt). The image has lighting, but the shadow map is displaying incorrectly. The shadow map is shown in the bottom-left of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are being generated backwards, in the wrong direction, I think. But the problem is a little deeper: I'm just plotting the shadow onto the screen, which I know is wrong; I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct, so there has to be something wrong with my shader code somewhere. Here's what my code looks like.

    Vertex shader:

        uniform mat4 LightModelViewProjectionMatrix;
        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main()
        {
            Normal = normalize(gl_NormalMatrix * gl_Normal);
            LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz);
            LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex;
            LightCoordinate.xy = ( LightCoordinate.xy * 0.5 ) + 0.5;
            gl_Position = ftransform();
            gl_TexCoord[0] = gl_MultiTexCoord0;
        }

    Fragment shader:

        uniform sampler2D DiffuseMap;
        uniform sampler2D ShadowMap;
        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main()
        {
            vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0]));

            // Directional lighting
            // Build ambient lighting
            vec4 AmbientElement = gl_LightSource[0].ambient;
            // Build diffuse lighting
            float Lambert = max(dot(Normal, LightDirection), 0.0); //max(abs(dot(Normal, LightDirection)), 0.0);
            vec4 DiffuseElement = ( gl_LightSource[0].diffuse * Lambert );
            vec4 LightingColor = ( DiffuseElement + AmbientElement );
            LightingColor.r = min(LightingColor.r, 1.0);
            LightingColor.g = min(LightingColor.g, 1.0);
            LightingColor.b = min(LightingColor.b, 1.0);
            LightingColor.a = min(LightingColor.a, 1.0);
            LightingColor *= Texel;
            // Everything up to this point is PERFECT

            // Shadow mapping
            // ------------------------------
            vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w;
            float DistanceFromLight = texture2D( ShadowMap, ShadowCoordinate.st ).z;
            float DepthBias = 0.001;
            float ShadowFactor = 1.0;
            if( LightCoordinate.w > 0.0 )
            {
                ShadowFactor = DistanceFromLight < ( ShadowCoordinate.z + DepthBias ) ? 0.5 : 1.0;
            }
            LightingColor.rgb *= ShadowFactor;

            //gl_FragColor = LightingColor; // Yes, I know this is wrong, but the line above produces the wrong effect
            gl_FragColor = LightingColor * texture2D( ShadowMap, ShadowCoordinate.st );
        }

    I wanted to make sure the coordinates were correct for the shadow map, which is why you see it applied to the image as it is below. But the depth of each point seems to be wrong; the shadows SHOULD be opposite (look at how the shaded areas from the normal lighting face the opposite direction from the shadows). Maybe my matrices are bad or something going in? They're isolated and appear to be correct; nothing else unusual is going in. When I view from the light's point of view and get the MVP matrices for it, they're correct. EDIT: Added an image so you can see what happens when I use the correct line at the end of the GLSL; that's the image when the last line is just gl_FragColor = LightingColor; Maybe someone has some idea of what I screwed up?

    Read the article

  • Resource management question. Resource containing resource

    - by bobenko
    I have a resource manager handling, as usual, resource loading, unloading, etc. With resources such as images and meshes there's no problem. But what should I do when I have a resource containing another resource (for example, a spriteFont contains a reference to a sprite plus the letter descriptions)? Should that sprite be added to the resource manager, or should my spriteFont be the sole owner of that resource? Any thoughts on this? Have you faced such a problem? Thanks in advance.

    Read the article

  • Strategies to Defeat Memory Editors for Cheating - Desktop Games

    - by ashes999
    I'm assuming we're talking about desktop games -- something the player downloads and runs on their local computer. Many memory editors allow you to detect and freeze values, like your player's health. How do you prevent cheating via memory modification? What strategies are effective in combating this kind of cheating? For reference, I know that players can:
    - search for something by value or range
    - search for something that changed value
    - set memory values
    - freeze memory values
    I'm looking for some good ones. Two that I use, which are mediocre, are:
    - displaying values as a percentage instead of the number (e.g., 46/50 = 92% health)
    - a low-level class that holds values in an array and moves them with each change (for example, instead of an int, I have a class that's an array of ints, and whenever the value changes, I use a different, randomly chosen array item to hold the value)
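    A third technique in the same spirit as the two above (a common approach, sketched here rather than taken from the answers): never store the plain value at all. Keep it XOR-ed with a random per-instance key, so a scanner looking for "46", or for a value that just decreased, finds nothing stable:

        using System;

        // Health etc. stored XOR-ed with a random key: the raw number never
        // sits in memory, so value scans and freeze tools lose their anchor.
        struct ObfuscatedInt
        {
            private readonly int key;
            private int stored;          // value ^ key

            public ObfuscatedInt(int value)
            {
                key = Random.Shared.Next();
                stored = value ^ key;
            }

            public int Value
            {
                get => stored ^ key;
                set => stored = value ^ key;
            }
        }

        // var health = new ObfuscatedInt(50);
        // health.Value -= 4;   // memory holds neither 50 nor 46 in plain form

    Pairing this with a shadow copy or checksum also catches the "set memory value" case, since a direct write desynchronizes the two copies.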

    Read the article

  • Why does one loop take longer to detect a shared memory update than another loop?

    - by Joseph Garvin
    I've written a 'server' program that writes to shared memory and a client program that reads from that memory. The server has different 'channels' it can write to, which are just different linked lists that it appends items to. The client is interested in some of the linked lists and wants to read every node added to those lists as it comes in, with the minimum latency possible. I have two approaches for the client:

    1. For each linked list, the client keeps a 'bookmark' pointer to keep its place within the list. It round-robins the linked lists, iterating through all of them over and over (it loops forever), moving each bookmark one node forward each time if it can. Whether it can is determined by the value of the 'next' member of the node: if it's non-null, then jumping to the next node is safe (the server switches it from null to non-null atomically). This approach works OK, but if there are a lot of lists to iterate over and only a few of them are receiving updates, the latency gets bad.

    2. The server gives each list a unique ID. Each time the server appends an item to a list, it also appends the ID of that list to a master 'update list'. The client keeps only one bookmark, a bookmark into the update list. It endlessly checks whether the bookmark's next pointer is non-null ( while(node->next_ == NULL) {} ); if so, it moves ahead, reads the ID given, and then processes the new node on the linked list with that ID. This, in theory, should handle large numbers of lists much better, because the client doesn't have to iterate over all of them each time.

    When I benchmarked the latency of both approaches (using gettimeofday), to my surprise #2 was terrible. The first approach, for a small number of linked lists, would often have under 20 us of latency, while the second approach would have short spates of low latency but would often sit between 4,000 and 7,000 us! By inserting gettimeofday calls here and there, I've determined that all of the added latency in approach #2 is spent in the loop repeatedly checking whether the next pointer is non-null. This is puzzling to me; it's as if the change in one process is taking longer to 'publish' to the second process with the second approach. I assume there's some sort of cache interaction going on that I don't understand. What's going on?

    Read the article

  • Pixel Shader Giving Black Output

    - by Yashwinder
    I am coding in C#, using Windows Forms and the SlimDX API, to show the effect of a pixel shader. When I set the pixel shader I get a black output screen, but if I don't use the pixel shader then my image is rendered on the screen. I have the following C# code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Windows.Forms;
        using System.Runtime.InteropServices;
        using SlimDX.Direct3D9;
        using SlimDX;
        using SlimDX.Windows;
        using System.Drawing;
        using System.Threading;

        namespace WindowsFormsApplication1
        {
            // Vertex structure.
            [StructLayout(LayoutKind.Sequential)]
            struct Vertex
            {
                public Vector3 Position;
                public float Tu;
                public float Tv;

                public static int SizeBytes
                {
                    get { return Marshal.SizeOf(typeof(Vertex)); }
                }

                public static VertexFormat Format
                {
                    get { return VertexFormat.Position | VertexFormat.Texture1; }
                }
            }

            static class Program
            {
                public static Device D3DDevice;       // Direct3D device.
                public static VertexBuffer Vertices;  // Vertex buffer object used to hold vertices.
                public static Texture Image;          // Texture object to hold the image loaded from a file.
                public static int time;               // Used for rotation calculations.
                public static float angle;            // Angle of rotation.
                public static Form1 Window = new Form1();
                public static string filepath;
                static VertexShader vertexShader = null;
                static ConstantTable constantTable = null;
                static ImageInformation info;

                [STAThread]
                static void Main()
                {
                    filepath = "C:\\Users\\Public\\Pictures\\Sample Pictures\\Garden.jpg";
                    info = ImageInformation.FromFile(filepath);

                    PresentParameters presentParams = new PresentParameters();
                    // Below are the bare minimum settings needed to initialize the D3D device.
                    presentParams.BackBufferHeight = info.Height;     // Set to the window's height.
                    presentParams.BackBufferWidth = info.Width + 200; // Set to the window's width.
                    presentParams.Windowed = true;
                    presentParams.DeviceWindowHandle = Window.panel2.Handle;

                    // Create the device.
                    D3DDevice = new Device(new Direct3D(), 0, DeviceType.Hardware, Window.Handle, CreateFlags.HardwareVertexProcessing, presentParams);

                    // Create the vertex buffer and fill it with the triangle vertices (non-indexed).
                    // Remember: 3 vertices for a triangle, 2 tris per quad = 6.
                    Vertices = new VertexBuffer(D3DDevice, 6 * Vertex.SizeBytes, Usage.WriteOnly, VertexFormat.None, Pool.Managed);
                    DataStream stream = Vertices.Lock(0, 0, LockFlags.None);
                    stream.WriteRange(BuildVertexData());
                    Vertices.Unlock();

                    // Create the texture.
                    Image = Texture.FromFile(D3DDevice, filepath);

                    // Turn off culling, so we see the front and back of the triangle.
                    D3DDevice.SetRenderState(RenderState.CullMode, Cull.None);
                    // Turn off lighting.
                    D3DDevice.SetRenderState(RenderState.Lighting, false);

                    ShaderBytecode sbcv = ShaderBytecode.CompileFromFile("C:\\Users\\yashwinder singh\\Desktop\\vertexShader.vs", "vs_main", "vs_1_1", ShaderFlags.None);
                    constantTable = sbcv.ConstantTable;
                    vertexShader = new VertexShader(D3DDevice, sbcv);

                    ShaderBytecode sbc = ShaderBytecode.CompileFromFile("C:\\Users\\yashwinder singh\\Desktop\\pixelShader.txt", "ps_main", "ps_3_0", ShaderFlags.None);
                    PixelShader ps = new PixelShader(D3DDevice, sbc);

                    VertexDeclaration vertexDecl = new VertexDeclaration(D3DDevice, new[] {
                        new VertexElement(0, 0, DeclarationType.Float3, DeclarationMethod.Default, DeclarationUsage.PositionTransformed, 0),
                        new VertexElement(0, 12, DeclarationType.Float2, DeclarationMethod.Default, DeclarationUsage.TextureCoordinate, 0),
                        VertexElement.VertexDeclarationEnd
                    });

                    Application.EnableVisualStyles();
                    MessagePump.Run(Window, () =>
                    {
                        // Clear the backbuffer to a black color and begin the scene.
                        D3DDevice.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.Black, 1.0f, 0);
                        D3DDevice.BeginScene();

                        //D3DDevice.VertexShader = vertexShader;
                        //D3DDevice.PixelShader = ps;

                        // Render the vertex buffer.
                        D3DDevice.SetStreamSource(0, Vertices, 0, Vertex.SizeBytes);
                        D3DDevice.VertexFormat = Vertex.Format;
                        // Set up our texture. Using textures introduces the texture stage states,
                        // which govern how textures get blended together (in the case of multiple
                        // textures) and lighting information.
                        D3DDevice.SetTexture(0, Image);
                        // Now drawing 2 triangles, for a quad.
                        D3DDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, 2);

                        // End the scene and present the backbuffer contents to the screen.
                        D3DDevice.EndScene();
                        D3DDevice.Present();
                    });

                    if (Image != null) Image.Dispose();
                    if (Vertices != null) Vertices.Dispose();
                    if (D3DDevice != null) D3DDevice.Dispose();
                }

                private static Vertex[] BuildVertexData()
                {
                    Vertex[] vertexData = new Vertex[6];
                    vertexData[0].Position = new Vector3(-1.0f, 1.0f, 0.0f);  vertexData[0].Tu = 0.0f; vertexData[0].Tv = 0.0f;
                    vertexData[1].Position = new Vector3(-1.0f, -1.0f, 0.0f); vertexData[1].Tu = 0.0f; vertexData[1].Tv = 1.0f;
                    vertexData[2].Position = new Vector3(1.0f, 1.0f, 0.0f);   vertexData[2].Tu = 1.0f; vertexData[2].Tv = 0.0f;
                    vertexData[3].Position = new Vector3(-1.0f, -1.0f, 0.0f); vertexData[3].Tu = 0.0f; vertexData[3].Tv = 1.0f;
                    vertexData[4].Position = new Vector3(1.0f, -1.0f, 0.0f);  vertexData[4].Tu = 1.0f; vertexData[4].Tv = 1.0f;
                    vertexData[5].Position = new Vector3(1.0f, 1.0f, 0.0f);   vertexData[5].Tu = 1.0f; vertexData[5].Tv = 0.0f;
                    return vertexData;
                }
            }
        }

    And my pixel shader and vertex shader code are as follows:

        // Pixel shader input structure
        struct PS_INPUT
        {
            float4 Position : POSITION;
            float2 Texture  : TEXCOORD0;
        };

        // Pixel shader output structure
        struct PS_OUTPUT
        {
            float4 Color : COLOR0;
        };

        // Global variables
        sampler2D Tex0;

        // Name: Simple Pixel Shader
        // Type: Pixel shader
        // Desc: Fetch texture and blend with constant color
        PS_OUTPUT ps_main( in PS_INPUT In )
        {
            PS_OUTPUT Out;                              // create an output pixel
            Out.Color = tex2D(Tex0, In.Texture);        // do a texture lookup
            Out.Color *= float4(0.9f, 0.8f, 0.0f, 1);   // do a simple effect
            return Out;                                 // return output pixel
        }

        // Vertex shader input structure
        struct VS_INPUT
        {
            float4 Position : POSITION;
            float2 Texture  : TEXCOORD0;
        };

        // Vertex shader output structure
        struct VS_OUTPUT
        {
            float4 Position : POSITION;
            float2 Texture  : TEXCOORD0;
        };

        // Global variables
        float4x4 WorldViewProj;

        // Name: Simple Vertex Shader
        // Type: Vertex shader
        // Desc: Vertex transformation and texture coord pass-through
        VS_OUTPUT vs_main( in VS_INPUT In )
        {
            VS_OUTPUT Out;                                  // create an output vertex
            Out.Position = mul(In.Position, WorldViewProj); // apply vertex transformation
            Out.Texture = In.Texture;                       // copy original texcoords
            return Out;                                     // return output vertex
        }

    Read the article

  • Allocating Entities within an Entity System

    - by miguel.martin
    I'm quite unsure how I should allocate/represent my entities within my entity system. I have various options, but most of them seem to have cons associated with them. In all cases entities are represented by an ID (an integer) and possibly have a wrapper class associated with them. This wrapper class has methods to add/remove components to/from the entity. Before I mention the options, here is the basic structure of my entity system:
    - Entity: an object that describes an object within the game
    - Component: used to store data for the entity
    - System: contains entities with specific components; used to update entities with specific components
    - World: contains entities and systems for the entity system; can create/destroy entities and have systems added/removed to/from it

    Here are the options I have thought of.

    Option 1: Do not store the Entity wrapper classes, and just store the next ID/deleted IDs. In other words, entities are returned by value, like so:

        Entity entity = world.createEntity();

    This is much like entityx, except I see some flaws in this design. Cons: there can be duplicate entity wrapper objects (since the copy constructor has to be implemented, and systems need to contain entities), and if an entity is destroyed, the duplicate wrapper objects will not have an updated value.

    Option 2: Store the entity wrapper classes within an object pool, i.e. entities are returned by pointer/reference, like so:

        Entity& e = world.createEntity();

    Cons: if there are duplicate entities, then when an entity is destroyed, the same entity object may be reused to allocate another entity.

    Option 3: Use raw IDs, and forget about the wrapper entity classes. The downfall to this, I think, is the syntax that will be required for it. I'm thinking about doing this as it seems the simplest and easiest to implement, but I'm quite unsure about it because of the syntax. I.e., to add a component with this design, it would look like:

        Entity e = world.createEntity();
        world.addComponent<Position>(e, 0, 3);

    as opposed to this:

        Entity e = world.createEntity();
        e.addComponent<Position>(0, 3);

    Cons: syntax, duplicate IDs.
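    One technique that addresses the stale-handle cons in options 1 and 2 (a standard trick in entity systems, not something proposed in the excerpt) is the generational ID: pack a version number next to the index and bump the version on destroy, so every outstanding copy of a handle becomes detectably invalid. A C# sketch:

        using System.Collections.Generic;

        // Generational entity IDs: copies of a handle stay safe because destroying
        // the entity bumps the slot's version, invalidating every old copy at once.
        readonly struct Entity
        {
            public readonly int Index;
            public readonly int Version;
            public Entity(int index, int version) { Index = index; Version = version; }
        }

        class World
        {
            private readonly List<int> versions = new();   // current version per slot
            private readonly Stack<int> freeSlots = new(); // recycled indices

            public Entity CreateEntity()
            {
                if (freeSlots.Count > 0)
                {
                    int slot = freeSlots.Pop();
                    return new Entity(slot, versions[slot]);
                }
                versions.Add(0);
                return new Entity(versions.Count - 1, 0);
            }

            public bool IsAlive(Entity e) => versions[e.Index] == e.Version;

            public void Destroy(Entity e)
            {
                if (!IsAlive(e)) return;
                versions[e.Index]++;        // every outstanding copy of e is now stale
                freeSlots.Push(e.Index);
            }
        }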

    Read the article

  • Drawing multiple triangles at once isn't working

    - by Deukalion
    I'm trying to draw multiple triangles at once to make up a "shape". I have a class that has an array of VertexPositionColor and an array of indices (generated by this triangulation class: http://www.xnawiki.com/index.php/Polygon_Triangulation). So my "shape" has multiple points of VertexPositionColor, but I can't render each triangle in the shape to "fill" the shape; it only draws the first triangle.

        struct ShapeColor
        {
            // Properties (not all properties)
            VertexPositionColor[] Points;
            int[] Indexes;
        }

    The first method I tried. This should work, since I iterate through the index array in steps of 3, so each slice always contains one triangle:

        // render = ShapeColor
        for (int i = 0; i < render.Indexes.Length; i += 3)
        {
            device.DrawUserIndexedPrimitives<VertexPositionColor>
            (
                PrimitiveType.TriangleList,
                new VertexPositionColor[]
                {
                    render.Points[render.Indexes[i]],
                    render.Points[render.Indexes[i + 1]],
                    render.Points[render.Indexes[i + 2]]
                },
                0, 3,
                new int[] { 0, 1, 2 },
                0, 1
            );
        }

    Or the method that should work:

        device.DrawUserIndexedPrimitives<VertexPositionColor>
        (
            PrimitiveType.TriangleList,
            render.Points,
            0, render.Points.Length,
            render.Indexes,
            0, render.Indexes.Length / 3,
            VertexPositionColor.VertexDeclaration
        );

    No matter which method I use, this is the typical result from my editor (Windows Forms with XNA): it should show a filled shape, because the indices are right (I've checked a dozen times). I simply click the screen (it gets the world coordinates and adds a point with a color); when there are 3 points or more it should start filling out the shape, but it only draws the lines (different method) and only one triangle. The grid isn't rendered with this shape. Any ideas?
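    One silent way to lose all but one triangle (a guess at the symptom, not the thread's confirmed fix): triangulators often emit triangles with mixed winding, and XNA 4's default rasterizer state culls counter-clockwise faces, so every wrongly wound triangle simply disappears. Disabling culling for a frame is a cheap test:

        // Before drawing, turn off backface culling to check whether winding is the culprit.
        var previous = device.RasterizerState;
        device.RasterizerState = RasterizerState.CullNone;   // draw both windings

        device.DrawUserIndexedPrimitives<VertexPositionColor>(
            PrimitiveType.TriangleList,
            render.Points, 0, render.Points.Length,
            render.Indexes, 0, render.Indexes.Length / 3,    // primitiveCount = triangles, not indices
            VertexPositionColor.VertexDeclaration);

        device.RasterizerState = previous;
        // If the full shape now appears, fix the triangulator's output order
        // (or keep CullNone for 2D shapes, where backfaces rarely matter).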

    Read the article
