Search Results

Search found 25518 results on 1021 pages for 'iterative development'.


  • planar shadow matrix and plane b value

    - by DevExcite
    I implemented planar shadows with the function D3DXMatrixShadow. As you know, it needs a plane and a light vector to calculate the shadow matrix. The problem is that when I set the plane as D3DXPLANE p(0, -1, 0, 0.1f), the shadows from a directional light are rendered correctly, but the shadows from a point light are not rendered at all. However, if I use D3DXPLANE p(0, 1, 0, 0.1f), the situation is reversed: shadows from the directional light are not drawn, while the shadows from the point light are fine. I cannot understand why this happens. Is it normal, or am I missing something? Thanks in advance.
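
    For reference, a minimal sketch (mine, not from the post) of the D3DX9 convention, assuming the usual rule that a directional light is passed with w = 0 and a point light with w = 1. The shadow matrix depends on the dot product of the plane and the light vector, so flipping the plane normal flips the sign of the x/y/z terms while the plane's d term only contributes when w = 1, which is one reason a given plane orientation can work for one light type and fail for the other:

        // Sketch only: shows how the plane normal and the light's w component
        // interact inside D3DXMatrixShadow. Not the poster's code.
        #include <d3dx9.h>

        D3DXMATRIX BuildShadowMatrix(const D3DXPLANE& plane, const D3DXVECTOR4& light)
        {
            // Directional light: light = (dirToLight.x, dirToLight.y, dirToLight.z, 0).
            // Point light:       light = (pos.x, pos.y, pos.z, 1).
            D3DXPLANE p;
            D3DXPlaneNormalize(&p, &plane);

            // If this is <= 0 the geometry is projected to the wrong side of the
            // plane (or degenerates), so the shadow seems to disappear.
            float d = p.a * light.x + p.b * light.y + p.c * light.z + p.d * light.w;
            (void)d; // check/log d while debugging

            D3DXMATRIX shadow;
            D3DXMatrixShadow(&shadow, &light, &p);
            return shadow;
        }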

    Read the article

  • Importance of a scripting engine in the Cocos2d game engine

    - by Mahbubur R Aaman
    Each game engine is different and solves different problems in different ways, so engine design varies greatly from engine to engine (even though many principles are shared between them). Cocos2D is a great product on its own, but it doesn't expose engine functionality to a scripting language like Lua or JavaScript. My question: how important is it to integrate a scripting engine into Cocos2d?

    Read the article

  • OpenGL/GLSL: Render to cube map?

    - by BobDole
    I'm trying to figure out how to render my scene to a cube map. I've been stuck on this for a bit and figured I would ask you guys for some help. I'm new to OpenGL and this is the first time I'm using an FBO. I currently have a working example that uses a cubemap .bmp file, and the samplerCube in the fragment shader is attached to GL_TEXTURE1. I'm not changing the shader code at all; I'm just no longer calling the function that loads the cubemap .bmp file and am instead trying to use the code below to render to a cubemap. You can see below that I'm also attaching the texture to GL_TEXTURE1 again, so that when I set the uniform with glUniform1i(getUniLoc(myProg, "Cubemap"), 1); the fragment shader can access it via uniform samplerCube Cubemap. I'm calling the function below like so: cubeMapTexture = renderToCubeMap(150, GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE); Now, I realize that in the draw loop below I'm not changing the view direction to look down the +x, -x, +y, -y, +z, -z axes. I really just wanted to see something working before implementing that, and I figured I should at least see something on my object the way the code is now. But I'm not seeing anything, just straight black. I've made my background white and the object is still black. I've removed lighting and coloring so the shader only samples the cubemap texture, and it's still black. I'm thinking the problem might be the format types when setting my texture, which are GL_RGB8, GL_RGBA, but I've also tried GL_RGBA, GL_RGBA and GL_RGB, GL_RGB. I thought this would be standard since we are rendering to a texture attached to a framebuffer, but I've seen different examples that use different enum values. I've also tried binding the cube map texture in every draw call that uses it: glBindTexture(GL_TEXTURE_CUBE_MAP, cubeMapTexture); Also, I'm not creating a depth buffer for the FBO, which I saw in most examples, because I only want the color buffer for my cube map. I actually added one to see if that was the problem and still got the same results (I could have fudged that up when I tried, though). Any help that can point me in the right direction would be appreciated.
        GLuint renderToCubeMap(int size, GLenum InternalFormat, GLenum Format, GLenum Type)
        {
            // color cube map
            GLuint textureObject;
            int face;
            GLenum status;

            //glEnable(GL_TEXTURE_2D);
            glActiveTexture(GL_TEXTURE1);
            glGenTextures(1, &textureObject);
            glBindTexture(GL_TEXTURE_CUBE_MAP, textureObject);
            glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

            for (face = 0; face < 6; face++) {
                glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, InternalFormat,
                             size, size, 0, Format, Type, NULL);
            }

            // framebuffer object
            glGenFramebuffers(1, &fbo);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_CUBE_MAP_POSITIVE_X, textureObject, 0);

            status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
            printf("%d\"\n", status);
            printf("%d\n", GL_FRAMEBUFFER_COMPLETE);

            glViewport(0, 0, size, size);

            for (face = 1; face < 6; face++) {
                drawSpheres();
                glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                       GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, textureObject, 0);
            }

            // Bind 0, which means render to back buffer; as a result, fb is unbound
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            return textureObject;
        }
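
    For reference, a rough sketch (mine, not the poster's code) of the usual per-face loop: attach each face, clear it, aim the camera down that face's axis with a 90° FOV and square aspect, then draw. The fbo, size, textureObject and drawSpheres() names are taken from the question; the fixed-function GLU calls are an assumption made to match the style of the rest of the post:

        // Standard cube-map face directions and up vectors (OpenGL convention).
        static const GLfloat dirs[6][3] = {
            { 1, 0, 0}, {-1, 0, 0}, {0, 1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}
        };
        static const GLfloat ups[6][3] = {
            {0, -1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}, {0, -1, 0}, {0, -1, 0}
        };

        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glViewport(0, 0, size, size);
        for (int face = 0; face < 6; face++) {
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, textureObject, 0);
            glClear(GL_COLOR_BUFFER_BIT);

            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(90.0, 1.0, 0.1, 100.0);   // 90-degree FOV, square aspect

            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            gluLookAt(0.0, 0.0, 0.0,                 // camera at the cube map's center
                      dirs[face][0], dirs[face][1], dirs[face][2],
                      ups[face][0],  ups[face][1],  ups[face][2]);

            drawSpheres();                           // assumed scene-draw helper from the question
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);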

    Read the article

  • OpenGL slower than Canvas

    - by VanDir
    Up until 3 days ago I used a Canvas in a SurfaceView for all the graphics operations, but I switched to OpenGL because my game dropped from 60 FPS to 30/45 as the number of sprites grew in some levels. However, I find myself disappointed because OpenGL now only reaches around 40/50 FPS at all levels. Surely (I hope) I'm doing something wrong. How can I get to a stable 60 FPS? My game is pretty simple and I can't believe that it's impossible to reach that. I use a 2D sprite texture applied to a quad for all the objects, and a transparent GLSurfaceView; the real background is an ImageView behind the GLSurfaceView. Some code:

        public MyGLSurfaceView(Context context, AttributeSet attrs) {
            super(context);
            setZOrderOnTop(true);
            setEGLConfigChooser(8, 8, 8, 8, 0, 0);
            getHolder().setFormat(PixelFormat.RGBA_8888);
            mRenderer = new ClearRenderer(getContext());
            setRenderer(mRenderer);
            setLongClickable(true);
            setFocusable(true);
        }

        public void onSurfaceCreated(final GL10 gl, EGLConfig config) {
            gl.glEnable(GL10.GL_TEXTURE_2D);
            gl.glShadeModel(GL10.GL_SMOOTH);
            gl.glDisable(GL10.GL_DEPTH_TEST);
            gl.glDepthMask(false);
            gl.glEnable(GL10.GL_ALPHA_TEST);
            gl.glAlphaFunc(GL10.GL_GREATER, 0);
            gl.glEnable(GL10.GL_BLEND);
            gl.glBlendFunc(GL10.GL_ONE, GL10.GL_ONE_MINUS_SRC_ALPHA);
            gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
        }

        public void onSurfaceChanged(GL10 gl, int width, int height) {
            gl.glViewport(0, 0, width, height);
            gl.glMatrixMode(GL10.GL_PROJECTION);
            gl.glLoadIdentity();
            gl.glOrthof(0, width, height, 0, -1f, 1f);
            gl.glMatrixMode(GL10.GL_MODELVIEW);
            gl.glLoadIdentity();
        }

        public void onDrawFrame(GL10 gl) {
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
            gl.glMatrixMode(GL10.GL_MODELVIEW);
            gl.glLoadIdentity();
            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

            // Draw all the graphic objects.
            for (byte i = 0; i < mGame.numberOfObjects(); i++) {
                mGame.getObject(i).draw(gl);
            }

            // Disable the client state before leaving
            gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        }

    mGame.getObject(i).draw(gl) looks like this for all the objects:

        /* HERE there is always a translatef and scalef transformation and sometimes rotatef */
        gl.glBindTexture(GL10.GL_TEXTURE_2D, mTexPointer[0]);

        // Point to our vertex buffer
        gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);

        // Draw the vertices as a triangle strip
        gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, mVertices.length / 3);

    EDIT: After some tests it seems to be due to the transparent GLSurfaceView. If I delete this line of code: setEGLConfigChooser(8, 8, 8, 8, 0, 0); the background becomes all black, but I reach 60 FPS. What can I do?

    Read the article

  • Laser Beam End Points Problems (XNA)

    - by user36159
    I am building a game in XNA that features colored laser beams in 3D space. Each beam is defined by:
    - Segment start position
    - Segment end position
    - Line width
    For rendering, I am using 3 quads:
    - Start point billboard
    - End point billboard
    - Middle section quad whose forward vector is the slope of the line and whose normal points to the camera
    The problem is that with additive blending, the end points and the middle section overlap, which looks quite jarring. However, I need the endpoints in case the laser is pointing towards the camera! See the blue laser in particular:

    Read the article

  • How to draw a continuous line, just like in Paint [on hold]

    - by hussain shah
    Hi, I want to draw points. The following code works well, but the problem is that when I drag the mouse it only works if I move slowly; if I move the cursor fast, the points do not form a continuous line. What is the solution?

        #include <iostream>
        #include <GL/glut.h>
        #include <GL/glu.h>
        #include <stdlib.h>

        void first()
        {
            glPushMatrix();
            glTranslatef(1, 01, 01);
            glScalef(1, 1, 1);
            glColor3f(0, 1, 0);
            glBegin(GL_QUADS);
            glVertex2f(0.8, 0.6);
            glVertex2f(0.6, 0.6);
            glVertex2f(0.6, 0.8);
            glVertex2f(0.8, 0.8);
            glEnd();
            glPopMatrix();
            glFlush();
        }

        void display(void)
        {
            glClear(GL_COLOR_BUFFER_BIT); // store color of each pixel of a frame
            glClearColor(0, 0, 0, 0);     // screen color
            //glFlush();
        }

        void drag(int x, int y)
        {
            y = 500 - y;
            //x = 500 - x;
            glPointSize(5);
            glColor3f(1.0, 1.0, 1.0);
            glBegin(GL_POINTS);
            glVertex2f(x, y + 2);
            glEnd();
            glutSwapBuffers();
            glFlush();
        }

        void reshape(int w, int h) {}

        void init(void)
        {
            glClear(GL_COLOR_BUFFER_BIT); // store color of each pixel of a frame
            glClearColor(0, 0, 0, 0);
            glViewport(0, 0, 500, 500);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(0.0, 500.0, 0.0, 500.0, 1.0, -1.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
        }

        void mouse_button(int button, int state, int x, int y)
        {
            if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
            {
                drag(x, y);
                first();
            }
            //else if (button == GLUT_MIDDLE_BUTTON && state == GLUT_DOWN)
            //{
            //
            //}
            else if (button == GLUT_RIGHT_BUTTON && state == GLUT_DOWN)
            {
                exit(0);
            }
        }

        int main(int argc, char** argv)
        {
            glutInit(&argc, argv);                     // initialize the program
            glutInitDisplayMode(GLUT_SINGLE);          // set up a basic display buffer (only singular for now)
            glutInitWindowSize(500, 500);              // set the width and height of the window
            glutInitWindowPosition(100, 100);          // set the position of the window
            glutCreateWindow("A basic OpenGL Window"); // set the caption for the window
            glutMotionFunc(drag);
            //glutMouseFunc(mouse_button);
            init();
            glutDisplayFunc(display);                  // call the display function to draw our world
            glutMainLoop();                            // initialize the OpenGL loop cycle
            return 0;
        }
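
    Not from the post, but a minimal sketch of the usual fix, assuming the same 500x500 orthographic setup: keep the previous motion position and connect it to the current one with a line segment, so fast mouse movement still produces an unbroken stroke.

        // Sketch only: draw segments between consecutive motion events instead of
        // isolated points. lastX/lastY are reset when the button is released.
        static int lastX = -1, lastY = -1;

        void dragLine(int x, int y)
        {
            y = 500 - y;                 // flip to the question's coordinate system
            if (lastX >= 0)              // only draw once we have a previous point
            {
                glLineWidth(5);
                glColor3f(1.0, 1.0, 1.0);
                glBegin(GL_LINES);
                glVertex2f(lastX, lastY);
                glVertex2f(x, y);
                glEnd();
                glFlush();
            }
            lastX = x;
            lastY = y;
        }

        void releaseButton(int button, int state, int x, int y)
        {
            if (state == GLUT_UP)        // start a fresh stroke on the next press
                lastX = lastY = -1;
        }

        // registered with: glutMotionFunc(dragLine); glutMouseFunc(releaseButton);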

    Read the article

  • How can I make OpenGL textures scale without becoming blurry?

    - by adorablepuppy
    I'm using OpenGL through LWJGL. I have a 16x16 textured quad rendering at 16x16. When I change its scale amount, the quad grows, then becomes blurrier as it gets larger. How can I make it scale without becoming blurry, like in Minecraft? Here is the code inside my RenderableEntity object:

        public void render() {
            Color.white.bind();
            this.spriteSheet.bind();
            GL11.glBegin(GL11.GL_QUADS);
            GL11.glTexCoord2f(0, 0);
            GL11.glVertex2f(this.x, this.y);
            GL11.glTexCoord2f(1, 0);
            GL11.glVertex2f(getDrawingWidth(), this.y);
            GL11.glTexCoord2f(1, 1);
            GL11.glVertex2f(getDrawingWidth(), getDrawingHeight());
            GL11.glTexCoord2f(0, 1);
            GL11.glVertex2f(this.x, getDrawingHeight());
            GL11.glEnd();
        }

    And here is code from my initGL method in my game class:

        GL11.glEnable(GL11.GL_TEXTURE_2D);
        GL11.glClearColor(0.46f, 0.46f, 0.90f, 1.0f);
        GL11.glViewport(0, 0, width, height);
        GL11.glOrtho(0, width, height, 0, 1, -1);

    And here is the code that does the actual drawing:

        public void start() {
            initGL(800, 600);
            init();
            while (true) {
                GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
                for (int i = 0; i < entities.size(); i++) {
                    ((RenderableEntity) entities.get(i)).render();
                }
                Display.update();
                Display.sync(100);
                if (Display.isCloseRequested()) {
                    Display.destroy();
                    System.exit(0);
                }
            }
        }

    Read the article

  • Using AdMob with Games that use Open GLES

    - by Vishal Kumar
    Can anyone help me integrate AdMob into my game? I've used the badlogic framework by Mario Zechner, and my game is like the SuperJumper example. After a lot of attempts I am still unable to use AdMob. I am new to Android development; I went through a number of tutorials but the ads are not getting displayed. I did the following: I got the libraries and placed them properly. My main.xml contains:

        android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="@string/hello" /

    My Activity class onCreate method:

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            RelativeLayout layout = new RelativeLayout(this);
            adView = new AdView(this, AdSize.BANNER, "a1518637fe542a2");
            AdRequest request = new AdRequest();
            request.addTestDevice(AdRequest.TEST_EMULATOR);
            request.addTestDevice("D77E32324019F80A2CECEAAAAAAAAA");
            adView.loadAd(request);
            layout.addView(glView);
            RelativeLayout.LayoutParams adParams = new RelativeLayout.LayoutParams(
                    RelativeLayout.LayoutParams.WRAP_CONTENT,
                    RelativeLayout.LayoutParams.WRAP_CONTENT);
            adParams.addRule(RelativeLayout.ALIGN_PARENT_BOTTOM);
            adParams.addRule(RelativeLayout.CENTER_IN_PARENT);
            layout.addView(adView, adParams);
            setContentView(layout);
            requestWindowFeature(Window.FEATURE_NO_TITLE);
            getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
                    WindowManager.LayoutParams.FLAG_FULLSCREEN);
            glView = new GLSurfaceView(this);
            glView.setRenderer(this);
            //setContentView(glView);
            glGraphics = new GLGraphics(glView);
            fileIO = new AndroidFileIO(getAssets());
            audio = new AndroidAudio(this);
            input = new AndroidInput(this, glView, 1, 1);
            PowerManager powerManager = (PowerManager) getSystemService(Context.POWER_SERVICE);
            wakeLock = powerManager.newWakeLock(PowerManager.FULL_WAKE_LOCK, "GLGame");
        }

    My manifest file contains:

        <activity android:name="com.google.ads.AdActivity"
            android:configChanges="keyboard|keyboardHidden|orientation|smallestScreenSize"/>
        </application>
        <uses-permission android:name="android.permission.WAKE_LOCK" />
        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
        <uses-sdk android:minSdkVersion="7" />

    When I first tried to set up AdMob through the XML, nothing changed; it didn't even log the device id in Logcat. Later, when I wrote the code in the main Activity class and ran it, the application crashed with these 7 errors every time:

        The android:configChanges value of the com.google.ads.AdActivity must include screenLayout.
        The android:configChanges value of the com.google.ads.AdActivity must include uiMode.
        The android:configChanges value of the com.google.ads.AdActivity must include screenSize.
        The android:configChanges value of the com.google.ads.AdActivity must include smallestScreenSize.
        You must have AdActivity declared in AndroidManifest.xml with configChanges.
        You must have INTERNET and ACCESS_NETWORK_STATE permissions in AndroidManifest.xml.

    Please help me by telling me what is wrong with the code. Can I do this only in the XML files, without changing the Activity class? I will be very grateful to anyone providing support.

    Read the article

  • What should be contained in a game scene graph?

    - by Bunkai.Satori
    Could you help me clarify what exactly should be contained within a game scene graph? Consider the following list:
    - Game actors? (Obviously yes; all the objects that change state should be the major part of the scene graph.)
    - Simple static game objects? (I mean objects placed in the background that are not animated and do not collide.)
    - Game triggers?
    - Game lights?
    - Game cameras?
    - Weapon bullets?
    - Game explosions and special effects?
    That covers object types. Now to the coverage of the scene graph: should a scene graph contain the whole game level map from the start of the level, or should it contain only the visible portion of the map? If the latter, the scene graph would have to be continuously updated by adding/removing game objects as the player moves; however, containing only the visible area of the map would obviously be much faster to traverse and update.

    Read the article

  • Block With Given ID Does Not Exist - Minecraft Mod

    - by inixsoftware
    I have tried to make my own Minecraft block using Forge, but for some reason, when I use /give Playerxxx 1000 1 the game says: There is no block with id '1000'. My block code:

        package net.minecraft.blockr;

        import net.minecraft.block.Block;
        import net.minecraft.block.material.Material;
        import net.minecraft.creativetab.CreativeTabs;

        public class Basalt extends Block {
            public Basalt(int par1, Material par2Material) {
                super(par1, par2Material);
                this.setCreativeTab(CreativeTabs.tabBlock);
            }
        }

    Mod code:

        package net.minecraft.blockr;

        import cpw.mods.fml.common.Mod;
        import cpw.mods.fml.common.Mod.Init;
        import cpw.mods.fml.common.network.NetworkMod;
        import cpw.mods.fml.common.registry.GameRegistry;
        import cpw.mods.fml.common.registry.LanguageRegistry;
        import net.minecraft.block.Block;
        import net.minecraft.block.material.Material;

        @Mod(modid="blockr", name="Blockr Mod", version="PreAlpha v0.0.1")
        @NetworkMod(clientSideRequired=true, serverSideRequired=false)
        public class BlockrMod {
            public static Block basalt;

            @Init
            public void load() {
                basalt = new Basalt(1000, Material.ground).setUnlocalizedName("basalt");
                GameRegistry.registerBlock(basalt, basalt.getUnlocalizedName());
                LanguageRegistry.addName(basalt, "Basalt Block");
            }

            public String getVersion() {
                return "0.0.1";
            }
        }

    What exactly is going wrong? My package is blockr (as my mod is called blockr). I know my mod was loaded, because I can see it in Forge under Mods.

    Read the article

  • component Initialization in component-based game architectures

    - by liortal
    I'm developing a 2D game (in XNA) and I've gone slightly towards a component-based approach, where I have a main game object (a container) that holds different components. While implementing the needed functionality as components, I'm now faced with an issue: who should initialize components? Are components usually passed into an entity already initialized, or does some other entity initialize them? In my current design I have a problem where a component, when created, requires knowledge of the entity it is attached to, but these two events (component construction and attaching to a game entity) may not happen at the same time. I am looking for a standard approach, or examples of working implementations, that overcome this issue or present a clear way to resolve it.

    Read the article

  • Shadow mapping with deferred shading for directional lights - shadow map projection problem

    - by Harry
    I'm trying to add shadow mapping to my engine. I started with directional lights because they seemed to be the easiest, but I was wrong :) I have implemented deferred shading and I reconstruct the position from depth. I think that is where the biggest problem is, but the code looks OK to me. Now, more about the problem: the shadow map projected onto the meshes looks badly scaled and translated, and some information from the shadow map texture isn't visible at all. You can see it on this screen: http://img5.imageshack.us/img5/2254/93dn.png The yellow frustum is the light frustum, and I have mixed the shadow map preview with the actual scene. As you can see, the shadows are in the wrong place, and the shadows of the cone and the sphere aren't visible. Could you look at my code and tell me where the mistake is?

        // create shadow map
        if(!_shd) glGenTextures(1, &_shd);
        glBindTexture(GL_TEXTURE_2D, _shd);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 1024, 1024, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, NULL); // shadow map size
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, _shd, 0);
        glDrawBuffer(GL_NONE);

        // setting camera
        Vector dire = Vector(0, 0, 1);
        ACamera.setLookAt(dire, Vector(0));
        ACamera.setPerspectiveView(60.0f, 1, 0.1f, 10.0f); // currently needed for proper frustum corners calculation
        Vector min(ACamera._point[0]), max(ACamera._point[0]);
        for(int i = 0; i < 8; i++){
            max = Max(max, ACamera._point[i]);
            min = Min(min, ACamera._point[i]);
        }
        ACamera.setOrthogonalView(min.x, max.x, min.y, max.y, -max.z, -min.z);

        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _s_buffer); // framebuffer for shadow map
        // rendering to depth buffer

        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _g_buffer);
        Shaders["DirLight"].set(true);
        Matrix4 bias;
        bias.x.set(0.5, 0.0, 0.0, 0.0);
        bias.y.set(0.0, 0.5, 0.0, 0.0);
        bias.z.set(0.0, 0.0, 0.5, 0.0);
        bias.w.set(0.5, 0.5, 0.5, 1.0);
        Shaders["DirLight"].set("textureMatrix", ACamera.matrix * Projection3D * bias);
        // order of multiplications is 100% correct, everything gives me the same result as using glm
        glActiveTexture(GL_TEXTURE5);
        glBindTexture(GL_TEXTURE_2D, _shd);
        lightDir(dir); // light calculations

    The vertex shader does nothing related to the shadow calculations. The pixel shader function which calculates whether a pixel is in shadow or not:

        float readShadowMap(vec3 eyeDir)
        {
            // retrieve depth of pixel
            float z = texture2D(depth, gl_FragCoord.xy/screen).z;
            vec3 pos = vec3(gl_FragCoord.xy/screen, z);
            // transform by the projection and view inverse
            vec4 worldSpace = inverse(View)*inverse(ProjectionMatrix)*vec4(pos*2-1,1);
            worldSpace /= worldSpace.w;
            vec4 coord = textureMatrix*worldSpace;
            float vis = 1.0f;
            if(texture2D(shadow, coord.xy).z < coord.z-0.001) vis = 0.2f;
            return vis;
        }

    I also have a question about shadows specifically for directional lights. Currently I always look at the 0,0,0 position, and in a later implementation I will have to move the light frustum along with the camera frustum. I've found how to do this here: http://www.gamedev.net/topic/505893-orthographic-projection-for-shadow-mapping/ but it doesn't give me what I want, maybe because of the problems mentioned above; either way I would like to know your opinion. EDIT: vec4 worldSpace is the position read from the depth of the scene (not the shadow map). Maybe I wasn't precise, so I'll quickly explain what is what: View is the camera view matrix, ProjectionMatrix is the camera projection. First I try to get the world-space position from the depth map and then multiply it by textureMatrix, which is light view * light projection * bias. The rest of the code is the same as in many tutorials. I can't use the vertex shader to do something like gl_Position = textureMatrix * gl_Vertex and have it interpolated in the fragment shader, because I use deferred rendering, so I want to get it from the depth buffer. EDIT2: I also tried doing it as in the Coding Labs tutorial on shadow mapping with deferred rendering, but unfortunately that also works incorrectly.
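
    For reference, my own summary (not part of the post) of the chain the fragment shader is trying to implement, written in column-vector convention (the engine may use the transposed, row-vector order):

        \[
        \mathbf{p}_{\text{world}} \;=\; \frac{(P_{\text{cam}}\,V_{\text{cam}})^{-1}\,(2u-1,\;2v-1,\;2z-1,\;1)^{\top}}{w},
        \qquad
        \mathbf{p}_{\text{shadow}} \;=\; B\,P_{\text{light}}\,V_{\text{light}}\,\mathbf{p}_{\text{world}}
        \]

    where (u, v, z) are the screen UV and sampled depth, w is the fourth component of the transformed vector, and B is the 0.5-scale/0.5-offset bias matrix; the fragment counts as lit when the shadow-map depth sampled at p_shadow.xy is at least p_shadow.z minus a small bias.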

    Read the article

  • How can I use multiple meshes per entity without breaking the "one component of each type per entity" rule?

    - by Mathias Hölzl
    We are just switching from a hierarchy-based game engine to a component-based game engine. My problem is that I load models which have a hierarchy of meshes, and the way I understand it, an entity in a component-based system cannot have multiple components of the same type, yet I would need a "meshComponent" for each mesh in a model. How could I solve this problem? This site describes the implementation of a component-based game engine: http://cowboyprogramming.com/2007/01/05/evolve-your-heirachy/

    Read the article

  • Use depth bias for shadows in deferred shading

    - by cubrman
    We are building a deferred shading engine and we have a problem with shadows. To add shadows we use two maps: the first stores the depth of the scene captured by the player's camera, and the second stores the depth of the scene captured by the light's camera. We then run a shader that analyzes the two maps and outputs a third one with the shadowed areas ready for the current frame. The problem we face is a classic one: self-shadowing. A standard way to solve it is to use slope-scale depth bias and depth offsets; however, since we are doing things in a deferred way, we cannot employ that mechanism directly, and any attempt to set a depth bias while capturing the light's view depth produced no, or unsatisfying, results. So here is my question: the MSDN article has a convoluted explanation of the slope-scale bias: bias = (m × SlopeScaleDepthBias) + DepthBias, where m is the maximum depth slope of the triangle being rendered, defined as m = max(abs(delta z / delta x), abs(delta z / delta y)). Could you explain how I can implement this manually in a shader? Or maybe there are better ways to fix this problem for deferred shadows?
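
    For reference, the MSDN formula quoted above written out; in a pixel shader the two partial derivatives are exactly what the screen-space derivative instructions (ddx/ddy in HLSL, dFdx/dFdy in GLSL) approximate per pixel:

        \[
        \text{bias} \;=\; m \cdot \text{SlopeScaleDepthBias} + \text{DepthBias},
        \qquad
        m \;=\; \max\!\left(\left|\frac{\partial z}{\partial x}\right|,\;\left|\frac{\partial z}{\partial y}\right|\right)
        \]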

    Read the article

  • C# XNA Normals Question

    - by Wade
    Hello all! I have been working on a simple XNA proof of concept for a game idea I have, as well as to further my learning in XNA. However, I seem to be stuck on these dreaded normals, and using BasicEffect with default lighting I can't tell whether my normals are being calculated correctly, hence the question. I'm mainly drawing cubes at the moment, using a triangle list and a VertexBuffer to get the job done. The north face of my cube has two polygons and 6 vertices:

        Vector3 startPosition = new Vector3(0, 0, 0);
        corners[0] = startPosition; // This is the start position. Block size is 5.
        corners[1] = new Vector3(startPosition.X, startPosition.Y + BLOCK_SIZE, startPosition.Z);
        corners[2] = new Vector3(startPosition.X + BLOCK_SIZE, startPosition.Y, startPosition.Z);
        corners[3] = new Vector3(startPosition.X + BLOCK_SIZE, startPosition.Y + BLOCK_SIZE, startPosition.Z);

        verts[0] = new VertexPositionNormalTexture(corners[0], normals[0], textCoordBR);
        verts[1] = new VertexPositionNormalTexture(corners[1], normals[0], textCoordTR);
        verts[2] = new VertexPositionNormalTexture(corners[2], normals[0], textCoordBL);
        verts[3] = new VertexPositionNormalTexture(corners[3], normals[0], textCoordTL);
        verts[4] = new VertexPositionNormalTexture(corners[2], normals[0], textCoordBL);
        verts[5] = new VertexPositionNormalTexture(corners[1], normals[0], textCoordTR);

    Using those coordinates I want to generate the normal for the north face, but I have no clue how to combine those vectors and create a normal for the two polygons they make. Here is what I tried:

        normals[0] = Vector3.Cross(corners[1], corners[2]);
        normals[0].Normalize();

    It seems correct, but when I use the same approach for the other sides of the cube, the lighting looks weird and not consistent with where I think the light source is coming from (I'm not really sure with BasicEffect). Am I doing this right? Can anyone explain in layman's terms how normals are calculated? Any help is much appreciated. Note: I tried going through Riemer's tutorials and such to figure it out, with no luck; it seems no one really goes over the math well enough. Thanks!
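
    As a side note (mine, not part of the question): a face normal is conventionally built from two edge vectors of the face rather than from the corner positions themselves, with the winding order of the edges deciding whether the normal points out of or into the cube:

        \[
        \mathbf{n} \;=\; \operatorname{normalize}\big((\mathbf{c}_1-\mathbf{c}_0)\times(\mathbf{c}_2-\mathbf{c}_0)\big)
        \]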

    Read the article

  • XNA running slow when making a texture

    - by Anthony
    I'm using XNA to test an image analysis algorithm for a robot. I made a simple 3D world that has grass, a robot, and white lines (that represent the course). The image analysis algorithm is a modification of the Hough line detection algorithm. I have the game render 2 camera views to a render target in memory. One camera is a top-down view of the robot going around the course, and the second camera is the view from the robot's perspective as it moves along. I take the render target of the robot camera and convert it to a Color[,] so that I can do image analysis on it.

        private Color[,] TextureTo2DArray(Texture2D texture, Color[] colors1D, Color[,] colors2D)
        {
            texture.GetData(colors1D);
            for (int x = 0; x < texture.Width; x++)
            {
                for (int y = 0; y < texture.Height; y++)
                {
                    colors2D[x, y] = colors1D[x + (y * texture.Width)];
                }
            }
            return colors2D;
        }

    I want to overlay the results of the image analysis on the robot camera view. The first part of the image analysis is finding the white pixels. When I find the white pixels I create a bool[,] array showing which pixels were white and which were black. Then I want to convert it back into a texture so that I can overlay it on the robot view. When I try to create the new texture showing which pixels were white, the game goes super slow (around 10 Hz). Can you give me some pointers on how to make the game go faster? If I comment out this algorithm, it goes back up to 60 Hz.

        private Texture2D GenerateTexturesFromBoolArray(bool[,] boolArray, Color[] colorMap, Texture2D textureToModify)
        {
            for (int i = 0; i < screenWidth; i++)
            {
                for (int j = 0; j < screenHeight; j++)
                {
                    if (boolArray[i, j] == true)
                    {
                        colorMap[i + (j * screenWidth)] = Color.Red;
                    }
                    else
                    {
                        colorMap[i + (j * screenWidth)] = Color.Transparent;
                    }
                }
            }
            textureToModify.SetData<Color>(colorMap);
            return textureToModify;
        }

    Each time I run Draw, I must set the texture to null so that I can modify it.

        public override void Draw(GameTime gameTime)
        {
            Vector2 topRightVector = ((SimulationMain)Game).spriteRectangleManager.topRightVector;
            Vector2 scaleFactor = ((SimulationMain)Game).config.scaleFactorScreenSizeToWindow;

            this.spriteBatch.Begin(); // Start the 2D drawing
            this.spriteBatch.Draw(this.textureFindWhite, topRightVector, null, Color.White,
                                  0, Vector2.Zero, scaleFactor, SpriteEffects.None, 0);
            this.spriteBatch.End(); // Stop drawing.

            GraphicsDevice.Textures[0] = null;
        }

    Thanks for the help, Anthony G.

    Read the article

  • Collision Detection Code Structure with Sloped Tiles

    - by ProgrammerGuy123
    I'm making a 2D tile-based game with slopes, and I need help with the collision detection. This question is not about determining the vertical position of the player from the horizontal position when on a slope, but rather about the structure of the code. Here is my pseudocode for the collision detection:

        void Player::handleTileCollisions()
        {
            int left   = // find tile that's left of player
            int right  = // find tile that's right of player
            int top    = // find tile that's above player
            int bottom = // find tile that's below player

            for (int x = left; x <= right; x++)
            {
                for (int y = top; y <= bottom; y++)
                {
                    switch (getTileType(x, y))
                    {
                    case 1: // solid tile
                    {
                        // resolve collisions
                        break;
                    }
                    case 2: // sloped tile
                    {
                        // resolve collisions
                        break;
                    }
                    default: // air tile or whatever else
                        break;
                    }
                }
            }
        }

    When the player is on a sloped tile, he is actually inside the tile itself horizontally, so that the player doesn't look like he is floating. This creates a problem: when there is a sloped tile next to a solid square tile, the player can't move past it, because this algorithm resolves any collision with the solid tile. Here is a gif showing the problem: So what is a good way to structure my code so that solid tiles get ignored while the player is inside a sloped tile?

    Read the article

  • 2D Side Scrolling game and "walk over ground" collision detection

    - by Fire-Dragon-DoL
    The question is not hard: I'm writing a game engine for 2D side-scrolling games, and when I think about my own 2D side-scrolling game I always come up against the problem of how to handle collision with the ground. I don't think I can handle collision with the ground (the ground, for me, is "where the player walks", so something heavily used) in a per-pixel way, and I can't do it with simple shape comparison either (because the ground can be tilted), so what's the correct way? I know what tiles are and I've read about them, but how big should each tile be so the ground doesn't look like stairs? Are there other approaches? I watched this game and it's very nice how he walks on the ground: http://www.youtube.com/watch?v=DmSAQwbbig8&feature=player_embedded If there are "platforms" in mid air, how should I handle them? I can walk over them but I can't pass "inside" them. Imagine a platform in mid air: it allows you to walk over it, but limits you because you can't jump in the area it occupies. Sorry for my English; it's not my native language, and this topic has a lot of keywords I don't know, so I have to use workarounds. Thanks for any answer. Additional information and suggestions: I'm taking a game course at the moment and I asked them how to do this; they suggested this approach (a quadtree; a rough code sketch of the idea follows below):
    - The whole map is divided into "big nodes"
    - Each bigger node has sub-nodes, used to find where the player is
    - You can find the player's node with a ray at the player's position
    - When you find the node where the player is, you can do the collision check over all its pixels (which can be 100-200px, nothing more)
    Here is an example, although I didn't show the bigger nodes very well because I'm not very good with Photoshop :P What do you think of this approach?
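
    Not from the post, just a rough C++ sketch of the course's suggestion, under the assumption of square axis-aligned nodes and a fixed leaf size: build the tree once, then each frame walk down to the leaf containing the player and run the expensive per-pixel (or per-shape) test only there.

        // Sketch only: a minimal point-query quadtree over the map.
        #include <array>
        #include <memory>

        struct Node {
            float x, y, size;                          // top-left corner and edge length
            std::array<std::unique_ptr<Node>, 4> kids; // empty in a leaf

            bool contains(float px, float py) const {
                return px >= x && px < x + size && py >= y && py < y + size;
            }
        };

        // Split a node into four children until leaves reach the target size.
        void subdivide(Node& n, float leafSize) {
            if (n.size <= leafSize) return;            // small enough: this is a leaf
            float h = n.size * 0.5f;
            const float ox[4] = {0, h, 0, h};
            const float oy[4] = {0, 0, h, h};
            for (int i = 0; i < 4; ++i) {
                n.kids[i] = std::make_unique<Node>(Node{n.x + ox[i], n.y + oy[i], h, {}});
                subdivide(*n.kids[i], leafSize);
            }
        }

        // Descend from the root to the leaf containing the player's position.
        const Node* findLeaf(const Node& n, float px, float py) {
            for (const auto& k : n.kids)
                if (k && k->contains(px, py))
                    return findLeaf(*k, px, py);
            return &n;                                 // no child contains it: this is the leaf
        }

        // Usage sketch: build once, query every frame, then run the fine-grained
        // collision test only against the ground pixels inside that leaf.
        // Node root{0, 0, 2048, {}};
        // subdivide(root, 128);
        // const Node* leaf = findLeaf(root, player.x, player.y);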

    Read the article

  • Multiple render targets and pixel shader outputs terminology

    - by Rei Miyasaka
    I'm a little confused on the jargon: does Multiple Render Targets (MRT) refer to outputting from a pixel shader to multiple elements in a struct? That is, when one says "MRT is to write to multiple textures", are multiple elements interleaved in a single output texture, or do you specify multiple discrete output textures? By the way, from what I understand, at least for DX9, all the elements of this struct need to be of the same size. Does this restriction still apply to DX11?

    Read the article

  • Farseer Physics Engine and the Ms-PL License

    - by Stephen Tierney
    Am I able to produce code for a game which uses the Farseer engine and release my code under an open source license other than the Ms-PL? My concern is with the following section of the license: "If you distribute any portion of the software in source code form, you may do so only under this license by including a complete copy of this license with your distribution. If you distribute any portion of the software in compiled or object code form, you may only do so under a license that complies with this license." If I do not include Farseer in my source code distribution, does that give me an exemption from this clause, since I am not distributing the software? My code merely uses its functions. Nowhere in the license does it force you to provide source code for derivative or linking works; it simply conditions the obligation on "if you distribute".

    Read the article

  • What is the most efficient way to blur in a shader?

    - by concernedcitizen
    I'm currently working on screen space reflections. I have perfectly reflective mirror-like surfaces working, and I now need to use a blur to make the reflection on surfaces with a low specular gloss value look more diffuse. I'm having difficulty deciding how to apply the blur, though. My first idea was to just sample a lower mip level of the screen rendertarget. However, the rendertarget uses SurfaceFormat.HalfVector4 (for HDR effects), which means XNA won't allow linear filtering. Point filtering looks horrible and really doesn't give the visual cue that I want. I've thought about using some kind of Box/Gaussian blur, but this would not be ideal. I've already thrashed the texture cache in the raymarching phase before the blur even occurs (a worst case reflection could be 32 samples per pixel), and the blur kernel to make the reflections look sufficiently diffuse would be fairly large. Does anyone have any suggestions? I know it's doable, as Photon Workshop achieved the effect in Unity.
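
    For reference (my note, not from the post): a Gaussian kernel is separable, so a k×k blur can be done as a horizontal pass followed by a vertical pass, dropping the cost per pixel from k² samples to 2k:

        \[
        G(x, y) \;=\; \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}}
        \;=\; \underbrace{\frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{x^2}{2\sigma^2}}}_{\text{horizontal pass}}
        \cdot
        \underbrace{\frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{y^2}{2\sigma^2}}}_{\text{vertical pass}}
        \]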

    Read the article

  • Do 2D games have a future? [closed]

    - by Griffin
    I'm currently working on a 2D soft-body physics engine (since none exist right now -_-), but I'm worried that there's no point in spending what will most likely be years on it. Although I love working on it, I doubt such an engine would generate any income, considering that anyone willing to pay money for the library is likely to be working in 3D. Do 2D games have any sort of future in the game industry? Should I just drop my engine and find something meaningful to work on? Bonus: I've been trying to think of a unique way to use my physics engine in a 2D game by looking at games that are multi-dimensional but still use a 2D perspective, like Paper Mario. Any ideas?

    Read the article

  • Hide collision layer in libgdx with TiledMap?

    - by Daniel Jonsson
    I'm making a 2D game with libgdx, and I'm using its TileMapRenderer to render my map, which I have made in the map editor Tiled. In Tiled I have a dedicated collision layer. However, I can't figure out how I'm supposed to hide it and its tiles in the game. This is how a map is loaded:

        TiledMap map = TiledLoader.createMap(Gdx.files.internal("maps/map.tmx"));
        TileAtlas atlas = new TileAtlas(map, Gdx.files.internal("maps"));
        tileMapRenderer = new TileMapRenderer(map, atlas, 32, 32);

    Currently the collision tiles are rendered on top of everything else, just as I see them in the map editor.

    Read the article

  • Polygons vs sprites rendering performance in Unity for windows phone 8

    - by Géry Arduino
    I'm currently building a Windows Phone 8 game with Unity, with 111 sprites (no more, no less) being updated each frame. I have a large overhead in the profiler (70% to 90% minimum). I tried the following to get a higher frame rate: running at minimum quality settings, and disabling and enabling V-Sync. I finally managed to get 60 FPS, but I still have a large overhead, and I believe I should be well above 60 FPS for such a small number of sprites. Moreover, I still have to implement the game logic on top of this, so I'd like some headroom in my FPS to work with. I was wondering if it would be better in terms of performance to use polygons instead of sprites, as sprites are quite new in Unity (that would give me around 222 triangles). Has anyone checked the performance differences between sprites and actual mesh renderers in Unity when it comes to phones? If so, what would be the best option in that case? FYI: I'm using the Windows Phone 8 emulator in Visual Studio on a compliant computer, so it should normally reflect the behavior of a real phone (expecting some differences, but still...). EDIT: To clarify my question, I wonder what is most efficient on Windows Phone 8: sprites or mesh renderers?

    Read the article

  • C# WPF Helix scale-based mesh parenting using Transform3DGroup

    - by Rick2047
    I am using https://helixtoolkit.codeplex.com/ as a 3D framework. I want to move the black mesh relative to the green mesh, as shown in the attached image below. I want to make the green mesh the parent of the black mesh, so that a change in the scale of the green mesh also results in motion of the black mesh. It could be partial parenting or maybe more: I need 3D rotation and 3D translation, plus a translation along the green mesh's length axis, for the black mesh relative to the green mesh itself. Suppose a variable green_mesh_scale drives the scale of the green mesh along its length axis; the black mesh should use that variable in order to move along the green mesh's length axis. How do I go about this? I've done the following:

        GeometryModel3D GreenMesh, BlackMesh;
        ...
        double green_mesh_scale = e.NewValue;

        Transform3DGroup forGreen = new Transform3DGroup();
        Transform3DGroup forBlack = new Transform3DGroup();

        forGreen.Children.Add(new ScaleTransform3D(new Vector3D(1, green_mesh_scale, 1)));
        // ... transforms for rotation and translation
        GreenMesh.Transform = forGreen;

        forBlack = forGreen;
        forBlack.Children.Add(new TranslateTransform3D(new Vector3D(0, green_mesh_scale, 0)));
        BlackMesh.Transform = forBlack;

    The problem with this is that the scale transform is also applied to the black mesh. I think I just need to avoid the scale part. I tried keeping all the transforms except the scale in another Transform3DGroup variable, but that also does not behave as expected. Can MatrixTransform3D be used here somehow? Also, please suggest whether this question should be posted somewhere else on Stack Exchange.

    Read the article
