Search Results

Search found 13727 results on 550 pages for 'game industry'.


  • What does SetTextureStage(0, D3DTSS_COLORARG2, 0) in DirectX mean?

    - by Vite Falcon
    I'm trying to convert some DirectX code to Ogre3D and was wondering what the following translates to:

        pDev->SetTextureStage(0, D3DTSS_TEXCOORDINDEX, 0)
        pDev->SetTextureStage(0, D3DTSS_COLORARG1, D3DTA_TEXTURE)
        pDev->SetTextureStage(0, D3DTSS_COLOROP, D3DTOP_MODULATE)
        pDev->SetTextureStage(0, D3DTSS_COLORARG2, 0)

    What is the modulation operation happening here? Is the texture getting modulated with the background? Or is it getting zeroed? I've tried searching for what this means and unfortunately I haven't come across anything meaningful. Any help to shed light on this matter will be much appreciated.
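    For reference: if memory serves, D3DTA_DIFFUSE is defined as 0 in d3d9types.h, so the literal 0 passed as COLORARG2 most likely selects the interpolated diffuse (vertex) color rather than zeroing anything; D3DTOP_MODULATE then multiplies the two arguments componentwise. A minimal sketch of the per-pixel math in C# terms (XNA-style Vector4, purely illustrative):

        // Assuming COLORARG2 == 0 means D3DTA_DIFFUSE (its defined value),
        // the fixed-function stage computes, per pixel:
        Vector4 ModulateStage(Vector4 textureColor, Vector4 diffuseColor)
        {
            // D3DTOP_MODULATE: result = COLORARG1 * COLORARG2 (componentwise)
            return textureColor * diffuseColor;
        }

    Under that reading, the Ogre3D equivalent would be a modulate blend between the texture and the lit vertex color, not a multiply by zero.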

    Read the article

  • OpenGL Shading Program Object Memory Requirement

    - by Hans Wurst
    gDEbugger states that OpenGL's program objects only occupy an insignificant amount of memory. How much is this actually? I don't know if the stuff I looked up in Mesa is actually what I was looking for, but it requires 16KB [Edit: false, confusing struct names; less than 1KB immediate, some further behind pointers] per program object. Not quite insignificant. So is it recommended to create a unique program object for each object of the scene? Or to share a single program object and set each scene object's custom variables just before its draw call?
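    The shared-program approach is the usual one; a hedged sketch in OpenTK-style C# bindings (SceneObject and the uniform locations are hypothetical names):

        GL.UseProgram(sharedProgram);                 // one program for the whole scene
        foreach (SceneObject obj in scene)            // SceneObject/scene: hypothetical
        {
            Matrix4 world = obj.World;                // per-object data...
            GL.UniformMatrix4(worldLocation, false, ref world);   // ...set just before the draw
            GL.Uniform4(tintLocation, obj.Tint.X, obj.Tint.Y, obj.Tint.Z, obj.Tint.W);
            obj.Draw();                               // issues the actual draw call
        }

    Beyond memory, sharing also avoids per-object glUseProgram switches and redundant shader compiles, which tend to matter more than the program object's footprint.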

    Read the article

  • What are the maths behind 'Raiden 2' purple laser?

    - by Aybe
    The path of the laser is affected by user input and by the enemies present on the screen. Here is a video; the laser in question is shown at the 5:00 mark: Raiden II (PS) - 1 Loop Clear - Part 2. Update: Here is a test using Inkscape; the ship is at the bottom, and the first 4 enemies are targeted by the plasma. There seems to be a sort of pattern. I moved the ship first, then the handle from it to form a 45° angle, then while trying to fit the curve I found a pattern of parallel handles, and continued so until I reached the last enemy. Update, 5/26/2012: I started an XNA project using Béziers; there is still some work needed. I will update the question next week. Stay tuned! Update, 5/30/2012: It really seems that they are using Bézier curves; I think I will be able to replicate/imitate a plasma of such grade. There are two new topics I discovered since last time: arc length and Runge's phenomenon. The first should help in making linear movement along a Bézier curve possible; the second should help in optimizing the number of vertices. Next time I will post a video so you can see the progress 8-)
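    Since the question has settled on Bézier curves, here is a minimal cubic Bézier evaluator in C#/XNA terms, matching the XNA project mentioned above (a sketch; p0..p3 and t are the standard control points and parameter):

        static Vector2 CubicBezier(Vector2 p0, Vector2 p1, Vector2 p2, Vector2 p3, float t)
        {
            // Bernstein form of a cubic Bézier, t in [0, 1]
            float u = 1f - t;
            return u * u * u * p0
                 + 3f * u * u * t * p1
                 + 3f * u * t * t * p2
                 + t * t * t * p3;
        }

    For the arc-length part, a common trick is to sample this function at many t values, accumulate the segment lengths into a lookup table, and invert that table so the plasma advances along the curve at constant speed.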

    Read the article

  • 3D BSP rendering for maps made in 2d platform style

    - by Dev Joy
    I wish to render a 3D map which is always seen from the top; the camera is in the sky, always looking down at the earth. Sample of a floor layout: I don't think I need complex structures like BSP trees to render it. I mean, I can divide the map into a grid and render the cells like it's done in 2D platform games. I just want to know if this is a good idea and what may go wrong if I don't choose BSP tree rendering here. Please also mention if any better-known rendering techniques are available for such situations.
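    For what it's worth, the grid approach maps directly onto a trivial visibility test; a sketch in C# (view, cellSize, map and DrawCell are hypothetical stand-ins for the grid data):

        // view: the camera's visible rectangle in world units
        int firstX = Math.Max(0, view.Left / cellSize);
        int firstY = Math.Max(0, view.Top / cellSize);
        int lastX  = Math.Min(mapWidth  - 1, view.Right  / cellSize);
        int lastY  = Math.Min(mapHeight - 1, view.Bottom / cellSize);
        for (int y = firstY; y <= lastY; y++)
            for (int x = firstX; x <= lastX; x++)
                DrawCell(map[x, y]);   // only on-screen cells are ever drawn

    BSP trees mainly pay off for arbitrary indoor geometry and view directions; with a fixed top-down camera and uniform cells, the grid range test above gives the same culling for much less complexity.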

    Read the article

  • Transforming a primitive tetrahedron into a primitive icosahedron?

    - by Djentleman
    I've created a tetrahedron by creating a BoundingBox and building the faces of the tetrahedron within the bounding box as follows (see image as well):

        VertexPositionNormalTexture[] vertices = new VertexPositionNormalTexture[12];
        BoundingBox box = new BoundingBox(new Vector3(-1f, 1f, 1f), new Vector3(1f, -1f, -1f));
        vertices[0].Position = box.GetCorners()[0];
        vertices[1].Position = box.GetCorners()[2];
        vertices[2].Position = box.GetCorners()[7];
        vertices[3].Position = box.GetCorners()[0];
        vertices[4].Position = box.GetCorners()[5];
        vertices[5].Position = box.GetCorners()[2];
        vertices[6].Position = box.GetCorners()[5];
        vertices[7].Position = box.GetCorners()[7];
        vertices[8].Position = box.GetCorners()[2];
        vertices[9].Position = box.GetCorners()[5];
        vertices[10].Position = box.GetCorners()[0];
        vertices[11].Position = box.GetCorners()[7];

    What would I then have to do to transform this tetrahedron into an icosahedron? Similar to this image: I understand the concept, but applying it is another thing entirely for me.
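    An icosahedron is not usually derived from the tetrahedron's corners; the standard construction places its 12 vertices on three orthogonal golden rectangles. A sketch with XNA types (the 20-face index list is omitted for brevity):

        float t = (1f + (float)Math.Sqrt(5.0)) / 2f;   // golden ratio
        Vector3[] v =
        {
            new Vector3(-1,  t,  0), new Vector3( 1,  t,  0),
            new Vector3(-1, -t,  0), new Vector3( 1, -t,  0),
            new Vector3( 0, -1,  t), new Vector3( 0,  1,  t),
            new Vector3( 0, -1, -t), new Vector3( 0,  1, -t),
            new Vector3( t,  0, -1), new Vector3( t,  0,  1),
            new Vector3(-t,  0, -1), new Vector3(-t,  0,  1),
        };
        for (int i = 0; i < v.Length; i++)
            v[i].Normalize();   // project onto the unit sphere

    The 20 triangular faces are then indexed over these 12 vertices, exactly as the 4 faces above are indexed over the box corners.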

    Read the article

  • How to show a minimap in a 3d world

    - by Bubblewrap
    Got a really typical use-case here. I have a large map made up of hexagons, and at any given time only a small section of the map is visible. To provide an overview of the complete map, I want to show a small 2D representation of the map in a corner of the screen. What is the recommended approach for this in libgdx? Keep in mind the minimap must be updated when the currently visible section changes and when the map is updated. I've found SpriteBatch, but the warning label on it made me think twice: "A SpriteBatch is a pretty heavy object so you should only ever have one in your program." I'm not sure I'm supposed to use the one SpriteBatch that I can have on the minimap, and I'm also not sure how to interpret "heavy" in this context. Another thing to possibly keep in mind is that the minimap will probably be part of a larger UI... is there any way to integrate these two?
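    The usual pattern, shown here as a sketch in XNA/MonoGame terms since the other examples on this page are C# (in libgdx the FrameBuffer class plays the role of RenderTarget2D, and the same single batch draws everything): render the whole map once into a small off-screen texture, redraw it only when the map data changes, and draw that texture in a corner each frame alongside the rest of the UI.

        RenderTarget2D minimap = new RenderTarget2D(device, 128, 128);

        void RebuildMinimap()              // call only when the map changes
        {
            device.SetRenderTarget(minimap);
            DrawMapTopDown();              // hypothetical: all hexes scaled to 128x128
            device.SetRenderTarget(null);
        }

        void DrawUI(SpriteBatch batch)     // the one batch, shared with the whole UI
        {
            batch.Draw(minimap, new Rectangle(8, 8, 128, 128), Color.White);
            DrawViewOutline(batch);        // hypothetical: outline of the visible section
        }

    "Heavy" refers to the batch's internal vertex buffers, not to drawing many things with it, so sharing the one SpriteBatch between the minimap and the rest of the UI is exactly the intended use.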

    Read the article

  • Audio Panning using RtAudio

    - by user1801724
    I use the RtAudio library. I would like to implement an audio program where I can control the panning (e.g. shifting the sound from the left channel to the right channel). In my specific case, I use duplex mode (you can find an example here: duplex mode). It means that I link the microphone input to the speaker output. I searched the web, but I did not find anything useful. Should I apply a filter on the output buffer? What kind of filter? Can anyone help me? Thanks
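    Panning does not need a filter in the DSP sense, just a gain per channel. A sketch of constant-power panning on an interleaved stereo buffer (plain C# for illustration, not RtAudio-specific; in the duplex callback the same scaling would be applied while copying input frames to the output buffer):

        // pan in [-1, 1]: -1 = hard left, 0 = center, 1 = hard right
        static void Pan(float[] stereo, float pan)
        {
            double angle = (pan + 1.0) * Math.PI / 4.0;   // maps to [0, pi/2]
            float leftGain  = (float)Math.Cos(angle);
            float rightGain = (float)Math.Sin(angle);
            for (int i = 0; i < stereo.Length; i += 2)
            {
                stereo[i]     *= leftGain;    // left sample of the frame
                stereo[i + 1] *= rightGain;   // right sample of the frame
            }
        }

    The cosine/sine pair keeps perceived loudness roughly constant across the sweep, which is why it is preferred over simple linear gains.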

    Read the article

  • HLSL - Creating Shadows in 2D

    - by richard
    The way that I create shadows is by the following technique: http://www.catalinzima.com/2010/07/my-technique-for-the-shader-based-dynamic-2d-shadows/ But I have questions about the HLSL. The way that I currently do it is: I have a black and white image, where black means 'object' and white means 'nothing'. I then distort the image like in the tutorial. I do this with a pixel shader, but instead of rendering to the screen, I render to a texture and bring it back to my application. I then take this, create the shadows, and send it back to the graphics card to undo the distortion after the shadow has been added - this comes back and I have a stencil of shadow. I can put this on top of the original image and send them back to the graphics card, which then puts them on the screen. To me this is a lot of back and forth. Is there a way I can avoid this? The problem that I am having is that I need to basically go through all positions in the texture 3 times, and use the new texture every time instead of the original one. I tried to read up on passes, but I don't think I am heading in the right direction there. Help?
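    The round trips can usually be removed entirely: a render target one pass has drawn into can be sampled as a texture by the next pass without ever leaving the GPU. A hedged XNA 4.0 sketch of that ping-pong (the effect and texture names are placeholders, not the tutorial's actual code):

        RenderTarget2D distorted = new RenderTarget2D(device, size, size);
        RenderTarget2D shadows   = new RenderTarget2D(device, size, size);

        // Pass 1: distort the occluder mask into the first target
        device.SetRenderTarget(distorted);
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                          null, null, null, distortEffect);
        spriteBatch.Draw(occluderMask, Vector2.Zero, Color.White);
        spriteBatch.End();

        // Pass 2: read the previous target, build shadows, undistort
        device.SetRenderTarget(shadows);
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                          null, null, null, undistortEffect);
        spriteBatch.Draw(distorted, Vector2.Zero, Color.White);
        spriteBatch.End();

        // Final pass: back buffer, composite 'shadows' over the scene
        device.SetRenderTarget(null);

    Each "pass" here is just a full-screen draw into a fresh target while sampling the previous one; the CPU never touches the pixels.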

    Read the article

  • How do engines avoid "Phase Lock" (multiple objects in same location) in a Physics Engine?

    - by C0M37
    Let me explain phase lock first: it is when two objects of non-zero mass occupy the same space but have zero energy (no velocity). Do they bump forever with zero-velocity resolution vectors, or do they just stay locked together until an outside force interacts? In my home-brewed engine, I realized that if I loaded a character into a tree and moved them, they would signal a collision and hop back to their original spot. I suppose I could fix this by implementing impulses in the event of a collision instead of just jumping back to the last spot I was in (my implementation kind of sucks). But while I make my engine more robust, I'm just curious how most other physics engines handle this case. Do objects that start in the same spot with no movement speed just shoot out from each other in a random direction? Or do they sit there until something happens? Which option is generally the best approach?
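    Most engines neither snap back nor wait for an outside force: they run a positional-correction step that pushes overlapping bodies apart along the collision normal, proportional to inverse mass. A sketch of the usual formulation (C#/XNA types; Body and its fields are hypothetical):

        void ResolveOverlap(Body a, Body b, Vector2 normal, float depth)
        {
            const float percent = 0.8f;   // resolve most of the overlap per step
            const float slop = 0.01f;     // ignore tiny penetrations to avoid jitter
            float invMassSum = a.InvMass + b.InvMass;
            if (invMassSum <= 0f) return; // two static bodies: nothing to do
            Vector2 correction = normal * (Math.Max(depth - slop, 0f) / invMassSum * percent);
            a.Position -= correction * a.InvMass;
            b.Position += correction * b.InvMass;
        }

    For the degenerate case of identical centers (no meaningful normal), engines typically pick an arbitrary fixed direction so the pair still separates deterministically over a few frames rather than exploding or sticking.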

    Read the article

  • How to refactor and improve this XNA mouse input code?

    - by Andrew Price
    Currently I have something like this:

        public bool IsLeftMouseButtonDown()
        {
            return currentMouseState.LeftButton == ButtonState.Pressed
                && previousMouseState.LeftButton == ButtonState.Pressed;
        }

        public bool IsLeftMouseButtonPressed()
        {
            return currentMouseState.LeftButton == ButtonState.Pressed
                && previousMouseState.LeftButton == ButtonState.Released;
        }

        public bool IsLeftMouseButtonUp()
        {
            return currentMouseState.LeftButton == ButtonState.Released
                && previousMouseState.LeftButton == ButtonState.Released;
        }

        public bool IsLeftMouseButtonReleased()
        {
            return currentMouseState.LeftButton == ButtonState.Released
                && previousMouseState.LeftButton == ButtonState.Pressed;
        }

    This is fine. In fact, I kind of like it. However, I'd hate to have to repeat this same code five times (for right, middle, X1, X2). Is there any way to pass in the button I want to the function so I could have something like this?

        public bool IsMouseButtonDown(MouseButton button)
        {
            return currentMouseState.IsPressed(button) && previousMouseState.IsPressed(button);
        }

        public bool IsMouseButtonPressed(MouseButton button)
        {
            return currentMouseState.IsPressed(button) && !previousMouseState.IsPressed(button);
        }

        public bool IsMouseButtonUp(MouseButton button)
        {
            return !currentMouseState.IsPressed(button) && !previousMouseState.IsPressed(button);
        }

        public bool IsMouseButtonReleased(MouseButton button)
        {
            return !currentMouseState.IsPressed(button) && previousMouseState.IsPressed(button);
        }

    I suppose I could create some custom enumeration and switch through it in each function, but I'd like to first see if there is a built-in solution or a better way. Thanks!
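    As far as I know, XNA's MouseState has no built-in per-button lookup, so the custom enumeration plus one central switch is the usual answer; as an extension method it gives exactly the call syntax above. A sketch:

        public enum MouseButton { Left, Right, Middle, XButton1, XButton2 }

        public static class MouseStateExtensions
        {
            public static bool IsPressed(this MouseState state, MouseButton button)
            {
                // the one place that knows how buttons map to MouseState properties
                switch (button)
                {
                    case MouseButton.Left:     return state.LeftButton   == ButtonState.Pressed;
                    case MouseButton.Right:    return state.RightButton  == ButtonState.Pressed;
                    case MouseButton.Middle:   return state.MiddleButton == ButtonState.Pressed;
                    case MouseButton.XButton1: return state.XButton1     == ButtonState.Pressed;
                    case MouseButton.XButton2: return state.XButton2     == ButtonState.Pressed;
                    default:                   return false;
                }
            }
        }

    With this in place, the four parameterized methods work as written, and the switch lives in a single function instead of being repeated in each of them.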

    Read the article

  • 2D SAT How to find collision center or point or area?

    - by Felipe Cypriano
    I've just implemented collision detection using SAT, with this article as reference for my implementation. The detection is working as expected, but I need to know where the two rectangles are colliding. I need to find the center of the intersection, the black point on the image above. I've found some articles about this, but they all involve avoiding the overlap or some kind of velocity; I don't need that. I just need to put an image on top of it - like two cars crashed, so I put an image on top of the collision. Any ideas? Update: The information I have about the rectangles is the four points that represent them: the upper right, upper left, lower right and lower left coordinates. I'm trying to find an algorithm that can give me the intersection of these points.
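    One workable approach (a sketch, assuming both rectangles' corners are supplied in counter-clockwise order and SAT has already reported an overlap): clip one rectangle against the other with Sutherland-Hodgman, then average the resulting vertices to get a point for placing the crash image.

        // requires: using System.Collections.Generic; XNA's Vector2 for notation
        static float Cross(Vector2 u, Vector2 v) { return u.X * v.Y - u.Y * v.X; }

        static Vector2 Intersect(Vector2 a, Vector2 b, Vector2 p, Vector2 q)
        {
            // intersection of the infinite lines through a-b and p-q
            Vector2 d1 = b - a, d2 = q - p;
            float t = Cross(p - a, d2) / Cross(d1, d2);
            return a + t * d1;
        }

        // Sutherland-Hodgman: clip 'subject' against convex 'clip' (both CCW)
        static List<Vector2> ClipPolygon(List<Vector2> subject, List<Vector2> clip)
        {
            List<Vector2> output = new List<Vector2>(subject);
            for (int i = 0; i < clip.Count && output.Count > 0; i++)
            {
                Vector2 a = clip[i], b = clip[(i + 1) % clip.Count];
                List<Vector2> input = output;
                output = new List<Vector2>();
                for (int j = 0; j < input.Count; j++)
                {
                    Vector2 p = input[j], q = input[(j + 1) % input.Count];
                    bool pIn = Cross(b - a, p - a) >= 0;   // left of edge a-b
                    bool qIn = Cross(b - a, q - a) >= 0;
                    if (pIn) output.Add(p);
                    if (pIn != qIn) output.Add(Intersect(a, b, p, q));
                }
            }
            return output;   // the intersection polygon (empty if no overlap)
        }

        static Vector2 CollisionCenter(List<Vector2> rectA, List<Vector2> rectB)
        {
            List<Vector2> overlap = ClipPolygon(rectA, rectB);
            Vector2 sum = Vector2.Zero;
            foreach (Vector2 v in overlap) sum += v;
            return sum / overlap.Count;   // vertex average: fine for placing a sprite
        }

    The vertex average is not the exact area centroid, but for dropping an image on a crash it is typically indistinguishable and much simpler.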

    Read the article

  • Tool for creating complex paths?

    - by TerryB
    I want to create some fairly complex predefined paths for my AI sprites to follow. I'll need to use curves, splines etc to get the effect I want. Is there a drawing tool out there that will allow me to draw such curves, "mesh" them by placing lots of points along them at some defined density and then output the coordinates of all of those points for me? I could write this tool myself but hopefully one of the drawing packages can do this? Cheers!

    Read the article

  • Render To Texture Using OpenGL is not working but normal rendering works just fine

    - by Franky Rivera
    Things I initialize at the beginning of the program (I realize not all of these pertain to my issue, I just copied and pasted what I had):

        //overall initialized
        //things openGL related I initialize earlier on in the project
        glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
        glClearDepth( 1.0f );
        glEnable(GL_ALPHA_TEST);
        glEnable( GL_STENCIL_TEST );
        glEnable(GL_DEPTH_TEST);
        glDepthFunc( GL_LEQUAL );
        glEnable(GL_CULL_FACE);
        glFrontFace( GL_CCW );
        glEnable(GL_COLOR_MATERIAL);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glHint( GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST );

    We also initialize our shader programs (I added some shader program functions for definitions). This enum list is elsewhere in code; I figured it would help show you guys more about my shader compile/creation function right under this enum list:

        /*enum eSHADER_ATTRIB_LOCATION
        {
            VERTEX_ATTRIB = 0,
            NORMAL_ATTRIB = 2,
            COLOR_ATTRIB,
            COLOR2_ATTRIB,
            FOG_COORD,
            TEXTURE_COORD_ATTRIB0 = 8,
            TEXTURE_COORD_ATTRIB1,
            TEXTURE_COORD_ATTRIB2,
            TEXTURE_COORD_ATTRIB3,
            TEXTURE_COORD_ATTRIB4,
            TEXTURE_COORD_ATTRIB5,
            TEXTURE_COORD_ATTRIB6,
            TEXTURE_COORD_ATTRIB7
        }; */

        //if we fail making our shader leave
        if( !testShader.CreateShader( "SimpleShader.vp", "SimpleShader.fp", 3,
            VERTEX_ATTRIB, "vVertexPos", NORMAL_ATTRIB, "vNormal",
            TEXTURE_COORD_ATTRIB0, "vTexCoord" ) )
            return false;

        if( !testScreenShader.CreateShader( "ScreenShader.vp", "ScreenShader.fp", 3,
            VERTEX_ATTRIB, "vVertexPos", NORMAL_ATTRIB, "vNormal",
            TEXTURE_COORD_ATTRIB0, "vTexCoord" ) )
            return false;

    Shader program functions:

        bool CShaderProgram::CreateShader( const char* szVertexShaderName, const char* szFragmentShaderName, ... )
        {
            //here are our handles for the openGL shaders
            int iGLVertexShaderHandle = -1, iGLFragmentShaderHandle = -1;

            //get our shader data
            char *vData = 0, *fData = 0;
            int vLength = 0, fLength = 0;
            LoadShaderFile( szVertexShaderName, &vData, &vLength );
            LoadShaderFile( szFragmentShaderName, &fData, &fLength );

            //data
            if( !vData )
                return false;
            //data
            if( !fData )
            {
                delete[] vData;
                return false;
            }

            //create both our shader objects
            iGLVertexShaderHandle = glCreateShader( GL_VERTEX_SHADER );
            iGLFragmentShaderHandle = glCreateShader( GL_FRAGMENT_SHADER );

            //well we got this far so we have dynamic data to clean up
            //load vertex shader
            glShaderSource( iGLVertexShaderHandle, 1, (const char**)(&vData), &vLength );
            //load fragment shader
            glShaderSource( iGLFragmentShaderHandle, 1, (const char**)(&fData), &fLength );

            //we are done with our data, delete it
            delete[] vData;
            delete[] fData;

            //compile them both
            glCompileShader( iGLVertexShaderHandle );

            //get shader status
            int iShaderOk;
            glGetShaderiv( iGLVertexShaderHandle, GL_COMPILE_STATUS, &iShaderOk );
            if( iShaderOk == GL_FALSE )
            {
                char* buffer;
                //get what happened with our shader
                glGetShaderiv( iGLVertexShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk );
                buffer = new char[iShaderOk];
                glGetShaderInfoLog( iGLVertexShaderHandle, iShaderOk, NULL, buffer );
                //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName );
                MessageBoxA( NULL, buffer, szVertexShaderName, MB_OK );
                //delete our dynamic data
                free( buffer );
                glDeleteShader(iGLVertexShaderHandle);
                return false;
            }

            glCompileShader( iGLFragmentShaderHandle );
            //get shader status
            glGetShaderiv( iGLFragmentShaderHandle, GL_COMPILE_STATUS, &iShaderOk );
            if( iShaderOk == GL_FALSE )
            {
                char* buffer;
                //get what happened with our shader
                glGetShaderiv( iGLFragmentShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk );
                buffer = new char[iShaderOk];
                glGetShaderInfoLog( iGLFragmentShaderHandle, iShaderOk, NULL, buffer );
                //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName );
                MessageBoxA( NULL, buffer, szFragmentShaderName, MB_OK );
                //delete our dynamic data
                free( buffer );
                glDeleteShader(iGLFragmentShaderHandle);
                return false;
            }

            //lets check to see if the vertex shader compiled
            int iCompiled = 0;
            glGetShaderiv( iGLVertexShaderHandle, GL_COMPILE_STATUS, &iCompiled );
            if( !iCompiled )
            {
                //this shader did not compile, leave
                return false;
            }

            //lets check to see if the fragment shader compiled
            glGetShaderiv( iGLFragmentShaderHandle, GL_COMPILE_STATUS, &iCompiled );
            if( !iCompiled )
            {
                char* buffer;
                //get what happened with our shader
                glGetShaderiv( iGLFragmentShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk );
                buffer = new char[iShaderOk];
                glGetShaderInfoLog( iGLFragmentShaderHandle, iShaderOk, NULL, buffer );
                //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName );
                MessageBoxA( NULL, buffer, szFragmentShaderName, MB_OK );
                //delete our dynamic data
                free( buffer );
                glDeleteShader(iGLFragmentShaderHandle);
                return false;
            }

            //make our new shader program
            m_iShaderProgramHandle = glCreateProgram();
            glAttachShader( m_iShaderProgramHandle, iGLVertexShaderHandle );
            glAttachShader( m_iShaderProgramHandle, iGLFragmentShaderHandle );
            glLinkProgram( m_iShaderProgramHandle );

            int iLinked = 0;
            glGetProgramiv( m_iShaderProgramHandle, GL_LINK_STATUS, &iLinked );
            if( !iLinked )
            {
                //we didn't link
                return false;
            }

            //NOW LETS CREATE ALL OUR HANDLES TO OUR PROPER LIKING
            //start from this parameter
            va_list parseList;
            va_start( parseList, szFragmentShaderName );

            //read in number of variables if any
            unsigned uiNum = 0;
            uiNum = va_arg( parseList, unsigned );

            //loop through our attribute pairs
            int enumType = 0;
            for( unsigned x = 0; x < uiNum; ++x )
            {
                //specify our attribute locations
                enumType = va_arg( parseList, int );
                char* name = va_arg( parseList, char* );
                glBindAttribLocation( m_iShaderProgramHandle, enumType, name );
            }
            //end our list parsing
            va_end( parseList );

            //relink: we have custom specified our attribute locations
            glLinkProgram( m_iShaderProgramHandle );

            //fill our handles
            InitializeHandles( );

            //everything went great
            return true;
        }

        void CShaderProgram::InitializeHandles( void )
        {
            m_uihMVP = glGetUniformLocation( m_iShaderProgramHandle, "mMVP" );
            m_uihWorld = glGetUniformLocation( m_iShaderProgramHandle, "mWorld" );
            m_uihView = glGetUniformLocation( m_iShaderProgramHandle, "mView" );
            m_uihProjection = glGetUniformLocation( m_iShaderProgramHandle, "mProjection" );

            /////////////////////////////////////////////////////////////////////////////////
            //texture handles
            m_uihDiffuseMap = glGetUniformLocation( m_iShaderProgramHandle, "diffuseMap" );
            if( m_uihDiffuseMap != -1 )
            {
                //store what texture index this handle will be in the shader
                glUniform1i( m_uihDiffuseMap, RM_DIFFUSE+GL_TEXTURE0 ); // (0)+
            }
            m_uihNormalMap = glGetUniformLocation( m_iShaderProgramHandle, "normalMap" );
            if( m_uihNormalMap != -1 )
            {
                //store what texture index this handle will be in the shader
                glUniform1i( m_uihNormalMap, RM_NORMAL+GL_TEXTURE0 ); // (1)+
            }
        }

        void CShaderProgram::SetDiffuseMap( const unsigned& uihDiffuseMap )
        {
            // (0)+
            glActiveTexture( RM_DIFFUSE+GL_TEXTURE0 );
            glBindTexture( GL_TEXTURE_2D, uihDiffuseMap );
        }

        void CShaderProgram::SetNormalMap( const unsigned& uihNormalMap )
        {
            // (1)+
            glActiveTexture( RM_NORMAL+GL_TEXTURE0 );
            glBindTexture( GL_TEXTURE_2D, uihNormalMap );
        }

    My 2 test shaders (also, my math order is correct; it pertains to the matrix ordering in my math library. Once again, I've tested the basic rendering, and rendering to the screen works fine).

    Simple shader - the vertex shader looks like this:

        #version 330
        in vec3 vVertexPos;
        in vec3 vNormal;
        in vec2 vTexCoord;

        uniform mat4 mWorld;      // Model Matrix
        uniform mat4 mView;       // Camera View Matrix
        uniform mat4 mProjection; // Camera Projection Matrix

        out vec2 vTexCoordVary;   // Texture coord to the fragment program
        out vec3 vNormalColor;

        void main( void )
        {
            //pass the texture coordinate
            vTexCoordVary = vTexCoord;
            vNormalColor = vNormal;
            //calculate our model view projection matrix
            mat4 mMVP = (( mWorld * mView ) * mProjection );
            //result our position
            gl_Position = vec4( vVertexPos, 1 ) * mMVP;
        }

    The fragment shader looks like this:

        #version 330
        in vec2 vTexCoordVary;
        in vec3 vNormalColor;

        uniform sampler2D diffuseMap;
        uniform sampler2D normalMap;

        out vec4 fragColor[2];

        void main( void )
        {
            //CORRECT
            fragColor[0] = texture( normalMap, vTexCoordVary );
            fragColor[1] = vec4( vNormalColor, 1.0 );
        };

    Screen shader - the vertex shader looks like this:

        #version 330
        in vec3 vVertexPos; // This is the position of the vertex coming in
        in vec2 vTexCoord;  // This is the texture coordinate....

        out vec2 vTexCoordVary; // Texture coord to the fragment program

        void main( void )
        {
            vTexCoordVary = vTexCoord;
            //set our position
            gl_Position = vec4( vVertexPos.xyz, 1.0f );
        }

    The fragment shader looks like this:

        #version 330
        in vec2 vTexCoordVary; // Incoming "varying" texture coordinate

        uniform sampler2D diffuseMap; //the tile detail texture
        uniform sampler2D normalMap;  //the normal map from earlier

        out vec4 vTheColorOfThePixel;

        void main( void )
        {
            //CORRECT
            vTheColorOfThePixel = texture( normalMap, vTexCoordVary );
        };

    CRenderTarget class main functions - here is my render target's create function:

        bool CRenderTarget::Create( const unsigned uiNumTextures, unsigned uiWidth, unsigned uiHeight, int iInternalFormat, bool bDepthWanted )
        {
            if( uiNumTextures <= 0 )
                return false;

            //generate our variables
            glGenFramebuffers(1, &m_uifboHandle);
            // Initialize FBO
            glBindFramebuffer(GL_FRAMEBUFFER, m_uifboHandle);

            m_uiNumTextures = uiNumTextures;
            if( bDepthWanted )
                m_uiNumTextures += 1;

            m_uiTextureHandle = new unsigned int[uiNumTextures];
            glGenTextures( uiNumTextures, m_uiTextureHandle );

            for( unsigned x = 0; x < uiNumTextures-1; ++x )
            {
                glBindTexture( GL_TEXTURE_2D, m_uiTextureHandle[x]);
                // Reserve space for our 2D render target
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
                glTexImage2D(GL_TEXTURE_2D, 0, iInternalFormat, uiWidth, uiHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
                glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + x, GL_TEXTURE_2D, m_uiTextureHandle[x], 0);
            }

            //if we need one for depth testing
            if( bDepthWanted )
            {
                /*glFramebufferTexture2D(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_uiTextureHandle[uiNumTextures-1], 0);
                glFramebufferTexture2D(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT, GL_TEXTURE_2D, m_uiTextureHandle[uiNumTextures-1], 0);*/
                // Must attach texture to framebuffer. Has stencil and depth
                glBindRenderbuffer(GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]);
                glRenderbufferStorage(GL_RENDERBUFFER, /*GL_DEPTH_STENCIL*/GL_DEPTH24_STENCIL8, TEXTURE_WIDTH, TEXTURE_HEIGHT );
                glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]);
                glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]);
            }

            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            //everything went fine
            return true;
        }

        void CRenderTarget::Bind( const int& iTargetAttachmentLoc, const unsigned& uiWhichTexture, const bool bBindFrameBuffer )
        {
            if( bBindFrameBuffer )
                glBindFramebuffer( GL_FRAMEBUFFER, m_uifboHandle );

            if( uiWhichTexture < m_uiNumTextures )
                glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + iTargetAttachmentLoc, m_uiTextureHandle[uiWhichTexture], 0);
        }

        void CRenderTarget::UnBind( void )
        {
            //default our binding
            glBindFramebuffer( GL_FRAMEBUFFER, 0 );
        }

    This is all in a test project, so here's my straightforward rendering function for testing. This render function does the basic rendering steps. Keep in mind I have already tested my textures, and I have already tested the box that's being rendered; all basic rendering works fine. It's just when I try to render to a texture and then display it on a render surface that it does not work. Also, I have tested my render surface: it is bound exactly to the screen coordinate space.

        void TestRenderSteps( void )
        {
            //Clear the color and the depth
            glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
            glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

            //bind the shader program
            glUseProgram( testShader.m_iShaderProgramHandle );

            //1) grab the vertex buffer related to our rendering
            glBindBuffer( GL_ARRAY_BUFFER, CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().GetBufferHandle() );
            //2) how our stream will be split here ( 4 bytes position, ..ext )
            CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().MapVertexStride();
            //3) set the index buffer if needed
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, CIndexBuffer::GetInstance()->GetBufferHandle() );

            //send the needed information into the shader
            testShader.SetWorldMatrix( boxPosition );
            testShader.SetViewMatrix( Static_Camera.GetView( ) );
            testShader.SetProjectionMatrix( Static_Camera.GetProjection( ) );
            testShader.SetDiffuseMap( iTextureID );
            testShader.SetNormalMap( iTextureID2 );

            GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
            glDrawBuffers(2, buffers);

            //bind to our render target
            //RM_DIFFUSE, RM_NORMAL are enums (0 && 1)
            renderTarget.Bind( RM_DIFFUSE, 1, true );
            renderTarget.Bind( RM_NORMAL, 1, false ); //false because buffer is already bound

            //i clear here just to clear the texture to make it a default value of white
            //by doing this i can see if what im rendering to my screen is just drawing to the screen
            //or if its my render target defaulted
            glClearColor( 1.0f, 1.0f, 1.0f, 1.0f );
            glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

            //i have this box object which i draw
            testBox.Draw();
            //the draw call looks like this
            //my normal rendering works just fine so i know this draw is fine
            // glDrawElementsBaseVertex( m_sides[x].GetPrimitiveType(),
            //                           m_sides[x].GetPrimitiveCount() * 3,
            //                           GL_UNSIGNED_INT,
            //                           BUFFER_OFFSET(sizeof(unsigned int) * m_sides[x].GetStartIndex()),
            //                           m_sides[x].GetStartVertex( ) );

            //we unbind the target back to default
            renderTarget.UnBind();

            //i stop mapping my vertex format
            CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().UnMapVertexStride();

            //i go back to default in using no shader program
            glUseProgram( 0 );

            //now that everything is drawn to the textures
            //lets draw our screen surface and pass it our 2 filled out textures
            //NOW RENDER THE TEXTURES WE COLLECTED TO THE SCREEN QUAD

            //bind the shader program
            glUseProgram( testScreenShader.m_iShaderProgramHandle );

            //1) grab the vertex buffer related to our rendering
            glBindBuffer( GL_ARRAY_BUFFER, CVertexBufferManager::GetInstance()->GetPositionTexBuffer().GetBufferHandle() );
            //2) how our stream will be split here
            CVertexBufferManager::GetInstance()->GetPositionTexBuffer().MapVertexStride();
            //3) set the index buffer if needed
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, CIndexBuffer::GetInstance()->GetBufferHandle() );

            //pass our 2 filled out textures (in the shader im just using the diffuse;
            //i wanted to see if i was rendering anything before i started getting into other techniques)
            testScreenShader.SetDiffuseMap( renderTarget.GetTextureHandle(0) ); //SetDiffuseMap definition in shader program class
            testScreenShader.SetNormalMap( renderTarget.GetTextureHandle(1) );  //SetNormalMap definition in shader program class

            //DO the draw call drawing our screen rectangle
            glDrawElementsBaseVertex( m_ScreenRect.GetPrimitiveType(),
                                      m_ScreenRect.GetPrimitiveCount() * 3,
                                      GL_UNSIGNED_INT,
                                      BUFFER_OFFSET(sizeof(unsigned int) * m_ScreenRect.GetStartIndex()),
                                      m_ScreenRect.GetStartVertex( ) );

            //unbind our vertex mapping
            CVertexBufferManager::GetInstance()->GetPositionTexBuffer().UnMapVertexStride();

            //default to no shader program
            glUseProgram( 0 );
        }

    Last words:

    1) I can render my box just fine.
    2) I can render my screen rect just fine.
    3) I cannot render my box into a texture and then display it in my screen rect.
    4) This entire project is just a test project I made to test different rendering practices, so excuse any "ugly-ish", unclean code. This was made just on a fly run-through when I was trying new test cases.
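    One convention worth double-checking in setups like this (offered as a guess, not a confirmed diagnosis, and shown as an OpenTK-style C# sketch since the exact code above should stay as posted): glUniform1i on a sampler expects the plain texture unit index (0, 1, ...), while glActiveTexture expects the GL_TEXTURE0-based enum. Mixing the two silently samples the wrong unit.

        int unit = 1;                                   // e.g. the normal map slot
        GL.ActiveTexture(TextureUnit.Texture0 + unit);  // GL_TEXTURE0 + 1
        GL.BindTexture(TextureTarget.Texture2D, normalMapHandle);
        GL.Uniform1(normalMapLocation, unit);           // plain 1, not GL_TEXTURE0 + 1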

    Read the article

  • Modern Shader Book?

    - by Michael Stum
    I'm interested in learning about Shaders: What are they, when/for what would I use them, and how to use them. (Specifically I'm interested in Water and Bloom effects, but I know close to 0 about Shaders, so I need a general introduction). I saw a lot of books that are a couple of years old, so I don't know if they still apply. I'm targeting XNA 4.0 at the moment (which I believe means HLSL Shaders for Shader Model 4.0), but anything that generally targets DirectX 11 and OpenGL 4 is helpful I guess.

    Read the article

  • Robust line of sight test on the inside of a polygon with tolerance

    - by David Gouveia
    Foreword: This is a follow-up to this question and the main problem I'm trying to solve. My current solution is a hack which involves inflating the polygon and doing most calculations on the inflated polygon instead. My goal is to remove this step completely and correctly solve the problem with calculations only. Problem: Given a concave polygon, and treating all of its edges as if they were walls in a level, determine whether two points A and B are in line of sight of each other, while accounting for some degree of floating point error. I'm currently basing my solution on a series of line-segment intersection tests. In other words: If any of the end points are outside the polygon, they are not in line of sight. If both end points are inside the polygon, and the line segment from A to B crosses any of the edges of the polygon, then they are not in line of sight. If both end points are inside the polygon, and the line segment from A to B does not cross any of the edges of the polygon, then they are in line of sight. But the problem is dealing correctly with all the edge cases. In particular, it must be able to deal with all the situations depicted below, where red lines are examples that should be rejected, and green lines are examples that should be accepted. I probably missed a few other situations, such as when the line segment from A to B is collinear with an edge but one of the end points is outside the polygon. One point of particular interest is the difference between 1 and 9. In both cases, both end points are vertices of the polygon, and there are no edges being intersected, but 1 should be rejected while 9 should be accepted. How do I distinguish these two? I could check some middle point within the segment to see if it falls inside or not, but it's easy to come up with situations in which that would fail. Point 7 was also pretty tricky and I had to treat it as a special case, which checks directly whether the two points are adjacent vertices of the polygon. But there are also other chances of line segments being collinear with the edges of the polygon, and I'm still not entirely sure how I should handle those cases. Is there any well-known solution to this problem?
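    As a building block, the epsilon handling is usually concentrated in one tolerance-aware segment-crossing test; a sketch (C#/XNA types; the epsilon is an assumption to tune, and the parallel/collinear branch still needs the special-case reasoning discussed above):

        const float Eps = 1e-5f;

        static bool SegmentsCross(Vector2 a, Vector2 b, Vector2 p, Vector2 q)
        {
            Vector2 d1 = b - a, d2 = q - p;
            float denom = d1.X * d2.Y - d1.Y * d2.X;
            if (Math.Abs(denom) < Eps)
                return false;   // parallel or collinear: handle as its own case
            float t = ((p.X - a.X) * d2.Y - (p.Y - a.Y) * d2.X) / denom;
            float s = ((p.X - a.X) * d1.Y - (p.Y - a.Y) * d1.X) / denom;
            // report strict interior crossings only; endpoint touches (shared
            // vertices, grazing contacts) fall inside the tolerance band
            return t > Eps && t < 1f - Eps && s > Eps && s < 1f - Eps;
        }

    Classifying endpoint touches separately from interior crossings is also what makes cases like 1 vs. 9 tractable: once edge crossings are strict, the remaining ambiguity reduces to whether the segment's interior lies inside the polygon.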

    Read the article

  • Unity3D: default parameters in C# script

    - by Heisenbug
    According to this thread, it seems that default parameters aren't supported by C# scripts in the Unity3D environment. Declaring an optional parameter in a C# script makes the Mono IDE complain about it:

        void Foo(int first, int second = 10) // this line is marked as wrong inside the Mono IDE

    Anyway, if I ignore the error message from Mono and run the script in Unity, it works without any error being reported in the Unity console. Could anyone clarify this issue a little bit? Particularly: Are default parameters allowed inside C# scripts? If yes, are they supported on all platforms? Why does Mono complain about them if they actually work?
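    For what it's worth, if a given toolchain rejects default parameters (a C# 4 language feature, so older parsers may flag it even when the runtime copes), plain method overloading expresses the same call sites and compiles everywhere. A minimal sketch:

        // Overload workaround: callers can still write Foo(5) or Foo(5, 20)
        void Foo(int first) { Foo(first, 10); } // forwards the old default value
        void Foo(int first, int second)
        {
            // original body goes here
        }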

    Read the article

  • How do I generate terrain like that of Scorched Earth?

    - by alex
    Hi, I'm a web developer and I am keen to start writing my own games. For familiarity, I've chosen JavaScript and the canvas element for now. I want to generate some terrain like that in Scorched Earth. My first attempt made me realise I couldn't just randomise the y value; there had to be some sanity in the peaks and troughs. I have Googled around a bit, but either I can't find something simple enough for me or I am using the wrong keywords. Can you please show me what sort of algorithm I would use to generate something like the example, keeping in mind that I am completely new to games programming (since making Breakout in 2003 with Visual Basic, anyway)?
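    One common answer is 1D midpoint displacement: start with two endpoints, repeatedly set each midpoint to the average of its neighbours plus a shrinking random offset. A sketch in C# (the same loop translates line-for-line to JavaScript/canvas; this is a standard technique, not necessarily what Scorched Earth itself used):

        static float[] GenerateHeights(int size, float roughness, Random rng)
        {
            // size must be a power of two; heights are in [roughly] 0..1
            float[] h = new float[size + 1];
            h[0] = 0.5f;
            h[size] = 0.5f;
            float spread = 0.5f;
            for (int step = size / 2; step >= 1; step /= 2)
            {
                for (int i = step; i < size; i += step * 2)
                {
                    float avg = (h[i - step] + h[i + step]) * 0.5f;
                    h[i] = avg + ((float)rng.NextDouble() * 2f - 1f) * spread;
                }
                spread *= roughness;   // e.g. 0.5 halves the jitter each pass
            }
            return h;                  // scale by the canvas height when drawing
        }

    The roughness parameter is what gives the "sanity" you noticed was missing: large offsets only happen at large scales, so neighbouring columns stay coherent.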

    Read the article

  • Translating an Object to a certain Vector 3 in OpenGL and Java LWJGL

    - by aliasmk
    So after almost two hours, I got the hang of using glTranslated() (with Java and LWJGL). If I am correct, applying glTranslated to an object moves that object by x,y,z relative to the previously moved object. I believe the correct terms for this are local vs. global, global being the one I want. I was wondering if there was a way to translate an object to a specific XYZ position, or relative to the origin. Thanks! (Code or other details can be supplied if it helps, just let me know. Also, sorry if this is a noob question; I'm very new to OpenGL.)
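    The usual pattern is to reset the modelview matrix, or to wrap each object's transform in a push/pop pair, so every translation is measured from the origin instead of accumulating. A sketch in OpenTK-style C# (these are the same fixed-function GL calls LWJGL exposes as glPushMatrix, glTranslated, etc.):

        GL.MatrixMode(MatrixMode.Modelview);
        GL.LoadIdentity();            // back to the origin (apply the camera here, if any)
        foreach (var obj in objects)  // hypothetical scene list
        {
            GL.PushMatrix();          // save the current (global) frame
            GL.Translate(obj.X, obj.Y, obj.Z); // absolute position, not cumulative
            obj.Draw();
            GL.PopMatrix();           // restore, so the next object starts fresh
        }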

    Read the article

  • Parent variable inheritance methods Unity3D/C#

    - by Timothy Williams
    I'm creating a system where there is a base "Hero" class and each hero inherits from that with their own stats and abilities. What I'm wondering is: how could I read a variable from one of the child scripts in the parent script (something like maxMP = MP), or call a function in a parent class that is specified in each child class (in the parent's update, alarms() is called; in the child classes, alarms() is specified to do something)? Is this possible at all? Or not? Thanks.
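    Both are possible, and this is exactly what protected fields and virtual methods are for. A minimal sketch, assuming Unity's MonoBehaviour (class and field names are illustrative):

        using UnityEngine;

        public class Hero : MonoBehaviour
        {
            protected float maxMP;                  // parent field, settable by children
            protected virtual void Alarms() { }     // children override this
            void Update() { Alarms(); }             // parent calls the child's version
        }

        public class FireMage : Hero                // hypothetical child class
        {
            void Start() { maxMP = 100f; }          // child fills in the parent's field
            protected override void Alarms()
            {
                // hero-specific behaviour runs from the parent's Update
            }
        }

    At runtime the call in Update dispatches to whichever override the concrete hero provides, so the base class never needs to know which child it is.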

    Read the article

  • Orthographic unit translation mismatch on grid (e.g. 64 pixels translates incorrectly)

    - by Justin Van Horne
    I am looking for some insight into a small problem with unit translations on a grid.

    Setup:

        512x448 window
        64x64 grid
        gl_Position = projection * world * position;
        projection is defined by ortho(-w/2.0f, w/2.0f, -h/2.0f, h/2.0f); - a textbook orthographic projection function
        world is defined by a fixed camera position at (0, 0)
        position is defined by the sprite's position

    Problem: In the screenshot below (1:1 scaling) the grid spacing is 64x64 and I am drawing the unit at (64, 64); however, the unit draws roughly ~10px in the wrong position. I've tried uniform window dimensions to prevent any distortion of the pixel size, but now I am a bit lost as to the proper way of providing a 1:1 pixel-to-world-unit projection. Anyhow, here are some quick images to aid in the problem. I decided to super-impose a bunch of the sprites at what the engine believes are 64px offsets. When this seemed out of place, I went about and did the base case of 1 unit, which seemed to line up as expected. The yellow shows a 1px difference in the movement.

    Vertices: It would appear that the vertices going into the vertex shader are correct. For example, in reference to the first image, the data looks like this in the VBO (before -> after the translation):

             x     y           x     y
        -------------------------------
        tl | 0.0   24.0   ->  64.0  24.0
        bl | 0.0   0.0    ->  64.0  0.0
        tr | 16.0  0.0    ->  80.0  0.0
        br | 16.0  24.0   ->  80.0  24.0

    With that said, all I am left to believe is that I am munging up my actual projection. So, I am looking for any insight into maintaining a 1:1 pixel-to-world-unit projection.
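    One common culprit (a guess, not a diagnosis) is sub-pixel placement: with a centered ortho volume, world coordinates can land between pixel centers and accumulate rounding once sprites move. A hedged sketch of a pixel-exact setup in XNA-style terms, building the volume from the real viewport and snapping the camera to whole pixels:

        Matrix projection = Matrix.CreateOrthographicOffCenter(
            0, viewportWidth,       // left, right: one world unit = one pixel
            viewportHeight, 0,      // bottom, top (y grows downward)
            0, 1);
        Matrix world = Matrix.CreateTranslation(
            (float)Math.Floor(cameraX),
            (float)Math.Floor(cameraY), 0);

    With this convention, a sprite drawn at x = 64 lands exactly 64 pixels from the left edge, so a 64-unit translation is a 64-pixel translation by construction.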

    Read the article

  • GLSL Shader Texture Performance

    - by Austin
    I currently have a project that renders OpenGL video using a vertex and fragment shader. The shaders work fine as-is, but in trying to add in texturing, I am running into performance issues and can't figure out why. Before adding texturing, my program ran just fine and loaded my CPU between 0%-4%. When adding texturing (specifically textures AND color - noted by the comment below), my CPU is 100% loaded. The only code I have added is the relevant texturing code to the shader, and the "glBindTexture()" calls to the rendering code. Here are my shaders and the relevant rendering code.

    Vertex shader:

        #version 150
        uniform mat4 mvMatrix;
        uniform mat4 mvpMatrix;
        uniform mat3 normalMatrix;
        uniform vec4 lightPosition;
        uniform float diffuseValue;

        layout(location = 0) in vec3 vertex;
        layout(location = 1) in vec3 color;
        layout(location = 2) in vec3 normal;
        layout(location = 3) in vec2 texCoord;

        smooth out VertData
        {
            vec3 color;
            vec3 normal;
            vec3 toLight;
            float diffuseValue;
            vec2 texCoord;
        } VertOut;

        void main(void)
        {
            gl_Position = mvpMatrix * vec4(vertex, 1.0);
            VertOut.normal = normalize(normalMatrix * normal);
            VertOut.toLight = normalize(vec3(mvMatrix * lightPosition - gl_Position));
            VertOut.color = color;
            VertOut.diffuseValue = diffuseValue;
            VertOut.texCoord = texCoord;
        }

    Fragment shader:

        #version 150
        smooth in VertData
        {
            vec3 color;
            vec3 normal;
            vec3 toLight;
            float diffuseValue;
            vec2 texCoord;
        } VertIn;

        uniform sampler2D tex;

        layout(location = 0) out vec3 colorOut;

        void main(void)
        {
            float diffuseComp = max(dot(normalize(VertIn.normal), normalize(VertIn.toLight)), 0.0);
            vec4 color = texture2D(tex, VertIn.texCoord);
            colorOut = color.rgb * diffuseComp * VertIn.diffuseValue + color.rgb * (1 - VertIn.diffuseValue);
            // FOLLOWING LINE CAUSES PERFORMANCE ISSUES
            colorOut *= VertIn.color;
        }

    Relevant rendering code:

        // 3 textures have been successfully pre-loaded, and can be used
        // texture[0] is a 1x1 white texture to effectively turn off texturing
        glUseProgram(program);

        // Draw squares
        glBindTexture(GL_TEXTURE_2D, texture[1]);
        // Set attributes, uniforms, etc
        glDrawArrays(GL_QUADS, 0, 6*4);

        // Draw triangles
        glBindTexture(GL_TEXTURE_2D, texture[0]);
        // Set attributes, uniforms, etc
        glDrawArrays(GL_TRIANGLES, 0, 3*4);

        // Draw reference planes
        glBindTexture(GL_TEXTURE_2D, texture[0]);
        // Set attributes, uniforms, etc
        glDrawArrays(GL_LINES, 0, 4*81*2);

        // Draw terrain
        glBindTexture(GL_TEXTURE_2D, texture[2]);
        // Set attributes, uniforms, etc
        glDrawArrays(GL_TRIANGLES, 0, 501*501*6);

        // Release
        glBindTexture(GL_TEXTURE_2D, 0);
        glUseProgram(0);

    Any help is greatly appreciated!

    Read the article

  • Why does my sprite glitch when moving? [closed]

    - by rphello101
    Using Slick2D/Java, I'm using the mouse to rotate a sprite and WASD to move it (A and D are used to strafe). I finally got the directional keys and rotation to work in sync, but I'm having problems with sporadic movement. It seems that the move speed is not always set to the value I have it at; sometimes the sprite will just shoot across the screen. Furthermore, it seems that at 0 degrees, when the left key is pressed, the sprite moves backwards, not to the left. There also seems to be quite a bit of glitching when two keys are pressed, like left and up. Anyone see anything obvious? Here is the rotation code:

        int mX = Mouse.getX();
        int mY = HEIGHT - Mouse.getY();
        int pX = sprite.x + sprite.image.getWidth()/2;
        int pY = sprite.y + sprite.image.getHeight()/2;
        double mAng;
        if(mX != pX){
            mAng = Math.toDegrees(Math.atan2(mY - pY, mX - pX));
            if(mAng == 0 && mX <= pX)
                mAng = 180;
        }
        else{
            if(mY > pY) mAng = 90;
            else mAng = 270;
        }
        sprite.angle = mAng;
        sprite.image.setRotation((float) mAng);

    Movement code:

        Input input = gc.getInput();
        Vector2f direction = new Vector2f();
        Vector2f velocity = new Vector2f();
        Vector2f left;
        Vector2f right;
        direction.x = (float) Math.cos(Math.toRadians(sprite.angle));
        direction.y = (float) Math.sin(Math.toRadians(sprite.angle));
        if(direction.length() > 0)
            direction = direction.normalise();
        left = new Vector2f(-direction.y, direction.x);
        right = new Vector2f(direction.y, -direction.x);
        velocity.x = (float) (direction.x * sprite.moveSpeed);
        velocity.y = (float) (direction.y * sprite.moveSpeed);
        if(input.isKeyDown(sprite.up)){
            sprite.x += velocity.x*delta;
            sprite.y += velocity.y*delta;
        }
        if (input.isKeyDown(sprite.down)){
            sprite.x -= velocity.x*delta;
            sprite.y -= velocity.y*delta;
        }
        if (input.isKeyDown(sprite.left)){
            sprite.x += left.x * sprite.moveSpeed * delta;
            sprite.y += left.y * sprite.moveSpeed * delta;
        }
        if (input.isKeyDown(sprite.right)){
            sprite.x += right.x * sprite.moveSpeed * delta;
            sprite.y += right.y * sprite.moveSpeed * delta;
        }
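    One pattern that tends to cure both symptoms (a sketch, not a confirmed diagnosis, written with XNA's Vector2 for notation; it maps one-to-one onto Slick2D's Vector2f): accumulate all key input into a single direction vector, normalize once, and apply speed and delta once. That way two simultaneous keys can never stack speed, and every frame's move is tied to delta.

        Vector2 forward = new Vector2((float)Math.Cos(angleRad), (float)Math.Sin(angleRad));
        Vector2 strafe  = new Vector2(-forward.Y, forward.X);   // perpendicular to facing
        Vector2 move = Vector2.Zero;
        if (upHeld)    move += forward;
        if (downHeld)  move -= forward;
        if (leftHeld)  move -= strafe;   // flip this sign if "left" comes out reversed
        if (rightHeld) move += strafe;
        if (move != Vector2.Zero) move.Normalize();  // diagonals no longer move faster
        position += move * moveSpeed * delta;

    Note that in a y-down screen coordinate system, (-dir.y, dir.x) points to the sprite's right, not its left, which is consistent with the backwards-feeling strafe at 0 degrees.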

    Read the article

  • Rotating a sprite and shooting bullets from the end of a cannon

    - by Alberto
    Hi all, I have a problem in my AndEngine code. When I touch the screen, I need to shoot a bullet from the cannon (in the same direction as the cannon). The cannon rotates perfectly, but when I touch the screen, the bullet is not created at the end of the barrel. This is my code:

        private void shootProjectile(final float pX, final float pY){
            int offX = (int) (pX - canon.getSceneCenterCoordinates()[0]);
            int offY = (int) (pY - canon.getSceneCenterCoordinates()[1]);
            if (offX <= 0)
                return;
            if (offY >= 0)
                return;
            double X = canon.getX() + canon.getWidth()*0.5;
            double Y = canon.getY() + canon.getHeight()*0.5;
            final Sprite projectile;
            projectile = new Sprite( (float) X, (float) Y, mProjectileTextureRegion, this.getVertexBufferObjectManager() );
            mMainScene.attachChild(projectile);
            int realX = (int) (mCamera.getWidth() + projectile.getWidth()/2.0f);
            float ratio = (float) offY / (float) offX;
            int realY = (int) ((realX*ratio) + projectile.getY());
            int offRealX = (int) (realX - projectile.getX());
            int offRealY = (int) (realY - projectile.getY());
            float length = (float) Math.sqrt((offRealX*offRealX) + (offRealY*offRealY));
            float velocity = (float) 480.0f/1.0f;
            float realMoveDuration = length/velocity;
            MoveModifier modifier = new MoveModifier(realMoveDuration, projectile.getX(), realX, projectile.getY(), realY);
            projectile.registerEntityModifier(modifier);
        }

        @Override
        public boolean onSceneTouchEvent(Scene pScene, TouchEvent pSceneTouchEvent) {
            if (pSceneTouchEvent.getAction() == TouchEvent.ACTION_MOVE){
                double dx = pSceneTouchEvent.getX() - canon.getSceneCenterCoordinates()[0];
                double dy = pSceneTouchEvent.getY() - canon.getSceneCenterCoordinates()[1];
                double Radius = Math.atan2(dy, dx);
                double Angle = Radius * 180 / Math.PI;
                canon.setRotation((float)Angle);
                return true;
            }
            else if (pSceneTouchEvent.getAction() == TouchEvent.ACTION_DOWN){
                final float touchX = pSceneTouchEvent.getX();
                final float touchY = pSceneTouchEvent.getY();
                double dx = pSceneTouchEvent.getX() - canon.getSceneCenterCoordinates()[0];
                double dy = pSceneTouchEvent.getY() - canon.getSceneCenterCoordinates()[1];
                double Radius = Math.atan2(dy, dx);
                double Angle = Radius * 180 / Math.PI;
                canon.setRotation((float)Angle);
                shootProjectile(touchX, touchY);
            }
            return false;
        }

    Does anyone know how to calculate the coordinates (X, Y) of the end of the barrel, so the bullet can be drawn there?
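    The barrel tip falls out of basic trigonometry: start at the cannon's rotation center and move along its facing direction by the barrel's length. A sketch (C# with XNA types for notation; the cannon fields and barrelLength are stand-ins for the AndEngine sprite values):

        float rad = MathHelper.ToRadians(cannonAngleDegrees);  // the angle set via setRotation
        Vector2 center = new Vector2(cannonX + cannonWidth / 2f,
                                     cannonY + cannonHeight / 2f);
        Vector2 tip = center + new Vector2((float)Math.Cos(rad),
                                           (float)Math.Sin(rad)) * barrelLength;
        // spawn the projectile at 'tip' (minus half the projectile's size, if the
        // engine anchors sprites at the top-left corner rather than the center)

    The same (cos, sin) direction can also serve as the bullet's travel vector, which keeps the shot aligned with the barrel instead of with the raw touch offset.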

    Read the article

  • Animate from end frame of one animation to end frame of another Unity3d/C#

    - by Timothy Williams
    So I have two legacy FBX animations: Animation A and Animation B. What I'm looking to do is to be able to fade from A to B regardless of the current frame A is on. Using animation.CrossFade() will play A in reverse until it reaches frame 0, then play B forward. What I want instead is to blend from the current frame of A to the end frame of B - probably via some sort of lerp between the facial position in A and the facial position in the last frame of B. Does anyone know how I might be able to accomplish this? Either via a built-in function or potentially lerping of some sort?
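    One avenue worth trying (a hedged sketch using Unity's legacy animation API, not a verified recipe): park B on its last frame on a higher layer with its playback speed zeroed, then fade its weight in. A holds its current pose underneath while B's end pose takes over.

        AnimationState a = animation["A"]; // clip names are placeholders
        AnimationState b = animation["B"];
        b.layer = 1;                    // B overrides A as its weight rises
        b.time = b.length;              // jump B to its end frame
        b.speed = 0f;                   // hold that pose instead of playing
        animation.Blend("B", 1f, 0.5f); // ramp B's weight to 1 over half a second

    Whether this produces an acceptable facial blend depends on how the clips were authored, so treat it as a starting point rather than a drop-in answer.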

    Read the article

  • Colorize with a given color a texture

    - by Pacha
    I have a texture and I want to "colorize" it with a given color, let's say cyan (#00ffff) or purple (#800080). What I want to do is get all the pixel values from the texture, remove the color, keep the "brightness" and "saturation", and apply them to the desired color. There is a tool in GIMP that does this, called Colorize (Colors -> Colorize... while editing); I made an example below. This will all be done in a shader (GLSL), although this is probably a general algorithm.
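    The general algorithm is: compute each pixel's luminance, then rebuild the pixel in HSL using the target color's hue and saturation with that luminance. A sketch in plain C# (the asker wants GLSL, but the math ports directly; the luminance weights are the common Rec. 709 ones):

        // hue in degrees [0, 360), sat in [0, 1]; rgb components in [0, 1]
        static Vector3 Colorize(Vector3 rgb, float hue, float sat)
        {
            float lum = 0.2126f * rgb.X + 0.7152f * rgb.Y + 0.0722f * rgb.Z;
            return HslToRgb(hue, sat, lum);   // keep brightness, replace the color
        }

        static Vector3 HslToRgb(float h, float s, float l)
        {
            float c = (1f - Math.Abs(2f * l - 1f)) * s;        // chroma
            float hp = h / 60f;
            float x = c * (1f - Math.Abs(hp % 2f - 1f));
            float r = 0, g = 0, b = 0;
            if      (hp < 1) { r = c; g = x; }
            else if (hp < 2) { r = x; g = c; }
            else if (hp < 3) { g = c; b = x; }
            else if (hp < 4) { g = x; b = c; }
            else if (hp < 5) { r = x; b = c; }
            else             { r = c; b = x; }
            float m = l - c / 2f;
            return new Vector3(r + m, g + m, b + m);
        }

    In the fragment shader this becomes a luminance dot product plus the same HSL-to-RGB branch, with the target hue and saturation passed in as uniforms.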

    Read the article
