Search Results

Search found 2086 results on 84 pages for 'pixel shader'.

Page 18/84 | < Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • How do I simplify a 2D game grid for level management while keeping its per-pixel features?

    - by Eric Thoma
    (I cross-posted this from Stack Overflow, as this seems to be a more appropriate forum. I've looked around a little here and did not find an answer, so I hope this is not a recurring question.) This is a question about 2D world design. I am playing around with creating a 2D bird's-eye-view shooter game, and I am looking to make the game sleek and advanced. I hope to be able to write physics so projectiles have momentum and knock-down properties. I am immediately running into the problem of world design. I need a way to have level files that store everything about a level. This is easiest with just a grid of objects. But there are thin walls and other objects that don't seem to fit into a traditional cell of a grid. I want to be able to fit all of these together so I can streamline level design, so that I don't have to put in the exact pixel-specific start and end of a wall. There doesn't seem to be an obvious translation from level file to game without forcing myself into a pacman-like scenario, meaning one where the game feels boxy and discrete; there is a contrast between the (relatively) smoothly moving characters and the finite jumps in a grid. I would appreciate an answer that describes implementation options or points me to resources that do. I would also appreciate references to sites that teach game design. The language I am using is Java (although I would love to use C or C++, but I can never find convenient resources in those languages). Thank you for any answers. Please leave any questions in the space below; I will be able to answer them later tonight (28th Nov).
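
    One common representation (an assumption on my part, not from any answer here): keep the coarse grid in the level file, but let each cell carry thin-wall flags on its shared edges, while entities keep continuous float positions and only query the grid for collisions. A C++ sketch (the question targets Java, but the idea maps over directly; all names are illustrative):

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // One cell stores its floor/content plus flags for thin walls on its
        // north and west edges; a wall "between" two cells lives on exactly
        // one of them, so the grid stays uniform and walls stay paper-thin.
        struct Cell {
            uint8_t tile = 0;        // index into a tile/content table
            bool wallNorth = false;  // thin wall along this cell's top edge
            bool wallWest  = false;  // thin wall along this cell's left edge
        };

        class Level {
        public:
            Level(int w, int h) : width(w), height(h),
                cells(static_cast<std::size_t>(w) * h) {}

            Cell& at(int x, int y) {
                return cells[static_cast<std::size_t>(y) * width + x];
            }

            // Entities keep continuous float coordinates; the grid is only
            // consulted for lookups, so movement stays smooth, not cell-snapped.
            Cell& atWorld(float wx, float wy, float cellSize) {
                return at(static_cast<int>(wx / cellSize),
                          static_cast<int>(wy / cellSize));
            }
        private:
            int width, height;
            std::vector<Cell> cells;
        };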

    Read the article

  • Xcode - Drawing Pixels

    - by Brett
    Hi guys, I am trying to draw individual pixels in Xcode, to be output on the iPhone. I do not know any OpenGL or Quartz coding, but I do know a bit about Core Graphics. I was thinking about drawing small rectangles with a width and height of one, but I do not know how to implement this in code or how to get it to show in the view. Any help is greatly appreciated. Thanks, Brett

    Read the article

  • I want the Ruby code for the PHP code I have given inside, please help me out

    - by Arpit Vaishnav
        <?php
        // amcharts.com export to image utility

        // set image type (gif/png/jpeg)
        $imgtype = 'jpeg';

        // set image quality (from 0 to 100, not applicable to gif)
        $imgquality = 100;

        // get data from $_POST or $_GET ?
        $data = &$_POST;

        // get image dimensions
        $width = (int) $data['width'];
        $height = (int) $data['height'];

        // create image object
        $img = imagecreatetruecolor($width, $height);

        // populate image with pixels
        for ($y = 0; $y < $height; $y++) {
            // innitialize
            $x = 0;
            // get row data
            $row = explode(',', $data['r'.$y]);
            // place row pixels
            $cnt = sizeof($row);
            for ($r = 0; $r < $cnt; $r++) {
                // get pixel(s) data
                $pixel = explode(':', $row[$r]);
                // get color
                $pixel[0] = str_pad($pixel[0], 6, '0', STR_PAD_LEFT);
                $cr = hexdec(substr($pixel[0], 0, 2));
                $cg = hexdec(substr($pixel[0], 2, 2));
                $cb = hexdec(substr($pixel[0], 4, 2));
                // allocate color
                $color = imagecolorallocate($img, $cr, $cg, $cb);
                // place repeating pixels
                $repeat = isset($pixel[1]) ? (int) $pixel[1] : 1;
                for ($c = 0; $c < $repeat; $c++) {
                    // place pixel
                    imagesetpixel($img, $x, $y, $color);
                    // iterate column
                    $x++;
                }
            }
        }

        // set proper content type
        header('Content-type: image/'.$imgtype);
        header('Content-Disposition: attachment; filename="chart.'.$imgtype.'"');

        // stream image
        $function = 'image'.$imgtype;
        if ($imgtype == 'gif') {
            $function($img);
        } else {
            $function($img, null, $imgquality);
        }

        // destroy
        imagedestroy($img);
        ?>

    Read the article

  • Dynamically pixelate an html image element

    - by Chris Armstrong
    I'm looking to take an image on a webpage and then use JavaScript (or whatever would be best suited) to dynamically 'pixelate' it (e.g. into 20px squares). Then, as the user scrolls down the page, I need the image to gradually increase in resolution until it is no longer pixelated. Any ideas how I could go about doing this? I realise I could use PHP to resize the image several times and just switch out the images, but that would require loading several extra images. Also, I know I could probably do the effect with Flash & Pixel Bender, but I want to achieve it within the limitations of HTML5, CSS & JavaScript if possible. Appreciate any thoughts!

    Read the article

  • Please convert this PHP code to Ruby

    - by Arpit Vaishnav
        <?php
        // amcharts.com export to image utility

        // set image type (gif/png/jpeg)
        $imgtype = 'jpeg';

        // set image quality (from 0 to 100, not applicable to gif)
        $imgquality = 100;

        // get data from $_POST or $_GET ?
        $data = &$_POST;

        // get image dimensions
        $width = (int) $data['width'];
        $height = (int) $data['height'];

        // create image object
        $img = imagecreatetruecolor($width, $height);

        // populate image with pixels
        for ($y = 0; $y < $height; $y++) {
            // innitialize
            $x = 0;
            // get row data
            $row = explode(',', $data['r'.$y]);
            // place row pixels
            $cnt = sizeof($row);
            for ($r = 0; $r < $cnt; $r++) {
                // get pixel(s) data
                $pixel = explode(':', $row[$r]);
                // get color
                $pixel[0] = str_pad($pixel[0], 6, '0', STR_PAD_LEFT);
                $cr = hexdec(substr($pixel[0], 0, 2));
                $cg = hexdec(substr($pixel[0], 2, 2));
                $cb = hexdec(substr($pixel[0], 4, 2));
                // allocate color
                $color = imagecolorallocate($img, $cr, $cg, $cb);
                // place repeating pixels
                $repeat = isset($pixel[1]) ? (int) $pixel[1] : 1;
                for ($c = 0; $c < $repeat; $c++) {
                    // place pixel
                    imagesetpixel($img, $x, $y, $color);
                    // iterate column
                    $x++;
                }
            }
        }

        // set proper content type
        header('Content-type: image/'.$imgtype);
        header('Content-Disposition: attachment; filename="chart.'.$imgtype.'"');

        // stream image
        $function = 'image'.$imgtype;
        if ($imgtype == 'gif') {
            $function($img);
        } else {
            $function($img, null, $imgquality);
        }

        // destroy
        imagedestroy($img);
        ?>

    Read the article

  • Min/max coordinates of cells, given cell length, in C#

    - by Raj
    Please see the attached picture to better understand my question. I have a matrix of cells of [J x I]; each cell is square in shape with length "a". My question is: is there a way to use a FOR loop to assign MIN/MAX coordinates to each cell, taking the origin (0,0) at one corner? Thanks. "freeimagehosting.net/uploads/3b09575180.jpg" I was trying the following code, but with no success:

        int a;
        a = 1;
        for (int J = 1; J <= 5; J++)
        {
            for (int I = 1; I <= 5; I++)
            {
                double Xmin = ((I - 1) * a);
                double Ymin = ((J - 1) * a);
                double Xmax = (I * a);
                double Ymax = (J * a);
            }
        }
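
    Presumably the missing piece: the loop math above is already right, but the values are computed into locals and thrown away on every iteration, so nothing observable happens. Storing them in arrays makes it work; sketched here in C++ rather than C# (same idea, names are illustrative):

        #include <cstdio>

        int main() {
            const int NJ = 5, NI = 5;   // grid dimensions
            const double a = 1.0;       // cell edge length

            // Store each cell's extents instead of discarding them.
            double xmin[NJ][NI], xmax[NJ][NI], ymin[NJ][NI], ymax[NJ][NI];
            for (int j = 0; j < NJ; ++j) {
                for (int i = 0; i < NI; ++i) {
                    xmin[j][i] = i * a;
                    xmax[j][i] = (i + 1) * a;
                    ymin[j][i] = j * a;
                    ymax[j][i] = (j + 1) * a;
                }
            }
            // e.g. the cell in column 2, row 3 (0-based) spans x:[2,3], y:[3,4]
            std::printf("x:[%g,%g] y:[%g,%g]\n",
                        xmin[3][2], xmax[3][2], ymin[3][2], ymax[3][2]);
            return 0;
        }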

    Read the article

  • Changing RGB color image to Grayscale image using Objective C

    - by user567167
    I am developing an application that changes a color image to a grayscale image. However, somehow the picture comes out wrong. I don't know what is wrong with the code; maybe the parameters I pass in are wrong. Please help.

        UIImage *c = [UIImage imageNamed:@"downRed.png"];
        CGImageRef cRef = CGImageRetain(c.CGImage);
        NSData *pixelData = (NSData *) CGDataProviderCopyData(CGImageGetDataProvider(cRef));
        size_t w = CGImageGetWidth(cRef);
        size_t h = CGImageGetHeight(cRef);
        unsigned char *pixelBytes = (unsigned char *)[pixelData bytes];
        unsigned char *greyPixelData = (unsigned char *) malloc(w * h);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int iter = 4 * (w * y + x);
                int red = pixelBytes[iter];
                int green = pixelBytes[iter + 1];
                int blue = pixelBytes[iter + 2];
                greyPixelData[w * y + x] = (unsigned char)(red * 0.3 + green * 0.59 + blue * 0.11);
                int value = greyPixelData[w * y + x];
            }
        }
        CFDataRef imgData = CFDataCreate(NULL, greyPixelData, w * h);
        CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(imgData);
        size_t width = CGImageGetWidth(cRef);
        size_t height = CGImageGetHeight(cRef);
        size_t bitsPerComponent = 8;
        size_t bitsPerPixel = 8;
        size_t bytesPerRow = CGImageGetWidth(cRef);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGBitmapInfo info = kCGImageAlphaNone;
        CGFloat *decode = NULL;
        BOOL shouldInteroplate = NO;
        CGColorRenderingIntent intent = kCGRenderingIntentDefault;
        CGDataProviderRelease(imgDataProvider);
        CGImageRef throughCGImage = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpace, info, imgDataProvider, decode, shouldInteroplate, intent);
        UIImage *newImage = [UIImage imageWithCGImage:throughCGImage];
        CGImageRelease(throughCGImage);
        newImageView.image = newImage;

    Read the article

  • Make dialogs compatible with "large fonts".

    - by Narcís Calvet
    What do you think are best practices for making a Windows dialog compatible with both standard fonts (96 DPI) and the "large fonts" setting (120 DPI), so that objects don't overlap or get cut off? BTW: just in case it's relevant, I'm interested in doing this for Delphi dialogs. Thanks in advance!

    Read the article

  • How come JFrame window size in Java does not produce the size of window specified?

    - by typoknig
    Hi all, I am just messing around trying to make a game right now, but I have had this problem before too. When I specify a window size (1024 x 768, for instance), the window produced is just a little larger than what I specified. Very annoying. Is there a reason for this? How do I correct it so the window created is actually the size I want, instead of being just a bit bigger? Up till now I have always gone back and manually adjusted the size a few pixels at a time until I got the result I wanted, but that is getting old. If there were even a formula I could use that would tell me how many pixels I need to add/subtract from my variables, that would be excellent! P.S. I don't know if my OS could be a factor in this, but I am using W7X64.

        private int windowWidth = 1024;
        private int windowHeight = 768;

        public SomeWindow() {
            this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            this.setSize(windowWidth, windowHeight);
            this.setResizable(false);
            this.setLocation(0, 0);
            this.setVisible(true);
        }

    Read the article

  • Want to convert a whole PHP script to Ruby on Rails

    - by user303058
        <?php
        // amcharts.com export to image utility

        // set image type (gif/png/jpeg)
        $imgtype = 'jpeg';

        // set image quality (from 0 to 100, not applicable to gif)
        $imgquality = 100;

        // get data from $_POST or $_GET ?
        $data = &$_POST;

        // get image dimensions
        $width = (int) $data['width'];
        $height = (int) $data['height'];

        // create image object
        $img = imagecreatetruecolor($width, $height);

        // populate image with pixels
        for ($y = 0; $y < $height; $y++) {
            // innitialize
            $x = 0;
            // get row data
            $row = explode(',', $data['r'.$y]);
            // place row pixels
            $cnt = sizeof($row);
            for ($r = 0; $r < $cnt; $r++) {
                // get pixel(s) data
                $pixel = explode(':', $row[$r]);
                // get color
                $pixel[0] = str_pad($pixel[0], 6, '0', STR_PAD_LEFT);
                $cr = hexdec(substr($pixel[0], 0, 2));
                $cg = hexdec(substr($pixel[0], 2, 2));
                $cb = hexdec(substr($pixel[0], 4, 2));
                // allocate color
                $color = imagecolorallocate($img, $cr, $cg, $cb);
                // place repeating pixels
                $repeat = isset($pixel[1]) ? (int) $pixel[1] : 1;
                for ($c = 0; $c < $repeat; $c++) {
                    // place pixel
                    imagesetpixel($img, $x, $y, $color);
                    // iterate column
                    $x++;
                }
            }
        }

        // set proper content type
        header('Content-type: image/'.$imgtype);
        header('Content-Disposition: attachment; filename="chart.'.$imgtype.'"');

        // stream image
        $function = 'image'.$imgtype;
        if ($imgtype == 'gif') {
            $function($img);
        } else {
            $function($img, null, $imgquality);
        }

        // destroy
        imagedestroy($img);
        ?>

    Read the article

  • Using a single texture image unit with multiple sampler uniforms

    - by bcrist
    I am writing a batching system which tracks currently bound textures in order to avoid unnecessary glBindTexture() calls. I'm not sure if I need to keep track of which textures have already been used by a particular batch, so that if a texture is used twice, it will be bound to a different TIU for the second sampler which requires it. Is it acceptable for an OpenGL application to use the same texture image unit for multiple samplers within the same shader stage? What about samplers in different shader stages? For example:

    Fragment shader:

        ...
        uniform sampler2D samp1;
        uniform sampler2D samp2;
        void main() { ... }

    Main program:

        ...
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, tex_id);
        glUniform1i(samp1_location, 0);
        glUniform1i(samp2_location, 0);
        ...

    I don't see any reason why this shouldn't work, but what about if the shader program also included a vertex shader like this:

    Vertex shader:

        ...
        uniform sampler2D samp1;
        void main() { ... }

    In this case, OpenGL is supposed to treat both instances of samp1 as the same variable, and exposes a single location for them. Therefore, the same texture unit is being used in the vertex and fragment shaders. I have read that using the same texture in two different shader stages counts doubly against GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, but this would seem to contradict that. In a quick test on my hardware (HD 6870), all of the following scenarios worked as expected:

    - 1 TIU used for 2 sampler uniforms in the same shader stage
    - 1 TIU used for 1 sampler uniform which is used in 2 shader stages
    - 1 TIU used for 2 sampler uniforms, each occurring in a different stage

    However, I don't know if this is behavior that I should expect on all hardware/drivers, or if there are performance implications.

    Read the article

  • How do multipass shaders work in OpenGL?

    - by Boreal
    In Direct3D, multipass shaders are simple to use because you can literally define passes within a program. In OpenGL, it seems a bit more complex because it is possible to give a shader program as many vertex, geometry, and fragment shaders as you want. A popular example of a multipass shader is a toon shader. One pass does the actual cel-shading effect and the other creates the outline. If I have two vertex shaders, "cel.vert" and "outline.vert", and two fragment shaders, "cel.frag" and "outline.frag" (similar to the way you do it in HLSL), how can I combine them to create the full toon shader? I don't want you saying that a geometry shader can be used for this because I just want to know the theory behind multipass GLSL shaders ;)
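
    In GLSL there is no pass block like in an effect file: a "pass" is just a draw call issued with a different program object bound. You link cel.vert + cel.frag into one program and outline.vert + outline.frag into another, then draw the mesh twice. A minimal C++ sketch (function and program names are placeholders; state setup and a GL loader are assumed):

        #include <GL/gl.h>  // plus a GL 2.0+ loader (GLEW/GLAD) for glUseProgram

        void drawToon(GLuint celProgram, GLuint outlineProgram,
                      void (*drawMesh)())
        {
            // Pass 1: cel shading (cel.vert + cel.frag linked into celProgram).
            glUseProgram(celProgram);
            drawMesh();

            // Pass 2: outline (outline.vert + outline.frag linked into
            // outlineProgram). A common variant culls front faces and draws
            // slightly inflated geometry in black.
            glUseProgram(outlineProgram);
            glEnable(GL_CULL_FACE);
            glCullFace(GL_FRONT);
            drawMesh();
            glCullFace(GL_BACK);
        }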

    Read the article

  • Shadowmap first phase and shaders

    - by KaiserJohaan
    I am using OpenGL 3.3 and am trying to implement shadow mapping using cube maps. I have a framebuffer with a depth attachment and a cube map texture. My question is how to design the shaders for the first pass, when creating the shadow map. This is my vertex shader:

        in vec3 position;
        uniform mat4 lightWVP;

        void main() {
            gl_Position = lightWVP * vec4(position, 1.0);
        }

    Now, do I even need a fragment shader in this pass? From what I understand after reading http://www.opengl.org/wiki/Fragment_Shader, by default gl_FragCoord.z is written to the currently attached depth component (to which my cubemap texture is bound). Thus I shouldn't even need a fragment shader for this pass; from what I understand, there is no other work to do in the fragment shader other than writing this value. Is this correct?
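
    From the spec side this is allowed: a program with no fragment shader is legal for depth-only rendering in desktop GL, since the depth write comes from the rasterizer, not from shader code; a fragment shader is only needed for things like alpha-tested discard. A sketch of the pass setup under that assumption ('shadowFBO' is a placeholder for a framebuffer with only a depth attachment):

        #include <GL/gl.h>  // assumes a GL 3.3 loader (GLEW/GLAD) in a real program

        void beginShadowDepthPass(GLuint shadowFBO, GLsizei mapSize)
        {
            glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
            glViewport(0, 0, mapSize, mapSize);

            // No color attachment exists, so tell GL not to expect one.
            glDrawBuffer(GL_NONE);
            glReadBuffer(GL_NONE);

            glClear(GL_DEPTH_BUFFER_BIT);
            // ... bind the depth-only program and draw; depth is still
            //     written from gl_FragCoord.z with no fragment shader ...
        }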

    Read the article

  • How to access a Google Maps API v3 marker's DIV and its pixel position?

    - by Ray Yun
    Instead of the Google Maps API's default info window, I'm going to use another jQuery tooltip plugin over the marker. So I need to get the marker's DIV and its pixel position, but I couldn't, because there is no id or class for a given marker. I can only access the map canvas DIV from the marker object, and the undocumented pixelBounds object. How can I access the marker's DIV? Where can I get the DIV's pixel position? Can I convert a lat-lng position to pixel values?

    Read the article

  • What is the pixel clock setting on my monitor actually doing?

    - by codecowboy
    I am experiencing display interference on a Dell 24" flat panel monitor. I find that if I adjust the pixel clock setting up or down in the monitor's on-screen menus, the interference goes away for a while. The monitor is attached to a MacBook Pro using a Mini DisplayPort to VGA adapter. I have found that in a different house I get the interference problem less, so it might be related to the electricity supply, or possibly even Ethernet powerline (a total guess). What does the pixel clock setting actually do, and does this behaviour point to a likely cause of the interference?

    Read the article

  • Differences in Cg shader code for OpenGL vs. DirectX?

    - by Cray
    I have been trying to use an existing library that automatically generates shaders (the Hydrax plugin for Ogre3D). These shaders are used to render water and are somewhat involved, but not extremely complicated. However, there seem to be some differences in how Cg shaders are handled by OpenGL and DirectX. More specifically, I am pretty sure that the author of the library only debugged the shaders under DirectX: they work flawlessly there, but not so in OpenGL. There are no compiler errors, but the result just doesn't look the same. (And I have to run the library in OpenGL.) Isn't Cg supposed to be a language that can freely use the exact same code for both platforms? Are there any specific known caveats one should be aware of when using the same code for both? Are there any fast ways to find what parts of the code work differently? (I am pretty sure that the shaders are the problem. Otherwise Ogre3D has great support for both platforms, and everything is abstracted away nicely. Other shaders work in OpenGL, etc...)

    Read the article

  • What are the factors that determine the default frequency of a shader call?

    - by user827992
    After playing for some days with various vertex and fragment shaders, it seems clear to me that these programs are called by the GPU at each rendering cycle. The problem is that I can't really quantify this frequency, and I can't tell whether it is based on some default values or not, because I don't have a big collection of hardware right now to do extensive tests. For all I know the answer could be really trivial, like "it's the same as the refresh rate of your monitor", but I would like some good answers to be clear on this. For instance, it looks really odd to me that all the techniques used to control the amount of FPS that I have seen until now use a call to the GLUT function glutGet(GLUT_ELAPSED_TIME) to retrieve a value in ms about when the rendering started, but I have to rely on the CPU to do the math. Why can't I set an FPS value in OpenGL, if OpenGL clearly has a counter and a timer/clock? PS: I'm referring to OpenGL 3.0+.
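
    For what it's worth: a shader has no fixed default frequency. The vertex shader runs once per vertex and the fragment shader once per generated fragment, for every draw call the application issues, so the effective "frequency" is whatever the render loop (plus vsync/swap interval) makes it. That is why FPS capping is done CPU-side with a timer; OpenGL only draws when asked. A minimal CPU-side cap, sketched with std::chrono instead of GLUT ('renderFrame' is a placeholder):

        #include <chrono>
        #include <thread>

        // Cap a render loop at 'targetFps'; renderFrame draws one frame.
        void runCapped(void (*renderFrame)(), double targetFps)
        {
            using clock = std::chrono::steady_clock;
            const auto frame = std::chrono::duration_cast<clock::duration>(
                std::chrono::duration<double>(1.0 / targetFps));
            auto next = clock::now() + frame;
            for (;;) {
                renderFrame();                    // issue draw calls, swap buffers
                std::this_thread::sleep_until(next);
                next += frame;
            }
        }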

    Read the article

  • How to pass one float as four unsigned chars to a shader via glVertexAttribPointer?

    - by Kog
    For each vertex I use two floats as position and four unsigned bytes as color. I want to store all of them in one table, so I tried casting those four unsigned bytes to one float, but I am unable to do that correctly... All in all, my tests came down to one point:

        GLfloat vertices[] = { 1.0f, 0.5f, 0, 1.0f, 0, 0 };
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), vertices);

        // VER1 - draws red triangle
        // unsigned char colors[] = { 0xff, 0, 0, 0xff, 0xff, 0, 0, 0xff, 0xff, 0, 0, 0xff };
        // glEnableVertexAttribArray(1);
        // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors);

        // VER2 - draws greenish triangle (not "pure" green)
        // float f = 255 << 24 | 255; // Hex: 0xff0000ff
        // float colors2[] = { f, f, f };
        // glEnableVertexAttribArray(1);
        // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors2);

        // VER3 - draws red triangle
        int i = 255 << 24 | 255; // Hex: 0xff0000ff
        int colors3[] = { i, i, i };
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors3);

        glDrawArrays(GL_TRIANGLES, 0, 3);

    The above code is used to draw one simple red triangle. My question is: why do versions 1 and 3 work correctly, while version 2 draws a greenish triangle? The hex values are ones I read by watching the variable during debugging. They are equal for versions 2 and 3, so what causes the difference?
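
    My reading of this (not from the thread): glVertexAttribPointer with GL_UNSIGNED_BYTE reads the raw bytes in the buffer. On a little-endian machine, versions 1 and 3 both put the literal bytes 0xFF,0x00,0x00,0xFF in memory. Version 2, however, converts the integer value 4278190335 to the nearest representable float, which has a completely different bit pattern (0x4F7F0000), so the GPU sees different color bytes; the debugger shows the numeric value, not the bits. Reinterpreting bits requires memcpy (or a union), not a numeric cast. A small C++ demonstration:

        #include <cstdint>
        #include <cstring>
        #include <cstdio>

        int main() {
            uint32_t packed = (255u << 24) | 255u;        // 0xFF0000FF: R,G,B,A bytes

            // Value conversion: rounds the number 4278190335 to a float.
            float converted = static_cast<float>(packed); // bits become 0x4F7F0000

            // Bit reinterpretation: copies the four bytes unchanged.
            float reinterpreted;
            std::memcpy(&reinterpreted, &packed, sizeof reinterpreted);

            uint32_t a, b;
            std::memcpy(&a, &converted, sizeof a);
            std::memcpy(&b, &reinterpreted, sizeof b);
            std::printf("converted:     0x%08X\n", static_cast<unsigned>(a));
            std::printf("reinterpreted: 0x%08X\n", static_cast<unsigned>(b));
            return 0;
        }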

    Read the article

  • Possible / How to render to multiple back buffers, using one as a shader resource when rendering to the other, and vice versa?

    - by Raptormeat
    I'm making a game in Direct3D10. For several of my rendering passes, I need to change the behavior of the pass depending on what is already rendered on the back buffer. (For example, I'd like to do some custom blending: when the destination color is dark, do one thing; when it is light, do another.) It looks like I'll need to create multiple render targets and render back and forth between them. What's the best way to do this?

    1. Create my own render textures, use them, and then copy the final result into the back buffer.
    2. Create multiple back buffers, render between them, and then present the last one that was rendered to.
    3. Create one render texture and one back buffer, render between them, and just ensure that the back buffer is the final target rendered to.

    I'm not sure which of these is possible, and if there are any performance issues that aren't obvious. Clearly my preference would be to have 2 rather than 3 default render targets, if possible.
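
    A sketch of the ping-pong pattern behind option 1, with two offscreen targets that are alternately written and sampled (a resource cannot be bound as a render target and shader resource at the same time, hence the unbind). All helper names are placeholders and error handling is omitted:

        #include <d3d10.h>
        #include <utility>

        void PingPongRender(ID3D10Device* dev, ID3D10RenderTargetView* backBufferRTV,
                            UINT width, UINT height, int numPasses)
        {
            ID3D10Texture2D*          tex[2] = {};
            ID3D10RenderTargetView*   rtv[2] = {};
            ID3D10ShaderResourceView* srv[2] = {};

            D3D10_TEXTURE2D_DESC desc = {};
            desc.Width = width;  desc.Height = height;
            desc.MipLevels = 1;  desc.ArraySize = 1;
            desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
            desc.SampleDesc.Count = 1;
            desc.Usage = D3D10_USAGE_DEFAULT;
            desc.BindFlags = D3D10_BIND_RENDER_TARGET | D3D10_BIND_SHADER_RESOURCE;

            for (int i = 0; i < 2; ++i) {
                dev->CreateTexture2D(&desc, nullptr, &tex[i]);
                dev->CreateRenderTargetView(tex[i], nullptr, &rtv[i]);
                dev->CreateShaderResourceView(tex[i], nullptr, &srv[i]);
            }

            int src = 0, dst = 1;
            for (int pass = 0; pass < numPasses; ++pass) {
                // Unbind the previous SRV before binding its texture as RT.
                ID3D10ShaderResourceView* nullSRV = nullptr;
                dev->PSSetShaderResources(0, 1, &nullSRV);
                dev->OMSetRenderTargets(1, &rtv[dst], nullptr);
                dev->PSSetShaderResources(0, 1, &srv[src]); // read last result
                // DrawPass(pass); // geometry with the custom-blend shader
                std::swap(src, dst);
            }

            // Final composite of the last-written target into the back buffer.
            dev->OMSetRenderTargets(1, &backBufferRTV, nullptr);
            dev->PSSetShaderResources(0, 1, &srv[src]);
            // DrawFullscreenQuad();
        }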

    Read the article

  • How can I render a semi-transparent model with OpenGL correctly?

    - by phobitor
    I'm using OpenGL ES 2 and I want to render a simple model with some level of transparency. I'm just starting out with shaders, and I wrote a simple diffuse shader for the model without any issues, but I don't know how to add transparency to it. I tried to set my fragment shader's output (gl_FragColor) to a non-opaque alpha value, but the results weren't too great. It sort of works, but it looks like certain model triangles are only rendered based on the camera position... It's really hard to describe what's wrong, so please watch this short video I recorded: http://www.youtube.com/watch?v=s0JqA0rZabE I thought this was a depth testing issue, so I tried playing around with enabling/disabling depth testing and back face culling. Enabling back face culling changes the output slightly, but the problem in the video is still there. Enabling/disabling depth testing doesn't seem to do anything. Could anyone explain what I'm seeing and how I can add some simple transparency to my model with the shader? I'm not looking for advanced order-independent transparency implementations.

    edit:

    Vertex Shader:

        // color varying for fragment shader
        varying mediump vec3 LightIntensity;
        varying highp vec3 VertexInModelSpace;

        void main() {
            // vec4 LightPosition = vec4(0.0, 0.0, 0.0, 1.0);
            vec3 LightColor = vec3(1.0, 1.0, 1.0);
            vec3 DiffuseColor = vec3(1.0, 0.25, 0.0);
            // find the vector from the given vertex to the light source
            vec4 vertexInWorldSpace = gl_ModelViewMatrix * vec4(gl_Vertex);
            vec3 normalInWorldSpace = normalize(gl_NormalMatrix * gl_Normal);
            vec3 lightDirn = normalize(vec3(LightPosition - vertexInWorldSpace));
            // save vertexInWorldSpace
            VertexInModelSpace = vec3(gl_Vertex);
            // calculate light intensity
            LightIntensity = LightColor * DiffuseColor * max(dot(lightDirn, normalInWorldSpace), 0.0);
            // calculate projected vertex position
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }

    Fragment Shader:

        // varying to define color
        varying vec3 LightIntensity;
        varying vec3 VertexInModelSpace;

        void main() {
            gl_FragColor = vec4(LightIntensity, 0.5);
        }
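
    The video shows the classic symptom: with depth writes enabled and unsorted triangles, whichever of the model's own triangles happens to be drawn first writes depth and rejects the triangles behind it, so visibility flips with camera angle. Also, an alpha value in gl_FragColor only takes effect once blending is enabled on the API side. A sketch of the usual fix (my own, assuming a drawModel callback; opaque geometry is drawn first):

        #include <GLES2/gl2.h>

        void drawTransparentModel(void (*drawModel)())
        {
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

            glEnable(GL_DEPTH_TEST); // still test against the opaque scene...
            glDepthMask(GL_FALSE);   // ...but stop writing depth, so the model's
                                     // own triangles cannot occlude each other

            drawModel();             // best if triangles are sorted back to front

            glDepthMask(GL_TRUE);
            glDisable(GL_BLEND);
        }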

    Read the article

  • How can I use a Shader in XNA to color single pixels?

    - by George Johnston
    I have a standard 800x600 window in my XNA project. My goal is to color each individual pixel based on a rectangular array which holds boolean values. Currently I am using a 1x1 texture and drawing each sprite in my array. I am very new to XNA and come from a GDI background, so I am doing what I would have done in GDI, but it doesn't scale very well. I have been told in another question to use a shader, but after much research, I still haven't been able to find out how to accomplish this goal. My application loops through the X and Y coordinates of my rectangular array, does calculations based on each value, and reassigns/moves the array around. At the end, I need to update my "canvas" with the new values. A smaller sample of my array would look like:

        0,0,0,0,0,0,0
        0,0,0,0,0,0,0
        0,0,0,0,0,0,0
        1,1,1,1,1,1,1
        1,1,1,1,1,1,1

    How can I use a shader to color each pixel?
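
    One practical alternative to per-pixel sprites (a general technique, not from the thread): build the frame's pixels in a CPU-side array and upload them as a single texture, then draw that texture once. In XNA the matching calls are Texture2D.SetData plus one SpriteBatch.Draw; the same idea sketched in C++/OpenGL for illustration:

        #include <GL/gl.h>
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // Convert the bool grid to RGBA bytes once per frame and update a
        // single width x height texture, then draw it as one quad/sprite.
        void uploadGrid(GLuint texture, const std::vector<bool>& grid,
                        int width, int height)
        {
            std::vector<uint8_t> rgba(static_cast<std::size_t>(width) * height * 4);
            for (int i = 0; i < width * height; ++i) {
                const uint8_t v = grid[static_cast<std::size_t>(i)] ? 255 : 0;
                rgba[i * 4 + 0] = v;    // white where the cell is true
                rgba[i * 4 + 1] = v;
                rgba[i * 4 + 2] = v;
                rgba[i * 4 + 3] = 255;
            }
            glBindTexture(GL_TEXTURE_2D, texture);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                            GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
        }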

    Read the article

  • Possible to have a background color transition from color A to color B without repeating a pixel sti

    - by Andrew Heath
    For things like menu bars and headers, a background color is nice. But a background color that gracefully transitions from, say, blue to white is even nicer. I know this can be done by making a 1-pixel-wide, X-pixel-tall image file containing the desired fade and repeating it across the div, but does CSS have native support to just define the colors and be done with it? Can any other language handle this?

    Read the article

  • HLSL 5 interpolation issues

    - by metredigm
    I'm having issues with the depth components of my shadowmapping shaders. The shadow map rendering shader is fine, and works very well. The world rendering shader is more problematic. The only value which seems to definitely be off is the pixel's position from the light's perspective, which I pass in parallel to the position.

        struct Pixel
        {
            float4 position  : SV_Position;
            float4 light_pos : TEXCOORD2;
            float3 normal    : NORMAL;
            float2 texcoord  : TEXCOORD;
        };

    The reason that I used the semantic 'TEXCOORD2' on the light's pixel position is because I believe that the problem lies with Direct3D's interpolation of values between shaders, and I started trying random semantics and also forcing linear and noperspective interpolations. In the world rendering shader, I observed in the pixel shader that the Z value of light_pos was always extremely close to, but less than, the W value. This resulted in a depth result of 0.999 or similar for every pixel. Here is the vertex shader code:

        struct Vertex
        {
            float3 position : POSITION;
            float3 normal   : NORMAL;
            float2 texcoord : TEXCOORD;
        };

        struct Pixel
        {
            float4 position  : SV_Position;
            float4 light_pos : TEXCOORD2;
            float3 normal    : NORMAL;
            float2 texcoord  : TEXCOORD;
        };

        cbuffer Camera : register(b0)
        {
            matrix world;
            matrix view;
            matrix projection;
        };

        cbuffer Light : register(b1)
        {
            matrix light_world;
            matrix light_view;
            matrix light_projection;
        };

        Pixel RenderVertexShader(Vertex input)
        {
            Pixel output;
            output.position = mul(float4(input.position, 1.0f), world);
            output.position = mul(output.position, view);
            output.position = mul(output.position, projection);
            output.world_pos = mul(float4(input.position, 1.0f), world);
            output.world_pos = mul(output.world_pos, light_view);
            output.world_pos = mul(output.world_pos, light_projection);
            output.texcoord = input.texcoord;
            output.normal = input.normal;
            return output;
        }

    I suspect interpolation to be the culprit, as I used the camera matrices in place of the light matrices in the vertex shader and had the same problem. The problem is evident, as both of the same vectors were passed to a pixel from the VS, but only one of them showed a change in the PS. I have already thoroughly debugged the matrices' validity, the cbuffers' validity, and the multiplicative validity. I'm very stumped and have been trying to solve this for quite some time.

    Misc info: the light projection matrix and the camera projection matrix are the same, generated from D3DXMatrixPerspectiveFovLH(), with an FOV of 60.0f * 3.141f / 180.0f, a near clipping plane of 0.1f, and a far clipping plane of 1000.0f.

    Any ideas on what is happening? (This is a repost from my question on Stack Overflow.)

    Read the article
