Search Results

Search found 2333 results on 94 pages for 'mr pixel'.


  • Pixel alignment algorithm

    - by user42325
    I have a set of square blocks that I want to draw in a window. I am sure the coordinate calculation is correct, but on screen some squares' edges overlap with their neighbours and some don't. I remember the problem is caused by pixel rounding accuracy, and that there's a specific topic on this kind of problem in 2D image rendering, but I don't remember what exactly it is or how to solve it. Look at this screenshot: each block should have a fixed-width margin, but in the image the vertical white lines have different widths, though the horizontal lines look fine.
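
    A common fix is to snap block edges to rounded cumulative positions instead of rounding a fixed width once per block, so neighbouring blocks share exact integer edges and the margins stay uniform. A minimal sketch in Java, with hypothetical cols, cellWidth, margin, and drawBlock names:

        // Round the running edge position, not the cell width, so the
        // rounding error never accumulates into uneven gaps.
        int[] edges = new int[cols + 1];
        for (int i = 0; i <= cols; i++) {
            edges[i] = (int) Math.round(i * cellWidth); // cellWidth is a float
        }
        for (int i = 0; i < cols; i++) {
            int x = edges[i] + margin;
            int w = edges[i + 1] - edges[i] - 2 * margin;
            drawBlock(x, w); // hypothetical draw call
        }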

    Read the article

  • GLSL per pixel lighting with custom light type

    - by Justin
    Ok, I am having a big problem here. I just got into GLSL yesterday, so the code will be terrible, I'm sure. Basically, I am attempting to make a light that can be passed into the fragment shader (for learning purposes). I have four input values: one for the position of the light, one for the color, one for the distance it can travel, and one for the intensity. I want to find the distance between the light and the fragment, then calculate the color from there. The code I have gives me a simply gorgeous ring of light that gets twisted and widened as the matrix is modified. I love the results, but it is not even close to what I am after. I want the light to be moved with all of the vertices, so it is always in the same place in relation to the objects. I can easily take it from there, but getting that to work seems to be impossible with my current structure. Can somebody give me a few pointers (pun not intended)?

    Vertex shader:

        attribute vec4 position;
        attribute vec4 color;
        attribute vec2 textureCoordinates;

        varying vec4 colorVarying;
        varying vec2 texturePosition;
        varying vec4 fposition;
        varying vec4 lightPosition;
        varying float lightDistance;
        varying float lightIntensity;
        varying vec4 lightColor;

        void main()
        {
            vec4 ECposition = gl_ModelViewMatrix * gl_Vertex;
            vec3 tnorm = normalize(vec3(gl_NormalMatrix * gl_Normal));

            fposition = ftransform();
            gl_Position = fposition;
            gl_TexCoord[0] = gl_MultiTexCoord0;

            fposition = ECposition;

            lightPosition = vec4(0.0, 0.0, 5.0, 0.0) * gl_ModelViewMatrix * gl_Vertex;
            lightDistance = 5.0;
            lightIntensity = 1.0;
            lightColor = vec4(0.2, 0.2, 0.2, 1.0);
        }

    Fragment shader:

        varying vec4 colorVarying;
        varying vec2 texturePosition;
        varying vec4 fposition;
        varying vec4 lightPosition;
        varying float lightDistance;
        varying float lightIntensity;
        varying vec4 lightColor;

        uniform sampler2D texture;

        void main()
        {
            float l_distance = sqrt((gl_FragCoord.x * lightPosition.x) +
                                    (gl_FragCoord.y * lightPosition.y) +
                                    (gl_FragCoord.z * lightPosition.z));
            float l_value = lightIntensity / (l_distance / lightDistance);
            vec4 l_color = vec4(l_value * lightColor.r,
                                l_value * lightColor.g,
                                l_value * lightColor.b,
                                l_value * lightColor.a);
            vec4 color = texture2D(texture, gl_TexCoord[0].st);
            gl_FragColor = l_color * color;
            //gl_FragColor = fposition;
        }
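
    A hedged guess at the culprit: multiplying the light position by gl_Vertex as well as the matrix makes it vary per vertex, and the fragment shader then mixes that value with window-space gl_FragCoord. Transforming the fixed light into eye space once, and measuring a true eye-space distance against the interpolated fragment position, is the usual pattern. A sketch in GLSL:

        // In the vertex shader: a point (w = 1.0) transformed into eye
        // space once, not re-scaled by each vertex.
        lightPosition = gl_ModelViewMatrix * vec4(0.0, 0.0, 5.0, 1.0);

        // In the fragment shader: a Euclidean distance between the
        // interpolated fragment position and the light, both in eye space.
        float l_distance = length(fposition.xyz - lightPosition.xyz);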

    Read the article

  • Calculate random points (pixel) within a circle (image)

    - by DMills
    I have an image that contains a circle at a specific location and of a specific diameter. What I need to do is calculate random points within the circle, and then manipulate the pixels they correspond to. I have the following code already:

        private Point CalculatePoint()
        {
            var angle = _random.NextDouble() * (Math.PI * 2);
            var x = _originX + (_radius * Math.Cos(angle));
            var y = _originY + (_radius * Math.Sin(angle));
            return new Point((int)x, (int)y);
        }

    And that works fine for finding all the points on the circumference of the circle, but I need points from anywhere inside it. If this doesn't make sense let me know and I will do my best to clarify.
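
    For points uniform over the whole disc, the radius needs to be scaled by the square root of a uniform random value; scaling by the raw value clusters points near the centre. A sketch (C#) reusing the question's fields:

        private Point CalculatePointInCircle()
        {
            var angle = _random.NextDouble() * (Math.PI * 2);
            var r = _radius * Math.Sqrt(_random.NextDouble()); // sqrt => uniform over area
            var x = _originX + (r * Math.Cos(angle));
            var y = _originY + (r * Math.Sin(angle));
            return new Point((int)x, (int)y);
        }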

    Read the article

  • Silverlight Pixel Shader resource "not found"; what should the URI be?

    - by Grank
    So I've written and compiled an HLSL pixel shader with Shazzam, placed the resulting .ps file in my project, and am trying to instantiate it. No matter what URI I put, Blend tells me the resource can't be found whenever I try to view any XAML designer, and Visual Studio just shows me a blank page, both in design view and when I run the application. This is a Silverlight 4 SketchFlow project, in Blend 4 RC and Visual Studio 2010. I've tried both Resource and EmbeddedResource as the Build Action for the .ps file; neither makes any difference (I'm pretty sure it's supposed to be set to Resource). I've tried the following URI formats:

        "ShaderFileName.ps"
        "/ShaderFileName.ps"
        "AssemblyName;component/ShaderFileName.ps"
        "/AssemblyName;component/ShaderFileName.ps"

    I also tried moving the shader file from the Screens assembly to the root assembly (that's how SketchFlow projects are created) and that didn't help either. Anyone have any thoughts?
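
    One thing worth double-checking: the assembly-qualified URI must name the assembly that actually contains the .ps resource, and the Build Action must be Resource (EmbeddedResource is not reachable through pack URIs). A sketch of the canonical pattern (C#, with a hypothetical assembly name):

        // Assumes Build Action = Resource and that the .ps file lives in
        // the "MyProject.Screens" assembly (name hypothetical).
        var shader = new PixelShader
        {
            UriSource = new Uri("/MyProject.Screens;component/ShaderFileName.ps",
                                UriKind.Relative)
        };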

    Read the article

  • What could cause a pixel shader to paint outside the lines of the vertex shader output?

    - by Rei Miyasaka
    From what I understand, the pixels that a pixel shader operates on are specified implicitly by the SV_POSITION output (in DirectX) of the vertex shader. What then could cause a pixel shader to render in the middle of nowhere? I used the new Visual Studio 2012 graphics debugger to visualize my vertex and pixel shader output for a DrawIndexed() call that draws a cube. The pink part is the rendered output of the pixel shader, which takes the cube on its left as its input.

    The vertex shader code:

        cbuffer Buf
        {
            float4x4 final;
        };

        struct In
        {
            float4 pos:POSITION;
            float3 norm:NORMAL;
            float2 texuv:TEXCOORD;
        };

        struct Out
        {
            float4 col:COLOR;
            float2 tex:TEXCOORD;
            float4 pos:SV_POSITION;
        };

        Out main(In input)
        {
            Out output;
            output.pos = mul(input.pos, final);
            output.col = float4(1.0f, 0.5f, 0.5f, 1.0f);
            output.tex = input.texuv;
            return output;
        }

    And the pixel shader:

        struct In
        {
            float4 col:COLOR;
            float2 tex:TEXCOORD;
            float4 pos:SV_POSITION;
        };

        float4 main(In input) : SV_TARGET
        {
            return input.col;
        }

    The raster stage is the only thing between the vertex shader and the pixel shader, so my suspicion is that it's some raster stage setting. But the raster stage shouldn't change the shape of the vertex shader output so drastically, should it?
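
    A hedged sketch of one common cause (C++/D3D11, hypothetical names): if the constant buffer behind `final` is stale or not transposed, SV_POSITION lands in arbitrary places even though the shaders are fine. HLSL packs matrices column-major by default, so the usual pattern is to transpose on upload:

        using namespace DirectX;   // XMMATRIX, XMMatrixTranspose
        XMMATRIX finalT = XMMatrixTranspose(world * view * proj);
        context->UpdateSubresource(cbuf, 0, nullptr, &finalT, 0, 0);
        context->VSSetConstantBuffers(0, 1, &cbuf);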

    Read the article

  • Getting Greyscale pixel value from RGB colourspace in Java using BufferedImage

    - by Andrew Bolster
    Anyone know of a simple way of converting the packed RGB int value returned from BufferedImage's getRGB(i, j) into a greyscale value? I was going to simply average the RGB values by breaking them up using this:

        int alpha = (pixel >> 24) & 0xff;
        int red   = (pixel >> 16) & 0xff;
        int green = (pixel >> 8)  & 0xff;
        int blue  =  pixel        & 0xff;

    and then averaging red, green, and blue. But I feel like for such a simple operation I must be missing something...
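
    A plain average works, but the usual conversion weights the channels by perceived luminance (the ITU-R BT.601 coefficients). A sketch in Java:

        int pixel = img.getRGB(i, j);
        int red   = (pixel >> 16) & 0xff;
        int green = (pixel >> 8)  & 0xff;
        int blue  =  pixel        & 0xff;
        // green dominates because the eye is most sensitive to it
        int grey  = (int) (0.299 * red + 0.587 * green + 0.114 * blue);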

    Read the article

  • iPhone: CMYK Supported Pixel Formats woes

    - by user219393
    I am trying to create an image sized 1x1 in the CMYK color space. As a first step I am creating the bitmap context:

        CGColorSpaceRef cmykcolorSpace = CGColorSpaceCreateDeviceCMYK();
        CGContextRef currentcontext = CGBitmapContextCreate(NULL,
            1, 1,
            8,              // bitsPerComponent
            4,              // bytesPerRow
            cmykcolorSpace,
            kCGImageAlphaNone);

    No matter what combination of bitsPerComponent (8, 16 or 32) I use for supported CMYK, I always get the same error:

        Unsupported pixel description - 4 components, 8 bits-per-component, 32 bits-per-pixel

    Any help is appreciated. These are the supported pixel formats:

        32 bpp,   8 bpc, kCGImageAlphaNone
        64 bpp,  16 bpc, kCGImageAlphaNone
        128 bpp, 32 bpc, kCGImageAlphaNone | kCGBitmapFloatComponents
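
    That supported-format table reads like the Quartz documentation for Mac OS X; the iOS documentation lists bitmap-context formats only for gray, RGB, and alpha-only color spaces, so device CMYK may simply be unsupported on the iPhone regardless of the parameters. A hedged workaround sketch (Objective-C): render into an RGB context and convert to CMYK off-device if print output is needed.

        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL, 1, 1,
            8,      // bitsPerComponent
            4,      // bytesPerRow: 1 pixel * 4 bytes (RGBA)
            rgb, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(rgb);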

    Read the article

  • Create a buffered image from rgb pixel values

    - by Jeff Storey
    I have an integer array of RGB pixels that looks something like:

        pixels[0] = <rgb-value of pixel(0,0)>
        pixels[1] = <rgb-value of pixel(1,0)>
        pixels[2] = <rgb-value of pixel(2,0)>
        pixels[3] = <rgb-value of pixel(0,1)>
        ...

    And I'm trying to create a BufferedImage from it. I tried the following:

        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        img.getRaster().setPixels(0, 0, width, height, pixels);

    But the resulting image has problems with the color bands. The image is unclear and there are diagonal and horizontal lines through it. What is the proper way to initialize the image with the RGB values? Thanks, Jeff
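
    The banding fits a mismatch in array layout: Raster.setPixels() expects one array element per band (three ints per pixel for TYPE_INT_RGB), while this array holds one packed int per pixel. setRGB() consumes the packed form directly; a sketch in Java:

        BufferedImage img = new BufferedImage(width, height,
                                              BufferedImage.TYPE_INT_RGB);
        // offset 0, scansize = width: pixels[y * width + x] maps to (x, y)
        img.setRGB(0, 0, width, height, pixels, 0, width);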

    Read the article

  • Drawing single pixel in Quartz

    - by wwrob
    I have an array of CGPoints, and I'd like to fill the whole screen with colours, the colour of each pixel depending on the total distance to each of the points in the array. The natural way to do this is to compute, for each pixel, the total distance, and turn that into a colour. Questions follow:

    1) How can I colour a single pixel in Quartz? I've been thinking of making 1-by-1 rectangles.
    2) Are there better, more efficient ways to achieve this effect?
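
    Filling a 1x1 CGRect with CGContextFillRect does colour a single pixel, but one fill per pixel is slow. A usually faster approach is to compute every pixel into a malloc'd buffer and wrap it in a CGImage once; a sketch (C/Quartz, with a hypothetical colorForDistance helper):

        uint32_t *buf = malloc(w * h * sizeof(uint32_t));
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                buf[y * w + x] = colorForDistance(x, y); // hypothetical helper
        CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
        CGContextRef bmp = CGBitmapContextCreate(buf, w, h, 8, w * 4, cs,
                                                 kCGImageAlphaNoneSkipFirst);
        CGImageRef img = CGBitmapContextCreateImage(bmp);
        // draw img into the view's context, then release bmp, cs, img, buf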

    Read the article

  • Java single Array best choice for accessing pixels for manipulation?

    - by Petrol
    I am just watching this tutorial https://www.youtube.com/watch?v=HwUnMy_pR6A and the guy (who seems to be pretty competent) is using a single array to store and access the pixels of his to-be-rendered image. I was wondering if this really is the best way to do it. A multi-dimensional array only costs one more pointer indirection, arrays have O(1) access per index, and computing the index into a single array seems to take one addition and one multiplication per pixel. And if multi-dimensional arrays really are bad, couldn't you use some kind of hashing to avoid those addition and multiplication operations? EDIT: here is his code...

        public class Screen {
            private int width, height;
            public int[] pixels;

            public Screen(int width, int height) {
                this.width = width;
                this.height = height;
                // creating an array with one int for every pixel;
                // a single array has better performance than a multi-array
                pixels = new int[width * height];
            }

            public void render() {
                for (int y = 0; y < height; y++) {
                    for (int x = 0; x < width; x++) {
                        pixels[x + y * width] = 0xff00ff;
                    }
                }
            }
        }
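
    On the arithmetic worry: the multiplication can be hoisted out of the inner loop, leaving one addition per pixel, which is far cheaper than any hash lookup (hashing adds computation and destroys the sequential memory access that makes the single array fast). A sketch of the same render loop in Java:

        public void render() {
            for (int y = 0; y < height; y++) {
                int row = y * width;           // one multiply per row
                for (int x = 0; x < width; x++) {
                    pixels[row + x] = 0xff00ff; // one add per pixel
                }
            }
        }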

    Read the article

  • Code Golf: 1x1 black pixel

    - by Joey Adams
    Recently, I used my favorite image editor to make a 1x1 black pixel (which can come in handy when you want to draw solid boxes in HTML cheaply). Even though I made it a monochrome PNG, it came out to be 120 bytes! I mean, that's kind of steep. 120 bytes. For one pixel. I then converted it to a GIF, which dropped the size down to 43 bytes. Much better, but still...

    Challenge: the shortest image file or program that is or generates a 1x1 black pixel. A submission may be:

    - An image file that represents a 1x1 black pixel. The format chosen must be able to represent larger images than 1x1, and cannot be ad hoc (that is, it can't be an image format you just made up for code golf). Image files will be ranked by byte count.
    - A program that generates such an image file. Programs will be ranked by character count, as usual in code golf.

    As long as an answer falls into one of these two categories, anything is fair game.

    Read the article

  • What's a good way to organize samplers for HLSL?

    - by Rei Miyasaka
    According to MSDN, I can have 4096 samplers per context. That's a lot, considering there's only a handful of common sampler states. That tempts me to initialize an array containing a whole bunch of common sampler states, assign them to every device context I use, and then in the pixel shaders refer to them by index using : register(s[n]), where n is the index in the array. If I want more samplers for whatever reason, I can just add them after the last slot. Does this work? If not, when should I set the samplers? Should it be done by the mesh renderer? The texture renderer? Or alongside PSSetShader? Edit: that trick I wrote above doesn't work (at least not yet), as the compiler gives me this error message when I try to use the same register twice:

        error X4500: overlapping register semantics not yet implemented 's0'

    So how do people usually organize samplers, then?
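
    A common arrangement, sketched in HLSL: a small set of individually named samplers pinned to explicit registers (avoiding the overlapping-array declaration that triggers X4500), shared by every shader and bound once at startup rather than per draw:

        SamplerState PointClamp  : register(s0);
        SamplerState LinearClamp : register(s1);
        SamplerState LinearWrap  : register(s2);
        SamplerState AnisoWrap   : register(s3);

    On the C++ side the matching states can then be bound once per context with PSSetSamplers(0, 4, samplers) and left alone, so neither the mesh renderer nor PSSetShader needs to touch them.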

    Read the article

  • How to do reflective collisions with particles hitting background tiles?

    - by Shawn LeBlanc
    In my 2d pixel old-school platformer, I'm looking for methods for bouncing particles off of background tiles. Particles aren't affected by gravity and collisions are "reflective". By that I mean a particle hitting the side of a square tile at 45 degrees should bounce off at 45 degrees as well. We can assume that tiles will always be perfectly square. No slopes or anything. What are efficient methods and algorithms to do this? I'd be implementing this on a Sega Genesis.
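
    With axis-aligned square tiles, a reflective bounce reduces to negating the velocity component along the axis of penetration. A sketch (C, with a hypothetical tile_solid lookup): step each axis separately and reflect whichever step lands in a solid tile; the integer-only math suits the Genesis.

        typedef struct { int x, y, vx, vy; } Particle;
        extern int tile_solid(int x, int y); /* hypothetical tile lookup */

        void move_particle(Particle *p) {
            p->x += p->vx;
            if (tile_solid(p->x, p->y)) { p->x -= p->vx; p->vx = -p->vx; }
            p->y += p->vy;
            if (tile_solid(p->x, p->y)) { p->y -= p->vy; p->vy = -p->vy; }
        }

    Because the tile faces are axis-aligned, a particle arriving at 45 degrees leaves at 45 degrees, as required.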

    Read the article

  • How do I multiply pixels on an SDL Surface?

    - by NoobScratcher
    Okay, so I'm able to put blank pixels into a surface and also draw gradients, pixel rectangles, etc., but I don't know how to multiply the pixels on a surface, so I was hoping someone could provide me information on this topic. I was thinking you could get the pixel member and then multiply it by 2, but that didn't give the results I wanted, so I'm now thinking that you have to actually get the position in bytes of one location to the left and one location to the right, store it in memory, and then multiply that by 2. Am I correct, or what? If so, what is it that allows me to do that, and how do I do it?
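
    "Multiplying" brightness means scaling each colour channel separately and clamping at 255; doubling the packed 32-bit value instead overflows one channel into its neighbour, which would explain the unwanted results. A sketch (C++/SDL 1.2, needs <algorithm> for std::min):

        SDL_LockSurface(surface);
        for (int y = 0; y < surface->h; ++y) {
            // walk each row via the pitch, which may exceed w * 4 bytes
            Uint32 *row = reinterpret_cast<Uint32 *>(
                static_cast<Uint8 *>(surface->pixels) + y * surface->pitch);
            for (int x = 0; x < surface->w; ++x) {
                Uint8 r, g, b;
                SDL_GetRGB(row[x], surface->format, &r, &g, &b);
                r = (Uint8) std::min(r * 2, 255);  // scale and clamp each
                g = (Uint8) std::min(g * 2, 255);  // channel on its own
                b = (Uint8) std::min(b * 2, 255);
                row[x] = SDL_MapRGB(surface->format, r, g, b);
            }
        }
        SDL_UnlockSurface(surface);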

    Read the article

  • Hexagonal Grid Coordinates To Pixel Coordinates

    - by CaptnCraig
    I am working with a hexagonal grid. I have chosen to use this coordinate system because it is quite elegant. This question talks about generating the coordinates themselves, and is quite useful. My issue now is in converting these coordinates to and from actual pixel coordinates. I am looking for a simple way to find the center of a hexagon with coordinates x, y, z. Assume (0,0) in pixel coordinates is at (0,0,0) in hex coords, and that each hexagon has an edge of length s. It seems to me like x, y, and z should each move my coordinate a certain distance along an axis, but they are interrelated in an odd way I can't quite wrap my head around. Bonus points if you can go the other direction and convert any (x,y) point in pixel coordinates to the hex that point belongs in.
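
    The interrelation is the constraint x + y + z == 0: only two axes are independent, so the conversion is cleanest via axial coordinates q = x, r = z. A sketch (Java), assuming pointy-top hexagons with edge length s:

        double px = s * Math.sqrt(3.0) * (x + z / 2.0);
        double py = s * 1.5 * z;

        // Reverse direction: invert to fractional axial coordinates...
        double q = (Math.sqrt(3.0) / 3.0 * px - py / 3.0) / s;
        double r = (2.0 / 3.0 * py) / s;
        // ...then round (q, -q - r, r) to the nearest integers, fixing the
        // component with the largest rounding error so the sum stays zero.

    Flat-top hexagons swap the roles of the two formulas, so the orientation is an assumption here.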

    Read the article

  • How can I mark a pixel in the stencil buffer?

    - by János Turánszki
    I never used the stencil buffer for anything until now, but I want to change this. I have an idea of how it should work: the GPU discards or keeps rasterized pixels before the pixel shader based on the stencil buffer value at the given position and some stencil operation. What I don't know is how I would mark a pixel in the stencil buffer with a specific value. For example, I draw my scene and want to mark everything drawn with a specific material (this material could be looked up from a texture, so ideally I would mark the pixel in the pixel shader), so that later when I do some post-processing on my scene I would only apply it to the marked pixels. I didn't find anything on the internet besides how to set up a stencil buffer and explanations of the different stencil operations. I was expecting to find some system-value semantic like SV_Depth to write to in the pixel shader (because the stencil buffer shares the same resource with the depth buffer in D3D11), but there is no such thing on MSDN. So how should I do this? If I am misunderstanding something please help me clear that up.
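
    A hedged sketch (C++/D3D11, hypothetical names): core D3D11 indeed has no SV_Stencil-style pixel-shader output; the written stencil value is the reference argument of OMSetDepthStencilState, applied wherever a pixel passes, so marking is fundamentally per draw call.

        D3D11_DEPTH_STENCIL_DESC dsd = {};
        dsd.DepthEnable      = TRUE;
        dsd.DepthWriteMask   = D3D11_DEPTH_WRITE_MASK_ALL;
        dsd.DepthFunc        = D3D11_COMPARISON_LESS;
        dsd.StencilEnable    = TRUE;
        dsd.StencilWriteMask = 0xFF;
        dsd.FrontFace.StencilFailOp      = D3D11_STENCIL_OP_KEEP;
        dsd.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
        dsd.FrontFace.StencilPassOp      = D3D11_STENCIL_OP_REPLACE;
        dsd.FrontFace.StencilFunc        = D3D11_COMPARISON_ALWAYS;
        dsd.BackFace = dsd.FrontFace;
        ID3D11DepthStencilState *markState = nullptr;
        device->CreateDepthStencilState(&dsd, &markState);
        // bind with the material's mark as the stencil reference value:
        context->OMSetDepthStencilState(markState, materialMark);

    Some per-pixel control is still possible: discarding a pixel in the shader (clip) prevents its stencil write, and if the mark truly must come from a texture lookup, writing a material ID into a small extra render target and sampling it in the post pass is a common alternative.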

    Read the article

  • Writing to a D3DFMT_R32F render target clamps to 1

    - by Mike
    I'm currently implementing a picking system. I render some objects into a frame buffer whose render target has the D3DFMT_R32F format. For each mesh, I set an integer constant evaluator, which is its material index. My shader is simple: I output the position of each vertex, and for each pixel I cast the material index to float and assign this value to the red channel:

        int ObjectIndex;
        float4x4 WvpXf : WorldViewProjection < string UIWidget = "None"; >;

        struct VS_INPUT
        {
            float3 Position : POSITION;
        };

        struct VS_OUTPUT
        {
            float4 Position : POSITION;
        };

        struct PS_OUTPUT
        {
            float4 Color : COLOR0;
        };

        VS_OUTPUT VSMain( const VS_INPUT input )
        {
            VS_OUTPUT output = (VS_OUTPUT)0;
            output.Position = mul( float4(input.Position, 1), WvpXf );
            return output;
        }

        PS_OUTPUT PSMain( const VS_OUTPUT input, in float2 vpos : VPOS )
        {
            PS_OUTPUT output = (PS_OUTPUT)0;
            output.Color.r = float( ObjectIndex );
            output.Color.gba = 0.0f;
            return output;
        }

        technique Default
        {
            pass P0
            {
                VertexShader = compile vs_3_0 VSMain();
                PixelShader  = compile ps_3_0 PSMain();
            }
        }

    The problem I have is that somehow the values written to the render target are clamped between 0.0f and 1.0f. I've tried to change the render target format, but I always get clamped values... I don't know what the root of the problem is. For information: I have a depth render target attached to the frame buffer, blending is disabled in the render state, and the stencil is disabled. Any ideas?
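
    One way to narrow this down, as a hedged diagnostic sketch (C++/D3D9, hypothetical handles): copy the render target to a system-memory surface and inspect the raw floats directly, which separates a genuine write clamp from a clamp introduced later in the readback or display path.

        IDirect3DSurface9 *sys = nullptr;
        device->CreateOffscreenPlainSurface(w, h, D3DFMT_R32F,
                                            D3DPOOL_SYSTEMMEM, &sys, nullptr);
        device->GetRenderTargetData(rtSurface, sys);  // GPU -> system copy
        D3DLOCKED_RECT lr;
        sys->LockRect(&lr, nullptr, D3DLOCK_READONLY);
        float first = *reinterpret_cast<float *>(lr.pBits); // material index?
        sys->UnlockRect();
        sys->Release();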

    Read the article

  • Scanline filling of polygons that share edges and vertices

    - by Belgin
    In this picture (a perspective projection of an icosahedron), the scanline (red) intersects the vertex at the top. In an icosahedron each edge belongs to two triangles. From edge a, only one triangle is visible; the other one is in the back. Same for edge d. Also, in order to determine what color the current pixel should be, each polygon has a flag which can either be 'in' or 'out', depending upon where on the scanline we currently are. Flags are flipped according to the intersection of the scanline with the edges. Now, as we go from a to d (because all edges are intersected with the scanline at that vertex), this happens: the triangle behind triangle 1 and triangle 1 itself are set 'in', then 2 is set 'in' and 1 is 'out', then 3 is set 'in', 2 is 'out', and finally 3 is 'out' and the one behind it is set 'in'. This is not the desired behavior, because only the triangles facing us should be set 'in'; the rest should be 'out'. How do I process the edges in the Active Edge List (a list of edges currently intersected by the scanline) so the right polygons are set 'in'? Also, I should mention that the edges are unique, which means there exists an array of edges in the data structure of the icosahedron which are pointed to by edge pointers in each of the triangles.
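
    A common fix, sketched in C: adopt a half-open edge convention, so an edge spans [ymin, ymax) and a scanline passing through a shared vertex only intersects the edges that continue past it; each polygon's flag then flips a consistent number of times. Culling back-facing triangles before they ever enter the Active Edge List handles the only-front-facing requirement separately.

        typedef struct { int ymin, ymax; } Edge;

        int edge_hits_scanline(const Edge *e, int y) {
            return y >= e->ymin && y < e->ymax; /* ymax deliberately excluded */
        }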

    Read the article

  • Java: How to Make a Player Class in a Tile-Based RPG

    - by A.K.
    So I've been following a JavaHub tutorial that basically uses a pixel engine similar to MiniCraft. I've attempted to make a Player class as such, and I'm basically making a mock Pokemon game for learning's sake:

        package pokemon.entity;

        import java.awt.Rectangle;
        import pokemon.gfx.Screen;
        import pokemon.levelgen.Tile;
        import pokemon.entity.SpritesManage;

        public class Player {
            int x, y;
            int vx, vy;
            public Rectangle AshRec;
            public Sprite AshSprite;
            Screen screen;
            Sprite[][] AshSheet;

            public Player() {
                AshSprite = SpritesManage.AshSheet[1][0];
                AshRec = new Rectangle(0, 0, 16, 16);
                x = 0;
                y = 0;
                vx = 1;
                vy = 1;
                screen.renderSprite(0, 0, AshSprite);
            }

            public void update() {
                move();
                checkCollision();
            }

            private void checkCollision() {
            }

            private void move() {
                AshRec.x += vx;
                AshRec.y += vy;
            }

            public void render(Screen screen, int x, int y) {
                screen.renderSprite(x, y, AshSprite);
            }
        }

    I guess what I really want to do is have the Player centered in the screen and have the sprite drawn based on an InputHandler. I'm just stumped as to how to sync these together.
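
    One workable shape for this, as a sketch (Java, with hypothetical InputHandler and Level types): keep the player's position in world coordinates, let input drive it in update(), and render the map shifted by the player's offset so the sprite itself always draws at the screen centre.

        public void update(InputHandler input) {
            if (input.right) x += vx;
            if (input.left)  x -= vx;
            if (input.down)  y += vy;
            if (input.up)    y -= vy;
        }

        public void render(Screen screen, Level level) {
            int camX = x - screen.width / 2;   // camera follows the player
            int camY = y - screen.height / 2;
            level.render(screen, camX, camY);  // hypothetical map draw
            screen.renderSprite(screen.width / 2, screen.height / 2, AshSprite);
        }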

    Read the article

  • How to alter image pixels of a wild life bird?

    - by NoobScratcher
    Hello, I was hoping someone knew how to move or change the color and position of actual image pixels, and could explain and show the code to do so. I know how to write pixels on a surface or the screen surface:

        unsigned int *ptr = static_cast<unsigned int *>(screen->pixels);
        int offset = y * (screen->pitch / sizeof(unsigned int));
        ptr[offset + x] = color;

    But I don't know how to alter or manipulate the pixels of a loaded PNG image. My thoughts on this: how do I get the values and locations of pixels, and what do I have to write to make that happen? Then, how do I actually change the values or locations of those pixels? Any ideas, tips or suggestions are welcome!

        int main(int argc, char *argv[])
        {
            SDL_Surface *Screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
            SDL_Surface *Image;
            Image = IMG_Load("image.png");

            bool done = false;
            SDL_Event event;
            while (!done)
            {
                SDL_FillRect(Screen, NULL, 0);
                SDL_BlitSurface(Image, NULL, Screen, NULL);

                while (SDL_PollEvent(&event))
                {
                    switch (event.type)
                    {
                        case SDL_QUIT:
                            return 0;
                    }
                }
                SDL_Flip(Screen);
            }
            return 0;
        }
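
    A loaded image is just another SDL_Surface, so the same pointer arithmetic reads and writes its pixels. A sketch (C++/SDL 1.2, hypothetical coordinates): convert the image to the screen format first so the 32-bit pixel layout assumption is safe, and lock around access.

        SDL_Surface *img = SDL_DisplayFormat(Image); // converted copy
        SDL_LockSurface(img);
        unsigned int *p = static_cast<unsigned int *>(img->pixels);
        int pitch = img->pitch / sizeof(unsigned int);
        unsigned int c = p[y * pitch + x];  // read the pixel at (x, y)
        p[ny * pitch + nx] = c;             // write it back at (nx, ny)
        SDL_UnlockSurface(img);
        // blit img instead of Image afterwards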

    Read the article

  • Picking a suitable resolution for a modern low-res game?

    - by MrKatSwordfish
    I'm working on a 2D game project right now (using SFML+OpenGL and C++) and I'm trying to figure out how to go about choosing a resolution. I want my game to have a pixel resolution that is around that of classic '16bit' era consoles like the Super Nintendo or Neo Geo. However, I'd also like to have my game fit the 16:9 aspect ratio that most modern PC monitors use. Finally I'd like to be able to include an option for running full screen. I know that I could create my own low-res 16:9 resolution that is more-or-less around the size of SNES or NeoGeo games. However, the problem seems to be that doing so would leave me with a non-standard resolution that my monitor would not be able to support in fullscreen mode. For example, if i divide the common 16:9 resolution 1920x1080 by 4, I would get a 16:9 resolution that is relatively close to the resolution used by 16bit era games; 480x270. That would be fine in a windowed mode, but I don't think that it would be supported in fullscreen mode. How can I choose a resolution that suits my needs? Can I use something like 480x270? If so, how would I go about getting fullscreen mode to work with such a non-standard resolution? (I'm guessing OpenGL/SFML might have a way of up-scaling...but..)
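
    A pattern that sidesteps the mode problem entirely: render the game at the low resolution into an off-screen texture, then stretch that texture to the desktop's native fullscreen mode, so no non-standard video mode is ever requested. A sketch (C++/SFML 2, assuming integer-friendly 480x270):

        sf::RenderWindow window(sf::VideoMode::getDesktopMode(), "Game",
                                sf::Style::Fullscreen);
        sf::RenderTexture lowRes;
        lowRes.create(480, 270);

        // each frame: draw the scene into lowRes, then stretch it
        lowRes.display();
        sf::Sprite frame(lowRes.getTexture());
        frame.setScale(window.getSize().x / 480.f,
                       window.getSize().y / 270.f);
        window.draw(frame);
        window.display();

    Disabling smoothing on the render texture keeps the chunky-pixel look; picking a low resolution that divides the common 1920x1080 evenly (as 480x270 does, by 4) avoids uneven pixel sizes.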

    Read the article

  • What is the standard way of using Q15 values?

    - by Alex
    To process 8-bit pixels, to do things like gamma correction without losing information, we normally upsample the values, work in 16 bits or whatever, and then downsample them to 8 bits. Now, this is a somewhat new area for me, so please excuse incorrect terminology etc. For my needs I have chosen to work in a "non-standard" Q15, where I only use the upper half of the range (0.0-1.0), and 0x8000 represents 1.0 instead of -1.0. This makes it much easier to calculate things in C. But I ran into a problem with SSSE3. It has the PMULHRSW instruction, which multiplies Q15 numbers, but it uses the "standard" Q15 range of [-1, 1 - 2^-15], so multiplying (my) 0x8000 (1.0) by 0x4000 (0.5) gives 0xC000 (-0.5), because it thinks 0x8000 is -1. This is quite annoying. What am I doing wrong? Should I keep my pixel values in the 0000-7FFF range? That kind of defeats the purpose of it being a fixed-point format. Is there a way around this? Maybe some trick? Is there some kind of definitive treatise on Q15 which discusses all this?
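
    The standard convention is indeed to give up on representing 1.0 exactly and keep values in [0x0000, 0x7FFF], where 0x7FFF is 0.99997; everything then agrees with the signed semantics the hardware assumes. A sketch (C) of a scalar multiply with the same rounding PMULHRSW uses:

        #include <stdint.h>

        /* (a * b + 2^14) >> 15: round-to-nearest Q15 product, matching
         * PMULHRSW's ((a*b >> 14) + 1) >> 1 behaviour. */
        int16_t q15_mul(int16_t a, int16_t b) {
            return (int16_t)(((int32_t)a * b + (1 << 14)) >> 15);
        }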

    Read the article

  • Scaling sprite velocity / co-ordinates in Android

    - by user22241
    I'm trying to find the answer to a question that I've had for a long time, but am having trouble finding it! I hope someone can help :-) I'm trying to find information on how to scale sprite velocity / movement / coordinates. What I mean is: how do I get a sprite to move at the same speed relative to the screen size / DPI, so that it takes the same amount of real time to get from one side of the screen to the other? All of the posts pertaining to sprite scaling that I can find on the various forums relate to the size of the sprite; that part I'm OK with so far. It's just that when I move a sprite, it gets there at a different speed depending on the DPI / resolution of the device. I hope I'm making sense. This is the code I have so far; instead of using explicit amounts like 1, I'm using something like the following (all variables previously declared):

        platSpeedFloat = (1 * (dpi / 160)); // on an MDPI screen, the sprite moves by 1 physical pixel

    Then basically what I'm doing is something like this:

        platSpeedSave += platSpeedFloat;  // add the float speed to the running total
        platSpeed = (int) platSpeedSave;  // cast to int so it can be checked below
        if (platSpeed == platSpeedSave)   // compare the casted int to the stored float
        {
            floorY = floorY - platSpeed;  // if they match, change the Y value
            platSpeedSave = 0;            // reset
        }

    Would be grateful if someone could assist - hope I'm making sense. The above doesn't seem to work: the sprite moves 'faster' on lower-DPI screens. Thanks
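
    A sketch (Java, hypothetical fields): if the goal is an equal crossing time on every device, derive the speed from the screen size rather than the DPI, keep the position as a float, and advance it by elapsed time; the int cast in the question discards the fractional remainder each frame, which shows up as inconsistent speeds.

        float secondsToCross = 4f;                    // chosen crossing time
        float speedPx = screenWidth / secondsToCross; // pixels per second
        spriteX += speedPx * deltaSeconds;            // float world position
        canvas.drawBitmap(sprite, spriteX, spriteY, null); // truncate only here

    Scaling by dpi / 160 gives the same physical speed everywhere, which is a different goal: on a large low-DPI screen the sprite then genuinely takes longer to cross.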

    Read the article

  • jQTouch flicker on iPad with pixel doubling

    - by websfear
    I'm using jQTouch on an iPhone application, and one of our requirements is to make this work on the iPad with pixel doubling. I believe there's a bug/issue with jQTouch on the iPad (running within an app's UIWebView, but pixel doubled) that causes the screen to flicker during transitions. Pretty much every transition has a stutter/flicker on it. Has anyone else experienced this? I also started seeing this flicker on some Android devices as well.
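
    A workaround often suggested for WebKit transition flicker, as a hedged sketch (CSS, hypothetical selector for the jQTouch page elements): force the animated pages onto their own composited layers.

        /* hide the back face and force GPU compositing on animated pages */
        .page, .page * {
            -webkit-backface-visibility: hidden;
            -webkit-transform: translate3d(0, 0, 0);
        }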

    Read the article
