Search Results

Search found 2086 results on 84 pages for 'pixel shader'.


  • underline line thickness always one pixel...

    - by Mark
    ...regardless of font size. It's an mx:Text object. (The Text object is actually being used as a mask, so I don't know if that's relevant.) If underline is set with the <u> tag in Text.htmlText, or via Text.textField.setTextFormat, the underline thickness is always just one pixel, which is not acceptable. (There are other problems with <u>, so I'm limited to using setTextFormat currently.) Can the thickness of an underline be set through CSS (textField.styleSheet, etc.)? I may have a further problem: I already use setTextFormat extensively, and the documentation says you can't use textField.setTextFormat if you use textField.setStyleSheet. I primarily need the underline to correctly simulate the look of an anchor tag.

    Read the article

  • Pan point on Google Map to specific pixel position on screen (API v3)

    - by Jake
    When overlay is a Google Maps overlay and offsetx, offsety is the pixel distance from the map's center that I want to pan latlong to, the following works:

    ```js
    var projection = overlay.getProjection();
    var pxlocation = projection.fromLatLngToContainerPixel(latlong);
    map.panTo(projection.fromContainerPixelToLatLng(
        new google.maps.Point(pxlocation.x + offsetx, pxlocation.y + offsety)));
    ```

    However, I don't always have an overlay on the map, and map.getProjection() returns a Projection rather than a MapCanvasProjection, which lacks the methods I need. Is there a way to do this without making an overlay specifically for it?

    Read the article

  • Finding specific pixel colors of a BitmapImage

    - by Andrew Shepherd
    I have a WPF BitmapImage which I loaded from a .jpg file, as follows: this.m_image1.Source = new BitmapImage(new Uri(path));. I want to query the colour at specific points; for example, what is the RGB value at pixel (65, 32)? How do I go about this? I was taking this approach:

    ```csharp
    ImageSource ims = m_image1.Source;
    BitmapImage bitmapImage = (BitmapImage)ims;
    int height = bitmapImage.PixelHeight;
    int width = bitmapImage.PixelWidth;
    int nStride = (bitmapImage.PixelWidth * bitmapImage.Format.BitsPerPixel + 7) / 8;
    byte[] pixelByteArray = new byte[bitmapImage.PixelHeight * nStride];
    bitmapImage.CopyPixels(pixelByteArray, nStride, 0);
    ```

    Though I'll confess there's a bit of monkey-see, monkey-do going on with this code. Anyway, is there a straightforward way to process this array of bytes into RGB values?
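
    The indexing arithmetic for that byte array is simple once the stride is known; the C# would be analogous to this minimal Python sketch, which assumes a 32-bit BGRA pixel layout (check bitmapImage.Format for the real one):

    ```python
    def pixel_rgb(pixel_bytes, x, y, stride, bytes_per_pixel=4):
        """Return (r, g, b) for pixel (x, y), assuming a BGRA byte layout."""
        i = y * stride + x * bytes_per_pixel
        b, g, r = pixel_bytes[i], pixel_bytes[i + 1], pixel_bytes[i + 2]
        return r, g, b

    # A 2x2 image, 4 bytes per pixel, so stride = 8.
    data = bytes([255, 0, 0, 255,   0, 255, 0, 255,
                  0, 0, 255, 255,   255, 255, 255, 255])
    print(pixel_rgb(data, 1, 0, stride=8))  # -> (0, 255, 0)
    ```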

    Read the article

  • Encode complex number as RGB pixel and back

    - by Vi
    How is it best to encode a complex number into an RGB pixel and vice versa? Probably the (logarithm of the) absolute value goes to brightness and the argument goes to hue; desaturated pixels should receive a randomized argument in the reverse transformation. Something like:

        0 -> (0,0,0); 1 -> (255,0,0); -1 -> (0,255,255); 0.5 -> (128,0,0); i -> (255,255,0); -i -> (255,0,255)
        (0,0,0) -> 0; (255,255,255) -> e^(i*random); (128,128,128) -> 0.5*e^(i*random); (0,128,128) -> -0.5

    Are there ready-made formulas for that? Edit: Looks like I just need to convert RGB to HSB and back. Edit 2: A fragment of an existing RGB-to-HSV converter:

    ```c
    if (hsv.sat == 0) {
        hsv.hue = 0; // !
        return hsv;
    }
    ```

    I don't want 0 there, I want random. And not just when hsv.sat == 0, but whenever it is lower than it should be ("should be" meaning the maximum saturation, i.e. what the forward transformation from a complex number produces).
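
    There doesn't appear to be a single ready-made formula, but the mapping above falls out of a standard HSV round-trip. A minimal Python sketch of the idea, including the randomized argument for desaturated pixels (the 0.5 saturation cut-off is an arbitrary assumption):

    ```python
    import cmath, colorsys, random

    def complex_to_rgb(z, max_mag=1.0):
        """Argument -> hue, magnitude -> value (brightness)."""
        hue = (cmath.phase(z) / (2 * cmath.pi)) % 1.0
        value = min(abs(z) / max_mag, 1.0)
        r, g, b = colorsys.hsv_to_rgb(hue, 1.0, value)
        return int(r * 255), int(g * 255), int(b * 255)

    def rgb_to_complex(rgb, max_mag=1.0):
        h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
        # An undersaturated pixel carries no reliable argument: randomize it.
        angle = 2 * cmath.pi * (h if s > 0.5 else random.random())
        return v * max_mag * cmath.exp(1j * angle)

    print(complex_to_rgb(1))   # -> (255, 0, 0)
    print(complex_to_rgb(-1))  # hue 0.5, i.e. cyan: (0, 255, 255)
    ```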

    Read the article

  • How to read pixel values of a video?

    - by vikramtheone
    Hi guys, I recently wrote C programs for image processing of BMP images. I had to read the pixel values and process them; it was very simple, because by following the BMP header layout I could get most of the information about the image. Now the challenge is to process videos frame by frame. How can I do it? How would I read the headers of the continuous stream of image frames in a video clip? Or does, for example, the MPEG format also have a universal header, upon reading which I can get the information about the entire video, with everything after the header being pure pixel data? I hope that conveys the question. Has anyone got experience with processing videos? Any books or links to tutorials would be very helpful. Vikram
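
    For what it's worth, video containers don't work like BMP: the frames are compressed, so there is no header-then-raw-pixels layout to parse by hand, and a decoding library is the usual route. A minimal sketch using OpenCV's Python bindings (assuming OpenCV is installed; the file name is a placeholder), which hands you each frame as a plain pixel array:

    ```python
    import cv2  # OpenCV does the demuxing and decoding

    cap = cv2.VideoCapture("clip.mpg")  # placeholder file name
    while True:
        ok, frame = cap.read()   # frame is a height x width x 3 BGR array
        if not ok:
            break                # end of stream
        b, g, r = frame[32, 65]  # pixel at row 32, column 65
        # ...process the pixel values here...
    cap.release()
    ```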

    Read the article

  • iframe shifts 1 pixel to left on some browser sizes

    - by Tuan Nguyen
    I have code like the following (sorry, I don't have the exact code right now, but it's valid):

    ```html
    <iframe src="..." borderframe="0" scrolling="no" width=728px" height="90px"></iframe>
    ```

    The target is an HTML file that contains the code for a banner. Everything displays well, but when I resize the browser or maximize it, the content is shifted to the left by one pixel, so the banner is displayed missing its first vertical 1px line and only 727px are visible. Anyone have an idea? Thank you.

    Read the article

  • Pixel plot method errors out without error message.

    - by sonny5
    The following method blows up (big red X on the screen) without generating any error info; any ideas why? The call MyPlot.PlotPixel(x, y, Color.BlueViolet, Grf) runs fine when commented out. My goal is to draw a pixel on a form. Is there also a way to increase the pixel size?

    ```csharp
    using System;
    using System.Drawing;
    using System.Drawing.Drawing2D;
    using System.Collections;
    using System.ComponentModel;
    using System.Windows.Forms;
    using System.Data;

    public class Plot : System.Windows.Forms.Form
    {
        private Size _ClientArea; // keeps the pixel info
        private double _Xspan;
        private double _Yspan;

        public Plot()
        {
            InitializeComponent();
        }

        public Size ClientArea
        {
            set { _ClientArea = value; }
        }

        private void InitializeComponent()
        {
            this.AutoScaleBaseSize = new System.Drawing.Size(5, 13);
            this.ClientSize = new System.Drawing.Size(400, 300);
            this.Text = "World Plot (world_plot.cs)";
            this.Resize += new System.EventHandler(this.Form1_Resize);
            this.Paint += new System.Windows.Forms.PaintEventHandler(this.doLine);
            this.Paint += new System.Windows.Forms.PaintEventHandler(this.TransformPoints); // new
            this.Paint += new System.Windows.Forms.PaintEventHandler(this.DrawRectangleFloat);
            this.Paint += new System.Windows.Forms.PaintEventHandler(this.DrawWindow_Paint);
        }

        private void DrawWindow_Paint(object sender, PaintEventArgs e)
        {
            Graphics Grf = e.Graphics;
            pixPlot(Grf);
        }

        static void Main()
        {
            Application.Run(new Plot());
        }

        private void doLine(object sender, System.Windows.Forms.PaintEventArgs e)
        {
            // No transforms done yet!
            Graphics g = e.Graphics;
            g.FillRectangle(Brushes.White, this.ClientRectangle);
            Pen p = new Pen(Color.Black);
            g.DrawLine(p, 0, 0, 100, 100); // draws DOWN in y, which is positive since no matrix is applied
            p.Dispose();
        }

        public void PlotPixel(double X, double Y, Color C, Graphics G)
        {
            Bitmap bm = new Bitmap(1, 1);
            bm.SetPixel(0, 0, C);
            G.DrawImageUnscaled(bm, TX(X), TY(Y));
        }

        private int TX(double X) // transform real coordinates to pixels for the X-axis
        {
            double w = _ClientArea.Width / _Xspan * X + _ClientArea.Width / 2;
            return Convert.ToInt32(w);
        }

        private int TY(double Y) // transform real coordinates to pixels for the Y-axis
        {
            double w = _ClientArea.Height / _Yspan * Y + _ClientArea.Height / 2;
            return Convert.ToInt32(w);
        }

        private void pixPlot(Graphics Grf)
        {
            Plot MyPlot = new Plot();
            double x = 12.0;
            double y = 10.0;
            MyPlot.ClientArea = this.ClientSize;
            Console.WriteLine("x = {0}", x);
            Console.WriteLine("y = {0}", y);
            //MyPlot.PlotPixel(x, y, Color.BlueViolet, Grf); // blows up
        }

        private void DrawRectangleFloat(object sender, PaintEventArgs e)
        {
            // Create pen.
            Pen penBlu = new Pen(Color.Blue, 2);
            // Create location and size of rectangle.
            float x = 0.0F;
            float y = 0.0F;
            float width = 200.0F;
            float height = 200.0F; // translate DOWN by 200 pixels
            // Draw rectangle to screen.
            e.Graphics.DrawRectangle(penBlu, x, y, width, height);
        }

        private void TransformPoints(object sender, System.Windows.Forms.PaintEventArgs e)
        {
            // After transforms.
            Graphics g = this.CreateGraphics();
            Pen penGrn = new Pen(Color.Green, 3);
            Matrix myMatrix2 = new Matrix(1, 0, 0, -1, 0, 0); // flip the Y axis with -1
            g.Transform = myMatrix2;
            // Translate DOWN the same distance as the rectangle,
            // which puts the origin at the lower-left corner.
            g.TranslateTransform(0, 200, MatrixOrder.Append);
            g.DrawLine(penGrn, 0, 0, 100, 90); // notice that y = 90 is going UP
        }

        private void Form1_Resize(object sender, System.EventArgs e)
        {
            Invalidate();
        }
    }
    ```

    Read the article

  • Calculating the pixel size of a string with Python

    - by Aristide
    I have a Python script which needs to calculate the exact size of arbitrary strings displayed in arbitrary fonts, in order to generate simple diagrams. I can do it easily with Tkinter:

    ```python
    import Tkinter as tk
    import tkFont

    root = tk.Tk()
    times12 = tkFont.Font(family="times", size=12)
    print times12.metrics("linespace"),
    print times12.measure("Hello world")
    times24 = tkFont.Font(family="times", size=24)
    print times24.metrics("linespace"),
    print times24.measure("Hello world")
    ```

    The problem is that the results seem to depend on the version of Python and/or the system. Python 2.5 on Mac OS X gives the actual pixel measurements: 12 57, 24 116. Python 2.6.1 on Mac OS X gives: 14 58, 27 115. Python 2.6.3 on Windows XP gives: 19 71, 36 154. Such a need being quite common, I suspect I did something wrong. Any idea?
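
    A likely explanation is that nothing is wrong with the code: Tk delegates measurement to the platform's font machinery, so the numbers legitimately vary across systems and Tk builds. If renderer-independent measurements are needed, one option is to rasterize with a single engine such as Pillow (8+), using a font file shipped alongside the script; a sketch, where the font path is a placeholder:

    ```python
    from PIL import ImageFont  # Pillow

    font = ImageFont.truetype("times.ttf", 12)  # ship a known .ttf with the script
    x0, y0, x1, y1 = font.getbbox("Hello world")
    ascent, descent = font.getmetrics()
    print("width:", x1 - x0, "linespace:", ascent + descent)
    ```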

    Read the article

  • Wanting a type of grid for a pixel editor

    - by wiggles
    Hi, I am currently trying to develop a basic pixel-editor application to build up my programming experience with Java. I am designing it so the user has several colour options: they click on an option, then drag over the cells in the grid and the cells change colour (like a typical image editor, but with a sort of snap-on to each grid cell). Any idea which Java component, if any, could implement this kind of grid? I had thought of each cell being a JButton, but that seemed terribly inefficient, and I don't think it would be possible to change the colour of each cell (button) without individually clicking on each one. Any help appreciated.
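
    The usual design in any toolkit is one custom-painted component rather than a grid of buttons: keep a 2D array of colours, repaint the cells from it, and on mouse drag integer-divide the event coordinates by the cell size to find which cell to recolour. In Java that would be a JPanel overriding paintComponent plus a MouseMotionListener; the snap-to-cell idea itself, sketched minimally in Python/Tkinter:

    ```python
    import tkinter as tk

    CELL, COLS, ROWS = 16, 32, 24
    root = tk.Tk()
    canvas = tk.Canvas(root, width=COLS * CELL, height=ROWS * CELL, bg="white")
    canvas.pack()

    def paint(event):
        # Integer division snaps the mouse position to a grid cell.
        col, row = event.x // CELL, event.y // CELL
        if 0 <= col < COLS and 0 <= row < ROWS:
            canvas.create_rectangle(col * CELL, row * CELL,
                                    (col + 1) * CELL, (row + 1) * CELL,
                                    fill="black", outline="")

    canvas.bind("<Button-1>", paint)   # click a cell
    canvas.bind("<B1-Motion>", paint)  # drag across cells
    root.mainloop()
    ```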

    Read the article

  • How to efficiently deal with a large amount of HTML5 canvas pixel data over websockets

    - by user730569
    Using

    ```js
    imageData = context.getImageData(0, 0, width, height);
    JSON.stringify(imageData.data);
    ```

    I grab the pixel data and convert it to a string, then send it over the wire via WebSockets. However, this string can be pretty large, depending on the size of the canvas object. I tried the compression technique found in "JavaScript implementation of Gzip", but socket.io throws the error "Websocket message contains invalid character(s)". Is there an effective way to compress this data so that it can be sent over WebSockets?
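
    Two things are worth checking before reaching for a JavaScript gzip port: socket.io's "invalid character" error typically comes from pushing compressed bytes through a text frame (binary frames or base64-encoding avoid it), and raw pixel data usually deflates very well thanks to flat colour regions. A quick Python check of the kind of zlib ratio to expect on canvas-like data:

    ```python
    import zlib, random

    # Synthetic RGBA data: one random colour repeated, like a flat canvas region.
    pixel = [random.randrange(256) for _ in range(4)]
    data = bytes(pixel * 65536)        # 256 KiB of pixel data
    packed = zlib.compress(data, 6)
    print(len(data), "->", len(packed), "bytes")
    ```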

    Read the article

  • Is Android (read typical devices) fast enough for a game that requires plotting pixel by pixel rather than blitting

    - by mP
    I have an idea for an Android game which is a little different from the typical game that moves sprites (bitmaps) around the screen: I'd want to plot lots of little pixels to create my visuals.

    PROS: no bitmaps required; pixel-plotted effects like "fire" can react to wind; no need to scale bitmaps, so it works with any screen resolution (let's pretend a device with a bigger screen can handle more drawing).

    CONS: plotting pixels is slower than blitting bitmaps; it needs lots of animation frames.

    WISHES: I'd like to update my game in real time; more is better; 30fps is good but not essential, 15fps is enough.

    PERFORMANCE QUESTION: is the typical Android device fast enough to plot, say, half a screenful of pixels over a default background? If a full screen is not practical, what window size should be able to handle such refreshes?

    Read the article

  • Converting between square and rectangular pixel co-ordinates

    - by FlyboyUtah
    I'm new to transforms and this type of math, and would appreciate some direction in solving my coding problem. I'm writing in Xcode for the iPhone, using Core Graphics. The problem: I want to draw curves, lines and so on in a coordinate system of square pixels, then convert those points, as closely as possible, into a non-square-pixel system. For example, suppose the original coordinate system is 500 x 500 pixels displayed on a square screen of 10 by 10 inches, and I draw a circle with the circle formula. It looks round, and all is well. Now I draw the same circle on a second 10 x 10 inch screen that is 850 pixels by 500 pixels: without changing the coordinates, the same circle formula displays something that looks like an egg. How can I draw the circle on the second screen in its different coordinate system? In addition, I need to be able to access each converted x,y point individually.
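
    Since both screens cover the same 10 x 10 inches, the conversion is just a per-axis scale factor (the horizontal pixels-per-inch changes, the vertical doesn't). A minimal Python sketch of the mapping, using the resolutions from the question:

    ```python
    import math

    def square_to_rect(x, y, src=(500, 500), dst=(850, 500)):
        """Map a point from the square-pixel system into the rectangular one."""
        return x * dst[0] / src[0], y * dst[1] / src[1]

    # A circle authored in 500x500 coordinates, re-plotted for the 850x500 screen;
    # each converted (x, y) pair remains individually addressable.
    points = [square_to_rect(250 + 100 * math.cos(t), 250 + 100 * math.sin(t))
              for t in (2 * math.pi * i / 60 for i in range(60))]
    print(points[0])  # -> (595.0, 250.0)
    ```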

    Read the article

  • In an OpenGL vertex shader, gl_Position doesn't get homogenized

    - by KJ
    Hi everyone, I was expecting gl_Position to automatically get homogenized (divided by w), but it seems not to be working. Why do the following two versions produce different results?

    ```glsl
    // 1)
    void main() {
        vec4 p;
        // ... omitted ...
        gl_Position = projectionMatrix * p;
    }

    // 2) ... same as above ...
        p = projectionMatrix * p;
        gl_Position = p / p.w;
    ```

    I think the two are supposed to generate the same results, but that doesn't seem to be the case: 1 doesn't work, while 2 works as expected. Could it possibly be a precision problem? Am I missing something? This is driving me almost crazy; help needed. Many thanks in advance!

    Read the article

  • Enumerating pixel formats for adaptors and modes with OpenGL

    - by Robinson
    I'm trying to code an OpenGL path for my 3D engine. The D3D path enumerates all device adaptors, all modes (by mode I mean bit depth, dimensions, windowed availability, and refresh rate) for each adaptor, and then all pixel formats available for the given mode and adaptor, alongside certain useful caps (shader version, filter types, etc.). So I have, broadly, the following protected functions in the class:

    ```cpp
    // Enumerate all back/front buffer combinations.
    virtual void EnumerateBackFrontBufferCombinations(CComPtr<IDirect3D9>& d3d9);

    // Enumerate all depth/stencil formats.
    virtual void EnumerateDepthStencilFormats(CComPtr<IDirect3D9>& d3d9);

    // Enumerate all multi-sample formats.
    virtual void EnumerateMultiSampleTypes(CComPtr<IDirect3D9>& d3d9);

    // Enumerate all device formats, i.e. dynamic, static, render target, etc.
    virtual void EnumerateMapFormats(CComPtr<IDirect3D9>& d3d9);

    // Enumerate all capabilities.
    virtual void EnumerateCapabilities(CComPtr<IDirect3D9>& d3d9);
    ```

    The adaptors are enumerated with EnumDisplayDevices and the modes (resolutions and refresh rates) with EnumDisplaySettings, so this can be done for either GL or D3D. The other functions I'm not so sure about with OpenGL. What are the equivalents to IDirect3D9's CheckDeviceType, CheckDeviceFormat, CheckDeviceMultiSampleType, and CheckDepthStencilMatch? I know I can use DescribePixelFormat given a DC, but you kind of need to have created the window before you can use a DC with it, yet you can't create the window correctly until you know what formats you're going to use. Any tips? Thanks.

    Read the article

  • Silverlight graphics pixel side position?

    - by Tuukka
    I'm trying to port a simple game (SameGame) to Silverlight. The problem is that my old source code used pixel sizes to align the game marks to the board. I draw a simple grid using lines and a game mark using a rectangle. How can I set the rectangle's position correctly (for example, 20, 20 pixels from the upper-left corner)?

    ```csharp
    private void DrawGrid()
    {
        LayoutRoot.Children.Clear();

        Rectangle r = new Rectangle();
        r.Width = 20;
        r.Height = 20;
        r.Fill = new SolidColorBrush(Color.FromArgb(255, 0, 255, 0));
        r.Stroke = new SolidColorBrush(Color.FromArgb(255, 0, 255, 0));
        r.SetValue(Canvas.LeftProperty, (double)0);
        r.SetValue(Canvas.TopProperty, (double)0);
        LayoutRoot.Children.Add(r);

        Color GridColor = Color.FromArgb(0xFF, 0x00, 0x00, 0x00);
        for (int y = 0; y < 11; y++)
        {
            Line l = new Line();
            l.X1 = 0;
            l.Y1 = 30 * y - 1;
            l.X2 = 20 * 30;
            l.Y2 = 30 * y - 1;
            l.Stroke = new SolidColorBrush(GridColor);
            l.StrokeThickness = 1;
            LayoutRoot.Children.Add(l);
        }
        for (int x = 0; x < 21; x++)
        {
            Line l = new Line();
            l.X1 = 30 * x;
            l.Y1 = 0;
            l.X2 = 30 * x;
            l.Y2 = 10 * 30;
            l.Stroke = new SolidColorBrush(GridColor);
            l.StrokeThickness = 1;
            LayoutRoot.Children.Add(l);
        }
    }
    ```

    Read the article

  • glGetActiveAttrib on Android NDK

    - by user408952
    In my code-base I need to link the vertex declarations from a mesh to the attributes of a shader. To do this I retrieve all the attribute names after linking the shader, using the following code (with some added debug info, since it's not really working):

    ```cpp
    int shaders[] = { m_ps, m_vs };
    if (linkProgram(shaders, 2))
    {
        ASSERT(glIsProgram(m_program) == GL_TRUE, "program is invalid");

        int attrCount = 0;
        GL_CHECKED(glGetProgramiv(m_program, GL_ACTIVE_ATTRIBUTES, &attrCount));
        int maxAttrLength = 0;
        GL_CHECKED(glGetProgramiv(m_program, GL_ACTIVE_ATTRIBUTE_MAX_LENGTH, &maxAttrLength));
        LOG_INFO("shader", "got %d attributes for '%s' (%d) (maxlen: %d)",
                 attrCount, name, m_program, maxAttrLength);

        m_attrs.reserve(attrCount);
        GLsizei attrLength = -1;
        GLint attrSize = -1;
        GLenum attrType = 0;
        char tmp[256];
        for (int i = 0; i < attrCount; i++)
        {
            tmp[0] = 0;
            GL_CHECKED(glGetActiveAttrib(m_program, GLuint(i), sizeof(tmp),
                                         &attrLength, &attrSize, &attrType, tmp));
            LOG_INFO("shader", "%d: %d %d '%s'", i, attrLength, attrSize, tmp);
            m_attrs.append(String(tmp, attrLength));
        }
    }
    ```

    GL_CHECKED is a macro that calls the function and then calls glGetError() to see if something went wrong. This code works perfectly on Windows 7 using ANGLE and gives this output:

        info:shader: got 2 attributes for 'static/simplecolor.glsl' (3) (maxlen: 11)
        info:shader: 0: 7 1 'a_Color'
        info:shader: 1: 10 1 'a_Position'

    But on my Nexus 7 (1st gen) I get the following (the errors are the output from the GL_CHECKED macro):

        I/testgame:shader(30865): got 2 attributes for 'static/simplecolor.glsl' (3) (maxlen: 11)
        E/testgame:gl(30865): 'glGetActiveAttrib(m_program, GLuint(i), sizeof(tmp), &attrLength, &attrSize, &attrType, tmp)' failed: INVALID_VALUE [jni/src/../../../../src/Game/Asset/ShaderAsset.cpp:50]
        I/testgame:shader(30865): 0: -1 -1 ''
        E/testgame:gl(30865): 'glGetActiveAttrib(m_program, GLuint(i), sizeof(tmp), &attrLength, &attrSize, &attrType, tmp)' failed: INVALID_VALUE [jni/src/../../../../src/Game/Asset/ShaderAsset.cpp:50]
        I/testgame:shader(30865): 1: -1 -1 ''

    So the call to glGetActiveAttrib gives me INVALID_VALUE. The OpenGL docs list these possible causes:

    GL_INVALID_VALUE is generated if program is not a value generated by OpenGL. This is not the case: I added an ASSERT that glIsProgram(m_program) == GL_TRUE, and it doesn't trigger.

    GL_INVALID_OPERATION is generated if program is not a program object. Different error.

    GL_INVALID_VALUE is generated if index is greater than or equal to the number of active attribute variables in program. i is 0 and 1, and there are 2 active attribute variables, so this isn't the case.

    GL_INVALID_VALUE is generated if bufSize is less than 0. It's 256.

    Does anyone have an idea what's causing this? Am I just lucky that it works in ANGLE, or is the Nvidia Tegra driver wrong?

    Read the article

  • Internet Explorer table 1 pixel spacing problem

    - by Dennis G.
    I've found a strange problem with Internet Explorer related to table spacing and cannot find a way to work around it: an empty table results in a single pixel of white space in Internet Explorer (6 and 7; 8 not yet tested), while all other browsers ignore the empty table. Here is the minimum HTML code to reproduce the issue (note that more margin/padding CSS attributes and table attributes are specified than really needed; I was just testing whether they would fix IE's behavior):

    ```html
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
    <body>
      <div style="width: 200px; border: 1px black solid">
        <table border="0" cellspacing="0" cellpadding="0" style="margin: 0pt; padding: 0pt; border-collapse: collapse;">
          <tr>
            <td style="padding: 0; margin: 0"> </td>
          </tr>
        </table>
        <div style="background: red">
          Test
        </div>
      </div>
    </body>
    </html>
    ```

    I'm not actually using an empty table as in the example above, but this was the minimum code that displays the behavior. Any ideas on how to fix this and remove the white space in IE?

    Read the article

  • How to control in the vertex shader where a pixel ends up in the render target?

    - by cubrman
    What if I have an arbitrary render target that is smaller than the screen (say, 1x1 pixel), and I want to make sure in the VertexShaderFunction that all my pixels end up exactly in that one-pixel region? No matter what I do, they all seem to get culled at some point, though GraphicsDevice.Clear() works OK. Where is the top-left corner of the render target, vertex-shader-wise? I tried output.Position = (0,0,0,0), (0,0,0,1), (1,1,1,1) and (-0.5,0.5,0,1); NOTHING works! A fullscreen quad is not an option, because I actually need to process geometry in the shaders to get the results I need.

    Read the article

  • Getting pixel data from an image using Java

    - by Matt
    I'm trying to get the pixel RGB values from a 64 x 48 pixel image. I get some values, but nowhere near the 3072 (= 64 x 48) values I'm expecting. I also get:

        Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Coordinate out of bounds!
            at sun.awt.image.ByteInterleavedRaster.getDataElements(ByteInterleavedRaster.java:301)
            at java.awt.image.BufferedImage.getRGB(BufferedImage.java:871)
            at imagetesting.Main.getPixelData(Main.java:45)
            at imagetesting.Main.main(Main.java:27)

    I can't find the out-of-bounds error. Here's the code:

    ```java
    package imagetesting;

    import java.io.IOException;
    import javax.imageio.ImageIO;
    import java.io.File;
    import java.awt.image.BufferedImage;

    public class Main {

        public static final String IMG = "matty.jpg";

        public static void main(String[] args) {
            BufferedImage img;
            try {
                img = ImageIO.read(new File(IMG));
                int[][] pixelData = new int[img.getHeight() * img.getWidth()][3];
                int[] rgb;
                int counter = 0;
                for (int i = 0; i < img.getHeight(); i++) {
                    for (int j = 0; j < img.getWidth(); j++) {
                        rgb = getPixelData(img, i, j);
                        for (int k = 0; k < rgb.length; k++) {
                            pixelData[counter][k] = rgb[k];
                        }
                        counter++;
                    }
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        private static int[] getPixelData(BufferedImage img, int x, int y) {
            int argb = img.getRGB(x, y);
            int rgb[] = new int[] {
                (argb >> 16) & 0xff, // red
                (argb >>  8) & 0xff, // green
                (argb      ) & 0xff  // blue
            };
            System.out.println("rgb: " + rgb[0] + " " + rgb[1] + " " + rgb[2]);
            return rgb;
        }
    }
    ```

    Read the article

  • Light following me around the room. Something is wrong with my shader!

    - by Robinson
    I'm trying to do a spot (Blinn) light with falloff and attenuation. It seems to be working OK, except I have a bit of a space problem: whenever I move the camera, the light moves to maintain the same relative position rather than changing with the camera. This results in the light moving around, i.e. not always falling on the same surfaces. It's as if there's a flashlight attached to the camera. I'm transforming the lights beforehand into view space, so Light_Position and Light_Direction are already in eye space (I hope!). I made a little movie of what it looks like: my camera rotating around a point inside a box, with the light fixed at the centre, up high, and its "look at" point in a fixed position in front of it. As the camera rotates around the origin (always looking at the centre), the lighting follows it around; don't think the box is rotating (it isn't!).

    To start, some code. This is how I'm transforming the light into view space (it gets passed into the shader already in view space):

    ```cpp
    // Compute eye-space light position.
    Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition;
    MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition);

    // Compute eye-space light direction vector.
    Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition);
    MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection);
    MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection);
    ```

    Can anyone give me a clue as to what I'm doing wrong here? I think the light should remain looking at a fixed point on the box, regardless of the camera orientation. Here are the vertex and pixel shaders:

    ```glsl
    ///////////////////////////////////////////////////
    // Vertex Shader
    ///////////////////////////////////////////////////
    #version 420

    // Camera.
    layout (std140) uniform Camera
    {
        mat4 Camera_View;
        mat4 Camera_ViewInverseTranspose;
        mat4 Camera_Projection;
    };

    // Matrices per model.
    layout (std140) uniform Model
    {
        mat4 Model_World;
        mat4 Model_WorldView;
        mat4 Model_WorldViewInverseTranspose;
        mat4 Model_WorldViewProjection;
    };

    // Spotlight.
    layout (std140) uniform OmniLight
    {
        float Light_Intensity;
        vec3  Light_Position;
        vec3  Light_Direction;
        vec4  Light_Ambient_Colour;
        vec4  Light_Diffuse_Colour;
        vec4  Light_Specular_Colour;
        float Light_Attenuation_Min;
        float Light_Attenuation_Max;
        float Light_Cone_Min;
        float Light_Cone_Max;
    };

    // Streams (per vertex).
    layout(location = 0) in vec3 attrib_Position;
    layout(location = 1) in vec3 attrib_Normal;
    layout(location = 2) in vec3 attrib_Tangent;
    layout(location = 3) in vec3 attrib_BiNormal;
    layout(location = 4) in vec2 attrib_Texture;

    // Output streams (per vertex).
    out vec3 attrib_Fragment_Normal;
    out vec4 attrib_Fragment_Position;
    out vec2 attrib_Fragment_Texture;
    out vec3 attrib_Fragment_Light;
    out vec3 attrib_Fragment_Eye;

    void main()
    {
        // Transform normal into eye space.
        attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;

        // Transform vertex into eye space (world * view * vertex = eye).
        vec4 position = Model_WorldView * vec4(attrib_Position, 1.0);

        // Compute vector from eye-space vertex to light (light is in eye space already).
        attrib_Fragment_Light = Light_Position - position.xyz;

        // Compute vector from the vertex to the eye (which is now at the origin).
        attrib_Fragment_Eye = -position.xyz;

        // Output texture coord.
        attrib_Fragment_Texture = attrib_Texture;

        // Compute vertex position by applying camera projection.
        gl_Position = Camera_Projection * position;
    }
    ```

    ```glsl
    ///////////////////////////////////////////////////
    // Pixel Shader
    ///////////////////////////////////////////////////
    #version 420

    uniform sampler2D Map_Diffuse;

    // Material.
    layout (std140) uniform Material
    {
        vec4  Material_Ambient_Colour;
        vec4  Material_Diffuse_Colour;
        vec4  Material_Specular_Colour;
        vec4  Material_Emissive_Colour;
        float Material_Shininess;
        float Material_Strength;
    };

    // Spotlight.
    layout (std140) uniform OmniLight
    {
        float Light_Intensity;
        vec3  Light_Position;
        vec3  Light_Direction;
        vec4  Light_Ambient_Colour;
        vec4  Light_Diffuse_Colour;
        vec4  Light_Specular_Colour;
        float Light_Attenuation_Min;
        float Light_Attenuation_Max;
        float Light_Cone_Min;
        float Light_Cone_Max;
    };

    // Input streams (per vertex).
    in vec3 attrib_Fragment_Normal;
    in vec3 attrib_Fragment_Position;
    in vec2 attrib_Fragment_Texture;
    in vec3 attrib_Fragment_Light;
    in vec3 attrib_Fragment_Eye;

    // Result.
    out vec4 Out_Colour;

    void main(void)
    {
        // Compute N dot L.
        vec3 N = normalize(attrib_Fragment_Normal);
        vec3 L = normalize(attrib_Fragment_Light);
        vec3 E = normalize(attrib_Fragment_Eye);
        vec3 H = normalize(L + E);
        float NdotL = clamp(dot(L, N), 0.0, 1.0);
        float NdotH = clamp(dot(N, H), 0.0, 1.0);

        // Compute ambient term.
        vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour;

        // Diffuse.
        vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture)
                     * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;

        // Specular.
        float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength;
        vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity;

        // Light attenuation (so we don't have to use 1 - x, we step between Max and Min).
        float d = length(-attrib_Fragment_Light);
        float attenuation = smoothstep(Light_Attenuation_Max, Light_Attenuation_Min, d);

        // Adjust attenuation based on light cone.
        float LdotS = dot(-L, Light_Direction),
              CosI  = Light_Cone_Min - Light_Cone_Max;
        attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0);

        // Final colour.
        Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation;
    }
    ```
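
    One thing worth double-checking with this symptom: positions and directions transform differently. A point picks up the view translation (w = 1), while a direction must not (w = 0); using the inverse-transpose, as the snippet above does, only matches the plain view rotation when the view matrix is rigid. A plain-Python illustration of the w = 1 versus w = 0 distinction, using a hypothetical view matrix (not the engine's actual math):

    ```python
    def transform(m, v, w):
        """Apply a 4x4 row-major matrix (nested lists) to (x, y, z, w)."""
        x, y, z = v
        return tuple(m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3] * w
                     for r in range(3))

    view = [[1, 0, 0, 0],   # hypothetical view matrix:
            [0, 1, 0, 0],   # identity rotation, translated -5 along z
            [0, 0, 1, -5],
            [0, 0, 0, 1]]

    print(transform(view, (0, 10, 0), w=1))  # position: picks up the translation
    print(transform(view, (0, -1, 0), w=0))  # direction: rotation only
    ```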

    Read the article

  • Test whether pixel is inside the blobs for ofxOpenCV

    - by mia
    I am building an application around the concept of dodgeball and need to test whether a pixel of the ball is inside the captured blobs (the image of the player). I'm stuck and have run out of ideas for how to implement it. I've managed to make a little progress and have the blobs, but I'm not sure how to do the test. Please help; I'm a newbie in a desperate condition. Thank you. This is some of my code:

    ```cpp
    void testApp::setup(){
        #ifdef _USE_LIVE_VIDEO
            vidGrabber.setVerbose(true);
            vidGrabber.initGrabber(widthS, heightS);
        #else
            vidPlayer.loadMovie("fingers.mov");
            vidPlayer.play();
        #endif

        widthS = 320;
        heightS = 240;

        colorImg.allocate(widthS, heightS);
        grayImage.allocate(widthS, heightS);
        grayBg.allocate(widthS, heightS);
        grayDiff.allocate(widthS, heightS); // <-- what I want

        bLearnBakground = true;
        threshold = 80;

        // circle
        counter = 0;
        radius = 0;
        circlePosX = 100;
        circlePosY = 200;
    }

    void testApp::update(){
        ofBackground(100, 100, 100);

        bool bNewFrame = false;
        #ifdef _USE_LIVE_VIDEO
            vidGrabber.grabFrame();
            bNewFrame = vidGrabber.isFrameNew();
        #else
            vidPlayer.idleMovie();
            bNewFrame = vidPlayer.isFrameNew();
        #endif

        if (bNewFrame){
            if (bLearnBakground == true){
                grayBg = grayImage; // the = sign copies the pixels from grayImage into grayBg (operator overloading)
                bLearnBakground = false;
            }

            #ifdef _USE_LIVE_VIDEO
                colorImg.setFromPixels(vidGrabber.getPixels(), widthS, heightS);
            #else
                colorImg.setFromPixels(vidPlayer.getPixels(), widthS, heightS);
            #endif

            grayImage = colorImg;
            grayDiff.absDiff(grayBg, grayImage);
            grayDiff.threshold(threshold);
            contourFinder.findContours(grayDiff, 20, (340*240)/3, 10, true); // find holes
        }

        // circle
        counter = counter + 0.05f;
        if (radius >= 50){
            circlePosX = ofRandom(10, 300);
            circlePosY = ofRandom(10, 230);
        }
        radius = 5 + 3 * counter;
    }

    void testApp::draw(){
        // draw the incoming, the grayscale, the bg and the thresholded difference
        ofSetColor(0xffffff); // white
        grayDiff.draw(10, 10); // drawing starts from point (10, 10)

        // we could draw the whole contour finder, or instead
        // draw each blob individually; this is how to get access to them:
        for (int i = 0; i < contourFinder.nBlobs; i++){
            contourFinder.blobs[i].draw(10, 10);
        }

        // let's draw a circle:
        ofSetColor(0, 0, 255);
        char buffer[255];
        float a = radius;
        sprintf(buffer, "radius = %i", a);
        ofDrawBitmapString(buffer, 120, 300);
        if (radius >= 50)
        {
            ofSetColor(255, 255, 255);
            counter = 0;
        }
        else {
            ofSetColor(255, 0, 0);
        }
        ofFill();
        ofCircle(circlePosX, circlePosY, radius);
    }
    ```
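
    The test being asked for is a point-in-polygon check against each blob's contour (ofxCvBlob exposes the contour points in blob.pts). OpenCV ships exactly this as pointPolygonTest; a minimal sketch of the call through the Python bindings, with a hypothetical rectangular contour standing in for a blob:

    ```python
    import cv2
    import numpy as np

    # A blob's contour points, e.g. copied from ofxCvBlob::pts.
    contour = np.array([[10, 10], [200, 10], [200, 150], [10, 150]],
                       dtype=np.int32).reshape(-1, 1, 2)

    def ball_hits_blob(x, y):
        # pointPolygonTest returns +1 inside, 0 on the edge, -1 outside.
        return cv2.pointPolygonTest(contour, (float(x), float(y)), False) >= 0

    print(ball_hits_blob(100, 80))  # True: inside the contour
    print(ball_hits_blob(300, 80))  # False: outside
    ```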

    Read the article

  • Direct3D11 and SharpDX - How to pass a model instance's world matrix as an input to a vertex shader

    - by Nathan Ridley
    Using Direct3D11, I'm trying to pass a matrix into my vertex shader from the instance buffer associated with a given model's vertices, and I can't seem to construct my InputLayout without throwing an exception. The shader looks like this:

    ```hlsl
    cbuffer ConstantBuffer : register(b0)
    {
        matrix World;
        matrix View;
        matrix Projection;
    }

    struct VIn
    {
        float4 position: POSITION;
        matrix instance: INSTANCE;
        float4 color: COLOR;
    };

    struct VOut
    {
        float4 position : SV_POSITION;
        float4 color : COLOR;
    };

    VOut VShader(VIn input)
    {
        VOut output;
        output.position = mul(input.position, input.instance);
        output.position = mul(output.position, View);
        output.position = mul(output.position, Projection);
        output.color = input.color;
        return output;
    }
    ```

    The input layout looks like this:

    ```csharp
    var elements = new[]
    {
        new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0, InputClassification.PerVertexData, 0),
        new InputElement("INSTANCE", 0, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1),
        new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 12, 0)
    };
    InputLayout = new InputLayout(device, signature, elements);
    ```

    The buffer initialization looks like this:

    ```csharp
    public ModelDeviceData(Model model, Device device)
    {
        Model = model;
        var vertices = Helpers.CreateBuffer(device, BindFlags.VertexBuffer, model.Vertices);
        var instances = Helpers.CreateBuffer(device, BindFlags.VertexBuffer,
                                             Model.Instances.Select(m => m.WorldMatrix).ToArray());
        VerticesBufferBinding = new VertexBufferBinding(vertices, Utilities.SizeOf<ColoredVertex>(), 0);
        InstancesBufferBinding = new VertexBufferBinding(instances, Utilities.SizeOf<Matrix>(), 0);
        IndicesBuffer = Helpers.CreateBuffer(device, BindFlags.IndexBuffer, model.Triangles);
    }
    ```

    The buffer creation helper method looks like this:

    ```csharp
    public static Buffer CreateBuffer<T>(Device device, BindFlags bindFlags, params T[] items) where T : struct
    {
        var len = Utilities.SizeOf(items);
        var stream = new DataStream(len, true, true);
        foreach (var item in items)
            stream.Write(item);
        stream.Position = 0;
        var buffer = new Buffer(device, stream, len, ResourceUsage.Default, bindFlags,
                                CpuAccessFlags.None, ResourceOptionFlags.None, 0);
        return buffer;
    }
    ```

    The line that instantiates the InputLayout object throws this exception:

        HRESULT: [0x80070057], Module: [General], ApiCode: [E_INVALIDARG/Invalid Arguments], Message: The parameter is incorrect.

    Note that the data for each model instance is simply an instance of SharpDX.Matrix.

    EDIT: Based on Tordin's answer, it seems like I have to modify my code like so:

    ```csharp
    var elements = new[]
    {
        new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0, InputClassification.PerVertexData, 0),
        new InputElement("INSTANCE0", 0, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1),
        new InputElement("INSTANCE1", 1, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1),
        new InputElement("INSTANCE2", 2, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1),
        new InputElement("INSTANCE3", 3, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1),
        new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 12, 0)
    };
    ```

    and in the shader:

    ```hlsl
    struct VIn
    {
        float4 position: POSITION;
        float4 instance0: INSTANCE0;
        float4 instance1: INSTANCE1;
        float4 instance2: INSTANCE2;
        float4 instance3: INSTANCE3;
        float4 color: COLOR;
    };

    VOut VShader(VIn input)
    {
        VOut output;
        matrix world = { input.instance0, input.instance1, input.instance2, input.instance3 };
        output.position = mul(input.position, world);
        output.position = mul(output.position, View);
        output.position = mul(output.position, Projection);
        output.color = input.color;
        return output;
    }
    ```

    However, I still get an exception.

    Read the article
