Search Results

Search found 1379 results on 56 pages for 'fragment shader'.

Page 13/56

  • OpenGL CPU vs. GPU

    - by Nitrex88
    So I've always been under the impression that doing work on the GPU is always faster than on the CPU. Because of this, in OpenGL, I usually try to do intensive tasks in shaders so they get the speed boost from the GPU. However, now I'm starting to realize that some things simply work better on the CPU and actually perform worse on the GPU (particularly when a geometry shader is involved). For example, in a recent project I did involving procedurally generated terrain, I tried passing a grid of single triangles into a geometry shader, and tessellated each of these triangles into quads with 400 vertices whose height was determined by a noise function. This worked fine, and looked great, but easily maxed out the GPU with only 25 base triangles and caused a very slow framerate. I then discovered that tessellating on the CPU instead, and setting the height (using the noise function) in the vertex shader, was actually faster! This prompted me to question the benefits of using the GPU as much as possible... So, I was wondering if someone could describe the general pros and cons of using the GPU vs. CPU for intensive graphics tasks. I know this mainly comes down to what you're trying to achieve, so if necessary, use the above scenario to discuss why the "CPU + vertex shader" was actually faster than doing everything in the geometry shader on the GPU. It's possible my hardware (newest MacBook Pro) isn't optimized well for the geometry shader (thus causing the slow framerate). Also, I read that the vertex shader is very good with parallelism, and would love a quick explanation of how this may have played a role in speeding up my procedural terrain. Any info/advice about CPU/GPU/shaders would be awesome!
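
    For reference, a minimal sketch of the CPU-side tessellation approach described above (heights left at zero so a noise function in the vertex shader can displace them). The function and parameter names are illustrative, not taken from the project in question:

      // Build an n x n patch as a triangle list on the CPU; upload once with
      // glBufferData and displace each vertex in the shader afterwards.
      #include <vector>

      struct Vertex { float x, y, z; };

      std::vector<Vertex> buildGrid(int n, float size)
      {
          std::vector<Vertex> verts;
          verts.reserve(static_cast<size_t>(n) * n * 6);   // two triangles per cell
          const float step = size / n;
          for (int i = 0; i < n; ++i) {
              for (int j = 0; j < n; ++j) {
                  float x0 = i * step, z0 = j * step;
                  float x1 = x0 + step, z1 = z0 + step;
                  verts.push_back({x0, 0.0f, z0});   // first triangle
                  verts.push_back({x1, 0.0f, z0});
                  verts.push_back({x0, 0.0f, z1});
                  verts.push_back({x1, 0.0f, z0});   // second triangle
                  verts.push_back({x1, 0.0f, z1});
                  verts.push_back({x0, 0.0f, z1});
              }
          }
          return verts;
      }

    Generating the mesh once on the CPU and keeping only the per-vertex displacement on the GPU avoids re-amplifying the geometry every frame, which is one plausible reason the geometry-shader version was slower.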

    Read the article

  • My GLSL shader isn't compiling even though it should. What should I investigate?

    - by reapz
    I'm porting an iOS game to Android. One of the shaders I'm using wouldn't compile until I reduced the number of uniform variables. Here are the uniform definitions: uniform highp mat4 ViewProjMatrix; uniform mediump vec3 LightDirWorld; uniform mediump int BoneCount; uniform highp mat4 BoneMatrixArray[8]; uniform highp mat3 BoneMatrixArrayIT[8]; uniform mediump int LightCount; uniform mediump vec3 LightPos[4]; // This used to be 12, but now 4, next lines also uniform lowp vec3 LightColour[4]; uniform mediump vec3 LightInnerOuterFalloff[4]; My issue is that the GLSL shader wouldn't compile until I reduced the count of the above arrays from 12 to 4. My understanding is that if those 3 lines were arrays of 12 then I would be using 56 vertex uniform vectors. I query the system at startup (GL_MAX_VERTEX_UNIFORM_VECTORS) and it says that 128 are available. Why wouldn't it compile with 56? I'm having issues on the Kindle Fire.
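
    One thing worth checking (a hedged aside, not from the original post): GL_MAX_VERTEX_UNIFORM_VECTORS is expressed in vec4 slots, and ES 2.0 drivers are allowed to pad scalars, vec3s and mat3 rows up to full vec4s, so a hand count is only a lower bound on what the compiler actually allocates. A small host-side sketch of the query:

      // Query the per-stage uniform budgets actually reported by the driver.
      #include <GL/glew.h>
      #include <cstdio>

      void printUniformBudgets()
      {
          GLint maxVertexVectors = 0, maxFragmentVectors = 0;
          glGetIntegerv(GL_MAX_VERTEX_UNIFORM_VECTORS, &maxVertexVectors);
          glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_VECTORS, &maxFragmentVectors);
          // Rough hand count for the declarations above, one vec4 slot per element:
          //   mat4 ViewProjMatrix -> 4, mat4 BoneMatrixArray[8] -> 32,
          //   mat3 BoneMatrixArrayIT[8] -> 24 (or 32 if the driver pads mat3 to 4 rows),
          //   plus the scalars and vec3 arrays, each possibly rounded up to a full vec4.
          std::printf("vertex: %d vec4s, fragment: %d vec4s\n",
                      maxVertexVectors, maxFragmentVectors);
      }

    Because the padding rules differ between drivers, the practical limit on a given device can be noticeably lower than the reported maximum.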

    Read the article

  • What's the best library to do URL hash/history in jQuery?

    - by alex
    I've been looking around for jQuery libraries that handle the URL hash, but found none that were good. There is the "history plugin", but we all know it's buggy and isn't flexible. I am loading my pages inside a div. I'll need a way to do back/forward along with the URL hashing. mydomain.com/#home mydomain.com/#aboutus mydomain.com/#register What's the best library that can handle all of this?

    Read the article

  • GLEW + VS2010 error

    - by Egon
    Hi folks, I am running into this issue and don't know what to do: 'abc.exe': Loaded 'D:\Windows\SysWOW64\nvoglv32.dll', Cannot find or open the PDB file. Earlier I was getting a whole bunch of these errors, but the Microsoft symbol source setting fixed them for every file except this one: nvoglv32.dll. Does anybody know how I can resolve this issue or get my hands on the file "nvoglv32.pdb"? Thanks -A

    Read the article

  • How would I create this background effect?

    - by William
    What would you call the effect applied to the backgrounds in the Giygas fight of Earthbound, and the battle backgrounds in Mother 3? This is what I'm talking about: http://www.youtube.com/watch?v=tcaErqaoWek http://www.youtube.com/watch?v=ubVnmeTRqhg Now, does anyone know how I could go about this without using animated images or OpenGL?

    Read the article

  • Nested fragments survive screen rotation

    - by ievgen
    I've run into an issue with nested Fragments in Android: when I rotate the screen, the nested Fragments somehow survive. I've come up with a small example to illustrate this issue. public class ParentFragment extends BaseFragment { @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { return inflater.inflate(R.layout.fragment_parent, container); } @Override public void onViewCreated(View view, Bundle savedInstanceState) { super.onViewCreated(view, savedInstanceState); getChildFragmentManager() .beginTransaction() .add(getId(), new ParentFragmentChild(), ParentFragmentChild.class.getName()) .commit(); } @Override public void onResume() { super.onResume(); log.verbose("onResume(), numChildFragments: " + getChildFragmentManager().getFragments().size()); } } public class ParentFragmentChild extends BaseFragment { @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { return inflater.inflate(R.layout.fragment_child, null); } } BaseFragment just logs method calls. This is what I see when I rotate the screen. When the Activity initially appears: ParentFragment → onAttach(): ParentFragment{420d0a98 #0 id=0x7f060064} ParentFragment → onCreate() ParentFragment → onViewCreated() ParentFragmentChild → onAttach(): ParentFragmentChild{420d08d0 #0 id=0x7f060064 com.kinoteatr.ua.filmgoer.test.ParentFragmentChild} ParentFragmentChild → onCreate() ParentFragmentChild → onViewCreated() ParentFragment → onResume() ParentFragment → onResume(), numChildFragments: 1 ParentFragmentChild → onResume() Screen rotation #1: ParentFragmentChild → onPause() ParentFragment → onPause() ParentFragment → onSaveInstanceState() ParentFragmentChild → onSaveInstanceState() ParentFragmentChild → onStop() ParentFragment → onStop() ParentFragmentChild → onDestroyView() ParentFragment → onDestroyView() ParentFragmentChild → onDestroy() ParentFragmentChild → onDetach() ParentFragment → onDestroy() ParentFragment → onDetach() ParentFragment → onAttach(): ParentFragment{4211bc38 #0 id=0x7f060064} ParentFragment → onCreate() ParentFragmentChild → onAttach(): ParentFragmentChild{420f4180 #0 id=0x7f060064 com.kinoteatr.ua.filmgoer.test.ParentFragmentChild} ParentFragmentChild → onCreate() ParentFragment → onViewCreated() ParentFragmentChild → onViewCreated() ParentFragmentChild → onAttach(): ParentFragmentChild{42132a08 #1 id=0x7f060064 com.kinoteatr.ua.filmgoer.test.ParentFragmentChild} ParentFragmentChild → onCreate() ParentFragmentChild → onViewCreated() ParentFragment → onResume() ParentFragment → onResume(), numChildFragments: 2 ParentFragmentChild → onResume() ParentFragmentChild → onResume() Screen rotation #2: ParentFragmentChild → onPause() ParentFragmentChild → onPause() ParentFragment → onPause() ParentFragment → onSaveInstanceState() ParentFragmentChild → onSaveInstanceState() ParentFragmentChild → onSaveInstanceState() ParentFragmentChild → onStop() ParentFragmentChild → onStop() ParentFragment → onStop() ParentFragmentChild → onDestroyView() ParentFragmentChild → onDestroyView() ParentFragment → onDestroyView() ParentFragmentChild → onDestroy() ParentFragmentChild → onDetach() ParentFragmentChild → onDestroy() ParentFragmentChild → onDetach() ParentFragment → onDestroy() ParentFragment → onDetach() ParentFragment → onAttach(): ParentFragment{42122a48 #0 id=0x7f060064} ParentFragment → onCreate() ParentFragmentChild → onAttach(): ParentFragmentChild{420ffd48 #0 id=0x7f060064 com.kinoteatr.ua.filmgoer.test.ParentFragmentChild} ParentFragmentChild → onCreate() ParentFragmentChild → onAttach(): ParentFragmentChild{420fffa0 #1 id=0x7f060064 com.kinoteatr.ua.filmgoer.test.ParentFragmentChild} ParentFragmentChild → onCreate() ParentFragment → onViewCreated() ParentFragmentChild → onViewCreated() ParentFragmentChild → onViewCreated() ParentFragmentChild → onAttach(): ParentFragmentChild{42101488 #2 id=0x7f060064 com.kinoteatr.ua.filmgoer.test.ParentFragmentChild} ParentFragmentChild → onCreate() ParentFragmentChild → onViewCreated() ParentFragment → onResume() ParentFragment → onResume(), numChildFragments: 3 ParentFragmentChild → onResume() ParentFragmentChild → onResume() ParentFragmentChild → onResume() They keep getting multiplied. Does anybody know why that is?

    Read the article

  • changing the intensity of lighten/darken on bitmaps using PorterDuffXfermode in the Android Paint class

    - by user1116836
    OK, my original question has changed. How do I change the intensity of how something like this is affected? DayToNight.setXfermode(new PorterDuffXfermode(Mode.DST_IN)); In my dream world it would have worked like this: DayToNight.setXfermode(new PorterDuffXfermode(Mode.DST_IN(10))); the 10 being a level of intensity. An example would be if I had a flickering candle: when the candle burns bright I want the bitmaps I am drawing to the screen to retain their original color and brightness, when it flickers I want the bitmaps to be almost blacked out, and I want to darken the bitmaps as the light dims. I have equations, timers and all that figured out, just not how to actually apply it to change the color/brightness. Maybe burning the images is what I'm looking for? I just want to change the lightness lol. I feel like using paint.setShader might be a solution, but the information in this area is pretty limited from what I have been able to find. Any help would be appreciated. Edit: to be crystal clear, I am looking for a way to lighten/darken bitmaps as I draw them to the canvas.

    Read the article

  • I'm looking for a blend mode that gives 'realistic' paint colors. (Subtractive)

    - by almosnow
    I've been looking for a blend mode to (well ...) blend two RGB pixels in order to build colors the same way that a painter builds them (i.e. subtractive). Here are quick examples of the type of results that I'm expecting: CYAN + MAGENTA = BLUE CYAN + YELLOW = GREEN MAGENTA + YELLOW = RED RED + YELLOW = ORANGE RED + BLUE = PURPLE YELLOW + BLUE = GREEN I'm looking for a formula, like: dest_red = first_red + second_red; dest_green = first_green + second_green; dest_blue = first_blue + second_blue; I've tried the commonly used 'multiply' formula but it doesn't work; I've tried custom-made formulas but I'm still not able to 'crack' how it should work. And I already know a lot of color theory, so please refrain from answers like: Check this link: http://the_difference_betweeen_additive_and_subtractive_lightning.html
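
    For what it's worth, here is a hedged sketch (my own, not the poster's code) of one standard starting point from the paint-mixing literature: single-constant Kubelka-Munk mixing applied per RGB channel. It is not a drop-in answer: with idealized pure primaries it still cannot produce yellow + blue = green, which really needs spectral or measured pigment data, but it behaves much more like pigment than straight RGB addition or multiplication:

      #include <algorithm>
      #include <cmath>

      struct RGB { float r, g, b; };           // channels treated as reflectances in (0, 1)

      static float toKS(float refl)            // K/S = (1 - R)^2 / (2R)
      {
          refl = std::clamp(refl, 0.001f, 0.999f);   // keep away from 0 and 1
          return (1.0f - refl) * (1.0f - refl) / (2.0f * refl);
      }

      static float toRefl(float ks)            // inverse: R = 1 + K/S - sqrt((K/S)^2 + 2 K/S)
      {
          return 1.0f + ks - std::sqrt(ks * ks + 2.0f * ks);
      }

      RGB mixPaint(const RGB& a, const RGB& b, float t = 0.5f)
      {
          auto mixChannel = [&](float ca, float cb) {
              return toRefl((1.0f - t) * toKS(ca) + t * toKS(cb));
          };
          return { mixChannel(a.r, b.r), mixChannel(a.g, b.g), mixChannel(a.b, b.b) };
      }

    With this, cyan + magenta comes out blue and cyan + yellow comes out green; mixes involving saturated red or blue only look right if the inputs are realistic paint reflectances rather than pure 0/1 primaries.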

    Read the article

  • GLSL - one-pass Gaussian blur

    - by martin pilch
    Is it possible to implement a fragment shader that does a one-pass Gaussian blur? I have found lots of implementations of two-pass blurs (Gaussian and box blur): http://callumhay.blogspot.com/2010/09/gaussian-blur-shader-glsl.html http://www.gamerendering.com/2008/10/11/gaussian-blur-filter-shader/ http://www.geeks3d.com/20100909/shader-library-gaussian-blur-post-processing-filter-in-glsl/ and so on. I have been thinking of implementing the Gaussian blur as a convolution (in fact, it is a convolution; the examples above are just approximations): http://en.wikipedia.org/wiki/Gaussian_blur
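
    It is possible - a single pass is simply the full 2D convolution, at the cost of NxN texture reads per fragment instead of N+N across two passes. A minimal sketch of a 3x3 version (my own, not from the links above), embedded as a C++ string literal; texelSize is assumed to be 1.0 / texture resolution, supplied by the host code:

      // One-pass 3x3 Gaussian blur, kernel (1 2 1 / 2 4 2 / 1 2 1) / 16.
      static const char* onePassBlurFS = R"GLSL(
          #version 330 core
          uniform sampler2D tex;
          uniform vec2 texelSize;
          in vec2 texcoord;
          out vec4 fragColor;

          void main()
          {
              const float kernel[9] = float[9](1.0, 2.0, 1.0,
                                               2.0, 4.0, 2.0,
                                               1.0, 2.0, 1.0);
              vec4 sum = vec4(0.0);
              int k = 0;
              for (int y = -1; y <= 1; ++y)
                  for (int x = -1; x <= 1; ++x)
                      sum += kernel[k++] * texture(tex, texcoord + vec2(x, y) * texelSize);
              fragColor = sum / 16.0;
          }
      )GLSL";

    For larger kernels the separable two-pass form wins quickly, since the read count grows as N^2 in one pass but only 2N across two.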

    Read the article

  • Using Ogre particle point billboards with shaders

    - by Jay
    I'm learning about using Ogre particles and had some questions about how the point type particles work. Q. I believe point type particles are implemented as a single position. Is one single vertex passed to the vertex shader? Q. If one vertex is passed to the vertex shader, then what gets sent to the fragment shader? Q. Can I pass the particle size to the shader? Perhaps with a custom parameter?
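
    Leaving aside Ogre's material and parameter binding (which I won't guess at here), the raw GLSL mechanics behind point billboards are: one vertex per particle, gl_PointSize written by the vertex shader, and gl_PointCoord giving each fragment its 0..1 position inside the expanded square. A minimal sketch:

      // Point-sprite shaders; 'pointSize' would come in per particle, e.g. as a
      // vertex attribute (or, in Ogre, via whatever custom parameter you bind to it).
      static const char* pointVS = R"GLSL(
          #version 330 core
          in vec3 position;
          in float pointSize;
          uniform mat4 worldViewProj;
          void main()
          {
              gl_Position  = worldViewProj * vec4(position, 1.0);
              gl_PointSize = pointSize;     // requires GL_PROGRAM_POINT_SIZE to be enabled
          }
      )GLSL";

      static const char* pointFS = R"GLSL(
          #version 330 core
          uniform sampler2D particleTex;
          out vec4 fragColor;
          void main()
          {
              // gl_PointCoord spans 0..1 across the screen-space square the point covers
              fragColor = texture(particleTex, gl_PointCoord);
          }
      )GLSL";

    So one vertex goes into the vertex shader, the rasterizer expands it to a gl_PointSize-sized square, and the fragment shader runs once per covered pixel, with gl_PointCoord telling it where in the sprite it is.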

    Read the article

  • GLSL compiler messages from different vendors [on hold]

    - by revers
    I'm writing a GLSL shader editor and I want to parse GLSL compiler messages to make hyperlinks to invalid lines in a shader code. I know that these messages are vendor specific but currently I have access only to AMD's video cards. I want to handle at least NVidia's and Intel's hardware, apart from AMD's. If you have video card from different vendor than AMD, could you please give me the output of following C++ program: #include <GL/glew.h> #include <GL/freeglut.h> #include <iostream> using namespace std; #define STRINGIFY(X) #X static const char* fs = STRINGIFY( out vec4 out_Color; mat4 m; void main() { vec3 v3 = vec3(1.0); vec2 v2 = v3; out_Color = vec4(5.0 * v2.x, 1.0); vec3 k = 3.0; float = 5; } ); static const char* vs = STRINGIFY( in vec3 in_Position; void main() { vec3 v(5); gl_Position = vec4(in_Position, 1.0); } ); void printShaderInfoLog(GLint shader) { int infoLogLen = 0; int charsWritten = 0; GLchar *infoLog; glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLen); if (infoLogLen > 0) { infoLog = new GLchar[infoLogLen]; glGetShaderInfoLog(shader, infoLogLen, &charsWritten, infoLog); cout << "Log:\n" << infoLog << endl; delete [] infoLog; } } void printProgramInfoLog(GLint program) { int infoLogLen = 0; int charsWritten = 0; GLchar *infoLog; glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infoLogLen); if (infoLogLen > 0) { infoLog = new GLchar[infoLogLen]; glGetProgramInfoLog(program, infoLogLen, &charsWritten, infoLog); cout << "Program log:\n" << infoLog << endl; delete [] infoLog; } } void initShaders() { GLuint v = glCreateShader(GL_VERTEX_SHADER); GLuint f = glCreateShader(GL_FRAGMENT_SHADER); GLint vlen = strlen(vs); GLint flen = strlen(fs); glShaderSource(v, 1, &vs, &vlen); glShaderSource(f, 1, &fs, &flen); GLint compiled; glCompileShader(v); bool succ = true; glGetShaderiv(v, GL_COMPILE_STATUS, &compiled); if (!compiled) { cout << "Vertex shader not compiled." << endl; succ = false; } printShaderInfoLog(v); glCompileShader(f); glGetShaderiv(f, GL_COMPILE_STATUS, &compiled); if (!compiled) { cout << "Fragment shader not compiled." << endl; succ = false; } printShaderInfoLog(f); GLuint p = glCreateProgram(); glAttachShader(p, v); glAttachShader(p, f); glLinkProgram(p); glUseProgram(p); printProgramInfoLog(p); if (!succ) { exit(-1); } delete [] vs; delete [] fs; } int main(int argc, char* argv[]) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA); glutInitWindowSize(600, 600); glutCreateWindow("Triangle Test"); glewInit(); GLenum err = glewInit(); if (GLEW_OK != err) { cout << "glewInit failed, aborting." << endl; exit(1); } cout << "Using GLEW " << glewGetString(GLEW_VERSION) << endl; const GLubyte* renderer = glGetString(GL_RENDERER); const GLubyte* vendor = glGetString(GL_VENDOR); const GLubyte* version = glGetString(GL_VERSION); const GLubyte* glslVersion = glGetString(GL_SHADING_LANGUAGE_VERSION); GLint major, minor; glGetIntegerv(GL_MAJOR_VERSION, &major); glGetIntegerv(GL_MINOR_VERSION, &minor); cout << "GL Vendor : " << vendor << endl; cout << "GL Renderer : " << renderer << endl; cout << "GL Version : " << version << endl; cout << "GL Version : " << major << "." << minor << endl; cout << "GLSL Version : " << glslVersion << endl; initShaders(); return 0; } On my video card it gives: Status: Using GLEW 1.7.0 GL Vendor : ATI Technologies Inc. GL Renderer : ATI Radeon HD 4250 GL Version : 3.3.11631 Compatibility Profile Context GL Version : 3.3 GLSL Version : 3.30 Vertex shader not compiled. 
Log: Vertex shader failed to compile with the following errors: ERROR: 0:1: error(#132) Syntax error: '5' parse error ERROR: error(#273) 1 compilation errors. No code generated Fragment shader not compiled. Log: Fragment shader failed to compile with the following errors: WARNING: 0:1: warning(#402) Implicit truncation of vector from size 3 to size 2. ERROR: 0:1: error(#174) Not enough data provided for construction constructor WARNING: 0:1: warning(#402) Implicit truncation of vector from size 1 to size 3. ERROR: 0:1: error(#132) Syntax error: '=' parse error ERROR: error(#273) 2 compilation errors. No code generated Program log: Vertex and Fragment shader(s) were not successfully compiled before glLinkProgram() was called. Link failed. Or, if you like, you could give me other compiler messages than the ones I proposed. To summarize, the question is: What are the GLSL compiler message formats (INFOs, WARNINGs, ERRORs) for different vendors? Please give me examples or an explanation of the patterns. EDIT: OK, it seems that this question is too broad, so in short: How do NVidia's and Intel's GLSL compilers present ERROR and WARNING messages? AMD/ATI uses patterns like this: ERROR: <position>:<line_number>: <message> WARNING: <position>:<line_number>: <message> (examples are above).

    Read the article

  • DOMDocument grouping nodes, with clone, nodeClone, importNode, fragment... What is the better way?

    - by Peter Krauss
    A "DOMNodeList grouper" (groupList() function below) is a function that envelopes a set of nodes into a tag. Example: INPUT <root><b>10</b><a/><a>1</a><b>20</b><a>2</a></root> OUTPUT of groupList($dom->getElementsByTagName('a'),'G') <root><b>10</b> <G><a/><a>1</a><a>2</a></G> <b>20</b></root> There are many ways to implement it, what is the better? function groupList_v1(DOMNodeList &$list,$tag,&$dom) { $list = iterator_to_array($list); // to save itens $n = count($list); if ($n && $list[0]->nodeType==1) { $T = $dom->createDocumentFragment(); $T->appendChild($dom->createElement($tag)); for($i=0; $i<$n; $i++) { $T->firstChild->appendChild( clone $list[$i] ); if ($i) $list[$i]->parentNode->removeChild($list[$i]); } $dom->documentElement->replaceChild($T,$list[0]); }//if return $n; }//func function groupList_v2(DOMNodeList &$list,$tag,&$dom) { $list = iterator_to_array($list); // to save itens $n = count($list); if ($n && $list[0]->nodeType==1) { $T = $dom->createDocumentFragment(); $T->appendChild($dom->createElement($tag)); for($i=0; $i<$n; $i++) $T->firstChild->appendChild( clone $list[$i] ); $dom->documentElement->replaceChild($T,$list[0]); for($i=1; $i<$n; $i++) $list[$i]->parentNode->removeChild($list[$i]); }//if return $n; }//func // ... YOUR SUGGESTION ... // My ugliest function groupList_vN(DOMNodeList &$list,$tag,&$dom) { $list = iterator_to_array($list); // to save itens $n = count($list); if ($n && $list[0]->nodeType==1) { $d2 = new DOMDocument; $T = $d2->createElement($tag); for($i=0; $i<$n; $i++) $T->appendChild( $d2->importNode($list[$i], true) ); $dom->documentElement->replaceChild( $dom->importNode($T, true), $list[0] ); for($i=1; $i<$n; $i++) $list[$i]->parentNode->removeChild($list[$i]); }//if return $n; }//func Related questions: at stackoverflow, at codereview.

    Read the article

  • GLSL subroutine not being used

    - by amoffat
    I'm using a gaussian blur fragment shader. In it, I thought it would be concise to include 2 subroutines: one for selecting the horizontal texture coordinate offsets, and another for the vertical texture coordinate offsets. This way, I just have one gaussian blur shader to manage. Here is the code for my shader. The {{NAME}} bits are template placeholders that I substitute in at shader compile time: #version 420 subroutine vec2 sample_coord_type(int i); subroutine uniform sample_coord_type sample_coord; in vec2 texcoord; out vec3 color; uniform sampler2D tex; uniform int texture_size; const float offsets[{{NUM_SAMPLES}}] = float[]({{SAMPLE_OFFSETS}}); const float weights[{{NUM_SAMPLES}}] = float[]({{SAMPLE_WEIGHTS}}); subroutine(sample_coord_type) vec2 vertical_coord(int i) { return vec2(0.0, offsets[i] / texture_size); } subroutine(sample_coord_type) vec2 horizontal_coord(int i) { //return vec2(offsets[i] / texture_size, 0.0); return vec2(0.0, 0.0); // just for testing if this subroutine gets used } void main(void) { color = vec3(0.0); for (int i=0; i<{{NUM_SAMPLES}}; i++) { color += texture(tex, texcoord + sample_coord(i)).rgb * weights[i]; color += texture(tex, texcoord - sample_coord(i)).rgb * weights[i]; } } Here is my code for selecting the subroutine: blur_program->start(); blur_program->set_subroutine("sample_coord", "vertical_coord", GL_FRAGMENT_SHADER); blur_program->set_int("texture_size", width); blur_program->set_texture("tex", *deferred_output); blur_program->draw(); // draws a quad for the fragment shader to run on and: void ShaderProgram::set_subroutine(constr name, constr routine, GLenum target) { GLuint routine_index = glGetSubroutineIndex(id, target, routine.c_str()); GLuint uniform_index = glGetSubroutineUniformLocation(id, target, name.c_str()); glUniformSubroutinesuiv(target, 1, &routine_index); // debugging int num_subs; glGetActiveSubroutineUniformiv(id, target, uniform_index, GL_NUM_COMPATIBLE_SUBROUTINES, &num_subs); std::cout << uniform_index << " " << routine_index << " " << num_subs << "\n"; } I've checked for errors, and there are none. When I pass in vertical_coord as the routine to use, my scene is blurred vertically, as it should be. The routine_index variable is also 1 (which is weird, because vertical_coord subroutine is the first listed in the shader code...but no matter, maybe the compiler is switching things around) However, when I pass in horizontal_coord, my scene is STILL blurred vertically, even though the value of routine_index is 0, suggesting that a different subroutine is being used. Yet the horizontal_coord subroutine explicitly does not blur. What's more is, whichever subroutine comes first in the shader, is the subroutine that the shader uses permanently. Right now, vertical_coord comes first, so the shader blurs vertically always. If I put horizontal_coord first, the scene is unblurred, as expected, but then I cannot select the vertical_coord subroutine! :) Also, the value of num_subs is 2, suggesting that there are 2 subroutines compatible with my sample_coord subroutine uniform. Just to re-iterate, all of my return values are fine, and there are no glGetError() errors happening. Any ideas?

    Read the article

  • How do I share a WiX fragment in two WiX projects?

    - by Randy Eppinger
    We have a WiX fragment in a file SomeDialog.wxs that prompts the user for some information. It's referenced in another fragment in InstallerUI.wxs file that controls the dialog order. Of course, Product.wxs is our main file. Works great. Now I have a second Visual Studio 2008 Wix 3.0 Project for the .MSI of another application and it needs to ask the user for the same information. I can't seem to figure out the best way to share the file so that changing the information requested will result in both .MSIs getting the new behavior. I honestly can't tell if a merge module, an .wsi (include) or a .wixlib is the right solution. I would have hoped to find a simple example of someone doing this but I have failed thus far. Edit: Based on Rob Mensching's wixlib blog entry, a wixlib may be the answer, but I am still searching for an example of how to do this.

    Read the article

  • How to make ASP.NET authentication persist the Url Fragment when redirecting to the login page?

    - by estourodepilha.com
    After I inserted the configuration below into my Web.Config: <authentication mode="Forms"> <forms name="appNameAuth" path="/" loginUrl="login.aspx" protection="All" timeout="30"> <credentials passwordFormat="Clear"> <user name="user" password="password" /> </credentials> </forms> </authentication> <authorization> <deny users="?" /> </authorization> All requests to Menu.aspx#fragment are redirected to login.aspx?ReturnUrl=/Menu.aspx, and I expected them to be redirected to login.aspx?ReturnUrl=/Menu.aspx#fragment. How do I achieve the desired behavior?

    Read the article

  • Using a single texture image unit with multiple sampler uniforms

    - by bcrist
    I am writing a batching system which tracks currently bound textures in order to avoid unnecessary glBindTexture() calls. I'm not sure if I need to keep track of which textures have already been used by a particular batch so that if a texture is used twice, it will be bound to a different TIU for the second sampler which requires it. Is it acceptable for an OpenGL application to use the same texture image unit for multiple samplers within the same shader stage? What about samplers in different shader stages? For example: Fragment shader: ... uniform sampler2D samp1; uniform sampler2D samp2; void main() { ... } Main program: ... glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, tex_id); glUniform1i(samp1_location, 0); glUniform1i(samp2_location, 0); ... I don't see any reason why this shouldn't work, but what about if the shader program also included a vertex shader like this: Vertex shader: ... uniform sampler2D samp1; void main() { ... } In this case, OpenGL is supposed to treat both instances of samp1 as the same variable, and exposes a single location for them. Therefore, the same texture unit is being used in the vertex and fragment shaders. I have read that using the same texture in two different shader stages counts doubly against GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS but this would seem to contradict that. In a quick test on my hardware (HD 6870), all of the following scenarios worked as expected: 1 TIU used for 2 sampler uniforms in same shader stage 1 TIU used for 1 sampler uniform which is used in 2 shader stages 1 TIU used for 2 sampler uniforms, each occurring in a different stage. However, I don't know if this is behavior that I should expect on all hardware/drivers, or if there are performance implications.

    Read the article

  • How do multipass shaders work in OpenGL?

    - by Boreal
    In Direct3D, multipass shaders are simple to use because you can literally define passes within a program. In OpenGL, it seems a bit more complex because it is possible to give a shader program as many vertex, geometry, and fragment shaders as you want. A popular example of a multipass shader is a toon shader. One pass does the actual cel-shading effect and the other creates the outline. If I have two vertex shaders, "cel.vert" and "outline.vert", and two fragment shaders, "cel.frag" and "outline.frag" (similar to the way you do it in HLSL), how can I combine them to create the full toon shader? I don't want you saying that a geometry shader can be used for this because I just want to know the theory behind multipass GLSL shaders ;)
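
    The short version is that core OpenGL has no built-in pass system: each "pass" is just another draw of the same geometry with a different program (and whatever state that pass needs) bound. A hedged host-side sketch with hypothetical program and mesh handles:

      // Two-pass toon rendering: celProgram links cel.vert + cel.frag,
      // outlineProgram links outline.vert + outline.frag.
      #include <GL/glew.h>

      void drawToon(GLuint celProgram, GLuint outlineProgram, GLuint vao, GLsizei indexCount)
      {
          glBindVertexArray(vao);

          // Pass 1: outline - one common trick is to draw back faces only,
          // pushed out along their normals inside outline.vert.
          glUseProgram(outlineProgram);
          glCullFace(GL_FRONT);
          glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);

          // Pass 2: cel shading drawn over the top.
          glUseProgram(celProgram);
          glCullFace(GL_BACK);
          glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
      }

    Effect-file style pass blocks (as in Direct3D's .fx files) are a framework feature rather than an API one, so in plain GLSL you either write this sequencing yourself or let an engine's material system do it.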

    Read the article

  • Shadowmap first phase and shaders

    - by KaiserJohaan
    I am using OpenGL 3.3 and am trying to implement shadow mapping using cube maps. I have a framebuffer with a depth attachment and a cube map texture. My question is how to design the shaders for the first pass, when creating the shadowmap. This is my vertex shader: in vec3 position; uniform mat4 lightWVP; void main() { gl_Position = lightWVP * vec4(position, 1.0); } Now, do I even need a fragment shader in this shader pass? From what I understand after reading http://www.opengl.org/wiki/Fragment_Shader, by default gl_FragCoord.z is written to the currently attached depth component (to which my cubemap texture is bound). Thus I shouldn't even need a fragment shader for this pass and, from what I understand, there is no other work to do in the fragment shader other than writing this value. Is this correct?

    Read the article

  • After a lost segment the TCP connection never recovers

    - by mvladic
    Take a look at following trace taken with Wireshark: http://dl.dropbox.com/u/145579/trace1.pcap or http://dl.dropbox.com/u/145579/trace2.pcap I will repeat here an interesting part (from trace1.pcap): No. Time Source Destination Protocol Length Info 1850 2012-02-09 13:44:32.609 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=581704 Win=65392 Len=0 1851 2012-02-09 13:44:32.610 192.168.4.213 172.22.37.4 COTP 550 DT TPDU (0) [COTP fragment, 509 bytes] 1852 2012-02-09 13:44:32.639 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1853 2012-02-09 13:44:32.639 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=582736 Win=65392 Len=0 1854 2012-02-09 13:44:32.657 192.168.4.213 172.22.37.4 TCP 590 [TCP Previous segment lost] 62479 > iso-tsap [ACK] Seq=583232 Ack=345 Win=65191 Len=536 1855 2012-02-09 13:44:32.657 192.168.4.213 172.22.37.4 TCP 108 [TCP segment of a reassembled PDU] 1856 2012-02-09 13:44:32.657 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1853#1] iso-tsap > 62479 [ACK] Seq=345 Ack=582736 Win=65392 Len=0 1857 2012-02-09 13:44:32.657 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1853#2] iso-tsap > 62479 [ACK] Seq=345 Ack=582736 Win=65392 Len=0 1858 2012-02-09 13:44:32.675 192.168.4.213 172.22.37.4 COTP 590 [TCP Fast Retransmission] DT TPDU (0) [COTP fragment, 509 bytes] 1859 2012-02-09 13:44:32.715 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU] 1860 2012-02-09 13:44:32.715 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=583272 Win=65392 Len=0 1861 2012-02-09 13:44:32.796 192.168.4.213 172.22.37.4 COTP 590 [TCP Retransmission] DT TPDU (0) EOT 1862 2012-02-09 13:44:32.945 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU] 1863 2012-02-09 13:44:32.945 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=583808 Win=65392 Len=0 1864 2012-02-09 13:44:32.963 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1865 2012-02-09 13:44:32.963 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1863#1] iso-tsap > 62479 [ACK] Seq=345 Ack=583808 Win=65392 Len=0 1866 2012-02-09 13:44:32.963 192.168.4.213 172.22.37.4 TCP 576 [TCP segment of a reassembled PDU] 1867 2012-02-09 13:44:32.963 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1863#2] iso-tsap > 62479 [ACK] Seq=345 Ack=583808 Win=65392 Len=0 1868 2012-02-09 13:44:33.235 192.168.4.213 172.22.37.4 COTP 590 [TCP Retransmission] DT TPDU (0) [COTP fragment, 509 bytes] 1869 2012-02-09 13:44:33.434 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=584344 Win=65392 Len=0 1870 2012-02-09 13:44:33.447 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1871 2012-02-09 13:44:33.447 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1869#1] iso-tsap > 62479 [ACK] Seq=345 Ack=584344 Win=65392 Len=0 1872 2012-02-09 13:44:33.806 192.168.4.213 172.22.37.4 COTP 590 [TCP Retransmission] DT TPDU (0) [COTP fragment, 509 bytes] 1873 2012-02-09 13:44:34.006 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=584880 Win=65392 Len=0 1874 2012-02-09 13:44:34.018 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1875 2012-02-09 13:44:34.018 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1873#1] iso-tsap > 62479 [ACK] Seq=345 Ack=584880 Win=65392 Len=0 1876 2012-02-09 13:44:34.932 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU] 1877 2012-02-09 13:44:35.132 
172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=585416 Win=65392 Len=0 1878 2012-02-09 13:44:35.144 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1879 2012-02-09 13:44:35.144 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1877#1] iso-tsap > 62479 [ACK] Seq=345 Ack=585416 Win=65392 Len=0 1880 2012-02-09 13:44:37.172 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU] 1881 2012-02-09 13:44:37.372 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=585952 Win=65392 Len=0 1882 2012-02-09 13:44:37.385 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1883 2012-02-09 13:44:37.385 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1881#1] iso-tsap > 62479 [ACK] Seq=345 Ack=585952 Win=65392 Len=0 1884 2012-02-09 13:44:41.632 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU] 1885 2012-02-09 13:44:41.832 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=586488 Win=65392 Len=0 1886 2012-02-09 13:44:41.844 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1887 2012-02-09 13:44:41.844 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1885#1] iso-tsap > 62479 [ACK] Seq=345 Ack=586488 Win=65392 Len=0 1888 2012-02-09 13:44:50.554 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU] 1889 2012-02-09 13:44:50.753 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=587024 Win=65392 Len=0 1890 2012-02-09 13:44:50.766 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1891 2012-02-09 13:44:50.766 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1889#1] iso-tsap > 62479 [ACK] Seq=345 Ack=587024 Win=65392 Len=0 1892 2012-02-09 13:45:08.385 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU] 1893 2012-02-09 13:45:08.585 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=587560 Win=65392 Len=0 1894 2012-02-09 13:45:08.598 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1895 2012-02-09 13:45:08.598 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1893#1] iso-tsap > 62479 [ACK] Seq=345 Ack=587560 Win=65392 Len=0 1896 2012-02-09 13:45:44.059 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU] 1897 2012-02-09 13:45:44.259 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=588096 Win=65392 Len=0 1898 2012-02-09 13:45:44.272 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1899 2012-02-09 13:45:44.272 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1897#1] iso-tsap > 62479 [ACK] Seq=345 Ack=588096 Win=65392 Len=0 1900 2012-02-09 13:46:55.386 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU] Some background information (not much, unfortunately, as I'm responsible only for server part): Server (172.22.37.4) is Windows Server 2008 R2 and client (192.168.4.213) is Ericsson telephone exchange of whom I do not know much. Client sends a file to server using FTAM protocol. This problem happens very often. I think, either client or server is doing sliding window protocol wrong. Server sends dup ack, client retransmits lost packet, but soon after client sends packets with wrong seq. Again, Server sends dup ack, client retransmits lost packet - but, this time with longer retransmission timeout. Again, client sends packet with wrong seq. Etc... 
The retransmission timeout grows to circa 4 minutes and the communication never recovers to normal.

    Read the article

  • Differences in cg shader code for OpenGL vs. for DirectX?

    - by Cray
    I have been trying to use an existing library that automatically generates shaders (the Hydrax plugin for Ogre3D). These shaders are used to render water and are somewhat involved, but not extremely complicated. However, there seem to be some differences in how the Cg shaders are handled by OpenGL and DirectX; more specifically, I am pretty sure that the author of the library has only debugged all the shaders for DirectX, and they work flawlessly there, but not so in OpenGL. There are no compiler errors, but the result just doesn't look the same. (And I have to run the library in OpenGL.) Isn't Cg supposed to be a language that can freely use the exact same code for both platforms? Are there any specific known caveats one should know about when using the same code for them? Are there any fast ways to find what parts of the code work differently? (I am pretty sure that the shaders are the problem. Otherwise Ogre3D has great support for both platforms, and everything is abstracted away nicely. Other shaders work in OpenGL, etc...)

    Read the article

  • What is the most efficient way to blur in a shader?

    - by concernedcitizen
    I'm currently working on screen space reflections. I have perfectly reflective mirror-like surfaces working, and I now need to use a blur to make the reflection on surfaces with a low specular gloss value look more diffuse. I'm having difficulty deciding how to apply the blur, though. My first idea was to just sample a lower mip level of the screen rendertarget. However, the rendertarget uses SurfaceFormat.HalfVector4 (for HDR effects), which means XNA won't allow linear filtering. Point filtering looks horrible and really doesn't give the visual cue that I want. I've thought about using some kind of Box/Gaussian blur, but this would not be ideal. I've already thrashed the texture cache in the raymarching phase before the blur even occurs (a worst case reflection could be 32 samples per pixel), and the blur kernel to make the reflections look sufficiently diffuse would be fairly large. Does anyone have any suggestions? I know it's doable, as Photon Workshop achieved the effect in Unity.

    Read the article

  • What are the factors that determine the default frequency of a shader call?

    - by user827992
    After playing for some days with various vertex and fragment shaders, it seems clear to me that these programs are called by the GPU on each and every rendering cycle. The problem is that I can't really quantify this frequency, and I can't tell whether it is based on some default values or not, because I don't have a big collection of hardware right now to do extensive tests. For all I know the answer could be really trivial, like "it's the same as the refresh rate of your monitor", but I would like some good answers to be clear on this. For instance, it looks really odd to me that all the techniques I have seen so far for controlling the FPS use a call to the GLUT function glutGet(GLUT_ELAPSED_TIME) to retrieve a value in ms for when the rendering started, but then I have to rely on the CPU to do the math. Why can't I set an FPS value in OpenGL if OpenGL clearly has a counter and a timer/clock? PS I'm referring to OpenGL 3.0+
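
    On the FPS side of the question (a hedged aside): the swap rate is governed by the driver's vsync setting (wglSwapIntervalEXT / glXSwapIntervalSGI and friends), not by anything declared in the shaders; shaders simply run as often as your draw calls make them run. If you want a fixed rate without relying on vsync, you do end up throttling on the CPU, e.g. along these lines, using the same glutGet(GLUT_ELAPSED_TIME) mentioned above (renderScene is a hypothetical placeholder for your draw calls):

      #include <GL/freeglut.h>

      void renderScene();              // placeholder: your actual draw calls

      const int kTargetFPS   = 60;
      const int kFrameMillis = 1000 / kTargetFPS;

      void display()
      {
          static int lastTime = glutGet(GLUT_ELAPSED_TIME);
          int now = glutGet(GLUT_ELAPSED_TIME);
          if (now - lastTime >= kFrameMillis) {
              lastTime = now;
              renderScene();           // every vertex/fragment shader invocation happens in here
              glutSwapBuffers();
          }
          glutPostRedisplay();         // keep the idle loop spinning
      }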

    Read the article

  • How to pass one float as four unsigned chars to a shader with glVertexAttribPointer?

    - by Kog
    For each vertex I use two floats as position and four unsigned bytes as color. I want to store all of them in one array, so I tried casting those four unsigned bytes to one float, but I am unable to do that correctly... All in all, my tests came to one point: GLfloat vertices[] = { 1.0f, 0.5f, 0, 1.0f, 0, 0 }; glEnableVertexAttribArray(0); glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), vertices); // VER1 - draws red triangle // unsigned char colors[] = { 0xff, 0, 0, 0xff, 0xff, 0, 0, 0xff, 0xff, 0, 0, // 0xff }; // glEnableVertexAttribArray(1); // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), // colors); // VER2 - draws greenish triangle (not "pure" green) // float f = 255 << 24 | 255; //Hex:0xff0000ff // float colors2[] = { f, f, f }; // glEnableVertexAttribArray(1); // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), // colors2); // VER3 - draws red triangle int i = 255 << 24 | 255; //Hex:0xff0000ff int colors3[] = { i, i, i }; glEnableVertexAttribArray(1); glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors3); glDrawArrays(GL_TRIANGLES, 0, 3); The above code is used to draw one simple red triangle. My question is - why do versions 1 and 3 work correctly, while version 2 draws some greenish triangle? The hex values are the ones I read by inspecting the variables in the debugger. They are equal for versions 2 and 3 - so what causes the difference?
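
    For what it's worth, the greenish triangle in VER2 is almost certainly because float f = 255 << 24 | 255; converts the integer's numeric value to a float instead of reusing its bit pattern, so the four bytes the GL reads are no longer 0xff 00 00 ff. A hedged sketch of one way to keep positions and packed colours in a single float array while preserving the byte pattern (type-punning via memcpy; assumes a little-endian machine):

      #include <GL/glew.h>
      #include <cstring>

      // Pack four colour bytes into a float without converting the value.
      static float packColor(unsigned char r, unsigned char g, unsigned char b, unsigned char a)
      {
          unsigned int bits = (unsigned int)a << 24 | (unsigned int)b << 16 |
                              (unsigned int)g << 8  | (unsigned int)r;
          float f;
          std::memcpy(&f, &bits, sizeof f);     // reinterpret the bits, do not convert
          return f;
      }

      // Interleaved layout: x, y, packedColor for each of the three vertices.
      float vertices[] = {
          1.0f, 0.5f, packColor(0xff, 0, 0, 0xff),
          0.0f, 1.0f, packColor(0xff, 0, 0, 0xff),
          0.0f, 0.0f, packColor(0xff, 0, 0, 0xff),
      };

      static void setupAttribs()
      {
          const GLsizei stride = 3 * sizeof(float);
          glEnableVertexAttribArray(0);
          glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, stride, vertices);
          glEnableVertexAttribArray(1);
          glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, stride,
                                (const char*)vertices + 2 * sizeof(float));
      }

    The usual caveat applies: routing arbitrary bit patterns through a float relies on the platform not normalising NaN patterns along the way, which is why many people prefer interleaving a genuinely typed byte field in a vertex struct instead.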

    Read the article
