Search Results

Search found 296 results on 12 pages for 'tex'.

Page 11/12 | < Previous Page | 7 8 9 10 11 12  | Next Page >

  • Problems implementing a screen space shadow ray tracing shader

    - by Grieverheart
    Here I previously asked about the possibility of ray tracing shadows in screen space in a deferred shader. Several problems were pointed out. One of the most important is that only visible objects can cast shadows, and objects between the camera and the shadow caster can interfere. Still, I thought it'd be a fun experiment.

    The idea is to calculate the view-space coordinates of each pixel and cast a ray to the light. The ray is then traced pixel by pixel towards the light, and its depth is compared with the depth at each pixel along the way. If a pixel is in front of the ray, a shadow is cast at the original pixel.

    At first I thought I could use the DDA algorithm in 2D to calculate the distance 't' (in p = o + t*d, where o is the origin and d the direction) to the next pixel, and use it in the 3D ray equation to find the ray's z coordinate at that pixel's position. For the 2D ray, I would use the projected and biased 3D ray direction and origin. The idea was that 't' would be the same in both the 2D and 3D equations. Unfortunately, this is not the case, since the projection matrix is 4D; some tweaking is needed to make it work this way. I would like to ask if someone knows of a way to do what I described above, i.e. to go from a 2D ray in texture-coordinate space to the 3D ray in view space.

    I did implement a simple version of the idea, which you can see in the following video: video here. The shadows may seem a bit pixelated, but that's mostly because of the size of the step in 't' I chose. And here is the shader:

        #version 330 core

        uniform sampler2D DepthMap;
        uniform vec2 projAB;
        uniform mat4 projectionMatrix;

        const vec3 light_p = vec3(-30.0, 30.0, -10.0);

        noperspective in vec2 pass_TexCoord;
        smooth in vec3 viewRay;

        layout(location = 0) out float out_AO;

        vec3 CalcPosition(void){
            float depth = texture(DepthMap, pass_TexCoord).r;
            float linearDepth = projAB.y / (depth - projAB.x);
            vec3 ray = normalize(viewRay);
            ray = ray / ray.z;
            return linearDepth * ray;
        }

        void main(void){
            vec3 origin = CalcPosition();
            if(origin.z < -60) discard;
            vec2 pixOrigin = pass_TexCoord; // tex coords
            vec3 dir = normalize(light_p - origin);
            vec2 texel_size = vec2(1.0 / 600.0);
            float t = 0.1;
            ivec2 pixIndex = ivec2(pixOrigin / texel_size);
            out_AO = 1.0;
            while(true){
                vec3 ray = origin + t * dir;
                vec4 temp = projectionMatrix * vec4(ray, 1.0);
                vec2 texCoord = (temp.xy / temp.w) * 0.5 + 0.5;
                ivec2 newIndex = ivec2(texCoord / texel_size);
                if(newIndex != pixIndex){
                    float depth = texture(DepthMap, texCoord).r;
                    float linearDepth = projAB.y / (depth - projAB.x);
                    if(linearDepth > ray.z + 0.1){
                        out_AO = 0.2;
                        break;
                    }
                    pixIndex = newIndex;
                }
                t += 0.5;
                if(texCoord.x < 0 || texCoord.x > 1.0 || texCoord.y < 0 || texCoord.y > 1.0)
                    break;
            }
        }

    As you can see, here I just increment 't' by some arbitrary amount, calculate the 3D ray and project it to get the pixel coordinates, which is not really optimal. I would like to optimize the code as much as possible and compare it with shadow mapping, and see how it scales with the number of lights. PS: Keep in mind that I reconstruct position from depth by interpolating rays through a full-screen quad.
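
    For what it's worth, one standard way around the re-projection per step is to exploit the fact that 1/z in view space varies linearly along the projected segment in NDC, so the 2D ray can be marched directly while the view-space depth is recovered by perspective-correct interpolation. A minimal GLSL sketch; viewOrigin, viewEnd and stepSize are placeholder names, not the poster's code:

        // Sketch: perspective-correct depth while marching a projected ray.
        // viewOrigin / viewEnd are assumed view-space endpoints of the ray.
        vec4 clipA = projectionMatrix * vec4(viewOrigin, 1.0);
        vec4 clipB = projectionMatrix * vec4(viewEnd,    1.0);
        vec2 ndcA  = clipA.xy / clipA.w;
        vec2 ndcB  = clipB.xy / clipB.w;
        float invZA = 1.0 / viewOrigin.z;
        float invZB = 1.0 / viewEnd.z;

        for(float s = 0.0; s <= 1.0; s += stepSize){      // s walks the 2D segment
            vec2  texCoord = mix(ndcA, ndcB, s) * 0.5 + 0.5;
            float rayZ     = 1.0 / mix(invZA, invZB, s);  // 1/z is linear in NDC
            // compare rayZ with the linearized depth sampled at texCoord ...
        }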

    Read the article

  • How can I specify interleaved vertex attributes and vertex indices?

    - by freefallr
    I'm writing a generic ShaderProgram class that compiles a set of Shader objects, passes args to the shader (like vertex position, vertex normal, tex coords etc.), then links the shader components into a shader program, for use with glDrawArrays. My vertex data already exists in a VertexBufferObject that uses the following data structure to create a vertex buffer:

        class CustomVertex
        {
        public:
            float m_Position[3];  // x, y, z        // offset 0, size = 3*sizeof(float)
            float m_TexCoords[2]; // u, v           // offset 3*sizeof(float), size = 2*sizeof(float)
            float m_Normal[3];    // nx, ny, nz
            float colour[4];      // r, g, b, a
            float padding[20];    // padded for performance
        };

    I've already written a working VertexBufferObject class that creates a vertex buffer object from an array of CustomVertex objects. This array is interleaved. It renders successfully with the following code:

        void VertexBufferObject::Draw()
        {
            if( ! m_bInitialized )
                return;

            glBindBuffer( GL_ARRAY_BUFFER, m_nVboId );
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, m_nVboIdIndex );

            glEnableClientState( GL_VERTEX_ARRAY );
            glEnableClientState( GL_TEXTURE_COORD_ARRAY );
            glEnableClientState( GL_NORMAL_ARRAY );
            glEnableClientState( GL_COLOR_ARRAY );

            glVertexPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 0) );
            glTexCoordPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 12) );
            glNormalPointer( GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 20) );
            glColorPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 32) );

            glDrawElements( GL_TRIANGLES, m_nNumIndices, GL_UNSIGNED_INT, ((char*)NULL + 0) );

            glDisableClientState( GL_VERTEX_ARRAY );
            glDisableClientState( GL_TEXTURE_COORD_ARRAY );
            glDisableClientState( GL_NORMAL_ARRAY );
            glDisableClientState( GL_COLOR_ARRAY );

            glBindBuffer( GL_ARRAY_BUFFER, 0 );
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, 0 );
        }

    Back to the vertex array object, though. My code for creating the vertex array object is as follows. This is performed before the ShaderProgram runtime linking stage, and no glErrors are reported after its steps.

        // Specify the shader arg locations (e.g. their order in the shader code)
        for( int n = 0; n < vShaderArgs.size(); n++ )
            glBindAttribLocation( m_nProgramId, n, vShaderArgs[n].sFieldName.c_str() );

        // Create and bind to a vertex array object, which stores the relationship
        // between the buffer and the input attributes
        glGenVertexArrays( 1, &m_nVaoHandle );
        glBindVertexArray( m_nVaoHandle );

        // Enable the vertex attribute array (we're using an interleaved array, since it's faster)
        glBindBuffer( GL_ARRAY_BUFFER, vShaderArgs[0].nVboId );
        glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, vShaderArgs[0].nVboIndexId );

        // vertex data
        for( int n = 0; n < vShaderArgs.size(); n++ )
        {
            glEnableVertexAttribArray( n );
            glVertexAttribPointer(
                n,
                vShaderArgs[n].nFieldSize,
                GL_FLOAT,
                GL_FALSE,
                vShaderArgs[n].nStride,
                (GLubyte *) NULL + vShaderArgs[n].nFieldOffset
            );
            AppLog::Ref().OutputGlErrors();
        }

    This doesn't render correctly at all. I get a pattern of white specks on screen, in the shape of the terrain rectangle, but there are no regular lines etc. Here's the code I use for rendering:

        void ShaderProgram::Draw()
        {
            using namespace AntiMatter;
            if( ! m_nShaderProgramId || ! m_nVaoHandle )
            {
                AppLog::Ref().LogMsg(
                    "ShaderProgram::Draw() Couldn't draw object, as initialization "
                    "of ShaderProgram is incomplete"
                );
                return;
            }

            glUseProgram( m_nShaderProgramId );
            glBindVertexArray( m_nVaoHandle );
            glDrawArrays( GL_TRIANGLES, 0, m_nNumTris );
            glBindVertexArray( 0 );
            glUseProgram( 0 );
        }

    Can anyone see errors or omissions in either the VAO creation code or the rendering code? Thanks!
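
    For comparison, a minimal interleaved-VAO setup sketch (attribute indices, sizes and offsets are assumed from the CustomVertex layout above; not a drop-in fix). Note that an indexed mesh is normally drawn with glDrawElements and the index count, rather than glDrawArrays and a triangle count:

        const GLsizei stride = sizeof(CustomVertex);
        glBindVertexArray( vao );
        glBindBuffer( GL_ARRAY_BUFFER, vbo );
        glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, ibo );

        glEnableVertexAttribArray( 0 ); // position: 3 floats at offset 0
        glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0 );
        glEnableVertexAttribArray( 1 ); // tex coords: 2 floats at offset 12
        glVertexAttribPointer( 1, 2, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float)) );
        glEnableVertexAttribArray( 2 ); // normal: 3 floats at offset 20
        glVertexAttribPointer( 2, 3, GL_FLOAT, GL_FALSE, stride, (void*)(5 * sizeof(float)) );
        glEnableVertexAttribArray( 3 ); // colour: 4 floats at offset 32
        glVertexAttribPointer( 3, 4, GL_FLOAT, GL_FALSE, stride, (void*)(8 * sizeof(float)) );

        // draw with the index count, not the triangle count:
        glDrawElements( GL_TRIANGLES, numIndices, GL_UNSIGNED_INT, 0 );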

    Read the article

  • TI Launchpad

    - by raysmithequip
    Just thought I would get a couple of notes up here for reference for anyone who is interested... it is now Feb 2011 and I have not been posting here enough to remember this blog. Back in Nov 2010 I ordered the TI Launchpad MSP430, a little target-board kit replete with a mini USB cable, two very inexpensive programmable MCUs, a couple of pin headers with a couple of on-board LEDs, a SPI connector, some on-board jumpers and two programmable micro switches... all for less than $5.00... INCLUDING SHIPPING!! Not bad when the Arduinos are running around $20.00 for the target board, ATmega328 and cable off of eBay... I won't even mention the Microchip PIC right now. Naw, for $5.00 the TI Launchpad kit is about the cheapest fun around... if-uns you're a geek, that is...

    Well, the Launchpad was backordered for almost two months; it came on Xmas eve in fact... I had almost forgotten it!! And really, it was way late and not my idea of an Xmas present for myself. That would have been the Expression Web 4 I bought a few weeks back. With all the holidays, I did not even look at it till last week; in fact I passed the wrapped board around at my local ham club meeting during points of personal privilege... some oohs and ahhs but mostly duhs... I actually ordered it to avoid downloading the huge Code Composer Studio 4 (CCS) that was supposed to be included on the CD. No CD. I had already downloaded IAR, another programming IDE for these little micro bugs.

    In my spare time I toyed with IAR and the Launchpad board, but after about two days of playing delete-the-driver with Windows I decided to just download CCS 4, the code-limited version, and give that a shot... CCS 4 is a good rewrite from the earlier versions; it is based on Eclipse as an IDE and includes the drivers for the MSP430 target board I received in the kit. Once installed, I quickly configured the debugger for the target chip, which was already plugged into the DIP socket at the factory (msp430G2131 from the drop-down list), and clicked OK... I was in!!

    CCS 4 is full of bells and whistles compared to IAR, which I would have preferred for its simplicity. But Code Composer Studio really does have it all!! The code-limited version is free, and of all things will give you a JavaScript editor box. The whole layout in debugger mode reminds me of any modern programmer's IDE... I mean, sure, give me TeX anytime, but you simply must admire all the boxes and options included in the GUI. It was a simple matter to check the assembly code in the flash and RAM memory that came preloaded on the Launchpad kit. Assembly. I am right now looking for my old assembly textbooks... sure, I remember how to use mov and add etc., but a couple of the commands are a little more than vague anymore. Still, these little MCUs are about 50 cents each and might just work in a couple of projects I have lined up for the near future. I may document the code here. Luckily, I plan to write the code in C++ for the main project, but if it has to be assembly, no prob.

    For reference, the program that came already on the 2131 in the kit was a temperature indicator that alternately flashed red and green LEDs and changed the intensity of either depending on whether the temp was rising or falling... neat. Neat enough that it might be worthwhile banging out a little GUI in Windows 7 to test the new user device system calls, maybe put a temp-gauge widget up on the desktop... just to keep from getting bored.

    If you see some assembly code on this blog, you know I was doing something with one of the many MCUs out there... that's all for now, more to follow... a bit later, of course.

    Read the article

  • Best approach to depth streaming via existing codec

    - by Kevin
    I'm working on a development system (and game) intended for games set mostly in static third-person views. We produce our scenery by CG and photographic techniques. Our background art is rendered off-line by a production-grade renderer. To allow the runtime imagery to interact properly with the background art, I wrote a program to convert the depth output by Mental Ray into a texture, and a pixel shader to draw a quad such that the Z data comes from the texture. This technique is working out very well, but now we've decided that some of the camera-angle changes between scenes should be animated. The animation itself is straightforward to produce from our CG models. We intend to encode it with some HD video codec such as H.264.

    The problem is that in order to keep our runtime imagery on the screen, the depth buffer will need to be loaded for each video frame. Because of the bandwidth, the video's depth data will need to be compressed efficiently. I've looked into methods for performing temporal compression of depth info and found an interesting research paper here: http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/depth-streaming.pdf

    The method establishes a mapping between 16-bit depth values and YCbCr values. The mapping is tuned to the properties of existing video codecs in order to maximize the precision of the decoded depths after the YCbCr has undergone video compression. It allows an existing, unmodified video codec to be used on the back end.

    I'm looking at how to pull this off with the least possible work. (This design change was unplanned.) Our game engine itself is native C++, presently for Win32 and DirectX, although we've worked hard to keep platform dependence segregated because we intend other ports. We don't have motion-video facilities in the engine yet, but we will ultimately need them anyway for cinematics. I was planning on using some off-the-shelf motion-video solution we can plug into our engine, and haven't chosen one yet. This new requirement makes selecting one harder since, among other things, we'll now need to bypass colourspace conversion on one of the streams, and will also need to play two streams simultaneously in lockstep, on top of (in some cases) audio on one of them (for the cinematics).

    I'm also wondering whether it's possible (or even useful) to do the conversion from YCbCr to depth in a pixel shader, or whether it's better to just do it on the CPU and separately load the resulting depth values into a locked texture. The conversion unfortunately does involve branching logic per pixel. (There are more naive mappings that don't need branching, but they produce inferior results.) It could be reduced to a table lookup, but the table would be 32MB.

    Programming is second nature to me, but I'm not that experienced with pixel shaders and have zero knowledge of off-the-shelf video solutions. I'd therefore be interested in advice from others who may have dealt more with depth streaming, pixel shaders, and/or off-the-shelf codecs, regarding how feasible the proposed application is and which off-the-shelf video systems out there would best suit this use case.
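
    For intuition only, the naive branch-free packing the poster alludes to looks roughly like the sketch below. This is NOT the paper's tuned mapping; the paper's scheme deliberately spreads precision so it survives chroma subsampling and quantization, which a plain byte split does not:

        // Sketch: split a 16-bit depth into two 8-bit channels.
        // Survives lossless transport, but lossy codec compression of the
        // low byte destroys precision -- hence the paper's mapping.
        vec3 DepthToYCbCr(float depth16)        // depth16 in [0, 65535]
        {
            float hi = floor(depth16 / 256.0);  // top 8 bits -> Y
            float lo = depth16 - hi * 256.0;    // low 8 bits -> Cb
            return vec3(hi / 255.0, lo / 255.0, 0.5);
        }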

    Read the article

  • How to update a uniform variable in GLSL

    - by paj777
    Hi all, I am trying to update the eye position in my shader from my application, but I keep getting error 1281 when I attempt this. I have no problems after the initialization, just when I subsequently try to update the values. Here is my code:

        void GraphicsObject::SendShadersDDS(char vertFile[], char fragFile[], char filename[])
        {
            char *vs = NULL, *fs = NULL;

            vert = glCreateShader(GL_VERTEX_SHADER);
            frag = glCreateShader(GL_FRAGMENT_SHADER);

            vs = textFileRead(vertFile);
            fs = textFileRead(fragFile);

            const char * ff = fs;
            const char * vv = vs;

            glShaderSource(vert, 1, &vv, NULL);
            glShaderSource(frag, 1, &ff, NULL);

            free(vs);
            free(fs);

            glCompileShader(vert);
            glCompileShader(frag);

            program = glCreateProgram();
            glAttachShader(program, frag);
            glAttachShader(program, vert);
            glLinkProgram(program);
            glUseProgram(program);

            LoadCubeTexture(filename, compressedTexture);

            GLint location = glGetUniformLocation(program, "tex");
            glUniform1i(location, 0);
            glActiveTexture(GL_TEXTURE0);

            EyePos = glGetUniformLocation(program, "EyePosition");
            glUniform4f(EyePos, EyePosition.X(), EyePosition.Y(), EyePosition.Z(), 1.0);

            DWORD bob = glGetError(); // All is fine here

            glEnable(GL_DEPTH_TEST);
        }

    And here's the function I call to update the eye position:

        void GraphicsObject::UpdateEyePosition(Vector3d& eyePosition)
        {
            glUniform4f(EyePos, eyePosition.X(), eyePosition.Y(), eyePosition.Z(), 1.0);
            DWORD bob = glGetError(); // bob equals 1281 after this call
        }

    I've tried a few ways of updating the variable and this is the latest incarnation. Thanks for viewing; all comments welcome.
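
    For reference, error 1281 is GL_INVALID_VALUE. The exact cause can't be pinned down from the snippet alone, but one common failure mode in code shaped like this is that glUniform* only affects the program that is current at call time, so it is worth rebinding the program before updating. A hedged sketch, reusing the question's names:

        // Sketch: make the program current before updating its uniforms.
        void GraphicsObject::UpdateEyePosition(Vector3d& eyePosition)
        {
            glUseProgram(program);  // uniforms apply to the bound program
            glUniform4f(EyePos, eyePosition.X(), eyePosition.Y(),
                        eyePosition.Z(), 1.0f);
        }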

    Read the article

  • Workflow for academic research projects, one-step builds, and the Joel Test

    - by Steve
    Working alone on academic research sometimes breeds bad habits. With no one else reading my code, I would write a lot of throw-away code, and I would lose track of intermediate results which, weeks or months later, I wish I had retained. My recent attempts to make my personal workflow conform to the Joel Test raised interesting questions. Academic research has inherently different goals than industrial software development, and therefore some aspects of the Joel Test become less valid. Nevertheless, I find these steps still valuable for academic research:

    - Do you use source control?
    - Can you make a build in one step?
    - Do you have an up-to-date schedule?
    - Do you have a spec?

    Of particular use is the one-step build. I find myself more organized now that I have implemented one: a single script, build.py, that accepts Python code, data, and TeX as inputs. The outputs are results, figures, and a paper with all the results filled in. (Yes, I know "build" is probably not accurate in this context, but you get the idea.) By consolidating many small steps into one, I am not backtracking as much as I used to... but I'm sure there is still room for improvement. A sketch of such a script appears below.

    Question: for research projects, which steps of the Joel Test do you still value? Do you have a one-step build? If so, what does yours consist of, i.e., what inputs does it accept, and what output does it generate?
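
    For concreteness, a hedged sketch of what a build.py of this kind can look like; every file name and command here is a placeholder, not the poster's actual setup:

        #!/usr/bin/env python
        # Sketch of a one-step research build: code + data -> results,
        # figures, and a compiled paper. Paths are hypothetical.
        import subprocess

        def run(cmd):
            print(">>", " ".join(cmd))
            subprocess.check_call(cmd)

        run(["python", "analysis.py"])   # produces results/ and figures/
        run(["pdflatex", "paper.tex"])   # first pass
        run(["bibtex", "paper"])         # bibliography
        run(["pdflatex", "paper.tex"])   # resolve citations
        run(["pdflatex", "paper.tex"])   # resolve cross-references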

    Read the article

  • LaTeX limitation?

    - by Jayen
    Hi, I've hit an annoying problem in LaTeX. I've got a TeX file of about 1000 lines. I've already got a few figures, but when I try to add another figure, it barfs with:

        ! Undefined control sequence.
        <argument> ... \sf@size \z@ \selectfont \@currbox
        l.937 \begin{figure}[t]

    If I move the figure to other parts of the file, I get similar errors on different lines:

        ! Undefined control sequence.
        <argument> ... \sf@size \z@ \selectfont \@currbox
        l.657 \paragraph {A Centering Algorithm}

    If I comment out the figure, all is OK:

        %\begin{figure}[t]
        %  \caption{Example decision tree, from Reiter and Dale [2000]}
        %  \label{fig:relation-decision-tree}
        %  \centering
        %  \includegraphics[keepaspectratio=true]{./relation-decision-tree.eps}
        %\end{figure}

    If I keep just the begin and end, like:

        \begin{figure}%[t]
        %  \caption{Example decision tree, from Reiter and Dale [2000]}
        %  \label{fig:relation-decision-tree}
        %  \centering
        %  \includegraphics[keepaspectratio=true]{./relation-decision-tree.eps}
        \end{figure}

    I get:

        ! Undefined control sequence.
        <argument> ... \sf@size \z@ \selectfont \@currbox
        l.942 \end {figure}

    At first I thought maybe LaTeX had hit some limit, and I tried playing with the ulimits, but that didn't help. Any ideas? I've got other figures with graphics already. My preamble looks like:

        \documentclass[acmcsur,acmnow]{acmtrans2n}
        \usepackage{array}
        \usepackage{lastpage}
        \usepackage{pict2e}
        \usepackage{amsmath}
        \usepackage{varioref}
        \usepackage{epsfig}
        \usepackage{graphics}
        \usepackage{qtree}
        \usepackage{rotating}
        \usepackage{tree-dvips}
        \usepackage{mdwlist}
        \makecompactlist{quote*}{quote}
        \usepackage{verbatim}
        \usepackage{ulem}
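
    Not a diagnosis, but errors like this are usually cornered by bisecting: rebuild a minimal document with the same class and only one figure, then re-add the packages from the preamble one at a time until the failure reappears. A sketch:

        \documentclass[acmcsur,acmnow]{acmtrans2n}
        \usepackage{graphics}   % re-add the remaining packages one by one
        \begin{document}
        \begin{figure}[t]
          \caption{test}
          \centering
          \includegraphics{./relation-decision-tree.eps}
        \end{figure}
        \end{document}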

    Read the article

  • [SOLVED] How to create an FBO with a stencil buffer in OpenGL ES 2.0?

    - by Alphones
    I need the stencil buffer on the 3GS to render planar shadows, and polygon offset doesn't work perfectly; it still has z-fighting problems. So I use the stencil buffer to make the shadow correct. It works on the win32 gles2 emulator, but not on the iPhone. After I added a post effect to the whole scene, the stencil buffer wouldn't work even on the win32 gles2 emulator. And when I tried to attach a stencil buffer to the FBO, the screen turned black. Here's my code:

        glGenRenderbuffers(1, &dbo);   // depth buffer
        glBindRenderbuffer(GL_RENDERBUFFER, dbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24_OES, widthGL, heightGL);

        glGenRenderbuffers(1, &sbo);   // stencil buffer
        glBindRenderbuffer(GL_RENDERBUFFER, sbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, widthGL, heightGL);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, dbo);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, sbo); // this makes the whole screen black

    The eglContext is created with STENCIL_SIZE=8, and it works without RTT. I tried changing the RenderbufferStorage for both the depth buffer and the stencil buffer, but none of the combinations works. Is there anything I have missed? Does the stencil buffer have to be packed with the depth buffer? (I cannot find things like GL_DEPTH24_STENCIL8...)
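
    For reference, many GLES2 implementations only expose FBO stencil through the OES_packed_depth_stencil extension, where depth and stencil share a single renderbuffer; the token is GL_DEPTH24_STENCIL8_OES rather than GL_DEPTH24_STENCIL8. A hedged sketch (check the extension string before relying on it):

        // Sketch: combined depth+stencil renderbuffer attached to both points.
        GLuint dsbo;
        glGenRenderbuffers(1, &dsbo);
        glBindRenderbuffer(GL_RENDERBUFFER, dsbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_OES,
                              widthGL, heightGL);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, dsbo);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT,
                                  GL_RENDERBUFFER, dsbo);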

    Read the article

  • Scaling Literate Programming?

    - by Tetha
    Greetings. I have been looking at Literate Programming a bit now, and I do like the idea behind it: you basically write a little paper about your code and write down as much as possible of the design decisions, the code probably surrounding the module, the inner workings of the module, assumptions, and conclusions resulting from the design decisions, potential extensions; all this can be written down in a nice way using TeX. Granted, the first point: it is documentation. It must be kept up to date, but that should not be that bad, because your change should have a justification, and you can write that down.

    However, how does Literate Programming scale to a larger degree? Overall, Literate Programming is still just text. Very human-readable text, of course, but still text, and thus it is hard to follow large systems. For example, I reworked large parts of my compiler to use some magic to chain compile steps together, because code like "x.register_follower(y); y.register_follower(z); y.register_follower(a); ..." got really unwieldy, and changing that to a chained notation (x -> y -> z, a) made it a bit better, even though this is at its breaking point, too.

    So, how does Literate Programming scale to larger systems? Does anyone try to do that? My thought would be to use LP to specify components that communicate with each other using event streams, and to chain all of these together using a subset of graphviz. This would be a fairly natural extension to LP, as you can extract documentation (a dataflow diagram) from the net and also generate code from it really well. What do you think of it? -- Tetha.

    Read the article

  • Rendering LaTeX on third-party websites

    - by A. Rex
    There are some sites on the web that render LaTeX into some more readable form, such as Wikipedia, some Wordpress blogs, and MathOverflow. They may use images, MathML, jsMath, or something like that. There are other sites on the web where LaTeX appears inline and is not rendered, such as the arXiv, various math forums, or my email. In fact, it is quite common to see an arXiv paper's abstract with raw LaTeX in it, e.g. this paper. Is there a plugin available for Firefox, or would it be possible to write one, that renders LaTeX within pages that do not provide a rendering mechanism themselves? Some notes:

    - It may be impossible to render some of the code, because authors often copy-paste code directly from their source TeX files, which may contain things like "\cite{foo}" or undefined commands. These should be left alone.
    - This question is a repost of a question from MathOverflow that was closed for not being related to math.
    - I program a lot, but Javascript is not my specialty, so comments along the lines of "look at this library" are not particularly helpful to me (but may be to others).

    Read the article

  • LaTeX book class: Twosided document with wrong margins

    - by fgysin
    I am trying to write my thesis in LaTeX, but I cannot get the layout straight. I'm using the following document class:

        \documentclass[11pt,a4paper,twoside,openright]{book}

    My problem is: on the odd-numbered pages there is a big right margin and a small left margin; it should be the other way round (for binding & stuff). I am a little puzzled by this. Am I just too stupid to see the obvious? The odd page numbers appear on the 'right' page of a bound document, so there needs to be a larger margin on the left for binding, right? Why does LaTeX not behave like this? Here is the full code for a small TeX file that shows my problem:

        \documentclass[11pt,a4paper,twoside,openright]{book}
        \begin{document}
        \chapter{blah}
        Lorem ipsum ius et accumsan tractatos, aliquip deterruisset cu usu.
        Ea soleat eirmod nostrud eum, est ceteros similique ad, at mea tempor
        petentium. At decore neglegentur quo, ea ius doming dictas facilis,
        duo ut porro nostrum suavitate.
        \end{document}
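
    For reference, this is the standard typographic convention rather than a bug: the two inner margins of a spread meet at the spine and are read as a single gutter, so each is kept narrower than the outer margin. Extra space for the binding is normally added on top of that, for example with the geometry package (a sketch; the offset value is a placeholder):

        \documentclass[11pt,a4paper,twoside,openright]{book}
        \usepackage[bindingoffset=1cm]{geometry} % extra inner space for the spine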

    Read the article

  • 3 index buffers

    - by bobobobo
    So, in both D3D and OpenGL there's the ability to draw from an index buffer. The OBJ file format, however, does something weird. It specifies a bunch of vertices like:

        v -21.499660 6.424470 4.069845
        v -25.117170 6.418100 4.068025
        v -21.663851 8.282170 4.069585
        v -21.651890 6.420180 4.068675
        v -25.128481 8.281520 4.069585

    Then it specifies a bunch of normals like:

        vn 0.196004 0.558984 0.805680
        vn -0.009523 0.210194 -0.977613
        vn -0.147787 0.380832 -0.912757
        vn 0.822108 0.567581 0.044617
        vn 0.597037 0.057507 -0.800150
        vn 0.809312 -0.045432 0.585619

    Then it specifies a bunch of tex coords like:

        vt 0.1225 0.5636
        vt 0.6221 0.1111
        vt 0.4865 0.8888
        vt 0.2862 0.2586
        vt 0.5865 0.2568
        vt 0.1862 0.2166

    THEN it specifies "faces" on the model like:

        f 1/2/5 2/3/7 8/2/6
        f 5/9/7 6/3/8 5/2/1

    So, in trying to render this with vertex buffers: in OpenGL I can use glVertexPointer, glNormalPointer and glTexCoordPointer to set pointers to each of the vertex, normal and texture-coordinate arrays respectively. But when it comes down to drawing with glDrawElements, I can only specify ONE set of indices, namely the indices it should use when visiting the vertices. OK, then what? I still have three sets of indices to visit.

    In D3D it's much the same: I can set up three streams, one for vertices, one for texcoords, and one for normals, but when it comes to using IDirect3DDevice9::DrawIndexedPrimitive, I can still only specify ONE index buffer, which will index into the vertices array.

    So, is it possible to draw from vertex buffers using different index arrays for each of the vertex, texcoord, and normal buffers (in EITHER D3D or OpenGL)?
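
    For reference, the usual approach is to expand the OBJ's three index streams into a single stream by duplicating a vertex for each unique v/vt/vn combination. A hedged C++ sketch; makeVertex and the types are placeholders for the OBJ-parsing side:

        // Sketch: collapse OBJ's per-face v/vt/vn triplets into one index
        // per vertex, duplicating vertices where triplets differ.
        #include <array>
        #include <map>
        #include <vector>

        struct Vertex { float pos[3], uv[2], normal[3]; };

        Vertex makeVertex(int v, int vt, int vn); // placeholder: copies from parsed OBJ arrays

        std::vector<Vertex>                   vertices;
        std::vector<unsigned>                 indices;
        std::map<std::array<int,3>, unsigned> cache;   // (v,vt,vn) -> index

        unsigned getIndex(int v, int vt, int vn)
        {
            std::array<int,3> key = {{ v, vt, vn }};
            auto it = cache.find(key);
            if (it != cache.end())
                return it->second;
            vertices.push_back(makeVertex(v, vt, vn));
            unsigned idx = (unsigned)vertices.size() - 1;
            cache[key] = idx;
            return idx;
        }
        // Each face "f a/b/c d/e/f g/h/i" then yields three getIndex calls,
        // whose results are appended to 'indices' for glDrawElements.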

    Read the article

  • How to change the margin of the code's frame in showexpl.sty when numbers=none?

    - by suugaku
    Hi all, I want to align the code's frame with the formatted text's frame. If numbers=left, the mission is accomplished by configuring xleftmargin, xrightmargin, framesep, and numbersep. However, when I set numbers=none, I cannot accomplish this by configuring those properties. Regardless of the values set, the code's frame remains unchanged and expands beyond the \textwidth. Please see my code snippet below. Thank you in advance.

        \documentclass[a4paper,11pt,twoside,final,dvips]{book}
        \usepackage{xcolor}
        \usepackage{showexpl}
        \lstset{%
            breaklines=true,
            breakindent=0pt,
            basicstyle=\color{magenta}\ttfamily\tiny,
            keywordstyle=\color{blue}\sffamily\bfseries,
            identifierstyle=\color{black},
            commentstyle=\color{cyan}\itshape,
            stringstyle=\rmfamily,
            showstringspaces=false,
            tabsize=2,
            %========== FRAME ==========
            frame=single,
            framerule=0.4pt,
            rulecolor=\color{red},
            framesep=3pt,
            %========== MARGIN ==========
            xleftmargin=3.4pt,
            xrightmargin=3.4pt,
            %========== NUMBER ==========
            numbers=left,
            numberstyle=\color{red}\tiny,
            numbersep=6.4pt,
            explpreset={language={[LaTeX]TeX},pos=b}%
        }
        \usepackage{lipsum}
        \usepackage{printlen}

        %------------------------- MARKER ------------------------
        \newlength{\halftextwidth}
        \setlength{\halftextwidth}{\textwidth*\real{0.5}}
        \newcommand*{\MARKER}%
        {%
            \uselengthunit{cm}
            \par\noindent\strut\vrule%
            \hrulefill~%
            {\color{red}\scriptsize half text area: \printlength{\halftextwidth}}%
            ~\hrulefill\vrule%
            \hrulefill~%
            {\color{red}\scriptsize half text area: \printlength{\halftextwidth}}%
            ~\hrulefill\vrule%
            \marginpar%
            {%
                \strut\vrule\hrulefill~%
                {\color{red}\scriptsize margin area: \rndprintlength{\marginparwidth}}%
                ~\hrulefill\vrule%
            }%
            \par%
        }%

        \begin{document}
        \chapter{I love \LaTeXe, but $\cdots$}

        \section{With numbers=left}
        \vfill
        \MARKER
        \begin{LTXexample}
        \lipsum[1-1]
        \end{LTXexample}
        \MARKER
        \vfill
        \pagebreak

        \section{With numbers=none}
        \vfill
        \MARKER
        % Any values that are assigned to xleftmargin or xrightmargin
        % have no effect on the margin of the code's frame.
        \begin{LTXexample}[numbers=none,xleftmargin=10.4pt,xrightmargin=10.4pt]
        \lipsum[1-1]
        \end{LTXexample}
        \MARKER
        \vfill
        \end{document}

    Read the article

  • Suppress indentation after environment in LaTeX

    - by David
    I'm trying to create a new environment in my LaTeX document where indentation in the paragraph following the environment is suppressed. I have been told (TeXbook and LaTeX source) that by setting \everypar to {\setbox0\lastbox}, the TeX typesetter will execute this at the beginning of the next paragraph and thus remove the indentation:

        \everypar{\setbox0\lastbox}

    So this is what I do, but to no effect (the following paragraph is still indented):

        \newenvironment{example}
          {\begin{list}
             {}
             {\setlength\leftmargin{2em}}}
          {\end{list}\everypar{\setbox0\lastbox}}

    I have studied LaTeX's internals as well as I could manage. It seems that the \end routine says \endgroup and \par at some point, which may be the reason LaTeX ignores my \everypar setting. \global doesn't help either. I know about \noindent but want to do this automatically. Example document fragment:

        This is paragraph text. This is paragraph text, too.

        \begin{example}
          \item This is the first item in the list.
          \item This is the second item in the list.
        \end{example}

        This is more paragraph text. I don't want this indented, please.

    Internal routines and switches of interest seem to be \@endpetrue, \@endparenv and others. Thanks for your help.

    Read the article

  • OpenGL iPhone SDK: How to tell if you're touching an object on screen?

    - by TheGambler
    First is my touchesBegan function, and then the struct that stores the values for my object. I have an array of these objects, and when I touch the screen I'm trying to figure out whether I'm touching an object. I don't know if I need to iterate through all my objects and work out whether I'm touching one that way, or whether there is an easier, more efficient way. How is this usually handled?

        -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event{
            [super touchesEnded:touches withEvent:event];
            UITouch* touch = ([touches count] == 1 ? [touches anyObject] : nil);
            CGRect bounds = [self bounds];
            CGPoint location = [touch locationInView:self];
            location.y = bounds.size.height - location.y;
            float xTouched = location.x/20 - 8 + ((int)location.x % 20)/20;
            float yTouched = location.y/20 - 12 + ((int)location.y % 20)/20;
        }

        typedef struct object_tag   // Create A Structure Called Object
        {
            int tex;                // Integer Used To Select Our Texture
            float x;                // X Position
            float y;                // Y Position
            float z;                // Z Position
            float yi;               // Y Increase Speed (Fall Speed)
            float spinz;            // Z Axis Spin
            float spinzi;           // Z Axis Spin Speed
            float flap;             // Flapping Triangles :)
            float fi;               // Flap Direction (Increase Value)
        } object;
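
    A common way to handle this is indeed to iterate the array and hit-test each object's bounds in the same coordinate space as the touch; for small object counts the brute-force loop is fine. A hedged sketch (the half-extents here are assumed; a real object would carry its own size):

        // Sketch: return the first object whose bounds contain the touch.
        int PickObject(object *objects, int count, float x, float y)
        {
            const float hw = 1.0f, hh = 1.0f;  // assumed half-width/height
            for (int i = 0; i < count; ++i) {
                if (x >= objects[i].x - hw && x <= objects[i].x + hw &&
                    y >= objects[i].y - hh && y <= objects[i].y + hh)
                    return i;   // index of the object under the touch
            }
            return -1;          // nothing hit
        }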

    Read the article

  • Rendering LaTeX on third-party websites?

    - by A. Rex
    There are some sites on the web that render LaTeX into some more readable form, such as Wikipedia, some Wordpress blogs, and MathOverflow. They may use images, MathML, jsMath, or something like that. There are other sites on the web where LaTeX appears inline and is not rendered, such as the arXiv, various math forums, or my email. In fact, it is quite common to see an arXiv paper's abstract with raw LaTeX in it, e.g. this paper. Is there a plugin available for Firefox, or would it be possible to write one, that renders LaTeX within pages that do not provide a rendering mechanism themselves? (The LaTeX would be enclosed within dollar signs, e.g. $\pi$. See the arXiv link above.) Some notes:

    - It may be impossible to render some of the code, because authors often copy-paste code directly from their source TeX files, which may contain things like "\cite{foo}" or undefined commands. These should be left alone.
    - This question is a repost of a question from MathOverflow that was closed for not being related to math. There is one answer there, which is helpful, but perhaps Stack Overflow can provide better answers.
    - I program a lot, but Javascript is not my specialty, so comments along the lines of "look at this library" are not particularly helpful to me (but may be to others).
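
    One plausible route (a sketch, not a tested add-on): a Greasemonkey-style user script that injects a TeX renderer such as jsMath or MathJax into every page and lets it process $...$ spans. The script URL below is a placeholder, not a real endpoint:

        // Sketch: inject a TeX rendering engine into arbitrary pages.
        var s = document.createElement('script');
        s.type = 'text/javascript';
        s.src = 'http://example.com/MathJax/MathJax.js'; // placeholder URL
        document.getElementsByTagName('head')[0].appendChild(s);
        // Fragments like \cite{foo} or undefined macros would need to be
        // filtered out before rendering, per the first note above.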

    Read the article

  • pdfLaTeX + memoir class compile error

    - by Sebastien
    Hi, I'm in the middle of writing my thesis and was using KOMA-Script. The document compiles just fine. I stumbled upon the memoir class yesterday and was thinking of giving it a try, so here I am trying to compile with this class instead of KOMA-Script. The first compilation is OK. On the second compilation, the document does not build:

        (./fourier/fourier.tex [98]
        ! Undefined control sequence.
        <argument> ... C\protect \noexpand \protect \bond \protect \noexpand \protec...
        l.1 \chapter {Homogénéisation numérique par transformée de Fourier rapide}
        ?

    It is apparently not connected to hyperref (and its memhfixc fix), since I've commented that one out. Here is the preamble of my document. Any thoughts? Thanks in advance, S

        %\documentclass[draft, 11pt, a4paper, chapterprefix]{scrreprt}
        \documentclass[draft, 11pt, a4paper]{memoir}
        \usepackage[authoryear, round]{natbib}
        \usepackage[french]{babel}
        \usepackage[latin1]{inputenc}
        \usepackage{pdfsync}
        \usepackage[version=3]{mhchem}
        \usepackage{pgf}
        \usepackage{microtype}
        \usepackage{txfonts} % Times fonts
        \usepackage[notref, notcite]{showkeys}
        \usepackage{amsmath}
        \usepackage{amssymb}
        \usepackage[bvec]{sbmacros}
        \usepackage{micromechanics}
        \usepackage{pgfcad}
        %\usepackage[breaklinks=true]{hyperref}
        %\usepackage{memhfixc} % ensures compatibility between memoir and hyperref
        %\newcommand{\url}[1]{\texttt{#1}}

        % KOMA-Script options
        % \addtokomafont{caption}{\small}
        % \pagestyle{headings}

    Read the article

  • What happens when value types are created?

    - by Bob
    I'm developing a game using XNA and C# and was attempting to avoid calling new struct() type code each frame, as I thought it would freak the GC out. "But wait," I said to myself, "struct is a value type. The GC shouldn't get called then, right?" Well, that's why I'm asking here. I only have a very vague idea of what happens to value types. If I create a new struct within a function call, is the struct created on the stack? Will it simply get pushed and popped, with performance not taking a hit? Further, would there be some memory limit or performance implications if, say, I need to create many instances in a single call? Take, for instance, this code:

        spriteBatch.Draw(tex, new Rectangle(x, y, width, height), Color.White);

    Rectangle in this case is a struct. What happens when that new Rectangle is created? What are the implications of having to repeat that line many times (say, thousands of times)? Is this Rectangle created, a copy sent to the Draw method, and then discarded (meaning no memory gets eaten up the more Draw is called in that manner in the same function)?

    P.S. I know this may be premature optimization, but I'm mostly curious and wish to have a better understanding of what is happening.
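
    For illustration, a hedged sketch of the field-caching pattern sometimes used when even the per-call copy is unwanted; the GC is not involved either way, since a struct local lives on the stack and is passed to Draw by value (field names here are assumed):

        // Sketch: reuse one Rectangle field instead of constructing a new
        // struct per call. This saves only the constructor call and copy;
        // neither version creates garbage for the collector.
        private Rectangle destRect;

        public void DrawSprite(SpriteBatch spriteBatch)
        {
            destRect.X = x;
            destRect.Y = y;
            destRect.Width = width;
            destRect.Height = height;
            spriteBatch.Draw(tex, destRect, Color.White); // passed by value
        }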

    Read the article

  • Why am I stuck on "Initiating update" when deploying to Google?

    - by michelle
    I have not had any trouble deploying through Eclipse until now. I'm guessing it might have to do with all the stuff I've added today: a folder of .pdf and .tex files (in the war/web-inf directory), a bit of JDO stuff, and a servlet that reads the files in the directory and indexes them into the JDO. Is there any way to find out what exactly the problem is? I currently get stuck at "Initiating update" and the stack trace says "ConnectionReset". Any help or input will be appreciated; I really need to deploy this today. Thanks! Here's the deploy trace:

        Unable to update:
        java.net.SocketException: Connection reset
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
            at java.lang.reflect.Constructor.newInstance(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection$6.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at sun.net.www.protocol.http.HttpURLConnection.getChainedException(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
            at java.net.HttpURLConnection.getResponseCode(Unknown Source)
            at com.google.appengine.tools.admin.ServerConnection.getAuthCookie(ServerConnection.java:315)
            at com.google.appengine.tools.admin.ServerConnection.authenticate(ServerConnection.java:219)
            at com.google.appengine.tools.admin.ServerConnection.send(ServerConnection.java:145)
            at com.google.appengine.tools.admin.ServerConnection.post(ServerConnection.java:81)
            at com.google.appengine.tools.admin.AppVersionUpload.send(AppVersionUpload.java:427)
            at com.google.appengine.tools.admin.AppVersionUpload.beginTransaction(AppVersionUpload.java:241)
            at com.google.appengine.tools.admin.AppVersionUpload.doUpload(AppVersionUpload.java:98)
            at com.google.appengine.tools.admin.AppAdminImpl.update(AppAdminImpl.java:56)
            at com.google.appengine.eclipse.core.proxy.AppEngineBridgeImpl.deploy(AppEngineBridgeImpl.java:271)
            at com.google.appengine.eclipse.core.deploy.DeployProjectJob.runInWorkspace(DeployProjectJob.java:148)
            at org.eclipse.core.internal.resources.InternalWorkspaceJob.run(InternalWorkspaceJob.java:38)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
        Caused by: java.net.SocketException: Connection reset
            at java.net.SocketInputStream.read(Unknown Source)
            at java.io.BufferedInputStream.fill(Unknown Source)
            at java.io.BufferedInputStream.read1(Unknown Source)
            at java.io.BufferedInputStream.read(Unknown Source)
            at sun.net.www.http.HttpClient.parseHTTPHeader(Unknown Source)
            at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
            at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.getHeaderFieldKey(Unknown Source)
            at com.google.appengine.tools.util.ClientCookieManager.readCookies(ClientCookieManager.java:123)
            at com.google.appengine.tools.admin.ServerConnection.connect(ServerConnection.java:340)
            at com.google.appengine.tools.admin.ServerConnection.getAuthCookie(ServerConnection.java:314)
            ... 11 more

    Read the article

  • Sweave can't see a vector if run from a function?

    - by PaulHurleyuk
    I have a function that sets a vector to a string, copies a Sweave document with a new name, and then runs that Sweave. Inside the Sweave document I want to use the vector I set in the function, but it doesn't seem to see it. (Edit: I changed this function to use tempdir() as suggested by Dirk.)

    I created a Sweave file test_sweave.rnw:

        %
        \documentclass[a4paper]{article}
        \usepackage[OT1]{fontenc}
        \usepackage{Sweave}
        \begin{document}
        \title{Test Sweave Document}
        \author{gb02413}
        \maketitle
        <<>>=
        ls()
        Sys.time()
        print(paste("The chosen study was ",chstud,sep=""))
        @
        \end{document}

    and I have this function:

        onOK <- function(){
            chstud <- "test"
            message(paste("Chosen Study is ",chstud,sep=""))
            newfile <- paste(chstud,"_report",sep="")
            mypath <- paste(tempdir(),"\\",sep="")
            setwd(mypath)
            message(paste("Copying test_sweave.Rnw to ",
                          paste(mypath,newfile,".Rnw",sep=""),sep=""))
            file.copy("c:\\local\\test_sweave.Rnw",
                      paste(mypath,newfile,".Rnw",sep=""),
                      overwrite=TRUE)
            Sweave(paste(mypath,newfile,".Rnw",sep=""))
            require(tools)
            texi2dvi(file = paste(mypath,newfile,".tex",sep=""), pdf = TRUE)
        }

    If I run the code from the function directly, the resulting file has this output for ls():

        > ls()
        [1] "chstud"  "mypath"  "newfile" "onOK"

    However, if I call onOK() I get this output:

        > ls()
        [1] "onOK"

    and the print(...chstud...) call generates an error. I suspect this is an environment problem, but I assumed that because the call to Sweave occurs within the onOK function, it would be in the same environment and would see all the objects created within the function. How can I get the Sweave process to see the chstud vector? Thanks, Paul.
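
    For reference, Sweave evaluates code chunks in the global environment by default, not in the frame of the function that called Sweave(), which matches the ls() output above. A hedged sketch of the usual workaround, exporting the value before running Sweave:

        # Sketch: make chstud visible to the chunks by assigning it into
        # the global environment first (other names from the question).
        onOK <- function(){
            chstud <- "test"
            assign("chstud", chstud, envir = globalenv())
            Sweave(paste(mypath, newfile, ".Rnw", sep = ""))
        }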

    Read the article

  • Sending a list of mails with SmtpClient

    - by Malcolm Frexner
    I use the SendCompletedEventHandler of SmtpClient when sending a list of emails. The SendCompletedEventHandler is only called once all the emails in the list have already been sent. I expected SendCompletedEventHandler to be called as each email is sent. Is there something wrong in my code?

        public void SendAllNewsletters(List<string> recipients)
        {
            string mailText = "My Text";
            foreach (string recipient in recipients)
            {
                // if this loop takes 10 min, then the first call to
                // SendCompletedCallback is after 10 min
                SendNewsletter(mailText, recipient);
            }
        }

        public bool SendNewsletter(string mailText, string emailaddress)
        {
            SmtpClient sc = new SmtpClient(_smtpServer, _smtpPort);
            System.Net.NetworkCredential SMTPUserInfo =
                new System.Net.NetworkCredential(_smtpuser, _smtppassword);
            sc.Credentials = SMTPUserInfo;
            sc.SendCompleted += new SendCompletedEventHandler(SendCompletedCallback);

            MailMessage mm = null;
            mm = new MailMessage(mailText, emailaddress);
            mm.IsBodyHtml = true;
            mm.Priority = MailPriority.Normal;
            mm.Subject = "Something";
            mm.Body = "Something";
            mm.SubjectEncoding = Encoding.UTF8;
            mm.BodyEncoding = Encoding.UTF8;

            // Mail
            string userState = emailaddress;
            sc.SendAsync(mm, userState);
            return true;
        }

        public void SendCompletedCallback(object sender, AsyncCompletedEventArgs e)
        {
            // Get the unique identifier for this asynchronous operation.
            String token = (string)e.UserState;
            if (e.Error != null)
            {
                _news.SetNewsletterEmailsisSent(e.UserState.ToString(), _newslettername, false, e.Error.Message);
            }
            else
            {
                _news.SetNewsletterEmailsisSent(e.UserState.ToString(), _newslettername, true, string.Empty);
            }
        }
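
    For reference, this timing is expected with SendAsync: the call returns immediately and the completion callbacks fire only as each send actually finishes, which can easily be after the (fast) loop has ended. If a per-message result in loop order is what's wanted, a hedged sketch using the synchronous Send instead (BuildMessage is a placeholder for the message construction above):

        // Sketch: synchronous sending blocks per message, but reports
        // success or failure immediately and in order.
        foreach (string recipient in recipients)
        {
            try
            {
                sc.Send(BuildMessage(mailText, recipient));
                _news.SetNewsletterEmailsisSent(recipient, _newslettername, true, string.Empty);
            }
            catch (SmtpException ex)
            {
                _news.SetNewsletterEmailsisSent(recipient, _newslettername, false, ex.Message);
            }
        }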

    Read the article

  • Silverlight 4: StackPanel doesn't resize, when content gets more narrow

    - by Aaginor
    I am using Silverlight 4 with Blend 4. I have a (horizontal) StackPanel that includes some TextBoxes and a Button. The StackPanel is set to stretch to the size that the content uses. The TextBoxes are on autosize too. When I add text to the TextBoxes, the TextBox size grows, and the StackPanel grows too. So far so good. When I remove text from the TextBoxes, the TextBox size shrinks (as expected), but the StackPanel size doesn't. Is there any trick to make the StackPanel change size when the content (the TextBoxes) gets smaller? Thanks in advance, Frank. Here is the XAML for the UserControl:

        <Grid x:Name="LayoutRoot">
            <StackPanel x:Name="StackPanelBorder" Orientation="Horizontal">
                <TextBox x:Name="TextBoxCharacteristicName" TextWrapping="Wrap" Text="Tex">
                </TextBox>
                <TextBox x:Name="TextBoxSep" TextWrapping="Wrap" Text="=" IsReadOnly="True">
                </TextBox>
                <Button x:Name="ButtonRemove" Content="-" Click="ButtonAddOrRemove_Click">
                </Button>
            </StackPanel>
        </Grid>
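
    One workaround sometimes suggested (a sketch, not verified against every Silverlight 4 build) is to force a fresh measure pass when the text changes, e.g. from a TextChanged handler wired up in the XAML above:

        // Sketch: invalidate measure so the panel re-measures shrunken content.
        private void TextBoxCharacteristicName_TextChanged(object sender, TextChangedEventArgs e)
        {
            TextBoxCharacteristicName.InvalidateMeasure();
            StackPanelBorder.InvalidateMeasure();
        }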

    Read the article

  • Which technology should I use to transform my LaTeX documents into HTML documents?

    - by Matthias Günther
    Hey, I want to write a little program that transforms my TeX files into HTML. I want to parse the documents and turn the macros (the built-in ones and of course my own) into HTML pieces. Here are my requirements:

    - predefined rules (e.g. \begin{itemize} \item text \end{itemize} becomes <br> <p>text</p> <br/>)
    - defining my own CSS style
    - ability to convert formulas (extract the formulas, load them into an image creator and then save the jpg/png)
    - easy to maintain and concise

    I know there are several technologies out there, but I don't know exactly which is the best for me. Here are the technologies that come to mind:

    - Ruby (I/O is easy, formula loading via webrat)
    - XML XSLT (I don't think so; that's just overhead)
    - Perl (there are many libs out there, but I'm not quite familiar with it)
    - bash (I worked with sed and was surprised how easy it was to work with regular expressions)
    - latex2html etc. (these converters won't work for me, and they don't give me freedom in parsing)

    Any suggestions, hints and comments are welcome. Thanks for your time, folks.
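
    A minimal sketch of the rule-table idea in Python (rules deliberately simplified; real TeX with nesting and custom macros needs an actual parser, which is exactly where latex2html-style tools get complicated):

        # Sketch: regex rule table mapping simple macros to HTML.
        import re

        RULES = [
            (re.compile(r'\\begin\{itemize\}'),  '<ul>'),
            (re.compile(r'\\end\{itemize\}'),    '</ul>'),
            (re.compile(r'\\item\s+([^\n]*)'),   r'<li>\1</li>'),
            (re.compile(r'\\textbf\{([^}]*)\}'), r'<b>\1</b>'),
        ]

        def tex_to_html(tex):
            for pattern, repl in RULES:
                tex = pattern.sub(repl, tex)
            return tex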

    Read the article

  • Unix: cannot add "\\ \n" even with escaping to the end of line

    - by HH
    I am trying to convert clean columnwise data to tables in TeX. I am unable to get "\\ \n" at the end of each line, even with escaping. Please see the command at the end.

    Data:

        $ echo `. ./bin/addTableTexTags.sh < .data_3`
        10.31 & 8.50 & 7.40 10.34 & 8.53 & 7.81 8.22 & 8.62 & 7.78 10.16 & 8.53 & 7.44 10.41 & 8.38 & 7.63 10.38 & 8.57 & 8.03 10.13 & 8.66 & 7.41 8.50 & 8.60 & 7.15 10.41 & 8.63 & 7.21 8.53 & 8.53 & 7.12

        $ cat .data_3
        10.31 8.50 7.40
        10.34 8.53 7.81
        8.22 8.62 7.78
        10.16 8.53 7.44
        10.41 8.38 7.63
        10.38 8.57 8.03
        10.13 8.66 7.41
        8.50 8.60 7.15
        10.41 8.63 7.21
        8.53 8.53 7.12

    addTableTexTags.sh:

        #!/bin/bash
        sed -e "s@[[:space:]]@\t\&\t@g" -e "s@[[:space:]]*&*[[:space:]]*\$@\t \\ \\\\n@g"

    I tried escaping "\\" with "/" here and there, but I cannot get a line ending with "\\ \n".
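
    For reference, a sketch that sidesteps the double-quote escaping by single-quoting the sed script: inside single quotes the shell passes backslashes through untouched, and in a sed replacement a run of four backslashes emits the literal two-backslash TeX row terminator. The \n is unnecessary, since sed is already line-based:

        # Sketch: whitespace runs -> " & ", then append " \\" to each line.
        sed -e 's@[[:space:]]\{1,\}@ \& @g' -e 's@$@ \\\\@' < .data_3

        # Also note: an unquoted command substitution collapses newlines;
        # quote it to keep one table row per line:
        echo "$(./bin/addTableTexTags.sh < .data_3)"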

    Read the article
