Search Results

Search found 4141 results on 166 pages for 'render'.


  • How do I use link_to_remote to pass data to, and then render, a partial in Rails?

    - by Angela
    I want to create several link_to_remote links that are campaign names:

        <% @campaigns.each do |campaign| %>
          <!-- link_to_remote(name, options = {}, html_options = nil) -->
          <%= link_to_remote(campaign, :update => "campaign_todo", :url => %>
        <% end %>

    I want the output to update on the page to render a partial, which runs a loop through the values associated with the campaign. The API docs say this will render a partial, but I'm not clear where the name of the :partial template is passed in, either here or in the controller. Thanks.
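    A minimal sketch of one common pattern, assuming a CampaignsController with a todos action and a _todo_list partial (all three names are illustrative, not from the question): the :url points at a controller action, and that action names the partial in its render call.

        <!-- View: each link fires an AJAX request and updates the campaign_todo div. -->
        <% @campaigns.each do |campaign| %>
          <%= link_to_remote(campaign.name,
                :update => "campaign_todo",
                :url => { :controller => "campaigns", :action => "todos", :id => campaign }) %>
        <% end %>

        # Controller: the :partial name is supplied here, not in link_to_remote.
        class CampaignsController < ApplicationController
          def todos
            @campaign = Campaign.find(params[:id])
            render :partial => "todo_list", :locals => { :campaign => @campaign }
          end
        end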

    Read the article

  • How to efficiently render different things at different times in a game?

    - by Baqjohanson
    Sorry for the ambiguous title. What I am wondering is: what is an efficient way to alternate rendering between, let's say, a main menu, an options menu, and "in the game"? The only two ways I've come up with so far are to have one render function, with code for each part (menu, ...) and a variable to control what gets drawn, or to have multiple render functions and use a function pointer to point to the appropriate one, then just call through the function pointer. I always wonder how more professional games do it.
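    A minimal sketch of the second variant, with illustrative names: one render function per state and a table of function pointers indexed by the current state. Many engines generalize this into a stack of state objects with virtual update/render methods, which also handles things like a pause screen layered over the game.

        #include <cstdio>

        enum GameState { STATE_MENU, STATE_OPTIONS, STATE_PLAYING, STATE_COUNT };

        // One render function per state; all share the same signature.
        void renderMenu()    { std::puts("drawing main menu"); }
        void renderOptions() { std::puts("drawing options menu"); }
        void renderGame()    { std::puts("drawing game world"); }

        // Dispatch table indexed by the current state.
        void (*renderTable[STATE_COUNT])() = { renderMenu, renderOptions, renderGame };

        GameState currentState = STATE_MENU;

        void render() {
            renderTable[currentState]();  // one indirect call per frame
        }

        int main() {
            render();                      // draws the menu
            currentState = STATE_PLAYING;
            render();                      // draws the game world
        }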

    Read the article

  • What is the difference between "render a view" and send the response using the Response's method "sendResponse()"?

    - by Green
    I've asked a question about what "rendering a view" means, and got some answers: "Rendering a view means showing up a View eg html part to user or browser." and "So by rendering a view, the MVC framework has handled the data in the controller and done the backend work in the model, and then sends that data to the View to be output to the user." and "render just means to emit. To print. To echo. To write to some source (probably stdout)." But I still don't understand the difference between rendering a view and using the Response class to send the output to the user via its sendResponse() method. If rendering a view means echoing the output to the user, then why does sendResponse() exist, and vice versa? sendResponse() sends the headers and, after the headers, outputs the body. Do they solve the same task, just in different ways? What is the difference?

    Read the article

  • Pictures rendered from above and below using an orthographic camera do not match

    - by Roy T.
    I'm using an orthographic camera to render slices of a model (in order to voxelize it). I render each slice both from above and below in order to determine what is inside each slice. The model I render is a simple 'T' shape constructed from two cubes. The cubes have the same dimensions and the same Y (height) coordinate. See figure 1 for a render of it in Blender. I render this model once directly from above and once directly from below. My expectation was that I would get exactly the same image (except for mirroring over the y-axis). However, when I render using a very low resolution render target (25x25), the position (in pixels) of the 'T' is different when rendered from above as opposed to rendered from below. See figures 2 and 3. The pink blocks are not part of the original rendering, but I've added them so you can easily count/see the differences.

    Figure 2: the T rendered from above
    Figure 3: the T rendered from below

    This is probably due to what I've read about pixel and texel coordinates, which might be biased to the top-left as seen from the camera. Since I'm using the same 'up' vector for both of my cameras, the bias only shows on the x-axis. I've tried to change the position of the camera and its look-at by what I thought should be half a pixel. I've tried both shifting a single camera and shifting both cameras, and while I see some effect, I am not able to get a pixel-by-pixel perfect copy from both cameras.

    Here I initialize the camera and compute what I believe to be half a pixel. boundsDimX and boundsDimZ form a slightly enlarged bounding box around the model, which I also use as the width and height of the view volume of the orthographic camera.

        Matrix projection = Matrix.CreateOrthographic(boundsDimX, boundsDimZ, 0.5f, sliceHeight + 0.5f);
        Vector3 halfPixel = new Vector3(boundsDimX / (float)renderTarget.Width, 0,
                                        boundsDimY / (float)renderTarget.Height) * 0.5f;

    This is the code where I set the camera position and camera look-ats:

        // Position camera
        if (downwards)
        {
            float cameraHeight = bounds.Max.Y + 0.501f - (sliceHeight * i);
            Vector3 cameraPosition = new Vector3
            (
                boundsCentre.X, // possibly adjust by half a pixel?
                cameraHeight,
                boundsCentre.Z
            );
            camera.Position = cameraPosition;
            camera.LookAt = new Vector3(cameraPosition.X, cameraHeight - 1.0f, cameraPosition.Z);
        }
        else
        {
            float cameraHeight = bounds.Max.Y - 0.501f - (sliceHeight * i);
            Vector3 cameraPosition = new Vector3
            (
                boundsCentre.X,
                cameraHeight,
                boundsCentre.Z
            );
            camera.Position = cameraPosition;
            camera.LookAt = new Vector3(cameraPosition.X, cameraHeight + 1.0f, cameraPosition.Z);
        }

    Main question: now that you've seen all the problems and the code, how do I align both cameras so that they each render exactly the same image (mirrored along the y-axis)?

    Figure 1: the original model rendered in Blender
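    For what it's worth, here is a concrete version of the adjustment hinted at by the "possibly adjust by half a pixel?" comment, as a sketch rather than a verified fix. The assumption is that the downward camera's screen-space x-axis is mirrored relative to the upward one, so the half-pixel offset would have to flip sign between the two passes:

        // Hypothetical: shift the camera and its look-at by half a pixel in X,
        // flipping the direction between the two passes because the downward
        // view mirrors the screen-space x-axis.
        float offsetX = halfPixel.X * (downwards ? 1.0f : -1.0f);
        Vector3 cameraPosition = new Vector3(boundsCentre.X + offsetX, cameraHeight, boundsCentre.Z);
        camera.Position = cameraPosition;
        camera.LookAt = new Vector3(cameraPosition.X,
                                    cameraHeight + (downwards ? -1.0f : 1.0f),
                                    cameraPosition.Z);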

    Read the article

  • XNA 4.0 Point Vertex Rendering

    - by luis
    I have a buffer of about 134 million particles and a very powerful computer to render them smoothly, but I am getting an error when trying to render them as primitive lines: it says I cannot render more than around 1 million. I wonder how I can do this. Also, is there a better way to render this other than with lines? I'm comfortable with having 1-pixel points or anything else, as long as the vertices are shown all the time. I'm basically just plotting the points. Thanks.
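    A sketch of one workaround, assuming the ~1 million figure is XNA's per-draw-call primitive limit (1,048,575 under the HiDef profile): keep the data in a vertex buffer and issue several DrawPrimitives calls, each under the cap. PointList was removed in XNA 4.0, so each particle here is a short two-vertex line; for 134 million particles the data would also have to be split across multiple vertex buffers, which this sketch glosses over.

        const int MaxPrimitivesPerCall = 1048575; // HiDef per-call limit (assumption)

        void DrawParticles(GraphicsDevice device, VertexBuffer vb, int particleCount)
        {
            device.SetVertexBuffer(vb);
            int primitivesLeft = particleCount;   // one LineList primitive (2 verts) per particle
            int startVertex = 0;
            while (primitivesLeft > 0)
            {
                int batch = Math.Min(primitivesLeft, MaxPrimitivesPerCall);
                device.DrawPrimitives(PrimitiveType.LineList, startVertex, batch);
                startVertex += batch * 2;         // 2 vertices per line primitive
                primitivesLeft -= batch;
            }
        }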

    Read the article

  • How can I fix problems with interlaced video jerking/flickering when played back on DVD players? (Mixin

    - by Simon P Stevens
    I'm trying to make a DVD, and the final DVD jerks when played on standalone DVD players. It seems to play fine on PCs. I think the problem may be to do with interlacing settings when rendering the final output, but I'll outline the whole editing process I have followed in case I've made a mistake somewhere else.

    Most of the footage comes from a Sony handycam (one of those mini-DVD ones), so it isn't great quality. It was set to "high quality" (haha) and 16:9 aspect ratio when it was recorded. I copy the files directly from the mini DVDs onto the hard drive and import them into Cinelerra. In Cinelerra I set the format to 25fps, 720x576, RGBA-8bit, 16:9, interlaced, bottom fields first. When I've finished the editing, I add a "Fields to frames" effect (set to bottom first) to each video track. I render the audio and video separately:

        Audio: AC3, 128kbps
        Video: YUV4MPEG stream, video pipe settings:
            ffmpeg -f yuv4mpegpipe -i - -y -target dvd -flags +ilme+ildct mpeg2video %

    Cinelerra often crashes during the rendering, so I set it to generate a new video file at each label and combine them using cat once I've got a successful render of each one. Once I've combined them, I use mencoder to re-index them:

        mencoder -forceidx -oac copy -ovc copy merged.m2v -o mergedReIndexed.m2v

    I combine the audio and video files using ffmpeg:

        ffmpeg -i AudioFile.ac3 -i VideoFile.m2v -target dvd -flags +ilme+ildct FinalMovie.mpg

    Then I build the menus with spumux, create the DVD file system with dvdauthor, and finally write it to a DVD-R like this:

        nice -n -20 growisofs -dvd-compat -speed=2 -Z /dev/dvd -dvd-video -V VIDEO ./ && eject /dev/dvd

    Originally the DVD flickered badly, so, as suggested in a guide, I added the fields to frames effect in Cinelerra. Now it doesn't "flicker", but it has become "jerky" when there is a lot of motion, particularly when the camera is moving, so the whole background moves. This is what I've tried so far:

        - Removed "mpeg2video" from the Cinelerra video render pipe.
        - Removed +ilme from the render pipe.
        - Removed +ildct from the render pipe.
        - Removed +ilme from the render audio/video rejoin command.
        - Removed +ildct from the render audio/video rejoin command.
        - Added -alt to the render pipe.
        - Added -alt to the render audio/video rejoin command.
        - Tried with and without the frames to fields effect in Cinelerra.
        - Various combinations of the above.

    I've also tried this: change the Cinelerra fps to 50, use fields to frames (instead of frames to fields), render to an intermediate QTforlinux JPEG video stream, re-import that back into Cinelerra, add a frames to fields effect, and then render that output as normal (@25fps). I still have the same problem.

    Has anyone experienced this "jerking" playback before? Can anyone give any suggestions on how to fix it? (Like I say, it plays back fine on a PC, but not on any of the standalone players I've tried.)

    Read the article

  • How to render remote assistance to a person using Live Messenger?

    - by Cheeso
    There is a feature within Windows Live Messenger v9 that allows a person to ask for remote assistance. But as I understand it, this works only if the router is UPnP-enabled on both ends. Today I tried this with a friend during an active chat session, and nothing happened. I suspect a router problem. As I am remote, I cannot configure the router for them. What's a good way to render remote assistance? Here's the scenario: it will be based on invitation only (it's not a remote desktop or "logmein" situation). It's a younger person, a computer novice, on the other end of the wire. I'll be assisting with their use of applications on the PC. I'd like to be able to SEE the screen, and also use the mouse and keyboard. I have used UltraVNC on the target machine and vncviewer on my machine, on a LAN. It works well. But I don't think I can use that, because it's my kids' computer in my ex-wife's place, and I don't want her to accuse me of spying on her computer. That's why I need it to be invitation only. Advice please. Is there an easy way for me to set up Remote Assistance? Is there some other tool I can use?

    Read the article

  • 256 Worker Role 3D Rendering Demo is now a Lab on my Azure Course

    - by Alan Smith
    Ever since I came up with the crazy idea of creating an Azure application that would spin up 256 worker roles (please vote if you like it) to render a 3D animation created using the Kinect depth camera, I have been trying to think of something useful to do with it. I have also been busy developing training materials for a Windows Azure course that I will be delivering through a training partner in Stockholm, and for customers wanting to learn Windows Azure. I hit on the idea of combining the render demo and a course lab, creating a lab where the students would create and deploy their own mini render farms, which would participate in a single render job consisting of 2,000 frames. The architecture of the solution is shown below.

    As students would be creating and deploying their own applications, I thought it would be fun to introduce some competitiveness into the lab. In the 256 worker role demo I capture the rendering statistics for each role, so it was fairly simple to include the student's name in these statistics. This allowed the process monitor application to capture the number of frames each student had rendered and display a high-score table. When I demoed the application I deployed one instance that started rendering a frame every few minutes, and the challenge for the students was to deploy and scale their applications and overtake my single role instance by the end of the lab time. I had the process monitor running on the projector during the lab so the class could see the progress of their deployments, and how they were performing against my implementation and their classmates'.

    When I tested the lab for the first time in Oslo last week it was a great success; the students were keen to be the first to build and deploy their solution and then watch the frames appear. As the students mostly had MSDN subscriptions, they were able to scale to the full 20 worker role instances, and before long we had over 100 worker roles working on the animation. There were, however, a couple of issues caused by the competitive nature of the lab. The first student to scale the application to 20 instances would render the most frames and win; there was no way for others to catch up. Also, as they were competing against each other, there was no incentive to help others on the course get their application up and running.

    I have now re-written the lab to divide the students into teams that will compete to render the most frames. This means that if one developer on a team can deploy and scale quickly, the other teams still have a chance to catch up. It also means that if a student finishes quickly and puts their team in the lead, they will have an incentive to help the other developers on their team get up and running.

    As I was using "Sharks with Lasers" for a lot of my demos, and reserved the sharkswithfreakinlasers namespaces for some of the Azure services (well, somebody had to do it), the students came up with some creative alternatives, like "Camels with Cannons" and "Honey Badgers with Homing Missiles". That gave me the idea of the teams having to choose a creative name involving animals and weapons. The team rendering architecture diagram is shown below.

    Render Challenge Rules

    In order to ensure fair play, a number of rules are imposed on the lab:

        - The class will be divided into teams; each team chooses a name.
        - The team name must consist of a ferocious animal combined with a hazardous weapon.
        - Teams can allocate as many worker roles as they can muster to the render job.
        - Frame processing statistics and rendered frames will be vigilantly monitored; any cheating, tampering, or other foul play will result in penalties.

    The screenshot below shows an example of the team render farm in action: Badgers with Bombs have taken a lead over Camels with Cannons, and both are leaving the Sharks with Lasers standing. If you are interested in attending a scheduled delivery of my Windows Azure or Windows Azure Service Bus courses, or would like on-site training, more details are here.

    Read the article

  • Color Picking Troubles - LWJGL/OpenGL

    - by Tom Johnson
    I'm attempting to check which object the user is hovering over. While everything seems to be just how I'd think it should be, I'm not able to get the correct color, apparently because of the second draw pass (the one without picking colors). Here is my rendering code:

        public void render() {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity();
            camera.applyTranslations();
            scene.pick();

            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity();
            camera.applyTranslations();
            scene.render();
        }

    And here is what gets called on each block/tile by scene.pick():

        public void pick() {
            glColor3ub((byte) pickingColor.x, (byte) pickingColor.y, (byte) pickingColor.z);
            draw();

            glReadBuffer(GL_FRONT);
            ByteBuffer buffer = BufferUtils.createByteBuffer(4);
            glReadPixels(Mouse.getX(), Mouse.getY(), 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
            int r = buffer.get(0) & 0xFF;
            int g = buffer.get(1) & 0xFF;
            int b = buffer.get(2) & 0xFF;

            if(r == pickingColor.x && g == pickingColor.y && b == pickingColor.z) {
                hovered = true;
            } else {
                hovered = false;
            }
        }

    I believe the problem is that the method called on each tile/block by scene.pick() somehow ends up reading the color from the regular drawing state. I believe this because when I remove the glReadBuffer(GL_FRONT) line from the pick method, it seems to almost fix it, but then it also selects blocks behind the one you are hovering over, as it is not only looking at the front. If you have any ideas of what to do, please be sure to reply!

    EDIT: Adding scene.render(), tile.render(), and tile.draw().

    scene.render:

        public void render() {
            for(int x = 0; x < tiles.length; x++) {
                for(int z = 0; z < tiles.length; z++) {
                    tiles[x][z].render();
                }
            }
        }

    tile.render:

        public void render() {
            glColor3f(color.x, color.y, color.z);
            draw();

            if(hovered) {
                glColor3f(1, 1, 1);
                glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
                draw();
                glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
            }
        }

    tile.draw:

        public void draw() {
            float x = position.x, y = position.y, z = position.z;

            // Top
            glBegin(GL_QUADS);
            glVertex3f(x, y + size, z);
            glVertex3f(x + size, y + size, z);
            glVertex3f(x + size, y + size, z + size);
            glVertex3f(x, y + size, z + size);
            glEnd();

            // Left
            glBegin(GL_QUADS);
            glVertex3f(x, y, z);
            glVertex3f(x + size, y, z);
            glVertex3f(x + size, y + size, z);
            glVertex3f(x, y + size, z);
            glEnd();

            // Right
            glBegin(GL_QUADS);
            glVertex3f(x + size, y, z);
            glVertex3f(x + size, y + size, z);
            glVertex3f(x + size, y + size, z + size);
            glVertex3f(x + size, y, z + size);
            glEnd();
        }

    (The game is like an isometric game. That's why I only draw 3 faces.)
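    For context, a sketch of the usual color-picking flow, offered as an assumption rather than a verified fix: with double buffering, glReadBuffer(GL_FRONT) reads the previous frame's visual render, so the pick pass typically draws every object in its ID color into the back buffer first, reads the cursor pixel once from GL_BACK, and only then clears and draws the visual pass. Depth testing during the pick pass makes the front-most tile win at each pixel. drawPickingColors and updateHovered are hypothetical helpers, not from the question.

        public void render() {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity();
            camera.applyTranslations();
            scene.drawPickingColors();        // hypothetical: draw() every tile in its ID color, depth test on

            glReadBuffer(GL_BACK);            // read the buffer we just rendered into
            ByteBuffer buffer = BufferUtils.createByteBuffer(4);
            glReadPixels(Mouse.getX(), Mouse.getY(), 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
            scene.updateHovered(buffer.get(0) & 0xFF,   // hypothetical: compare against each
                                buffer.get(1) & 0xFF,   // tile's picking color and set hovered
                                buffer.get(2) & 0xFF);

            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity();
            camera.applyTranslations();
            scene.render();                   // normal visual pass
        }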

    Read the article

  • OpenGL FrameBuffer Objects weird behavior

    - by Ben Jones
    My algorithm is this:

        1. Render the scene to a FBO with shadow mapping from multiple locations
        2. Render the scene to the screen with shadow mapping
        3. ...black magic that I still have to implement...
        4. Combine the samples from step 1 with the image from step 2

    I'm trying to debug steps 1 and 2 and am coming across STRANGE behavior. My algorithm for each shadow mapped pass is:

        - render the scene to a FBO connected to a depth array texture, from the POV of each light
        - render the scene from the viewpoint and use vertex/frag shaders to compare the depths

    When I run my algorithm this way:

        render from point to FBO
        render from point to screen
        glutSwapBuffers()

    the normal vectors in the screen pass appear to be incorrect (inverted, possibly). I'm pretty sure that's the issue because my diffuse lighting calculation is incorrect, but the material colors are correct, and the shadows appear in the correct places. So it seems like the only thing that could be the culprit is the normals. However, if I do

        render from point to FBO
        render from point to screen
        glutSwapBuffers() //wrong here
        render from point to screen
        glutSwapBuffers()

    the second pass is correct. I assume there's a problem with my framebuffer calls. Can anyone see what the problem is from the log below? It's from a bugle trace grepped for 'buffer', with a few edits to make it a little more clear. Thanks!

        [INFO] trace.call: glGenFramebuffersEXT(1, 0xdfeb90 - { 1 })
        [INFO] trace.call: glGenFramebuffersEXT(1, 0xdfebac - { 2 })
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glDrawBuffer(GL_NONE)
        [INFO] trace.call: glReadBuffer(GL_NONE)
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0)

        //start render to FBO
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 2)
        [INFO] trace.call: glReadBuffer(GL_NONE)
        [INFO] trace.call: glFramebufferTexture2DEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 2, 0)
        [INFO] trace.call: glFramebufferTexture2DEXT(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, 3, 0)
        [INFO] trace.call: glDrawBuffer(GL_COLOR_ATTACHMENT0)

        //bind to the FBO attached to a depth tex array for shadows
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0)
        [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT)
        //draw geometry

        //bind to the FBO I want the shadow mapped image rendered to
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 2)
        [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        //draw geometry

        //draw to screen pass
        //again shadow mapping FBO
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0)
        [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT)
        //draw geometry

        //bind to the screen
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0)
        [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

        //finished, swap buffers
        [INFO] trace.call: glXSwapBuffers(0xd5fc10, 0x05800002)
        //INCORRECT OUTPUT

        //second try at render to screen:
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0)
        [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT)
        //draw geometry

        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0)
        [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        //draw geometry
        [INFO] trace.call: glXSwapBuffers(0xd5fc10, 0x05800002)
        //correct output

    Read the article

  • Why does my adorner not re-render when the element it's applied to changes?

    - by Robert Rossney
    In a UI I'm building, I want to adorn a panel whenever one of the controls in the panel has the focus. So I handle the IsKeyboardFocusWithinChanged event, adding an adorner to the element when it gains the focus and removing the adorner when it loses focus. This seems to work OK. The problem I'm having is that the adorner isn't getting re-rendered if the bounds of the adorned element change. For instance, in this simple case:

        <WrapPanel Orientation="Horizontal" IsKeyboardFocusChanged="Panel_IsKeyboardFocusChanged">
            <Label>Caption</Label>
            <TextBox>Data</TextBox>
        </WrapPanel>

    the adorner correctly decorates the bounds of the WrapPanel when the TextBox receives the focus, but as I type in text, the TextBox expands underneath the edge of the adorner. Of course, as soon as I do anything that forces the adorner to render, like ALT-TABbing out of the application or giving another panel the focus, it corrects itself. But how can I get it to re-render when the bounds of the adorned element change?
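    A sketch of one approach worth trying (adornedPanel is a hypothetical reference to the adorned element, not from the question): when the element's size changes, ask its adorner layer to re-measure and re-render its adorners via AdornerLayer.Update.

        // Requires System.Windows.Documents for AdornerLayer.
        // Hook this up when the adorner is added, e.g. in Panel_IsKeyboardFocusChanged.
        adornedPanel.SizeChanged += (sender, e) =>
        {
            AdornerLayer layer = AdornerLayer.GetAdornerLayer(adornedPanel);
            if (layer != null)
            {
                layer.Update(adornedPanel); // re-measures and re-renders adorners on this element
            }
        };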

    Read the article

  • How can I render an in-memory UIViewController's view Landscape?

    - by Aaron
    I'm trying to render an in-memory (but not in the hierarchy, yet) UIViewController's view into an in-memory image buffer so I can do some interesting transition animations. However, when I render the UIViewController's view into that buffer, it always renders as though the controller is in portrait orientation, no matter the orientation of the rest of the app. How do I clue this controller in? My code in RootViewController looks like this:

        MyUIViewController* controller = [[MyUIViewController alloc] init];

        int width = self.view.frame.size.width;
        int height = self.view.frame.size.height;
        int bitmapBytesPerRow = width * 4;
        unsigned char *offscreenData = calloc(bitmapBytesPerRow * height, sizeof(unsigned char));

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef offscreenContext = CGBitmapContextCreate(offscreenData, width, height, 8,
            bitmapBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
        CGContextTranslateCTM(offscreenContext, 0.0f, height);
        CGContextScaleCTM(offscreenContext, 1.0f, -1.0f);

        [(CALayer*)[controller.view layer] renderInContext:offscreenContext];

    At that point, the offscreen memory buffer's contents are portrait-oriented, even when the window is in landscape orientation. Ideas?

    Read the article

  • JSF2 - Why does render response not rerender component setting?

    - by fekete-kamosh
    From the tutorial: "If the request is a postback and errors were encountered during the apply request values phase, process validations phase, or update model values phase, the original page is rendered during Render response phase" (http://java.sun.com/javaee/5/docs/tutorial/doc/bnaqq.html). Does this mean that if the view is restored in the "Restore View" phase, and then any apply request values/validation/update model phase fails and skips to "Render Response", the Render Response phase only sends the restored view, without any changes, to the client?

    Managed bean:

        package cz.test;

        import javax.faces.bean.ManagedBean;
        import javax.faces.bean.RequestScoped;

        @ManagedBean
        @RequestScoped
        public class TesterBean {

            // Simple DataStore (in real world EJB)
            private static String storedSomeValue = null;
            private String someValue;

            public TesterBean() {
            }

            public String storeValue() {
                storedSomeValue = someValue;
                return "index";
            }

            public String eraseValue() {
                storedSomeValue = null;
                return "index";
            }

            public String getSomeValue() {
                someValue = storedSomeValue;
                return someValue;
            }

            public void setSomeValue(String someValue) {
                this.someValue = someValue;
            }
        }

    Composite component:

        <?xml version='1.0' encoding='ISO-8859-1' ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:h="http://java.sun.com/jsf/html"
              xmlns:f="http://java.sun.com/jsf/core"
              xmlns:composite="http://java.sun.com/jsf/composite"
              xmlns:c="http://java.sun.com/jsp/jstl/core">

        <!-- INTERFACE -->
        <composite:interface>
            <composite:attribute name="currentBehaviour" type="java.lang.String" required="true"/>
            <composite:attribute name="fieldValue" required="true"/>
        </composite:interface>

        <!-- IMPLEMENTATION -->
        <composite:implementation>
            <h:panelGrid columns="3">
                <c:choose>
                    <c:when test="#{cc.attrs.currentBehaviour == 'READONLY'}">
                        <h:outputText id="fieldValue" value="#{cc.attrs.fieldValue}">
                        </h:outputText>
                    </c:when>
                    <c:when test="#{cc.attrs.currentBehaviour == 'MANDATORY'}">
                        <h:inputText id="fieldValue" value="#{cc.attrs.fieldValue}" required="true">
                            <f:attribute name="requiredMessage" value="Field is mandatory"/>
                            <c:if test="#{empty cc.attrs.fieldValue}">
                                <f:attribute name="style" value="background-color: yellow;"/>
                            </c:if>
                        </h:inputText>&nbsp;*
                    </c:when>
                    <c:when test="#{cc.attrs.currentBehaviour == 'OPTIONAL'}">
                        <h:inputText id="fieldValue" value="#{cc.attrs.fieldValue}">
                        </h:inputText>
                    </c:when>
                </c:choose>
                <h:message for="fieldValue" style="color:red;" />
            </h:panelGrid>
        </composite:implementation>

    Page:

        <?xml version='1.0' encoding='UTF-8' ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:h="http://java.sun.com/jsf/html"
              xmlns:ez="http://java.sun.com/jsf/composite/components">
        <h:head>
            <title>Testing page</title>
        </h:head>
        <h:body>
            <h:form>
                <h:outputText value="Some value:"/>
                <ez:field-component currentBehaviour="MANDATORY" fieldValue="#{testerBean.someValue}"/>
                <h:commandButton value="Store" action="#{testerBean.storeValue}"/>
                <h:commandButton value="Erase" action="#{testerBean.eraseValue}" immediate="true"/>
            </h:form>
            <br/><br/>
            <b>Why is field's background color not set to yellow?</b>
            <ol>
                <li>NOTICE: Field has yellow background color (mandatory field with no value)</li>
                <li>Fill in any value (eg. "Hello") and press Store</li>
                <li>NOTICE: Yellow background disappeared (as mandatory field has value)</li>
                <li>Clear text in the field and press Store</li>
                <li><b>QUESTION: Why is field's background color not set to yellow?</b></li>
                <li>Press Erase</li>
                <li>NOTICE: Field has yellow background color (mandatory field with no value)</li>
            </ol>
        </h:body>

    Read the article

  • multipass shadow mapping renderer in XNA

    - by Nick
    I want to implement a multipass renderer in XNA (additive blending combines the contributions from each light). I have the renderer working without any shadows, but when I try to add shadow mapping support I run into an issue with switching render targets to draw the shadow maps. When I switch render targets, I lose the contents of the backbuffer, which ruins the whole additive blending idea. For example:

        Draw()
        {
            DrawAmbientLighting()

            foreach (DirectionalLight)
            {
                DrawDirectionalShadowMap() // <-- I lose all previous lighting contributions when I switch to the shadow map render target here
                DrawDirectionalLighting()
            }
        }

    Is there any way around my issue? (I could render all the shadow maps first, but then I have to make and hold onto a render target for each light that casts a shadow -- is this the only way?)
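    A sketch of one workaround, offered as an assumption rather than the only answer: accumulate the lighting into an off-screen render target created with RenderTargetUsage.PreserveContents, so its contents survive being unbound while each shadow map is drawn; a single shadow map target can then be reused per light. The names below (lightAccum, shadowMap, the Draw* helpers) are illustrative. PreserveContents trades some performance for this behavior, particularly on Xbox.

        // Lighting accumulation target that keeps its contents across target switches.
        RenderTarget2D lightAccum = new RenderTarget2D(
            GraphicsDevice, width, height, false,
            SurfaceFormat.Color, DepthFormat.Depth24, 0,
            RenderTargetUsage.PreserveContents);

        void Draw()
        {
            GraphicsDevice.SetRenderTarget(lightAccum);
            DrawAmbientLighting();

            foreach (var light in directionalLights)
            {
                GraphicsDevice.SetRenderTarget(shadowMap);   // one reusable shadow map target
                DrawDirectionalShadowMap(light);

                GraphicsDevice.SetRenderTarget(lightAccum);  // contents preserved; keep adding light
                DrawDirectionalLighting(light, shadowMap);
            }

            GraphicsDevice.SetRenderTarget(null);
            // finally blit lightAccum to the backbuffer (e.g. with SpriteBatch)
        }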

    Read the article

  • How do I correctly re-render a Recaptcha in ASP.NET MVC 2 after an AJAX POST

    - by Eoin Campbell
    OK... I've downloaded and implemented this Recaptcha implementation for MVC, which uses the ModelState to confirm the validity of the captcha control. It works brilliantly... except when I start to use it in an AJAX form. In a nutshell, when a div is re-rendered with AJAX, the ReCaptcha that it should contain does not appear, even though the relevant <scripts> are in the source after the partial render. Code below.

        using (Ajax.BeginForm("CreateComment", "Blog", new AjaxOptions()
        {
            HttpMethod = "POST",
            UpdateTargetId = "CommentAdd",
            OnComplete = "ReloadRecaptcha",
            OnSuccess = "ShowComment",
            OnFailure = "ShowComment",
            OnBegin = "HideComment"
        })) {%>
        <fieldset class="comment">
            <legend>Add New Comment</legend>
            <%= Html.ValidationSummary()%>
            <table class="postdetails">
                <tbody>
                    <tr>
                        <td rowspan="3" id="blahCaptcha">
                            <%= Html.CreateRecaptcha("recaptcha", "blackglass") %>
                        </td>
        .... Remainder of Form Omitted for Brevity

    I've confirmed the form is perfectly functional when the Recaptcha control is not present, and the JavaScript calls in the AjaxOptions are all working fine. The problem is that if the ModelState is invalid, as a result of the Recaptcha or some other validation, then the ActionResult returns the view to re-show the form.

        [RecaptchaFilter(IgnoreOnMobile = true)]
        [HttpPost]
        public ActionResult CreateComment(Comment c)
        {
            if (ModelState.IsValid)
            {
                try
                {
                    //Code to insert Comment To DB
                    return Content("Thank You");
                }
                catch
                {
                    ModelState.AddRuleViolations(c.GetRuleViolations());
                }
            }
            else
            {
                ModelState.AddRuleViolations(c.GetRuleViolations());
            }

            return View("CreateComment", c);
        }

    When it's invalid and the form posts back, for some reason the ReCaptcha control does not re-render. I've checked the source, and the <script> & <noscript> blocks are present in the HTML, so the HTML helper function below is obviously working:

        <%= Html.CreateRecaptcha("recaptcha", "blackglass") %>

    I assume this has something to do with scripts injected into the DOM by AJAX not being re-executed. As you can see from the HTML snippet above, I've tried to add an OnComplete= javascript function to re-create the captcha on the client side, but although the script executes with no errors, it doesn't work. The OnComplete function is:

        function ReloadRecaptcha() {
            Recaptcha.create("my-pub-key", 'blahCaptcha', { //blahCaptcha is the ID of the <td> where the ReCaptcha should go.
                theme: 'blackglass'
            });
        }

    Can anyone shed any light on this?

    Thanks, Eoin C

    Read the article

  • Why can't I render objects to texture properly using FBO?

    - by Brett
    Hello, I'm trying to implement a simple program using 3 FBOs to render a scene to a texture and display a textured quad. I've successfully done this previously using fragment shaders before projecting to the textured quad, but I can't get it to work without a shader. What could be wrong?

    First I set up my textures:

        glGenTextures(3, renderTextureID);
        for (GLint i = 0; i < 3; i++)
        {
            glBindTexture(GL_TEXTURE_2D, renderTextureID[i]);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

            // this may change with window size changes
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, fboWidth, fboHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
        }

    Next some framebuffer state:

        glGenFramebuffersEXT(3, framebufferID);

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebufferID[0]);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, renderTextureID[0], 0);
        GLenum fboStatus = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
        if(fboStatus != GL_FRAMEBUFFER_COMPLETE_EXT)
        {
            fprintf(stderr, "FBO Error!");
        }

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebufferID[1]);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, renderTextureID[1], 0);

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebufferID[2]);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, renderTextureID[2], 0);

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

    Then I render the scene:

        glViewport(0, 0, fboWidth, fboHeight);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebufferID[0]);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawModels();
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

        glBindTexture(GL_TEXTURE_2D, renderTextureID[0]);

        glEnable(GL_SCISSOR_TEST);
        glViewport(windowWidth/2, 0, fboWidth, fboHeight);
        glScissor(windowWidth/2, 0, fboWidth, fboHeight);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glBegin(GL_QUADS);
        glTexCoord2i(0, 0); glVertex2f(-1.0f, -1.0f);
        glTexCoord2i(1, 0); glVertex2f(1.0f, -1.0f);
        glTexCoord2i(1, 1); glVertex2f(1.0f, 1.0f);
        glTexCoord2i(0, 1); glVertex2f(-1.0f, 1.0f);
        glEnd();

        glDisable(GL_SCISSOR_TEST);
        glutSwapBuffers();

    The result is a yellow square. drawModels() draws yellow wireframe cubes if not using the FBO, so the problem is not that. I've read a previous similar post that talks about using glTexParameterf after generating and binding textures, but I've done this. Thanks...
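    One observation, sketched below as an assumption about the cause: nothing in the listing enables fixed-function texturing. With fragment shaders the bound texture is sampled regardless, but without them GL_TEXTURE_2D must be enabled, or the quad is simply filled with the current color -- which here would be the yellow left over from drawing the wireframe cubes.

        /* Assumption: fixed-function texturing is off by default; without this,
           glBindTexture has no visible effect and the quad takes the current color. */
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, renderTextureID[0]);
        glColor3f(1.0f, 1.0f, 1.0f);   /* GL_MODULATE tints the texture by the current color */

        /* ... draw the textured quad as above ... */

        glDisable(GL_TEXTURE_2D);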

    Read the article
