Search Results

Search found 5155 results on 207 pages for 'render to texture'.

Page 4/207 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Issue with transparent texture on 3D primitive, XNA 4.0

    - by Bevin
    I need to draw a large set of cubes, all with (possibly) unique textures on each side. Some of the textures also contain transparent areas. Cubes behind ones with transparent textures should show through the transparent texture. However, it seems that the order in which I draw the cubes determines whether the transparency works or not, which is something I want to avoid. Look here:

        cubeEffect.CurrentTechnique = cubeEffect.Techniques["Textured"];
        Block[] cubes = new Block[4];
        cubes[0] = new Block(BlockType.leaves, new Vector3(0, 0, 3));
        cubes[1] = new Block(BlockType.dirt, new Vector3(0, 1, 3));
        cubes[2] = new Block(BlockType.log, new Vector3(0, 0, 4));
        cubes[3] = new Block(BlockType.gold, new Vector3(0, 1, 4));
        foreach (Block b in cubes)
        {
            b.shape.RenderShape(GraphicsDevice, cubeEffect);
        }

    This is the code in the Draw method. It produces this result: as you can see, the textures behind the leaf cube are not visible on the other side. When I reverse indices 3 and 0 in the array, I get this: it is clear that the order of drawing is affecting the cubes. I suspect it may have to do with the blend mode, but I have no idea where to start with that.


  • OpenGL 2D Texture Mapping problem.

    - by gutsblow
    Hi there, I am relatively new to OpenGL and I am having some issues when rendering an image as a texture for a QUAD which is the same size as the image. Here is my code; I would be very grateful if someone could help me solve this problem. The image appears way smaller than it should and is squished. (BTW, the image dimensions are 500x375.)

        glGenTextures( 1, &S_GLator_InputFrameTextureIDSu );
        glBindTexture(GL_TEXTURE_2D, S_GLator_InputFrameTextureIDSu);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D( GL_TEXTURE_2D, 0, 4,
                      S_GLator_EffectCommonData.mRenderBufferWidthSu,
                      S_GLator_EffectCommonData.mRenderBufferHeightSu,
                      0, GL_RGBA, GL_UNSIGNED_BYTE, dataP);
        glBindTexture(GL_TEXTURE_2D, S_GLator_InputFrameTextureIDSu);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                        S_GLator_EffectCommonData.mRenderBufferWidthSu,
                        S_GLator_EffectCommonData.mRenderBufferHeightSu,
                        GL_RGBA, GL_UNSIGNED_BYTE, bufferP);

        //set the matrix modes
        glMatrixMode( GL_PROJECTION );
        glLoadIdentity();
        //gluPerspective( 45.0, (GLdouble)widthL / heightL, 0.1, 100.0 );
        glOrtho (0, 1, 0, 1, -1, 1);

        // Set up the frame-buffer object just like a window.
        glViewport( 0, 0, widthL, heightL );
        glDisable(GL_DEPTH_TEST);
        glClearColor( 0.0f, 0.0f, 0.0f, 0.0f );
        glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
        glMatrixMode( GL_MODELVIEW );
        glLoadIdentity();
        glBindTexture( GL_TEXTURE_2D, S_GLator_InputFrameTextureIDSu );

        //Render the geometry to the frame-buffer object
        glBegin(GL_QUADS);
        //input frame
        glColor4f(1.f, 1.f, 1.f, 1.f);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(0.f, 0.f, 0.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex3f(1.f, 0.f, 0.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex3f(1.f, 1.f, 0.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(0.f, 1.f, 0.0f);
        glEnd();
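
    A minimal sketch of one possible fix (imgW, imgH, bufferW, bufferH, textureId and imagePixels are illustrative names, not from the code above; assumes a current GL context): the texture above is allocated at the render buffer's dimensions rather than the image's 500x375, so sampling it with 0..1 texture coordinates stretches and shrinks the picture. Either size the texture to the image itself, or shrink the texture coordinates to the sub-rectangle the image actually occupies.

        // Option 1: allocate the texture at the image's own size and keep 0..1 texcoords.
        const int imgW = 500, imgH = 375;
        glBindTexture(GL_TEXTURE_2D, textureId);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imgW, imgH, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, imagePixels);

        // Option 2: keep the buffer-sized texture but sample only the image's region.
        float sMax = (float)imgW / (float)bufferW;
        float tMax = (float)imgH / (float)bufferH;
        // ...then use (0,0)-(sMax,tMax) instead of (0,0)-(1,1) in the glTexCoord2f calls.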


  • OpenGL texture on sphere

    - by Cilenco
    I want to create a rolling, textured ball in OpenGL ES 1.0 for Android. With this function I can create a sphere:

        public Ball(GL10 gl, float radius) {
            ByteBuffer bb = ByteBuffer.allocateDirect(40000);
            bb.order(ByteOrder.nativeOrder());
            sphereVertex = bb.asFloatBuffer();
            points = build();
        }

        private int build() {
            double dTheta = STEP * Math.PI / 180;
            double dPhi = dTheta;
            int points = 0;
            for (double phi = -(Math.PI/2); phi <= Math.PI/2; phi += dPhi) {
                for (double theta = 0.0; theta <= (Math.PI * 2); theta += dTheta) {
                    sphereVertex.put((float) (raduis * Math.sin(phi) * Math.cos(theta)));
                    sphereVertex.put((float) (raduis * Math.sin(phi) * Math.sin(theta)));
                    sphereVertex.put((float) (raduis * Math.cos(phi)));
                    points++;
                }
            }
            sphereVertex.position(0);
            return points;
        }

        public void draw() {
            texture.bind();
            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glVertexPointer(3, GL10.GL_FLOAT, 0, sphereVertex);
            gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, points);
            gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        }

    My problem now is that when I use this texture for the sphere, only a black ball is created (of course, because the top right corner is black). I use these texture coordinates because I want to use the whole texture: 0|0 0|1 1|1 1|0. That's what I learned from texturing a triangle. Is that incorrect if I want to use it with a sphere? What do I have to do to use the texture correctly?
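
    The usual approach, sketched below in C++ since the mapping itself is language independent (sphereUV is a made-up helper name): give every generated vertex its own texture coordinate derived from its spherical angles, instead of reusing four corner coordinates for the whole sphere. u follows theta around the equator and v follows phi from pole to pole, so the whole image wraps the ball exactly once.

        #include <cmath>

        // Map spherical angles to texture coordinates in [0, 1].
        // theta: 0 .. 2*pi around the sphere, phi: -pi/2 .. +pi/2 from pole to pole.
        void sphereUV(double theta, double phi, float &u, float &v) {
            u = (float)(theta / (2.0 * M_PI));   // longitude -> horizontal texture axis
            v = (float)(phi / M_PI + 0.5);       // latitude  -> vertical texture axis
        }

    Each (u, v) pair goes into a texture-coordinate buffer alongside the position buffer and is passed with glTexCoordPointer, mirroring what glVertexPointer does for positions.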


  • Mapping a 3D texture to a standard hollow-hull 3D model

    - by John
    I have 3D models which are typical hollow hulls. If such a model also had a 3D volumetric/voxel texture map then given a point P inside such a model, I'd like to be able to find its uvw coordinates within the 3D texture. Is this possible by simply setting 3D texcoords on my existing mesh or does it have to be broken up into polyhedra? Is there a way to map a 3D texture onto a mesh without doing this?
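
    One way to do it without splitting the mesh (a sketch under the assumption that the 3D texture should simply fill the model's axis-aligned bounding box; Vec3, bboxMin and bboxSize are illustrative):

        struct Vec3 { float x, y, z; };

        // uvw in [0,1]^3 for any point p inside the model's bounding box.
        Vec3 volumeUVW(const Vec3 &p, const Vec3 &bboxMin, const Vec3 &bboxSize) {
            return { (p.x - bboxMin.x) / bboxSize.x,
                     (p.y - bboxMin.y) / bboxSize.y,
                     (p.z - bboxMin.z) / bboxSize.z };
        }

    The same mapping can be baked into per-vertex 3D texcoords on the existing hull, or computed in a shader from the object-space position, so interior points and surface points alike land at consistent uvw coordinates.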


  • LWJGL - OpenGL - Texture shading

    - by Trixmix
    I want to use LWJGL to create a shader that does nothing but change the color of a given texture. For example, I tell it to draw the letter A using a sprite sheet, and then I can tell the shader to draw the letter in a certain color. How would you do something like this without needing to create differently colored letter sprite sheets? Task for the shader: simply change all pixels in the texture to a certain color. Input: color, texture. Output: it draws the newly colored texture onto the screen. How do I accomplish such a thing?


  • XNA texture stretching at extreme coordinates

    - by Shaun Hamman
    I was toying around with infinitely scrolling 2D textures using the XNA framework and came across a rather strange observation. Using the basic draw code: spriteBatch.Begin(SpriteSortMode.Deferred, null, SamplerState.PointWrap, null, null); spriteBatch.Draw(texture, Vector2.Zero, sourceRect, Color.White, 0.0f, Vector2.Zero, 2.0f, SpriteEffects.None, 1.0f); spriteBatch.End(); with a small 32x32 texture and a sourceRect defined as: sourceRect = new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height); I was able to scroll the texture across the window infinitely by changing the X and Y coordinates of the sourceRect. Playing with different coordinate locations, I noticed that if I made either of the coordinates too large, the texture no longer drew and was instead replaced by either a flat color or alternating bands of color. Tracing the coordinates back down, I found the following at around (0, -16,777,000): As you can see, the texture in the top half of the image is stretched vertically. My question is why is this occurring? Certainly I can do things like bind the x/y position to some low multiple of 32 to give the same effect without this occurring, so fixing it isn't an issue, but I'm curious about why this happens. My initial thought was perhaps it was overflowing the coordinate value or some such thing, but looking at a data type size chart, the next closest below is an unsigned short with a range of about 32,000, and above is an unsigned int with a range of around 2,000,000,000 so that isn't likely the cause.
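
    A plausible explanation, offered as an inference rather than something stated in the post: the sprite and source-rectangle coordinates end up as single-precision floats on the GPU, and floats represent every integer exactly only up to 2^24 = 16,777,216. Past that, neighboring coordinates collapse onto the same representable value, which looks like stretching, and the threshold matches the roughly -16,777,000 observed above far better than the integer-type ranges considered in the question. A tiny demonstration:

        #include <cstdio>

        int main() {
            float a = 16777216.0f;   // 2^24: the last point where every integer is exact
            float b = a + 1.0f;      // 16777217 is not representable; it rounds back down
            std::printf("%.1f %.1f\n", a, b);   // prints: 16777216.0 16777216.0
            return 0;
        }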


  • SRV from UAV on the same texture in DirectX

    - by notabene
    I'm programming GPGPU raymarching (volumetric raytracing) in DirectX 11. I successfully run a compute shader and save the raymarched volume data to a texture. Then I want to use the same texture as an SRV in the normal graphics pipeline. But it doesn't work: the texture is not visible. The texture itself is OK; when I save it to a file it is what I expect. Texture rendering is OK too; when I render another SRV, it shows up. So the problem is only in the UAV-to-SRV step. I also triple-checked that the pointers are OK. Please help, I'm going mad over this. Here is some code:

        // before dispatch
        D3D11_TEXTURE2D_DESC textureDesc;
        ZeroMemory( &textureDesc, sizeof( textureDesc ) );
        textureDesc.Width = xr;
        textureDesc.Height = yr;
        textureDesc.MipLevels = 1;
        textureDesc.ArraySize = 1;
        textureDesc.SampleDesc.Count = 1;
        textureDesc.SampleDesc.Quality = 0;
        textureDesc.Usage = D3D11_USAGE_DEFAULT;
        textureDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
        textureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        D3D->CreateTexture2D( &textureDesc, NULL, &pTexture );

        D3D11_UNORDERED_ACCESS_VIEW_DESC viewDescUAV;
        ZeroMemory( &viewDescUAV, sizeof( viewDescUAV ) );
        viewDescUAV.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        viewDescUAV.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE2D;
        viewDescUAV.Texture2D.MipSlice = 0;
        D3DD->CreateUnorderedAccessView( pTexture, &viewDescUAV, &pTextureUAV );

        // the getSRV function, after dispatch
        D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
        ZeroMemory( &srvDesc, sizeof( srvDesc ) );
        srvDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
        srvDesc.Texture2D.MipLevels = 1;
        D3DD->CreateShaderResourceView( pTexture, &srvDesc, &pTextureSRV );
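
    One frequent cause of exactly this symptom, offered as a hedged guess since the binding calls are not shown: a resource cannot be bound for unordered access and as a shader resource at the same time, so a UAV left bound on the compute stage makes the runtime drop the conflicting binding and the SRV never takes effect. Unbinding the UAV after the dispatch, before binding the SRV for the graphics pass, is the usual remedy. In the sketch below, context stands for the ID3D11DeviceContext (not shown in the post); pTextureSRV is the post's own pointer.

        // After Dispatch(): free the UAV slot so the texture can be read as an SRV.
        ID3D11UnorderedAccessView *nullUAV[1] = { NULL };
        context->CSSetUnorderedAccessViews(0, 1, nullUAV, NULL);

        // Now bind the SRV for the normal graphics pipeline.
        context->PSSetShaderResources(0, 1, &pTextureSRV);

        // Before the next compute pass, unbind the SRV again for the same reason.
        ID3D11ShaderResourceView *nullSRV[1] = { NULL };
        context->PSSetShaderResources(0, 1, nullSRV);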


  • Masking OpenGL texture by a pattern

    - by user1304844
    Tiled terrain. The user wants to build a structure. He presses build, and for each tile an "allow" or "disallow" tile sprite is added to the scene. FPS drops right away, since there are 600+ tiles added to the screen. Since the map equals the screen, there is no scrolling. My idea is to draw one allow grid covering the whole map and mask the disallowed fields.

    Approach 1: Create allow and disallow grid textures. Draw a polygon on screen. Pass both textures to the fragment shader. Determine the position inside the polygon and use the color from the allow texture if the fragment belongs to an allow field, the disallow texture otherwise. Problem: how do I know that I'm on a field that isn't allowed, if I cannot pass the matrix representing the map (enum FieldStatus[][] (Allow / Disallow)) to the shader? Inside the shader I don't know which fragments should be masked.

    Approach 2: Create the allow texture. Create an empty texture buffer the same size as the allow texture. Memset the pixels of the empty texture to the desired color for each pixel that doesn't allow building. Draw a polygon on screen. Pass both textures to the fragment shader. Use the second texture's color where its alpha is greater than 0, the first texture's color otherwise. Problem: I'm not sure what the right way to manipulate pixels on a texture is. Do I just make a buffer of size width*height*4 and memcpy the color[] to the desired coordinates, or is there anything else to it? Would I have to call glTexImage2D after every change to the texture? Another problem with this approach is that it takes a lot more work to get a prettier effect, since I'm manipulating the color pixels instead of just masking two textures.

        varying vec2 TexCoordOut;
        uniform sampler2D Texture1;
        uniform sampler2D Texture2;

        void main(void) {
            vec4 allowColor = texture2D(Texture1, TexCoordOut);
            vec4 disallowColor = texture2D(Texture2, TexCoordOut);
            if (disallowColor.a > 0.0) {
                gl_FragColor = disallowColor;
            } else {
                gl_FragColor = allowColor;
            }
        }

    I'm working with OpenGL on Windows. Any other suggestion is welcome.
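
    On the "how do I manipulate pixels" part of approach 2, a minimal sketch (maskTexture, tileX, tileY, tileW and tileH are illustrative names; assumes a current GL context): keep a CPU-side RGBA buffer for the region that changed and upload just that region with glTexSubImage2D, so there is no need to call glTexImage2D, which reallocates the whole texture, after every change.

        #include <vector>

        // Mark one tile-sized region of the mask texture as "disallowed" (opaque red here).
        std::vector<unsigned char> pixels(tileW * tileH * 4);
        for (size_t i = 0; i < pixels.size(); i += 4) {
            pixels[i + 0] = 255;   // R
            pixels[i + 1] = 0;     // G
            pixels[i + 2] = 0;     // B
            pixels[i + 3] = 255;   // alpha > 0 is what the shader above keys on
        }
        glBindTexture(GL_TEXTURE_2D, maskTexture);
        glTexSubImage2D(GL_TEXTURE_2D, 0, tileX, tileY, tileW, tileH,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());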


  • Texture artifacts on iPad

    - by MrDatabase
    I'm porting an iPhone game to the iPad. When I move textures "quickly" (5.0 pixels every update at a rate of 60 Hz) I start to see little "artifacts" or remnants of where the texture used to be. I'm not sure if I know the correct terminology for this... imagine a texture at some location on the screen... then next to it is the same texture but faded a bit... then the same texture again just faded a bit more. I'm using CADisplayLink to drive my update loop if that helps. Also I didn't see this issue on the 3G or the iPhone 4. Any ideas? Cheers!


  • Getting a texture from a renderbuffer in OpenGL?

    - by Rushyo
    I've got a renderbuffer (DepthStencil) in an FBO and I need to get a texture from it. I can't have both a DepthComponent texture and a DepthStencil renderbuffer in the FBO, it seems, so I need some way to convert the renderbuffer to a DepthComponent texture after I'm done with it, for use later down the pipeline. I've tried plenty of techniques to grab the depth component from the renderbuffer for weeks, but I always come out with junk. All I want at the end is the same texture I'd get from an FBO if I wasn't using a renderbuffer. Can anyone post some comprehensive instructions or code that covers this seemingly simple operation? EDIT: Link to an extracted version of the code http://dl.dropbox.com/u/9279501/fbo.cs Screenshot of the Depth of Field effect + FBO - without depth(!) http://i.stack.imgur.com/Hj9Oe.jpg Screenshot without Depth of Field effect + FBO - depth working fine http://i.stack.imgur.com/boOm1.jpg
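
    One way to sidestep the conversion entirely (a sketch rather than a drop-in answer, written as raw GL calls even though the linked code is C#; assumes GL 3.0 or the packed depth-stencil extension, and width/height matching the FBO): attach a packed depth-stencil texture to the FBO in place of the renderbuffer, so the depth is already a texture when the pass finishes.

        GLuint depthTex;
        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
                     GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);

        // Attach it where the DepthStencil renderbuffer used to be attached.
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                               GL_TEXTURE_2D, depthTex, 0);

    Sampling depthTex later yields the depth component directly, which is effectively the DepthComponent texture asked for above, with the stencil bits along for the ride.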


  • Texture not drawing on cubes

    - by Christian Frantz
    I can draw the cubes fine, but they are just solid black, aside from the occasional lighting that shows up. The BasicEffect is being set for each cube as well.

        public void Draw(BasicEffect effect)
        {
            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                device.SetVertexBuffer(vertexBuffer);
                device.Indices = indexBuffer;
                device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, 8, 0, 12);
            }
        }

    That is the cube's Draw method. TextureEnabled is set to true in my main Draw method, and my texture is also loading fine.

        public Cube(GraphicsDevice graphicsDevice, Vector3 Position, Texture2D Texture)
        {
            device = graphicsDevice;
            texture = Texture;
            cubePosition = Position;
            effect = new BasicEffect(device);
        }

    The constructor seems fine too. Could this be caused by the Vector2s of my VertexPositionNormalTexture? Even if they were out of order, something should still be drawn other than a black cube.


  • Complex shading using one single (small) texture

    - by teodron
    Recently I stumbled upon a demo reel in UDK about how one can attain beautiful results using just one (rather tiny) texture that's being sent to the shader pipeline. The famous link is this one. Basically, the author states that they've used just one texture and give a snapshot of the technique here. I see that every RGBA channel contains different grayscale information.. and that info could be used to inside a shader to obtain a colour blended output. The problem is that the reel displays a fairly complex scene. To top that, the author even makes use of a normal map. How did they manage to fit a normal map in an already cluttered texture? It makes sense to have a half-space normal map by using only RG from an RGB texture, but what about the rest of the information? Since it was proven to be possible, could someone please explain how it was done (the big picture, not the dirty details!)!? Here's the texture being used. Click to see in full size.


  • Transparent parts of texture are opaque black instead

    - by Aaron
    I render a sprite twice, one on top of the other. The sprites have transparent parts, so I should be able to see the bottom sprite under the top sprite. The transparent parts are black (the clear colour) and opaque instead though and the topmost sprite blocks the bottom sprite. My fragment shader is trivial: uniform sampler2D texture; varying vec2 f_texcoord; void main() { gl_FragColor = texture2D(texture, f_texcoord); } I have glEnable(GL_BLEND) and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) in my initialization code. My texture comes from a PNG file that I load with libpng. I'm sure to use GL_RGBA when initializing the texture with glTexImage2D (otherwise the sprites look like noise).
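
    Assuming the blend state really is enabled at draw time, two other usual suspects are worth ruling out (this is a sketch of the standard arrangement, not a diagnosis): the sprites sharing a depth value, so the first one drawn writes depth and the depth test discards the second; and the blend function being reset elsewhere. Drawing transparent sprites after opaque geometry, sorted back to front, with depth writes off covers both.

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(GL_FALSE);   // keep depth testing, but don't write depth for sprites
        // ... draw the sprites sorted far-to-near ...
        glDepthMask(GL_TRUE);

    It is also worth confirming that the PNG actually carries an alpha channel as loaded, for example by checking png_get_color_type before assuming the data matches GL_RGBA.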


  • Techniques for lighting a texture (no shadows)

    - by Paul Manta
    I'm trying to learn about dynamic shadows for 2D graphics. While I understand the basic ideas behind determining what areas should be lit and which should be in shadow, I don't know how I would "lighten" a texture in the first place. Could you go over various popular techniques for lighting a texture and what (dis)advantages each one has? Also, how is lighting a texture with colored light different from using white light?
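
    The core operation is a per-pixel multiply of the texel by the light's color: white light (1, 1, 1) leaves the texture unchanged, a dimmer grey darkens it uniformly, and colored light scales each channel separately, which is why a pure red light can never brighten a pure blue texel. A fixed-function sketch of that idea (lightR, lightG, lightB are placeholders for whatever the lighting pass computes):

        // MODULATE multiplies the texture color by the current vertex color.
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
        glColor3f(lightR, lightG, lightB);   // (1,1,1) = white light; anything else tints
        // ... draw the textured quad; every texel is multiplied channel-wise by this color ...

    The same multiply is one line in a fragment shader, and more elaborate techniques (additive light maps, normal-mapped 2D lighting) build on top of it.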


  • XNA: Retrieve texture file name during runtime

    - by townsean
    I'm trying to retrieve the names of the texture files (or their locations) on a mesh. I realize that the texture file name information is not preserved when the model is loaded. I've been doing tons of searching and some experimenting, but I've had no luck. I've gathered that I need to extend the content pipeline and store the file location somewhere like ModelMeshPart.Tag. My problem is, even when I try to make my own custom processor, I still can't figure out where the texture file name is. :( Any thoughts? Thanks! UPDATE: Okay, so I found something kind of promising: NodeContent.Identity.SourceFilename, only that returns the location of my .X model. When I go down the node tree it is always null. Then there's the ContentItem.Name property. It seems to have the names of my meshes, but not my actual texture file names. :(


  • How to implement custom texture formats in Android?

    - by random1337
    What I know: Android can load PNG, BMP, WEBP, ... via BitmapFactory. What I want to achieve: load my own 2D file format (e.g. a 1-bit texture with a 1-bit alpha channel) and output an RGBA8888 texture. Question: is there any interface to achieve this (or any other way)? The resulting image is used as a texture for a 3D model. Why would you do that? Saving phone memory and download bandwidth while expanding the texture at runtime to RAM seems reasonable for very simple textures.


  • Rails render partial with block

    - by brad
    I'm trying to re-use an html component that i've written that provides panel styling. Something like: <div class="v-panel"> <div class="v-panel-tr"></div> <h3>Some Title</h3> <div class="v-panel-c"> .. content goes here </div> <div class="v-panel-b"><div class="v-panel-br"></div><div class="v-panel-bl"></div></div> </div> So I see that render takes a block. I figured then I could do something like this: # /shared/_panel.html.erb <div class="v-panel"> <div class="v-panel-tr"></div> <h3><%= title %></h3> <div class="v-panel-c"> <%= yield %> </div> <div class="v-panel-b"><div class="v-panel-br"></div><div class="v-panel-bl"></div></div> </div> And I want to do something like: #some html view <%= render :partial => '/shared/panel', :locals =>{:title => "Some Title"} do %> <p>Here is some content to be rendered inside the panel</p> <% end %> Unfortunately this doesn't work with this error: ActionView::TemplateError (/Users/bradrobertson/Repos/VeloUltralite/source/trunk/app/views/sessions/new.html.erb:1: , unexpected tRPAREN old_output_buffer = output_buffer;;@output_buffer = ''; __in_erb_template=true ; @output_buffer.concat(( render :partial => '/shared/panel', :locals => {:title => "Welcome"} do ).to_s) on line #1 of app/views/sessions/new.html.erb: 1: <%= render :partial => '/shared/panel', :locals => {:title => "Welcome"} do -%> ... So it doesn't like the = obviously with a block, but if I remove it, then it just doesn't output anything. Does anyone know how to do what I'm trying to achieve here? I'd like to re-use this panel html in many places on my site.


  • Rails 3 render and method call in another controller

    - by akam
    I am using Rails 3. I would like to render a portion of a view which is built by a 'notification' method in the Message class, so I've added this to my application.html.erb:

        <li><%= render :action => "notification", :controller => "messages" %></li>

    The goal of my notification.html.erb file is to display, in a red circle, the number of notifications on all my pages. I don't think I'm going about this the right way. Any ideas? Thanks all :)


  • OpenGL problem with texture in a model loaded from OBJ

    - by subSeven
    Hello! I writing small program in OpenGL, and I have problem ( textures are skew, and I dont know why, this model work in another obj viewer) What I have: http://img696.imageshack.us/i/obrazo.png/ What I want http://img88.imageshack.us/i/obraz2d.jpg/ This code where I load texture: bool success; ILuint texId; GLuint image; ilGenImages(1, &texId); ilBindImage(texId); success = ilLoadImage((WCHAR*)fileName.c_str()); if(success) { success = ilConvertImage(IL_RGB, IL_UNSIGNED_BYTE); if(!success) { return false; } } else { return false; } glGenTextures(1, &image); glBindTexture(GL_TEXTURE_2D, image); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP), ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT), 0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE, ilGetData()); ilDeleteImages(1, &texId); Code to load obj: triangles.clear(); std::ifstream in; std::string cmd; in.open (fileName.c_str()); if (in.is_open()) { while(!in.eof()) { in>>cmd; if(cmd=="v") { Vector3d vector; in>>vector.x; in>>vector.y; in>>vector.z; points.push_back(vector); } if(cmd=="vt") { Vector2d texcord; in>>texcord.x; in>>texcord.y; texcords.push_back(texcord); } if(cmd=="vn") { Vector3d normal; in>>normal.x; in>>normal.y; in>>normal.z; normals.push_back(normal); } if(cmd=="f") { Triangle triangle; std::string str; std::string str1,str2,str3; std::string delimeter("/"); int pos; int n; std::stringstream ss (std::stringstream::in | std::stringstream::out); in>>str; pos = str.find(delimeter); str1 = str.substr(0,pos); str2 = str.substr(pos+delimeter.length()); pos = str2.find(delimeter); str3 = str2.substr(pos+delimeter.length()); str2 = str2.substr(0,pos); ss<<str1; ss>>n; triangle.a= n-1; ss.clear(); ss<<str3; ss>>n; triangle.an =n-1; ss.clear(); ss<<str2; ss>>n; ss.clear(); triangle.atc = n-1; in>>str; pos = str.find(delimeter); str1 = str.substr(0,pos); str2 = str.substr(pos+delimeter.length()); pos = str2.find(delimeter); str3 = str2.substr(pos+delimeter.length()); str2 = str2.substr(0,pos); ss<<str1; ss>>n; triangle.b= n-1; ss.clear(); ss<<str3; ss>>n; triangle.bn =n-1; ss.clear(); ss<<str2; ss>>n; ss.clear(); triangle.btc = n-1; in>>str; pos = str.find(delimeter); str1 = str.substr(0,pos); str2 = str.substr(pos+delimeter.length()); pos = str2.find(delimeter); str3 = str2.substr(pos+delimeter.length()); str2 = str2.substr(0,pos); ss<<str1; ss>>n; triangle.c= n-1; ss.clear(); ss<<str3; ss>>n; triangle.cn =n-1; ss.clear(); ss<<str2; ss>>n; ss.clear(); triangle.ctc = n-1; triangles.push_back(triangle); } cmd = ""; } in.close(); return true; } return false; Code to draw model: glEnable(GL_TEXTURE_2D); glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL); glBindTexture(GL_TEXTURE_2D,image); glBegin(GL_TRIANGLES); for(int i=0;i<triangles.size();i++) { glTexCoord2f(texcords[triangles[i].ctc].x, texcords[triangles[i].ctc].y); glNormal3f(normals[triangles[i].cn].x, normals[triangles[i].cn].y, normals[triangles[i].cn].z); glVertex3f( points[triangles[i].c].x, points[triangles[i].c].y, points[triangles[i].c].z); glTexCoord2f(texcords[triangles[i].btc].x, texcords[triangles[i].btc].y); glNormal3f(normals[triangles[i].bn].x, normals[triangles[i].bn].y, normals[triangles[i].bn].z); glVertex3f( points[triangles[i].b].x, points[triangles[i].b].y, points[triangles[i].b].z); glTexCoord2f(texcords[triangles[i].atc].x, texcords[triangles[i].atc].y); glNormal3f(normals[triangles[i].an].x, 
normals[triangles[i].an].y, normals[triangles[i].an].z); glVertex3f( points[triangles[i].a].x, points[triangles[i].a].y, points[triangles[i].a].z); } glEnd(); glDisable(GL_TEXTURE_2D); Maybe someone can find the mistake in this.
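
    A frequent cause of sheared or skewed textures with 24-bit RGB data, offered as a hedged guess since it depends on the image width: OpenGL's default unpack row alignment of 4 bytes. The code above converts the image to IL_RGB (3 bytes per pixel), so whenever width * 3 is not a multiple of 4, OpenGL skips padding bytes that are not actually there and every row shifts a little further sideways. Setting the unpack alignment to 1 before the upload is a cheap thing to try:

        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP),
                     ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT), 0,
                     ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE, ilGetData());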


  • How to render a POST and make it show up on another page

    - by stack5914
    I'm trying to create a marketplace website similar to craigslist. I created a form according to the Django tutorial "Working with forms", but I don't know how to render the information I got from the POST form. I want the information (subject, price, etc.) that I got from POST to show up on another page like this: http://bakersfield.craigslist.org/atq/3375938126.html and I want the "Subject" (please look at forms.py) of this product (e.g. 1960 French Chair) to show up on a listing page like this: http://bakersfield.craigslist.org/ata/ Can I get some advice on handling the submitted information? Here are my present files. I'll appreciate all your answers and help.

    forms.py:

        from django import forms

        class SellForm(forms.Form):
            subject = forms.CharField(max_length=100)
            price = forms.CharField(max_length=100)
            condition = forms.CharField(max_length=100)
            email = forms.EmailField()
            body = forms.TextField()

    views.py:

        from django.shortcuts import render, render_to_response
        from django.http import HttpResponseRedirect
        from site1.forms import SellForm

        def sell(request):
            if request.method == "POST":
                form = SellForm(request.POST)
                if form.is_valid():
                    subject = form.cleaned_data['subject']
                    price = form.cleaned_data['price']
                    condition = form.cleaned_data['condition']
                    email = form.cleaned_data['email']
                    body = form.cleaned_data['body']
                    return HttpResponseRedirect('/books/')
            else:
                form = SellForm()
            render(request, 'sell.html', {'form': form,})

    urls.py:

        from django.conf.urls import patterns, include, url
        from django.contrib import admin

        admin.autodiscover()

        urlpatterns = patterns('',
            url(r'^sechand/$', 'site1.views.sell'),
            url(r'^admin/', include(admin.site.urls)),
        )

    sell.html:

        <form action = "/sell/" method = "post">{% csrf_token%}
            {{ form.as_p }}
            <input type = "submit" value="Submit" />
        </form>


  • Textures do not render on ATI graphics cards?

    - by Mathias Lykkegaard Lorenzen
    I'm rendering textured quads to an orthographic view in XNA through hardware instancing. On Nvidia graphics cards, this all works, tested on 3 machines. On ATI cards, it doesn't work at all, tested on 2 machines. How come? Culling perhaps? My orthographic view is set up like this: Matrix projection = Matrix.CreateOrthographicOffCenter(0, graphicsDevice.Viewport.Width, -graphicsDevice.Viewport.Height, 0, 0, 1); And my elements are rendered with the Z-coordinate 0. Edit: I just figured out something weird. If I do not call this spritebatch code above doing my textured quad rendering code, then it won't work on Nvidia cards either. Could that be due to culling information or something like that? Batch.Instance.SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.Default, RasterizerState.CullNone); ... spriteBatch.End(); Edit 2: Here's the full code for my instancing call. public void DrawTextures() { Batch.Instance.SpriteBatch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.Default, RasterizerState.CullNone, textureEffect); while (texturesToDraw.Count > 0) { TextureJob texture = texturesToDraw.Dequeue(); spriteBatch.Draw(texture.Texture, texture.DestinationRectangle, texture.TintingColor); } spriteBatch.End(); #if !NOTEXTUREINSTANCING // no work to do if (positionInBufferTextured > 0) { device.BlendState = BlendState.Opaque; textureEffect.CurrentTechnique = textureEffect.Techniques["Technique1"]; textureEffect.Parameters["Texture"].SetValue(darkTexture); textureEffect.CurrentTechnique.Passes[0].Apply(); if ((textureInstanceBuffer == null) || (positionInBufferTextured > textureInstanceBuffer.VertexCount)) { if (textureInstanceBuffer != null) textureInstanceBuffer.Dispose(); textureInstanceBuffer = new DynamicVertexBuffer(device, texturedInstanceVertexDeclaration, positionInBufferTextured, BufferUsage.WriteOnly); } if (positionInBufferTextured > 0) { textureInstanceBuffer.SetData(texturedInstances, 0, positionInBufferTextured, SetDataOptions.Discard); } device.Indices = textureIndexBuffer; device.SetVertexBuffers(textureGeometryBuffer, new VertexBufferBinding(textureInstanceBuffer, 0, 1)); device.DrawInstancedPrimitives(PrimitiveType.TriangleStrip, 0, 0, textureGeometryBuffer.VertexCount, 0, 2, positionInBufferTextured); // now that we've drawn, it's ok to reset positionInBuffer back to zero, // and write over any vertices that may have been set previously. positionInBufferTextured = 0; } #endif }


  • Endless terrain in jMonkey using TerrainGrid fails to render

    - by nightcrawler23
    I have started to learn game development using jMonkey engine. I am able to create single tile of terrain using TerrainQuad but as the next step I'm stuck at making it infinite. I have gone through the wiki and want to use the TerrainGrid class but my code does not seem to work. I have looked around on the web and searched other forums but cannot find any other code example to help. I believe in the below code, ImageTileLoader returns an image which is the heightmap for that tile. I have modified it to return the same image every time. But all I see is a black window. The Namer method is not even called. terrain = new TerrainGrid("terrain", patchSize, 513, new ImageTileLoader(assetManager, new Namer() { public String getName(int x, int y) { //return "Scenes/TerrainMountains/terrain_" + x + "_" + y + ".png"; System.out.println("X = " + x + ", Y = " + y); return "Textures/heightmap.png"; } })); These are my sources: jMonkeyEngine 3 Tutorial (10) - Hello Terrain TerrainGridTest.java ImageTileLoader This is the result when i use TerrainQuad: , My full code: // Sample 10 - How to create fast-rendering terrains from heightmaps, and how to // use texture splatting to make the terrain look good. public class HelloTerrain extends SimpleApplication { private TerrainQuad terrain; Material mat_terrain; private float grassScale = 64; private float dirtScale = 32; private float rockScale = 64; public static void main(String[] args) { HelloTerrain app = new HelloTerrain(); app.start(); } private FractalSum base; private PerturbFilter perturb; private OptimizedErode therm; private SmoothFilter smooth; private IterativeFilter iterate; @Override public void simpleInitApp() { flyCam.setMoveSpeed(200); initMaterial(); AbstractHeightMap heightmap = null; Texture heightMapImage = assetManager.loadTexture("Textures/heightmap.png"); heightmap = new ImageBasedHeightMap(heightMapImage.getImage()); heightmap.load(); int patchSize = 65; //terrain = new TerrainQuad("my terrain", patchSize, 513, heightmap.getHeightMap()); // * This Works but below doesnt work* terrain = new TerrainGrid("terrain", patchSize, 513, new ImageTileLoader(assetManager, new Namer() { public String getName(int x, int y) { //return "Scenes/TerrainMountains/terrain_" + x + "_" + y + ".png"; System.out.println("X = " + x + ", Y = " + y); return "Textures/heightmap.png"; // set to return the sme hieghtmap image. 
} })); terrain.setMaterial(mat_terrain); terrain.setLocalTranslation(0,-100, 0); terrain.setLocalScale(2f, 1f, 2f); rootNode.attachChild(terrain); TerrainLodControl control = new TerrainLodControl(terrain, getCamera()); terrain.addControl(control); } public void initMaterial() { // TERRAIN TEXTURE material this.mat_terrain = new Material(this.assetManager, "Common/MatDefs/Terrain/HeightBasedTerrain.j3md"); // GRASS texture Texture grass = this.assetManager.loadTexture("Textures/white.png"); grass.setWrap(WrapMode.Repeat); this.mat_terrain.setTexture("region1ColorMap", grass); this.mat_terrain.setVector3("region1", new Vector3f(-10, 0, this.grassScale)); // DIRT texture Texture dirt = this.assetManager.loadTexture("Textures/white.png"); dirt.setWrap(WrapMode.Repeat); this.mat_terrain.setTexture("region2ColorMap", dirt); this.mat_terrain.setVector3("region2", new Vector3f(0, 900, this.dirtScale)); Texture building = this.assetManager.loadTexture("Textures/building.png"); building.setWrap(WrapMode.Repeat); this.mat_terrain.setTexture("slopeColorMap", building); this.mat_terrain.setFloat("slopeTileFactor", 32); this.mat_terrain.setFloat("terrainSize", 513); } }


  • help with rails render action vs routing

    - by Stacia
    I was using some image cropping example that I found online and now I got confused. There is actually no "crop" method in my controller. Instead (following the guide) I put a render :action => 'cropping', :layout=> "admin" In my create method. That renders a page the view called cropping.html.erb . It works fine but I have no idea how to link or render that page otherwise, like if I wanted to hit a URL directly or press a button to recrop an image. Should I actually create a crop method in my controller and hook it up via routing if I want to be able to do this, or is there a way within my view to link to the same place that renders the cropping action? Sorry about the confusion :) It doesn't help that the first version of the tutorial did have a cropping method and he removed it!! Any explanation on why one method is better over the other would be great. Thanks!!


  • Problem while displaying the texture image on a view: works fine on the iPhone simulator but not on the device

    - by yunas
    hello i am trying to display an image on iphone by converting it into texture and then displaying it on the UIView. here is the code to load an image from an UIImage object - (void)loadImage:(UIImage *)image mipmap:(BOOL)mipmap texture:(uint32_t)texture { int width, height; CGImageRef cgImage; GLubyte *data; CGContextRef cgContext; CGColorSpaceRef colorSpace; GLenum err; if (image == nil) { NSLog(@"Failed to load"); return; } cgImage = [image CGImage]; width = CGImageGetWidth(cgImage); height = CGImageGetHeight(cgImage); colorSpace = CGColorSpaceCreateDeviceRGB(); // Malloc may be used instead of calloc if your cg image has dimensions equal to the dimensions of the cg bitmap context data = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte)); cgContext = CGBitmapContextCreate(data, width, height, 8, width * 4, colorSpace, kCGImageAlphaPremultipliedLast); if (cgContext != NULL) { // Set the blend mode to copy. We don't care about the previous contents. CGContextSetBlendMode(cgContext, kCGBlendModeCopy); CGContextDrawImage(cgContext, CGRectMake(0.0f, 0.0f, width, height), cgImage); glGenTextures(1, &(_textures[texture])); glBindTexture(GL_TEXTURE_2D, _textures[texture]); if (mipmap) glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); else glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data); if (mipmap) glGenerateMipmapOES(GL_TEXTURE_2D); err = glGetError(); if (err != GL_NO_ERROR) NSLog(@"Error uploading texture. glError: 0x%04X", err); CGContextRelease(cgContext); } free(data); CGColorSpaceRelease(colorSpace); } The problem that i currently am facing is this code workd perfectly fine and displays the image on simulator where as on the device as seen on debugger an error is displayed i.e. Error uploading texture. glError: 0x0501 any idea how to tackle this bug.... thnx in advance 4 ur soluitons


  • XNA - Obtaining depth from the scene's render target?

    - by user1423893
    I'm currently rendering my scene to a render target so it can be used for rendering methods such as post processing and order independent transparency. rtScene = new RenderTarget2D( GraphicsDevice, GraphicsDevice.PresentationParameters.BackBufferWidth, GraphicsDevice.PresentationParameters.BackBufferHeight, false, SurfaceFormat.Rgba64, DepthFormat.Depth24Stencil8, // Requires a depth format for objects to be drawn correctly (e.g. wireframe model surrounding model) 0, RenderTargetUsage.PreserveContents ); I am required to use RenderTargetUsage.PreserveContents so that the same render target can be rendered to multiple times, once for each of the draw methods below. DrawBackground DrawDeferred DrawForward DrawTransparent The problem is that DrawTransparent requires a copy of the scene's depth as a texture. Is there any way to obtain this from the scene render target above (rtScene)? I can't have more than one render target with RenderTargetUsage.PreserveContents as this causes problems on hardware such as the XBOX 360, so rendering the depth to a separate render target at the same time as I render the scene isn't possible as far as I can tell. Would I be able to get around this problem by "Ping-Ponging" two render targets (using the more compatible RenderTargetUsage.DiscardContents) and using the result for the depth texture?

