Search Results

Search found 3337 results on 134 pages for 'terrain rendering'.

Page 10/134 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • In GLSL is it possible to offset vertices based on height map colour?

    - by Rob
    I am attempting to generate some terrain based upon a heightmap. I have generated a 32 x 32 grid and a corresponding heightmap. In my vertex shader I am trying to offset the position along the Y axis based upon the colour of the heightmap, with white vertices higher than black ones.

        // Vertex shader
        #version 330
        uniform mat4 modelMatrix;
        uniform mat4 viewMatrix;
        uniform mat4 projectionMatrix;
        uniform sampler2D heightmap;
        layout (location=0) in vec4 vertexPos;
        layout (location=1) in vec4 vertexColour;
        layout (location=3) in vec2 vertexTextureCoord;
        layout (location=4) in float offset;
        out vec4 fragCol;
        out vec4 fragPos;
        out vec2 fragTex;

        void main() {
            // Retrieve the current texel's colour
            vec4 hmColour = texture(heightmap, vertexTextureCoord);
            // Offset the y position by the current texel's red channel
            vec4 offset = vec4(vertexPos.x, vertexPos.y + hmColour.r, vertexPos.z, 1.0);
            // Final position
            gl_Position = projectionMatrix * viewMatrix * modelMatrix * offset;
            // Data sent to the fragment shader
            fragCol = vertexColour;
            fragPos = vertexPos;
            fragTex = vertexTextureCoord;
        }

    However, the code I have produced only creates a grid with none of the vertices offset on the Y axis. This is the C++ code that generates the grid and texture coordinates, which I believe to be correct, as the texture is mapped to the grid (hence the white blob in the middle). The grid lines are generated in the fragment shader, sorry for any confusion. I have tried multiplying the r value of hmColour by 1000, but unfortunately that had no effect. The only other problem it could be is that the texture coordinate data is incorrect?

        for (int z = 0; z < MAP_Z; z++) {
            for (int x = 0; x < MAP_X; x++) {
                // Generate vertex buffer
                vertexData[iVertex++] = float(x) * MAP_X;
                vertexData[iVertex++] = 0;
                vertexData[iVertex++] = -(float)(z) * MAP_Z;
                // Colour buffer (NOT NEEDED)
                colourData[iColour++] = 255.0f; // R
                colourData[iColour++] = 1.0f;   // G
                colourData[iColour++] = 0.0f;   // B
                // Texture buffer
                textureData[iTexture++] = (float)x * (1.0f / MAP_X);
                textureData[iTexture++] = (float)z * (1.0f / MAP_Z);
            }
        }

    The heightmap texture I am trying to use appears like so (without grid lines). This is the corresponding fragment shader:

        // Fragment shader
        #version 330
        uniform sampler2D hmTexture;
        layout (location=0) out vec4 fragColour;
        in vec2 fragTex;
        in vec4 pos;

        void main(void) {
            vec2 line = fragTex * 32;
            // Without grid lines
            fragColour = texture(hmTexture, fragTex);
            // With grid lines
            // + mix(vec4(0.0, 0.0, 1.0, 0.0), vec4(1.0, 1.0, 1.0, 1.0),
            //       smoothstep(0.05, fract(line.y), 0.99) * smoothstep(0.05, fract(line.x), 0.99));
        }
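
    One thing worth ruling out first: in core-profile GL the default minification filter (GL_NEAREST_MIPMAP_LINEAR) makes a texture without mipmaps incomplete, and an incomplete texture samples as black, which would leave the grid flat exactly as described. A minimal upload sketch that avoids this (heightmapTex, pixels and program are illustrative names, not from the question):

        GLuint heightmapTex;
        glGenTextures(1, &heightmapTex);
        glBindTexture(GL_TEXTURE_2D, heightmapTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 32, 32, 0,
                     GL_RED, GL_UNSIGNED_BYTE, pixels);
        // No mipmaps are uploaded, so the min filter must not require them,
        // or every texture() fetch returns vec4(0, 0, 0, 1).
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // Bind the sampler uniform to the unit the texture is bound to.
        glActiveTexture(GL_TEXTURE0);
        glUniform1i(glGetUniformLocation(program, "heightmap"), 0);

    Using textureLod(heightmap, vertexTextureCoord, 0.0) in the vertex shader also sidesteps the implicit-LOD question entirely, since vertex shaders have no derivatives to pick a mip level with.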

    Read the article

  • XNA Quadtree with LOD

    - by Byron Cobb
    I'm looking to create a fairly large environment, and as such would like to implement a quadtree and use LOD on it. I've looked through numerous examples and I get the basic idea of a quadtree: start with a root node whose 4 vertices cover the whole map and divide into 4 child nodes until some criterion is met (e.g. a maximum number of triangles). I'm looking for a very basic algorithm or explanation with respect to drawing the quadtree. What vertices need to be stored per iteration? When do I determine what vertices to draw? When do I update indices and vertices? How do I integrate the bounding frustum? Do I include parent and child vertices? I'm looking for very simple instruction on what to do. I've scoured the internet for days now looking, but everyone adds extra code and a different spin without explanation. I understand quadtrees, but not with respect to 3D rendering and LOD. A link to an outside source will probably have been read by myself already and won't help. Regards, Byron.
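
    One concrete way to lay this out, as a minimal C++ sketch (the node layout and the distance-based LOD test are assumptions, not from any particular engine): every node owns an index range describing a patch of the terrain at its own coarseness, over one shared vertex buffer, so a frame draws either a node's patch or descends into its children, never both.

        #include <functional>

        // Minimal quadtree node: a region of terrain plus the index range of a
        // pre-built patch at this node's level of detail.
        struct QuadNode {
            float minX, minZ, maxX, maxZ;       // region covered by this node
            int firstIndex = 0, indexCount = 0; // patch in a shared index buffer
            QuadNode* children[4] = {nullptr, nullptr, nullptr, nullptr};
            bool isLeaf() const { return children[0] == nullptr; }
        };

        // Frustum-cull whole subtrees; stop descending once the node is far
        // enough away that its own (coarser) patch is good enough.
        void drawNode(const QuadNode* n,
                      const std::function<bool(const QuadNode&)>& inFrustum,
                      const std::function<float(const QuadNode&)>& distance,
                      const std::function<void(int, int)>& drawPatch,
                      float lodFactor)
        {
            if (!inFrustum(*n)) return;          // cull the whole subtree
            bool farEnough = distance(*n) > lodFactor * (n->maxX - n->minX);
            if (n->isLeaf() || farEnough) {
                drawPatch(n->firstIndex, n->indexCount);
            } else {
                for (QuadNode* c : n->children)
                    drawNode(c, inFrustum, distance, drawPatch, lodFactor);
            }
        }

    With one shared vertex buffer nothing is duplicated between parents and children; only index ranges differ per level, and the traversal itself answers "when do I determine what to draw": every frame, top-down.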

    Read the article

  • How to test a 3D rendering engine?

    - by YoYo
    Some friends and I are developing a simple 3D rendering engine as practice for university. We used Ogre3D as a prototype and are now developing our own from the base up. The engine is wrapped in a simple game that asks the user to select a shape (circle, triangle, square, ...), colour and dimensions, and renders the image to the screen. It also lets the user move and rotate the shape on screen using the mouse. We would like to automate testing of the view rendering. I could not find any test framework for this issue, and I would like to know how 3D testing is done in a non-manual manner.
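
    One non-manual approach that fits this setup is golden-image testing: render a fixed scene, read the pixels back, and compare them against a stored reference within a tolerance. A minimal sketch, assuming an OpenGL backend (the function names are illustrative):

        #include <GL/gl.h>   // assumes an OpenGL context is current
        #include <cmath>
        #include <vector>

        std::vector<unsigned char> captureFrame(int w, int h) {
            std::vector<unsigned char> pixels(w * h * 4);
            // Read back the framebuffer after rendering a known scene.
            glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
            return pixels;
        }

        // Allow small per-channel differences and a tiny fraction of bad pixels,
        // so harmless driver/GPU rasterisation variance does not fail the test.
        bool matchesReference(const std::vector<unsigned char>& got,
                              const std::vector<unsigned char>& ref,
                              int tolerance = 2, double maxBadFraction = 0.001) {
            if (got.size() != ref.size()) return false;
            std::size_t bad = 0;
            for (std::size_t i = 0; i < got.size(); ++i)
                if (std::abs(int(got[i]) - int(ref[i])) > tolerance) ++bad;
            return bad <= static_cast<std::size_t>(got.size() * maxBadFraction);
        }

    Geometry-level assertions (transforming known vertices on the CPU and checking the projected positions) complement the image comparison and fail with far more readable error messages.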

    Read the article

  • How to achieve best performance in DirectX 9.0 while rendering on multiple monitors

    - by Vibhore Tanwer
    I am new to DirectX and trying to learn best practice. What are the best practices for rendering different things on multiple monitors at the same time? How can I boost the performance of the application? I have gone through this article: http://msdn.microsoft.com/en-us/library/windows/desktop/bb147263%28v=vs.85%29.aspx. I am making use of some pixel shaders to achieve effects; at most 4 shader effects can be applied at the same time. What are the best practices to achieve the best performance with DirectX 9.0? I read somewhere that DirectX 11 provides support for parallel rendering, but I have not been able to find a working sample for DirectX 11.0. Please help me with this; any help would be of great value. Thanks
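
    For the multi-monitor part specifically, a widely used D3D9 pattern is one device plus one additional swap chain per extra window, rather than one device per monitor, so shaders and textures are created once. A rough sketch (hwndSecondMonitor is an illustrative name; error handling omitted):

        // Describe a swap chain for a window placed on the second monitor.
        D3DPRESENT_PARAMETERS pp2 = {};
        pp2.Windowed = TRUE;
        pp2.SwapEffect = D3DSWAPEFFECT_DISCARD;
        pp2.hDeviceWindow = hwndSecondMonitor;
        pp2.BackBufferFormat = D3DFMT_UNKNOWN;   // take the desktop format

        IDirect3DSwapChain9* chain2 = nullptr;
        device->CreateAdditionalSwapChain(&pp2, &chain2);

        // Per frame: point the device at this chain's back buffer, draw, present.
        IDirect3DSurface9* backBuffer = nullptr;
        chain2->GetBackBuffer(0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);
        device->SetRenderTarget(0, backBuffer);
        backBuffer->Release();
        // ... BeginScene / draw this monitor's scene / EndScene ...
        chain2->Present(nullptr, nullptr, nullptr, nullptr, 0);

    Each swap chain presents independently, which also keeps the per-monitor pixel-shader cost isolated; timing each Present then shows which monitor's effects are the bottleneck.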

    Read the article

  • How do I implement a Bullet Physics CollisionObject that represents my cube like terrain?

    - by Byte56
    I've successfully integrated the Bullet physics library into my entity/component system, and entities can collide with each other. Now I need to enable them to collide with the terrain, which is finite and cube-like (think InfiniMiner or its clone, Minecraft). I only started using the Bullet library yesterday, so perhaps I'm missing something obvious. So far I've extended the RigidBody class to override the checkCollideWith(CollisionObject co) function. At the moment it's just a simple check of the origin, not using the other shape; I'll iterate on that later. For now it looks like this:

        @Override
        public boolean checkCollideWith(CollisionObject co) {
            Transform t = new Transform();
            co.getWorldTransform(t);
            if (COLONY.SolidAtPoint(t.origin.x, t.origin.y, t.origin.z)) {
                return true;
            }
            return false;
        }

    This works great as far as detecting when collisions happen. However, it doesn't handle the collision response. It seems that the default collision response is to move the colliding objects outside of each other's shapes, possibly their AABBs. At the moment the shape of the terrain is just a box the size of the world, which means entities that collide with the terrain shoot away to outside that world-sized box. So it's clear that I either need to modify the collision response or create a shape that conforms directly to the shape of the terrain. Which option is best, and how do I go about implementing it? It should be noted that the terrain is dynamic and frequently modified by the player.
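
    Rather than modifying the collision response, the more common route is the second option: give Bullet real collision geometry for the solid cells near each entity and rebuild it when the terrain is edited. A sketch against Bullet's native C++ API (the Java port mirrors these calls; isSolid stands in for the game's own terrain query, like COLONY.SolidAtPoint):

        #include <btBulletDynamicsCommon.h>

        // Build a static compound of unit boxes for the solid cells in a region,
        // e.g. a small neighbourhood around a dynamic body or a terrain chunk.
        btCollisionShape* makeChunkShape(int x0, int y0, int z0,
                                         int x1, int y1, int z1,
                                         bool (*isSolid)(int, int, int)) {
            static btBoxShape unitBox(btVector3(0.5f, 0.5f, 0.5f)); // shared shape
            btCompoundShape* compound = new btCompoundShape();
            for (int x = x0; x < x1; ++x)
                for (int y = y0; y < y1; ++y)
                    for (int z = z0; z < z1; ++z)
                        if (isSolid(x, y, z)) {
                            btTransform t;
                            t.setIdentity();
                            t.setOrigin(btVector3(x + 0.5f, y + 0.5f, z + 0.5f));
                            compound->addChildShape(t, &unitBox);
                        }
            return compound; // attach to a mass-0 (static) rigid body
        }

    Because only cells near dynamic bodies need real shapes, frequent player edits stay cheap: rebuild just the chunk shape that changed and swap it on its static body.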

    Read the article

  • Android OpenGL ES 2.0: different resolutions, rendering and input

    - by kkan
    I'm currently developing a sprite-based 2D game for Android using OpenGL ES 2.0. I've got some basic rendering done that mimics the SpriteBatch functionality of XNA (draw sprite, rotation, colour). All of this works with a fixed projection matrix, but Android has a lot of screen sizes. Q1) Would this be an okay method to scale the drawing up/down? 1) Draw the whole screen to a texture. 2) Draw the above texture as a quad to the device. I found the above through some searching; I'm not sure if it's the best approach. Are there any alternatives? Q2) How do you handle input for different resolutions? I currently get the position of a touch and use it raw. Would it be okay to get the position, scale it to the size of the texture used for rendering, and then perform calculations on it? Thanks.
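
    On Q2, scaling the raw touch position into the same fixed virtual resolution the scene is rendered at is the usual answer, so all game logic works in one coordinate space. A minimal sketch (the 960x540 virtual size is an illustrative assumption):

        struct Vec2 { float x, y; };

        // Convert a raw touch position into virtual-resolution coordinates,
        // using the same scale the render-to-texture quad is drawn with.
        Vec2 screenToVirtual(float touchX, float touchY,
                             float screenW, float screenH) {
            const float kVirtualW = 960.0f, kVirtualH = 540.0f;
            return { touchX * (kVirtualW / screenW),
                     touchY * (kVirtualH / screenH) };
        }

    If the quad is letterboxed rather than stretched, subtract the letterbox offset before scaling so taps still line up at extreme aspect ratios.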

    Read the article

  • Screen rendering problems after upgrading to 12.10

    - by vjrj
    Since upgrading to Ubuntu 12.10 I'm suffering severe rendering issues in Unity: Some parts of any application (for instance Firefox, or Emacs, etc.) are blank, not rendered, or blinking. Some fonts in an application are not rendered correctly, maybe only in part (see the Eclipse screenshot). There are some shadows on the desktop background. It's something that occurs from time to time (it works great for hours and suddenly starts giving these rendering problems). I tried to find a bug on Launchpad but without success. My card (using the i915 module): 00:02.0 VGA compatible controller [0300]: Intel Corporation Core Processor Integrated Graphics Controller [8086:0042] (rev 02). I've tried resetting Compiz/Unity and GNOME but it does not help. Any tip? Update: a Firefox screenshot of how my profile on Ask Ubuntu looks now

    Read the article

  • How to run a Fujitsu P27T-7 LED monitor in a non-native resolution and have perfect font rendering

    - by Ilia Rostovtsev
    My problem is the complete opposite of anything I could find: I need to run my monitor in a NON-native resolution and have perfect font rendering. I recently got an Ultra HD 2560x1440 27-inch monitor (Fujitsu P27T-7 LED) and I have an issue with it. I would call it personal, but I'm afraid it's not, as a few people have already agreed with me. I do programming, and the text in UHD is way too small for comfortable use. I changed the resolution to regular Full HD (1920x1080) and it became just right, but the text now looks slightly blurry in comparison both to the native UHD resolution and to my old 23-inch NEC. I am pretty frustrated and not sure what to do, or how to make the fonts look as sleek as they should. I can't work in the UHD resolution (my vision is 100% perfect): simply calculated, the picture size at Ultra HD (2560x1440) on 27 inches is around 30% smaller than Full HD (1920x1080) on 23 inches. To have the same font size as Full HD on 23 inches, a 27-inch Ultra HD monitor would need to be around 32 inches in size. If I set my new monitor to regular Full HD 1920x1080, the font size is just perfect but the quality is not, as it's blurry. Could anyone please help me out with advice on how to solve this problem? Spec: nVidia 560 Ti with DVI-D port on Fedora 20. EDIT 1: Changing fonts doesn't really help, as everything else doesn't look the way it should. EDIT 2: The monitor buzzes badly at 2560x1440 when there are lots of lines on the screen, like a file listing. If I type ls /usr/bin it makes a nasty, irritating sound. At 1920x1080 it's a bit better. Any idea why?

    Read the article

  • 256 Worker Role 3D Rendering Demo is now a Lab on my Azure Course

    - by Alan Smith
    Ever since I came up with the crazy idea of creating an Azure application that would spin up 256 worker roles (please vote if you like it) to render a 3D animation created using the Kinect depth camera, I have been trying to think of something useful to do with it. I have also been busy working on developing training materials for a Windows Azure course that I will be delivering through a training partner in Stockholm, and for customers wanting to learn Windows Azure. I hit on the idea of combining the render demo and a course lab, creating a lab where the students would create and deploy their own mini render farms, which would participate in a single render job consisting of 2,000 frames. The architecture of the solution is shown below. As students would be creating and deploying their own applications, I thought it would be fun to introduce some competitiveness into the lab. In the 256 worker role demo I capture the rendering statistics for each role, so it was fairly simple to include the student's name in these statistics. This allowed the process monitor application to capture the number of frames each student had rendered and display a high-score table. When I demoed the application I deployed one instance that started rendering a frame every few minutes, and the challenge for the students was to deploy and scale their applications and overtake my single role instance by the end of the lab time. I had the process monitor running on the projector during the lab so the class could see the progress of their deployments, and how they were performing against my implementation and their classmates. When I tested the lab for the first time in Oslo last week it was a great success; the students were keen to be the first to build and deploy their solution and then watch the frames appear. As the students mostly had MSDN subscriptions, they were able to scale to the full 20 worker role instances, and before long we had over 100 worker roles working on the animation. There were, however, a couple of issues caused by the competitive nature of the lab. The first student to scale the application to 20 instances would render the most frames and win; there was no way for others to catch up. Also, as they were competing against each other, there was no incentive to help others on the course get their application up and running. I have now re-written the lab to divide the students into teams that will compete to render the most frames. This means that if one developer on a team can deploy and scale quickly, the other teams still have a chance to catch up. It also means that if a student finishes quickly and puts their team in the lead, they will have an incentive to help the other developers on their team get up and running. As I was using "Sharks with Lasers" for a lot of my demos, and reserved the sharkswithfreakinlasers namespaces for some of the Azure services (well, somebody had to do it), the students came up with some creative alternatives, like "Camels with Cannons" and "Honey Badgers with Homing Missiles". That gave me the idea of having the teams choose a creative name involving animals and weapons. The team rendering architecture diagram is shown below.

    Render Challenge Rules

    In order to ensure fair play, a number of rules are imposed on the lab:
    · The class will be divided into teams; each team chooses a name.
    · The team name must consist of a ferocious animal combined with a hazardous weapon.
    · Teams can allocate as many worker roles as they can muster to the render job.
    · Frame processing statistics and rendered frames will be vigilantly monitored; any cheating, tampering, and other foul play will result in penalties.

    The screenshot below shows an example of the team render farm in action: Badgers with Bombs have taken the lead over Camels with Cannons, and both are leaving the Sharks with Lasers standing. If you are interested in attending a scheduled delivery of my Windows Azure or Windows Azure Service Bus courses, or would like on-site training, more details are here.

    Read the article

  • What rendering services are available to convert URLs to images?

    - by tangens
    I know some services that encode the description of an image inside a URL. For example: yuml.me for drawing UML diagrams, or www.codecogs.com for rendering LaTeX equations. I really like using these services inside my Javadoc to illustrate the documentation. On stackoverflow.com it's a bit tricky to encode these URLs; see my request at meta.stackoverflow.com. Question: Are there any other rendering services that are useful for documenting source code?

    Read the article

  • Preferred way to render text in OpenGL

    - by dukeofgaming
    Hi, I'm about to pick up computer graphics once again for a university project. For a previous project I used a library called FTGL, which didn't leave me quite satisfied as it felt kind of heavy (I tried all its rendering techniques; text rendering didn't scale very well). My question is: is there a good and efficient library for this? If not, what would be the way to implement fast but nice-looking text? Some intended uses are: floating object/character labels, dialogues, menus, HUD. Regards and thanks in advance. EDIT: Preferably one that can load fonts.
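
    If no library fits, the standard fast-but-decent approach is a glyph atlas: rasterise all glyphs into one texture up front (FreeType can do this for arbitrary fonts, which covers the font-loading requirement) and emit two textured triangles per character, so an entire string is one buffer upload and a single draw call. A minimal sketch, assuming a fixed 16x16 ASCII grid in the atlas:

        #include <vector>

        struct GlyphVertex { float x, y, u, v; };

        // Append one textured quad (two triangles) per character of 'text'.
        void appendText(std::vector<GlyphVertex>& out, const char* text,
                        float penX, float penY, float glyphSize) {
            const float cell = 1.0f / 16.0f;   // one atlas cell in UV space
            for (; *text; ++text, penX += glyphSize) {
                unsigned char c = static_cast<unsigned char>(*text);
                float u = (c % 16) * cell, v = (c / 16) * cell;
                GlyphVertex q[6] = {
                    {penX,             penY,             u,        v},
                    {penX + glyphSize, penY,             u + cell, v},
                    {penX,             penY + glyphSize, u,        v + cell},
                    {penX + glyphSize, penY,             u + cell, v},
                    {penX + glyphSize, penY + glyphSize, u + cell, v + cell},
                    {penX,             penY + glyphSize, u,        v + cell},
                };
                out.insert(out.end(), q, q + 6);
            }
        }

    Batching every label, menu and HUD string into one vertex buffer per frame is what makes this scale where per-glyph draw calls feel heavy.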

    Read the article

  • Rendering with Direct3D

    - by Jamie
    Hi, I'm slightly confused about how Direct3D rendering works. Basically, as long as I render to one surface, everything is fine. But when I try rendering to multiple surfaces, it seems like everything is still rendered to one surface. I think there's something wrong with my calls. For each update cycle this is what I do:

    1. device->BeginScene()
    2. sprite->Begin(...)
    3. A bunch of GetRenderTarget calls to store the old render target, then SetRenderTarget to set a new surface, and then things like CreateVertexBuffer, SetTexture, etc. to draw on the new render target. Then resetting the old render target.
    4. sprite->Draw([the back buffer]) (the "back buffer" is actually another surface, not the actual back buffer; but here it is being drawn onto the actual back buffer, I think)
    5. sprite->End()
    6. device->EndScene()
    7. device->Present(...)

    Also, it seems like if I mix sprite drawing and non-sprite drawing onto a surface, first one set of render commands is executed and then the other set, rather than in the order in which the commands were called. If anyone could shed light on any of this, it would be much appreciated.
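
    Two details may explain both symptoms. First, if the sprite interface is ID3DXSprite, its draws are batched and only reach the device at End() or an explicit Flush(); that is exactly the "one set of commands, then the other" ordering seen when mixing sprite and non-sprite drawing. Second, the render-to-texture round trip normally looks like this sketch (renderTexture is an illustrative name):

        IDirect3DSurface9 *oldTarget = nullptr, *texSurface = nullptr;
        device->GetRenderTarget(0, &oldTarget);          // remember current target
        renderTexture->GetSurfaceLevel(0, &texSurface);  // level 0 of the texture
        device->SetRenderTarget(0, texSurface);
        // ... draw onto the texture; call sprite->Flush() before switching back,
        // so batched sprite draws land on *this* target and in order ...
        device->SetRenderTarget(0, oldTarget);           // back to the back buffer
        texSurface->Release();
        oldTarget->Release();

    If Flush() is never called before SetRenderTarget, the batch is committed later onto whichever target is bound at End(), which would match "everything still ends up on one surface".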

    Read the article

  • State of the art Culling and Batching techniques in rendering

    - by Kristian Skarseth
    I'm currently working on upgrading and restructuring an OpenGL render engine. The engine is used for visualising large scenes of architectural data (buildings with interiors), and the number of objects can become rather large. As in any building, there are a lot of objects occluded by walls, and you naturally only see the objects in the same room as you, or the exterior if you are outside. This leaves a large number of objects that should be removed through occlusion culling and frustum culling. At the same time there is a lot of repetitive geometry that can be merged into render batches, and also a lot of objects that can be drawn with instanced rendering. The way I see it, it can be difficult to combine batching and culling in an optimal fashion. If you batch too many objects into the same VBO, it is difficult to cull the objects on the CPU in order to skip rendering that batch. At the same time, if you skip culling on the CPU, a lot of objects will be processed by the GPU while they are not visible. If you skip batching completely in order to cull more easily on the CPU, there will be an unwanted high number of render calls. I have done some research into existing techniques and theories about how these problems are solved in modern graphics, but I have not been able to find any concrete solution. An idea a colleague and I came up with was restricting batches to objects relatively close to each other, e.g. all chairs in a room, or objects within a radius of n meters. This could be simplified and optimised through the use of octrees. Does anyone have any pointers to techniques used for scene management, culling, batching etc. in state-of-the-art modern graphics engines?
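
    The closing idea (restricting batches to objects that are close together) is a workable middle ground: hash every static object into a coarse grid cell, merge each cell's geometry into one buffer, then cull and draw per cell, so a visible cell costs one test and one draw call. A minimal C++ sketch (the types and the 21-bit key packing are illustrative):

        #include <cstdint>
        #include <unordered_map>

        struct Cell {
            unsigned vbo = 0, ibo = 0;  // one merged buffer pair per cell
            int indexCount = 0;
            float minB[3], maxB[3];     // AABB for frustum/occlusion tests
        };

        // Quantise a world position to a cell key (collisions acceptable here).
        std::uint64_t cellKey(float x, float y, float z, float cellSize) {
            auto q = [cellSize](float v) {
                return static_cast<std::uint64_t>(
                    static_cast<std::int64_t>(v / cellSize)) & 0x1FFFFF;
            };
            return q(x) | (q(y) << 21) | (q(z) << 42);
        }

        // Per frame: one AABB test per cell; a visible cell is one draw call.
        template <typename FrustumTest, typename DrawFn>
        void drawVisible(const std::unordered_map<std::uint64_t, Cell>& cells,
                         FrustumTest inFrustum, DrawFn draw) {
            for (const auto& [key, cell] : cells)
                if (inFrustum(cell.minB, cell.maxB))
                    draw(cell);
        }

    For building interiors, cells line up naturally with rooms, and each cell's AABB is also exactly what a hardware occlusion query or portal test needs, so the same structure serves batching, frustum culling and occlusion culling.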

    Read the article

  • Nifty default controls prevent the rest of my game from rendering

    - by zergylord
    I've been trying to add a basic HUD to my 2D LWJGL game using Nifty GUI, and while I've been successful in rendering panels and static text on top of the game, using the built-in Nifty controls (e.g. an editable text field) causes the rest of my game to not render. The strange part is that I don't even have to render the GUI control; merely declaring it appears to cause the problem. I'm truly lost here, so even the vaguest glimmer of hope would be appreciated :-) Some code showing the basic layout of the problem. Display setup:

        // load default styles
        nifty.loadStyleFile("nifty-default-styles.xml");
        // load standard controls
        nifty.loadControlFile("nifty-default-controls.xml");
        screen = new ScreenBuilder("start") {{
            layer(new LayerBuilder("baseLayer") {{
                childLayoutHorizontal();
                // next line causes the problem
                control(new TextFieldBuilder("input", "asdf") {{
                    width("200px");
                }});
            }});
        }}.build(nifty);
        nifty.gotoScreen("start");

    Rendering:

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        GLU.gluOrtho2D(0f, WINDOW_DIMENSIONS[0], WINDOW_DIMENSIONS[1], 0f);
        // I can remove the 2 nifty lines, and the game still won't render
        nifty.render(true);
        nifty.update();
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        GLU.gluOrtho2D(0f, (float)VIEWPORT_DIMENSIONS[0], 0f, (float)VIEWPORT_DIMENSIONS[1]);
        glTranslatef(translation[0], translation[1], 0);
        for (Bubble bubble : bubbles) {
            bubble.draw();
        }
        for (Wall wall : walls) {
            wall.draw();
        }
        for (Missile missile : missiles) {
            missile.draw();
        }
        for (Mob mob : mobs) {
            mob.draw();
        }
        agent.draw();
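
    One thing worth testing before digging deeper (a guess, not a confirmed diagnosis): GUI layers commonly leave OpenGL state changed after their pass (blending, bound texture, matrices), and declaring a control can pull in render paths that static panels never touched, so the game's fixed-function draws afterwards silently produce nothing. A blunt way to isolate that in compatibility-profile GL, shown as C calls; LWJGL's GL11 statics mirror them one-to-one:

        glPushAttrib(GL_ALL_ATTRIB_BITS);        // save enables, blend func, etc.
        glMatrixMode(GL_PROJECTION); glPushMatrix();
        glMatrixMode(GL_MODELVIEW);  glPushMatrix();

        /* nifty.render(true); nifty.update();      the GUI pass goes here */

        glMatrixMode(GL_MODELVIEW);  glPopMatrix();
        glMatrixMode(GL_PROJECTION); glPopMatrix();
        glPopAttrib();                           // restore the game's state

    If the game reappears with this wrapper in place, the culprit is a specific piece of leaked state (often blending or GL_TEXTURE_2D), and the push/pop can then be narrowed down to just that.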

    Read the article

  • Input handling between game loops

    - by user48023
    This may be obvious and trivial for you, but as I am a newbie in programming I come with a specific question. I have three loops in my game engine: an input loop, an update loop and a render loop. The update loop is set to 10 ticks per second with a fixed timestep, the render loop is capped at around 60 fps, and the input loop runs as fast as possible. I am using one of the JavaScript frameworks that provide such things, but it doesn't really matter. Let's say I am rendering a tile map, and which elements are in view depends on camera-like movement variables that are modified while keys are pressed. This is only about the camera/viewport and rendering; no game physics is involved. And now: how can I handle input events among these loops to keep the engine's reaction consistent? Am I supposed to read the current variable modified by input, do the needed calculations in the update loop, and share the result so it can be interpolated in the render loop? Or read the input's effect directly inside the render loop and put the needed calculations there? I thought interpreting user input inside an update loop with a low tick rate would be inaccurate and feel unresponsive, while rendering with interpolation smooths the final view. How is it done properly in games overall?
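
    A common arrangement matching this setup: the input loop only writes a snapshot, the 10-tick update consumes it with the fixed timestep, and the render loop interpolates between the last two update states. A minimal sketch (names are illustrative):

        struct CameraState { float x = 0, y = 0; };
        struct InputSnapshot { bool left = false, right = false,
                               up = false, down = false; };

        InputSnapshot input;     // written by the input loop / event handlers
        CameraState prev, curr;  // the two most recent fixed-step states

        void fixedUpdate(float dt) {     // called 10 times per second
            prev = curr;
            const float speed = 200.0f;  // world units per second, illustrative
            curr.x += ((input.right ? 1.f : 0.f) - (input.left ? 1.f : 0.f)) * speed * dt;
            curr.y += ((input.down  ? 1.f : 0.f) - (input.up   ? 1.f : 0.f)) * speed * dt;
        }

        // alpha in [0,1]: how far the render frame sits between the two ticks.
        CameraState viewState(float alpha) {
            return { prev.x + (curr.x - prev.x) * alpha,
                     prev.y + (curr.y - prev.y) * alpha };
        }

    Interpolation keeps a 10-tick camera smooth at 60 fps at the cost of up to one tick of latency. Since no game physics is involved here, the other legitimate choice is to move pure camera motion into the render loop itself and leave only gameplay in the update loop.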

    Read the article

  • Logic / Render phases with a single thread

    - by DevilWithin
    The question I have may generate different opinions from different developers, but I'd still like to have an answer. It's all about the updating and rendering steps of the game loop, and their use in multi-threaded and single-threaded environments. Currently there is one thread running, which sequentially executes events, logic and rendering. Sometimes the logic part may wish to change the game state to something else, and in between load some files. The result is that the game hangs completely while loading, and then proceeds to normal rendering of the new state. To get around this, I could create another thread, do the loading there while the main thread renders a smooth loading animation, and then proceed normally. The real question is about what to do if I don't create another thread. I could refresh the screen from the logic code and provide a basic loading screen, which would update not so smoothly while the files load. In fact, this approach is not loved by a lot of developers, as it scrambles render code into the logic step, which may cause problems of different sorts. Hope it's clear!
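
    There is a middle ground between the two options that avoids both the second thread and render calls scattered through the logic: time-sliced loading. Cut the load into small tasks and, while in the loading state, let each iteration of the normal loop run tasks only until a time budget is spent, then render the loading screen as usual. A minimal sketch:

        #include <chrono>
        #include <deque>
        #include <functional>

        std::deque<std::function<void()>> loadTasks;  // e.g. one task per asset

        // Run queued load tasks for at most 'budgetMs' per frame, then return
        // so the loop can render a live loading screen.
        void pumpLoading(double budgetMs) {
            using clock = std::chrono::steady_clock;
            const auto start = clock::now();
            while (!loadTasks.empty()) {
                loadTasks.front()();   // one small piece of the load
                loadTasks.pop_front();
                std::chrono::duration<double, std::milli> spent =
                    clock::now() - start;
                if (spent.count() > budgetMs) break;
            }
        }

        // Loop while in the loading state (illustrative):
        //   pumpLoading(5.0);                     // ~5 ms of loading per frame
        //   drawLoadingScreen(loadTasks.size());  // tasks left -> progress bar
        //   present();

    The render code stays in the render step; the logic step merely queues work, which keeps the separation that a single-threaded engine loop is built around.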

    Read the article

  • android shut-down errors / thread problems

    - by iQue
    I'm starting to deal with some stuff in my game that I thought were "minor problems", and one of these is that I get an error every time I shut down my game that looks like this:

        09-05 21:40:58.320: E/AndroidRuntime(30401): FATAL EXCEPTION: Thread-4898
        09-05 21:40:58.320: E/AndroidRuntime(30401): java.lang.NullPointerException
        09-05 21:40:58.320: E/AndroidRuntime(30401): at nielsen.happy.shooter.MainGamePanel.render(MainGamePanel.java:94)
        09-05 21:40:58.320: E/AndroidRuntime(30401): at nielsen.happy.shooter.MainThread.run(MainThread.java:101)

    These lines are my GameView's rendering method and the line in my thread that calls it. I'm guessing something in my surfaceDestroyed(SurfaceHolder holder) is wrong, or maybe I'm not ending the thread in the right place. My surfaceDestroyed method:

        public void surfaceDestroyed(SurfaceHolder holder) {
            Log.d(TAG, "Surface is being destroyed");
            boolean retry = true;
            ((MainThread)thread).setRunning(false);
            while (retry) {
                try {
                    ((MainThread)thread).join();
                    retry = false;
                } catch (InterruptedException e) {
                }
            }
            Log.d(TAG, "Thread was shut down cleanly");
        }

    Also, in my activity for this view, onPause, onDestroy and onStop are empty. Do I maybe need to add something there? The crash occurs when I press the Home button on the phone, or any other button that makes the game stop. But as I understand it, onPause is called when you press the Home button. This is really new territory for me, so I'm sorry if I'm missing something obvious or something really big. Adding my surfaceCreated method as well, since that is where I start this thread:

        public void surfaceCreated(SurfaceHolder holder) {
            controls = new GameControls(this);
            setOnTouchListener(controls);
            timer1.schedule(new Task(this), 0, delay1);
            thread.setRunning(true);
            thread.start();
        }

    And as long as this is running, my game is rendering and updating.

    Read the article

  • Partial Rendering with Update Progress Bar Using AJAX and jQuery

    This article shows how to display an update progress bar during partial page rendering using AJAX and jQuery. It also covers writing data to an XML file.

    Read the article

  • Strange rendering in XNA/Monogame

    - by Gerhman
    I am trying to render G-code generated for a 3D printer as the printed product, by reading the file as line segments and then drawing cylinders with the diameter of the filament around each segment. I think I have managed to do this part right, because the vertices I am sending to the graphics device appear to have been processed correctly. My problem, I think, lies somewhere in the rendering. What basically happens is that when I start rotating my model around the X or Y axis, it renders perfectly for half of the rotation, but for the other half it has this weird effect where you start seeing through the outer filament into some of the shapes inside. This effect is strongest with X rotations. Here is a picture of the part of the rotation that looks correct: And here is one that looks horrible: I am still quite new to XNA/MonoGame and 3D programming as a whole. I have no idea what could possibly be causing this, and even less of an idea of what this type of behaviour is called. I am guessing it has something to do with rendering, so I have added the code for that part:

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.Black);
            basicEffect.World = world;
            basicEffect.View = view;
            basicEffect.Projection = projection;
            basicEffect.VertexColorEnabled = true;
            basicEffect.EnableDefaultLighting();
            GraphicsDevice.SetVertexBuffer(vertexBuffer);
            RasterizerState rasterizerState = new RasterizerState();
            rasterizerState.CullMode = CullMode.CullClockwiseFace;
            rasterizerState.ScissorTestEnable = true;
            GraphicsDevice.RasterizerState = rasterizerState;
            foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
            {
                pass.Apply();
                GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, vertexBuffer.VertexCount);
            }
            base.Draw(gameTime);
        }

    I don't know if it could be because I am shading something that does not really have a texture. I am using this custom vertex declaration I found in a tutorial, which allows me to store a vertex with a position, colour and normal:

        public struct VertexPositionColorNormal
        {
            public Vector3 Position;
            public Color Color;
            public Vector3 Normal;

            public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration
            (
                new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
                new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0),
                new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0)
            );
        }

    If any of you have ever seen this type of thing, please help. Also, if you think the problem might lie somewhere else in my code, just request the part you would like to see in the comments section.

    Read the article

  • Partial rendering control using JQuery in MVC 2

    This article shows a web custom control that allows partial rendering using jQuery in an MVC 2 web application.

    Read the article

  • Google I/O 2012 - YouTube API + Cloud Rendering = Happy Mobile Gamers

    Google I/O 2012 - YouTube API + Cloud Rendering = Happy Mobile Gamers. Jarek Wilkiewicz, Danny Hermes. YouTube is one of the top destinations for gamers. Many console developers already incorporate video recording and uploading directly into their titles, but uploading to YouTube from a mobile game presents a unique set of challenges. Come and learn how the YouTube API combined with cloud computing can help enable video uploads in your mobile game. For all I/O 2012 sessions, go to developers.google.com. (Session video: 57:05.)

    Read the article
