Search Results

Search found 37285 results on 1492 pages for 'text rendering'.


  • Transparency in XNA-4 primitives

    - by Shashwat
    I'm using XNA 4 with Visual Studio 2010. I'm trying to create a simple 3D world with walls and doors in which the user is free to roam around. A wall is just a rectangle, currently rendered from four vertices as a triangle strip. But to create a door, I'd have to split the wall into three rectangles (as shown in the figure in the original post), or four quadrilaterals for the other door style shown there. It becomes even more complex with multiple doors on the same wall, or with windows. Is there any shorter way to handle this? I am looking for something that will just make the wall transparent wherever I want. I found a solution, but I'm facing a problem with it.
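
    The idea being asked about, sketched here in C++/OpenGL terms (the question itself is XNA 4, where BlendState.AlphaBlend plays the same role): draw the wall as a single quad and let the texture's alpha channel cut out doors and windows, instead of splitting the geometry. The texture handle and draw helper below are illustrative, not from the post.

        // Assumes a wall texture whose door/window texels have alpha = 0.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glBindTexture(GL_TEXTURE_2D, wallTexture); // hypothetical texture handle
        drawWallQuad();                            // the same four vertices as before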

    Read the article

  • Read All Text from Textfile with Encoding in Windows RT

    - by jdanforth
    A simple extension method for reading all the text from a text file in WinRT with a specific encoding, written as an extension to StorageFile:

        using System.Runtime.InteropServices.WindowsRuntime; // for IBuffer.ToArray()
        using System.Text;
        using System.Threading.Tasks;
        using Windows.Storage;

        public static class StorageFileExtensions
        {
            public static async Task<string> ReadAllTextAsync(this StorageFile storageFile)
            {
                var buffer = await FileIO.ReadBufferAsync(storageFile);
                var fileData = buffer.ToArray();
                var encoding = Encoding.GetEncoding("Windows-1252");
                var text = encoding.GetString(fileData, 0, fileData.Length);
                return text;
            }
        }

    Read the article

  • Game thread, render thread, animation/inverse kinematics, and synchronization

    - by user782220
    In a multithreaded setup with a game logic thread and a render thread, plus some kind of skinned mesh animation with inverse kinematics, how does animation work? Does the game logic thread just update a number saying we are at time T in the animation, leaving the render thread to infer the pose? Who owns the skinned mesh animation, the game logic thread or the render thread? How is it stored in the scene graph, if it is stored there at all? When the game logic updates, does it compute the skinned mesh animation and the inverse kinematics and store the result directly in the scene graph, or is it stored indirectly, with the render thread doing the computation?
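
    One common arrangement, sketched below with assumed names (an illustration, not from the post): the game logic thread owns the animation and IK state and publishes an immutable pose snapshot each tick; the render thread only ever reads the most recent complete snapshot, so neither thread blocks the other on the scene graph.

        #include <memory>
        #include <vector>

        struct Pose {
            std::vector<float> boneMatrices; // 16 floats per bone, final skinning data
        };

        class PoseChannel {
        public:
            // Game logic thread: publish a finished, immutable snapshot.
            void publish(std::shared_ptr<const Pose> pose) {
                std::atomic_store(&latest_, std::move(pose));
            }
            // Render thread: grab whatever snapshot is newest right now.
            std::shared_ptr<const Pose> latest() const {
                return std::atomic_load(&latest_);
            }
        private:
            std::shared_ptr<const Pose> latest_;
        };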

    Read the article

  • How do I make urxvt render xft fonts?

    - by wishi
    I wonder whether there's a way to make urxvt render Xft fonts:

        URxvt.font:           xft:Droid Sans Mono Slashed:pixelsize=9:Regular
        URxvt.boldFont:       xft:Droid Sans Mono Slashed:pixelsize=9:Bold
        URxvt.italicFont:     xft:Droid Sans Mono Slashed:pixelsize=9:Italic
        URxvt.boldItalicFont: xft:Droid Sans Mono Slashed:pixelsize=9:Bold:Italic

    If I try this, I get something like the screenshot in the original post - it scales pretty badly. My Xft settings are:

        ! Fonts
        Xft.dpi:       132
        Xft.antialias: true
        Xft.rgba:      rgb
        Xft.hinting:   true
        Xft.autohint:  true
        Xft.hintstyle: hintfull

    I'm not sure whether this is one of the reasons. In any case, I want antialiasing and that Droid font. Is there any trick here?

    Read the article

  • Text Editor with SSH/Terminal/FTP/PuTTY combo for developing in Rails on Windows

    - by Panoy
    I plan to learn Ruby on Rails and would like to code on my development box, which runs Windows XP. I have Ubuntu Server (forgot the version ;p) running as my web server with Rails installed on it. I have been considering Vim as my text editor of choice on XP, but would like to hear about any text editor and accompanying shell/FTP/PuTTY/SSH (or whatever you may call it) program that can access the files on my Ubuntu server. Ideally the shell could be called from, or is bundled inside, the text editor. I would like to know your combinations (text editor + shell) and your experiences developing Rails projects with them. Cheers!

    Read the article

  • Updating scene graph in multithreaded game

    - by user782220
    In a game with a render thread and a game logic thread, the game logic thread needs to update the scene graph used by the render thread. I've read about ideas such as a queue of updates. Can someone describe, to a newbie at scene graphs, what kind of interface a scene graph exports? Presumably it is rather complicated. How, then, does a queue of updates get implemented in C++ in a way that can handle the complexity of that interface while also being type-safe and efficient? Again, I'm a newbie at scene graphs and C++.
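
    A minimal sketch of the queue-of-updates idea, with assumed names (an illustration, not a known library): rather than mirroring the scene graph's whole interface in message types, the game logic thread enqueues closures that perform the mutation. That sidesteps most of the type-safety problem, because each closure is type-checked where it is written.

        #include <functional>
        #include <mutex>
        #include <vector>

        class SceneGraph; // the render thread's structure, whatever it exports

        class UpdateQueue {
        public:
            // Game logic thread: queue a mutation to run later.
            void push(std::function<void(SceneGraph&)> update) {
                std::lock_guard<std::mutex> lock(mutex_);
                pending_.push_back(std::move(update));
            }
            // Render thread: apply everything queued so far, at a safe point.
            void drain(SceneGraph& graph) {
                std::vector<std::function<void(SceneGraph&)>> local;
                {
                    std::lock_guard<std::mutex> lock(mutex_);
                    local.swap(pending_);
                }
                for (auto& update : local)
                    update(graph);
            }
        private:
            std::mutex mutex_;
            std::vector<std::function<void(SceneGraph&)>> pending_;
        };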

    Read the article

  • How to select text in Pico when I don't have a caret character?

    - by Andrew Swift
    I am using Pico via Terminal/SSH on an iPad with a French Bluetooth keyboard. There is no way to type the caret (^) key to select text (Ctrl-^ starts the selection). The caret on this keyboard is used to type a circumflex accent (as in août), and only becomes active when one types a letter after pressing it. There is therefore no way to type Ctrl-^. When I do type Ctrl-^ in Pico, the cursor moves to the previous line. Is there an alternate way to select text in Pico? I can't see how to use it without finding a solution.

    Read the article

  • Is there any heuristic to polygonize a closed 2D raster shape with n triangles?

    - by Arthur Wulf White
    Let's say we have a 2D image, black on white, that shows a closed geometric shape. Is there any algorithm (not naive brute force) that approximates that shape as closely as possible with n triangles? If you want a formal definition of "as closely as possible": approximate the shape with a polygon that, when rendered into a new 2D image, matches as many pixels of the original image as possible.

    Read the article

  • building a game for different-resolution phones

    - by Jason
    Hi, I am starting some tests for building a game on the Android platform. So far everything is working and seems nice. However, I do not understand how to make sure my game looks correct on all phones, as they all have slightly different screen ratios (and some odd phones are very different). What I am doing right now is making a view frustum (could also be ortho) which goes from -ratio to +ratio (as I have seen in many examples), but this causes my test shape to be stretched and sometimes cut off by the edge of the screen. I am tilting my phone to landscape for my tests (a bit extreme), but it should still render correctly if I have done things right. Should I be scaling by some ratio before drawing, or something? An example would be greatly appreciated. PS: I am doing a 2D game.
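
    A sketch of the usual fix (illustrative, assuming OpenGL ES 1.x and a fixed virtual height): derive the horizontal extent from the real aspect ratio whenever the surface size changes, so shapes keep their proportions and only the visible width varies between phones.

        #include <GLES/gl.h>

        // VIRTUAL_HEIGHT is an assumed constant the game world is authored
        // against (e.g. the world is always 10 units tall on screen).
        void setProjection(int screenWidth, int screenHeight) {
            const float VIRTUAL_HEIGHT = 10.0f;
            float aspect = (float)screenWidth / (float)screenHeight;
            float halfH  = VIRTUAL_HEIGHT * 0.5f;
            float halfW  = halfH * aspect; // width follows the real ratio
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrthof(-halfW, halfW, -halfH, halfH, -1.0f, 1.0f);
            glMatrixMode(GL_MODELVIEW);
        }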

    Read the article

  • Light shaped like a line

    - by Michael
    I am trying to figure out how line-shaped lights fit into the standard point light/spotlight/directional light scheme. The way I see it, there are two options:

    1. Seed the line with regular point lights and just deal with the artifacts. Easy, but seems wasteful.
    2. Make the line some kind of emissive material and apply a bloom effect. Sounds like it could work, but I can't test it in my engine yet.

    Is there a standard way to do this? Or for non-point lights in general?
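
    A third option worth knowing about (a sketch under assumed types, not from the post): treat the line as a thin tube and light each shaded point from the closest point on the segment, reusing the existing point-light falloff from that point.

        #include <algorithm>

        struct Vec3 { float x, y, z; };
        static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static Vec3  add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
        static Vec3  mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
        static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        // Closest point on segment [a, b] to the shaded point p; feed the
        // result into the ordinary point-light attenuation/shading code.
        Vec3 closestPointOnSegment(Vec3 a, Vec3 b, Vec3 p) {
            Vec3  ab = sub(b, a);
            float t  = std::clamp(dot(sub(p, a), ab) / dot(ab, ab), 0.0f, 1.0f);
            return add(a, mul(ab, t));
        }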

    Read the article

  • how to add a water effect to an image

    - by brainydexter
    This is what I am trying to achieve: a given image occupies, say, the top 3/4 of the screen; the remaining 1/4 is a reflection of it with some waves (a water effect) on it. I'm not sure how to do this, but here's my approach:

    1. Render the given texture to another texture, called the mirror texture (maybe FBOs can help me?).
    2. Invert the mirror texture (scale it by -1 along Y).
    3. Render the mirror texture starting at 3/4 of the screen height.
    4. Add some sense of noise to it, OR, using a pixel shader and time, put pixel.z = sin(time) to make it wavy.

    (Tech: C++/OpenGL/GLSL.) Is my approach correct? Is there a better way to do this? Also, can someone tell me whether using framebuffer objects would be the right thing here? Thanks
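
    Steps 1-3 above are exactly what a framebuffer object is for. A minimal render-to-texture sketch (sizes and names are illustrative; assumes a GL 3.0+ context with an extension loader already initialised):

        const int texWidth = 1024, texHeight = 1024; // illustrative size
        GLuint fbo = 0, mirrorTex = 0;

        glGenTextures(1, &mirrorTex);
        glBindTexture(GL_TEXTURE_2D, mirrorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texWidth, texHeight, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, mirrorTex, 0);
        // ... draw the image here; it lands in mirrorTex ...
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        // Then draw a quad over the bottom quarter of the screen, sampling
        // mirrorTex with V flipped and U offset by something like
        // sin(time + v * frequency) * amplitude in the fragment shader.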

    Read the article

  • Spherical harmonics lighting - what does it accomplish?

    - by TravisG
    From my understanding, spherical harmonics are sometimes used to approximate certain aspects of lighting (depending on the application). For example, it seems you can approximate the diffuse lighting caused by a directional light source on a surface point, or parts of it, by calculating the SH coefficients for all the bands you're using (for whatever accuracy you desire) in the direction of the surface normal, and scaling the result by whatever you need to scale it by (e.g. the light's colored intensity, dot(n,l), etc.). What I don't understand yet is what this is supposed to accomplish. What are the actual advantages of doing it this way, as opposed to evaluating the diffuse BRDF the normal way? Do you save calculations somewhere? Is there some additional information contained in the SH representation that you can't get out of the scalar results of the normal evaluation?

    Read the article

  • Can't remove JPanel from JFrame while adding a new class into it

    - by A.K.
    Basically, I have my Frame class, which instantiates all the properties for the JFrame and draws a JLabel with an image (my title screen). Then I made a separate JPanel with a start button on it, and a mouse listener that should let me remove these objects while adding in a new Board() class (which paints the main game). *Note: the JLabel is SEPARATE from the JPanel, but it still gets moved to the side by it. Problem: whenever I click the button, it only shows a little square of what I presume is my Board class trying to run. Code below for the Frame class:

        package OurPackage; //Made By A.K. 5/24/12
        //Contains Frame.

        import java.awt.BorderLayout;
        import java.awt.Color;
        import java.awt.Container;
        import java.awt.Dimension;
        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.GridBagLayout;
        import java.awt.GridLayout;
        import java.awt.Image;
        import java.awt.Rectangle;
        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import java.awt.event.KeyEvent;
        import java.awt.event.MouseAdapter;
        import java.awt.event.MouseEvent;
        import java.awt.event.MouseListener;
        import javax.swing.*;
        import javax.swing.plaf.basic.BasicOptionPaneUI.ButtonActionListener;

        public class Frame implements MouseListener {
            public static boolean StartGame = false;
            ImageIcon img = new ImageIcon(getClass().getResource("/Images/ActionJackTitle.png"));
            ImageIcon StartImg = new ImageIcon(getClass().getResource("/Images/JackStart.png"));
            public Image Title;
            JLabel TitleL = new JLabel(img);
            public JPanel panel = new JPanel();
            JButton StartB = new JButton(StartImg);
            JFrame frm = new JFrame("Action-Packed Jack");

            public Frame() {
                TitleL.setPreferredSize(new Dimension(1200, 420));
                frm.add(TitleL);
                frm.setLayout(new GridBagLayout());
                frm.add(panel);
                panel.setSize(new Dimension(220, 45));
                panel.setLayout(new GridBagLayout());
                panel.add(StartB);
                StartB.addMouseListener(this);
                StartB.setPreferredSize(new Dimension(220, 45));
                frm.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frm.setSize(1200, 420);
                frm.setVisible(true);
                frm.setResizable(false);
                frm.setLocationRelativeTo(null);
            }

            public static void main(String[] args) {
                new Frame();
            }

            public void mouseClicked(MouseEvent e) {
                StartB.setContentAreaFilled(false);
                panel.remove(StartB);
                frm.remove(panel);
                frm.remove(TitleL);
                //frm.setLayout(null);
                frm.add(new Board()); //Add Game "Tiles" Or Content. x = 1200
                frm.validate();
                System.out.println("Hit!");
            }

            @Override
            public void mouseEntered(MouseEvent arg0) {
                // TODO Auto-generated method stub
            }

            @Override
            public void mouseExited(MouseEvent arg0) {
                // TODO Auto-generated method stub
            }

            @Override
            public void mousePressed(MouseEvent arg0) {
                // TODO Auto-generated method stub
            }

            @Override
            public void mouseReleased(MouseEvent arg0) {
                // TODO Auto-generated method stub
            }
        }

    Read the article

  • Speed up lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use three G-buffers, storing position (R32F), normal (G16R16F), and albedo (ARGB8). I use the sphere map algorithm to store normals in world space. Currently I use the inverse of the view * projection matrix to calculate the position of each pixel from the stored depth value. I want two things:

    1. To avoid the per-pixel matrix multiplication for calculating the position. Is there another way to store and calculate position in the G-buffer without needing a matrix multiplication?
    2. To store the normal in view space. All lighting in my engine is in world space, and I want to do the lighting in view space to speed up my lighting pass.

    In short, I want an optimized lighting pass for my deferred engine.

    Read the article

  • Dynamic Terrain Texture

    - by lgrevenl
    I've been looking at a 2D physics game called 'Hill Climb Racing' (Android and iOS) and was wondering how they went about texturing the terrain. I've had a think about it and come up with nothing, and finding a resource on the web has proved impossible. Please help. The game mentioned uses Cocos2d; would it be just as doable in a different environment? EDIT: I was looking at another question, "Drawing large 2D sidescroller level terrain". The end result there is what I'm looking for, but I was thinking there would be some way to add this effect (using small textures) to terrain specified by vertices, rather than making a very large image to match whatever is seen in the level.

    Read the article

  • Essence of Anchor Text

    It is important to use anchor text in order to improve search engine ranking. Anchor text is directly correlated with inbound links. If you are leaving comments on blogs or submitting articles with links, use anchor text and not just the bare URL.

    Read the article

  • OpenGL ES 2.0 texture distortion on large geometry

    - by Spruce
    OpenGL ES 2.0 has serious precision issues with texture sampling. I've seen topics with a similar problem, but I haven't seen a real solution to this "distorted OpenGL ES 2.0 texture" problem yet. It is not related to the texture's image format or OpenGL color buffers; it seems like a precision error, though I don't know what specifically causes the precision to fail. Examples:

    - Distorted texture (on OpenGL ES 2.0): http://i47.tinypic.com/3322h6d.png
    - What the texture normally looks like (also on OpenGL ES 2.0): http://i49.tinypic.com/b4jc6c.png

    What I have observed and tried:

    - The issue is limited to small-scale geometry on OpenGL ES 2.0; otherwise the texture sampling appears normal, but the grainy effect gradually worsens the further the vertex data is from the origin XYZ(0,0,0).
    - The issue does not occur on desktop OpenGL (it works fine under Windows XP, Windows 7, and Mac OS X). I've only seen it on Android, iPhone, and WebGL (which is similar to OpenGL ES 2.0).
    - All textures are powers of 2, but the problem still occurs.
    - Scaling the vertex data: vertex X, Y, Z positions are in the range -65536 to +65536 floating point. I realized this was large, so I tried dividing the vertex positions by 1024 to shrink the geometry and hopefully get more accurate floating-point precision, but this didn't fix or lessen the distortion. So it doesn't seem to be just the size of the geometry; simply scaling the vertex position passed to the vertex shader does not solve the issue.
    - Scaling the modelview or the projection matrix does not help.
    - Changing texture filtering options does not help: disabling mipmapping or using GL_NEAREST/GL_LINEAR does nothing, and enabling/disabling anisotropic filtering does nothing.
    - The banding effect still occurs even when using GL_CLAMP.
    - Dividing the texture coords passed to the vertex shader and multiplying them back to the correct values in the fragment shader also does not work.
    - precision highp sampler2D, highp float, highp int in the fragment or the vertex shader didn't change anything (lowp/mediump did not work either).

    I'm thinking this problem must have been solved at some point, seeing that OpenGL ES 2.0-based games have been able to render large-scale, highly detailed geometry.

    Read the article

  • Drawing an outline around an arbitrary group of hexagons

    - by Perky
    Is there an algorithm for drawing an outline around an arbitrary group of hexagons? The polygon outline drawn may be concave. See the images in the original post; the green line is what I am trying to achieve. The hexagons are stored as vertices and drawn as polygons. Edit: I've uploaded images that should explain more. I want to favour convex hulls because they convey an area of control more quickly. Each hexagon is stored in a multidimensional array, so they all have x and y coordinates; I can easily find adjacent hexagons and the opposite vertex, i.e. adjacentHexagon = getAdjacentHexagon( someHexagon, NORTHWEST ). If there isn't a hexagon immediately adjacent, it will continue to search in that direction until it finds one or hits the map edges.
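
    One workable approach, sketched with assumed coordinates (axial (q, r), pointy-top; not from the post): collect every hexagon edge whose neighbour is outside the group; those edges, chained corner to corner, form the outline, which may be concave and may even be several loops.

        #include <set>
        #include <utility>
        #include <vector>

        using Hex = std::pair<int, int>; // axial (q, r) - an assumption

        // Neighbour offsets for the six sides of a pointy-top hexagon.
        static const int DQ[6] = { 1, 1, 0, -1, -1, 0 };
        static const int DR[6] = { 0, -1, -1, 0, 1, 1 };

        // Returns (hex, side) pairs for every border edge of the group.
        std::vector<std::pair<Hex, int>> borderEdges(const std::set<Hex>& group) {
            std::vector<std::pair<Hex, int>> edges;
            for (const Hex& h : group)
                for (int side = 0; side < 6; ++side) {
                    Hex n{h.first + DQ[side], h.second + DR[side]};
                    if (group.count(n) == 0)   // neighbour outside the group
                        edges.push_back({h, side});
                }
            return edges;
        }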

    Read the article

  • Skip the first RenderTarget when writing to MRT with Opaque blending

    - by cubrman
    I am writing to three render targets and want to know how to tell the GPU not to write to the first RT. When you write a shader, you can simply output less data than you have RTs (like outputting a single float4 when writing to three RTs) and only the first RTs will be affected, but you cannot specify that this data should go anywhere other than COLOR0, then 1, etc. Is there a way to write to several RTs but skip the first target? If I output zeroes, the data in the target becomes zeroes; I need it to remain untouched in the first target and only change in the specified ones. The reason I need this is to prevent data loss when calling SetRenderTarget() with DiscardContents RTs: I write to all the RTs at one point, and afterwards I need to write to only the specified ones. It must be the first texture, as I have a depth buffer linked to it (XNA 4.0). Thanks.

    Read the article

  • Low-level game engine renderer design

    - by Mark Ingram
    I'm piecing together the beginnings of an extremely basic engine which will let me draw arbitrary objects (SceneObject). I've got to the point where I'm creating a few sensible-sounding classes, but as this is my first outing into game engines, I've got the feeling I'm overlooking things. I'm familiar with compartmentalising larger portions of the code so that individual sub-systems don't overly interact with each other, but I'm thinking more of the low-level stuff, starting from vertices and working up. So if I have a Vertex class, I can combine that with a list of indices to make a Mesh class. How does the engine determine identical meshes for objects, or is that left to the level designer? Once we have a Mesh, it can be contained in the SceneObject class, and a list of SceneObjects can be placed into the Scene to be drawn. Right now I'm only using OpenGL, but I'm aware that I don't want to be tying OpenGL calls right into base classes (such as updating the vertices in the Mesh; I don't want to be calling glBufferData etc.). Are there any good resources that discuss these issues? Are there any common hierarchies which should be used?
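
    A minimal sketch of the hierarchy described above (names are illustrative, not from the post). The API-specific buffer handle lives behind an opaque member that only the renderer backend fills in, so the scene-side classes never call OpenGL directly; sharing meshes by pointer is one simple answer to the "identical meshes" question.

        #include <cstdint>
        #include <memory>
        #include <vector>

        struct Vertex {
            float position[3];
            float normal[3];
            float uv[2];
        };

        struct Mesh {
            std::vector<Vertex>   vertices;
            std::vector<uint32_t> indices;
            uintptr_t gpuHandle = 0; // set by the backend (e.g. a VBO id)
        };

        struct SceneObject {
            std::shared_ptr<Mesh> mesh; // shared, so identical meshes are reused
            float transform[16];        // local-to-world matrix
        };

        struct Scene {
            std::vector<SceneObject> objects;
        };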

    Read the article

  • Is there a way to prevent users from adjusting their gamma correction to 'cheat' their way out of a 'dark' area?

    - by Athix
    In almost every game I've come across that includes a dark situation designed to change the way a user interacts with the environment, there are always some players who turn up their monitor's gamma correction in order to negate the desired effect. Is there a way to prevent users from adjusting their gamma correction to 'cheat' their way out of a challenge? (the darkness) I'd imagine if you could reliably retrieve the current gamma correction of the user's monitor, you could use that to more or less prevent the advantage it would otherwise grant without causing the normal users any inconvenience.

    Read the article

  • IDirect3DDevice9Ex and D3DPOOL_MANAGED?

    - by bluescrn
    So I wanted to switch to IDirect3DDevice9Ex, purely for the SetMaximumFrameLatency function, as fullscreen vsynced D3D seemed to produce noticeable input lag. But then it tells me 'ha ha ha! now you can't use D3DPOOL_MANAGED!':

        Direct3D9: (ERROR) :D3DPOOL_MANAGED is not valid with IDirect3DDevice9Ex

    Is this really as unpleasant as it looks (when you're relying quite heavily on managed resources), or is there a simple solution? If it really does mean manual management of everything (reloading all static textures, VBs, and IBs on a device reset), is it worth the hassle? Will IDirect3DDevice9Ex bring enough benefit to make it worth writing a new resource manager? I'm starting to think I must be doing something wrong, because of this:

        Direct3D9: (ERROR) :Lock is not supported for textures allocated with POOL_DEFAULT unless they are marked D3DUSAGE_DYNAMIC.

    So if I put my (static) textures in POOL_DEFAULT, do they need flagging as D3DUSAGE_DYNAMIC just because I lock them once to load the data in?
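
    One common answer (a sketch with illustrative names, not from the post): on an Ex device, load static textures through a D3DPOOL_SYSTEMMEM staging texture and copy into an unlockable D3DPOOL_DEFAULT texture with UpdateTexture, rather than flagging everything D3DUSAGE_DYNAMIC.

        // Assumes 'device' is a valid IDirect3DDevice9Ex* and w/h are known.
        IDirect3DTexture9* staging = nullptr;
        IDirect3DTexture9* gpuTex  = nullptr;
        device->CreateTexture(w, h, 1, 0, D3DFMT_A8R8G8B8,
                              D3DPOOL_SYSTEMMEM, &staging, nullptr);
        device->CreateTexture(w, h, 1, 0, D3DFMT_A8R8G8B8,
                              D3DPOOL_DEFAULT, &gpuTex, nullptr);

        D3DLOCKED_RECT rect;
        staging->LockRect(0, &rect, nullptr, 0);
        // ... copy image rows into rect.pBits, honouring rect.Pitch ...
        staging->UnlockRect(0);

        device->UpdateTexture(staging, gpuTex); // SYSTEMMEM -> DEFAULT copy
        staging->Release();                     // keep only gpuTex around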

    Read the article

  • How is the terrain generated in Commandos and Commandos game clones/look-alikes?

    - by teodron
    The Commandos series of games, and its similar Wild West-themed counterpart Desperados, use a mix of 2D and 3D elements to achieve a very pleasing and immersive atmosphere. Apart from the concept, which alone made the series a best-seller, the graphical eye candy was also a much-appreciated asset of the games. I am very curious about the technique used to model and adorn the realistic terrains in those titles. Some screenshots that could serve as a reference for whoever has a candidate answer are in the original article. The tiny details and patternless distribution of ornamental textures make me think that these terrains were not generated using a standard heightmap-blendmap method.

    Read the article

  • backface culling error (in world space)

    - by acrilige
    I'm writing a simple software renderer. My pipeline has a backface culling stage, but it looks like it has some error (see the picture linked below). I perform culling right after the world transformation - is that correct? (I can't embed the picture in the post because I don't have enough reputation points, so I've uploaded it instead; cube model: http://imageshack.us/photo/my-images/705/bcerror.png/)

        Vector3F view_dir(0.0f, 0.0f, 1.0f);
        std::vector<Triangle> to_remove;

        for (Triangle &t : m_triangles) {
            Vector4F e1 = t.v2 - t.v1;
            Vector4F e2 = t.v3 - t.v1;
            Vector3F normal(
                e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x
            );
            normal.Normalize();
            float dot = Dot(view_dir, normal);
            if (dot <= 0)
                to_remove.push_back(t);
        }

        for (Triangle& t : to_remove)
            m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t),
                              m_triangles.end());

    The camera sits at the origin and points into the screen (right-handed). What is the reason? For a better explanation I've uploaded a picture with screenshots of the cube rotating: http://imageshack.us/photo/my-images/842/bcmove.png/

    UPDATED: The error occurs only when the triangle has a non-zero offset from the origin.

    UPDATED 2: If I do backface culling in clip space (after transforming all vertices with the view and projection matrices) and just check the z coordinate of the triangle normal, it works perfectly... Can I perform culling RIGHT BEFORE the view/projection transforms? In that case it looks like culling would not depend on the projection, and that's not right?..

    UPDATED 3: I found the answer and will post it in two hours - again, because of my lack of reputation.
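
    For what it's worth, the classic cause of exactly this symptom (a hedged note, since the poster's own answer isn't shown here): in world space the view direction is not a constant. The test has to use the vector from the camera position to the triangle, which makes it position-dependent; a fixed view_dir only works after the view transform. A drop-in variant of the loop body above, with cam_pos as the camera's world position:

        Vector3F to_tri(t.v1.x - cam_pos.x,
                        t.v1.y - cam_pos.y,
                        t.v1.z - cam_pos.z);
        float facing = Dot(to_tri, normal); // normal computed as in the post
        if (facing <= 0)                    // same sign convention as above;
            to_remove.push_back(t);         // flip it if your winding differs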

    Read the article
