Search Results

Search found 10050 results on 402 pages for 'graphics card'.


  • How do you author HDR content?

    - by Nathan Reed
    How do you make it easy for your artists to author content for an HDR renderer? What kinds of tools should you provide, and what workflows need to change, in going from LDR to HDR? Note that I'm not asking about the technical aspects of implementing an HDR renderer, but about best practices for creating materials and lighting in HDR. I've googled around a bit, but there doesn't seem to be much about this topic on the web. Can anyone point me to some good resources on this, or share their own experiences? Some specific points: Lighting - how can lighting artists pick HDR light colors? Do they have a standard LDR color picker and then a multiplier? Is the multiplier in gamma or linear space? Maybe instead of a multiplier it's a log-luminance? Or a physical brightness level, like the number of lumens? How will they know what multiplier/luminance/brightness is "correct" for a given light? Materials - how can texture artists make emissive color maps, such as neon signs, TV screens, skyboxes, etc? Can you paint one as a regular LDR (8-bit-per-channel) image and apply a multiplier (or log-luminance, etc.)? Are there cases where it's necessary to actually paint HDR images? If so, how do you go about this in Photoshop (or other software)?
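
    As a concrete illustration of the "LDR picker plus multiplier" workflow asked about above, here is a minimal C++ sketch; the helper names and the EV-stops parameterisation are my own assumptions, not from the question. The artist picks an ordinary 8-bit sRGB colour and an intensity in stops, and the multiplier is applied after converting to linear space.

        #include <cmath>

        struct Color { float r, g, b; };

        // sRGB -> linear for one channel (standard piecewise formula).
        static float SrgbToLinear(float c)
        {
            return (c <= 0.04045f) ? c / 12.92f
                                   : std::pow((c + 0.055f) / 1.055f, 2.4f);
        }

        // Hypothetical helper: combine an 8-bit-per-channel picker colour with an
        // artist-facing intensity in EV stops to get a linear-space HDR light colour.
        Color HdrLightColor(unsigned char r8, unsigned char g8, unsigned char b8, float evStops)
        {
            const float scale = std::exp2(evStops);   // the multiplier acts in linear space
            Color c;
            c.r = SrgbToLinear(r8 / 255.0f) * scale;
            c.g = SrgbToLinear(g8 / 255.0f) * scale;
            c.b = SrgbToLinear(b8 / 255.0f) * scale;
            return c;
        }

    Exposing intensity in stops keeps the slider roughly perceptual; a lumen-style input would simply replace the exp2 with a photometric conversion.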

    Read the article

  • How to Export Flash Animation Data

    - by charliep
    I'd love for my partner, the artist, to be able to animate using Flash movieclips and timelines. Then I, the programmer, would like to read the raw Flash info and re-program it into my engine of choice (which happens to be Torque2D). The data I'd want is: the bitmap images that were used in Flash, like the head and body; the links between the images, like where the head connects to the body; and the motion data from the Flash animation, like move, rotate (at what speed), shear, etc. for the head or arms or whatever. Is there any way to get this data? Here's what I know so far. There are tools like SWFSheet and Spriteloq that convert the entire Flash animation into a frame-by-frame sprite animation (in a sprite sheet). This would take too much space in my case, so I'd like to avoid that. Re-animating on the fly would take much less texture memory. There is a PDF that describes the SWF file format but NOT the individual components like the movieclips. So does anyone know of a library I can use, or how I can learn more about the movieclip components and whatnot? (more better tags: transform, export, convert)

    Read the article

  • Screen tearing oddity when one window is on top of another with Radeon

    - by Kevin Qiu
    I have a Mobility Radeon 5850 on Ubuntu 11.04, running the Catalyst 11.10 drivers. I noticed that whenever I have two windows open, one on top of the other, scrolling results in minor screen tearing, usually around the border of the window on the bottom. When I minimize or close the bottom window, the tearing goes away. I have Tear Free Desktop enabled and tried looking for this specific issue, but couldn't find anyone else with it. It happens in both Unity and Classic. Is there any fix for this?

    Read the article

  • Orthographic Projection Issue

    - by Nick
    I have a problem with my ortho matrix. The engine uses the perspective projection fine, but for some reason the ortho matrix is messed up (see screenshots below). Can anyone understand what is happening here? At the moment I am taking the projection matrix * transform (translate, rotate, scale) and passing it to the vertex shader to multiply the vertices by it. VIDEO: shows the same scene, rotating on the Y axis. http://youtu.be/2feiZAIM9Y0

        void Matrix4f::InitOrthoProjTransform(float left, float right, float top, float bottom, float zNear, float zFar)
        {
            m[0][0] = 2 / (right - left);  m[0][1] = 0;                   m[0][2] = 0;                    m[0][3] = 0;
            m[1][0] = 0;                   m[1][1] = 2 / (top - bottom);  m[1][2] = 0;                    m[1][3] = 0;
            m[2][0] = 0;                   m[2][1] = 0;                   m[2][2] = -1 / (zFar - zNear);  m[2][3] = 0;
            m[3][0] = -(right + left) / (right - left);
            m[3][1] = -(top + bottom) / (top - bottom);
            m[3][2] = -zNear / (zFar - zNear);
            m[3][3] = 1;
        }

    This is what happens with the ortho matrix: This is the perspective matrix:
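
    For comparison, a minimal sketch of the OpenGL-convention off-centre orthographic matrix (clip-space depth mapped to [-1, 1]), laid out with the same row order as the code above (translation in row 3). The function in the question maps depth to [0, 1], D3D-style, so which variant is right depends on how the rest of the pipeline expects depth and multiplication order; treat this purely as a reference point, not a confirmed fix. The function name is mine.

        // Fills a 4x4 array the same way as the question's code (translation in row 3).
        void OrthoProjGL(float m[4][4], float left, float right, float top, float bottom,
                         float zNear, float zFar)
        {
            m[0][0] = 2.0f / (right - left); m[0][1] = 0.0f; m[0][2] = 0.0f; m[0][3] = 0.0f;
            m[1][0] = 0.0f; m[1][1] = 2.0f / (top - bottom); m[1][2] = 0.0f; m[1][3] = 0.0f;
            m[2][0] = 0.0f; m[2][1] = 0.0f; m[2][2] = -2.0f / (zFar - zNear); m[2][3] = 0.0f;
            m[3][0] = -(right + left) / (right - left);
            m[3][1] = -(top + bottom) / (top - bottom);
            m[3][2] = -(zFar + zNear) / (zFar - zNear);
            m[3][3] = 1.0f;
        }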

    Read the article

  • How to create a "retro" pixel shader for transformed 2D sprites that maintains pixel fidelity?

    - by David Gouveia
    The image below shows two sprites rendered with point sampling on top of a background: The left skull has no rotation/scaling applied to it, so every pixel matches perfectly with the background. The right skull is rotated/scaled, and this results in larger pixels that are no longer axis-aligned. How could I develop a pixel shader that would render the transformed sprite on the right with axis-aligned pixels of the same size as the rest of the scene? This might be related to how sprite scaling was implemented in old games such as Monkey Island, because that's the effect I'm trying to achieve, but with rotation added. Edit: As per kaoD's suggestions, I tried to address the problem as a post-process. The easiest approach was to render to a separate render target first (downsampled to match the desired pixel size) and then upscale it when rendering a second time. It did address my requirements above. First I tried doing it Linear -> Point and the result was this: There's no distortion but the result looks blurred and it loses most of the highlight colors. In my opinion it breaks the retro look I needed. The second time I tried Point -> Point and the result was this: Despite the distortion, I think that might be good enough for my needs, although it does look better as a still image than in motion. To demonstrate, here's a video of the effect, although YouTube filtered the pixels out of it: http://youtu.be/hqokk58KFmI However, I'll leave the question open for a few more days in case someone comes up with a better sampling solution that maintains the crisp look while decreasing the amount of distortion when moving.

    Read the article

  • 3DS Max exporting too many vertexes for model

    - by Juan Pablo
    I have a sample model of a cube and a Buddha downloaded from the internet in 3ds format, which I can load correctly into my program and view without problems, but I wanted to try to create my own model. I created a simple box mesh in 3ds Max and exported it as .3ds (converted to mesh - export as .3ds). When inspecting the .3ds file with a hex viewer, I was expecting to see 8 vertexes and 12 faces declared (as in the model I downloaded from the internet). But what I found was that it listed 26 vertexes and 12 faces! And when I try to load that file with my .3ds viewer, my parser isn't detecting the face block (0x4120), which is strange because it worked for other objects downloaded from the internet. Do I have to set any special property in order to export a 3ds file with a minimum number of vertexes and a vertex-index list?
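
    For reference, exporters typically split a vertex once per UV seam or smoothing group, which is why a box can come out with 24 or more vertexes instead of 8. If the split attributes are not needed, positionally identical vertexes can be welded after loading; a minimal C++ sketch with made-up type and function names:

        #include <map>
        #include <tuple>
        #include <vector>

        struct Vec3 { float x, y, z; };

        // Rebuild the vertex/index lists so that vertexes sharing an exact position appear
        // only once. Note this throws away any per-vertex UV/normal splits the exporter made.
        void WeldVertices(const std::vector<Vec3>& inVerts,
                          const std::vector<unsigned short>& inIndices,
                          std::vector<Vec3>& outVerts,
                          std::vector<unsigned short>& outIndices)
        {
            std::map<std::tuple<float, float, float>, unsigned short> remap;
            outVerts.clear();
            outIndices.clear();
            for (unsigned short idx : inIndices)
            {
                const Vec3& v = inVerts[idx];
                auto key = std::make_tuple(v.x, v.y, v.z);
                auto it = remap.find(key);
                if (it == remap.end())
                {
                    it = remap.emplace(key, (unsigned short)outVerts.size()).first;
                    outVerts.push_back(v);
                }
                outIndices.push_back(it->second);
            }
        }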

    Read the article

  • Understanding normal maps on terrain

    - by JohnB
    I'm having trouble understanding some of the math behind normal map textures; even though I've got it to work using borrowed code, I want to understand it. I have a terrain based on a heightmap. I'm generating a mesh of triangles at load time and rendering that mesh. Now for each vertex I need to calculate a normal, a tangent, and a bitangent. My understanding is as follows; have I got this right? The normal is a unit vector facing outwards from the surface of the triangle. For a vertex I take the average of the normals of the triangles using that vertex. The tangent is a unit vector in the direction of the 'u' coordinates of the texture map. As my texture u,v coordinates follow the x and y coordinates of the terrain, my understanding is that this vector is simply the vector along the surface in the x direction. So I should be able to calculate this as simply the difference between vertices in the x direction (and normalize it). The bitangent is a unit vector in the direction of the 'v' coordinates of the texture map. As my texture u,v coordinates follow the x and y coordinates of the terrain, my understanding is that this vector is simply the vector along the surface in the y direction. So I should be able to calculate this as simply the difference between vertices in the y direction (and normalize it). However, the code I have borrowed seems much more complicated than this and takes into account the actual values of u and v at each vertex, which I don't understand the need for, as they increase in exactly the same direction as x and y. I implemented what I thought from the above, and it simply doesn't work; the normals are clearly not working for lighting. Have I misunderstood something? Or can someone explain to me the physical meaning of the tangent and bitangent vectors when applied to a mesh generated from a heightmap like this, where the u and v texture coordinates map along the x and y directions? Thanks for any help understanding this.
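
    The reason the borrowed code uses the actual u,v values is that the general tangent/bitangent construction solves edge = T*du + B*dv per triangle, which also handles scaled, flipped or rotated UVs; for a heightmap where u follows x and v follows y it degenerates to roughly the simple differences described above. A minimal C++ sketch of the standard per-triangle formula (helper names are mine); accumulate the results per vertex and normalise, just as with the normals:

        struct Vec3 { float x, y, z; };
        struct Vec2 { float u, v; };

        static Vec3 Sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
        static Vec3 Scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

        // Per-triangle tangent and bitangent from positions p[] and texture coordinates uv[].
        void TriangleTangentBasis(const Vec3 p[3], const Vec2 uv[3], Vec3& tangent, Vec3& bitangent)
        {
            Vec3 e1 = Sub(p[1], p[0]);
            Vec3 e2 = Sub(p[2], p[0]);
            float du1 = uv[1].u - uv[0].u, dv1 = uv[1].v - uv[0].v;
            float du2 = uv[2].u - uv[0].u, dv2 = uv[2].v - uv[0].v;

            float r = 1.0f / (du1 * dv2 - du2 * dv1);   // assumes non-degenerate UVs
            tangent   = Scale(Sub(Scale(e1, dv2), Scale(e2, dv1)), r);
            bitangent = Scale(Sub(Scale(e2, du1), Scale(e1, du2)), r);
        }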

    Read the article

  • Blending animations for more character movements

    - by Noob Saibot
    I am making a hack-n-slash 3rd-person game, and I want the character movements to be more dynamic, not like fighting games where you have a moves list. I want to animate tons of different animations and have them "tween" between each other, because I want the controls to be all keyboard rather than keyboard and mouse. That way you have up to 10 inputs (all your fingers) to blend and morph animations to create more fluid movements. In the end this will almost be like characters typing a phrase or string of keys, rather than move forward, mouse-look, click to melee. My question is: has anyone done this before, and how would someone go about trying to tween, let's say, one animation per key on the keyboard, excluding Tab, Caps, R+Shift, L+Shift, Enter, R+Ctrl, L+Ctrl, L+Alt, R+Alt, Windows Key, and Menu? So that's all the numbers, letters and punctuation keys. That's 46 keys, which gives 46! = 5502622159812088949850305428800254892961651752960000000000L possible orderings (used Python), and with a minimum entry of 2 keypresses it only shortens by about half. It is not humanly possible to create so many unique animations in one lifetime, but I'm guessing there is a reason this hasn't been done already. Or if I just used 10 basic keys, maybe ASDF + Space (right hand) and 4, 5, 6, +, 0 (left hand on the keypad), it would give me 3,628,800 possible unique animations.

    Read the article

  • How to label a cuboid?

    - by usha
    Hi this is how my 3dcuboid looks, I have attached the complete code. I want to label this cuboid using different names across sides, how is this possible using opengl on android? public class MyGLRenderer implements Renderer { Context context; Cuboid rect; private float mCubeRotation; // private static float angleCube = 0; // Rotational angle in degree for cube (NEW) // private static float speedCube = -1.5f; // Rotational speed for cube (NEW) public MyGLRenderer(Context context) { rect = new Cuboid(); this.context = context; } public void onDrawFrame(GL10 gl) { // TODO Auto-generated method stub gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); gl.glLoadIdentity(); // Reset the model-view matrix gl.glTranslatef(0.2f, 0.0f, -8.0f); // Translate right and into the screen gl.glScalef(0.8f, 0.8f, 0.8f); // Scale down (NEW) gl.glRotatef(mCubeRotation, 1.0f, 1.0f, 1.0f); // gl.glRotatef(angleCube, 1.0f, 1.0f, 1.0f); // rotate about the axis (1,1,1) (NEW) rect.draw(gl); mCubeRotation -= 0.15f; //angleCube += speedCube; } public void onSurfaceChanged(GL10 gl, int width, int height) { // TODO Auto-generated method stub if (height == 0) height = 1; // To prevent divide by zero float aspect = (float)width / height; // Set the viewport (display area) to cover the entire window gl.glViewport(0, 0, width, height); // Setup perspective projection, with aspect ratio matches viewport gl.glMatrixMode(GL10.GL_PROJECTION); // Select projection matrix gl.glLoadIdentity(); // Reset projection matrix // Use perspective projection GLU.gluPerspective(gl, 45, aspect, 0.1f, 100.f); gl.glMatrixMode(GL10.GL_MODELVIEW); // Select model-view matrix gl.glLoadIdentity(); // Reset } public void onSurfaceCreated(GL10 gl, EGLConfig config) { // TODO Auto-generated method stub gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // Set color's clear-value to black gl.glClearDepthf(1.0f); // Set depth's clear-value to farthest gl.glEnable(GL10.GL_DEPTH_TEST); // Enables depth-buffer for hidden surface removal gl.glDepthFunc(GL10.GL_LEQUAL); // The type of depth testing to do gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST); // nice perspective view gl.glShadeModel(GL10.GL_SMOOTH); // Enable smooth shading of color gl.glDisable(GL10.GL_DITHER); // Disable dithering for better performance }} public class Cuboid{ private FloatBuffer mVertexBuffer; private FloatBuffer mColorBuffer; private ByteBuffer mIndexBuffer; private float vertices[] = { //width,height,depth -2.5f, -1.0f, -1.0f, 1.0f, -1.0f, -1.0f, 1.0f, 1.0f, -1.0f, -2.5f, 1.0f, -1.0f, -2.5f, -1.0f, 1.0f, 1.0f, -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, -2.5f, 1.0f, 1.0f }; private float colors[] = { // R,G,B,A COLOR 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.5f, 0.0f, 1.0f, 1.0f, 0.5f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f }; private byte indices[] = { // VERTEX 0,1,2,3,4,5,6,7 REPRESENTATION FOR FACES 0, 4, 5, 0, 5, 1, 1, 5, 6, 1, 6, 2, 2, 6, 7, 2, 7, 3, 3, 7, 4, 3, 4, 0, 4, 7, 6, 4, 6, 5, 3, 0, 1, 3, 1, 2 }; public Cuboid() { ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4); byteBuf.order(ByteOrder.nativeOrder()); mVertexBuffer = byteBuf.asFloatBuffer(); mVertexBuffer.put(vertices); mVertexBuffer.position(0); byteBuf = ByteBuffer.allocateDirect(colors.length * 4); byteBuf.order(ByteOrder.nativeOrder()); mColorBuffer = byteBuf.asFloatBuffer(); mColorBuffer.put(colors); mColorBuffer.position(0); mIndexBuffer = ByteBuffer.allocateDirect(indices.length); 
mIndexBuffer.put(indices); mIndexBuffer.position(0); } public void draw(GL10 gl) { gl.glFrontFace(GL10.GL_CW); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer); gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_COLOR_ARRAY); gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_COLOR_ARRAY); } } public class Draw3drect extends Activity { private GLSurfaceView glView; // Use GLSurfaceView // Call back when the activity is started, to initialize the view @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); glView = new GLSurfaceView(this); // Allocate a GLSurfaceView glView.setRenderer(new MyGLRenderer(this)); // Use a custom renderer this.setContentView(glView); // This activity sets to GLSurfaceView } // Call back when the activity is going into the background @Override protected void onPause() { super.onPause(); glView.onPause(); } // Call back after onPause() @Override protected void onResume() { super.onResume(); glView.onResume(); } }

    Read the article

  • Ubuntu + Wacom Intuos 4 + MyPaint HELP!

    - by Sativa
    Please keep in mind I'm not that computer savvy, but I will try any suggestion so please help me out! My tablet will stop working if the USB connection is ever broken, or the Ubuntu software is being updated. Sometimes it will stop working for no reason that I can see. The lights will still be on, but it won't be responsive. It doesn't work again until I restart the laptop with the tablet plugged in, which is grating if you have to do it every 25 min. or so... I'm not sure if the issue is with the port, the tablet/cable or the driver but any suggestions would be very welcome! Also, MyPaint is frequently having hiccups. It seems to save fine but at times it will randomly close down and when I open files they're often empty. They turn into 0Kb files and only contain a single empty layer. Also very grating, considering I lose days of work for no clear reason and without any heads up. Again, I'm not sure if the issue is with the port, the tablet/cable or the driver but any suggestions would be very welcome! The error message reads; Traceback (most recent call last): File "/usr/share/mypaint/gui/application.py", line 177, at_application_start(*junk=()) else: self.filehandler.open_file(fn) variables: {'fn': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora'), 'self.filehandler.open_file': ('local', <bound method FileHandler.wrapper of <gui.filehandling.FileHandler object at 0x7fdb89063a10>>)} File "/usr/share/mypaint/gui/drawwindow.py", line 60, wrapper(self=<gui.filehandling.FileHandler object>, *args=(u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora',), **kwargs={}) try: func(self, *args, **kwargs) # gtk main loop may be called in here... variables: {'self': ('local', <gui.filehandling.FileHandler object at 0x7fdb89063a10>), 'args': ('local', (u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora',)), 'func': ('local', <function open_file at 0x7fdb8b397b90>), 'kwargs': ('local', {})} File "/usr/share/mypaint/gui/filehandling.py", line 231, open_file(self=<gui.filehandling.FileHandler object>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora') try: self.doc.model.load(filename, feedback_cb=self.gtk_main_tick) except document.SaveLoadError, e: variables: {'self.doc.model.load': ('local', <bound method Document.load of <lib.document.Document instance at 0x7fdb8ab4f8c0>>), 'feedback_cb': (None, []), 'self.gtk_main_tick': ('local', <function gtk_main_tick at 0x7fdb8b397b18>), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora')} File "/usr/share/mypaint/lib/document.py", line 544, load(self=<lib.document.Document instance>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', **kwargs={'feedback_cb': <function gtk_main_tick>}) try: load(filename, **kwargs) except gobject.GError, e: variables: {'load': ('local', <bound method Document.load_ora of <lib.document.Document instance at 0x7fdb8ab4f8c0>>), 'kwargs': ('local', {'feedback_cb': <function gtk_main_tick at 0x7fdb8b397b18>}), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora')} File "/usr/share/mypaint/lib/document.py", line 772, load_ora(self=<lib.document.Document instance>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', feedback_cb=<function gtk_main_tick>) tempdir = tempdir.decode(sys.getfilesystemencoding()) z = zipfile.ZipFile(filename) print 'mimetype:', z.read('mimetype').strip() variables: {'zipfile.ZipFile': ('global', <class 'zipfile.ZipFile'>), 'z': (None, []), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa 
Chibi.ora')} File "/usr/lib/python2.7/zipfile.py", line 770, __init__(self=<zipfile.ZipFile object>, file=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', mode='r', compression=0, allowZip64=False) if key == 'r': self._RealGetContents() elif key == 'w': variables: {'self._RealGetContents': ('local', <bound method ZipFile._RealGetContents of <zipfile.ZipFile object at 0x7fdb9b952790>>)} File "/usr/lib/python2.7/zipfile.py", line 811, _RealGetContents(self=<zipfile.ZipFile object>) if not endrec: raise BadZipfile, "File is not a zip file" if self.debug > 1: variables: {'BadZipfile': ('global', <class 'zipfile.BadZipfile'>)} BadZipfile: File is not a zip file

    Read the article

  • Algorithm to generate multifaced cube?

    - by OnePie
    Is there any elegant solution to generate a simple six-sided cube, where each side is made out of more than one face? The method I have used ended up a horrible and complicated mess of logic that is impossible to follow and, most likely, to maintain. The algorithm should not generate redundant vertices, and should output the index list for the mesh as well. The reason I need this is that the cube's vertices will be deformed depending on various factors, meaning that a simple six-faced cube will not do.
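
    A sketch of one way to do it (helper names are mine; the winding is consistent but may need flipping for a given culling setup): treat each side as an n x n grid of quads and share vertices along edges and corners through a position-keyed map, so no vertex is emitted twice and the index list falls out directly.

        #include <cstdint>
        #include <map>
        #include <tuple>
        #include <vector>

        struct Vec3 { float x, y, z; };

        void BuildSubdividedCube(int n, std::vector<Vec3>& verts, std::vector<uint32_t>& indices)
        {
            verts.clear();
            indices.clear();
            std::map<std::tuple<float, float, float>, uint32_t> unique;
            auto addVertex = [&](Vec3 p) -> uint32_t {
                auto key = std::make_tuple(p.x, p.y, p.z);
                auto it = unique.find(key);
                if (it != unique.end()) return it->second;
                uint32_t idx = (uint32_t)verts.size();
                unique.emplace(key, idx);
                verts.push_back(p);
                return idx;
            };

            // Each side of a [-1,1] cube: an origin corner plus two edge vectors.
            struct Side { Vec3 origin, du, dv; };
            const Side sides[6] = {
                { {-1,-1,-1}, {2,0,0}, {0,2,0} },   // z = -1
                { {-1,-1, 1}, {0,2,0}, {2,0,0} },   // z = +1
                { {-1,-1,-1}, {0,0,2}, {2,0,0} },   // y = -1
                { {-1, 1,-1}, {2,0,0}, {0,0,2} },   // y = +1
                { {-1,-1,-1}, {0,2,0}, {0,0,2} },   // x = -1
                { { 1,-1,-1}, {0,0,2}, {0,2,0} },   // x = +1
            };

            for (const Side& f : sides)
                for (int j = 0; j < n; ++j)
                    for (int i = 0; i < n; ++i)
                    {
                        auto corner = [&](int a, int b) {
                            float s = (float)a / n, t = (float)b / n;
                            return Vec3{ f.origin.x + f.du.x * s + f.dv.x * t,
                                         f.origin.y + f.du.y * s + f.dv.y * t,
                                         f.origin.z + f.du.z * s + f.dv.z * t };
                        };
                        uint32_t v00 = addVertex(corner(i, j)),     v10 = addVertex(corner(i + 1, j));
                        uint32_t v01 = addVertex(corner(i, j + 1)), v11 = addVertex(corner(i + 1, j + 1));
                        indices.insert(indices.end(), { v00, v10, v11, v00, v11, v01 });
                    }
        }

    Shared edge vertices weld correctly because adjacent sides compute the same float coordinates from the same expressions; if the positions are perturbed later, quantise the map key instead.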

    Read the article

  • RenderState in XNA 4

    - by Shashwat
    I was going through this tutorial for having transparency which can be used to solve my problem here. The code is written in XNA 3 but I'm using XNA 4. What is the alternative for the following code in XNA 4? device.RenderState.AlphaTestEnable = true; device.RenderState.AlphaFunction = CompareFunction.GreaterEqual; device.RenderState.ReferenceAlpha = 200; device.RenderState.DepthBufferWriteEnable = false; I searched a lot but didn't find anything useful.

    Read the article

  • What is the simplest 3D software for Unity?

    - by kdavis8
    I've heard a lot about Daz Studio, Poser, Maya, K-3D, Anim8or, Blender, and all the rest. My question is: which one is the best choice in terms of simplicity and quality? Price is not really an issue. I'm programming games in Java for Android mobile devices at the moment, but I will eventually move onto larger platforms. I would like to utilize Unity3D for the game programming itself and a 3D modeling package just to create the game objects. I just need to know the best one to get started with from scratch, or should I use a combination of multiple ones? Any insight on this would be great, thanks!

    Read the article

  • How to flip a BC6/BC7 texture?

    - by postgoodism
    I have some code to load DDS image files into OpenGL textures, and I'd like to extend it to support the BC6 and BC7 compressed formats introduced in D3D11. Since DirectX and OpenGL disagree about whether a texture's origin is in the upper-left or lower-left corner, my DDS loader flips each image's pixels along the Y axis before passing the pixels to OpenGL. Flipping compressed textures presents an additional wrinkle: in addition to flipping each row of 4x4-pixel blocks, you also need to flip the pixels within each block. I found code here to flip BC1/BC2/BC3 blocks, and from the block diagrams on MSDN it was easy to adapt the BC3-flipping code to handle BC4 and BC5. The BC6 and BC7 formats look significantly more intimidating, though. Is there a similar bit-twiddling trick to flip these formats, or would I have to fully decompress and recompress each block?
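
    For what it's worth, the easy half of the job is format-agnostic: reversing the vertical order of the 4x4 block rows. A hedged C++ sketch follows (function name and signature are mine). The hard part for BC6H/BC7 is the flip inside each block, because the bit layout of the indices depends on the per-block mode and partition, so there is no single bit-twiddling pattern like the BC1-BC5 one; in practice the common workarounds are flipping the V coordinate at load/render time or decompressing and recompressing.

        #include <cstdint>
        #include <cstring>
        #include <vector>

        // Reverse the order of block rows in one mip level of a block-compressed image.
        // blockSize is 8 for BC1/BC4 and 16 for BC2/BC3/BC5/BC6H/BC7.
        void FlipBlockRows(uint8_t* data, int width, int height, size_t blockSize)
        {
            const int blocksWide = (width  + 3) / 4;
            const int blocksHigh = (height + 3) / 4;
            const size_t rowBytes = (size_t)blocksWide * blockSize;

            std::vector<uint8_t> tmp(rowBytes);
            for (int y = 0; y < blocksHigh / 2; ++y)
            {
                uint8_t* top    = data + (size_t)y * rowBytes;
                uint8_t* bottom = data + (size_t)(blocksHigh - 1 - y) * rowBytes;
                std::memcpy(tmp.data(), top, rowBytes);
                std::memcpy(top, bottom, rowBytes);
                std::memcpy(bottom, tmp.data(), rowBytes);
            }
        }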

    Read the article

  • OpenGL-ES: clearing the alpha of the FrameBufferObject

    - by MrDatabase
    This question is a follow-up to Texture artifacts on iPad. How does one "clear the alpha of the render texture frameBufferObject"? I've searched around here, StackOverflow and various search engines, but no luck. I've tried a few things... for example calling glClear(GL_COLOR_BUFFER_BIT) at the beginning of my render loop... but it doesn't seem to make a difference. Any help is appreciated since I'm still new to OpenGL. Cheers! p.s. I read on SO and in Apple's documentation that glClear should always be called at the beginning of the render loop. Agree? Disagree? Here's where I read this: http://stackoverflow.com/questions/2538662/how-does-glclear-improve-performance
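
    One interpretation of "clear only the alpha" (a sketch under my own assumptions, not necessarily what the linked answer meant): mask the colour writes down to the alpha channel, clear, then restore the mask. This assumes an ES 2.0-style context with the usual GL headers and a hypothetical FBO handle; on ES 1.1 the framebuffer calls carry the OES suffix.

        // Clear only the alpha channel of the render texture's framebuffer.
        void ClearAlphaOnly(GLuint renderTextureFbo)
        {
            glBindFramebuffer(GL_FRAMEBUFFER, renderTextureFbo);
            glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);   // write alpha only
            glClearColor(0.0f, 0.0f, 0.0f, 0.0f);                 // clear alpha to 0
            glClear(GL_COLOR_BUFFER_BIT);
            glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);      // restore RGBA writes
        }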

    Read the article

  • draw fog of war using shaders

    - by lezebulon
    I am making an RTS game, and I'd like some advice on how best to render fog of war, given what I'm already doing. You can imagine this game as being a classic RTS like Age of Empires 2, where the fog of war will basically be handled by a 2D array telling whether a given "tile" is explored or not. The specific things to consider here are: 1) I'm only doing a few draw calls to draw the whole screen, using shaders, and I'm not drawing "tile by tile" in a 2D loop. 2) The whole map is much bigger than the screen, and the screen can move every frame or so. In that case, how could I draw the fog of war? I have no issue maintaining a 2D array on the CPU side giving the fog of war for each tile, but what would be the best way to actually display it dynamically? Thanks!
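
    One common way to do this with only a few draw calls (a sketch under my own assumptions, not from the question): keep the CPU-side 2D visibility array, upload it as a small one-channel texture covering the whole map, and have the fragment shader darken each pixel based on a sample taken at the tile it belongs to. The GL-side part might look like this in C++ (a GL3-class context and loader headers are assumed; on older GL, GL_LUMINANCE replaces GL_R8/GL_RED):

        #include <vector>

        // visibility holds one byte per tile: 0 = unexplored, 255 = explored.
        GLuint CreateFogTexture(const std::vector<unsigned char>& visibility, int mapW, int mapH)
        {
            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                 // rows are tightly packed
            glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, mapW, mapH, 0,
                         GL_RED, GL_UNSIGNED_BYTE, visibility.data());
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  // soft fog edges
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            return tex;
        }

        // When tiles change (or once per frame), push the updated array.
        void UpdateFogTexture(GLuint tex, const std::vector<unsigned char>& visibility, int mapW, int mapH)
        {
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, mapW, mapH,
                            GL_RED, GL_UNSIGNED_BYTE, visibility.data());
        }

    Because the texture lives in map space, camera movement costs nothing extra: the shader just converts the fragment's world/tile position into a UV for this texture.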

    Read the article

  • fglrx installation without success

    - by Lucio
    I followed the steps of this guide and it doesn't work. I entered the following command and got an output with dependency errors: sudo dpkg -i fglrx*.deb. So I tried with gdebi instead, and it works. Now fglrx, fglrx-amdcccle and fglrx-dev are installed. The next step is to generate a new /etc/X11/xorg.conf file, but I can't do this for the following reason: when I enter sudo aticonfig --initial -f, the terminal shows me this output: sudo: aticonfig: command not found. Have I installed the packages correctly or not? What do I have to do to fix the problem? NOTE: I have not uninstalled anything (drivers, config., etc.) before beginning the installation.

    Read the article

  • Tool for creating Spritesheet? and Tips

    - by Spooks
    I am looking for a tool that I can use to create sprite sheets easily. Right now I am using Illustrator, but I can never get the center of the character in the exact position, so it looks like it is moving around (even though it's always in one place) while being looped through the sprite sheet. Are there any better tools that I could be using? Also, what kind of tips would you give for working with a sprite sheet? Should I create each part of the character in individual layers (left arm, right arm, body, etc.) or everything at once? Any other tips would also be helpful! Thank you.

    Read the article

  • Kingston SD reader not working for USB3

    - by user1146334
    I have a Kingston 4-in-1 Multimedia reader. When my PC was formatted with Win7 it worked fine. I decided to change to Ubuntu 14.04 and now it doesn't work. If I plug it into one of the USB2 ports it works fine, but whenever I plug it into one of the USB3 ports, it thinks about it for a minute and then dies. Here's the output of dmesg when it dies [110262.148656] usb 4-1: new SuperSpeed USB device number 3 using xhci_hcd [110262.170330] usb 4-1: Parent hub missing LPM exit latency info. Power management will be impacted. [110262.266379] usb 4-1: New USB device found, idVendor=11b0, idProduct=6348 [110262.266386] usb 4-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3 [110262.266390] usb 4-1: Product: USB3.0 Media Reader [110262.266394] usb 4-1: Manufacturer: Kingston [110262.266398] usb 4-1: SerialNumber: 08735314400198 [110262.272929] usb-storage 4-1:1.0: USB Mass Storage device detected [110262.273239] scsi15 : usb-storage 4-1:1.0 [110263.290056] scsi 15:0:0:0: Direct-Access FCR-HS3 -0 1.00 PQ: 0 ANSI: 4 [110263.306622] scsi 15:0:0:1: Direct-Access FCR-HS3 -1 1.00 PQ: 0 ANSI: 4 [110263.323292] scsi 15:0:0:2: Direct-Access FCR-HS3 -2 1.00 PQ: 0 ANSI: 4 [110263.339858] scsi 15:0:0:3: Direct-Access FCR-HS3 -3 1.00 PQ: 0 ANSI: 4 [110263.340332] sd 15:0:0:0: Attached scsi generic sg3 type 0 [110263.340706] sd 15:0:0:1: Attached scsi generic sg4 type 0 [110263.340850] sd 15:0:0:2: Attached scsi generic sg5 type 0 [110263.340975] sd 15:0:0:3: Attached scsi generic sg6 type 0 [110264.651847] sd 15:0:0:1: [sde] 31116288 512-byte logical blocks: (15.9 GB/14.8 GiB) [110264.667049] sd 15:0:0:1: [sde] Write Protect is off [110264.667055] sd 15:0:0:1: [sde] Mode Sense: 2f 00 00 00 [110264.682767] sd 15:0:0:1: [sde] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA [110264.694975] sd 15:0:0:2: [sdf] Attached SCSI removable disk [110264.697933] sd 15:0:0:3: [sdg] Attached SCSI removable disk [110264.729918] sd 15:0:0:0: [sdd] Attached SCSI removable disk [110264.754189] sde: sde1 [110264.760114] sd 15:0:0:1: [sde] Attached SCSI removable disk [110275.377368] usb 4-1: reset SuperSpeed USB device number 3 using xhci_hcd [110275.398453] usb 4-1: Parent hub missing LPM exit latency info. Power management will be impacted. [110275.436592] xhci_hcd 0000:05:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff8802e4fb9980 [110275.436600] xhci_hcd 0000:05:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff8802e4fb99c0 [110277.263444] usb 4-1: USB disconnect, device number 3

    Read the article

  • Why is Spritebatch drawing my Textures out of order?

    - by Andrew
    I just started working with XNA Studio after programming 2D games in java. Because of this, I have absolutely no experience with Spritebatch and sprite sorting. In java, I could just layer the images by calling the draw methods in order. For a while, my Spritebatch was working fine in deferred sorting mode, but when I made a change to one of my textures, it suddenly started drawing them out of order. I have searched for a solution to this problem, but nothing seems to work. I have tried adding layer depths to the sprites and changing the sort mode to BackToFront or FrontToBack or even immediate, but nothing seems to work. Here is my drawing code: protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.Gray); Game1.spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.PointClamp, null, null); for (int x = 0; x < 5; x++) { for (int y = 0; y < 5; y++) { region[x, y].draw(((float)w / aw)); // Draws the Tile-Based background } } player.draw(spriteBatch, ((float)w / aw));//draws the character (This method is where the problem occurs) enemy.draw(spriteBatch, (float)w/aw); // draws a basic enemy Game1.spriteBatch.End(); base.Draw(gameTime); } player.draw method: public void draw(SpriteBatch sb, float ratio){ //draws the player base (The character without hair or equipment) sb.Draw(playerbase[0], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)), new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White,0,Vector2.Zero,SpriteEffects.None,0); //draws the player's hair sb.Draw(playerbase[3], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)), new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0); //draws the player's shirt sb.Draw(equipment[0], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)), new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0); //draws the player's pants sb.Draw(equipment[1], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)), new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0); //draws the player's shoes sb.Draw(equipment[2], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)), new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0); } the game has a top-down perspective much like the early legend of zelda games. It draws sections of the texture depending on which direction the character is facing and the animation frame. However, instead of drawing the character in the order the draw methods are called, it ends up drawing the character out of order. Please help me with this problem.

    Read the article

  • Game window systems and internal frames

    - by 2080
    I don't know if this is a valid question, but: what kind of window manager do games use which have internal frames (frames inside frames)? Does this differ between programming languages (e.g. in Java, are the AWT/Swing libraries used to manage these and other graphical elements, such as buttons, or is this too restrictive in terms of speed and graphical possibilities)? A special example would be EVE Online, where the client can use the in-game windows like on a normal desktop.

    Read the article

  • GLSL, is it possible to offset vertices based on heightmap colour?

    - by Rob
    I am attempting to generate some terrain based upon a heightmap. I have generated a 32 x 32 grid and a corresponding heightmap. In my vertex shader I am trying to offset the vertex position along the Y axis based upon the colour of the heightmap, white vertices being higher than black ones.

        //Vertex Shader Code
        #version 330

        uniform mat4 modelMatrix;
        uniform mat4 viewMatrix;
        uniform mat4 projectionMatrix;
        uniform sampler2D heightmap;

        layout (location=0) in vec4 vertexPos;
        layout (location=1) in vec4 vertexColour;
        layout (location=3) in vec2 vertexTextureCoord;
        layout (location=4) in float offset;

        out vec4 fragCol;
        out vec4 fragPos;
        out vec2 fragTex;

        void main()
        {
            // Retrieve the current texel's colour
            vec4 hmColour = texture(heightmap, vertexTextureCoord);

            // Offset the y position by the value of the current texel's colour value?
            vec4 offset = vec4(vertexPos.x, vertexPos.y + hmColour.r, vertexPos.z, 1.0);

            // Final position
            gl_Position = projectionMatrix * viewMatrix * modelMatrix * offset;

            // Data sent to fragment shader
            fragCol = vertexColour;
            fragPos = vertexPos;
            fragTex = vertexTextureCoord;
        }

    However, the code I have produced only creates a grid with none of the vertices any higher than the others.
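
    Not necessarily the bug here, but a very common cause of a heightmap reading as zero in a vertex shader is the sampler uniform never being pointed at the texture unit the heightmap is bound to, or the texture being incomplete because the default min filter expects mipmaps that were never generated (an incomplete texture samples as black). A C++-side checklist sketch, with placeholder names:

        // Assumed: 'program' is the linked shader program, 'heightmapTex' the heightmap texture.
        glUseProgram(program);
        GLint loc = glGetUniformLocation(program, "heightmap");
        glUniform1i(loc, 0);                          // sampler "heightmap" reads texture unit 0

        glActiveTexture(GL_TEXTURE0);                 // bind the heightmap to that unit
        glBindTexture(GL_TEXTURE_2D, heightmapTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  // no mipmaps required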

    Read the article

  • Screen problems

    - by Erick
    I struggled a lot to install Ubuntu because of a problem with the video driver: after installing the driver and turning the machine on, only a black screen appears. What worked for me was to run sudo setpci -s 00:02.0 F4.B=0 from the terminal, which brings the screen up, but after rebooting the change doesn't persist and I have to run that command again. My question is: how can I make the change permanent so that I don't need to execute that command every time after shutting the machine down?

    Read the article

  • Can somebody guide me as to how I can make a game for playing cards [closed]

    - by user2558
    In college, my friends and I used to play cards all the time. I want to make a game for that. It's quite similar to Hearts, a kind of modified Hearts which we made up. I want to make a multiplayer game which could be played over the internet. There should also be an option for the computer to play if fewer players are available at the time. I don't want to make an exe; I want to play in the browser. How should I go about it?

    Read the article

  • Cool examples of procedural pixel shader effects?

    - by Robert Fraser
    What are some good examples of procedural/screen-space pixel shader effects? No code necessary; just looking for inspiration. In particular, I'm looking for effects that are not dependent on geometry or the rest of the scene (would look okay rendered alone on a quad) and are not image processing (don't require a "base image", though they can incorporate textures). Multi-pass or single-pass is fine. Screenshots or videos would be ideal, but ideas work too. Here are a few examples of what I'm looking for (all from the RenderMonkey samples): PS - I'm aware of this question; I'm not asking for a source of actual shader implementations but instead for some inspirational ideas -- and the ones at the NVIDIA Shader Library mostly require a scene or are image processing effects. EDIT: this is an open-ended question and I wish there was a good way to split the bounty. I'll award the rep to the best answer on the last day.

    Read the article
