Search Results

Search found 29396 results on 1176 pages for 'multiple graphics devices'.

  • Rendering scaled-down card images

    - by user1065145
    I have high-quality SVG card images, but they drastically lose their quality when I downsize them. I have tried two ways of rendering the cards (using Inkscape and ImageMagick):
    1) Render the SVG to a high-res PNG and resize it afterwards:
        inkscape -D --export-png=QS1024.png --export-width=1024 QS.svg
        convert QS1024.png -filter Lanczos -sampling-factor 1x1 -resize 71x QS71.png
    2) Render the SVG to an image of the proper size at once:
        inkscape -D --export-png=QS71.png --export-width=71 QS.svg
    Both approaches generate blurry card images, which look even worse than the old Windows cards. What is the best way to generate smaller card images from SVG sources without losing much of their quality?
    UPDATE: I am using Inkscape to render SVG to PNG and then ImageMagick to downsize the PNG. I've tried convert -resize with a couple of filters (Lanczos/Mitchell/etc.), but the result was pretty much the same. (Screenshots were attached: the original, and the 71px raster.)
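
    A common remedy, sketched here rather than taken from the thread (the 284px intermediate size and the -unsharp parameters are assumptions, not the asker's values), is to supersample at roughly 4x the target width and restore edge contrast with an unsharp mask while downscaling:

        # Render at ~4x the target width, then downscale with Lanczos
        # plus a light unsharp mask to counteract the blur.
        inkscape -D --export-png=QS284.png --export-width=284 QS.svg
        convert QS284.png -filter Lanczos -resize 71x -unsharp 0x0.75+0.75+0.008 QS71.png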

    Read the article

  • Prism and multiple screens

    - by Avi
    OK - I am studying Prism a little because of a "free weekend" offer on Pluralsight. As this is proving too complex for me, I went to the Prism book and looked at the foreword, and this is what it said: What comes after "Hello, World?" WPF and Silverlight developers are blessed with an abundance of excellent books... There's no lack of tutorials on Model-View-ViewModel ... But they stop short of the guidance you need to deliver a non-trivial application in full. Your first screen goes well. You add a second screen and a third. Because you started your solution with the built-in "Navigation Application Template," adding new screens feels like hanging shirts on a closet rod. You are on a roll. Until the harsh reality of real application requirements sets in. As it happens, your application has 30 screens, not three. There's no room on that closet rod for 30 screens. Some screens are modal pop-ups; you don't navigate to a pop-up. Screens become interdependent, such that user activity in one screen triggers changes that propagate throughout the UI. Some screens are optional; others are visible only to authorized users. Some screens are permanent, while other screens can be opened and closed at will. You discover that navigating back to a previously displayed screen creates a new instance. That's not what you expected and, to your horror, the prior instance is gone along with the user's unsaved changes. Now the issue is, I don't relate to this description. I've never been a UI programmer, but like everyone else I use Windows apps such as MS Office, and web sites such as Amazon, Facebook and StackExchange. And I look at these and I don't see many "so many screens" issues! Indeed, the only application with many windows I can think of is Visual Studio. Maybe also Visio, a little. But take Word - you have a ribbon and a main window. Or take Facebook: you have those lists on the left (Favorites, Lists, Groups etc.), the status feed in the middle, the ads, and then the Contacts sidebar. But it's only one page. Of course, I understand that in enterprise scenarios there are dashboard applications where multiple segments of the screen are updated from multiple unrelated services. This I dig. But other scenarios? So - what am I missing? What is the "multiple screens" monster Prism is supposed to be the silver-bullet solution for? Should I invest in studying Prism in addition to learning WPF or ASP.NET MVC?

    Read the article

  • Question about JPanel "transition" for Java Swing

    - by user16778
    I want to make a sort of main menu (in a GUI). When the user clicks the start button, the screen transitions into another "screen" (JPanel). This image will make it easier to understand: http://i.imgur.com/Cfdry.png Currently, I have a MainMenu extends JPanel which gets added into a driver class with a JFrame. I can't figure out how to switch to another class like Game extends JPanel. So when the user clicks the start button in MainMenu, I want it to somehow hide itself and the Game to show itself. Thanks.
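
    The usual Swing answer to this is CardLayout, which stacks panels and shows one at a time. A minimal sketch, reusing the MainMenu and Game class names from the question (their constructors are assumed):

        import java.awt.CardLayout;
        import javax.swing.JFrame;
        import javax.swing.JPanel;

        public class Driver {
            public static void main(String[] args) {
                JFrame frame = new JFrame("Game");
                CardLayout cards = new CardLayout();
                JPanel root = new JPanel(cards);   // holds every "screen"
                root.add(new MainMenu(), "menu");
                root.add(new Game(), "game");
                frame.setContentPane(root);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.pack();
                frame.setVisible(true);
                // In MainMenu's start-button ActionListener you would call:
                // cards.show(root, "game");
            }
        }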

    Read the article

  • Radeon HD 6290 terrible performance on a certified laptop

    - by dac
    I bought an Asus K53U laptop, which is Ubuntu certified, with pre-installed 11.10. The graphics card is a Radeon HD 6290, but 720p playback is terrible. Even page scrolling in Firefox is very laggy. Proprietary drivers are installed by default. How is this possible - why is the laptop Ubuntu certified if the performance is this poor? Any solution to this? I just did apt-get autoremove, and after that this message came out in the terminal: Error inserting vesafb (/lib/modules/3.0.0-15-generic/kernel/drivers/video/vesafb.ko): No such device Could that be the problem?

    Read the article

  • How do I install Lubuntu? (kernel panic)

    - by melvincv
    Please help me install Lubuntu 12.04 i386 on an old computer. When I select "Try Lubuntu without installing", it crashes with a kernel panic. Rarely I do get to the live OS, but soon the display goes blank. The message log gives me '[drm] ERROR GPU hung/wedged'. The specs are: Pentium 4 2.4GHz, 1GB DDR RAM, 40GB PATA HDD, Intel 845GL chipset (8MB framebuffer, 64MB shared system memory set in the BIOS).

    Read the article

  • Texture artifacts on iPad

    - by MrDatabase
    I'm porting an iPhone game to the iPad. When I move textures "quickly" (5.0 pixels every update at a rate of 60 Hz), I start to see little "artifacts" or remnants of where the texture used to be. I'm not sure I know the correct terminology for this... imagine a texture at some location on the screen... then next to it the same texture but faded a bit... then the same texture again, just faded a bit more. I'm using CADisplayLink to drive my update loop, if that helps. Also, I didn't see this issue on the 3G or the iPhone 4. Any ideas? Cheers!
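
    Ghost trails like this are often a clearing problem rather than a texture problem. Two things worth checking, shown as a sketch for a typical EAGL-backed OpenGL ES setup (the eaglLayer variable is assumed to be the view's CAEAGLLayer):

        // 1) Don't let the drawable keep its contents between frames;
        //    retained backing is a common cause of ghosting.
        eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking,
            kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];

        // 2) Clear the color buffer at the top of every CADisplayLink update.
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);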

    Read the article

  • Anti-aliasing problem

    - by byronyasgur
    I am auditioning fonts on Google Web Fonts, and one that I was discounting was Ubuntu because it looked a bit jagged (screenshot below taken straight from Google); however, afterward I read an article where it was mentioned as a good choice, and there was a screenshot where it looked really good (to me, anyway). I am using Windows 7 and have tried looking at it in Chrome and Firefox. I notice the same thing with some other fonts, but this one is a good example because it looks perfect in the screenshot but not so good when I look at it on their site. I know this is essentially a question about the settings of my computer, but I thought that this would be the best place to pose it: is there something wrong with the settings on my machine, seeing as it's obviously not showing the font the same way on my computer as it did when the article's author downloaded it and used it in an image? The screenshot from Google ... The screenshot from the article above ...

    Read the article

  • Objects won't render when Texture Compression + Mipmapping is Enabled

    - by felipedrl
    I'm optimizing my game and I've just implemented compressed (DXTn) texture loading in OpenGL. I've worked my way through removing bugs, but I can't figure out this one: objects with DXTn + mipmapped textures are not being rendered. It's not like they appear with a flat color; they just don't appear at all. DXTn-textured objects render fine, and mipmapped non-compressed textures render just fine. The texture in question is 256x256 and I generate the mips all the way down to 4x4, i.e. 1 block. I've checked in gDEBugger and it displays all the levels (7) just fine. I'm using GL_LINEAR_MIPMAP_NEAREST for the min filter and GL_LINEAR for the mag one. The texture is compressed and the mipmaps are created offline with the Paint.NET tool, using the super sampling method (I also tried bilinear, just in case). Source follows:

    [SNIPPET 1: Loading DDS into system memory + initializing the object]

        // Read header
        DDSHeader header;
        file.read(reinterpret_cast<char*>(&header), sizeof(DDSHeader));
        uint pos = static_cast<uint>(file.tellg());
        file.seekg(0, std::ios_base::end);
        uint dataSizeInBytes = static_cast<uint>(file.tellg()) - pos;
        file.seekg(pos, std::ios_base::beg);

        // Read file data
        mData = new unsigned char[dataSizeInBytes];
        file.read(reinterpret_cast<char*>(mData), dataSizeInBytes);
        file.close();

        mMipmapCount = header.mipmapcount;
        mHeight = header.height;
        mWidth = header.width;
        mCompressionType = header.pf.fourCC;

        // Only support files divisible by 4 (for compression block algorithms)
        massert(mWidth % 4 == 0 && mHeight % 4 == 0);
        massert(mCompressionType == NO_COMPRESSION ||
                mCompressionType == COMPRESSION_DXT1 ||
                mCompressionType == COMPRESSION_DXT3 ||
                mCompressionType == COMPRESSION_DXT5);

        // Allow textures up to 65536x65536
        massert(header.mipmapcount <= MAX_MIPMAP_LEVELS);

        mTextureFilter = TextureFilter::LINEAR;
        if (mMipmapCount > 0) {
            mMipmapFilter = MipmapFilter::NEAREST;
        } else {
            mMipmapFilter = MipmapFilter::NO_MIPMAP;
        }

        mBitsPerPixel = header.pf.bitcount;
        if (mCompressionType == NO_COMPRESSION) {
            if (header.pf.flags & DDPF_ALPHAPIXELS) {
                // The only format supported w/ alpha is A8R8G8B8
                massert(header.pf.amask == 0xFF000000 && header.pf.rmask == 0xFF0000 &&
                        header.pf.gmask == 0xFF00 && header.pf.bmask == 0xFF);
                mInternalFormat = GL_RGBA8;
                mFormat = GL_BGRA;
                mDataType = GL_UNSIGNED_BYTE;
            } else {
                massert(header.pf.rmask == 0xFF0000 &&
                        header.pf.gmask == 0xFF00 &&
                        header.pf.bmask == 0xFF);
                mInternalFormat = GL_RGB8;
                mFormat = GL_BGR;
                mDataType = GL_UNSIGNED_BYTE;
            }
        } else {
            uint blockSizeInBytes = 16;
            switch (mCompressionType) {
            case COMPRESSION_DXT1:
                blockSizeInBytes = 8;
                if (header.pf.flags & DDPF_ALPHAPIXELS) {
                    mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT1_EXT;
                } else {
                    mInternalFormat = GL_COMPRESSED_RGB_S3TC_DXT1_EXT;
                }
                break;
            case COMPRESSION_DXT3:
                mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT;
                break;
            case COMPRESSION_DXT5:
                mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;
                break;
            default:
                // Not supported (DXT2, DXT4 or any other compression format)
                massert(false);
            }
        }

    [SNIPPET 2: Uploading into video memory]

        massert(mData != NULL);
        glGenTextures(1, &mHandle);
        massert(mHandle != 0);
        glBindTexture(GL_TEXTURE_2D, mHandle);
        commitFiltering();

        uint offset = 0;
        Renderer* renderer = Renderer::getInstance();
        switch (mInternalFormat) {
        case GL_RGB:
        case GL_RGBA:
        case GL_RGB8:
        case GL_RGBA8:
            for (uint i = 0; i < mMipmapCount + 1; ++i) {
                uint width = std::max(1U, mWidth >> i);
                uint height = std::max(1U, mHeight >> i);
                glTexImage2D(GL_TEXTURE_2D, i, mInternalFormat, width, height,
                             mHasBorder, mFormat, mDataType, &mData[offset]);
                offset += width * height * (mBitsPerPixel / 8);
            }
            break;
        case GL_COMPRESSED_RGB_S3TC_DXT1_EXT:
        case GL_COMPRESSED_RGBA_S3TC_DXT1_EXT:
        case GL_COMPRESSED_RGBA_S3TC_DXT3_EXT:
        case GL_COMPRESSED_RGBA_S3TC_DXT5_EXT:
        {
            uint blockSize = 16;
            if (mInternalFormat == GL_COMPRESSED_RGB_S3TC_DXT1_EXT ||
                mInternalFormat == GL_COMPRESSED_RGBA_S3TC_DXT1_EXT) {
                blockSize = 8;
            }
            uint width = mWidth;
            uint height = mHeight;
            for (uint i = 0; i < mMipmapCount + 1; ++i) {
                uint nBlocks = ((width + 3) / 4) * ((height + 3) / 4);
                // Only POT textures allowed for mipmapping
                massert(width % 4 == 0 && height % 4 == 0);
                glCompressedTexImage2D(GL_TEXTURE_2D, i, mInternalFormat, width, height,
                                       mHasBorder, nBlocks * blockSize, &mData[offset]);
                offset += nBlocks * blockSize;
                if (width <= 4 && height <= 4) {
                    break;
                }
                width = std::max(4U, width / 2);
                height = std::max(4U, height / 2);
            }
            break;
        }
        default:
            // Not supported
            massert(false);
        }

    Also, I don't understand the "+3" in the block count computation, but while looking for a solution to my problem I've encountered people defining it that way. I guess it won't make a difference for POT textures, but I put it in just in case. Thanks.
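
    On the "+3": it is integer ceiling division, so mip sizes that are not multiples of 4 (2x2, 1x1) still count as one whole 4x4 block. A tiny standalone check (not from the question's codebase):

        #include <cstdio>

        // (w + 3) / 4 equals ceil(w / 4.0) for unsigned integers.
        int main() {
            for (unsigned w = 1; w <= 8; ++w) {
                std::printf("w=%u -> blocks=%u\n", w, (w + 3) / 4); // 1,1,1,1,2,2,2,2
            }
            return 0;
        }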

    Read the article

  • Alternative to NV Occlusion Query - getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader, where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row; then a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware's linear interpolation between texels, until you are at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels within their corresponding region. Is this even accurate enough? ... ?
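
    A sketch of one reduction pass for the second idea (the names are illustrative; u_texelSize is assumed to hold 1.0 / the source resolution). Each pass halves the resolution and writes the average of four texels, so after N passes the count is roughly the final value times 4^N, within the precision limits of an 8-bit render target:

        // Fragment shader for one 2x2 reduction pass (GLSL ES 1.00 sketch).
        precision mediump float;
        uniform sampler2D u_source;
        uniform vec2 u_texelSize;   // 1.0 / source resolution (assumption)
        varying vec2 v_uv;

        void main() {
            vec4 sum = texture2D(u_source, v_uv)
                     + texture2D(u_source, v_uv + vec2(u_texelSize.x, 0.0))
                     + texture2D(u_source, v_uv + vec2(0.0, u_texelSize.y))
                     + texture2D(u_source, v_uv + u_texelSize);
            gl_FragColor = sum * 0.25;
        }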

    Read the article

  • Ubuntu 13.10 upgraded from 13.04 issues

    - by Andrew Sadach
    The keyboard stopped working after a while, so I started using 13.04 again via USB, because I am waiting for an update for the keyboard issues that 13.10 has. 13.04 had tons of issues I didn't care about because most of it worked; now almost none of it works, and there is even a huge number of graphical errors. Others have had these issues, as I've noticed while looking at the similar-questions area next to this text box, but my question is: can I downgrade 13.10 to 13.04?

    Read the article

  • Pointer problem on external monitor

    - by Herby Pepper
    The pointer looks and works fine on the laptop, but when I connect an external monitor it appears there as a shaking, square shape. Videos are not showing either. My laptop is an HP 2133. Graphics card: VIA Technologies, Inc. CN896/VN896/P4M900, Chrome 9 HC. System: Lubuntu 12.04. I think it is a graphics problem, but I can't find drivers for my card and system. I do have xserver-xorg-video-openchrome and disper installed. I did not have this problem with Lubuntu 11.10. My problem is a bit like "Mouse pointer strange problem", but that one was not solved, so I decided to post my question.

    Read the article

  • How do I plot individual pixels using the XNA APIs?

    - by izb
    If I wanted to fill my game screen with individually coloured pixels, how would I do this? For example, if I wanted to write a 'game of life'-type game where each pixel was a cell, how would I achieve this using XNA? I've tried just calling SetData() on a Texture2D object using a screen-sized array of Color values, but it complains with: You may not call SetData on a resource while it is actively set on the GraphicsDevice. Unset it from the device before calling SetData. How do I do as it asks? Or better still, is there an alternative, more efficient way to fill a screen with arbitrary pixels?
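
    The error is asking for the texture to be unbound before it is written. A sketch of the usual pattern (texture, pixels and spriteBatch are assumed fields, not names from the question):

        // In Update(): unbind the texture from the device, then rewrite it.
        GraphicsDevice.Textures[0] = null;
        texture.SetData<Color>(pixels);   // pixels is a Color[width * height]

        // In Draw(): blit the texture over the whole back buffer.
        spriteBatch.Begin();
        spriteBatch.Draw(texture, Vector2.Zero, Color.White);
        spriteBatch.End();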

    Read the article

  • Shader inputs in a general purpose engine

    - by dreta
    I'm not familiar with SDKs like Unity or UDK that much, so I can't check this offhand. Do general-purpose engines allow users to create custom uniform variables? The way I see it, and the way I have implemented it in an engine I'm writing to learn 3D, is that there is a "set" of uniforms provided by the engine, and if you want to write a custom shader then you use the uniforms you need to create the wanted effect. Now, the thing is, first of all I'm not an artist, and second of all, I haven't had a chance to create complex scenes yet. So my question is: is it common practice to define the variables that the engine provides and only allow the user to work with what they're given? Allowing users to add custom programs and use them where they want is not hard, but I have trouble imagining how you'd go about doing the same for uniforms.
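
    One pattern general-purpose engines use, sketched here under the assumption of a GLEW-style loader (this is not from any particular engine), is to let a material carry an arbitrary name-to-value map and resolve the uniform locations at bind time:

        #include <GL/glew.h>
        #include <map>
        #include <string>

        // A material stores user-defined uniforms by name and uploads them
        // when bound. Real engines would cache the uniform locations.
        struct Material {
            GLuint program;
            std::map<std::string, float> floatUniforms;  // custom, user-defined

            void bind() const {
                glUseProgram(program);
                for (std::map<std::string, float>::const_iterator it =
                         floatUniforms.begin(); it != floatUniforms.end(); ++it) {
                    GLint loc = glGetUniformLocation(program, it->first.c_str());
                    if (loc != -1) {
                        glUniform1f(loc, it->second);
                    }
                }
            }
        };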

    Read the article

  • About floating-point precision and why we still use it

    - by system_is_b0rken
    Floating point has always been troublesome for precision on large worlds. This article explains the behind-the-scenes details and offers the obvious alternative - fixed-point numbers. Some facts are really impressive, like: "Well 64 bits of precision gets you to the furthest distance of Pluto from the Sun (7.4 billion km) with sub-micrometer precision." Well, sub-micrometer precision is more than any FPS needs (for positions and even velocities), and it would enable you to build really big worlds. My question is, why do we still use floating point if fixed point has such advantages? Most rendering APIs and physics libraries use floating point (and suffer its disadvantages, so developers need to work around them). Are they so much slower? Additionally, how do you think scalable planetary engines like Outerra or Infinity handle the large scale? Do they use fixed point for positions, or do they have some space-dividing algorithm?
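
    To make the precision trade-off concrete (an illustrative check, not from the article): the gap between adjacent 32-bit floats grows with magnitude, so a single-precision position 1,000 km from the origin can only move in steps of about 6 cm:

        #include <math.h>
        #include <stdio.h>

        // Print the spacing ("ulp") between adjacent floats at various positions.
        int main(void) {
            const float positions[] = { 1.0f, 1000.0f, 1000000.0f, 1e9f };
            for (int i = 0; i < 4; ++i) {
                float p = positions[i];
                printf("at %12g m, step = %g m\n", p, nextafterf(p, INFINITY) - p);
            }
            return 0;   // at 1e6 m the step is 0.0625 m, i.e. ~6 cm jumps
        }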

    Read the article

  • How can I design good continuous (seamless) tiles?

    - by Mikalichov
    I have trouble designing tiles so that when assembled, they don't look like tiles, but look like one homogeneous thing. For example, see the image below: even though the main part of the grass is only one tile, you don't "see" the grid; you know where it is if you look a bit carefully, but it is not obvious. Whereas when I design tiles, you can only see "oh, jeez, 64 times the same tile," like in this image: (I took this from another GDSE question, sorry; not to be critical of the game, but it proves my point. And it actually has better tile design than what I manage, anyway.) I think the main problem is that I design them to be independent: there is no junction between two tiles when they are put close to each other. I think having the tiles more "continuous" would have a smoother effect, but I can't manage to do it; it seems overly complex to me. It is probably simpler than I think once you know how to do it, but I couldn't find a tutorial on that specific point. Is there a known method to design continuous / homogeneous tiles? (My terminology might be totally wrong, don't hesitate to correct me.)

    Read the article

  • Why would video stutter on HDMI but not on DVI?

    - by CorvT
    I've got a system running Ubuntu 12.04 with an i3 2120T CPU/GPU. When I play video through mplayer, I notice that when I'm hooked up to a screen via HDMI there is a small stutter (1-2 frames) every few seconds. I don't see this happening when I connect via DVI on the same screen. The resolution and refresh rate are the same for both HDMI and DVI, so I'm not sure where else the problem could be coming from. I've also tried two different screens, and different cables. I see the stutter with either HDMI-HDMI cables, or a DVI-HDMI cable with DVI from the PC and HDMI into the screen. I don't see the stutter with DVI-DVI cables, or when I use HDMI-DVI cables with HDMI from the PC and DVI into the screen. I've also tried an AMD 5XXX series card with the open-source radeon driver and saw the same problem. I then tried an nVidia GeForce 210 card with the closed-source driver, and the stutter went away. To me this smells like a driver/mesa/glx issue (since the problem went away with the nvidia card/driver), but I have no idea how to track it down.

    Read the article

  • I want to learn to program in SDL C++ - where do I start? I want to learn only what I need to start making 2D games [on hold]

    - by user2644399
    Lazyfoo of Lazyfoo.net, in the SDL 2D tutorial, wrote that in order for me to start game programming in SDL, I need to know these concepts well: operators, control structures, loops, functions, structures, arrays, references, pointers, classes, objects, how to use a template, and bitwise AND/OR. I want to know the fastest way to learn as much of basic C++ as I need to make 2D games. Thanks in advance.
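
    For orientation, nearly all of those concepts show up in even the smallest SDL program. A minimal SDL2 skeleton (a sketch, assuming SDL2; Lazyfoo's older lessons target SDL 1.2, where setup differs):

        #include <SDL.h>

        // Pointers, loops, branching and functions: most of the day-one C++.
        int main(int argc, char** argv) {
            if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;
            SDL_Window* win = SDL_CreateWindow("2D game", SDL_WINDOWPOS_CENTERED,
                                               SDL_WINDOWPOS_CENTERED, 640, 480, 0);
            SDL_Renderer* ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);
            bool running = true;
            while (running) {                       // the game loop
                SDL_Event e;
                while (SDL_PollEvent(&e)) {
                    if (e.type == SDL_QUIT) running = false;
                }
                SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);
                SDL_RenderClear(ren);
                // ... draw sprites here ...
                SDL_RenderPresent(ren);
            }
            SDL_DestroyRenderer(ren);
            SDL_DestroyWindow(win);
            SDL_Quit();
            return 0;
        }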

    Read the article

  • Multiple TOCs with MediaWiki using section headings on a single page

    - by user1704043
    I'm running my own installation of MediaWiki, which has been great! I haven't been able to find the answer to this small problem in any post, how-to, etc. Here's the setup: Article TOC (limited to showing only H1 and H2) ==H1== ===H2=== ====H3==== ====H3==== I don't want the H3s to show up in the main table of contents, because they would make the list very long. Instead, under each H2, I would like to display another TOC with all the H3s under that heading. From my understanding, you cannot have multiple tables of contents on a single page. I've thought about making a template for each H2 that has the H3 links, but that seems to duplicate a lot of work and create loads of pages. I'd love a template that sucks in all the subsection names and spits them out, but I don't see how to do that. Alternatively, is there a way to enable multiple TOCs in a custom install of MediaWiki that I'm missing? Even that would get closer to what I'm trying to do.
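
    For the first half of this (capping the depth of the main TOC), MediaWiki has a site-wide setting; a sketch for LocalSettings.php, with the caveat that the exact level cutoff depends on the MediaWiki version:

        # LocalSettings.php: don't list deeply nested headings in the TOC.
        $wgMaxTocLevel = 3;

    Per-section mini-TOCs would still need template or extension support; core MediaWiki renders only one TOC per page.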

    Read the article

  • How well does Intel HD 3000 work on Ubuntu?

    - by Simon
    Right now I have a notebook with an Nvidia 8400M GS (I know, it's not a good card) and it's impossible to work normally when I plug in an external monitor (1920x1080). Windows 7 can deal with it without problems (1440x900 on the notebook + 1920x1080 external). On Ubuntu I have to choose one screen and turn off the second one. Even with only one screen, Ubuntu (Unity or even GNOME 3) sometimes hangs for a while. I've not found a solution for this yet, but never mind, it's probably because of my card and/or Nvidia's drivers. I'm going to buy a new PC, but for now only with an integrated Intel HD 3000, and my question is: should I expect similar problems with this card? Here I've found a link to Intel's web page about the drivers ("only the community develops them"), and I'm a bit concerned. I'll use only one monitor then (the bigger one), but how well do those drivers work? Are there any performance tests?

    Read the article

  • Nvidia Driver versions?

    - by Patrick Krenz
    I've looked all over and can't find any reason as to why or how Nvidia names their drivers. For example, they have a 330.xxx/340.xxx series that is current, but also a 300.xxx, and I've found that they aren't always released in order by number. Here's an example from their site, with version and release date: 331.38 - January 13, 334.16 - February 7, 331.49 - February 18. I'm really confused about which driver to actually go with; a few different series versions seem to work adequately, and I just want to have an understanding of it and of what the best option to work from would be. I really appreciate any information.

    Read the article

  • Why does my VertexDeclaration apparently not contain Position0?

    - by Phil
    I'm trying to get my code from calling each individual draw call down to using at least a VertexBuffer, and preferably an IndexBuffer, but now that I'm attempting to test my code, I'm getting the error: The current vertex declaration does not include all the elements required by the current vertex shader. Position0 is missing. Which makes absolutely no sense to me, as my VertexDeclaration is:

        public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration(
            new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
            new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0),
            new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0)
        );

    which clearly contains the position information. I am attempting to draw with the following lines:

        VertexBuffer vb = new VertexBuffer(GraphicsDevice, VertexPositionColorNormal.VertexDeclaration,
                                           c.VertexList.Count, BufferUsage.WriteOnly);
        IndexBuffer ib = new IndexBuffer(GraphicsDevice, typeof(int), c.IndexList.Count, BufferUsage.WriteOnly);
        vb.SetData<VertexPositionColorNormal>(c.VertexList.ToArray());
        ib.SetData<int>(c.IndexList.ToArray());
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vb.VertexCount, 0, c.IndexList.Count / 3);

    where c is a Chunk class containing an 8x8x8 array of boxes. Full code is available at https://github.com/mrbaggins/Box/tree/ProperMeshing/box/box. Relevant locations are Chunk.cs (contains the VertexDeclaration) and Game1.cs (Draw() is in lines 230-250). Not much else of relevance to this problem is anywhere else. Note that the large commented sections are from an old version of the drawing code.
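
    One thing the snippet above never does (the full repository may do it elsewhere, so this is only a sketch of a likely fix) is bind the buffers to the device; without SetVertexBuffer, DrawIndexedPrimitives runs against whatever vertex stream was last set:

        // Bind both buffers before issuing the draw call.
        GraphicsDevice.SetVertexBuffer(vb);
        GraphicsDevice.Indices = ib;
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0,
                                             vb.VertexCount, 0, c.IndexList.Count / 3);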

    Read the article

  • OpenGL and switchable graphics cards

    - by Orcun
    I use a laptop that has an AMD Radeon HD 6470M and an onboard graphics card. When I run fglrxinfo, I get this error: X Error of failed request: BadRequest (invalid request code or no such operation) Major opcode of failed request: 136 (GLX) Minor opcode of failed request: 19 (X_GLXQueryServerString) Serial number of failed request: 12 Current serial number in output stream: 12 Is this a problem? For some reason I can't use OpenGL: I can't run any OpenGL applications.

    Read the article

  • File system layout for multiple build targets

    - by Yttrill
    I am seeking some ideas for how to build and install software with some parameters, including target OS, target platform CPU details, debugging variant, etc. Some parts of the install are shared, such as documentation and many platform-independent files; others are not, such as 64- and 32-bit libraries, when these are separated and not together in a multi-arch library. On big networked platforms one often has multiple computers sharing some large server space, so there is actually cause to have even Windows and Unix binaries on the same disk. My product has already fixed an install philosophy of $INSTALL_ROOT/genericname/version/ so that multiple versions can coexist. The question is: how to manage the layout of all the other stuff?
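
    One layout that fits the stated philosophy, sketched with hypothetical subdirectory names (the target-triple naming is an assumption, not the product's convention):

        $INSTALL_ROOT/genericname/1.2.0/
            share/                      # docs and platform-independent files
            linux-x86_64-debug/lib/
            linux-x86_64-release/lib/
            linux-i686-release/lib/
            win64-release/bin/

    Shared files live once under share/, while each (OS, CPU, variant) combination gets its own subtree, so a network-mounted install can serve Windows and Unix clients from the same version root.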

    Read the article

  • Why doesn't my graphics card support 1280x1024?

    - by Allwar
    Hi, I have an external monitor, which is a 20" 1280x1024. In Windows 7 it works fine at that resolution, but in Ubuntu it can't. Example: in Windows I connect it and activate it, done. In Ubuntu I connect it and the only resolutions available are the ones my laptop screen supports (12", 1366x768). My laptop is an Asus 1201N. If I force it to use 1280x1024, both screens crash and I have to force a reboot. What should I do?

        alvar@alvars-laptop:~$ disper -l
        display DFP-0: HSD121PHW1
        resolutions: 320x175, 320x200, 360x200, 320x240, 400x300, 416x312, 512x384, 640x350, 576x432, 640x400, 680x384, 720x400, 640x480, 720x450, 640x512, 700x525, 800x512, 840x525, 800x600, 960x540, 832x624, 1024x768, 1366x768
        display CRT-0: CRT-0
        resolutions: 320x240, 400x300, 512x384, 680x384, 640x480, 800x600, 1024x768, 1152x864, 1360x768
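
    A common workaround is to add the missing mode by hand; a sketch, using the CRT-0 output name from the disper output above (the modeline is the standard cvt output for 1280x1024 at 60 Hz, and note that some proprietary drivers ignore modes added this way):

        # Generate a modeline and attach it to the external output.
        cvt 1280 1024 60
        xrandr --newmode "1280x1024_60.00" 109.00 1280 1368 1496 1712 1024 1027 1034 1063 -hsync +vsync
        xrandr --addmode CRT-0 "1280x1024_60.00"
        xrandr --output CRT-0 --mode "1280x1024_60.00"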

    Read the article
