Search Results

Search found 955 results on 39 pages for 'gpu'.

Page 21 of 39

  • Skip the first RenderTarget when writing to MRT with Opaque blending

    - by cubrman
    I am writing to three render targets and want to know how to tell the GPU not to write to the first RT. When you write a shader you can simply output less data than you have RTs (for example, output a single float4 when writing to three RTs) and only the first RTs will be affected, but you cannot direct that output anywhere other than COLOR0, then COLOR1, and so on. Is there a way to write to several RTs but skip the first target? If I output zeroes, the data in the first target becomes zeroes, but I need it to remain untouched in the first target and change only in the specified ones. The reason I need this is to prevent data loss when calling SetRenderTarget() with DiscardContents RTs: I write to all the RTs at one point and need to write to only the specified ones afterwards. It has to be the first texture because I have a depth buffer linked to it (XNA 4.0). Thanks.
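    A rough illustration of the same idea in OpenGL terms (the question is about XNA 4.0, so treat this only as an analogy: glColorMaski is a GL 3.0+ call that masks colour writes per attachment, and XNA 4.0's BlendState exposes ColorWriteChannels properties for a similar purpose):

        // Illustration only: leave colour attachment 0 untouched while still
        // writing normally to attachments 1 and 2; masked writes are discarded,
        // so the first target keeps its previous contents.
        glColorMaski(0, GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glColorMaski(1, GL_TRUE,  GL_TRUE,  GL_TRUE,  GL_TRUE);
        glColorMaski(2, GL_TRUE,  GL_TRUE,  GL_TRUE,  GL_TRUE);
        // ... issue the draw calls that should skip the first target ...
        glColorMaski(0, GL_TRUE,  GL_TRUE,  GL_TRUE,  GL_TRUE);  // restore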

    Read the article

  • Order independent transparency in particle system

    - by Stepan Zastupov
    I'm writing a particle system and would like to find a trick to achieve proper alpha blending without sorting particles, because: each particle is a point sprite in a single mesh, so I can't use the scene graph's ability to sort transparent nodes (the system's node should still be sorted properly, though); particle positions are computed on the shader from initial velocity, acceleration and time, so to sort the system I would have to perform all of those computations on the CPU, which is something I want to avoid; and sorting hundreds of particles against the camera position and uploading them to the GPU every frame seems to be quite a heavy operation. Alpha testing seems to be fast enough on GLES 2.0 and works fine for non-transparent but "masked" textures, but it's not enough for semi-transparent particles. How would you handle this?
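    One commonly suggested workaround, sketched here under the assumption that the particles can tolerate an order-independent blend mode instead of classic back-to-front alpha blending: additive blending is commutative, so no sorting is needed at all (GLES 2.0 calls):

        // Additive blending: each particle adds src.rgb * src.a to the frame
        // buffer, so the result is the same in any draw order. Depth writes are
        // disabled so particles do not occlude one another.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE);
        glDepthMask(GL_FALSE);
        // ... draw the particle mesh ...
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);

    This suits emissive effects such as fire and sparks; smoke-like particles generally still need sorting or a true order-independent-transparency technique.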

    Read the article

  • Brightness not working; HP Pavilion Dv6; ATI Radeon HD6770M

    - by Yogesh Dhamija
    I am new to Ubuntu, but so far I am loving it. I have been unable to change my brightness ever since I installed Ubuntu, but I figured that installing the latest ATI driver for my graphics card would fix it. I did, but I still can't change the brightness. The slider goes up and down, but the brightness stays the same (on full). I have switchable graphics, an ATI Radeon HD 6770M and an Intel integrated GPU. Since I am new to Linux, I am not familiar with the terminal, so you will have to spell everything out for me, including how to gather any further information you need. Thanks.

    Read the article

  • Frequent GUI pauses in Ubuntu 13.04 / Unity / Intel HD4000

    - by Simon
    I'm experiencing very frequent (and regular) GUI pauses on my system. Every 30 seconds (pretty much exactly) the GUI freezes for maybe 0.25 to 0.5 seconds: the mouse stops moving, keys stop echoing and a stopwatch timer briefly pauses. I'm using the Intel graphics driver available from https://download.01.org/gfx/ubuntu/13.04/main. I've looked in a few places and tried a few things for a solution: I've checked cron and anacron for scheduled processes; I've disabled background processes (e.g. mysql, postgres, apache), not that these were doing anything anyway; and I've checked the posts "Unity GUI pauses/freezes for less than a few seconds" and "How to go about troubleshooting frequent system pauses" and tried the suggestions there. I've watched the system using top and System Monitor and there are no spikes (or even blips) of CPU usage when the pauses occur. There are no obvious error messages in dmesg or syslog. There is plenty of free RAM (8GB+) and no swap usage. If it helps, it's a ZooStorm i5 laptop with an HD4000 GPU, 16GB of RAM and an SSD. Any help or suggestions would be very gratefully received.

    Read the article

  • Would like some help in understanding rendering geometry vs textures

    - by Anon
    So I was just pondering whether it is more taxing on the GPU to render geometry or a texture. What I'm trying to see is whether there is a huge difference in rendering two scenes with the same setup. Scene 1: a dirt road (nothing else), with all the bumps, cracks and so forth modelled directly in the mesh. Scene 2: the same dirt road, but as a simple mesh in the shape of a road, with maps and textures simulating the cracks, bumps, etc. Of these two, which one is likely to tax the hardware more? Or is it not a like-for-like comparison? What would be the best way of doing something like this: go heavy on the textures, or use a blend of both?

    Read the article

  • My ASUS U32U with a fresh Xubuntu install shows a black screen on 50-80% of startups

    - by Jona Ekenberg
    I have recently installed Ubuntu 12.10 with the Xubuntu package on my ASUS U32U notebook (Radeon HD 6320 GPU). The issue is that, more often than not, after the GRUB selection screen I get a black screen; three times in total, whitish lines have flashed very briefly across it (with maybe 5 seconds between each flash). I can't even get to the login screen (nor the Xubuntu loading screen). At first I thought I had simply installed something dumb or messed up some settings, but even after reformatting the partition and installing Ubuntu again, the problem remains. Before I reformatted, xfce4's window manager wouldn't start either, but it does now (when I am able to see anything). I can access the virtual consoles (Ctrl+Alt+F1), but I can't see anything there either; I have still managed to shut the computer down from one (sudo shutdown -h now).

    Read the article

  • OpenGL behaviour depending on the graphics card?

    - by Dan
    This is something that has never happened to me before. I have OpenGL code that uses GLSL shaders to texture a 3D model. The code involves a lot of GPU texture processing, blending, and so on. I wanted to check how the performance of my code improves with a faster graphics card (both the new and old cards are NVIDIA, always using the NVIDIA development drivers). But now I have found that once I run the code on the new graphics card, it behaves completely differently (the final render looks wrong), probably because some blending effect is not performed correctly. I haven't really looked into what has changed, but I am guessing that some OpenGL states are set differently by default. Is this possible? Have you ever found different OpenGL/GLSL behaviour using different graphics cards? Any "fast" solution? (So far I've thought of plugging the old card back in, pushing all the OpenGL default states, and comparing them with the ones I initially get using the new card.)
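    A generic sketch of the "pin down the defaults" idea mentioned at the end of the question: explicitly set every piece of state the renderer depends on at start-up instead of trusting whatever the driver initialises. The particular values below are placeholders, not the asker's actual requirements:

        // Set the states the renderer relies on explicitly, once, at start-up,
        // so behaviour does not depend on a driver's idea of the defaults.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glBlendEquation(GL_FUNC_ADD);
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);
        glDisable(GL_CULL_FACE);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // texture uploads with tightly packed rows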

    Read the article

  • What do the different hardware temperatures listed in Psensor, Sensors Viewer, etc. refer to?

    - by cipricus
    I have installed Psensor and see a list of temperatures, but they are only labelled "Temperature 1", 2, 3 and so on. I can guess which one is the processor, but who's who for sure? The same question applies to Sensors Viewer. I can also type sensors in a terminal, but I get no more than what acpi -t gives:
        Thermal 0: ok, 65.0 degrees C
        Thermal 1: ok, 37.9 degrees C
        Thermal 2: ok, 56.0 degrees C
        Thermal 3: active, 71.0 degrees C
    Regarding Psensor, I know for a fact that: the temperature that varies most with CPU use is Temp1, and it is one of the two highest; the other high temperature is Temp4, which goes through the ceiling when using YouTube/Flash; Temp2 is very stable at a medium level of 50-60 degrees Celsius; and Temp3 is by far the lowest and the one that changes least. So I guess Temp1 is the CPU temperature and Temp4 is the GPU temperature, and Temp2 and Temp3 must be the motherboard and the HDD. Does anybody know for sure?

    Read the article

  • Black screen after upgrading from 13.04 to 13.10

    - by Harri
    I just upgraded from 13.04 to 13.10 and all I got was a black screen. The hardware is an Asus Zenbook UX31A (Intel GPU). I can hear the login-screen drums play, so the system does boot to the login screen. When I try to boot kernel 3.11.0-12 in recovery mode, it tells me "initctl: event failed". If I then press Ctrl+Alt+F2, log in and run startx, it dies with "Fatal server error: no screens found". Here are some logs from /var/log/Xorg.0.log: http://pastebin.com/ZQasUKJx. Kernel 3.8.0-31 works fine, as did everything before the upgrade.

    Read the article

  • ACER Aspire V5-171 compatibility

    - by JamerTheProgrammer
    I'm thinking about buying a V5-171 with an i3 in it, but I'm worried about Secure Boot: I've heard some people can't turn it off, and then Ubuntu won't work. I'm not shy about opening the laptop up and replacing the hard drive with one that has Ubuntu preinstalled. I'm also worried about the Wi-Fi: I have heard it drops out quite a bit for people, along with the trackpad not working. I don't mind replacing the Wi-Fi module inside (if that's even possible). Is the GPU (HD 4000, I think) supported in Ubuntu with full video acceleration? Thanks!

    Read the article

  • Firefox 4: beta 12 released, with improvements to Flash support and hardware acceleration

    Firefox 4: beta 12 released, with improvements to Flash support and hardware acceleration. Update of 28/02/11: the twelfth, and presumably final, beta of Firefox 4 came out this weekend. It fixes 7,000 bugs and improves (Flash) video playback. Hardware acceleration (assigning specific computation tasks to the GPU rather than the CPU) has also been reworked, all of which makes the browser more stable. Unfortunately, it does not yet include the "miracle" patches that halve its startup time (read more…)

    Read the article

  • Android: Layouts and views or a single full screen custom view?

    - by futlib
    I'm developing an Android game, and I'm making it so that it can run on low-end devices without a GPU, so I'm using the 2D API. So far I have tried to use Android's mechanisms, such as layouts and activities, where possible, but I'm beginning to wonder whether it wouldn't be easier to just create a single custom view (or one per activity) and do all the work there. Here's an example of how I currently do things: I'm using a layout to display the game's background as an image view, with the square game area, which is a custom view, centred in the middle. What would you say? Should I continue to use layouts where possible, or is it more common/reasonable to just use one large custom view? I'm thinking that the latter would probably also make it easier to port my code to other platforms.

    Read the article

  • pwmconfig: "There are no pwm-capable sensor modules installed"

    - by Sman789
    I'm trying to reduce my fan speed with fancontrol and pwmconfig because, despite the temperatures being the same, the fans are much louder on Linux (Ubuntu GNOME 14.04) than on Windows. I've followed the instructions in the first answer here, but when running pwmconfig I get: "There are no pwm-capable sensor modules installed". I know that my system has working thermal sensors because Psensor has no trouble telling me my CPU and GPU temperatures. I would appreciate any help in reducing my fan speed to its Windows level (Windows uses the ASUS AI Suite 3 software which came with the Z87-A motherboard, if that's relevant).

    Read the article

  • What is the situation with OpenGL under Ubuntu Unity and GNOME 3?

    - by user827992
    In a GNU/Linux distribution, Xorg is usually installed as the main graphics server. It operates with a client-server logic: a special window is designated as the desktop environment, and that special window handles all the eye-candy stuff like decorations, icons and effects. The problem is that the latest UIs rely heavily on hardware acceleration: Unity is an overlay on Compiz, and GNOME Shell also requires an active GPU driver to work well. So the questions are: if I can find multiple OpenGL implementations on the same OS, which one is handling my OpenGL buffer? How is the OpenGL buffer managed compared to the other windows? And how can I be sure that my OpenGL implementation talks to the hardware and is not going through the client-server logic of Xorg? For example, I have tried the Clutter library and have experienced problems only under Unity and GTK/GNOME; no problems under other OSes.
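    As a small aside that does not answer the compositing questions: once a context exists, the implementation strings report which driver and device are actually servicing it, which helps distinguish a hardware driver from a software fallback such as Mesa's llvmpipe. A generic sketch, assuming a current OpenGL context:

        #include <cstdio>
        #include <GL/gl.h>

        // Must be called with a current OpenGL context; a software renderer
        // typically reports something like "llvmpipe" or "Software Rasterizer".
        void print_gl_implementation() {
            std::printf("Vendor:   %s\n", (const char*)glGetString(GL_VENDOR));
            std::printf("Renderer: %s\n", (const char*)glGetString(GL_RENDERER));
            std::printf("Version:  %s\n", (const char*)glGetString(GL_VERSION));
        }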

    Read the article

  • Loud fans despite cool system under Linux (but not Windows)

    - by Sman789
    My new desktop computer runs almost silently under Windows, but the fans seem to run at a constantly high speed under Linux. Psensor shows that the GPU (with NVIDIA drivers) is at thirty-something degrees and the CPU is about the same, so it's not just down to Linux somehow being more processor-intensive. I've read that the BIOS controls the fans under Linux, which makes sense given the high fan speeds when sitting in the BIOS as well. It's under Windows, where the ASUS AI Suite 3 software seems to take control, that the system runs more quietly and only speeds the fans up when required. So is there a Linux app which offers similar dynamic control of the fans, or a setting hidden somewhere in the ASUS BIOS which allows the same regardless of the OS? EDIT: I've tried using lm-sensors and fancontrol, but pwmconfig tells me "There are no pwm-capable sensor modules installed". This is even though the sensors-detect command does find an "Intel digital thermal sensor", and despite the sensors working fine in apps like Psensor. Help getting this to work would likely solve the problem.

    Read the article

  • Files power_profile and power_method missing on Ubuntu 12.04 after a clean install

    - by Nikola
    OK, here is the problem: I am using GNOME Shell, Ubuntu 12.04, kernel 3.2.0-32-generic-pae and the proprietary drivers for my ATI card (installed via "Additional Drivers"). The laptop is an HP ProBook 4310s, and I want to control power_profile and power_method because my GPU temperature is high. Before I reinstalled Ubuntu 12.04, I used a .sh script on startup to write to those files and everything worked like a charm, but now they are missing and I can't create them. This is what I get when I try to create the directories: mkdir: cannot create directory `/sys/class/drm': No such file or directory. How can I get them back? If you need more information, just ask and I will provide it.

    Read the article

  • downgrade ppa packages to versions available at a previous point in time

    - by Will
    The backstory is that the normal Intel GPU drivers don't provide the various OpenGL extensions that my hobby coding and some games want, so I have to install xorg-edgers, and then everything is happy. However, last Wednesday or so there was an update to xorg-edgers, lots of packages, and it broke badly: the drivers lock up and take the whole computer with them, requiring a hard reset. So how can you downgrade, that is, select package versions in a PPA that represent a point in the past, ignoring versions newer than that?

    Read the article

  • How can I generate signed distance fields (2D) in real time, fast?

    - by heishe
    In a previous question, it was suggested that signed distance fields can be precomputed, loaded at runtime and then used from there. For reasons I will explain at the end of this question (for people who are interested), I need to create the distance fields in real time. There are some papers out there on different methods which are supposed to be viable in real-time environments, such as Chamfer distance transforms and Voronoi-diagram-approximation based transforms (as suggested in this presentation by the PixelJunk Shooter developer), but I (and, it can be assumed, a lot of other people) have a very hard time actually putting them to use, since they're usually long, heavy on the maths and not very algorithmic in their explanations. What algorithm would you suggest for creating the distance fields in real time (preferably on the GPU), especially considering the resulting quality of the distance fields? Since I'm looking for an actual explanation/tutorial rather than a link to just another paper or slide deck, this question will receive a bounty once it's eligible for one :-). Here's why I need to do it in real time: There's something else:
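    For concreteness, a small CPU-side sketch of the two-pass chamfer transform named above, using the common 3/4 integer weights on a binary mask. It produces unsigned distances only; a signed field is the transform of the mask minus the transform of its inverse, and the real-time papers parallelise this same propagation on the GPU:

        #include <algorithm>
        #include <climits>
        #include <vector>

        // Two-pass 3/4 chamfer distance transform: for every pixel, approximate
        // the distance (in units of 1/3 pixel) to the nearest "inside" pixel.
        std::vector<int> chamfer_distance(const std::vector<bool>& inside, int w, int h) {
            const int INF = INT_MAX / 4;               // large, but safe to add 3 or 4 to
            std::vector<int> d(w * h);
            for (int i = 0; i < w * h; ++i) d[i] = inside[i] ? 0 : INF;

            auto at = [&](int x, int y) -> int& { return d[y * w + x]; };

            // Forward pass: propagate from the top-left corner.
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x) {
                    if (x > 0)                  at(x, y) = std::min(at(x, y), at(x - 1, y) + 3);
                    if (y > 0)                  at(x, y) = std::min(at(x, y), at(x, y - 1) + 3);
                    if (x > 0 && y > 0)         at(x, y) = std::min(at(x, y), at(x - 1, y - 1) + 4);
                    if (x < w - 1 && y > 0)     at(x, y) = std::min(at(x, y), at(x + 1, y - 1) + 4);
                }

            // Backward pass: propagate from the bottom-right corner.
            for (int y = h - 1; y >= 0; --y)
                for (int x = w - 1; x >= 0; --x) {
                    if (x < w - 1)              at(x, y) = std::min(at(x, y), at(x + 1, y) + 3);
                    if (y < h - 1)              at(x, y) = std::min(at(x, y), at(x, y + 1) + 3);
                    if (x < w - 1 && y < h - 1) at(x, y) = std::min(at(x, y), at(x + 1, y + 1) + 4);
                    if (x > 0 && y < h - 1)     at(x, y) = std::min(at(x, y), at(x - 1, y + 1) + 4);
                }
            return d;  // divide by 3.0 for approximate distances in pixels
        }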

    Read the article

  • What are the factors that determine the default frequency of a shader call?

    - by user827992
    After playing for some days with various vertex and fragment shaders, it seems clear to me that these programs are called by the GPU on each and every rendering cycle. The problem is that I can't really quantify this frequency, and I can't tell whether it is based on some default values or not, because I don't have a big enough collection of hardware right now to do extensive tests. For all I know the answer could be really trivial, like "it's the same as the refresh rate of your monitor", but I would like some good answers to be clear on this. For instance, it looks really odd to me that all the techniques I have seen so far for controlling the FPS use a call to the GLUT function glutGet(GLUT_ELAPSED_TIME) to retrieve a value in milliseconds for when the rendering started, and then rely on the CPU to do the math. Why can't I set an FPS value in OpenGL, if OpenGL clearly has a counter and a timer/clock? PS: I'm referring to OpenGL 3.0+.
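    A minimal sketch of the CPU-side pacing the question describes, assuming a GLUT main loop (glutGet(GLUT_ELAPSED_TIME) returns milliseconds since glutInit(); the 16 ms budget is an arbitrary target of roughly 60 FPS):

        #include <GL/glut.h>

        static int g_last_frame_ms = 0;

        // Registered with glutIdleFunc(): request a redraw only when at least
        // ~16 ms have passed since the previous one. OpenGL itself has no
        // "set FPS" entry point, so the pacing is done on the CPU.
        void idle() {
            const int now = glutGet(GLUT_ELAPSED_TIME);
            if (now - g_last_frame_ms >= 16) {
                g_last_frame_ms = now;
                glutPostRedisplay();  // triggers the registered display callback
            }
        }

    In practice, enabling vsync (a swap interval of 1) gives the "same as the monitor refresh rate" behaviour the question guesses at, without any CPU-side timing.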

    Read the article

  • How can I generate signed distance fields in real time, fast?

    - by heishe
    In a previous question, it was suggested that signed distance fields can be precomputed, loaded at runtime and then used from there. For reasons I will explain at the end of this question (for people who are interested), I need to create the distance fields in real time. There are some papers out there on different methods which are supposed to be viable in real-time environments, such as Chamfer distance transforms and Voronoi-diagram-approximation based transforms (as suggested in this presentation by the PixelJunk Shooter developer), but I (and, it can be assumed, a lot of other people) have a very hard time actually putting them to use, since they're usually long, heavy on the maths and not very algorithmic in their explanations. What algorithm would you suggest for creating the distance fields in real time (preferably on the GPU), especially considering the resulting quality of the distance fields? Since I'm looking for an actual explanation/tutorial rather than a link to just another paper or slide deck, this question will receive a bounty once it's eligible for one :-). Here's why I need to do it in real time:

    Read the article

  • Avoiding lag when rendering Texture2D for first time

    - by Emir Lima
    I have found a similar question here, but it is about playing sounds. I am using 2048 x 2048 textures for sprite sheets, and every time I call spriteBatch.Draw using a sheet for the first time in the game's execution, it causes a considerable lag. The lag doesn't appear on later draws. Has anyone faced this problem before? What can I do to overcome it? Update: I inserted code at the end of the content-loading routine that draws EVERY Texture2D loaded into the ContentManager before moving on to the game screen. This works well: no lag occurs as different textures are rendered over time, EXCEPT if IsFullScreen is changed. Apparently, changing this property makes the textures loaded on the GPU go away. Is that correct?

    Read the article

  • OpenGL Vertex Attributes - Normalisation

    - by Daniel
    Alas, I have searched and have found no definitive answer. When would you normalise the vertex data in OpenGL using the following command: glVertexAttribPointer(index, size, type, normalize, stride, pointer); i.e. when would normalize == GL_TRUE? In what situations, and why, would you choose to let the GPU do the conversion instead of preprocessing the data? Every example I have ever seen has this set to GL_FALSE, and I cannot personally see a use for it. But Khronos aren't stupid, so it must be there for something useful (and probably common).
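    One common case, sketched under the assumption of an interleaved vertex with the colour stored as four unsigned bytes: with normalize == GL_TRUE, integer attributes are rescaled by the GPU to [0, 1] (or [-1, 1] for signed types), so compact 8-bit colours or quantised normals arrive in the shader as floats with no CPU-side conversion:

        #include <cstddef>
        #include <GL/gl.h>

        // Hypothetical interleaved layout: 12-byte position + 4-byte RGBA colour.
        struct Vertex {
            float         pos[3];
            unsigned char rgba[4];   // 0..255 per channel
        };

        // Assumes a VBO holding Vertex data is bound to GL_ARRAY_BUFFER.
        void setup_attributes() {
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                                  (const void*)offsetof(Vertex, pos));
            // normalize == GL_TRUE: bytes 0..255 reach the shader as a vec4 in 0.0..1.0.
            glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex),
                                  (const void*)offsetof(Vertex, rgba));
            glEnableVertexAttribArray(0);
            glEnableVertexAttribArray(1);
        }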

    Read the article

  • How to solve dual monitor issue, which happens only during X start?

    - by tamashumi
    When X is loading and two monitors are connected, instead of the login screen I see this: ...after clicking OK, a selection appears: Then I go to a console login, disconnect the secondary monitor cable by hand, and restart lightdm with sudo service lightdm restart ...and voilà! The system loads fine. If I disconnect the cable before boot, X loads fine too. It's not a nice "feature" to have to disconnect the cable on every boot or X restart. I tried deleting monitors.xml, but it didn't help. The situation involves my notebook with an Intel integrated GPU, and the same thing happens with two different pairs of monitors: at the office and at home. How can I fix this? Ubuntu 12.04 x64 Desktop with the default Unity GUI.

    Read the article

  • internal error message pops up after each time system is rebooted

    - by Biju
    I installed Ubuntu 12.04 using Wubi, but each time I boot the system an internal error message pops up, as shown below:
        Executable path: /usr/share/apport/apport-gpu-error-intel.py
        Package: xserver-xorg-video-intel 2:2.17.0-1ubuntu4
        Problem type: crash
        Apport version: 2.0.1-0ubuntu7
    and so on. I had earlier upgraded to Ubuntu 12.04 from Ubuntu 11.10 and encountered the same issue, so I uninstalled the OS and reinstalled it using Wubi. I posted the same query on ubuntu.com/support (question number 195525) but couldn't find a solution. I am using a Dell Inspiron with an Intel Pentium. I need your help in resolving this issue. Thank you, Biju

    Read the article

  • Moonlight 4 beta moves closer to Silverlight 4: the open-source implementation adds hardware acceleration and H.264 support

    The Moonlight 4 beta moves closer to Silverlight 4: the open-source implementation now offers hardware acceleration and H.264 support. Moonlight 4 has just been released as a beta. The open-source implementation of Silverlight now provides hardware acceleration (GPU handling of video and 3D) as well as support for the H.264 codec. With this development version, Moonlight incorporates several new features from Silverlight 4, notably support for the Silverlight 3 and 4 APIs. It also makes it possible to build and run applications "outside the browser". However, this beta does not yet offer all of the…

    Read the article
