Search Results

Search found 955 results on 39 pages for 'gpu'.

Page 23/39 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Write depth buffer to texture

    - by innochenti
    I need to read the depth buffer from the GPU and write it to a texture. How can this be done? Here is how the texture for the depth buffer is created:

        depthBufferDesc.Width = screenWidth;
        depthBufferDesc.Height = screenHeight;
        depthBufferDesc.MipLevels = 1;
        depthBufferDesc.ArraySize = 1;
        depthBufferDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
        depthBufferDesc.SampleDesc.Count = 1;
        depthBufferDesc.SampleDesc.Quality = 0;
        depthBufferDesc.Usage = D3D10_USAGE_DEFAULT;
        depthBufferDesc.BindFlags = D3D10_BIND_DEPTH_STENCIL;
        depthBufferDesc.CPUAccessFlags = 0;
        depthBufferDesc.MiscFlags = 0;
        m_device->CreateTexture2D(&depthBufferDesc, NULL, &m_depthStencilBuffer);

    Also, I've got another question: is it possible to bind the depth buffer texture as a sampler to the pixel shader?
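
    For reference, the usual D3D10 route to making the depth buffer samplable (a sketch, not from the post; depthTex, dsv and depthSRV are placeholder names, while m_device, screenWidth and screenHeight are taken from the question) is to create the texture with a typeless format and both bind flags, then create a depth-stencil view and a shader-resource view over the same resource:

        // Sketch: a depth buffer that can also be sampled by a pixel shader.
        ID3D10Texture2D*          depthTex = NULL;
        ID3D10DepthStencilView*   dsv      = NULL;
        ID3D10ShaderResourceView* depthSRV = NULL;

        D3D10_TEXTURE2D_DESC desc = {};
        desc.Width            = screenWidth;
        desc.Height           = screenHeight;
        desc.MipLevels        = 1;
        desc.ArraySize        = 1;
        desc.Format           = DXGI_FORMAT_R24G8_TYPELESS;   // typeless so the views below can reinterpret it
        desc.SampleDesc.Count = 1;
        desc.Usage            = D3D10_USAGE_DEFAULT;
        desc.BindFlags        = D3D10_BIND_DEPTH_STENCIL | D3D10_BIND_SHADER_RESOURCE;
        m_device->CreateTexture2D(&desc, NULL, &depthTex);

        // View used while rendering depth.
        D3D10_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
        dsvDesc.Format        = DXGI_FORMAT_D24_UNORM_S8_UINT;
        dsvDesc.ViewDimension = D3D10_DSV_DIMENSION_TEXTURE2D;
        m_device->CreateDepthStencilView(depthTex, &dsvDesc, &dsv);

        // View used when sampling the depth values in a pixel shader.
        D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
        srvDesc.Format              = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
        srvDesc.ViewDimension       = D3D10_SRV_DIMENSION_TEXTURE2D;
        srvDesc.Texture2D.MipLevels = 1;
        m_device->CreateShaderResourceView(depthTex, &srvDesc, &depthSRV);

    With the texture created this way, the shader-resource view can be bound to a sampler slot in the pixel shader, as long as the resource is not simultaneously bound as the active depth-stencil view.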

    Read the article

  • W520 External monitor setup with Ubuntu 12.10

    - by user108372
    I just installed a fresh Ubuntu 12.10 64-bit Desktop on my Lenovo W520. It looks like there are a lot of challenges around making it work with the out-of-the-box Nouveau drivers, the proprietary Nvidia drivers, or the Intel GPU. I looked at a couple of notes on how to make it work with Bumblebee and Nvidia Optimus, but none of them seems to work for 12.10. Does anybody have a solid answer on this? It seems like a lot of people are suffering from this. Here is my xrandr output; let me know if you need any additional information.

        Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 8192 x 8192
        LVDS1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
           1920x1080      60.0*+   59.9     50.0
           1680x1050      60.0     59.9
           1600x1024      60.2
           1400x1050      60.0
           1280x1024      60.0
           1440x900       59.9
           1280x960       60.0
           1360x768       59.8     60.0
           1152x864       60.0
           1024x768       60.0
           800x600        60.3     56.2
           640x480        59.9
        VGA1 disconnected (normal left inverted right x axis y axis)

    Thanks, Sef

    Read the article

  • Is this CPU usage normal for Xorg?

    - by Samuaz
    I checked System Monitor to see why my CPU frequency increases even when I'm not doing anything, and saw that Xorg is always using 10-40% of the CPU, even when nothing much is happening on the desktop or I'm simply surfing the Internet. Is this normal? If not, how can I fix it? I have: a white MacBook 4,1 with a Core 2 Duo running at 2.10 GHz, an Intel GMA X3100 GPU, 4 GB of RAM, and Ubuntu 11.04. I am running Unity and I do not have many effects enabled. I have only activated Compiz animations, scale, desktop, some shadows...

    Read the article

  • Nvidia PowerMizer Performance Levels

    - by jeffrey
    Is there any way to configure Nvidia PowerMizer performance levels? My current setup has 3 performance levels, with the lowest one running at 50 MHz. The problem with this is that it lags Compiz when the card drops to the lowest performance level, 0. Minimizing, maximizing, and dragging windows are all sluggish when it's at the lowest level. Once PowerMizer leaves level 0, everything is very smooth and runs great. Is there any way for me to remove level 0 and just run the two higher levels, 1 and 2? I don't want to completely disable PowerMizer, but I can't stand the lagging once it drops into performance level 0. Setting the option "prefer maximum performance" fixes the problem because it disables PowerMizer, but the GPU at its stock speed of 850 MHz is overkill for most desktop use. Intel i5 2500K, Asus Gene-Z Z68, EVGA 560 Ti FPB (driver 295.40), Ubuntu 12.04 LTS x64.

    Read the article

  • Unreal Engine 3 runs on Windows 8 RT; Epic Games' engine tries to take market share from Unity

    Unreal Engine 3 runs on Windows 8 RT. A logical response from Epic Games to Unity's recent announcement. Following the announcement of Windows 8 and Windows Phone 8 support in Unity 3D, NVIDIA has released a video showing the flagship demo for mobile platforms, Epic Citadel, built on Unreal Engine 3. It runs on the ASUS Vivo Tab RT tablet, which is powered by an NVIDIA Tegra. As a reminder, this ARM-based processor combines the CPU and GPU on a single chip. One of the points ...

    Read the article

  • When should I clear an auxiliary render target?

    - by Raptormeat
    I'm using a few different render targets in my game in addition to the back buffer. These other render targets are only used in a few places, for specific tasks. I'm wondering when I should be clearing them. Right now I clear all of my render targets at the beginning of the frame, and it seems like I'm waiting for all the textures to clear before the rest of the drawing gets underway. Would it be more efficient to clear these textures later in the frame, when they aren't being used? Is there any hope of the GPU sort of clearing them "on the side" while unrelated rendering is happening? Or are these tasks always sequential and will I always need to wait for clearing?
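
    For illustration, a minimal D3D10-style sketch (not from the post) of the "clear just before use" ordering discussed above; m_backBufferRTV, m_depthStencilView, m_auxRTV, DrawScene and DrawGlowPass are placeholder names:

        // Main scene first; the auxiliary target is not touched yet.
        float black[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
        m_device->OMSetRenderTargets(1, &m_backBufferRTV, m_depthStencilView);
        m_device->ClearRenderTargetView(m_backBufferRTV, black);
        DrawScene();

        // Clear the auxiliary render target only at the point in the frame
        // where it is actually needed, instead of clearing everything up front.
        m_device->OMSetRenderTargets(1, &m_auxRTV, NULL);
        m_device->ClearRenderTargetView(m_auxRTV, black);
        DrawGlowPass();

    Whether this actually helps is hardware- and driver-dependent; many GPUs implement clears as cheap fast-clear operations, so measuring both orderings is the only reliable answer.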

    Read the article

  • Kinect Fusion soon to be publicly available; the real-time 3D scanning system will be integrated into the next Kinect for Windows SDK

    Kinect Fusion soon available to developers: the real-time 3D scanning system will be integrated into the next Kinect for Windows SDK. The next update of the Kinect for Windows SDK will include Kinect Fusion. Kinect Fusion uses a moving Kinect sensor to capture depth data and build high-quality 3D models, for example a model of a room and its contents. The implementation relies on the GPU for camera tracking, while surface reconstruction runs interactively and in real time, enabling augmented-reality applications and human-machine interaction. K...

    Read the article

  • nVidia GT 220 not working properly with Ubuntu 12.10

    - by Glaedr
    I used to enable the proprietary Nvidia drivers on every previous Ubuntu release to get it working properly (otherwise I was forced to a very low resolution and no graphics acceleration), and everything worked fine then. In particular, I noticed - I still don't know why - that on every OS the GPU fan gets noisy until the video drivers are loaded (runlevel 5, it seems, on Linux), and then slows down to a normal speed. Today I installed 12.10. Running the live CD, surprisingly, everything worked fine: full resolution, acceleration, silent fan, and so on. The running driver was nvidia-current (GT 216). After installing and booting I found that the fan was running flat out. The installed driver is nouveau. I tried installing nvidia-current, or any other proprietary driver, even installing the kernel headers and source and then the drivers (as suggested here), but all I'm getting with the proprietary drivers is, ironically, low resolution, a noisy fan and no acceleration (and thus Unity and Compiz refusing to start). Does anybody know a way out?

    Read the article

  • Xbox Surface: a 7-inch tablet dedicated to gaming soon? Microsoft is reportedly working on it

    Xbox Surface: a 7-inch tablet dedicated to gaming soon? Microsoft is reportedly working on it. After Apple with the iPad Mini, it would be Microsoft's turn to start designing a small (7-inch) tablet. Unlike other devices of the same type, Microsoft's tablet would be specifically optimized for gaming. According to an article in The Verge citing sources close to Microsoft, development of the project is reportedly already well under way. The Xbox Surface tablet would be based on a custom ARM architecture with substantial memory bandwidth, in order to meet the needs of a GPU more powerful than the one available ...

    Read the article

  • Intel is working on a 48-core processor for mobile; the chip could be available in 5 years

    Intel is working on a 48-core processor for mobile devices; the chip could be available in 5 years. Intel is best known for its PC processors, but in the smartphone and tablet market the company is lagging behind - a situation the firm wants to change by offering innovative alternatives to current solutions. The company's researchers are currently working on better ways to use and manage a large number of cores in a mobile device. Today's mobile devices use dual-core or at most quad-core processors with several GPUs. Intel's work could lead ...

    Read the article

  • Recommended Books for OpenGL [closed]

    - by TheBlueCat
    I'm fairly new to OpenGL and have been researching books that would be beneficial. These have been suggested to me (I've finished reading the OpenGL book online): Real-Time Rendering, GPU Gems 3, OpenGL SuperBible. Does anyone know any other books that they've found useful in the past, even if they cover higher-level algorithms? Also, can anyone suggest an IDE or text editor for Linux? I'm using Komodo and it's super buggy. I booted into Windows today, tried Visual Studio and loved it; is there anything similar for Linux? Although the books I've been reading say not to use IDEs, partly because of the reliance you place on them. I use Eclipse a lot for my Java programming; can I use C and OpenGL with that? Lastly, do you think it would be more beneficial to stay on Windows and program in C/OpenGL there? I do like Linux, but I found Visual Studio to be pretty good in some respects.

    Read the article

  • JavaFX as Java's next-generation client solution - a report from Java Developer Workshop #2 | WebLogic Channel

    - by ???02
    Report from the WebLogic Channel on "Java Developer Workshop #2", held on December 1, 2011 as a follow-up to JavaOne 2011. Nandini Ramani of Oracle's Client Java Group presented the session "JavaFX 2.0 - Next generation Java client solution". Key points: JavaFX 2.0 exposes a 100% Java API (the separate JavaFX Script language is gone), adds FXML (FX Markup Language) for declarative, XML-based UI description usable alongside JavaScript, Groovy, Scala and other JVM languages, offers Swing interoperability, and embeds a WebKit-based web component. The JavaFX runtime replaces AWT-based windowing with the Glass Windowing Toolkit and renders 2D/3D through the GPU-accelerated Prism pipeline, handling 60 fps HD video with VP6 and MP3 support. Tooling covers NetBeans IDE 7.0, Eclipse and JDeveloper integration plus the JavaFX Scene Builder UI design tool; 3D scenes and a Kinect-driven 3D demo were shown, and support for Linux, PCs and the iPad was mentioned as planned. Ramani positioned JavaFX as the successor GUI technology to Swing and AWT, and noted that JavaFX is being open-sourced as OpenJFX within the OpenJDK project.

    Read the article

  • OpenGL Shading Language portability

    - by Luca
    I've noticed that my GLSL shaders do not compile when the GLSL version is lower than 130. What are the most critical elements for writing backward-compatible shader source? I don't need full backward compatibility, but I'd like to understand the main guidelines for getting simple shaders running on GPUs with GLSL versions lower than 130. Thank you
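
    As an illustration (not from the post), here is the same fragment shader written for GLSL 1.30+ and in a form that also compiles on GLSL 1.10/1.20; the main differences are in/out versus varying (and attribute in vertex shaders), texture() versus texture2D(), and a user-declared output versus the built-in gl_FragColor:

        // Modern style (GLSL >= 1.30): explicit in/out variables and texture().
        const char* fragModern =
            "#version 130\n"
            "in vec2 uv;\n"
            "out vec4 fragColor;\n"
            "uniform sampler2D tex;\n"
            "void main() {\n"
            "    fragColor = texture(tex, uv);\n"
            "}\n";

        // Backward-compatible style (GLSL 1.10/1.20): varying inputs,
        // texture2D() and the built-in gl_FragColor output.
        const char* fragLegacy =
            "#version 120\n"
            "varying vec2 uv;\n"
            "uniform sampler2D tex;\n"
            "void main() {\n"
            "    gl_FragColor = texture2D(tex, uv);\n"
            "}\n";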

    Read the article

  • Faster integer division when denominator is known?

    - by aaa
    Hi, I am working on a GPU device which has very high integer division latency, several hundred cycles, and I am looking to optimize divisions. All divisions are by a denominator from the set {1, 3, 6, 10}; the numerator is a runtime positive value, roughly 32000 or less. Due to memory constraints, a lookup table is not an option. Can you think of alternatives? I have thought of computing floating-point inverses and using those to multiply the numerator. Thanks
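
    One alternative, sketched below (not from the post): with the denominators fixed at {1, 3, 6, 10}, each division can be replaced by a multiply and a shift with a precomputed per-denominator constant. The constants here are chosen so the result is exact for non-negative numerators up to at least 32767; they should be re-verified if the range changes:

        #include <cstdint>

        // Exact n / d for d in {1, 3, 6, 10} and n in [0, 32767],
        // using multiply-by-reciprocal plus a shift instead of hardware division.
        static inline uint32_t div_by_const(uint32_t n, int which)
        {
            switch (which) {
            case 0:  return n;                          // d = 1
            case 1:  return (n * 87382u) >> 18;         // d = 3,  87382 = ceil(2^18 / 3)
            case 2:  return (n * 43691u) >> 18;         // d = 6,  43691 = ceil(2^18 / 6)
            default: return (n * 52429u) >> 19;         // d = 10, 52429 = ceil(2^19 / 10)
            }
        }

    On hardware where integer division costs hundreds of cycles, a 32-bit multiply plus shift is typically a handful of cycles, and the same idea carries over directly to device code.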

    Read the article

  • CUDA: more threads for the same work = longer run time despite better occupancy. Why?

    - by zenna
    I encountered a strange problem where increasing my occupancy by increasing the number of threads reduced performance. I created the following program to illustrate the problem:

        #include <stdio.h>
        #include <stdlib.h>
        #include <cuda_runtime.h>

        __global__ void less_threads(float *d_out) {
            int num_inliers = 0;
            for (int j = 0; j < 800; ++j) {
                // Do 12 computations
                num_inliers += threadIdx.x*1;
                num_inliers += threadIdx.x*2;
                num_inliers += threadIdx.x*3;
                num_inliers += threadIdx.x*4;
                num_inliers += threadIdx.x*5;
                num_inliers += threadIdx.x*6;
                num_inliers += threadIdx.x*7;
                num_inliers += threadIdx.x*8;
                num_inliers += threadIdx.x*9;
                num_inliers += threadIdx.x*10;
                num_inliers += threadIdx.x*11;
                num_inliers += threadIdx.x*12;
            }
            if (threadIdx.x == -1)
                d_out[blockIdx.x*blockDim.x + threadIdx.x] = num_inliers;
        }

        __global__ void more_threads(float *d_out) {
            int num_inliers = 0;
            for (int j = 0; j < 800; ++j) {
                // Do 4 computations
                num_inliers += threadIdx.x*1;
                num_inliers += threadIdx.x*2;
                num_inliers += threadIdx.x*3;
                num_inliers += threadIdx.x*4;
            }
            if (threadIdx.x == -1)
                d_out[blockIdx.x*blockDim.x + threadIdx.x] = num_inliers;
        }

        int main(int argc, char* argv[]) {
            float *d_out = NULL;
            cudaMalloc((void**)&d_out, sizeof(float)*25000);
            more_threads<<<780, 128>>>(d_out);
            less_threads<<<780, 32>>>(d_out);
            return 0;
        }

    Note that both kernels should do the same amount of work in total (the "if (threadIdx.x == -1)" test is a trick to stop the compiler from optimising everything out and leaving an empty kernel). The work should be the same, as more_threads uses 4 times as many threads, with each thread doing 4 times less work. Significant results from the profiler are as follows:

        more_threads: GPU runtime = 1474 us, reg per thread = 6,  occupancy = 1,    branch = 83746, divergent_branch = 26, instructions = 584065, gst request = 1084552
        less_threads: GPU runtime = 921 us,  reg per thread = 14, occupancy = 0.25, branch = 20956, divergent_branch = 26, instructions = 312663, gst request = 677381

    As I said previously, the run time of the kernel using more threads is longer; this could be due to the increased number of instructions. Why are there more instructions? Why is there any branching, let alone divergent branching, considering there is no conditional code? Why are there any gst requests when there is no global memory access? What is going on here? Thanks

    Read the article

  • Jerky Silverlight 4 animations when running app in OOB

    - by sha1dy
    I was playing with the new Silverlight 4 and, to my surprise, when I run my sample application out-of-browser (OOB) all animations become very jerky whenever I move the mouse around during an animation; when I run the app in the browser, the animations are smooth even while moving the mouse. I tried the app on two different computers and turned on GPU acceleration in the OOB settings - and got the same jerky result. Is this a known problem with Silverlight?

    Read the article

  • When should a uniform be used in shader programming?

    - by Phineas
    In a vertex shader, I calculate a vector using only uniforms. Therefore, the outcome of this calculation is the same for all instantiations of the vertex shader. Should I just do this calculation on the CPU and upload it as a uniform? What if I have ten such calculations? If I upload a lot of uniforms in this way, does CPU-GPU communication ever get so slow that recomputing such values in the vertex shader is actually faster?
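
    As a concrete illustration (not from the post), a value that depends only on uniforms can be folded once per draw call on the CPU and uploaded as a single uniform; the helper function, the GLM math library and the uniform name u_lightDirView below are assumptions:

        #include <glm/glm.hpp>
        #include <glm/gtc/type_ptr.hpp>
        #include <GL/glew.h>   // any loader exposing GL 2.0+ entry points will do

        // Fold a uniform-only expression once per draw call on the CPU instead of
        // recomputing it in every vertex-shader invocation, then upload the result.
        void uploadLightDir(GLuint program, const glm::mat4& modelView, const glm::vec3& lightDirWorld)
        {
            glm::vec3 lightDirView = glm::normalize(glm::mat3(modelView) * lightDirWorld);
            GLint loc = glGetUniformLocation(program, "u_lightDirView");
            glUniform3fv(loc, 1, glm::value_ptr(lightDirView));
        }

    A handful of such uploads per draw call is normally negligible next to the draw call itself; CPU-GPU transfer only tends to become a concern with very large numbers of redundant uniform updates.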

    Read the article

  • Specifying a callback in Matlab after any runtime error

    - by JmG
    Is there a way to specify code to be run whenever an error occurs in Matlab? Googling I came across RunTimeErrorFcn and daqcallback, but I believe these are specific to the Data Acquisition Toolbox. I want something for when I just trip over a bug, like an access to an unassigned variable. (I use a library called PsychToolbox that takes over the GPU, so I want to be able to clear its screen before returning to the command prompt.)

    Read the article

  • segmented reduction with scattered segments

    - by Christian Rau
    I have to solve a pretty standard problem on the GPU, but I'm quite new to practical GPGPU, so I'm looking for ideas on how to approach it. I have many points in 3-space which are assigned to a very small number of groups (each point belongs to exactly one group), specifically 15 in this case (it doesn't ever change). Now I want to compute the mean and covariance matrix of every group. On the CPU it's roughly the same as:

        for each point p {
            mean[p.group] += p.pos;
            covariance[p.group] += p.pos * p.pos;
            ++count[p.group];
        }
        for each group g {
            mean[g] /= count[g];
            covariance[g] = covariance[g]/count[g] - mean[g]*mean[g];
        }

    Since the number of groups is extremely small, the last step can be done on the CPU (I need those values on the CPU anyway). The first step is actually just a segmented reduction, but with the segments scattered around. So the first idea I came up with was to first sort the points by their groups. I thought about a simple bucket sort using atomic_inc to compute bucket sizes and per-point relocation indices (got a better idea for sorting? atomics may not be the best idea). After that they're sorted by group and I could possibly come up with an adaptation of the segmented scan algorithms presented here. But in this special case I have a very large amount of data per point (9-10 floats, maybe even doubles if the need arises), so the standard algorithms using one shared memory element per thread and one thread per point might cause problems with per-multiprocessor resources such as shared memory or registers (OK, much more so on compute capability 1.x than 2.x, but still). Due to the very small and constant number of groups I thought there might be better approaches. Maybe there are already existing ideas suited to these specific properties of such a standard problem. Or maybe my general approach isn't that bad and you have ideas for improving the individual steps, like a good sorting algorithm suited to a very small number of keys, or some segmented reduction algorithm minimizing shared memory/register usage. I'm looking for general approaches and don't want to use external libraries. FWIW I'm using OpenCL, but it shouldn't really matter, as the general concepts of GPU computing don't really differ across the major frameworks.
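
    For what it's worth, a host-side C++ model (not OpenCL, and not from the post) of the privatization approach that tends to suit this case: because there are only 15 groups, each work-group can keep its own 15 accumulators, reduce one chunk of points into them, and write a small per-chunk partial result that is combined afterwards, so no sorting or atomics are needed:

        #include <vector>
        #include <array>
        #include <cstddef>

        struct Point { float pos[3]; int group; };
        constexpr int kGroups = 15;

        // Per-chunk partial sums: what each work-group would produce on the GPU.
        struct Partial { std::array<double, kGroups> sum{}; std::array<int, kGroups> count{}; };

        // Shown for the x-sum only; y, z and the covariance terms follow the same pattern.
        std::array<double, kGroups> groupMeansX(const std::vector<Point>& pts, std::size_t chunkSize)
        {
            // Pass 1: every chunk (work-group) accumulates into its own private bins.
            std::vector<Partial> partials((pts.size() + chunkSize - 1) / chunkSize);
            for (std::size_t i = 0; i < pts.size(); ++i) {
                Partial& p = partials[i / chunkSize];
                p.sum[pts[i].group]   += pts[i].pos[0];
                p.count[pts[i].group] += 1;
            }

            // Pass 2: combine the small number of partials (cheap, can stay on the CPU).
            std::array<double, kGroups> sum{};
            std::array<int, kGroups> count{};
            for (const Partial& p : partials)
                for (int g = 0; g < kGroups; ++g) { sum[g] += p.sum[g]; count[g] += p.count[g]; }

            std::array<double, kGroups> mean{};
            for (int g = 0; g < kGroups; ++g) mean[g] = count[g] ? sum[g] / count[g] : 0.0;
            return mean;
        }

    On the GPU the inner loop becomes one work-group striding over its chunk with the accumulators kept in registers or local memory; with 15 groups and around 10 values per point the partial buffer stays tiny, and the final combine can stay on the CPU as in the post.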

    Read the article

  • Delayed computation as DAG in .NET

    - by Tristan
    I'm playing around with declarative / delayed computation, where expressions are built up into a directed acyclic graph. Microsoft's GPU Accelerator does something similar. Are there any libraries available for .NET languages that make it easier to build a representation of the computation?
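
    By way of illustration only (in C++ rather than .NET, and not from the post), the core of such a library is usually just a node type whose construction records operations and whose evaluation - or translation to GPU code - happens later:

        #include <memory>

        // Minimal delayed-computation node: building the graph records operations,
        // Eval() performs them later (a backend could instead translate the DAG).
        struct Node {
            char op;                         // 'c' = constant, '+', '*'
            double value;
            std::shared_ptr<Node> lhs, rhs;
            double Eval() const {
                if (op == 'c') return value;
                double a = lhs->Eval(), b = rhs->Eval();
                return op == '+' ? a + b : a * b;
            }
        };
        using Expr = std::shared_ptr<Node>;

        Expr Const(double v)     { return std::make_shared<Node>(Node{'c', v, nullptr, nullptr}); }
        Expr Add(Expr a, Expr b) { return std::make_shared<Node>(Node{'+', 0.0, a, b}); }
        Expr Mul(Expr a, Expr b) { return std::make_shared<Node>(Node{'*', 0.0, a, b}); }

        // Because nodes are shared pointers, a reused sub-expression is a single
        // node with multiple parents, i.e. a DAG rather than a tree:
        //   Expr x = Add(Const(2), Const(3));
        //   Expr y = Mul(x, x);      // x appears twice but is stored once
        //   double r = y->Eval();    // 25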

    Read the article

  • Ubuntu: Graphics freeze

    - by Phil
    We have recently updated a Java application which runs on an Ubuntu PC, and are now experiencing a graphics problem that we didn't encounter before. The system runs constantly, and at random - maybe twice a month, but sometimes within a few days - the system's graphics will freeze and the GNOME panels become unresponsive. Here is an extract from the syslog:

        Jun 28 05:41:53 swimtag-NM10 kernel: [34802.970021] [drm:i915_hangcheck_elapsed] ERROR Hangcheck timer elapsed... GPU hung
        Jun 28 05:41:53 swimtag-NM10 kernel: [34802.970177] [drm:i915_do_wait_request] ERROR i915_do_wait_request returns -5 (awaiting 937626 at 937625)

    Read the article

  • OpenCL or CUDA: Which way to go?

    - by holydiver
    I'm investigating ways of using the GPU to process streaming data. I have two choices but can't decide which way to go. My criteria are: ease of use (a good API), community and documentation, performance, and future prospects. I'll code in C and C++.

    Read the article

  • Controlling the USB from Windows

    - by b-gen-jack-o-neill
    Hi, I know this probably is not the easiest thing to do, but I am trying to connect a microcontroller and a PC using USB. I don't want to use the microcontroller's internal USART or a USB-to-RS232 converter; the project is intended to help me understand various principles. Getting the communication done from the microcontroller side is a piece of cake - I mean, once I know the protocol, it's relatively easy to implement on the micro, because I am in direct control of everything, even precise timing. But this is not the case on the PC. I am not very familiar with how Windows handles connected devices. In one of my previous questions I asked how Windows works with devices through drivers. I understood that, for Windows' internal use, drivers must expose a default set of functions to the OS. I mean, when the OS wants to access the HDD, it calls the HDD driver (which is probably built into the OS) with specific "questions", so the HDD driver has to be written to cooperate with Windows, with its write function in the proper place to be called by the OS. Something similar goes for the GPU. Even DirectX - DirectX must call specific functions from drivers, so drivers must be written to work with DX. I know many functions from the WinAPI work on their own, but even a "simple" window must in the end be written into a framebuffer, using MMIO at addresses specified by drivers. Am I right? So I expected Windows to have internal functions, parts of the WinAPI, designed to work with certain commonly used devices - that is, to call the manufacturer-supplied drivers. But this seems not to be entirely true, because Windows has no way to communicate through the parallel port. I mean, there is no function in the WinAPI to work with the serial port, but there are functions to work with the HDD, the GPU and so on. But now comes the part where I get very lost. I think Windows must have some built-in functions to communicate over USB, because, for example, it handles USB flash memory. So, is there any WinAPI function designed to let the user operate USB directly, or when I want to use USB myself, do I have to call the desired USB driver's functions myself? Because all you need to send to the USB controller is the device address and the data, right? I mean, I don't have to write any new drivers, am I right? Just call a WinAPI function if there is one, or directly call the original USB driver. Does any of this make sense?
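
    For what it's worth, one common route here (a sketch, not from the post) is to have the microcontroller enumerate as a USB CDC device (a virtual COM port), or to bind it to Microsoft's generic WinUSB driver; in the CDC case the ordinary Win32 file API is enough and no new driver has to be written. The port name COM3 below is a placeholder:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            // Open the virtual COM port created by the USB serial (CDC) driver.
            // "\\\\.\\COM3" is a placeholder; the actual port number varies per system.
            HANDLE h = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                                   0, NULL, OPEN_EXISTING, 0, NULL);
            if (h == INVALID_HANDLE_VALUE) {
                printf("Could not open port (error %lu)\n", GetLastError());
                return 1;
            }

            // Configure baud rate and framing to match the microcontroller.
            DCB dcb = {0};
            dcb.DCBlength = sizeof(dcb);
            GetCommState(h, &dcb);
            dcb.BaudRate = CBR_9600;
            dcb.ByteSize = 8;
            dcb.Parity   = NOPARITY;
            dcb.StopBits = ONESTOPBIT;
            SetCommState(h, &dcb);

            // Exchange a few bytes with the microcontroller.
            const char msg[] = "hello";
            DWORD written = 0, got = 0;
            char reply[16] = {0};
            WriteFile(h, msg, sizeof(msg) - 1, &written, NULL);
            ReadFile(h, reply, sizeof(reply) - 1, &got, NULL);
            printf("sent %lu bytes, received %lu bytes\n", written, got);

            CloseHandle(h);
            return 0;
        }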

    Read the article
