Search Results

Search found 19752 results on 791 pages for 'cpu window'.


  • Has anyone seen .NET 4 RC MVC2 RTM web apps hogging CPU on Win2008 R2?

    - by kim3er
    We have a number of .NET 4 RC ASP.NET MVC2 RTM web applications running on a Windows 2008 R2 server. All behave very well except one, which we regularly find running at 99% CPU. It is the most complex of the applications, but it is not doing anything extraordinary. It relies quite heavily on the ASP.NET Cache, but we have limited the amount of memory it is allowed to use. Does this sound like an issue with the environment? Rich

    Read the article

  • Do Hyper-V guests see multiple CPUs (sockets) or multiple CPU cores when assigned more than 1 vCPU?

    - by Filip Kierzek
    I have SQL Server 2008 Express running on a Hyper-V based virtual machine with two vCPUs. I've just been reading up on SQL Server 2012 Express and noticed that its CPU limit is "Limited to lesser of 1 Socket or 4 cores" (http://msdn.microsoft.com/en-us/library/cc645993(v=SQL.110).aspx). My question is: how do the SQL Server 2012 limits on CPUs/cores translate into vCPUs? Are they "processors" or are they "cores"?
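    [Editor's sketch, not from the question: one way to settle this empirically is to ask the guest OS itself, since SQL Server's socket/core limit is evaluated against whatever topology Windows reports. GetLogicalProcessorInformation is the documented Win32 call; the counting loop below is a minimal illustration.]

      #include <windows.h>
      #include <cstdio>
      #include <vector>

      int main() {
          // First call with a null buffer just reports the required size.
          DWORD len = 0;
          GetLogicalProcessorInformation(nullptr, &len);
          std::vector<SYSTEM_LOGICAL_PROCESSOR_INFORMATION> info(
              len / sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
          if (!GetLogicalProcessorInformation(info.data(), &len)) return 1;

          int packages = 0, cores = 0;
          for (const auto& e : info) {
              if (e.Relationship == RelationProcessorPackage) ++packages;
              if (e.Relationship == RelationProcessorCore)    ++cores;
          }
          // Run inside the guest: these counts are what the licensing
          // limit ("lesser of 1 socket or 4 cores") is checked against.
          std::printf("sockets: %d, cores: %d\n", packages, cores);
          return 0;
      }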

    Read the article

  • Does VMware ESX Fault Tolerance (FT) support depend on the CPU only?

    - by user71784
    I'm trying to find out whether VMware ESX 4.x Fault Tolerance (FT) is supported on a particular server, and VMware's HCL is confusing me. It says that some servers with FT-supported processors (specifically the Xeon 3400 Lynnfield) do not support FT, while some with almost identical specs (the same chipset, for instance) do support FT. Could this be a mistake in the HCL itself? To my understanding, FT support depends only on the CPU. Thanks. RC

    Read the article

  • How can I configure GIMP 2.8 to be a single window in XMonad?

    - by Pubby
    I'm trying to get GIMP to display as a single window in XMonad. Currently it floats strangely in front of every other window and I can't use it. I have tried reading this: http://www.haskell.org/haskellwiki/Xmonad/General_xmonad.hs_config_tips#Gimp But that seems to be written for versions of GIMP before 2.8, which lacked the option to run GIMP in a single window. Since that option now exists, this is an XMonad problem, not a GIMP one. How can I do this?

    Read the article

  • How to limit a process to a single CPU core?

    - by Jonathan
    How do you limit a single-process program, running in a Windows environment, to a single CPU core on a multi-core machine? Is the method the same for a windowed program and a command-line program? UPDATE: The reason for doing this is benchmarking various aspects of programming languages. I need something that works from the very start of the process, therefore @akseli's answer, although great for other cases, doesn't solve my case.
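    [Editor's sketch, not from the question: the standard Win32 approach from inside the process is SetProcessAffinityMask, which pins every thread of the process to the cores set in the mask. The choice of core 0 below is arbitrary.]

      #include <windows.h>
      #include <cstdio>

      int main() {
          // A mask with only bit 0 set: all threads of this process run on CPU 0.
          DWORD_PTR mask = 1;
          if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
              std::fprintf(stderr, "SetProcessAffinityMask failed: %lu\n",
                           GetLastError());
              return 1;
          }
          // ... run the benchmark here; it is now confined to a single core ...
          return 0;
      }

    [For a process you cannot modify, `start /affinity 1 program.exe` from cmd.exe applies the same mask before the program's first instruction runs, which addresses the "from the very start" requirement; so does CreateProcess with CREATE_SUSPENDED, followed by SetProcessAffinityMask and ResumeThread.]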

    Read the article

  • I want to press a key combination in OpenBox and have a terminal appear below my resized Chromium window

    - by Erik
    This is one of those things that looks like it might have a simple solution but turns out to be rather time-consuming once you start investigating PyTile, Xnee and the like. I know, I should just use a tiling window manager, but I suppose it can be done in Openbox, and I am hoping somebody already has a working solution. So: I want to press a key combination while I am in an Openbox session (Lubuntu LXDE, to be more precise) and have my terminal appear below my then-resized Chromium window (say ~60% Chromium and ~40% terminal).
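    [Editor's sketch, not from the question: one approach is a small Xlib helper bound to a key via Openbox's rc.xml (a <keybind> with an Execute action). The 60/40 split, the EWMH property read, and lxterminal are illustrative assumptions.]

      #include <X11/Xlib.h>
      #include <X11/Xatom.h>
      #include <cstdlib>

      // Read the EWMH _NET_ACTIVE_WINDOW property from the root window.
      static Window activeWindow(Display* d) {
          Atom prop = XInternAtom(d, "_NET_ACTIVE_WINDOW", True);
          Atom type; int fmt; unsigned long n, rest;
          unsigned char* data = nullptr;
          Window w = None;
          if (prop != None &&
              XGetWindowProperty(d, DefaultRootWindow(d), prop, 0, 1, False,
                                 XA_WINDOW, &type, &fmt, &n, &rest,
                                 &data) == Success && data && n > 0)
              w = *reinterpret_cast<Window*>(data);
          if (data) XFree(data);
          return w;
      }

      int main() {
          Display* d = XOpenDisplay(nullptr);
          if (!d) return 1;
          int sw = DisplayWidth(d, DefaultScreen(d));
          int sh = DisplayHeight(d, DefaultScreen(d));
          // Resize the focused (Chromium) window to the top ~60% of the screen...
          Window w = activeWindow(d);
          if (w != None) XMoveResizeWindow(d, w, 0, 0, sw, sh * 6 / 10);
          XFlush(d);
          XCloseDisplay(d);
          // ...then spawn a terminal for the remaining ~40%; an Openbox
          // per-application rule (or a second resize once it maps) places it.
          return std::system("lxterminal &");
      }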

    Read the article

  • How can I sell HP and IBM server CPUs?

    - by elvayee
    I'm now working in a company exporting HP and IBM server CPUs. Our prices are very competitive, our quality is very high, and we have good after-sales service. But the problem is: we don't have a paid B2B listing. How can I find customers? If anyone knows, please contact me by MSN: melodyhua123 AT hotmail dot com, or elvayee123 at gmail dot com. Thanks!

    Read the article

  • GPGPU

    What
    GPU obviously stands for Graphics Processing Unit (the silicon powering the display you are using to read this blog post). The extra GP in front of that stands for General Purpose computing. So, altogether, GPGPU refers to computing we can perform on a GPU for purposes beyond just drawing on the screen. In effect, we can use a GPGPU a bit like we already use a CPU: to perform some calculation (that doesn't have to have any visual element to it). The attraction is that a GPGPU can be orders of magnitude faster than a CPU.
    Why
    When I was at the SuperComputing conference in Portland last November, GPGPUs were all the rage. A quick online search reveals many articles introducing the GPGPU topic. I'll just share 3 here: pcper (ignoring all pages except the first, it is a good consumer perspective), gizmodo (nice take using mostly layman terms) and vizworld (answering the question on "what's the big deal").
    The GPGPU programming paradigm (from a high level) is simple: in your CPU program you define functions (aka kernels) that take some input, perform the costly operation and return the output. The kernels are the things that execute on the GPGPU, leveraging its power (and hence executing faster than they could on the CPU), while the host CPU program waits for the results or asynchronously performs other tasks.
    However, GPGPUs have different characteristics to CPUs, which means they are suitable only for certain classes of problem (i.e. data-parallel algorithms) and not for others (e.g. algorithms with branching or recursion or other complex flow control). You also pay a high cost for transferring the input data from the CPU to the GPU (and, vice versa, the results back to the CPU), so the computation itself has to be long enough to justify the transfer overhead. If your problem space fits the criteria, then you probably want to check out this technology.
    How
    So where can you get a graphics card to start playing with all this? At the time of writing, the two main vendors, ATI (owned by AMD) and NVIDIA, are the obvious players in this industry. You can read about GPGPU on this AMD page and also on this NVIDIA page. NVIDIA's website also has a free chapter on the topic from the "GPU Gems" book: A Toolkit for Computation on GPUs.
    If you followed the links above, then you've already come across some of the choices of programming model available today. Essentially, AMD is offering their ATI Stream technology, accessible via a language they call Brook+; NVIDIA offers their CUDA platform, which is accessible from CUDA C. Choosing either of those locks you into that GPU vendor, and hence your code cannot run on systems with cards from the other vendor (imagine if your CPU code would run on Intel chips but not AMD chips). Having said that, both vendors plan to support a new emerging standard called OpenCL, which theoretically means your kernels can execute on any GPU that supports it. To learn more about all of these there is a website: gpgpu.org. The caveat about that site is that (currently) it completely ignores the Microsoft approach, which I touch on next.
    On Windows, there is already a cross-GPU-vendor way of programming GPUs, and that is the DirectX API. Specifically, on Windows Vista and Windows 7, the DirectX 11 API offers a dedicated subset of the API for GPGPU programming: DirectCompute. You use this API on the CPU side to set up and execute the kernels that run on the GPU. The kernels are written in a language called HLSL (High Level Shader Language). You can use DirectCompute with HLSL to write a "compute shader", which is the term DirectX uses for what I've been referring to in this post as a "kernel". For a comprehensive collection of links about this (including tutorials, videos and samples) please see my blog post: DirectCompute.
    Note that there are many efforts to build even higher-level languages on top of DirectX that aim to expose GPGPU programming to a wider audience by making it as easy as today's mainstream programming models. I'll mention here just two of those efforts: Accelerator from MSR and Brahma by Ananth. Comments about this post are welcome at the original blog.
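    [Editor's sketch, not from the original post: a minimal host-side walk through the DirectCompute flow just described, assuming D3D11 and a trivial HLSL kernel that doubles each element of a buffer. Error handling and the staging-buffer read-back are elided; the kernel string and buffer sizes are illustrative.]

      #include <d3d11.h>
      #include <d3dcompiler.h>
      #include <cstring>
      #include <vector>
      #pragma comment(lib, "d3d11.lib")
      #pragma comment(lib, "d3dcompiler.lib")

      // The "kernel": a compute shader run once per element, 64 threads/group.
      static const char* kKernel =
          "RWStructuredBuffer<float> data : register(u0);\n"
          "[numthreads(64, 1, 1)]\n"
          "void main(uint3 id : SV_DispatchThreadID) {\n"
          "    data[id.x] = data[id.x] * 2.0f; // stand-in for the costly op\n"
          "}\n";

      int main() {
          // 1. Create the device: the host program's handle to the GPU.
          ID3D11Device* dev = nullptr;
          ID3D11DeviceContext* ctx = nullptr;
          D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                            nullptr, 0, D3D11_SDK_VERSION, &dev, nullptr, &ctx);

          // 2. Compile the HLSL kernel and create the compute shader.
          ID3DBlob* blob = nullptr;
          D3DCompile(kKernel, std::strlen(kKernel), nullptr, nullptr, nullptr,
                     "main", "cs_5_0", 0, 0, &blob, nullptr);
          ID3D11ComputeShader* cs = nullptr;
          dev->CreateComputeShader(blob->GetBufferPointer(),
                                   blob->GetBufferSize(), nullptr, &cs);

          // 3. Upload input data (the CPU-to-GPU transfer the post warns about).
          std::vector<float> input(1024, 1.0f);
          D3D11_BUFFER_DESC bd = {};
          bd.ByteWidth = static_cast<UINT>(input.size() * sizeof(float));
          bd.Usage = D3D11_USAGE_DEFAULT;
          bd.BindFlags = D3D11_BIND_UNORDERED_ACCESS;
          bd.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
          bd.StructureByteStride = sizeof(float);
          D3D11_SUBRESOURCE_DATA init = { input.data(), 0, 0 };
          ID3D11Buffer* buf = nullptr;
          dev->CreateBuffer(&bd, &init, &buf);
          ID3D11UnorderedAccessView* uav = nullptr;
          dev->CreateUnorderedAccessView(buf, nullptr, &uav);

          // 4. Bind and dispatch: 1024 elements / 64 threads per group = 16 groups.
          ctx->CSSetShader(cs, nullptr, 0);
          ctx->CSSetUnorderedAccessViews(0, 1, &uav, nullptr);
          ctx->Dispatch(16, 1, 1);
          // Reading results back needs a D3D11_USAGE_STAGING copy + Map (elided).
          return 0;
      }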

    Read the article

  • How to get bearable 2D and 3D performance on AMD Radeon HD 6950?

    - by l0b0
    I have had an AMD Radeon HD 6950 (i.e., Cayman series) for a couple of years now, and I have tried a lot of combinations of drivers and settings with terrible results. I'm completely at a loss as to how to proceed. The open source driver has much better 2D performance, but it offloads all OpenGL rendering to the CPU. What I've tried so far:
    - All the latest stable Ubuntu releases in the period, plus one Linux Mint release.
    - All the latest stable AMD Catalyst Proprietary Display Drivers, currently 13.1.
    - The unofficial wiki installation instructions for every Ubuntu version and the semi-official Ubuntu instructions.
    - All the tips and tweaks I could find for Minecraft (Optifine, reducing settings to minimum), VLC (postprocessing at minimum, rendering at native video size), Catalyst Control Center (flipped every lever in there) and X11 (some binary toggles I can no longer remember).
    Results:
    - Typically 13-15 FPS in Minecraft, 30 max (100+ in Windows with the same driver version).
    - Around 10 FPS in Team Fortress 2 using the official Steam client.
    - Choppy video playback, in Flash and with VLC. CPU use goes through the roof when rendering video (150% for 1080p on YouTube in Chromium, 100% for 1080p H264 in VLC).
    - glxgears shows 12.5 FPS when maximized; fgl_glxgears shows 10 FPS when maximized.
    Hardware details from lshw:
    - Motherboard: ASUS P6X58D-E
    - CPU: Intel Core i7 950 @ 3.07GHz (never overclocked; 64-bit)
    - 6 GB RAM
    - Video card: product "Cayman PRO [Radeon HD 6950]", vendor "Hynix Semiconductor (Hyundai Electronics)"
    - 2 x 1920x1200 monitors, both connected with HDMI.
    I feel I must be missing something absolutely fundamental here. Is there no accelerated support for anything on 64-bit architectures? Does a dual-monitor setup completely mess up the driver?
    $ fglrxinfo
    display: :0 screen: 0
    OpenGL vendor string: Advanced Micro Devices, Inc.
    OpenGL renderer string: AMD Radeon HD 6900 Series
    OpenGL version string: 4.2.11995 Compatibility Profile Context
    $ glxinfo | grep 'direct rendering'
    direct rendering: Yes
    I am currently using the open source driver, with the following results:
    - Full frame rate and low CPU load when playing 1080p video.
    - Black screen (but music in the background) in Team Fortress 2.
    - Similar performance in Minecraft as with the Catalyst driver; in hindsight obvious, since both end up offloading the rendering to the CPU.
    My /var/log/Xorg.0.log after upgrading to AMD Catalyst 13.1 contains some possibly important lines:
    (WW) Falling back to old probe method for fglrx
    (WW) fglrx: No matching Device section for instance (BusID PCI:0@3:0:1) found
    Regarding the generated xorg.conf: the disabled "monitor" 0-DFP9 is actually an A/V receiver, which sometimes confuses the monitor drivers when turned on/off (but not in Windows). All three "monitor" devices are connected with HDMI.
    Edit: Chris Carter's suggestion to use the xorg-edgers PPA (Catalyst 13.1) resulted in some improvement, but still pretty bad performance overall:
    - Minecraft stabilizes at 13-17 FPS, but at least the CPU load is "only" at 45-60%.
    - Still 150% CPU use for 1080p video rendering on YouTube in Chromium.
    - Massive improvement for 1080p H264 in VLC: 40-50% CPU use and no visible jitter.
    - glxgears performance about doubled, to 25-30 FPS when maximized; fgl_glxgears still at ~10 FPS when maximized.

    Read the article

  • D key not working on Ubuntu

    - by Jonathan
    For some inexplicable reason the capital D key on my Ubuntu system is no longer producing output. Hitting caps lock and then d produces a D. I've tried multiple keyboards and the issue is the same. There's nothing bound to Shift+d under System > Preferences > Keyboard Shortcuts. xev produces the following:
    shift + a:
    KeyPress event, serial 36, synthetic NO, window 0x4c00001, root 0x27a, subw 0x0, time 31268952, (130,-16), root:(1000,525), state 0x10, keycode 62 (keysym 0xffe2, Shift_R), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False
    KeyPress event, serial 36, synthetic NO, window 0x4c00001, root 0x27a, subw 0x0, time 31269376, (130,-16), root:(1000,525), state 0x11, keycode 38 (keysym 0x41, A), same_screen YES, XLookupString gives 1 bytes: (41) "A" XmbLookupString gives 1 bytes: (41) "A" XFilterEvent returns: False
    KeyRelease event, serial 36, synthetic NO, window 0x4c00001, root 0x27a, subw 0x0, time 31269584, (130,-16), root:(1000,525), state 0x11, keycode 38 (keysym 0x41, A), same_screen YES, XLookupString gives 1 bytes: (41) "A" XFilterEvent returns: False
    KeyRelease event, serial 36, synthetic NO, window 0x4c00001, root 0x27a, subw 0x0, time 31269608, (130,-16), root:(1000,525), state 0x11, keycode 62 (keysym 0xffe2, Shift_R), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False
    shift + d:
    KeyPress event, serial 36, synthetic NO, window 0x4c00001, root 0x27a, subw 0x0, time 31102792, (115,-13), root:(985,528), state 0x10, keycode 62 (keysym 0xffe2, Shift_R), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False
    FocusOut event, serial 36, synthetic NO, window 0x4c00001, mode NotifyGrab, detail NotifyAncestor
    FocusIn event, serial 36, synthetic NO, window 0x4c00001, mode NotifyUngrab, detail NotifyAncestor
    KeymapNotify event, serial 36, synthetic NO, window 0x0, keys: 2 0 0 0 0 0 0 64 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    KeyRelease event, serial 36, synthetic NO, window 0x4c00001, root 0x27a, subw 0x0, time 31103104, (115,-13), root:(985,528), state 0x11, keycode 62 (keysym 0xffe2, Shift_R), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False

    Read the article

  • Winkey + arrow key doesn't move windows to the 2nd monitor on Linux

    - by Berry Tsakala
    In Windows, Win+Left and Win+Right snap any window to the left/right side of a monitor and, if pressed again, snap the window to the other monitor in a multi-monitor environment. When a window is maximized, the first press "restores" the window, and the next Win+Left/Right snaps it. This is very easy to use and remember. But on Linux:
    - a maximized window is not affected by Win+Left; the user must explicitly un-maximize it first (Win+Down);
    - the window is not moved to the other monitor;
    - a side-snapped window cannot be resized; it behaves like a maximized window.
    Is there another program that behaves like Windows, or should I write one myself? (I'm using Ubuntu/Mint.)

    Read the article

  • Can I change the order of these OpenGL / Win32 calls?

    - by Adam Naylor
    I've been adapting the NeHe OpenGL/Win32 code to be more object oriented, and I don't like the way some of the calls are structured. The example has the following pseudo-structure (the starred points are what I want to move into a rendering class; the rest are what I see as pure Win32 calls):
    - Register window class
    - Change display settings with a DEVMODE *
    - Adjust window rect
    - Create window
    - Get DC
    - Find closest matching pixel format *
    - Set the pixel format to closest match *
    - Create rendering context *
    - Make that context current *
    - Show the window
    - Set it to foreground
    - Set it to having focus
    - Resize the GL scene *
    - Init GL *
    I'm not sure if I can make the starred calls after the Win32 calls. Essentially, what I'm aiming for is to encapsulate the Win32 calls into a Platform::Initiate() type method and the rest into a sort of Renderer::Initiate() method. So my question boils down to: "Would OpenGL allow these methods to be called in this order?"
    - Register window class
    - Adjust window rect
    - Create window
    - Get DC
    - Show the window
    - Set it to foreground
    - Set it to having focus
    - Change display settings with a DEVMODE
    - Find closest matching pixel format
    - Set the pixel format to closest match
    - Create rendering context
    - Make that context current
    - Resize the GL scene
    - Init GL
    (obviously passing through the appropriate window handles and device contexts.) Thanks in advance.
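    [Editor's sketch, not from the question: OpenGL's only hard ordering rule here is that the pixel format must be set on the DC before the rendering context is created and made current, so the reordered list is legal; the one caveat is that ChangeDisplaySettings after ShowWindow can cause a visible mode-switch flicker in fullscreen setups. A minimal illustration of the split, reusing the Platform/Renderer naming from the question; window size, class name and pixel-format values are assumptions.]

      #include <windows.h>
      #include <GL/gl.h>
      #pragma comment(lib, "opengl32.lib")

      class Platform {
      public:
          // Pure Win32: register class, create/show the window, hand back its DC.
          HDC Initiate(HINSTANCE inst) {
              WNDCLASSA wc = {};
              wc.lpfnWndProc   = DefWindowProcA;
              wc.hInstance     = inst;
              wc.lpszClassName = "GLWindow";
              RegisterClassA(&wc);
              hwnd_ = CreateWindowA("GLWindow", "Demo", WS_OVERLAPPEDWINDOW,
                                    0, 0, 800, 600, nullptr, nullptr, inst,
                                    nullptr);
              ShowWindow(hwnd_, SW_SHOW);
              SetForegroundWindow(hwnd_);
              SetFocus(hwnd_);
              return GetDC(hwnd_);
          }
      private:
          HWND hwnd_ = nullptr;
      };

      class Renderer {
      public:
          // GL side: pixel format -> rendering context -> make current.
          bool Initiate(HDC dc) {
              PIXELFORMATDESCRIPTOR pfd = {};
              pfd.nSize      = sizeof(pfd);
              pfd.nVersion   = 1;
              pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL
                             | PFD_DOUBLEBUFFER;
              pfd.iPixelType = PFD_TYPE_RGBA;
              pfd.cColorBits = 32;
              SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd); // before RC
              rc_ = wglCreateContext(dc);
              if (!rc_) return false;
              wglMakeCurrent(dc, rc_);
              glViewport(0, 0, 800, 600);            // "resize the GL scene"
              glClearColor(0.0f, 0.0f, 0.0f, 1.0f);  // "init GL"
              return true;
          }
      private:
          HGLRC rc_ = nullptr;
      };

      int main() {
          Platform platform;
          Renderer renderer;
          HDC dc = platform.Initiate(GetModuleHandle(nullptr));
          return renderer.Initiate(dc) ? 0 : 1;
      }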

    Read the article

  • Ubuntu Unity (64-bit) Bugs on Nvidia dual screen [closed]

    - by Kristofer
    On my work computer I have upgraded 10.04 to 10.10 and now to 11.04. The upgrades have mostly worked well, and the dual-screen setup is almost working. There are, however, a couple of annoying bugs which I tried to report, but the bug reporting tool told me to ask about them here. Here's what I have found:
    - Auto-hide stops working. It doesn't seem to matter which application is maximised or covering the dock; it just stays on top. I have tested changing the auto-hide settings with no result.
    - Scenario: one maximised window on each screen. If I try to drag the window that doesn't have focus by the top of the window, nothing happens. I also cannot give the window focus by clicking the top bar; I have to click inside the window, and only then can I drag it. If I unmaximise the window that has focus, it does work: I can both change focus and drag the other window directly by clicking the top of its maximised frame.
    Any bug fixes coming for these issues? Any work-arounds?

    Read the article
