Search Results

Search found 6387 results on 256 pages for 'cpu allocation'.

Page 159/256

  • Win 7 BSOD gets stuck at dumping

    - by AFO
    Recently I've been getting frequent BSODs, which always get stuck at the dumping stage without finishing, and most of them show no error message. This never happened before I upgraded to new RAM, but 2 passes of memtest86 came out fine. I tried reinstalling Windows; the problem is still there. I tried initiating a manual crash, and that did create a successful dump. The SSD Win 7 is on doesn't seem to have any problems; CrystalDiskInfo says it's healthy. I varied the CPU multiplier between stock and +500 MHz; it crashed regardless. Voltage control is left on auto. I'm fairly sure it's a hardware problem, I just can't pinpoint which specific part(s). The specs:
    Windows 7 x64 (1 day old)
    955 X4 BE C3 (running at stock) (3.5 years old)
    GA-970A-D3 (1.5 years old)
    Gigabyte 6950, unlocked to 6970 (still at 6950 speeds) (<3 years old)
    2x4GB 1600 CL9 HyperX Blu (running at 11-11-11, the default motherboard setting) (<1 month old)
    Plextor M5S (around 5 months old)

    Read the article

  • How to fully use the 4 GB in my laptop under Ubuntu 9.10 (32-bit)

    - by jfmessier
    I have a Toshiba A100, which I upgraded to 4 GB of RAM. The hardware startup screen indeed shows 4 GB of RAM, and I recently installed Windows 7 just to see how it behaves; so far so good, it displays 4 GB of RAM. Not that I tried to use it all, but it displays it. Previously, under XP, I would also see 4 GB of RAM. But under Ubuntu 9.10 (32- or 64-bit), it only displays 2.9 GB of RAM. And my kernel is the "pae" build, which is supposed to work around the 32-bit addressing limitation. How can I get Ubuntu to fully use my 4 GB of RAM?
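
    For reference, a minimal sketch of how one might verify PAE support and switch to the PAE kernel on Ubuntu (package names vary between releases, so treat this as an assumption to check):

        # Check whether the CPU advertises PAE and which kernel is currently running
        grep -o pae /proc/cpuinfo | head -1
        uname -r
        # Install the PAE kernel meta-package (name may differ on older releases)
        sudo apt-get install linux-generic-pae
        # After rebooting into the PAE kernel, all 4 GB should be visible
        free -m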

    Read the article

  • iPhone - dequeueReusableCellWithIdentifier usage

    - by Jukurrpa
    Hi, I'm working on an iPhone app which has a pretty large UITableView with data taken from the web, so I'm trying to optimize its creation and usage. I found out that dequeueReusableCellWithIdentifier is pretty useful, but after seeing many source codes using this, I'm wondering whether my usage of this method is the right one. Here is what people usually do:

        UITableViewCell* cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"];
        if (cell == nil) {
            cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:@"Cell"];
        }
        // Add elements to the cell
        return cell;

    And here is the way I did it:

        NSString* identifier = [NSString stringWithFormat:@"Cell %d", indexPath.row]; // the cell row
        UITableViewCell* cell = [tableView dequeueReusableCellWithIdentifier:identifier];
        if (cell != nil)
            return cell;
        cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:identifier];
        // Add elements to the cell
        return cell;

    The difference is that people use the same identifier for every cell, so dequeuing one only avoids allocating a new one. For me, the point of queuing was to give each cell a unique identifier, so when the app asks for a cell it has already displayed, neither allocation nor element adding has to be done. In the end I don't know which is best: the "common" method caps the table's memory usage at the exact number of cells it displays, whilst my method seems to favor speed, as it keeps every cell already built, but can cause large memory consumption (unless there's an inner limit to the queue). Am I wrong to use it this way? Or is it just up to the developer, depending on his needs?

    Read the article

  • Sync Vs. Async Sockets Performance in .NET

    - by Michael Covelli
    Everything that I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs, which saves on allocation). I think this makes sense if we're talking about a server with many client connections, where it's not possible to allocate one thread per connection. Then I can see the advantage of using the ThreadPool threads and getting async callbacks on them. But in my app, I'm the client and I just need to listen to one server sending market tick data over one TCP connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() with it. My thread blocks on this call and wakes up once new data arrives. If I were to switch this to an async pattern so that I get a callback when there's new data, I see two issues: 1) The threadpool threads will have default priority, so it seems they will be strictly worse than my own thread, which has Highest priority. 2) I'll still have to send everything through a single thread at some point. Say that I get N callbacks at almost the same time on N different threadpool threads notifying me that there's new data. The N byte arrays that they deliver can't be processed on the threadpool threads, because there's no guarantee that they represent N unique market data messages, since TCP is stream based. I'll have to lock, put the bytes into an array anyway, and signal some other thread that can process what's in the array. So I'm not sure what having N threadpool threads is buying me. Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?

    Read the article

  • What metric captures why my OS X machine is so slow during Xcode indexing

    - by Ben Flynn
    My entire OS X Lion machine slows down while Xcode 4.4 is indexing. The CPU is less than 10% busy, I've got over 500 MB of free memory, plenty of disk space, the disk I/O rate is not high, and network activity is not high. Indexing just a few files can take minutes, and builds are extremely slow. While this is going on, even loading a new web page in Chrome can be slow. Knowing how to fix it would be great, but more fundamentally, how can I measure what is actually going slowly? What metrics should I be looking at? Nothing in Activity Monitor, iostat, top, or sar betrays anything about what's going on. Even getting a man page is interminable.
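
    A hedged starting point for digging deeper than Activity Monitor, using tools that ship with OS X (exact flags are worth checking against the man pages on your release):

        # Trace filesystem calls system-wide; stalled I/O shows up as long call times
        sudo fs_usage -w -f filesys
        # Sample what Xcode itself is doing for 10 seconds
        sample Xcode 10 -file /tmp/xcode-sample.txt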

    Read the article

  • Memory usage on Debian webserver keeps going up

    - by Steven De Groote
    My webserver is running Apache 1.3.x for a PHP application, along with MySQL on the same machine. Most of the time it runs fine, with CPU usage still at a comfortable margin, but somehow memory usage keeps growing throughout uptime. While some of it appears to be reclaimed from time to time, I've had moments of my server going down because it's out of memory. Restarting Apache or MySQL only reduced memory usage by 100 MB. Attached is an overview of monthly memory usage; the 2 massive drops are server restarts after out-of-memory situations. http://imageshack.us/photo/my-images/51/memorymonth.png/ Any explanations for this behaviour, or ideas on how I could solve it? Thanks! Steven
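
    One way to see where the memory is actually going, plus a classic Apache 1.3 mitigation, sketched under the assumption that the growth lives in long-running Apache children (the directive value is illustrative):

        # Show the processes holding the most resident memory
        ps aux --sort=-rss | head -15
        # In httpd.conf: recycle each child after N requests so leaked memory is returned
        # MaxRequestsPerChild 1000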

    Read the article

  • Serving static assets via HTTP is really slow compared to SSHFS (Apache2/nginx)

    - by s1lv3r
    After migrating to a new VPS, I had some users complaining about slowly loading images on their sites. After creating some test files with dd, I realized that I can download all files via sshfs at full speed, while downloads via the web are painfully slow. The larger the file is and the longer the transfer takes, the slower the transfer speed gets. I thought I had a problem with Apache and just spent the whole evening replacing Apache2 with nginx for static file serving - with no effect at all. There are no I/O wait states in top, tons of RAM free, no high CPU utilization, and hdparm shows decent I/O performance at all times. I just have no idea anymore what's happening on this server. This is a link to a demo file: http://master.dealux.de/file.tgz Anybody have an idea what I can check?
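
    To pin down where the slowdown occurs, a simple timing comparison with curl (using the demo URL above, and a localhost run to take the network path out of the picture) might look like:

        # Average download speed and total time, bypassing the browser
        curl -o /dev/null -w 'speed: %{speed_download} B/s  time: %{time_total}s\n' \
             http://master.dealux.de/file.tgz
        # Repeat from the server itself to rule out the network path
        curl -o /dev/null -w 'speed: %{speed_download} B/s\n' http://localhost/file.tgz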

    Read the article

  • Windows 7 USB power loss after a few seconds / minutes

    - by Stefan Dunn
    My friend's computer has a problem where the USB ports cause problems with the power of some devices connected to the computer. The USB mouse has no problems; however, the wireless adapter loses power after around 20 seconds of use, and USB flash drives cause the computer to either freeze, lose power (and become unresponsive), or become disconnected (still shown in Device Manager, but not in My Computer) when trying to transfer any type of file to or from the computer. I have a suspicion it's the motherboard, but could it also be a software problem? We tried a new case, RAM, CPU, and GFX card, which had no effect. The problem occurs on both the front USB and back (motherboard) USB ports. UPDATE: Tried the USB devices with an Ubuntu Live CD and they work fine - could this mean it's a problem with Windows (x64)?
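
    One software-side suspect worth ruling out is USB selective suspend; a sketch of disabling it from an elevated command prompt (the GUIDs are assumed to be the standard USB subgroup and selective-suspend setting - verify them with 'powercfg -q' before running):

        :: Disable USB selective suspend on the active power scheme (AC power)
        powercfg -setacvalueindex SCHEME_CURRENT 2a737441-1930-4402-8d77-b2bebba308a3 48e6b7a6-50f5-4782-a5d4-53bb8f07e226 0
        powercfg -setactive SCHEME_CURRENT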

    Read the article

  • IIS7 Compression Configuration

    - by Level1Coder
    Hi, previously, when I used IIS6, I used IIS6 Metabase Explorer to edit Metabase.xml and manually turned on compression, specifying the compression level and the file extensions to compress. IIS7 seems a bit different; there is no Metabase.xml file in the system32\inetsrv folder. Compression is easy to turn on by checking the checkbox in the Compression module, but how do I manually tweak the compression levels and the file types to compress? I also ran across an article saying that IIS7 automatically throttles compression: if CPU load exceeds 50%, compression is turned off. Where are all these settings located?
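
    In IIS7 these settings live in the system.webServer/httpCompression section of applicationHost.config rather than a metabase. A hedged appcmd sketch (attribute names are from the IIS7 httpCompression schema; values are illustrative, and it's worth confirming the current ones with 'appcmd list config -section:system.webServer/httpCompression'):

        :: Raise the static gzip compression level
        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /[name='gzip'].staticCompressionLevel:9
        :: CPU thresholds at which dynamic compression is switched off and back on
        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /dynamicCompressionDisableCpuUsage:90 /dynamicCompressionEnableCpuUsage:50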

    Read the article

  • Why does QuickTime lag in Firefox if I don't put my mouse over it?

    - by Jim McKeeth
    This has happened for me as long as I can remember: since the first version of Firefox, on multiple computers, and under different versions of Windows. QuickTime plays fine in IE and Chrome (even with Firefox in the background), but in Firefox, if my mouse is not over the QuickTime window, it will start to stutter, then lag, and eventually just stop. To be honest, I do keep quite a few tabs open, but Firefox stays at 1% CPU (even when QuickTime runs) and I have a few gigs of free RAM. It is the same for any resolution of video or audio. If the mouse is just one pixel inside the client area of the QuickTime window, then it usually plays fine. Other video formats typically play fine. Does anyone else notice this behavior? Ultimately I would like a fix besides keeping my mouse over the QuickTime window.

    Read the article

  • Force full garbage collection when memory occupation goes beyond a certain threshold

    - by Silvio Donnini
    I have a server application that, on rare occasions, can allocate large chunks of memory. It's not a memory leak, as these chunks can be claimed back by the garbage collector by executing a full garbage collection. Normal garbage collection frees amounts of memory that are too small: it is not adequate in this context. The garbage collector executes these full GCs when it deems appropriate, namely when the memory footprint of the application nears the allotted maximum specified with -Xmx. That would be OK, if it weren't for the fact that these problematic memory allocations come in bursts and can cause OutOfMemoryErrors, because the JVM is not able to perform a GC quickly enough to free the required memory. If I manually call System.gc() beforehand, I can prevent this situation. Anyway, I'd prefer not having to monitor my JVM's memory allocation myself (or insert memory management into my application's logic); it would be nice if there were a way to run the virtual machine with a memory threshold over which full GCs would be executed automatically, in order to release very early the memory I'm going to need. Long story short: I need a way (a command line option?) to configure the JVM to release a good amount of memory early (i.e. perform a full GC) when memory occupation reaches a certain threshold. I don't care if this slows my application down every once in a while. All I've found till now are ways to modify the sizes of the generations, but that's not what I need (at least not directly). I'd appreciate your suggestions, Silvio. P.S. I'm working on a way to avoid the large allocations, but it could take a long time, and meanwhile my app needs a little stability.
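
    For what it's worth, the closest stock-HotSpot equivalent is telling the CMS collector to start a collection cycle once old-generation occupancy passes a fixed threshold. A sketch (the flags are standard HotSpot options; the values and main class are illustrative):

        # Start a CMS cycle whenever old-gen occupancy exceeds 60%,
        # and use that threshold only (don't let the JVM adapt it away)
        java -Xmx2g \
             -XX:+UseConcMarkSweepGC \
             -XX:CMSInitiatingOccupancyFraction=60 \
             -XX:+UseCMSInitiatingOccupancyOnly \
             com.example.MyServer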

    Read the article

  • Hard drive trouble trying to recover data

    - by DEmetria
    How do I get my information off my old hard drive and onto my new one? My laptop stopped working and gave me a hard disk error. I finally got it to boot up again after playing with it, and I tried to copy my stuff, but it froze on me and hasn't booted since. I bought another hard drive and ended up making an image of my friend's computer, but I couldn't get my stuff off my old drive, so I tried the freezer method: I put the drive in for two hours and it didn't boot, so now I'm putting it back in for 12 hours. My end result is that I just want to get my drive up long enough to create an image of it onto my new hard drive. Is there another way I could do it if my hard drive won't boot? Here's the kicker: when I made the image of my friend's computer, I loaded it onto the new hard drive, and I now have my old hard drive plugged into a USB enclosure. I need help. Thanks in advance!
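
    If the drive still spins up, a safer route than repeated boot attempts is to clone whatever is readable first and recover files from the clone. A sketch with GNU ddrescue (device names are examples only - double-check which device is which before running):

        # Clone the failing drive (sdb) onto the new one (sdc), skipping bad areas first
        sudo ddrescue -f -n /dev/sdb /dev/sdc rescue.log
        # Then go back and retry the bad areas a few times
        sudo ddrescue -f -r3 /dev/sdb /dev/sdc rescue.log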

    Read the article

  • IIS 7.x Application Pool Best Practices

    - by Eric
    We are about to deploy a bunch of sites to some new servers. I have the following questions about application pools:
    1) It seems advisable to have an application pool per website. Are there any caveats to this approach? Can one application pool, for example, hog all the CPU, memory, etc.?
    2) When should you allow multiple worker processes in an application pool? When should you not?
    3) Can the private memory limit be used to prevent one application pool from interfering with another (see the sketch below)? Will setting it too low cause valid requests to recycle the application pool without getting a valid response?
    4) What is the difference between the private and virtual memory limits?
    5) Are there compelling reasons NOT to run one application pool per site?
    Thanks!
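
    On point 3, a hedged appcmd sketch of setting a per-pool private-memory recycling threshold (the pool name and value are illustrative; the limit is expressed in KB):

        :: Recycle the pool when its private bytes exceed ~512 MB
        %windir%\system32\inetsrv\appcmd.exe set apppool "MySiteAppPool" /recycling.periodicRestart.privateMemory:524288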

    Read the article

  • Squid Proxy Antivirus - Recommendations / Performance

    - by Jon Rhoades
    Due to our users' increasing expertise at downloading viruses and the like, we are investigating adding antivirus scanning to our Squid proxy. A casual Google reveals several free options and one paid: HAVP, squid-vscan, Viralator, and SafeSquid (commercial). None of the free ones have reached v1 yet, nor do any inspire huge confidence from their websites (although of course I would rather they spent their time on the app than on the website!). Does anybody have experience with any of these (or any other similar apps)? If so, are they suitable for a production network with 400-ish concurrent users, and what sort of CPU/RAM requirements do they have?
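
    For context, HAVP is typically chained with Squid as a parent proxy rather than compiled in. A minimal squid.conf sketch (host and port assume HAVP listening locally on 8080):

        # Route all requests through HAVP running on the same box
        cache_peer 127.0.0.1 parent 8080 0 no-query no-digest no-netdb-exchange default
        never_direct allow all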

    Read the article

  • Disable user_deny.db in Cyrus/imapd

    - by netmano
    We have Cyrus 2.4.12 on Debian; we use packages rather than building the software ourselves. I am constantly getting this log line, for various users, 8-10 times per user request: fetching user_deny.db entry for 'user123'. I have searched for it but haven't found a real solution. There were some patches for 2.3.xx, but we don't want to build from source; we prefer to use packages. Is there any way to disable user_deny.db entirely? We don't need this feature, and it wastes CPU as well as disk I/O.

    Read the article

  • New power supply and now HDDs are not recognized.

    - by Michael
    So I upgraded to a new X4 ULTRA power supply that was recommended to me by a local TigerDirect store. After installing it along with a new liquid cooling system, I booted up and it immediately fried my CD drive. After that, I noticed that the OS wouldn't start, and I figured out that none of the 4 HDDs in my computer were being recognized by the BIOS. I can feel them spin up at a steady pace, and I have tried new cables and connections, but to no avail. I triple-checked all of the connections and cables and have no idea what is wrong. This isn't the first time I've changed a PSU or CPU cooling system, but I am at a dead end. Any ideas, aside from buying a USB HDD reader and seeing if they are all fried? Also, this is a stock Gateway motherboard with the onboard USB connections already dead. Could the new PSU have fried the SATA connections?

    Read the article

  • Remote monitoring with screen capture

    - by Anonymous
    Is there any way I can periodically take screenshots of a remote computer using Nagios? I am experimenting with Nagios and trying to explore different kinds of monitoring. So my question is: apart from using Nagios to monitor CPU usage, bandwidth utilization, uptime, etc., can I monitor a worker's productivity by checking what he is doing on his computer, in the form of image output? Being able to monitor processes would not be of much help to me, as that only tells me whether, for example, he is running firefox.exe; he may be making excessive use of Firefox for Facebook or other stuff while claiming he is troubleshooting and looking for solutions on forums. I saw a check_vnc script, but I am unable to install the requisite VNC server. Has anyone successfully tried the VNC script and care to share how to go about it? If not, is there any other way to do this?
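
    One hedged approach is the vncsnapshot utility wrapped in a trivial Nagios-style check (paths, the password file, and the display number are illustrative, and this assumes a VNC server is already running on the target):

        #!/bin/sh
        # Grab a JPEG of the remote desktop and store it with a timestamp
        HOST="$1"
        OUT="/var/lib/screenshots/${HOST}-$(date +%Y%m%d%H%M).jpg"
        if vncsnapshot -passwd /etc/nagios/.vncpass "${HOST}:0" "$OUT"; then
            echo "OK - screenshot saved to $OUT"
            exit 0
        else
            echo "CRITICAL - could not capture screen on $HOST"
            exit 2
        fi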

    Read the article

  • Reduce the I/O priority of Windows Backup (Windows Server 2008 R2)

    - by HelloSam
    I have PostgreSQL running on a Windows Server 2008 R2 x64 box, and I have scheduled a daily backup from the RAID 1 DB disk to a dedicated standalone disk. The disks are 15k SAS on a Dell PERC 6i. I am using the built-in Windows Server Backup for this purpose. The problem is that whenever the backup process kicks in, database performance is hogged; I would say almost a 10x performance reduction. In Resource Monitor, the disk queue is in the double-digit range while backing up, and less than 1 during the day. Disk activity is around 30-50 MB/s during backup, so I guess the hardware is behaving normally, though wbengine.exe takes up most of it. I think reducing the I/O priority of the backup process would be the answer, but I couldn't find a way to do it. Tuning the process CPU priority does not seem to help.

    Read the article

  • Loss of feature set with VMware EVC

    - by Peter
    If I have two machines, both 3rd-generation AMD Opterons (one Shanghai and one Istanbul), and I can vMotion between them, does it buy me anything to enable EVC at the 3rd-generation AMD level? Will I lose any CPU features? My thinking is: I can enable 3rd-generation EVC with running VMs, but I can't enable 2nd-generation EVC with running VMs. I figure there won't be a loss of any features, because if there were a reduction in the feature set, then I couldn't enable EVC with running VMs.

    Read the article

  • Understanding an Application based on the OS interaction with a Hypervisor

    - by Dewy
    I will ask a few specific questions below, but let me set the stage first. My goal is to monitor applications in a very odd place - between the OS and a hypervisor. If you have comments about this probably unachievable goal, please do educate me; one good piece of advice or link can save me days of work. Now to my current attempt: I installed VirtualBox (being open-source) on WinXP and got a guest OS of the latest Ubuntu running within it. Where should I go next? Can I set the logs to show all memory/CPU/disk activity of the guest OS? Thanks, Dewy
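
    As a starting point, VirtualBox exposes per-VM resource counters through VBoxManage; a sketch (the VM name is an example, and the metric names are worth confirming with 'VBoxManage metrics list'):

        # Collect metrics every 5 seconds, keeping the last 12 samples
        VBoxManage metrics setup --period 5 --samples 12 "UbuntuGuest"
        # Query CPU and RAM counters for the VM as seen from the host side
        VBoxManage metrics query "UbuntuGuest" CPU/Load/User,RAM/Usage/Used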

    Read the article

  • zip being too nice (OS X)

    - by stib
    I use zip to do a regular backup of a local directory onto a remote machine. They don't believe in things like rsync here, so it's the best I can do (?). Here's the script I use:

        echo $(date) >> ~/backuplog.txt
        if [[ -e /Volumes/backup/ ]]; then
            cd /Volumes/Non-RAID_Storage/
            for file in projects/*; do
                nice -n 10 zip -vru9 /Volumes/backup/nonRaidStorage.backup.zip "$file" 2>&1 \
                    | grep -v "zip info: local extra (21 bytes)" >> ~/backuplog.txt
            done
        else
            echo "backup volume not mounted" >> ~/backuplog.txt
        fi

    This all works fine, except that zip never uses much CPU, so it seems to be taking longer than it should: it never seems to get above 5%. I tried making it nice -20, but that didn't make any difference. Is it just the network or disk speed bottlenecking the process, or am I doing something wrong?
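
    Since 5% CPU suggests zip is waiting on I/O rather than compressing, one quick way to separate the network from the disk is to time the same run against a local target (paths are illustrative):

        # If this is fast, zip itself is fine and the bottleneck is the network mount
        time zip -ru9 /tmp/test.zip projects/somedir
        # Compare against the same run writing to the remote volume
        time zip -ru9 /Volumes/backup/test.zip projects/somedir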

    Read the article

  • What could cause my LAN pings to be greater than 100ms?

    - by James Holland
    I have 2 servers (both Windows Server 2008, dual Xeon 2.8 GHz, 32 GB RAM, 8 x 15k SAS drives). One of them is a DC / web server / Exchange server; the other is a SQL Server (2008) machine. I have a 48-port Netgear GS748T gigabit switch. When I ping from server to server, I get ping times under 1 ms - great - but when I ping from a PC, I get varying pings, from the occasional <1 ms up to 500 ms! If I log into either server and look at Task Manager, CPU usage peaks at 20% and memory usage is at 100%, but I am led to believe this is normal, as Exchange will just use as much as you have and release it when requested. Network usage peaks at 1%. I really don't understand how the ping can vary that much. I know I am giving very little info, and I apologise, but this is all I know - can anyone help? In response to a question: I have pinged by both IP address and hostname; there is no difference in ping times.

    Read the article

  • Instantiating class with custom allocator in shared memory

    - by recipriversexclusion
    I'm pulling my hair out due to the following problem: I am following the example given in the boost.interprocess documentation to instantiate, in shared memory, a fixed-size ring buffer class that I wrote. The skeleton constructor for my class is:

        template<typename ItemType, class Allocator>
        SharedMemoryBuffer<ItemType, Allocator>::SharedMemoryBuffer( unsigned long capacity ) {
            m_capacity = capacity;
            // Create the buffer nodes.
            m_start_ptr = this->allocator->allocate();   // allocate first buffer node
            BufferNode* ptr = m_start_ptr;
            for( int i = 0; i < this->capacity() - 1; i++ ) {
                BufferNode* p = this->allocator->allocate();   // allocate a buffer node
            }
        }

    My first question: does this sort of allocation guarantee that the buffer nodes are allocated in contiguous memory locations, i.e. when I try to access the n'th node at address m_start_ptr + n*sizeof(BufferNode) in my Read() method, will it work? If not, what's a better way to keep the nodes - creating a linked list? My test harness is the following:

        // Define an STL-compatible allocator of ints that allocates from the managed_shared_memory.
        // This allocator will allow placing containers in the segment.
        typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator;

        // Alias a buffer that uses the previous STL-like allocator so that it allocates
        // its values from the segment.
        typedef SharedMemoryBuffer<int, ShmemAllocator> MyBuf;

        int main(int argc, char *argv[])
        {
            shared_memory_object::remove("MySharedMemory");

            // Create a new segment with given name and size
            managed_shared_memory segment(create_only, "MySharedMemory", 65536);

            // Initialize shared memory STL-compatible allocator
            const ShmemAllocator alloc_inst(segment.get_segment_manager());

            // Construct a buffer named "MyBuffer" in shared memory with argument alloc_inst
            MyBuf *pBuf = segment.construct<MyBuf>("MyBuffer")(100, alloc_inst);
        }

    This gives me all kinds of compilation errors related to templates for the last statement. What am I doing wrong?

    Read the article

  • How to install apt-get on a busybox embedded system?

    - by Daniel YC Lin
    My embedded system runs on an sh4 CPU. A Debian distribution for it is available at http://www.si-linux.co.jp/pub/debian-sh/lenny-sh4/ I got the apt*.deb package and extracted its data.tar.gz. After setting up /etc/apt/sources.list, I could run 'apt-get update', but I get missing dependencies when I try to run 'apt-get install ntpdate'. Is there any method to let apt-get ignore some base packages? Those packages are already provided by my original embedded system (e.g. busybox).
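
    When dependency resolution fails because the base system isn't dpkg-managed, one common fallback is unpacking the .deb payload by hand; a sketch, assuming busybox provides the ar and tar applets and that the .deb has already been fetched from the mirror:

        # A .deb is an ar archive whose payload lives in data.tar.gz;
        # extract it and unpack directly into the root filesystem
        ar x ntpdate_*.deb data.tar.gz
        tar xzf data.tar.gz -C /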

    Read the article

  • Under kvm, Vista guest OS install halts on black screen

    - by Isaac Sutherland
    I am using kvm on my ubuntu-server-10.04 amd64 dual core PC. I am trying to install a Windows Vista guest OS. The installation proceeds properly until the system reboot halfway into the installation process, at which point it stops on a black screen and CPU usage goes to near zero. I created the vm with virt-install as follows: virt-install -n vista --connect qemu:///system -r 1024 -vcpus 2 \ --os-type windows --os-variant vista \ --virt-type kvm --accelerate \ -c /dev/sr0 \ --disk path=/dev/main/vista-hd \ --network bridge=br0 \ --vnc --noautoconsole Where /dev/sr0 is the physical drive with the vista installation DVD, and /dev/main/vista-hd is a 20-GB lvm logical volume I created. A number of people seem to have had success installing vista under KVM, but I haven't been able to determine what is causing my problem. Ideas anyone?

    Read the article
