Search Results

Search found 13608 results on 545 pages for 'performance dashboard'.


  • ACER ASPIRE V3-571G-9435 Fan not kicking in leading to overheating and throttling

    - by brythespy
    This laptop has always had this problem. The temperatures climb to the thermal ceiling of 99C for the CPU (i7-3610QM) and 94C for the GPU (GT 640M). Problem is, the FAN doesn't give a damn. It's actually QUIETER when the temperatures are that high than when it's at 60C or so. I figured it was a problem with the BIOS, so I updated that; no change. So maybe it was a problem with Windows? Nope, same result when gaming on Ubuntu. The major problem is that after gaming for ten minutes the CPU throttles itself to 1197 MHz (as opposed to 3193), and the GPU goes down to 135 MHz (as opposed to 843 MHz). The fan won't kick in like I know it can, because when the laptop is in POST, like at BIOS setup, the fan is as loud as a vacuum cleaner! I don't really care about noise, so I'd love to have the fan run like that all the time as long as the temperatures don't fly through the roof. Things I've tried so far, to avoid possible duplicate answers:
    - Checked for dust: it's been this way since the laptop was new, and I've since taken it apart. No dust buildup.
    - Background stuff running? No; the problem persists across OSes, and it happens while gaming anyway.
    - Manually underclocking the CPU/GPU: using Windows, I can force the CPU to stay at 1.1 GHz, but the temperature STILL easily hits 99C after 5 minutes of gaming.
    - Contacted Acer support? No help at all. They told me to update and reset the BIOS, which I have done multiple times. There are only about six changeable settings anyway, none of which should affect fan control.
    - Third-party fan control programs? None detect the fan (a probe sketch follows below).
    So I'm screwed until I can afford to replace this laptop, but I am very satisfied with performance in games... whenever the CPU/GPU aren't being throttled. Anyone who can offer advice to solve this problem would be greatly appreciated. Hell, if you solved my problem I'd send you some monies through PayPal.
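
    A minimal probe, in case it helps on the Ubuntu side: walk the standard hwmon sysfs tree and see whether the firmware exposes any fan-control (pwm) node at all. Paths are the stock Linux hwmon layout; whether this particular embedded controller exposes a pwm node is exactly what is in question here.

    ```python
    # Sketch: list hwmon temperature sensors and any pwm (fan-control) nodes.
    # If no pwmN file shows up, the embedded controller keeps fan control to
    # itself, which would explain why third-party fan tools detect nothing.
    import glob, os

    for hwmon in sorted(glob.glob("/sys/class/hwmon/hwmon*")):
        name_file = os.path.join(hwmon, "name")
        name = open(name_file).read().strip() if os.path.exists(name_file) else "?"
        temps = glob.glob(os.path.join(hwmon, "temp*_input"))
        pwms = glob.glob(os.path.join(hwmon, "pwm[0-9]"))
        print(f"{hwmon} ({name}): {len(temps)} temp sensor(s), pwm nodes: {pwms or 'none'}")
    ```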

    Read the article

  • ssh initial prompt hangs for 10 minutes but console login and initial prompt are very responsive - why?

    - by rfreytag
    I have been running an ESXi 4.0 server for months with a couple of WinServer2003 and several Ubuntu Server 10.04 VMs. The performance has been impressive on 6GB i7 Asus P6T hardware. Suddenly, a week ago, ssh logins to the Ubuntu VMs take 10 minutes when connecting over the LAN (over a WAN the connection is broken long before that). When logging in to these VMs the password prompt arrives immediately, and failed passwords are rejected immediately. But the moment I log in and the shell prompt appears, the session hangs for many minutes. Sometimes the connection hangs before the shell prompt appears, and sometimes I can type in a command, but the moment I hit return the machine hangs. Ten full minutes later control returns and the VM is responsive. NOTE: there are several Ubuntu VMs on the same host machine that are identical in every way I can tell. However, only one of the VMs displays this behavior. That is why I mention the ESXi host in passing - I don't think it has anything to do with the problem. This behavior is never seen when I connect to the troubled VM's console (through vSphere Client). From the console the Ubuntu VMs all respond beautifully. I have seen: http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1003496&sliceId=1&docTypeID=DT_KB_1_1&dialogID=229586372&stateId=1%200%20229588522 ...and since that relates to delays in seeing the password prompt, it does not appear to be the solution here. Any other suggestions very welcome - thank you.
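
    One way to narrow down where the post-authentication delay happens is to time a bare remote command against a full login shell. A sketch with a placeholder hostname; if the bare command returns instantly but the login shell hangs, the delay is in the login scripts or PAM session setup rather than in sshd or the network.

    ```python
    # Sketch: compare ssh running a bare command vs. a login shell on the
    # troubled VM. check=False so a failure still prints a timing.
    import subprocess, time

    HOST = "troubled-vm"  # placeholder hostname

    for label, remote_cmd in [("bare command", "true"),
                              ("login shell", "bash -lc true")]:
        start = time.time()
        subprocess.run(["ssh", HOST, remote_cmd], check=False)
        print(f"{label}: {time.time() - start:.1f}s")
    ```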

    Read the article

  • Effects of internet connection speeds on server queries

    - by SephMerah
    Can my internet connection significantly affect queries run in phpMyAdmin? I am currently on 18 Mbps down and 30 up. I switched internet connections today and noticed a steep drop in query performance. The query that I am running is SELECT * FROM table. Simple. The table has one row of data. The MySQL server is on the same server as everything else. It is a VPS; GoDaddy hosts it. I don't have any other information. CentOS 6.3, MySQL 5.1, phpMyAdmin 3.4. Okay, I used Chrome's developer tools to inspect the XHR going out and coming in, and this is what it reported: {"success":true,"message":"<div class=\"success\">Your SQL query has been executed successfully ( Query took 0.0033 sec )<\/div>","sql_query":"<div id=\"result_query\" align=\"\">\n<div class=\"success\">Your SQL query has been executed successfully ( Query took 0.0033 sec ) SNIP..................."}. So apparently my server is fine. The strange thing, though, is that the returned XHR comes back as soon as I execute the query on the page, within less than a second. So why doesn't phpMyAdmin report the result immediately? I am going to try a re-install.
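
    Since the XHR already shows the server executing the query in milliseconds, one way to take phpMyAdmin and the browser out of the equation is to time the same query over a direct connection. A sketch assuming the PyMySQL package and placeholder credentials; run it once on the VPS itself and once from home, and the difference is the network's share:

    ```python
    # Sketch: time the raw query without phpMyAdmin in the middle.
    import time
    import pymysql  # pip install pymysql

    conn = pymysql.connect(host="localhost", user="dbuser",
                           password="secret", database="mydb")  # placeholders
    with conn.cursor() as cur:
        start = time.time()
        cur.execute("SELECT * FROM my_table")  # placeholder table name
        rows = cur.fetchall()
    print(f"{len(rows)} row(s) in {time.time() - start:.4f}s")
    conn.close()
    ```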

    Read the article

  • KVM Hosting: How to efficiently replicate guests

    - by javano
    I have three KVM servers, each with one guest VM running directly on its local storage (so they are essentially getting a dedicated box's worth of computing power each). In the event of a host failure I would like the guests replicated to at least one of the other hosts, so I can spin them up there until the failing host is fixed. I am curious about KVM cloning. I can clone a VM live or when it's suspended/shut down. Obviously suspended VMs will naturally be quicker to clone, but these three VMs comprise three parts of a single solution, so I don't ever want any one of them shut down. How can I efficiently clone these VMs between servers? I have had a couple of ideas, but are these insane, or is there a better method I have missed for my scenario?
    - Set up a DRBD partition between boxes 1 and 2 that VM 1 runs from, so it is replicated between box 1 and box 2; repeat between boxes 2 & 3, and boxes 3 & 1. (This could be insane; I have never used DRBD, only read about it.)
    - Just use the standard KVM CLI clone options to perform live clones. (I'm dubious about this because I don't know how long it will take and what the performance impact will be during the clone; a rough sketch follows below.)
    - Run a copy of each VM on at least one other host, and have the guest on one host export its data to the matching guest on another host, which imports it (scripting this on the guest).
    - Some other way? Ideas welcome!
    Side note: these servers have 4x 15k SAS drives in a RAID 10, so they aren't rocketing fast, and as I mentioned, each VM runs from the host's local storage, no NAS or SAN etc. That is why I am asking this question about guest replication. Also, this isn't about disaster recovery. Guests will be exporting their data to a NAS over a VPN, so I am looking at how I can have them quickly spun up in a host-failure situation.
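
    For the second idea, a rough sketch of what a scripted clone between hosts could look like, using virsh plus rsync; names and paths are placeholders. It briefly suspends the guest so the image is copied in a consistent state, which conflicts with the "never shut down" requirement but bounds the pause to the copy time; DRBD would avoid the pause entirely at the cost of the setup described above.

    ```python
    # Sketch: replicate one guest's definition and disk image to a peer host.
    import subprocess

    VM, PEER = "vm1", "kvmhost2"                 # placeholder names
    DISK = "/var/lib/libvirt/images/vm1.img"     # placeholder path

    subprocess.run(["virsh", "suspend", VM], check=True)
    try:
        xml = subprocess.run(["virsh", "dumpxml", VM], check=True,
                             capture_output=True, text=True).stdout
        with open("/tmp/vm1.xml", "w") as f:
            f.write(xml)
        subprocess.run(["scp", "/tmp/vm1.xml", f"{PEER}:/tmp/"], check=True)
        subprocess.run(["rsync", "-av", "--inplace", DISK, f"{PEER}:{DISK}"], check=True)
    finally:
        subprocess.run(["virsh", "resume", VM], check=True)
    # On the peer, "virsh define /tmp/vm1.xml" registers the copy for spin-up.
    ```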

    Read the article

  • SSD as primary or secondary drive on a small Linux server?

    - by Alex Martelli
    I'm pensioning off my 10-year-old home server and replacing it with an Ubuntu 10.04 box. The two storage devices are a Western Digital Caviar Green 2.0TB HD and an Intel X25-M 34nm Gen 2 80GB SATA II 2.5-inch SSD (the box has 8GB RAM and an i5 750, if it matters). I don't care much about boot times (since I don't plan to reboot all that often ;-); the most frequent, performance-demanding task will be (re)building large open source C or C++ software packages from source (as an open source contributor, I do that often). So, I thought I'd keep the SSD as the secondary drive and the HD as the primary one, using the SSD mostly for the files that can otherwise demand a lot of seeking (esp. in a parallel make). However, the friendly vendor (perhaps more experienced with Windows systems than with Linux ones) thinks the "normal" way to configure the machine would be with the SSD as the primary drive. I'm pretty rusty on configuring and tuning systems, so I thought I'd better double-check on SuperUser... thanks in advance for advice about this choice!

    Read the article

  • Nginx + WordPress + HHVM: Why isn't Batcache working? Would Varnish help even more?

    - by javipas
    I've heard great things about HHVM, so I've set up a copy of a WordPress blog (on another domain) with Nginx (with the PageSpeed module) and HHVM. Right now the benefits are obvious: on the same config, load times are between two and three times faster. I'm trying to speed things up a little more, so I've also installed Memcached and Batcache. I installed the memcached package, copied object-cache.php (Pastebin) into the root folder of the WordPress blog, then installed the Batcache plugin and copied the advanced-cache.php (Pastebin) file into the wp-content folder. Also, I've included the line define('WP_CACHE', true); in the wp-config.php file. It seems it doesn't work, though. If I quickly reload the page several times, Batcache should serve the cached page, but it doesn't. It's easy to check by reloading (Cmd+R in Chrome on OS X) the page several times and then viewing the page's source: under the <head> section I should see some Batcache stats, but they aren't there. I wonder if someone could give me a hint on this. On a side note, I don't know if I could add some other component to push performance even further. I'm thinking about Varnish, but I'm not sure whether it would be useless here, just another way of doing what I'm already doing. Any other components worth considering? (I'll test a CDN for images, minifying JS, and some other tricks as well, but I'm asking from the server perspective.)
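
    A quick way to check from outside the browser whether Batcache ever serves a cached copy, given that its stats land in an HTML comment in the <head> when it hits (the marker described above). The URL is a placeholder:

    ```python
    # Sketch: request the page twice and look for a Batcache marker in the
    # returned HTML. No marker on the second request means requests are
    # still going straight through to WordPress/HHVM.
    import urllib.request

    URL = "http://blog.example.com/"  # placeholder for the test blog

    for attempt in (1, 2):
        html = urllib.request.urlopen(URL).read().decode("utf-8", "replace")
        hit = "batcache" in html.lower()
        print(f"request {attempt}: {'batcache marker found' if hit else 'no batcache marker'}")
    ```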

    Read the article

  • What can I do to determine the root cause of a Windows server hanging/freezing?

    - by Aaronaught
    We set up a new server here a few weeks ago that I am informally responsible for managing. Almost everything works perfectly except for one thing: every so often it hangs without warning. To clarify: when I say hangs, I mean completely. None of the services respond and I'm unable to even get onto a local console - the display acts as though there's no VGA signal. One time, the server actually responded to pings; another time I got the "destination host unreachable" response; but most of the time the pings just time out, as one would expect for a hung server. Event logs don't show anything after a reboot. I don't mean that they don't show anything interesting; I mean that they don't show anything at all from before the failure occurs to after the reboot. And there are never any performance problems, strange errors, or other obvious signs of impending doom before it happens. I don't expect any easy answers here. What I'd like to know is how I can methodically determine the root cause of this problem, be it a misbehaving service, defective hardware, or something else. Is there any kind of logging I can set up that will help me get to the bottom of this? Any hardware diagnostics or remote monitoring? Anything else I can do to help me discover what's actually happening, or at least be able to eliminate what isn't wrong? Just to reiterate, I really don't want to start speculating about possible causes and take a trial-and-error approach, because it's going to be at least several days at a time before I would have conclusive results. I'm looking for solutions to reliably trace the problem to its source.

    Read the article

  • Is 2 GB of RAM better than 2.5 GB?

    - by pibboater
    My laptop has two slots for RAM, and currently has two 512 MB chips, for 1 GB. Windows XP is running terribly slow on it, so I want to upgrade the RAM. I could buy two 1 GB chips to replace both of the current 512 MB chips, to give me 2 GB of RAM. Or, the price is the same to buy one 2 GB chip, to replace just one of the 512 MB chips, and give me 2.5 GB total. The RAM it takes is PC2-4200 533MHz DDR2. What do you think would be better: buying two 1 GB chips so it can take advantage of dual-channel operation, or buying one 2 GB chip to end up with more total RAM but not dual-channel operation? Like I said, price is the same, so performance is the only consideration. I'm not doing anything especially intensive like video or photo editing -- just having multiple Office programs open, playing music, browsers, etc., but currently even opening the first application takes forever. If it matters, the laptop is a Toshiba Qosmio G25-AV513 running Windows XP Media Center SP3. Thanks! Kevin

    Read the article

  • Wireless keeps shutting off in Windows 7

    - by Nathan Adams
    I have Windows 7 Ultimate 32-bit installed on a Dell Latitude XT Tablet, and for the life of me I can't figure out this really weird problem. The symptom is that the wireless will disconnect from the AP, and if I tell it to scan again, it says there are no APs in the area. I do have another wireless card in the laptop, and if I disable the first one and enable the second, I am able to get onto the wireless; however, if I want to use the first card again I have to restart. I tried enabling/disabling the device; nothing will kick-start the wireless again on the first card without a restart. I even tried different drivers. It seems to be random, but it does occur more often when there is increased network activity (i.e. downloading a large file). The laptop doesn't seem to be overheating. I have tried the following: under "Change Advanced Power Settings" for the current power profile, I set "Wireless Adapter Settings" to "Maximum Performance"; and in Device Manager, I went to the card in question, opened the Advanced tab, and set the "Power Saving Mode" to "MAX_PSP". Both cards seem to exhibit the behavior after a while. The two cards are a Dell Wireless 1505 Draft 802.11n WLAN Mini-Card and a Gigabyte GN-WS30N 802.11n mini WLAN card. Does anyone have any ideas, or has anyone run into this before?

    Read the article

  • RAID 10 or RAID 5 for multiple VMs - what is the best choice?

    - by Lars Fastrup
    I have just ordered a new rig for my business. We do a lot of software development for Microsoft SharePoint and need the rig to run several virtual machines for development and test purposes. We will be using the free VMware ESXi for virtualization. For a start, we plan to build and start the following VMs, all with Windows Server 2008 R2 x64:
    - Active Directory server
    - MS SQL Server 2008 R2
    - automated build server
    - SharePoint 2010 server for hosting our public web site and our internal intranet for a few people (the load on this server is going to be quite insignificant)
    - 2x SharePoint 2007 development servers
    - 2x SharePoint 2010 development servers
    Beyond that we will need to build several SharePoint farms for testing purposes. These VMs will only be started when needed. The specs of the new rig are:
    - Dell R610 rack server
    - 2x Intel Xeon E5620
    - 48GB RAM
    - 6x 146GB SAS drives
    - Dell H700 RAID controller
    We believe the new server is going to make our VMs perform a lot better than our existing setup (2x Intel Xeon, 16GB RAM, 2x 500GB SATA in RAID 1), but we are not sure about the RAID level for the new rig. Should we put the six 146GB SAS drives in a RAID 10 configuration or a RAID 5 configuration? RAID 10 seems to offer better write performance and a lower risk of RAID failure, but it comes at the cost of less drive space (a quick comparison follows below). Do we need RAID 10, or would RAID 5 also be a good choice for us?
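
    For the comparison flagged above, the raw numbers for six 146GB drives, using the usual rules of thumb (RAID 10 yields n/2 drives of usable space and about 2 disk I/Os per random write; RAID 5 yields n-1 drives but about 4 I/Os per random write because of the parity read-modify-write):

    ```python
    # Sketch: usable capacity and write penalty for the two candidate layouts.
    n, size_gb = 6, 146

    raid10_usable = n // 2 * size_gb   # striped mirrors
    raid5_usable = (n - 1) * size_gb   # one drive's worth of parity

    print(f"RAID 10: {raid10_usable} GB usable, ~2 disk I/Os per random write")
    print(f"RAID 5 : {raid5_usable} GB usable, ~4 disk I/Os per random write")
    # -> RAID 10: 438 GB, RAID 5: 730 GB. RAID 5 buys ~292 GB extra at the
    #    cost of slower random writes and a longer, riskier rebuild.
    ```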

    Read the article

  • Looking for some IIS redirect help/ideas

    - by CoreyT
    Right now we have a site with a LOT of static ASP pages, such as www.site.com/123.asp. This is due to how our current site's CMS builds its pages by default. I don't have an exact count, but we have roughly 6000 ASP files in the site right now. We are in the middle of a redesign and restructuring of the site, and are looking to migrate to SEO-friendly URLs. The problem we're having right now is: what do we do to redirect the old pages to the new friendly URLs? I know how to do redirects; that is not the issue here. The questions I am coming up with right now are listed below.
    1. Is there a limit to the number of redirects in IIS?
    2. Would having even a few thousand redirects affect IIS performance?
    3. My understanding is that we would not be passing along page rank to the new URLs; is that true? (Not a major question; I can ask on SEO forums if nobody here is sure.)
    4. Would using something like the IIS URL Rewrite 2 module for IIS 7 help us out, or would I still need to define several thousand unique redirects in it? (A sketch of one approach follows below.)
    Our server right now is running Server 2003; however, in the redesign I would be open to migrating to Server 2008 R2 if there is a good case for it (e.g. the URL Rewrite module). Thanks for any guidance or help. I have been looking for a good way to do this for a while now and keep coming up with things that sound problematic and bad (such as having 6000 individual redirects).
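
    On question 4: URL Rewrite supports rewrite maps, so the thousands of old-to-new pairs can live in a single map consulted by one generic rule, rather than thousands of individual rules. A sketch that generates the map XML from a CSV of old/new paths; the file name and URLs are placeholders:

    ```python
    # Sketch: emit an IIS URL Rewrite <rewriteMap> from "old,new" CSV rows.
    # One rule can then 301 anything the map knows about, along the lines of:
    #   match url=".*"; condition {Redirects:{REQUEST_URI}} not empty;
    #   redirect to the matched value.
    import csv
    from xml.sax.saxutils import quoteattr

    with open("redirects.csv", newline="") as f:  # e.g. "/123.asp,/seo-friendly-url"
        pairs = list(csv.reader(f))

    print('<rewriteMap name="Redirects">')
    for old, new in pairs:
        print(f"  <add key={quoteattr(old)} value={quoteattr(new)} />")
    print("</rewriteMap>")
    ```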

    Read the article

  • Master File Table Corrupt, any way to save data?

    - by domen
    Hi. I've used search, but none of the results match my problem, so I had to ask a separate question. I installed Windows 7 RTM recently, and since then the partitions located on one of my HDDs have gone "crazy". They would "freeze" and not open in Explorer for some time (a minute or two, usually); sometimes none of the drive's partitions would show up until a reboot; and finally, one of those partitions started showing the "disk structure is corrupted and unreadable" warning, appeared in the Disk Management window as RAW, and chkdsk reported "mft corrupt". There was no important data on that partition, and I didn't have enough time to analyze the problem at the moment, so I just reformatted it and ran an antivirus scan on the system. After that the problem settled down for some time, but yesterday the problematic HDD vanished from the system again. After a reboot, chkdsk identified the MFT of four partitions as corrupt, and now they are all in the same condition as the one mentioned above. The difference is that the files stored on them are extremely important. Just for info: I upgraded from Win7 build 7077, but had some performance issues, so I reformatted the system drive and installed a fresh Win7 RTM on it. I've downloaded TestDisk and it shows all the partitions marked as NTFS (not RAW), but my knowledge of the program wasn't sufficient to obtain any other info from it :-) Here are images that could help describe the problem (sorry, I'm not allowed to post images or more than one hyperlink): http:// img22.imageshack.us/img22/5909/chkdskz.jpg http:// img198.imageshack.us/img198/5576/computeray.jpg I'm interested in whether there is a way to restore the MFT, or just to access the files so I can back them up before reformatting the drive. Thanks for your time. :) P.S. My reformatted drive is showing no problems; could there be a problem with Windows 7 itself? I googled, but with no results.

    Read the article

  • Flash Backed Write Cache (FBWC) without capacitor pack

    - by Martyn
    I bought an HP Smart Array P410 controller, and it is installed and working fine in an HP ProLiant MicroServer with 4 drives in two RAID 1 arrays. I didn't realise, however, that it came without any cache, so it would only work by writing straight to the disk, and the performance was horrible. So I then bought the 512MB Flash Backed Write Cache (FBWC) memory module, as I was under the impression that with FBWC I would not need a battery. I got this idea from a forum post: "What do you guys think of the choice between 'BBWC' (battery backed write cache) and 'FBWC' (flash backed write cache)? The flash-based ones use non-volatile memory so need no battery." After installing the cache module, however, the server pretty much won't boot. The P410 has a flashing amber light on it, and from the manual that doesn't sound good. I've managed to get into the onboard BIOS once, and even managed to boot the HP Array Configuration Utility (ACU) CD once, but every other time the server continually reboots once it gets to the POST screen and reads ARRAY INITIALIZING %%%. The one time I reached the ACU, it reported a problem with the cache module. To me it seems like the cache module is faulty; however, the supplier tells me: "Do you have an FBWC battery pack, p/n 587324-001, because that is required for the cache to work. If you have it, please complete an RMA form and we'll send a replacement / credit." Does this sound right to you? I've been ordering the parts from the US, and I don't want to spend $77 + $40 p&p on a battery and wait a week for shipping only to find the card is faulty; but I also don't want to send back a working card.

    Read the article

  • Windows/IIS Hosting :: How much is too much?

    - by bsisupport
    I have 4 Windows 2003 servers running IIS 6. These servers host a bunch of unique web sites (in that they are all different in build/architecture/etc.). The code behind these sites ranges from straight HTML to classic ASP to the 1.1/2.0/3.x flavors of .NET. Some (most) of the sites use a SQL backend, which is hosted on one or two different servers, not the IIS servers themselves. There is no virtualization on these servers and no load balancing for these particular sites. The problem I'm running into is coming up with some baseline metrics, basically a "baseline score", to know when a web server has reached its hosting limit. Today, some basic information about each server is used: how much bandwidth the server pumps out, hard drive space availability, and basic (very basic) RAM and CPU utilization (what it looks like at peak traffic times). I would be grateful if those of you who are 1000x smarter than I am could indulge me with your methods of managing IIS environments, whether performance-monitoring specifics, "score" determination like I'm attempting, or the obvious combination of both. Thanks in advance.
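
    As a starting point, the raw counters can at least be captured the same way on all four boxes and compared at peak versus off-peak; the scoring thresholds are the judgment call this question is really about. A sketch using the psutil package (an assumption, not something these servers already have):

    ```python
    # Sketch: snapshot the basic counters (CPU, RAM, disk, network) that a
    # "hosting headroom" score could be built from. Requires: pip install psutil
    import psutil

    cpu = psutil.cpu_percent(interval=5)   # averaged over 5 seconds
    ram = psutil.virtual_memory().percent
    disk = psutil.disk_usage("C:\\").percent
    net = psutil.net_io_counters()

    print(f"CPU {cpu}% | RAM {ram}% | disk {disk}% used | "
          f"sent {net.bytes_sent >> 20} MiB, recv {net.bytes_recv >> 20} MiB since boot")
    ```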

    Read the article

  • Distributed storage and computing

    - by Tim van Elteren
    Dear Serverfault community, after researching a number of distributed file systems for deployment in a production environment, with the main purpose of performing both batch and real-time distributed computing, I've identified the following list as potential candidates, mainly on maturity, license and support:
    - Ceph
    - Lustre
    - GlusterFS
    - HDFS
    - FhGFS
    - MooseFS
    - XtreemFS
    The key properties that our system should exhibit:
    - an open source, liberally licensed, yet production-ready solution, i.e. mature, reliable, community and commercially supported;
    - ability to run on commodity hardware, preferably designed for it;
    - high availability of the data, with most of the focus on reads;
    - high scalability, so operation over multiple data centres, possibly on a global scale;
    - removal of single points of failure through replication and distribution of (meta-)data, i.e. fault tolerance.
    The sensitivity points that were identified, and resulted in the following questions, are:
    - transparency to the processing layer / application with respect to data locality, i.e. knowing where data is physically located at a server level, mainly for resource allocation and fast processing, and therefore high performance: how can this be accomplished? Do you know from experience which solutions provide this transparency, and to what extent?
    - POSIX compliance, or conformance, is mentioned on the wiki pages of most of the solutions listed above. The question here is mainly: how relevant is support for the POSIX standard? Hadoop, for example, isn't POSIX-compliant by design; what are the pros and cons?
    - what about the difference between synchronous and asynchronous operation of a distributed file system? A synchronous distributed file system is preferable for reliability, but it also imposes certain limitations with respect to scalability. What would be, from your expertise, the way to go on this?
    I'm looking forward to your replies. Thanks in advance! :) With kind regards, Tim van Elteren

    Read the article

  • System Center 2012 VMM UI is very slow

    - by Grant
    I've recently set up System Center 2012 on a new Server 2008 R2 server which I'm using for virtual machines. Everything seems to be working fine, and the virtual machines are nice and fast. But the Virtual Machine Manager interface is always excruciatingly slow, sometimes taking up to 15 seconds to move between screens. It's very frustrating trying to use it when a task that involves just a couple of clicks ends up taking several minutes. Pages that have a lot of form fields seem to take the longest to load, such as the page to change the hardware settings of a virtual machine. Is this just normal performance for VMM? If not, where can I look to find what is slowing it down? Nothing else on the system seems to suffer. I can load and use Hyper-V Manager with no noticeable slowness. Even programs like Event Viewer that are usually rather slow seem to load fairly fast. Only the System Center programs seem slow. The server is a Dell R710, 2x 16-core Opteron 6274 processors, 96GB RAM. The OS drive is 2x 500GB 7.2k RPM SAS drives in RAID 1 (I opted for the less expensive 7.2k drives since pretty much everything is stored on the SAN). Am I just being impatient? Does anyone else use VMM 2012 and find it slow?

    Read the article

  • Server 2003 and SSL Certificates

    - by Keith Stokes
    I have a Windows 2000 domain with dozens of Windows 2000 servers and a few 2003 servers. Each server runs a custom app that talks to a third party using self-signed certificates. To help troubleshooting we've created a custom test app. The 2000 servers are able to talk within seconds. The 2003 servers take anywhere from 10 to 30 seconds using a domain account, and much less, usually under 5 seconds, using a local account. The only exception to the local-account performance is a new account, which is slow initially and then faster. If you leave the test app open and reconnect repeatedly, it talks in seconds. If you leave it open for somewhere between one and two hours, it reverts back to the previous 10 seconds, so obviously something is being cached. Installing the destination certificates in the local 2003 server store makes no difference. I've installed the certificates in AD, and that apparently makes domain accounts work in 9-12 seconds, versus the 30 seconds that was normal before. Manually clearing the certificate store on the 2003 server makes no difference. I'm at a loss as to where the certs might be cached, and whether I'm using some sort of domain certificate store that's hiding from me.

    Read the article

  • Can I boot up a virtual machine natively?

    - by Anshul
    My question is: is it possible to run a virtual machine natively on your hardware if you have installed the proper drivers etc.? In other words, can I use a VHD as a regular hard drive to boot from? The reason I want to do this is that I do both graphics-intensive and audio-intensive work, but my computer is not powerful enough to handle both at the same time, and many times I install a bunch of audio programs that I don't want affecting the stability of my graphics programs. Basically I want sandboxing between the two sets of applications. So I tried running the graphics-intensive programs in a VirtualBox VM and the audio-intensive work natively (simply because it's a pain to route ASIO audio devices in and out of VirtualBox). This kind of works: the graphics-intensive stuff is tolerable, but still relatively slow, because it's running inside a VM. My next idea was to just dual-boot and install the graphics and audio programs in separate partitions, but I frequently use them in tandem, so it wouldn't be practical to reboot my machine every time I need the other set of programs. But I could live with this scenario: when I need to do more audio-intensive work, I'll boot into the audio partition and run the graphics programs in a VM, and when I'm working heavily on the graphics side, I'll boot the graphics partition as a regular OS directly on the hardware. Is this possible, for example by booting up a VHD as a regular hard drive? Or by setting up dual-boot, and every time the audio partition is shut down, synchronizing the graphics VM's VHD with the native graphics partition? Is it practical, given the above scenario? And if it's not possible, barring buying another computer, can anyone suggest a best-of-all-worlds setup (those worlds being performance, sandboxing, and running in parallel) for the above scenario? Thanks in advance.
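
    On the "boot a VHD natively" part: Windows 7 Ultimate and Enterprise support native VHD boot, where a boot entry points straight at a .vhd file and the OS inside runs on the bare hardware with no hypervisor in the path (a VirtualBox .vdi would first need converting to .vhd). A sketch of the usual bcdedit flow, printed as a dry run rather than executed; the path is a placeholder, and {guid} stands for the GUID the /copy step echoes back:

    ```python
    # Sketch: the bcdedit steps for adding a native-boot VHD entry.
    # Review the printed commands, then run them in an elevated prompt.
    VHD = r"[D:]\vms\graphics.vhd"  # placeholder path

    steps = [
        'bcdedit /copy {current} /d "Graphics (native VHD)"',
        f"bcdedit /set {{guid}} device vhd={VHD}",
        f"bcdedit /set {{guid}} osdevice vhd={VHD}",
        "bcdedit /set {guid} detecthal on",
    ]
    for step in steps:
        print(step)
    ```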

    Read the article

  • Steps to deploy a custom routing protocol

    - by user134589
    I'm a Ph.D. student researching a service-centric networking architecture with resource allocation on a large scale. What I'm looking to do is extend an existing routing protocol like OSPF with extra fields and some new message types that I need for communication between nodes. I want to manipulate the cost of a network link, and I want paths to be calculated as in OSPFv2/v3, but using the cost that my algorithms have calculated. What I have: the source code of OSPF from Quagga. I am assuming I can edit this code however I want, including packet structures and creating new types. Yes, I am aware it won't be easy, but this is a six-year research project and I am eager to develop something new, to move forward. What I need: I would like to know how I can deploy the edited OSPF source files I have (written in C) on any type of server. I have a large testbed environment available with hundreds of virtual nodes and pretty much any OS out there. So if I want to test my extended protocol, how do I make all the nodes in a network use it to communicate? I do not understand which parts of the kernel I would need to edit here. I have searched for days now and I am unable to find how to deploy a non-standard routing protocol without the use of an application-level framework. If somebody could push me in the right direction that'd be awesome. Note: I need this to be a routing protocol and not an application, since I want it to work on top of the network layer for performance reasons. Thanks!
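
    On the kernel question specifically: Quagga's daemons run entirely in user space; ospfd computes routes and hands them to zebra, which installs them into the kernel's routing table (via netlink on Linux). So no kernel changes are needed; deployment amounts to building and starting the modified daemons on every node. A rough sketch with placeholder hostnames and paths, assuming passwordless ssh to the testbed nodes:

    ```python
    # Sketch: build and start a modified Quagga ospfd across testbed nodes.
    # zebra must run first; ospfd registers with it to install routes.
    import subprocess

    NODES = ["node01", "node02"]  # placeholder testbed hostnames
    BUILD = "cd ~/quagga-ospf && ./configure && make && sudo make install"
    START = "sudo zebra -d && sudo ospfd -d"  # -d: run as daemons

    for node in NODES:
        for step in (BUILD, START):
            subprocess.run(["ssh", node, step], check=True)
    ```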

    Read the article

  • Windows 7 Paging file apparently not being used

    - by Daniel F.
    I'm running Windows 7 Home Premium 32-bit on a mobo with 24GB RAM. Of those 24GB, 20GB are assigned as a RAM disk via ASRock XFastRAM, with the drive letter X assigned to it. On X:\ I'm storing the temporary files folder, as well as pagefile.sys. Pagefile.sys is 6GB in size. X:\ usually has around 14GB of free space, so the temporary files are negligible; it's mostly the browsers storing their caches there. Now my issue is that Firefox is crashing a lot on me; no error message pops up, but I know it's because it runs out of memory. I could kind of live with that, but now that I've switched from Eclipse to Android Studio, I know that I'm in trouble, because the Java VMs fail to allocate memory, and Android Studio, together with the Java instances it launches, is quite a memory hog. So I tried to figure out what's wrong, and apparently Windows isn't swapping memory out to the paging file. While my applications are crashing (Firefox) or failing to start (the Java VMs), the paging file constantly sits at only around 15% usage (checked with Performance Monitor); 15% equals roughly 1GB. I know that the correct solution would be to switch to 64-bit Windows, but I went with the 32-bit version because of driver issues I had about two years ago, and I guess I'll have them again if I reformat and install the 64-bit version. Also, the machine runs quite stably and the only issue is memory, so I'd like to use it as it is (with the apps installed and configured). Is there a way to make Windows use the paging file more efficiently? None of my processes require more than 1GB; I'd just like it to swap out some seldom-used stuff, like GoogleCrashHandler.exe, in order to have "more physical memory available". Is that possible?
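
    A note that may reframe the question: on 32-bit Windows each process gets its own virtual address space of roughly 2GB user-mode by default, regardless of how much RAM or page file exists. Firefox and a JVM can therefore die of address-space exhaustion while the page file sits nearly idle; more aggressive paging cannot raise that per-process ceiling. A ctypes sketch that shows the two numbers side by side:

    ```python
    # Sketch (Windows-only): query physical memory vs. this process's virtual
    # address space via GlobalMemoryStatusEx. On 32-bit Windows the virtual
    # figure is ~2 GiB however much RAM or page file the machine has.
    import ctypes

    class MEMORYSTATUSEX(ctypes.Structure):
        _fields_ = [("dwLength", ctypes.c_uint32),
                    ("dwMemoryLoad", ctypes.c_uint32),
                    ("ullTotalPhys", ctypes.c_uint64),
                    ("ullAvailPhys", ctypes.c_uint64),
                    ("ullTotalPageFile", ctypes.c_uint64),
                    ("ullAvailPageFile", ctypes.c_uint64),
                    ("ullTotalVirtual", ctypes.c_uint64),
                    ("ullAvailVirtual", ctypes.c_uint64),
                    ("ullAvailExtendedVirtual", ctypes.c_uint64)]

    status = MEMORYSTATUSEX()
    status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))
    gib = 1024 ** 3
    print(f"physical RAM visible to the OS: {status.ullTotalPhys / gib:.1f} GiB")
    print(f"virtual address space for this process: {status.ullTotalVirtual / gib:.1f} GiB")
    ```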

    Read the article

  • Upgrading memory in a laptop

    - by ulidtko
    I'm a bit confused by all the memory types and various bus frequencies of modern consumer PCs, so I'm requesting expert help on the subject. So far I'm confident of the following:
    - I have an Asus X51L laptop with an unknown set of configuration options.
    - The CPU supports PAE, so I still have a chance to extend the memory beyond 3 GiB, and the upper limit of the system is 8 GiB (?).
    - The laptop has two SODIMM slots, one of which is occupied by a 2 GiB module; the other one is empty.
    - The dmidecode and lshw tools consistently report a 533 MHz frequency for that module.
    The last one confuses me the most. I failed to find the characteristics of the northbridge in this laptop, and still can't figure out which DDR2 to look for. Is it DDR2-1066? Or, rather, PC2-8500/PC2-8600? Wouldn't a DDR2-800 module harm the system's performance? Which kind of modules should I look for in stores? (A worked mapping of the naming schemes follows below.)
    Update: I bought a 2 GiB DDR2-800 SODIMM, and it seems the system can't handle 4 GiB of memory. When installed by itself in either slot, both the new and the old module (which, by the way, happens to be marked GDDR2-677) work perfectly; i.e. any configuration resulting in 2 GiB works. When both modules are installed, though (totalling 4 GiB), the memtest86 tool produces horrible artifacts and crashes, and the system reboots; an Ubuntu system can be started, and I can even log into a Unity session, but the system reboots in this case too, from even a minor RAM load. So it's pretty obvious to me now that this laptop doesn't support 4 GiB of RAM or more.
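
    For the naming confusion flagged above, the mapping is mechanical: "DDR2-xxx" states megatransfers per second, while "PC2-yyyy" states peak bandwidth in MB/s over the 64-bit (8-byte) bus, rounded down to hundreds. So the 533 reported by dmidecode corresponds to PC2-4200, and a faster module such as DDR2-800 normally just downclocks to what the chipset supports. A worked sketch:

    ```python
    # Sketch: map DDR2 speed grades (MT/s) to PC2 module names (peak MB/s).
    for mt_s in (533, 667, 800, 1066):
        pc2 = mt_s * 8 // 100 * 100  # 8 bytes per transfer, floored to hundreds
        print(f"DDR2-{mt_s}: {mt_s} MT/s x 8 B = {mt_s * 8} MB/s -> PC2-{pc2}")
    ```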

    Read the article

  • Use external display from boot on Samsung laptop

    - by OhMrBigshot
    I have a Samsung RV511 laptop, and recently my screen broke. I connected an external screen, and it works fine, but only after Windows starts. I want to be able to use the external screen right from boot, in order to set the BIOS to boot from DVD, and then to install a different OS and also format the hard drive. Right now I can only use the screen once Windows loads. What I've tried:
    - opening up the laptop and disconnecting the display, to make it find only the external screen and use the VGA output as the default: didn't work;
    - using the Fn+key combo in the BIOS to switch to the external display: nothing;
    - looking around for ways to change the boot sequence without entering the BIOS, but it doesn't look like that's possible.
    Possible solutions?
    - A way to change the boot sequence without entering the BIOS?
    - Someone with the same brand/similar model to help me blindly keystroke the correct arrows/F5/F6 buttons while in the BIOS to change the boot sequence?
    - A way to force the external display to work from boot, through modifying the internal connections (I have no problem taking the laptop apart if needed, but please no soldering), through the BIOS, or through a program?
    Also, if I change the boot sequence without access to the external screen, would the Ubuntu 12.10 installation sequence attempt to use the external screen, or would I only be able to use it after Linux is installed and running? I'd really appreciate help; I can't afford to fix the screen for a few months, and I'd really like to get my computer back to decent performance! Thanks in advance!

    Read the article

  • Installed Bunch of New Fonts on Windows 7 - Now None Show Up and System Lags

    - by Josh Stodola
    So I went to install about 5,000 fonts on my Windows 7 64-bit machine. It was slow installing them, and I had to leave. I came back, my PC was shut down, and I had to go through the Windows recovery BS when I powered it on. Now my computer runs EXTREMELY slowly, and any program that has a font menu locks up my whole machine (nothing in Microsoft Office works). When I go to "Fonts" in the Control Panel, it says 0 items. I went through all of the font settings trying to get them to appear. Nothing helps. I tried to bring up the Character Map, and that froze up my machine too. How can I fix this? If I do not get this issue resolved soon, I am wiping this drive and going back to XP (and probably never purchasing another version of Windows again). I never had any issues with XP and have had nothing but performance problems since switching to Windows 7. My quad-core Intel Extreme with 8GB of RAM should never flinch at the kind of work that I do, yet something simple like playing a song off an external HD takes up to five seconds on Windows 7. Unbelievable that I had to pay for this crap!

    Read the article

  • All applications quit when printing on Mac OS X 10.5.8

    - by Tamany
    I recently ran a software update. I'm not sure if my problems are associated with this, but I'm pretty sure they are, as I printed successfully before the update. I checked the log at the time of printing:

        03/05/2010 22:03:15 Microsoft Word[697] *** -[NSCFString _getValue:forType:]: unrecognized selector sent to instance 0x17a82b50
        03/05/2010 22:03:15 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:17 [0x0-0x51051].com.microsoft.Word[697] Mon May 3 22:03:17 leopards-imac-2.local Word[697] <Error>: The function `CGPDFDocumentGetMediaBox' is obsolete and will be removed in an upcoming update. Unfortunately, this application, or a library it uses, is using this obsolete function, and is thereby contributing to an overall degradation of system performance. Please use `CGPDFPageGetBoxRect' instead.
        03/05/2010 22:22:09 Microsoft Word[697] *** -[NSCFString _getValue:forType:]: unrecognized selector sent to instance 0x1b036500

    Any thoughts on how to fix this?

    Read the article

  • Move EFI System Partition to another drive

    - by Pincopallino
    I had a Windows 8 installation on an HDD, using UEFI to boot. The HDD has the following GPT table (diskpart output translated from Italian):

        DISKPART> list partition

          Partition ###    Type        Size     Offset
          ---------------  ----------  -------  -------
          Partition 1      Recovery    300 MB   1024 KB
          Partition 2      System      100 MB   301 MB
          Partition 3      Reserved    128 MB   401 MB
          Partition 4      Primary     390 GB   529 MB
          Partition 5      Primary     540 GB   390 GB

    I recently bought an SSD, connected it, and installed a fresh Windows 8. Now I have a working dual boot, but the EFI System Partition is on the HDD instead of the SSD. Here's the SSD's partition list:

          Partition ###    Type        Size     Offset
          ---------------  ----------  -------  -------
          Partition 1      Reserved    128 MB   1024 KB
          Partition 2      Primary     221 GB   129 MB

    I think the best solution would be to have it on the SSD, for two reasons. The first is performance (I guess it would be a little faster on the SSD, given an HDD's spin-up time, but I may be wrong about that). The second is consistency: as I plan to use only the Windows 8 installation located on the SSD, and I'm probably going to erase the system partition on the HDD to use it as a data-storage drive, I think the boot partition should be on the same drive as the OS. So the question is: how do I move the EFI System Partition to the SSD?

    Read the article
