Search Results

Search found 15939 results on 638 pages for 'low memory'.


  • Ubuntu 64-bit vs 32-bit

    - by tukushan
    Is it worth installing the Ubuntu 9.10 64-bit version over the 32-bit x86 version? I will get the ability to address more than 4 GB of memory, but other than that, how does the 64-bit version fare in terms of performance and stability?

    Read the article

  • How to optimally configure memcache running on 16 cores 144G ram server?

    - by Ivko Maksimovic
    Memcache is the only important app running on the server. The setup:

    - Server has 16 cores and 144 GB RAM
    - Memcache is given 135 GB
    - Memcache runs at 32 threads
    - Gigabit network; tests show at least 300 Mbit/s available on the network port
    - 600 connections
    - 3000 requests per second

    Memcache (memory) usage is at about 50%, so it's definitely not full. As we increase the number of requests to the server, requests slow down (from 8 ms to 100 ms per request), but server load remains 0.00. We suspect this can be solved by adjusting the configuration, but we don't understand many of the configuration parameters (besides, maybe, the number of threads). Any ideas?
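
    A minimal sketch of the startup flags in play here, assuming a stock memcached build (the cache size and thread count mirror the poster's numbers; the -c connection limit and the memcache user are illustrative assumptions):

        # 135 GB of cache (-m is in megabytes, 135 * 1024 = 138240),
        # 32 worker threads, and a raised connection limit
        memcached -d -u memcache -m 138240 -t 32 -c 4096

    With the load average at 0.00 the threads are not the bottleneck, so the parameters worth revisiting first are probably the connection limit (-c) and the network path rather than the memory settings.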

    Read the article

  • looking for the best power supply -- building computer [closed]

    - by fello
    What would be an appropriate power supply and form factor for the specifications below?

    - CPU: Intel Core i7-950 Bloomfield 3.06 GHz LGA 1366 130W Quad-Core
    - Motherboard: ASRock X58 Extreme 3 LGA ATX
    - Hard drive: Seagate Barracuda 7200.11 ST31500341AS 1.5 TB 7200 RPM SATA 3.0 Gb/s 3.5"
    - Memory: Kingston HyperX 8 GB (2 x 4 GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800)

    Read the article

  • Excel freezes when copying / cutting to paste elsewhere

    - by Barry
    When cutting/copying some cells to paste them into another sheet/page, Excel sometimes freezes/locks up and fades out; at the top of the window it says "(Not Responding)". Eventually I must click 'X' to close the program. Windows offers to wait for the program to respond, but it never does; it just does nothing until I finally close it, at which point it offers to recover files etc. Is there an issue with memory here? What can I do to stop it locking up?

    Read the article

  • What are the differences between the "generic" and "server" kernel images provided by Ubuntu?

    - by dcrosta
    In particular, I'm wondering if there are any patches or config adjustments made to the disk cache size in the server edition. I'm running on a small system (256 MB RAM), and would like to experiment with keeping the disk cache size smaller so that there's more memory available for applications. I've found this page on Ubuntu's website, but it neither answers my question nor covers the 9.04 release.
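
    One way to check this empirically is to diff the shipped kernel configs. A minimal sketch, assuming both flavours are installed and using 9.04's 2.6.28 kernel (the exact version suffixes are assumptions; adjust to what is actually in /boot):

        # compare the generic and server configurations, ignoring comment lines
        diff <(grep -v '^#' /boot/config-2.6.28-11-generic) \
             <(grep -v '^#' /boot/config-2.6.28-11-server)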

    Read the article

  • Excel 2007 charts disappearing

    - by AppsByAaron
    I have an Excel 2007 file with macros and VB (.xlsm), and one of the worksheets has charts. When I open the file those charts are shown; however, when I Ctrl+Scroll to zoom in, the charts vanish. I need to be able to see the charts so I can move/resize them. Any help is appreciated. System:

    - Windows XP Pro with the latest SP
    - Over 3 GB memory
    - Office 2007 Pro

    Read the article

  • Are these tools a gimmick?

    - by dotnetdev
    Hi, Are tools which claim to let you use all your CPU cores (e.g. http://www2.ashampoo.com/webcache/html/1/product_2_0061_GBP.htm) and tools which help you to regain memory (I can't think of any just yet, but I've seen plenty) a gimmick? Do these tools really work? Thanks

    Read the article

  • how to set up a git repository which can be accessed by network in ubuntu 12.10

    - by hguser
    Now we want to set up a private git repository on ubuntu 12.10 so that other developers can access it through the local network. So far I can only create a repository using git init, for example:

        cd myproject
        git init

    This creates the .git directory, but I do not know how to make it accessible over the network at something like:

        git://192.168.1.1/myproject/.git

    Any idea? BTW, I have tried:

        git init --bare

    which gives me an error on git add: "fatal: malloc, out of memory"
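
    For the network part, a minimal sketch using git's stock daemon (the paths and the IP address are illustrative assumptions):

        # create a bare repository to share; bare repos have no working tree,
        # so "git add" is run in a normal clone, not here
        git init --bare /srv/git/myproject.git

        # serve everything under /srv/git over the git:// protocol
        git daemon --base-path=/srv/git --export-all --reuseaddr --detach

        # from another machine on the LAN
        git clone git://192.168.1.1/myproject.git

    The same repository can also be cloned over ssh with no daemon at all (git clone user@192.168.1.1:/srv/git/myproject.git), which is often simpler on a private network. Note that git:// as served above is unauthenticated and read-only by default.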

    Read the article

  • Resource consumption of FreeBSD's jails

    - by Juan Francisco Cantero Hurtado
    Just out of curiosity. An example machine: a dedicated amd64 server with the latest stable version of FreeBSD and UFS for the partitions. How many resources does FreeBSD consume for each empty jail? I mean, I don't want to know the resource consumption of a jailed server or whatever, just the overhead of each jail. I'm especially interested in CPU, memory and IO. For a few jails the overhead is negligible, but imagine a server with 100 jails.

    Read the article

  • 7-Zip compression on multi-core computers

    - by Peter Mortensen
    Does 7-Zip take advantage of multiprocessor or multi-core systems when compressing? For example, would there be a close to 16 times speed-up on a 16-core system, assuming no disk or memory bottlenecks? Or is it limited to 2 threads (a 2 times speed-up on systems with more than one CPU or core)?

    Read the article

  • Xen command xl doesn't create a vm but xend/xm does

    - by ineff
    I'm a newbie to Xen, and I've recently installed Xen 4.2 from source on my system. I've found a strange thing: I have a VM that works fine when I start it via the command "xm create machine.cfg", but if I use "xl create machine.cfg" it gives me the following error:

        xc: error: panic: xc_dom_core.c:442: xc_dom_alloc_segment: segment ramdisk too large (0x4ba 0x2000 - 0x1bd9 pages): Out of memory
        libxl: error: libxl_dom.c:208:libxl__build_pv: xc_dom_build_image failed: Invalid argument
        cannot (re-)build domain: -3
        xenconsole: Could not read tty from store: No such file or directory

    What could be the problem? Any idea?

    Read the article

  • Hardware for multipurpose home server

    - by Michael Dmitry Azarkevich
    Hi guys, I'm looking to set up a multipurpose home server and hoped you could help me with the hardware selection. First of all, the services it will provide:

    - Hosting a MySQL database (for training and testing purposes)
    - FTP server
    - Personal mail server
    - Home media server

    With this in mind I've done some research and found some viable solutions:

    - A standard PC with the appropriate software (either second hand or new)
    - A non-solid-state mini-ITX system
    - A solid-state, fanless mini-ITX system

    I've also noted the pros and cons of each:

    - A standard second-hand PC with old hardware would be the cheapest option. It could also have lacking processing power, not enough RAM and generally faulty hardware, plus huge power consumption, heat generation and noise levels.
    - A standard new PC would have top-notch hardware and will stay that way for quite some time, so it's a good investment. But again, the main problems are power consumption, heat generation and noise levels.
    - A non-solid-state mini-ITX system would have the advantages of lower power consumption, lower cost (as far as I can see) and long-lasting hardware, but it will generate noise and heat, which will be even worse because of the size.
    - A solid-state, fanless mini-ITX system would have all the advantages of a non-solid-state mini-ITX but with minimal noise and heat. The main disadvantage is the read/write limitations of flash memory.

    All in all I'm leaning towards a non-solid-state mini-ITX because of the read/write issues of flash memory. So, after this overview of what I do know, my questions are:

    - Are all these services even providable from a single server? To my best understanding they are, but then again, I might be wrong.
    - Are any of these solutions viable? If yes, which one is best for my purposes? If not, what would you suggest?

    Also, on a more software-oriented note: OS-wise, I'm planning to run Linux. I'm currently considering four options I've been recommended: CentOS, Gentoo, DSL (Damn Small Linux) and LFS (Linux From Scratch). Any thoughts on this? Any other distro you would recommend? Regarding FTP services, I've heard good things about FileZilla; does anyone have experience with it? Do you recommend it, or something else? Regarding the mail service, I know nothing about this except that it exists; any software you recommend for this task? Home media: same as the mail service, any recommended software? Thank you very much.

    Read the article

  • A Versatile Physical Server

    - by Paul
    How does one judge potential memory and processor needs for Linux web servers? Specifically, given:

    - A Debian or Ubuntu OS
    - Running a web server (apache2), and
    - A database (MySQL), and
    - A DNS server (bind), and
    - Being used by up to 100 concurrent users, at some points each downloading high-resolution (0.5 to 1 MB) images via a web app.

    How much should one budget in terms of RAM, type of processor(s), and number of cores? Thanks!

    Read the article

  • negative regexp in Squirm (for Squid). Possible?

    - by alex8657
    Has anyone managed to do a negative regexp (or part of one) with Squirm? I tried negative-lookahead constructs and if-then-else regexps, but Squirm 1.26 fails to understand them. What I want to do is simply:

    - If the url begins with 'http://' and contains 'account', then rewrite/redirect to 301:https://
    - If the url begins with 'https://' and does NOT contain 'account', then rewrite/redirect to 301:http://

    So far I do that using 2 lines of perl, but Squirm redirectors would take less memory. (A sketch of the equivalent logic appears below.)
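
    For comparison, a minimal sketch of that logic as a plain shell redirector for Squid (this assumes the classic redirector protocol, where the helper reads one "URL client ident method" line per request on stdin and prints the rewritten URL, or "301:URL" for a redirect, on stdout):

        #!/bin/sh
        while read url rest; do
          case "$url" in
            http://*account*)  echo "301:https://${url#http://}" ;;  # force https on account pages
            https://*account*) echo "$url" ;;                        # already https, leave alone
            https://*)         echo "301:http://${url#https://}" ;;  # no 'account': back to http
            *)                 echo "$url" ;;                        # anything else untouched
          esac
        done

    The branch ordering is what encodes the "does NOT contain" condition: the https://*account* branch catches the exception before the general https://* rule fires.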

    Read the article

  • How to find the process(es) which are hogging the machine

    - by Aaron Digulla
    Scenario: all of a sudden, my computer feels sluggish. The mouse moves but windows take ages to open, etc. uptime says the load is 7.69 and rising. What is the fastest way to find out which process(es) are the cause of the load? "top" and similar tools aren't the answer, because they show either CPU or memory usage, but not both at the same time. What I need is a single command which I might be able to type as it happens; something that will figure out any of:

    - The system is trying to swap 8 GB of RAM to disk because of process X, or
    - process X seeks all over the disk, or
    - process X uses 400% CPU.

    So what I'm looking for is iostat, htop/atop and similar tools rolled into one, with output like this:

        1235 cp        - disk thrashing
        87   chrome    - uses 2 GB of RAM
        137  nfs_bench - uses 95% of the network bandwidth

    I don't want a tool that gives me some numbers which I can analyze, but a tool that tells me exactly which process causes the current load. Assume that the user in front of the keyboard barely knows how to write "process", and is quickly overwhelmed when it comes to "resident size", "virtual memory" or "process life cycle". My argument goes like this: a user notices a problem. There can be thousands of reasons ... well, almost :-) The user wants to know the source of the problem. The current solutions give me lots of numbers, and I need to know what these numbers mean. What I'm looking for is a meta tool. 99% of the data is irrelevant to the problem. What the tool should do is look for processes which hog some resource and list only those, along with "this process needs a lot of CPU, this produces many IRQs, this process allocates a lot of RAM (and it's still growing)". This would be a relatively short list, and it would be much simpler for someone new to locate the culprit from it than from the output of, say, htop, which gives me about 5000 numbers but requires me to fold multi-threaded processes myself (I have 50 lines which say VIRT 2750M but only 16 GB of RAM; the machine ought to swap itself to death, but of course this is a misinterpretation of the data that can happen quickly).
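
    In the absence of such a meta tool, a minimal triage sketch with stock utilities (standard on most Linux systems; the --sort flags are GNU ps options, and iostat comes from the sysstat package):

        # top CPU and top memory consumers, one snapshot each
        ps -eo pid,comm,pcpu,pmem --sort=-pcpu | head -n 10
        ps -eo pid,comm,pcpu,pmem --sort=-pmem | head -n 10

        # per-device I/O pressure, refreshed every second
        iostat -x 1

        # swap traffic (si/so) and run-queue length, refreshed every second
        vmstat 1

    As a rough reading: heavy si/so columns in vmstat point at swapping, and a disk pinned near 100% utilisation in iostat points at I/O, which together cover the "swap 8 GB to disk" and "seeks all over the disk" cases above.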

    Read the article

  • 256MB VPS on XEN

    - by user63410
    I am thinking of getting a 256 MB server on Xen, but I'm not sure it is capable of handling this setup: Varnish + Nginx + eAccelerator + PHP-FPM + MySQL + mail/FTP. I've tried this setup on OpenVZ and it was partly disastrous, especially with Varnish in the equation. I've heard that Xen is better at memory management, sometimes shaving off 50 MB or more in comparison to OpenVZ setups... if anyone has any helpful suggestions / input, please let me know. Thanks

    Read the article

  • Anyone tried boosting Windows performance by putting Swap File on a Flash drive?

    - by Clay Nichols
    Windows Vista introduced ReadyBoost, which lets you use a flash drive as a third type of memory (after RAM and HD). It occurred to me that I could boost performance on an old PC here with Win XP (32-bit, maxed at 4 GB RAM) by putting its swap file (page file) on a flash drive. (Now, before anyone comments: apparently flash drives (10-30 MB/s transfer rates) are slower than HDDs (100+ MB/s); I'm asking about that as a separate question on this forum.)

    Read the article

  • How does the Cloud compare to Colocation? And development too

    - by David
    Currently I/we run a SaaS web application where each subscriber has their own physical instance of the application in addition to their own database. The setup has each web application instance deployed on two different IIS boxes, both for load-balancing and redundancy (the machines have their Windows Update install times 12 hours apart, for example). Databases are mirrored on two different SQL Server 2012 machines with AlwaysOn for uptime. I don't make use of SQL Server clustering (as it doesn't provide storage-level failover: we don't have a shared storage box). Because it's a Windows setup it means there are two Domain Controllers (we cheat: they're both Mac Minis, 17W each, which keeps our colo power costs low). Finally there's also an Exchange server (Mailbox, Hub Transport and Client Access); one of the SQL Servers also doubles up as an Exchange Hub Transport.

    Running costs are about $700 a month for our quarter-rack colocation (which includes power and peering/transfer), then there's about $150 a month for SPLA licensing, so $850 a month in total. Then there's the hard-to-quantify cost of administration, but I reckon I spend a couple of hours a week checking in on the servers: reviewing event logs, etc.

    I keep getting bombarded by ads and manufactured news stories about how great "the cloud" is. Back in 2008 when the cloud was taking off I was reading up about the proper "cloud" services like Google AppEngine, where you write in Python against Google's API and that's how they scale your application across servers and also use their database provider for scaling storage. Simple enough to understand. Then came along Amazon, and I understand how Amazon Storage works, but I'm not sure how Amazon Compute works: web application pages don't take much CPU time to compute, and how do you even quantify usage anyway?

    Finally, RackSpace gets in on the act and now I'm really confused. RackSpace advertises "Cloud" SQL Server 2012 available for about "$0.70 per hour". Going by how they advertise it, I thought the "hour" meant the sum of CPU time, IO blocking time, maybe time spent transferring data, so for a low-intensity application that works out pretty cheap, then? Nope. I went on to a sales chat window and spoke to one of their advisors. They told me the $0.70/hour was actually for every hour the SQL Server is running... but who wants a SQL Server for only a few hours? You're going to need it available 24 hours a day for months on end. $0.70 * 24 * 31 works out at $520 a month, which is ridiculously expensive for SQL Server; an SPLA license for SQL Server is only $50 a month or so. That $520 a month does not include "fanatical support", and you also need to stack the cost of the host Windows server instance on top.

    From what I can tell, RackSpace's "Cloud" products seem like a cynical rebranding of an overpriced VPS service, priced by the hour. I have the same confusion about Windows Azure, which uses similar terms to describe the products available, but I think that's because Azure offers both traditional shared webhosting in addition to their own APIs you can target for scalable applications.
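
    For what it's worth, that monthly figure checks out; a one-liner (assuming bc is available):

        # always-on instance at $0.70/hour over a 31-day month
        echo "0.70 * 24 * 31" | bc    # prints 520.80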

    Read the article

  • Is the XP VMM a bottleneck on a multi core machine?

    - by JeffV
    I have a dual Xeon hex-core machine (Windows XP 32-bit) running an IO-intensive application. I am seeing a hardware driver (1/2 user mode, 1/2 kernel, streaming data) that is generating 6k delta page faults per second. When other applications load or allocate large amounts of memory, the driver's hardware buffer gets an underrun (the application not feeding it fast enough). Could this be because the kernel is only using one core to service page fault interrupts?

    Read the article

  • How to partition my two hard drives

    - by Thoma Bigueres
    I've got a computer running Windows Server 2008 R2, on which I have:

    - 60 GB disk C: NTFS (Disk 0)
    - 40 GB of unallocated space (Disk 1)

    I would like to partition my disks so that I'll have:

    - 30 GB disk C:
    - 70 GB disk D:

    Can you help me with the steps to reach this configuration? I saw that first of all I should merge the two volumes into one, but when I right-click the C: volume, I can't click the "Extend Volume" link. Do you know how I can overcome this? Thanks a lot

    Read the article

  • Need help in using svn on ubuntu 9.10

    - by michael
    Hi, I have installed svn on ubuntu 9.10, but when I try to use svn to check out code for an open source project, I get this error:

        $ svn co svn://svn.valgrind.org/valgrind/trunk valgrind
        svn: Berkeley DB error for filesystem '/home/svn/repos/valgrind/db' while opening 'nodes' table: Cannot allocate memory
        svn: bdb: Lock table is out of available locker entries

    Can you please tell me how to fix it? Thank you.

    Read the article
