Search Results

Search found 25984 results on 1040 pages for 'disk load'.

Page 401 of 1040

  • Will Windows 7 work at all on my old Toshiba [closed]

    - by andrew
    Windows 7 requires the following specifications:
        1 gigahertz (GHz) or faster 32-bit (x86) or 64-bit (x64) processor
        1 gigabyte (GB) RAM (32-bit) or 2 GB RAM (64-bit)
        16 GB available hard disk space (32-bit) or 20 GB (64-bit)
        DirectX 9 graphics device with WDDM 1.0 or higher driver
    Will it work at all on my old Toshiba Satellite A100 PSAA8C-SK400E?
        Intel® Core™ Solo processor T1350 (1.86GHz, 533MHz FSB, L1 Cache 32KB/32KB, L2 Cache 2MB)
        Standard Memory: 2x512 MB DDR2
        Intel® Graphics Media Accelerator 950 with 8MB-128MB
    The main problem I can see is that the graphics is not up to it.

    Read the article

  • OpenVPN server throws an "access denied" error

    - by HackToHell
    OpenVPN refuses to start up and exits with this error ever since I upgraded Ubuntu from 11.04 to 11.10:

        Dec 14 19:12:38 oogle ovpn-server[32150]: OpenVPN 2.2.0 i686-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Jul 4 2011
        Dec 14 19:12:38 oogle ovpn-server[32150]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
        Dec 14 19:12:38 oogle ovpn-server[32150]: Note: cannot open openvpn-status.log for WRITE
        Dec 14 19:12:38 oogle ovpn-server[32150]: Note: cannot open ipp.txt for READ/WRITE
        Dec 14 19:12:38 oogle ovpn-server[32150]: Diffie-Hellman initialized with 1024 bit key
        Dec 14 19:12:38 oogle ovpn-server[32150]: Cannot load private key file server.key: error:0200100D:system library:fopen:Permission denied: error:20074002:BIO routines:FILE_CTRL:system lib: error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib
        Dec 14 19:12:38 oogle ovpn-server[32150]: Error: private key password verification failed
        Dec 14 19:12:38 oogle ovpn-server[32150]: Exiting
        Dec 14 19:12:46 oogle ovpn-server[32201]: OpenVPN 2.2.0 i686-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Jul 4 2011
        Dec 14 19:12:46 oogle ovpn-server[32201]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
        Dec 14 19:12:46 oogle ovpn-server[32201]: Note: cannot open openvpn-status.log for WRITE
        Dec 14 19:12:46 oogle ovpn-server[32201]: Note: cannot open ipp.txt for READ/WRITE
        Dec 14 19:12:46 oogle ovpn-server[32201]: Diffie-Hellman initialized with 1024 bit key
        Dec 14 19:12:46 oogle ovpn-server[32201]: Cannot load private key file server.key: error:0200100D:system library:fopen:Permission denied: error:20074002:BIO routines:FILE_CTRL:system lib: error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib
        Dec 14 19:12:46 oogle ovpn-server[32201]: Error: private key password verification failed
        Dec 14 19:12:46 oogle ovpn-server[32201]: Exiting
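
    Since the log complains about fopen: Permission denied on server.key rather than a bad passphrase, one thing to check after the upgrade is whether the key is still readable by the account OpenVPN starts as. A minimal diagnostic sketch, assuming the config lives under /etc/openvpn (adjust paths to your layout):

        # Where does the config point for the key, and does it drop privileges?
        grep -E '^(key|user|group|daemon)' /etc/openvpn/server.conf

        # Who can actually read the key file and traverse its directory?
        ls -ld /etc/openvpn /etc/openvpn/server.key

        # Conservative ownership/permissions are a common first fix if they changed:
        sudo chown root:root /etc/openvpn/server.key
        sudo chmod 600 /etc/openvpn/server.key
        sudo service openvpn restart

    If an AppArmor profile confines openvpn on your system, a denial would surface as the same fopen error, so grepping /var/log/syslog for "DENIED" lines mentioning openvpn rules that out quickly.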

    Read the article

  • "Windows cannot find" file when opening Excel spreadsheet

    - by DanH
    For all of my Excel spreadsheets, when I attempt to open them (by double-clicking in Explorer) I get the message "Windows cannot find C:...". The files are there, and are valid zip files as seen by 7-Zip. There are no apparent lock files in the directories. I did just install Norton 360 over the weekend (replacing Kaspersky), but the Norton log shows no events related to Excel. However, while installing Norton I did reboot with some Excel files open. Presumably something is hosed in my Excel configuration but I don't know what.

    Update (before actually posting): I found an article that suggested turning off the Advanced option "Ignore other applications that use DDE", then doing excel.exe /unregister followed by excel.exe /register. I tried this but I suspect that the two Excel calls were ignored (Excel opened, but no obvious change). With that option off the spreadsheets load OK, but not with it on. And, curiously, spreadsheets load OK with the option on or off if I open Excel first and then open the spreadsheet in it. Does anyone have any idea what effect leaving that option off will have?

    Update 2: I tried running the "repair" option. It said it corrected a couple of config things (without saying what they were), but I still get a failure if I double-click an Excel file with the "Ignore other applications..." option checked.

    Update 3: I managed to fix this problem, but failed at the time to come back and say what I did, and now I can't remember for sure. But I think it had something to do with "Options"/"Save" and some of the values there. Something to do with AutoRecover, perhaps. (Possibly there was a file in recovery and I had to specify "Disable AutoRecover for this workbook" to let start-up get past it. Or perhaps the AutoRecover file location was hosed.) Anyway, if it happens to someone else and you find the fix, post it below and I'll mark it answered.

    Read the article

  • Xorg eating up too much RAM on Ubuntu 9.10 box

    - by Yang
    Xorg is eating up 444MB of 2GB total RAM on my Ubuntu 9.10 x86_64 machine with nvidia drivers installed for the nvidia G86 (GeForce 8300 GS). top shows:

        top - 18:21:41 up 6 days, 2:40, 9 users, load average: 0.46, 1.12, 1.22
        Tasks: 266 total, 3 running, 262 sleeping, 1 stopped, 0 zombie
        Cpu(s): 8.4%us, 2.0%sy, 0.0%ni, 89.1%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 2055736k total, 1965136k used, 90600k free, 3952k buffers
        Swap: 979924k total, 979908k used, 16k free, 102636k cached

          PID  USER  PR  NI  VIRT   RES   SHR   S  %CPU  %MEM  TIME+     COMMAND
         1432  root  20   0  1154m  442m  7492  S     8  22.0  32:56.97  Xorg
        18462  yang  20   0  1001m  219m  8356  S     0  10.9   5:13.25  chrome
        24099  yang  20   0   865m   83m   13m  S     0   4.2   0:06.91  chrome

    xrestop shows:

        xrestop - Display: :0.0
        Monitoring 47 clients. XErrors: 0
        Pixmaps: 40430K total, Other: 142K total, All: 40573K total

        res-base  Wins  GCs  Fnts  Pxms  Misc  Pxm mem  Other  Total  PID   Identifier
        1c00000   21    46   1     19    697   9128K    18K    9146K  3169  x-nautilus-desktop
        1000000   4     3    0     17    194   9000K    4K     9004K  3134  gnome-settings-daemon
        1600000   51    2    1     25    1100  7648K    28K    7676K  ?     compiz

    For comparison, here's my other Ubuntu box, which also has compiz etc. enabled but with ATI RV370 (Radeon X300SE):

        top - 18:18:18 up 58 days, 4:27, 9 users, load average: 0.00, 0.00, 0.00
        Tasks: 224 total, 1 running, 223 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.3%us, 0.3%sy, 0.0%ni, 98.8%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 1024964k total, 987124k used, 37840k free, 247012k buffers
        Swap: 2048276k total, 94296k used, 1953980k free, 264744k cached

          PID  USER  PR  NI  VIRT   RES   SHR   S  %CPU  %MEM  TIME+      COMMAND
        24324  yang  20   0  61936   35m  6364  S     0   3.5    4:35.84  nxagent
         1768  ntop  20   0   190m   32m  5388  S     1   3.2  283:36.15  ntop
         1178  root  20   0  60588   29m  1788  S     0   3.0    5:48.89  console-kit-dae
        ...
         1315  root  20   0   343m  4956  4020  S     0   0.5    3:43.87  Xorg

    Any ideas on how to get to the bottom of this? (i.e. not "Log out"/"Reboot") Thanks in advance.
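
    One low-effort way to tell whether the X server itself is leaking (as opposed to a client holding pixmaps, which xrestop already accounts for) is to sample its resident size over time while the desktop is in use. A rough sketch, assuming a single Xorg process:

        # Log Xorg's RSS/VSZ every minute; a steady climb on an idle desktop points
        # at the server/driver, a climb tied to one client points at that client.
        while true; do
            date +'%F %T'
            ps -o pid=,rss=,vsz=,cmd= -p "$(pidof Xorg)"
            sleep 60
        done | tee -a xorg-mem.log

    If the growth instead tracks a specific client in xrestop, restarting just that client (nautilus, compiz, chrome) frees its X resources without logging out.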

    Read the article

  • How to recover the ubuntu system?

    - by Hoang
    I installed an Ubuntu virtual machine on VMware. At one point the disk was full while the system was installing some updates, and it quit without giving any message. Now the system is broken and I cannot even launch Firefox to download data. How can I recover this virtual machine to a previous state?
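
    Rolling back to "a previous state" is only possible if a VMware snapshot was taken before the failed update; if there is none, the more realistic path is to free disk space and let the interrupted package run finish. A rough sketch from the guest's recovery console (boot the VM, pick the recovery-mode entry in GRUB, drop to a root shell); these are standard apt/dpkg recovery steps, nothing VMware-specific:

        # Recovery shells often mount the root filesystem read-only.
        mount -o remount,rw /

        # Free space first; the package cache is usually the quickest win.
        apt-get clean
        df -h /

        # Then let dpkg finish whatever the full disk interrupted.
        dpkg --configure -a
        apt-get -f install

    If the guest will not reach a recovery shell at all, booting the VM from an Ubuntu live ISO and copying the data off the virtual disk is the fallback.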

    Read the article

  • Nginx + uWSGI + Django performance stuck on 100rq/s

    - by dancio
    I have configured Nginx with uWSGI and Django on CentOS 6 x64 (3.06GHz i3 540, 4GB), which should easily handle 2500 rq/s, but when I run an ab test (ab -n 1000 -c 100) performance stops at 92-100 rq/s.

    Nginx:

        user nginx;
        worker_processes 2;
        events {
            worker_connections 2048;
            use epoll;
        }

    uWSGI (Emperor):

        /usr/sbin/uwsgi --master --no-orphans --pythonpath /var/python --emperor /var/python/*/uwsgi.ini

        [uwsgi]
        socket = 127.0.0.2:3031
        master = true
        processes = 5
        env = DJANGO_SETTINGS_MODULE=x.settings
        env = HTTPS=on
        module = django.core.handlers.wsgi:WSGIHandler()
        disable-logging = true
        catch-exceptions = false
        post-buffering = 8192
        harakiri = 30
        harakiri-verbose = true
        vacuum = true
        listen = 500
        optimize = 2

    sysctl changes:

        # Increase TCP max buffer size setable using setsockopt()
        net.ipv4.tcp_rmem = 4096 87380 8388608
        net.ipv4.tcp_wmem = 4096 87380 8388608
        net.core.rmem_max = 8388608
        net.core.wmem_max = 8388608
        net.core.netdev_max_backlog = 5000
        net.ipv4.tcp_max_syn_backlog = 5000
        net.ipv4.tcp_window_scaling = 1
        net.core.somaxconn = 2048
        # Avoid a smurf attack
        net.ipv4.icmp_echo_ignore_broadcasts = 1
        # Optimization for port use for LBs
        # Increase system file descriptor limit
        fs.file-max = 65535

    I did sysctl -p to enable the changes.

    Idle server info:

        top - 13:34:58 up 102 days, 18:35, 1 user, load average: 0.00, 0.00, 0.00
        Tasks: 118 total, 1 running, 117 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 3983068k total, 2125088k used, 1857980k free, 262528k buffers
        Swap: 2104504k total, 0k used, 2104504k free, 606996k cached

        free -m
                     total       used       free     shared    buffers     cached
        Mem:          3889       2075       1814          0        256        592
        -/+ buffers/cache:       1226       2663
        Swap:         2055          0       2055

    During the test:

        top - 13:45:21 up 102 days, 18:46, 1 user, load average: 3.73, 1.51, 0.58
        Tasks: 122 total, 8 running, 114 sleeping, 0 stopped, 0 zombie
        Cpu(s): 93.5%us, 5.2%sy, 0.0%ni, 0.2%id, 0.0%wa, 0.1%hi, 1.1%si, 0.0%st
        Mem: 3983068k total, 2127564k used, 1855504k free, 262580k buffers
        Swap: 2104504k total, 0k used, 2104504k free, 608760k cached

        free -m
                     total       used       free     shared    buffers     cached
        Mem:          3889       2125       1763          0        256        595
        -/+ buffers/cache:       1274       2615
        Swap:         2055          0       2055

        iotop
        30141 be/4 nginx 0.00 B/s 7.78 K/s 0.00 % 0.00 % nginx: wo~er process

    Where is the bottleneck? Or what am I doing wrong?
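
    With top showing roughly 94% user CPU during the run, the ceiling looks like application CPU rather than nginx or the kernel, but it is cheap to confirm by benchmarking the pieces separately. A sketch, assuming a static file is reachable at /static/ping.txt (substitute any URL nginx serves without touching uWSGI):

        # 1. nginx alone: should reach thousands of req/s on this hardware.
        ab -n 1000 -c 100 http://127.0.0.1/static/ping.txt

        # 2. The Django view under test, while watching where the CPU goes.
        ab -n 1000 -c 100 http://127.0.0.1/ &
        top -d 1    # are the uwsgi workers pegged, or nginx?

    If the five uwsgi workers are saturated at roughly 20 req/s each, the fix is in the Django view (queries, caching) or in matching the worker count to the available CPU, not in the kernel tuning above.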

    Read the article

  • HVM virtualization with PV drivers on XenServer

    - by Nathan
    Is it possible to create an HVM guest in XenServer 5.5 that uses PV drivers for disk and network without being fully paravirtualized? This should give me decent performance from the VM without having to jump through hoops to create a PV guest when a pre-built template doesn't exist. Since PV drivers exist for Windows, and XenServer provides templates for Windows that use HVM virtualization, this must be possible; I just don't see how to configure this myself.
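
    Worth noting: an HVM guest that uses PV drivers only for disk and network (often called PV-on-HVM) is exactly what the stock Windows templates produce once XenServer Tools are installed in the guest, so usually no extra per-VM configuration is needed beyond installing the tools. A quick way to confirm what a given VM is doing from the control domain (the UUID below is a placeholder):

        # List VMs with their UUIDs.
        xe vm-list params=uuid,name-label

        # An HVM guest reports a BIOS boot policy; a PV guest reports an empty one.
        xe vm-param-get uuid=<vm-uuid> param-name=HVM-boot-policy

        # After the tools/PV drivers are running inside the guest, the PV driver
        # fields in the VM's parameter list are populated.
        xe vm-param-list uuid=<vm-uuid> | grep -i pv-drivers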

    Read the article

  • Find out which task is generating a lot of context switches on linux

    - by Gaks
    According to vmstat, my Linux server (2x Core2 Duo 2.5 GHz) is constantly doing around 20k context switches per second.

        # vmstat 3
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd    free   buff    cache   si   so   bi   bo   in    cs  us sy id wa
         2  0   7292  249472  82340  2291972    0    0    0    0    0     0   7 13 79  0
         0  0   7292  251808  82344  2291968    0    0    0  184   24 20090   1  1 99  0
         0  0   7292  251876  82344  2291968    0    0    0   83   17 20157   1  0 99  0
         0  0   7292  251876  82344  2291968    0    0    0   73   12 20116   1  0 99  0
        ...

    but uptime shows a small load:

        load average: 0.01, 0.02, 0.01

    and top doesn't show any process with high %CPU usage. How do I find out what exactly is generating those context switches? Which process/thread? I tried to analyze pidstat output:

        # pidstat -w 10 1
        12:39:13   PID     cswch/s  nvcswch/s  Command
        12:39:23   1       0.20     0.00       init
        12:39:23   4       0.20     0.00       ksoftirqd/0
        12:39:23   7       1.60     0.00       events/0
        12:39:23   8       1.50     0.00       events/1
        12:39:23   89      0.50     0.00       kblockd/0
        12:39:23   90      0.30     0.00       kblockd/1
        12:39:23   995     0.40     0.00       kirqd
        12:39:23   997     0.60     0.00       kjournald
        12:39:23   1146    0.20     0.00       svscan
        12:39:23   2162    5.00     0.00       kjournald
        12:39:23   2526    0.20     2.00       postgres
        12:39:23   2530    1.00     0.30       postgres
        12:39:23   2534    5.00     3.20       postgres
        12:39:23   2536    1.40     1.70       postgres
        12:39:23   12061   10.59    0.90       postgres
        12:39:23   14442   1.50     2.20       postgres
        12:39:23   15416   0.20     0.00       monitor
        12:39:23   17289   0.10     0.00       syslogd
        12:39:23   21776   0.40     0.30       postgres
        12:39:23   23638   0.10     0.00       screen
        12:39:23   25153   1.00     0.00       sshd
        12:39:23   25185   86.61    0.00       daemon1
        12:39:23   25190   12.19    35.86      postgres
        12:39:23   25295   2.00     0.00       screen
        12:39:23   25743   9.99     0.00       daemon2
        12:39:23   25747   1.10     3.00       postgres
        12:39:23   26968   5.09     0.80       postgres
        12:39:23   26969   5.00     0.00       postgres
        12:39:23   26970   1.10     0.20       postgres
        12:39:23   26971   17.98    1.80       postgres
        12:39:23   27607   0.90     0.40       postgres
        12:39:23   29338   4.30     0.00       screen
        12:39:23   31247   4.10     23.58      postgres
        12:39:23   31249   82.92    34.77      postgres
        12:39:23   31484   0.20     0.00       pdflush
        12:39:23   32097   0.10     0.00       pidstat

    Looks like some postgresql tasks are doing 10 context switches per second, but it doesn't all add up to 20k anyway. Any idea how to dig a little deeper for an answer?
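
    pidstat's per-process view can hide the real source when the switching happens in threads, and it misses short-lived processes entirely. Two follow-ups that usually narrow it down (both assume the sysstat/procps tools already on the box; -t needs a reasonably recent sysstat):

        # Per-thread context switches over a 10-second sample: a chatty thread
        # inside postgres or daemon1 shows up individually here.
        pidstat -wt 10 1

        # Cumulative voluntary switches straight from /proc, sorted, as a cross-check.
        for p in /proc/[0-9]*; do
            printf '%s %s\n' \
                "$(awk '/^voluntary_ctxt_switches/ {print $2}' "$p/status" 2>/dev/null)" "$p"
        done | sort -rn | head -20

    Interrupt-driven switching never shows up against a PID at all, so comparing vmstat's "in" column with successive snapshots of /proc/interrupts over the same interval tells you whether the 20k is really coming from processes in the first place.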

    Read the article

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would give a try to a reinstall based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM I've been getting errors which I believe are related to kernel modules not being loaded on the VM from the /vz/XXX.conf template model. I've been testing with a few posts I've found, but I was stuck with the error:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this:

        IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

    As the error remained, I read that I should build dependencies again on the virtual machine:

        depmod -a

    but this returned an error:

        WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory
        FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory

    So I read again about creating the directory empty and redoing "depmod -a". I now don't get the dependencies error, but I get this and I don't have a clue how to proceed:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Module ip_tables not found.
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I understand that iptables rules have to be different on the VM, and perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to have available. I've heard that on the CentOS template there are no issues with this, so I understand it has to do with the VM config. Any help would be greatly appreciated.
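
    Inside an OpenVZ container, modprobe and depmod cannot do anything useful: the kernel and its modules belong to the hardware node, so the netfilter modules have to be loaded on the host and then granted to the container, which is what the IPTABLES= line is for. A sketch of the host-side steps, run on the OpenVZ node (container ID 101 is a placeholder):

        # On the host: make sure the netfilter modules are actually loaded there.
        modprobe -a ip_tables iptable_filter iptable_mangle iptable_nat ipt_state ip_conntrack

        # Grant them to the container and persist the setting in its conf file.
        vzctl set 101 --iptables "ip_tables iptable_filter iptable_mangle ipt_REJECT ipt_LOG ipt_state ip_conntrack iptable_nat" --save

        # The container has to be restarted for the new capability set to apply.
        vzctl restart 101

    One more detail: the 'raw' table in the error message needs iptable_raw, which is not in the granted list above, so either drop rules that touch the raw table from the restore file or check whether your OpenVZ version allows granting iptable_raw to containers.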

    Read the article

  • Non-Airport Express wireless N router with audio server

    - by iansinke
    I'm interested in hooking up three things to a wireless router: speakers, a printer, and a hard disk. At first the obvious solution was Airport Express, but then I found out that Airport Express does not support hard disks. Any ideas as to other wireless routers that would have the requisite feature set?

    Read the article

  • du excluding hard links possible?

    - by balor123
    I'm trying to determine how big a cloned Git repository is from a local file system. It creates hard links for some but not all files. How can I determine the disk usage of it? The best I can come up with is "du -a" right now with the original and again with the clone to determine the difference, since each hard linked file will be counted only once. Ideally, I would just run du on the clone and count each hard linked file zero times.
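
    If the goal is "size of the clone, not counting anything hard-linked back to the original", one way that avoids running du twice is to sum only the files whose link count is 1, since every shared object in the clone has at least two links. A sketch with GNU find/awk (the path is a placeholder):

        find /path/to/clone -type f -links 1 -printf '%s\n' \
            | awk '{ bytes += $1 } END { printf "%.1f MiB unique to clone\n", bytes/1048576 }'

    GNU du also counts a hard-linked file only once per invocation, so running du -sh /path/to/original /path/to/clone in a single command charges shared files to the first argument and reports only the clone's unique usage on the second line; same idea, less typing, but it depends on argument order.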

    Read the article

  • Deleting windows.edb and unchecking Indexing service led to hard drive file records swapping

    - by linni
    I followed the instructions listed here: http://www.mydigitallife.info/2007/09/18/turn-off-and-disable-search-indexing-service-in-windows-xp/ to free up space on the hard drive by deleting the windows.edb indexing file. I also stopped the Windows Search service as mentioned in the comments following the article. In addition to unchecking the "Allow Indexing Service to index this disk for fast file searching" check box on the properties dialog for the C:\ drive, I did the same for two USB-connected hard drives (J:\ and I:\). I'm not sure why I did that; I thought it might shrink the windows.edb file so I wouldn't have to delete it (which sounded a bit risky in my ears at the time). The file of course didn't shrink, so I ended up deleting it and freeing up over 3 GB of space, yeehaw.

    However, as soon as I had done this I could not access the USB-connected hard drives anymore. The error I got was "I:\photos is not accessible" "The file or directory is corrupted and unreadable" when I tried to open the photos directory on I:\

    Here is where I enter the twilight zone... I try disconnecting the I:\ USB hard drive, but XP shows me that the J:\ drive has disconnected instead and I:\ is still there. So I disconnect both drives and restart the computer. I then connect one drive, but it lists the contents of the other drive at root level. I tried connecting the drives vice versa and the same thing happens. I tried taking one of the hard drives to another computer, and when I connect it there it lists not its own contents but the contents of the other hard drive, and gives the same error as above when I try to access any of the folders (even folders on the root that have the same name as folders on the other drive, e.g. J:\photos and I:\photos)??? And no, this is not me mixing up my drive letters.

    Computer Management - Disk Management shows the same result as Explorer: the drive size is correct (one is 500GB, the other is 640GB) but the drive name is of the opposite drive, as are the contents. Also, one drive was full of data and the other almost empty, but they incorrectly show the free space status of the other drive. Somehow the USB drives seem to have switched file tables, file records, boot records or something, which is extremely weird! Even weirder, if I try to create a text file or folder on this drive, it works fine (accessing them, saving, whatever, all good), but accessing any other data on the drive gives me an error.

    Does anyone have a clue what is going on and, more importantly, how I can restore the correct folder listings to access my family photos??? cheers, linni

    Read the article

  • EFS recovery given everything but the Registry

    - by Joel in Gö
    I have an unfortunate problem: my old Win Xp installation has died, probably due to the hard drive failing. The drive now fails all SMART tests, but I can get files off it OK. I have now installed Windows 7 on a new drive, and want to transfer files from the old drive. However, some sensitive files were in an encrypted folder (I think EFS?). How can I un-encrypt them, given that I have essentially my entire old XP installation on disk? Thanks!

    Read the article

  • Common and maximum number of virtual machines per server?

    - by Rabarberski
    For a project I am trying to get real-life estimates for the number of virtual machines per server, both typically and maximally. Of course, the maximum number of VMs depends on the type of applications (disk intensive, network intensive, ...) and on the server hardware (number of cores, memory, ...), but it would still be useful to know whether a typical maximum is about 10, 20 or 30 VMs per server. Can anybody give practical numbers?

    Read the article

  • Abnormal hangs and restarts Ubuntu 8.04

    - by jai-ho
    Hi, I am using Ubuntu 8.04 LTS and seeing the following behaviors: the system hangs after a while and becomes completely unresponsive, and the system sometimes restarts itself! Can you please help me identify what the problem is? Also, please mention where I should look for the possible cause of this error. Thanks.

    EDIT: Got the following from the dmesg output (the system hung and had to restart):

        [   15.452015] Driver 'sr' needs updating - please use bus_type methods
        [   15.456882] Driver 'sd' needs updating - please use bus_type methods
        [   15.457987] sr0: scsi3-mmc drive: 52x/52x writer cd/rw xa/form2 cdda tray
        [   15.457993] Uniform CD-ROM driver Revision: 3.20
        [   15.458058] sr 0:0:1:0: Attached scsi CD-ROM sr0
        [   15.463028] sd 1:0:0:0: [sda] 156301488 512-byte hardware sectors (80026 MB)
        [   15.463051] sd 1:0:0:0: [sda] Write Protect is off
        [   15.463055] sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00
        [   15.463083] sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [   15.463151] sd 1:0:0:0: [sda] 156301488 512-byte hardware sectors (80026 MB)
        [   15.463167] sd 1:0:0:0: [sda] Write Protect is off
        [   15.463171] sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00
        [   15.463197] sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [   15.463202] sda:<5sr 0:0:1:0: Attached scsi generic sg0 type 5
        [   15.464634] sd 1:0:0:0: Attached scsi generic sg1 type 0
        [   15.470120]  sda1 sda2 < sda5
        [   15.495536] sd 1:0:0:0: [sda] Attached SCSI disk
        [   15.759549] Attempting manual resume
        [   15.759554] swsusp: Resume From Partition 8:5
        [   15.759556] PM: Checking swsusp image.
        [   15.759742] PM: Resume from disk failed.
        [   15.779964] EXT3-fs: INFO: recovery required on readonly filesystem.
        [   15.779970] EXT3-fs: write access will be enabled during recovery.
        [   19.904204] kjournald starting.  Commit interval 5 seconds
        [   19.904235] EXT3-fs: sda1: orphan cleanup on readonly fs
        [   19.904245] ext3_orphan_cleanup: deleting unreferenced inode 303260
        [   19.904304] ext3_orphan_cleanup: deleting unreferenced inode 303329
        [   19.932763] ext3_orphan_cleanup: deleting unreferenced inode 3801871
        [   19.932785] ext3_orphan_cleanup: deleting unreferenced inode 3801874
        [   19.932798] ext3_orphan_cleanup: deleting unreferenced inode 3801910
        [   19.951253] ext3_orphan_cleanup: deleting unreferenced inode 3801912
        [   19.951266] ext3_orphan_cleanup: deleting unreferenced inode 3801914
        [   19.951278] ext3_orphan_cleanup: deleting unreferenced inode 3959212
        [   19.951299] ext3_orphan_cleanup: deleting unreferenced inode 3959213
        [   19.960335] ext3_orphan_cleanup: deleting unreferenced inode 3959215
        [   19.963531] ext3_orphan_cleanup: deleting unreferenced inode 3801875
        [   19.963545] ext3_orphan_cleanup: deleting unreferenced inode 3663727
        [   19.963565] ext3_orphan_cleanup: deleting unreferenced inode 3663708
        [   19.963577] ext3_orphan_cleanup: deleting unreferenced inode 4072122
        [   19.963597] ext3_orphan_cleanup: deleting unreferenced inode 4072157
        [   19.968616] ext3_orphan_cleanup: deleting unreferenced inode 4072159
        [   19.970252] ext3_orphan_cleanup: deleting unreferenced inode 4072160
        [   19.970264] ext3_orphan_cleanup: deleting unreferenced inode 4072161
        [   19.992889] ext3_orphan_cleanup: deleting unreferenced inode 4072264
        [   19.992903] ext3_orphan_cleanup: deleting unreferenced inode 4072267
        [   19.999585] ext3_orphan_cleanup: deleting unreferenced inode 4072268
        [   20.008329] ext3_orphan_cleanup: deleting unreferenced inode 4072270
        [   20.008343] ext3_orphan_cleanup: deleting unreferenced inode 4072123
        [   20.008360] ext3_orphan_cleanup: deleting unreferenced inode 4072452
        [   20.008374] ext3_orphan_cleanup: deleting unreferenced inode 4072453
        [   20.008385] ext3_orphan_cleanup: deleting unreferenced inode 4072124
        [   20.008398] ext3_orphan_cleanup: deleting unreferenced inode 311574
        [   20.008413] ext3_orphan_cleanup: deleting unreferenced inode 967890
        [   20.008420] EXT3-fs: sda1: 28 orphan inodes deleted
        [   20.008423] EXT3-fs: recovery complete.
        [   20.082622] EXT3-fs: mounted filesystem with ordered data mode.
        [   29.025379] input: PC Speaker as /devices/platform/pcspkr/input/input2
        [   29.187133] Linux agpgart interface v0.102
        [   29.225338] iTCO_vendor_support: vendor-support=0
        [   29.259662] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.02 (26-Jul-2007)
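
    Since this dmesg is from the boot after the crash, it mostly shows the normal ext3 orphan cleanup rather than the cause. A couple of things that sometimes survive a hard hang and are worth pulling before the next one, sketched below (the rotated log names are the usual Ubuntu 8.04 ones, adjust if yours differ):

        # Reboot/shutdown history: repeated entries with no clean shutdown in
        # between confirm the box is dying rather than rebooting cleanly.
        last -x | head -20

        # Kernel and syslog messages written before the previous boot.
        grep -iE 'panic|oops|mce|temperature|segfault' \
            /var/log/kern.log.0 /var/log/syslog.0 2>/dev/null | tail -40

    Random hangs plus spontaneous restarts with otherwise healthy software often turn out to be hardware (RAM, PSU, overheating), so a pass of memtest86+ from the GRUB menu and a look at fan/temperature readings are cheap checks as well.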

    Read the article

  • Resize partition of Windows 7 running on VirtualBox with dynamically allocated storage

    - by Nicolas Raoul
    I run Windows 7 inside VirtualBox. I resized the disk of Windows 7 from 25 GB to 50 GB:

        VBoxManage modifyhd Windows\ 7\ Pro.vdi --resize 50000
        0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

    PROBLEM: I can't grow the partition, neither with Windows 7 itself nor with GParted. It looks like VirtualBox does not tell the guest OS about the new size. What additional step is necessary?
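
    Two things commonly cause exactly this symptom, and both can be checked from the host before touching the guest (the VM/medium names below are placeholders matching the question):

        # 1. Confirm the VDI itself really reports the new capacity.
        VBoxManage showhdinfo "Windows 7 Pro.vdi"

        # 2. Check for snapshots: --resize changes only the base image, and a VM
        #    still running on snapshot (differencing) images can keep seeing the old size.
        VBoxManage snapshot "Windows 7 Pro" list

    If snapshots are present, merging or deleting them (or cloning the disk chain into a fresh VDI and resizing that) is usually the missing step; once the guest sees the full 50 GB, Windows 7's Disk Management can extend the C: partition in place.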

    Read the article

  • Can someone explain RAID-0 in plain English?

    - by Edward Tanguay
    I've heard about and read about RAID throughout the years and understand it theoretically as a way to help e.g. server PCs reduce the chance of data loss, but now I am buying a new PC which I want to be as fast as possible and have learned that having two drives can considerably increase the perceived performance of your machine.

    In the question Recommendations for hard drive performance boost, the author says he is going to RAID-0 two 7200 RPM drives together. What does this mean in practical terms for me with Windows 7 installed, e.g. can I buy two drives, go into the device manager and "raid-0 them together"? I am not a network administrator or a hardware guy, I'm just a developer who is going to have a computer store build me a super fast machine next week. I can read the wikipedia page on RAID but it is just way too many trees and not enough forest to help me build a faster PC:

        RAID-0: "Striped set without parity" or "Striping". Provides improved performance and additional storage but no redundancy or fault tolerance. Because there is no redundancy, this level is not actually a Redundant Array of Inexpensive Disks, i.e. not true RAID. However, because of the similarities to RAID (especially the need for a controller to distribute data across multiple disks), simple strip sets are normally referred to as RAID 0. Any disk failure destroys the array, which has greater consequences with more disks in the array (at a minimum, catastrophic data loss is twice as severe compared to single drives without RAID). A single disk failure destroys the entire array because when data is written to a RAID 0 drive, the data is broken into fragments. The number of fragments is dictated by the number of disks in the array. The fragments are written to their respective disks simultaneously on the same sector. This allows smaller sections of the entire chunk of data to be read off the drive in parallel, increasing bandwidth. RAID 0 does not implement error checking so any error is unrecoverable. More disks in the array means higher bandwidth, but greater risk of data loss.

    So in plain English, how can "RAID-0" help me build a faster Windows-7 PC that I am going to order next week?

    Read the article

  • I/O intensive MySql server on Amazon AWS

    - by rhossi
    We recently moved from a traditional data center to cloud computing on AWS. We are developing a product in partnership with another company, and we need to create a database server for the product we'll release. I have been using Amazon Web Services for the past 3 years, but this is the first time I have received a spec with this very specific hardware configuration. I know there are trade-offs and that real hardware will always be faster than virtual machines, and knowing that fact beforehand, what would you recommend?

    1) Amazon EC2?
    2) Amazon RDS?
    3) Something else?
    4) Forget it baby, stick to the real hardware

    Here are the hardware requirements. Both servers will be focused on I/O and MySQL for the statistics, and on memory size and disk space for the images hosting.

    Server 1
        I/O: The very main part on this server will be I/O processing; FusionIO cards have proven themselves extremely efficient, and this is currently the best you can have in this domain.
            Fusion ioDrive2 MLC 365GB (http://www.fusionio.com/load/-media-/1m66wu/docsLibrary/FIO_ioDrive2_Datasheet.pdf)
        CPU: MySQL will use fewer CPU cores than Apache but it will use them very hard; the E7 family has 30M L3 cache, which provides a performance boost.
            1x Intel E7-2870 will be ok.
        Storage: SAS will be good enough in terms of performance, especially considering the space required.
            RAID 10 of 4 x SAS 10k or 15k for a total available space of 512 GB.
        Memory:
            64 GB minimum is required on this server considering the size of the statistics database. Warning: the statistics database will grow quickly; if possible consider starting with 128 GB directly, it will help.

    Server 2
        Identical specification to Server 1: Fusion ioDrive2 MLC 365GB, 1x Intel E7-2870, RAID 10 of 4 x SAS 10k/15k for 512 GB available space, and 64 GB RAM minimum (128 GB preferred for the growing statistics database).

    Thanks in advance. Best,
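
    If the decision comes down to EC2 versus real hardware, it helps to put numbers on the I/O gap before committing, since the FusionIO card is the one line item EC2 cannot match directly. A rough benchmarking sketch with fio, run on any candidate instance/volume (device path and sizes are placeholders):

        # Random-read IOPS at the 16 KB page size InnoDB uses by default.
        fio --name=innodb-randread --filename=/data/fio.test --size=8G \
            --rw=randread --bs=16k --ioengine=libaio --iodepth=32 \
            --direct=1 --runtime=60 --time_based --group_reporting

        # Repeat with --rw=randrw --rwmixread=70 for a rough OLTP-style mix.

    Comparing those numbers against the same run on the FusionIO box (or the vendor's datasheet figures, discounted heavily) gives a concrete basis for choosing between EC2 with striped or provisioned-IOPS EBS, RDS, or staying on metal.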

    Read the article

  • Creating DOS bootable cd

    - by dotnetdev
    I need to burn a CD which I will boot from to reset a password on Windows Server. I am using Active Password Changer, but I get an error like so: http://www.freeimagehosting.net/image.php?faa4737491.png How can I create a "DOS bootable disk"? I have an ISO on the CD that I thought would work. The manual says I need DOS system files; where do I get these from? Thanks

    Read the article

  • Office for Mac 2011 does not start, how do I repair the database?

    - by RomanT
    After a Time Machine restore, Office 2011 is having kittens over permissions, it would seem. Having attempted a 'repair' out of Disk Utility, I am still seeing "there is a problem with the Office database" upon startup, after which Word/Excel work without issues. Outlook, on the other hand, won't even start. Given the obvious message here, "You do not have write access to the Outlook application folder" – where is the DB located so I can check?

    Read the article

  • RHEL5: Can't create sparse file bigger than 256GB in tmpfs

    - by John Kugelman
    /var/log/lastlog gets written to when you log in. The size of this file is based off of the largest UID in the system: the larger the maximum UID, the larger this file is. Thankfully it's a sparse file, so the size on disk is much smaller than the size ls reports (ls -s reports the size on disk).

    On our system we're authenticating against an Active Directory server, and the UIDs users are assigned end up being really, really large. Like, say, UID 900,000,000 for the first AD user, 900,000,001 for the second, etc. That's strange but should be okay. It results in /var/log/lastlog being huuuuuge, though--once an AD user logs in, lastlog shows up as 280GB. Its real size is still small, thankfully.

    This works fine when /var/log/lastlog is stored on the hard drive on an ext3 filesystem. It breaks, however, if lastlog is stored in a tmpfs filesystem. Then it appears that the max file size for any file on the tmpfs is 256GB, so the sessreg program errors out trying to write to lastlog.

    Where is this 256GB limit coming from, and how can I increase it? As a simple test for creating large sparse files I've been doing:

        dd if=/dev/zero of=sparse-file bs=1 count=1 seek=300GB

    I've tried Googling for "tmpfs max file size", "256GB filesystem limit", "linux max file size", things like that. I haven't been able to find much. The only mention of 256GB I can find is that ext3 filesystems with 2KB blocks are limited to 256GB files. But our hard drives are formatted with 4K blocks so that doesn't seem to be it--not to mention this is happening in a tmpfs mounted ON TOP of the hard drive, so the ext3 partition shouldn't be a factor.

    This is all happening on a 64-bit Red Hat Enterprise Linux 5.4 system. Interestingly, on my personal development machine, which is a 32-bit Fedora Core 6 box, I can create 300GB+ files in tmpfs filesystems no problem. On the RHEL 5.4 systems it is no go.
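
    A few quick checks help separate a per-mount limit from a per-process or kernel one (mount points below are placeholders for wherever lastlog's tmpfs actually lives):

        # How is the tmpfs mounted, and does it carry an explicit size= cap?
        mount | grep tmpfs
        df -h /var/log

        # Rule out a process file-size limit, which would also hit sessreg if set system-wide.
        ulimit -f

        # Repeat the same sparse-file test on the tmpfs and on ext3 side by side.
        dd if=/dev/zero of=/var/log/sparse-test bs=1 count=1 seek=300GB
        dd if=/dev/zero of=/tmp/sparse-test     bs=1 count=1 seek=300GB

    If the tmpfs is mounted with an explicit size, remounting it larger (mount -o remount,size=400g <mountpoint>) is a cheap experiment; if the 256GB ceiling stays put regardless of the mount size, the limit is in the RHEL5 kernel's tmpfs itself rather than in the mount options.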

    Read the article

  • Issue booting Linux Mint from Live CD?

    - by Vee
    I had Windows 8 and Linux Mint 15 dual-booted on my laptop. When I first installed Linux, I wasn't able to boot into it because the GRUB menu would not show. To fix this, I used boot-repair from a Live CD. This time, I updated to Windows 8.1 and it showed a watermark telling me my Secure Boot wasn't configured properly. I then went and enabled Secure Boot in the BIOS, and I believe it was after that that the GRUB menu stopped showing once again. I tried to boot from a Linux CD again, but when I try, it gives me the following errors:

        error: failure reading sector 0x0 from 'hd1'
        error: you need to load the kernel first.
        Press any key to continue...

    Before, it was giving me an error with sector 0x6d200 or something instead of 0x0. I am completely unsure of what to do. I do not know what other details to give except that this may have happened after I enabled Secure Boot, and I actually clicked "reset to default settings", so I am unsure if any other settings were changed in the BIOS menu.
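
    "failure reading sector ... from 'hd1'" during a live boot is very often bad or badly written media rather than anything Secure Boot did, so it is worth ruling that out first. A sketch, assuming the ISO is still on another Linux machine and /dev/sdX is a USB stick (the filename is a placeholder for whichever Mint 15 image was downloaded; double-check the device name before running dd):

        # Verify the download against the checksum published on the Mint mirror.
        md5sum linuxmint-15-cinnamon-dvd-64bit.iso

        # Write it to a USB stick instead of a CD and boot from that.
        sudo dd if=linuxmint-15-cinnamon-dvd-64bit.iso of=/dev/sdX bs=4M
        sync

    If a freshly written stick boots, re-running boot-repair from it (and, if it still complains, temporarily disabling Secure Boot while repairing) should bring the GRUB menu back for the dual boot.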

    Read the article

  • Remove Mac OS X and install Windows?

    - by user18948
    Is there a way to completely remove Mac OS X from a MacBook Pro and replace it with Windows 7? I'm not talking about Boot Camp; I'm talking about completely wiping the disk of any files and partitioning it for a Windows installation. Any BIOS, booting, or compatibility problems? I know it's rare to replace Mac OS X with Windows, but I have this one situation where this is needed, so I would appreciate any help. Thanks!

    Read the article
