Search Results

Search found 10244 results on 410 pages for 'space complexity'.


  • Can you shrink the sparse disk image of a Mac OS X guest OS in VMWare Fusion?

    - by Paul D. Waite
    I use VMWare Fusion on my Mac to run a virtual Windows 7 machine and Microsoft’s IE compatibility Windows XP virtual machines. In VMWare Tools on the Windows guest OSes, there’s a “Shrink” option that lets you reduce the size of the sparse disk image used by the guest OS, to save hard drive space on your host OS. I’ve recently created another virtual machine, this time running Snow Leopard Server. I was wondering if I could shrink the sparse disk image used by this machine too, but I can’t find a VMWare Tools app on the Mac guest OS, even though VMWare Tools is installed (VMWare’s Shared Folders feature is working). Is there any way to shrink the sparse disk image used by Mac OS X guest OSes in VMWare Fusion?
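
    One approach worth trying (a sketch, not something the VMWare docs confirm for Mac guests): zero out the free space inside the guest, shut it down, and shrink the disk from the host with VMware's vmware-vdiskmanager tool. The vmdk path below is a placeholder, and the tool's location is the typical one for Fusion.

        # Inside the Mac OS X guest: fill free space with zeros, then delete the filler file
        cat /dev/zero > ~/zerofill; rm ~/zerofill

        # Shut the guest down, then on the host (typical VMware Fusion location):
        cd "/Applications/VMware Fusion.app/Contents/Library"
        ./vmware-vdiskmanager -k "/path/to/Snow Leopard Server.vmwarevm/Virtual Disk.vmdk"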

    Read the article

  • Looking for a good Web Server that is cheap

    - by SoLoGHoST
    I am a Project Manager, and former Lead Developer, for a software portal system that requires forum software to run. I am in need of a host that is cheap, reliable, and supports the latest PHP (5.2+), MySQL, unlimited e-mails (preferably), cPanel, and multiple sub-domains (at least 3). Currently I am paying $34.95 USD/month (approx. $420 USD/year). This is too high for me to pay to keep the site running. I just recently became Project Manager, and being in charge of finances, I'm extremely concerned for the future of Dream Portal. With those prices I'm not sure I'll be able to keep it running for long. Can someone please point me to a good host that meets all of the requirements listed above and is cheaper on a yearly basis? Note: Currently on a dedicated server with limited disk space at 15000 MB (15 GB), 500000 MB monthly bandwidth, a 50-email limit, a 20-sub-domain limit, 30 FTP accounts, and 25 SQL databases.

    Read the article

  • Why can't I copy a 7 GB file to an external USB HD with 120 GB free?

    - by Johann Gerell
    Yes, why can't I? I was stashing away some old photography backup zips last night. I could copy 4 of my 1 GB backup zips to my external USB-connected hard drive, but then got the error message "Cannot copy file. Not enough free space." (or words to that effect) for a zip of roughly 7 GB. But there are 120 GB free. Why is this? EDIT: Clarification - the files that I could copy were smaller than 4 GB; the failing one was 7 GB. The cause seems to be the FAT32 4 GB file-size limit.
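
    If the drive is indeed formatted as FAT32, one way out (a sketch - E: is a placeholder drive letter) is to convert it to NTFS in place, which removes the 4 GB per-file limit. The conversion is one-way but keeps existing data; running chkdsk first is a sensible precaution.

        :: From a command prompt with administrative rights
        convert E: /FS:NTFS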

    Read the article

  • Server location moved - how can I move the files?

    - by Bernhard
    Hello everyone, I have a big problem. I have to move data from an old webspace which is only accessible by FTP. Now we have a new root server, which of course is accessible by SSH :-) I need to move all the data from the old space, but there are a lot of GB of files. Is there a way to fetch all the files directly from the old FTP server to the new storage, and not go through a third station (my local machine)? I've tried it with ftp but without success - I think I used the wrong commands. Is there a way to do something like this, including all files and directories? Thank you in advance, Bernhard
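
    One way to do this (a sketch - the hostname, credentials and target path are placeholders) is to log into the new root server over SSH and let it pull everything straight from the old FTP space, for example with wget's mirror mode; lftp's mirror command is a common alternative.

        # Run on the new server, ideally inside screen/tmux so it survives SSH disconnects
        wget --mirror --no-host-directories --directory-prefix=/srv/olddata \
             ftp://USERNAME:PASSWORD@old-webspace.example.com/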

    Read the article

  • How do I make a partition usable in windows 7 after power loss?

    - by user1306322
    A few days ago I was installing some software and the power went down. When I rebooted, the partition to which the software was being installed was not accessible. Disk Management shows that it's there, but doesn't show its type or whether it's healthy, and gives me an error when trying to read its properties. The problem seems to be common after power loss; people recommend solving it by assigning a letter to the partition via the DiskPart utility, but the partition isn't listed in my case. I can access the partition from bootable OSes (like a bootable Ubuntu or WinXP) and all the files are there, but another installation of Windows 7 gives me the same results as the original. I could just copy all the data to another disk if there were enough space, but unfortunately the partition I'm having problems with is 1.1 TB. How do I regain access to the partition in my original Windows 7 installation without losing any data?
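
    Since the volume is readable from a bootable OS, one thing worth trying (a sketch - the drive letter and volume number are placeholders) is to repair the filesystem from that working environment rather than from the broken installation:

        :: From a bootable Windows environment where the volume is visible
        :: (in diskpart: list volume, select volume N for the 1.1 TB one, assign letter=X, exit)
        diskpart
        chkdsk X: /f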

    Read the article

  • `rsync` NEVER uses its 'famous' delta-transfer!

    - by o_O Tync
    I have a big ISO image which is currently being downloaded by a torrent client with space-reservation turned on: that means the file size is not changing, while some chunks in it (4 MiB each) are constantly changing because of the download. At 90% downloaded I do the initial rsync to save time later:
        $ rsync -Ph DVD.iso /some/target/
        sending incremental file list
        DVD.iso
             2.60G 100%   40.23MB/s    0:01:01 (xfer#1, to-check=0/1)
        sent 2.60G bytes  received 73 bytes  34.59M bytes/sec
        total size is 2.60G  speedup is 1.00
    Then, when the file is fully downloaded, I rsync again and get:
        total size is 2.60G  speedup is 1.00
    Speedup=1 says delta-transfer was not used, although 90% of the file has not changed. Why?!
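
    For what it's worth, rsync turns the delta-transfer algorithm off by default when both source and destination are on local filesystems (it implies --whole-file, on the assumption that local I/O is cheaper than checksumming). A sketch of forcing the delta algorithm for a local copy:

        # Force delta-transfer and update the existing destination file in place
        rsync -Ph --no-whole-file --inplace DVD.iso /some/target/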

    Read the article

  • How can I fix an inconsistent NTFS file system without Windows?

    - by Demetri
    I have a Dell laptop that came with Windows from the factory. Since then, I have installed Linux and replaced the hard drive with an SSD. The NTFS partition is inconsistent (a result of bad sectors on the HDD) and needs to be fixed, but I cannot boot into Windows to run chkdsk. How do I fix this problem? Until I do, I cannot move my NTFS partition to expand space on my root filesystem, which is critically low. EDIT: All of my partitions were cloned from my dying HDD to my SSD via Clonezilla. There are no bad sectors on the SSD, but the NTFS partition is still in an inconsistent state.
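
    One option from Linux (a sketch - the device name is a placeholder, and ntfsfix from the ntfs-3g/ntfsprogs package only repairs a limited set of common problems, otherwise it just resets the journal and dirty flag):

        # Replace /dev/sdaX with the actual NTFS partition (check with: lsblk -f)
        sudo ntfsfix /dev/sdaX

    If that is not enough, a full NTFS repair generally still requires chkdsk, which can be run from a Windows installation/recovery USB without installing Windows on the machine.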

    Read the article

  • Windows DFS Limitations

    - by Phil
    So far I have seen an article on performance and scalability mainly focusing on how long it takes to add new links. But is there any information about limitations regarding number of files, number of folders, total size, etc? Right now I have a single file server with millions of JPGs (approx 45 TB worth) that are shared on the network through several standard file shares. I plan to create a DFS namespace and replicate all these images to another server for high availability purposes. Will I encounter extra problems with DFS that I'm otherwise not experiencing with plain-jane file shares? Is there a more recommended way to replicate these millions of files and make them available on the network? EDIT: I would experiment on my own and write a blog post about it, but I don't have the hardware for the second server yet. I'd like to collect information before buying 45 TB of hard drive space...
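
    As an aside, if you do go the DFS Replication route, the usual advice is to pre-seed the second server with a file-level copy so that initial replication only has to verify, rather than transfer, the 45 TB. A sketch (server and share names are placeholders; /MT needs Server 2008 R2 or later):

        :: Pre-seed the replica, preserving data, attributes, timestamps, ACLs, owner and auditing info
        robocopy \\FS1\Images \\FS2\Images /E /COPY:DATSOU /DCOPY:T /R:1 /W:1 /MT:32 /LOG:C:\seed.log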

    Read the article

  • Choosing the right RAID level for a PostgreSQL database

    - by Sergey
    Hi, I got a disk array appliance with 8 disks of 1 TB each (UltraStor RS8IP4). It will be used solely by a PostgreSQL database and I am trying to choose the best RAID level for it. The top priority is read performance, since we operate on large data sets (tables, indexes) and we do lots of searches/scans. With the old disks that we have now, most slowdowns happen on SELECTs. Fault tolerance is less important; it can be 1 or 2 disks. Space is the least important factor - even 1 TB would be enough. Which RAID level would you recommend in this situation? The current options are 60, 50 and 10, but other options could be even better.

    Read the article

  • Can not understand this script

    - by Jim
    Can someone help me understand this script? It is from sysconf_add and I am new to scripting. I need to do something similar.
        function add_word() {
            local word=$1
            local word_quoted=$2
            if ! word_present; then
                $debug && cp $file $tmpf
                sed -i -e "${lineno} { s/^[[:space:]]*\($var=\".*\)\(\".*\)/\1 $word_quoted\2/; s/=\" /=\"/ }" $file
                $debug && diff -u $tmpf $file
            else
                echo \"$word\" already present
            fi
            # some balancing for vim"s syntax highlighting
        }
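
    A worked example of what the sed line does may help (a sketch - the variable values and file contents here are made up). On line $lineno it appends $word_quoted just before the closing quote of the $var="..." assignment; the second substitution then removes the stray space that is left behind when the quoted value was previously empty.

        # Suppose var=MODULES, lineno=12, word_quoted=usb-storage
        #
        # Line 12 before:     MODULES="ehci"
        # After first s///:   MODULES="ehci usb-storage"    (word added before the closing quote)
        #
        # Line 12 before:     MODULES=""
        # After first s///:   MODULES=" usb-storage"
        # After second s///:  MODULES="usb-storage"         (the space after =" is removed)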

    Read the article

  • sizes of RAM, of virtual memory and of swap for 32-bit OS

    - by Tim
    If I understand correctly, a 32-bit OS (Ubuntu) can only address 4 GiB of memory, so if the RAM is larger than 4 GiB, only 4 GiB of it will be used and the rest is wasted. I am now confused about how the same reasoning applies to virtual memory and to swap. With virtual memory being swap + RAM, if the size of the virtual memory exceeds 4 GiB, will the excess be wasted on a 32-bit OS? If I now have to choose the size of my swap partition, is the fact that the 32-bit OS can only address 4 GiB a factor to consider - does the swap size have to be chosen with respect to the 4 GiB addressing limitation, and will any swap beyond 4 GiB always be wasted? And is virtual memory equal to RAM plus swap, or can virtual memory use space on the hard drive outside the swap partition? Thanks and regards!

    Read the article

  • Nvidia 9 series or Intel HD 2000? [closed]

    - by EApubs
    I just tested an Nvidia 9300 GS card against an Intel Core i3's HD 2000 graphics. Here are the Windows Experience Index scores I got.
    Nvidia 9300 GS - Base Score 3.9: Processor 7.1, Memory 7.5, Graphics 3.9, Gaming Graphics 5.1, Hard Disk 5.9.
    Intel HD 2000 - Base Score 5.2: Processor 7.1, Memory 5.9, Graphics 5.2, Gaming Graphics 5.8, Hard Disk 5.9.
    My questions are: when using Intel HD graphics, why does it reduce the score of my RAM? How is that possible? The index measures the speed of the RAM, not the size (I think). Intel graphics takes some of the RAM space, but how can that affect the speed? And of the two, which would be the better choice?

    Read the article

  • How to use symbolic links in windows server 2008R2 across the network (mklink)

    - by server info
    I have one server (Srv1) which holds data in file shares, and its storage is full. Now I have a second server (Srv2) which has a lot more space. I would like to transfer all the data from Srv1 to Srv2 and have links pointing to the new destination. I found mklink very useful here, but unfortunately it does not work over the network - which the documentation also points out. People rely heavily on the existing paths, so it would be helpful if someone had a pointer for me on how to handle symbolic links across the network with Windows servers. I am running Windows Server 2008. Thanks for any help
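
    For what it's worth, directory symbolic links can point at UNC paths; what usually gets in the way is the default symlink evaluation policy, which blocks remote-to-remote link following. A sketch (share and path names are placeholders, and the fsutil setting generally has to be applied on the machines that follow the link):

        :: Check and, if needed, enable remote symlink evaluation
        fsutil behavior query SymlinkEvaluation
        fsutil behavior set SymlinkEvaluation R2R:1 R2L:1

        :: On Srv1, replace the moved folder with a directory symlink to its new home on Srv2
        mklink /D D:\Shares\Projects \\Srv2\Shares\Projects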

    Read the article

  • Any Suggestions on How to Soup Up/ Mod a MacBook Pro 13"?

    - by 5arx
    So I've got a mid-2009 MacBook Pro 13". Integrated GPU so not a games machine but fast enough for doing .Net development in VMs. I love the little thing and wanted to give it a Christmas present so thought I'd mod it up a bit and give it a boost. I'm probably going to go for a 500GB Seagate Momentus XT hybrid drive rather than full-on SSD (I need 500GB space) but was wondering if there are any other mods/tweaks people could suggest? I saw something online about swapping a HDD for the DVD drive and wondered if anyone had tried this or similarly drastic mods to the smallest of the MBPs. Cheers.

    Read the article

  • How long do uploaded files stay in the tmp folder in Linux Ubuntu?

    - by Jean-Nicolas Boulay Desjardins
    I am building a web application where my users will be able to upload files. After the files are uploaded I need to send them to two other servers, and after that they will be deleted from the server they were uploaded to. I am wondering: is it a good idea to keep the uploaded files in the tmp/ folder for the time it takes to send them to the other two servers, or should I move them to another folder in case they get deleted? I am also wondering whether I have to build a cron script to get rid of the files that have been transferred to the other servers, so that I get my disk space back.
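
    Note that on Ubuntu /tmp is typically emptied at boot (and may also be cleaned by tmpreaper/tmpwatch if installed), so staging uploads in a dedicated directory that you clean yourself is the safer pattern. A sketch of a cron-driven cleanup - the path, age threshold and file name are placeholders:

        # /etc/cron.d/cleanup-uploads (hypothetical): hourly, delete staged files older than 2 hours
        0 * * * * root find /var/www/app/uploads/staging -type f -mmin +120 -delete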

    Read the article

  • How can I change the lock screen in Windows 8 that appears for the default user?

    - by Mark Allen
    This is about Windows 8 RTM. How can I change the lock screen in Windows 8 that appears after connecting to the machine via RDP? You can change your lock screen for your user account like so: Hit the Windows key. Right click your user name in the upper right hand corner, choose Change Account Picture. Click Lock Screen, choose a new picture. However, if you then connect to the computer where you've done this from another computer via RDP, using the same account, the physical machine you've connected to will display the "default" user lock screen - a stylized Space Needle / Seattle picture. It's not a bad picture, but I'd like to change it.

    Read the article

  • Shrinking physical volumes in LVM on a Linux Guest in ESXi 5.0

    - by Stew
    The problem: a Linux guest (OpenSuse 12.1) with multiple virtual disks attached. Three disks are in a logical volume, two of which are exactly 2TB. None of the disks are independent, and due to the backup software we use, they cannot be independent. When the two 2TB virtual disks are "dependent", taking a snapshot fails, stating that the file is too large for the datastore. When I put those two disks in independent mode, snapshots work fine (the other disk is 1.8TB). I have therefore concluded that shrinking the two disks by even 100GB should solve the problem; however, I am having trouble conceptualizing how to make those disks smaller without breaking the LVM entirely. The actual LV has 1.3TB free, so there is plenty of space to shrink into. What I need to accomplish: deallocate 100GB from each of the two 2TB virtual disks within the Linux guest, then shrink the two virtual disks by 100GB within vSphere (not as complicated). Are there any vSphere/LVM gurus that can give me a clue?
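
    Not a guru answer, but the general sequence for reclaiming space from LVM physical volumes looks something like the sketch below (device, VG and LV names plus the sizes are placeholders; take a verified backup first, and note that the freed extents must end up at the tail of each PV, which may require pvmove):

        # 1. Shrink the filesystem, then the LV, by a bit more than the space to be reclaimed
        umount /data
        e2fsck -f /dev/vgdata/lvdata
        resize2fs /dev/vgdata/lvdata 5400G
        lvreduce -L 5500G /dev/vgdata/lvdata
        resize2fs /dev/vgdata/lvdata              # grow the fs back to fill the reduced LV

        # 2. Check that no extents sit in the last ~100GB of each 2TB PV (pvmove them if so),
        #    then shrink the PVs themselves
        pvs -v --segments
        pvresize --setphysicalvolumesize 1.9T /dev/sdb
        pvresize --setphysicalvolumesize 1.9T /dev/sdc

        # 3. Only then shrink the two virtual disks by 100GB on the vSphere side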

    Read the article

  • Can't mount home after trying to resize (bad geometry: block count exceeds size of device).

    - by Lynn
    This is on a fresh computer (a supercomputer, actually). It came to me with 15T on the home mount and 50G on root. I tried allocating 7T to root and resizing (since I'm putting a local yum repo on this machine, as it has no internet access nor ever will). I tried following the instructions here: Centos 6.3 disk space allocation - but something went wrong and home won't mount again. Instead, dmesg | tail gives me:
        EXT4-fs (dm-2): bad geometry: block count 4294967295 exceeds size of device (1342177280 blocks)
    df -h gives this output:
        Filesystem                    Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup-lv_root  7.0T  3.6G  6.6T   1% /
        tmpfs                         190G  216K  190G   1% /dev/shm
        /dev/sda1                     485M   38M  422M   9% /boot
    I didn't have any files on /dev/mapper/VolGroup-lv_home. Will simply running mke2fs fix it to be mountable? What options should I run it with? I've never resized volumes before or used mke2fs. I don't want to make this mess worse.
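
    Since there was nothing on lv_home, re-creating its filesystem is the straightforward way to get it mountable again (a sketch - confirm with lvs that the logical volume itself has the size you expect first, because mkfs wipes whatever is on the volume):

        # Confirm the logical volume sizes
        lvs

        # Recreate the filesystem on the empty home LV, then mount it
        mkfs.ext4 /dev/mapper/VolGroup-lv_home
        mount /home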

    Read the article

  • Online File Sharing that acts just like LAN shared drives, etc.

    - by Dayton Brown
    Hi all, I have a small business client that wants to move their current file share to the web. The specs are as follows: 20 to 30 GB of space; normal file sizes (nothing more than 50 to 100 MB); 3 users; the ideal solution would have the exact same functionality as Windows Explorer; and cheap - but not super cheap - I would like to keep it around $20 per user per month. I've explored a bunch of solutions, but they are all a bit on the complicated side. Thanks in advance for the recommendations.

    Read the article

  • CGI error from PHP when running exec() on IIS

    - by Patrick
    Environment: Windows Server 2003 x64, PHP 5.2, IIS 6.0. The program Ink2Png.exe is set with Everyone - Read and Execute permissions, as is its dependency (microsoft.ink.dll). PHP Safe Mode is off. exec() is passed [the full exe path], a space, then [the full path to another file]. This other file also has full read permissions, and the output directory has full write permissions. As soon as exec() is hit, the connection dies - the browser does not even receive a full set of HTTP headers - and a CGI error is reported. Examining the output, it appears the program was not even run. Any ideas? How can I figure out what exactly is happening and get it running again? EDIT: Also, it is a .NET application, if that is significant in any way.
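
    One way to narrow it down (a sketch - the paths are placeholders) is to take PHP and IIS out of the picture: run the exact command line exec() would build from a console, capturing stderr and the exit code. Running it under the IIS worker identity (e.g. via runas or psexec) makes the test more faithful; if it also fails there, the problem is the program or its .NET/Ink dependencies rather than exec() itself.

        :: Run the same command PHP would build, capturing all output and the exit code
        "C:\tools\Ink2Png.exe" "C:\inetpub\wwwroot\data\input.ink" > C:\temp\ink2png.log 2>&1
        echo Exit code: %ERRORLEVEL%
        type C:\temp\ink2png.log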

    Read the article

  • Copy files with filter (XP)

    - by fire
    I have a huge folder (over 6GB) with multiple sub-folders that I want to copy onto an external hard drive; however, I do not want it to copy any PDF, EXE or ZIP files across, to save space. Is there any software that will help me achieve this? I have looked at TeraCopy but it doesn't seem to have any filter mechanism. I am using Windows XP (*sigh*). Edit: I found the xcopy command - will this do it? Can anyone help me with the syntax?
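
    xcopy can do this via its /EXCLUDE switch, which reads one pattern per line from a text file and skips any file whose path contains a pattern. A sketch - the source, destination and exclude-file paths are placeholders:

        :: C:\excludes.txt contains three lines:  .pdf  .exe  .zip  (one per line)
        xcopy "C:\BigFolder" "E:\Backup\BigFolder" /E /I /C /H /EXCLUDE:C:\excludes.txt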

    Read the article

  • ZFS, dedupe and PST files

    - by Unreason
    I am interested to know what the expected maximum dedupe ratio would be for a set of PST files. I have ~40GB of PST files from ~15 users, with a high level of duplication of attachments. I am running tests to see if I can get significant space savings by storing the data on ZFS with dedupe. For this purpose I have installed a test setup of Nexenta, but was wondering if someone here has already done this and what level of deduplication I might expect (or, in other words, how sensitive PST files are to block alignment, and what parameters can influence the ratio). Initial tests show a very low dedupe ratio, and I did find an explanation that block-level dedupe would not be efficient here and that byte-level dedupe would be much better (and that it should be performed by an application that is aware of the internal organization), so I am just double-checking here whether someone has more input. Otherwise I will probably be converting the PST files to IMAP.
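
    For what it's worth, ZFS can estimate the achievable ratio without committing to dedup, and the ratio is sensitive to the dataset's recordsize, since deduplication matches whole records. A sketch (pool and dataset names are placeholders, and smaller records mean a much larger dedup table in RAM):

        # Simulate dedup on an existing pool and print a histogram plus the projected ratio
        zdb -S tank

        # Or enable dedup on the test dataset and watch the DEDUP column
        zfs set dedup=on tank/pst
        zpool list tank

        # Experiment with a smaller recordsize (applies only to newly written files)
        zfs set recordsize=8K tank/pst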

    Read the article

  • Does Intel Smart Response provide any statistics on the cache usage?

    - by Tom Seddon
    I've set up my Z68-based Core i7 PC with a 60GB SSD dedicated as a Smart Response cache drive. Is there any way I can get any statistics out of it? It would be nice to have some information on how much cache space is actually being used, maybe how much of it was actually accessed recently, and how many reads in general are coming from the SSD rather than from the mechanical disk. These statistics might help to quickly provide some evidence for or against the use of Smart Response, without my having to reinstall Windows on the SSD (etc.) to find out. The Windows ReadyBoost feature has some performance counters you can access via the Windows 7 perfmon tool, for example, which is the kind of thing I'm hoping is somehow available. Smart Response provides no perfmon counters, though, and the Intel Rapid Storage Utility tells you pretty much nothing except that Smart Response is switched on.

    Read the article

  • Why is the Utilities button of Virtual Machine Settings/Hardware/Hard Disk (IDE) disabled?

    - by q0987
    I am using an Ubuntu 11.04 virtual image with VMWare Player 4.0.2 build-591240. The folder that holds the VMWare image has grown to 8GB, and I want to shrink it using the utilities tool provided by VMWare Player; however, I found that the button is disabled. Question: how do I enable the button so that I can use it to shrink the size of the virtual disk used by Ubuntu? Question: if I cannot enable the button, is there another way to shrink the size of the used space? Thank you
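
    If the button stays greyed out (commonly the case when the disk is preallocated, has snapshots, or is set as independent), shrinking can often be done from inside the guest with VMware Tools instead - a sketch, assuming VMware Tools (or open-vm-tools) is installed in the Ubuntu guest:

        # Inside the Ubuntu guest
        sudo vmware-toolbox-cmd disk list       # show which mount points can be shrunk
        sudo vmware-toolbox-cmd disk shrink /   # wipe free space and shrink the backing vmdk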

    Read the article

  • VMWare Server :: VM set to 2gb RAM but vmware process shows 100mb physical, 1900mb virtual

    - by brad
    I've set up a VMWare instance to run the CastIron Integration Appliance. I allocated 2GB of memory to the instance, assuming it would take this as physical memory (my server has 8GB total). However, when I view top on the server, the vmware-vmx process shows about 100MB of resident memory and 1900MB virtual. CastIron reports that the appliance often hits 50% memory usage. Does this mean I'm using 900MB of hard drive space as memory? I want VMWare to use 2GB of physical memory, no swap. Can anyone tell me how to achieve this? Setup: Debian Lenny 5.0.3, VMWare Server 2.0.2.
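
    Two hedged notes rather than a definitive answer: a low resident size for vmware-vmx does not by itself prove the guest memory lives on disk, because guest RAM is mapped from a backing file and may not be counted as resident; and the tuning options commonly suggested for keeping guest memory in host RAM are the host-wide prefvmx settings plus the per-VM mainMem option shown below - a sketch to test carefully, not settings I can vouch for on VMware Server 2 specifically.

        # /etc/vmware/config (host-wide): try to fit all VM memory into host RAM
        prefvmx.minVmMemPct = "100"

        # In the VM's .vmx file: don't back guest memory with a .vmem file,
        # and reduce ballooning/page-sharing pressure for this VM
        mainMem.useNamedFile = "FALSE"
        sched.mem.pshare.enable = "FALSE"
        MemTrimRate = "0"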

    Read the article
