Search Results

Search found 16174 results on 647 pages for 'disk space'.

  • Exchange 2007 | Mailbox DB Size 180GB

    - by rihatum
    Hi all, I have an Exchange 2007 SP1 server running on Windows Server 2008 with six hard drives: the OS, database, and logs each sit on a separate RAID-1 pair. The mailbox database is 183 GB and growing. We only have a First Storage Group and a Second Storage Group, and there is no more space in the server to install new physical disks and create another storage group. Q: Can I resize the RAID-1 partition where the DB lives? Q: Any other suggestions for how I can decrease the mailbox DB size? I will be grateful for your suggestions. Kind regards
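
    One frequently suggested approach, offered here only as a sketch: space freed by deleted or moved mailboxes is not returned to the file system until an offline defragmentation, and eseutil needs temporary space roughly the size of the database, which can live on a network share when the local disks are full (both paths below are hypothetical):

        eseutil /d "D:\DB\Mailbox Database.edb" /t "\\otherserver\share\temp.edb"

    The database has to be dismounted while this runs, so it is usually scheduled in a maintenance window.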

    Read the article

  • Does SDHC have any write error recovery?

    - by marc
    What happens if an SDHC card gets a write error (damaged cell / bad sector)? Is the whole card unusable (ready for the trash, with all data written to that sector now and in the future lost)? Or does the controller rewrite the sector elsewhere (flash memory gets corrupted while writing, so maybe there is a mechanism to check whether a sector was written successfully) and mark the faulty one as unusable, which would show up as a reduction in capacity but no data loss? I have to do some research about SD cards on diskless machines. Regards

    Read the article

  • How to move the files of a replicated database (SQL Server 2008 R2) to a different drive

    - by ileon
    I would appreciate it if someone could help me with the following problem: we use two SQL Server 2008 R2 databases under transactional replication (transactional publication with updatable subscriptions). Because we ran out of disk space, we need to move the database files to a new drive, but I don't want to break replication. What I'm looking for are the steps required to move the files to the new drive. Thanks
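
    A minimal sketch of the usual approach (assuming the logical file names are MyDb_Data and MyDb_Log, which are illustrative; relocating files this way does not alter the replication configuration, though replication pauses while the database is offline):

        ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Data, FILENAME = 'E:\Data\MyDb.mdf');
        ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Log,  FILENAME = 'E:\Data\MyDb_log.ldf');
        ALTER DATABASE MyDb SET OFFLINE;
        -- copy the .mdf/.ldf files to E:\Data at the OS level, then:
        ALTER DATABASE MyDb SET ONLINE;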

    Read the article

  • pipe from tar to ftp

    - by facha
    I have FTP access to a server I do not control. I'd like to start sending archives of my server's file system to that FTP server. The problem is that I don't have enough free space on my system to create a backup archive first (and store it on my fs) and then send it. So I'm wondering if it is possible to do something like this: tar -jcpvf - / | ftp-put ftp://user:pass@host/file.tbz Normally there is no problem doing this over ssh, but in this case I only have FTP available.
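
    One way to do this without any temporary file is to let curl read the upload body from standard input; -T - tells it to upload stdin (a sketch; the excludes are illustrative and the URL is the one from the question):

        tar -cjpf - --exclude=/proc --exclude=/sys --exclude=/dev / | curl -T - ftp://user:pass@host/file.tbz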

    Read the article

  • How to mount a compressed ISO image?

    - by dma_k
    I have a problem mounting a compressed (ISZ) image under Linux, created by e.g. UltraISO. I am aware of the user-space fuseiso, but it fails to mount these images, as I have reported in the Debian bug tracker (correct me if I did something wrong). I ask the community for help: I need a proven solution to mount these images without decompressing them. I believe the CONFIG_ZISOFS kernel option cannot help, as it refers to a special RockRidge extension (per-file compression with mkisofs -z or mkzftree).

    Read the article

  • How to figure out disk performance in Xen?

    - by cpt.Buggy
    So, I have a Dell R710 with a PERC 6/i Integrated controller and 6 450 GB Seagate 15k SAS disks in RAID 10, with 30 Xen VPSes running on it. Now I need to deploy a second server with the same hardware for the same tasks, and I'm wondering whether it would be a good idea to use RAID 5 instead of RAID 10, because we have a lot of "free" memory on the first server but not much "free space". How do I measure disk performance on the first server and work out whether I could move to RAID 5 without slowing down the whole system?
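
    A sketch of how to measure what the first server's array is actually doing (iostat comes with the sysstat package; the fio parameters are illustrative, not tuned):

        iostat -x 5
        fio --name=randwrite --rw=randwrite --bs=4k --size=1g --direct=1 --numjobs=4 --group_reporting

    Random writes are where RAID 5 pays its parity penalty, so comparing that fio result against the same run on a RAID 5 test array gives the most direct answer.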

    Read the article

  • How can I map IIS on my PC with a static IP to my domain name?

    - by Subhen
    Hi, I now have an internet connection with a static IP. I want to know how I can map my website to the static IP I received from my ISP. I know this is not a good idea for security and performance reasons, but I just wanted to know so I can set up a test project. Also, can't I just buy a domain name and map it to my static IP, instead of buying hosting from web hosting providers? For now I have bought space from bizzhost and hosted my site by setting the name servers. Thanks, Subhen
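
    For the domain part: pointing a purchased domain at a static IP is just an A record in the registrar's DNS panel, along these lines (203.0.113.10 is a documentation placeholder, not a real address):

        example.com.   3600  IN  A  203.0.113.10
        www            3600  IN  A  203.0.113.10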

    Read the article

  • Executing a batch file from a SQL Server job

    - by uzay95
    I want to create a backup job on SQL Server, and I want it to execute a batch file. I'm just wondering about the part that executes the batch file from the SQL job. Do you have any idea? Any help would be appreciated.

        use MyDb
        go
        BACKUP DATABASE MyDb TO DISK = 'C:\BackUps\MyDb.bak' WITH DIFFERENTIAL
        go
        -- Call my batch file (which will zip the MyDb.bak file)
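
    Two common options, sketched with a hypothetical batch file path: add a second job step of type "Operating system (CmdExec)" that runs the batch file after the T-SQL step, or call it from T-SQL via xp_cmdshell, which is disabled by default and must be switched on first:

        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;
        EXEC xp_cmdshell 'C:\BackUps\zip_backup.bat';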

    Read the article

  • Use a RAID Controller without drivers?

    - by cian1500ww
    I ordered an Adaptec 1420SA RAID card for my Debian Squeeze media server but didn't check whether it was compatible. It turns out it's not, because it uses something called HostRAID, which requires special drivers that aren't available for Debian. Could I still use the card as an ordinary controller and just use OS software RAID? I'm not looking for speed; I just need to mirror some drives that will be used for storage. The OS resides on a disk connected to the server's onboard controller, so the system won't be booting from any drives on the Adaptec controller.
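
    If the card's ports are usable as plain SATA, mirroring with Linux software RAID would look roughly like this (a sketch; device names and the mount point are hypothetical):

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        mkfs.ext4 /dev/md0
        mount /dev/md0 /mnt/storage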

    Read the article

  • MongoDB and datasets that don't fit in RAM no matter how hard you shove

    - by sysadmin1138
    This is very system dependent, but chances are near certain we'll scale past some arbitrary cliff and get into Real Trouble. I'm curious what kind of rules of thumb exist for a good RAM to disk-space ratio. We're planning our next round of systems, and need to make some choices regarding RAM, SSDs, and how much of each the new nodes will get. But now for some performance details!

    During the normal workflow of a single project-run, MongoDB is hit with a very high percentage of writes (70-80%). Once the second stage of the processing pipeline hits, it's extremely high read, as it needs to deduplicate records identified in the first half of processing. This is the workflow "keep your working set in RAM" is made for, and we're designing around that assumption.

    The entire dataset is continually hit with random queries from end-user-derived sources; though the frequency is irregular, the size is usually pretty small (groups of 10 documents). Since this is user-facing, the replies need to be under the "bored-now" threshold of 3 seconds. This access pattern is much less likely to be in cache, so it is very likely to incur disk hits.

    A secondary processing workflow is high read of previous processing runs that may be days, weeks, or even months old; it runs infrequently but still needs to be zippy. Up to 100% of the documents in the previous processing run will be accessed. No amount of cache-warming can help with this, I suspect.

    Finished document sizes vary widely, but the median size is about 8K. The high-read portion of the normal project processing strongly suggests the use of replicas to help distribute the read traffic. I have read elsewhere that 1:10 RAM-GB to HD-GB is a good rule of thumb for slow disks. As we are seriously considering using much faster SSDs, I'd like to know if there is a similar rule of thumb for fast disks.

    I know we're using Mongo in a way where cache-everything really isn't going to fly, which is why I'm looking at ways to engineer a system that can survive such usage. The entire dataset will likely be most of a TB within half a year and keep growing.
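
    For a sense of scale under the quoted rule of thumb: at 1:10 RAM-GB to HD-GB, the projected ~1 TB dataset implies on the order of 100 GB of RAM per data-bearing node; if fast SSDs tolerated a thinner ratio such as 1:30 (an assumption for illustration, not an established figure), the same dataset would need roughly 33 GB.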

    Read the article

  • Installing Cygwin, what distro do I use?

    - by user2699451
    I have a fresh install of Windows and a Linux OS that I can't access; how do I fix this? I no longer have the .iso/disk for Linux. So I figured I could install Cygwin and through it install GRUB, but I am used to Linux Mint, which uses apt-get. I have used CentOS before, which uses rpm, but how do I install and use packages in the Cygwin terminal, and is it possible to install GRUB through Cygwin?
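
    For the package side, a sketch of how Cygwin installs work: there is no apt-get/rpm equivalent inside the terminal; packages are chosen by re-running the setup program, which can also be driven from the command line (the package names here are examples):

        setup-x86_64.exe -q -P wget,rsync

    Installing GRUB from within Cygwin is not realistic, though; the usual fix is to boot any Linux live USB and run grub-install from there, as sketched under the boot-loader question further down this page.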

    Read the article

  • CentOS freezes with this error: kernel: ata1.00: exception Emask 0x0 SAct 0x7fffffff SErr 0x0 action 0x0

    - by lakshman
    I am using CentOS 5.5 with a RAID 1 configuration. The server freezes and becomes unresponsive. The only thing I found in the messages file is: kernel: ata1.00: exception Emask 0x0 SAct 0x7fffffff SErr 0x0 action 0x0 The server was built recently. Please let us know what the problem might be. The hard disk details are: Model Number: ST500NM0011, Serial Number: Z1M02LT7, Firmware Revision: SN02
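
    That ATA exception generally points at the drive or its link rather than the OS, so a reasonable first diagnostic (smartctl ships with the smartmontools package; the device name is an example) is:

        smartctl -a /dev/sda       # full SMART report: reallocated sectors, UDMA CRC errors, etc.
        smartctl -t long /dev/sda  # start an extended offline self-test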

    Read the article

  • A few questions on Windows Explorer properties (Win 7)

    - by Nrew
    I've read this article from howtogeek, but it didn't explain this part of the Target field when you right-click the Windows Explorer shortcut and open Properties: %windir%\explorer.exe shell:desktop\Inbox And why does local disk E: show up when I have this one: %windir%\explorer.exe shell:E:\FINAL SAVE DATA I don't really get the code, especially the shell:desktop\Inbox part. What's that supposed to mean? And how do I change it so that when I click the Windows Explorer shortcut, I see this location: E:\FINAL SAVE DATA
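
    For the last part, a plain quoted path as the shortcut target should be enough; the quotes matter because the folder name contains spaces (a sketch):

        %windir%\explorer.exe "E:\FINAL SAVE DATA"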

    Read the article

  • Correcting a messed-up file tree on an NTFS partition

    - by Fullmooninu
    It's a really messy situation, but I'm almost out of options. It's my personal hard drive, so it's very important to me, and yes, I have no backup =( The short story:
    1) I have two discs: one with Windows, and another where I had a bit of empty space at the front of the disk so I could install Linux. The rest was occupied by a 1.8 TB NTFS partition filled with data.
    2) I installed Linux, and after a while realized there was not enough space for everything, so I tried using GParted and told it to resize the NTFS partition to a smaller size.
    3) The system jammed. I had to reboot, which broke the resizing operation. Here's what I did to fix it:
    a) Rebooted into a Linux live session and used TestDisk to deep-analyze the disk and recover the possible partitions. It found several versions of the NTFS partition, probably made during the resizing. I told TestDisk to open each of them, and only one could list its files; opening the other candidates showed an error message. I assumed the one without errors was the correct one, and told TestDisk to recover that partition and write a new MBR.
    b) The partition had errors, and Linux has an NTFS fixing tool; I used it, but the partition still had errors.
    c) So I booted into Windows and used chkdsk to correct all errors on the partition.
    d) Everything seems fine, but now, back in Windows, when I open one file, it opens another file, or part of another file. Some files took up the position of other files.
    What I think happened is that I recovered an old tree, not the most current one, and that old one just happened to be intact while the most recent one was damaged. As such, the files that were moved during the failed resizing were wrongly assumed, during the automatic correction, to be in their correct places. So when I open a file, it tries to open another one: Radiohead - Creep.mp3 will open and actually be a bit of another song, or even bytes from a jpg. Some files seem to be all right, but others seem to have had their position taken by other files. Does anyone know of something really powerful that can help me solve this?
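
    When the directory tree itself maps names to the wrong extents, tree-based repair tools can't help, but signature-based carving ignores the tree entirely. A sketch using PhotoRec (shipped alongside TestDisk; the recovered files must be written to a different disk, here a hypothetical /mnt/recovery):

        photorec /log /d /mnt/recovery/ /dev/sdb

    Carving loses file names and directory structure, so it is a last resort, but it recovers file contents by recognising their headers rather than trusting the damaged file tree.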

    Read the article

  • How to mount a LUKS partition securely on a server

    - by Ency
    I'm curious whether it is possible to mount a partition encrypted by cryptsetup with LUKS securely and automatically on Ubuntu 10.04 LTS. For example, if I use a keyfile for the encrypted partition, then that key has to be stored on a device that is not encrypted, and if someone steals my disk they'll be able to find the key and decrypt the partition. Is there any safe way to mount an encrypted partition? If not, does anything exist that does what I want?
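
    For reference, the standard keyfile setup looks like this (a sketch: it automates the mount but does not solve the stolen-disk problem, because the keyfile sits in cleartext on the unencrypted root filesystem; device names and paths are examples):

        dd if=/dev/urandom of=/root/luks.key bs=512 count=4
        chmod 0400 /root/luks.key
        cryptsetup luksAddKey /dev/sdb1 /root/luks.key
        # /etc/crypttab: data  /dev/sdb1  /root/luks.key  luks
        # /etc/fstab:    /dev/mapper/data  /srv/data  ext4  defaults  0  2

    Genuinely safe unattended mounting needs the key to live somewhere other than the machine's own disks, e.g. fetched over the network at boot or held in a hardware token, so that the stolen disk alone is useless.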

    Read the article

  • Is there a Distributed SAN/Storage System out there?

    - by Joel Coel
    Like many other places, we ask our users not to save files to their local machines. Instead, we encourage that they be put on a file server so that others (with appropriate permissions) can use them and so that the files are backed up properly. The result of this is that most users have large hard drives that are sitting mainly empty. It's 2010 now. Surely there is a system out there that lets you turn that empty space into a virtual SAN or document library?
    What I envision is a client program that is pushed out to users' PCs and coordinates with a central server. The server looks to users just like a normal file server, but instead of keeping entire file contents it merely keeps a record of where those files can be found among various user PCs. It then coordinates with the right clients to serve up file requests. The client software would be able to respond to such requests directly, as well as be smart enough to cache recent files locally. For redundancy the server could make sure files are copied to multiple PCs, perhaps allowing you to define groups in different locations so that an instance of the entire repository lives in each group, to protect against a disaster in one building taking down everything else.
    Obviously you wouldn't point your database server here, but for simpler things I see several advantages:
    - Files can often be transferred from a nearer machine.
    - Disk space grows automatically as your company does.
    - It should ultimately be cheaper, as you don't need to keep a separate set of disks.
    I can see a few downsides as well:
    - Occasional degradation of user PC performance, if the machine has to serve or accept a large file transfer during a busy period.
    - Writes have to be propagated around the network several times (though I suspect this isn't really much of a problem, as reading happens in most places more than writing).
    - You still need a way to send a complete copy of the data offsite occasionally, and this would make it very hard to do differentials.
    Think of this like a cloud storage system that lives entirely within your corporate LAN and makes use of your existing user equipment. Our old main file server is due for retirement in about 2 years, and I'm looking into replacing it with a small SAN. I'm thinking something like this would be a better fit. As a school, we have a couple of computer labs I can leave running that would be perfect for adding a little extra redundancy to the system. Unfortunately, the closest thing I can find is Dienst, and it's just a paper that dates back to 1994. Am I just using the wrong buzzwords in my searches, or does this really not exist? If not, is there a big downside that I'm missing?

    Read the article

  • Why do my VMware Images get so large?

    - by stevebot
    Hi, I have a CentOS VMware image that I have recreated a couple of times, and I notice that after a while it gets pretty large. It starts out at 8 GB when I make it, a week or two later it is 25 GB, and a month later it is a whole 50 GB or so. I am not installing anything crazy on it, and disk usage inside the VM is pretty low. Is there an option that could be affecting the size of these VMs?
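
    Growable (sparse) VMDKs expand on every write and never shrink on their own; blocks freed by deleted guest files stay allocated in the image. A commonly used remedy, sketched with an example disk name: zero out the free space inside the guest, then compact the image on the host:

        # inside the guest: fill free space with zeros, then remove the filler file
        dd if=/dev/zero of=/zerofill bs=1M; rm /zerofill
        # on the host, with the VM powered off:
        vmware-vdiskmanager -k mydisk.vmdk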

    Read the article

  • How to restore the Ubuntu boot loader

    - by jack
    Hi, I recently installed Ubuntu. It let me use GParted to take free space from the Windows XP partition. After installing, Ubuntu nicely presents a boot menu that lets me boot either Ubuntu or XP. Now my XP install is broken and I need to remove/reinstall it. How can I restore the Ubuntu boot menu afterwards? Thank you.
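
    Reinstalling XP overwrites the MBR with the Windows loader; GRUB can be put back afterwards from any Ubuntu live CD/USB. A sketch, assuming the Ubuntu root partition is /dev/sda5 (older GRUB 2 releases use --root-directory=/mnt instead of --boot-directory=/mnt/boot):

        sudo mount /dev/sda5 /mnt
        sudo grub-install --boot-directory=/mnt/boot /dev/sda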

    Read the article

  • Linux data storage and partitioning

    - by Rajeev
    In the following output of df -h you can see that I have added a new hard drive (/dev/sdb1) and mounted it as /hdd1. My question is: if I start dumping data to /opt, will that data land on /hdd1 or on /? My goal is to utilise the new drive instead of the old disk (/dev/sda3). How can this be done?

        Filesystem   Size  Used  Avail  Use%  Mounted on
        /dev/sda3    442G  312G    12G   86%  /
        tmpfs        1.9G     0   1.9G    0%  /dev/shm
        /dev/sda1    194M   57M   128M   31%  /boot
        /dev/sdb1    1.7T  201M   2.6T    1%  /hdd1
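
    Since /opt currently lives on the root filesystem (/dev/sda3), anything written there lands on the old disk. One way to move it onto the new drive, sketched with the paths from the question:

        mkdir /hdd1/opt
        rsync -a /opt/ /hdd1/opt/          # copy the existing data
        mv /opt /opt.old && mkdir /opt
        mount --bind /hdd1/opt /opt        # make /opt point at the new disk
        # persist it in /etc/fstab: /hdd1/opt  /opt  none  bind  0  0

    Alternatively, /dev/sdb1 could simply be mounted directly on /opt instead of /hdd1.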

    Read the article
