Search Results

Search found 9017 results on 361 pages for 'efficient storage'.


  • Motherboard: Intel S5520HCR s1366 SSI EEB

    - by Crazy_Bash
    I'm building a storage server for online video streaming. I'm thinking of adding two SSDs for the OS; the other 15 drives (12 SATA HDDs and 3 SSDs) I want to pool with aufs over XFS, served over a 4GB/sec Ethernet network. But I'm a little confused. The S5520HCR board supports 6 SATA/300 ports (RAID 0, 1, 10, Intel ICH10R). Does that mean I can still use SATA III HDDs? I'm planning on buying the Seagate SV35 series (3.5, 3??, 64??, SATA III-600). Also, my chassis supports up to 16 SATA drives but the motherboard only 6 - what kind of SATA controller should I use? And what's better in terms of performance, socket 1366 or 2011? My server so far:

      AIC RSC-3EG-80R-SA1S-2 3U
      Motherboard: Intel S5520HCR s1366 SSI EEB
      Kingston DDR3 8192Mb PC3-10600 1333MHz (KVR1333D3N9/8G)
      Seagate 3000GB 64MB 3.5" 7200rpm SATAIII (ST3000DM001)
      Kingston 480GB SSD 2.5" SATAIII
      Intel E1G44HTBLK
      Intel Xeon E5606 2133MHz/L3-8192Kb/QPI s1366 tray
      SERVER ACC CARD SAS PCIE 16P HBA 9201-16I LSI00244 SGL LSI
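    Not part of the question, but a rough sketch of the aufs-over-XFS pooling described above, assuming a kernel with the aufs module; device names and mount points are placeholders:

        # format each data disk with XFS, then union-mount them with aufs
        mkfs.xfs /dev/sdb1
        mkfs.xfs /dev/sdc1
        mkdir -p /data/disk1 /data/disk2 /srv/videos
        mount /dev/sdb1 /data/disk1
        mount /dev/sdc1 /data/disk2
        mount -t aufs -o br=/data/disk1=rw:/data/disk2=rw none /srv/videos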

    Read the article

  • When HDD becomes full, how to create a symbolic link to the data store on another disk?

    - by Brij Raj Singh
    I have a Linux Ubuntu machine with an X GB hard disk. There is a folder, say, /opt/software/data. The disk /dev/sda1 is almost full, and I have attached another disk as /dev/sda2, which is mounted at /hdd2. Is it possible to link /opt/software/data with /hdd2/software/data so that every file gets stored in /hdd2/software/data but can still be referred to as /opt/software/data? I can't reinstall the software that creates this data to change its default storage location.
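    A minimal sketch of the usual approach, assuming the software can be stopped while the data is moved (paths are the ones from the question):

        # stop whatever writes to /opt/software/data first
        sudo mkdir -p /hdd2/software
        sudo mv /opt/software/data /hdd2/software/data
        sudo ln -s /hdd2/software/data /opt/software/data

    A bind mount entry in /etc/fstab (/hdd2/software/data on /opt/software/data) is an alternative if the software refuses to follow symlinks.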

    Read the article

  • How do I USB tether my Cyanogen-modded G1's internet connection to my Toshiba Tecra 8000 running Xubuntu?

    - by atticus
    I have USB tethering enabled on my phone, and it works fine with Vista. When I plug the phone into my Tecra 8000 laptop running Xubuntu, dmesg shows: "usb 1-1: new full speed USB device using uhci_hcd and address 8". I see that the OS has detected it as a storage device, but I can't get it to function correctly as a network device. /dev/us* shows no usb0, but it does show /dev/usbdev1.1_ep00, /dev/usbdev1.1_ep81, /dev/usbdev1.8_ep00 ... usbdev1.8_ep83. I could just use the wireless tether app for Android, but I can't get my Netgear WG511 v2 (made in China) wireless card to work in this laptop either. But that's another post for later.
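    Not from the post, but a hedged sketch of what sometimes brings the tether up on older Ubuntu releases, assuming the ROM exposes an RNDIS network interface rather than just USB storage (module and interface names are guesses):

        sudo modprobe rndis_host    # some ROMs need cdc_ether instead
        ifconfig -a                 # look for a new usb0 interface
        sudo dhclient usb0          # request an address from the phone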

    Read the article

  • How do I get a USB hard-disk adapter to work with a PCMCIA USB card?

    - by Carl
    I have a PCMCIA (CardBus) 32-bit USB 2.0 2-port card. If I plug my MP3 player into it, it works fine. I also have a USB adapter for a hard disk drive. When I plug that into the PCMCIA USB slot, the light on the hard disk flashes between red and green - it should stay green. "USB Mass Storage device" is added in Device Manager, but no drive letter appears in My Computer, and the installing-device icon down by the clock never goes away. (Windows XP SP3.)

    Read the article

  • Why does Mac OS X sometimes complain that a copy failed because a file is in use?

    - by orj
    Recently I've been copying files from DVDs to network storage on my Mac running Leopard 10.5.7. I'm just dragging and dropping in Finder to perform the copy. Occasionally the copy will fail with a dialog complaining that a file is in use. If I repeat the copy generally it completes successfully. I could understand this being a problem if one was trying to move a file and it was open by another app. But none of these files are open in other apps. I just pop the DVD in, drag and drop the files to my NAS's network share and sometimes it fails with the "file in use" error. This is very annoying. Anyone have any ideas?
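    Not from the post, but one hedged workaround is doing the copy from Terminal with rsync, which handles files one at a time and preserves Mac metadata with Apple's -E flag; the volume names are placeholders:

        rsync -avE /Volumes/MY_DVD/ /Volumes/NAS_Share/videos/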

    Read the article

  • Samba 4 or Active Directory

    - by Jon Rhoades
    Now that Samba 4 has finally been released, we find ourselves in the new position of having a choice between upgrading our Samba 3 domain to a Samba 4 domain on Linux or to a Windows AD domain on Windows 2012. Given that we are equally expert at managing Windows and Linux servers, is there any reason not to use Samba 4 over AD on Windows? Specifically: Are there functional differences from a Windows/OS X client perspective? Are there issues with other services that use AD, such as storage appliances that use AD/Kerberos for authentication/authorisation? Will the Microsoft "System Centre" suite of tools and other similar products work seamlessly? How will Samba 4 handle AD's multi-master DC model and FSMO roles? Are there any other issues to be aware of, such as vendor support?
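    For reference, a hedged sketch of how a Samba 3 domain is typically migrated into a Samba 4 AD domain (paths and the realm are placeholders; check the exact options against the Samba wiki for your version before running anything like this):

        # on the new Samba 4 box, import the existing Samba 3 domain
        samba-tool domain classicupgrade --dbdir=/path/to/samba3/private \
            --realm=CORP.EXAMPLE.COM /path/to/samba3/smb.conf

        # afterwards it behaves like an AD DC; e.g. check the functional level
        samba-tool domain level show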

    Read the article

  • Sudden slow read & write speed on all IO

    - by user23392
    I have a custom-built rig with 2 storage drives: for the OS, a Western Digital 1.0TB hard drive (64MB cache); for other stuff, a Corsair Performance 3 128GB SSD (expected read speed: 400 MB/s). The system was incredibly fast for a couple of months. Then one day I was playing a game and it started to get buggy (some sounds and objects disappearing); I stopped the game, the system seemed unstable, so I had to shut it down. The next morning I couldn't start it up - it was saying something about a corrupt device. I formatted both disks and installed a fresh copy of Windows. All I can say is that since that day the system has never been like before: it takes 10 minutes to boot (the icons and desktop slowly appear), although once it's done the slowness isn't as noticeable. I benchmarked read and write speed on both the HDD and the SSD (screenshots not included here). Anyone know what the issue could be?
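    Not from the post, but a few hedged first checks from an elevated Command Prompt to rule out a failing drive or filesystem damage (assuming Windows Vista/7 for winsat):

        :: SMART status summary for each physical disk (OK / Pred Fail)
        wmic diskdrive get model,status
        :: check the OS volume for filesystem damage (offers to run at next reboot)
        chkdsk C: /f
        :: built-in disk throughput benchmark
        winsat disk -drive c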

    Read the article

  • Advice on off-site backup of Hyper-V Failover Cluster

    - by Paul McCowat
    We are currently setting up a Server 2008 R2 box which will be off-site, reachable over a leased line with VPN. At the main site are 2 x Hyper-V hosts in a failover cluster with a PowerVault M3000i iSCSI SAN. We are using BackupAssist for local backups: each host backs up itself and its guests nightly, creating a 500GB backup which is copied to a 2TB rotated NAS drive. Files and SQL DBs are also backed up / log shipped etc. I'm looking for the best way to back up the Hyper-V VMs and copy them off-site so that the OSes are at most a month old and the data a day old. The full backups are too large to transfer between backup runs, so the options discussed so far are: take rotating individual backups of the VMs each day and copy them over (day 1 the SQL VM, day 2 the Exchange VM, etc.), which would require more storage; look into Hyper-V snapshots, though I don't believe these are supported in clustering; or third-party replication tools.
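    As one illustration of the rotating per-VM copy idea (my own sketch, not from the post; share names and paths are placeholders), a nightly scheduled robocopy over the VPN might look like:

        :: mirror last night's export of a single VM to the off-site server
        robocopy D:\VMExports\SQL-VM \\offsite-srv\VMBackups\SQL-VM /MIR /Z /R:3 /W:30 /LOG:C:\Logs\sqlvm-copy.log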

    Read the article

  • VirtualBox: merging snapshots and base disk

    - by Henrik
    Hi, I have a virtual machine with about 30 snapshots in branches. The current development path is 22 snapshots plus the base disk. The number of files now seems to be having an impact on IO on the dev laptop I'm using (I don't know whether it's host disk performance with the 140GB total size spread over a lot of fragments, or just the fact that reads hit sectors distributed across a lot of files). I would like to merge the current development branch of snapshots together with the base disk, but I am unsure whether the following command would produce the correct outcome; I am not able to boot the resulting disk after the procedure completes (5-6 hours):

        vboxmanage clonehd "C:\VPC-Storage\.VirtualBox\Machines\CRM\Snapshots\{245b27ac-e658-470a-b978-8e62137c33b1}.vhd" "E:\crm-20100624.vhd" --format VHD --type normal

    Could anyone confirm whether this is the correct approach or not?
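    A hedged alternative worth mentioning (not from the post): deleting snapshots in VirtualBox merges their differencing images into the neighbouring state, so collapsing the development branch one snapshot at a time shortens the chain while preserving the current disk contents. The VM and snapshot names below are placeholders:

        :: list the snapshots for the VM, then delete them one at a time (oldest first)
        VBoxManage snapshot "CRM" list
        VBoxManage snapshot "CRM" delete "dev-step-01"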

    Read the article

  • How can I partition my hard drive more quickly?

    - by Sam
    When I install Windows 7 on my hard drive, it creates three partitions: one with the OS itself, one with bootmgr (100 MiB), and one with the factory image (all the crapware from HP). My goal is to have the OS on a 100 GiB partition and keep the rest (900 GiB) for storage. I thought it would be easy using GParted, but it is taking so long - it will take hours. There must be a way to partition the drive before installing Windows. I think what makes the shrinking/moving of the partitions so slow is that they are not empty (am I wrong?).
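    One hedged way to lay the disk out before the installation copies any data: at the Windows 7 setup screen press Shift+F10 for a command prompt and partition with diskpart, then point the installer at the first partition. The sizes match the question; note this wipes the whole disk, including the factory image:

        diskpart
        select disk 0
        clean
        rem about 100 GiB for the OS
        create partition primary size=102400
        rem everything else for storage
        create partition primary
        exit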

    Read the article

  • Why does Exim put emails on hold if there are frozen messages in the queue?

    - by user51932
    I've got a CentOS server with cPanel working as an SMTP server, which currently uses 20 different hostnames and IP addresses to deliver email for an email newsletter service. However, it's extremely slow at sending email - around 10 emails per minute, which I check by running the "exim -bpc" command. What could be affecting this? One thing I suspect is that frozen messages in the queue are slowing down sending until they're dealt with, and are putting new messages on hold. What are the most common reasons a message gets frozen? Also, would it be more efficient to use 20 small VPSs to send out email rather than one large VPS with the 20 different hostnames and IPs on it?
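    A hedged sketch of the commands commonly used to see and clear frozen messages in an Exim queue (the last one deletes mail, so only run it once you are sure those messages are undeliverable):

        # count frozen messages in the queue
        exiqgrep -z -c
        # list their message IDs
        exiqgrep -z -i
        # remove every frozen message
        exiqgrep -z -i | xargs exim -Mrm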

    Read the article

  • USB Format Error

    - by Dan Finan
    I'm having a real headache trying to reformat a USB drive. Initially it had a 200MB EFI partition, which caused the drive to disappear altogether. Since then I wiped the disk from the command prompt using diskpart; it took a few attempts, but it finally cleaned the drive. It has since reappeared as (E:), however I am unable to access the drive and Windows prevents me from reformatting it - I am just presented with 'Windows was unable to complete the format'. It's now acting like a CD drive instead of removable storage. I've tried going through Disk Management and get the same error. I've also removed the USB controllers from Device Manager; when the drive is connected again it reinstalls the drivers and behaves the same way. Any help will be greatly appreciated, thank you. (Windows 7 machine)
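    Not from the post, but a hedged diskpart sequence that sometimes recovers a stick in this state, run from an elevated prompt. It assumes the stick shows up as disk 1 - confirm with 'list disk' first, because selecting the wrong disk will wipe it:

        diskpart
        list disk
        select disk 1
        attributes disk clear readonly
        clean
        create partition primary
        format fs=fat32 quick
        assign
        exit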

    Read the article

  • MS Access ADP front end and SQL Server back end for field data collection?

    - by Brash Equilibrium
    I am an anthropologist. I am going to the field and will use a netbook to collect survey data. The survey forms will need to allow me to enter data into multiple tables, search tables, allow subforms, and be fast enough not to slow down my interviews. I have considered storing the data in a SQL Server Express 2008 R2 server (there will be a lot of data) while using a Microsoft Access data project as a front end. To cut down the number of steps required to collect and store data, I'm considering using the netbook for both data storage and collection (after reading this article about SQL Server on a netbook). My questions are: (1) Is there a simpler solution that is also gratis (gratis because I already have an MS Access license from my workplace, and SQL Server Express is, obviously, free)? (2) Does my idea to store and collect data using the netbook make sense? Thank you.

    Read the article

  • Sharing / replicating EBS across AWS nodes

    - by skrat
    I would like to use a single EBS volume across multiple EC2 nodes (web/app servers). I've read some articles on snapshot sharing, but that doesn't suit what we need. We use the filesystem for storing DB record attachments, so when an attachment is created we need it to be immediately available to all nodes (to serve). So far only NFS seems viable, but it's a pain to configure and maintain. Another option could be storing the attachments on S3 instead, but that would cut us off from doing any analysis on that data. This must be quite a common problem when scaling in AWS - what solutions are there?
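    For scale, a minimal sketch of the NFS option mentioned above (run as root; the IPs, paths and CIDR range are placeholders; only one node can have the EBS volume attached and it exports the data to the rest):

        # on the node that has the EBS volume mounted at /var/attachments
        echo "/var/attachments 10.0.0.0/24(rw,sync,no_subtree_check)" >> /etc/exports
        exportfs -ra

        # on each web/app node
        mount -t nfs 10.0.0.10:/var/attachments /var/attachments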

    Read the article

  • create symlink to another machine

    - by microchasm
    Hi, I have 2 machines, both running CentOS. Box1 is the web server with Apache and PHP; Box2 holds MySQL and file storage. The files will only be accessible from Box1, within the webapp. I'd like to create a symlink or some such on Box1 pointing to a folder on Box2 where uploaded files can be stored and retrieved. With security in mind, what would be the best way to link these 2 boxes up in a way that is transparent to Apache? NB: the boxes are connected directly to each other via a crossover cable; there is no LAN access to Box2. Many thanks!
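    One hedged possibility (my sketch, not from the post; the hostname, paths and package name are assumptions) is an SSHFS mount over the crossover link, which Apache then sees as an ordinary local directory:

        # on Box1: mount Box2's upload directory over SSH
        sudo yum install fuse-sshfs
        sudo sshfs box2:/srv/uploads /var/www/uploads -o allow_other,reconnect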

    Read the article

  • Can a RAID 0 disk/config be rebuilt?

    - by Rogue
    Recently one of the hard drives in one of my RAID 0 configurations gave an error. What do I do now? I'm hoping that I can replace the faulty disk with a new hard drive and that the RAID can rebuild itself (using Intel Matrix Storage Console). Is this possible? I doubt it. Is there any way I can rebuild the RAID, or have I lost all the data on it? TECH INFO: I have a software RAID on an Intel DG965WH motherboard and the current operating system is Windows.

    Read the article

  • How can I copy files to an external drive and verify their integrity in OS X?

    - by jedavis
    I'm moving large amounts of data from one external drive to another larger one. The files are important and the smaller drives need to be cleared and reused (HD camera). Is there some utility for moving files and verifying their integrity? I've been using this command find . -type f -exec md5 '{}' \; > md5list.txt in the terminal to create a list of MD5s for each file then using diff to compare the two. However, I am moving 320GB at a time, which takes a while by itself. Computing the checksums takes another hour or so. It would be much more efficient to do this on the fly, during the copy. I'm just hoping someone has already written the software...
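    One hedged option using tools already on OS X: copy with rsync, then re-run it in checksum/dry-run mode so any file whose content differs from the source is listed (paths are placeholders; -E is Apple rsync's flag for extended attributes):

        # copy, preserving Mac metadata
        rsync -avE /Volumes/CameraDisk/ /Volumes/BigDisk/footage/
        # verify: checksum comparison, dry run; any file printed did not match
        rsync -rcnv /Volumes/CameraDisk/ /Volumes/BigDisk/footage/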

    Read the article

  • Server 2003 slow share.

    - by G V
    I am running an '03 box with shares active. When uploading to the share, the speed is average - about 15-20 Mbps - which is poor when you consider it is a direct connection between a couple of machines. When uploading to another server, the connection speed is twice that of the direct storage. When uploading a massive folder (250 GB), the upload starts at the normal speed, but as it progresses it drops; now it is sitting at around 2-7 Mbps. Any ideas on how I can boost the transfer rate? On a side note, the download speed is great - the speed you would expect from this setup. The main problem is uploading and whatever is causing the extreme slowness.

    Read the article

  • Resized NTFS partition, now it won't mount.

    - by H4Z3Y
    I had used a 1.5TB drive as an external for 6 months or so, then I decided to put it in my Linux server for network storage. NTFS was being crazily inefficient, so I wanted to change the filesystem to ext4. I used the ntfsresize command to reduce the partition to 650GB, which took about 2 hours; then I deleted all of the entries in fstab like a guide told me to and created a new one the size of the NTFS partition, i.e. 650GB. After I modified fstab, the NTFS partition would no longer mount, and when plugging the drive into Windows it says "This hard drive needs to be formatted". Any ideas on how I can recover the data off the drive? I have 600GB of free space on a different drive, so I just need some way of copying the files off.
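    Not from the post, but a hedged recovery direction: ntfsresize only shrinks the filesystem, and the partition entry has to be recreated to match, so a common next step is to let TestDisk rediscover and rewrite the partition boundary. Work on an image if the data is irreplaceable; /dev/sdb is a placeholder:

        # interactive: Analyse -> Quick Search, then write the rediscovered NTFS partition back
        sudo testdisk /dev/sdb
        # once the partition entry matches again, sanity-check the filesystem before mounting
        sudo ntfsfix /dev/sdb1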

    Read the article

  • Can an S3 mount be used as the document root for Apache?

    - by Hesse
    Has anyone been successful in having their DocumentRoot reside on an S3 mount (using s3fs)? I currently have a bucket mounted at /mnt/s3 and can read and write files to it with no problem. In my httpd.conf I have DocumentRoot "/mnt/s3". When I restart Apache I get the error "DocumentRoot must be a directory". Has anyone tried something similar? My goal is to have a shared storage space so my nodes can scale easily and access the same document root. Thanks
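    A hedged guess at the usual cause: a FUSE mount is only visible to the user who mounted it, so to the apache user /mnt/s3 does not look like a directory. Something along these lines often fixes it (the bucket name is a placeholder):

        # allow non-root users to traverse FUSE mounts
        echo "user_allow_other" >> /etc/fuse.conf
        # remount the bucket so the apache user can see it
        s3fs mybucket /mnt/s3 -o allow_other,use_cache=/tmp/s3fs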

    Read the article

  • df -h command in Ubuntu

    - by Esha Sharma
    I am a new user of Ubuntu. When I type df -h in a terminal, it gives me a list of all storage devices and their space usage. On my system I get this:

        Filesystem      Size  Used Avail Use% Mounted on
        /cow            934M  173M  761M  19% /
        udev            925M  4.0K  925M   1% /dev
        tmpfs           374M  856K  373M   1% /run
        /dev/sdb1       7.5G  2.8G  4.8G  37% /cdrom
        /dev/loop0      1.5G  1.5G     0 100% /rofs
        tmpfs           934M   16K  934M   1% /tmp
        none            5.0M     0  5.0M   0% /run/lock
        none            934M   76K  934M   1% /run/shm
        /dev/sda        299G   74M  299G   1% /media/q

    I understand that /dev/sda is my hard drive, which is 320 GB (299 GiB, and hopefully that is what is being displayed), and /dev/sdb1 is the 8GB pen drive from which I am running the live CD. My question is: what are the other filesystems, and where do they physically live if the whole disk's capacity is already accounted for by /dev/sda?

    Read the article

  • Existing tables with binaries to use filestream

    - by user1098487
    I've got a few tables for which I want to use FILESTREAM storage. These tables already contain binary data and have rowguids. However, at the time they were created, the tables were not added to a FILESTREAM-enabled filegroup. What is the best way to have these tables use FILESTREAM at this point? Do I need to drop and recreate the tables and migrate the data? Is there an easier way? The database already has FILESTREAM enabled, and there are other tables that are using it.

    Read the article

  • Routing traffic to a specific NIC in Windows

    - by Stoicpoet
    I added a 10Gb NIC to a SQL server which is connected to backend storage over iSCSI. I would like to force traffic going to a certain IP address/host to use the 10Gb NIC, while all other traffic continues to use the 1Gb NIC. The 10Gb NIC is configured on a private network. So far I have added an entry in the hosts file for the host I want to reach over the private network, and when I ping the host it does return the private IP, but I'm still seeing traffic go down the 1Gb pipe. How can I force all traffic to this host to use the 10Gb interface? Would the best approach be a static route? 160.205.2.3 is the IP of the 1Gb host; I actually want the traffic to route over an interface assigned 172.31.3.2, which is also defined as interface 22. That said, would this work? route add 160.205.2.3 mask 255.255.255.255 172.31.3.2 if 22
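    A hedged note on that last command: whether it helps depends on the target actually answering on the private segment, and without -p the route will not survive a reboot. Confirming the interface index first is also worthwhile:

        :: confirm the interface index assigned to the 10Gb adapter
        route print
        :: persistent version of the route from the question
        route -p add 160.205.2.3 mask 255.255.255.255 172.31.3.2 if 22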

    Read the article

  • Is 10/100 Ethernet LAN transfer faster than USB 1.0?

    - by dag729
    I have an old laptop (PIII 800MHz with 256MB RAM) that I wish to use as my home server. It will have to serve just two people, so I think I'll be more than OK as far as RAM and CPU go. The issue is data, because the internal hard disk is 12GB, which is ridiculous! I have more than 60GB of mixed storage and counting (images, videos and music) on an external USB hard disk. I could put that drive in my desktop PC just to serve the big files over Ethernet, or leave it in its USB enclosure attached to the laptop. The question is: which of these will be faster, USB 1.0 attached to the server (laptop), or the desktop's disk serving the files to the laptop on demand over 10/100 Ethernet?

    Read the article
