Search Results

Search found 450 results on 18 pages for 'zfs'.

Page 8/18

  • SAS Array with or without expander

    - by tegbains
    Is it better to use a SAS expander backplane for 12 drives via one SAS connection, or is it better to use a SAS backplane with 3 SAS connections? This is in terms of performance rather than expansion. The array will be set up with ZFS on OpenSolaris, exported as an iSCSI target via an LSI SAS controller. The two products being considered are the SuperMicro SuperChassis 826A-R1200LPB and the SuperChassis 826E2-R800LPB.

    Read the article

  • Best Filesystem to use for Desktop Linux?

    - by contagious
    I'm going to be building a fancy new desktop soon, and I want to experiment with file systems. I know that ext3 is the most common for Linux, but what about ext4, or ZFS? Are there any pros or cons to certain ones? I won't be doing anything spectacularly off the wall, just using it as my main box. There's a good possibility that it will double as my web server, though.

    Read the article

  • How do I integrate an OpenSolaris NAS with AD?

    - by Neo
    I basically want an OpenSolaris NAS (for the ZFS goodies), but I'd like to integrate it with AD, so that when I create a new user in AD, his roaming profile is created on the NAS. That means all his ACLs have to work (I know they're compatible), etc. The tutorials I found don't actually work, so any help would be much appreciated.
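
    For reference, a minimal sketch of the usual approach with the in-kernel CIFS server; the domain name example.local and the dataset tank/profiles are placeholders, not anything from the question:

      # enable the SMB server and the services it depends on
      svcadm enable -r smb/server
      # join the Active Directory domain (prompts for the AD password)
      smbadm join -u Administrator example.local
      # share the profile dataset over SMB under the name "profiles"
      zfs set sharesmb=name=profiles tank/profiles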

    Read the article

  • Nexenta storage metro cluster - what are components involved?

    - by Jiri Xichtkniha
    I'm quite impressed that Nexenta can build a storage metro cluster (site-to-site storage mirroring). As Nexenta is built on Illumos (the successor of OpenSolaris), I was wondering what kind of components are involved in their storage metro cluster. Could anybody enlighten me as to which components do this site-to-site mirroring, and whether those components are open source, so one could build a similar storage metro cluster on one's own? ZFS is a local filesystem, so what takes care of the clustering?

    Read the article

  • Best filesystem choices for NFS storing VMware disk images

    - by mlambie
    Currently we use an iSCSI SAN as storage for several VMware ESXi servers. I am investigating the use of an NFS target on a Linux server for additional virtual machines. I am also open to the idea of using an alternative operating system (like OpenSolaris) if it will provide significant advantages. What Linux-based filesystem favours very large contiguous files (like VMware's disk images)? Alternatively, how have people found ZFS on OpenSolaris for this kind of workload?

    Read the article

  • Writing files directly in zpool

    - by Phliplip
    Hi, we're on FreeBSD 8, and I created a zpool of three drives:

      # zpool create mypool da1 da2 da3

    Now my question is: can I begin saving files to this? We're talking 1 TB of pictures (photography). Or is it required, or at least safest, to create a ZFS filesystem on it first?
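
    For what it's worth, a pool can be written to directly through its root dataset, but a common pattern is a child dataset per use; a minimal sketch, where the dataset name "photos" is an assumption:

      # create a dedicated dataset for the pictures and give it a mountpoint
      zfs create mypool/photos
      zfs set mountpoint=/photos mypool/photos
      # optional: a large photo archive rarely needs access-time updates
      zfs set atime=off mypool/photos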

    Read the article

  • Technical details for Server 2012 de-duplication feature

    - by syneticon-dj
    Now that Windows Server 2012 comes with a de-duplication feature for NTFS volumes, I am having a hard time finding technical details about it. I can deduce from the TechNet documentation that the de-duplication action itself is an asynchronous process - not unlike how the SIS Groveler used to work - but there is virtually no detail about the implementation (algorithms used, resources needed; even the info on performance considerations is nothing but a bunch of rule-of-thumb-style recommendations). Insights and pointers are greatly appreciated; a comparison to Solaris' ZFS de-duplication efficiency for a set of scenarios would be wonderful.
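
    On the ZFS side of such a comparison, dedup efficiency can be estimated before committing to it; a sketch, with the pool name "tank" assumed:

      # simulate deduplication of the existing data and print the projected
      # dedup table and space-saving ratio (read-only, but can take a while)
      zdb -S tank
      # enable dedup for new writes on a given dataset
      zfs set dedup=on tank/data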

    Read the article

  • Ubuntu 12.04 transmission-daemon and zfsonlinux: bad file descriptor and corrupt pieces

    - by Ivailo Karamanolev
    I'm running Ubuntu 12.04 with zfsonlinux and transmission-daemon. The issues: sporadic "Bad file descriptor" and "Piece #xxx is corrupt" errors. After I recheck the torrent, everything seems fine. This happens only while downloading, not once the torrent is in seeding mode, and only after the client has been running for some time. I installed zfsonlinux from the official stable PPA (https://launchpad.net/~zfs-native/+archive/stable). I previously tried running transmission-daemon from the Ubuntu repository, but since then I've switched to building the latest transmission from source with the latest libevent (all stable) - same thing. I've seen bug reports (https://trac.transmissionbt.com/ticket/4147) for this issue, but none of them seem to have a solution. How can I fix these errors, or at least understand where they come from and what I can do to rectify the issue?

    Read the article

  • How to build a NAS?

    - by Walter White
    Hi all, I have quite a few photos I'd like to organize, and I want to get away from sparse DVDs and move to a NAS solution. Ideally, this would give me some level of redundancy and make it easier to find what I'm looking for. That being said, hard drives are relatively cheap. My next question is: I would like to run ZFS on the drives, with the ability to add or remove drives for additional redundancy, or possibly to change the configuration of the drives. Is there a NAS box that lets you run your OS of choice (FreeNAS), so that all I'd need to do is get the hard drives and the NAS box and replace its firmware/OS with FreeNAS? Walter

    Read the article

  • iSCSI: LUNs per target?

    - by badnews
    My question relates specifically to ZFS/COMSTAR, but I assume it is generally applicable to any iSCSI system: should one prefer to create a target for every LUN that you want to expose, or is it good practice to have a single target with multiple LUNs? Does either approach have a performance impact? And is there some crossover point where the other approach makes sense? The use case is VM disks, where each disk (zvol) is a LUN. So far we have created a separate target for each VM, but a single target that contains all the LUNs would probably greatly simplify management... though we may need hundreds of LUNs per single target. (And then possibly tens of initiator connections to that target.)
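
    For context, a hedged COMSTAR sketch of the single-target layout; the pool and zvol names are made up, and the LU GUIDs come from stmfadm list-lu:

      # one target for the initiators to log in to
      itadm create-target
      # register each VM's zvol as a logical unit
      sbdadm create-lu /dev/zvol/rdsk/tank/vm1
      sbdadm create-lu /dev/zvol/rdsk/tank/vm2
      # list the GUIDs, then make the LUs visible; note that without
      # target/host groups this default view exposes each LU to everything
      stmfadm list-lu
      stmfadm add-view <GUID-of-vm1>
      stmfadm add-view <GUID-of-vm2>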

    Read the article

  • Virtualizing OpenSolaris with physical disks

    - by Fionna Davids
    I currently have an OpenSolaris installation with a ~1 TB RAID-Z volume made up of three 500 GB hard drives. This is on commodity hardware (an ASUS NVIDIA-based board with an Intel Core 2). I'm wondering whether anyone knows if XenServer or Oracle VM can be used to install 2009.06 and give it physical access to the three SATA drives, so that I can continue to use the zpool and be able to use the Xen bits for other areas. I'm thinking of installing the JeOS version of OpenSolaris, having it manage just my ZFS volume and some other stuff for work (4 GB), then having a Windows (2 GB) and a Linux (1 GB) VM virtualised for testing things (there's 8 GB of RAM on that box). Currently I am using VirtualBox installed on OpenSolaris for the Windows and Linux testing, but wondered if the above was a better alternative. Essentially: 3 disks - an OpenSolaris guest VM, which loads the zpool and offers it to the other VMs via CIFS.
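
    On plain Xen (and on Oracle VM, whose vm.cfg uses the same disk syntax), handing raw disks to the storage guest is a config-file matter; a hypothetical fragment, with the /dev paths as assumptions for this box:

      # pass the three SATA drives through to the OpenSolaris domU,
      # writable, so the guest can import the existing zpool
      disk = ['phy:/dev/sdb,xvdb,w',
              'phy:/dev/sdc,xvdc,w',
              'phy:/dev/sdd,xvdd,w']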

    Read the article

  • Older RAID controllers in RAID5 vs. JBOD and SW RAID

    - by TEB
    Hi. I'm in the fortunate position of having 6 older Supermicro VOD servers with the following config: Supermicro 3U case with 3x PSU; dual Xeon 3 GHz P4-class CPUs (5 years old... haven't checked the exact type); 4 GB RAM; 3ware 9500-8 SATA controller; 8 SATA slots and a lot of free drives; 2 GB flash boot drive. What I'm curious about is the RAID5 performance on these old beasts in HW mode vs. SW RAID on Linux with the controller set to JBOD mode. I'm thinking of using CentOS 5.5, Ubuntu, or ZFS RAID-Z on OpenSolaris. Any tips or recommendations? Best regards, TEB

    Read the article

  • How best to integrate PPA into Debian?

    - by eicos
    I'm working with ZFS on Linux, on my Debian Squeeze server. I've found a useful package in an Ubuntu PPA, apparently by one of the ZoL developers, and I would like to integrate it into my package system. However, I am really having a terrible time doing this. It seems like it would be possible if I upgraded my system to the testing branch, but I'd prefer not to do that for obvious reasons. So, what is the One True Way to do this? Or, what is a passable way to do this, i.e. one that does not involve an ice-nine-like assimilation of my entire system into the testing branch? Edit: Silly question. I clicked the little green "technical information about this package" link on Launchpad and all was revealed.
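
    For anyone landing here later, the usual hedged recipe is to add the PPA's repository line and signing key by hand; the Ubuntu release tag "lucid" and the <KEY-ID> placeholder below are assumptions, and Launchpad's "technical details" page shows the real values:

      # add the PPA as a plain apt source, picking the Ubuntu release
      # closest to your Debian version
      echo "deb http://ppa.launchpad.net/zfs-native/stable/ubuntu lucid main" \
          | sudo tee /etc/apt/sources.list.d/zfs-native.list
      # import the PPA's signing key, then refresh the package lists
      sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <KEY-ID>
      sudo apt-get update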

    Read the article

  • Access MacZFS via network from XBMC

    - by AreusAstarte
    I have a ZFS RAID (a zpool with three drives) hooked up to my Mac that I want to share on my LAN, so that the XBMC client on the OUYA console attached to my television can read the drive and stream my movies and TV shows. I've searched around for a bit but so far haven't found anything that helped. I know that when connecting to the Mac with SSH I can't just access the drive, due to the different formatting. What do I have to do so that XBMC will be able to read it? How do I share it?
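
    XBMC speaks NFS natively, so one hedged route is OS X's built-in NFS server; a sketch, where the mountpoint /Volumes/tank and the subnet are assumptions:

      # export the pool's mountpoint read-only to the local subnet
      echo "/Volumes/tank -ro -network 192.168.1.0 -mask 255.255.255.0" \
          | sudo tee -a /etc/exports
      # start (or restart) the NFS server and check the export list
      sudo nfsd enable
      showmount -e localhost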

    Read the article

  • Limit the amount of data that can be stored in a folder on Ubuntu Server 12.04?

    - by dougoftheabaci
    I'm in the process of building my first server. It's up, it's running, and I'm transferring copious amounts of data away from my horrid little Drobo (DO NOT BUY ONE OF THESE, EVER). However, there's one thing I have yet to do: I'd like to set it up for Time Machine backups as well. I've seen all the guides and I have some idea of how to set the whole thing up, but the issue is that Time Machine will fill up as much space as you let it. So if I let it loose in my 8 TB zpool, it'll slowly consume every last available sector. This, of course, is not acceptable. I have a folder at the root of my zpool called "ZFS Time Machine" and I would like to limit it to 1 TB (all I need for backup purposes). However, I have no idea how to do that. Is this possible? I can continue using a small external hard drive attached via FW800 if I have to, but I'd much rather put everything on my server.
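
    A minimal sketch of the usual ZFS answer, assuming the pool is named "tank": quotas are per-dataset, so the Time Machine area wants to be its own dataset rather than a plain folder.

      # create a dedicated dataset and cap it at 1 TB
      sudo zfs create tank/timemachine
      sudo zfs set quota=1T tank/timemachine
      # verify the quota took effect
      zfs get quota tank/timemachine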

    Read the article

  • Formula to calculate probability of unrecoverable read error during RAID rebuild

    - by OlafM
    I need to compare the reliability of different RAID systems with either consumer or enterprise drives. The formula for the probability of hitting an unrecoverable read error (URE) during a rebuild, ignoring mechanical problems, is simple:

      error_probability = 1 - (1 - per_bit_error_rate)^bits_read

    With 3 TB drives I get:

      38% probability of experiencing an URE for a 2+1-disk RAID5 (4.7% for enterprise drives)
      21% for a RAID1 (2.4% for enterprise drives)
      51% probability of error during recovery for the 3+1 RAID5 often used in SOHO products like Synology's

    Most people don't know about this. Calculating the error for single-disk tolerance is easy; my question concerns systems tolerant of multiple disk failures (RAID6/Z2, RAIDZ3, and RAID1 with multiple disks). If only the first disk is used for the rebuild and the second one is read again from the beginning in case of an URE, then the error probability is the one calculated above, squared (14.5% for consumer RAID5 2+1, 4.5% for consumer RAID1 1+2). However, I suppose (at least in ZFS, which has full checksums!) that the second parity/available disk is read only where needed, meaning that only a few sectors are needed: how many UREs can possibly happen on the first disk? Not many, otherwise the error probability for single-disk-tolerance systems would skyrocket even more than I calculated. If I'm correct, a second parity disk would practically lower the risk to extremely low values. Am I correct?
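
    As a sanity check, the single-fault formula above is easy to evaluate; a small sketch using awk for the floating-point math (the 10^-14 consumer URE rate is the usual spec-sheet figure):

      # P(>=1 URE) = 1 - (1 - BER)^bits_read, for a RAID5 2+1 rebuild
      # that must read the 2 surviving 3 TB disks end to end
      awk 'BEGIN {
          ber  = 1e-14            # consumer drives: 1 URE per 1e14 bits read
          bits = 2 * 3e12 * 8     # 2 disks x 3 TB x 8 bits per byte
          printf "URE probability: %.0f%%\n", (1 - (1 - ber)^bits) * 100
      }'                          # prints 38%, matching the figure above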

    Read the article

  • Raid-z inaccessible after putting one disk offline

    - by varesa
    I have installed FreeNAS on a test server, with 3x 1 TB drives. They are set up in raidz. I tried to offline one of the disks (from the FreeNAS web UI), and the array became degraded, as I think it should. The problem is the array becoming inaccessible after that. I thought a raid like that should be able to run fine with one of the disks missing. At least, very soon after I offline'd and pulled out the disk, the iSCSI share disappeared from an ESXi host's datastores. I also ssh'd into the FreeNAS server and tried just executing ls /mnt/raid (/mnt/raid being the mount point). The whole terminal froze, not accepting ^C or anything.

      # zpool status -v
        pool: raid
       state: DEGRADED
      status: One or more devices are faulted in response to IO failures.
      action: Make sure the affected devices are connected, then run 'zpool clear'.
         see: http://www.sun.com/msg/ZFS-8000-HC
       scrub: none requested
      config:

          NAME                                            STATE     READ WRITE CKSUM
          raid                                            DEGRADED      1    30     0
            raidz1                                        DEGRADED      4    56     0
              gptid/c8c9e44c-08e1-11e2-9ba6-001b212a83ea  ONLINE        3    60     0
              gptid/c96f32d5-08e1-11e2-9ba6-001b212a83ea  ONLINE        3    63     0
              gptid/ca208205-08e1-11e2-9ba6-001b212a83ea  OFFLINE       0     0     0

      errors: Permanent errors have been detected in the following files:
              /mnt/raid/
              raid/iscsivol:<0x0>
              raid/iscsivol:<0x1>

    Have I understood the workings of a raidz wrong, or is there something else going on? It would not be nice to have the same thing happen on a production system...
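
    Worth noting: the nonzero READ/WRITE counters on the two remaining ONLINE disks mean the pool was seeing I/O errors beyond the deliberate offline. For recovering the test pool once the disk is physically back, a hedged sketch:

      # bring the offlined disk back and clear the error counters
      zpool online raid gptid/ca208205-08e1-11e2-9ba6-001b212a83ea
      zpool clear raid
      # then verify the data end to end
      zpool scrub raid
      zpool status -v raid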

    Read the article

  • Zpool disk failure - Where am I at?

    - by JT.WK
    After checking the status of one of my zpools today, I was faced with the following:

      root@server: zpool status -v myPool
        pool: myPool
       state: ONLINE
      status: One or more devices has experienced an unrecoverable error. An
              attempt was made to correct the error. Applications are unaffected.
      action: Determine if the device needs to be replaced, and clear the errors
              using 'zpool clear' or replace the device with 'zpool replace'.
         see: http://www.sun.com/msg/ZFS-8000-9P
       scrub: resilver completed after 3h6m with 0 errors on Tue Sep 28 11:15:11 2010
      config:

          NAME           STATE     READ WRITE CKSUM
          myPool         ONLINE       0     0     0
            raidz1       ONLINE       0     0     0
              c6t7d0     ONLINE       0     0     0
              c6t8d0     ONLINE       0     0     0
              spare      ONLINE       0     0     0
                c6t9d0   ONLINE      54     0     0
                c6t36d0  ONLINE       0     0     0
              c6t10d0    ONLINE       0     0     0
              c6t11d0    ONLINE       0     0     0
              c6t12d0    ONLINE       0     0     0
          spares
            c6t36d0      INUSE     currently in use
            c6t37d0      AVAIL
            c6t38d0      AVAIL

      errors: No known data errors

    From what I can see, c6t9d0 has encountered 54 read errors. It seems as though it has automatically resilvered onto the spare disk c6t36d0, which is now currently in use. My question is: where exactly am I at? Yes, the 'action' tells me to determine whether or not the disk needs replacing, but is this disk currently still in use? Can I replace/remove it? Any explanation would be much appreciated, as I'm quite new to this stuff :)

    Update: after following the advice from C10k Consulting, i.e. detaching:

      zpool detach myPool c6t9d0

    and adding it as a spare:

      zpool add myPool spare c6t9d0

    it appears as though all is well. The new status of my zpool is:

      root@server: zpool status -v myPool
        pool: myPool
       state: ONLINE
       scrub: resilver completed after 3h6m with 0 errors on Tue Sep 28 11:15:11 2010
      config:

          NAME         STATE     READ WRITE CKSUM
          myPool       ONLINE       0     0     0
            raidz1     ONLINE       0     0     0
              c6t7d0   ONLINE       0     0     0
              c6t8d0   ONLINE       0     0     0
              c6t36d0  ONLINE       0     0     0
              c6t10d0  ONLINE       0     0     0
              c6t11d0  ONLINE       0     0     0
              c6t12d0  ONLINE       0     0     0
          spares
            c6t37d0    AVAIL
            c6t38d0    AVAIL
            c6t9d0     AVAIL

      errors: No known data errors

    Thanks for your help, C10k Consulting :)

    Read the article

  • Solaris 10: How to remove devices from a zpool with /usr currently mounted?

    - by cali-spc
    I use Solaris 10 on SPARC. I have /usr legacy-mounted on a zpool 'usr-pool'. I now need to move some of the devices in usr-pool to another zpool, which is running out of room. What is the safest way for me to do this? I already know that (since my zpool is not mirrored) I need to destroy and recreate the zpool. I know how to back up and restore a ZFS snapshot. However... I'm stumped on how to unmount usr-pool without losing access to the commands I need on /usr to complete the backup/restore. Cursory research indicated that I should boot to OpenBoot (init 0) and then 'boot cdrom -s'. I did this, but none of the zpools are accessible at that runlevel. I also read I could just copy /usr to another location, symlink /usr to that location, then do my backup/restore. Is that safe to do? I would appreciate some guidance. S.

    Read the article

  • Why is MySQL unable to open hosts.allow/hosts.deny?

    - by HonoredMule
    I have a storage server running Nexenta (OpenSolaris kernel, Ubuntu userspace) with MySQL on top of a ZFS storage array, using innodb_file_per_table and ulimit -n set to 8K. mysqltuner.pl confirms the file limit and claims there are 169 files. The following command: pfiles `fuser -c / 2>/dev/null` indicates one mysqld process having 485 file/device descriptors (and they're almost all for files), so I don't know how reliable the tuning script is, but that is still way less than 8K, and this list also finds no other process which is close to its limit. The global total number of descriptors in use is around 1K. So what can cause mysqld to be constantly streaming the following errors?

      [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.allow: Too many open files
      [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.deny: Too many open files

    Everything appears to actually be operating fine, but the issue is constantly flooding the admin console and starts right away on a fresh boot (not only reproducible, but always from mysqld and always the hosts files, whose permissions are the default -rw-r--r-- 1 root root). I could, of course, suppress it from the admin console, but I'd rather get to the bottom of it and still allow mysqld warnings/errors to reach the admin console. EDIT: not only is the actual file descriptor count well within sane limits, the issue also persists (with immediate appearance) even with the file limit raised to 65535, and always only on hosts.allow/deny.
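
    One hedged check worth adding: the limit that matters is the one the running mysqld process actually inherited, which can differ from the shell's ulimit. On an OpenSolaris kernel that can be read directly (the pgrep pattern is an assumption):

      # show the live resource limits of the mysqld process itself
      plimit $(pgrep -x mysqld)
      # compare against its actual open descriptors, one "S_IF..." line each
      pfiles $(pgrep -x mysqld) | grep -c ': S_'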

    Read the article

  • Nexenta, NFS and LOCK_EX

    - by Givre
    I'm currently using a LAMP architecture and I've hit a big problem :( I have several HTTP web servers using PHP5, all of which mount the directory for all the hosted websites via NFS (v3). The file server is running the Nexenta Storage Appliance, using ZFS. The problem: every NFS client that tries to write to a file over NFS gets stuck like this, inside the apache2 process:

      open("/nfs/website1/file.txt", O_RDWR|O_CREAT, 0600) = 11647
      fstat(11647, {st_mode=S_IFREG|0600, st_size=23754, ...}) = 0
      flock(11647, LOCK_EX

    The process never gets the lock and keeps waiting for it... forever. The effect? All the apache2 processes get used up and sit waiting; my servers can't process the other requests because there are no more processes available. I don't know where to find a solution; to me it looks like the NFS server side, but which configuration is wrong or missing? How can I find what is wrong? If you need more information about the configuration, just ask me what would help you most :)
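
    Since flock() over NFSv3 is serviced by the separate NLM/statd lock services rather than by the NFS protocol itself, one hedged first check is whether those services are healthy on both ends (service names as on Solaris-family systems):

      # on the Nexenta server: lock manager and status monitor must be online
      svcs -xv svc:/network/nfs/nlockmgr svc:/network/nfs/status
      # on a Linux client: make sure the share was not mounted with -o nolock
      grep nfs /proc/mounts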

    Read the article

  • Debian on HP ProLiant server hangs (disk i/o is my guess)

    - by Martin
    I installed Debian (2.6.32-5-amd64) on my HP ProLiant MicroServer (purchased recently). I also added three 2 TB hard drives in ZFS. The server has frozen several times. Sometimes it showed "soft lockup - CPU stuck for 61s!". Today I experienced a different problem (I think), and the message looked like this:

      [431336.200002] Call Trace:
      [431336.200002]  [<ffffffff812fcc7c>] ? _write_lock+0xe/0xf
      [431336.200002]  [<ffffffff810d7a86>] ? __vmalloc_node+0x99/0xe2
      :
      :

    and (on a different screen):

      [431354.222318] Node 0 DMA32 free:2064kB min:5520kB low:69900kB high:8280kB
          active_anon:181648kB inactive_anon:61728kB active_file:313152kB
          inactive_file:832456kB unevictable:0kB isolated(anon):0kB isolated(file):0kB
          present:1922596kB mlocked:0kB dirty:72kB writeback:0kB mapped:25620kB
          shmem:344kB slab_reclaimable:34460kB slab_unreclaimable:31400kB
          kernel_stack:2288kB pagetables:7556kB unstable:0kB bounce:0kB
          writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
      [431354.222431] lowmem_reserve[]: 0 0 0 0
      :
      :

    Is this a hardware problem? What tools/methods can I use to find the source of the problem? I've used Debian for years but never had a problem like this.
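
    The __vmalloc_node frame together with the exhausted DMA32 zone points toward memory pressure rather than failing hardware. A hedged way to catch the next hang in the act, assuming the magic SysRq key is enabled (kernel.sysrq=1):

      # dump the stacks of all blocked (D-state) tasks into the kernel log;
      # tasks stuck in disk I/O or ZFS allocation paths will show up there
      echo w > /proc/sysrq-trigger
      dmesg | tail -n 50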

    Read the article

  • Multi-petabyte scale-out storage solution [closed]

    - by Alex Yuriev
    Let's say that I have a need for a single-namespace, multi-petabyte object store with a filesystem-like wrapper. What is currently out there that supports the following:

      a single namespace that can take 1B files
      support for multiple entry points using NFS
      at least node-level replication (preferably node- and file-level replication)
      online software upgrades
      no "magic sauce" on the storage layer

    The following have been evaluated: Gluster & Lustre - just ick - a fundamental lack of understanding of why online upgrades are mandatory. OneFS - we have it; it is smelling more and more like it hides a dead body under the hood. Other than MapR and ZFS, am I missing anything? P.S. Oh yes, I keep forgetting that the forums are for people to discuss whether a 2TB drive actually stores 2TB of info. My bad. Seriously though - how the heck can "meets the following requirements" be considered a "debate"? P.P.S. I did not throw an idiotic insult - I pointed out that this is actually an interesting question compared to a conversation about the storage capacity of a 2TB hard drive. It is not a question of what works better - it is a question asking whether I missed any of the products that currently exist which fit the criteria, where the criteria are clearly outlined. I got one answer below which included something that I have not looked at in a long time, and which looks quite a bit more grown up than when I briefly looked at it before.

    Read the article

  • Do your filesystems have un-owned files?

    - by darrenm
    As part of our work for integrated compliance reporting in Solaris, we plan to provide a check for determining if the system has "un-owned files", ie those which are owned by a uid that does not exist in our configured nameservice. Tests such as this already exist in the Solaris CIS Benchmark (9.24 Find Un-owned Files and Directories) and other security benchmarks. The obvious method of doing this would be using find(1) with the -nouser flag. However, that requires we bring into memory the metadata for every single file and directory in every local file system we have mounted. That is probably not an acceptable thing to do on a production system that has a large amount of storage, and it is potentially going to take a long time.

    Just as I went to bed last night, an idea for a much faster way of listing file systems that have un-owned files came to me. I've now implemented it, and I'm happy to report it works very well and performs many orders of magnitude better than using find(1) ever will. ZFS (since pool version 15) has per-user space accounting and quotas. We can report very quickly, and without actually reading any files at all, how much space any given user id is using on a ZFS filesystem. Using that information we can implement a check to very quickly list which filesystems contain un-owned files.

    First a few caveats, because the output data won't be exactly the same as what you get with find, but it answers the same basic question. This only works for ZFS, and it will only tell you which filesystems have files owned by unknown users, not the actual files. If you really want to know what the files are (ie to give them an owner) you still have to run find(1). However, it has the huge advantage that it doesn't use find(1), so it won't be dragging the metadata for every single file and directory on the system into memory. It also has the advantage that it can check filesystems that are not currently mounted (which find(1) can't do). It ran in about 4 seconds on a system with 300 ZFS datasets from 2 pools totalling about 3.2T of allocated space, and that includes the uid lookups and output.

      #!/bin/sh
      for fs in $(zfs list -H -o name -t filesystem -r rpool) ; do
          unknowns=""
          for uid in $(zfs userspace -Hipn -o name,used $fs | cut -f1); do
              if [ -z "$(getent passwd $uid)" ]; then
                  unknowns="$unknowns$uid "
              fi
          done
          if [ ! -z "$unknowns" ]; then
              mountpoint=$(zfs list -H -o mountpoint $fs)
              mounted=$(zfs list -H -o mounted $fs)
              echo "ZFS File system $fs mounted ($mounted) on $mountpoint \c"
              echo "has files owned by unknown user ids: $unknowns"
          fi
      done

    Sample output:

      ZFS File system rpool/ROOT/solaris-30/var mounted (no) on /var has files owned by unknown user ids: 6435 33667 101
      ZFS File system rpool/ROOT/solaris-32/var mounted (yes) on /var has files owned by unknown user ids: 6435 33667
      ZFS File system builds/bob mounted (yes) on /builds/bob has files owned by unknown user ids: 101

    Note that the above might not actually appear exactly like that in any future Solaris product or feature; it is provided just as an example of what you can do with ZFS user space accounting to answer questions like the above.

    Read the article
