Search Results

Search found 35 results on 2 pages for 'nexenta'.

Page 2 of 2

  • NexentaStor Community Edition Released

    NexentaStor: The NexentaStor project has released Community Edition v3.0 of its free storage appliance. The distribution is based on the Nexenta Core Platform, which combines the OpenSolaris kernel with an Ubuntu userland.

    Read the article

  • How to get physical partition name from iSCSI details on Windows?

    - by Barry Kelly
    I've got a piece of software that needs the name of a partition in \Device\Harddisk2\Partition1 style, as shown e.g. in WinObj. I want to derive this partition name from the details of the iSCSI connection that underlies the partition. The trouble is that disk order is not fixed - depending on which devices are connected and initialized in which order, it can move around. So suppose I have the portal name (DNS name of the iSCSI target), the target IQN, etc. I'd like to discover, in an automated fashion, which volumes in the system relate to that target. I can write some PowerShell WMI queries that get somewhat close to the desired info:

      PS> get-wmiobject -class Win32_DiskPartition
      NumberOfBlocks   : 204800
      BootPartition    : True
      Name             : Disk #0, Partition #0
      PrimaryPartition : True
      Size             : 104857600
      Index            : 0
      ...

    From the Name here, I think I can fabricate the corresponding name by adding 1 to the partition number: \Device\Harddisk0\Partition1 - Partition0 appears to be a fake partition mapping to the whole disk. But the above doesn't have enough information to map to the underlying physical device, unless I take a guess based on exact size matching. I can get some info on SCSI devices, but it's not helpful in joining things up (the iSCSI target is Nexenta/Solaris COMSTAR):

      PS> get-wmiobject -class Win32_SCSIControllerDevice
      __GENUS    : 2
      __CLASS    : Win32_SCSIControllerDevice
      ...
      Antecedent : \\COBRA\root\cimv2:Win32_SCSIController.DeviceID="ROOT\\ISCSIPRT\\0000"
      Dependent  : \\COBRA\root\cimv2:Win32_PnPEntity.DeviceID="SCSI\\DISK&VEN_NEXENTA&PROD_COMSTAR...

    Similarly, I can run queries like these:

      PS> get-wmiobject -namespace ROOT\WMI -class MSiSCSIInitiator_TargetClass
      PS> get-wmiobject -namespace ROOT\WMI -class MSiSCSIInitiator_PersistentDevices

    These return information relating to my iSCSI target name and the GUID volume name respectively (a volume name like \\?\Volume{guid-goes-here}), but the GUID volume name is no good to me, and there doesn't appear to be a reliable correspondence between the target name and the volume that I could join on. I simply can't find an easy way of getting from an IQN (e.g. iqn.1992-01.com.example:storage:diskarrays-sn-a8675309) to the physical partitions mapped from that target. The way I do it by hand? I start Disk Management, look for a partition of the correct size, verify that its driver says NEXENTA COMSTAR, and note the disk number. But even this is unreliable if I have multiple iSCSI volumes of exactly the same size. Any suggestions?

    Read the article

  • Backup Exec backup-to-disk folder creation - Access denied

    - by ewwhite
    I'm having a difficult time creating a backup-to-disk folder in Symantec Backup Exec 12.5 and Backup Exec 2010. The backend storage is a Nexenta/ZFS-based NAS filer sharing the volume via CIFS. I've also seen the issue on other *nix-based NAS devices. I've attempted mapping the drive, providing the full path to the folder, etc. I can browse to the share just fine from within Windows, but Backup Exec fails to create the B2D folder, giving different variants of an "Unable to create new backup folder. Access denied." error. I've attempted creating service accounts in Backup Exec to handle the authentication, but nothing seems to work. What's the key to making this work?
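    Not from the question itself, but one common culprit on ZFS/CIFS filers is the server-side ACL: the Windows account Backup Exec authenticates with can browse the share but has no inheritable create/write rights on the dataset. A hedged sketch of what addressing that can look like on a Solaris/Nexenta-style filer (the dataset, share, and account names below are invented for illustration):

      # Share the dataset over the in-kernel CIFS server under a share name:
      zfs set sharesmb=name=b2d tank/backupexec

      # Map the Windows account that Backup Exec logs on with to a local UNIX user:
      idmap add winuser:besvc@EXAMPLE unixuser:besvc

      # Grant that account full control, inheritable to new files and directories,
      # so Backup Exec can create its B2D folder and .bkf files:
      chmod -R A+user:besvc:full_set:fd:allow /tank/backupexec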

    Read the article

  • ZFS, dedupe and PST files

    - by Unreason
    I am interested to know what the expected maximum dedupe ratio would be for a set of PST files. I have ~40 GB of PST files from ~15 users, with a high level of duplication of attachments. I am running tests to see if I can get significant space savings by storing the data on ZFS with dedupe enabled. For this purpose I have installed a test setup of Nexenta, but I was wondering if someone here has already done this and what level of deduplication I might expect (or, in other words, how sensitive are PST files to block alignment, and what parameters can influence the ratio?). Initial tests show a very low dedupe ratio, and I did find an explanation that block-level dedupe would not be efficient here and that byte-level dedupe would be much better (and that it should be performed by an application that is aware of the files' internal organization), so I am just double-checking here in case someone has more input. Otherwise I will probably be converting the PST files to IMAP.
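    For reference, a minimal sketch of how such a test can be measured on the Nexenta box (pool and dataset names here are hypothetical; whether a smaller recordsize actually helps PST block alignment is exactly the open question above):

      # Create a test dataset with dedup on and a smaller recordsize,
      # since PST-internal structures rarely line up on 128K blocks:
      zfs create -o dedup=on -o recordsize=8K tank/psttest

      # ...copy the PST files in, then check the achieved ratio:
      zpool get dedupratio tank

      # Detailed dedup-table histogram (how many blocks are referenced once, twice, etc.):
      zdb -DD tank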

    Read the article

  • UID /GID with NFS and ZFS

    - by profy
    Hi, I have a server with a ZFS file system (Nexenta Core), and I'm sharing files over NFS with zfs share share_nfs. When I mount the file system on my client (an Ubuntu workstation) I can't see the original UID/GID :( I mount it on the client with the following fstab entry:

      192.168.1.4:/home /media/testnfs nfs rw,dev,noexec,nosuid,auto,nouser,noatime,rsize=8192,wsize=8192 0 0

    If I configure idmapd I get nobody:nogroup, and without idmapd I get 4294967294:4294967294. How can I get the original IDs? Is this a problem with the NFS server or the client? Thanks for any answers.
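    Those symptoms are typical of NFSv4 ID mapping with mismatched domains. A hedged sketch of two things one might try (the domain name example.lan is a placeholder, not from the question):

      # Option 1: force NFSv3 on the Ubuntu client, which passes raw numeric
      # UIDs/GIDs instead of NFSv4 name@domain strings:
      mount -t nfs -o vers=3,rw,noatime 192.168.1.4:/home /media/testnfs

      # Option 2: keep NFSv4 but make the idmap domain identical on both ends.
      # On the Ubuntu client set "Domain = example.lan" in /etc/idmapd.conf and
      # restart idmapd; on the Nexenta/Solaris server:
      sharectl set -p nfsmapid_domain=example.lan nfs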

    Read the article

  • zfs setup question

    - by Staale
    Currently I have a Linux storage box and server with 4x750GB hard drives in RAID-5 with ext3. I have ordered 3x1.5TB disks to upgrade this. Here is my planned upgrade.

    Backup:
    - Format the 1.5TB disks.
    - Copy all data from the RAID-5 disks to the 1.5TB disks.
    - Destroy the RAID-5 array.

    New setup:
    - Create a VirtualBox system and install Nexenta (OpenSolaris kernel + Ubuntu userland) on it.
    - Create a ZFS pool as raidz1 with the four 750GB disks.
    - Copy from the 1.5TB disks to the VirtualBox ZFS pool.
    - Format the 1.5TB disks.
    - Replace three of the 750GB disks with 1.5TB disks; reuse the 750GB disks elsewhere.

    The reason I wish to keep one 750GB disk is that I can't grow the disk count in a raidz array, and this gives me the option of replacing that disk later for an extra 750GB of storage. Would ZFS performance be good running through VirtualBox, or will the overhead be too large? And will I get 1.5TB+1.5TB+750GB of storage on the raidz, or just 750GBx3 until all disks are 1.5TB?
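    On the capacity question: a raidz vdev sizes every member at the smallest disk, so usable space stays at roughly 3x750GB until all four members have been replaced with 1.5TB drives. A rough sketch of the pool lifecycle being described (device names are invented, and the autoexpand property assumes a recent enough pool version):

      # Create the 4-disk raidz1 pool from the 750GB drives:
      zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

      # Later, swap members for 1.5TB drives one at a time,
      # waiting for each resilver to finish:
      zpool replace tank c1t1d0 c2t0d0
      zpool status tank        # wait for "resilver completed"

      # The pool only grows once every member is larger and expansion is allowed:
      zpool set autoexpand=on tank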

    Read the article

  • OpenSolaris livecd, NForce NIC driver, and NTFS USB mounting. Oh My!

    - by Jake Wharton
    I'm attempting to install OpenSolaris 2009.06 on my server. Before I do, I would like to test that everything works, and I am running into problems. It has an Abit AN-M2 motherboard with an NForce chipset. The driver config utility says that I need a third-party driver and links me to http://homepage2.nifty.com/mrym3/taiyodo/eng/. Scrolling to the bottom, I have downloaded both tgzs just in case. Now the fun part: the only way to get these onto the computer is via a USB drive, since I can't access the network (and the install CD is in the drive, otherwise I'd just burn them to DVD). Since my USB key is NTFS-formatted, I cannot mount it: the install CD seems to lack NTFS drivers, which would require yet more downloaded packages. What should I do? The server will simply be a dumb NAS, and I know there exist other OpenSolaris-based flavors such as Nexenta, but from what I read the stock install is likely the best. If this is not the case and pursuing a different flavor is required or better, I will also accept that as an answer (but please don't jump straight to it).
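    If reformatting the key is an option, a FAT32 key sidesteps the NTFS driver problem entirely, since the live CD can mount FAT (pcfs) natively. A rough sketch, assuming the key has been reformatted as FAT32 on another machine (device paths are examples only; check the rmformat output on your system):

      # On the OpenSolaris live CD, find the USB key:
      rmformat -l

      # Mount its first FAT logical drive (the :c suffix) and copy the driver tarballs:
      mkdir -p /mnt/usb
      mount -F pcfs /dev/dsk/c4t0d0p0:c /mnt/usb
      cp /mnt/usb/*.tgz /tmp/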

    Read the article

  • Why is MySQL unable to open hosts.allow/hosts.deny?

    - by HonoredMule
    I have a storage server running Nexenta (OpenSolaris kernel, Ubuntu userspace) with MySQL on top of a ZFS storage array, using innodb_file_per_table and ulimit -n set to 8K. mysqltuner.pl confirms the file limit and claims there are 169 files. The following command:

      pfiles `fuser -c / 2>/dev/null`

    indicates one mysqld process having 485 file/device descriptors (and they're almost all for files), so I don't know how reliable the tuning script is, but that is still way less than 8K, and this list also finds no other process that is close to its limit. The global total number of descriptors in use is around 1K. So what can cause mysqld to be constantly streaming the following errors?

      [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.allow: Too many open files
      [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.deny: Too many open files

    Everything appears to actually be operating fine, but the issue is constantly flooding the admin console and starts right away on a fresh boot (not only reproducible, but always from mysqld and always the hosts files, whose permissions are the default -rw-r--r-- 1 root root). I could, of course, suppress it from the admin console, but I'd rather get to the bottom of it and still allow mysqld warnings/errors to reach the admin console. EDIT: not only is the actual file descriptor count well within sane limits, the issue also persists (appearing immediately) even with the file limit raised to 65535, and always only for hosts.allow/deny.
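    Not an answer to the root cause, but a hedged sketch of how one might confirm what limits the running mysqld actually has (these are standard Solaris/Nexenta process tools; the pgrep-based pid lookup is just for illustration):

      # Show the resource limits of the running process, which can differ
      # from the shell's ulimit -n:
      plimit `pgrep -x mysqld`
      prctl -n process.max-file-descriptor -i process `pgrep -x mysqld`

      # Count the descriptors mysqld currently has open:
      pfiles `pgrep -x mysqld` | grep -c ': S_IF'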

    Read the article

  • recommendations for efficient offsite remote backup solution of vm's

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to up to 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end for administration). I have two nearly identical boxes with AMD Phenom II X4s and Asus motherboards. Each has 4x500 GB SATA2 HDDs: one for the OS and other data for the Proxmox install, and three using mdadm+drbd+lvm to share 1.5 TB of storage between the two machines. I mount LVM images to KVM for all of the virtual machines. I currently have the ability to do live transfer from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, running Win2008 with MS SQL Server).

    I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store those on an external hard drive on the network. I then have the JungleDisk service (using Rackspace) to sync the vzdump folder for remote offsite backup. This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least half an hour.

    The much better solution would of course be something that lets me instantly take the difference between two points in time (say what was written from 6am to 7am), zip it, then send that difference file to the backup server, which would instantly transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive. That, coupled with piping the data through bzip or something, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols?) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So: ZFS, Zumastor, or other?
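    For the send/receive idea, a minimal sketch of what an hourly incremental push could look like (pool names, snapshot labels, and the remote host are made up for illustration; it presupposes the VM images already live on a ZFS dataset):

      # Take an hourly snapshot of the dataset backing the VM images:
      zfs snapshot tank/vmimages@2010-06-01_0700

      # Send only the changes since the previous snapshot, compressed over ssh,
      # and receive them into a pool on the offsite box:
      zfs send -i tank/vmimages@2010-06-01_0600 tank/vmimages@2010-06-01_0700 \
        | bzip2 -c \
        | ssh backup.example.com "bzcat | zfs receive -F backuppool/vmimages"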

    Read the article
