Search Results

Search found 321 results on 13 pages for 'iscsi'.


  • Why does eth0 show an IP if I'm booting into runlevel 1?

    - by banjer
    I'm having some issues with networking on a new Linux server I'm building. The OS is SLES 11. When booting into runlevel 1, I see that eth0 is showing an IP. Physically, there is a network cable plugged into the card associated with eth1, and another plugged into a QLogic iSCSI card (eth4, not shown). I've been troubleshooting this for a while, and it seems like eth0 is somehow getting assigned an IP even though it isn't configured in Linux, or even plugged into the network for that matter. Thoughts? Here is the ifconfig -a output (sorry, I need more rep before I can post images on SF...). A diagnostic sketch follows below.

    Read the article
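
    A few checks worth running for a question like this, assuming the SLES 11 default config paths (a sketch, not a definitive answer):

        ip addr show eth0                                 # current address on eth0
        ls /etc/sysconfig/network/ifcfg-*                 # per-interface config files
        grep BOOTPROTO /etc/sysconfig/network/ifcfg-eth0  # 'dhcp' here would explain a surprise IP
        cat /etc/udev/rules.d/70-persistent-net.rules     # MAC-to-name mapping; eth0 may not be the card you think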

  • HP DL180 G6 P410 8x SATA 1TB, what is the optimal configuration?

    - by Oneiroi
    I have an HP DL180 G6 with a P410 RAID controller. Presently this runs four 1TB Samsung SpinPoint SATA drives in a RAID 10 configuration using default settings. I am about to add a backplane to increase the drive capacity from 4 to 12 drives, and I plan to install 4 more 1TB SATA drives. The drives are matched and have close serial numbers (they arrived together on the manufacturer's pallet): model HD103UJ, 1000GB/7200rpm/32M, rated for 3 Gb/s. I will also be installing RHEL 6.1 x86_64. My question is: what would be the optimal RAID settings (stripe size etc.) for this configuration? A sketch follows below. To recap: 8x HD103UJ 1000GB/7200rpm/32M rated for 3 Gb/s, in a RAID 10 configuration. Thanks in advance. Update for role: the server is to become an iSCSI target for an internal OpenStack deployment currently underway (Glance). It will also provide virtualisation through KVM.

    Read the article
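
    On a P410 the array layout is usually set with HP's hpacucli tool; a hedged sketch, with the slot number, drive selection and stripe size as placeholders to adjust:

        hpacucli ctrl slot=0 pd all show                  # list physical drives on the controller
        # create the RAID 1+0 logical drive with an explicit stripe size (ss=, in KB)
        hpacucli ctrl slot=0 create type=ld drives=allunassigned raid=1+0 ss=256
        hpacucli ctrl slot=0 ld all show detail           # verify layout and stripe size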

  • (Simple) Linux HA with VMware vSphere?

    - by derhelge
    I hope my upcoming question is specific enough, and you are able and willing to help :-) We have several openSUSE VMs in an ESX cluster (three ESX servers) with an attached iSCSI SAN. All of those Linux VMs are configured as single points of failure, which means, in the case of a web server: LAMP, storage, etc. all on one machine. This was very simple, and in case of a failure (in the last years: kernel panics or Apache crashes) a simple reboot triggered by a script did it. But the problem is: how do I upgrade/maintain the web application or the underlying OS without downtime? This wasn't really manageable, and I did it in the early morning ;) How can I achieve a "simple" high-availability cluster now? I thought of DRBD with Heartbeat across 2 VMs, and for the storage an RDM (raw device mapped) LUN, changing the read-write permissions for both VMs (a DRBD sketch follows below). Is this a good idea? Does anyone have a better solution?

    Read the article
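
    A minimal sketch of the DRBD side of that idea, assuming two VMs named web1/web2, each with its own dedicated backing disk (all names, devices and IPs are placeholders):

        # /etc/drbd.d/r0.res
        resource r0 {
          protocol C;                 # synchronous replication
          on web1 {
            device    /dev/drbd0;
            disk      /dev/sdb;       # backing block device on this VM
            address   10.0.0.1:7788;
            meta-disk internal;
          }
          on web2 {
            device    /dev/drbd0;
            disk      /dev/sdb;
            address   10.0.0.2:7788;
            meta-disk internal;
          }
        }

    Heartbeat (or Pacemaker) would then decide which node holds the DRBD primary role and the service IP.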

  • How to install network drivers during installation?

    - by Matt
    I have a server that I'm attempting to install Windows onto. However, the disk is an iSCSI target provided by iPXE. Everything appears to go well until, about 3/4 of the way through the install process, I get an error about a critical driver missing and the installation is cancelled. I would say the critical driver is the network card: it's an Intel NIC and the drivers are not on the Windows installation CD. I tried slipstreaming them with RTSevenLite, but after it created the CD it seems it failed to make it bootable. I've also not been successful in making a bootable USB thumb drive or USB HDD. I suspect a buggy BIOS even though I have the latest. So, how do I install network drivers during installation? Windows used to provide an optional F6 step during install, but this seems to be missing in Windows Server 2008. Perhaps there is a way to do this, or another method? (A DISM sketch follows below.)

    Read the article
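
    One commonly suggested alternative to third-party slipstreaming tools is injecting the driver with DISM (from the Windows 7-era WAIK); a sketch, with all paths, the image index and the driver location as placeholders:

        rem mount the setup boot image (index 2 is typically Windows Setup)
        dism /Mount-Wim /WimFile:D:\sources\boot.wim /Index:2 /MountDir:C:\mount
        rem add the Intel NIC driver (.inf files) recursively
        dism /Image:C:\mount /Add-Driver /Driver:C:\drivers\intel /Recurse
        dism /Unmount-Wim /MountDir:C:\mount /Commit

    The same /Add-Driver step can be repeated against install.wim so the installed OS carries the driver as well.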

  • Best practice? Using DPM to backup VMs within each VM or through the host?

    - by andrew
    We've got two Hyper-V hosts running multiple VMs (all flavors of Windows Server). One of the VMs is running MS Data Protection Manager 2010, which runs beautifully (most of the time) and is connected to a separate NAS via iSCSI for the DPM storage. I noticed that when I installed the DPM agent on the Hyper-V hosts, it enumerated the VMs in the DPM protection listing. I don't want to burn through my storage space too fast with duplicate protection, so I was wondering: is it recommended to back up VMs through the host, or is it better to install the DPM agent on each VM and back it up as I would any other machine? It would seem as though most people (currently including me) do it the second way, but is there any advantage to using the entries under Hyper-V (Backup Using Child Partition Snapshot)?

    Read the article

  • VMware Virtual vCenter and High Availability

    - by rufo
    To continue with this question: Should VMware vCenter Server be highly available? According to the answers there, HA will continue to work even if vCenter is down. So, if my vCenter is a VM, using SQL Server Express in the same VM, and that VM is hosted in the same cluster it manages (and the cluster is set up for HA): am I correct to assume that if the host running vCenter goes down, HA will bring the vCenter VM up on another host and it will continue to function? BTW: my environment is small, two ESXi 5.0 hosts with about 50 VMs, using shared iSCSI storage for everything.

    Read the article

  • Oracle RAC interconnect in a Dell M1000e Blade Enclosure

    - by Antitribu
    We are looking at a Dell M1000e enclosure and appropriate blades with 4 NICs each. We are planning on running Linux/Oracle 11g RAC on two blades; storage will be handled by an iSCSI SAN, to which two NICs (via passthrough) will be connected, leaving us with two NICs (via the blade-centre switches). We would like to have an interconnect (obviously), an external IP and an internal IP. Would best practice be to: bond the remaining two interfaces and VLAN as appropriate to provide three virtual interfaces (a sketch follows below)? Run the interconnect on one interface and VLAN the external/internal interfaces? Purchase a blade with more NICs because the above is a terrible idea? Another option? Please feel free to point out the blindingly obvious or to point me at relevant documentation on support.oracle.com. I am specifically interested in supported configurations and best practices. Thanks!

    Read the article
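
    A sketch of the first option (bond the two remaining NICs, VLANs on top) in RHEL-style ifcfg files; device names, VLAN IDs and addresses are placeholders, and supportability should still be checked against Oracle's certification notes:

        # /etc/sysconfig/network-scripts/ifcfg-bond0
        DEVICE=bond0
        BONDING_OPTS="mode=active-backup miimon=100"
        ONBOOT=yes

        # /etc/sysconfig/network-scripts/ifcfg-bond0.100  (interconnect VLAN)
        DEVICE=bond0.100
        VLAN=yes
        IPADDR=192.168.100.11
        NETMASK=255.255.255.0
        ONBOOT=yes

    Repeat the VLAN stanza for the external and internal VLANs; each slave NIC gets its own ifcfg with MASTER=bond0 and SLAVE=yes.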

  • Xen private networking between multiple hosts

    - by Joe
    I have two physical hosts running Xen 3.2, sharing storage via iSCSI. On these two hosts are a number of domUs, and I'd like to network them into multiple private networks so they can only contact other domUs on their own private network. My understanding of the Xen documentation suggests it's possible to do this within one dom0 (i.e. create virtual networks between domUs), but I've found nothing explaining how this can be implemented across multiple dom0s on different hosts. The only thing that jumps to mind is manually creating iptables rules to route data to the other host, but this lacks elegance and could quickly grow cumbersome. Any suggestions? All advice is much appreciated!

    Read the article
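
    One way to do this without iptables is to trunk a VLAN per private network to both hosts and build an identical bridge on each dom0; a sketch with Xen 3.2-era tooling (VLAN ID 101 and all names are placeholders):

        vconfig add eth0 101               # tagged sub-interface on the trunked NIC
        brctl addbr xenbr101               # bridge dedicated to this private network
        brctl addif xenbr101 eth0.101
        ip link set eth0.101 up
        ip link set xenbr101 up

    Each domU on that network then gets vif = ['bridge=xenbr101'] in its config; the physical switch ports must carry VLAN 101 tagged to both hosts.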

  • (Open Source) Cloud-Filesystem to run a Database on Top?

    - by jens
    Hello, what are the current technologies and implementations for getting a filesystem with unlimited capacity by using single servers with their hard disks to form a "grid/cloud filesystem"? I need to have unlimited space (by adding further servers), but it must be a filesystem that is capable of running a database on top. I know of Apache Hadoop, but that seems not to be ideal for running a DB on top of it (or am I wrong?). And iSCSI seems to be "remote/networked", but I do not know how and if this is clusterable? Thank you very much! jens

    Read the article

  • distributed, fault-tolerant network block device

    - by gucki
    I'm looking for a distributed, fault-tolerant network storage system which exposes block devices (not filesystems) on the clients. The requirements: a client's block device should write simultaneously to several storage nodes; a client's block device should not fail as long as not all storage nodes backing it have gone down; the master should automatically redistribute storage data when a storage node fails or gets added/removed; and a single master (for metadata only) is fine. So ideally the architecture would be very similar to MooseFS (http://www.moosefs.org/), but instead of exposing a real filesystem mounted using a FUSE client, it would expose block devices on the clients. I know of iSCSI and DRBD, but neither seems to offer what I'm looking for. Or am I missing something?

    Read the article

  • Best filesystem choices for NFS storing VMware disk images

    - by mlambie
    Currently we use an iSCSI SAN as storage for several VMware ESXi servers. I am investigating the use of an NFS target on a Linux server for additional virtual machines. I am also open to the idea of using an alternative operating system (like OpenSolaris) if it will provide significant advantages. What Linux-based filesystem favours very large contiguous files (like VMware's disk images)? Alternatively, how have people found ZFS on OpenSolaris for this kind of workload? (This question was originally asked on SuperUser; feel free to migrate answers here if you know how).

    Read the article
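
    For reference, the Linux-side export for an ESXi NFS datastore can be as minimal as the sketch below (path and subnet are placeholders); ESXi mounts NFS as root, so no_root_squash is required:

        # /etc/exports
        /vmstore 10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)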

  • Kernel appears to have no modules

    - by George Reith
    Useful info: OS: CentOS 5.8 (final); kernel: 2.6.32-042stab056.8. My kernel came prebuilt with the server. I don't know anything about kernels and not a lot about Linux, but as far as I know I should have some modules loaded by the kernel. I came across this problem because I am unable to run iSCSI, as it expects certain modules to be loaded. lsmod returns nothing. depmod -a returns: "WARNING: Couldn't open directory /lib/modules/2.6.32-042stab056.8: No such file or directory" and "FATAL: Could not open /lib/modules/2.6.32-042stab056.8/modules.dep.temp for writing: No such file or directory". I have rebooted and nothing has changed. Does anyone know why this is happening?

    Read the article
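
    A quick way to confirm what depmod is complaining about, namely whether any module tree exists for the running kernel at all (worth noting: 042stab kernels are OpenVZ builds, and inside a container the module tree is typically absent by design):

        uname -r                        # e.g. 2.6.32-042stab056.8
        ls /lib/modules/                # kernels that have module trees installed
        ls /lib/modules/$(uname -r)     # the directory depmod could not find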

  • How can I improve performance over SMB/CIFS for an application that has poor write speeds?

    - by Jeremy
    I have a third party application that reads several large files and generates a third large file. Its performance is quite good when the generated file is stored on "local storage", i.e. either a direct attached or iSCSI-based disk. The source files that are read can be stored remotely on our NAS and accessed via SMB with little effect on performance. However, if we attempt to write the target file to any kind of SMB/CIFS share (Samba or Windows Server) the performance drops almost ten-fold. This is unacceptably slow in our case. Writing files to network shares is not otherwise slow. I can copy large files to SMB shares and get great performance - near what I would expect is possible given the disks and network in question. I have a theory that this application's problem with SMB shares has something to do with a lack of write caching over the share and perhaps lots of network roundtrips. Is this possible and is there anything that can be done about it?

    Read the article
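
    If the write-caching theory is right, a few Samba 3.x-era smb.conf options are worth experimenting with on the target share; a sketch with placeholder values, not tested recommendations:

        [archive]
           path = /srv/archive
           writeable = yes
           strict sync = no                # don't fsync on every client sync request
           write cache size = 2097152      # buffer small writes (bytes)
           socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536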

  • ESX 4 - c7000 - Cisco 3020

    - by gdavid
    I have 4 blades with ESX 4 installed in an HP c7000 enclosure. They have 6 Cisco 3020 for HP switches in the back end. The plan was to use 2 switches for iSCSI traffic and the other 4 for data traffic. I am having a problem trunking the switches to our existing environment; the documentation I keep finding online has commands/features that are not available on the 3020 switch. Does anyone have this setup anywhere? I am looking to do Virtual Switch Tagging (VST) so I can control the machines' VLANs via the port groups. The only time any configuration worked for us was when our network team applied the command 'switchport native vlan x'; this setup only allowed VLAN x to pass traffic, and only when the port group was on VLAN 0. Ideas? Thanks for any help. -GD

    Read the article
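
    For VST the uplinks (and the internal ports facing the ESX blades) need to be 802.1Q trunks carrying every VM VLAN; a hedged IOS sketch, with the port and VLAN IDs as placeholders:

        interface GigabitEthernet0/17
         description uplink / ESX trunk
         switchport trunk encapsulation dot1q
         switchport trunk allowed vlan 10,20,30
         switchport mode trunk

    With the trunk in place, each port group's VLAN ID does the tagging, so 'switchport native vlan x' (which made only one untagged VLAN work) is no longer needed.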

  • How to present shared storage for MS Cluster Services running on vSphere 5

    - by MDMarra
    I've seen two approaches to handling the presentation of shared storage to Windows Server 2008 R2 cluster VMs on VMware vSphere. One is the traditional method of carving out a LUN on your SAN and presenting it to both hosts through the Microsoft iSCSI software initiator. The other is to make a VMDK on an existing LUN, attach it to both hosts, and make it an independent disk so that it isn't affected by snapshots. Is one way the "correct" way, or are both viable? Is there any advantage or disadvantage to either?

    Read the article

  • Running SQL 2008 on a VM

    - by chris.w.mclean
    We are pondering setting up a SQL 2008 instance inside a VM for a production environment. All our SQL instances use iSCSI over gigabit Ethernet to talk to a NAS, as would this new instance. Is there any reason this is a bad idea, or any considerations to make this work well? The VM would be running on Xen 5.5, or we could set it up in Hyper-V if there's a compelling case for that. The VM's VHD would be stored on a different NAS than the one the SQL storage is on.

    Read the article

  • Brocade 200E Switch - Fibre Channel

    - by Arthor
    What I have: a Fujitsu-Siemens PRIMERGY BX600 and a Brocade 200E (16-port, 4 Gbit fibre) switch. My question: imagine a QNAP with a 10 Gbit fibre card connected to the Brocade 200E. Would this work, i.e. would the card drop down to 4 Gbit? Are 10 Gbit fibre cards backward compatible? Update: I have the specs of my server now: a Fujitsu-Siemens PRIMERGY BX600 S3 blade chassis comprising 2 x A3C40073243 blade management modules, 2 x A3C40089238 GbE Switch Blade SB9F 30/12, 2 x A3C40085736 4Gb 10-port pass-through blades, 1 x A3C40083767 digital KVM module, 2 x A3C40073245 fan enclosures with cooling fans, and 4 x A3C40073262 power supplies. My goals and objectives: have a blade system in place with 8 blades for video rendering and the other 2 for databases, scripts, etc.; build the system on VMware ESXi 5; use iSCSI on the QNAP to support HA and vMotion if needed; let users access the QNAP for video editing. The QNAP has 12 drives (2 x 6 HDD in RAID 10).

    Read the article

  • What disk setup is needed / best practice for hypervisor-only servers?

    - by Luke404
    Planning to buy some servers to run a hypervisor (Citrix XenServer or VMware vSphere; we still have to decide between the two), we'd like to boot off the local redundant SD card module offered by various vendors (e.g. Dell, HP, etc.). The actual VMs will run from an existing iSCSI SAN (which, by the way, can't support booting the servers directly off the SAN). What are the reasons, if any, to choose completely diskless servers vs. having some local storage? And what would be the guidelines for choosing that local storage (number of spindles, RAID level, etc.)?

    Read the article

  • Suggestions on providing HA access to an external (fibre) RAID subsystem

    - by user145198
    We are looking at upgrading our storage capacity with an external RAID subsystem that has two redundant fibre controllers, each with 4 x 8 Gbps fibre ports. I would like access to this storage system to go via HA Linux: ideally I would connect 2 fibre ports from each controller into each Linux server, and then export either NFS or iSCSI via a 10 GbE interface. I have seen plenty of references to DRBD; however, all of those references tend to use block storage that is solely attached to each machine, rather than a shared block storage device, so I am unsure if DRBD could (or should) be used in this case. Ideas?

    Read the article

  • Windows 2003 Server - Can I map a folder to another folder on the same server?

    - by TheCleaner
    Scenario: I have a server that was running low on space. We have an external iSCSI SAN on which the server now has a LUN, connected as E:\. We are moving the PHOTOS folder from its old location on D:\ to the new E:\, and that new drive is being shared out as "ARCHIVE". So \\server\shared\photos now becomes \\server\archive\photos. I can easily place a shortcut in the original location saying "DOUBLE CLICK HERE FOR THE PHOTOS", but it isn't ideal. What I'd like is for \\server\shared\photos to simply point to \\server\archive\photos, so that if someone maps a drive to \\server\shared and browses to the photos folder, they will see what is in \\server\archive\photos. Is that possible? I was thinking about SUBST or DFS, but I don't think either of those will do it. (One other option is sketched below.)

    Read the article
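
    If the goal is just to keep the old path working, an NTFS junction is one option on Server 2003; a sketch using linkd.exe from the Windows Server 2003 Resource Kit (paths are placeholders, and the junction target must be a local NTFS volume):

        rem graft the archive location into the old share path
        linkd D:\shared\photos E:\archive\photos

    Junctions are resolved server-side, so anyone browsing \\server\shared\photos lands transparently in the archive folder; some backup tools treat junctions inconsistently, though, so test before relying on it.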

  • ESXi 5 - management services crashed - VMs running

    - by Frederik Nielsen
    I have a setup with two ESXi 5 servers. We are (were) running with an iSCSI box to serve disks for the VMs; however, we are in the process of migrating away from it, because the storage OS disk is bad. Now, one of the ESXi hosts has been running for ~20 hrs, and it seems like the management services just crashed on that host. The VMs are still running, so it's not really serious. However, I want to fix it. Should I be worried? Will the VMs keep running? The host does respond to pings. I am running vCenter to administer the hosts. Thanks in advance.

    Read the article
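
    The usual first step here is restarting the management agents from an SSH or console session on the affected host; this does not touch the running VMs:

        /sbin/services.sh restart     # restart all management agents
        # or restart just the two agents vCenter talks to:
        /etc/init.d/hostd restart
        /etc/init.d/vpxa restart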

  • export block device over network without root

    - by dschatz
    I'm trying to export a file as a block device over the network. I do not have root access on the machine where the file exists; I do have root access on the machine(s) where I will mount the block device. I've seen ATA over Ethernet and iSCSI, but there don't seem to be any implementations which allow me to export the block device without root (some even require kernel modules). Is there an implementation of either of these, or some other protocol, that doesn't require root? Perhaps I can tunnel Ethernet over IP to do this?

    Read the article
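
    One protocol sometimes suggested for exactly this split is NBD (Network Block Device): the export side is a plain userspace process, so no root is needed on the machine holding the file, while the kernel module is only required on the client (where root is available). A sketch using the older positional nbd-server syntax; host, port and paths are placeholders:

        # on the machine holding the file (no root needed):
        nbd-server 10809 /path/to/backing.img

        # on the client (root available):
        modprobe nbd
        nbd-client storage-host 10809 /dev/nbd0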

  • Server 2012 Storage Pools, RAID controller... can the Storage Pool deal with it?

    - by TomTom
    Before trying it out: I can't find any documentation. Given that Storage Pools have serious performance problems with parity, and do not rebalance data when you add discs, my preferred way to use them would be as thin-provisioned iSCSI targets, with every pool running against one RAID volume that comes from a RAID controller (which also brings SSD read and write caching, another thing missing from Storage Pools). The main question is: how does a Storage Pool handle a change in the underlying disc? I'm mostly talking about OCE (Online Capacity Expansion), where a disc suddenly reports a larger size after an expansion. Standard Windows allows you to use this additional space (and expand the partitions). How does a Storage Pool handle it?

    Read the article

  • Swap files in Cloud Infrastructures

    - by ffeldhaus
    At our company we set up an OpenStack cloud and are currently creating internal guidelines for the creation of OS templates/images. One controversial topic was whether we should provide swap inside the VM templates. Therefore I'd like to ask the following questions: From an elastic cloud provider's point of view, does it make sense to offer swap partitions/files in the VM templates, or is swap not needed when a VM can be resized? Which scenarios necessarily demand a swap file to be present? What kind of storage should be used for swap files (e.g. local/central, FC/iSCSI/NFS)? Are there any best practices for offering swap files in a performant way in cloud infrastructures?

    Read the article
