Search Results

Search found 761 results on 31 pages for 'nfs perfomance'.


  • Gluster bricks are offline and errors in logs

    - by Roman Newaza
    I have substituted all the IP addresses with hostnames and renamed the configs (IP to hostname) in /var/lib/glusterd with my shell script. After that I restarted the Gluster daemon and the volume. Then I checked that all the peers are connected: root@GlusterNode1a:~# gluster peer status Number of Peers: 3 Hostname: gluster-1b Uuid: 47f469e2-907a-4518-b6a4-f44878761fd2 State: Peer in Cluster (Connected) Hostname: gluster-2b Uuid: dc3a3ff7-9e30-44ac-9d15-00f9dab4d8b9 State: Peer in Cluster (Connected) Hostname: gluster-2a Uuid: 72405811-15a0-456b-86bb-1589058ff89b State: Peer in Cluster (Connected) I could see the mounted volume's size change on all the nodes when I executed the df command, so new data is coming in. But recently I noticed error messages in the app log: copy(/storage/152627/dat): failed to open stream: Structure needs cleaning readfile(/storage/1438227/dat): failed to open stream: Input/output error unlink(/storage/189457/23/dat): No such file or directory Finally, I found out that some bricks are offline: root@GlusterNode1a:~# gluster volume status Status of volume: storage Gluster process Port Online Pid ------------------------------------------------------------------------------ Brick gluster-1a:/storage/1a 24009 Y 1326 Brick gluster-1b:/storage/1b 24009 N N/A Brick gluster-2a:/storage/2a 24009 N N/A Brick gluster-2b:/storage/2b 24009 N N/A Brick gluster-1a:/storage/3a 24011 Y 1332 Brick gluster-1b:/storage/3b 24011 N N/A Brick gluster-2a:/storage/4a 24011 N N/A Brick gluster-2b:/storage/4b 24011 N N/A NFS Server on localhost 38467 Y 24670 Self-heal Daemon on localhost N/A Y 24676 NFS Server on gluster-2b 38467 Y 4339 Self-heal Daemon on gluster-2b N/A Y 4345 NFS Server on gluster-2a 38467 Y 1392 Self-heal Daemon on gluster-2a N/A Y 1402 NFS Server on gluster-1b 38467 Y 2435 Self-heal Daemon on gluster-1b N/A Y 2441 What can I do about this? I need to fix it. Note: CPU and network usage of all four nodes is about the same.
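
    An untested sketch of how I'd try to bring the bricks back (assuming the volume really is named "storage", as in the status output above, and that the installed Gluster version has the heal command):

        # Force-start the volume: glusterd respawns only the brick
        # processes that are currently offline.
        gluster volume start storage force
        # Confirm every brick now reports Online = Y
        gluster volume status storage
        # Reconcile replicas that diverged while the bricks were down
        gluster volume heal storage full
        # Watch a failed brick's log on its node for the real error
        # (log path is a guess derived from the brick path)
        tail -f /var/log/glusterfs/bricks/storage-1b.log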

    Read the article

  • Diagnosing SAN connectivity issues (RHEL5)

    - by Matthew
    We are currently utilizing GFS2 to share a SAN LUN between 3 servers. However, due to a feature problem with the vendor software we are using, we currently have the volume unmounted on two of the boxes, and are instead exporting the GFS2 filesystem via NFS from the first one (the software requires some weird locking mechanics that GFS2 doesn't support). As of this morning, NFS was no longer able to read/write to the volume from any of the servers, including the NFS server. I then tried checking the normal mount (the directory that is exported on the NFS server) and I received a weird input/output error just trying to cd into it. When I tried running multipath, I got a DM error; however, multipath -l worked just fine. I tried unmounting the GFS2 volume, and the CLI hung. I ran init 0, which killed most services, but then the shutdown appeared to hang. I logged in via out-of-band access (HP iLO) and saw that the shutdown was hung trying to unmount the GFS2 volumes. My main priority was getting the box back online, so after about 5 minutes of waiting I did a hard reset. I am now trying to figure out what went wrong. What are the correct logs to investigate? I've never run into SAN issues like this before. The SAN is connected via 2 fibre connections. Any help would really be appreciated. Everything appears to be up and functional now.
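
    Not a full answer, but a hedged checklist of the standard places to look on RHEL 5 after an event like this:

        # Kernel and syslog messages from around the time of the failure
        grep -iE 'scsi|multipath|gfs2|dlm|i/o error' /var/log/messages
        dmesg | grep -iE 'sd[a-z]|abort|reset|offline'
        # Full path state per LUN (-ll probes the paths, unlike the terser -l)
        multipath -ll
        # Link state of each fibre channel HBA
        cat /sys/class/fc_host/host*/port_state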

    Read the article

  • Oracle Database 11g R2 now supported under SAP as well

    - by Lajos Sárecz
    Since Easter, Oracle Database 11g R2 can also be used under SAP. It is well known that SAP only certifies Release 2, so this is genuinely good news for SAP customers, who can now use the following 11g R2 features in SAP environments: • Advanced Compression option (for tables, RMAN backups, expdp, and the Data Guard network) • Real Application Testing • Oracle Database 11g Release 2 Database Vault • Oracle Database 11g Release 2 RAC • Advanced Encryption for tablespaces, RMAN backups, expdp, and the Data Guard network • Direct NFS • Deferred Segments • Online Patching This means, for example, that the SAP database, and the backups made from it, can be compressed. Experience so far shows a compression ratio of 2-4x, depending on the database. The risk of a database upgrade, and of any other change affecting the database infrastructure, can be reduced significantly by using Real Application Testing. Administrator roles can be separated with Database Vault. The new features of Real Application Clusters 11g R2 also become available. With Transparent Data Encryption, tablespaces and backups can be encrypted in a way that is transparent to the application, yet the data cannot be decrypted by reading the media directly. The Direct NFS client is now supported, which significantly improves NFS access speed. With Deferred Segments, table segments are only allocated when data is actually inserted into a table. This is useful because installing an application typically creates every table, yet many tables never receive any data; both installation time and database size are therefore reduced. Online Patching makes it possible to install patches without downtime. These are attractive capabilities, so it is worth scheduling the database upgrade under SAP systems for the near future, since Premier Support for the 10g release expires this summer. For the upgrade I definitely recommend Real Application Testing, which lets you test the upgrade in a test environment under live production workload. The Sun Oracle Database Machine and Exadata are unfortunately not yet supported under SAP, because the ASM certification has not been completed; according to reports, this is expected by early 2011.

    Read the article

  • Google bots are severely affecting site performance

    - by Lynn
    I have an aggregate site on a Linux server that pulls in feeds from a universe of about 2,000 blogs. It's running WordPress 3.4.2, and I have a cron job, staggered to run five times an hour on another server, that pulls in the stories and then publishes them to the front page of this site. This is so I don't put too much pressure all on one server. However, the Google bots, which visit a few times every hour, bring the server to its knees in the mornings and evenings when there is an increase in traffic on the site. The bots have something like 30,000 links to follow at this point. How do I throttle the bots to simply grab the new stories off the front page and stop there? EDIT - Details of my server configuration: The way we have this set up is that the server that handles all the publishing is an unmanaged instance on AWS. It mounts the NFS server and connects to RDS to update content, etc. You get to this publishing instance via a plugin that detects the wp-admin link and then redirects you there. The front-end app server also mounts the NFS share and requests data from RDS. It is the only one that has WP Super Cache on it. The OS is Ubuntu on the app server and the NFS server runs CentOS. The front end is Nginx and the publishing server is Apache.
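
    A hedged starting point (the Disallow paths are placeholders; adjust them to the site's real permalink structure). Note that Googlebot ignores Crawl-delay, so its rate has to be lowered in Google Webmaster Tools; robots.txt can still fence off the deep archive links for all crawlers:

        # robots.txt in the site's document root
        User-agent: *
        Disallow: /page/      # paginated archives
        Disallow: /tag/       # tag archives
        Crawl-delay: 10       # honored by Bing/Yandex, ignored by Googlebot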

    Read the article

  • Network authentication + roaming home directory - which technology should I look into using?

    - by Brian
    I'm looking into software which provides a user with a single identity across multiple computers. That is, a user should have the same permissions on each computer, and the user should have access to all of his or her files (roaming home directory) on each computer. There seem to be many solutions for this general idea, but I'm trying to determine the best one for me. Here are some details along with requirements: The network of machines are Amazon EC2 instances running Ubuntu. We access the machines with SSH. Some machines on this LAN may have different uses, but I am only discussing machines for a certain use (running a multi-tenancy platform). The system will not necessarily have a constant number of machines. We may have to permanently or temporarily alter the number of machines running. This is the reason why I'm looking into centralized authentication/storage. The implementation of this effect should be a secure one. We're unsure if users will have direct shell access, but their software will potentially be running (under restricted Linux user names, of course) on our systems, which is as good as direct shell access. Let's assume that their software could potentially be malicious for the sake of security. I have heard of several technologies/combinations to achieve my goal, but I'm unsure of the ramifications of each. An older ServerFault post recommended NFS & NIS, though the combination has security problems according to this old article by Symantec. The article suggests moving to NIS+, but, as it is old, this Wikipedia article has cited statements suggesting a trend away from NIS+ by Sun. The recommended replacement is another thing I have heard of... LDAP. It looks like LDAP can be used to store user information in a centralized location on a network. NFS would still need to be used to cover the 'roaming home folder' requirement, but I see references to them being used together. Since the Symantec article pointed out security problems in both NIS and NFS, is there software to replace NFS, or should I heed that article's suggestions for locking it down? I'm tending toward LDAP because another fundamental piece of our architecture, RabbitMQ, has an authentication/authorization plugin for LDAP. RabbitMQ will be accessible in a restricted manner to users on the system, so I would like to tie the security systems together if possible. Kerberos is another secure authentication protocol that I have heard of. I learned a bit about it some years ago in a cryptography class but don't remember much about it. I have seen suggestions online that it can be combined with LDAP in several ways. Is this necessary? What are the security risks of LDAP without Kerberos? I also remember Kerberos being used in another piece of software developed by Carnegie Mellon University... Andrew File System, or AFS. OpenAFS is available for use, though its setup seems a bit complicated. At my university, AFS satisfies both requirements... I can log in to any machine, and my "AFS folder" is always available (at least when I acquire an AFS token). Along with suggestions for which path I should look into, does anybody have any guides which were particularly helpful? As noted above, LDAP looks to be the best choice, but I'm particularly interested in the implementation details (Kerberos? NFS?) with respect to security.
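
    For what it's worth, a minimal sketch of the LDAP-plus-autofs shape this usually takes on Ubuntu (the server hostname, export path, and security flavor are assumptions; sec=krb5p only works once Kerberos is actually in place):

        sudo apt-get install libnss-ldap libpam-ldap autofs nfs-common
        # /etc/auto.master -- mount each home on demand under /home
        /home  /etc/auto.home  --timeout=60
        # /etc/auto.home -- wildcard map: /home/alice -> server:/export/home/alice
        *  -fstype=nfs4,sec=krb5p  nfs.internal.example:/export/home/&
        sudo service autofs restart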

    Read the article

  • Heartbeat won't successfully start up resources from a cold boot when a failed node is present

    - by Matthew
    I currently have two Ubuntu servers running Heartbeat and DRBD. The servers are directly connected with a 1000Mbps crossover cable on eth1 and have access to an IP camera LAN on eth0. Now, let's say that one node is down and the remaining functional node is booting after having been shut down. The node that is still functioning won't start up Heartbeat and provide access to the DRBD resource from a cold boot. I have to manually restart Heartbeat by running sudo service heartbeat restart to get everything up and running. How can I get it to start fine from a cold start, when only one server is present? Here is the ha.cf: debug /var/log/ha-debug logfile /var/log/ha-log logfacility none keepalive 2 deadtime 10 warntime 7 initdead 60 ucast eth1 192.168.2.2 ucast eth0 10.1.10.201 node EMserver1 node EMserver2 respawn hacluster /usr/lib/heartbeat/ipfail ping 10.1.10.22 10.1.10.21 10.1.10.11 auto_failback off Some material from the syslog: harc[4604]: 2012/11/27_13:54:49 info: Running /etc/ha.d//rc.d/status status mach_down[4632]: 2012/11/27_13:54:49 info: /usr/share/heartbeat/mach_down: nice_failback: foreign resources acquired mach_down[4632]: 2012/11/27_13:54:49 info: mach_down takeover complete for node emserver2. Nov 27 13:54:49 EMserver1 heartbeat: [4586]: info: Initial resource acquisition complete (T_RESOURCES(us)) Nov 27 13:54:49 EMserver1 heartbeat: [4586]: info: mach_down takeover complete. IPaddr[4679]: 2012/11/27_13:54:49 INFO: Resource is stopped Nov 27 13:54:49 EMserver1 heartbeat: [4605]: info: Local Resource acquisition completed. harc[4713]: 2012/11/27_13:54:49 info: Running /etc/ha.d//rc.d/ip-request-resp ip-request-resp ip-request-resp[4713]: 2012/11/27_13:54:49 received ip-request-resp IPaddr::10.1.10.254 OK yes ResourceManager[4732]: 2012/11/27_13:54:50 info: Acquiring resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server IPaddr[4759]: 2012/11/27_13:54:50 INFO: Resource is stopped ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 start IPaddr[4816]: 2012/11/27_13:54:50 INFO: Using calculated nic for 10.1.10.254: eth0 IPaddr[4816]: 2012/11/27_13:54:50 INFO: Using calculated netmask for 10.1.10.254: 255.255.255.0 IPaddr[4816]: 2012/11/27_13:54:50 INFO: eval ifconfig eth0:0 10.1.10.254 netmask 255.255.255.0 broadcast 10.1.10.255 IPaddr[4804]: 2012/11/27_13:54:50 INFO: Success ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/drbddisk r0 start Filesystem[4965]: 2012/11/27_13:54:50 INFO: Resource is stopped ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 start Filesystem[5039]: 2012/11/27_13:54:50 INFO: Running start for /dev/drbd1 on /shr Filesystem[5033]: 2012/11/27_13:54:51 INFO: Success ResourceManager[4732]: 2012/11/27_13:54:51 info: Running /etc/init.d/nfs-kernel-server start Nov 27 13:55:00 EMserver1 heartbeat: [4586]: info: Local Resource acquisition completed. (none) Nov 27 13:55:00 EMserver1 heartbeat: [4586]: info: local resource transition completed. Nov 27 13:57:46 EMserver1 heartbeat: [4586]: info: Heartbeat shutdown in progress. (4586) Nov 27 13:57:46 EMserver1 heartbeat: [5286]: info: Giving up all HA resources. 
ResourceManager[5301]: 2012/11/27_13:57:46 info: Releasing resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server ResourceManager[5301]: 2012/11/27_13:57:46 info: Running /etc/init.d/nfs-kernel-server stop ResourceManager[5301]: 2012/11/27_13:57:46 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 stop Filesystem[5372]: 2012/11/27_13:57:46 INFO: Running stop for /dev/drbd1 on /shr Filesystem[5372]: 2012/11/27_13:57:47 INFO: Trying to unmount /shr Filesystem[5372]: 2012/11/27_13:57:47 INFO: unmounted /shr successfully Filesystem[5366]: 2012/11/27_13:57:47 INFO: Success ResourceManager[5301]: 2012/11/27_13:57:47 info: Running /etc/ha.d/resource.d/drbddisk r0 stop ResourceManager[5301]: 2012/11/27_13:57:47 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 stop IPaddr[5509]: 2012/11/27_13:57:47 INFO: ifconfig eth0:0 down IPaddr[5497]: 2012/11/27_13:57:47 INFO: Success Nov 27 13:57:47 EMserver1 heartbeat: [5286]: info: All HA resources relinquished. Nov 27 13:57:48 EMserver1 heartbeat: [4586]: info: killing /usr/lib/heartbeat/ipfail process group 4603 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBFIFO process 4589 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4590 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4591 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4592 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4593 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4594 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4595 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4596 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4597 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4598 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4599 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4589 exited. 11 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4596 exited. 10 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4598 exited. 9 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4590 exited. 8 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4595 exited. 7 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4591 exited. 6 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4592 exited. 5 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4593 exited. 4 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4597 exited. 3 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4594 exited. 2 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4599 exited. 1 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: emserver1 Heartbeat shutdown complete. 
Here is some more from the log ResourceManager[2576]: 2012/11/28_16:32:42 info: Acquiring resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server IPaddr[2602]: 2012/11/28_16:32:42 INFO: Running OK Filesystem[2653]: 2012/11/28_16:32:43 INFO: Running OK Nov 28 16:32:52 EMserver1 heartbeat: [1695]: WARN: node emserver2: is dead Nov 28 16:32:52 EMserver1 heartbeat: [1695]: info: Dead node emserver2 gave up resources. Nov 28 16:32:52 EMserver1 ipfail: [1807]: info: Status update: Node emserver2 now has status dead Nov 28 16:32:52 EMserver1 heartbeat: [1695]: info: Link emserver2:eth1 dead. Nov 28 16:32:53 EMserver1 ipfail: [1807]: info: NS: We are still alive! Nov 28 16:32:53 EMserver1 ipfail: [1807]: info: Link Status update: Link emserver2/eth1 now has status dead Nov 28 16:32:55 EMserver1 ipfail: [1807]: info: Asking other side for ping node count. Nov 28 16:32:55 EMserver1 ipfail: [1807]: info: Checking remote count of ping nodes. Nov 28 16:32:57 EMserver1 heartbeat: [1695]: info: Heartbeat shutdown in progress. (1695) Nov 28 16:32:57 EMserver1 heartbeat: [2734]: info: Giving up all HA resources. ResourceManager[2751]: 2012/11/28_16:32:57 info: Releasing resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server ResourceManager[2751]: 2012/11/28_16:32:57 info: Running /etc/init.d/nfs-kernel-server stop ResourceManager[2751]: 2012/11/28_16:32:57 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 stop Filesystem[2829]: 2012/11/28_16:32:57 INFO: Running stop for /dev/drbd1 on /shr Filesystem[2829]: 2012/11/28_16:32:57 INFO: Trying to unmount /shr Filesystem[2829]: 2012/11/28_16:32:58 INFO: unmounted /shr successfully Filesystem[2823]: 2012/11/28_16:32:58 INFO: Success ResourceManager[2751]: 2012/11/28_16:32:58 info: Running /etc/ha.d/resource.d/drbddisk r0 stop ResourceManager[2751]: 2012/11/28_16:32:58 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 stop IPaddr[2971]: 2012/11/28_16:32:58 INFO: ifconfig eth0:0 down IPaddr[2958]: 2012/11/28_16:32:58 INFO: Success Nov 28 16:32:58 EMserver1 heartbeat: [2734]: info: All HA resources relinquished. Nov 28 16:32:59 EMserver1 heartbeat: [1695]: info: killing /usr/lib/heartbeat/ipfail process group 1807 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBFIFO process 1777 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1778 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1779 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1780 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1781 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1782 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1783 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1784 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1785 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1786 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1787 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1778 exited. 11 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1779 exited. 
10 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1780 exited. 9 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1781 exited. 8 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1782 exited. 7 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1783 exited. 6 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1784 exited. 5 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1785 exited. 4 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1786 exited. 3 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1787 exited. 2 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1777 exited. 1 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: emserver1 Heartbeat shutdown complete. If I restarted heartbeat at this point... the resources heartbeat controls would start up fine.... please help!
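
    One hedged guess at the cold-boot hang: DRBD's init script blocks waiting for its peer, so Heartbeat tries to acquire resources before DRBD is usable. Bounding that wait in the DRBD startup section lets a lone node boot degraded (values are illustrative):

        # /etc/drbd.conf (or /etc/drbd.d/global_common.conf)
        common {
          startup {
            wfc-timeout      60;   # wait at most 60s for the peer on a normal boot
            degr-wfc-timeout 30;   # shorter wait when the cluster was already degraded
          }
        }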

    Read the article

  • Coherent access to mainframe files from Win32 application and IBM RDZ/Eclipse?

    - by Ira Baxter
    I have a suite of tools for processing IBM COBOL source code; these tools are built as Win32 applications and talk to Windows (including network) files using traditional Windows file system calls (open, close, read, write) and work just fine, thank you. I'd like to integrate these with Eclipse; we think we understand how to get Eclipse to do the UI for us. The problem is that Eclipse/RDZ users access mainframe files through some IBM magic. In How does RDZ access mainframe files I tried to understand how Eclipse accessed files on a mainframe. Apparently Eclipse/RDZ has a secret filesystem access backdoor not available to normal mortals. At issue is how our tools, reading some Windows-accessible file (local disk file, NFS to mainframe, ...), can associate such files with the files that Eclipse can access or is using. Ideally we'd like UI-integrated versions of our tools to take an Eclipse file-name string for a mainframe file, pass it to our Windows application to process, have the Windows application open/read/process the file, and return results associated with that file to the Eclipse UI. Is there a canonical file name path that would be used with mainframe NFS that would be equivalent to the name or access object Eclipse RDZ uses to access the same file? Are all operations doable internally by Eclipse also doable by the mainframe NFS [for instance, can NFS read/update an element in a partitioned data set? Can Eclipse RDZ? Does it matter?] Is the mainframe file access available to custom Java code running under Eclipse RDZ (e.g., equivalents of open/close/read/write based on filename/path/something)? If so, can somebody steer me towards documentation describing the access methods? Has anybody else already solved this problem, or does anyone have a good suggestion?

    Read the article

  • Keeping track of File System Utilization in Ops Center 12c

    - by S Stelting
    Enterprise Manager Ops Center 12c provides significant monitoring capabilities, combined with very flexible incident management. These capabilities even extend to monitoring the file systems associated with Solaris or Linux assets. Depending on your needs, you can monitor and manage incidents, or you can fine-tune alert monitoring rules for specific file systems. This article will show you how to use Ops Center 12c to Track file system utilization Adjust file system monitoring rules Disable file system rules Create custom monitoring rules If you're interested in this topic, please join us for a WebEx presentation! Date: Thursday, November 8, 2012 Time: 11:00 am, Eastern Standard Time (New York, GMT-05:00) Meeting Number: 598 796 842 Meeting Password: oracle123 To join the online meeting ------------------------------------------------------- 1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833597&UID=1512095432&PW=NOWQ3YjJlMmYy&RT=MiMxMQ%3D%3D 2. If requested, enter your name and email address. 3. If a password is required, enter the meeting password: oracle123 4. Click "Join". To view in other time zones or languages, please click the link: https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833597&UID=1512095432&PW=NOWQ3YjJlMmYy&ORT=MiMxMQ%3D%3D Monitoring File Systems for OS Assets The Libraries tab provides basic, device-level information about the storage associated with an OS instance. This tab shows you the local file system associated with the instance and any shared storage libraries mounted by Ops Center. More detailed information about file system storage is available under the Analytics tab, under the sub-tab named Charts. Here, you can select and display the individual mount points of an OS, and export the utilization data if desired: In this example, the OS instance has a basic root file partition and several NFS directories. Each file system mount point can be independently chosen for display in the Ops Center chart. File Systems and Incident Reporting Every asset managed by Ops Center has a "monitoring policy", which determines what represents a reportable issue with the asset. The policy is made up of a set of monitoring rules, where each rule describes An attribute to monitor The conditions which represent an issue The level or levels of severity for the issue When the conditions are met, Ops Center sends a notification and creates an incident. By default, OS instances have three monitoring rules associated with file systems: File System Reachability: Triggers an incident if a file system is not reachable NAS Library Status: Triggers an incident for a value of "WARNING" or "DEGRADED" for a NAS-based file system File System Used Space Percentage: Triggers an incident when file system utilization grows beyond defined thresholds You can view these rules in the Monitoring tab for an OS: Of course, the catch with the default monitoring rules is that they apply to every file system associated with an OS instance. As a result, any issue with NAS accessibility or disk utilization will trigger an incident. This can cause incidents for file systems to be reported multiple times if the same shared storage is used by many assets, as shown in this screenshot: Depending on the level of control you'd like, there are a number of ways to fine-tune incident reporting. Note that any changes to an asset's monitoring policy will detach it from the default, creating a new monitoring policy for the asset. 
If you'd like, you can extract a monitoring policy from an asset, which allows you to save it and apply the customized monitoring profile to other OS assets. Solution #1: Modify the Reporting Thresholds In some cases, you may want to modify the basic conditions for incident reporting on your file systems. The changes you make to a default monitoring rule will apply to all of the file systems associated with your operating system. Selecting the File System Used Space Percentage entry and clicking the "Edit Alert Monitoring Rule Parameters" button opens a pop-up dialog which allows you to modify the rule. The first screen lets you decide when you will check for file system usage, and how long you will wait before opening an incident in Ops Center. By default, Ops Center monitors continuously and reports disk utilization issues which exist for more than 15 minutes. The second screen lets you define the actual threshold values. By default, Ops Center opens a Warning-level incident if utilization rises above 80%, and a Critical-level incident if utilization rises above 95%. Solution #2: Disable Incident Reporting for File Systems If you'd rather not report file system incidents, you can disable the monitoring rules altogether. In this case, you can select the monitoring rules and click the "Disable Alert Monitoring Rule(s)" button to open the pop-up confirmation dialog. Like the first solution, this option affects all file system monitoring. It allows you to completely disable incident reporting for NAS library status or file system space consumption. Solution #3: Create New Monitoring Rules for Specific File Systems If you'd like to have the greatest flexibility when monitoring file systems, you can create entirely new rules. Clicking the "Add Alert Monitoring Rule" button (the icon with the green plus sign) opens a wizard which allows you to define a new rule. This rule will be based on a threshold, and will be used to monitor operating system assets. We'd like to add a rule to track disk utilization for a specific file system - the /nfs-guest directory. To do this, we specify the following attribute FileSystemUsages.name=/nfs-guest.usedSpacePercentage The value of name in the attribute allows us to define a specific NFS shared directory or file system... in the case of this OS, we could have chosen any of the values shown in the File Systems Utilization chart at the beginning of this article. usedSpacePercentage lets us define a threshold based on the percentage of total disk space used. There are a number of other values that we could use for threshold-based monitoring of FileSystemUsages, including freeSpace freeSpacePercentage totalSpace usedSpace usedSpacePercentage The final sections of the screen allow us to determine when to monitor for disk usage, and how long to wait after utilization reaches a threshold before creating an incident. The next screen lets us define the threshold values and severity levels for the monitoring rule: If historical data is available, Ops Center will display it on the screen. Clicking the Apply button will create the new monitoring rule and activate it in your monitoring policy. If you combine this with one of the previous solutions, you can precisely define which file systems will generate incidents and notifications. For example, this monitoring policy has the default "File System Used Space Percentage" rule disabled, but the new rule reports ONLY on utilization for the /nfs-guest directory.

    Read the article

  • Weird graphics behaviour while playing games

    - by Ayush Khemka
    When I play a high-end game like NFS Most Wanted or FIFA 12, I get weird artifacts in the graphics. While playing NFS, my car has various transparent diamonds all over its body, and while playing FIFA I get weird black lines all over the field. My PC specs are: AMD Athlon II X2, ASUS M4A785D-M PRO, 500GB Seagate HDD, 2GB DDR2 Transcend RAM at 1033MHz, ATI Radeon HD4350 512MB graphics card. Tell me if I need to provide anything else. Please help me. Thanks.

    Read the article

  • Short Look at Frends Helium 2.0 Beta

    - by mipsen
    Pekka from Frends gave me the opportunity to have a look at the beta version of their Helium 2.0. For all of you who don't know the tool: Helium is a web application that collects management data from BizTalk which you usually have to tediously collect yourself, like performance data (throttling, throughput (such as completed orchestrations/hour), other performance counters) and data about the state of BTS applications, and presents the data in clearly structured diagrams and overviews which (often) even allow drill-down. Installing Helium 2 was quite easy. It comes as an msi file which creates the web application on IIS. Additionally, a Windows service is deployed which acts as an agent for sending alert e-mails and collecting data. What I missed during installation was a link to the created web app at the end, but the link can be found under Program Files/Frends... On the start page Helium shows two sections: An overview of the BTS applications (Running? Suspended messages?) Basic performance data You can drill down into the BTS applications further, to see receive locations, orchestrations and send ports. And then a very nice feature can be activated: you can set a monitor on each of the ports and/or orchestrations and have an e-mail sent when a threshold of executions per day or hour is not met. I think this is a great idea. The following screenshot shows the configuration of this option. Conclusion: Helium is a useful monitoring tool for BTS operations that might save a lot of time spent collecting data, writing a tool yourself, or documenting for the operations staff where to find the data. Pros: Simple installation Most important data for BTS operations in one place Monitor for alerts if throughput is not met Nice web UI Reasonable price Cons: Additional performance counters cannot be added I am not sure when the final version will ship, but you should see that on Frends' homepage soon, I guess... A trial version is available here

    Read the article

  • Disable automount in Nautilus

    - by jimmybondy
    I have added my mount devices in /etc/fstab and they get mounted correctly (a 2nd partition with ntfs and a NAS share with nfs). I have also disabled automount in Nautilus preferences by using dconf-editor (following this post): user@server:~$ gsettings list-recursively org.gnome.desktop.media-handling org.gnome.desktop.media-handling automount false org.gnome.desktop.media-handling automount-open false org.gnome.desktop.media-handling autorun-never false Now I see every mount device twice in Nautilus (the entry in fstab is already mounted). When clicking on the other entry I get the expected message mount.nfs: /media/nas_share is busy or already mounted How can I disable the appearance of those devices in Nautilus? EDIT Regarding automount: I have only one line in /etc/auto.master: +auto.master

    Read the article

  • Can I share a TV card?

    - by Boris
    My environment: 2 PCs, a desktop and a laptop, both on Oneiric; they are connected together by an ethernet cable; nfs-common is installed and configured (the desktop is the server); a TV tuner card is installed in the desktop, and I can watch TV with the software Me-TV. It works fine, TV on the desktop, and my network too: I share folders thanks to NFS. But I would like more: how can I share my TV tuner card from the desktop and be able to watch TV on the laptop too? If possible I would like a solution that allows me to keep using the software Me-TV on both PCs. I bet that there is a solution to create a fake TV card on the 2nd PC with xNBD. I'm trying xnbd-server --target /dev/dvb/adapter0/demux0 but I can't make it work. Trying to understand some examples of xNBD command lines, it seems to be meant only for sharing disk devices. If anyone has ever used xNBD, they're welcome to chime in.

    Read the article

  • Can't ping Ubuntu laptop from my LAN

    - by oskar
    My laptop has Ubuntu 10.10 and is connected to my router with full internet access, yet I can't ping it from other computers on my LAN. I tried the following: I can successfully ping those other computers from my Ubuntu laptop, so I didn't accidentally connect to someone else's network. I can successfully ping my Ubuntu laptop from itself, though I don't know if that means anything. I haven't messed with iptables at all, so it currently doesn't have any rules set that would cause it to reject anything. I made a DHCP reservation for my laptop's MAC address in my router to make sure I was always using the correct IP address. Please note that I am using a "command line only" install of Ubuntu, so I can't use any GUI network config tools. The reason I want to ping it is because I am trying to run an NFS server on the laptop, yet despite correctly setting it up I cannot access the NFS volume on another computer because it isn't even visible on the network right now.

    Read the article

  • How can I share the TV card on my home network? with a solution like `ln -s`?

    - by Boris
    My environment: - 2 PCs, a desktop and a laptop, both on Oneiric - they are connected together by ethernet wire - nfs-common is installed and configured: the desktop is the server - a TV tuner card is installed on the desktop, I can watch the French TNT TV with the software Me-TV It works fine, TV on desktop, and my network too: I share folders thanks to NFS. But I would like more: how can I share my TV tuner card from the desktop and be able to watch TV on the laptop too? If possible I would like a solution that allows me to keep using the software Me-TV, on both PCs. I bet that there is a solution to create a fake TV card on the 2nd PC. Something like ln -s on /dev/dvb/adapter0/.

    Read the article

  • Why does 'nobody' always start a new `find` program that consumes my memory?

    - by UniMouS
    $ ps -elf | grep ... 0 D nobody 27320 27319 2 90 10 - 353471 sleep_ 07:54 ? 00:02:19 /usr/bin/find / -ignore_readdir_race ( -fstype NFS -o -fstype nfs -o -fstype nfs4 -o -fstype afs -o -fstype binfmt_misc -o -fstype proc -o -fstype smbfs -o -fstype autofs -o -fstype iso9660 -o -fstype ncpfs -o -fstype coda -o -fstype devpts -o -fstype ftpfs -o -fstype devfs -o -fstype mfs -o -fstype shfs -o -fstype sysfs -o -fstype cifs -o -fstype lustre_lite -o -fstype tmpfs -o -fstype usbfs -o -fstype udf -o -fstype ocfs2 -o -type d -regex \(^/tmp$\)\|\(^/usr/tmp$\)\|\(^/var/tmp$\)\|\(^/afs$\)\|\(^/amd$\)\|\(^/alex$\)\|\(^/var/spool$\)\|\(^/sfs$\)\|\(^/media$\)\|\(^/var/lib/schroot/mount$\) ) -prune -o -print0 ... This job always starts automatically and consumes my memory. Even after I kill it, it starts again several hours later. What is that job? EDIT Note: the PID is different from the one above because I killed that one, waited several hours, and then the second one appeared. $ pstree -psl |-anacron(25920)---sh(25929)---run-parts(25930)---locate(26343)---updatedb.findut(26348)-+-frcode(26358) | |-sort(26357) | `-updatedb.findut(26356)---su(26387)---sh(26402)---find(26403) This is what it looks like in a graphical tool:
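
    That command line is updatedb, the nightly rebuild of the locate database; the pstree output confirms the chain (anacron runs it via cron.daily). A hedged way to tame it, with paths that vary by distro and locate flavor:

        # Disable the nightly run outright (the script may be named
        # "locate", "mlocate" or "find" depending on the package)...
        sudo chmod -x /etc/cron.daily/mlocate
        # ...or keep it but prune the heavy trees it walks
        sudoedit /etc/updatedb.conf    # add directories to PRUNEPATHS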

    Read the article

  • Tell the kernel to strongly cache a particular directory

    - by silviot
    This question is a rephrasing of Optimizing EXT4 performance. I have a directory that contains build files, mostly very small, but totaling 5.6G. I usually access the same subset of files (some thousands, for some tens of megabytes) over and over again. The subset changes daily (different projects, different versions of libraries). What takes longer when I use it seems to be disk seeks. For example, if I do a du twice, the second time takes as much time as the first, and disk activity is similar. Ideally I'd like to tell the kernel to allocate X MB to the metadata and Y to the data in the folder, like the options for the NFS cache. Is this possible in some way, other than mounting NFS from localhost and caching it to a ramdisk?
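
    The kernel has no per-directory cache quota, but a hedged workaround is vmtouch, a small third-party tool that faults a tree into the page cache and can pin it there with mlock:

        sudo apt-get install vmtouch    # packaged on Debian/Ubuntu
        vmtouch -t ~/build-dir          # touch: read everything into the cache
        vmtouch -dl ~/build-dir         # daemonize and lock it in memory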

    Read the article

  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up so that if one of my storage units fails, the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer. When I do a test (manual) failover of my storage, the following occurs: Note: I run the admin VMs on NFS and customer-based VMs on iSCSI All NFS-based VMs remain up and working perfectly through the failover and after All VMs running on iSCSI eventually report the following: An error about not being able to write to a particular block An error about journaling not working Then the file system goes RO To get the VMs working again I have to do the following: Force shutdown of the "broken" VMs. Detach the iSCSI SR Re-attach the iSCSI SR Boot the VM on a different server (5 in my pool) If I don't boot on a different server I get this error: "Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!")" The only way I have found to fix that error is to reboot the entire server it was running on previously, which is obviously a huge pain. Currently multipathing is NOT enabled (but it can be, and the same thing still occurs). I have edited much of the /etc/iscsid.conf file to work with the timeout settings, but to no avail. In short, my storage fails over properly but XenServer does not keep the connection alive. As a thought, the error that shows up in #4 above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.
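
    For reference, a hedged sketch of the /etc/iscsid.conf timeouts that decide how long the initiator queues I/O before surfacing errors to the VMs during a failover (values are illustrative, not a recommendation):

        node.session.timeo.replacement_timeout = 120  # seconds to queue I/O before failing it upward
        node.conn[0].timeo.noop_out_interval = 10     # ping the target every 10s
        node.conn[0].timeo.noop_out_timeout = 30      # declare the connection dead after 30s of silence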

    Read the article

  • Ubuntu 12.04 Server - eth0 1Gbps NIC eth1 10Gbps NIC - all traffic using eth0?

    - by James
    Ubuntu Server 12.04.1 x64. Primary role is an NFS file server for Mac OS X clients. Hardware: Eth0: 00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 04) Eth1: 07:00.0 Ethernet controller: MYRICOM Inc. Myri-10G Dual-Protocol NIC Config: ifconfig eth0 Link encap:Ethernet HWaddr <MACADDRESS> inet addr:192.168.0.150 Bcast:192.168.0.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:460042020 errors:0 dropped:148 overruns:0 frame:0 TX packets:231906707 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:581431978417 (581.4 GB) TX bytes:259057368617 (259.0 GB) Interrupt:20 Memory:f7d00000-f7d20000 eth1 Link encap:Ethernet HWaddr <MACADDRESS> inet addr:192.168.0.100 Bcast:192.168.0.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:6832208 errors:0 dropped:2 overruns:0 frame:0 TX packets:376 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:513826442 (513.8 MB) TX bytes:33688 (33.6 KB) Interrupt:59 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:507 errors:0 dropped:0 overruns:0 frame:0 TX packets:507 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:45057 (45.0 KB) TX bytes:45057 (45.0 KB) nano /etc/network/interfaces #The loopback network interface auto lo iface lo inet loopback #The primary network interface auto eth0 iface eth0 inet static address 192.168.0.150 netmask 255.255.255.0 network 192.168.0.0 broadcast 192.168.0.255 gateway 192.168.0.1 dns-nameservers 192.168.0.1 8.8.8.8 #second network interface auto eth1 iface eth1 inet static address 192.168.0.100 netmask 255.255.255.0 network 192.168.0.0 broadcast 192.168.0.255 gateway 192.168.0.1 dns-nameservers 192.168.0.1 8.8.8.8 Currently I am using nfs://192.168.0.100/Volumes/Storage on the OS X clients to mount the NFS share. My problem: why would all the data (and I have checked using various monitoring tools: bmon, iftop, glances, etc.) be going over the slower connection? Also, after configuring /etc/network/interfaces with the above setup, I always get an error message at bootup, something about waiting for network configuration. Are these issues connected?
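
    A hedged observation: with both NICs in 192.168.0.0/24, the kernel picks one matching route (here eth0) for all replies, regardless of which address the clients mount. Source-based policy routing is one way to force eth1 (addresses taken from the config above):

        ip route add 192.168.0.0/24 dev eth1 src 192.168.0.100 table 100
        ip rule add from 192.168.0.100 table 100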

    Read the article

  • Improving abysmal 802.11n wireless network

    - by concept
    I am in desperate need of help to improve the abysmal performance of my 802.11n wireless network. At best I get 30Mbps (this is an internet download) from a technology that boasts 300Mbps; even worse is the LAN, where to date the best I have ever gotten is 1Mbps. It is literally quicker to copy the file to a USB stick and walk it to the other computer. The infrastructure is this: AP is 802.11n only, broadcasting at both 2.4GHz and 5GHz A Mac with an 802.11a/b/g/n card is connected to the AP via 5GHz A Linux box with an 802.11a/b/g/n card is connected to the AP via 2.4GHz I have conducted the following tests (results at end of post): Internet-based speed test, wired and wireless LAN file copy, wired and wireless I have read: http://nutsaboutnets.com/troubleshooting-wi-fi-problems/ http://www.smallnetbuilder.com/wireless/wireless-basics/30664-5-ways-to-fix-slow-80211n-speed http://www.wi-fiplanet.com/tutorials/7-tips-to-increase-wi-fi-performance.html Slow file transfer on network between two 802.11n laptops (connected directly together via access point) Wireless Network Performance Issues Slower than expected 802.11n wireless network speeds I have made the following optimizations: AP broadcasts only 802.11n on both 2.4GHz and 5GHz frequencies 2.4GHz is on the channel with the least interference (I live in an apartment with lots of APs); this made a 10Mbps improvement Our AP is the only one transmitting on the 5GHz frequency Security: WPA Personal, WPA2 AES encryption Bandwidth: 20MHz / 40MHz (I assume this to be channel bonding) I have tried the following with zero improvement: Dropped the Fragment Threshold to 512 Dropped the Request To Send (RTS) Threshold to 512 and 1 I even thought of buying a frequency spectrum analyzer, until I saw the cost of them! Speed test results: Linux Wired: DOWNLOAD 128.40Mbps UPLOAD 10.62Mbps http://www.speedtest.net/my-result/2948381853 Mac Wired: DOWNLOAD 118.02Mbps UPLOAD 10.56Mbps http://www.speedtest.net/my-result/2948384406 Linux Wireless: DOWNLOAD 23.99Mbps UPLOAD 10.31Mbps http://www.speedtest.net/my-result/2948394990 Mac Wireless: DOWNLOAD 22.55Mbps UPLOAD 10.36Mbps http://www.speedtest.net/my-result/2948396489 LAN NFS, 53,345,087-byte (51MB) file: Linux-Mac NFS Wired: 65.6959 Mbps Linux-Mac NFS Wireless: .9443 Mbps All help is appreciated; even testing methods will be accepted.
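
    One hedged check before anything else: confirm the links actually negotiated 802.11n rates, since a ~1Mbps LAN copy looks like a legacy-rate fallback (the interface name is an assumption):

        iw dev wlan0 link    # look at "tx bitrate"; 802.11n rates run 65-300 MBit/s
        iwconfig wlan0       # older tool; check the reported "Bit Rate"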

    Read the article

  • Exchange 2010 DAG + VMWare HA = no support?

    - by Dan
    We currently have an Exchange 2003 clustered environment (two machine cluster) that we're looking to upgrade to 2010. We recently purchased a VMWare virtualization environment (three Dell R710's with an EMC NS-120 serving up NFS datastores - iSCSI is available) that we wish to use for this new environment. I'm seeing that Microsoft does not support Exchange 2010 DAGs with a virtualization high availability solution (see links below). I would like to utilize the DAG to ensure the data stays available if one host goes down, and HA to ensure that if the physical host goes down, the VM will come back up on the other available host. Does anybody know why MS does not support this? VMWare HA will only restart the VM if it is hung/down - I don't see any difference between this and restarting the physical box if someone pulled the power... Will we only run into issues with support if it has something to do with HA/DAG failover or will they see we have HA and tell us to put it on a physical box even if it has nothing to do with HA? If we disable HA for these VM's will that satisfy them on a support case? Has anybody set up an Exchange 2010 DAG on VMware with HA enabled? Will they have any issues with using an NFS datastore? We have much greater flexibility on the EMC with NFS vs iSCSI, so I would prefer to continue utilizing that. Thanks for any input! http://www.vmwareinfo.com/2010/01/verifying-microsoft-exchange-2010.html Take a look at the second image under "Not Supported" http://technet.microsoft.com/en-us/library/aa996719.aspx "Microsoft doesn't support combining Exchange high availability solutions (database availability groups (DAGs)) with hypervisor-based clustering, high availability, or migration solutions. DAGs are supported in hardware virtualization environments provided that the virtualization environment doesn't employ clustered root servers."

    Read the article

  • How to move Mdadm RAID drive (EBS based) to different AWS Instance

    - by Stanley
    We have a media-rich web application that is hosted on AWS. We have several web servers and we have an NFS server. On the NFS server (a Linux server) we have several EBS volumes that are mounted, and we've used mdadm to combine the mounted volumes into a single RAID volume. The web servers simply access the NFS storage through a mount point. Amazon has now let us know that they will be performing power maintenance on this server in a couple of days' time. Since all our media is on here, it would render our site unusable for the hours while Amazon is working on it. We want to try and prevent this downtime. I was thinking that we can prevent server downtime by perhaps setting up a new server temporarily, attaching the EBS drives (RAID volume) to that server, and having our web servers point there during maintenance. This is a very high-risk operation since it involves several terabytes of our production data. What would be the safe way to move over our logical RAID drive (md0) to a new Amazon instance? I was hoping that I could start with building the new server, mounting the EBS volumes and assembling the RAID partition using mdadm --assemble --scan before unmounting from the existing instance, so that I could first test that everything works while having it mounted on two instances at the same time, but I don't believe that is possible with the way that filesystems work. How do I move a Linux software RAID to a new machine? suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent system downtime with our solution being hosted on the cloud? I have considered taking an EBS snapshot, but that tries to replicate all the many terabytes of mounted storage, so this is not a practical solution. Any ideas?
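
    A hedged outline of the detach/reattach path (device names and the mount point are placeholders; rehearse on throwaway volumes first):

        # On the old instance: quiesce and stop the array
        umount /export
        mdadm --stop /dev/md0
        # Detach the EBS volumes via the AWS console/API and attach them
        # to the new instance, then on the new instance:
        mdadm --assemble --scan    # the md superblocks carry the array layout
        mount /dev/md0 /export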

    Read the article

  • 10 GigE interfaces limit single connection throughput to 1 Gb on a ProCurve 4208vl

    - by wazoox
    The setup is as follows: 3 Linux servers with Intel CX4 10 GigE controllers and an X-Serve with a Myricom 10 GigE CX4 controller are connected to a ProCurve 4208vl switch, with a myriad of other machines connected through good ol' 1000BASE-T. The interfaces are actually set up as 10 Gig, according to both the switch monitoring interface and the servers (ethtool, etc). However, a single connection between two 10 GigE equipped machines through the switch is limited to exactly 1Gb. If I connect two of the 10 GigE machines directly with a CX4 cable, netperf reports the link bandwidth as 9000Mb/s, and NFS achieves about 550MB/s transfers. But when I'm using the switch, the connection tops out at 950Mb/s through netperf and 110MB/s with NFS. When I open several connections from 3 of the machines to the 4th, I get 350MB/s of NFS transfer speed. So each individual 10 GigE port actually can reach much more than 1Gb, but individual connections are strictly limited to 1Gb. Conclusion: the 10 GigE connection through the switch behaves exactly like a trunk of ten 1Gb connections. That doesn't make any sense to me, unless HP planned these ports only for cascading switches or strictly for many-clients-to-single-server connections. Unfortunately this is NOT the envisioned setup; we need big throughput from machine to machine. Is this a not-so-known (or carefully hidden...) limitation of this type of switch? Should I suggest seppuku to the HP representative? Does anyone have any idea how to enable proper behaviour? I upgraded at a hefty price from bonded 1Gb links to 10 GigE and see exactly ZERO gain! That's absolutely unacceptable.
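
    For anyone reproducing the numbers, a hedged sketch of the netperf runs described above (the target IP is a placeholder):

        netperf -H 192.168.0.10 -t TCP_STREAM -l 30    # single stream
        # several parallel streams, mimicking the 3-clients case
        for i in 1 2 3; do netperf -H 192.168.0.10 -t TCP_STREAM -l 30 & done; wait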

    Read the article
