Search Results

Search found 676 results on 28 pages for 'nfs'.

  • 12.10 update breaks NFS mount

    - by TarekEldeeb
    I've just upgraded to the latest 12.10 beta. Rebooted twice. The problem is with the NFS folders not mounting, here's a verbose log. # mount -v myserver:/nfs_shared/tools /tools/ mount: no type was given - I'll assume nfs because of the colon mount.nfs: timeout set for Mon Oct 1 11:42:28 2012 mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 
'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: Connection timed out My IP ends in .109 and the server's in .22. Just before the last upgrade I was able to mount normally, and other PCs on my network can still mount the share, so it is not a server issue. How can I analyse the problem? Are there particular log files to check? Any help?
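
    The log shows the NFSv4 attempt failing and every NFSv3 portmap query timing out, which points at rpcbind/portmapper on the server being unreachable from this client. A minimal diagnostic sketch, assuming the addresses and share path taken from the log above (server 192.168.0.22, export /nfs_shared/tools):

      # can the client reach the server's portmapper and see the export list?
      rpcinfo -p 192.168.0.22
      showmount -e 192.168.0.22
      # if both answer, try forcing a specific protocol version explicitly
      mount -t nfs -o vers=3 192.168.0.22:/nfs_shared/tools /tools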

    Read the article

  • NFS issue: clients can mount shares as NFSv3 but not as NFSv4 -- or how to debug NFS?

    - by tdn
    Problem description I have a file server running Debian. On it I have a few NFS shares. When I mount the shares from a client using NFSv3 (mount.nfs 10.0.0.51:/exports/video /mnt -o vers=3,soft,intr,timeo=10), it works. However, I would like to use NFSv4 because of improved security and performance. When I try to mount an NFSv4 share on malbec the mount command just hangs and finally times out after 2 minutes. How do I make the clients mount the NFSv4 shares as NFSv4? How do I troubleshoot NFS? There is no information in the syslog on neither client nor server. What are any errors in my configuration? Facts: Server is corvina(10.0.0.51) Client is malbec(10.0.0.1) Malbec runs Ubuntu 12.04 Server runs Debian 7 wheezy Both are connected through 1 GbE LAN. Firewalls are off. rpcinfo (root@malbec) (13-07-02 21:00) (P:0 L:1) [0] ~ # rpcinfo -p program vers proto port service 100000 4 tcp 111 portmapper 100000 3 tcp 111 portmapper 100000 2 tcp 111 portmapper 100000 4 udp 111 portmapper 100000 3 udp 111 portmapper 100000 2 udp 111 portmapper 100024 1 udp 4000 status 100024 1 tcp 4000 status (root@malbec) (13-07-02 21:00) (P:0 L:1) [0] ~ # rpcinfo -p corvina program vers proto port service 100000 4 tcp 111 portmapper 100000 3 tcp 111 portmapper 100000 2 tcp 111 portmapper 100000 4 udp 111 portmapper 100000 3 udp 111 portmapper 100000 2 udp 111 portmapper 100024 1 udp 4000 status 100024 1 tcp 4000 status 100003 3 udp 2049 nfs 100227 3 udp 2049 100021 1 udp 4003 nlockmgr 100021 3 udp 4003 nlockmgr 100021 4 udp 4003 nlockmgr 100021 1 tcp 4003 nlockmgr 100021 3 tcp 4003 nlockmgr 100021 4 tcp 4003 nlockmgr 100005 1 udp 4002 mountd 100005 1 tcp 4002 mountd 100005 2 udp 4002 mountd 100005 2 tcp 4002 mountd 100005 3 udp 4002 mountd 100005 3 tcp 4002 mountd tcpdump The following is output from tcpdump on malbec while running this command: # rpcinfo -p corvina ~ # tcpdump -i eth0 host 10.0.0.51 21:14:51.762083 IP malbec.vineyard.sikkerhed.org.948 > corvina.vineyard.sikkerhed.org.sunrpc: Flags [S], seq 3069120722, win 14600, options [mss 1460,sackOK,TS val 146111 ecr 0,nop,wscale 7], length 0 21:14:51.762431 IP corvina.vineyard.sikkerhed.org.sunrpc > malbec.vineyard.sikkerhed.org.948: Flags [S.], seq 770684199, ack 3069120723, win 14480, options [mss 1460,sackOK,TS val 398850 ecr 146111,nop,wscale 7], length 0 21:14:51.762458 IP malbec.vineyard.sikkerhed.org.948 > corvina.vineyard.sikkerhed.org.sunrpc: Flags [.], ack 1, win 115, options [nop,nop,TS val 146111 ecr 398850], length 0 21:14:51.762556 IP malbec.vineyard.sikkerhed.org.948 > corvina.vineyard.sikkerhed.org.sunrpc: Flags [P.], seq 1:45, ack 1, win 115, options [nop,nop,TS val 146111 ecr 398850], length 44 21:14:51.762710 IP corvina.vineyard.sikkerhed.org.sunrpc > malbec.vineyard.sikkerhed.org.948: Flags [.], ack 45, win 114, options [nop,nop,TS val 398850 ecr 146111], length 0 21:14:51.763282 IP corvina.vineyard.sikkerhed.org.sunrpc > malbec.vineyard.sikkerhed.org.948: Flags [P.], seq 1:473, ack 45, win 114, options [nop,nop,TS val 398850 ecr 146111], length 472 21:14:51.763302 IP malbec.vineyard.sikkerhed.org.948 > corvina.vineyard.sikkerhed.org.sunrpc: Flags [.], ack 473, win 123, options [nop,nop,TS val 146111 ecr 398850], length 0 21:14:51.764059 IP malbec.vineyard.sikkerhed.org.948 > corvina.vineyard.sikkerhed.org.sunrpc: Flags [F.], seq 45, ack 473, win 123, options [nop,nop,TS val 146111 ecr 398850], length 0 21:14:51.764454 IP corvina.vineyard.sikkerhed.org.sunrpc > malbec.vineyard.sikkerhed.org.948: Flags [F.], seq 473, ack 46, win 114, 
options [nop,nop,TS val 398850 ecr 146111], length 0 21:14:51.764478 IP malbec.vineyard.sikkerhed.org.948 > corvina.vineyard.sikkerhed.org.sunrpc: Flags [.], ack 474, win 123, options [nop,nop,TS val 146111 ecr 398850], length 0 The following is output from tcpdump on malbec while runing this command: ~ # time mount.nfs4 10.0.0.51:/ /mnt -o soft,intr,timeo=10 21:14:58.397327 IP malbec.vineyard.sikkerhed.org.872 > corvina.vineyard.sikkerhed.org.nfs: Flags [S], seq 1298959870, win 14600, options [mss 1460,sackOK,TS val 147769 ecr 0,nop,wscale 7], length 0 21:14:58.397655 IP corvina.vineyard.sikkerhed.org.nfs > malbec.vineyard.sikkerhed.org.872: Flags [R.], seq 0, ack 1298959871, win 0, length 0 21:14:59.470270 IP malbec.vineyard.sikkerhed.org.854 > corvina.vineyard.sikkerhed.org.nfs: Flags [S], seq 4111013041, win 14600, options [mss 1460,sackOK,TS val 148038 ecr 0,nop,wscale 7], length 0 21:14:59.470569 IP corvina.vineyard.sikkerhed.org.nfs > malbec.vineyard.sikkerhed.org.854: Flags [R.], seq 0, ack 4111013042, win 0, length 0 21:15:01.506179 IP malbec.vineyard.sikkerhed.org.988 > corvina.vineyard.sikkerhed.org.nfs: Flags [S], seq 1642454567, win 14600, options [mss 1460,sackOK,TS val 148547 ecr 0,nop,wscale 7], length 0 21:15:01.506514 IP corvina.vineyard.sikkerhed.org.nfs > malbec.vineyard.sikkerhed.org.988: Flags [R.], seq 0, ack 1642454568, win 0, length 0 21:15:05.542216 IP malbec.vineyard.sikkerhed.org.882 > corvina.vineyard.sikkerhed.org.nfs: Flags [S], seq 3844460520, win 14600, options [mss 1460,sackOK,TS val 149556 ecr 0,nop,wscale 7], length 0 21:15:05.542484 IP corvina.vineyard.sikkerhed.org.nfs > malbec.vineyard.sikkerhed.org.882: Flags [R.], seq 0, ack 3844460521, win 0, length 0 21:15:13.602228 IP malbec.vineyard.sikkerhed.org.969 > corvina.vineyard.sikkerhed.org.nfs: Flags [S], seq 1317773588, win 14600, options [mss 1460,sackOK,TS val 151571 ecr 0,nop,wscale 7], length 0 21:15:13.602527 IP corvina.vineyard.sikkerhed.org.nfs > malbec.vineyard.sikkerhed.org.969: Flags [R.], seq 0, ack 1317773589, win 0, length 0 21:15:18.615027 ARP, Request who-has malbec.vineyard.sikkerhed.org tell corvina.vineyard.sikkerhed.org, length 46 21:15:18.615048 ARP, Reply malbec.vineyard.sikkerhed.org is-at cc:52:af:46:af:23 (oui Unknown), length 28 21:15:23.622223 IP malbec.vineyard.sikkerhed.org.1003 > corvina.vineyard.sikkerhed.org.nfs: Flags [S], seq 2896563167, win 14600, options [mss 1460,sackOK,TS val 154076 ecr 0,nop,wscale 7], length 0 21:15:23.622557 IP corvina.vineyard.sikkerhed.org.nfs > malbec.vineyard.sikkerhed.org.1003: Flags [R.], seq 0, ack 2896563168, win 0, length 0 21:15:28.629913 ARP, Request who-has corvina.vineyard.sikkerhed.org tell malbec.vineyard.sikkerhed.org, length 28 21:15:28.630223 ARP, Reply corvina.vineyard.sikkerhed.org is-at 00:9c:02:ab:db:54 (oui Unknown), length 46 21:15:33.662200 IP malbec.vineyard.sikkerhed.org.727 > corvina.vineyard.sikkerhed.org.nfs: Flags [S], seq 1334644196, win 14600, options [mss 1460,sackOK,TS val 156586 ecr 0,nop,wscale 7], length 0 21:15:33.663657 IP corvina.vineyard.sikkerhed.org.nfs > malbec.vineyard.sikkerhed.org.727: Flags [R.], seq 0, ack 1334644197, win 0, length 0 21:15:43.698207 IP malbec.vineyard.sikkerhed.org.rsync > corvina.vineyard.sikkerhed.org.nfs: Flags [S], seq 688828331, win 14600, options [mss 1460,sackOK,TS val 159095 ecr 0,nop,wscale 7], length 0 21:15:43.698541 IP corvina.vineyard.sikkerhed.org.nfs > malbec.vineyard.sikkerhed.org.rsync: Flags [R.], seq 0, ack 688828332, win 0, length 0 21:15:48.707710 ARP, 
Request who-has malbec.vineyard.sikkerhed.org tell corvina.vineyard.sikkerhed.org, length 46 21:15:48.707726 ARP, Reply malbec.vineyard.sikkerhed.org is-at cc:52:af:46:af:23 (oui Unknown), length 28 21:15:53.738188 IP malbec.vineyard.sikkerhed.org.946 > corvina.vineyard.sikkerhed.org.nfs: Flags [S], seq 2021272456, win 14600, options [mss 1460,sackOK,TS val 161605 ecr 0,nop,wscale 7], length 0 21:15:53.738519 IP corvina.vineyard.sikkerhed.org.nfs > malbec.vineyard.sikkerhed.org.946: Flags [R.], seq 0, ack 2021272457, win 0, length 0 21:16:03.806216 IP malbec.vineyard.sikkerhed.org.902 > corvina.vineyard.sikkerhed.org.nfs: Flags [S], seq 3889059201, win 14600, options [mss 1460,sackOK,TS val 164122 ecr 0,nop,wscale 7], length 0 21:16:03.806546 IP corvina.vineyard.sikkerhed.org.nfs > malbec.vineyard.sikkerhed.org.902: Flags [R.], seq 0, ack 3889059202, win 0, length 0 21:16:08.821900 ARP, Request who-has corvina.vineyard.sikkerhed.org tell malbec.vineyard.sikkerhed.org, length 28 21:16:08.822172 ARP, Reply corvina.vineyard.sikkerhed.org is-at 00:9c:02:ab:db:54 (oui Unknown), length 46 21:16:13.874209 IP malbec.vineyard.sikkerhed.org.712 > corvina.vineyard.sikkerhed.org.nfs: Flags [S], seq 1480927452, win 14600, options [mss 1460,sackOK,TS val 166639 ecr 0,nop,wscale 7], length 0 21:16:13.874553 IP corvina.vineyard.sikkerhed.org.nfs > malbec.vineyard.sikkerhed.org.712: Flags [R.], seq 0, ack 1961062188, win 0, length 0 21:16:18.880588 ARP, Request who-has malbec.vineyard.sikkerhed.org tell corvina.vineyard.sikkerhed.org, length 46 21:16:18.880605 ARP, Reply malbec.vineyard.sikkerhed.org is-at cc:52:af:46:af:23 (oui Unknown), length 28 21:16:23.910209 IP malbec.vineyard.sikkerhed.org.758 > corvina.vineyard.sikkerhed.org.nfs: Flags [S], seq 1375860626, win 14600, options [mss 1460,sackOK,TS val 169148 ecr 0,nop,wscale 7], length 0 21:16:23.910532 IP corvina.vineyard.sikkerhed.org.nfs > malbec.vineyard.sikkerhed.org.758: Flags [R.], seq 0, ack 1375860627, win 0, length 0 21:16:33.982258 IP malbec.vineyard.sikkerhed.org.694 > corvina.vineyard.sikkerhed.org.nfs: Flags [S], seq 1769203987, win 14600, options [mss 1460,sackOK,TS val 171666 ecr 0,nop,wscale 7], length 0 21:16:33.982579 IP corvina.vineyard.sikkerhed.org.nfs > malbec.vineyard.sikkerhed.org.694: Flags [R.], seq 0, ack 1769203988, win 0, length 0 21:16:44.026241 IP malbec.vineyard.sikkerhed.org.841 > corvina.vineyard.sikkerhed.org.nfs: Flags [S], seq 530553783, win 14600, options [mss 1460,sackOK,TS val 174177 ecr 0,nop,wscale 7], length 0 21:16:44.026505 IP corvina.vineyard.sikkerhed.org.nfs > malbec.vineyard.sikkerhed.org.841: Flags [R.], seq 0, ack 530553784, win 0, length 0 21:16:46.213388 IP malbec.vineyard.sikkerhed.org.43460 > corvina.vineyard.sikkerhed.org.ssh: Flags [P.], seq 64:128, ack 33, win 325, options [nop,nop,TS val 174723 ecr 397437], length 64 21:16:46.213859 IP corvina.vineyard.sikkerhed.org.ssh > malbec.vineyard.sikkerhed.org.43460: Flags [P.], seq 33:65, ack 128, win 199, options [nop,nop,TS val 427466 ecr 174723], length 32 21:16:46.213883 IP malbec.vineyard.sikkerhed.org.43460 > corvina.vineyard.sikkerhed.org.ssh: Flags [.], ack 65, win 325, options [nop,nop,TS val 174723 ecr 427466], length 0 21:16:54.094242 IP malbec.vineyard.sikkerhed.org.kerberos-master > corvina.vineyard.sikkerhed.org.nfs: Flags [S], seq 2673083337, win 14600, options [mss 1460,sackOK,TS val 176694 ecr 0,nop,wscale 7], length 0 21:16:54.094568 IP corvina.vineyard.sikkerhed.org.nfs > malbec.vineyard.sikkerhed.org.kerberos-master: Flags [R.], 
seq 0, ack 2673083338, win 0, length 0 21:17:04.134227 IP malbec.vineyard.sikkerhed.org.1019 > corvina.vineyard.sikkerhed.org.nfs: Flags [S], seq 2176607713, win 14600, options [mss 1460,sackOK,TS val 179204 ecr 0,nop,wscale 7], length 0 21:17:04.134566 IP corvina.vineyard.sikkerhed.org.nfs > malbec.vineyard.sikkerhed.org.1019: Flags [R.], seq 0, ack 2176607714, win 0, length 0 21:18:46.314021 IP malbec.vineyard.sikkerhed.org.43460 > corvina.vineyard.sikkerhed.org.ssh: Flags [P.], seq 128:192, ack 65, win 325, options [nop,nop,TS val 204749 ecr 427466], length 64 21:18:46.314462 IP corvina.vineyard.sikkerhed.org.ssh > malbec.vineyard.sikkerhed.org.43460: Flags [P.], seq 65:97, ack 192, win 199, options [nop,nop,TS val 457494 ecr 204749], length 32 21:18:46.314482 IP malbec.vineyard.sikkerhed.org.43460 > corvina.vineyard.sikkerhed.org.ssh: Flags [.], ack 97, win 325, options [nop,nop,TS val 204749 ecr 457494], length 0 21:18:51.317908 ARP, Request who-has corvina.vineyard.sikkerhed.org tell malbec.vineyard.sikkerhed.org, length 28 21:18:51.318177 ARP, Reply corvina.vineyard.sikkerhed.org is-at 00:9c:02:ab:db:54 (oui Unknown), length 46 mount command outputs mount.nfs4: Connection timed out mount.nfs4 10.0.0.51:/ /mnt -o soft,intr,timeo=10 0,00s user 0,00s system 0% cpu 2:05,80 total Returncode is 32 Server configuration I have enabled idmapd by adding NEED_IDMAPD=yes in /etc/default/nfs-common. Bind mounts in /etc/fstab: # nfs-audio /data/audio /exports/audio none bind 0 0 # nfs-clear /data/clear /exports/clear none bind 0 0 # nfs-video /data/video /exports/video none bind 0 0 /etc/exports: /exports 10.0.0.0/255.255.255.0(rw,no_root_squash,no_subtree_check,fsid=0,crossmnt) /exports/video 10.0.0.0/255.255.255.0(rw,no_root_squash,no_subtree_check,crossmnt) Output from # ls -al /exports total 20 drwxr-xr-x 5 root root 4096 Jul 2 14:14 ./ drwxr-xr-x 28 root root 4096 Jul 2 13:46 ../ drwxr-xr-x 7 tdn audio 4096 Jun 7 11:30 audio/ drwxr-xr-x 11 root root 4096 Jun 29 12:07 clear/ drwxrwx--- 12 tdn video 4096 Jun 7 09:46 video/
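
    In the rpcinfo output above, corvina registers nfs (program 100003) on UDP 2049 only, and the tcpdump shows every TCP SYN to port 2049 answered with a RST, which would explain why the NFSv4 mount (TCP-only) never gets through while NFSv3 over UDP works. A small sketch of how one might confirm this, using only the hosts named in the question:

      # from malbec: does the server answer NFS v3/v4 over TCP at all?
      rpcinfo -t corvina nfs 3
      rpcinfo -t corvina nfs 4
      # on corvina: which NFS versions is the kernel server offering?
      cat /proc/fs/nfsd/versions
      # restart the server so nfsd re-registers TCP and v4
      service nfs-kernel-server restart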

    Read the article

  • problem in exporting a partition on a usb stick as nfs volume

    - by Bond
    I have a USB disk with two partitions. I exported one of them over NFS and now I am trying to mount it on the client machine. Each time I get this error: mount -t nfs 192.168.1.19:/media/vol2 /mnt/nfs/ mount.nfs: access denied by server while mounting 192.168.1.19:/media/vol2 Here is the /etc/exports file entry; showmount -e on the NFS server gives: Export list for bond: /media/vol2 */24 On the client machine the nfs-client package is installed. What more do I need to check? Is the denial logged somewhere?
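
    The "*/24" in the showmount output suggests the client specification in /etc/exports is not what the server expects; a subnet has to be written as address/mask next to the exported path. A hedged sketch of what the entry could look like (192.168.1.0/24 is an assumption based on the server address in the question), plus how to re-export and see the refusals, which mountd normally logs via syslog:

      # /etc/exports (assumed subnet; adjust to the real client network)
      /media/vol2 192.168.1.0/24(rw,sync,no_subtree_check)
      # re-export, verify, then watch syslog for mountd "refused mount" messages
      exportfs -ra
      exportfs -v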

    Read the article

  • Mounting NFS share between OSX and Centos VM

    - by Adam
    I'm having issues mounting an NFS share I've made on my Mac host (server) from a Centos VM (client). I'm getting a permission denied error. I have this line in /etc/exports on server: /Users/adam/Sites/ 192.168.1.223(rw) and in /etc/fstab on client: 192.168.1.186:/Users/adam/Sites/ /home/adam/Sites/ nfs rw 0 0 I'm sure this is a simple configuration issue, but I've never set up NFS properly before. Extra info: # mount -v 192.168.1.186:/Users/adam/Sites/ /home/adam/Sites/ mount: no type was given - I'll assume nfs because of the colon mount.nfs: timeout set for Mon Nov 26 07:31:40 2012 mount.nfs: trying text-based options 'vers=4,addr=192.168.1.186,clientaddr=192.168.1.223' mount.nfs: mount(2): Protocol not supported mount.nfs: trying text-based options 'addr=192.168.1.186' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: trying 192.168.1.186 prog 100003 vers 3 prot TCP port 2049 mount.nfs: prog 100005, trying vers=3, prot=17 mount.nfs: trying 192.168.1.186 prog 100005 vers 3 prot UDP port 958 mount.nfs: mount(2): Permission denied mount.nfs: access denied by server while mounting 192.168.1.186:/Users/adam/Sites/
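
    One thing worth noting is that the server here is OS X, whose /etc/exports uses the BSD exports(5) syntax (dash-prefixed options) rather than the Linux host(option) form, and whose nfsd serves NFSv3, which matches the "Protocol not supported" for v4 in the log. A sketch under those assumptions (network, mask and mapped user are illustrative):

      # /etc/exports on the Mac (BSD syntax; adjust network/user to taste)
      /Users/adam/Sites -network 192.168.1.0 -mask 255.255.255.0 -mapall=adam
      # validate and restart the Mac's NFS server
      sudo nfsd checkexports
      sudo nfsd restart
      # on the CentOS client, force v3 explicitly
      mount -t nfs -o vers=3 192.168.1.186:/Users/adam/Sites /home/adam/Sites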

    Read the article

  • CentOS 5.4 NFS v4 client file permissions differ from original files & NFS Share file contents

    - by p4guru
    Having a strange problem with NFS share and file permissions on the 1 out of the 2 NFS clients, web1 has file permissions issues but web2 is fine. web1 and web2 are load balanced web servers. So questions are: how do I ensure NFS share file contents retain the same permissions for user/group as the original files on web1 server like they do on web2 server ? how do I reverse what I did on web1, i tried unmount command and said command not found ? Information: I'm using 3 dedicated server setup. All 3 servers CentOS 5.4 64bit based. servers are as follows: web1 - nfs client with file permissions issues web2 - nfs client file permissions are OKAY db1 - nfs share at /nfsroot web2 nfs client was setup by my web host, while web1 was setup by me. I did the following commands on web1 and it worked with updating db1 nfsroot share at /nfsroot/site_css with latest files on web1 but the file permissions don't stick even if i use tar with -p command to perserve file permissions ? cd /home/username/public_html/forums/script/ tar -zcp site_css/ > site_css.tar.gz mount -t nfs4 nfsshareipaddress:/site_css /home/username/public_html/forums/scripts/site_css/ -o rw,soft cd /home/username/public_html/forums/script/ tar -zxf site_css.tar.gz But checking on web1 file permissions no longer username user/group but owned by nobody ? but web2 file permissions correct ? This is only a problem for web1 while web2 is correct ? Looks like numeric ids aren't the same ? Not sure how to correct this ? web1 with incorrect user/group of nobody ls -alh /home/username/public_html/forums/scripts/site_css total 48K drwxrwxrwx 2 nobody nobody 4.0K Feb 22 02:37 ./ drwxr-xr-x 3 username username 4.0K Feb 22 02:43 ../ -rw-r--r-- 1 nobody nobody 1 Nov 30 2006 index.html -rw-r--r-- 1 nobody nobody 5.8K Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 nobody nobody 5.8K Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 nobody nobody 5.8K Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 nobody nobody 5.8K Feb 18 05:37 style-cc2f96c9-00011.css web1 numeric ids ls -n /home/username/public_html/forums/scripts/site_css total 48 drwxrwxrwx 2 99 99 4096 Feb 22 02:37 ./ drwxr-xr-x 3 503 500 4096 Feb 22 02:43 ../ -rw-r--r-- 1 99 99 1 Nov 30 2006 index.html -rw-r--r-- 1 99 99 5876 Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 99 99 5877 Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 99 99 5877 Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 99 99 5876 Feb 18 05:37 style-cc2f96c9-00011.css web2 correct username user/group permissions ls -alh /home/username/public_html/forums/scripts/site_css total 48K drwxrwxrwx 2 root root 4.0K Feb 22 02:37 ./ drwxr-xr-x 3 username username 4.0K Dec 2 14:51 ../ -rw-r--r-- 1 username username 1 Nov 30 2006 index.html -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-cc2f96c9-00011.css web2 numeric ids ls -n /home/username/public_html/forums/scripts/site_css total 48 drwxrwxrwx 2 503 500 4096 Feb 22 02:37 ./ drwxr-xr-x 3 503 500 4096 Dec 2 14:51 ../ -rw-r--r-- 1 503 500 1 Nov 30 2006 index.html -rw-r--r-- 1 503 500 5876 Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 503 500 5877 Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 503 500 5877 Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 503 500 5876 Feb 18 05:37 style-cc2f96c9-00011.css I checked db1 
/nfsroot/site_css and user/group ownership was incorrect for newer files dated feb22 owned by root and not username ? on db1 originally incorrect root assigned user/group for new feb22 dated files ls -alh /nfsroot/site_css total 44K drwxrwxrwx 2 root root 4.0K Feb 22 02:37 . drwxr-xr-x 17 root root 4.0K Feb 17 12:06 .. -rw-r--r-- 1 root root 1 Nov 30 2006 index.html -rw-r--r-- 1 root root 5.8K Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 root root 5.8K Feb 22 02:37 style-95001864-00002.css -rw------- 1 username nfs 5.8K Feb 18 05:37 style-b1879ba7-00002.css -rw------- 1 username nfs 5.8K Feb 18 05:37 style-cc2f96c9-00011.css Then I chmod them all on db1 and chown to set to right ownership on db1 so it looks like below on db1 once corrected the newer feb22 dated files ls -alh /nfsroot/site_css total 44K drwxrwxrwx 2 root root 4.0K Feb 22 02:37 . drwxr-xr-x 17 root root 4.0K Feb 17 12:06 .. -rw-r--r-- 1 username username 1 Nov 30 2006 index.html -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-cc2f96c9-00011.css but still web1 shows owned by nobody ? while web2 shows correct permissions ? web1 still with incorrect user/group of nobody not matching what web2 and db1 are set to ? ls -alh /home/username/public_html/forums/scripts/site_css total 48K drwxrwxrwx 2 nobody nobody 4.0K Feb 22 02:37 ./ drwxr-xr-x 3 username username 4.0K Feb 22 02:43 ../ -rw-r--r-- 1 nobody nobody 1 Nov 30 2006 index.html -rw-r--r-- 1 nobody nobody 5.8K Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 nobody nobody 5.8K Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 nobody nobody 5.8K Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 nobody nobody 5.8K Feb 18 05:37 style-cc2f96c9-00011.css Just so confusing so any help is very very much appreciated! thanks
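
    Files showing up as nobody/nobody (UID 99) on an NFSv4 client while the numeric IDs on the server are right is the classic symptom of the idmapd domain not matching between client and server; and the command to undo the mount is umount, not unmount. A minimal sketch, with example.com standing in for the real domain and the paths reused from the question:

      # /etc/idmapd.conf on web1 AND on db1 -- the Domain lines must match
      [General]
      Domain = example.com
      # restart the id mapper and remount (CentOS 5)
      service rpcidmapd restart
      umount /home/username/public_html/forums/scripts/site_css
      mount -t nfs4 nfsshareipaddress:/site_css /home/username/public_html/forums/scripts/site_css -o rw,soft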

    Read the article

  • nfs mount with nfs 3

    - by rahrahruby
    I am running CentOS 6.4 Kernel version 2.6.32-358.23.2.el6.x86_64 #1 SMP and have the following nfs info: nfs-utils-lib-1.1.5-6.el6.x86_64 nfs4-acl-tools-0.3.3-6.el6.x86_64 nfs-utils-1.2.3-36.el6.x86_64 and am trying to mount an nfs volume with nfs3. I have the following line in my fstab: 172.16.11.87:/volume1/web /home/nas nfsver=3 rsize=8192,wsize=8192,timeo=14,intr(no_root_squach) When I run nfsstat it still shows the client as nfs4 Server rpc stats: calls badcalls badauth badclnt xdrcall 0 0 0 0 0 Client rpc stats: calls retrans authrefrsh 1988817 6 1988818 Client nfs v4: null read write commit open open_conf 0 0% 36943 1% 21606 1% 401 0% 392369 19% 375986 18% open_noat open_dgrd close setattr fsinfo renew 0 0% 0 0% 387945 19% 22904 1% 3 0% 2914 0% setclntid confirm lock lockt locku access 1 0% 1 0% 0 0% 0 0% 0 0% 97856 4% getattr lookup lookup_root remove rename link 613996 30% 29888 1% 1 0% 1248 0% 253 0% 414 0% symlink create pathconf statfs readlink readdir 26 0% 226 0% 2 0% 3 0% 0 0% 3825 0% server_caps delegreturn getacl setacl fs_locations rel_lkowner 5 0% 0 0% 0 0% 0 0% 0 0% 0 0% exchange_id create_ses destroy_ses sequence get_lease_t reclaim_comp 0 0% 0 0% 0 0% 0 0% 0 0% 0 0% layoutget layoutcommit layoutreturn getdevlist getdevinfo ds_write 0 0% 0 0% 0 0% 0 0% 0 0% 0 0% ds_commit 0 0%
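
    The fstab line above is missing the filesystem-type field and mixes a few things up: the type column must be "nfs", the version is selected with the mount option nfsvers=3 (or vers=3), the options have to be comma-separated, and no_root_squash is a server-side export option rather than a client mount option. One possible corrected line, keeping the options from the question:

      # /etc/fstab
      172.16.11.87:/volume1/web  /home/nas  nfs  nfsvers=3,rsize=8192,wsize=8192,timeo=14,intr  0 0
      # remount and confirm the negotiated version
      umount /home/nas && mount /home/nas
      nfsstat -m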

    Read the article

  • monitoring nfs with monit

    - by Josh Nankin
    I'd like to monitor NFS mounts and the NFS server process using Monit. On the server, I'd need a PID file, but I can't seem to find a way of getting that created with existing configuration files. Is there a way to do this, or has anyone monitored the server in a different way (checking if port 53 is active, etc). On clients, I was thinking of making Monit simply look for a specific file in an NFS mount, and if it's accessible, all is well. Problem is, if the NFS server does go down, file requests usually hang (perhaps even indefinitely, not sure). How would one get around this issue with monit? Any configuration examples would be greatly appreciated!
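
    NFS listens on TCP/UDP 2049 (with rpcbind on 111), so on the server side Monit can watch those ports directly instead of needing a PID file, and on the client side a "check program" with a timeout keeps Monit itself from hanging on a dead mount. A rough sketch; the host name, address and script path are made up for illustration, and the script would do something like "timeout 5 stat /mnt/nfs/somefile":

      # server side: alert if the NFS port stops answering
      check host nfs-server with address 192.168.1.10
          if failed port 2049 type tcp then alert
          if failed port 111 type tcp then alert

      # client side: a tiny wrapper that stats a file on the mount, killed if it hangs
      check program nfs_mount with path "/usr/local/bin/check_nfs_mount.sh" with timeout 10 seconds
          if status != 0 then alert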

    Read the article

  • mount.nfs: access denied by server while mounting (null), can't find any log information

    - by Mark0978
    Two ubuntu servers: 10.0.8.2 is the client, 192.168.20.58 is the server. Between the 2 machines, Ping works, ssh works (in both directions). From 10.0.8.2 showmount -e 192.168.20.58 Export list for 192.168.20.58: /imr/nfsshares/foobar 10.0.8.2 mount.nfs 192.168.20.58:/imr/nfsshares/foobar /var/data/foobar -v mount.nfs: access denied by server while mounting (null) Found several things online, tried them all and still can't find any log information anywhere. On the server: [email protected]:/var/log# cat /etc/hosts.allow sendmail: all ALL: 10.0.8.2 /etc/hosts.deny is all comments How can I get a trail of log statements to figure this out? What does it take to get some logging so I have some idea of WHY it won't mount? On the server: [email protected]# nmap -sR RPC 192.168.20.58 Starting Nmap 5.21 ( http://nmap.org ) at 2012-07-04 21:16 CDT Failed to resolve given hostname/IP: RPC. Note that you can't use '/mask' AND '1-4,7,100-' style IP ranges Nmap scan report for 192.168.20.58 Host is up (0.0000060s latency). Not shown: 988 closed ports PORT STATE SERVICE VERSION 22/tcp open unknown 80/tcp open unknown 111/tcp open unknown 139/tcp open unknown 445/tcp open unknown 902/tcp open unknown 2049/tcp open unknown 3000/tcp open unknown 5666/tcp open unknown 8009/tcp open unknown 8222/tcp open unknown 8333/tcp open unknown Nmap done: 1 IP address (1 host up) scanned in 3.81 seconds From the client: [email protected]:~$ nmap -sR RPC 192.168.20.58 Starting Nmap 5.21 ( http://nmap.org ) at 2012-07-04 22:14 EDT Failed to resolve given hostname/IP: RPC. Note that you can't use '/mask' AND '1-4,7,100-' style IP ranges Nmap scan report for 192.168.20.58 Host is up (0.73s latency). Not shown: 988 closed ports PORT STATE SERVICE VERSION 22/tcp open unknown 80/tcp open unknown 111/tcp open rpcbind (rpcbind V2) 2 (rpc #100000) 139/tcp open unknown 445/tcp open unknown 902/tcp open unknown 2049/tcp open nfs (nfs V2-4) 2-4 (rpc #100003) 3000/tcp open unknown 5666/tcp open unknown 8009/tcp open unknown 8222/tcp open unknown 8333/tcp open unknown Nmap done: 1 IP address (1 host up) scanned in 191.56 seconds
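
    mountd reports its refusals through syslog, and both the RPC layer and the in-kernel NFS server have debug switches that can be turned up temporarily, which is usually the quickest way to see why an export is being denied. A sketch of the knobs, run on the server:

      # turn on verbose kernel-side NFS/RPC debugging (written to syslog), then turn it off again
      rpcdebug -m nfsd -s all
      rpcdebug -m rpc -s all
      tail -f /var/log/syslog
      rpcdebug -m nfsd -c all
      rpcdebug -m rpc -c all
      # double-check what is actually exported, and to whom
      exportfs -v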

    Read the article

  • How does NFS read cache work on Debian?

    - by Ztyx
    I am planning to use NFS to serve out many small files. They will be read very often so client side caching is crucial. Does NFS handle this? Is there a way to increase the client side caching in some way? ...or should I look at another solution? Syncing using rsync or unison periodically is not an option since the files are modified on the client side from time to time.
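
    The NFS client caches file data in the normal page cache and caches attributes for a tunable window (the ac*/actimeo mount options); for many small, rarely-changing files it can also keep a persistent on-disk cache through FS-Cache. A sketch for a Debian client, with server:/export as a placeholder and the cachefilesd enable step written as an assumption about the default config file:

      # persistent client-side cache: install cachefilesd and enable it
      sudo apt-get install cachefilesd
      sudo sed -i 's/#RUN=yes/RUN=yes/' /etc/default/cachefilesd
      sudo service cachefilesd start
      # mount with FS-Cache (fsc) and a longer attribute-cache timeout
      sudo mount -t nfs -o ro,fsc,actimeo=600 server:/export /mnt/export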

    Read the article

  • NFS server is ignoring anonuid?

    - by paszczak000
    On the NFS server I've got a user with UID=1024 and GID=1204, and the same on the client side. Both servers are CentOS 6.4 (2.6.32-358.2.1.el6.x86_64). Right now anonuid/anongid is not working: files aren't mapped to UID 1024 but to 99 (nobody:x:99:99:Nobody:/:/sbin/nologin). /etc/exports: /vol/test 10.xxx.xxx.xxx(rw,all_squash,anonuid=1024,anongid=1024) /etc/fstab: 10.xxx.xxx.xxx:/vol/test /nas/test nfs nosuid,intr,defaults,_netdev,intr 0 0
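
    UID 99 (nobody) is what the NFSv4 id mapper falls back to when it cannot map names, which is a different mechanism from the anonuid/anongid squashing in the export, so the usual fixes are either to mount with NFSv3 (where the numeric squash applies directly) or to set the same Domain in /etc/idmapd.conf on both ends. A sketch of the NFSv3 variant, reusing the fstab entry from the question:

      # /etc/fstab -- force NFSv3 so all_squash/anonuid from the export take effect
      10.xxx.xxx.xxx:/vol/test  /nas/test  nfs  vers=3,nosuid,intr,_netdev  0 0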

    Read the article

  • nfs-kernel-server installation : file does not exist

    - by Stuti Rastogi
    I am extremely new to Ubuntu and need to work on EdX platform. I need to install the NFS Client on Ubuntu 12.04 for the same. I used the following stuti@stuti:/$ sudo apt-get install nfs-kernel-server However this gives me an error as follows: Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: nfs-common The following NEW packages will be installed: nfs-common nfs-kernel-server 0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded. Need to get 0 B/355 kB of archives. After this operation, 1,222 kB of additional disk space will be used. Do you want to continue [Y/n]? y Selecting previously unselected package nfs-common. (Reading database ... 200367 files and directories currently installed.) Unpacking nfs-common (from .../nfs-common_1%3a1.2.5-3ubuntu3.1_i386.deb) ... Selecting previously unselected package nfs-kernel-server. Unpacking nfs-kernel-server (from .../nfs-kernel-server_1%3a1.2.5-3ubuntu3.1_i386.deb) ... Processing triggers for ureadahead ... Processing triggers for man-db ... Setting up nfs-common (1:1.2.5-3ubuntu3.1) ... statd start/running, process 4574 gssd stop/pre-start, process 4603 idmapd start/running, process 4643 Setting up nfs-kernel-server (1:1.2.5-3ubuntu3.1) ... update-rc.d: /etc/init.d/nfs-kernel-server: file does not exist dpkg: error processing nfs-kernel-server (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: nfs-kernel-server E: Sub-process /usr/bin/dpkg returned an error code (1) I have tried sudo apt-get autoremove nfs-kernel-server sudo apt-get autoremove nfs-common After these, I tried to install but I keep getting the same error. apt-get update or upgrade also do not help and give the same error. I am clueless as to where can I find this missing file as stated in the output. I tried to google about this problem but none of the solutions I came across have helped or I have not been able to understand some of them. Any help would really be appreciated. Thanks in advance for your time and attention.
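
    The update-rc.d error means the package's init script never made it onto disk, so dpkg cannot finish configuring it; purging the half-installed packages and reinstalling usually puts /etc/init.d/nfs-kernel-server back. And if only the NFS client is needed for the edX platform, nfs-common on its own is sufficient. A sketch:

      sudo apt-get purge nfs-kernel-server nfs-common
      sudo apt-get update
      sudo apt-get install nfs-common            # client only
      sudo apt-get install nfs-kernel-server     # only if the machine must also be a server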

    Read the article

  • Single NFS folder shared across multiple clients

    - by parthi_for_tech
    I'm trying to mount a single NFS folder from the server, say "/share/folder", on multiple clients (up to 32), and the clients access the folder and create files. The problem I'm facing is that when I execute the write command, only one client is able to access the folder; the remaining clients are blocked and cannot proceed to write. Is what I'm trying to do correct? Can we read and write files in the same folder from multiple clients, and if yes, how can I do it in parallel? Kindly advise! Thanks
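
    NFS itself is perfectly happy with many clients reading and writing the same export at the same time; it is writes to the same file that need coordination (fcntl/flock locks via lockd). A minimal sketch of an export plus the client mount, with the server name and subnet as placeholders:

      # /etc/exports on the server (placeholder subnet), followed by: exportfs -ra
      /share/folder 192.168.0.0/24(rw,sync,no_subtree_check)
      # on each client (placeholder server name)
      sudo mount -t nfs server:/share/folder /mnt/folder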

    Read the article

  • Slow NFS and GFS2 performance

    - by Tiago
    Recently I've designed and configured a 4 node cluster for a webapp that does lots of file handling. The cluster have been broken down into 2 main roles, webserver and storage. Each role is replicated to a second server using drbd in active/passive mode. The webserver does a NFS mount of the data directory of the storage server and the latter also has a webserver running to serve files to browser clients. In the storage servers I've created a GFS2 FS to hold the data which is wired to drbd. I've chose GFS2 mainly because the announced performance and also because the volume size which has to be pretty high. Since we entered production I've been facing two problems that I think are deeply connected. First of all, the NFS mount on the webservers keeps hanging for a minute or so and then resumes normal operations. By analyzing the logs I've found out that NFS stops answering for a while and outputs the following log lines: Oct 15 18:15:42 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:44 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:46 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:51 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:58 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK In this case, the hang lasted for 16 seconds but sometimes it takes 1 or 2 minutes to resume normal operations. My first guess was this was happening due to heavy load of the NFS mount and that by increasing RPCNFSDCOUNT to a higher value, this would become stable. 
I've increased it several times and apparently, after a while, the logs started appearing less times. The value is now on 32. After further investigating the issue, I've came across a different hang, despite the NFS messages still appear in the logs. Sometimes, the GFS2 FS simply hangs which causes both the NFS and the storage webserver to serve files. Both stay hang for a while and then they resume normal operations. This hangs leaves no trace on client side (also leaves no NFS ... not responding messages) and, on the storage side, the log system appears to be empty, even though the rsyslogd is running. The nodes connect themselves through a 10Gbps non-dedicated connection but I don't think this is an issue because the GFS2 hang is confirmed but connecting directly to the active storage server. I've been trying to solve this for a while now and I've tried different NFS configuration options, before I've found out the GFS2 FS is also hanging. The NFS mount is exported as such: /srv/data/ <ip_address>(rw,async,no_root_squash,no_all_squash,fsid=25) And the NFS client mounts with: mount -o "async,hard,intr,wsize=8192,rsize=8192" active.storage.vlan:/srv/data /srv/data After some tests, these were the configurations that yielded more performance to the cluster. I am desperate to find a solution for this as the cluster is already in production mode and I need to fix this so that this hangs won't happen in the future and I don't really know for sure what and how I should be benchmarking. What I can tell is that this is happening due to heavy loads as I have tested the cluster earlier and this problems weren't happening at all. Please tell me if you need me to provide configuration details of the cluster, and which do you want me to post. As last resort I can migrate the files to a different FS but I need some solid pointers on whether this will solve this problems as the volume size is extremely large at this point. The servers are being hosted by a third-party enterprise and I don't have physical access to them. Best regards. EDIT 1: The servers are physical servers and their specs are: Webservers: Intel Bi Xeon E5606 2x4 2.13GHz 24GB DDR3 Intel SSD 320 2 x 120GB Raid 1 Storage: Intel i5 3550 3.3GHz 16GB DDR3 12 x 2TB SATA Initially there was a VRack setup between the servers but we've upgraded one of the storage servers to have more RAM and it wasn't inside the VRack. They connect through a shared 10Gbps connection between them. Please note that it is the same connection that is used for public access. They use a single IP (using IP Failover) to connect between them and to allow for a graceful failover. NFS is therefore over a public connection and not under any private network (it was before the upgrade, were the problem still existed). The firewall was configured and tested thoroughly but I disabled it for a while to see if the problem still occurred, and it did. From my knowledge the hosting provider isn't blocking or limiting the connection between either the servers and the public domain (at least under a given bandwidth consumption threshold that hasn't been reached yet). Hope this helps figuring out the problem. 
EDIT 2: Relevant software versions: CentOS 2.6.32-279.9.1.el6.x86_64 nfs-utils-1.2.3-26.el6.x86_64 nfs-utils-lib-1.1.5-4.el6.x86_64 gfs2-utils-3.0.12.1-32.el6_3.1.x86_64 kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64 drbd84-utils-8.4.2-1.el6.elrepo.x86_64 DRBD configuration on storage servers: #/etc/drbd.d/storage.res resource storage { protocol C; on <server1 fqdn> { device /dev/drbd0; disk /dev/vg_storage/LV_replicated; address <server1 ip>:7788; meta-disk internal; } on <server2 fqdn> { device /dev/drbd0; disk /dev/vg_storage/LV_replicated; address <server2 ip>:7788; meta-disk internal; } } NFS Configuration in storage servers: #/etc/sysconfig/nfs RPCNFSDCOUNT=32 STATD_PORT=10002 STATD_OUTGOING_PORT=10003 MOUNTD_PORT=10004 RQUOTAD_PORT=10005 LOCKD_UDPPORT=30001 LOCKD_TCPPORT=30001 (can there be any conflict in using the same port for both LOCKD_UDPPORT and LOCKD_TCPPORT?) GFS2 configuration: # gfs2_tool gettune <mountpoint> incore_log_blocks = 1024 log_flush_secs = 60 quota_warn_period = 10 quota_quantum = 60 max_readahead = 262144 complain_secs = 10 statfs_slow = 0 quota_simul_sync = 64 statfs_quantum = 30 quota_scale = 1.0000 (1, 1) new_files_jdata = 0 Storage network environment: eth0 Link encap:Ethernet HWaddr <mac address> inet addr:<ip address> Bcast:<bcast address> Mask:<ip mask> inet6 addr: <ip address> Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:957025127 errors:0 dropped:0 overruns:0 frame:0 TX packets:1473338731 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2630984979622 (2.3 TiB) TX bytes:1648430431523 (1.4 TiB) eth0:0 Link encap:Ethernet HWaddr <mac address> inet addr:<ip failover address> Bcast:<bcast address> Mask:<ip mask> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 The IP addresses are statically assigned with the given network configurations: DEVICE="eth0" BOOTPROTO="static" HWADDR=<mac address> ONBOOT="yes" TYPE="Ethernet" IPADDR=<ip address> NETMASK=<net mask> and DEVICE="eth0:0" BOOTPROTO="static" HWADDR=<mac address> IPADDR=<ip failover> NETMASK=<net mask> ONBOOT="yes" BROADCAST=<bcast address> Hosts file to allow for a graceful NFS failover in conjunction with NFS option fsid=25 set on both storage servers: #/etc/hosts <storage ip failover address> active.storage.vlan <webserver ip failover address> active.service.vlan As you can see, packet errors are down to 0. I've also ran ping for a long time without any packet loss. MTU size is the normal 1500. As there is no VLan by now, this is the MTU used to communicate between servers. The webservers' network environment is similar. One thing I forgot to mention is that the storage servers handle ~200GB of new files each day through the NFS connection, which is a key point for me to think this is some kind of heavy load problem with either NFS or GFS2. If you need further configuration details please tell me. EDIT 3: Earlier today we had a major filesystem crash on the storage server. I couldn't get the details of the crash right away because the server stop responding. After the reboot, I noticed the filesystem was extremely slow, and I was not being able to serve a single file through either NFS or httpd, perhaps due to cache warming or so. Nevertheless, I've been monitoring the server closely and the following error came up in dmesg. The source of the problem is clearly GFS, which is waiting for a lock and ends up starving after a while. INFO: task nfsd:3029 blocked for more than 120 seconds. 
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. nfsd D 0000000000000000 0 3029 2 0x00000080 ffff8803814f79e0 0000000000000046 0000000000000000 ffffffff8109213f ffff880434c5e148 ffff880624508d88 ffff8803814f7960 ffffffffa037253f ffff8803815c1098 ffff8803814f7fd8 000000000000fb88 ffff8803815c1098 Call Trace: [<ffffffff8109213f>] ? wake_up_bit+0x2f/0x40 [<ffffffffa037253f>] ? gfs2_holder_wake+0x1f/0x30 [gfs2] [<ffffffff814ff42e>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814ff2cb>] mutex_lock+0x2b/0x50 [<ffffffffa0379f21>] gfs2_log_reserve+0x51/0x190 [gfs2] [<ffffffffa0390da2>] gfs2_trans_begin+0x112/0x1d0 [gfs2] [<ffffffffa0369b05>] ? gfs2_dir_check+0x35/0xe0 [gfs2] [<ffffffffa0377943>] gfs2_createi+0x1a3/0xaa0 [gfs2] [<ffffffff8121aab1>] ? avc_has_perm+0x71/0x90 [<ffffffffa0383d1e>] gfs2_create+0x7e/0x1a0 [gfs2] [<ffffffffa037783f>] ? gfs2_createi+0x9f/0xaa0 [gfs2] [<ffffffff81188cf4>] vfs_create+0xb4/0xe0 [<ffffffffa04217d6>] nfsd_create_v3+0x366/0x4c0 [nfsd] [<ffffffffa0429703>] nfsd3_proc_create+0x123/0x1b0 [nfsd] [<ffffffffa041a43e>] nfsd_dispatch+0xfe/0x240 [nfsd] [<ffffffffa025a5d4>] svc_process_common+0x344/0x640 [sunrpc] [<ffffffff810602a0>] ? default_wake_function+0x0/0x20 [<ffffffffa025ac10>] svc_process+0x110/0x160 [sunrpc] [<ffffffffa041ab62>] nfsd+0xc2/0x160 [nfsd] [<ffffffffa041aaa0>] ? nfsd+0x0/0x160 [nfsd] [<ffffffff81091de6>] kthread+0x96/0xa0 [<ffffffff8100c14a>] child_rip+0xa/0x20 [<ffffffff81091d50>] ? kthread+0x0/0xa0 [<ffffffff8100c140>] ? child_rip+0x0/0x20

    Read the article

  • NFS (with Kerberos) mount failing due to "Server not found in Kerberos database" error

    - by Kendall Hopkins
    When running: `sudo mount -t nfs4 -o sec=krb5 sol.domain.com:/ /mnt` I get this error on the client: mount.nfs4: access denied by server while mounting sol.domain.com:/ And on the server syslogs UNKNOWN_SERVER: authtime 0, nfs/[email protected] for nfs/ip-#-#-#-#[email protected], Server not found in Kerberos database UNKNOWN_SERVER: authtime 0, nfs/[email protected] for krbtgt/[email protected], Server not found in Kerberos database UNKNOWN_SERVER: authtime 0, nfs/[email protected] for krbtgt/[email protected], Server not found in Kerberos database UNKNOWN_SERVER: authtime 0, nfs/[email protected] for krbtgt/[email protected], Server not found in Kerberos database UNKNOWN_SERVER: authtime 0, nfs/[email protected] for krbtgt/[email protected], Server not found in Kerberos database UNKNOWN_SERVER: authtime 0, nfs/[email protected] for nfs/ip-#-#-#-#[email protected], Server not found in Kerberos database UNKNOWN_SERVER: authtime 0, nfs/[email protected] for krbtgt/[email protected], Server not found in Kerberos database UNKNOWN_SERVER: authtime 0, nfs/[email protected] for krbtgt/[email protected], Server not found in Kerberos database UNKNOWN_SERVER: authtime 0, nfs/[email protected] for krbtgt/[email protected], Server not found in Kerberos database UNKNOWN_SERVER: authtime 0, nfs/[email protected] for krbtgt/[email protected], Server not found in Kerberos database Server keytab file: ubuntu@sol:~$ sudo klist -e -k /etc/krb5.keytab Keytab name: WRFILE:/etc/krb5.keytab KVNO Principal ---- -------------------------------------------------------------------------- 7 host/[email protected] (aes256-cts-hmac-sha1-96) 7 host/[email protected] (arcfour-hmac) 7 host/[email protected] (des3-cbc-sha1) 7 host/[email protected] (des-cbc-crc) 9 nfs/[email protected] (aes256-cts-hmac-sha1-96) 9 nfs/[email protected] (arcfour-hmac) 9 nfs/[email protected] (des3-cbc-sha1) 9 nfs/[email protected] (des-cbc-crc) Client keytab file: ubuntu@mercury:~$ sudo klist -e -k /etc/krb5.keytab Keytab name: WRFILE:/etc/krb5.keytab KVNO Principal ---- -------------------------------------------------------------------------- 3 host/[email protected] (aes256-cts-hmac-sha1-96) 3 host/[email protected] (arcfour-hmac) 3 host/[email protected] (des3-cbc-sha1) 3 host/[email protected] (des-cbc-crc) 3 nfs/[email protected] (aes256-cts-hmac-sha1-96) 3 nfs/[email protected] (arcfour-hmac) 3 nfs/[email protected] (des3-cbc-sha1) 3 nfs/[email protected] (des-cbc-crc)
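
    The nfs/ip-#-#-#-# principals in the server log say the client is canonicalising the server's name via reverse DNS (to an ip-#-#-#-# style hostname) and no such principal exists in the KDC, hence "Server not found in Kerberos database". A sketch of how one might confirm and work around it; rdns is an MIT Kerberos [libdefaults] setting:

      # on the client: which service principal will actually be requested?
      kinit someuser
      kvno nfs/sol.domain.com
      # then, in /etc/krb5.conf, stop reverse-DNS canonicalisation:
      # [libdefaults]
      #     rdns = false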

    Read the article

  • NFS Mounts Issues

    - by user554005
    Having some issue with a NFS Setup on the clients it just times out refuses to connect [root@host9 ~]# mount 192.168.0.17:/home/export /mnt/export mount: mount to NFS server '192.168.0.17' failed: timed out (retrying). mount: mount to NFS server '192.168.0.17' failed: timed out (retrying). mount: mount to NFS server '192.168.0.17' failed: timed out (retrying). mount: mount to NFS server '192.168.0.17' failed: timed out (retrying). Here are the settings I'm using: [root@host17 /home/export]# cat /etc/hosts.allow # # hosts.allow This file contains access rules which are used to # allow or deny connections to network services that # either use the tcp_wrappers library or that have been # started through a tcp_wrappers-enabled xinetd. # # See 'man 5 hosts_options' and 'man 5 hosts_access' # for information on rule syntax. # See 'man tcpd' for information on tcp_wrappers # portmap: 192.168.0.0/255.255.255.0 lockd: 192.168.0.0/255.255.255.0 rquotad: 192.168.0.0/255.255.255.0 mountd: 192.168.0.0/255.255.255.0 statd: 192.168.0.0/255.255.255.0 [root@host17 /home/export]# cat /etc/hosts.deny # # hosts.deny This file contains access rules which are used to # deny connections to network services that either use # the tcp_wrappers library or that have been # started through a tcp_wrappers-enabled xinetd. # # The rules in this file can also be set up in # /etc/hosts.allow with a 'deny' option instead. # # See 'man 5 hosts_options' and 'man 5 hosts_access' # for information on rule syntax. # See 'man tcpd' for information on tcp_wrappers # portmap:ALL lockd:ALL mountd:ALL rquotad:ALL statd:ALL [root@host17 /home/export]# cat /etc/exports /home/export 192.168.0.0/255.255.255.0(rw) [root@host17 /home/export]# iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination RH-Firewall-1-INPUT all -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination RH-Firewall-1-INPUT all -- anywhere anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain RH-Firewall-1-INPUT (2 references) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT icmp -- anywhere anywhere icmp any ACCEPT esp -- anywhere anywhere ACCEPT ah -- anywhere anywhere ACCEPT udp -- anywhere 224.0.0.251 udp dpt:mdns ACCEPT udp -- anywhere anywhere udp dpt:ipp ACCEPT tcp -- anywhere anywhere tcp dpt:ipp ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:http ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:6379 ACCEPT udp -- 192.168.0.0/24 anywhere state NEW udp dpt:sunrpc ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:sunrpc ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:nfs ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:32803 ACCEPT udp -- 192.168.0.0/24 anywhere state NEW udp dpt:filenet-rpc ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:892 ACCEPT udp -- 192.168.0.0/24 anywhere state NEW udp dpt:892 ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:rquotad ACCEPT udp -- 192.168.0.0/24 anywhere state NEW udp dpt:rquotad ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:pftp ACCEPT udp -- 192.168.0.0/24 anywhere state NEW udp dpt:pftp REJECT all -- anywhere anywhere reject-with icmp-host-prohibited on the clients here is some rpcinfos [root@host9 ~]# rpcinfo -p 192.168.0.17 program vers proto port 100000 4 tcp 111 portmapper 100000 3 tcp 111 portmapper 
100000 2 tcp 111 portmapper 100000 4 udp 111 portmapper 100000 3 udp 111 portmapper 100000 2 udp 111 portmapper 100011 1 udp 875 rquotad 100011 2 udp 875 rquotad 100011 1 tcp 875 rquotad 100011 2 tcp 875 rquotad 100005 1 udp 45857 mountd 100005 1 tcp 55772 mountd 100005 2 udp 34021 mountd 100005 2 tcp 59542 mountd 100005 3 udp 60930 mountd 100005 3 tcp 53086 mountd 100003 2 udp 2049 nfs 100003 3 udp 2049 nfs 100003 4 udp 2049 nfs 100227 2 udp 2049 nfs_acl 100227 3 udp 2049 nfs_acl 100003 2 tcp 2049 nfs 100003 3 tcp 2049 nfs 100003 4 tcp 2049 nfs 100227 2 tcp 2049 nfs_acl 100227 3 tcp 2049 nfs_acl 100021 1 udp 59832 nlockmgr 100021 3 udp 59832 nlockmgr 100021 4 udp 59832 nlockmgr 100021 1 tcp 36140 nlockmgr 100021 3 tcp 36140 nlockmgr 100021 4 tcp 36140 nlockmgr 100024 1 udp 46494 status 100024 1 tcp 49672 status [root@host9 ~]# [root@host9 ~]# rpcinfo -u 192.168.0.17 nfs rpcinfo: RPC: Timed out program 100003 version 0 is not available [root@host9 ~]# rpcinfo -u 192.168.0.17 portmap program 100000 version 2 ready and waiting program 100000 version 3 ready and waiting program 100000 version 4 ready and waiting [root@host9 ~]# rpcinfo -u 192.168.0.17 mount rpcinfo: RPC: Timed out program 100005 version 0 is not available [root@host9 ~]# I'm running CentOS 5.8 on all systems
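
    The rpcinfo output above shows mountd, nlockmgr and statd registered on random high ports, while the firewall only opens the fixed ports 662 (pftp), 875 (rquotad), 892, 32769 (filenet-rpc) and 32803 -- and UDP 2049 is not opened at all, which matches the rpcinfo -u timeouts. A minimal sketch of one likely fix, assuming that mismatch is the cause (the port values simply mirror the rules already present in the firewall):

    # /etc/sysconfig/nfs on the server (CentOS 5): pin the helper daemons to the
    # fixed ports the firewall already allows
    MOUNTD_PORT=892
    STATD_PORT=662
    LOCKD_TCPPORT=32803
    LOCKD_UDPPORT=32769
    RQUOTAD_PORT=875
    # then allow NFS itself over UDP as well, and restart the services
    iptables -I RH-Firewall-1-INPUT -s 192.168.0.0/24 -p udp --dport 2049 -j ACCEPT
    service nfslock restart && service nfs restart
    # re-check from a client; these should now answer instead of timing out
    rpcinfo -u 192.168.0.17 nfs
    rpcinfo -u 192.168.0.17 mount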

    Read the article

  • OpenBSD configuration: Client unable to mount via NFS using Berkeley Automounter (amd)

    - by Rilindo
    What I am trying to do is to have my OpenBSD client (OpenBSD 4.9) automount a Linux NFS file system (Scientific Linux 6.1). So far, I am not sure if it is configured correctly. To get things out of the way, I am able to mount NFS manually: # mount_nfs -T -3 192.168.15.100:/exports /mnt # ls -la /mnt total 52 drwxr-xr-x 7 root wheel 4096 Oct 4 22:42 . drwxr-xr-x 16 root wheel 512 Nov 26 16:33 .. drwxrwxr-x 5 _sndio _sndio 4096 Oct 31 21:58 centos drwxr-xr-x 15 root wheel 4096 Nov 6 09:17 home drwxr-xr-x 5 root wheel 4096 Oct 31 21:27 sl drwxr-xr-x 3 root wheel 4096 Nov 19 16:02 sles drwxr-xr-x 17 503 503 4096 Nov 10 17:37 users # So connectivity is not an issue, as far as I can tell. As per the man page, the following is configured in /etc/amd/auto.home: /defaults type:=nfs;sublink:=${key};opts:=rw,soft,intr,vers=3,proto=tcp * rhost:=192.168.15.100;rfs:=/exports In turn, /etc/amd/master is configured as such: # cat /etc/amd/master /exports amd.home Upon reboot, I can see it mounted, but curiously enough, the device shows as amd:24490 instead of the hostname: amd:24490 0 0 0 100% /exports From what I understand, amd acts a little differently than it does on FreeBSD. Still, I tried to see if it can automount. Nope: ksh: cd: /exports/users - Resource temporarily unavailable # cd /exports/192.168.15.100/host/users ksh: cd: /exports/192.168.15.100/host/users - Resource temporarily unavailable A search on Google doesn't help much - it seems that automounting NFS with OpenBSD is not something that is usually done. Other than this, information is fairly sparse. I can, of course, always mount it permanently, but I tend to be a bit anal about convention, so no for now. :) Some direction would be appreciated. (And oh, in case you are wondering, I tried the FreeBSD way of using amd and that hasn't worked out - although I wouldn't mind an explanation of the difference between how FreeBSD and OpenBSD implement it.) UPDATE: After re-writing the map file several times, I got as far as actually communicating with the NFS server with this configuration: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,tcp,resvport However, for some reason, it seems that amd will only default to NFS version 2 over udp: # tcpdump dst kerberos tcpdump: listening on pcn0, link-type EN10MB tcpdump: WARNING: compensating for unaligned libpcap packets 20:38:28.558385 openbsd.monzell.com.856 > kerberos.monzell.com.sunrpc: udp 100 20:38:28.559154 openbsd.monzell.com.856 > kerberos.monzell.com.892: udp 96 20:38:30.592761 openbsd.monzell.com.856 > kerberos.monzell.com.nfsd: xid 0x22000000 (NFSv2) 40 null 20:38:33.558107 arp reply openbsd.monzell.com is-at 52:54:00:52:8f:66 I tried various options of forcing it to try to mount as nfsv3 such as: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport or: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,vers=-3,proto=tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport Still nothing. Curiously enough, OpenBSD mounts default to version 3, so I am not sure why it would start with version 2 under amd. What would be the correct options to pass?
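
    Two checks that may help narrow this down -- a sketch, not a verified fix. OpenBSD's base-system amd is the old BSD automounter, so am-utils-style vers=/proto= keywords in opts may simply be ignored. The rpcinfo calls below confirm the Linux server registers NFSv3 over both transports, and the last line is the plain-fstab fallback mentioned above (the option keywords follow mount_nfs(8) and should be treated as an assumption; NFSv3 is already the mount_nfs default):

    # run on the OpenBSD client: does the server register NFSv3 over TCP and UDP?
    rpcinfo -t 192.168.15.100 nfs 3
    rpcinfo -u 192.168.15.100 nfs 3
    # the non-amd fallback: a static /etc/fstab entry
    192.168.15.100:/exports /exports nfs rw,tcp,soft,intr 0 0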

    Read the article

  • Getting NFS clients to retry mount if NFS server down when client boots

    - by z0mbix
    I have an NFS server that several clients mount. I am using the following in my /etc/exports on the server: /content *(rw,no_root_squash) and on the clients in my /etc/fstab I have: content.prd.domain.tld:/content /content nfs rw,hard,intr 0 0 If the clients boot while the NFS server is down, the share does not get mounted. I read in the NFS man page that the retry defaults should handle this: retry=n The number of minutes to retry an NFS mount operation in the foreground or background before giving up. The default value for foreground mounts is 2 minutes. The default value for background mounts is 10000 minutes, which is roughly one week. I have tested this, but it doesn't appear to work. Am I missing something? All servers are RHEL 5.4. Cheers z0mbix
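
    Note that the 10000-minute default quoted above only applies once a mount has been backgrounded; without the bg option, a boot-time mount is a foreground mount and gives up after roughly two minutes. A minimal sketch of the fstab line with bg added (the explicit retry value just restates the documented default):

    content.prd.domain.tld:/content  /content  nfs  rw,hard,intr,bg,retry=10000  0 0

    With bg, the first failed attempt forks into the background, boot continues, and the share is mounted once the server becomes reachable again.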

    Read the article

  • NFS mount mounted inside another NFS mount disappears randomly

    - by espenfjo
    I have quite an odd issue where my nested NFS mounts just disappear randomly from time to time. The fstab entries look somewhat like this: nfs:/home /home/nfs rw,hard,intr,rsize=32768,noatime,nocto,proto=tcp 0 0 nfs:/bigdir /home/bigdir nfs rw,hard,intr,rsize=32768,noatime,nocto,proto=tcp,bg 0 0 The issue is that from time to time the "/home/bigdir" folder will be empty, even though mtab thinks that the share is still mounted. nfsstat et al. also think the share is still mounted. The only thing that works is unmounting and then (re)mounting the bigdir share. The server side is a NetApp. The client side is RHEL5.5, 2.6.18-194 kernel (Yes, I know 5.8 is out, but as far as I can see there are no errata for this particular issue). I can use various hacks like automount, or mounting it to another path and then using mount --bind, but I would like to fix the underlying issue. -- Best regards Espen Fjellvær Olsen
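
    A minimal sketch of the mount --bind workaround mentioned above, so the second share is mounted outside /home and only bound into it (the /mnt/bigdir staging path is an arbitrary choice, and this works around the symptom rather than fixing the underlying disappearance):

    # /etc/fstab -- mount the second NetApp share outside the first NFS mount, then bind it in
    nfs:/bigdir    /mnt/bigdir    nfs    rw,hard,intr,rsize=32768,noatime,nocto,proto=tcp,bg  0 0
    /mnt/bigdir    /home/bigdir   none   bind  0 0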

    Read the article

  • NFS I/O monitoring

    - by Gordon
    I have an NFS-mounted directory, and I'd like to monitor the I/O usage on it (MB/s reads and writes). What's the recommended way to do that? This is on the NFS client; I don't have access to the NFS server. I'm not interested in general I/O usage (otherwise I would use vmstat/iostat). The client also has multiple NFS mounts, and I'm interested in monitoring just one specific mount (or I might have used ethereal). Thanks!
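
    A minimal sketch for this, assuming the client kernel exposes per-mount counters in /proc/self/mountstats (RHEL 5-era and later kernels generally do); newer nfs-utils also ships an nfsiostat tool that reports the same figures. The script name and arguments are made up for the example:

    #!/bin/sh
    # nfswatch.sh -- read/write MB/s for one NFS mount, sampled from /proc/self/mountstats
    # usage: nfswatch.sh <mountpoint> [interval-seconds]
    MNT=$1
    INT=${2:-5}
    sample() {
        # print the cumulative read and write byte counters for the given mount point
        awk -v m="$MNT" '
            $1 == "device" && $5 == m { want = 1; next }
            want && $1 == "device"    { want = 0 }
            want && $1 == "bytes:"    { print $2, $3; exit }
        ' /proc/self/mountstats
    }
    while :; do
        set -- $(sample); R0=$1; W0=$2
        sleep "$INT"
        set -- $(sample); R1=$1; W1=$2
        echo "$R0 $R1 $W0 $W1 $INT" | awk '{ printf "read %8.2f MB/s   write %8.2f MB/s\n", ($2-$1)/$5/1048576, ($4-$3)/$5/1048576 }'
    done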

    Read the article

  • NFS v4, HA Migration, and stale handles on clients

    - by Karl Katzke
    I'm managing a server running NFS v4 with Pacemaker/OpenAIS. NFS is configured to use TCP. When I migrate the NFS server to another node in the Pacemaker cluster, even though the metadata is persisted, connections from the clients 'hang' and eventually time out after 90 seconds. After those 90 seconds, the old mountpoint becomes 'stale' and the mounted files can no longer be accessed. The 90-second grace period seems to be part of the server configuration and not the client configuration. I see this message on the server: kernel: NFSD: starting 90-second grace period If I restart the NFS client on the client nodes after I migrate (unmounting and then remounting the share), then I don't experience the problem, but connections and file transfers are still interrupted. Three questions: What is the 90-second grace period? What's it there for? How can I keep the files from going stale on the clients without restarting them after I migrate the NFS server to another node? Is it actually possible to migrate the NFS server without having large file uploads drop?
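
    On the first two questions: the grace period is the window after nfsd starts during which the server accepts only state-recovery (reclaim) requests, so clients can re-establish their opens and locks after a restart or failover; it defaults to the NFSv4 lease time, hence the 90 seconds. For the stale handles, the export generally needs the same fsid= on both nodes and /var/lib/nfs on storage that moves with the service; beyond that, the window can be shortened on the takeover node before nfsd starts. A hedged sketch (the values are only examples, and the gracetime knob exists only on reasonably recent kernels):

    # on the node taking over the NFS service, before rpc.nfsd is started
    echo 10 > /proc/fs/nfsd/nfsv4leasetime
    echo 10 > /proc/fs/nfsd/nfsv4gracetime    # check that this file exists on your kernel first
    service nfs start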

    Read the article

  • NFS to NFS mount

    - by dude
    I have a machine that I need to bridge NFS files to. Can I mount an NFS directory on machine2 from machine1 and then mount that already-mounted directory from machine2 on machine3 via NFS? Do you see any problems with that? I am basically bridging some subnet domains this way, in a certain fashion. My development machine is on a different, separate (unbridged) subnet than the one where I would like to use the files, and I would like this machine1 (dev machine) - machine2 (passthrough machine) - machine3 (test machine) connection. And no, there is no way to move the test machine, as it's a chassis :) and it's two buildings away.
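
    For what it's worth, the Linux kernel NFS server has historically refused to re-export a directory that is itself an NFS mount, so a setup like this usually needs either a user-space NFS server on machine2 or a recent kernel with NFS re-export support. If it is attempted, the chain would look roughly like the sketch below (host names, paths and the fsid value are placeholders):

    # machine2: /etc/fstab -- import the share from machine1
    machine1:/export    /mnt/export    nfs    rw,hard,intr  0 0
    # machine2: /etc/exports -- re-export it to machine3; an explicit fsid is needed
    # because an NFS mount has no device UUID of its own
    /mnt/export    machine3(rw,fsid=1,no_subtree_check)
    # machine3: mount the re-exported path
    mount -t nfs machine2:/mnt/export /mnt/export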

    Read the article

  • Unable to mount root fs over NFS [on hold]

    - by johnmadrak
    I am attempting to set up a Raspberry Pi running Pidora to boot from an NFS share. My configuration in cmdline.txt is: dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/nfs nfsroot=<serverip>:/fake/path,nfsvers=3,rw,nolock nfsrootdebug ip=dhcp elevator=deadline rootwait On the Pi, the output I see is: IP-Config: Got DHCP answer from <router>, my address is <clientip> IP-Config: Complete: device=eth0, hwaddr=<macaddress>, ipaddr=<clientip>, mask=255.255.255.0, gw=<routerip> host=<clientip>, domain=, nis-domain=(none) bootserver=<routerip>, rootserver=<serverip>, rootpath= nameserver0=<routerip> (It pauses for a bit here) VFS: Unable to mount root fs via NFS, trying floppy VFS: Cannot open root device "nfs" or unknown-block(2,0); error -6 Please append a correct "root=" boot option; here are the available partitions: ..... On the NFS server (an OpenVZ container), the output I see in /var/log/messages is: Aug 22 23:24:01 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:783 for /fake/path (/fake/path) Aug 22 23:24:38 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:741 for /fake/path (/fake/path) Aug 22 23:25:25 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:752 for /fake/path (/fake/path) Aug 22 23:26:12 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:876 for /fake/path (/fake/path) To test, I've made sure I can mount the share (non-root) from both the Pi and another machine, and it worked. Does anyone have an idea of what could be wrong or how to narrow it down? Thank you in advance for your help.
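
    A hedged way to narrow this down from another Linux box on the same network: the mountd log above only proves the MOUNT step succeeds, not that the NFS service itself (port 2049) is reachable with the requested options. The commands below check that, and the exports line is the kind an NFS root generally needs on the server side (both are sketches; <serverip>, <clientip> and /fake/path stay placeholders):

    # from any other Linux machine on the network
    rpcinfo -u <serverip> nfs 3          # NFSv3 over UDP (nfsroot's usual default transport)
    rpcinfo -t <serverip> nfs 3          # and over TCP
    mount -t nfs -o nfsvers=3,nolock <serverip>:/fake/path /mnt
    # on the OpenVZ container: an export suited to an NFS root, then reload the exports
    # /etc/exports
    /fake/path    <clientip>(rw,no_root_squash,async,no_subtree_check)
    exportfs -ra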

    Read the article
