Search Results

Search found 11 results on 1 page for 'jumboframes'.

Page 1/1

  • TCP packets larger than 4 KB don't get a reply from Linux

    - by pts
    I'm running Linux 3.2.51 in a virtual machine (192.168.33.15) and sending Ethernet frames to it. I'm writing custom software to emulate a TCP peer; the other peer is the Linux running in the virtual machine guest. I've noticed that TCP packets larger than about 4 KB are ignored (i.e. dropped without an ACK) by the Linux guest. If I decrease the packet size by 50 bytes, I get an ACK. I'm not sending new payload data until the Linux guest fully ACKs the previous one. I've increased the MTU with ifconfig eth0 mtu 51000, and ping -c 1 -s 50000 goes through (from guest to my emulator) and the Linux guest gets a reply of the same size. I've also increased the receive buffer with sysctl -w net.ipv4.tcp_rmem='70000 87380 87380' and tried sysctl -w net.ipv4.tcp_mtu_probing=1 (and also =0). There is no IPv4 packet fragmentation; all packets have the DF flag set. It works the other way round: the Linux guest can send TCP packets with 6900 bytes of payload and my emulator understands them. This is very strange to me, because only TCP packets seem to be affected (large ICMP packets go through). Any idea what could be imposing this limit? Any idea how to debug it in the Linux kernel? See the tcpdump -n -vv output below. tcpdump was run on the Linux guest. The last line is interesting: 4060 bytes of TCP payload is sent to the guest, and there is no reply packet from the Linux guest for half a minute. 14:59:32.000057 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.1.36522 > 192.168.33.15.22: Flags [S], cksum 0x8da0 (correct), seq 10000000, win 14600, length 0 14:59:32.000086 IP (tos 0x10, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 44) 192.168.33.15.22 > 192.168.33.1.36522: Flags [S.], cksum 0xc37f (incorrect -> 0x5999), seq 1415680476, ack 10000001, win 19920, options [mss 9960], length 0 14:59:32.000218 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.1.36522 > 192.168.33.15.22: Flags [.], cksum 0xa752 (correct), ack 1, win 14600, length 0 14:59:32.000948 IP (tos 0x10, ttl 64, id 53777, offset 0, flags [DF], proto TCP (6), length 66) 192.168.33.15.22 > 192.168.33.1.36522: Flags [P.], cksum 0xc395 (incorrect -> 0xfa01), seq 1:27, ack 1, win 19920, length 26 14:59:32.001575 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.1.36522 > 192.168.33.15.22: Flags [.], cksum 0xa738 (correct), ack 27, win 14600, length 0 14:59:32.001585 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 65) 192.168.33.1.36522 > 192.168.33.15.22: Flags [P.], cksum 0x48d6 (correct), seq 1:26, ack 27, win 14600, length 25 14:59:32.001589 IP (tos 0x10, ttl 64, id 53778, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.15.22 > 192.168.33.1.36522: Flags [.], cksum 0xc37b (incorrect -> 0x9257), ack 26, win 19920, length 0 14:59:32.001680 IP (tos 0x10, ttl 64, id 53779, offset 0, flags [DF], proto TCP (6), length 496) 192.168.33.15.22 > 192.168.33.1.36522: Flags [P.], seq 27:483, ack 26, win 19920, length 456 14:59:32.001784 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.1.36522 > 192.168.33.15.22: Flags [.], cksum 0xa557 (correct), ack 483, win 14600, length 0 14:59:32.006367 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 1136) 192.168.33.1.36522 > 192.168.33.15.22: Flags [P.], seq 26:1122, ack 483, win 14600, length 1096 14:59:32.044150 IP (tos 0x10, ttl 64, id 53780, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.15.22 >
192.168.33.1.36522: Flags [.], cksum 0xc37b (incorrect -> 0x8c47), ack 1122, win 19920, length 0 14:59:32.045310 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 312) 192.168.33.1.36522 > 192.168.33.15.22: Flags [P.], seq 1122:1394, ack 483, win 14600, length 272 14:59:32.045322 IP (tos 0x10, ttl 64, id 53781, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.15.22 > 192.168.33.1.36522: Flags [.], cksum 0xc37b (incorrect -> 0x8b37), ack 1394, win 19920, length 0 14:59:32.925726 IP (tos 0x10, ttl 64, id 53782, offset 0, flags [DF], proto TCP (6), length 1112) 192.168.33.15.22 > 192.168.33.1.36522: Flags [.], seq 483:1555, ack 1394, win 19920, length 1072 14:59:32.925750 IP (tos 0x10, ttl 64, id 53784, offset 0, flags [DF], proto TCP (6), length 312) 192.168.33.15.22 > 192.168.33.1.36522: Flags [P.], seq 1555:1827, ack 1394, win 19920, length 272 14:59:32.927131 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.1.36522 > 192.168.33.15.22: Flags [.], cksum 0x9bcf (correct), ack 1555, win 14600, length 0 14:59:32.927148 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.1.36522 > 192.168.33.15.22: Flags [.], cksum 0x9abf (correct), ack 1827, win 14600, length 0 14:59:32.932248 IP (tos 0x10, ttl 64, id 53785, offset 0, flags [DF], proto TCP (6), length 56) 192.168.33.15.22 > 192.168.33.1.36522: Flags [P.], cksum 0xc38b (incorrect -> 0xd247), seq 1827:1843, ack 1394, win 19920, length 16 14:59:32.932366 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.1.36522 > 192.168.33.15.22: Flags [.], cksum 0x9aaf (correct), ack 1843, win 14600, length 0 14:59:32.964295 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 104) 192.168.33.1.36522 > 192.168.33.15.22: Flags [P.], seq 1394:1458, ack 1843, win 14600, length 64 14:59:32.964310 IP (tos 0x10, ttl 64, id 53786, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.15.22 > 192.168.33.1.36522: Flags [.], cksum 0xc37b (incorrect -> 0x85a7), ack 1458, win 19920, length 0 14:59:32.964561 IP (tos 0x10, ttl 64, id 53787, offset 0, flags [DF], proto TCP (6), length 88) 192.168.33.15.22 > 192.168.33.1.36522: Flags [P.], seq 1843:1891, ack 1458, win 19920, length 48 14:59:32.965185 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.1.36522 > 192.168.33.15.22: Flags [.], cksum 0x9a3f (correct), ack 1891, win 14600, length 0 14:59:32.965196 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 104) 192.168.33.1.36522 > 192.168.33.15.22: Flags [P.], seq 1458:1522, ack 1891, win 14600, length 64 14:59:32.965233 IP (tos 0x10, ttl 64, id 53788, offset 0, flags [DF], proto TCP (6), length 88) 192.168.33.15.22 > 192.168.33.1.36522: Flags [P.], seq 1891:1939, ack 1522, win 19920, length 48 14:59:32.965970 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.1.36522 > 192.168.33.15.22: Flags [.], cksum 0x99cf (correct), ack 1939, win 14600, length 0 14:59:32.965979 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 568) 192.168.33.1.36522 > 192.168.33.15.22: Flags [P.], seq 1522:2050, ack 1939, win 14600, length 528 14:59:32.966112 IP (tos 0x10, ttl 64, id 53789, offset 0, flags [DF], proto TCP (6), length 520) 192.168.33.15.22 > 192.168.33.1.36522: Flags [P.], seq 1939:2419, ack 2050, win 19920, length 480 14:59:32.970059 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40) 
192.168.33.1.36522 > 192.168.33.15.22: Flags [.], cksum 0x95df (correct), ack 2419, win 14600, length 0 14:59:32.970089 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 616) 192.168.33.1.36522 > 192.168.33.15.22: Flags [P.], seq 2050:2626, ack 2419, win 14600, length 576 14:59:32.981159 IP (tos 0x10, ttl 64, id 53790, offset 0, flags [DF], proto TCP (6), length 72) 192.168.33.15.22 > 192.168.33.1.36522: Flags [P.], cksum 0xc39b (incorrect -> 0xa84f), seq 2419:2451, ack 2626, win 19920, length 32 14:59:32.982347 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.1.36522 > 192.168.33.15.22: Flags [.], cksum 0x937f (correct), ack 2451, win 14600, length 0 14:59:32.982357 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 104) 192.168.33.1.36522 > 192.168.33.15.22: Flags [P.], seq 2626:2690, ack 2451, win 14600, length 64 14:59:32.982401 IP (tos 0x10, ttl 64, id 53791, offset 0, flags [DF], proto TCP (6), length 88) 192.168.33.15.22 > 192.168.33.1.36522: Flags [P.], seq 2451:2499, ack 2690, win 19920, length 48 14:59:32.982570 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.1.36522 > 192.168.33.15.22: Flags [.], cksum 0x930f (correct), ack 2499, win 14600, length 0 14:59:32.982702 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 104) 192.168.33.1.36522 > 192.168.33.15.22: Flags [P.], seq 2690:2754, ack 2499, win 14600, length 64 14:59:33.020066 IP (tos 0x10, ttl 64, id 53792, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.15.22 > 192.168.33.1.36522: Flags [.], cksum 0xc37b (incorrect -> 0x7e07), ack 2754, win 19920, length 0 14:59:33.983503 IP (tos 0x10, ttl 64, id 53793, offset 0, flags [DF], proto TCP (6), length 72) 192.168.33.15.22 > 192.168.33.1.36522: Flags [P.], cksum 0xc39b (incorrect -> 0x2aa7), seq 2499:2531, ack 2754, win 19920, length 32 14:59:33.983810 IP (tos 0x10, ttl 64, id 53794, offset 0, flags [DF], proto TCP (6), length 88) 192.168.33.15.22 > 192.168.33.1.36522: Flags [P.], seq 2531:2579, ack 2754, win 19920, length 48 14:59:33.984100 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.1.36522 > 192.168.33.15.22: Flags [.], cksum 0x92af (correct), ack 2531, win 14600, length 0 14:59:33.984139 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.1.36522 > 192.168.33.15.22: Flags [.], cksum 0x927f (correct), ack 2579, win 14600, length 0 14:59:34.022914 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 104) 192.168.33.1.36522 > 192.168.33.15.22: Flags [P.], seq 2754:2818, ack 2579, win 14600, length 64 14:59:34.022939 IP (tos 0x10, ttl 64, id 53795, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.15.22 > 192.168.33.1.36522: Flags [.], cksum 0xc37b (incorrect -> 0x7d77), ack 2818, win 19920, length 0 14:59:34.023554 IP (tos 0x10, ttl 64, id 53796, offset 0, flags [DF], proto TCP (6), length 88) 192.168.33.15.22 > 192.168.33.1.36522: Flags [P.], seq 2579:2627, ack 2818, win 19920, length 48 14:59:34.027571 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40) 192.168.33.1.36522 > 192.168.33.15.22: Flags [.], cksum 0x920f (correct), ack 2627, win 14600, length 0 14:59:34.027603 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 4100) 192.168.33.1.36522 > 192.168.33.15.22: Flags [P.], seq 2818:6878, ack 2627, win 14600, length 4060
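
    A few guest-side checks that often narrow down this kind of size-dependent drop, sketched here for the eth0 interface used in the question; receive offloads (GRO/LRO) and checksum handling are common suspects when hand-crafted segments above a certain size silently disappear, so the idea is to rule those out before digging into the kernel itself.

        # show which offloads are currently active on the guest NIC
        ethtool -k eth0
        # temporarily disable generic/large receive offload and re-test the 4 KB segment
        ethtool -K eth0 gro off lro off
        # confirm the 51000 MTU really took effect
        ip link show eth0
        # watch for checksum/"invalid" counters climbing while the big segment is resent
        netstat -s | egrep -i 'checksum|invalid'
        # check for driver-level complaints
        dmesg | tail -n 20

    If the drops stop with GRO/LRO off, the virtual NIC's offload path rather than a TCP sysctl is the thing to investigate.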

    Read the article

  • Setup link aggregation and jumbo frames on VMware ESXi 4

    - by Sysadminicus
    I'm setting up an ESXi 4 server to connect to an NFS datastore. I'd like to bond two of the NICs together and use jumbo frames for the NFS connection on a private (non-management) network. I set up a new switch with the 2 NICs and am able to connect to the NFS share over it, but could use some guidance on getting jumbo frames and link aggregation/bonding/teaming working.
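
    For reference, a minimal sketch of the classic ESX(i) 4 CLI steps, assuming a dedicated vSwitch named vSwitch1, uplinks vmnic2 and vmnic3, a port group named NFS and the address 192.168.50.10 (all placeholders); standard vSwitches in ESXi 4 do static teaming rather than LACP, so the physical switch ports would be configured as a static EtherChannel and the vSwitch teaming policy set to "Route based on IP hash" in the vSphere Client.

        # attach both uplinks to the NFS vSwitch
        esxcfg-vswitch -L vmnic2 vSwitch1
        esxcfg-vswitch -L vmnic3 vSwitch1
        # raise the vSwitch MTU to 9000
        esxcfg-vswitch -m 9000 vSwitch1
        # create a port group and a jumbo VMkernel port for the NFS traffic
        esxcfg-vswitch -A NFS vSwitch1
        esxcfg-vmknic -a -i 192.168.50.10 -n 255.255.255.0 -m 9000 NFS
        # verify MTU and uplinks
        esxcfg-vswitch -l
        esxcfg-vmknic -l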

    Read the article

  • Issue with Netgear GS108T Managed Switch and Jumbo Frames

    - by Richie086
    I recently purchased a Netgear GS108T managed switch and I am trying to configure jumbo packets between my NAS (Thecus N4100Pro), PC and managed switch. I should mention that I was able to use jumbo frames between my PC and NAS before I purchased the switch without issue. My desktop has a wired gigabit NIC (Intel 82579V Gigabit) with the ability to configure jumbo frames (see pic) of either 9014 bytes or 4088 bytes; I chose 9014 bytes for the jumbo frame size. My NAS supports jumbo frames as well, and is configured to use 9014 as the frame size. I then go into my Netgear managed switch and set the frame size to 9014 on the ports I am using for my PC and NAS (see image). As soon as I hit apply in the web interface, I lose my connection to the SMB shares on my NAS and I can no longer connect to the web admin interface for my NAS. The really strange thing is I can ping my NAS via the ping command, but when I try to connect to the web interface on port 80 or port 443 the page never loads. I did a scan from my PC to my NAS using nmap and I can see the following ports open: PORT STATE SERVICE 22/tcp open ssh 80/tcp open http 111/tcp open rpcbind 139/tcp open netbios-ssn 443/tcp open https 445/tcp open microsoft-ds 631/tcp open ipp 2000/tcp open cisco-sccp 2049/tcp open nfs 3260/tcp open iscsi 49152/tcp open unknown MAC Address: 00:14:FD:15:00:44 (Thecus Technology) Read data files from: C:\Program Files (x86)\Nmap Nmap done: 1 IP address (1 host up) scanned in 211.97 seconds Raw packets sent: 1 (28B) | Rcvd: 1 (28B) Anyone have any idea what is going on here? Why is nmap able to detect that the ports are open and listening for http, https and file sharing, but I can't connect when all devices have jumbo packets enabled? Stranger still - I did a packet capture using Wireshark while the nmap scan was running and filtered it so I only saw conversations between my PC and my NAS. Here are the packet details from my scan. Only 4 packets over 5k bytes? What is going on here? Do I not need to configure jumbo frame sizes on the switch? I have an internet connection from my PC to the switch to my router - I just cannot connect to my NAS. I just checked on my iPhone and I am able to open my NAS web admin interface without issue! WTF! Let me know if you need more details.
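
    One way to see where the jumbo path actually breaks, run from PowerShell on the PC and assuming the NAS answers at 192.168.1.50 (placeholder address); note that the Intel driver's 9014-byte value includes the 14-byte Ethernet header, so the switch's per-port maximum frame size needs to be at least as large (managed switches often take a value such as 9216), and a don't-fragment ping brackets the largest payload that really makes it through.

        # standard frame as a baseline (1472 bytes of payload + 28 bytes of ICMP/IP headers = 1500)
        ping -f -l 1472 192.168.1.50
        # full jumbo payload (8972 + 28 = 9000)
        ping -f -l 8972 192.168.1.50
        # if the jumbo test fails, halve the size until it passes to find the real ceiling
        ping -f -l 4000 192.168.1.50
        # show the MTU Windows has actually applied to each interface
        netsh interface ipv4 show subinterfaces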

    Read the article

  • Problem with Jumbo Frames

    - by Spookyone
    Hello, I am trying to set up jumbo frames on my gigabit home LAN but have had no luck so far. My setup is: * D-Link DIR-655 router, HW Revision A3, Firmware 1.21 EU * Synology DS107+, Firmware 3.0-1337 * Laptop w/ Win7 x64, external PCIx NIC managed by "Generic Marvel Yukon 88E8053 based Ethernet Controller". The router is supposed to support jumbo frames but doesn't feature any relevant setting. I set the Jumbo Packet value to 9000 on both the NIC and the Synobox but it doesn't work; ping -f -l 8972 says "Packet needs to be fragmented but DF set". Is there any other setting I overlooked, does the DIR-655 not actually support jumbo frames, or what else could be the problem?
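
    A quick way to pin down which device is refusing the large frames, run from PowerShell on the Windows 7 laptop and assuming the Synology sits at 192.168.0.20 (placeholder address); the sizes step up from a standard frame, so the point at which replies stop shows whether anything above 1500 bytes passes the DIR-655's switch ports at all.

        # baseline standard frame (1472 payload + 28 bytes of ICMP/IP headers = 1500)
        ping -f -l 1472 192.168.0.20
        # mid-size jumbo, sometimes passes where a full 9000-byte frame does not
        ping -f -l 3972 192.168.0.20
        # full 9000-byte frame (the test from the question)
        ping -f -l 8972 192.168.0.20
        # confirm the MTU Windows actually applied to the Yukon NIC
        netsh interface ipv4 show subinterfaces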

    Read the article

  • Are there any statistics on the number of networks using Ethernet Jumbo frames?

    - by Russell Heilling
    I am often told by Sales Engineers and Product Managers that a Layer 2 Ethernet service is not fit for purpose if the maximum supported frame size is in the 1518-1522 range (enough to support standard Ethernet frames or VLAN tagged frames). Or in other words: an MTU of 1500 is not enough (see this blog post for my definition of MTU and a short rant on how the term is often misused). I am never able to get any details from them on: a) What proportion of customers (Enterprise / SMB) require Jumbo frames? b) What are typical expectations on frame size for Jumbos on WAN links? 1600, 2000, 9k, etc... I know that in Telco and Data Centre environments Jumbos are pretty common, but I am after some insight as to how common this is within Enterprise and SMB networks.

    Read the article

  • VMWare converter performance

    - by bellocarico
    Hello, I have a question about my test lab. It's more to understand the concept than to apply it in production: I have an ESXi host with a few Linux/Windows VMs configured and I'd like to use VMware Converter to create backups. To speed up the process I decided to create a Windows VM on the same ESXi host, where I've installed Windows 7 and VMware Converter. The host has a gigabit card but it's currently connected to a 100Mb FD port. Windows 7 sees a 1Gb card connected. When I do the backup using VMware Converter I specify the host IP as source and destination, so I thought the copy would be faster than using my laptop across the network. Well, to cut a long story short: I get dreadful performance (4Mb/sec). I'm a bit confused by this because, despite the fact that the host uplink is running at 100Mb, communication between VMs and the host shouldn't (correct me if I'm wrong) have any such limitation. I did tweak Windows 7 to optimise network performance but got only a little improvement. I still need 4 hours to back up a 50Gb (thin) VM. Additionally I wanted to ask: would jumbo frames help with this? I know that jumbo frames have to be supported end to end, and the network switch where the host is currently connected doesn't support them, but I was wondering: 1) Does the ESXi host support jumbo frames at all? 2) Can I enable them somehow? 3) If I do so, I guess bulk transfers between VMs and the host would improve, but would this affect the communication going through the real switch, as this doesn't do jumbo? Thanks for reading
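
    A couple of host-side checks, sketched for ESXi 4.x with a standard vSwitch assumed to be named vSwitch0; traffic between a VM and another VM or VMkernel port on the same vSwitch generally stays inside the host and never crosses the 100Mb uplink, so the physical switch's lack of jumbo support does not by itself explain 4Mb/sec, but the current MTU and the path the Converter traffic takes are worth confirming.

        # list vSwitches, port groups, uplinks and the current MTU
        esxcfg-vswitch -l
        # raise the vSwitch MTU to 9000 (ESXi does support jumbo frames; every VMkernel port and guest NIC must follow)
        esxcfg-vswitch -m 9000 vSwitch0
        # show VMkernel NICs and their MTU
        esxcfg-vmknic -l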

    Read the article

  • Server 2012, Jumbo Frames - should I expect problems?

    - by TomTom
    Ok, this might sound stupid - but is there any negative to just enabling jumbo frames in practice? From what I understand: any switch or Ethernet adapter that sees a jumbo frame it cannot handle will just drop it. TCP is not a problem, as the maximum segment size is negotiated during connection setup. UDP is a theoretical problem, as a server may just send a LARGE UDP packet that gets dropped on the way. Practically though, as UDP is packet based, I do not really think any software WOULD send a UDP packet larger than 1500 bytes net without app-level configuration changes - at least this is how I do my programming, as it is quite hard to get a decent MTU size for that without testing yourself, so you fall back in programming to max 1500-byte packets. The network in question is a standard small business network - we upgraded from a non-managed 24-port switch to a 52-port switch with 4 10g ports (Netgear - quite cheap) and will move a file server to 10g, also for iSCSI serving. All my equipment at the Ethernet level can handle a minimum of 9000 bytes, and due to local firewalls I really want to get packets larger (less firewall processing), but the network is also NAT'ed to the internet. On top of that, different machines move around (download) large files (multi-gigabyte area) quite often for processing. The question is - can I expect problems when I just enable jumbo frames? Again, this is not total ignorance - I just don't see programs sending more than 1500-byte UDP packets (if that is a practical problem please tell me) and for TCP the MSS is negotiated anyway. If there is a problem I can move to a dedicated VLAN, but this has its own share of problems, as basically most workstations must then be on both VLANs.
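
    A sketch of enabling and verifying jumbo frames on Server 2012 from PowerShell, assuming an adapter named "Ethernet" and a test target at 192.168.1.10 (both placeholders); the advanced-property name and the accepted values ("Jumbo Packet", "9014 Bytes", a bare "9014", etc.) vary by driver, so it is worth listing what the driver exposes before setting anything.

        # see which jumbo-related property this driver exposes and its current value
        Get-NetAdapterAdvancedProperty -Name "Ethernet" | Where-Object { $_.DisplayName -like "*Jumbo*" }
        # set it (the exact DisplayName/DisplayValue strings depend on the driver)
        Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
        # confirm the MTU Windows now uses
        netsh interface ipv4 show subinterfaces
        # end-to-end don't-fragment test (8972 + 28 bytes of headers = 9000)
        ping -f -l 8972 192.168.1.10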

    Read the article

  • Missing Jumbo and MTU for Broadcom NIC?

    - by Mike
    I have two Broadcom BCM5708C NetXtreme II GigE NICs in my Dell Windows SBS 2008 server. I would like to enable Jumbo Frames on one of them so that I can add it to my SAN, whose VLAN on the switch is already using a 9000 MTU. Broadcom's own data sheet for this NIC claims that it is Jumbo Frame capable up to a 9000 MTU. The problem is that there is no setting for Jumbo or MTU in the NIC's configurable settings. There are other settings, just not the one I need to change. Am I missing something here? The driver claims to be up-to-date when I allow Windows to search on-line.
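
    Since SBS 2008 predates the NetAdapter PowerShell cmdlets, the checks below stick to commands available there (run from PowerShell, with 10.0.0.50 standing in for a SAN address); with NetXtreme II adapters the Jumbo MTU property is typically surfaced by Broadcom's own driver package or management suite rather than the inbox Windows driver, so a missing setting usually points at the driver that is installed rather than the hardware.

        # MTU Windows currently applies to each interface
        netsh interface ipv4 show subinterfaces
        # baseline and jumbo don't-fragment tests toward the SAN VLAN once the driver exposes the setting
        ping -f -l 1472 10.0.0.50
        ping -f -l 8972 10.0.0.50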

    Read the article

  • Jumbo Frames, ISCSI and ESXi

    - by vlannoob
    I have enabled Jumbo Frames (9000) in ESXi for all my vmnics, VMkernels, vSwitches, iSCSI bindings etc - basically anywhere in ESXi that has an MTU setting I have put 9000 in it. The ports on the switches (Dell PowerConnects) are all set for Jumbo Frames. I have a Dell MD3200i with 2 controllers, each with 4 ports for iSCSI. Each of these ports is set to Jumbo Frames (9000) as well. So now the questions: Do I need to log into each Windows Server VM I am running, delve into the NIC properties in Device Manager, and manually set Jumbo Frames there as well? What's the best way of testing that Jumbo Frames are indeed working as intended?
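
    On the testing question, a don't-fragment vmkping from the ESXi shell is the usual end-to-end check, sketched here for ESXi 5.x with the iSCSI VMkernel port assumed to be vmk1 and one MD3200i port at 10.10.10.100 (placeholders; older builds take just -d -s without -I). On the first question, guest NICs only need Jumbo Frames set in Device Manager if the VMs themselves talk to jumbo-enabled storage or shares directly; the host's software iSCSI traffic goes through the VMkernel ports and never touches the guests' NIC settings.

        # 8972-byte payload + 28 bytes of headers = a full 9000-byte frame, fragmentation disallowed
        vmkping -I vmk1 -d -s 8972 10.10.10.100
        # standard-size control test; if this works and the jumbo one fails, something in the path is still at 1500
        vmkping -I vmk1 -d -s 1472 10.10.10.100
        # confirm the MTUs that actually took effect
        esxcli network vswitch standard list
        esxcli network ip interface list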

    Read the article

  • Cannot increase MTU on gigabit network

    - by RayQuang
    Hi, I have just set up my new gigabit network, and when I tried to increase the MTU to use jumbo frames I got this error: root@rayquang-desktop:~# ifconfig eth1 mtu 9000 SIOCSIFMTU: Invalid argument Could anyone help me increase the MTU? Details: NIC: NETGEAR GA311; Switch: NETGEAR GS105; running Ubuntu 10.10 and Debian Lenny on the desktop and server respectively. Help would be greatly appreciated, RayQuang
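
    "SIOCSIFMTU: Invalid argument" normally means the driver's maximum MTU is lower than the value requested rather than a syntax problem; the GA311 is a Realtek RTL8169-family card, and the r8169 driver of that era caps jumbo well below 9000 (and refuses it outright on some chip revisions), so it is worth probing for the real ceiling. A sketch using the eth1 from the question, with 192.168.1.20 standing in for the other machine:

        # identify the driver and chipset actually bound to the NIC
        ethtool -i eth1
        # step the MTU down until the kernel accepts a value
        ifconfig eth1 mtu 7200
        ifconfig eth1 mtu 7000
        ifconfig eth1 mtu 4000
        # see what stuck
        ip link show eth1
        # then test the path end to end: payload = accepted MTU minus 28 bytes of headers
        ping -M do -s 6972 -c 3 192.168.1.20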

    Read the article

  • vSphere ESX 5.5 hosts cannot connect to NFS Server

    - by Gerald
    Summary: My problem is I cannot use the QNAP NFS Server as an NFS datastore from my ESX hosts despite the hosts being able to ping it. I'm utilising a vDS with LACP uplinks for all my network traffic (including NFS) and a subnet for each vmkernel adapter. Setup: I'm evaluating vSphere and I've got two vSphere ESX 5.5 hosts (node1 and node2) and each one has 4x NICs. I've teamed them all up using LACP/802.3ad with my switch and then created a distributed switch between the two hosts with each host's LAG as the uplink. All my networking is going through the distributed switch, ideally, I want to take advantage of DRS and the redundancy. I have a domain controller VM ("Central") and vCenter VM ("vCenter") running on node1 (using node1's local datastore) with both hosts attached to the vCenter instance. Both hosts are in a vCenter datacenter and a cluster with HA and DRS currently disabled. I have a QNAP TS-669 Pro (Version 4.0.3) (TS-x69 series is on VMware Storage HCL) which I want to use as the NFS server for my NFS datastore, it has 2x NICs teamed together using 802.3ad with my switch. vmkernel.log: The error from the host's vmkernel.log is not very useful: NFS: 157: Command: (mount) Server: (10.1.2.100) IP: (10.1.2.100) Path: (/VM) Label (datastoreNAS) Options: (None) cpu9:67402)StorageApdHandler: 698: APD Handle 509bc29f-13556457 Created with lock[StorageApd0x411121] cpu10:67402)StorageApdHandler: 745: Freeing APD Handle [509bc29f-13556457] cpu10:67402)StorageApdHandler: 808: APD Handle freed! cpu10:67402)NFS: 168: NFS mount 10.1.2.100:/VM failed: Unable to connect to NFS server. Network Setup: Here is my distributed switch setup (JPG). Here are my networks. 10.1.1.0/24 VM Management (VLAN 11) 10.1.2.0/24 Storage Network (NFS, VLAN 12) 10.1.3.0/24 VM vMotion (VLAN 13) 10.1.4.0/24 VM Fault Tolerance (VLAN 14) 10.2.0.0/24 VM's Network (VLAN 20) vSphere addresses 10.1.1.1 node1 Management 10.1.1.2 node2 Management 10.1.2.1 node1 vmkernel (For NFS) 10.1.2.2 node2 vmkernel (For NFS) etc. Other addresses 10.1.2.100 QNAP TS-669 (NFS Server) 10.2.0.1 Domain Controller (VM on node1) 10.2.0.2 vCenter (VM on node1) I'm using a Cisco SRW2024P Layer-2 switch (Jumboframes enabled) with the following setup: LACP LAG1 for node1 (Ports 1 through 4) setup as VLAN trunk for VLANs 11-14,20 LACP LAG2 for my router (Ports 5 through 8) setup as VLAN trunk for VLANs 11-14,20 LACP LAG3 for node2 (Ports 9 through 12) setup as VLAN trunk for VLANs 11-14,20 LACP LAG4 for the QNAP (Ports 23 and 24) setup to accept untagged traffic into VLAN 12 Each subnet is routable to another, although, connections to the NFS server from vmk1 shouldn't need it. All other traffic (vSphere Web Client, RDP etc.) goes through this setup fine. I tested the QNAP NFS server beforehand using ESX host VMs atop of a VMware Workstation setup with a dedicated physical NIC and it had no problems. The ACL on the NFS Server share is permissive and allows all subnet ranges full access to the share. I can ping the QNAP from node1 vmk1, the adapter that should be used to NFS: ~ # vmkping -I vmk1 10.1.2.100 PING 10.1.2.100 (10.1.2.100): 56 data bytes 64 bytes from 10.1.2.100: icmp_seq=0 ttl=64 time=0.371 ms 64 bytes from 10.1.2.100: icmp_seq=1 ttl=64 time=0.161 ms 64 bytes from 10.1.2.100: icmp_seq=2 ttl=64 time=0.241 ms Netcat does not throw an error: ~ # nc -z 10.1.2.100 2049 Connection to 10.1.2.100 2049 port [tcp/nfs] succeeded! 
The routing table of node1: ~ # esxcfg-route -l VMkernel Routes: Network Netmask Gateway Interface 10.1.1.0 255.255.255.0 Local Subnet vmk0 10.1.2.0 255.255.255.0 Local Subnet vmk1 10.1.3.0 255.255.255.0 Local Subnet vmk2 10.1.4.0 255.255.255.0 Local Subnet vmk3 default 0.0.0.0 10.1.1.254 vmk0 VM Kernel NIC info ~ # esxcfg-vmknic -l Interface Port Group/DVPort IP Family IP Address Netmask Broadcast MAC Address MTU TSO MSS Enabled Type vmk0 133 IPv4 10.1.1.1 255.255.255.0 10.1.1.255 00:50:56:66:8e:5f 1500 65535 true STATIC vmk0 133 IPv6 fe80::250:56ff:fe66:8e5f 64 00:50:56:66:8e:5f 1500 65535 true STATIC, PREFERRED vmk1 164 IPv4 10.1.2.1 255.255.255.0 10.1.2.255 00:50:56:68:f5:1f 1500 65535 true STATIC vmk1 164 IPv6 fe80::250:56ff:fe68:f51f 64 00:50:56:68:f5:1f 1500 65535 true STATIC, PREFERRED vmk2 196 IPv4 10.1.3.1 255.255.255.0 10.1.3.255 00:50:56:66:18:95 1500 65535 true STATIC vmk2 196 IPv6 fe80::250:56ff:fe66:1895 64 00:50:56:66:18:95 1500 65535 true STATIC, PREFERRED vmk3 228 IPv4 10.1.4.1 255.255.255.0 10.1.4.255 00:50:56:72:e6:ca 1500 65535 true STATIC vmk3 228 IPv6 fe80::250:56ff:fe72:e6ca 64 00:50:56:72:e6:ca 1500 65535 true STATIC, PREFERRED Things I've tried/checked: I'm not using DNS names to connect to the NFS server. Checked MTU. Set to 9000 for vmk1, dvSwitch and Cisco switch and QNAP. Moved QNAP onto VLAN 11 (VM Management, vmk0) and gave it an appropriate address, still had same issue. Changed back afterwards of course. Tried initiating the connection of NAS datastore from vSphere Client (Connected to vCenter or directly to host), vSphere Web Client and the host's ESX Shell. All resulted in the same problem. Tried a path name of "VM", "/VM" and "/share/VM" despite not even having a connection to server. I plugged in a linux system (10.1.2.123) into a switch port configured for VLAN 12 and tried mounting the NFS share 10.1.2.100:/VM, it worked successfully and I had read-write access to it I tried disabling the firewall on the ESX host esxcli network firewall set --enabled false I'm out of ideas on what to try next. The things I'm doing differently from my VMware Workstation setup is the use of LACP with a physical switch and a virtual distributed switch between the two hosts. I'm guessing the vDS is probably the source of my troubles but I don't know how to fix this problem without eliminating it.
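
    A few checks that narrow "Unable to connect to NFS server" down further, run from the ESXi shell on node1 with the addresses from the question; NFSv3 needs the portmapper reachable as well as nfsd, and retrying the mount from esxcli while capturing on vmk1 shows whether the request actually leaves on the expected VMkernel port. With LACP hashing in the path it is possible for the mount traffic to take a different LAG member than the ping did, so seeing (or not seeing) the TCP handshake to port 2049 is the telling part.

        # portmapper and nfsd both need to answer
        nc -z 10.1.2.100 111
        nc -z 10.1.2.100 2049
        # capture the NFS conversation on the storage VMkernel port while retrying the mount from another session
        tcpdump-uw -i vmk1 host 10.1.2.100 and port 2049
        # retry the mount from the CLI to get the raw error, then list what mounted
        esxcli storage nfs add -H 10.1.2.100 -s /VM -v datastoreNAS
        esxcli storage nfs list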

    Read the article
