Search Results

Search found 3 results on 1 page for 'espenfjo'.


  • Routing RFC1918 addresses through dd-wrt via a switch

    - by espenfjo
    I am a bit stuck with an experiment of mine. I have a network looking somewhat like this:

                           Internet
                               |
                            |Switch|
                           /        \
            Server w/pub IP          DD-WRT router 192.168.1.1
                                           |
                              RFC1918 clients 192.168.1.0/24

    What I want is for the RFC1918 clients to speak directly with each other. On the server with the public IP I have this route:

        192.168.1.0/24 dev eth0  scope link

    and I can see that packets for 192.168.1.1 are in fact reaching the dd-wrt router, even though I get no answer. Trying to reach one of the RFC1918 clients from the public-IP server gives no result either, as the dd-wrt router is not announcing that network on its external interface (arp who-has 192.168.1.107 tell xxx.xxx.xxx.xxx, but no answer). The router, being a WLAN dd-wrt router, of course has a load of routes, VLANs and interfaces:

        xxx.xxx.xxx.1 dev vlan2  scope link
        192.168.1.0/24 dev br0  proto kernel  scope link  src 192.168.1.1
        192.168.1.0/24 dev eth1  proto kernel  scope link  src 192.168.1.244
        84.215.64.0/18 dev vlan2  proto kernel  scope link  src xxx.xxx.xxx.xxx
        169.254.0.0/16 dev br0  proto kernel  scope link  src 169.254.255.1
        127.0.0.0/8 dev lo  scope link
        0.0.0.0 via xxx.xxx.xxx.1 dev vlan2

    Here xxx.xxx.xxx.xxx is the public IP and xxx.xxx.xxx.1 is the default gateway for the public IP. I am not sure where to continue with this. I would reckon that I need both routing on the dd-wrt router and some iptables magic? Why do something this complex? Why not ;) Also, do not mind that "Internet" can get RFC1918 traffic, it won't go outside of the walls.

    EDIT 1: Following the tip from stew I do indeed get the correct ARP flowing, and after adding an iptables rule allowing traffic from that specific public-IP machine I get traffic between the systems! Oddly enough, though, the speed I get between the server with the public IP and the RFC1918 clients is the same as if the traffic were routed out onto the Internet and back.

    EDIT 2: Ok, disconnecting the external Internet connection still gives the same, crappy transfer speed. So it has to be something else.

    EDIT 3: Ok, I guess there are other reasons for this crappy speed. Case closed. :)
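
    For reference, a minimal sketch of the proxy-ARP plus iptables approach the edits hint at might look like this. The interface names (vlan2 = WAN, br0/eth0 = LAN-facing) are taken from the route tables above, the public IP stays a placeholder, and the exact rules are an assumption rather than the poster's actual configuration:

        # On the dd-wrt router: answer ARP queries for the 192.168.1.0/24 range on the WAN side
        echo 1 > /proc/sys/net/ipv4/conf/vlan2/proxy_arp

        # Let traffic from the public-IP server be forwarded into the LAN,
        # and exempt it from the usual NAT so replies go straight back
        iptables -I FORWARD -s xxx.xxx.xxx.xxx -d 192.168.1.0/24 -j ACCEPT
        iptables -t nat -I POSTROUTING -s xxx.xxx.xxx.xxx -d 192.168.1.0/24 -j ACCEPT

        # On the server with the public IP: the on-link route already quoted above
        ip route add 192.168.1.0/24 dev eth0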

  • NFS mount mounted inside another NFS mount disappears randomly

    - by espenfjo
    I have quite an odd issue where my nested NFS mounts just disappear randomly from time to time. The fstab entries look somewhat like this:

        nfs:/home    /home/nfs     nfs   rw,hard,intr,rsize=32768,noatime,nocto,proto=tcp     0 0
        nfs:/bigdir  /home/bigdir  nfs   rw,hard,intr,rsize=32768,noatime,nocto,proto=tcp,bg  0 0

    The issue is that from time to time the /home/bigdir folder will be empty, even though mtab thinks that the share is still mounted. nfsstat et al. also think the share is still mounted. The only thing that works is unmounting and then (re)mounting the bigdir share. The server side is a NetApp. The client side is RHEL 5.5 with the 2.6.18-194 kernel (yes, I know 5.8 is out, but as far as I can see there are no errata for this particular issue). I can use various hacks like automount, or mounting it to another path and then using mount --bind, but I would like to fix the underlying issue.

    --
    Best regards
    Espen Fjellvær Olsen
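
    As a side note, the bind-mount workaround mentioned in the excerpt would look roughly like this; the /mnt/bigdir staging path is made up for the example:

        # Mount the export somewhere outside the nested /home/nfs tree first...
        mount -t nfs -o rw,hard,intr,rsize=32768,noatime,nocto,proto=tcp nfs:/bigdir /mnt/bigdir
        # ...then bind it into the place the applications expect
        mount --bind /mnt/bigdir /home/bigdir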

  • Understanding Red Hat's recommended tuned profiles

    - by espenfjo
    We are going to roll out tuned (and numad) on ~1000 servers, the majority of them being VMware servers on either NetApp or 3Par storage. According to Red Hat's documentation we should choose the virtual-guest profile. What it does can be seen here: tuned.conf

    We are changing the IO scheduler to NOOP, as both VMware and the NetApp/3Par should do sufficient scheduling for us. However, after investigating a bit I am not sure why they are increasing vm.dirty_ratio and kernel.sched_min_granularity_ns.

    As far as I have understood, increasing vm.dirty_ratio to 40% means that for a server with 20 GB of RAM, 8 GB can be dirty at any given time unless vm.dirty_writeback_centisecs is hit first, and while flushing those 8 GB all IO for the application will be blocked until the dirty pages are freed. Increasing the dirty_ratio would probably mean higher write performance at peaks, as we now have a larger cache, but then again when the cache fills, IO will be blocked for a considerably longer time (several seconds).

    The other question is why they are increasing sched_min_granularity_ns. If I understand it correctly, increasing this value will decrease the number of time slices per epoch (sched_latency_ns), meaning that running tasks will get more time to finish their work. I can understand this being a very good thing for applications with very few threads, but for e.g. Apache or other processes with a lot of threads, would this not be counter-productive?
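
    For orientation, the kind of settings discussed here would be applied roughly as follows. The 40% dirty_ratio and the NOOP scheduler come from the excerpt above; the sched_min_granularity_ns value is purely illustrative and not quoted from Red Hat's shipped virtual-guest profile:

        # 40% of RAM may be dirty before writers are throttled into flushing;
        # on a 20 GB machine that is roughly 0.40 * 20 GB = 8 GB of dirty pages
        sysctl -w vm.dirty_ratio=40

        # A larger minimum slice means fewer, longer time slices per
        # sched_latency_ns epoch, favouring a few busy threads
        sysctl -w kernel.sched_min_granularity_ns=10000000

        # The NOOP elevator mentioned above, set per block device
        echo noop > /sys/block/sda/queue/scheduler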
