Search Results

Search found 41561 results on 1663 pages for 'linux command'.


  • Poor write performance on Debian server running NFS with 22TB exported JFS filesystem

    - by user143546
    I am currently running a Debian server that exports a large JFS filesystem (22 TB) over NFS (nfs-kernel-server). When writing to the NFS share, performance is very poor. The 22 TB disk sits on a NAS mounted over iSCSI. Writes burst for a moment near the expected line speed, then sit idle for several seconds with very little traffic, measured in the low kB/sec; I/O wait peaks on write. Reading from the NFS mount runs at the expected speed (11 MB/sec). The issue does not occur when using SFTP, rsync, or local copying (non-NFS), and it persists between stable and testing releases. On the same machine I have a 14 TB ext4 filesystem using the exact same export configuration that does not share the issue. That share is not in regular use and thus not consuming resources.

    NFS server:

        cat /etc/exports
        /data2 10.1.20.86(rw,no_subtree_check,async,all_squash)
        cat /sys/block/sdb/queue/scheduler
        noop [deadline] cfq
        cat /etc/default/nfs-kernel-server
        RPCNFSDCOUNT=8
        RPCNFSDPRIORITY=0
        RPCMOUNTDOPTS=--manage-gids
        NEED_SVCGSSD=
        RPCSVCGSSDOPTS=

    NFS client:

        cat /etc/fstab
        10.1.20.100:/data2 /root/incoming nfs rw,noatime,soft,intr,noacl 0 2
        cat /sys/block/sdb/queue/scheduler
        noop [deadline] cfq
        cat /proc/mounts
        10.1.20.100:/data2/ /root/incoming nfs4 rw,noatime,vers=4,rsize=262144,wsize=262144,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.1.20.86,minorversion=0,addr=10.1.20.100 0 0

    This problem has me pretty stumped. Any help would be greatly welcomed. Thanks.
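    A minimal diagnostic sketch for narrowing this down (paths, sizes and the presence of sysstat are assumptions): compare raw write speed on the JFS filesystem locally against write speed through the NFS mount, to tell whether the stall lives in the iSCSI/JFS layer or in the NFS layer.

        # On the server: raw write to the exported filesystem (local JFS over iSCSI)
        dd if=/dev/zero of=/data2/ddtest bs=1M count=512 conv=fdatasync

        # On the client: the same write, but through the NFS mount
        dd if=/dev/zero of=/root/incoming/ddtest bs=1M count=512 conv=fdatasync

        # While the NFS write runs, watch per-device wait/throughput on the server
        # (iostat comes from the sysstat package)
        iostat -xm 2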

    Read the article

  • Recovering from bad ownership

    - by Christian Sciberras
    I was going to change the ownership of a directory to apache:apache, but I ended up running: chown -R apache:apache / Bad! Very bad! I knew what was going on when it started saying: chown: changing ownership of `/proc/2694/fd/48': Permission denied That's when I stopped everything (Ctrl+C). The setup is a server running VirtualBox, with CentOS 5 inside the VM; the problem happened inside the VM. Currently everything seems to be working, but I have not restarted the system yet, and to be honest, I'm afraid that if I do, something will break. I do not know the order in which chown traversed the filesystem, so should I be concerned and assume something will break after a reboot? Is there a way to recover from this problem without having to rely on backups? I do have a daily one, but I thought there may be a simpler way out.
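    For what it's worth, a hedged sketch of one common recovery route on RPM-based systems like CentOS 5: the RPM database records the owner, group and mode of every packaged file, so rpm can be asked to reapply them. This only covers files installed from packages; anything under /home, web roots, databases and so on still has to be fixed by hand.

        # Reapply recorded owners/groups and permissions for all installed packages
        rpm --setugids -a
        rpm --setperms -a

        # Afterwards, see what still differs from the package metadata
        # (look for U (user) or G (group) flags in the first column)
        rpm -Va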

    Read the article

  • Mounting /var /tmp /var/log to separate partitions

    - by William MacDonald
    Per DISA hardening requirements for RHEL, I'm supposed to make sure a number of locations on the filesystem are mounted on separate partitions. A few of the locations they specify include /var, /tmp, /var/log, etc. Is it possible to do this on a live machine (without booting a separate OS), and if so, how would I go about it? I've backed up the OS, so if I do screw something up I can recover. Thanks!
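    As a rough illustration of the general shape of the job, a hedged sketch for /tmp only, assuming a spare LVM volume named /dev/vg0/tmp (the device name, filesystem and mount options are all assumptions; /var and /var/log are trickier because services hold files open there and usually need single-user mode or a maintenance window).

        # Create and mount a dedicated /tmp
        mkfs.ext4 /dev/vg0/tmp
        echo '/dev/vg0/tmp  /tmp  ext4  nodev,nosuid,noexec  0 2' >> /etc/fstab
        mount /tmp
        chmod 1777 /tmp   # /tmp must stay world-writable with the sticky bit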

    Read the article

  • Can a named (BIND) crash make a server unreachable?

    - by giorgio79
    My server recently became unreachable, and after a restart a named error was the last thing I found in /var/log/messages from before the reboot:

        Jun 26 00:15:06 host named[1303]: error (network unreachable) resolving 'dlv.isc.org/DNSKEY/IN': 2001:500:71::29#53
        Jun 26 06:38:55 host kernel: imklog 5.8.10, log source = /proc/kmsg started.
        Jun 26 06:38:55 host rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="1294" x-info="http://www.rsyslog.com"] start
        Jun 26 06:38:55 host kernel: Initializing cgroup subsys cpuset

    Can a named crash make a server unreachable? I doubt it, since I assume I should still have been able to log in with SSH via the IP address, but the server did not respond. So I am making heavy guesses here.
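    One small, hedged thing worth checking while investigating: sshd on that era of OpenSSH does a reverse-DNS lookup on connecting clients by default, so a dead or wedged local resolver can make SSH logins hang long enough to look like the box is down, even though named itself should not take the whole server offline. Disabling that lookup removes the dependency (option per sshd_config(5); the service name varies by distro):

        # /etc/ssh/sshd_config
        UseDNS no

        # then reload sshd
        service sshd reload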

    Read the article

  • How can I optimize Ubuntu desktop to run my web server?

    - by Parry
    Hi, I am using Ubuntu desktop edition to run my Drupal website on an intranet. I know the best thing for running web servers is to install Ubuntu Server edition, but due to some problems I am using the desktop edition. I installed XAMPP on my machine and my website is up and running. I want to know how I can optimize my machine. Since I will use very few of the desktop edition's features, are there any things I can remove or stop that will free up memory and CPU, and are there any packages I should install to increase the performance of my Ubuntu?
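    A hedged starting point, assuming GDM is the display manager (it may be LightDM or another on your release, and how to disable it at boot depends on the init system): the single biggest win on a desktop install is simply not running the graphical session, since Apache/MySQL keep running without X.

        # Stop the desktop session now to free RAM and CPU
        sudo service gdm stop

        # Check what else is eating memory before removing anything
        ps aux --sort=-%mem | head -20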

    Read the article

  • Why can't I get out of display mirror mode?

    - by Roy Smith
    I've been running Ubuntu (10.04.1 LTS, 64-bit) for a while and just replaced my hardware with a faster machine with an ATI Radeon HD 5700 video card. I've got twin 1920 x 1080 displays. I downloaded the latest driver (ati-driver-installer-10-9-x86.x86_64.run) from the ATI web site and installed that. I've gone through a few rounds of playing with /etc/X11/xorg.conf, and can't get things right. At the moment, it's in display mirroring mode, and I can't figure out how to get it out of mirror mode. If I run Monitor Preferences, there's a "Same image in all monitors" checkbox. If I uncheck that, the little preview window switches to show two monitors. When I click Apply, it asks me to log out and log back in again. When I do that, I'm right back to mirrored mode. What's really weird is that I'm currently running a copy of xorg.conf from a coworker's machine. He's got identical hardware, and his display works fine. So, I'm inclined to think there's something else going on other than the conf file. Any ideas what might be wrong?
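    Not an answer, just a hedged sketch of a quick test that bypasses both the GNOME monitor tool and xorg.conf: if the fglrx driver's RandR support allows it (some fglrx releases expose only a single large output, in which case aticonfig or the Catalyst control panel is the usual route), xrandr can place the outputs side by side directly. The output names below are placeholders; use whatever xrandr actually reports.

        # List the outputs and modes the driver exposes
        xrandr -q

        # Try extending the desktop (replace DFP1/DFP2 with the names from xrandr -q)
        xrandr --output DFP1 --auto --output DFP2 --auto --right-of DFP1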

    Read the article

  • How can I enforce directory space limits in an OpenVZ system?

    - by George
    The title says it all. I have some programs on a server (CentOS 4, OpenVZ) that use a directory as a temp directory, but pay no attention to the size it grows to. I want to enforce a limit, e.g. this folder cannot exceed 300 MB. I would use quota, but OpenVZ does not support loop devices to mount a file as such. Any other solutions (apart from scripting a periodic delete of files in the directory)? Editing the application's code to implement such functionality is not entirely out of the question, if it can be done easily and no other way exists. It's written in C++, but I don't know how to implement it.
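    One hedged possibility, assuming the container is allowed to mount tmpfs and the temp data is small enough to live in RAM/swap (both assumptions worth verifying on OpenVZ): put a size-capped tmpfs on top of the directory, so writes beyond the limit fail with "No space left on device" instead of filling the real filesystem.

        # Cap the directory at 300 MB
        mount -t tmpfs -o size=300m tmpfs /path/to/tempdir

        # Make it persistent across reboots
        echo 'tmpfs /path/to/tempdir tmpfs size=300m 0 0' >> /etc/fstab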

    Read the article

  • NIC is receiving, but not transmitting at all?

    - by Shtééf
    I'm trying to fix a very strange problem remotely on a machine at a customer site. The machine is a Dell PowerEdge, I believe a 1950 (haven't verified, but the lspci output matches specs I found). The machine has two similar NICs, identified as Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12) by lspci, and using the bnx2 driver. (I suspect these are on-board and on the same controller, which is what I'm accustomed to for this type of machine.) The primary interface eth0 works perfectly, and is in fact how I am ssh'd in. However, the secondary interface eth1 is not transmitting. I can see this in the ifconfig output, for example, where the TX field is always 0. However, it is receiving, and tcpdump shows ARP requests coming from the ISP's gateway on the other side. The interface is physically connected to a Siemens BSTU4 modem, configured by the ISP. The link is properly set to 10 Mbps and full duplex, without negotiation, as the ISP requested. A small /30 subnet is configured. For the sake of anonymity, let's say the machine is 3.3.3.2/30, and the ISP's gateway is .1. The machine has no firewall settings whatsoever. Even running something like arping -I eth1 3.3.3.1, with tcpdump running alongside, shows no traffic whatsoever being transmitted on the interface. (But the other side keeps steadily sending ARP requests, and that is all that can be seen.) What could be causing this? Here's some output, anonymized, which may hopefully help:

        $ ethtool eth1
        Settings for eth1:
                Supported ports: [ TP ]
                Supported link modes:   10baseT/Half 10baseT/Full
                                        100baseT/Half 100baseT/Full
                                        1000baseT/Full
                Supports auto-negotiation: Yes
                Advertised link modes:  Not reported
                Advertised auto-negotiation: No
                Speed: 10Mb/s
                Duplex: Full
                Port: Twisted Pair
                PHYAD: 1
                Transceiver: internal
                Auto-negotiation: off
                Supports Wake-on: d
                Wake-on: d
                Link detected: yes

        $ ip link show eth1
        3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
            link/ether 00:15:c5:xx:xx:xx brd ff:ff:ff:ff:ff:ff

        $ ip -4 addr show eth1
        3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
            inet 3.3.3.2/30 brd 3.3.3.3 scope global eth1

        $ ip -4 route show match 3.3.3.0/30
        3.3.3.0/30 dev eth1  proto kernel  scope link  src 3.3.3.2
        default via 10.0.0.5 dev eth0
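    A small, hedged diagnostic sketch that sometimes narrows this kind of problem down: check whether the driver/firmware is counting TX attempts or TX errors at all, and whether packets are being dropped before they reach the wire.

        # Per-driver TX statistics and error counters (bnx2 exposes quite a few)
        ethtool -S eth1 | grep -i tx

        # Software-level drops and queue counters on the interface
        ip -s link show eth1

        # Retry the raw ARP probe while capturing on the same box
        arping -c 3 -I eth1 3.3.3.1 & tcpdump -ni eth1 arp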

    Read the article

  • fstab and cifs mounting, possible to store authentication information outside of fstab?

    - by tj111
    I am currently using cifs to mount some network shares (that require authentication) in /etc/fstab. It works excellently, but I would like to move the authentication details (username/pass) out of fstab and into a file I can chmod 600 (fstab itself can cause issues if I change its permissions). I was wondering if it is possible to do this; it's a many-user system and I don't want the credentials to be viewable by all users.

        from: //server/foo/bar /mnt/bar cifs username=user,password=pass,r 0 0
        to:   //server/foo/bar /mnt/bar cifs <link to permissions>,r 0 0

    (or something analogous to this). Thanks.
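    mount.cifs supports exactly this via its credentials= option (see mount.cifs(8)); a minimal sketch, with the file path chosen arbitrarily:

        # /etc/fstab
        //server/foo/bar  /mnt/bar  cifs  credentials=/etc/cifs.bar.cred,r  0  0

        # /etc/cifs.bar.cred  (chown root:root, chmod 600)
        username=user
        password=pass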

    Read the article

  • Server suddenly running out of entropy

    - by Creshal
    Since a reboot yesterday, one of our virtual servers (Debian Lenny, virtualized with Xen) is constantly running out of entropy, leading to timeouts etc. when trying to connect over SSH / TLS-enabled protocols. Is there any way to check which process(es) is(/are) eating up all the entropy?

    Edit: what I tried:

      - Adding additional entropy sources: time_entropyd, rng-tools feeding urandom back into random, pseudorandom file accesses. Netted about 1 MiB additional entropy per second, problems still persisted.
      - Checking for unusual activity via lsof, netstat and tcpdump: nothing. No noticeable load or anything.
      - Stopping daemons, restarting permanent sessions, rebooting the entire VM: no change in behaviour.

    What in the end worked: waiting. Since about yesterday noon, there are no connection problems anymore. Entropy is still somewhat low (128 bytes peak), but TLS/SSH sessions have no noticeable delay anymore. I'm slowly switching our clients back to TLS (all five of them!), but I don't expect any change in behaviour now.
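    A hedged sketch of how one might answer the "who is draining the pool" question (lsof/fuser availability is assumed; the /proc interface is standard):

        # Current pool size, refreshed every second
        watch -n1 cat /proc/sys/kernel/random/entropy_avail

        # Anything holding /dev/random open right now (blocking reads drain the pool)
        lsof /dev/random
        fuser -v /dev/random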

    Read the article

  • Configuring postfix with Gmail

    - by MultiformeIngegno
    This is what I did..

        sudo apt-get install postfix

    This is my /etc/postfix/main.cf:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=no
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        myhostname = tsXXX561.server.topcloud.it
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination =
        relayhost = [smtp.gmail.com]:587
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = loopback-only
        default_transport = smtp
        relay_transport = smtp
        inet_protocols = all
        # SASL Settings
        smtp_use_tls=yes
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_tls_security_options = noanonymous
        smtp_tls_CAfile = /etc/postfix/cacert.pem

    Then I created the file /etc/mailname with my hostname as content:

        tsXXX561.server.topcloud.it

    Then I created the file /etc/postfix/sasl_passwd:

        [smtp.gmail.com]:587 [email protected]:gmail_password

    Then:

        sudo postmap /etc/postfix/sasl/passwd
        sudo cat /etc/ssl/certs/Thawte_Premium_Server_CA.pem | sudo tee -a /etc/postfix/cacert.pem
        service postfix restart

    Still sends nothing... I'm on Ubuntu Server 12.04.
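    Not a definitive fix, just a hedged observation and a sketch of the checks that seem worth running: main.cf references hash:/etc/postfix/sasl_passwd, but the postmap command above was run against /etc/postfix/sasl/passwd, so the .db map Postfix actually looks for may never have been built. Gmail on port 587 also requires STARTTLS plus authentication, and the mail log usually says precisely what it is unhappy about. (The mail(1) client below comes from mailutils; the recipient address is a placeholder.)

        # Build the map at the path main.cf actually references
        sudo postmap /etc/postfix/sasl_passwd
        sudo chmod 600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
        sudo service postfix restart

        # Send a test message and watch what happens to it
        echo test | mail -s "postfix test" you@example.com
        tail -f /var/log/mail.log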

    Read the article

  • Best practice, or generally best way to set up web-hosting server, permissions, etc.

    - by Jagot
    Hi, I'm about to set up a server on which a friend and I will be hosting web sites, and I'll be using Debian. I've set up a LAMP stack many times for local testing purposes, but never for actual production use. I was wondering what the best practices are in terms of setting the server up, specifically with regard to accessing the web root directory. A couple of the options I have seen:

      - Set up a single user account on the server for us both to use, and point a virtual host somewhere in that home directory, e.g. /home/webdev/www.
      - Set each of us up with a user account and grant permissions in some way to /var/www (what would be the best way, a new group?).

    I want to get this right when I first set it up, as there won't be any going back for a while once our first site is up and running. Appreciate any guidance in advance.
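    For the second option, a hedged sketch of the usual shared-group arrangement (group and user names are placeholders):

        # Create a group for the web developers and add both accounts to it
        sudo groupadd webdev
        sudo usermod -aG webdev alice
        sudo usermod -aG webdev bob

        # Give the group write access; the setgid bit keeps new files in the group
        sudo chown -R root:webdev /var/www
        sudo chmod -R 2775 /var/www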

    Read the article

  • Execute a script with root permission

    - by Bastien974
    Hi all, I need a script that will chown/chmod some files. This script needs to be executable by any user. The problem is that those files are owned by different users, so it needs to be executed as root. I tried setting the SUID bit so that any user with execute permission could run the script as root, but that doesn't work on a bash script for security reasons. How can I do this? Thanks.
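    The usual workaround is sudo rather than SUID; a hedged sketch, with the script path and group name as placeholders:

        # Added via visudo (or as a file under /etc/sudoers.d where supported):
        # members of the "users" group may run this one script as root, no password
        %users ALL=(root) NOPASSWD: /usr/local/bin/fixperms.sh

        # Users then run it as:
        sudo /usr/local/bin/fixperms.sh

    Keep the script owned by root and not writable by anyone else, otherwise the sudo rule effectively hands out root.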

    Read the article

  • Cannot understand this script

    - by Jim
    Can someone help me understand this script? It is from sysconf_add and I am new to scripting. I need to do something similar.

        function add_word() {
            local word=$1
            local word_quoted=$2
            if ! word_present; then
                $debug && cp $file $tmpf
                sed -i -e "${lineno} { s/^[[:space:]]*\($var=\".*\)\(\".*\)/\1 $word_quoted\2/; s/=\" /=\"/ }" $file
                $debug && diff -u $tmpf $file
            else
                echo \"$word\" already present
            fi
            # some balancing for vim"s syntax highlighting
        }
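    A rough reading, using the names from the snippet (word_present, debug, file, tmpf, lineno and var are set elsewhere in sysconf_add): if the word is not already present, the sed command edits line ${lineno} of $file in place. The first substitution captures everything from the start of the line through $var=" up to the last double quote and re-emits it with a space and $word_quoted inserted just before that closing quote; the second substitution removes the stray space that appears when the quoted value was empty. With debug on, the file is copied first and diffed afterwards so the change is visible. For example, assuming var=OPTIONS and word_quoted=quiet, a matched line would change roughly like this:

        OPTIONS="ro single"        # before
        OPTIONS="ro single quiet"  # after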

    Read the article

  • Out Of Memory Error - Magento

    - by robobobobo
    OK, normally I understand when my server gives me out-of-memory errors, but this one has me stumped! I'm running a Magento-based site with one or two plugins; the rest is pretty basic. The site runs and loads fine with no issues. However, in the backend under Configuration - Payment Methods it gives me the following out-of-memory error:

        Fatal error: Out of memory (allocated 39059456) (tried to allocate 85 bytes) in ########/Varien/Simplexml/Element.php on line 84

    Now this is where I'm confused: it has allocated more than it tried to allocate? Am I correct there? So how is it running out of memory? My server has 6 GB RAM, an SSD and 2 CPUs running WHM with a few other low-traffic sites on it. I set my PHP memory limit to 100 MB, 1000 MB and finally unlimited, but all to no avail! I'm completely lost here and would really appreciate some expertise on this. Cheers
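    One hedged observation plus a sketch of checks: PHP distinguishes "Allowed memory size of N bytes exhausted" (the memory_limit setting) from the plain "Out of memory" above, which generally means the operating system or a process-level cap refused the allocation, so raising memory_limit alone may not touch it. The paths below are assumptions.

        # CLI value; compare against a phpinfo() page for the web server's value
        php -i | grep memory_limit

        # Magento ships its own php_value override in its .htaccess
        grep -n memory_limit /path/to/magento/.htaccess

        # Per-process caps on the web server user can also trigger plain "Out of memory"
        ulimit -a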

    Read the article

  • CentOS send mail with external SMTP server and without local daemons

    - by Vilx-
    I've got a little old server with CentOS 6.5 on it. The hardware is old and crappy, but enough for what it has to do, which consists of SSH (+SFTP), Apache, PHP and MySQL. Still, I'm trying to cut away everything I can. One thing it does not need to do is be an SMTP server: there are no mailboxes on it and nobody will ever route mail through it. However, I do want it to send me an email when something goes wrong, and the web pages will send emails from PHP. So that brings me to the question: can I set up the mail system in such a way that there isn't an expensive mailer daemon sitting in the background with queues and whatnot, but rather every email is directly and immediately delivered to an external SMTP server? And how do I go about it?
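    A hedged sketch of one common answer: replace the local MTA with a tiny sendmail-compatible forwarder such as ssmtp or msmtp (package availability on CentOS, e.g. via EPEL, is an assumption). Both hand each message straight to the remote smarthost with no queue or resident daemon; PHP keeps working because ssmtp provides a sendmail-compatible /usr/sbin/sendmail (msmtp can be wired in via PHP's sendmail_path).

        # /etc/ssmtp/ssmtp.conf  (values are placeholders)
        mailhub=smtp.example.com:587
        AuthUser=user@example.com
        AuthPass=secret
        UseSTARTTLS=YES
        FromLineOverride=YES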

    Read the article

  • SSH issues: Read from socket failed: Connection reset by peer

    - by nitins
    I compiled OpenSSH_6.6p1 on one of our servers. I am able to log in via SSH to the upgraded server, but from it I am not able to connect to other servers running OpenSSH_6.6p1 or OpenSSH_5.8. While connecting I get the error below.

        Read from socket failed: Connection reset by peer

    On the destination server, the logs show:

        sshd: fatal: Read from socket failed: Connection reset by peer [preauth]

    I tried specifying the cipher_spec [ ssh -c aes128-ctr destination-server ] as mentioned here and was able to connect. How can I configure ssh to use this cipher by default? And why is the cipher required here?
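    A minimal sketch of making the workaround permanent on the client side (the cipher list is just the one that worked above; system-wide vs per-user location is your choice):

        # /etc/ssh/ssh_config  or  ~/.ssh/config
        Host *
            Ciphers aes128-ctr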

    Read the article

  • NTP daemon or ntpdate doesn't synchronize

    - by user2862333
    I'm having some problems with synchronization with an NTP server.

    1) The NTP daemon doesn't sync the system clock at all, even though it's running (confirmed with /etc/init.d/ntp status). Forcing it to sync with ntpd -q or ntpd -gq does not work either.

    2) Stopping the NTP daemon and syncing manually with ntpdate gives me the following output:

        ~# ntpdate -d 0.debian.pool.ntp.org
        6 Nov 16:48:53 ntpdate[4417]: ntpdate [email protected] Sat May 12 09:07:19 UTC 2012 (1)
        transmit(79.132.237.5) receive(79.132.237.5) transmit(85.234.197.2) receive(85.234.197.2)
        transmit(194.50.97.34) receive(194.50.97.34) transmit(79.132.237.1) receive(79.132.237.1)
        transmit(79.132.237.5) receive(79.132.237.5) transmit(85.234.197.2) receive(85.234.197.2)
        transmit(194.50.97.34) receive(194.50.97.34) transmit(79.132.237.1) receive(79.132.237.1)
        transmit(79.132.237.5) receive(79.132.237.5) transmit(85.234.197.2) receive(85.234.197.2)
        transmit(194.50.97.34) receive(194.50.97.34) transmit(79.132.237.1) receive(79.132.237.1)
        transmit(79.132.237.5) receive(79.132.237.5) transmit(85.234.197.2) receive(85.234.197.2)
        transmit(194.50.97.34) receive(194.50.97.34) transmit(79.132.237.1) receive(79.132.237.1)

        server 79.132.237.5, port 123
        stratum 2, precision -20, leap 00, trust 000
        refid [79.132.237.5], delay 0.05141, dispersion 0.00145
        transmitted 4, in filter 4
        reference time:      d624e3b1.f490b90d  Wed, Nov 6 2013 16:50:09.955
        originate timestamp: d624e457.eaaf787c  Wed, Nov 6 2013 16:52:55.916
        transmit timestamp:  d624e36c.4a7036fd  Wed, Nov 6 2013 16:49:00.290
        filter delay:  0.08537 0.05141 0.05151 0.06346 0.00000 0.00000 0.00000 0.00000
        filter offset: 235.6038 235.6087 235.6095 235.6068 0.000000 0.000000 0.000000 0.000000
        delay 0.05141, dispersion 0.00145
        offset 235.608782

        server 85.234.197.2, port 123
        stratum 2, precision -20, leap 00, trust 000
        refid [85.234.197.2], delay 0.05151, dispersion 0.00336
        transmitted 4, in filter 4
        reference time:      d624e3e7.dc6cd02b  Wed, Nov 6 2013 16:51:03.861
        originate timestamp: d624e458.1c91031f  Wed, Nov 6 2013 16:52:56.111
        transmit timestamp:  d624e36c.7da1d882  Wed, Nov 6 2013 16:49:00.490
        filter delay:  0.05765 0.07750 0.06013 0.05151 0.00000 0.00000 0.00000 0.00000
        filter offset: 235.6048 235.6014 235.6035 235.6078 0.000000 0.000000 0.000000 0.000000
        delay 0.05151, dispersion 0.00336
        offset 235.607826

        server 194.50.97.34, port 123
        stratum 3, precision -23, leap 00, trust 000
        refid [194.50.97.34], delay 0.03021, dispersion 0.00090
        transmitted 4, in filter 4
        reference time:      d624e38d.2bce952c  Wed, Nov 6 2013 16:49:33.171
        originate timestamp: d624e458.4dbbc114  Wed, Nov 6 2013 16:52:56.303
        transmit timestamp:  d624e36c.b0d38834  Wed, Nov 6 2013 16:49:00.690
        filter delay:  0.03030 0.03636 0.03091 0.03021 0.00000 0.00000 0.00000 0.00000
        filter offset: 235.6095 235.6085 235.6098 235.6105 0.000000 0.000000 0.000000 0.000000
        delay 0.03021, dispersion 0.00090
        offset 235.610589

        server 79.132.237.1, port 123
        stratum 3, precision -20, leap 00, trust 000
        refid [79.132.237.1], delay 0.05113, dispersion 0.00305
        transmitted 4, in filter 4
        reference time:      d624dfcb.6acea332  Wed, Nov 6 2013 16:33:31.417
        originate timestamp: d624e458.838672ad  Wed, Nov 6 2013 16:52:56.513
        transmit timestamp:  d624e36c.e405181c  Wed, Nov 6 2013 16:49:00.890
        filter delay:  0.06345 0.05113 0.05681 0.05656 0.00000 0.00000 0.00000 0.00000
        filter offset: 235.6087 235.6038 235.6010 235.6074 0.000000 0.000000 0.000000 0.000000
        delay 0.05113, dispersion 0.00305
        offset 235.603888

        6 Nov 16:49:00 ntpdate[4417]: step time server 79.132.237.5 offset 235.608782 sec

    Clearly, ntpdate can reach the NTP servers, but when I check the clock afterwards it hasn't changed and is still displaying the wrong time. Any ideas what the problem might be would be much appreciated.
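    A hedged note that may explain point 2, per the ntpdate man page: -d is debug mode, which goes through all the steps but deliberately never sets the clock, so the unchanged time after that run is expected. Something along these lines actually steps it (server name as used above):

        /etc/init.d/ntp stop
        ntpdate 0.debian.pool.ntp.org
        /etc/init.d/ntp start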

    Read the article

  • Ubuntu Natty 11.04: turning the wireless switch off switches it off permanently!

    - by ZiGi
    I'm using an HP Pavilion dv2000. I turned the wifi switch off by mistake, the LED turned orange and the wifi got disconnected. Now when I turn the switch back on, the LED remains orange and the wifi still isn't functional. This happened before; I found a fix that worked by searching Google. It was done via terminal commands and I didn't have to download anything, but I can't find the solution anymore! wlan0 shows up when I use:

        :~$ iwconfig
        #BLA BLA BLA
        #...
        wlan0     IEEE 802.11abg  ESSID:off/any
                  Mode:Managed  Access Point: Not-Associated  Tx-Power=off
                  Retry long limit:7  RTS thr:off  Fragment thr:off
                  Power Management:off

    more results:

        :~$ sudo ifconfig wlan0 up
        SIOCSIFFLAGS: Operation not possible due to RF-kill
        :~$ rfkill list all
        1: phy0: WirelessLAN
            Soft blocked: yes
            Hard blocked: yes
        :~$ sudo rfkill unblock all
        :~$ rfkill list all
        1: phy0: WirelessLAN
            Soft blocked: no
            Hard blocked: yes
        :~$ sudo ifconfig wlan0 up
        SIOCSIFFLAGS: Operation not possible due to RF-kill

    It's still hard blocked, even though the switch is turned on; it gives the same result either way. A direction to a page with a working solution is a much appreciated answer!
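    Not the remembered fix, just a hedged sketch of what is often tried for a stuck hard block, since a hard block is reported by the firmware/platform driver rather than set in software: reload the wireless driver together with HP's platform module (module names below are assumptions; check lsmod and lspci -k for the ones actually in use).

        # Find the driver behind wlan0
        lspci -k | grep -A3 -i network

        # Reload HP's hotkey/rfkill platform driver and the wifi driver
        sudo modprobe -r hp_wmi && sudo modprobe hp_wmi
        sudo modprobe -r ath5k && sudo modprobe ath5k   # replace ath5k with your driver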

    Read the article

  • FTP server (vsftpd) with web GUI

    - by manutenfruits
    I want to build a file server so users can upload and download mostly multimedia, but also common files. Right now I have an Arch installation with vsftpd, and I'm about to install miniDLNA for multimedia sharing. The only problem is that FTP doesn't seem to fit my needs, because users almost always need a client such as FileZilla for the server to be friendly. I have been looking for a web frontend for vsftpd, but apart from management interfaces there's nothing. I need a frontend accessible from a browser through which users can navigate the folders in an easier and more elegant way than the plain FTP listing browsers render by default. It should let users upload files and, as an awesome extra, let them play the multimedia directly in the browser. For this, I am willing to drop FTP if needed; I've heard about HTTP file servers but don't know much about them. I could code everything myself, but there's gotta be something out there already.
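    A hedged sketch of the HTTP route using plain Apache with WebDAV, which covers browsing, downloads and uploads in one config (module availability and paths are assumptions; in-browser playback then depends on the file types and the browser):

        # Apache vhost fragment, with mod_dav and mod_dav_fs enabled
        # (a DavLockDB directive is usually needed as well)
        Alias /share /srv/share
        <Directory /srv/share>
            Dav On
            Options +Indexes
            AuthType Basic
            AuthName "File share"
            AuthUserFile /etc/httpd/dav.passwd
            Require valid-user
        </Directory>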

    Read the article

  • my.cnf in server directory, why

    - by Mellon
    On my Ubuntu machine I have installed MySQL. I notice there is an /etc/my.cnf file that contains only two lines:

        innodb_buffer_pool_size = 1G
        max_allowed_packet = 512M

    while there is also /etc/mysql/my.cnf with much longer content, like:

        # The MySQL database server configuration file.
        ...
        ...

    To me it looks like both are configuration for the MySQL server, but why are there two my.cnf files in different locations? Can't the content be merged into one my.cnf? What is the purpose of having separate my.cnf files for the MySQL server?
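    A small sketch of how to see for yourself which files the server reads, and in what order (later files override earlier ones for settings they both define), rather than guessing:

        mysqld --verbose --help 2>/dev/null | grep -A1 'Default options'
        # typically prints something like:
        #   Default options are read from the following files in the given order:
        #   /etc/my.cnf /etc/mysql/my.cnf ~/.my.cnf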

    Read the article

  • How to remove the background layer of a djvu file

    - by Jon
    Hello, I've downloaded some files from the Internet Archive. They come in different file formats and most of the time I use PDF. However, sometimes the scans are saved in colour instead of b/w, which makes them difficult or impossible to read on a dedicated ebook reader. In that case I download the djvu files, since on the PC you can select which layer (colour, b/w, foreground, background) you would like to see; selecting the b/w layer gives excellent results. However, the ebook reader does not have this option. The question is: how can I remove/extract a layer from the djvu file and save only that layer? So far I've tried the following two approaches:

      1) Select b/w in the djvu viewer on the PC and print to a PostScript file, followed by a ps2pdf conversion. This works, but generates a fairly large PDF file. Sure, I can upload it to any2djvu again, but it just seems too much manual work for each file.

      2) I tried the shared annotation feature and set (mode bw). This works on the PC as desired, but is ignored on the ebook reader, as the other layers are still present.

    Any help or suggestions would be greatly appreciated.
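    A hedged sketch using the djvulibre command-line tools, which may avoid the print-to-PostScript detour (the -mode values are from ddjvu's documentation; whether "black" or "foreground" gives the result your reader needs is worth testing on one file first):

        # Render only the bitonal/foreground content straight to PDF
        ddjvu -format=pdf -mode=black input.djvu output-bw.pdf

        # Or back to a smaller, single-layer set of images for re-DjVu-ing
        ddjvu -format=tiff -mode=foreground input.djvu output-fg.tif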

    Read the article

  • Different external IP addresses from different sites

    - by user630286
    My router runs ClearOS 6 (CentOS 6). In the router I have two external (internet) connections from two ISPs: the primary connection is eth2, connected to a cable modem, and the secondary is ppp0, connected to a DSL modem. I have assigned eth2 as the primary connection (with a high metric value); this was done through ClearOS's multi-WAN web interface. I have a Nagios test to monitor whether the primary connection is in use, based on the result of curl ifconfig.me. But it seems ifconfig.me always returns the IP address of my secondary connection; I tested it through a browser, and yes, ifconfig.me gives the secondary internet's (ppp0) IP address, while whatismyipaddress.[com|org] gives my primary address (eth2). I checked the default route on the router with ip route list 0/0, which also shows the primary connection (eth2) as the default route. traceroute www.google.com and traceroute ifconfig.me both seem to go out through the primary connection (eth2). As our secondary connection has only a limited download allowance, I don't want to end up paying a large sum at the end of the month. Has anybody got an idea why ifconfig.me shows my secondary address? What is the best way to ensure that my router (and thus the LAN) uses the right internet connection?
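    A hedged sketch of checks that separate "which uplink does the kernel pick" from "which uplink does the multi-WAN balancer pick for this particular destination" (curl's --interface option and dig are assumed to be available on the router):

        # Route the kernel would choose for ifconfig.me's address right now
        ip route get $(dig +short ifconfig.me | head -1)

        # What each uplink's public address actually is, pinned per interface
        curl --interface eth2 ifconfig.me
        curl --interface ppp0 ifconfig.me

    If the multi-WAN rules balance per destination or per connection, a single external "what is my IP" site is not a reliable primary-link test on its own.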

    Read the article
