Search Results

Search found 41561 results on 1663 pages for 'linux command'.


  • Different external ip addresses from different sites

    - by user630286
    My router is ClearOS 6 (CentOS 6). In this router I have two external (internet) connections from two ISPs: the primary is eth2, connected to a cable modem, and the secondary is ppp0, connected to a DSL modem. I have assigned eth2 as the primary connection (with a high metric value); in fact this is done through ClearOS's multi-WAN web interface. I have a Nagios test that monitors whether the primary connection is up, based on the result of "curl ifconfig.me". But ifconfig.me always seems to return the IP address of my secondary connection. I tested it through a browser: ifconfig.me gives the secondary (ppp0) address, while whatismyipaddress.[com|org] give my primary (eth2) address. I checked the default route on the router with "ip route list 0/0", which also shows the primary connection (eth2) as the default route, and both "traceroute www.google.com" and "traceroute ifconfig.me" seem to go out through the primary connection (eth2) as well. As our secondary internet connection only has a limited download allowance, I don't want to end up having to pay a large sum at the end of the month. Has anyone got an idea why ifconfig.me shows my secondary address? What is the best way to ensure that my router (and thus the LAN) uses the right internet connection?
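
    If the multi-WAN setup is load-balancing individual outbound connections across both links, a single request can leave via either uplink, which would explain the discrepancy between the two "what is my IP" services. One way to make the Nagios check unambiguous is to ask the same service from each interface explicitly; a small sketch, assuming curl is available on the router and the interface names above:

        # ask the same service from each uplink explicitly
        curl --interface eth2 ifconfig.me
        curl --interface ppp0 ifconfig.me
        # compare with whatever the routing/multi-WAN logic picks on its own
        curl ifconfig.me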

    Read the article

  • How do I limit concurrent sftp / port forwarding logins

    - by Kyoku
    I have ssh set up so my users can only access sftp and port forwarding. How can I limit the number of concurrent logins on a per-user basis? In my sshd_config I have UsePAM set to yes, and in /etc/security/limits.conf I have:

        username - maxlogins 1

    I also tried:

        username hard maxlogins 1

    Neither of these works, and the users can still log in multiple times.
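
    For limits.conf to be applied to SSH sessions, the pam_limits module has to run in sshd's PAM session stack; a minimal sketch of what to check (the exact file layout and service name vary by distribution):

        # /etc/pam.d/sshd should pull in pam_limits, otherwise limits.conf is ignored
        session    required     pam_limits.so

        # confirm sshd really uses PAM, then restart it (service name may be ssh or sshd)
        grep -i '^UsePAM' /etc/ssh/sshd_config
        service ssh restart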

    Read the article

  • Append symbolic link to served media

    - by Hellnar
    I have a non-served folder:

        nonserved/
            folder1/
            folder2/

    and a folder served via Apache:

        media/
            js/
            css/
            img/

    In the end, I want to include/append the contents of nonserved/ under media/, so that www.mysite.com/media looks like this:

        /media
            /js
            /css
            /img
            /folder1
            /folder2

    I am running Ubuntu Server, and I am open to either an Apache-config or a symbolic-link based answer. Also, the nonserved folder is rather dynamic, so manually symlinking each folder is impractical.
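
    Since nonserved/ changes over time, one option is to refresh a set of symlinks inside the served media/ directory whenever it changes, and make sure Apache is willing to follow them. A sketch only, with placeholder paths:

        # refresh one symlink per subfolder of nonserved/ (paths are placeholders)
        for d in /path/to/nonserved/*/; do
            ln -sfn "$d" "/var/www/media/$(basename "$d")"
        done

        # Apache must allow symlinks in the media directory, e.g.:
        #   <Directory /var/www/media>
        #       Options FollowSymLinks
        #   </Directory>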

    Read the article

  • Set custom mount point and mount options for USB stick

    - by kayahr
    I have a USB stick which contains private stuff like my SSH key. I want to mount this stick into my own home directory with 0700 permissions. Currently I do this with this line in /etc/fstab:

        LABEL=KAYSTICK /home/k/.kaystick auto rw,user,noauto,umask=077,fmask=177 0 0

    This works great, but there is one minor problem: Nautilus (the GNOME file manager) displays the mount point ".kaystick". I guess Nautilus simply scans /etc/fstab and displays everything it finds there. This entry is pretty useless, because it can't be clicked when the device is absent, and it can't be clicked when the device is present either (because then it is already mounted). I know this is a really minor problem and I could simply ignore it, but I'm a perfectionist and want to get rid of this useless mount point in Nautilus. Is there another way to customize the mount point and mount options for a specific USB device? Maybe it can be configured in udev? If yes, how?
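
    One direction worth experimenting with is a udev rule that asks udisks to hide the volume from the desktop while the fstab line keeps providing the custom mount point and permissions. This is only a sketch, and which ENV key applies depends on whether udisks1 or udisks2 is in use:

        # /etc/udev/rules.d/99-hide-kaystick.rules  -- sketch only
        ENV{ID_FS_LABEL}=="KAYSTICK", ENV{UDISKS_PRESENTATION_HIDE}="1", ENV{UDISKS_IGNORE}="1"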

    Read the article

  • Apache - The name

    - by Joshua Enfield
    I am working on a migration to a newer virtualized server. The old one has Apache 2.2.4, according to the old server's phpinfo(). The new one, with the most up-to-date packages installed, has 2.2.3. How can this be, assuming no trickery is involved? The old server is years old. A lot of the guides I reference use "apache2" in folder names and in many of the conventions, while the newest version of things, as I understand it, is called "httpd". Did Apache change the name from what it originally was? (I.e., was the web server component broken out into its own project called httpd? I realize the original daemon was probably still called httpd.)

    Read the article

  • Partition Label Problem after DD Image

    - by bobby
    After imaging a 100GB hard drive into an image file with dd, I dd'd the image onto a larger hard drive. On boot I get:

        mkrootdev: label / not found

    I have booted into Finnix and relabeled the partition to the same label with e2label, but I still have the problem. Has anyone resolved this before?
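
    Since the initrd is looking for a partition labeled "/", it may help to compare what the copied partition actually reports against what /etc/fstab and the bootloader on the new disk expect. A short sketch, with /dev/sdb1 standing in for the copied root partition:

        # /dev/sdb1 is a placeholder for the copied root partition
        blkid /dev/sdb1            # show the LABEL/UUID the partition carries now
        e2label /dev/sdb1 /        # set the label "/" that mkrootdev is asking for
        # also check that /etc/fstab and grub.conf on the new disk still refer to
        # LABEL=/ (or switch them to the device name / UUID instead)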

    Read the article

  • Is there a way to make nautilus display the "recently used" files and directories?

    - by Peltier
    Is there a way to make Nautilus display the "recently used" files and directories, just like the "open file" dialog does? To make my question clearer, here are two screenshots (not reproduced here): the GTK open-file dialog, showing the recently used items, and a Nautilus window, which doesn't offer to display them. EDIT: This has been filed as a feature request against Nautilus. Don't hesitate to make your voice heard if you want it to happen!

    Read the article

  • System time is off in Fedora

    - by NoClue
    I have a desktop PC with Fedora, Ubuntu and Windows installed, with GRUB used for multibooting. But the time in Fedora is always 5 hours behind. If I change the time in Fedora, then Windows and Ubuntu will be 5 hours ahead of the current time. I don't understand how to fix it. Any ideas? All the timezone settings in Fedora, Ubuntu and Windows are the same.
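
    A constant offset with identical timezone settings usually means the operating systems disagree about whether the hardware clock stores UTC or local time. A sketch of one way to make Fedora treat the hardware clock as local time (matching Windows' default), assuming an older Fedora release that reads /etc/sysconfig/clock:

        # /etc/sysconfig/clock (older Fedora releases) -- sketch only
        UTC=false

        # then write the current system time back to the hardware clock as local time
        hwclock --systohc --localtime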

    Read the article

  • Different graphics card drivers while booting from external media

    - by goran
    I am booting one of my systems with Ubuntu 9.10 from an external HDD. I am satisfied with the setup and it works fine; however, I would like to modify it so that I can choose which graphics-card driver to load at boot time. Specifically, I would like to choose between:

        - the NVIDIA proprietary driver
        - the ATI proprietary driver
        - the generic driver

    Currently, if the proprietary drivers don't match the machine I'm booting on, I don't boot into X; I delete xorg.conf, start gdm, and reconfigure the system using Jockey (for hardware drivers). What would be the steps to make this (semi-)automatic and avoid restarting X?
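
    One way to make the choice automatic is a small boot-time script that inspects the GPU with lspci and swaps in a matching xorg.conf before gdm starts. This is only a sketch; the file names xorg.conf.nvidia and xorg.conf.ati are assumptions for pre-made configs you would keep alongside the real xorg.conf:

        # run before gdm starts
        if lspci | grep -qi nvidia; then
            cp /etc/X11/xorg.conf.nvidia /etc/X11/xorg.conf
        elif lspci | grep -qiE 'ati|radeon'; then
            cp /etc/X11/xorg.conf.ati /etc/X11/xorg.conf
        else
            rm -f /etc/X11/xorg.conf    # fall back to the autodetected generic driver
        fi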

    Read the article

  • Ubuntu can't install an older version of a package

    - by Trevor Newhook
    When I try to do an apt-get install, I keep getting an error:

        Depends: libgtk-3-common (= 3.4.1-0ubuntu1) but 3.4.2-0ubuntu0.4 is to be installed

    When I run sudo apt-get -f install, I get several warnings of the form:

        dpkg: warning: files list file for package 'XXX' missing, assuming package has no files currently installed.

    followed by:

        Preparing to replace libgtk-3-bin 3.4.1-0ubuntu1 (using .../libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb) ...
        Adding 'diversion of /usr/sbin/update-icon-caches to /usr/sbin/update-icon-caches.gtk2 by libgtk-3-bin'
        dpkg-divert: error: rename involves overwriting `/usr/sbin/update-icon-caches.gtk2' with different file `/usr/sbin/update-icon-caches', not allowed
        dpkg: error processing /var/cache/apt/archives/libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb (--unpack):
        subprocess new pre-installation script returned error exit status 2
        Errors were encountered while processing:
        /var/cache/apt/archives/libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I'm not sure why it's complaining about a newer version of a package, but any help would be appreciated.
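
    One workaround sometimes suggested for this particular dpkg-divert "rename involves overwriting ... not allowed" failure is to clear the stale diversion that blocks the upgrade and then let apt retry. This is only a sketch built from the paths in the error output above; back the files up first:

        # back up, then remove the conflicting diverted copy and its diversion record
        sudo cp /usr/sbin/update-icon-caches.gtk2 /root/update-icon-caches.gtk2.bak
        sudo rm /usr/sbin/update-icon-caches.gtk2
        sudo dpkg-divert --remove /usr/sbin/update-icon-caches
        sudo apt-get -f install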

    Read the article

  • Accidentally dd'ed an image to wrong drive / overwrote partition table + NTFS partition start

    - by Kento Locatelli
    I screwed up and set the wrong output device for dd when trying to copy a FreeNAS ISO, overwriting the wrong external hard drive. Ironically, I was trying to set up a FreeNAS server for data backup...

        - The external drive is only used for data storage; the system is entirely intact.
        - The drive had a single NTFS partition filling the entire device (2TB WD Elements).
        - The drive originally had an MBR partition table; it now shows as having a GPT, presumably from the FreeNAS image.
        - The drive was mounted at the time, with maybe a couple of kB of data written/read after running dd.
        - The drive is just a few months old and healthy (regular SMART / fs checks).
        - I have not rebooted the OS (CrunchBang); /proc/partitions still holds the correct information (and has been saved).
        - I have dd's output (records in / out / bytes).
        - testdisk did not find any partitions on a quick or deep search.
        - I am running photorec to recover the more important data (a couple of recent plaintext files that hadn't been backed up yet). The vast majority of the disk content (>80%) is unnecessary media files.

    My current plan is to let photorec do its thing, then recreate the MBR with GParted and use cfdisk to create another NTFS partition using the sector information from /sys/block/.../. Is that a good course of action (that is, does it have a chance of success)? Or is there anything else I should try first?

    Possibly relevant information:

        dd if=FreeNAS-8.0.4-RELEASE-p3-x86.iso of=/dev/sdc:
        194568+0 records in
        194568+0 records out
        99618816 bytes (100 MB) copied

        grep . /sys/block/sdc/sdc*/{start,size}:
        /sys/block/sdc/sdc1/start:2048
        /sys/block/sdc/sdc1/size:3907022848

        cat /proc/partitions:
        major minor    #blocks  name
        ** Snipped **
        8        32  1953512448 sdc
        8        33  1953511424 sdc1

        current fdisk -l output:
        WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
        Disk /dev/sdc: 2000.4 GB, 2000396746752 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/sdc doesn't contain a valid partition table
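
    Given that the original start sector and size survived in /sys/block, recreating the single NTFS entry is mostly a matter of writing one MBR record at those offsets. Note, though, that dd wrote about 100 MB starting at sector 0 while the partition began at sector 2048 (1 MiB), so the first part of the NTFS filesystem, including its boot sector, was overwritten and a filesystem-level repair may still be needed afterwards. A sketch with sfdisk, to be run only after photorec has finished (older sfdisk may need -uS to take sector units, and may warn about the leftover GPT):

        # recreate a single NTFS (type 7) entry: start 2048, length 3907022848 sectors
        echo '2048,3907022848,7' | sudo sfdisk -uS /dev/sdc
        # then check whether the filesystem is visible again (check only, no changes)
        sudo ntfsfix -n /dev/sdc1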

    Read the article

  • How to start a service at boot time in ubuntu 12.04, run as a different user?

    - by Alex
    I have a server, ClueReleaseManager, which I have installed on an Ubuntu 12.04 system under a separate user (named pypi), and I want to be able to start this server at boot. I have already tried to create a simple bash script with a few commands (log in as user pypi, activate a virtual Python environment, start the server), but this does not work properly: either the terminal crashes, or when I ask for the status of the service it reports as started and I find myself logged in as user pypi...? So, here is the question: what are the steps to take to make sure the ClueReleaseManager service starts up properly at boot time, running as the user pypi, and that I can control (start/stop/...) while it is running? Additional information and constraints:

        - I want to do this as simply as possible, without any other packages/programs having to be installed.
        - I am not familiar with the Ubuntu 12.04 init structure.
        - All the information I found on the web is sparse, confusing, incorrect, or does not apply to my case of running a service as a user other than root.
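
    Ubuntu 12.04 uses Upstart for system services, and an Upstart job can drop privileges to another user with the setuid stanza, so no extra packages are needed. The sketch below is only an outline; the virtualenv path and executable name are assumptions that would have to match the actual install:

        # /etc/init/cluereleasemanager.conf  -- Upstart job sketch (Ubuntu 12.04)
        description "ClueReleaseManager"
        start on runlevel [2345]
        stop on runlevel [016]
        setuid pypi
        chdir /home/pypi
        exec /home/pypi/env/bin/cluereleasemanager

        # control it with:  sudo start cluereleasemanager  /  sudo stop cluereleasemanager  /  status cluereleasemanager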

    Read the article

  • High fan speed with no reason

    - by Klaus
    For a few weeks now, the fans of my Lenovo B590 laptop, running Xubuntu 14, turn to high speed a few minutes after it is turned on and won't slow down until I turn the computer off. This is quite strange, since:

        - This didn't happen before.
        - The temperatures are quite low (are they?):

            $ sensors
            Adapter: Virtual device
            temp1:         +36.0°C  (crit = +88.0°C)
            temp2:         +30.0°C  (crit = +126.0°C)
            coretemp-isa-0000
            Adapter: ISA adapter
            Physical id 0:  +37.0°C  (high = +72.0°C, crit = +90.0°C)
            Core 0:         +34.0°C  (high = +72.0°C, crit = +90.0°C)
            Core 1:         +31.0°C  (high = +72.0°C, crit = +90.0°C)
            thinkpad-isa-0000
            Adapter: ISA adapter
            fan1:           0 RPM
            pkg-temp-0-virtual-0
            Adapter: Virtual device
            temp1:         +37.0°C

            $ sudo hddtemp /dev/sda
            /dev/sda: ST500LT012-9WS142: 33°C

        - The computer is under low load:

            top - 08:30:15 up 16 min,  2 users,  load average: 0.28, 0.23, 0.23
            Tasks: 197 total,   1 running, 196 sleeping,   0 stopped,   0 zombie
            %Cpu(s):  0.8 us,  0.5 sy,  0.0 ni, 98.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
            KiB Mem:   3607944 total,  1973956 used,  1633988 free,    99660 buffers
            KiB Swap:  3744764 total,        0 used,  3744764 free.   789936 cached Mem

        - The BIOS is up to date (and there are no fan settings in it).
        - The fan is clean and dust-free.

    Why would the BIOS run the fans at high speed when there seems to be no reason for it? It seems the fan cannot be controlled manually on this model, so I guess the only solution is to understand why this happens.

    Read the article

  • How to route broadcast packets from machine with two network interfaces on same subnet

    - by Syam
    I run RHEL 5 and have two NICs on one machine connected to the same subnet:

        eth0 192.168.100.10
        eth1 192.168.100.11

    My application needs to receive and transmit UDP packets (both unicast and broadcast) via these interfaces. I've found the way to handle the ARP problem, and I've added routes to handle the routing problem:

        ip rule add from 192.168.100.10 lookup 10
        ip route add table 10 default src 192.168.100.10 dev eth0

    (and similarly, table 11 for eth1). The problem is that only unicast packets get routed properly; broadcast packets always go out through eth0. I tried removing the rules for 192.168.100.0 and 192.168.100.255 from table 255 and adding them to my tables, but then I see ARP requests being sent for packets to 192.168.100.255 (obviously, no nodes respond and nobody gets any data). Due to several techno-political issues, I'm stuck with this configuration and can't change subnets or try something different. I've tried SO_BINDTODEVICE and it works, but I'd prefer a solution that doesn't need my application to run as root. Is there a way to get this working? Any help is highly appreciated.
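
    Since SO_BINDTODEVICE already does what's needed, one option is to keep it but grant the binary just the capability the socket option requires (CAP_NET_RAW) instead of running the whole application as root. A sketch with a placeholder path, assuming the running kernel supports file capabilities (older stock RHEL 5 kernels may not):

        # grant only the capability SO_BINDTODEVICE needs; the path is a placeholder
        sudo setcap cap_net_raw+ep /path/to/your_app
        # verify:
        getcap /path/to/your_app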

    Read the article

  • How can I register a custom protocol with xdg?

    - by julien
    I've been struggling this morning trying to associate an application with a custom protocol, namely emacsclient and org-protocol. I'm calling this protocol from a web-browser bookmarklet, and I get the following behaviour:

        - In Chromium, the "Launch Application" dialog comes up and calls "xdg-open org-protocol://...", which ends up firing a new Chromium frame.
        - In Firefox, I've tried setting network.protocol-handler.app.org-protocol to an empty string or to my emacsclient path; either way I get the error message "Firefox doesn't know how to open this address, because the protocol (org-protocol) isn't associated with any program", without even being shown an external-application selection dialog.

    I'm not using any desktop environment, so I need to make this work strictly with xdg; however, despite reading the shared MIME info spec etc., I still can't work out a working configuration.
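
    The xdg way to claim a URI scheme is a .desktop file that declares an x-scheme-handler MIME type and is then registered as the default handler for it. A minimal sketch for org-protocol (the file name and location are just one reasonable choice):

        # ~/.local/share/applications/org-protocol.desktop
        [Desktop Entry]
        Name=org-protocol
        Exec=emacsclient %u
        Type=Application
        Terminal=false
        MimeType=x-scheme-handler/org-protocol;

        # then register it and refresh the desktop database:
        xdg-mime default org-protocol.desktop x-scheme-handler/org-protocol
        update-desktop-database ~/.local/share/applications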

    Read the article

  • How long does badblocks take on a 1TB drive?

    - by Steven Don
    I'm running badblocks (or rather "e2fsck -c") on a 1TB drive, and if the progress indicator is any indication (no pun intended), it's going to take almost forever to complete. Right now it says 0.01% done, 30:20 elapsed, which would mean the thing would take 17 weeks or so to complete, which seems rather excessive in my book. Is that a normal amount of time for such a check to take, or is it simply that my suspicions are correct and the drive is failing, causing the check to take only slightly less than an eternity? I found this question here, but that pertains to the number of passes done.

    Read the article

  • Processes spawned by taskset not respecting environment variables

    - by jonesy16
    I've run into an issue where an Intel-compiler-generated program that I'm running with taskset has been putting its temporary files into the working directory instead of /tmp (defined by the environment variable TMPDIR). If run by itself, it works correctly. If run with taskset (e.g. "taskset -c 0 <program>"), it seems to completely ignore the TMPDIR environment variable. I then verified this by writing a quick bash script as follows.

    Contents of test.sh:

        #!/bin/bash
        echo $TMPDIR

    When run by itself:

        $ export TMPDIR=/tmp
        $ test.sh
        /tmp

    When run through taskset:

        $ export TMPDIR=/tmp
        $ taskset -c 1 test.sh
        ""

    Another test: if I export the TMPDIR variable inside my script and then use taskset to spawn a new process, it doesn't know about that variable:

        #!/bin/bash
        export TMPDIR=/tmp
        taskset -c 1 sh -c export

    When run, the list of exported variables does not include TMPDIR. It works correctly with any other exported environment variable. If I diff the output of "export" and "taskset -c 1 bash -c export", I see four changes: the taskset-spawned export doesn't have LD_LIBRARY_PATH or NLSPATH (an Intel compiler variable), SHLVL is 3 instead of 1, and TMPDIR is missing. Can anyone tell me why?

    Read the article

  • Why does my MAC address reset after reconnecting?

    - by Mr.Student
    I have Ubuntu 12. I'm changing my MAC address with:

        ifconfig wlan0 hw ether xx:xx:xx:xx:xx:xx

    which works. However, when I restart my connection, the MAC reverts to the original address. I'm guessing this happens because something effectively does:

        ifconfig wlan0 down
        ... do something before connecting
        ifconfig wlan0 up
        ... connect to the designated access point

    I want my MAC address to stay the same no matter how many times I disconnect and reconnect, whether to another network or the same one. It would also be nice to turn off the auto-connect feature of the network manager without having to edit each individual connection. Lastly, I would like to know how to connect to a wifi network from the terminal rather than via the GUI network manager Ubuntu provides.
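
    Since NetworkManager reconfigures the interface every time it reconnects, it tends to undo changes made with ifconfig; one approach is to let NetworkManager apply the spoofed MAC itself via the connection's cloned-mac-address setting, and to use nmcli from the terminal. A sketch only; the connection name below is a placeholder:

        # /etc/NetworkManager/system-connections/MyWifi  (connection name is a placeholder)
        [802-11-wireless]
        cloned-mac-address=xx:xx:xx:xx:xx:xx

        # bring a connection up from the terminal instead of the GUI:
        nmcli con up id "MyWifi"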

    Read the article

  • SFTP through proxy

    - by aerodynamic_props
    I have a large amount of data in scratch space on computer b that I want to retrieve. On my network I cannot connect directly to computer b (ssh exits with "No route to host"); I must first connect to computer a and then connect to computer b from there. I cannot stage the data from the scratch space on computer b onto computer a because of a disk quota imposed on me at computer a. How can I move the data from computer b to my own computer in this situation?
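
    OpenSSH can chain through the intermediate host with a ProxyCommand, so sftp/scp on your machine talks to computer b while the traffic is only relayed through computer a and the data never touches its quota. A sketch with placeholder hostnames, usernames and paths (-W needs a reasonably recent OpenSSH on computer a; older versions can use "nc %h %p" in the ProxyCommand instead):

        # fetch a file from computer b, relayed through computer a
        sftp -o ProxyCommand="ssh user@computer_a -W computer_b:22" user@computer_b:/scratch/data/bigfile .

        # or copy a directory tree recursively with scp
        scp -r -o ProxyCommand="ssh user@computer_a -W computer_b:22" user@computer_b:/scratch/data .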

    Read the article

  • Only tunnel certain applications via OpenVPN

    - by jinjin
    I've purchased a VPN solution, and it works correctly when I have "redirect-gateway def1" in the configuration file (routing all traffic through the VPN). However, when I remove that line from the configuration file, I am still able to ping out of the machine (ping -I tap0), but I cannot ping the IP assigned to the machine (it's a public IP); I get the error "Destination Host Unreachable". I only want certain applications (e.g. ZNC, irssi) to send traffic through the VPN tunnel, all of which let me select which IP they use. However, they can't receive any data, making the tunnel essentially useless to me when redirect-gateway is disabled. Any ideas on how to let specific applications use the tunnel, without forcing everything to go through it? My configuration file is as follows:

        dev tap
        remote #.#.#.#
        float #.#.#.#
        port 5129
        comp-lzo
        ifconfig #.#.#.# 255.255.255.128
        route-gateway #.#.#.#
        #redirect-gateway def1
        secret key.txt
        cipher AES-128-CBC

    The output of ifconfig -a when the tunnel is connected:

        tap0    Link encap:Ethernet  HWaddr 00:ff:47:d3:6d:f3
                inet addr:#.#.#.#  Bcast:#.#.#.#  Mask:255.255.255.255
                inet6 addr: <snip> Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:612 errors:0 dropped:0 overruns:0 frame:0
                TX packets:35 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:100
                RX bytes:25704 (25.1 KiB)  TX bytes:6427 (6.2 KiB)

    EDIT: the Bcast:#.#.#.# (ifconfig) is different from route-gateway #.#.#.# (openvpn), if that makes any difference.
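
    Because the applications can already bind to the VPN-assigned address, the missing piece is usually source-based policy routing: replies that originate from the tunnel IP have to be sent back out through tap0 rather than the normal default gateway. A sketch, with the tunnel address and route-gateway as placeholders:

        # route traffic sourced from the VPN-assigned address via the tunnel only
        ip rule add from <vpn_ip> table 100
        ip route add default via <route_gateway_ip> dev tap0 table 100
        # everything else keeps using the regular default route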

    Read the article

  • Lighttpd mod_accesslog not logging fastcgi requests

    - by zepatou
    I have recently installed lighttpd to serve a Python script via mod_fastcgi. Everything works fine, except that the requests handled by mod_fastcgi are not logged in the access.log file (requests on port 80 are logged, though). My lighttpd version is 1.4.28 on Debian 6.0. I used the same configuration on an Ubuntu Server 10.04 with lighttpd 1.4.26 and it worked. Here is my config.

    lighttpd.conf:

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_accesslog",
            "mod_compress",
        )
        server.document-root   = "/var/www/"
        server.upload-dirs     = ( "/var/cache/lighttpd/uploads" )
        server.errorlog        = "/home/log/lighttpd/error.log"
        index-file.names       = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" )
        accesslog.filename     = "/home/log/lighttpd/access.log"
        url.access-deny        = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
        server.pid-file        = "/var/run/lighttpd.pid"
        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

    conf-enabled/10-fastcgi.conf:

        server.modules += ( "mod_fastcgi" )
        fastcgi.server = (
            "/" => (
                (
                    "min-procs"   => 1,
                    "check-local" => "disable",
                    "host"        => "127.0.0.1",   # local
                    "port"        => 3000
                ),
            )
        )

    Any idea?

    Read the article

  • View Script Over SSH?

    - by user74781
    A friend, using a remote machine, SSHed to my machine and ran the following Python script, which resides on my machine and simply prints "hello world" continuously:

        while (1):
            print "hello world"

    I am now logged in to my machine. How can I see the output of the script my friend is running? If it helps, I can spot the script's process:

        me@home:~$ ps aux | grep justprint.py
        friend  7494 12.8  0.3  7260 3300 ?      Ss   17:24   0:06 python TEST_AREA/justprint.py
        friend  7640  0.0  0.0  3320  800 pts/3  S+   17:25   0:00 grep --color=auto just

    What steps should I take in order to view the "hello world" messages on my screen?
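
    The script's stdout is attached to your friend's session, not yours, but you can watch what an already-running process writes by tracing its write() calls. A sketch using the PID from the ps output above:

        # attach to the running process and print what it writes (root usually required)
        sudo strace -p 7494 -e trace=write -s 256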

    Read the article

  • Why is my ethernet interface in promiscuous mode

    - by nhed
    I read that seeing a flag of "M" in "netstat -i" output is the way to tell which of your interfaces is in promiscuous mode. I ran it and saw that eth1 is in promiscuous mode:

        $ netstat -i
        Kernel Interface table
        Iface   MTU Met      RX-OK RX-ERR RX-DRP RX-OVR     TX-OK TX-ERR TX-DRP TX-OVR Flg
        eth1   1500   0 1770161198      0      0      0  57446481      0      0      0 BMRU
        lo    16436   0   97501566      0      0      0  97501566      0      0      0 LRU

    This seems to be the case on all the machines I checked (all CentOS 6.0, both virtual and physical). Any idea why the ethernet devices would be in such a mode unless someone was running a pcap-based app ("sudo lsof | grep pcap" shows nothing)? I did not see any mention of promiscuous mode in any of the config files ("sudo grep -r promis /etc"). Any ideas what puts the interface into that mode, and why? P.S. most of the posts I see on this seem to be security related; this is not that.
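
    Two quick checks can confirm the flag and show when it was set: the kernel logs a message every time an interface enters or leaves promiscuous mode, which often points at the responsible service (for example, interfaces enslaved to a bridge are commonly put into promiscuous mode on virtualization hosts). A sketch:

        # confirm the current flag and find out when/why it was set
        ip link show eth1 | grep -i promisc
        dmesg | grep -i 'promiscuous mode'
        # a bridge member would also show up here:
        brctl show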

    Read the article

  • How do I set up a virtual host (it's not working, and I've done everything right)

    - by piratepartypumpkin
    My router redirects port 80 to port 8080. The router works fine and my domain name is routed properly. This is my virtual hosts file:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot /home/admins/lampstack-5.3.16-0/apps/wordpress
            ServerName example.com
            ServerAlias www.example.com
        </VirtualHost>

    I can access my website by entering "mywebsite.com:8080", but I cannot access it by entering "mywebsite.com". For further information, this is a part of my httpd.conf:

        Listen 8080
        ServerName localhost:8080
        DocumentRoot "/home/admins/lampstack-5.3.16-0/apache2/htdocs"
        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny, allow
            deny from all
        </Directory>
        <Directory "/home/admins/lampstack-5.3.16-0/apache2/htdocs">
            Options FollowSymLinks
            AllowOverride None
            Order allow, deny
            allow from all
        </Directory>
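
    Since Apache is only listening on 8080 (the router translates external port 80 to 8080), the name-based virtual host has to be declared for *:8080 rather than *:80. A sketch of the corrected vhost, reusing the paths and names from above:

        NameVirtualHost *:8080
        <VirtualHost *:8080>
            DocumentRoot /home/admins/lampstack-5.3.16-0/apps/wordpress
            ServerName example.com
            ServerAlias www.example.com
        </VirtualHost>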

    Read the article
