Search Results

Search found 26810 results on 1073 pages for 'linux scripting'.


  • open mysql to any connection on ubuntu

    - by ThomasReggi
    I simply want to open up MySQL so it is accessible from any server IP. I have already commented out the bind-address line in /etc/mysql/my.conf and I have already set up the user account within MySQL, but I have no clue what's stopping me from connecting. The harder this turns out to be, the more I realize how much of a security risk it is, and I get that; I just want to be able to do it temporarily. I think the iptables firewall is the last thing preventing me from achieving this, but sudo iptables -A INPUT -p tcp -m tcp --dport 3306 -j ACCEPT is seemingly doing nothing.
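
    A minimal checklist sketch for this situation (assuming MySQL 5.x on Ubuntu; the database, user and password names below are placeholders):

      # Confirm mysqld now listens on all interfaces, not just 127.0.0.1
      sudo service mysql restart        # the bind-address change only applies after a restart
      sudo netstat -tlnp | grep 3306    # should show 0.0.0.0:3306
      # The MySQL account must allow remote hosts ('%' rather than 'localhost')
      mysql -u root -p -e "GRANT ALL ON mydb.* TO 'myuser'@'%' IDENTIFIED BY 'secret'; FLUSH PRIVILEGES;"
      # Insert the ACCEPT rule *before* any REJECT/DROP rule, then check its position
      sudo iptables -I INPUT -p tcp --dport 3306 -j ACCEPT
      sudo iptables -L INPUT -n --line-numbers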

    Read the article

  • What does the filesystem give me?

    - by Alec
    I'm writing a specialised database. It currently sits atop of ext4, with one big file that it reads and writes to. I'm wondering whether it would be worth forgoing the filesystem and reading and writing directly to the device. I already use O_DIRECT, so as far as I can tell it wouldn't require much of a code change. What might the risks, advantages and disadvantages of forgoing the filesystem be? Is it likely to improve performance, or harm it?

    Read the article

  • Moving from Ubuntu desktop to Ubuntu Server via SSH

    - by Daniel Elessedil Kjeserud
    So a little while ago I installed regular Ubuntu on a home server, but that gave me a lot of extra packages. What I should have done was install Ubuntu Server, since I don't even own a screen to connect to it. Does anybody know of a way to convert my Ubuntu machine to an Ubuntu Server machine in one big swoop? It has to be done over SSH because, as I said, I don't have a screen to connect to it. It's currently running 9.10, about to be upgraded to 10.04.
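
    A rough sketch of one way to slim a desktop install down over SSH (assuming 9.10/10.04 package names, which may need adjusting; run it inside screen or tmux, since removing the desktop stack is invasive):

      # Switch to the server kernel flavour and make sure SSH stays installed
      sudo apt-get install linux-server openssh-server
      # Remove the desktop metapackage and the core of the GUI stack
      sudo apt-get remove --purge ubuntu-desktop xserver-xorg gdm
      sudo apt-get autoremove --purge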

    Read the article

  • No space left on device with encrypted disk that takes all space

    - by Yosef
    I use Ubuntu 11.04 and there is no space left on the device. The space is taken up by an encrypted volume (maybe it would be good to disable the encryption, but I don't know how). In a shell I get the message: No space left on device.

    Running df -i:

      Filesystem            Inodes   IUsed   IFree   IUse% Mounted on
      /dev/sda1             3055616  602499  2453117   20% /
      none                  210161   890     209271     1% /dev
      none                  214789   8       214781     1% /dev/shm
      none                  214789   53      214736     1% /var/run
      none                  214789   3       214786     1% /var/lock
      /home/myuser/.Private 3055616  602499  2453117   20% /home/myuser

    Edit: when I run plain df:

      Filesystem            1K-blocks  Used      Available Use% Mounted on
      /dev/sda1             48060296   45618928  0         100% /
      none                  1538340    684       1537656     1% /dev
      none                  1547596    808       1546788     1% /dev/shm
      none                  1547596    104       1547492     1% /var/run
      none                  1547596    0         1547596     0% /var/lock
      /home/myuser/.Private 48060296   45618928  0         100% /home/myuser

    Edit: I am thinking about a few solutions, but I don't know which is better or how exactly to do them: enlarge the partition (I can't install gparted - there is no more disk space), or remove the encryption of the partition - I don't really need it.
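
    The block count (100% used), not the inode count, is what is exhausted here. A hedged sketch of the usual first steps to claw back space (the paths are just common culprits, not specific to this system):

      sudo apt-get clean                        # drop the cached .deb packages
      # See where the 45 GB actually lives before deciding on resizing or decrypting
      sudo du -xh --max-depth=2 / 2>/dev/null | sort -h | tail -20
      sudo du -sh /home/myuser                  # the ecryptfs-backed home is a likely candidate
      # Rotated logs are another common space sink
      sudo find /var/log -name '*.gz' -delete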

    Read the article

  • Is it possible to skip .rvmrc confirmation?

    - by Viacheslav Molokov
    We are using RVM to manage Ruby installations and environments. We usually use this .rvmrc script:

      #!/bin/bash
      if [ ! -e '.version' ]; then
        VERSION=`pwd | sed 's/[a-z/-]//g'`
        echo $VERSION > .version
        rvm gemset create $VERSION
      fi
      VERSION=`cat .version`
      rvm use 1.9.2@$VERSION

    This script forces RVM to create a new gem environment for each of our projects/versions. But each time we deploy a new version, RVM asks us to confirm the new .rvmrc file. When we cd into the directory for the first time, we get something like:

      ===============================================================
      = NOTICE:                                                     =
      ===============================================================
      = RVM has encountered a not yet trusted .rvmrc file in the    =
      = current working directory which may contain nasty code.     =
      =                                                             =
      = Examine the contents of this file to be sure the contents   =
      = are good before trusting it!                                =
      =                                                             =
      = Press 'q' to exit the reader when finished reading the file =
      ===============================================================
      (press enter to continue when ready)

    This is not so bad for development environments, but with automatic deploys it requires manually confirming each new version on each server. Is it possible to skip this confirmation?
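
    RVM has a switch for exactly this; a short sketch (the project path is a placeholder):

      # Trust all .rvmrc files globally - add this to /etc/rvmrc or ~/.rvmrc on each deploy server
      echo 'export rvm_trust_rvmrcs_flag=1' >> ~/.rvmrc
      # ...or trust a single project path, e.g. from a deploy hook
      rvm rvmrc trust /var/www/myapp/current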

    Read the article

  • How do I remove the Xen kernel and install a normal kernel on RHEL 5

    - by yan bellavance
    I have 3 identical machines (hardware-wise) that all have RHEL 5.3 installed. Two of those machines have the Xen kernel and one doesn't. I cannot install the NVIDIA drivers on the ones that have the Xen kernel, so I was wondering how I ended up with it and how to replace those kernels with normal ones. Could this have happened at install time, for example when I was queried about which components to install (development, virtualization, webserver)?
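
    A hedged sketch of the usual swap on RHEL 5 (the grub entry will differ per machine; keep the Xen kernel installed until the new one boots cleanly):

      # Install the standard (non-Xen) kernel alongside the Xen one
      sudo yum install kernel
      # Point grub's "default=" line at the new, non-xen entry
      sudo vi /boot/grub/grub.conf
      sudo reboot
      uname -r                      # should no longer end in "xen"
      # Once satisfied, remove the Xen kernel package
      sudo yum remove kernel-xen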

    Read the article

  • Debian 7: "Failure of key exchange and association"

    - by pyrogoggles
    I'm installing Debian using the live CD with GNOME and the non-free packages. The internet works just fine when using the live CD, but when I try to open the installer and select a network, it gives me the error "Failure of key exchange and association". I've entered everything correctly and tried multiple times, but it never works. Am I just going to have to use ethernet to install? Thanks, people.

    Read the article

  • Change the default route without affecting existing TCP connections

    - by Patrick Horn
    Let's say I have two public network addresses on my server: one NATed through an ISP (192.168.99.0/24), and a VPN through a different ISP (192.168.1.0/24), already configured with a per-host route to the VPN server through my ISP. Here is my initial routing table; I am currently routing through my ISP on subnet 192.168.99.0/24:

      $ route -n
      Kernel IP routing table
      Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
      0.0.0.0         192.168.99.1    0.0.0.0         UG    0      0        0 eth1
      55.66.77.88     192.168.99.1    255.255.255.255 UGH   0      0        0 eth1
      192.168.99.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1
      192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 tap0

    Now I want new TCP connections to switch to 192.168.1.0/24, so I type the following:

      $ route add -net 0.0.0.0 gw 192.168.1.1 dev tap0

    When I do this, it causes some long-standing TCP connections to hang. Is there a way to safely change the default interface for new connections, while allowing existing TCP connections to use the old route (i.e. do I need to enable some sort of stateful routing)? I am okay with a solution that only works with established TCP connections, and I don't care how hacky it is - for example, if there is a way to add temporary iptables rules for existing connections to force them over the old route. But there has to be some way to do this.

    EDIT: Just a note about a simple "route add -host ..." for existing connections: this would work if I were fine with leaving a subset of IPs on the old interface. However, in my application this doesn't actually solve the problem, because I want to allow new connections to come over the new interface even if they have the same source IP. I'm now looking at using the "ip route" command to set source-based routing rules.
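
    A hedged sketch of the source-based ("ip rule") approach mentioned at the end, assuming the tap0 address is 192.168.1.2 (substitute the real address; the table name and priority are arbitrary):

      # Create a second routing table whose default route is the VPN
      echo '200 vpn' | sudo tee -a /etc/iproute2/rt_tables
      sudo ip route add default via 192.168.1.1 dev tap0 table vpn
      # Send traffic sourced from the tap0 address through that table; connections
      # already bound to the old eth1 source address keep using the main table
      sudo ip rule add from 192.168.1.2/32 table vpn priority 100
      sudo ip route flush cache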

    Read the article

  • rTorrent, too low memory usage !?

    - by Claudiu
    I want to know from more experienced rTorrent users how to tweak .rtorrent.rc so that rTorrent will cache disk reads and writes (the same way uTorrent does). I have set max_memory_usage = 1GB, but this amount is not used. I run 6 rTorrent instances on a quad-core, 8 GB RAM machine, and the total used memory reported by htop is only ~500MB. I need to use memory buffers because disk I/O activity is very high.

    Read the article

  • OSSEC agent behind NAT

    - by Eric
    I am working on an OSSEC deployment where I will have multiple agents behind one public IP. Below is an example of the setup:

      Private network:
        OSSEC-Agent1 (192.168.1.10)
        OSSEC-Agent2 (192.168.50.33)
        OSSEC-Agent3 (10.10.10.1)
      Those IPs NAT to one public IP (1.1.1.1).
      1.1.1.1 then talks to the public OSSEC server on 2.2.2.2.

    I've read some OSSEC documentation talking about NAT here, but it doesn't tell me exactly what I need to know. Their example uses an entire /24 subnet, while mine will mainly have multiple agents behind only one public IP. With the setup so far, I brought Agent1 online fine and it is communicating with the OSSEC server. However, Agent2 keeps failing when trying to connect to 2.2.2.2, even though when I added the key I had the correct name for it, so I know it talked to the portal at least once for that information. I'm assuming it's just getting confused by the multiple keys mapping to one public IP. I basically want to know if this is possible and/or if I'm just overlooking something simple. Any help would be greatly appreciated.
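
    A hedged sketch of the usual workaround: register the NATed agents on the server with "any" as their IP, so the shared public address is not matched against a single key (the agent names below are placeholders):

      # On the OSSEC server, remove and re-add the affected agents with IP "any"
      sudo /var/ossec/bin/manage_agents     # (A)dd: name=Agent2, IP=any
      # Extract the fresh key, import it on the agent, then restart both sides
      sudo /var/ossec/bin/ossec-control restart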

    Read the article

  • Webserver: chrooted PHP gives mysql.sock error when attempting to reach mysql

    - by Jon L.
    Hey guys, I've configured an Ubuntu webserver with Nginx + PHP5-FPM. I've created a chrooted environment (using jailkit) that I'm tossing my developers into, from where they can develop their test applications. Chroot jail: /home/jail Nginx and PHP5-FPM run outside the chroot, but are configured to function with websites within the chrooted environment. So far, Nginx and PHP5-FPM are serving up files without issue, except for the following: When attempting to connect to MySQL, we receive this error: SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' Now, I believe the issue is due to the non-chrooted php.ini referencing mysqld.sock outside of the chroot environment (it's actually using the MySQL default setting currently). My question is, how can I configure PHP to access MySQL via loopback or similar? (Found that as a suggestion in a google result, but without any instructions) Or if I'm missing some other obvious setting, let me know. If there's an option of creating a hardlink (that would remain available even if mysql is restarted), that would be handy as well.
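
    Two sketches of common ways around this (the jail path matches the one above; the PDO DSN is only an illustration):

      # Option 1: connect over TCP loopback. To PHP's MySQL drivers, the host name
      # "localhost" means "use the unix socket"; 127.0.0.1 forces a TCP connection.
      #   new PDO('mysql:host=127.0.0.1;port=3306;dbname=test', $user, $pass);

      # Option 2: expose the real socket inside the jail with a bind mount
      sudo mkdir -p /home/jail/var/run/mysqld
      sudo mount --bind /var/run/mysqld /home/jail/var/run/mysqld
      # equivalent /etc/fstab line so it survives reboots and MySQL restarts:
      #   /var/run/mysqld  /home/jail/var/run/mysqld  none  bind  0  0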

    Read the article

  • How can I log when reads to /dev/random block?

    - by ldrg
    I've noticed that since updating my server to Debian Squeeze the amount of entropy as reported by /proc/sys/kernel/random/entropy_avail is much lower than it was before the upgrade. I would like to know if this lower pool size is big enough to function with or if I need to look into getting more entropy sources. I think having a way to log blocking reads of /dev/random would show whether I have enough entropy or not.
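
    Short of instrumenting the kernel, a crude sampling approach can show whether the pool is being drained; a hedged sketch (the interval and log path are arbitrary):

      # Sample the pool every few seconds; values that keep dropping toward 0 mean
      # readers of /dev/random are probably blocking
      while true; do
        printf '%s %s\n' "$(date '+%F %T')" "$(cat /proc/sys/kernel/random/entropy_avail)"
        sleep 5
      done >> /var/log/entropy_avail.log
      # If the pool really is starved, haveged or rng-tools can feed it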

    Read the article

  • Ubuntu 10.04 Keyboard and Mouse Freezing Problem

    - by nitbuntu
    I had a partition setup with Windows XP and Ubuntu 8.04 dual booting. I recently upgraded to Ubuntu 10.04 by installing fresh from CD but leaving the previous /home folder as is. Things seemed to be working fine, but then I started finding that my mouse and keyboard were freezing. After a quick search on the internet, I found the following suggestion on the Ubuntu Forums:

      Edit /etc/default/grub and change the line that begins with
        GRUB_CMDLINE_LINUX_DEFAULT=
      to
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=off"
      then run sudo update-grub and reboot.

    This seemed to have resolved the issue, but after a couple of days my mouse and keyboard are freezing again. I also find that my parallel port printer has stopped working. I have saved the output of dmesg and my syslog. The first can be viewed here, but the syslog had too many characters, so if someone can suggest an alternative to freetexthost, I can post it there. If there is any other information I should provide, do let me know. I hope we can get to the bottom of this issue. Thank you in advance for any help.

    Read the article

  • SSSD Authentication

    - by user24089
    I just built a test server running OpenSuSE 12.1 and am trying to learn how configure sssd, but am not sure where to begin to look for why my config cannot allow me to authenticate. server:/etc/sssd # cat sssd.conf [sssd] config_file_version = 2 reconnection_retries = 3 sbus_timeout = 30 services = nss,pam domains = test.local [nss] filter_groups = root filter_users = root reconnection_retries = 3 [pam] reconnection_retries = 3 # Section created by YaST [domain/mose.cc] access_provider = ldap ldap_uri = ldap://server.test.local ldap_search_base = dc=test,dc=local ldap_schema = rfc2307bis id_provider = ldap ldap_user_uuid = entryuuid ldap_group_uuid = entryuuid ldap_id_use_start_tls = True enumerate = False cache_credentials = True chpass_provider = krb5 auth_provider = krb5 krb5_realm = TEST.LOCAL krb5_kdcip = server.test.local server:/etc # cat ldap.conf base dc=test,dc=local bind_policy soft pam_lookup_policy yes pam_password exop nss_initgroups_ignoreusers root,ldap nss_schema rfc2307bis nss_map_attribute uniqueMember member ssl start_tls uri ldap://server.test.local ldap_version 3 pam_filter objectClass=posixAccount server:/etc # cat nsswitch.conf passwd: compat sss group: files sss hosts: files dns networks: files dns services: files protocols: files rpc: files ethers: files netmasks: files netgroup: files publickey: files bootparams: files automount: files ldap aliases: files shadow: compat server:/etc # cat krb5.conf [libdefaults] default_realm = TEST.LOCAL clockskew = 300 [realms] TEST.LOCAL = { kdc = server.test.local admin_server = server.test.local database_module = ldap default_domain = test.local } [logging] kdc = FILE:/var/log/krb5/krb5kdc.log admin_server = FILE:/var/log/krb5/kadmind.log default = SYSLOG:NOTICE:DAEMON [dbmodules] ldap = { db_library = kldap ldap_kerberos_container_dn = cn=krbContainer,dc=test,dc=local ldap_kdc_dn = cn=Administrator,dc=test,dc=local ldap_kadmind_dn = cn=Administrator,dc=test,dc=local ldap_service_password_file = /etc/openldap/ldap-pw ldap_servers = ldaps://server.test.local } [domain_realm] .test.local = TEST.LOCAL [appdefaults] pam = { ticket_lifetime = 1d renew_lifetime = 1d forwardable = true proxiable = false minimum_uid = 1 clockskew = 300 external = sshd use_shmem = sshd } If I log onto the server as root I can su into an ldap user, however if I try to console locally or ssh remotely I am unable to authenticate. getent doesn't show the ldap entries for users, Im not sure if I need to look at LDAP, nsswitch, or what: server:~ # ssh localhost -l test Password: Password: Password: Permission denied (publickey,keyboard-interactive). 
server:~ # su test test@server:/etc> id uid=1000(test) gid=100(users) groups=100(users) server:~ # tail /var/log/messages Nov 24 09:36:44 server login[14508]: pam_sss(login:auth): system info: [Client not found in Kerberos database] Nov 24 09:36:44 server login[14508]: pam_sss(login:auth): authentication failure; logname=LOGIN uid=0 euid=0 tty=/dev/ttyS1 ruser= rhost= user=test Nov 24 09:36:44 server login[14508]: pam_sss(login:auth): received for user test: 4 (System error) Nov 24 09:36:44 server login[14508]: FAILED LOGIN SESSION FROM /dev/ttyS1 FOR test, System error server:~ # vi /etc/pam.d/common-auth auth required pam_env.so auth sufficient pam_unix2.so auth required pam_sss.so use_first_pass server:~ # vi /etc/pam.d/sshd auth requisite pam_nologin.so auth include common-auth account requisite pam_nologin.so account include common-account password include common-password session required pam_loginuid.so session include common-session session optional pam_lastlog.so silent noupdate showfailed
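
    A hedged debugging sketch for the "works with su but fails at login/ssh" symptom; it assumes the TEST.LOCAL names used above and the default sssd log locations:

      # 1. Raise sssd's debug level (add "debug_level = 6" under the [domain/...]
      #    section of /etc/sssd/sssd.conf), then watch the logs while reproducing the failure
      sudo systemctl restart sssd.service
      sudo tail -f /var/log/sssd/*.log /var/log/krb5/krb5kdc.log
      # 2. "Client not found in Kerberos database" usually means the user has no
      #    principal in the realm (or the wrong realm is used); check directly:
      sudo kadmin.local -q 'getprinc test@TEST.LOCAL'
      # 3. getent will not enumerate LDAP users while enumerate = False, but a
      #    single lookup should still work if nsswitch and sssd are wired up:
      getent passwd test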

    Read the article

  • Cron partially stopped working

    - by Robi
    Our cron scripts stopped working on different dates in August. What could the possible reasons be? We did not change anything. Our hosting provider showed us a log where we can see that cron is executing our scripts, but nothing happens in the scripts. If we execute the scripts manually, we get correct results just like before. I showed the commands to the hosting provider and they showed me that the commands are working. What should I tell my hosting provider? What should I do? They are PHP scripts executed by cron, and they just post to Facebook and Twitter; they don't do anything heavy. I even asked my hosting provider if we broke any rules.
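
    A hedged way to narrow this down is to capture exactly what the scripts see when cron runs them (the paths below are placeholders); cron's environment usually differs from an interactive shell in PATH and working directory:

      # Temporary crontab entries: log stdout, stderr and the environment of one run
      * * * * * cd /home/user/scripts && /usr/bin/php post_updates.php >> /tmp/cron-debug.log 2>&1
      * * * * * env > /tmp/cron-env.txt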

    Read the article

  • How do I add missing dictionaries for aspell?

    - by Ahmed
    Aspell version:

      $ aspell -v
      @(#) International Ispell Version 3.1.20 (but really Aspell 0.60.6)

    Dumping the dictionaries yields no results:

      $ aspell dump dicts

    I first noticed the problem when I ran the following; it was originally working on the web server, but someone updated something and it hasn't worked since:

      $ aspell check temp_test_file.txt
      Error: No word lists can be found for the language "en_US".

    What's the proper way of installing the required dictionaries? I believe we're running this on CentOS. Also, /usr/lib/aspell-0.60 does not contain the required dictionaries (provided that they're supposed to be saved there): data-dir: /usr/lib/aspell-0.60
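
    On CentOS the English word lists ship separately from the aspell binary; a short sketch (package names assume the stock repositories):

      sudo yum install aspell aspell-en
      aspell dump dicts            # should now list en, en_US, ...
      echo hello | aspell -a       # quick smoke test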

    Read the article

  • All traffic is passed through OpenVPN although not requested

    - by BFH
    I have a bash script on an Ubuntu box which searches for the fastest OpenVPN server, connects, and binds one program to the tun0 interface. Unfortunately, all traffic is being passed through the VPN. Does anybody know what's going on? The relevant line follows:

      openvpn --daemon --config $cfile --auth-user-pass ipvanish.pass --status openvpn-status.log

    There don't seem to be any entries in iptables when I enter sudo iptables --list. The config files look like this:

      client
      dev tun
      proto tcp
      remote nyc-a04.ipvanish.com 443
      resolv-retry infinite
      nobind
      persist-key
      persist-tun
      persist-remote-ip
      ca ca.ipvanish.com.crt
      tls-remote nyc-a04.ipvanish.com
      auth-user-pass
      comp-lzo
      verb 3
      auth SHA256
      cipher AES-256-CBC
      keysize 256
      tls-cipher DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:AES256-SHA

    There is nothing in there that would direct everything through tun0, so maybe it's a new vagary of Ubuntu? I don't remember this happening in the past.
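
    A plausible explanation is that the server pushes "redirect-gateway def1" to the client, which rewrites the default route even though the local config never mentions it. A hedged sketch of how to ignore pushed routes:

      # Add route-nopull to the client config (or the command line): tun0 still comes
      # up, but pushed routes - including the default-route rewrite - are ignored,
      # so only traffic explicitly bound to tun0 uses the VPN.
      openvpn --daemon --config "$cfile" --auth-user-pass ipvanish.pass \
              --status openvpn-status.log --route-nopull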

    Read the article

  • 1GB cached memory - Do I need more RAM?

    - by Martin
    The server runs well but I wonder if I should get more RAM. I only have a few MB of "free" memory and 1.2GB of "cached" memory: free: total used free shared buffers cached Mem: 3945 3893 51 0 28 1216 -/+ buffers/cache: 2648 1296 Swap: 3895 857 3038 I learned that cached memory is used while it's free and not. Is the cached value an indicator for the need of more RAM? cat /proc/meminfo 1 day after flushing the cache: MemTotal: 4040048 kB MemFree: 32844 kB Buffers: 18956 kB Cached: 1249092 kB SwapCached: 161576 kB Active: 3611328 kB Inactive: 189104 kB SwapTotal: 3989496 kB SwapFree: 2894200 kB Dirty: 20520 kB Writeback: 0 kB AnonPages: 2523496 kB Mapped: 217744 kB Slab: 70940 kB SReclaimable: 36756 kB SUnreclaim: 34184 kB PageTables: 99648 kB NFS_Unstable: 0 kB Bounce: 0 kB CommitLimit: 6009520 kB Committed_AS: 6401716 kB VmallocTotal: 34359738367 kB VmallocUsed: 18852 kB VmallocChunk: 34359719439 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB top: top - 17:20:10 up 112 days, 3:06, 1 user, load average: 1.01, 1.62, 1.48 Tasks: 208 total, 1 running, 207 sleeping, 0 stopped, 0 zombie Cpu(s): 0.6%us, 0.6%sy, 0.0%ni, 97.5%id, 1.3%wa, 0.0%hi, 0.1%si, 0.0%st Mem: 4040048k total, 3953108k used, 86940k free, 16348k buffers Swap: 3989496k total, 1095712k used, 2893784k free, 1235436k cached
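
    As a rough rule of thumb, the number that matters is the "-/+ buffers/cache" free column: the page cache is reclaimed automatically when applications need the memory. A hedged sketch for reading it (the thresholds are only a heuristic):

      # "free" after subtracting buffers/cache ~= memory actually available to applications
      free -m | awk '/buffers\/cache/ {print "available for apps (MB):", $4}'
      # Heavy, sustained swap-in/swap-out activity (si/so columns) is a better
      # "buy more RAM" signal than a large page cache
      vmstat 5 5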

    Read the article

  • RHEL 5/CentOS 5 - sshd becomes unresponsive

    - by ewwhite
    I have a number of CentOS 5.x and RHEL 5.x systems whose SSH daemons become unresponsive, preventing remote logins. The typical error from the connecting side is: $ ssh db1 db1 : ssh_exchange_identification: Connection closed by remote host Examining /var/log/messages after a forced reboot shows the following leading up to the restart: Dec 10 10:45:51 db1 sshd[14593]: fatal: Privilege separation user sshd does not exist Dec 10 10:46:02 db1 sshd[14595]: fatal: Privilege separation user sshd does not exist Dec 10 10:46:54 db1 sshd[14711]: fatal: Privilege separation user sshd does not exist Dec 10 10:47:38 db1 sshd[14730]: fatal: Privilege separation user sshd does not exist These systems use LDAP authentication and the nsswitch.conf file is configured to look at local "files" first. [root@db1 ~]# cat /etc/nsswitch.conf # # /etc/nsswitch.conf # passwd: files ldap shadow: files ldap group: files ldap hosts: files dns The Privilege-separated SSH user exists in the local password file. [root@db1 ~]# grep ssh /etc/passwd sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin Any ideas on what the root cause is? I did not see any Red Hat errata that covers this.
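
    A hedged set of checks for the "Privilege separation user sshd does not exist" symptom when the account clearly is in /etc/passwd; an unreachable LDAP server or a stale nscd cache making NSS lookups fail is a common culprit in files+ldap setups:

      # Does a normal NSS lookup of the local user still succeed?
      getent passwd sshd
      getent -s files passwd sshd      # force the files source only, for comparison
      # Flush/restart nscd in case it cached a failed lookup, then restart sshd
      sudo service nscd restart
      sudo service sshd restart
      # If lookups hang while the LDAP server is down, "bind_policy soft" and short
      # timeouts in /etc/ldap.conf keep nss_ldap from blocking local lookups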

    Read the article

  • Stop squid caching 302 and 307 with deny_info

    - by 0xception
    TLDR: 302, 307 and Error pages are being cached. Need to force a refresh of the content. Long version: I've setup a very minimal squid instance running on a gateway which shouldn't not cache ANYTHING but needs to be solely used as a domain based web filter. I'm using another application which redirects un-authenticated users to the proxy which then uses the deny_info option redirects any non-whitelisted request to the login page. After the user has authenticated the firewall rule gets placed so they no longer get sent to the proxy. The problem is that when a user hits a website (xkcd.com) they are unauthenticated so they get redirected via the firewall: iptables -A unknown-user -t nat -p tcp --dport 80 -j REDIRECT --to-port 39135 to the proxy at this point squid redirects the user to the login page using a 302 (i've also tried 307, and i've also make sure the headers are set to no-cache and/or no-store for Cache-Control and Pragma). Then when the user logs into the system they get firewall rule which no longer directs them to the squid proxy. But if they go to xkcd.com again they will have the original redirection page cached and will once again get the login page. Any idea how to force these redirects to NOT be cached by the browser? Perhaps this is a problem w/ the browsers and not squid, but not sure how to get around it. Full squid config below. # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 acl localnet src 192.168.182.0/23 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl https port 443 acl http port 80 acl CONNECT method CONNECT # # Disable Cache # cache deny all via off negative_ttl 0 seconds refresh_all_ims on #error_default_language en # Allow manager access only from localhost http_access allow manager localhost http_access deny manager # Deny access to anything other then http http_access deny !http # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !https visible_hostname gate.ovatn.net # Disable memory pooling memory_pools off # Never use neigh cache objects for cgi-bin scripts hierarchy_stoplist cgi-bin ? # # URL rewrite Test Settings # #acl whitelist dstdomain "/etc/squid/domains-pre.lst" #url_rewrite_program /usr/lib/squid/redirector #url_rewrite_access allow !whitelist #url_rewrite_children 5 startup=0 idle=1 concurrency=0 #http_access allow all # # Deny Info Error Test # acl whitelist dstdomain "/etc/squid/domains-pre.lst" deny_info http://login.domain.com/ whitelist #deny_info ERR_ACCESS_DENIED whitelist http_access deny !whitelist http_access allow whitelist http_port 39135 transparent ## Debug Values access_log /var/log/squid/access-pre.log cache_log /var/log/squid/cache-pre.log # Production Values #access_log /dev/null #cache_log /dev/null # Set PID file pid_filename /var/run/gatekeeper-pre.pid SOLUTION: I believe I might have found a solution to this. After days and days trying to figure it out, only through a random stumble I found client_persistent_connections off server_persistent_connections off This did the trick. So it wasn't so much cache as it was a single persistent connection messing things up. W000T!

    Read the article

  • Secondary IP (eth0:0) acts like main server IP

    - by George Tasioulis
    I have a CentOS server, configured with 4 consecutive IPs: eth0 5.x.x.251 eth0:0 5.x.x.252 eth0:1 5.x.x.253 eth0:2 5.x.x.254 The problem is that all traffic goes out to the internet with eth0:0 (5.x.x.252) as the source IP, instead of eth0. # curl ifconfig.me 5.x.x.252 How can I fix this, so that all traffic goes out via eth0, ie my main IP? PS: My server is VPS running on a Xen dom0, the latter being configured in routed mode networking. Thanks in advance! Server configuration # ifconfig eth0 Link encap:Ethernet HWaddr 00:x:x:x:x:AE inet addr:5.x.x.251 Bcast:5.x.x.255 Mask:255.255.255.255 inet6 addr: fe80::x:x:x:x/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:14675569 errors:0 dropped:0 overruns:0 frame:0 TX packets:9463227 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4122016502 (3.8 GiB) TX bytes:25959110751 (24.1 GiB) Interrupt:23 eth0:0 Link encap:Ethernet HWaddr 00:x:x:x:x:AE inet addr:5.x.x.252 Bcast:5.x.x.255 Mask:255.255.255.224 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:23 eth0:1 Link encap:Ethernet HWaddr 00:x:x:x:x:AE inet addr:5.x.x.253 Bcast:5.x.x.255 Mask:255.255.255.224 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:23 eth0:2 Link encap:Ethernet HWaddr 00:x:x:x:x:AE inet addr:5.x.x.254 Bcast:5.x.x.255 Mask:255.255.255.224 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:23 # cat /etc/hosts 127.0.0.1 localhost.localdomain localhost 5.x.x.251 [fqdn] [hostname] # cat ifcfg-eth0 DEVICE=eth0 BOOTPROTO=static ONBOOT=yes IPADDR=5.x.x.251 NETMASK=255.255.255.224 SCOPE="peer 5.x.y.82" # cat ifcfg-eth0:0 DEVICE=eth0:0 BOOTPROTO=static ONBOOT=yes IPADDR=5.x.x.252 NETMASK=255.255.255.224 # cat route-eth0 ADDRESS0=0.0.0.0 NETMASK0=0.0.0.0 GATEWAY0=5.x.y.82 # netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 5.x.y.82 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 5.x.x.224 0.0.0.0 255.255.255.224 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 0.0.0.0 5.x.y.82 0.0.0.0 UG 0 0 0 eth0
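
    A hedged sketch of two ways to pin the outgoing source address to the primary IP (the 5.x.x.251 / 5.x.y.82 values mirror the sanitized addresses above):

      # Option 1: rewrite the source of outbound traffic at the NAT layer
      sudo iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 5.x.x.251
      # Option 2: tell the routing table which source address to prefer
      sudo ip route change default via 5.x.y.82 dev eth0 src 5.x.x.251
      curl ifconfig.me                 # should now report 5.x.x.251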

    Read the article

  • Block IP Address including ICMP using UFW

    - by dr jimbob
    I prefer ufw to iptables for configuring my software firewall. After reading about this vulnerability also on askubuntu, I decided to block the fixed IP of the control server: 212.7.208.65. I don't think I'm vulnerable to this particular worm (and understand the IP could easily change), but wanted to answer this particular comment about how you would configure a firewall to block it. I planned on using: # sudo ufw deny to 212.7.208.65 # sudo ufw deny from 212.7.208.65 However as a test that the rules were working, I tried pinging after I setup the rules and saw that my default ufw settings let ICMP through even from an IP address set to REJECT or DENY. # ping 212.7.208.65 PING 212.7.208.65 (212.7.208.65) 56(84) bytes of data. 64 bytes from 212.7.208.65: icmp_seq=1 ttl=52 time=79.6 ms ^C --- 212.7.208.65 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 79.630/79.630/79.630/0.000 ms Now, I'm worried that my ICMP settings are too generous (conceivably this or a future worm could setup an ICMP tunnel to bypass my firewall rules). I believe this is the relevant part of my iptables rules is given below (and even though grep doesn't show it; the rules are associated with the chains shown): # sudo iptables -L -n | grep -E '(INPUT|user-input|before-input|icmp |212.7.208.65)' Chain INPUT (policy DROP) ufw-before-input all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-before-input (1 references) ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 3 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 4 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 11 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 12 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 8 ufw-user-input all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-user-input (1 references) DROP all -- 0.0.0.0/0 212.7.208.65 DROP all -- 212.7.208.65 0.0.0.0/0 How should I go about making it so ufw blocks ICMP when I specifically attempt to block an IP address? My /etc/ufw/before.rules has in part: # ok icmp codes -A ufw-before-input -p icmp --icmp-type destination-unreachable -j ACCEPT -A ufw-before-input -p icmp --icmp-type source-quench -j ACCEPT -A ufw-before-input -p icmp --icmp-type time-exceeded -j ACCEPT -A ufw-before-input -p icmp --icmp-type parameter-problem -j ACCEPT -A ufw-before-input -p icmp --icmp-type echo-request -j ACCEPT I'm tried changing ACCEPT above to ufw-user-input: # ok icmp codes -A ufw-before-input -p icmp --icmp-type destination-unreachable -j ufw-user-input -A ufw-before-input -p icmp --icmp-type source-quench -j ufw-user-input -A ufw-before-input -p icmp --icmp-type time-exceeded -j ufw-user-input -A ufw-before-input -p icmp --icmp-type parameter-problem -j ufw-user-input -A ufw-before-input -p icmp --icmp-type echo-request -j ufw-user-input But ufw wouldn't restart after that. I'm not sure why (still troubleshooting) and also not sure if this is sensible? Will there be any negative effects (besides forcing the software firewall to force ICMP through a few more rules)?
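
    The ufw-user-* chains are only consulted after the blanket ICMP accepts in ufw-before-input, which is why the ping still succeeds. A hedged sketch of one way to block this host before those rules, via /etc/ufw/before.rules:

      # Add these lines to /etc/ufw/before.rules, above the "# ok icmp codes" block,
      # so they match before the ICMP ACCEPT rules:
      #   -A ufw-before-input  -s 212.7.208.65 -j DROP
      #   -A ufw-before-output -d 212.7.208.65 -j DROP
      # Then reload the firewall and re-test:
      sudo ufw reload
      ping -c 1 212.7.208.65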

    Read the article

  • NAT ports - how do they work?

    - by Davidoper
    I have the following network schema:

      Computer A: three NICs:
        NIC 1 (eth0): dhcp, public internet
        NIC 2 (eth1): static 192.168.1.1, gateway for Computer B
        NIC 3 (eth2): static 192.168.2.1, gateway for Computer C
      Computer B: static 192.168.1.2, using gateway 192.168.1.1 (NIC 2)
      Computer C: static 192.168.2.2, using gateway 192.168.2.1 (NIC 3)

    So I applied this to get NAT working:

      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    Every computer can connect to the internet now. I have been applying rules to the main computer (Computer A), like dropping connections on some ports, e.g. ssh:

      iptables -A INPUT -p tcp --dport 22 -j DROP

    But now, for instance, I would like to allow only connections on ports 20, 21, 22, 53 and 80 for Computer C, and ignore outside traffic if it's not related to those ports. The allowed connections should be FROM Computer C to the outside, but not from the outside to Computer C (I mean, Computer C is not hosting any HTTP or SSH; it is going to use them as a client). I guess this should be done like this:

      iptables -A OUTPUT -i eth2 -o eth0 -p tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A INPUT -i eth2 -o eth0 -p tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT

    The last rule (dropping any other traffic different from those) is at the end of the configuration, so -A should be working correctly. The thing is... it is not working. If I put the last rule like this:

      iptables -A FORWARD -i eth2 -o eth0 -j DROP

    it just drops everything and, for instance, port 21 (previously opened as you can see above) is not working either. Can you tell me what I could have done wrong? I have been struggling with this problem for some time and I am unable to solve it. Thanks!
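
    A hedged sketch of the usual fix: traffic from Computer C that leaves through eth0 traverses the FORWARD chain on Computer A, not INPUT/OUTPUT (those only see traffic addressed to, or originating from, A itself):

      # Allow Computer C's client connections out on the chosen TCP ports
      for p in 20 21 22 53 80; do
        sudo iptables -A FORWARD -i eth2 -o eth0 -p tcp --dport $p \
             -m state --state NEW,ESTABLISHED -j ACCEPT
      done
      sudo iptables -A FORWARD -i eth2 -o eth0 -p udp --dport 53 -j ACCEPT   # DNS is usually UDP
      # Let the replies back in, then drop everything else from that subnet
      sudo iptables -A FORWARD -i eth0 -o eth2 -m state --state ESTABLISHED,RELATED -j ACCEPT
      sudo iptables -A FORWARD -i eth2 -o eth0 -j DROP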

    Read the article
