Search Results

Search found 27819 results on 1113 pages for 'linux intel'.

Page 359 of 1113

  • Needing a storage integrity (write/read) test for bash

    - by Mr. Bash
    I need shell scripts / bash commands to verify the data integrity of local hard drives, USB drives, etc., similar to the well-known www.heise.de/download/h2testw, or at least something common in the repositories. (h2testw writes a specific data string over and over onto the medium, then reads it back to verify it was written correctly, and reports write/read time/speed.) Please no dd if=/dev/random of=/dev/sdx bs=1k && dd if=/dev/sdx of=/dev/null bs=1k, since that does not verify whether everything was written correctly; it only tests whether reads and writes to the device succeed at all. So far, I'm not too happy with badblocks -w -v /dev/sdx1 either: it seems rather slow, I don't know exactly what it writes, and I don't know whether it accounts for wear-leveling on flash media. There is also a program named F3 (http://oss.digirati.com.br/f3/) that needs to be compiled. It is designed after h2testw and the concept sounds interesting; I'd just rather have it as a ready-to-go bash script.
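
    A minimal sketch of the h2testw idea in plain bash, assuming the target is mounted at a hypothetical /mnt/usb and that filling its free space with throwaway test files is acceptable (run as root because of the cache drop):

      #!/bin/bash
      # Fill the test target with checksummed files, then re-read and verify them.
      # MOUNTPOINT and CHUNKS are placeholders for illustration; adjust to your medium.
      MOUNTPOINT=/mnt/usb
      CHUNKS=10                               # number of 100 MB test files to write
      set -e
      cd "$MOUNTPOINT"
      for i in $(seq 1 "$CHUNKS"); do
          dd if=/dev/urandom of="testfile.$i" bs=1M count=100
      done
      sha256sum testfile.* > checksums.txt    # record what was written
      sync
      echo 3 > /proc/sys/vm/drop_caches       # force the re-read to come from the medium
      sha256sum -c checksums.txt              # verify every file reads back identically
      # rm testfile.* checksums.txt           # clean up afterwards if desired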

    Read the article

  • One server, two APC UPSes on redundant power supplies: how to trigger shutdown?

    - by Falken
    I have a server racked with its redundant power supplies plugged into two APC Smart-UPS 3000 XLM units. Each UPS is connected to a different mains power source. Two instances of apcupsd are running, each connected to its own UPS. They can both detect when a UPS is on battery, and each UPS can then trigger a shutdown of the server. The question is: how do I avoid shutting down if only ONE UPS runs out of battery? Note: the Smart-UPS 3000 XLM has a "Power Sync" function that can connect to its peer and detect its status, but when I pulled the plug on one of them, the shutdown order was sent anyway. I'm thinking about modifying the shutdown scripts to check with "apcaccess" whether the other UPS is down. Any experience with this would be appreciated!
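
    If it helps, a hedged sketch of that apcaccess idea: apccontrol runs /etc/apcupsd/doshutdown before its default shutdown action, and an exit code of 99 is documented as suppressing the default action. The peer instance's NIS address and port (localhost:3552) are assumptions for illustration:

      #!/bin/bash
      # Hypothetical /etc/apcupsd/doshutdown for the first apcupsd instance: only allow
      # the shutdown if the *other* UPS (second apcupsd instance) is also on battery.
      OTHER_NIS=localhost:3552
      if apcaccess status "$OTHER_NIS" | grep -q ONBATT; then
          exit 0      # both feeds on battery: let apccontrol shut the server down
      else
          wall "One UPS is on battery, but the second feed is still up - skipping shutdown."
          exit 99     # apccontrol treats exit 99 as 'do not run the default action'
      fi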

    Read the article

  • Cron jobs stopped working (partially)

    - by Robi
    Our cron scripts stopped working on different dates in August. What could the possible reasons be? We did not change anything. Our hosting provider showed us a log in which we can see that cron is executing our scripts, but nothing happens in the scripts. If we execute the scripts manually, we get correct results just like before. I showed the commands to the hosting provider and they confirmed that the commands work. What should I tell my hosting provider? What should I do? They are PHP scripts executed by cron that just post to Facebook and Twitter; they don't do anything heavy. I even asked my hosting provider whether we broke any rules.
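
    One hedged way to see what actually happens when cron fires is to capture the scripts' output and errors; the schedule and path below are placeholders:

      # crontab entry: log stdout/stderr so failures inside the PHP script become visible
      */15 * * * * /usr/bin/php /path/to/poster.php >> /tmp/poster-cron.log 2>&1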

    Read the article

  • Squid Authentication & streaming

    - by Steve Butler
    I've got Squid set up using Kerberos authentication. I'm also using squidGuard as a URL redirector to block out the usual nastiness of the web. There are some sites, though, that we allow certain users to visit and others not. This all works well as long as no streaming is involved. From what I can determine from the Squid logs and the Wireshark traces I've done, when the initial request to stream is sent everything is good: the authenticated username is sent with the request to squidGuard. The problem is that on subsequent traffic the username is not sent to squidGuard, causing it to be blocked by the default policy. I've tried using Squid's built-in allow/deny rules, but they're relatively clunky, and so far squidGuard has been easy and fast. The questions: How do I get Squid to pass the username on all requests? (Something tells me this isn't the best way.) How do I get squidGuard to see that traffic is authenticated to a specific user even when a username isn't passed? Is there any other way of accomplishing this? A few details that may be of importance: I'm using a list of users stored in a text file for squidGuard to compare against; I'm using full Kerberos auth with Squid; CentOS 6.0, Squid 3.1.4, squidGuard 1.3.
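
    For comparison, a hedged sketch of the built-in Squid approach mentioned above, since it sidesteps the redirector entirely for this one case (user names and domains are placeholders; acl, proxy_auth and http_access are standard squid.conf directives):

      # squid.conf sketch: allow streaming sites only for named, authenticated users
      acl streaming_sites dstdomain .youtube.com .vimeo.com
      acl streaming_users proxy_auth alice bob
      http_access allow streaming_sites streaming_users
      http_access deny streaming_sites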

    Read the article

  • All traffic is passed through OpenVPN although not requested

    - by BFH
    I have a bash script on an Ubuntu box which searches for the fastest OpenVPN server, connects, and binds one program to the tun0 interface. Unfortunately, all traffic is being passed through the VPN. Does anybody know what's going on? The relevant line follows:

      openvpn --daemon --config $cfile --auth-user-pass ipvanish.pass --status openvpn-status.log

    There don't seem to be any entries in iptables when I enter sudo iptables --list. The config files look like this:

      client
      dev tun
      proto tcp
      remote nyc-a04.ipvanish.com 443
      resolv-retry infinite
      nobind
      persist-key
      persist-tun
      persist-remote-ip
      ca ca.ipvanish.com.crt
      tls-remote nyc-a04.ipvanish.com
      auth-user-pass
      comp-lzo
      verb 3
      auth SHA256
      cipher AES-256-CBC
      keysize 256
      tls-cipher DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:AES256-SHA

    There is nothing in there that would direct everything through tun0, so maybe it's a new vagary of Ubuntu? I don't remember this happening in the past.
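
    A likely cause is the server pushing a redirect-gateway route at connect time, which the client applies even though it does not appear in the local config. A hedged sketch of how to refuse pushed routes (route-nopull is a standard OpenVPN client option):

      # Add to the client config (or pass --route-nopull on the command line) so routes
      # pushed by the server are ignored and only traffic explicitly bound to tun0 uses the VPN:
      route-nopull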

    Read the article

  • Why do I get "Permission denied (publickey)" when trying to SSH from local Ubuntu to an Amazon EC2 server?

    - by Vorleak Chy
    I have an instance of an application running in the cloud on Amazon EC2, and I need to connect to it from my local Ubuntu machine. It works fine from one local Ubuntu machine and also from a laptop, but I get the message "Permission denied (publickey)" when trying to SSH to EC2 from another local Ubuntu machine. It seems strange to me. I'm wondering whether there is some security setting on the Amazon EC2 side that limits which IPs can access the instance, or whether the certificate needs to be regenerated. Does anyone know a solution?
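
    Usually this simply means the new machine is not offering the right key pair. A hedged check, assuming the key file is a hypothetical mykey.pem and the AMI's default user is ubuntu (other AMIs use ec2-user or root):

      chmod 600 mykey.pem      # ssh refuses private keys with open permissions
      ssh -v -i mykey.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
      # -v shows which keys are offered and why the server rejects them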

    Read the article

  • F3-F5 keys incorrectly behaving as audio keys

    - by obvio171
    I don't know if this is a configuration issue or a hardware issue, but I have a Kinesis Advantage USB keyboard and for some reason the F3-F5 keys aren't responding as they used to. They don't respond to anything, and when I tried using F5 in Emacs it said <XF86AudioNext> is undefined, so I guess it's a weird mapping problem. Any idea how I could remap them to their original meaning?
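
    A hedged way to confirm and undo the mapping under X, assuming the keycodes reported by xev turn out to be the illustrative 191-193 (use whatever xev actually prints):

      xev | grep -A2 --line-buffered KeyPress    # press F3-F5 and note the reported keycodes
      # then map each offending keycode back to its F-key (numbers below are examples):
      xmodmap -e 'keycode 191 = F3'
      xmodmap -e 'keycode 192 = F4'
      xmodmap -e 'keycode 193 = F5'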

    Read the article

  • How can I log when reads to /dev/random block?

    - by ldrg
    I've noticed that since updating my server to Debian Squeeze the amount of entropy as reported by /proc/sys/kernel/random/entropy_avail is much lower than it was before the upgrade. I would like to know if this lower pool size is big enough to function with or if I need to look into getting more entropy sources. I think having a way to log blocking reads of /dev/random would show whether I have enough entropy or not.
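
    Short of patching the kernel, a hedged approximation is to log the pool size at intervals; if it regularly sits near zero, readers of /dev/random are almost certainly blocking:

      # Append a timestamped entropy reading to a log every 10 seconds
      while true; do
          echo "$(date '+%F %T') $(cat /proc/sys/kernel/random/entropy_avail)" >> /var/log/entropy.log
          sleep 10
      done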

    Read the article

  • Debian 7: "Failure of key exchange and association"

    - by pyrogoggles
    I'm installing Debian using the live CD with GNOME and the non-free packages. The internet works just fine when using the live CD, but when I open the installer and select a network, it gives me the error "Failure of key exchange and association". I've entered everything correctly and tried multiple times, but it never works. Am I just going to have to use Ethernet to install? Thanks, people.

    Read the article

  • OSSEC agent behind NAT

    - by Eric
    I am working on an OSSEC deployment where I will have multiple agents behind one public IP. Below is an example of the setup:

      Private network:
        OSSEC-Agent1 (192.168.1.10)
        OSSEC-Agent2 (192.168.50.33)
        OSSEC-Agent3 (10.10.10.1)
      Those IPs NAT to one public IP (1.1.1.1)
      1.1.1.1 then talks to the public OSSEC server on 2.2.2.2

    I've read some OSSEC documentation talking about NAT here, but it doesn't tell me exactly what I need to know: their example uses an entire /24 subnet, while mine will mainly have multiple agents behind only one public IP. With the setup so far, I brought Agent1 online fine and it is communicating with the OSSEC server. However, Agent2 keeps failing to connect to 2.2.2.2, even though when I added the key I had the correct name for it, so I know it talked to the portal at least once for that information. I'm assuming it's just getting confused by the multiple keys behind one public IP. I basically want to know whether this is possible and/or whether I'm overlooking something simple. Any help would be greatly appreciated.
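
    This is possible; the usual hedged approach is to register NATed agents with the source address the server will actually see, or with "any", so several agents can share one public IP. An illustrative sketch of what the server-side /var/ossec/etc/client.keys entries might look like (IDs, names and keys are placeholders generated with manage_agents):

      001 agent1 any 0123456789abcdef0123456789abcdef
      002 agent2 any fedcba9876543210fedcba9876543210
      003 agent3 any 00112233445566770011223344556677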

    Read the article

  • Increasing file descriptor limit on Debian does not work! Help!

    - by Aco
    I am running Debian 6 and I am trying to increase the file descriptor limit but it does not want to work. This is what I have done: I edited /etc/sysctl.conf by adding fs.file-max = 64000 at the end and applied the changes using sysctl -p. I then edited /etc/security/limits.conf and added the following lines: * soft nofile 64000 and * hard nofile 64000. Now when I execute ulimit -Hn and ulimit -Sn I still see 1024. I rebooted the server and I still get the same result. What have I failed to do?
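
    A common missing piece (a hedged guess, since it depends on how the sessions are started): limits.conf only takes effect where pam_limits runs, so it needs to be enabled in the PAM session files and checked from a fresh login:

      # add to /etc/pam.d/common-session (and common-session-noninteractive on Debian 6)
      session required pam_limits.so

      # then log out, log back in (or restart the affected service) and re-check:
      ulimit -Hn
      ulimit -Sn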

    Read the article

  • Virtualmin Webmin does not respond

    - by Miranda
    I have installed Virtualmin on a remote CentOS server, but it does not seem to work: https://115.146.95.118:10000/ does not respond, or at least the Webmin page does not. I have opened these ports:

      http        ALLOW 80:80       from 0.0.0.0/0
                  ALLOW 443:443     from 0.0.0.0/0
      ssh         ALLOW 22:22       from 0.0.0.0/0
      virtualmin  ALLOW 20000:20000 from 0.0.0.0/0
                  ALLOW 10000:10009 from 0.0.0.0/0

    Restarting Webmin does not solve it:

      /etc/rc.d/init.d/webmin restart
      Stopping Webmin server in /usr/libexec/webmin
      Starting Webmin server in /usr/libexec/webmin

    I have also tried Amazon EC2 this time and still couldn't get it to work: http://ec2-67-202-21-21.compute-1.amazonaws.com:10000/

      [ec2-user@ip-10-118-239-13 ~]$ netstat -an | grep :10000
      tcp        0      0 0.0.0.0:10000     0.0.0.0:*    LISTEN
      udp        0      0 0.0.0.0:10000     0.0.0.0:*

      [ec2-user@ip-10-118-239-13 ~]$ sudo iptables -L -n
      Chain INPUT (policy ACCEPT)
      target     prot opt source      destination
      ACCEPT     udp  --  0.0.0.0/0   0.0.0.0/0    udp dpt:20
      ACCEPT     udp  --  0.0.0.0/0   0.0.0.0/0    udp dpt:21
      ACCEPT     udp  --  0.0.0.0/0   0.0.0.0/0    udp dpt:53
      ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:20000
      ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:10000
      ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:443
      ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:80
      ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:993
      ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:143
      ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:995
      ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:110
      ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:20
      ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:21
      ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:53
      ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:587
      ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:25
      ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:22

      Chain FORWARD (policy ACCEPT)
      target     prot opt source      destination

      Chain OUTPUT (policy ACCEPT)
      target     prot opt source      destination

    Since I need more than 10 reputation to post an image, you can find the screenshots of the security group settings at the Webmin Support Forum. I have tried:

      sudo iptables -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT

    It did not change anything.

      [ec2-user@ip-10-118-239-13 ~]$ sudo yum install openssl perl-Net-SSLeay perl-Crypt-SSLeay
      Loaded plugins: fastestmirror, priorities, security, update-motd
      Loading mirror speeds from cached hostfile
       * amzn-main: packages.us-east-1.amazonaws.com
       * amzn-updates: packages.us-east-1.amazonaws.com
      amzn-main    | 2.1 kB  00:00
      amzn-updates | 2.3 kB  00:00
      Setting up Install Process
      Package openssl-1.0.0j-1.43.amzn1.i686 already installed and latest version
      Package perl-Net-SSLeay-1.35-9.4.amzn1.i686 already installed and latest version
      Package perl-Crypt-SSLeay-0.57-16.4.amzn1.i686 already installed and latest version
      Nothing to do

      [ec2-user@ip-10-118-239-13 ~]$ nano /etc/webmin/miniserv.conf
      port=10000
      root=/usr/libexec/webmin
      mimetypes=/usr/libexec/webmin/mime.types
      addtype_cgi=internal/cgi
      realm=Webmin Server
      logfile=/var/webmin/miniserv.log
      errorlog=/var/webmin/miniserv.error
      pidfile=/var/webmin/miniserv.pid
      logtime=168
      ppath=
      ssl=1
      env_WEBMIN_CONFIG=/etc/webmin
      env_WEBMIN_VAR=/var/webmin
      atboot=1
      logout=/etc/webmin/logout-flag
      listen=10000
      denyfile=\.pl$
      log=1
      blockhost_failures=5
      blockhost_time=60
      syslog=1
      session=1
      server=MiniServ/1.585
      userfile=/etc/webmin/miniserv.users
      keyfile=/etc/webmin/miniserv.pem
      passwd_file=/etc/shadow
      passwd_uindex=0
      passwd_pindex=1
      passwd_cindex=2
      passwd_mindex=4
      passwd_mode=0
      preroot=virtual-server-theme
      passdelay=1
      sessiononly=/virtual-server/remote.cgi
      preload=
      mobile_preroot=virtual-server-mobile
      mobile_prefixes=m. mobile.
      anonymous=/virtualmin-mailman/unauthenticated=anonymous
      ssl_cipher_list=ECDHE-RSA-AES256-SHA384:AES256-SHA256:AES256-SHA256:RC4:HIGH:MEDIUM:+TLSv1:!MD5:!SSLv2:+SSLv3:!ADH:!aNULL:!eNULL:!NULL:!DH:!ADH:!EDH:!AESGCM
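
    A hedged way to narrow this down is to compare a local request with a remote one; with ssl=1 in miniserv.conf, Webmin answers HTTPS on port 10000:

      # on the server itself - should return headers / a login page if miniserv is healthy
      curl -vk https://localhost:10000/

      # from your workstation - a timeout here while the local test works points at the
      # EC2 security group or hosting firewall rather than at Webmin itself
      curl -vk https://ec2-67-202-21-21.compute-1.amazonaws.com:10000/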

    Read the article

  • CentOS 6.2 Bridge Setup for KVM

    - by Gaia
    I'm trying to set up bridged networking with KVM on CentOS 6.2, to no avail. There are plenty of docs and tutorials about it, but they all seem to conflict or don't provide information specific enough for my situation; I just don't get it. I access the host via the public IP "xxx.xxx.128.58". All other available IPs (a /29) should be bridged and made available to the only KVM guest (running a public-facing LAMP stack) that will be set up on this machine. The amazingly unhelpful NOC people assigned the extra IPs to eth1. Is this correct? Should br0 bridge to eth0 or eth1? How do I set this up? Here is the relevant info:

      eth0      Link encap:Ethernet  HWaddr 00:25:90:68:FE:BC
                inet6 addr: fe80::225:90ff:fe68:febc/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:763 errors:0 dropped:0 overruns:0 frame:0
                TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:550811 (537.9 KiB)  TX bytes:648 (648.0 b)
                Memory:fb980000-fba00000

      eth1      Link encap:Ethernet  HWaddr 00:25:90:68:FE:BD
                inet addr:xxx.xxx.128.58  Bcast:xxx.xxx.128.63  Mask:255.255.255.248
                inet6 addr: fe80::225:90ff:fe68:febd/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:1806 errors:0 dropped:0 overruns:0 frame:0
                TX packets:1505 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:133166 (130.0 KiB)  TX bytes:106070 (103.5 KiB)
                Memory:fb900000-fb980000

      eth1:0    Link encap:Ethernet  HWaddr 00:25:90:68:FE:BD
                inet addr:xxx.xxx.128.59  Bcast:xxx.xxx.128.63  Mask:255.255.255.248
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                Memory:fb900000-fb980000

      eth1:1    Link encap:Ethernet  HWaddr 00:25:90:68:FE:BD
                inet addr:xxx.xxx.128.60  Bcast:xxx.xxx.128.63  Mask:255.255.255.248
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                Memory:fb900000-fb980000

      eth1:2    Link encap:Ethernet  HWaddr 00:25:90:68:FE:BD
                inet addr:xxx.xxx.128.61  Bcast:xxx.xxx.128.63  Mask:255.255.255.248
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                Memory:fb900000-fb980000

      eth1:3    Link encap:Ethernet  HWaddr 00:25:90:68:FE:BD
                inet addr:xxx.xxx.128.62  Bcast:xxx.xxx.128.63  Mask:255.255.255.248
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                Memory:fb900000-fb980000

      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

      virbr0    Link encap:Ethernet  HWaddr 52:54:00:62:55:68
                inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

      > cat /etc/sysconfig/network
      NETWORKING=yes
      HOSTNAME=XXXX.domain.com

      > brctl show
      bridge name     bridge id           STP enabled     interfaces
      br0             8000.00259068febc   no              eth0
      virbr0          8000.525400625568   yes             virbr0-nic

      > ls -fl | grep ifcfg
      -rw-r--r--  1 root root 198 Jun  7 10:58 ifcfg-eth0
      -rw-r--r--. 1 root root 254 Oct  7  2011 ifcfg-lo
      -rw-r--r--  1 root root  77 Jun  6 18:51 ifcfg-eth1-range0
      -rw-r--r--  1 root root 168 Jun  6 18:50 ifcfg-eth1

      > cat ifcfg-eth0
      DEVICE="eth0"
      BOOTPROTO="static"
      BRIDGE="br0"
      HWADDR="00:25:90:68:FE:BC"
      IPV6INIT="yes"
      MTU="1500"
      NM_CONTROLLED="yes"
      ONBOOT="yes"
      TYPE="Ethernet"
      IPADDR="yyy.yyy.216.131"
      NETMASK="255.255.255.128"

      > cat ifcfg-eth1
      DEVICE="eth1"
      HWADDR="00:25:90:68:FE:BD"
      NM_CONTROLLED="yes"
      ONBOOT="yes"
      BOOTPROTO="static"
      IPADDR="xxx.xxx.128.58"
      NETMASK="255.255.255.248"
      GATEWAY="xxx.xxx.128.57"

      > cat ifcfg-eth1-range0
      IPADDR_START="xxx.xxx.128.59"
      IPADDR_END="xxx.xxx.128.62"
      CLONENUM_START="0"

      Kernel IP routing table
      Destination      Gateway          Genmask          Flags Metric Ref    Use Iface
      xxx.xxx.128.56   *                255.255.255.248  U     0      0        0 eth1
      192.168.122.0    *                255.255.255.0    U     0      0        0 virbr0
      link-local       *                255.255.0.0      U     1003   0        0 eth1
      default          xxx.xxx.128.57   0.0.0.0          UG    0      0        0 eth1
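
    For reference, a hedged sketch of the usual CentOS 6 layout for this: a br0 device takes over the public IP that eth1 holds now, eth1 simply joins the bridge, and the guest's libvirt interface is attached to br0. The values below mirror the files shown above and are illustrative only:

      # /etc/sysconfig/network-scripts/ifcfg-eth1  (physical NIC, no IP, enslaved to the bridge)
      DEVICE="eth1"
      HWADDR="00:25:90:68:FE:BD"
      ONBOOT="yes"
      NM_CONTROLLED="no"
      BRIDGE="br0"

      # /etc/sysconfig/network-scripts/ifcfg-br0   (bridge carries the host's public IP)
      DEVICE="br0"
      TYPE="Bridge"
      ONBOOT="yes"
      BOOTPROTO="static"
      IPADDR="xxx.xxx.128.58"
      NETMASK="255.255.255.248"
      GATEWAY="xxx.xxx.128.57"
      DELAY="0"

      # then restart networking and point the guest's network interface at bridge br0:
      service network restart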

    Read the article

  • 1GB cached memory - Do I need more RAM?

    - by Martin
    The server runs well, but I wonder if I should get more RAM. I only have a few MB of "free" memory and 1.2 GB of "cached" memory:

      free:
                   total       used       free     shared    buffers     cached
      Mem:          3945       3893         51          0         28       1216
      -/+ buffers/cache:        2648       1296
      Swap:         3895        857       3038

    I have read that cached memory counts as "used" but can be handed back to applications when they need it. Is the cached value an indicator of the need for more RAM?

      cat /proc/meminfo, 1 day after flushing the cache:
      MemTotal:        4040048 kB
      MemFree:           32844 kB
      Buffers:           18956 kB
      Cached:          1249092 kB
      SwapCached:       161576 kB
      Active:          3611328 kB
      Inactive:         189104 kB
      SwapTotal:       3989496 kB
      SwapFree:        2894200 kB
      Dirty:             20520 kB
      Writeback:             0 kB
      AnonPages:       2523496 kB
      Mapped:           217744 kB
      Slab:              70940 kB
      SReclaimable:      36756 kB
      SUnreclaim:        34184 kB
      PageTables:        99648 kB
      NFS_Unstable:          0 kB
      Bounce:                0 kB
      CommitLimit:     6009520 kB
      Committed_AS:    6401716 kB
      VmallocTotal: 34359738367 kB
      VmallocUsed:       18852 kB
      VmallocChunk: 34359719439 kB
      HugePages_Total:       0
      HugePages_Free:        0
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      Hugepagesize:       2048 kB

      top:
      top - 17:20:10 up 112 days, 3:06, 1 user, load average: 1.01, 1.62, 1.48
      Tasks: 208 total, 1 running, 207 sleeping, 0 stopped, 0 zombie
      Cpu(s): 0.6%us, 0.6%sy, 0.0%ni, 97.5%id, 1.3%wa, 0.0%hi, 0.1%si, 0.0%st
      Mem:  4040048k total, 3953108k used, 86940k free, 16348k buffers
      Swap: 3989496k total, 1095712k used, 2893784k free, 1235436k cached
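
    One hedged way to judge whether the box is genuinely short on RAM is to watch swap activity rather than the cache figure; sustained non-zero swap-in/swap-out means real memory pressure:

      # Sample memory and swap activity every 5 seconds; look at the si (swap-in) and
      # so (swap-out) columns - if they stay at 0, the cache is just using otherwise idle RAM.
      vmstat 5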

    Read the article

  • NAT ports - how do they work?

    - by Davidoper
    I have the following network schema:

      Computer A: three NICs:
        NIC 1 (eth0): DHCP, public internet
        NIC 2 (eth1): static 192.168.1.1, gateway for Computer B
        NIC 3 (eth2): static 192.168.2.1, gateway for Computer C
      Computer B: static 192.168.1.2, using gateway 192.168.1.1 (NIC 2)
      Computer C: static 192.168.2.2, using gateway 192.168.2.1 (NIC 3)

    So I applied this to get NAT working:

      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    Every computer can connect to the internet now. I have been applying rules to the main computer (Computer A), like dropping connections to some ports, e.g. ssh:

      iptables -A INPUT -p tcp --dport 22 -j DROP

    But now, for instance, I would like to allow only connections for ports 20, 21, 22, 53 and 80 on Computer C, and ignore outside traffic that is not related to those ports. The allowed connections should be FROM Computer C to the outside, but not from the outside to Computer C (I mean: Computer C is not hosting any HTTP or SSH, it is only going to use them as a client). I guess this should be done like this:

      iptables -A OUTPUT -i eth2 -o eth0 -p tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A INPUT -i eth2 -o eth0 -p tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT

    The last rule (dropping any other traffic) is at the end of the configuration, so -A should be working correctly. The thing is... it is not working. If I put the last rule like this:

      iptables -A FORWARD -i eth2 -o eth0 -j DROP

    it just drops everything and, for instance, port 21 (previously opened as you can see above) does not work either. Can you tell me what I could have done wrong? I have been struggling with this problem for some time and I am unable to solve it. Thanks!
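
    Traffic that Computer A merely forwards never traverses its INPUT or OUTPUT chains, so per-port rules for Computer C belong in FORWARD. A hedged sketch for the ports mentioned (20, 21, 22, 53 and 80):

      # Allow Computer C (eth2) out to the listed services, allow the replies back, drop the rest
      iptables -A FORWARD -i eth2 -o eth0 -p tcp -m multiport --dports 20,21,22,53,80 \
               -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
      iptables -A FORWARD -i eth2 -o eth0 -p udp --dport 53 -j ACCEPT    # DNS is usually UDP
      iptables -A FORWARD -i eth0 -o eth2 -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A FORWARD -i eth2 -j DROP
      # (active/passive FTP data connections may additionally need the nf_conntrack_ftp module)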

    Read the article

  • Webserver: chrooted PHP gives mysql.sock error when attempting to reach mysql

    - by Jon L.
    Hey guys, I've configured an Ubuntu webserver with Nginx + PHP5-FPM. I've created a chrooted environment (using jailkit) that I'm tossing my developers into, from where they can develop their test applications. The chroot jail is /home/jail. Nginx and PHP5-FPM run outside the chroot but are configured to serve websites within the chrooted environment. So far, Nginx and PHP5-FPM are serving up files without issue, except for the following: when attempting to connect to MySQL, we receive this error:

      SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'

    I believe the issue is that the non-chrooted php.ini references mysqld.sock outside of the chroot environment (it's actually using the MySQL default setting currently). My question is: how can I configure PHP to access MySQL via loopback or similar? (I found that suggested in a Google result, but without any instructions.) Or if I'm missing some other obvious setting, let me know. If there's an option of creating a hard link (one that would remain available even if MySQL is restarted), that would be handy as well.
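
    Two hedged options: point PHP at TCP loopback (a host of 127.0.0.1 makes the MySQL client use TCP instead of the unix socket), or bind-mount the socket directory into the jail so the default socket path keeps working. The jail path below matches the /home/jail mentioned above:

      # Option 1: connect over TCP loopback instead of the unix socket,
      #   e.g. a DSN of "mysql:host=127.0.0.1;dbname=test" rather than host=localhost

      # Option 2: expose the real socket inside the chroot; mounting the directory
      # (not the socket file) means it survives MySQL restarts
      mkdir -p /home/jail/var/run/mysqld
      mount --bind /var/run/mysqld /home/jail/var/run/mysqld
      # make it persistent via /etc/fstab:
      #   /var/run/mysqld  /home/jail/var/run/mysqld  none  bind  0  0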

    Read the article

  • Block IP Address including ICMP using UFW

    - by dr jimbob
    I prefer ufw to iptables for configuring my software firewall. After reading about this vulnerability (also on askubuntu), I decided to block the fixed IP of the control server, 212.7.208.65. I don't think I'm vulnerable to this particular worm (and I understand the IP could easily change), but I wanted to answer this particular comment about how you would configure a firewall to block it. I planned on using:

      # sudo ufw deny to 212.7.208.65
      # sudo ufw deny from 212.7.208.65

    However, as a test that the rules were working, I tried pinging after I set up the rules and saw that my default ufw settings let ICMP through even from an IP address set to REJECT or DENY:

      # ping 212.7.208.65
      PING 212.7.208.65 (212.7.208.65) 56(84) bytes of data.
      64 bytes from 212.7.208.65: icmp_seq=1 ttl=52 time=79.6 ms
      ^C
      --- 212.7.208.65 ping statistics ---
      1 packets transmitted, 1 received, 0% packet loss, time 0ms
      rtt min/avg/max/mdev = 79.630/79.630/79.630/0.000 ms

    Now I'm worried that my ICMP settings are too generous (conceivably this or a future worm could set up an ICMP tunnel to bypass my firewall rules). I believe this is the relevant part of my iptables rules (and even though grep doesn't show it, the rules are associated with the chains shown):

      # sudo iptables -L -n | grep -E '(INPUT|user-input|before-input|icmp |212.7.208.65)'
      Chain INPUT (policy DROP)
      ufw-before-input  all  --  0.0.0.0/0      0.0.0.0/0
      Chain ufw-before-input (1 references)
      ACCEPT  icmp --  0.0.0.0/0      0.0.0.0/0      icmp type 3
      ACCEPT  icmp --  0.0.0.0/0      0.0.0.0/0      icmp type 4
      ACCEPT  icmp --  0.0.0.0/0      0.0.0.0/0      icmp type 11
      ACCEPT  icmp --  0.0.0.0/0      0.0.0.0/0      icmp type 12
      ACCEPT  icmp --  0.0.0.0/0      0.0.0.0/0      icmp type 8
      ufw-user-input  all  --  0.0.0.0/0      0.0.0.0/0
      Chain ufw-user-input (1 references)
      DROP  all  --  0.0.0.0/0      212.7.208.65
      DROP  all  --  212.7.208.65   0.0.0.0/0

    How should I go about making ufw block ICMP when I specifically attempt to block an IP address? My /etc/ufw/before.rules has in part:

      # ok icmp codes
      -A ufw-before-input -p icmp --icmp-type destination-unreachable -j ACCEPT
      -A ufw-before-input -p icmp --icmp-type source-quench -j ACCEPT
      -A ufw-before-input -p icmp --icmp-type time-exceeded -j ACCEPT
      -A ufw-before-input -p icmp --icmp-type parameter-problem -j ACCEPT
      -A ufw-before-input -p icmp --icmp-type echo-request -j ACCEPT

    I tried changing ACCEPT above to ufw-user-input:

      # ok icmp codes
      -A ufw-before-input -p icmp --icmp-type destination-unreachable -j ufw-user-input
      -A ufw-before-input -p icmp --icmp-type source-quench -j ufw-user-input
      -A ufw-before-input -p icmp --icmp-type time-exceeded -j ufw-user-input
      -A ufw-before-input -p icmp --icmp-type parameter-problem -j ufw-user-input
      -A ufw-before-input -p icmp --icmp-type echo-request -j ufw-user-input

    but ufw wouldn't restart after that. I'm not sure why (still troubleshooting) and also not sure whether this is sensible. Will there be any negative effects (besides forcing the software firewall to push ICMP through a few more rules)?
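
    A hedged approach that keeps ufw's own files intact: add explicit drops for that address ahead of the generic ICMP accepts in /etc/ufw/before.rules (the same iptables-restore syntax the file already uses), then reload:

      # /etc/ufw/before.rules - place these *above* the "ok icmp codes" ACCEPT lines
      -A ufw-before-input  -s 212.7.208.65 -j DROP
      -A ufw-before-output -d 212.7.208.65 -j DROP

      # then reload the rules
      sudo ufw reload    # or: sudo ufw disable && sudo ufw enable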

    Read the article

  • rsyslog - regex trouble

    - by benmccann
    I'm trying to set up the Logentries service. If a log entry has a token in it, I would like to send it to api.logentries.com:10000. The token is a GUID in the format aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee. Right now I'm doing:

      # If there's a logentries token then send it directly to logentries
      :msg, regex, ".*[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}.*"
      & @@api.logentries.com:10000

    I checked the rsyslog debug logs and my regex is not matching, but I can't figure out why or how to fix it:

      5245.961161378:7fb79b514700: Filter: check for property 'msg' (value ' fb1c507f-2ede-4d7f-a140-2bd8d56e133 - application - [play-akka.actor.default-dispatcher-1] - Found user: 4fb11ea5e4b00a1aeebe2800') regex '.*[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}.*': FALSE
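
    One hedged thing to check: rsyslog's plain regex comparison uses POSIX basic regular expressions, where an unescaped interval such as {8} is not treated as a quantifier; the ereregex comparison uses extended syntax. (Note also that the token in the debug line above ends in an 11-character group, which {12} would never match.) A sketch:

      # If there's a logentries token then send it directly to logentries (ERE variant)
      :msg, ereregex, ".*[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}.*"
      & @@api.logentries.com:10000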

    Read the article

  • Free space on a dedicated server running CentOS

    - by Trance84
    It may sound stupid, but I need to figure out how much disk space I have on my dedicated server. It runs CentOS 6. The last command I issued was this:

      [root@ks34900 ~]# df -h
      Filesystem      Size  Used Avail Use% Mounted on
      rootfs          9.7G  6.4G  2.9G  69% /
      /dev/root       9.7G  6.4G  2.9G  69% /
      none           1000M  288K 1000M   1% /dev
      /dev/sda2       914G  200M  868G   1% /home

    But again, stupid as it may sound, I can't figure out how much space I have in the "/" folder (root). And is it possible that "/usr" has a different amount of space (a separate partition)?
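
    A hedged pair of commands that answers both questions directly, since df reports the filesystem a given path lives on:

      df -h /       # the root filesystem: 9.7G total, 2.9G still available per the output above
      df -h /usr    # if this prints the same filesystem as '/', /usr is not a separate partition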

    Read the article

  • rTorrent: memory usage too low!?

    - by Claudiu
    I want to know from more experienced rTorrent users how to tweak .rtorrent.rc so that rTorrent will cache disk reads and writes (the same way uTorrent does). I have set max_memory_usage = 1GB, but this amount is not used. I run 6 rTorrent instances on a quad-core, 8 GB RAM machine, and the total used memory reported by htop is only ~500 MB. I need to use memory buffers because disk I/O activity is very high.
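
    A hedged note on the setting itself: example .rtorrent.rc files commonly specify max_memory_usage in plain bytes, so a value written as "1GB" may not be parsed the way you intend on older builds. An illustrative line:

      # .rtorrent.rc - value in bytes (1 GB here); adjust per instance so six instances fit in 8 GB
      max_memory_usage = 1073741824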

    Read the article

  • Postfix connect timing out remotely, working fine locally

    - by Moritz
    Running Postfix on Debian, I cannot connect to send mail any more. It worked until approximately a week ago, and I do not recall touching the server's configuration during that time, which makes it difficult to find out what the problem is. When connecting from the server to itself it works fine:

      root@xxxx:~# telnet localhost 25
      Trying 127.0.0.1...
      Connected to localhost.localdomain.
      Escape character is '^]'.
      ehlo localhost
      220 mail.xxxx.de ESMTP Postfix (Debian/GNU)
      250-mail.xxxx.de
      250-PIPELINING
      250-SIZE 10240000
      250-VRFY
      250-ETRN
      250-STARTTLS
      250-ENHANCEDSTATUSCODES
      250-8BITMIME
      250 DSN
      quit
      221 2.0.0 Bye
      Connection closed by foreign host.

    Trying to do the same remotely times out:

      laptop:~ $ telnet mail.xxxx.de 25
      Trying 93.xx.xx.xx...
      telnet: connect to address 93.xx.xx.xx: Operation timed out
      telnet: Unable to connect to remote host

    The configuration is as follows:

      root@xxxx:~# postconf -n
      alias_database = hash:/etc/aliases
      alias_maps = hash:/etc/aliases
      append_dot_mydomain = no
      biff = no
      broken_sasl_auth_clients = yes
      config_directory = /etc/postfix
      home_mailbox = Maildir/
      inet_interfaces = all
      inet_protocols = ipv4
      mailbox_command =
      mailbox_size_limit = 0
      mydestination = localhost.localdomain, localhost.localdomain, localhost
      myhostname = mail.xxxx.de
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      myorigin = /etc/mailname
      readme_directory = no
      recipient_delimiter = +
      relayhost =
      smtp_tls_note_starttls_offer = yes
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      smtp_use_tls = yes
      smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
      smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
      smtpd_sasl_auth_enable = yes
      smtpd_sasl_exceptions_networks = $mynetworks
      smtpd_sasl_local_domain =
      smtpd_sasl_path = private/auth
      smtpd_sasl_security_options = noanonymous
      smtpd_sasl_type = dovecot
      smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
      smtpd_tls_auth_only = no
      smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
      smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
      smtpd_tls_loglevel = 1
      smtpd_tls_received_header = yes
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtpd_tls_session_cache_timeout = 3600s
      smtpd_use_tls = yes
      tls_random_source = dev:/dev/urandom
      virtual_alias_maps = proxy:mysql:$config_directory/mysql_virtual_alias_maps.cf
      virtual_gid_maps = static:8
      virtual_mailbox_base = /var/vmail
      virtual_mailbox_domains = proxy:mysql:$config_directory/mysql_virtual_domains_maps.cf
      virtual_mailbox_maps = proxy:mysql:$config_directory/mysql_virtual_mailbox_maps.cf
      virtual_minimum_uid = 150
      virtual_transport = dovecot

    Receiving mail is no problem, as is retrieving it remotely. Do you have an idea what I could check next?
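
    Since the daemon answers locally, a hedged next step is to confirm that port 25 is actually reachable from outside and not filtered in transit (hosting providers and ISPs sometimes start filtering port 25 silently):

      # on the server: is anything filtering port 25, and is Postfix listening on all interfaces?
      iptables -L -n | grep -w 25
      netstat -tlnp | grep :25      # should show the Postfix master process on 0.0.0.0:25

      # from another host: is the port open, closed, or filtered in transit?
      nmap -p 25 mail.xxxx.de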

    Read the article
