I have a webapp running in Jetty on Mac OS 10.6.
After a few days of running, without the server losing power or rebooting, it stops working, complaining that it can't find a properties file.
This properties file is included inside the .war file deployed to the /webapps directory.
If I restart Jetty as the superuser the web service works again just fine.
Can anyone offer advice on what's going on and how I can fix it? The error shown when it isn't working is:
Problem accessing /my-web-service. Reason:
INTERNAL_SERVER_ERROR
Caused by:
java.lang.NullPointerException
at com.company.service.Dao.readFromPropertiesFile(BwDao.java:35)
at com.company.service.ServletHandler.doGet(ProxyClass.java:66)
...
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Here's where the properties files exist that it's trying to read from the .war file:
And this is how the properties are being read from the classpath:
Properties properties = new Properties();
properties.load(Thread.currentThread().getContextClassLoader().getResourceAsStream(
"app.properties"));
Again, this works just fine right after the server is restarted, but it seems to fail after running for a few days.
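One observation about the loading code above: getResourceAsStream() returns null rather than throwing when the resource cannot be found, and passing that null into Properties.load() is exactly what produces the NullPointerException in the stack trace. A defensive sketch (the PropsLoader class name is mine, the app.properties name is from the question) that fails with a clear message instead:

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class PropsLoader {
    // Loads a properties file from the classpath, failing loudly instead
    // of with a NullPointerException when the resource cannot be found.
    public static Properties load(String name) throws IOException {
        InputStream in = Thread.currentThread()
                .getContextClassLoader()
                .getResourceAsStream(name);
        if (in == null) {
            throw new FileNotFoundException(
                "Classpath resource not found: " + name);
        }
        try {
            Properties props = new Properties();
            props.load(in);
            return props;
        } finally {
            in.close();
        }
    }
}
```

This won't fix the underlying cause, but the next time it breaks, the log will say which resource went missing instead of reporting a bare NPE.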
I have a startup script placed in /etc/init.d wherein I make the following call:
nohup sudo -u myuser $CATALINA_HOME/bin/startup.sh 2>&1
This causes Tomcat to run as myuser, as expected. However, after issuing the reboot command, the system starts up and root is now the owner of the process. How can I force the process to be started as myuser on reboot?
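One hedged guess: at boot there is no interactive context, and depending on the init implementation the sudo call can misbehave, leaving the script running as root. A common alternative in init scripts is su, which needs no password when the caller is already root; a sketch of the relevant line, assuming the same myuser and $CATALINA_HOME:

```shell
# /etc/init.d script fragment: start Tomcat as myuser whether the script
# is run by root at boot or interactively. $CATALINA_HOME is expanded by
# the outer shell (double quotes), so it need not be in myuser's env.
su - myuser -c "$CATALINA_HOME/bin/startup.sh" > /var/log/tomcat-start.log 2>&1
```

Worth verifying with `ps -o user,cmd -C java` after a reboot that the JVM really is owned by myuser.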
Goal is to have a collection of dot files (.bashrc, .vimrc, etc.) in a central location. Once it's there, Puppet should push out the files to all managed servers.
I was initially thinking of giving users FTP access so they could upload their dot files, and then having an rsync cron job. However, that might not be the most elegant or robust solution, so I wanted to see if anyone else had recommendations.
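Since Puppet is already in the picture, the dot files can live in a module's files/ directory in the central (ideally version-controlled) repo and be pushed with plain file resources, skipping FTP and rsync entirely. A minimal sketch, where the dotfiles module name and target paths are assumptions:

```puppet
# modules/dotfiles/manifests/init.pp -- push central dot files to all nodes.
# Sources resolve to modules/dotfiles/files/<name> on the puppet master.
class dotfiles {
  file { '/etc/skel/.bashrc':
    ensure => file,
    source => 'puppet:///modules/dotfiles/bashrc',
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
  }
  file { '/etc/skel/.vimrc':
    ensure => file,
    source => 'puppet:///modules/dotfiles/vimrc',
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
  }
}
```

Users then submit changes to the central module (e.g. via a Git repo) and every agent run distributes them; no separate cron job is needed.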
We are going to roll out tuned (and numad) on ~1000 servers, the majority of them being VMware servers either on NetApp or 3Par storage.
According to Red Hat's documentation, we should choose the virtual-guest profile.
What it is doing can be seen here: tuned.conf
We are changing the IO scheduler to NOOP as both VMware and the NetApp/3Par should do sufficient scheduling for us.
However, after investigating a bit I am not sure why they are increasing vm.dirty_ratio and kernel.sched_min_granularity_ns.
As far as I understand, increasing vm.dirty_ratio to 40% means that on a server with 20 GB of RAM, 8 GB can be dirty at any given time, unless vm.dirty_writeback_centisecs is hit first. And while those 8 GB are being flushed, all IO for the application will be blocked until the dirty pages are freed.
Increasing the dirty_ratio would probably mean higher write performance at peaks, since we now have a larger cache, but when the cache fills, IO will be blocked for considerably longer (several seconds).
The other question is why they are increasing sched_min_granularity_ns.
If I understand it correctly, increasing this value decreases the number of time slices per epoch (sched_latency_ns), meaning that running tasks get more time to finish their work. I can see this being a very good thing for applications with very few threads, but for Apache or other processes with many threads, wouldn't this be counter-productive?
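As a sanity check of the dirty_ratio arithmetic above (the sysctl names are real; the 20 GB / 40% numbers are the example from the question):

```shell
# dirty_ratio is a percentage of reclaimable RAM: at 40%, a 20 GB box can
# accumulate roughly 8 GB of dirty pages before writers start blocking.
ram_gb=20
dirty_ratio=40
dirty_gb=$((ram_gb * dirty_ratio / 100))
echo "${dirty_gb} GB may be dirty before writers block"
# The values tuned would change can be inspected read-only with:
#   sysctl vm.dirty_ratio vm.dirty_background_ratio kernel.sched_min_granularity_ns
```

Note that writeback actually starts earlier, at vm.dirty_background_ratio; dirty_ratio is the hard point where the writing process itself is throttled.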
I'm just wondering if I need to restart my server after editing fstab and mtab. I changed something in these files manually due to a problem with awstats reports.
I am using ISPConfig 3, set up with the help of the tutorial from howtoforge. But after removing/deleting some accounts, the fstab and mtab configuration got messed up.
I also asked this question on the howtoforge forum, but so far no one has answered. If you'd like to read my question, please visit it here.
I tried very hard to fix the problem, without luck.
Update:
Here's what happened to my fstab:
Before, the values were (I omitted the others):
/var/log/ispconfig/httpd/mydomain.com /var/www/clients/client1/web1/log none bind,nobootwait 0 0
/var/log/ispconfig/httpd/example.com /var/www/clients/client1/web2/log none bind,nobootwait 0 0
So I changed it to the correct path:
/var/log/ispconfig/httpd/mydomain.com /var/www/clients/client1/web2/log none bind,nobootwait 0 0
/var/log/ispconfig/httpd/example.com /var/www/clients/client1/web3/log none bind,nobootwait 0 0
I also found mtab to have the same values as above, which is why I edited it manually.
from:
/var/log/ispconfig/httpd/mydomain.com /var/www/clients/client1/web1/log none rw,bind 0 0
/var/log/ispconfig/httpd/example.com /var/www/clients/client1/web2/log none rw,bind 0 0
to:
/var/log/ispconfig/httpd/mydomain.com /var/www/clients/client1/web2/log none rw,bind 0 0
/var/log/ispconfig/httpd/example.com /var/www/clients/client1/web3/log none rw,bind 0 0
I edited those values because the correct paths of mydomain.com and example.com should be under the web2 and web3 folders respectively.
As of now, the log of example.com points to:
/var/www/clients/client1/web2/log
when it should be:
/var/www/clients/client1/web3/log
So I am thinking that this is because of fstab and mtab.
Please guide me on how to point the log correctly to its default directory.
I explain the scenario step by step at this link.
to dawud:
Based on your example mount -o remount,noexec /var, should I run mount -o remount,noexec /var/log/ispconfig/httpd/example.com?
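For what it's worth, mtab only mirrors what is currently mounted, so hand-editing it changes nothing; /etc/fstab is read at boot or by mount -a, so a full reboot is not required. A sketch of applying the corrected entries live (paths taken from the question; guarded as a dry run, so set APPLY=1 and run as root on the ISPConfig box to actually do it):

```shell
# Tear down the stale bind mounts, then let mount -a re-create them from
# the freshly edited /etc/fstab. mtab is rewritten automatically.
if [ "${APPLY:-0}" != "1" ]; then
  echo "dry run: set APPLY=1 and run as root to apply"
else
  umount /var/www/clients/client1/web1/log 2>/dev/null
  umount /var/www/clients/client1/web2/log 2>/dev/null
  mount -a   # re-reads /etc/fstab and re-creates the bind mounts
fi
```

After that, `mount | grep ispconfig` should show the log directories bound to the web2/web3 paths.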
I have a 4-port Digium card in there, and have 4 lines running smoothly. Now we've added ANOTHER 4-port card and have 4 more analog lines coming into the Trixbox server. It still runs the original 4 fine, but what do I need to do to add the additional 4 phone numbers/lines?
I want it to act exactly as before; there's nothing special about the new lines. We just need more lines so that when we have 4 out-of-state customers on a call, 4 more can call without getting a busy signal.
Trixbox CE 2.8
How can I make statd use an IP address other than 127.0.0.1?
I have a server that is connected to 2 different networks (one public, one private). I want it to provide an NFS share only to the private network. The host is an Ubuntu 8.04 machine.
The private ip address is 192.168.1.202
I changed /etc/default/portmap to add:
OPTIONS="-i 192.168.1.202"
The command lsof -n | grep portmap returns:
portmap 10252 daemon cwd DIR 202,0 4096 2 /
portmap 10252 daemon rtd DIR 202,0 4096 2 /
portmap 10252 daemon txt REG 202,0 15248 13461 /sbin/portmap
portmap 10252 daemon mem REG 202,0 83708 32823 /lib/tls/i686/cmov/libnsl-2.7.so
portmap 10252 daemon mem REG 202,0 1364388 32817 /lib/tls/i686/cmov/libc-2.7.so
portmap 10252 daemon mem REG 202,0 31304 16588 /lib/libwrap.so.0.7.6
portmap 10252 daemon mem REG 202,0 109152 16955 /lib/ld-2.7.so
portmap 10252 daemon 0u CHR 1,3 960 /dev/null
portmap 10252 daemon 1u CHR 1,3 960 /dev/null
portmap 10252 daemon 2u CHR 1,3 960 /dev/null
portmap 10252 daemon 3u unix 0xecc8c3c0 4332992 socket
portmap 10252 daemon 4u IPv4 4332993 UDP 192.168.1.202:sunrpc
portmap 10252 daemon 5u IPv4 4332994 TCP 192.168.1.202:sunrpc (LISTEN)
portmap 10252 daemon 6u REG 0,12 289 3821511 /var/run/portmap_mapping
I defined in /etc/hosts the following:
192.168.1.202 server.local
In /etc/default/nfs-common I changed STATDOPTS to:
STATDOPTS="--name server.local"
Yet when I run /etc/init.d/nfs-common start, it fails to start. The log shows:
Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: Version 1.1.2 Starting
Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: Flags:
Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: unable to register (statd, 1, udp).
Running strace -f rpc.statd -n server.local produces a lot of lines, including this one:
sendto(9, "\200]3\362\0\0\0\0\0\0\0\2\0\1\206\240\0\0\0\2\0\0\0\1"..., 56, 0, {sa_family=AF_INET, sin_port=htons(111), sin_addr=inet_addr("127.0.0.1")}, 16) = 56
I'm writing here because I'm really lost; please bear with me, because it's not easy to explain.
A company asked me to set up a private server. I'm a programmer, so I got a solution with technical support and cPanel, which helped me set everything up, and it's working smoothly. I'm by no means a professional sysadmin, but I have a fair knowledge of server configuration; this problem, however, is way over my knowledge, and apparently over that of most sysadmins. I really hope to find someone here with enough experience to help me, or at least give me more insight.
Now, this company I'm consulting for operates in the UAE (United Arab Emirates), and from there the server is almost unreachable. It started with the NS records not registering in the UAE; after a week that sorted itself out, and now the site is indeed reachable, but it takes almost 2 minutes to load a webpage with one line of text. Emails time out.
The domain currently parked there was bought specifically for testing; the main one that was supposed to go there, after a catastrophic week, was transferred to a shared hosting solution in the UK, and from there it works like a charm.
After doing some research, I discovered that I'm not alone in this: there are several reports of webmasters discovering that their websites are not reachable inside the UAE. Mind you, this has nothing to do with the state-wide block of questionable sites, because in that case an error message appears; this seems to be related to the UAE's infrastructure, which apparently reroutes everything through its own "fake" internet.
Apparently new servers with their own IPs are not recognized (yet?) by the UAE infrastructure, while shared hosting solutions, since they host tons of other websites, are more likely to be part of the UAE network.
Now my questions are:
1) Does anyone have a real explanation for this? The only thing I can think of is that the server is on a new IP not yet recognized by the UAE, but that doesn't explain why it loads (even if after 2 minutes). I don't have any help from within the UAE, as the only "experts" there are questionable companies that simply try to sell their own services.
2) If there really is some kind of block on new servers, is it possible to know in advance whether a server is reachable from within the UAE? Currently this is not an NS problem, as even accessing the server by its IP results in a 2-minute wait.
3) Could the problem lie somewhere else? Are there some tests I can perform? I'm not physically in the UAE, but I can ask the people there, or use TeamViewer. Could it be some misconfiguration on the server? (Mind that the site works everywhere else in the world.)
Thank you for ANY kind of help
I'm having an issue with a CentOS trixbox server which is dual-homed (one private facing NIC [eth1], one internet-facing NIC [eth0]).
I can't seem to get the default gateway to point properly at our ISP's GW via eth0. I've modified /etc/sysconfig/network to contain both a GATEWAY and a GATEWAYDEV line, and removed the GATEWAY line from /etc/sysconfig/network-scripts/ifcfg-eth1 (as well as from /etc/sysconfig/network-scripts/ifcfg-eth0).
No default GW shows up in the routing table unless it's specified in the ifcfg-eth1 file (which is both the wrong interface and the wrong gateway IP); otherwise, the routing table simply does not contain a default gateway. Any ideas would be greatly appreciated!
Thanks!
EDIT
I just realized that when attempting to add the default gateway manually using the route add command, I receive an error stating:
SIOCADDRT: Network is unreachable
I know this error can occur when your default gateway and interface IP address are not on the same subnet. In this case, the public IP address on eth0 is in a /29.
I've been playing with multitouch on my ThinkPad and read a few tutorials on how to set it up. One of them mentioned /usr/hal/fdi/policy/20thirdparty/11-x11-synaptics.fdi; I edited it and enabled SHMConfig through it. Later I found out about the /etc/hal/policy/ directory and put some customizations for my touchpad there as well, in a separate fdi file.
But now it looks like the touchpad doesn't care about my customizations. I have gsynaptics installed and can configure the touchpad through its GUI, and I can configure it with synclient, but I can't set any values through fdi files. I even turned off SHMConfig, reverting the 11-x11-synaptics.fdi file to its original state, but it seems SHMConfig is still enabled; otherwise I wouldn't be able to configure properties at runtime.
So I was thinking maybe there are additional HAL files I don't know about. How can I find them, particularly the ones responsible for turning SHMConfig on?
Hi friends, I use the following command in my script to find some param under /var:
grep -R "param" /var/* 2>/dev/null | grep -wq "param"
My problem is that after grep finds param in a file, it continues searching until everything under /var/* has been scanned.
How can I make it stop immediately after grep matches the word param?
For example, when I run grep -R "param" /var/* 2>/dev/null | grep -wq "param", grep finds param after one second, but it continues searching for the same param in other files, and that takes almost 30 seconds.
How can I stop grep immediately after param is matched?
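A sketch of what should help, assuming GNU grep: -q already makes grep exit at the very first match, so the second grep in the pipeline isn't needed; putting -q on the recursive grep itself stops the whole search immediately:

```shell
# grep -Rqw exits with status 0 as soon as the first whole-word match of
# "param" is found anywhere in the tree; no pipeline required.
# Demonstrated on a throwaway directory:
dir=$(mktemp -d)
echo "param here"  > "$dir/a.txt"
echo "param again" > "$dir/b.txt"
if grep -Rqw "param" "$dir" 2>/dev/null; then
  echo "found"
fi
rm -rf "$dir"
```

So the one-liner for the script becomes simply: grep -Rqw "param" /var 2>/dev/null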
THX
I have a server which I am testing for functionality (not load, not stress) with tsung: 50 users/second, 100 total users. Judging from the tsung graphs, the TCP connections (red line) drop to 0 while the commenced user sessions (green line) do not. The server logs show nothing to grip onto, so I am speculating about some kind of TCP issue.
Should this be the case? Where would I look further on the server, and are there any logs or tools to look at? Only SSH is available, no GUI.
> root@XMPP:~# cat /etc/lsb-release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=11.10
> DISTRIB_CODENAME=oneiric
> DISTRIB_DESCRIPTION="Ubuntu 11.10"
Thank you
I have a small problem. I have shared keys set up on my domain, so I never type my password to log in anymore.
I've forgotten my password now. This is a problem because only my user can sudo. Password authentication for root has been disabled, so without my password, I cannot do maintenance on my web server.
Is there a way to reset my password as my [now only] key-authenticated user?
Specifically, can this be done on CentOS 4?
I need to install two copies of CentOS 5.5 (bank A and bank B) on different partitions of the same hard disk, and install the GRUB boot loader on another partition (visible from both banks). The boot loader should redirect the boot menu to bank A or bank B (according to the configuration).
The new partition is mounted at /common_partition, and GRUB is installed on it using the following command:
grub-install /dev/hda
On the new partition I created the following menu.lst file:
title BOOTCONTROL REDIRECT : PLEASE WAIT
root (hd0,1)
configfile /boot/menu.lst
boot
In my setup, both partitions (bank A and bank B) are primary, and GRUB is installed in the MBR.
The problem is that the new boot loader (on common_partition) does not load.
What's wrong with my configuration?
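One hedged observation: plain grub-install /dev/hda writes stage1 to the MBR but embeds the location of the currently mounted system's /boot, not the common partition. To make the MBR copy of GRUB legacy read its stage2 and menu.lst from the shared partition, the --root-directory switch should point at its mount point (a sketch, using the mount point and disk from the question):

```
# Install GRUB to the MBR of /dev/hda, but take stage files and menu.lst
# from the partition mounted at /common_partition instead of /boot.
grub-install --root-directory=/common_partition /dev/hda
```

Note that with --root-directory, GRUB expects its files under /common_partition/boot/grub/, so the configfile path in the redirect menu.lst must match that layout.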
To allow Apache write access, I needed to chown www-data:www-data /var/www/mysite/uploads, my site's upload folder. This allows me to delete files from the folder via unlink() in a PHP script.
Unfortunately, this prevents another PHP script, which uses FTP functions, from working. I think it is because the FTP user is mike and now that the uploads directory is owned by www-data, mike cannot access it.
I added mike to the group www-data, but this does not fix the issue.
Can somebody advise me on how to allow PHP FTP functions to work in addition to file deletion using PHP's unlink() function?
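A sketch of the usual fix, assuming the diagnosis is right: keep www-data as the group, make mike the owner, and set the directory group-writable with the setgid bit so new files inherit the group. unlink() only needs write permission on the directory, so Apache keeps working, and note that mike's new group membership only takes effect after he logs in again:

```shell
# The real commands would be (as root, on the server):
#   usermod -aG www-data mike                          # applies on next login
#   chown -R mike:www-data /var/www/mysite/uploads     # mike owns, group www-data
#   chmod -R 2775 /var/www/mysite/uploads              # setgid + group-writable
# Demonstration of the permission bits on a scratch directory:
dir=$(mktemp -d)
chmod 2775 "$dir"
stat -c '%a' "$dir"   # rwxrwsr-x: owner and group can write, setgid set
rm -rf "$dir"
```

With 2775 on the directory, both mike (owner) and www-data (group) can create and delete entries in it.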
cp bin/tdbtool bin/tdbdump bin/tdbbackup /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/bin
cp ./include/tdb.h /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/include
cp tdb.pc /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/pkgconfig
cp libtdb.a libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib
rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so
ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so
rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1
ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1
mkdir -p /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"`
cp tdb.so /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"`
/bin/install -c -d /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8
for I in manpages/*.8; do \
/bin/install -c -m 644 $I /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8; \
done
/bin/install: cannot stat `manpages/*.8': No such file or directory
make: *** [installdocs] Error 1
Aborting...
==> ERROR: Makepkg was unable to build samba4.
==> Restart building samba4 ? [y/N]
==> -------------------------------
==>c
Any ideas as to what is causing my build to fail? I assume it's an issue with the man pages, but I can't figure out exactly what it is looking for that I don't have.
I would like to mount my Google Drive on my RPi using davfs2, but I did not find any direct way to do it for Google Drive.
There are instructions on how to use dav-pocket to do that indirectly, but they are from 2010. Google Groups discussions about the lack of direct WebDAV access to Google are from roughly the same time, and I could not find any other way to do the mount.
Has anything changed and would anyone know if Google enabled WebDAV - and if so what is the URL?
An alternate synchronization system would be fine as well (rsync, for instance); I did not find any particular info on that either.
Thank you!
In a social networking project, we want to store users' avatars in a folder. I think in a year or two it'll reach 140K files (I've seen this issue before, and it will be around that number). I want to spread the files across folders: if a folder contains 1000 files, then create another folder and store files 1001 to 2000 there. Is this a good approach, or am I just being overly cautious about the issue? (File system: ext3)
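The bucketing scheme described above (1000 files per folder) can be derived directly from the numeric user ID, so no "is this folder full yet?" counter has to be maintained. A sketch in shell, where the avatars/ prefix and .jpg extension are assumptions:

```shell
# Map a numeric user id to its bucket directory: ids 0-999 go to 0/,
# 1000-1999 to 1/, and so on -- at most 1000 avatars per directory.
uid=123456
bucket=$((uid / 1000))
path="avatars/${bucket}/${uid}.jpg"
echo "$path"
```

This keeps lookups O(1) given the uid, and with 140K files you end up with about 140 subdirectories, well within comfortable limits for ext3 (especially with dir_index enabled).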
My computer is already connected to a 100Mbps LAN.
I can use wvdial to connect to internet using a modem when I have my LAN disconnected.
Now, I want to share this modem's internet connection with one of the IP addresses available on the LAN, say 10.100.99.56.
First of all, can it be done?
How do I go about doing that?
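Yes, it can be done; the usual shape on Linux is IP forwarding plus NAT on the dial-up interface. A sketch, where ppp0 (the wvdial link) and eth0 (the LAN NIC) are assumptions to verify with ifconfig; it is guarded as a dry run, so set APPLY=1 and run as root on this machine to actually apply the rules:

```shell
# Forward traffic from LAN host 10.100.99.56 out over the modem link and
# masquerade it behind the ppp0 address.
if [ "${APPLY:-0}" != "1" ]; then
  echo "dry run: set APPLY=1 and run as root on the gateway to apply"
else
  sysctl -w net.ipv4.ip_forward=1
  iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
  iptables -A FORWARD -i eth0 -o ppp0 -s 10.100.99.56 -j ACCEPT
  iptables -A FORWARD -i ppp0 -o eth0 -d 10.100.99.56 \
    -m state --state ESTABLISHED,RELATED -j ACCEPT
fi
```

The host 10.100.99.56 then sets this machine's LAN IP as its default gateway (and a working DNS server).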
Hi everyone,
I've been seeing a few issues lately on a few of my servers where an account gets hacked via outdated scripts, and the hacker uploads a cPanel/FTP brute-forcing PHP script into the account.
The PHP file reads /etc/passwd to get the usernames, and then uses a passwd.txt file to try to brute-force its way into 127.0.0.1:2082.
I'm trying to think of a way to block this. It doesn't POST anything, just "GET /path/phpfile.php", so I can't use mod_security to block it.
I've been thinking of changing the permissions on /etc/passwd to 600, but I'm unsure how this would affect my users.
I was also thinking of rate-limiting localhost connections to :2082, however I'm worried about mod_proxy being affected.
Any suggestions?
Background
I have a server that I'm looking to set up and provide access to for another web developer. I don't want to put many constraints on him, though I wouldn't mind isolating the site he'll be developing from the others on the server, which I will develop.
The problem
Mainly, I want to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote users and other administrative functions that don't deal with web software. If I make him an admin, he can sudo su -, become root, and remove root control from me, for example.
What is a good setup for the sudoers file so that he can do things like:
- install software (through apt-get)
- restart Apache
- access MySQL
- configure MySQL/Apache
- reboot
- edit web-development configuration files in /etc
And can't do things like:
- take away other admins' permissions
- change the root password
- have control over other security/administrative functions
Example sudoers files that accomplish something like this would be useful; I'm sure people have needed to do this before.
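A sketch of a sudoers fragment along those lines (visudo syntax; the username devuser and the exact paths are assumptions, and the usual caveat applies: apt-get and broad editing rights can often be leveraged to escalate, so this limits honest mistakes more than a determined attacker):

```
# /etc/sudoers.d/devuser -- edit with: visudo -f /etc/sudoers.d/devuser
Cmnd_Alias WEB_ADMIN = /usr/bin/apt-get, \
                       /usr/sbin/service apache2 *, \
                       /usr/sbin/service mysql *, \
                       /sbin/reboot

# sudoedit confines editing to the listed files, unlike "sudo vi",
# which would allow shell escapes as root.
devuser ALL = (root) WEB_ADMIN, \
              sudoedit /etc/apache2/*, \
              sudoedit /etc/mysql/*, \
              sudoedit /etc/php5/*
```

Notably absent: passwd, usermod, visudo itself, and any shell, so he cannot change the root password or rewrite the policy. MySQL access itself needs no sudo at all, just a MySQL account with suitable grants.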
Hi all,
I created a user account in postfix in the form: [email protected].
When I receive emails from that account in Gmail, for example, the sender's name is "editor", but I want it to show something like "Editor Surname".
Any idea how to do it? Thanks in advance!
I'm trying to set up a triple-monitor desktop at my office using Fedora 17, but it seems impossible. Let me explain the setup:
ASUS K53SD laptop with 2 graphics cards, Intel and nVidia (the laptop screen is controlled by the Intel card)
24" Full HD monitor connected to the HDMI output (controlled by Intel card)
23" Full HD monitor connected to a USB-HDMI adapter (via framebuffer in /dev/fb2, apparently)
VGA output (not used) controlled by nVidia card
First of all, the USB-HDMI adapter works perfectly: it gives me a green screen (which means the communication is OK), and I can make it work if I set up a single-monitor configuration via the framebuffer in Xorg. Here is the page where I got the instructions: http://plugable.com/2011/12/23/usb-graphics-and-linux
Now I'm trying to set up the two main monitors (laptop and 24") with the Intel driver and the 23" with the framebuffer, but the most successful configuration I get is the two main monitors working and the third disconnected.
Do you have any idea what I can do to make this work?
Here are my xrandr output and my Xorg config:
-> xrandr
Screen 0: minimum 320 x 200, current 3286 x 1080, maximum 8192 x 8192
LVDS1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
1366x768 60.0*+
1024x768 60.0
800x600 60.3 56.2
640x480 59.9
VGA2 disconnected (normal left inverted right x axis y axis)
HDMI1 connected 1920x1080+1366+0 (normal left inverted right x axis y axis) 531mm x 299mm
1920x1080 60.0*+ 50.0 25.0 30.0
1680x1050 59.9
1680x945 60.0
1400x1050 74.9 59.9
1600x900 60.0
1280x1024 75.0 60.0
1440x900 75.0 59.9
1280x960 60.0
1366x768 60.0
1360x768 60.0
1280x800 74.9 59.9
1152x864 75.0
1280x768 74.9 60.0
1280x720 50.0 60.0
1440x576 25.0
1024x768 75.1 70.1 60.0
1440x480 30.0
1024x576 60.0
832x624 74.6
800x600 72.2 75.0 60.3 56.2
720x576 50.0
848x480 60.0
720x480 59.9
640x480 72.8 75.0 66.7 60.0 59.9
720x400 70.1
DP1 disconnected (normal left inverted right x axis y axis)
1920x1080_60.00 60.0
The Xorg file:
# Xorg configuration file for using a tri-head display
Section "ServerLayout"
Identifier "Layout0"
Screen 0 "HDMI" 0 0
Screen 1 "USB" RightOf "HDMI"
Option "Xinerama" "on"
EndSection
########### MONITORS ################
Section "Monitor"
Identifier "USB1"
VendorName "Unknown"
ModelName "Acer 24as"
Option "DPMS"
EndSection
Section "Monitor"
Identifier "HDMI1"
VendorName "Unknown"
ModelName "Acer 23SH"
Option "DPMS"
EndSection
########### DEVICES ##################
Section "Device"
Identifier "Device 0"
Driver "intel"
BoardName "GeForce"
BusID "PCI:0:02:0"
Screen 0
EndSection
Section "Device"
Identifier "USB Device 0"
driver "fbdev"
Option "fbdev" "/dev/fb2"
Option "ShadowFB" "off"
EndSection
############## SCREENS ######################
Section "Screen"
Identifier "HDMI"
Device "Device 0"
Monitor "HDMI1"
DefaultDepth 24
SubSection "Display"
Depth 24
EndSubSection
EndSection
Section "Screen"
Identifier "USB"
Device "USB Device 0"
Monitor "USB1"
DefaultDepth 24
SubSection "Display"
Depth 24
EndSubSection
EndSection
I have the following in my /etc/hosts.deny file
#
# hosts.deny This file describes the names of the hosts which are
# *not* allowed to use the local INET services, as decided
# by the '/usr/sbin/tcpd' server.
#
# The portmap line is redundant, but it is left to remind you that
# the new secure portmap uses hosts.deny and hosts.allow. In particular
# you should know that NFS uses portmap!
ALL:ALL
and this in /etc/hosts.allow
#
# hosts.allow This file describes the names of the hosts which are
# allowed to use the local INET services, as decided
# by the '/usr/sbin/tcpd' server.
#
ALL:xx.xx.xx.xx , xx.xx.xxx.xx , xx.xx.xxx.xxx , xx.x.xxx.xxx , xx.xxx.xxx.xxx
but I am still getting lots of these emails:
Time: Thu Feb 10 13:39:55 2011 +0000
IP: 202.119.208.220 (CN/China/-)
Failures: 5 (sshd)
Interval: 300 seconds
Blocked: Permanent Block
Log entries:
Feb 10 13:39:52 ds-103 sshd[12566]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
Feb 10 13:39:52 ds-103 sshd[12567]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
Feb 10 13:39:52 ds-103 sshd[12568]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
Feb 10 13:39:52 ds-103 sshd[12571]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
Feb 10 13:39:53 ds-103 sshd[12575]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
What's worse, csf is trying to auto-block these IPs when they attempt to get in, but although it does put the IPs in the csf.deny file, they do not get blocked either.
So I am trying to block all IPs with /etc/hosts.deny and allow only the IPs I use with /etc/hosts.allow, but so far it doesn't seem to work.
Right now I'm having to block each one manually with iptables. I would rather have the hackers blocked automatically, in case I'm away from a PC or asleep.
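One thing worth checking before anything else: hosts.allow/hosts.deny only affect daemons that consult the tcp_wrappers library (libwrap), and some sshd builds are not linked against it at all, in which case both files are silently ignored for ssh. A quick, read-only check:

```shell
# If sshd is not linked against libwrap, tcp_wrappers rules never apply
# to it, and ALL:ALL in hosts.deny will not block ssh logins.
sshd_bin=$(command -v sshd || echo /usr/sbin/sshd)
if [ -x "$sshd_bin" ] && ldd "$sshd_bin" 2>/dev/null | grep -q libwrap; then
  echo "sshd honours hosts.allow/hosts.deny"
else
  echo "sshd ignores tcp_wrappers (or sshd not found)"
fi
```

If it turns out sshd ignores tcp_wrappers on this box, the firewall (iptables/csf) is the right enforcement point, and the question becomes why csf's csf.deny entries aren't reaching the live iptables rules.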