Search Results

Search found 1511 results on 61 pages for 'deny'.


  • /etc/hosts.deny ignored in Ubuntu 14.04

    - by Matt
    I have Apache 2 running on Ubuntu 14.04 LTS. To begin securing network access to the machine, I want to start by blocking everything, then add specific allow statements for the specific subnets that should be able to browse to sites hosted in Apache. The Ubuntu server was installed with no packages selected during install; the only packages added after install are: apt-get update; apt-get install apache2, php5 (with additional php5 modules), openssh-server, mysql-client. Following are my /etc/hosts.deny and /etc/hosts.allow settings. /etc/hosts.deny contains:

        ALL:ALL

    and /etc/hosts.allow has no allow entries at all. I would expect all network protocols to be denied. The symptom is that I can still browse to sites hosted on the Apache web server even though there is a deny-all statement in /etc/hosts.deny. The system was rebooted after the deny entry was added. Why would /etc/hosts.deny with ALL:ALL be ignored and allow HTTP browsing to sites hosted on the Apache web server?
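    For context: /etc/hosts.deny and /etc/hosts.allow are only consulted by programs linked against libwrap (TCP wrappers), such as sshd on most Ubuntu releases; Apache is not one of them, so HTTP traffic never sees these files. A minimal sketch of how one might verify this and block at the packet level instead (the subnet below is a placeholder):

        # Does a daemon consult hosts.allow/hosts.deny? Only if it links libwrap:
        ldd /usr/sbin/sshd | grep libwrap      # typically prints a match
        ldd /usr/sbin/apache2 | grep libwrap   # typically prints nothing

        # Apache must therefore be restricted with a packet filter, e.g. ufw:
        sudo ufw default deny incoming
        sudo ufw allow from 192.0.2.0/24 to any port 80 proto tcp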


  • apache deny directive

    - by user12145
    I am using Apache's deny directive to deny a country's IP ranges (Turkey in this case). However, from the Apache log I still see IPs from .tr (using a DSL connection, presumably) accessing the site and getting a valid HTTP 200 response: dslxxx.xxx-xxxxx.ttnet.net.tr. What am I missing?
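    Two things worth checking, offered as general observations rather than a diagnosis: country allocations change, so a static list of ranges goes stale quickly, and Deny from with a hostname suffix only takes effect when reverse DNS resolves, whereas CIDR ranges match unconditionally. A sketch of the CIDR form in Apache 2.2 syntax (the blocks below are illustrative only, not a complete or current list for Turkey):

        <Directory /var/www>
            Order allow,deny
            Allow from all
            # Illustrative CIDR blocks -- a real country list needs a maintained GeoIP feed
            Deny from 88.224.0.0/11
            Deny from 78.160.0.0/11
        </Directory>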


  • Nginx deny doesn't work for folder files

    - by user195191
    I'm trying to restrict access to my site to allow only specific IPs, and I've got the following problem: when I access www.example.com, deny works perfectly, but when I try to access www.example.com/index.php it returns the "Access denied" page AND the PHP file is downloaded directly in the browser without processing. I do want to deny access to all the files on the website for all IPs but mine. How should I do that? Here's the config I have:

        server {
            listen 80;
            server_name example.com;
            root /var/www/example;

            location / {
                index index.html index.php;    ## Allow a static html file to be shown first
                try_files $uri $uri/ @handler; ## If missing pass the URI to front handler
                expires 30d;                   ## Assume all files are cachable
                allow my.public.ip;
                deny all;
            }

            location @handler { ## Common front handler
                rewrite / /index.php;
            }

            location ~ .php/ { ## Forward paths like /js/index.php/x.js to relevant handler
                rewrite ^(.*.php)/ $1 last;
            }

            location ~ .php$ { ## Execute PHP scripts
                if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss
                expires off; ## Do not cache dynamic content
                fastcgi_pass 127.0.0.1:9001;
                fastcgi_param HTTPS $fastcgi_https;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params; ## See /etc/nginx/fastcgi_params
            }
        }
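    One relevant nginx behaviour, offered as a hedged pointer: allow/deny directives are inherited from the enclosing context only when the inner location does not declare its own, and rules placed inside location / never apply to requests that match the regex PHP location instead. Moving the access rules up to the server block makes every location inherit them; a minimal sketch (my.public.ip is the placeholder from the question):

        server {
            listen 80;
            server_name example.com;
            root /var/www/example;

            # Inherited by every location block below that sets no rules of its own
            allow my.public.ip;
            deny all;

            # ... location blocks unchanged ...
        }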


  • Can /etc/hosts.deny/allow be overridden?

    - by Tar
    I have security measures in place to keep unwanted users out of my server. I've changed the SSH port, disabled root login, have a software firewall to block port scans, and have entries in hosts.deny and hosts.allow. I have various services denied to everyone except another server of mine (in case my IP changes), two other administrators' addresses, and my own IP address. My question is: can the hosts.deny/allow configuration be overridden so that others can gain access to my server? Does using a chroot jail for running things like an IRC server and a TeamSpeak server prevent people from gaining access to my server and screwing with it?


  • nginx deny directory and files to be downloaded

    - by YeppThat'sMe
    Gurus, I have a problem and I don't know how to solve it. I am working with Git and Compass/SASS on some projects, and I want to protect those directories. When I request only the folder itself, all is fine – I get what I expected, a 403 Forbidden:

        location ~ /\.git {
            deny all;
        }

    But when I use the full path to a config file inside the Git directory, the browser starts to download it. Same scenario with Compass: there is a config.rb file within the folder which also gets downloaded. How can I prevent this behaviour? How can I deny downloading specific files?
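    A hedged sketch of one common arrangement: regex locations are evaluated in order and the first match wins, so a catch-all dot-file rule covers .git and friends, while files like config.rb that are not dot-files need their own rule (exact-match = locations take precedence over regex ones):

        # Any path segment beginning with a dot: .git, .svn, .htaccess, ...
        location ~ /\. {
            deny all;
        }

        # config.rb is not a dot-file, so it needs its own rule
        # (adjust the path if it lives in a subdirectory)
        location = /config.rb {
            deny all;
        }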


  • NTFS: Deny all permissions for all files, except where explicitly added

    - by Simon
    I'm running a sandboxed application as a local user. I now want to deny almost all file-system permissions for this user to secure the system, except for a few working folders and some system DLLs (I'll call this set of files and directories X below). The sandbox user is not in any group, so it shouldn't have any permissions, right? Wrong, because all "Authenticated Users" are members of the local "Users" group, and that group has access to almost everything. I thought about recursively adding deny ACL entries to all files and directories and removing them manually for X, but this seems excessive. I also thought about removing "Authenticated Users" from the "Users" group, but I'm afraid of unintended side effects; it seems likely that other things rely on this membership. Is that correct? Are there better ways to do this? How would you limit the file-system permissions of a (very) untrustworthy account?
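    As one hedged direction (not a full answer): deny ACEs always win over allows, so a handful of targeted denies on the sensitive trees can be cheaper than recursing over the whole volume. A rough icacls sketch, assuming a hypothetical local account named "sandbox"; run from an elevated prompt:

        :: (OI)(CI) makes the entry inherit to files and subfolders, so no /T needed
        icacls "C:\Users" /deny sandbox:(OI)(CI)F
        icacls "C:\ProgramData" /deny sandbox:(OI)(CI)F

        :: Grant back only the working folder the application actually needs
        icacls "C:\Sandbox\work" /grant sandbox:(OI)(CI)M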


  • How to properly deny Railo directory access through Apache

    - by Sn3akyP3t3
    I've battled with this and failed to achieve my goal, which is to deny all access to all directories except the Public directory, and to allow access to all other directories only from specific IP addresses. To get Railo+Apache+Tomcat installed I pretty much followed this script: https://github.com/talltroym/Railo-Ubuntu-Installer-Script then verified settings with this tutorial: http://blog.nictunney.com/2012/03/railo-tomcat-and-apache-on-amazon-ec2.html From the installation script these mods are enabled:

        sudo a2enmod ssl
        sudo a2enmod proxy
        sudo a2enmod proxy_http
        sudo a2enmod rewrite
        sudo a2ensite default-ssl

    Outside of the script I copied sites-available to sites-enabled, then reloaded Apache. I have a directory created for Railo CFML located at /var/www/Railo/. Navigating the browser to http://Server_IP_Address/Railo forces SSL and relocates to https://Server_IP_Address/Railo, which shows index.cfm. Not providing index.cfm and omitting https indicates that the DirectoryIndex directive and RewriteCond of Apache appear to be working for the sites-enabled VirtualHost.

    The problem I'm encountering is that I cannot seem to deny access to all directories except Public. My directory structure is rather simple and looks like this:

        Railo/
            error
            Public
            NotPublic
            Sandbox

    These are my sites-enabled configurations:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www

            # Default Deny All to prevent walking backwards in file system
            Alias /Railo/ "/var/www/Railo/"
            <Directory ~ ".*/Railo/(?!Public).*">
                Order Deny,Allow
                Deny from All
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

            DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html index.cfc

            RewriteEngine on
            RewriteCond %{SERVER_PORT} !^443$
            RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R]
        </VirtualHost>

    and

        <IfModule mod_ssl.c>
        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www

            Alias /Railo/ "/var/www/Railo/"
            <Directory ~ "/var/www/Railo/(?!Public).*">
                Order Deny,Allow
                Deny from All
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

            SSLEngine on

            # A self-signed (snakeoil) certificate; see the ssl-cert package.
            SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
            #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt
            #SSLCACertificatePath /etc/ssl/certs/
            #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt
            #SSLCARevocationPath /etc/apache2/ssl.crl/
            #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl
            #SSLVerifyClient require
            #SSLVerifyDepth 10
            # ... (stock Debian mod_ssl explanatory comments omitted here) ...
            #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>

            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            # MSIE 7 and newer should be able to use keepalive
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

            DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html

            # Proxy .cfm and .cfc requests to Railo
            ProxyPassMatch ^/(.+.cf[cm])(/.*)?$ http://127.0.0.1:8888/$1
            ProxyPassReverse / http://127.0.0.1:8888/

            # Deny access to admin except for local clients
            <Location /railo-context/admin/>
                Order deny,allow
                Deny from all
                # Allow from <Omitted>
                # Allow from <Omitted>
                Allow from 127.0.0.1
            </Location>
        </VirtualHost>
        </IfModule>

    The apache2.conf includes the following:

        # Include the virtual host configurations:
        Include sites-enabled/

        <IfModule !mod_jk.c>
            LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so
        </IfModule>

        <IfModule mod_jk.c>
            JkMount /*.cfm ajp13
            JkMount /*.cfc ajp13
            JkMount /*.do ajp13
            JkMount /*.jsp ajp13
            JkMount /*.cfchart ajp13
            JkMount /*.cfm/* ajp13
            JkMount /*.cfml/* ajp13
            # Flex Gateway Mappings
            # JkMount /flex2gateway/* ajp13
            # JkMount /flashservices/gateway/* ajp13
            # JkMount /messagebroker/* ajp13
            JkMountCopy all
            JkLogFile /var/log/apache2/mod_jk.log
        </IfModule>

    I believe I understand most of this except the jk_module inclusion, which produces an error in the logs that I can't sort out:

        [warn] No JkShmFile defined in httpd.conf. Using default /etc/apache2/logs/jk-runtime-status

    I've checked my regular expression against the paths of the directories with RegexBuddy just to be sure the pattern was correct. The problem doesn't appear to be regex-related, although I may have something incorrect in the Directory directive. The Location directive seems to be working correctly for blocking Railo admin site access.
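    A hedged observation that may explain the behaviour: requests that Apache hands off via ProxyPassMatch or mod_jk are never mapped to the filesystem, and <Directory> sections only apply to filesystem-mapped requests, so the Railo deny rules above are skipped for .cfm/.cfc URLs. Matching on the URL instead is one possible direction; an untested sketch:

        # <Directory> is bypassed for proxied requests; <LocationMatch> works on the URL
        <LocationMatch "^/Railo/(?!Public/)">
            Order Deny,Allow
            Deny from all
            # Allow from <your admin IPs here>
        </LocationMatch>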


  • htaccess order Deny,Allow rule

    - by aspiringCodeArtisan
    I'd like to dynamically add IPs to a block list via .htaccess. I was hoping someone could tell me if the following will work in my case (I'm unsure how to test via localhost). My .htaccess file will have the following by default:

        order allow,deny
        allow from all

    IPs will be dynamically appended:

        Order Deny,Allow
        Allow from all
        Deny from 192.168.30.1

    The way I understand this, it is "allow all" by default with an optional list of deny rules. If I'm not mistaken, Order Deny,Allow will look at the Deny list first – is that correct? And does the Allow from all rule need to come at the end?
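    For what it's worth, a point about Apache 2.2 semantics that this plan trips over: Order controls evaluation order and the default verdict, not directive placement in the file, and whichever list is evaluated last wins on a tie. Under Order Deny,Allow the Allow list is consulted last, so "Allow from all" overrides every Deny and nothing gets blocked. The usual form for "allow everyone except these IPs" is the reverse order; a minimal sketch:

        # The Deny list is evaluated last here, so a matching Deny beats "Allow from all"
        Order Allow,Deny
        Allow from all
        Deny from 192.168.30.1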


  • Cablemodem (SBG6580) firewall denying some outbound traffic? Why? Not configured [migrated]

    - by lairdb
    I finally got around to turning on the syslog on my cable modem (Motorola Surfboard SBG6580), and I'm seeing about the expected amount of inbound attacks being blocked:

        2014-05-30 21:59:02 Local0.Alert 192.168.111.1 May 31 04:58:56 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,4500 --> 66.27.xx.xx,61459 DENY:Firewall interface [IP Fragmented Packet] attack
        2014-05-30 21:59:02 Local0.Alert 192.168.111.1 May 31 04:58:56 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 17.172.232.109,5223 --> 66.27.xx.xx,53814 DENY:Firewall interface access request
        2014-05-30 21:59:02 Local0.Alert 192.168.111.1 May 31 04:58:57 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,443 --> 66.27.xx.xx,53385 DENY: Firewall interface [IP Fragmented Packet] attack
        2014-05-30 21:59:02 Local0.Alert 192.168.111.1 May 31 04:58:57 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,4500 --> 66.27.xx.xx,61459 DENY:Firewall interface [IP Fragmented Packet] attack
        2014-05-30 21:59:10 Local0.Alert 192.168.111.1 May 31 04:59:04 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,443 --> 66.27.xx.xx,59960 DENY: Firewall interface [IP Fragmented Packet] attack
        2014-05-30 21:59:10 Local0.Alert 192.168.111.1 May 31 04:59:04 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,4500 --> 66.27.xx.xx,61459 DENY:Firewall interface [IP Fragmented Packet] attack

    ...and that's great. (Sad, but great.) But I'm also seeing a HUGE amount of what appears to be denied outbound connectivity:

        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58969 --> 38.81.66.127,443 DENY: Inbound or outbound access request
        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58969 --> 38.81.66.127,443 DENY: Inbound or outbound access request
        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58965 --> 162.222.41.13,443 DENY: Inbound or outbound access request
        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58965 --> 162.222.41.13,443 DENY: Inbound or outbound access request
        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58964 --> 38.81.66.179,443 DENY: Inbound or outbound access request
        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58964 --> 38.81.66.179,443 DENY: Inbound or outbound access request

    Spot-checking suggests that it's all legitimate traffic (opening connections to CrashPlan, etc.), and I have no restrictions configured in the modem; I don't see why it should be blocking anything. Am I misreading the log entries, and the traffic is not actually being denied? (Seems unlikely.) Is the ISP (TWC) pushing deny tables that are not exposed in the UI? (Tinfoil hat too tight.) I'm confused. (The good news, such as it is, is that AFAIK I'm not experiencing any actual issues... but maybe I am; tough to tell.) Thanks.


  • chmod 700 and htaccess deny from all enough?

    - by John Jenkins
    I would like to protect a public directory from public view; none of the files will ever be viewed online. I chmodded the directory to 700 and created an .htaccess file containing "deny from all". Is this enough security, or can a hacker still gain access to the files? I know some people will say that hackers can get into anything, but I just want to make sure there isn't anything else I can do to make it harder to hack. Reply: I am asking whether chmod 700 and "deny from all" are enough security alone to prevent hackers from getting my files. Thanks.


  • keep getting added to hosts.deny + iptables

    - by Sc0rian
    I am confused about why this has started to happen. On my local network, if I click 10-20 Apache/HTTP links, my server will decide to add me to the hosts.deny file and block me in iptables. It's not just Apache; it seems to happen with any kind of traffic that arrives in a burst. For example, I use Subsonic, and if I change tracks 10-20 times it does the same thing. I would assume some sort of firewall tool sitting on the server is doing this, but I do not have fail2ban installed, nor any DenyHosts data in /var/lib. I cannot work out why I keep getting added to hosts.deny/iptables. Thanks
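    A few generic ways to hunt for whatever is writing those entries, offered as a sketch rather than a diagnosis (the audit rule assumes auditd is installed and you are root):

        # Common culprits that auto-populate hosts.deny and iptables:
        ps aux | egrep -i 'denyhosts|fail2ban|sshguard|lfd|csf'
        grep -rl hosts.deny /etc/cron* 2>/dev/null

        # Catch the writer in the act with the audit subsystem:
        auditctl -w /etc/hosts.deny -p wa -k hostsdeny
        ausearch -k hostsdeny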


  • Continuous outbound connection from QNAP NAS

    - by user192702
    I notice on my firewall that my QNAP NAS is continuously sending UDP sessions out to the Internet. Every second I have 5-7 connections out to addresses like the following:

        2013-11-10 23:17:54 Deny 192.168.60.5 93.215.212.162 6881/udp 6881 6881
        2013-11-10 23:18:05 Deny 192.168.60.5 87.76.0.83 29872/udp 6881 29872
        2013-11-10 23:18:05 Deny 192.168.60.5 5.164.188.224 6881/udp 6881 6881
        2013-11-10 23:18:05 Deny 192.168.60.5 80.61.45.206 6881/udp 6881 6881
        2013-11-10 23:18:34 Deny 192.168.60.5 37.117.204.129 6881/udp 6881 6881
        2013-11-10 23:18:34 Deny 192.168.60.5 71.67.101.30 51413/udp 6881 51413
        2013-11-10 23:18:34 Deny 192.168.60.5 89.28.92.191 8621/udp 6881 8621
        2013-11-10 23:18:34 Deny 192.168.60.5 94.244.157.85 28221/udp 6881 28221
        2013-11-10 23:18:34 Deny 192.168.60.5 213.241.61.240 9089/udp 6881 9089
        2013-11-10 23:18:45 Deny 192.168.60.5 88.163.28.100 52721/udp 6881 52721
        2013-11-10 23:18:45 Deny 192.168.60.5 37.55.190.20 10027/udp 6881 10027
        2013-11-10 23:18:45 Deny 192.168.60.5 62.72.188.146 14306/udp 6881 14306
        2013-11-10 23:19:14 Deny 192.168.60.5 85.53.244.205 51413/udp 6881 51413
        2013-11-10 23:19:14 Deny 192.168.60.5 67.163.18.215 52130/udp 6881 52130
        2013-11-10 23:19:14 Deny 192.168.60.5 86.172.105.140 9089/udp 6881 9089
        2013-11-10 23:19:14 Deny 192.168.60.5 99.28.56.121 52383/udp 6881 52383
        2013-11-10 23:19:14 Deny 192.168.60.5 109.60.184.249 46217/udp 6881 46217
        2013-11-10 23:19:25 Deny 192.168.60.5 121.107.144.174 21135/udp 6881 21135
        2013-11-10 23:19:25 Deny 192.168.60.5 84.39.116.180 48446/udp 6881 48446
        2013-11-10 23:19:25 Deny 192.168.60.5 183.238.254.62 openvpn/udp 6881 1194
        .........

    This is frightening, as it looks like it's been hacked to send information out. Has anyone observed this behaviour from their QNAP NAS?


  • deny-uncovered-http-methods in Servlet 3.1

    - by reza_rahman
    Servlet 3.1 is a relatively minor release included in Java EE 7. Nonetheless, this foundational Java EE API contains some very important changes. One such set of features is the security enhancements made in Servlet 3.1, such as the new deny-uncovered-http-methods option. Servlet 3.1 co-spec lead Shing Wai Chan outlines the use case for the feature and shows you how to use it in a recent code-example-driven post. You can also check out the official specification yourself, or try things out with the newly released Java EE 7 SDK.
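    For a concrete sense of the option, here is a minimal web.xml sketch (names and paths are illustrative): without the flag, any HTTP method not listed in a security constraint is "uncovered" and slips through unauthenticated; with it, the container rejects such requests outright.

        <web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="3.1">
            <!-- Reject any HTTP method not covered by a constraint below -->
            <deny-uncovered-http-methods/>
            <security-constraint>
                <web-resource-collection>
                    <web-resource-name>admin</web-resource-name>
                    <url-pattern>/admin/*</url-pattern>
                    <http-method>GET</http-method>
                    <http-method>POST</http-method>
                </web-resource-collection>
                <auth-constraint>
                    <role-name>admin</role-name>
                </auth-constraint>
            </security-constraint>
        </web-app>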


  • .htaccess to deny access to most xml files

    - by CEich
    I recently had a Joomla site hacked, so I'm trying to harden the site a bit. There's a section in the recommended .htaccess that restricts outside access to the XML files that come with extensions. However, it also keeps my sitemap.xml file from being accessed. How do I allow a certain file while keeping the rest blocked? Here's the default code:

        <Files ~ "\.xml$">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>

    and my modification, which caused a 500 error:

        <Files ~ "(?!sitemap)\.xml$">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>
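    Rather than a lookahead (which, placed at that position, would still match sitemap.xml anyway), the usual .htaccess idiom is a second section that re-opens the one file; later Files sections override earlier ones for the same request. A hedged sketch:

        <Files ~ "\.xml$">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>

        # Later sections win, so this re-allows just the sitemap
        <Files "sitemap.xml">
            Order deny,allow
            Allow from all
        </Files>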


  • hosts.deny not blocking ip addresses

    - by Jamie
    I have the following in my /etc/hosts.deny file:

        #
        # hosts.deny    This file describes the names of the hosts which are
        #               *not* allowed to use the local INET services, as decided
        #               by the '/usr/sbin/tcpd' server.
        #
        # The portmap line is redundant, but it is left to remind you that
        # the new secure portmap uses hosts.deny and hosts.allow. In particular
        # you should know that NFS uses portmap!

        ALL:ALL

    and this in /etc/hosts.allow:

        #
        # hosts.allow   This file describes the names of the hosts which are
        #               allowed to use the local INET services, as decided
        #               by the '/usr/sbin/tcpd' server.
        #

        ALL:xx.xx.xx.xx , xx.xx.xxx.xx , xx.xx.xxx.xxx , xx.x.xxx.xxx , xx.xxx.xxx.xxx

    but I am still getting lots of these emails:

        Time: Thu Feb 10 13:39:55 2011 +0000
        IP: 202.119.208.220 (CN/China/-)
        Failures: 5 (sshd)
        Interval: 300 seconds
        Blocked: Permanent Block

        Log entries:

        Feb 10 13:39:52 ds-103 sshd[12566]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:52 ds-103 sshd[12567]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:52 ds-103 sshd[12568]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:52 ds-103 sshd[12571]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:53 ds-103 sshd[12575]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root

    What's worse, CSF tries to auto-block these IPs when they attempt to get in, but although it does put the IPs in the csf.deny file, they do not get blocked either. So I am trying to block all IPs with /etc/hosts.deny and allow only the IPs I use with /etc/hosts.allow, but so far it doesn't seem to work. Right now I'm having to manually block each one with iptables; I would rather it automatically blocked the attackers in case I was away from a PC or asleep.
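    Two hedged checks that often apply in this situation: tcp_wrappers only governs daemons built against libwrap, and the allow file is consulted first, with the first match winning. tcpdmatch (from the tcp_wrappers tools) will predict the verdict for a given client:

        # How would the wrappers treat this client?
        tcpdmatch sshd 202.119.208.220

        # Does sshd consult the wrappers at all?
        ldd "$(which sshd)" | grep libwrap || echo "sshd was not built with tcp_wrappers"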


  • is there any other way to deny php files

    - by moustafa
    See this and this:

        <IfModule !mod_php5.c>
            <FilesMatch "\.php$">
                Order allow,deny
                Deny from all
                Allow from none
            </FilesMatch>
        </IfModule>

    I can't know whether mod_php5.c is the right module name, because I am not the server owner – I just have a small shared host. Is there any other way to deny access to PHP files when PHP is no longer available?
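    One hedged simplification: the <IfModule> wrapper only makes the rule conditional on the module being absent. If the goal is simply "never serve .php files", the guard can be dropped and the FilesMatch used on its own, which requires no knowledge of the module name:

        # Denies all requests for .php files unconditionally (Apache 2.2 syntax)
        <FilesMatch "\.php$">
            Order allow,deny
            Deny from all
        </FilesMatch>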


  • In Exim, is RBL spam rejected prior to being scanned by SpamAssassin?

    - by user955664
    I've recently been battling spam issues on our mail server. One account in particular was getting hammered with incoming spam, and SpamAssassin's memory use is one of our concerns. What I've done is enable RBLs in Exim. I now see many rejection notices in the Exim log based on the various RBLs, which is good. However, when I run eximstats, the numbers seem to be the same as they were prior to enabling the RBLs. I am assuming this is because the email is still logged in some way prior to the rejection. Is that what's happening, or am I missing something else? Does anyone know if these emails are rejected prior to being processed by SpamAssassin, or how I'd be able to find out? Is there a standard way to generate SpamAssassin stats, similar to eximstats, so that I could compare the numbers? Thank you for your time and any advice.

    Edit: here is the ACL section of my Exim configuration file:

        ######################################################################
        #                               ACLs                                 #
        ######################################################################
        begin acl

        # ACL that is used after the RCPT command
        check_recipient:

          # to block certain wellknown exploits, Deny for local domains if
          # local parts begin with a dot or contain @ % ! / |
          deny    domains = +local_domains
                  local_parts = ^[.] : ^.*[@%!/|]

          # to restrict port 587 to authenticated users only
          # see also daemon_smtp_ports above
          accept  hosts = +auth_relay_hosts
                  condition = ${if eq {$interface_port}{587} {yes}{no}}
                  endpass
                  message = relay not permitted, authentication required
                  authenticated = *

          # allow local users to send outgoing messages using slashes
          # and vertical bars in their local parts.
          # Block outgoing local parts that begin with a dot, slash, or vertical
          # bar but allow them within the local part.
          # The sequence \..\ is barred. The usage of @ % and ! is barred as
          # before. The motivation is to prevent your users (or their virii)
          # from mounting certain kinds of attacks on remote sites.
          deny    domains = !+local_domains
                  local_parts = ^[./|] : ^.*[@%!] : ^.*/\\.\\./

          # local source whitelist
          # accept if the source is local SMTP (i.e. not over TCP/IP).
          # Test for this by testing for an empty sending host field.
          accept  hosts = :

          # sender domains whitelist
          # accept if sender domain is in whitelist
          accept  sender_domains = +whitelist_domains

          # sender hosts whitelist
          # accept if sender host is in whitelist
          accept  hosts = +whitelist_hosts
          accept  hosts = +whitelist_hosts_ip

          # envelope senders whitelist
          # accept if envelope sender is in whitelist
          accept  senders = +whitelist_senders

          # accept mail to postmaster in any local domain, regardless of source
          accept  local_parts = postmaster
                  domains = +local_domains

          # accept mail to abuse in any local domain, regardless of source
          accept  local_parts = abuse
                  domains = +local_domains

          # accept mail to hostmaster in any local domain, regardless of source
          accept  local_parts = hostmaster
                  domains = +local_domains

          # OPTIONAL MODIFICATIONS:
          # If the page you're using to notify senders of blocked email of how
          # to get their address unblocked will use a web form to send you email so
          # you'll know to unblock those senders, then you may leave these lines
          # commented out. However, if you'll be telling your senders of blocked
          # email to send an email to errors@example.com, then you should
          # replace "errors" with the left side of the email address you'll be
          # using, and "example.com" with the right side of the email address and
          # then uncomment the second two lines, leaving the first one commented.
          # Doing this will mean anyone can send email to this specific address,
          # even if they're at a blocked domain, and even if your domain is using
          # blocklists.

          # accept mail to errors@example.com, regardless of source
          # accept  local_parts = errors
          #         domains = example.com

          # deny so-called "legal" spammers
          deny    message = Email blocked by LBL - to unblock see http://www.example.com/
                  # only for domains that do want to be tested against RBLs
                  domains = +use_rbl_domains
                  sender_domains = +blacklist_domains

          # deny using hostname in bad_sender_hosts blacklist
          deny    message = Email blocked by BSHL - to unblock see http://www.example.com/
                  # only for domains that do want to be tested against RBLs
                  domains = +use_rbl_domains
                  hosts = +bad_sender_hosts

          # deny using IP in bad_sender_hosts blacklist
          deny    message = Email blocked by BSHL - to unblock see http://www.example.com/
                  # only for domains that do want to be tested against RBLs
                  domains = +use_rbl_domains
                  hosts = +bad_sender_hosts_ip

          # deny using email address in blacklist_senders
          deny    message = Email blocked by BSAL - to unblock see http://www.example.com/
                  domains = +use_rbl_domains
                  senders = +blacklist_senders

          # By default we do NOT require sender verification.
          # Sender verification denies unless sender address can be verified:
          # If you want to require sender verification, i.e., that the sending
          # address is routable and mail can be delivered to it, then
          # uncomment the next line. If you do not want to require sender
          # verification, leave the line commented out
          #require verify = sender

          # deny using .spamhaus
          deny    message = Email blocked by SPAMHAUS - to unblock see http://www.example.com/
                  # only for domains that do want to be tested against RBLs
                  domains = +use_rbl_domains
                  dnslists = sbl.spamhaus.org

          # deny using ordb
          # deny    message = Email blocked by ORDB - to unblock see http://www.example.com/
          #         # only for domains that do want to be tested against RBLs
          #         domains = +use_rbl_domains
          #         dnslists = relays.ordb.org

          # deny using sorbs smtp list
          deny    message = Email blocked by SORBS - to unblock see http://www.example.com/
                  # only for domains that do want to be tested against RBLs
                  domains = +use_rbl_domains
                  dnslists = dnsbl.sorbs.net=127.0.0.5

          # Next deny stuff from more "fuzzy" blacklists
          # but do bypass all checking for whitelisted host names
          # and for authenticated users

          # deny using spamcop
          deny    message = Email blocked by SPAMCOP - to unblock see http://www.example.com/
                  hosts = !+relay_hosts
                  domains = +use_rbl_domains
                  !authenticated = *
                  dnslists = bl.spamcop.net

          # deny using njabl
          deny    message = Email blocked by NJABL - to unblock see http://www.example.com/
                  hosts = !+relay_hosts
                  domains = +use_rbl_domains
                  !authenticated = *
                  dnslists = dnsbl.njabl.org

          # deny using cbl
          deny    message = Email blocked by CBL - to unblock see http://www.example.com/
                  hosts = !+relay_hosts
                  domains = +use_rbl_domains
                  !authenticated = *
                  dnslists = cbl.abuseat.org

          # deny using all other sorbs ip-based blocklists besides the smtp list
          deny    message = Email blocked by SORBS - to unblock see http://www.example.com/
                  hosts = !+relay_hosts
                  domains = +use_rbl_domains
                  !authenticated = *
                  dnslists = dnsbl.sorbs.net!=127.0.0.6

          # deny using sorbs name based list
          deny    message = Email blocked by SORBS - to unblock see http://www.example.com/
                  domains = +use_rbl_domains
                  # rhsbl list is name based
                  dnslists = rhsbl.sorbs.net/$sender_address_domain

          # accept if address is in a local domain as long as recipient can be verified
          accept  domains = +local_domains
                  endpass
                  message = "Unknown User"
                  verify = recipient

          # accept if address is in a domain for which we relay as long as recipient
          # can be verified
          accept  domains = +relay_domains
                  endpass
                  verify = recipient

          # accept if message comes from a host for which we are an outgoing relay
          # recipient verification is omitted because many MUA clients don't cope
          # well with SMTP error responses. If you are actually relaying from MTAs
          # then you should probably add recipient verify here
          accept  hosts = +relay_hosts

          accept  hosts = +auth_relay_hosts
                  endpass
                  message = authentication required
                  authenticated = *

          deny    message = relay not permitted

          # default at end of acl causes a "deny", but line below will give
          # an explicit error message:
          deny    message = relay not permitted

        # ACL that is used after the DATA command
        check_message:
          accept
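    On the core question, a hedged pointer: these deny rules live in the RCPT-time ACL, so matching messages are refused before the DATA phase, and therefore before any content scanner such as SpamAssassin ever sees them. Such rejections are logged with the phrase "rejected RCPT" in the main log, which gives a crude way to compare counts (the log path varies by distribution):

        # RCPT-time rejections (includes the RBL denies above):
        grep -c 'rejected RCPT' /var/log/exim/mainlog

        # Rejections from one specific rule, keyed on its custom message:
        grep -c 'Email blocked by SPAMHAUS' /var/log/exim/mainlog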


  • DENY select on sys.dm_db_index_physical_stats

    - by steveh99999
    I recently saw an interesting blog article by Paul Randal about the performance overhead of querying sys.dm_db_index_physical_stats. So I was thinking: would it be possible to let non-sysadmin users query DMVs on a SQL Server, but stop them querying this I/O-intensive DMV? Yes it is; here's how.

    1. Create a new login for test purposes, with permissions to access the AdventureWorks database only:

        CREATE LOGIN [test] WITH PASSWORD='xxxx', DEFAULT_DATABASE=[AdventureWorks]
        GO
        USE [AdventureWorks]
        GO
        CREATE USER [test] FOR LOGIN [test] WITH DEFAULT_SCHEMA=[dbo]
        GO

    2. Log in as user test and issue the command:

        SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),NULL,NULL,NULL,'DETAILED')

    which gets the error:

        Msg 297, Level 16, State 12, Line 1
        The user does not have permission to perform this action.

    3. As a sysadmin, issue the command:

        USE AdventureWorks
        GRANT VIEW DATABASE STATE TO [test]

    or GRANT VIEW SERVER STATE TO [test] if all databases may be queried via DMVs.

    4. Try again as user test to issue the command:

        SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),NULL,NULL,NULL,'DETAILED')

    This now produces valid results from the DMV.

    5. Now create the test user in the master database, public role only:

        USE master
        CREATE USER [test] FOR LOGIN [test]

    6. Issue the command:

        USE master
        DENY SELECT ON sys.dm_db_index_physical_stats TO [test]

    7. Now go back to AdventureWorks using the test login and try:

        SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),NULL,NULL,NULL,'DETAILED')

    This now gets the error:

        Msg 229, Level 14, State 5, Line 1
        The SELECT permission was denied on the object 'dm_db_index_physical_stats', database 'mssqlsystemresource', schema 'sys'.

    but the user is still able to query all other non-I/O-intensive DMVs. If the user attempts to view the index physical stats via a built-in Management Studio report (see a recent blog post by Pinal Dave), they get an error also.


  • Revert "Deny" permissions in Windows 7

    - by saurabhj
    I made a very dumb mistake, and I am hoping there is a way to fix this without having to boot a Linux live CD and extract the data. My user login on my Windows 7 system is John, and John is part of the Administrators group. I have a folder called C:\Users\John. I tried to make this folder accessible to ONLY John (and deny it from all other administrators) by going to the folder, right-clicking, opening the Security tab, and selecting all the checkboxes under "Deny" while having selected the "Administrators" group. As a result, I cannot access this folder from either account, "John" or "Administrator", as both of them belong to the Administrators group and deny permissions outweigh allow permissions. Is there any way I could revert this? Thanks a million!
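    One hedged way out, assuming the account can still elevate: NTFS lets a folder's owner rewrite its ACL regardless of Deny entries, so taking ownership and then stripping the Deny ACEs from an elevated command prompt may recover access. An untested sketch:

        :: Re-take ownership of the tree (answers Yes to the prompt, recurses)
        takeown /F C:\Users\John /R /D Y

        :: Strip the Deny entries for the Administrators group, recursively
        icacls C:\Users\John /remove:d Administrators /T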


  • hosts.deny not working

    - by Captain Planet
    Currently I am watching the live auth.log, and someone has been trying a brute-force attack continuously for 10 hours. It's my local server, so there is no need to worry, but I want to test. I have installed DenyHosts, and there is already an entry for that IP address in hosts.deny, but he is still trying the attacks from the same IP; the system is not blocking it. Also, I don't know how that IP address got entered in that file – I didn't enter it. Is there some other system script which could have done that? My hosts.deny is:

        sshd: 120.195.108.22
        sshd: 95.130.12.64

    and hosts.allow:

        ALL:ALL
        sshd: ALL

    Is there any iptables setting that can override the hosts.deny file?
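    A hedged reading of the files as shown: the wrappers consult /etc/hosts.allow first and stop at the first match, so an ALL:ALL line there grants every client access, and hosts.deny is never reached (DenyHosts, for what it's worth, is also the usual author of automatic hosts.deny entries). And iptables cannot "override" these files in either direction: it filters packets before any application sees them, while the wrapper files apply afterwards, inside libwrap-linked daemons. A minimal corrected pair might look like this (192.0.2.10 is a placeholder):

        # /etc/hosts.allow -- only the hosts you trust; first match wins
        sshd: 192.0.2.10

        # /etc/hosts.deny -- everyone else
        sshd: ALL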


  • Exim4: Deny outgoing emails with specific destination domains to being sent to the smarthost

    - by Yoann P
    I am trying to prevent outgoing emails with specific destination domains from being sent to the smarthost, but without success. I'm on a Debian "squeeze" machine configured to use a smarthost. I edited:

        vi /etc/exim4/conf.d/acl/30_exim4-config_check_rcpt

    and added, right after "acl_check_rcpt:":

        deny message = Domain $domain is prohibited for outgoing mails
             domains = lsearch;/etc/exim4/restricted_domains

    I reloaded Exim, but mails to the restricted domains continue to go out. I also tried adding the acl_not_smtp ACL after reading this post, but without success either:

        vi /etc/exim4/conf.d/main/02_exim4-config_options

    adding:

        acl_not_smtp = acl_check_not_smtp

    then, at the top of 30_exim4-config_check_rcpt:

        acl_check_not_smtp:
          deny message = Domain $domain is prohibited for outgoing mails
               domains = lsearch;/etc/exim4/restricted_domains

    Can anybody point out what I'm doing wrong, please? Thanks, best regards.
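    One common pitfall on Debian, offered as a hedged suggestion: with the split configuration, files under /etc/exim4/conf.d/ only feed a generated runtime configuration, so edits take effect only after regenerating it; a plain reload isn't enough:

        # Regenerate /var/lib/exim4/config.autogenerated from conf.d, then reload
        update-exim4.conf
        service exim4 reload

        # Confirm which configuration file the daemon is actually running with
        exim4 -bV | grep -i 'configuration file'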


  • Configure IIS 6 to deny access to specific file based upon IP address

    - by victorferreira
    Hello guys! We are using IIS 6 as our web server, and we need to deny access to one specific file, at one specific URL, for everybody OUTSIDE the local network. In other words, if somebody tries to access that file/page from their own computer at home, over the internet, they must not succeed; but if the same person tries from the web server's own network, it's OK. I am not sure about the details, but Apache uses Order Deny,Allow: you specify the URL, then allow or deny to all or to a range of IPs. Any suggestions? Thanks!
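    In IIS 6 the equivalent knob is the metabase's IPSecurity property, which can be set per file (in IIS Manager: the file's Properties, File Security tab, "IP address and domain name restrictions", available on server editions of Windows). A rough, untested ADSI sketch; the site ID (1), path, and subnet are all assumptions:

        ' Deny by default, then grant only the local subnet, on a single file
        Set node = GetObject("IIS://localhost/W3SVC/1/ROOT/secret/page.asp")
        Set sec = node.IPSecurity
        sec.GrantByDefault = False
        sec.IPGrant = Array("192.168.0.0, 255.255.255.0")
        node.IPSecurity = sec
        node.SetInfo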


