Search Results

Search found 4830 results on 194 pages for 'conf d'.


  • nginx with SSL: I get a 403 and a "directory index of '...dir...' is forbidden" message in the error log; works fine with an unencrypted connection

    - by user72464
    As mentioned in the title, I had nginx working fine with my Rails app until I tried to add the SSL server. The unencrypted connection still works, but SSL always returns a 403 page with the following line in the error log:

        directory index of "/home/user/rails/" is forbidden, client: [my ip], server: _, request: "GET / HTTP/1.1", host: "[server ip]"

    Below is my nginx.conf server block:

        server {
            listen 80;
            listen 443 ssl;
            ssl_certificate /etc/ssl/server.crt;
            ssl_certificate_key /etc/ssl/server.key;
            client_max_body_size 4G;
            keepalive_timeout 5;
            root /home/user/rails;
            try_files $uri/index.html $uri.html $uri @app;
            location @app {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://0.0.0.0:8080;
            }
            error_page 500 502 503 504 /500.html;
            location = /500.html {
                root /home/user/rails;
            }
        }

    The /home/user/rails directory and its parent are readable by everyone, and they belong to the user nginx. The certificate and key file have the following permissions:

        -rw-r--r-- 1 nginx root 830 Nov 8 09:09 server.crt
        -rw--w---- 1 nginx root 887 Nov 8 09:09 server.key

    Any clue?
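
    The server: _ field in the logged error hints that the HTTPS request may have been answered by a catch-all server block rather than the one shown above. A minimal sketch of one way to test that theory, assuming nginx 0.8.21 or later (for the default_server flag) and reusing the paths from the question:

        # Hypothetical check: make this block the explicit default for port 443
        # so a stray default vhost cannot shadow it.
        server {
            listen 443 ssl default_server;
            ssl_certificate     /etc/ssl/server.crt;    # paths from the question
            ssl_certificate_key /etc/ssl/server.key;
            root /home/user/rails;
            try_files $uri/index.html $uri.html $uri @app;  # same fallback chain as above
        }

    If the 403 disappears, the original block was never seeing the HTTPS traffic; if it persists, the next suspect is the unusual permission string on the key file shown above.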


  • Packets being dropped by iptables

    - by Shadyabhi
    I am trying to create a software access point in Linux. I followed the blog here. Steps I performed: started the DHCP server on wlan0, properly configured hostapd.conf, and enabled packet forwarding and masquerading. I executed two commands for iptables:

        iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
        iptables --append FORWARD --in-interface wlan0 -j ACCEPT

    I enabled logging in iptables, and I get this in everything.log:

        Jun 29 19:42:03 MBP-archlinux kernel: [10480.180356] IN=eth0 OUT=wlan0 MAC=c8:bc:c8:9b:c4:3c:00:13:80:40:cd:80:08:00 SRC=195.143.92.150 DST=10.0.0.3 LEN=44 TOS=0x00 PREC=0x00 TTL=52 ID=38025 PROTO=TCP SPT=80 DPT=53570 WINDOW=46185 RES=0x00 ACK URGP=0
        Jun 29 19:42:03 MBP-archlinux kernel: [10480.389102] IN=eth0 OUT=wlan0 MAC=c8:bc:c8:9b:c4:3c:00:13:80:40:cd:80:08:00 SRC=195.143.92.150 DST=10.0.0.3 LEN=308 TOS=0x00 PREC=0x00 TTL=52 ID=14732 PROTO=TCP SPT=80 DPT=53570 WINDOW=46185 RES=0x00 ACK PSH URGP=0
        Jun 29 19:42:03 MBP-archlinux kernel: [10480.389710] IN=eth0 OUT=wlan0 MAC=c8:bc:c8:9b:c4:3c:00:13:80:40:cd:80:08:00 SRC=195.143.92.150 DST=10.0.0.3 LEN=44 TOS=0x00 PREC=0x00 TTL=52 ID=14988 PROTO=TCP SPT=80 DPT=53570 WINDOW=46185 RES=0x00 ACK FIN URGP=0
        Jun 29 19:42:03 MBP-archlinux kernel: [10480.621118] IN=eth0 OUT=wlan0 MAC=c8:bc:c8:9b:c4:3c:00:13:80:40:cd:80:08:00 SRC=195.143.92.150 DST=10.0.0.3 LEN=44 TOS=0x00 PREC=0x00 TTL=52 ID=63378 PROTO=TCP SPT=80 DPT=53570 WINDOW=46185 RES=0x00 ACK FIN URGP=0

    I have almost no knowledge of iptables; everything I did came from googling. Can anyone help me understand what is going wrong here? I have tried running tcpdump on wlan0, and HTTP packets are being sent from wlan0.
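
    Note that the logged packets are inbound replies (IN=eth0 OUT=wlan0, SPT=80) being forwarded back toward the wireless client. A sketch of the usually missing pieces, assuming the FORWARD chain's default policy is DROP (which would explain why only return traffic shows up in the log):

        # Allow return traffic for connections initiated from the wireless side.
        iptables --append FORWARD --in-interface eth0 --out-interface wlan0 \
                 -m state --state ESTABLISHED,RELATED -j ACCEPT
        # And make sure forwarding is actually enabled in the kernel:
        sysctl -w net.ipv4.ip_forward=1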


  • Computer sending data while turned off

    - by Nicklas Ansman
    I have a somewhat strange problem (which could have an easy and obvious solution for all I know). My problem is that when I boot Ubuntu (now 10.04, but I had the same problem with 9.10) and then shut it down, the machine starts sending a HUGE amount of data via the ethernet cable, so much in fact that my router can't handle it and stops responding. As far as I can tell the computer is completely turned off, with no fans spinning. I can add that if I boot Windows I do not have this problem; it happens only when exiting Ubuntu. There are two "fixes" for my problem: pull the ethernet cable until the next boot, or turn off power to the PSU and wait for the capacitors to drain. Is there anyone who knows what could be going on? I'd be happy to post some logs or conf files. Currently I'm using the ethernet port on my motherboard, which is an Asus P6T Deluxe V2 with an updated version of the BIOS (maybe not the latest, but since it only happens after I've been in Ubuntu I don't want to mess with the BIOS too much). Regards, Nicklas

    Update 1: The router is a D-Link DIR 655 with the latest firmware.

    Update 2: I've now reinstalled Ubuntu (with 10.04) and I still experience the same problem.
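
    Behavior like this is often the NIC being left in a misbehaving Wake-on-LAN state by the Linux driver at shutdown, which would fit Windows shutdowns being clean. A sketch of a diagnostic, assuming the onboard NIC is eth0:

        # Inspect and disable Wake-on-LAN before shutting down.
        sudo ethtool eth0 | grep Wake-on    # 'g' means wake-on-magic-packet is armed
        sudo ethtool -s eth0 wol d          # 'd' disables wake-on-LAN
        # To make it persist, the ethtool -s line could go in /etc/rc.local
        # before the final 'exit 0'.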


  • DPMS does not work: the monitor is not switched off

    - by bortzmeyer
    I have a monitor which was properly switched off by my Debian PC when unused. I attached it to another machine and, this time, it is never switched off. In /etc/X11/xorg.conf I have:

        Section "Monitor"
            Identifier "Generic Monitor"
            Option "DPMS"

    It is recognized when X11 starts:

        (II) Loading extension DPMS
        ...
        (II) VESA(0): DPMS capabilities: StandBy Suspend Off; RGB/Color Display
        ...
        (**) Option "dpms"
        (**) VESA(0): DPMS enabled

    The operating system is Debian stable "lenny". The graphics card is:

        00:02.0 VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 02) (prog-if 00 [VGA controller])
            Subsystem: Hewlett-Packard Company Device 2a6f
            Flags: bus master, fast devsel, latency 0, IRQ 5
            Memory at fe900000 (32-bit, non-prefetchable) [size=512K]
            I/O ports at b080 [size=8]
            Memory at d0000000 (32-bit, prefetchable) [size=256M]
            Memory at fe800000 (32-bit, non-prefetchable) [size=1M]
            Capabilities: [90] Message Signalled Interrupts: Mask- 64bit- Queue=0/0 Enable-
            Capabilities: [d0] Power Management version 2

    X11 is:

        X.Org X Server 1.4.2
        Release Date: 11 June 2008
        X Protocol Version 11, Revision 0
        Build Operating System: Linux Debian (xorg-server 2:1.4.2-10.lenny2)
        Current Operating System: Linux ludwigVII 2.6.26-2-686 #1 SMP Sun Jun 21 04:57:38 UTC 2009 i686
        Build Date: 08 June 2009 09:12:57AM
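
    Option "DPMS" only enables the capability; the actual timeouts can still be zero on the new machine. A sketch of how to check and set them from a running session, using standard xset:

        xset q                      # look for "DPMS is Enabled" and the three timeout values
        xset dpms 600 900 1200      # standby/suspend/off after 10/15/20 minutes
        xset dpms force off         # immediate test that the monitor can be blanked at all

    If xset dpms force off works but the timeouts never fire, something (a screensaver, or a timeout of 0) is resetting or disabling them.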


  • squid bypass for a domain

    - by krisdigitx
    I am using squid with adzap. Is it possible to make squid/adzap not cache a particular domain, e.g. cnn.com? This is my squid.conf file:

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        #acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        #acl to_localhost dst ::1/128

        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl localnet src 192.168.1.0/24
        acl localnet src 192.168.2.0/24
        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT

        #
        # Recommended minimum Access Permission configuration:
        #
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager
        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports
        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports
        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        #http_access deny to_localhost

        #
        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
        #
        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost
        # And finally deny all other access to this proxy
        http_access deny all

        # Squid normally listens to port 3128
        http_port xxx.xxx.xxx.yyy:3128 transparent
        visible_hostname proxyserver.local
        # We recommend you to use at least the following line.
        hierarchy_stoplist cgi-bin ?
        # Uncomment and adjust the following to add a disk cache directory.
        cache_dir ufs /var/spool/squid 1024 16 256
        # Leave coredumps in the first cache dir
        coredump_dir /var/spool/squid
        # Add any of your own refresh_pattern entries above these.
        refresh_pattern ^ftp:             1440  20%  10080
        refresh_pattern ^gopher:          1440   0%   1440
        refresh_pattern -i (/cgi-bin/|\?)    0   0%      0
        refresh_pattern .                    0  20%   4320
        access_log /var/log/squid/squid.log squid
        access_log syslog squid
        redirect_program /usr/local/adzap/scripts/wrapzap

    Fixed using:

        acl allow_domains dstdomain www.cnn.com
        always_direct allow allow_domains
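
    For completeness: always_direct only bypasses cache peers. A sketch of how the same ACL could also bypass the cache and the adzap redirector, assuming Squid 2.6 or later (older versions spell url_rewrite_access as redirector_access):

        acl allow_domains dstdomain .cnn.com     # leading dot covers subdomains
        cache deny allow_domains                 # do not store replies
        url_rewrite_access deny allow_domains    # do not pass requests to adzap
        always_direct allow allow_domains        # go straight to the origin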


  • Proper web server setup

    - by DMin
    I just got myself a Slicehost basic slice to play around with so I can learn how to set up web servers. I have Ubuntu 10.04.2 installed on the server. I was able to successfully get the server up and running from scratch, following this tutorial. I know this is probably just a starter's tutorial, so I was wondering if you guys can tell me what you like to do while setting up production servers. These are the steps that were followed:

        Update and upgrade Ubuntu
        sudo apt-get install apache2 php5-mysql libapache2-mod-php5 mysql-server
        Back up and edit apache2.conf:
            change 'ServerTokens Full' to 'ServerTokens Prod'
            change 'ServerSignature On' to 'ServerSignature Off'
        Back up php.ini, then change "expose_php = On" to "expose_php = Off"
        Restart Apache
        Install the Shorewall firewall
        Configure Shorewall to only accept HTTP and SSH connections (in the rules file)
        Enable Shorewall on startup
        Add the website to the server:
            sudo usermod -g www-data root
            sudo chown -R www-data:www-data /var/www
            sudo chmod -R 775 /var/www

    I want to make this CommunityWiki but can't seem to find the option to do it. Please feel free to add any feedback on the processes and things I am doing right/wrong. Much appreciated, thanks! :)
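
    The config edits in the list above can be captured as repeatable one-liners. A sketch, assuming stock Ubuntu 10.04 file locations (on Ubuntu the ServerTokens/ServerSignature directives normally live in /etc/apache2/conf.d/security; the question edits apache2.conf directly, which also works):

        sudo sed -i 's/^ServerTokens.*/ServerTokens Prod/' /etc/apache2/conf.d/security
        sudo sed -i 's/^ServerSignature.*/ServerSignature Off/' /etc/apache2/conf.d/security
        sudo sed -i 's/^expose_php = On/expose_php = Off/' /etc/php5/apache2/php.ini
        sudo /etc/init.d/apache2 restart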


  • Arch Linux: eth0 no carrier - network fails at boot

    - by user905686
    The problem: My computer is connected to a network where DHCP is required. So my network configuration in /etc/rc.conf looks like:

        interface=eth0
        address=
        netmask=
        broadcast=
        gateway=

    My daemons are:

        DAEMONS=(!hwclock syslog-ng network netfs crond ntpd)

    With this configuration, Arch hangs at boot a long time at "Network" (it still says "[done]", but after boot I have no connection). I found two workarounds.

    Workaround 1: Remove network from the daemons and run mii-tool --reset eth0 and dhcpcd eth0 after boot (somehow it does not work when placing these commands in /etc/rc.local). Then DHCP works very quickly (because of the reset!). Before executing the first command, ip link show eth0 has "NO CARRIER" in its output; afterwards it doesn't. (Also, mii-tool first shows "no link", afterwards eth0: 10 Mbit, half duplex, link ok.)

    Workaround 2: Change the network configuration to:

        interface=eth0
        address=x.y.z.21
        netmask=255.255.255.0
        broadcast=xxx.y.z.255
        gateway=x.y.z.254

    where x, y, z form the specific addresses of the network (though DHCP is used, I get a static IP). Add the commands mii-tool --reset eth0 and dhcpcd eth0 to /etc/rc.local. Now network starts quickly at boot (though I don't know if successfully), the commands in /etc/rc.local are executed, and the connection is fine after login.

    What to do? So the problem seems to be that dhcpcd gets stuck at "waiting for carrier" or something. I do not like the workarounds, because some daemons need the network (though they seem to start). What can I do to have eth0 ready for DHCP at boot? Or is there another problem?
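
    One hedged approach is to bring the link up and explicitly wait for carrier before dhcpcd runs, instead of resetting the PHY afterwards. A sketch, assuming the old Arch initscripts (iproute2 reports the flag as NO-CARRIER, with a hyphen):

        ip link set eth0 up
        # wait up to 10 seconds for the link to come up
        for i in $(seq 1 10); do
            ip link show eth0 | grep -q 'NO-CARRIER' || break
            sleep 1
        done
        dhcpcd eth0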


  • nginx - proxy_pass is working - Apache isn't doing what it should...

    - by matthewsteiner
    So, I've got this in my nginx.conf:

        location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
            root /var/www/vhosts/example.com/public/;
            access_log off;
            expires 30d;
        }

        location / {
            proxy_pass http://127.0.0.1:8080/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    So anything that is a "static file" that exists will just be served by nginx. Otherwise, it should pass the request off to Apache. Right now, static files are working correctly. However, if something is passed to Apache and it's example.com or subdomain.example.com, Apache just spits out the "Apache 2 Test Page" that you get if there's nothing there. Apache worked fine before, so I'm guessing it has to do with the way nginx is "asking". I'm not sure though. Any ideas?
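
    The test page usually means no name-based vhost matches the port Apache now listens on: vhosts that were defined for *:80 do not match requests arriving on 127.0.0.1:8080, even though the Host header is forwarded. A sketch of the Apache 2.2 side, assuming the vhosts were originally bound to port 80 (names and paths taken from the question):

        Listen 127.0.0.1:8080
        NameVirtualHost 127.0.0.1:8080

        <VirtualHost 127.0.0.1:8080>
            ServerName example.com
            ServerAlias subdomain.example.com
            DocumentRoot /var/www/vhosts/example.com/public
        </VirtualHost>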


  • How to set up hosts file for local environment?

    - by n00b0101
    I'm trying to create subdomains on my localhost and am way out of my territory... I'm running MAMP on Mac OS X and I thought/think I had/have to do the following (assuming I want to create me.localhost.com and you.localhost.com):

    (1) Edit /private/etc/hosts. Right now it looks like this:

        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost

    So, do I just make it:

        127.0.0.1       localhost
        127.0.0.1       me.localhost.com
        127.0.0.1       you.localhost.com
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost

    (2) I'm assuming I don't need to mess with DNS at all because it's local? So the hosts file should suffice?

    (3) And then I need to edit my httpd.conf file to include virtual hosts? I tried this, but it's not picking it up...

        NameVirtualHost *
        <VirtualHost *>
            DocumentRoot "/Applications/MAMP/htdocs"
            ServerName localhost
        </VirtualHost>
        <VirtualHost *>
            DocumentRoot "/Applications/MAMP/htdocs/me.localhost.com"
            ServerName me.localhost.com
        </VirtualHost>
        <VirtualHost *>
            DocumentRoot "/Applications/MAMP/htdocs/you.localhost.com"
            ServerName you.localhost.com
        </VirtualHost>

    Not sure if I'm way off-base here... Help is greatly appreciated!
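
    The hosts entries in (1) are indeed enough; no DNS is needed locally. The usual MAMP gotcha is the port: by default MAMP's Apache listens on 8888, so the vhosts and the browser URL both need it. A sketch, assuming that default port (if MAMP was switched to port 80, use *:80 instead):

        NameVirtualHost *:8888
        <VirtualHost *:8888>
            DocumentRoot "/Applications/MAMP/htdocs/me.localhost.com"
            ServerName me.localhost.com
        </VirtualHost>

    The site would then be reached at http://me.localhost.com:8888/.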


  • Guide To Setting Up Access To PhpMyAdmin On NGINX, Ubuntu 11.04, EC2 Remote MySQL Instance

    - by darkAsPitch
    I have set up a domain name on Amazon EC2 running Ubuntu 11.04, nginx and php5-fpm. The domain name works great; it has its own sites-available configuration file, symlinked into sites-enabled. I installed phpMyAdmin via sudo apt-get install phpmyadmin and followed the instructions. I then added this server block to /etc/nginx/nginx.conf and restarted nginx:

        server {
            listen 80;
            server_name phpmyadmin.domain.com;
            location / {
                root /usr/share/phpmyadmin;
                index index.php;
            }
            # make sure all php files are processed by fast_cgi
            location ~ \.php {
                # try_files $uri =404;
                fastcgi_index index.php;
                fastcgi_pass 127.0.0.1:9000;
                include fastcgi_params;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_param PATH_INFO $fastcgi_path_info;
                fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    I have also added the appropriate DNS A records for phpmyadmin.domain.com, but it just shows a 404 error code. All other subdomains do not respond at all, so at least something is working here. FYI, I have edited the /etc/phpmyadmin/config.inc.php file so that I can connect to a remote MySQL database. What else do I need to do?
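
    One likely cause: root is declared inside location / only, so it does not apply to the PHP location, and $document_root in SCRIPT_FILENAME resolves to the compiled-in default for .php requests. A sketch of the usual layout, with root hoisted to server level (paths and ports from the question):

        server {
            listen 80;
            server_name phpmyadmin.domain.com;
            root /usr/share/phpmyadmin;
            index index.php;

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }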


  • Postgres pgpass windows - not working

    - by Scott
    DB: Postgres 9.0
    Client: Windows 7
    Server: Windows 2008, 64-bit

    I'm trying to connect remotely to a Postgres instance in order to perform a pg_dump to my local machine. Everything works from my client machine, except that I need to provide a password at the password prompt, and I'd ultimately like to batch this with a script. I've followed the instructions here: http://www.postgresql.org/docs/current/static/libpq-pgpass.html but it's not working. To recap, I've created a file on the client (and tried the server as well): C:/Users/postgres/AppData/postgresql/pgpass.conf, where postgres is the db user. The file has one line with the following data:

        *:5432:*postgres:[mypassword]

    (I also tried explicit ip/dbname values, all asterisks, and every combination in between, as well as replacing each '*' with [localhost|myip] and [mydatabasename] respectively.) From my client machine, I connect using:

        pg_dump -h [myip] -U postgres -w [mydbname] [mylocaldumpfile]

    I'm presuming that I need to provide the '-w' switch in order to skip the password prompt, at which point it should look in the AppData directory. It just comes back with: "connection to database failed: fe_sendauth: no password supplied." Any insights are appreciated. As a hack workaround, if there was a way I could tell the Windows batch file on my client machine to inject the password at the postgres prompt, that would work as well. Thanks.
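
    Two details worth checking, per the libpq docs: the file is read by the client (not the server), and on Windows it lives under %APPDATA%\postgresql\, which expands to ...\AppData\Roaming\postgresql\, not ...\AppData\ directly. The format is also five colon-separated fields; the line quoted above is missing the colon between the third * and postgres. A sketch (Windows batch; [mypassword]/[myip]/[mydbname] are the question's placeholders):

        :: documented format: hostname:port:database:username:password
        echo *:5432:*:postgres:[mypassword] > "%APPDATA%\postgresql\pgpass.conf"
        :: or sidestep pgpass entirely for a one-off batch job:
        set PGPASSWORD=[mypassword]
        pg_dump -h [myip] -U postgres -w [mydbname]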


  • MongoDB data directory transfer and upgrade

    - by KPL
    I just transferred my data directory (of Mongo 1.6.5) to a new server and installed Mongo 2.0 on it. I set the data directory path and did sudo service mongod restart. It failed, and the log file output says this:

        ***** SERVER RESTARTED *****
        Sun Oct 9 07:51:47 [initandlisten] MongoDB starting : pid=8224 port=27017 dbpath=/database/mongodb 64-bit host=domU-12-31-39-09-35-81
        Sun Oct 9 07:51:47 [initandlisten] db version v2.0.0, pdfile version 4.5
        Sun Oct 9 07:51:47 [initandlisten] git version: 695c67dff0ffc361b8568a13366f027caa406222
        Sun Oct 9 07:51:47 [initandlisten] build info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41
        Sun Oct 9 07:51:47 [initandlisten] options: { auth: "true", config: "/etc/mongod.conf", dbpath: "/database/mongodb", fork: "true", logappend: "true", logpath: "/var/log/mongo/mongod.log", nojournal: "true" }
        Sun Oct 9 07:51:47 [initandlisten] couldn't open /database/mongodb/local.ns errno:1 Operation not permitted
        Sun Oct 9 07:51:47 [initandlisten] error couldn't open file /database/mongodb/local.ns terminating
        Sun Oct 9 07:51:47 dbexit:
        Sun Oct 9 07:51:47 [initandlisten] shutdown: going to close listening sockets...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: going to flush diaglog...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: going to close sockets...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: waiting for fs preallocator...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: closing all files...
        Sun Oct 9 07:51:47 [initandlisten] closeAllFiles() finished
        Sun Oct 9 07:51:47 [initandlisten] shutdown: removing fs lock...
        Sun Oct 9 07:51:47 dbexit: really exiting now

    I have already run it with --upgrade once.
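
    errno 1 (Operation not permitted) on open usually means the transferred files are owned by the wrong account for the init script's mongod user. A sketch, assuming the package runs mongod as user mongod (some packages use mongodb; check with ps or the init script):

        ls -ln /database/mongodb/local.ns           # who owns the copied files?
        sudo chown -R mongod:mongod /database/mongodb
        sudo rm -f /database/mongodb/mongod.lock    # only if a stale lock was copied over
        sudo service mongod restart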


  • PHP files are downloaded, not executed in UserDir on Apache

    - by Fabian
    We're running a webserver using Debian 6.0.3 with Apache 2; we recently upgraded from Debian 5 to 6. Since then, PHP scripts in the user directories (using mod_userdir) have stopped working: they are downloaded instead of being executed. There is also a website using PHP outside of the user directories, and that one continues to work fine, so PHP seems to generally work on the server. I tested it with several PHP files, among them a simple phpinfo file that works fine on the main site but is just downloaded when copied to one of the user directories. The PHP files and the directory containing them are executable for everyone. The option in the Apache php5.conf that by default disables PHP in the user directories is commented out, so php5.conf looks like this:

        <IfModule mod_php5.c>
            <FilesMatch "\.ph(p3?|tml)$">
                SetHandler application/x-httpd-php
            </FilesMatch>
            <FilesMatch "\.phps$">
                SetHandler application/x-httpd-php-source
            </FilesMatch>
            # To re-enable php in user directories comment the following lines
            # (from <IfModule ...> to </IfModule>.) Do NOT set it to On as it
            # prevents .htaccess files from disabling it.
            #<IfModule mod_userdir.c>
            #    <Directory /home/*/public_html>
            #        php_admin_value engine Off
            #    </Directory>
            #</IfModule>
        </IfModule>

    We restarted Apache after changing this. I'm running out of ideas now what the problem could be, and I don't know how I could really determine what is preventing those PHP files from being executed. Any ideas on how I can solve this?

    Update: Strangely, PHP seems to work fine in subfolders of user directories, so if I copy a PHP file from /home/user/public_html/ to /home/user/public_html/test/ it suddenly works.
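
    The subfolder behavior suggests something is turning the engine off for the top-level directory only, e.g. a stray per-directory override. A sketch of a diagnostic (test only, given the warning in the stock comment above about .htaccess):

        # Force the engine on for the userdir top level, mirroring the
        # commented-out block above with the value inverted:
        <IfModule mod_userdir.c>
            <Directory /home/*/public_html>
                php_admin_flag engine On
            </Directory>
        </IfModule>
        # And check for an .htaccess that only affects the top level:
        #   grep -ri 'php' /home/user/public_html/.htaccess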


  • Changing open-files-limit in mysql 5.5

    - by davidv
    I'm having an issue with MySQL 5.5 running on Ubuntu 12.04 and the open-files-limit parameter. I recently noticed some problems due to the 1024 limit, and the main system limit was indeed set to 1024, so I modified /etc/security/limits.conf with the following:

        * soft nofile 32000
        * hard nofile 32000
        root soft nofile 32000
        root hard nofile 32000

    After that I checked the ulimit value for root and for the mysql user; both returned the new value, 32000, so I assume the change has been applied. I also changed the value in my.cnf, setting open-files-limit to 24000, like this:

        open-files-limit = 24000

    Now comes the odd part. When I restart the mysql service and check the open_files_limit variable, it returns that it's still set to 1024, so I'm having the same problems as before (obviously). I tried using open-files-limit instead of open_files_limit in my.cnf, with the same result. BUT if I bypass the service command and start mysqld directly (no additional parameters), the service starts, and when I check the parameter it returns 32000... I don't know where it's taking that value from, as it's not set in my.cnf and it's not given on the command line, at least not by me. Any ideas why the change isn't taking effect and how to solve it the normal way (launching it through service...)?
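
    On Ubuntu 12.04, mysql is started by Upstart, and Upstart jobs do not read /etc/security/limits.conf (which applies to PAM login sessions, hence the 32000 when launching mysqld by hand from a shell). A sketch of the usual fix, assuming the stock job file at /etc/init/mysql.conf:

        # add to /etc/init/mysql.conf: limit <resource> <soft> <hard>
        limit nofile 24000 32000

        # then restart the job:
        #   sudo stop mysql && sudo start mysql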


  • My DNS works! But, what is the simplest way to add something to it?

    - by Alex
    This is my current DNS example.com.db zone file. I followed a tutorial. It works: when I point another server at this DNS via resolv.conf, it actually forwards me to the right IP when I do "ping example.com".

        ;
        ; BIND data file for example.com
        ;
        $TTL    604800
        @       IN      SOA     example.com. info.example.com. (
                                2007011501      ; Serial
                                7200            ; Refresh
                                120             ; Retry
                                2419200         ; Expire
                                604800)         ; Default TTL
        ;
        @               IN      NS      ns1.example.com.
        @               IN      NS      ns2.example.com.
        example.com.    IN      MX      10      mail.example.com.
        example.com.    IN      A       192.168.254.1
        www             IN      CNAME   example.com.
        mail            IN      A       192.168.254.1
        ftp             IN      CNAME   example.com.
        example.com.    IN      TXT     "v=spf1 ip4:192.168.254.1 a mx ~all"
        mail            IN      TXT     "v=spf1 a -all"

    Right now, ping example.com goes to 192.168.254.1. That's great!!! It works! My question is: how can I add something to this file so that on my other servers:

        ping dbserver1        goes to 44.245.66.222
        ping cacheserver1     goes to 38.221.44.555

    I want to use it like a universal hosts file for my machines.
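
    A sketch of the additions: unqualified names in a zone file are completed with the zone origin, so these lines create dbserver1.example.com and cacheserver1.example.com (IPs copied from the question; remember to bump the Serial and reload BIND):

        dbserver1       IN      A       44.245.66.222
        cacheserver1    IN      A       38.221.44.555

    For the bare names (ping dbserver1) to resolve, each client's /etc/resolv.conf also needs a "search example.com" line so the suffix is appended automatically.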


  • Nginx map module 301 redirecting

    - by Reinier Korth
    I've rebuilt my website in Ruby on Rails and now I want to 301 redirect a lot of old URLs using nginx's HttpMapModule (http://wiki.nginx.org/HttpMapModule). For some reason I can't get it to work; it works fine without the rewrite ^ $new permanent; line. Does anyone see what I'm missing? This is my nginx.conf:

        server {
            server_name example.com;
            return 301 $scheme://www.example.com$request_uri;
        }

        # 301 redirect list
        map $uri $new {
            /test123    http://www.example.com/test123;
            /bla        http://www.example.com/bladiebla;
        }

        server {
            server_name www.example.com;
            rewrite ^ $new permanent;

            root example/public;

            location ^~ /assets/ {
                gzip_static on;
                expires max;
                add_header Cache-Control public;
            }

            try_files $uri/index.html $uri @unicorn;
            location @unicorn {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://unicorn-<%= application %>;
            }

            error_page 500 502 503 504 /500.html;
            client_max_body_size 4G;
            keepalive_timeout 10;
        }
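
    The unconditional rewrite fires for every request, including unmapped URIs where $new is empty, which breaks normal traffic. A sketch of the usual guard, only redirecting when the map produced a value:

        map $uri $new {
            default     "";
            /test123    http://www.example.com/test123;
            /bla        http://www.example.com/bladiebla;
        }

        server {
            server_name www.example.com;
            # $new is empty for unmapped URIs, so this if() skips them
            if ($new) {
                return 301 $new;
            }
            # rest of the block unchanged
        }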


  • SSL Handshake negotiation on Nginx terribly slow

    - by Paras Chopra
    I am using nginx as a proxy in front of 4 Apache instances. My problem is that SSL negotiation takes a lot of time (600 ms). See this as an example: http://www.webpagetest.org/result/101020_8JXS/1/details/

    Here is my nginx conf:

        user www-data;
        worker_processes 4;

        events {
            worker_connections 2048;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            access_log /var/log/nginx/access.log;
            sendfile on;
            keepalive_timeout 0;
            tcp_nodelay on;
            gzip on;
            gzip_proxied any;
            server_names_hash_bucket_size 128;

            upstream abc {
                server 1.1.1.1 weight=1;
                server 1.1.1.2 weight=1;
                server 1.1.1.3 weight=1;
            }

            server {
                listen 443;
                server_name blah;
                keepalive_timeout 5;
                ssl on;
                ssl_certificate /blah.crt;
                ssl_certificate_key /blah.key;
                ssl_session_cache shared:SSL:10m;
                ssl_session_timeout 5m;
                ssl_protocols SSLv2 SSLv3 TLSv1;
                ssl_ciphers RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
                ssl_prefer_server_ciphers on;

                location / {
                    proxy_pass http://abc;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header Host $host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                }
            }
        }

    The machine is a VPS on Linode with 1 GB of RAM. Can anyone please tell me why the SSL handshake is taking ages?
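
    One way to narrow this down is to time the handshake in isolation and check whether session reuse is working (the config above does enable ssl_session_cache). A sketch using standard openssl tooling from any client machine, with blah standing in for the real hostname:

        # one full handshake, network RTT included
        time openssl s_client -connect blah:443 < /dev/null
        # compare new handshakes vs. resumed sessions over 10 seconds each
        openssl s_time -connect blah:443 -www / -new -time 10
        openssl s_time -connect blah:443 -www / -reuse -time 10

    If resumed handshakes are fast but new ones are slow, the cost is the RSA operation on a small VPS CPU; if both are slow, look at network round trips (e.g. certificate chain size).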


  • Files deleted. What could have happened?

    - by jjfine
    I'm having a weird issue today. I was writing and testing out some simple CGI scripts this morning when I realized that I couldn't run them from one of the other computers on the (Windows) network. So I had my network admin come in and take a look at what was going on. A few minutes later a co-worker came in and told me that a bunch of files he was working with, as well as a bunch of others (all *.c files) on the network drive, got deleted. He also noticed some strange apache_dump_500.log.txt files in the same directories where the files got deleted. The apache_dump_500.log.txt files all look like this:

        REDIRECT_HTTP_ACCEPT=*/*, image/gif, image/x-xbitmap, image/jpeg
        REDIRECT_HTTP_USER_AGENT=Mozilla/1.1b2 (X11; I; HP-UX A.09.05 9000/712)
        REDIRECT_PATH=.:/bin:/usr/local/bin:/etc
        REDIRECT_QUERY_STRING=
        REDIRECT_REMOTE_ADDR=<my computer's local ip>
        REDIRECT_REMOTE_HOST=
        REDIRECT_SERVER_NAME=<my computer's domain url>
        REDIRECT_SERVER_PORT=
        REDIRECT_SERVER_SOFTWARE=
        REDIRECT_URL=/cgi-bin/trojan.py

    I looked, and I don't have any trojan.py in my cgi-bin folder. And all my Apache logs are clean. The Windows event logger seems to have no trace of what happened either. My httpd.conf: http://pastebin.com/Yny2Yh8v

    I think we've got some kind of virus that added this trojan.py file to my cgi-bin, ran the script, and deleted the script and any traces from the logs. Is this a thing that happens? Any ideas whatsoever would be much appreciated!
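
    A triage sketch, assuming the Apache host is Unix-like and adjusting paths to the actual log and share locations (treat the box as compromised until proven otherwise):

        # any access-log trace of the script being requested?
        grep -r "trojan.py" /var/log/apache2/ /var/log/httpd/ 2>/dev/null
        # files created or changed around the deletion window
        find /path/to/cgi-bin /path/to/networkdrive -newermt "today 08:00" -ls
        # anything unexpected still listening or connected
        netstat -anp | grep -E 'LISTEN|ESTABLISHED'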


  • Server Recovery from Denial of Service

    - by JMC
    I'm looking at a server that might be misconfigured to handle denial of service. The database was knocked offline during the attack, and it failed to restart itself when the attack subsided.

    Details of the attack: The attacker, either intentionally or unintentionally, sent thousands of search queries using the application's search query URL within a couple of seconds. It looks like the server was overwhelmed, and it caused the database to log the messages below.

    Server specs: 1.5 GB of dedicated memory.

    Are there any obvious misconfigurations here that I'm missing?

    mysql.log:

        121118 20:28:54 mysqld_safe Number of processes running now: 0
        121118 20:28:54 mysqld_safe mysqld restarted
        121118 20:28:55 [Warning] option 'slow_query_log': boolean value '/var/log/mysqld.slow.log' wasn't recognized. Set to OFF.
        121118 20:28:55 [Note] Plugin 'FEDERATED' is disabled.
        121118 20:28:55 InnoDB: The InnoDB memory heap is disabled
        121118 20:28:55 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121118 20:28:55 InnoDB: Compressed tables use zlib 1.2.3
        121118 20:28:55 InnoDB: Using Linux native AIO
        121118 20:28:55 InnoDB: Initializing buffer pool, size = 512.0M
        InnoDB: mmap(549453824 bytes) failed; errno 12
        121118 20:28:55 InnoDB: Completed initialization of buffer pool
        121118 20:28:55 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        121118 20:28:55 [ERROR] Plugin 'InnoDB' init function returned error.
        121118 20:28:55 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        121118 20:28:55 [ERROR] Unknown/unsupported storage engine: InnoDB
        121118 20:28:55 [ERROR] Aborting

    ulimit -a:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 13089
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 1024
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    httpd.conf:

        StartServers        10
        MinSpareServers      8
        MaxSpareServers     12
        ServerLimit        256
        MaxClients         256
        MaxRequestsPerChild 4000

    my.cnf:

        innodb_buffer_pool_size=512M
        # Increase Innodb Thread Concurrency = 2 * [numberofCPUs] + 2
        innodb_thread_concurrency=4
        # Set Table Cache
        table_cache=512
        # Set Query Cache_Size
        query_cache_size=64M
        query_cache_limit=2M
        # A sort buffer is used for optimizing sorting
        sort_buffer_size=8M
        # Log slow queries
        slow_query_log=/var/log/mysqld.slow.log
        long_query_time=2
        #performance_tweak
        join_buffer_size=2M

    php.ini:

        memory_limit = 128M
        post_max_size = 8M
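
    The errno 12 in the log is ENOMEM: under load, 256 Apache children plus MySQL exhausted the box, and on restart the 512 MB buffer pool could no longer be mapped. A sketch of one way to make the numbers fit in 1.5 GB, assuming roughly 25 MB per Apache child with mod_php (worth measuring on this workload; 256 x 25 MB alone is ~6 GB):

        # httpd.conf: cap Apache so it cannot starve MySQL
        ServerLimit        40
        MaxClients         40

        # my.cnf: leave headroom so the buffer pool can always be allocated,
        # or shrink it
        innodb_buffer_pool_size=256M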


  • mod_fcgid produces random 500 errors

    - by DmitrySemenov
    PHP 5.4.7 via mod_fcgid. When I run the site, sometimes it works and sometimes it crashes with a 500 Internal Error. This is what I see in error.log every time I run the script:

        [Mon Sep 24 18:50:43 2012] [warn] [client 68.231.194.198] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Mon Sep 24 18:50:43 2012] [error] [client 68.231.194.198] Premature end of script headers: api.php

    Any ideas?

    vhost config:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "/home/www/sites/test.com/html/development"
            ServerName test.com
            ServerAlias www.test.com
            ErrorLog "/home/www/sites/test.com/logs/error_log"
            CustomLog "/home/www/sites/test.com/logs/access_log" common
            <IfModule mod_fcgid.c>
                <Directory /home/www/sites/test.com/html/development>
                    Options +ExecCGI
                    AllowOverride All
                    AddHandler fcgid-script .php
                    FCGIWrapper /home/www/php-fcgi-scripts/php-fcgi-starter .php
                    Order allow,deny
                    Allow from all
                </Directory>
                FcgidMaxRequestLen 1073741824
            </IfModule>
        </VirtualHost>

    fcgid.conf:

        LoadModule fcgid_module modules/mod_fcgid.so
        # Use FastCGI to process .fcg .fcgi & .fpl scripts
        AddHandler fcgid-script fcg fcgi fpl
        # Sane place to put sockets and shared memory file
        FcgidIPCDir /var/run/mod_fcgid
        FcgidProcessTableFile /var/run/mod_fcgid/fcgid_shm
        IdleTimeout 300
        BusyTimeout 300
        ProcessLifeTime 7200
        IPCConnectTimeout 300
        IPCCommTimeout 7200
        PHP_Fix_Pathinfo_Enable 1

    php-fcgi-starter (the wrapper script):

        #!/bin/sh
        PHP_CGI=/usr/local/php547/bin/php-cgi
        PHP_INI=/etc/php547-fastcgi.ini
        export PHP_FCGI_TIMEOUT=1200
        #export PHP_FCGI_CHILDREN=6
        export PHP_FCGI_MAX_REQUESTS=1000
        exec $PHP_CGI -c $PHP_INI
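
    A classic cause of intermittent "Connection reset by peer" with this setup: PHP exits on its own after PHP_FCGI_MAX_REQUESTS=1000 requests, but mod_fcgid doesn't know that and may hand the dying process one more request. A sketch of the usual fix, telling mod_fcgid to retire processes at the same threshold (directive name per mod_fcgid 2.3+; older versions spell it MaxRequestsPerProcess):

        # in fcgid.conf: retire each process before PHP kills it mid-request
        FcgidMaxRequestsPerProcess 1000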


  • netkit: why can't my router 4 pc4 ping my router 1 pc1, and how can I solve this?

    - by donok
    Below I have four routers connected, but pc1 on r1 cannot ping pc4 on r4, and pc2 on r2 can't ping pc4 on r4, and vice versa. Below is a network diagram, and the configurations follow; could anyone help me make them reachable? (I can't post my diagram on Server Fault with less than 10 rep, so I posted it on Stack Overflow with the same question.)

    [diagram: connecting 4 routers]

    pc1:

        ifconfig eth0 195.11.14.5 netmask 255.255.255.0 broadcast 195.11.14.255 up
        route add default gw 195.11.14.1 dev eth0

    pc2.start:

        ifconfig eth0 200.1.1.7 netmask 255.255.255.0 broadcast 200.1.1.255 up
        route add default gw 200.1.1.1 dev eth0

    pc3:

        ifconfig eth0 195.20.14.9 netmask 255.255.255.0 broadcast 195.20.1.255 up
        route add default gw 195.20.14.1 dev eth0

    pc4:

        ifconfig eth0 200.2.1.11 netmask 255.255.255.0 broadcast 200.2.1.255 up
        route add default gw 200.2.1.1 dev eth0

    r1:

        ifconfig eth0 195.11.14.1 netmask 255.255.255.0 broadcast 195.11.14.255 up
        ifconfig eth1 100.0.0.9 netmask 255.255.255.252 broadcast 100.0.0.11 up
        route add -net 200.1.1.0 netmask 255.255.255.0 gw 100.0.0.10 dev eth1
        route add default gw 100.0.0.10

    lab.conf: if you need more of it, I'll post it up, but I think most of the info is there. Any help would be greatly appreciated, especially with making a connection between pc4 and pc1. Even if you think it does not make sense, please explain why. Thank you.
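
    Note that r1 only has an explicit route for 200.1.1.0/24 plus a default via 100.0.0.10; for a ping to return, every router on the path also needs a route back to 195.11.14.0/24. A sketch of the principle, with <next-hop-toward-r1> as a placeholder, since the inter-router addresses past r1 are not shown in the question:

        # on r4: a return route for pc1's network
        route add -net 195.11.14.0 netmask 255.255.255.0 gw <next-hop-toward-r1>
        # every intermediate router needs routes for both edge networks,
        # and IP forwarding enabled:
        echo 1 > /proc/sys/net/ipv4/ip_forward

    Tracing with traceroute from pc1 should show exactly which hop drops the packet (the last router that answers is the one missing a route).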


  • Building PHP For MacOS

    - by Eray
    I was using XAMPP and decided to uninstall it and use Mac OS X's built-in Apache and PHP modules. But while uninstalling XAMPP I accidentally deleted /usr/bin/php and other PHP CLI files. I decided to install the newest version of PHP (5.5.12) instead of rebuilding the current version (5.4.24). I downloaded and unzipped it, then executed this command as mentioned in this guide:

        ./configure '--with-apxs2=/usr/sbin/apxs' '--enable-cli' '--with-config-file-path=/etc' '--with-zlib=/usr' '--enable-bcmath' '--with-bz2=/usr' '--enable-calendar' '--disable-cgi' '--with-curl=/usr' '--enable-dba' '--enable-ndbm=/usr' '--enable-exif' '--enable-fpm' '--enable-ftp' '--with-gd' '--enable-gd-native-ttf' '--enable-mbregex' '--with-mysql=mysqlnd' '--with-mysqli=mysqlnd' '--with-pear' '--with-pdo-mysql=mysqlnd' '--with-mysql-sock=/var/mysql/mysql.sock' '--with-tidy' '--enable-wddx' '--with-xmlrpc' '--enable-zip'
        make
        make install

    When I check phpinfo(), it's still version 5.4.24. This line from my httpd.conf:

        LoadModule php5_module libexec/apache2/libphp5.so

    still points at /usr/libexec/apache2/libphp5.so from the old version, and I couldn't find libphp5.so for the new build; there is no libphp5.so file inside the modules dir. How can I use the new PHP build with Apache?

    Update: Results of php -v:

        PHP 5.5.12 (cli) (built: May 27 2014 05:17:21)
        Copyright (c) 1997-2014 The PHP Group
        Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies

    So the CLI is the new version; only Apache is still loading the old module.
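
    With --with-apxs2, make install is supposed to invoke apxs to copy the module into /usr/libexec/apache2/, but if that step failed the freshly built module should still be sitting in the source tree. A sketch of locating and installing it by hand, assuming the build tree is still present:

        ls libs/libphp5.so                    # the apxs build drops it here
        sudo cp libs/libphp5.so /usr/libexec/apache2/
        sudo apachectl restart
        php -v                                # then re-check phpinfo() in the browser

    If make install printed a permission error at the "activating module" step, running it again with sudo would be the cleaner route.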


  • Protect all XML-RPC calls with HTTP basic auth but one

    - by bodom_lx
    I set up a Django project for smartphones serving XML-RPC methods over HTTPS and using basic auth. All XML-RPC methods require a username and password. I would like to implement an XML-RPC method to provide registration to the system; obviously, this method should not require a username and password. The following is the Apache conf section responsible for basic auth:

        <Location /RPC2>
            AuthType Basic
            AuthName "Login Required"
            Require valid-user
            AuthBasicProvider wsgi
            WSGIAuthUserScript /path/to/auth.wsgi
        </Location>

    This is my auth.wsgi:

        import os
        import sys

        sys.stdout = sys.stderr
        sys.path.append('/path/to/project')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings'

        from django.contrib.auth.models import User
        from django import db

        def check_password(environ, user, password):
            """
            Authenticates apache/mod_wsgi against Django's auth database.
            """
            db.reset_queries()
            kwargs = {'username': user, 'is_active': True}
            try:
                # checks that the username is valid
                try:
                    user = User.objects.get(**kwargs)
                except User.DoesNotExist:
                    return None
                # verifies that the password is valid for the user
                if user.check_password(password):
                    return True
                else:
                    return False
            finally:
                db.connection.close()

    There are two dirty ways to achieve my aim in the current situation: have a dummy username/password to be used when trying to register to the system, or have a separate Django/XML-RPC application on another URL (e.g. /register) that is not protected by basic auth. Both of these are very ugly, as I would also like to define a standard protocol to be used for services like mine (it's an open Dynamic Ridesharing Architecture). Is there a way to unprotect a single XML-RPC call (i.e. a defined POST request) even if all XML-RPC calls over /RPC2 are protected?
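
    The obstacle is that the XML-RPC method name travels in the POST body, which Apache's Location matching never inspects, so Apache alone cannot exempt one method on /RPC2. A sketch of the second-endpoint option made explicit; /RPC2-open and register.wsgi are hypothetical names for a mount that exposes only the registration method on the Django side:

        WSGIScriptAlias /RPC2-open /path/to/project/register.wsgi
        <Location /RPC2-open>
            # no Auth* directives: this endpoint is anonymous by design
            Order allow,deny
            Allow from all
            Satisfy Any
        </Location>

    Non-regex Location sections match whole path segments, so the protected <Location /RPC2> does not cover /RPC2-open.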


  • SELinux - Allow multiple services access to same /home/dir

    - by Mike Purcell
    I currently have SELinux enabled and have been able to configure Apache to allow access to /home/src/web with a chcon command granting the httpd_sys_content_t type. But now I am trying to serve the rsyslogd.conf file from the same directory, and every time I start rsyslogd I see an entry in my audit log saying that rsyslogd was denied access. My question is: is it possible to grant two applications access to the same directory while still keeping SELinux enabled?

    Current perms on /home/src:

        drwxr-xr-x. src src unconfined_u:object_r:httpd_sys_content_t:s0 src

    Audit log message:

        type=AVC msg=audit(1349113476.272:1154): avc: denied { search } for pid=9975 comm="rsyslogd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349113476.272:1154): arch=c000003e syscall=2 success=no exit=-13 a0=7f9ef0c027f5 a1=0 a2=1b6 a3=0 items=0 ppid=9974 pid=9975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="rsyslogd" exe="/sbin/rsyslogd" subj=unconfined_u:system_r:syslogd_t:s0 key=(null)

    Edit: I came across this post, which is sort of what I am trying to accomplish. However, when I viewed the list of allowed sebool params, the only one relating to syslog was syslogd_disable_trans (SELinux Service Protection). It seems I could keep the current SELinux type on the /home/src dir and just set the syslogd_disable_trans bool to false. I wonder if there is a better approach?
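
    A narrower alternative to disabling the syslogd transition is to generate a small local policy module from exactly the recorded denials. A sketch, assuming the denials are in /var/log/audit/audit.log (the denial above is on home_root_t, i.e. /home itself, so multiple passes may be needed as new AVCs surface):

        grep rsyslogd /var/log/audit/audit.log | audit2allow -M rsyslogd_home
        semodule -i rsyslogd_home.pp
        # restart rsyslog, re-check the audit log, and repeat if needed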


  • Upstart script on CentOS 6

    - by MarcusMaximus
    I'm trying to create an Upstart script to run a Python script on startup. In theory it looks simple enough, but I just can't seem to get it to work. I'm using a skeleton script I found here and altered:

        description "Used to start python script as a service"
        author "Me <[email protected]>"

        # Stanzas
        #
        # Stanzas control when and how a process is started and stopped
        # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn

        # When to start the service
        start on runlevel [2345]

        # When to stop the service
        stop on runlevel [016]

        # Automatically restart process if crashed
        respawn

        # Essentially lets upstart know the process will detach itself to the background
        expect fork

        # Start the process
        script
            exec su nonrootuser -c "python /usr/local/scripts/script.py"
        end script

    The test script I want it to run is currently a simple Python script that runs without any issue when run from a terminal:

        #!/usr/bin/python2
        import os, sys, time

        if __name__ == "__main__":
            for i in range(10000):
                message = "shotgunUpstartTest ", i, time.asctime(), " - Username: ", os.getenv("USERNAME")
                #print message
                time.sleep(60)
                out = open("/var/log/scripts/scriptlogfile", "a")
                print >> out, message
                out.close()

    The location /var/log/scripts has permissions 777. The file /usr/local/scripts/script.py has permissions 775. The Upstart script /etc/init.d/pythonupstart.conf has permissions 755.
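
    Two things stand out: Upstart job files belong in /etc/init/, not /etc/init.d/ (which is for SysV scripts), and expect fork is wrong here, since the Python script never daemonizes itself, so Upstart ends up tracking the wrong PID. A sketch of the job with both fixed, assuming CentOS 6's Upstart 0.6.5:

        # /etc/init/pythonupstart.conf
        description "Used to start python script as a service"

        start on runlevel [2345]
        stop on runlevel [016]
        respawn

        # no 'expect fork': the process stays in the foreground
        exec su nonrootuser -c "python /usr/local/scripts/script.py"

    After moving the file, initctl reload-configuration followed by start pythonupstart should bring it up.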

