Search Results

Search found 20625 results on 825 pages for 'client'.


  • Ubuntu 10.04: OpenVZ Kernel and pure-ftpd issues on HOST (no guest setup yet)

    - by Seidr
    After compiling and installing the OpenVZ flavour of the kernel under Ubuntu 10.04, I am unable to browse to certain directories when connecting to the pure-ftpd server. The clients drop into PASSIVE mode, which is fine. This behaviour was happening before the change of kernel; however, now when I browse to certain directories the connection just gets dropped. This only happens with a few directories under one login (web specifically), whereas with another login it happens as soon as I connect.

    I've got the nf_conntrack_ftp kernel module installed (required to keep track of passive FTP connections, as I understand it, and an alias of the ip_conntrack_ftp module), but this has not alleviated the problem. This module was actually required upon initial setup of my OS to get passive FTP working correctly; however, when I compiled the OpenVZ kernel a lot of these modules were missing (iptables, conntrack etc.). I recompiled the kernel with the missing modules, but to no effect.

    I've turned the verbosity of the pure-ftpd server up, and still no clues have appeared in either syslog or the transfer log. Neither did an strace provide any clues (that I could discern, anyway), although one strange thing I notice, both in the output to the client and in the strace, is that the server does in fact probe the directory and return the number of matches - it just fails after that.

    One more thing to mention: if I FTP in using the same credentials locally, everything works fine. This suggests that the issue is either the conntrack_ftp module not functioning as expected, or a deeper networking problem. The kernel was compiled and installed following the instructions at https://help.ubuntu.com/community/OpenVZ - bar the changes to the kernel configuration (such as adding iptables as a module).

    Below is an example of the log sent to the client (under FileZilla):

        Status: Resolving address of xxxx.co.uk
        Status: Connecting to 78.46.xxx.xxx:21...
        Status: Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
        Response: 220-You are user number 4 of 10 allowed.
        Response: 220-Local time is now 08:52. Server port: 21.
        Response: 220-This is a private system - No anonymous login
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command: USER xxx
        Response: 331 User xxx OK. Password required
        Command: PASS ********
        Response: 230-User xxx has group access to: client1 sshusers
        Response: 230 OK. Current restricted directory is /
        Command: OPTS UTF8 ON
        Response: 200 OK, UTF-8 enabled
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is your current location
        Status: Directory listing successful
        Status: Retrieving directory listing...
        Command: CWD /web
        Response: 250 OK. Current directory is /web
        Command: TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command: PORT 10,0,2,30,14,143
        Response: 500 I won't open a connection to 10.0.2.30 (only to 188.220.xxx.xxx)
        Command: PASV
        Response: 227 Entering Passive Mode (78,46,79,147,234,110)
        Command: MLSD
        Response: 150 Accepted data connection
        Response: 226-ASCII
        Response: 226-Options: -a -l
        Response: 226 57 matches total
        Error: Could not read from transfer socket: ECONNRESET - Connection reset by peer
        Error: Failed to retrieve directory listing

    Any suggestions please? I'm willing to try anything!
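    [Editor's note] One avenue worth sketching, given that the PASV data connection dies right after the listing is announced: pin pure-ftpd's passive port range and open it explicitly, so transfers no longer depend on the conntrack FTP helper parsing the control channel. This is a hedged sketch, not a confirmed fix for this box; the file-per-setting paths assume the Debian/Ubuntu packaging of pure-ftpd:

        # pin the PASV data ports to a fixed range
        echo "30000 30100" > /etc/pure-ftpd/conf/PassivePortRange
        # allow that range through any local filtering, independent of nf_conntrack_ftp
        iptables -A INPUT -p tcp --dport 30000:30100 -j ACCEPT
        # reload the helper and the server
        modprobe nf_conntrack_ftp
        service pure-ftpd restart

    If transfers then succeed, the original problem is very likely the conntrack FTP helper not tracking the PASV exchange under the OpenVZ kernel.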

  • Is Apache 2.2.22 able to sustain 1,000 simultaneously connected clients?

    - by Fnux
    For an article in a newspaper, I'm benchmarking 5 different web servers (Apache2, Cherokee, Lighttpd, Monkey and Nginx). The tests consist of measuring execution times as well as various parameters, such as the number of requests served per second and the amount of RAM and CPU used, under a growing load of simultaneous clients (from 1 to 1,000 in steps of 10), each client sending 1,000,000 requests for a small fixed file, then a medium fixed file, then small dynamic content (hello.php), and finally complex dynamic content (the computation of the reimbursement of a loan).

    All the web servers are able to sustain such a load (up to 1,000 clients) except Apache2, which always stops responding when the test reaches 450 to 500 simultaneous clients. My configuration is:

        CPU: AMD FX 8150, 8 cores @ 4.2 GHz
        RAM: 32 GB
        SSD: 2 x Crucial 240 GB SATA6
        OS: Ubuntu 12.04.3 64 bit
        WS: Apache 2.2.22

    My Apache2 configuration is as follows.

    /etc/apache2/apache2.conf

        LockFile ${APACHE_LOCK_DIR}/accept.lock
        PidFile ${APACHE_PID_FILE}
        Timeout 30
        KeepAlive On
        MaxKeepAliveRequests 1000000
        KeepAliveTimeout 2
        ServerName "fnux.net"
        <IfModule mpm_prefork_module>
            StartServers 16
            MinSpareServers 16
            MaxSpareServers 16
            ServerLimit 2048
            MaxClients 1024
            MaxRequestsPerChild 0
        </IfModule>
        User ${APACHE_RUN_USER}
        Group ${APACHE_RUN_GROUP}
        AccessFileName .htaccess
        <Files ~ "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>
        DefaultType None
        HostnameLookups Off
        ErrorLog ${APACHE_LOG_DIR}/error.log
        LogLevel emerg
        Include mods-enabled/*.load
        Include mods-enabled/*.conf
        Include httpd.conf
        Include ports.conf
        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %O" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent
        Include conf.d/
        Include sites-enabled/

    /etc/apache2/ports.conf

        NameVirtualHost *:8180
        Listen 8180
        <IfModule mod_ssl.c>
            Listen 443
        </IfModule>
        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    /etc/apache2/mods-available

        <IfModule mod_fastcgi.c>
            AddHandler php5-fcgi .php
            Action php5-fcgi /cgi-bin/php5.external
            <Location "/cgi-bin/php5.external">
                Order Deny,Allow
                Deny from All
                Allow from env=REDIRECT_STATUS
            </Location>
        </IfModule>

    /etc/apache2/sites-available/default

        <VirtualHost *:8180>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/apache2
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel emerg
            ##### CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
            <IfModule mod_fastcgi.c>
                AddHandler php5-fcgi .php
                Action php5-fcgi /php5-fcgi
                Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
                FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization
            </IfModule>
        </VirtualHost>

    /etc/security/limits.conf

        * soft nofile 1000000
        * hard nofile 1000000

    So, I would truly appreciate your advice on setting up Apache2 to sustain 1,000 simultaneous clients, if this is even possible. TIA for your help. Cheers.
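    [Editor's note] A sketch of the kind of change that usually lifts this ceiling: Apache 2.2's prefork MPM dedicates one process per connection, so 1,000 concurrent keep-alive clients need 1,000 live processes, and a 32-process-limit mismatch or memory pressure shows up exactly in this 400-500 range. The threaded worker MPM handles the same concurrency with far fewer processes. This is an illustrative sizing only (it assumes no non-thread-safe modules such as mod_php are loaded; PHP is already external via FastCGI here, which helps):

        # hypothetical worker-MPM sizing for ~1,600 concurrent connections;
        # requires the worker build (apache2-mpm-worker package on Ubuntu)
        <IfModule mpm_worker_module>
            StartServers          4
            MinSpareThreads      64
            MaxSpareThreads     256
            ThreadsPerChild      25
            ServerLimit          64
            MaxClients         1600   # must be <= ServerLimit * ThreadsPerChild
            MaxRequestsPerChild   0
        </IfModule>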

  • VPN in Ubuntu fails every time

    - by fazpas
    I am trying to set up a VPN connection in Ubuntu 10.04 to use the service from relakks.com. I used the network manager to add the VPN connection, and the settings are:

        Gateway: pptp.relakks.com
        Username: user
        Password: pwd
        IPv4 Settings: Automatic (VPN)
        Advanced: MSCHAP & MSCHAPv2 checked
        Use point-to-point encryption (security: default)
        Allow BSD data compression checked
        Allow deflate data compression checked
        Use TCP header compression checked

    The connection always fails. Here is the syslog:

        Jun 27 20:11:56 desktop NetworkManager: <info> Starting VPN service 'org.freedesktop.NetworkManager.pptp'...
        Jun 27 20:11:56 desktop NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.pptp' started (org.freedesktop.NetworkManager.pptp), PID 2064
        Jun 27 20:11:56 desktop NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.pptp' just appeared, activating connections
        Jun 27 20:11:56 desktop NetworkManager: <info> VPN plugin state changed: 3
        Jun 27 20:11:56 desktop NetworkManager: <info> VPN connection 'Relakks' (Connect) reply received.
        Jun 27 20:11:56 desktop pppd[2067]: Plugin /usr/lib/pppd/2.4.5//nm-pptp-pppd-plugin.so loaded.
        Jun 27 20:11:56 desktop pppd[2067]: pppd 2.4.5 started by root, uid 0
        Jun 27 20:11:56 desktop NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp1, iface: ppp1)
        Jun 27 20:11:56 desktop NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp1, iface: ppp1): no ifupdown configuration found.
        Jun 27 20:11:56 desktop pppd[2067]: Using interface ppp1
        Jun 27 20:11:56 desktop pppd[2067]: Connect: ppp1 <--> /dev/pts/0
        Jun 27 20:11:56 desktop pptp[2071]: nm-pptp-service-2064 log[main:pptp.c:314]: The synchronous pptp option is NOT activated
        Jun 27 20:11:57 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 1 'Start-Control-Connection-Request'
        Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:739]: Received Start Control Connection Reply
        Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:773]: Client connection established.
        Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request'
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply.
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 0, peer's call ID 1024).
        Jun 27 20:11:59 desktop kernel: [ 56.564074] Inbound IN=ppp0 OUT= MAC= SRC=93.182.139.2 DST=186.110.76.26 LEN=61 TOS=0x00 PREC=0x00 TTL=52 ID=40460 DF PROTO=47
        Jun 27 20:11:59 desktop kernel: [ 56.944054] Inbound IN=ppp0 OUT= MAC= SRC=93.182.139.2 DST=186.110.76.26 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=40461 DF PROTO=47
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[pptp_read_some:pptp_ctrl.c:544]: read returned zero, peer has closed
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[callmgr_main:pptp_callmgr.c:258]: Closing connection (shutdown)
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request'
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[pptp_read_some:pptp_ctrl.c:544]: read returned zero, peer has closed
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[call_callback:pptp_callmgr.c:79]: Closing connection (call state)
        Jun 27 20:11:59 desktop pppd[2067]: Modem hangup
        Jun 27 20:11:59 desktop pppd[2067]: Connection terminated.
        Jun 27 20:11:59 desktop NetworkManager: <info> VPN plugin failed: 1
        Jun 27 20:11:59 desktop NetworkManager: SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp1, iface: ppp1)
        Jun 27 20:11:59 desktop pppd[2067]: Exit.

    Can someone identify something in the syslog? I've been googling and reading about pptp but couldn't find anything about the error "read returned zero, peer has closed".
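    [Editor's note] The "read returned zero, peer has closed" line means the PPTP control channel died right after the call was established, which often points to GRE (IP protocol 47) being dropped locally - note the kernel "Inbound ... PROTO=47" lines above, which look like firewall logging of the GRE packets. A hedged sketch of the usual checks (standard Linux module and rule names; whether they apply to this box is an assumption):

        # load the PPTP connection-tracking helper (pulls in GRE protocol tracking)
        sudo modprobe nf_conntrack_pptp
        # make sure GRE is allowed through any local iptables policy
        sudo iptables -A INPUT  -p gre -j ACCEPT
        sudo iptables -A OUTPUT -p gre -j ACCEPT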

  • Random Slow Response

    - by ARehman
    We have an ASP.NET MVC 1.0 application running on Windows Server 2008 Standard (32-bit), Dual Core Xeon (3.0 GHz), 2 GB RAM. Most of the time the application renders a response in 3-4 seconds, but sometimes users get a very late response, with delays of 40 seconds or more than a minute. It happens like this: a user browses a page, is idle for 5, 10 or 15 minutes, then tries to browse the same page or some other page. Now there is a chance that he will see a late response even though the app pool is still up and running. This can happen with any arbitrary page.

    We have tried the following, and made these observations:

    - Moved the application to a stand-alone web server.
    - App pool idle shutdown time is 60 minutes. There are no abrupt shutdowns/restarts.
    - CPU and memory don't spike. No delays in SQL queries.
    - Modified the app pool to run in classic mode. It didn't help.
    - Plugged in a custom module to log all requests that took more than 5 seconds to complete. It didn't pick up any request of interest.
    - Enabled 'Failed Request Tracing' to log all requests that take 20 or more seconds to complete. It didn't log anything.
    - Event Viewer, HTTPERR log, W3SVC logs and WAS logs don't indicate anything. HTTPERR only has '_ _ Timer_ConnectionIdle _ _' entries.
    - There is not much traffic to the server. This can happen even if only two users are active.

    Next we captured TCP/IP traffic on both a user machine and the server with Wireshark; in brief, the slowness looks like this:

    1) The browser sends a GET request for ~/User/Home/, setting up a receiving endpoint on port 'wlbs (port 2504)'. I'm not sure if the problem could be that the browser didn't handshake with the server first and assumed the last connection was still open, whereas I had browsed the same page 4 minutes earlier and performed no activity with the site after that. The HTTPERR log does have a '_ _ Timer_ConnectionIdle _ _ _' entry for my last activity with the server.
    2) The browser (I was using Chrome) waits for a response from the server, doesn't get one, then starts retransmitting the same request from the same endpoint at increasing intervals, e.g. after 8, 18, 29, 40, 62, and 92 seconds. All of these GET requests were received by the server, but the server didn't send a single packet back to the client.
    3) Seeing no response on the endpoint set up in point 1, the browser opens a new endpoint, 'optiwave-lm (port 2524)', does a handshake with the server and transmits the same request again. The server receives it, processes it, and returns a successful response.

    What happened to the earlier 6-7 requests? Were they passed on to HTTP.SYS or not? Why did Failed Request Tracing log nothing? We haven't found any clue, and the server served the same page successfully just 4 minutes earlier. Looking forward to more suggestions/solutions. -- Thanks
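    [Editor's note] Since the retransmitted GETs reach the server but never surface in IIS logging or Failed Request Tracing, it may be worth inspecting what HTTP.sys itself is holding. A hedged suggestion - these are standard Windows Server 2008 commands, but whether their output exposes the culprit here is an open question:

        rem dump HTTP.sys state: request queues, active connections, cache
        netsh http show servicestate
        rem inspect kernel-level response cache entries
        netsh http show cachestate

    A request accepted by HTTP.sys but never handed to a worker process would show up in the request-queue counts here, which would at least split the problem between HTTP.sys and the app pool.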

  • Memcache on ubuntu server lucid and ruby 1.9.1

    - by Thiago
    Hi there, I'm trying to set up a memcache server on the above setup. I'm getting the following error:

        /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:443:in `load_missing_constant': uninitialized constant MemCache (NameError)
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:80:in `const_missing_with_dependencies'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:92:in `const_missing'
        from /root/voicegateway/vendor/plugins/workling/lib/workling/clients/memcache_queue_client.rb:18:in `<class:MemcacheQueueClient>'
        from /root/voicegateway/vendor/plugins/workling/lib/workling/clients/memcache_queue_client.rb:14:in `<module:Clients>'
        from /root/voicegateway/vendor/plugins/workling/lib/workling/clients/memcache_queue_client.rb:13:in `<module:Workling>'
        from /root/voicegateway/vendor/plugins/workling/lib/workling/clients/memcache_queue_client.rb:12:in `<top (required)>'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `block in require'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
        from /root/voicegateway/vendor/plugins/workling/lib/workling/remote/runners/client_runner.rb:2:in `<top (required)>'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `block in require'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
        from /root/voicegateway/vendor/plugins/workling/lib/workling/remote/runners/starling_runner.rb:1:in `<top (required)>'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `block in require'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
        from /root/voicegateway/vendor/plugins/workling/lib/workling/remote.rb:3:in `<top (required)>'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:380:in `load'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:380:in `block in load_file'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:379:in `load_file'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:259:in `require_or_load'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:425:in `load_missing_constant'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:80:in `const_missing_with_dependencies'
        from /root/voicegateway/config/environments/development.rb:20:in `block in load_environment'
        from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/initializer.rb:386:in `eval'
        from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/initializer.rb:386:in `block in load_environment'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/core_ext/kernel/reporting.rb:11:in `silence_warnings'
        from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/initializer.rb:379:in `load_environment'
        from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/initializer.rb:137:in `process'
        from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/initializer.rb:113:in `run'
        from /root/voicegateway/config/environment.rb:9:in `<top (required)>'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `block in require'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in'
        from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
        from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/commands/server.rb:84:in `<top (required)>'
        from ./server:3:in `require'
        from ./server:3:in `<main>'

    But memcache-client 1.8.3 is on the gem list. What's the problem?
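    [Editor's note] The NameError means the MemCache constant is not defined at the moment Workling's MemcacheQueueClient is loaded - having the gem installed is not enough, it has to be required, and it ships as memcache-client but is required as memcache. A hedged sketch of the usual Rails 2.3 fix (the version constraint is illustrative):

        # config/environment.rb, inside the Rails::Initializer.run block -
        # loads memcache-client (which defines MemCache) before plugins initialize
        config.gem 'memcache-client', :lib => 'memcache', :version => '>= 1.8.3'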

  • Nginx + PHP - No input file specified for 1 server block. Other server block works fine

    - by F21
    I am running Ubuntu Desktop 12.04 with nginx 1.2.6. PHP is PHP-FPM 5.4.9. This is the relevant part of my nginx.conf:

        http {
            include       mime.types;
            default_type  application/octet-stream;
            sendfile      on;
            keepalive_timeout 65;

            server {
                server_name testapp.com;
                root /www/app/www/;
                index index.php index.html index.htm;
                location ~ \.php$ {
                    fastcgi_intercept_errors on;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }
            }

            server {
                listen 80 default_server;
                root /www;
                index index.html index.php;
                location ~ \.php$ {
                    fastcgi_intercept_errors on;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }
            }
        }

    Relevant bits from php-fpm.conf:

        ; Chroot to this directory at the start. This value must be defined as an
        ; absolute path. When this value is not set, chroot is not used.
        ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one
        ; of its subdirectories. If the pool prefix is not set, the global prefix
        ; will be used instead.
        ; Note: chrooting is a great security feature and should be used whenever
        ; possible. However, all PHP paths will be relative to the chroot
        ; (error_log, sessions.save_path, ...).
        ; Default Value: not set
        ;chroot =

        ; Chdir to this directory at the start.
        ; Note: relative path can be used.
        ; Default Value: current directory or / when chroot
        chdir = /www

    In my hosts file, I point 2 domains, testapp.com and test.com, at 127.0.0.1. My web files are all stored in /www. With the above settings, if I visit test.com/phpinfo.php or test.com/app/www, everything works as expected and I get output from PHP. However, if I visit testapp.com, I get the dreaded "No input file specified." error.

    So, at this point, I pull out the log files and have a look:

        2012/12/19 16:00:53 [error] 12183#0: *17 FastCGI sent in stderr: "Unable to open primary script: /www/app/www/index.php (No such file or directory)" while reading response header from upstream, client: 127.0.0.1, server: testapp.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "testapp.com"

    This baffles me, because I have checked again and again and /www/app/www/index.php definitely exists! This is also validated by the fact that test.com/app/www/index.php works, which means the file exists and the permissions are correct. Why is this happening, and what is the root cause of things breaking for just the testapp.com v-host?

    An update on my investigation:

    - I have commented out chroot and chdir in php-fpm.conf to narrow down the problem.
    - If I remove the location ~ \.php$ block for testapp.com, then nginx sends me a bin file containing the PHP source. This means that on nginx's side, things are fine; something must be mangling the file path when it is passed to PHP-FPM.

    Having said that, it is quite strange that the default_server v-host works fine with its root of /www, while things just won't work for the testapp.com v-host, whose root is /www/app/www.
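    [Editor's note] One way to take nginx out of the equation is to speak FastCGI to PHP-FPM directly and see what it does with the exact path nginx claims is missing. A hedged sketch using the cgi-fcgi utility from the FastCGI devkit (package name varies by distro; availability is an assumption):

        # ask PHP-FPM directly for the file the error log names
        SCRIPT_NAME=/index.php \
        SCRIPT_FILENAME=/www/app/www/index.php \
        REQUEST_METHOD=GET \
        cgi-fcgi -bind -connect 127.0.0.1:9000

    If this also returns "No input file specified", the mangling is on the FPM side (chroot/chdir or open_basedir); if it renders the page, the problem is in what nginx sends.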

  • nginx - 403 Forbidden

    - by michell90
    I'm having trouble getting aliases to work correctly on nginx. When I try to access the aliases /pma and /mda (see secure.example.com.conf), I get a 403 Forbidden, but the base URL works correctly. I read a lot of posts but nothing helped, so here I am.

    nginx and php-fpm are running as www-data:www-data, and the permissions on the directories are set to:

        drwxrwsr-x+  5 www-data www-data 4.0K Dec  5 22:48 ./
        drwxr-xr-x.  3 root     root     4.0K Dec  4 22:50 ../
        drwxrwsr-x+  2 www-data www-data 4.0K Dec  5 13:10 mda.example.com/
        drwxrwsr-x+ 11 www-data www-data 4.0K Dec  5 10:34 pma.example.com/
        drwxrwsr-x+  3 www-data www-data 4.0K Dec  5 11:49 www.example.com/
        lrwxrwxrwx.  1 www-data www-data   18 Dec  5 09:56 secure.example.com -> www.example.com/

    I'm sorry for the bulk, but I thought better too much than too little. Here are the configuration files.

    /etc/nginx/nginx.conf

        user www-data www-data;
        worker_processes 1;
        error_log /var/log/nginx/error.log;
        #error_log /var/log/nginx/error.log notice;
        #error_log /var/log/nginx/error.log info;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            sendfile on;
            keepalive_timeout 65;
            include /etc/nginx/sites-enabled/*;
        }

    /etc/nginx/sites-enabled/secure.example.com

        server {
            listen 80;
            server_name secure.example.com;
            return 301 https://$host$request_uri;
        }
        server {
            listen 443;
            server_name secure.example.com;
            access_log /var/log/nginx/secure.example.com.access.log;
            error_log /var/log/nginx/secure.example.com.error.log;
            root /srv/http/secure.example.com;
            include /etc/nginx/ssl/secure.example.com.conf;
            include /etc/nginx/conf.d/index.conf;
            include /etc/nginx/conf.d/php-ssl.conf;
            autoindex off;
            location /pma/ {
                alias /srv/http/pma.example.com;
            }
            location /mda/ {
                alias /srv/http/mda.example.com;
            }
        }

    /etc/nginx/ssl/secure.example.com.conf

        ssl on;
        ssl_certificate /etc/nginx/ssl/secure.example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/secure.example.com.key;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;

    /etc/nginx/conf.d/index.conf

        index index.php index.html index.htm;

    /etc/nginx/conf.d/php-ssl.conf

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param HTTPS on;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            include fastcgi_params;
        }

    /var/log/nginx/secure.example.com.error.log

        2013/12/05 22:49:04 [error] 29291#0: *2 directory index of "/srv/http/pma.example.com" is forbidden, client: 176.199.78.88, server: secure.example.com, request: "GET /pma/ HTTP/1.1", host: "secure.example.com"

    EDIT: I forgot to mention, I'm running CentOS 6.4 x86_64 and nginx 1.0.15. Thanks in advance!
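    [Editor's note] A common cause of exactly this 403 with alias: when a location prefix ends in a slash, the alias path should too. Without the trailing slash, a request for /pma/ maps to "/srv/http/pma.example.com" (no slash, as the error log shows) and index-file resolution misfires, so nginx falls back to a forbidden directory index. A hedged sketch, assuming that is the issue here:

        location /pma/ {
            alias /srv/http/pma.example.com/;   # trailing slash mirrors the location
        }
        location /mda/ {
            alias /srv/http/mda.example.com/;
        }

    Note also that the shared php-ssl.conf location resolves files via $request_filename, which does respect alias; but its try_files $uri test resolves against the server root, so PHP under the aliases may need its own location block.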

  • Moved DNS and Email Hosting, Now Can't Send/Receive To/From Domains Hosted on Previous Host

    - by maxfinis
    Our company had 4 domains whose email and DNS were hosted by one company, and then we moved the email and DNS hosting for 3 of the 4 domains to a new company. Now, the 3 domains that were moved can't send or receive email to and from the one domain still left on the old server. All other email functions work fine for all 4 domains. There are no bouncebacks, error messages, or emails stuck in a queue, and no evidence of these missing emails hitting the new servers.

    The new hosting company confirms that everything is fine on their end, and assures me that it's most likely an old zone file remaining on the old nameserver: email sent from the old host is routed to what that server believes is still the authoritative nameserver, and because the old zone file's MX records still contain the old resource, the requests never leave the old nameserver to do a fresh lookup for the real (new) authoritative nameserver. The compounding problem is that the old company is rather inept and doesn't seem to have the technical expertise to identify the problem, much less fix it. (I know, I know.)

    Is the problem truly that this old zone file just needs to be deleted from the old company's nameserver? If so, what's the best way for me to describe this to them? If not, what do you think could be the issue? Any help is much appreciated. I'm not in IT, so all this is new to me. I know it seems weird for me (the client) to have to do this legwork, but I just want to get this resolved. Here's what I've done.

    Ran dig to verify that the old server's MX records still point to the old authoritative server, instead of doing a fresh lookup:

        ~$ dig @old.nameserver.com domainthatwasmoved.com mx
        ; <<>> DiG 9.6.0-APPLE-P2 <<>> @old.nameserver.com domainThatWasMoved.com mx
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61227
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

        ;; QUESTION SECTION:
        ;domainthatwasmoved.com.        IN      MX

        ;; ANSWER SECTION:
        domainthatwasmoved.com. 3600    IN      MX      10 mail.oldmailserver.com.

        ;; ADDITIONAL SECTION:
        mail.oldmailserver.com. 3600    IN      A       65.198.191.5

        ;; Query time: 29 msec
        ;; SERVER: 65.198.191.5#53(65.198.191.5)
        ;; WHEN: Sun Dec 26 16:59:22 2010
        ;; MSG SIZE  rcvd: 88

    Ran dig to try to see where the new hosting company's servers look when email is sent from the 3 domains that were moved, and got refused:

        ~$ dig @new.nameserver.net domainStillAtOldHost.com mx
        ; <<>> DiG 9.6.0-APPLE-P2 <<>> @new.nameserver.net domainStillAtOldHost.com mx
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 31599
        ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; WARNING: recursion requested but not available

        ;; QUESTION SECTION:
        ;domainStillAtOldHost.com.      IN      MX

        ;; Query time: 31 msec
        ;; SERVER: 216.201.128.10#53(216.201.128.10)
        ;; WHEN: Sun Dec 26 17:00:14 2010
        ;; MSG SIZE  rcvd: 34
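    [Editor's note] To confirm the stale-zone theory before going back to the old host, you can ask their nameserver non-recursively whether it still believes it is authoritative for a moved domain. A hedged example using the placeholder names above:

        # an answer carrying the 'aa' (authoritative answer) flag here means the
        # old server still holds a zone file for the moved domain and will never
        # look elsewhere for its MX records
        dig @old.nameserver.com domainthatwasmoved.com soa +norecurse

    The first dig above already shows the 'aa' flag in its answer, which is consistent with exactly the stale-zone explanation the new host gave.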

  • SharePoint Upgrade Global Nav Quirks?

    - by elorg
    We're working on a parallel install/upgrade of SharePoint. The client has WSS 2003 on some old hardware. We've installed MOSS 2007 in a medium farm environment. They want to use this as an opportunity not just to upgrade and use the new features, but also to better organize their content and categorize it between different site collections. To accommodate that, we've created a few site collections per their specifications in the new environment, and when we ran an upgrade test run we ran into a few... quirks.

    We made a backup of the old content database, copied it over to the new environment and restored it as a new database. We created a new web app and attached the migrated data to do an in-place upgrade in this new "test" area. This seems pretty standard - no issues. We have to do a little bit of cleanup (e.g. reset pages to site definition, reset themes, inherit the global nav / top link bar, etc.). Once that's done, we're using stsadm export/import (sketched below) to copy the individual sites over to their ultimate destinations in the various site collections. So far so good.

    But then we ran into one particular site that has a link to an .aspx page in the top link bar in WSS 2003 that's not behaving properly after the upgrade. It's just a link to a "dashboard" .aspx page in a doc library - nothing special. It doesn't seem to matter what we do, or in what order we do it (in the "test" web app, in the destination web app, or both). In the end, this ONE site will not allow us to create a link/tab in the global nav. It can inherit the global nav just fine. We can break the inheritance just fine. But if we want to manually add a link to the top link bar, we go through the steps that I've done 1,000x before, click OK - and the tab never appears. It doesn't matter whether the link points to a page within the site itself or to Google. We can migrate other sites into the same site collection and add a tab without issue. If we migrate this quirky site over to another site collection, we run into the same issue. Yet in the "test" web app that we're using to upgrade the data, we can add a tab? If we add the tab before we export/import to the final destination, the tab is lost during the process?

    Has anyone run into anything like this? Any ideas? I've tried every combination of everything I can think of and nothing works. Unless we can figure out how to get this to work, we're going to just add this tab to the global nav for the entire site collection and inherit it for this site (but that adds the link to all of the sites that inherit it, which is both a pro and a con for them).
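    [Editor's note] For reference, the stsadm export/import mentioned above takes roughly this form; URLs and file names are placeholders, and the exact switches in use on this farm are an assumption:

        stsadm -o export -url http://oldserver/sites/source -filename site.cmp -includeusersecurity
        stsadm -o import -url http://newserver/sites/dest   -filename site.cmp -includeusersecurity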

  • Postfix SASL login failing: no mechanism found

    - by Nat45928
    Following the guide at http://flurdy.com/docs/postfix/ with Postfix, Courier, MySQL and SASL gave me a mail server whose IMAP functionality works fine, but when I log in to send a message using the same user id and password that connect to the IMAP server, the SMTP server rejects my login. If I do not specify a login for the outgoing mail server, the message is sent just fine. The errors in Postfix's log are:

        Jul  6 17:26:10 Sj-Linux postfix/smtpd[19139]: connect from unknown[10.0.0.50]
        Jul  6 17:26:10 Sj-Linux postfix/smtpd[19139]: warning: SASL authentication failure: unable to canonify user and get auxprops
        Jul  6 17:26:10 Sj-Linux postfix/smtpd[19139]: warning: unknown[10.0.0.50]: SASL DIGEST-MD5 authentication failed: no mechanism available
        Jul  6 17:26:10 Sj-Linux postfix/smtpd[19139]: warning: unknown[10.0.0.50]: SASL LOGIN authentication failed: no mechanism available

    I've checked all the usernames and passwords for MySQL. What could be going wrong?

    Edit: here is some other information. Installed libraries for Postfix, Courier and SASL:

        aptitude install postfix postfix-mysql
        aptitude install libsasl2-modules libsasl2-modules-sql libgsasl7 libauthen-sasl-cyrus-perl sasl2-bin libpam-mysql
        aptitude install courier-base courier-authdaemon courier-authlib-mysql courier-imap courier-imap-ssl courier-ssl

    And here is my /etc/postfix/main.cf:

        myorigin = domain.com
        smtpd_banner = $myhostname ESMTP $mail_name
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        #myhostname = my hostname
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        local_recipient_maps =
        mydestination =
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        mynetworks_style = host
        # how long if undelivered before sending warning update to sender
        delay_warning_time = 4h
        # will it be a permanent error or temporary
        unknown_local_recipient_reject_code = 450
        # how long to keep message on queue before return as failed.
        # some have 3 days, I have 16 days as I am backup server for some people
        # whom go on holiday with their server switched off.
        maximal_queue_lifetime = 7d
        # max and min time in seconds between retries if connection failed
        minimal_backoff_time = 1000s
        maximal_backoff_time = 8000s
        # how long to wait when servers connect before receiving rest of data
        smtp_helo_timeout = 60s
        # how many address can be used in one message.
        # effective stopper to mass spammers, accidental copy in whole address list
        # but may restrict intentional mail shots.
        smtpd_recipient_limit = 16
        # how many error before back off.
        smtpd_soft_error_limit = 3
        # how many max errors before blocking it.
        smtpd_hard_error_limit = 12
        # Requirements for the HELO statement
        smtpd_helo_restrictions = permit_mynetworks, permit
        # Requirements for the sender details
        smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit
        # Requirements for the connecting server
        smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org
        # Requirement for the recipient address
        smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit
        smtpd_data_restrictions = reject_unauth_pipelining
        # require proper helo at connections
        smtpd_helo_required = yes
        # waste spammers time before rejecting them
        smtpd_delay_reject = yes
        disable_vrfy_command = yes
        # not sure of the difference of the next two
        # but they are needed for local aliasing
        alias_maps = hash:/etc/postfix/aliases
        alias_database = hash:/etc/postfix/aliases
        # this specifies where the virtual mailbox folders will be located
        virtual_mailbox_base = /var/spool/mail/virtual
        # this is for the mailbox location for each user
        virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf
        # and this is for aliases
        virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf
        # and this is for domain lookups
        virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf
        # this is how to connect to the domains (all virtual, but the option is there)
        # not used yet
        # transport_maps = mysql:/etc/postfix/mysql_transport.cf
        virtual_uid_maps = static:5000
        virtual_gid_maps = static:5000
        # SASL
        smtpd_sasl_auth_enable = yes
        # If your potential clients use Outlook Express or other older clients
        # this needs to be set to yes
        broken_sasl_auth_clients = yes
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_local_domain =
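    [Editor's note] "unable to canonify user and get auxprops" comes from Cyrus SASL, not Postfix, and usually means smtpd's SASL configuration file is missing or unreadable from Postfix's chroot, so no auth mechanism can be backed. A hedged sketch of the kind of file the flurdy setup expects (the path is the Debian/Ubuntu convention; all credentials and the SQL query are placeholders that must match your actual schema):

        # /etc/postfix/sasl/smtpd.conf -- Cyrus SASL config for smtpd
        pwcheck_method: auxprop
        auxprop_plugin: sql
        # DIGEST-MD5/CRAM-MD5 require plaintext passwords in the DB;
        # with encrypted passwords, restrict to plain and login
        mech_list: plain login
        sql_engine: mysql
        sql_hostnames: 127.0.0.1
        sql_user: mail
        sql_passwd: secret
        sql_database: maildb
        sql_select: select crypt from users where id = '%u@%r'

    Note the log shows the client offering DIGEST-MD5 and LOGIN; if mech_list does not include a mechanism the backend can actually satisfy, smtpd reports exactly "no mechanism available".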

  • Strange end-user experience with Liferay, Glassfish and Apache on Red Hat

    - by Pete Helgren
    I've tried multiple forums to get to the bottom of this; I hope I can get some direction here. This is the stack I am working with:

        Red Hat Enterprise Linux Server release 5.6 (Tikanga)
        Liferay 6.0.6 on Glassfish 3.0.1
        MySQL 5.0.77
        Apache 2.2.3

    The Liferay portal provides a variety of portlets to end users: static content (web pages), static resources (primarily pdf and mp3 files 1 MB - 80 MB in size), file upload and download (primarily 40-60 MB mp3 files), and online streaming of those MP3 files.

    Here is the strange end-user experience. Under normal load - 20-30 users uploading, downloading or streaming files and 20-30 accessing static content (some of it downloads) - we see the following:

    1) Clicking a link triggers the download of a portion of an MP3 (the portion is a few seconds long).
    2) Clicking a link triggers the download of the page content rather than rendering it.
    3) Clicking a link causes the page to dump binary data to the end user rather than the expected content.
    4) Clicking a link returns the text of a javascript file rather than rendering the page.

    Each occurrence is totally random (or appears so). Sometimes it works, sometimes it doesn't. It seems to have no relation to browser or client OS. The strange events seem to occur much more frequently over an SSL connection than over regular http.

    Apache serves as a (reverse) proxy only; it basically passes all requests through to Glassfish, and no static content is served by Apache. We rebuilt the entire stack from scratch and redeployed the portlet wars, and still have the same issues. Liferay is running as a single server (not clustered). We disabled mod_cache in Apache. The problems become more frequent as the server load grows.

    This morning the load is pretty light and we are seeing few problems, but use of the site will grow, particularly tonight around 9pm CST through Wednesday morning. You could try the site (http://preview.bsfinternational.org) during those times and I would expect that you might experience one of the weirdnesses as you randomly click links on the site (https is invoked only when signed in). Again, https seems to exacerbate the issue.

    This seems very much like a caching issue, but I don't know where in the stack to start peeling the onion. Apache? Liferay? Glassfish? MySQL? Maybe even Red Hat? We are stumped, and most forums we have posted to (Liferay and Glassfish) have returned very few suggestions. I just need an idea of where to start looking. I understand that we could have a portlet

    EDIT: Opening the files that download rather than render in a hex editor, we see that the first 4000 characters are "junk" and then the normal "HTTP/1.1 ..." header is seen. So something is dumping a jumble of characters up to offset 4000 (when viewing it in a hex editor). Perhaps a clue? Ideas?
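    [Editor's note] One low-risk experiment on the Apache side: response-mixing symptoms through mod_proxy are sometimes tied to keep-alive and HTTP/1.1 handling between the proxy and the backend, and mod_proxy honours a couple of documented per-request environment variables that disable those behaviours. This is a hedged sketch only; whether it changes anything in this particular stack is an open question:

        # in the <VirtualHost> that fronts Glassfish
        SetEnv force-proxy-request-1.0 1
        SetEnv proxy-nokeepalive 1

    If the corruption disappears with keep-alive off, the fault is in connection reuse between Apache 2.2.3 and Glassfish rather than in Liferay itself.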

  • DNS Server Behind NAT

    - by Bryan
    I've got a BIND 9 DNS server sitting behind a NAT firewall; assume the Internet-facing IP is 1.2.3.4. There are no restrictions on outgoing traffic, and port 53 (TCP/UDP) is forwarded from 1.2.3.4 to the internal DNS server (10.0.0.1). There are no iptables rules on either the VPS or the internal BIND 9 server.

    From a remote Linux VPS located elsewhere on the internet, nslookup works fine:

        # nslookup foo.example.com 1.2.3.4
        Server:   1.2.3.4
        Address:  1.2.3.4#53

        Name: foo.example.com
        Address: 9.9.9.9

    However, when using the host command on the remote VPS, I receive the following output:

        # host foo.example.com 1.2.3.4
        ;; reply from unexpected source: 1.2.3.4#13731, expected 1.2.3.4#53
        ;; reply from unexpected source: 1.2.3.4#13731, expected 1.2.3.4#53
        ;; connection timed out; no servers could be reached.

    From the VPS, I can establish a connection (using telnet) to 1.2.3.4:53. From the internal DNS server (10.0.0.1), the host command works fine:

        # host foo.example.com 127.0.0.1
        Using domain server:
        Name: 127.0.0.1
        Address: 127.0.0.1#53
        Aliases:

        foo.example.com has address 9.9.9.9

    Any suggestions as to why the host command on my VPS is complaining about the reply coming back from another port, and what can I do to fix this?

    Further info - from a Windows host external to the network:

        >nslookup foo.example.com 1.2.3.4
        DNS request timed out.
            timeout was 2 seconds
        Server:  UnKnown
        Address: 1.2.3.4
        DNS request timed out.
            timeout was 2 seconds
        DNS request timed out.
            timeout was 2 seconds
        DNS request timed out.
            timeout was 2 seconds
        DNS request timed out.
            timeout was 2 seconds
        *** Request to UnKnown timed-out

    This is a default install of BIND from Ubuntu 12.04 LTS, with around 11 zones configured.

        $ named -v
        BIND 9.8.1-P1

    TCP dump (filtered) from the internal DNS server:

        20:36:29.175701 IP pc.external.com.57226 > dns.example.com.domain: 1+ PTR? 4.3.2.1.in-addr.arpa. (45)
        20:36:29.175948 IP dns.example.com.domain > pc.external.com.57226: 1 Refused- 0/0/0 (45)
        20:36:31.179786 IP pc.external.com.57227 > dns.example.com.domain: 2+[|domain]
        20:36:31.179960 IP dns.example.com.domain > pc.external.com.57227: 2 Refused-[|domain]
        20:36:33.180653 IP pc.external.com.57228 > dns.example.com.domain: 3+[|domain]
        20:36:33.180906 IP dns.example.com.domain > pc.external.com.57228: 3 Refused-[|domain]
        20:36:35.185182 IP pc.external.com.57229 > dns.example.com.domain: 4+ A? foo.example.com. (45)
        20:36:35.185362 IP dns.example.com.domain > pc.external.com.57229: 4*- 1/1/1 (95)
        20:36:37.182844 IP pc.external.com.57230 > dns.example.com.domain: 5+ AAAA? foo.example.com. (45)
        20:36:37.182991 IP dns.example.com.domain > pc.external.com.57230: 5*- 0/1/0 (119)

    TCP dump from the client during a query:

        21:24:52.054374 IP pc.external.com.43845 > dns.example.com.53: 6142+ A? foo.example.com. (45)
        21:24:52.104694 IP dns.example.com.29242 > pc.external.com.43845: UDP, length 95
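    [Editor's note] "reply from unexpected source" means the reply UDP datagram left the NAT with a rewritten source port - visible in the last capture, where the reply leaves from port 29242 instead of 53 - i.e. it did not traverse the same conntrack mapping as the query. nslookup tolerates this; host and most real resolvers do not. A hedged sketch of an explicit DNAT/SNAT pair that keeps replies on the original mapping (rule placement and whether an existing forwarding rule conflicts are assumptions about this firewall):

        # inbound: forward queries for the public IP to the internal server
        iptables -t nat -A PREROUTING  -d 1.2.3.4  -p udp --dport 53 -j DNAT --to-destination 10.0.0.1
        # outbound: pin replies from the DNS server to public IP, source port 53
        iptables -t nat -A POSTROUTING -s 10.0.0.1 -p udp --sport 53 -j SNAT --to-source 1.2.3.4:53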

  • How to redirect http requests to https (nginx)

    - by spuder
    There appear to be many questions and guides out there on setting up nginx to redirect http requests to https. Many are outdated, or just flat-out wrong. My config:

        # MANAGED BY PUPPET
        upstream gitlab {
            server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
        }

        # setup server with or without https depending on gitlab::gitlab_ssl variable
        server {
            listen *:80;
            server_name gitlab.localdomain;
            server_tokens off;
            root /nowhere;
            rewrite ^ https://$server_name$request_uri permanent;
        }

        server {
            listen *:443 ssl default_server;
            server_name gitlab.localdomain;
            server_tokens off;
            root /home/git/gitlab/public;
            ssl on;
            ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
            ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
            ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
            ssl_ciphers AES:HIGH:!ADH:!MDF;
            ssl_prefer_server_ciphers on;
            # individual nginx logs for this gitlab vhost
            access_log /var/log/nginx/gitlab_access.log;
            error_log /var/log/nginx/gitlab_error.log;
            location / {
                # serve static files from defined root folder;
                # @gitlab is a named location for the upstream fallback, see below
                try_files $uri $uri/index.html $uri.html @gitlab;
            }
            # if a file which is not found in the root folder is requested,
            # then the proxy passes the request to the upstream (gitlab puma)
            location @gitlab {
                proxy_read_timeout 300;    # https://github.com/gitlabhq/gitlabhq/issues/694
                proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
                proxy_redirect off;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header X-Forwarded-Ssl on;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_pass http://gitlab;
            }
        }

    I've restarted after every configuration change, and yet I still only get the 'Welcome to nginx' page when visiting http://192.168.33.10, whereas https://192.168.33.10 works perfectly. Why will nginx still not redirect http requests to https? I've also tried the following configurations:

        listen *:80;
        server_name <%= @fqdn %>;
        #root /nowhere;
        #rewrite ^ https://$server_name$request_uri? permanent;
        #rewrite ^ https://$server_name$request_uri permanent;
        #return 301 https://$server_name$request_uri;
        #return 301 http://$server_name$request_uri;
        #return 301 http://192.168.33.10$request_uri;
        return 301 http://$host$request_uri;

    The logs:

        tailf /var/log/nginx/access.log
        192.168.33.1 - - [22/Oct/2013:03:41:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"
        192.168.33.1 - - [22/Oct/2013:03:44:43 +0000] "GET / HTTP/1.1" 200 133 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"

        tailf /var/log/nginx/gitlab_error.log
        2013/10/22 02:29:14 [crit] 27226#0: *1 connect() to unix:/home/git/gitlab/tmp/sockets/gitlab.socket failed (2: No such file or directory) while connecting to upstream, client: 192.168.33.1, server: gitlab.localdomain, request: "GET / HTTP/1.1", upstream: "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket:/", host: "192.168.33.10"

    Resources:

    - http://wiki.nginx.org/Pitfalls
    - How to make nginx redirect
    - How to force or redirect to SSL in nginx?
    - nginx ssl redirect
    - Nginx & Https Redirection
    - https://www.tinywp.in/301-redirect-wordpress/
    - How to force or redirect to SSL in nginx?
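    [Editor's note] One detail that would produce exactly this behaviour: the tests use the bare IP (http://192.168.33.10), which does not match server_name gitlab.localdomain, so nginx falls back to the default server for port 80 - on Ubuntu, the stock 'Welcome to nginx' site - and the redirect block is never consulted. A hedged sketch, assuming no other vhost already claims default_server on port 80 (and that the distribution's default site is disabled):

        server {
            listen 80 default_server;          # catch requests by IP as well
            server_name gitlab.localdomain;
            return 301 https://$host$request_uri;
        }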

  • Samba with remote LDAP authentication doesn't see users properly

    - by LucasBr
    I'm trying to set up a Samba server authenticated against a remote LDAP server, and I'm having some problems that I can't figure out how to solve. getent passwd on the Samba server shows all the users on the LDAP server, but when I try to access \\SAMBASERVER from my Windows box, I get this in /var/log/samba/log.mywindowsbox:

        <...snip...>
        [2012/10/19 13:05:22.449684, 2] smbd/sesssetup.c:1413(setup_new_vc_session)
          setup_new_vc_session: New VC == 0, if NT4.x compatible we would close all old resources.
        [2012/10/19 13:05:22.449692, 3] smbd/sesssetup.c:1212(reply_sesssetup_and_X_spnego)
          Doing spnego session setup
        [2012/10/19 13:05:22.449701, 3] smbd/sesssetup.c:1254(reply_sesssetup_and_X_spnego)
          NativeOS=[] NativeLanMan=[] PrimaryDomain=[]
        [2012/10/19 13:05:22.449717, 3] libsmb/ntlmssp.c:747(ntlmssp_server_auth)
          Got user=[lucas] domain=[BUSINESS] workstation=[MYWINDOWSBOX] len1=24 len2=24
        [2012/10/19 13:05:22.449747, 3] auth/auth.c:216(check_ntlm_password)
          check_ntlm_password: Checking password for unmapped user [BUSINESS]\[lucas]@[MYWINDOWSBOX] with the new password interface
        [2012/10/19 13:05:22.449759, 3] auth/auth.c:219(check_ntlm_password)
          check_ntlm_password: mapped user is: [SAMBASERVER]\[lucas]@[MYWINDOWSBOX]
        [2012/10/19 13:05:22.449773, 3] smbd/sec_ctx.c:210(push_sec_ctx)
          push_sec_ctx(0, 0) : sec_ctx_stack_ndx = 1
        [2012/10/19 13:05:22.449783, 3] smbd/uid.c:429(push_conn_ctx)
          push_conn_ctx(0) : conn_ctx_stack_ndx = 0
        [2012/10/19 13:05:22.449791, 3] smbd/sec_ctx.c:310(set_sec_ctx)
          setting sec ctx (0, 0) - sec_ctx_stack_ndx = 1
        [2012/10/19 13:05:22.449922, 2] lib/smbldap.c:950(smbldap_open_connection)
          smbldap_open_connection: connection opened
        [2012/10/19 13:05:23.001517, 3] lib/smbldap.c:1166(smbldap_connect_system)
          ldap_connect_system: successful connection to the LDAP server
        [2012/10/19 13:05:23.007713, 3] smbd/sec_ctx.c:418(pop_sec_ctx)
          pop_sec_ctx (0, 0) - sec_ctx_stack_ndx = 0
        [2012/10/19 13:05:23.007733, 3] auth/auth_sam.c:399(check_sam_security)
          check_sam_security: Couldn't find user 'lucas' in passdb.
        [2012/10/19 13:05:23.007743, 2] auth/auth.c:314(check_ntlm_password)
          check_ntlm_password: Authentication for user [lucas] -> [lucas] FAILED with error NT_STATUS_NO_SUCH_USER
        [2012/10/19 13:05:23.007760, 3] smbd/error.c:80(error_packet_set)
          error packet at smbd/sesssetup.c(111) cmd=115 (SMBsesssetupX) NT_STATUS_LOGON_FAILURE
        [2012/10/19 13:05:23.010469, 3] smbd/process.c:1489(process_smb)
          Transaction 3 of length 142 (0 toread)
        <...snip...>

    My /etc/samba/smb.conf file follows:

        [global]
        dos charset = 850
        unix charset = LOCALE
        workgroup = BUSINESS
        netbios name = SAMBASERVER
        bind interfaces only = true
        interfaces = lo eth0 eth1
        smb ports = 139
        hosts deny = All
        hosts allow = 192.168.78. 192.168.255. 127.0.0.1 10.149.122. 192.168.0.
        name resolve order = wins bcast hosts
        log level = 3
        syslog = 0
        log file = /var/log/samba/log.%m
        max log size = 100000
        domain logons = No
        wins support = Yes
        wins proxy = No
        client ntlmv2 auth = Yes
        lanman auth = Yes
        ntlm auth = Yes
        dns proxy = Yes
        time server = Yes
        security = user
        encrypt passwords = Yes
        obey pam restrictions = Yes
        ldap password sync = Yes
        unix password sync = Yes
        passdb backend = ldapsam:"ldap://192.168.78.206"
        ldap ssl = off
        ldap admin dn = uid=root,ou=Users,dc=business,dc=intranet
        ldap suffix =
        ldap group suffix = ou=Groups
        ldap user suffix = ou=Users
        ldap machine suffix = ou=Computers
        ldap idmap suffix = ou=Idmap
        ldap delete dn = Yes
        add user script = /usr/sbin/smbldap-useradd -m "%u"
        delete user script = /usr/sbin/smbldap-userdel "%u"
        add group script = /usr/sbin/smbldap-groupadd -p "%g"
        delete group script = /usr/sbin/smbldap-groupdel "%g"
        add user to group script = /usr/sbin/smbldap-groupmod -m "%u" "%g"
        delete user from group script = /usr/sbin/smbldap-groupmod -x "%u" "%g"
        set primary group script = /usr/sbin/smbldap-usermod -g "%g" "%u"
        add machine script = /usr/sbin/smbldap-useradd -W -t5 "%u"
        idmap backend = ldap:"ldap://192.168.78.206"
        idmap uid = 16777216-33554431
        idmap gid = 16777216-33554431
        load printers = No
        printcap name = /dev/null
        map acl inherit = Yes
        map untrusted to domain = Yes
        enable privileges = Yes
        veto files = /lost+found/ /publicftp/

    So \\SAMBASERVER says it can't find my user, but I can see the user with getent passwd. What can I do so that SAMBASERVER sees and authenticates my user? Thanks in advance!
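    [Editor's note] getent goes through NSS/LDAP, while smbd's check_sam_security reads the LDAP passdb directly, so the two can disagree. Two things worth verifying, sketched here with placeholder credentials: smbd must hold the 'ldap admin dn' bind password in secrets.tdb, and the account must carry the sambaSamAccount attributes (getent will happily list posixAccount-only users that Samba's passdb cannot see):

        # store the 'ldap admin dn' bind password for smbd (run as root)
        smbpasswd -w 'the-ldap-admin-password'
        # check that the user actually has Samba attributes in LDAP
        ldapsearch -x -H ldap://192.168.78.206 -b ou=Users,dc=business,dc=intranet \
            '(&(uid=lucas)(objectClass=sambaSamAccount))'

    An empty result from the second command would explain NT_STATUS_NO_SUCH_USER despite a working getent.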

  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question.

    Context: we have around 1000 users across a bunch of computers. We have a centralized office where 900 of the users work most of the time. Most of the computers are laptops, and they are very frequently coming on and off the network for hours at a time. Users often take their computers home and do lots of work from home. In addition, there are a handful of users who work elsewhere in the country and who are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines run Windows 7/XP.

    Problem: people are always losing data. One day someone accidentally deletes a bunch of files. The next day someone else installs a bad driver or tries to mess with something in system32 and needs a personal data backup and a reinstall of Windows. Because so many of our business operations happen without an internet connection, and because computers come on- and offline so frequently, it's unfeasible to make users keep all of their data on network storage. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get back files they had accidentally deleted while offline, not having taken a backup in months. We tried teaching them backup best practices and using scheduled sync tools to upload things to the network drives, and they turned the tools off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amenable to any sort of "you need to do something regularly because we say so" solution.

    Question: other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following?

    - Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule.
    - Supports interrupted/resumed backups (and hopefully file-delta-only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so.
    - Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC.
    - Supports local-to-local file-delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions and the like. Ideally, the locally stored backups would be pushed up to the server whenever a network link is available.
    - Isn't configurable on the clients without certain credentials, because the CFOs (who won't give up their admin rights on the domain) will disable it if they can.
    - Backs up the entire hard drive. There are people who are self-righteous about storing things in C:\, or in the recycle bin, or in the C:\Windows dir (yes, I know).

    I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, and Volume Shadow Copy is awesome for a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a 2-hour window of network access. The company is fine with spending decent money on this - thousands (USD) on a server, and hundreds on clients - if necessary.

    I want to emphasize that this isn't a shopping-list request. While I wish there were a program out there that did what I want, I've looked pretty hard and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch/from different technologies to make something stable that works. Cheers!
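    [Editor's note] Not a full solution, but the "resumable, delta-only, push-on-reconnect" piece can be prototyped with tooling already on the Windows 7 clients: robocopy supports restartable copies and copies only changed files, so a scheduled task that fires when the network comes up gets surprisingly far. A hedged sketch - paths, share names and switches are illustrative:

        robocopy C:\Users \\backupsrv\backups\%COMPUTERNAME% ^
            /MIR /Z /XJ /R:2 /W:5 /FFT /LOG+:C:\ProgramData\backup.log

    /Z makes each file copy restartable across dropped links, /XJ skips junction points, and /FFT tolerates coarse file-time resolution on the server share. It does not cover the whole-drive or tamper-proofing requirements, which likely need an agent-based product or VSS scripting on top.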

  • How to troubleshoot an OpenVPN Appliance Server that clients cannot connect to

    - by Peter
    1) I have a Windows Server 2008 Standard SP2.
    2) I am running Hyper-V and have the OpenVPN Appliance Server virtual machine running.
    3) I have configured it as instructed; the only issue was that the legacy network adapter does not have a setting the instructions mention, "Enable spoofing of MAC Addresses". My understanding is that before R2, this was on by default.
    4) The server is running and the web interfaces look good.
    5) I am trying to connect from a Vista x64 box and cannot.
    5a) If I set the server to UDP, I am stuck at "Authorizing" and the client log looks like:

10/11/09 15:00:42: INFO: OvpnConfig: connect...
10/11/09 15:00:42: INFO: Gui listen socket at 34567
10/11/09 15:00:42: INFO: sending start command to instantiator...
10/11/09 15:00:42: INFO: start 34567 ?C:\Users\Peter\AppData\Roaming\OpenVPNTech\config?02369512D0C82A04B88093022DA0226202218022A902264022AE022B?
10/11/09 15:00:42: INFO: Got line from MI->>INFO:OpenVPN Management Interface Version 1 -- type 'help' for more info
10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release
10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: real-time state notification set to ON
10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: bytecount interval changed
10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold flag set to OFF
10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold release succeeded
10/11/09 15:00:43: INFO: Got line from MI->>PASSWORD:Need 'Auth' username/password
10/11/09 15:00:43: INFO: Processing PASSWORD.
10/11/09 15:00:43: INFO: OvpnClient: setting need auth to true.
10/11/09 15:00:43: INFO: OvpnConfig: Setting need auth to true.
10/11/09 15:00:43: INFO: Got auth request from active_config from 0
10/11/09 15:00:47: INFO: Sending Credentials....
10/11/09 15:00:47: INFO: Sending 25 bytes for username.
10/11/09 15:00:47: INFO: Sent 25 bytes for username.
10/11/09 15:00:47: INFO: Sending 30 bytes for password.
10/11/09 15:00:47: INFO: Sent 30 bytes for password.
10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' username entered, but not yet verified
10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' password entered, but not yet verified
10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287647,WAIT,,,
10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:0,42
10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:54,42
10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287648,AUTH,,,
10/11/09 15:00:50: INFO: Got line from MI->>BYTECOUNT:2560,2868
10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378

    5b) If I set the server to TCP and try to connect, I get a loop of authorizing and reconnecting. The log looks like:

10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release
10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: real-time state notification set to ON
10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: bytecount interval changed
10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold flag set to OFF
10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold release succeeded
10/11/09 15:00:43: INFO: Got line from MI->>PASSWORD:Need 'Auth' username/password
10/11/09 15:00:43: INFO: Processing PASSWORD.
10/11/09 15:00:43: INFO: OvpnClient: setting need auth to true.
10/11/09 15:00:43: INFO: OvpnConfig: Setting need auth to true.
10/11/09 15:00:43: INFO: Got auth request from active_config from 0
10/11/09 15:00:47: INFO: Sending Credentials....
10/11/09 15:00:47: INFO: Sending 25 bytes for username.
10/11/09 15:00:47: INFO: Sent 25 bytes for username.
10/11/09 15:00:47: INFO: Sending 30 bytes for password.
10/11/09 15:00:47: INFO: Sent 30 bytes for password.
10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' username entered, but not yet verified
10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' password entered, but not yet verified
10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287647,WAIT,,,
10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:0,42
10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:54,42
10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287648,AUTH,,,
10/11/09 15:00:50: INFO: Got line from MI->>BYTECOUNT:2560,2868
10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378
10/11/09 15:00:54: INFO: Got line from MI->>BYTECOUNT:2560,3888
...

    Is there any way to turn on robust logging on the server to understand what is happening? Any ideas on how to hunt this down?
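
    One way to get that server-side detail is OpenVPN's own verb directive. The file locations below are guesses for the appliance image, so adjust them to wherever its server config and log actually live; this is a sketch, not a confirmed procedure:

# Raise log verbosity (3 is the default; 4-6 shows TLS/auth handshake detail)
echo "verb 5" >> /etc/openvpn/server.conf   # config path is a guess for this appliance
/etc/init.d/openvpn restart
tail -f /var/log/openvpn.log                # watch while the Vista client retries

    Watching this while the client loops through WAIT/AUTH should show whether the server ever receives and verifies the credentials, or whether the handshake dies in transit.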

    Read the article

  • Memcache on Ubuntu Server Lucid and Ruby 1.9.1

    - by Thiago
    I'm trying to set up a memcache server on the above setup. I'm getting the following error:

/var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:443:in `load_missing_constant': uninitialized constant MemCache (NameError)
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:80:in `const_missing_with_dependencies'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:92:in `const_missing'
from /root/voicegateway/vendor/plugins/workling/lib/workling/clients/memcache_queue_client.rb:18:in `<class:MemcacheQueueClient>'
from /root/voicegateway/vendor/plugins/workling/lib/workling/clients/memcache_queue_client.rb:14:in `<module:Clients>'
from /root/voicegateway/vendor/plugins/workling/lib/workling/clients/memcache_queue_client.rb:13:in `<module:Workling>'
from /root/voicegateway/vendor/plugins/workling/lib/workling/clients/memcache_queue_client.rb:12:in `<top (required)>'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `block in require'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
from /root/voicegateway/vendor/plugins/workling/lib/workling/remote/runners/client_runner.rb:2:in `<top (required)>'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `block in require'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
from /root/voicegateway/vendor/plugins/workling/lib/workling/remote/runners/starling_runner.rb:1:in `<top (required)>'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `block in require'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
from /root/voicegateway/vendor/plugins/workling/lib/workling/remote.rb:3:in `<top (required)>'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:380:in `load'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:380:in `block in load_file'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:379:in `load_file'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:259:in `require_or_load'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:425:in `load_missing_constant'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:80:in `const_missing_with_dependencies'
from /root/voicegateway/config/environments/development.rb:20:in `block in load_environment'
from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/initializer.rb:386:in `eval'
from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/initializer.rb:386:in `block in load_environment'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/core_ext/kernel/reporting.rb:11:in `silence_warnings'
from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/initializer.rb:379:in `load_environment'
from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/initializer.rb:137:in `process'
from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/initializer.rb:113:in `run'
from /root/voicegateway/config/environment.rb:9:in `<top (required)>'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `block in require'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in'
from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/commands/server.rb:84:in `<top (required)>'
from ./server:3:in `require'
from ./server:3:in `<main>'

    But memcache-client 1.8.3 is on the gem list. What's the problem?
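
    The trace shows the workling plugin referencing MemCache before anything has required the memcache-client gem, so a check worth making is whether the gem loads at all under this Ruby, outside Rails. A hedged sketch from the shell (the config.gem line is the Rails 2.3 way to force the gem to load before plugins):

# Does memcache-client load and define MemCache under this interpreter?
ruby -e "require 'rubygems'; require 'memcache'; puts MemCache.name"
# If that works, declare it in config/environment.rb so Rails requires it
# before the workling plugin initializes, e.g.:
#   config.gem 'memcache-client', :lib => 'memcache'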

    Read the article

  • Strange Upload Problem on Hyper-V

    - by Ring0
    Hi. This one is driving me totally nuts. I have been trying to upload a file to www.virustotal.com using IE8 (it's a harmless exe, I have since found out - DiskWipe.exe from diskwipe.org).

    From Win 7 and Win 2008 R2 Datacenter (which I select to boot from VHDs) on my main machine's hardware, and also from another Win 7 PC elsewhere on my network, the upload to virustotal.com works perfectly. So using my native NICs everything is fine, and using another machine is also perfect.

    OK, from my boot menu the default is my main development machine - the one I'm typing on now. This runs on the metal, has the Hyper-V role, and I have some guests (none of them currently running). Amazingly, from my console (root partition, to be exact) or any guest OS (2003 / XP / 2008 R2 etc.), my upload to virustotal.com slows at 32%, then HANGS at 38-something% and never finishes!

    Here is the kicker: I have another box (my main server) running Hyper-V on the metal with three live guests - identical hardware to my main dev machine, in another room (except its OS is Datacenter; mine is Enterprise). If I try to upload the same file to virustotal.com from its bare-metal console or any guest using IE8, it stops in exactly the same place! That rather blows the usual "steps I have tried" questions out of the water, as my server box is doing precisely the same thing as the machine in my room here.

    OK, commonalities: Mobo: Gigabyte GA-X58-UD5 (revision F11 BIOS), 12GB Kingston RAM, Core i7 920 (4 cores, hyperthreading = 8), and Realtek RTL8168D/8111D Family PCI-E Gigabit Ethernet NICs. All three machines have this same motherboard, 12GB RAM, and the Realtek NICs - all x64, as I mentioned. I also have a Win 7 box with the UD5 motherboard and 12GB RAM - a bit of an overkill. :-)

    All these machines, when NOT running Hyper-V, can upload this file. Perhaps you may like to try it yourselves on a Hyper-V (2008 R2) host with IE8 and the desktop experience on, and see if it works or fails for you - root OS or any guest.

    So it is looking like NIC + Hyper-V = cannot upload this file (any file, I must add). The Realtek NIC driver is ver 7.002.1125.2008. In the NIC settings I see the usual parameters for jumbo frames / checksum offloading etc. and several others - should I fiddle with these? I ran Netmon 3.3 in a guest and the TCP session halted as the upload failed; I suppose I could study that further. I don't have Netmon on the root partition machine (yet)! All OSs are fully patched, including today's Defender files. My box is running Office 2007, but the identical server in the other room is not. Also, if I fire up a VPN to a distant client and do the upload, it works - of course that's a different network path.

    Suggestions welcome please. If I left out anything important - please yell at me. Many Thanks,
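
    If the checksum-offload suspicion pans out, one low-risk experiment on the parent partition is to disable the global TCP offloads and retry the upload. These netsh settings are the usual suspects on 2008 R2, offered here as a guess to test and revert rather than a known fix (the Realtek-specific offloads live separately, in the adapter's Advanced properties):

netsh int tcp set global chimney=disabled
netsh int tcp set global rss=disabled
netsh int tcp set global autotuninglevel=disabled

    If the upload then completes, re-enable the settings one at a time to find the culprit.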

    Read the article

  • How can I work around problems with certificate configuration in Remote Desktop Services?

    - by Michael Steele
    I am setting up a Remote Desktop Services farm, and am having trouble configuring certificates for it to use. A demonstration of the problem I'm seeing can be found in Step #4. At this point I am convinced that there are problems with the user interface, and am looking for ways around them. Is there any way to configure certificates in Remote Desktop Services so that the settings hold and are reflected in the GUI? If not, is there any way for me to verify that the settings are correct?

    Step #1 - Create the certificate to be used. I've configured a certificate to use with RD Web Access. The certificate is stored within the Certificates MMC on my RD Connection Broker, and I am configuring the farm from that computer. By letting RD Web Access generate its own certificate, I found that the following properties are required:
    Enhanced Key Usage: Server Authentication, Client Authentication (the latter may not be required, but the self-signed certificate includes it)
    Key Usage: Digital Signature, Key Agreement
    Subject Alternative Name: DNS Name=domain.com

    Detour about self-signed certificate generation: I was able to work around a problem with creating self-signed certificates using PowerShell. The documentation for the New-RDCertificate cmdlet gives the following example:
    PS C:\> $password = ConvertTo-SecureString -string "password" -asplaintext -force
    PS C:\> New-RDCertificate -Role RDWebAccess -DnsName "test-rdwa.contoso.com" -Password $password -ConnectionBroker rdcb.contoso.com -ExportPath "c:\test-rdwa.pfx"
    Typing this into the shell results in an error message claiming that a function, Get-Server, cannot be found. Prior to using New-RDCertificate, you must import the RemoteDesktop module with Import-Module RemoteDesktop.

    Step #2 - Observe the out-of-box behavior. Visit the Deployment Properties dialog box by navigating to Server Manager - Remote Desktop Services - Collections and selecting "Edit Deployment Properties" from the "TASKS" dropdown list in the "COLLECTIONS" grouping. The window is misleading because the Level field for each role service is listed as "Not Configured"; if I understand correctly, all three of the role services are in fact using a self-signed certificate. For the RD Web Access role this can be verified by visiting the website, and the certificate being used also appears in the Certificates MMC.

    Step #3 - Assign the new certificate. The Deployment Properties dialog box allows me to select my existing certificate. The certificate must be placed within the local computer's Certificates MMC in the "Personal" certificate store, the private key needs to be exportable, and you must provide the password. I temporarily exported my certificate to a file named temp.pfx with a password, and then imported it into Remote Desktop Services from there. Once this is done the GUI indicates that it is ready to accept the new configuration, and once I click the "Apply" button the GUI indicates success. This can be verified by visiting the RD Web Access web site a second time: there is no certificate error.

    Step #4 - The GUI fails to maintain its state. If the GUI is closed and reopened, all of these settings appear to be lost. Actually, the certificate I configured is still being used - I am able to continue accessing the RD Web Access site without any certificate errors. Oddly, if I use the "Create new certificate..." button to generate a self-signed certificate, the window updates to an "Untrusted" level, and that setting IS maintained through the opening and closing of the Deployment Properties dialog box.

    Is there anything I can do to make my settings stick in the GUI? I feel like something is wrong when it claims I haven't fully configured certificates.
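
    For verification outside the GUI, the same RemoteDesktop module used above for New-RDCertificate should also expose Get-RDCertificate and Set-RDCertificate; treat this as a sketch (the broker name follows the documentation example, not a confirmed environment, and cmdlet availability may depend on the server version):

    PS C:\> Import-Module RemoteDesktop
    PS C:\> Get-RDCertificate -ConnectionBroker rdcb.contoso.com
    # Lists Role, Level and Subject per role service - if this shows your
    # certificate as applied, the deployment is configured regardless of the dialog.
    PS C:\> $password = ConvertTo-SecureString -string "password" -asplaintext -force
    PS C:\> Set-RDCertificate -Role RDWebAccess -ImportPath "c:\temp.pfx" -Password $password -ConnectionBroker rdcb.contoso.com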

    Read the article

  • Red Hat Apache FastCGI SELinux permissions

    - by Alejo JM
    My Apache installation is running PHP as FastCGI. The virtual hosts point to /home/*/public_html, and the FastCGI wrappers are /home/*/cgi-bin/php.fcgi. The SELinux setup for public_html was:

/usr/sbin/setsebool -P httpd_enable_homedirs 1
chcon -R -t httpd_sys_content_t /home/someuser/public_html

    The owner and group are the user - for example, the user "someuser":

ls -all /home/someuser/cgi-bin/
drwxr-xr-x 2 someuser someuser 4096 Sep 7 13:14 .
drwx--x--x 6 someuser someuser 4096 Sep 6 18:17 ..
-rwxr-xr-x 1 someuser someuser 308 Sep 7 13:14 php.fcgi
ls -all /home/someuser/public_html/ | grep info.php
-rw-r--r-- 1 someuser someuser 24 Sep 3 16:24 info.php

    When I visit the site I get "Forbidden ..." and the log says:

[Fri Sep 07 12:02:51 2012] [error] [client x.x.x.x] (13)Permission denied: access to /cgi-bin/php.fcgi/info.php denied

    My SELinux config is:

SELINUX=enforcing
SELINUXTYPE=targeted
SETLOCALDEFS=0

    If I kill SELinux (SELINUX=disabled) and reboot the system, everything works! So the problem is SELinux, but I don't want to disable SELinux. I tried this with no success:

setsebool -P httpd_enable_cgi 1
chcon -t httpd_sys_script_exec_t /home/someuser/cgi-bin/php.fcgi
chcon -R -t httpd_sys_content_t /home/someuser/cgi-bin

    Or maybe it is better to change SELINUX=enforcing to SELINUX=permissive and disable SELinux for httpd? (I think I had better find the correct configuration instead.) Thanks for any suggestions on this matter.

    My environment:

Red Hat Enterprise Linux Server release 5.8 (Tikanga)
Server version: Apache/2.2.3
PHP 5.1.6 (cli) (built: Jun 22 2012 06:20:25)
Copyright (c) 1997-2006 The PHP Group
Zend Engine v2.1.0, Copyright (c) 1998-2006 Zend Technologies

    Some logs:

ps -ZC httpd
LABEL                       PID  TTY  TIME      CMD
system_u:system_r:httpd_t   2822 ?    00:00:00  httpd
system_u:system_r:httpd_t   2823 ?    00:00:00  httpd
system_u:system_r:httpd_t   2824 ?    00:00:00  httpd
system_u:system_r:httpd_t   2825 ?    00:00:00  httpd
system_u:system_r:httpd_t   2826 ?    00:00:00  httpd
system_u:system_r:httpd_t   2836 ?    00:00:00  httpd
system_u:system_r:httpd_t   2837 ?    00:00:00  httpd
system_u:system_r:httpd_t   2838 ?    00:00:00  httpd
system_u:system_r:httpd_t   2839 ?    00:00:00  httpd
system_u:system_r:httpd_t   2840 ?    00:00:00  httpd

getsebool -a | grep httpd
allow_httpd_anon_write --> off
allow_httpd_bugzilla_script_anon_write --> off
allow_httpd_cvs_script_anon_write --> off
allow_httpd_mod_auth_pam --> off
allow_httpd_nagios_script_anon_write --> off
allow_httpd_prewikka_script_anon_write --> off
allow_httpd_squid_script_anon_write --> off
allow_httpd_sys_script_anon_write --> off
httpd_builtin_scripting --> on
httpd_can_network_connect --> off
httpd_can_network_connect_db --> off
httpd_can_network_relay --> off
httpd_can_sendmail --> on
httpd_disable_trans --> off
httpd_enable_cgi --> on
httpd_enable_ftp_server --> off
httpd_enable_homedirs --> on
httpd_execmem --> off
httpd_read_user_content --> off
httpd_rotatelogs_disable_trans --> off
httpd_setrlimit --> off
httpd_ssi_exec --> off
httpd_suexec_disable_trans --> off
httpd_tty_comm --> on
httpd_unified --> on
httpd_use_cifs --> off
httpd_use_nfs --> off
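
    Two things stand out above: the recursive chcon on cgi-bin re-labels php.fcgi back to httpd_sys_content_t (undoing the script_exec label applied on the line before it), and chcon labels don't survive a filesystem relabel in any case. A hedged sketch of the persistent route, assuming the standard policycoreutils tools are available on this RHEL 5 box:

# Record a persistent file-context rule for the wrappers, then apply it:
semanage fcontext -a -t httpd_sys_script_exec_t '/home/[^/]+/cgi-bin(/.*)?'
restorecon -Rv /home/someuser/cgi-bin
# If it still fails, read the exact denial instead of guessing at booleans:
grep httpd /var/log/audit/audit.log | audit2why

    The httpd_read_user_content boolean (off in the output above) may also be relevant when serving files out of home directories.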

    Read the article

  • Postfix MySQL Dovecot - SMTP Authentication Failure

    - by borncamp
    Hello. I have a Postfix setup with Dovecot and MySQL; the server is running Debian Squeeze. The MySQL server is a slave that has data pushed to it from the primary (Postfix) mail server, which runs a different OS, and the emails are stored on a replicated GlusterFS volume. I am able to check email using Thunderbird over IMAP; however, SMTP requests fail. After turning on query logs for the MySQL server, I noticed that no query is executed to retrieve the user information when an SMTP client tries to authenticate. I'd like to know what I'm doing wrong or what the next troubleshooting steps are - I'm about to pull my hair out. Below is some log and configuration data that I thought would be relevant. Your help is much appreciated.

    The file /var/log/mail.log shows:

Oct 11 14:54:16 mailbox2 postfix/smtpd[25017]: connect from unknown[192.168.0.44]
Oct 11 14:54:19 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL PLAIN authentication failed:
Oct 11 14:54:25 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL LOGIN authentication failed: VXNlcm5hbWU6
Oct 11 14:55:48 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL PLAIN authentication failed: VXNlcm5hbWU6
Oct 11 14:55:54 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL LOGIN authentication failed: VXNlcm5hbWU6
Oct 11 14:55:57 mailbox2 postfix/smtpd[25017]: disconnect from unknown[192.168.0.44]

    This is my dovecot.conf file:

log_timestamp = "%Y-%m-%d %H:%M:%S "
mail_location = maildir:/var/mail/virtual/%d/%n/
auth_mechanisms = plain login
disable_plaintext_auth = no
namespace {
  inbox = yes
  location =
  prefix = INBOX.
  separator = .
  type = private
}
passdb {
  args = /etc/dovecot/dovecot-mysql.conf
  driver = sql
}
protocols = imap pop3
service auth {
  unix_listener /var/spool/postfix/private/auth {
    group = postfix
    mode = 0660
    user = postfix
  }
  unix_listener auth-master {
    mode = 0600
    user = postfix
  }
  user = root
}
ssl_cert = </etc/ssl/certs/dovecot.pem
ssl_key = </etc/ssl/private/dovecot.pem
userdb {
  args = /etc/dovecot/dovecot-mysql.conf
  driver = sql
}
protocol lda {
  auth_socket_path = /var/run/dovecot/auth-master
  mail_plugins = sieve
  postmaster_address = [email protected]
}
protocol pop3 {
  pop3_uidl_format = %08Xu%08Xv
}

    Here is my dovecot-mysql.conf file:

connect = host=127.0.0.1 dbname=postfix user=postfix password=ffjM2MYAqQtAzRHX
driver = mysql
default_pass_scheme = MD5-CRYPT
password_query = SELECT username AS user,password FROM mailbox WHERE username = '%u' AND active='1'
user_query = SELECT CONCAT('/var/mail/virtual/', maildir) AS home, 1001 AS uid, 109 AS gid, CONCAT('*:messages=10000:bytes=',quota) as quota_rule, 'Trash:ignore' AS quota_rule2 FROM mailbox WHERE username = '%u' AND active='1'

    Here is my output from 'postconf -n':

append_dot_mydomain = no
biff = no
bounce_template_file = /etc/postfix/bounce.cf
broken_sasl_auth_clients = yes
config_directory = /etc/postfix
delay_warning_time = 0h
dovecot_destination_recipient_limit = 1
inet_interfaces = all
local_recipient_maps = $virtual_mailbox_maps
local_transport = virtual
mailbox_command = procmail -a "$EXTENSION"
mailbox_size_limit = 0
maximal_queue_lifetime = 1d
message_size_limit = 25600000
mydestination = mailbox2.cws.net, debian.local.cws.net, localhost.local.cws.net, localhost
myhostname = mailbox2.cws.net
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 172.18.0.119 63.164.138.3
myorigin = /etc/mailname
proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps
readme_directory = no
recipient_delimiter = +
relay_domains =
relayhost =
smtp_connect_timeout = 10
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
smtpd_client_message_rate_limit = 50
smtpd_client_recipient_rate_limit = 500
smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks
smtpd_delay_reject = yes
smtpd_discard_ehlo_keyword_address_maps = hash:/etc/postfix/discard_ehlo
smtpd_helo_required = yes
smtpd_helo_restrictions = permit_mynetworks, reject_invalid_helo_hostname, permit
smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,reject_unauth_destination
smtpd_sasl_auth_enable = yes
smtpd_sasl_authenticated_header = yes
smtpd_sasl_path = private/auth
smtpd_sasl_security_options = noanonymous
smtpd_sasl_tls_security_options = $smtpd_sasl_security_options
smtpd_sasl_type = dovecot
smtpd_sender_restrictions = permit_mynetworks, reject_non_fqdn_sender, reject_unknown_sender_domain, permit
smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtpd_use_tls = yes
transport_maps = hash:/etc/postfix/transport
virtual_alias_maps = proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_catchall_maps.cf
virtual_gid_maps = static:1001
virtual_mailbox_base = /var/mail/virtual/
virtual_mailbox_domains = proxy:mysql:/etc/postfix/sql/mysql_virtual_domains_maps.cf
virtual_mailbox_maps = proxy:mysql:/etc/postfix/sql/mysql_virtual_mailbox_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_mailbox_maps.cf
virtual_transport = dovecot
virtual_uid_maps = static:1001
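
    Since IMAP logins do reach the SQL backend and SMTP ones never do, it is worth confirming which side of the Postfix/Dovecot auth socket is failing. A sketch with placeholder credentials - turn on Dovecot's auth tracing, then drive SMTP AUTH by hand:

# In dovecot.conf (both are standard Dovecot settings), then restart dovecot:
#   auth_verbose = yes
#   auth_debug = yes
# Build the AUTH PLAIN token for "\0user\0password" and replay it over SMTP:
printf '\0user@example.com\0secret' | openssl base64
telnet mailbox2.cws.net 25
#   EHLO test
#   AUTH PLAIN AHVzZXJAZXhhbXBsZS5jb20Ac2VjcmV0

    If nothing shows up in Dovecot's log when AUTH is issued, Postfix never reached the socket; if a line appears but no SQL query runs, the problem is inside the Dovecot passdb configuration.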

    Read the article

  • php-fpm + persistent sockets = 502 bad gateway

    - by leeoniya
    Put on your reading glasses - this will be a long-ish one.

    First, what I'm doing: I'm building a web-app interface for some particularly slow TCP devices. Opening a socket to them takes 200ms, and an fwrite/fread cycle takes another 300ms. To reduce the need for both of these actions on each request, I'm opening a persistent TCP socket, which cuts the response time by the aforementioned 200ms. I was hoping PHP-FPM would share the persistent connections between requests from different clients (and indeed it does!), but there are some issues which I haven't been able to resolve after two days of interneting, reading logs and modifying settings. I have somewhat narrowed it down, though.

    Setup:
Ubuntu 13.04 x64 Server (fully updated) on Linode
PHP 5.5.0-6~raring+1 (fpm-fcgi)
nginx/1.5.2

    Relevant config:
nginx:
  worker_processes 4;
php-fpm/pool.d:
  pm = dynamic
  pm.max_children = 2
  pm.start_servers = 2
  pm.min_spare_servers = 2

    Let's go from coarse to fine detail of what happens. After a fresh start I have 4x nginx processes and 2x php5-fpm processes waiting to handle requests. Then I send requests every couple of seconds to the script. The first takes a while to open the socket connection and returns with the data in about 500ms; the second returns data in 300ms (yay, it's re-using the socket); the third also succeeds in about 300ms; the fourth request = 502 Bad Gateway, same with the fifth. The sixth request once again returns data, except now it takes 500ms again. The process repeats for several cycles, after which every 4 requests result in 2x 502 Bad Gateways and 2x 500ms data responses. If I double all the fpm pool values and have 4x php-fpm processes running, the cycle settles in with 4x successful 500ms responses followed by 4x Bad Gateway errors. If I don't use persistent sockets, this issue goes away, but then every request is 500ms.

    What I suspect is happening: the persistent socket keeps each php-fpm process from idling and ties it up, so the next one gets chosen until none are left. As they error out, maybe they are restarted and become available on the next round-robin loop, but the socket dies with the process.

    I haven't yet checked the 'slowlog', but the nginx error log shows lots of this:

*188 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:...

    All the suggestions on the internet regarding fixing nginx/php-fpm 502 Bad Gateway errors relate to high load or fcgi_pass misconfiguration. That is not the case here. Increasing buffers/sizes, changing timeouts, switching from a unix socket to a tcp socket for fcgi_pass, upping connection limits on the system - none of that applies here.

    I've had some other success with setting pm = ondemand rather than dynamic, but as soon as the initial fpm process gets killed off after idling, the persistent socket is gone for all subsequent php-fpm spawns.

    For the PHP script, I'm using stream_socket_client() with the STREAM_CLIENT_PERSISTENT flag, a while/stream_select() loop to detect socket data, and fread($sock, 4096) to grab the data. I don't call fclose(), obviously.

    If anyone has some additional questions or advice on how to get a persistent socket without tying up the php-fpm processes beyond the request completion, or maybe some other things to try, I'd appreciate it. Some useful links:

Nginx + php-fpm - recv() error
Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server)
Nginx + PHP-FPM "error 104 Connection reset by peer" causes occasional duplicate posts
http://www.linuxquestions.org/questions/programming-9/php-pfsockopen-552084/
http://stackoverflow.com/questions/14268018/concurrent-use-of-a-persistent-php-socket
http://devzone.zend.com/303/extension-writing-part-i-introduction-to-php-and-zend/#Heading3
http://stackoverflow.com/questions/242316/how-to-keep-a-php-stream-socket-alive
http://php.net/manual/en/install.fpm.configuration.php
https://www.google.com/search?q=recv%28%29+failed+%28104:+Connection+reset+by+peer%29+while+reading+response+header+from+upstream+%22502%22&ei=mC1XUrm7F4WQyAHbv4H4AQ&start=10&sa=N&biw=1920&bih=953&dpr=1
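
    Given the diagnosis above - each worker owns one persistent socket and the socket dies with the worker - one configuration sketch worth testing is a static pool whose workers are never recycled, sized to however many concurrent device connections are tolerable. These are stock php-fpm pool directives; the numbers are guesses:

; php-fpm/pool.d/www.conf (sketch)
pm = static
pm.max_children = 4           ; one long-lived device socket per worker
pm.max_requests = 0           ; 0 = never recycle a worker, so its socket survives
request_terminate_timeout = 0 ; don't let fpm kill a worker mid-cycle

    With static, never-recycled workers, any remaining 502 points at the device socket itself dying rather than at pool churn, which narrows the search considerably.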

    Read the article

  • SBS2003 to SBS2011 Migration - Installation Error

    - by Shawn Gradwell
    Microsoft Small Business Server 2003 to 2011 migration. I followed the Migration Guide from Microsoft, and the source server had no errors when running the various tests prior to the migration. I have completed the destination server setup using the Answer File, and the server is up and running. It all looks good - I can access Exchange and AD - and the only problem is the error message when you log in, stating that the setup did not complete and to check the logs. Because all looks good, I am continuing the migration to the destination server. I should also state that this client does not use SharePoint at all. Do I have to redo everything? Herewith the logs:

[4992] 121016.225454.5905: Task: Starting Add User or Group access VSS registry.
[4992] 121016.225454.7645: TaskManagement: In TaskScheduler.RunTasks(): The "ConfigureSharePointVSSRegistryTask" Task threw an Exception during the Run() call:
System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated.
at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess)
at System.Security.Principal.NTAccount.Translate(Type targetType)
at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified)
at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule)
at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names)
at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl)
at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink)
at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName)

[4992] 121016.225454.7655: Setup: An error was encountered on the TME thread:
System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated.
at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess)
at System.Security.Principal.NTAccount.Translate(Type targetType)
at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified)
at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule)
at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names)
at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl)
at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink)
at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName)
at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter._RunTasks(Object sender, DoWorkEventArgs e)

[4956] 121016.225455.0685: Setup: _UnhandledExceptionHandler: Setup encountered an error:
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Reflection.TargetInvocationException: The TME thread failed (see the inner exception). ---> System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated.
at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess)
at System.Security.Principal.NTAccount.Translate(Type targetType)
at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified)
at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule)
at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names)
at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl)
at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink)
at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName)
at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter._RunTasks(Object sender, DoWorkEventArgs e)
at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
--- End of inner exception stack trace ---
at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter.TasksCompleted(Object sender, RunWorkerCompletedEventArgs e)
--- End of inner exception stack trace ---
at System.RuntimeMethodHandle._InvokeMethodFast(IRuntimeMethodInfo method, Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeType typeOwner)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
at System.Delegate.DynamicInvokeImpl(Object[] args)
at System.Windows.Forms.Control.InvokeMarshaledCallbackDo(ThreadMethodEntry tme)
at System.Windows.Forms.Control.InvokeMarshaledCallbackHelper(Object obj)
at System.Threading.ExecutionContext.runTryCode(Object userData)
at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Windows.Forms.Control.InvokeMarshaledCallback(ThreadMethodEntry tme)
at System.Windows.Forms.Control.InvokeMarshaledCallbacks()
at System.Windows.Forms.Control.WndProc(Message& m)
at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData)
at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
at Microsoft.WindowsServerSolutions.Common.Wizards.Framework.WizardChainEngine.Launch()
at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass._LaunchWizard()
at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass.RealMain(String[] args)
at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass.Main(String[] args)

[4956] 121016.225455.0865: Setup: Removed the password.
[4956] 121016.225455.0905: Setup: Deleting scheduled task at path Microsoft\Windows\Windows Small Business Server 2011 Standard with name Setup
[4956] 121016.225455.8055: Setup: Removed SBSSetup from the RunOnce.
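
    The failing frame in every trace is NTAccount.Translate inside ConfigureSharePointVSSRegistryTask, i.e. one of the accounts the SharePoint VSS task grants registry access to does not resolve to a SID - plausible when SharePoint is unused. A hedged way to reproduce the lookup outside setup (the account name is a placeholder; try the SharePoint service/search accounts that came over from the old domain):

    PS C:\> $acct = New-Object System.Security.Principal.NTAccount("DOMAIN\spfarm")
    PS C:\> $acct.Translate([System.Security.Principal.SecurityIdentifier])
    # The account that throws IdentityNotMappedException here is the one the
    # setup task could not map; recreate or remove it before re-running the wizard.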

    Read the article

  • Connection Timed Out - Simple outbound Postfix for PHP Contact form

    - by BLaZuRE
    Alright, so I only got Postfix for a PHP contact form that will send email to a single recipient. I only want it to send out mail to a single external address ([email protected]). I have domain sub1.sub2.domain.com. I installed Postfix out of the Ubuntu repo, with minimal config changes. I cannot get Postfix to send mail externally (though it succeeds for internal accounts, which is unnecessary). The email simply defers if I generate an email using PHP mail().

    If I try to form my own in telnet, right after "rcpt to: [email protected]" I get:

postfix/smtpd[31606]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 550 5.1.1 <[email protected]>: Recipient address rejected: example.com; from=<root@localhost> to=<[email protected]> proto=ESMTP helo=<localhost>

    When commenting out the default_transport = error and relay_transport = error lines, I get the following:

Jun 26 14:33:00 sub1 postfix/smtp[12191]: 2DA06F88206A: to=<[email protected]>, relay=none, delay=514, delays=409/0.01/105/0, dsn=4.4.1, status=deferred (connect to aspmx3.googlemail.com[74.125.127.27]:25: Connection timed out)
Jun 26 14:36:36 sub1 postfix/smtp[12225]: connect to mta7.am0.yahoodns.net[98.139.175.224]:25: Connection timed out
Jun 26 14:38:00 sub1 postfix/smtp[12225]: 22952F88208E: to=<[email protected]>, relay=none, delay=655, delays=550/0.01/105/0, dsn=4.4.1, status=deferred (connect to mta5.am0.yahoodns.net[67.195.168.230]:25: Connection timed out)

    My main.cf:

# See /usr/share/postfix/main.cf.dist for a commented, more complete version
# Debian specific: Specifying a file name will cause the first
# line of that file to be used as the name. The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no
# appending .domain is the MUA's job.
append_dot_mydomain = no
# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h
readme_directory = no
# TLS parameters
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
# See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
# information on enabling SSL in the smtp client.
myhostname = sub1.sub2.domain.com
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = sub1.sub2.domain.com, localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
default_transport = error
relay_transport = error

    Also, a dig sub1.sub2.domain.com MX returns:

; <<>> DiG 9.7.0-P1 <<>> sub1.sub2.domain.com MX
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4853
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION:
;sub1.sub2.domain.com. IN MX
;; AUTHORITY SECTION:
sub2.domain.com. 600 IN SOA sub2.domain.com. sub5.domain.com. 2012062915 7200 600 1209600 600
;; Query time: 0 msec
;; SERVER: x.x.x.x#53(x.x.x.x)
;; WHEN: Fri Jun 29 16:35:00 2012
;; MSG SIZE rcvd: 84

    lsof -i returns empty. netstat -t -a | grep LISTEN returns:

tcp  0  0 localhost:mysql        *:*    LISTEN
tcp  0  0 *:ftp                  *:*    LISTEN
tcp  0  0 *:ssh                  *:*    LISTEN
tcp  0  0 localhost:ipp          *:*    LISTEN
tcp  0  0 *:smtp                 *:*    LISTEN
tcp6 0  0 [::]:netbios-ssn       [::]:* LISTEN
tcp6 0  0 [::]:www               [::]:* LISTEN
tcp6 0  0 [::]:ssh               [::]:* LISTEN
tcp6 0  0 localhost:ipp          [::]:* LISTEN
tcp6 0  0 [::]:microsoft-ds      [::]:* LISTEN
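
    Two separate things seem to be going on here: the default_transport = error / relay_transport = error lines reject anything non-local by design, and once they are commented out, connection timeouts to every Google and Yahoo MX usually mean outbound port 25 is filtered upstream of the box rather than anything inside Postfix. A quick hedged check, plus the usual workaround of relaying through a provider smarthost (the relay name is a placeholder):

# If this also times out, outbound 25 is blocked by the host/provider:
nc -zv -w 10 aspmx.l.google.com 25
# Workaround: hand mail to a relay that accepts submission on 587:
postconf -e 'relayhost = [smtp.yourprovider.example]:587'
postfix reload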

    Read the article

  • Do email forms need to be sanitized before sending?

    - by levi
    I have a client that keeps getting reports from GoDaddy's "websiteprotection.com" stating that the website is insecure:

    Your website contains pages that do not properly sanitize visitor-provided input to make sure it contains no malicious content or scripts. Cross-site scripting vulnerabilities let malicious users execute arbitrary HTML or script code in another visitor's browser.
    Output: The request string used to detect this flaw was: /cross_site_scripting.nasl.asp
    The output was:

HTTP/1.1 404 Not Found\r
Date: Wed, 21 Mar 2012 08:12:02 GMT\r
Server: Apache\r
X-Pingback: http://CLIENTSWEBSITE.com/xmlrpc.php\r
Expires: Wed, 11 Jan 1984 05:00:00 GMT\r
Cache-Control: no-cache, must-revalidate, max-age=0\r
Pragma: no-cache\r
Set-Cookie: PHPSESSID=1jsnhuflvd59nb4trtquston50; path=/\r
Last-Modified: Wed, 21 Mar 2012 08:12:02 GMT\r
Keep-Alive: timeout=15, max=100\r
Connection: Keep-Alive\r
Transfer-Encoding: chunked\r
Content-Type: text/html; charset=UTF-8\r
\r
<div id="contact-form" class="widget"><form action="http://CLIENTSWEBSITE.com/<script>cross_site_scripting.nasl</script>.asp" id="contactForm" method="post">

    It looks like it has an issue with the contact form. All the contact form does is post an AJAX request to the same page, and then a PHP script mails the data (no database involved). Are there any security issues here? Any ideas on how I can satisfy the security scanner? Here are the form and the script:

<form action="<?php echo $this->getCurrentUrl(); ?>" id="contactForm" method="post">
  <input type="text" name="Name" id="Name" value="" class="txt requiredField name" />
  //Some more text inputs
  <input type="hidden" name="sendadd" id="sendadd" value="<?php echo $emailadd ; ?>" />
  <input type="hidden" name="submitted" id="submitted" value="true" />
  <input class="submit" type="submit" value="Send" />
</form>

// Some initial JS validation; if that passes, an AJAX post is made to the script below

//If the form is submitted
if(isset($_POST['submitted'])) {
  //Check captcha
  if (isset($_POST["captchaPrefix"])) {
    $capt = new ReallySimpleCaptcha();
    $correct = $capt->check( $_POST["captchaPrefix"], $_POST["Captcha"] );
    if( ! $correct ) {
      echo false;
      die();
    } else {
      $capt->remove( $_POST["captchaPrefix"] );
    }
  }
  $dateon = $_POST["dateon"];
  $ToEmail = $_POST["sendadd"];
  $EmailSubject = 'Contact Form Submission from ' . get_bloginfo('title');
  $mailheader = "From: ".$_POST["Email"]."\r\n";
  $mailheader .= "Reply-To: ".$_POST["Email"]."\r\n";
  $mailheader .= "Content-type: text/html; charset=iso-8859-1\r\n";
  $MESSAGE_BODY = "Name: ".$_POST["Name"]."<br>";
  $MESSAGE_BODY .= "Email Address: ".$_POST["Email"]."<br>";
  $MESSAGE_BODY .= "Phone: ".$_POST["Phone"]."<br>";
  if ($dateon == "on") {$MESSAGE_BODY .= "Date: ".$_POST["Date"]."<br>";}
  $MESSAGE_BODY .= "Message: ".$_POST["Comments"]."<br>";
  mail($ToEmail, $EmailSubject, $MESSAGE_BODY, $mailheader) or die ("Failure");
  echo true;
  die();
}
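
    Two sanitization gaps stand out, and both are fixable in a few lines: the scanner's payload is reflected because the form action echoes the current request URL unescaped, and the raw $_POST["Email"] in the From:/Reply-To: headers permits mail header injection (a \r\n in that field becomes an extra header). A sketch of both fixes, reusing the names from the code above:

<?php
// Escape the reflected URL so an injected <script> payload renders inert,
// then echo $action in the form's action attribute instead of the raw URL:
$action = htmlspecialchars($this->getCurrentUrl(), ENT_QUOTES, 'UTF-8');

// Validate the address before it reaches the mail headers; FILTER_VALIDATE_EMAIL
// rejects anything containing \r or \n, which blocks header injection:
$email = filter_var($_POST['Email'], FILTER_VALIDATE_EMAIL);
if ($email === false) { die('Failure'); }
$mailheader = "From: " . $email . "\r\n";
$mailheader .= "Reply-To: " . $email . "\r\n";

    It would also be worth hard-coding the destination server-side rather than taking $ToEmail from the hidden sendadd field, since anyone can repost the form with a different value there.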

    Read the article

< Previous Page | 788 789 790 791 792 793 794 795 796 797 798 799  | Next Page >