Search Results

  • ipv6 port 445 does not accept the request from a global type address

    - by blacktea
    I want to scan port 445 on Windows Server 2003, but my scanner only has one type of IPv6 address, which is global rather than link-local. When I scan, I can't find port 445 open, yet the command "netstat -an" confirms that port 445 is listening. I eventually narrowed it down to this confusing behaviour:

    1. When I set a link-local address on my scanner, scanning port 445 works.
    2. When I set only a global address on my scanner, it does not work.

    This means that if a host with a link-local address uses a socket to send a SYN packet to port 445 on Server 2003, it receives an ACK packet, but with a global address it receives a RST packet. So I can't scan port 445 on Server 2003 from a global address, and I need to know why. Can anybody help? I have also used netsh firewall to check the exceptions and netsh interface ipv6 to turn off the firewall on the specific interface, but I still can't establish a connection to port 445. Do you have any idea about this?
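
    One way to pin the behaviour down outside the scanner is to attempt the full TCP handshake to port 445 from each source address explicitly. Below is a minimal sketch, assuming Python 3 on the scanning host; the server and source addresses are placeholders to replace with your own, and the %eth0 scope suffix (needed for link-local sources) varies by platform.

      import socket

      SERVER = "2001:db8::10"   # placeholder: the Server 2003 box's IPv6 address
      SOURCES = [
          ("fe80::aabb:ccff:fedd:eeff%eth0", "link-local"),  # placeholder link-local source
          ("2001:db8::20", "global"),                        # placeholder global source
      ]

      for src, label in SOURCES:
          s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
          s.settimeout(5)
          try:
              s.bind((src, 0))          # force which source address the SYN is sent from
              s.connect((SERVER, 445))  # SYN -> SYN/ACK -> ACK if the port accepts us
              print(f"{label} source: port 445 accepted the connection")
          except OSError as exc:
              print(f"{label} source: connection failed: {exc}")
          finally:
              s.close()

    If the global source consistently fails here too, the scanner itself is ruled out and the difference lies on the server side (firewall scoping, IPsec policy, or SMB binding).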

  • Outlook and IMAP - Outlook doesn't allow the Drafts and Trash folders to sync with the respective IMAP folders

    - by Matt
    I'm using Outlook 2007 and Outlook 2010 against an IMAP server (the problem exists with many servers - Gmail, you name it). Outlook lets you map your Outlook "Sent" folder to the IMAP server's Sent folder (the other choice is to map your Outlook Sent to your Personal Folders' Sent) - this is good. When you send a message from Outlook and then look in the Sent folder of the IMAP server (e.g. from a different client or from a browser), the messages are there. This is the behavior I want. Outlook does NOT support the same behavior for Drafts and Trash: in both cases, items deleted (or drafts saved) in Outlook go into Outlook's local folders and do NOT show up in the IMAP server's Trash or Drafts folders. Same problem in reverse. Thunderbird, on the other hand, does support the proper mapping of Drafts, Sent and Trash. I expected this to be IMAP-specific, but it appears to be client-specific. Why does Outlook implement it this way, and is there a workaround?

  • The email address used to help trust my PC, is no longer valid, how do I fix that?

    - by Rod
    A few nights ago I upgraded my Windows 7 PC to Windows 8. This morning, when I logged into my PC, I noticed that the little flag in the system tray (in desktop mode) had a red "X" on it. I checked it, and one of the issues was that I needed to tell Windows 8 that I trusted my PC. I clicked on the link to trust my PC, saw the button to confirm it, and did so. After that, it said it had sent an email to a really old address of mine, one that hasn't been valid for many years. Now what? Will the "trust this PC" step fail because I can't respond to a confirmation sent to that dead address? I didn't even know Windows still had such an old address on file. I'm concerned that I won't have a trust relationship with my own PC because whatever holds my account information has stale data, and I'm not sure how to update it as quickly as possible so that I can trust my own PC. How do I do these things?

  • The physical working paradigm of a signal passing on a wire

    - by smwikipedia
    Hi, this may be more a question of physics, so pardon me if there's any inconvenience. When I study computer networks, I often read something like this: in order to represent a signal, we place some voltage on one end of the wire, and the other end detects the voltage and thus the signal. So I am wondering how exactly a signal passes through a wire. Here's my current understanding, based on my formal knowledge of electronics: first we need a closed circuit to constrain/hold the electric field. When we place a voltage at some point A of the circuit, an electric field starts to build up within the circuit medium; this process propagates at close to the speed of light. As the electric field builds up, the electrons within the circuit medium move, and thus an electric current occurs. Once the current is strong enough to be detected at some other point B on the circuit, B knows what happened at A, and communication between A and B is achieved. The above only describes sending a single voltage through the wire. If there's a bitstream and we need to send a series of voltages, I am not sure which of the following is true:

    1. The 2nd voltage can only be sent from A after the 1st voltage has been detected at B; the interval is the time needed to stimulate the electric field in the medium and form a detectable current at B.
    2. Several different voltages can be sent down the wire one after another; different current values exist along the wire simultaneously and arrive at B successively.

    I hope I've made myself clear and that someone else has pondered this question. (I tagged this question with "network" because I wasn't sure of a better option.) Thanks, Sam
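
    For a sense of scale, the timing in these two alternatives can be sanity-checked numerically: signals in copper propagate at roughly two thirds of the speed of light, so at realistic signalling rates the sender cannot be waiting for detection at B. A back-of-the-envelope sketch, with illustrative numbers (100 m of cable, 100 Mbit/s) that are assumptions, not from the question:

      # Back-of-the-envelope: how many bits are "on the wire" at once?
      c = 3.0e8                 # speed of light in vacuum, m/s
      v = 0.66 * c              # typical propagation speed in copper
      cable_length = 100.0      # metres (illustrative)
      bit_rate = 100e6          # 100 Mbit/s (illustrative)

      propagation_delay = cable_length / v   # time for the wavefront to reach B
      bit_time = 1.0 / bit_rate              # time per transmitted bit
      bits_in_flight = propagation_delay / bit_time

      print(f"propagation delay: {propagation_delay * 1e9:.0f} ns")
      print(f"bit time:          {bit_time * 1e9:.0f} ns")
      print(f"bits in flight:    {bits_in_flight:.0f}")
      # ~50 bits are in transit simultaneously, which matches alternative 2.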

  • Win Server 2k and Win 7 client

    - by Ray Kruse
    I have a Windows Server 2000 system with AD configured. The network consists of an OKI printer, a network drive, a wifi router, a Win 2k client and the server. I'm trying to connect a Win 7 client. The purpose of the network, besides sharing equipment, is to move files from client to client and scatter backups over more than one machine. The Win 7 client is configured for DHCP and does in fact receive its IP and DNS configuration from the server, and it sees the printer, wifi router and network drive, but it does not see the Win 2k client or the Win 2k server. I have tried setting the LAN Manager authentication level to 'Send LM & NTLM responses' with the 128-bit encryption requirement removed. I've also done the registry hack on the key 'LmCompatibilityLevel'. Neither of these has helped. I have two questions: Is there a fix, or is Win 2k totally incompatible? Is the best (or quickest/cheapest) fix to upgrade the server to Win 2k3 and not worry about the Win 2k client? Thanks for any help. Ray Kruse, Buffalo, KY

  • Postfix not sending email after upgrading to Ubuntu 12.04

    - by Luke
    After upgrading a server from Ubuntu 10.04 to 12.04, Postfix is no longer sending email through sendgrid.com. I followed this guide about 6 months ago and everything had been working perfectly until the upgrade. Now it doesn't seem to be authenticating with SendGrid. This is the error I get in my syslog when I try to send an email:

      May 22 10:19:55 server postfix/smtp[3844]: 983B11C5DA: to=<to address>, relay=smtp.sendgrid.net[174.36.32.204]:587, delay=0.05, delays=0.01/0/0.04/0, dsn=5.0.0, status=bounced (host smtp.sendgrid.net[174.36.32.204] said: 550 Cannot receive from specified address <sendgrid username>: Unauthenticated senders not allowed (in reply to MAIL FROM command))

    This is from postconf -n:

      alias_database = hash:/etc/aliases
      alias_maps = hash:/etc/aliases
      append_dot_mydomain = no
      biff = no
      broken_sasl_auth_clients = no
      config_directory = /etc/postfix
      header_size_limit = 4096000
      inet_interfaces = loopback-only
      mailbox_size_limit = 0
      mydestination = localhost, mylinode.members.linode.com
      myhostname = hostname
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      readme_directory = no
      recipient_delimiter = +
      relayhost = [smtp.sendgrid.net]:587
      smtp_sasl_auth_enable = yes
      smtp_sasl_mechanism_filter = login
      smtp_sasl_password_maps = hash:/etc/postfix/sasl/sendgrid
      smtp_sasl_security_options = noanonymous
      smtp_tls_security_level = may
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
      smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtpd_use_tls = yes

    Any help would be greatly appreciated. I would be happy to post any other logs or other relevant information.
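
    One way to rule Postfix in or out is to test the SendGrid credentials directly with the same mechanism Postfix is configured for (LOGIN over STARTTLS on port 587). This is a minimal sketch, assuming Python 3 is available on the server; the username and password are placeholders for the values stored in /etc/postfix/sasl/sendgrid.

      import smtplib

      USER = "sendgrid-username"       # placeholder: your SendGrid username
      PASSWORD = "sendgrid-password"   # placeholder: your SendGrid password

      s = smtplib.SMTP("smtp.sendgrid.net", 587, timeout=30)
      s.set_debuglevel(1)   # print the full SMTP dialogue
      s.ehlo()
      s.starttls()          # the TLS step Postfix performs with smtp_tls_security_level = may
      s.ehlo()
      s.login(USER, PASSWORD)   # raises smtplib.SMTPAuthenticationError if the credentials fail
      print("authentication succeeded; the credentials themselves are fine")
      s.quit()

    If this logs in cleanly, the problem is on the Postfix side (for example, the SASL support packages or the hash map after the upgrade) rather than with the account.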

  • Is it the address bus size or the data bus size that determines "8-bit, 16-bit, 32-bit, 64-bit" systems?

    - by learner
    My simple understanding is as follows. Memory (RAM) is composed of bits; groups of 8 form bytes, each of which can be addressed, hence byte-addressable memory. The address bus carries the location of a byte of memory. If an address bus is 32 bits wide, it can hold 2^32 distinct values and hence can refer to up to 2^32 bytes of memory = 4 GB, and any memory beyond that is unreachable. The data bus is used to send the value to be written to or read from memory. If I have a data bus of 32 bits, a maximum of 4 bytes can be written or read at a time. I find no relation between this size and the maximum possible memory size. But I read here that:

      Even though most systems are byte-addressable, it makes sense for the processor to move as much data around as possible. This is done by the data bus, and the size of the data bus is where the names 8-bit system, 16-bit system, 32-bit system, 64-bit system, etc. come from. When the data bus is 8 bits wide, it can transfer 8 bits in a single memory operation. When the data bus is 32 bits wide (as is most common at the time of writing), at most, 32 bits can be moved in a single memory operation.

    This says that the size of the data bus is what gives an OS the name: 8-bit, 16-bit and so on. What is wrong with my understanding?
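
    The powers of two above are easy to tabulate mechanically, which makes the two roles concrete: the address-bus width bounds how much memory is reachable, while the data-bus width only sets how many bytes move per transfer. A small sketch of that arithmetic:

      # Byte-addressable memory reachable for a given address-bus width.
      GIB = 2 ** 30
      for width in (16, 32, 64):
          print(f"{width}-bit address bus: 2^{width} addresses = {2 ** width / GIB:.6g} GiB")

      # Bytes moved per memory operation for a given data-bus width.
      for width in (8, 16, 32, 64):
          print(f"{width}-bit data bus: {width // 8} bytes per transfer")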

  • Sign multiple domains with single Domain Key (dk-filter)

    - by Lashae
    Motivation: The private shopping website Gilt sends periodic update emails from giltgroupe.bounce.ed10.net; however, all of the mails are signed with the domain key of giltgroupe.com:

      mailed-by: giltgroupe.bounce.ed10.net
      signed-by: giltgroupe.com

    My story: I couldn't manage to sign x.com with y.com's domain key using dk-filter under Debian Lenny with Postfix. If I init the dk-filter service with the following arguments:

      DAEMON_OPTS="$DAEMON_OPTS -d x.com,y.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    dk-filter signs with domain x.com (d=x.com). If I change the daemon arguments to the following:

      DAEMON_OPTS="$DAEMON_OPTS -d x.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    then emails sent from y.com are not signed at all. The dk-keys.conf file is as follows:

      *:/var/dk-filter/y.com/mail

    I managed to do the same thing with DKIM and it works perfectly; however, DK doesn't seem to work. I have no problem signing y.com's emails with y.com's key and x.com's emails with x.com's key, which indicates there is no configuration problem. Do you have any experience/advice on making it possible to sign emails from multiple domains with a specific chosen domain's key?

  • nginx inserting extra characters in Multi-status reply body

    - by user125011
    Here's the setup. I've got one server running Apache/PHP hosting ownCloud. Among other things, I'm using it for CardDAV contact syncing. In order to make things work with my domain, I have an nginx server running on the frontend as a reverse proxy to the ownCloud server. My nginx config is as follows:

      server {
          listen 80;
          server_name cloud.mydomain.com;

          location / {
              proxy_set_header X-Forwarded-Host cloud.mydomain.com;
              proxy_set_header X-Forwarded-Proto http;
              proxy_set_header X-Forwarded-For $remote_addr;
              client_max_body_size 0;
              proxy_redirect off;
              proxy_pass http://server;
          }
      }

    The problem is that when my phone does a PROPFIND on the server, nginx adds extra characters to the content body that throw the phone off. Specifically, it prepends d611\r\n at the front of the body and appends 0\r\n\r\n to the end of the content. (I got this from Wireshark.) It also re-chunks the result. How do I get nginx to send the original content as-is?
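
    For what it's worth, the bytes described look exactly like HTTP/1.1 chunked transfer-encoding framing rather than corruption: d611 reads as a hexadecimal chunk length (54801 bytes) and 0\r\n\r\n is the terminating zero-length chunk, which fits the observation that nginx re-chunks the result. A minimal sketch of that framing, illustrative only and not part of the question:

      # HTTP/1.1 chunked framing: <size in hex>\r\n<data>\r\n ... 0\r\n\r\n
      def decode_chunked(body: bytes) -> bytes:
          out = bytearray()
          pos = 0
          while True:
              eol = body.index(b"\r\n", pos)
              size = int(body[pos:eol], 16)   # e.g. b"d611" -> 54801 bytes
              if size == 0:                   # the trailing 0\r\n\r\n ends the stream
                  return bytes(out)
              start = eol + 2
              out += body[start:start + size]
              pos = start + size + 2          # skip the \r\n after the chunk data

      sample = b"5\r\nhello\r\n0\r\n\r\n"
      assert decode_chunked(sample) == b"hello"

    If the phone's CardDAV client can't parse chunked replies, that would explain why the "extra" bytes break it even though the payload itself is intact.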

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous unrelated request has finished. The following code is only broken in this way on nginx, but runs perfectly on Apache. This call starts a background process and waits for it to complete so it can display the final result:

      $.ajax({
          type: 'GET',
          async: true,
          url: $(this).data('route'),
          data: $('input[name=data]').val(),
          dataType: 'json',
          success: function (data) { /* do stuff */ },
          error: function (data) { /* handle errors */ }
      });

    The below is called after the above, which on Apache requires 100ms to execute and repeats itself, showing progress for data being written in the background:

      checkStatusInterval = setInterval(function () {
          $.ajax({
              type: 'GET',
              async: false,
              cache: false,
              url: '/process-status?process=' + currentElement.attr('id'),
              dataType: 'json',
              success: function (data) { /* update progress bar and status message */ }
          });
      }, 1000);

    Unfortunately, when this script is run on nginx, the progress request never finishes even a single request until the first AJAX request that sent the data is done. If I change async to true in the above, one executes every interval, but none of them complete until that very first AJAX request finishes. Here is the main nginx conf file:

      #user nobody;
      worker_processes 1;

      #error_log logs/error.log;
      #error_log logs/error.log notice;
      #error_log logs/error.log info;

      #pid logs/nginx.pid;

      events {
          worker_connections 1024;
      }

      http {
          include mime.types;
          default_type application/octet-stream;
          server_names_hash_bucket_size 64;

          # configure temporary paths
          # nginx is started with param -p, setting nginx path to serverpack installdir
          fastcgi_temp_path temp/fastcgi;
          uwsgi_temp_path temp/uwsgi;
          scgi_temp_path temp/scgi;
          client_body_temp_path temp/client-body 1 2;
          proxy_temp_path temp/proxy;

          log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';

          #access_log logs/access.log main;

          # Sendfile copies data between one FD and other from within the kernel.
          # More efficient than read() + write(), since that requires transferring data to and from user space.
          sendfile on;

          # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet,
          # instead of using partial frames. This is useful for prepending headers before calling sendfile,
          # or for throughput optimization.
          tcp_nopush on;

          # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time.
          tcp_nodelay on;

          types_hash_max_size 2048;

          # Timeout for keep-alive connections. Server will close connections after this time.
          keepalive_timeout 90;

          # Number of requests a client can make over the keep-alive connection. This is set high for testing.
          keepalive_requests 100000;

          # allow the server to close the connection after a client stops responding. Frees up socket-associated memory.
          reset_timedout_connection on;

          # send the client a "request timed out" if the body is not loaded by this time. Default 60.
          client_header_timeout 20;
          client_body_timeout 60;

          # If the client stops reading data, free up the stale client connection after this much time. Default 60.
          send_timeout 60;

          # Size Limits
          client_body_buffer_size 64k;
          client_header_buffer_size 4k;
          client_max_body_size 8M;

          # FastCGI
          fastcgi_connect_timeout 60;
          fastcgi_send_timeout 120;
          fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value
          fastcgi_buffer_size 64k;
          fastcgi_buffers 4 64k;
          fastcgi_busy_buffers_size 128k;
          fastcgi_temp_file_write_size 128k;

          # Caches information about open FDs, frequently accessed files.
          open_file_cache max=200000 inactive=20s;
          open_file_cache_valid 30s;
          open_file_cache_min_uses 2;
          open_file_cache_errors on;

          # Turn on gzip output compression to save bandwidth.
          # http://wiki.nginx.org/HttpGzipModule
          gzip on;
          gzip_disable "MSIE [1-6]\.(?!.*SV1)";
          gzip_http_version 1.1;
          gzip_vary on;
          gzip_proxied any;
          #gzip_proxied expired no-cache no-store private auth;
          gzip_comp_level 6;
          gzip_buffers 16 8k;
          gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

          # show all files and folders
          autoindex on;

          server {
              # access from localhost only
              listen 127.0.0.1:80;
              server_name localhost;
              root www;

              # the following default "catch-all" configuration allows access to the server from outside.
              # please ensure your firewall allows access to tcp/port 80. check your "skype" config.
              # listen 80;
              # server_name _;

              log_not_found off;
              charset utf-8;
              access_log logs/access.log main;

              # handle files in the root path /www
              location / {
                  index index.php index.html index.htm;
              }

              #error_page 404 /404.html;

              # redirect server error pages to the static page /50x.html
              # error_page 500 502 503 504 /50x.html;
              location = /50x.html {
                  root www;
              }

              # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
              #
              location ~ \.php$ {
                  try_files $uri =404;
                  fastcgi_pass 127.0.0.1:9100;
                  fastcgi_index index.php;
                  fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                  include fastcgi_params;
              }

              # add expire headers
              location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ {
                  expires 30d;
              }

              # deny access to .htaccess files (if Apache's document root concurs with nginx's one)
              # deny access to git & svn repositories
              location ~ /(\.ht|\.git|\.svn) {
                  deny all;
              }
          }

          # include config files of "enabled" domains
          include domains-enabled/*.conf;
      }

    Here is the enabled domain conf file:

      access_log off;
      access_log C:/server/www/test.dev/logs/access.log;
      error_log C:/server/www/test.dev/logs/error.log;

      # HTTP Server
      server {
          listen 127.0.0.1:80;
          server_name test.dev;
          root C:/server/www/test.dev/public;
          index index.php;
          rewrite_log on;
          default_type application/octet-stream;
          #include /etc/nginx/mime.types;

          # Include common configurations.
          include domains-common/location.conf;
      }

      # HTTPS server
      server {
          listen 443 ssl;
          server_name test.dev;
          root C:/server/www/test.dev/public;
          index index.php;
          rewrite_log on;
          default_type application/octet-stream;
          #include /etc/nginx/mime.types;

          # Include common configurations.
          include domains-common/location.conf;
          include domains-common/ssl.conf;
      }

    Contents of ssl.conf:

      # OpenSSL for HTTPS connections.
      ssl on;
      ssl_certificate C:/server/bin/openssl/certs/cert.pem;
      ssl_certificate_key C:/server/bin/openssl/certs/cert.key;
      ssl_session_timeout 5m;
      ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
      ssl_ciphers HIGH:!aNULL:!MD5;
      ssl_prefer_server_ciphers on;

      # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
      location ~ \.php$ {
          try_files $uri =404;
          fastcgi_param HTTPS on;
          fastcgi_pass 127.0.0.1:9100;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          include fastcgi_params;
      }

    Contents of location.conf:

      # Remove trailing slash to please Laravel routing system.
      if (!-d $request_filename) {
          rewrite ^/(.+)/$ /$1 permanent;
      }

      location / {
          try_files $uri $uri/ /index.php?$query_string;
      }

      # We don't need .ht files with nginx.
      location ~ /(\.ht|\.git|\.svn) {
          deny all;
      }

      # Added cache headers for images.
      location ~* \.(png|jpg|jpeg|gif)$ {
          expires 30d;
          log_not_found off;
      }

      # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks.
      location ~* \.(js|css)$ {
          expires 3h;
          log_not_found off;
      }

      # Add expire headers.
      location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ {
          expires 30d;
      }

      # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
      location ~ \.php$ {
          try_files $uri /index.php =404;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          include fastcgi_params;
          fastcgi_pass 127.0.0.1:9100;
      }

    Any ideas where this is going wrong?
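
    Independent of the browser, the serialization can be demonstrated by firing the long request and a status request concurrently and timing both. A small sketch using only the Python standard library; the two URLs are placeholders standing in for the routes above:

      import threading
      import time
      import urllib.request

      def fetch(url):
          t0 = time.perf_counter()
          with urllib.request.urlopen(url, timeout=120) as resp:
              resp.read()
          print(f"{url} finished after {time.perf_counter() - t0:.1f}s")

      slow = threading.Thread(target=fetch, args=("http://test.dev/start-long-job",))  # placeholder route
      slow.start()
      time.sleep(1)  # let the long request begin first
      fetch("http://test.dev/process-status?process=123")  # placeholder id
      slow.join()

      # If the status request takes nearly as long as the long job, the backend is
      # serializing them (e.g. a single FastCGI worker, or PHP session locking),
      # not the browser's async flag.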

  • fail2ban log parsing too slow on Raspberry Pi - options? [migrated]

    - by Gordon Morehouse
    I'm running fail2ban on a Raspberry Pi at 950MHz which I cannot overclock further. The Pi is occasionally subject to SYN floods on particular ports. I've set up iptables to throttle the rate of SYNs on the port of interest; when the throttle limits are exceeded, hosts which send SYNs are dropped into the REJECT chain and the particular SYN packet which exceeded the limit is logged. fail2ban then watches for these logged SYNs and, after seeing a few, temporarily bans the host for a short time (this is a transient issue in the app I'm working with).

    The problem is that the SYN floods can occasionally reach rates which are too fast for fail2ban to keep up with; I'll see 20-40 log messages per second, and eventually fail2ban falls behind and becomes ineffective. To add insult to injury, it continues consuming a LOT of CPU as it tries to catch up. I have verified that DROP chained packets from hosts already banned by fail2ban are not logged, and thus do not add to its load.

    What are my options here? I have a few ideas, but no clear path forward:

    1. Could I make the log-parse regex "easier" so it takes fewer cycles? Would using iptables --log-prefix to put a token near the start of the log message, and/or otherwise simplifying/altering the fail2ban regex, help? Here is the current fail2ban config line containing a regex:

      failregex = kernel:.*?SRC=(?:::f{4,6}:)?(?P<host>[\w\-.^_]+) DST.*?SYN

    2. Is there a faster way for fail2ban to watch for the packets exceeding the limits than parsing kern.log?

    3. Could fail2ban be run under PyPy instead of CPython with minimal nonstandard wizardry (the OS is Raspbian 7, so, mostly Debian 7)?

    4. Is there something better than fail2ban that I could use to watch for the packets which exceed the SYN limits, and after N exceeds in X seconds, temporarily put the offending IP into the iptables DROP bucket, and take it out when the ban timer expires?

    Again, I'd vastly prefer a solution that uses as much software available in Debian as possible, though I can build Debian packages in a pinch.
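
    On the first idea, candidate regexes can be timed before touching the jail: run the current failregex and a --log-prefix-anchored variant over a sample log line and compare. A rough sketch, assuming Python 3 on the Pi; the log line and the SYNLIMIT: prefix are made-up illustrations, not taken from a real system:

      import re
      import timeit

      # Hypothetical kernel log line of the kind iptables writes for a limited SYN.
      line = ("Mar  1 12:00:00 pi kernel: [12345.678] SYNLIMIT: IN=eth0 OUT= "
              "MAC=de:ad:be:ef:00:01 SRC=203.0.113.9 DST=192.0.2.1 LEN=60 SYN URGP=0")

      # Current pattern (host part as in the question) vs. one anchored on a prefix token.
      current  = re.compile(r"kernel:.*?SRC=(?:::f{4,6}:)?(?P<host>[\w\-.^_]+) DST.*?SYN")
      prefixed = re.compile(r"SYNLIMIT: .*?SRC=(?P<host>\S+) ")

      for name, pat in (("current", current), ("prefixed", prefixed)):
          t = timeit.timeit(lambda: pat.search(line), number=100_000)
          print(f"{name}: {t:.2f}s per 100k lines")

    If the prefixed variant is meaningfully faster on the Pi's CPU, that at least tells you whether regex cost is the bottleneck before redesigning the whole pipeline.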

  • Apache + Codeigniter + New Server + Unexpected Errors

    - by ngl5000
    Alright, here is the situation: I used to have my CodeIgniter site at Bluehost, where I did not have root access; I have since moved that site to Rackspace. I have not changed any of the PHP code, yet there has been some unexpected behavior.

    Unexpected behavior:

      http://mysite.com/robots.txt
        Both old and new setups resolve to the robots file.

      http://mysite.com/robots.txt/
        The old Bluehost setup resolves to my CodeIgniter 404 error page. The Rackspace config resolves to:

          Not Found
          The requested URL /robots.txt/ was not found on this server.

    This instance leads me to believe that there could be a problem with my mod_rewrite rules or lack thereof. The first case produces the error correctly through PHP, while in the second scenario the server seems to handle the error itself. The next instance of this problem is even more troubling:

      http://mysite.com/search/term/9 x 1-1%2F2 white/

    The new site results in:

      Bad Request
      Your browser sent a request that this server could not understand.

    The old site results in the actual page being loaded and the search term being unencoded. I have to assume this has something to do with the fact that when I went to the new server I went from a root-level .htaccess file to an httpd.conf file plus the virtual host files default and default-ssl. Here they are:

    Default file:

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          ServerName mysite.com
          DocumentRoot /var/www

          <Directory />
              Options +FollowSymLinks
              AllowOverride None
          </Directory>

          <Directory /var/www>
              Options -Indexes +FollowSymLinks -MultiViews
              AllowOverride None
              Order allow,deny
              allow from all

              RewriteEngine On
              RewriteBase /

              # force no www. (also does the IP thing)
              RewriteCond %{HTTPS} !=on
              RewriteCond %{HTTP_HOST} !^mysite\.com [NC]
              RewriteRule ^(.*)$ http://mysite.com/$1 [R=301,L]

              RewriteCond %{REQUEST_FILENAME} !-f
              RewriteCond %{REQUEST_FILENAME} !-d
              RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]

              # index.php remove any index.php parts
              RewriteCond %{THE_REQUEST} /index\.(php|html)
              RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

              # codeigniter direct
              RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
              RewriteRule ^.*$ index.php [L]
          </Directory>

          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
              AllowOverride None
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>

          ErrorLog ${APACHE_LOG_DIR}/error.log

          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn

          CustomLog ${APACHE_LOG_DIR}/access.log combined

          Alias /doc/ "/usr/share/doc/"
          <Directory "/usr/share/doc/">
              Options Indexes MultiViews FollowSymLinks
              AllowOverride None
              Order deny,allow
              Deny from all
              Allow from 127.0.0.0/255.0.0.0 ::1/128
          </Directory>
      </VirtualHost>

    Default-ssl file:

      <IfModule mod_ssl.c>
      <VirtualHost _default_:443>
          ServerAdmin webmaster@localhost
          ServerName mysite.com
          DocumentRoot /var/www

          <Directory />
              Options +FollowSymLinks
              AllowOverride None
          </Directory>

          <Directory /var/www>
              Options -Indexes +FollowSymLinks -MultiViews
              AllowOverride None
              Order allow,deny
              allow from all

              RewriteEngine On
              RewriteBase /
              RewriteCond %{SERVER_PORT} !^443
              RewriteRule ^ https://mysite.com%{REQUEST_URI} [R=301,L]

              RewriteCond %{REQUEST_FILENAME} !-f
              RewriteCond %{REQUEST_FILENAME} !-d
              RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]

              # index.php remove any index.php parts
              RewriteCond %{THE_REQUEST} /index\.(php|html)
              RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

              # codeigniter direct
              RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
              RewriteRule ^.*$ index.php [L]
          </Directory>

          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
              AllowOverride None
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>

          ErrorLog ${APACHE_LOG_DIR}/error.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

          Alias /doc/ "/usr/share/doc/"
          <Directory "/usr/share/doc/">
              Options Indexes MultiViews FollowSymLinks
              AllowOverride None
              Order deny,allow
              Deny from all
              Allow from 127.0.0.0/255.0.0.0 ::1/128
          </Directory>

          # SSL Engine Switch:
          # Enable/Disable SSL for this virtual host.
          SSLEngine on

          # Use our self-signed certificate by default
          SSLCertificateFile /etc/apache2/ssl/certs/www.mysite.com.crt
          SSLCertificateKeyFile /etc/apache2/ssl/private/www.mysite.com.key

          # A self-signed (snakeoil) certificate can be created by installing
          # the ssl-cert package. See
          # /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
          # If both key and certificate are stored in the same file, only the
          # SSLCertificateFile directive is needed.
          # SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
          # SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

          # Server Certificate Chain:
          # Point SSLCertificateChainFile at a file containing the
          # concatenation of PEM encoded CA certificates which form the
          # certificate chain for the server certificate. Alternatively
          # the referenced file can be the same as SSLCertificateFile
          # when the CA certificates are directly appended to the server
          # certificate for convinience.
          #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt

          # Certificate Authority (CA):
          # Set the CA certificate verification path where to find CA
          # certificates for client authentication or alternatively one
          # huge file containing all of them (file must be PEM encoded)
          # Note: Inside SSLCACertificatePath you need hash symlinks
          # to point to the certificate files. Use the provided
          # Makefile to update the hash symlinks after changes.
          #SSLCACertificatePath /etc/ssl/certs/
          #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt

          # Certificate Revocation Lists (CRL):
          # Set the CA revocation path where to find CA CRLs for client
          # authentication or alternatively one huge file containing all
          # of them (file must be PEM encoded)
          # Note: Inside SSLCARevocationPath you need hash symlinks
          # to point to the certificate files. Use the provided
          # Makefile to update the hash symlinks after changes.
          #SSLCARevocationPath /etc/apache2/ssl.crl/
          #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl

          # Client Authentication (Type):
          # Client certificate verification type and depth. Types are
          # none, optional, require and optional_no_ca. Depth is a
          # number which specifies how deeply to verify the certificate
          # issuer chain before deciding the certificate is not valid.
          #SSLVerifyClient require
          #SSLVerifyDepth 10

          # Access Control:
          # With SSLRequire you can do per-directory access control based
          # on arbitrary complex boolean expressions containing server
          # variable checks and other lookup directives. The syntax is a
          # mixture between C and Perl. See the mod_ssl documentation
          # for more details.
          #<Location />
          #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
          #             and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
          #             and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
          #             and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
          #             and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \
          #            or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
          #</Location>

          # SSL Engine Options:
          # Set various options for the SSL engine.
          # o FakeBasicAuth:
          #   Translate the client X.509 into a Basic Authorisation. This means that
          #   the standard Auth/DBMAuth methods can be used for access control. The
          #   user name is the `one line' version of the client's X.509 certificate.
          #   Note that no password is obtained from the user. Every entry in the user
          #   file needs this password: `xxj31ZMTZzkVA'.
          # o ExportCertData:
          #   This exports two additional environment variables: SSL_CLIENT_CERT and
          #   SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
          #   server (always existing) and the client (only existing when client
          #   authentication is used). This can be used to import the certificates
          #   into CGI scripts.
          # o StdEnvVars:
          #   This exports the standard SSL/TLS related `SSL_*' environment variables.
          #   Per default this exportation is switched off for performance reasons,
          #   because the extraction step is an expensive operation and is usually
          #   useless for serving static content. So one usually enables the
          #   exportation for CGI and SSI requests only.
          # o StrictRequire:
          #   This denies access when "SSLRequireSSL" or "SSLRequire" applied even
          #   under a "Satisfy any" situation, i.e. when it applies access is denied
          #   and no other module can change it.
          # o OptRenegotiate:
          #   This enables optimized SSL connection renegotiation handling when SSL
          #   directives are used in per-directory context.
          #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
          <FilesMatch "\.(cgi|shtml|phtml|php)$">
              SSLOptions +StdEnvVars
          </FilesMatch>
          <Directory /usr/lib/cgi-bin>
              SSLOptions +StdEnvVars
          </Directory>

          # SSL Protocol Adjustments:
          # The safe and default but still SSL/TLS standard compliant shutdown
          # approach is that mod_ssl sends the close notify alert but doesn't wait for
          # the close notify alert from client. When you need a different shutdown
          # approach you can use one of the following variables:
          # o ssl-unclean-shutdown:
          #   This forces an unclean shutdown when the connection is closed, i.e. no
          #   SSL close notify alert is send or allowed to received. This violates
          #   the SSL/TLS standard but is needed for some brain-dead browsers. Use
          #   this when you receive I/O errors because of the standard approach where
          #   mod_ssl sends the close notify alert.
          # o ssl-accurate-shutdown:
          #   This forces an accurate shutdown when the connection is closed, i.e. a
          #   SSL close notify alert is send and mod_ssl waits for the close notify
          #   alert of the client. This is 100% SSL/TLS standard compliant, but in
          #   practice often causes hanging connections with brain-dead browsers. Use
          #   this only for browsers where you know that their SSL implementation
          #   works correctly.
          # Notice: Most problems of broken clients are also related to the HTTP
          # keep-alive facility, so you usually additionally want to disable
          # keep-alive for those clients, too. Use variable "nokeepalive" for this.
          # Similarly, one has to force some clients to use HTTP/1.0 to workaround
          # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
          # "force-response-1.0" for this.
          BrowserMatch "MSIE [2-6]" \
              nokeepalive ssl-unclean-shutdown \
              downgrade-1.0 force-response-1.0
          # MSIE 7 and newer should be able to use keepalive
          BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

    httpd.conf file: just a lot of stuff from HTML5 Boilerplate; I will post it if need be.

    Old .htaccess file:

      <IfModule mod_rewrite.c>
          # index.php remove any index.php parts
          RewriteCond %{THE_REQUEST} /index\.(php|html)
          RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

          RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
          RewriteRule ^(.*)/$ /$1 [r=301,L]

          # codeigniter direct
          RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
          RewriteRule ^(.*)$ /index.php/$1 [L]
      </IfModule>

    Any help would be hugely appreciated!!

  • Apache2 - 500 internal server error

    - by Lucio Coire Galibone
    I'm running a VPS with Linux CentOS 6, 4 GB of RAM, 10 GB of disk and 2 virtual CPUs (Intel(R) Xeon(R) CPU L5640 @ 2.27GHz). As my host says, each virtual CPU is at least 0.5 physical CPU. At certain times of the day, those with more traffic, accessing my PHP script intermittently returns "500 Internal Server Error". I turned Apache logging up to debug level, and enabled PHP logging with E_ALL, but I can't find any reference to the 500 error in any log (and I checked the right logs!). There is no .htaccess file in the script's path. The strange thing is that the error starts at the first PHP line of the script (the preceding HTML displays correctly, but at the first PHP line the script sends the 500 error). The CPU load is always fine (max 0.15 0.08 0.01), and RAM is close to 95%, but it only hit swap twice in a month, with 2-5 MB. Apache runs with prefork and these values:

      <IfModule prefork.c>
          StartServers 8
          MinSpareServers 5
          MaxSpareServers 20
          ServerLimit 280
          MaxClients 280
          MaxRequestsPerChild 4000
      </IfModule>

    Everything works correctly and I get no errors in quiet times, but I start receiving errors when traffic rises (6-9000 visits per hour). Can I solve the problem by increasing resources? (I can upgrade RAM up to 16 GB.) Could it be caused by reaching MaxClients (but Apache would write that to the log, right?)? If I upgrade RAM to 6 or 8 GB, do I calculate the MaxClients value with this?

      MaxClients = Total RAM dedicated to the web server / Max child process size

    The max child process size is around 20 MB. What else could the problem be? Thanks in advance.
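
    The MaxClients formula quoted above is easy to tabulate. A small sketch using the ~20 MB child size from the question; the 1 GB reserved for the OS and other services is an illustrative assumption, not a measured figure:

      # MaxClients ~= RAM available to Apache / max child process size
      child_mb = 20        # from the question
      reserve_mb = 1024    # assumed headroom for the OS and other services

      for total_gb in (4, 6, 8, 16):
          available_mb = total_gb * 1024 - reserve_mb
          print(f"{total_gb} GB RAM -> MaxClients ~ {available_mb // child_mb}")
      # Note: with these assumptions, 4 GB supports roughly 150 children,
      # well under the configured MaxClients of 280.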

  • Best Practical RT, sorting email into queues automatically using procmail

    - by user52095
    I'm trying to get incoming e-mail to go automatically into whichever queue/ticket it relates to, or to create a new ticket if none exists and the right queue e-mail (as set up in the web interface) is used. I will have too many queues to have two line items within mailgate per queue. A similar issue was discussed here (http://serverfault.com/questions/104779/procmail-pipe-to-program-otherwise-return-error-to-sender), but I thought it best to open a new case instead of tagging on to what appeared to be an answer to that person's query. I'm able to send and receive e-mail (via Postfix) to the default rt user, and this user successfully accepts all e-mail for the relevant domain. I have no idea where the e-mail goes - it's successfully delivered, but it does not update existing tickets (with a Subject line match) and it does not create any new ones. Here's an example from my ./procmail.log:

      procmail: [23048] Mon Aug 23 14:26:01 2010
      procmail: Assigning "MAILDOMAIN=rt.mydomain.com "
      procmail: Assigning "RT_MAILGATE=/opt/rt3/bin/rt-mailgate "
      procmail: Assigning "RT_URL=http://rt.mydomain.com/ "
      procmail: Assigning "LOGABSTRACT=all "
      procmail: Skipped " "
      procmail: Skipped " "
      procmail: Assigning "LASTFOLDER={ "
      procmail: Opening "{ "
      procmail: Acquiring kernel-lock
      procmail: Notified comsat: "rt@18337:./{ "
      From [email protected] Mon Aug 23 14:26:01 2010
      Subject: RE: [RT.mydomain.com #1] Test Ticket
      Folder: { 1616

    Does the "Notified comsat" portion mean that it notified RT? The contents of my ./procmailrc:

      # Preliminaries
      SHELL=/bin/sh            # Use the Bourne shell (check your path!)
      #MAILDIR=${HOME}         # First check what your mail directory is!
      MAILDIR="/var/mail/rt/"
      LOGFILE="home/rt//procmail.log"
      LOG="--- Logging ${LOGFILE} for ${LOGNAME}, "
      VERBOSE=yes
      MAILDOMAIN="rt.mydomain.com"
      RT_MAILGATE="/opt/rt3/bin/rt-mailgate"
      #RT_MAILGATE="/usr/local/bin/rt-mailgate"
      RT_URL="http://rt.mydomain.com/"
      LOGABSTRACT=all

      :0
      {
        # the following line extracts the recipient from Received-headers.
        # Simply using the To: does not work, as tickets are often created
        # by sending a CC/BCC to RT
        TO=`formail -c -xReceived: |grep $MAILDOMAIN |sed -e 's/.*for *<*\(.*\)>* *;.*$/\1/'`
        QUEUE=`echo $TO| $HOME/get_queue.pl`
        ACTION=`echo $TO| $HOME/get_action.pl`

        :0 h b w
        |/usr/bin/perl $RT_MAILGATE --queue $QUEUE --action $ACTION --url $RT_URL
      }

    I know that my get_queue.pl and get_action.pl scripts work, as they have been tested previously. Any help and/or guidance you can give would be greatly appreciated. Nicôle

  • Asus n61Ja notebook bios update

    - by zKs
    I wanted to update the BIOS with an official BIOS update, from version 207 to 211. I didn't use WinFlash; I used EasyFlash in the BIOS. Everything seemed to be going okay: it deleted the old files, wrote the new ones, and verified them. Then it said "shutdown in/after 2 seconds" and it shut down. After that, nothing happened anymore. The power button is completely unresponsive. The battery light was still on, and I'm not sure if I should have just waited... I didn't, though; I thought I had to remove the battery and cut the power completely to be able to start it up again. So I'm wondering: what are my options here? My warranty has expired and I don't really have the money to send it in and pay hundreds of bucks for repairs. Is there anything I can try? A CMOS battery reset? Anything??? Please help me out! I would be very grateful :) PS: What was sort of odd, by the way, was that EasyFlash said something like the BIOS was unsigned, and asked if I wanted to 'force' the flash. It was with 100% certainty the correct update from the Asus.com support site, so I didn't take that warning seriously.

  • nginx proxy_pass POST 404 errors

    - by Scott
    I have nginx proxying to an app server, with the following configuration:

      location /app/ {
          # send to app server without the /app qualifier
          rewrite /app/(.*)$ /$1 break;
          proxy_set_header Host $http_host;
          proxy_pass http://localhost:9001;
          proxy_redirect http://localhost:9001 http://localhost:9000;
      }

    Any request for /app goes to :9001, whereas the default site is hosted on :9000. GET requests work fine, but whenever I submit a POST request to /app/any/post/url it results in a 404 error. Hitting the URL directly in the browser via GET /app/any/post/url reaches the app server as expected. I found other people online with similar problems and added proxy_set_header Host $http_host;, but this hasn't resolved my issue. Any insights are appreciated. Thanks. Full config below:

      server {
          listen 9000; ## listen for ipv4; this line is default and implied
          #listen [::]:80 default_server ipv6only=on; ## listen for ipv6

          root /home/scott/src/ph-dox/html; # root ../html; TODO: how to do relative paths?
          index index.html index.htm;

          # Make site accessible from http://localhost/
          server_name localhost;

          location / {
              # First attempt to serve request as file, then
              # as directory, then fall back to displaying a 404.
              try_files $uri $uri/ /index.html;
              # Uncomment to enable naxsi on this location
              # include /etc/nginx/naxsi.rules
          }

          location /app/ {
              # rewrite here sends to app server without the /app qualifier
              rewrite /app/(.*)$ /$1 break;
              proxy_set_header Host $http_host;
              proxy_pass http://localhost:9001;
              proxy_redirect http://localhost:9001 http://localhost:9000;
          }

          location /doc/ {
              alias /usr/share/doc/;
              autoindex on;
              allow 127.0.0.1;
              allow ::1;
              deny all;
          }
      }
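
    One way to localize the fault is to compare the same path four ways: GET and POST, through the proxy on :9000 and directly against the app server on :9001. A minimal sketch, assuming both ports are reachable locally and that /any/post/url stands in for a real route:

      import http.client

      def probe(port, method, path):
          conn = http.client.HTTPConnection("localhost", port, timeout=10)
          conn.request(method, path, body=b"" if method == "POST" else None)
          resp = conn.getresponse()
          print(f"{method} :{port}{path} -> {resp.status} {resp.reason}")
          conn.close()

      # Through the nginx proxy (port 9000) and directly to the app server (9001).
      for method in ("GET", "POST"):
          probe(9000, method, "/app/any/post/url")
          probe(9001, method, "/any/post/url")

      # If POST works on :9001 but 404s on :9000 while GET works on both,
      # the rewrite/proxy step is the culprit rather than the app's routing.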

  • SharePoint Records Center Submitted E-mail Records not picked up

    - by Kenneth Verburg
    We have set up a new SharePoint 2007 site with a Records Repository. We're using Exchange 2007 Managed Folders to route e-mails to this repository based on the 'label' attached to the e-mail, as set in the Exchange 2007 journaling options. E-mails added to a Managed Folder get sent to SharePoint and end up in the "Submitted E-mail Records" list of the Records Repository. That's according to plan, but the e-mails are not then routed to the respective document library as defined by the label. Instead, an error appears in the event viewer for every e-mail in the Submitted E-mail Records list, on every interval of the Records Repository schedule (set to every two minutes for testing purposes): "Value cannot be null. Parameter name: g". Sending a document from the SharePoint site itself to the Records Repository via the Send To... link works fine, but e-mails get stuck in the list... We have set up document libraries in the Repository both with and without content types (with names matching the label and the Record Routing rule). Any ideas what could be wrong? This is in the event log; every two minutes the following error appears in the Application Log:

      Source: Office SharePoint Server
      Category: Records Center
      Type: Error
      Event ID: 4975
      User: N/A
      Computer: SPS2007
      Description: Value cannot be null. Parameter name: g
      For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

  • Esx servers in a DMZ

    - by James
    I have two ESX 3.5 servers in a DMZ. I can access these servers on any port from my LAN via a VPN. Servers in the DMZ are unable to initiate connections back to the LAN, for obvious reasons. I have a vCenter server on my LAN and can initially connect to the ESX servers fine. However, the ESX servers then try to send a heartbeat back to the vCenter server on udp/902; obviously this does not get back to the vCenter server, which then marks the ESX servers as not responding and disconnects. There are two broad solutions I can think of:

    1. Try to tell vCenter to ignore missing heartbeats. The best I can do here is delay the disconnect by 3 minutes.
    2. Try some clever network solution; however, here again I am at a loss.

    Note: the vCenter server is on a LAN and cannot be given a public IP, so firewall rules back will not work. And I cannot set up a VPN from the DMZ to the LAN.

    I am adding the following explanation from the comments: OK, maybe this is the bit I'm not explaining well. The DMZ is at a remote site, an entirely independent network (network 1). The vCenter server is on our office LAN (network 2). Network 2 can connect to any machine on any port on network 1, but network 1 is not allowed to initiate a connection to network 2. Any traffic destined for network 2 from network 1 gets dropped by the firewall as traffic to a non-routable address. The only solution I can think of is setting up a VPN from network 1 to network 2, but this is not acceptable. So, any clever folk out there have any ideas? J

  • Mail server: can't connect via POP/IMAP

    - by MelkerOVan
    I've followed this guide on setting up a mail server on my dedicated server. I've been able to send mail from the PHP application I'm using and from the Linux command line (using telnet, PHP, etc.). The problem is that I cannot connect to the server via IMAP/POP, which I've set up using Courier. I've tried Thunderbird, but it complains that the username or password is wrong. I doubt it's the username/password, but I don't know how to troubleshoot this. Edit: here are the messages in mail.log:

      Jan 9 22:43:38 mail authdaemond: received auth request, service=imap, authtype=login
      Jan 9 22:43:38 mail authdaemond: authmysql: trying this module
      Jan 9 22:43:38 mail authdaemond: SQL query: SELECT id, crypt, "", uid, gid, home, "", "", name, "" FROM users WHERE id = '[email protected]' AND (enabled=1)
      Jan 9 22:43:38 mail authdaemond: password matches successfully
      Jan 9 22:43:38 mail authdaemond: authmysql: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/spool/mail/virtual, [email protected], fullname=peter, maildir=<null>, quota=<null>, options=<null>
      Jan 9 22:43:38 mail authdaemond: authmysql: clearpasswd=<null>, passwd=password
      Jan 9 22:43:38 mail authdaemond: Authenticated: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/spool/mail/virtual, [email protected], fullname=peter, maildir=<null>, quota=<null>, options=<null>
      Jan 9 22:43:38 mail authdaemond: Authenticated: clearpasswd=peter, passwd=password
      Jan 9 22:43:38 mail imapd: chdir Maildir: No such file or directory
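
    The same login can be exercised from the client side without Thunderbird in the way, which separates an authentication failure from a mailbox problem (note that the log's final line is a chdir Maildir error after a successful authentication). A minimal sketch, assuming plain IMAP is exposed on port 143; the hostname and credentials are placeholders:

      import imaplib

      HOST = "mail.example.com"        # placeholder: your server
      USER = "[email protected]"
      PASSWORD = "secret"              # placeholder

      conn = imaplib.IMAP4(HOST, 143)
      conn.debug = 4                   # echo the IMAP dialogue
      try:
          conn.login(USER, PASSWORD)   # raises imaplib.IMAP4.error on failure
          print("login OK; selecting INBOX:", conn.select("INBOX"))
      finally:
          conn.logout()

    If login succeeds here but SELECT fails, the account authenticates fine and the missing Maildir (wrong path or never created) is the real fault, not the password.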

  • can't get php mail() working on Ubuntu desktop version with sendmail and postfix

    - by user36428
    I'm running Ubuntu 9.10 LAMP and trying to do a simple email test with PHP, and no email gets sent:

      mail("[email protected]", "eric-linux test", "test") or die("can't send mail");

    I get no errors from PHP when running that script. In my php.ini file is:

      sendmail_path = /usr/lib/sendmail -t -i

      $ sudo ps aux | grep sendmail
      eric  2486  0.0  0.4  8368 2344 pts/0 T  14:52 0:00 sendmail -s "Hello world" [email protected]
      eric  8747  0.0  0.3  5692 1616 pts/2 T  16:18 0:00 sendmail
      eric  8749  0.0  0.3  5692 1636 pts/2 T  16:18 0:00 sendmail start
      eric  9190  0.0  0.3  5692 1636 pts/2 T  19:12 0:00 sendmail start
      eric  9192  0.0  0.3  5692 1616 pts/2 T  19:12 0:00 sendmail
      eric  9425  0.0  0.3  5692 1620 pts/1 T  19:37 0:00 sendmail
      eric  9427  0.0  0.3  6584 1636 pts/1 T  19:37 0:00 sendmail restart
      eric  9429  0.0  0.3  5692 1636 pts/1 T  19:38 0:00 /usr/lib/sendmail restart
      eric  9432  0.0  0.1  3040  804 pts/1 R+ 19:38 0:00 grep --color=auto sendmail

    When I run "sendmail start" it just hangs there doing nothing. I also installed Postfix to see if it would help, but it didn't. I tried port 25:

      eric@eric-linux:~$ telnet localhost 25
      Trying ::1...
      Trying 127.0.0.1...
      Connected to localhost.
      Escape character is '^]'.
      220 eric-linux ESMTP Postfix (Ubuntu)

    Thanks
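
    Since telnet shows Postfix answering on port 25, a useful next check is to hand a message straight to it, bypassing PHP's sendmail_path entirely. A minimal sketch, assuming Python 3 is available; the addresses are placeholders:

      import smtplib
      from email.message import EmailMessage

      msg = EmailMessage()
      msg["From"] = "eric@localhost"        # placeholder sender
      msg["To"] = "[email protected]"      # placeholder recipient
      msg["Subject"] = "eric-linux test"
      msg.set_content("test")

      with smtplib.SMTP("localhost", 25) as s:
          s.set_debuglevel(1)    # print the SMTP dialogue
          s.send_message(msg)    # if this works, the problem is PHP -> sendmail, not Postfix

    If this delivers but PHP's mail() doesn't, the sendmail_path binary (note the stopped "T"-state sendmail processes in the ps output) is the thing to investigate.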

  • How Could My Website Be Hacked

    - by Kiewic
    Hi! I wonder how this could happen. Someone deleted my index.php files from all my domains and put up his own index.php files with the following message: Hacked by Z4i0n - Fatal Error - 2009 [Fatal Error Group Br] Site desfigurado por Z4i0n Somos: Elemento_pcx - s4r4d0 - Z4i0n - Belive Gr33tz: W4n73d - M4v3rick - Observing - MLK - l3nd4 - Soul_Fly 2009. My domain has many subdomains, but only the subdomains that can be accessed with a specific user were hacked; the rest weren't affected. I assumed that someone got in through SSH, because some of these subdomains are empty and Google doesn't know about them. But I checked the access log using the last command, and it didn't show any activity over SSH or FTP on the day of the attack or in the seven days before. Does anybody have an idea? I have already changed my passwords. What do you recommend I do? UPDATE: My website is hosted at Dreamhost. I suppose they have the latest patches installed. But while I was looking into how they got into my server, I found weird things. In one of my subdomains there were many scripts to execute commands on the server, upload files, send mass emails and display compromising information. These files had been there since last December!! I have deleted those files and I'm looking for more malicious files. Maybe the security hole is an old and forgotten PHP application. This application has a file upload form protected by a password system based on sessions, and one of the malicious scripts was in the uploads directory. This doesn't seem like an SQL injection attack. Thanks for your help.

  • DHCP Server on local machine

    - by EralpB
    Hello, I am trying to set up a dhcp3-server on Ubuntu, but my question is more generic. If the DHCP server is in a black box and all clients are connected to it, I think I understand what is going on; but when the DHCP server is installed on one of the "clients", that confuses me. When I send a DHCP packet from that client to the DHCP server, will my ethernet card read and write at the same time, or will it handle the exchange internally without writing any data to the ethernet cable? It's the first time I am encountering these networking things, so I am a little bit confused. I also wonder: if I am in a big network, with LAN IP let's say 192.168.0.100, and I install a DHCP server on my computer, can other computers accidentally get an IP from my DHCP server? Every computer has one ethernet card (if that matters), and every computer is connected to one router. I guess the answer is no, because the broadcast message won't reach my computer: when the router receives a DHCP discovery packet it will answer itself, and it won't let the other computers know about it because they don't need to; and without the router forwarding that packet, it cannot travel further. I'd be glad if someone enlightened me. Thank you very much.

  • Prolific USB-to-Serial Comm Port significantly slower under Windows 7 comparing to Windows XP

    - by Dmitry S
    Not sure if this question should be asked here or on SuperUser, but if we get an answer here it may be useful for others. I am using a Prolific USB-to-Serial adapter, based on the Prolific chip, to talk to a device on a serial port. I have the latest version of the driver installed: 1.3.0 (2010-7-15). When I use my device with this adapter on my main Windows 7 (32-bit) system, it takes 8-9 seconds to send a command through to the device. However, when I do the same thing on a different Windows XP system (an old laptop I borrowed for testing), it only takes 2-3 seconds. I have made sure that the port settings and other variables are the same on both systems. I also tested on a third laptop (also running Windows 7) and again got a significant delay. So the question is whether anyone else has experienced the same problem and found a solution. I would like to avoid moving to an XP system for what I need to achieve, so that's my last option. Thanks in advance.
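
    To quantify the delay independent of the application, the write/read round trip can be timed directly on both machines. A rough sketch, assuming Python with the third-party pyserial package is installed; the COM port, baud rate and command bytes are placeholders for your device's actual values:

      import time
      import serial  # pyserial: pip install pyserial

      PORT = "COM3"            # placeholder: the Prolific adapter's port
      COMMAND = b"STATUS\r\n"  # placeholder: a command your device answers

      ser = serial.Serial(PORT, baudrate=9600, timeout=10)
      t0 = time.perf_counter()
      ser.write(COMMAND)
      reply = ser.read(64)     # read up to 64 bytes or hit the timeout
      print(f"round trip: {time.perf_counter() - t0:.2f}s, reply: {reply!r}")
      ser.close()

    Running the identical script on the Windows 7 and XP boxes would confirm whether the 8-9 second delay sits in the driver/adapter stack rather than in the application software.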

  • Help Email Account Management among multiple users

    - by CogitoErgoSum
    So, I preface this with saying this may belong in IT Security; not too sure, feel free to move it. Currently we have an email account, [email protected], hosted via Google Apps (as is all our email). We had an incident where we had to terminate an employee. This employee, however, had the password for this account, as we have 20-30 people using it at any given point to manage customer emails etc. Thinking on this, I feel there must be a better way to manage access. With Google you can associate up to 10 email accounts with another, but the problem is we have more like 20-30 people. We are evaluating tools such as Salesforce and Assistly, where people have their own login credentials and the system holds the SMTP information for the [email protected] address, so emails are sent from it rather than from a user's personal account. Aside from those options, does anyone have any other thoughts? One suggestion floated was moving everyone to desktop clients and saving the password info there, so they could only log in from their physical workstations, but we may have situations where we'd like employees to work remotely. Does anyone have experience with this sort of system, where ~20-30 people respond from one mailbox, and with how to manage security and access?

  • Postfix connect timing out remotely, working fine locally

    - by Moritz
    Running Postfix on Debian, I cannot connect to send mail any more. It worked until approximately a week ago, and I do not recall touching the server's configuration during that time, which makes it difficult to find out what the problem is. When connecting from the server to itself it works fine:

      root@xxxx:~# telnet localhost 25
      Trying 127.0.0.1...
      Connected to localhost.localdomain.
      Escape character is '^]'.
      ehlo localhost
      220 mail.xxxx.de ESMTP Postfix (Debian/GNU)
      250-mail.xxxx.de
      250-PIPELINING
      250-SIZE 10240000
      250-VRFY
      250-ETRN
      250-STARTTLS
      250-ENHANCEDSTATUSCODES
      250-8BITMIME
      250 DSN
      quit
      221 2.0.0 Bye
      Connection closed by foreign host.

    Trying to do the same remotely times out:

      laptop:~ $ telnet mail.xxxx.de 25
      Trying 93.xx.xx.xx...
      telnet: connect to address 93.xx.xx.xx: Operation timed out
      telnet: Unable to connect to remote host

    The configuration is as follows:

      root@xxxx:~# postconf -n
      alias_database = hash:/etc/aliases
      alias_maps = hash:/etc/aliases
      append_dot_mydomain = no
      biff = no
      broken_sasl_auth_clients = yes
      config_directory = /etc/postfix
      home_mailbox = Maildir/
      inet_interfaces = all
      inet_protocols = ipv4
      mailbox_command =
      mailbox_size_limit = 0
      mydestination = localhost.localdomain, localhost.localdomain, localhost
      myhostname = mail.xxxx.de
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      myorigin = /etc/mailname
      readme_directory = no
      recipient_delimiter = +
      relayhost =
      smtp_tls_note_starttls_offer = yes
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      smtp_use_tls = yes
      smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
      smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
      smtpd_sasl_auth_enable = yes
      smtpd_sasl_exceptions_networks = $mynetworks
      smtpd_sasl_local_domain =
      smtpd_sasl_path = private/auth
      smtpd_sasl_security_options = noanonymous
      smtpd_sasl_type = dovecot
      smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
      smtpd_tls_auth_only = no
      smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
      smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
      smtpd_tls_loglevel = 1
      smtpd_tls_received_header = yes
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtpd_tls_session_cache_timeout = 3600s
      smtpd_use_tls = yes
      tls_random_source = dev:/dev/urandom
      virtual_alias_maps = proxy:mysql:$config_directory/mysql_virtual_alias_maps.cf
      virtual_gid_maps = static:8
      virtual_mailbox_base = /var/vmail
      virtual_mailbox_domains = proxy:mysql:$config_directory/mysql_virtual_domains_maps.cf
      virtual_mailbox_maps = proxy:mysql:$config_directory/mysql_virtual_mailbox_maps.cf
      virtual_minimum_uid = 150
      virtual_transport = dovecot

    Receiving mail is no problem, as is retrieving it remotely. Do you have an idea what I could check next?
