Search Results

Search found 50591 results on 2024 pages for 'http protocols'.


  • Can't get Passwordless (SSH provided) SFTP working

    - by Shoaibi
    I have a chrooted SFTP setup as below:

        # Package generated configuration file
        # See the sshd_config(5) manpage for details

        # What ports, IPs and protocols we listen for
        Port 22
        # Use these options to restrict which interfaces/protocols sshd will bind to
        #ListenAddress ::
        #ListenAddress 0.0.0.0
        Protocol 2
        # HostKeys for protocol version 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        # Privilege Separation is turned on for security
        UsePrivilegeSeparation yes
        # Lifetime and size of ephemeral version 1 server key
        KeyRegenerationInterval 3600
        ServerKeyBits 768
        # Logging
        SyslogFacility AUTH
        LogLevel INFO
        # Authentication:
        LoginGraceTime 120
        PermitRootLogin without-password
        StrictModes yes
        AllowGroups admins clients
        RSAAuthentication yes
        PubkeyAuthentication yes
        #AuthorizedKeysFile %h/.ssh/authorized_keys
        # Don't read the user's ~/.rhosts and ~/.shosts files
        IgnoreRhosts yes
        # For this to work you will also need host keys in /etc/ssh_known_hosts
        RhostsRSAAuthentication no
        # similar for protocol version 2
        HostbasedAuthentication no
        # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
        #IgnoreUserKnownHosts yes
        # To enable empty passwords, change to yes (NOT RECOMMENDED)
        PermitEmptyPasswords no
        # Change to yes to enable challenge-response passwords (beware issues with
        # some PAM modules and threads)
        ChallengeResponseAuthentication no
        # Change to no to disable tunnelled clear text passwords
        #PasswordAuthentication yes
        # Kerberos options
        #KerberosAuthentication no
        #KerberosGetAFSToken no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes
        # GSSAPI options
        #GSSAPIAuthentication no
        #GSSAPICleanupCredentials yes
        X11Forwarding yes
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        #UseLogin no
        #MaxStartups 10:30:60
        #Banner /etc/issue.net
        # Allow client to pass locale environment variables
        AcceptEnv LANG LC_*
        #Subsystem sftp /usr/lib/openssh/sftp-server
        # Set this to 'yes' to enable PAM authentication, account processing,
        # and session processing. If this is enabled, PAM authentication will
        # be allowed through the ChallengeResponseAuthentication and
        # PasswordAuthentication. Depending on your PAM configuration,
        # PAM authentication via ChallengeResponseAuthentication may bypass
        # the setting of "PermitRootLogin without-password".
        # If you just want the PAM account and session checks to run without
        # PAM authentication, then enable this but set PasswordAuthentication
        # and ChallengeResponseAuthentication to 'no'.
        UsePAM yes
        Subsystem sftp internal-sftp
        Match group clients
            ChrootDirectory /var/chroot-home
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp

    and a dummy user:

        root:~# tail -n1 /etc/passwd
        david:x:1000:1001::/david:/bin/sh

    Now in this case david can SFTP in using, say, the FileZilla client, and he is chrooted to /var/chroot-home/david/. But what if I wanted to set up passwordless auth? I have tried pasting his key into /var/chroot-home/david/.ssh/authorized_keys, but to no avail. I tried ssh'ing to the box as david, and after I supply the password it just stops at "debug1: Sending env LC_CTYPE = C", with nothing shown in auth.log, maybe because it can't find the home directory. If I do "su - david" as root I see "No directory, logging in with HOME=/", which makes sense. A symlink doesn't help either.
    I have also tried with:

        Match group clients
            ChrootDirectory /var/chroot-home/%u
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp

    and a dummy user:

        root:~# tail -n1 /etc/passwd
        david:x:1000:1001::/var/chroot-home/david:/bin/sh

    This way, if I don't change /var/chroot-home/david to root:root, sshd complains about bad ownership or permission modes; and if I do, david can no longer upload/delete anything directly in his home while using SFTP from FileZilla.
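    One thing that reportedly helps with this combination, sketched below under a few assumptions (the /etc/ssh/authorized_keys location and the upload subdirectory are my own choices, not anything from the question): sshd looks up authorized_keys via the passwd home directory before the chroot is applied, so moving the keys to a root-owned path outside any home directory sidesteps the missing-homedir problem, and giving david a writable subdirectory inside the root-owned chroot sidesteps the ownership complaint.

        # sshd_config sketch: key lookup no longer depends on the home dir
        AuthorizedKeysFile /etc/ssh/authorized_keys/%u
        Subsystem sftp internal-sftp
        Match group clients
            ChrootDirectory /var/chroot-home/%u
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp

        # one-time setup (shell), assuming the passwd home dir is
        # /var/chroot-home/david as in the second attempt above
        mkdir -p /etc/ssh/authorized_keys
        cp david_id_rsa.pub /etc/ssh/authorized_keys/david
        chown root:root /var/chroot-home/david && chmod 755 /var/chroot-home/david
        mkdir -p /var/chroot-home/david/upload
        chown david:clients /var/chroot-home/david/upload

    With this layout david lands in a read-only chroot root but can upload into /upload; whether that is acceptable depends on the client workflow.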

    Read the article

  • Login configuration script for Junos EX 2200 using minicom

    - by liv2hak
    I am connecting to the Junos OS on Juniper EX-2200 switches using minicom, as shown below:

        minicom -C log_sw1 sw1

    Now I have a series of commands that I need to execute on sw1, for example:

        cli
        request system zeroize
        show config
        show interface
        edit
        delete protocols
        set system arp aging-timer 240

    I want to avoid having to type these commands every time I log into the system. I want to put them in a config file and have it executed every time I log into the switch using minicom. Is there any way I can achieve this?
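    For what it's worth, minicom can run a login script at startup via its -S option, written in the runscript(1) language; a rough sketch is below. The prompt strings, the file name, and the exact expect/send behaviour are assumptions to verify against the switch's actual output.

        # sw1-setup.runscript: wait for the CLI prompt, then replay commands
        expect {
            ">"
        }
        send "edit"
        expect {
            "#"
        }
        send "delete protocols"
        send "set system arp aging-timer 240"

    invoked as:

        minicom -S sw1-setup.runscript -C log_sw1 sw1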

    Read the article

  • Can't connect through SCP, but SSH connection works

    - by Joe Cabezas
    I am trying to connect to my server to transfer files using scp:

        $ scp -v -r -P <port> <user>@<host>:~/dir/ dir/

    This is the output:

        OpenSSH_5.2p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /Users/joe/.ssh/config
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to <host> [<host>] port <port>.
        debug1: Connection established.
        debug1: identity file /Users/joe/.ssh/identity type -1
        debug1: identity file /Users/joe/.ssh/id_rsa type -1
        debug1: identity file /Users/joe/.ssh/id_dsa type -1
        ssh_exchange_identification: Connection closed by remote host

    But connecting via SSH works fine:

        $ ssh <user>@<host> -p <port>
        <user>@<host>'s password:
        <user>@<host>:~$

    OK, so what can be wrong with this? My /etc/ssh/sshd_config file on the host is:

        # Package generated configuration file
        # See the sshd_config(5) manpage for details

        # What ports, IPs and protocols we listen for
        Port <port>
        # Use these options to restrict which interfaces/protocols sshd will bind to
        #ListenAddress ::
        #ListenAddress 0.0.0.0
        Protocol 2
        # HostKeys for protocol version 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        HostKey /etc/ssh/ssh_host_ecdsa_key
        # Privilege Separation is turned on for security
        UsePrivilegeSeparation yes
        # Lifetime and size of ephemeral version 1 server key
        KeyRegenerationInterval 3600
        ServerKeyBits 768
        # Logging
        SyslogFacility AUTH
        LogLevel INFO
        # Authentication:
        LoginGraceTime 120
        PermitRootLogin yes
        StrictModes yes
        RSAAuthentication yes
        PubkeyAuthentication no
        #AuthorizedKeysFile %h/.ssh/authorized_keys
        # Don't read the user's ~/.rhosts and ~/.shosts files
        IgnoreRhosts yes
        # For this to work you will also need host keys in /etc/ssh_known_hosts
        RhostsRSAAuthentication no
        # similar for protocol version 2
        HostbasedAuthentication no
        # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
        #IgnoreUserKnownHosts yes
        # To enable empty passwords, change to yes (NOT RECOMMENDED)
        PermitEmptyPasswords no
        # Change to yes to enable challenge-response passwords (beware issues with
        # some PAM modules and threads)
        ChallengeResponseAuthentication no
        # Change to no to disable tunnelled clear text passwords
        #PasswordAuthentication yes
        # Kerberos options
        #KerberosAuthentication no
        #KerberosGetAFSToken no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes
        # GSSAPI options
        #GSSAPIAuthentication no
        #GSSAPICleanupCredentials yes
        X11Forwarding yes
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        #UseLogin no
        #MaxStartups 10:30:60
        #Banner /etc/issue.net
        # Allow client to pass locale environment variables
        AcceptEnv LANG LC_*
        Subsystem sftp /usr/lib/openssh/sftp-server
        # Set this to 'yes' to enable PAM authentication, account processing,
        # and session processing. If this is enabled, PAM authentication will
        # be allowed through the ChallengeResponseAuthentication and
        # PasswordAuthentication. Depending on your PAM configuration,
        # PAM authentication via ChallengeResponseAuthentication may bypass
        # the setting of "PermitRootLogin without-password".
        # If you just want the PAM account and session checks to run without
        # PAM authentication, then enable this but set PasswordAuthentication
        # and ChallengeResponseAuthentication to 'no'.
        UsePAM yes
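    A diagnostic sketch, since the handshake dies before authentication even starts: "ssh_exchange_identification: Connection closed by remote host" is classically TCP wrappers or a connection limit rather than anything scp-specific, so it's worth watching a debug-mode sshd while reproducing. The port number is a placeholder.

        # on the server: a throwaway sshd in debug mode on a spare port
        /usr/sbin/sshd -d -p 2222

        # on the client: point scp at it and compare with the failing run
        scp -v -r -P 2222 <user>@<host>:~/dir/ dir/

        # and check TCP wrappers on the server
        grep -i sshd /etc/hosts.deny /etc/hosts.allow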

    Read the article

  • Pidgin to Pidgin, which protocol to choose?

    - by Johan
    Hi. A friend and I need to start using some instant messaging, and Pidgin seems like a nice program to use. However, it supports quite a lot of different protocols, so my question is which one to choose. Since both parties can use the same program, we don't have to use the same protocol as "all the rest", but can focus on what is best to use. As a side note, both parties will be using Ubuntu-based computers. Thanks, Johan

    Read the article

  • Exchange server not serving mobile devices - how to troubleshoot?

    - by chickeninabiscuit
    Our Exchange server has suddenly stopped serving mobile devices; attempts to connect result in our ActiveSync server returning HTTP 500. It is serving Outlook clients fine. Our server is Windows 2003 SBS 6.5 SP2. There are no abnormal events in the system log. I ran the "Exchange ActiveSync with AutoDiscover" test at https://www.testexchangeconnectivity.com/. I've noticed an abnormality in the Exchange properties; the Log File Directory shows:

        Access denied.
        Facility: Win32
        ID no: 80070005
        Exchange System Manager

    as shown in the following image. I think it may be related to a recent issue we had here: http://serverfault.com/questions/40222/windows-server-2003-suddenly-unable-to-connect-to-anything. We followed a procedure to reinstall TCP/IP: http://support.microsoft.com/kb/325356. I've run the "Exchange ActiveSync" connectivity test at testexchangeconnectivity.com:

        Attempting to Resolve the host name mail.immersive.com.au in DNS.
        Host successfully Resolved
          Additional Details: IP(s) returned: 221.133.203.229
        Testing TCP Port 443 on host mail.immersive.com.au to ensure it is listening/open.
        The port was opened successfully.
        Testing SSL Certificate for validity.
        The certificate passed all validation requirements.
        Test Steps
          Validating certificate name
          Successfully validated the certificate name
            Additional Details: Found hostname mail.immersive.com.au in Certificate
            Subject Common name
          Validating certificate trust for Windows Mobile Devices
          Certificate is trusted and all certificates are present in chain
            Additional Details: Certificate is trusted for Windows Mobile 5 and Later
            platforms. Root = [email protected], CN=Thawte Server CA,
            OU=Certification Services Division, O=Thawte Consulting cc, L=Cape Town,
            S=Western Cape, C=ZA
          Testing certificate date to ensure validity
          Date Validation passed. The certificate is not expired.
            Additional Details: Certificate is valid: NotBefore = 1/5/2009 4:00:00 PM,
            NotAfter = 1/11/2010 3:59:59 PM
        Testing Http Authentication Methods for URL
        https://mail.immersive.com.au/Microsoft-Server-Activesync/
        Http Authentication Methods are correct
          Additional Details: Found all expected authentication methods and no
          disallowed methods. Methods Found: Basic
        Attempting an Activesync session with server
        Errors were encountered while testing the ActiveSync session
        Test Steps
          Attempting to send OPTIONS command to server
          OPTIONS response was successfully received and is valid
            Additional Details: Headers received:
              MicrosoftOfficeWebServer: 5.0_Pub
              Pragma: no-cache
              Public: OPTIONS, POST
              Allow: OPTIONS, POST
              MS-Server-ActiveSync: 6.5.7638.1
              MS-ASProtocolVersions: 1.0,2.0,2.1,2.5
              MS-ASProtocolCommands: Sync,SendMail,SmartForward,SmartReply,GetAttachment,GetHierarchy,CreateCollection,DeleteCollection,MoveCollection,FolderSync,FolderCreate,FolderDelete,FolderUpdate,MoveItems,GetItemEstimate,MeetingResponse,ResolveRecipients,ValidateCert,Provision,Search,Notify,Ping
              Content-Length: 0
              Date: Thu, 16 Jul 2009 01:07:27 GMT
              Server: Microsoft-IIS/6.0
              X-Powered-By: ASP.NET
          Attempting FolderSync command on ActiveSync session
          FolderSync command test failed

    Read the article

  • Using nginx's proxy_redirect when the response location's domain varies

    - by Chalky
    I am making a web app using SoundCloud's API. Requesting an MP3 to stream involves two requests. I'll give an example. Firstly:

        http://api.soundcloud.com/tracks/59815100/stream

    This returns a 302 with a temporary link to the actual MP3 (which varies each time), for example:

        http://ec-media.soundcloud.com/xYZk0lr2TeQf.128.mp3?ff61182e3c2ecefa438cd02102d0e385713f0c1faf3b0339595667fd0907ea1074840971e6330e82d1d6e15dd660317b237a59b15dd687c7c4215ca64124f80381e8bb3cb5&AWSAccessKeyId=AKIAJ4IAZE5EOI7PA7VQ&Expires=1347621419&Signature=Usd%2BqsuO9wGyn5%2BrFjIQDSrZVRY%3D

    The issue I had was that I am attempting to load the MP3 via JavaScript's XMLHttpRequest, and for security reasons the browser can't follow the 302, as ec-media.soundcloud.com does not set a header saying it is safe for the browser to access via XMLHttpRequest. So instead of using the SoundCloud URL, I set up two locations in nginx, so the browser only interacts with the server my app is hosted on and no security errors come up:

        location /soundcloud/tracks/ {
            # rewrite URL to match api.soundcloud.com's URL structure
            rewrite \/soundcloud\/tracks\/(\d*) /tracks/$1/stream break;
            proxy_set_header Host api.soundcloud.com;
            proxy_pass http://api.soundcloud.com;
            # the 302 will redirect to /soundcloud/media instead of the original domain
            proxy_redirect http://ec-media.soundcloud.com /soundcloud/media;
        }

        location /soundcloud/media/ {
            rewrite \/soundcloud\/media\/(.*) /$1 break;
            proxy_set_header Host ec-media.soundcloud.com;
            proxy_pass http://ec-media.soundcloud.com;
        }

    So myserver/soundcloud/tracks/59815100 returns a 302 to /myserver/soundcloud/media/xYZk0lr2TeQf.128.mp3...etc, which then forwards the MP3 on. This works! However, I have hit a snag. Sometimes the 302 location is not ec-media.soundcloud.com but ak-media.soundcloud.com. There are possibly even more servers out there, and presumably more could appear at any time. Is there any way I can handle an arbitrary 302 location without having to manually enter each possible variation? Or is it possible for nginx to handle the redirect and return the response of the second step, so that myserver/soundcloud/tracks/59815100 follows the 302 behind the scenes and returns the MP3? The browser automatically follows the redirect, so I can't do anything with the initial response on the client side. I am new to nginx and a bit over my head, so apologies if I've missed something obvious or it's beyond the scope of nginx. Thanks a lot for reading.
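    One pattern that might handle an arbitrary 302 target, sketched below and untested against this setup: intercept the upstream redirect with proxy_intercept_errors/error_page and replay it internally from a named location, so the browser only ever sees the final MP3. The resolver address is a placeholder (proxy_pass with a variable requires one), and this may need a newer nginx than 0.7-era builds.

        location /soundcloud/tracks/ {
            rewrite \/soundcloud\/tracks\/(\d*) /tracks/$1/stream break;
            proxy_set_header Host api.soundcloud.com;
            proxy_pass http://api.soundcloud.com;
            # hand any 3xx upstream response to error_page instead of the client
            proxy_intercept_errors on;
            error_page 302 = @follow_redirect;
        }

        location @follow_redirect {
            # $upstream_http_location holds the Location header from the 302
            resolver 8.8.8.8;
            set $target $upstream_http_location;
            proxy_pass $target;
        }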

    Read the article

  • How secure is a bluetooth keyboard against password sniffing?

    - by jhs
    In a situation where an admin will enter sensitive information into a keyboard (the root password), what is the risk that a Bluetooth keyboard (shipped by default with Mac systems these days) would put those passwords at risk? Another way of asking would be: what security and encryption protocols, if any, are used to establish a Bluetooth connection between a keyboard and a host system?

    Read the article

  • Block Google requests to 16k using pf firewall

    - by atmosx
    I'd like to block access to Google search using PF after a threshold of 17500 requests (connections established) in 24 h, from a host running FreeBSD 9. What I came up with after reading pf-faq is this rule:

        pass out on $net proto tcp from any to 'www.google.com' port www \
            flags S/SA keep state (max-src-conn 200, max-src-conn-rate 17500/86400)

    NOTE: 86400 is 24 h in seconds. The rule should work, but PF is smart enough to know that www.google.com resolves to 5 different IPs, so my pfctl -sr output gives me this:

        pass out on vte0 inet proto tcp from any to 173.194.44.81 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.82 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.83 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.80 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.84 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)

    PF creates 5 different rules, one for each IP that Google resolves to. However, I have the sense (without being 100% sure; I haven't had the chance to test it) that the limit 17500/86400 applies to each IP separately. If that's the case (please confirm), then it's not what I want. In pf-faq there's another option called source-track global:

        source-track
            This option enables the tracking of number of states created per
            source IP address. This option has two formats:
            + source-track rule - The maximum number of states created by this
              rule is limited by the rule's max-src-nodes and max-src-states
              options. Only state entries created by this particular rule count
              toward the rule's limits.
            + source-track global - The number of states created by all rules
              that use this option is limited. Each rule can specify different
              max-src-nodes and max-src-states options, however state entries
              created by any participating rule count towards each individual
              rule's limits. The total number of source IP addresses tracked
              globally can be controlled via the src-nodes runtime option.

    I tried to apply source-track global in the above rule without success. How can I use this option to achieve my goal? Any thoughts or comments are more than welcome, since I'm an amateur and don't fully understand PF yet. Thanks
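    A sketch of one direction to try (untested; whether the rate counter is then actually shared across addresses is exactly what would need verifying): put the resolved addresses in a table so pfctl keeps a single rule instead of five, and mark the rule source-track global so all rules using the option feed the same per-source tracking. Note that PF resolves the hostname only when the ruleset is loaded.

        table <google_www> { www.google.com }
        pass out on $net proto tcp from any to <google_www> port www \
            flags S/SA keep state \
            (source-track global, max-src-conn 200, max-src-conn-rate 17500/86400)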

    Read the article

  • Apache2 - mod_expire and mod_rewrite not working in httpd.conf - serving content from tomcat

    - by Ankit Agrawal
    I am using an Apache 2 server running on Debian which forwards all HTTP requests to Tomcat, installed on the same machine. I have two files under my /etc/apache2/ folder, apache2.conf and httpd.conf. I modified the httpd.conf file to look like the following:

        # forward all http request on port 80 to tomcat
        ProxyPass / ajp://127.0.0.1:8009/
        ProxyPassReverse / ajp://127.0.0.1:8009/

        # gzip text content
        AddOutputFilterByType DEFLATE text/plain
        AddOutputFilterByType DEFLATE text/html
        AddOutputFilterByType DEFLATE text/xml
        AddOutputFilterByType DEFLATE text/css
        AddOutputFilterByType DEFLATE text/javascript
        AddOutputFilterByType DEFLATE application/xml
        AddOutputFilterByType DEFLATE application/xhtml+xml
        AddOutputFilterByType DEFLATE application/rss+xml
        AddOutputFilterByType DEFLATE application/javascript
        AddOutputFilterByType DEFLATE application/x-javascript
        DeflateCompressionLevel 9
        BrowserMatch ^Mozilla/4 gzip-only-text/html
        BrowserMatch ^Mozilla/4\.0[678] no-gzip
        BrowserMatch \bMSIE !no-gzip !gzip-only-text/html

        # Turn on Expires and mark all static content to expire in a week
        # unset last modified and ETag
        ExpiresActive On
        ExpiresDefault A0
        <FilesMatch "\.(jpg|jpeg|png|gif|js|css|ico)$">
            ExpiresDefault A604800
            Header unset Last-Modified
            Header unset ETag
            FileETag None
            Header append Cache-Control "max-age=604800, public"
        </FilesMatch>

        RewriteEngine On
        # rewrite all www.example.com/content/XXX-01.js and YYY-01.css files
        # to XXX.js and YYY.css
        RewriteRule ^content/(js|css)/([a-z]+)-([0-9]+)\.(js|css)$ /content/$1/$2.$4
        # remove all query parameters from URL after we are done with it
        RewriteCond %{THE_REQUEST} ^GET\ /.*\;.*\ HTTP/
        RewriteCond %{QUERY_STRING} !^$
        RewriteRule .* http://example.com%{REQUEST_URI}? [R=301,L]
        # rewrite all www.example.com to example.com
        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]

    I want to achieve the following:

    1. forward all traffic to Tomcat
    2. gzip all the text content
    3. put a 1-week expiry header on all static files and unset the ETag/Last-Modified headers
    4. rewrite all js and css files to a certain format
    5. remove all the query parameters from the URL
    6. forward all www.example.com requests to example.com

    The problem is that only 1 and 2 are working. I tried a lot of combinations, but the expires and rewrite rules (3-6) do not work at all. I also tried moving these rules to apache2.conf and .htaccess files, but that didn't work either. No error is given; these rules are simply ignored. The expires and rewrite modules are ENABLED. Please let me know what I should do to fix this:

    1. Do I need to add something else in the httpd.conf file (like Options +FollowSymLink) or something else?
    2. Do I need to add something in the apache2.conf file?
    3. Do I need to move these rules to a .htaccess file? If yes, what should I write in that file, and where should I keep it: in the /etc/apache2/ folder or the /var/www/ folder?
    4. Any other info to make this work?

    Thanks, Ankit
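    Two hedged observations that may explain 3-6, with a sketch below. First, in server/vhost context (httpd.conf) a RewriteRule pattern is matched against the URL-path including its leading slash, so ^content/... never matches there; it would only match in per-directory (.htaccess) context. Second, with the blanket ProxyPass sending everything to Tomcat, <FilesMatch> sections keyed on file extensions may never apply, since there is no local file; matching on the URL with <LocationMatch> avoids that. Both directives are standard Apache 2.2, but treat the exact patterns as assumptions to verify:

        # leading slash added for server-context matching; [PT] hands the
        # rewritten path back so the ProxyPass above still forwards it
        RewriteRule ^/content/(js|css)/([a-z]+)-([0-9]+)\.(js|css)$ /content/$1/$2.$4 [PT]

        # match proxied URLs rather than local files
        <LocationMatch "\.(jpg|jpeg|png|gif|js|css|ico)$">
            ExpiresDefault A604800
            Header unset Last-Modified
            Header unset ETag
            Header append Cache-Control "max-age=604800, public"
        </LocationMatch>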

    Read the article

  • Installing OpenLDAP on Fedora 12: ldap_bind: Invalid credentials (49)

    - by Alpha Hydrae
    I've been trying to set up the OpenLDAP installed by default on Fedora 12, very unsuccessfully. My ultimate goal is to use LDAP authentication for user login and Apache, using the OpenLDAP server running on the same machine. The server is running, but the error I always get when I try to use ldapsearch or ldapadd is:

        ldap_bind: Invalid credentials (49)

    I've been following these tutorials, but none of them helped me:

        http://www.howtoforge.com/openldap_fedora7
        http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/ref-guide/s1-ldap-quickstart.html
        http://www.howtoforge.com/linux_ldap_authentication
        http://docs.fedoraproject.org/deployment-guide/f12/en-US/html/s1-ldap-pam.html
        http://www.openldap.org/doc/admin24/quickstart.html

    First, some components were already installed, and I installed these with yum:

        yum install openldap-servers openldap-devel

    Then I created a basic slapd.conf file in /etc/openldap:

        database  bdb
        suffix    "dc=sniejana-sandbox,dc=com"
        rootdn    "cn=root,dc=sniejana-sandbox,dc=com"
        rootpw    {SSHA}cxdz55ygPu4T3ykg7dgu+L0VRvsFSeom
        directory /var/lib/ldap/sniejana-sandbox.com

    I obtained the rootpw with this command:

        slappasswd -s changeme

    I also created the /var/lib/ldap/sniejana-sandbox.com directory and made sure the entire contents of /var/lib/ldap were owned by the ldap user. I found two ldap.conf files, one in /etc and one in /etc/openldap; I don't know which is the right one. If I understood correctly, this file is for configuring the client. I put this in both:

        HOST localhost
        BASE dc=sniejana-sandbox,dc=com

    I then ran the server with:

        service slapd start

    It said OK. Most of the tutorials above say to use a command like the following to ensure that everything's working:

        ldapsearch -D "cn=Manager,dc=my-domain,dc=com" -W

    When I execute this command, a password prompt appears, and after entering the password I get the error:

        ldapsearch -D "cn=root,dc=sniejana-sandbox,dc=com" -W
        Enter LDAP password:
        ldap_bind: Invalid credentials (49)

    The same thing happens when trying to use ldapadd. I tried with an encrypted and an unencrypted password in slapd.conf; it doesn't change anything. Adding -x for simple authentication doesn't change anything either. netstat -ap confirms the server is listening:

        tcp    0    0 *:ldap    *:*    LISTEN    4148/slapd
        tcp    0    0 *:ldap    *:*    LISTEN    4148/slapd

    ps -ef | grep slapd confirms the process is running:

        ldap    4148    1  0 15:22 ?  00:00:00 /usr/sbin/slapd -h ldap:/// -u ldap

    Running slaptest produces "config file testing succeeded". I read somewhere that the command ldapsearch -x -b '' -s base '(objectclass=*)' namingContext can confirm the server is running. It appears to work:

        # extended LDIF
        #
        # LDAPv3
        # base <> with scope baseObject
        # filter: (objectclass=*)
        # requesting: namingContext
        #

        #
        dn:

        # search result
        search: 2
        result: 0 Success

        # numResponses: 2
        # numEntries: 1

    I'm running out of ideas. Am I missing something obvious?
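    One Fedora-specific thing worth ruling out, as a guess rather than a diagnosis: on Fedora 12 slapd may read its configuration from the cn=config directory /etc/openldap/slapd.d and silently ignore slapd.conf, in which case the rootdn/rootpw above never take effect and any bind as cn=root fails exactly like this. A sketch for regenerating cn=config from slapd.conf:

        # if this directory exists, slapd.conf is probably not being read
        ls /etc/openldap/slapd.d

        service slapd stop
        mv /etc/openldap/slapd.d /etc/openldap/slapd.d.bak   # keep a backup
        slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d
        chown -R ldap:ldap /etc/openldap/slapd.d
        service slapd start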

    Read the article

  • Python Django sites on Apache+mod_wsgi with nginx proxy: highly fluctuating performance

    - by Halfgaar
    I have an Ubuntu 10.04 box running several dozen Python Django sites using mod_wsgi (embedded mode; the faster mode, if properly configured). Performance fluctuates wildly: sometimes fast, sometimes with several seconds of delay. The smokeping graphs are all over the place. Recently I also added an nginx proxy for the static content, in the hope it would cure the fluctuating performance, but even though it significantly reduced the number of requests Apache has to process, it didn't help with the main problem. When clicking around on websites while running htop, I can see that sometimes requests are almost instant, whereas sometimes Apache consumes 100% CPU for a few seconds. I really don't understand where this fluctuation comes from. I have configured the mpm_worker for Apache like this:

        StartServers          1
        MinSpareThreads      50
        MaxSpareThreads      50
        ThreadLimit          64
        ThreadsPerChild      50
        MaxClients           50
        ServerLimit           1
        MaxRequestsPerChild   0
        MaxMemFree         2048

    That is, 1 server with 50 threads, max 50 clients. Munin and apache2ctl -t both show a consistent presence of workers; they are not destroyed and created all the time. Yet it behaves as if they were. This tells me that once a sub-interpreter is created it should remain in memory, yet sites seem to have to reload all the time. I also have an nginx+gunicorn box, which performs quite well. I would really like to know why Apache is so random. This is a virtual host config:

        <VirtualHost *:81>
            ServerAdmin [email protected]
            ServerName example.com
            DocumentRoot /srv/http/site/bla
            Alias /static/ /srv/http/site/static
            Alias /media/ /srv/http/site/media
            WSGIScriptAlias / /srv/http/site/passenger_wsgi.py
            <Directory />
                AllowOverride None
            </Directory>
            <Directory /srv/http/site>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Versions: Ubuntu 10.04, Apache 2.2.14, mod_wsgi 2.8, nginx 0.7.65.

    Edit: I've put some code in the settings.py file of a site that writes the date to a tmp file whenever it's loaded. I can now see that the site is not randomly reloaded all the time, so Apache must be keeping it in memory. So that's good, except it doesn't bring me closer to an answer...

    Edit: I just found an error that might also be related to this:

        File "/usr/lib/python2.6/subprocess.py", line 633, in __init__
            errread, errwrite)
        File "/usr/lib/python2.6/subprocess.py", line 1049, in _execute_child
            self.pid = os.fork()
        OSError: [Errno 12] Cannot allocate memory

    The server has 600 of 2000 MB free, which should be plenty. Is there a limit set on Apache or WSGI somewhere?
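    Not an answer to the embedded-mode randomness, but a comparison worth running: mod_wsgi daemon mode pins each site to its own long-lived processes, which often makes this kind of jitter disappear or at least become attributable to one site. A sketch per virtual host (the group name and process/thread counts are placeholders):

        WSGIDaemonProcess examplesite processes=2 threads=15 display-name=%{GROUP}
        WSGIProcessGroup examplesite
        WSGIScriptAlias / /srv/http/site/passenger_wsgi.py

    As for the os.fork() ENOMEM: that is often address-space overcommit rather than truly exhausted RAM, so checking ulimit -v for the Apache user and adding swap are cheap experiments.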

    Read the article

  • User given a login prompt when closing Word documents after viewing them in IE7

    - by Martin Owen
    When using IE7 to view Word documents on our CRM system (an ASP.NET 2.0 application running on Windows Server 2003 and IIS 6 and using Windows authentication), I'm finding that a login prompt appears when the user closes the document. The Word document is originally opened by clicking a link in the CRM system. Are there permissions I can set on the folder containing the Word documents to prevent this prompt? I've already tried allowing only the Read permission for the Users group (I've left Administrators with Full Control). If there's another solution to this that doesn't use permissions, please let me know.

    UPDATE: I ran Fiddler as suggested by JD, and here is the output from the two responses after the request for the document. The first seems to be a DAV response and the second is the authentication request. How do I prevent the DAV response and just return the .doc from the server?

        OPTIONS / HTTP/1.1
        Translate: f
        User-Agent: Microsoft Data Access Internet Publishing Provider Protocol Discovery
        Host: <REMOVED>
        Content-Length: 0
        Connection: Keep-Alive
        Pragma: no-cache
        X-NovINet: v1.2

        HTTP/1.1 200 OK
        Date: Thu, 18 Feb 2010 13:37:36 GMT
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        MS-Author-Via: DAV
        Content-Length: 0
        Accept-Ranges: none
        DASL: <DAV:sql>
        DAV: 1, 2
        Public: OPTIONS, TRACE, GET, HEAD, DELETE, PUT, POST, COPY, MOVE, MKCOL, PROPFIND, PROPPATCH, LOCK, UNLOCK, SEARCH
        Allow: OPTIONS, TRACE, GET, HEAD, COPY, PROPFIND, SEARCH, LOCK, UNLOCK
        Cache-Control: private

        ------------------------------------------------------------------

        OPTIONS /docs/ZONE%20100-105.doc HTTP/1.1
        Translate: f
        User-Agent: Microsoft Data Access Internet Publishing Provider Protocol Discovery
        Host: <REMOVED>
        Content-Length: 0
        Connection: Keep-Alive
        Pragma: no-cache
        X-NovINet: v1.2

        HTTP/1.1 401 Unauthorized
        Content-Length: 83
        Content-Type: text/html
        Server: Microsoft-IIS/6.0
        WWW-Authenticate: Basic realm="<REMOVED>"
        X-Powered-By: ASP.NET
        Date: Thu, 18 Feb 2010 13:37:36 GMT

    UPDATE 2: I found a potential workaround for the problem via this post: http://forums.iis.net/p/1149091/1868317.aspx. I moved all of the documents being requested into a folder outside of the web root and created a virtual directory for it (also outside of the web root). When I followed a link to one of the documents in IE and then closed the document, I wasn't presented with a login prompt. I should point out that I'm not using FPSE, unlike the person in the forum post. Ideally I don't want to have to put the documents in a separate virtual directory, but this is the simplest solution I've found so far.

    Read the article

  • What PHP, Xdebug and Eclipse configurations work on Windows 7 64 bit?

    - by thaddeusmt
    I have been mucking around for days, trying to find the right combination that lets me debug with breakpoints and variable viewing in Eclipse without crashing Apache. PHP 5.3? PHP 5.2? Eclipse Helios? Eclipse Galileo? One or the other with certain versions of Xdebug or PHP? Or do I really need to use NetBeans or something else? Is my 64-bit OS the problem? Do I need specific 64-bit versions of PHP, Eclipse or Xdebug to work on Windows 7 64? Any special Xdebug config options and tricks that I need in php.ini, like turning off xdebug.profiler_enable or not using quotes around my zend_extension path to the Xdebug dll? A vhosts issue? Scrap the whole thing and go back to Win XP or Ubuntu? Here's what I've already been reading:

        http://stackoverflow.com/questions/4509245/so-eclipse-and-xdebug-walk-into-a-bar-and-then-my-apache-server-dies/4602473
        http://stackoverflow.com/questions/206788/why-does-xdebug-crash-apache-on-every-xampp-install-ive-tried
        http://bugs.xdebug.org/view.php?id=459
        https://bugs.eclipse.org/bugs/show_bug.cgi?id=312951#c8
        http://stackoverflow.com/questions/2799936/xdebug-for-php-5-2-on-windows-7-64bit

    and so on and so on (SO, the Xdebug bug tracker, Eclipse Bugzilla, etc.). Basically, it would be great if folks could post their working (i.e. debugging with breakpoints and local variable viewing in Eclipse) Win7 64-bit configurations, including:

    - PHP version (5.3.1, 5.2.11, etc.)
    - Xdebug dll (2.1.0-5.3-vc6, etc.)
    - Xdebug php.ini config (zend_extension = "C:\xampp\php\ext\php_xdebug.dll", etc.)
    - Apache version (2.2.14, etc.)
    - Eclipse version
    - Anything else important? The "secret ingredient"?

    Thanks! I miss my debugger since I got a new laptop with Win 7! Sadly, it looks like some of the drivers (switchable graphics, multi-touch pad, etc.) on my laptop don't work right with Ubuntu yet, so I feel a bit trapped on Windows :( I know I will figure something out eventually, but I've been at this trial-and-error game a while and am seeking some guidance. (Originally posted on StackOverflow, but moved to SuperUser: http://stackoverflow.com/questions/4628215/what-php-xdebug-and-eclipse-configurations-work-on-windows-7-64-bit)
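    For comparison, a minimal php.ini block for remote debugging with Xdebug 2.x under XAMPP is sketched below; the dll path is an assumption, and the build (VC6/VC9, thread-safe or not) has to match the PHP binary, which the wizard at xdebug.org can determine from phpinfo() output.

        [xdebug]
        zend_extension = "C:\xampp\php\ext\php_xdebug.dll"
        xdebug.remote_enable = 1
        xdebug.remote_host = 127.0.0.1
        xdebug.remote_port = 9000
        xdebug.remote_handler = dbgp
        ; keep the profiler off while hunting crashes
        xdebug.profiler_enable = 0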

    Read the article

  • Cache coherence literature for big (>=16CPU) systems

    - by osgx
    Hello. What books and articles can you recommend for learning the basics of cache coherence problems in big SMP systems (which are really NUMA and ccNUMA) with >=16 CPU sockets? Something like an analysis of the SGI Altix architecture may be interesting. What protocols (MOESI, something else) can scale up well?

    Read the article

  • List repositories from multiple projects in Trac using mod_python

    - by Steffen Eriksen
    I'm currently working on a customized webpage that shows the available projects I have in Trac (1.0.1), using mod_python to connect to the Trac interface. I found a standard page for this, but it didn't show a listing of repositories: the page exposed some variables to link to the different projects, but I can't find variables for the repositories inside the projects. I set up the webpage by reading this (under Site Appearance): http://trac.edgewall.org/wiki/TracInterfaceCustomization

    Short summary: editing ../conf.d/trac.conf:

        PythonOption TracEnvParentDir /parent/dir/of/projects
        PythonOption TracEnvIndexTemplate /path/to/template

    and making a template file I can edit at /path/to/template:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:py="http://genshi.edgewall.org/"
              xmlns:xi="http://www.w3.org/2001/XInclude">
          <head>
            <title>Available Projects</title>
          </head>
          <body>
            <h1>Available Projects</h1>
            <ul>
              <dl>
                <li py:for="project in projects" py:choose="">
                  <a py:when="project.href" href="$project.href"
                     title="$project.description">$project.name</a>
                  ## <dd> WANT TO ADD CODE HERE! </dd>
                  <py:otherwise>
                    <small>$project.name: <em>Error</em> <br />
                    ($project.description)</small>
                  </py:otherwise>
                </li>
              </dl>
            </ul>
          </body>
        </html>

    So... the code I want to add is something like:

        <dd py:for="repos in project.repository" py:choose="">
          <a py:when="repos.href" href="$repos.href">$repos.name</a>
        </dd>

    I can't figure out where to add the variables, or whether some variables I can use already exist. After searching through the files, it seemed like main.py (/usr/local/Trac-1.0.1/trac/web/main.py) had something to do with the variables, but at first look it didn't seem easy to just add more. Is there a simple way to find the rest of the variables? And how hard is it to add more? Or would it perhaps be easier to do this an alternative way? All I need is to link to the repositories dynamically.

    Read the article

  • Unlimited online backup space for fixed price using rsync/FTP/other simple protocol

    - by barrycarter
    Many companies offer unlimited online backup space for a fixed price (mozy.com, twitter.com/allmydata, onlinestoragesolution.com, etc.), but they either use proprietary non-Linux-friendly software, have gone out of business, or don't actually work. Who offers reliable unlimited online backup space for a fixed price that's compatible with rsync, FTP, or other generic/open-source file transfer protocols? Or has anyone written software that lets me treat Mozy's (or a similar provider's) space as though it were regular file space (e.g., a "mozyfs")?
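    For concreteness, the sort of invocation such a service would need to accept if it really speaks rsync over SSH (host, user, and paths are placeholders):

        rsync -avz --delete -e ssh /home/me/ backupuser@backup.example.com:backups/home/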

    Read the article

  • .htaccess template, suggestions needed

    - by purpler
        # Defaults
        AddDefaultCharset UTF-8
        DefaultLanguage en-US
        FileETag None
        Header unset ETag
        ServerSignature Off
        SetEnv TZ Europe/Belgrade

        # Rewrites
        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /

        # Redirect to WWW
        RewriteCond %{HTTP_HOST} ^serpentineseo.com
        RewriteRule (.*) http://www.serpentineseo.com/$1 [R=301,L]

        # Redirect index to root
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*index\.html\ HTTP/
        RewriteRule ^(.*)index\.html$ /$1 [R=301,L]

        # Cache media files:
        ExpiresActive On
        ExpiresDefault A0

        # Month
        <FilesMatch "\.(gif|jpg|jpeg|png|ico|swf|js)$">
            Header set Cache-Control "max-age=2592000, public"
        </FilesMatch>

        # Week
        <FilesMatch "\.(css|pdf)$">
            Header set Cache-Control "max-age=604800"
        </FilesMatch>

        # 10 Min
        <FilesMatch "\.(html|htm|txt)$">
            Header set Cache-Control "max-age=600"
        </FilesMatch>

        # Do not cache
        <FilesMatch "\.(pl|php|cgi|spl|scgi|fcgi)$">
            Header unset Cache-Control
        </FilesMatch>

        # Compress output
        <IfModule mod_deflate.c>
            <FilesMatch "\.(html|js|css)$">
                SetOutputFilter DEFLATE
            </FilesMatch>
        </IfModule>

        # Error Documents
        ErrorDocument 206 /error/206.html
        ErrorDocument 401 /error/401.html
        ErrorDocument 403 /error/403.html
        ErrorDocument 404 /error/404.html
        ErrorDocument 500 /error/500.html

        # Prevent hotlinking
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://(www\.)?serpentineseo.com/.*$ [NC]
        RewriteRule \.(gif|jpg|png)$ http://www.serpentineseo.com/images/angryman.png [R,L]

        # Prevent offline browsers
        RewriteCond %{HTTP_USER_AGENT} ^BlackWidow [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Bot\ mailto:[email protected] [OR]
        RewriteCond %{HTTP_USER_AGENT} ^ChinaClaw [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Custo [OR]
        RewriteCond %{HTTP_USER_AGENT} ^DISCo [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Download\ Demon [OR]
        RewriteCond %{HTTP_USER_AGENT} ^eCatch [OR]
        RewriteCond %{HTTP_USER_AGENT} ^EirGrabber [OR]
        RewriteCond %{HTTP_USER_AGENT} ^EmailSiphon [OR]
        RewriteCond %{HTTP_USER_AGENT} ^EmailWolf [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Express\ WebPictures [OR]
        RewriteCond %{HTTP_USER_AGENT} ^ExtractorPro [OR]
        RewriteCond %{HTTP_USER_AGENT} ^EyeNetIE [OR]
        RewriteCond %{HTTP_USER_AGENT} ^FlashGet [OR]
        RewriteCond %{HTTP_USER_AGENT} ^GetRight [OR]
        RewriteCond %{HTTP_USER_AGENT} ^GetWeb! [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Go!Zilla [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Go-Ahead-Got-It [OR]
        RewriteCond %{HTTP_USER_AGENT} ^GrabNet [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Grafula [OR]
        RewriteCond %{HTTP_USER_AGENT} ^HMView [OR]
        RewriteCond %{HTTP_USER_AGENT} HTTrack [NC,OR]
        RewriteCond %{HTTP_USER_AGENT} ^Image\ Stripper [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Image\ Sucker [OR]
        RewriteCond %{HTTP_USER_AGENT} Indy\ Library [NC,OR]
        RewriteCond %{HTTP_USER_AGENT} ^InterGET [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Internet\ Ninja [OR]
        RewriteCond %{HTTP_USER_AGENT} ^JetCar [OR]
        RewriteCond %{HTTP_USER_AGENT} ^JOC\ Web\ Spider [OR]
        RewriteCond %{HTTP_USER_AGENT} ^larbin [OR]
        RewriteCond %{HTTP_USER_AGENT} ^LeechFTP [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Mass\ Downloader [OR]
        RewriteCond %{HTTP_USER_AGENT} ^MIDown\ tool [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Mister\ PiX [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Navroad [OR]
        RewriteCond %{HTTP_USER_AGENT} ^NearSite [OR]
        RewriteCond %{HTTP_USER_AGENT} ^NetAnts [OR]
        RewriteCond %{HTTP_USER_AGENT} ^NetSpider [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Net\ Vampire [OR]
        RewriteCond %{HTTP_USER_AGENT} ^NetZIP [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Octopus [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Offline\ Explorer [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Offline\ Navigator [OR]
        RewriteCond %{HTTP_USER_AGENT} ^PageGrabber [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Papa\ Foto [OR]
        RewriteCond %{HTTP_USER_AGENT} ^pavuk [OR]
        RewriteCond %{HTTP_USER_AGENT} ^pcBrowser [OR]
        RewriteCond %{HTTP_USER_AGENT} ^RealDownload [OR]
        RewriteCond %{HTTP_USER_AGENT} ^ReGet [OR]
        RewriteCond %{HTTP_USER_AGENT} ^SiteSnagger [OR]
        RewriteCond %{HTTP_USER_AGENT} ^SmartDownload [OR]
        RewriteCond %{HTTP_USER_AGENT} ^SuperBot [OR]
        RewriteCond %{HTTP_USER_AGENT} ^SuperHTTP [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Surfbot [OR]
        RewriteCond %{HTTP_USER_AGENT} ^tAkeOut [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Teleport\ Pro [OR]
        RewriteCond %{HTTP_USER_AGENT} ^VoidEYE [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Web\ Image\ Collector [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Web\ Sucker [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebAuto [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebCopier [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebFetch [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebGo\ IS [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebLeacher [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebReaper [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebSauger [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Website\ eXtractor [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Website\ Quester [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebStripper [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebWhacker [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebZIP [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Wget [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Widow [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WWWOFFLE [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Xaldon\ WebSpider [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Zeus
        RewriteRule ^.*$ http://www.google.com [R,L]

        # Protect against DOS attacks by limiting file upload size
        LimitRequestBody 10240000

        # Deny access to sensitive files
        <FilesMatch "\.(htaccess|psd|log)$">
            Order Allow,Deny
            Deny from all
        </FilesMatch>

    Read the article

  • Mercurial not receiving push

    - by Jeffrey04
    I have a Mercurial web frontend (hgwebdir.cgi) installed on a server, with an installation of nginx in front of it as a reverse proxy, as a friend suggested. However, whenever a large changeset is pushed (via a script), it fails. I found an issue ticket at google-code that describes a similar problem, and there is a solution (comment #39) that says:

        So the server side answer is: don't send the 401 back early. Be as
        slow/dumb as 'hg serve' and make the hg client send the bundle twice.

    How do I do that? My current nginx config:

        location /repo/testdomain.com {
            rewrite ^(.*) http://bpj.kkr.gov.my$1/hgwebdir.cgi;
        }
        location /repo/testdomain.com/ {
            rewrite ^(.*) http://bpj.kkr.gov.my$1hgwebdir.cgi;
        }
        location /repo/testdomain.com/hgwebdir.cgi {
            proxy_pass http://localhost:81/repo/testdomain.com/hgwebdir.cgi;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_buffering on;
            client_max_body_size 4096M;
            proxy_read_timeout 30000;
            proxy_send_timeout 30000;
        }

    From the access log we keep seeing 408 entries:

        incoming.ip.address - - [18/Nov/2009:08:29:31 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0"
        incoming.ip.address - - [18/Nov/2009:08:37:14 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0"

    Is there anything else I can do on the server? Solving it on the server side is preferable.

    Further findings: Bitbucket seems to have this solved on the server side (see the liquidhg Bitbucket project and its Diagnosis wiki page), though I can't find the config anywhere:

        What happens next varies depending on your server. Some servers refuse
        the BODY, simply closing the pipe from the client and causing Mercurial
        to fail. Some, like Apache (at least the way I configure it, and that
        could be part of the problem) and nginx (the way BitBucket.org
        configures it), accept the BODY, though it may take a few retries.
        Bottom line: if Mercurial doesn't fail the push, it sends the changeset
        data at least once to a server that has already told it it lacks
        credentials (more on this at Blame). Assuming Mercurial is still
        running, it resends the "unbundle" request and data, this time with
        authentication. Finally, Apache accepts the data successfully. Nginx,
        OTOH, at least under BitBucket's configuration, seems to reassemble the
        previous body (the one that lacked authentication) and somehow keep
        Mercurial from re-sending the whole body.
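    Given that 408 is nginx's request-timeout status, one server-side knob to try before anything cleverer is raising the body timeout in that location; the value below is a guess, and it only helps if the client is slow to re-send the bundle rather than being cut off outright.

        location /repo/testdomain.com/hgwebdir.cgi {
            # ... existing proxy settings as above ...
            client_body_timeout 3600;
            client_max_body_size 4096M;
        }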

    Read the article

  • SSL certificate for Oracle Application Server 11g

    - by Easter Sunshine
    I was asked to get an SSL certificate for an "Oracle Application Server 11g" which has a soon-to-expire certificate. Brushing aside the fact that 10g seems to be the newest version, I got a certificate from InCommon, as I usually do without problem (except this is the first time I supplied Oracle Application Server 11g as the software type on the CSR form). The email containing links to download the certificate mentioned:

        Certificate Details:
        SSL Type : InCommon SSL
        Server   : OTHER

    I forwarded the email over to the person responsible for installing it and got a reply that the server type must be Oracle Application Server for the certificate to work (the CN is the same as before). They were unable to install this certificate (no details provided to me) and mentioned they had this issue previously with Thawte when they didn't supply Oracle Application Server as the server type. I don't see any significant difference between the currently installed (working) certificate and the new (not working) one I just got signed by InCommon.

        $ openssl x509 -in sso-current.cer -text

    shows, with irrelevant information omitted:

        Data:
            Version: 3 (0x2)
            Signature Algorithm: sha1WithRSAEncryption
            Issuer: C=ZA, ST=Western Cape, L=Cape Town, O=Thawte Consulting cc,
                    OU=Certification Services Division,
                    CN=Thawte Premium Server CA/[email protected]
            Validity
                Not Before: Oct  1 00:00:00 2009 GMT
                Not After : Nov 28 23:59:59 2012 GMT
            Subject Public Key Info:
                Public Key Algorithm: rsaEncryption
                    Public-Key: (2048 bit)
                    Modulus:
                    Exponent: 65537 (0x10001)
            X509v3 extensions:
                X509v3 Basic Constraints: critical
                    CA:FALSE
                X509v3 CRL Distribution Points:
                    Full Name:
                      URI:http://crl.thawte.com/ThawteServerPremiumCA.crl
                X509v3 Extended Key Usage:
                    TLS Web Server Authentication, TLS Web Client Authentication
                Authority Information Access:
                    OCSP - URI:http://ocsp.thawte.com
        Signature Algorithm: sha1WithRSAEncryption

    and

        $ openssl x509 -in sso-new.cer -text

    shows:

        Data:
            Version: 3 (0x2)
            Signature Algorithm: sha1WithRSAEncryption
            Issuer: C=US, O=Internet2, OU=InCommon, CN=InCommon Server CA
            Validity
                Not Before: Nov  8 00:00:00 2012 GMT
                Not After : Nov  8 23:59:59 2014 GMT
            Subject Public Key Info:
                Public Key Algorithm: rsaEncryption
                    Public-Key: (2048 bit)
                    Modulus:
                    Exponent: 65537 (0x10001)
            X509v3 extensions:
                X509v3 Authority Key Identifier:
                    keyid:48:4F:5A:FA:2F:4A:9A:5E:E0:50:F3:6B:7B:55:A5:DE:F5:BE:34:5D
                X509v3 Subject Key Identifier:
                    18:8D:F6:F5:87:4D:C4:08:7B:2B:3F:02:A1:C7:AC:6D:A7:90:93:02
                X509v3 Key Usage: critical
                    Digital Signature, Key Encipherment
                X509v3 Basic Constraints: critical
                    CA:FALSE
                X509v3 Extended Key Usage:
                    TLS Web Server Authentication, TLS Web Client Authentication
                X509v3 Certificate Policies:
                    Policy: 1.3.6.1.4.1.5923.1.4.3.1.1
                      CPS: https://www.incommon.org/cert/repository/cps_ssl.pdf
                X509v3 CRL Distribution Points:
                    Full Name:
                      URI:http://crl.incommon.org/InCommonServerCA.crl
                Authority Information Access:
                    CA Issuers - URI:http://cert.incommon.org/InCommonServerCA.crt
                    OCSP - URI:http://ocsp.incommon.org

    Nothing jumps out at me as the reason one would not work, so I don't have a specific request for the signer about what to do differently when re-signing.
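    If the 11g install keeps its certificates in an Oracle Wallet, one plausible cause (an assumption, since no install error was shared) is a missing intermediate: the Thawte cert chained to a root the wallet already trusted, while the InCommon chain needs its intermediate imported before the user certificate will load. A sketch with orapki, with the wallet path, password, and file names as placeholders:

        orapki wallet add -wallet /path/to/wallet -trusted_cert -cert AddTrustExternalCARoot.crt -pwd <wallet_pwd>
        orapki wallet add -wallet /path/to/wallet -trusted_cert -cert InCommonServerCA.crt -pwd <wallet_pwd>
        orapki wallet add -wallet /path/to/wallet -user_cert -cert sso-new.cer -pwd <wallet_pwd>
        orapki wallet display -wallet /path/to/wallet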

    Read the article

  • SSH to an Ubuntu machine using avahi

    - by tensaiji
    I have an Ubuntu box that I connect to using avahi. Connecting to that box works fine for all services (I regularly use AFP, SSH and SMB on it), but I've noticed that whenever I connect to it from a Mac using SSH (and using the ".local" DNS name provided by avahi, e.g. "ssh <hostname>.local"), SSH tries to connect using IPv6, which for some reason times out (after two minutes); then it tries IPv4, which connects immediately. I'd like to avoid this timeout, as it's really annoying for me and other users; if SSH tried IPv4 first, or if SSH over IPv6 worked, that would solve the problem. But so far I've been unable to get either to work (the best I've managed is to specify the "-4" option to SSH to stop it from trying IPv6 at all). I'm using Ubuntu 10.04. Any solution has to be on the server (not the client), as there are multiple clients connecting. A possible complication might be that my LAN is set up to allow link-local IPv6 addresses only, but I have other servers (running Mac OS) that I can SSH into using IPv6. I suspect the problem could be solved either by preventing avahi from broadcasting the IPv6 address or by enabling SSH over IPv6, but as far as I can tell avahi is already configured not to broadcast the IPv6 address, and sshd is configured to allow IPv6 connections!

    Here's my /etc/avahi/avahi-daemon.conf (I don't think I've changed anything from the Ubuntu defaults):

        [server]
        #host-name=foo
        #domain-name=local
        #browse-domains=0pointer.de, zeroconf.org
        use-ipv4=yes
        use-ipv6=no
        #allow-interfaces=eth0
        #deny-interfaces=eth1
        #check-response-ttl=no
        #use-iff-running=no
        #enable-dbus=yes
        #disallow-other-stacks=no
        #allow-point-to-point=no

        [wide-area]
        enable-wide-area=yes

        [publish]
        #disable-publishing=no
        #disable-user-service-publishing=no
        #add-service-cookie=no
        #publish-addresses=yes
        #publish-hinfo=yes
        #publish-workstation=yes
        #publish-domain=yes
        #publish-dns-servers=192.168.50.1, 192.168.50.2
        #publish-resolv-conf-dns-servers=yes
        #publish-aaaa-on-ipv4=yes
        #publish-a-on-ipv6=no

        [reflector]
        #enable-reflector=no
        #reflect-ipv=no

        [rlimits]
        #rlimit-as=
        rlimit-core=0
        rlimit-data=4194304
        rlimit-fsize=0
        rlimit-nofile=300
        rlimit-stack=4194304
        rlimit-nproc=3

    and here's my sshd_config (mainly updated to allow only public/private keys):

        # What ports, IPs and protocols we listen for
        Port 22
        # Use these options to restrict which interfaces/protocols sshd will bind to
        #ListenAddress ::
        #ListenAddress 0.0.0.0
        Protocol 2
        # HostKeys for protocol version 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        # Privilege Separation is turned on for security
        UsePrivilegeSeparation yes
        # Lifetime and size of ephemeral version 1 server key
        KeyRegenerationInterval 3600
        ServerKeyBits 768
        # Logging
        SyslogFacility AUTH
        LogLevel INFO
        # Authentication:
        LoginGraceTime 180
        PermitRootLogin no
        StrictModes yes
        RSAAuthentication yes
        PubkeyAuthentication yes
        #AuthorizedKeysFile %h/.ssh/authorized_keys
        # Don't read the user's ~/.rhosts and ~/.shosts files
        IgnoreRhosts yes
        # For this to work you will also need host keys in /etc/ssh_known_hosts
        RhostsRSAAuthentication no
        # similar for protocol version 2
        HostbasedAuthentication no
        # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
        #IgnoreUserKnownHosts yes
        # To enable empty passwords, change to yes (NOT RECOMMENDED)
        PermitEmptyPasswords no
        # Change to yes to enable challenge-response passwords (beware issues with
        # some PAM modules and threads)
        ChallengeResponseAuthentication no
        # Change to no to disable tunnelled clear text passwords
        PasswordAuthentication no
        AllowGroups sshusers
        # Kerberos options
        #KerberosAuthentication no
        #KerberosGetAFSToken no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes
        # GSSAPI options
        #GSSAPIAuthentication no
        #GSSAPICleanupCredentials yes
        X11Forwarding yes
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        #UseLogin no
        MaxStartups 10:30:60
        #Banner /etc/issue.net
        # Allow client to pass locale environment variables
        AcceptEnv LANG LC_*
        Subsystem sftp /usr/lib/openssh/sftp-server
        UsePAM yes

    Does anyone have any ideas I can try, or has anyone experienced anything similar?
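    One server-side experiment grounded in the config above: avahi's publish-aaaa-on-ipv4 option defaults to yes, so even with use-ipv6=no the daemon may still answer AAAA queries over IPv4, handing Macs an IPv6 address to try first. Flipping it is untested here but cheap to try:

        # /etc/avahi/avahi-daemon.conf
        [publish]
        publish-aaaa-on-ipv4=no

        # then: service avahi-daemon restart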

    Read the article

  • Dual external internet connections

    - by cpf
    Hi ServerFault, we have two Internet connections coming in here, and the intention is to make all externally visible services on our server available through both connections. Also, one connection should be used as little as possible, except for certain protocols. How can I achieve this dual-connection setup?
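    You don't say what the server runs; if it's Linux, the usual building block is iproute2 policy routing: one routing table per uplink plus rules pinning replies to the interface they arrived on, with the preferred uplink as the main default route. A minimal sketch with made-up addresses and table names:

        # /etc/iproute2/rt_tables gains two entries: "100 ispa" and "101 ispb"
        ip route add default via 198.51.100.1 dev eth0 table ispa
        ip route add default via 203.0.113.1 dev eth1 table ispb
        # replies leave via the interface whose address they came in on
        ip rule add from 198.51.100.10 table ispa
        ip rule add from 203.0.113.10 table ispb
        # everything else prefers ISP A
        ip route replace default via 198.51.100.1 dev eth0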

    Read the article
