Search Results

Search found 29511 results on 1181 pages for 'html beginform'.

  • Squid external_acl_type Cannot run process

    - by Alex Rezistorman
    I want to restrict uploading for a group of users via Squid, so I chose to use external_acl_type, but after reloading Squid it returns an error:
        WARNING: Cannot run '/usr/local/etc/squid/lists/newupload.sh' process.
    The permissions of newupload.sh and squid are the same, and newupload.sh is executable. How can I solve this problem? Thanks in advance.
    newupload.sh:
        #!/bin/sh
        while read line; do
          set -- $line
          length=$1
          limit=$2
          if [ -z "$length" ] || [ "$length" -le "$2" ]; then
            echo OK
          else
            echo ERR
          fi
        done
    Lines from squid.conf:
        external_acl_type request_body protocol=2.5 %{Content-Lenght} /usr/local/etc/squid/lists/newupload.sh
        acl request_max_size external request_body 5000
        http_access allow users request_max_size
    Squid version (squid -v):
        Squid Cache: Version 3.2.13
        configure options: '--with-default-user=squid' '--bindir=/usr/local/sbin' '--sbindir=/usr/local/sbin' '--datadir=/usr/local/etc/squid' '--libexecdir=/usr/local/libexec/squid' '--localstatedir=/var' '--sysconfdir=/usr/local/etc/squid' '--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid/squid.pid' '--with-swapdir=/var/squid/cache/squid' '--enable-auth' '--enable-build-info' '--enable-loadable-modules' '--enable-removal-policies=lru heap' '--disable-epoll' '--disable-linux-netfilter' '--disable-linux-tproxy' '--disable-translation' '--enable-auth-basic=PAM' '--disable-auth-digest' '--enable-external-acl-helpers= kerberos_ldap_group' '--enable-auth-negotiate=kerberos' '--disable-auth-ntlm' '--without-pthreads' '--enable-storeio=diskd ufs' '--enable-disk-io=AIO Blocking DiskDaemon IpcIo Mmapped' '--enable-log-daemon-helpers=file' '--disable-url-rewrite-helpers' '--disable-ipv6' '--disable-snmp' '--disable-htcp' '--disable-forw-via-db' '--disable-cache-digests' '--disable-wccp' '--disable-wccpv2' '--disable-ident-lookups' '--disable-eui' '--disable-ipfw-transparent' '--disable-pf-transparent' '--disable-ipf-transparent' '--disable-follow-x-forwarded-for' '--disable-ecap' '--disable-icap-client' '--disable-esi' '--enable-kqueue' '--with-large-files' '--enable-cachemgr-hostname=proxy.adir.vbr.ua' '--with-filedescriptors=131072' '--disable-auto-locale' '--prefix=/usr/local' '--mandir=/usr/local/man' '--infodir=/usr/local/info/' '--build=amd64-portbld-freebsd8.3' 'build_alias=amd64-portbld-freebsd8.3' 'CC=cc' 'CFLAGS=-O2 -fno-strict-aliasing -frename-registers -fweb -fforce-addr -fmerge-all-constants -maccumulate-outgoing-args -pipe -march=core2 -I/usr/local/include -DLDAP_DEPRECATED' 'LDFLAGS= -L/usr/local/lib' 'CPPFLAGS=-I/usr/local/include' 'CXX=c++' 'CXXFLAGS=-O2 -fno-strict-aliasing -frename-registers -fweb -fforce-addr -fmerge-all-constants -maccumulate-outgoing-args -pipe -march=core2 -I/usr/local/include -DLDAP_DEPRECATED' 'CPP=cpp' --enable-ltdl-convenience
    Related post: Restrict uploading for groups in squid
    http://squid-web-proxy-cache.1019090.n4.nabble.com/flexible-managing-of-request-body-max-size-with-squid-2-5-STABLE12-td1022653.html
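    As an aside, that WARNING generally means Squid's effective user could not exec the helper at all, so ownership and the execute/search bits along the whole path are cheap things to rule out; the commands below are an assumption based on the paths and the squid user from the question, not a confirmed fix:
        # allow the user Squid runs as (assumed: squid) to reach and execute the helper
        chown squid /usr/local/etc/squid/lists/newupload.sh
        chmod 755 /usr/local/etc/squid/lists/newupload.sh
        # every parent directory needs the execute (search) bit for that user
        chmod a+x /usr/local/etc/squid /usr/local/etc/squid/lists
        # run the helper by hand as that user to see the real error
        su -s /bin/sh squid -c 'echo "100 5000" | /usr/local/etc/squid/lists/newupload.sh'
    Separately, the format token in squid.conf is spelled %{Content-Lenght}; the header is normally Content-Length, so even a running helper may not receive the value it expects.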

  • Debugging UI Problems in IE8 (Was IE8 on Windows 7 Authentication Mess)

    - by alharaka
    UPDATE: I think the real question I need to ask here is: how does a technician debug UI problems with Internet Explorer, as opposed to HTML rendering issues, which have pretty good tools? I am aware of the SysInternals tools and others mentioned below, but maybe I am not harnessing their power properly. Someone else in the TechNet forum I mentioned had a similar issue. Again, I have lots of data; I am just not sure how to interpret it properly.
    ORIGINAL POST: So I tried the venerable TechNet forums to solve this issue. In short, the Windows Security dialog has no place to put credentials, rendering it pretty much useless. This happens for a whole bunch of our intranet websites, and only a select number of users with a few laptops have this problem. It ends up looking like this. Things I have tried so far:
    - Disabling local Group Policy (not domain connected)
    - Disabling local Security Policy
    - Resetting IE settings
    - A few system restores
    - Re-registering a bunch of IE DLLs and all other steps here
    - Reinstalling IE8 (dism /online /disable-feature /featurename:"internet-explorer-optional-x86", reboot, dism /online /enable-feature /featurename:"internet-explorer-optional-x86", and reboot)
    - An SFC scan, which found nothing
    Still, nothing. Not only am I fed up, but I have begun to really work with APIExplorer and Procmon as mentioned in the TechNet original, because I want to know WHAT is happening, not just fix it. Any thoughts?

  • IIS6 Wildcard Mapping to ASP.NET - no file extension results in IIS 404

    - by Ian Robinson
    I'm trying to perform what I understand to be a relatively simple task. I'd like to remove the extensions from the URLs on my website. I have the proper setup in my application to handle and rewrite the URLs - the trouble is I can't get past IIS to actually reach my application without the extensions.
    The details: I'm running IIS6 on Windows Server 2003. I've gone into the web site for my application, gone to the Home Directory tab, clicked "Configuration" and added a wildcard map to the following file:
        c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll
    which I verified is the same as what is used above in the application extensions portion by .ascx, etc.
    If I navigate to http://mywebsite.com/Blogs the result is as follows:
        HTTP/1.1 404 Not Found
        Content-Length: 1635
        Content-Type: text/html
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        Date: Thu, 14 Jan 2010 15:04:49 GMT
    which seems to be a standard IIS 404 message. If I navigate to http://mywebsite.com/Blogs.aspx I get my ASP.NET app. How can I troubleshoot this? I feel like I've double-checked everything a dozen times, but to no avail. I must be missing something obvious.
    Update: Here are the exact instructions given by the ASP.NET URL rewriter that I'm using:
        IIS 6.0 - Windows 2003 Server
        - open the property page for the website / virtual directory
        - click the 'Home Directory' tab
        - click the 'Configuration' button, select the 'Mappings' tab
        - click 'Insert' next to the 'Wildcard application maps' section
        - browse to the aspnet_isapi.dll (normally at c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll)
        - ensure that 'check that file exists' is unchecked
        - click OK, OK, OK to close and apply changes
    Update 2: I have yet to find a resolution for this. The application does not seem to be receiving the request from IIS. Any further ideas?

  • Active FTP client blocked by Windows Firewall on Windows 7

    - by Eli
    I have an application that runs as a service and contains an FTP client. It needs to connect to an FTP server that only supports active FTP. When I attempt to get a list of files or download a file, Windows Firewall drops the incoming connection from the FTP server. (I don't believe we had this problem on Windows XP or Windows Vista.)
    Active FTP is the protocol variant that requires the server to open a connection back to the client on a port that the client specified (http://slacksite.com/other/ftp.html).
    I know I could open up a large port range in Windows Firewall and force my FTP client to only use those ports, but I would have guessed that Windows Firewall supports active FTP natively. Is there some setting that needs to be made in order to have Windows Firewall automatically detect active FTP and open up the necessary ports as needed? Can I change that setting programmatically? Thanks.
    PS - I asked this question on StackOverflow, but was told I should probably ask here as well.
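    In case it helps, Windows Firewall on Vista and later ships a stateful FTP inspection mode that is meant to open active-mode data ports automatically, and it can be toggled from a script; whether it covers this particular service is an assumption worth testing (run from an elevated prompt; the rule name and program path in the second command are placeholders, not values from the question):
        rem enable the firewall's built-in stateful FTP inspection
        netsh advfirewall set global StatefulFtp enable

        rem alternatively, allow the service's executable through the firewall
        netsh advfirewall firewall add rule name="My FTP service" dir=in action=allow program="C:\Program Files\MyService\myservice.exe"
    Both commands can be run from an installer or management script, which covers the "can I change that setting programmatically" part.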

  • Configuring ASP.NET MVC2 on Apache 2.2 using mod_aspdotnet

    - by user40684
    Trying to get an MVC2 website to run on an Apache 2.2 web server (running on Windows) that uses the mod_aspdotnet module. I have several ASP.NET virtual hosts running and am trying to add another. MVC2 has NO default page (unlike the first version of MVC, which had e.g. default.aspx).
    I have tried various changes to the config: commented out 'DirectoryIndex', changed it to '/', set 'AspNet' to 'Virtual' - it will not load the first page, and I always get: '403 Forbidden, You don't have permission to access / on this server.'
    Below is the relevant part of my httpd.conf:
        LoadModule aspdotnet_module "modules/mod_aspdotnet.so"
        AddHandler asp.net asax ascx ashx asmx aspx axd config cs csproj licx rem resources resx soap vb vbproj vsdisco webinfo
        <IfModule aspdotnet_module>
          # Mount the ASP.NET /asp application
          #AspNetMount /MyWebSiteName "D:/ApacheNET/MyWebSiteName.com"
          Alias /MyWebSiteName "D:/ApacheNET/MyWebSiteName.com"
          <VirtualHost *:80>
            DocumentRoot "D:/ApacheNET/MyWebSiteName.com"
            ServerName www.MyWebSiteName.com
            ServerAlias MyWebSiteName.com
            AspNetMount / "D:/ApacheNET/MyWebSiteName.com"
            # Other directives here
            <Directory "D:/ApacheNET/MyWebSiteName.com">
              Options FollowSymlinks ExecCGI
              AspNet All
              #AspNet Virtual Files Directory
              Order allow,deny
              Allow from all
              DirectoryIndex default.aspx index.aspx index.html   # default the index page to .htm and .aspx
            </Directory>
          </VirtualHost>
          # For all virtual ASP.NET webs, we need the aspnet_client files
          # to serve the client-side helper scripts.
          AliasMatch /aspnet_client/system_web/(\d+)_(\d+)_(\d+)_(\d+)/(.*) "C:/Windows/Microsoft.NET/Framework/v$1.$2.$3/ASP.NETClientFiles/$4"
          <Directory "C:/Windows/Microsoft.NET/Framework/v*/ASP.NETClientFiles">
            Options FollowSymlinks
            Order allow,deny
            Allow from all
          </Directory>
        </IfModule>
    Has anyone successfully run MVC2 (or the first version of MVC) on Apache with the mod_aspdotnet module? Thanks!

  • Replacing DropBox with: Amazon S3 + SSL + GPG/TrueCrypt + Mounting on OSX ??

    - by Matt Rogish
    So, right now we're using DropBox to share various data files around between approximately 10 Mac OS X systems. However, we already have an S3 account, and putting everyone on the lowest DropBox plan at $10/mo seems too expensive. We'd like to avoid any kind of local storage (sharing a disk on a desktop or something), since we're a geographically distributed team. So, I am contemplating something that would allow us to replace DropBox with our own home-grown solution. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as DropBox we're all comfortable with that.
    There are plenty of docs out there that have bits and pieces of what I want, but some of the tools don't seem to fit the requirements:
    1. Transport security via SSL to the bucket
    2. Encryption of bucket contents
    3. Bi-directional syncing
    Most of the scripts I can find on the internet use "duplicity", which appears to fail #1 (it doesn't look like duplicity supports SSL to S3 - the docs don't state it, but the protocol looks like plain old http: http://www.nongnu.org/duplicity/duplicity.1.html#sect6). Many scripts use gpg to encrypt files. This seems like it could work, however I have to make sure that each OS X client is able to use the same key to encrypt and decrypt files (key management is left to me to manage). FTP and other client-based apps don't seem to support this at all. Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store; as we'd be using Amazon S3 as the "repository", they fail #3.
    Whew. So, I'd love a single tool that does this, but after an exhaustive search I don't think one exists. In my mind, the magical tool would be some combination of TrueCrypt and rsync. I'd be happy just knowing which tools out there can fulfill my 3 requirements; after that I can stitch together the rest. Any thoughts? THANKS!
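    For what it's worth, a rough sketch of the gpg-plus-sync idea is below; the bucket name and passphrase file are placeholders, HTTPS is assumed to come from use_https = True in ~/.s3cfg, and it deliberately ignores the hard parts (key distribution and true bi-directional merging), so treat it as a starting point rather than a DropBox replacement:
        #!/bin/sh
        # encrypt everything in the share with a shared symmetric passphrase, then push to S3
        SRC=$HOME/share
        STAGE=$HOME/.share-encrypted
        mkdir -p "$STAGE"
        for f in "$SRC"/*; do
          gpg --batch --yes --passphrase-file "$HOME/.share-passphrase" \
              --symmetric --output "$STAGE/$(basename "$f").gpg" "$f"
        done
        # s3cmd talks HTTPS to the bucket when use_https = True is set in ~/.s3cfg
        s3cmd sync "$STAGE/" s3://my-team-bucket/share/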

  • Unable to connect to Postgres on Vagrant Box - Connection refused

    - by Ben Miller
    First off, I'm new to Vagrant and Postgres. I created my Vagrant instance using http://files.vagrantup.com/lucid32.box without any trouble. I am able to run vagrant up and vagrant ssh without issue. I followed the instructions at http://blog.crowdint.com/2011/08/11/postgresql-in-vagrant.html with one minor alteration: I installed the "postgresql-8.4-postgis" package instead of "postgresql postgresql-contrib". I started the server using:
        postgres@lucid32:/home/vagrant$ /etc/init.d/postgresql-8.4 start
    While connected to the Vagrant instance I can use psql to connect to the database without issue. In my Vagrantfile I had already added:
        config.vm.forward_port 5432, 5432
    but when I try to run psql from localhost I get:
        psql: could not connect to server: Connection refused
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
    I'm sure I am missing something simple. Any ideas?
    Update: I found a reference to an issue like this, and the article suggested using:
        psql -U postgres -h localhost
    With that I get:
        psql: server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
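    For anyone hitting the same wall: on a stock install like this, PostgreSQL usually listens only on the loopback interface inside the VM and only trusts local connections, so the forwarded port has little useful to reach. A sketch of the usual changes, assuming Ubuntu's standard paths for 8.4 (tighten the pg_hba.conf rule to taste):
        # inside the VM (paths assume Ubuntu's packaging of PostgreSQL 8.4)
        sudo sed -i "s/^#listen_addresses.*/listen_addresses = '*'/" /etc/postgresql/8.4/main/postgresql.conf
        # allow password logins from outside the VM
        echo "host all all 0.0.0.0/0 md5" | sudo tee -a /etc/postgresql/8.4/main/pg_hba.conf
        sudo /etc/init.d/postgresql-8.4 restart
    After that, psql -U postgres -h localhost -p 5432 from the host should at least reach the server and produce a normal authentication prompt or error.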

  • Centos CMake Does Not Install Using gcc 4.7.2

    - by Devin Dixon
    A similar problem has been reported here with no solution: https://www.centos.org/modules/newbb/print.php?form=1&topic_id=42696&forum=56&order=ASC&start=0
    I've added an upgraded gcc to CentOS:
        cd /etc/yum.repos.d
        wget http://people.centos.org/tru/devtools-1.1/devtools-1.1.repo
        yum --enablerepo=testing-1.1-devtools-6 install devtoolset-1.1-gcc devtoolset-1.1-gcc-c++
        scl enable devtoolset-1.1 bash
    The result is this for my gcc:
        [root@hhvm-build-centos cmake-2.8.11.1]# gcc -v
        Using built-in specs.
        COLLECT_GCC=gcc
        COLLECT_LTO_WRAPPER=/opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/lto-wrapper
        Target: x86_64-redhat-linux
        Configured with: ../configure --prefix=/opt/centos/devtoolset-1.1/root/usr --mandir=/opt/centos/devtoolset-1.1/root/usr/share/man --infodir=/opt/centos/devtoolset-1.1/root/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --disable-build-with-cxx --disable-build-poststage1-with-cxx --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --enable-languages=c,c++,fortran,lto --enable-plugin --with-linker-hash-style=gnu --enable-initfini-array --disable-libgcj --with-ppl --with-cloog --with-mpc=/home/centos/rpm/BUILD/gcc-4.7.2-20121015/obj-x86_64-redhat-linux/mpc-install --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
        Thread model: posix
        gcc version 4.7.2 20121015 (Red Hat 4.7.2-5) (GCC)
    I then tried to install CMake from http://www.cmake.org/cmake/resources/software.html#latest but I keep running into this error:
        Linking CXX executable ../bin/ccmake
        /opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: CMakeFiles/ccmake.dir/CursesDialog/cmCursesMainForm.cxx.o: undefined reference to symbol 'keypad'
        /opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: note: 'keypad' is defined in DSO /lib64/libtinfo.so.5 so try adding it to the linker command line
        /lib64/libtinfo.so.5: could not read symbols: Invalid operation
        collect2: error: ld returned 1 exit status
        gmake[2]: *** [bin/ccmake] Error 1
        gmake[1]: *** [Source/CMakeFiles/ccmake.dir/all] Error 2
        gmake: *** [all] Error 2
    The problem seems to come from the newly installed gcc, because the build works with the default install. Is there a solution to this problem?
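    The note in the error already points at libtinfo, so one low-risk experiment is to install the curses development package and ask the linker to pull that library in explicitly; whether CMake's bootstrap honours LDFLAGS in this exact devtoolset environment is an assumption, not a verified fix:
        # install the curses headers/libs if they are missing
        yum install ncurses-devel
        # rebuild, asking the linker to add libtinfo explicitly
        cd cmake-2.8.11.1
        LDFLAGS="-ltinfo" ./bootstrap && gmake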

  • Problems with apache svn server (403 Forbidden)

    - by mrlanrat
    I've recently set up an SVN server on my Apache web server. I installed USVN (http://www.usvn.fr/) to help manage the repositories from a web interface. When I create a repository and try to import code into it from NetBeans I get the following error:
        org.tigris.subversion.javahl.ClientException: RA layer request failed
        Server sent unexpected return value (403 Forbidden) in response to PROPFIND request for '/svn/python1'
    I know I have the username and password correct (and I have tried different users). I have done some research and it seems that it is most likely an Apache SVN configuration error. Below is the config file for this virtual host:
        <VirtualHost *:80>
          ServerName svn.domain.com
          ServerAlias www.svn.domain.com
          ServerAlias admin.svn.domain.com
          DocumentRoot /home/mrlanrat/domains/svn.domain.com/usvn/public
          ErrorLog /var/log/virtualmin/svn.domain.com_error_log
          CustomLog /var/log/virtualmin/svn.domain.com_access_log combined
          DirectoryIndex index.html index.htm index.php index.php4 index.php5
          <Directory "/home/mrlanrat/domains/svn.domain.com/usvn">
            Options +SymLinksIfOwnerMatch
            AllowOverride All
            Order allow,deny
            Allow from all
          </Directory>
          <Location /svn/>
            ErrorDocument 404 default
            DAV svn
            Require valid-user
            SVNParentPath /home/mrlanrat/domains/svn.domain.com/usvn/files/svn
            SVNListParentPath on
            AuthType Basic
            AuthName "USVN"
            AuthUserFile /home/mrlanrat/domains/svn.domain.com/usvn/files/htpasswd
            AuthzSVNAccessFile /home/mrlanrat/domains/svn.domain.com/usvn/files/authz
          </Location>
        </VirtualHost>
    Can anyone point out what I may have done wrong and how to fix it? I have tested changing file permissions and changing the configuration with no luck. Thanks in advance!
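    One thing worth ruling out is the AuthzSVNAccessFile: a 403 on PROPFIND even with valid credentials is often a path-based rule in that file denying read access to the repository. An entry granting access would look roughly like the sketch below (the group name is made up; USVN normally writes this file itself, so it is mainly something to inspect):
        [groups]
        devs = mrlanrat

        [python1:/]
        @devs = rw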

  • SQL 2008 SP2 RsClientPrint ActiveX - "Unable to load client print control"

    - by Miles
    We recently updated our SQL 2008 server to SP2, and it's causing a few headaches. We use SSRS on this server, and when a client tries to print a report with the built-in print function, the RsClientPrint ActiveX control has to be downloaded from the server; otherwise the client gets the following error: "Unable to load client print control."
    We have about 700 computers that need this fixed, and I've followed the instructions found at the following URL: http://www.kodyaz.com/articles/client-side-printing-silent-deployment-of-rsclientPrint.aspx
    We have two issues:
    1. Most of the users who will be using this ActiveX control are not local administrators, so they will not be able to install the control themselves.
    2. Since there are so many computers, this has to be done silently behind the scenes, run by a local admin account.
    After following the information from the link above, we're able to put the files in the C:\Windows\System32 folder and register the DLL, but we still get the same problem. One small thing I've noticed is that in the HTML for the report page, everything that references a version is referencing version 2007.100.4000.00, while the version of the DLL that I pulled from the report server is 2007.100.1600.22. Also, some clients that are local administrators are prompted to install the ActiveX control every time they click print. This works successfully, but we can't have users asked to install the same control every time they need to print.

  • How can I solve the apache2 httpd error "mixing * ports and non-* ports with a NameVirtualHost address is not supported"?

    - by rrc7cz
    Here is the error I get when booting up Apache2:
        * Starting web server apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        [Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        [Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        [Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        [Wed Oct 21 16:37:26 2009] [warn] NameVirtualHost *:80 has no VirtualHosts
    I first followed this guide on setting up Apache to host multiple sites: http://www.debian-administration.org/articles/412
    I then found a similar question on ServerFault and tried applying the solution, but it didn't help. Here is an example of my final VirtualHost config:
        <VirtualHost *:80>
          ServerAdmin [email protected]
          ServerName www.xxx.com
          ServerAlias xxx.com
          # Indexes + Directory Root.
          DirectoryIndex index.html
          DocumentRoot /var/www/www.xxx.com
          # Logfiles
          ErrorLog /var/www/www.xxx.com/logs/error.log
          CustomLog /var/www/www.xxx.com/logs/access.log combined
        </VirtualHost>
    with the domain X'd out to protect the innocent :-)
    Also, I have the conf.d/virtual.conf file mentioned in the guide looking like this:
        NameVirtualHost *
    The odd thing is that everything appears to work fine for two of the three sites.
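    For what it's worth, this particular warning normally goes away once the NameVirtualHost directive and every VirtualHost opening use exactly the same address:port pair; a minimal sketch of that change, assuming everything is meant to answer on port 80:
        # conf.d/virtual.conf
        NameVirtualHost *:80

        # each site file
        <VirtualHost *:80>
            ServerName www.xxx.com
            ...
        </VirtualHost>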

  • gzip specific files

    - by byTheDrop
    For some reason these files are not being gzipped on my Apache server; Chrome's network tab shows this. Is there a specific directive I can add to .htaccess to gzip these files?
    Compressing the following resources with gzip could reduce their transfer size by about two thirds (~680.45KB): adae8bc4c3cb52cbe22358aaced87a72.css could save ~607B css_f91fa8d73b5e7661d6dcf9e58395e533.css could save ~59.54KB jquery.min.js could save ~37.27KB drupal.js could save ~6.15KB auto_image_handling.js could save ~6.72KB lightbox.js could save ~29.38KB superfish.js could save ~2.42KB jquery.bgiframe.min.js could save ~1011B jquery.hoverIntent.minified.js could save ~1.05KB nice_menus.js could save ~581B panels.js could save ~531B jquery.pngFix.js could save ~2.98KB jquery.cycle.all.min.js could save ~20.20KB views_slideshow.js could save ~8.76KB views_slideshow.js could save ~9.02KB wanderlust_custom_videos.js could save ~598B wl_helper.js could save ~777B extlink.js could save ~2.88KB cufon-yui.js could save ~11.89KB googleanalytics.js could save ~1.48KB swfobject.js could save ~6.65KB jquery.jcarousel.min.js could save ~10.19KB jcarousel.js could save ~6.01KB Akzidenz_Grotesk_BE_Super_800.font.js could save ~14.27KB Akzidenz_Grotesk_BE_Bold_700.font.js could save ~12.96KB Akzidenz_Grotesk_BE_Cn_400.font.js could save ~13.39KB SuperCondensed_500.font.js could save ~24.40KB FuturaBold_700.font.js could save ~26.19KB Futura_500.font.js could save ~57.70KB SuperGroteskB_500.font.js could save ~23.86KB jquery.cookie.js could save ~1.25KB wanderlust.js could save ~1.69KB sliderbottom.js could save ~442B jcarousellite_1.0.1.min.js could save ~4.60KB jcarousellite_control.js could save ~224B sitesdropdown.js could save ~1.09KB widgets.js could save ~50.13KB cufon-drupal.js could save ~599B swfobject_api.js could save ~348B ga.js could save ~24.02KB all.js could save ~124.67KB tweet_button.1347008535.html could save ~38.43KB xd_arbiter.php could save ~16.80KB xd_arbiter.php could save ~16.80KB
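    For reference, the usual per-directory way to turn compression on is mod_deflate; a minimal .htaccess sketch covering the CSS/JS/HTML types in that list is below, on the assumption that mod_deflate is compiled into (or loaded by) this Apache build:
        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/plain text/css text/xml
            AddOutputFilterByType DEFLATE application/javascript application/x-javascript text/javascript
        </IfModule>
    Note that third-party resources in that list (widgets.js, ga.js, all.js, xd_arbiter.php) are served by other hosts and cannot be compressed from this server's .htaccess.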

  • Apache Simple Configuration Issue: per-user directory is accessing /~user instead of ~user

    - by Huckphin
    Hello. I am just getting Apache 2.2 running on Fedora 13 Beta 64-bit. I am running into issues setting up my per-user directory. The goal is to make localhost/~user map to /home/user/public_html. I think that I have the permissions right, because I have 755 on /home/user, 755 on /home/user/public_html/, and 777 on all contents inside of /home/user/public_html/, set recursively. My mod_userdir configuration looks like this:
        <IfModule mod_userdir.c>
            #
            # UserDir is disabled by default since it can confirm the presence
            # of a username on the system (depending on home directory
            # permissions).
            #
            UserDir disabled root
            UserDir enabled huckphin
            #
            # To enable requests to /~user/ to serve the user's public_html
            # directory, remove the "UserDir disabled" line above, and uncomment
            # the following line instead:
            #
            UserDir public_html
    The error that I am seeing in the error log is this:
        [Sat May 15 09:54:29 2010] [error] [client 127.0.0.1] (13)Permission denied: access to /~huckphin/index.html denied
    When I log in as the apache user, I know that /~huckphin does not exist, and this is not what I want. I want it to be accessing ~huckphin, not /~huckphin. What do I need to change in my configuration for this to work?
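    Since this is Fedora, it is worth checking whether SELinux rather than classic file permissions is producing the Permission denied; a hedged set of checks (the directory path comes from the question, the boolean and type are the standard ones for userdir content, not a confirmed fix):
        # see whether AVC denials line up with the failed requests
        sudo grep httpd /var/log/audit/audit.log | tail
        # allow Apache to read home directories and label the content accordingly
        sudo setsebool -P httpd_enable_homedirs on
        sudo chcon -R -t httpd_user_content_t /home/huckphin/public_html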

  • How to use cURL to FTPS upload to SecureTransport (hint: SITE AUTH and client certificates)

    - by Seamus Abshere
    I'm trying to connect to SecureTransport 4.5.1 via FTPS using curl compiled with gnutls. You need to use --ftp-alternative-to-user "SITE AUTH" per http://curl.haxx.se/mail/lib-2006-07/0068.html
    Do you see anything wrong with my client certificates? I try with:
        # mycert.crt
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
        # mykey.pem
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----
    And it says "530 No client certificate presented":
        myuser@myserver ~ $ curl -v --ftp-ssl --cert mycert.crt --key mykey.pem --ftp-alternative-to-user "SITE AUTH" -T helloworld.txt ftp://ftp.example.com:9876/upload/
        * About to connect() to ftp.example.com port 9876 (#0)
        * Trying 1.2.3.4... connected
        * Connected to ftp.example.com (1.2.3.4) port 9876 (#0)
        < 220 msn1 FTP server (SecureTransport 4.5.1) ready.
        > AUTH SSL
        < 334 SSLv23/TLSv1
        * found 142 certificates in /etc/ssl/certs/ca-certificates.crt
        > USER anonymous
        < 331 Password required for anonymous.
        > PASS [email protected]
        < 530 Login incorrect.
        > SITE AUTH
        < 530 No client certificate presented.
        * Access denied: 530
        * Closing connection #0
        curl: (67) Access denied: 530
    I also tried with a pk8 version...
        # openssl pkcs8 -in mykey.pem -topk8 -nocrypt > mykey.pk8
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
    ...but got exactly the same result. What's the trick to sending a client certificate to SecureTransport?
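    One experiment that sometimes helps with curl's TLS backends is handing over the key and certificate as a single PEM file with the type stated explicitly; this is a guess at the client-side half of the problem, not a confirmed SecureTransport fix:
        # combine key + certificate into one PEM and hand it to curl as a single file
        cat mykey.pem mycert.crt > client.pem
        curl -v --ftp-ssl --cert client.pem --cert-type PEM \
             --ftp-alternative-to-user "SITE AUTH" \
             -T helloworld.txt ftp://ftp.example.com:9876/upload/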

  • How does a web server/the http protocol handle version control and compression?

    - by Sune Rasmussen
    When a client browser requests a file from the web server, I know that some kind of check is performed, because the files needed to serve the web page may already be cached by the web browser. So, if a file exists in the cache, no files are sent. But if the file on the server has changed since the file was cached in the browser, the file is sent and updated anyhow.
    Then, if you have compression like gzipping enabled on the server, the files that are to be provided to the client must be gzipped on the way, requiring some amount of server-side processing. But how is this managed? The logical approach seems to me that the web server should have a cache as well, containing the newest version of all files that have been requested within a certain time span - and thus a compressed version of these files - so that compression would not have to be done each time a file is requested.
    And also, how are files eventually requested? Does the browser ask for files each time it encounters one in the HTML code and the specific file is not stored in the local cache, or does it add up all the files that are needed and ask for the whole bunch at the same time?
    That's only guessing from a programming point of view, and I don't really know. If the answers are very different among web server systems, I'm primarily interested in Apache, but other answers are appreciated too.
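    For readers who want to see what that freshness check looks like on the wire: the browser stores a validator (a Last-Modified date and/or an ETag) with each cached file and replays it in a conditional request, and the server answers 304 Not Modified with no body if the file has not changed. A schematic exchange, with all header values invented for illustration:
        GET /style.css HTTP/1.1
        Host: www.example.com
        If-Modified-Since: Tue, 12 Jan 2010 10:00:00 GMT
        If-None-Match: "abc123"

        HTTP/1.1 304 Not Modified
        ETag: "abc123"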

  • How to configure postfix for per-sender SASL authentication

    - by Marwan
    I have two Gmail accounts, and I want to configure my local Postfix server as a client which does SASL authentication with smtp.gmail.com:587 using credentials that depend on the sender address. So, let's say that my Gmail accounts are [email protected] and [email protected]. If I send a mail with [email protected] in the From header field, then Postfix should use the credentials [email protected]:psswd1 to do SASL authentication with the Gmail SMTP server. Similarly with [email protected], it should use [email protected]:passwd2. Sounds fairly simple.
    Well, I followed the official Postfix documentation at http://www.postfix.org/SASL_README.html, and I ended up with the following relevant configuration:
    /etc/postfix/main.cf:
        smtp_sasl_auth_enable = yes
        smtp_sasl_security_options = noanonymous
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sender_dependent_authentication = yes
        sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay
        smtp_tls_security_level = secure
        smtp_tls_CAfile = /etc/ssl/certs/Equifax_Secure_CA.pem
        smtp_tls_CApath = /etc/ssl/certs
        smtp_tls_session_cache_database = btree:/etc/postfix/smtp_scache
        smtp_tls_session_cache_timeout = 3600s
        smtp_tls_loglevel = 1
        tls_random_source = dev:/dev/urandom
        relayhost = smtp.gmail.com:587
    /etc/postfix/sasl_passwd:
        [email protected] [email protected]:passwd1
        [email protected] [email protected]:passwd2
        smtp.gmail.com:587 [email protected]:passwd1
    /etc/postfix/sender_relay:
        [email protected] smtp.gmail.com:587
        [email protected] smtp.gmail.com:587
    After I was done with the configuration I ran:
        $ postmap /etc/postfix/sasl_passwd
        $ postmap /etc/postfix/sender_relay
        $ /etc/init.d/postfix restart
    The problem is that when I send a mail from [email protected], the message ends up at the destination with sender address [email protected] and NOT [email protected], which means that Postfix always ignores the per-sender configuration and sends the mail using the default credentials (the third line in /etc/postfix/sasl_passwd above). I checked the configuration multiple times and even compared it to those in various blog posts addressing the same issue, but found them to be more or less the same as mine. So, can anyone point me in the right direction, in case I'm missing something? Many thanks.
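    A cheap sanity check is to ask postmap what each map actually returns for the full sender address; if the sasl_passwd lookup falls through to the bare smtp.gmail.com:587 line, that would match the behaviour described. The addresses below are placeholders for the real sender addresses:
        postmap -q "[email protected]" hash:/etc/postfix/sasl_passwd
        postmap -q "[email protected]" hash:/etc/postfix/sender_relay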

  • How do I disable MEDIUM and WEAK/LOW strength ciphers in Apache + mod_ssl?

    - by superwormy
    A PCI Compliance scan has suggested that we disable Apache's MEDIUM and LOW/WEAK strength ciphers for security. Can someone tell me how to disable these ciphers?
        Apache v2.2.14
        mod_ssl v2.2.14
    This is what they've told us:
        Synopsis: The remote service supports the use of medium strength SSL ciphers.
        Description: The remote host supports the use of SSL ciphers that offer medium strength encryption, which we currently regard as those with key lengths at least 56 bits and less than 112 bits.
        Solution: Reconfigure the affected application if possible to avoid use of medium strength ciphers.
        Risk Factor: Medium / CVSS Base Score: 5.0 (CVSS2#AV:N/AC:L/Au:N/C:P/I:N/A:N) [More]

        Synopsis: The remote service supports the use of weak SSL ciphers.
        Description: The remote host supports the use of SSL ciphers that offer either weak encryption or no encryption at all. See also: http://www.openssl.org/docs/apps/ciphers.html
        Solution: Reconfigure the affected application if possible to avoid use of weak ciphers.
        Risk Factor: Medium / CVSS Base Score: 5.0 (CVSS2#AV:N/AC:L/Au:N/C:P/I:N/A:N) [More]
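    A commonly used starting point for mod_ssl on Apache 2.2 is below; it is a sketch rather than a guaranteed PCI-passing policy, so the scan should be re-run after applying it:
        # in the SSL virtual host / ssl.conf
        SSLProtocol all -SSLv2
        SSLCipherSuite HIGH:!aNULL:!eNULL:!MD5:!EXP:!LOW:!MEDIUM
        SSLHonorCipherOrder on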

  • How to know if your computer is hit by a dnschanger virus?

    - by kira
    The Federal Bureau of Investigation (FBI) is on the final stage of its Operation Ghost Click, which strikes against the menace of the DNSChanger virus and trojan. PCs unknowingly running the DNSChanger malware are in danger of going offline on this coming Monday (July 9), when the FBI plans to pull down the online servers that communicate with the virus on host computers.
    After gaining access to a host PC, the DNSChanger virus tries to modify the DNS (Domain Name Server) settings, which are essential for Internet access, to send traffic to malicious servers. These poisoned web addresses in turn point traffic generated through infected PCs to fake or unsafe websites, most of them running online scams. There are also reports that the DNSChanger virus acts as a trojan, allowing the perpetrators of the attack to gain access to infected PCs.
    Google issued a general advisory for netizens in May earlier this year to detect and remove DNSChanger from infected PCs. According to our report, some 500,000 (5 lakh) PCs were still infected by the DNSChanger virus in May 2012. The first report of the DNSChanger virus and its affiliation with an international group of hackers came to light towards the end of last year, and the FBI has been chasing them down ever since. The group behind the DNSChanger virus is estimated to have infected close to 4 million PCs around the world in 2011, until the FBI shut them down in November.
    In the last stage of Operation Ghost Click, the FBI plans to pull the plug and bring down the temporary rogue DNS servers on Monday, July 9, according to an official announcement. As a result, PCs still infected by the DNSChanger virus will be unable to access the Internet.
    How do you know if your PC has the DNSChanger virus? Don't worry. Google has explained the attack and the tools to remove the malware on its official blog. Trend Micro also has extensive step-by-step instructions to check if your Windows PC or Mac is infected by the virus. The article is found at http://www.thinkdigit.com/Internet/Google-warns-users-about-DNSChanger-malware_9665.html
    How do I check if my computer is one of those affected?

  • SSL on nginx + unicorn got "Error 102 (net::ERR_CONNECTION_REFUSED)"

    - by panggi
    I tried to deploy my app on EC2 (opened ports: 22, 80, 443).
        App: Rails 3.2.2
        Server: nginx 1.2.1, unicorn gem (latest), Ubuntu 12.04
        Deployer: Capistrano
    I tried to follow the instructions in Railscasts: http://railscasts.com/episodes/335-deploying-to-a-vps (sorry, it's a Pro episode). Everything is fine with plain HTTP on port 80, but I got Error 102 after trying to use SSL. Here is the nginx.conf content:
        upstream unicorn {
          server unix:/tmp/unicorn.frontend.sock fail_timeout=0;
        }
        server {
          server_name beta.sukeru.com;
          listen 443 default;
          root /home/deployer/apps/appname/current/public;
          ssl on;
          ssl_certificate server.crt;
          ssl_certificate_key server.key;
          ssl_session_timeout 5m;
          ssl_protocols SSLv2 SSLv3 TLSv1;
          ssl_ciphers HIGH:!aNULL:!MD5;
          ssl_prefer_server_ciphers on;
          location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_redirect off;
            proxy_pass http://unicorn;
          }
          error_page 500 502 503 504 /500.html;
        }
    In production.rb I set:
        config.force_ssl = true
    Can anyone give a solution for this? :)
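    Since ERR_CONNECTION_REFUSED means nothing accepted the TCP connection at all (as opposed to an SSL or application error), two hedged checks narrow it down quickly: whether nginx is really listening on 443 after the change, and whether the EC2 security group actually allows 443 from outside. For example:
        # confirm the config parses and was reloaded, then look for a listener on 443
        sudo nginx -t && sudo service nginx reload
        sudo netstat -tlnp | grep ':443'
        # from the outside, test the handshake directly
        openssl s_client -connect beta.sukeru.com:443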

  • Postfix mail filters not running behind a controlled environment

    - by Ashish
    Hi, I have deployed a Postfix server for receiving email. On it I have configured a SenderID + SPF milter, by referring to http://www.postfix.org/MILTER_README.html. The command that I used is as follows:
        ./sid-filter -u postfix -p inet:10027@localhost -l
    Following are my settings in the main.cf file:
        # Milter support for smtpd mail
        smtpd_milters = inet:localhost:10027, inet:localhost:10028
        # Milters for non-SMTP mail.
        non_smtpd_milters = inet:localhost:10027, inet:localhost:10028
        milter_default_action = reject
        # Postfix >= 2.6
        #milter_protocol = 6
        # 2.3 <= Postfix <= 2.5
        milter_protocol = 2
    Now I have this observation: one Postfix instance, set up on AWS CentOS 5.5, is working fine and is able to receive mail on the defined MX record. A similar Postfix instance (set up as in step 1), sitting behind one of the corporate firewalls, is not able to receive any mail and gives the following type of error logs:
        connect from xxxxxx.austin.hp.com[xx.xxx.96.198]
        May 25 13:20:02 g2t0385g postfix/smtpd[11733]: C11F9B0194: client=xxxxxxx.austin.hp.com[15.217.96.198]
        May 25 13:20:03 g2t0385g postfix/cleanup[11814]: C11F9B0194: message-id=
        May 25 13:20:03 g2t0385g postfix/cleanup[11814]: C11F9B0194: milter-reject: END-OF-MESSAGE from xxxxxx.austin.hp.com[xx.xxx.96.198]: 5.7.1 Command rejected; from= to= proto=ESMTP helo=
    Here the 'sid-filter' is giving problems. Any idea what I am doing wrong? Please help. Thanks in advance, Ashish Sharma

  • EC2 Ubuntu - Force instance to use internal IP

    - by Peter
    I've just set up a micro instance on EC2 (AMI ID ami-e59ca991). I had hoped to avoid charges for a year, as my usage falls well within the bounds of the free tier, but I have been charged $0.01 for "regional data transfer". I read here that this is because my instance is talking to itself via its external IP address. From what I've Googled, it looks like you can stop the charges by making sure that the instance uses its internal IP address. However, when I ping the hostname of my instance internally (via an SSH session) it resolves to the instance's internal IP address.
    How can I configure my instance so that I do not get these charges? Is it as simple as adding a line to my hosts file? Additionally, is this the real reason for the charge? I'm concerned that I've misunderstood the pricing somewhere. I have Apache and MySQL (with phpMyAdmin) running on the machine - could I be being charged for data transfer associated with these? (I have only one flat HTML page, and I have only logged in via phpMyAdmin - I have no data in my database.)
    Edit: Additionally, my user account on MySQL was declared as:
        grant all privileges on *.* to 'peter'@'localhost';
    Should I have instead used the internal hostname for the instance?
        grant all privileges on *.* to '[email protected]';
    Cheers, Pete
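    A couple of hedged checks that may clarify things: the instance metadata service reports both names the instance has, and any self-addressed traffic should use the internal one (or plain localhost) to stay off the regional-transfer meter. The URLs below are the standard EC2 metadata endpoints; nothing in them is specific to this instance:
        # ask the metadata service for this instance's internal and public hostnames
        curl http://169.254.169.254/latest/meta-data/local-hostname
        curl http://169.254.169.254/latest/meta-data/public-hostname
    For the MySQL side, a grant to 'peter'@'localhost' is sufficient as long as Apache/phpMyAdmin connect to localhost or the local socket, which also keeps that traffic off the external address.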

  • How to find out where or if MySQL 5 logs are stored on a WHM/cPanel machine

    - by moi
    I have a WHM/cPanel reseller hosting account on a virtual private server (Linux), and I have root access to the machine via SSH. I am trying to locate a file that contains information that will help me determine which users have accessed what database and from which hosts. I would imagine this kind of data is stored in a log file somewhere.
    The MySQL documentation says: "The general query log - Established client connections and statements received from clients" (see http://dev.mysql.com/doc/refman/5.0/en/server-logs.html). It also says: "By default, all log files are created in the mysqld data directory."
    So, I am NOT asking where general query logs are stored (because I expect I will get answers saying "it depends"). Please help me work out how to go about finding out where the MySQL general query logs are stored on a Linux machine.
    A couple of things I've already tried: I looked at /etc/my.cnf; it was a tiny file that only contained the following:
        [mysqld]
        skip-bdb
        skip-innodb
        set-variable = max_connections=500
        safe-show-database
    I have also looked in /var/lib/mysql/, but I could not see any log-like file names in that directory. Any clues on this would be most welcome.
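    For what it's worth, the server itself can be asked whether a general query log is enabled and where it would go; the variables below exist in MySQL 5.1 and later, while 5.0 only exposes the older log option in my.cnf (so on 5.0 an empty my.cnf usually just means the log is off):
        -- from the mysql client
        SHOW VARIABLES LIKE 'general_log%';
        SHOW VARIABLES LIKE 'datadir';
    If those come back OFF/empty, connections simply are not being logged; enabling them (general_log = 1 and general_log_file = /path under [mysqld] on 5.1+, or log = /path on 5.0) is the usual way to start capturing who connects from where.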

  • Phpmyadmin location for nginx

    - by multiformeinggno
    I installed nginx and phpMyAdmin. I set up a domain with these parameters to test phpMyAdmin:
        server {
          listen 80;
          server_name domain.com;
          root /usr/share/phpmyadmin;
          index index.php;
          fastcgi_index index.php;
          location ~ \.php$ {
            include /etc/nginx/fastcgi.conf;
            fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
          }
        }
    And everything works properly (if I visit the domain I can log in to phpMyAdmin). The problem is that this was just for testing phpMyAdmin; now I'd like to move it to my 'default' site, but I can't figure out how to serve it at /phpmyadmin. Here is the config for the 'default' nginx site (where I'd like to put this /phpmyadmin location):
        server {
          server_name blabla;
          access_log /var/log/nginx/$host.access.log;
          error_log /var/log/nginx/error.log;
          root /var/www/default;
          index index.php index.html;
          location / {
            try_files $uri $uri/ index.php;
          }
          location ~ \.php$ {
            include /etc/nginx/fastcgi.conf;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
          }
          ### NginX Status
          location /nginx_status {
            stub_status on;
            access_log off;
          }
          ### FPM Status
          location ~ ^/(status|ping)$ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            access_log off;
          }
        }
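    One possible shape for this, sketched under the assumption that phpMyAdmin lives in /usr/share/phpmyadmin as in the test vhost: serve /phpmyadmin from a root of /usr/share and give its PHP files a nested fastcgi location (the ^~ modifier keeps the server-level \.php$ location from grabbing these requests). A symlink from /var/www/default/phpmyadmin to /usr/share/phpmyadmin is a simpler alternative with much the same effect:
        location ^~ /phpmyadmin {
            root /usr/share;
            index index.php;

            location ~ ^/phpmyadmin/.+\.php$ {
                root /usr/share;
                include /etc/nginx/fastcgi.conf;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }
        }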

  • Installing Munin on Centos 6

    - by justinhj
    I've hit problems installing Munin on CentOS 6. This seems to be a conflict between parts of Perl; I think the version of Perl is newer on CentOS 6 (v5.10.1). When installing Munin via yum I get errors relating to Perl dependencies, as below. I'm not a big enough whiz at yum or rpm to figure out the issue. The Munin documentation does not yet talk about installing on CentOS 6.0.
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch) Requires: perl(Net::SNMP)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: bitstream-vera-fonts
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(HTML::Template)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl-Net-SNMP
        Error: Package: munin-common-1.4.2-0.rpl1.el5.noarch (/munin-common-1.4.2-0.rpl1.el5.noarch) Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch) Requires: perl(DBI)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(Log::Log4perl)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch) Requires: perl(LWP::Simple)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch) Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(RRDs)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch) Requires: perl-Net-Server
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(Date::Manip)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl-Net-Server
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(CGI::Fast)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(Time::HiRes)
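    The package names in those errors end in .el5, i.e. they were built for CentOS 5 against Perl 5.8.8, which is why they demand perl(:MODULE_COMPAT_5.8.8) that CentOS 6's Perl 5.10.1 cannot provide. A hedged alternative is to install Munin packages built for EL6 instead, for example from EPEL (this assumes the EPEL 6 repository has already been added to the box):
        # with the EPEL 6 repository installed/enabled
        yum --enablerepo=epel install munin munin-node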

  • Why do (Russian) characters in some received emails change when reading in David InfoCenter?

    - by waszkiewicz
    I'm using David InfoCenter as my email software, and I have trouble with some of my emails in Russian. It's only a few letters, in some emails (sent by different people): for example, the Russian "R" (written "Р") will be shown as a "T". In other emails in Russian the problem doesn't appear. Isn't that strange? Has anyone had the same problem and found where it came from?
    When I forward such an email to an external (internet) mailbox, it's even worse, and I get symbols instead of all the Russian letters... The default encoding was "Russian (ISO)"; I changed it to "Russian (Windows)", but the problem remains.
    Another weird reaction: when I write an internal email and give it the subject "Тест" ("test" in Russian), with "Тест" in the text window, it changes the title to "Oano"? But the content stays in Russian...
    With Mailinator I got the following, for message and subject "Тест":
        Subject: ????
        [..]
        MIME-Version: 1.0
        Content-Type: multipart/alternative; boundary="----_=_NextPart_000_00017783.4AF7FB71"

        This message is in MIME format. Since your mail reader does not understand this format, some or all of this message may not be legible.

        ------_=_NextPart_000_00017783.4AF7FB71
        Content-Type: text/plain; charset="utf-8"
        Content-Transfer-Encoding: base64

        0KLQtdGB0YI=

        ------_=_NextPart_000_00017783.4AF7FB71
        Content-Type: text/html; charset="utf-8"
        Content-Transfer-Encoding: base64

        PCFET0NUWVBFIEhUTUwgUFVCTElDICItLy9XM0MvL0RURCBIVE1MIDQuMCBUcmFuc2l0aW9uYWwv
        L0VOIj4NCjxIVE1MPjxIRUFEPg0KPE1FVEEgaHR0cC1lcXVpdj1Db250ZW50LVR5cGUgY29udGVu
        dD0idGV4dC9odG1sOyBjaGFyc2V0PXV0Zi04Ij4NCjxNRVRBIG5hbWU9R0VORVJBVE9SIGNvbnRl
        bnQ9Ik1TSFRNTCA4LjAwLjYwMDEuMTg4NTIiPjwvSEVBRD4NCjxCT0RZIHN0eWxlPSJGT05UOiAx
        MHB0IENvdXJpZXIgTmV3OyBDT0xPUjogIzAwMDAwMCIgbGVmdE1hcmdpbj01IHRvcE1hcmdpbj01
        Pg0KPERJViBzdHlsZT0iRk9OVDogMTBwdCBDb3VyaWVyIE5ldzsgQ09MT1I6ICMwMDAwMDAiPtCi
        0LXRgdGCPFNQQU4gDQppZD10b2JpdF9ibG9ja3F1b3RlPjxTUEFOIGlkPXRvYml0X2Jsb2NrcXVv
        dGU+PC9ESVY+PC9TUEFOPjwvU1BBTj48L0JPRFk+PC9IVE1MPg==
        ------_=_NextPart_000_00017783.4AF7FB71--
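    One detail that may help whoever debugs this: the text/plain part in that dump decodes to perfectly valid UTF-8 Cyrillic, so the message appears to be stored correctly and the mangling happens when it is displayed or transcoded. That can be verified with any base64 decoder, for example:
        $ echo "0KLQtdGB0YI=" | base64 -d
        Тест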
