Search Results

Search found 19093 results on 764 pages for 'max path'.

  • Windows 7: Windows Firewall: Logging/Notifying on Outgoing Request Attempts

    - by Maxim Z.
    I'm trying to configure Windows Firewall with Advanced Security to log and notify me when programs attempt outbound requests. I previously installed ZoneAlarm, which worked wonders for me with this on XP, but I'm unable to install ZA on Win7. My question is: if I set all outbound connections to auto-block, is it possible to monitor a log or get notifications when a program tries to make an outbound request, so that I can then create a specific rule for that program and block or allow it? Thanks!

    UPDATE: I've enabled all the logging options available through the Properties window of the Windows Firewall with Advanced Security console, but I am only seeing logs in the %systemroot%\system32\LogFiles\Firewall\pfirewall.log file, not in the Event Viewer, as the first answer suggested. However, the logs I can see only tell me each request's or response's destination IP and whether the connection was allowed or blocked; they don't tell me which executable it came from. I want to find out the file path of the executable that each blocked request comes from. So far, I haven't been able to.
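
    One avenue that might surface the executable path (a suggestion of mine, not from the thread): Windows can audit blocked connections through the Windows Filtering Platform, and those audit events do include the program's path. From an elevated command prompt:

        :: enable Windows Filtering Platform connection auditing; blocked
        :: connections then appear in the Security event log as event 5157,
        :: which includes the application's file path
        auditpol /set /subcategory:"Filtering Platform Connection" /success:enable /failure:enable

    The events land under Event Viewer > Windows Logs > Security; filtering on event ID 5157 should show one entry per blocked attempt.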

  • Samba 3.5 Shadow Copy for Windows 7

    - by Prashanth Sundaram
    Over the past several days I have been trying to get shadow copies to work with Samba, but I haven't been successful. Can someone check the config below and let me know if I am missing something? We are using an EqualLogic SAN with iSCSI LUNs to mount volumes. I can cleanly access Samba shares from Windows 7 clients, just not shadow copies. I have referred to the official how-to but couldn't get it to work. I see these messages in the logs; any help is deeply appreciated.

        [2012/10/31 12:20:53.549863, 0] smbd/nttrans.c:2170(call_nt_transact_ioctl)
          FSCTL_GET_SHADOW_COPY_DATA: connectpath /fs/test-01, failed.
        [2012/10/31 12:21:13.887198, 0] modules/vfs_shadow_copy2.c:734(shadow_copy2_get_shadow_copy2_data)
          shadow:snapdir not found for /fs/test-01 in get_shadow_copy_data
        [2012/10/31 12:21:13.887265, 0] smbd/nttrans.c:2170(call_nt_transact_ioctl)
          FSCTL_GET_SHADOW_COPY_DATA: connectpath /fs/test-01, failed.

    Samba packages:

        samba-3.5.10-116.el6_2.x86_64
        samba-common-3.5.10-116.el6_2.x86_64
        samba-winbind-clients-3.5.10-116.el6_2.x86_64
        samba-client-3.5.10-116.el6_2.x86_64

    df -h (the first entry is the iSCSI LUN; the other two are snapshots):

        /dev/mapper/eql-0-fs-test01    5.0G  2.3G  2.5G  48%  /fs/test-01
        /dev/mapper/eql-2-0+fs-test01  5.0G  2.3G  2.5G  48%  /fs/test-01/@GMT-2012.10.26-17.32.42/fs/test-01  (snapshot 1)
        /dev/mapper/eql-d-0+fs-test01  5.0G  2.3G  2.5G  48%  /fs/test-01/@GMT-2012.10.31-11.52.42/fs/test-01  (snapshot 2)

    /etc/samba/smb.conf:

        [global]
        workgroup = DOMAIN
        server string = Samba Server Version %v
        security = ads
        realm = DOMAIN.CORP
        encrypt passwords = yes
        guest account = nobody
        map to guest = bad uid
        log file = /var/log/samba/%m.log
        domain master = no
        local master = no
        preferred master = no
        os level = 0
        load printers = no
        show add printer wizard = no
        printable = no
        printcap name = /dev/null
        disable spoolss = yes
        follow symlinks = yes
        wide links = yes
        unix extensions = no

        [test]
        comment = Test Directories
        path = /fs/test-01
        vfs objects = shadow_copy2
        #shadow_copy2: sort = desc
        #shadow: localtime = yes
        #shadow: snapdir = /fs/test-01/test
        #shadow: basedir = /fs/test-01
        guest ok = yes
        writeable = yes
        map archive = no
        force create mode = 0660
        force directory mode = 2770
        inherit owner = yes
        inherit permissions = yes

    All feedback is welcome. Thanks!
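
    For what it's worth, a sketch of what the share section might need (my assumptions, not a confirmed fix): the vfs_shadow_copy2 options are all commented out above, and the module expects them without a space after the prefix, i.e. shadow:snapdir rather than shadow: snapdir. Given that df shows the snapshots mounted as @GMT-* directories directly under /fs/test-01, something like:

        [test]
        path = /fs/test-01
        vfs objects = shadow_copy2
        shadow:snapdir = /fs/test-01
        shadow:basedir = /fs/test-01
        shadow:sort = desc

    The "shadow:snapdir not found" log line above is the module failing to locate exactly this directory, so whatever value is used must be the directory that actually contains the @GMT-YYYY.MM.DD-hh.mm.ss entries.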

  • Puppet and Vim fighting over Ruby version

    - by devians
    I have installed Puppet from the .dmg from Puppet Labs. If I remove Ruby 1.9.3, Puppet works, but other things like my Vim install (dependent plugins) do not. According to http://docs.puppetlabs.com/guides/platforms.html#ruby-versions, 1.9.3 is supported. So what's going wrong with Puppet?

        % uname -a
        Darwin Kusanagi.local 11.4.2 Darwin Kernel Version 11.4.2: Thu Aug 23 16:25:48 PDT 2012; root:xnu-1699.32.7~1/RELEASE_X86_64 x86_64
        % which ruby
        /usr/local/bin/ruby
        % ruby --version
        ruby 1.9.3p327 (2012-11-10 revision 37606) [x86_64-darwin11.4.2]
        % /usr/bin/ruby --version
        ruby 1.8.7 (2012-02-08 patchlevel 358) [universal-darwin11.0]
        % brew info ruby
        ruby: stable 1.9.3-p327, HEAD
        http://www.ruby-lang.org/en/
        Depends on: pkg-config, readline, gdbm, libyaml
        /usr/local/Cellar/ruby/1.9.3-p327 (796 files, 17M) *
        https://github.com/mxcl/homebrew/commits/master/Library/Formula/ruby.rb
        ==> Options
        --with-tcltk    Install with Tcl/Tk support
        --with-suffix   Suffix commands with "19"
        --universal     Build a universal binary
        --with-doc      Install documentation
        ==> Caveats
        NOTE: By default, gem installed binaries will be placed into:
          /usr/local/Cellar/ruby/1.9.3-p327/bin
        You may want to add this to your PATH.
        % puppet
        /usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- puppet/util/command_line (LoadError)
                from /usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
                from /usr/bin/puppet:3:in `<main>'
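
    A guess at a workaround (mine, not from the thread): the .dmg installs Puppet's libraries for Apple's stock Ruby 1.8.7, so when /usr/local/bin/ruby (Homebrew's 1.9.3) shadows it, puppet/util/command_line isn't on the load path. Forcing the stock interpreter, assuming the package put the launcher at /usr/bin/puppet, would look like:

        # run Puppet under the system Ruby the package was installed against
        /usr/bin/ruby /usr/bin/puppet --version

    A shell alias or a small wrapper script early in PATH could make that permanent while leaving Homebrew's Ruby in place for everything else.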

  • Windows Media Player 12 Library import keeps dying

    - by duckworth
    I cannot get WMP 12 to import my library. I have searched various forums and tried all the common solutions, like disabling media sharing, deleting my %LOCALAPPDATA%\Microsoft\Media Player directory and reimporting, etc., but nothing works. I have even removed the Media features from Windows setup and re-added them. I have a large MP3 collection shared on the network from another Windows box. I add the folder (tried as a mapped drive and as a UNC path) and it begins importing. About 30 minutes into the import (when CurrentDatabase_372.wmdb hits just under 400 MB), WMP stops importing, all of the icons in WMP turn to red x's, and my library is gone. I close and reopen WMP 12, the library is empty, the CurrentDatabase_372.wmdb is small, and it starts importing again. Rinse, lather, repeat. I am going nuts, as WMP 11 on Vista handles this same setup perfectly. I am at my wits' end on what else to try. I am running a legit Windows 7 Ultimate x64 RTM install. Here is a screenshot of what WMP 12 looks like when the import dies. Any other ideas?

    Edit: OK, I just confirmed this is definitely not a problem specific to my computer or configuration. I did a clean installation of Windows 7 Ultimate x86 on an old test machine, opened WMP 12 and added the same network folder of MP3s, and it crashed about an hour into the import with the same appearance as the screenshot above, and the library disappears. So the problem has to be one of several things:

    - The large size of the library
    - The fact that the library is on the network
    - A specific file or files causing the player to crash

  • Mod_Perl configuration for multiple domains

    - by daliaessam
    Reading the mod_perl module documentation: can we configure it on a per-domain basis? What I mean is, can we configure it to run on every domain, or only on a specific domain? What I see in the docs is:

        Registry Scripts
        To enable registry scripts add to httpd.conf:

            Alias /perl/ /home/httpd/2.0/perl/
            <Location /perl/>
                SetHandler perl-script
                PerlResponseHandler ModPerl::Registry
                PerlOptions +ParseHeaders
                Options +ExecCGI
            </Location>

        and now assuming that we have the following script:

            #!/usr/bin/perl
            print "Content-type: text/plain\n\n";
            print "mod_perl 2.0 rocks!\n";

        saved in /home/httpd/httpd-2.0/perl/rock.pl. Make the script executable and readable by everybody:

            % chmod a+rx /home/httpd/httpd-2.0/perl/rock.pl

        Of course the path to the script should be readable by the server too. In the real world you probably want tighter permissions, but for the purpose of testing that things are working, this is just fine.

    From what I understand, with the directive above we can run Perl scripts only from the one specific folder it points at. So the question again: can we make this directive per-domain, either for all domains or for a specific set of domains?
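
    As far as I can tell (a sketch, not from the docs quoted above), those are ordinary Apache configuration directives, so they can live inside a <VirtualHost> block and then apply to that domain only; the hostname and paths here are made up:

        <VirtualHost *:80>
            ServerName example-one.test
            DocumentRoot /home/httpd/example-one/htdocs
            Alias /perl/ /home/httpd/example-one/perl/
            <Location /perl/>
                SetHandler perl-script
                PerlResponseHandler ModPerl::Registry
                PerlOptions +ParseHeaders
                Options +ExecCGI
            </Location>
        </VirtualHost>

    Repeating the block in each virtual host (or omitting it) would give per-domain control.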

  • A complicated nginx/php-fpm chroot setup

    - by Rsaesha
    I'm running nginx and PHP-FPM, and I want to set up jails for each host. My setup is a little complicated, so following tutorials on the web gets me nowhere. Each site has a directory /var/www/domain.name/. Inside that directory there will be a public/ directory which will be the website root, a logs/ directory which will store nginx logs for that site specifically, and the chroot filesystem (etc/, usr/, etc.).

    The first problem I've run into is that no matter how I configure it, PHP-FPM cannot find the files that are passed to it via nginx. They result in a "Primary script unknown" error, and to make matters worse, the error messages from PHP-FPM are no more verbose than that, so I can't figure out what path is being passed by nginx. A PHP-FPM pool configuration for a host looks like this ('x' is incremented for each pool):

        [host]
        user = host
        group = www-data
        chroot = /var/www/domain.name
        chdir = /public
        listen = 127.0.0.1:900x

    The nginx config for this host looks like this:

        server {
            listen 80;
            server_name domain.name *.domain.name;
            root /var/www/domain.name/public;
            index index.php index.html index.html;

            location ~ \.php$ {
                expires epoch;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9001;
            }
        }

    I'm guessing that the problem is the SCRIPT_FILENAME parameter, but I've changed it to just $fastcgi_script_name, and various other combinations, but to no avail. Can anyone help?
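
    A sketch of the usual fix for this combination (assuming the pool config above): once PHP-FPM is chrooted to /var/www/domain.name, the worker sees the site root as /public, so SCRIPT_FILENAME has to be expressed relative to the chroot rather than built from nginx's $document_root:

        location ~ \.php$ {
            include fastcgi_params;
            # the path as the chrooted PHP-FPM worker sees it,
            # not the real filesystem path nginx sees
            fastcgi_param SCRIPT_FILENAME /public$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9001;
        }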

  • Configuring suExec to work with Apache and PHP via FastCGI

    - by RandomPsychology
    I have installed ISPConfig 3 on an Ubuntu VPS and configured it for Apache + PHP via FastCGI and suexec. I am able to upload PHP apps (e.g. Wordpress) and run them normally w/ suexec. However, for some reason the PHP scripts cannot write data to disk. For instance, trying to upgrade a plugin via Wordpress' web interface causes it to fail with the error "Could not create directory /path/to/wp-content/upgrade/plugin.tmp." Trying to upload media and other assets also fails via the web. I've checked owner/group on the directory structure and it looks good. The suExec log also seems to be normal and I don't see any indicative errors in the web server logs. I can also confirm that changing the owner/group on the directories does result in the expected error in suexec.log. Additionally, I have the directory permissions set to u=rw,g=r,o= and I've also tried setting g=rw. None of this results in my scripts being able to write to the directories. What am I doing wrong?
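
    One thing worth double-checking (an observation of mine, not from the question): with the directory mode set to u=rw,g=r,o=, the execute (search) bit is missing, and a directory without x cannot be entered or written into even by its owner. Something like this would restore it, using the wp-content path from the error message:

        # directories need the execute bit to be traversed; regular files do not
        find /path/to/wp-content -type d -exec chmod u=rwx,g=rx {} +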

  • IIS 7.5 / Windows 7: Error 500.19, error code 0x800700b7

    - by nikhiljoshi
    I have been trying to resolve this issue. I am using Windows 7 and VS2008 + IIS 7.5. My project is stuck because of this error. The error says:

        HTTP Error 500.19 - Internal Server Error
        The requested page cannot be accessed because the related configuration data for the page is invalid.

        Module         IIS Web Core
        Notification   BeginRequest
        Handler        Not yet determined
        Error Code     0x800700b7
        Config Error   There is a duplicate 'system.web.extensions/scripting/scriptResourceHandler' section defined
        Config File    \\?\C:\inetpub\wwwroot\test23\web.config
        Requested URL  http://localhost:80/test23
        Physical Path  C:\inetpub\wwwroot\test23
        Logon Method   Not yet determined
        Logon User     Not yet determined

        Config Source:
        15: <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
        16:   <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
        17: <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">

    I followed the instructions in the Microsoft solution document http://support.microsoft.com/kb/942055, but it didn't help.
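
    The usual resolution, offered here as an educated guess from the message rather than a confirmed fix: when the application pool targets .NET 3.5 on IIS 7.5, the system.web.extensions section group is already declared in the machine-level framework web.config, so declaring it again in the site's web.config triggers exactly this duplicate-section error. The fix would be deleting the duplicated declarations from C:\inetpub\wwwroot\test23\web.config:

        <!-- remove this block from the site web.config if the same sections
             are already declared machine-wide (contents abbreviated) -->
        <configSections>
          <sectionGroup name="system.web.extensions" ...>
            <sectionGroup name="scripting" ...> ... </sectionGroup>
          </sectionGroup>
        </configSections>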

  • Apache HTTPD - Segmentation fault when loading mod_jk module

    - by Hans Engel
    I just set up mod_jk with my Apache httpd 2.0.52 installation, but now when I try to start Apache, it segfaults. I've checked that I am using the mod_jk compiled for 2.0.x, built against the same version I have, in fact. I've also verified that the path I'm giving to LoadModule is correct, and that the permissions and ownership of the file are the same as the rest of the modules'. When I remove the LoadModule line for mod_jk from my httpd.conf, there is no segmentation fault. Nothing shows in Apache's error logs. I have tried restarting the server with this module using both service httpd restart and httpd. These are the last few lines returned by strace httpd -X:

        gettimeofday({1292100295, 434487}, NULL) = 0
        socket(PF_INET6, SOCK_STREAM, IPPROTO_IP) = -1 EAFNOSUPPORT (Address family not supported by protocol)
        socket(PF_NETLINK, SOCK_RAW, 0) = 3
        bind(3, {sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 0
        getsockname(3, {sa_family=AF_NETLINK, pid=22378, groups=00000000}, [12]) = 0
        time(NULL) = 1292100295
        sendto(3, "\24\0\0\0\26\0\1\3\307\342\3M\0\0\0\0\0\305\333\267", 20, 0, {sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 20
        recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"<\0\0\0\24\0\2\0\307\342\3MjW\0\0\2\10\200\376\1\0\0\0"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 664
        recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"\24\0\0\0\3\0\2\0\307\342\3MjW\0\0\0\0\0\0\1\0\0\0\10\0"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 20
        close(3) = 0
        socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 3
        --- SIGSEGV (Segmentation fault) @ 0 (0) ---
        +++ killed by SIGSEGV +++
        Process 22378 detached

    Has anyone had a similar problem using Apache 2.0.52 with mod_jk? I might try downloading and building the source for the Apache server and mod_jk myself if there isn't a known fix for this.
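
    Building mod_jk against the exact installed Apache is often the quickest way to rule out an ABI mismatch; roughly, from the tomcat-connectors source tree (paths assumed):

        cd tomcat-connectors-*/native
        ./configure --with-apxs=/usr/sbin/apxs   # point at this server's apxs
        make
        make install    # installs mod_jk.so into the server's modules directory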

  • Force SSL on one page via .htaccess without looping

    - by Will Martin
    Okay, I have this code:

        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^/borrowing/ill/request\.php$
        RewriteRule ^.*$ https://%{HTTP_HOST}%{REQUEST_URI} [R,L]

    The way I would expect this to work is:

    1. A request for /borrowing/ill/request.php comes in on HTTP.
    2. The rule matches.
    3. The server redirects to HTTPS.
    4. The rule does not match, because HTTPS is now on.

    The way it actually works is:

    1. A request for /borrowing/ill/request.php comes in on HTTP.
    2. The rule matches.
    3. The server redirects to HTTPS.
    4. The rule matches.
    5. The server redirects to HTTPS... and so on.

    I know that the second condition (matching the file name) is working, because the redirect loop only hits that specific page. The question is, why isn't the switch to HTTPS causing the first condition to stop matching?

    EDIT: I put the exact same .htaccess rules into a test area on another server (same file and path info) and they worked just fine. There's got to be something wrong with the server configuration elsewhere.
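
    A guess at the root cause (not confirmed in the question): if SSL terminates at a load balancer or reverse proxy in front of this server, Apache itself only ever sees plain HTTP, so %{HTTPS} stays off even after the redirect. In that setup the check has to look at the forwarded-protocol header instead:

        # only meaningful behind a proxy that sets X-Forwarded-Proto
        RewriteCond %{HTTP:X-Forwarded-Proto} !https
        RewriteCond %{REQUEST_URI} ^/borrowing/ill/request\.php$
        RewriteRule ^.*$ https://%{HTTP_HOST}%{REQUEST_URI} [R,L]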

  • How can a Linux Administrator improve their shell scripting and automation skills?

    - by ewwhite
    In my organization, I work with a group of NOC staff, budding junior engineers and a handful of senior engineers, all with a focus on Linux. One interesting step in the way the company grows talent is that there's a path from the NOC to the senior engineering ranks. Viewing the talent pool as a relative newcomer, I see that there's a split in the skill sets that tends to grow over time. There are engineers who know one or several particular technologies well and are constantly immersed in them, e.g. MySQL, firewalls, SAN storage, load balancers. There are others who are generalists and can navigate multiple technologies. All learn enough Linux (commands, processes) to do what they need and use on a daily basis. A differentiating factor between some of the staff is how well they embrace scripting, automation and configuration-management methodologies. For instance, we have two engineers who do the bulk of the Amazon AWS CloudFormation work, and another who handles most of the Puppet infrastructure. Perhaps a quarter of the engineers are adept at Bash shell scripting. Looking at this in the context of the incredibly high demand for DevOps skills in the job market, I'm curious how other organizations foster the development of these skills and grow their internal talent. Scripting doesn't seem like a particularly teachable concept. How does a sysadmin improve their shell scripting? Is there still a place for engineers who do not or cannot keep up in the DevOps paradigm? Are we simply to assume that some people will be left behind as these technologies evolve? Is that okay?

  • PHP extension causes symbol lookup error

    - by Christian
    I installed, or rather tried to install, the NMCryptGate extension for PHP on my Debian 5.0.8 server. I did this by compiling the sources, which produced no error message. Calling phpinfo() I can see the extension as enabled. BUT, whenever I try calling a method from this extension, an error is logged to the Apache error log:

        /usr/sbin/apache2: symbol lookup error: /usr/lib/php5/20060613+lfs/nmcryptgate.so: undefined symbol: nmlistalloc

    What is missing? I got two packages from the software company: the PHP module sources, and some files which should (according to their paths inside the tar) go to /usr/local/bin|doc|include|lib. I moved them there without any effect. Each of these two packages has its own config file, both looking almost the same:

        # libnmcryptgate.la - a libtool library file
        # Generated by ltmain.sh - GNU libtool 1.3.4 (1.385.2.196 1999/12/07 21:47:57)
        #
        # Please DO NOT delete this file
        # It is necessary for linking the library

        # The name that we can dlopen(3)
        dlname=''

        # Names of this library
        library_names='libnmcryptgate.so.1 libnmcryptgate.so libnmcryptgate.so'

        # The name of the static archive
        old_library=''

        # Libraries that this one depends upon
        dependency_libs=' -L. -L/usr/ssl/lib -L/usr/local/ssl/lib -L/usr/local/lib -lssl -lcrypto'

        # Version information for libnmcryptgate
        current=1
        age=0
        revision=29

        # Is this an already installed library
        installed=yes

        # Directory that this library needs to be installed in
        libdir='/usr/local/lib'

    I tried several ways to get it right: moving files, symlinking, changing configurations, always followed by restarting Apache, with no success. I guess I just have to move the files to the correct location or change the libdir inside the config files, but meanwhile I'm totally confused by the two packages: do I need both? Which config rules what? Do I have to use the libdir variable, and for what? Anybody out there able to point me to my source of failure? Thank you in advance, regards, Christian
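
    A diagnostic sketch, under my assumption that nmlistalloc lives in libnmcryptgate and the PHP module was either built without linking it or the runtime linker can't find it:

        # does the PHP module record a dependency on the shared library at all?
        ldd /usr/lib/php5/20060613+lfs/nmcryptgate.so | grep -i nmcryptgate

        # if the library sits in /usr/local/lib, make sure the runtime linker knows it
        echo /usr/local/lib >> /etc/ld.so.conf
        ldconfig

    If ldd shows no reference to libnmcryptgate at all, the extension would need to be rebuilt with something like -lnmcryptgate -L/usr/local/lib in its link flags.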

  • Apache doesn't immediately notice a change in the document root

    - by Tom
    We use Capistrano for website deployments, and our Apache document root is a symlink to a particular code release. The deployment procedure switches the symlink from the old release to the new release as the final step of the deployment. We are migrating our webservers from real servers running RHEL 5.6 to Amazon EC2 virtual machines running Ubuntu 11.10, and the new servers are suffering from a problem where Apache doesn't immediately notice the change to its document root when the symlink is switched. It can take a second or so (and I think I've even seen it take a couple of minutes). It's as if Apache has cached the physical path of the symlink for some time. Does anyone know some Apache settings I could look at to get it to notice changes to its served files more quickly?

    Thoughts:

    - I read that the disks on virtual machines are much slower (since they are network-attached storage). Perhaps the filesystem cache somehow works differently too? If so, is there anything that can be done?
    - The website runs PHP code. Perhaps there is some PHP config difference between RHEL and Ubuntu? I checked realpath_cache_ttl, but both servers have it commented out:

        ; Duration of time, in seconds for which to cache realpath information for a given
        ; file or directory. For systems with rarely changing files, consider increasing this
        ; value.
        ; http://www.php.net/manual/en/ini.core.php#ini.realpath-cache-ttl
        ;realpath_cache_ttl = 120

    - We do use the APC opcode cache, but we don't think it's the issue, based on experimentation. The PHP code is in different file paths for each deployment and we ensure stat=1.

    Here is a similar question that is very interesting: 294107. It doesn't provide an answer for me, though. One solution would be to reload Apache every time we modify the document root symlink. I'll do this if we can't find another solution.
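
    If it does come to reloading, a graceful reload is cheap enough to hang off the end of a Capistrano deploy. A sketch (the hook name varies by Capistrano version, and the sudo setup is assumed):

        # config/deploy.rb
        after "deploy:create_symlink", "apache:reload"

        namespace :apache do
          task :reload, :roles => :web do
            # graceful finishes in-flight requests before re-resolving paths
            run "sudo /usr/sbin/apachectl graceful"
          end
        end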

  • Create App_Data and register Excel application on ASP.NET deployment? (IIS7.5)

    - by Francesco
    I am deploying an ASP.NET MVC 3 application on IIS 7.5. I have deployed other applications before, but they never made use of the App_Data folder or any additional component such as the Interop library. I used one-click deployment, and I use the default application pool. When I launch the application I immediately get an error stating:

        [web access] Sorry, an error occurred while processing your request.
        [browse from IIS7] Could not find a part of the path 'D:\Data\Apps\OppUpdate\App_Data\Test.xlsx'.

    Then I manually added the App_Data folder inside the deployment directory, and the application starts normally. Then, when it comes to the task that uses the Interop library, I get the following error:

        [web access] Sorry, an error occurred while processing your request.
        [browse from IIS7] Retrieving the COM class factory for component with CLSID {00024500-0000-0000-C000-000000000046} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).

    Is there any way to automatically add the App_Data folder when using one-click deploy? How can I register the Interop services? Thank you, Francesco
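
    A hedged guess on the COM error: that CLSID corresponds to Excel's Application class, and REGDB_E_CLASSNOTREG typically appears when the worker process can't find a matching-bitness Excel registration, e.g. a 64-bit app pool with 32-bit Office installed (Excel must actually be installed on the server, and Microsoft discourages automating Office from services at all). If bitness is the issue here, switching the pool to 32-bit is one quick test:

        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /enable32BitAppOnWin64:true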

  • Does WebDAV even work on IIS 7? I say nay

    - by FlavorScape
    I've tried every configuration from the top 10 Stack Overflow and Server Fault results for WebDAV 405 on IIS (for the verbs PROPFIND and PUT). I'm running Server 2008 SP2. I followed all the instructions here. I'm no stranger to configuring servers, but this has gotten nowhere after 8 hours. What I've confirmed and tried:

    - Confirmed system.webServer in applicationHost.config:

        <add name="WebDAV" path="*" verb="PROPFIND,PROPPATCH,MKCOL,PUT,COPY,DELETE,MOVE,LOCK,UNLOCK" modules="WebDAVModule" resourceType="Unspecified" requireAccess="None" />

    - Port 443 with basic auth: same issue.
    - Port 80 with Windows auth: broken (405).
    - Windows authentication: check.
    - Added authoring rules for the default site and application: check.
    - Not the firewall: check.
    - Added the "Desktop Experience" role feature.
    - Tried HTTPS with Basic Authentication on port 443: does not work.
    - No other services such as SharePoint are running: check.
    - Confirmed the user has read/write NT-level permissions for the folder/virtual directory.
    - Tried net use * http://localhost /user:MYDOMAIN\me myPass and got error 1920; if I don't authenticate, I get error 67.
    - Confirmed I'm not applying request filtering to WebDAV:

        <requestFiltering>
            <fileExtensions applyToWebDAV="false" />
            <verbs applyToWebDAV="false" />
            <hiddenSegments applyToWebDAV="false" />
        </requestFiltering>

    The error: 405 - HTTP verb used to access this page is not allowed. "The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt access."

    SHOULD I JUST GIVE UP? Other questions that helped none:

    - 405 - 'Method not Allowed' adding service hosted in IIS7
    - webdav on iis7.5 - simply cannot make it work
    - http://studentguru.gr/b/kingherc/archive/2009/11/21/webdav-for-iis-7-on-windows-server-2008-r2.aspx
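
    For the net use failures specifically, a side note and an assumption on my part: mapping an http:// path relies on the client-side WebClient (WebDAV redirector) service, which server SKUs don't run by default even after Desktop Experience is installed:

        :: the redirector must exist and be running before net use can map http:// paths
        sc query WebClient
        net start WebClient

    If error 67 disappears after starting the service, the remaining 405s would then point at the server-side handler mapping rather than the client.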

  • Installing MySQL 5.1 on OS X 10.7 Lion

    - by xisal
    I am trying to install MySQL 5.1. I am on Lion, and when I remove all files associated with MySQL on my machine, it still tells me that I have a newer version installed when I try to install it from the DMG file. Has anyone successfully installed MySQL 5.1 on Lion? I found a solution using Homebrew:

    1. Completely remove MySQL from your system (just in case):

        sudo rm /usr/local/mysql
        sudo rm -rf /usr/local/mysql*
        sudo rm -rf /Library/StartupItems/MySQLCOM
        sudo rm -rf /Library/PreferencePanes/My*
        # edit /etc/hostconfig and remove the line MYSQLCOM=-YES-
        rm -rf ~/Library/PreferencePanes/My*
        sudo rm -rf /Library/Receipts/mysql*
        sudo rm -rf /Library/Receipts/MySQL*
        sudo rm -rf /var/db/receipts/com.mysql.*

    Source: http://stackoverflow.com/questions/1436425/how-do-you-uninstall-mysql-from-mac-os-x

    2. Install Homebrew:

        /usr/bin/ruby -e "$(curl -fsSL https://raw.github.com/gist/323731)"

    Source: https://github.com/mxcl/homebrew/wiki/installation

    3. Install MySQL 5.1 via brew:

        brew install mysql51

    If that doesn't work, do this:

        brew install https://raw.github.com/adamv/homebrew-alt/master/versions/mysql51.rb

    Source: http://stackoverflow.com/questions/4359131/brew-install-mysql-on-mac-os/6399627#6399627

    4. Make MySQL work. Create the mysql.sock file:

        touch /tmp/mysql.sock

    Install the MySQL default tables (adjust the path to your install):

        /usr/local/Cellar/mysql51/5.1.58/bin/mysql_install_db

    Source: http://stackoverflow.com/questions/4788381/getting-cant-connect-through-socket-tmp-mysql-when-installing-mysql-on-ma/5140849#5140849

  • How to write re-usable puppet definitions?

    - by Oliver Probst
    I'd like to write a Puppet manifest to install and configure an application on target servers. Parts of this manifest should be re-usable, so I used define for my re-usable functionality. Doing so, I always have the problem that there are parts of the definition which are not re-usable. A simple example is a bunch of configuration files to be created. These files must be placed in the same directory, and this directory must be created only once.

    Example, nodes.pp:

        node 'myNode.in.a.domain' {
          mymodule::addconfig {'configfile1.xml':
            param => 'somevalue',
          }
          mymodule::addconfig {'configfile2.xml':
            param => 'someothervalue',
          }
        }

    mymodule.pp:

        define mymodule::addconfig ($param) {
          $config_dir = "/the/directory/"

          # ensure that the directory exists:
          file { $config_dir:
            ensure => directory,
          }

          # create the configuration file:
          file { $name:
            path    => "${config_dir}/${name}",
            content => template('a_template.erb'),
            require => File[$config_dir],
          }
        }

    This example will fail, because the resource file { $config_dir: is now defined twice. As far as I understood, it is necessary to extract these parts into a class. Then it looks like this:

    nodes.pp:

        node 'myNode.in.a.domain' {
          class { 'mymodule::createConfigurationDirectory': }

          mymodule::addconfig {'configfile1.xml':
            param   => 'somevalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
          mymodule::addconfig {'configfile2.xml':
            param   => 'someothervalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
        }

    But this makes my interface hard to use. Every user of my module has to know that there is a class which is additionally required. For this simple use case the additional class might be acceptable, but with growing module complexity (lots of definitions) I'm a bit afraid of confusing the module's users. So I'd like to know: is there a better way to handle these dependencies? Ideally, classes like createConfigurationDirectory are hidden from the user of the module's API. Or are there some other best practices or patterns for handling such dependencies?
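
    One pattern worth sketching (names follow the example above): because Puppet classes are singletons, the define can include the class itself, so callers never need to know about it:

        class mymodule::config_dir {
          file { '/the/directory':
            ensure => directory,
          }
        }

        define mymodule::addconfig ($param) {
          include mymodule::config_dir   # safe to evaluate from every instance

          file { "/the/directory/${name}":
            content => template('mymodule/a_template.erb'),
            require => Class['mymodule::config_dir'],
          }
        }

    Unlike declaring a resource twice, including a class any number of times is legal, which is exactly what this use case needs.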

  • How to combine try_files and sendfile on Nginx?

    - by hcalves
    I need nginx to serve a file relative to the document root if it exists, and fall back to an upstream server if it doesn't. This can be accomplished with something like:

        server {
            listen 80;
            server_name localhost;

            location / {
                root /var/www/nginx/;
                try_files $uri @my_upstream;
            }

            location @my_upstream {
                internal;
                proxy_pass http://127.0.0.1:8000;
            }
        }

    Fair enough. The problem is, my upstream does not serve the contents of the URI directly; instead, it returns X-Accel-Redirect with a location relative to the document root (it generates this file on the fly):

        % curl -I http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg
        HTTP/1.0 200 OK
        Date: Mon, 26 Nov 2012 20:58:25 GMT
        Server: WSGIServer/0.1 Python/2.7.2
        X-Accel-Redirect: animals/kitten.jpg__100x100__crop.jpg
        Content-Type: text/html; charset=utf-8

    Apparently, this should work. The problem though is that nginx tries to serve this file from some internal default document root instead of using the one specified in the location block:

        2012/11/26 18:44:55 [error] 824#0: *54 open() "/usr/local/Cellar/nginx/1.2.4/htmlanimals/kitten.jpg__100x100__crop.jpg" failed (2: No such file or directory), client: 127.0.0.1, server: localhost, request: "GET /animals/kitten.jpg__100x100__crop.jpg HTTP/1.1", upstream: "http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg", host: "127.0.0.1:80"

    How do I force nginx to serve the file relative to the right document root? According to the XSendfile documentation the returned path should be relative, so my upstream is doing the right thing.
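
    A reading of that error message (my interpretation, not the poster's): "/usr/local/Cellar/nginx/1.2.4/htmlanimals/..." is the compiled-in default root "html" glued directly onto the redirect path, which suggests the X-Accel-Redirect value needs a leading slash so nginx can run it through location matching. A sketch:

        # upstream would send:  X-Accel-Redirect: /animals/kitten.jpg__100x100__crop.jpg
        location /animals/ {
            internal;
            root /var/www/nginx;   # the redirected URI is resolved against this root
        }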

  • Scheduled task does not run on Windows 2003 server on VMware unattended, runs fine otherwise

    - by lnm
    A scheduled task does not run on a Windows 2003 server on VMware; the same setup runs fine on a standalone server. The test below illustrates the problem. We really need to run a more complex .bat file, but this shows the issue. I have a .bat file that copies a file from server A to server B, using full path names, no drive mapping. It runs fine on server B from a command prompt. I created a task that runs this .bat file under a domain ID, with password, that is part of the Administrators group on both servers. The task runs fine from the Scheduled Tasks screen, and as a scheduled task as long as somebody is logged into the server. If nobody is logged in, the task does not run. There is no error message in the Task Scheduler log, just an entry that the task started, but no entry for a finish or an error code. To add insult to injury, if the task copies a file in the opposite direction, from server B to server A, it runs fine as a scheduled unattended task. If I copy a file from server B to server B, the task also runs fine unattended. I recreated exactly the same setup on a standalone server: no issues at all. I checked the obvious things: the task has "run only if logged on" unchecked, the domain ID has the "log on as a batch job" privilege and logon rights, and the Task Scheduler service runs as Local System with automatic start. Any suggestions?

  • Where is vmlinux on my Ubuntu installation?

    - by Jason Baker
    I'm trying to get started with OProfile, and I'm running into a problem at this step:

        opcontrol --vmlinux=/path/to/vmlinux

    Ubuntu has no package called vmlinux, and when I do a locate vmlinux, I get a lot of files:

        /usr/src/linux-headers-2.6.28-14/arch/h8300/boot/compressed/vmlinux.lds
        /usr/src/linux-headers-2.6.28-14/arch/m68k/kernel/vmlinux-std.lds
        /usr/src/linux-headers-2.6.28-14/arch/m68k/kernel/vmlinux-sun3.lds
        /usr/src/linux-headers-2.6.28-14/arch/mn10300/boot/compressed/vmlinux.lds
        /usr/src/linux-headers-2.6.28-14/arch/sh/boot/compressed/vmlinux_64.lds
        /usr/src/linux-headers-2.6.28-14/arch/x86/boot/compressed/vmlinux_32.lds
        /usr/src/linux-headers-2.6.28-14/arch/x86/boot/compressed/vmlinux_64.lds
        /usr/src/linux-headers-2.6.28-14/include/asm-generic/vmlinux.lds.h
        /usr/src/linux-headers-2.6.28-15/arch/h8300/boot/compressed/vmlinux.lds
        /usr/src/linux-headers-2.6.28-15/arch/m68k/kernel/vmlinux-std.lds
        /usr/src/linux-headers-2.6.28-15/arch/m68k/kernel/vmlinux-sun3.lds
        /usr/src/linux-headers-2.6.28-15/arch/mn10300/boot/compressed/vmlinux.lds
        /usr/src/linux-headers-2.6.28-15/arch/sh/boot/compressed/vmlinux_64.lds
        /usr/src/linux-headers-2.6.28-15/arch/x86/boot/compressed/vmlinux_32.lds
        /usr/src/linux-headers-2.6.28-15/arch/x86/boot/compressed/vmlinux_64.lds
        /usr/src/linux-headers-2.6.28-15/include/asm-generic/vmlinux.lds.h
        /usr/src/linux-headers-2.6.28-16/arch/h8300/boot/compressed/vmlinux.lds
        /usr/src/linux-headers-2.6.28-16/arch/m68k/kernel/vmlinux-std.lds
        /usr/src/linux-headers-2.6.28-16/arch/m68k/kernel/vmlinux-sun3.lds
        /usr/src/linux-headers-2.6.28-16/arch/mn10300/boot/compressed/vmlinux.lds
        /usr/src/linux-headers-2.6.28-16/arch/sh/boot/compressed/vmlinux_64.lds
        /usr/src/linux-headers-2.6.28-16/arch/x86/boot/compressed/vmlinux_32.lds
        /usr/src/linux-headers-2.6.28-16/arch/x86/boot/compressed/vmlinux_64.lds
        /usr/src/linux-headers-2.6.28-16/include/asm-generic/vmlinux.lds.h

    Which one of these is the one I'm looking for?
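
    For what it's worth (my note, not part of the question): none of those are it; the .lds files are linker scripts shipped with the kernel headers. Stock Ubuntu kernels don't include an uncompressed vmlinux image, so the usual options are installing a debug-symbol kernel package or telling OProfile to run without kernel symbols:

        # profile user space without kernel symbol resolution
        opcontrol --no-vmlinux
        opcontrol --start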

  • Have I pushed the limits of my current VPS or is there room for optimization?

    - by JRameau
    I am currently on a Media Temple DV server (basic) with 512 MB dedicated RAM; this is a CentOS-based VPS with Plesk and Virtuozzo. My experience with it from day 1 has been bad, and I could only soothe my server issues with several caching band-aids, but my sites are not as small as they were a year ago either, so the issues have worsened. I have 3 Drupal installs running on separate (Plesk) domains; 1 of those installs is a multisite that consists of 5-6 sites, 2 of which bring in actual traffic. Those caching band-aids I mentioned are APC, which seemed to help a lot initially, and Drupal's Boost, which is considered a poor man's Varnish: it makes all my pages static for anonymous users. Last 30-day combined estimate on Google Analytics: 90k visitors, 260k pageviews.

    Issue: a lot of downtime. I am continually checking if my sites are up, and lately I have been finding them down more than 3 times daily. Restarting Apache will bring them back up, for some time. I have googled every error message and looked up ways to optimize my DV server, and I am beyond stumped as to my next move. Is this server bad? Have I hit an impossibly low restriction, such as the 12 MB kernel memory barrier (kmemsize)? Is it on my end? Do I need to optimize some more? I have provided as much information as I can below; any help or suggestions will be appreciated.

    Common error messages I see in the log:

        [error] (12)Cannot allocate memory: fork: Unable to fork new process
        [error] make_obcallback: could not import mod_python.apache.\n Traceback (most recent call last):
          File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line 21, in ?  import traceback
          File "/usr/lib/python2.4/traceback.py", line 3, in ?  import linecache
          ImportError: No module named linecache
        [error] python_handler: no interpreter callback found.
        [warn-phpd] mmap cache can't open /var/www/vhosts/***/httpdocs/*** - Too many open files in system (pid ***)
        [alert] Child 8125 returned a Fatal error... Apache is exiting!
        [emerg] (43)Identifier removed: couldn't grab the accept mutex
        [emerg] (22)Invalid argument: couldn't release the accept mutex

    cat /proc/user_beancounters:

        Version: 2.5
           uid  resource       held      maxheld    barrier     limit       failcnt
        41548:  kmemsize       4582652   5306699    12288832    13517715    21105036
                lockedpages    0         0          600         600         0
                privvmpages    38151     42676      229036      249036      0
                shmpages       16274     16274      17237       17237       2
                dummy          0         0          0           0           0
                numproc        43        46         300         300         0
                physpages      27260     29528      0           2147483647  0
                vmguarpages    0         0          131072      2147483647  0
                oomguarpages   27270     29538      131072      2147483647  0
                numtcpsock     21        29         300         300         0
                numflock       8         8          480         528         0
                numpty         1         1          30          30          0
                numsiginfo     0         1          1024        1024        0
                tcpsndbuf      648440    675272     2867477     4096277     1711499
                tcprcvbuf      301620    359716     2867477     4096277     0
                othersockbuf   4472      4472       1433738     2662538     0
                dgramrcvbuf    0         0          1433738     1433738     0
                numothersock   12        12         300         300         0
                dcachesize     0         0          2684271     2764800     0
                numfile        3447      3496       6300        6300        3872
                dummy          0         0          0           0           0
                dummy          0         0          0           0           0
                dummy          0         0          0           0           0
                numiptent      14        14         200         200         0

    top (in January the load average was really high, 3-10; I was able to bring it down to where it currently is by giving APC more memory to play with):

        top - 16:46:07 up 2:13, 1 user, load average: 0.34, 0.20, 0.20
        Tasks: 40 total, 2 running, 37 sleeping, 0 stopped, 1 zombie
        Cpu(s): 0.3% us, 0.1% sy, 0.0% ni, 99.7% id, 0.0% wa, 0.0% hi, 0.0% si
        Mem: 916144k total, 156668k used, 759476k free, 0k buffers
        Swap: 0k total, 0k used, 0k free, 0k cached

    MySQLTuner (after optimizing every table and repairing any table with overhead, I got the fragmented count down to 86):

        [--] Data in MyISAM tables: 285M (Tables: 1105)
        [!!] Total fragmented tables: 86
        [--] Up for: 2h 44m 38s (409K q [41.421 qps], 6K conn, TX: 1B, RX: 174M)
        [--] Reads / Writes: 79% / 21%
        [--] Total buffers: 58.0M global + 2.7M per thread (100 max threads)
        [!!] Query cache prunes per day: 675307
        [!!] Temporary tables created on disk: 35% (7K on disk / 20K total)

  • email dropbox between two mutually untrusted sites

    - by user52874
    I've an interesting problem that I thought was straightforward, but it turns out I was whistling down the wrong path. It has to do with (shudder) email. I thought I was done with needing to know about email guts ten years ago; I was wrong. Anyway. Simply put, I need to figure out how to relay outgoing email that is not addressed to our domain from our domain into a 'dropbox' in a DMZ, so the Other Guys can retrieve that email from their side of the DMZ and distribute it accordingly, even out to the public internet if need be. There will be no [un-established] traffic coming back to our side from anywhere; any attempts to do so are dropped with malicious prejudice. Our side is Postfix running on Scientific Linux 6.1. The DMZ boxes are Red Hat 5.4. The Other Guys are MS Exchange. The firewalls are set up such that data can go from our side down-security to the DMZ, but not up-security from the DMZ into our side. The same applies to the Other Guys. My first thought was simply to set up Postfix on a box in the DMZ and tell them to set up fetchmail or whatever the MS equivalent is, but then I started remembering that Postfix wants to actively relay email onwards, rather than hold it and wait for someone to 'reach in' and retrieve it. I'm not sure I've explained this well, but hopefully it's clear enough that someone can point me in the right direction. I seem to remember having done this before, but it was a looong time ago. Thanks!
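
    One shape this could take, sketched under my own assumptions rather than as a known-good recipe: let the DMZ Postfix treat the partner domains as final destinations, delivering into local mailboxes instead of relaying onward, and run a POP3/IMAP daemon such as Dovecot beside it so the far side can 'reach in' and pull:

        # main.cf fragment on the DMZ box (domain names are placeholders)
        mydestination = otherguys.example, mail.otherguys.example
        home_mailbox = Maildir/
        # deliberately no relayhost: mail terminates here and waits to be fetched

    On the Exchange side, a POP3 connector or a scheduled fetch job would then drain those mailboxes.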

  • Sharing disk volumes across OpenVZ guests to reduce Package Management Overhead

    - by andyortlieb
    Is it feasible to create a single "master" OpenVZ guest which would only be used for package management, and use something like mount --bind on several other OpenVZ guests to, in effect, trick them into using the environment installed by the master guest? The point of this would be that users can maintain their own containers yet stay in sync with the master development environment, so they'll always have the latest and greatest requirements without worrying too much about system administration. If they need to install their own packages, they could put them in /opt or /usr/local (or set a path to their home directory). To rephrase: I would like several (developers', for example) OpenVZ guests whose /bin, /usr (and so on) actually refer to the same disk location as that of a master OpenVZ guest, which can be started up to install and update common packages for the environment shared by this group of guests. For what it's worth, we're running Debian 6.

    Edit: I have tried mounting (bind, and read-only) /bin, /lib, /sbin and /usr in this fashion, and it refuses to start the containers, stating that files are already mounted or otherwise in use:

        Starting container ...
        vzquota : (error) Quota on syscall for id 1102: Device or resource busy
        vzquota : (error) Possible reasons:
        vzquota : (error)   - Container's root is already mounted
        vzquota : (error)   - there are opened files inside Container's private area
        vzquota : (error)   - your current working directory is inside Container's
        vzquota : (error)     private area
        vzquota : (error) Currently used file(s):
        /var/lib/vz/private/1102/sbin
        /var/lib/vz/private/1102/usr
        /var/lib/vz/private/1102/lib
        /var/lib/vz/private/1102/bin
        vzquota on failed [3]

    If I unmount these four volumes and start the guest, and then mount them after the guest has started, the guest never sees them mounted.
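
    A direction worth testing (a sketch only; the master container ID is made up, and whether vzquota tolerates this depends on the setup): OpenVZ runs a per-container hook script, /etc/vz/conf/$VEID.mount, after the container root is mounted but before it starts, which is exactly the window the manual mounts above are missing. The binds should also target the mounted root, not the private area the error lists:

        #!/bin/bash
        # /etc/vz/conf/1102.mount  (1101 as the master guest is hypothetical)
        source /etc/vz/vz.conf
        source ${VE_CONFFILE}
        MASTER=/var/lib/vz/root/1101          # the master guest's *mounted* root
        for d in bin lib sbin usr; do
            mount -n --bind ${MASTER}/$d ${VE_ROOT}/$d
            mount -n -o remount,ro,bind ${VE_ROOT}/$d   # make it read-only
        done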

  • Copying symbolic links and filenames with special characters to NAS

    - by Mr E
    I have a new Western Digital My Book Live NAS. I am trying to copy files from an old drive to the NAS. I'm using Ubuntu 12.04 and I've mounted the drive by browsing the network in Nautilus and choosing a shared folder configured on the NAS. The shared folder is then automatically mounted at .gvfs/files on mybooklive. There are two problems so far:

    1. File names and directory names containing certain characters (e.g. : or |). Attempting to copy these results in the error message:

        cp: cannot stat `/path/to/destination.filename': Invalid argument

    2. Symbolic links. In Nautilus I get the error message:

        Symlinks not supported by backend

    My questions are:

    - Can I connect to the NAS, or configure the NAS, so that I can copy my files without this problem? (In case it matters, I don't need Windows compatibility.)
    - If not, what can I do to identify all the problem files?
    - Can I do anything to automatically fix my filenames?

    Please let me know if any of this needs clarification. I'm not too familiar with all of this, so I may have left out some useful information.
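
    For finding and fixing names ahead of the copy, a sketch (paths are placeholders; this assumes Ubuntu's Perl-based rename and that ':' and '|' are the main offenders):

        # list names containing characters SMB shares typically reject
        find /media/olddrive -name '*[:|<>?"*\\]*'

        # swap ':' and '|' for '-' in place, deepest paths first
        find /media/olddrive -depth -name '*[:|]*' -execdir rename 's/[:|]/-/g' {} +

    For the symlinks, copying over a protocol that understands them, e.g. rsync over SSH if the NAS has SSH enabled, sidesteps the gvfs/SMB backend limitation.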

  • Apache + Codeigniter + New Server + Unexpected Errors

    - by ngl5000
    Alright, here is the situation: I used to have my CodeIgniter site at Bluehost, where I did not have root access; I have since moved that site to Rackspace. I have not changed any of the PHP code, yet there has been some unexpected behavior.

    Unexpected behavior:

    - http://mysite.com/robots.txt: both old and new resolve to the robots file.
    - http://mysite.com/robots.txt/ : the old Bluehost setup resolves to my CodeIgniter 404 error page. The Rackspace config resolves to:

        Not Found
        The requested URL /robots.txt/ was not found on this server.

      This instance leads me to believe that there could be a problem with my mod_rewrite rules, or lack thereof. The first case produces the error correctly through PHP, while in the second it seems the server handles the error itself.

    The next instance of this problem is even more troubling: 'http://mysite.com/search/term/9 x 1-1%2F2 white/'. The new site results in:

        Bad Request
        Your browser sent a request that this server could not understand.

    The old site results in the actual page being loaded and the search term being unencoded. I have to assume this has something to do with the fact that when I went to the new server, I went from a root-level .htaccess file to httpd.conf plus the virtual-host files default and default-ssl. Here they are:

    default file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName mysite.com
            DocumentRoot /var/www
            <Directory />
                Options +FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www>
                Options -Indexes +FollowSymLinks -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all

                RewriteEngine On
                RewriteBase /

                # force no www. (also does the IP thing)
                RewriteCond %{HTTPS} !=on
                RewriteCond %{HTTP_HOST} !^mysite\.com [NC]
                RewriteRule ^(.*)$ http://mysite.com/$1 [R=301,L]

                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]

                # index.php: remove any index.php parts
                RewriteCond %{THE_REQUEST} /index\.(php|html)
                RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

                # codeigniter direct
                RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
                RewriteRule ^.*$ index.php [L]
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    default-ssl file:

        <IfModule mod_ssl.c>
        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            ServerName mysite.com
            DocumentRoot /var/www
            <Directory />
                Options +FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www>
                Options -Indexes +FollowSymLinks -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all

                RewriteEngine On
                RewriteBase /
                RewriteCond %{SERVER_PORT} !^443
                RewriteRule ^ https://mysite.com%{REQUEST_URI} [R=301,L]

                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]

                # index.php: remove any index.php parts
                RewriteCond %{THE_REQUEST} /index\.(php|html)
                RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

                # codeigniter direct
                RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
                RewriteRule ^.*$ index.php [L]
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

            # SSL Engine Switch: Enable/Disable SSL for this virtual host.
            SSLEngine on

            # Use our self-signed certificate by default
            SSLCertificateFile /etc/apache2/ssl/certs/www.mysite.com.crt
            SSLCertificateKeyFile /etc/apache2/ssl/private/www.mysite.com.key

            # [the remainder is the stock Debian default-ssl boilerplate: long
            # documentation comments and commented-out examples for
            # SSLCertificateChainFile, SSLCACertificatePath/File,
            # SSLCARevocationPath/File, SSLVerifyClient/SSLVerifyDepth,
            # SSLRequire and SSLOptions, unchanged from the distribution file]

            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>

            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            # MSIE 7 and newer should be able to use keepalive
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
        </VirtualHost>
        </IfModule>

    httpd.conf file: just a lot of stuff from HTML5 Boilerplate; I will post it if need be.

    Old .htaccess file:

        <IfModule mod_rewrite.c>
            # index.php: remove any index.php parts
            RewriteCond %{THE_REQUEST} /index\.(php|html)
            RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

            RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
            RewriteRule ^(.*)/$ /$1 [r=301,L]

            # codeigniter direct
            RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
            RewriteRule ^(.*)$ /index.php/$1 [L]
        </IfModule>

    Any help would be hugely appreciated!!
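
    Two hedged thoughts, not confirmed fixes. First, the Bad Request on the '9 x 1-1%2F2 white' URL looks like Apache rejecting the encoded slash (%2F), which it does by default at the server/vhost level, something the old shared-hosting config may have had enabled:

        # inside each <VirtualHost>: pass %2F through to CodeIgniter
        # instead of rejecting the request with 400
        AllowEncodedSlashes On

    Second, on robots.txt/: the old rules rewrote unmatched URLs to /index.php/$1, so CodeIgniter rendered its own 404, while the new set tests $0 and rewrites to plain index.php, so trailing-slash variants now fall through to Apache's own 404 instead; diffing how each rule set treats 'robots.txt/' is where I'd start.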
