Search Results

Search found 64048 results on 2562 pages for 'http post'.

  • Is there an IE8 setting or policy to make it work like IE7 with respect to persistent connections?

    - by Stephen Pace
    I am working with a commercial application running on XP using IIS 5.1. Periodically the application returns an IIS error: "There are too many people accessing the Web site at this time." This is caused by Microsoft artificially limiting the number of connections (10) under IIS 5.1 on Windows XP, but in this case there is really only one user (albeit with a few tabs open at a time).

    Microsoft suggests you can reduce the problem by turning off HTTP keep-alives for that particular web site (http://support.microsoft.com/kb/262635):

        If you use IIS 5.0 on Windows 2000 Professional or IIS 5.1 on Microsoft
        Windows XP Professional, disable HTTP keep-alives in the properties of
        the Web site. When you do this, a limit of 10 concurrent connections
        still exists, but IIS does not maintain connections for inactive users.

    I may do that; however, I'm worried about performance degradation. I also notice that IE8 appears to handle this differently than IE7: by default, IE6 and IE7 use 2 persistent connections per server while IE8 uses 6. Perhaps IE8 itself is generating multiple connections in an attempt to be faster, and those additional connections are overwhelming the artificially limited IIS 5.1 on XP?

    Assuming that is the case, is there an Internet Explorer option, registry setting, or policy I can set to force IE8 to behave like IE7 with respect to persistent connections? I would not set this for all users, but for the small number of users that use this application, it might solve their intermittent problem until the application can be rehosted on Windows Server 2008. Thanks.
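    If IE8 honors the classic per-user connection limits, a registry sketch like the one below should cap it at 2 persistent connections; the FeatureControl key is an alternative some IE8 write-ups recommend. Both keys are assumptions to verify on a test machine, not confirmed fixes:

        Windows Registry Editor Version 5.00

        ; classic per-user limits documented for IE6/IE7 (KB282402)
        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
        "MaxConnectionsPerServer"=dword:00000002
        "MaxConnectionsPer1_0Server"=dword:00000002

        ; per-process override that IE8 reportedly consults
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_MAXCONNECTIONSPERSERVER]
        "iexplore.exe"=dword:00000002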

  • CNAME redirect to Wordpress blog not working (.htaccess problem)

    - by Vincent Chan
    I have two domains hosted on two different servers: domain1.com and domain2.com. I would like to forward "blog.domain1.com" to "blog2.domain2.com", which is a Wordpress blog, using a CNAME redirect.

    Before I installed Wordpress, blog.domain1.com redirected to blog2.domain2.com/index.htm and worked fine: the browser kept the URL (http://blog.domain1.com) even though index.htm was on the domain2.com server. However, after I installed Wordpress, the browser changes the URL to http://blog2.domain2.com.

    This is my current setup. On domain1.com DNS: blog.domain1.com CNAME redirect to domain2.com. On domain2.com, .htaccess:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^blog\.domain1\.com
        RewriteRule ^(.*)$ http://blog2.domain2.com/$1 [R=301,L]

    On blog2.domain2.com, .htaccess:

        DirectoryIndex index.php
        <IfModule mod_rewrite.c>
        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /blog2/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /blog2/index.php [L]
        </IfModule>

    blog2.domain2.com is installed under domain2.com/blog2/. All I want to do is keep the URL (blog.domain1.com) unchanged for the whole Wordpress redirect. Thanks a lot.
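    One reason the URL changes is the [R=301] flag, which explicitly tells the browser to fetch the new address. A hedged alternative is to proxy instead of redirect, assuming mod_proxy is enabled on the domain2.com server; Wordpress would also need its "siteurl" and "home" options set to http://blog.domain1.com so the links it generates match:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^blog\.domain1\.com [NC]
        # [P] serves the content through mod_proxy instead of redirecting,
        # so the address bar keeps blog.domain1.com
        RewriteRule ^(.*)$ http://blog2.domain2.com/$1 [P,L]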

  • Scanning website for vulnerabilities

    - by Kristen
    I have found that the local school's website installed a Perl calendar - this was years ago, it has not been used for ages, but Google has it indexed (which is how I found it) and it is full of Viagra links and the like ... the program was by Matt Kruse; here are details of the exploit: http://www.securiteam.com/exploits/5IP040A1QI.html

    I've got the school to remove that, but I think they also have MySQL installed, and I'm aware that out-of-the-box there have been some exploits of admin tools / logins in old versions. For all I know they also have PHPBB and the like installed ... The school is just using some cheap, shared hosting; the HTTP response header I get is:

        Apache/1.3.29 (Unix) (Red-Hat/Linux) Chili!Soft-ASP/3.6.2 mod_ssl/2.8.14 OpenSSL/0.9.6b PHP/4.4.9 FrontPage/5.0.2.2510

    I'm looking for some means of checking if they have other junk installed (quite possibly from way back, and now unused) that might put the site at risk. I'm more interested in something that can scan for things like the MySQL admin exploit rather than open ports etc. My guess is that they have little control over the hosting space that they have - but I'm a Windows dev, so this *nix stuff is all Greek to me.

    I found http://www.beyondsecurity.com/ which looks like it might do what I want (within their evaluation :) ) but I worry about how to find out if they are well known / honest - otherwise I will be tipping them a wink with a domain name that may be at risk! Many thanks.
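    As a sketch of the kind of check described here: Nikto is a widely used open-source web scanner that looks for known-vulnerable scripts, stale admin consoles, and dated server software rather than open ports. Run it only with the school's permission; the hostname below is a placeholder:

        # scan a single vhost for known-vulnerable web apps and files
        nikto -h http://school-site.example.org
        # optionally write an HTML report
        nikto -h http://school-site.example.org -o report.html -Format html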

  • Workstations cannot see new MS Server 2008 domain, but can access DHCP. (solved)

    - by Radix
    The XP Pro workstations do not see the new replacement domain upon boot; they only see their cached entry for the old (Server 2003) domain controller. The old server is not connected to the network. I have DHCP working with the same scope as the old server. In my before-asking search for a solution I came across the following two articles, and I recall doing things as suggested by them:

        http://www.windowsreference.com/windows-server-2008/how-to-setup-dhcp-server-in-windows-server-2008-step-by-step-guide/
        http://www.windowsreference.com/windows-server-2008/step-by-step-guide-for-windows-server-2008-domain-controller-and-dns-server-setup/

    The only possible issue is: I was under the impression that the domain NetBIOS name needed to match the DC's NetBIOS name. The DC NetBIOS name is city01 while the domain's FQDN is city.domain.org (I think this is mistaken and should have been just domain.org).

    But the second link led me to a post which I believe answers my question. I did as they instructed by opening Local Area Connection Properties, then selecting TCP/IPv4 and setting the sole preferred DNS server to the local host's static IP (10.10.1.1). Search for "Your problems should clear up" for the post I'm referencing: http://forums.techarena.in/active-directory/1032797.htm

    Have I misunderstood their instructions? I am hoping to reach the point where I can define users and user groups. Also, does TechNet have a single theoretical overview document I could read? I really don't like treating computers as magic. I will be watching this closely and will quickly answer any questions. If I've left anything out it is because I did not know it was needed.

    PS: I am loath to ask obviously basic questions, but I am tired and wish to fix this before tomorrow. Also, this is my first server installation, thank you for your help.
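    The usual gotcha in this situation is that the workstations themselves must use the new DC as their only DNS server, whether via the DHCP scope's DNS option or statically. A quick hedged check/fix from an XP workstation's command prompt (the adapter name may differ):

        rem show which DNS servers the workstation is actually using
        ipconfig /all

        rem point the adapter at the new DC for DNS, then clear the cache
        netsh interface ip set dns name="Local Area Connection" static 10.10.1.1
        ipconfig /flushdns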

  • SQL 2008 SP1 crashing almost daily

    - by matijake
    Hey, almost every day our new DB crashes. It is a virtual server residing on the same hardware as 5 other servers, two of them being identical MS SQL 2008 SP1 boxes and two Oracle 11g's, so I can pretty much rule out hardware issues. The server has a dedicated local LUN, 4 vCPUs and 8GB memory with a 2GB Windows swap file. It runs 4 instances. The primary instance is limited to 5GB memory and parallelism set to 4, running MS SQL 2008 SP1 on Windows Server 2008 Enterprise R2 x64. Only that primary instance is crashing. After it crashes nothing can connect to it; it's even impossible to shut it down through the service manager. What I found in the logs is:

        ***Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\LOG\SQLDump0081.txt
        SqlDumpExceptionHandler: Process 4788 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.

    The whole log can be seen at http://kabl.org/files/SQLDump0081.txt and a second crash log, made a second later, at http://kabl.org/files/SQLDump0082.txt. I have analyzed the mini crashdump with Microsoft tools, but with no promising results. If it can help, here it is: http://kabl.org/files/SQLDump0081.mdmp

    Any ideas are greatly welcome, since it is becoming quite a pain in the ass to restart the server almost every day :) Regards, -Matija
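    Access violations in the engine are sometimes triggered by database corruption, so one cheap thing to rule out before opening a support case is a consistency check on each database of the primary instance. A hedged T-SQL sketch (the database name is a placeholder):

        -- report all consistency errors without the informational noise
        DBCC CHECKDB (YourDatabase) WITH NO_INFOMSGS, ALL_ERRORMSGS;

        -- and confirm the exact build, since many c0000005 bugs are fixed
        -- in later cumulative updates
        SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('ProductLevel');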

  • Mac OS X Lion Apache Server not Found

    - by Burak Erdem
    After upgrading to Lion 10.7.2 today, my Apache virtual hosts are not working anymore. When I go to http://XYZ.localhost, it says "server not found". I am using Apache on Mac OS X Lion and until today it was working fine. I can access http://localhost but I can't access http://XYZ.localhost

    My /etc/hosts file is like below:

        127.0.0.1 XYZ.localhost

    My /etc/apache2/extra/httpd-vhosts.conf file is like below:

        <VirtualHost *:80>
            ServerName XYZ.localhost
            DocumentRoot /Library/WebServer/Documents/XYZ
            <Directory /Library/WebServer/Documents/XYZ>
                DirectoryIndex index.php
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I think I once had this problem too, after another OS X update, but I can't remember how I solved it. Is it a user permission issue? Or is there something wrong with Apache or some other setting?

    EDIT: It seems like my /etc/hosts file is not working correctly. Even if I add something like "127.0.0.1 apple.com", it still goes to the real apple.com. Maybe this might help to solve the problem.
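    Since the EDIT points at /etc/hosts being ignored rather than at Apache, a hedged first step is to flush Lion's DNS cache and then ask the system resolver (which is what consults /etc/hosts) directly:

        # flush the DNS cache on 10.7
        sudo killall -HUP mDNSResponder

        # query through the system resolver; this should print 127.0.0.1
        dscacheutil -q host -a name XYZ.localhost

    If dscacheutil still returns the wrong answer, check /etc/hosts for invisible characters or a wrong encoding, since a single stray byte can make the whole file unparseable.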

  • ddclient to update namecheap subdomain?

    - by LF4
    I have a subdomain that I want to update with ddclient. I configured ddclient to get the IP from dyndns, but it's not updating the subdomain on namecheap. They said to use yourdomain.com as the login instead of my actual domain. Has anyone been able to get namecheap DNS updated with ddclient? I'm running CentOS 6.2 with ddclient 3.7.3. When I run ddclient I get the following:

        CONNECT:  checkip.dyndns.org
        CONNECTED:  using HTTP
        SENDING:  GET / HTTP/1.0
        SENDING:   Host: checkip.dyndns.org
        SENDING:   User-Agent: ddclient/3.7.3
        SENDING:   Connection: close
        SENDING:
        RECEIVE:  HTTP/1.1 200 OK
        RECEIVE:  Content-Type: text/html
        RECEIVE:  Server: DynDNS-CheckIP/1.0
        RECEIVE:  Connection: close
        RECEIVE:  Cache-Control: no-cache
        RECEIVE:  Pragma: no-cache
        RECEIVE:  Content-Length: 106
        RECEIVE:
        RECEIVE:  <html><head><title>Current IP Check</title></head><body>Current IP Address: IPADD</body></html>
        Use of uninitialized value in string ne at /usr/sbin/ddclient line 1998.
        WARNING:  skipping update of lf4bot from <nothing> to IPADD
        WARNING:   last updated <never> but last attempt on Fri Jun 15 22:46:21 2012 failed.
        WARNING:   Wait at least 5 minutes between update attempts.

    ddclient.conf file:

        daemon=300                          # check every 300 seconds
        syslog=yes                          # log update msgs to syslog
        mail=root                           # mail all msgs to root
        mail-failure=root                   # mail failed update msgs to root
        pid=/var/run/ddclient.pid           # record PID in file.
        ssl=yes                             # use ssl-support.  Works with
        use=web, web=checkip.dyndns.org/, web-skip='IP Address' # found after IP Address
        protocol=namecheap \
        server=dynamicdns.park-your-domain.com \
        login=yourdomain.com \
        password=PASSWORD \
        lf4bot
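    For what it's worth, a commonly posted working stanza for namecheap looks like the sketch below; the two usual gotchas are that login must be the bare domain the host lives under, and password must be the Dynamic DNS password generated in the Namecheap control panel, not the account password. Treat the exact values as assumptions for this setup:

        use=web, web=checkip.dyndns.org/, web-skip='IP Address'
        protocol=namecheap
        server=dynamicdns.park-your-domain.com
        login=yourdomain.com                 # the domain itself
        password=dynamic-dns-password        # from Namecheap's Dynamic DNS page
        lf4bot                               # the host (subdomain) to update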

  • Installing and running a guest OS on KVM-qemu with only serial console access

    - by nixnotwin
    I am trying to install a BSD distro with virt-install. With a Linux distro I used this:

        virt-install -n debian -r 1024 --vcpus=1 --accelerate -v \
            --disk /var/kvm/installation-disks/debian.img,size=6 --nographics \
            --network=bridge:br0,model=ne2k_pci,mac=52:54:00:66:68:09 \
            -l http://ftp.de.debian.org/debian/dists/squeeze/main/installer-amd64/current/images/ \
            -x console=ttyS0,115200

    This loads the installer directly from the online mirror. With Fedora I used this mirror: http://www.nic.funet.fi/pub/mirrors/fedora.redhat.com/pub/fedora/linux/releases/16/Fedora/x86_64/os/

    Are there such mirrors for FreeBSD or OpenBSD? The reason I want directly installable ftp/http mirrors is that I can access my physical server only via ssh, and it doesn't have an X server or a window manager to give me a VNC GUI.

    When I tried installing CentOS 6 with an online mirror I was able to finish the installation via serial console, but after I rebooted it, the serial console never worked for me. I tried everything possible - editing menu.lst, inittab and securetty files. Fedora 16 booted fine from the serial console, but got stuck when it loaded the anaconda installer. I tried editing the FreeBSD iso installation media by adding a serial console option to the boot options, and the installation was successful. But I couldn't boot into it because it wasn't giving console access, and I couldn't edit any files as the UFS partition cannot be mounted with write access on my Ubuntu Server 10.04. Only Debian Squeeze worked well; it worked for me even without editing a single configuration file.

    I want to have CLI versions of Fedora/CentOS and FreeBSD/OpenBSD. But it looks like there isn't any hope for me to have them, as I have to depend on a serial console to do everything.
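    If the FreeBSD install succeeds but the installed system never talks on the serial port, the usual fix is to point the loader and a getty at the serial console before the first reboot (from the installer's shell, for example). A hedged sketch for a FreeBSD guest:

        # /boot/loader.conf -- send loader and kernel output to the first serial port
        boot_serial="YES"
        comconsole_speed="115200"
        console="comconsole"

        # /etc/ttys -- attach a login getty to the serial line
        # (ttyu0 on recent FreeBSD; older releases call it ttyd0)
        ttyu0 "/usr/libexec/getty std.115200" vt100 on secure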

  • Haproxy and CNAME

    - by user123354
    I want to create a simple load balancer for two servers. The problem is with CNAME records, I think. Let's say I have two copies of the same application on AppFog.com: app1.aws.af.cm and app2.aws.af.cm. Here is my haproxy.cfg file:

        global
            maxconn 2000
            daemon
        defaults
            mode http
            clitimeout 60000
            srvtimeout 30000
            contimeout 4000
            option httpclose
        listen http_proxy
            bind [myip]:80
            mode http
            stats enable
            stats auth user:passwd
            stats uri /stats
            balance source
            option httpchk
            option forwardfor
            server host01 app1.aws.af.cm:80 maxconn 300 check
            server host02 app2.aws.af.cm:80 maxconn 300 check

    But this only resolves the IPs for app1.aws.af.cm and app2.aws.af.cm, which obviously doesn't work if I open those IPs in a web browser. The problem is that AppFog doesn't have a public IP per application (same as OpenShift). How do I get HAProxy to make a proper connection between the load balancer and these two servers?

    Example: this is a real app - http://freechat.eu01.aws.af.cm - and HAProxy only resolves the IP for this domain, which is 46.51.204.8:80. Of course this IP will not show my application, only an error page. Sorry for my poor English.
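    Platforms like AppFog route requests by the Host header, so the backend answers with an error page when HAProxy forwards the client's Host instead of the app's own hostname. One hedged sketch (HAProxy 1.4+) is to name each server after its app domain and let http-send-name-header rewrite the Host header to the server's name:

        listen http_proxy
            bind [myip]:80
            mode http
            balance source
            # send the backend server's name as the Host header
            http-send-name-header Host
            server app1.aws.af.cm app1.aws.af.cm:80 maxconn 300 check
            server app2.aws.af.cm app2.aws.af.cm:80 maxconn 300 check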

  • Yum Error Installing Git from kernel.org Repo

    - by Lance
    I want to install the latest version of Git using yum and the RPM repository on kernel.org, but adding the repo to yum.repos.d causes yum to fail with checksum errors. The prevailing solution to this issue seems to be to simply use the repository at Webtatic, as answered on superuser. I know I can also install an older version of Git using the EPEL repo, or compile from the latest source tarball, but honestly I want to understand why I'm having issues with the kernel.org repo. Here's the workflow, after a clean install of CentOS 5.5 and "yum update":

        [root]# wget -P /etc/yum.repos.d/ http://kernel.org/pub/software/scm/git/RPMS/git.repo
        [root]# yum clean all
        [root]# yum repolist
        Loaded plugins: fastestmirror
        Determining fastest mirrors
         * addons: mirrors.netdna.com
         * base: mirror.clarkson.edu
         * epel: serverbeach1.fedoraproject.org
         * extras: centos.mirror.nac.net
         * updates: mirror.cogentco.com
        addons            |  951 B   00:00
        addons/primary    |  202 B   00:00
        base              | 2.1 kB   00:00
        base/primary_db   | 1.6 MB   00:01
        epel              | 3.7 kB   00:00
        epel/primary_db   | 2.8 MB   00:01
        extras            | 2.1 kB   00:00
        extras/primary_db | 188 kB   00:00
        git               | 1.2 kB   00:00
        git/primary       | 155 kB   00:00
        http://www.kernel.org/pub/software/scm/git/RPMS/i386/repodata/primary.xml.gz: [Errno -3] Error performing checksum
        Trying other mirror.
        git/primary       | 155 kB   00:00
        http://www.kernel.org/pub/software/scm/git/RPMS/i386/repodata/primary.xml.gz: [Errno -3] Error performing checksum
        Trying other mirror.
        Error: failure: repodata/primary.xml.gz from git: [Errno 256] No more mirrors to try.

    Any suggestions as to a solution, or details on why the kernel.org repo has this issue? (Sorry I can't include more links to my references, but I don't have the reputation for that yet.)
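    A plausible explanation, offered as an assumption rather than a confirmed diagnosis: "[Errno -3] Error performing checksum" on CentOS 5 typically means the repo's metadata was generated with SHA-256 checksums by a newer createrepo, which the Python 2.4-based yum cannot verify without the hashlib backport. Since EPEL is already enabled here, it may be as simple as:

        yum install python-hashlib
        yum clean all
        yum repolist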

  • Can't add a SQL Server user: The login already has an account under a different user name.

    - by Zian Choy
    Environment: SQL Server 2005 Express, Windows 7.

    When I installed SQL Server, I followed the instructions at http://msdn.microsoft.com/en-us/library/aa905868.aspx to set my computer's admin account as the SQL Server admin. However, when I try to access a database on my computer through Visual Studio 2008, I get the following error message:

        The database 'Parkinsons' does not exist or you do not have permission
        to see it. Would you like to attempt to create it?

    Then, if I go to SQL Server and add a user to that database, I get the following error message:

        Create failed for User 'zian'. (Microsoft.SqlServer.Express.Smo)
        An exception occurred while executing a Transact-SQL statement or batch.
        (Microsoft.SqlServer.Express.ConnectionInfo)
        The login already has an account under a different user name.
        (Microsoft SQL Server, Error: 15063)

    Why doesn't VS piggyback on the dbo account? If the dbo account is unusable, then why won't SQL Server let me make an account so that I can access my own data?
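    Error 15063 usually means the login is already mapped into the database - typically as dbo, because that login owns it - so creating a second user for the same login fails. A hedged T-SQL sketch to confirm the existing mapping, plus one way out (the machine and login names are placeholders):

        USE Parkinsons;
        -- show which database user each server login is mapped to
        SELECT dp.name AS database_user, sp.name AS server_login
        FROM sys.database_principals dp
        JOIN sys.server_principals sp ON dp.sid = sp.sid;

        -- if the admin login shows up as dbo, either connect as-is (dbo already
        -- has full rights), or transfer ownership and then create the user:
        ALTER AUTHORIZATION ON DATABASE::Parkinsons TO [sa];
        CREATE USER [zian] FOR LOGIN [YOURMACHINE\zian];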

  • Allow incoming connections on Windows Server 2008 R2

    - by Richard-MX
    Good day people. First, I'm new to Windows Server. I've always used the Linux/Apache combo, but my client has an AWS EC2 Windows Server 2008 R2 instance and he wants everything in there. I'm working with IIS and PHP enabled as FastCGI and everything is working, but I can't see the websites stored in it from the internet. The public DNS that AWS gave us for that instance is:

        http://ec2-XX-XXX-XXX-121.us-west-2.compute.amazonaws.com/

    But if I copy-paste that address, I get nothing - no IIS logo or anything like that. My common sense tells me that maybe the firewall could be blocking access. Can anyone tell me where to enable some rules to get this thing working? I don't want to start enabling rules at random and make the system insecure. If you need any additional info, ask me and I will provide it. Thanks in advance.

    UPDATE: Amazon EC2 displays this:

        Public DNS: ec2-XX-XXX-XXX-121.us-west-2.compute.amazonaws.com
        Private DNS: ip-XX-XXX-XX-252.us-west-2.compute.internal
        Private IPs: XX.XXX.XX.25

    In my test micro instance, I just use the Public DNS address (the one that starts with "ec2") and it works like a charm (of course, the micro instance has its own Public DNS; I'm not assuming the same address for both instances...). However, for the large instance, I tried to do the same. I set up everything as in the micro instance, but if I use the Public DNS, it doesn't load anything. I'm suspicious about the Windows Firewall, but the HTTP related stuff is enabled. What should I do to get access to the large instance? I don't want to set up the domain yet; I want access from an amazon url.

    2ND EDIT: all fixed. Charles pointed out that maybe Security Groups were not properly set up for the instance. He was right. I just added the HTTP service to the rules and all works good.
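    For reference, the fix from the 2ND EDIT amounts to an ingress rule on the instance's security group. A hedged sketch with the era-appropriate EC2 API tools (the group name is a placeholder; the same rule can be added in the AWS console under Security Groups):

        # allow inbound HTTP from anywhere to the instance's security group
        ec2-authorize my-security-group -P tcp -p 80 -s 0.0.0.0/0
        # and HTTPS, if needed
        ec2-authorize my-security-group -P tcp -p 443 -s 0.0.0.0/0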

  • SQUID proxy - open FTP (and other ports)

    - by gaffcz
    How can I open ports other than HTTP and HTTPS using the Squid proxy? I have the latest version of Squid running on Fedora 10, but I'm not able to open the FTP port. Part of my squid.conf:

        acl manager proto cache_object
        acl localhost src 127.0.0.1/32 ::1
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
        acl ftp proto FTP
        acl ftp_port port 21
        always_direct allow FTP
        acl SSL_ports port 443 20 21 22
        acl Safe_ports port 20          # ftp
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 22          # sftp
        acl Safe_ports port 80          # http
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 443         # https
        acl Safe_ports port 1025-65535  # unregistered ports
        acl CONNECT method CONNECT
        http_access allow manager localhost
        http_access deny manager
        # USER privileges (encoded in file passwd)
        auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
        acl AUTHUSERS proxy_auth REQUIRED
        # BLACKLIST (in file denied.conf)
        acl denied_domains dstdomain "/etc/squid/DNDdomains.conf"
        acl denied_regex url_regex "/etc/squid/DNDregex.conf"
        http_access deny denied_regex
        http_access deny denied_domains
        http_access allow AUTHUSERS
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow ftp_port CONNECT
        http_access allow ftp
        http_access allow localhost
        http_access deny all
        #http_reply_access allow all
        #http_access allow all
        http_port 3128
        hierarchy_stoplist cgi-bin ?
        cache_dir ufs /var/spool/squid 10000 16 256
        coredump_dir /var/spool/squid
        refresh_pattern ^ftp:             1440  20%  10080
        refresh_pattern -i (/cgi-bin/|\?) 0     0%   0
        refresh_pattern .                 0     20%  4320

    I've tried to add "acl ftp proto FTP" / "acl ftp_port port 21" with "http_access allow ftp", to add/remove ports 20 and 21 from the SSL_ports list, and to set iptables rules, but nothing helped. Is it even possible to use a recent version of Squid for FTP transfers?
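    One point worth separating out, hedged as general Squid behavior rather than a diagnosis of this exact config: Squid is an HTTP proxy that can fetch ftp:// URLs on behalf of a browser, but it does not speak the native FTP protocol, so standalone FTP clients cannot tunnel through it (they need an FTP-aware proxy such as frox, or a CONNECT tunnel if the client supports it). For browser-driven FTP, a minimal sketch is just:

        # allow authenticated users to fetch ftp:// URLs through Squid
        acl ftp proto FTP
        http_access allow ftp AUTHUSERS

    placed before the final "http_access deny all", since Squid stops at the first matching rule.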

  • MySQL - InnoDB - is in the future! Current system log sequence number

    - by Ward Loockx
    I have this error in my syslog. I restored an older dump to solve the problem after some reading, and now the errors in syslog are far fewer than before. But still, a few times an hour I get:

        Sep  1 14:23:29 homer mysqld: 120901 14:23:29  InnoDB: Error: page 96637 log sequence number 7 1223357717
        Sep  1 14:23:29 homer mysqld: InnoDB: is in the future! Current system log sequence number 6 647303887.
        Sep  1 14:23:29 homer mysqld: InnoDB: Your database may be corrupt or you may have copied the InnoDB
        Sep  1 14:23:29 homer mysqld: InnoDB: tablespace but not the InnoDB log files. See
        Sep  1 14:23:29 homer mysqld: InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        Sep  1 14:23:29 homer mysqld: InnoDB: for more information.

    (The same message repeats for pages 96638 and 96639, with log sequence numbers 8 150027924 and 7 4208567151.)

    Does anybody know how I can track down which database is causing this issue? And how to fix it?
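    "Log sequence number is in the future" generally means the shared tablespace and the redo logs disagree, and the manual page cited in the log describes the standard escape hatch: start with innodb_force_recovery, dump everything, then rebuild the InnoDB files and reimport. A hedged sketch of that procedure (paths assume a typical Linux package layout):

        # /etc/my.cnf -- temporary; start at 1 and raise toward 6 only as needed
        [mysqld]
        innodb_force_recovery = 1

        # dump everything while the server is up in forced-recovery mode
        mysqldump --all-databases > /root/full-dump.sql

        # stop MySQL, move the inconsistent InnoDB files out of the way
        service mysqld stop
        mv /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile* /root/innodb-backup/

        # remove innodb_force_recovery from my.cnf, start, and reimport
        service mysqld start
        mysql < /root/full-dump.sql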

  • Gvim on Windows 7: ALT codes not working

    - by John Sonderson
    I would like to be able to enter ALT codes in Gvim on Windows 7 as documented on the following site: Alt Codes. On Windows (Windows 7 in my case), to generate a character via an ALT code you make sure that the NumLock key on your keypad is toggled on, hold down the ALT key, enter the keycode on the numeric keypad, and then release the ALT key. However, this does not work in Gvim on Windows (which ignores the fact that I am pressing the ALT key and just prints the entered keypad digits directly to the screen). How can I get these keystroke combinations to work in Gvim as well? Thanks.

    EDIT: As the answer below points out, the way to insert non-ASCII characters for which you have no keys on your keyboard, without changing the keyboard layout, is as follows. Make sure you are in insert mode, and then type CTRL-V followed by the Unicode character code of interest, for instance:

        CTRL-V u00E0   (generates à)
        CTRL-V u00C8   (generates È)
        CTRL-V u00E8   (generates è)
        CTRL-V u00E9   (generates é)
        CTRL-V u00EC   (generates ì)
        CTRL-V u00F2   (generates ò)

    See for instance http://unicode-table.com/ for a full list of Unicode character codes. The following list of Unicode characters by language may also be useful: http://en.wikipedia.org/wiki/List_of_Unicode_characters

    In some cases there is an even easier way to enter special characters (see :help digraphs and :digraphs). For example, while in insert mode you may be able to type the following:

        CTRL-K E!   (yields É)
        CTRL-K a'   (yields á)

    Note that, as http://code.google.com/p/vim/source/browse/runtime/doc/digraph.txt shows, Gvim 7.4 contains an even wider set of default digraphs than Gvim 7.3, thus providing convenience to an even broader set of languages. Regards.
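    If a combination you need is missing from the defaults, Vim also lets you define your own digraphs; a small hedged vimrc sketch (the two-letter codes chosen here are arbitrary):

        " define CTRL-K o e to insert œ (U+0153, decimal 339)
        digraphs oe 339
        " define CTRL-K e u to insert € (U+20AC, decimal 8364)
        digraphs eu 8364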

  • lighttpd with multiple IPs, each with a UCC certificate and many hostnames

    - by Dave
    I'd like to get lighttpd working with UCC certificates, but I can't seem to figure out the correct syntax. Essentially, for each IP address, I have one UCC certificate and a bunch of hostnames:

        $SERVER["socket"] == "10.0.0.1:443" {
            ssl.engine = "enable"
            ssl.ca-file = "/etc/ssl/certs/the.ca.cert.pem"
            ssl.pemfile = "/etc/ssl/private/websitegroup1.com.pem"
            $HTTP["host"] =~ "mywebsite.com" {
                server.document-root = "/var/www/mywebsite.com/htdocs"
            }
        }

    The above works fine for one hostname, but as soon as I try to set up another hostname (note the same SSL cert):

        $SERVER["socket"] == "10.0.0.1:443" {
            ssl.engine = "enable"
            ssl.ca-file = "/etc/ssl/certs/the.ca.cert.pem"
            ssl.pemfile = "/etc/ssl/private/websitegroup1.com.pem"
            $HTTP["host"] =~ "anotherwebsite.com" {
                server.document-root = "/var/www/anotherwebsite.com/htdocs"
            }
        }

    ...I get this error:

        Duplicate config variable in conditional 6 global/SERVERsocket==10.0.0.1:443: ssl.engine

    Is there any way I can add a conditional so that ssl.engine is enabled only if it is not already enabled? Or do I have to put all my $HTTP["host"]s inside the same $SERVER["socket"] (which will make config file management more difficult for me), or is there some entirely different way to do it? This has to be repeated for multiple IPs too (so I'll have a bunch of $SERVER["socket"] == "10.0.0.2:443" etc.), each with one UCC cert and many hostnames.

    Am I going about this the wrong way entirely? My goal is to conserve IP addresses when I have many websites that are related and can share an SSL certificate, but still need their own SSL-accessible version from the appropriate hostname (instead of a single secure.mywebsite.com).
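    For what it's worth, lighttpd evaluates each $SERVER["socket"] block as a single unit, so the usual pattern is indeed one socket block per IP with the ssl.* options set once and all the host conditionals nested inside it. A hedged sketch of that shape:

        $SERVER["socket"] == "10.0.0.1:443" {
            ssl.engine  = "enable"
            ssl.ca-file = "/etc/ssl/certs/the.ca.cert.pem"
            ssl.pemfile = "/etc/ssl/private/websitegroup1.com.pem"

            $HTTP["host"] =~ "mywebsite\.com" {
                server.document-root = "/var/www/mywebsite.com/htdocs"
            }
            $HTTP["host"] =~ "anotherwebsite\.com" {
                server.document-root = "/var/www/anotherwebsite.com/htdocs"
            }
        }

    To keep management bearable, each socket block (one per IP/UCC cert) can live in its own file pulled in with include, e.g. include "/etc/lighttpd/vhosts/websitegroup1.conf".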

  • Microsoft Application Request Routing with Windows Authentication

    - by theplatz
    I'm running into a problem trying to get Windows Authentication working in an environment that uses Microsoft Application Request Routing, and I was hoping someone might be able to help. The problem is that only some requests are authenticated, while others fail with 401 errors. I have followed the "Special Case of Running IIS 7.0 in a Web Farm" instructions found at http://blogs.msdn.com/b/webtopics/archive/2009/01/19/service-principal-name-spn-checklist-for-kerberos-authentication-with-iis-7-0.aspx to no avail. My current server setup looks like the following:

    ARR:
        - Two servers set up with IIS shared configuration, using IIS 7.5 on Windows 2008 R2
        - Anonymous authentication turned on for the Default Web Site

    Web farm:
        - Two servers running IIS 7.5 on Windows 2008 R2
        - Three web sites set up using port binding to differentiate between virtual hosts; the ports used are 8000, 8001, and 8002
        - Application pools for Windows Authentication all use a common domain account
        - SPNs added to the domain account for http/<virtualhost-name>:<port-number> and http/<virtualhost-name>.<fully-qualified-domain>:<port-number>

    The IIS logs show the following when authentication is working/failing. If I understand correctly, all requests should show DOMAIN\User_Name:

        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/stylesheets/techweb.landing.css - 8002 DOMAIN\User_Name ARR-HOST-1-IP-ADDRESS 200 0 0 62
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-left.gif - 8002 DOMAIN\User_Name ARR-HOST-IP-ADDRESS 200 0 0 31
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/application-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 3221225581 15
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/building.gif - 8002 DOMAIN\User_Name ARR-HOST-2-IP-ADDRESS 200 0 0 218

    Does anyone know what might cause this problem and how I can resolve it?
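    A common culprit in this layout, offered as a hedged suggestion: with kernel-mode Kerberos, each farm node decrypts tickets with its machine account, which cannot match an SPN registered to the shared domain account, so authentication succeeds or fails depending on which node handles a given request. The usual fix is to make windowsAuthentication use the application pool's credentials on every farm node:

        %windir%\system32\inetsrv\appcmd.exe set config "SiteName" ^
            -section:system.webServer/security/authentication/windowsAuthentication ^
            /useKernelMode:true /useAppPoolCredentials:true /commit:apphost

    and to double-check that the SPNs are registered once, and only once, against the shared account:

        setspn -L DOMAIN\AppPoolAccount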

  • SSL wildcard certificates and trailing 'www'

    - by user173326
    I've got a wildcard SSL certificate for *.mydomain.com. I'm using nginx, redirecting all HTTP traffic to HTTPS, and also rewriting URLs to drop a leading www (if there is one). So it has:

        1) http://subdomain.mydomain.com       ---> https://subdomain.mydomain.com
        2) http://www.subdomain.mydomain.com   ---> https://subdomain.mydomain.com
        3) https://www.subdomain.mydomain.com  ---> https://subdomain.mydomain.com
        4) https://subdomain.mydomain.com      ---> https://subdomain.mydomain.com

    However, since my cert is for *.mydomain.com, case 3 gets an SSL error in Chrome ("This is probably not the site that you are looking for!"), though if you click through, it gets redirected and all is well. I understand why: the initial connection is HTTPS with a www (two levels of subdomains), which doesn't match what is on the wildcard certificate.

    I thought a solution would be to get an additional cert for *.*.mydomain.com to cover www.*.mydomain.com, but it seems that won't work. I spoke to agents from Namecheap and Comodo, and both said *.*.mydomain.com is not possible. I also came across this: https://support.quovadisglobal.com/KB/a60/will-ssl-work-with-multilevel-wildcards.aspx

    Is there a solution to this? To be able to cover www.*.mydomain.com?
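    To make the constraint concrete: the certificate is checked during the TLS handshake, before nginx ever sees the request, so no rewrite can suppress the warning in case 3. The only clean fix is a certificate whose SAN list explicitly includes each www.subdomain.mydomain.com name alongside *.mydomain.com (UCC/multi-domain certs allow this for a known, finite set of subdomains). The redirect itself can then stay as a sketch like this, with the cert paths being placeholders:

        server {
            listen 443 ssl;
            # named capture pulls out the real subdomain
            server_name ~^www\.(?<sub>.+)\.mydomain\.com$;
            ssl_certificate     /etc/ssl/mydomain-san.pem;   # must list the www.* names
            ssl_certificate_key /etc/ssl/mydomain-san.key;
            return 301 https://$sub.mydomain.com$request_uri;
        }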

  • Turn off gzip for a location in Nginx

    - by Nyxynyx
    How can gzip be turned off for a particular location and all its sub-directories? My main site is at http://mydomain.com and I want to turn gzip off for both http://mydomain.com/foo and http://mydomain.com/foo/bar. gzip is turned on in nginx.conf. I tried turning it off as shown below, but the response headers in Chrome's dev tools still show Content-Encoding: gzip. How should gzip/output buffering be disabled properly?

    Attempt:

        server {
            listen 80;
            server_name www.mydomain.com mydomain.com;
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
            root /var/www/mydomain/public;
            index index.php index.html;

            location / {
                gzip on;
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_read_timeout 300;
            }

            location /foo/ {
                gzip off;
                try_files $uri $uri/ /index.php?$args;
            }
        }
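    A plausible reason the header persists: requests under /foo/ that fall through try_files are internally rewritten to /index.php, so they are ultimately answered by the "location ~ \.php$" block, where gzip is still on. One hedged workaround is to give /foo/ its own internal PHP entry point with gzip disabled (/foo-handler.php below is a made-up internal name, not a real file):

        location /foo/ {
            gzip off;
            try_files $uri $uri/ /foo-handler.php?$args;
        }

        location = /foo-handler.php {
            internal;                 # not reachable directly from clients
            gzip off;                 # applies to the FastCGI response, too
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root/index.php;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
        }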

  • Linux - real-world hardware RAID controller tuning (scsi and cciss)

    - by ewwhite
    Most of the Linux systems I manage feature hardware RAID controllers (mostly HP Smart Array). They're all running RHEL or CentOS. I'm looking for real-world tunables to help optimize performance for setups that incorporate hardware RAID controllers with SAS disks (Smart Array, Perc, LSI, etc.) and battery-backed or flash-backed cache. Assume RAID 1+0 and multiple spindles (4+ disks).

    I spend a considerable amount of time tuning Linux network settings for low-latency and financial trading applications. But many of those options are well-documented (changing send/receive buffers, modifying TCP window settings, etc.). What are engineers doing on the storage side?

    Historically, I've made changes to the I/O scheduling elevator, recently opting for the deadline and noop schedulers to improve performance within my applications. As RHEL versions have progressed, I've also noticed that the compiled-in defaults for SCSI and CCISS block devices have changed as well. This has had an impact on the recommended storage subsystem settings over time. However, it's been awhile since I've seen any clear recommendations. And I know that the OS defaults aren't optimal. For example, it seems that the default read-ahead buffer of 128kb is extremely small for a deployment on server-class hardware.

    The following articles explore the performance impact of changing read-ahead cache and nr_requests values on the block queues:

        http://zackreed.me/articles/54-hp-smart-array-p410-controller-tuning
        http://www.overclock.net/t/515068/tuning-a-hp-smart-array-p400-with-linux-why-tuning-really-matters
        http://yoshinorimatsunobu.blogspot.com/2009/04/linux-io-scheduler-queue-size-and.html

    For example, these are suggested changes for an HP Smart Array RAID controller:

        echo "noop" > /sys/block/cciss\!c0d0/queue/scheduler
        blockdev --setra 65536 /dev/cciss/c0d0
        echo 512 > /sys/block/cciss\!c0d0/queue/nr_requests
        echo 2048 > /sys/block/cciss\!c0d0/queue/read_ahead_kb

    What else can be reliably tuned to improve storage performance? I'm specifically looking for sysctl and sysfs options in production scenarios.
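    Since echoed sysfs values vanish at reboot, one hedged way to make such settings stick is a udev rule, which also applies them to hot-added devices. The values below simply mirror the Smart Array suggestions above; adjust the KERNEL match for cciss vs. sd devices:

        # /etc/udev/rules.d/90-raid-tuning.rules -- a sketch, not a vendor recommendation
        ACTION=="add|change", KERNEL=="cciss!c0d0", ATTR{queue/scheduler}="noop", ATTR{queue/nr_requests}="512", ATTR{queue/read_ahead_kb}="2048"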

  • How to install IE9 when KB2120976 is not applicable to my Windows 7 x86 Ultimate edition?

    - by CVertex
    I'm trying to install IE9: I visit http://beautyoftheweb.com, download it, and the installer says it's downloading prerequisites. After a minute, the installer says there's a problem and directs me to "Prerequisites for installing Internet Explorer 9 Beta". I click on the x86 installers one by one... most say "already installed", but http://support.microsoft.com/kb/2120976/ says "The update is not applicable to your computer". Lame.

    So I retry the IE9 install and go through the whole process again with the same result. I discovered there's an IE9 install log at C:\Windows\IE9_main.log which logs the install process and reports an error for 2120976:

        00:06.630: ERROR: Error installing prerequisite file (C:\Users\Vijay\AppData\Local\Temp\IE98036.tmp\KB2120976_x86.msu): 0x80240017 (2149842967)
        00:06.677: INFO: PauseOrResumeAUThread: Successfully resumed Automatic Updates.
        00:12.090: INFO: Link clicked, opening URL in new window: 'http://go.microsoft.com/fwlink/?LinkId=185111'
        00:12.106: INFO: Setup exit code: 0x00009C47 (40007) - Required updates are missing from the system.

    Why is KB2120976 inapplicable to my legal Windows 7 Ultimate system? Any help is greatly appreciated.
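    Two hedged checks that narrow this down: 0x80240017 is Windows Update's "not applicable" code, which can mean the update is already present, superseded (for example by a service pack), or aimed at a different build. From an elevated command prompt:

        rem is the hotfix already on the system?
        wmic qfe get hotfixid | findstr /i 2120976

        rem what build / service pack level is this? if a newer service pack
        rem already contains the fix, "not applicable" would be legitimate
        winver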

  • Is .htaccess slowing down my dedicated server?

    - by David Robles
    First of all, I consider myself more a programmer than a servers guy. I have a website that receives about 3,000 visits per day, which I think is a lot less than the max capacity for a dedicated server. However, I've noticed that the connection to the website is pretty slow, e.g., to load images, to connect to it via SSH, etc. I recently configured .htaccess to prevent hotlinking of images on my server (i.e. .jpg, .gif and .png), and I was wondering if that could be slowing down my website. This is the configuration that I have:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteEngine on
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://www.mysite.com/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http://www.mysite.com$ [NC]
        RewriteRule .*\.(jpg|jpeg|gif|png|bmp|swf)$ http://www.google.com/ [R,NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    I found some code to do that via Google, and I just copied it into .htaccess since I'm not an expert in Apache. It works, but I don't know if it is the best way to do it. How can I see if that is the reason why the server is slow? Are there any tools to monitor it? What would you guys do? Thanks in advance!
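    A hedged way to answer the "how can I see" part is to measure rather than guess: benchmark the same static image with the hotlink rules present and then temporarily renamed away, and compare. ApacheBench (the "ab" tool shipped with Apache) is enough for a rough read; the image path below is a placeholder:

        # 500 requests, 10 concurrent, against a static image
        ab -n 500 -c 10 http://www.mysite.com/wp-content/some-image.jpg

    If requests-per-second barely changes with the rules removed, the slowness lives elsewhere (often disk, database, or network), and tools like top, iostat and apachetop help narrow that down.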

  • I think installing PostFix solved my problem, but it seemed *too* easy

    - by Joel Marcey
    Hi, this is a follow-up to a serverfault post I made a while ago: http://serverfault.com/questions/21633/how-do-i-target-a-different-mail-server-depending-on-domain-with-exim (More context here too: http://forum.slicehost.com/comments.php?DiscussionID=3806 )

    I have a slice/VPS at Slicehost. I basically decided to scrap exim (i.e., purge it) and start anew with my email infrastructure. In case you didn't read any of the above threads: my goal was a send-only mail infrastructure that relays all outgoing email to Google Apps, where email from domain1 (a Wordpress installation) would show it coming from domain1.com and email from domain2 (a normal website) would show it coming from domain2.com.

    So I decided to give Postfix a try, and I literally followed the surprisingly simple instructions here: http://sudhanshuraheja.com/2009/02/slicehost-setup-outgoing-mail-google-apps-postfix/

    And voila, all seems to be working as I expected. My email tests show email coming from the proper locations (either domain1 or domain2, depending on where the emails were sent from). But this all seemed too simple to me. So simple, in fact, that I feel something is amiss. When I installed Postfix according to the instructions above and it worked, I was surprised that I didn't have to specify an SMTP server, a port number, any authentication credentials, etc. My slice has MX records for Google Apps (e.g., ASPMX.L.GOOGLE.COM.) in its DNS settings, but I am not sure if that is why it is working.

    My email infrastructure knowledge is admittedly limited, but with this am I susceptible to:

        - Spammers using my email infrastructure?
        - My emails going to people as spam?
        - Something else sinister?

    I have actually stopped running Postfix until I understand this better. Thanks!
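    A hedged explanation for the "too easy" feeling: a default Postfix doesn't need an upstream SMTP host because it looks up each recipient domain's MX records and delivers directly; credentials only come into play when relaying through someone else's server. Whether the box is an open relay depends on a few main.cf settings, which are quick to inspect:

        # what postfix will relay for, and where it listens
        postconf mynetworks inet_interfaces mynetworks_style

    For a send-only box, inet_interfaces = loopback-only and mynetworks = 127.0.0.0/8 mean only local processes can inject mail, which rules out the classic open-relay abuse.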

  • pfSense Load Balancer and Virtual IP

    - by jshin47
    I have two identical web servers on 10.2.1.13 and 10.2.1.113. I would like to set up the pfSense load balancer to balance requests between them. I set up pools that include HTTP and HTTPS for both hosts, then set up virtual servers that respond on HTTP and HTTPS and refer traffic to their respective pools.

    However, I set the virtual server to listen on 10.2.1.213, a LAN IP rather than a WAN IP, because I want LAN traffic to be able to use the load balancer virtual server as well. So I set up a Virtual IP for 10.2.1.213 on the LAN interface, and a NAT port-forwarding rule for HTTP and HTTPS traffic on a WAN IP to forward to 10.2.1.213.

    It seems like this should work, but it fails. What eventually happens is that when I try to access the page from WAN, I am directed to the login page for my pfSense device rather than the page I am expecting. When I try to access 10.2.1.213 from LAN, the request times out. What is going wrong here? I have tried it with and without NAT reflection to no avail. Please advise.
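    Being served the pfSense login page usually means the webGUI, not the virtual server, is answering on that IP/port. A hedged check from the pfSense shell (the load balancer on pfSense 2.x is relayd) shows who actually owns the listeners:

        # list listening sockets; look for who holds :80 and :443 on 10.2.1.213
        sockstat -4 -l | egrep ':80|:443'

    If the webGUI holds port 80/443 on all addresses, moving it to another port under System > Advanced (or binding the virtual server to a port the GUI doesn't use) is the usual way out.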

  • Sparing level on HP EVA 4000

    - by Samuel
    One of the disks in our EVA4000 died today. The disk group (all volumes vraid5 with sparing level 1, almost no space left for more volumes, 1TiB drives) is being rebuilt with "spare space" right now, and it will take at least 15 hours to do the leveling/rebuilding. We can't get a new disk until Friday.

    So, the question is: what would happen if another disk dies before the leveling completes? Would we lose data? And after that, how many additional disks could die before losing data - 1 or 2? In "usual" RAID we would be vulnerable to data loss while the rebuild takes place, but in this case the space reserved for sparing is two times the size of the biggest disk, so at the very least the effect should be the same as having two spares. Thanks in advance.

    Update: I have found some interesting threads about this question but still can't answer it, so I'm starting a bounty:

        http://blog.thestoragearchitect.com/2008/10/27/understanding-eva/
        http://www.experts-exchange.com/Storage/Storage_Technology/Q_25548177.html (Experts Exchange question from Google)
