Search Results

Search found 13625 results on 545 pages for 'apache math'.

Page 500/545

  • REMOTE_USER not getting set?

    - by landed
    I am trying to set up LDAP authentication in Joomla using a plugin called JMapMyLDAP (in fact four plugins, each doing a different job). I need to pull part of a string out of the server variable REMOTE_USER, which should be visible in phpinfo() (as shown at http://timplummer.com.au/4-how-to-integrate-joomla-3-with-active-directory-using-ldap.html). The issue is that REMOTE_USER is not set, or at least not appearing. A few things to note: conceptually I don't really understand authentication as a whole; it appears to be a vast subject despite my years working on websites. I have used ASP and built PHP pages that check a user is who they say they are with a token (session?) given to just them, so that they can be identified when a stateless request is made to the server. That's my level of understanding. It sounds different from basic authentication in Apache, where a username and password sit in a file and the user has to log in through a basic prompt to get access to the folder/docs, via an .htaccess file. So for the LDAP setup to work I need to get REMOTE_USER, which sounds reasonable -- how else do we know who is making the request? Thank you.
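
    Note: REMOTE_USER is only populated when Apache itself authenticates the request (mod_auth_basic, mod_authnz_ldap, ...); a Joomla/PHP-level plugin cannot set it. A minimal sketch of Apache-side LDAP auth that would make it appear in phpinfo() -- the path and LDAP URL are placeholders, and mod_authnz_ldap/mod_ldap must be loaded:

        <Directory /var/www/joomla>
            AuthType Basic
            AuthName "LDAP login"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.example.com/ou=people,dc=example,dc=com?uid"
            Require valid-user
        </Directory>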


  • What is the best hosting option for Flash web-widget?

    - by par
    Our Flash web widget has become highly popular. It is downloaded around 100,000 times per day, and that is the problem: our server bandwidth is too narrow to deliver the widget to clients quickly. The widget loads very slowly -- probably 20 times slower than before at peak times. I have probably chosen the wrong host for my task of delivering a 1 MB Flash widget to 100,000 users per day. What is the best hosting solution in my case? I'm not good at server administration, so forgive me if I sound naive. The details are as follows.

    Our hosting options:
    - Dedicated server, Ubuntu
    - 10 Mbit connection
    - monthly bandwidth limit: 2000 GB

    The widget is 1 MB and consists of the main SWF plus a number of loaded SWF and data files. This is part of the Apache status report taken right now:

        Server uptime: 1 hour 2 minutes 38 seconds
        Total accesses: 74865 - Total Traffic: 5.8 GB
        CPU Usage: u28 s7.78 cu0 cs0 - .952% CPU load
        19.9 requests/sec - 1.6 MB/second - 81.1 kB/request
        200 requests currently being processed, 0 idle workers
        WWWWWWWWWWWWWWWWWWWWWWWWCWWWWWWWWWWWWWWWWWWWWWWWWWWCWWWWWWWCWWWW
        WWWWWCWWWWWWWWWWWWWWCWWWWWWWWWWWWCWWWWWWWWCWWCWWWWWWWWWWWWWWWWWW
        WWWWWWWWWWWWWWWWCWWWWWWWWWWWWWWWWWWWWWWWWWWWCWCWWWWWWWWWWWWWWWCW
        WWWWWWWW........................................................
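
    A rough calculation with the figures given above already shows where the bottleneck is (assuming ~100,000 downloads/day of ~1 MB each):

        100,000 x 1 MB      = ~100 GB/day  = ~3,000 GB/month   (over the 2,000 GB monthly cap)
        100 GB / 86,400 s   = ~1.2 MB/s    = ~9.3 Mbit/s average
        # a 10 Mbit port is nearly saturated by the *average* rate alone,
        # so peak-hour requests inevitably queue and the widget loads slowly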


  • What can inexperienced admin expect after server setup completed seemingly fine? [closed]

    - by Miloshio
    An inexperienced person seems to have done everything fine so far. This is the very first time he is the only one in charge of a LAMP server. He has installed the OS, networking, Apache, PHP, MySQL, ProFTPD, and MTA/MDA software, configured the VirtualHosts properly (or so he claims, since he calls himself an admin), done user management and applied various configuration settings following security recommendations, and... everything is fine for now. For now. If you were directing a horror movie about the admin described above, what bogeyman would you have show up and start pursuing him? Leaving aside hardware disasters, which cannot be handled remotely anyway, what are the most common causes of significant server (or server-related) failure when the machine is managed by an inexperienced admin? I have in mind the kind of thing newbie admins very often miss and that later requires the intervention of someone with experience: an uncontrolled CPU-eating leftover process, a memory-related glitch, a widely used feature that breaks something unexpected, or anything like that. The newbie admin currently only monitors disk space, RAM usage, and the number of running processes. He would appreciate any tips on what is likely to happen to his server over time.


  • "Server not found" errors all over after Wordpress installation

    - by picardo
    I uploaded my WordPress blog from my local machine to Slicehost and pointed the domain name at the IP address, then installed the blog as normal. Once I went to wp-login.php to log in, though, I started getting "Server not found" errors. That was strange, because the server process was still running -- I checked many times. I can't see anything wrong in the error log or the access log either. This doesn't only affect WordPress: I can't access phpMyAdmin now either, which was mapped to a subdirectory of the same domain. What is going on? Can anyone help?

    Edit: the blog is located on a subdomain. It's still accessible by IP address. The virtual host config had ServerName and ServerAlias both set to blog.mysite.com; when I changed those and restarted Apache, phpMyAdmin came back.

    Edit: also, it's not a propagation issue, because I installed the blog using the domain name. It's only when I tried to log into the admin section that I started getting these errors.
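
    "Server not found" is a name-resolution error in the browser, and wp-login.php is where WordPress starts redirecting to the siteurl/home values saved at install time, so those values are worth checking first. A quick look (assuming the default wp_ table prefix and suitable MySQL credentials):

        mysql -u wpuser -p wpdb -e \
            "SELECT option_name, option_value FROM wp_options WHERE option_name IN ('siteurl','home');"
        # if these point at a hostname that does not (yet) resolve publicly,
        # the login redirect will fail even though Apache itself is fine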


  • Would a PHP application benefit from being served from a RAM drive?

    - by Tom Marthenal
    I am in charge of hosting a PHP application that is large and slow, but easy to scale. The application is entirely static, with writable disk storage needed. We've profiled the application, and the main bottleneck appears to come from loading the application rather than from the work the application does. The application is not CPU-intensive, although it does use a fair amount of memory (think Magento). Currently we distribute it by having a series of servers with the same PHP files on their hard drives and a load balancer in front of them. Easy but expensive. I've been reading about RAM disks and the I/O benefits they offer, and was wondering if they would be well suited to PHP applications. Since PHP applications are loaded from disk for every request and often involve lots of different files (as opposed to being kept in memory like a Java application), I would figure that disk performance can be a severe bottleneck. Would placing the PHP files on a RAM disk and using the mount point as Apache's document root offer performance benefits? A startup script could create the RAM drive and then copy the files (which are plain text and small) from a permanent location to the temporary RAM drive. Does this make sense, or should I just trust the Linux kernel to cache the appropriate files in memory by itself?
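
    For what it's worth, a tmpfs mount is cheap to test against the status quo; note that tmpfs lives in the same page cache the kernel already uses for hot files, so an opcode cache (APC/OPcache) usually buys more for PHP. A minimal sketch with placeholder paths and size:

        mkdir -p /mnt/app-ram
        mount -t tmpfs -o size=256m tmpfs /mnt/app-ram
        rsync -a /var/www/app/ /mnt/app-ram/
        # then point the vhost's DocumentRoot at /mnt/app-ram and benchmark both setups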


  • LVS / IPVS difference in ActiveConn since upgrading

    - by Hans
    I've recently migrated from an old version of LVS / ldirectord (Ultra Monkey) to a new Debian install with ldirectord. Now the number of active connections is usually higher than the number of inactive connections; it used to be the other way around. On the old load balancer the connections looked something like:

        -> RemoteAddress:Port    Forward  Weight  ActiveConn  InActConn
        -> 10.84.32.21:0         Masq     1       12          252
        -> 10.84.32.22:0         Masq     1       18          368

    However, since migrating to the new load balancer it looks more like:

        -> RemoteAddress:Port    Forward  Weight  ActiveConn  InActConn
        -> 10.84.32.21:0         Masq     1       313         141
        -> 10.84.32.22:0         Masq     1       276         183

    Old load balancer: Debian 3.1, ipvsadm 1.24, ldirectord 1.2.3
    New load balancer: Debian 6.0.5, ipvsadm 1.25, ldirectord 1.0.3 (I guess the versioning scheme changed)

    Is it because the old load balancer was running a kernel from 2005 and ldirectord from 2004, and things have simply changed in the past 7-8 years? Did I miss some sysctl settings that I should be enforcing to make it behave the same way? Everything appears to be working fine, but can anyone see an issue with this behaviour? Thanks in advance!

    Additional info: I'm using LVS in masquerading mode, and the real servers have the load balancer as their gateway. The real servers are running Apache, which hasn't changed during the upgrade. The boxes themselves show roughly the same amount of inactive connections as shown in ipvsadm.
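
    One thing worth comparing between the two generations: ActiveConn counts connections IPVS considers ESTABLISHED, InActConn everything else, so the split depends heavily on the tcp/tcpfin timeouts and on client keep-alive behaviour. A couple of read-only checks to run on both boxes (a diagnostic sketch, not a fix):

        ipvsadm -L --timeout    # tcp / tcpfin / udp timeouts currently in force
        ipvsadm -L -n -c        # connection table: how many entries are ESTABLISHED vs FIN_WAIT/TIME_WAIT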


  • Server Intermittently Inaccessible Externally (but Accessible Internally Continuously)

    - by nicorellius
    I have a CRM on a server on our network. We have a static IP and another, outward-facing server. We use port forwarding to map to the CRM, so that when you go to the IP or the FQDN you reach the CRM:

        xxx.xxx.xxx.xxx
        crm.example.com

    Internally, we can access the CRM by going to crm or crm.example.com. Lately I've noticed that accessing the server from outside the network times out or gives a 503 / bad gateway. During that time I can still SSH (on a different port, so this works) into the outward-facing machine and access the CRM server just fine. I have a robot monitoring the site, and indeed, according to its HTTP checks the site is going down periodically. I looked through the Apache access and error logs and nothing stuck out at me, so I'm a bit confused as to what could be going on; I also searched the access logs for 503 and found nothing. When I run tracert from outside the network, the packets basically make it through the wide-area hops (Comcast city and county servers) and then drop at the CRM server's front step. I'm tempted to replace the server because it is older and underpowered, but it would be nice to know what is going on. Any ideas what to do next?
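
    Since SSH to the outward-facing box keeps working during an outage, running a few checks from that box while the monitor reports the site down would separate a dead backend from a broken forward path (hostnames, ports and log paths below are placeholders):

        curl -sv -o /dev/null http://crm.internal.example/   # is the CRM reachable from the proxy box?
        ss -tan state syn-sent                               # connections stuck half-open towards the CRM?
        tail -f /var/log/httpd/error_log                     # proxy timeouts / 503s usually show up here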


  • What are the possible problems, when wget returns code 500 but same request works in normal browsers?

    - by markus
    What should I be looking for when wget returns 500 but the same URL works fine in my web browser? I don't see any access_log entries that seem to be related to the error.

        DEBUG output created by Wget 1.14 on linux-gnu.
        <SSL negotiation info stripped out>
        ---request begin---
        GET /survey/de/tools/clear-caches/password/<some-token> HTTP/1.1
        User-Agent: Wget/1.14 (linux-gnu)
        Accept: */*
        Host: testing.thesurveylab.net
        Connection: Keep-Alive
        ---request end---
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.0 500 Internal Server Error
        Date: Wed, 12 Dec 2012 14:53:07 GMT
        Server: Apache/2.2.3 (CentOS)
        Set-Cookie: blueprint2-staging=8jnbmkqapl30hjkgo0u6956pd1; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Strict-Transport-Security: max-age=8640000;includeSubdomains
        X-UA-Compatible: IE=Edge,chrome=1
        Content-Length: 5
        Connection: close
        Content-Type: text/html; charset=UTF-8
        ---response end---
        500 Internal Server Error
        Stored cookie testing.thesurveylab.net -1 (ANY) / <session> <insecure> [expiry none] blueprint2-staging 8jnbmkqapl30hjkgo0u6956pd1
        Closed 3/SSL 0x0000000001f33430
        2012-12-12 15:53:07 ERROR 500: Internal Server Error.
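
    Since the browser works and wget does not, the application is most likely keying on a request header (User-Agent, Accept-Language, a cookie or a referer). Replaying the request and adding browser-like headers one at a time usually pins it down -- a sketch, with the token left as the placeholder above:

        wget -S -O /dev/null \
             --header='User-Agent: Mozilla/5.0' \
             --header='Accept-Language: en' \
             'https://testing.thesurveylab.net/survey/de/tools/clear-caches/password/<some-token>'
        # a 500 with Content-Length: 5 is generated by the application, not by Apache,
        # so the PHP/application error log is the other place to look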


  • Hostname vs webpage domain.

    - by Mark
    Hi all, I'm just starting to look at deploying a web page and getting into the joy of DNS etc., and I'm wondering how you set up multiple web servers, each with their own hostname and public IP address, yet have them serve a web page for one domain. For example, let's say you have a website example.com, and an A record in DNS that points at its IP address of 1.2.3.4. You want to have two servers, prod1 and prod2, with some kind of load balancer in front of them for failover reasons. The way I see it, you would want the hostnames of these servers to be prod1.example.com and prod2.example.com, and perhaps loadb.example.com. How would you set up the DNS so this would all work? I.e. you could SSH to any of the server names -- prod1.example.com, prod2.example.com or loadb.example.com -- and also just use the www.example.com URL to reach the website. And would all these server names be resolvable from the public internet, and is that safe? This would be a Linux environment, for argument's sake Ubuntu, with a dynamic Django-framework website running under Apache 2.2. Cheers, Mark
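
    One common layout, as a sketch of the relevant zone records (IPs are placeholders): every machine keeps its own A record, and the website name simply points at the load balancer, so SSH-by-name and www both work. Whether prod1/prod2 should resolve publicly is a policy choice; it is not unsafe in itself as long as only the intended ports are reachable on them.

        example.com.         IN  A      1.2.3.4        ; load balancer / VIP
        www.example.com.     IN  CNAME  example.com.
        loadb.example.com.   IN  A      1.2.3.4
        prod1.example.com.   IN  A      1.2.3.5
        prod2.example.com.   IN  A      1.2.3.6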


  • Website latency and bad tcp packets

    - by Mistero Lupo
    I have multiple websites hosted on a Linode VPS and I'm having an issue with one of them: every page that I try to load has about 10 seconds of latency. The Apache logs are clean and the other websites on the same machine are running fine. At first glance I thought it was a memory problem, since the VPS has only 512 MB, but according to the Linode dashboard CPU and disk I/O are normal. Anyway, here is the RAM status:

        $ free -m
                     total       used       free     shared    buffers     cached
        Mem:           487        463         23          0          2         55
        -/+ buffers/cache:        404         82
        Swap:          255        155        100

    Only 23 MB free, but if it were a memory problem, why are the other websites behaving as usual? I took a live capture with Wireshark, and there are some duplicate SYN/ACK packets just before the 10-second gap. I'm out of ideas and looking for clues. (A Wireshark live-capture screenshot was attached.) As you can see from the image, the gap comes after the last bad TCP packet. Thank you in advance.

    UPDATE: I've checked the Apache2 logs at debug error level, and this is where something is happening:

        151.97.156.191 - - [14/Nov/2012:11:19:40 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6801578/subreq] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] applying pattern '^index\.php$' to uri 'index.php'
        151.97.156.191 - - [14/Nov/2012:11:19:40 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6801578/subreq] (1) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] pass through /home/fmaisi/sites/www.fmaisi.it/public_html/index.php
        151.97.156.191 - - [14/Nov/2012:11:19:54 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6537c78/initial] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] strip per-dir prefix: /home/fmaisi/sites/www.fmaisi.it/public_html/wp-content/plugins/wp-filebase/wp-filebase_css.php -> wp-content/plugins/wp-filebase/wp-filebase_css.php
        151.97.156.191 - - [14/Nov/2012:11:19:54 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6537c78/initial] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] applying pattern '^index\.php$' to uri 'wp-content/plugins/wp-filebase/wp-filebase_css.php'

    As you can see, there is a gap of 14 seconds after the pass-through of index.php. Any suggestions? I'm out of ideas again.
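
    With 155 MB of swap in use on a 512 MB VPS, it is worth watching the box during one of the slow page loads; the 14-second gap sits entirely inside the handling of index.php, which points at PHP/MySQL rather than the network. A diagnostic sketch (MySQL credentials omitted):

        vmstat 1                              # si/so columns: is the VPS swapping during the request?
        top -d 1                              # is php or mysqld busy for those 14 seconds?
        mysql -e 'SHOW FULL PROCESSLIST;'     # any long-running WordPress query?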


  • Managed LAMP platform for maximizing availability and global reach, not scalability

    - by user66819
    Assume a Linux/Apache/MySQL/PHP application for a small base of registered users. With a small userbase there are no traffic peaks, so the scalability that cloud platforms offer is not imperative. But the system is mission-critical, so availability is the primary goal. Users are also distributed across Asia, Europe, and the US, so multiple server locations that minimize users' network hops would be highly desirable. The dream: a managed VPS platform where we would configure a single server (uploading PHP and other files, manipulating the database, etc.), and the platform would automatically mirror that server in a handful of key places around the world (say one on each US coast, one in Europe, one in east Asia). File-system synchronization and MySQL replication would happen automatically. The core operating system is managed, so we don't need to do full system administration and security, and low-level backups are also done by the service provider, though we do our own backups as well. Couple this with some sort of DNS geo-detection, so users are routed to the nearest operational server -- with support for HTTPS, of course. Does such a dream exist? If not, what are some approaches to accomplishing the same end with minimal time investment and minimal monthly hosting costs?


  • Tuning Linux + HAProxy

    - by react
    I'm currently rolling out HAProxy on CentOS 6, which will send requests to some Apache httpd servers, and I'm having issues with performance. I've spent the last couple of days googling and still can't seem to get past 10k connections/sec consistently when benchmarking (sometimes I do get 30k/sec, though). I've pinned the IRQs of the TX/RX queues for both the internal and external NICs to separate CPU cores and made sure HAProxy is pinned to its own core. I've also made the following adjustments to sysctl.conf:

        # Max open file descriptors
        fs.file-max = 331287
        # TCP Tuning
        net.ipv4.tcp_tw_reuse = 1
        net.ipv4.ip_local_port_range = 1024 65023
        net.ipv4.tcp_max_syn_backlog = 10240
        net.ipv4.tcp_max_tw_buckets = 400000
        net.ipv4.tcp_max_orphans = 60000
        net.ipv4.tcp_synack_retries = 3
        net.core.somaxconn = 40000
        net.ipv4.tcp_rmem = 4096 8192 16384
        net.ipv4.tcp_wmem = 4096 8192 16384
        net.ipv4.tcp_mem = 65536 98304 131072
        net.core.netdev_max_backlog = 40000
        net.ipv4.tcp_tw_reuse = 1

    If I use ab to hit a web server directly, I easily get 30k connections/sec. If I stop the web servers and use ab to hit HAProxy, I also get 30k/sec, but obviously that's not a useful test. I've also disabled iptables for now, since I read that nf_conntrack can slow everything down -- no change. I've also disabled the irqbalance service. The fact that I can hit each individual device at 30k/sec makes me believe the tuning of the servers is OK and that it must be something in the HAProxy config. Here's the config, which I've built from reading tuning articles, etc.: http://pastebin.com/zsCyAtgU The server is a dual Xeon E5-2620 (6 cores) with 32 GB of RAM, running CentOS 6.2 x64. The private and public interfaces are on separate NICs. Anyone have any ideas? Thanks.
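
    Besides the sysctl values, HAProxy's own connection caps are a frequent limiter, since the compiled-in defaults are far below 30k concurrent connections; ab itself (a single process) can also be the bottleneck, so running several instances in parallel from more than one client machine is worth trying. A sketch of the relevant haproxy.cfg lines (values are examples only, not a reproduction of the pastebin config):

        global
            maxconn 100000      # global cap; HAProxy derives its own fd limit from this
        defaults
            maxconn 100000      # per-frontend cap, also low by default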


  • Reverse web proxy with time constraints

    - by user2893458
    I have a web application which produces several unique URLs of the form http://service.company.com/service.html?type=aaaa&key=jfiZm6u6cW where the last part is a randomly generated key. Each such URL provides access to an instance of the service. I am looking for a way to restrict access to those URLs based on time constraints; for example, URL #1 should be available between 8:00 AM and 10:00 AM on May 30, URL #2 between 10:30 AM and 12:00 PM on May 31, and so on. I already have a resource-scheduling application based on Drupal and would like to find a way to include those URLs as scheduled resources. The web application is deployed on Apache Tomcat, and I don't have the knowledge or the resources to alter it, so I thought I could put some sort of reverse proxy in front of the web app to implement the time-constraint feature. In my mind the reverse proxy would allow or disallow access to each URL based on rules that my scheduling application provides. There may be other ways to deliver such a solution, but I can't think of anything better, so the question is: is there a reverse web proxy architecture that can allow access to the destination URLs based on time and date rules? Any other ideas are more than welcome.
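
    One way to do exactly this is to terminate the keyed URLs on nginx and let it ask the Drupal scheduler about every request via the auth_request module (requires an nginx build that includes it; the upstream names below are placeholders). The scheduler only has to answer 2xx inside the booked window and 401/403 outside it -- a sketch:

        location /service.html {
            auth_request /schedule-check;
            proxy_pass   http://tomcat-backend;
        }
        location = /schedule-check {
            internal;
            proxy_pass              http://drupal-scheduler;
            proxy_set_header        X-Original-URI $request_uri;   # lets Drupal see type= and key=
            proxy_pass_request_body off;
            proxy_set_header        Content-Length "";
        }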


  • server_name seems to be ignored in nginx

    - by user46171
    I have two domains set up in nginx.conf. Both use SSL with their own certificates and proxy to Apache. However, the second domain is completely ignored and nginx always resolves to the first domain. I can't see what the issue is with this configuration, having set the server_name correctly in each case (as far as I can see):

        http {
            include mime.types;
            default_type application/octet-stream;
            keepalive_timeout 65;

            upstream site {
                # real IP addresses masked
                server xx.xxx.x.xxx;
                server xx.xxx.x.xxx;
            }

            server {
                # this domain always works
                listen 443;
                server_name *.first-site.com;
                ssl on;
                ssl_certificate /var/ssl/first-site.crt;
                ssl_certificate_key /var/ssl/first-site.key;

                location / {
                    access_log off;
                    proxy_connect_timeout 15;
                    proxy_next_upstream error;
                    proxy_pass http://site;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-Protocol https;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_redirect off;
                }
            }

            server {
                # this domain is ignored, always resolves to first-site.com
                listen 443;
                server_name *.second-site.com;
                ssl on;
                ssl_certificate /var/ssl/second-site.crt;
                ssl_certificate_key /var/ssl/second-site.key;

                location / {
                    access_log off;
                    proxy_connect_timeout 15;
                    proxy_next_upstream error;
                    proxy_pass http://site;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-Protocol https;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_redirect off;
                }
            }
        }
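
    With several certificates on one IP:port, nginx can only pick the second server block if the client sends the hostname via SNI and nginx was built with SNI support; otherwise every HTTPS request lands in the first (default) block on that address. Two quick checks (the IP is a placeholder):

        nginx -V 2>&1 | grep -o 'TLS SNI support enabled'
        openssl s_client -connect xx.xxx.x.xxx:443 -servername www.second-site.com 2>/dev/null \
            | openssl x509 -noout -subject      # which certificate actually comes back?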


  • How expensive is a hostname in htaccess? Other solutions possible?

    - by Nanne
    To easily allow or disallow dynamic IP addresses, you can add them as hostnames in a .htaccess file. As I read in ".htaccess allow from hostname?", Apache then does a reverse lookup on the connecting IP address and checks whether the response matches the allowed name. (Actually Apache does a double lookup: first a reverse lookup, then a forward lookup on the result of the reverse.) This is the reason we currently don't use dynamic-IP hostnames in the .htaccess: it "sounds" quite heavy -- two extra lookups for every request. Is it indeed that heavy, and would a reasonably busy server that wants less load rather than more get away with it? (E.g. how does this load compare to the rest? If a request is 1000 times more expensive than the lookups, it might be negligible; on the other hand, it could be the final straw.) Are there other solutions? I could of course write a script that looks up the hostname and puts the result into the .htaccess files, but that feels a bit like a hack.
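
    The script idea is less of a hack than it sounds, and it moves the two lookups out of the request path entirely. A sketch of a cron job (hostname and paths are placeholders) that resolves the dynamic name and rewrites the protected folder's .htaccess, which Apache re-reads on every request anyway:

        #!/bin/sh
        IP=$(dig +short dyn-host.example.org | tail -n 1)
        [ -n "$IP" ] && printf 'Order deny,allow\nDeny from all\nAllow from %s\n' "$IP" \
            > /var/www/site/protected/.htaccess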


  • PassEnv does not find ENV variables

    - by quodlibetor
    I've got this /etc/profile.d/myfile.sh:

        export MYVAR=myval

    I also have a PassEnv MYVAR line in a <VirtualHost> section of a file in an Apache conf dir. That lets me do things like:

        $ echo $MYVAR
        myval
        $ python
        >>> import os; os.getenv('MYVAR')
        'myval'
        $ sudo echo $MYVAR
        myval
        $ sudo -i
        root# echo $MYVAR
        myval

    But then, despite all of that, I get:

        root# /sbin/service httpd restart
        Stopping httpd:                                            [  OK  ]
        Starting httpd: [Mon Oct 22 14:44:02 2012] [warn] PassEnv variable MYVAR was undefined
                                                                   [  OK  ]

    And all of my attempts to access MYVAR from within my WSGI scripts just don't work. Thoughts? Am I doing something obviously wrong?

    EDIT, for more detail: I've got a swarm of computers/VMs and a swarm of developers working on a swarm of projects. I need a simple central place to keep environment information; the most common item is the "environment" (dev/stage/prod). The scheme we have now (modifying *.wsgi programmatically) is turning out to be more fragile than we'd like. The main options I see are: put things in the shell environment, or put things in other config files. Getting things into the shell environment would be best, because then we won't need to write yet more duplicated "what is my environment" code.
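
    The init script that starts httpd never sources /etc/profile.d, so the variable simply is not in the daemon's environment at startup, which is exactly what the PassEnv warning says. Two approaches that avoid this (the sysconfig path assumes the stock RHEL/CentOS init script, which does source /etc/sysconfig/httpd):

        # 1) export it from the file the init script reads
        echo 'export MYVAR=myval' >> /etc/sysconfig/httpd
        # 2) or set it in Apache itself and skip the OS environment:
        #    SetEnv MYVAR myval
        # note: with mod_wsgi, PassEnv/SetEnv values land in the per-request WSGI environ
        # rather than os.environ, so scripts that call os.getenv() usually need option 1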


  • Query specific nameserver for a particular domain upon VPN connect

    - by MT
    Some background: I have a work laptop with Ubuntu 9.10 on it. I have a small network at home where I've been running some basic services (for myself and my family) for some ten years. On my home network there is a nameserver (Fedora) running BIND 9 with two views. One view is the "outside" view; it provides name resolution (to the Internet at large) for email, a wiki, and a couple of blogs. The "inside" view provides name resolution to the internal RFC 1918 addresses of those servers, as well as for all the inside hosts, network equipment, etc. I connect to my home network from outside (such as from work) with an OpenVPN client. What I'd like to be able to do is resolve names on my internal network across this VPN (so I get the RFC 1918 "inside" responses) without switching my resolver entirely to the DNS server at my house. For example, if I connect to the VPN from work, I can change my resolver (by editing resolv.conf) to the DNS server at my house (across the VPN) and then successfully resolve all of the inside DNS names on my home network. The problem is that I'm then no longer able to resolve the "inside" names provided by my work's DNS servers (because I'm using my home DNS server). Alternatively, I can connect to the VPN and access my home servers directly by IP address, but this is inconvenient and causes issues with Apache name-based hosting (among other things). In the end, the effect I'm trying to achieve is this: when I connect to the VPN, I automatically start sending DNS requests for *.myhomedomain.com to my home nameserver, while any other requests continue to go to the nameserver I was using before (the one I received on the company LAN via DHCP). When I disconnect the VPN, requests for *.myhomedomain.com go back to the local LAN DNS server (i.e. all requests go there again). I'm looking for suggestions as to how this can be accomplished.
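
    A small local forwarder gives exactly this behaviour. A sketch using dnsmasq on the laptop (the home nameserver's VPN-side address is a placeholder): resolv.conf points at 127.0.0.1, dnsmasq forwards the home domain across the tunnel and everything else to the DHCP-supplied resolvers, and the OpenVPN up/down scripts can add or remove the entry when the tunnel comes and goes:

        # /etc/dnsmasq.conf
        server=/myhomedomain.com/192.168.10.53    # home BIND server, reachable over the VPN
        resolv-file=/etc/resolv.dnsmasq           # the DHCP-supplied nameservers go here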


  • sudo or acl or setuid/setgid?

    - by Xavier Maillard
    For a reason I do not really understand, everyone wants sudo for everything. At work we even have as many sudo entries as there are ways to read a logfile (head/tail/cat/more, ...). I think sudo is defeating its purpose here. I'd rather use a mix of setgid/setuid directories and add ACLs here and there, but I really need to know the best practices before starting. Our servers have %admin, %production, %dba, %users -- i.e. many groups and many users. Each service (MySQL, Apache, ...) has its own way of installing privileges, but members of the %production group must be able to consult configuration files and even log files. There is always the option of adding them to the right groups (mysql, ...) and setting the right permissions, but I do not want to usermod all users, and I do not want to modify the standard permissions, since they could change after each upgrade. On the other hand, setting ACLs and/or mixing setuid/setgid on directories is something I could easily do without "defacing" the standard distribution. What do you think about this? Taking the MySQL example, that would look something like this:

        setfacl -m d:g:production:rx,d:other::---,g:production:rx,other::--- /var/log/mysql /etc/mysql

    Do you think this is good practice, or should I definitely usermod -G mysql and play with the standard permission system? Thank you.


  • How do I restrict access to certain web files/folders on an IIS 7.5 based web server?

    - by cpuguru
    We're moving a website that was previously hosted on Win2k3 and IIS 6 to a Win2k8 R2 and IIS 7.5 platform. The website is public, but we want to restrict anonymous access to certain files and folders so that the user is prompted for a password when accessing them. If this were Apache, a simple .htaccess file would serve the purpose. However, since it's IIS 7.5 and we're serving mainly static HTML files plus a few classic ASP pages, I'm in a bit of a quandary as to how to restrict access to individual files and folders for various committees, such that an attempt to access committee_1's files or folders prompts the user for a password and, if it is entered correctly, serves up their files -- and the same for committee_2, and so on. Under IIS 6 we would take away the read privileges for IIS_IUSRS, create a user called "committee_1" with a password known by the group, and give that user read privileges on the files/folders. There has got to be a better (and more secure) way. Reminder: these are not *.aspx pages being served. Any suggestions on how to password-protect key files and/or folders under IIS 7.5 are much appreciated.
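
    IIS 7.5's closest equivalent to the .htaccess approach is a web.config dropped into each committee folder, combining Basic Authentication with URL Authorization. A sketch (the committee name is a placeholder; it assumes both features are installed and that the authentication sections have been unlocked for delegation, since they are locked at server level by default):

        <configuration>
          <system.webServer>
            <security>
              <authentication>
                <anonymousAuthentication enabled="false" />
                <basicAuthentication enabled="true" />
              </authentication>
              <authorization>
                <remove users="*" roles="" verbs="" />
                <add accessType="Allow" users="committee_1" />
              </authorization>
            </security>
          </system.webServer>
        </configuration>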


  • nginx + IIS + GET

    - by Eralde
    I have nginx on PC "A" and IIS with ASP.NET on PC "B". nginx is configured like this:

        ...
        location ~ ((Web|Script)Resource.*)$ {
            proxy_pass "B"/$1;
            proxy_redirect off;
            proxy_set_header REMOTE_ADDR $remote_addr;
            proxy_set_header REQUEST_URI $request_uri;
            proxy_set_header HTTP_REFERER $http_referer;
            #proxy_set_header REQUEST_URI $request_uri;
            proxy_set_header QUERY_STRING $query_string;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        ...

    but requests to "B"/WebScript?a=b&c=d are not able to deliver the GET data (a=b&c=d) to the IIS side. Could anyone help with this?

    Edit: some additional info: nginx is also configured to proxy other data to Apache, running on "A"; everything is fine there (at least GET is OK), and the configuration is the same as above, just for a different location.
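
    When the URI passed to proxy_pass is built from a captured variable, nginx forwards that URI exactly as written and does not append the original query string, so a=b&c=d never reaches IIS. Appending it explicitly is one fix -- a sketch of the same location ("B" still stands for the backend address):

        location ~ ((Web|Script)Resource.*)$ {
            proxy_pass "B"/$1$is_args$args;
            # ... the proxy_set_header lines stay as they are ...
        }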


  • visually documenting web server configuration and infrastructure

    - by Alex Ciarlillo
    I have just finished a large reorganization and update of our institution's web server(s). This server hosts 3 virtual hosts, 3-4 blogs, 2 wikis, some legacy static HTML pages, and many hosted documents (PDF, .jpg, .xls). I have organized the site into a structure something like:

        /var/www/sites/vhost1, vhost2, vhost3
            .../wordpress/blogX
            .../mediawiki/wikiX

    Data is in a separate directory structure so I can run a cron task over it to make sure it is all writeable and such. I then symlink to these data directories for each application:

        /var/www/data/vhost1, vhost2, vhost3
            .../wordpress/blogX/uploads
            .../mediawiki/wikiX/images

    All Apache configs are in /etc/httpd/conf.d/vhosts.d/vhost1,2,3.conf. On top of this there is also a testing server which mirrors this setup; once changes are fully tested, they are rsynced down to the live server. All the WordPress and MediaWiki installs come straight from SVN, and updates are done by switching branches or running "svn up". So my question is: how can I best document this to share with (a) co-workers, (b) a possible future replacement, and (c) myself six months from now? Obviously I can make a wiki page, an Excel document, or whatever, and fill it with text, but I am looking for a more visual representation that I can use to explain the architecture to less technical people. Ideally it would be awesome if this visual representation could then be expanded to convey the more technical details.
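
    One low-effort option for the visual part is to keep a small Graphviz file next to the configs and regenerate the diagram whenever the layout changes (render with `dot -Tpng layout.dot -o layout.png`). A sketch mirroring the structure above; the node and edge labels are only illustrative and can grow extra detail (ports, SVN origins, cron jobs) for the technical audience:

        digraph webserver {
            rankdir=LR;
            "testing server"       -> "live server"                         [label="rsync after testing"];
            "live server"          -> "/etc/httpd/conf.d/vhosts.d/*.conf"   [label="Apache vhosts"];
            "live server"          -> "/var/www/sites (vhosts, blogs, wikis)" [label="code, SVN checkouts"];
            "/var/www/sites (vhosts, blogs, wikis)" -> "/var/www/data"      [label="symlinked data dirs"];
        }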


  • SSL Mail server connection times out on send()

    - by Jivan
    When trying to send an email programmatically from one of my websites, using the PHP PEAR Mail package with an SSL connection, PEAR::Mail replies with the following:

        Failed to connect to example.blabla.net:PORT [SMTP: Failed to connect socket: connection timed out (code: -1, response: )]

    I looked for similar questions on SO and SF, and all the answers ask the OP to test the connection with telnet or SSH on the command line. So that is what I did, and here is what happens:

        $ ssh -l myusername -p PORT example.blablabla.net
        _

    Here, '_' on the second line means that NOTHING happens -- indefinitely -- which seems consistent with the timeout message I got from PEAR::Mail. So PEAR::Mail seems not to be the cause. But what I have to add is that yesterday it just worked: the connection was established properly, mails were sent properly, etc. Only today it no longer works, and I have absolutely no idea why. I restarted Apache (in case an extension was broken), restarted the mail services, etc. Still no effect. Between yesterday (when it worked) and today (when it doesn't), I didn't touch the server and did nothing on it, simply because I took a day off to write a blog post! Has anyone encountered a similar problem? The problem seems quite common, judging from some googling, but the solution doesn't. Thanks for any help! (Note on config: CentOS 6.4 x86_64 with cPanel/WHM.)
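
    A connection that times out at the socket level, from SSH and PEAR::Mail alike, points at the network or a firewall rather than at PHP; cPanel/CSF setups and some providers block outbound SMTP ports, and such rules can appear "overnight". Quick checks from the web server (host and ports are the placeholders used above):

        openssl s_client -connect example.blabla.net:465                 # implicit-SSL SMTP
        openssl s_client -starttls smtp -connect example.blabla.net:587
        # if these hang the same way, compare the firewall rules / ask the provider
        # whether outbound SMTP was recently filtered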


  • Ubuntu server 10.04 disconnects after short periods of inactivity on my site

    - by user57019
    I'm new to Ubuntu (installed it for the first time just a couple of days ago on my server). I have Ubuntu Server 10.04 and am just using the terminal, no GUI like GNOME. So far it's working pretty well except for one big thing: whenever I go to sleep and there's no activity on my server (it's not a big site, so active users drop to 0 during the night), the server kind of disconnects. The only thing that brings the site back online is restarting the whole server. I've tried disabling power saving using setterm, but that changes nothing. Even if I wake the server by pressing a key, the site won't come back online. I've tried restarting just Apache and MySQL (I'm using a LAMP stack, by the way), but not even that works. But as soon as I turn the power off and on at the server, everything works normally for a few minutes of inactivity (5-15 minutes, I'd guess) and then it's down again unless someone logs in to the site and stays active. I was previously using XAMPP on my laptop with Windows XP and that worked 24/7, so I don't think it's anything to do with my router or ISP. This is driving me crazy! My site is down the whole time I'm at school, since I have no way to restart the server if it goes offline. Does anyone have a clue as to what could be wrong?
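
    A few things worth capturing the next time it drops, since "power off/on fixes it" smells more like the network interface or a DHCP lease than like Apache/MySQL (the interface name is an assumption):

        dmesg | tail -n 50               # NIC resets or link down/up messages?
        ip addr show eth0                # did the address / DHCP lease disappear overnight?
        ethtool eth0 | grep -i 'link'    # is the link still detected?
        # pinning a static address in /etc/network/interfaces rules out a failed
        # DHCP renewal during the idle hours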


  • cannot get mssql working with sql server 2005

    - by Ryan
    I'm a MySQL/Apache user trying my hand at IIS and SQL Server, so please have patience if this is a stupid question. I'm using IIS 7.5, PHP 5.3.13 and SQL Server 2005. IIS is running on port 90; not sure if that makes a difference or not. I know my SQL Server is running because I can explore it and connect to it in Management Studio. I know PHP is configured properly because //localhost:90/phpinfo.php works fine. I enabled the extension in php.ini with:

        extension=ext/php_msql.dll

    EDIT: However, when I run phpinfo(), under the "Configure Command" row this is present: --without-mssql. I found and downloaded ntwdblib.dll and placed it in both System32 and the PHP root. All these things were supposed to fix the issue, and they haven't. This is the code I'm using, straight from php.net:

        <?php
        // Server in this format: <computer>\<instance name> or
        // <server>,<port> when using a non-default port number
        $server = 'localhost';

        // Connect to MSSQL
        $link = mssql_connect($server, 'uname', 'pwd');

        if (!$link) {
            die('Something went wrong while connecting to MSSQL');
        }
        ?>

    Obviously I'm using a real username and password, but when I load the file in my browser I receive a 500 error. Upon checking the log, this is what is displayed:

        2012-06-25 12:41:29 ::1 GET /test.php - 90 - ::1 Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/536.5+(KHTML,+like+Gecko)+Chrome/19.0.1084.56+Safari/536.5 500 0 0 5

    That (to me) doesn't help much. What am I doing wrong? Thank you.
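
    One detail that is easy to miss: php_msql.dll is the old mSQL driver, while mssql_connect() comes from php_mssql.dll (note the double "s"), which on Windows also needs ntwdblib.dll; the --without-mssql shown in the Configure Command row describes how the Windows core was built and does not stop a DLL extension from loading. The relevant php.ini lines would look like this (the extension_dir path is an example); Microsoft's SQLSRV driver is the usual fallback on PHP 5.3 if php_mssql keeps failing:

        ; php.ini
        extension_dir = "C:\php\ext"
        extension = php_mssql.dll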


  • Nginx Removes the index.php from URL

    - by codeHead
    I have a CodeIgniter PHP application on nginx. It works as expected on Apache, but after moving to nginx I noticed that index.php is automatically removed from the URL in all my links. In fact, when I try using index.php it does not go to the desired URL but gets redirected to my default controller. Below is a copy of my nginx.conf:

        server {
            listen 80;
            server_name mydomainname.com;
            root /var/www/domain/current;
            # index index.php;
            error_log /var/log/nginx/error.log;
            access_log /var/log/nginx/access.log main;

            location / {
                # Check if a file or directory index file exists, else route it to index.php.
                try_files $uri $uri/ /index.php;
            }

            location ~* \.php {
                fastcgi_pass backend;
                include fastcgi.conf;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 4 256k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_read_timeout 500;
                #fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                add_header Expires "Thu, 01 Jan 1970 00:00:01 GMT";
                add_header Cache-Control "no-cache, no-store, private, proxy-revalidate, must-revalidate, post-check=0, pre-check=0";
                add_header Pragma no-cache;
                add_header X-Served-By $hostname;
            }

            location ~* ^.+\.(css|js)$ {
                expires 7d;
                add_header Pragma public;
                add_header Cache-Control "public";
            }

            # set expiration of assets to MAX for caching
            location ~* \.(ico|gif|jpe?g|png)(\?[0-9]+)?$ {
                expires max;
                log_not_found on;
            }
        }

    I need my URLs to work with index.php -- please help.
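
    CodeIgniter decides which controller to run from the path segments after index.php, and with a bare `location ~* \.php` block those segments never reach PHP as PATH_INFO, so every /index.php/... request falls back to the default controller. A sketch of a PHP location that preserves them (to be adapted to the existing fastcgi settings), together with `$config['uri_protocol'] = 'PATH_INFO';` in CodeIgniter's config.php:

        location ~* \.php(/|$) {
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            include fastcgi.conf;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            fastcgi_pass backend;
        }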

