Search Results

Search found 15586 results on 624 pages for 'request tracker'.

Page 29/624

  • IIS 7 Request routing

    - by Abraham Durairaj
    Not sure the title is right. I have my site configured in IIS7, and I have another partner site which runs on a different port, e.g. http://localhost:1234/mysite. Can I have my parent site expose a virtual site http://localhost/mysite that routes requests to the partner site http://localhost:1234/mysite? I should not redirect; I should basically proxy the requests. Any help here is appreciated. Thanks in advance.
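
    A minimal sketch of how this is often done with IIS7's URL Rewrite module, assuming the Application Request Routing (ARR) extension is installed with proxying enabled; the /mysite path is taken from the question, everything else is illustrative and untested:

        <!-- web.config of the parent site: rewrite (not redirect) /mysite/* to the partner port -->
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="ProxyToPartner" stopProcessing="true">
                  <match url="^mysite/(.*)" />
                  <action type="Rewrite" url="http://localhost:1234/mysite/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>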

    Read the article

  • Apache: How to redirect OPTIONS request with .htaccess?

    - by Milan Babuškov
    I have an Apache 2.2.4 server with a lot of messages like this in the access_log: ::1 - - [15/May/2010:19:55:01 +0200] "OPTIONS * HTTP/1.0" 400 543 ::1 - - [15/May/2010:20:22:17 +0200] "OPTIONS * HTTP/1.0" 400 543 ::1 - - [15/May/2010:20:24:58 +0200] "OPTIONS * HTTP/1.0" 400 543 ::1 - - [15/May/2010:20:25:55 +0200] "OPTIONS * HTTP/1.0" 400 543 ::1 - - [15/May/2010:20:27:14 +0200] "OPTIONS * HTTP/1.0" 400 543 These are the "internal dummy connections" as explained on this page: http://wiki.apache.org/httpd/InternalDummyConnection The page also describes my main problem: "In 2.2.6 and earlier, in certain configurations, these requests may hit a heavy-weight dynamic web page and cause unnecessary load on the server. You can avoid this by using mod_rewrite to respond with a redirect when accessed with that specific User-Agent or IP address." Well, obviously I cannot use UserAgent because I minimized the server signature, but I could use IP address. However, I don't have a clue what the RewriteCond and RewriteRule should look like for the IPv6 address ::1. The website where this runs is using CodeIgniter, so there is already the following .htaccess in place; I just need to add to it: RewriteEngine on RewriteCond %{REQUEST_URI} ^/system.* RewriteRule ^(.*)$ /index.php?/$1 [G] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ /index.php?/$1 [L] Any idea how to write this .htaccess rule?
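
    For reference, one possible shape for such a rule, untested and assuming the dummy connections really do arrive from the IPv6 loopback ::1, placed above the existing CodeIgniter rules:

        # Answer Apache's internal dummy connections with a bare 403
        # instead of letting them reach the front controller.
        RewriteCond %{REMOTE_ADDR} ^::1$
        RewriteCond %{REQUEST_METHOD} ^OPTIONS$
        RewriteRule .* - [F,L]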

    Read the article

  • Giving PHP the permission to make a git pull request

    - by Bernd
    Hello, I would like to allow PHP to execute a Git pull command, but there are some problems with the user and permissions. How did you solve this problem? PHP runs as user www-data, so I've changed the .git directory's owner/group to www-data (chown www-data:www-data -R .git). As it turned out later, www-data has no SSH keys. Is it a good idea to give it one? If yes, where should I place it? Or is it possible to allow it to use a specific key? Best regards, Bernd
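
    A common approach, sketched here with assumed paths (www-data's home is /var/www on Debian/Ubuntu; the Git host and repository path are illustrative), is to give www-data its own read-only deploy key:

        # Assumes www-data's home is /var/www and it may write there; adjust to your system.
        sudo -u www-data mkdir -p -m 700 /var/www/.ssh
        # No passphrase, so PHP can use the key non-interactively.
        sudo -u www-data ssh-keygen -t rsa -N "" -f /var/www/.ssh/deploy_key
        # Register deploy_key.pub as a read-only deploy key on the Git host, then tell SSH to use it
        # via /var/www/.ssh/config:
        #   Host git.example.com
        #       IdentityFile /var/www/.ssh/deploy_key
        #       IdentitiesOnly yes
        # PHP can then run e.g.: shell_exec('cd /var/www/mysite && git pull 2>&1');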

    Read the article

  • .htaccess: Transparently adding a name to the request

    - by Jon
    I've read this tutorial about how to modify your .htaccess in order to serve many web2py applications, but it doesn't seem to work. Here is my .htaccess: RewriteEngine On RewriteRule ^dispatch\.fcgi/ - [L] RewriteRule ^(.*)$ dispatch.fcgi/$1 [L] RewriteCond %{HTTP_HOST} =www.moublemouble.com [NC, OR] RewriteCond %{HTTP_HOST} =moublemouble.com [NC] RewriteRule ^/(.*) /moublemouble/$1 [PT,L] All I get is a 500 Internal Error and .htaccess is not my strong point. Any clues?
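
    For what it's worth, two details in that file commonly cause a 500 in per-directory context: mod_rewrite flag lists may not contain spaces ([NC, OR] should be [NC,OR]), and per-directory patterns are matched without the leading slash. A corrected sketch of the host rules, assuming the rest of the setup stays as posted:

        RewriteCond %{HTTP_HOST} =www.moublemouble.com [NC,OR]
        RewriteCond %{HTTP_HOST} =moublemouble.com [NC]
        RewriteRule ^(.*)$ /moublemouble/$1 [PT,L]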

    Read the article

  • Nginx ignoring client's HTTP 1.0 request and responding with HTTP 1.1

    - by Yoga
    I am testing with nginx/php5-fpm, using the code <?php header($_SERVER["SERVER_PROTOCOL"]." 404 Not Found"); // also tested: header("Status: 404 Not Found"); echo $_SERVER["SERVER_PROTOCOL"]; and force HTTP 1.0 with the curl command: curl -0 -v 'http://www.example.com/test.php' > GET /test.php HTTP/1.0 < HTTP/1.1 404 Not Found < Server: nginx < Date: Sat, 27 Oct 2012 08:51:27 GMT < Content-Type: text/html < Connection: close < * Closing connection #0 HTTP/1.0 As you can see, I am already requesting with HTTP 1.0, but nginx replies with HTTP/1.1.

    Read the article

  • WMI ASP.NET Requests per Second not right

    - by Louis Haußknecht
    I'm querying a webserver for RequestsTotal and RequestsPerSec: Select * from Win32_PerfRawData_ASPNET_ASPNETApplications The server is a Windows Server 2008 R2 with IIS 7.5 and I'm querying it from my Windows 7 workstation using a C# program. My problem is that both RequestsPerSec and RequestsTotal show the same value. Running perfmon on that server and selecting the counters there shows the correct values.
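
    One likely explanation: the Win32_PerfRawData_* classes expose raw counter values, so a "per second" counter read once looks like a running total; the formatted counterpart lets WMI compute the rate for you. A hedged C# sketch (the class name Win32_PerfFormattedData_ASPNET_ASPNETApplications and the host name are assumptions):

        using System;
        using System.Management;   // reference System.Management.dll

        class AspNetCounters
        {
            static void Main()
            {
                // Hypothetical remote host; use "." for the local machine.
                var scope = new ManagementScope(@"\\MYWEBSERVER\root\cimv2");
                var query = new ObjectQuery(
                    "SELECT Name, RequestsTotal, RequestsPerSec " +
                    "FROM Win32_PerfFormattedData_ASPNET_ASPNETApplications");
                using (var searcher = new ManagementObjectSearcher(scope, query))
                {
                    foreach (ManagementObject app in searcher.Get())
                    {
                        Console.WriteLine("{0}: total={1} per-sec={2}",
                            app["Name"], app["RequestsTotal"], app["RequestsPerSec"]);
                    }
                }
            }
        }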

    Read the article

  • First request too slow even though I have a load balancer behind it

    - by adrian7
    I have Apache 2 on CentOS + bind with a WordPress website on it (e.g. example.com). I have also set up, on another server in a different country, a load balancer (varnish:80 + nginx 127.0.0.1:8080) for it, whose task is to serve all static content under /wp-content/. Using Simple DNS editor I added an A entry for cdn.example.com pointing to that server's IP, so no extra work from a second DNS server. Then, using htaccess, I redirect all requests for jpg|gif|css|js files to cdn.example.com. That works, and all files are saved on the "cdn" server and served right away. My problem is that the first time I visit example.com (e.g. after restarting the computer or closing the browser) the load time is 1 to 3 seconds, while any subsequent page loads take only 300 to 600 milliseconds. I know it might be a DNS issue, but I have done a cache check on several websites and cdn.example.com resolves to the right IP. Do you have any ideas where I should dig to solve this first-time slowness?

    Read the article

  • Apache mod_proxy: how to forward requests to a local network IP (server)

    - by Beck
    Can't figure out how to configure mod_proxy for this. I have two domains; one is working fine at the moment. The second is bound to the same IP. I need to forward requests from the second domain to another server in the local network, like this: domain1.com => 192.168.1.101 domain2.com => 192.168.1.102 What configuration or directives should I use? Thanks ;) Update <VirtualHost *:80> ServerName www.domain2.com ProxyRequests Off ProxyPreserveHost On <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass / http://192.168.1.103:8080/ ProxyPassReverse / http://192.168.1.103:8080/ </VirtualHost> It just doesn't proxy to the second server. And when I restart Apache, it warns about port 80 overlapping: [warn] _default_ VirtualHost overlap on port 80, the first has precedence
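
    That overlap warning usually means name-based virtual hosting isn't enabled, or the first domain has no vhost of its own, so one vhost swallows everything. A sketch for Apache 2.2, reusing the backend address from the question's update and an assumed DocumentRoot for the locally served domain:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName domain1.com
            ServerAlias www.domain1.com
            DocumentRoot /var/www/domain1      # assumed path for the locally served site
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.domain2.com
            ServerAlias domain2.com
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass / http://192.168.1.103:8080/
            ProxyPassReverse / http://192.168.1.103:8080/
        </VirtualHost>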

    Read the article

  • Nginx proxy SOAP request

    - by user2606078
    Looking for the right way to accomplish the following: there is an app that has URL(1) hardcoded, with no way/time to change it in the source: http://dev.server.com/example.com/admin/soap/action/index?pr=1 and it should use (and get the response from) URL(2): http://example.com/admin/soap/action/index?pr=1 What should I configure in the Nginx conf (Apache is used as a backup) on dev.server.com so that when the app asks for URL(1) it gets the answer from URL(2)? On dev.server.com Apache has the virtual host dev.server.com enabled. Also, I've tried to proxy in Apache instead of nginx by using ProxyPass: <Directory /var/www/dev> Options Indexes FollowSymLinks MultiViews AllowOverride all Order allow,deny allow from all </Directory> <Location /example.com/admin/soap> ProxyPass http://example.com/admin/soap </Location>
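
    A sketch of the nginx side for the dev.server.com vhost; the trailing slash on proxy_pass makes nginx replace the matched /example.com/ prefix, so a request for URL(1) is answered by URL(2):

        server {
            listen 80;
            server_name dev.server.com;

            location /example.com/ {
                proxy_set_header Host example.com;   # so the target vhost answers
                proxy_pass http://example.com/;      # the /example.com/ prefix is stripped
            }
        }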

    Read the article

  • Redirecting a single request to another page, ignoring the www subdomain

    - by Petter Brodin
    I have a site running on IIS 7.5 that does an automatic redirect from 'http://mysite.com/whatever.aspx' to 'http://www.mysite.com/whatever.aspx' On the site, there is a lot of traffic to an old URL that I want to redirect to the front page, index.aspx: 'http://mysite.com/foo/bar/index.cgi%something=asdf&somethingelse=qwerty' The problem is that no matter what I try, I can only get the redirect to work with the www subdomain. If I use the URL without www, I just end up at 'http://www.mysite.com/404.aspx' Any ideas? Thanks in advance for all help! Edit3: it seems like the browser caching the redirect response was messing with me, so edit2 is wrong. See my response below. Edit2: disregard edit1, it doesn't seem like it's working after all. Edit: here's some further info: using this article I've managed to redirect from 'http://mysite.com/foo/bar/index.cgi' to 'http://www.mysite.com/index.aspx', but if I add the query string parameters, it still redirects to 'http://www.mysite.com/404.aspx' Isn't there a way to catch all requests to the cgi file, including query string parameters?
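
    In case it helps others hitting the same thing: with the IIS URL Rewrite module, the pattern in <match url=""> never sees the query string, so a rule that matches only the path will catch the old URL with any parameters; it just has to come before the www-canonicalisation rule. A sketch, with appendQueryString turned off since the target is just the front page:

        <rule name="OldCgiToFrontPage" stopProcessing="true">
          <match url="^foo/bar/index\.cgi$" />
          <action type="Redirect" url="http://www.mysite.com/index.aspx"
                  appendQueryString="false" redirectType="Permanent" />
        </rule>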

    Read the article

  • Request Multiple Maya Floating Server Licenses for extra Satellite clients

    - by Rob
    Hello all: I am currently setting up a 'render farm' for Maya 2008 Unlimited. One Maya workstation license comes with the ability to render on eight satellite nodes. It works perfectly; the remote rendering works like a charm. However, we have additional boxes to set up as satellite rendering nodes, and we have extra Maya workstation licenses. Ideally, the workstation could take two licenses and thus render on 16 nodes, but I haven't been able to figure it out, or determine if it is actually possible. It's a big project, where rendering the entire thing takes on the order of weeks, so the speed-up would be worth it. Any thoughts?

    Read the article

  • Apache Request IP Based Security

    - by connec
    I run an Apache server on my home system that I've made available over the internet, as I'm not always at my home system. Naturally I don't want all my home server files public, so until now I've simply had: Order allow,deny Deny from all Allow from 127.0.0.1 in my core configuration and just Allow from all in the htaccess of any directories I wanted publicly viewable. However, I've decided a better system would be to centralise all the access control and just require authentication (HTTP basic) for requests not coming from 127.0.0.1/localhost. Is this achievable with Apache/modules? If so, how would I go about it? Cheers.
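
    Apache 2.2 can do exactly this with Satisfy Any: local requests get in on the IP rule alone, and everyone else has to pass HTTP basic auth. A sketch, with an assumed htpasswd path:

        <Directory /var/www>
            AuthType Basic
            AuthName "Private"
            AuthUserFile /etc/apache2/htpasswd    # assumed location
            Require valid-user

            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
            Satisfy Any
        </Directory>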

    Read the article

  • Fonts not found when print request comes from a child process of a Service

    - by beeglebug
    I have a strange issue on a Windows Server 2003 box which has been baffling me for days now. I have a service running on the machine which calls a specified exe every 60 seconds; the exe looks at a local database to see if it needs to print anything, and if so, it prints it to a network laser printer. The problem I'm having is that some fonts won't print out when the exe is called automatically by the service, but they work fine if I double-click the exe to run it. The font was installed by Administrator, but the service runs as NT Authority\System. I thought this might have something to do with it, but I tried running the service as Administrator, and that didn't solve it. Are there any issues with fonts and permissions that I'm not aware of that could be causing this behaviour?

    Read the article

  • mod_proxy Forwarding Based on Request Host Header

    - by zigzagip
    Let's say I have 3 URLs and they all point to the same reverse proxy. I would like the requests to be forwarded to the web servers behind the proxy based on the host header: webfront1.example.com > reverseproxy.example.com > backend1.example.com webfront2.example.com > reverseproxy.example.com > backend2.example.com webfront3.example.com > reverseproxy.example.com > backend3.example.com Based on what I have read, I can configure reverseproxy.example.com/webfront1 > backend1.example.com, reverseproxy.example.com/webfront2 > backend2.example.com, etc. I am wondering if proxying based on the host header is even possible, or if I've used the wrong approach entirely.
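
    Proxying on the host header is possible, and it is really just name-based virtual hosting on the proxy: one vhost per public name, each with its own ProxyPass. A sketch for Apache 2.2 on reverseproxy.example.com:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName webfront1.example.com
            ProxyPass / http://backend1.example.com/
            ProxyPassReverse / http://backend1.example.com/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName webfront2.example.com
            ProxyPass / http://backend2.example.com/
            ProxyPassReverse / http://backend2.example.com/
        </VirtualHost>
        # ...and likewise for webfront3.example.com -> backend3.example.com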

    Read the article

  • Is there any valid reason for users to request phpinfo()

    - by The Journeyman geek
    I'm working on writing a set of rules for fail2ban to make life a little more interesting for whoever is trying to brute-force his way into my system. A good majority of the attempts tend to revolve around trying to get at phpinfo() via my webserver, as below: GET //pma/config/config.inc.php?p=phpinfo(); HTTP/1.1 GET //admin/config/config.inc.php?p=phpinfo(); HTTP/1.1 GET //dbadmin/config/config.inc.php?p=phpinfo(); HTTP/1.1 GET //mysql/config/config.inc.php?p=phpinfo(); HTTP/1.1 I'm wondering if there's any valid reason for a user to attempt to access phpinfo() via Apache, since if not, I can simply use that, or more specifically the regex GET //[^>]+=phpinfo\(\), as a filter to eliminate these attacks.
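
    Ordinary visitors have no reason to request those config.inc.php?p=phpinfo() URLs (phpinfo() is a diagnostic you would call from your own code, not through phpMyAdmin config paths), so filtering on that pattern is reasonable. A sketch of a fail2ban filter built around that regex, with an assumed file name and the standard Apache access-log format (verify with fail2ban-regex against your own log):

        # /etc/fail2ban/filter.d/apache-phpinfo.conf  (assumed name)
        [Definition]
        failregex = ^<HOST> .* "GET //[^"]*=phpinfo\(\)
        ignoreregex =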

    Read the article

  • Load balanced proxies to avoid an API request limit

    - by ClickClickClick
    There is a certain API out there which limits the number of requests per day per IP. My plan is to create a bunch of EC2 instances with elastic IPs to sidestep the limitation. I'm familiar with EC2 and am just interested in the configuration of the proxies and a software load balancer. I think I want to run a simple TCP Proxy on each instance and a software load balancer on the machine I will be requesting from. Something that allows the following to return a response from a different IP (round robin, availability, doesn't really matter..) eg. curl http://www.bbc.co.uk -x http://myproxyloadbalancer:port Could anyone recommend a combination of software or even a link to an article that details a pleasing way to pull it off? (My client won't be curl but is proxy aware.. I'll be making the requests from a Ruby script..)
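
    One way to sketch the local balancer end of this is HAProxy in TCP mode (hostnames and ports below are placeholders; each EC2 instance is assumed to run a plain HTTP proxy such as Squid or tinyproxy on the given port):

        # haproxy.cfg fragment: round-robin outgoing requests across the EC2 proxies
        listen proxypool
            bind 127.0.0.1:3128
            mode tcp
            balance roundrobin
            server ec2proxy1 ec2-host-1.compute.amazonaws.com:3128 check
            server ec2proxy2 ec2-host-2.compute.amazonaws.com:3128 check

    The proxy-aware client then points at the local balancer, e.g. curl http://www.bbc.co.uk -x http://127.0.0.1:3128, and each request leaves through whichever elastic IP the round-robin picks.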

    Read the article

  • Nginx and low-speed connections: request terminates after 253 seconds

    - by meze
    I'm trying to make nginx handle static files. All is working fine, except that when I throttle my connection speed to 8 kbit/s, the loading process of a file just stops after 253-255 seconds (4.2 min according to Chrome). There is no error in the log and the status code is 200, but the response is only partially received. If I disable nginx and make Apache send the same file, it loads successfully after 10 minutes. The config I use for debugging is: client_header_buffer_size 16k; large_client_header_buffers 4 8k; client_max_body_size 50m; client_body_buffer_size 16k; client_header_timeout 20m; client_body_timeout 20m; send_timeout 20m; Did I miss some configuration?

    Read the article

  • nginx block URI request but allow internal directory

    - by Mike Anders
    I'm new to nginx, coming from Apache. I'm trying to simply block the URIs: /_mydir/* = / (redirect) But I want to rewrite: /ex/(.*)$ = /_mydir/$1 I have tried: location /ex/ { rewrite ^/ex/(.*)$ /_mydir/$1 last; } location /_mydir { rewrite ^/_mydir/(.*)$ http://$http_host/ redirect; } But what always happens is that once I block the '/_mydir' directory, the rewrite is also blocked. I have also tried: location /_mydir/ { internal; } This also ends up blocking the rewrite. All help is greatly appreciated, thanks. UPDATE: I fixed this problem using: rewrite ^/ex/(.*)$ /_mydir/$1 break;

    Read the article
