Search Results

Search found 53597 results on 2144 pages for 'http requests'.

Page 246 of 2144

  • How to link specific ports to specific domains with Apache virtual hosts?

    - by theJoe
    We have a forward-facing linux box running Apache HTTP server that is acting as a reverse proxy for several back-end servers. The servers are accessed through specific domain names and ports and are set up as virtual hosts within Apache as such: Listen 8001 Listen 8002 <Virtualhost *:8001> ServerName service.one.mycompany.com ProxyPass / http://internal.one.mycompany.com:8001/ ProxyPassReverse / http://internal.one.mycompany.com:8001/ RewriteEngine On RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK) RewriteRule .* - [F] </Virtualhost> <Virtualhost *:8002> ServerName service.two.mycompany.com ProxyPass / http://internal.two.mycompany.com:8002/ ProxyPassReverse / http://internal.two.mycompany.com:8002/ RewriteEngine On RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK) RewriteRule .* - [F] </Virtualhost> The proxy server has only one IP address, and both domains are pointing to it. Accessing internal.one via service.one works fine, as does accessing internal.two via service.two. Now the problem is that Apache does not take the requesting domain into account when accessing the virtual hosts. What I mean is that both domains work for both ports: requests for service.one:8002 proxies to internal.two:8002, and requests for service.two:8001 proxies to internal.one:8001, where ideally both these requests should be denied. I can get around this by creating more virtual hosts that explicitly deny these requests: NameVirtualHost *:8001 NameVirtualHost *:8002 <Virtualhost *:8001> ServerName service.two.mycompany.com Redirect permanent / http://errorpage.mycompany.com/ </Virtualhost> <Virtualhost *:8002> ServerName service.one.mycompany.com Redirect permanent / http://errorpage.mycompany.com/ </Virtualhost> But this is not an ideal solution, since we plan to add more services to the proxy, and each new port would need to be explicitly denied on all the other domains, and each new domain would need to be explicitly denied on all ports it is not utilizing. As we add more services, the number of virtual hosts can get out of hand quickly. My question, then, is whether there is a better way? Can we explicitly tie specific ports to specific domains in a virtual host so that only that domain-port combination is processed, and all other combinations are not? Things I’ve tried: Adding NameVirtualHost *:8001, etc. without the additional virtual hosts. Setting ProxyRequests On and Off, as well as ProxyPreserveHost On and Off Adding the server name or IP address to the virtual host header, e.g. <VirtualHost service.one.mycompany.com:8001> Using the <proxy> directive inside the virtual host directive. Lots and lots of googling. The proxy server is running CentOS 6.2 64-bit, Apache HTTPD server 2.2.15. As mentioned, the proxy server has only one IP address, and all the domains we are using are pointing to it.
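
    For illustration only — a minimal, untested sketch of the pattern usually suggested here: make the first vhost defined on each port a deny-all default, so a Host header that doesn't match that port's named vhost is refused without listing every other domain (hostnames below reuse the question's placeholders):

        NameVirtualHost *:8001

        # The first vhost on a port is Apache's default for that port; it catches
        # any Host header that no other *:8001 vhost claims and refuses it.
        <VirtualHost *:8001>
            ServerName default.8001.mycompany.com
            <Location />
                Order allow,deny
                Deny from all
            </Location>
        </VirtualHost>

        <VirtualHost *:8001>
            ServerName service.one.mycompany.com
            ProxyPass / http://internal.one.mycompany.com:8001/
            ProxyPassReverse / http://internal.one.mycompany.com:8001/
        </VirtualHost>

    This still adds one deny-all block per port, but not one per domain-port combination.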

    Read the article

  • .htaccess ignored, SPECIFIC to EC2 - not the usual suspects

    - by tedneigerux
    I run 8-10 EC2 based web servers, so my experience is many hours, but is limited to CentOS; specifically Amazon's distribution. I'm installing Apache using yum, so therefore getting Amazon's default compilation of Apache. I want to implement canonical redirects from non-www (bare/root) domain to www.domain.com for SEO using mod_rewrite BUT MY .htaccess FILE IS CONSISTENTLY IGNORED. My troubleshooting steps (outlined below) lead me to believe it's something specific to Amazon's build of Apache. TEST CASE Launch a EC2 Instance, e.g. Amazon Linux AMI 2013.03.1 SSH to the Server Run the commands: $ sudo yum install httpd $ sudo apachectl start $ sudo vi /etc/httpd/conf/httpd.conf $ sudo apachectl restart $ sudo vi /var/www/html/.htaccess In httpd.conf I changed the following, in the DOCROOT section / scope: AllowOverride All In .htaccess, added: (EDIT, I added RewriteEngine On later) RewriteCond %{HTTP_HOST} ^domain\.com$ [NC] RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L] Permissions on .htaccess are correct, AFAI can tell: $ ls -al /var/www/html/.htaccess -rwxrwxr-x 1 git apache 142 Jun 18 22:58 /var/www/html/.htaccess Other info: $ httpd -v Server version: Apache/2.2.24 (Unix) Server built: May 20 2013 21:12:45 $ httpd -M Loaded Modules: core_module (static) ... rewrite_module (shared) ... version_module (shared) Syntax OK EXPECTED BEHAVIOR $ curl -I domain.com HTTP/1.1 301 Moved Permanently Date: Wed, 19 Jun 2013 12:36:22 GMT Server: Apache/2.2.24 (Amazon) Location: http://www.domain.com/ Connection: close Content-Type: text/html; charset=UTF-8 ACTUAL BEHAVIOR $ curl -I domain.com HTTP/1.1 200 OK Date: Wed, 19 Jun 2013 12:34:10 GMT Server: Apache/2.2.24 (Amazon) Connection: close Content-Type: text/html; charset=UTF-8 TROUBLESHOOTING STEPS In .htaccess, added: BLAH BLAH BLAH ERROR RewriteCond %{HTTP_HOST} ^domain\.com$ [NC] RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L] My server threw an error 500, so I knew the .htaccess file was processed. As expected, it created an Error log entry: [Wed Jun 19 02:24:19 2013] [alert] [client XXX.XXX.XXX.XXX] /var/www/html/.htaccess: Invalid command 'BLAH BLAH BLAH ERROR', perhaps misspelled or defined by a module not included in the server configuration Since I have root access on the server, I then tried moving my rewrite rule directly to the httpd.conf file. THIS WORKED. This tells us several important things are working. $ curl -I domain.com HTTP/1.1 301 Moved Permanently Date: Wed, 19 Jun 2013 12:36:22 GMT Server: Apache/2.2.24 (Amazon) Location: http://www.domain.com/ Connection: close Content-Type: text/html; charset=UTF-8 HOWEVER, it is bothering me that it didn't work in the .htaccess file. And I have other use cases where I need it to work in .htaccess (e.g. an EC2 instance with named virtual hosts). Thank you in advance for your help.
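
    In per-directory (.htaccess) context mod_rewrite strips the leading slash before matching, so a pattern of ^/(.*) never matches there; a hedged sketch of the .htaccess as it would usually be written (domain is a placeholder):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        # No leading slash in the pattern in .htaccess context.
        RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]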

    Read the article

  • Two DHCP Servers, Block Clients for one of them?

    - by Rilindo
    I am building out a kickstart network that resides on a different VLAN uses its own DHCP server. For some reason, my kickstart clients kept getting assign IPs from my primary DHCP server. The way I have it set up is that I have a primary DHCP server on this router here: 192.168.15.1 Connected to that DHCP server is a switch with the IP of 192.168.15.2. My kickstart (Scientific Linux) server is connected to that switch on two ports: Port 2 - where the kickstart server communicates to the rest of the production network via eth0. The IP assigned to the server on that interface is 192.168.15.100 (on eth0). The details are: Interface: eth0 IP: 192.168.15.100 Netmask: 255.255.255.0 Gateway: 192.168.15.1 Port 7 - has it's own VLAN ID (along with port 8). The kickstart server is connected to that port with the IP of 172.16.15.100 (on eth1). Again, the details are: Interface: eth1 IP: 172.16.15.100 Netmask: 255.255.255.0 Gateway: none The kickstart server runs its own DHCP server and assigns them over the eth1. Most of the kick starts are built over the kickstart VLAN through port 8. To prevent the kickstart DHCP server from assigning addresses over the production network, I have the route setup like so: route add -host 255.255.255.255 dev eth1 At this point, the clients kept getting assign IPs from the 192.168.15.1 DHCP server. I need to figure out a way to block client requests from reaching that DHCP. Its should be noted that but I also build KVM hosts on the kickstart server as well, so I need those KVMs to have the ability to get DHCP requests from the 192.168.15.1 DHCP server via the bridge network once I finish resolved this particular problem. (Currently, they communicate via NAT). So what would be done to resolve this? Through iptables or some sort of routing I need to put in? I tried to limited to requests via IPtables on that interface, allowing DHCP requests for 172.16.15.x network: -A INPUT -i eth1 -s 172.16.15.0/24 -p udp -m udp --dport 69 -j ACCEPT -A INPUT -i eth1 -s 172.16.15.0/24 -p tcp -m tcp --dport 69 -j ACCEPT -A INPUT -i eth1 -s 172.16.15.0/24 -p udp -m udp --dport 68 -j ACCEPT -A INPUT -i eth1 -s 172.16.15.0/24 -p tcp -m tcp --dport 68 -j ACCEPT -A INPUT -i eth1 -s 172.16.15.0/24 -p udp -m udp --dport 67 -j ACCEPT -A INPUT -i eth1 -s 172.16.15.0/24 -p tcp -m tcp --dport 67 -j ACCEPT And rejects assignments on eth1 from 192.168.15.x network: -A FORWARD -o eth1 -s 192.168.15.0/24 -p udp -m udp --dport 69 -j REJECT -A FORWARD -o eth1 -s 192.168.15.0/24 -p tcp -m tcp --dport 69 -j REJECT -A FORWARD -o eth1 -s 192.168.15.0/24 -p udp -m udp --dport 68 -j REJECT -A FORWARD -o eth1 -s 192.168.15.0/24 -p tcp -m tcp --dport 68 -j REJECT -A FORWARD -o eth1 -s 192.168.15.0/24 -p udp -m udp --dport 67 -j REJECT -A FORWARD -o eth1 -s 192.168.15.0/24 -p tcp -m tcp --dport 67 -j REJECT Nope. :(
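
    As a hedged sketch only (whether these packets actually cross this box at the IP layer depends on the VLAN layout): DHCP offers come from UDP source port 67 on the server, so matching the source port and source address of the production DHCP server is usually closer to the goal than matching destination ports alone:

        iptables -I INPUT   -s 192.168.15.1 -p udp --sport 67 --dport 68 -j DROP
        iptables -I FORWARD -s 192.168.15.1 -p udp --sport 67 --dport 68 -j DROP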

    Read the article

  • Exclude regular expression from virtual host

    - by Joao Trindade
    I have a virtual host in Apache which is redirecting requests to another web server. <VirtualHost *:80> DocumentRoot /var/www ServerName another.host ProxyPass / http://another.host2:8081/ ProxyPassReverse / http://another.host2:8081/ </VirtualHost> I need to exclude a URL pattern from being caught by this virtual host. Basically I don't want requests with the URL http://another.host:8081/~username to be forwarded to the other server. Can this be done?
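
    A hedged sketch of the exclusion syntax mod_proxy provides for this (the ^/~ pattern is an assumption based on the example URL): a ProxyPass/ProxyPassMatch target of "!" placed before the blanket ProxyPass tells Apache not to forward matching requests.

        <VirtualHost *:80>
            DocumentRoot /var/www
            ServerName another.host
            # Exclusions must come before the catch-all ProxyPass.
            ProxyPassMatch ^/~ !
            ProxyPass / http://another.host2:8081/
            ProxyPassReverse / http://another.host2:8081/
        </VirtualHost>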

    Read the article

  • How do you protect your <appid>.appspot.com domain from a DDoS attack?

    - by jacob
    If I want to use CloudFlare to help protect my GAE app via its custom domain, I am still vulnerable to attacks directly on the .appspot.com domain. How do I mitigate that? I could force-redirect appspot.com host requests, as discussed here: http://stackoverflow.com/questions/1364733/block-requests-from-appspot-com-and-force-custom-domain-in-google-app-engine/ But I would still suffer the load of processing the redirect in my app. Are there any other solutions?
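
    Beyond redirecting, a hedged sketch of the cheaper variant sometimes used (webapp2-style; the handler and domain names are made up): refuse appspot.com hosts in a base handler before any real page logic runs, so the per-request cost stays minimal:

        import webapp2

        CANONICAL_HOST = "www.example.com"  # placeholder for the custom domain

        class BaseHandler(webapp2.RequestHandler):
            def dispatch(self):
                if self.request.host.endswith(".appspot.com"):
                    self.response.set_status(404)  # refuse cheaply, no redirect or page work
                    return
                super(BaseHandler, self).dispatch()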

    Read the article

  • How to make Nginx fire 504 immediately if the server is not available?

    - by Georgiy Ivankin
    I have Nginx set up as a load balancer with cookie-based stickiness. The logic is: If the cookie is NOT there, use round-robin to choose a server from the cluster. If the cookie is there, go to the server that is associated with the cookie value. The server is then responsible for setting the cookie. What I want to add is this: If the cookie is there, but the server is down, fall back to the round-robin step to choose the next available server. So actually I have load balancing and want to add failover support on top of it. I have managed to do that with the help of the error_page directive, but it doesn't work as I expected it to. The problem: 504 (and the fallback associated with it) fires only after a 30s timeout even if the server is not physically available. So what I want Nginx to do is fire a 504 (or any other error, it doesn't matter) immediately (I suppose this means: when the TCP connection fails). This is the behavior we can see in browsers: if we go directly to the server when it is down, the browser immediately tells us that it can't connect. Moreover, Nginx seems to be doing this for the 502 error: if I intentionally misconfigure my servers, Nginx fires 502 immediately. Configuration (stripped down to basics): http { upstream my_cluster { server 192.168.73.210:1337; server 192.168.73.210:1338; } map $cookie_myCookie $http_sticky_backend { default 0; value1 192.168.73.210:1337; value2 192.168.73.210:1338; } server { listen 8080; location @fallback { proxy_pass http://my_cluster; } location / { error_page 504 = @fallback; # Create a map of choices # see https://gist.github.com/jrom/1760790 set $test HTTP; if ($http_sticky_backend) { set $test "${test}-STICKY"; } if ($test = HTTP-STICKY) { proxy_pass http://$http_sticky_backend$uri?$args; break; } if ($test = HTTP) { proxy_pass http://my_cluster; break; } return 500 "Misconfiguration"; } } } Disclaimer: I am pretty far from systems administration of any kind, so there may be some basics that I miss here. EDIT: I'm interested in a solution with the standard free version of Nginx, not Nginx Plus. Thanks.
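
    For reference, a hedged sketch of the tuning this usually comes down to (timeout values are arbitrary): the 504 only fires after proxy_connect_timeout expires, so shortening the connect phase and routing 502 as well as 504 to the fallback makes a dead backend fail over quickly:

        location / {
            error_page 502 504 = @fallback;
            proxy_connect_timeout 2s;   # fail fast when the backend doesn't accept the TCP connection
            proxy_read_timeout 30s;     # unchanged; only the connect phase needs to be short
            # ... existing sticky / round-robin logic ...
        }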

    Read the article

  • Why is my concurrency capacity so low for my web app on a LAMP EC2 instance?

    - by AMF
    I come from a web developer background and have been humming along building my PHP app, using the CakePHP framework. The problem arose when I began the ab (Apache Bench) testing on the Amazon EC2 instance in which the app resides. I'm getting pretty horrendous average page load times, even though I'm running a c1.medium instance (2 cores, 2GB RAM), and I think I'm doing everything right. I would run: ab -n 200 -c 20 http://localhost/heavy-but-view-cached-page.php Here are the results: Concurrency Level: 20 Time taken for tests: 48.197 seconds Complete requests: 200 Failed requests: 0 Write errors: 0 Total transferred: 392111200 bytes HTML transferred: 392047600 bytes Requests per second: 4.15 [#/sec] (mean) Time per request: 4819.723 [ms] (mean) Time per request: 240.986 [ms] (mean, across all concurrent requests) Transfer rate: 7944.88 [Kbytes/sec] received While the ab test is running, I run VMStat, which shows that Swap stays at 0, CPU is constantly at 80-100% (although I'm not sure I can trust this on a VM), RAM utilization ramps up to about 1.6G (leaving 400M free). Load goes up to about 8 and site slows to a crawl. Here's what I think I'm doing right on the code side: In Chrome browser uncached pages typically load in 800-1000ms, and cached pages load in 300-500ms. Not stunning, but not terrible either. Thanks to view caching, there might be at most one DB query per page-load to write session data. So we can rule out a DB bottleneck. I have APC on. I am using Memcached to serve the view cache and other site caches. xhprof code profiler shows that cached pages take up 10MB-40MB in memory and 100ms - 1000ms in wall time. Pages that would be the worst offenders would look something like this in xhprof: Total Incl. Wall Time (microsec): 330,143 microsecs Total Incl. CPU (microsecs): 320,019 microsecs Total Incl. MemUse (bytes): 36,786,192 bytes Total Incl. PeakMemUse (bytes): 46,667,008 bytes Number of Function Calls: 5,195 My Apache config: KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 3 <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 120 MaxRequestsPerChild 1000 </IfModule> Is there something wrong with the server? Some gotcha with the EC2? Or is it my code? Some obvious setting I should look into? Too many DNS lookups? What am I missing? I really want to get to 1,000 concurrency capacity, but at this rate, it ain't gonna happen.
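
    One hedged back-of-the-envelope check for the prefork settings above (the process name and awk field assume a stock httpd build): average resident memory per Apache child times MaxClients should fit comfortably in the instance's RAM, or concurrency collapses into swapping:

        ps -ylC httpd --sort=rss | awk '$8 ~ /^[0-9]+$/ {sum += $8; n++} END {print sum/n/1024 " MB average per child"}'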

    Read the article

  • Can I include the path and query string in an IIS "Error Pages" redirect?

    - by Dylan Beattie
    I'm setting up a custom 403.4 handler so that non-SSL requests to my site are redirected to a different URL - and what I'd like to do is to include the script path and query string in the redirect, so that a user who requests http://www.site.com/foo?bar=1 will be redirected to https://www.site.com/foo?bar=1 I know something similar is possible when configuring a top-level site redirect, using the $S, $Q, %v tokens referred to in this IIS reference page - but this syntax doesn't seem to work when configuring a custom error redirect.

    Read the article

  • Is it possible for a host to maintain a TCP connection with two hosts with the same IP?

    - by wenzi
    I have two hosts, A and B, and a host C (called the client here). C will establish a TCP connection with A and send HTTP requests to A. A will then relay the HTTP requests to B (the relaying may be delayed by several seconds), and B will spoof its IP address as the IP of A and send the HTTP response to C. I know there is a sequence number inconsistency problem, but is it possible to trick the TCP protocol to make the connection viable? Thanks!

    Read the article

  • How to determine which requests nginx sends to a proxy and which it serves itself?

    - by Zxaos
    I currently have nginx proxying for Thin, but set up to serve static files for the app that Thin is serving instead of proxying the request. What I'd like to know is how I can check that the rules are set up correctly. Since Thin doesn't log requests, I would need to set up nginx logs in such a way that it shows which requests were served as files and which were passed to Thin. Is this even possible? If so, how?
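
    A hedged sketch of one way to see this from the nginx side (the format name and log path are placeholders): $upstream_addr is logged as "-" for requests nginx served itself and holds the backend address for requests it proxied to Thin:

        log_format which_backend '$remote_addr "$request" $status upstream=$upstream_addr';
        access_log /var/log/nginx/access.log which_backend;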

    Read the article

  • Apache RewriteRule and slashes (%2F)

    - by Felix
    I have the following RewriteRule: RewriteRule ^like/(.+)$ ask.php/$1 Which works just fine for requests like: /like/someting+here/something+else But for requests where one of the path parts contains an escaped slash (%2F), the server spits out a 404 Not Found error: /like/one%2Ftwo+things/ Is there any way to fix this? I tried both [B] and [NE] flags (separate and together) but nothing worked.
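
    For reference, a hedged sketch: Apache refuses %2F in the path with a 404 unless AllowEncodedSlashes is enabled, and that directive is only valid in server or virtual-host context (not .htaccess); [B] then re-escapes the captured text in the rewritten URL:

        # server or <VirtualHost> context
        AllowEncodedSlashes On
        RewriteEngine On
        RewriteRule ^/?like/(.+)$ ask.php/$1 [B]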

    Read the article

  • Scrolling an HTML 5 page using JQuery

    - by nikolaosk
    In this post I will show you how to use JQuery to scroll through an HTML 5 page.I had to help a friend of mine to implement this functionality and I thought it would be a good idea to write a post.I will not use any JQuery scrollbar plugin,I will just use the very popular JQuery Library. Please download the library (minified version) from http://jquery.com/download.Please find here all my posts regarding JQuery.Also have a look at my posts regarding HTML 5.In order to be absolutely clear this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that.Navigate to the excellent interactive tutorials of W3School.Another excellent resource is HTML 5 Doctor.Two very nice sites that show you what features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this times Chrome seems to support most of HTML 5 specifications.Another excellent way to find out if the browser supports HTML 5 and CSS 3 features is to use the Javascript lightweight library Modernizr.In this hands-on example I will be using Expression Web 4.0.This application is not a free application. You can use any HTML editor you like.You can use Visual Studio 2012 Express edition. You can download it here. Let me move on to the actual example.This is the sample HTML 5 page<!DOCTYPE html><html lang="en">  <head>    <title>Liverpool Legends</title>        <meta http-equiv="Content-Type" content="text/html;charset=utf-8" >        <link rel="stylesheet" type="text/css" href="style.css">        <script type="text/javascript" src="jquery-1.8.2.min.js"> </script>     <script type="text/javascript" src="scroll.js">     </script>       </head>  <body>    <header>        <h1>Liverpool Legends</h1>    </header>        <div id="main">        <table>        <caption>Liverpool Players</caption>        <thead>            <tr>                <th>Name</th>                <th>Photo</th>                <th>Position</th>                <th>Age</th>                <th>Scroll</th>            </tr>        </thead>        <tfoot class="footnote">            <tr>                <td colspan="4">We will add more photos soon</td>            </tr>        </tfoot>    <tbody>        <tr class="maintop">        <td>Alan Hansen</td>            <td>            <figure>            <img src="images\Alan-hansen-large.jpg" alt="Alan Hansen">            <figcaption>The best Liverpool Defender <a href="http://en.wikipedia.org/wiki/Alan_Hansen">Alan Hansen</a></figcaption>            </figure>            </td>            <td>Defender</td>            <td>57</td>            <td class="top">Middle</td>        </tr>        <tr>        <td>Graeme Souness</td>            <td>            <figure>            <img src="images\graeme-souness-large.jpg" alt="Graeme Souness">            <figcaption>Souness was the captain of the successful Liverpool team of the early 1980s <a href="http://en.wikipedia.org/wiki/Graeme_Souness">Graeme Souness</a></figcaption>            </figure>            </td>            <td>MidFielder</td>            <td>59</td>        </tr>        <tr>        <td>Ian Rush</td>            <td>            <figure>            <img src="images\ian-rush-large.jpg" alt="Ian Rush">            <figcaption>The deadliest Liverpool Striker <a href="http://it.wikipedia.org/wiki/Ian_Rush">Ian Rush</a></figcaption>            </figure>            </td>            <td>Striker</td>            <td>51</td>        </tr>        <tr class="mainmiddle">        <td>John 
Barnes</td>            <td>            <figure>            <img src="images\john-barnes-large.jpg" alt="John Barnes">            <figcaption>The best Liverpool Defender <a href="http://en.wikipedia.org/wiki/John_Barnes_(footballer)">John Barnes</a></figcaption>            </figure>            </td>            <td>MidFielder</td>            <td>49</td>            <td class="middle">Bottom</td>        </tr>                <tr>        <td>Kenny Dalglish</td>            <td>            <figure>            <img src="images\kenny-dalglish-large.jpg" alt="Kenny Dalglish">            <figcaption>King Kenny <a href="http://en.wikipedia.org/wiki/Kenny_Dalglish">Kenny Dalglish</a></figcaption>            </figure>            </td>            <td>Midfielder</td>            <td>61</td>        </tr>        <tr>            <td>Michael Owen</td>            <td>            <figure>            <img src="images\michael-owen-large.jpg" alt="Michael Owen">            <figcaption>Michael was Liverpool's top goal scorer from 1997–2004 <a href="http://www.michaelowen.com/">Michael Owen</a></figcaption>            </figure>            </td>            <td>Striker</td>            <td>33</td>        </tr>        <tr>            <td>Robbie Fowler</td>            <td>            <figure>            <img src="images\robbie-fowler-large.jpg" alt="Robbie Fowler">            <figcaption>Fowler scored 183 goals in total for Liverpool <a href="http://en.wikipedia.org/wiki/Robbie_Fowler">Robbie Fowler</a></figcaption>            </figure>            </td>            <td>Striker</td>            <td>38</td>        </tr>        <tr class="mainbottom">            <td>Steven Gerrard</td>            <td>            <figure>            <img src="images\steven-gerrard-large.jpg" alt="Steven Gerrard">            <figcaption>Liverpool's captain <a href="http://en.wikipedia.org/wiki/Steven_Gerrard">Steven Gerrard</a></figcaption>            </figure>            </td>            <td>Midfielder</td>            <td>32</td>            <td class="bottom">Top</td>        </tr>    </tbody></table>          </div>            <footer>        <p>All Rights Reserved</p>      </footer>     </body>  </html>  The markup is very easy to follow and understand. You do not have to type all the code,simply copy and paste it.For those that you are not familiar with HTML 5, please take a closer look at the new tags/elements introduced with HTML 5.When I view the HTML 5 page with Firefox I see the following result. I have also an external stylesheet (style.css). body{background-color:#efefef;}h1{font-size:2.3em;}table { border-collapse: collapse;font-family: Futura, Arial, sans-serif; }caption { font-size: 1.2em; margin: 1em auto; }th, td {padding: .65em; }th, thead { background: #000; color: #fff; border: 1px solid #000; }tr:nth-child(odd) { background: #ccc; }tr:nth-child(even) { background: #404040; }td { border-right: 1px solid #777; }table { border: 1px solid #777;  }.top, .middle, .bottom {    cursor: pointer;    font-size: 22px;    font-weight: bold;    text-align: center;}.footnote{text-align:center;font-family:Tahoma;color:#EB7515;}a{color:#22577a;text-decoration:none;}     a:hover {color:#125949; text-decoration:none;}  footer{background-color:#505050;width:1150px;}These are just simple CSS Rules that style the various HTML 5 tags,classes. 
The jQuery code that makes it all possible resides inside the scroll.js file. Make sure you type everything correctly.

$(document).ready(function() {
    $('.top').click(function() {
        $('html, body').animate({
            scrollTop: $(".mainmiddle").offset().top
        }, 4000);
    });
    $('.middle').click(function() {
        $('html, body').animate({
            scrollTop: $(".mainbottom").offset().top
        }, 4000);
    });
    $('.bottom').click(function() {
        $('html, body').animate({
            scrollTop: $(".maintop").offset().top
        }, 4000);
    });
});

Let me explain what I am doing here. When I click on the word "Middle" ( $('.top').click(function(){ ), this relates to the top class that was clicked. Then we declare the elements that we want to participate in the scrolling, in this case html, body ( $('html, body').animate ). These elements will be part of the vertical scrolling. In the next line of code we simply move (navigate) to the element (the mainmiddle class that is attached to a tr element): scrollTop: $(".mainmiddle").offset().top. Make sure you type all the code correctly and try it for yourself. I have tested this solution with all 4-5 major browsers. Hope it helps!

    Read the article

  • SSAS: distribution of measures over percentage

    - by Alex
    Hi there, I am running an SSAS cube that stores facts of HTTP requests. There is a column "Time Taken" that stores the milliseconds a particular HTTP request took, like:

        RequestID   Time Taken
        ----------------------
        1           0
        2           10
        3           20
        4           20
        5           2000

    I want to provide a report through Excel that shows the distribution of those timings by percentage of requests, e.g. a statement like "90% of all requests took less than 20 milliseconds". Analysis:

        100%  <2000
        80%   <20
        60%   <20
        40%   <10
        20%   <=0

    I am pretty much lost as to what would be the right approach to design aggregations, calculations, etc. to offer this analysis through Excel. Any ideas? Thanks, Alex
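
    Purely as a hedged illustration outside the cube (table and column names are made up), the same buckets can be produced relationally with NTILE, which is sometimes easier to validate before designing the MDX calculations:

        SELECT bucket * 20 AS pct_of_requests, MAX(TimeTakenMs) AS max_ms
        FROM (
            SELECT TimeTakenMs, NTILE(5) OVER (ORDER BY TimeTakenMs) AS bucket
            FROM FactHttpRequest
        ) t
        GROUP BY bucket
        ORDER BY pct_of_requests;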

    Read the article

  • Identify valid server in XML-RPC request using PHP

    - by Ian
    I'm working on a client-server system, where the client makes XMLRPC requests to the server. The client part of the system is handed to a third-party, meaning that he could eventually modify the code or re-route the xmlrpc requests. Now, hoping the third-party won't modify the code, I need a way to make sure that the server the client script is contacting is actually MY server (cause, the person could somehow reroute the requests to his own server where he could make up some xml responses, not what I want). Is there a way to identify a server using PHP? Some sort of SSL connection? Hope you guys understand me. Cheers.
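
    A hedged sketch of the usual TLS-based answer in PHP (the URL and CA path are placeholders): pin the CA so the client only accepts responses from a host presenting a certificate you issued:

        $ch = curl_init('https://api.example.com/xmlrpc');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);             // hostname must match the certificate
        curl_setopt($ch, CURLOPT_CAINFO, '/etc/ssl/my-ca.pem');  // trust only this CA
        $response = curl_exec($ch);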

    Read the article

  • Intercept page request behind firewall return altered content with php and apache

    - by Matthew
    I'm providing free wifi service and need an ad to be added to all page requests. Currently I have a router forwarding all http requests to an apache server, which redirects all requests to an index.php page. The index.php page reads the request, fetches the content from the appropriate site, and edits the content to include the ad. The problem is that all images and css files etc. cannot be accessed, because when the browser tries to get the image <img src="site.com/image.jpg"> it's just redirected back to the index.php. I can change settings for the router (running dd-wrt) and the webserver (apache2 and php 5.2). Is there a solution that allows content to be edited before returning to the client, and allows css and images to be accessed?
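
    A hedged sketch of the content-type check such an index.php usually needs (it assumes the proxy box can reach the origin without being re-intercepted; the injected markup is a placeholder):

        <?php
        $url  = 'http://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
        $body = file_get_contents($url);
        $type = 'text/html';
        foreach ($http_response_header as $h) {
            if (stripos($h, 'Content-Type:') === 0) { $type = trim(substr($h, 13)); }
        }
        header('Content-Type: ' . $type);
        if (stripos($type, 'text/html') !== false) {
            // Only HTML gets the ad; images, CSS and scripts pass through untouched.
            $body = str_replace('</body>', '<div class="ad">AD MARKUP HERE</div></body>', $body);
        }
        echo $body;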

    Read the article

  • PHP Curl's Proxy feature causes no information to be returned

    - by Grant David Bachman
    I'm working on a PHP script right now which sends requests to our school's servers to get real-time information about class sizes for different courses. The script runs perfectly fine when I don't use a proxy, returning a string full of course numbers and available seats. However, I want to make this a service for the students, and I'm afraid that if I make too many requests my ip will get blocked. So I'm attempting to do this through a proxy, with no success. As soon as I add the CURLOPT_HTTPPROXYTUNNEL and CURLOPT_PROXY fields to my requests nothing gets returned. I'm not even sure how to troubleshoot it at this point since I'm not getting an error message of any kind. Does anyone know what's going on or how to at least troubleshoot it?
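
    As a hedged troubleshooting sketch (the proxy address is a placeholder): cURL will report why nothing came back if you ask it to:

        curl_setopt($ch, CURLOPT_PROXY, '203.0.113.10:3128');
        curl_setopt($ch, CURLOPT_HTTPPROXYTUNNEL, true);
        curl_setopt($ch, CURLOPT_VERBOSE, true);          // handshake details go to STDERR
        $out = curl_exec($ch);
        if ($out === false) {
            echo 'cURL error ' . curl_errno($ch) . ': ' . curl_error($ch);
        }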

    Read the article

  • IE8 AJAX GET setRequestHeaders not working unless params provided in URL

    - by bobthabuilda
    I'm trying to create an AJAX request in IE8. var xhr = new ActiveXObject( 'Msxml2.XMLHTTP' ); xhr.open( 'GET', '/ajax/' ); // Required header for Django to detect AJAX request xhr.setRequestHeader( 'X-Requested-With', 'XMLHttpRequest' ); xhr.onreadystatechange = function() { if ( this.readyState == 4 ) console.log(this.responseText); } xhr.send( null ); This works perfectly fine in Firefox, Chrome, Safari. In IE8 however, all of my AJAX test requests work EXCEPT for ones where I'm performing GETs without any query string params (such as the one above). POSTs work without question, and GET requests only work whenever I include query strings in the URL, like this: xhr.open( 'GET', '/ajax/?foo=bar' ) I'm also 110% positive that my server code is handling these requests appropriately, so, this stumps me completely. Does anyone have any clue as to what might be causing this?
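
    One commonly blamed culprit is IE's aggressive caching of parameterless GETs; a hedged sketch of the usual cache-busting workaround:

        var url = '/ajax/?_=' + new Date().getTime(); // throwaway parameter defeats IE's XHR cache
        xhr.open('GET', url);
        xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');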

    Read the article

  • How to setup .htaccess to rewrite to different folder

    - by Guy
    I'm moving my site to a new host but I need to have my current server continue to handle requests (not all files can be moved to the new server). So I added a parked domain to my old server (old.mydomain.com) and I want all requests to it to be rewritten to the files from the old site. My old site (mydomain.com) was hosted internally in a folder (/public_html/mydomain/) and I want all requests to old.mydomain.com to be rewritten to the same folder. So if mydomain.com/blog was internally at /public_html/mydomain/blog, I now want old.mydomain.com/blog also to reach /public_html/mydomain/blog. Here is the .htaccess that I'm trying to use: RewriteCond %{HTTP_HOST} ^old\.mydomain\.com/* RewriteRule ^(.*)$ mydomain/$1 [NC,L] But for some reason, as soon as I add the $1 in the rewrite rule I get an internal error. Any ideas?
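
    One common cause of a 500 with a catch-all rule like this is a rewrite loop (the rewritten /mydomain/... path matches the rule again); a hedged sketch with a loop guard:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^old\.mydomain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/mydomain/
        RewriteRule ^(.*)$ /mydomain/$1 [L]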

    Read the article

  • Rails: fighting long HTTP response times with Ajax. Is it a good idea? Please help with implementation

    - by baranov
    Hi, everybody! I've googled some tutorials, browsed some SO answers, and was unable to find a recipe for my problem. I'm writing a web site which is supposed to display almost realtime stock chart. Data is stored in constantly updating MySQL database, I wrote a find_by_sql query code which fetches all the data I need to get my chart drawn. Everything is ok, except performance - it takes from one second to one minute for different queries to fetch all the data from the database, this time includes necessary (My)SQL-server side calculations. This is simply unacceptable. I got the following idea: if the data is queried from the MySQL server one point a time instead of entire dataset, it takes only about 1-100ms to get an individual point. I imagine the data fetch process might be browser-driven. After the user presses the button in order to get a chart drawn, controller makes one request to the database and renders, say, a progress bar, say 1% ready. When the browser gets the response, it immediately makes an (ajax) request, and the server fetches the next piece of data and renders "2%". And so on, until all the data is ready and the server displays the requested chart. Could this be implemented in rails+js, is there a tutorial for solving a similar problem on the Web? I suppose if the thing is feasible at all, somebody should have already done this before. I have read several articles about ajax, I believe I do understand general principles, but never did nontrivial ajax programming myself. Thanks for your time!

    Read the article

  • Flex Client application - HTTPRequest problem in the initialize function.

    - by Elad
    Hello all. I have a serious problem in my flex client applications. I have an apache server with php web services. the flex client makes an httpservice requests. I noticed that the httpservice requests that runs from the creationComplete event of the application does not always get data from the server. but HTTPservice requests called from user actions always work. I also noticed that when I run the flex client application directly from the Flex Builder 3 without upload it to the server, the problem occours less frequently. in the Application: mx:Application creationComplete="Init()" verticalScrollPolicy="off" horizontalScrollPolicy="off" xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute" backgroundColor="#5d8eb1" private function Init():void { var http_request:HTTPService = new HTTPService(); http_request.url = "http://"+this.server_name+":"+this.server_port+"/services/client/client_result.php"; http_request.resultFormat = "e4x"; http_request.addEventListener("result",resultFunc); http_request.send(); http_request.disconnect(); }
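
    Worth noting as a hedged observation: HTTPService.disconnect() drops the service's channel and pending responders, so calling it right after send() inside Init() may cancel the in-flight request before the result arrives; a sketch of Init() without it:

        private function Init():void {
            var http_request:HTTPService = new HTTPService();
            http_request.url = "http://" + this.server_name + ":" + this.server_port + "/services/client/client_result.php";
            http_request.resultFormat = "e4x";
            http_request.addEventListener("result", resultFunc);
            http_request.send();   // no disconnect() here; let the result/fault handlers fire first
        }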

    Read the article

  • How to set timeout with python-mechanize?

    - by Michal Cihar
    I'm using python-mechanize to scrape some web sites, which sometimes simply don't respond to requests, and these requests stay open too long, so I need to limit the timeout for these requests. When using the urlopen method, the timeout can be set using the timeout parameter, but I have not found an easy way of doing it with the high-level API such as the submit or click methods. Ideally the timeout would be set just once for the whole browser class and all calls would honor that. It would probably be possible to customize this by passing a custom request_class to every click and submit call, but this would just pollute the code, so I'm looking for a nicer solution for setting the timeout on mechanize's Browser class (and no, I don't want to change the default socket timeout using socket.setdefaulttimeout).

    Read the article

  • ASP.NET Asynchronous Tasks - Worker Thread Not Releasing?

    - by user296752
    I am having an issue with testing asynchronous tasks in ASP.NET & IIS7. From what I have read, the worker thread should be released back into the thread pool while the I/O thread performs the async work, allowing ASP.NET to serve other incoming requests. But when I simulate heavy load on the web application by making 20 simultaneous requests to a page (async.aspx) that performs long-running async tasks, I am unable to browse to some other normal aspx page until the requests are just about done. Am I misunderstanding or missing something here? I am running Vista Biz x64, VS2008 + IIS7. I have the Async attribute applied to the Page directive.

    Read the article

  • nginx-tornado-django request timeout

    - by Xie
    We are using nginx-tornado-django to provide web services. That is, there is no web page frontend. The nginx server serves as a load balancer. The server has 8 cores, so we launched 8 tornado-django processes on every server. Memcached is also deployed to gain better performance. Requests run about 1 million per day per server. We use MySQL as the backend DB. The code is tested and correct. Our profiling shows that normally every request is processed within 100ms. The problem is, we find that about 10 percent of the requests suffer from a time-out issue. Many requests didn't even reach Tornado. I really don't have much experience tuning nginx/Tornado/MySQL. Right now I don't have a clue as to what is going wrong. Any advice is appreciated.

    Read the article

  • Do not use IE browser settings when using a proxy with Indy

    - by JD
    Hi At one of our customer sites, we have a Delphi 2007 application that makes a number of HTTPS requests using indy components. All requests are made using the proxy settings the client provides. For this to work, in IE we have to put the URL's in the trusted zones section. After a month due to security settings the trusted zones are cleared. This means we have to re-add the URLs again to make our application work. Is there a way of bypassing IE settings or using a client side HTTP stack so we do not go through the browser to make https requests? JD

    Read the article

  • Python having problems writing/reading and testing in a correct format

    - by Ionut
    I’m trying to make a program that will do the following: check if auth_file exists if yes - read file and try to login using data from that file - if data is wrong - request new data if no - request some data and then create the file and fill it with requested data So far: import json import getpass import os import requests filename = ".auth_data" auth_file = os.path.realpath(filename) url = 'http://example.com/api' headers = {'content-type': 'application/json'} def load_auth_file(): try: f = open(auth_file, "r") auth_data = f.read() r = requests.get(url, auth=auth_data, headers=headers) if r.reason == 'OK': return auth_data else: print "Incorrect login..." req_auth() except IOError: f = file(auth_file, "w") f.write(req_auth()) f.close() def req_auth(): user = str(raw_input('Username: ')) password = getpass.getpass('Password: ') auth_data = (user, password) r = requests.get(url, auth=auth_data, headers=headers) if r.reason == 'OK': return user, password elif r.reason == "FORBIDDEN": print "Incorrect login information..." req_auth() return False I have the following problems(understanding and applying the correct way): I can't find a correct way of storing the returned data from req_auth() to auth_file in a format that can be read and used in load_auth file PS: Of course I'm a beginner in Python and I'm sure I have missed some key elements here :(
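
    A hedged sketch of one way to keep load_auth_file() and req_auth() in step (JSON on disk, a (user, password) tuple in memory, matching requests' auth= parameter):

        import json

        def save_auth(user, password, path):
            with open(path, "w") as f:
                json.dump({"user": user, "password": password}, f)

        def load_auth(path):
            with open(path) as f:
                data = json.load(f)
            return (data["user"], data["password"])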

    Read the article
