Search Results

Search found 52413 results on 2097 pages for 'http proxy'.

Page 157 of 2097

  • Difficulty screen scraping http://www.momondo.com using nokogiri

    - by Khai Kiong
    I'm having some difficulty extracting the total price (CSS selector '.total') from the flight results at http://www.momondo.com/multicity/?Search=true&TripType=oneway&SegNo=1&SO0=KUL&SD0=KBR&SDP0=31-12-2012&AD=2&CA=0,0&DO=false&NA=false#Search=true&TripType=oneway&SegNo=1&SO0=KUL&SD0=KBR&SDP0=31-12-2012&AD=2&CA=0,0&DO=false&NA=false and I get the error "undefined method `text' for nil:NilClass" from Nokogiri. My code:

        desc "Fetch product prices"
        task :fetch_details => :environment do
          require 'nokogiri'
          require 'open-uri'
          include ERB::Util

          OneWayFlight.find_all_by_money(nil).each do |flight|
            url = "http://www.momondo.com/multicity/Search=true&TripType=oneway&SegNo=1&SO0=KUL&SD0=KBR&SDP0=31-12-2012&AD=2&CA=0,0&DO=false&NA=false#Search=true&TripType=oneway&SegNo=1&SO0=KUL&SD0=KBR&SDP0=31-12-2012&AD=2&CA=0,0&DO=false&NA=false"
            doc = Nokogiri::HTML(open(url))
            price = doc.at_css(".total").text[/[0-9\.]+/]
            flight.update_attribute(:price, price)
          end
        end

    Read the article

  • 404 not found in telnet, works fine in browser

    - by Viranch Mehta
    I am having a very irritating problem: when I open a URL (http://celebs.widewallpapers.net/md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg) in a browser, it works fine, but when I try to access it via telnet from bash, I get 404 Not Found! My exact terminal session:

        $ telnet celebs.widewallpapers.net 80
        HEAD /md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg HTTP/1.0
        [enter]
        [enter]
        HTTP/1.1 404 Not Found
        Server: nginx
        Date: Sun, 23 May 2010 21:36:05 GMT
        Content-Type: text/html; charset=windows-1251
        Content-Length: 166
        Connection: close

    Please help me with this, as I'm trying to write a C batch downloader that works much the same way as the telnet session.
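
    A plausible cause, though the transcript alone can't confirm it: the telnet request sends no Host header, and name-based virtual hosts (typical behind nginx) return 404 for requests they can't route, while browsers always send Host. A minimal C# sketch of the same HEAD request with the header added, as a reference point for the batch downloader:

        using System;
        using System.IO;
        using System.Net.Sockets;
        using System.Text;

        class RawHead
        {
            static void Main()
            {
                const string host = "celebs.widewallpapers.net";
                using (var client = new TcpClient(host, 80))
                using (var stream = client.GetStream())
                {
                    string request =
                        "HEAD /md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg HTTP/1.0\r\n" +
                        "Host: " + host + "\r\n" +   // the header the telnet session never sent
                        "\r\n";
                    byte[] bytes = Encoding.ASCII.GetBytes(request);
                    stream.Write(bytes, 0, bytes.Length);

                    // Dump the status line and response headers.
                    using (var reader = new StreamReader(stream, Encoding.ASCII))
                        Console.Write(reader.ReadToEnd());
                }
            }
        }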

    Read the article

  • Java on iPhone / iPad - Java Simulator / Proxy?

    - by Kovu
    Hey, I want to have a Java chat on the iPhone/iPad. As far as I know, it's not possible to have Java on either of them. But I have seen a lot of mysterious things on the internet over the last few years, so I have to ask: is there any possibility of running a Java chat on the iPad/iPhone?

    Read the article

  • How to get the HTTP headers in asynchronous request mode using ASIHTTPRequest

    - by user262325
    Hello everyone. I want to display the download file size by reading the HTTP headers. I know it can be done this way:

        ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:url];
        [request startSynchronous];
        NSString *poweredBy = [[request responseHeaders] objectForKey:@"X-Powered-By"];
        NSString *contentType = [[request responseHeaders] objectForKey:@"Content-Type"];

    but this is synchronous mode. In asynchronous mode it can be done as below:

        - (void)requestFinished:(ASIHTTPRequest *)request
        {
            unsigned long long contentLength = [request contentLength];
        }

    but requestFinished: only fires at the end of the download. Is there an event to get the HTTP header info at the beginning of the download? Thanks, interdev

    Read the article

  • Castle Windsor XML configuration for WCF proxy using WCF Integration Facility

    - by andreyg
    Hi everybody! Currently we use programmatic registration of WCF proxies in the Windsor container using the WCF Integration Facility. For example:

        container.Register(
            Component.For<CalculatorSoap>()
                .Named("calculatorSoap")
                .LifeStyle.Transient
                .ActAs(new DefaultClientModel
                {
                    Endpoint = WcfEndpoint.FromConfiguration("CalculatorSoap").LogMessages()
                })
        );

    Is there any way to do the same via the Windsor XML configuration file? I can't find any sample of this on Google. Thanks in advance

    Read the article

  • Logging raw HTTP request/response in ASP.NET MVC & IIS7

    - by Greg Beech
    I'm writing a web service (using ASP.NET MVC) and for support purposes we'd like to be able to log the requests and responses in as close as possible to the raw, on-the-wire format (i.e. including HTTP method, path, all headers, and the body) into a database.

    What I'm not sure of is how to get hold of this data in the least 'mangled' way. I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response), but I'd really like to get hold of the actual request/response data that's sent on the wire. I'm happy to use any interception mechanism such as filters, modules, etc., and the solution can be specific to IIS7. However, I'd prefer to keep it in managed code only. Any recommendations?

    Edit: I note that HttpRequest has a SaveAs method which can save the request to disk, but this reconstructs the request from the internal state using a load of internal helper methods that cannot be accessed publicly (quite why this doesn't allow saving to a user-provided stream I don't know). So it's starting to look like I'll have to do my best to reconstruct the request/response text from the objects... groan.

    Edit 2: Please note that I said the whole request including method, path, headers, etc. The current responses only look at the body streams, which does not include this information.

    Edit 3: Does nobody read questions around here? Five answers so far and yet not one even hints at a way to get the whole raw on-the-wire request. Yes, I know I can capture the output streams and the headers and the URL and all that stuff from the request object. I already said that in the question, see:

        I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response) but I'd really like to get hold of the actual request/response data that's sent on the wire.

    If you know the complete raw data (including headers, URL, HTTP method, etc.) simply cannot be retrieved, then that would be useful to know. Similarly, if you know how to get it all in the raw format (yes, I still mean including headers, URL, HTTP method, etc.) without having to reconstruct it, which is what I asked, then that would be very useful. But telling me that I can reconstruct it from the HttpRequest/HttpResponse objects is not useful. I know that. I already said it.

    Please note: Before anybody starts saying this is a bad idea, or will limit scalability, etc., we'll also be implementing throttling, sequential delivery, and anti-replay mechanisms in a distributed environment, so database logging is required anyway. I'm not looking for a discussion of whether this is a good idea, I'm looking for how it can be done.
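
    For what it's worth, one piece that gets closer to the wire than walking HttpRequest properties: IIS keeps the client's header block verbatim in the ALL_RAW server variable, so only the request line itself has to be rebuilt. A minimal IHttpModule sketch of that approach (the Log sink is a placeholder):

        using System.Text;
        using System.Web;

        public class RawRequestLoggingModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    HttpRequest request = ((HttpApplication)sender).Request;

                    var raw = new StringBuilder();
                    // The request line is not exposed anywhere, so rebuild it...
                    raw.AppendFormat("{0} {1} {2}\r\n",
                        request.HttpMethod,
                        request.RawUrl,
                        request.ServerVariables["SERVER_PROTOCOL"]);
                    // ...but ALL_RAW is the header block exactly as the client sent it.
                    raw.Append(request.ServerVariables["ALL_RAW"]);
                    raw.Append("\r\n");

                    // Body: read fully, then rewind so the MVC pipeline still sees it.
                    byte[] body = new byte[request.InputStream.Length];
                    int read = 0;
                    while (read < body.Length)
                    {
                        int n = request.InputStream.Read(body, read, body.Length - read);
                        if (n == 0) break;
                        read += n;
                    }
                    request.InputStream.Position = 0;
                    raw.Append(Encoding.UTF8.GetString(body));

                    Log(raw.ToString()); // placeholder: write to your database here
                };
            }

            public void Dispose() { }

            private static void Log(string rawRequest) { /* ... */ }
        }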

    Read the article

  • Remove HTTP headers from a raw response

    - by Ed
    Let's say we make a request to a URL and get back the raw response, like this:

        HTTP/1.1 200 OK
        Date: Wed, 28 Apr 2010 14:39:13 GMT
        Expires: -1
        Cache-Control: private, max-age=0
        Content-Type: text/html; charset=ISO-8859-1
        Set-Cookie: PREF=ID=e2bca72563dfffcc:TM=1272465553:LM=1272465553:S=ZN2zv8oxlFPT1BJG; expires=Fri, 27-Apr-2012 14:39:13 GMT; path=/; domain=.google.co.uk
        Server: gws
        X-XSS-Protection: 1; mode=block
        Connection: close

        <!doctype html><html><head>...</head><body>...</body></html>

    What would be the best way to remove the HTTP headers from the response in C#? With regexes? Parsing it into some kind of HTTPResponse object and using only the body? EDIT: I'm using SOCKS to make the request, that's why I get the raw response.
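
    No regex needed, assuming a well-formed response: the header block always ends at the first blank line (CRLF CRLF), and everything after it is the body. A minimal sketch:

        using System;

        static class RawHttp
        {
            // The header block ends at the first CRLF CRLF; the body follows.
            public static string StripHeaders(string rawResponse)
            {
                int separator = rawResponse.IndexOf("\r\n\r\n", StringComparison.Ordinal);
                return separator < 0 ? rawResponse : rawResponse.Substring(separator + 4);
            }
        }

    One caveat: if the response uses Transfer-Encoding: chunked, the bytes after the blank line still need de-chunking before they are usable as HTML.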

    Read the article

  • net/http.rb:560:in `initialize': getaddrinfo: Name or service not known (SocketError)

    - by Sid
        @@timestamp = nil

        def generate_oauth_url
          @@timestamp = timestamp
          url = CONNECT_URL + REQUEST_TOKEN_PATH + "&oauth_callback=#{OAUTH_CALLBACK}&oauth_consumer_key=#{OAUTH_CONSUMER_KEY}&oauth_nonce=#{NONCE}&oauth_signature_method=#{OAUTH_SIGNATURE_METHOD}&oauth_timestamp=#{@@timestamp}&oauth_version=#{OAUTH_VERSION}"
          puts url
          url
        end

        def sign(url)
          Base64.encode64(HMAC::SHA1.digest((NONCE + url), OAUTH_CONSUMER_SECRET)).strip
        end

        def get_request_token
          url = generate_oauth_url
          signed_url = sign(url)
          request = Net::HTTP.new((CONNECT_URL + REQUEST_TOKEN_PATH), 80)
          puts request.inspect
          headers = { "Authorization" => "Authorization: OAuth oauth_nonce = #{NONCE}, oauth_callback = #{OAUTH_CALLBACK}, oauth_signature_method = #{OAUTH_SIGNATURE_METHOD}, oauth_timestamp=#{@@timestamp}, oauth_consumer_key = #{OAUTH_CONSUMER_KEY}, oauth_signature = #{signed_url}, oauth_version = #{OAUTH_VERSION}" }
          request.post(url, nil, headers)
        end

        def timestamp
          Time.now.to_i
        end

    I am trying to do what OAuth does in an attempt to understand how to use the Authorization headers. I am trying to connect to the LinkedIn API, and I am getting the following error:

        /usr/lib/ruby/1.8/net/http.rb:560:in 'initialize': getaddrinfo: Name or service not known (SocketError)

    I would really appreciate it if someone could nudge me in the right direction.

    Read the article

  • Sending a file from memory (rather than disk) over HTTP using libcurl

    - by cinek1lol
    Hi! I would like to send pictures via a program written in C++. This works:

        WinExec("C:\\curl\\curl.exe -H Expect: -F \"fileupload=@C:\\curl\\ok.jpg\" -F \"xml=yes\" -# \"http://www.imageshack.us/index.php\" -o data.txt -A \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1\" -e \"http://www.imageshack.us\"", NULL);

    but I would like to send a picture that has already been loaded into a char variable (you know what I mean? First I load the picture into a variable, and then I send the variable), because right now I have to specify the path of the picture on disk. I want to write this program in C++ using the curl library, not through an .exe. I have also found this program (which I have modified a bit):

        #include <stdio.h>
        #include <string.h>
        #include <iostream>
        #include <curl/curl.h>
        #include <curl/types.h>
        #include <curl/easy.h>

        int main(int argc, char *argv[])
        {
            CURL *curl;
            CURLcode res;
            struct curl_httppost *formpost = NULL;
            struct curl_httppost *lastptr = NULL;
            struct curl_slist *headerlist = NULL;
            static const char buf[] = "Expect:";

            curl_global_init(CURL_GLOBAL_ALL);

            /* Fill in the file upload field */
            curl_formadd(&formpost, &lastptr,
                         CURLFORM_COPYNAME, "send",
                         CURLFORM_FILE, "nowy.jpg",
                         CURLFORM_END);
            curl_formadd(&formpost, &lastptr,
                         CURLFORM_COPYNAME, "nowy.jpg",
                         CURLFORM_COPYCONTENTS, "nowy.jpg",
                         CURLFORM_END);
            curl_formadd(&formpost, &lastptr,
                         CURLFORM_COPYNAME, "submit",
                         CURLFORM_COPYCONTENTS, "send",
                         CURLFORM_END);

            curl = curl_easy_init();
            headerlist = curl_slist_append(headerlist, buf);
            if (curl) {
                curl_easy_setopt(curl, CURLOPT_URL, "http://www.imageshack.us/index.php");
                if ((argc == 2) && (!strcmp(argv[1], "xml=yes")))
                    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headerlist);
                curl_easy_setopt(curl, CURLOPT_HTTPPOST, formpost);
                res = curl_easy_perform(curl);
                curl_easy_cleanup(curl);
                curl_formfree(formpost);
                curl_slist_free_all(headerlist);
            }
            system("pause");
            return 0;
        }

    Read the article

  • Running a long operation in HTTP modules?

    - by Niranjan
    Hi, I have a simple requirement: I want to execute a long-running program on the server (e.g. a DTSX package). I want to make an HTTP module for this, but I have a question: will the DTSX package keep running even if the user closes the page and the browser? In my case the user hits the handler with a query string, but what if the user closes the browser immediately? How is the behavior different from simple linear page processing? I want my DTSX package to finish once it's started, no matter how much time it takes, and I also don't want to block the user; that is why I am using HTTP modules in place of linear ASP page processing. Regards, Niranjan
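
    For what it's worth, a sketch of the usual fire-and-forget shape, assuming an .ashx handler (RunDtsxPackage and the package parameter are placeholders): work queued to the thread pool is not aborted when the browser closes -- the response ends, the work keeps running. An app-pool recycle can still kill it, though, so a Windows service is the safer home for genuinely long jobs.

        using System.Threading;
        using System.Web;

        public class StartDtsxHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                string package = context.Request.QueryString["package"]; // hypothetical parameter

                // Detached from the request: survives the client disconnecting.
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    RunDtsxPackage(package);
                });

                context.Response.Write("Package started."); // returns immediately
            }

            public bool IsReusable { get { return true; } }

            private static void RunDtsxPackage(string package) { /* invoke dtexec etc. */ }
        }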

    Read the article

  • Newbie question: Cannot load http://localhost/phptest:8080/

    - by Princess Innah
    Hi guys, I decided to learn HTML, so I installed Apache on Windows Vista. Everything seems to work fine: when I go to http://localhost:8080, the sample web page installed by Apache shows. Apache is configured on port 8080. So far so good. Since my DocumentRoot is c:\pub, I made another folder inside it, e.g. c:\pub\test. What I'm trying to figure out is why the page at http://localhost/test:8080 cannot load? It has an index.html, and Apache is working fine. Any ideas?

    Read the article

  • Getting Session in Http Handler ashx

    - by prakash
    Hi All, I am using an HTTP handler (.ashx file) for serving images. I was using the Session object to get the image and return it in the response. Now the problem is that I need to use a custom session object, which is nothing but a wrapper over HttpSessionState. But when I try to get the existing custom session object, it creates a new one... it does not show the session data, and I checked the session ID, which is also different. Please advise: how can I get the existing session in an .ashx file? Note: when I use the ASP.NET Session it works fine.

        [WebService(Namespace = "http://tempuri.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        public class GetImage : IHttpHandler, System.Web.SessionState.IRequiresSessionState
        {
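
    For reference, a minimal sketch of the baseline that does work, since any custom wrapper has to sit on top of it: a handler only sees HttpContext.Current.Session if it implements IRequiresSessionState (or IReadOnlySessionState for read-only access). The session key here is hypothetical.

        using System.Web;
        using System.Web.SessionState;

        public class GetImage : IHttpHandler, IRequiresSessionState
        {
            public void ProcessRequest(HttpContext context)
            {
                // Session is only populated because of IRequiresSessionState above.
                byte[] image = (byte[])context.Session["image"]; // hypothetical key
                context.Response.ContentType = "image/jpeg";
                if (image != null)
                    context.Response.BinaryWrite(image);
            }

            public bool IsReusable { get { return false; } }
        }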

    Read the article

  • Simulate Incorrect Content-Length Headers for HTTP in C#

    - by cfeduke
    We are building a comprehensive integration test framework in C# for our application, which lives on top of HTTP and uses IIS7 to host our applications. As part of our integration tests we want to test incoming requests which will result in EndOfStreamExceptions ("Unable to read beyond end of stream") -- these occur when a client sends an HTTP header indicating a larger body size than it actually transmits as part of the body. We want to test our error-recovery code for this condition, so we need to simulate these sorts of requests. I am looking for a .NET Fx-based socket library or custom HttpWebRequest replacement that specifically allows developers to simulate such conditions, to add to our integration test suite. Does anyone know of any such libraries? A scriptable solution would work as well.
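
    As far as I know, HttpWebRequest itself won't let you under-deliver against the Content-Length you set (it throws when the request stream is closed short), so the simplest scriptable route is a raw socket that deliberately lies. A minimal sketch (host, port, and path are placeholders):

        using System.Net.Sockets;
        using System.Text;

        class ShortBodyClient
        {
            static void Main()
            {
                using (var client = new TcpClient("localhost", 80)) // adjust host/port
                using (var stream = client.GetStream())
                {
                    const string body = "abc"; // 3 bytes actually sent
                    string request =
                        "POST /your/endpoint HTTP/1.1\r\n" + // hypothetical path
                        "Host: localhost\r\n" +
                        "Content-Type: text/plain\r\n" +
                        "Content-Length: 1000\r\n" +         // lie: promise 1000 bytes
                        "\r\n" +
                        body;
                    byte[] bytes = Encoding.ASCII.GetBytes(request);
                    stream.Write(bytes, 0, bytes.Length);
                } // closing here truncates the body mid-transfer, server-side EOF
            }
        }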

    Read the article

  • Multiple key/value pairs in HTTP POST where key is the same name

    - by randombits
    I'm working on an API that accepts data from remote clients, some of which use a key in an HTTP POST that almost functions as an array. In English, what this means is: say I have a resource on my server called "class". A class in this sense is the kind a student sits in and a teacher educates in. When the user submits an HTTP POST to create a new class for their application, a lot of the key/value pairs look like:

        student_name: Bob Smith
        student_name: Jane Smith
        student_name: Chris Smith

    What's the best way to handle this on the client side (let's say the client is cURL or ActiveResource, whatever...), and what's a decent way of handling this on the server side if my server is a Ruby on Rails app? I need a way to allow multiple keys with the same name without any namespace clashing or loss of data.
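
    On the Rails side, the usual convention is a trailing [] on the key (student_name[]=...), which makes params[:student_name] arrive as an array instead of the last value winning. A sketch of a client emitting repeated keys -- in C# just to show the wire format; the endpoint is hypothetical:

        using System;
        using System.Collections.Generic;
        using System.Net.Http;

        class RepeatedKeysClient
        {
            static void Main()
            {
                // Repeated "student_name[]" keys become an array in Rails params.
                var pairs = new List<KeyValuePair<string, string>>
                {
                    new KeyValuePair<string, string>("student_name[]", "Bob Smith"),
                    new KeyValuePair<string, string>("student_name[]", "Jane Smith"),
                    new KeyValuePair<string, string>("student_name[]", "Chris Smith"),
                };

                using (var client = new HttpClient())
                {
                    // FormUrlEncodedContent happily encodes duplicate keys.
                    var response = client.PostAsync(
                        "http://localhost:3000/classes", // hypothetical endpoint
                        new FormUrlEncodedContent(pairs)).Result;
                    Console.WriteLine(response.StatusCode);
                }
            }
        }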

    Read the article

  • Why won't IIS serve my website? - 404 Page Not Found

    - by Giffyguy
    Here is what I have done so far:

    - Built a brand new server, with a fresh copy of Windows Server 2003 Enterprise x86 Edition.
    - Installed the .NET Framework 1.1, 2.0, 3.5, and 4.0.
    - Added the "Domain Controller" and "Application Server" roles.
    - Created a new website, pointed it to a local directory: C:\Inetpub\angryoctopus.net\
    - Added the appropriate headers: angryoctopus.net, www.angryoctopus.net, TCP port 80, all IPs.
    - Moved the website content into the local directory.
    - Configured the default document in IIS: Default.aspx
    - Enabled ASP.NET for this website, and set it to the correct version: 2.0.50727.
    - Configured the zone angryoctopus.net in DNS. Tested DNS lookup here to ensure DNS was functional.
    - Opened the website in VS 2008 and re-built (and debugged) it to ensure the content was functional.

    I can clearly see that IIS is responding normally by browsing directly to my server's IP address. Since this does not use the angryoctopus HTTP header, the default website is displayed instead: the "Under Construction" page. And yet, after all of this, angryoctopus.net still returns 404. Does anybody know what could be wrong? What troubleshooting steps have I forgotten? Is there a command-line diagnostic that might provide more information?

    Read the article

  • Proxy calls across a DMZ

    - by John
    We need to determine a quick way for our web application deployed in a DMZ to communicate with our SQL Server that lives in the protected network. Only port 80 is open and available, and no direct SQL traffic is allowed across the firewall. So take the following simple system: a web page (default.aspx) makes a call (string GetData()) that resides in an assembly (Simple.DLL). GetData() uses ADO.NET to open a connection, execute a SQL call, retrieve the data, and return the data to the caller. However, since only port 80 is available and no SQL traffic is allowed, what could we do to accomplish our goal? I believe a .NET Remoting solution would work, and I have heard of an architecture where a remoting layer proxies the call from Simple.DLL in the DMZ to another Simple.DLL that runs on the protected side, with the remoting layer handling the communication between the two DLLs. Can someone shed some light on how WCF/remoting can help us, and how to get started with a solution?
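
    One shape this can take with WCF -- a sketch, not a drop-in: the service address and the body of GetData() are placeholders -- is to expose a contract mirroring Simple.DLL's API over BasicHttpBinding on port 80, host the implementation inside the protected network (the only code that touches SQL Server), and make the DMZ copy of Simple.DLL a thin client proxy:

        using System;
        using System.ServiceModel;

        // Contract shared by both sides; mirrors Simple.DLL's API.
        [ServiceContract]
        public interface IDataService
        {
            [OperationContract]
            string GetData();
        }

        // Protected-network side: the existing ADO.NET code moves in here.
        public class DataService : IDataService
        {
            public string GetData()
            {
                return "data"; // open connection, run query, return result
            }
        }

        // DMZ side: GetData() becomes a proxy call over plain HTTP on port 80.
        public static class DmzDataClient
        {
            public static string GetData()
            {
                var factory = new ChannelFactory<IDataService>(
                    new BasicHttpBinding(),
                    "http://internal-host/DataService.svc"); // hypothetical address
                IDataService proxy = factory.CreateChannel();
                try
                {
                    return proxy.GetData();
                }
                finally
                {
                    ((IClientChannel)proxy).Close();
                    factory.Close();
                }
            }
        }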

    Read the article

  • Cross-domain REST proxy with Javascript, HTML5

    - by Bosh
    I'm writing a service (say, service.com) that provides a REST API to external apps running inside IFrames. (These apps are hosted on domains outside service.com.) I'm planning a JavaScript client library for the apps to make pure-JavaScript requests to the service.com REST API -- basically using postMessage and some ad-hoc encapsulation of my API calls to get messages back and forth across frames (from the outside-app.com IFrame to the service.com REST API, and back to the IFrame with a response). My question: is there any robust, general-purpose JavaScript library to accomplish the kind of cross-domain REST request proxying I need, or should I just hack it from scratch?

    Read the article

  • Problem getting an HTTP response in Chrome

    - by Bhaskasr
    Hi, I am trying to get an HTTP response from a PHP web service in JavaScript, but I am getting null in both Firefox and Chrome. Please tell me where I am making a mistake. Here is my code:

        function fetch_details() {
            if (window.XMLHttpRequest) {
                xhttp = new XMLHttpRequest();
                alert("first");
            } else {
                xhttp = new ActiveXObject("Microsoft.XMLHTTP");
                alert("sec");
            }
            alert("hi.");
            xhttp.open("GET", "http://122.166.97.94:8080/proxim_live/phpsend.php?UserID=881&DeviceID=Imm123&LastSyncDate=", false);
            xhttp.send("");
            xmlDoc = xhttp.responseXML;
            alert(xmlDoc);
            alert(xmlDoc.getElementsByTagName("Inbox")[0].childNodes[0].nodeValue);
        }

    Read the article

  • Securely persist session between https://secure.yourname.com and http://www.yourname.com on a Rails app

    - by Matt
    My Rails site posts to a secure host (e.g. 'https://secure.yourname.com') when the user logs into the site. Session data is stored in the database, with the cookie containing only the session ID. The problem is that when the user returns to a non-HTTPS page, such as the home page (e.g. 'http://www.yourname.com'), the user appears to have logged out. I believe the reason for this is that a separate cookie is stored for each host (www vs. secure). Is this correct? What is the best secure way to persist the session between both the HTTP and HTTPS sections of the site? Does anyone know of any plugins that address this problem? The site runs on Heroku.

    Read the article

  • How do I get through proxy server environments for non-standard services?

    - by Ripred
    I'm not real hip on exactly what role(s) today's proxy servers can play, and I'm learning, so go easy on me :-) I have a client/server system I have written using a homegrown protocol, and I need to enhance the client side to negotiate its way out of a proxy environment.

    I have an existing client and server system written in C and C++ for speed, and a small amount of MFC in the client to handle the user interface. I have written both the server and client side of the system on Windows (the people I work for are mainly web developers using Windows everything -- not a choice), sticking to Berkeley sockets, as it were, via wsock32 for efficiency. The clients connect to the server through a nonstandard port (using port 80 is an option to get out of some environments, even though the protocol that goes over it isn't HTTP). The TCP connection(s) stay open for the duration of the client's participation in real-time conferences.

    Our customer base is expanding to all kinds of networked environments. I have been able to solve a lot of problems by adding the ability to connect securely over port 443 using secure sockets, which allows the protocol to pass through a lot of environments since the internal packets can't be sniffed. But more and more of our customers are behind a proxy server environment, and my direct connections don't make it through. My old-school understanding of proxy servers is that they act as a proxy for external HTML content over HTTP, possibly locally caching popular material for faster local access, and also allowing the IT staff to blacklist certain destination sites. Customers are complaining that my software doesn't recognize and easily navigate its way through their proxy environments, but I'm finding it difficult to decide what my "best fit" solution should be. My software doesn't tear down the connection after each client request, and on top of that, packets can come from either side at any time -- basically your typical custom client/server system for a specific niche. My first reaction is "why can't they just add my servers' addresses to their white list", but if there is a programmatic way to get through without requiring their IT staff's help, it is politically better and arguably a better solution anyway. Plus maybe I'm still not understanding the role and purpose of what proxy servers and environments have grown to be these days.

    My first attempt at a solution was to use WinInet with its various proxy capabilities to establish a connection over port 80 to my non-standard-protocol server (which knows enough to recognize and answer a simple HTTP-looking GET request with a simple HTTP response page, to get around some environments that employ initial packet sniffing (DPI)). I retrieved the actual SOCKET handle behind WinInet's HINTERNET request object and had hoped to use that in place of my software's existing SOCKET connection, hopefully without needing to change much more on the client side. It initially seemed to be my solution, but on further inspection it seems that the OS gets first chance at the received data on this socket: when I get notified of events via the standard select(...) statement on the socket and query the size of the data available via ioctlsocket, the call succeeds but returns 0 bytes available, the reads don't work, and it goes downhill from there.

    Can someone tell me of a client-side library (commercial is fine) that will let me get past these proxy server environments with as little user and IT staff help as possible? From what I read, it has grown past SOCKS, and I figure someone has to have solved this problem before me. Thanks for reading my long-winded question, Ripred
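
    One programmatic escape hatch worth knowing, sketched below under the assumption the proxy permits it: HTTP proxies support the CONNECT verb (the same mechanism browsers use for HTTPS), and once the proxy answers 200, the socket becomes a raw tunnel your own protocol can flow over. Many proxies lock CONNECT down to port 443 only, which pairs nicely with the existing 443/TLS fallback.

        using System;
        using System.IO;
        using System.Net.Sockets;
        using System.Text;

        static class ProxyTunnel
        {
            // Opens a raw tunnel to destHost:destPort through an HTTP proxy
            // using the CONNECT verb.
            public static NetworkStream Open(string proxyHost, int proxyPort,
                                             string destHost, int destPort)
            {
                var client = new TcpClient(proxyHost, proxyPort);
                NetworkStream stream = client.GetStream();

                string connect = string.Format(
                    "CONNECT {0}:{1} HTTP/1.1\r\nHost: {0}:{1}\r\n\r\n",
                    destHost, destPort);
                byte[] request = Encoding.ASCII.GetBytes(connect);
                stream.Write(request, 0, request.Length);

                // Read the proxy's reply byte-by-byte up to the blank line,
                // so no tunneled bytes get buffered away by a reader.
                var reply = new MemoryStream();
                int b;
                while ((b = stream.ReadByte()) != -1)
                {
                    reply.WriteByte((byte)b);
                    byte[] h = reply.ToArray();
                    int n = (int)reply.Length;
                    if (n >= 4 && h[n - 4] == '\r' && h[n - 3] == '\n'
                               && h[n - 2] == '\r' && h[n - 1] == '\n')
                        break;
                }
                string headers = Encoding.ASCII.GetString(reply.ToArray());
                if (!headers.StartsWith("HTTP/1.1 200") && !headers.StartsWith("HTTP/1.0 200"))
                    throw new IOException("Proxy refused CONNECT: " + headers);

                return stream; // from here on, speak the custom protocol directly
            }
        }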

    Read the article

  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but nginx is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), yet simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf:

        location / {
            root /var/www/htdocs/;
            index index.html;
            autoindex on;
        }

        location /hg {
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;
            include fastcgi_params;

            if ($request_uri ~ ^/hg([^?#]*)) {
                set $rewritten_uri $1;
            }

            limit_except GET {
                allow all;
                deny all;
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }

            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO $rewritten_uri;

            # for authentication
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;
            #fastcgi_pass_header Authorization;
            #fastcgi_intercept_errors on;
        }

    GETs work fine, but a POST delivers this error to the error_log:

        2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com"

    What could possibly be the issue? I'm trying to allow read-only access via GETs to the page, but require authorization when using hg push to the same URL, which sends a POST request.

    Read the article
