Search Results

Search found 57458 results on 2299 pages for 'http response codes'.


  • How should I loop a Nokogiri search in Ruby?

    - by kim
    I have the following code that retrieves the title of each URL from an array containing a list of URLs:

        require 'rubygems'
        require 'nokogiri'
        require 'open-uri'

        @urls = ["http://google.com", "http://yahoo.com", "http://rubyonrails.org"]
        @found_titles = Array.new

        @found_titles[0] = Nokogiri::HTML(open("#{@urls[0]}")).search("title").inner_html
        # this can go on forever...
        #@found_titles[1] = Nokogiri::HTML(open("#{@urls[1]}")).search("title").inner_html
        #@found_titles[2] = Nokogiri::HTML(open("#{@urls[2]}")).search("title").inner_html

        puts "#{@found_titles[0]}"

    How should I write a loop for this so I can get the titles even when the list in the @urls array gets longer?
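
    A minimal sketch of one way to loop it, using map (the code below follows the question's variable names but is not from the original post):

        require 'rubygems'
        require 'nokogiri'
        require 'open-uri'

        @urls = ["http://google.com", "http://yahoo.com", "http://rubyonrails.org"]

        # map visits each URL in turn and collects the <title> contents into a new array
        @found_titles = @urls.map do |url|
          Nokogiri::HTML(open(url)).search("title").inner_html
        end

        @found_titles.each { |title| puts title }

    This keeps working however many entries @urls holds, since map simply iterates over whatever the array contains.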

    Read the article

  • color letters in a div

    - by Growler
    I've created a palindrome checker. I want to take it one step further and show the letters being compared as the string is being checked.

    HTML:

        <p id="typing"></p>
        <input type="text" id="textBox" onkeyup="pal(this.value);" value="" />
        <div id="response"></div>
        <hr>
        <div id="palindromeRun"></div>

    JS: To do this, I run the recursive check. Then, if it is a palindrome, I run colorLetters(), with which I'm trying to highlight each letter in green as it is checked. Right now it just rewrites palindromeRun's HTML with the first letter. I know this is because of the way I'm resetting its HTML. I don't know how to grab just the first and last letter, change only those letters' CSS, then increment i and j on the next setTimeout loop.

        var timeout2 = null;

        function pal(input) {
            var str = input.replace(/\s/g, '');
            var str2 = str.replace(/\W/, '');
            if (checkPal(str2, 0, str2.length - 1)) {
                $("#textBox").css({"color": "green"});
                $("#response").html(input + " is a palindrome");
                $("#palindromeRun").html(input);
                colorLetters(str2, 0, str2.length - 1);
            } else {
                $("#textBox").css({"color": "red"});
                $("#response").html(input + " is not a palindrome");
            }
            if (input.length <= 0) {
                $("#response").html("");
                $("#textBox").css({"color": "black"});
            }
        }

        function checkPal(input, i, j) {
            if (input.length <= 1) {
                return false;
            }
            if (i === j || ((j - i) == 1 && input.charAt(i) === input.charAt(j))) {
                return true;
            } else {
                if (input.charAt(i).toLowerCase() === input.charAt(j).toLowerCase()) {
                    return checkPal(input, ++i, --j);
                } else {
                    return false;
                }
            }
        }

        function colorLetters(myinput, i, j) {
            if (timeout2 == null) {
                timeout2 = setTimeout(function () {
                    console.log("called");
                    var firstLetter = $("#palindromeRun").html(myinput.charAt(i));
                    var secondLetter = $("#palindromeRun").html(myinput.charAt(j));
                    $(firstLetter).css({"color": "red"});
                    $(secondLetter).css({"color": "green"});
                    i++;
                    j++;
                    timeout2 = null;
                }, 1000);
            }
        }

    Secondary: if possible, I'd also like it to color the letters as the user is typing. I realize this will require a setTimeout on each keyup, but I am not sure how to write that either.
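
    A rough sketch of one way to do the highlighting (not from the original post): wrap every character of #palindromeRun in its own <span> once, then color the i-th and j-th spans on each tick instead of rewriting the whole HTML.

        function colorLetters(str, i, j) {
            // Build the spans only once, so coloring from earlier ticks is preserved.
            if ($("#palindromeRun span").length === 0) {
                $("#palindromeRun").html(str.split("").map(function (c) {
                    return "<span>" + c + "</span>";
                }).join(""));
            }
            if (i >= j) { return; }                      // every pair has been shown
            var spans = $("#palindromeRun span");
            $(spans[i]).css("color", "green");           // current left letter
            $(spans[j]).css("color", "green");           // current right letter
            setTimeout(function () {
                colorLetters(str, i + 1, j - 1);         // move inward on the next tick
            }, 1000);
        }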

    Read the article

  • file_get_contents returns 403 forbidden

    - by absk
    I am trying to make a site scraper. I made it on my local machine and it works fine there. When I execute the same code on my server, it shows a 403 Forbidden error. I am using the PHP Simple HTML DOM Parser. The error I get on the server is this:

        Warning: file_get_contents(http://example.com/viewProperty.html?id=7715888)
        [function.file-get-contents]: failed to open stream: HTTP request failed!
        HTTP/1.1 403 Forbidden in /home/scraping/simple_html_dom.php on line 40

    The code triggering it is:

        $url = "http://www.example.com/viewProperty.html?id=" . $id;
        $html = file_get_html($url);

    I have checked php.ini on the server and allow_url_fopen is On. A possible solution could be using cURL, but I need to know where I am going wrong.
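
    A hedged guess at a fix (the User-Agent string and the stream-context approach are additions, not from the post): some sites return 403 to requests that carry no browser-like User-Agent header, and a server's outgoing requests are often treated differently from a local machine's.

        // Fetch the page with an explicit User-Agent, then hand the HTML to the parser.
        $context = stream_context_create(array(
            'http' => array(
                'header' => "User-Agent: Mozilla/5.0 (compatible; MyScraper/1.0)\r\n",
            ),
        ));
        $url  = "http://www.example.com/viewProperty.html?id=" . $id;
        $raw  = file_get_contents($url, false, $context);
        $html = str_get_html($raw);   // Simple HTML DOM's string loader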

    Read the article

  • Unable to fetch JSON data from a remote URL

    - by user3772611
    I am cracking my head trying to solve this. I am unable to fetch the JSON data from a remote REST API. I need to fetch the JSON data and display its "html_url" field on my website. I read that you need the charset and content type below for fetching JSON.

        <html>
        <head>
            <meta charset="utf-8">
            <meta http-equiv="content-type" content="application/json">
        </head>
        <body>
            <p>My Instruments page</p>
            <ul></ul>
            <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js"></script>
            <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.10.3/jquery-ui.min.js"></script>
            <script type="text/javascript" src="http://code.jquery.com/jquery-1.11.0.min.js"></script>
            <script type="text/javascript">
                $(document).ready(function () {
                    alert("Inside the script");
                    $.getJSON(" https://pki.zendesk.com/api/v2/help_center/sections/200268985/articles.json", function (obj) {
                        alert("Inside the getJSON");
                        $.each(obj, function (key, value) {
                            $("ul").append("<li>" + value.html_url + "</li>");
                        });
                    });
                });
            </script>
        </body>
        </html>

    I referred to the following example on jsFiddle: http://jsfiddle.net/2xTjf/29/. The "http://date.jsontest.com" URL given in that example also doesn't work in my code. The first alert pops up but not the other one. I am a novice at JSON/jQuery. I used jsonlint.com to check whether the API returns valid JSON, and it came out valid. I tested it with the Chrome REST client too. What am I missing here? Help me please! Thanks in anticipation.
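
    A small diagnostic sketch, not from the original post: attaching a fail handler usually reveals whether the request is being blocked (most often a cross-origin restriction when calling another domain from the browser). The data.articles path below is an assumption about the response shape.

        $.getJSON("https://pki.zendesk.com/api/v2/help_center/sections/200268985/articles.json")
            .done(function (data) {
                // list every article's html_url (assumes the payload nests them under "articles")
                $.each(data.articles, function (index, article) {
                    $("ul").append("<li>" + article.html_url + "</li>");
                });
            })
            .fail(function (jqXHR, textStatus, errorThrown) {
                console.log("Request failed: " + textStatus + " " + errorThrown);
            });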

    Read the article

  • Binding on a port with netpipes/netcat

    - by mindas
    I am trying to write a simple bash script that listens on a port and responds with a trivial HTTP response. My specific issue is that I am not sure the port is available, and in case of a bind failure I fall back to the next port until a bind succeeds. So far the easiest way for me to achieve this was something like:

        for (( i=$PORT_BASE; i < $(($PORT_BASE+$PORT_RANGE)); i++ ))
        do
            if [ $DEBUG -eq 1 ] ; then
                echo trying to bind on $i
            fi
            /usr/bin/faucet $i --out --daemon echo test 2>/dev/null
            if [ $? -eq 0 ] ; then  # success?
                port=$i
                if [ $DEBUG -eq 1 ] ; then
                    echo "bound on port $port"
                fi
                break
            fi
        done

    Here I am using faucet from the netpipes Ubuntu package. The problem with this is that if I simply print "test" to the output, curl complains about a non-standard HTTP response (error code 18). That's fair enough, as I don't print an HTTP-compatible response. If I replace echo test with echo -ne "HTTP/1.0 200 OK\r\n\r\ntest", curl still complains:

        user@server:$ faucet 10020 --out --daemon echo -ne "HTTP/1.0 200 OK\r\n\r\ntest"
        ...
        user@client:$ curl ip.of.the.server:10020
        curl: (56) Failure when receiving data from the peer

    I think the problem lies in how faucet is printing the response and handling the connection. For example, if I do the server side in netcat, curl works fine:

        user@server:$ echo -ne "HTTP/1.0 200 OK\r\n\r\ntest\r\n" | nc -l 10020
        ...
        user@client:$ curl ip.of.the.server:10020
        test
        user@client:$

    I would be more than happy to replace faucet with netcat in my main script, but the problem is that I want to spawn an independent server process so I can run the client from the same base shell. faucet has a very handy --daemon parameter, as it forks to the background and I can use $? (the exit status code) to check whether the bind succeeded. If I were to use netcat for a similar purpose, I would have to fork it using & and $? would not work. Does anybody know why faucet isn't responding correctly in this particular case, and/or can you suggest a solution to this problem? I am not married to either faucet or netcat, but I would like the solution to be implemented using bash or its utilities (as opposed to writing something in yet another scripting language, such as Perl or Python).
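
    A minimal sketch of a netcat-based workaround (assumptions: a netcat whose listen syntax is `nc -l PORT`; some flavours need `nc -l -p PORT`). The trick is that nc exits almost immediately if it cannot bind, so checking whether the backgrounded process is still alive after a short pause stands in for faucet's exit status:

        for (( i=PORT_BASE; i < PORT_BASE + PORT_RANGE; i++ )); do
            printf 'HTTP/1.0 200 OK\r\n\r\ntest\r\n' | nc -l "$i" &
            nc_pid=$!
            sleep 0.5                           # give nc a moment to die if the port is taken
            if kill -0 "$nc_pid" 2>/dev/null; then
                port=$i                         # still running, so the bind presumably worked
                break
            fi
        done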

    Read the article

  • How do "and" and "or" work when combined in one statement?

    - by orokusaki
    For some reason this function confused me:

        def protocol(port):
            return port == "443" and "https://" or "http://"

    Can somebody explain the order of what's happening behind the scenes to make this work the way it does? Until I tried it, I understood it as either

    A)

        def protocol(port):
            if port == "443":
                if bool("https://"):
                    return True
                elif bool("http://"):
                    return True
            return False

    or B)

        def protocol(port):
            if port == "443":
                return True + "https://"
            else:
                return True + "http://"

    Is this some sort of special case in Python, or am I completely misunderstanding how statements work?
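
    A short sketch of what actually happens (a rephrasing, not from the post): `and` and `or` short-circuit and return one of their operands rather than a boolean, so `A and B or C` yields B when A is truthy (and B is truthy), and C otherwise.

        def protocol(port):
            # port == "443" and "https://"  ->  "https://" if the comparison is True,
            #                                   otherwise False (and short-circuits)
            # ... or "http://"              ->  passes "https://" through, or falls back
            #                                   to "http://" when the left side was False
            if port == "443":
                return "https://"
            return "http://"

    The idiomatic modern spelling is the conditional expression: return "https://" if port == "443" else "http://".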

    Read the article

  • .htaccess mod_rewrite subdomains

    - by Aaron
    .htaccess:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{HTTP_HOST} ^site\.com$ [NC]
        RewriteRule ^(.*)$ http://www.site.com/$1 [L,R=301]
        RewriteRule ^new/?$ index.php?section=new

    This works great and all, but I have recently implemented a subdomain m.site.com which reads off a /mobile directory. When accessing m.site.com/new it will not display anything and just returns a server error. What can I do to correct this problem? Basically, I want

        http://m.site.com/new

    to have the same effect as

        http://www.site.com/new
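
    A hedged guess at a fix (the file location and rules below are assumptions, not from the post): if m.site.com's document root is the /mobile directory, the root .htaccess never runs for that host, so /mobile needs its own rewrite rule for the pretty URL.

        # /mobile/.htaccess  (assumed location)
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^new/?$ index.php?section=new [L]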

    Read the article

  • Twitter rate limit

    - by raulriera
    Hi, I am whitelisted on Twitter, and I have this "traffic heavy" application that just makes 2 requests to find out how many users 2 people have... the traffic is currently killing the 150-requests-per-hour limit. How do I authenticate my requests so that Twitter knows I am whitelisted?

        http://api.twitter.com/1/users/show.xml?screen_name=chavezcandanga
        http://api.twitter.com/1/users/show.xml?screen_name=luischataing

    I wish to authenticate those for this simple project: http://250mil.com Thanks!

    Read the article

  • How to show the animation without exposing the code?

    - by bonCodigo
    There's an animation done in jQuery within jsFiddle. I do not have a website as of now to "implant" it on and share the URL. I also can't give out the jsFiddle. So how can I share my animation with the audience without showing the code? Does GitHub or any other tool allow locking the code and showing the final product without having a website, while still giving me a URL for the audience to view it? Excuse the rookie question in this context, as I am still new to web stuff.

    EDIT: jsFiddle shows 3 code windows alongside the result: HTML, CSS, JS. My requirement is to show only the results window to the audience and, by all means, to hide the code and any URL that leads to the code. An ideal solution shows just the result, with a URL that is encrypted (at best).

    Read the article

  • My mobile does not cache, but I have a manifest file...

    - by Ploetzeneder
    Hello, I have now put the site at http://www.ploetzeneder.eu/Dateien/test/index4.html and the manifest is at http://www.ploetzeneder.eu/Dateien/test/app-cache-demo.manifest. Why does it not work? The webserver with the relevant problem has this URL: http://www.pharao.mobi/WebAppproblem/ (Username is the username, Passwort is the password). The problem is on index4.html, where all images should be cached but are not.
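
    A minimal application-cache sketch (generic HTML5 appcache boilerplate, not taken from the site above; the image names are placeholders): the page must reference the manifest on its <html> element, and the manifest has to be served with the text/cache-manifest MIME type, otherwise browsers silently ignore it.

        <!-- index4.html -->
        <html manifest="app-cache-demo.manifest">

        # app-cache-demo.manifest
        CACHE MANIFEST
        # v1 - bump this comment to force a re-download
        images/photo1.jpg
        images/photo2.jpg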

    Read the article

  • Using system time directly to get random numbers

    - by Richard Mar.
    I had to return a random element from an array, so I came up with this placeholder:

        return codes[(int) (System.currentTimeMillis() % codes.length - 1)];

    Now that I think of it, I'm tempted to use it in real code. The Random() seeder uses the system time as the seed in most languages anyway, so why not use that time directly? As a bonus, I'm free from the worry about the non-random lower bits of many RNGs. Is this hack going to come back to bite me? (The language is Java, if that's relevant.)
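
    A sketch of the usual alternative (the class and helper names are illustrative): one shared Random instance, which avoids both the modulo-of-a-clock bias and the off-by-one lurking in `% codes.length - 1` (that expression can evaluate to -1).

        import java.util.Random;

        class Picker {
            // one generator for the whole class; reseeding on every call is what you want to avoid
            private static final Random RNG = new Random();

            static <T> T randomElement(T[] codes) {
                return codes[RNG.nextInt(codes.length)];   // uniform over 0..length-1
            }
        }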

    Read the article

  • Java - Regex problem

    - by Yatendra Goel
    I have a list of URLs of these types:

        http://www.abc.com/pk/etc
        http://www.abc.com/pk/etc/
        http://www.abc.com/pk/etc/etc

    where etc can be anything. I want to match only those URLs that contain www.abc.com/pk/etc or www.abc.com/pk/etc/
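
    A hedged sketch (assuming "etc" means exactly one path segment after /pk/, i.e. deeper paths should be rejected):

        import java.util.regex.Pattern;

        public class UrlFilter {
            // one non-empty segment after /pk/, with an optional trailing slash
            private static final Pattern PK_PAGE =
                    Pattern.compile("^https?://www\\.abc\\.com/pk/[^/]+/?$");

            public static void main(String[] args) {
                System.out.println(PK_PAGE.matcher("http://www.abc.com/pk/etc/").matches());    // true
                System.out.println(PK_PAGE.matcher("http://www.abc.com/pk/etc/etc").matches()); // false
            }
        }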

    Read the article

  • Viable development for iPhone after 3.3.1 change?

    - by Kevin
    With the latest changes to the developer agreement by Apple, how inherent is the risk of using any kind of framework to develop apps for its devices now? Should shops risk using things like MonoTouch, Three20, or Appcelerator after this change? How are some iPhone/iPad developers here handling it?

        http://www.pcworld.com/article/193916/apples_new_iphone_app_policy_unreasonable_and_unjustifiable.html
        http://www.wired.com/gadgetlab/2010/04/iphone-flash-policy-steve-jobs/
        http://37signals.com/svn/posts/2273-five-rational-arguments-against-apples-331-policy

    Read the article

  • SVN update returns nothing, while it should

    - by user325483
    Hi everyone, first some background information: I've set up my SVN repository on my local server at home using VisualSVN Server. Using SSH (or via a PHP/shell script), I am able to check out a folder from this repository to the webserver; all goes well. Updates and other svn commands also execute normally and return their messages. Now comes the problem, which I've been struggling with for a few days now. Before I execute the checkout command

        svn co http://server_home/folder

    I want to make sure no conflicts are going to happen, so I execute

        svn status [folder_on_webserver]

    But this doesn't return the result I expected; it returns nothing. When I execute

        svn status --show-updates [folder_on_webserver]

    it returns the following:

            *            newfolder
            *       13   anotherfolder
            *       13   yetanotherfolder
            *       13   .
        Status against revision:     16

    As you can see, it is missing the svn status codes (A, U, D). Does somebody know why the svn update command and the svn codes don't work?
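
    A short note on what those outputs mean, with the commands spelled out (a summary, not from the post): plain `svn status` reports only local modifications, so an unchanged working copy prints nothing, and the letter codes only appear when something has actually changed.

        # nothing printed = no local changes (this is the expected behaviour)
        svn status folder_on_webserver

        # -u / --show-updates contacts the repository; the '*' column marks items
        # that a subsequent `svn update` would change
        svn status -u folder_on_webserver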

    Read the article

  • How to add a new page in Lift framework

    - by portoalet
    How can I add a new page to the webapp directory in Lift so that it can be accessed by users? Currently only index.html can be accessed, through http://localhost:8080/ or http://localhost:8080/index.html. Say I add a static file newpage.html into the webapp dir; what do I have to do so users can access it through http://localhost:8080/newpage.html?
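
    A hedged sketch of the usual Lift answer (API names as in Lift 2.x; the menu labels are illustrative): pages under webapp/ are only served if they appear in the SiteMap built in Boot.

        // bootstrap/liftweb/Boot.scala
        import net.liftweb.http.LiftRules
        import net.liftweb.sitemap._
        import net.liftweb.sitemap.Loc._

        class Boot {
          def boot {
            val entries = List(
              Menu(Loc("Home",    List("index"),   "Home")),
              Menu(Loc("NewPage", List("newpage"), "New Page"))  // serves webapp/newpage.html
            )
            LiftRules.setSiteMap(SiteMap(entries: _*))
          }
        }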

    Read the article

  • Apache finds non-existent files

    - by Adam
    My web server has a peculiar behavior. Let's say my website URL is http://my-domain.com and I have an accessible file http://my-domain.com/blah.jpg on it. For some reason I'm also able to access the file using http://my-domain.com/blah. It happens with any type of file. Do you have any idea how I can fix this?
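
    A hedged guess at the cause: this looks like mod_negotiation's MultiViews, which maps an extensionless request onto a matching file; if so, turning it off stops the guessing.

        # in the vhost/server config or the site's .htaccess
        Options -MultiViews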

    Read the article

  • Problem with routes and mod_rewrite (if not absolute, I don't get CSS, JS or images)

    - by Toni Michel Caubet
    Hi there! I updated the code on my website to what I think is a 'better' version. It works fine, but when I try to implement the friendly URL and load it, the page works but with no CSS, JavaScript or images. If I correct the route for the CSS to http://website/css/style.css (instead of ./css/style.css), I do see the CSS properly loaded. Any idea why? Example: http://keepyourlinks.com/link1.php?id=25 vs http://keepyourlinks.com/keep/25/series-yonkis (I updated the route of the CSS, but the JavaScript is missing and so are the images). I really would like not to have to correct all the routes :(
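
    A minimal sketch of one common fix (not from the original post): a <base> element in the page head, so the existing relative paths resolve against the site root instead of the rewritten /keep/25/... path.

        <head>
            <!-- every relative URL on the page now resolves from the site root -->
            <base href="http://keepyourlinks.com/">
            <link rel="stylesheet" href="css/style.css">
        </head>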

    Read the article

  • ASP.NET MVC & Windsor.Castle: working with HttpContext-dependent services

    - by Igor Brejc
    I have several dependency-injection services which depend on things like the HTTP context. Right now I'm configuring them as singletons in the Windsor container in the Application_Start handler, which is obviously a problem for such services. What is the best way to handle this? I'm considering making them transient and then releasing them after each HTTP request. But what is the best way/place to inject the HTTP context into them? The controller factory or somewhere else?
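
    A hedged sketch of the usual Windsor answer (service names here are hypothetical; the fluent registration matches Windsor 2.x, and the per-web-request lifestyle also needs its HTTP module registered in web.config): give context-dependent components a per-web-request lifestyle instead of making them singletons.

        // in the container setup, e.g. Application_Start
        // (using Castle.MicroKernel.Registration for Component.For)
        container.Register(
            Component.For<IMyContextDependentService>()
                     .ImplementedBy<MyContextDependentService>()
                     .LifeStyle.PerWebRequest);   // one instance per HTTP request

        // consumers take IMyContextDependentService as a constructor dependency;
        // Windsor resolves a fresh instance for every request, so HttpContext.Current
        // read inside the service always belongs to the current request.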

    Read the article

  • Mod Rewrite question

    - by delimit
    I can't seem to get Example 1 to turn into Example 2 using mod_rewrite. Can someone help me out?

    Example 1:

        http://www.example.com/info/index.php?uid=123

    Example 2:

        http://www.example.com/123

    Mod rewrite code:

        Options +FollowSymLinks
        Options -Indexes
        RewriteEngine on
        RewriteBase /info
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/info/$1 [R=301,L]
        RewriteRule ^([^/]*)$ /info/index.php?uid=$1 [L]
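
    A hedged sketch of one way to get there (assumes the rules can live in the document root's .htaccess, which is where a URL like /123 is actually handled):

        Options +FollowSymLinks -Indexes
        RewriteEngine on
        RewriteBase /

        # canonical host, without bolting /info onto the public URL
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

        # /123  ->  /info/index.php?uid=123  (only for purely numeric paths)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([0-9]+)/?$ /info/index.php?uid=$1 [L]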

    Read the article

  • Parsing XML elements with dynamic namespace prefix in PHP

    - by BugKiller
    I have the following XML (you could say SOAP request):

        <SOAPENV:Envelope xmlns:SOAPENV="http://schemas.xmlsoap.org/soap/envelope/"
                          xmlns:NS="http://xyz.gov/headerschema">
            <SOAPENV:Header>
                <NS:myHeader>
                    <NS:SourceID>223423</NS:SourceID>
                </NS:myHeader>
            </SOAPENV:Header>
        </SOAPENV:Envelope>

    I use the following code and it works fine:

        <?php
        $myRequest = '<SOAPENV:Envelope xmlns:SOAPENV="http://schemas.xmlsoap.org/soap/envelope/"
                                        xmlns:NS="http://xyz.gov/headerschema">
            <SOAPENV:Header>
                <NS:myHeader>
                    <NS:SourceID>223423</NS:SourceID>
                </NS:myHeader>
            </SOAPENV:Header>
        </SOAPENV:Envelope>';

        $xml = simplexml_load_string($myRequest, NULL, NULL, "http://schemas.xmlsoap.org/soap/envelope/");
        $namespaces = $xml->getNameSpaces(true);
        $soapHeader = $xml->children($namespaces['SOAPENV'])->Header;
        $myHeader = $soapHeader->children($namespaces['NS'])->myHeader;
        echo (string)$myHeader->SourceID;
        ?>

    The problem: I know the prefixes (SOAPENV and NS), but clients could change the prefixes to whatever they want, so they may send me an XML document that has, say, MY-SOAPENV and MY-NS prefixes.

    My question: How can I handle this? Since the namespace prefixes are not static, how can I parse it? Thanks
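
    A hedged sketch of the usual fix (variable names are additions): look elements up by namespace URI rather than by prefix, since the URIs stay the same no matter what prefixes a client chooses.

        <?php
        $SOAP_NS   = 'http://schemas.xmlsoap.org/soap/envelope/';
        $HEADER_NS = 'http://xyz.gov/headerschema';

        $xml = simplexml_load_string($myRequest, null, 0, $SOAP_NS);

        // children() accepts the namespace URI directly, so the prefix never matters
        $soapHeader = $xml->children($SOAP_NS)->Header;
        $myHeader   = $soapHeader->children($HEADER_NS)->myHeader;

        echo (string) $myHeader->SourceID;
        ?>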

    Read the article

  • Can you do this with Hudson?

    - by damian
    I want to create a Hudson job that takes an id as a parameter and uses that id to calculate the svn-repo path. Where I work you have an svn path for every issue that you resolve, and then all the issues are joined into a single svn path. What I want to do is run static code analysis on the individual issues. So I'm thinking of maybe having an Ant build.xml that I use for every issue, and then parametrizing the job with the issue id. I have tried to achieve that, but the svn path doesn't replace the parameter. I have tried with #issueId, %issueId%, ${issueId} and ${env.issueId} without success. Errors pop up like:

        Location 'http://svn-path:8181/svn/devSet/issues/${env.chuid}' does not exist
        Checking out a fresh workspace because C:\Documents and Settings\dnoseda\.hudson\jobs\test\workspace\${env.chuid} doesn't exist
        Checking out http://svn-path:8181/svn/devSet/issues/${env.chuid}
        ERROR: Failed to check out http://svn-path:8181/svn/devSet/issues/${env.chuid}
        org.tmatesoft.svn.core.SVNException: svn: '/svn/!svn/bc/46190/devSet/issues/$%7Benv.chuid%7D' path not found: 404 Not Found (http://svn-path:8181)
            at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64)
            at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
            at

    I am starting to think that I cannot do what I want. Do you know how I can set up the correct configuration to achieve this? Thanks for any help.

    Edit: The section of the job configuration where I want to put this parameter is this:

        <scm class="hudson.scm.SubversionSCM">
          <locations>
            <hudson.scm.SubversionSCM_-ModuleLocation>
              <remote>http://svn-path:8181/svn/devSet/issues/${env.issueid}</remote>
            </hudson.scm.SubversionSCM_-ModuleLocation>
          </locations>
        </scm>
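
    A hedged guess at the configuration (whether variable expansion works in the SCM URL depends on the Subversion plugin version; the parameter name below is illustrative): make the job parameterized with a string parameter, then reference it in the repository URL directly, without the env. prefix.

        <!-- job config.xml: declare the parameter ... -->
        <hudson.model.ParametersDefinitionProperty>
          <parameterDefinitions>
            <hudson.model.StringParameterDefinition>
              <name>issueid</name>
            </hudson.model.StringParameterDefinition>
          </parameterDefinitions>
        </hudson.model.ParametersDefinitionProperty>

        <!-- ... and use it in the SCM location -->
        <scm class="hudson.scm.SubversionSCM">
          <locations>
            <hudson.scm.SubversionSCM_-ModuleLocation>
              <remote>http://svn-path:8181/svn/devSet/issues/${issueid}</remote>
            </hudson.scm.SubversionSCM_-ModuleLocation>
          </locations>
        </scm>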

    Read the article

  • GET command is giving two kinds of output, why?

    - by developer
    I am using the GET command to get the content of a page. When I write the same command at a shell prompt it gives the correct result, but when I use it in a PHP file it sometimes gives the correct result and sometimes only half of the content, i.e. only the end portion. I am using the following command in a shell script:

        GET http://www.abc.com/ -H "Referer:http://www.abcd.com/"

    and the following in a PHP file:

        $data=exec('GET http://www.abc.com/ -H "Referer:http://www.abcd.com/"');
        echo $data;

    Now please tell me why this command is not giving the full content of the page when I use it in the PHP file.
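
    A likely explanation, not stated in the post: PHP's exec() returns only the last line of the command's output, so whatever prints last is all that comes back. shell_exec() (or exec()'s second, by-reference output array) returns everything.

        // returns the whole body of the page as one string
        $data = shell_exec('GET http://www.abc.com/ -H "Referer:http://www.abcd.com/"');
        echo $data;

        // alternative: collect every output line into an array
        exec('GET http://www.abc.com/ -H "Referer:http://www.abcd.com/"', $lines);
        echo implode("\n", $lines);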

    Read the article
