Search Results

Search found 57458 results on 2299 pages for 'http response codes'.

Page 370 of 2299

  • GET command is giving two kinds of output, why?

    - by developer
    I am using the GET command to fetch the content of a page. When I run the command at a shell prompt it gives the correct result, but when I use it in a PHP file it sometimes gives the correct result and sometimes only half of the content, i.e. only the end portion. I am using the following command in the shell script: GET http://www.abc.com/ -H "Referer:http://www.abcd.com/" and the following in the PHP file: $data=exec('GET http://www.abc.com/ -H "Referer:http://www.abcd.com/"'); echo $data; Please tell me why this command does not give the full content of the page when I use it from PHP.
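
    A likely cause, for what it's worth: PHP's exec() returns only the last line of a command's output, so everything before the final line is silently dropped. A minimal sketch that captures the whole response instead, reusing the command from the question:

        <?php
        // exec() returns only the LAST line of output; collect every line
        // into an array instead, or use shell_exec() for the raw string.
        $lines = array();
        exec('GET http://www.abc.com/ -H "Referer:http://www.abcd.com/"', $lines);
        $data = implode("\n", $lines);
        echo $data;

        // Equivalent one-liner:
        // $data = shell_exec('GET http://www.abc.com/ -H "Referer:http://www.abcd.com/"');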

    Read the article

  • Getting an array back from php using $.ajax

    - by Wurlitzer
    A PHP script returns this array (which has been passed through json_encode()): [{name:"a1",path:"b1"},{name:"a2",path:"b2"}] I use the following function to retrieve the array in jQuery: $.ajax({ type: "POST", url: "functions.php", data: "action=" + something, cache: false, success: function(response) { alert(response); } }); The problem is that I get the array back as a string: (new String("[{name:"a1",path:"b1"},{name:"a2",path:"b2"}}]")) How can I get it as a JavaScript array? Help would be much appreciated.
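
    One hedged observation: real json_encode() output always quotes its keys, so the {name:"a1",...} sample suggests the response may be hand-built rather than valid JSON; that is worth confirming first. Assuming the server does emit valid JSON, telling jQuery to parse it is enough. A minimal sketch:

        $.ajax({
            type: "POST",
            url: "functions.php",
            data: "action=" + something,   // "something" as in the question
            cache: false,
            dataType: "json",              // parse the response as JSON
            success: function (response) {
                // response is now an array of objects, not a string
                alert(response[0].name);   // "a1"
            }
        });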

    Read the article

  • Problem with mod_rewrite for multi-lingual site

    - by Chris
    Hi everyone. I am currently developing a multi-lingual website. Users can access the front page using URLs in the format below: http://mydomain.com/en/ http://mydomain.com/fr/ The problem: a URL without the trailing "/" (http://mydomain.com/fr) causes a page-not-found error. Here is the rule: RewriteRule ^/?([^./]+)/(.*)$ $2?lang=$1 [L,QSA] Can anybody help? Thanks in advance
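
    One common fix, sketched under the assumption that the rule lives in a .htaccess file: redirect the bare two-letter language code to its slash-terminated form before the existing rule runs.

        RewriteEngine On

        # /en -> /en/ so the language rule below always sees the trailing slash
        RewriteRule ^([a-z]{2})$ /$1/ [R=301,L]

        # existing rule: /en/whatever -> whatever?lang=en
        RewriteRule ^/?([^./]+)/(.*)$ $2?lang=$1 [L,QSA]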

    Read the article

  • How do "and" and "or" work when combined in one statement?

    - by orokusaki
    For some reason this function confused me: def protocol(port): return port == "443" and "https://" or "http://" Can somebody explain the order of evaluation behind the scenes that makes this work the way it does? I understood it as one of these until I tried it: Either A) def protocol(port): if port == "443": if bool("https://"): return True elif bool("http://"): return True return False Or B) def protocol(port): if port == "443": return True + "https://" else: return True + "http://" Is this some sort of special case in Python, or am I completely misunderstanding how these expressions work?
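
    For reference, neither A nor B is what happens: Python's and/or return one of their operands rather than a boolean, and they short-circuit. A small sketch of the actual evaluation:

        # "x and y" evaluates to y if x is truthy, otherwise to x.
        # "x or y"  evaluates to x if x is truthy, otherwise to y.
        def protocol(port):
            # port == "443" and "https://"  ->  "https://" when the comparison
            #                                   is True, False otherwise
            # ... or "http://"              ->  the left side if truthy,
            #                                   else "http://"
            return port == "443" and "https://" or "http://"

        print(protocol("443"))  # https://
        print(protocol("80"))   # http://

        # The modern, safer spelling of the same idiom:
        def protocol2(port):
            return "https://" if port == "443" else "http://"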

    Read the article

  • How do I make my browser brute-force an input field?

    - by user210904
    Hi, there's a website I frequent that gives out codes such as GD??Q-TPY32-TPTT3-9CM9P-F2QMQ and hints that the ?? is a number and a letter. If you're the first to unlock the code, you can redeem the modest prize. The obvious way to solve this is to brute-force the code, but I don't want to sit in front of the computer for an hour manually inputting all 10 * 26 = 260 combinations. Is there a way to tell my browser to input these codes? (Assume that each 5-character block is an individual text field.) Or is there a special browser that offers some sort of macro feature? Thanks.
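
    As a sketch only: the candidate values can be generated in the browser's JavaScript console; actually filling the field and submitting depends on the site's form, so the selector at the end is purely hypothetical.

        // Generate the 10 x 26 = 260 candidates for the first block,
        // following the "GD??Q" pattern from the question.
        var combos = [];
        var digits = "0123456789", letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
        for (var d = 0; d < digits.length; d++) {
            for (var l = 0; l < letters.length; l++) {
                combos.push("GD" + digits[d] + letters[l] + "Q");
            }
        }
        console.log(combos.length);   // 260
        // Hypothetical: feed each candidate into the first text field, e.g.
        // document.querySelector("#block1").value = combos[i];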

    Read the article

  • Binding on a port with netpipes/netcat

    - by mindas
    I am trying to write a simple bash script that listens on a port and responds with a trivial HTTP response. My specific issue is that I am not sure whether a given port is available, so on bind failure I fall back to the next port until a bind succeeds. So far the easiest way I found to achieve this was something like: for (( i=$PORT_BASE; i < $(($PORT_BASE+$PORT_RANGE)); i++ )) do if [ $DEBUG -eq 1 ] ; then echo trying to bind on $i fi /usr/bin/faucet $i --out --daemon echo test 2>/dev/null if [ $? -eq 0 ] ; then #success? port=$i if [ $DEBUG -eq 1 ] ; then echo "bound on port $port" fi break fi done Here I am using faucet from the netpipes Ubuntu package. The problem is that if I simply print "test" to the output, curl complains about a non-standard HTTP response (error code 18). That's fair enough, as I don't print an HTTP-compatible response. But if I replace echo test with echo -ne "HTTP/1.0 200 OK\r\n\r\ntest", curl still complains: user@server:$ faucet 10020 --out --daemon echo -ne "HTTP/1.0 200 OK\r\n\r\ntest" ... user@client:$ curl ip.of.the.server:10020 curl: (56) Failure when receiving data from the peer I think the problem lies in how faucet prints the response and handles the connection. For example, if I do the server side with netcat, curl works fine: user@server:$ echo -ne "HTTP/1.0 200 OK\r\n\r\ntest\r\n" | nc -l 10020 ... user@client:$ curl ip.of.the.server:10020 test user@client:$ I would be more than happy to replace faucet with netcat in my main script, but the problem is that I want to spawn an independent server process so that I can run the client from the same shell. faucet has a very handy --daemon parameter, as it forks to the background and I can use $? (the exit status code) to check whether the bind succeeded. If I were to use netcat for the same purpose, I would have to fork it with & and $? would not work. Does anybody know why faucet isn't responding correctly in this particular case, and/or can anyone suggest a solution? I am not married to either faucet or netcat, but I would like the solution to be implemented in bash or its utilities (as opposed to writing something in yet another scripting language such as Perl or Python).
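
    A hedged sketch of the netcat route: background nc, then probe whether the process survived long enough to have bound the port (the exit status of a backgrounded job tells you nothing directly). Note that flag syntax varies between netcat builds; traditional netcat wants nc -l -p $port where the BSD build wants nc -l $port, as in the question.

        #!/bin/bash
        for (( port = PORT_BASE; port < PORT_BASE + PORT_RANGE; port++ )); do
            printf 'HTTP/1.0 200 OK\r\n\r\ntest\r\n' | nc -l "$port" &
            nc_pid=$!
            sleep 0.2                        # give nc a moment to bind or die
            if kill -0 "$nc_pid" 2>/dev/null; then
                echo "bound on port $port (pid $nc_pid)"
                break
            fi
        done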

    Read the article

  • file_exists function not working in PHP

    - by Rakesh R Nair
    I have been trying to check whether a file exists in a directory; if it doesn't, I want to use a different image. But the file_exists function always returns false. The code I used is: while($r=mysql_fetch_row($res)) { if(!file_exists('http://localhost/dropbox/lib/admin/'.$r[5])) { $file='http://localhost/dropbox/lib/admin/images/noimage.gif'; } else $file='http://localhost/dropbox/lib/admin/'.$r[5];} The function returns false even when the file exists. I verified that by using <img src="<?php echo 'http://localhost/dropbox/lib/admin/'.$r[5]; ?>" /> which displayed the image correctly. Please, someone help me.
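
    The behaviour is explainable: file_exists() needs a stream wrapper that supports stat(), and PHP's http:// wrapper does not, so checking a URL always returns false even when the image is served fine. A sketch that checks the filesystem path and keeps the URL only for output (the DOCUMENT_ROOT mapping is an assumption about the layout):

        <?php
        $base_path = $_SERVER['DOCUMENT_ROOT'] . '/dropbox/lib/admin/';
        $base_url  = 'http://localhost/dropbox/lib/admin/';

        while ($r = mysql_fetch_row($res)) {
            if (!file_exists($base_path . $r[5])) {
                $file = $base_url . 'images/noimage.gif';
            } else {
                $file = $base_url . $r[5];
            }
        }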

    Read the article

  • Can you do this with Hudson?

    - by damian
    I want to create a Hudson job that takes an id as a parameter and uses that id to compute the SVN repository path. Where I work there is an SVN path for every issue you resolve, and all the issues are then merged into a single SVN path. What I want to do is run static code analysis on the individual issues, so I thought I could have one Ant build.xml that I use for every issue and parametrize the job with the issue id. I have tried to achieve that, but the parameter is not substituted into the SVN path. I have tried #issueId, %issueId%, ${issueId} and ${env.issueId} without success. It fails with errors like: Location 'http://svn-path:8181/svn/devSet/issues/${env.chuid}' does not exist Checking out a fresh workspace because C:\Documents and Settings\dnoseda\.hudson\jobs\test\workspace\${env.chuid} doesn't exist Checking out http://svn-path:8181/svn/devSet/issues/${env.chuid} ERROR: Failed to check out http://svn-path:8181/svn/devSet/issues/${env.chuid} org.tmatesoft.svn.core.SVNException: svn: '/svn/!svn/bc/46190/devSet/issues/$%7Benv.chuid%7D' path not found: 404 Not Found (http://svn-path:8181) at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64) at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51) at I am starting to think that I cannot do what I want. Do you know how to set up the configuration to achieve this? Thanks for any help. Edit: The section of the job configuration where I want to use the parameter is this: <scm class="hudson.scm.SubversionSCM"> <locations> <hudson.scm.SubversionSCM_-ModuleLocation> <remote>http://svn-path:8181/svn/devSet/issues/${env.issueid}</remote> </hudson.scm.SubversionSCM_-ModuleLocation> </locations>
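
    A sketch of one direction to try, with two stated assumptions: the job is a parameterized build with a string parameter named issueId, and the installed Subversion plugin is recent enough to expand build parameters (referenced as ${issueId}, not ${env.issueId}) inside module locations:

        <hudson.model.ParametersDefinitionProperty>
          <parameterDefinitions>
            <hudson.model.StringParameterDefinition>
              <name>issueId</name>
              <description>issue whose SVN path should be checked out</description>
            </hudson.model.StringParameterDefinition>
          </parameterDefinitions>
        </hudson.model.ParametersDefinitionProperty>
        <scm class="hudson.scm.SubversionSCM">
          <locations>
            <hudson.scm.SubversionSCM_-ModuleLocation>
              <remote>http://svn-path:8181/svn/devSet/issues/${issueId}</remote>
            </hudson.scm.SubversionSCM_-ModuleLocation>
          </locations>
        </scm>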

    Read the article

  • Unable to fetch JSON data from a remote URL

    - by user3772611
    I am racking my brain trying to solve this. I am unable to fetch JSON data from a remote REST API. I need to fetch the JSON data and display the "html_url" field from it on my website. I read that you need the charset and content type below for fetching JSON. <html> <head> <meta charset="utf-8"> <meta http-equiv="content-type" content="application/json"> </head> <body> <p>My Instruments page</p> <ul></ul> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js"></script> <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.10.3/jquery-ui.min.js"></script> <script type="text/javascript" src="http://code.jquery.com/jquery-1.11.0.min.js"></script> <script type="text/javascript"> $(document).ready(function () { alert("Inside the script"); $.getJSON(" https://pki.zendesk.com/api/v2/help_center/sections/200268985/articles.json", function (obj) { alert("Inside the getJSON"); $.each(obj, function (key, value) { $("ul").append("<li>" + value.html_url + "</li>"); }); }); }); </script> </body> </html> I referred to the following example on jsFiddle: http://jsfiddle.net/2xTjf/29/ The "http://date.jsontest.com" URL used in that example also doesn't work in my code. The first alert pops up, but not the second one. I am a novice at JSON/jQuery. I used jsonlint.com to check whether the response is valid JSON, and it came out valid. I tested it with the Chrome REST client too. What am I missing here? Help me, please! Thanks in anticipation.
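
    A few things stand out, offered as a likely diagnosis rather than a certainty: the page loads two conflicting jQuery versions (1.3 and 1.11); the URL string has a stray leading space; the <meta> content-type of application/json belongs on the API response, not on an HTML page; and above all, a cross-domain $.getJSON is blocked by the browser's same-origin policy unless the server allows it via CORS or JSONP, which is why the second alert never fires (failures go to the error handler, never success). A sketch with an error handler attached; the obj.articles shape follows Zendesk's Help Center responses but should be verified against the API docs:

        $(document).ready(function () {
            $.ajax({
                url: "https://pki.zendesk.com/api/v2/help_center/sections/200268985/articles.json",
                dataType: "json",   // use "jsonp" only if the API supports it
                success: function (obj) {
                    // Zendesk wraps the list in an "articles" property
                    $.each(obj.articles, function (i, article) {
                        $("ul").append("<li>" + article.html_url + "</li>");
                    });
                },
                error: function (xhr, status, err) {
                    // cross-origin failures surface here
                    alert(status + " " + err);
                }
            });
        });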

    Read the article

  • Getting Google repositories to work with apt-get on Ubuntu Hardy

    - by Justin
    I've installed Google Chrome on Hardy via the .deb file and would like to configure apt-get for automatic updates. (I have another machine running Ubuntu Karmic where this works fine; apt-get knows the package as 'google-chrome'. I'm now using a Dell Mini 10 with Ubuntu 8.04 LTS installed.) As part of the .deb install, two entries were added to the third-party software sources tab: http://dl.google.com/linux/deb stable main http://dl.google.com/linux/deb stable non-free main However, if I check for updates with either of these enabled, I get the following error: Failed to fetch http://dl.google.com/linux/deb/dists/stable/Release Unable to find expected entry main/binary-lpia/Packages in Meta-index file (malformed Release file?) There is a thread here which indicates others have had the same problem: http://www.google.co.uk/support/forum/p/Chrome/thread?tid=097d103f87b49abe&hl=en It references a further thread: http://code.google.com/p/chromium/issues/detail?id=38608 which suggests the problem has been fixed. Despite this, I remain unable to get it to work, and none of the suggested workarounds seem to work either. Ideas? Thanks.
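
    The error message itself narrows this down: apt is asking for main/binary-lpia/Packages, i.e. the lpia architecture that Dell's Hardy netbook images use, and Google's repository only publishes indexes for i386/amd64. A quick check, commands only; the diagnosis is the point:

        # confirm the machine's package architecture; Dell Mini Hardy
        # installs report "lpia", which Google's repo does not serve
        dpkg --print-architecture

        # see which sources reference Google's repository
        grep -r "dl.google.com" /etc/apt/sources.list /etc/apt/sources.list.d/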

    Read the article

  • How to show the animation without exposing the code?

    - by bonCodigo
    There's an animation done in jQuery on jsFiddle. I do not currently have a website to embed it in and share the URL, and I can't share the jsFiddle itself. So how can I show my animation to an audience without exposing the code? Does GitHub or any other tool allow locking the code and showing only the final product without my having a website, while still giving me a URL for the audience to view it? I apologize for the rookie question; I am still new to web development. EDIT: jsFiddle shows three code panes alongside the result: HTML, CSS, JS. My requirement is to show only the results pane to the audience and by all means hide the code and any URL that leads to it. The ideal solution shows the result at a URL that is encrypted (at best).

    Read the article

  • file_get_contents returns 403 forbidden

    - by absk
    I am trying to build a site scraper. I made it on my local machine, where it works fine. When I execute the same code on my server, it shows a 403 Forbidden error. I am using the PHP Simple HTML DOM Parser. The error I get on the server is this: Warning: file_get_contents(http://example.com/viewProperty.html?id=7715888) [function.file-get-contents]: failed to open stream: HTTP request failed! HTTP/1.1 403 Forbidden in /home/scraping/simple_html_dom.php on line 40 The lines of code triggering it are: $url="http://www.example.com/viewProperty.html?id=".$id; $html=file_get_html($url); I have checked php.ini on the server and allow_url_fopen is On. A possible solution could be to use curl, but I need to know where I am going wrong.
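
    A common cause for a 403 that appears only on the server is the target site filtering requests by header, typically the User-Agent. A sketch that sends a browser-like User-Agent through a stream context; the header value is illustrative, and whether this site filters that way is an assumption:

        <?php
        $context = stream_context_create(array(
            'http' => array(
                'header' => "User-Agent: Mozilla/5.0 (compatible; MyScraper/1.0)\r\n",
            ),
        ));
        $url  = "http://www.example.com/viewProperty.html?id=" . $id;
        $html = file_get_contents($url, false, $context);
        // hand the raw string to Simple HTML DOM from here:
        // $dom = str_get_html($html);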

    Read the article

  • Using system time directly to get random numbers

    - by Richard Mar.
    I had to return a random element from an array, so I came up with this placeholder: return codes[(int) (System.currentTimeMillis() % codes.length - 1)]; Now that I think of it, I'm tempted to use it in real code. The Random() seeder uses the system time as its seed in most languages anyway, so why not use that time directly? As a bonus, I'm free from worrying about the non-random lower bits of many RNGs. Is this hack going to come back to bite me? (The language is Java, if that's relevant.)
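
    Two concrete reasons it bites, plus a sketch: '%' binds tighter than '-', so the expression is (millis % codes.length) - 1 and yields index -1 whenever the milliseconds are an exact multiple of the length; and any two calls within the same millisecond return the same element, so the values are correlated rather than random.

        import java.util.Random;

        public class Picker {
            private static final Random RNG = new Random(); // seeded once, at startup

            static <T> T pick(T[] codes) {
                // uniform over [0, codes.length): no off-by-one, no time correlation
                return codes[RNG.nextInt(codes.length)];
            }
        }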

    Read the article

  • Parsing XML elements with dynamic namespace prefix in PHP

    - by BugKiller
    I have the following XML (you could say a SOAP request): <SOAPENV:Envelope xmlns:SOAPENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:NS="http://xyz.gov/headerschema" > <SOAPENV:Header> <NS:myHeader> <NS:SourceID>223423</NS:SourceID> </NS:myHeader> </SOAPENV:Header> </SOAPENV:Envelope> I use the following code and it works fine: <?php $myRequest ='<SOAPENV:Envelope xmlns:SOAPENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:NS="http://xyz.gov/headerschema" > <SOAPENV:Header> <NS:myHeader> <NS:SourceID>223423</NS:SourceID> </NS:myHeader> </SOAPENV:Header> </SOAPENV:Envelope>'; $xml = simplexml_load_string($myRequest, NULL, NULL, "http://schemas.xmlsoap.org/soap/envelope/"); $namespaces = $xml->getNameSpaces(true); $soapHeader = $xml->children($namespaces['SOAPENV'])->Header; $myHeader = $soapHeader->children($namespaces['NS'])->myHeader; echo (string)$myHeader->SourceID; ?> The problem: I know the prefixes (SOAPENV + NS), but clients could change the prefixes to whatever they want, so they may send me an XML document that uses (MY-SOAPENV + MY-NS) prefixes. My question: since the namespace prefixes are not static, how can I parse the document? Thanks
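
    The prefixes are free to vary, but the namespace URIs are part of the contract, and SimpleXML's children() accepts a URI directly, so the prefix lookup can be dropped altogether. A minimal sketch:

        <?php
        $soapNs   = 'http://schemas.xmlsoap.org/soap/envelope/';
        $headerNs = 'http://xyz.gov/headerschema';

        $xml        = simplexml_load_string($myRequest);
        $soapHeader = $xml->children($soapNs)->Header;   // any prefix works
        $myHeader   = $soapHeader->children($headerNs)->myHeader;
        echo (string) $myHeader->SourceID;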

    Read the article

  • URL rewrite to remove parameters

    - by samoyed
    Hello to all. I'm working on a site where all the pages are actually index.php plus a 'page' parameter that is analyzed and loads the appropriate template and content. The homepage URL is: http://www.some_site.com/?page=homepage 1. I was asked to "change" the homepage URL to: http://www.some_site.com Can I use URL rewriting and .htaccess for that, and if so, what should I write there? Working on my local machine (mod_rewrite is enabled), I tried this code: <IfModule mod_rewrite.c> Options +FollowSymLinks RewriteEngine On RewriteRule /index.php /index.php?page=homepage </IfModule> I would still need the 'page' parameter to be available to the PHP code, of course, so I can load the template and CSS files. 2. It would be nice for other pages (not the homepage) to be converted from (for example) http://www.some_site.com/?page=products to: http://www.some_site.com/products This is less crucial. Thanks in advance and have a nice day :-)
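
    A sketch covering both parts, assuming a .htaccess at the site root: an empty-pattern match serves the homepage, and a catch-all maps /products to ?page=products while leaving real files and directories alone.

        <IfModule mod_rewrite.c>
        Options +FollowSymLinks
        RewriteEngine On

        # 1. bare domain -> homepage (QSA keeps any extra query parameters)
        RewriteRule ^$ index.php?page=homepage [L,QSA]

        # 2. /products -> index.php?page=products
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([a-zA-Z0-9_-]+)/?$ index.php?page=$1 [L,QSA]
        </IfModule>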

    Read the article

  • Problem with routes and mod_rewrite (if not absolute, I don't get CSS, JS or images)

    - by Toni Michel Caubet
    Hi there! I updated the code of my website to what I think is a 'better' version, and it works fine, but when I implement the friendly URLs and load a page, it works without CSS, JavaScript or images. If I correct the route for the CSS to http://website/css/style.css (instead of ./css/style.css), the CSS does load properly. Any idea why? Example: http://keepyourlinks.com/link1.php?id=25 vs http://keepyourlinks.com/keep/25/series-yonkis (I updated the route of the CSS, but the JavaScript and the images are still missing.) I would really like not to have to correct all the routes :(
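
    The mechanics, for the record: relative URLs such as ./css/style.css resolve against the rewritten pretty URL, so under /keep/25/series-yonkis the browser requests /keep/25/css/style.css, which doesn't exist. Rather than correcting every route, one fix is a single <base> element (or root-relative paths); a sketch, where js/app.js is a placeholder name:

        <head>
            <!-- every relative URL on the page now resolves against this -->
            <base href="http://keepyourlinks.com/">
            <link rel="stylesheet" href="css/style.css">
            <script src="js/app.js"></script>
        </head>
        <!-- alternative, without <base>: start each path with a slash,
             e.g. href="/css/style.css" -->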

    Read the article

  • How to solve "Only Web services with a [ScriptService] attribute on the class definition can be called from script"

    - by NevenHuynh
    I am attempting to use a web service that returns a POCO class (generated from an Entity Data Model) as JSON, calling the web service method with a jQuery AJAX call. But I get the error "Only Web services with a [ScriptService] attribute on the class definition can be called from script" and I am stuck on it. Here is my code: namespace CarCareCenter.Web.Admin.Services { /// <summary> /// Summary description for About /// </summary> [WebService(Namespace = "http://tempuri.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] [System.ComponentModel.ToolboxItem(false)] // To allow this Web Service to be called from script, using ASP.NET AJAX, uncomment the following line. // [System.Web.Script.Services.ScriptService] public class About : System.Web.Services.WebService { [ScriptMethod(ResponseFormat = ResponseFormat.Json)] [WebMethod] public static Entities.Category getAbout() { Entities.Category about = new Entities.Category(); using (var context = new CarCareCenterDataEntities()) { about = (from c in context.Categories where c.Type == "About" select c).SingleOrDefault(); } return about; } } } aspx page: <script type="text/javascript"> $(document).ready(function () { $.ajax({ type: 'POST', dataType: 'json', contentType: 'application/json; charset=utf-8', url: '/Services/About.asmx/getAbout', data: '{}', success: function (response) { var aboutContent = response.d; alert(aboutContent); $('#title-en').val(aboutContent.Name); $('#title-vn').val(aboutContent.NameVn); $('#content-en').val(aboutContent.Description); $('#content-vn').val(aboutContent.DescriptionVn); $('#id').val(aboutContent.CategoryId); }, failure: function (message) { alert(message); }, error: function (result) { alert(result); } }); $('#SaveChange').bind('click', function () { updateAbout(); return false; }); $('#Reset').bind('click', function () { getAbout(); return false; }) }); function updateAbout() { var abt = { "CategoryId": $('#id').val(), "Name": $('#title-en').val(), "NameVn": $('#title-vn').val(), "Description": $('#content-en').val(), "DescriptionVn": $('#content-vn').val() }; $.ajax({ type: "POST", url: "AboutManagement.aspx/updateAbout", data: JSON.stringify(abt), contentType: "application/json; charset=utf-8", dataType: "json", success: function (response) { var aboutContent = response.d; $('#title-en').val(aboutContent.Name); $('#title-vn').val(aboutContent.NameVn); $('#content-en').val(aboutContent.Description); $('#content-vn').val(aboutContent.DescriptionVn); }, failure: function (message) { alert(message); }, error: function (result) { alert(result); } }); } </script> Are there any approaches to solving this? Please help me. Thanks
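
    The error message is literal: the [System.Web.Script.Services.ScriptService] attribute, present but commented out in the code above, must be uncommented. One further caution: ASMX script services expose instance WebMethods, while static methods belong to page methods on an .aspx code-behind, so the static modifier should go too. A sketch of the corrected service:

        [WebService(Namespace = "http://tempuri.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        [System.Web.Script.Services.ScriptService]   // <-- required for script calls
        public class About : System.Web.Services.WebService
        {
            [WebMethod]
            [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
            public Entities.Category getAbout()      // instance method, not static
            {
                using (var context = new CarCareCenterDataEntities())
                {
                    return (from c in context.Categories
                            where c.Type == "About"
                            select c).SingleOrDefault();
                }
            }
        }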

    Read the article

  • My mobile does not cache, but I have a manifest file

    - by Ploetzeneder
    Hello, I have now put the site at: http://www.ploetzeneder.eu/Dateien/test/index4.html The manifest is here: http://www.ploetzeneder.eu/Dateien/test/app-cache-demo.manifest Why does it not work? The web server with the relevant problem has this URL: http://www.pharao.mobi/WebAppproblem/ (username: Username, password: Passwort). The problem is on index4.html, where all images should be cached but are not.
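
    The two most common reasons an HTML5 application cache is ignored, offered as things to verify: the manifest must be referenced from the <html> element, and the server must deliver it with the text/cache-manifest MIME type. A sketch:

        <!-- index4.html: reference the manifest on the root element -->
        <html manifest="app-cache-demo.manifest">

        <!-- .htaccess (Apache): register the manifest MIME type -->
        <!-- AddType text/cache-manifest .manifest -->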

    Read the article

  • Java - Regex problem

    - by Yatendra Goel
    I have a list of URLs of these types: http://www.abc.com/pk/etc http://www.abc.com/pk/etc/ http://www.abc.com/pk/etc/etc where etc can be anything. I want to match only those URLs that contain www.abc.com/pk/etc or www.abc.com/pk/etc/
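
    A sketch matching exactly one path segment after /pk/, with an optional trailing slash, so that deeper paths like /pk/etc/etc are rejected:

        import java.util.regex.Pattern;

        public class UrlFilter {
            private static final Pattern PK_URL =
                Pattern.compile("^https?://www\\.abc\\.com/pk/[^/]+/?$");

            public static void main(String[] args) {
                System.out.println(PK_URL.matcher("http://www.abc.com/pk/etc").matches());     // true
                System.out.println(PK_URL.matcher("http://www.abc.com/pk/etc/").matches());    // true
                System.out.println(PK_URL.matcher("http://www.abc.com/pk/etc/etc").matches()); // false
            }
        }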

    Read the article

  • Apache finds non-existent files

    - by Adam
    My web server has a peculiar behavior. Let's say my website's URL is http://my-domain.com and it has an accessible file at http://my-domain.com/blah.jpg. For some reason I'm also able to access that file via http://my-domain.com/blah. It happens with any type of file. Do you have any idea how I can fix this?
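
    This looks like Apache's content negotiation: with MultiViews enabled, a request for /blah makes mod_negotiation look for /blah.* and serve the best match, such as blah.jpg. If that is indeed the cause, disabling it in the vhost or a .htaccess is enough:

        # .htaccess or <Directory> block
        Options -MultiViews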

    Read the article

  • Twitter rate limit

    - by raulriera
    Hi, I am whitelisted on Twitter, and I have this "traffic heavy" application that just makes two requests to find out how many followers two people have... the traffic is currently exhausting the 150-requests-per-hour limit. How do I authenticate my requests so that Twitter knows I am whitelisted? http://api.twitter.com/1/users/show.xml?screen_name=chavezcandanga http://api.twitter.com/1/users/show.xml?screen_name=luischataing I want to authenticate those requests for this simple project: http://250mil.com Thanks!
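
    For context, hedged by the API conventions of the time: unauthenticated calls were rate-limited per IP, while the whitelisted allowance applied to requests authenticated as the whitelisted account, so each call must carry credentials. OAuth was the supported mechanism; Basic Auth is shown below purely for brevity and was being retired:

        # illustrative only; substitute a proper OAuth-signed request
        curl -u whitelisted_user:password \
          "http://api.twitter.com/1/users/show.xml?screen_name=chavezcandanga"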

    Read the article
