Search Results

Search found 21350 results on 854 pages for 'url parsing'.

Page 266/854 | < Previous Page | 262 263 264 265 266 267 268 269 270 271 272 273  | Next Page >

  • recommended parser for XML in Java (absolute beginner to XML)

    - by poeschlorn
    Hi Pros, which parser (Java) would you recommend for parsing GPX data? I'm looking for one that is very intuitive to use and that doesn't need too much RAM (it seems that DOM requires too much, doesn't it?). I have no idea about parsing XML, so it is time for me to learn this ;-) My documents are not very large and are always read twice (a point for DOM), but I want to keep as little as possible in RAM. What would you do in this situation? Which one would you choose and why?
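
    A minimal sketch of one common low-memory option, a StAX pull parser (the trkpt/lat/lon names below assume the standard GPX layout; adjust them to the actual schema):

        import java.io.FileInputStream;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        public class GpxStaxSketch {
            public static void main(String[] args) throws Exception {
                XMLInputFactory factory = XMLInputFactory.newInstance();
                try (FileInputStream in = new FileInputStream("track.gpx")) {
                    XMLStreamReader reader = factory.createXMLStreamReader(in);
                    while (reader.hasNext()) {
                        // Pull events one at a time; only element starts matter here.
                        if (reader.next() == XMLStreamConstants.START_ELEMENT
                                && "trkpt".equals(reader.getLocalName())) {
                            String lat = reader.getAttributeValue(null, "lat");
                            String lon = reader.getAttributeValue(null, "lon");
                            System.out.println("trackpoint: " + lat + ", " + lon);
                        }
                    }
                    reader.close();
                }
            }
        }

    Because the files are small and read twice anyway, DOM would also work; StAX simply keeps the memory footprint close to constant.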

    Read the article

  • Best practices retrieving XML/stream from HTTP in Android

    - by Jeffrey
    Hello everyone, what are the best practices for parsing XML from an HTTP resource in Android? I've been using HttpURLConnection to retrieve an InputStream, wrapping it with a BufferedInputStream, and then using SAX to parse the buffered stream. For the most part it works, though I do receive error reports of "SocketTimeoutException: The operation timed out" or general parsing errors. I believe it's due to the InputStream. Would using HttpClient instead of HttpURLConnection help? If yes, why? Should the stream be written to a file, and the file parsed instead of the stream? Any input or direction would be greatly appreciated. Thanks for your time.
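
    Whichever client is used, explicit connect/read timeouts plus a buffered stream usually reduce the SocketTimeoutException noise; a rough sketch (the timeout values are arbitrary examples, not recommendations):

        import java.io.BufferedInputStream;
        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import javax.xml.parsers.SAXParser;
        import javax.xml.parsers.SAXParserFactory;
        import org.xml.sax.helpers.DefaultHandler;

        public class XmlFetcher {
            public static void fetchAndParse(String feedUrl, DefaultHandler handler) throws Exception {
                HttpURLConnection conn = (HttpURLConnection) new URL(feedUrl).openConnection();
                conn.setConnectTimeout(15000); // fail fast on a dead connection
                conn.setReadTimeout(20000);    // guard against stalls mid-stream
                InputStream in = null;
                try {
                    in = new BufferedInputStream(conn.getInputStream());
                    SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
                    parser.parse(in, handler); // stream straight into SAX, no temp file needed
                } finally {
                    if (in != null) in.close();
                    conn.disconnect();
                }
            }
        }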

    Read the article

  • How to implement buffering with timeout in RX

    - by Gaspar Nagy
    I need to implement event processing that is delayed until no new events have arrived for a certain period. (I have to queue up a parsing task when the text buffer changes, but I don't want to start parsing while the user is still typing.) I'm new to Rx, but as far as I can see, I would need a combination of the BufferWithTime and Timeout methods. I imagine it working like this: it buffers the events as long as they keep arriving within a specified time of one another. If there is a gap in the event flow longer than that timespan, it should propagate the events buffered so far. Having looked at how Buffer and Timeout are implemented, I could probably write my own BufferWithTimeout method (if anyone has one, please share it with me), but I wonder if this can be achieved just by combining the existing methods. Any ideas?
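
    The question is about .NET Rx, but the "buffer until the stream goes quiet" idea reads the same in RxJava, shown here only as a hedged illustration of the pattern (buffer the source, and close each buffer when a debounce of the same source fires); the 500 ms window and the sample events are made up:

        import io.reactivex.rxjava3.core.Observable;
        import io.reactivex.rxjava3.subjects.PublishSubject;
        import java.util.List;
        import java.util.concurrent.TimeUnit;

        public class QuietPeriodBuffer {
            public static void main(String[] args) throws InterruptedException {
                PublishSubject<String> edits = PublishSubject.create();

                // debounce() emits only after 500 ms of silence, so using it as the
                // buffer boundary flushes everything received since the last pause.
                Observable<List<String>> parseBatches =
                        edits.buffer(edits.debounce(500, TimeUnit.MILLISECONDS));

                parseBatches.subscribe(batch -> System.out.println("parse now: " + batch));

                edits.onNext("h");
                edits.onNext("he");
                edits.onNext("hel");
                Thread.sleep(700); // typing pause -> the buffered edits are emitted
                edits.onComplete();
            }
        }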

    Read the article

  • Combining the streams: Web application

    - by Surendra J
    This question deals mainly with streams in a .NET web application. In my web application I display the following: bottle.doc, sheet.xls, presentation.ppt, stackof.jpg, and a button. I keep a checkbox next to each file so it can be selected. Suppose a user selects the four files and clicks the button, which I placed underneath. I then instantiate classes for each type of file to convert it into PDF; I have already written these, and they convert the files into PDF and return them. My problem is that the classes are able to read the data from a URL and convert it into PDF, but I don't know how to return the streams and merge them. string url = @"url"; //Prepare the web page we will be asking for HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url); request.Method = "GET"; request.ContentType = "application/mspowerpoint"; request.UserAgent = "Mozilla/4.0+(compatible;+MSIE+5.01;+Windows+NT+5.0"; //Execute the request HttpWebResponse response = (HttpWebResponse)request.GetResponse(); //We will read data via the response stream Stream resStream = response.GetResponseStream(); //Write content into the MemoryStream BinaryReader resReader = new BinaryReader(resStream); MemoryStream PresentaionStream = new MemoryStream(resReader.ReadBytes((int)response.ContentLength)); //convert the presentation stream into pdf and save it to local disk. But I would like to return the stream again. How can I achieve this? Any ideas are welcome.

    Read the article

  • how to get entire document in scrapy using hxs.select

    - by Chris Smith
    I've been at this for 12 hours and I'm hoping someone can give me a leg up. Here is my code; all I want is to get the anchor and URL of every link on a page as it crawls along. from scrapy.contrib.spiders import CrawlSpider, Rule from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor from scrapy.selector import HtmlXPathSelector from scrapy.utils.url import urljoin_rfc from scrapy.utils.response import get_base_url from urlparse import urljoin #from scrapy.item import Item from tutorial.items import DmozItem class HopitaloneSpider(CrawlSpider): name = 'dmoz' allowed_domains = ['domain.co.uk'] start_urls = [ 'http://www.domain.co.uk' ] rules = ( #Rule(SgmlLinkExtractor(allow='>example\.org', )), Rule(SgmlLinkExtractor(allow=('\w+$', )), callback='parse_item', follow=True), ) user_agent = 'Mozilla/5.0 (Windows; U; MSIE 9.0; WIndows NT 9.0; en-US))' def parse_item(self, response): #self.log('Hi, this is an item page! %s' % response.url) hxs = HtmlXPathSelector(response) #print response.url sites = hxs.select('//html') #item = DmozItem() items = [] for site in sites: item = DmozItem() item['title'] = site.select('a/text()').extract() item['link'] = site.select('a/@href').extract() items.append(item) return items What am I doing wrong? My eyes hurt now.

    Read the article

  • transforming binary data using ssis and sql server 2008

    - by Rick
    Hello all - I have a task to import, transform, and extract zipped binary files that contain both text data and embedded binary data. Within the data is content that is relational in nature and needs to be processed into a defined database structure. Currently I have a single-threaded C# app that essentially grabs all the files from the directory (currently there are 13K files of varying sizes), extracts the data on a single thread, and inserts it into the database line by line. As you can imagine, this is a very slow process and unacceptable. There are several different parsing routines used depending on the header record in the file. There are potentially up to a million rows per file when all the data is extracted to the row level of detail. A follow-on task is to parse those rows into their appropriate tables based on their content, i.e. the textual content has to be parsed further into "buckets" of like data in the database. That about sums up the big picture. Now for the problem task list. How do I iterate through a packet of data using SSIS? In the app the file is decompressed, then parsed using streams and byte arrays, and routed to the required parsing routine based on the header data of each packet. There is bit swapping involved as well. Should I wrap the app code into one or more script tasks and let them do the custom processing? The data is separated by year and the SQL Server tables are partitioned by year as well. I also need to be able to "catch" bad file data and, most likely, process it by hand. Should I simply load the zipped file into SQL as a blob and parse the file with T-SQL? Would that be multi-threaded if done that way? I'm not sure how to do the parsing involved here in T-SQL. Which do you think would be faster? Potentially the data that is currently processed via files could come to us via a socket. Can SSIS collect that data in real time? How would I go about setting that up? Processing these new files from the directories will become a daily task. I can manage the data once I get it to SQL Server; getting it there in a timely fashion seems to be the long pole in the tent for me. I would appreciate any comments or suggestions from the group. Rick

    Read the article

  • how to ignore ivy revision number?

    - by user315228
    Guys, I have certain jar files without a revision number, but as rev is a mandatory attribute for an Ivy dependency, I am providing the revision attribute. I have something like (-[revision]) in the URL resolver, but it's taking the module name instead of ignoring the revision attribute. I know it won't ignore the revision attribute since it's not null. The following is the output that I get: default-cache: no cached resolved revision for perltools#perltools;latest.integration [ivy:retrieve] tried [ivy:retrieve] listing all in [ivy:retrieve] using privateRepo to list all in [ivy:retrieve] ApacheURLLister found URL=[httP://myrepo/ivyRepository/perltools/jars/perltools.jar]. [ivy:retrieve] found 1 resources [ivy:retrieve] found revs: [perltools.jar] [ivy:retrieve] HTTP response status: 404 url=httP://myrepo/ivyRepository/perltools/jars/perltools.jar/perltools-perltools.jar.jar [ivy:retrieve] CLIENT ERROR: Not Found url=httP://myrepo/ivyRepository/perltools/jars/perltools.jar/perltools-perltools.jar.jar Can somebody please explain why it's taking module.ext as the revision when the revision I specified is latest.integration, and in myrepo I don't have a revision attribute - it just has [http://myrepo/ivyRepository/perltools/jars//perltools.jar]. Can somebody please help me so that I can avoid the revision attribute?

    Read the article

  • Java to C# code converter

    - by acadia
    Hello, are there any converters available that convert Java code to C#? I need to convert the code below into C#: String token = new String(""); URL url1 =new URL( "http", domain, Integer.valueOf(portnum), "/Workplace/setCredentials?op=getUserToken&userId="+username+"&password="+password +"&verify=true"); URLConnection conn1=url1.openConnection(); ((HttpURLConnection)conn1).setRequestMethod("POST"); InputStream contentFileUrlStream = conn1.getInputStream(); BufferedReader br = new BufferedReader(new InputStreamReader(contentFileUrlStream)); token=br.readLine(); String encodedAPIToken = URLEncoder.encode(token); String doubleEncodedAPIToken ="ut=" + encodedAPIToken;//.substring(0, encodedAPIToken.length()-1); //String doubleEncodedAPIToken ="ut=" + URLEncoder.encode(encodedAPIToken); //String userToken = "ut=" + URLEncoder.encode(token, "UTF-8"); //URLEncoder.encode(token); String vsId = "vsId=" + URLEncoder.encode(docId.substring(5, docId.length()), "UTF-8"); url="http://" + domain + ":" + portnum + "/Workplace/getContent?objectStoreName=RMROS&objectType=document&" + vsId + "&" +doubleEncodedAPIToken; String vsId = "vsId=" + URLEncoder.encode(docId.substring(5, docId.length()), "UTF-8"); url="http://" + domain + ":" + portnum + "/Workplace/getContent?objectStoreName=RMROS&objectType=document&" + vsId + "&" +doubleEncodedAPIToken; Thanks in advance

    Read the article

  • Is it possible to modify ASP.NET to no longer require runat="server"?

    - by sean2078
    I know why runat="server" is currently required (ASP.NET why runat="server"), but the consensus is that it should not be required if you incorporate a simple default into the design (I agree of course). Would it be possible to modify, extend, decompile and recreate, intercept or otherwise change the behavior of how ASP.NET parses ASPX and ASCX files so that runat="server" would no longer be required? For instance, I assume that a version of Mono could be branched to accomplish this goal. In case specific requirements are helpful, the following highlights one design: During parsing, when configured namespace tags are encountered (such as "asp"), default the element's runat property to "server" During parsing, when configured namespace tags are encountered (such as "asp"), if the element's runat property value is available, then that value should be used in place of the default New page-level setting introduced (can be set in the page directive or web.config) that specifies the default runat value for a specific namespace tag

    Read the article

  • datetime command line argument in python 2.4

    - by Ike Walker
    I want to pass a datetime value into my python script on the command line. My first idea was to use optparse and pass the value in as a string, then use datetime.strptime to convert it to a datetime. This works fine on my machine (python 2.6), but I also need to run this script on machines that are running python 2.4, which doesn't have datetime.strptime. How can I pass the datetime value to the script in python 2.4? Here's the code I'm using in 2.6: parser = optparse.OptionParser() parser.add_option("-m", "--max_timestamp", dest="max_timestamp", help="only aggregate items older than MAX_TIMESTAMP", metavar="MAX_TIMESTAMP(YYYY-MM-DD HH24:MM)") options,args = parser.parse_args() if options.max_timestamp: # Try parsing the date argument try: max_timestamp = datetime.datetime.strptime(options.max_timestamp, "%Y-%m-%d %H:%M") except: print "Error parsing date input:",sys.exc_info() sys.exit(1)

    Read the article

  • how to use cookies in HttpsURLConnection in android.

    - by sajjoo
    Hello guys, I am new to Android and now I have to add cookies to my project. I am using HttpsURLConnection. Here is how I am making a request and getting a response from a web server, and now I have to add cookies as well. URL url = new URL(strUrl); HttpsURLConnection connection = (HttpsURLConnection) url.openConnection(); connection.setRequestMethod("POST"); connection.setRequestProperty("Content-Type", "application/soap+xml; charset=utf-8"); connection.setRequestProperty("Content-Length", ""+ Integer.toString(request.getBytes().length)); connection.setUseCaches (false); connection.setDoInput(true); connection.setDoOutput(true); // send Request... DataOutputStream wr = new DataOutputStream (connection.getOutputStream()); wr.writeBytes (request); wr.flush (); wr.close (); //Get response... DataInputStream is = new DataInputStream(connection.getInputStream()); String line; StringBuffer response = new StringBuffer(); while((line = is.readLine()) != null) { response.append(line); } is.close(); FileLogger.writeFile("Soap.txt", "RESPONSE: " + methodName + "\n" + response); HashMap<String, String> parameters = null; try { parameters = SoapRequest.responseParser(response.toString(), methodName); } catch (ParserConfigurationException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (SAXException e) { // TODO Auto-generated catch block e.printStackTrace(); } return parameters; Any help will be appreciated, thanks.
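
    A minimal way to handle cookies by hand on this kind of connection (the "sessionId=abc123" value is just a placeholder): send stored cookies in a Cookie request header before connecting, and read any Set-Cookie headers off the response.

        import java.io.DataOutputStream;
        import java.net.URL;
        import java.util.List;
        import java.util.Map;
        import javax.net.ssl.HttpsURLConnection;

        public class CookieSketch {
            public static void post(String strUrl, String request, String storedCookies) throws Exception {
                URL url = new URL(strUrl);
                HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
                connection.setRequestMethod("POST");
                connection.setDoOutput(true);
                if (storedCookies != null) {
                    // e.g. "sessionId=abc123; lang=en", captured from an earlier response
                    connection.setRequestProperty("Cookie", storedCookies);
                }

                DataOutputStream wr = new DataOutputStream(connection.getOutputStream());
                wr.writeBytes(request);
                wr.close();

                // Capture cookies the server sets so they can be replayed on the next call.
                Map<String, List<String>> headers = connection.getHeaderFields();
                List<String> setCookies = headers.get("Set-Cookie");
                if (setCookies != null) {
                    for (String cookie : setCookies) {
                        // keep only "name=value", drop attributes such as Path or Expires
                        System.out.println("server set: " + cookie.split(";", 2)[0]);
                    }
                }
                System.out.println("status: " + connection.getResponseCode());
                connection.disconnect();
            }
        }

    On platforms that ship java.net.CookieManager, calling CookieHandler.setDefault(new CookieManager()) once makes the connection store and resend cookies automatically.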

    Read the article

  • Javascript problem with location.href.

    - by Patrick
    Hello! I have a textbox, and whenever the user presses a key it's checked to see whether the user pressed Enter. If the Enter key is pressed, I want to take the info in the textbox and transfer the user to a different URL. <script language="javascript" type="text/javascript"> function checkEnter(e){ //e is event object passed from function invocation var characterCode; if(e && e.which){ //if which property of event object is supported (NN4) e = e; characterCode = e.which; //character code is contained in NN4's which property } else{ e = event; characterCode = e.keyCode; //character code is contained in IE's keyCode property } if (characterCode == 13) { //if generated character code is equal to ascii 13 (if enter key) var searchLink = '/Search/?Keywords=' + document.getElementById('<%= searchBox.ClientID %>').value; transferUser(searchLink); return false; } else{ return true; } } function transferUser(url) { window.location.href = url; window.location.replace(url); } </script> Search: <input name="ctl00$searchBox" type="text" id="ctl00_searchBox" class="header_line_search_box_textbox" onKeyPress="checkEnter(event);" /> I have tried every possible combination, but nothing happens; the site just refreshes itself. I also need a way to make the user's text HTML-safe, much like HttpUtility.EncodeUrl in ASPX.

    Read the article

  • Why is this bookmarklet code saying jQuery is not defined even though jQuery is included?

    - by Josh Brown
    I am creating a bookmarklet, and the code below does not work on the first try. When I go to a page, it says "jQuery is not defined", but if I click the bookmarklet again, it works perfectly. var qrcodetogo = { jQURL: 'http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js', jQUIURL: 'http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.1/jquery-ui.min.js', jQUIThemeURL: 'http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.1/themes/ui-lightness/jquery-ui.css', init: function(){ this.createLink('qrcodetogo_UI-Lightness', this.jQUIThemeURL); this.createScript('qrcodetogo_jQuery', this.jQURL); this.createScript('qrcodetogo_jQueryUI', this.jQUIURL); this.createHiddenDiv('qrcodetogo_dialog','This is a Test.'); jQuery.noConflict(); }, showQRCode: function() { jQuery('#qrcodetogo_dialog').dialog(); }, createLink: function(id, url) { var l = document.createElement('link'); l.href = url; l.rel = 'stylesheet'; l.type = 'text/css'; l.media = 'screen'; l.charset = 'utf-8'; document.getElementsByTagName('head')[0].appendChild(l); }, createScript: function(id, url) { var s = document.createElement('script'); s.src = url; s.id = id; document.getElementsByTagName('head')[0].appendChild(s); }, createHiddenDiv: function(id, body) { var div = document.createElement('div'); div.id = id; div.innerHTML = body; div.style.display = 'none'; document.getElementsByTagName('body')[0].appendChild(div) } } qrcodetogo.init(); qrcodetogo.showQRCode();

    Read the article

  • ASP.Net MVC, JS injection and System.ArgumentException - Illegal Characters in path

    - by Mose
    Hi, in my ASP.NET MVC application I use custom error handling. I want to perform custom actions for each error case I meet in my application, so I override Application_Error, get Server.GetLastError(), and do my business depending on the exception, the current user, the current URL (the application runs on many domains), the user IP, and many others. Obviously, the application is often the target of hackers. In almost all cases it's not a problem to detect and manage this, but for some JS URL attacks my error handling does not do what I want it to do. Example (from logs): http://localhost:1809/Scripts/]||!o.support.htmlSerialize&&[1 When I get such a URL, an exception is raised when accessing the ConnectionStrings section in the web.config, and I can't even redirect to another URL. It leads to a "System.ArgumentException - Illegal Characters in path, etc." The screenshot below shows the problem: http://screencast.com/t/Y2I1YWU4 An obvious solution is to write an HTTP module to filter the URLs before they reach my application, but I'd like to avoid that because: I like having all the security managed in one place (in the Application_Error() method), and in the module I cannot access all the data I have in the application itself (application-specific data I don't want to debate here). Questions: Have you met this problem? How did you manage it? Thanks for your suggestions, Mose

    Read the article

  • Reading in bytes produced by PHP script in Java to create a bitmap

    - by Kareem
    I'm having trouble getting the compressed JPEG image (stored as a blob in my database). Here is the snippet of code I use to output the image that I have in my database: if($row = mysql_fetch_array($sql)) { $size = $row['image_size']; $image = $row['image']; if($image == null){ echo "no image!"; } else { header('Content-Type: content/data'); header("Content-length: $size"); echo $image; } } Here is the code that I use to read it in from the server: URL sizeUrl = new URL(MYURL); URLConnection sizeConn = sizeUrl.openConnection(); // Get The Response BufferedReader sizeRd = new BufferedReader(new InputStreamReader(sizeConn.getInputStream())); String line = ""; while(line.equals("")){ line = sizeRd.readLine(); } int image_size = Integer.parseInt(line); if(image_size == 0){ return null; } URL imageUrl = new URL(MYIMAGEURL); URLConnection imageConn = imageUrl.openConnection(); // Get The Response InputStream imageRd = imageConn.getInputStream(); byte[] bytedata = new byte[image_size]; int read = imageRd.read(bytedata, 0, image_size); Log.e("IMAGEDOWNLOADER", "read "+ read + " amount of bytes"); Log.e("IMAGEDOWNLOADER", "byte data has length " + bytedata.length); Bitmap theImage = BitmapFactory.decodeByteArray(bytedata, 0, image_size); if(theImage == null){ Log.e("IMAGEDOWNLOADER", "the bitmap is null"); } return theImage; My logging shows that everything has the right length, yet theImage is always null. I'm thinking it has to do with my content type. Or maybe the way I'm uploading?
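
    One thing worth ruling out before blaming the content type: InputStream.read(byte[], int, int) is not guaranteed to fill the whole buffer in a single call, so bytedata may only be partially populated. A hedged sketch that keeps reading until the stream ends:

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.InputStream;

        public class StreamBytes {
            // Drains the stream completely and returns every byte received.
            public static byte[] readFully(InputStream in) throws IOException {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] chunk = new byte[8192];
                int read;
                while ((read = in.read(chunk)) != -1) {
                    out.write(chunk, 0, read);
                }
                return out.toByteArray();
            }
        }

    The result can then go to BitmapFactory.decodeByteArray(data, 0, data.length); if that is still null, the next suspect is the PHP side emitting stray bytes (whitespace or a BOM) before the JPEG data.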

    Read the article

  • PhpBB3: adding background to specific php generated text input without affecting the other text inputs

    - by user1780055
    I have created a custom PhpBB3 style and have spent the last few hours desperately trying to add a background image to a specific comment text area. With Firebug I checked whether the comment text area had a class, and it does, so I tried some CSS variations and finally tried: sn-inputComment { background: url("{T_THEME_PATH}/images/pencil.png") repeat-x left top #FFFFFF;} { I also tried to find and manipulate the PHP-generated text area, but with no success. None of my methods worked. I will provide you all with a TinyURL to my forum with test user and password access. User: test Password: 123456 URL: http://tinyurl.com/9yqpxdb When you are logged in you should be redirected to the correct URL and you will see a few text boxes with "Write a comment...". I would be very happy if you could tell me what I did wrong and why I'm not able to add a background to the text input without having my search boxes and "what is on your mind" box affected. I appreciate your time and hope that this can somehow be solved. Sincerely, Daniel

    Read the article

  • Service Bus / Request Forwarding

    - by codputer
    I'm doing some development with a third party that issues either a GET or a POST to a public URL that I specify. What I would like to do is set up a relay service on the Azure Service Bus that my dev machine can listen to. When the request comes in, I want to forward that request as if my web service were taking the request directly from the third-party service. When I'm ready, I'll deploy the application to a public service, change the URL that the third-party service is sending to, and voila, I should be up and running. What I'm looking for looks exactly like this: Clemens the Master of Service Bus, but it's from the 2009 CTP. I'm working at it, but haven't yet got it working using all the new bits in 2012 (a.k.a. it's over my head at the moment). Does somebody want to help? Clemens also helped somebody else create a reverse proxy using the Service Bus, but I can't seem to find it. Yes, I've also tweeted Clemens, but I'm sure he is a busy man! P.S. I know about Application Request Routing, but my dev machine is not on a public URL; I need to rewrite the URL after my client listener on the Service Bus receives the message that was relayed from the server-side endpoint.

    Read the article

  • database setup for web application

    - by vbNewbie
    I have an application that requires a database, and I have already set up tables but I'm not sure they match the requirements of the app. The app is a crawler that fetches web URLs, crawls them, and stores appropriate URLs and posts, and all of this is based on client requests, which are stored as projects. So for each URL stored there is one post, for each client there are many projects, and for each project there are many types of requests. We get a client with a request, assign them a project name, and then use the request to search for content and store the URL and post. A request could already exist and should not be duplicated, but it should be associated with the right client, project, post, etc. Here is my schema now: url table: urlId PK queryId FK url post table: postId PK urlId FK post date request table: queryId PK request client table: clientId PK client Name projectId FK project table: projectID PK queryID FK project Does this look right, or does anyone have suggestions? Of course my stored procedures and insert statements will have to be in depth.

    Read the article

  • Understanding run time code interpretation and execution

    - by Bob
    I'm creating a game in XNA and was thinking of creating my own scripting language (extremely simple, mind you). I know there are better ways to go about this (and that I'm reinventing the wheel), but I want the learning experience more than to be productive and fast. When confronted with code at run time, from what I understand, the usual approach is to parse it into machine code or byte code or something else that is actually executable and then execute that, right? But, for instance, when Chrome first came out they said their JavaScript engine was fast because it compiles the JavaScript into machine code. This implies other engines weren't compiling into machine code. I'd prefer not to compile to a lower-level language, so are there any known modern techniques for parsing and executing code without compiling to a low level? Perhaps something like parsing the code into some sort of tree, branching through the tree, and comparing each symbol and calling some function that handles that symbol? (Wild guessing and stabbing in the dark)
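
    That last guess describes a tree-walking interpreter: the parser builds a syntax tree and each node evaluates itself, with no byte code or machine code involved. A toy sketch of the idea (in Java rather than C#/XNA, with made-up node types):

        public class TreeWalker {
            // Every node of the syntax tree knows how to evaluate itself.
            interface Expr { double eval(); }

            static final class Num implements Expr {
                final double value;
                Num(double value) { this.value = value; }
                public double eval() { return value; }
            }

            static final class BinOp implements Expr {
                final char op;
                final Expr left, right;
                BinOp(char op, Expr left, Expr right) { this.op = op; this.left = left; this.right = right; }
                public double eval() {
                    double l = left.eval(), r = right.eval();
                    switch (op) {
                        case '+': return l + r;
                        case '-': return l - r;
                        case '*': return l * r;
                        case '/': return l / r;
                        default: throw new IllegalArgumentException("unknown operator: " + op);
                    }
                }
            }

            public static void main(String[] args) {
                // The tree a parser would build for (2 + 3) * 4.
                Expr program = new BinOp('*', new BinOp('+', new Num(2), new Num(3)), new Num(4));
                System.out.println(program.eval()); // prints 20.0
            }
        }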

    Read the article

  • python threading and performance?

    - by kumar
    I had to do a heavy I/O-bound operation, i.e. parsing large files and converting them from one format to another. Initially I did it serially, i.e. parsing one file after another. Performance was very poor (it took 90+ seconds), so I decided to use threading to improve it. I created one thread for each file (4 threads): for file in file_list: t=threading.Thread(target = self.convertfile,args = file) t.start() ts.append(t) for t in ts: t.join() But to my astonishment, there is no performance improvement whatsoever; it still takes around 90+ seconds to complete the task. As this is an I/O-bound operation, I had expected the performance to improve. What am I doing wrong?

    Read the article

  • Getting response status code 0 in SmartGWT webservice call using json

    - by Girish
    I have developed application using SmartGWT, now i need to call webservice using json to another application which is deployed in another server for submitting username and password. When i make a request with url and POST method, getting the response status code as 0 and response text as blank. Here is my code, public void sendRequest() throws Exception { // Get login json data to be sent to server. String strData = createLoginReqPacket(); String url = "some url"; RequestBuilder builder = new RequestBuilder(RequestBuilder.POST, url); builder.setHeader("Content-Type", "application/json"); builder.setHeader("Content-Length", strData.length() + ""); Request response = builder.sendRequest(strData, new RequestCallback() { @Override public void onResponseReceived(Request request, Response response) { int statusCode = response.getStatusCode(); System.out.println("Response code ----"+response.getStatusCode()+""); if (statusCode == Response.SC_OK) { String responseBody = response.getText(); System.out.println("Respose :" + responseBody); // do something with the response } else { GWT.log("Response error at server side ----",null); // do in case of server error } }// end of method. @Override public void onError(Request request, Throwable exception) { GWT.log("**** Error in service call ******",null); }// end of method. }); builder.send(); }// end of send request. Please anybody knows the solution?? Give some reference code or links for this. Thanks.

    Read the article

  • ASP.NET MVC3 - Bug using Javascript

    - by ebb
    Hey there, I'm trying to use Ajax.BeginForm() to POST and receive a JSON result from my controller (I'm using MVC3). When the JSON result is returned it should be sent to a JavaScript function that extracts the object using "var myObject = content.get_response().get_object();". However, it just throws a "Microsoft JScript runtime error: Object doesn't support this property or method" when trying to invoke the Ajax POST. My code: Controller: [HttpPost] public ActionResult Index(string message) { return Json(new { Success = true, Message = message }); } View: <!DOCTYPE html> <html> <head> <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/MicrosoftAjax.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/MicrosoftMvcAjax.js")" type="text/javascript"></script> <script type="text/javascript"> function JsonAdd_OnComplete(mycontext) { var myObject = mycontext.get_response().get_object(); alert(mycontext.Message); } </script> </head> <body> <div> @using(Ajax.BeginForm("Index", "Home", new AjaxOptions() { HttpMethod = "POST", OnComplete = "JsonAdd_OnComplete" })) { @Html.TextBox("message") <input type="submit" value="SUBMIT" /> } </div> </body> </html> The strange thing is that the exact same code works in MVC2. Is this a bug, or have I forgotten something? Thanks in advance.

    Read the article

  • [Javascript] Linux Ajax (mootools Request.JSON) Header error

    - by VDVLeon
    Hi all, I use the following code to get some JSON data: var request = new Request.JSON( { 'url': sourceURI, 'onSuccess': onPageData } ); request.get(); Request.JSON is a class from MooTools (a JavaScript library). But on Linux (Ubuntu, on Firefox 3.5 and Chrome) the request always fails, so I tried to display the HTTP request Ajax is sending (I used netcat to capture it). The request looks like this: OPTIONS /the+url HTTP/1.1 Host: example.com Connection: keep-alive User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/532.3 (KHTML, like Gecko) Chrome/4.0.226.0 Safari/532.3 Referer: http://example.com/ref... Access-Control-Request-Method: GET Origin: http://example.com Access-Control-Request-Headers: X-Request, X-Requested-With, Accept Accept: */* Accept-Encoding: gzip,deflate Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 The HTTP request (first line) is not what it should be: OPTIONS /the+url HTTP/1.1 It should be: GET /the+url HTTP/1.1 Does anybody know why this happens and how to fix it?

    Read the article

  • How to reference other documents in a CouchDB view (join-like functionality)

    - by Surfrdan
    We have a CouchDB representation of an XML database which we use to power a JavaScript-based frontend for manipulating the XML documents. The basic structure is a simple 3-level hierarchy, i.e. A - B - C. A: parent document (type A); B: any number of child documents of parent type A; C: any number of child documents of parent type B. We represent these 3 document types in CouchDB with a 'type' attribute, e.g. { "_id":"llgc-id:433", "_rev":"1-3760f3e01d7752a7508b047e0d094301", "type":"A", "label":"Top Level A document", "logicalMap":{ "issues":{ "1":{ "URL":"http://hdl.handle.net/10107/434-0", "FILE":"llgc-id:434" }, "2":{ "URL":"http://hdl.handle.net/10107/467-0", "FILE":"llgc-id:467" etc... } } } } { "_id":"llgc-id:433", "_rev":"1-3760f3e01d7752a7508b047e0d094301", "type":"B", "label":"a B document", } What I want to do is produce a view which returns documents just like the A type but includes the label attribute from the B document within the logicalMap list, e.g. { "_id":"llgc-id:433", "_rev":"1-3760f3e01d7752a7508b047e0d094301", "type":"A", "label":"Top Level A document", "logicalMap":{ "issues":{ "1":{ "URL":"http://hdl.handle.net/10107/434-0", "FILE":"llgc-id:434", "LABEL":"a B document" }, "2":{ "URL":"http://hdl.handle.net/10107/467-0", "FILE":"llgc-id:467", "LABEL":"another B document" etc... } } } } I'm struggling to get my head around the best way to do this. It looks like it should be fairly simple though!

    Read the article

  • How to spoof a referrer using curl

    - by golu molu
    I am using the curl code below to spoof the referrer. It works fine, but there is an error on every page - "Curl error:". $url = somesite.com function doMagic($url) { $curl = curl_init(); $header[0] = "Accept: text/xml,application/xml,application/xhtml+xml,"; $header[0] .= "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5"; $header[] = "Cache-Control: max-age=0"; $header[] = "Connection: keep-alive"; $header[] = "Keep-Alive: 300"; $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7"; $header[] = "Accept-Language: en-us,en;q=0.5"; $header[] = "Pragma: "; curl_setopt($curl, CURLOPT_URL, $url); curl_setopt($curl, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:7.0.1) Gecko/20100101 Firefox/7.0.12011-10-16 20:23:00"); curl_setopt($curl, CURLOPT_HTTPHEADER, $header); curl_setopt($curl, CURLOPT_REFERER, "http://www.facebook.com"); curl_setopt($curl, CURLOPT_ENCODING, "gzip,deflate"); curl_setopt($curl, CURLOPT_AUTOREFERER, true); curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1); curl_setopt($curl, CURLOPT_TIMEOUT, 30); curl_setopt($curl, CURLOPT_FOLLOWLOCATION,true); $html = curl_exec($curl); echo 'Curl error: '. curl_error($curl); curl_close($curl); return $html; } $text = doMagic($url); print("$text"); What am I doing wrong?

    Read the article
