Search Results

Search found 19495 results on 780 pages for 'xml parse'.


  • How to Global onRouteRequest directly to onBadRequest?

    - by virtualeyes
    EDIT: Came up with this to sanitize URI date params prior to passing them off to the Play router:

        val ymdMatcher = "\\d{8}".r // matcher for yyyyMMdd URI param
        val ymdFormat = org.joda.time.format.DateTimeFormat.forPattern("yyyyMMdd")
        def ymd2Date(ymd: String) = ymdFormat.parseDateTime(ymd)

        override def onRouteRequest(r: RequestHeader): Option[Handler] = {
          import play.api.i18n.Messages
          ymdMatcher.findFirstIn(r.uri) map { ymd =>
            try { ymd2Date(ymd); super.onRouteRequest(r) }
            catch { // kick to "bad" action handler on invalid date
              case e: Exception => Some(controllers.Application.bad(Messages("bad.date.format")))
            }
          } getOrElse(super.onRouteRequest(r))
        }

    ORIGINAL: Let's say I want to return a BadRequest result type for all /foo URIs:

        override def onBadRequest(r: RequestHeader, error: String) = {
          BadRequest("Bad Request: " + error)
        }

        override def onRouteRequest(r: RequestHeader): Option[Handler] = {
          if (r.uri.startsWith("/foo")) onBadRequest(r, "go away")
          else super.onRouteRequest(r)
        }

    Of course this does not work, since the expected return type is Option[play.api.mvc.Handler]. What's the idiomatic way to deal with this? Create a default Application controller method to handle filtered bad requests? Ideally, since I know in onRouteRequest that /foo is in fact a BadRequest, I'd like to call onBadRequest directly.

    I should note that this is a contrived example; I am actually verifying a URI yyyyMMdd date param and BadRequest-ing if it does not parse to a JodaTime instance. Basically it's a catch-all filter to sanitize a given date param rather than handling it on every single controller method call, not to mention it avoids cluttering up the application log with useless stack traces from invalid date parse conversions (several MBs of these junk trace entries accrue daily due to users pointlessly manipulating the URI date in attempts to get at paid subscriber content).


  • Problem with Json Date format when calling cross-domain proxy

    - by Christo Fur
    I am using a proxy service to allow my client-side javascript to talk to a service on another domain. The proxy is a simple ashx file which gets the request and forwards it on to the service on the other domain:

        using (var sr = new System.IO.StreamReader(context.Request.InputStream))
        {
            requestData = sr.ReadToEnd();
        }
        string data = HttpUtility.UrlDecode(requestData);
        using (var client = new WebClient())
        {
            client.BaseAddress = serviceUrl;
            client.Headers.Add("Content-Type", "application/json");
            response = client.UploadString(new Uri(webserviceUrl), data);
        }

    The client javascript calling this proxy looks like this:

        function TestMethod() {
            $.ajax({
                type: "POST",
                url: "/custommodules/configuratorproxyservice.ashx?m=TestMethod",
                contentType: "application/json; charset=utf-8",
                data: JSON.parse('{"testObj":{"Name":"jo","Ref":"jones","LastModified":"\/Date(-62135596800000+0000)\/"}}'),
                dataType: "json",
                success: AjaxSucceeded,
                error: AjaxFailed
            });

            function AjaxSucceeded(result) {
                alert(result);
            }
            function AjaxFailed(result) {
                alert(result.status + ' - ' + result.statusText);
            }
        }

    This works fine until I have to pass a date, at which point I get a Bad Request error when the proxy tries to call the service. I did have this working at one point but have now lost it. I have tried using JSON.parse on the object before sending, and JSON.stringify, but no joy. Anyone got any ideas what I am missing?


  • The Date type and the Android database

    - by cdonner
    I understand that SQLite stores dates as long integers. When I read rows into a cursor using the standard method (i.e. the query() method that reads data into a cursor), the result is a date string that includes the time. If I want a different format, I have to parse the string back into a date: possible, but a bit backwards.

    By using a ViewBinder (as suggested in this question), I can do pretty much anything I want, but the date is already a string by the time the method executes. The accepted answer to the above question also suggests that storing dates as longs would help avoid this problem. I don't want to do that, just in case I want to interpret my data with something other than this application. Maybe I want to expose it via a provider.

    Is there a way to handle this in the database adapter instead, i.e. control the date format that the cursor contains, so that I don't have to change the schema and don't have to parse the default output back into a Date type?
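    A sketch of the query-level approach, shown with Python's sqlite3 for brevity (the same SELECT expression can be passed as a column to Android's query() or rawQuery(); the table and column names here are hypothetical): SQLite's strftime() can format the stored value inside the query, so the cursor already holds the string you want and nothing needs to be parsed back.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, created TEXT)")
        conn.execute("INSERT INTO notes (created) VALUES ('2010-03-30 13:42:42')")

        # Let SQLite format the date instead of parsing the default string in code.
        for (day,) in conn.execute("SELECT strftime('%d/%m/%Y', created) FROM notes"):
            print(day)  # 30/03/2010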


  • Powershell: Select-Object with additional calculated difference value

    - by David.Chu.ca
    Let me explain my question. I have a daily report about one PC's free space, as a text file (sp.txt), like:

        DateTime      FreeSpace(MB)
        -----         -------------
        03/01/2010    100.43

        DateTime      FreeSpace(MB)
        -----         -------------
        03/02/2010    98.31
        ....

    I need to use this file as input to generate a report with the difference in free space between dates:

        DateTime      FreeSpace(MB)  Change
        -----         -------------  ------
        03/01/2010    100.43
        03/02/2010    98.31          2.12
        ....

    Here is the code I have to get the above result without the additional "Change" calculation:

        Get-Content "C:\report\sp.txt" `
        | where {$_.trim().length -gt 0 -and -not ($_ -like "Date*") -and -not ($_ -like "---*")} `
        | Select-Object @{Name='DateTime'; expression={[DateTime]::Parse($_.SubString(0, 10).Trim())}}, `
                        @{Name='FreeSpace(MB)'; expression={$_.SubString(12, 12).Trim()}} `
        | Sort-Object DateTime `
        | Select-Object DateTime, 'FreeSpace(MB)' | ft -AutoSize
        # how to add additional column Change to this Select-Object?

    My understanding is that Select-Object returns a collection of objects. Here I create this collection from an input text file (parsing each line into parts: DateTime and FreeSpace(MB)). To calculate the difference between dates, I guess I need to add an additional calculated value to the result of the last Select-Object, or pipe the result to another Select-Object with the calculation as an additional property or column. To do that, I need to get the previous row's FreeSpace value. Not sure if that is possible, and how?
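    Not a PowerShell answer as such, but a sketch of the "previous row" idea in Python: keep the last FreeSpace value in a variable that outlives the per-row step. The same trick should carry over to a calculated property whose expression reads and then updates a script-scoped variable.

        # Rows as (DateTime, FreeSpace) pairs, as parsed from sp.txt.
        rows = [("03/01/2010", 100.43), ("03/02/2010", 98.31)]

        prev = None
        for date, free in rows:
            # Difference against the previous row; blank for the first row.
            change = "" if prev is None else f"{prev - free:.2f}"
            print(f"{date}  {free:<13} {change}")
            prev = free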


  • Date query with Hibernate on Timestamp Column in PostgreSQL

    - by Shashikant Kore
    A table has a timestamp column. A sample value in it could be 2010-03-30 13:42:42. With Hibernate, I am doing a range query Restrictions.between("column-name", fromDate, toDate). The Hibernate mapping for this column is as follows:

        <property name="orderTimestamp" column="order_timestamp" type="java.util.Date" />

    Let's say I want to find all the records that have the date 30th March 2010 or 31st March 2010. A range query on this field is done as follows:

        Date fromDate = new SimpleDateFormat("yyyy-MM-dd").parse("2010-03-30");
        Date toDate = new SimpleDateFormat("yyyy-MM-dd").parse("2010-03-31");
        Expression.between("orderTimestamp", fromDate, toDate);

    This doesn't work. The query is converted to the respective timestamps "2010-03-30 00:00:00" and "2010-03-31 00:00:00", so all the records for 31st March 2010 are ignored. A simple solution to this problem could be to use "2010-03-31 23:59:59" as the end date, but I would like to know if there is a way to match only the date part of the timestamp column. Also, is Expression.between() inclusive of both limits? The documentation doesn't throw any light on this.
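    One way to sidestep matching "only the date part" is to make the upper bound exclusive: query the half-open interval [fromDate, toDate + 1 day), so every timestamp on the last day qualifies. A sketch of the idea in Python; in Hibernate this should correspond to combining Restrictions.ge() with Restrictions.lt() instead of between().

        from datetime import datetime, timedelta

        from_date = datetime(2010, 3, 30)                          # 2010-03-30 00:00:00
        to_exclusive = datetime(2010, 3, 31) + timedelta(days=1)   # 2010-04-01 00:00:00

        ts = datetime(2010, 3, 31, 23, 59, 59)  # a record late on 31st March
        print(from_date <= ts < to_exclusive)   # True: the record is no longer dropped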


  • Salesforce/PHP - outbound messages (SOAP) - memory limit issue

    - by Phill Pafford
    I'm using Salesforce to send outbound messages (via SOAP) to another server. The server can process about 8 messages at a time, but will not send back the ACK file if the SOAP request contains more than 8 messages. SF can send up to 100 outbound messages in one SOAP request, and I think this is causing a memory issue with PHP. If I process the outbound messages one by one they all go through fine; I can even do 8 at a time with no issues. But larger sets are not working.

    The error in SF is:

        org.xml.sax.SAXParseException: Premature end of file

    Looking in the HTTP error logs I see that the incoming SOAP message appears to be getting cut off, which throws a PHP warning stating:

        Premature end of data in tag ...
        PHP Fatal error: Call to a member function getAttribute() on a non-object

    This leads me to believe that PHP is having a memory issue and cannot parse the incoming message due to its size. I was thinking I could just set:

        ini_set('memory_limit', '64M');

    But would this be the correct approach? Is there a way I could set this to increase with the incoming SOAP request dynamically?

    UPDATE: Adding some code:

        $data = fopen('php://input', 'rb');
        $headers = getallheaders();
        $content_length = $headers['Content-Length'];
        $buffer_length = 1000;
        $fread_length = $content_length + $buffer_length;
        $content = fread($data, $fread_length);

        /**
         * Parse values from soap string into DOM XML
         */
        $dom = new DOMDocument();
        $dom->loadXML($content);
        ....


  • Problem with JSONResult

    - by vikitor
    Hi, I'm still new to this, so I'll try to explain what I'm doing. Basically, I want to load a dropdownlist depending on the value of a previous one, and I want it to load the data and appear when the other one is changed. This is the code I've written in my controller:

        public ActionResult GetClassesSearch(bool ajax, string phylumID, string kingdom)
        {
            IList<TaxClass> lists = null;
            int _phylumID = int.Parse(phylumID);
            int _kingdom = int.Parse(kingdom);
            lists = _taxon.getClassByPhylumSearch(_phylumID, _kingdom);
            return Json(lists.Count);
        }

    and this is how I call the method from the javascript function:

        function loadClasses(_phylum) {
            var phylum = _phylum.value;
            $.getJSON("/Suspension/GetClassesSearch/",
                { ajax: true, phylumID: phylum, kingdom: kingdom },
                function(data) {
                    alert(data);
                    alert('no fallo');
                    document.getElementById("pClass").style.display = "block";
                    document.getElementById("sClass").options[0] = new Option("-select-", "0", true, true);
                    //for (i = 0; i <= data.length; i++) {
                    //    $('#sClass').addOption(data[i].classID, data[i].className);
                    //}
                });
        }

    The thing is that, like this, it works: I pass the function the number of classes within a selected phylum, and it displays the pClass element. The problem comes when I try to populate the list with the data (which should contain the objects retrieved from the database), because when there is data returned by the database, changing return Json(lists.Count) to return Json(lists) keeps giving me the same error:

        A circular reference was detected while serializing an object of type 'SubSonic.Schema.DatabaseColumn'.

    I've been going round and round, debugging and making tests, but I can't make it work, and it is supposed to be a simple thing, but I'm missing something. I have commented out the for loop because I'm not quite sure if that's the way to access the data, because I've not been able to make it work when it finds records. Can anyone help me? Thanks in advance, Victor


  • process csv in scala

    - by portoalet
    I am using scala 2.7.7, and wanted to parse a CSV file and store the data in an SQLite database. I ended up using the OpenCSV java library to parse the CSV file, and the sqlitejdbc library. Using these java libraries makes my scala code look almost identical to Java code (sans semicolons and with val/var). As I am dealing with java objects, I can't use the scala List, Map, etc., unless I do scala2java conversions or upgrade to scala 2.8. Is there a way I can simplify my code further using scala bits that I don't know?

        val filename = "file.csv"
        val reader = new CSVReader(new FileReader(filename))
        var aLine = new Array[String](10)
        var lastSymbol = ""
        while( (aLine = reader.readNext()) != null ) {
          if( aLine != null ) {
            val symbol = aLine(0)
            if( !symbol.equals(lastSymbol)) {
              try {
                val rs = stat.executeQuery("select name from sqlite_master where name='" + symbol + "';" )
                if( !rs.next() ) {
                  stat.executeUpdate("drop table if exists '" + symbol + "';")
                  stat.executeUpdate("create table '" + symbol + "' (symbol,data,open,high,low,close,vol);")
                }
              } catch {
                case sqle : java.sql.SQLException => println(sqle)
              }
              lastSymbol = symbol
            }
            val prep = conn.prepareStatement("insert into '" + symbol + "' values (?,?,?,?,?,?,?);")
            prep.setString(1, aLine(0)) //symbol
            prep.setString(2, aLine(1)) //date
            prep.setString(3, aLine(2)) //open
            prep.setString(4, aLine(3)) //high
            prep.setString(5, aLine(4)) //low
            prep.setString(6, aLine(5)) //close
            prep.setString(7, aLine(6)) //vol
            prep.addBatch()
            prep.executeBatch()
          }
        }
        conn.close()


  • jQuery autocomplete: taking JSON input and setting multiple fields from single field

    - by Sly
    I am trying to get the jQuery autocomplete plugin to take a local JSON variable as input. Once the user has selected one option from the autocomplete list, I want the adjacent address fields to be autopopulated. Here's the JSON variable, declared as a global variable in the head of the HTML file:

        var JSON_address = {"1":{"origin":{"nametag":"Home","street":"Easy St","city":"Emerald City","state":"CA","zip":"9xxxx"},"destination":{"nametag":"Work","street":"Factory St","city":"San Francisco","state":"CA","zip":"94104"}},"2":{"origin":{"nametag":"Work","street":"Umpa Loompa St","city":"San Francisco","state":"CA","zip":"94104"},"destination":{"nametag":"Home","street":"Easy St","city":"Emerald City ","state":"CA","zip":"9xxxx"}}};

    I want the first field to display a list of "origin" nametags: "Home", "Work". Then, when "Home" is selected, adjacent fields should be automatically populated with Street: Easy St, City: Emerald City, etc. Here's the code I have for the autocomplete:

        $(document).ready(function(){
            $("#origin_nametag_id").autocomplete(JSON_address, {
                autoFill: true,
                minChars: 0,
                dataType: 'json',
                parse: function(data) {
                    var rows = new Array();
                    for (var i = 0; i <= data.length; i++) {
                        rows[rows.length] = {
                            data: data[i],
                            value: data[i].origin.nametag,
                            result: data[i].origin.nametag
                        };
                    }
                    return rows;
                }
            }).change(function(){
                $("#street_address_id").autocomplete({
                    dataType: 'json',
                    parse: function(data) {
                        var rows = new Array();
                        for (var i = 0; i <= data.length; i++) {
                            rows[rows.length] = {
                                data: data[i],
                                value: data[i].origin.street,
                                result: data[i].origin.street
                            };
                        }
                        return rows;
                    }
                });
            });
        });

    So this question really has two subparts: 1) How do you get autocomplete to process the multi-dimensional JSON object? (When the field is clicked and text entered, nothing happens; no list appears.) 2) How do you get the other fields (street, city, etc.) to populate based upon the origin nametag sub-array? Thanks in advance!


  • Parsing a file in C

    - by sfactor
    I need to parse through a file and do some processing on it. The file is a text file, and the data is variable-length data of the form "PP1004181350D001002003..........". There are timestamps wherever there is PP, so 1004181350 is 2010-04-18 13:50. The parts starting with D are the data points: three separate values, each three digits long, so D001002003 has the three coordinates 001, 002 and 003.

    Now I need to parse this data from a file, storing each timestamp in an array and the corresponding data in arrays with as many rows as there are data records and one row per coordinate. The end arrays might be like:

        TimeStamp[1] = "135000", low[1] = "001", medium[1] = "002", high[1] = "003"
        TimeStamp[2] = "135015", low[2] = "010", medium[2] = "012", high[2] = "013"
        TimeStamp[3] = "135030", low[3] = "051", medium[3] = "052", high[3] = "043"
        ....

    The question is how I go about doing this in C. How do I go through this string looking for these patterns?

    Note: the seconds value in the timestamp is added on our own, as it is known that each data record comes 15 seconds after the previous one.
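    A sketch of the scanning logic, written in Python to keep it short; the C version is the same loop with index arithmetic and strncpy/sscanf in place of slicing. The record layout is taken from the question: "PP" plus ten timestamp digits, "D" plus three three-digit values.

        def parse_stream(s):
            records = []  # (timestamp, low, medium, high)
            ts = None
            i = 0
            while i < len(s):
                if s.startswith("PP", i):       # timestamp record
                    ts = s[i + 2:i + 12]        # ten digits: yyMMddHHmm
                    i += 12
                elif s.startswith("D", i):      # data record: three 3-digit values
                    low    = s[i + 1:i + 4]
                    medium = s[i + 4:i + 7]
                    high   = s[i + 7:i + 10]
                    records.append((ts, low, medium, high))
                    i += 10
                else:
                    i += 1                      # skip anything unrecognised
            return records

        print(parse_stream("PP1004181350D001002003D010012013"))
        # [('1004181350', '001', '002', '003'), ('1004181350', '010', '012', '013')]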


  • Can't hit breakpoint in ASP.NET web app (Stop debugging in progress... popup in VS)

    - by dotnetdev
    Hi, I am trying to debug some code in my ASP.NET web app. I set a breakpoint in one of the page events of the page's codebehind, and this once came up with a special icon in place of the red breakpoint, saying symbols have not been loaded and the breakpoint will not be hit. This error has not repeated itself, but why can't I hit the breakpoint?

    Also, when I press stop, I get a popup in VS stating:

        Stop Debugging In Progress...
        Debugging is being stopped but is not yet complete. You can force debugging to stop completely,
        but any processes attached may terminate. This window will automatically close when debugging
        has completely stopped.
        [Completely stop]

    I also don't get the website appearing in my browser when starting to debug. :( To make things worse, I have a line of code like this in my page's codebehind:

        RssFeedSites = opml.Parse(filestream);

    I am putting the problematic breakpoint on this line. But I also have a breakpoint in the Parse() method of opml, and this does not get hit either. Thanks


  • x86 instruction encoding tables

    - by Cheery
    I'm in the middle of rewriting my assembler. While at it, I'm curious about implementing disassembly as well. I want to make it simple and compact, and there are concepts I can exploit while doing so. It is possible to determine the rest of the x86 instruction encoding from the opcode (maybe prefix bytes are required too, a bit). I know many people have written tables for doing it. I'm not interested in mnemonics but in instruction encoding, because that is the actual hard problem here. For each opcode number I need to know:

      - does this instruction contain a modrm byte?
      - how many immediate fields does this instruction have?
      - what encoding does an immediate use?
      - is the immediate in the field an instruction-pointer-relative address?
      - what kind of registers does the modrm use for the operand and register fields?

    sandpile.org has quite a lot of what I'd need, but it's in a format that isn't easy to parse. Before I start writing and validating those tables myself, I decided to write this question. Do you know of this kind of tables existing somewhere, in a form that doesn't require too much effort to parse?
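    For what it's worth, a sketch of the record shape such a table tends to take, with a few one-byte opcodes filled in (the field names are invented for the sketch):

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class OpcodeInfo:
            has_modrm: bool    # is a ModRM byte present?
            imm_size: int      # immediate width in bytes (0 = none), 32-bit mode
            imm_is_rel: bool   # immediate is an instruction-pointer-relative offset

        TABLE = {
            0x90: OpcodeInfo(has_modrm=False, imm_size=0, imm_is_rel=False),  # NOP
            0x89: OpcodeInfo(has_modrm=True,  imm_size=0, imm_is_rel=False),  # MOV r/m32, r32
            0xE8: OpcodeInfo(has_modrm=False, imm_size=4, imm_is_rel=True),   # CALL rel32
        }

        print(TABLE[0xE8])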


  • ajax error when calling from javascript

    - by vikitor
    Hi, I'm still new to this, so I'll try to explain what I'm doing. Basically, I want to load a dropdownlist depending on the value of a previous one, and I want it to load the data and appear when the other one is changed. This is the code I've written in my controller:

        public ActionResult GetClassesSearch(bool ajax, string phylumID, string kingdom)
        {
            IList<TaxClass> lists = null;
            int _phylumID = int.Parse(phylumID);
            int _kingdom = int.Parse(kingdom);
            lists = _taxon.getClassByPhylumSearch(_phylumID, _kingdom);
            return Json(lists);
        }

    and this is how I call the method from the javascript function:

        function loadClasses(_phylum) {
            var phylum = _phylum.value;
            $.getJSON("/Suspension/GetClassesSearch",
                { ajax: true, classID: phylum, kingdom: kingdom },
                function(data) {
                    $('#sClass').attr("disabled", true);
                    $('#sClass').selectOptions("-2");
                    $('#sClass').removeOption(/./);
                    var values = '';
                    $('#sClass').addOption("-2", "-Select All-");
                    for (i = 0; i < data.length; i++) {
                        $('#sClass').addOption(data[i].RecId, data[i].TaxonName);
                    }
                    $('#sClass').selectOptions("-2");
                    $('#sClass').removeAttr("disabled");
                });
        }

    I can't get it to work. Any tips? Thanks in advance


  • Parsec: backtracking not working

    - by Nathan Sanders
    I am trying to parse F# type syntax. I started writing an [F]Parsec grammar and ran into problems, so I simplified the grammar down to this:

        type       ::= identifier | type -> type
        identifier ::= [A-Za-z0-9.`]+

    After running into problems with FParsec, I switched to Parsec, since I have a full chapter of a book dedicated to explaining it. My code for this grammar is:

        typeP = choice [identP, arrowP]

        identP = do
          id <- many1 (digit <|> letter <|> char '.' <|> char '`')
          -- more complicated code here later
          return id

        arrowP = do
          domain <- typeP
          string "->"
          range <- typeP
          return $ "(" ++ domain ++ " -> " ++ range ++ ")"

        run = parse (do t <- typeP
                        eof
                        return t)
                    "F# type syntax"

    The problem is that Parsec doesn't backtrack by default, so:

        > run "int"
        Right "int"                                        -- works!
        > run "int->int"
        Left "F# type syntax" unexpected "-"
        expecting digit, letter, ".", "`" or end of input  -- doesn't work!

    The first thing I tried was to reorder typeP:

        typeP = choice [arrowP, identP]

    But this just stack overflows, because the grammar is left-recursive: typeP never gets to trying identP, because it keeps trying arrowP over and over. Next I tried try in various places, for example:

        typeP = choice [try identP, arrowP]

    But nothing I do seems to change the basic behaviours of (1) stack overflow or (2) non-recognition of "-" following an identifier. My mistake is probably obvious to anybody who has successfully written a Parsec grammar. Can somebody point it out?
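    The underlying problem is the left-recursive rule type ::= type "->" type. The usual fix is to rewrite it as type ::= identifier ("->" type)?, which accepts the same (right-associative) language but always consumes an identifier before recursing. A sketch of the rewritten grammar as recursive descent in Python; in Parsec this would presumably become identP followed by an optional "->" and a recursive typeP.

        import re

        def parse_type(s, i=0):
            # identifier ::= [A-Za-z0-9.`]+
            m = re.match(r"[A-Za-z0-9.`]+", s[i:])
            if not m:
                raise SyntaxError(f"identifier expected at position {i}")
            ident, i = m.group(0), i + m.end()
            if s.startswith("->", i):          # optional arrow, then recurse
                rng, i = parse_type(s, i + 2)
                return f"({ident} -> {rng})", i
            return ident, i

        print(parse_type("int->int->int")[0])  # (int -> (int -> int))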


  • "Programming In Haskell" error in sat function

    - by Matt Ellen
    I'm in chapter 8 of Graham Hutton's Programming in Haskell and I'm copying the code and testing it in GHC. See the slides here: http://www.cis.syr.edu/~sueo/cis352/chapter8.pdf in particular slide 15. The relevant code I've copied so far is:

        type Parser a = String -> [(a, String)]

        pih_return :: a -> Parser a
        pih_return v = \inp -> [(v, inp)]

        failure :: Parser a
        failure = \inp -> []

        item :: Parser Char
        item = \inp -> case inp of
                         []     -> []
                         (x:xs) -> [(x,xs)]

        parse :: Parser a -> String -> [(a, String)]
        parse p inp = p inp

        sat :: (Char -> Bool) -> Parser Char
        sat p = do x <- item
                   if p x then pih_return x else failure

    I have changed the name of the return function from the book to pih_return so that it doesn't clash with the Prelude return function. The errors are in the last function, sat. I have copied this directly from the book. As you can probably see, p is a function from Char to Bool (e.g. isDigit) and x is of type [(Char, String)], so that's the first error. Then pih_return takes a value v and returns [(v, inp)] where inp is a String. This causes an error in sat because the v being passed is x, which is not a Char. I have come up with this solution, by explicitly including inp in sat:

        sat :: (Char -> Bool) -> Parser Char
        sat p inp = do x <- item inp
                       if p (fst x) then pih_return (fst x) inp else failure inp

    Is this the best way to solve the issue?


  • Reading from serial port with Boost Asio?

    - by trikri
    Hi! I'm going to check for incoming messages (data packages) on the serial port, using Boost Asio. Each message will start with a header that is one byte long and specifies which type of message has been sent. Each different type of message has its own length. The function I'm about to write should check for new incoming messages continually, and when it finds one it should read it, and then some other function should parse it. I thought that the code might look something like this:

        void check_for_incoming_messages()
        {
            boost::asio::streambuf response;
            boost::system::error_code error;
            std::string s1, s2;

            if (boost::asio::read(port, response, boost::asio::transfer_at_least(0), error)) {
                s1 = streambuf_to_string(response);
                int msg_code = s1[0];
                if (msg_code < 0 || msg_code >= NUM_MESSAGES) {
                    // Handle error, invalid message header
                }
                if (boost::asio::read(port, response,
                        boost::asio::transfer_at_least(message_lengths[msg_code] - s1.length()), error)) {
                    s2 = streambuf_to_string(response);
                    // Handle the content of s1 and s2
                } else if (error != boost::asio::error::eof) {
                    throw boost::system::system_error(error);
                }
            } else if (error != boost::asio::error::eof) {
                throw boost::system::system_error(error);
            }
        }

    Is boost::asio::streambuf the right thing to use? And how do I extract the data from it so I can parse the message? I also want to know whether I need a separate thread which only calls this function, so that it gets called more often. Isn't there otherwise a risk of losing data between two calls to the function, because so much data comes in that it can't be stored in the serial port's memory? I'm using Qt as a widget toolkit, and I don't really know how long it needs to process all its events.


  • Read large file into sqlite table in objective-C on iPhone

    - by James Testa
    I have a 2 MB file, not too large, that I'd like to put into an sqlite database so that I can search it. There are about 30K entries in CSV format, with six fields per line. My understanding is that sqlite on the iPhone can handle a database of this size. I have taken a few approaches, but they have all been slow (over 30 s). I've tried:

    1) Using C code to read the file and parse the fields into arrays.

    2) Using the following Objective-C code to parse the file and put it directly into the sqlite database:

        NSString *file_text = [NSString stringWithContentsOfFile: filePath usedEncoding: NULL error: NULL];
        NSArray *lineArray = [file_text componentsSeparatedByString:@"\n"];
        for(int k = 0; k < [lineArray count]; k++){
            NSArray *parts = [[lineArray objectAtIndex:k] componentsSeparatedByString: @","];
            NSString *field0 = [parts objectAtIndex:0];
            NSString *field2 = [parts objectAtIndex:2];
            NSString *field3 = [parts objectAtIndex:3];
            NSString *loadSQLi = [[NSString alloc] initWithFormat:
                @"INSERT INTO TABLE (TABLE, FIELD0, FIELD2, FIELD3) VALUES ('%@', '%@', '%@');",
                field0, field2, field3];
            if (sqlite3_exec (db_table, [loadSQLi UTF8String], NULL, NULL, &errorMsg) != SQLITE_OK) {
                sqlite3_close(db_table);
                NSAssert1(0, @"Error loading table: %s", errorMsg);
            }
        }

    Am I missing something? Does anyone know of a fast way to get the file into a database? Or is it possible to translate the file into an sqlite format that can be read directly into sqlite? Or should I turn the file into a plist and load it into a Dictionary? Unfortunately I need to search on two of the fields, and I think a Dictionary can only have one key? Jim
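    The usual culprit for multi-second loads is that each INSERT outside a transaction commits (and syncs to storage) individually. Wrapping the whole batch in one transaction with one reusable statement is typically orders of magnitude faster. A sketch of the idea using Python's sqlite3; in the Objective-C code the equivalent would be sqlite3_exec(db, "BEGIN", ...), then sqlite3_prepare_v2 with sqlite3_bind per row, then "COMMIT".

        import sqlite3

        rows = [("a", "b", "c")] * 30000  # stand-in for the 30K parsed CSV lines

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t (field0 TEXT, field2 TEXT, field3 TEXT)")
        with conn:  # one BEGIN ... COMMIT around the whole batch
            conn.executemany("INSERT INTO t VALUES (?, ?, ?)", rows)
        print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 30000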


  • Proper DateTime Format for a Web Service

    - by user48408
    I have a webservice with a method which is called via an XMLHttpRequest object in my javascript. The method accepts a DateTime parameter, which is subsequently converted to a string and run against the database to perform a calculation. I get the value from m_txtDateAdd and send off the request:

        <asp:textbox id=m_txtDateAdd tabIndex=4 runat="server" Width="96px" Text="<%# Today %>">
        </asp:textbox>

    which has a validator attached to it:

        <asp:CustomValidator id="m_DateAddValidator" runat="server" ErrorMessage="Please Enter a Valid Date"
            ControlToValidate="m_txtDateAdd">&#x25CF;</asp:CustomValidator>

    My webmethod looks something like this:

        [WebMethod]
        public decimal GetTotalCost(DateTime transactionDate)
        {
            String sqlDateString = transactionDate.Year + "/" + transactionDate.Month + "/" + transactionDate.Day;
            ...

    I use sqlDateString as part of the command text I send off to the database. It's a legacy application with inline SQL, so I don't have the freedom to set up a stored procedure and create and assign parameters in my code behind. This works 90% of the time. The webservice is called on the onchange event of m_txtDateAdd. Every now and again the response I get from the server is:

        System.ArgumentException: Cannot convert 25/06/2009 to System.DateTime.
        Parameter name: type ---> System.FormatException: String was not recognized as a valid DateTime.
           at System.DateTimeParse.Parse(String s, DateTimeFormatInfo dtfi, DateTimeStyles styles)
           at System.DateTime.Parse(String s, IFormatProvider provider)
           at System.Convert.ToDateTime(String value, IFormatProvider provider)
           at System.String.System.IConvertible.ToDateTime(IFormatProvider provider)
           at System.Convert.ChangeType(Object value, Type conversionType, IFormatProvider provider)
           at System.Web.Services.Protocols.ScalarFormatter.FromString(String value, Type type)
           --- End of inner exception stack trace ---
           at System.Web.Services.Protocols.ScalarFormatter.FromString(String value, Type type)
           at System.Web.Services.Protocols.ValueCollectionParameterReader.Read(NameValueCollection collection)
           at System.Web.Services.Protocols.HtmlFormParameterReader.Read(HttpRequest request)
           at System.Web.Services.Protocols.HttpServerProtocol.ReadParameters()
           at System.Web.Services.Protocols.WebServiceHandler.CoreProcessRequest()
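    The intermittent failures look like locale-dependent parsing: "25/06/2009" only round-trips when the parsing side happens to expect dd/MM/yyyy. A sketch of the culture-proof idea, using Python's strptime as the illustration; the .NET analogue would be DateTime.ParseExact(value, "dd/MM/yyyy", CultureInfo.InvariantCulture) wherever the string is turned back into a date.

        from datetime import datetime

        # Parse with an explicit format instead of whatever the current
        # locale guesses; "25/06/2009" is unambiguous this way.
        d = datetime.strptime("25/06/2009", "%d/%m/%Y")
        print(d.isoformat())  # 2009-06-25T00:00:00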


  • jQuery Autocomplete Json Ajax cross browser issue with Google Search Appliance

    - by skyfoot
    I am implementing a jQuery autocomplete on a search form and am getting the suggestions from the Google Search Appliance autocomplete suggestion service, which returns a result set in JSON. What I am trying to do is go off to the GSA to get suggestions when the user types something in the search box. The URL to get the JSON suggestions is as follows:

        http://gsaurl/suggest?q=<query>&max=10&site=default_site&client=default_frontend&access=p&format=rich

    The JSON which is returned is as follows:

        {
          "query": "re",
          "results": [
            {"name": "red",  "type": "suggest"},
            {"name": "read", "type": "suggest"}]
        }

    The jQuery autocomplete code is as follows:

        $('#q').autocomplete(searchUrl, {
            width: 320,
            dataType: 'json',
            highlight: false,
            scroll: true,
            scrollHeight: 300,
            parse: function(data) {
                var array = new Array();
                for (var i = 0; i < data.results.length; i++) {
                    array[i] = {
                        data: data.results[i],
                        value: data.results[i].name,
                        result: data.results[i].name
                    };
                }
                return array;
            },
            formatItem: function(row) {
                return row.name;
            }
        });

    This works in IE but fails in Firefox, as the data returned in the parse function is null. Any ideas why this would be the case?

    Workaround: I created an aspx page to call the GSA suggest service and return the JSON from it. Using this page as a proxy and setting it as the URL in the jQuery autocomplete worked in both IE and Firefox.


  • JsonParseException on Valid JSON

    - by user2909602
    I am having an issue calling a RESTful service from my client code. I wrote the RESTful service using CXF/Jackson, deployed it to localhost, and tested it successfully using RESTClient. Below is a snippet of the service code:

        @POST
        @Produces("application/json")
        @Consumes("application/json")
        @Path("/set/mood")
        public Response setMood(MoodMeter mm)
        {
            this.getMmDAO().insert(mm);
            return Response.ok().entity(mm).build();
        }

    The model class and DAO work successfully, and the service itself works fine using RESTClient. However, when I attempt to call this service from JavaScript, I get the error below on the server side:

        Caused by: org.codehaus.jackson.JsonParseException: Unexpected character ('m' (code 109)):
        expected a valid value (number, String, array, object, 'true', 'false' or 'null')

    I have copied the client-side code below. To make sure it has nothing to do with the JSON data itself, I used a valid JSON string (which works using RESTClient, the JSON.parse() method, and JSONLint) in the vars 'json' (string) and 'jsonData' (JSON):

        var json = '{"mood_value":8,"mood_comments":"new comments","user_id":5,"point":{"latitude":37.292929,"longitude":38.0323323},"created_dtm":1381546869260}';
        var jsonData = JSON.parse(json);
        $.ajax({
            url: 'http://localhost:8080/moodmeter/app/service/set/mood',
            dataType: 'json',
            data: jsonData,
            type: "POST",
            contentType: "application/json"
        });

    I've seen the JsonParseException a number of times in other threads, but in this case the JSON itself appears to be valid (and tested). Any thoughts are appreciated.


  • Python doctest error

    - by user74283
    Hi, I recently started experimenting with Python; I am currently reading "Think Like a Computer Scientist: Learning Python, v2nd edition". I have been having some trouble with doctest. I use a Windows 7 machine and the Eclipse IDE with PyDev. My question is: when I run the script below, I get the error that follows. The script is below the error message.

        Traceback (most recent call last):
          File "C:\Users\shaytac\PythonProjects\test.py", line 21, in <module>
            doctest.testmod()
          File "C:\Python26\lib\doctest.py", line 1829, in testmod
            for test in finder.find(m, name, globs=globs, extraglobs=extraglobs):
          File "C:\Python26\lib\doctest.py", line 852, in find
            self._find(tests, obj, name, module, source_lines, globs, {})
          File "C:\Python26\lib\doctest.py", line 906, in _find
            globs, seen)
          File "C:\Python26\lib\doctest.py", line 894, in _find
            test = self._get_test(obj, name, module, globs, source_lines)
          File "C:\Python26\lib\doctest.py", line 978, in _get_test
            filename, lineno)
          File "C:\Python26\lib\doctest.py", line 597, in get_doctest
            return DocTest(self.get_examples(string, name), globs,
          File "C:\Python26\lib\doctest.py", line 611, in get_examples
            return [x for x in self.parse(string, name)
          File "C:\Python26\lib\doctest.py", line 573, in parse
            self._parse_example(m, name, lineno)
          File "C:\Python26\lib\doctest.py", line 631, in _parse_example
            self._check_prompt_blank(source_lines, indent, name, lineno)
          File "C:\Python26\lib\doctest.py", line 718, in _check_prompt_blank
            line[indent:indent+3], line))
        ValueError: line 2 of the docstring for __main__.compare lacks blank after >>>: '>>>compare(5, 4)'

    The script:

        def compare(a, b):
            """
            >>>compare(5, 4)
            1
            >>>compare(7, 7)
            0
            >>>compare(2, 3)
            -1
            >>>compare(42, 1)
            1
            """
            if a > b:
                return 1
            if a == b:
                return 0
            if a < b:
                return -1

        if __name__ == '__main__':
            import doctest
            doctest.testmod()
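    For reference, the ValueError is doctest complaining about the missing space between >>> and the call. A version of the script with the prompts written as ">>> compare(5, 4)" runs cleanly:

        def compare(a, b):
            """
            >>> compare(5, 4)
            1
            >>> compare(7, 7)
            0
            >>> compare(2, 3)
            -1
            >>> compare(42, 1)
            1
            """
            if a > b:
                return 1
            if a == b:
                return 0
            if a < b:
                return -1

        if __name__ == '__main__':
            import doctest
            doctest.testmod()  # all four examples now pass silently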


  • Autodetect Presence of CSV Headers in a File

    - by banzaimonkey
    Short question: How do I automatically detect whether a CSV file has headers in the first row?

    Details: I've written a small CSV parsing engine that places the data into an object that I can access as (approximately) an in-memory database. The original code was written to parse third-party CSV with a predictable format, but I'd like to be able to use this code more generally.

    I'm trying to figure out a reliable way to automatically detect the presence of CSV headers, so the script can decide whether to use the first row of the CSV file as keys / column names or start parsing data immediately. Since all I need is a boolean test, I could easily specify an argument after inspecting the CSV file myself, but I'd rather not have to (go go automation).

    I imagine I'd have to parse the first 3 to ? rows of the CSV file and look for a pattern of some sort to compare against the headers. I'm having nightmares of three particularly bad cases in which:

      - The headers include numeric data for some reason
      - The first few rows (or large portions of the CSV) are null
      - The headers and data look too similar to tell them apart

    If I can get a "best guess" and have the parser fail with an error or spit out a warning if it can't decide, that's OK. If this is something that's going to be tremendously expensive in terms of time or computation (and take more time than it's supposed to save me), I'll happily scrap the idea and go back to working on "important things".

    I'm working with PHP, but this strikes me as more of an algorithmic / computational question than something implementation-specific. If there's a simple algorithm I can use, great. If you can point me to some relevant theory / discussion, that'd be great, too. If there's a giant library that does natural language processing or 300 different kinds of parsing, I'm not interested.
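    As a data point, Python's standard library ships exactly this heuristic: csv.Sniffer.has_header() treats the first row as a candidate header and votes it in or out by comparing each column's type and length against the rows below. It is a best guess, and it can guess wrong on the bad cases listed above, but the voting approach ports straightforwardly to PHP. A sketch:

        import csv

        with_header = "name,age\nalice,30\nbob,25\ncarol,41\n"
        data_only   = "alice,30\nbob,25\ncarol,41\n"

        print(csv.Sniffer().has_header(with_header))  # True: "age" is not an integer
        print(csv.Sniffer().has_header(data_only))    # False: "30" matches the column type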


  • Using NHibernate's HQL to make a query with multiple inner joins

    - by Abu Dhabi
    The problem here consists of translating a statement written in LINQ to SQL syntax into the equivalent for NHibernate. The LINQ to SQL code looks like so:

        var whatevervar = from threads in context.THREADs
                          join threadposts in context.THREADPOSTs
                              on threads.thread_id equals threadposts.thread_id
                          join posts1 in context.POSTs
                              on threadposts.post_id equals posts1.post_id
                          join users in context.USERs
                              on posts1.user_id equals users.user_id
                          orderby posts1.post_time
                          where threads.thread_id == int.Parse(id)
                          select new
                          {
                              threads.thread_topic,
                              posts1.post_time,
                              users.user_display_name,
                              users.user_signature,
                              users.user_avatar,
                              posts1.post_body,
                              posts1.post_topic
                          };

    It's essentially trying to grab a list of posts within a given forum thread. The best I've been able to come up with (with the help of the helpful users of this site) for NHibernate is:

        var whatevervar = session.CreateQuery("select t.Thread_topic, p.Post_time, " +
                                              "u.User_display_name, u.User_signature, " +
                                              "u.User_avatar, p.Post_body, p.Post_topic " +
                                              "from THREADPOST tp " +
                                              "inner join tp.Thread_ as t " +
                                              "inner join tp.Post_ as p " +
                                              "inner join p.User_ as u " +
                                              "where tp.Thread_ = :what")
                                 .SetParameter<THREAD>("what", threadid)
                                 .SetResultTransformer(Transformers.AliasToBean(typeof(MyDTO)))
                                 .List<MyDTO>();

    But that doesn't parse well, complaining that the aliases for the joined tables are null references. MyDTO is a custom type for the output:

        public class MyDTO
        {
            public string thread_topic { get; set; }
            public DateTime post_time { get; set; }
            public string user_display_name { get; set; }
            public string user_signature { get; set; }
            public string user_avatar { get; set; }
            public string post_topic { get; set; }
            public string post_body { get; set; }
        }

    I'm out of ideas, and while doing this by direct SQL query is possible, I'd like to do it properly, without defeating the purpose of using an ORM. Thanks in advance!


  • Building simple Reddit scraper

    - by Bazant Fundator
    Let's say that I would like to make a collection of images from reddit for my own amusement. I have run the code on my development env, and it hasn't gone past the first page of posts (anything beyond requires the "after" string from the JSON). Additionally, when I turn on the validation, the whole loop breaks if an item doesn't pass it, not just the current iteration. I would be glad if you helped me understand the mistakes I made.

        class Link
          include Mongoid::Document
          include Mongoid::Timestamps
          field :author, type: String
          field :url, type: String
          validates_uniqueness_of :url,    # no duplicates
          validates :url, uniqueness: true
        end

        def fetch(count, after)
          count_s = count.to_s # convert count to string
          link = "http://reddit.com/r/aww/.json?count=" + count_s + "&after=" + after # so it can be used there
          res = HTTParty.get(link)    # GET req. to the reddit server
          json = JSON.parse(res.body) # Parse the response
          if json['kind'] == "Listing" then # check if the retrieved item is a Listing
            for i in 1...(count) do # for each list item
              datum = json['data']['children'][i]['data'] # i-th element properties
              if datum['domain'].in?(["imgur.com", "i.imgur.com"]) then # fetch only imgur links
                Link.create!(author: datum['author'], url: datum['url']) # save to db
              end
            end
            count += 25
            fetch(count, json['data']['after']) # if it retrieved the right kind of object, move on to the next page
          end
        end

        fetch(25, " ") # run it
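    A sketch of the pagination-plus-validation shape the loop needs, in Python for neutrality: thread the listing's "after" token into the next request, and guard each item so one bad post skips rather than aborts. The same structure maps back to the Ruby: loop while json['data']['after'] is present, and handle a failed validation per record instead of letting the exception escape the loop.

        import json
        import urllib.request

        def fetch_imgur_links(pages=3):
            after, links = "", []
            for _ in range(pages):
                url = f"https://www.reddit.com/r/aww/.json?limit=25&after={after}"
                req = urllib.request.Request(url, headers={"User-Agent": "demo-script"})
                listing = json.load(urllib.request.urlopen(req))["data"]
                for child in listing["children"]:
                    post = child["data"]
                    if post.get("domain") in ("imgur.com", "i.imgur.com"):
                        links.append((post["author"], post["url"]))
                    # a record failing validation would be skipped here, not raised
                after = listing["after"]  # token for the next page
                if not after:             # reached the end of the listing
                    break
            return links

        print(len(fetch_imgur_links(pages=1)))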


  • Using Regex, how can I remove certain characters from inside angle-brackets, leaving the characters outside the brackets alone?

    - by Iain Fraser
    Edit: To be clear, please understand that I am not using regex to parse the HTML, that's crazy talk! I'm simply wanting to clean up a messy string of HTML so it will parse.

    Edit #2: I should also point out that the control character I'm using is a special unicode character; it's not something that would ever be used in a proper tag under any normal circumstances.

    Suppose I have a string of HTML that contains a bunch of control characters, and I want to remove the control characters from inside tags only, leaving the characters outside the tags alone. For example, here the control character is the numeral "1".

    Input:

        The quick 1<strong>orange</strong> lemming <sp11a1n 1class1='jumpe111r'11>jumps over</span> 1the idle 1frog

    Desired output:

        The quick 1<strong>orange</strong> lemming <span class='jumper'>jumps over</span> 1the idle 1frog

    So far I can match tags which contain the control character, but I can't remove the characters in one regex. I guess I could perform another regex on my matches, but I'd really like to know if there's a better way. My regex (bear in mind this one only matches tags which contain the control character):

        <(([^>])*?`([^>])*?)*?>

    Thanks very much for your time and consideration.

    Iain Fraser
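    One regex is enough if the removal happens in a replacement callback rather than in the pattern itself: match every tag, and strip the control character inside the match. A sketch in Python using re.sub with a function (in PHP, preg_replace_callback plays the same role); "1" stands in for the special unicode control character.

        import re

        CONTROL = "1"  # placeholder for the real control character

        def clean(html):
            # For each tag, delete the control character; text outside tags is untouched.
            return re.sub(r"<[^>]*>", lambda m: m.group(0).replace(CONTROL, ""), html)

        s = ("The quick 1<strong>orange</strong> lemming "
             "<sp11a1n 1class1='jumpe111r'11>jumps over</span> 1the idle 1frog")
        print(clean(s))
        # The quick 1<strong>orange</strong> lemming <span class='jumper'>jumps over</span> 1the idle 1frog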

