Search Results

Search found 15004 results on 601 pages for 'date parsing'.

  • How to Parse an HTML Website in Perl?

    - by Nano HE
    Hi, could you please give me some suggestions on how to parse HTML in Perl? Also, do I need to download the website's pages to a local hard drive first with some offline-explorer tool? If so, could you give me a URL for a good one? I plan to parse out the keywords (including URL links) and save them to MySQL. Thanks a lot. Using WinXP.
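
    A minimal sketch of one common approach: fetch the page with LWP::Simple and pull the links out with HTML::LinkExtor (both on CPAN). No offline-explorer tool is needed, since LWP fetches pages directly; the URL is a placeholder and the MySQL insert (e.g. via DBI) is left out.

        use strict;
        use warnings;
        use LWP::Simple qw(get);
        use HTML::LinkExtor;

        # Fetch the page straight off the web; no local copy required.
        my $html = get('http://example.com/') or die "fetch failed";

        # Collect the href of every <a> tag; insert into MySQL from @links
        # afterwards.
        my @links;
        my $parser = HTML::LinkExtor->new(sub {
            my ($tag, %attr) = @_;
            push @links, $attr{href} if $tag eq 'a' && defined $attr{href};
        });
        $parser->parse($html);

        print "$_\n" for @links;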

  • jQuery to find a name on an HTML page and add a hyperlink

    - by mikejones12
    Here is my example. I have a website that contains the following:

        <body>
        Jim Nebraska zipcode 65437
        Tony lives in California his zipcode is 98708
        </body>

    I would like to be able to search for zip codes on the page and wrap them with hyperlinks, like:

        <body>
        Jim Nebraska zipcode <a href="/65437.htm">65437</a>
        Tony lives in California his zipcode is <a href="/98708.htm">98708</a>
        </body>

    Could I use a regex selector to find the string and then wrap the string, or replace it with the new hyperlink? I am new to jQuery and looking for someone to point me in the right direction. Thank you, Mike
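
    A minimal sketch of the replace approach, assuming the zip codes are plain 5-digit numbers in the body text. Rewriting the body's HTML in one go is the quick version; on a real page you would walk the text nodes so the regex cannot touch tag attributes.

        // Wrap every standalone 5-digit number in a link to /<zip>.htm.
        $('body').html(function (i, oldHtml) {
            return oldHtml.replace(/\b(\d{5})\b/g, '<a href="/$1.htm">$1</a>');
        });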

  • Parsec Haskell Lists

    - by Martin
    I'm using Text.ParserCombinators.Parsec and Text.XHtml to parse an input and produce HTML output. If my input is:

        * First item, First level
        ** First item, Second level
        ** Second item, Second level
        * Second item, First level

    my output should be:

        <ul><li>First item, First level
        <ul><li>First item, Second level
        </li><li>Second item, Second level
        </li></ul></li><li>Second item, First level</li></ul>

    I wrote this, but it obviously does not recurse:

        list = do { s <- many1 item; return (olist << s) }

        item = do { count 1 (char '*')
                  ; s <- manyTill anyChar newline
                  ; return (li << s) }

    Any ideas? The nesting can be more than two levels. Thanks!
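
    One way to make it recursive: parameterize the parsers by depth, so an item at depth d is exactly d stars plus its text, optionally followed by a whole list at depth d+1. A sketch returning plain strings rather than Text.XHtml values, to keep it short:

        import Text.ParserCombinators.Parsec

        -- A list at depth d: one or more items at that depth.
        listAt :: Int -> Parser String
        listAt d = do
          items <- many1 (itemAt d)
          return ("<ul>" ++ concat items ++ "</ul>")

        -- An item at depth d: exactly d stars, the text, then an
        -- optional sub-list one level deeper.
        itemAt :: Int -> Parser String
        itemAt d = try $ do
          string (replicate d '*')
          notFollowedBy (char '*')      -- exactly d stars, not more
          spaces
          text <- manyTill anyChar newline
          sub  <- option "" (listAt (d + 1))
          return ("<li>" ++ text ++ sub ++ "</li>")

    Running parse (listAt 1) "" on the sample input nests the second-level items inside the first <li>, as in the output above.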

  • Call jQuery datepicker from a link and send the date through a POST call

    - by Alex
    Hi all, I need to make the datepicker show when I click on a link, and then send the selected date to a different page through a POST call. I tried to use this code for the link:

        $(".click-on-link").click(function(){
            $('#datepicker').datepicker({
                changeMonth: true,
                changeYear: true,
                dateFormat: 'dd/mm/yy',
                firstDay: 1
            });
        });

    but it's not working. Any idea? Thanks!
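
    Calling .datepicker(options) inside the click handler only initializes the widget; it does not open it. A sketch of the usual pattern: initialize once on page load, open the picker from the link with the 'show' method, and POST from onSelect. It assumes #datepicker is a (possibly hidden) text input, and /save-date.php is a placeholder URL.

        $(function () {
            $('#datepicker').datepicker({
                changeMonth: true,
                changeYear: true,
                dateFormat: 'dd/mm/yy',
                firstDay: 1,
                onSelect: function (dateText) {
                    // Send the chosen date to another page via POST.
                    $.post('/save-date.php', { date: dateText });
                }
            });

            $('.click-on-link').click(function (e) {
                e.preventDefault();
                $('#datepicker').datepicker('show');
            });
        });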

  • How do I add 2 years to a date in PowerBuilder and account for the leap year correctly?

    - by Judy
    How do I add 2 years to a date in PowerBuilder and account for the leap year correctly? We have a medical license application where the user would like the expiration date to be two years out. The current license date is 7/10/2010 and the expire date should be 7/2/2012. I used RelativeDate and added 729 days if it was not a leap year and 730 if it was, but that is messy. I wish the RelativeDate function took another parameter too, so you could pass in a number of years.
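
    Rather than counting days, a sketch (untested PowerScript) that rebuilds the date from its year/month/day parts, so leap years take care of themselves except for the one Feb 29 edge case, which is clamped explicitly:

        // Build the expiration date from parts instead of adding days.
        date ld_start, ld_expire
        integer li_year, li_month, li_day

        ld_start = Date(2010, 7, 10)
        li_year  = Year(ld_start) + 2
        li_month = Month(ld_start)
        li_day   = Day(ld_start)

        // Feb 29 has no equivalent in a non-leap target year; clamp to Feb 28.
        IF li_month = 2 AND li_day = 29 THEN
            IF Mod(li_year, 4) <> 0 OR (Mod(li_year, 100) = 0 AND Mod(li_year, 400) <> 0) THEN
                li_day = 28
            END IF
        END IF

        ld_expire = Date(li_year, li_month, li_day)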

  • Reading numeric Date values from a CSV file into a data.frame in R

    - by Dick Eshelman
        D <- read.csv("sample1.csv", header = FALSE, sep = ",")
        D
                V1     V2     V3     V4
        1 20100316 109825 352120 239065
        2 20100317 108625 352020 239000
        3 20100318 109125 352324 241065
        D[,1]
        [1] 20100316 20100317 20100318

    In the above example, how do I get the data in D[,1] to be read and stored as Date values: 2010-03-16, 2010-03-17, 2010-03-18? I have lots of data files in this format. TIA.
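
    A minimal sketch: read the file as-is, then convert the yyyymmdd integers to Date with an explicit format string (as.Date needs character input, so coerce first):

        D <- read.csv("sample1.csv", header = FALSE, sep = ",")
        # The integers are yyyymmdd; as.character then as.Date parses them.
        D$V1 <- as.Date(as.character(D$V1), format = "%Y%m%d")
        str(D$V1)   # Date[1:3], format: "2010-03-16" "2010-03-17" "2010-03-18"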

  • How to get Nokogiri to ignore HTML elements that don't exist

    - by user296507
    Any idea how I can get the code below to produce this output?

        1 - 
        2 - B

    I'm getting the error "undefined method `text' for nil:NilClass (NoMethodError)" because, I think, table t1 does not have a td with class="r2" in it.

        require 'rubygems'
        require 'nokogiri'
        require 'open-uri'

        doc = Nokogiri::HTML.parse(<<-eohtml)
          <table class="t1">
            <tbody>
              <tr>
                <td class="r1">1</td>
              </tr>
            </tbody>
          </table>
          <table class="t2">
            <tbody>
              <tr>
                <td class="r1">2</td>
                <td class="r2">B</td>
              </tr>
            </tbody>
          </table>
        eohtml

        doc.css('tbody > tr').each do |n|
          r1 = n.at_css(".r1").text
          r2 = n.at_css(".r2").text
          puts "#{r1} - #{r2}"
        end
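
    at_css returns nil when nothing matches, so the fix is to guard before calling .text. A minimal sketch of the loop:

        doc.css('tbody > tr').each do |n|
          # Only call .text on nodes that were actually found.
          r1 = n.at_css('.r1')
          r2 = n.at_css('.r2')
          puts "#{r1 ? r1.text : ''} - #{r2 ? r2.text : ''}"
        end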

  • Pros and Cons of Java HTML to XML cleaners

    - by cjavapro
    I am looking to allow HTML emails (and other HTML uploads) without letting scripts and the like through. I plan to have a whitelist of safe tags and attributes, as well as a whitelist of CSS properties and value regexes (to prevent automatic return receipts). I asked a question: Parse a badly formatted XML document (like an HTML file). I found there are many, many ways to do this. Some systems have built-in sanitizers (which I don't care so much about). I will post some answers and mark them Community Wiki. Please post any other options you like and mark them Community Wiki so they can be voted on. Also, any comments or wiki edits on what part of a certain product is better and what is not would be greatly appreciated. This page is a very nice listing page, but I get kind of lost in it: http://java-source.net/open-source/html-parsers
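
    For comparison, a minimal whitelist-cleaning sketch with jsoup, one of the parsers on that list (in recent jsoup versions the Whitelist class is renamed Safelist):

        import org.jsoup.Jsoup;
        import org.jsoup.safety.Whitelist;

        public class EmailSanitizer {
            public static void main(String[] args) {
                String dirty =
                    "<p>Hi<script>steal()</script> <b onclick='evil()'>there</b></p>";
                // Keep only tags/attributes on the whitelist; <script> blocks
                // and inline event handlers are dropped entirely.
                String clean = Jsoup.clean(dirty, Whitelist.basic());
                System.out.println(clean);
            }
        }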

  • Create Duplicate Records on SELECT for Calendar Date Range

    - by peterallcdn
    Hey all, I've built a pretty shnazzy calendar system, but there is one tweak that I need to make so that I'm completely happy with it. My calendar has three tables:

    calevents - The calendared event.
    caldates - The occurrences and date-range of each occurrence for each event.
    calcats - The categories that can be applied to an event.

    The short: For each calevent there can be many caldates, one for each occurrence of the calevent. So a calevent that repeats weekly and spans 3 days might have caldates like this:

        date_id  date_eid  date_start   date_end
        2        37        2010-06-21   2010-06-23
        3        37        2010-06-28   2010-06-30
        7        37        2010-07-05   2010-07-07
        9        37        2010-07-12   2010-07-14

    What I want to do, when selecting all the caldates for a specified month such as 2010-06, is to return not just the two records above, but instead a record for each date in the range of date_start and date_end for each caldate. So if I searched for 2010-06, I would get:

        date_id  date_eid  date_start   date_end     date_day
        2        37        2010-06-21   2010-06-23   2010-06-21
        2        37        2010-06-21   2010-06-23   2010-06-22
        2        37        2010-06-21   2010-06-23   2010-06-23
        3        37        2010-06-28   2010-06-30   2010-06-28
        3        37        2010-06-28   2010-06-30   2010-06-29
        3        37        2010-06-28   2010-06-30   2010-06-30

    The long: The reason I want to do this is so that when displaying a list of events (calevents) for a specified month, an occurrence (caldates) of that event will be displayed for EACH of the days it spans. I could do this with PHP by looping through each day of the current month and displaying a copy of each caldate if the month day falls between date_start and date_end. But doing it this way will prevent me from using record pagination if needed. For example, if for a specified month the following caldates were returned:

        date_id  date_eid  date_start   date_end
        2        37        2010-06-21   2010-06-27
        94       53        2010-06-09   2010-07-08

    doing record pagination would see this as only 2 records ("rows"), but looping through them with PHP would generate 29 "rows". So I figure that if I use MySQL to create each row instead of PHP, I can achieve the same thing AND still be able to use pagination if a month has a lot of events/dates. As far as performance goes, I'm not sure which option is more efficient. Both would send the same amount of info to the browser, so it's really only the work required to generate the info that matters. My current query, which fetches all the occurrences for a specified month and, to make things just a little more complicated, joins them with their event and category, looks like this:

        $sql_to_execute = "
            SELECT
                date_id, date_eid, date_start, date_end,
                event_id, event_title, event_category, event_private,
                event_location,
                SUBSTRING_INDEX(event_detailsstripped, ' ', 40) AS event_detailsstripped,
                event_time, event_starttime, event_endtime, event_active,
                cat_colour
            FROM
                ( caldates
                  LEFT JOIN calevents ON caldates.date_eid = calevents.event_id )
                LEFT JOIN calcats ON calevents.event_category = calcats.cat_id
            WHERE
                date_start <= '".mysql_real_escape_string($dbi_list_end_date)."'
                AND date_end >= '".mysql_real_escape_string($dbi_list_start_date)."'
                ".$dbi_category."
            ORDER BY date_start ASC
        ";

    Any help or advice would be greatly appreciated! Thanks, Peter
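
    One way to fan each occurrence out to one row per day entirely in MySQL: join caldates against a small helper table of sequential integers (called numbers here, with a column n = 0, 1, 2, ... up to the longest span; the table name and column are assumptions) and offset date_start by n days. A sketch of the core, without the event/category joins:

        SELECT d.date_id,
               d.date_eid,
               d.date_start,
               d.date_end,
               DATE_ADD(d.date_start, INTERVAL n.n DAY) AS date_day
        FROM   caldates d
        JOIN   numbers  n ON n.n <= DATEDIFF(d.date_end, d.date_start)
        WHERE  d.date_start <= '2010-06-30'
          AND  d.date_end   >= '2010-06-01'
          -- keep only the days that fall inside the requested month
          AND  DATE_ADD(d.date_start, INTERVAL n.n DAY)
                   BETWEEN '2010-06-01' AND '2010-06-30'
        ORDER  BY date_day;

    LIMIT/OFFSET pagination then works on the expanded rows, and the existing LEFT JOINs to calevents and calcats can be added unchanged.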

  • Python: Is there a way to get HTML that was dynamically created by JavaScript?

    - by Joschua
    As far as I can tell, this is the case for LyricWikia. The lyrics (example) can be accessed from the browser, but can't be found in the source code (which can be opened with Ctrl+U in most browsers) or by reading the contents of the site with Python:

        from urllib.request import urlopen

        URL = 'http://lyrics.wikia.com/Billy_Joel:Piano_Man'
        r = urlopen(URL).read().decode('utf-8')

    And the test:

        >>> 'Now John at the bar is a friend of mine' in r
        False
        >>> 'John' in r
        False

    But when you select and look at the source code of the box in which the lyrics are displayed, you can see that there is:

        <div class="lyricbox">[...]</div>

    Is there a way to get the contents of that div element with Python?
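
    If the content really is built by JavaScript at load time, a plain HTTP fetch can never see it; you need something that executes the page's scripts. A minimal sketch with Selenium (pip install selenium; a matching browser driver must be installed), kept deliberately bare:

        from selenium import webdriver
        from selenium.webdriver.common.by import By

        driver = webdriver.Firefox()
        try:
            driver.get('http://lyrics.wikia.com/Billy_Joel:Piano_Man')
            # Sketch only: production code should use WebDriverWait for the
            # element instead of assuming it has already rendered.
            box = driver.find_element(By.CLASS_NAME, 'lyricbox')
            print(box.text)
        finally:
            driver.quit()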

  • Parse usable Street Address, City, State, Zip from a string

    - by Rob Allen
    Problem: I have an address field from an Access database which has been converted to SQL Server 2005. This field has everything all in one field. I need to parse out the individual sections of the address into their appropriate fields in a normalized table. I need to do this for approximately 4,000 records, and it needs to be repeatable. Here are the rules for this exercise:

    1 - No whining about how this should have been separate fields in the first place; we are often confronted with less-than-ideal situations and have to make the best of them.
    2 - For this post, use any language you want.
    3 - Feel free to play code golf.
    4 - Assume an address in the US (for now).
    5 - Assume that the input string will sometimes contain an addressee (the person being addressed) and/or a second street address (i.e. Suite B).
    6 - States may be abbreviated.
    7 - Zip code could be standard 5 digits or zip+4.
    8 - There are typos in some instances.

    UPDATE: In response to the questions posed: standards were not universally followed; I need to store the individual values, not just geocode; and "errors" means typos (corrected above). Sample data:

        A. P. Croll & Son 2299 Lewes-Georgetown Hwy, Georgetown, DE 19947
        11522 Shawnee Road, Greenwood DE 19950
        144 Kings Highway, S.W. Dover, DE 19901
        Intergrated Const. Services 2 Penns Way Suite 405 New Castle, DE 19720
        Humes Realty 33 Bridle Ridge Court, Lewes, DE 19958
        Nichols Excavation 2742 Pulaski Hwy Newark, DE 19711
        2284 Bryn Zion Road, Smyrna, DE 19904
        VEI Dover Crossroads, LLC 1500 Serpentine Road, Suite 100 Baltimore MD 21
        580 North Dupont Highway Dover, DE 19901
        P.O. Box 778 Dover, DE 19903
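
    A sketch of a first pass in Python (rule 2 allows any language): anchor on the most regular parts, the zip and the two-letter state at the end, and take the last run of letters before them as the city. Addressee and suite text stays lumped into the street part, and lines it cannot match (like the truncated Baltimore row) come back as None for hand review.

        import re

        ADDR_RE = re.compile(
            r'^(?P<street>.*?)[,\s]+'
            r'(?P<city>[A-Za-z. ]+?)[,\s]+'
            r'(?P<state>[A-Z]{2})[,\s]+'
            r'(?P<zip>\d{5}(?:-\d{4})?)$'
        )

        def parse_address(raw):
            """Return a dict of parts, or None when the line needs hand review."""
            m = ADDR_RE.match(raw.strip())
            return m.groupdict() if m else None

        print(parse_address('11522 Shawnee Road, Greenwood DE 19950'))
        # {'street': '11522 Shawnee Road', 'city': 'Greenwood',
        #  'state': 'DE', 'zip': '19950'}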

  • how can I parse and group/do stats on user agent strings?

    - by user151841
    I have a database that has the various user-agent strings of visitors to our site. I'd like to do a 'survey' of them to see what browsers our users are using, so that I can know what features I can use in future development. Is there a tool to parse and run statistics on user-agent strings, or on a bunch of strings like this? Ideally, I'd like to see a hierarchical grouping of the stats. For instance:

        Opera/9.80 (Windows Mobile; WCE; Opera Mobi/WMD-50301; U; en) Presto/2.4.13 Version/10.00
        Opera/9.80 (J2ME/MIDP; Opera Mini/4.2.14912/1280; U; en) Presto/2.2.0
        Opera/9.80 (J2ME/MIDP; Opera Mini/4.2.13918/812; U; en) Presto/2.2.0
        Opera/9.64 (Macintosh; Intel Mac OS X; U; en) Presto/2.1.1
        Opera/9.60 (J2ME/MIDP; Opera Mini/4.2.13918/786; U; en) Presto/2.2.0
        Opera/9.60 (J2ME/MIDP; Opera Mini/4.0.10992/432; U; en) Presto/2.2.0

    I'd like to see 6 entries for Opera, broken down into 3 for 9.80, 1 for 9.64 and 2 for 9.60, and so forth for all browsers. Other dimensions, such as OS, would cross the boundaries of the browser-version hierarchy, but it might be nice to see those as well.
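
    Dedicated parsers exist (the ua-parser project, for one), but a rough two-level histogram like the one described only needs the leading product/version token. A Python sketch, with the list standing in for rows pulled from the database:

        import re
        from collections import Counter

        uas = [
            'Opera/9.80 (Windows Mobile; WCE; Opera Mobi/WMD-50301; U; en) Presto/2.4.13 Version/10.00',
            'Opera/9.64 (Macintosh; Intel Mac OS X; U; en) Presto/2.1.1',
            'Opera/9.60 (J2ME/MIDP; Opera Mini/4.2.13918/786; U; en) Presto/2.2.0',
        ]

        # Count (browser, version) pairs from the leading "Name/1.2.3" token.
        counts = Counter()
        for ua in uas:
            m = re.match(r'(\w+)/([\d.]+)', ua)
            if m:
                counts[m.groups()] += 1

        # Print the hierarchy: browser totals, then per-version counts.
        for browser in sorted({b for b, v in counts}):
            total = sum(n for (b, v), n in counts.items() if b == browser)
            print(f'{browser}: {total}')
            for (b, v), n in sorted(counts.items()):
                if b == browser:
                    print(f'    {v}: {n}')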

  • Evaluating mathematical expressions in Python

    - by vander
    Hi, I want to tokenize a given mathematical expression into a binary tree like this:

        ((3 + 4 - 1) * 5 + 6 * -7) / 2

                      '/'
                     /   \
                    +     2
                  /   \
                 *     *
                / \   / \
               -   5 6   -7
              / \
             +   1
            / \
           3   4

    Is there any pure-Python way to do this? Like passing the expression as a string to Python and getting back a tree like the one above. Thanks.
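
    The standard library can already do this: ast.parse builds exactly this tree. A sketch that converts the AST into nested (op, left, right) tuples (Python 3.8+ for ast.Constant):

        import ast

        OPS = {ast.Add: '+', ast.Sub: '-', ast.Mult: '*', ast.Div: '/'}

        def to_tree(node):
            """Turn an ast expression node into (op, left, right) tuples."""
            if isinstance(node, ast.BinOp):
                return (OPS[type(node.op)], to_tree(node.left), to_tree(node.right))
            if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
                return -to_tree(node.operand)   # fold unary minus into the number
            if isinstance(node, ast.Constant):
                return node.value
            raise ValueError(f'unsupported node: {node!r}')

        expr = ast.parse('((3 + 4 - 1) * 5 + 6 * -7) / 2', mode='eval')
        print(to_tree(expr.body))
        # ('/', ('+', ('*', ('-', ('+', 3, 4), 1), 5), ('*', 6, -7)), 2)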

  • Trying to parse XML, but XmlDocument.LoadXml() is trying to download?

    - by maxp
    I have a string input that I do not know whether or not is valid XML. I think the simplest approach is to wrap

        new XmlDocument().LoadXml(strINPUT);

    in a try/catch. The problem I'm facing is that sometimes strINPUT is an HTML file, and if the header of this file contains

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xml:lang="en-GB" xmlns="http://www.w3.org/1999/xhtml" lang="en-GB">

    ...like many do, it actually tries to make a connection to the w3.org URL, which I really don't want it doing. Does anyone know if it's possible to just parse the string without it trying to be clever and checking external URLs? Failing that, is there an alternative to XmlDocument?
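
    The download is the reader resolving the external DTD. A sketch that loads the same XmlDocument through an XmlReader with DTD processing ignored and no resolver, so nothing is ever fetched (DtdProcessing needs .NET 4; older frameworks have the ProhibitDtd property instead):

        using System.IO;
        using System.Xml;

        var settings = new XmlReaderSettings
        {
            DtdProcessing = DtdProcessing.Ignore,  // skip the DOCTYPE entirely
            XmlResolver = null                     // never touch the network
        };

        var doc = new XmlDocument { XmlResolver = null };
        using (var reader = XmlReader.Create(new StringReader(strINPUT), settings))
        {
            doc.Load(reader);  // still throws XmlException on malformed input
        }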

  • What's the best way to retrieve two pieces of data from an XML file?

    - by Morinar
    I've got an XML document that is in either a pre- or post-FO-transform state, and I need to extract some information from it. In the pre case, I need to pull out two tags that represent the pageWidth and pageHeight, and in the post case I need to extract the page-height and page-width parameters from a specific tag (I forget which one it is off the top of my head). What I'm looking for is an efficient, easily maintainable way to grab these two elements. I'd like to read the document only a single time while fetching the two things I need. I initially started writing something that would use BufferedReader + FileReader, but then I'm doing string searching, and it gets messy when the tags span multiple lines. I then looked at DOMParser, which seems like it would be ideal, but I don't want to have to read the entire file into memory if I can help it, as the files could potentially be large and the tags I'm looking for will nearly always be close to the top of the file. I then looked into SAXParser, but that seems like a big pile of complicated overkill for what I'm trying to accomplish. Anybody have any advice? Or simple implementations that would accomplish my goal? Thanks.
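
    StAX (javax.xml.stream, in the JDK since Java 6) sits between those two options: it streams like SAX but is pulled from a plain loop, and it can stop as soon as both values are found, which keeps the near-the-top case cheap. A sketch; the element and attribute names are placeholders for whichever FO tag actually carries them:

        import java.io.FileInputStream;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        public class PageSizeReader {
            public static void main(String[] args) throws Exception {
                XMLStreamReader r = XMLInputFactory.newInstance()
                        .createXMLStreamReader(new FileInputStream(args[0]));
                String width = null, height = null;
                // Pull events until both values are in hand, then stop.
                while (r.hasNext() && (width == null || height == null)) {
                    if (r.next() == XMLStreamConstants.START_ELEMENT
                            && "simple-page-master".equals(r.getLocalName())) {
                        width  = r.getAttributeValue(null, "page-width");
                        height = r.getAttributeValue(null, "page-height");
                    }
                }
                r.close();
                System.out.println(width + " x " + height);
            }
        }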

  • Get HTML element information in .NET

    - by martin.malek
    Hi, I'm just wondering whether there is any way to get information about an HTML element in my .NET application. The input is an HTML page plus paths to its CSS files, etc. I want to take, e.g., the H1 tag and find out what CSS will apply to it. Is there any code for this, or can I use IE and take this information from it automatically inside my application?
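
    There is no managed API that computes applied CSS on its own, but IE can be asked for it through the WinForms WebBrowser control and the mshtml COM interfaces (currentStyle is the style IE resolved after the cascade). An untested outline; it needs a reference to Microsoft.mshtml and must run on an STA thread with a message loop:

        using System;
        using System.Windows.Forms;

        var browser = new WebBrowser();
        browser.DocumentCompleted += (s, e) =>
        {
            var h1 = browser.Document.GetElementsByTagName("h1")[0];
            // Drop to the COM element to read the style IE computed for it.
            var style = ((mshtml.IHTMLElement2)h1.DomElement).currentStyle;
            Console.WriteLine(style.fontSize + " / " + style.color);
        };
        browser.Navigate("http://example.com/");  // placeholder URL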

  • How can I parse a namespace using the SAX parser?

    - by Silvestri
    Hello, using a Twitter search URL, i.e. http://search.twitter.com/search.rss?q=android, returns RSS that has an item that looks like:

        <item>
          <title>@UberTwiter still waiting for @ubertwitter android app!!!</title>
          <link>http://twitter.com/meals69/statuses/21158076391</link>
          <description>still waiting for an app!!!</description>
          <pubDate>Sat, 14 Aug 2010 15:33:44 +0000</pubDate>
          <guid>http://twitter.com/meals69/statuses/21158076391</guid>
          <author>Some Twitter User</author>
          <media:content type="image/jpg" height="48" width="48" url="http://a1.twimg.com/profile_images/756343289/me2_normal.jpg"/>
          <google:image_link>http://a1.twimg.com/profile_images/756343289/me2_normal.jpg</google:image_link>
          <twitter:metadata>
            <twitter:result_type>recent</twitter:result_type>
          </twitter:metadata>
        </item>

    Pretty simple. My code parses out everything (title, link, description, pubDate, etc.) without any problems. However, I'm getting null on:

        <google:image_link>

    I'm using Java to parse the RSS feed. Do I have to handle compound local names differently than I would a simpler local name? This is the bit of code that parses out link, description, pubDate, etc.:

        @Override
        public void endElement(String uri, String localName, String name) throws SAXException {
            super.endElement(uri, localName, name);
            if (this.currentMessage != null) {
                if (localName.equalsIgnoreCase(TITLE)) {
                    currentMessage.setTitle(builder.toString());
                } else if (localName.equalsIgnoreCase(LINK)) {
                    currentMessage.setLink(builder.toString());
                } else if (localName.equalsIgnoreCase(DESCRIPTION)) {
                    currentMessage.setDescription(builder.toString());
                } else if (localName.equalsIgnoreCase(PUB_DATE)) {
                    currentMessage.setDate(builder.toString());
                } else if (localName.equalsIgnoreCase(GUID)) {
                    currentMessage.setGuid(builder.toString());
                } else if (uri.equalsIgnoreCase(AVATAR)) {
                    currentMessage.setAvatar(builder.toString());
                } else if (localName.equalsIgnoreCase(ITEM)) {
                    messages.add(currentMessage);
                }
                builder.setLength(0);
            }
        }

    startDocument looks like:

        @Override
        public void startDocument() throws SAXException {
            super.startDocument();
            messages = new ArrayList<Message>();
            builder = new StringBuilder();
        }

    startElement looks like:

        @Override
        public void startElement(String uri, String localName, String name, Attributes attributes) throws SAXException {
            super.startElement(uri, localName, name, attributes);
            if (localName.equalsIgnoreCase(ITEM)) {
                this.currentMessage = new Message();
            }
        }

    Tony
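
    SAX only fills in the uri/localName arguments when the parser factory is namespace-aware, and that is off by default; with it on, <google:image_link> is reported with localName "image_link" plus its namespace URI, never as one compound name. A sketch (the URI shown is an assumption; check the feed's xmlns:google declaration):

        import javax.xml.parsers.SAXParser;
        import javax.xml.parsers.SAXParserFactory;

        SAXParserFactory factory = SAXParserFactory.newInstance();
        factory.setNamespaceAware(true);   // off by default
        SAXParser parser = factory.newSAXParser();

        // Then, inside endElement, match namespace URI + local name:
        // private static final String GOOGLE_NS = "http://base.google.com/ns/1.0";
        // ...
        // } else if (GOOGLE_NS.equals(uri) && "image_link".equals(localName)) {
        //     currentMessage.setAvatar(builder.toString());
        // }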
