Search Results

Search found 32104 results on 1285 pages for 'html parsing'.


  • displaying multi-section html documents - best practices

    - by ecpepper
    I work at a research organization and we publish a lot of large-ish documents, usually organized in sections. What I want to know is how best to present these multi-section documents on our website. Presently, what I do is load the entire document as a single page, with each section as its own div. Then I show and hide divs as needed via a table of contents and "next" and "prev" buttons. The advantages of this are mainly: 1) you can move between sections very quickly, and 2) it produces consistent analytics (when a page is loaded, I know a report is being read). The disadvantages, however, are real: readers can't take advantage of the browser back/forward buttons to move between sections; it's complicated to create direct links to individual sections (I can do it with JavaScript, but it's not easy for other people to grab and share); and for long reports, you have to wait for the full report to load before you can move around (and that can include hordes of images and charts). Do other people have thoughts on better ways to organize this? Here's an example of the current system: http://massbudget.org/825

    Read the article

  • Layouts in HTML

    - by TerNovi
    I am trying to have a div, and inside that div several different places where I can put content. For example:

        <div blah>
            <table blah> content... </table>
            <table blah> content... </table>
            <table blah> content... </table>
        </div>

    I am not really a web developer, so I know this question might seem simple, but any help is greatly appreciated. Oh, and I am using Macromedia Dreamweaver 8. Thanks.

    Read the article

  • Practical considerations for HTML / CSS naming conventions (syntax)

    - by Jeroen
    Question: what are the practical considerations for the syntax in class and id values? Note that I'm not asking about the semantics, i.e. the actual words that are being used, as for example described in this blogpost. There are a lot of resources on that side of naming conventions already, in fact obscuring my search for practical information on the various syntactical bits: casing, use of punctuation (specifically the - dash), specific characters to use or avoid, etc. To sum up the reasons I'm asking this question: the naming restrictions on id and class don't naturally lead to any conventions; the abundance of resources on the semantic side of naming conventions obscures searches on the syntactic considerations; I couldn't find any authoritative source on this; and there wasn't any question on SE Programmers yet on this topic :) Some of the conventions I've considered using:

        1. UpperCamelCase, mainly as a cross-over habit from server-side coding
        2. lowerCamelCase, for consistency with JavaScript naming conventions
        3. css-style-classes, which is consistent with the naming of CSS properties (but can be annoying for Ctrl+Shift+ArrowKey selection of text)
        4. with_under_scores, which I personally haven't seen used much
        5. alllowercase, simple to remember but can be hard to read for longer names
        6. UPPERCASEFTW, as a great way to annoy your fellow programmers (perhaps combined with option 4 for readability)

    And probably I've left out some important options or combinations as well. So: what considerations are there for naming conventions, and to which convention do they lead?

    Read the article

  • HTML coding style: attribute starts on a new line

    - by Matty
    sublvl's front end developer seems to have a strange coding style that I've never seen before. Every time they begin a new element, immediately after the element name they insert a line break. The first thing that appears on the next line is the first attribute of the element. For example:

        <div
        id="player-container"><div
        id="player-bar"><div
        id="player-controls-wrapper"><div
        id="player-controls"><div
        id ="player-controls-buttons">
        <a

    The above code was found here. I've never seen this kind of coding style before. What's going on here? Is this just a quirky style or is there some reasoning behind it?

    Read the article

  • Does sitewide html refactoring affect Google traffic?

    - by Name
    Good morning, I have recently made a big structural change on my site, and the very next day the number of Google impressions went from 75,000 to 3,000, with a proportional drop in traffic from searches. No URLs were changed, and neither were the page titles or descriptions. Everything is exactly the same, just different looking, except that the site barely appears on Google anymore. Does anybody have a clue why?

    Read the article

  • Render Ruby object to interactive html

    - by AvImd
    I am developing a tool that discovers the network services enabled on a host and writes a short summary of them, like this:

        init,1
        +-- login,1560
            +-- bash,1629
                +-- nc,12137 -lup 50505
        {
            :net => [
                [0] "*:50505 IPv4 UDP "
            ],
            :fds => [
                [0] "/root (cwd)",
                [1] "/",
                [2] "/bin/nc.traditional",
                [3] "/xochikit/ld_poison.so (stat: No such file or directory)",
                [4] "/dev/tty2",
                [5] "*:50505"
            ]
        }

    It proved to be nicely formatted and useful for quick discovery, thanks to the colors provided by the awesome_print gem. However, its output is just text. One issue is that if I want to share it, I lose the colors. I'd also like to fold and unfold parts of objects, quickly jump to specific processes, and so on. Adding comments, for example. Thus I want something web-based. What is the best approach to implement features like these? I haven't worked with web interfaces before and I don't have much experience with Ruby.
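
    A minimal sketch of one approach, shown in Python rather than Ruby purely for illustration (the data and helper names below are hypothetical): walk the nested structure and emit HTML <details>/<summary> elements, which give folding and unfolding in modern browsers without any JavaScript.

        from html import escape

        def to_html(obj, label="object"):
            # Nested containers become <details>/<summary> blocks, so every level
            # can be folded and unfolded in the browser without any JavaScript.
            if isinstance(obj, dict):
                items = "".join(f"<li>{to_html(v, k)}</li>" for k, v in obj.items())
                return (f"<details open><summary>{escape(str(label))}</summary>"
                        f"<ul>{items}</ul></details>")
            if isinstance(obj, (list, tuple)):
                items = "".join(f"<li>{to_html(v, i)}</li>" for i, v in enumerate(obj))
                return (f"<details open><summary>{escape(str(label))}</summary>"
                        f"<ol start='0'>{items}</ol></details>")
            # Leaves are plain text; escape them so markup in values cannot break the page.
            return f"<span>{escape(str(label))}: {escape(str(obj))}</span>"

        # Hypothetical data mirroring the summary above.
        snapshot = {"nc,12137 -lup 50505": {"net": ["*:50505 IPv4 UDP"],
                                            "fds": ["/root (cwd)", "/", "/bin/nc.traditional"]}}
        print("<html><body>" + to_html(snapshot, "hosts") + "</body></html>")

    The same walk is straightforward to port to Ruby with ERB or string building; comments and per-process anchors can then be layered on top of the generated page.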

    Read the article

  • PHP parsing invalid html

    - by kmunky
    Hi, I'm trying to parse some HTML that is not on my server:

        $dom = new DOMDocument();
        $dom->loadHTMLfile("http://www.some-site.org/page.aspx");
        echo $dom->getElementById('his_id')->item(0);

    but PHP returns an error, something like: ID his_id already defined in http://www.some-site.org/page.aspx, line: 33. I think that is because DOMDocument is dealing with invalid HTML. So, how can I parse it even though it is invalid?
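
    The general answer, regardless of language, is to hand the markup to an error-tolerant parser instead of a strict one; in PHP, DOMDocument will typically still build a tree for broken markup if libxml's warnings are silenced with libxml_use_internal_errors(true) before loading. Purely to illustrate the "forgiving token stream" idea, here is a sketch using Python's html.parser, which never raises on malformed HTML (the class and URL handling are illustrative, not the asker's code):

        from html.parser import HTMLParser
        from urllib.request import urlopen

        class IdFinder(HTMLParser):
            """Collect the text of the first element whose id matches, even in broken HTML."""
            def __init__(self, wanted_id):
                super().__init__()
                self.wanted_id = wanted_id
                self.depth = 0          # >0 while inside the matching element
                self.text = []

            def handle_starttag(self, tag, attrs):
                # Void tags such as <br> are not special-cased in this sketch.
                if self.depth:
                    self.depth += 1
                elif dict(attrs).get("id") == self.wanted_id:
                    self.depth = 1

            def handle_endtag(self, tag):
                if self.depth:
                    self.depth -= 1

            def handle_data(self, data):
                if self.depth:
                    self.text.append(data)

        page = urlopen("http://www.some-site.org/page.aspx").read().decode("utf-8", "replace")
        finder = IdFinder("his_id")
        finder.feed(page)               # malformed markup is tolerated, not rejected
        print("".join(finder.text).strip())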

    Read the article

  • PDF parsing file trailer

    - by Ralph
    It is not clear from the PDF ISO standard document (PDF32000-2008) whether a comment may follow the startxref keyword:

        startxref
        Byte_offset_of_last_cross-reference_section
        %%EOF

    The standard does seem to imply that comments may appear anywhere:

    7.2.3 Comments: Any occurrence of the PERCENT SIGN (25h) outside a string or stream introduces a comment. The comment consists of all characters after the PERCENT SIGN and up to but not including the end of the line, including regular, delimiter, SPACE (20h), and HORIZONTAL TAB characters (09h). A conforming reader shall ignore comments, and treat them as single white-space characters. That is, a comment separates the token preceding it from the one following it. EXAMPLE: The PDF fragment in this example is syntactically equivalent to just the tokens abc and 123.

        abc% comment ( /%) blah blah blah
        123

    Comments (other than the %PDF–n.m and %%EOF comments described in 7.5, "File Structure") have no semantics. They are not necessarily preserved by applications that edit PDF files.

    If comments are allowed to appear after the startxref keyword, parsing the file becomes more difficult, because you do not know how far to back up from the %%EOF comment to start parsing to find the byte offset. Any ideas?
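
    One common workaround is not to parse backwards from %%EOF at all, but to read a fixed-size tail of the file and search it for the last occurrence of the startxref keyword, taking the first integer token after it and skipping any comments or whitespace in between. A rough sketch of that approach in Python (the tail size is an arbitrary assumption, not something the spec prescribes):

        import re

        def read_startxref(path, tail_size=2048):
            """Return the byte offset stored after the last 'startxref' keyword.

            Scanning a fixed-size tail sidesteps the question of how far to back up
            from %%EOF: comments between 'startxref', the offset and '%%EOF' are
            simply skipped by the regular expression.
            """
            with open(path, "rb") as f:
                size = f.seek(0, 2)                   # end of file
                f.seek(max(0, size - tail_size))
                tail = f.read()
            # A comment runs from '%' to end of line; allow any number of them
            # (plus whitespace) between the keyword and the offset digits.
            matches = re.findall(rb"startxref(?:\s|%[^\r\n]*)*?(\d+)", tail)
            if not matches:
                raise ValueError("startxref keyword not found in file tail")
            return int(matches[-1])

        # print(read_startxref("example.pdf"))        # hypothetical file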

    Read the article

  • Parsing basic math equations for children's educational software?

    - by Simucal
    Inspired by a recent TED talk, I want to write a small piece of educational software. The researcher created little miniature computers in the shape of blocks called "Siftables". [David Merril, inventor - with Siftables in the background.] There were many applications he used the blocks in, but my favorite was when each block was a number or basic operation symbol. You could then re-arrange the blocks of numbers or operation symbols in a line, and it would display an answer on another Siftable block. So, I've decided I want to implement a software version of "Math Siftables" on a limited scale as my final project for a CS course I'm taking. What is the generally accepted way to parse and interpret a string of math expressions and, if they are valid, perform the operation? Is this a case where I should implement a full parser/lexer? I would imagine interpreting basic math expressions is a semi-common problem in computer science, so I'm looking for the right way to approach this. For example, if my Math Siftable blocks were arranged like: [1] [+] [2] this would be a valid sequence and I would perform the necessary operation to arrive at "3". However, if the child were to drag several operation blocks together, such as: [2] [\] [\] [5] it would obviously be invalid. Ultimately, I want to be able to parse and interpret any number of chains of operations with the blocks that the user can drag together. Can anyone explain to me or point me to resources for parsing basic math expressions? I'd prefer as much of a language-agnostic answer as possible.
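
    For a vocabulary this small (integers plus + - * /) a full parser generator is usually overkill: a hand-written check that values and operators alternate, combined with the classic two-stack (shunting-yard style) evaluation, covers it, and it rejects sequences like [2] [/] [/] [5] naturally. A minimal, language-agnostic sketch of that idea, written in Python here:

        PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2}

        def evaluate(tokens):
            """Evaluate a flat list of blocks such as ["1", "+", "2"].

            Raises ValueError for invalid sequences like ["2", "/", "/", "5"].
            """
            values, ops = [], []

            def apply(op):
                if len(values) < 2:
                    raise ValueError("operator %r is missing an operand" % op)
                b, a = values.pop(), values.pop()
                values.append({"+": a + b, "-": a - b,
                               "*": a * b, "/": a / b}[op])

            expect_value = True                     # blocks must alternate value / operator
            for tok in tokens:
                if expect_value:
                    if not tok.lstrip("-").isdigit():
                        raise ValueError("expected a number, got %r" % tok)
                    values.append(int(tok))
                else:
                    if tok not in PRECEDENCE:
                        raise ValueError("expected an operator, got %r" % tok)
                    while ops and PRECEDENCE[ops[-1]] >= PRECEDENCE[tok]:
                        apply(ops.pop())
                    ops.append(tok)
                expect_value = not expect_value
            if expect_value:                        # sequence ended on an operator
                raise ValueError("expression ends with an operator")
            while ops:
                apply(ops.pop())
            return values[0]

        print(evaluate(["1", "+", "2"]))            # 3
        print(evaluate(["2", "+", "3", "*", "4"]))  # 14, precedence handled

    If parentheses blocks are ever added, the same two-stack structure extends to them; only then does a recursive-descent or generated parser start to pay off.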

    Read the article

  • Parsing HTTP - Bytes.length != String.length

    - by hotzen
    Hello, I consume HTTP via nio.SocketChannel, so I get chunks of data as Array[Byte]. I want to put these chunks into a parser and continue parsing after each chunk has been put. HTTP itself seems to use an ISO-8859 charset, but the payload/body may be arbitrarily encoded: if the HTTP Content-Length specifies X bytes, the UTF-8-decoded body may have far fewer characters (one character may be represented in UTF-8 by 2 bytes, etc.). So what is a good parsing strategy to honor an explicitly specified Content-Length and/or a Transfer-Encoding: chunked, which specifies a chunk length to be honored?

        1. Append each data chunk to a mutable.ArrayBuffer[Byte], search for CRLF in the bytes, decode everything from 0 until CRLF to String, and match with regular expressions like StatusRegex, HeaderRegex, etc.?
        2. Decode each data chunk with the proper charset (e.g. iso8859, utf8, etc.) and add it to a StringBuilder. With this solution I am not able to honor any Content-Length or chunk size, but... do I have to care about it?
        3. Any other solution...?
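
    One workable split of responsibilities, sketched in Python rather than Scala purely to show where the byte/character boundary sits (the class and field names are made up): keep everything as bytes until the header block ends at CRLF CRLF, decode the headers as ISO-8859-1 (one byte per character), then count the body in raw bytes against Content-Length and decode it only once the full body has arrived.

        class HttpAccumulator:
            """Feed arbitrary byte chunks in; decode headers and body at the right moments."""
            def __init__(self):
                self.buf = bytearray()
                self.headers = None          # parsed once the CRLFCRLF boundary is seen
                self.body_length = None

            def feed(self, chunk: bytes):
                self.buf.extend(chunk)
                if self.headers is None and b"\r\n\r\n" in self.buf:
                    head, _, rest = bytes(self.buf).partition(b"\r\n\r\n")
                    # Header lines are safe to decode as Latin-1: one byte == one char.
                    lines = head.decode("iso-8859-1").split("\r\n")
                    self.status = lines[0]
                    self.headers = dict(l.split(": ", 1) for l in lines[1:] if ": " in l)
                    self.body_length = int(self.headers.get("Content-Length", 0))
                    self.buf = bytearray(rest)
                # Content-Length counts bytes, so compare byte counts, never len(str).
                if self.headers is not None and len(self.buf) >= self.body_length:
                    charset = "utf-8"        # in real code, parse it out of Content-Type
                    return self.buf[: self.body_length].decode(charset)
                return None                  # need more chunks

        acc = HttpAccumulator()
        acc.feed(b"HTTP/1.1 200 OK\r\nContent-Length: 6\r\n\r\n")
        print(acc.feed("héllo".encode("utf-8")))    # 6 bytes but 5 characters -> 'héllo'

    Transfer-Encoding: chunked fits the same pattern: the chunk-size lines are parsed at the byte level and only the reassembled body bytes are ever decoded.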

    Read the article

  • Need some ideas on how to accomplish this in Java (parsing strings)

    - by Matt
    Sorry I couldn't think of a better title, but thanks for reading! My ultimate goal is to read a .java file, parse it, and pull out every identifier, then store them all in a list. Two preconditions are that there are no comments in the file and that all identifiers are composed of letters only. Right now I can read the file, parse it by spaces, and store everything in a list. If anything in the list is a Java reserved word, it is removed. Also, I remove any loose symbols that are not attached to anything (brackets and arithmetic symbols). Now I am left with a bunch of weird strings, but at least they have no spaces in them. I know I am going to have to re-parse everything with a . delimiter in order to pull out identifiers like System.out.print, but what about strings like this example:

        Logger.getLogger(MyHash.class.getName()).log(Level.SEVERE,

    After re-parsing by . I will be left with more crazy strings like:

        getLogger(MyHash getName()) log(Level SEVERE,

    How am I going to be able to pull out all the identifiers while leaving out all the trash? Just keep re-parsing by every symbol that could exist in Java code? That seems rather lame and time consuming. I am not even sure if it would work completely. So, can you suggest a better way of doing this?
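
    Rather than splitting on one delimiter after another, it is usually simpler to go the other way round: describe what an identifier looks like and pull every match out of the raw text in one pass, then drop reserved words. A sketch of that idea (in Python for brevity; the same pattern works with java.util.regex, and the keyword set shown here is deliberately incomplete):

        import re

        # Letters-only identifiers, per the question's precondition.
        IDENTIFIER = re.compile(r"[A-Za-z]+")

        RESERVED = {"public", "private", "static", "void", "class", "new",
                    "return", "if", "else", "for", "while", "import", "package"}
        # ...the full Java keyword list would go here.

        def identifiers(source: str):
            """Return every identifier in a chunk of Java source, reserved words removed."""
            return [tok for tok in IDENTIFIER.findall(source) if tok not in RESERVED]

        line = "Logger.getLogger(MyHash.class.getName()).log(Level.SEVERE,"
        print(identifiers(line))
        # ['Logger', 'getLogger', 'MyHash', 'getName', 'log', 'Level', 'SEVERE']

    Because the regex only ever matches runs of letters, dots, parentheses and other punctuation never need to be handled explicitly.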

    Read the article

  • Does jQuery strip some html elements from a string when using .html()?

    - by Nic Hubbard
    I have a var that contains a full HTML page, including the head, html, body, etc. When I pass that string into the .html() function, jQuery strips out all those elements, such as body, html, head, etc., which I don't want. My data var contains:

        <html>
        <head>
        <title>Untitled Document</title>
        </head>
        <body>
        </body>
        </html>

        // data is a full html document string
        data = $('<div/>').html(data); // jQuery strips my document string!
        alert(data.find('head').html());

    I need to manipulate a full HTML page string so that I can return what is in the <head> element. I would like to do this with jQuery, but it seems all of the methods, append(), prepend() and html(), try to convert the string to DOM elements, which removes all the other parts of a full HTML page. Is there another way that I could do this? I would be fine using another method. My final goal is to find certain elements inside my string, so I figured jQuery would be best, since I am so used to it. But if it is going to trim and remove parts of my string, I am going to have to look for another method. Ideas?

    Read the article

  • Problem Parsing JSON Result with jQuery

    - by senfo
    I am attempting to parse JSON using jQuery and I'm running into issues. In the code below, I'm using a static file, but I've also tested using an actual URL. For some reason, the data keeps coming back as null:

        <!DOCTYPE html>
        <html>
        <head>
          <title>JSON Test</title>
          <script src="http://code.jquery.com/jquery-latest.js"></script>
          <script>
            $.getJSON('results.json', function(data) {
              alert(data); // Result is always null
            });
          </script>
        </head>
        <body>
        </body>
        </html>

    The JSON results look like the following:

        {"title":"HEALTHPOINT TYEE CAMPUS","link":"http://www.healthpointchc.org","id":"tag:datawarehouse.hrsa.gov,2010-04-29:/8357","org":"HEALTHPOINT TYEE CAMPUS","address":{"street-address":"4424 S. 188TH St.","locality":"Seatac","region":"Washington","postal-code":"98188-5028"},"tel":"206-444-7746","category":"Service Delivery Site","location":"47.4344818181818 -122.277672727273","update":"2010-04-28T00:00:00-05:00"}

    If I replace my URL with the Flickr API URL (http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&tagmode=any&format=json&jsoncallback=?), I get back a valid JSON result that I am able to make use of. I have successfully validated my JSON at JSONLint, so I've run out of ideas as to what I might be doing wrong. Any thoughts?

    Read the article

  • COMPLETE list of HTML tag attributes which have a URL value?

    - by system PAUSE
    Besides the following, are there any HTML tag attributes that have a URL as their value?

        href attribute on tags: <link>, <a>, <area>
        src attribute on tags: <img>, <iframe>, <frame>, <embed>, <script>, <input>
        action attribute on tags: <form>
        data attribute on tags: <object>

    Looking for tags in wide usage, including non-standard tags and old browsers as well as HTML 4.01, HTML 5, and XHTML. Yes this question is kinda lightweight, but I googled around for about 45 minutes and didn't find this data centralized anywhere, so I figure it might help some other developer to have it here. Plus I'm sure I'm missing something. Feel free to repeat/reorganize this list in your answer. Upvoting the most complete answers will probably be most helpful to others.

    Read the article

  • Exceptions with DateTime parsing in RSS feed in C#

    - by hIpPy
    I'm trying to parse RSS 2.0 and Atom feeds using the SyndicationFeedFormatter and SyndicationFeed objects, but I'm getting XmlExceptions while parsing DateTime fields like pubDate and/or lastBuildDate:

        Wed, 24 Feb 2010 18:56:04 GMT+00:00   does not work
        Wed, 24 Feb 2010 18:56:04 GMT         works

    So it's throwing due to the timezone field. As a workaround, for familiar feeds I would manually fix those DateTime nodes - by catching the XmlException, loading the RSS into an XmlDocument, fixing those nodes' values, creating a new XmlReader and then returning the formatter from this new XmlReader object (code not shown). But for this approach to work, I need to know beforehand which nodes cause the exception.

        SyndicationFeedFormatter syndicationFeedFormatter = null;
        XmlReaderSettings settings = new XmlReaderSettings();
        using (XmlReader reader = XmlReader.Create(url, settings))
        {
            try
            {
                syndicationFeedFormatter = SyndicationFormatterFactory.CreateFeedFormatter(reader);
                syndicationFeedFormatter.ReadFrom(reader);
            }
            catch (XmlException xexp)
            {
                // fix those datetime nodes with exceptions and read again.
            }
            return syndicationFeedFormatter;
        }

    RSS feed: http://news.google.com/news?pz=1&cf=all&ned=us&hl=en&q=test&cf=all&output=rss

    Exception details:

        XmlException: Error in line 1 position 376. An error was encountered when parsing a DateTime value in the XML.
        at System.ServiceModel.Syndication.Rss20FeedFormatter.DateFromString(String dateTimeString, XmlReader reader)
        at System.ServiceModel.Syndication.Rss20FeedFormatter.ReadXml(XmlReader reader, SyndicationFeed result)
        at System.ServiceModel.Syndication.Rss20FeedFormatter.ReadFrom(XmlReader reader)
        at ... cs:line 171

        <rss version="2.0">
          <channel>
            ...
            <pubDate>Wed, 24 Feb 2010 18:56:04 GMT+00:00</pubDate>
            <lastBuildDate>Wed, 24 Feb 2010 18:56:04 GMT+00:00</lastBuildDate>   <----- exception
            ...
            <item>
              ...
              <pubDate>Wed, 24 Feb 2010 16:17:50 GMT+00:00</pubDate>
              <lastBuildDate>Wed, 24 Feb 2010 18:56:04 GMT+00:00</lastBuildDate>
            </item>
            ...
          </channel>
        </rss>

    Is there a better way to achieve this? Please help. Thanks.
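
    If the only offender is the "GMT+00:00" style of zone, one option is a single pre-processing pass over the raw feed text that rewrites the dates into an RFC 822 numeric offset before the formatter ever sees them, rather than waiting for the exception. A rough sketch of that normalization step, shown in Python only to keep it short (the same regex and replacement can be applied to the downloaded string in C# before constructing the XmlReader):

        import re

        # "Wed, 24 Feb 2010 18:56:04 GMT+00:00"  ->  "Wed, 24 Feb 2010 18:56:04 +0000"
        # RFC 822 dates take either a zone name or a numeric offset, not both.
        BAD_ZONE = re.compile(r"GMT([+-])(\d{2}):(\d{2})")

        def normalise_dates(xml_text: str) -> str:
            return BAD_ZONE.sub(r"\1\2\3", xml_text)

        feed = "<pubDate>Wed, 24 Feb 2010 18:56:04 GMT+00:00</pubDate>"
        print(normalise_dates(feed))
        # <pubDate>Wed, 24 Feb 2010 18:56:04 +0000</pubDate>

    Doing the fix on the text up front means you no longer need to know in advance which nodes would have thrown.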

    Read the article

  • Using PHP substr() and strip_tags() while retaining formatting and without breaking HTML

    - by Peter
    I have various HTML strings to cut to 100 characters (of the stripped content, not the original) without stripping tags and without breaking the HTML.

    Original HTML string (288 characters):

        $content = "<div>With a <span class='spanClass'>span over here</span> and a <div class='divClass'>nested div over <div class='nestedDivClass'>there</div> </div> and a lot of other nested <strong><em>texts</em> and tags in the air <span>everywhere</span>, it's a HTML taggy kind of day.</strong></div>";

    Standard trim: trim to 100 characters and the HTML breaks; the stripped content comes to ~40 characters:

        $content = substr($content, 0, 100)."...";
        /* output: <div>With a <span class='spanClass'>span over here</span> and a <div class='divClass'>nested div ove... */

    Stripped HTML: outputs the correct character count but obviously loses formatting:

        $content = substr(strip_tags($content), 0, 100)."...";
        /* output: With a span over here and a nested div over there and a lot of other nested texts and tags in the ai... */

    Partial solution: using HTML Tidy or Purifier to close off tags outputs clean HTML, but 100 characters of HTML, not of displayed content:

        $content = substr($content, 0, 100)."...";
        $tidy = new tidy;
        $tidy->parseString($content);
        $tidy->cleanRepair();
        /* output: <div>With a <span class='spanClass'>span over here</span> and a <div class='divClass'>nested div ove</div></div>... */

    Challenge: to output clean HTML and n characters (excluding the character count of HTML elements):

        $content = cutHTML($content, 100);
        /* output: <div>With a <span class='spanClass'>span over here</span> and a <div class='divClass'>nested div over <div class='nestedDivClass'>there</div> </div> and a lot of other nested <strong><em>texts</em> and tags in the ai</strong></div>... */

    Similar questions: How to clip HTML fragments without breaking up tags; Cutting HTML strings without breaking HTML tags.
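
    One way to implement the cutHTML() the challenge asks for is a small stream parser: copy tags through untouched, count only text characters against the limit, and when the limit is reached close whatever tags are still open. The sketch below shows the algorithm on Python's html.parser rather than PHP (the PHP equivalent would sit on top of a SAX-style parser or Tidy); class and variable names are mine.

        from html.parser import HTMLParser

        VOID = {"br", "img", "hr", "input", "meta", "link"}   # tags that never get closed

        class HtmlTruncator(HTMLParser):
            """Copy HTML through, but stop after `limit` characters of visible text."""
            def __init__(self, limit):
                super().__init__(convert_charrefs=True)
                self.limit, self.count = limit, 0
                self.out, self.stack = [], []

            def handle_starttag(self, tag, attrs):
                if self.count >= self.limit:
                    return
                rendered = "".join(f' {k}="{v}"' if v is not None else f" {k}" for k, v in attrs)
                self.out.append(f"<{tag}{rendered}>")
                if tag not in VOID:
                    self.stack.append(tag)

            def handle_endtag(self, tag):
                if self.count < self.limit and tag in self.stack:
                    while self.stack:                 # pop back to the matching tag
                        top = self.stack.pop()
                        self.out.append(f"</{top}>")
                        if top == tag:
                            break

            def handle_data(self, data):
                if self.count >= self.limit:
                    return
                take = data[: self.limit - self.count]
                self.count += len(take)
                self.out.append(take)

            def result(self):
                # Close anything still open so the fragment stays well-formed.
                return "".join(self.out) + "".join(f"</{t}>" for t in reversed(self.stack)) + "..."

        t = HtmlTruncator(20)
        t.feed("<div>With a <span class='spanClass'>span over here</span> and a lot more text</div>")
        print(t.result())
        # <div>With a <span class="spanClass">span over her</span></div>...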

    Read the article

  • Ruby libraries for parsing .doc files?

    - by Platinum Azure
    Hi all, I was just wondering if anyone knew of any good libraries for parsing .doc files (and similar formats, like .odt) to extract text, while also keeping formatting information where possible for display on a website. The ability to do the same for PDFs would be a bonus, but I'm not looking for that as much. This is for a Rails project, if that helps at all. Thanks in advance!

    Read the article

  • Sending and Parsing JSON in Android

    - by primal
    Hi, in the application I am developing, I would like to send messages in the form of JSON objects to a Django server and parse the JSON response from the server to populate a custom ListView. From the little JSON knowledge I have, I thought of this format for the response from the server:

        {
            "post": {
                "username": "someusername",
                "message": "this is a sweet message",
                "image": "http://localhost/someimage.jpg",
                "time": "present time"
            }
        }

    How much knowledge of JSON should I have to accomplish this? Also, it would be great if someone could provide links to some tutorials for sending and parsing JSON objects.
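
    Very little JSON theory is needed: the proposed response is one object nested inside another, so each end of the exchange is a handful of library calls. A sketch of the round trip with that exact payload, written in Python only to keep it short (on Android the same steps map onto org.json.JSONObject, and the Django side can use json.loads/json.dumps):

        import json

        response_text = """
        { "post": { "username": "someusername",
                    "message": "this is a sweet message",
                    "image":   "http://localhost/someimage.jpg",
                    "time":    "present time" } }
        """

        post = json.loads(response_text)["post"]      # parse, then drill into the nested object
        print(post["username"], "-", post["message"])

        outgoing = json.dumps({"post": {"username": "someusername",
                                        "message": "hello from the client"}})
        # `outgoing` is the string that would go in the HTTP request body.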

    Read the article

  • Looking for a good text parsing library for C#

    - by Chris Stewart
    Has anyone run across a quality library that will parse, line by line, CSV, tab-delimited, and Excel files? I've started to do it manually but have noticed some of the intricacies in parsing a comma-delimited file, such as situations where a cell has a comma in it as part of the data (blah,"LastName, Jr.",blah,blah).
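
    The quoted-comma case is exactly why hand-rolled splitting goes wrong: a real CSV reader tracks quote state while scanning instead of splitting blindly. For comparison, a short illustration of the difference using Python's csv module (in C#, the TextFieldParser class in Microsoft.VisualBasic.FileIO handles the same quoting rules):

        import csv
        import io

        line = 'blah,"LastName, Jr.",blah,blah'

        print(line.split(","))
        # ['blah', '"LastName', ' Jr."', 'blah', 'blah']   <- naive split breaks the quoted cell

        print(next(csv.reader(io.StringIO(line))))
        # ['blah', 'LastName, Jr.', 'blah', 'blah']        <- quote-aware parsing keeps it intact

        # Tab-delimited files are the same reader with a different delimiter:
        # csv.reader(open("data.tsv"), delimiter="\t")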

    Read the article

  • XML Parsing need help iphone sdk

    - by neha
    Hi all, how do you get "MayurS123" from the following XML tag when parsing?

        <eletitle lnk="http://192.168.10.2/justmeans/trunk/newsfeed/mayurs">MayurS123 Sharma</eletitle>

    My file is getting parsed properly. Here I'm able to retrieve the lnk attribute by doing:

        if([elementName isEqualToString:@"eletitle"]) {
            aGoodwork.lnk = [attributeDict objectForKey:@"lnk"];
        }

    But I'm not getting how to retrieve the actual title text. Thanks in advance.

    Read the article

  • MSBuild 4.0 Regex parsing

    - by Chandam
    I have heard that MSBuild 4.0 has increased Regex parsing support. However, I am unable to find any detailed documentation/links/material on this. Can anyone give a brief description of the new features and/or possibly give pointers to more material? Thanks in advance.

    Read the article

  • Parsing plain text to some structured object

    - by Jeriho
    I am working on parsing plain text and converting it to key-value pairs. For example, the plain text:

        some_uninteresting_thing
        key1 valueA, valueB, valueC
        key2 valueD
        key3 valueE valueF
        key4 valueG(valueH, valueI)
        key5 some_uninteresting_thing valueJ
        some_uninteresting_thing
        key6
        some_uninteresting_thing

    (key6 shouldn't be mapped because it has no appropriate values.) As you can see, the plain text is lenient. What Java library can handle this? If no such library exists, any suggestions on an algorithm to do this?
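
    No mainstream library knows an ad-hoc format like this, so the usual approach is a small line scanner with one regular expression describing what a key looks like and a filter for the uninteresting tokens. The sketch below is in Python (the same two regexes translate directly to java.util.regex); the assumption that keys look like "key" followed by digits, and the handling of everything else, are mine rather than the question's.

        import re

        KEY = re.compile(r"^(key\d+)\s*(.*)$")      # assumed key shape: "key" + digits
        NOISE = re.compile(r"some_uninteresting_thing")

        def parse(text):
            mapping = {}
            for line in text.splitlines():          # line continuations are not handled here
                m = KEY.match(line.strip())
                if not m:
                    continue                        # uninteresting line
                key, rest = m.groups()
                rest = NOISE.sub("", rest)
                values = [v for v in re.split(r"[,\s()]+", rest) if v]
                if values:                          # key6 has no values -> not mapped
                    mapping[key] = values
            return mapping

        sample = """some_uninteresting_thing
        key1 valueA, valueB, valueC
        key4 valueG(valueH, valueI)
        key5 some_uninteresting_thing valueJ
        key6
        """
        print(parse(sample))
        # {'key1': ['valueA', 'valueB', 'valueC'],
        #  'key4': ['valueG', 'valueH', 'valueI'],
        #  'key5': ['valueJ']}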

    Read the article
