Search Results

Search found 54956 results on 2199 pages for 'parsing error'.


  • python getelementbyid from string

    - by matthewgall
    Hey, I have the following program that is trying to upload a file (or files) to an image upload site, but I am struggling to work out how to parse the returned response to grab the direct link (contained in a <download> element). I have the code below:

        #!/usr/bin/python
        # -*- coding: utf-8 -*-
        import pycurl
        import urllib
        import urlparse
        import xml.dom.minidom
        import StringIO
        import sys
        import gtk
        import os
        import imghdr
        import locale
        import gettext

        try:
            import pynotify
        except:
            print "Please install pynotify."

        APP = "Uploadir Uploader"
        DIR = "locale"
        locale.setlocale(locale.LC_ALL, '')
        gettext.bindtextdomain(APP, DIR)
        gettext.textdomain(APP)
        _ = gettext.gettext

        ##STRINGS
        uploading = _("Uploading image to Uploadir.")
        oneimage = _("1 image has been successfully uploaded.")
        multimages = _("images have been successfully uploaded.")
        uploadfailed = _("Unable to upload to Uploadir.")

        class Uploadir:
            def __init__(self, args):
                self.images = []
                self.urls = []
                self.broadcasts = []
                self.username = ""
                self.password = ""
                if len(args) == 1:
                    return
                else:
                    for file in args:
                        if file == args[0] or file == "":
                            continue
                        if file.startswith("-u"):
                            self.username = file.split("-u")[1]
                            #print self.username
                            continue
                        if file.startswith("-p"):
                            self.password = file.split("-p")[1]
                            #print self.password
                            continue
                        self.type = imghdr.what(file)
                        self.images.append(file)
                    for file in self.images:
                        self.upload(file)
                    self.setClipBoard()
                    self.broadcast(self.broadcasts)

            def broadcast(self, l):
                try:
                    str = '\n'.join(l)
                    n = pynotify.Notification(str)
                    n.set_urgency(pynotify.URGENCY_LOW)
                    n.show()
                except:
                    for line in l:
                        print line

            def upload(self, file):
                #Try to login
                cookie_file_name = "/tmp/uploadircookie"
                if (self.username != "" and self.password != ""):
                    print "Uploadir authentication in progress"
                    l = pycurl.Curl()
                    loginData = [("username", self.username), ("password", self.password), ("login", "Login")]
                    l.setopt(l.URL, "http://uploadir.com/user/login")
                    l.setopt(l.HTTPPOST, loginData)
                    l.setopt(l.USERAGENT, "User-Agent: Uploadir (Python Image Uploader)")
                    l.setopt(l.FOLLOWLOCATION, 1)
                    l.setopt(l.COOKIEFILE, cookie_file_name)
                    l.setopt(l.COOKIEJAR, cookie_file_name)
                    l.setopt(l.HEADER, 1)
                    loginDataReturnedBuffer = StringIO.StringIO()
                    l.setopt(l.WRITEFUNCTION, loginDataReturnedBuffer.write)
                    if l.perform():
                        self.broadcasts.append("Login failed. Please check connection.")
                        l.close()
                        return
                    loginDataReturned = loginDataReturnedBuffer.getvalue()
                    l.close()
                    #print loginDataReturned
                    if loginDataReturned.find("<li>Your supplied username or password is invalid.</li>") != -1:
                        self.broadcasts.append("Uploadir authentication failed. Username/password invalid.")
                        return
                    else:
                        self.broadcasts.append("Uploadir authentication successful.")
                    #cookie = loginDataReturned.split("Set-Cookie: ")[1]
                    #cookie = cookie.split(";",0)
                    #print cookie
                c = pycurl.Curl()
                values = [("file", (c.FORM_FILE, file))]
                buf = StringIO.StringIO()
                c.setopt(c.URL, "http://uploadir.com/file/upload")
                c.setopt(c.HTTPPOST, values)
                c.setopt(c.COOKIEFILE, cookie_file_name)
                c.setopt(c.COOKIEJAR, cookie_file_name)
                c.setopt(c.WRITEFUNCTION, buf.write)
                if c.perform():
                    self.broadcasts.append(uploadfailed + " " + file + ".")
                    c.close()
                    return
                self.result = buf.getvalue()
                #print self.result
                c.close()
                doc = urlparse.urlparse(self.result)
                self.urls.append(doc.getElementsByTagName("download")[0].childNodes[0].nodeValue)

            def setClipBoard(self):
                c = gtk.Clipboard()
                c.set_text('\n'.join(self.urls))
                c.store()
                if len(self.urls) == 1:
                    self.broadcasts.append(oneimage)
                elif len(self.urls) != 0:
                    self.broadcasts.append(str(len(self.urls)) + " " + multimages)

        if __name__ == '__main__':
            uploadir = Uploadir(sys.argv)

    Any help would be gratefully appreciated. Warm regards,

    Read the article

  • Why does ANTLR not parse the entire input?

    - by Martin Wiboe
    Hello, I am quite new to ANTLR, so this is likely a simple question. I have defined a simple grammar which is supposed to include arithmetic expressions with numbers and identifiers (strings that start with a letter and continue with one or more letters or numbers). The grammar looks as follows:

        grammar while;

        @lexer::header {
            package ConFreeG;
        }

        @header {
            package ConFreeG;
            import ConFreeG.IR.*;
        }

        @parser::members {
        }

        arith:
              term
            | '(' arith ( '-' | '+' | '*' ) arith ')'
            ;

        term returns [AExpr a]:
              NUM   { int n = Integer.parseInt($NUM.text); a = new Num(n); }
            | IDENT { a = new Var($IDENT.text); }
            ;

        fragment LOWER : ('a'..'z');
        fragment UPPER : ('A'..'Z');
        fragment NONNULL : ('1'..'9');
        fragment NUMBER : ('0' | NONNULL);

        IDENT : ( LOWER | UPPER ) ( LOWER | UPPER | NUMBER )*;
        NUM : '0' | NONNULL NUMBER*;

        fragment NEWLINE : '\r'? '\n';
        WHITESPACE : ( ' ' | '\t' | NEWLINE )+ { $channel=HIDDEN; };

    I am using ANTLR v3 with the ANTLR IDE Eclipse plugin. When I parse the expression (8 + a45) using the interpreter, only part of the parse tree is generated: http://imgur.com/iBaEC.png Why does the second term (a45) not get parsed? The same happens if both terms are numbers. Thank you, Martin Wiboe

    Read the article

  • How to parse a string into a nullable int in C# (.NET 3.5)

    - by Glenn Slaven
    I want to parse a string into a nullable int in C#, i.e. I want to get back either the int value of the string or null if it can't be parsed. I was kind of hoping that this would work:

        int? val = stringVal as int?;

    But that won't work, so the way I'm doing it now is that I've written this extension method:

        public static int? ParseNullableInt(this string value)
        {
            if (value == null || value.Trim() == string.Empty)
            {
                return null;
            }
            else
            {
                try
                {
                    return int.Parse(value);
                }
                catch
                {
                    return null;
                }
            }
        }

    Is there a better way of doing this? EDIT: Thanks for the TryParse suggestions, I did know about that, but it worked out about the same. I'm more interested in knowing whether there is a built-in framework method that will parse directly into a nullable int.

    Read the article

  • A database of questions with unambiguous numeric answers.

    - by dreeves
    My co-hackers and I are building a sort of trivia game inspired by this blog post: http://messymatters.com/calibration. The idea is to give confidence intervals and learn how to be calibrated (when you're "90% sure" you should be right 90% of the time). We're thus looking for, ideally, thousands of questions with unambiguous numerical answers. They also shouldn't be too boring. There are a lot of random statistics out there, e.g. enclosed water area in different countries, that would make the game mind-numbing. Things like release dates of classic movies are more interesting (to most people). Other interesting ones we've found include Olympic records, median incomes for different professions, dates of famous inventions, and celebrity ages. Scraping things like the above, by the way, was my reason for asking this question: http://stackoverflow.com/questions/2611418/scrape-html-tables So, if you know of other sources of interesting numerical facts (in a parsable form), I'm eager for pointers to them. Thanks!

    Read the article

  • pyparsing ambiguity

    - by Claudiu
    I'm trying to parse some text using pyparsing. The problem is that I have names that can contain whitespace. So my input might look like this:

        Joe
        Bob
        Jimmy Foo

        Joe decides to eat.
        Bob decides to not eat.
        Jimmy Foo decides to eat.

    How can I create a parser for the "decides to eat" line? If I create my name parser naively, meaning with alphabetic characters plus space characters, then it will match the entire line.
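
    One possible approach (just a sketch, assuming the roster of names is known before the action lines are parsed, as in the sample input above) is to build the name parser from the declared names, longest first, so a multi-word name is never cut short by a shorter prefix:

        from pyparsing import Literal, MatchFirst, Suppress, restOfLine

        known_names = ["Joe", "Bob", "Jimmy Foo"]  # collected from the header lines

        # Try longer names first so e.g. "Jimmy Foo" would not be masked by "Jimmy".
        name = MatchFirst([Literal(n) for n in sorted(known_names, key=len, reverse=True)])
        decides = name("who") + Suppress("decides to") + restOfLine("what")

        print(decides.parseString("Jimmy Foo decides to eat.").asDict())
        # prints something like {'who': 'Jimmy Foo', 'what': ' eat.'}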

    Read the article

  • Strange Scala error.

    - by Lukasz Lew
    I tried to create an abstract turn-based Game and an abstract AI:

        abstract class AGame {
          type Player
          type Move // Player inside

          def actPlayer : Player
          def moves (player : Player) : Iterator[Move]
          def play (move : Move)
          def undo ()
          def isFinished : Boolean
          def result (player : Player) : Double
        }

        abstract class Ai[Game <: AGame] {
          def genMove (player : Game#Player) : Game#Move
        }

        class DummyGame extends AGame {
          type Player = Unit
          type Move = Unit

          def moves (player : Player) = new Iterator[Move] {
            def hasNext = false
            def next = throw new Exception ("asd")
          }
          def actPlayer = ()
          def play (move : Move) { }
          def undo () { }
          def isFinished = true
          def result (player : Player) = 0
        }

        class DummyAi[Game <: AGame] (game : Game) extends Ai[Game] {
          override def genMove (player : Game#Player) : Game#Move = {
            game.moves (player).next
          }
        }

    I thought that I had to use these strange type accessors like Game#Player. I get a very puzzling error that I would like to understand:

        [error] /home/lew/Devel/CGSearch/src/main/scala/Main.scala:41: type mismatch;
        [error]  found   : Game#Player
        [error]  required: DummyAi.this.game.Player
        [error]         game.moves (player).next
        [error]                     ^

    Read the article

  • Using Python's ConfigParser to read a file without section name

    - by Arrieta
    Hello: I am using ConfigParser to read the runtime configuration of a script. I would like to have the flexibility of not providing a section name (some scripts are simple enough that they don't need a 'section'). ConfigParser will throw the NoSectionError exception and will not accept the file. How can I make ConfigParser simply retrieve the (key, value) tuples of a config file without section names? For instance:

        key1=val1
        key2:val2

    I would rather not write to the config file.
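
    A common workaround (sketched here for Python 2's ConfigParser; the helper and the "defaults" section name are both arbitrary) is to prepend a dummy section header in memory, so nothing has to be written back to the file on disk:

        import ConfigParser
        import StringIO

        def parse_sectionless(path, fake_section="defaults"):
            # Wrap the raw file contents in a dummy [defaults] section in memory.
            with open(path) as f:
                body = "[%s]\n%s" % (fake_section, f.read())
            parser = ConfigParser.ConfigParser()
            parser.readfp(StringIO.StringIO(body))
            return dict(parser.items(fake_section))

        # parse_sectionless("script.cfg")  ->  {'key1': 'val1', 'key2': 'val2'}
        # ("script.cfg" is a hypothetical file name)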

    Read the article

  • `strip`ing the results of a split in python

    - by Igor
    I'm trying to do something pretty simple:

        line = "name : bob"
        k, v = line.lower().split(':')
        k = k.strip()
        v = v.strip()

    Is there a way to combine this into one line somehow? I found myself writing this over and over again when making parsers, and sometimes it involves way more than just two variables. I know I can use a regexp, but this is simple enough that it shouldn't really require one.
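
    One way to collapse it into a single line (a sketch; the maxsplit argument also keeps a value containing ':' intact) is to map str.strip over the split:

        line = "name : bob"
        k, v = map(str.strip, line.lower().split(":", 1))
        # k == "name", v == "bob"

        # The same idea scales to more fields, e.g. three colon-separated values:
        # a, b, c = map(str.strip, row.split(":", 2))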

    Read the article

  • writing header in csv python with DictWriter

    - by user248237
    Assume I have a csv.DictReader object and I want to write it out as a CSV file. How can I do this? I thought of the following:

        dr = csv.DictReader(open(f), delimiter='\t')
        # process my dr object
        # ...
        # write out object
        output = csv.DictWriter(open(f2, 'w'), delimiter='\t')
        for item in dr:
            output.writerow(item)

    Is that the best way? More importantly, how can I make it so that a header is written out too, in this case the "dr" object's .fieldnames property? Thanks.
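
    A sketch of one way to do it (f and f2 as in the question). Note that csv.DictWriter requires a fieldnames argument, and DictWriter.writeheader() only exists from Python 2.7 onward, so on older versions the header row can be written by hand:

        import csv

        dr = csv.DictReader(open(f), delimiter='\t')
        output = csv.DictWriter(open(f2, 'w'), delimiter='\t', fieldnames=dr.fieldnames)

        # Write the header row: each field name maps to itself.
        output.writerow(dict((name, name) for name in dr.fieldnames))

        for item in dr:
            output.writerow(item)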

    Read the article

  • Can't get custom error rendering to work in symfony 1.4

    - by hongkildong
    I'm trying to customize error rendering in my form according to this example. Here is my code:

        if ($this['message']->hasError()) {
            $error_msg = '<ul>';
            foreach ($this['message']->getError() as $error)
                $error_msg .= "<li>$error</li>";
            $error_msg .= '</ul>';
        }
        return $error_msg;

    But when $this['message'] has an error, this code returns '<ul></ul>', so it seems foreach ($this['message']->getError() as $error) causes no iterations. $this['message']->getError() returns an sfValidatorError object; maybe something changed in symfony 1.4 and it isn't iterable anymore. At first I thought that all the magic in that example happened because the object placed in $error by the iteration implements __toString(), but it seems no iterations happen at all.

    Read the article

  • Perl - Read XML

    - by chinna_82
    XML:

        <?xml version='1.0'?>
        <employee>
            <name>Smith</name>
            <age>43</age>
            <sex>M</sex>
            <department role='manager'>Operations</department>
        </employee>

    Perl:

        use XML::Simple;
        use Data::Dumper;

        $xml = new XML::Simple;
        foreach my $data1 ($data = $xml->XMLin("test.xml")) {
            print Dumper($data1);
        }

    The above code manages to dump all the XML values like this:

        $VAR1 = {
            'department' => {
                'content' => 'Operations',
                'role' => 'manager'
            },
            'name' => 'John Doe',
            'sex' => 'M',
            'age' => '43'
        };

    What do I do if I only want to get the role value? For this example I need to get role = manager. Any advice or reference link is highly appreciated.

    Read the article

  • libxml2 on iPhone

    - by mellkord
    I'm trying to parse an HTML file with libxml2. Usually this works fine, but not in this case:

        <p>
        <b>Titles</b> (Some Text)
        <table>
          <tr>
            <td valign="top"> …Something1... </td>
            <td align="right" valign="top"> …Something2... </td>
          </tr>
        </table>
        </p>

    I run this query to get the first <td>:

        //p[b='Titles']/table/tr/td[0]

    but nothing is returned, because libxml thinks the <table> tag is not a child of the <p> tag but a sibling that follows it. And finally the question: WHY?

    Read the article

  • translating play in HTML to python

    - by aharon
    So, I'd like to represent one of Shakespeare's plays, Hamlet, with the following objects (maybe this isn't the best representation; if so, please tell me):

        class Play():
            acts = []
            ...
            def add_act(self, act):
                acts.append(act)

        class Act():
            scenes = []
            ...
            def add_scene(self, scene):
                scenes.append(scene)

        class Scene():
            elems = []
            def __init__(self, title, setting=""):
                ...
            def add_elem(self, elem):
                elems.append(elem)
            ...

        class StageDirection():  # elem
            def __init__(self, text):
                ...

        class Line():  # elem
            def __init__(self, id, text, character = None):
                ...
            # A None character represents a continuation from the previous line
            # id could be, for example, 1.1.1

    There are other methods, of course, for printing and such in each of the classes. The question is, how do I get a structure based on these classes (or something like them) from HTML 4 code that looks like this:

        <H3>ACT I</h3>
        <h3>SCENE I. Elsinore. A platform before the castle.</h3>
        <p><blockquote>
        <i>FRANCISCO at his post. Enter to him BERNARDO</i>
        </blockquote>
        <A NAME=speech1><b>BERNARDO</b></a>
        <blockquote>
        <A NAME=1.1.1>Who's there?</A><br>
        </blockquote>
        <A NAME=speech2><b>FRANCISCO</b></a>
        <blockquote>
        <A NAME=1.1.2>Nay, answer me: stand, and unfold yourself.</A><br>
        </blockquote>
        <A NAME=speech3><b>BERNARDO</b></a>
        <blockquote>
        <A NAME=1.1.3>Long live the king!</A><br>
        </blockquote>
        <A NAME=speech4><b>FRANCISCO</b></a>
        <blockquote>
        <A NAME=1.1.4>Bernardo?</A><br>
        </blockquote>
        <A NAME=speech5><b>BERNARDO</b></a>
        <blockquote>
        <A NAME=1.1.5>He.</A><br>
        </blockquote>
        <!-- for more, see the source of shakespeare.mit.edu/hamlet/full.html -->

    translating that into something like this:

        play = Play()
        actI = Act()
        sceneI = Scene("Scene I", "Elsinore. A platform before the castle.")
        sceneI.add_elem(StageDirection("Francisco at his post. Enter to him Bernardo."))
        sceneI.add_elem(Line("Bernardo", "Who's there?"))
        ...

    Of course, I don't expect all the code, but what libraries and, where there aren't libraries, what logic should I use? Thanks. (This is for a future open-source project and me learning Python for fun, not homework.)
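
    As a rough sketch of the kind of logic involved (BeautifulSoup 4 is used here purely for illustration; lxml or HTMLParser would work just as well, and the helper name is made up), the speech anchors carry the speaker and the numbered anchors carry the lines, so a single pass over the <a> tags can feed the Scene and Line classes above:

        from bs4 import BeautifulSoup

        def parse_lines(html, scene):
            soup = BeautifulSoup(html, "html.parser")
            speaker = None
            for a in soup.find_all("a"):
                name = a.get("name", "")
                if name.startswith("speech"):
                    # e.g. <a name="speech1"><b>BERNARDO</b></a>
                    speaker = a.get_text(strip=True)
                elif name and name[0].isdigit():
                    # e.g. <a name="1.1.1">Who's there?</a>
                    scene.add_elem(Line(name, a.get_text(strip=True), speaker))
                    speaker = None  # later lines of the same speech keep character=None
            return scene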

    Read the article

  • How do I get 3 lines of text from a paragraph in C#

    - by Keltex
    I'm trying to create a "snippet" from a paragraph. I have a long paragraph of text with a word highlighted in the middle. I want to get the line containing the word, plus the line before and the line after it. I have the following pieces of information:

    - The text (in a string)
    - The lines are delimited by a NEWLINE character \n
    - I have the index into the string of the text I want to highlight

    A couple of other criteria:

    - If my word falls on the first line of the paragraph, it should show the first 3 lines
    - If my word falls on the last line of the paragraph, it should show the last 3 lines
    - It should show the entire paragraph in the degenerate cases (the paragraph only has 1 or 2 lines)

    Here's an example:

        This is the 1st line of CAT text in the paragraph
        This is the 2nd line of BIRD text in the paragraph
        This is the 3rd line of MOUSE text in the paragraph
        This is the 4th line of DOG text in the paragraph
        This is the 5th line of RABBIT text in the paragraph

    For example, if my index points to BIRD, it should show lines 1, 2, and 3 as one complete string, like this:

        This is the 1st line of CAT text in the paragraph
        This is the 2nd line of BIRD text in the paragraph
        This is the 3rd line of MOUSE text in the paragraph

    If my index points to DOG, it should show lines 3, 4, and 5 as one complete string, like this:

        This is the 3rd line of MOUSE text in the paragraph
        This is the 4th line of DOG text in the paragraph
        This is the 5th line of RABBIT text in the paragraph

    etc. Anybody want to help tackle this?

    Read the article

  • Unable to Parse Date using NSDateFormatter

    - by Ansari
    Hi, I am fetching an RSS feed in which I receive the following date stamp:

        2010-05-10T06:11:14.000Z

    I am using NSDateFormatter to parse this datetime stamp:

        [parseFormatter setDateFormat:@"yyyy-MM-dTH:m:s.z"];

    But it's not working. If I just remove the time stamp part, it works for the date:

        [parseFormatter setDateFormat:@"yyyy-MM-d"];

    But if I add the rest of the stuff it returns nil. Any idea? Thanks in advance.

    Read the article

  • Lexing partial SQL in C#

    - by Chris T
    I need to parse partial SQL queries (it's for a SQL injection auditing tool). For example:

        '1' AND 1=1--

    should break down into tokens like:

        [0] => [SQL_STRING, '1']
        [1] => [SQL_AND]
        [2] => [SQL_INT, 1]
        [3] => [SQL_AND]
        [4] => [SQL_INT, 1]
        [5] => [SQL_COMMENT]
        [6] => [SQL_QUERY_END]

    Are there at least any lexers for SQL that I could base mine on, or any good tools like bison for C#? (Though I'd rather not write my own grammar, as I need to support most if not all of the grammar of MySQL 5.)

    Read the article

  • Changing href attributes with nokogiri and ruby on rails

    - by fool
    Hi, I have an HTML document with links, for example:

        <html>
        <body>
        <ul>
        <li><a href="http://someurl.com/etc/etc">teste1</a></li>
        <li><a href="http://someurl.com/etc/etc">teste2</a></li>
        <li><a href="http://someurl.com/etc/etc">teste3</a></li>
        <ul>
        </body>
        </html>

    With Ruby on Rails, using nokogiri or some other method, I want to end up with a final document like this:

        <html>
        <body>
        <ul>
        <li><a href="http://myproxy.com/?url=http://someurl.com/etc/etc">teste1</a></li>
        <li><a href="http://myproxy.com/?url=http://someurl.com/etc/etc">teste2</a></li>
        <li><a href="http://myproxy.com/?url=http://someurl.com/etc/etc">teste3</a></li>
        <ul>
        </body>
        </html>

    What's the best strategy to achieve this?

    Read the article

  • How can I parse a C header file with Perl?

    - by Alphaneo
    Hi, I have a header file that contains a large struct. I need to read this structure with some program, perform some operations on each member of the structure, and write them back. For example, I have a structure like:

        const BYTE Some_Idx[] = {
            4,7,10,15,17,19,24,29,
            31,32,35,45,49,51,52,54,
            55,58,60,64,65,66,67,69,
            70,72,76,77,81,82,83,85,
            88,93,94,95,97,99,102,103,
            105,106,113,115,122,124,125,126,
            129,131,137,139,140,149,151,152,
            153,155,158,159,160,163,165,169,
            174,175,181,182,183,189,190,193,
            197,201,204,206,208,210,211,212,
            213,214,215,217,218,219,220,223,
            225,228,230,234,236,237,240,241,
            242,247,249};

    Now I need to read this, apply some operation to each member, and create a new structure with a different order, something like:

        const BYTE Some_Idx_Mod_mul_2[] = {
            8,14,20,
            ...
            ...
            484,494,498};

    Is there any Perl library already available for this? If not Perl, something else like Python is also OK. Can somebody please help!!!
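
    Since Python is acceptable, here is a rough sketch (the helper name is made up, it assumes the array is a flat initializer like the one above, and the doubling stands in for whatever per-member operation is actually needed):

        import re

        def double_array(header_text, name="Some_Idx", new_name="Some_Idx_Mod_mul_2"):
            # Grab the brace-enclosed initializer of the named array.
            m = re.search(r"const\s+BYTE\s+%s\[\]\s*=\s*\{(.*?)\};" % re.escape(name),
                          header_text, re.S)
            if m is None:
                raise ValueError("array %s not found" % name)
            values = [int(tok) for tok in m.group(1).split(",") if tok.strip()]
            doubled = [2 * v for v in values]
            # Re-emit the values eight per line, roughly matching the original layout.
            rows = [doubled[i:i + 8] for i in range(0, len(doubled), 8)]
            body = ",\n    ".join(",".join(str(v) for v in row) for row in rows)
            return "const BYTE %s[] = {\n    %s};\n" % (new_name, body)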

    Read the article

  • ICalendar parser in PHP that supports timezones

    - by Vincent Robert
    I am looking for a PHP class that can parse an iCalendar (ICS) file and correctly handle timezones. I have already written an ICS parser myself, but it can only handle timezones known to PHP (like 'Europe/Paris'). Unfortunately, ICS files generated by Evolution (the default calendar software of Ubuntu) do not use standard timezone IDs. It exports events with its own specific timezone IDs and also exports the full definition of each timezone: daylight saving dates, recurrence rules, and all the other hard-to-understand parts of timezones. This is too much for me. Since this is only a small utility for my girlfriend, I won't have time to investigate the iCalendar specification further and create a full-blown iCalendar parser myself. So is there any known PHP implementation of the iCalendar file format that can parse timezone definitions?

    Read the article

  • How to read XML using XPath in Java

    - by kaibuki
    Hi guys, I want to read XML data using XPath in Java, but with the information I have gathered so far I am not able to parse the XML according to my requirements. Here is what I want to do:

    Get an XML file from online via its URL, then use XPath to parse it. I want to create two methods: in the first, I enter a specific node attribute id and get back all of its child nodes; in the second, I get only a specific child node's value.

        <?xml version="1.0"?>
        <howto>
            <topic name="Java">
                <url>http://www.rgagnonjavahowto.htm</url>
                <car>taxi</car>
            </topic>
            <topic ame="PowerBuilder">
                <url>http://www.rgagnon/pbhowto.htm</url>
                <url>http://www.rgagnon/pbhowtonew.htm</url>
            </topic>
            <topic name="Javascript">
                <url>http://www.rgagnon/jshowto.htm</url>
            </topic>
            <topic name="VBScript">
                <url>http://www.rgagnon/vbshowto.htm</url>
            </topic>
        </howto>

    In the above example I want to read all the elements when I search via @name, and I also want one function in which I just get the url for @name 'Javascript' only, returning one node element. I hope I have made my question clear :) Thanks. Kai

    Read the article

  • Best way to get back to using the power of lxml after having to use a regex to find something in an HTML document

    - by PyNEwbie
    I am trying to rip some text out of a large number of HTML documents (numbering in the hundreds of thousands). The documents are really forms, but they are prepared by a very large group of different organizations, so there is significant variation in how they create the documents. For example, the documents are divided into chapters. I might want to extract the contents of Chapter 5 from every document so I can analyze the content of that chapter.

    Initially I thought this would be easy, but it turns out that the authors might use a set of non-nested tables throughout the document to hold the content, so that Chapter n could be displayed using td tags inside a table. Or they might use other elements such as p tags, H tags, div tags, or any other block-level element.

    After trying repeatedly to use lxml to help me identify the beginning and end of each chapter, I have determined that it is a lot cleaner to use a regular expression, because in every case, no matter what the enclosing html element is, the chapter label is always in the form of

        >Chapter #

    It is a little more complicated in that there might be some whitespace or non-breaking space represented in different ways (&#160; or &nbsp; or just spaces). Nonetheless, it was trivial to write a regular expression to identify the beginning of each section. (The beginning of one section is the end of the previous section.)

    But now I want to use lxml to get the text out. My thought is that I have really no choice but to walk along my string to find the close tag for the element that encloses the text I am using to find the relevant section. That is, here is one example where the element holding the chapter name is a div:

        <div style="DISPLAY: block; MARGIN-LEFT: 0pt; TEXT-INDENT: 0pt; MARGIN-RIGHT: 0pt" align="left"><font style="DISPLAY: inline; FONT-WEIGHT: bold; FONT-SIZE: 10pt; FONT-FAMILY: Times New Roman">Chapter 1.&#160;&#160;&#160;Our Beginnings.</font></div>

    So I am imagining that I would begin at the location where I found the match for Chapter 1 and set up a regular expression to find the next </div, </td, </p, </h1, etc. At that point I have identified the type of element holding my chapter heading. I can use the same logic to find all of the text that is within that element: that is, set up a regular expression to help me mark from

        >Chapter 1.&#160;&#160;&#160;Our Beginnings.<

    So I have identified where my Chapter 1 begins, and I can do the same for Chapter 2 (which is where Chapter 1 ends). Now I am imagining that I will snip the document, beginning at the opening of the element that indicates where Chapter 1 begins and ending just before the opening of the element that indicates where Chapter 2 begins. The string that I have identified will then be fed to lxml to use its power to get the content.

    I am going to all of this trouble because I have read over and over: never use a regular expression to extract content from html documents. But I have not hit on a way to be as accurate with lxml at identifying the starting and ending locations for the text I want to extract. For example, I can never be certain that the subtitle of Chapter 1 is "Our Beginnings"; it could be "Our Red Canary". Let me say that I spent two solid days trying with lxml to be confident that I had the beginning and ending elements, and I could only be accurate <60% of the time, but a very short regular expression has given me better than 95% success.
    I have a tendency to make things more complicated than necessary, so I am wondering if anyone has seen or solved similar problems and whether they have an approach (not the details, mind you) they would like to offer.
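
    A sketch of the hybrid approach described above (the helper name is made up, the heading regex is simplified and would still need to allow the &#160;/&nbsp; variants, and backing up to the nearest '<' is a rough stand-in for finding the true enclosing element): locate each chapter heading with a regex, slice the raw HTML between consecutive headings, and hand each slice to lxml to pull out the text:

        import re
        import lxml.html

        chapter_re = re.compile(r">\s*Chapter\s+\d+", re.IGNORECASE)

        def chapter_texts(raw_html):
            starts = [m.start() for m in chapter_re.finditer(raw_html)]
            chunks = []
            for i, pos in enumerate(starts):
                end = starts[i + 1] if i + 1 < len(starts) else len(raw_html)
                # Back up to the '<' that opens the enclosing element (div, td, p, ...)
                # so lxml sees a reasonably well-formed fragment.
                begin = max(raw_html.rfind("<", 0, pos), 0)
                fragment = lxml.html.fromstring(raw_html[begin:end])
                chunks.append(fragment.text_content())
            return chunks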

    Read the article
