Search Results

Search found 15087 results on 604 pages for 'python multithreading'.

  • Python timezone issue?

    - by Timmy
    I'm having trouble parsing a feed and getting the time. I am using dateutil.parser:

        from dateutil.parser import parse
        print updated, parse(updated), parse(updated).utcoffset()

    This should be a time in California; the output is:

        2010-05-20T11:00:00.000-07:00 2010-05-20 11:00:00.000000-07:00 -1 day, 17:00:00

    Why is the offset "-1 day, 17:00:00"? This is causing me issues when I try to do things with it.
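
    The "-1 day, 17:00:00" is just how Python's timedelta prints a negative offset: minus seven hours is normalised to minus one day plus seventeen hours, so the value itself is still the expected -07:00. A minimal sketch, reusing the timestamp from the question:

        from datetime import timedelta
        from dateutil.parser import parse

        dt = parse("2010-05-20T11:00:00.000-07:00")
        offset = dt.utcoffset()                 # displays as "-1 day, 17:00:00"
        print offset == timedelta(hours=-7)     # True: same value, just a normalised repr
        print offset.days, offset.seconds       # -1 61200  (-24h + 17h = -7h)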

  • Python - output from functions?

    - by Seafoid
    Hi, I have a very rudimentary question. Assume I define a function, e.g.:

        def foo():
            x = 'hello world'

    How do I get the function to return x in such a way that I can use it as the input for another function, or use the variable within the body of a program? When I use return and then refer to the variable from another function, I get a NameError. Thanks, S :-)
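
    A minimal sketch of the usual pattern: return the value from the function and bind it to a name at the call site (shout is just an illustrative second function):

        def foo():
            x = 'hello world'
            return x                # hand the value back to the caller

        def shout(s):
            return s.upper()

        message = foo()             # bind the returned value to a name in this scope
        print shout(message)        # HELLO WORLD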

  • Python + Expat: Error on &#0; entities

    - by clacke
    I have written a small function which uses ElementTree and XPath to extract the text contents of certain elements in an XML file:

        #!/usr/bin/env python2.5
        import doctest
        from xml.etree import ElementTree
        from StringIO import StringIO

        def parse_xml_etree(sin, xpath):
            """
            Takes as input a stream containing XML and an XPath expression.
            Applies the XPath expression to the XML and returns a generator
            yielding the text contents of each element returned.

            >>> parse_xml_etree(
            ...     StringIO('<test><elem1>one</elem1><elem2>two</elem2></test>'),
            ...     '//elem1').next()
            'one'
            >>> parse_xml_etree(
            ...     StringIO('<test><elem1>one</elem1><elem2>two</elem2></test>'),
            ...     '//elem2').next()
            'two'
            >>> parse_xml_etree(
            ...     StringIO('<test><null>&#0;</null><elem3>three</elem3></test>'),
            ...     '//elem2').next()
            'three'
            """
            tree = ElementTree.parse(sin)
            for element in tree.findall(xpath):
                yield element.text

        if __name__ == '__main__':
            doctest.testmod(verbose=True)

    The third test fails with the following exception:

        ExpatError: reference to invalid character number: line 1, column 13

    Is the &#0; entity illegal XML? Regardless of whether it is or not, the files I want to parse contain it, and I need some way to parse them. Any suggestions for a parser other than Expat, or settings for Expat, that would allow me to do that?
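
    &#0; is indeed illegal: XML 1.0 forbids NUL and most other control characters even as numeric character references, so a conforming parser such as Expat has to reject it. One common workaround, sketched here under the assumption that the offending references can simply be dropped, is to sanitise the text before handing it to ElementTree:

        import re
        from StringIO import StringIO
        from xml.etree import ElementTree

        # Strips decimal references to control characters (0-8, 11, 12, 14-31) that
        # XML 1.0 forbids; hex forms like &#x0; would need a similar pattern.
        _BAD_CHARREF = re.compile(r'&#0*(?:[0-8]|1[124-9]|2[0-9]|3[01]);')

        def sanitize(xml_text):
            return _BAD_CHARREF.sub('', xml_text)

        raw = '<test><null>&#0;</null><elem3>three</elem3></test>'
        tree = ElementTree.parse(StringIO(sanitize(raw)))
        print tree.findtext('.//elem3')     # three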

  • optimize python code

    - by user283405
    I have code that uses the BeautifulSoup library for parsing, but it is very slow. The code is written in such a way that threads cannot be used. Can anyone help me with this? I am using BeautifulSoup for parsing and then saving to a DB; if I comment out the save statement it still takes a long time, so the database is not the problem.

        def parse(self, text):
            soup = BeautifulSoup(text)
            arr = soup.findAll('tbody')
            for i in range(0, len(arr) - 1):
                data = Data()
                soup2 = BeautifulSoup(str(arr[i]))
                arr2 = soup2.findAll('td')
                c = 0
                for j in arr2:
                    if str(j).find("<a href=") > 0:
                        data.sourceURL = self.getAttributeValue(str(j), '<a href="')
                    else:
                        if c == 2:
                            data.Hits = j.renderContents()
                        # and few others...
                        # ...
                    c = c + 1
                data.save()

    Any suggestions? Note: I already asked this question here, but it was closed due to incomplete information.
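
    A large part of the cost is likely the str(arr[i]) round trip: each tbody is re-serialised and re-parsed with a second BeautifulSoup instance, and each td is serialised again just to look for a link. The already-parsed tags can be searched directly. A rough sketch of that idea, keeping Data and the column layout from the question as given:

        def parse(self, text):
            soup = BeautifulSoup(text)
            for tbody in soup.findAll('tbody'):
                data = Data()                                   # model class from the question
                for c, cell in enumerate(tbody.findAll('td')):  # search the parsed tag directly
                    link = cell.find('a', href=True)
                    if link is not None:
                        data.sourceURL = link['href']           # no string round trip needed
                    elif c == 2:
                        data.Hits = cell.renderContents()
                    # ... other columns ...
                data.save()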

  • Python Beautiful Soup .content Property

    - by Robert Birch
    What does BeautifulSoup's .contents property do? I am working through crummy.com's tutorial and I don't really understand it. I have looked at the forums and I have not seen any answers. Looking at the code below:

        from BeautifulSoup import BeautifulSoup
        import re

        doc = ['<html><head><title>Page title</title></head>',
               '<body><p id="firstpara" align="center">This is paragraph <b>one</b>.',
               '<p id="secondpara" align="blah">This is paragraph <b>two</b>.',
               '</html>']
        soup = BeautifulSoup(''.join(doc))

        print soup.contents[0].contents[0].contents[0].contents[0].name

    I would expect the last line to print out 'body', but instead I get:

        File "pe_ratio.py", line 29, in <module>
          print soup.contents[0].contents[0].contents[0].contents[0].name
        File "C:\Python27\lib\BeautifulSoup.py", line 473, in __getattr__
          raise AttributeError, "'%s' object has no attribute '%s'" % (self.__class__.__name__, attr)
        AttributeError: 'NavigableString' object has no attribute 'name'

    Is .contents only concerned with html, head and title? If so, why is that? Thanks for the help in advance.
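
    .contents is simply the list of a tag's direct children, which can be Tag objects or NavigableString text nodes. Chaining [0] four times walks html -> head -> title -> the text 'Page title', and a plain string has no .name, hence the AttributeError. A sketch of the same traversal, one step at a time:

        from BeautifulSoup import BeautifulSoup

        doc = ['<html><head><title>Page title</title></head>',
               '<body><p id="firstpara" align="center">This is paragraph <b>one</b>.',
               '<p id="secondpara" align="blah">This is paragraph <b>two</b>.',
               '</html>']
        soup = BeautifulSoup(''.join(doc))

        html = soup.contents[0]       # the <html> tag
        head = html.contents[0]       # first child of <html>: the <head> tag
        title = head.contents[0]      # the <title> tag
        text = title.contents[0]      # NavigableString u'Page title' -- no .name attribute
        print html.name, head.name, title.name     # html head title
        print html.contents[1].name                # body: the second child of <html>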

  • Sorting and aligning the contents of a text file in Python

    - by Emily Price
    In my program I have a text file that I read from and write to. However, I would like to display the contents of the text file in an aligned and sorted manner. The contents currently read:

        name, score
        name, score

    This is my code where the text file is read and printed:

        elif userCommand == 'V':
            print "High Scores:"
            scoresFile = open("scores1.txt", 'r')
            scores = scoresFile.read().split("\n")
            for score in scores:
                print score
            scoresFile.close()

    Would I have to convert this information into lists in order to be able to do this? If so, how do I go about doing it? Thank you.
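
    One way, assuming each line really is a comma-separated name and numeric score, is to parse the lines into (name, score) pairs, sort on the score, and print with fixed-width formatting. A minimal sketch:

        scoresFile = open("scores1.txt", 'r')
        entries = []
        for line in scoresFile.read().split("\n"):
            if not line.strip():
                continue                                # skip blank lines
            name, score = line.split(",")
            entries.append((name.strip(), float(score)))
        scoresFile.close()

        entries.sort(key=lambda pair: pair[1], reverse=True)    # highest score first

        print "High Scores:"
        for name, score in entries:
            print "%-15s %8.1f" % (name, score)         # left-align names, right-align scores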

  • networking application and GUI in python

    - by pygabriel
    I'm writing an application that sends files over the network, and I want to develop a custom protocol so that I am not limited in terms of feature richness (HTTP wouldn't be appropriate; the nearest thing is maybe the BitTorrent protocol). I tried Twisted and built a good app, but there's a bug that makes my GUI block, so I have to switch to another framework or strategy. What do you suggest? Would using raw sockets with the GTK main loop (there are select-like functions in the toolkit) be too difficult? Is it viable to run two main loops in different threads? Asking for suggestions.
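
    One common pattern, sketched here with only the standard library, is to keep all blocking socket work on a worker thread and hand results to the GUI thread through a queue polled from the GUI main loop, so neither loop blocks the other (handle_chunk is a hypothetical GUI callback and the host/port are placeholders):

        import socket
        import threading
        import Queue

        incoming = Queue.Queue()

        def network_loop(host, port):
            """Worker thread: blocking socket I/O never touches the GUI."""
            sock = socket.create_connection((host, port))
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                incoming.put(chunk)                 # hand data over to the GUI thread

        def poll_queue():
            """Called periodically from the GUI main loop, e.g. via gobject.timeout_add."""
            while not incoming.empty():
                handle_chunk(incoming.get_nowait()) # hypothetical GUI update callback
            return True                             # keep the periodic timeout alive

        threading.Thread(target=network_loop, args=("localhost", 9000)).start()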

  • Python - List and Loop in one def

    - by Dunwitch
    I'm trying to combine wfsc_pod1 and wfsc_ip into a single method, and I'm not quite sure how to approach the problem. I want wfsc_pod1 to display all the information for name, subnet and gateway, and then wfsc_ip to show the IP addresses below it. I also get a None value when I run it as is, and I'm not sure why. Anything more Pythonic is appreciated.

        class OutageAddress:
            subnet = ["255.255.255.0", "255.255.255.1"]
            # Gateway order is matched with names
            gateway = ["192.168.1.1", "192.168.1.2", "192.168.1.3", "192.168.1.4",
                       "192.168.1.5", "192.168.1.6", "192.168.1.7", "192.168.1.8",
                       "192.168.1.9"]
            name = ["LOC1", "LOC2", "LOC3", "LOC4", "LOC5", "LOC6", "LOC7", "LOC8", "LOC9"]

            def wfsc_pod1(self):
                wfsc_1 = "%s\t %s\t %s\t" % (network.name[0], network.subnet[0], network.gateway[0])
                return wfsc_1

            def wfsc_ip(self):
                for ip in range(100, 110):
                    ip = "192.168.1." + str(ip)
                    print ip

        network = OutageAddress()
        print network.wfsc_pod1()
        print network.wfsc_ip()
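
    The stray None appears because wfsc_ip prints inside its loop and returns nothing, so print network.wfsc_ip() prints that missing return value. One way to merge the two methods, sketched with shortened data lists, is to build and return a single string and to use self rather than the module-level network:

        class OutageAddress:
            subnet = ["255.255.255.0", "255.255.255.1"]
            gateway = ["192.168.1.1", "192.168.1.2", "192.168.1.3"]
            name = ["LOC1", "LOC2", "LOC3"]

            def wfsc_pod1(self):
                lines = ["%s\t %s\t %s" % (self.name[0], self.subnet[0], self.gateway[0])]
                for ip in range(100, 110):          # append the IP addresses below the header
                    lines.append("192.168.1." + str(ip))
                return "\n".join(lines)             # return instead of print: no stray None

        network = OutageAddress()
        print network.wfsc_pod1()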

  • Extract anything that looks like links from large amount of data in python

    - by Riz
    Hi, I have around 5 GB of HTML data which I want to process to find links to a set of websites and perform some additional filtering. Right now I use a simple regexp for each site and iterate over them, searching for matches. In my case links can be outside of "a" tags and be badly formed in many ways (like a "\n" in the middle of a link), so I try to grab as many "links" as I can and check them later in other scripts (so no BeautifulSoup/lxml/etc.). The problem is that my script is pretty slow, so I am thinking about ways to speed it up. I am writing a set of tests to check different approaches, but I hope to get some advice :) Right now I am thinking about getting all links without filtering first (maybe using a C module or a standalone app which doesn't use regexps but a simple search for the start and end of every link) and then using regexps to match the ones I need.
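
    Rather than running one regexp per site across the whole 5 GB, a single compiled pattern can pull out every URL-ish substring in one streaming pass, and a cheap substring check then keeps only the sites of interest. A rough sketch of that idea, where the pattern is deliberately loose (matching the grab-now-clean-later approach) and the site list is a placeholder:

        import re

        SITES = ("example.com", "example.org")          # hypothetical target sites
        URL_RE = re.compile(r'https?://[^\s"\'<>]+')     # one loose pattern, one pass

        def candidate_links(path, chunk_size=1 << 20, overlap=2048):
            """Stream the file in chunks so 5 GB never has to sit in memory."""
            tail = ""
            with open(path) as f:
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    block = tail + chunk
                    for match in URL_RE.finditer(block):
                        url = match.group()
                        if any(site in url for site in SITES):
                            yield url               # may repeat near chunk borders; dedupe downstream
                    tail = block[-overlap:]         # overlap so links split across chunks still match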

  • Error with python decorator

    - by Timmy
    I get the error "object has no attribute 'im_func'" with this class:

        class Test(object):
            def __init__(self, f):
                self.func = f

            def __call__(self, *args):
                return self.func(*args)

    Pylons code:

        class TestController(BaseController):
            @Test
            def index(self):
                return 'hello world'
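
    The decorator replaces index with a Test instance, so the attribute on the controller class is no longer a function: there is no bound method and therefore no im_func for Pylons to find. One minimal fix, sketched below, is a plain function-based decorator, which keeps the attribute a real function (functools.wraps preserves the original name):

        import functools

        def test(f):
            @functools.wraps(f)                     # keeps __name__/__doc__ of the wrapped method
            def wrapper(self, *args, **kwargs):
                return f(self, *args, **kwargs)
            return wrapper                          # a real function, so im_func still exists

        class TestController(BaseController):       # BaseController as in the question's Pylons code
            @test
            def index(self):
                return 'hello world'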

  • Join a list of lists together into 1 list in Python

    - by dotty
    Hey all. I have a list which consists of many lists; here is an example:

        [[Obj, Obj, Obj, Obj], [Obj], [Obj], [[Obj, Obj], [Obj, Obj, Obj]]]

    Is there a way to join all these items together into one list, so the output would be something like:

        [Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj]

    Thanks
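
    Because the example nests lists more than one level deep, a single pass with itertools.chain only flattens the top layer; a small recursive helper handles arbitrary depth. A minimal sketch, using integers in place of the Obj placeholders:

        def flatten(items):
            """Recursively flatten nested lists into one flat list."""
            flat = []
            for item in items:
                if isinstance(item, list):
                    flat.extend(flatten(item))      # descend into sub-lists
                else:
                    flat.append(item)
            return flat

        nested = [[1, 2, 3, 4], [5], [6], [[7, 8], [9, 10, 11]]]
        print flatten(nested)       # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]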

  • Parsing line with delimiter in Python

    - by neversaint
    I have lines of data which I want to parse. The data looks like this:

        a score=216 expect=1.05e-06
        a score=180 expect=0.0394

    What I want is a subroutine that parses them and returns two values (score and expect) for each line. However, this function of mine doesn't seem to work:

        def scoreEvalFromMaf(mafLines):
            for word in mafLines[0]:
                if word.startswith("score="):
                    theScore = word.split('=')[1]
                    theEval = word.split('=')[2]
                    return [theScore, theEval]
            raise Exception("encountered an alignment without a score")

    Please advise on the right way to do it.
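
    Iterating over mafLines[0] walks the first line one character at a time, so nothing ever starts with "score=", and score and expect live in separate whitespace-separated tokens anyway. One fix, sketched below, is to split each line into fields and then split the relevant fields on "=":

        def score_eval_from_maf(maf_lines):
            for line in maf_lines:
                the_score = the_eval = None
                for field in line.split():              # whitespace-separated tokens
                    if field.startswith("score="):
                        the_score = field.split("=", 1)[1]
                    elif field.startswith("expect="):
                        the_eval = field.split("=", 1)[1]
                if the_score is None:
                    raise Exception("encountered an alignment without a score")
                yield the_score, the_eval

        lines = ["a score=216 expect=1.05e-06", "a score=180 expect=0.0394"]
        for score, expect in score_eval_from_maf(lines):
            print score, expect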

  • Python: replace urls with title names from a string

    - by Hellnar
    Hello, I would like to find the URLs in a string and replace them with the titles of the pages they point to. For example:

        mystring = "Ah I like this site: http://www.stackoverflow.com. Also I must say I like http://www.digg.com"
        sanitize(mystring)
        # it becomes "Ah I like this site: Stack Overflow. Also I must say I like Digg - The Latest News Headlines, Videos and Images"

    For turning a URL into its title, I have written this snippet:

        # get_title: string -> string
        def get_title(url):
            """Returns the title of the input URL"""
            output = BeautifulSoup.BeautifulSoup(urllib.urlopen(url))
            return output.title.string
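
    The missing piece is locating the URLs: re.sub accepts a function as the replacement, so each match can be passed through get_title. A minimal sketch built on the snippet above, where the URL pattern is a rough illustration and the trailing-punctuation handling is deliberately simple:

        import re
        import urllib
        import BeautifulSoup

        def get_title(url):
            """Returns the title of the input URL."""
            output = BeautifulSoup.BeautifulSoup(urllib.urlopen(url))
            return output.title.string

        URL_RE = re.compile(r'https?://[^\s,]+[^\s.,)]')   # stops before a trailing ". " or ","

        def sanitize(text):
            return URL_RE.sub(lambda m: get_title(m.group()), text)

        mystring = "Ah I like this site: http://www.stackoverflow.com. Also I must say I like http://www.digg.com"
        print sanitize(mystring)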

  • new to lists on python

    - by user1762229
    This is my current code:

        while True:
            try:
                mylist = [0] * 7
                for x in range(7):
                    sales = float(input("Sales for day:"))
                    mylist[x] = sales
                    if sales < 0:
                        print ("Sorry,invalid. Try again.")
            except:
                print ("Sorry, invalid. Try again.")
            else:
                break

        print (mylist)
        best = max(sales)
        worst = min(sales)
        print ("Your best day had", best, "in sales.")
        print ("Your worst day had", worst, "in sales.")

    When I run it I get this:

        Sales for day:-5
        Sorry,invalid. Try again.
        Sales for day:-6
        Sorry,invalid. Try again.
        Sales for day:-7
        Sorry,invalid. Try again.
        Sales for day:-8
        Sorry,invalid. Try again.
        Sales for day:-9
        Sorry,invalid. Try again.
        Sales for day:-2
        Sorry,invalid. Try again.
        Sales for day:-5
        Sorry,invalid. Try again.
        [-5.0, -6.0, -7.0, -8.0, -9.0, -2.0, -5.0]
        Traceback (most recent call last):
          File "C:/Users/Si Hong/Desktop/HuangSiHong_assign9_part.py", line 45, in <module>
            best = max(sales)
        TypeError: 'float' object is not iterable

    I am not sure how to write this so that the list does not accept negative values, because I only want values of 0 or greater. I am also not sure how to fix the TypeError so that the min and max values print as intended. Finally, if I want the average of the seven values the user enters, how do I go about pulling the values out of the list? Thank you so much.
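
    The TypeError comes from max(sales): sales holds only the single float from the last loop iteration, while the whole list is mylist; and negative entries are currently reported but still stored. A minimal sketch that validates before storing and computes best, worst and average from the list:

        mylist = []
        while len(mylist) < 7:
            try:
                sales = float(input("Sales for day:"))
            except ValueError:
                print("Sorry, invalid. Try again.")
                continue
            if sales < 0:
                print("Sorry, invalid. Try again.")     # reject negatives before storing
                continue
            mylist.append(sales)

        print(mylist)
        print("Your best day had", max(mylist), "in sales.")    # max/min over the whole list
        print("Your worst day had", min(mylist), "in sales.")
        print("Average sales:", sum(mylist) / len(mylist))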

  • in python how to remove this \n from string or list

    - by pritesh modi
    This is my main string:

        "action","employee_id","name"
        "absent","pritesh",2010/09/15 00:00:00

    After the name column it goes to a new line, but when I append it to a list the newline character comes along, so I end up with:

        data_list ['"action","employee_id","name"\n"absent","pritesh",2010/09/15 00:00:00\n']

    The newline character is attached right before "absent"; it really just marks the start of a new line, but it ends up inside the string. I want to make it like:

        data_list ['"action","employee_id","name","absent","pritesh",2010/09/15 00:00:00']
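
    Since the desired result joins the two lines with a comma, one approach is simply to split on the newlines and rejoin, or equivalently strip the trailing newline and replace the inner one with a comma. A minimal sketch:

        raw = '"action","employee_id","name"\n"absent","pritesh",2010/09/15 00:00:00\n'

        cleaned = ",".join(part for part in raw.split("\n") if part)
        # same effect as: raw.rstrip("\n").replace("\n", ",")

        data_list = [cleaned]
        print data_list
        # ['"action","employee_id","name","absent","pritesh",2010/09/15 00:00:00']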

  • Python: Pickling highly-recursive objects without using `setrecursionlimit`

    - by cool-RR
    I've been getting "RuntimeError: maximum recursion depth exceeded" when trying to pickle a highly-recursive tree object, much like this asker here. He solved his problem by setting the recursion limit higher with sys.setrecursionlimit, but I don't want to do that: I think that's more of a workaround than a solution, because I want to be able to pickle my trees even if they have 10,000 nodes in them (it currently fails at around 200), and every platform's true recursion limit is different, which is a can of worms I would really like to avoid opening. Is there any way to solve this at the fundamental level? If only the pickle module pickled using a loop instead of recursion, I wouldn't have this problem. Maybe someone has an idea of how I can make something like that happen without rewriting the pickle module? Any other idea for solving this problem would be appreciated.
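
    One way to sidestep pickle's recursion entirely is to flatten the tree yourself with an explicit stack, pickle a flat list of (index, parent, value) records, and rebuild the tree iteratively on load; the pickler then never follows node-to-node references. A minimal sketch, where the Node layout is an assumption rather than the asker's actual class:

        import pickle

        class Node(object):                 # assumed simple layout, not the asker's class
            def __init__(self, value):
                self.value = value
                self.children = []

        def dump_tree(root, fileobj):
            """Flatten the tree with an explicit stack, then pickle the flat list."""
            flat = []                       # records of (my_index, parent_index, value)
            stack = [(root, -1)]
            while stack:
                node, parent = stack.pop()
                index = len(flat)
                flat.append((index, parent, node.value))
                for child in reversed(node.children):   # reversed so sibling order survives the stack
                    stack.append((child, index))
            pickle.dump(flat, fileobj, pickle.HIGHEST_PROTOCOL)

        def load_tree(fileobj):
            """Rebuild the tree iteratively; parents always precede their children in the list."""
            nodes = {}
            root = None
            for index, parent, value in pickle.load(fileobj):
                node = Node(value)
                nodes[index] = node
                if parent == -1:
                    root = node
                else:
                    nodes[parent].children.append(node)
            return root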

  • python unit testing os.remove fails file system

    - by hwjp
    I'm doing a bit of unit testing on a function which attempts to open a new file, but should fail if the file already exists. When the function runs successfully the new file is created, so I want to delete it after every test run, but that doesn't seem to be working:

        class MyObject_Initialisation(unittest.TestCase):

            def setUp(self):
                if os.path.exists(TEMPORARY_FILE_NAME):
                    try:
                        os.remove(TEMPORARY_FILE_NAME)
                    except WindowsError:
                        # TODO: can't figure out how to fix this...
                        # time.sleep(3)
                        # self.setUp()   # this just loops forever
                        pass

            def tearDown(self):
                self.setUp()

    Any thoughts? The WindowsError thrown seems to suggest the file is in use... could it be that the tests are run in parallel threads? I've read elsewhere that it's 'bad practice' to use the filesystem in unit testing, but really? Surely there's a way around this that doesn't involve dummying the filesystem?
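
    On Windows, os.remove raises WindowsError for as long as any handle to the file remains open, so the usual culprit is the code under test (or the test itself) leaving the file object open; unittest runs tests sequentially, not in parallel threads. A sketch of a test that closes what it opens and cleans up in tearDown, where create_new_file and the expected exception type are assumptions based on the question:

        import os
        import unittest

        TEMPORARY_FILE_NAME = "temp_test_file.txt"      # assumed value of the question's constant

        class MyObjectInitialisation(unittest.TestCase):

            def test_creates_file(self):
                handle = create_new_file(TEMPORARY_FILE_NAME)   # hypothetical function under test
                handle.close()                                  # close before tearDown tries to remove
                self.assertTrue(os.path.exists(TEMPORARY_FILE_NAME))

            def test_fails_if_file_exists(self):
                open(TEMPORARY_FILE_NAME, 'w').close()
                self.assertRaises(IOError, create_new_file, TEMPORARY_FILE_NAME)   # assumed exception

            def tearDown(self):
                if os.path.exists(TEMPORARY_FILE_NAME):
                    os.remove(TEMPORARY_FILE_NAME)      # safe once every handle is closed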

  • Make python display help screen if no action is given

    - by luckytaxi
    Let's say a user runs the script without giving any parameters. How can I make it default to ./myscript.py -h so that it shows them the help info?

        parser = optparse.OptionParser()
        parser.add_option("-d", "--directory", metavar="DIR",
                          help="Directory to scan for big files")
        parser.add_option("-e", "--email", metavar='EMAIL',
                          help='email to send the list to')
        parser.add_option("-l", "--limit", metavar='LIMIT',
                          help='return number of files')
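
    One simple approach is to check sys.argv after building the parser and fall back to parser.print_help() when the user supplied nothing at all. A minimal sketch continuing the parser from the question:

        import sys
        import optparse

        parser = optparse.OptionParser()
        parser.add_option("-d", "--directory", metavar="DIR",
                          help="Directory to scan for big files")
        parser.add_option("-e", "--email", metavar="EMAIL",
                          help="email to send the list to")
        parser.add_option("-l", "--limit", metavar="LIMIT",
                          help="return number of files")

        options, args = parser.parse_args()

        if len(sys.argv) == 1:          # no options or arguments were given at all
            parser.print_help()
            sys.exit(1)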
