Search Results

Search found 20092 results on 804 pages for 'python import'.


  • In Python BeautifulSoup How to move tags

    - by JJ
    I have a partially converted XML document in soup, coming from HTML. After some replacement and editing in the soup, the body is essentially:

        <Text...></Text>   # This replaces the <a href..> tags but automatically creates the </Text>
        <p class=norm ...</p>
        <p class=norm ...</p>
        <Text...></Text>
        <p class=norm ...</p>

    and so forth. I need to "move" the <p> tags so that they become children of the preceding <Text>, or know how to suppress the closing </Text>. I want:

        <Text...>
            <p class=norm ...</p>
            <p class=norm ...</p>
        </Text>
        <Text...>
            <p class=norm ...</p>
        </Text>

    I've tried using item.insert and item.append, but I'm thinking there must be a more elegant solution. (See the sketch after this entry.)

        for item in soup.findAll(['p', 'span']):
            if item.name == 'span' and item.has_key('class') and item['class'] == 'section':
                xBCV = short_2_long(item._getAttrMap().get('value', ''))
                if currentnode:
                    pass
                currentnode = Tag(soup, 'Text', attrs=[('TypeOf', 'Section'), ... ])
                item.replaceWith(currentnode)  # works but creates an end tag
            elif item.name == 'p' and item.has_key('class') and item['class'] == 'norm':
                childcdatanode = None
                for ahref in item.findAll('a'):
                    if childcdatanode:
                        pass
                    newlink = filter_hrefs(str(ahref))
                    childcdatanode = Tag(soup, newlink)
                    ahref.replaceWith(childcdatanode)

    Thanks

    Read the article
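    A minimal sketch of the "move" itself, written against the current bs4 API rather than the asker's BeautifulSoup 3, with made-up literal HTML standing in for the converted document: extract() detaches a tag from the tree and append() re-parents it, which is exactly the "move the <p> under the preceding <Text>" step.

        from bs4 import BeautifulSoup

        html = ("<Text></Text><p class='norm'>one</p><p class='norm'>two</p>"
                "<Text></Text><p class='norm'>three</p>")
        soup = BeautifulSoup(html, "html.parser")

        current = None
        for tag in soup.find_all(True, recursive=False):   # snapshot of the top-level tags
            if tag.name == "text":                         # html.parser lower-cases tag names
                current = tag
            elif tag.name == "p" and current is not None:
                current.append(tag.extract())              # detach the <p>, re-parent it under <text>

        print(soup)
        # roughly: <text><p class="norm">one</p><p class="norm">two</p></text><text><p class="norm">three</p></text>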

  • python- scipy optimization

    - by pear
    In scipy's fmin_slsqp (Sequential Least Squares Programming), I tried reading the 'slsqp.py' code provided with the scipy package to find out what the criteria are for getting exit mode 0, but I cannot find which statements in the code produce this exit mode. Please help me. The 'slsqp.py' code is as follows (see also the sketch after this entry):

        exit_modes = { -1 : "Gradient evaluation required (g & a)",
                        0 : "Optimization terminated successfully.",
                        1 : "Function evaluation required (f & c)",
                        2 : "More equality constraints than independent variables",
                        3 : "More than 3*n iterations in LSQ subproblem",
                        4 : "Inequality constraints incompatible",
                        5 : "Singular matrix E in LSQ subproblem",
                        6 : "Singular matrix C in LSQ subproblem",
                        7 : "Rank-deficient equality constraint subproblem HFTI",
                        8 : "Positive directional derivative for linesearch",
                        9 : "Iteration limit exceeded" }

        def fmin_slsqp(func, x0, eqcons=[], f_eqcons=None, ieqcons=[], f_ieqcons=None,
                       bounds=[], fprime=None, fprime_eqcons=None, fprime_ieqcons=None,
                       args=(), iter=100, acc=1.0E-6, iprint=1, full_output=0,
                       epsilon=_epsilon):

            # Now do a lot of function wrapping

            # Wrap func
            feval, func = wrap_function(func, args)
            # Wrap fprime, if provided, or approx_fprime if not
            if fprime:
                geval, fprime = wrap_function(fprime, args)
            else:
                geval, fprime = wrap_function(approx_fprime, (func, epsilon))

            if f_eqcons:
                # Equality constraints provided via f_eqcons
                ceval, f_eqcons = wrap_function(f_eqcons, args)
                if fprime_eqcons:
                    # Wrap fprime_eqcons
                    geval, fprime_eqcons = wrap_function(fprime_eqcons, args)
                else:
                    # Wrap approx_jacobian
                    geval, fprime_eqcons = wrap_function(approx_jacobian,
                                                         (f_eqcons, epsilon))
            else:
                # Equality constraints provided via eqcons[]
                eqcons_prime = []
                for i in range(len(eqcons)):
                    eqcons_prime.append(None)
                    if eqcons[i]:
                        # Wrap eqcons and eqcons_prime
                        ceval, eqcons[i] = wrap_function(eqcons[i], args)
                        geval, eqcons_prime[i] = wrap_function(approx_fprime,
                                                               (eqcons[i], epsilon))

            if f_ieqcons:
                # Inequality constraints provided via f_ieqcons
                ceval, f_ieqcons = wrap_function(f_ieqcons, args)
                if fprime_ieqcons:
                    # Wrap fprime_ieqcons
                    geval, fprime_ieqcons = wrap_function(fprime_ieqcons, args)
                else:
                    # Wrap approx_jacobian
                    geval, fprime_ieqcons = wrap_function(approx_jacobian,
                                                          (f_ieqcons, epsilon))
            else:
                # Inequality constraints provided via ieqcons[]
                ieqcons_prime = []
                for i in range(len(ieqcons)):
                    ieqcons_prime.append(None)
                    if ieqcons[i]:
                        # Wrap ieqcons and ieqcons_prime
                        ceval, ieqcons[i] = wrap_function(ieqcons[i], args)
                        geval, ieqcons_prime[i] = wrap_function(approx_fprime,
                                                                (ieqcons[i], epsilon))

            # Transform x0 into an array.
            x = asfarray(x0).flatten()

            # Set the parameters that SLSQP will need
            # meq = The number of equality constraints
            if f_eqcons:
                meq = len(f_eqcons(x))
            else:
                meq = len(eqcons)
            if f_ieqcons:
                mieq = len(f_ieqcons(x))
            else:
                mieq = len(ieqcons)
            # m = The total number of constraints
            m = meq + mieq
            # la = The number of constraints, or 1 if there are no constraints
            la = array([1, m]).max()
            # n = The number of independent variables
            n = len(x)

            # Define the workspaces for SLSQP
            n1 = n + 1
            mineq = m - meq + n1 + n1
            len_w = (3*n1+m)*(n1+1) + (n1-meq+1)*(mineq+2) + 2*mineq + (n1+mineq)*(n1-meq) \
                    + 2*meq + n1 + (n+1)*n/2 + 2*m + 3*n + 3*n1 + 1
            len_jw = mineq
            w = zeros(len_w)
            jw = zeros(len_jw)

            # Decompose bounds into xl and xu
            if len(bounds) == 0:
                bounds = [(-1.0E12, 1.0E12) for i in range(n)]
            elif len(bounds) != n:
                raise IndexError, \
                    'SLSQP Error: If bounds is specified, len(bounds) == len(x0)'
            else:
                for i in range(len(bounds)):
                    if bounds[i][0] > bounds[i][1]:
                        raise ValueError, \
                            'SLSQP Error: lb > ub in bounds[' + str(i) + '] ' + str(bounds[4])
            xl = array([b[0] for b in bounds])
            xu = array([b[1] for b in bounds])

            # Initialize the iteration counter and the mode value
            mode = array(0, int)
            acc = array(acc, float)
            majiter = array(iter, int)
            majiter_prev = 0

            # Print the header if iprint >= 2
            if iprint >= 2:
                print "%5s %5s %16s %16s" % ("NIT", "FC", "OBJFUN", "GNORM")

            while 1:
                if mode == 0 or mode == 1:
                    # objective and constraint evaluation required
                    # Compute objective function
                    fx = func(x)
                    # Compute the constraints
                    if f_eqcons:
                        c_eq = f_eqcons(x)
                    else:
                        c_eq = array([eqcons[i](x) for i in range(meq)])
                    if f_ieqcons:
                        c_ieq = f_ieqcons(x)
                    else:
                        c_ieq = array([ieqcons[i](x) for i in range(len(ieqcons))])
                    # Now combine c_eq and c_ieq into a single matrix
                    if m == 0:
                        # no constraints
                        c = zeros([la])
                    else:
                        # constraints exist
                        if meq > 0 and mieq == 0:
                            # only equality constraints
                            c = c_eq
                        if meq == 0 and mieq > 0:
                            # only inequality constraints
                            c = c_ieq
                        if meq > 0 and mieq > 0:
                            # both equality and inequality constraints exist
                            c = append(c_eq, c_ieq)
                if mode == 0 or mode == -1:
                    # gradient evaluation required
                    # Compute the derivatives of the objective function
                    # For some reason SLSQP wants g dimensioned to n+1
                    g = append(fprime(x), 0.0)
                    # Compute the normals of the constraints
                    if fprime_eqcons:
                        a_eq = fprime_eqcons(x)
                    else:
                        a_eq = zeros([meq, n])
                        for i in range(meq):
                            a_eq[i] = eqcons_prime[i](x)
                    if fprime_ieqcons:
                        a_ieq = fprime_ieqcons(x)
                    else:
                        a_ieq = zeros([mieq, n])
                        for i in range(mieq):
                            a_ieq[i] = ieqcons_prime[i](x)
                    # Now combine a_eq and a_ieq into a single a matrix
                    if m == 0:
                        # no constraints
                        a = zeros([la, n])
                    elif meq > 0 and mieq == 0:
                        # only equality constraints
                        a = a_eq
                    elif meq == 0 and mieq > 0:
                        # only inequality constraints
                        a = a_ieq
                    elif meq > 0 and mieq > 0:
                        # both equality and inequality constraints exist
                        a = vstack((a_eq, a_ieq))
                    a = concatenate((a, zeros([la, 1])), 1)
                # Call SLSQP
                slsqp(m, meq, x, xl, xu, fx, c, g, a, acc, majiter, mode, w, jw)
                # Print the status of the current iterate if iprint > 2 and the
                # major iteration has incremented
                if iprint >= 2 and majiter > majiter_prev:
                    print "%5i %5i % 16.6E % 16.6E" % (majiter, feval[0],
                                                       fx, linalg.norm(g))
                # If exit mode is not -1 or 1, slsqp has completed
                if abs(mode) != 1:
                    break
                majiter_prev = int(majiter)

            # Optimization loop complete. Print status if requested
            if iprint >= 1:
                print exit_modes[int(mode)] + " (Exit mode " + str(mode) + ')'
                print " Current function value:", fx
                print " Iterations:", majiter
                print " Function evaluations:", feval[0]
                print " Gradient evaluations:", geval[0]
            if not full_output:
                return x
            else:
                return [list(x), float(fx), int(majiter), int(mode),
                        exit_modes[int(mode)]]

    Read the article
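    A short hedged note on the question itself: in this wrapper nothing in the Python layer ever assigns 0 to mode. mode is an array that the compiled slsqp routine (the wrapped Fortran code) updates in place on every call; the Python loop simply exits once abs(mode) != 1 and then looks the value up in exit_modes. From user code the exit mode can be read by asking for full output, assuming a reasonably recent scipy:

        from scipy.optimize import fmin_slsqp

        # Minimize (x0 - 1)^2 + (x1 - 2)^2 subject to x0 + x1 = 2
        out, fx, its, imode, smode = fmin_slsqp(
            lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2,
            [0.0, 0.0],
            eqcons=[lambda x: x[0] + x[1] - 2.0],
            full_output=1,
        )
        print(imode, smode)   # expected: 0 Optimization terminated successfully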

  • Cleaning an XML file in Python before parsing

    - by Sam
    I'm using minidom to parse an XML file and it threw an error indicating that the data is not well formed. I figured out that some of the pages have characters like ไอเฟล and bare &, causing the parser to hiccup. Is there an easy way to clean the file before I start parsing it? Right now I'm using a regular expression to throw away anything that isn't an alphanumeric character or the </> characters, but it isn't quite working. (See the sketch after this entry.)

    Read the article
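    A hedged sketch of one common cleanup, assuming the file is UTF-8 (the Thai text is then legal XML content, and the usual culprit is a bare & that should be &amp;); the regex and file name below are illustrative:

        import re
        from xml.dom import minidom

        def escape_bare_ampersands(text):
            # Escape any & that does not already start an entity such as &amp; or &#3652;
            return re.sub(r'&(?!(?:#\d+|#x[0-9a-fA-F]+|\w+);)', '&amp;', text)

        with open('page.xml', 'rb') as f:
            raw = f.read().decode('utf-8', errors='replace')   # keep non-ASCII text intact

        doc = minidom.parseString(escape_bare_ampersands(raw).encode('utf-8'))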

  • Double linking array in Python

    - by cdecker
    Since I'm pretty new, this question will certainly sound stupid, but I have no idea how to approach it. I'm trying to take a list of nodes and, for each node, build an array of its predecessor and successor in the ordered array of all nodes. Currently my code looks like this:

        nodes = self.peers.keys()
        nodes.sort()
        peers = {}
        numPeers = len(nodes)
        for i in nodes:
            peers[i] = [self.coordinator]
        for i in range(0, len(nodes)):
            peers[nodes[i % numPeers]].append(nodes[(i + 1) % numPeers])
            peers[nodes[(i + 1) % numPeers]].append(nodes[i % numPeers])
            # peers[nodes[i % numPeers]].append(nodes[(i + 4) % numPeers])
            # peers[nodes[(i + 4) % numPeers]].append(nodes[i % numPeers])

    The last two (commented) lines should later be used to create a skip graph, but that's not really important. The problem is that it doesn't work reliably: sometimes a predecessor or a successor is skipped and the next one is used instead, and so forth. Is this correct at all, or is there a better way to do this? Basically I need to get the array indices with certain offsets from each other. Any ideas? (See the sketch after this entry.)

    Read the article
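    A hedged sketch of the index arithmetic on its own, outside the asker's class (the node values are made up): for position i in the sorted list, the predecessor is index (i - 1) % n and the successor is (i + 1) % n, so each node's neighbour list can be built in a single pass instead of appending from two directions.

        nodes = sorted([17, 3, 42, 8, 25])
        n = len(nodes)

        # node -> [predecessor, successor] in the circular ordering
        neighbours = {
            node: [nodes[(i - 1) % n], nodes[(i + 1) % n]]
            for i, node in enumerate(nodes)
        }

        for node in nodes:
            print(node, neighbours[node])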

  • python gio waiting for async operations to be done

    - by pygabriel
    I have to mount a WebDAV location and wait for the operation to finish before proceeding (it's a script). So I'm using the library this way:

        location = gio.File("dav://server.bb")
        location.mount_enclosing_volume(*args, **kw)  # the setup is not very relevant
        location.get_path()  # returns None, because the call is async and the volume is not yet mounted

    How do I wait until the volume is mounted? (See the sketch after this entry.)

    Read the article
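    A hedged sketch of the usual pattern with the old static gio/glib bindings the question appears to use (the exact mount arguments may differ from the asker's setup): run a main loop and quit it from the mount callback, where mount_enclosing_volume_finish() reports success or raises on failure.

        import gio
        import glib

        loop = glib.MainLoop()
        location = gio.File("dav://server.bb")

        def mounted_cb(gfile, result):
            try:
                gfile.mount_enclosing_volume_finish(result)   # raises a gio error on failure
            finally:
                loop.quit()

        location.mount_enclosing_volume(gio.MountOperation(), mounted_cb)
        loop.run()                     # blocks until mounted_cb calls loop.quit()
        path = location.get_path()     # now the mounted location's local path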

  • Generate a table of contents from HTML with Python

    - by Oli
    I'm trying to generate a table of contents from a block of HTML (not a complete file, just content) based on its <h2> and <h3> tags. My plan so far was to:

        1. Extract a list of headers using beautifulsoup.
        2. Use a regex on the content to place anchor links before/inside the header tags
           (so the user can click on the table of contents) -- there might be a method
           for replacing inside beautifulsoup?
        3. Output a nested list of links to the headers in a predefined spot.

    It sounds easy when I say it like that, but it's proving to be a bit of a pain in the rear. Is there something out there that does all this for me in one go, so I don't waste the next couple of hours reinventing the wheel? (See the sketch after this entry.) An example:

        <p>This is an introduction</p>
        <h2>This is a sub-header</h2>
        <p>...</p>
        <h3>This is a sub-sub-header</h3>
        <p>...</p>
        <h2>This is a sub-header</h2>
        <p>...</p>

    Read the article
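    A hedged sketch of the whole pipeline in bs4 alone, with no regex: giving each header an id and re-serialising the soup covers the anchor step. The id scheme and list markup below are assumptions, and the TOC is kept flat here; nesting h3 under h2 is left out for brevity.

        from bs4 import BeautifulSoup

        html = """<p>This is an introduction</p>
        <h2>This is a sub-header</h2><p>...</p>
        <h3>This is a sub-sub-header</h3><p>...</p>
        <h2>This is a sub-header</h2><p>...</p>"""

        soup = BeautifulSoup(html, "html.parser")
        toc_items = []
        for i, header in enumerate(soup.find_all(["h2", "h3"])):
            anchor = "toc-%d" % i
            header["id"] = anchor            # the header itself becomes the link target
            indent = "  " if header.name == "h3" else ""
            toc_items.append('%s<li><a href="#%s">%s</a></li>' % (indent, anchor, header.get_text()))

        toc = "<ul>\n%s\n</ul>" % "\n".join(toc_items)
        print(toc)     # the table of contents, to drop into the predefined spot
        print(soup)    # the original content, now with id attributes on h2/h3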

  • Problem building a complete binary tree of height 'h' in Python

    - by Jack
    Here is my code. The complete binary tree has 2^k nodes at depth k.

        class Node:
            def __init__(self, data):
                # initializes the data members
                self.left = None
                self.right = None
                self.data = data

        root = Node(data_root)

        def create_complete_tree():
            row = [root]
            for i in range(h):
                newrow = []
                for node in row:
                    left = Node(data1)
                    right = Node(data2)
                    node.left = left
                    node.right = right
                    newrow.append(left)
                    newrow.append(right)
                row = copy.deepcopy(newrow)

        def traverse_tree(node):
            if node == None:
                return
            else:
                traverse_tree(node.left)
                print node.data
                traverse_tree(node.right)

        create_complete_tree()
        print 'Node traversal'
        traverse_tree(root)

    The tree traversal only gives the data of root and its children. What am I doing wrong? (See the sketch after this entry.)

    Read the article
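    A hedged guess at the culprit, shown in a minimal self-contained version (the child data values are placeholders): copy.deepcopy(newrow) replaces the freshly linked children with copies, so every level after the first is attached to clones rather than to the tree hanging off root. Keeping the original references fixes the traversal.

        class Node(object):
            def __init__(self, data):
                self.left = None
                self.right = None
                self.data = data

        def create_complete_tree(root, h):
            row = [root]
            for depth in range(h):
                newrow = []
                for node in row:
                    node.left = Node(node.data * 2)       # placeholder child data
                    node.right = Node(node.data * 2 + 1)
                    newrow.extend([node.left, node.right])
                row = newrow                              # keep the real nodes; no deepcopy

        def traverse_tree(node):
            if node is None:
                return
            traverse_tree(node.left)
            print(node.data)
            traverse_tree(node.right)

        root = Node(1)
        create_complete_tree(root, 3)
        traverse_tree(root)   # prints all 2**4 - 1 = 15 node values in order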

  • Python: query a class's parent-class after multiple derivations ("super()" does not work)

    - by henry
    Hi, I have built a class system that uses multiple derivations of a base class (object -> class1 -> class2 -> class3):

        class class1(object):
            def __init__(self):
                print "class1.__init__()"
                object.__init__(self)

        class class2(class1):
            def __init__(self):
                print "class2.__init__()"
                class1.__init__(self)

        class class3(class2):
            def __init__(self):
                print "class3.__init__()"
                class2.__init__(self)

        x = class3()

    It works as expected and prints:

        class3.__init__()
        class2.__init__()
        class1.__init__()

    Now I would like to replace the three lines

        object.__init__(self)
        ...
        class1.__init__(self)
        ...
        class2.__init__(self)

    with something like this:

        currentParentClass().__init__()

    So basically, I want to create a class system where I don't have to type "classXYZ.doSomething()". As mentioned above, I want to get the current class's parent class. Replacing the three lines with

        super(type(self), self).__init__()

    does NOT work (it always returns the parent class of the current instance, class2) and results in an endless loop printing:

        class3.__init__()
        class2.__init__()
        class2.__init__()
        class2.__init__()
        class2.__init__()
        ...

    So is there a function that can give me the current class's parent class? Thank you for your help! Henry

    Edit: @Lennart, ok, maybe I got you wrong, but at the moment I think I didn't describe the problem clearly enough, so this example might explain it better. Let's create another child class:

        class class4(class3):
            pass

    Now what happens if we derive an instance from class4?

        y = class4()

    I think it clearly executes

        super(class3, self).__init__()

    which we can translate to

        class2.__init__(y)

    This is definitely not the goal (that would be class3.__init__(y)). With lots of parent-class function calls to make, I do not want to re-implement all of my functions with different base-class names in my super() calls. (See the sketch after this entry.)

    Read the article
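    A hedged sketch of the standard resolution: super() wants the class whose method is currently executing (written out explicitly in Python 2, or the zero-argument super() in Python 3), not type(self). With that, a class4 subclass runs the whole chain without re-implementing anything; the renamed classes below are just for illustration.

        class Class1(object):
            def __init__(self):
                print("Class1.__init__()")
                super(Class1, self).__init__()   # Python 3: simply super().__init__()

        class Class2(Class1):
            def __init__(self):
                print("Class2.__init__()")
                super(Class2, self).__init__()

        class Class3(Class2):
            def __init__(self):
                print("Class3.__init__()")
                super(Class3, self).__init__()

        class Class4(Class3):
            pass

        y = Class4()
        # Class3.__init__()
        # Class2.__init__()
        # Class1.__init__()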

  • hierarchical clustering on correlations in Python scipy/numpy?

    - by user248237
    How can I run hierarchical clustering on a correlation matrix in scipy/numpy? I have a matrix of 100 rows by 9 columns, and I'd like to cluster hierarchically by the correlations of each entry across the 9 conditions, using 1 - Pearson correlation as the distance for clustering. Assuming I have a numpy array X that contains the 100 x 9 matrix, how can I do this? I tried using hcluster, based on this example:

        Y = pdist(X, 'seuclidean')
        Z = linkage(Y, 'single')
        dendrogram(Z, color_threshold=0)

    However, 'seuclidean' is not what I want, since that's a Euclidean distance. Any ideas? (See the sketch after this entry.) Thanks.

    Read the article
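    A hedged sketch using the same pieces: pdist's 'correlation' metric is exactly 1 - Pearson correlation between rows, so only the metric string needs to change (the random X below stands in for the real 100 x 9 matrix, and the linkage method is whatever you prefer).

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.cluster.hierarchy import linkage, dendrogram

        X = np.random.rand(100, 9)            # stand-in for the 100 x 9 data

        Y = pdist(X, metric='correlation')    # pairwise 1 - Pearson correlation between rows
        Z = linkage(Y, method='single')
        dendrogram(Z, color_threshold=0)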

  • Python: deleting rows in a text file

    - by Jenny
    A sample of the text file I have:

        1    -4.6    -4.6    -7.6
        2    -1.7    -3.8    -3.1
        3    -1.6    -1.6    -3.1

    The data is separated by tabs, and the first column indicates the position. Column 0 has the name "position", column 1 "fifteen", column 2 "sixteen" and column 3 "seventeen". I need to iterate through every value in the text file apart from column 0 and find the lowest value. Once the lowest value has been found, it needs to be written to a new text file along with its column name and position. For example, the lowest value in the above data is -7.6 and is in column 3, which has the name "seventeen"; therefore -7.6, "seventeen" and its position value, which in this case is 1, need to be written to the new text file. I then need a number of rows deleted from the above text file: the column in which the lowest value is found denotes the number of rows to delete (here "seventeen", so seventeen rows), and the position it is found at (here 1) is the starting point of the deletion, inclusive. (See the sketch after this entry.)

    Read the article
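    A hedged sketch of both steps under the stated rules (the file names, and the reading that "seventeen" means seventeen rows, are taken from the question and should be treated as assumptions):

        COLUMN_NAMES = {1: "fifteen", 2: "sixteen", 3: "seventeen"}
        ROWS_TO_DELETE = {"fifteen": 15, "sixteen": 16, "seventeen": 17}

        with open("data.txt") as f:
            rows = [line.rstrip("\n").split("\t") for line in f if line.strip()]

        # Find the lowest value over columns 1-3, remembering its column name and position (column 0)
        lowest = None
        for row in rows:
            for col in (1, 2, 3):
                value = float(row[col])
                if lowest is None or value < lowest[0]:
                    lowest = (value, COLUMN_NAMES[col], row[0])

        value, name, position = lowest
        with open("lowest.txt", "w") as out:
            out.write("%s\t%s\t%s\n" % (value, name, position))

        # Delete ROWS_TO_DELETE[name] rows, starting from and including the row at that position
        start = next(i for i, row in enumerate(rows) if row[0] == position)
        remaining = rows[:start] + rows[start + ROWS_TO_DELETE[name]:]
        with open("data_trimmed.txt", "w") as out:
            out.writelines("\t".join(row) + "\n" for row in remaining)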

  • Python - Nested List to Tab Delimited File?

    - by Seafoid
    Hi, I have a nested list comprising ~30,000 sub-lists, each with three entries, e.g. nested_list = [['x', 'y', 'z'], ['a', 'b', 'c']]. I wish to write a function that outputs this data structure in a tab-delimited format, e.g.

        x    y    z
        a    b    c

    Any help greatly appreciated! (See the sketch after this entry.) Thanks in advance, Seafoid.

    Read the article
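    A hedged sketch using the csv module, which writes all 30,000 rows in one call (the output path is an assumption):

        import csv

        nested_list = [['x', 'y', 'z'], ['a', 'b', 'c']]

        def write_tab_delimited(rows, path):
            # newline='' stops the csv module from adding extra blank lines on Windows
            with open(path, 'w', newline='') as f:
                csv.writer(f, delimiter='\t').writerows(rows)

        write_tab_delimited(nested_list, 'out.tsv')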

  • Python: split a list based on a condition?

    - by Parand
    What's the best way, both aesthetically and from a performance perspective, to split a list of items into multiple lists based on a conditional? The equivalent of:

        good = [x for x in mylist if x in goodvals]
        bad = [x for x in mylist if x not in goodvals]

    Is there a more elegant way to do this? (See the sketch after this entry.) Update: here's the actual use case, to better explain what I'm trying to do:

        # files looks like: [ ('file1.jpg', 33L, '.jpg'), ('file2.avi', 999L, '.avi'), ... ]
        IMAGE_TYPES = ('.jpg', '.jpeg', '.gif', '.bmp', '.png')
        images = [f for f in files if f[2].lower() in IMAGE_TYPES]
        anims = [f for f in files if f[2].lower() not in IMAGE_TYPES]

    Read the article
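    A hedged sketch of the usual single-pass alternative, which tests each element once instead of twice; making the membership collection a set keeps the test fast.

        def partition(items, predicate):
            good, bad = [], []
            for x in items:
                (good if predicate(x) else bad).append(x)
            return good, bad

        IMAGE_TYPES = {'.jpg', '.jpeg', '.gif', '.bmp', '.png'}
        files = [('file1.jpg', 33, '.jpg'), ('file2.avi', 999, '.avi')]
        images, anims = partition(files, lambda f: f[2].lower() in IMAGE_TYPES)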

  • Python halts while iteratively processing my 1GB csv file

    - by Dan
    I have two files:

        metadata.csv: contains an ID, followed by vendor name, a filename, etc.
        hashes.csv: contains an ID, followed by a hash

    The ID is essentially a foreign key of sorts, relating file metadata to its hash. I wrote this script to quickly extract all hashes associated with a particular vendor. It craps out before it finishes processing hashes.csv:

        stored_ids = []

        # this file is about 1 MB
        entries = csv.reader(open(options.entries, "rb"))

        for row in entries:
            # row[2] is the vendor
            if row[2] == options.vendor:
                # row[0] is the ID
                stored_ids.append(row[0])

        # this file is 1 GB
        hashes = open(options.hashes, "rb")

        # I iteratively read the file here,
        # just in case the csv module doesn't do this.
        for line in hashes:
            # not sure if stored_ids contains strings or ints here...
            # this probably isn't the problem though
            if line.split(",")[0] in stored_ids:
                # if it's one of the IDs we're looking for, print the file and hash to STDOUT
                print "%s,%s" % (line.split(",")[2], line.split(",")[4])

        hashes.close()

    This script gets about 2000 entries through hashes.csv before it halts. What am I doing wrong? I thought I was processing it line by line. P.S. the csv files are the popular HashKeeper format, and the files I am parsing are the NSRL hash sets: http://www.nsrl.nist.gov/Downloads.htm#converter

    UPDATE: working solution below. Thanks to everyone who commented!

        entries = csv.reader(open(options.entries, "rb"))
        stored_ids = dict((row[0], 1) for row in entries if row[2] == options.vendor)

        hashes = csv.reader(open(options.hashes, "rb"))
        matches = dict((row[2], row[4]) for row in hashes if row[0] in stored_ids)

        for k, v in matches.iteritems():
            print "%s,%s" % (k, v)

    Read the article

  • Python and Gstreamer

    - by Seif Sallam
    Hi, I'm creating a streaming application using GStreamer with a TCP pipeline, and I implemented start, pause, and stop. The problem is that I can't seek: I tried to change the playback position from the server side, then from the client side, and finally tried to change it on both at the same time, but in all cases it doesn't work. I even tried to pause the playback and then continue, but nothing happens. I'm having this problem with both seeking and volume. Any help please; I searched everywhere but I couldn't find anything that worked. This is the code I use for seeking:

        self.pipeline.seek_simple(gst.FORMAT_TIME, gst.SEEK_FLAG_FLUSH, time)

    Read the article

  • Python Copy Through Assignment?

    - by Marcus Whybrow
    I would expect that the following code would just initialise the dict_a, dict_b and dict_c dictionaries, but it seems to have a copy-through effect:

        dict_a = dict_b = dict_c = {}
        dict_c['hello'] = 'goodbye'
        print dict_a
        print dict_b
        print dict_c

    As you can see, the result is as follows:

        {'hello': 'goodbye'}
        {'hello': 'goodbye'}
        {'hello': 'goodbye'}

    Why does the program give the previous result, when I would expect it to return the following? (See the sketch after this entry.)

        {}
        {}
        {'hello': 'goodbye'}

    Read the article
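    A hedged sketch of what is going on: assignment in Python binds names to objects rather than copying, so the chained form creates one dict with three names; creating three separate dicts gives the expected output.

        # Chained assignment binds all three names to the *same* dict object ...
        dict_a = dict_b = dict_c = {}
        print(dict_a is dict_b is dict_c)    # True: one object, three names

        # ... so create three separate dictionaries instead
        dict_a, dict_b, dict_c = {}, {}, {}
        dict_c['hello'] = 'goodbye'
        print(dict_a, dict_b, dict_c)        # {} {} {'hello': 'goodbye'}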

  • Python + PostgreSQL + strange ascii = UTF8 encoding error

    - by Claudiu
    I have ASCII strings which contain the character "\x80" to represent the euro symbol:

        >>> print "\x80"
        €

    When inserting string data containing this character into my database, I get:

        psycopg2.DataError: invalid byte sequence for encoding "UTF8": 0x80
        HINT: This error can also happen if the byte sequence does not match the encoding
        expected by the server, which is controlled by "client_encoding".

    I'm a unicode newbie. How can I convert my strings containing "\x80" to valid UTF-8 containing that same euro symbol? I've tried calling .encode and .decode on various strings, but run into errors (see the sketch after this entry):

        >>> "\x80".encode("utf-8")
        Traceback (most recent call last):
          File "<pyshell#14>", line 1, in <module>
            "\x80".encode("utf-8")
        UnicodeDecodeError: 'ascii' codec can't decode byte 0x80 in position 0: ordinal not in range(128)

    Read the article
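    A hedged sketch: byte 0x80 is the euro sign in Windows-1252, not in ASCII, so the byte string has to be decoded from that codec first and then encoded to UTF-8. Calling .encode on the raw byte string makes Python 2 attempt an implicit ASCII decode, which is exactly the error shown.

        # Python 2, matching the question's byte strings
        raw = "\x80"                    # a Windows-1252 encoded byte
        text = raw.decode("cp1252")     # u'\u20ac', the euro sign as unicode
        utf8 = text.encode("utf-8")     # '\xe2\x82\xac', valid UTF-8 for the database
        print repr(text), repr(utf8)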

  • Python: date, time formatting

    - by TarGz
    I need to generate a local timestamp in the form YYYYMMDDHHmmSSOHH'mm'. Here O is one of +, -, or Z, followed by the hours and minutes of the UTC offset, each terminated by an apostrophe. How do I get such a timestamp, denoting both the local time zone and possible daylight saving? (See the sketch after this entry.)

    Read the article
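    A hedged sketch (this is the PDF-style date format; the helper name is made up): take the wall-clock part from strftime and derive the offset from time.timezone/time.altzone, which already accounts for daylight saving via tm_isdst. The Z case (UTC) is left out for brevity.

        import time

        def local_timestamp():
            now = time.localtime()
            # Offset of local time from UTC in seconds; altzone applies when DST is in effect
            offset = -(time.altzone if now.tm_isdst else time.timezone)
            sign = '+' if offset >= 0 else '-'
            hours, minutes = divmod(abs(offset) // 60, 60)
            return time.strftime('%Y%m%d%H%M%S', now) + "%s%02d'%02d'" % (sign, hours, minutes)

        print(local_timestamp())   # e.g. 20240315174502+01'00'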

  • Python - Blackjack

    - by user335932
        def showCards():
            # SUM
            sum = playerCards[0] + playerCards[1]
            # Print cards
            print "Player's Hand: " + str(playerCards) + " : " + "sum
            print "Dealer's Hand: " + str(compCards[0]) + " : " + "sum"

        compCards = [Deal(), Deal()]
        playerCards = [Deal(), Deal()]

    How can I add up the integer elements of a list containing two values? Under # SUM the error says I can't combine lists like ints. (See the sketch after this entry.)

    Read the article
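    A hedged sketch of a fix, assuming Deal() returns a number per card (if it returns a list per card, sum the nested values instead): use sum() for the addition, str() for the printout, and keep the quotes balanced.

        def show_cards(player_cards, comp_cards):
            player_total = sum(player_cards)      # adds up the integer elements of the list
            print "Player's Hand: " + str(player_cards) + " : " + str(player_total)
            print "Dealer's Hand: [" + str(comp_cards[0]) + ", ?] : " + str(comp_cards[0])

        player_cards = [10, 7]   # stand-ins for [Deal(), Deal()]
        comp_cards = [9, 5]
        show_cards(player_cards, comp_cards)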

  • python decorator to modify variable in current scope

    - by AlexH
    Goal: make a decorator which can modify the scope that it is used in. If it worked:

        class Blah():  # or perhaps class Blah(ParentClassWhichMakesThisPossible)
            def one(self):
                pass

            @decorated
            def two(self):
                pass

        >>> Blah.decorated
        ["two"]

    Why? I essentially want to write classes which can maintain specific dictionaries of methods, so that I can retrieve lists of available methods of different types on a per-class basis. (See the sketch after this entry.) Errr..... I want to do this:

        class RuleClass(ParentClass):
            @rule
            def blah(self):
                pass

            @rule
            def kapow(self):
                pass

            def shazam(self):
                pass

        class OtherRuleClass(ParentClass):
            @rule
            def foo(self):
                pass

            def bar(self):
                pass

        >>> RuleClass.rules.keys()
        ["blah", "kapow"]
        >>> OtherRuleClass.rules.keys()
        ["foo"]

    Read the article
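    A hedged sketch of one common way to get this effect without touching the enclosing scope at all: the decorator only tags the function, and a metaclass playing the ParentClass role gathers the tagged methods into a per-class rules dict when each class body is executed (names below are illustrative, Python 3 syntax).

        def rule(func):
            func._is_rule = True          # just mark the function; the class machinery does the rest
            return func

        class RuleMeta(type):
            def __new__(mcls, name, bases, namespace):
                cls = super().__new__(mcls, name, bases, namespace)
                cls.rules = {k: v for k, v in namespace.items() if getattr(v, '_is_rule', False)}
                return cls

        class ParentClass(metaclass=RuleMeta):
            pass

        class RuleClass(ParentClass):
            @rule
            def blah(self):
                pass

            @rule
            def kapow(self):
                pass

            def shazam(self):
                pass

        class OtherRuleClass(ParentClass):
            @rule
            def foo(self):
                pass

        print(sorted(RuleClass.rules))       # ['blah', 'kapow']
        print(sorted(OtherRuleClass.rules))  # ['foo']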

  • Fast iterating over first n items of an iterable (not a list) in python

    - by martinthenext
    Hello! I'm looking for a pythonic way of iterating over the first n items of an iterable (update: not a list in the common case; for lists things are trivial), and it's quite important to do this as fast as possible. This is how I do it now:

        count = 0
        for item in iterable:
            do_something(item)
            count += 1
            if count >= n:
                break

    Doesn't seem neat to me. Another way of doing this is:

        for item in itertools.islice(iterable, n):
            do_something(item)

    This looks good; the question is whether it's fast enough to use with some generator(s). For example:

        pair_generator = lambda iterable: itertools.izip(*[iter(iterable)]*2)
        for item in itertools.islice(pair_generator(iterable), n):
            do_something(item)

    Will it run fast enough compared to the first method? Is there some easier way to do it?

    Read the article

  • Python nested function scopes

    - by Thomas O
    I have code like this:

        def write_postcodes(self):
            """Write postcodes database.

            Write data to file pointer. Data is ordered. Initially index pages are
            written, grouping postcodes by the first three characters, allowing for
            faster searching."""
            status("POSTCODE", "Preparing to sort...", 0, 1)
            # This function returns the key of x whilst updating the displayed
            # status of the sort.
            ctr = 0
            def keyfunc(x):
                ctr += 1
                status("POSTCODE", "Sorting postcodes", ctr, len(self.postcodes))
                return x
            sort_res = self.postcodes[:]
            sort_res.sort(key=keyfunc)

    But ctr responds with a NameError:

        Traceback (most recent call last):
          File "PostcodeWriter.py", line 53, in <module>
            w.write_postcodes()
          File "PostcodeWriter.py", line 47, in write_postcodes
            sort_res.sort(key=keyfunc)
          File "PostcodeWriter.py", line 43, in keyfunc
            ctr += 1
        UnboundLocalError: local variable 'ctr' referenced before assignment

    How can I fix this? I thought nested scopes would have allowed me to do this. I've tried 'global', but it still doesn't work. (See the sketch after this entry.)

    Read the article
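    A hedged sketch of the usual workarounds: a Python 2 closure can read an outer local but cannot rebind it (which is what ctr += 1 attempts), so either keep the count in a mutable object such as itertools.count or a one-element list, or use nonlocal on Python 3. The helpers below are standalone, with a report callback standing in for status().

        import itertools

        def sort_with_progress(items, report):
            """Sort items, calling report(i, total) once per key computation."""
            counter = itertools.count(1)      # or: ctr = [0] and ctr[0] += 1 inside keyfunc
            def keyfunc(x):
                report(next(counter), len(items))
                return x
            return sorted(items, key=keyfunc)

        # Python 3 alternative, rebinding the outer name directly:
        def sort_with_progress_py3(items, report):
            ctr = 0
            def keyfunc(x):
                nonlocal ctr
                ctr += 1
                report(ctr, len(items))
                return x
            return sorted(items, key=keyfunc)

        print(sort_with_progress(['b', 'a', 'c'], lambda i, n: print('%d/%d' % (i, n))))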
