Search Results

Search found 15449 results on 618 pages for 'python signal'.


  • Does dict.update affect a function's argspec?

    - by sbox32
        import inspect

        class Test:
            def test(self, p, d={}):
                d.update(p)
                return d

        print inspect.getargspec(getattr(Test, 'test'))[3]
        print Test().test({'1': True})
        print inspect.getargspec(getattr(Test, 'test'))[3]

    I would expect the argspec for Test.test not to change, but because of dict.update it does. Why?

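    The argspec changes because the default dict is created once, at function definition time, and getargspec reports that very object, which d.update(p) then mutates in place. A minimal sketch of the usual workaround (Python 2, as in the question), using None as the sentinel:

        import inspect

        class Test:
            def test(self, p, d=None):
                if d is None:      # a fresh dict on every call
                    d = {}
                d.update(p)
                return d

        print inspect.getargspec(getattr(Test, 'test'))[3]   # (None,)
        print Test().test({'1': True})                       # {'1': True}
        print inspect.getargspec(getattr(Test, 'test'))[3]   # still (None,)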

  • sip.conf configuration file - add new line to each record

    - by Flukey
    I have a sip configuration file which looks like this:

        [1664]
        username=1664
        mailbox=1664@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no

        [1679]
        username=1679
        mailbox=1679@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no

        [1700]
        username=1700
        mailbox=1700@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no

        [1701]
        username=1701
        mailbox=1701@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no

    For each record I need to add another line (a vmexten entry for each record), so the above becomes:

        [1664]
        username=1664
        mailbox=1664@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no
        vmexten=1664

        [1679]
        username=1679
        mailbox=1679@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no
        vmexten=1679

        [1700]
        username=1700
        mailbox=1700@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no
        vmexten=1700

        [1701]
        username=1701
        mailbox=1701@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no
        vmexten=1701

    What would you say is the quickest way to do this? There are hundreds of records in the file, so modifying all of them by hand would take a long time. Would you use regex? Would you use sed? I'm interested to know how you would approach the problem. Thanks

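    A minimal Python sketch of one approach, assuming subscribemwi=no is the last line of every record (any fixed per-record trigger line would do, and a sed or awk one-liner can follow the same logic):

        import re

        with open('sip.conf') as f:
            lines = f.read().splitlines()

        out = []
        ext = None
        for line in lines:
            m = re.match(r'\[(\d+)\]$', line)
            if m:
                ext = m.group(1)          # remember the current extension
            out.append(line)
            if line == 'subscribemwi=no' and ext is not None:
                out.append('vmexten=' + ext)

        with open('sip.conf.new', 'w') as f:
            f.write('\n'.join(out) + '\n')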

  • Parsing a Multi-Index Excel File in Pandas

    - by rhaskett
    I have a time-series Excel file with a tri-level column MultiIndex that I would like to parse successfully, if possible. There are some results on Stack Overflow on how to do this for an index, but not for the columns, and the parse function has a header argument that does not seem to take a list of rows. The ExcelFile looks like the following:

        Column A: the time-series dates, starting at A4
        Column B: top_level1 (B1), mid_level1 (B2), low_level1 (B3), data (B4-B100+)
        Column C: null (C1), null (C2), low_level2 (C3), data (C4-C100+)
        Column D: null (D1), mid_level2 (D2), low_level1 (D3), data (D4-D100+)
        Column E: null (E1), null (E2), low_level2 (E3), data (E4-E100+)
        ...

    So there are two low_level values, many mid_level values, and a few top_level values, but the trick is that the top- and mid-level values are null and are assumed to be the values to their left. For instance, all the columns above would have top_level1 as the top MultiIndex value. My best idea so far is to use transpose, but then it fills in Unnamed: # everywhere and doesn't seem to work. In pandas 0.13, read_csv seems to have a header parameter that can take a list, but this doesn't seem to work with parse.

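    A sketch of the now-standard approach, assuming a pandas recent enough that read_excel accepts a list of header rows (the filename is hypothetical); the three header rows become a column MultiIndex, column A becomes the index, and blank upper-level header cells are filled forward:

        import pandas as pd

        df = pd.read_excel('timeseries.xlsx', header=[0, 1, 2], index_col=0)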

  • How does git fetch the commits associated with a file?

    - by liadan
    I'm writing a simple parser of .git/* files. I have covered almost everything: objects, refs, pack files, etc. But I have a problem. Let's say I have a big 300M repository (in a pack file) and I want to find all the commits which changed /some/deep/inside/file. What I'm doing now is:

        1. fetch the last commit
        2. find the file in it by fetching the parent tree, finding the tree inside,
           and repeating recursively until I get to the file; additionally, I check
           the hashes of each subfolder on the way to the file, and if one of them
           is the same as in the commit before, I assume the file was not changed
           (because its parent dir didn't change)
        3. store the hash of the file and fetch the parent commit
        4. find the file again and check whether its hash changed
        5. if it did, then the original commit (i.e. the one before the parent)
           changed the file

    I repeat this over and over until I reach the very first commit. This solution works, but it sucks. In the worst-case scenario, the first search can take as long as 3 minutes (for a 300M pack). Is there any way to speed it up? I have tried to avoid loading such large objects into memory, but right now I don't see any other way. And even then, the initial load will take forever :( Greets and thanks for any help!

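    For what it's worth, this is essentially what git log -- <path> does: walk the commit graph once, keeping the commits whose tree entry for the path differs from the parent's. A sketch with hypothetical helpers (load_commit and resolve_path stand in for the asker's own object parser):

        def file_history(head_hash, path):
            """Walk the first-parent chain once, keeping commits where the
            blob at `path` differs from the parent's version. resolve_path()
            should return the blob hash at `path` (or None), and can bail
            out early whenever a whole subtree hash is unchanged."""
            history = []
            commit = load_commit(head_hash)
            while commit is not None:
                parent = load_commit(commit.parents[0]) if commit.parents else None
                blob = resolve_path(commit.tree, path)
                parent_blob = resolve_path(parent.tree, path) if parent else None
                if blob != parent_blob:
                    history.append(commit)
                commit = parent
            return history

    The win over re-walking from the head for every comparison is that each commit and tree is loaded at most once per pass; a small cache keyed on pack offsets avoids re-inflating the same trees.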

  • Can pydoc/help() hide the documentation for inherited class methods and attributes?

    - by EOL
    When declaring a class that inherits from a specific class:

        class C(dict):
            added_attribute = 0

    the documentation for class C lists all the methods of dict (through either help(C) or pydoc). Is there a way to hide the inherited methods from the automatically generated documentation (the documentation string can refer to the base class for non-overridden methods), or is it impossible? This would be useful: pydoc lists the functions defined in a module after its classes. Thus, when the classes have very long documentation, a lot of less-than-useful information is printed before the new functions provided by the module are presented, which makes the documentation harder to use (you have to skip all the documentation for the inherited methods until you reach something specific to the module being documented).


  • Increment number in string

    - by iform
    Hi, I am stumped... I am trying to get the following output until a certain condition is met:

        test_1.jpg
        test_2.jpg
        ..
        test_50.jpg

    The solution (if you could remotely call it that) that I have is:

        fileCount = 0
        while os.path.exists(dstPath):
            fileCount += 1
            parts = os.path.splitext(dstPath)
            dstPath = "%s_%d%s" % (parts[0], fileCount, parts[1])

    however... this produces the following output:

        test_1.jpg
        test_1_2.jpg
        test_1_2_3.jpg
        ..... etc

    The question: how do I change the number in its current place (without appending numbers to the end)? PS: I'm using this for a file renaming tool.

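    A minimal sketch of the usual fix: split the original path once, and rebuild each candidate from that fixed base instead of from the previous candidate:

        import os

        def unique_path(dst_path):
            base, ext = os.path.splitext(dst_path)   # split only once
            candidate = dst_path
            count = 0
            while os.path.exists(candidate):
                count += 1
                candidate = "%s_%d%s" % (base, count, ext)
            return candidate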

  • Twisted: how to bind a server to a specified IP address? (solved)

    - by daccle
    I want to have a twisted service (started via twistd) which listens for TCP/POST requests on a specified port on a specified IP address. By now I have a twisted application which listens on port 8040 on localhost. It is running fine, but I want it to listen only on a certain IP address, say 10.0.0.78. How do I manage that? This is a snippet of my code:

        application = service.Application('SMS_Inbound')
        smsInbound = resource.Resource()
        smsInbound.putChild('75sms_inbound', ReceiveSMS(application))
        smsInboundServer = internet.TCPServer(8001, webserver.Site(smsInbound))
        smsInboundServer.setName("SMS Handling")
        smsInboundServer.setServiceParent(application)

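    Since the title is marked solved, for the record: internet.TCPServer passes extra keyword arguments through to reactor.listenTCP, whose interface argument restricts the listening address:

        smsInboundServer = internet.TCPServer(
            8001, webserver.Site(smsInbound), interface='10.0.0.78'
        )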

  • Filtering SQLAlchemy query on attribute_mapped_collection field of relationship

    - by bsa
    I have two classes, Tag and Hardware, defined with a simple parent-child relationship (see the full definition at the end). Now I want to filter a query on Tag using the version field in Hardware through an attribute_mapped_collection, e.g.:

        def get_tags(order_code=None, hardware_filters=None):
            session = Session()
            query = session.query(Tag)
            if order_code:
                query = query.filter(Tag.order_code == order_code)
            if hardware_filters:
                for k, v in hardware_filters.iteritems():
                    query = query.filter(getattr(Tag.hardware, k).version == v)
            return query.all()

    But I get:

        AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator'
        object associated with Tag.hardware has an attribute 'baseband

    The same thing happens if I strip it back by hard-coding the attribute, e.g.:

        query.filter(Tag.hardware.baseband.version == v)

    I can do it this way:

        query = query.filter(Tag.hardware.any(artefact=k, version=v))

    But why can't I filter directly through the attribute?

    Class definitions:

        class Tag(Base):
            __tablename__ = 'tag'

            tag_id = Column(Integer, primary_key=True)
            order_code = Column(String, nullable=False)
            version = Column(String, nullable=False)
            status = Column(String, nullable=False)
            comments = Column(String)

            hardware = relationship(
                "Hardware",
                backref="tag",
                collection_class=attribute_mapped_collection('artefact'),
            )

            __table_args__ = (
                UniqueConstraint('order_code', 'version'),
            )

        class Hardware(Base):
            __tablename__ = 'hardware'

            hardware_id = Column(Integer, primary_key=True)
            tag_id = Column(String, ForeignKey('tag.tag_id'))
            product_id = Column(String, nullable=True)
            artefact = Column(String, nullable=False)
            version = Column(String, nullable=False)


  • Using SQLAlchemy, how can I return a count with multiple columns?

    - by Andy
    I am attempting to run a query like this:

        SELECT comment_type_id, name, count(comment_type_id)
        FROM comments, commenttypes
        WHERE comment_type_id=commenttypes.id
        GROUP BY comment_type_id

    Without the join between comments and commenttypes for the name column, I can do this using:

        session.query(Comment.comment_type_id,
                      func.count(Comment.comment_type_id)
                      ).group_by(Comment.comment_type_id).all()

    However, if I try something like this, I get incorrect results:

        session.query(Comment.comment_type_id, Comment.comment_type,
                      func.count(Comment.comment_type_id)
                      ).group_by(Comment.comment_type_id).all()

    I have two problems with the results:

        (1, False, 82920)
        (2, False, 588)
        (3, False, 4278)
        (4, False, 104370)

    Problems:

        1. The False is not correct
        2. The counts are wrong

    My expected results are:

        (1, 'Comment Type 1', 13820)
        (2, 'Comment Type 2', 98)
        (3, 'Comment Type 2', 713)
        (4, 'Comment Type 2', 17395)

    How can I adjust my command to pull the correct name value and the correct count?

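    A sketch of the usual fix, assuming a CommentType class maps the commenttypes table: join it explicitly, select its name column, and group by both selected columns so the counts are not collapsed across names:

        from sqlalchemy import func

        results = (
            session.query(
                Comment.comment_type_id,
                CommentType.name,
                func.count(Comment.comment_type_id),
            )
            .join(CommentType, Comment.comment_type_id == CommentType.id)
            .group_by(Comment.comment_type_id, CommentType.name)
            .all()
        )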

  • Choosing randomly all the elements in the list just once

    - by Dalek
    How is it possible to randomly choose a number from a list with n elements, n times, without picking the same element of the list twice? I wrote code to choose the sequence numbers of the elements in the list, but it is slow:

        >>> redshift = np.array([0.92, 0.17, 0.51, 1.33, ...., 0.41, 0.82])
        >>> redshift.shape
        (1225,)

        exclude = []
        k = 0
        ng = 1225
        while (k < ng):
            flag1 = 0
            sq = random.randint(0, ng)
            while (flag1 < 1):
                if sq in exclude:
                    flag1 = 1
                    sq = random.randint(0, ng)
                else:
                    print sq
                    exclude.append(sq)
                    flag1 = 0
                    z = redshift[sq]
                    k += 1

    It doesn't choose all the sequence numbers of the elements in the list.

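    A minimal sketch of the standard replacement: draw all the indices at once as a random permutation, so every element is visited exactly once and no rejection loop is needed. (Note also that random.randint(0, ng) is inclusive on both ends, so it can return ng, an out-of-range index for an array of size ng.)

        import numpy as np

        for sq in np.random.permutation(redshift.shape[0]):
            z = redshift[sq]
            print sq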

  • Is it possible in SQLAlchemy to filter by a database function or stored procedure?

    - by Rico Suave
    We're using SQLAlchemy in a project with a legacy database. The database has functions/stored procedures. In the past we used raw SQL and we could use these functions as filters in our queries. I would like to do the same for SQLAlchemy queries, if possible. I have read about @hybrid_property, but some of these functions need one or more parameters. For example, I have a User model that has a JOIN to a bunch of historical records. These historical records for a user have a date, a debit, and a credit field, so we can look up the balance of a user at a specific point in time by doing SUM(credit) - SUM(debit) up until the given date. We have a database function for that called dbo.Balance(user_id, date_time). I can use this to check the balance of a user at a given point in time. I would like to use this as a criterion in a query, to select only users that have a negative balance at a specific date/time:

        selection = users.filter(
            coalesce(Users.status, 0) == 1,
            coalesce(Users.no_reminders, 0) == 0,
            dbo.pplBalance(Users.user_id, datetime.datetime.now()) < -0.01,
        ).all()

    This is of course a non-working example, just for you to get the gist of what I'd like to do. The solution looks to be to use hybrid properties, but as I mentioned above, these only work without parameters (as they are properties, not methods). Any suggestions on how to implement something like this (if it's even possible) are welcome. Thanks,

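    A sketch of one way, assuming the function is callable from plain SQL: sqlalchemy.func generates arbitrary (including dotted) SQL function calls, so the stored function can be used directly as a filter criterion without a hybrid property:

        import datetime
        from sqlalchemy import func

        selection = users.filter(
            func.coalesce(Users.status, 0) == 1,
            func.coalesce(Users.no_reminders, 0) == 0,
            func.dbo.pplBalance(Users.user_id, datetime.datetime.now()) < -0.01,
        ).all()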

  • Running the same code on every file in a directory

    - by kaushik
    I have a directory with around 1000 files and I want to run the same code on each of them; my code takes the file name as input. I have written code that copies the information of one file into another file in a different format. Please suggest a method to process all 1000 files one by one, without needing to change the file name by hand every time. I also have a field serial_num which needs to be continuous, i.e. if the first file goes up to 30, then while copying the next file it should continue from 30, not start from 0 again. Suggestions please, thanks.

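    A minimal sketch, where convert_file is a hypothetical stand-in for the asker's existing conversion code, taking the starting serial number and returning the last one it used:

        import os

        src_dir = 'input_files'   # hypothetical directory name
        serial_num = 0            # carried over from one file to the next

        for name in sorted(os.listdir(src_dir)):
            path = os.path.join(src_dir, name)
            serial_num = convert_file(path, start=serial_num)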

  • How to quickly parse a list of strings

    - by math
    If I want to split a list of words separated by a delimiter character, I can use:

        >>> 'abc,foo,bar'.split(',')
        ['abc', 'foo', 'bar']

    But how do I easily and quickly do the same thing if I also want to handle quoted strings which can contain the delimiter character?

    In:

        'abc,"a string, with a comma","another, one"'

    Out:

        ['abc', 'a string, with a comma', 'another, one']

    Related question: How can I parse a comma-delimited string into a list (caveat)?

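    A minimal sketch: the csv module already understands quoted fields that contain the delimiter, so a single reader call does the whole job:

        import csv

        row = next(csv.reader(['abc,"a string, with a comma","another, one"']))
        print row   # ['abc', 'a string, with a comma', 'another, one']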

  • Prevent web2py from caching?

    - by Joe
    Hi! I'm working with web2py, and for some reason web2py seems to fail to notice when code has changed, in certain cases. I can't really narrow it down, but from time to time changes in the code are not reflected; web2py obviously has the old version cached somewhere. The only thing that helps is quitting web2py and restarting it (I'm using the internal server). Any hints? Thank you!


  • Debugging (displaying) SQL command sent to the db by SQLAlchemy

    - by morpheous
    I have an ORM class called Person, which wraps around a person table. After setting up the connection to the db, etc., I run the following statement:

        people = session.query(Person).all()

    The person table does not contain any data (as yet), so when I print the variable people, I get an empty list. I renamed the table referred to in my ORM class, Person, to people_foo (which does not exist). I then ran the script again. I was surprised that no exception was thrown when attempting to access a table that does not exist. I therefore have the following two questions:

        1. How may I set up SQLAlchemy so that it propagates db errors back to the script?
        2. How may I view (i.e. print) the SQL that is being sent to the db engine?

    If it helps, I am using PostgreSQL as the db.

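    For the second question, a minimal sketch: passing echo=True to create_engine makes the engine log every SQL statement it emits (the connection string here is hypothetical):

        from sqlalchemy import create_engine

        engine = create_engine('postgresql://user:password@localhost/mydb',
                               echo=True)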

  • What do I do with a Concrete Syntax Tree?

    - by Cap
    I'm using pyPEG to create a parse tree for a simple grammar. The tree is represented using lists and tuples. Here's an example:

        [('command',
          [('directives',
            [('directive', [('name', 'retrieve')]),
             ('directive', [('name', 'commit')])]),
           ('filename', [('name', 'f30502')])])]

    My question is: what do I do with it at this point? I know a lot depends on what I am trying to do, but I haven't been able to find much about consuming/using parse trees, only about creating them. Does anyone have any pointers to references I might use? Thanks for your help.

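    A minimal sketch of the usual starting point: a recursive visitor over the (name, children) tuples, which can grow into dispatch on node names (an interpreter, a translator to another representation, and so on):

        def walk(node, depth=0):
            # pyPEG nodes are (name, children) tuples; children is a list
            # of further nodes or a plain string leaf
            if isinstance(node, list):
                for child in node:
                    walk(child, depth)
            elif isinstance(node, tuple):
                name, children = node
                print '  ' * depth + name
                walk(children, depth + 1)
            else:
                print '  ' * depth + repr(node)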

  • Sleeping in a thread blocks stdin

    - by Sid
    Hey, I'm running a function which evaluates commands passed in via stdin, and another function which runs a bunch of jobs. I need to make the latter function sleep at regular intervals, but that seems to block stdin. Any advice on how to resolve this would be appreciated. The source code for the functions is:

        def runJobs(comps, jobQueue, numRunning, limit, lock):
            while len(jobQueue) >= 0:
                print(len(jobQueue));
                if len(jobQueue) > 0:
                    comp, tasks = find_computer(comps, 0);
                    # do something
                time.sleep(5);

        def manageStdin():
            print "Global Stdin Begins Now"
            for line in fileinput.input():
                try:
                    print(eval(line));
                except Exception, e:
                    print e;

    --Thanks

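    A sketch, assuming the two functions are meant to run concurrently: put the job loop in a daemon thread, so its time.sleep() calls never touch the thread that is reading stdin:

        import threading

        worker = threading.Thread(target=runJobs,
                                  args=(comps, jobQueue, numRunning, limit, lock))
        worker.daemon = True    # don't keep the process alive on exit
        worker.start()

        manageStdin()           # the main thread stays on stdin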

  • Get node name with minidom

    - by Alex
    Is it possible to get the name of a node using minidom? For example, I have a node:

        <heading><![CDATA[5 year]]></heading>

    What I'm trying to do is store the value heading so that I can use it as a key in a dictionary. The closest I can get is something like:

        [<DOM Element: heading at 0x11e6d28>]

    I'm sure I'm overlooking something very simple here, thanks!

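    A minimal sketch: an Element's tagName (or the more general nodeName) holds the name, and the CDATA child's data holds the text:

        from xml.dom import minidom

        doc = minidom.parseString('<heading><![CDATA[5 year]]></heading>')
        node = doc.getElementsByTagName('heading')[0]
        key = node.tagName              # 'heading'
        value = node.firstChild.data    # '5 year'
        d = {key: value}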

  • How to get all usernames from my model 'MyUser' on Google App Engine

    - by zjm1126
    My model is:

        class MyUser(db.Model):
            user = db.UserProperty()
            password = db.StringProperty(default=UNUSABLE_PASSWORD)
            email = db.StringProperty()
            nickname = db.StringProperty(indexed=False)

    and my method which tries to get all the usernames is:

        s = []
        a = MyUser.all().fetch(10000)
        for i in a:
            s.append(i.username)

    and the error is:

        AttributeError: 'MyUser' object has no attribute 'username'

    So how can I get all the usernames? Which is the simplest way? Thanks

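    A sketch of the likely fix: the model defines user (a db.UserProperty) and nickname, not username, so read the name from whichever of those actually holds it:

        s = []
        for i in MyUser.all().fetch(10000):
            if i.user is not None:
                s.append(i.user.nickname())   # or i.nickname, if that field is used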

  • Pickled my dictionary from ZODB, but the copy is smaller in size?

    - by Someone Someoneelse
    I use ZODB and I want to copy my database_1.fs file to another database_2.fs, so I opened the root dictionary of database_1.fs and pickle.dump'ed it into a file. Then I pickle.load'ed it into a dictionary variable, and in the end I updated the root dictionary of the other database_2.fs with that dictionary variable. It works, but I wonder why the size of database_1.fs is not equal to the size of the other database_2.fs. They are still copies of each other.

        def openstorage(store):  # opens the database
            data = {}
            data['file'] = FileStorage(store)
            data['db'] = DB(data['file'])
            data['conn'] = data['db'].open()
            data['root'] = data['conn'].root()
            return data

        def getroot(dicty):
            return dicty['root']

        def closestorage(dicty):  # close the database after saving
            transaction.commit()
            dicty['file'].close()
            dicty['db'].close()
            dicty['conn'].close()
            transaction.get().abort()

    Then this is what I do:

        import pickle

        loc1 = 'G:\\database_1.fs'
        op1 = openstorage(loc1)
        root1 = getroot(op1)

        loc2 = 'G:\\database_2.fs'
        op2 = openstorage(loc2)
        root2 = getroot(op2)

        >>> len(root1)
        215
        >>> len(root2)
        0

        pickle.dump(root1, open("save.txt", "wb"))
        item = pickle.load(open("save.txt", "rb"))  # now item is a dictionary
        root2.update(item)

        closestorage(op1)
        closestorage(op2)

        # after I open both of the databases
        # I get the same keys in both databases
        # but database_2.fs is smaller than database_1.fs in size, I mean

        >>> len(root2) == len(root1) == 215  # they have the same keys
        True

    Note: (1) there are persistent dictionaries and lists in the original database_1.fs; (2) both of them have the same length and the same indexes.

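    A likely explanation, for what it's worth: FileStorage is append-only and keeps every old object revision, so a freshly written copy contains only the current data. Packing the original should shrink it to a comparable size:

        import time

        op1['db'].pack(time.time())   # drop old revisions up to now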

  • How can I refresh wx.Frame?

    - by aF
    Hello, I have 3 panels and I want to make drags on them. The problem is that when I do a drag on one of them, the frame keeps showing the panel's old position (screenshot omitted). How can I refresh the frame so that its own color appears where the panel no longer is?

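    A minimal sketch, assuming frame refers to the wx.Frame in question: invalidate and repaint the area the panel vacated after the drag:

        frame.Refresh()   # invalidate the whole client area
        frame.Update()    # force an immediate repaint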

  • Element-wise lookup from one ndarray into another ndarray of a different shape

    - by fahhean
    Hi, I am new to numpy. I wonder, is there a way to do a lookup between two ndarrays of different shapes? For example, I have 2 ndarrays as below:

        X = array([[0, 3, 6],
                   [3, 3, 3],
                   [6, 0, 3]])

        Y = array([[0, 100],
                   [3, 500],
                   [6, 800]])

    and I would like to look up each element of X in the first column of Y and return the corresponding element of the second column:

        Z = array([[100, 500, 800],
                   [500, 500, 500],
                   [800, 100, 500]])

    thanks, fahhean

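    A minimal sketch, relying on the first column of Y being sorted: np.searchsorted maps every element of X to its row in Y, and fancy indexing then pulls the second column while preserving X's shape:

        import numpy as np

        X = np.array([[0, 3, 6], [3, 3, 3], [6, 0, 3]])
        Y = np.array([[0, 100], [3, 500], [6, 800]])

        Z = Y[np.searchsorted(Y[:, 0], X), 1]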

  • Feedback on availability with Google App Engine

    - by Ron
    We've had some good experiences building an app on Google App Engine. This first app's target audience is Google Apps users, so there were no issues in terms of it being hosted on Google infrastructure. We like it so much that we would like to investigate using it for another app; however, this next project is for a client who is not really interested in what technology it sits on. They just want it to work, and work all of the time. In this scenario, given that we have the technology applicability and capability side covered, are there any concerns that this stuff is still relatively new and that we may not be as much "in control" as we would be with traditional hosting?

