Search Results

Search found 13596 results on 544 pages for 'mechanize python'.


  • Can pydoc/help() hide the documentation for inherited class methods and attributes?

    - by EOL
    When declaring a class that inherits from another class, for example:

        class C(dict):
            added_attribute = 0

    the documentation for class C lists all the methods of dict (whether generated through help(C) or pydoc). Is there a way to hide the inherited methods from the automatically generated documentation (the docstring can refer to the base class for non-overridden methods), or is it impossible? This would be useful because pydoc lists the functions defined in a module after its classes. When the classes have very long documentation, a lot of not-so-useful information is printed before the new functions provided by the module are presented, which makes the documentation harder to use (you have to skip all the documentation for the inherited methods before reaching anything specific to the module being documented).
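    A minimal sketch of the distinction pydoc glosses over: vars(C) (the class __dict__) holds only members defined on C itself, so a custom documenter could list just those and point readers to help(dict) for the rest. The class below is only an illustration.

        class C(dict):
            """Example subclass; help(C) also prints every inherited dict method."""
            added_attribute = 0

        # Members defined directly on C, excluding dunder entries and anything from dict:
        own_members = {name: value for name, value in vars(C).items()
                       if not name.startswith('__')}
        print(own_members)   # {'added_attribute': 0}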

    Read the article

  • command line arg?

    - by kaushik
    This is a module named XYZ:

        def func(x):
            .....
            .....

        if __name__ == "__main__":
            print func(sys.argv[1])

    Now I have imported this module in another piece of code and want to use func. How can I use it? After import XYZ, where do I give the argument, and what is the syntax for calling it?
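    A minimal sketch of calling the imported function directly (the argument value is a placeholder); sys.argv only matters when the module is run as a script, not when it is imported:

        import XYZ

        result = XYZ.func("some_argument")   # pass the value directly instead of via sys.argv
        print(result)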

    Read the article

  • Removing Item From List - during iteration - what's wrong with this idiom?

    - by monojohnny
    As an experiment, I did this:

        letters = ['a','b','c','d','e','f','g','h','i','j','k','l']
        for i in letters:
            letters.remove(i)
        print letters

    The last print shows that not all items were removed (only every other one was). In IDLE 2.6.2:

        >>> ================================ RESTART ================================
        >>>
        ['b', 'd', 'f', 'h', 'j', 'l']
        >>>

    What's the explanation for this? How could this be rewritten to remove every item?
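    A sketch of why it happens and two common fixes: remove() shifts the remaining items left while the loop's internal index keeps advancing, so every other element is skipped. Iterating over a copy, or clearing the list in place, avoids mutating the sequence being iterated.

        letters = ['a','b','c','d','e','f','g','h','i','j','k','l']
        for ch in letters[:]:      # iterate over a copy, mutate the original
            letters.remove(ch)
        print(letters)             # []

        letters = list('abcdefghijkl')
        del letters[:]             # or letters.clear() on Python 3.3+
        print(letters)             # []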

    Read the article

  • Pickled my dictionary from ZODB but I got a smaller one?

    - by Someone Someoneelse
    I use ZODB and I want to copy my 'database_1.fs' file to another 'database_2.fs', so I opened the root dictionary of 'database_1.fs' and pickle.dump'ed it to a file. Then I pickle.load'ed it into a dictionary variable, and in the end I updated the root dictionary of the other 'database_2.fs' with that dictionary variable. It works, but I wonder why the size of 'database_1.fs' is not equal to the size of the other 'database_2.fs'. They are still copies of each other.

        from ZODB.FileStorage import FileStorage
        from ZODB.DB import DB
        import transaction
        import pickle

        def openstorage(store):                # opens the database
            data = {}
            data['file'] = FileStorage(store)
            data['db'] = DB(data['file'])
            data['conn'] = data['db'].open()
            data['root'] = data['conn'].root()
            return data

        def getroot(dicty):
            return dicty['root']

        def closestorage(dicty):               # close the database after saving
            transaction.commit()
            dicty['file'].close()
            dicty['db'].close()
            dicty['conn'].close()
            transaction.get().abort()

    Then this is what I do:

        loc1 = 'G:\\database_1.fs'
        op1 = openstorage(loc1)
        root1 = getroot(op1)
        loc2 = 'G:\\database_2.fs'
        op2 = openstorage(loc2)
        root2 = getroot(op2)

        >>> len(root1)
        215
        >>> len(root2)
        0

        pickle.dump(root1, open("save.txt", "wb"))
        item = pickle.load(open("save.txt", "rb"))   # now item is a dictionary
        root2.update(item)
        closestorage(op1)
        closestorage(op2)

        # After I open both of the databases
        # I get the same keys in both databases,
        # but database_2.fs is smaller than database_1.fs in size.

        >>> len(root2) == len(root1) == 215          # they have the same keys
        True

    Note: (1) there are persistent dictionaries and lists in the original database_1.fs; (2) both of them have the same length and the same indexes.
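    A sketch of the likely explanation, with the pack step that makes the sizes comparable (paths are placeholders): ZODB's FileStorage is append-only, so database_1.fs still carries old revisions of every object ever committed, while the freshly written database_2.fs only contains the current state. Packing drops the non-current revisions.

        from ZODB.FileStorage import FileStorage
        from ZODB.DB import DB

        db1 = DB(FileStorage('G:\\database_1.fs'))
        db1.pack()          # discard old object revisions
        db1.close()         # afterwards the two file sizes should be close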

    Read the article

  • Creating temporary user accounts - Django

    - by RadiantHex
    Hi folks, I need to set up temporary User models for each visitor, where the visitors are obviously tied to session data. I might not be aware of it, but does Django support attaching data to anonymous users? The only way I am currently aware of is to use the session dictionary that is part of the request object. Help would be very much appreciated!
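    A minimal sketch of the session-based approach (the view and key names are made up): request.user is an AnonymousUser and cannot hold data, but request.session persists per visitor and can stand in for a temporary account.

        from django.http import HttpResponse

        def landing_page(request):
            # Per-visitor state lives in the session, even for anonymous users.
            profile = request.session.setdefault('temp_profile', {'visits': 0})
            profile['visits'] += 1
            request.session.modified = True   # nested mutation is not auto-detected
            return HttpResponse('visit number %d' % profile['visits'])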

    Read the article

  • Is there an efficient way to figure out the headers, cookies, and get/post data being passed to a site?

    - by kryptobs2000
    More specifically, I'm looking for something, perhaps an add-on for Firefox, that once enabled logs all of this information as it's passed to and from the server. I'm doing some web scripting and this would be really handy. If anyone is wondering what I'm doing specifically: currently I'm trying to make a script to repost my craigslist ad every 2 days, since I handle a few things on there. I might even go so far as to make a simple GUI to manage the submissions. I do suspect this goes against the ToS, and for that reason I don't plan to release the code. Besides, craigslist is already bad enough with spam and I'm not trying to contribute further to it; I figured I'd say what I'm doing for the sake of being honest. I don't have any bad intentions with this, just some things I've been trying to sell and an ad for my PC repair business. I've been reposting some things for months now, and so often I just forget to do it.
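    A sketch of inspecting the same traffic from Python itself rather than a browser add-on, using mechanize's debug switches (the URL is a placeholder):

        import mechanize

        cj = mechanize.CookieJar()
        br = mechanize.Browser()
        br.set_cookiejar(cj)
        br.set_debug_http(True)        # print request and response headers as they go by
        br.open('http://example.com/')
        for cookie in cj:              # cookies the server set during the exchange
            print(cookie)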

    Read the article

  • How does git fetch commits associated with a file?

    - by liadan
    I'm writing a simple parser of .git/* files. I have covered almost everything, like objects, refs, pack files, etc. But I have a problem. Let's say I have a big 300M repository (in a pack file) and I want to find all the commits which changed /some/deep/inside/file. What I'm doing now is:

    - fetching the last commit
    - finding the file in it by fetching the parent tree and recursively finding the tree inside, repeating until I get to the file; additionally I check the hashes of each subfolder on the way to the file, and if one of them is the same as in the commit before, I assume the file was not changed (because its parent dir didn't change)
    - then I store the hash of the file and fetch the parent commit
    - I find the file again and check if its hash changed; if it did, the original commit (i.e. the one before the parent) changed the file

    And I repeat this over and over until I reach the very first commit. This solution works, but it sucks. In the worst-case scenario, the first search can take even 3 minutes (for a 300M pack). Is there any way to speed it up? I tried to avoid loading such large objects into memory, but right now I don't see any other way. And even then, the initial memory load will take forever :( Greets and thanks for any help!
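    A sketch of letting git do the walk via a subprocess (the path is a placeholder): git performs essentially the same parent-by-parent tree comparison, but in C and against the pack index, which is why it is fast; comparing against its timing shows how much headroom the pure-parser approach has.

        import subprocess

        out = subprocess.check_output(
            ['git', 'rev-list', 'HEAD', '--', 'some/deep/inside/file'])
        commits = out.splitlines()        # one commit hash per line, newest first
        print(len(commits))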

    Read the article

  • GTK+: How do I process a RadioMenuItem choice without marking it chosen? And vice versa

    - by eugene.shatsky
    In my program, I've got a menu with a group of RadioMenuItem entries. Choosing one of them should trigger a function which can either succeed or fail. If it fails, this RadioMenuItem shouldn't be marked chosen (the previous one should persist). Besides, sometimes I want to set the marked item without running the choice-processing function. Here is my current code:

        # Update seat menu list
        def update_seat_menu(self, seats, selected_seat=None):
            seat_menu = self.builder.get_object('seat_menu')
            # Delete seat menu items
            for menu_item in seat_menu:
                # TODO: is it a good way? does remove() delete obsolete menu_item from memory?
                if menu_item.__class__.__name__ == 'RadioMenuItem':
                    seat_menu.remove(menu_item)
            # Fill menu with new items
            group = []
            for seat in seats:
                menu_item = Gtk.RadioMenuItem.new_with_label(group, str(seat[0]))
                group = menu_item.get_group()
                seat_menu.append(menu_item)
                if str(seat[0]) == selected_seat:
                    menu_item.activate()
                menu_item.connect("activate", self.choose_seat, str(seat[0]))
                menu_item.show()

        # Process item choice
        def choose_seat(self, entry, seat_name):
            # Looks like this is called when item is deselected, too; must check if active
            if entry.get_active():
                # This can either succeed or fail
                self.logind.AttachDevice(seat_name, '/sys'+self.device_syspath, True)

    The chosen RadioMenuItem gets marked irrespective of the choose_seat() execution result, and the only way to set the marked item without triggering choose_seat() is to re-run update_seat_menu() with the selected_seat argument, which is overkill. I tried to connect choose_seat() to 'button-release-event' instead of 'activate' and call entry.activate() in choose_seat() if AttachDevice() succeeds, but this resulted in a whole X desktop lockup until AttachDevice() timed out, and the chosen item still got marked.
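    A sketch of the usual pattern (names follow the question's code, so treat it as an assumption-laden fragment of the same class): block the handler while changing the state programmatically, and on failure revert to the previously active item the same way, so choose_seat() never runs for state changes made from code.

        def mark_without_callback(self, menu_item):
            # Temporarily detach choose_seat so set_active() does not trigger it.
            menu_item.handler_block_by_func(self.choose_seat)
            menu_item.set_active(True)
            menu_item.handler_unblock_by_func(self.choose_seat)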

    Read the article

  • Using ManagementClass.GetInstances() from IronPython

    - by Leo Bontemps
    I have an IronPython script that looks for currently running processes using WMI. The code looks like this:

        import clr
        clr.AddReference('System.Management')
        from System.Management import ManagementClass
        from System import Array

        mc = ManagementClass('Win32_Processes')
        procs = mc.GetInstances()

    The last line, where I call the GetInstances() method, raises the following error:

        Traceback (most recent call first):
          File "<stdin>", line 1, in <module>
        SystemError: Not Found

    I don't understand what's not being found. I believe I may need to pass an instance of ManagementOperationObserver and of EnumerationOptions to GetInstances(); however, I don't understand why that is, since an overload with the signature GetInstances() is available in ManagementClass.
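    A sketch of the likely fix rather than a confirmed diagnosis: the WMI class is named Win32_Process (singular); there is no Win32_Processes class, which is most plausibly what "Not Found" refers to, and the parameterless GetInstances() overload then works.

        import clr
        clr.AddReference('System.Management')
        from System.Management import ManagementClass

        mc = ManagementClass('Win32_Process')
        for proc in mc.GetInstances():
            print(proc['Name'])       # indexer access to the WMI property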

    Read the article

  • How can I create a rules engine without using eval() or exec()?

    - by Angela
    I have a simple rules/conditions table in my database which is used to generate alerts for one of our systems. I want to create a rules engine or a domain-specific language. A simple rule stored in this table would be (omitting the relationships here):

        if temp > 40 send email

    Please note there would be many more such rules. A script runs once daily to evaluate these rules and perform the necessary actions. At the beginning there was only one rule, so the script was written to support only that rule. However, we now need to make it more scalable to support different conditions/rules. I have looked into rules engines, but I hope to achieve this in some simple Pythonic way. At the moment I have only come up with eval/exec, and I know that is not the most recommended approach. So, what would be the best way to accomplish this? (The rules are stored as data in the database, so each object like "temperature", condition like ">, <, =, etc.", value like "40, 50, etc." and action like "email, sms, etc." is stored in the database. I retrieve these to form the condition, e.g. "if temp > 50 send email"; my idea was then to use exec or eval on them to turn them into live code, but I'm not sure if this is the right approach.)
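    A minimal sketch of evaluating such rules without eval()/exec(): map the stored operator and action strings onto whitelists of callables and look them up at run time. The dictionary keys and the rule layout here are assumptions, not the actual schema.

        import operator

        OPERATORS = {'>': operator.gt, '<': operator.lt, '==': operator.eq,
                     '>=': operator.ge, '<=': operator.le}

        def send_email(rule, value):
            print('ALERT (email): %s %s %s, current value %s'
                  % (rule['object'], rule['condition'], rule['value'], value))

        ACTIONS = {'email': send_email}

        def evaluate(rule, readings):
            value = readings[rule['object']]                  # e.g. readings['temp']
            if OPERATORS[rule['condition']](value, rule['value']):
                ACTIONS[rule['action']](rule, value)

        rule = {'object': 'temp', 'condition': '>', 'value': 40, 'action': 'email'}
        evaluate(rule, {'temp': 45})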

    Read the article

  • Find all A^x in a given range

    - by Austin Henley
    I need to find all monomials of the form A^X that, when evaluated, fall within a range from m to n. It is safe to say that the base A is greater than 1, the power X is greater than 2, and only integers need to be used. For example, in the range 50 to 100, the solutions would be:

        2^6
        3^4
        4^3

    My first attempt was to brute-force all combinations of A and X that make "sense". However, this becomes too slow when used for very large numbers in a big range, since these solutions are used as part of much more intensive processing. Here is the code:

        def monoSearch(min, max):
            base = 2
            power = 3
            while 1:
                while base**power < max:
                    if base**power > min:
                        print "Found " + repr(base) + "^" + repr(power) + " = " + repr(base**power)
                    power = power + 1
                base = base + 1
                power = 3
                if base**power > max:
                    break

    I could remove one base**power by saving the value in a temporary variable, but I don't think that would have a drastic effect. I also wondered whether using logarithms would be better, or whether there is a closed-form expression for this. I am open to any optimizations or alternatives for finding the solutions.
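    A sketch of the logarithm-based alternative mentioned at the end: for each exponent x from 3 up to log2(n), the admissible bases run from ceil(m**(1/x)) to floor(n**(1/x)), so no combination is tried that could not land in the range. The final integer comparison guards against floating-point rounding.

        import math

        def powers_in_range(m, n):
            results = []
            max_power = int(math.log(n, 2))          # 2 is the smallest base allowed
            for x in range(3, max_power + 1):
                lo = int(math.ceil(m ** (1.0 / x)))
                hi = int(math.floor(n ** (1.0 / x)))
                for a in range(max(lo, 2), hi + 1):
                    if m <= a**x <= n:               # re-check with exact integers
                        results.append((a, x))
            return results

        print(powers_in_range(50, 100))   # [(4, 3), (3, 4), (2, 6)]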

    Read the article

  • How do you calculate expanding mean on time series using pandas?

    - by mlo
    How would you create column(s) in the below pandas DataFrame where the new columns are the expanding mean/median of 'val' for each 'Mod_ID_x'? Imagine this as if it were time series data, where 'ID' 1-2 was on Day 1 and 'ID' 3-4 was on Day 2. I have tried every way I could think of but just can't seem to get it right.

        left4 = pd.DataFrame({'ID': [1, 2, 3, 4],
                              'val': [10000, 25000, 20000, 40000],
                              'Mod_ID': [15, 35, 15, 42],
                              'car': ['ford', 'honda', 'ford', 'lexus']})
        right4 = pd.DataFrame({'ID': [3, 1, 2, 4],
                               'color': ['red', 'green', 'blue', 'grey'],
                               'wheel': ['4wheel', '4wheel', '2wheel', '2wheel'],
                               'Mod_ID': [15, 15, 35, 42]})
        df1 = pd.merge(left4, right4, on='ID').drop('Mod_ID_y', axis=1)
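    A sketch of one way to do it (column names follow the merged frame df1 from the question; the groupby/expanding API used here is the modern pandas one, so treat it as an assumption for older versions):

        import pandas as pd

        left4 = pd.DataFrame({'ID': [1, 2, 3, 4],
                              'val': [10000, 25000, 20000, 40000],
                              'Mod_ID': [15, 35, 15, 42],
                              'car': ['ford', 'honda', 'ford', 'lexus']})
        right4 = pd.DataFrame({'ID': [3, 1, 2, 4],
                               'color': ['red', 'green', 'blue', 'grey'],
                               'wheel': ['4wheel', '4wheel', '2wheel', '2wheel'],
                               'Mod_ID': [15, 15, 35, 42]})
        df1 = pd.merge(left4, right4, on='ID').drop('Mod_ID_y', axis=1)

        grouped = df1.groupby('Mod_ID_x')['val']
        df1['expanding_mean'] = grouped.transform(lambda s: s.expanding().mean())
        df1['expanding_median'] = grouped.transform(lambda s: s.expanding().median())
        print(df1)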

    Read the article

  • Generating a .CSV with Several Columns - Use a Dictionary?

    - by Qanthelas
    I am writing a script that looks through my inventory, compares it with a master list of all possible inventory items, and tells me what items I am missing. My goal is a .csv file where the first column contains a unique key integer and then the remaining several columns would have data related to that key. For example, a three-row snippet of my end-goal .csv file might look like this:

        100001,apple,fruit,medium,12,red
        100002,carrot,vegetable,medium,10,orange
        100005,radish,vegetable,small,10,red

    The data for this is being drawn from a couple of sources. First, a query to an API server gives me a list of keys for items that are in inventory. Second, I read a .csv file into a dict that matches keys with item names for all possible keys. A snippet of the first 5 rows of this .csv file might look like this:

        100001,apple
        100002,carrot
        100003,pear
        100004,banana
        100005,radish

    Note how any key in my list of inventory will be found in this two-column .csv file that gives all keys and their corresponding item names, and this list minus my inventory on hand yields what I'm looking for (which is the inventory I need to get). So far I can get a .csv file that contains just the keys and item names for the items that I don't have in inventory. Given a list of inventory on hand like this:

        100003,100004

    a snippet of my resulting .csv file looks like this:

        100001,apple
        100002,carrot
        100005,radish

    This means that I have pear and banana in inventory (so they are not in this .csv file). To get this I have a function to get an item name when given an item id that looks like this:

        def getNames(id_to_name, ids):
            return [id_to_name[id] for id in ids]

    Then there is a function which gives a list of keys as integers from my inventory server API call, and I've run it like this:

        invlist = ServerApiCallFunction(AppropriateInfo)

    A third function takes this invlist as its input and returns a dict of keys (the item ids) and names for the items I don't have. It also writes the information of this dict to a .csv file. I am using the set1 - set2 method to do this. It looks like this:

        def InventoryNumbers(inventory):
            with open(csvfile, 'w') as c:
                c.write('InvName' + ',InvID' + '\n')
            missinginvnames = []
            with open("KeyAndItemNameTwoColumns.csv", "rb") as fp:
                reader = csv.reader(fp, skipinitialspace=True)
                fp.readline()   # skip header
                invidsandnames = {int(id): str.upper(name) for id, name in reader}
            invids = set(invidsandnames.keys())
            invnames = set(invidsandnames.values())
            invonhandset = set(inventory)
            missinginvidsset = invids - invonhandset
            missinginvids = list(missinginvidsset)
            missinginvnames = getNames(invidsandnames, missinginvids)
            missinginvnameswithids = dict(zip(missinginvnames, missinginvids))
            print missinginvnameswithids
            with open(csvfile, 'a') as c:
                for invname, invid in missinginvnameswithids.iteritems():
                    c.write(invname + ',' + str(invid) + '\n')
            return missinginvnameswithids

    which I then call like this:

        InventoryNumbers(invlist)

    With that explanation, now on to my question. I want to expand the data in this output .csv file by adding additional columns. The data for this would be drawn from another .csv file, a snippet of which would look like this:

        100001,fruit,medium,12,red
        100002,vegetable,medium,10,orange
        100003,fruit,medium,14,green
        100004,fruit,medium,12,yellow
        100005,vegetable,small,10,red

    Note how this does not contain the item name (so I have to pull that from a different .csv file that just has the two columns of key and item name), but it does use the same keys.
    I am looking for a way to bring in this extra information so that my final .csv file will not just tell me the keys (which are item ids) and item names for the items I don't have in stock, but will also have columns for type, size, number, and color. One option I've looked at is defaultdict from collections, but I'm not sure if this is the best way to go about what I want to do. If I did use this method, I'm not sure exactly how I'd call it to achieve my desired result. If some other method would be easier, I'm certainly willing to try that, too. How can I take my dict of keys and corresponding item names for items that I don't have in inventory and add this extra information to it in such a way that I could output it all to a .csv file?

    EDIT: As I typed this up, it occurred to me that I might make things easier on myself by creating a new single .csv file that would have data in the form key,item name,type,size,number,color (basically just copying the item name column into the .csv that already has the other information for each key). This way I would only need to draw from one .csv file rather than from two. Even if I did this, though, how would I go about making my desired .csv file based on only those keys for items not in inventory?
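    A minimal sketch of the join (file names other than the two-column one are placeholders, and no defaultdict is needed): key both lookup files by the integer id, subtract the on-hand set, and write one row per missing id with the name plus the extra columns.

        import csv

        def load_names(path):
            with open(path, 'rb') as fp:
                return {int(row[0]): row[1] for row in csv.reader(fp)}

        def load_details(path):
            with open(path, 'rb') as fp:
                return {int(row[0]): row[1:] for row in csv.reader(fp)}   # type, size, number, color

        def write_missing(on_hand, names_csv, details_csv, out_csv):
            names = load_names(names_csv)
            details = load_details(details_csv)
            missing = sorted(set(names) - set(on_hand))
            with open(out_csv, 'wb') as out:
                writer = csv.writer(out)
                writer.writerow(['InvID', 'InvName', 'Type', 'Size', 'Number', 'Color'])
                for item_id in missing:
                    writer.writerow([item_id, names[item_id]] + details[item_id])

        write_missing([100003, 100004],
                      'KeyAndItemNameTwoColumns.csv',
                      'ItemDetails.csv',          # hypothetical name for the extra-columns file
                      'MissingInventory.csv')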

    Read the article

  • a function that returns a random number that is a multiple of 3 between 0 and the function's non-negative integer parameter n

    - by martin
    I need to write a function called multipleOf3 that returns a random number that is a multiple of 3 between 0 and the function's non-negative integer parameter n. Here is the result I want. [Note: no number returned can be greater than the value of the parameter n.] Examples:

        multipleOf3(0)  -- 0
        multipleOf3(1)  -- 0
        multipleOf3(2)  -- 0
        multipleOf3(3)  -- 0 or 3
        multipleOf3(20) -- 0 or 3 or 6 or 9 or 12 or 15 or 18
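    A minimal sketch of one way to satisfy those examples: pick a random count of threes that still fits under n.

        import random

        def multipleOf3(n):
            return 3 * random.randint(0, n // 3)   # n // 3 is the largest usable multiplier

        print(multipleOf3(20))   # one of 0, 3, 6, 9, 12, 15, 18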

    Read the article

  • Preventing a security breach

    - by Wiz
    I am creating a website where you "post", and the form content is saved in a MySQL database and retrieved when the page loads, similar to Facebook. I construct all the posts and insert raw HTML into a template. The thing is, as I was testing, I noticed that I could write JavaScript or other HTML into the form and submit it, and upon reloading, the HTML or JS would be treated as source code, not as a post. I figured that some simple encoding would do the trick, but what I tried is not working. Is there an efficient way to prevent this type of security hole?
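    A minimal sketch of the standard fix for this kind of stored XSS: escape user-supplied text before inserting it into the HTML template, so markup and scripts render as literal text instead of executing.

        import cgi   # on Python 3, use html.escape instead

        post_body = '<script>alert("hi")</script>'
        safe = cgi.escape(post_body, quote=True)
        print(safe)   # &lt;script&gt;alert(&quot;hi&quot;)&lt;/script&gt;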

    Read the article

  • Defining the hash of an object as the sum of hashes of its members

    - by Space_C0wb0y
    I have a class that represents undirected edges in a graph. Every edge has two members, vertex1 and vertex2, representing the vertices it connects. The problem is that an edge can be specified in two directions. My idea was to define the hash of an edge as the sum of the hashes of its vertices. This way the direction plays no role anymore; the hash would be the same. Are there any pitfalls with that?
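    A sketch of a nearby variant that avoids the main pitfalls: hash an order-insensitive container of the two vertices and define __eq__ consistently with it. A plain sum is a valid hash (it is symmetric), but it collides easily (for integer vertices, (1, 4) and (2, 3) hash alike) and it still needs a matching __eq__ to be useful in sets and dicts.

        class Edge(object):
            def __init__(self, vertex1, vertex2):
                self.vertex1 = vertex1
                self.vertex2 = vertex2

            def __eq__(self, other):
                return {self.vertex1, self.vertex2} == {other.vertex1, other.vertex2}

            def __hash__(self):
                return hash(frozenset((self.vertex1, self.vertex2)))

        print(Edge(1, 2) == Edge(2, 1))                # True
        print(hash(Edge(1, 2)) == hash(Edge(2, 1)))    # True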

    Read the article

  • Problem with validating ModelForm

    - by user561640
    I use ModelForm to create my form. All works fine except one thing: validating the unique field. Code:

        class Article(models.Model):
            ...
            title = models.CharField(max_length=255, unique=True,
                                     error_messages={'max_length': 'max translation',
                                                     'unique': 'unique translation',
                                                     'required': 'req translation'})
            ...

        class ArticleForm(ModelForm):
            ...
            title = forms.CharField(max_length=255, min_length=3,
                                    error_messages={'required': 'req translation',
                                                    'min_length': 'min translation',
                                                    'max_length': 'max translation',
                                                    'unique': 'unique translation'})

    But when I save my form with a non-unique title, I don't get my custom translated error; I get the default error. How do I fix this so that my unique-field error is displayed?
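    A sketch of one possible cause and fix, offered as an assumption rather than a confirmed answer: the unique check runs against the model during the ModelForm's post-clean step, so the error_messages of the redeclared form field are not consulted for 'unique'. On Django 1.6+ the message can be overridden per field in the form's Meta:

        class ArticleForm(ModelForm):
            class Meta:
                model = Article
                fields = ['title']
                error_messages = {
                    'title': {'unique': 'unique translation'},
                }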

    Read the article

  • Own params to PeriodicTask run() method in Celery

    - by Alex Isayko
    Hello to all! I am writing a small Django application and I need to be able to create, for each model object, a periodic task which will be executed at a certain interval. I'm using Celery for this, but I can't understand one thing:

        class ProcessQueryTask(PeriodicTask):
            run_every = timedelta(minutes=1)

            def run(self, query_task_pk, **kwargs):
                logging.info('Process celery task for QueryTask %d' % query_task_pk)
                task = QueryTask.objects.get(pk=query_task_pk)
                task.exec_task()
                return True

    Then I do the following:

        >>> from tasks.tasks import ProcessQueryTask
        >>> result1 = ProcessQueryTask.delay(query_task_pk=1)
        >>> result2 = ProcessQueryTask.delay(query_task_pk=2)

    The first call succeeds, but the subsequent periodic calls return the error TypeError: run() takes exactly 2 non-keyword arguments (1 given) on the celeryd server. So, can I pass my own params to PeriodicTask's run()? Thanks!
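    A sketch of a common workaround, with class names matching the question and import paths assuming the old Celery 2.x-style API: the beat scheduler calls run() with no arguments, so a parameterless periodic task can fan out ordinary tasks that do take the primary key.

        from datetime import timedelta
        from celery.task import PeriodicTask, Task

        class ProcessQueryTask(Task):
            def run(self, query_task_pk, **kwargs):
                task = QueryTask.objects.get(pk=query_task_pk)
                task.exec_task()
                return True

        class DispatchQueryTasks(PeriodicTask):
            run_every = timedelta(minutes=1)

            def run(self, **kwargs):
                # Queue one ProcessQueryTask per model object, each with its own pk.
                for pk in QueryTask.objects.values_list('pk', flat=True):
                    ProcessQueryTask.delay(query_task_pk=pk)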

    Read the article

  • Prevent web2py from caching?

    - by Joe
    Hi! I'm working with web2py, and for some reason web2py seems to fail to notice when code has changed, in certain cases. I can't really narrow it down, but from time to time changes in the code are not reflected; web2py obviously has the old version cached somewhere. The only thing that helps is quitting web2py and restarting it (I'm using the internal server). Any hints? Thank you!
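    A sketch of one frequent cause, offered as an assumption rather than a diagnosis: web2py re-executes models and controllers on every request, but plain Python modules imported from them are cached by the interpreter. web2py ships a track_changes switch that re-imports local modules whose source files have changed; enabling it in a model file often resolves exactly this symptom.

        # e.g. at the top of models/0.py
        from gluon.custom_import import track_changes
        track_changes(True)   # re-import local modules when their files change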

    Read the article

  • How to quickly parse a list of strings

    - by math
    If I want to split a list of words separated by a delimiter character, I can use

        >>> 'abc,foo,bar'.split(',')
        ['abc', 'foo', 'bar']

    But how to easily and quickly do the same thing if I also want to handle quoted-strings which can contain the delimiter character?

        In:  'abc,"a string, with a comma","another, one"'
        Out: ['abc', 'a string, with a comma', 'another, one']

    Related question: How can i parse a comma delimited string into a list (caveat)?
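    A minimal sketch using the csv module, which already understands quoted fields that contain the delimiter:

        import csv

        line = 'abc,"a string, with a comma","another, one"'
        print(next(csv.reader([line])))
        # ['abc', 'a string, with a comma', 'another, one']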

    Read the article

  • Programmatic binding of accelerators in wxPython

    - by Inductiveload
    I am trying to programmatically create and bind a table of accelerators in wxPython in a loop, so that I don't need to worry about getting and assigning new IDs to each accelerator (and with a view to inhaling the handler list from some external resource, rather than hard-coding them). I also pass some arguments to the handler via a lambda, since a lot of my handlers will be the same but with different parameters (move, zoom, etc). The class is subclassed from wx.Frame and setup_accelerators() is called during initialisation.

        def setup_accelerators(self):
            bindings = [
                (wx.ACCEL_CTRL, wx.WXK_UP, self.on_move, 'up'),
                (wx.ACCEL_CTRL, wx.WXK_DOWN, self.on_move, 'down'),
                (wx.ACCEL_CTRL, wx.WXK_LEFT, self.on_move, 'left'),
                (wx.ACCEL_CTRL, wx.WXK_RIGHT, self.on_move, 'right'),
            ]
            accelEntries = []
            for binding in bindings:
                eventId = wx.NewId()
                accelEntries.append((binding[0], binding[1], eventId))
                self.Bind(wx.EVT_MENU, lambda event: binding[2](event, binding[3]), id=eventId)
            accelTable = wx.AcceleratorTable(accelEntries)
            self.SetAcceleratorTable(accelTable)

        def on_move(self, e, direction):
            print direction

    However, this appears to bind all the accelerators to the last entry, so that Ctrl+Up prints "right", as do all the other three. How do I correctly bind multiple handlers in this way?
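    A sketch of the underlying issue, independent of wx: a lambda created in a loop looks up the loop variable when it is called, not when it is defined, so every handler sees the final value of binding. Capturing the current values as default arguments (or via functools.partial) freezes them per iteration; in the question's loop that would be lambda event, handler=binding[2], arg=binding[3]: handler(event, arg).

        handlers_broken = [lambda: name for name in ('up', 'down', 'left', 'right')]
        handlers_fixed = [lambda name=name: name for name in ('up', 'down', 'left', 'right')]

        print([h() for h in handlers_broken])   # ['right', 'right', 'right', 'right']
        print([h() for h in handlers_fixed])    # ['up', 'down', 'left', 'right']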

    Read the article
