Search Results

Search found 63 results on 3 pages for 'intuited'.

  • documenting class properties

    - by intuited
    I'm writing a lightweight class whose properties are intended to be publicly accessible, and only sometimes overridden in specific instantiations. There's no provision in the Python language for creating docstrings for class properties, or any sort of properties, for that matter. What is the accepted way, should there be one, to document these properties? Currently I'm doing this sort of thing:

        class Albatross(object):
            """A bird with a flight speed exceeding that of an unladen swallow.

            Properties:
            """

            flight_speed = 691
            __doc__ += """
                flight_speed (691)
                  The maximum speed that such a bird can attain.
            """

            nesting_grounds = "Throatwarbler Man Grove"
            __doc__ += """
                nesting_grounds ("Throatwarbler Man Grove")
                  The locale where these birds congregate to reproduce.
            """

            def __init__(self, **keyargs):
                """Initialize the Albatross from the keyword arguments."""
                self.__dict__.update(keyargs)

    Although this style doesn't seem to be expressly forbidden in the docstring style guidelines, it's also not mentioned as an option. The advantage here is that it provides a way to document properties alongside their definitions, while still creating a presentable class docstring, and avoiding having to write comments that reiterate the information from the docstring. I'm still kind of annoyed that I have to actually write the property names twice; I'm considering using the string representations of the values in the docstring to at least avoid duplicating the default values.

    Is this a heinous breach of the ad hoc community conventions? Is it okay? Is there a better way? For example, it's possible to create a dictionary containing values and docstrings for the properties, and then add the contents to the class __dict__ and docstring towards the end of the class declaration (see the sketch below); this would alleviate the need to type the property names and values twice. I'm pretty new to Python and still working out the details of coding style, so unrelated critiques are also welcome.
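
    Something like this is what I have in mind for the dictionary approach, with the metadata living in one place and both the class namespace and the docstring generated from it (an untested sketch; `_props` and the formatting are just placeholders I made up):

        class Albatross(object):
            """A bird with a flight speed exceeding that of an unladen swallow.

            Properties:
            """

            # one dict as the single source for names, defaults, and docs
            _props = {
                'flight_speed': (691,
                    "The maximum speed that such a bird can attain."),
                'nesting_grounds': ("Throatwarbler Man Grove",
                    "The locale where these birds congregate to reproduce."),
            }

            # populate the class namespace and docstring from the dict;
            # writing through locals() takes effect in a class body (CPython)
            for _name, (_value, _doc) in sorted(_props.items()):
                locals()[_name] = _value
                __doc__ += "\n    %s (%r)\n      %s\n" % (_name, _value, _doc)
            del _name, _value, _doc

            def __init__(self, **keyargs):
                """Initialize the Albatross from the keyword arguments."""
                self.__dict__.update(keyargs)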

  • Why does extend() engage in bizarre behaviour when passed the same list twice?

    - by intuited
    I'm pretty confused by one of the subtleties of the vimscript extend() function. If you use it to extend a list with another list, it does pretty much what you'd expect, which is to insert the second list into the first list at the index given by the third parameter:

        let list1 = [1,2,3,4,5,6] | echo extend(list1, [1,2,3,4,5,6], 5)
        " [1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 6]

    However, if you give it the same list twice it starts tripping out a bit:

        let list1 = [1,2,3,4,5,6] | echo extend(list1, list1, 0)
        " [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6]
        let list1 = [1,2,3,4,5,6] | echo extend(list1, list1, 1)
        " [1, 1, 1, 1, 1, 1, 1, 2, 3, 4, 5, 6]
        let list1 = [1,2,3,4,5,6] | echo extend(list1, list1, 2)
        " [1, 2, 1, 2, 1, 2, 1, 2, 3, 4, 5, 6]
        let list1 = [1,2,3,4,5,6] | echo extend(list1, list1, 3)
        " [1, 2, 3, 1, 2, 3, 1, 2, 3, 4, 5, 6]
        let list1 = [1,2,3,4,5,6] | echo extend(list1, list1, 4)
        " [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 5, 6]
        let list1 = [1,2,3,4,5,6] | echo extend(list1, list1, 5)
        " [1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 6]
        let list1 = [1,2,3,4,5,6] | echo extend(list1, list1, 6)
        " [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6]

    Extra-confusingly, this behaviour applies when the list is referenced by two different variables:

        let list1 = [1,2,3,4,5,6] | let list2 = list1 | echo extend(list1, list2, 4)
        " [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 5, 6]

    This is totally bizarre to me. I can't fathom a use for this functionality, and it seems like it would be really easy to invoke it by accident when you just wanted to insert one list into another and didn't realize that the variables were referencing the same list. The documentation says the following:

        If they are |Lists|: Append {expr2} to {expr1}.
        If {expr3} is given insert the items of {expr2} before item {expr3}
        in {expr1}.  When {expr3} is zero insert before the first item.
        When {expr3} is equal to len({expr1}) then {expr2} is appended.
        Examples:
            :echo sort(extend(mylist, [7, 5]))
            :call extend(mylist, [2, 3], 1)
        When {expr1} is the same List as {expr2} then the number of items
        copied is equal to the original length of the List.  E.g., when
        {expr3} is 1 you get N new copies of the first item (where N is
        the original length of the List).

    Does this make sense in a way that I'm not getting, or is it just an eccentricity?
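
    After staring at the outputs, I think I can at least reproduce the pattern: vim's lists are linked lists, and the results are consistent with the copy walking N nodes from the head while splicing each copy in just before a fixed anchor node. Here's a Python model of that guess (it reproduces all of the outputs above, but it's conjecture on my part, not vim's actual source):

        def vim_extend_same_list(lst, pos):
            """Model extend(lst, lst, pos) when both arguments are the
            same vim list.  n items are copied (n = original length),
            reading node by node from the head; each copy is spliced in
            just before the node originally at index pos.  Once the walk
            reaches the splice point, it starts reading the copies."""
            n = len(lst)
            read = 0      # index of the node the walk is currently reading
            anchor = pos  # index of the node copies are inserted before
            for _ in range(n):
                lst.insert(anchor, lst[read])
                if anchor <= read:
                    read += 1  # the insert shifted the node we just read
                anchor += 1    # the anchor node itself shifted right
                read += 1      # advance the walk to the next node
            return lst

        for pos in range(7):
            print(pos, vim_extend_same_list([1, 2, 3, 4, 5, 6], pos))
        # 0 [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6]
        # 1 [1, 1, 1, 1, 1, 1, 1, 2, 3, 4, 5, 6]
        # ...
        # 4 [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 5, 6]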

  • foldmethod=indent gets confused

    - by intuited
    Vim's indent-based folding is normally a great boon to humanity, but on occasion it gets confused and needs a reset via :set foldmethod=indent. Symptoms include the appearance of consecutive folded lines in the window. Is there a way to avoid having this happen? Is it just me?

  • tools for testing vim plugins

    - by intuited
    I'm looking for some tools for testing vim scripts: either vim scripts that do unit/functional testing, or classes for some other library (e.g. Python's unittest module) that make it convenient to run vim with parameters that cause it to do some tests on its environment, and determine from the output whether or not a given test passed.

    I'm aware of a couple of vim scripts that do unit testing, but they're sort of vaguely documented and may or may not actually be useful:

    vim-unit:
      - purports "To provide vim scripts with a simple unit testing framework and tools"
      - first and only version (v0.1) was released in 2004
      - documentation doesn't mention whether or not it works reliably, other than to state that it is "fare [sic] from finished"

    unit-test.vim:
      - also seems pretty experimental, and may not be particularly reliable
      - may have been abandoned or back-shelved: last commit was in 2009-11 (about 6 months ago)
      - no tagged revisions have been created (i.e. no releases)

    So information from people who are using either of those two existing modules, and/or links to other, more clearly usable, options, would be very welcome.
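
    The other approach I mentioned, driving vim from Python's unittest module, would look something like this (an untested sketch; the file names and the PASS/FAIL protocol are conventions I made up for the example):

        import subprocess
        import unittest

        class VimScriptTest(unittest.TestCase):
            def run_vim(self, commands):
                """Run vim non-interactively with a clean config and
                return its exit status."""
                argv = ['vim', '-N', '-u', 'NONE', '-es']  # no vimrc, silent ex mode
                for cmd in commands:
                    argv.extend(['-c', cmd])
                argv.extend(['-c', 'qa!'])  # always quit, pass or fail
                return subprocess.call(argv)

            def test_plugin_loads(self):
                # the script under test writes PASS/FAIL to results.txt
                status = self.run_vim([
                    'source plugin/undertest.vim',
                    'call writefile([exists("*UnderTestFunc") ? "PASS" : "FAIL"],'
                        ' "results.txt")',
                ])
                self.assertEqual(status, 0)
                with open('results.txt') as f:
                    self.assertEqual(f.read().strip(), 'PASS')

        if __name__ == '__main__':
            unittest.main()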

  • just-in-time list

    - by intuited
    I'd like to know if there is a class available, either in the standard library or in pypi, that fits this description. The constructor would take an iterator. It would implement the container protocol (i.e. __getitem__, __len__, etc.), so that slices, length, etc., would work. In doing so, it would iterate and retain just enough values from its constructor argument to provide whatever information was requested. So if jitlist[6] was requested, it would call self.source.next() 7 times, save those elements in its list, and return the last one. This would allow downstream code to use it as a list, but avoid unnecessarily instantiating a list for cases where list functionality was not needed, and avoid allocating memory for the entire list if only a few members ended up being requested. It seems like a pretty easy one to write, but it also seems useful enough that it's likely that someone would have already made it available in a module.
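
    In case it helps pin down what I mean, here's the sort of thing I'd write myself (an untested sketch; negative steps and other slice corner cases are glossed over):

        class JitList(object):
            """List facade over an iterator: consumes only as many
            items from the source as are needed for each request."""

            def __init__(self, source):
                self._source = iter(source)
                self._cache = []
                self._done = False

            def _fill_to(self, index):
                # pull items until the cache covers `index` or the
                # source runs dry
                while not self._done and len(self._cache) <= index:
                    try:
                        self._cache.append(next(self._source))
                    except StopIteration:
                        self._done = True

            def __getitem__(self, index):
                if isinstance(index, slice):
                    # a slice without a definite stop needs everything
                    if index.stop is None or index.stop < 0:
                        self._fill_to(float('inf'))
                    else:
                        self._fill_to(index.stop - 1)
                    return self._cache[index]
                if index < 0:
                    self._fill_to(float('inf'))  # needs the full length
                else:
                    self._fill_to(index)
                return self._cache[index]

            def __len__(self):
                self._fill_to(float('inf'))  # len() forces full consumption
                return len(self._cache)

        jitlist = JitList(2 ** n for n in range(100))
        print(jitlist[6])    # consumes 7 items from the source; prints 64
        print(jitlist[2:4])  # served from the cache: [4, 8]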

  • committing to a branch that's not checked out

    - by intuited
    I'm using git to version my home directories on a couple different machines. I'd like for them to each use separate branches and both pull from a common branch. So most commits should be made to that common branch, unless something specific to that machine is being committed, in which case the commit should go to the checked out, machine-specific branch. Switching branches is clearly not a very good option in this case. It's mentioned in this post that what I want to do is impossible, but I found that answer to be rather blunt and to perhaps not take into account the possibility of using the plumbing commands. Unfortunately I don't have enough reputation to comment on that thread. I rather suspect that there is some way to do this and am hoping to save myself an hour or few of questing for the answer by just asking you good folk. So is it possible to commit to a different branch without checking that branch out first? Ideally I'd like to use the index in the same way that git commit normally does.
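
    For concreteness, the plumbing sequence I'm imagining would go something like this, sketched as a Python wrapper (untested, and the throwaway-index trick is the part I'm least sure of):

        import os
        import subprocess
        import tempfile

        def commit_to_branch(branch, paths, message):
            """Commit working-tree versions of `paths` onto `branch`
            without checking it out, using a throwaway index file so the
            real index is left alone.  Run from the top of the work tree."""
            fd, index = tempfile.mkstemp()
            os.close(fd)
            env = dict(os.environ, GIT_INDEX_FILE=index)

            def git(*args):
                return subprocess.check_output(('git',) + args,
                                               env=env).decode().strip()

            git('read-tree', branch)                    # seed temp index from the branch
            git('update-index', '--add', '--', *paths)  # stage the working-tree files
            tree = git('write-tree')
            parent = git('rev-parse', branch)
            commit = git('commit-tree', tree, '-p', parent, '-m', message)
            git('update-ref', 'refs/heads/%s' % branch, commit, parent)
            os.unlink(index)
            return commit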

  • dynamic module creation

    - by intuited
    I'd like to dynamically create a module from a dictionary, and I'm wondering if adding an element to sys.modules is really the best way to do this. E.g.:

        context = {'a': 1, 'b': 2}

        import types
        test_context_module = types.ModuleType(
            'TestContext',
            'Module created to provide a context for tests')
        test_context_module.__dict__.update(context)

        import sys
        sys.modules['TestContext'] = test_context_module

    My immediate goal in this regard is to be able to provide a context for timing test execution:

        import timeit
        timeit.Timer('a + b', 'from TestContext import *')

    It seems that there are other ways to do this, since the Timer constructor takes objects as well as strings. I'm still interested in learning how to do this, though, since (a) it has other potential applications, and (b) I'm not sure exactly how to use objects with the Timer constructor; doing so may prove to be less appropriate than this approach in some circumstances.

    EDITS/REVELATIONS/PHOOEYS/EUREKAE:

    I've realized that the example code relating to running timing tests won't actually work, because import * only works at the module level, and the context in which that statement is executed is that of a function in the timeit module. In other words, the globals dictionary used when executing that code is that of __main__, since that's where I was when I wrote the code in the interactive shell. So that rationale for figuring this out is a bit botched, but it's still a valid question.

    I've discovered that the code run in the first set of examples has the undesirable effect that the namespace in which the newly created module's code executes is that of the module in which it was declared, not its own module. This is like way weird, and could lead to all sorts of unexpected rattlesnakeic sketchiness. So I'm pretty sure that this is not how this sort of thing is meant to be done, if it is in fact something that the Guido doth shine upon.

    The similar-but-subtly-different case of dynamically loading a module from a file that is not in Python's include path is quite easily accomplished using imp.load_source('NewModuleName', 'path/to/module/module_to_load.py'). This does load the module into sys.modules. However this doesn't really answer my question, because really, what if you're running Python on an embedded platform with no filesystem? I'm battling a considerable case of information overload at the moment, so I could be mistaken, but there doesn't seem to be anything in the imp module that's capable of this.

    But the question, essentially, at this point is how to set the global (i.e. module) context for an object. Maybe I should ask that more specifically? And at a larger scope, how to get Python to do this while shoehorning objects into a given module?
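
    A follow-up thought on setting the module context: exec-ing the code against the new module's __dict__ looks like it would at least make functions defined there treat the module as their globals. A sketch of what I mean (untested):

        import sys
        import types

        def module_from_dict(name, context, source=None):
            """Build a module whose namespace starts as `context`; if
            `source` is given, execute it in that namespace so its
            globals are the module's own, not the caller's."""
            mod = types.ModuleType(name)
            mod.__dict__.update(context)
            if source is not None:
                exec(source, mod.__dict__)  # code sees the module as its globals
            sys.modules[name] = mod
            return mod

        mod = module_from_dict(
            'TestContext', {'a': 1, 'b': 2},
            source='def total():\n    return a + b\n')
        print(mod.total())  # -> 3; total() resolved a and b in TestContext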

  • routine to generate a 2d array from two 1d arrays and a function

    - by intuited
    I'm guessing that there's a word for this concept, and that it's available in at least some popular languages, but my perfunctory search was fruitless. A pseudocode example of what I'd like to do:

        function foo(a, b) {
            return a * b  // EG
        }

        a = [ 1, 2, 3 ]
        b = [ 4, 5, 6 ]

        matrix = the_function_for_which_I_search(foo, [a, b])
        print matrix
        => [ [ 4, 8, 12], [5, 10, 15], [6, 12, 18] ]

        // or
        function concatenate(a, b) {
            return a . b
        }

        print the_function_for_which_I_search(concatenate, [a, b])
        => [ ['14', '24', '34'], ['15', '25', '35'], ['16', '26', '36'] ]

    In other words, the_function_for_which_I_search will apply the function given as its first argument to each combination of the elements of the two arrays passed as its second argument, and return the results as a two-dimensional array. I would like to know if such a routine has a common name, and if it's available in a Python module, CPAN package, Ruby gem, PEAR package, etc. I'm also wondering if this is a core function in other languages, maybe Haskell or R?
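
    Since posting this, the closest name I've come across is the "outer product" generalized over an arbitrary function: R has outer(X, Y, f), and numpy spells it as a ufunc's .outer method. The pure-Python version matching the row/column order in my examples would be something like:

        import numpy as np

        def table(f, a, b):
            """Apply f to each combination of elements from a and b;
            result[i][j] == f(a[j], b[i]), matching the example output."""
            return [[f(x, y) for x in a] for y in b]

        a = [1, 2, 3]
        b = [4, 5, 6]
        print(table(lambda x, y: x * y, a, b))
        # [[4, 8, 12], [5, 10, 15], [6, 12, 18]]
        print(table(lambda x, y: '%d%d' % (x, y), a, b))
        # [['14', '24', '34'], ['15', '25', '35'], ['16', '26', '36']]

        # numpy's version of the same idea: a ufunc's .outer method
        print(np.multiply.outer(b, a))
        # [[ 4  8 12]
        #  [ 5 10 15]
        #  [ 6 12 18]]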

  • Generating a set of files containing dumps of individual tables in a way that guarantees database consistency

    - by intuited
    I'd like to dump a MySQL database in such a way that a file is created for the definition of each table, and another file is created for the data in each table. I'd like this to be done in a way that guarantees database integrity by locking the entire database for the duration of the dump. What is the best way to do this? Similarly, what's the best way to lock the database while restoring a set of these dump files?

    Edit: I can't assume that mysql will have permission to write to files.
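
    To make the question concrete, here's the shape of what I've been considering: hold FLUSH TABLES WITH READ LOCK open in one client session while a separate process runs mysqldump twice per table (untested sketch; assumes credentials come from ~/.my.cnf, and the database name is a placeholder). Since mysqldump writes on the client side, this would also sidestep the no-FILE-privilege constraint:

        import subprocess

        DB = 'mydb'  # hypothetical database name

        # Hold a global read lock in one session for the whole run;
        # the lock lasts as long as this mysql process stays alive.
        locker = subprocess.Popen(['mysql', DB], stdin=subprocess.PIPE,
                                  stdout=subprocess.PIPE)
        locker.stdin.write(b'FLUSH TABLES WITH READ LOCK;\n')
        locker.stdin.flush()

        tables = subprocess.check_output(
            ['mysql', '-N', '-e', 'SHOW TABLES', DB]).decode().split()

        for table in tables:
            with open('%s.schema.sql' % table, 'wb') as f:
                subprocess.check_call(
                    ['mysqldump', '--no-data', DB, table], stdout=f)
            with open('%s.data.sql' % table, 'wb') as f:
                subprocess.check_call(
                    ['mysqldump', '--no-create-info', DB, table], stdout=f)

        locker.stdin.write(b'UNLOCK TABLES;\n')
        locker.stdin.close()
        locker.wait()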

  • Problems installing a package from PyPI: root files not installed

    - by intuited
    After installing the BitTorrent-bencode package, either via easy_install BitTorrent-bencode or pip install BitTorrent-bencode, or by downloading the tarball and installing that via easy_install $tarball, I discover that /usr/local/lib/python2.6/dist-packages/BitTorrent_bencode-5.0.8-py2.6.egg/ contains EGG-INFO/ and test/ directories. Although both of these subdirectories contain files, there are no files in the BitTorr* directory itself. The tarball does contain bencode.py, which is meant to be the actual source for this package, but it's not installed by either of those utils. I'm pretty new to all of this so I'm not sure if this is a problem with the package or with what I'm doing. The package was packaged a while ago (2007), so perhaps it's using some deprecated configuration aspect that I need to supply a command-line flag for. I'm more interested in learning what's wrong with either the package or my procedures than in getting this particular package installed; there is another package called hunnyb that seems to do a decent enough job of decoding bencoded data. Mostly I'd like to know how to deal with such problems in other packages.
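
    One guess I'm entertaining: since bencode.py is a single top-level module rather than a package, the package's setup.py may simply never declare it. If so, I'd expect a line like the py_modules one below to fix it (pure speculation on my part; I haven't inspected the actual setup.py):

        from setuptools import setup

        setup(
            name='BitTorrent-bencode',
            version='5.0.8',
            # the crucial line: without py_modules (or packages=...),
            # setuptools installs no top-level code at all, even though
            # it still generates the EGG-INFO metadata
            py_modules=['bencode'],
        )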

  • Implementing prototypes OR instantiating class objects

    - by intuited
    I'm wondering how to implement prototypal inheritance in Python. It seems like the way to do this would be to either use a metaclass to cause instantiations to actually be classes, rather than objects, or use some magical powers to transform an existing object into a class. The second method would seem to be more flexible, in that it could be applied to existing objects of varied types, while the first would likely be more convenient for typical use cases. Insights on the practicality of these two approaches, as well as alternative suggestions, are hereby requested.
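
    For comparison, the prototype semantics themselves can be had without either trick, by routing attribute misses through a parent link; a minimal sketch (untested):

        class Proto(object):
            """Prototype-style object: attribute lookups that miss
            locally are delegated up the prototype chain."""

            def __init__(self, proto=None, **slots):
                self.proto = proto
                self.__dict__.update(slots)

            def __getattr__(self, name):
                # only called when normal attribute lookup fails
                if self.proto is not None:
                    return getattr(self.proto, name)
                raise AttributeError(name)

            def clone(self, **slots):
                """Make a new object with self as its prototype."""
                return type(self)(proto=self, **slots)

        bird = Proto(flight_speed=691)
        albatross = bird.clone(nesting_grounds="Throatwarbler Man Grove")
        print(albatross.flight_speed)  # 691, found via the prototype
        albatross.flight_speed = 1000  # shadows the prototype's value
        print(bird.flight_speed)       # still 691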

  • Getting a list of all children of a given commit

    - by intuited
    I'd like to run git filter-branch on all children of a given commit. This doesn't seem to be an easy task, since there doesn't appear to be a way to tell git rev-list to only return children of a particular commit. Using the .. syntax won't work because it will also include the parent commits of any merge within that range. Am I missing something here?
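
    The closest thing I've turned up is rev-list's --ancestry-path option (in newer versions of git), which restricts A..B to commits that are actually descendants of A, excluding histories merged in from elsewhere within the range; e.g., wrapped in Python:

        import subprocess

        base = 'abc123'  # hypothetical: the commit whose descendants we want

        # --ancestry-path keeps only commits on a descent path from `base`
        children = subprocess.check_output(
            ['git', 'rev-list', '--ancestry-path', base + '..HEAD']
        ).decode().split()
        print(len(children), 'descendant commits')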

  • building a hash lookup table during `git filter-branch` or `git-rebase`

    - by intuited
    I've been using the SHA1 hashes of my commits as references in documentation, etc. I've realized that if I need to rewrite those commits, I'll need to create a lookup table mapping the hashes in the original repo to the hashes in the filtered repo. Since these are effectively UUIDs, a simple lookup table would do. I think that it's relatively straightforward to write a script to do this during a filter-branch run; that's not really my question, though if there are some gotchas that make it complicated, I'd certainly like to hear about them. I'm really wondering if there are any tools that provide this functionality, or if there is some sort of convention on where to keep the lookup table and what to call it? I'd prefer not to do things in a completely idiosyncratic way.
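
    Failing an existing tool, the fallback I've sketched is to build the table after the rewrite by zipping the refs that filter-branch saves under refs/original/ against the rewritten branch; this assumes the filter neither added nor dropped commits, so the two rev-lists pair up one-to-one (untested, and commit-map.txt is just a name I made up):

        import subprocess

        def rev_list(ref):
            out = subprocess.check_output(['git', 'rev-list', '--reverse', ref])
            return out.decode().split()

        # filter-branch saves the pre-rewrite ref under refs/original/
        old = rev_list('refs/original/refs/heads/master')
        new = rev_list('refs/heads/master')
        assert len(old) == len(new), "filter added/dropped commits; can't zip"

        with open('commit-map.txt', 'w') as f:
            for old_sha, new_sha in zip(old, new):
                f.write('%s %s\n' % (old_sha, new_sha))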
