Search Results

Search found 95 results on 4 pages for 'iterable'.

Page 2/4 | < Previous Page | 1 2 3 4  | Next Page >

  • Destructuring assignment problem

    - by Eli Grey
    Why does for ([] in iterable); work fine but [void 0 for ([] in iterable)] throw a syntax error for invalid left-hand assignment? For example, I would expect the following code to work, but it doesn't (the assertion isn't even done due to the syntax error): let (i = 0, iterable = (i for (i in [1, 2, 3, 4]))) { for ([] in iterable) i++; console.assertNotGreater([void 0 for ([] in iterable)].length, i); }

    Read the article

  • In Scala 2.8 collections, why was the Traversable type added above Iterable?

    - by Seth Tisue
    I know that to be Traversable, you need only have a foreach method. Iterable requires an iterator method. Both the Scala 2.8 collections SID and the "Fighting Bitrot with Types" paper are basically silent on the subject of why Traversable was added. The SID only says "David McIver... proposed Traversable as a generalization of Iterable." I have vaguely gathered from discussions on IRC that it has to do with reclaiming resources when traversal of a collection terminates? The following is probably related to my question. There are some odd-looking function definitions in TraversableLike.scala, for example: def isEmpty: Boolean = { var result = true breakable { for (x <- this) { result = false break } } result } I assume there's a good reason that wasn't just written as: def isEmpty: Boolean = { for (x <- this) return false true }

    Read the article

  • In Python, is there a way to call a method on every item of an iterable? [closed]

    - by Thane Brimhall
    Possible Duplicate: Is there a map without result in python? I often come to a situation in my programs when I want to quickly/efficiently call an in-place method on each of the items contained by an iterable. (Quickly meaning the overhead of a for loop is unacceptable). A good example would be a list of sprites when I want to call draw() on each of the Sprite objects. I know I can do something like this: [sprite.draw() for sprite in sprite_list] But I feel like the list comprehension is misused since I'm not using the returned list. The same goes for the map function. Stone me for premature optimization, but I also don't want the overhead of the return value. What I want to know is if there's a method in Python that lets me do what I just explained, perhaps like the hypothetical function I suggest below: do_all(sprite_list, draw)
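
    A minimal sketch of the kind of helper being asked about (the name do_all is hypothetical, not a standard library function), plus the deque trick sometimes used to consume an iterator without keeping its results:

      from collections import deque

      def do_all(iterable, method_name, *args, **kwargs):
          # Call the named method on every item, discarding return values.
          for item in iterable:
              getattr(item, method_name)(*args, **kwargs)

      # Usage (assuming sprite_list holds objects with a draw() method):
      # do_all(sprite_list, 'draw')
      # Alternative that consumes a generator without building a list:
      # deque((sprite.draw() for sprite in sprite_list), maxlen=0)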

    Read the article

  • Wildcards vs. generic methods

    - by FredOverflow
    Is there any practical difference between the following approaches to print all elements in a range? public static void printA(Iterable<?> range) { for (Object o : range) { System.out.println(o); } } public static <T> void printB(Iterable<T> range) { for (T x : range) { System.out.println(x); } } Apparently, printB involves an additional checked cast to Object (see line 16), which seems rather stupid to me -- isn't everything an Object anyway? public static void printA(java.lang.Iterable); Code: 0: aload_0 1: invokeinterface #18, 1; //InterfaceMethod java/lang/Iterable.iterator:()Ljava/util/Iterator; 6: astore_2 7: goto 24 10: aload_2 11: invokeinterface #24, 1; //InterfaceMethod java/util/Iterator.next:()Ljava/lang/Object; 16: astore_1 17: getstatic #30; //Field java/lang/System.out:Ljava/io/PrintStream; 20: aload_1 21: invokevirtual #36; //Method java/io/PrintStream.println:(Ljava/lang/Object;)V 24: aload_2 25: invokeinterface #42, 1; //InterfaceMethod java/util/Iterator.hasNext:()Z 30: ifne 10 33: return public static void printB(java.lang.Iterable); Code: 0: aload_0 1: invokeinterface #18, 1; //InterfaceMethod java/lang/Iterable.iterator:()Ljava/util/Iterator; 6: astore_2 7: goto 27 10: aload_2 11: invokeinterface #24, 1; //InterfaceMethod java/util/Iterator.next:()Ljava/lang/Object; 16: checkcast #3; //class java/lang/Object 19: astore_1 20: getstatic #30; //Field java/lang/System.out:Ljava/io/PrintStream; 23: aload_1 24: invokevirtual #36; //Method java/io/PrintStream.println:(Ljava/lang/Object;)V 27: aload_2 28: invokeinterface #42, 1; //InterfaceMethod java/util/Iterator.hasNext:()Z 33: ifne 10 36: return

    Read the article

  • Convert sqlalchemy row object to python dict

    - by Anurag Uniyal
    or a simple way to iterate over columnName, value pairs? My version of sqlalchemy is 0.5.6 Here is the sample code where I tried using dict(row), but it throws exception , TypeError: 'User' object is not iterable import sqlalchemy from sqlalchemy import * from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import sessionmaker print "sqlalchemy version:",sqlalchemy.__version__ engine = create_engine('sqlite:///:memory:', echo=False) metadata = MetaData() users_table = Table('users', metadata, Column('id', Integer, primary_key=True), Column('name', String), ) metadata.create_all(engine) class User(declarative_base()): __tablename__ = 'users' id = Column(Integer, primary_key=True) name = Column(String) def __init__(self, name): self.name = name Session = sessionmaker(bind=engine) session = Session() user1 = User("anurag") session.add(user1) session.commit() # uncommenting next line throws exception 'TypeError: 'User' object is not iterable' #print dict(user1) # this one also throws 'TypeError: 'User' object is not iterable' for u in session.query(User).all(): print dict(u) Running this code on my system outputs: sqlalchemy version: 0.5.6 Traceback (most recent call last): File "untitled-1.py", line 37, in <module> print dict(u) TypeError: 'User' object is not iterable
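
    One hedged sketch of a converter (assuming a declarative class whose attribute names match its column names, as User above does; this is just one way to approach it):

      def row_to_dict(obj):
          # Read the column names from the mapped table and pull the
          # matching attributes off the instance.
          return dict((col.name, getattr(obj, col.name))
                      for col in obj.__table__.columns)

      # for u in session.query(User).all():
      #     print row_to_dict(u)   # e.g. {'id': 1, 'name': u'anurag'}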

    Read the article

  • how do I call a polymorphic function from an agnostic function?

    - by sds
    I have a method foo: void foo (String x) { ... } void foo (Integer x) { ... } and I want to call it from a method which does not care about the argument: void bar (Iterable i) { ... for (Object x : i) foo(x); // this is the only time i is used ... } The code above complains that foo(Object) is not defined, and when I add void foo (Object x) { throw new Exception(); } then bar(Iterable<String>) calls that instead of foo(String) and throws the exception. How do I avoid having two textually identical definitions of bar(Iterable<String>) and bar(Iterable<Integer>)? I thought I would be able to get away with something like <T> void bar (Iterable<T> i) { ... for (T x : i) foo(x); // this is the only time i is used ... } but then I get a "cannot find foo(T)" error.

    Read the article

  • Chunking a List - .NET vs Python

    - by Abhijeet Patel
    Chunking a List As I mentioned last time, I'm knee deep in python these days. I come from a statically typed background so it's definitely a mental adjustment. List comprehensions is BIG in Python and having worked with a few of them I can see why. Let's say we need to chunk a list into sublists of a specified size. Here is how we'd do it in C#  static class Extensions   {       public static IEnumerable<List<T>> Chunk<T>(this List<T> l, int chunkSize)       {           if (chunkSize <0)           {               throw new ArgumentException("chunkSize cannot be negative", "chunkSize");           }           for (int i = 0; i < l.Count; i += chunkSize)           {               yield return new List<T>(l.Skip(i).Take(chunkSize));           }       }    }    static void Main(string[] args)  {           var l = new List<string> { "a", "b", "c", "d", "e", "f","g" };             foreach (var list in l.Chunk(7))           {               string str = list.Aggregate((s1, s2) => s1 + "," + s2);               Console.WriteLine(str);           }   }   A little wordy but still pretty concise thanks to LINQ.We skip the iteration number plus chunkSize elements and yield out a new List of chunkSize elements on each iteration. The python implementation is a bit more terse. def chunkIterable(iter, chunkSize):      '''Chunks an iterable         object into a list of the specified chunkSize     '''        assert hasattr(iter, "__iter__"), "iter is not an iterable"      for i in xrange(0, len(iter), chunkSize):          yield iter[i:i + chunkSize]    if __name__ == '__main__':      l = ['a', 'b', 'c', 'd', 'e', 'f']      generator = chunkIterable(l,2)      try:          while(1):              print generator.next()      except StopIteration:          pass   xrange generates elements in the specified range taking in a seed and returning a generator. which can be used in a for loop(much like using a C# iterator in a foreach loop) Since chunkIterable has a yield statement, it turns this method into a generator as well. iter[i:i + chunkSize] essentially slices the list based on the current iteration index and chunksize and creates a new list that we yield out to the caller one at a time. A generator much like an iterator is a state machine and each subsequent call to it remembers the state at which the last call left off and resumes execution from that point. The caveat to keep in mind is that since variables are not explicitly typed we need to ensure that the object passed in is iterable using hasattr(iter, "__iter__").This way we can perform chunking on any object which is an "iterable", very similar to accepting an IEnumerable in the .NET land
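
    As a side note, the slicing version above assumes the argument supports len() and indexing, so it really chunks sequences rather than arbitrary iterables; a rough sketch that chunks anything iterable (generators included) could lean on itertools.islice:

      from itertools import islice

      def chunk_any_iterable(iterable, chunk_size):
          # Never calls len() or slices, so generators work too.
          it = iter(iterable)
          while True:
              chunk = list(islice(it, chunk_size))
              if not chunk:
                  break
              yield chunk

      # list(chunk_any_iterable(iter('abcdefg'), 2)) ->
      # [['a', 'b'], ['c', 'd'], ['e', 'f'], ['g']]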

    Read the article

  • Abort early in a fold

    - by Heptic
    What's the best way to terminate a fold early? As a simplified example, imagine I want to sum up the numbers in an Iterable, but if I encounter something I'm not expecting (say an odd number) I might want to terminate. This is a first approximation def sumEvenNumbers(nums: Iterable[Int]): Option[Int] = { nums.foldLeft (Some(0): Option[Int]) { case (None, _) => None case (Some(s), n) if n % 2 == 0 => Some(s + n) case (Some(_), _) => None } } However, this solution is pretty ugly (as in, if I did a .foreach and a return -- it'd be much cleaner and clearer) and worst of all, it traverses the entire iterable even if it encounters a non-even number. So what would be the best way to write a fold like this, that terminates early? Should I just go and write this recursively, or is there a more accepted way?

    Read the article

  • Solving embarrassingly parallel problems using Python multiprocessing

    - by gotgenes
    How does one use multiprocessing to tackle embarrassingly parallel problems? Embarassingly parallel problems typically consist of three basic parts: Read input data (from a file, database, tcp connection, etc.). Run calculations on the input data, where each calculation is independent of any other calculation. Write results of calculations (to a file, database, tcp connection, etc.). We can parallelize the program in two dimensions: Part 2 can run on multiple cores, since each calculation is independent; order of processing doesn't matter. Each part can run independently. Part 1 can place data on an input queue, part 2 can pull data off the input queue and put results onto an output queue, and part 3 can pull results off the output queue and write them out. This seems a most basic pattern in concurrent programming, but I am still lost in trying to solve it, so let's write a canonical example to illustrate how this is done using multiprocessing. Here is the example problem: Given a CSV file with rows of integers as input, compute their sums. Separate the problem into three parts, which can all run in parallel: Process the input file into raw data (lists/iterables of integers) Calculate the sums of the data, in parallel Output the sums Below is traditional, single-process bound Python program which solves these three tasks: #!/usr/bin/env python # -*- coding: UTF-8 -*- # basicsums.py """A program that reads integer values from a CSV file and writes out their sums to another CSV file. """ import csv import optparse import sys def make_cli_parser(): """Make the command line interface parser.""" usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__, """ ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums\ """]) cli_parser = optparse.OptionParser(usage) return cli_parser def parse_input_csv(csvfile): """Parses the input CSV and yields tuples with the index of the row as the first element, and the integers of the row as the second element. The index is zero-index based. :Parameters: - `csvfile`: a `csv.reader` instance """ for i, row in enumerate(csvfile): row = [int(entry) for entry in row] yield i, row def sum_rows(rows): """Yields a tuple with the index of each input list of integers as the first element, and the sum of the list of integers as the second element. The index is zero-index based. :Parameters: - `rows`: an iterable of tuples, with the index of the original row as the first element, and a list of integers as the second element """ for i, row in rows: yield i, sum(row) def write_results(csvfile, results): """Writes a series of results to an outfile, where the first column is the index of the original row of data, and the second column is the result of the calculation. The index is zero-index based. 
:Parameters: - `csvfile`: a `csv.writer` instance to which to write results - `results`: an iterable of tuples, with the index (zero-based) of the original row as the first element, and the calculated result from that row as the second element """ for result_row in results: csvfile.writerow(result_row) def main(argv): cli_parser = make_cli_parser() opts, args = cli_parser.parse_args(argv) if len(args) != 2: cli_parser.error("Please provide an input file and output file.") infile = open(args[0]) in_csvfile = csv.reader(infile) outfile = open(args[1], 'w') out_csvfile = csv.writer(outfile) # gets an iterable of rows that's not yet evaluated input_rows = parse_input_csv(in_csvfile) # sends the rows iterable to sum_rows() for results iterable, but # still not evaluated result_rows = sum_rows(input_rows) # finally evaluation takes place as a chain in write_results() write_results(out_csvfile, result_rows) infile.close() outfile.close() if __name__ == '__main__': main(sys.argv[1:]) Let's take this program and rewrite it to use multiprocessing to parallelize the three parts outlined above. Below is a skeleton of this new, parallelized program, that needs to be fleshed out to address the parts in the comments: #!/usr/bin/env python # -*- coding: UTF-8 -*- # multiproc_sums.py """A program that reads integer values from a CSV file and writes out their sums to another CSV file, using multiple processes if desired. """ import csv import multiprocessing import optparse import sys NUM_PROCS = multiprocessing.cpu_count() def make_cli_parser(): """Make the command line interface parser.""" usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__, """ ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums\ """]) cli_parser = optparse.OptionParser(usage) cli_parser.add_option('-n', '--numprocs', type='int', default=NUM_PROCS, help="Number of processes to launch [DEFAULT: %default]") return cli_parser def main(argv): cli_parser = make_cli_parser() opts, args = cli_parser.parse_args(argv) if len(args) != 2: cli_parser.error("Please provide an input file and output file.") infile = open(args[0]) in_csvfile = csv.reader(infile) outfile = open(args[1], 'w') out_csvfile = csv.writer(outfile) # Parse the input file and add the parsed data to a queue for # processing, possibly chunking to decrease communication between # processes. # Process the parsed data as soon as any (chunks) appear on the # queue, using as many processes as allotted by the user # (opts.numprocs); place results on a queue for output. # # Terminate processes when the parser stops putting data in the # input queue. # Write the results to disk as soon as they appear on the output # queue. # Ensure all child processes have terminated. # Clean up files. infile.close() outfile.close() if __name__ == '__main__': main(sys.argv[1:]) These pieces of code, as well as another piece of code that can generate example CSV files for testing purposes, can be found on github. I would appreciate any insight here as to how you concurrency gurus would approach this problem. Here are some questions I had when thinking about this problem. Bonus points for addressing any/all: Should I have child processes for reading in the data and placing it into the queue, or can the main process do this without blocking until all input is read? Likewise, should I have a child process for writing the results out from the processed queue, or can the main process do this without having to wait for all the results? 
Should I use a processes pool for the sum operations? If yes, what method do I call on the pool to get it to start processing the results coming into the input queue, without blocking the input and output processes, too? apply_async()? map_async()? imap()? imap_unordered()? Suppose we didn't need to siphon off the input and output queues as data entered them, but could wait until all input was parsed and all results were calculated (e.g., because we know all the input and output will fit in system memory). Should we change the algorithm in any way (e.g., not run any processes concurrently with I/O)?
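
    Not an answer to every sub-question, but a minimal sketch of one common shape for part 2: a Pool fed by the parsing generator, with imap() keeping results flowing to the writer. The chunksize and helper names here are illustrative assumptions, not the author's code:

      import multiprocessing

      def sum_row(indexed_row):
          # Worker: each calculation is independent of the others.
          i, row = indexed_row
          return i, sum(row)

      def parallel_sums(rows, num_procs):
          # rows is the lazy iterable from parse_input_csv(); imap() pulls
          # from it, farms work out to the pool, and yields results lazily
          # and in order, so reading, summing and writing can overlap.
          pool = multiprocessing.Pool(processes=num_procs)
          try:
              for result in pool.imap(sum_row, rows, chunksize=64):
                  yield result
          finally:
              pool.close()
              pool.join()

      # write_results(out_csvfile, parallel_sums(input_rows, opts.numprocs))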

    Read the article

  • algorithm for python itertools.permutations

    - by zaharpopov
    Can someone please explain the algorithm behind the itertools.permutations routine in the Python 2.6 standard library? I see its code in the documentation but don't understand why it works. Thanks. The code is: def permutations(iterable, r=None): # permutations('ABCD', 2) --> AB AC AD BA BC BD CA CB CD DA DB DC # permutations(range(3)) --> 012 021 102 120 201 210 pool = tuple(iterable) n = len(pool) r = n if r is None else r if r > n: return indices = range(n) cycles = range(n, n-r, -1) yield tuple(pool[i] for i in indices[:r]) while n: for i in reversed(range(r)): cycles[i] -= 1 if cycles[i] == 0: indices[i:] = indices[i+1:] + indices[i:i+1] cycles[i] = n - i else: j = cycles[i] indices[i], indices[-j] = indices[-j], indices[i] yield tuple(pool[i] for i in indices[:r]) break else: return
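
    For intuition, here is a slower, purely illustrative recursive reformulation; since itertools.permutations treats elements positionally, this should produce the same output order, which can help when picking apart the indices/cycles bookkeeping above:

      def permutations_recursive(iterable, r=None):
          # Pick each remaining element in turn as the next item,
          # then permute what is left of the pool.
          pool = tuple(iterable)
          r = len(pool) if r is None else r
          if r > len(pool):
              return
          if r == 0:
              yield ()
              return
          for i in range(len(pool)):
              rest = pool[:i] + pool[i+1:]
              for tail in permutations_recursive(rest, r - 1):
                  yield (pool[i],) + tail

      # Joining the tuples from permutations_recursive('ABCD', 2) gives
      # AB AC AD BA BC BD CA CB CD DA DB DC, as in the docstring above.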

    Read the article

  • Convert args to flat list?

    - by Mark
    I know this is very similar to a few other questions, but I can't quite get this function to work correctly. def flatten(*args): return list(item for iterable in args for item in iterable) The output I'm looking for is: flatten(1) -> [1] flatten(1,[2]) -> [1, 2] flatten([1,[2]]) -> [1, 2] The current function, which I took from another SO answer, doesn't seem to produce correct results at all: >>> flatten([1,[2]]) [1, [2]] I wrote the following function, which seems to work for 0 or 1 levels of nesting, but not deeper: def flatten(*args): output = [] for arg in args: if hasattr(arg, '__iter__'): output += arg else: output += [arg] return output
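
    A hedged sketch of a recursive variant of that last function, which handles deeper nesting by descending into anything with __iter__ (so, on Python 2, strings are left alone as atoms):

      def flatten(*args):
          output = []
          for arg in args:
              if hasattr(arg, '__iter__'):
                  # Recurse into nested iterables instead of splicing
                  # them in only one level deep.
                  output.extend(flatten(*arg))
              else:
                  output.append(arg)
          return output

      # flatten(1)        -> [1]
      # flatten(1, [2])   -> [1, 2]
      # flatten([1, [2]]) -> [1, 2]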

    Read the article

  • Library for Dataflow in C

    - by msutherl
    How can I do dataflow (pipes and filters, stream processing, flow based) in C? And not with UNIX pipes. I recently came across stream.py. Streams are iterables with a pipelining mechanism to enable data-flow programming and easy parallelization. The idea is to take the output of a function that turns an iterable into another iterable and plug that as the input of another such function. While you can already do this using function composition, this package provides an elegant notation for it by overloading the operator. I would like to duplicate a simple version of this kind of functionality in C. I particularly like the overloading of the operator to avoid function composition mess. Wikipedia points to this hint from a Usenet post in 1990. Why C? Because I would like to be able to do this on microcontrollers and in C extensions for other high level languages (Max, Pd*, Python). * (ironic given that Max and Pd were written, in C, specifically for this purpose – I'm looking for something barebones)

    Read the article

  • Scala: reference is ambiguous (imported twice)

    - by tk
    I want to use a method as a parameter of another method of the same class. I have a class and an object which are companions: class mM(var elem:Matrix){ //apply a function on a dimension rows (1) or cols (2) def app(func:Iterable[Double]=>Double)(dim : Int) : Matrix = { ... } //utility function def logsumexp(): Double = {...} } object mM{ def apply(elem:Matrix):mM={new mM(elem)} def logsumexp(elem:Iterable[Double]): Double ={ this.apply(elem.asInstanceOf[Matrix]).logsumexp() } } Normally I use logsumexp like this: mM(matrix).logsumexp, but if I want to apply it to the rows I can't use mM(matrix).app(mM.logsumexp)(1); I get the error: error: reference to mM is ambiguous; it is imported twice in the same scope by import mM and import mM What is the most elegant solution? Should I move logsumexp() to another class? Thanks, =)

    Read the article

  • Why does Python's 'for ... in' work differently on a list of values vs. a list of dictionaries?

    - by Code Duck
    I'm wondering about some details of how for ... in works in Python. My understanding is that for var in iterable creates, on each iteration, a variable var bound to the current value from iterable; so in for c in cows, each pass effectively does c = cows[whatever], and changing c within the loop does not affect the original values. However, it seems to work differently if you're assigning a value to a dictionary key. cows=[0,1,2,3,4,5] for c in cows: c+=2 #cows is now the same - [0,1,2,3,4,5] cows=[{'cow':0},{'cow':1},{'cow':2},{'cow':3},{'cow':4},{'cow':5}] for c in cows: c['cow']+=2 # cows is now [{'cow': 2}, {'cow': 3}, {'cow': 4}, {'cow': 5}, {'cow': 6}, {'cow': 7}] #so, it's changed the original, unlike the previous example I see one can use enumerate to make the first example work, too, but that's a different story, I guess. cows=[0,1,2,3,4,5] for i,c in enumerate(cows): cows[i]+=1 # cows is now [1, 2, 3, 4, 5, 6] Why does it affect the original list values in the second example but not the first?
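
    A short way to see what is going on: in both loops c is bound to the very object stored in the list; the difference is that c += 2 rebinds the name c to a new int, while c['cow'] += 2 mutates the shared dict in place. A small check (Python 2 syntax):

      nums = [0, 1, 2]
      for i, c in enumerate(nums):
          print c is nums[i]   # True: c and the list slot refer to the same int
          c += 2               # ints are immutable, so this only rebinds c
      print nums               # [0, 1, 2]

      dicts = [{'n': 0}, {'n': 1}]
      for i, c in enumerate(dicts):
          print c is dicts[i]  # True here as well
          c['n'] += 2          # dicts are mutable, so the shared object changes
      print dicts              # [{'n': 2}, {'n': 3}]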

    Read the article

  • Python list recursion type error

    - by Jacob J Callahan
    I can't seem to figure out why the following code is giving me a TypeError: 'type' object is not iterable pastebin: http://pastebin.com/VFZYY4v0 def genList(self): #recursively generates a sorted list of child node values numList = [] if self.leftChild != 'none': numList.extend(self.leftChild.genList()) #error numList.extend(list((self.Value,))) if self.rightChild != 'none': numList.extend(self.rightChild.genList()) #error return numList code that adds child nodes (works correctly) def addChild(self, child): #add a child node. working if child.Value < self.Value: if self.leftChild == 'none': self.leftChild = child child.parent = self else: self.leftChild.addChild(child) elif child.Value > self.Value: if self.rightChild == 'none': self.rightChild = child child.parent = self else: self.rightChild.addChild(child) Any help would be appreciated. Full interpreter session: >>> import BinTreeNode as BTN >>> node1 = BTN.BinaryTreeNode(5) >>> node2 = BTN.BinaryTreeNode(2) >>> node3 = BTN.BinaryTreeNode(12) >>> node3 = BTN.BinaryTreeNode(16) >>> node4 = BTN.BinaryTreeNode(4) >>> node5 = BTN.BinaryTreeNode(13) >>> node1.addChild(node2) >>> node1.addChild(node3) >>> node1.addChild(node4) >>> node1.addChild(node5) >>> node4.genList() <class 'list'> >>> node1.genList() Traceback (most recent call last): File "<interactive input>", line 1, in <module> File "C:...\python\BinTreeNode.py", line 47, in genList numList.extend(self.leftChild.genList()) #error File "C:...\python\BinTreeNode.py", line 52, in genList TypeError: 'type' object is not iterable
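
    It is hard to be sure without the full pastebin, but the fact that node4.genList() prints <class 'list'> suggests some code path hands back (or stores as a child) the built-in list type itself rather than an actual list, which extend() then cannot iterate. Purely as a hedged sketch, the same in-order traversal written with None as the no-child sentinel:

      def genList(self):
          # In-order traversal: left subtree, own value, right subtree.
          numList = []
          if self.leftChild is not None:
              numList.extend(self.leftChild.genList())
          numList.append(self.Value)
          if self.rightChild is not None:
              numList.extend(self.rightChild.genList())
          return numList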

    Read the article

  • Partials vs for loop — best practices

    - by Mike
    In coding up your view templates, you can render a partial and pass an array of objects to be rendered once per object, or you can use a for blank in @blank loop. How do you decide when to do which? It seems that if you use a partial for every iterable object, you will end up having to modify tons of separate files to make changes to potentially one view. With the loops you can see everything right there in one file.

    Read the article

  • More advanced usage of interfaces

    - by owca
    To be honest I'm not quite sure if I understand the task myself :) I was told to create class MySimpleIt, that implements Iterator and Iterable and will allow to run the provided test code. Arguments and variables of objects cannot be either Collections or arrays. The code : MySimpleIt msi=new MySimple(10,100, MySimpleIt.PRIME_NUMBERS); for(int el: msi) System.out.print(el+" "); System.out.println(); msi.setType(MySimpleIterator.ODD_NUMBERS); msi.setLimits(15,30); for(int el: msi) System.out.print(el+" "); System.out.println(); msi.setType(MySimpleIterator.EVEN_NUMBERS); for(int el: msi) System.out.print(el+" "); System.out.println(); The result I should obtain : 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 15 17 19 21 23 25 27 29 16 18 20 22 24 26 28 30 And here's my code : import java.util.Iterator; interface MySimpleIterator{ static int ODD_NUMBERS=0; static int EVEN_NUMBERS = 1; static int PRIME_NUMBERS = 2; int setType(int i); } public class MySimpleIt implements Iterable, Iterator, MySimpleIterator { public MySimple my; public MySimpleIt(MySimple m){ my = m; } public int setType(int i){ my.numbers = i; return my.numbers; } public void setLimits(int d, int u){ my.down = d; my.up = u; } public Iterator iterator(){ Iterator it = this.iterator(); return it; } public void remove(){ } public Object next(){ Object o = new Object(); return o; } public boolean hasNext(){ return true; } } class MySimple { public int down; public int up; public int numbers; public MySimple(int d, int u, int n){ down = d; up = u; numbers = n; } } In the test code I have error in line when creating MySimpleIt msi object, as it finds MySimple instead of MySimpleIt. Also I have errors in for-each loops, because compiler wants 'ints' there instead of Object. Anyone has any idea on how to solve it ?

    Read the article

  • What are the Ruby equivalent of Python itertools, esp. combinations/permutations/groupby?

    - by Amadeus
    Python's itertools module provides a lot of goodies with respect to processing an iterable/iterator by use of generators. For example, permutations(range(3)) --> 012 021 102 120 201 210 combinations('ABCD', 2) --> AB AC AD BC BD CD [list(g) for k, g in groupby('AAAABBBCCD')] --> AAAA BBB CC D What are the equivalents in Ruby? By equivalent, I mean fast and memory efficient (Python's itertools module is written in C).

    Read the article

  • Iterating over key and value of defaultdict dictionaries

    - by gf
    The following works as expected: d = [(1,2), (3,4)] for k,v in d: print "%s - %s" % (str(k), str(v)) But this fails: d = collections.defaultdict(int) d[1] = 2 d[3] = 4 for k,v in d: print "%s - %s" % (str(k), str(v)) With: Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'int' object is not iterable Why? How can I fix it?
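
    For what the fix tends to look like: iterating a dict (default or not) yields only its keys, so the k, v unpacking tries to unpack an int; asking explicitly for key/value pairs avoids that. A small sketch (Python 2 spelling):

      import collections

      d = collections.defaultdict(int)
      d[1] = 2
      d[3] = 4

      for k, v in d.iteritems():    # d.items() also works, and is the
          print "%s - %s" % (k, v)  # spelling to use on Python 3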

    Read the article

  • Python evaluation order

    - by d.m
    Here's some code that I don't quite understand. How does it work? Could anyone tell me whether this is expected behavior? $ipython In [1]: 1 in [1] == True Out[1]: False In [2]: (1 in [1]) == True Out[2]: True In [3]: 1 in ([1] == True) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /home/dmedvinsky/projects/condo/condo/<ipython console> in <module>() TypeError: argument of type 'bool' is not iterable In [4]: from sys import version_info In [5]: version_info Out[5]: (2, 6, 4, 'final', 0)
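
    The behavior follows from comparison chaining: in and == are both comparison operators, so 1 in [1] == True is evaluated as a chain rather than left to right. A small sketch spelling that out:

      # 1 in [1] == True  is chained, roughly equivalent to:
      #     (1 in [1]) and ([1] == True)
      # which is True and False, hence False.
      print (1 in [1]) and ([1] == True)   # False, matching In [1]
      print (1 in [1]) == True             # True, matching In [2]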

    Read the article

  • Standard Interfaces

    - by Amir Rachum
    I've used Java for some time and I keep hearing about interfaces such as Cloneable, Iterable and other X-ables. I was wondering if there is a list somewhere of all of these and more importantly - which ones do you regularly use day-to-day? For example, I've read that Cloneable is considered badly written and isn't widely used.

    Read the article

  • How to pick a chunksize for python multiprocessing with large datasets

    - by Sandro
    I am attempting to use Python to gain some performance on a task that can be highly parallelized using http://docs.python.org/library/multiprocessing. Looking at the library docs, they say to use a chunksize for very long iterables. Now, my iterable is not long, but one of the dicts it contains is huge: ~100000 entries, with tuples as keys and numpy arrays for values. How would I set the chunksize to handle this, and how can I transfer this data quickly? Thank you.
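
    chunksize mainly amortizes inter-process messaging when there are many small tasks, so it probably isn't the lever here; the costly part is shipping the huge dict. One hedged sketch (hypothetical names, assuming the per-task work only needs a key) that sends the dict to each worker once via a Pool initializer instead of once per task:

      import multiprocessing

      _big_lookup = None

      def _init_worker(lookup):
          # Runs once in each worker process when the pool starts.
          global _big_lookup
          _big_lookup = lookup

      def _work(key):
          # Only the small key travels per task; the big dict is already here.
          return key, _big_lookup[key].sum()

      def run(big_dict, keys, procs):
          pool = multiprocessing.Pool(procs, initializer=_init_worker,
                                      initargs=(big_dict,))
          try:
              return pool.map(_work, keys)
          finally:
              pool.close()
              pool.join()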

    Read the article
