Search Results

Search found 338 results on 14 pages for 'numpy'.

Page 1/14 | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • append a numpy array to a numpy array

    - by Fraz
    I just started programming in Python and am very new to the numpy package, so I'm still trying to get the hang of it. I have a numpy array, something like [a b c], and I want to append it to another numpy array (just like we create a list of lists). How do we create an array of numpy arrays? I tried the following without any luck >>> M = np.array([]) >>> M array([], dtype=float64) >>> M.append(a,axis=0) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'numpy.ndarray' object has no attribute 'append' >>> a array([1, 2, 3])
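
    One common approach (a sketch, not from the original post): ndarray has no append method, so either stack existing arrays with numpy.vstack, or collect rows in a plain Python list and convert once at the end.

        import numpy as np

        a = np.array([1, 2, 3])
        b = np.array([4, 5, 6])

        # Stack equal-length 1D arrays into one 2D array (rows).
        M = np.vstack([a, b])            # array([[1, 2, 3], [4, 5, 6]])

        # Or build up a list in a loop and convert once at the end.
        rows = []
        for i in range(5):
            rows.append(np.arange(3) + i)
        M = np.array(rows)               # shape (5, 3)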

    Read the article

  • Building an interleaved buffer for pyopengl and numpy

    - by Nick Sonneveld
    I'm trying to batch up a bunch of vertices and texture coords in an interleaved array before sending it to pyOpengl's glInterleavedArrays/glDrawArrays. The only problem is that I'm unable to find a suitably fast enough way to append data into a numpy array. Is there a better way to do this? I would have thought it would be quicker to preallocate the array and then fill it with data but instead, generating a python list and converting it to a numpy array is "faster". Although 15ms for 4096 quads seems slow. I have included some example code and their timings. #!/usr/bin/python import timeit import numpy import ctypes import random USE_RANDOM=True USE_STATIC_BUFFER=True STATIC_BUFFER = numpy.empty(4096*20, dtype=numpy.float32) def render(i): # pretend these are different each time if USE_RANDOM: tex_left, tex_right, tex_top, tex_bottom = random.random(), random.random(), random.random(), random.random() left, right, top, bottom = random.random(), random.random(), random.random(), random.random() else: tex_left, tex_right, tex_top, tex_bottom = 0.0, 1.0, 1.0, 0.0 left, right, top, bottom = -1.0, 1.0, 1.0, -1.0 ibuffer = ( tex_left, tex_bottom, left, bottom, 0.0, # Lower left corner tex_right, tex_bottom, right, bottom, 0.0, # Lower right corner tex_right, tex_top, right, top, 0.0, # Upper right corner tex_left, tex_top, left, top, 0.0, # upper left ) return ibuffer # create python list.. convert to numpy array at end def create_array_1(): ibuffer = [] for x in xrange(4096): data = render(x) ibuffer += data ibuffer = numpy.array(ibuffer, dtype=numpy.float32) return ibuffer # numpy.array, placing individually by index def create_array_2(): if USE_STATIC_BUFFER: ibuffer = STATIC_BUFFER else: ibuffer = numpy.empty(4096*20, dtype=numpy.float32) index = 0 for x in xrange(4096): data = render(x) for v in data: ibuffer[index] = v index += 1 return ibuffer # using slicing def create_array_3(): if USE_STATIC_BUFFER: ibuffer = STATIC_BUFFER else: ibuffer = numpy.empty(4096*20, dtype=numpy.float32) index = 0 for x in xrange(4096): data = render(x) ibuffer[index:index+20] = data index += 20 return ibuffer # using numpy.concat on a list of ibuffers def create_array_4(): ibuffer_concat = [] for x in xrange(4096): data = render(x) # converting makes a diff! 
data = numpy.array(data, dtype=numpy.float32) ibuffer_concat.append(data) return numpy.concatenate(ibuffer_concat) # using numpy array.put def create_array_5(): if USE_STATIC_BUFFER: ibuffer = STATIC_BUFFER else: ibuffer = numpy.empty(4096*20, dtype=numpy.float32) index = 0 for x in xrange(4096): data = render(x) ibuffer.put( xrange(index, index+20), data) index += 20 return ibuffer # using ctype array CTYPES_ARRAY = ctypes.c_float*(4096*20) def create_array_6(): ibuffer = [] for x in xrange(4096): data = render(x) ibuffer += data ibuffer = CTYPES_ARRAY(*ibuffer) return ibuffer def equals(a, b): for i,v in enumerate(a): if b[i] != v: return False return True if __name__ == "__main__": number = 100 # if random, don't try and compare arrays if not USE_RANDOM and not USE_STATIC_BUFFER: a = create_array_1() assert equals( a, create_array_2() ) assert equals( a, create_array_3() ) assert equals( a, create_array_4() ) assert equals( a, create_array_5() ) assert equals( a, create_array_6() ) t = timeit.Timer( "testing2.create_array_1()", "import testing2" ) print 'from list:', t.timeit(number)/number*1000.0, 'ms' t = timeit.Timer( "testing2.create_array_2()", "import testing2" ) print 'array: indexed:', t.timeit(number)/number*1000.0, 'ms' t = timeit.Timer( "testing2.create_array_3()", "import testing2" ) print 'array: slicing:', t.timeit(number)/number*1000.0, 'ms' t = timeit.Timer( "testing2.create_array_4()", "import testing2" ) print 'array: concat:', t.timeit(number)/number*1000.0, 'ms' t = timeit.Timer( "testing2.create_array_5()", "import testing2" ) print 'array: put:', t.timeit(number)/number*1000.0, 'ms' t = timeit.Timer( "testing2.create_array_6()", "import testing2" ) print 'ctypes float array:', t.timeit(number)/number*1000.0, 'ms' Timings using random numbers: $ python testing2.py from list: 15.0486779213 ms array: indexed: 24.8184704781 ms array: slicing: 50.2214789391 ms array: concat: 44.1691994667 ms array: put: 73.5879898071 ms ctypes float array: 20.6674289703 ms edit note: changed code to produce random numbers for each render to reduce object reuse and to simulate different vertices each time. edit note2: added static buffer and force all numpy.empty() to use dtype=float32 note 1/Apr/2010: still no progress and I don't really feel that any of the answers have solved the problem yet.
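
    One possible direction (a sketch, under the assumption that the per-quad values can be generated in bulk rather than one render() call at a time): keep the question's per-corner (s, t, x, y, z) layout, but fill a preallocated float32 buffer with whole-column assignments so the per-quad Python loop disappears.

        import numpy as np

        n_quads = 4096
        buf = np.empty((n_quads, 4, 5), dtype=np.float32)   # 4 corners x (s, t, x, y, z)

        # Bulk-generate the per-quad values (stand-ins for what render() produced).
        tex_l, tex_r, tex_t, tex_b = np.random.random((4, n_quads)).astype(np.float32)
        left, right, top, bottom = np.random.random((4, n_quads)).astype(np.float32)

        # Corner order: lower-left, lower-right, upper-right, upper-left.
        buf[:, 0, 0], buf[:, 0, 1], buf[:, 0, 2], buf[:, 0, 3] = tex_l, tex_b, left, bottom
        buf[:, 1, 0], buf[:, 1, 1], buf[:, 1, 2], buf[:, 1, 3] = tex_r, tex_b, right, bottom
        buf[:, 2, 0], buf[:, 2, 1], buf[:, 2, 2], buf[:, 2, 3] = tex_r, tex_t, right, top
        buf[:, 3, 0], buf[:, 3, 1], buf[:, 3, 2], buf[:, 3, 3] = tex_l, tex_t, left, top
        buf[:, :, 4] = 0.0                                   # z is always 0.0 here

        ibuffer = buf.reshape(-1)                            # flat interleaved float32 array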

    Read the article

  • How do you construct an array suitable for numpy sorting?

    - by Alex
    I need to sort two arrays simultaneously, or rather I need to sort one of the arrays and bring the corresponding element of its associated array with it as I sort. That is, if the array is [(5, 33), (4, 44), (3, 55)] and I sort by the first axis (labeled below dtype='alpha') then I want: [(3.0, 55.0) (4.0, 44.0) (5.0, 33.0)]. These are really big data sets and I need to sort first (for n log n speed) before I do some other operations. I don't know, though, how to merge my two separate arrays in the proper manner to get the sort algorithm working. I think my problem is rather simple. I tried three different methods: import numpy x=numpy.asarray([5,4,3]) y=numpy.asarray([33,44,55]) dtype=[('alpha',float), ('beta',float)] values=numpy.array([(x),(y)]) values=numpy.rollaxis(values,1) #values = numpy.array(values, dtype=dtype) #a=numpy.array(values,dtype=dtype) #q=numpy.sort(a,order='alpha') print "Try 1:\n", values values=numpy.empty((len(x),2)) for n in range (len(x)): values[n][0]=y[n] values[n][1]=x[n] print "Try 2:\n", values #values = numpy.array(values, dtype=dtype) #a=numpy.array(values,dtype=dtype) #q=numpy.sort(a,order='alpha') ### values = [(x[0], y[0]), (x[1],y[1]) , (x[2],y[2])] print "Try 3:\n", values values = numpy.array(values, dtype=dtype) a=numpy.array(values,dtype=dtype) q=numpy.sort(a,order='alpha') print "Result:\n",q I commented out the first and second tries because they raise errors; I knew the third one would work because it mirrors what I saw in the documentation. Given the arrays x and y (which are very large, just examples shown), how do I construct the array (called values) that can be passed to numpy.sort properly? *** Zip works great, thanks. Bonus question: How can I later unzip the sorted data into two arrays again?
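
    Besides zip/structured arrays, numpy.argsort keeps the two arrays separate the whole time, which also covers the bonus question since nothing ever needs unzipping (a sketch, not from the original post):

        import numpy as np

        x = np.asarray([5, 4, 3], dtype=float)
        y = np.asarray([33, 44, 55], dtype=float)

        order = np.argsort(x)      # O(n log n) permutation that sorts x
        x_sorted = x[order]        # array([3., 4., 5.])
        y_sorted = y[order]        # array([55., 44., 33.])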

    Read the article

  • Compound assignment operators in Python's Numpy library

    - by Leonard
    The "vectorizing" of fancy indexing by Python's numpy library sometimes gives unexpected results. For example: import numpy a = numpy.zeros((1000,4), dtype='uint32') b = numpy.zeros((1000,4), dtype='uint32') i = numpy.random.random_integers(0,999,1000) j = numpy.random.random_integers(0,3,1000) a[i,j] += 1 for k in xrange(1000): b[i[k],j[k]] += 1 This gives different results in the arrays 'a' and 'b' (i.e. each index pair (i,j) contributes at most 1 to 'a' regardless of repeats, whereas repeats are counted in 'b'). This is easily verified as follows: numpy.sum(a) 883 numpy.sum(b) 1000 It is also notable that the fancy-indexing version is almost two orders of magnitude faster than the for loop. My question is: "Is there an efficient way for numpy to compute the repeat counts as implemented using the for loop in the provided example?"
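
    One vectorized way to get the loop's semantics (a sketch, not from the original post) is to histogram the flattened indices with numpy.bincount, which does count repeats; on newer numpy (1.8+), numpy.add.at(a, (i, j), 1) is another option.

        import numpy as np

        a = np.zeros((1000, 4), dtype='uint32')
        i = np.random.randint(0, 1000, 1000)
        j = np.random.randint(0, 4, 1000)

        # Convert (i, j) pairs to linear indices and count every occurrence.
        flat = i * a.shape[1] + j
        counts = np.bincount(flat, minlength=a.size)
        a += counts.reshape(a.shape).astype(a.dtype)

        print(a.sum())   # 1000, repeats included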

    Read the article

  • Ironpython call numpy problem

    - by Begtostudy
    Ironpython 2.6, python 2.6.5, numpy, SciPy import sys sys.path.append(r'D:\Python26\dll') sys.path.append(r'D:\Python26\Lib') sys.path.append(r'D:\Python26\Lib\site-packages') » import numpy Traceback (most recent call last): File "<string>", line 1, in <module> File "D:\Python26\Lib\site-packages\numpy\__init__.py", line 132, in <module> File "D:\Python26\Lib\site-packages\numpy\add_newdocs.py", line 9, in <module> File "D:\Python26\Lib\site-packages\numpy\lib\__init__.py", line 4, in <module> File "D:\Python26\Lib\site-packages\numpy\lib\type_check.py", line 8, in <module> File "D:\Python26\Lib\site-packages\numpy\core\__init__.py", line 5, in <module> ImportError: No module named multiarray What's wrong? Thanks.

    Read the article

  • Mapping functions of 2D numpy arrays

    - by perimosocordiae
    I have a function foo that takes an NxM numpy array as an argument and returns a scalar value. I have an AxNxM numpy array data, over which I'd like to map foo to give me a resultant numpy array of length A. Currently, I'm doing this: result = numpy.array([foo(x) for x in data]) It works, but it seems like I'm not taking advantage of the numpy magic (and speed). Is there a better way? I've looked at numpy.vectorize and numpy.apply_along_axis, but neither works for a function of 2D arrays. EDIT: I'm doing boosted regression on 24x24 image patches, so my AxNxM is something like 1000x24x24. What I called foo above applies a Haar-like feature to a patch (so, not terribly computationally intensive).
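
    If foo itself is built out of numpy reductions, one option (a sketch; the foo below is a made-up stand-in for the real Haar-like feature) is to apply those reductions to the whole AxNxM stack at once, reducing only over the trailing two axes:

        import numpy as np

        data = np.random.rand(1000, 24, 24)

        def foo(patch):
            # Hypothetical Haar-like feature: left-half sum minus right-half sum.
            return patch[:, :12].sum() - patch[:, 12:].sum()

        result_loop = np.array([foo(x) for x in data])

        # Same reductions applied to every patch at once.
        result_vec = (data[:, :, :12].sum(axis=2).sum(axis=1)
                      - data[:, :, 12:].sum(axis=2).sum(axis=1))

        assert np.allclose(result_loop, result_vec)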

    Read the article

  • How to set UCS2 in numpy?

    - by mindcorrosive
    I'm trying to build numpy 1.2.1 as a module for a third-party python interpreter (custom-built, py2.4 linux x86_64) so that I can make calls to numpy from within it. Let's call this one interpreter A. The thing is, the system-wide python interpreter (also py2.4, let's call it B) from the vendor is built with --enable-unicode=ucs4, while the custom one is with UCS2. Needless to say, when I try to build a module with B, I get an error when I try to import numpy in A -- it complains about undefined symbol _PyUnicodeUCS4_IsWhiteSpace. I've searched around and apparently there's no way around this but to compile a custom Python interpreter -- which I did (let's call it interpreter C), properly specifying the unicode string length (verifiable through sys.maxunicode). I managed to build numpy with C as well, surprisingly enough, but still the problem persists when I try to import it in interpreter C. Previously, when I built numpy using B, there were no problems when importing it in B, but A would complain. Perhaps there's an option when building numpy to specify the length of Unicode strings to be used, as when configuring Python builds? Or am I doing something else wrong? A few notes: Upgrading to newer versions of python and/or numpy is not an option - interpreter A will stay on this version of the grammar for the foreseeable future. Also, it is not possible to start the interpreter A in standalone mode to build numpy with it, as it needs some other libraries preloaded I know that this whole thing is a mess, but I'd appreciate any help I can get to make this work. If you need more information, please let me know, I'd be happy to oblige. Thanks to everybody for their time in advance.

    Read the article

  • Confusion between numpy, scipy, matplotlib and pylab

    - by goFrendiAsgard
    Numpy, scipy, matplotlib, and pylab are common terms among those who use Python for scientific computation. I've just learned a bit about pylab, and I'm quite confused. Whenever I want to import numpy, I can always do: import numpy as np I understand that once I do from pylab import * numpy is imported as well (with the np alias), so the second form does more than the first. There are a few things I want to ask. Is it right that pylab is just a wrapper for numpy, scipy and matplotlib? As np is the numpy alias, what are the scipy and matplotlib aliases? (As far as I know, plt is the alias of matplotlib.pyplot, but I don't know the alias for matplotlib itself.) Thanks in advance.
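
    For reference, the import aliases commonly used by convention (pylab itself is essentially matplotlib.pyplot plus most of numpy dumped into one namespace, convenient interactively but usually avoided in scripts):

        import numpy as np
        import scipy as sp
        import matplotlib as mpl
        import matplotlib.pyplot as plt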

    Read the article

  • Python/numpy tricky slicing problem

    - by daver
    Hi stack overflow, I have a problem with some numpy stuff. I need a numpy array to behave in an unusual manner by returning a slice as a view of the data I have sliced, not a copy. So heres an example of what I want to do: Say we have a simple array like this: a = array([1, 0, 0, 0]) I would like to update consecutive entries in the array (moving left to right) with the previous entry from the array, using syntax like this: a[1:] = a[0:3] This would get the following result: a = array([1, 1, 1, 1]) Or something like this: a[1:] = 2*a[:3] # a = [1,2,4,8] To illustrate further I want the following kind of behaviour: for i in range(len(a)): if i == 0 or i+1 == len(a): continue a[i+1] = a[i] Except I want the speed of numpy. The default behavior of numpy is to take a copy of the slice, so what I actually get is this: a = array([1, 1, 0, 0]) I already have this array as a subclass of the ndarray, so I can make further changes to it if need be, I just need the slice on the right hand side to be continually updated as it updates the slice on the left hand side. Am I dreaming or is this magic possible? Update: This is all because I am trying to use Gauss-Seidel iteration to solve a linear algebra problem, more or less. It is a special case involving harmonic functions, I was trying to avoid going into this because its really not necessary and likely to confuse things further, but here goes. The algorithm is this: while not converged: for i in range(len(u[:,0])): for j in range(len(u[0,:])): # skip over boundary entries, i,j == 0 or len(u) u[i,j] = 0.25*(u[i-1,j] + u[i+1,j] + u[i, j-1] + u[i,j+1]) Right? But you can do this two ways, Jacobi involves updating each element with its neighbours without considering updates you have already made until the while loop cycles, to do it in loops you would copy the array then update one array from the copied array. However Gauss-Seidel uses information you have already updated for each of the i-1 and j-1 entries, thus no need for a copy, the loop should essentially 'know' since the array has been re-evaluated after each single element update. That is to say, every time we call up an entry like u[i-1,j] or u[i,j-1] the information calculated in the previous loop will be there. I want to replace this slow and ugly nested loop situation with one nice clean line of code using numpy slicing: u[1:-1,1:-1] = 0.25(u[:-2,1:-1] + u[2:,1:-1] + u[1:-1,:-2] + u[1:-1,2:]) But the result is Jacobi iteration because when you take a slice: u[:,-2,1:-1] you copy the data, thus the slice is not aware of any updates made. Now numpy still loops right? Its not parallel its just a faster way to loop that looks like a parallel operation in python. I want to exploit this behaviour by sort of hacking numpy to return a pointer instead of a copy when I take a slice. Right? Then every time numpy loops, that slice will 'update' or really just replicate whatever happened in the update. To do this I need slices on both sides of the array to be pointers. Anyway if there is some really really clever person out there that awesome, but I've pretty much resigned myself to believing the only answer is to loop in C.
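
    A slice can't be made to see updates from the same assignment, but one common compromise (a sketch of a different, named technique, not the poster's code) is red-black Gauss-Seidel: split the interior into two interleaved checkerboard colours and update them alternately with ordinary vectorized expressions, so each half-sweep uses the values just written for the other colour.

        import numpy as np

        def gauss_seidel_redblack(u, sweeps=100):
            """Red-black Gauss-Seidel sweeps for the 5-point Laplace stencil."""
            i, j = np.indices(u.shape)
            interior = (i > 0) & (i < u.shape[0] - 1) & (j > 0) & (j < u.shape[1] - 1)
            red = interior & ((i + j) % 2 == 0)
            black = interior & ((i + j) % 2 == 1)
            for _ in range(sweeps):
                for mask in (red, black):
                    avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                                  + np.roll(u, 1, 1) + np.roll(u, -1, 1))
                    u[mask] = avg[mask]    # only interior cells of this colour are touched
            return u

        u = np.zeros((50, 50))
        u[0, :] = 1.0                      # an example fixed boundary condition
        gauss_seidel_redblack(u)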

    Read the article

  • Pretty-printing of numpy.array

    - by camillio
    Hello, I'm curious whether there is any way to print formatted numpy arrays, e.g., in a way similar to this: x = 1.23456 print '%.3f' % x If I want to print a numpy array of floats, it prints several decimals, often in 'scientific' format, which is rather hard to read even for low-dimensional arrays. However, a numpy array apparently has to be printed as a string, i.e., with %s. Is there any ready-made solution for this purpose? Many thanks in advance :-)
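
    numpy.set_printoptions controls exactly this (a short sketch):

        import numpy as np

        x = np.array([1.23456, 0.000123, 1234.5])
        np.set_printoptions(precision=3, suppress=True)   # 3 decimals, no scientific notation
        print(x)                                          # e.g. [   1.235    0.     1234.5  ]

        # One-off formatting without touching the global options:
        print(np.array_str(x, precision=3, suppress_small=True))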

    Read the article

  • Reading CSV files in numpy where delimiter is ","

    - by monch1962
    Hello all, I've got a CSV file with a format that looks like this: "FieldName1", "FieldName2", "FieldName3", "FieldName4" "04/13/2010 14:45:07.008", "7.59484916392", "10", "6.552373" "04/13/2010 14:45:22.010", "6.55478493312", "9", "3.5378543" ... Note that there are double quote characters at the start and end of each line in the CSV file, and the "," string is used to delimit fields within each line. When I try to read this into numpy via: import numpy as np data = np.genfromtxt(csvfile, dtype=None, delimiter=',', names=True) all the data gets read in as string values, surrounded by double-quote characters. Not unreasonable, but not much use to me as I then have to go back and convert every column to its correct type When I use delimiter='","' instead, everything works as I'd like, except for the 1st and last fields. As the start of line and end of line characters are a single double-quote character, this isn't seen as a valid delimiter for the 1st and last fields, so they get read in as e.g. "04/13/2010 14:45:07.008 and 6.552373" - note the leading and trailing double-quote characters respectively. Because of these redundant characters, numpy assumes the 1st and last fields are both String types; I don't want that to be the case Is there a way of instructing numpy to read in files formatted in this fashion as I'd like, without having to go back and "fix" the structure of the numpy array after the initial read?
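
    One workaround (a sketch, not from the original post; 'data.csv' is a placeholder path) is to strip the quote characters before numpy sees the lines; genfromtxt accepts any iterable of lines, so a small generator keeps it a one-pass read and the plain "," delimiter then works for every field:

        import numpy as np

        def unquoted(path):
            with open(path) as fh:
                for line in fh:
                    yield line.replace('"', '')

        data = np.genfromtxt(unquoted('data.csv'), dtype=None,
                             delimiter=',', names=True, autostrip=True)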

    Read the article

  • Why is numpy's einsum faster than numpy's built-in functions?

    - by Ophion
    Let's start with three arrays of dtype=np.double. Timings are performed on an Intel CPU using numpy 1.7.1 compiled with icc and linked to Intel's MKL. An AMD CPU with numpy 1.6.1 compiled with gcc without MKL was also used to verify the timings. Please note that the timings scale nearly linearly with system size and are not due to the small overhead incurred in the numpy functions' if statements; those differences would show up in microseconds, not milliseconds: arr_1D=np.arange(500,dtype=np.double) large_arr_1D=np.arange(100000,dtype=np.double) arr_2D=np.arange(500**2,dtype=np.double).reshape(500,500) arr_3D=np.arange(500**3,dtype=np.double).reshape(500,500,500) First let's look at the np.sum function: np.all(np.sum(arr_3D)==np.einsum('ijk->',arr_3D)) True %timeit np.sum(arr_3D) 10 loops, best of 3: 142 ms per loop %timeit np.einsum('ijk->', arr_3D) 10 loops, best of 3: 70.2 ms per loop Powers: np.allclose(arr_3D*arr_3D*arr_3D,np.einsum('ijk,ijk,ijk->ijk',arr_3D,arr_3D,arr_3D)) True %timeit arr_3D*arr_3D*arr_3D 1 loops, best of 3: 1.32 s per loop %timeit np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D) 1 loops, best of 3: 694 ms per loop Outer product: np.all(np.outer(arr_1D,arr_1D)==np.einsum('i,k->ik',arr_1D,arr_1D)) True %timeit np.outer(arr_1D, arr_1D) 1000 loops, best of 3: 411 us per loop %timeit np.einsum('i,k->ik', arr_1D, arr_1D) 1000 loops, best of 3: 245 us per loop All of the above are twice as fast with np.einsum. These should be apples-to-apples comparisons, as everything is specifically of dtype=np.double. I would expect the speed-up in an operation like this: np.allclose(np.sum(arr_2D*arr_3D),np.einsum('ij,oij->',arr_2D,arr_3D)) True %timeit np.sum(arr_2D*arr_3D) 1 loops, best of 3: 813 ms per loop %timeit np.einsum('ij,oij->', arr_2D, arr_3D) 10 loops, best of 3: 85.1 ms per loop Einsum seems to be at least twice as fast for np.inner, np.outer, np.kron, and np.sum regardless of axes selection. The primary exception is np.dot, as it calls DGEMM from a BLAS library. So why is np.einsum faster than other numpy functions that are equivalent? The DGEMM case for completeness: np.allclose(np.dot(arr_2D,arr_2D),np.einsum('ij,jk',arr_2D,arr_2D)) True %timeit np.einsum('ij,jk',arr_2D,arr_2D) 10 loops, best of 3: 56.1 ms per loop %timeit np.dot(arr_2D,arr_2D) 100 loops, best of 3: 5.17 ms per loop The leading theory is from @seberg's comment that np.einsum can make use of SSE2, but numpy's ufuncs will not until numpy 1.8 (see the change log). I believe this is the correct answer, but have not been able to confirm it. Some limited proof can be found by changing the dtype of the input arrays and observing the speed difference, and in the fact that not everyone observes the same trends in timings.

    Read the article

  • Several numpy arrays with SWIG

    - by Petter
    I am using SWIG to pass numpy arrays from Python to C++ code: %include "numpy.i" %init %{ import_array(); %} %apply (float* INPLACE_ARRAY1, int DIM1) {(float* data, int n)}; class Class { public: void test(float* data, int n) { //... } }; and in Python: c = Class() a = zeros(5) c.test(a) This works, but how can I pass multiple numpy arrays to the same function?

    Read the article

  • Reading numpy arrays outside of Python

    - by Abiel
    In a recent question I asked about the fastest way to convert a large numpy array to a delimited string. My reason for asking was because I wanted to take that plain text string and transmit it (over HTTP for instance) to clients written in other programming languages. A delimited string of numbers is obviously something that any client program can work with easily. However, it was suggested that because string conversion is slow, it would be faster on the Python side to do base64 encoding on the array and send it as binary. This is indeed faster. My question now is, (1) how can I make sure my encoded numpy array will travel well to clients on different operating systems and different hardware, and (2) how do I decode the binary data on the client side. For (1), my inclination is to do something like the following import numpy as np import base64 x = np.arange(100, dtype=np.float64) base64.b64encode(x.tostring()) Is there anything else I need to do? For (2), I would be happy to have an example in any programming language, where the goal is to take the numpy array of floats and turn them into a similar native data structure. Assume we have already done base64 decoding and have a byte array, and that we also know the numpy dtype, dimensions, and any other metadata which will be needed. Thanks.
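
    For (1), one way to make the bytes unambiguous (a sketch) is to force a fixed byte order and dtype before encoding and ship the dtype/shape alongside; (2) is shown here in Python with numpy.frombuffer, and a client in another language would do the equivalent reinterpretation of the decoded bytes as little-endian 64-bit floats.

        import numpy as np
        import base64

        x = np.arange(100, dtype=np.float64)

        # Sender: fixed little-endian float64, then base64.
        payload = base64.b64encode(x.astype('<f8').tobytes())

        # Receiver: decode and reinterpret with the agreed dtype and shape.
        raw = base64.b64decode(payload)
        y = np.frombuffer(raw, dtype='<f8').reshape(x.shape)
        assert np.array_equal(x, y)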

    Read the article

  • sampling integers uniformly efficiently in python using numpy/scipy

    - by user248237
    I have a problem where depending on the result of a random coin flip, I have to sample a random starting position from a string. If the sampling of this random position is uniform over the string, I thought of two approaches to do it: one using multinomial from numpy.random, the other using the simple randint function of Python standard lib. I tested this as follows: from numpy import * from numpy.random import multinomial from random import randint import time def use_multinomial(length, num_points): probs = ones(length)/float(length) for n in range(num_points): result = multinomial(1, probs) def use_rand(length, num_points): for n in range(num_points): rand(1, length) def main(): length = 1700 num_points = 50000 t1 = time.time() use_multinomial(length, num_points) t2 = time.time() print "Multinomial took: %s seconds" %(t2 - t1) t1 = time.time() use_rand(length, num_points) t2 = time.time() print "Rand took: %s seconds" %(t2 - t1) if __name__ == '__main__': main() The output is: Multinomial took: 6.58072400093 seconds Rand took: 2.35189199448 seconds it seems like randint is faster, but it still seems very slow to me. Is there a vectorized way to get this to be much faster, using numpy or scipy? thanks.
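
    If the goal is just uniform integer positions, one vectorized route (a sketch) is a single numpy.random.randint call, which draws all the samples at once instead of looping in Python:

        import numpy as np

        length, num_points = 1700, 50000
        positions = np.random.randint(0, length, size=num_points)   # uniform on 0..length-1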

    Read the article

  • Why is numpy c extension slow?

    - by Bitwise
    I am working on large numpy arrays, and some native numpy operations are too slow for my needs (for example simple operations such as "bitwise" A&B). I started looking into writing C extensions to try and improve performance. As a test case, I tried the example given here, implementing a simple trace calculation. I was able to get it to work, but was surprised by the performance: for a (1000,1000) numpy array, numpy.trace() was about 1000 times faster than the C extension! This happens whether I run it once or many times. Is this expected? Is the C extension overhead that bad? Any ideas how to speed things up?

    Read the article

  • Iterate with binary structure over numpy array to get cell sums

    - by Curlew
    The scipy package has a function to define a binary structure (such as a taxicab (2,1) or a chessboard (2,2)). import numpy from scipy import ndimage a = numpy.zeros((6,6), dtype=numpy.int) a[1:5, 1:5] = 1;a[3,3] = 0 ; a[2,2] = 2 s = ndimage.generate_binary_structure(2,2) # Binary structure #.... Calculate Sum of result_array = numpy.zeros_like(a) What I want is to iterate over all cells of this array with the given structure s, and for each cell store in an (initially empty) result array the value of a function (sum, for example) applied to all cells covered by the binary structure. For example: array([[0, 0, 0, 0, 0, 0], [0, 1, 1, 1, 1, 0], [0, 1, 2, 1, 1, 0], [0, 1, 1, 0, 1, 0], [0, 1, 1, 1, 1, 0], [0, 0, 0, 0, 0, 0]]) # The array a. The value in cell 1,2 is currently one. Given the structure s and an example function such as sum, the value in the resulting array (result_array) becomes 7 (or 6 if the current cell value is excluded). Has anyone got an idea?
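
    For a plain sum over the footprint, scipy.ndimage.convolve does this in one call (a sketch, not from the original post); for arbitrary functions, ndimage.generic_filter accepts a Python callable instead.

        import numpy as np
        from scipy import ndimage

        a = np.zeros((6, 6), dtype=int)
        a[1:5, 1:5] = 1; a[3, 3] = 0; a[2, 2] = 2
        s = ndimage.generate_binary_structure(2, 2)        # 3x3 all-ones footprint

        # Neighbourhood sum at every cell; cells outside the array count as 0.
        result_array = ndimage.convolve(a, s.astype(int), mode='constant', cval=0)
        print(result_array[1, 2])   # 7, or 6 after subtracting a[1, 2] itself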

    Read the article

  • NumPy: how to quickly normalize many vectors?

    - by EOL
    How can a list of vectors be elegantly normalized, in NumPy? Here is an example that does not work: from numpy import * vectors = array([arange(10), arange(10)]) # All x's, then all y's norms = apply_along_axis(linalg.norm, 0, vectors) # Now, what I was expecting would work: print vectors.T / norms # vectors.T has 10 elements, as does norms, but this does not work The last operation yields "shape mismatch: objects cannot be broadcast to a single shape". How can the normalization of the 2D vectors in vectors be elegantly done, with NumPy? Edit: Why does the above not work while adding a dimension to norms does work (as per my answer below)?
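
    The mismatch comes from broadcasting rules: shapes are aligned from the trailing axis, so (10, 2) against (10,) fails while (2, 10) against (10,) works. Two equivalent fixes (a sketch):

        import numpy as np

        vectors = np.array([np.arange(10), np.arange(10)], dtype=float)   # shape (2, 10)
        norms = np.sqrt((vectors ** 2).sum(axis=0))                       # one norm per column vector
        norms[norms == 0] = 1.0                                           # avoid 0/0 for the zero vector

        normalized = vectors / norms                      # (2, 10) / (10,) broadcasts over columns
        normalized_T = vectors.T / norms[:, np.newaxis]   # explicit axis works for the transpose too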

    Read the article

  • initialize a numpy array

    - by Curious2learn
    Is there a way to initialize a numpy array of a given shape and add to it? I will explain what I need with a list example. If I want to create a list of objects generated in a loop, I can do: a = [] for i in range(5): a.append(i) I want to do something similar with a numpy array. I know about vstack, concatenate, etc. However, it seems these require two numpy arrays as inputs. What I need is: big_array # Initially empty. This is where I don't know what to specify for i in range(5): array i of shape = (2,4) created. add to big_array The big_array should end up with shape (10,4). How do I do this? Thanks for your help.
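
    Two usual patterns (a sketch; np.full here is just a stand-in for whatever produces each (2,4) block): collect the blocks in a list and stack once, or preallocate the final shape and assign slices.

        import numpy as np

        # Collect, then stack once at the end.
        pieces = []
        for i in range(5):
            pieces.append(np.full((2, 4), i, dtype=float))
        big_array = np.vstack(pieces)                     # shape (10, 4)

        # Or preallocate and fill by slice.
        big_array = np.empty((10, 4))
        for i in range(5):
            big_array[2 * i:2 * i + 2] = np.full((2, 4), i, dtype=float)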

    Read the article

  • numpy.equal with string values

    - by Morgoth
    The numpy.equal function does not work if a list or array contains strings: >>> import numpy >>> index = numpy.equal([1,2,'a'],None) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: function not supported for these types, and can't coerce safely to supported types What is the easiest way to workaround this without looping through each element? In the end, I need index to contain a boolean array indicating which elements are None.
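
    One workaround (a sketch; the object-dtype comparison assumes a reasonably recent numpy) is to keep the elements as Python objects so the comparison against None happens per element, or to build the boolean mask directly:

        import numpy as np

        values = [1, 2, 'a', None]

        arr = np.array(values, dtype=object)
        index = np.equal(arr, None)        # array([False, False, False,  True])

        # Explicit fallback that works regardless of dtype support:
        index = np.fromiter((v is None for v in values), dtype=bool, count=len(values))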

    Read the article

  • Fast image coordinate lookup in Numpy

    - by victor
    I've got a big numpy array full of coordinates (about 400): [[102, 234], [304, 104], .... ] And a numpy 2d array my_map of size 800x800. What's the fastest way to look up the coordinates given in that array? I tried things like paletting as described in this post: http://opencvpython.blogspot.com/2012/06/fast-array-manipulation-in-numpy.html but couldn't get it to work. I was also thinking about turning each coordinate into a linear index of the map and then piping it straight into my_map like so: my_map[linearized_coords] but I couldn't get vectorize to properly translate the coordinates into a linear fashion. Any ideas?
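
    Fancy indexing with one index array per axis does the whole lookup in a single call (a sketch, not from the original post):

        import numpy as np

        my_map = np.random.rand(800, 800)
        coords = np.array([[102, 234], [304, 104]])        # (row, col) pairs

        values = my_map[coords[:, 0], coords[:, 1]]        # one value per coordinate pair

        # Equivalent via linear indices, if a flat index is needed elsewhere:
        linear = np.ravel_multi_index((coords[:, 0], coords[:, 1]), my_map.shape)
        values = my_map.ravel()[linear]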

    Read the article

  • python numpy roll with padding

    - by Marshall Ward
    I'd like to roll a 2D numpy array in Python, except that I'd like to pad the ends with zeros rather than roll the data as if it were periodic. Specifically, the following code import numpy as np x = np.array([[1, 2, 3],[4, 5, 6]]) np.roll(x,1,axis=1) returns array([[3, 1, 2],[6, 4, 5]]) but what I would prefer is array([[0, 1, 2], [0, 4, 5]]) I could do this with a few awkward touch-ups, but I'm hoping there's a way to do it with fast built-in commands. Thanks
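
    One short way (a sketch, not from the original post): roll first, then overwrite the column that wrapped around.

        import numpy as np

        x = np.array([[1, 2, 3], [4, 5, 6]])
        y = np.roll(x, 1, axis=1)
        y[:, 0] = 0
        # y is now array([[0, 1, 2],
        #                 [0, 4, 5]])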

    Read the article

  • String comparison in Numpy

    - by Morgoth
    In the following example In [8]: import numpy as np In [9]: strings = np.array(['hello ', 'world '], dtype='|S10') In [10]: strings == 'hello' Out[10]: array([False, False], dtype=bool) The comparison fails because of the whitespace. Is there a Numpy built-in function that does the equivalent of In [12]: np.array([x.strip()=='hello' for x in strings]) Out[12]: array([ True, False], dtype=bool) which does give the correct result?
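
    numpy's vectorized string routines cover this (a sketch; with the '|S10' byte strings of the original example under Python 3 the comparison value would be b'hello' instead):

        import numpy as np

        strings = np.array(['hello ', 'world '])          # unicode dtype
        mask = np.char.strip(strings) == 'hello'
        # array([ True, False])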

    Read the article
