Search Results

Search found 338 results on 14 pages for 'numpy'.

Page 7 of 14

  • how to handle an asymptote/discontinuity with Matplotlib

    - by Geddes
    Hello all. Firstly, thanks again for all your help. Sorry not to have accepted the responses to my previous questions, as I did not know how the system worked (thanks to Mark for pointing that out!). I have since been back and gratefully acknowledged the kind help I have received.

    My question: when plotting a graph with a discontinuity/asymptote/singularity/whatever, is there any automatic way to prevent Matplotlib from 'joining the dots' across the 'break'? (Please see the code below.) I read that Sage has a [detect_poles] facility that looked good, but I really want it to work with Matplotlib. Thanks and best wishes, Geddes

        import matplotlib.pyplot as plt
        import numpy as np
        from sympy import sympify, lambdify
        from sympy.abc import x

        fig = plt.figure(1)
        ax = fig.add_subplot(111)

        # set up axis
        ax.spines['left'].set_position('zero')
        ax.spines['right'].set_color('none')
        ax.spines['bottom'].set_position('zero')
        ax.spines['top'].set_color('none')
        ax.xaxis.set_ticks_position('bottom')
        ax.yaxis.set_ticks_position('left')

        # set up x range and precision
        xx = np.arange(-0.5, 5.5, 0.01)

        # draw my curve
        myfunction = sympify(1/(x-2))
        mylambdifiedfunction = lambdify(x, myfunction, 'numpy')
        ax.plot(xx, mylambdifiedfunction(xx), zorder=100, linewidth=3, color='red')

        # set bounds
        ax.set_xbound(-1, 6)
        ax.set_ybound(-4, 4)

        plt.show()
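
    One commonly suggested workaround (a sketch, not from the original post; the threshold of 50 is an arbitrary assumption): replace values near the pole with np.nan, which Matplotlib will not draw a line across.

        import numpy as np
        import matplotlib.pyplot as plt

        xx = np.arange(-0.5, 5.5, 0.01)
        yy = 1.0 / (xx - 2)

        # Blank out samples where the curve blows up; NaNs make
        # Matplotlib break the line instead of joining the dots.
        yy[np.abs(yy) > 50] = np.nan

        plt.plot(xx, yy, color='red', linewidth=3)
        plt.ylim(-4, 4)
        plt.show()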

    Read the article

  • How to speed up a Python nested loop?

    - by erich
    I'm performing a nested loop in Python, included below. It serves as a basic way of searching through existing financial time series and looking for periods in the time series that match certain characteristics. In this case there are two separate, equally sized arrays representing the 'close' (i.e. the price of an asset) and the 'volume' (i.e. the amount of the asset that was exchanged over the period). For each period in time I would like to look forward at all future intervals with lengths between 1 and INTERVAL_LENGTH and see if any of those intervals have characteristics that match my search (in this case, the ratio of the close values is greater than 1.0001 and less than 1.5, and the summed volume is greater than 100).

    My understanding is that one of the major reasons for the speedup when using NumPy is that the interpreter doesn't need to type-check the operands each time it evaluates something, so long as you're operating on the array as a whole (e.g. numpy_array * 2), but obviously the code below is not taking advantage of that. Is there a way to replace the internal loop with some kind of window function which could result in a speedup, or any other way using numpy/scipy to speed this up substantially in native Python? Alternatively, is there a better way to do this in general (e.g. would it be much faster to write this loop in C++ and use weave)?

        ARRAY_LENGTH = 500000
        INTERVAL_LENGTH = 15

        close = np.array( xrange(ARRAY_LENGTH) )
        volume = np.array( xrange(ARRAY_LENGTH) )
        close, volume = close.astype('float64'), volume.astype('float64')

        results = []
        for i in xrange(len(close) - INTERVAL_LENGTH):
            for j in xrange(i+1, i+INTERVAL_LENGTH):
                ret = close[j] / close[i]
                vol = sum( volume[i+1:j+1] )
                if ret > 1.0001 and ret < 1.5 and vol > 100:
                    results.append( [i, j, ret, vol] )
        print results
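
    One possible vectorization (a sketch, not from the original post): loop only over the offset k = j - i and handle every starting index i at once, with the slice sums taken from a cumulative sum of the volume rather than repeated slicing.

        import numpy as np

        def find_intervals(close, volume, interval_length=15):
            # cum[m] is the sum of volume[:m], so any slice sum is a difference
            cum = np.concatenate(([0.0], np.cumsum(volume)))
            results = []
            n = len(close)
            for k in range(1, interval_length):
                i = np.arange(n - k)
                ret = close[k:] / close[:-k]        # close[i+k] / close[i]
                vol = cum[i + k + 1] - cum[i + 1]   # sum(volume[i+1:i+k+1])
                hits = np.where((ret > 1.0001) & (ret < 1.5) & (vol > 100))[0]
                for h in hits:
                    results.append([h, h + k, ret[h], vol[h]])
            return results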

    Read the article

  • Better way to compare neighboring cells in matrix

    - by HyperCube
    Suppose I have a matrix of size 100x100 and I would like to compare each pixel to its direct neighbors (left, upper, right, lower) and then do some operations on the current matrix or a new one of the same size. A sample code in Python/NumPy could look like the following (the comparison against 0.5 has no meaning; I just want to give a working example for some operation while comparing the neighbors):

        import numpy as np

        my_matrix = np.random.rand(100,100)
        new_matrix = np.zeros((100,100))  # allocate the result matrix
        my_range = np.arange(1,99)

        for i in my_range:
            for j in my_range:
                if my_matrix[i,j+1] > 0.5:
                    new_matrix[i,j+1] = 1
                if my_matrix[i,j-1] > 0.5:
                    new_matrix[i,j-1] = 1
                if my_matrix[i+1,j] > 0.5:
                    new_matrix[i+1,j] = 1
                if my_matrix[i-1,j] > 0.5:
                    new_matrix[i-1,j] = 1
                if my_matrix[i+1,j+1] > 0.5:
                    new_matrix[i+1,j+1] = 1
                if my_matrix[i+1,j-1] > 0.5:
                    new_matrix[i+1,j-1] = 1
                if my_matrix[i-1,j+1] > 0.5:
                    new_matrix[i-1,j+1] = 1

    This can get really nasty if I want to step into one neighboring cell and start from it to do a similar task... Do you have some suggestions how this can be done in a more efficient manner? Is this even possible?
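
    For what it's worth (a sketch, not from the original post): the net effect of this particular loop is a thresholding, which a boolean mask expresses without loops, and genuine neighbor comparisons can be written as shifted slices.

        import numpy as np

        my_matrix = np.random.rand(100, 100)

        # The loop above effectively sets cells to 1 wherever the value
        # exceeds 0.5 (edge handling aside), which a mask does directly.
        new_matrix = np.zeros((100, 100))
        new_matrix[my_matrix > 0.5] = 1

        # True neighbor comparisons can also be done with shifted slices,
        # e.g. "is each cell larger than its right-hand neighbor?":
        bigger_than_right = my_matrix[:, :-1] > my_matrix[:, 1:]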

    Read the article

  • Python lists/arrays: disable negative indexing wrap-around

    - by wim
    While I find the negative-number wraparound (i.e. A[-2] indexing the second-to-last element) extremely useful in many cases, there are often use cases I come across where it is more of an annoyance than a help, and I find myself wishing for an alternate syntax to use when I would rather disable that particular behaviour. Here is a canned 2D example below, but I have had the same peeve a few times with other data structures and in other numbers of dimensions.

        import numpy as np
        A = np.random.randint(0, 2, (5, 10))

        def foo(i, j, r=2):
            '''sum of neighbours within r steps of A[i,j]'''
            return A[i-r:i+r+1, j-r:j+r+1].sum()

    In the slice above I would rather that any negative number in the slice were treated the same as None is, rather than wrapping to the other end of the array. Because of the wrapping, the otherwise nice implementation above gives incorrect results at boundary conditions and requires some sort of patch like:

        def ugly_foo(i, j, r=2):
            def thing(n):
                return None if n < 0 else n
            return A[thing(i-r):i+r+1, thing(j-r):j+r+1].sum()

    I have also tried zero-padding the array or list, but it is still inelegant (it requires adjusting the lookup indices accordingly) and inefficient (it requires copying the array). Am I missing some standard trick or elegant solution for slicing like this? I noticed that Python and NumPy already handle the case where you specify too large a number nicely - that is, if the index is greater than the shape of the array, it behaves the same as if it were None.
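
    To make the boundary failure concrete (an added illustration, not from the original post):

        import numpy as np
        A = np.random.randint(0, 2, (5, 10))

        # At i = 0, r = 2 the row slice becomes A[-2:3], i.e. it starts at
        # the second-to-last row, runs past the end, and comes back empty:
        print(A[-2:3].shape)   # (0, 10), not the (3, 10) one might expect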

    Read the article

  • vectorize is indeterminate

    - by telliott99
    I'm trying to vectorize a simple function in NumPy and getting inconsistent behaviour. I expect my code to return 0 for values < 0.5 and the unchanged value otherwise. Strangely, different runs of the script from the command line yield varying results: sometimes it works correctly, and sometimes I get all 0's. It doesn't matter which of the three alternatives (one active, two commented out) I use for the case when d <= T. It does seem to be correlated with whether the first value to be returned is 0. Any ideas? Thanks.

        import numpy as np

        def my_func(d, T=0.5):
            if d > T:
                return d
            #if d <= T: return 0
            else:
                return 0
            #return 0

        N = 4
        A = np.random.uniform(size=N**2)
        A.shape = (N,N)
        print A
        f = np.vectorize(my_func)
        print f(A)

        $ python x.py
        [[ 0.86913815  0.96833127  0.54539153  0.46184594]
         [ 0.46550903  0.24645558  0.26988519  0.0959257 ]
         [ 0.73356391  0.69363161  0.57222389  0.98214089]
         [ 0.15789303  0.06803493  0.01601389  0.04735725]]
        [[ 0.86913815  0.96833127  0.54539153  0.        ]
         [ 0.          0.          0.          0.        ]
         [ 0.73356391  0.69363161  0.57222389  0.98214089]
         [ 0.          0.          0.          0.        ]]

        $ python x.py
        [[ 0.37127366  0.77935622  0.74392301  0.92626644]
         [ 0.61639086  0.32584431  0.12345342  0.17392298]
         [ 0.03679475  0.00536863  0.60936931  0.12761859]
         [ 0.49091897  0.21261635  0.37063752  0.23578082]]
        [[0 0 0 0]
         [0 0 0 0]
         [0 0 0 0]
         [0 0 0 0]]
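
    For context (an added note, not from the original post): np.vectorize infers its output dtype from the first element it evaluates, so a first result of the integer 0 silently casts everything to int. Specifying otypes pins the output type; a minimal sketch:

        import numpy as np

        def my_func(d, T=0.5):
            return d if d > T else 0

        # otypes fixes the output dtype up front instead of letting
        # vectorize guess it from the first computed element.
        f = np.vectorize(my_func, otypes=[np.float64])

        A = np.random.uniform(size=(4, 4))
        print(f(A))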

    Read the article

  • Enthought Python, Sage, or others (in Unix clusters)

    - by vailen
    I currently have access to a cluster of Unix machines, but they don't have the software I need (numpy, scipy, matplotlib, etc.), and I have to install it myself (I don't have root permission, either, so commands like apt-get or yast don't work). In the worst case, I have to compile everything from source. Is there any better way to do this? I have heard about Enthought Python and Sage, but I am not sure what the best approach is. Any suggestions?

    Read the article

  • Why do I get rows of zeros in my 2D fft?

    - by Nicholas Pringle
    I am trying to replicate the results from a paper: "Two-dimensional Fourier Transform (2D-FT) in space and time along sections of constant latitude (east-west) and longitude (north-south) were used to characterize the spectrum of the simulated flux variability south of 40°S." - Lenton et al (2006). The figures published show "the log of the variance of the 2D-FT". I have tried to create an array consisting of the seasonal cycle of similar data as well as the noise. I have defined the noise as the original array minus the signal array.

    Here is the code that I used to plot the 2D-FT of the signal array averaged in latitude:

        import numpy as np
        from numpy import ma
        from matplotlib import pyplot as plt
        from Scientific.IO.NetCDF import NetCDFFile

        ### input directory
        indir = '/home/nicholas/data/'

        ### get the flux data, which is in
        ### [time (5-day averages for 10 years), latitude, longitude]
        nc = NetCDFFile(indir + 'CFLX_2000_2009.nc','r')
        cflux_southern_ocean = nc.variables['Cflx'][:,10:50,:]
        cflux_southern_ocean = ma.masked_values(cflux_southern_ocean,1e+20) # mask land
        nc.close()
        cflux = cflux_southern_ocean*1e08 # change units of data from mmol/m^2/s

        ### create an array that consists of the seasonal signal for each pixel
        year_stack = np.split(cflux, 10, axis=0)
        year_stack = np.array(year_stack)
        signal_array = np.tile(np.mean(year_stack, axis=0), (10, 1, 1))
        signal_array = ma.masked_where(signal_array > 1e20, signal_array) # need to mask

        ### average the array over latitude (or longitude)
        signal_time_lon = ma.mean(signal_array, axis=1)

        ### do a 2D Fourier Transform of the time/space image
        ft = np.fft.fft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft**2
        log_ps = np.log(mgft)
        log_mgft = np.log(mgft)

    Every second row of the ft consists completely of zeros. Why is this? Would it be acceptable to add a small random number to the signal to avoid this?

        signal_time_lon = signal_time_lon + np.random.randint(0,9,size=(730, 182))*1e-05

    EDIT (adding images and clarifying meaning): The output of rfft2 still appears to be a complex array. Using fftshift shifts the edges of the image to the centre; I still have a power spectrum regardless. I expect that the reason I get rows of zeros is that I have re-created the time series for each pixel. The ft[0, 0] pixel contains the mean of the signal, so ft[1, 0] corresponds to a sinusoid with one cycle over the entire signal in the rows of the starting image.

    Here is the starting image, plotted with the following code:

        plt.pcolormesh(signal_time_lon); plt.colorbar(); plt.axis('tight')

    Here is the result, using the following code:

        ft = np.fft.rfft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft**2
        log_ps = np.log1p(mgft)
        plt.pcolormesh(log_ps); plt.colorbar(); plt.axis('tight')

    It may not be clear in the image, but it is only every second row that contains completely zeros. Every tenth pixel (log_ps[10, 0]) is a high value; the other pixels (log_ps[2, 0], log_ps[4, 0], etc.) have very low values.
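
    A plausible explanation (an added illustration with toy data, not from the original post): a signal built by tiling one cycle N times has spectral power only at every Nth frequency bin, with the intermediate bins numerically zero, which matches the high values at every tenth pixel noted above.

        import numpy as np

        base = np.random.rand(73)   # one 'seasonal cycle' of 73 samples
        tiled = np.tile(base, 10)   # 730 samples, 10 exact repeats

        spec = np.abs(np.fft.rfft(tiled))
        # Only bins at multiples of 10 carry power for a 10-fold repeat:
        print(np.nonzero(spec > 1e-8)[0][:5])   # [ 0 10 20 30 40]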

    Read the article

  • The truth value of an array with more than one element is ambiguous when trying to index an array

    - by user1440194
    I am trying to put all elements of rbs into a new array if the corresponding elements in var (another numpy array) are >= 0 and <= .1. However, when I try the following code I get this error: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

        rbs = [ish[4] for ish in realbooks]
        for book in realbooks:
            var -= float(str(book[0]).replace(":", ""))
        bidsred = rbs[(var <= .1) and (var >= 0)]

    Any ideas on what I'm doing wrong?
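
    For reference (a sketch with made-up data, not from the original post): Python's and collapses each operand to a single truth value, which is exactly what raises this ValueError for arrays; the element-wise & operator keeps the comparison an array, and the indexed object has to be a numpy array rather than a list.

        import numpy as np

        rbs = np.array([1.0, 2.0, 3.0, 4.0])
        var = np.array([0.05, 0.2, 0.0, -0.3])

        # element-wise & instead of `and`, with parentheses because
        # & binds more tightly than the comparisons
        bidsred = rbs[(var <= 0.1) & (var >= 0)]
        print(bidsred)   # [ 1.  3.]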

    Read the article

  • merging indexed arrays in Python

    - by leon
    Suppose that I have two numpy arrays of the form

        x = [[1, 2],
             [2, 4],
             [3, 6],
             [4, NaN],
             [5, 10]]

        y = [[0, -5],
             [1, 0],
             [2, 5],
             [5, 20],
             [6, 25]]

    Is there an efficient way to merge them such that I have

        xmy = [[0, NaN, -5 ],
               [1, 2,   0  ],
               [2, 4,   5  ],
               [3, 6,   NaN],
               [4, NaN, NaN],
               [5, 10,  20 ],
               [6, NaN, 25 ]]

    I can implement a simple function using search to find the index, but this is not elegant and potentially inefficient for a lot of arrays and large dimensions. Any pointers would be appreciated.
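
    One way to do the outer join in pure numpy (a sketch, assuming the key columns are sorted and unique, which they are in the example above):

        import numpy as np

        x = np.array([[1, 2], [2, 4], [3, 6], [4, np.nan], [5, 10]])
        y = np.array([[0, -5], [1, 0], [2, 5], [5, 20], [6, 25]])

        # union of keys, then fill each value column where its key appears
        keys = np.union1d(x[:, 0], y[:, 0])
        merged = np.full((len(keys), 3), np.nan)
        merged[:, 0] = keys
        merged[np.searchsorted(keys, x[:, 0]), 1] = x[:, 1]
        merged[np.searchsorted(keys, y[:, 0]), 2] = y[:, 1]
        print(merged)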

    Read the article

  • element-wise lookup of one ndarray in another ndarray of a different shape

    - by fahhean
    Hi, I am new to numpy. I am wondering: is there a way to do a lookup between two ndarrays of different shapes? For example, I have the two ndarrays below:

        X = array([[0, 3, 6],
                   [3, 3, 3],
                   [6, 0, 3]])

        Y = array([[0, 100],
                   [3, 500],
                   [6, 800]])

    I would like to look up each element of X in Y, then be able to return the second column of Y:

        Z = array([[100, 500, 800],
                   [500, 500, 500],
                   [800, 100, 500]])

    thanks, fahhean
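
    One loop-free way (a sketch, assuming Y's first column is sorted and covers every value that occurs in X, as in the example): np.searchsorted maps each element of X to its row in Y.

        import numpy as np

        X = np.array([[0, 3, 6], [3, 3, 3], [6, 0, 3]])
        Y = np.array([[0, 100], [3, 500], [6, 800]])

        # index of each X element within Y's key column, then take column 2
        idx = np.searchsorted(Y[:, 0], X)
        Z = Y[idx, 1]
        print(Z)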

    Read the article

  • Why subtract a value from itself (x - x) in Python?

    - by endolith
    In NumPy functions, there are often initial lines that check variable types, force them to be certain types, etc. Can someone explain the point of these lines? What does subtracting a value from itself do?

        t,w = asarray(t), asarray(duty)
        w = asarray(w + (t-t))
        t = asarray(t + (w-w))
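
    One reading of the idiom (an added illustration on toy inputs, not from the original post): (t - t) is zero wherever t is finite but NaN where t is NaN, and it carries t's shape, so adding it broadcasts the other operand to that shape and propagates NaNs both ways.

        import numpy as np

        t = np.array([0.0, 1.0, np.nan])
        w = 0.5

        # w picks up t's shape and t's NaNs
        w2 = np.asarray(w + (t - t))
        print(w2)   # [ 0.5  0.5  nan]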

    Read the article

  • Reordering matrix elements to reflect column and row clustering in naive Python

    - by bgbg
    Hello, I'm looking for a way to perform clustering separately on matrix rows and then on its columns, reorder the data in the matrix to reflect the clustering, and put it all together. The clustering problem is easily solvable, as is the dendrogram creation (for example in this blog or in "Programming Collective Intelligence"). However, how to reorder the data remains unclear to me. Eventually, I'm looking for a way of creating graphs similar to the one below using naive Python (with any "standard" library such as numpy, matplotlib etc., but without using R or other external tools).
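
    The reordering step itself is small once the dendrograms exist (a sketch using scipy's hierarchy module, which may or may not count as "standard" here): the leaf order of each dendrogram is the permutation to apply.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, leaves_list

        data = np.random.rand(10, 8)

        # cluster rows and columns independently, then permute the
        # matrix by each dendrogram's leaf order
        row_order = leaves_list(linkage(data))
        col_order = leaves_list(linkage(data.T))
        reordered = data[row_order][:, col_order]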

    Read the article

  • store/load numpy array from binary files

    - by Javier
    Dear all, I would like to store and load numpy arrays from binary files. For that purpose, I created two small functions. Each binary file should contain the dimensionality of the given matrix:

        import array
        import numpy as np

        def saveArrayToFile(data, fileName):
            with open(fileName, 'w') as file:
                a = array.array('f')
                nSamples, ndim = data.shape
                a.extend([nSamples, ndim]) # write number of elements and dimensions
                a.fromstring(data.tostring())
                a.tofile(file)

        def readArrayFromFile(fileName):
            _featDesc = np.fromfile(fileName, 'f')
            _ndesc = int(_featDesc[0])
            _ndim = int(_featDesc[1])
            _featDesc = _featDesc[2:]
            _featDesc = _featDesc.reshape([_ndesc, _ndim])
            return _featDesc, _ndesc, _ndim

    An example of how to use the functions:

        myarr = np.array([[7, 4],[3, 9],[1, 3]])
        saveArrayToFile(myarr,'myfile.txt')
        _featDesc, _ndesc, _ndim = readArrayFromFile('myfile.txt')

    However, an error message of 'ValueError: total size of new array must be unchanged' is shown. My arrays can be of size MxN and MxM. Any suggestions are more than welcome. I think the problem might be in the saveArrayToFile function. Best wishes, Javier
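
    A side note (a sketch, not from the original post): opening the file in text mode ('w') rather than binary mode ('wb') is one likely culprit in the hand-rolled version, and numpy's own .npy format records shape and dtype for you, avoiding both issues.

        import numpy as np

        myarr = np.array([[7, 4], [3, 9], [1, 3]], dtype='float32')

        # shape and dtype are stored in the file header automatically
        np.save('myfile.npy', myarr)
        loaded = np.load('myfile.npy')
        print(loaded.shape)   # (3, 2)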

    Read the article

  • hierarchical clustering with gene expression matrix in Python

    - by user248237
    How can I do hierarchical clustering (in this case for gene expression data) in Python, in a way that shows the matrix of gene expression values along with the dendrogram? What I mean is like the example here: http://www.mathworks.cn/access/helpdesk/help/toolbox/bioinfo/ug/a1060813239b1.html shown after bullet point 6 (Figure 1), where the dendrogram is plotted to the left of the gene expression matrix, and the rows have been reordered to reflect the clustering. How can I do this in Python using numpy/scipy or other tools? Also, is it computationally practical to do this with a matrix of about 11,000 genes, using Euclidean distance as a metric? Thanks.
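
    A minimal sketch of one layout (an added illustration; the axes proportions, linkage method, and matrix size are all assumptions): draw the dendrogram in a left-hand axes and the row-permuted matrix beside it.

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy.cluster.hierarchy import linkage, dendrogram

        data = np.random.rand(50, 20)
        fig = plt.figure()

        # dendrogram on the left
        ax1 = fig.add_axes([0.05, 0.1, 0.2, 0.8])
        dg = dendrogram(linkage(data, method='average'), orientation='left')
        ax1.set_xticks([]); ax1.set_yticks([])

        # matrix on the right, rows permuted into dendrogram leaf order
        ax2 = fig.add_axes([0.3, 0.1, 0.6, 0.8])
        ax2.matshow(data[dg['leaves']], aspect='auto')
        plt.show()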

    Read the article

  • How to make scipy.interpolate give an extrapolated result beyond the input range?

    - by Salim Fadhley
    I'm trying to port a program which uses a hand-rolled interpolator (developed by a mathematician colleague) over to the interpolators provided by scipy. I'd like to use or wrap the scipy interpolator so that its behaviour is as close as possible to our old interpolator's. A key difference between the two functions is that our original interpolator will extrapolate the result if the input value is above or below the input range. If you try this with the scipy interpolator, it raises a ValueError. Consider this program as an example:

        import numpy as np
        from scipy import interpolate

        x = np.arange(0,10)
        y = np.exp(-x/3.0)
        f = interpolate.interp1d(x, y)

        print f(9)
        print f(11) # Causes ValueError, because it's greater than max(x)

    Is there a sensible way to make it so that, instead of crashing, the final line will simply do a linear extrapolation, continuing the gradients defined by the first and last two points to infinity? Note that in the real software I'm not actually using the exp function - that's here for illustration only!
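
    One way to get that behaviour (a sketch, not from the original post): fall back to a manual linear extension outside the knots. Later scipy releases also accept interp1d(..., fill_value='extrapolate'), but that option did not exist at the time of this question.

        import numpy as np
        from scipy import interpolate

        x = np.arange(0, 10)
        y = np.exp(-x / 3.0)
        f = interpolate.interp1d(x, y)

        def extrap(xi):
            # extend the end gradients linearly beyond the input range
            if xi < x[0]:
                return y[0] + (xi - x[0]) * (y[1] - y[0]) / (x[1] - x[0])
            if xi > x[-1]:
                return y[-1] + (xi - x[-1]) * (y[-1] - y[-2]) / (x[-1] - x[-2])
            return f(xi)

        print(extrap(9))
        print(extrap(11))   # extrapolates instead of raising ValueError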

    Read the article

  • TypeError: 'NoneType' object does not support item assignment

    - by R S John
    I am trying to do some mathematical calculations according to the values at particular indices of a NumPy array, with the following code:

        X = np.arange(9).reshape(3,3)
        temp = X.copy().fill(5.446361E-01)
        ind = np.where(X < 4.0)
        temp[ind] = 0.5*X[ind]**2 - 1.0
        ind = np.where(X >= 4.0 and X < 9.0)
        temp[ind] = (5.699327E-1*(X[ind]-1)**4)/(X[ind]**4)
        print temp

    But I am getting the following error:

        Traceback (most recent call last):
          File "test.py", line 7, in <module>
            temp[ind] = 0.5*X[ind]**2 - 1.0
        TypeError: 'NoneType' object does not support item assignment

    Would you please help me in solving this? Thanks
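
    For reference (a sketch of one likely fix, not from the original post): ndarray.fill() works in place and returns None, so the copy and the fill must be separate statements, and the compound condition needs element-wise & rather than and.

        import numpy as np

        X = np.arange(9).reshape(3, 3).astype(float)

        temp = X.copy()             # keep the copy...
        temp.fill(5.446361e-01)     # ...fill() returns None, so call it separately

        ind = np.where(X < 4.0)
        temp[ind] = 0.5 * X[ind]**2 - 1.0

        ind = np.where((X >= 4.0) & (X < 9.0))   # & instead of `and`
        temp[ind] = (5.699327e-1 * (X[ind] - 1)**4) / (X[ind]**4)
        print(temp)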

    Read the article

  • Python Library installation

    - by MacPython
    Hi everybody, I have two questions regarding Python libraries:

    1. I would like to know if there is something like a "super" Python library which lets me install all, or at least all scientifically useful, Python libraries, so that I can install it once and then have everything I need. There are a number of annoying problems when installing different libraries (PYTHONPATH issues; "can't import because it is not installed" - but it IS installed).

    2. Is there any good documentation about common installation errors and how to avoid them?

    If there is no total solution, I would be interested in numpy, scipy, matplotlib and PIL. Thanks a lot for the attention and help. Best, Z

    Read the article

  • How to suppress error messages in rpy2

    - by Björn
    Hello! The following code does not work. It seems that the R warning message raises a Python error.

        # enable use of python objects in rpy2
        import rpy2.robjects.numpy2ri
        import numpy as np
        from rpy2.robjects import r

        # create an example array
        a = np.array([[5,2,5],[3,7,8]])

        # this line leads to a warning message, which in turn raises an
        # error message if run within a script.
        result = r['chisq.test'](a)

    Running that code example in ipython works; however, running it inside a script raises the error TypeError: 'module' object is unsubscriptable. I assume this is due to the warning message. What is the best way to avoid this problem? Thanks in advance!
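
    One blunt option (an untested sketch against the rpy2 version used here, not from the original post): silence R-level warnings globally before the call, using R's own options() mechanism rather than anything rpy2-specific.

        import rpy2.robjects.numpy2ri
        import numpy as np
        from rpy2.robjects import r

        # warn = -1 suppresses all R warnings (standard R behaviour)
        r('options(warn = -1)')

        a = np.array([[5, 2, 5], [3, 7, 8]])
        result = r['chisq.test'](a)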

    Read the article

  • Return numerical array in Python

    - by khan
    Okay, this is kind of an interesting question. I have a PHP form through which a user enters values for x and y like this:

        X: [1,3,4]
        Y: [2,4,5]

    These values are stored in the database as varchars. From there, they are read by a Python program which is supposed to use them as numerical (numpy) arrays. However, they come in as plain strings, which means that calculations cannot be performed on them. Is there a way to convert them into numerical arrays before processing, or is there something else that is wrong? Help!
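
    Assuming the stored strings really look like "[1,3,4]" (a sketch under that assumption): such a string is valid JSON, so parsing it and wrapping the result in an array recovers the numbers.

        import json
        import numpy as np

        raw = "[1,3,4]"   # the varchar fetched from the database
        x = np.array(json.loads(raw), dtype=float)
        print(x * 2)   # [ 2.  6.  8.]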

    Read the article

  • Simple wrapping of C code with Cython

    - by Jose
    Hi, I have a number of C functions, and I would like to call them from Python. Cython seems to be the way to go, but I can't really find an example of how exactly this is done. My C function looks like this:

        void calculate_daily ( char *db_name, int grid_id, int year,
                               double *dtmp, double *dtmn, double *dtmx,
                               double *dprec, double *ddtr, double *dayl,
                               double *dpet, double *dpar ) ;

    All I want to do is to specify the first three parameters (a string and two integers) and recover 8 numpy arrays (or Python lists; all the double arrays have N elements). My code assumes that the pointers are pointing to an already allocated chunk of memory. Also, the produced C code ought to link to some external libraries.
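
    A sketch of what the wrapper might look like (an untested illustration; the header name "daily.h" and the assumption that the caller knows the array length n are both made up):

        # file: daily_wrap.pyx
        import numpy as np
        cimport numpy as cnp

        cdef extern from "daily.h":
            void calculate_daily(char *db_name, int grid_id, int year,
                                 double *dtmp, double *dtmn, double *dtmx,
                                 double *dprec, double *ddtr, double *dayl,
                                 double *dpet, double *dpar)

        def calculate_daily_py(bytes db_name, int grid_id, int year, int n):
            # allocate the eight output buffers in numpy and pass their pointers
            cdef cnp.ndarray[cnp.double_t] dtmp = np.empty(n)
            cdef cnp.ndarray[cnp.double_t] dtmn = np.empty(n)
            cdef cnp.ndarray[cnp.double_t] dtmx = np.empty(n)
            cdef cnp.ndarray[cnp.double_t] dprec = np.empty(n)
            cdef cnp.ndarray[cnp.double_t] ddtr = np.empty(n)
            cdef cnp.ndarray[cnp.double_t] dayl = np.empty(n)
            cdef cnp.ndarray[cnp.double_t] dpet = np.empty(n)
            cdef cnp.ndarray[cnp.double_t] dpar = np.empty(n)
            calculate_daily(db_name, grid_id, year,
                            &dtmp[0], &dtmn[0], &dtmx[0], &dprec[0],
                            &ddtr[0], &dayl[0], &dpet[0], &dpar[0])
            return dtmp, dtmn, dtmx, dprec, ddtr, dayl, dpet, dpar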

    Read the article

  • Fastest way to generate 1,000,000+ random numbers in Python

    - by Sandro
    I am currently writing an app in Python that needs to generate a large amount of random numbers, FAST. Currently I have a scheme going that uses numpy to generate all of the numbers in a giant batch (about ~500,000 at a time). While this seems to be faster than Python's implementation, I still need it to go faster. Any ideas? I'm open to writing it in C and embedding it in the program, or doing whatever it takes.

    Constraints on the random numbers:

    - A set of 7 numbers that can all have different bounds, e.g. [0-X1, 0-X2, 0-X3, 0-X4, 0-X5, 0-X6, 0-X7]. Currently I am generating a list of 7 numbers with random values from [0-1) then multiplying by [X1..X7].
    - A set of 13 numbers that all add up to 1. Currently I am just generating 13 numbers then dividing by their sum.

    Any ideas? Would pre-calculating these numbers and storing them in a file make this faster? Thanks!
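
    Mirroring the two schemes above in fully batched form (a sketch; the bounds are made-up placeholders): generate a whole block of rows per call instead of 7 or 13 numbers at a time.

        import numpy as np

        bounds = np.array([10, 20, 30, 40, 50, 60, 70], dtype=float)  # X1..X7
        batch = 500000

        # 7 numbers with per-column bounds, one call for the whole batch
        seven = np.random.uniform(size=(batch, 7)) * bounds

        # 13 numbers per row, normalized so each row sums to 1
        thirteen = np.random.uniform(size=(batch, 13))
        thirteen /= thirteen.sum(axis=1)[:, np.newaxis]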

    Read the article

  • displaying a colored 2d array in matplotlib in Python

    - by user248237
    I'd like to plot a 2D matrix from numpy as a colored matrix in Matplotlib. I have the following 9-by-9 array:

        my_array = diag(ones(9))

        # plot the array
        pcolor(my_array)

    I'd like to set the first three elements of the diagonal to be a certain color, the next three to be a different color, and the last three a different color again. I'd like to specify each color by a hex code string, like "#FF8C00". How can I do this? Also, how can I set the color of 0-valued elements for pcolor?
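
    One approach (a sketch with made-up hex colors, not from the original post): encode each diagonal block as a distinct integer and map the integers to hex strings with a ListedColormap, whose first entry doubles as the color for the 0-valued background.

        import numpy as np
        import matplotlib.pyplot as plt
        from matplotlib.colors import ListedColormap

        # 1s, 2s and 3s mark the three diagonal blocks; 0 is the background
        coded = np.diag(np.repeat([1, 2, 3], 3))

        cmap = ListedColormap(['#FFFFFF', '#FF8C00', '#1E90FF', '#228B22'])
        plt.pcolor(coded, cmap=cmap)
        plt.show()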

    Read the article
