Search Results

Search found 287 results on 12 pages for 'sparse'.

Page 1/12

  • In R, when using named rows, can a sparse matrix column be added to another sparse matrix?

    - by ayman
    I have two sparse matrices, m1 and m2:

        > m1 <- Matrix(data=0, nrow=2, ncol=1, sparse=TRUE, dimnames=list(c("b","d"), NULL))
        > m2 <- Matrix(data=0, nrow=2, ncol=1, sparse=TRUE, dimnames=list(c("a","b"), NULL))
        > m1["b",1] <- 4
        > m2["a",1] <- 5
        > m1
        2 x 1 sparse Matrix of class "dgCMatrix"
        b 4
        d .
        > m2
        2 x 1 sparse Matrix of class "dgCMatrix"
        a 5
        b .

    and I want to cbind() them to make a sparse matrix like:

          [,1] [,2]
        a    .    5
        b    4    .
        d    .    .

    However, cbind() ignores the named rows:

        > cbind(m1[,1], m2[,1])
          [,1] [,2]
        b    4    5
        d    0    0

    Is there some way to do this without a brute-force loop?
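
    One common approach is to align the named rows against the union of all row names and then column-bind. A rough sketch of that idea, here in Python with scipy.sparse rather than R's Matrix package; the helper name align_columns is invented for illustration:

        # Hypothetical Python/scipy analogue of aligning named sparse columns
        # before column-binding; not R code, just the idea.
        from scipy import sparse

        def align_columns(cols):
            # cols: list of dicts mapping row name -> nonzero value (one dict per column)
            names = sorted(set().union(*cols))            # union of all row names
            index = {name: k for k, name in enumerate(names)}
            rows, col_ids, vals = [], [], []
            for j, col in enumerate(cols):
                for name, v in col.items():
                    rows.append(index[name])
                    col_ids.append(j)
                    vals.append(v)
            m = sparse.coo_matrix((vals, (rows, col_ids)),
                                  shape=(len(names), len(cols))).tocsc()
            return names, m

        names, m = align_columns([{"b": 4}, {"a": 5}])
        print(names)          # ['a', 'b']
        print(m.toarray())    # [[0. 5.] [4. 0.]]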

    Read the article

  • Sparse matrices / arrays in Java

    - by DanM
    I'm working on a project, written in Java, which requires that I build a very large 2-D sparse array. Very sparse, if that makes a difference. Anyway: the most crucial aspect for this application is efficiency in terms of time (assume loads of memory, though not nearly so unlimited as to allow me to use a standard 2-D array -- the key range is in the billions in both dimensions). Out of the kajillion cells in the array, there will be several hundred thousand cells which contain an object. I need to be able to modify cell contents VERY quickly. Anyway: Does anyone know a particularly good library for this purpose? It would have to be Berkeley, LGPL or a similar license (no GPL, as the product can't be entirely open-sourced). Or if there's just a very simple way to make a homebrew sparse array object, that'd be fine too. I'm considering MTJ, but haven't heard any opinions on its quality. Thanks!! -Dan
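
    The usual homebrew answer is a hash map keyed by the (row, column) pair, which gives near-constant-time reads and writes and stores only the occupied cells. A minimal sketch of that idea (shown in Python; the class and method names are illustrative only, not from any particular library):

        # Minimal sketch of a sparse 2-D array as a hash map keyed by (row, col).
        class Sparse2D:
            def __init__(self, default=None):
                self._cells = {}            # (row, col) -> stored object
                self._default = default

            def get(self, row, col):
                return self._cells.get((row, col), self._default)

            def set(self, row, col, value):
                if value is self._default:
                    self._cells.pop((row, col), None)   # deleting restores the default
                else:
                    self._cells[(row, col)] = value

            def __len__(self):
                return len(self._cells)     # number of occupied cells

        grid = Sparse2D()
        grid.set(3_000_000_000, 7_000_000_000, "payload")   # keys can be huge
        print(grid.get(3_000_000_000, 7_000_000_000))

    In Java the equivalent would be a HashMap keyed on a small record or a packed long built from the two indices.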

    Read the article

  • Sparse constrained linear least-squares solver

    - by Jacob
    This great SO answer points to a good sparse solver, but I've got constraints on x (for Ax = b) such that each element in x is >=0 and <=N. The first thing which comes to mind is a QP solver for large sparse matrices. Also, A is huge (around 2e6 x 2e6) but very sparse, with <=4 elements per row. Any ideas/recommendations? I'm looking for something like MATLAB's lsqlin but with huge sparse matrices.
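
    One possible route, assuming SciPy is acceptable: scipy.optimize.lsq_linear accepts sparse matrices and simple per-element bounds, which matches the 0 <= x <= N constraint. Whether it scales to a 2e6 x 2e6 system depends on the problem, so treat this as a starting point rather than a recommendation; the data below are placeholders:

        # Sketch: bounded sparse linear least-squares with SciPy.
        import numpy as np
        from scipy import sparse
        from scipy.optimize import lsq_linear

        rng = np.random.default_rng(0)
        n = 1000                                        # small stand-in for the 2e6 case
        A = sparse.random(n, n, density=4 / n, format='csr', random_state=0)
        b = rng.standard_normal(n)
        N = 10.0

        result = lsq_linear(A, b, bounds=(0.0, N), lsq_solver='lsmr')
        print(result.status, result.cost)
        print(result.x[:5])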

    Read the article

  • RHEL5: Can't create sparse file bigger than 256GB in tmpfs

    - by John Kugelman
    /var/log/lastlog gets written to when you log in. The size of this file is based off of the largest UID in the system. The larger the maximum UID, the larger this file is. Thankfully it's a sparse file so the size on disk is much smaller than the size ls reports (ls -s reports the size on disk). On our system we're authenticating against an Active Directory server, and the UIDs users are assigned end up being really, really large. Like, say, UID 900,000,000 for the first AD user, 900,000,001 for the second, etc. That's strange but should be okay. It results in /var/log/lastlog being huuuuuge, though--once an AD user logs in lastlog shows up as 280GB. Its real size is still small, thankfully. This works fine when /var/log/lastlog is stored on the hard drive on an ext3 filesystem. It breaks, however, if lastlog is stored in a tmpfs filesystem. Then it appears that the max file size for any file on the tmpfs is 256GB, so the sessreg program errors out trying to write to lastlog. Where is this 256GB limit coming from, and how can I increase it? As a simple test for creating large sparse files I've been doing: dd if=/dev/zero of=sparse-file bs=1 count=1 seek=300GB I've tried Googling for "tmpfs max file size", "256GB filesystem limit", "linux max file size", things like that. I haven't been able to find much. The only mention of 256GB I can find is that ext3 filesystems with 2KB blocks are limited to 256GB files. But our hard drives are formatted with 4K blocks so that doesn't seem to be it--not to mention this is happening in a tmpfs mounted ON TOP of the hard drive so the ext3 partition shouldn't be a factor. This is all happening on a 64-bit Red Hat Enterprise Linux 5.4 system. Interestingly, on my personal development machine, which is a 32-bit Fedora Core 6 box, I can create 300GB+ files in tmpfs filesystems no problem. On the RHEL5.4 systems it is no go.

    Read the article

  • Make huge space savings by using SPARSE columns

    - by simonsabin
    I’ve blogged before about Getting more than 1024 columns on a table; this is done by using sparse columns. Whilst this is potentially useful for people with insane table designs, sparse columns aren’t just for this. My experience over the past few years has shown that sparse columns are useful for almost all databases when you have columns that are largely null, i.e. sparse. A recent client was able to reduce the size of the table by 60% by changing columns to sparse. The way this is achieved is...(read more)

    Read the article

  • Sparse quadratic program solver

    - by Jacob
    This great SO answer points to a good sparse solver, but I've got constraints on x (for Ax = b) such that each element in x is >=0 and <=N. The first thing which comes to mind is a QP solver for large sparse matrices. Also, A is huge (around 2e6 x 2e6) but very sparse, with <=4 elements per row. Any ideas/recommendations?
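
    SciPy has no sparse QP solver built in; one commonly used third-party option (an assumption here, not something named in the question) is OSQP, which takes problems of the form min 0.5*x'Px + q'x subject to l <= Gx <= u with sparse CSC matrices. A rough sketch with placeholder data:

        # Sketch of a bounded sparse QP with the third-party OSQP solver
        # (an assumed choice; install with `pip install osqp`).
        import numpy as np
        import scipy.sparse as sp
        import osqp

        n = 1000
        A = sp.random(n, n, density=4 / n, format='csc', random_state=0)
        b = np.random.default_rng(0).standard_normal(n)
        N = 10.0

        P = sp.triu(2.0 * (A.T @ A)).tocsc()     # quadratic term of ||Ax - b||^2
        q = -2.0 * (A.T @ b)
        G = sp.identity(n, format='csc')         # box constraints 0 <= x <= N
        l = np.zeros(n)
        u = np.full(n, N)

        prob = osqp.OSQP()
        prob.setup(P=P, q=q, A=G, l=l, u=u, verbose=False)
        res = prob.solve()
        print(res.info.status, res.x[:5])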

    Read the article

  • Can I choose a sparse file as vdev for a zfs pool?

    - by Karl Richter
    man zpool states that a vdev for a zfs pool can be a "regular file". Can I specify a sparse file (the warning that the integrity of the file is determined by the underlying filesystem should apply just as much to a sparse file)? The ZFS administration guide on https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ states that file vdevs "must be preallocated, and not sparse files or thin provisioned" (thanks to @jlliagre). On https://wiki.archlinux.org/index.php/Experimenting_with_ZFS sparse files are used without any comment.

    Read the article

  • Representing Sparse Data in PostgreSQL

    - by Chris S
    What's the best way to represent a sparse data matrix in PostgreSQL? The two obvious methods I see are:

    1. Store data in a single table with a separate column for every conceivable feature (potentially millions), but with a default value of NULL for unused features. This is conceptually very simple, but I know that with most RDBMS implementations this is typically very inefficient, since the NULL values usually take up some space. However, I read an article (can't find its link unfortunately) that claimed PG doesn't use space for NULL values, making it better suited for storing sparse data.

    2. Create separate "row" and "column" tables, as well as an intermediate table to link them and store the value for the column at that row. I believe this is the more traditional RDBMS solution, but there's more complexity and overhead associated with it.

    I also found PostgreDynamic, which claims to better support sparse data, but I don't want to switch my entire database server to a PG fork just for this feature. Are there any other solutions? Which one should I use?

    Read the article

  • Learning about sparse columns and filtered indexes

    - by Christian
    I’ve been brushing up on sparse columns and filtered indexes recently and two resources stood out for me as indispensable, so I thought I’d share them. Those of you studying for Microsoft Certified Master: SQL Server will no doubt have found the Readiness Videos published on TechNet, and Kimberly Tripp’s (Blog|Twitter) webcast in this series on Sparse columns provides an excellent resource showing different schema designs and specifically where sparse columns fit in. MCM Readiness Video on Sparse columns by Kimberly Tripp http://technet.microsoft.com/en-us/sqlserver/ff977043 The second resource is a session from this year’s PASS Summit (2010) by Don Vilen (Twitter) called Filtered Indexes, Sparse Columns: Together, Separately (AD203). I thought this session was great, and in combination with Kimberly’s webcast it provides the perfect background for anyone wanting to learn this topic, especially for those studying for the MCM knowledge exam. If you attended PASS you should have a login to stream Don’s session, but if not you can buy the official DVDs from http://www.sqlpass.org The DVDs are a worthy investment considering how much material you get access to!

    Regards,
    Christian

    Christian Bolton - MCA, MCM, MVP
    Technical Director
    http://coeo.com - SQL Server Consulting & Managed Services
    twitter: @christianbolton

    Read the article

  • Sparse linear program solver

    - by Jacob
    This great SO answer points to a good sparse solver, but I've got constraints on x (for Ax = b) such that each element in x is >=0 and <=N. The first thing which comes to mind is an LP solver for large sparse matrices. Any ideas/recommendations?
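
    One option, assuming SciPy is acceptable: scipy.optimize.linprog with the HiGHS backend accepts sparse constraint matrices and per-variable bounds. The data below are placeholders, and a dedicated LP solver may still be needed at 2e6 x 2e6:

        # Sketch: sparse LP with SciPy's HiGHS backend; bounds encode 0 <= x <= N.
        import numpy as np
        import scipy.sparse as sp
        from scipy.optimize import linprog

        n = 1000
        A_ub = sp.random(n, n, density=4 / n, format='csr', random_state=0)
        b_ub = np.ones(n)
        c = np.random.default_rng(0).standard_normal(n)
        N = 10.0

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0.0, N), method='highs')
        print(res.status, res.message)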

    Read the article

  • Sparse checkouts and svn:externals

    - by JesperE
    I'm trying to do a sparse checkout of a folder containing externals, but none of the externals are being checked out. This issue seems to indicate that this behavior may be by design, or at least that it isn't clear what the behavior should be. From my point of view, the obvious behavior is that externals are treated just as any other directory, and checked out following the same sparse checkout rules. Is there a way to work around this except manually checking out the externals?

    Read the article

  • Vectorization of index operation for a scipy.sparse matrix

    - by celil
    The following code runs too slowly even though everything seems to be vectorized.

        from numpy import *
        from scipy.sparse import *

        n = 100000
        i = xrange(n)
        j = xrange(n)
        data = ones(n)
        A = csr_matrix((data, (i, j)))
        x = A[i, j]

    The problem seems to be that the indexing operation is implemented as a python function, and invoking A[i,j] results in the following profiling output:

        500033 function calls in 8.718 CPU seconds

        Ordered by: internal time

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
        100000    7.933    0.000    8.156    0.000  csr.py:265(_get_single_element)
             1    0.271    0.271    8.705    8.705  csr.py:177(__getitem__)
        (...)

    Namely, the python function _get_single_element gets called 100000 times, which is really inefficient. Why isn't this implemented in pure C? Does anybody know of a way of getting around this limitation and speeding up the above code? Should I be using a different sparse matrix type?
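
    For the specific pattern in the snippet (i and j are the same range, so A[i, j] is just the diagonal), the scalar __getitem__ path can be avoided entirely; a sketch of two workarounds:

        # Sketch: avoid per-element __getitem__ on a CSR matrix.
        import numpy as np
        from scipy.sparse import csr_matrix

        n = 100000
        i = np.arange(n)
        j = np.arange(n)
        data = np.ones(n)
        A = csr_matrix((data, (i, j)))

        x = A.diagonal()                  # fast: i == j here, so this is A[i, j]

        # More generally, a dictionary built from the COO representation avoids
        # the slow scalar lookup path:
        coo = A.tocoo()
        lookup = dict(zip(zip(coo.row.tolist(), coo.col.tolist()), coo.data))
        x2 = np.array([lookup.get((r, c), 0.0) for r, c in zip(i, j)])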

    Read the article

  • Sparse checkout in Git 1.7.0?

    - by davr
    With the new sparse checkout feature in Git 1.7.0, is it possible to just get the contents of a subdirectory like how you can in SVN? I found this example, but it preserves the full directory structure. Imagine that I just wanted the contents of the 'perl' directory, without an actual directory named 'perl'.

    -- EDIT --

    Example: My git repository contains the following paths:

        repo/.git/
        repo/perl/
        repo/perl/script1.pl
        repo/perl/script2.pl
        repo/images/
        repo/images/image1.jpg
        repo/images/image2.jpg
        repo/doc/
        repo/doc/readme.txt
        repo/doc/help.txt

    What I want is to be able to produce from the above repository this layout:

        repo/.git/
        repo/script1.pl
        repo/script2.pl

    However with the current sparse checkout feature, it seems like it is only possible to get

        repo/.git/
        repo/perl/script1.pl
        repo/perl/script2.pl

    which is NOT what I want. Thanks.

    Read the article

  • Discrete and Continuous Classifier on Sparse Data

    - by Chris S
    I'm trying to classify an example, which contains discrete and continuous features. Also, the example represents sparse data, so even though the system may have been trained on 100 features, the example may only have 12. What would be the best classifier algorithm to use to accomplish this? I've been looking at Bayes, Maxent, Decision Tree, and KNN, but I'm not sure any fit the bill exactly. The biggest sticking point I've found is that most implementations don't support sparse data sets and both discrete and continuous features. Can anyone recommend an algorithm and implementation (preferably in Python) that fits these criteria? Libraries I've looked at so far include:

    - Orange (Mostly academic. Implementations not terribly efficient or practical.)
    - NLTK (Also academic, although has a good Maxent implementation, but doesn't handle continuous features.)
    - Weka (Still researching this. Seems to support a broad range of algorithms, but has poor documentation, so it's unclear what each implementation supports.)
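
    One library not on the list above, mentioned here as a suggestion rather than a requirement, is scikit-learn: its DictVectorizer turns dicts of mixed string/numeric features into a sparse matrix (absent keys simply stay zero), and linear models such as LogisticRegression accept that sparse input directly. A sketch with made-up data:

        # Sketch: mixed discrete/continuous sparse features with scikit-learn.
        from sklearn.feature_extraction import DictVectorizer
        from sklearn.linear_model import LogisticRegression

        # Each example is a dict; missing features are simply absent (sparse).
        train = [
            {"color": "red",  "weight": 1.2, "country": "us"},
            {"color": "blue", "weight": 0.4},
            {"weight": 2.5,   "country": "de", "shape": "round"},
        ]
        labels = [1, 0, 1]

        vec = DictVectorizer(sparse=True)   # strings one-hot encoded, numbers pass through
        X = vec.fit_transform(train)        # scipy.sparse matrix

        clf = LogisticRegression(max_iter=1000)
        clf.fit(X, labels)

        test = [{"color": "red", "weight": 0.9}]
        print(clf.predict(vec.transform(test)))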

    Read the article

  • create a sparse BufferedImage in java

    - by elgcom
    I have to create an image with very large resolution, but the image is relatively "sparse", only some areas in the image need to draw. For example with following code /* this take 5GB memory */ final BufferedImage img = new BufferedImage( 36000, 36000, BufferedImage.TYPE_INT_ARGB); /* draw something */ Graphics g = img.getGraphics(); g.drawImage(....); /* output as PNG */ final File out = new File("out.png"); ImageIO.write(img, "png", out); The PNG image on the end I created is ONLY about 200~300 MB. The question is how can I avoid creating a 5GB BufferedImage at the beginning? I do need an image with large dimension, but with very sparse color information. Is there any Stream for BufferedImage so that it will not take so much memory?

    Read the article

  • Decode sparse json array to php array

    - by Isaac Sutherland
    I can create a sparse php array (or map) using the command: $myarray = array(10=>'hi','test20'=>'howdy'); I want to serialize/deserialize this as JSON. I can serialize it using the command: $json = json_encode($myarray); which results in the string {"10":"hi","test20":"howdy"}. However, when I deserialize this and cast it to an array using the command: $mynewarray = (array)json_decode($json); I seem to lose any mappings with keys which were not valid php identifiers. That is, $mynewarray has the mapping 'test20'=>'howdy', but not 10=>'hi' nor '10'=>'hi'. Is there a way to preserve the numerical keys in a php map when converting to and from json using the standard json_encode / json_decode functions?

    Read the article

  • Sparse O(1) array with indices being consecutive products

    - by Kos
    Hello, I'd like to pre-calculate an array of values of some unary function f. I know that I'll only need the values of f(x) where x is of the form a*b, where both a and b are integers in the range 0..N. The obvious time-optimized choice is to make an array of size N*N and pre-calculate only the elements which I'm going to read later. For f(a*b), I'd just check and set tab[a*b]. This is the fastest method possible - however, this is going to take a lot of space, as there are lots of indices in this array (starting with N+1) which will never be touched. Another solution is to make a simple tree map... but this slows down the lookup itself very heavily by introducing lots of branches. No. I wonder - is there any solution to make such an array less sparse and smaller, but still quick, branchless, O(1) in lookup?
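
    If the branchless requirement can be relaxed to average-case O(1), the simplest compact structure is a hash table keyed by the product itself; a sketch, with f and N as placeholders:

        # Sketch: store f only at the product values that can actually occur.
        # A plain dict gives average O(1) lookup at the cost of hashing (so not
        # strictly branchless); f and N here are stand-ins.
        def f(x):
            return x * x + 1              # stand-in for the real unary function

        N = 1000
        table = {}
        for a in range(N + 1):
            for b in range(a, N + 1):     # products are symmetric in a and b
                p = a * b
                if p not in table:
                    table[p] = f(p)

        def lookup(a, b):
            return table[a * b]

        print(lookup(17, 23) == f(17 * 23))   # True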

    Read the article

  • MATLAB: Convert two arrays to a sparse matrix

    - by CziX
    I'm looking for a command or trick to convert two arrays into a sparse matrix. The two arrays contain x-values and y-values, which give a coordinate in the cartesian coordinate system. I want to group the coordinates whose values lie between certain bounds on the x-axis and the y-axis.

        % MATLAB
        x_i = find(x > 0.1 & x < 0.9);
        y_i = find(y > 0.4 & y < 0.8);
        % Then I want to find indices which are located in both x_i and y_i

    Is there an easy way to do this little trick?
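
    In MATLAB, intersect(x_i, y_i) or a single combined logical mask should do it; for comparison, the same idea sketched in Python/NumPy (the data here are placeholders):

        # Rough NumPy analogue of the MATLAB snippet: one logical mask picks the
        # coordinates that satisfy both the x and the y condition.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.random(1000)
        y = rng.random(1000)

        mask = (x > 0.1) & (x < 0.9) & (y > 0.4) & (y < 0.8)
        idx = np.flatnonzero(mask)        # indices that are in both "x_i" and "y_i"
        print(idx[:10])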

    Read the article

  • Find the "largest" dense sub matrix in a large sparse matrix

    - by BCS
    Given a large sparse matrix (say 10k+ by 1M+) I need to find a subset, not necessarily continuous, of the rows and columns that form a dense matrix (all non-zero elements). I want this sub matrix to be as large as possible (not the largest sum, but the largest number of elements) within some aspect ratio constraints. Are there any known exact or approximate solutions to this problem? A quick scan on Google seems to give a lot of close-but-not-exactly results. What terms should I be looking for?

    edit: Just to clarify; the sub matrix need not be continuous. In fact the row and column order is completely arbitrary, so adjacency is completely irrelevant.

    A thought based on Chad Okere's idea:

    1. Order the rows from largest count to smallest count (not necessary but might help perf)
    2. Select two rows that have a "large" overlap
    3. Add all other rows that won't reduce the overlap
    4. Record that set
    5. Add whatever row reduces the overlap by the least
    6. Repeat at #3 until the result gets too small
    7. Start over at #2 with a different starting pair
    8. Continue until you decide the result is good enough
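
    A rough sketch of the greedy idea above (rows as sets of occupied columns, grow a candidate set while tracking the shared columns); it is only a heuristic, it omits the overlap-shrinking refinement in steps 5-6, and min_cols is a placeholder threshold:

        # Heuristic sketch of the greedy approach described above.  Rows are sets of
        # occupied column indices; "overlap" is the set of columns shared by every
        # row chosen so far.
        from itertools import combinations

        def dense_submatrix(rows, min_cols=2):
            # rows: dict row_id -> set of occupied column indices
            best = (0, None)
            for a, b in combinations(rows, 2):                 # step 2: seed pair
                overlap = rows[a] & rows[b]
                if len(overlap) < min_cols:
                    continue
                chosen = {a, b}
                # step 3: add rows that do not shrink the overlap
                for r, cols in rows.items():
                    if r not in chosen and overlap <= cols:
                        chosen.add(r)
                size = len(chosen) * len(overlap)              # number of elements
                if size > best[0]:
                    best = (size, (sorted(chosen), sorted(overlap)))
            return best[1]

        rows = {0: {1, 2, 3}, 1: {2, 3, 4}, 2: {2, 3}, 3: {7, 8}}
        print(dense_submatrix(rows))       # rows [0, 1, 2] share columns [2, 3]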

    Read the article

  • Sparse (Pseudo) Infinite Grid Data Structure for Web Game

    - by Ming
    I'm considering trying to make a game that takes place on an essentially infinite grid.

    - The grid is very sparse.
    - Certain small regions of relatively high density.
    - Relatively few isolated nonempty cells.
    - The amount of the grid in use is too large to implement naively but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that).
    - This needs to be easy to persist.

    Here are the operations I may want to perform (reasonably efficiently) on this grid:

    - Ask for some small rectangular region of cells and all their contents (a player's current neighborhood)
    - Set individual cells or blit small regions (the player is making a move)
    - Ask for the rough shape or outline/silhouette of some larger rectangular regions (a world map or region preview)
    - Find some regions with approximately a given density (player spawning location)
    - Approximate shortest path through gaps of at most some small constant empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction searching)
    - Approximate convex hull for a region

    Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) and relatively little external dependency (preferably avoiding the need for a persistent process).

    Guys, what advice can you give me on actually implementing this? How would you do this if the web-app restrictions weren't in place? How would you modify that if they were? Thanks a lot, everyone!
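
    A minimal sketch of the core storage idea, a dictionary of fixed-size chunks keyed by chunk coordinates, which maps naturally onto a relational table (chunk_x, chunk_y, payload) for persistence; the chunk size and the class/method names are arbitrary choices, not an established API:

        # Minimal sketch: sparse infinite grid stored as a dict of fixed-size chunks.
        # CHUNK is an arbitrary choice; each chunk could become one row in a database.
        CHUNK = 64

        class SparseGrid:
            def __init__(self):
                self.chunks = {}                       # (cx, cy) -> {(x, y): value}

            def set(self, x, y, value):
                key = (x // CHUNK, y // CHUNK)
                self.chunks.setdefault(key, {})[(x, y)] = value

            def get(self, x, y, default=None):
                chunk = self.chunks.get((x // CHUNK, y // CHUNK))
                return chunk.get((x, y), default) if chunk else default

            def region(self, x0, y0, x1, y1):
                # yield occupied cells in the rectangle [x0, x1) x [y0, y1)
                for cx in range(x0 // CHUNK, (x1 - 1) // CHUNK + 1):
                    for cy in range(y0 // CHUNK, (y1 - 1) // CHUNK + 1):
                        for (x, y), v in self.chunks.get((cx, cy), {}).items():
                            if x0 <= x < x1 and y0 <= y < y1:
                                yield (x, y), v

        g = SparseGrid()
        g.set(10**9, -5, "rock")
        print(g.get(10**9, -5), list(g.region(10**9 - 2, -8, 10**9 + 2, 0)))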

    Read the article

  • best way to get a vector from sparse matrix

    - by niko
    Hi, I have an m x n matrix where each column consists of zeros and a single repeated value. An example would be:

        M = -0.6  1.8  -2.3    0     0     0;
               0    0     0  3.4  -3.8  -4.3;
            -0.6    0     0  3.4     0     0

    In this example the first column consists of 0s and -0.6, the second of 0 and 1.8, the third of -2.3, and so on. In such a case I would like to reduce m to 1 (get a vector from the given matrix), so in this example the vector would be (-0.6 1.8 -2.3 3.4 -3.8 -4.3). Does anyone know what is the best way to get a vector from such a matrix? Thank you!
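
    Assuming a NumPy-style dense matrix (the MATLAB equivalent would be analogous), one way is to take, per column, the entry with the largest absolute value, which is exactly the repeated nonzero value whenever the column has one; a sketch:

        # Sketch: per column, pick the entry with the largest absolute value,
        # which is the (single) nonzero value whenever the column has one.
        import numpy as np

        M = np.array([[-0.6, 1.8, -2.3, 0.0,  0.0,  0.0],
                      [ 0.0, 0.0,  0.0, 3.4, -3.8, -4.3],
                      [-0.6, 0.0,  0.0, 3.4,  0.0,  0.0]])

        rows = np.abs(M).argmax(axis=0)              # row index of the dominant entry
        vec = M[rows, np.arange(M.shape[1])]
        print(vec)                                   # [-0.6  1.8 -2.3  3.4 -3.8 -4.3]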

    Read the article

  • Efficiently solving sparse matrices

    - by anon
    For solving sparse matrices, in general, how big does the matrix have to be (as a rule of thumb) for methods like conjugate gradient to be faster than brute-force solvers (that do not take advantage of sparsity)?
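
    There is no universal cutoff; it depends on the sparsity pattern, conditioning, and the solver, so the usual advice is to measure. A rough benchmarking sketch with SciPy, where the size, density, and SPD construction are placeholders:

        # Rough benchmark sketch: dense solve vs. sparse direct vs. conjugate gradient.
        import time
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve, cg

        n = 2000
        A = sp.random(n, n, density=0.001, format='csr', random_state=0)
        A = A @ A.T + sp.identity(n)          # make it symmetric positive definite
        b = np.ones(n)

        t = time.perf_counter()
        x_dense = np.linalg.solve(A.toarray(), b)
        print("dense solve :", time.perf_counter() - t)

        t = time.perf_counter()
        x_sparse = spsolve(A.tocsc(), b)
        print("sparse solve:", time.perf_counter() - t)

        t = time.perf_counter()
        x_cg, info = cg(A, b)
        print("CG          :", time.perf_counter() - t, "converged" if info == 0 else info)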

    Read the article

  • Large sparse (stiff) ODE system needed for testing

    - by macydanim
    I hope this is the right place for this question. I have been working on a sparse stiff implicit ODE solver and have finished the code so far. I now tested the solver with the Van der Pol equation, and another stiff problem, which is of dimension 4. But to perform better tests I am searching for a bigger system. I'm thinking of the order N = 100...1000, if possible stiff and sparse. Does anybody have an example I could use? I really don't know where to search.
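
    One standard way to generate a stiff, sparse test system of any size N is a method-of-lines discretisation of a 1-D heat/reaction-diffusion equation: the Jacobian is tridiagonal and the stiffness grows with N. A sketch with arbitrary coefficients:

        # Sketch: 1-D reaction-diffusion equation discretised in space gives a
        # stiff ODE system of any size N with a tridiagonal (very sparse) Jacobian.
        import numpy as np
        import scipy.sparse as sp

        def make_system(N, D=1.0):
            h = 1.0 / (N + 1)
            # Second-difference Laplacian with zero Dirichlet boundaries.
            L = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N), format='csr') * (D / h**2)

            def rhs(t, u):
                return L @ u + u * (1.0 - u)      # mild nonlinear reaction term

            def jac(t, u):
                return L + sp.diags(1.0 - 2.0 * u)

            u0 = np.sin(np.pi * np.linspace(h, 1.0 - h, N))
            return rhs, jac, u0

        rhs, jac, u0 = make_system(N=500)
        # e.g. scipy.integrate.solve_ivp(rhs, (0, 1), u0, method='BDF', jac=jac)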

    Read the article

  • Scipy sparse... arrays?

    - by spitzanator
    Hey, folks. So, I'm doing some Kmeans classification using numpy arrays that are quite sparse-- lots and lots of zeroes. I figured that I'd use scipy's 'sparse' package to reduce the storage overhead, but I'm a little confused about how to create arrays, not matrices. I've gone through this tutorial on how to create sparse matrices: http://www.scipy.org/SciPy_Tutorial#head-c60163f2fd2bab79edd94be43682414f18b90df7 To mimic an array, I just create a 1xN matrix, but as you may guess, Asp.dot(Bsp) doesn't quite work because you can't multiply two 1xN matrices. I'd have to transpose each array to Nx1, and that's pretty lame, since I'd be doing it for every dot-product calculation. Next up, I tried to create an NxN matrix where column 1 == row 1 (such that you can multiply two matrices and just take the top-left corner as the dot product), but that turned out to be really inefficient. I'd love to use scipy's sparse package as a magic replacement for numpy's array(), but as yet, I'm not really sure what to do. Any advice? Thank you very much!
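
    For the specific pain point, the dot product of two 1xN sparse row vectors, one way to avoid a transpose per call is an elementwise multiply-and-sum; a sketch:

        # Sketch: dot product of two 1xN sparse row vectors without materialising
        # dense arrays; multiply elementwise and sum, or transpose one operand.
        import numpy as np
        from scipy.sparse import csr_matrix

        a = csr_matrix(np.array([[0.0, 1.0, 0.0, 2.0, 0.0]]))
        b = csr_matrix(np.array([[3.0, 0.0, 0.0, 4.0, 0.0]]))

        dot1 = a.multiply(b).sum()        # elementwise product, then sum -> 8.0
        dot2 = (a @ b.T)[0, 0]            # 1xN times Nx1 gives a 1x1 matrix
        print(dot1, dot2)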

    Read the article
