Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

Page 261/361 | < Previous Page | 257 258 259 260 261 262 263 264 265 266 267 268  | Next Page >

  • building a pairwise matrix in scipy/numpy in Python from dictionaries

    - by user248237
    I have a dictionary whose keys are strings and values are numpy arrays, e.g.:

        data = {'a': array([1,2,3]), 'b': array([4,5,6]), 'c': array([7,8,9])}

    I want to compute a statistic between all pairs of values in data and build an n by n matrix that stores the result. Assume that I know the order of the keys, i.e. I have a list of labels:

        labels = ['a', 'b', 'c']

    What's the most efficient way to compute this matrix? I can compute the statistic for all pairs like this:

        result = []
        for elt1, elt2 in itertools.product(labels, labels):
            result.append(compute_statistic(data[elt1], data[elt2]))

    But I want result to be an n by n matrix, corresponding to labels by labels. How can I record the results as this matrix? Thanks.
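
    One way to record the results directly as an n by n matrix (a sketch; compute_statistic here is only a placeholder for the questioner's real function):

        import itertools
        import numpy as np

        data = {'a': np.array([1, 2, 3]), 'b': np.array([4, 5, 6]), 'c': np.array([7, 8, 9])}
        labels = ['a', 'b', 'c']

        def compute_statistic(x, y):
            return float(np.dot(x, y))   # placeholder; the real statistic goes here

        n = len(labels)
        result = np.empty((n, n))
        for i, j in itertools.product(range(n), range(n)):
            result[i, j] = compute_statistic(data[labels[i]], data[labels[j]])
        # result[i, j] now holds the statistic for the pair (labels[i], labels[j])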

    Read the article

  • python dictionary with constant value-type

    - by s.kap
    Hi there, I bumped into a case where I need a big (= huge) Python dictionary, which turned out to be quite memory-consuming. However, since all of the values are of a single type (long), as are the keys, I figured I can use a Python (or numpy, doesn't really matter) array for the values, and wrap the needed interface (in: x; out: d[x]) with an object which actually uses these arrays for key and value storage. I can use an index-conversion object (input -> index in 1..n, where n is the number of distinct keys) and return array[index]. I can elaborate on some techniques for implementing such an indexing method with a reasonable memory requirement; it works, and even pretty well. However, I wonder whether such a data-structure object already exists (in Python, or wrapped for Python from C/C++) in any package (I checked collections, and did some Google searches). Any comment will be welcome, thanks.
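
    A rough sketch of the kind of wrapper described, with two parallel sorted numpy arrays and binary search standing in for the hash table (an illustration of the idea, not an existing package):

        import numpy as np

        class IntIntMap:
            """Read-only int -> int map backed by two parallel sorted numpy arrays."""
            def __init__(self, keys, values):
                order = np.argsort(keys)
                self._keys = np.asarray(keys, dtype=np.int64)[order]
                self._values = np.asarray(values, dtype=np.int64)[order]

            def __getitem__(self, key):
                i = np.searchsorted(self._keys, key)
                if i == len(self._keys) or self._keys[i] != key:
                    raise KeyError(key)
                return int(self._values[i])

        d = IntIntMap([10, 3, 7], [100, 30, 70])
        print(d[7])  # 70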

    Read the article

  • Vim: Replacing a line with another one yanked before

    - by duddle
    At least once per day I have the following situation:

        A: This line should also replace line X
        ...
        X: This line should be replaced

    I believe that I don't perform that task efficiently. What I do:

        Go to line A:     AG
        Yank line A:      yy
        Go to line X:     XG
        Paste line A:     P
        Move to old line: j
        Delete old line:  dd

    This has the additional disadvantage that line X is now in the default register, which is annoying if I find another line that should be replaced with A. Yanking to and pasting from an additional register ("ayy, "aP) makes this simple task even less efficient. My questions: Did I miss a built-in Vim command to replace a line with one yanked before? If not, how can I bind my own command that leaves (or restores) the yanked line in the default register?

    Read the article

  • Speed-up of readonly MyISAM table

    - by Ozzy
    We have a large MyISAM table that is used to archive old data. This archiving is performed every month, and except for these occasions data is never written to the table. Is there any way to "tell" MySQL that this table is read-only, so that MySQL might optimize the performance of reads from this table? I've looked at the MEMORY storage engine, but the problem is that this table is so large that it would take up a large portion of the server's memory, which I don't want. Hope my question is clear enough; I'm a novice when it comes to DB administration, so any input or suggestions are welcome.

    Read the article

  • Correct structure and way of website versioning

    - by Saif Bechan
    Recently I started using Git to version my website. It makes it really easy to see how my project develops, and I always have safe backups in different places on the web. Now my main question is whether it is recommended to version the whole root of the website. I have a basic structure that looks something like this:

        /httpdocs
        /config
        /media
        /application
        index.php
        .htaccess

    1) Should I use the /httpdocs folder for versioning, or should I use the contents of the folder? 2) Is it recommended to version the media folder? In the media folder I have several images for the overall layout, and some other images for the website. These images can be quite large. I work on these images from time to time and so they change. I hardly ever need an old image again, so isn't this just taking up precious storage space? I would highly appreciate some basic recommendations on this topic.

    Read the article

  • mysql does not utilize my cpu and ram enough?

    - by vick
    Hello everyone! I am importing a 2.5 GB CSV file into a MySQL table. My storage engine is InnoDB. Here is the script:

        use xxx;
        DROP TABLE IF EXISTS `xxx`.`xxx`;
        CREATE TABLE `xxx`.`xxx` (
            `xxx_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
            `name` varchar(128) NOT NULL,
            `yy` varchar(128) NOT NULL,
            `yyy` varchar(64) NOT NULL,
            `yyyy` varchar(2) NOT NULL,
            `yyyyy` varchar(10) NOT NULL,
            `url` varchar(64) NOT NULL,
            `p` varchar(10) NOT NULL,
            `pp` varchar(10) NOT NULL,
            `category` varchar(256) NOT NULL,
            `flag` varchar(4) NOT NULL,
            PRIMARY KEY (`xxx_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        set autocommit = 0;

        load data local infile '/home/xxx/raw.csv' into table company
            fields terminated by ',' optionally enclosed by '"'
            lines terminated by '\r\n'
            ( name, yy, yyy, yyyy, yyyyy, url, p, pp, category, flag );

        commit;

    Why does my PC (Core i7 920 with 6 GB RAM) only use 9% CPU and 60% RAM when running these queries?

    Read the article

  • Android and PHP - Do I need to use sessions?

    - by jtnire
    I have created an Android app that communicates with a PHP web server. They both send JSON to each other. My app is almost finished; however, there is one thing left to do: authentication. Since the user's username and password will be stored in Android SharedPreferences, is there any need to use PHP sessions, given that the user won't need to enter the username/password at every request? Since I can just send the username and password in the HTTP POST header for every request, and I will be using SSL, is this sufficient? I guess I could add an extra field in the header called 'random' that just adds a random value, to use as a salt so that the encrypted SSL payload will be different every time. The reason why I don't want to use sessions is that my Android app would either have to handle cookies or manage the storage of the session ID. If there are some serious cons to using my method above, then I'm more than happy to use sessions; however, all advice is appreciated. Thanks

    Read the article

  • How to search for closest value in a lookup table?

    - by CSharperWithJava
    I have a simple one-dimensional array of integer values that represent a physical set of part values I have to work with. I then calculate an ideal value mathematically. How could I write an efficient search algorithm that will find the smallest absolute difference from my ideal value in the array? The array is predetermined and constant, so it can be sorted however I need. Example lookup array: 100, 152, 256, 282, 300. Searching for an ideal value of 125 would find 100 in the array, whereas 127 would find 152. The actual lookup array will be about 250 items long and never change.
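
    The usual approach on a sorted array is a binary search followed by a comparison of the two neighbouring candidates; a sketch in Python (the thread itself is about C#, so this only illustrates the algorithm):

        import bisect

        values = [100, 152, 256, 282, 300]  # must be sorted

        def closest(values, target):
            i = bisect.bisect_left(values, target)
            if i == 0:
                return values[0]
            if i == len(values):
                return values[-1]
            before, after = values[i - 1], values[i]
            return before if target - before <= after - target else after

        print(closest(values, 125))  # 100
        print(closest(values, 127))  # 152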

    Read the article

  • Database Design Question: GUID + Natural Numbers

    - by Alan
    For a database I'm building, I've decided to use natural numbers as the primary key. I'm aware of the advantages that GUIDs allow, but looking at the data, the bulk of each row's data was GUID keys. I want to generate XML records from the database data, and one problem with natural numbers is that I don't want to expose my database keys to the outside world and allow users to guess "keys." I believe GUIDs solve this problem. So, I think the solution is to generate a sparse, unique ID derived from the natural ID (hopefully it would be two-way), or just add an extra column in the database and store a GUID (or some other multibyte ID). The derived value is nicer because there is no storage penalty, but it would be easier to reverse and guess compared to a GUID. I'm curious as to what others on SO have done, and what insights they have.
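
    One hedged sketch of what a two-way derived ID could look like: multiply the integer key by a large odd constant modulo 2**64 and recover it with the modular inverse. This only scrambles the sequence and can still be reversed by a determined attacker, which is exactly the trade-off the questioner notes; a stored GUID column remains the safer option.

        K = 0x9E3779B97F4A7C15          # large odd constant, so it is invertible mod 2**64
        K_INV = pow(K, -1, 2 ** 64)     # modular inverse (Python 3.8+)

        def obfuscate(n):
            return (n * K) % 2 ** 64

        def deobfuscate(x):
            return (x * K_INV) % 2 ** 64

        public_id = obfuscate(42)
        print(public_id, deobfuscate(public_id))  # prints the scrambled ID and 42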

    Read the article

  • How does Photoshop (or drawing programs) blit?

    - by user146780
    I'm getting ready to make a drawing application in Windows. I'm just wondering, do drawing programs have a memory bitmap which they lock, then set each pixel, then blit? I don't understand how Photoshop can move entire layers without lag or flicker without using hardware acceleration. Also, in a program like Expression Design, I could have 200 shapes and move them around all at once with no lag. I'm really wondering how this can be done without GPU help. I don't think super-efficient algorithms alone could explain that. Thanks

    Read the article

  • Copy an array backwards? Array.Copy?

    - by daniel
    I have a List<T> that I want to be able to copy to an array backwards, meaning start from List.Count and copy maybe 5 items from the end of the list, working backwards. I could do this with a simple reverse for loop; however, there is probably a faster/more efficient way of doing this, so I thought I should ask. Can I use Array.Copy somehow? Originally I was using a Queue, as that pops items off in the order I need, but I now need to pop off multiple items at once into an array, and I thought a list would be faster.

    Read the article

  • Secondary keys in a B-tree

    - by Phenom
    Let's say that there is a file that contains an unsorted list of student information, which includes a student ID number as well as other information. I want to make a program that retrieves student information based on student ID number. In order to make it efficient, I store the student IDs in a B-tree. So when I enter a student ID number, it searches the B-tree to see if it's there or not. It also does one more thing: if it finds the student ID number, it also returns where in the file that student's information is. This is the secondary key. The program uses this information to locate the rest of the student's information and prints it to the screen. Can this be done? Is this how a B-tree works?
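
    This is essentially how secondary indexes work: the tree maps the key to the record's location, and the program seeks there. A toy sketch of the scheme in Python, with a plain dict standing in for the B-tree and a hypothetical comma-separated students.txt whose first field is the ID:

        # Build the index: student ID -> byte offset of that record in the data file.
        index = {}                      # a dict stands in for the B-tree here
        with open('students.txt', 'rb') as f:
            while True:
                offset = f.tell()
                line = f.readline()
                if not line:
                    break
                student_id = line.split(b',')[0].decode()
                index[student_id] = offset

        # Lookup: seek straight to the record instead of scanning the whole file.
        def find_student(student_id):
            with open('students.txt', 'rb') as f:
                f.seek(index[student_id])
                return f.readline().decode().rstrip('\n')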

    Read the article

  • Fastest way to sort files

    - by Werner
    Hi, I have a huge text file with lines like:

        -568.563626 159 33 -1109.660591 -1231.295129 4.381508
        -541.181308 159 28 -1019.279615 -1059.115975 4.632301
        -535.370812 155 29 -1033.071786 -1152.907805 4.420473
        -533.547101 157 28 -1046.218277 -1063.389677 4.423696

    What I want is to sort the file on the 5th column, so I would get:

        -568.563626 159 33 -1109.660591 -1231.295129 4.381508
        -535.370812 155 29 -1033.071786 -1152.907805 4.420473
        -533.547101 157 28 -1046.218277 -1063.389677 4.423696
        -541.181308 159 28 -1019.279615 -1059.115975 4.632301

    For this I use:

        for i in file.txt ; do sort -k5n $i ; done

    I wonder if this is the fastest or most efficient way. Thanks

    Read the article

  • Please suggest the best way to design my database

    - by Raymond Ho
    I have a table named "Pages" and a table named "Categories". Each entry of the table "Pages" is linked to the table "Categories". The "Categories" table have 5 entries, they are: "Car", "Websites", "Technology", "Mobile Phones", and "Interest". So each time I put an entry to the "Pages" table, I need to map it to the "Categories" table so are arranged properly. Here's my table: Pages ______ id [PK] name url Categories ______ id [PK] Categoryname Pages2Categories ______ Pages.id Categories.id So my question is, is this the most efficient way to create this kind of relationships between tables? It seems very amateur

    Read the article

  • What is faster: multiple `send`s or using buffering?

    - by dauerbaustelle
    I'm playing around with sockets in C/Python and I wonder what the most efficient way is to send headers from a Python dictionary to the client socket. My ideas:

    - Use a send call for every header. Pros: no memory allocation needed. Cons: many send calls -- probably error-prone; error management would be rather complicated.
    - Use a buffer. Pros: one send call, error checking a lot easier. Cons: need a buffer :-) malloc/realloc should be rather slow, and using a (too) big buffer to avoid realloc calls wastes memory.

    Any tips for me? Thanks :-)
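
    For the Python side, a sketch of the buffered variant (the malloc/realloc concern from the C side mostly disappears, since the pieces can be joined in one step):

        def send_headers(sock, headers):
            # Build one buffer and make a single sendall call instead of one send per header.
            buf = b''.join(
                f'{name}: {value}\r\n'.encode('latin-1')
                for name, value in headers.items()
            ) + b'\r\n'
            sock.sendall(buf)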

    Read the article

  • How can a string timestamp with hours 0 to 24 be parsed?

    - by user897052
    I am trying to parse a string timestamp of format "yyyyMMddHHmmss" with DateTime.ParseExact(). The catch is that I must allow for an hour value of "24" (i.e. hours can be from 0 to 24; note: I can't control the input values), and, of course, that results in an exception. Are there any settings/properties I can set instead of manually parsing or using regexes? If not, any efficient parsing ideas? Example:

        DateTime.ParseExact("20120911240000", "yyyyMMddHHmmss", System.Globalization.CultureInfo.InvariantCulture);

    Hour 24 means hour 0 of the next day (so day + 1, hour = 0).
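
    A sketch of the normalization the questioner mentions at the end (hour 24 becomes hour 0 of the next day), shown in Python for illustration; the same string pre-processing could be done before calling DateTime.ParseExact:

        from datetime import datetime, timedelta

        def parse_timestamp(s):
            # 'yyyyMMddHHmmss' with hours 00..24
            if s[8:10] == '24':
                return datetime.strptime(s[:8] + '00' + s[10:], '%Y%m%d%H%M%S') + timedelta(days=1)
            return datetime.strptime(s, '%Y%m%d%H%M%S')

        print(parse_timestamp('20120911240000'))  # 2012-09-12 00:00:00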

    Read the article

  • Pass Session data to a Class Library without using a bunch of constructors?

    - by sah302
    Hi all, I've got an application where literally every object has a lastUpdatedBy property. The information I put in here is the person's username, which is retrieved from the session("username") variable. How can I pass this data to my DAL in the class library? At first I was just passing the value into each method, but that seemed ridiculous; there should be no reason to do that every time a method is called. Then I thought, if I just put it in a constructor for each of the DAL-related classes, that will make it even easier. However, even then, on any given page I've got a plethora of New() declarations, and on every single line I need to pass in the session username cast as a string. Is there an even more efficient way of doing this, so that I could declare this in one place, everything would know what it is, and I could pass it to classes in a class library?

    Read the article

  • Sql Server XML-type column duplicate entry detection

    - by aaaa bbbb
    In SQL Server I am using an XML-type column to store a message. I do not want to store duplicate messages. I will only have a few messages per user. I am currently querying the table for these messages and converting the XML to a string in my C# code. I then compare the strings with what I am about to insert. Unfortunately, SQL Server pretty-prints the data in XML-typed fields. What you store in the database is not necessarily exactly the same string as what you get back out later. It is functionally equivalent, but may have whitespace removed, etc. Is there an efficient way to compare an XML string that I am considering inserting with those that are already in the database? As an aside, if I detect a duplicate I need to delete the older message and then insert the replacement.
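
    The comparison is really about XML equivalence rather than byte equality; one hedged illustration of the idea (in Python, since the question itself is about SQL Server and C#) is to compare canonicalized forms, or hashes of them, instead of raw strings:

        import hashlib
        from xml.etree.ElementTree import canonicalize

        def xml_fingerprint(xml_string):
            # The canonical (C14N 2.0) form ignores cosmetic whitespace differences.
            canonical = canonicalize(xml_string, strip_text=True)
            return hashlib.sha256(canonical.encode('utf-8')).hexdigest()

        a = '<msg><to>bob</to></msg>'
        b = '<msg>\n  <to>bob</to>\n</msg>'
        print(xml_fingerprint(a) == xml_fingerprint(b))  # True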

    Read the article

  • Java escape HTML - string replace slow?

    - by cpf
    Hi StackOverflow, I have a Java application that makes heavy use of a large file: it reads it, processes it, and passes it through to SolrEmbeddedServer (http://lucene.apache.org/solr/). One of the functions does basic HTML escaping:

        private String htmlEscape(String input) {
            return input.replace("&", "&amp;").replace(">", "&gt;").replace("<", "&lt;")
                        .replace("'", "&apos;").replaceAll("\"", "&quot;");
        }

    While profiling the application, I found that the program spends roughly 58% of its time in this function: 47% in replace and 11% in replaceAll. Now, is Java's replace that slow, or am I on the right path and should I consider the program efficient enough to have its bottleneck in Java and not in my code? (Or am I replacing wrong?) Thanks in advance!
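
    For what it's worth, the usual alternative to chained replace calls is a single pass over the input; a sketch of that idea in Python (the thread is about Java, so this only illustrates the technique, not a drop-in fix):

        _ESCAPES = {'&': '&amp;', '<': '&lt;', '>': '&gt;', "'": '&apos;', '"': '&quot;'}

        def html_escape(text):
            # One pass over the input instead of five full copies of the string.
            return ''.join(_ESCAPES.get(ch, ch) for ch in text)

        print(html_escape('a < b & "c"'))  # a &lt; b &amp; &quot;c&quot;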

    Read the article

  • Logging NHibernate SQL queries

    - by GuestMVCAsync
    Is there a way to access the full SQL query, including the values, inside my code? I am able to log SQL queries using log4net:

        <logger name="NHibernate.SQL" additivity="false">
            <level value="ALL"/>
            <appender-ref ref="NHibernateSQLFileLog"/>
        </logger>

    However, I would like to find a way to log SQL queries from the code as well. That way I can log the specific SQL query that causes an exception in my try/catch statement. Right now, when an exception occurs, I have to data-mine the SQLFileLog to find the query that caused it, which is not efficient.

    Read the article

  • What is the right method for parsing a blog post?

    - by Zedwal
    Hi guys, I need a guideline. I am trying to write a personal blog. What is the standard structure for the input of a post? I am trying a format like:

        This is the simple text. And I am [b]bold text[/b]. This is the code part:
        [code lang=java]
        public static void main (String args[]) {
            System.out.println("Hello World!");
        }
        [/code]

    Is this the right way to store a post in the database? And what is the right method to parse this kind of post? Shall I use regular expressions to parse it, or is there another standard for this? If the above-mentioned format is not the right way to store it, then what could be? Thanks
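
    Storing the raw marked-up text and converting the tags to HTML at render time is a common approach; a minimal regex-based sketch (an illustration only, not a complete or safe BBCode parser):

        import html
        import re

        def render_post(text):
            # Escape everything first, then convert the known tags back to HTML.
            text = html.escape(text)
            text = re.sub(r'\[b\](.*?)\[/b\]', r'<strong>\1</strong>', text, flags=re.S)
            text = re.sub(r'\[code(?: lang=\w+)?\](.*?)\[/code\]',
                          r'<pre><code>\1</code></pre>', text, flags=re.S)
            return text

        print(render_post('And I am [b]bold text[/b].'))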

    Read the article

  • Getting Older PHP Extensions

    - by maSnun
    Hello all, it happens that I need the GD, SQLite, and a few more extensions for PHP 5.1, but I can't find out where to get them. I am using WinBinder to develop some desktop applications for Windows with PHP. The minimal PHP 5.1 pack has the WinBinder extension only. I need other extensions for enhanced features like image editing or data storage. Can anybody help? I really need this very much. Thanks and regards, Masnun

    Read the article

  • IE 8 prompts user on "slow" jQuery script

    - by Jason
    I have a form with over 100 list items that I must reorder on submit. The following code works to reorder my list without any apparent problems in Firefox; however, IE prompts with the message "A script on this page is causing Internet Explorer to run slowly. If it continues to run, your computer may become unresponsive. Do you want to abort the script?" If the user clicks 'No', the script will work as expected.

        var listitems = $(form).find('li').get();
        listitems.sort(function(a, b) {
            var compA = $(a).attr('id');
            var compB = $(b).attr('id');
            return (compA - compB);
        });

    Any ideas on how to make this more efficient?

    Read the article

  • What data structure would be the least painful DataTable replacement?

    - by MatthewMartin
    I'm storing a lot of sorted, roughly 10-row, two-column key/value sets in the ASP.NET cache -- they're the data for drop-down lists. Right now they are all DataTables, which isn't very space-efficient (the rule of thumb is a 10x increase in size when data is stored in a DataSet).

    Old code:

        DataTable table = dataAccess.GetDataTable();
        dropDownList.DataSource = table;

    Hoped-for new code:

        Unknown data = dataAccess.GetSomethingMoreSpaceEfficient();
        dropDownList.DataSource = data;

    What pre-existing data structures are similar enough to DataTable to minimize code breakage and reduce the serialized size when stored in the ASP.NET cache?

    Read the article

  • e-commerce product data/metadata schemas

    - by Shreko
    Trying to figure out how a product data/metadata schema is designed. For example, how does an e-commerce site enter a product spec? Does it copy and paste from the manufacturer's spec sheet, enter it in its own fields, or something else? Here is an example, looking at the Nikon D3000 DSLR:

        Manufacturer: http://nikon.ca/en/Product.aspx?m=17300&disp=Specs
        futureshop.ca: www.futureshop.ca/en-CA/product/nikon-nikon-d3000-10-2mp-dslr-camera-with-18-55mm-lens-kit-d3000/10128435.aspx?path=865c2348a1542e848982c9dbd9253483en02
        memoryexpress.com: www.memoryexpress.com/Products/PID-MX25539%28ME%29.aspx

    They are all slightly different in ordering and in parent/child fields. What storage is used for this type of info, an RDBMS or XML?

    Read the article

< Previous Page | 257 258 259 260 261 262 263 264 265 266 267 268  | Next Page >