Search Results

Search found 37647 results on 1506 pages for 'sql performance'.


  • Ideas Related to Subset Sum with 2, 3 and more integers

    - by rolandbishop
    I've been struggling with this problem just like everyone else, and I'm sure there have been more than enough posts explaining it. However, in order to understand it fully, I wanted to share my thoughts and get more efficient solutions from the great people here on the Subset Sum problem. I've searched the Internet and there are actually a lot of sources, but I'm willing to re-implement an algorithm or devise my own in order to understand it fully. The key thing I'm struggling with is efficiency, considering the set size will be large (I do not have a limit, it is just conceptually large). The phases I'm trying to work out ideas for are: finding two numbers that sum to a given integer T, then three numbers, and eventually K numbers. Some ideas I've thought about: for the two-integer part, basically sorting the array in O(n log n) and, for each element, searching for the value that completes the sum (i.e. if the target is 0 and the element is 3, searching for -3). Maybe including a hash table would be better, providing O(1) lookup of the complement? For three or more integers I've found an amazing blog post: http://www.skorks.com/2011/02/algorithms-a-dropbox-challenge-and-dynamic-programming/. However, even the author states that it is not applicable for large numbers. So, for 2, 3 and more integers, what ideas could be applied to the subset problem? I'm struggling to set up a dynamic programming method that will be efficient for large inputs as well.
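
    A minimal Python sketch of the two ideas described above - hash-based lookup of the complement for two integers, and sort plus two pointers for three - purely for illustration; the function names and the target T are mine, not from any particular source.

        def two_sum_exists(nums, target):
            """O(n): for each element, look up the complement in a hash set."""
            seen = set()
            for x in nums:
                if target - x in seen:
                    return True
                seen.add(x)
            return False

        def three_sum_exists(nums, target):
            """O(n^2) after an O(n log n) sort, using two pointers."""
            nums = sorted(nums)
            for i in range(len(nums) - 2):
                lo, hi = i + 1, len(nums) - 1
                while lo < hi:
                    s = nums[i] + nums[lo] + nums[hi]
                    if s == target:
                        return True
                    if s < target:
                        lo += 1
                    else:
                        hi -= 1
            return False

    For general K, the classic pseudo-polynomial dynamic programme over reachable sums needs on the order of n*T states, which is exactly why it struggles once the values (and therefore T) get conceptually large, as the linked post notes.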

  • Ruby Design Problem for SQL Bulk Inserter

    - by crunchyt
    This is a Ruby design problem. How can I make a reusable flat file parser that can perform different data scrubbing operations per call, return the emitted results from each scrubbing operation to the caller, and perform bulk SQL insertions? Now, before anyone gets narky/concerned, I have already written this code in a very unDRY fashion, which is why I am asking any Ruby rockstars out there for some assistance. Basically, every time I want to perform this logic, I create two nested loops with custom processing in between, buffer each processed line to an array, and output to the DB as a bulk insert when the buffer size limit is reached. Although I have written lots of helpers, the main pattern is being copy-pasted every time. Not very DRY! Here is a Ruby/pseudocode example of what I am repeating:

        lines_from_file.each do |line|
          line.match(/some regex/).each do |sub_str|
            # Process substring into useful format
            # EG1: Simple gsub() call
            # EG2: Custom function call to do complex scrubbing
            #      and matching, emitting results to array
            # EG3: Loop to match opening/closing/nested brackets
            #      or other delimiters and emit results to array
          end
          # Add processed lines to a buffer as SQL insert statement
          @buffer << PREPARED INSERT STATEMENT
          # Flush buffer when "buffer size limit reached" or "end of file"
          if sql_buffer_full || last_line_reached
            @dbc.insert(SQL INSERTS FROM BUFFER)
            @buffer = nil
          end
        end

    I am familiar with Proc/lambda functions. However, because I want to pass two separate procs to the one function, I am not sure how to proceed. I have some idea about how to solve this, but I would really like to see what real Rubyists suggest. Over to you. Thanks in advance :D
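
    The shape being asked for - one driver loop that takes the per-call behaviour as callables and flushes a buffer of prepared inserts - sketched here in Python for illustration only (the Ruby version would accept two procs/lambdas in the same way; every name below is made up).

        def bulk_load(lines, scrub, to_insert, flush, buffer_limit=500):
            """scrub(line) yields cleaned fragments, to_insert(fragment) builds an
            insert row/statement, flush(rows) performs the bulk insert."""
            buffer = []
            for line in lines:
                for fragment in scrub(line):
                    buffer.append(to_insert(fragment))
                if len(buffer) >= buffer_limit:
                    flush(buffer)
                    buffer = []
            if buffer:                      # flush whatever is left at end of file
                flush(buffer)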

  • How large is a "buffer" in PostgreSQL

    - by Konrad Garus
    I am using pg_buffercache module for finding hogs eating up my RAM cache. For example when I run this query: SELECT c.relname, count(*) AS buffers FROM pg_buffercache b INNER JOIN pg_class c ON b.relfilenode = c.relfilenode AND b.reldatabase IN (0, (SELECT oid FROM pg_database WHERE datname = current_database())) GROUP BY c.relname ORDER BY 2 DESC LIMIT 10; I discover that sample_table is using 120 buffers. How much is 120 buffers in bytes?
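
    For scale, each pg_buffercache entry is one shared-buffer block, which is 8 kB on a default PostgreSQL build (verify with SHOW block_size). A throwaway Python sketch of the arithmetic, assuming that default:

        BLOCK_SIZE = 8192                      # default block size; confirm with: SHOW block_size;

        def buffers_to_bytes(n_buffers, block_size=BLOCK_SIZE):
            return n_buffers * block_size

        print(buffers_to_bytes(120))           # 983040 bytes, i.e. roughly 960 kB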

  • Massive speed diff in upgrade to Java 7

    - by Brett Rigby
    We use Java within our build process, as it is used to resolve/publish our dependencies via Ivy. We have had no problem with it for 2 years, until we tried to upgrade from Java 6 Update 26 to Version 7 Update 7, after which a build on a local developer PC (WinXP) now takes 2 hours to complete instead of 10 minutes! Nothing else has changed on the PC, making the upgrade the obvious suspect. Does anyone know of any reason why version 7 of Java would make such a speed difference?

  • Access 2007 and Special/Unicode Characters in SQL

    - by blockcipher
    I have a small Access 2007 database that I need to be able to import data from an existing spreadsheet and put it into our new relational model. For the most part this seems to work pretty well. Part of the process is attempting to see if a record already exists in a target table using SQL. For example, if I extract book information out of the current row in the spreadsheet, it may contain a title and abstract. I use SQL to get the ID of a matching record, if it exists. This works fine except when I have data that's in a non-English language. In this case, it seems that there is some punctuation that is causing me problems. At least I think it's punctuation as I do have some fields that do not have punctuation and are non-English that do not give me any problems. Is there a built-in function that can escape these characters? Currently I have a small function that will escape the single quote character, but that isn't enough. Or, is there a list of Unicode characters that can interfere with how SQL wants data quoted? Thanks in advance.
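
    For illustration, a hedged sketch of the two approaches touched on above: doubling embedded single quotes (the Jet/Access SQL convention) versus letting the driver quote values through parameters. The pyodbc connection, table and column names are assumptions, not from the original question.

        def escape_literal(value):
            # Double any embedded single quotes before splicing into a SQL literal
            return value.replace("'", "''")

        # The safer route is parameterised SQL, e.g. with pyodbc against the database:
        # cur = conn.cursor()
        # cur.execute("SELECT BookID FROM Books WHERE Title = ? AND Abstract = ?",
        #             (title, abstract))
        # row = cur.fetchone()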

  • What's the fastest way to check if a word from one string is in another string?

    - by Mike Trpcic
    I have a string of words; let's call them bad:

        bad = "foo bar baz"

    I can keep this string as a whitespace-separated string, or as a list:

        bad = bad.split(" ");

    If I have another string, like so:

        str = "This is my first foo string"

    What's the fastest way to check if any word from the bad string is within my comparison string, and what's the fastest way to remove said word if it's found?

        # Find if a word is there
        bad.split(" ").each do |word|
          found = str.include?(word)
        end

        # Remove the word
        bad.split(" ").each do |word|
          str.gsub!(/#{word}/, "")
        end
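
    A rough Python equivalent of the same idea, shown only to illustrate the usual fast approach: put the bad words in a set for membership tests, and compile one alternation regex with word boundaries for removal. Variable names roughly mirror the Ruby snippet above (str is renamed text, since str is a Python built-in).

        import re

        bad = "foo bar baz"
        bad_words = set(bad.split())

        text = "This is my first foo string"

        # Membership test: is any whole word from text in the bad set?
        found = any(word in bad_words for word in text.split())

        # Removal: one compiled pattern with word boundaries, then tidy the whitespace
        pattern = re.compile(r"\b(?:" + "|".join(map(re.escape, bad_words)) + r")\b")
        cleaned = re.sub(r"\s{2,}", " ", pattern.sub("", text)).strip()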

  • Python: faster way to read fixed-length fields from a file into a dictionary

    - by Martlark
    I have a file of names and addresses as follows (example line):

        OSCAR ,CANNONS ,8 ,STIEGLITZ CIRCUIT

    And I want to read it into a dictionary of name and value. Here self.field_list is a list of the name, length and start point of the fixed fields in the file. What ways are there to speed up this method? (Python 2.6)

        def line_to_dictionary(self, file_line, rec_num):
            file_line = file_line.lower()       # Make it all lowercase
            return_rec = {}                     # Return record as a dictionary
            for (field_start, field_length, field_name) in self.field_list:
                field_data = file_line[field_start:field_start+field_length]
                if (self.strip_fields == True): # Strip off white spaces first
                    field_data = field_data.strip()
                if (field_data != ''):          # Only add non-empty fields to dictionary
                    return_rec[field_name] = field_data
            # Set hidden fields
            # return_rec['_rec_num_'] = rec_num
            return_rec['_dataset_name_'] = self.name
            return return_rec
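
    One commonly suggested micro-optimisation, sketched here as an assumption rather than a measured fix: build the slice objects once per file instead of redoing the start/stop arithmetic and attribute lookups on every line.

        def make_line_parser(field_list, strip_fields=True):
            # field_list entries are (start, length, name), as in the question
            slices = [(name, slice(start, start + length))
                      for (start, length, name) in field_list]

            def parse(line):
                line = line.lower()
                rec = {}
                for name, sl in slices:
                    value = line[sl]
                    if strip_fields:
                        value = value.strip()
                    if value:
                        rec[name] = value
                return rec

            return parse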

  • Fast read of certain bytes of multiple files in C/C++

    - by Alejandro Cámara
    I've been searching the web about this and, although there are many similar questions about read/write in C/C++, I haven't found anything about this specific task. I want to read, from multiple files (256x256 files), only sizeof(double) bytes located at a certain position in each file. Right now my solution is, for each file:

    Open the file (read, binary mode):

        fstream fTest("current_file", ios_base::out | ios_base::binary);

    Seek to the position I want to read:

        fTest.seekg(position*sizeof(test_value), ios_base::beg);

    Read the bytes:

        fTest.read((char *) &(output[i][j]), sizeof(test_value));

    And close the file:

        fTest.close();

    This takes about 350 ms to run inside a for{ for {} } structure with 256x256 iterations (one for each file). Q: Do you think there is a better way to implement this operation? How would you do it?

  • PHP fastest method of reading server response

    - by Peter John
    Hi there, I'm having some real problems with the lag produced by using fgets to grab the server's response to some batch database calls I'm making. I'm sending through a batch of, say, 10,000 calls, and I've tracked the lag down to fgets holding up the speed of my application, since the response for each call needs to be grabbed. I have found this thread, http://bugs.php.net/bug.php?id=32806, which explains the problem quite well, but he's reading a file, not a server response, so fread could be a bit tricky as I could get part of the next line and extra stuff which I don't want. Any help much appreciated!
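
    One common workaround for the worry about fread grabbing part of the next line is to read large chunks and split lines yourself, carrying the remainder over to the next iteration so the per-line read call disappears. A language-neutral sketch of that technique in Python (the socket handling and buffer size are assumptions):

        def iter_lines(sock, chunk_size=65536):
            pending = b""
            while True:
                chunk = sock.recv(chunk_size)   # one big read instead of one read per line
                if not chunk:
                    if pending:
                        yield pending           # trailing data without a newline
                    return
                pending += chunk
                *lines, pending = pending.split(b"\n")
                for line in lines:
                    yield line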

  • Why is autorelease especially dangerous/expensive for iPhone applications?

    - by e.James
    I'm looking for a primary source (or a really good explanation) to back up the claim that the use of autorelease is dangerous or overly expensive when writing software for the iPhone. Several developers make this claim, and I have even heard that Apple does not recommend it, but I have not been able to turn up any concrete sources to back it up. SO references: autorelease-iphone Why does this create a memory leak (iPhone)? Note: I can see, from a conceptual point of view, that autorelease is slightly more expensive than a simple call to release, but I don't think that small penalty is enough to make Apple recommend against it. What's the real story?

  • Best method to select an object from another unknown jQuery object

    - by Yosi
    Let's say I have a jQuery object/collection stored in a variable named obj, which should contain a DOM element with an id of target. I don't know in advance whether target will be a child inside obj, i.e.:

        obj = $('<div id="parent"><div id="target"></div></div>');

    or whether obj equals target, i.e.:

        obj = $('<div id="target"></div>');

    or whether target is a top-level element inside obj, i.e.:

        obj = $('<div id="target"/><span id="other"/>');

    I need a way to select target from obj, but I don't know in advance when to use .find and when to use .filter. What would be the fastest and/or most concise method of extracting target from obj? What I've come up with is:

        var $target = obj.find("#target").add(obj.filter("#target"));

    UPDATE: I'm adding solutions to a JSPerf test page to see which one is best. Currently my solution is still the fastest. Here is the link; please run the tests so that we'll have more data: http://jsperf.com/jquery-selecting-objects

  • STL find performs better than a hand-crafted loop

    - by dusha
    Hello all, I have some question. Given the following C++ code fragment: #include <boost/progress.hpp> #include <vector> #include <algorithm> #include <numeric> #include <iostream> struct incrementor { incrementor() : curr_() {} unsigned int operator()() { return curr_++; } private: unsigned int curr_; }; template<class Vec> char const* value_found(Vec const& v, typename Vec::const_iterator i) { return i==v.end() ? "no" : "yes"; } template<class Vec> typename Vec::const_iterator find1(Vec const& v, typename Vec::value_type val) { return find(v.begin(), v.end(), val); } template<class Vec> typename Vec::const_iterator find2(Vec const& v, typename Vec::value_type val) { for(typename Vec::const_iterator i=v.begin(), end=v.end(); i<end; ++i) if(*i==val) return i; return v.end(); } int main() { using namespace std; typedef vector<unsigned int>::const_iterator iter; vector<unsigned int> vec; vec.reserve(10000000); boost::progress_timer pt; generate_n(back_inserter(vec), vec.capacity(), incrementor()); //added this line, to avoid any doubts, that compiler is able to // guess the data is sorted random_shuffle(vec.begin(), vec.end()); cout << "value generation required: " << pt.elapsed() << endl; double d; pt.restart(); iter found=find1(vec, vec.capacity()); d=pt.elapsed(); cout << "first search required: " << d << endl; cout << "first search found value: " << value_found(vec, found)<< endl; pt.restart(); found=find2(vec, vec.capacity()); d=pt.elapsed(); cout << "second search required: " << d << endl; cout << "second search found value: " << value_found(vec, found)<< endl; return 0; } On my machine (Intel i7, Windows Vista) STL find (call via find1) runs about 10 times faster than the hand-crafted loop (call via find2). I first thought that Visual C++ performs some kind of vectorization (may be I am mistaken here), but as far as I can see assembly does not look the way it uses vectorization. Why is STL loop faster? Hand-crafted loop is identical to the loop from the STL-find body. I was asked to post program's output. Without shuffle: value generation required: 0.078 first search required: 0.008 first search found value: no second search required: 0.098 second search found value: no With shuffle (caching effects): value generation required: 1.454 first search required: 0.009 first search found value: no second search required: 0.044 second search found value: no Many thanks, dusha. P.S. I return the iterator and write out the result (found or not), because I would like to prevent compiler optimization, that it thinks the loop is not required at all. The searched value is obviously not in the vector.

  • Should the code being tested compile to a DLL or an executable file?

    - by uriDium
    I have a solution with two projects: one project for the production code and another project for the unit tests. I did this as per the suggestions I got here on SO. I noticed that the Debug folder includes the production code in executable form. I used NUnit to run the tests after removing the executable and they all fail trying to find it, so it definitely is needed. I then did a quick read to find out which is better, a DLL or an executable. It seems that a DLL is much faster, as it is loaded into the same memory space, whereas communication between separate executables is slower. Unfortunately our production code needs to be an executable, so the unit tests will be slightly slower; I am not too worried about that. But the project does rely on code written in another library which is also in executable format at the moment. Should the projects that expose some sort of SDK rather be compiled to a DLL, and the projects that use the SDK be compiled to an executable?

  • How to populate a listview in ASP.NET 3.5 through a dataset?

    - by EasyDot
    Is it possible to populate a listview with a dataset? I have a function that returns a dataset. Why im asking this is because my SQL is quite complicated and i can't convert it to a SQLDataSource... Public Function getMessages() As DataSet Dim dSet As DataSet = New DataSet Dim da As SqlDataAdapter Dim cmd As SqlCommand Dim SQL As StringBuilder Dim connStr As StringBuilder = New StringBuilder("") connStr.AppendFormat("server={0};", ConfigurationSettings.AppSettings("USERserver").ToString()) connStr.AppendFormat("database={0};", ConfigurationSettings.AppSettings("USERdb").ToString()) connStr.AppendFormat("uid={0};", ConfigurationSettings.AppSettings("USERuid").ToString()) connStr.AppendFormat("pwd={0};", ConfigurationSettings.AppSettings("USERpwd").ToString()) Dim conn As SqlConnection = New SqlConnection(connStr.ToString()) Try SQL = New StringBuilder cmd = New SqlCommand SQL.Append("SELECT m.MESSAGE_ID, m.SYSTEM_ID, m.DATE_CREATED, m.EXPIRE_DATE, ISNULL(s.SYSTEM_DESC,'ALL SYSTEMS') AS SYSTEM_DESC, m.MESSAGE ") SQL.Append("FROM MESSAGE m ") SQL.Append("LEFT OUTER JOIN [SYSTEM] s ") SQL.Append("ON m.SYSTEM_ID = s.SYSTEM_ID ") SQL.AppendFormat("WHERE m.SYSTEM_ID IN ({0}) ", sSystems) SQL.Append("OR m.SYSTEM_ID is NULL ") SQL.Append("ORDER BY m.DATE_CREATED DESC; ") SQL.Append("SELECT mm.MESSAGE_ID, mm.MODEL_ID, m.MODEL_DESC ") SQL.Append("FROM MESSAGE_MODEL mm ") SQL.Append("JOIN MODEL m ") SQL.Append(" ON m.MODEL_ID = mm.MODEL_ID ") cmd.CommandText = SQL.ToString cmd.Connection = conn da = New SqlDataAdapter(cmd) da.Fill(dSet) dSet.Tables(0).TableName = "BASE" dSet.Tables(1).TableName = "MODEL" Return dSet Catch ev As Exception cLog.EventLog.logError(ev, cmd) Finally 'conn.Close() End Try End Function

  • WCF high instance count: does anyone know of negative side effects?

    - by Alex
    Hi there! Has anyone experienced, or does anyone know of, negative side effects from having a high service instance count like 60k, aside from the memory consumption of course? I am planning to increase the threshold for the maximum allowed instance count in our production environments. I am basically sick of severe production incidents just because "something" forgot to close a proxy properly. I plan to go to something like 60k instances, which will allow the service to survive with default session timeouts at our clients' average call rate. Thanks, Alex

  • Optimizing a shared buffer in a producer/consumer multithreaded environment

    - by Etan
    I have a project with a single producer thread which writes events into a buffer, and an additional single consumer thread which takes events from the buffer. My goal is to optimize this for a single machine to achieve maximum throughput. Currently, I am using a simple lock-free ring buffer (lock-free is possible since I have only one consumer and one producer thread, and therefore each pointer is only updated by a single thread).

        #define BUF_SIZE 32768

        struct buf_t {
          volatile int writepos;
          volatile void * buffer[BUF_SIZE];
          volatile int readpos;
        };

        void produce (buf_t *b, void * e) {
          int next = (b->writepos+1) % BUF_SIZE;
          while (b->readpos == next); // queue is full. wait
          b->buffer[b->writepos] = e;
          b->writepos = next;
        }

        void * consume (buf_t *b) {
          while (b->readpos == b->writepos); // nothing to consume. wait
          int next = (b->readpos+1) % BUF_SIZE;
          void * res = b->buffer[b->readpos];
          b->readpos = next;
          return res;
        }

        buf_t *alloc () {
          buf_t *b = (buf_t *)malloc(sizeof(buf_t));
          b->writepos = 0;
          b->readpos = 0;
          return b;
        }

    However, this implementation is not yet fast enough and should be optimized further. I've tried different BUF_SIZE values and got some speed-up. Additionally, I've moved writepos before the buffer and readpos after the buffer to ensure that both variables are on different cache lines, which also gave some speed-up. What I need is a speedup of about 400%. Do you have any ideas how I could achieve this using things like padding etc.?

  • What is the most efficient way to store a mapping "key -> event stream"?

    - by jkff
    Suppose there are ~10,000's of keys, where each key corresponds to a stream of events. I'd like to support the following operations: push(key, timestamp, event) - pushes event to the event queue for key, marked with the given timestamp. It is guaranteed that event timestamps for a particular key are pushed in sorted or almost sorted order. tail(key, timestamp) - get all events for key since the given timestamp. Usually the timestamp requests for a given key are almost monotonically increasing, almost synchronously with pushes for the same key. This stuff has to be persistent (although it is not absolutely necessary to persist pushes immediately and to keep tails with pushes strictly in sync), so I'm going to use some kind of database. What is the optimal kind of database structure for this task? Would it be better to use a relational database, a key-value storage, or something else?
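
    Whatever store ends up underneath, the access pattern itself is small. Here is an in-memory Python sketch (Python 3.10+ for bisect's key argument) of the push/tail semantics described above, useful mainly as a reference model - persistence and the 10,000-key scale are deliberately ignored, and all names are illustrative.

        import bisect
        from collections import defaultdict

        class EventStreams:
            def __init__(self):
                # key -> list of (timestamp, event), kept sorted by timestamp
                self._streams = defaultdict(list)

            def push(self, key, timestamp, event):
                stream = self._streams[key]
                if stream and timestamp < stream[-1][0]:
                    # rare out-of-order arrival: binary-search the insertion slot
                    i = bisect.bisect_right(stream, timestamp, key=lambda e: e[0])
                    stream.insert(i, (timestamp, event))
                else:
                    stream.append((timestamp, event))

            def tail(self, key, timestamp):
                stream = self._streams[key]
                i = bisect.bisect_left(stream, timestamp, key=lambda e: e[0])
                return stream[i:]

    In a persistent store the same layout maps naturally onto a composite key of (key, timestamp), with push as an append and tail as a range scan.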

  • Fastest way to recursively crawl NTFS directories in C++

    - by Peter Parker
    I have written a small crawler to scan and resort directory structures. It based on dirent(which is a small wrapper around FindNextFileA) In my first benchmarks it is surprisingy slow: around 123473ms for 4500 files(thinkpad t60p local samsung 320 GB 2.5" HD). 121481 files found in 123473 milliseconds Is this speed normal? This is my code: int testPrintDir(std::string strDir, std::string strPattern="*", bool recurse=true){ struct dirent *ent; DIR *dir; dir = opendir (strDir.c_str()); int retVal = 0; if (dir != NULL) { while ((ent = readdir (dir)) != NULL) { if (strcmp(ent->d_name, ".") !=0 && strcmp(ent->d_name, "..") !=0){ std::string strFullName = strDir +"\\"+std::string(ent->d_name); std::string strType = "N/A"; bool isDir = (ent->data.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) !=0; strType = (isDir)?"DIR":"FILE"; if ((!isDir)){ //printf ("%s <%s>\n", strFullName.c_str(),strType.c_str());//ent->d_name); retVal++; } if (isDir && recurse){ retVal += testPrintDir(strFullName, strPattern, recurse); } } } closedir (dir); return retVal; } else { /* could not open directory */ perror ("DIR NOT FOUND!"); return -1; } }

  • Avoiding a huge collection of ids when calling DAO.getAll()

    - by Michael Bavin
    Instead of returning a List<Long> of ids when calling PersonDao.getAll(), we want to avoid holding the entire collection of ids in memory. It seems like returning an org.springframework.jdbc.support.rowset.SqlRowSet and iterating over the rowset would not hold every object in memory. The only problem here is that I cannot cast a row to my entity. Is there a better way to do this?
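
    The general shape of the alternative - stream rows from a server-side cursor and map each one as you go, instead of materialising a List of ids - sketched in Python with psycopg2 purely as a stand-in for the Spring JDBC code; the table, column and cursor names are assumptions.

        def iter_person_ids(conn, batch_size=1000):
            # A named cursor in psycopg2 is server-side, so rows arrive in batches
            with conn.cursor(name="person_ids") as cur:
                cur.itersize = batch_size
                cur.execute("SELECT id FROM person")
                for (person_id,) in cur:
                    yield person_id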

  • Is there a standard pattern for scanning a job table executing some actions?

    - by Howiecamp
    (I realize that my title is poor. If after reading the question you have an improvement in mind, please either edit it or tell me and I'll change it.) I have the relatively common scenario of a job table which has 1 row for some thing that needs to be done. For example, it could be a list of emails to be sent. The table looks something like this:

        ID    Completed   TimeCompleted   anything else...
        ----  ---------   -------------   ----------------
        1     No                          blabla
        2     No                          blabla
        3     Yes         01:04:22        ...

    I'm looking for a standard practice/pattern (or code - C#/SQL Server preferred) for periodically "scanning" (I use the term "scanning" very loosely) this table, finding the not-completed items, doing the action and then marking them completed once done successfully. In addition to the basic process for accomplishing the above, I'm considering the following requirements:

    - I'd like some means of "scaling linearly", e.g. running multiple "worker processes" simultaneously or threading or whatever. (Just a specific technical thought - I'm assuming that as a result of this requirement, I need some method of marking an item as "in progress" to avoid attempting the action multiple times.)
    - Each item in the table should only be executed once.

    Some other thoughts:

    - I'm not particularly concerned with the implementation being done in the database (e.g. in T-SQL or PL/SQL code) vs. some external program code (e.g. a standalone executable or some action triggered by a web page) which is executed against the database.
    - Whether the "doing the action" part is done synchronously or asynchronously is not something I'm considering as part of this question.
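
    One widely used way to get the "mark as in progress so it runs once" behaviour on SQL Server is to claim a row atomically with an UPDATE ... OUTPUT plus READPAST/UPDLOCK hints, so concurrent workers skip each other's locked rows. A hedged sketch of one worker's loop, with Python/pyodbc standing in for the C# data-access code; the table and column names (including the InProgress flag) are assumptions to be checked against the real schema.

        CLAIM_SQL = """
        UPDATE TOP (1) dbo.Jobs WITH (ROWLOCK, READPAST, UPDLOCK)
        SET InProgress = 1
        OUTPUT inserted.ID
        WHERE Completed = 0 AND InProgress = 0;
        """

        def worker_loop(conn, do_action):
            while True:
                cur = conn.cursor()
                row = cur.execute(CLAIM_SQL).fetchone()
                if row is None:
                    conn.commit()
                    break                     # nothing left to claim
                job_id = row[0]
                do_action(job_id)             # the actual work (send the email, etc.)
                cur.execute("UPDATE dbo.Jobs SET Completed = 1, TimeCompleted = GETDATE(), "
                            "InProgress = 0 WHERE ID = ?", (job_id,))
                conn.commit()

    Because each claim is its own atomic statement, several worker processes can run this loop against the same table without executing an item twice, which covers the linear-scaling requirement above.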
