In an environment where a relational database handles all business transactions, is it a good idea to use SimpleDB for all read queries, to get faster and more lightweight searches?
The master data store would remain the relational DB, which is "replicated"/"transformed" into SimpleDB to serve very fast read-only queries, since no JOINs or complicated subselects are needed.
Does the performance of a symmetric encryption algorithm depend on the amount of data being encrypted? Suppose I have about 1000 bytes I need to send over the network rapidly: is it faster to encrypt 50 bytes of data 20 times, or 1000 bytes at once? Does it depend on the algorithm used? If so, what is the highest-performing, most secure algorithm for amounts of data under 512 bytes?
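One way to settle this empirically is a quick benchmark. The sketch below assumes Python with the third-party cryptography package and AES-GCM, which is only one possible choice of stack, not necessarily yours. Each encrypt call pays a fixed per-call overhead (nonce handling plus a 16-byte authentication tag), so twenty 50-byte calls will usually lose to a single 1000-byte call:

import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
payload = os.urandom(1000)  # stand-in for the real 1000-byte message

t0 = time.perf_counter()
for _ in range(10000):
    aead.encrypt(os.urandom(12), payload, None)                # one 1000-byte encryption
one_call = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(10000):
    for i in range(0, 1000, 50):
        aead.encrypt(os.urandom(12), payload[i:i + 50], None)  # twenty 50-byte encryptions
many_calls = time.perf_counter() - t0

print(one_call, many_calls)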
Supposing that memory is not an issue, does targeting a 64-bit OS make a C/C++ Windows console application run faster?
Update: prompted by a few comments/answers, the application involves statistical algorithms (e.g., linear algebra, random number draws, etc.).
I have a text file from which I want to build a Hash for faster access. My text file has this space-delimited format:
author title date popularity
I want to create a hash in which the author is the key and the remaining fields are the value, as an array:
created_hash["briggs"] = ["Manup", "Jun,2007", 10]
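A minimal Ruby sketch, assuming one record per line and a hypothetical file name books.txt; split with no arguments breaks each line on whitespace:

created_hash = {}
File.foreach('books.txt') do |line|  # 'books.txt' is a placeholder path
  author, title, date, popularity = line.split
  created_hash[author] = [title, date, popularity.to_i]
end
created_hash["briggs"]  # => ["Manup", "Jun,2007", 10]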
Is it possible to create "federated" Subversion servers?
As in, one server at location A and another at location B that automatically sync their local versions of the repository. That way, when someone at either location interacts with the repository, they access their respective local server and therefore get faster response times.
Which is faster:
the MERGE statement
MERGE INTO table
USING dual
  ON (rowid = 'some_id')
WHEN MATCHED THEN
  UPDATE SET colname = 'some_val'
WHEN NOT MATCHED THEN
  INSERT (rowid, colname)
  VALUES ('some_id', 'some_val')
or
a SELECT first, followed by an UPDATE or INSERT depending on the result:
SELECT * FROM table WHERE rowid = 'some_id'
if rowCount == 0
  INSERT INTO table (rowid, colname) VALUES ('some_id', 'some_val')
else
  UPDATE table SET colname = 'some_val' WHERE rowid = 'some_id'
Wondering if anyone knows which of these methods produces output faster:
Method 1
for ($i = 1; $i < 99999; $i++) {
    echo $i, '<br>';
}
or
Method 2
$string = ''; // initialize first to avoid an undefined-variable notice
for ($i = 1; $i < 99999; $i++) {
    $string .= $i . '<br>';
}
echo $string;
I have n=10000 10-dimensional vectors.
For every vector v1, I want to find the vector v2 that minimizes the angle between v1 and v2.
Is there a way to solve this problem faster than O(n^2)?
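For reference, minimizing the angle between unit vectors is the same as maximizing their dot product, so the brute-force baseline is a matrix multiply, and anything sub-quadratic (e.g., a KD-tree or locality-sensitive hashing over the normalized vectors) exploits the same reduction. A hedged NumPy sketch of the O(n^2) baseline, with made-up random data standing in for the real vectors:

import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((10000, 10))              # placeholder for the real vectors
U = V / np.linalg.norm(V, axis=1, keepdims=True)  # normalize: smallest angle = largest cosine

best = np.empty(len(U), dtype=int)
for start in range(0, len(U), 1000):              # chunked to bound memory use
    block = U[start:start + 1000] @ U.T           # cosines of every pair in this chunk
    for i in range(block.shape[0]):
        block[i, start + i] = -np.inf             # a vector is not its own neighbour
    best[start:start + 1000] = block.argmax(axis=1)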
I need to store multi-dimensional numeric data in a manner that's easy to work with. I'm capturing data in real time, and once it is processed I would drop the older data and let it be garbage-collected. This data structure must be fast so it won't hurt my overall app performance; the faster, the better.
What are my choices in terms of platform-supported data structures? I'm using VS 2010 and .NET 4.
Long.parseLong(string) throws a NumberFormatException if the string is not parsable into a long.
Is there a way to validate the string faster than using try-catch?
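A hand-rolled pre-check is one option. This is a hedged sketch, not a library method, and it deliberately lets a few out-of-range 19-digit strings through to parseLong's own range check:

// validate a decimal long without throwing, then parse only on success
static boolean looksLikeLong(String s) {
    if (s == null || s.isEmpty()) return false;
    int i = (s.charAt(0) == '-' || s.charAt(0) == '+') ? 1 : 0;
    if (i == s.length() || s.length() - i > 19) return false; // Long.MAX_VALUE has 19 digits
    for (; i < s.length(); i++) {
        char c = s.charAt(i);
        if (c < '0' || c > '9') return false;
    }
    return true;
}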
I am looking for the fastest way to remove duplicate values from a comma-separated string.
My string looks like this:
$str = 'one,two,one,five,seven,bag,tea';
I can do it by exploding the string into values and then comparing them, but I think that will be slow. What about preg_replace()? Will it be faster? Has anyone done it using this function?
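For comparison, the explode-based version is essentially a one-liner, and in my experience it is fast enough that a preg_replace() with backreferences is unlikely to beat it; a minimal sketch:

$str = 'one,two,one,five,seven,bag,tea';
$deduped = implode(',', array_unique(explode(',', $str))); // keeps the first occurrence of each value
// $deduped is now 'one,two,five,seven,bag,tea'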
I am looking for a way to show my own input view (a UITableView) to enter certain keywords in a UITextView faster than typing them, and also be able to type into this text view the normal way. My solution has a button that causes the keyboard to disappear, revealing the table view underneath it.
The problem is that I can't figure out how to make the keyboard go away without resigning first responder and losing the cursor. Has anyone accomplished this before?
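A hedged sketch of one approach: UITextView's inputView property (settable since iOS 3.2) swaps the keyboard for a custom view without resigning first responder, so the cursor stays put. Here keywordTableView and textView are placeholder names:

// replace the keyboard with the custom table view; first responder (and cursor) are kept
textView.inputView = keywordTableView;
[textView reloadInputViews];   // redraw the input view in place

// later, restore the standard keyboard
textView.inputView = nil;
[textView reloadInputViews];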
Is this iteration the best one, i.e. the fastest-converging?
(Pi^2)/12 = 1 - 1/4 + 1/9 - 1/16 + 1/25 - ...
If not, please answer with a better iteration, preferably in the form above (a concrete example), not a splat of algebra...
I'm doing this to find Pi to 1,000,000,000 places online.
http://www.zombiewrath.com/superpi.php
or my 10,000 one:
http://www.zombiewrath.com/pi.php
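For what it's worth, the standard alternating-series bound shows how slowly this particular series converges: the error after N terms is at most the first omitted term, so |(Pi^2)/12 - (1 - 1/4 + 1/9 - ... +/- 1/N^2)| <= 1/(N+1)^2. Getting d correct digits therefore needs on the order of 10^(d/2) terms, i.e. roughly 10^(500,000,000) terms for 1,000,000,000 places.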
I am interested in getting started with CommonJS.
With JavaScript frameworks getting faster all the time, and parsing engines and compilers making JavaScript incredibly quick, it is surprising that a project such as CommonJS was not started sooner.
What steps are involved in getting a test project up and running with what has been created so far?
Is it possible to hide specific nodes in a VirtualStringTree?
I'm implementing a "filtering" feature (the VST acts as a list with columns), and I'd like to avoid reloading the content each time the filter changes. Instead, it would be much faster to tell the VST not to render specific items. Any solutions?
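If memory serves, Virtual TreeView exposes per-node visibility directly, so a filter only needs to toggle it rather than rebuild the tree; a hedged Delphi sketch with a made-up Match test:

// hide or show a node in place; hidden nodes stay in the tree but are not rendered
procedure ApplyFilter(Tree: TVirtualStringTree; Node: PVirtualNode; Match: Boolean);
begin
  Tree.IsVisible[Node] := Match;
end;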
I just tried the IE9 "Second Internet Explorer Platform Preview" - which supports CSS opacity now. That's nice, but I tried it with one of my website prototypes, and it's quite slow when scrolling etc.
Admittedly, the prototype uses hundreds of images with opacity != 1, but everything is snappy with current versions of Firefox, Safari, Chrome and Opera.
Does anybody know if there are plans for IE9 to become faster in this area? Even rumours about this would be interesting in this case.
I have 2 tables:
1. news (450k rows)
2. news_tags (3m rows)
There are some triggers on news table updates which update listings. This SQL takes too long to execute:
UPDATE news
SET news_category = some_number
WHERE news_id IN (SELECT news_id
                  FROM news_tags
                  WHERE tag_id = some_number); # about 3k rows
How can I make it faster?
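One common rewrite, assuming MySQL (a guess based on the # comment syntax): turn the IN subquery into a join, so the optimizer can drive the update from an index on news_tags.tag_id, which should exist too:

UPDATE news n
JOIN news_tags t ON t.news_id = n.news_id
SET n.news_category = some_number
WHERE t.tag_id = some_number;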
The title says it all.
class Example { // enclosing class and member added so the snippet is complete
public:
    inline int GetValue() const {
        return m_nValue;
    }
    inline void SetValue(int nNewValue) {
        this->m_nValue = nNewValue;
    }
private:
    int m_nValue;
};
On Learn C++, they said it would run faster, so I thought it would be great. But maybe there are some negative points to it?
I've committed myself to diving into vim to become faster at writing code for ruby/python and I'm having a hard time navigating around files.
Mainly, I'm referring to switching between insert mode and the navigation modes. Maybe I'm just not completely used to the editor yet, but it feels very awkward to constantly be switching in and out of insert mode.
Is this something that will go away with time? Are there any tricks to getting quicker at moving in and out of insert mode?
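One widely used trick (a matter of preference, not a rule) is to put Escape on the home row so leaving insert mode stops feeling like a reach; in your .vimrc:

" map a quick 'jk' chord in insert mode to Escape
inoremap jk <Esc>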
quicktime.qdQDException[QTJava:7.6.6g], -3954=Unknown Error Code, QT.vers:7668000
I'm using OS X... is video processing inherently slow in Processing?
It's really choppy.
Would it be faster if done in Max?
If I have 3 million pages, which directory structure is better?
Method 1. ~/123456789.htm
(putting all 3 million pages into the same folder, without any sub-folders)
Method 2. ~/789/123456789.htm
(creating 999 sub-folders, each containing about 3000 pages)
For Windows Server 2008, which folder structure is faster (for file creation, reading, and deletion)?
When using mod_deflate in Apache2, Apache will chunk gzipped content, setting the Transfer-Encoding: chunked header. While this results in a faster download, I cannot display a progress bar.
If I handle the compression myself in PHP, I can gzip it completely first and set the Content-length header, so that I can display a progress bar to the user.
Is there any way to change Apache's behavior, and have Apache set a Content-length header instead of chunking the response, so that I don't have to handle the compression myself?
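For completeness, a minimal sketch of the PHP fallback described above, with $html standing in for the fully rendered page: compress the whole body first, so the Content-Length is known before anything is sent:

$body = gzencode($html, 6);                 // gzip the complete response body
header('Content-Encoding: gzip');
header('Content-Length: ' . strlen($body)); // known length lets the client show progress
echo $body;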