Search Results

Search found 4580 results on 184 pages for 'faster'.

  • Faster Matrix Multiplication in C#

    - by Kyle Lahnakoski
    I have a small C# project that involves matrices. I am processing large amounts of data by splitting it into n-length chunks, treating the chunks as vectors, and multiplying by a Vandermonde** matrix. The problem is that, depending on the conditions, the size of the chunks and of the corresponding Vandermonde** matrix can vary. I have a general solution which is easy to read, but way too slow:

        public byte[] addBlockRedundancy(byte[] data) {
            if (data.Length != numGood)
                D.error("Expecting data to be just " + numGood + " bytes long");
            aMatrix d = aMatrix.newColumnMatrix(this.mod, data);
            var r = vandermonde.multiplyBy(d);
            return r.ToByteArray();
        }//method

    This can process about 1/4 megabyte per second on my i5 U470 @ 1.33GHz. I can make this faster by manually inlining the matrix multiplication:

        int o = 0;
        int d = 0;
        for (d = 0; d < data.Length - numGood; d += numGood) {
            for (int r = 0; r < numGood + numRedundant; r++) {
                Byte value = 0;
                for (int c = 0; c < numGood; c++) {
                    value = mod.Add(value, mod.Multiply(vandermonde.get(r, c), data[d + c]));
                }//for
                output[r][o] = value;
            }//for
            o++;
        }//for

    This can process about 1 meg a second. (Please note that "mod" is performing operations over GF(2^8) modulo my favorite irreducible polynomial.) I know this can get a lot faster: after all, the Vandermonde** matrix is mostly zeros. I should be able to make a routine, or find a routine, that can take my matrix and return an optimized method which will effectively multiply vectors by the given matrix, but faster. Then, when I give this routine a 5x5 Vandermonde matrix (the identity matrix), there is simply no arithmetic to perform, and the original data is just copied.

    ** Please note: when I use the term "Vandermonde", I actually mean an identity matrix with some number of rows from the Vandermonde matrix appended (see comments). This matrix is wonderful because of all the zeros, and because if you remove enough rows (of your choosing) to make it square, it is an invertible matrix. And, of course, I would like to use this same routine to convert any one of those inverted matrices into an optimized series of instructions. How can I make this matrix multiplication faster? Thanks! (edited to correct my mistake with the Vandermonde matrix)
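    A minimal sketch of that "compile the matrix" idea, in Python rather than the poster's C#, with plain integer arithmetic standing in for GF(2^8) (where add would be XOR and multiply a log/exp-table lookup); all names here are illustrative, not from the post:

        def compile_rows(matrix):
            """Keep, for each row, only the (column, coefficient) pairs that are nonzero."""
            return [[(c, v) for c, v in enumerate(row) if v != 0] for row in matrix]

        def multiply(compiled, vector, mul, add):
            """Multiply a compiled matrix by a vector, doing no work for zero entries."""
            out = []
            for row in compiled:
                if len(row) == 1 and row[0][1] == 1:
                    out.append(vector[row[0][0]])  # identity row: a pure copy, no arithmetic
                else:
                    acc = 0
                    for c, coeff in row:
                        acc = add(acc, mul(coeff, vector[c]))
                    out.append(acc)
            return out

        # identity rows plus one redundancy row, as in the post
        matrix = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [3, 7, 2]]
        compiled = compile_rows(matrix)
        print(multiply(compiled, [10, 20, 30], mul=lambda a, b: a * b, add=lambda a, b: a + b))
        # -> [10, 20, 30, 230]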

    Read the article

  • Why would restarting MySQL make my site faster?

    - by beagleguy
    Hey all, my site started dragging lately, with queries taking exceptionally longer than I would expect given properly tuned indexes. I just restarted the MySQL server after 31 days of uptime, and every query is now substantially faster; the whole site renders 3-4 times faster. Does anything jump out at you as to why this may have happened? Improper settings in my.cnf, perhaps? Any ideas as to where I can start looking to try to pinpoint the cause? Thanks

    Read the article

  • which loop is faster?

    - by Mansuro
    I have this loop:

        for (it = someCollection.iterator(); it.hasNext(); ) {
            // some code here
        }

    I changed it to:

        for (it = someCollection.iterator(); ; ) {
            if (!it.hasNext())
                break;
            // some code here
        }

    The second version ran a little bit faster in JUnit tests in Eclipse. Is the second loop faster? I'm asking because the times given by JUnit are not very exact, but they do give an approximate value.
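    The two forms are semantically identical, and differences this small are usually measurement noise. A rough analogue in Python (not the poster's Java) of timing such a comparison with repeats, where the best-of-N figures typically come out indistinguishable:

        import timeit

        data = list(range(100_000))

        def plain_for():
            for x in data:
                pass

        def manual_iterator():
            # same loop, with the termination test written out by hand
            it = iter(data)
            while True:
                try:
                    x = next(it)
                except StopIteration:
                    break

        # Take the best of several repeats; single runs are dominated by noise.
        for f in (plain_for, manual_iterator):
            best = min(timeit.repeat(f, number=100, repeat=5))
            print(f"{f.__name__}: {best:.4f}s (best of 5)")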

    Read the article

  • Apache suddenly very slow on http and faster on https

    - by hsnm
    Background: I have Apache 2 running on Ubuntu. There is low usage on it, and it is mostly accessed for a web service URL from mobile apps. It was working fine until I installed SSL certificates. I now have both http and https. When I access the server using https, I get a fairly quick response (but probably not as fast as before). When I use http, it's very slow.

    What I tried: From this post, I curl localhost from the host itself and it also takes some time, meaning there is no routing issue. The server runs on an Amazon EC2 instance and is managed by me only. Also: I see that Apache, once running, creates the maximum number of processes it is allowed to, which was not the case before. I lowered MaxClients to 20 and I think I'm getting faster responses, but it still takes over a minute and I always have MaxClients Apache processes. dmesg returns many lines like [ 1953.655703] TCP: Possible SYN flooding on port 80. Sending cookies. When I run netstat I get many entries in SYN_RECV. Possibly a DDoS attack? From EC2's monitoring diagrams I see a pattern of high "Maximum Network In (Bytes)" since 2 days ago. By the way, the server is still being tested; the actual traffic is very low and not consistent. I tried to go with this solution to limit incoming connections using iptables - still no luck, but I'm trying.

    Question: What could be the problem? Is this a DDoS attack?

    Read the article

  • Why is domU faster than dom0 on IO?

    - by Paco
    I have installed Debian 7 on a physical machine. This is the configuration of the machine:

        3 hard drives using RAID 5 (stripe element size: 1M; read policy: Adaptive Read Ahead; write policy: Write Through)
        /boot 200 MB ext2
        / 15 GB ext3
        swap 10 GB
        LVM rest (~500 GB)

    I installed postgresql and created a big database (over 1 GB). I have an SQL request that takes a lot of time to run (a SELECT statement, so it only reads data from the database). This request takes approximately 5.5 seconds to run. Then, I installed Xen, created a domU with another Debian distro, and on this OS also installed postgresql with the same database. The same SQL request takes only 2.5 seconds to run. I checked the kernel on both dom0 and domU; uname -a returns "Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux" on both systems. I checked the kernel parameters, which are approximately the same. For those that are relevant, I changed their values to make them match on both systems using sysctl. I saw no changes (the requests still take the same amount of time). After this, I checked the file systems. I used ext3 on domU. Still no changes. I installed hdparm and ran hdparm -Tt on all my partitions on both systems, and I get similar results. Now I am stuck; I don't know what is different and what could be the cause of such a big difference.

    Additional info: Debian runs on a Dell PowerEdge 2950 server; postgresql 9.1.9 (both dom0 and domU); xen-linux-system 3.2.0; xen-hypervisor 4.1. Thanks

    EDIT: As Krzysztof Ksiezyk suggested, it might be due to some file caching system. I ran the dd command to test both the read and write speed. Here is domU:

        root@test1:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
        ^C2020+0 records in
        2020+0 records out
        2020000000 bytes (2.0 GB) copied, 18.8289 s, 107 MB/s
        root@test1:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
        2020+0 records in
        2020+0 records out
        2020000000 bytes (2.0 GB) copied, 15.0549 s, 134 MB/s

    And here is dom0:

        root@debian:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
        ^C1693+0 records in
        1693+0 records out
        1693000000 bytes (1.7 GB) copied, 8.87281 s, 191 MB/s
        root@debian:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
        1693+0 records in
        1693+0 records out
        1693000000 bytes (1.7 GB) copied, 0.501509 s, 3.4 GB/s

    What can be the cause of this caching behavior? And how can we "fix" it? Can we apply it to dom0?

    EDIT 2: I switched my virtual disk type. To do so I followed this article. I did dd if=/dev/vg0/test1-disk of=/mnt/test1-disk.img bs=16M and then, in /etc/xen/test1.cfg, changed the disk parameter to use file: instead of phy:. That should have removed the file caching, but I still get the same numbers (domU being much faster for Postgres).

    Read the article

  • 2 servers, high availability and faster response

    - by user17886
    I recently bought a second webserver because I worry about hardware failure of my old server. Now that I have that second server I wish to do a little more than just have one server on standby, replicating all day. As long as it's there I might as well get some advantage out of it!

    I have a website powered by Ubuntu 12.04, nginx, php-fpm, APC, MySQL (5.5) and CouchDB. I'm currently testing configurations where I can achieve failover AND make good use of the extra hardware for faster responses / distributed load.

    The setup I am testing now involves heartbeat for IP failover and two identical servers. Of the two servers only one has a public IP address. If one server crashes, the other server takes over the public IP address. On an incoming request, nginx forwards the request to php-fpm on either server a or server b (50/50 if both servers are alive). Once the request has been sent to php-fpm, both servers look at localhost for the MySQL server. I use master-master MySQL replication for this. The file system is synced with lsyncd. This works pretty well, but I'm reading it's discouraged by the (MySQL) community.

    Another option I could think of is to use one server as a MySQL master and one server as a web/php server. The servers would still sync their filesystems and still run the same duplicate software (nginx, MySQL), but master-slave MySQL replication could be used. As long as both servers are alive I could just prefer nginx to listen on IP a and MySQL on IP b. If one server is down, the other server could take over the tasks of the other, simply by IP switching.

    But I'm completely new at this, so I would greatly value your expert advice. Is either of the two setups any good? If you have any thoughts on this please let me know!

    PS: virtualisation, hosting in different locations, or active/passive setups are not solutions I'm looking for. I find virtual servers either too slow or too expensive. I already have a passive failover in another location, but in case of a crash I found the site was still unreachable for too long due to DNS caching.

    Read the article

  • Yet again: "This device can perform faster" (Samsung Galaxy Tab 2)

    - by Mike C
    I've been doing a lot of research with no reasonable solution. Please excuse the length of my post.

    When I plug my Galaxy Tab 2 (7" / Wi-Fi only / Android ICS) into my Windows 7 64-bit machine, I (almost always) get this warning popup that "This device can perform faster." And in fact, transfers onto the Tab in this mode are slow. The two times I've been able to get a high-speed connection, the transfer has occurred at the expected speed. I just don't know what to do to get that high-speed transfer. (The first time, it was the first time I connected the Tab; the second time, I was fiddling around and unplugging/plugging in again.) That popup is telling me that the device is USB 2, but that it thinks I've connected it to a USB 1 port. In fact, every USB port (there are ten) on this system is USB 2. It's an ASUS M3A78-EMH mobo from late 2008. I'm not sure what the chipset is; the CPU is an AMD Athlon 4850e, but I've seen this message reported for non-AMD systems. (Every mobo reference I've seen in reports on this has been for Asus, but of course most reporters aren't reporting that info at all.) The Windows 7 installation is just a couple weeks old (I had a disk crash), but I saw the same warning on the WinXP/64 that was installed previously.

    In Device Manager, there are two "Standard Enhanced PCI to USB Host Controller" nodes, which are the actual high-speed controllers. There are also five "Standard OpenHCD USB Host Controller" nodes, which I have determined are virtual USB 1 controllers embedded in the "Enhanced" controllers. (In Device Manager, I'm using View|Devices by Connection.) My high-speed thumb drives, external disks, and iPod all show up as subnodes of the "Enhanced" controllers; the keyboard, mouse, and USB speakers under the "OpenHCD" ones -- and this is true no matter which ports these devices are plugged into. The Tab shows up under an OpenHCD node, unsurprisingly. It appears as a threesome: a top-level "Mobile USB Composite device" with two subs: "Galaxy Tab 2" and "Mobile USB Modem." (I have no idea what the modem device implies or how I might use it, but I don't care about it either: I just want the Tab to reliably connect at high speed.)

    On the Tab, the USB support has a switch between PTP and MTP, the latter being the default and the preferred mode for me (as I'm usually hooking it up for music sync). I have tried, however, connecting it as PTP, and it still connects as USB 1. (As PTP, only the "Galaxy Tab 2" device appears -- no Composite, no Modem.) If it's plugged in as MTP and I change the setting to PTP, Windows unloads and reloads the device, and voila: the Tab appears under an "Enhanced" node, but eventually reloads again to show an exclamation icon on the device; Properties then shows "This device cannot start." Same response if I plug it in as PTP and then change to MTP; in this case, only the Tab itself shows the exclamation, not the other two devices.

    One thing I have not tried, and really would prefer to avoid, is installing the "beta" chipset driver available on the Asus website, which is dated 2009. Windows tells me it has the most up-to-date drivers for the Tab, and for the chipset, and I'm inclined to believe that. I suspect the problem is with the Samsung drivers, or possibly the hardware. One suggestion I saw elsewhere which might possibly pertain is to ensure the USB cable is properly shielded; however, the Tab has one of those misbegotten 30-pin, not-quite-an-iPod connectors, and I don't know if I could find a 3rd-party one. It seems unlikely that this cable is improperly shielded, though. (Is there a way to test that?)

    So, my question is: does anyone know how to get this working as one might reasonably expect it to?

    Read the article

  • Is recursion ever faster than looping?

    - by Carson Myers
    I know that recursion is sometimes a lot cleaner than looping, and I'm not asking anything about when I should use recursion over iteration; I know there are lots of questions about that already. What I'm asking is: is recursion ever faster than a loop? To me it seems like you would always be able to refine a loop and get it to perform more quickly than a recursive function, because the loop avoids the overhead of constantly setting up new stack frames. I'm specifically looking for whether recursion is faster in applications where recursion is the right way to handle the data, such as in some sorting functions, in binary trees, etc.
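    For illustration, a small benchmark in Python, which has no tail-call elimination, so every recursive call pays full stack-frame cost (languages that do eliminate tail calls can close this gap; numbers are machine-dependent):

        import timeit

        def count_loop(n):
            while n:
                n -= 1

        def count_rec(n):
            if n:
                count_rec(n - 1)

        # Same work either way; the recursive version additionally builds and
        # tears down one stack frame per step.
        for f in (count_loop, count_rec):
            t = timeit.timeit(lambda: f(900), number=5_000)
            print(f"{f.__name__}: {t:.3f}s")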

    Read the article

  • Seeking a faster $(':data(key)')

    - by PoltoS
    I'm writing an extension to jQuery that adds data to DOM elements using el.data('lalala', my_data); and then uses that data to update elements dynamically. Each time I get new data from the server I need to update all elements having el.data('lalala') != null. To get all the needed elements I use an extension by James Padolsey:

        $(':data(lalala)').each(...);

    Everything was great until I came to a situation where I need to run that code 50 times - it is very slow! It takes about 8 seconds to execute on my page with 3640 DOM elements:

        var x, t = (new Date).getTime();
        for (n = 0; n < 50; n++) {
            jQuery(':data(lalala)').each(function() { x++; });
        };
        console.log(((new Date).getTime() - t) / 1000);

    Since I don't need a RegExp as the parameter of the :data selector, I tried to replace it with:

        var x, t = (new Date).getTime();
        for (n = 0; n < 50; n++) {
            jQuery('*').each(function() {
                if ($(this).data('lalala')) x++;
            });
        };
        console.log(((new Date).getTime() - t) / 1000);

    This code is faster (5 sec), but I want more.

    Q: Is there a faster way to get all elements with this data key? In fact, I can keep an array with all the elements I need, since I execute .data('key') in my module. Checking 100 elements that have the desired .data('lalala') is better than checking 3640 :) So the solution would look like:

        for (i in elements) {
            el = elements[i];
            ....

    But sometimes elements are removed from the page (using jQuery .remove()). Both solutions described above [the $(':data(lalala)') selector and if ($(this).data('lalala'))] will skip removed items (as I need), while the solution with the array will still point to the removed element (in fact, the element would not really be deleted - it will only be removed from the DOM tree - because my array still holds a reference). I found that .remove() also removes data from the node, so my solution becomes:

        var toRemove = [];
        for (var i in elements) {
            var el = elements[i];
            if ($(el).data('lalala'))
                ....
            else
                toRemove.push(i);
        };
        for (var ii in toRemove)
            elements.splice(toRemove[ii], 1); // remove element from array

    This solution is 100 times faster!

    Q: Will the garbage collector release the memory taken by DOM elements deleted from that array? Remember, the elements were referenced by the DOM tree, we made a new reference in our array, then removed them with .remove(), and then removed them from the array. Is there a better way to do this?
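    The "keep a registry without leaking removed elements" idea, sketched in Python with weak references (an analogue only, not jQuery; in modern JavaScript a WeakSet would play the same role):

        import weakref

        class Node:
            """Stand-in for a DOM element carrying .data('lalala')."""
            def __init__(self, name):
                self.name = name

        # A WeakSet registry never keeps its members alive: once the last strong
        # reference to an element is gone, iteration simply skips it.
        registry = weakref.WeakSet()

        a, b = Node("a"), Node("b")
        registry.add(a)
        registry.add(b)
        print(sorted(n.name for n in registry))  # ['a', 'b']

        del b  # simulate .remove() dropping the only strong reference
        print(sorted(n.name for n in registry))  # ['a'] -- no manual splice needed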

    Read the article

  • what's faster: merging lists or dicts in python?

    - by tipu
    I'm working with an app that is CPU-bound more than memory-bound, and I'm trying to merge two things, whether they be sets or dicts. Now the thing is I can choose either one, but I'm wondering if merging dicts would be faster since it's all in memory? Or is it always going to be O(n), n being the size of the smaller set? The reason I asked about dicts rather than sets is because I can't convert a set to JSON, because that results in {key1, key2, key3} and JSON needs a key/value pair, so I am using a dict so that json.dumps returns {key1:1, key2:1, key3:1}. Yes, this is wasteful, but if it proves to be faster then I'm okay with it.
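    A quick sketch (numbers vary by machine; the data here is illustrative, not from the post): both merges do work linear in the amount of data merged, and a set can be given the {key: 1} shape only at dump time via dict.fromkeys, so neither choice has to be wasteful:

        import json
        import timeit

        a = {f"key{i}": 1 for i in range(100_000)}
        b = {f"key{i}": 1 for i in range(50_000, 150_000)}
        sa, sb = set(a), set(b)

        # Both merges do O(len(a) + len(b)) work.
        print(timeit.timeit("{**a, **b}", globals=globals(), number=100))
        print(timeit.timeit("sa | sb", globals=globals(), number=100))

        # A set becomes JSON-friendly only when needed:
        print(json.dumps(dict.fromkeys({"key2", "key1"}, 1)))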

    Read the article

  • database vs flat file, which is a faster structure for "regex" matching with many simultaneous requests

    - by Jamex
    Hi, which structure returns results faster and/or is less taxing on the host server: a flat file or a database (MySQL)? Assume many users (100 users) are simultaneously querying the file/db. Searches involve pattern matching against a static file/db. The file has 50,000 unique lines (same data type). There could be many matches. There is no writing to the file/db, just reads. Is it possible to duplicate the file/db and write a logic switch to use the backup file/db if the main file is in use? Which language is best for this type of structure? Perl for the flat file and PHP for the db?

    Additional info: If I want to find all the cities that have the pattern "cis" in their names, which is better/faster: using regex or string functions? Please recommend a strategy. TIA
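    For the side question, a sketch in Python (standing in for the Perl/PHP choice; the data is illustrative): with 50,000 short lines held in memory, a plain substring test usually beats even a precompiled regex when the pattern is a fixed literal:

        import timeit

        setup = (
            'import re; '
            'lines = [f"city-{i:05d}-francisco" for i in range(50_000)]; '
            'pat = re.compile("cis")'
        )

        # Fixed-literal match: substring test vs. precompiled regex scan.
        print(timeit.timeit('[l for l in lines if "cis" in l]', setup=setup, number=20))
        print(timeit.timeit("[l for l in lines if pat.search(l)]", setup=setup, number=20))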

    Read the article

  • In .NET, which loop runs faster: for or foreach?

    - by Binoj Antony
    In C#/VB.NET/.NET, which loop runs faster: for or foreach? Ever since I read, a long time ago, that a for loop works faster than a foreach loop, I assumed it stood true for all collections: generic collections, all arrays, etc. I scoured Google and found a few articles, but most of them are inconclusive (read the comments on the articles) and open-ended. What would be ideal is to have each scenario listed along with the best solution for it, e.g. (just an example of how it should look):

        for iterating an array of 1000+ strings - for is better than foreach
        for iterating over IList (non-generic) strings - foreach is better than for

    A few references found on the web for the same: the original grand old article by Emmanuel Schanzer; CodeProject, FOREACH Vs. FOR; Blog, To foreach or not to foreach, that is the question; asp.net forum, NET 1.1 C# for vs foreach.

    [Edit] Apart from the readability aspect of it, I am really interested in facts and figures; there are applications where the last mile of squeezed-out performance optimization does matter.
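    The same comparison transposed to Python as a rough analogue (not .NET; there, the answer also depends on whether the collection is an array, a List<T>, or an IEnumerable, since foreach compiles differently for each): index-based access versus direct iteration over one list:

        import timeit

        setup = "data = [str(i) for i in range(1_000)]"

        # index-based "for" vs. direct "foreach"-style iteration
        print(timeit.timeit("for i in range(len(data)): s = data[i]", setup=setup, number=10_000))
        print(timeit.timeit("for s in data: s", setup=setup, number=10_000))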

    Read the article

  • General suggestions for making R code faster? [closed]

    - by gsk3
    Questions come up fairly frequently about how to make R code faster. This is an attempt to provide a general framework for thinking about the problem. Questions of this nature seem to fall into one of a few categories:

        1. I have a loop and it's running slowly. I've heard that vectorization can speed things up. How do I vectorize it?
        2. I've vectorized and it's still running slowly. What do I do next?
        3. I'd like to speed up my code, but it's running quickly enough already.

    What are the principles and specifics which can be used to make R code run faster?
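    On the vectorization point, an analogue in Python/NumPy (not R, and it assumes NumPy is installed) of the underlying principle: one call into compiled vectorized code beats an interpreted per-element loop:

        import timeit

        setup = (
            "import numpy as np; "
            "data = list(range(1_000_000)); "
            "arr = np.arange(1_000_000)"
        )

        loop = "total = 0\nfor x in data:\n    total += x * 2"
        vec = "(arr * 2).sum()"  # like sum(data * 2) in R: one vectorized pass

        print(timeit.timeit(loop, setup=setup, number=20))  # interpreted, per element
        print(timeit.timeit(vec, setup=setup, number=20))   # compiled, typically far faster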

    Read the article

  • Get image from website URL on iPhone faster

    - by dragon
    Hi, I want to get an image from a website faster on iPhone. Right now I get the image from the website URL using this code:

        NSData *data;
        UIImage *Favimage;
        data = [NSData dataWithContentsOfURL:[NSURL URLWithString:WebsiteUrl]];
        Favimage = [[UIImage alloc] initWithData:data];

    But it takes some time to get the image from the website URL. How can I get the image faster? What can I do? Is there an API in the Apple iPhone SDK for this? Can anyone help me? Thanks in advance.....
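    One thing the code shows: dataWithContentsOfURL: blocks the calling thread until the whole download finishes. The usual fix is to fetch asynchronously; the same pattern sketched in Python rather than Objective-C (the URL is a placeholder):

        from concurrent.futures import ThreadPoolExecutor
        from urllib.request import urlopen

        def fetch(url):
            """Download raw bytes; runs on a worker thread, not the main one."""
            with urlopen(url) as resp:
                return resp.read()

        with ThreadPoolExecutor(max_workers=4) as pool:
            future = pool.submit(fetch, "http://example.com/favicon.ico")
            # ... the main thread stays free to draw the UI ...
            image_bytes = future.result()  # collect the bytes when ready
            print(len(image_bytes))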

    Read the article

  • Why SQL functions are faster than UDF

    - by Zerotoinfinite
    Though it's quite a subjective question, I feel it necessary to share it on this forum. I have personally experienced that when I create a UDF (even one that is not complex) and use it in my SQL, it drastically decreases performance. But when I use SQL's built-in functions, they work much faster. Conversion, logical & string functions are clear examples of that. So, my question is: why are SQL built-in functions faster than UDFs? And it would be a bonus if someone can guide me on how I can judge/measure function cost, either mathematically or logically.
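    An analogy rather than SQL engine internals: a built-in runs inside the engine's own compiled code, while a scalar UDF pays an interpreted invocation per row. The same shape of overhead, sketched in Python (names illustrative):

        import timeit

        data = ["abc"] * 100_000

        def shout(s):
            """Stands in for a scalar UDF: one extra interpreted call per row."""
            return s.upper()

        # Built-in method called directly vs. routed through a user function.
        print(timeit.timeit("[s.upper() for s in data]", globals=globals(), number=50))
        print(timeit.timeit("[shout(s) for s in data]", globals=globals(), number=50))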

    Read the article

  • Which join is faster?

    - by Costa
    Hi, which is faster:

        SELECT * FROM X INNER JOIN Y ON x.Record_ID = y.ForignKey_NotIndexed_NotUnique

    or

        SELECT * FROM X INNER JOIN Y ON y.ForignKey_NotIndexed_NotUnique = x.Record_ID

    Read the article

  • Why is Perl commonly used for writing CGI scripts?

    - by Michael Vasquez
    I plan to add a better search feature to my site, so I thought that I would write it in C and use CGI as a means to access it. But it seems that Perl is the most popular language when it comes to CGI-based stuff. Why is that? Wouldn't it be faster if programmed in C or machine code? What advantages, if any, are there to writing it in a scripting language? Thanks.

    Read the article

  • Faster two-way Bookmark/History Sync for Firefox?

    - by geocoo
    I need a fast, automatic, two-way bookmark & history sync. Firefox Sync seems too slow: I might bookmark/visit a URL and then, within a few minutes, put the desktop to sleep and leave the house; if it hasn't synced, I don't have a way of getting it on my laptop (or vice versa). My home upload is only around 200 kbps, so maybe that's why FF Sync seems slow? (I mean it's minutes slow, not seconds.) I have read there are a few alternatives like Xmarks/Delicious/Google, but I don't want clutter like extra toolbars or having to visit a site to get my bookmarks; they should be as native to the Firefox bookmarks bar as possible, organised in folders/tags. I know someone must have experimented with or know more about this. Thanks.

    Read the article

  • How to copy a 200GB file faster?

    - by RainDoctor
    I got a 200GB .tgz file on server A (RHEL 5.2). I want to transfer that file to server B (RHEL 5.3). Server B is on ESXi 4 Update 1; I gave 10GB and 4 vCPUs to that server B VM. Server A and server B are connected directly with an ethernet cable, using local IP addresses (no switch involved). scp gives me about 3Mbps. Is there a way to get 400Mbps?

    Read the article

  • JVM system time runs faster than HP UNIX OS system time

    - by winston
    Hello, I have the following output from a simple debug JSP:

        Weblogic Startup Since: Friday, October 19, 2012, 08:36:12 AM
        Database Current Time: Wednesday, December 12, 2012, 11:43:44 AM
        Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:45:38 AM

    Line 1 is a variable recorded during WebLogic webapp startup. Line 2 is the output of the database query select sysdate from dual; line 3 is the output of the Java code new Date(). I have checked with the shell date command that line 2's output matches the OS time. Line 3's output is the mystery; I don't know how the JVM arrives at it. On another machine with the same setup, the same JSP outputs this:

        Weblogic Startup Since: Tuesday, December 11, 2012, 02:29:06 PM
        Database Current Time: Wednesday, December 12, 2012, 11:51:48 AM
        Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:51:50 AM

    And another machine:

        Weblogic Startup Since: Monday, December 10, 2012, 05:00:34 PM
        Database Current Time: Wednesday, December 12, 2012, 11:52:03 AM
        Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:52:07 AM

    Findings: the pattern shows that the longer WebLogic has been up, the larger the discrepancy between OS time and JVM time. Can anybody help with the HP JVM? On HP-UX, NTP sync is done daily. Anyway, here are the server versions:

        HP-UX machinex B.11.31 U ia64 2426956366 unlimited-user license
        java version "1.6.0.04"
        Java(TM) SE Runtime Environment (build 1.6.0.04-jinteg_28_apr_2009_04_46-b00)
        Java HotSpot(TM) Server VM (build 11.3-b02-jre1.6.0.04-rc2, mixed mode)
        WebLogic Server Version: 10.3.2.0

    Java properties:

        java.runtime.name=Java(TM) SE Runtime Environment
        java.runtime.version=1.6.0.04-jinteg_28_apr_2009_04_46-b00
        java.vendor=Hewlett-Packard Co.
        java.vendor.url=http\://www.hp.com/go/Java
        java.version=1.6.0.04
        java.vm.name=Java HotSpot(TM) 64-Bit Server VM
        java.vm.info=mixed mode
        java.vm.specification.vendor=Sun Microsystems Inc.
        java.vm.vendor="Hewlett-Packard Company"
        sun.arch.data.model=64
        sun.cpu.endian=big
        sun.cpu.isalist=ia64r0
        sun.io.unicode.encoding=UnicodeBig
        sun.java.launcher=SUN_STANDARD
        sun.jnu.encoding=8859_1
        sun.management.compiler=HotSpot 64-Bit Server Compiler
        sun.os.patch.level=unknown
        os.name=HP-UX
        os.version=B.11.31

    Read the article
