Search Results

Search found 40567 results on 1623 pages for 'database performance'.


  • Storing lots of large strings with frequent "appends" and few reads

    - by Thiago Moraes
    In my current project, I need to store a very long ASCII string for each instance of a given object. The string receives about 2 appends per minute and is rarely read back. The worst-case scenario is a 5-10 MB string. I'll have thousands of instances of my object, and I'm worried that storing all those strings in the filesystem would not be optimal, but I can't think of a better solution. Can anyone suggest an alternative? Maybe a key-value store? In that case, which one? Any other thoughts?
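    One possible direction, sketched minimally: a key-value store with server-side append, e.g. Redis's APPEND command via the redis-py client. The key naming below is made up for illustration, and this assumes a local Redis instance.

      # Sketch: append-mostly storage in Redis (assumes a local Redis and redis-py;
      # the key naming scheme is hypothetical).
      import redis

      r = redis.Redis(host="localhost", port=6379)

      def append_chunk(object_id: int, chunk: str) -> int:
          """Append `chunk` to the string for one object; returns the new length."""
          # APPEND grows the value in place, so the existing data is not rewritten.
          return r.append(f"obj:{object_id}:log", chunk.encode("ascii"))

      def read_all(object_id: int) -> str:
          """Rare full read of the accumulated string."""
          data = r.get(f"obj:{object_id}:log")
          return data.decode("ascii") if data else ""

    Redis keeps values in memory, so with thousands of 5-10 MB strings the total RAM footprint is the main thing to check before committing to this route.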

    Read the article

  • limit linux background flush (dirty pages)

    - by korkman
    Background flushing in Linux happens when either too much written data is pending (adjustable via /proc/sys/vm/dirty_background_ratio) or a timeout for pending writes is reached (/proc/sys/vm/dirty_expire_centisecs). Until another limit is hit (/proc/sys/vm/dirty_ratio), more written data may be cached; beyond that, further writes block. In theory, this should create a background process writing out dirty pages without disturbing other processes. In practice, it disturbs any process doing uncached reading or synchronous writing. Badly. This is because the background flush actually writes at 100% device speed, and any other device requests at that time will be delayed (because all queues and write caches along the way are filled). Is there any way to limit the number of requests per second the flushing process performs, or otherwise effectively prioritize other device I/O?
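    For experimentation, a minimal sketch (assuming a Linux host, root privileges, and Python 3) that reads the writeback knobs mentioned above and optionally lowers them; the example values are purely illustrative, not recommendations.

      # Sketch: inspect and adjust the dirty-page writeback knobs via /proc/sys.
      # Requires root to write; the commented example values are illustrative only.
      from pathlib import Path

      KNOBS = [
          "dirty_background_ratio",    # % of RAM dirty before background flushing starts
          "dirty_ratio",               # % of RAM dirty before writers block
          "dirty_expire_centisecs",    # age at which dirty pages must be written out
          "dirty_writeback_centisecs", # how often the flusher threads wake up
      ]

      def show():
          for knob in KNOBS:
              value = Path(f"/proc/sys/vm/{knob}").read_text().strip()
              print(f"{knob} = {value}")

      def set_knob(knob: str, value: int):
          Path(f"/proc/sys/vm/{knob}").write_text(f"{value}\n")

      if __name__ == "__main__":
          show()
          # e.g. keep less dirty data around so flushes come in smaller bursts:
          # set_knob("dirty_background_ratio", 1)
          # set_knob("dirty_bytes", 64 * 1024 * 1024)  # absolute cap, if preferred

    Lowering the background thresholds trades peak write throughput for smaller, more frequent flushes; it does not directly rate-limit the flusher, which is what the question is really asking for.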

    Read the article

  • dd oflag=direct is 5x faster

    - by César
    I have CentOS 6.2 on a server with these specs:

      2x CPU, 16-core AMD Opteron 6282 SE, 64 GB RAM
      RAID controller H700, 1 GB NV cache:
        - 2 HD 74 GB SAS 15Krpm, RAID1, stripe 16k (OS, CentOS 6.2) -> sda
        - 4 HD 146 GB SAS 15Krpm, RAID10, stripe 16k (ext4 bs 4096, no barriers) -> sdb -> /vol01
      RAID controller H800, 1 GB NV cache:
        - MD1200 with 12 HD 300 GB SAS 15Krpm, RAID10, stripe 256k (for Postgres 8.3.18) (ext4 bs 4096, stride 64, stripe-width 384, no barriers) -> sdc -> /vol02

    I'm benchmarking IO speed with dd, and I see that on the 12-disk RAID10 this:

      dd if=/dev/zero of=DD bs=8M count=10000 oflag=direct
      10000+0 records in
      10000+0 records out
      83886080000 bytes (84 GB) copied, 126,03 s, 666 MB/s

    but if I remove the "oflag=direct" option I get about 80 MB/s. The read benchmark is similar:

      dd of=/dev/null if=DD bs=8M count=10000 iflag=direct
      10000+0 records in
      10000+0 records out
      83886080000 bytes (84 GB) copied, 79,5918 s, 1,1 GB/s

    If I remove iflag=direct I get 150 MB/s... I don't understand these huge differences; on other machines I don't see this behavior. Could I have a kernel parameter misconfigured? Thanks!
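    For cross-checking outside dd, a minimal Python sketch of the same kind of sequential read benchmark, with and without O_DIRECT (Linux-only; O_DIRECT requires an aligned buffer, which an anonymous mmap provides). The file path and sizes are placeholders.

      # Sketch: sequential read throughput with and without O_DIRECT (Linux only).
      # O_DIRECT needs an aligned buffer; anonymous mmap memory is page-aligned.
      import mmap, os, time

      def read_throughput(path, total_bytes=8 << 30, block=8 << 20, direct=False):
          flags = os.O_RDONLY | (os.O_DIRECT if direct else 0)
          fd = os.open(path, flags)
          buf = mmap.mmap(-1, block)          # page-aligned scratch buffer
          done = 0
          start = time.monotonic()
          try:
              while done < total_bytes:
                  n = os.readv(fd, [buf])     # read one block into the aligned buffer
                  if n == 0:
                      break
                  done += n
          finally:
              os.close(fd)
          return done / (1 << 20) / (time.monotonic() - start)   # MB/s

      # e.g. print(read_throughput("DD", direct=True), read_throughput("DD"))

    If the Python numbers track the dd numbers, the gap is not a dd artifact; the usual suspects are then the readahead and I/O scheduler settings that only affect the buffered (non-direct) path.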

    Read the article

  • Data aggregation of CSV files in Java

    - by royB
    I have k CSV files (5 CSV files, for example); each file has m fields which form a key and n values. I need to produce a single CSV file with the aggregated data. I'm looking for the most efficient solution to this problem, mainly in terms of speed. I don't think we will have memory issues. I'd also like to know whether hashing is really a good solution, because we would have to use a 64-bit hash to keep the chance of a collision under 1% (we have around 30,000,000 rows per aggregation).

    For example, file 1:

      f1,f2,f3,v1,v2,v3,v4
      a1,b1,c1,50,60,70,80
      a3,b2,c4,60,60,80,90

    file 2:

      f1,f2,f3,v1,v2,v3,v4
      a1,b1,c1,30,50,90,40
      a3,b2,c4,30,70,50,90

    result:

      f1,f2,f3,v1,v2,v3,v4
      a1,b1,c1,80,110,160,120
      a3,b2,c4,90,130,130,180

    Approaches we have thought of so far: hashing (using a ConcurrentHashMap), merge-sorting the files, or a DB (MySQL, Hadoop, or Redis). The solution needs to handle a huge amount of data (each file has more than two million rows).

    A better example, file 1:

      country,city,peopleNum
      england,london,1000000
      england,coventry,500000

    file 2:

      country,city,peopleNum
      england,london,500000
      england,coventry,500000
      england,manchester,500000

    merged file:

      country,city,peopleNum
      england,london,1500000
      england,coventry,1000000
      england,manchester,500000

    The key here is country,city. This is just an example; my real key has 6 columns and there are 8 data columns, 14 columns in total. We would like the solution to be the fastest in terms of data processing.
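    As a single-machine baseline (not necessarily the fastest option the poster is after), a sketch of the aggregation in Python, assuming a shared header, leading key columns, and that the distinct keys fit in memory:

      # Sketch: sum the value columns of several CSVs grouped by the key columns.
      # Assumes every file shares the same header and distinct keys fit in RAM.
      import csv

      def aggregate(paths, key_cols, out_path="merged.csv"):
          totals = {}
          header = None
          for path in paths:
              with open(path, newline="") as f:
                  reader = csv.reader(f)
                  header = next(reader)                      # same header in every file
                  for row in reader:
                      key = tuple(row[:key_cols])
                      values = [int(v) for v in row[key_cols:]]
                      if key in totals:
                          totals[key] = [a + b for a, b in zip(totals[key], values)]
                      else:
                          totals[key] = values
          with open(out_path, "w", newline="") as f:
              writer = csv.writer(f)
              writer.writerow(header)
              for key, values in totals.items():
                  writer.writerow(list(key) + values)

      # aggregate(["file1.csv", "file2.csv"], key_cols=2)  # the country,city example

    Using the key tuple directly (as here, or as a ConcurrentHashMap key in Java) avoids the hashing-collision concern entirely; an explicit 64-bit hash is only needed if the keys themselves are too large to keep in memory.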

    Read the article

  • Is it reasonable that a random disk seek & read costs ~16ms?

    - by fzhang
    I am frustrated by the latency of random reads from a non-SSD disk. Based on the results of the following test program, it takes ~16 ms for a random read of just 512 bytes without help from the OS cache. I tried changing 512 to larger values, such as 25k, and the latency did not increase much; I guess that is because the disk seek dominates the time. I understand that random reads are inherently slow, but I just want to be sure that ~16 ms is reasonable, even for a non-SSD disk.

      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/time.h>
      #include <sys/types.h>
      #include <unistd.h>

      int main(int argc, char** argv) {
          int fd = open(argv[1], O_RDONLY);
          if (fd < 0) {
              fprintf(stderr, "Failed to open %s\n", argv[1]);
              return -1;
          }

          char buffer[512] = { '\0' };   /* fixed size: a VLA cannot have an initializer */
          const size_t count = sizeof(buffer);
          const off_t offset = 25990611 / 2;

          struct timeval start_time;
          gettimeofday(&start_time, NULL);

          off_t ret = lseek(fd, offset, SEEK_SET);
          if (ret != offset) {
              perror("lseek error");
              close(fd);
              return -1;
          }

          ssize_t nread = read(fd, buffer, count);
          if (nread != (ssize_t)count) {
              fprintf(stderr, "Failed reading all: %zd\n", nread);
              close(fd);
              return -1;
          }

          struct timeval end_time;
          gettimeofday(&end_time, NULL);
          printf("tv_sec: %ld, tv_usec: %ld\n",
                 end_time.tv_sec - start_time.tv_sec,
                 end_time.tv_usec - start_time.tv_usec);

          close(fd);
          return 0;
      }
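    As a methodological cross-check (not the poster's code), a hedged Python sketch that averages many random 512-byte reads with O_DIRECT, since a single timed read is easily skewed by caching or queueing effects:

      # Sketch: average latency of many random 512-byte reads, bypassing the page
      # cache with O_DIRECT (Linux only). 512 assumes 512-byte logical sectors;
      # use 4096 on 4Kn drives. Offsets and the buffer must be block-aligned.
      import mmap, os, random, time

      def random_read_latency(path, samples=200, block=512):
          fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
          size = os.fstat(fd).st_size
          buf = mmap.mmap(-1, block)                         # page-aligned buffer
          offsets = [random.randrange(0, size - block) // block * block
                     for _ in range(samples)]
          start = time.monotonic()
          for off in offsets:
              os.preadv(fd, [buf], off)                      # one random seek + read
          os.close(fd)
          return (time.monotonic() - start) / samples * 1000  # ms per read

      # e.g. print(random_read_latency("/path/to/large/file"))

    On a 7200 rpm disk, an average around 8-15 ms (seek plus rotational latency) is normal, so ~16 ms for a cold single read is within the expected range.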

    Read the article

  • Does this prove a network bandwidth bottleneck?

    - by Yuji Tomita
    I've incorrectly assumed that my internal ApacheBench testing means my server can handle 1k concurrency @ 3k hits per second. My theory at the moment is that the network is the bottleneck: the server can't send enough data fast enough.

    External testing from blitz.io at 1k concurrency shows my hits/s capping off at 180, with pages taking longer and longer to respond as the server is only able to return 180 per second. I've served a blank file from nginx and benched it: it scales 1:1 with concurrency.

    To rule out IO / memcached bottlenecks (nginx normally pulls from memcached), I served up a static version of the cached page from the filesystem. The results are very similar to my original test: I'm capped at around 180 RPS. Splitting the HTML page in half gives me double the RPS, so it's definitely limited by the size of the page.

    If I internally ApacheBench from the local server, I get consistent results of around 4k RPS on both the full page and the half page, at high transfer rates (Transfer rate: 62586.14 [Kbytes/sec] received). If I AB from an external server, I get around 180 RPS - same as the blitz.io results.

    How do I know it's not intentional throttling? If I benchmark from multiple external servers, all results become poor, which leads me to believe the problem is in MY server's outbound traffic, not a download-speed issue with my benchmarking servers / blitz.io. So I'm back to my conclusion that my server can't send data fast enough. Am I right? Are there other ways to interpret this data? Is the solution/optimization to set up multiple servers plus load balancing, each serving 180 hits per second? I'm quite new to server optimization, so I'd appreciate any confirmation interpreting this data.

    Outbound traffic: here's more information about the outbound bandwidth. The network graph shows a maximum output of 16 Mb/s: 16 megabits per second. Doesn't sound like much at all. Due to a suggestion about throttling, I looked into this and found that Linode has a 50 Mbps cap (which I'm not even close to hitting, apparently). I had it raised to 100 Mbps. Since Linode caps my traffic, and I'm not even hitting it, does this mean that my server should indeed be capable of outputting up to 100 Mbps but is limited by some other internal bottleneck? I just don't understand how networks at this large a scale work: can they literally send data as fast as they can read it from the HDD? Is the network pipe that big?

    In conclusion: 1. Based on the above, I'm thinking I can definitely raise my 180 RPS by adding an nginx load balancer on top of a multi-nginx-server setup, at exactly 180 RPS per server behind the LB. 2. If Linode has a 50/100 Mbit limit that I'm not hitting at all, there must be something I can do to hit that limit with my single-server setup. If I can read / transmit data fast enough locally, and Linode even bothers to have a 50/100 Mbit cap, there must be an internal bottleneck that's not allowing me to hit those caps, and I'm not sure how to detect it. Correct?

    I realize the question is huge and vague now, but I'm not sure how to condense it. Any input is appreciated on any conclusion I've made.
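    A quick back-of-the-envelope check of the numbers above (the ~11 KB response size is inferred from the observed cap, not taken from the question):

      # Back-of-the-envelope: what does an outbound bandwidth cap imply for RPS?
      # The page size is a placeholder; substitute the real response size.
      def max_rps(uplink_mbit: float, page_kb: float) -> float:
          bytes_per_sec = uplink_mbit * 1_000_000 / 8
          return bytes_per_sec / (page_kb * 1024)

      print(round(max_rps(16, 11)))    # ~178 RPS: close to the observed ~180 cap
      print(round(max_rps(100, 11)))   # ~1110 RPS: what a 100 Mb/s ceiling would allow

    If the observed cap tracks page size divided into roughly 16 Mb/s, that supports the outbound-bandwidth theory; if raising the cap to 100 Mb/s changes nothing, the limit sits somewhere inside the host or its virtual NIC.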

    Read the article

  • Building an optimal custom machine for SQL Server

    - by Chad Grant
    Getting the hardware in the mail any day. Hardware related to my question: 10x 15.5k RPM SAS Seagate Cheetahs and 2x Adaptec 5405 PCIe RAID cards; the motherboard has integrated SAS RAID. I was thinking I would build two RAID 10 arrays, one for data and one for logs, with the remaining 2 drives in a RAID 0 for tempdb. I'll probably throw in a drive for the OS. Does putting the SQL Server application / exes on a RAID array make a difference, and is there any impact to leaving the OS on a relatively slow disk compared to the RAID arrays? I have 5-6 DBs, combined < 50 GB, with a relatively good / constant load, estimating 60-70% reads vs writes. Planning on using log shipping as well, if that matters. Any advice or suggestions?

    Read the article

  • Rails and Mongoid best way to implement sharing system

    - by Matteo Pagliazzi
    I have to model User and Board in Rails using Mongoid as the ODM. Each board references a user through a foreign key user_id, and now I want to add the ability to share a board with other users. Following CRUD, I'd create a new model called something like Share and its related controller, with the ability to create/edit/delete a Share, but I have some doubts. First, where should I save information about shares? I think I could add a field to the Board collection called shared_with containing an array of user ids. In MySQL I'd create a new table with the id of the user who shares, the resource shared, and the user the resource is shared with, but I don't think that's necessary with MongoDB. Every user a board is shared with should be able to edit the board (but not delete it), so the Board should have two relations: one with the owner and another with the users the board is shared with, right? For permissions (the owner should be able to delete a board but the users it is shared with shouldn't), what should I use? I'm using Devise for authentication, but I think something like CanCan would fit better - how would I implement it? What do you think of this approach? Do you see any problems, or have better solutions?

    Read the article

  • mysql_real_escape_string is giving me errors when I try to add security to my website

    - by Mike
    I tried doing this:

      @ $db = new myConnectDB();
      $beerName = mysql_real_escape_string($beerName);
      $beerID = mysql_real_escape_string($beerID);
      $brewery = mysql_real_escape_string($brewery);
      $style = mysql_real_escape_string($style);
      $userID = mysql_real_escape_string($userID);
      $abv = mysql_real_escape_string($abv);
      $ibu = mysql_real_escape_string($ibu);
      $breweryID = mysql_real_escape_string($breweryID);
      $icon = mysql_real_escape_string($icon);

    and I get this error:

      Warning: mysql_real_escape_string() [function.mysql-real-escape-string]: Access denied for user

    Read the article

  • Which is faster for read access on EC2; local drive or EBS?

    - by Phillip Oldham
    Which is faster for read access on an EC2 instance; the "local" drive or an attached EBS volume? I have some data that needs to be persisted so have placed this on an EBS volume. I'm using OpenSolaris, so this volume has been attached as a ZFS pool. However, I have a large chunk of EC2 disk space that's going to go unused, so I'm considering re-purposing this as a ZFS cache volume but I don't want to do this if the disk access is going to be slower than that of the EBS volume as it would potentially have a detrimental effect.

    Read the article

  • Understanding a flight search engine

    - by Jens Jensen
    Today I discovered a search engine website that offers a service where you enter your departure airport and then search for the destinations you can fly to for the cheapest price. This is very nice to use if you want to fly somewhere but don't know which "good deals" are available. This is the site: http://www.kayak.com/explore/ Can someone explain to me which programs are (mostly) used for this, and summarize how to build this sort of search engine? I find it very interesting, but unfortunately not all possible flights are shown, so I think this kind of project could be improved.
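    Conceptually, the "explore" view reduces to an aggregation over a fare table: group candidate fares by destination and keep the cheapest one. A toy sketch with made-up fares (a real engine would query airline/GDS fare feeds and cache the results):

      # Toy sketch: cheapest price per destination from a flat fare table.
      # The fares below are invented; this only illustrates the aggregation step.
      from collections import defaultdict

      fares = [
          ("CPH", "LHR", 54), ("CPH", "LHR", 61),
          ("CPH", "BCN", 78), ("CPH", "NYC", 310), ("CPH", "BCN", 66),
      ]

      def cheapest_by_destination(origin, fares):
          best = defaultdict(lambda: float("inf"))
          for frm, to, price in fares:
              if frm == origin:
                  best[to] = min(best[to], price)
          return dict(best)

      print(cheapest_by_destination("CPH", fares))
      # {'LHR': 54, 'BCN': 66, 'NYC': 310}

    The hard part in practice is not this aggregation but obtaining and keeping fresh the fare data itself, which is why such sites show only a subset of possible flights.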

    Read the article

  • How to diagnose very slow pagefile

    - by svick
    Quite often, one of the applications I use freezes ("does not respond") for a while, in extreme cases for a few minutes. This happens especially when switching apps. During this time the HDD light flashes constantly, and perfmon shows that the HDD is used 100% of the time (the CPU, on the other hand, isn't) and that the pagefile is being read (which is to be expected when switching apps), but at a very slow rate. When I sort the disk table in perfmon by reads or writes, the file read and written the most is the pagefile, but still at quite a low rate (I don't remember the numbers). How can I diagnose what's causing this? I use Windows Vista, and the computer is a quite ordinary two-year-old laptop.

    Read the article

  • Does the tempdb Log file get Zero Initialized at Startup?

    - by Jonathan Kehayias
    While working on a problem today I happened to think about what the impact on startup might be for a really large tempdb transaction log file. It's fairly common knowledge that data files in SQL Server 2005+ on Windows Server 2003+ can use instant file initialization, but transaction log files cannot. If this is news to you, see the following blog posts: Kimberly L. Tripp | Instant Initialization - What, Why and How? In Recovery... | Misconceptions around instant file initialization In Recovery…...(read more)

    Read the article

  • IMPORTANT - FY13 OPN Incentive Program VAD Webcast - June 21st @ 4PM GMT

    - by Cinzia Mascanzoni
    Please mark your calendars for the FY13 OPN Incentive Program update webcast on June 21. The objective of this call is to share the FY13 updates to the OPN Incentive Program with you. Thursday, June 21st @ 4:00 PM GMT / 5:00 PM CET. Click here for the details of the webcast. Please plan to call in 5-10 minutes prior to the start to avoid delays. We look forward to your participation on this call.

    Read the article

  • Do More, Spend Less, Speed Time to Market – All with Oracle Database Appliance.

    - by jgelhaus
    Join Oracle for a firsthand experience that will highlight how your business can lower TCO for hardware and software, do more with your existing personnel and resources, and get your products to market faster with Oracle Database Appliance. Learn how you can take advantage of the world's most popular database - Oracle Database 11g - in a single solution that's affordable, provides automated installation, is easy to manage, and is supported end-to-end by Oracle. Oracle Database Appliance is the complete package: software, server, storage, and networking, all designed by Oracle to simplify your technology and let you get down to business.

    Webcast schedule (all sessions at 1:00pm Eastern; teleconference 1-866-753-5684; passcode: oda):

      Wednesday, April 4  - Conference code: 61908866
      Wednesday, April 11 - Conference code: 61909590
      Wednesday, April 18 - Conference code: 61910385

    Read the article

  • Throughput = BS * IOPS?

    - by Marvin
    I've seen in many places that throughput = bs * iops should hold. For example, writing with a 128k block size to a SAS disk that can support 190 IOPS should give a throughput of ~23.75 MB/s: 23.75 (MB/s) = 128 (KB) * 190 (IOPS) / 1024. Now when I tested this in a VM against a monster NetApp filer I got these results:

      # dd if=/dev/zero of=/tmp/dd.out bs=4k count=2097152
      8589934592 bytes (8.6 GB) copied, 61.5996 seconds, 139 MB/s

    To view the IO rate of the VM I used iostat and esxtop, and they both showed around 250 IOPS. So to my understanding the throughput was supposed to be ~1000 KB/s: 1000 (KB/s) = 4 (KB) * 250 (IOPS). The dd writes 8 GB, twice the size of the RAM of course, so no page caching here. What am I missing? Thanks!
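    The rule of thumb itself, as a small hedged helper; note that it only holds when the I/O size the device actually services equals the block size you plug in, which is exactly what buffered/merged writes (as in the dd test above) tend to break:

      # Helper for the rule of thumb: throughput (MB/s) = block_size (KB) * IOPS / 1024.
      # It assumes each of the counted IOPS really is block_size bytes; if writes are
      # merged or cached along the path, the effective I/O size is much larger.
      def throughput_mb_s(block_size_kb: float, iops: float) -> float:
          return block_size_kb * iops / 1024

      print(throughput_mb_s(128, 190))   # 23.75 MB/s, the SAS-disk example
      print(throughput_mb_s(4, 250))     # ~0.98 MB/s, far below the observed 139 MB/s

    The gap between ~0.98 MB/s and the observed 139 MB/s is a hint that the 250 I/Os per second seen by the VM are much larger than 4 KB each by the time they reach the filer.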

    Read the article

  • DB API for shell scripting (any shell)

    - by foampile
    I am faced with some legacy shell scripts that run batch data-processing jobs in Oracle using SQL*Plus. For the most part, the data tier does not have to communicate retrieved data back to the script for shell-level processing, but in a few cases it does. The problem is, SQL*Plus is really meant to be an end-user app, not an API that can communicate with other clients programmatically. That is why people have invented APIs such as DBI/DBD for Perl, JDBC for Java, ODBC, etc. The way it is done here is that they invoke SQL*Plus and then parse the output, which is clearly designed for human consumption, using tools like sed and awk. The whole thing is at best a hack and very prone to bugs. Since this client is rather conservative with their technology, they don't want to scale their scripts up to Perl or Python, where there are proper data access APIs. So I am wondering whether there are similar APIs for a shell, e.g. ksh or bash. What I would like is an API that returns data in a two-dimensional array of strings (for lack of typing) so that I can just read DB data like that. The way they do it now is akin to parsing a regular web page's HTML to get a single stock quote rather than cleanly calling a web service and being done with it. Anybody know of a product I can use? Thanks

    Read the article

  • Extracting one file from archive: 7-zip requires decompressing entire archive?

    - by siikamiika
    I've noticed that when browsing an archive containing multiple files with the 7-zip 9.20 Windows GUI, extracting one file for previewing takes significantly longer with .7z than with .rar archives. With .7z it also cycles through the filenames in the archive, so it looks to me like it is decompressing the entire archive and keeping just one file. Is there a setting in 7-zip (current or beta/alpha versions) that allows RAR-like behavior?

    Read the article

  • Zabbix machine is going crazy with HD writes!

    - by gshankar
    I recently installed Zabbix on an Ubuntu box I had sitting around. It's only monitoring 2 servers, but I've noticed that it's continuously hammering the HD with writes. I don't remember Zabbix being this resource-heavy when I've used it in the past... Any ideas on why this is happening and what I can do about it? Running iotop gives me this:

      1710 be/4 mysql 0.00 B/s 102.12 K/s 0.00 % 0.00 % mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
      1723 be/4 mysql 0.00 B/s 0.00 B/s 0.00 % 0.00 % mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld

    I'm pretty sure it's Zabbix causing all that MySQL activity, as it's the only thing running on the box that uses MySQL...

    Read the article

  • Programs minimized for a long time take a long time to "wake up"

    - by bart
    I work in Photoshop CS6 and multiple browsers a lot. I'm not using them all at once, so some applications stay minimized to the taskbar for hours or days. The problem is, when I try to restore them from the taskbar, it sometimes takes longer than starting them fresh! Photoshop especially feels really strange for many seconds after finally showing up: it's slow, unresponsive, and sometimes even freezes completely for a minute or two. It's not a hardware problem, as it has been like this on all my PCs. Would I also notice it after upgrading my HDD to an SSD and adding RAM (my main PC currently has 4 GB)? Could people with powerful PCs / Macs tell me whether it also happens to you? I guess OSes somehow "focus" on active software and move resources away from programs that are running but not being used. Is it possible to set RAM / CPU / HDD priorities or something for, let's say, Photoshop, so it won't slow down after a long period of inactivity?

    Read the article

  • Apache2 BufferedLogs On - anybody using it ?

    - by Qiqi
    Greetings, I am wondering whether anybody is using BufferedLogs On with Apache2 and has found any issues? The feature is marked as experimental, but it has been for many years now, so I guess it's pretty stable. I am running some servers with constrained disk IO capacity at the moment, so I turned it on, hoping that even a small benefit could help in the long run ;-) I get several to several hundred requests per second, so by my reckoning there is really no need to write to the log after each request, because honestly I don't think my filesystem is the best handler for many unnecessary small writes (it's OCFS2 shared among several DomUs in Xen).

    Read the article

  • MySQL vs. SQL Server on GoDaddy: what is the difference between a hosted DB and an App_Data DB?

    - by Nate Gates
    I'm using GoDaddy for site hosting, and I'm currently using MySQL because there are fewer limits on size, etc. My question is: what is the difference between using a hosted GoDaddy DB such as MySQL vs. creating a SQL Server database in the App_Data folder? My guess is security? Would it be a bad idea to use a SQL Server DB that's located in the App_Data folder? Additionally, I am able to create an .mdf (SQL Server DB file) in the App_Data folder, but I'm really unsure whether I should use it or not. If I did use it, it would simplify using some of the Microsoft tools. Like I said, my guess is that it would be less secure, but I don't really know. I know I have a 10 GB file-system limit, so I'm assuming my DB would have to share that space.

    Read the article
