Search Results

Search found 14643 results on 586 pages for 'performance comparison'.


  • Oracle Hangs on Responses Intermittently

    - by Ryan Cook
    I want to preface this with the fact that I am a developer, not even close to a DBA, and I am new to Oracle. OK, here it goes: I have a Java application which uses Spring and Hibernate. It's a simple CRUD app and I will leave the details out as I don't think they are the issue. I have noticed that my app runs fine when I use MySQL, but when I use an Oracle 10.2 server, every 7th-10th request hangs for 5-10 seconds. My Oracle installation was done by me using all defaults, same as the MySQL install. I don't even know where to start looking. Any ideas? Thanks in advance, and sorry that I lack the details that are most likely required for help.
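    A first diagnostic sketch (assuming access to a DBA-privileged account; not specific to this app): while one of the hangs is happening, Oracle's v$session_wait view shows what each session is blocked on, which usually distinguishes a lock wait from a network or configuration problem.

        -- Run in SQL*Plus while a request is hanging; idle client waits are filtered out.
        SELECT sid, event, state, wait_time, seconds_in_wait
        FROM   v$session_wait
        WHERE  event <> 'SQL*Net message from client'
        ORDER  BY seconds_in_wait DESC;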

    Read the article

  • Spiceworks SQL Monitor can't authenticate

    - by user11457
    I have Spiceworks 4.7 installed, and added the SQL monitor extension to monitor our SQL Servers. It does identify our SQL Servers correctly, but when I put the login information in it can't authenticate, with the following error message: "Authentication failed. Check the login and password, and ensure that the server is configured for remote connections." I get the same result trying to use Windows authentication. We do use an alternate port number; could this be the problem?
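    If the alternate port is the culprit, note that SQL Server addresses a non-default port with a comma after the host, not a colon (a sketch; the server name, instance and port below are hypothetical, and whether Spiceworks' server field accepts this form is an assumption):

        DBSERVER01,2433
        DBSERVER01\INSTANCENAME,2433

    It may also be worth confirming that TCP/IP is enabled in SQL Server Configuration Manager, since "configured for remote connections" in the error usually points there.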

    Read the article

  • Bad idea to keep htop running?

    - by Michael T. Smith
    I'm now monitoring multiple servers (3) and in the coming weeks that'll increase (towards 5 or 6). I've been keeping three terminal windows open running htop via SSH and I'm now wondering if there are any downsides to having a connection constantly open to production servers?

    Read the article

  • My KDE is very slow in certain operations

    - by Pietro
    I have a problem with my Linux installation. It seems that the KDE code that deals with directory windows is extremely slow (in both Dolphin and Konqueror). This happens both when I click on a directory icon and when I want to open/save a file from many KDE applications. The window can take a minute or more to open. The same happens when I right-click on an icon. CPU usage during this is very low (less than 10%). Am I the only one with this problem, or is it well known and maybe already fixed? Consider that I cannot update to a more recent version of openSUSE. Thank you, Pietro. Configuration: openSUSE 11.4, KDE 4.6.0; system: DELL Precision T3500 (Intel Xeon); home directory mounted on a remote drive. <-- could this be the reason?

    Read the article

  • Recommendation for tuning 100s of SQL databases

    - by wayne
    Hi, I'm running several SQL Servers, each hosting a few hundred multi-gigabyte databases for customers. They are all set up homogeneously as far as the schemas are concerned; however, customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index/profile/tune this large number of databases? With at least 600 catalogs, I can't have someone manually profile and index each one according to its usage patterns. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine!
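    One hedged starting point, since manual profiling is ruled out: SQL Server 2005 and 2008 already track missing-index suggestions per instance in DMVs, so candidates can be harvested automatically. A sketch (the ranking expression is illustrative, not authoritative, and suggestions should still be sanity-checked before creation):

        -- Highest-impact missing-index candidates across all databases on an instance:
        SELECT TOP 20
               d.statement AS [table],
               d.equality_columns, d.inequality_columns, d.included_columns,
               s.user_seeks, s.avg_user_impact
        FROM   sys.dm_db_missing_index_details d
        JOIN   sys.dm_db_missing_index_groups g      ON g.index_handle = d.index_handle
        JOIN   sys.dm_db_missing_index_group_stats s ON s.group_handle = g.index_group_handle
        ORDER BY s.user_seeks * s.avg_user_impact DESC;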

    Read the article

  • First requests are painfully slow

    - by winSharp93
    I am running Redmine under IIS using Zoo. Installation was done using the Web Platform Installer and the default configuration has not been touched. However, when using the application, the first requests take very long to complete (sometimes more than one minute). During that time, the ruby.exe causes some CPU load (about 15%). According to the log files, it's mainly the views taking that long to render: Started GET "/redmine/login" for IP at 2012-09-04 09:54:08 +0200 Processing by AccountController#login as HTML Rendered account/login.html.erb within layouts/base (42150.5ms) Completed 200 OK in 43508ms (Views: 43008.5ms | ActiveRecord: 0.0ms) Rendered account/login.html.erb within layouts/base (42435.1ms) Completed 200 OK in 44100ms (Views: 43523.3ms | ActiveRecord: 0.0ms) After the initial delay, further request times are totally acceptable. Any ideas on how to speed up the warmup time?
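    If the delay is one-time app spin-up, as the pattern suggests, IIS can be told to warm the application itself rather than letting the first visitor pay the cost. A sketch using the Application Initialization feature (built into IIS 8, a separate module on IIS 7.5; the page path is taken from the log above, and the app pool must also be set to startMode="AlwaysRunning"):

        <!-- web.config fragment: fire a warm-up request whenever the app recycles -->
        <system.webServer>
          <applicationInitialization doAppInitAfterRestart="true">
            <add initializationPage="/redmine/login" />
          </applicationInitialization>
        </system.webServer>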

    Read the article

  • NFS robustness or weaknesses

    - by Thomas
    I have 2 web servers with a load balancer in front of them. They both have mounted an NFS share, so that they can share some common files, like images uploaded from the CMS and some runtime-generated files. Is NFS robust? Are there any specific weaknesses I should know about? I know it does not support file locking, but that does not matter to me; I use memcache to emulate file locking for the runtime-generated files. Thanks
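    For what it's worth, most of the client-side robustness trade-offs live in the mount options; a hedged fstab sketch (server and paths hypothetical). "hard" makes the clients retry indefinitely instead of erroring out during a server blip, which also means a wedged NFS server stalls both web heads at once; that single point of failure is the main weakness to plan for.

        nfsserver:/export/shared  /var/www/shared  nfs  hard,intr,tcp,noatime  0 0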

    Read the article

  • Speeding up ROW_NUMBER in SQL Server

    - by BlueRaja
    We have a number of machines which record data into a database at sporadic intervals. For each record, I'd like to obtain the time period between this recording and the previous recording. I can do this using ROW_NUMBER as follows: WITH TempTable AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY Machine_ID ORDER BY Date_Time) AS Ordering FROM dbo.DataTable ) SELECT [Current].*, Previous.Date_Time AS PreviousDateTime FROM TempTable AS [Current] INNER JOIN TempTable AS Previous ON [Current].Machine_ID = Previous.Machine_ID AND Previous.Ordering = [Current].Ordering + 1 The problem is, it runs really slowly (several minutes on a table with about 10k entries). I tried creating separate indices on Machine_ID and Date_Time, and a single joined index, but nothing helps. Is there any way to rewrite this query to go faster?
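    Two changes usually help with exactly this shape of query. First, neither separate index can serve the window function; a composite index matching the PARTITION BY/ORDER BY pair lets ROW_NUMBER avoid a full sort. Second, on SQL Server 2012 or later, LAG removes the self-join entirely. A sketch using the names from the question (the index name is made up):

        -- Composite index matching the window:
        CREATE INDEX IX_DataTable_Machine_Date ON dbo.DataTable (Machine_ID, Date_Time);

        -- SQL Server 2012+ only: previous row without the self-join:
        SELECT *,
               LAG(Date_Time) OVER (PARTITION BY Machine_ID ORDER BY Date_Time) AS PreviousDateTime
        FROM   dbo.DataTable;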

    Read the article

  • What is the computer "doing" when it is running slow and task manager is not showing any CPU activity?

    - by Joakim Tall
    A typical example is when shutting down a memory-intensive application: it can take quite a while before the computer gets back up to speed. Is there some inherent cost in releasing memory? Or is it throttled by some kind of hard-drive activity, and if so, is there any good way to track that? I usually bring up Task Manager when a computer is running slow, and sorting by CPU activity can usually show which process is causing the problem, but sometimes there is no activity showing. And yes, I do "show processes from all users". I have been wondering this since the days of Win2k :)

    Read the article

  • What Is Proper Laptop Battery Care While Running a Laptop Solely on Battery?

    - by Boris_yo
    Because of convenience, I had to move my laptop to another room, away from the room where I always ran it on a UPS without using the battery. Since I now always run the laptop on battery, I wonder about the proper usage to prolong battery life. Currently I run the laptop with the power supply connected, so the battery charges until it is at 100%; when it is, I disconnect the power supply and continue working until the battery meter shows 10% remaining. That's when I plug the power supply back in and let it charge to 100% again while I work. But it takes a long time to fully charge the laptop while working, since my power supply is only 60W, which is probably the reason for the slow charge (I believe the charger is an express charger). This makes me think: if charging takes that much longer, it might keep the battery warm for the whole charging period. So should I keep running the laptop as described above, or would it be better to leave the power supply constantly connected and keep the battery between 99%-100%? On one hand, that won't keep the battery warm, but it will frequently top the battery up from 99% to 100% (which might reduce battery life?). On the other hand, if I keep working solely on battery and recharge below 10%, the battery gets warm, but only while charging. Can anybody suggest the correct way of running a laptop on battery to ensure better battery life? Dell Latitude E6420, Windows 7 64-bit.

    Read the article

  • Setup low-cost image storage server with 24x SSD array to get high IOPS?

    - by Nenad
    I want to build (let's name it) a low-cost Ra*san which would host the images for our social site (many millions); we have 5 sizes of every photo, at 3 KB, 7 KB, 15 KB, 25 KB and 80 KB per image. My idea is to build a server with 24x consumer 240 GB SSDs in RAID 6, which will give me some 5 TB of disk space for the photo storage. To have HA I can add a 2nd one and use DRBD. I'm looking to get above 150'000 IOPS (4K random reads). As we mostly have read access only and rarely delete photos, I think I'll go with consumer MLC SSDs. I read many endurance reviews and don't see a problem there, as long as we don't rewrite the cells. What do you think about my idea? - I'm not sure between RAID 6 and RAID 10 (more IOPS, but higher SSD cost). - Is ext4 OK for the filesystem? - Would you use 1 or 2 RAID controllers, with an extender backplane? If anyone has realized something similar I would be happy to get real-world numbers. UPDATE: I have bought 12 (plus some spares) OCZ Talos 480GB SAS SSD drives; they will be placed in a 12-bay DAS and attached to a PERC H800 controller (1GB NV cache, manufactured by LSI, with FastPath). I plan to set up RAID 50 with ext4. If someone is wondering about benchmarks, let me know what you would like to see.
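    Two hedged notes on the design. Parity RAID multiplies small random writes (the commonly cited penalty is 6 back-end I/Os per random write for RAID 6 versus 2 for RAID 10), which can matter even on a read-mostly box once uploads and rebuilds are considered. And rather than trusting datasheet IOPS, the 150'000 target can be validated on the assembled array with fio; a sketch (size, job count and target path are placeholders):

        # 4K random-read benchmark against the mounted array:
        fio --name=randread --directory=/mnt/array --rw=randread --bs=4k \
            --direct=1 --ioengine=libaio --iodepth=32 --numjobs=8 \
            --size=4G --runtime=60 --time_based --group_reporting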

    Read the article

  • Simple queries occasionally running very slowly

    - by Johan
    I have some very simple queries that occasionally run very slowly. The table viewed_sites has about 10 - 20 rows. Running EXPLAIN ANALYZE always gives a runtime of less than 3 milliseconds. When the query is run automatically (every 10 seconds) it occasionally takes over a second to run. The query: INSERT INTO ga.viewed_sites (site_id) VALUES ('gop2') The table: CREATE TABLE viewed_sites ( site_id character varying(4) NOT NULL, last_viewed timestamp with time zone DEFAULT now() NOT NULL ); The (occasional) log result: 2010-05-24 15:47:55 UTC LOG: duration: 1044.632 ms statement: INSERT INTO ga.viewed_sites (site_id) VALUES ('gop2') It's a horribly vague question, but what could be causing this? I suppose it comes down to CPU, RAM, HDD or some combination of the above. PostgreSQL 8.3, Ubuntu 8.04, Intel(R) Core(TM)2 Duo CPU E6750 @ 2.66GHz, 2 GiB RAM
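    A common culprit for exactly this pattern (instant EXPLAIN ANALYZE, a one-second stall every so often) is a checkpoint flushing dirty buffers to a busy disk, which stalls otherwise-trivial writes. A sketch of postgresql.conf settings on 8.3 to confirm and smooth it (the values are illustrative):

        log_checkpoints = on                  # correlate checkpoint times with the slow INSERTs
        checkpoint_completion_target = 0.9    # spread checkpoint writes out (8.3 default is 0.5)
        checkpoint_segments = 16              # checkpoint less often (default is 3)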

    Read the article

  • Can I change a MySQL table back and forth between InnoDB and MyISAM without any problems?

    - by Daniel Magliola
    I have a site with a decently big database, 3 GB in size, and a couple of tables with a dozen million records. It's currently 100% on MyISAM, and I have the feeling that the server is going slower than it should because of too much locking, so I'd like to try going to InnoDB and see if that makes things better. However, I need to do that directly in production, because obviously without load this doesn't make any difference. However, I'm a bit worried about this, because InnoDB actually has the potential to be slower, so the question is: if I convert all tables to InnoDB and it turns out I'm worse off than before, can I go back to MyISAM without losing anything? Can you think of any problems I might encounter? (For example, I know that InnoDB stores all data in ONE big file that only gets bigger; can this be a problem?) Thank you very much, Daniel
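    For the mechanics: the conversion is a plain ALTER TABLE and is reversible in both directions, so the main one-way door is the shared tablespace growth mentioned above. A sketch (the table name is hypothetical; expect each ALTER to lock and rewrite the whole table, which takes a while at this size, so testing on a copy first seems wise):

        -- my.cnf, before converting: give each table its own .ibd file so dropping
        -- or re-converting a table actually returns disk space (ibdata1 never shrinks):
        --   innodb_file_per_table = 1

        ALTER TABLE orders ENGINE=InnoDB;   -- convert
        ALTER TABLE orders ENGINE=MyISAM;   -- revert if it regresses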

    Read the article

  • Various problems with software RAID 1 array built with Samsung 840 Pro SSDs

    - by Andy B
    I am bringing to ServerFault a problem that has been tormenting me for 6+ months. I have a CentOS 6 (64-bit) server with an md software RAID 1 array of 2 x Samsung 840 Pro SSDs (512GB). Problems: serious write speed problems:

        root [~]# time dd if=arch.tar.gz of=test4 bs=2M oflag=sync
        146+1 records in
        146+1 records out
        307191761 bytes (307 MB) copied, 23.6788 s, 13.0 MB/s
        real 0m23.680s
        user 0m0.000s
        sys 0m0.932s

    When doing the above (or any other larger copy) the load spikes to unbelievable values (even over 100), going up from ~1. When doing the above I've also noticed very weird iostat results:

        Device:  rrqm/s  wrqm/s   r/s   w/s      rsec/s  wsec/s    avgrq-sz  avgqu-sz  await    svctm  %util
        sda      0.00    1589.50  0.00  54.00    0.00    13148.00  243.48    0.60      11.17    0.46   2.50
        sdb      0.00    1627.50  0.00  16.50    0.00    9524.00   577.21    144.25    1439.33  60.61  100.00
        md1      0.00    0.00     0.00  0.00     0.00    0.00      0.00      0.00      0.00     0.00   0.00
        md2      0.00    0.00     0.00  1602.00  0.00    12816.00  8.00      0.00      0.00     0.00   0.00
        md0      0.00    0.00     0.00  0.00     0.00    0.00      0.00      0.00      0.00     0.00   0.00

    And it keeps this up until it actually writes the file to the device (out from swap/cache/memory). The problem is that the second SSD in the array has svctm and await roughly 100 times larger than the first. For some reason the wear is also different between the 2 members of the array:

        root [~]# smartctl --attributes /dev/sda | grep -i wear
        177 Wear_Leveling_Count 0x0013 094% 094 000 Pre-fail Always - 180
        root [~]# smartctl --attributes /dev/sdb | grep -i wear
        177 Wear_Leveling_Count 0x0013 070% 070 000 Pre-fail Always - 1005

    The first SSD has a wear of 6% while the second SSD has a wear of 30%!! It's like the second SSD in the array works at least 5 times as hard as the first one, as proven by the first iteration of iostat (the averages since reboot):

        Device:  rrqm/s  wrqm/s  r/s      w/s     rsec/s    wsec/s   avgrq-sz  avgqu-sz  await  svctm  %util
        sda      10.44   51.06   790.39   125.41  8803.98   1633.11  11.40     0.33      0.37   0.06   5.64
        sdb      9.53    58.35   322.37   118.11  4835.59   1633.11  14.69     0.33      0.76   0.29   12.97
        md1      0.00    0.00    1.88     1.33    15.07     10.68    8.00      0.00      0.00   0.00   0.00
        md2      0.00    0.00    1109.02  173.12  10881.59  1620.39  9.75      0.00      0.00   0.00   0.00
        md0      0.00    0.00    0.41     0.01    3.10      0.02     7.42      0.00      0.00   0.00   0.00

    What I've tried: I've updated the firmware to DXM05B0Q (following reports of dramatic improvements for 840 Pros after this update). I have looked for "hard resetting link" in dmesg to check for cable/backplane issues, but found nothing. I have checked the alignment and I believe the partitions are aligned correctly (1MB boundary, listing below). I have checked /proc/mdstat and the array is optimal (second listing below).

        root [~]# fdisk -ul /dev/sda
        Disk /dev/sda: 512.1 GB, 512110190592 bytes
        255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00026d59
        Device Boot Start End Blocks Id System
        /dev/sda1 2048 4196351 2097152 fd Linux raid autodetect
        Partition 1 does not end on cylinder boundary.
        /dev/sda2 * 4196352 4605951 204800 fd Linux raid autodetect
        Partition 2 does not end on cylinder boundary.
        /dev/sda3 4605952 814106623 404750336 fd Linux raid autodetect

        root [~]# fdisk -ul /dev/sdb
        Disk /dev/sdb: 512.1 GB, 512110190592 bytes
        255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003dede
        Device Boot Start End Blocks Id System
        /dev/sdb1 2048 4196351 2097152 fd Linux raid autodetect
        Partition 1 does not end on cylinder boundary.
        /dev/sdb2 * 4196352 4605951 204800 fd Linux raid autodetect
        Partition 2 does not end on cylinder boundary.
        /dev/sdb3 4605952 814106623 404750336 fd Linux raid autodetect

    /proc/mdstat:

        root # cat /proc/mdstat
        Personalities : [raid1]
        md0 : active raid1 sdb2[1] sda2[0]
        204736 blocks super 1.0 [2/2] [UU]
        md2 : active raid1 sdb3[1] sda3[0]
        404750144 blocks super 1.0 [2/2] [UU]
        md1 : active raid1 sdb1[1] sda1[0]
        2096064 blocks super 1.1 [2/2] [UU]
        unused devices:

    Running a read test with hdparm:

        root [~]# hdparm -t /dev/sda
        /dev/sda: Timing buffered disk reads: 664 MB in 3.00 seconds = 221.33 MB/sec
        root [~]# hdparm -t /dev/sdb
        /dev/sdb: Timing buffered disk reads: 288 MB in 3.01 seconds = 95.77 MB/sec

    But look what happens if I add --direct:

        root [~]# hdparm --direct -t /dev/sda
        /dev/sda: Timing O_DIRECT disk reads: 788 MB in 3.01 seconds = 262.08 MB/sec
        root [~]# hdparm --direct -t /dev/sdb
        /dev/sdb: Timing O_DIRECT disk reads: 534 MB in 3.02 seconds = 176.90 MB/sec

    Both tests increase, but /dev/sdb doubles while /dev/sda increases maybe 20%. I just don't know what to make of this. As suggested by Mr. Wagner I've done another read test, with dd this time, and it confirms the hdparm test:

        root [/home2]# dd if=/dev/sda of=/dev/null bs=1G count=10
        10+0 records in
        10+0 records out
        10737418240 bytes (11 GB) copied, 38.0855 s, 282 MB/s
        root [/home2]# dd if=/dev/sdb of=/dev/null bs=1G count=10
        10+0 records in
        10+0 records out
        10737418240 bytes (11 GB) copied, 115.24 s, 93.2 MB/s

    So sda is 3 times faster than sdb. Or maybe sdb is also doing something else besides what sda does. Is there some way to find out if sdb is doing more than what sda does? UPDATE: Again, as suggested by Mr. Wagner, I have swapped the 2 SSDs. And as he expected, the problem moved from sdb to sda. So I guess I'll RMA one of the SSDs. I wonder if the cage might be problematic. What is wrong with this array? Please help!
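    On the closing question (whether sdb is being sent more or different work than sda): blktrace can capture the block-layer request stream on both members at once for comparison. A sketch (run it while one of the slow dd writes is in progress; in a healthy RAID 1 the write streams should look near-identical):

        blktrace -d /dev/sda -d /dev/sdb -w 30   # capture 30 seconds of traffic
        blkparse -i sda > sda.txt                # decode each per-device trace
        blkparse -i sdb > sdb.txt
        tail -n 20 sda.txt sdb.txt               # the closing summaries show per-device totals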

    Read the article

  • Should I use "Raid 5 + spare" or "Raid 6"?

    - by Trevor Boyd Smith
    What is "Raid 5 + Spare" (excerpt from User Manual, Sect 4.17.2, P.54): RAID5+Spare: RAID 5+Spare is a RAID 5 array in which one disk is used as spare to rebuild the system as soon as a disk fails (Fig. 79). At least four disks are required. If one physical disk fails, the data remains available because it is read from the parity blocks. Data from a failed disk is rebuilt onto the hot spare disk. When a failed disk is replaced, the replacement becomes the new hot spare. No data is lost in the case of a single disk failure, but if a second disk fails before the system can rebuild data to the hot spare, all data in the array will be lost. What is "Raid 6" (excerpt from User Manual, Sect 4.17.2, P.54): RAID6: In RAID 6, data is striped across all disks (minimum of four) and a two parity blocks for each data block (p and q in Fig. 80) is written on the same stripe. If one physical disk fails, the data from the failed disk can be rebuilt onto a replacement disk. This Raid mode can support up to two disk failures with no data loss. RAID 6 provides for faster rebuilding of data from a failed disk. Both "Raid 5 + spare" and "Raid 6" are SO similar ... I can't tell the difference. When would "Raid 5 + Spare" be optimal? And when would "Raid 6" be optimal"? The manual dumbs down the different raid with 5 star ratings. "Raid 5 + Spare" only gets 4 stars but "Raid 6" gets 5 stars. If I were to blindly trust the manual I would conclude that "Raid 6" is always better. Is "Raid 6" always better?

    Read the article

  • Is it a good idea to have the operating system on a solid state drive?

    - by Kenji Kina
    There is something I don't quite understand. I know an SSD helps with OS load times, but I'm not sure if the boost is only noticeable when booting, or gives an all-around considerably better experience thereafter. I am interested in having a quick and responsive environment after booting, which leads me to think that it'd be better to spend the SSD capacity on my most-used apps (and the page file? another side question) and not the OS itself. This, of course, means that I don't know just how much the OS reads/writes its files during normal usage. So, how good an idea is it to dump the whole 20GB+ of the Windows 7 OS onto the SSD (considering the hefty price per GB of SSD capacity) if I can put up with the usual hard disk boot times? Would I be missing out on a lot if I didn't?

    Read the article

  • Free memory on Linux [closed]

    - by Julia Roberts
    Possible Duplicate: Meaning of the buffers/cache line in the output of free What would be a good setting to free memory on Linux? I have 8 GB, but it gets used up fast. Current settings: kernel.sched_min_granularity_ns = 10000000 kernel.sched_wakeup_granularity_ns = 15000000 vm.dirty_ratio = 40 kernel.pid_max = 4096 vm.bdflush = 100 1200 128 512 15 5000 500 1884 2 What settings would I need so Linux frees old RAM faster?
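    A hedged sketch of the knobs that actually influence reclaim on a modern kernel. Note that vm.bdflush is a 2.4-kernel sysctl and is silently ignored today, and that most of the "used" 8 GB is likely reclaimable page cache rather than leaked memory (see the duplicate linked above):

        # /etc/sysctl.conf
        vm.swappiness = 10            # prefer dropping cache over swapping app pages
        vm.vfs_cache_pressure = 150   # reclaim dentry/inode caches sooner (default 100)

        # One-off test, not a setting: drop clean caches immediately
        #   sync; echo 3 > /proc/sys/vm/drop_caches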

    Read the article

  • master-slave-slave replication: master will become bottleneck for writes

    - by JMW
    Hi, the MySQL database has around 2 TB of data. I have a master-slave-slave replication running. The application that uses the database does read (SELECT) queries on just one of the 2 slaves, and write (DELETE/INSERT/UPDATE) queries on the master. The application does way more reads than writes. If we have a problem with the read (SELECT) queries, we can just add another slave database and tell the application that there is another slave, so it scales well... Currently the master is running at around 40% disk I/O due to the writes. So I'm thinking about how to scale the database in the future, because one day the master will be overloaded. What could be a solution there? Maybe MySQL Cluster? If so, are there any pitfalls or limitations in switching the database to NDB? Thanks a lot in advance... :)

    Read the article

  • MongoDB: ReplicaSet slower than a corresponding Master/Slave config

    - by SecondThought
    Is it true that a MongoDB configured as a replica set (let's say two nodes + an arbiter) will always be slower than the same DB and server specs configured as a master? I've run some tests and found that for a fresh DB, the replica set is a little quicker than the master/slave config, but when the DB grows bigger than ~100k records the latter gets much snappier. Am I missing something here? PS: I was testing it with the Mongoid driver for Ruby.

    Read the article

  • NFS I/O monitoring

    - by Gordon
    I have an NFS-mounted directory, and I'd like to monitor the I/O usage on it (MB/s reads and writes). What's the recommended way to do that? This is the NFS client; I don't have access to the NFS server. I'm not interested in general I/O usage (otherwise I would use vmstat/iostat). The client also has multiple NFS mounts, and I'm interested in monitoring just one specific mount (or I might have used ethereal). Thanks!
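    nfsiostat may fit: it reports per-mount NFS read/write throughput on the client, which satisfies the single-mount constraint without touching the server. A sketch (the mount point is hypothetical; the tool ships with nfs-utils, and a variant also exists in sysstat):

        nfsiostat 5 /mnt/data         # per-mount read/write kB/s, refreshed every 5 seconds
        cat /proc/self/mountstats     # the raw per-mount counters, if scripting instead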

    Read the article

  • What is the effect on LVM snapshot size when a file block is rewritten with its original contents?

    - by NevilleDNZ
    I'm exploring using LVM snapshots to offsite incremental archives from a snapshot "master" file system. In essence: simply copy across only the files on the "master" that have changed since the last incremental copy to the "archive", then snapshot the "archive" to retain the incremental. I am a bit puzzled by the block-usage behaviour of the archive's own incremental snapshot. I'm expecting that LVM is not smart enough to know that the "file block" is actually unchanged, and that a new copy will be allocated and written for the fresh "archive" file system. Can anyone confirm this, or point me to a document/page that gives some hints? BTW: the OS disk cache, the hard disk's physical cache and the hard disk itself likewise don't strictly need to do any actual "disk writes", since the "disk block" is unchanged. Any pointers to discussion of this style of optimisation would also be interesting.
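    The expectation is easy to test directly, and it should hold: LVM's copy-on-write works at the block layer with no content comparison, so rewriting a block with identical bytes still causes a chunk to be copied into the snapshot. A sketch (volume and file names hypothetical):

        lvdisplay /dev/vg0/archive-snap | grep 'Allocated to snapshot'   # usage before
        cp /archive/big.file /tmp/big.file                               # copy out...
        cp /tmp/big.file /archive/big.file                               # ...and rewrite in place
        sync                                                             # flush so the CoW shows up
        lvdisplay /dev/vg0/archive-snap | grep 'Allocated to snapshot'   # expect growth ~ file size

    One caveat on the BTW: unlike what the question hopes, the OS page cache does not skip the write either; a written page is marked dirty regardless of content, so the physical write happens all the way down.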

    Read the article
