
  • terminal tools and logs for debugging TCP issues

    - by kellogs
    I have a server which I am testing for functionality (not load, not stress) with tsung: 50 users per second, 100 users in total. Judging from the tsung graphs (tsung is the testing framework), the TCP connections (red line) drop to 0 while the commenced user sessions (green line) do not. The server logs show nothing to grip onto, so I am speculating some kind of TCP issue. Could that be the case? Where would I look further on the server, and what logs or tools should I be looking at? Only SSH is available, no GUI.

        root@XMPP:~# cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=11.10
        DISTRIB_CODENAME=oneiric
        DISTRIB_DESCRIPTION="Ubuntu 11.10"

    Thank you
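
    For a first pass over the TCP side, a few standard tools cover most of it; a minimal sketch (the port number is an assumption based on the XMPP hostname, adjust to your service):

        # count connections per TCP state while the test runs
        ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c

        # kernel-level drops and resets (listen queue overflows show up here)
        netstat -s | egrep -i 'listen|drop|reset|overflow'

        # capture handshakes on the service port for later inspection
        tcpdump -ni eth0 -w /tmp/test.pcap 'tcp port 5222'

        # check for SYN flood / conntrack messages
        dmesg | egrep -i 'syn|conntrack' | tail

    If the listen-queue overflow counters climb during the test, the accept backlog is the usual suspect.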


  • Tuning (and understanding) table_cache in MySQL

    - by jotango
    Hello, I ran the excellent MySQL performance tuning script and started to work through the suggestions. One I ran into was:

        TABLE CACHE
        Current table_cache value = 4096 tables
        You have a total of 1073 tables. You have 3900 open tables.
        Current table_cache hit rate is 2%, while 95% of your table cache is in use.
        You should probably increase your table_cache

    I started to read up on table_cache but found the MySQL documentation quite lacking. It does say to increase table_cache "if you have the memory". Unfortunately, the variable is defined only as "The number of open tables for all threads." How will the memory used by MySQL change if I increase this variable? What is a good value to set it to?
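
    For reference, the cache behaviour can be watched while you tune; a sketch (on MySQL 5.1.3+ the variable is named table_open_cache instead):

        # tables open now vs. how many had to be (re)opened in total;
        # a fast-growing Opened_tables means the cache is too small
        mysql -e "SHOW GLOBAL STATUS LIKE 'Open%tables';"

        # try a value at runtime, then persist it under [mysqld] in my.cnf
        mysql -e "SET GLOBAL table_cache = 8192;"

    Each cache slot mostly costs file descriptors and table-definition structures rather than large buffers, so in practice open_files_limit is usually the ceiling to watch rather than RAM.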


  • HAProxy access list using path_dir having issues with Firefox

    - by user11243
    I'm trying to route all requests containing a path directory of /socket.io/ to a separate port with HAProxy. Here is my config file:

        global
            maxconn 4096 # Total Max Connections. This is dependent on ulimit
            nbproc 2

        defaults
            mode http

        frontend all 0.0.0.0:80
            timeout client 86400000
            default_backend web_servers
            acl is_stream path_dir socket.io
            use_backend stream_servers if is_stream

        backend web_servers
            balance roundrobin
            option forwardfor # This sets X-Forwarded-For
            timeout server 30000
            timeout connect 4000
            server web1 127.0.0.1:4000 weight 1 maxconn 1024 check

        backend stream_servers
            balance roundrobin
            option forwardfor # This sets X-Forwarded-For
            timeout queue 5000
            timeout server 86400000
            timeout connect 86400000
            server stream1 127.0.0.1:5100 weight 1 maxconn 1024 check

    URL paths containing /socket.io/ get correctly directed to port 5100 in Chrome and Safari, but not in Firefox. I'm running HAProxy locally on my Mac for development; I'm not sure if that has anything to do with it. I'm using HAProxy 1.4.8 and Firefox 3.6.15. I've tried clearing the cache in Firefox and it didn't help, so I'm thinking there's something wrong with the way HAProxy parses the Firefox request headers.
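
    One way to take the browser out of the equation is to replay the request from the command line; a rough sketch (the User-Agent string is just an example):

        # request as a generic client
        curl -v http://localhost/socket.io/1/ -o /dev/null

        # same request, advertising itself as Firefox 3.6
        curl -v -A "Mozilla/5.0 (Macintosh; rv:1.9.2.15) Gecko/20110303 Firefox/3.6.15" \
             http://localhost/socket.io/1/ -o /dev/null

    If both land on port 5100, the ACL itself is fine and the difference lies in what Firefox actually sends. One thing worth knowing about HAProxy 1.4: by default it only evaluates content ACLs on the first request of a keep-alive connection, so a /socket.io/ request reusing an existing connection to web_servers stays there; "option httpclose" or "option http-server-close" in the defaults section forces per-request routing.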


  • A list of 'best practices' for extending the life between charges of a notebook battery.

    - by Tim Visher
    Hello Everyone, I'd like to compile a list of best practices for getting the most out of a single charge of a typical notebook battery (be it Li-Ion or Li-Poly). Sources would be great as well. I've heard, for instance, that the best things to do to improve battery performance (not the total lifetime of the battery, just single-charge performance) are, in descending order of effectiveness:

        Turn your display brightness all the way down.
        Turn off WiFi.
        Turn off Bluetooth.
        Spin down disks when they're not in use.
        etc.

    I'd like to get sources together for these and other tips for extending life between charges for any battery on any notebook (as these really are all about demand management rather than lifetime extension). Thanks!


  • 1080 HD video display - Windows 7

    - by blasteralfred
    I am running Windows 7 on my Dell Studio 1555 laptop, which has an ATI Mobility Radeon HD 4570 graphics card. When I try to watch a video, I see only a disturbed texture. I can hear the audio with no problem, and I can see video frames peeking through sometimes, but playback is far too slow. I have DirectX (directx_feb2010_redist) and the Win7 codecs installed. The video details are:

        Name: video.mp4
        Frame width: 1920
        Frame height: 1080
        Frame rate: 29 fps
        Data rate: 7958 kbps
        Total bitrate: 8052 kbps

    Just comment if you want more info.


  • Comparing, merging, calculating columns of data in Excel

    - by hickster
    I would like to create a formula that a) compares four columns of data (see below):

        Sep              Oct
        name     units   name     units
        apple    2       apple    3
        pear     3       pear     7
        orange   4       banana   6
        banana   3       toffee   5

    then b) merges the two "name" columns into one column, dropping any duplicates but still retaining the two unit columns (for the months Sep and Oct):

        name     Sep units   Oct units
        apple    2           3
        pear     3           7
        orange   4           0
        banana   3           6
        toffee   0           6

    then c) creates a "difference" column that compares "Sep units" against "Oct units":

        name     Sep units   Oct units   difference
        apple    2           3           1
        pear     3           7           4
        orange   4           0           -4
        banana   3           6           3
        toffee   0           6           6
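
    A sketch of one way to do this with built-in functions, assuming the Sep data sits in columns A:B, the Oct data in C:D, and the merged result is built in F:I (all cell references here are assumptions):

        F:  merged name list - copy both name columns into F,
            then use Data > Remove Duplicates
        G2: =SUMIF($A:$A, $F2, $B:$B)     Sep units; returns 0 if the name is absent
        H2: =SUMIF($C:$C, $F2, $D:$D)     Oct units; returns 0 if the name is absent
        I2: =H2-G2                        difference

    Copy G2:I2 down the length of the merged list; SUMIF returning 0 for missing names gives the zero rows in tables b) and c) for free.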


  • Full Uninstall winvnc.exe

    - by thenickperson
    This computer (Windows 7) has not been able to run Windows Aero since I installed some VNC software, UltraVNC and TightVNC. I want to get Aero back, and Microsoft Fix It tells me that it can't be turned on because of mirror drivers (this edition includes Aero, and I had it working well before), which I managed to learn can come from winvnc.exe, which is installed by both programs. I would like to uninstall winvnc.exe and its service from this computer. I messed with the command line a little (I know, I was being careful), but I couldn't get anywhere; I'm a total beginner with the command line. Can someone please help me remove the exe file and the service, so the mirror drivers are disabled and I can use Aero again? I don't need the VNC servers (I couldn't get them to work anyway), but if keeping the clients is possible, that would be nice.
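
    A rough sketch from an elevated command prompt (the service names are assumptions - UltraVNC typically registers uvnc_service and older TightVNC registers winvnc, so check the query output first):

        REM find any VNC-related services actually installed
        sc query state= all | findstr /i "vnc"

        REM stop and remove them (adjust names to what the query showed)
        net stop uvnc_service
        sc delete uvnc_service

    The mirror driver itself is separate from the service: it typically shows up in Device Manager under Display adapters (UltraVNC's is usually listed as a "video hook" driver) and can be uninstalled from there, or by re-running the UltraVNC uninstaller with the driver option ticked.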


  • Windows Server Backup (2008 R2) - what is generating all the change data?

    - by bobjandal
    We have a small, relatively idle Windows Server 2008 R2 installation that does basic file sharing and Exchange for about 10 not very active users. When running a Windows Server backup, the daily incremental data is about 20GB. This is not coming from users' shared files, nor from changes in their mailbox sizes. The total size of the installation is 249GB, which is mostly old files. Where is all this data coming from, and how can I reduce it? Online backup of the VHD file produced by the backup is taking a while because of this daily change. Is there some way I can at least see what files are changing and contributing to this data? Options I can think of but am not sure about:

        1) Pagefile churning - although the backup does not include the pagefile, perhaps the changed blocks left behind are included?
        2) Logs or something? But the installation size stays the same every day.
        3) Should I zero free space using sdelete before backing up, perhaps?
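
    To see which files actually changed in the last day, a quick PowerShell sweep works; a sketch (adjust the root path, and expect access-denied noise on system folders):

        # 50 largest files modified in the last 24 hours
        Get-ChildItem C:\ -Recurse -Force -ErrorAction SilentlyContinue |
            Where-Object { -not $_.PSIsContainer -and $_.LastWriteTime -gt (Get-Date).AddDays(-1) } |
            Sort-Object Length -Descending |
            Select-Object -First 50 FullName, @{n='MB';e={[math]::Round($_.Length/1MB,1)}}, LastWriteTime

    Note this only catches files whose timestamps changed; block-level churn inside large files that stay the same size (Exchange databases and their logs are the usual culprit on a box like this) shows up in the backup without moving any directory-level numbers.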


  • Deleting MySQL rows causes lock table error

    - by Dave L
    I have a couple million rows to delete, but they can't be deleted at once without this error:

        ERROR 1206 (HY000): The total number of locks exceeds the lock table size

    So I wrote a script to delete 100,000 rows, 10,000 at a time. It ran once, but when I run it a second time I get the error on the first attempt to delete 10,000. The way I'm deleting the 10,000 rows is a delete statement that matches all 2 million rows but with a LIMIT clause so that it affects only 10,000. I've tried adding an "unlock tables;" statement to the script before the first delete, but that doesn't help; I still get the lock table error on the first delete. Any ideas how I can do this? Is there a way I can tell it NOT to lock records? I can make sure nothing else is accessing the table.
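
    For InnoDB this error means the lock memory, which lives inside the buffer pool, ran out - so the usual fixes are a larger innodb_buffer_pool_size (requires a restart on older versions) or smaller batches committed individually. A sketch of a batched loop (database, table, and WHERE condition are placeholders):

        # keep deleting in small committed batches until nothing matches
        while :; do
            n=$(mysql -N -e "DELETE FROM mydb.mytable WHERE created < '2010-01-01' LIMIT 5000; SELECT ROW_COUNT();")
            [ "$n" -lt 5000 ] && break
            sleep 1   # give other sessions a chance between batches
        done

    Each mysql invocation is its own autocommit transaction, so locks are released between batches instead of accumulating across the whole delete.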


  • Varnish doesn't seem to be caching

    - by Charlie Somerville
    I've set up a Varnish cache to sit in front of a file server, but it seems to be endlessly re-downloading data from the file server. There's about 100GB of data in total, but so far Varnish has downloaded 800GB from the file server. I'm using the default VCL file that comes with Varnish, and the response headers for files served by the file server are similar to the following:

        HTTP/1.1 200 OK
        Cache-Control: max-age=290304000, public
        Content-Type: image/jpeg
        Expires: Wed, 29 Dec 2010 21:38:33 GMT
        Server: Microsoft-IIS/7.0
        E-Tag: "8b4723296ab697530768f18b1378b269"
        Content-Disposition: inline; filename=image046.jpg;
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Thu, 23 Dec 2010 05:38:33 GMT
        Content-Length: 100592

    I'm starting varnishd with the following options:

        varnish/sbin/varnishd -a 0.0.0.0:80 \
            -f varnish/etc/varnish/default.vcl \
            -s file,varnish/var/lib/varnish/varnish_storage.bin,100G
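
    To see whether requests are being served from cache at all, the bundled tools give a quick read; a sketch (tag names per Varnish 2.x):

        # cumulative hit/miss counters
        varnishstat -1 | egrep 'cache_(hit|miss)'

        # watch which URLs Varnish fetches from the backend in real time
        varnishlog -b -i TxURL

    If the same URLs keep reappearing in the backend log, objects are being evicted or never inserted. Also note that -s file storage is not persistent across varnishd restarts, so every restart starts from an empty cache and re-fetches everything.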


  • Is there a way for Windows 7 to show remaining disk space in the status bar?

    - by Matt Thompson
    This is really driving me nuts. I do a lot of moving media files to and from USB drives, and I am constantly looking at the status bar to see how much space remains on a drive. It's quick, and doesn't involve any clicking. At least, that's what I used to do in Windows XP. Is there a way to get the status bar in Windows 7 to behave the same way? I saw in a Wikipedia article that some features have been removed from Windows 7, including these two that affect me the most:

        The size of any selected item and free disk space are not shown on the status bar.
        When no items are selected in a folder, neither the details pane nor the status bar show the total size of files in the folder.

    Are there any plug-ins or registry tweaks that can restore this functionality? If not, what is the quickest way to get the remaining space on a drive without having to click on something and leave the directory I am working in?


  • MySQL too many connections

    - by Webnet
    On my server I have 7 databases. The server has 512 MB of RAM, which I'm getting upgraded to 2GB this evening, and a single 2.4 GHz processor. I've gotten an error saying the connection limit was exceeded. With the RAM increase, is it OK to increase the number of connections? Currently it's set to 200, but a single page may connect to 3-4 databases, considering JOINs and things. We've set up so many databases for mere organization. We have a total of about 250-300 tables across all of the databases. Any advice would be appreciated :)
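
    For reference, the limit and its high-water mark can be checked and changed at runtime; a sketch:

        # how close you actually get to the limit
        mysql -e "SHOW GLOBAL STATUS LIKE 'Max_used_connections';"

        # raise the limit live, then persist it in my.cnf under [mysqld]
        mysql -e "SET GLOBAL max_connections = 300;"

    The memory cost of more connections is roughly the per-thread buffers (sort, join, and read buffers plus the thread stack) multiplied by the number of *concurrent* connections, so the safe ceiling scales with RAM rather than with the configured limit itself.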


  • How to calculate running totals of subsets of data in a table

    - by John
    I have 4 columns: Name, Week, Batch, and Units Produced (columns A, B, C, D). In column E, I need to keep running totals based on name and week. When the week changes for the same person, restart the total.

        Fred, 12, 4001, 129.0     Answer in E: 129.0
        Fred, 12, 4012, 234.0     Answer in E: 363.0
        Fred, 13, 4023, 12.0      Answer in E: 12.0
        John, 12, 4003, 420.0     Answer in E: 420.0
        John, 13, 4021, 1200.0    Answer in E: 1200.0
        John, 13, 4029, 120.0     Answer in E: 1320.0

    I need to be able to copy the formula to over 1000 rows.
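
    A sketch of one formula that does this, assuming the data starts in row 2 with headers in row 1 (SUMIFS requires Excel 2007 or later):

        E2: =SUMIFS($D$2:D2, $A$2:A2, A2, $B$2:B2, B2)

    Copied down, the expanding ranges ($D$2:D2 etc.) grow with each row, so each cell sums all units so far for the same name and week; a new week for the same person matches nothing above it and restarts at that row's own value.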


  • Disk space mismatch on OS X Server (Leopard)

    - by John Gardeniers
    My Nagios system sent me an alert to inform me that disk space on one of the drives on our OS X server is very low. When I run df /Volumes/Apps/ I get:

        /dev/disk0s3   117209520   114932472   2277048   99%   /Volumes/Apps

    When I run du -c /Volumes/Apps it reports:

        11489944 total

    Why might there be such a vast difference? Even more importantly, how do I find the problem, and what can I do about it? I'm essentially just a Windows admin, so I am well out of my comfort zone here. I use a Mac, but I'm not a Mac admin in any real sense of the word.
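
    Two common causes of this mismatch are files du can't read (run it as root) and files that were deleted while a process still holds them open - df counts those, du can't see them. A quick sketch:

        # per-directory usage, as root so nothing is skipped
        sudo du -sh /Volumes/Apps/*

        # open-but-unlinked files still consuming space on the volume
        sudo lsof +L1 | grep /Volumes/Apps

    If lsof shows large unlinked files, restarting the owning process releases the space.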


  • Shrink a mounted LVM partition

    - by javanix
    I fear I already know the answer to this question, but here goes. I need to carve out a new partition on a running system. /var is mounted from an LVM volume (hdd1_vg-var) and has only 3% used disk space. / is mounted separately (hdd1_vg-root) and has about 80% used disk space.

        Filesystem              Size  Used  Avail  Use%  Mounted on
        /dev/**/hdd1_vg-root    2.0G  1.4G   481M   75%  /
        /dev/**/hdd1_vg-var      33G  699M    31G    3%  /var

    Unfortunately I don't have any free extents to grow this partition organically - vgdisplay shows:

        Total PE           10000
        Alloc PE / Size    10000 / 39.06 GB
        Free  PE / Size    0 / 0

    So, seeing that I have all this free disk space on /var, can I shrink /var without unmounting it, or is this just a pipe dream? I am really hoping to be able to do this work on a running system - unmounting would of course not be difficult, but it would interfere with system functionality.
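
    For the record, ext2/3/4 filesystems can only be grown online; shrinking requires the filesystem to be unmounted. A sketch of the offline sequence, assuming ext3 and a 20G target (shrink the filesystem below the target first, reduce the LV, then let resize2fs grow the filesystem to fill it exactly):

        umount /var
        e2fsck -f /dev/mapper/hdd1_vg-var
        resize2fs /dev/mapper/hdd1_vg-var 18G
        lvreduce -L 20G /dev/hdd1_vg/var
        resize2fs /dev/mapper/hdd1_vg-var
        mount /var

    The freed extents then show up in vgdisplay and can back the new partition via lvcreate.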


  • How do I make YouTube videos fill up the entire screen when using dual monitors?

    - by Jephir
    I am using a dual monitor setup on Ubuntu 9.10 with the TwinView configuration in NVIDIA X Server Settings. My total resolution is 2960x1050 pixels, and my individual monitors are 1680x1050 (primary) and 1280x1024 (secondary). When going into fullscreen mode on any video on YouTube, I only see a cropped version of the video on my primary display. This does not occur on any other video sharing website - they properly fill the entire screen on my primary monitor. To my knowledge this problem only happens on YouTube.


  • Virtual machine files on a ramdisk don't run faster than on a physical disk

    - by Landy
    I installed a total of 36GB of memory (4x8GB + 2x2GB) in the host (Windows 7), used ImDisk to create a 32GB ramdisk, and formatted it with NTFS. Then I copied a virtual machine folder (in VMware Workstation format, including the vmx, vmdk, etc.) to the newly created ramdisk and tried to power it on in VMware Workstation. What surprised me is that the performance is no better than before: it takes almost the same time to power on the Windows 7 VM. I checked the Resource Monitor in the Windows 7 host, and the CPU, disk, and network statistics are all normal. Memory reported 3000+ hard faults/sec while the guest OS booted, then dropped to 0 after the guest powered on. Any idea about this issue? I had thought the performance of a ramdisk would be better than a physical disk in this case. Am I wrong? Thanks.
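
    One sanity check is to confirm the ramdisk itself is fast, so the question becomes purely about where the boot time goes; a sketch (R: is an assumed drive letter for the ImDisk volume):

        winsat disk -drive r

    If the ramdisk benchmarks far above the physical disk but boot time is unchanged, the boot is not disk-bound: the host's file cache was likely already absorbing most reads of the vmdk, and the remaining time is CPU work during guest boot, so moving the files to RAM removes a bottleneck that wasn't there.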


  • TTL and traceroute showing different values for the same domain

    - by Cray XT3
    Why am I getting two different outputs for tracert and ping? The ping result suggests a total of 20 hops, while tracert shows 8. The default TTL value on my Linux machine is 64, and the ICMP echo reply arrives with a TTL value of 44; 64 - 44 = 20, but tracert shows only 8 hops. What can be the reason? If tracert is implemented using TTL, why am I getting different values for the same domain, no matter how many times I try? For Google and Google services, the TTL value and tracert agree, but for other domains they differ.
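
    Worth noting when comparing the two: the TTL in an echo reply is set by the remote host (its own initial TTL, commonly 64, 128, or 255, minus the hops on the return path), so 64 - 44 only measures the return path if the remote also starts at 64, and forward and return paths need not match. Linux traceroute also defaults to UDP probes while ping uses ICMP, and routers can treat the two differently; forcing ICMP makes the comparison fairer. A sketch (example.com is a placeholder):

        # ICMP-based traceroute, same protocol as ping (may need root)
        traceroute -I example.com

        # compare against the reply TTL
        ping -c 3 example.com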


  • Removing DS_Store files and variants?

    - by Ron Gejman
    I am running an Ubuntu 10.04.1 LTS server. Frequently I open files on it over AFP from my Mac. Inevitably this creates .DS_Store files on the server (although for some reason they are named :2eDS_Store). However, it also creates variants of DS_Store files, and these variants are often named similarly to other files in that directory. E.g.:

        ~$ ls
        total 60K
        -rw-r--r--  1 tarakhovsky  16K 2010-11-30 18:28 :2eDS_Store
        drwx--S---  4 tarakhovsky 4.0K 2010-11-08 13:58 :2eTemporaryItems/
        lrwxrwxrwx  1 tarakhovsky   15 2010-10-19 17:44 bigdisk -> /media/bigdisk//
        ...
        drwxr-xr-x  3 tarakhovsky 4.0K 2010-11-03 18:24 Temporary Items/
        drwxr-xr-x  3 tarakhovsky 4.0K 2010-11-30 01:34 tmp/
        ...

    I've disabled creation of DS_Store files using:

        defaults write com.apple.desktopservices DSDontWriteNetworkStores true

    so hopefully this won't continue to occur - but I really want to get rid of all the existing variants of DS_Store files already on the server. Any ideas as to why these variants are being created and how I can get rid of them all?
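
    The :2e prefix is how netatalk hex-encodes a leading dot, so these are ordinary Mac dot-files (.DS_Store, .TemporaryItems, and the AppleDouble ._* companions, which encode as :2e_*) under escaped names - that is why the "variants" mirror other filenames in the directory. A cleanup sketch (the share path is an assumption; run the preview before adding -delete):

        # preview everything that looks like an encoded Mac metadata file
        find /srv/share \( -name ':2eDS_Store' -o -name ':2e_*' -o -name '.DS_Store' \) -print

        # once the list looks right, delete
        find /srv/share \( -name ':2eDS_Store' -o -name ':2e_*' -o -name '.DS_Store' \) -delete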


  • Twitter 2 for Android crashes every time I try uploading multiple photos [closed]

    - by Hazz
    Hello, I'm using the new Twitter 2 on Android 2.1. Whenever I hit the button which enables me to upload multiple photos in a single tweet, I always get the error "The application Camera (process com.sonyericsson.camera) has stopped unexpectedly. Please try again." However, uploading a single photo using the camera button in Twitter works with no problem. My phone is a Sony Ericsson X10 Mini Pro. I tried signing out and back in; same result. Anything I can do to fix this? This is the log info I got using Log Collector:

        02-23 15:05:57.328 I/ActivityManager( 1240): Starting activity: Intent { act=com.twitter.android.post.status cmp=com.twitter.android/.PostActivity }
        02-23 15:05:57.338 D/PhoneWindow(15095): couldn't save which view has focus because the focused view com.android.internal.policy.impl.PhoneWindow$DecorView@45726938 has no id.
        02-23 15:05:57.688 I/ActivityManager( 1240): Displayed activity com.twitter.android/.PostActivity: 340 ms (total 340 ms)
        02-23 15:05:59.018 I/ActivityManager( 1240): Starting activity: Intent { act=android.intent.action.PICK typ=vnd.android.cursor.dir/image cmp=com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity }
        02-23 15:05:59.038 I/ActivityManager( 1240): Start proc com.sonyericsson.camera for activity com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity: pid=15113 uid=10057 gids={1006, 1015, 3003}
        02-23 15:05:59.128 I/dalvikvm(15113): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=38)
        02-23 15:05:59.158 I/dalvikvm(15113): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=50)
        02-23 15:05:59.448 I/ActivityManager( 1240): Displayed activity com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity: 423 ms (total 423 ms)
        02-23 15:05:59.458 W/dalvikvm(15113): threadid=15: thread exiting with uncaught exception (group=0x4001e160)
        02-23 15:05:59.458 E/AndroidRuntime(15113): Uncaught handler: thread AsyncTask #1 exiting due to uncaught exception
        02-23 15:05:59.468 E/AndroidRuntime(15113): java.lang.RuntimeException: An error occured while executing doInBackground()
        02-23 15:05:59.468 E/AndroidRuntime(15113):     at android.os.AsyncTask$3.done(AsyncTask.java:200)
        02-23 15:05:59.468 E/AndroidRuntime(15113):     at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:273)
        02-23 15:05:59.468 E/AndroidRuntime(15113):     at java.util.concurrent.FutureTask.setException(FutureTask.java:124)
        02-23 15:05:59.468 E/AndroidRuntime(15113):     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:307)
        02-23 15:05:59.468 E/AndroidRuntime(15113):     at java.util.concurrent.FutureTask.run(FutureTask.java:137)
        02-23 15:05:59.468 E/AndroidRuntime(15113):     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068)
        02-23 15:05:59.468 E/AndroidRuntime(15113):     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561)
        02-23 15:05:59.468 E/AndroidRuntime(15113):     at java.lang.Thread.run(Thread.java:1096)
        02-23 15:05:59.468 E/AndroidRuntime(15113): Caused by: java.lang.IllegalArgumentException: Unsupported MIME type.
        02-23 15:05:59.468 E/AndroidRuntime(15113):     at com.sonyericsson.album.grid.GridActivity$AlbumTask.doInBackground(GridActivity.java:202)
        02-23 15:05:59.468 E/AndroidRuntime(15113):     at com.sonyericsson.album.grid.GridActivity$AlbumTask.doInBackground(GridActivity.java:124)
        02-23 15:05:59.468 E/AndroidRuntime(15113):     at android.os.AsyncTask$2.call(AsyncTask.java:185)
        02-23 15:05:59.468 E/AndroidRuntime(15113):     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305)
        02-23 15:05:59.468 E/AndroidRuntime(15113):     ... 4 more
        02-23 15:05:59.628 E/SemcCheckin(15113): Get crash dump level : java.io.FileNotFoundException: /data/semc-checkin/crashdump
        02-23 15:05:59.628 W/ActivityManager( 1240): Unable to start service Intent { act=com.sonyericsson.android.jcrashcatcher.action.BUGREPORT_AUTO cmp=com.sonyericsson.android.jcrashcatcher/.JCrashCatcherService (has extras) }: not found
        02-23 15:05:59.648 I/Process ( 1240): Sending signal. PID: 15113 SIG: 3
        02-23 15:05:59.648 I/dalvikvm(15113): threadid=7: reacting to signal 3
        02-23 15:05:59.778 I/dalvikvm(15113): Wrote stack trace to '/data/anr/traces.txt'
        02-23 15:06:00.388 E/SemcCheckin( 1673): Get Crash Level : java.io.FileNotFoundException: /data/semc-checkin/crashdump
        02-23 15:06:01.708 I/DumpStateReceiver( 1240): Added state dump to 1 crashes
        02-23 15:06:02.008 D/iddd-events( 1117): Registering event com.sonyericsson.idd.probe.android.devicemonitor::ApplicationCrash with 4314 bytes payload.
        02-23 15:06:06.968 D/dalvikvm( 1673): GC freed 661 objects / 126704 bytes in 124ms
        02-23 15:06:11.928 D/dalvikvm( 1379): GC freed 19753 objects / 858832 bytes in 84ms
        02-23 15:06:13.038 I/Process (15113): Sending signal. PID: 15113 SIG: 9
        02-23 15:06:13.048 I/WindowManager( 1240): WIN DEATH: Window{4596ecc0 com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity paused=false}
        02-23 15:06:13.048 I/ActivityManager( 1240): Process com.sonyericsson.camera (pid 15113) has died.
        02-23 15:06:13.048 I/WindowManager( 1240): WIN DEATH: Window{459db5e8 com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity paused=false}
        02-23 15:06:13.078 I/UsageStats( 1240): Unexpected resume of com.twitter.android while already resumed in com.sonyericsson.camera
        02-23 15:06:13.098 W/InputManagerService( 1240): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@456e7168
        02-23 15:06:21.278 D/dalvikvm( 1745): GC freed 2032 objects / 410848 bytes in 60ms


  • Linux file permissions seem right but I can't write to a directory

    - by CaseyB
    I believe that I have the permissions set correctly, but I can't write to a directory. Here's my problem:

        cborders@Kraken:/var/www$ ls -la
        total 12
        drwxrwxr-x  2 webz webz 4096 2011-12-30 14:58 ./
        drwxr-xr-x 13 root root 4096 2011-12-30 14:58 ../
        -rw-rw-r--  1 webz webz  177 2011-12-30 14:58 index.html

        cborders@Kraken:/var/www$ id cborders
        uid=1000(cborders) gid=1000(cborders) groups=1000(cborders),4(adm),20(dialout),24(cdrom),46(plugdev),109(sambashare),113(lpadmin),114(admin),1002(webz)

        cborders@Kraken:/var/www$ mkdir test
        mkdir: cannot create directory `test': Permission denied

    The owner of the directory is a user called webz, and the permissions allow the user and group rwx access to it. I am in the webz group, but I still can't make any changes. What am I doing wrong here?
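
    One classic gotcha worth ruling out: group membership is read at login, so a group added since the current session started won't be in effect, even though "id cborders" (which reads the user database) lists it. A quick check:

        id            # no username: shows the groups of the *current process*
        newgrp webz   # or log out and back in, then retry the mkdir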


  • perfmon reporting higher IOPs than possible?

    - by BlueToast
    We created a monitoring report for IOPS using the Disk Reads/sec and Disk Writes/sec performance counters on four servers (physical boxes, no virtualization) that each have 4x 15k 146GB SAS drives in RAID10, set to check and record data every 1 second, and logged for 24 hours before stopping the reports. These are the results we got:

        Server1:  Maximum disk reads/sec: 4249.437   Maximum disk writes/sec: 4178.946
        Server2:  Maximum disk reads/sec: 2550.140   Maximum disk writes/sec: 5177.821
        Server3:  Maximum disk reads/sec: 1903.300   Maximum disk writes/sec: 5299.036
        Server4:  Maximum disk reads/sec: 8453.572   Maximum disk writes/sec: 11584.653

    The average disk reads and writes per second were generally low - for one particular server, around 33 writes/sec on average - but when monitoring in real time it would often spike into the hundreds and sometimes the thousands. Could someone explain why these numbers are significantly higher than theoretical calculations assuming each drive can do 180 IOPS? Additional details (RAID card): HP Smart Array P410i, total cache size 1GB, write cache disabled, array accelerator cache ratio 25% read / 75% write.
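
    For anyone reproducing the measurement, roughly the same collection can be scripted; a sketch:

        typeperf "\PhysicalDisk(_Total)\Disk Reads/sec" "\PhysicalDisk(_Total)\Disk Writes/sec" -si 1 -sc 86400 -f CSV -o iops.csv

    Note these counters measure what the OS hands to the disk stack, not what the spindles do: reads satisfied from the controller's cache and writes coalesced by the filesystem or controller both count, so short bursts can legitimately report far above the spindle-bound 4 x 180 IOPS ceiling.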


  • Determining server specs for a Rails app with a MySQL database (on AWS)

    - by Rogier
    I developed an intranet application with Rails (3.2) for one of my customers. There will be around 30-40 employees working with it. The backend is MySQL (5). What would be the best way to determine the server specs needed? Given:

        max. load will be roughly 2400 (40*60) HTTP requests (mixed GET/POST) per hour
        15% of these calls are JSON calls (iOS)
        an avg request will make 5-10 database calls
        500-800 SQL INSERTs per day
        webpages are fairly simple (no images, just text)
        an avg webpage is 15 requests (css/js/etc) and 35-45 KB in total

    More specifically, since they need access from multiple geographical locations, we are thinking of running a Bitnami Ruby stack in the AWS cloud (uptime is important). Any thoughts on an AWS instance size (small/medium) and utilization (light/medium/heavy)? Thanks!
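
    A back-of-envelope from the numbers above (a sketch of the arithmetic, not a benchmark):

        # peak rates implied by the stated load
        echo "scale=2; 2400/3600" | bc       # ~0.7 HTTP requests/sec
        echo "scale=2; 2400*10/3600" | bc    # ~6.7 DB calls/sec at 10 per request

    At well under one request per second and single-digit queries per second, raw CPU is unlikely to be the constraint; the instance choice ends up driven more by RAM for MySQL's buffers and the Rails processes, and by the uptime/redundancy requirement, than by load.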


  • Various problems with software raid1 array built with Samsung 840 Pro SSDs

    - by Andy B
    I am bringing to ServerFault a problem that has been tormenting me for 6+ months. I have a CentOS 6 (64-bit) server with an md software RAID1 array of 2 x Samsung 840 Pro SSDs (512GB). Problems: serious write speed problems:

        root [~]# time dd if=arch.tar.gz of=test4 bs=2M oflag=sync
        146+1 records in
        146+1 records out
        307191761 bytes (307 MB) copied, 23.6788 s, 13.0 MB/s

        real    0m23.680s
        user    0m0.000s
        sys     0m0.932s

    When doing the above (or any other larger copy) the load spikes to unbelievable values (even over 100), going up from ~1. When doing the above I've also noticed very weird iostat results:

        Device: rrqm/s  wrqm/s   r/s     w/s    rsec/s   wsec/s  avgrq-sz avgqu-sz  await   svctm  %util
        sda       0.00 1589.50  0.00   54.00     0.00  13148.00   243.48     0.60   11.17   0.46   2.50
        sdb       0.00 1627.50  0.00   16.50     0.00   9524.00   577.21   144.25 1439.33  60.61 100.00
        md1       0.00    0.00  0.00    0.00     0.00      0.00     0.00     0.00    0.00   0.00   0.00
        md2       0.00    0.00  0.00 1602.00     0.00  12816.00     8.00     0.00    0.00   0.00   0.00
        md0       0.00    0.00  0.00    0.00     0.00      0.00     0.00     0.00    0.00   0.00   0.00

    And it keeps it this way until it actually writes the file to the device (out of swap/cache/memory). The problem is that the second SSD in the array has svctm and await roughly 100 times larger than the first. For some reason the wear is also different between the 2 members of the array:

        root [~]# smartctl --attributes /dev/sda | grep -i wear
        177 Wear_Leveling_Count 0x0013 094% 094 000 Pre-fail Always - 180
        root [~]# smartctl --attributes /dev/sdb | grep -i wear
        177 Wear_Leveling_Count 0x0013 070% 070 000 Pre-fail Always - 1005

    The first SSD has a wear of 6% while the second SSD has a wear of 30%!! It's like the second SSD in the array works at least 5 times as hard as the first one, as shown by the first iteration of iostat (the averages since reboot):

        Device: rrqm/s  wrqm/s    r/s     w/s    rsec/s   wsec/s avgrq-sz avgqu-sz await svctm %util
        sda      10.44   51.06  790.39  125.41  8803.98  1633.11    11.40     0.33  0.37  0.06  5.64
        sdb       9.53   58.35  322.37  118.11  4835.59  1633.11    14.69     0.33  0.76  0.29 12.97
        md1       0.00    0.00    1.88    1.33    15.07    10.68     8.00     0.00  0.00  0.00  0.00
        md2       0.00    0.00 1109.02  173.12 10881.59  1620.39     9.75     0.00  0.00  0.00  0.00
        md0       0.00    0.00    0.41    0.01     3.10     0.02     7.42     0.00  0.00  0.00  0.00

    What I've tried:

        I've updated the firmware to DXM05B0Q (following reports of dramatic improvements for 840 Pros after this update).
        I have looked for "hard resetting link" in dmesg to check for cable/backplane issues, but found nothing.
        I have checked the alignment and I believe they are aligned correctly (1MB boundary, listing below).
        I have checked /proc/mdstat and the array is optimal (second listing below).

        root [~]# fdisk -ul /dev/sda

        Disk /dev/sda: 512.1 GB, 512110190592 bytes
        255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00026d59

        Device Boot      Start        End    Blocks    Id  System
        /dev/sda1         2048    4196351   2097152    fd  Linux raid autodetect
        Partition 1 does not end on cylinder boundary.
        /dev/sda2   *  4196352    4605951    204800    fd  Linux raid autodetect
        Partition 2 does not end on cylinder boundary.
        /dev/sda3      4605952  814106623 404750336    fd  Linux raid autodetect

        root [~]# fdisk -ul /dev/sdb

        Disk /dev/sdb: 512.1 GB, 512110190592 bytes
        255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003dede

        Device Boot      Start        End    Blocks    Id  System
        /dev/sdb1         2048    4196351   2097152    fd  Linux raid autodetect
        Partition 1 does not end on cylinder boundary.
        /dev/sdb2   *  4196352    4605951    204800    fd  Linux raid autodetect
        Partition 2 does not end on cylinder boundary.
        /dev/sdb3      4605952  814106623 404750336    fd  Linux raid autodetect

    /proc/mdstat:

        root # cat /proc/mdstat
        Personalities : [raid1]
        md0 : active raid1 sdb2[1] sda2[0]
              204736 blocks super 1.0 [2/2] [UU]
        md2 : active raid1 sdb3[1] sda3[0]
              404750144 blocks super 1.0 [2/2] [UU]
        md1 : active raid1 sdb1[1] sda1[0]
              2096064 blocks super 1.1 [2/2] [UU]
        unused devices: <none>

    Running a read test with hdparm:

        root [~]# hdparm -t /dev/sda
        /dev/sda: Timing buffered disk reads: 664 MB in 3.00 seconds = 221.33 MB/sec
        root [~]# hdparm -t /dev/sdb
        /dev/sdb: Timing buffered disk reads: 288 MB in 3.01 seconds = 95.77 MB/sec

    But look what happens if I add --direct:

        root [~]# hdparm --direct -t /dev/sda
        /dev/sda: Timing O_DIRECT disk reads: 788 MB in 3.01 seconds = 262.08 MB/sec
        root [~]# hdparm --direct -t /dev/sdb
        /dev/sdb: Timing O_DIRECT disk reads: 534 MB in 3.02 seconds = 176.90 MB/sec

    Both tests increase, but /dev/sdb doubles while /dev/sda increases by maybe 20%. I just don't know what to make of this. As suggested by Mr. Wagner I've done another read test, with dd this time, and it confirms the hdparm test:

        root [/home2]# dd if=/dev/sda of=/dev/null bs=1G count=10
        10+0 records in
        10+0 records out
        10737418240 bytes (11 GB) copied, 38.0855 s, 282 MB/s

        root [/home2]# dd if=/dev/sdb of=/dev/null bs=1G count=10
        10+0 records in
        10+0 records out
        10737418240 bytes (11 GB) copied, 115.24 s, 93.2 MB/s

    So sda is 3 times faster than sdb. Or maybe sdb is doing something else besides what sda does. Is there some way to find out if sdb is doing more than what sda does?

    UPDATE: Again, as suggested by Mr. Wagner, I have swapped the 2 SSDs. And as he thought would happen, the problem moved from sdb to sda. So I guess I'll RMA one of the SSDs. I wonder if the cage might be problematic. What is wrong with this array? Please help!
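
    On the question of whether sdb is receiving different I/O than sda: blktrace can show exactly what each member sees at the block layer; a sketch (requires debugfs mounted):

        # mount debugfs if it isn't already
        mount -t debugfs debugfs /sys/kernel/debug

        # capture 30 seconds of block-layer traffic on both members
        blktrace -d /dev/sda -d /dev/sdb -w 30

        # summarize each trace; the tail holds per-device totals
        blkparse -i sda | tail -40
        blkparse -i sdb | tail -40

    In a healthy RAID1, writes should be near-identical on both members (reads may differ, since md balances them); matching submitted I/O with wildly different service times points at the drive or its slot/cable rather than at md.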


  • iptables: limiting bytes downloaded per IP per day?

    - by Miles
    On a public-facing web server, I'd like to limit the total bytes downloaded per IP address per day. For example, after a visitor has downloaded 100MB, any additional requests would be dropped or rejected for the next 24 hours. Is it possible to accomplish this using iptables alone? The connbytes, connlimit, hashlimit, quota, and recent options all look promising, but the man page plays its cards close to the vest (e.g., "quota - Implements network quotas by decrementing a byte counter with each packet. --quota bytes The quota in bytes."). I would like to avoid using a proxy (like Squid) if possible.
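
    For a known, small set of addresses, the quota match can be strung together per IP; a rough sketch (203.0.113.7 is a placeholder, and the counter only resets when the rules are reloaded, e.g. from a nightly cron job):

        # allow up to 100MB of responses to this client, then drop
        iptables -A OUTPUT -d 203.0.113.7 -m quota --quota 104857600 -j ACCEPT
        iptables -A OUTPUT -d 203.0.113.7 -j DROP

    The catch is that quota keeps one global counter per rule, so arbitrary visitors would each need their own pair of rules; iptables has no built-in per-source byte accounting, which is why per-IP daily caps usually end up in a proxy or in accounting tools built on top of netfilter after all.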

