Search Results

Search found 4 results on 1 page for 'korkman'.

Page 1/1

  • Networked Parallel Port in Linux / KVM / QEMU

    - by korkman
    What I want to use is the "-parallel" tcp or udp option from KVM / QEMU, but I can't seem to find any server for this client. I'm not sharing a printer but a hardware dongle. I checked ser2net, which does provide "/dev/lp0" sharing, but it doesn't seem to work with KVM / QEMU; I suspect KVM / QEMU requires "/dev/parport0". I did rmmod lp, modprobe ppdev, and linked ser2net to parport0, but it didn't work out. Perhaps ser2net is not suited for this. I tried socat as well, and I tried netcat. No success. Does anyone know of a KVM / QEMU compatible parallel port server? Or did any of netcat, socat or ser2net work for you? (A rough sketch of the socat approach appears below this entry.)

    Read the article
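
    A minimal sketch of the approach described above, assuming socat is available on the machine that owns the physical port; the IP address and TCP port are placeholders. Note that /dev/parport0 (ppdev) normally has to be claimed via ioctl before I/O, so plain byte forwarding may be exactly why the socat/netcat attempts fail; this sketch only carries raw data bytes, not status/control lines:

      # on the host with the physical parallel port: expose it over TCP
      # (GOPEN flags may need tuning for the particular dongle)
      socat TCP-LISTEN:5555,reuseaddr,fork GOPEN:/dev/parport0

      # on the KVM / QEMU host: point the guest's parallel port at that socket
      qemu-system-x86_64 ... -parallel tcp:192.168.0.10:5555

    QEMU's -parallel option accepts the same character-device specifications as -serial, so a tcp: redirection target should be accepted; whether the dongle actually works over it depends on how much of the parallel-port protocol it needs.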

  • Limit Linux background flush (dirty pages)

    - by korkman
    Background flushing on Linux happens when either too much written data is pending (adjustable via /proc/sys/vm/dirty_background_ratio) or a timeout for pending writes is reached (/proc/sys/vm/dirty_expire_centisecs). Until another limit is hit (/proc/sys/vm/dirty_ratio), more written data may be cached; beyond that, further writes block. In theory, this should create a background process writing out dirty pages without disturbing other processes. In practice, it disturbs any process doing uncached reading or synchronous writing. Badly. This is because the background flush actually writes at 100% device speed, and any other device request at that time will be delayed (because all queues and write caches along the way are filled). Is there any way to limit the number of requests per second the flushing process performs, or to otherwise effectively prioritize other device I/O? (The relevant sysctls are sketched below this entry.)

    Read the article
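
    A short sketch of the knobs mentioned above plus their byte-based counterparts; the values are illustrative, not recommendations. These sysctls cap how much dirty data can accumulate rather than throttling the flush rate itself, but a smaller backlog tends to shorten the stalls described:

      # inspect the current thresholds
      sysctl vm.dirty_background_ratio vm.dirty_ratio vm.dirty_expire_centisecs

      # switch to absolute limits; setting the *_bytes variants overrides the *_ratio ones
      sysctl -w vm.dirty_background_bytes=$((64 * 1024 * 1024))   # start background flush at 64 MiB dirty
      sysctl -w vm.dirty_bytes=$((256 * 1024 * 1024))             # block writers at 256 MiB dirty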

  • Is current SATA 6 Gb/s equipment simply unreliable?

    - by korkman
    I have a 45-disk array of Seagate Barracuda 3 TB ST3000DM001 drives (yes, these are desktop drives; I'm aware of that) in a Supermicro SC847 JBOD, connected via an LSI 9285. I have found a solution for the problem described below by reducing the link speed and rebooting:

      MegaCli -PhySetLinkSpeed -phy0 2 -a0
      for i in $(seq 48); do MegaCli -PhySetLinkSpeed -phy${i} 2 -a0; done

    The question remains: is this typical for current 6 Gb/s equipment? Is this the sad state of SATA storage? Or is some of my equipment (the SFF-8088 cables come to mind) bad?

    The problem was: while synchronizing the HW RAID-6, disks kept going offline. Fetching SMART values revealed that the disks which went offline no longer increased their powered-on hours; that is, their firmware (CC4C) seems to crash (a sketch of such a check follows this entry). Digging into the matter by switching to software RAID-6, with the disks passed through, I got tons of kernel messages scattered across all disks at 6 Gb/s:

      sd 0:0:9:0: [sdb] Sense Key : No Sense [current]
      Info fld=0x0
      sd 0:0:9:0: [sdb] Add. Sense: No additional sense information

    And finally, when a disk goes offline:

      megasas: [ 5]waiting for 160 commands to complete ...
      megasas: [35]waiting for 159 commands to complete ...
      megasas: [155]waiting for 156 commands to complete ...
      megaraid_sas: pending commands remain after waiting, will reset adapter.

    An ugly controller reset happens here, then minutes later:

      megaraid_sas: Reset successful.
      sd 0:0:28:0: Device offlined - not ready after error recovery
      ...
      sd 0:0:28:0: [sdu] Unhandled error code
      sd 0:0:28:0: [sdu] Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
      sd 0:0:28:0: [sdu] CDB: Read(10): 28 00 23 21 2f 40 00 00 70 00
      sd 0:0:28:0: [sdu] killing request

    With the speed reduced to 3 Gb/s as described above, all problems vanished.

    Read the article
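
    A rough sketch of the SMART check mentioned above, assuming smartmontools is installed and the disks appear directly as /dev/sd* (as in the software-RAID passthrough setup; behind the MegaRAID controller, smartctl would need its -d option instead). A drive whose raw Power_On_Hours value stops increasing between runs is a candidate for the described firmware hang:

      # print the Power_On_Hours raw value for every disk; rerun later and compare
      for d in /dev/sd[a-z] /dev/sd[a-z][a-z]; do
        [ -e "$d" ] || continue
        printf '%s %s\n' "$d" "$(smartctl -A "$d" | awk '/Power_On_Hours/ {print $10}')"
      done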

  • Any experience with SATA SAS Interposer Cards?

    - by korkman
    Driven by the current price difference between SATA and SAS disks on one side, and the potentially bad behaviour of SATA disks in bigger storage arrays on the other, I have found so-called SATA-to-SAS interposer cards. Advertised as "seamlessly adding SAS capabilities to existing SATA disk drives", I wonder if anyone here has experience with these or similar products. The major benefits I can identify are the increased signaling voltage on the cable (if all drives are SAS-connected), the ability to power-cycle the drive, and multipath (if desired). Obviously the SATA drive will still have to be a RAID edition. The question is: do these cards indeed increase the overall reliability of a storage system, or will failing SATA disks cause trouble nevertheless?

    Edit: I'm not asking for hypothetical answers, only actual experience please. I'm well aware that the typical 10k SAS drive is more reliable (and better performing) than a 7200 rpm SATA drive. But how does a nearline SAS drive, which is physically the same disk as its SATA counterpart, compare to the SATA version with an interposer?

    Read the article
