Search Results

Search found 24624 results on 985 pages for 'linux rrt'.

  • Can a 32-bit RHEL4 userland work with a 64-bit kernel?

    - by James
    Is there a way to change an i386 RHEL4 machine to run an amd64 kernel, but ensure that it still builds software into the same i386 binaries? On Debian this seems quite straightforward: just install an amd64 kernel (worst case, build one as described at http://www.debian-administration.org/users/jonesy/weblog/1) and prefix everything with "linux32". Then everything that consults uname -m is unchanged, and I just need to handle the few cases that consult uname -r. What is the Red Hat equivalent? Is the only way a full 64-bit installation on another disk, then chrooting back into the 32-bit system before anyone builds anything? (Even the best examples of that seem to be Debian-based.)

    Background: we make a large system that runs on (a variant of) i386 RHEL4, but some of the larger RHEL build machines now have enough RAM that they might benefit from going 64-bit (for the kernel, and maybe some of the bigger build steps). Our build system doesn't support cross-compilation.
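
    The closest Red Hat counterpart to Debian's linux32 is setarch (the same util-linux codebase that linux32 symlinks to). Whether the stock x86_64 kernel and its dependencies install cleanly onto an i386 RHEL4 userland is an assumption to verify first. A minimal sketch:

        # run builds under a faked i686 personality on an x86_64 kernel
        setarch i686 uname -m     # prints i686 instead of x86_64
        setarch i686 make         # wrap each build step the same way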

  • Blocking HTTPS and P2P Traffic

    - by Genboy
    I have a Debian server running at the gateway level on a LAN. It runs Squid to maintain block lists of websites - e.g. to block social networking on the LAN - and it also uses iptables. I am able to do a lot of things with squid and iptables, but a few things seem difficult to achieve:

    1) If I block facebook through their http url, people can still access https://www.facebook.com, because squid doesn't see https traffic by default. However, if users set the gateway IP address as the proxy in their web browser, then https is also blocked. So I could use iptables to drop all outgoing port-443 traffic, forcing people to set the proxy in their browser in order to browse any HTTPS site. Is there a better solution for this?

    2) As the number of blocked urls grows in squid, I am planning to integrate squidGuard. However, the good squidGuard lists are not free for commercial use. Does anyone know of a good squidGuard list that is free?

    3) Blocking yahoo messenger, gtalk etc. These instant messengers work over many ports, so you need to drop lots of outgoing ports in iptables, and new ports keep being added. And even if your port list is current, people can still use the web version of gtalk etc.

    4) Blocking P2P. I haven't been able to figure out how to do this at all.
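
    A common pattern for point 1 is to reject direct HTTPS from the LAN so browsers fail fast and must use the proxy; a sketch, where eth1 as the LAN interface and 192.168.0.0/24 as the LAN range are assumptions:

        # REJECT rather than DROP so browsers error out instead of hanging
        iptables -A FORWARD -i eth1 -s 192.168.0.0/24 -p tcp --dport 443 -j REJECT

    For point 4, iptables pattern-matching modules such as ipp2p or l7-filter exist, but P2P protocols deliberately evade signatures, so whitelisting the ports you allow tends to hold up better than blacklisting.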

  • Should tripwire be entering /proc?

    - by dsadinoff
    When initializing the database with tripwire --init, it spat out a bunch of errors pertaining to /proc:

        ### Warning: File system error.
        ### Filename: /proc/16982/fd/4
        ### No such file or directory
        ### Continuing...
        ### Warning: File system error.
        ### Filename: /proc/16982/fdinfo/4
        ### No such file or directory
        ### Continuing...
        ### Warning: File system error.
        ### Filename: /proc/16982/task/16982/fd/4
        ### No such file or directory
        ### Continuing...
        ### Warning: File system error.
        ### Filename: /proc/16982/task/16982/fdinfo/4
        ### No such file or directory
        ### Continuing...
        ### Warning: Duplicate object encountered.
        ### /proc/sys/net/ipv6/neigh

    This feels like noise. The twpol.txt file has the following clause:

        #
        # Critical devices
        #
        (
          rulename = "Devices & Kernel information",
          severity = $(SIG_HI),
        )
        {
          /dev   -> $(Device) ;
          /proc  -> $(Device) ;
        }

    Which, if I understand it right, causes tripwire to care deeply about the entire contents of /proc. Shouldn't it just care about the static parts of /proc, like the drivers and such, and not the per-pid entries? Why does it ship like this?
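
    One way to quiet this is to stop scanning /proc recursively and enumerate only its static parts; a sketch of the amended twpol.txt clause (the particular file list is illustrative, not canonical):

        (
          rulename = "Devices & Kernel information",
          severity = $(SIG_HI),
        )
        {
          /dev             -> $(Device) ;
          /proc/cpuinfo    -> $(Device) ;
          /proc/devices    -> $(Device) ;
          /proc/sys        -> $(Device) ;
        }

    After editing the policy, regenerate it with twadmin --create-polfile and re-run tripwire --init.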

  • Help with pure-ftpd

    - by hai
    I set up pure-ftpd on FreeBSD behind a firewall, with passive mode configured to use the port range 50400-50600, and the firewall opens ports 50400-50600 (both in and out). But when I try to connect with an FTP client, the directory listing fails. The client log:

        Status:   Connecting to 210.245.89.95:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [privsep] ----------
        Response: 220-You are user number 1 of 50 allowed.
        Response: 220-Local time is now 13:20. Server port: 21.
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command:  USER bk
        Response: 331 User bk OK. Password required
        Command:  PASS
        Response: 230 OK. Current directory is /
        Command:  SYST
        Response: 215 UNIX Type: L8
        Command:  FEAT
        Response: 211-Extensions supported:
        Response:  EPRT
        Response:  IDLE
        Response:  MDTM
        Response:  SIZE
        Response:  REST STREAM
        Response:  MLST type;size*;sizd*;modify*;UNIX.mode*;UNIX.uid*;UNIX.gid*;unique*;
        Response:  MLSD
        Response:  ESTA
        Response:  PASV
        Response:  EPSV
        Response:  SPSV
        Response:  ESTP
        Response: 211 End.
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/" is your current location
        Command:  TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command:  PASV
        Response: 227 Entering Passive Mode (210,245,88,98,138,1)
        Command:  MLSD
        Error:    Connection timed out
        Error:    Failed to retrieve directory listing
        Status:   Connecting to 210.245.88.98:21...
        Status:   Connection established, waiting for welcome message...

    Help me.
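
    Note what the server advertised: 227 Entering Passive Mode (210,245,88,98,138,1) means port 138*256+1 = 35329 on 210.245.88.98 - outside the 50400-50600 range the firewall permits, and on a different address than the one the client dialed. A sketch of rc.conf flags that pin both, assuming pure-ftpd is started through the FreeBSD rc.d script:

        # -p restricts the passive data-port range; -P forces the advertised IP
        pureftpd_flags="-p 50400:50600 -P 210.245.89.95"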

  • external disk suddenly unmounting

    - by hasen j
    Platform: Ubuntu 9.10
    Disk brand/model: WD My Book

    The external hard disk suddenly unmounts after a while. I suspect it's due to the drive "sleeping" to save power. I don't recall the problem occurring before the upgrade to Karmic. How can this be fixed?
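
    If the drive is spinning down, disabling its standby timer sometimes helps; a sketch assuming the USB bridge passes ATA commands through (many cheap ones do not) and that the disk shows up as /dev/sdb:

        # -S 0 disables the standby (spindown) timeout on the drive
        sudo hdparm -S 0 /dev/sdb

    WD My Book firmware has its own idle timer that may ignore this, in which case periodically touching a file on the mount is the blunt fallback.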

  • Configure host access rights in OpenLDAP

    - by Anonymous Coward
    I've set up an OpenLDAP server to authenticate users on our Ubuntu servers. The authentication works quite well, but I'd like to restrict each user's access to certain servers. I know this can be done through nss_base_something in the client's ldap.conf; however, that requires the group restrictions to be specified on the client. I wonder if the restrictions can be set completely in OpenLDAP. If so, I'd like to know how. Thanks, AC
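
    pam_ldap can enforce this centrally: give each user entry a host attribute listing the servers it may log in to, and each client only needs one static switch. A sketch, where the admin DN, user DN and hostname are hypothetical; note the host attribute normally comes from the account objectClass, which may not combine with your existing classes (some sites add a small auxiliary class instead):

        # on each client, in /etc/ldap.conf:
        #   pam_check_host_attr yes
        # in the directory:
        ldapmodify -x -D "cn=admin,dc=example,dc=com" -W <<'EOF'
        dn: uid=jdoe,ou=people,dc=example,dc=com
        changetype: modify
        add: host
        host: web01.example.com
        EOF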

  • Can I upgrade the kernel to 3.4 on CentOS 6.2? If so, how?

    - by shiva
    I have a server running CentOS 6.2 with kernel version 2.6, and I need to improve my application's performance. Kernel 3.4 has the x32 ABI, which could help, so I want to upgrade to 3.4. Is that possible? I tried downloading the kernel source, compiling and installing it, but I still see the same kernel version. What went wrong? I followed the process described at http://www.tecmint.com/kernel-3-5-released-install-compile-in-redhat-centos-and-fedora/
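
    The usual reason for still seeing the old version after a manual build is that the new kernel never became the default boot entry. A sketch of the standard flow from an unpacked source tree (paths are illustrative):

        cp /boot/config-$(uname -r) .config        # start from the running config
        make oldconfig && make && make modules_install && make install
        grep -E 'title|default' /boot/grub/grub.conf   # new entry present and default?
        reboot
        uname -r                                   # should now report 3.4.x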

  • Parallel processing slower than sequential?

    - by zebediah49
    EDIT: For anyone who stumbles upon this in the future: ImageMagick uses an MP library. It's faster to use available cores if they're around, but if you have parallel jobs, it's unhelpful. Do one of the following:

    - run your jobs serially (with ImageMagick in parallel mode), or
    - set MAGICK_THREAD_LIMIT=1 for your invocation of the ImageMagick binary in question.

    Making ImageMagick use only one thread slows it down by 20-30% in my test cases, but meant I could run one job per core without issues, for a significant net increase in performance.

    Original question: While converting some images using ImageMagick, I noticed a somewhat strange effect: using xargs was significantly slower than a standard for loop. Since xargs limited to a single process should act like a for loop, I tested that and found it to be about the same. Thus, we have this demonstration:

    - Quad core (AMD Athlon X4, 2.6GHz)
    - Working entirely on a tmpfs (16G RAM total; no swap)
    - No other major loads

    Results:

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real    0m3.784s
        user    0m2.240s
        sys     0m0.230s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level
        real    0m9.097s
        user    0m28.020s
        sys     0m0.910s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level
        real    0m9.844s
        user    0m33.200s
        sys     0m1.270s

    Can anyone think of a reason why running two instances of this program takes more than twice as long in real time, and more than ten times as long in processor time, to complete the same task? After that initial hit, more processes do not have as significant an effect. I thought it might have to do with disk seeking, so I did the test entirely in RAM. Could it have something to do with how convert works, such that having more than one copy at once means it cannot use the processor cache as efficiently?

    EDIT: With 1000x 769KB files, performance is as expected. Interesting.

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real    3m37.679s
        user    5m6.980s
        sys     0m6.340s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real    3m37.152s
        user    5m6.140s
        sys     0m6.530s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level
        real    2m7.578s
        user    5m35.410s
        sys     0m6.050s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 4 convert -auto-level
        real    1m36.959s
        user    5m48.900s
        sys     0m6.350s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level
        real    1m36.392s
        user    5m54.840s
        sys     0m5.650s
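
    For reference, the workaround from the EDIT as a one-off invocation (no need to export the variable permanently):

        # one single-threaded convert per job; -P matches the core count
        time for f in *.bmp; do echo "$f" "${f%bmp}png"; done | \
            MAGICK_THREAD_LIMIT=1 xargs -n 2 -P 4 convert -auto-level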

  • Find and Replace String in filenames

    - by shekhar
    I have thousands of files with no specific extensions. What I need to do is search for a string in the filename and replace it with another string, then search for a second string and replace it likewise, and so on - i.e. I have multiple strings to replace with other multiple strings:

        "abc" in a filename is replaced with "def"   (many files may contain "abc")
        "jkl" in a filename is replaced with "srt"   (many files may contain "jkl")
        "pqr" in a filename is replaced with "xyz"   (many files may contain "pqr")

    I am currently using an Excel macro to pull the file names into Excel, preserving the original names in one column and building the desired names in another, and then generating a batch file of lines like:

        rename Path\OriginalName1 NewName1
        rename Path\OriginalName2 NewName2

    The problem with this procedure is that it takes a lot of time, as there are many files, and since I am using Excel 2003 there is a row-count limit as well. I need a batch-like script such as:

        replacestr abc with def
        replacestr pqr with xyz

    in a single directory. Would it be better to do this in a Unix script?
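
    A Unix shell does this in a few lines with no row limits; a sketch using bash parameter expansion, with the pairs from the question:

        # ${f//old/new} replaces every occurrence of old in the name
        for f in *abc*; do mv -- "$f" "${f//abc/def}"; done
        for f in *jkl*; do mv -- "$f" "${f//jkl/srt}"; done
        for f in *pqr*; do mv -- "$f" "${f//pqr/xyz}"; done

    Many Linux systems also ship the perl rename utility, where each pair is a one-liner: rename 's/abc/def/' *abc*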

  • /etc/hosts.deny ignored in Ubuntu 14.04

    - by Matt
    I have Apache2 running on Ubuntu 14.04 LTS. To begin securing network access to the machine, I want to start by blocking everything, then add specific allow statements for the subnets permitted to browse sites hosted in Apache. The server was installed with no packages selected during install; the only packages added afterwards are apache2, php5 (with additional php5 modules), openssh-server and mysql-client. My /etc/hosts.deny contains:

        ALL:ALL

    /etc/hosts.allow has no allow entries at all, so I would expect all network protocols to be denied. The symptom is that I can still browse to sites hosted on the Apache web server despite the deny-all statement, and the system was rebooted after the entry was added. Why is /etc/hosts.deny with ALL:ALL ignored for HTTP browsing to sites on the Apache web server?
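
    This is expected behaviour: /etc/hosts.deny is read by TCP wrappers, which only governs daemons linked against libwrap (or started via tcpd) - and Apache is not among them. A sketch to confirm, and to do the HTTP filtering at the packet filter instead (the allowed subnet is a placeholder):

        ldd /usr/sbin/sshd | grep libwrap       # sshd typically is linked
        ldd /usr/sbin/apache2 | grep libwrap    # apache2 typically is not
        # allow one subnet to reach port 80, drop everyone else:
        sudo iptables -A INPUT -p tcp --dport 80 -s 10.1.2.0/24 -j ACCEPT
        sudo iptables -A INPUT -p tcp --dport 80 -j DROP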

  • Minutely cron, ensuring only a single instance

    - by Christopher Nadeau
    Is there a way to run a script every minute (or 2, or 5, etc.), but only if it isn't already running? We have a set of scripts that need to run every minute. Sometimes they start and finish in a second; other times they go on for 5 minutes. Our current way of avoiding simultaneous executions is setting an is_running flag in each script and exiting if it's still enabled. But this is a little unreliable (i.e., a fatal error would cause the flag to remain enabled even after the script halted). We could write our own little manager, but I'm wondering if there is a more fashionable solution that already exists.
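
    A more fashionable solution does exist: flock(1) from util-linux holds a kernel lock that is released automatically when the process exits for any reason, so a crash can never leave a stale is_running flag behind. A sketch crontab entry (paths are hypothetical):

        # -n: exit at once if the previous run still holds the lock
        * * * * * /usr/bin/flock -n /var/lock/myjob.lock /usr/local/bin/myjob.sh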

  • How do I set up Apache virtual hosts on Ubuntu 12.04?

    - by YumYumYum
    According to the docs (https://help.ubuntu.com/10.04/serverguide/httpd.html) I have done the following, which is almost exactly how I always do it on Fedora, but on Ubuntu it doesn't seem to work.

    a) DNS to IP:

        $ echo "127.0.0.1 a" > /etc/hosts
        $ echo "127.0.0.1 b" > /etc/hosts

    b) Apache virtual hosts:

        $ ls
        1  2  default  default.backup  default-ssl
        $ cat 1
        <VirtualHost *:80>
            ServerName a
            ServerAlias a
            DocumentRoot /var/www/html/a/public
            <Directory /var/www/html/a/public>
                #AddDefaultCharset utf-8
                DirectoryIndex index.php
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
        $ cat 2
        <VirtualHost *:80>
            ServerName b
            ServerAlias b
            DocumentRoot /var/www/html/b/public
            <Directory /var/www/html/b/public>
                #AddDefaultCharset utf-8
                DirectoryIndex index.php
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    c) Load the sites into Apache and restart the service:

        $ a2ensite 1
        $ a2ensite 2
        $ a2dissite default
        $ /etc/init.d/apache2 restart

    d) Browse the two new hosts:

        $ firefox http://a

    It does not work: both http://a and http://b always land in /var/www/html. How do I fix it so that each name goes to its own directory, e.g. http://a goes to /var/www/html/a/public and not /var/www/html?
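
    One likely culprit is step (a): both echo commands use >, so the second truncates /etc/hosts, wiping the first entry and the stock localhost line with it. A sketch of the repair - append with >> instead:

        echo "127.0.0.1 localhost" >  /etc/hosts
        echo "127.0.0.1 a"         >> /etc/hosts
        echo "127.0.0.1 b"         >> /etc/hosts

    If a and b then resolve but Apache still serves /var/www/html, check that NameVirtualHost *:80 is in effect (on 12.04 it normally lives in /etc/apache2/ports.conf), since without it the ServerName comparison never happens.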

  • Tomato/DD-WRT router to act as switch & only NAT some port

    - by fseto
    BACKGROUND: I have a device that must use a real IP address. Currently my ISP uses DHCP and I can have up to 4 real IP addresses assigned. However, the cable modem has only 1 ethernet port, and it's connected to my router (running Tomato, but it can run DD-WRT or another OpenWrt variant if required). The question stems from how I can connect the additional device that requires a real IP.

    EASY SOLUTION: get a switch and connect the CM, router and device to it. But alas, I want to avoid this route, since:

    - the wiring cabinet in my home is drawing lots of power and heat already
    - the device would be unprotected by any firewall
    - I'd be unable to monitor the traffic to/from the device

    Besides, what would be the FUN in that? =)

    IDEA: So what I want to do is configure the router so that one of the switch ports is removed from the normal br0 bridge and instead behaves like a switch on the WAN port. What's the best way of doing this? Should I create another bridge on the WAN and the device port? Can a single port belong to two bridges, or would I need to create a subinterface first? Would I need a DHCP relay? Am I expecting too much from my poor cheapie router?

        +------+
        |  CM  |
        +--++--+
           ||
        +----WAN---------------+
        |  /        \   Router |
        | BR1?      BR0        |
        | |           \        |
        | |           {NAT}    |
        | |           /        |
        | \          |         |
        +-P0----P1-P2-P3-Wifi--+
          |
        +------+
        |Device|
        +------+
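
    On Broadcom-based Tomato/DD-WRT builds the usual trick is VLAN reassignment rather than a second bridge: move one switch port out of the LAN VLAN into the WAN VLAN, so the device sits on the modem's segment and pulls its own public DHCP lease. A very rough sketch - the port numbering is model-specific and these values are hypothetical:

        # on many Broadcom routers vlan1 = LAN, vlan2 = WAN; "3" is the port to move
        nvram set vlan1ports="1 2 4 5*"    # drop physical port 3 from the LAN VLAN
        nvram set vlan2ports="0 3 5"       # add it to the WAN-side VLAN
        nvram commit && reboot

    The trade-off is that traffic to the device then bypasses NAT and the firewall entirely, which works against the monitoring goal.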

  • What can cause peaks in PageTables in /proc/meminfo?

    - by Fuzzy76
    I have a gameserver running Debian Lenny on a VPS host. Even under fairly low load, the players experience major lag (ping times rise from 50 ms to 150-500 ms) in bursts of 3-10 seconds. I have installed Munin server monitoring, but the graphs show the server has plenty of CPU, RAM and bandwidth available. The only weird thing I noticed is some peaks in the memory graph attributed to "page_tables", which maps to PageTables in /proc/meminfo, but I can't find any good information on what this might mean. Any ideas what might be causing this? If you need any more graphs, just let me know. The interrupts/second count is roughly 400-600 during this period (nearly all from eth0). The drop in committed memory was caused by me trying to lower the allocated memory for the server from 512MB to 256MB, but that didn't seem to help.
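
    PageTables in /proc/meminfo is the memory consumed by the page tables themselves; it grows when processes fork or map large amounts of memory, so a first step is correlating the peaks with the lag bursts. A minimal sketch:

        # timestamped PageTables samples, one per second
        while sleep 1; do
            echo "$(date +%T) $(grep '^PageTables' /proc/meminfo)"
        done >> /tmp/pagetables.log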

  • How to re-add a RAID-10 failed drive on Ubuntu?

    - by thiesdiggity
    I have a problem that I can't seem to solve. We have an Ubuntu server set up with RAID-10, and two of the drives dropped out of the array. When I try to re-add them using the following command:

        mdadm --manage --re-add /dev/md2 /dev/sdc1

    I get the following error message:

        mdadm: Cannot open /dev/sdc1: Device or resource busy

    "cat /proc/mdstat" shows:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [r$
        md2 : active raid10 sdb1[0] sdd1[3]
              1953519872 blocks 64K chunks 2 near-copies [4/2] [U__U]
        md1 : active raid1 sda2[0] sdc2[1]
              468853696 blocks [2/2] [UU]
        md0 : active raid1 sda1[0] sdc1[1]
              19530688 blocks [2/2] [UU]
        unused devices: <none>

    "/sbin/mdadm --detail /dev/md2" shows:

        /dev/md2:
                Version : 00.90
          Creation Time : Mon Sep 5 23:41:13 2011
             Raid Level : raid10
             Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
          Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
           Raid Devices : 4
          Total Devices : 2
        Preferred Minor : 2
            Persistence : Superblock is persistent
            Update Time : Thu Oct 25 09:25:08 2012
                  State : active, degraded
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0
                 Layout : near=2, far=1
             Chunk Size : 64K
                   UUID : c6d87d27:aeefcb2e:d4453e2e:0b7266cb
                 Events : 0.6688691

            Number   Major   Minor   RaidDevice State
               0       8       17        0      active sync   /dev/sdb1
               1       0        0        1      removed
               2       0        0        2      removed
               3       8       49        3      active sync   /dev/sdd1

    Output of df -h:

        Filesystem                     Size  Used Avail Use% Mounted on
        /dev/md1                       441G  2.0G  416G   1% /
        none                            32G  236K   32G   1% /dev
        tmpfs                           32G     0   32G   0% /dev/shm
        none                            32G  112K   32G   1% /var/run
        none                            32G     0   32G   0% /var/lock
        none                            32G     0   32G   0% /lib/init/rw
        tmpfs                           64G  215M   63G   1% /mnt/vmware
        none                           441G  2.0G  416G   1% /var/lib/ureadahead/debugfs
        /dev/mapper/RAID10VG-RAID10LV  1.8T  139G  1.6T   8% /mnt/RAID10

    When I do "fdisk -l" I can see all the drives needed for the RAID-10. The RAID-10 is part of /dev/mapper - could that be the reason why the device is reported busy? Anyone have any suggestions on what I can try to get the drives back into the array? Any help would be greatly appreciated. Thanks!
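
    Note that /proc/mdstat above shows sdc1 as an active member of md0, which by itself explains "Device or resource busy": that partition already belongs to another array, so md2's two missing members must be different partitions. A sketch for identifying them before adding anything (the final device name is a placeholder):

        # list which array UUID each candidate partition carries
        mdadm --examine /dev/sd[a-f]1 | grep -E '^/dev|UUID'
        # partitions matching md2's UUID (c6d87d27:...) are its members; if
        # --re-add refuses them, plain --add rejoins with a full resync
        mdadm --manage /dev/md2 --add /dev/sdX1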

  • sshd: How to enable PAM authentication for specific users

    - by Brad
    I am using sshd and allow logins with public key authentication. I want to allow select users to log in with a PAM two-factor authentication module. Is there any way I can enable PAM two-factor authentication for a specific user, without imposing it on all users? By the same token, I only want to enable password authentication for specific accounts: I want my SSH daemon to reject password authentication attempts, so that would-be attackers think I will not accept password authentication at all - except for the case in which someone knows my heavily guarded secret account, which is password-enabled. I want to do this for cases in which my SSH clients will not let me do either secret-key or two-factor authentication.
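
    OpenSSH's Match blocks provide exactly this kind of per-user override in sshd_config (available since roughly 4.4; which keywords may appear inside Match varies a little by version, so check sshd_config(5) on your build). A sketch with hypothetical account names:

        # global policy: public keys only
        PasswordAuthentication no
        ChallengeResponseAuthentication no
        UsePAM yes

        # the heavily guarded password-enabled account
        Match User secretadmin
            PasswordAuthentication yes

        # accounts that go through the PAM two-factor stack
        Match User alice,bob
            KbdInteractiveAuthentication yes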

  • Make a socket as a user but make it readable and writable by another

    - by user1598585
    I have software that runs as user A. It creates a socket in /sockets, and that socket should be readable and writable by user B. I have tried giving the directory ownership A:A or A:B, but when user A creates the socket it ends up with uid A and gid A. Using ACLs has not helped so far: the default mask prevents the rights from being effective, so rw permissions for B always turn into just r. If what I create is not a socket, it works fine. How can I best accomplish this? (It is for a web server where the web application makes the socket and the web-server software forwards requests to it.)
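
    Two things interact here: a default ACL on /sockets can grant B rights on new sockets, but the effective rights are capped by the ACL mask, which is derived from the group bits of the creation mode - hence rw collapsing to r. A sketch, where the launcher wrapper and server path are hypothetical:

        # default ACL: user B gets rw on anything created inside /sockets
        setfacl -m u:B:rwx /sockets
        setfacl -d -m u:B:rw /sockets
        # start the server under a group-writable umask so the mask
        # keeps the w bit at creation time
        (umask 0002 && exec /usr/local/bin/the-server)

    If the application itself passes a restrictive mode to bind(), the fallback is to chmod the socket after it appears.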

  • Ubuntu - No gnome-panels, no right-click, no internet, no hotkeys

    - by Darthfett
    Hey guys, I've been using Ubuntu (Maverick 10.10) on my desktop (ATI Radeon 5830) for about 3 weeks now, but all of a sudden I am unable to even use my computer. As soon as I start up, I see my desktop, with icons, but I don't see any gnome-panels, and I'm unable to get any options if I right-click. I can start programs by double clicking them. I also cannot get an internet connection. I've tried restarting gnome-panel by killing it, using Ctrl+Alt+5 to switch to a terminal (I don't have a shortcut to one on my desktop, and no hotkeys will work), but no luck. Restarting my computer has no effect upon this (I have to manually cut the power, since I don't know the terminal command). As far as I know, I have not made any changes, and I've never had any problems in the past. This started when I was playing Minecraft, but my internet crapped out, and no amount of re-trying the connection would work. I know it was my computer, as my brother's was working fine in the other room. Any clues as to what's going on? I'm more than willing to troubleshoot.
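
    A common first fix on GNOME 2 is resetting the panel's GConf tree from the virtual console and letting the session respawn it; a sketch (it discards your panel customizations):

        gconftool-2 --recursive-unset /apps/panel   # wipe panel settings
        pkill gnome-panel                           # the session manager restarts it
        sudo service network-manager restart        # separately, kick the connection
        sudo reboot                                 # clean reboot, no need to cut power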

  • PHP Websites: Very high IOPS

    - by Khuram
    We are hosting a set of websites on a VM cloud. These sites were previously on a couple of dedicated servers, but to enhance performance we moved them into a cloud environment. The cloud has SSD storage, but the provider now says we have very high IOPS and will degrade our service if we don't do something soon. The sites are decent PHP sites, but they run without any caching. How do I start to debug this? Sincerely, Khuram
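
    Before changing anything, find out which processes generate the I/O; a sketch (iotop and pidstat come from the iotop and sysstat packages):

        sudo iotop -oPa    # only active processes, with accumulated I/O totals
        sudo pidstat -d 5  # per-process read/write rates every 5 seconds

    On uncached PHP sites a large share of IOPS usually comes from recompiling scripts and file-backed sessions, so an opcode cache (APC on stacks of that era) and moving sessions to memory or a cache backend are the usual first wins.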

  • Command to find the source package of a binary?

    - by Delan Azabani
    I know there's a which command that echoes the full path of a binary (e.g. which sh). However, I'm fairly sure there's a command that echoes the package providing a particular binary. Is there such a command? If so, what is it? I'd like to be able to run this:

        commandName ls

    and get coreutils, for example.
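
    On Debian/Ubuntu, dpkg -S maps a file to the package that owns it, which composes with which; a sketch, with the RPM-world equivalent for completeness:

        dpkg -S "$(which ls)"    # -> coreutils: /bin/ls
        rpm -qf "$(which ls)"    # on RPM-based systems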

  • Too many BIND "query (cache) denied" messages - DNS attack?

    - by Jake
    After BIND crashed once, I ran:

        tail -f /var/log/messages

    and I see a massive number of log lines every second. Is this a DNS attack, or is something wrong? Sometimes a domain appears in the logs spelled like dOmAin.com (mixed upper and lower case). As you can see, there is only one domain in the logs, queried from many different IPs:

        Oct 10 02:21:26 mail named[20831]: client 74.125.189.18#38921: query (cache) 'ns1.domain2.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 192.221.144.171#38833: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 74.125.189.17#42428: query (cache) 'ns2.domain2.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 192.221.146.27#37899: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 193.203.82.66#39263: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 8.0.16.170#59723: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 80.169.197.66#32903: query (cache) 'dOmAin.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 134.58.60.1#47558: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 192.221.146.34#47387: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 8.0.16.8#59392: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 74.125.189.19#64395: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 217.72.163.3#42190: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 83.146.21.252#22020: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 192.221.146.116#57342: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 193.203.82.66#52020: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 8.0.16.72#64317: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 80.169.197.66#31989: query (cache) 'dOmAin.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 74.125.189.18#47436: query (cache) 'ns2.domain2.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 74.125.189.16#44005: query (cache) 'ns1.domain2.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 85.132.31.10#50379: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 94.241.128.3#60106: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 85.132.31.10#59118: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 212.95.135.78#27811: query (cache) 'domain.com/A/IN' denied

    /etc/resolv.conf:

        ; generated by /sbin/dhclient-script
        nameserver 4.2.2.4
        nameserver 8.8.4.4

    BIND config:

        // generated by named-bootconf.pl
        options {
            directory "/var/named";
            /*
             * If there is a firewall between you and nameservers you want
             * to talk to, you might need to uncomment the query-source
             * directive below. Previous versions of BIND always asked
             * questions using port 53, but BIND 8.1 uses an unprivileged
             * port by default.
             */
            // query-source address * port 53;
            allow-transfer { none; };
            allow-recursion { localnets; };
            //listen-on-v6 { any; };
            notify no;
        };
        //
        // a caching only nameserver config
        //
        controls {
            inet 127.0.0.1 allow { localhost; } keys { rndckey; };
        };
        zone "." IN {
            type hint;
            file "named.ca";
        };
        zone "localhost" IN {
            type master;
            file "localhost.zone";
            allow-update { none; };
        };
        zone "0.0.127.in-addr.arpa" IN {
            type master;
            file "named.local";
            allow-update { none; };
        };
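
    These lines are BIND refusing recursion to outside clients, which is exactly what allow-recursion { localnets; } is meant to do; the volume suggests misconfigured downstream resolvers or spoofed-source probes for an open resolver rather than an explanation for the crash. If the log noise itself is the problem, the denials can be discarded in named.conf (a sketch - they are logged under the security category):

        logging {
            // send query approval/denial messages nowhere
            category security { null; };
        };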

  • Connect an SSD externally such that hdparm is fully supported

    - by student
    I am trying some hdparm magic with my new SSD (Samsung 840 Pro). I don't want to swap the drive in and out of my machine over and over, so it would be great if I could connect it externally to my laptop. I have a cheap SATA-USB adapter, but it doesn't appear to support the ATA commands sent by hdparm. What's the best way to do this? Are there SATA-USB adapters that fully support what hdparm needs? Would it be a good idea to buy a SATA-eSATA adapter to get full control over the drive?
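
    eSATA is electrically plain SATA, so hdparm works over it exactly as on an internal port, which makes the SATA-eSATA adapter the safe choice. Some USB bridges do implement SAT (SCSI/ATA translation) passthrough; a quick probe that shows whether yours does:

        # the full IDENTIFY block only appears when ATA commands pass through
        sudo hdparm -I /dev/sdX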
