Search Results

Search found 24624 results on 985 pages for 'linux rrt'.


  • How to tell X.org to reload input device module? (Working around suspend-to-ram crash on Acer laptop)

    - by Vi
    When X.org boots up, the Synaptics touchpad works well. But when I remove the module, X falls back to /dev/input/mice and doesn't use the normal driver even when the touchpad is available again. Xorg.0.log:

        ...
        (II) XINPUT: Adding extended input device "Synaptics Touchpad" (type: TOUCHPAD)
        (--) Synaptics Touchpad: touchpad found
        # { rmmod psmouse && echo mem > /sys/power/state && modprobe psmouse; }
        (WW) : No Device specified, looking for one...
        (II) : Setting Device option to "/dev/input/mice"
        ...

    How do I tell X.org to try its InputDevice again (without restarting the X server)?

    P.S. rmmod psmouse is needed to prevent the Acer Extensa 5220 from crashing when resuming from suspend-to-ram.

    Update: found the answer myself. Running

        xinput set-int-prop "Synaptics Touchpad" "Device Enabled" 8 1

    after reloading the kernel module re-enables the touchpad. Now suspend-to-ram works OK.
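    For reference, the whole workaround can be collapsed into one suspend script built from the commands above (a sketch; the script name is illustrative, and the device name must match the one in Xorg.0.log):

        #!/bin/sh
        # suspend.sh: unload psmouse around suspend-to-ram, then re-enable the touchpad in X
        rmmod psmouse                  # avoids the resume crash on the Acer Extensa 5220
        echo mem > /sys/power/state    # suspend to RAM; returns after resume
        modprobe psmouse               # reload the kernel driver
        # tell the running X server to re-enable the device instead of restarting X
        xinput set-int-prop "Synaptics Touchpad" "Device Enabled" 8 1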

    Read the article

  • After using lvextend, I can't recover unused space

    - by Cory Gagliardi
    I needed to add more disk space to my CentOS VM, so I added another virtual disk, then used lvextend to add the space to the existing partition. The steps I followed were:

        echo "- - -" > /sys/class/scsi_host/host0/scan
        pvcreate /dev/sdb
        vgextend VolGroup00 /dev/sdb
        lvextend -l +100%FREE /dev/VolGroup00/LogVol00
        resize2fs /dev/VolGroup00/LogVol00

    This worked fine. I subsequently filled up the VM, then deleted most of the used disk space. However, the unused disk space was never recovered after I deleted all of the files. This will illustrate what I'm saying better:

        # df -h
        Filesystem                       Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00   61G   32G   26G  56% /
        /dev/sda1                         99M   20M   75M  21% /boot
        tmpfs                           1006M     0 1006M   0% /dev/shm
        # pwd; du -h --max-depth=0
        /
        5.1G    .

    I cannot figure out how to get the partition to see that only 5.1 GB is used. Any ideas what I'm doing wrong?
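    A diagnostic worth running before blaming LVM (an assumption, not from the original post): space that df counts but du cannot see is usually held by files that were deleted while a process still had them open. lsof can show those:

        # files whose on-disk link count is zero but which are still open
        lsof +L1
        # on older lsof builds the equivalent is:
        lsof | grep '(deleted)'

    Restarting (or signalling) the offending process lets the kernel actually free the blocks, after which df and du should agree again.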

    Read the article

  • Alternatives to native LDAP

    - by Matt
    We've implemented an LDAP to NIS solution and have begun transitioning some systems to native LDAP binding for authentication and automount maps. Unfortunately we have a very mixed environment with more than 20 *nix environments. The setup for each variant is of course unique and has required various workarounds to get full functionality. We're now at the point where we're willing to revisit the solution and possibly migrate toward something like Likewise (http://www.likewise.org), but would like to know what others are using to solve this problem.

    Read the article

  • Accessing Netatalk/AFP Shares from OS X Snow Leopard

    - by j4nus_
    Recently upgraded my Ubuntu home server from 8.04 client to 10.04 server and reinstalled all services therein. One of them is a Netatalk daemon that I configured in a fashion similar to this website: http://www.kremalicious.com/2008/06/ubuntu-as-mac-file-server-and-time-machine-volume/ Finder recognizes my server and the afp service, yet when I attempt to log in (using valid credentials), Finder indicates it's the wrong username and password. I've tried altering some of the config files and used my Google-fu to look for solutions, but no luck. Any tips? (This was not an issue under 8.04, if it matters.)
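    One common culprit with Snow Leopard clients (an assumption from similar reports, not confirmed by this post): OS X 10.6 will only authenticate with the DHX2 UAM, so an afpd that doesn't advertise it rejects even valid credentials. A minimal check is the -uamlist line in /etc/netatalk/afpd.conf:

        # make sure uams_dhx2.so is in the list afpd offers to clients
        - -transall -uamlist uams_dhx.so,uams_dhx2.so -nosavepassword

    followed by restarting netatalk. If the 10.04 package was built without DHX2 support, that would also explain why the old 8.04 setup worked.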

    Read the article

  • Apache directory structure with multiple hosted languages

    - by anomareh
    I just got a new work machine up and running and I'm trying to decide how to set everything up directory-wise. I've done some digging around and really haven't been able to find anything conclusive. I know it's a question with a variety of answers, but I'm hoping there are some general guidelines or best practices to go by. With that said, here are a few things specific to my situation: I will be doing actual development and testing on the same machine as the server. It is a single-user machine in the sense that I will be the only one working on it. There will be multiple hosted languages, specifically PHP and RoR, with possible expansion later. I'd like the setup to translate well to a production environment.

    With those three things in mind, there are a couple of things I've had in the back of my mind. Seeing as it's a single-user machine, I haven't been able to decide whether I should be working on things out of my home directory or whether they should be located outside of it. I'm feeling that outside of a user directory would be better, as it would translate better to a production environment, but I'm also not sure if that will come with any permission annoyances or concerns, seeing as I'll be working on the same machine.

    Hosting multiple languages seems like it may be a bit quirky. With PHP, you're generally just dumping the project somewhere in the document root, whereas with something like a Rails app you have the entire project and you only want the public directory in the document root.

    Thanks for any insight, opinion, or just personal preference from experience anyone can offer.
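    To make the trade-offs concrete, here is one conventional layout (a sketch only; the paths and ServerName values are illustrative, not from the post). Projects live outside any home directory under /srv/www, the PHP vhost points at the project root, and the Rails vhost points only at the app's public/ directory:

        /srv/www/phpapp.example/          # PHP project; served directly
        /srv/www/railsapp.example/        # Rails project; only public/ is exposed

        <VirtualHost *:80>
            ServerName phpapp.example
            DocumentRoot /srv/www/phpapp.example
        </VirtualHost>

        <VirtualHost *:80>
            ServerName railsapp.example
            DocumentRoot /srv/www/railsapp.example/public
        </VirtualHost>

    Group-writable directories (e.g. a dev group containing your user and the web server's user) are the usual answer to the permission concern of working outside $HOME.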

    Read the article

  • Monitor LSI 3ware raid controller on ESXi

    - by aseq
    This concerns a server that runs ESXi (v4.x or 5.x) installed on drives that are configured as a RAID 10 using an LSI 3ware 9750 RAID controller. I would like to know if there is a way to monitor the LSI 3ware series of controllers, in particular the 9750, through ESXi, and hopefully also to run the monitoring daemon LSI provides. I know you can set up a cron job to execute tw_cli through ssh on the ESXi server, but that's not really ideal. I am not using vCenter, by the way. It would be nice to have more than just monitoring working, since the 3ware software has a very useful web client besides tw_cli.
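    For completeness, the cron-over-ssh fallback mentioned above might look like this when run from a separate monitoring host (a sketch; the tw_cli path on the ESXi host, the host name, and the alert address are all illustrative):

        # poll the controller every 15 minutes and mail on anything suspicious
        */15 * * * * ssh root@esxi-host /opt/tw_cli /c0 show | grep -Eqi 'degraded|rebuild|error' && echo "check /c0 on esxi-host" | mail -s "3ware RAID alert" [email protected]

    Key-based ssh login from the monitoring host is assumed, since cron can't answer a password prompt.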

    Read the article

  • Snort install issue on Debian 6 with libpcre - libpcre library not found

    - by Chuck
    I've read the manual on snort.org for installing Snort on Debian but am still having an issue. Does anyone know how to resolve this? I've tried installing the libpcre3 and libpcre3-dev packages using apt-get, and also installing manually by downloading the latest version off the tcpdump website. Any ideas?

        Checking for pcre-compile in -l pcre...no
        Error! Libpcre library not found.
        Get it from http://www.pcre.org
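    If libpcre3-dev is installed but configure still fails, pointing Snort's build at the headers and libraries explicitly sometimes gets past this check (a sketch; the paths shown are the Debian defaults):

        apt-get install libpcre3 libpcre3-dev
        ./configure --with-libpcre-includes=/usr/include \
                    --with-libpcre-libraries=/usr/lib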

    Read the article

  • Memcached debugging/server logs: how to monitor the memcached servers?

    - by user1179459
    I have a chat engine which is based on Memcached variables, putting them into arrays and reading them at the other end via jQuery. It works fine 95% of the time, but when the server load is high, memcached (I presume it's memcached) crashes and the browser gets stuck. I don't think it's a jQuery issue, since this only happens when the server load is very high. I need a way to monitor the memcached servers, or somehow write a log file showing where the failures/errors come in. Any idea how I can do this, or any idea why the memcached servers fail? I run memcached as follows:

        $GLOBALS['MemCached'] = FALSE;
        $GLOBALS['MemCached'] = new Memcache;
        $GLOBALS['MemCached']->pconnect('localhost', 11211);

    My memcached init script is as follows:

        #! /bin/sh
        #
        # chkconfig: - 55 45
        # description: The memcached daemon is a network memory cache service.
        # processname: memcached
        # config: /etc/sysconfig/memcached
        # pidfile: /var/run/memcached/memcached.pid

        # Standard LSB functions
        #. /lib/lsb/init-functions

        # Source function library.
        . /etc/init.d/functions

        PORT=11211
        USER=memcached
        MAXCONN=1024
        CACHESIZE=128
        OPTIONS=""

        if [ -f /etc/sysconfig/memcached ];then
            . /etc/sysconfig/memcached
        fi

        # Check that networking is up.
        . /etc/sysconfig/network

        if [ "$NETWORKING" = "no" ]
        then
            exit 0
        fi

        RETVAL=0
        prog="memcached"
        pidfile=${PIDFILE-/var/run/memcached/memcached.pid}
        lockfile=${LOCKFILE-/var/lock/subsys/memcached}

        start () {
            echo -n $"Starting $prog: "
            # Ensure that /var/run/memcached has proper permissions
            if [ "`stat -c %U /var/run/memcached`" != "$USER" ]; then
                chown $USER /var/run/memcached
            fi
            daemon --pidfile ${pidfile} memcached -d -p $PORT -u $USER -m $CACHESIZE -c $MAXCONN -P ${pidfile} $OPTIONS
            RETVAL=$?
            echo
            [ $RETVAL -eq 0 ] && touch ${lockfile}
        }
        stop () {
            echo -n $"Stopping $prog: "
            killproc -p ${pidfile} /usr/bin/memcached
            RETVAL=$?
            echo
            if [ $RETVAL -eq 0 ] ; then
                rm -f ${lockfile} ${pidfile}
            fi
        }
        restart () {
            stop
            start
        }

        # See how we were called.
        case "$1" in
          start)
            start
            ;;
          stop)
            stop
            ;;
          status)
            status -p ${pidfile} memcached
            RETVAL=$?
            ;;
          restart|reload|force-reload)
            restart
            ;;
          condrestart|try-restart)
            [ -f ${lockfile} ] && restart || :
            ;;
          *)
            echo $"Usage: $0 {start|stop|status|restart|reload|force-reload|condrestart|try-restart}"
            RETVAL=2
            ;;
        esac

        exit $RETVAL
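    For basic visibility without extra software, memcached's own stats counters can be polled and logged (a sketch; assumes a netcat that supports -q, and the log path is illustrative):

        # run from cron each minute: log connection and eviction counters
        echo stats | nc -q 1 localhost 11211 | grep -E 'curr_connections|listen_disabled_num|evictions' >> /var/log/memcached-stats.log

    A climbing listen_disabled_num would mean the MAXCONN=1024 ceiling from the init script is being hit under load, which would match a service that only misbehaves at peak traffic.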

    Read the article

  • What is the best distro to host a KVM virtualization solution

    - by elventear
    I am looking for a server-oriented distro that we can expect to have decent support but that also offers as many of the latest KVM features as possible. I am leaning towards Ubuntu 10.04 LTS, because, well, it's LTS and more bleeding edge, but I don't find Ubuntu serious enough in terms of support (I say this as a heavy Ubuntu user). Given that CentOS 6 is not out yet, I am not sure whether going with CentOS 5 would be the best option in terms of getting more features from KVM. Any other distro you would recommend that could meet the criterion of long-term support (at least 4 years)?

    Read the article

  • Backing up VMs to a tape drive

    - by Aljoscha Vollmerhaus
    I've got myself one of these fancy tape drives, an HP LTO2 with 200/400 GB cartridges. The st driver reports it like this:

        scsi 1:0:0:0: Sequential-Access HP Ultrium 2-SCSI T65D

    I can store and retrieve files like a charm using tar; both tar cf /dev/st0 somedirectory and tar xf /dev/st0 work flawlessly. However, what I really would like to back up are LVM LVs. They contain entire virtual machines with varying partition layouts, so using mount and tar is not an option. I've tried using something like

        dd if=/dev/VG/LV bs=64k of=/dev/st0

    to achieve this, but there seem to be various problems associated with this approach.

    Firstly, I would like to be able to store more than one LV on a single tape. Now I guess I could seek to concatenate the data on the tape, but I think this would not work very well in an automated scenario with many different LVs of various sizes.

    Secondly, I would like to store a small XML file along with the raw data that contains some information about the VM contained in the LV. I could dump everything to a directory and tar it up, but that is not very desirable: I would have to set aside huge amounts of scratch space.

    Thirdly, from googling around it seems like it would be wise to use something like mbuffer when writing to the tape, to prevent what Wikipedia calls "shoe-shining" the tape. However, I can't get anything useful done with mbuffer. The mbuffer man page suggests this for writing to a tape device:

        mbuffer -t -m 10M -p 80 -f -o $TAPE

    So I've tried this:

        dd if=/dev/VG/LV | mbuffer -t -m 10M -p 80 -f -d 64k -o /dev/st0

    Note the added -d 64k to account for the 64k block size of the tape. However, reading data back from a tape written in this way never seems to yield any useful results: dd has been running for ages now and has managed to transfer only 361M of data from the tape. What's wrong here?
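    Two details worth checking against the symptoms above (assumptions, not from the post): dd defaults to 512-byte reads, so restoring from a tape written in 64k blocks will crawl or fail unless the block size is matched, and several LVs can share one tape as separate tape files if the non-rewinding device is used:

        # read back with the same block size the tape was written with
        dd if=/dev/st0 bs=64k of=/dev/VG/LV

        # write two LVs as consecutive tape files: /dev/nst0 leaves the head
        # positioned after each filemark instead of rewinding
        dd if=/dev/VG/LV1 bs=64k of=/dev/nst0
        dd if=/dev/VG/LV2 bs=64k of=/dev/nst0

        # later: position to the second tape file before restoring LV2
        mt -f /dev/nst0 rewind
        mt -f /dev/nst0 fsf 1

    The small XML descriptor could ride along as its own tiny tape file between the images, which avoids any scratch space.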

    Read the article

  • Sequential SSH command execution not working in Ubuntu/Bash

    - by kumar
    My requirement is that I have a set of commands in a text file that need to be executed. My shell script has to read each command, execute it, and store the results in a separate file. Here is the snippet which implements this:

        while read command
        do
            echo 'Command :' $command >> "$OUTPUT_FILE"
            redirect_pos=`expr index "$command" '>>'`
            if [ `expr index "$command" '>>'` != 0 ];then
                redirect_fn "$redirect_pos" "$command";
            else
                $command
                state=$?
                if [ $state != 0 ];then
                    echo "command failed." >> "$OUTPUT_FILE"
                else
                    echo "executed successfully." >> "$OUTPUT_FILE"
                fi
            fi
            echo >> "$OUTPUT_FILE"
        done < "$INPUT_FILE"

    A sample Commands.txt will be like this:

        tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt
        gzip /var/tmp/logs.tar
        rm -f /var/tmp/list.txt

    This is working fine for commands which need to be executed on the local machine. But when I try to execute the following ssh commands, only the first command gets executed. Here are some of the ssh commands added to my text file:

        ssh uname@hostname1 tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt
        ssh uname@hostname2 gzip /var/tmp/logs.tar
        ssh ... etc

    When I execute these on the CLI they work fine. Could anybody help me with this?
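    A likely cause (an assumption from the symptom, worth testing): ssh inherits the loop's stdin and swallows the rest of $INPUT_FILE as its own input, so read finds nothing after the first line. Detaching ssh's stdin avoids that; a minimal sketch of the change:

        while read command
        do
            case "$command" in
                ssh\ *) $command < /dev/null ;;   # or use ssh -n, which does the same
                *)      $command ;;
            esac
        done < "$INPUT_FILE"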

    Read the article

  • Nagios orphaned services warnings

    - by Gordon
    We have had Nagios running on one of our servers without any problems for a while, but lately certain old service warnings have been reappearing and then disappearing on the service detail page. From looking at the logs I found warnings like the following:

        Warning: The check of service 'Tomcat' on host 'virtual1' looks like it was orphaned (results never came back). I'm scheduling an immediate check of the service...

    Has anyone ever come across this before, or does anyone at least know a way to delete the old orphaned warnings? The Nagios version we are running is 3.0b7, so an update might be in order. Thanks.

    Read the article

  • MySQL open files limit

    - by Brian
    This question is similar to "set open_files_limit", but there was no good answer. I need to increase my table_open_cache, but first I need to increase the open_files_limit. I set the option in /etc/mysql/my.cnf:

        open-files-limit = 8192

    This worked fine in my previous install (Ubuntu 8.04), but now in Ubuntu 10.04, when I start the server up, open_files_limit is reported to be 1710. That seems like a pretty random number for the limit to be clipped to. Anyway, I tried getting around it by adding a line like this in /etc/security/limits.conf:

        mysql hard nofile 8192

    I also tried adding this to the pre-start script in MySQL's upstart config (/etc/init/mysql.conf):

        ulimit -n 8192

    Obviously neither of those things worked. So where is the hoop that has been added between Ubuntu 8.04 and 10.04 through which I must jump in order to actually increase the open files limit?
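    The hoop is most likely upstart itself (an inference from how 10.04 launches MySQL, worth verifying locally): /etc/security/limits.conf only applies to PAM logins, and a ulimit in pre-start runs in a different shell than the daemon. Upstart jobs carry their own limits stanza:

        # /etc/init/mysql.conf: soft and hard open-file limits for the job
        limit nofile 8192 8192

    then stop and start the job (a plain reload won't pick the stanza up).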

    Read the article

  • LVM2 vs MDADM performance

    - by archer
    I've used MDADM + LVM2 on many boxes for quite a while. MDADM serves both the RAID0 and RAID1 arrays, while LVM2 is used for logical volumes on top of MDADM. Recently I've found that LVM2 can be used without MDADM (thus minus one layer and, as a result, less overhead) for both mirroring and striping. However, some people claim that read performance of an LVM2 mirrored array is not as fast as LVM2 (linear) on top of MDADM (RAID1), because LVM2 does not read from two or more devices at a time, but uses the second and higher devices only in case the first device fails, whereas MDADM reads from two devices at a time (even in mirrored mode). Who can confirm that?

    Read the article

  • mysqld causes high CPU load

    - by Radu
    My mysqld goes to 99.9% CPU for a variable time (between 2 and 20 minutes), and then goes back to a normal 0.1% - 5%. I checked the processlist: all is normal, 1 to 20 inserts or updates that last 2 to 5 seconds, and about 20 processes that are in sleep mode (maybe because the scripts don't close the mysql connection, but they are closed in about 5 - 10 secs; I didn't write the scripts :P but the server was running fine for the last 2 years, since it was made):

        | 15375 | root  | localhost | stoc  | Query | 0 | NULL | show processlist |
        | 79480 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL |
        | 79481 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL |
        | 79482 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL |
        | 79483 | pppoe | localhost | pppoe | Query | 0 | init | UPDATE acc SET InputOctets="0", OutputOctets="0", InputPackets="unknown", OutputPackets="User |
        | 79484 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL |
        | 79485 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL |
        | 79486 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL |

    Checked the RAID; it seems OK:

        [root@db2]# cat /proc/mdstat
        Personalities : [raid5] [raid4] [raid1]
        md0 : active raid1 sdd1[3] sdc1[2] sdb1[0] sda1[1]
              136448 blocks [4/4] [UUUU]
        md1 : active raid5 sdd2[3] sdc2[2] sdb2[0] sda2[1]
              12023808 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
        md3 : active raid5 sda4[1] sdd4[3] sdc4[2] sdb4[0]
              203647488 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
        md2 : active raid5 sda3[1] sdd3[3] sdc3[2] sdb3[0]
              24024576 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
        unused devices: <none>

    top sees my mysqld CPU load, but nothing else seems to be wrong:

        [root@db2]# top
        top - 17:56:05 up 7 days, 3:55, 3 users, load average: 32.93, 24.72, 22.70
        Tasks: 75 total, 4 running, 71 sleeping, 0 stopped, 0 zombie
        Cpu(s): 63.4% us, 36.6% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.0% si, 0.0% st
        Mem: 1988824k total, 1304776k used, 684048k free, 99588k buffers
        Swap: 12023800k total, 0k used, 12023800k free, 951028k cached

          PID USER  PR NI VIRT RES  SHR  S %CPU %MEM  TIME+   COMMAND
         5754 mysql 19  0 236m 57m  5108 R 99.9  2.9 21:58.76 mysqld
            1 root  16  0 7216 700   580 S  0.0  0.0  0:00.39 init
            2 root  RT  0    0   0     0 S  0.0  0.0  0:00.00 migration/0

    I have repaired all MySQL databases, reindexed the RAID... I'm running out of ideas. Does anyone have an idea what can go wrong with this server? Thank you.
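    To see what mysqld is actually chewing on during a burst, the slow query log is the usual first step (a sketch for my.cnf on a MySQL of that vintage; the path is illustrative):

        [mysqld]
        log_slow_queries = /var/log/mysql/slow.log
        long_query_time  = 2
        # also record statements that scan without using any index
        log-queries-not-using-indexes

    Watching that file during the next 99.9% CPU episode should show whether a runaway query (for example an UPDATE hitting the acc table without an index) is responsible.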

    Read the article

  • Debian Unstable + Postfix 2.6.5 + dkim-filter 2.8.2 issue

    - by kura
    I have Postfix installed on Debian Unstable; as the title states, the system is completely up to date. I have tried to get DKIM signatures working on outgoing mail using dkim-filter 2.8.2. I couldn't use the default Debian way of doing things with sockets; instead I used the Ubuntu way:

        SOCKET="inet:12345@localhost"

    I have the following in my postfix/main.cf:

        milter_default_action = accept
        milter_protocol = 6
        smtpd_milters = inet:localhost:12345
        non_smtpd_milters = inet:localhost:12345

    All is fine except I get the following message in mail.log when I start dkim-filter:

        dkim-filter[22029]: can't configure DKIM library; continuing

    And when it tries to sign mails I get the following error:

        postfix/cleanup[22042]: warning: milter inet:localhost:12345: can't read SMFIC_EOH reply packet header: Success

    And then the dkim-filter daemon stops. I've looked through Google but found no actual way to fix this that works for me. I have this working fine on an Ubuntu server but would love to get it working on Debian too.
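    One mismatch worth ruling out first (an assumption from the symptom, not verified against this setup): milter_protocol = 6 asks the milter to speak protocol version 6, which milters built against a pre-8.14 libmilter cannot do, and a failed handshake can surface exactly as a garbled SMFIC_EOH reply followed by the filter dying. Dropping Postfix back to the older protocol is a cheap test:

        # main.cf
        milter_protocol = 2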

    Read the article

  • mplayer audio desync

    - by geek
    I have an avi file and an ac3 file that contains an alternate audio stream. I run mplayer like this:

        mplayer -audiofile foo.ac3 bar.avi

    mplayer takes the audio stream from the ac3 file as expected, but when I try to seek through the video using the arrow or pgup/pgdown keys, the audio gets desynced: mplayer just starts playing the audio stream from the beginning. Do I have to pass any additional command-line arguments to make it seek properly without desyncing the audio?

    Read the article

  • Duplicate IP address detection with multiple NICs

    - by sfink
    I am using arping -D to detect duplicate IP addresses within a network when setting up servers. (The network is controlled by someone else, and we have had many issues with IP allocation in the past.) It works fine as long as my host has a single NIC on a given VLAN, but when my host has more than one (I have one with 9 NICs on one VLAN and 1 on the other), arping -D always returns false collisions. The problem is that all 9 of my NICs respond to an ARP request for any of the IPs on those NICs. (These are real physical NICs, not aliases or anything.) I send out one ARP request packet and get 9 ARP is-at replies, one for each MAC address. I could implement my own solution by sniffing packets and checking for any replies with a MAC address other than the local NICs', but it seems like there ought to be an easier way.
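    What's described is the kernel's default "ARP flux" behaviour: Linux answers ARP for any local address on any interface. It can be tightened with sysctls (the standard knobs for this, though whether they suit the host depends on its routing setup):

        # reply only if the target IP is configured on the interface
        # that received the ARP request
        sysctl -w net.ipv4.conf.all.arp_ignore=1
        # prefer the interface's own address when sourcing ARP
        sysctl -w net.ipv4.conf.all.arp_announce=2

    With arp_ignore=1 in place, arping -D should again get at most one reply per probed address.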

    Read the article

  • Best practice for authenticating DMZ against AD in LAN

    - by Sergei
    We have a few customer-facing servers in the DMZ that also have user accounts; all accounts are in the shadow password file. I am trying to consolidate user logons and am thinking about letting LAN users authenticate against Active Directory. The services needing authentication are Apache, ProFTPD and ssh. After consulting the security team, I have set up an authentication DMZ that has an LDAPS proxy, which in turn contacts another LDAPS proxy (proxy2) in the LAN, and this one passes authentication info via LDAP (as an LDAP bind) to the AD controller. The second LDAP proxy is only needed because the AD server refuses to speak TLS with our secure LDAP implementation. This works for Apache using the appropriate module. At a later stage I may try to move customer accounts from the servers to the LDAP proxy so they are not scattered around servers. For SSH, I joined proxy2 to the Windows domain so users can log on using their Windows credentials. Then I created ssh keys and copied them to the DMZ servers using ssh-copy, to enable passwordless logon once users are authenticated. Is this a good way to implement this kind of SSO? Did I miss any security issues here, or maybe there is a better way of achieving my goal?

    Read the article

  • HAProxy - Redirect issue - URI variables?

    - by Justin
    I'm using haproxy 1.5dev3, and I was wondering if there is any way to grab URI variables from a request in order to reappend the query string to the end of a redirect URL. What I'm trying to do is redirect from:

        http://www.domain.com/page/example.htm?id=1234567

    to:

        http://www.domain.com/frame/newpage.cfm?id=1234567

    redirect prefix doesn't work properly, as it tries to append /page/example.htm to the end of the redirect URL. Can I do some sort of rewrite to accomplish this? It would be awesome if you could use URIs and queries as variables for redirection/pool selection like on an F5. Please help... Thanks!
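    For reference, later HAProxy releases added exactly this ability: redirects that take log-format expressions. A sketch of what it looks like there (this relies on the query sample fetch, which appeared after 1.5-dev3, so treat it as an upgrade path rather than a drop-in fix for this version):

        acl old_page path /page/example.htm
        http-request redirect location /frame/newpage.cfm?%[query] code 301 if old_page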

    Read the article

  • Postfix character encoding?

    - by Anonymous12345
    I use Postfix as a mail server on Ubuntu, and I use PHP to send emails. The problem is that none of my emails are encoded properly by the mail software my VPS provider uses. According to them, the problem lies with me. It is only the name field which isn't encoded properly: for example, "Björn" becomes "Björn" in my emails. However, when I echo the $name, it outputs "Björn", which is correct. Also, Gmail and Hotmail do show it correctly. The strange part is that the text (the message itself) is encoded properly. I use the following for sending mail:

        $headers="MIME-Version: 1.0"."\n";
        $headers.="Content-type: text/plain; charset=UTF-8"."\n";
        $headers.="From: $name <$email>"."\n";
        $name= iconv(mb_detect_encoding($name), "UTF-8//IGNORE//TRANSLIT", $name);
        //// I HAVE TRIED WITH AND WITHOUT THE LINE ABOVE, NO DIFFERENCE
        mail($to, '=?UTF-8?B?'.base64_encode($subject).'?=', $text, $headers, '[email protected]');

    I have tried with and without the iconv line; no luck. The last thing I can think of is Postfix: could there be a setting for character encoding there? Does anybody know?
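    This is unlikely to be a Postfix setting: mail headers are expected to be ASCII, and a raw UTF-8 name in From: is shown as "Björn" by any software that assumes Latin-1. The usual fix is to RFC 2047-encode the display name exactly the way the subject is already encoded, along these lines (a sketch):

        // encode the From display name like the subject line already is
        $headers .= "From: =?UTF-8?B?".base64_encode($name)."?= <$email>"."\n";

    Gmail and Hotmail guess the charset of raw headers, which would explain why they displayed the unencoded name correctly.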

    Read the article

  • How can I still see the 'man' text after I quit man?

    - by Sol
    I typically use tcsh or bash and often want to use man to review a command's options. Currently, when I quit man or press Ctrl-C, the man text disappears and I see the scrollback buffer that was there before I ran the man command. I would like to still see the man text I was viewing, as a reference while I'm typing the command at the prompt, without opening a second window. How can I do that?
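    This is the pager's behaviour rather than man's: less restores the original screen on exit unless its termcap init/deinit sequences are disabled with -X. A few options (these assume less is the man pager, which is the common default):

        # one-off: leave the page on screen after quitting
        man -P 'less -X' tar

        # bash: make it the default
        export MANPAGER='less -X'

        # tcsh equivalent
        setenv MANPAGER 'less -X'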

    Read the article

  • gunicorn + django + nginx unix://socket failed (11: Resource temporarily unavailable)

    - by user1068118
    We are running very high-volume traffic on these servers, configured with Django, gunicorn, supervisor and nginx, but a lot of the time I see 502 errors. So I checked the nginx logs, and this is what is recorded:

        [error] 2388#0: *208027 connect() to unix:/tmp/gunicorn-ourapp.socket failed (11: Resource temporarily unavailable) while connecting to upstream

    Can anyone help debug what might be causing this to happen? This is our nginx configuration:

        sendfile on;
        tcp_nopush on;
        tcp_nodelay off;
        listen 80 default_server;
        server_name imp.ourapp.com;
        access_log /mnt/ebs/nginx-log/ourapp-access.log;
        error_log /mnt/ebs/nginx-log/ourapp-error.log;
        charset utf-8;
        keepalive_timeout 60;
        client_max_body_size 8m;
        gzip_types text/plain text/xml text/css application/javascript application/x-javascript application/json;

        location / {
            proxy_pass http://unix:/tmp/gunicorn-ourapp.socket;
            proxy_pass_request_headers on;
            proxy_read_timeout 600s;
            proxy_connect_timeout 600s;
            proxy_redirect http://localhost/ http://imp.ourapp.com/;
            #proxy_set_header Host $host;
            #proxy_set_header X-Real-IP $remote_addr;
            #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            #proxy_set_header X-Forwarded-Proto $my_scheme;
            #proxy_set_header X-Forwarded-Ssl $my_ssl;
        }

    We have configured Django to run in gunicorn as a generic WSGI application. Supervisord is used to launch the gunicorn workers:

        /home/user/virtenv/bin/python2.7 /home/user/virtenv/bin/gunicorn --config /home/user/shared/etc/gunicorn.conf.py daggr.wsgi:application

    This is what the gunicorn.conf.py looks like:

        import multiprocessing

        bind = 'unix:/tmp/gunicorn-ourapp.socket'
        workers = multiprocessing.cpu_count() * 3 + 1
        timeout = 600
        graceful_timeout = 40

    Does anyone know where I can start digging to see what might be causing the problem? This is what my ulimit -a output looks like on the server:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 59481
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 50000
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 1024
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited
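    EAGAIN on connect() to a unix socket typically means its listen backlog is full: every worker is busy and the queue of not-yet-accepted connections has overflowed. One knob to try, sketched against the gunicorn.conf.py above (the value is illustrative):

        # gunicorn.conf.py
        backlog = 4096    # pending-connection queue (gunicorn's default is 2048)

    Raising backlog only buys queueing room, though. With timeout = 600, a handful of slow requests can pin every worker for minutes at a time, so lowering the timeout or adding workers attacks the cause rather than the symptom.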

    Read the article

  • changing ext4 journal data mode with remount?

    - by Amos Shapira
    I'm tweaking an ext4 file system for speed, one tweak at a time. The first tweak is to change from data=ordered to data=writeback. To test this, I execute

        mount -n -o remount,data=writeback /

    but I keep getting:

        mount: / not mounted already, or bad option

    From lots of googling I found many questions about similar problems, and one answer from around 2001 for ext3 which says that you can't change the journal mode with remount. Is this limitation still current?
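    As far as I can tell the limitation still holds: the kernel rejects changing the data mode on a remount, and the root filesystem can never be fully unmounted. Two common workarounds (a sketch; the device name is illustrative):

        # bake the default into the superblock, so no mount option is needed
        tune2fs -o journal_data_writeback /dev/sda1

        # or hand it to the kernel at boot, e.g. on the grub kernel line:
        #   rootflags=data=writeback

    Either way a reboot is needed for / to come up in writeback mode.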

    Read the article
