Search Results

Search found 44 results on 2 pages for 'autofs'.


  • Mount CIFS share with autofs

    - by Phanto
    I have a system running RHEL 5.5, and I am trying to mount a Windows share from a server using autofs. (Because the network is not ready at startup, I do not want to use fstab.) I can mount the shares manually, but autofs is just not mounting them. Here are the files I am working with. At the end of /etc/auto.master, I have:

        ## Mount this test share:
        /test /etc/auto.test --timeout=60

    In /etc/auto.test, I have:

        test -fstype=cifs,username=testuser,domain=domain.com,password=password ://server/test

    I then restart the autofs service. However, this does not work: listing the directory returns no results. I have followed all these guides on the web, and I either don't understand them or they just don't work. Thank You
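
    A minimal sketch of a layout that generally works for CIFS under autofs (hostname, share, and credentials path are placeholders). Note that with this indirect map the share appears at /test/test, not /test, and the directory looks empty until the key is actually accessed unless browsing (--ghost) is enabled:

        # /etc/auto.master (sketch)
        /test /etc/auto.test --timeout=60

        # /etc/auto.test (sketch) - key "test" mounts at /test/test on first access
        test -fstype=cifs,credentials=/etc/cifs-test.cred,domain=domain.com ://server/test

        # /etc/cifs-test.cred (root-owned, mode 0600) - keeps the password out of the map
        username=testuser
        password=password

    Stopping the service and running the daemon in the foreground (automount -f -v) usually prints the reason a key fails to mount.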

    Read the article

  • Mount an additional Xserve volume with autofs on Linux

    - by daustin777
    A few years ago I set up autofs on a Red Hat Linux box to mount volumes from four Xserves. I need to add a couple of new volumes from these same Xserves so that I can access their files from the Linux box. I've completely forgotten how to do this and haven't been able to find a solution online. How do I add the new volumes? Do I need to add paths to the new volumes?
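
    In case it helps jog the memory: adding volumes to an existing indirect autofs map is usually just extra lines in the map file that the master map already points at. The map file and export names below are hypothetical:

        # /etc/auto.master (existing entry, sketch)
        /mnt/xserve /etc/auto.xserve

        # /etc/auto.xserve - one key per volume; append new lines for the new volumes
        data1   -fstype=nfs,rw  xserve1.example.com:/Volumes/Data1
        newvol  -fstype=nfs,rw  xserve2.example.com:/Volumes/NewVolume

    After editing the map, reload or restart the automounter (e.g. service autofs reload on Red Hat) so the new keys are picked up.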

    Read the article

  • How to set up Automount/Autofs

    - by matt wilkie
    I've followed the Ubuntu help docs for setting up NFSv4 on a server running Ubuntu 10.04 LTS, and now I'm trying to get autofs (on Ubuntu 10.10) to mount the exports, following these instructions. So far it doesn't work. Where the docs say

        server -fstype=nfs4 server:/

    I'm supposed to replace 'server' with my server's hostname, right? If yes, should that be server-foo or server-foo.local?

        # Sample /etc/auto.master file
        # --- comments snipped --8<--
        +auto.master
        # pre-existing
        /nfs /etc/auto.nfs    # added by me

        # manually created /etc/auto.nfs
        ubuntu-server.local -fstype=nfs4 ubuntu-server.local:/

    ls /nfs/ubuntu-server /nfs/ubuntu-server.local shows nothing. What's the next troubleshooting step?
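
    For comparison, a minimal indirect-map sketch (hostname assumed to be ubuntu-server.local, as in the question). The left-hand key only names the directory under /nfs, and the share is mounted only when that exact key is accessed:

        # /etc/auto.master
        /nfs /etc/auto.nfs

        # /etc/auto.nfs - accessing /nfs/ubuntu-server triggers the mount
        ubuntu-server -fstype=nfs4 ubuntu-server.local:/

    If ls /nfs/ubuntu-server still shows nothing, stopping the service and running the daemon in the foreground (sudo automount -f -v) usually prints the actual mount error; whether the plain or the .local name is right is simply whichever one the client can resolve (ping ubuntu-server.local is a quick check).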

    Read the article

  • autofs on Mac OS X: AFP share not mounting as the correct user?

    - by Stephen Furlani
    Hello, I am way out of my depth, and I am trying to get all of the nodes in a cluster to mount a drive on my head node. I've got /etc/auto_master and /etc/auto_afp configured according to Apple's "Autofs: Automatically Mounting Network File Shares in Mac OS X" white paper.

        # /etc/auto_master
        +auto_master            # Use directory service
        /net                    -hosts      -nobrowse,hidefromfinder,nosuid
        /home                   auto_home   -nobrowse,hidefromfinder
        /Network/Servers        -fstab
        /-                      -static
        /-                      auto_afp

        # /etc/auto_afp
        /Volumes/userA -fstype=afp afp://userA:[email protected]:/
        /Volumes/userB -fstype=afp afp://userB:[email protected]:/

    I am logged into a compute node as userA. automount appears to mount both /Volumes/userA and /Volumes/userB to head-node.local:/Users/userA/Documents/, even though I have usernames, passwords, and the user directory specified in the AFP URL. If I go and log in with Finder, it mounts userB appropriately. File sharing and CD/DVD sharing are enabled on all computers involved. Am I doing the right thing, and if so, what did I do wrong? -Stephen
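
    One quick thing worth ruling out on the affected node is a stale automounter cache; a short, hedged check with the standard OS X tools:

        sudo automount -vc        # flush the automounter cache and re-read the maps
        mount | grep afp          # see which AFP URL each /Volumes/user* is really mapped to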

    Read the article

  • How to configure autofs5 timeout on a per-filesystem basis?

    - by Norman Ramsey
    Because of a show-stopping bug in Debian autofs 4, I just upgraded to autofs5. It is not honoring the timeout option in my auto.master file:

        /var/autofs/removable /etc/auto.removable --timeout=2

    I use this map for thumb drives and so on; I don't want a general default timeout of 2 seconds. I did some digging, and although the --timeout option worked in autofs 4 and appears in some examples on the Web, it is not actually sanctioned (or even mentioned) in the documentation for the auto.master file, so I don't feel I can report the problem as a bug. How can I get autofs5 to time out after 2 seconds only on designated filesystems? Update: I am using a Debian-packaged autofs5, version 5.0.4-3.2.
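
    For what it's worth, the auto.master(5) man page shipped with later autofs 5 releases does document a per-map timeout among the map options ("-t, --timeout <seconds>"), which is essentially the syntax already in use here; a sketch, on the assumption that a newer Debian 5.0.x package honours it where 5.0.4-3.2 apparently does not:

        # /etc/auto.master - per-map expire timeout, leaving the global default alone
        /var/autofs/removable /etc/auto.removable --timeout=2

        # after reloading, versions that support it can dump the configured maps,
        # including the expire timeout per mount point:
        automount -m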

    Read the article

  • NFS users getting a laggy GUI experience

    - by elzilrac
    I am setting up a system (Ubuntu 12.04) that uses LDAP, PAM, and autofs to load users and their home folders from a remote server. One of the login options is sitting down at the machine and starting a GUI session. Programs such as Chromium (browser) that perform many read/write operations in the ~/.cache and ~/.config directories are slowing down the GUI experience as well as putting strain on the NFS server, which is causing problems for other users. Ubuntu has the handy-dandy XDG_CONFIG_HOME and XDG_CACHE_HOME variables that can be set to change the default location of .cache and .config from the home folder to somewhere else. There are several places to set them, but most of them are not optimal (one more option is sketched after this list):

        /etc/environment
            pros: works across all shells
            cons: cannot use variables like $USER, so you can't give users different locations for .cache and .config; every user's new location would be the same directory

        /etc/bash.bashrc
            pros: $USER works, so you can place them in different folders
            cons: only gets run for bash-compatible shells

        ~/.pam_environment
            pros: works regardless of shell
            cons: cannot use system variables (like $USER), has its own syntax, and has to be created for every user
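
    One more option worth considering (hedged; the file name and target paths are just an example): export the variables from an Xsession.d snippet, which is sourced for every graphical login regardless of the user's shell and can compute per-user values:

        # /etc/X11/Xsession.d/45custom_xdg-local (hypothetical file name; sourced as the user)
        # assumes /var/cache/xdg exists and is world-writable like /tmp (mode 1777)
        XDG_CACHE_HOME="/var/cache/xdg/$(id -un)"
        mkdir -p "$XDG_CACHE_HOME"
        export XDG_CACHE_HOME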

    Read the article

  • OS X Server - setting up a share for network home directories + fast user switching

    - by sohocoke
    I'm nearly finished setting up my Open Directory master to allow users to be managed centrally and logged in on any of the client machines at home. I found discussion suggesting that using an AFP share for the 'Users' automount would result in network users (cf. users defined locally, or Portable Home Directory users) being unable to use fast user switching, as the first user logging in would trigger the automount and mount the 'Users' share with his permissions, preventing further users from using the mount. I've also found some suggestions to configure autofs so that the 'Users' share is mounted before any user logs in, but not much detail along these lines. I'd greatly appreciate some instructions for setting up autofs - ideally on the Open Directory server rather than in each client's /etc/fstab or equivalent location, so that it isn't required on every client machine - to get fast user switching working with network users.

    Read the article

  • Ubuntu Automount by Label?

    - by Jakobud
    Ubuntu Server 9.10. I know that using the mount command, you can use -L to mount by label, like so:

        mount -L thelabel /media/themount

    Is there any similar way to set up automount/autofs to mount by label name?
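
    A hedged sketch of one way to get this with autofs: rather than a LABEL= keyword in the map, point the map entry at the udev-maintained /dev/disk/by-label symlink (label and mount key taken from the question):

        # /etc/auto.master
        /media /etc/auto.bylabel --timeout=60

        # /etc/auto.bylabel - accessing /media/themount mounts the device labelled "thelabel"
        themount -fstype=auto,rw :/dev/disk/by-label/thelabel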

    Read the article

  • Can I force NFS automounts to use NFSv3?

    - by Steve
    I have a Linux server that is exporting NFSv4 as well as NFSv3. I have a Fedora 14 client that is defaulting to NFSv4 when automounting NFS shares off of the Linux server, and it seems to be causing some problems. All my other Linux clients on the network are mounting via NFSv3 without issue, so is there a way I can tell automount to mount the share via v3? I am pulling my automount maps via LDAP, with an entry in my /etc/auto.master file like so: +auto_master. I assume that makes it a bit different from listing options in a regular automount map (i.e. /home --nfsvers=3 fileserver:/DATA)?
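
    The options field of an automount map entry takes ordinary NFS mount options, so forcing v3 is normally a matter of adding nfsvers=3 (or vers=3) to that entry's option string; a sketch of the equivalent file-based line (how it is stored in the LDAP automountInformation attribute depends on your schema):

        # direct-map style line; the same option string goes in the LDAP map entry
        /home  -fstype=nfs,nfsvers=3  fileserver:/DATA

    After the next automount, nfsstat -m on the Fedora client shows which NFS version each mount actually negotiated.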

    Read the article

  • LDAP + NFS + automount home directories permissions issue

    - by noobishguy
    When an LDAP user logs into the system, they have incorrect permissions on their home directory. The LDAP and NFS services live on the same server. The directory shows the correct ownership and permissions:

        drwx------. 4 ldaptest ldaptest 4096 Jun 9 2014 ldaptest

    However, the UID/GID do not match between the client and the server.

    On the client:

        bash-4.1$ id
        uid=10001(ldaptest) gid=10001(ldaptest) groups=10001(ldaptest) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

    On the server:

        [root@ldap1 log]# id ldaptest
        uid=502(ldaptest) gid=502(ldaptest) groups=502(ldaptest)

    How do I resolve this?
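
    Names resolving but numeric IDs differing usually means the server is answering from a local /etc/passwd entry (uid 502) while the client gets the LDAP values (uid 10001); a hedged check and the sort of nsswitch.conf arrangement that normally lines the two up:

        # on the server: if these show a local entry with uid 502, that entry wins over LDAP
        getent passwd ldaptest
        grep ldaptest /etc/passwd

        # /etc/nsswitch.conf on the server (sketch) - make both sides consult LDAP,
        # or delete/renumber the duplicate local account so both agree on 10001
        passwd: files ldap
        group:  files ldap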

    Read the article

  • How do you get autofs and updatedb to work together?

    - by Veek.M
    /etc/my.misc:

        sda1 -fstype=ntfs,user,exec :/dev/sda1
        sda3 -fstype=ntfs,user,exec :/dev/sda3
        sda4 -fstype=ntfs,user,exec :/dev/sda4

    /etc/auto.master:

        /my /etc/my.misc --ghost

    When I run locate .pdf, I get nothing: although the mount points (sda1, sda2, ..) are created in /my, there's nothing in them until I access them. Unfortunately this is not good enough for updatedb, and it purges its cache of the /my/sdaX files. How do I prevent/solve this problem?
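
    One low-tech workaround (a sketch; the cron hook name is hypothetical and the timing assumptions are spelled out in the comments): force the automounts just before updatedb runs, so the filesystems are actually mounted while the scan happens:

        # /etc/cron.daily/00-trigger-automounts (hypothetical; run-parts executes it
        # before the locate/mlocate job because of the lexical ordering)
        #!/bin/sh
        for d in /my/*/; do
            ls "$d" >/dev/null 2>&1   # touching the key makes autofs mount it
        done

    The mounts then stay up for the map's expire timeout, which has to be long enough to cover the updatedb run; it is also worth confirming that /my is not listed in PRUNEPATHS in /etc/updatedb.conf.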

    Read the article

  • How to subscribe to the free Oracle Linux errata yum repositories

    - by Lenz Grimmer
    Now that updates and errata for Oracle Linux are available for free (both as in beer and freedom), here's a quick HOWTO on how to subscribe your Oracle Linux system to the newly added yum repositories on our public yum server, assuming that you just installed Oracle Linux from scratch, e.g. by using the installation media (ISO images) available from the Oracle Software Delivery Cloud You need to download the appropriate yum repository configuration file from the public yum server and install it in the yum repository directory. For Oracle Linux 6, the process would look as follows: as the root user, run the following command: [root@oraclelinux62 ~]# wget http://public-yum.oracle.com/public-yum-ol6.repo \ -P /etc/yum.repos.d/ --2012-03-23 00:18:25-- http://public-yum.oracle.com/public-yum-ol6.repo Resolving public-yum.oracle.com... 141.146.44.34 Connecting to public-yum.oracle.com|141.146.44.34|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 1461 (1.4K) [text/plain] Saving to: “/etc/yum.repos.d/public-yum-ol6.repo” 100%[=================================================>] 1,461 --.-K/s in 0s 2012-03-23 00:18:26 (37.1 MB/s) - “/etc/yum.repos.d/public-yum-ol6.repo” saved [1461/1461] For Oracle Linux 5, the file name would be public-yum-ol5.repo in the URL above instead. The "_latest" repositories that contain the errata packages are already enabled by default — you can simply pull in all available updates by running "yum update" next: [root@oraclelinux62 ~]# yum update Loaded plugins: refresh-packagekit, security ol6_latest | 1.1 kB 00:00 ol6_latest/primary | 15 MB 00:42 ol6_latest 14643/14643 Setting up Update Process Resolving Dependencies --> Running transaction check ---> Package at.x86_64 0:3.1.10-43.el6 will be updated ---> Package at.x86_64 0:3.1.10-43.el6_2.1 will be an update ---> Package autofs.x86_64 1:5.0.5-39.el6 will be updated ---> Package autofs.x86_64 1:5.0.5-39.el6_2.1 will be an update ---> Package bind-libs.x86_64 32:9.7.3-8.P3.el6 will be updated ---> Package bind-libs.x86_64 32:9.7.3-8.P3.el6_2.2 will be an update ---> Package bind-utils.x86_64 32:9.7.3-8.P3.el6 will be updated ---> Package bind-utils.x86_64 32:9.7.3-8.P3.el6_2.2 will be an update ---> Package cvs.x86_64 0:1.11.23-11.el6_0.1 will be updated ---> Package cvs.x86_64 0:1.11.23-11.el6_2.1 will be an update [...] ---> Package yum.noarch 0:3.2.29-22.0.1.el6 will be updated ---> Package yum.noarch 0:3.2.29-22.0.2.el6_2.2 will be an update ---> Package yum-plugin-security.noarch 0:1.1.30-10.el6 will be updated ---> Package yum-plugin-security.noarch 0:1.1.30-10.0.1.el6 will be an update ---> Package yum-utils.noarch 0:1.1.30-10.el6 will be updated ---> Package yum-utils.noarch 0:1.1.30-10.0.1.el6 will be an update --> Finished Dependency Resolution Dependencies Resolved ===================================================================================== Package Arch Version Repository Size ===================================================================================== Installing: kernel x86_64 2.6.32-220.7.1.el6 ol6_latest 24 M kernel-uek x86_64 2.6.32-300.11.1.el6uek ol6_latest 21 M kernel-uek-devel x86_64 2.6.32-300.11.1.el6uek ol6_latest 6.3 M Updating: at x86_64 3.1.10-43.el6_2.1 ol6_latest 60 k autofs x86_64 1:5.0.5-39.el6_2.1 ol6_latest 470 k bind-libs x86_64 32:9.7.3-8.P3.el6_2.2 ol6_latest 839 k bind-utils x86_64 32:9.7.3-8.P3.el6_2.2 ol6_latest 178 k cvs x86_64 1.11.23-11.el6_2.1 ol6_latest 711 k [...] 
xulrunner x86_64 10.0.3-1.0.1.el6_2 ol6_latest 12 M yelp x86_64 2.28.1-13.el6_2 ol6_latest 778 k yum noarch 3.2.29-22.0.2.el6_2.2 ol6_latest 987 k yum-plugin-security noarch 1.1.30-10.0.1.el6 ol6_latest 36 k yum-utils noarch 1.1.30-10.0.1.el6 ol6_latest 94 k Transaction Summary ===================================================================================== Install 3 Package(s) Upgrade 96 Package(s) Total download size: 173 M Is this ok [y/N]: y Downloading Packages: (1/99): at-3.1.10-43.el6_2.1.x86_64.rpm | 60 kB 00:00 (2/99): autofs-5.0.5-39.el6_2.1.x86_64.rpm | 470 kB 00:01 (3/99): bind-libs-9.7.3-8.P3.el6_2.2.x86_64.rpm | 839 kB 00:02 (4/99): bind-utils-9.7.3-8.P3.el6_2.2.x86_64.rpm | 178 kB 00:00 [...] (96/99): yelp-2.28.1-13.el6_2.x86_64.rpm | 778 kB 00:02 (97/99): yum-3.2.29-22.0.2.el6_2.2.noarch.rpm | 987 kB 00:03 (98/99): yum-plugin-security-1.1.30-10.0.1.el6.noarch.rpm | 36 kB 00:00 (99/99): yum-utils-1.1.30-10.0.1.el6.noarch.rpm | 94 kB 00:00 ------------------------------------------------------------------------------------- Total 306 kB/s | 173 MB 09:38 warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY Retrieving key from http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 Importing GPG key 0xEC551F03: Userid: "Oracle OSS group (Open Source Software group) " From : http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 Is this ok [y/N]: y Running rpm_check_debug Running Transaction Test Transaction Test Succeeded Running Transaction Updating : yum-3.2.29-22.0.2.el6_2.2.noarch 1/195 Updating : xorg-x11-server-common-1.10.4-6.el6_2.3.x86_64 2/195 Updating : kernel-uek-headers-2.6.32-300.11.1.el6uek.x86_64 3/195 Updating : 12:dhcp-common-4.1.1-25.P1.el6_2.1.x86_64 4/195 Updating : tzdata-java-2011n-2.el6.noarch 5/195 Updating : tzdata-2011n-2.el6.noarch 6/195 Updating : glibc-common-2.12-1.47.el6_2.9.x86_64 7/195 Updating : glibc-2.12-1.47.el6_2.9.x86_64 8/195 [...] Cleanup : kernel-firmware-2.6.32-220.el6.noarch 191/195 Cleanup : kernel-uek-firmware-2.6.32-300.3.1.el6uek.noarch 192/195 Cleanup : glibc-common-2.12-1.47.el6.x86_64 193/195 Cleanup : glibc-2.12-1.47.el6.x86_64 194/195 Cleanup : tzdata-2011l-4.el6.noarch 195/195 Installed: kernel.x86_64 0:2.6.32-220.7.1.el6 kernel-uek.x86_64 0:2.6.32-300.11.1.el6uek kernel-uek-devel.x86_64 0:2.6.32-300.11.1.el6uek Updated: at.x86_64 0:3.1.10-43.el6_2.1 autofs.x86_64 1:5.0.5-39.el6_2.1 bind-libs.x86_64 32:9.7.3-8.P3.el6_2.2 bind-utils.x86_64 32:9.7.3-8.P3.el6_2.2 cvs.x86_64 0:1.11.23-11.el6_2.1 dhclient.x86_64 12:4.1.1-25.P1.el6_2.1 [...] xorg-x11-server-common.x86_64 0:1.10.4-6.el6_2.3 xulrunner.x86_64 0:10.0.3-1.0.1.el6_2 yelp.x86_64 0:2.28.1-13.el6_2 yum.noarch 0:3.2.29-22.0.2.el6_2.2 yum-plugin-security.noarch 0:1.1.30-10.0.1.el6 yum-utils.noarch 0:1.1.30-10.0.1.el6 Complete! At this point, your system is fully up to date. As the kernel was updated as well, a reboot is the recommended next action. If you want to install the latest release of the Unbreakable Enterprise Kernel Release 2 as well, you need to edit the .repo file and enable the respective yum repository (e.g. "ol6_UEK_latest" for Oracle Linux 6 and "ol5_UEK_latest" for Oracle Linux 5) manually, by setting enabled to "1". The next yum update run will download and install the second release of the Unbreakable Enterprise Kernel, which will be enabled after the next reboot. -Lenz
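
    For reference, enabling the UEK repository as described amounts to flipping one flag in the downloaded .repo file; a sketch of the relevant stanza (section name as given in the post), or the equivalent one-liner if yum-utils is installed:

        # /etc/yum.repos.d/public-yum-ol6.repo (relevant section)
        [ol6_UEK_latest]
        ...
        enabled=1

        # or, with yum-utils installed:
        yum-config-manager --enable ol6_UEK_latest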

    Read the article

  • What part of SMF is likely broken by a hard power down?

    - by David Mackintosh
    At one of my customer sites, the local guy shut down their local Solaris 10 x86 server, pulled the power inputs, moved it, and now it won’t start properly. It boots and then presents a prompt which lets you log in. This appears to be single user milestone (or equivalent). Digging into it, I think that SMF isn’t permitting the system to go multi-user. SMF was generating a ton of errors on autofs, after some fooling with it I got it to generate errors on inetd and nfs/client instead. This all tells me that the problem is in some SMF state file or database that needs to be fixed/deleted/recreated or something, but I don’t know what the actual issue is. By “generate errors”, I mean that every second I get a message on the console saying “Method or service exit timed out. Killing contract <#.” This makes interacting with the computer difficult. Running svcs –xv shows the service as “enabled”, in state “disabled”, reason “Start method is running”. Fooling with svcadm on the service does nothing, except confirm that the service is not in a Maintenance state. Logs in /lib/svc/log/$SERVICE just tell you that this loop has been happening once per second. Logs in /etc/svc/volatile/$SERVICE confirm that at boot the service is attempted to start, and immediately stopped, no further entries. Note that system-log isn’t starting because system-log depends on autofs so I have no syslog or dmesg. Googling all these terms ends up telling me how to debug/fix either autofs or nfs/client or inetd or rpc/gss (which was the dependency that SMF was using as an excuse to prevent nfs/client from “starting”, it was claiming that rpc/gss was “undefined” which is incorrect since this all used to work. I re-enabled it with inetadm, but inetd still won’t start properly). But I think that the problem is SMF in general, not the individual services. Doing a restore_repository to the “manifest_import” does nothing to improve, or even detectibly change, the situation. I didn’t use a boot backup because the last boot(s) were not useful. I have told the customer that since the valuable data directories are on a separate file system (which fsck’s as clean so it is intact) we could just re-install solaris 10 on the / partition. But that seems like an awfully windows-like solution to inflict on this problem. So. Any ideas what piece is broken and how I might fix it?
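
    If the suspicion that SMF itself (rather than the individual services) is broken turns out to be right, the other documented last resort besides restore_repository is falling back to the seed repository; a heavily hedged sketch (stock Solaris 10 paths, and note that any local service customisations are lost):

        # from the single-user prompt, with / mounted read-write
        cp /etc/svc/repository.db /etc/svc/repository.db.broken    # keep the damaged copy
        cp /lib/svc/seed/global.db /etc/svc/repository.db
        reboot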

    Read the article

  • Solaris mounting partitions

    - by Benco
    I'm trying to mount a partition in solaris 10... bash-3.00# mount /dev/dsk/c0t0d0s3 /data mount: /dev/dsk/c0t0d0s3 is already mounted or /data is busy As far as I know c0t0d0s3 isn't already mounted elsewhere, so what's really going on here? From /etc/mnttab : /dev/dsk/c1t0d0s0 / ufs rw,intr,largefiles,logging,xattr,onerror=panic,dev=7800001285811136 /devices /devices devfs dev=4840000 1285811125 ctfs /system/contract ctfs dev=48c0001 1285811125 proc /proc proc dev=4880000 1285811125 mnttab /etc/mnttab mntfs dev=4900001 1285811125 swap /etc/svc/volatile tmpfs xattr,dev=4940001 1285811125 objfs /system/object objfs dev=4980001 1285811125 sharefs /etc/dfs/sharetab sharefs dev=49c0001 1285811125 /usr/lib/libc/libc_hwcap1.so.1 /lib/libc.so.1 lofs dev=780000 1285811131 fd /dev/fd fd rw,dev=4b40001 1285811136 swap /tmp tmpfs xattr,dev=4940002 1285811137 swap /var/run tmpfs xattr,dev=4940003 1285811137 -hosts /net autofs nosuid,indirect,ignore,nobrowse,dev=4c00001 1285811148 auto_home /home autofs indirect,ignore,nobrowse,dev=4c00002 1285811148 cordb:vold(pid530) /vol nfs ignore,noquota,dev=4bc0001 1285811149 I suspect the problem is not related to the mount point, but rather the disk slice I'm trying to mount: bash-3.00# newfs -v /dev/dsk/c0t0d0s3 /dev/rdsk/c0t0d0s3: Device busy
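
    Since nothing in mnttab references c0t0d0s3, the "busy" report (and the same error from newfs) usually points at the slice being used for something other than a mounted filesystem; a hedged checklist of standard Solaris commands:

        swap -l                      # is the slice (or one overlapping it) in use as swap?
        dumpadm                      # is it the configured dump device?
        metadb && metastat -p        # is it part of an SVM metadevice or holding a state database replica?
        prtvtoc /dev/rdsk/c0t0d0s2   # do the boundaries of s3 overlap another slice that is in use?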

    Read the article

  • How can I find my USB-to-RS232 driver?

    - by mefmef
    i have a device that is correctly connected to my PC . but i could not see it in /dev . what does it means? is it because of not installing my drive? $ /dev ls before connecting my device: agpgart mei sda1 tty28 tty59 ttyS30 autofs mem sda2 tty29 tty6 ttyS31 block net sda5 tty3 tty60 ttyS4 bsg network_latency sda6 tty30 tty61 ttyS5 btrfs-control network_throughput serial tty31 tty62 ttyS6 bus null sg0 tty32 tty63 ttyS7 char oldmem shm tty33 tty7 ttyS8 console parport0 snapshot tty34 tty8 ttyS9 core port snd tty35 tty9 ttyUSB0 cpu ppp stderr tty36 ttyprintk uinput cpu_dma_latency psaux stdin tty37 ttyS0 urandom disk ptmx stdout tty38 ttyS1 usbmon0 dri pts tty tty39 ttyS10 usbmon1 ecryptfs ram0 tty0 tty4 ttyS11 usbmon2 fb0 ram1 tty1 tty40 ttyS12 vcs fd ram10 tty10 tty41 ttyS13 vcs1 full ram11 tty11 tty42 ttyS14 vcs2 fuse ram12 tty12 tty43 ttyS15 vcs3 hidraw0 ram13 tty13 tty44 ttyS16 vcs4 hpet ram14 tty14 tty45 ttyS17 vcs5 input ram15 tty15 tty46 ttyS18 vcs6 kmsg ram2 tty16 tty47 ttyS19 vcsa log ram3 tty17 tty48 ttyS2 vcsa1 loop0 ram4 tty18 tty49 ttyS20 vcsa2 loop1 ram5 tty19 tty5 ttyS21 vcsa3 loop2 ram6 tty2 tty50 ttyS22 vcsa4 loop3 ram7 tty20 tty51 ttyS23 vcsa5 loop4 ram8 tty21 tty52 ttyS24 vcsa6 loop5 ram9 tty22 tty53 ttyS25 vga_arbiter loop6 random tty23 tty54 ttyS26 zero loop7 rfkill tty24 tty55 ttyS27 lp0 rtc tty25 tty56 ttyS28 mapper rtc0 tty26 tty57 ttyS29 mcelog sda tty27 tty58 ttyS3 $ /dev ls after connecting my device: agpgart mei sda1 tty28 tty59 ttyS30 autofs mem sda2 tty29 tty6 ttyS31 block net sda5 tty3 tty60 ttyS4 bsg network_latency sda6 tty30 tty61 ttyS5 btrfs-control network_throughput serial tty31 tty62 ttyS6 bus null sg0 tty32 tty63 ttyS7 char oldmem shm tty33 tty7 ttyS8 console parport0 snapshot tty34 tty8 ttyS9 core port snd tty35 tty9 ttyUSB0 cpu ppp stderr tty36 ttyprintk ttyUSB1 cpu_dma_latency psaux stdin tty37 ttyS0 uinput disk ptmx stdout tty38 ttyS1 urandom dri pts tty tty39 ttyS10 usbmon0 ecryptfs ram0 tty0 tty4 ttyS11 usbmon1 fb0 ram1 tty1 tty40 ttyS12 usbmon2 fd ram10 tty10 tty41 ttyS13 vcs full ram11 tty11 tty42 ttyS14 vcs1 fuse ram12 tty12 tty43 ttyS15 vcs2 hidraw0 ram13 tty13 tty44 ttyS16 vcs3 hpet ram14 tty14 tty45 ttyS17 vcs4 input ram15 tty15 tty46 ttyS18 vcs5 kmsg ram2 tty16 tty47 ttyS19 vcs6 log ram3 tty17 tty48 ttyS2 vcsa loop0 ram4 tty18 tty49 ttyS20 vcsa1 loop1 ram5 tty19 tty5 ttyS21 vcsa2 loop2 ram6 tty2 tty50 ttyS22 vcsa3 loop3 ram7 tty20 tty51 ttyS23 vcsa4 loop4 ram8 tty21 tty52 ttyS24 vcsa5 loop5 ram9 tty22 tty53 ttyS25 vcsa6 loop6 random tty23 tty54 ttyS26 vga_arbiter loop7 rfkill tty24 tty55 ttyS27 zero lp0 rtc tty25 tty56 ttyS28 mapper rtc0 tty26 tty57 ttyS29 mcelog sda tty27 tty58 ttyS3
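
    Comparing the two listings, the adapter does appear to be detected: ttyUSB1 shows up only after the device is plugged in (ttyUSB0 was already present before). A couple of standard commands to confirm which kernel driver claimed it:

        dmesg | tail -n 20        # look for a line like "... converter now attached to ttyUSB1"
        lsusb                     # shows the adapter's vendor:product ID
        ls -l /dev/ttyUSB*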

    Read the article

  • How do I reduce the size of mlocate database?

    - by MountainX
    I'm out of space on /var:

        25G 25G 0 100% /var

    It looks like mlocate.db is the problem:

        # find . -printf '%s %p\n' | sort -nr | head
        13140140032 ./lib/mlocate/mlocate.db.cgLMAM
        12409839616 ./lib/mlocate/mlocate.db.MqGeqe

        cat /etc/updatedb.conf
        PRUNE_BIND_MOUNTS="yes"
        PRUNENAMES=".git .bzr .hg .svn"
        PRUNEPATHS="/tmp /var/spool /media"
        PRUNEFS="NFS nfs nfs4 rpc_pipefs afs binfmt_misc proc smbfs autofs iso9660 ncpfs coda devpts ftpfs devfs mfs shfs sysfs cifs lustre_lite tmpfs usbfs udf"

    I don't see anything else to prune. So how can I fix this? Thanks
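
    The two mlocate.db.XXXXXX files look like temporary databases left behind by interrupted updatedb runs; the live database is mlocate.db itself. A hedged cleanup sketch, plus an optional extra prune path:

        # check that updatedb is not currently running, then remove the stale temporaries
        # (the pattern does not match the live mlocate.db)
        rm -i /var/lib/mlocate/mlocate.db.??????

        # /etc/updatedb.conf - optionally stop indexing big trees you never search, e.g.:
        PRUNEPATHS="/tmp /var/spool /media /var/cache"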

    Read the article

  • How can I make sure one Upstart job starts before other Upstart jobs?

    - by marrusl
    This is a general Upstart question, but let me use a specific case: Centrify is an NIS-to-Active Directory gateway. It needs to load before any service that depends on the authentication it provides, e.g. autofs, cron, nis, et al. This has proven quite challenging to achieve, even when trying to change the dependencies of the other services (which I don't think we should be doing anyway - I don't want to touch the other Upstart jobs if at all possible). Suggestions?
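
    One approach that avoids editing the other services' own .conf files is an Upstart override, which delays a job's start until the Centrify daemon is up. A sketch, assuming Centrify runs as a native Upstart job named centrifydc and that the Upstart in use (1.3 or later) supports .override files; if Centrify is still a SysV init script, the started event below will never fire:

        # /etc/init/autofs.override  (repeat for cron, nis, ... where they are Upstart jobs)
        start on (filesystem and started centrifydc)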

    Read the article

  • Are file access times not properly maintained in Mac OS X?

    - by Ether
    I'm trying to determine how file access times are maintained by default in Mac OS X, as I'm trying to diagnose some odd behaviour I'm seeing in a new MBP Unibody (running Snow Leopard, 10.6.2): The symptoms (drilling down to the specific behaviour that seems to be causing the issue): mutt is unable to switch to mailboxes which have recently received new mail mail is delivered by procmail, which updates the mtime of the mbox folder it is updating, but does not alter the atime (this is how new mail detection works: by comparing atime to mtime) however, both the mtime and atime of the mbox file is getting updated Through testing, it does not appear that atimes can be set separately in the filesystem: : [ether@tequila ~]$; touch test : [ether@tequila ~]$; touch -m -t 200801010000 test2 : [ether@tequila ~]$; touch -a -t 200801010000 test3 : [ether@tequila ~]$; ls -l test* -rw------- 1 ether staff 0 Dec 30 11:42 test -rw------- 1 ether staff 0 Jan 1 2008 test2 -rw------- 1 ether staff 0 Dec 30 11:43 test3 : [ether@tequila ~]$; ls -lu test* -rw------- 1 ether staff 0 Dec 30 11:42 test -rw------- 1 ether staff 0 Dec 30 11:43 test2 -rw------- 1 ether staff 0 Dec 30 11:43 test3 The test2 file is created with an old mtime, and the atime is set to now (as it is a new file), which is correct. However, test3 is created with an old atime, but is not set properly on the file. To be sure this is not just behaviour seen with new files, let's modify an old file: : [ether@tequila ~]$; touch -a -t 200801010000 test : [ether@tequila ~]$; ls -l test -rw------- 1 ether staff 0 Dec 30 11:42 test : [ether@tequila ~]$; ls -lu test -rw------- 1 ether staff 0 Dec 30 11:45 test So it would seem that atimes cannot be set explicitly (it is always reset to "now" when either mtime or atime modifications are submitted). Is this something inherent to the filesystem itself, is it something that can be changed, or am I totally crazy and looking in the wrong place? PS. the output of mount is: : [ether@tequila ~]$; mount /dev/disk0s2 on / (hfs, local, journaled) devfs on /dev (devfs, local, nobrowse) map -hosts on /net (autofs, nosuid, automounted, nobrowse) map auto_home on /home (autofs, automounted, nobrowse) ...and Disk Utility says that the drive is of type "Mac OS Extended (Journaled)".

    Read the article

  • Automount a Windows share

    - by user1632812
    I have this line and it works:

        mount -t cifs -o myuser //192.168.0.12/Public/Docs /mnt/cifs_shares/Docs

    But when I try with autofs it doesn't. In /etc/auto.master:

        /mnt/cifs_shares/Docs /etc/auto.cifs_shares

    and in /etc/auto.cifs_shares:

        Docs -fstype=cifs,rw,noperm,credentials=/etc/credentials.txt ://192.168.0.12/Public/Docs

    It seems the share actually gets mounted, but it turns out to be empty; when mounted with mount it's not empty at all. What am I missing? I'm on CentOS 6.3, 64-bit.
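
    One thing that stands out: in auto.master the first field should be the parent directory the automounter manages, with the map key ("Docs") supplying the subdirectory. With the master entry as written, autofs manages /mnt/cifs_shares/Docs itself and mounts the share one level deeper, at /mnt/cifs_shares/Docs/Docs, which would make the Docs directory look empty. A hedged sketch of the usual layout:

        # /etc/auto.master
        /mnt/cifs_shares /etc/auto.cifs_shares

        # /etc/auto.cifs_shares - appears as /mnt/cifs_shares/Docs when accessed
        Docs -fstype=cifs,rw,noperm,credentials=/etc/credentials.txt ://192.168.0.12/Public/Docs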

    Read the article

  • Why does 'nobody' always start a new `find` program that consumes my memory?

    - by UniMouS
    $ ps -elf | grep ... 0 D nobody 27320 27319 2 90 10 - 353471 sleep_ 07:54 ? 00:02:19 /usr/bin/find / -ignore_readdir_race ( -fstype NFS -o -fstype nfs -o -fstype nfs4 -o -fstype afs -o -fstype binfmt_misc -o -fstype proc -o -fstype smbfs -o -fstype autofs -o -fstype iso9660 -o -fstype ncpfs -o -fstype coda -o -fstype devpts -o -fstype ftpfs -o -fstype devfs -o -fstype mfs -o -fstype shfs -o -fstype sysfs -o -fstype cifs -o -fstype lustre_lite -o -fstype tmpfs -o -fstype usbfs -o -fstype udf -o -fstype ocfs2 -o -type d -regex \(^/tmp$\)\|\(^/usr/tmp$\)\|\(^/var/tmp$\)\|\(^/afs$\)\|\(^/amd$\)\|\(^/alex$\)\|\(^/var/spool$\)\|\(^/sfs$\)\|\(^/media$\)\|\(^/var/lib/schroot/mount$\) ) -prune -o -print0 ... This job always start automatically and consumes my memory. Even after I kill it, it will starts several hours later. What's that job? EDIT Note: the pid is different from the above because I killed the above one, wait for several hours, then the second one comes. $ pstree -psl |-anacron(25920)---sh(25929)---run-parts(25930)---locate(26343)---updatedb.findut(26348)-+-frcode(26358) | |-sort(26357) | `-updatedb.findut(26356)---su(26387)---sh(26402)---find(26403) This is what it look like in a graphical tool:
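
    That command line is the nightly updatedb run from the locate package: anacron kicks off the /etc/cron.daily jobs (as the pstree shows), and updatedb drops privileges to 'nobody' for the filesystem scan of everything not pruned. A hedged sketch of how to confirm and tame it:

        grep -rl updatedb /etc/cron.daily/      # typically /etc/cron.daily/locate and/or mlocate
        sudo chmod -x /etc/cron.daily/locate    # disable the daily run (or remove the locate package)
        sudo ionice -c3 nice updatedb           # or run it manually at idle priority instead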

    Read the article

  • Mounting Samba share whenever it's available, unmounting when it's not

    - by Laurynas Biveinis
    I am trying to set up permanent Samba share mounts. That's not too hard using these instructions. But I want them to:

        1. Automatically remount whenever I join the network where these shares are available.
        2. Automatically unmount (or make access requests fail immediately instead of hanging) whenever I leave the network, i.e. avoid this automatically.

    Googling suggests that autofs might be helpful. I gather it takes care of 1. above, but I am not sure about 2. The other questions about automated Samba mounts (e.g. How to mount a samba share permanently?) do not seem to address automatic remounts/unmounts, so I think this is not a duplicate. Thanks.
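
    A hedged sketch of the autofs side of this (share and server names are placeholders): a short expire timeout gets the share unmounted soon after it stops being used, which covers much of point 2, although an unreachable server can still make the next access hang until the CIFS mount attempt itself times out:

        # /etc/auto.master
        /mnt/smb /etc/auto.mysmb --timeout=30 --ghost

        # /etc/auto.mysmb - /mnt/smb/share mounts on first access, expires after ~30s idle
        share -fstype=cifs,rw,credentials=/etc/samba/share.cred ://fileserver/share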

    Read the article

  • How to mount a NAS on a laptop?

    - by deckoff
    So, I bought a NAS, which I configured successfully in /etc/fstab on my Kubuntu 10.10 Thinkpad X40. It works just fine when I am at home. A few days ago I went out with my laptop, and the problem is that when I'm not at home, both the suspend and hibernate functions take forever. I commented out the entry in fstab and the laptop started to work as expected. I played with autofs, but it seems to just die at some point and then I cannot access anything: it works for a while, and then just goes off. Is there any consistent way to make my laptop access the drive when at home and work OK when away? Probably a script that runs at startup, checks if the mount is there and mounts it if available... or a script that unmounts the drive at suspend/hibernate and loads it back at startup. Any useful ideas?
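
    If autofs alone doesn't behave, something along the lines of a pm-utils sleep hook can handle the suspend/resume side; a sketch (hook name, mount point and NAS hostname are made up), relying on the fact that scripts in /etc/pm/sleep.d are called with suspend/hibernate on the way down and resume/thaw on the way up, with the fstab entry kept but marked noauto:

        # /etc/pm/sleep.d/10-nas (hypothetical; make it executable)
        #!/bin/sh
        case "$1" in
            suspend|hibernate)
                umount -l /mnt/nas 2>/dev/null   # lazy unmount so suspend never waits on the NAS
                ;;
            resume|thaw)
                # remount only if the NAS answers; otherwise leave it unmounted
                ping -c1 -W2 nas.local >/dev/null 2>&1 && mount /mnt/nas
                ;;
        esac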

    Read the article

  • NFS Client reports Permission Denied, Server reports Permission Granted

    - by VxJasonxV
    I have two Red Hat 4 servers. The client is 4.6, the server is 4.5. I'm attempting to mount a share from the server onto the client via NFS. The /etc/exports configuration is as follows:

        /opt/data/config bkup(rw,no_root_squash,async)
        /opt/data/db bkup(rw,no_root_squash,async)

    exportfs returns these (among other) shares, and NFS is running according to ps output. I've been attempting to use autofs on the client, but have opted to just mount the share manually considering the issues I'm having. So, I issue the mount request:

        mount dist:/opt/data/config /mnt/config
        mount: dist:/opt/data/config failed, reason given by server: Permission denied

    OK, so let's see what the server has to say for itself:

        May 6 23:17:55 dist mountd[3782]: authenticated mount request from bkup:662 for /opt/data/config (/opt/data/config)

    It says it allowed the mount to take place. How can I diagnose why the client and server are disagreeing on the result?
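
    A few standard checks that usually narrow this down (nothing here is specific to your setup): confirm what the server thinks it is exporting and to whom, and whether the client's name/IP actually matches the "bkup" entry in /etc/exports:

        # on the server
        exportfs -v                      # effective export list and options
        exportfs -ra                     # re-export after any /etc/exports change

        # on the client
        showmount -e dist                # what the server advertises to this client
        rpcinfo -p dist                  # are mountd and nfs registered with the portmapper?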

    Read the article

  • How to automount SMB shared network drives in Mac OS X Lion

    - by cyppher
    In Mac OS X 10.7 (Lion), Apple has replaced the good old SMB support. Now I can't auto-connect to my shared (SMB) network drives. Workarounds? Or is it impossible? In OS X Snow Leopard, I could automatically connect my Ubuntu (SMB) shared network drives with auto_smb / auto_master (the autofs configuration in /private/etc/). I made three mount points (folders) directly in /Volumes; among them /Volumes/Data and /Volumes/webroot (both SMB shared). Unfortunately Lion doesn't connect (automount) my network drives. I have to manually connect to the server (an Ubuntu file server) in Finder, then open up Terminal and navigate to the mount points, and only then does it connect. This is not a workable solution. I've searched (Google/SO) but found no solutions apart from an unsupported hack. Isn't it possible any more to automatically connect to an SMB-shared drive during startup?
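
    For comparison, the pre-Lion pattern looked roughly like the sketch below (server name and credentials are placeholders); whether Lion's rewritten SMB client still honours -fstype=smbfs in an autofs map is exactly what seems to have changed, so treat this as the configuration to test rather than a known-working answer:

        # /etc/auto_master - one added line pointing at the custom direct map
        /-      auto_smb        -nosuid

        # /etc/auto_smb (direct map)
        /Volumes/Data    -fstype=smbfs ://user:pass@fileserver.local/Data
        /Volumes/webroot -fstype=smbfs ://user:pass@fileserver.local/webroot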

    Read the article

  • How can I view updatedb database content, and then exclude certain files/paths?

    - by rubo77
    The updatedb database on my Debian server is quite slow. Where is the database located, and how can I view its content to find out whether there are some paths full of useless stuff that I could add to the prune paths? My /etc/updatedb.conf looks like this:

        ...
        # filesystems which are pruned from updatedb database
        PRUNEFS="NFS nfs nfs4 afs binfmt_misc proc smbfs autofs iso9660 ncpfs coda devpts ftpfs devfs mfs shfs sysfs cifs lustre_lite tmpfs usbfs udf"
        export PRUNEFS
        # paths which are pruned from updatedb database
        PRUNEPATHS="/tmp /usr/tmp /var/tmp /afs /amd /alex /var/spool /sfs /media /var/backups/rsnapshot /var/mod_pagespeed/"
        ...

    And how can I prune all paths that contain */.git/* and */.svn/*?
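
    A hedged sketch, assuming the stock Debian mlocate layout: the database normally lives at /var/lib/mlocate/mlocate.db, locate can report what it contains in aggregate, and PRUNENAMES (as used in the Ubuntu config quoted earlier on this page) is the knob for directory names like .git and .svn; if this server still uses the older findutils locate, which may not read PRUNENAMES, switching to the mlocate package is probably the easier route:

        locate -S                         # database path plus the number of directories/files indexed
        locate -c /some/suspect/path      # how many indexed entries live under a given path

        # /etc/updatedb.conf - prune by directory name rather than full path
        PRUNENAMES=".git .svn"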

    Read the article
