Search Results

Search found 24334 results on 974 pages for 'directory loop'.


  • Why is my root filesystem always scanned at boot?

    - by luri
    I always have a pause at boot saying my filesystems are being checked (with a "press C to cancel" note, too). Actually (going by boot.log) I think it's the / fs, which is located at /dev/sdb5. Several questions altogether, here (hope this does not break any rule): Is this normal? Can I (or even should I) prevent this anyhow? According to boot.log (below) the fs does not seem to be 'clean', or, at least, it's in a state or condition that makes fsck always scan it for errors for a while (just a few seconds). How can I fix it?

    Edit: This is my boot.log:

      fsck from util-linux-ng 2.17.2
      udevd[515]: can not read '/etc/udev/rules.d/z80_user.rules'
      /dev/sdb5: 249045/32841728 files (0.3% non-contiguous), 20488485/131338752 blocks
      init: ureadahead-other main process (1111) terminated with status 4
      init: ureadahead-other main process (1116) terminated with status 4
      Password:
       * Starting AppArmor profiles        Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox   [ OK ]
       * Setting sensors limits                                                                           [ OK ]

    And this is the dumpe2fs result for the filesystem being checked (well, the relevant part of the log):

      Filesystem volume name:   <none>
      Last mounted on:          /
      Filesystem UUID:          42509bf9-f3e6-460a-8947-ec0f5c1fbcc8
      Filesystem magic number:  0xEF53
      Filesystem revision #:    1 (dynamic)
      Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
      Filesystem flags:         signed_directory_hash
      Default mount options:    (none)
      Filesystem state:         clean
      Errors behavior:          Continue
      Filesystem OS type:       Linux
      Inode count:              32841728
      Block count:              131338752
      Reserved block count:     6566937
      Free blocks:              110850356
      Free inodes:              32592701
      First block:              0
      Block size:               4096
      Fragment size:            4096
      Reserved GDT blocks:      992
      Blocks per group:         32768
      Fragments per group:      32768
      Inodes per group:         8192
      Inode blocks per group:   512
      Flex block group size:    16
      Filesystem created:       Fri Dec 10 19:44:15 2010
      Last mount time:          Mon Feb 14 17:00:02 2011
      Last write time:          Mon Feb 14 16:59:45 2011
      Mount count:              1
      Maximum mount count:      33
      Last checked:             Mon Feb 14 16:59:45 2011
      Check interval:           15552000 (6 months)
      Next check after:         Sat Aug 13 17:59:45 2011
      Lifetime writes:          331 GB
      Reserved blocks uid:      0 (user root)
      Reserved blocks gid:      0 (group root)
      First inode:              11
      Inode size:               256
      Required extra isize:     28
      Desired extra isize:      28
      Journal inode:            8
      First orphan inode:       28049496
      Default directory hash:   half_md4
      Directory Hash Seed:      d3d24459-514b-4413-b840-e970b766095b
      Journal backup:           inode blocks
      Journal features:         journal_incompat_revoke
      Journal size:             128M
      Journal length:           32768
      Journal sequence:         0x0005e0c4
      Journal start:            1

    This is the relevant (at least I think this is the fs being checked) line in fstab:

      # Entry for /dev/sdb5 :
      UUID=42509bf9-f3e6-460a-8947-ec0f5c1fbcc8  /  ext4  errors=remount-ro  0  1
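    One thing the dumpe2fs output hints at: there is a "First orphan inode" entry, and the filesystem still carries a maximum mount count (33) and a 6-month check interval, which is the kind of state that makes fsck take a quick look at every boot. A minimal sketch for inspecting and tuning that, assuming / really is /dev/sdb5 as boot.log suggests:

      # look at the mount-count / interval bookkeeping
      sudo dumpe2fs -h /dev/sdb5 | grep -Ei 'mount count|check'

      # force one full, clean check on the next reboot so orphan inodes get cleared
      sudo touch /forcefsck

      # optionally relax the periodic checks afterwards (-c = max mounts, -i = interval)
      sudo tune2fs -c 50 -i 12m /dev/sdb5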

    Read the article

  • Setup an Autoreply-Only Account

    - by dabrain
    For some very good reason you might want to set up an 'autoreply'-only account, without storing the incoming mail in a mailbox. If not already done, create an account via the Delegated Admin GUI or the commadmin command-line tool. Example:

      /opt/sun/comms/da/bin/commadmin user create -D admin -d vmdomain.tld -w enigma -F Mike -l mparis -L Paris -W tester -E [email protected] -S mail -H mars.vmdomain.tld

    Set mailDeliveryOption to autoreply mode only, so no email will be stored in the user's mailbox; skip this step if you want incoming emails stored in the mailbox.

      ldapmodify -D "cn=Directory Manager" -w enigma -f /tmp/modfile

      [/tmp/modfile]
      dn: uid=mparis,ou=People,o=vmdomain.tld,o=red
      changetype: modify
      replace: mailDeliveryOption
      mailDeliveryOption: autoreply

    Set mailSieveRuleSource with the autoreply text and a 'do-not-reply' From address. The "Thank you ..." part becomes the subject. The next string in quotes is the body part of the message. The ":hours 0" denotes that we want a reply sent for every message. Finally, the \n is used because of the wanted newlines in the body.

      ldapmodify -D "cn=Directory Manager" -w enigma -f /tmp/addfile

      [/tmp/addfile]
      dn: uid=mparis,ou=People,o=vmdomain.tld,o=red
      changetype: modify
      add: mailSieveRuleSource
      mailSieveRuleSource: require "vacation"; vacation :hours 0 :reply :from "do-not-reply@domain.com" :subject "Thank you for contacting webpost" "Your Mail is being reviewed.\nTo access contact information please visit : http://www.domain.com \nPlease do not reply to this e-mail as it is an automated response on your mail being accessed.\n\nPublic Response Unit.\n"
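    To verify that both attributes ended up on the entry, a quick ldapsearch against the same directory works; this is only a sketch and assumes the same bind DN, password, and suffix as the commands above:

      ldapsearch -D "cn=Directory Manager" -w enigma -b "o=vmdomain.tld,o=red" "(uid=mparis)" mailDeliveryOption mailSieveRuleSource

    Sending a test message to the account and checking that the reply arrives from the do-not-reply address (and that nothing lands in the mailbox) confirms the setup end to end.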

    Read the article

  • Ubuntu + latest samba version, symlinks no longer work on share mounted in windows

    - by Roy Rico
    I just apt-getted (apt-got?) the latest software for my Ubuntu 9.10 Linux box, and I noticed that samba was included in the update. After the install, the symlinks in my home directory no longer work on the share when it is mounted as a drive in Windows. They worked literally seconds before I did the update. All my normal directories work just fine. Viewing the directory listing on the command line, all the files, dirs & links have the exact same permissions, yet this is the error I get:

      Location is not available
      L:\LinkDir is not accessible. Access is denied.

    I looked on the forums, and I saw these options for smb.conf:

      follow symlinks = yes
      wide symlinks = yes
      unix extensions = no

    I put those in, but they had no effect. Has anyone had this problem yet?
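    A sketch of the configuration that is usually needed here, with the caveat that in current Samba releases the option is spelled "wide links" rather than "wide symlinks", and that all three settings belong in the [global] section (recent Samba versions disable wide links whenever unix extensions are on, which is exactly the kind of behaviour an upgrade can silently change):

      # /etc/samba/smb.conf
      [global]
          follow symlinks = yes
          wide links = yes
          unix extensions = no

      # reload Samba afterwards; the init script name varies by release
      sudo service smbd restart 2>/dev/null || sudo /etc/init.d/samba restart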

    Read the article

  • Install a Program from ZIP File on Ubuntu Not Found Using Aptitude

    - by nicorellius
    I have a specific program that I use often on Windows and Mac, but today I need to install it on a Linux machine. I downloaded the ZIP file from the vendor's website, unzipped it to the Desktop, and now I have an SH file. I tried running this file from the command line as root, but permission was denied. How can I install this program on Linux? I know it's possible because I have heard of it being done. I just don't have the experience with Linux I need to get it done. To which directory should I install it? I tried the install command, but it needed a directory to which to install.
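    A vendor-supplied .sh file is normally a self-contained installer script rather than something aptitude knows about, so the usual route is to make it executable and run it. A minimal sketch (installer.sh stands in for whatever the ZIP actually contained):

      cd ~/Desktop
      chmod +x installer.sh      # mark the script executable
      sudo ./installer.sh        # or: sudo sh installer.sh, which works without the +x

    Many of these installers ask for a target directory; /opt/<program-name> is the conventional spot for self-contained third-party software, while packages installed through apt-get manage their own locations.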

    Read the article

  • Using the right folder for the right job. Article link, please?

    - by Droogans
    There are specific folders designed for specific tasks. /var/www holds your web sites, /usr/bin contains files to run your applications...yet I still find myself putting nearly all of my work in ~. Is it possible to overuse my home directory? Will it come back to haunt me? Anyone have a good link to an article of best practices for organizing your files so that they are placed in their "correct" place? Is there even such a thing in Linux? I am referring specifically to user-generated content. I do not compile applications from source, I use apt-get for those tasks. This article has a great introduction to what I'm looking for. Table 3-2, "Subdirectories of the root directory" is the sort of thing I'm looking for, but with more details/examples.

    Read the article

  • Installing Java 1.5 on Ubuntu?

    - by StackedCrooked
    I already have Java 1.6, but I need to test something with 1.5. I have downloaded the .bin file from http://java.sun.com/javase/downloads/index_jdk5.jsp using the Sun Download Manager. Now I want to create a deb file from this bin file:

      $ fakeroot make-jpkg java_ee_sdk-5_01-linux.bin
      Creating temporary directory: /tmp/make-jpkg.Zpm1Y7LbZ0
      Loading plugins: blackdown-j2re.sh blackdown-j2sdk.sh common.sh ibm-j2re.sh ibm-j2sdk.sh j2re.sh j2sdk-doc.sh j2sdk.sh j2se.sh sun-j2re.sh sun-j2sdk-doc.sh sun-j2sdk.sh
      Detected Debian build architecture: i386
      Detected Debian GNU type: i486-linux-gnu
      No matching plugin was found.
      Removing temporary directory: done

    How can I fix the "No matching plugin was found." error?
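    One likely explanation, offered as a guess rather than a confirmed answer: make-jpkg's plugins (sun-j2sdk.sh and friends) only recognise the plain JDK/JRE self-extracting archives, and java_ee_sdk-5_01-linux.bin is the Java EE SDK bundle, which none of the listed plugins match. A sketch of the usual route with the plain JDK 5.0 download (the exact update number in the filename will differ):

      chmod +x jdk-1_5_0_22-linux-i586.bin
      fakeroot make-jpkg jdk-1_5_0_22-linux-i586.bin    # should produce a sun-j2sdk1.5 ... .deb
      sudo dpkg -i sun-j2sdk1.5*.deb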

    Read the article

  • VMware Linux headers not found for Ubuntu 10.10?

    - by Tumbleweed
    I've installed VMware 6.5 on Ubuntu 10.10... when I start VMware Player/Workstation it asks for the Linux kernel headers for some compilation, but I'm not able to find the appropriate package; see the image below.

    Update: after running the following commands

      sudo -s
      cd /lib/modules/$(uname -r)/build/include/linux
      ln -s ../generated/utsrelease.h
      ln -s ../generated/autoconf.h

    the error has changed to this:

      ERROR: modinfo: could not find module vmmon
      ERROR: modinfo: could not find module vmnet
      ERROR: modinfo: could not find module vmblock
      ERROR: modinfo: could not find module vmci
      ERROR: modinfo: could not find module vsock
      Using 2.6.x kernel build system.
      make: Entering directory `/tmp/vmware-root/modules/vmmon-only'
      make -C /lib/modules/2.6.35-22-generic/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. modules
      make[1]: Entering directory `/usr/src/linux-headers-2.6.35-22-generic'
        CC [M]  /tmp/vmware-root/modules/vmmon-only/linux/driver.o
      In file included from /tmp/vmware-root/modules/vmmon-only/linux/driver.c:31:
      /tmp/vmware-root/modules/vmmon-only/./include/compat_wait.h:78: error: conflicting types for ‘poll_initwait’
      include/linux/poll.h:72: note: previous declaration of ‘poll_initwait’ was here
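    A hedged suggestion rather than a confirmed fix: the "could not find module" lines mean the VMware kernel modules (vmmon, vmnet, ...) were never built for the running kernel, and the usual way to rebuild them with the 6.5-era installer is vmware-modconfig, with the matching headers installed first:

      sudo apt-get install build-essential linux-headers-$(uname -r)
      sudo vmware-modconfig --console --install-all    # rebuilds vmmon, vmnet, vmblock, vmci, vsock

    The poll_initwait conflict suggests the bundled vmmon source is simply too old for the 2.6.35 kernel in 10.10, in which case a newer VMware release (or a community patch for the module source) is needed rather than more symlinks in the header tree.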

    Read the article

  • Issue with www to non www redirect

    - by bob
    Hello, I am on Slicehost and I followed the articles that they gave for DNS redirection, and the www to non-www URL redirection does work. However, what if I want www.domain.com to be the default domain? Would I put www.domain.com. as my DNS record name, or would I keep domain.com. as my DNS record and then do something else? Basically, what happens is that if someone goes to the URL www.domain.com/directory/something.html they will be redirected to domain.com and not to domain.com/directory/something.html. I would like the second thing to happen, not just go to domain.com and call it a day. I am running nginx and am confounded on how to solve this issue. I'm not sure whether it's an nginx issue or a DNS issue. Any help would be greatly appreciated!
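    For what it's worth, dropping the path is almost always the web server's doing, not DNS: DNS only maps names to the slice, it never rewrites URLs. A sketch of a redirect that keeps the path and query string, assuming www is the preferred host and domain.com stands in for the real name:

      # catch the bare domain and send everything to www, preserving the request URI
      server {
          listen 80;
          server_name domain.com;
          rewrite ^ http://www.domain.com$request_uri? permanent;
      }

      server {
          listen 80;
          server_name www.domain.com;
          # ... the real site configuration ...
      }

    On the DNS side both names still need records pointing at the slice (an A record for domain.com and an A or CNAME for www); the redirect itself lives entirely in nginx.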

    Read the article

  • Include Binary Files in DEB package

    - by user22611
    I need to build a DEB package from mainly Node.js JavaScript files, but it should include some binary files as well. They are listed inside debian/source/include-binaries; otherwise I get the error message

      dpkg-source: error: unrepresentable changes to source

    The command in question is:

      bzr builddeb -- -us -uc

    After adding the include-binaries file and running bzr builddeb -- -us -uc again, I now get a different error:

      dpkg-source: error: aborting due to unexpected upstream changes, see /tmp/mailadmin_0.0-1.diff.n6m5_6

    I have no idea how to get rid of this. In the next line of output it tells me

      dpkg-source: info: you can integrate the local changes with dpkg-source --commit

    But if I run this command in the build area of my package, it gives me the "unrepresentable changes to source" error message again, even though debian/source/include-binaries is present in the build area as well. I am missing the way out of this... I tried deleting all files that are produced by the build process, still no success. Further details: the target directory is /opt/mailadmin. Since this directory is unusual, I listed it in the file debian/mailadmin.install, which contains one line:

      opt/mailadmin opt/

    The bzr builddeb process uses this file as expected.
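    Not a verified recipe for this exact package, but the combination that usually quiets both messages is to declare the binaries in debian/source/include-binaries (already done) and additionally tell dpkg-source to ignore diffs on them via debian/source/options, or to record the change once as a patch. A sketch, with lib/binding.node as a made-up example path:

      # debian/source/include-binaries  (one path per line, relative to the source root)
      lib/binding.node

      # debian/source/options  (keeps dpkg-source from diffing those files)
      extend-diff-ignore = "^lib/.*\.node$"

      # alternative: record the local changes once, run from the unpacked source tree
      dpkg-source --commit . local-binaries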

    Read the article

  • Mod_rewrite not working on ISPConfig 3 Server

    - by Akahadaka
    Problem: I recently migrated a Drupal site from a shared hosting server to my own VM. Everything appears to work correctly, except clean URLs.

    My VM setup: Ubuntu 10.04, LAMP, ISPConfig 3.

    What I've tried: from reading up on a number of Drupal forums, I've tried the following, in this order:

    1. Checked that mod_rewrite is installed and enabled.
    2. Changed PHP from FastCGI to mod_php (I'd prefer to use FastCGI or suPHP, though, to avoid having tmp/files folders with 777 permissions).
    3. Changed the redirect type to L in ISPConfig (Sites - domain.com - Redirect).
    4. Changed /etc/apache2/sites-enabled/000-default:

         <Directory /var/www/>
             Options Indexes FollowSymLinks MultiViews
             AllowOverride All
             ...
         </Directory>

    I'm not sure about points 3 and 4; I do want all domains to be able to use mod_rewrite out of the box.

    Question: have I done something wrong, or am I missing a step? Ultimately I would like to use FastCGI and have clean URLs working on all ISPConfig 3 domains without having to make any changes to individual domain settings. Any ideas appreciated, I'll try them all.
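    A short checklist sketch for the Apache side of Drupal clean URLs on a stock Ubuntu layout (ISPConfig-managed vhosts may keep the relevant <Directory> block elsewhere, so treat the paths as assumptions):

      sudo a2enmod rewrite              # enable the module if it isn't already
      apache2ctl -M | grep rewrite      # confirm rewrite_module is actually loaded
      sudo service apache2 restart

    The vhost (or the directory serving the Drupal docroot) needs AllowOverride All, otherwise the RewriteRules in Drupal's .htaccess are silently ignored even with mod_rewrite loaded.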

    Read the article

  • Setup symbolic link where users can access it with FTP

    - by Dan Shields
    I have a folder on a server where a client of mine has a bunch of folders into which they upload images and whatnot for a site; I symlink those folders into the root of the website. This way I can give them FTP access to upload whatever they need without them having access to the root level of the website. I have another folder that I can't set up as a symbolic link to their folder, and it has images they need to upload to. I know that if I create the symbolic link the other way around, so that the symlink is in their folder, they can't access it through FTP. There has to be a way to give a user the ability to upload to a directory outside of their home directory without creating two separate FTP accounts. I see that this is FTP-specific and that there are some settings that can be changed, but I haven't seen any clear-cut answers on the best way to handle this.
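    The usual reason the reverse symlink fails is that FTP daemons refuse to follow links that point outside the user's chroot; a bind mount makes the external directory appear inside the FTP home without any symlink involved. A sketch with placeholder paths:

      # make the web image directory visible inside the client's FTP home
      sudo mount --bind /var/www/site/images /home/client/images

      # to make it persistent across reboots, add the equivalent fstab line:
      # /var/www/site/images  /home/client/images  none  bind  0  0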

    Read the article

  • music for an arcade game?

    - by user717572
    I'm thinking about music for my brick breaker game, but I don't know how to choose any. If I made a loop from just a few seconds of audio, I think it would get annoying very quickly. I also found some longer tracks (about 2 minutes), but when one of those is over it's going to be repeated anyway, just like when you select a new level and have to listen to the same beginning of the song again. I can't put an hour of music in my application, so what would you recommend I do for the music?

    Read the article

  • Issues with LVM partition size in Server 13.04

    - by Michael
    I am new to Ubuntu and a little confused about how hard drive partitions and LVM work. I remember setting up Ubuntu Server 13.04 and telling it to use 1TB of a 3TB server. Well, I have maxed that out with blu-ray rips and want the rest of the drive for space. On log-in it says:

      System load:  2.24                 Processes:           179
      Usage of /:   88.7% of 912.89GB    Users logged in:     0
      Memory usage: 6%                   IP address for p5p1: 192.168.0.100
      Swap usage:   0%

      => / is using 88.7% of 912.89GB

    lvdisplay outputs:

      --- Logical volume ---
      LV Path                /dev/DeathStar-vg/root
      LV Name                root
      VG Name                DeathStar-vg
      LV Write Access        read/write
      LV Creation host, time DeathStar, 2013-05-18 22:21:11 -0400
      LV Status              available
      # open                 1
      LV Size                2.70 TiB
      Current LE             707789
      Segments               2
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           252:0

      --- Logical volume ---
      LV Path                /dev/DeathStar-vg/swap_1
      LV Name                swap_1
      VG Name                DeathStar-vg
      LV Write Access        read/write
      LV Creation host, time DeathStar, 2013-05-18 22:21:11 -0400
      LV Status              available
      # open                 2
      LV Size                3.75 GiB
      Current LE             959
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           252:1

    vgdisplay outputs:

      VG Name               DeathStar-vg
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  4
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                2
      Open LV               2
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               2.73 TiB
      PE Size               4.00 MiB
      Total PE              715335
      Alloc PE / Size       708748 / 2.70 TiB
      Free  PE / Size       6587 / 25.73 GiB

    df outputs:

      Filesystem                      1K-blocks      Used  Available  Use%  Mounted on
      /dev/mapper/DeathStar--vg-root  957238932  848972636   59634696   94%  /
      none                                    4          0          4    0%  /sys/fs/cgroup
      udev                              1864716          4    1864712    1%  /dev
      tmpfs                              374968       1060     373908    1%  /run
      none                                 5120          4       5116    1%  /run/lock
      none                              1874824        148    1874676    1%  /run/shm
      none                               102400         24     102376    1%  /run/user
      /dev/sda2                          234153      56477     165184   26%  /boot

    And fdisk /dev/sda -l outputs:

      Disk /dev/sda: 3000.6 GB, 3000592982016 bytes
      255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1               1  4294967295  2147483647+  ee  GPT

      Partition 1 does not start on physical sector boundary.

    I just don't know what to make of all this and am not sure how I can make it use all 2.73TB. Thanks in advance for any help.

    EDIT: Yes, I did make changes to the LVM config, but it didn't do anything. As requested, output of parted -l /dev/sda:

      Model: ATA WDC WD30EFRX-68A (scsi)
      Disk /dev/sda: 3001GB
      Sector size (logical/physical): 512B/4096B
      Partition Table: gpt

      Number  Start   End     Size    File system  Name  Flags
       1      1049kB  2097kB  1049kB                     bios_grub
       2      2097kB  258MB   256MB   ext2
       3      258MB   3001GB  3000GB                     lvm

      Model: ATA WDC WD30EFRX-68A (scsi)
      Disk /dev/sdb: 3001GB
      Sector size (logical/physical): 512B/4096B
      Partition Table: msdos

      Number  Start  End  Size  Type  File system  Flags

      Model: Linux device-mapper (linear) (dm)
      Disk /dev/mapper/DeathStar--vg-swap_1: 4022MB
      Sector size (logical/physical): 512B/4096B
      Partition Table: loop

      Number  Start  End     Size    File system     Flags
       1      0.00B  4022MB  4022MB  linux-swap(v1)

      Model: Linux device-mapper (linear) (dm)
      Disk /dev/mapper/DeathStar--vg-root: 2969GB
      Sector size (logical/physical): 512B/4096B
      Partition Table: loop

      Number  Start  End     Size    File system  Flags
       1      0.00B  2969GB  2969GB  ext4
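    One reading of the numbers above, offered as a sketch rather than a confirmed diagnosis: lvdisplay already shows the root LV at 2.70 TiB while df only sees about 913 GB, which usually means the logical volume was grown at some point but the ext4 filesystem inside it never was. Growing ext4 online is supported, though having a backup first is sensible:

      # hand any remaining free extents in the VG to the root LV, then grow the filesystem to fill it
      sudo lvextend -l +100%FREE /dev/DeathStar-vg/root
      sudo resize2fs /dev/mapper/DeathStar--vg-root
      df -h /        # should now report roughly 2.7 TiB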

    Read the article

  • ATI Radeon HD 6870 Driver fails to install: default_policy.sh does not support version

    - by Rogue Coder
    I'm running Ubuntu 11.04 Beta, with everything updated completely. I'm using Ubuntu Classic, because Unity fails to run, supposedly because of my video card. The drivers for the Radeon HD 6870 series are apparently lacking, but I found a post stating the newest version has full support for Ubuntu Natty Narwhal. That post is slightly old, so I grabbed 11.3 for Ubuntu x86 off the ATI website. When I run the installation program, I receive the following error:

      > ./ati-driver-installer-11-3-x86.x86_64.run
      Created directory fglrx-install.uREFoO
      Verifying archive integrity... All good.
      Uncompressing ATI Catalyst(TM) Proprietary Driver-8.831.2.......
      =====================================================================
       ATI Technologies Catalyst(TM) Proprietary Driver Installer/Packager
      =====================================================================
      Error: ./default_policy.sh does not support version
      default:v2:i686:lib::none:2.6.38-8-generic-pae:; make sure that the
      version is being correctly set by --iscurrentdistro
      =====================================================================
       ATI Technologies Catalyst(TM) Proprietary Driver Installer/Packager
      =====================================================================
      Error: ./default_policy.sh does not support version
      default:v2:i686:lib::none:2.6.38-8-generic-pae:; make sure that the
      version is being correctly set by --iscurrentdistro
      Removing temporary directory: fglrx-install.uREFoO
      >

    I would love to get the latest ATI drivers working so that I can try out Unity!
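    A hedged workaround that often gets past the distro-detection failure: instead of letting the installer guess the release, ask it to build distribution packages explicitly and install those (assuming the 11.3 installer's package list includes a natty target; if it doesn't, a newer Catalyst release or the ubuntu-x-swat PPA is the usual fallback):

      sh ./ati-driver-installer-11-3-x86.x86_64.run --listpkg                 # show the distro targets it knows
      sh ./ati-driver-installer-11-3-x86.x86_64.run --buildpkg Ubuntu/natty   # build .debs for natty
      sudo dpkg -i fglrx*.deb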

    Read the article

  • Annoying Search Behavior - Search Companion

    - by David Stein
    I'm running Windows XP Professional, and ever since the last service pack I've had a searching problem. When I want to search a network drive, I get the following message:

      This folder is not indexed. To search this directory please use Search Companion or add this directory to your index via options.

    Basically, I have two questions. Is there some way that I can use the indexed search where appropriate and then have it switch over to the Search Companion automatically? Second, how does a programmer look at this code and think this is a good idea? I realize that this question is rhetorical. However, I must enter my search string into one search, receive the error, and then click "Search Companion" to bring up the new search window. This window doesn't even take the defaults from the previous one, so I have to specify the search string and drive again.

    Read the article

  • How to make an ISO copy of Linux-filesystem and user files of VPS Debian based?

    - by moogeek
    Hello! I have a Debian-based VPS at some hosting provider. I want to migrate away from it, and I need to make a full copy of the whole Linux filesystem (and installed packages) plus the home directory with the website files, and then pack/convert it into an ISO image so that I can use it on cloud hosts like Amazon. The problem is that I only have SSH root access; hosting support can't do that for me. Another part of the question: is it possible to enlarge the Linux filesystem without re-installing it, by using the free space of the home directory? Is that possible to do? I guess it is possible with rsync or something like that. Will my MySQL databases be copied together with all the other data? Thanks in advance!
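    With only SSH root access, the realistic approach is a file-level copy pulled over the network rather than a true block-level image; a sketch, assuming the VPS answers as root@vps and that pseudo-filesystems are excluded (the MySQL data should additionally be captured as a dump, since copying live database files can yield an inconsistent copy; PASSWORD is a placeholder):

      # pull the whole filesystem tree into ./vps-copy, skipping virtual filesystems
      rsync -aAXHv --numeric-ids \
          --exclude={"/dev/*","/proc/*","/sys/*","/run/*","/tmp/*"} \
          root@vps:/ ./vps-copy/

      # dump the databases separately for a consistent copy
      ssh root@vps "mysqldump -u root -p'PASSWORD' --all-databases" > all-databases.sql

    Turning that tree into a bootable ISO or cloud image is a separate step (the copy needs a kernel, bootloader, and an adjusted fstab for the new environment), so for a cloud target it is often easier to install a fresh base system there and rsync the data and configuration onto it.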

    Read the article

  • Securely automount encrypted drive at user login

    - by Tom Brossman
    An encrypted /home directory gets mounted automatically for me when I log in. I have a second internal hard drive that I've formatted and encrypted with Disk Utility. I want it to be automatically mounted when I login, just like my encrypted /home directory is. How do I do this? There are several very similar questions here, but the answers don't apply to my situation. It might be best to close/merge my question here and edit the second one below, but I think it may have been abandoned (and therefore never to be marked as accepted). This solution isn't a secure method, it circumvents the encryption. This one requires editing fstab, which necessitates entering an additional password at boot. It's not automatic like mounting /home. This question is very similar, but does not apply to an encrypted drive. The solution won't work for my needs. Here is one but it's for NTFS drives, mine is ext4. I can re-format and re-encrypt the second drive if a solution requires this. I've got all the data backed up elsewhere.

    Read the article

  • Bash script won't stay open in background after running through while

    - by jfreak53
    I can't get the following bash script to stay open after the first message is received from nc:

      #!/bin/bash
      port=3333
      nc -l $port | while read msg; do
          notify-send Alert "$msg"
      done

    After the first message it exits. I want it to stay open and continue monitoring for new messages from nc. I know that if I launch nc -l port without the while loop it stays open and I can chat away between the two connections, even disconnect from the connected host. I am sending the message using:

      echo 'done' | nc IP port
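    What usually happens here is that nc exits as soon as the sending side disconnects, and the pipeline (and the while loop reading from it) ends with it. A small sketch that keeps listening by restarting the listener, assuming the same notify-send setup; with the OpenBSD variant of netcat, nc -k -l $port can replace the outer loop:

      #!/bin/bash
      port=3333
      while true; do
          # relaunch the listener every time a sender disconnects
          nc -l "$port" | while read -r msg; do
              notify-send Alert "$msg"
          done
      done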

    Read the article

  • Cannot login in account with encrypted home after update from 11.04 to 11.10

    - by martin
    After upgrading from Ubuntu 11.04 to 11.10 I cannot access my encrypted home partition anymore. I can log in, but all the data stays encrypted. ecryptfs-mount-private gives:

      ERROR: Encrypted private directory is not setup properly

    Any idea how to fix this?

    Update: I have several kernels installed (after the upgrade my menu.lst looks like this: http://paste.org/pastebin/view/35591); the problem is the same for all kernels. Booting from 2.6.32-27-generic and running adduser --encrypt-home tes gives:

      Adding user `tes' ...
      Adding new group `tes' (1008) ...
      Adding new user `tes' (1007) with group `tes' ...
      Creating home directory `/home/tes' ...
      Setting up encryption ...

      ************************************************************************
      YOU SHOULD RECORD YOUR MOUNT PASSPHRASE AND STORE IT IN A SAFE LOCATION.
        ecryptfs-unwrap-passphrase ~/.ecryptfs/wrapped-passphrase
      THIS WILL BE REQUIRED IF YOU NEED TO RECOVER YOUR DATA AT A LATER TIME.
      ************************************************************************

      Error: Your kernel does not support filename encryption
      ERROR: Could not add passphrase to the current keyring
      adduser: `/usr/bin/ecryptfs-setup-private -b -u tes' returned error code 1. Exiting.
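    A hedged observation from the output above: "Your kernel does not support filename encryption" is what ecryptfs-setup-private reports when the running kernel's eCryptfs doesn't advertise that feature, and the 2.6.32 kernel in that menu.lst is much older than anything 11.04 or 11.10 ship, so confirming which kernel is actually booted is a cheap first check. A short sketch of that check plus the stock recovery helper from ecryptfs-utils (the path assumes the usual /home/.ecryptfs layout):

      uname -r                         # confirm the 11.10 (3.0.x) kernel is the one actually running
      sudo ecryptfs-recover-private    # scans for encrypted private directories and mounts them read-only
      # or point it at the data explicitly:
      sudo ecryptfs-recover-private /home/.ecryptfs/martin/.Private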

    Read the article

  • File doesn't exist in Linux although it's located in Terminal

    - by Mazen Ayman
    I'm a bit new to the Unix/Linux environment, but I have a small problem. I'm using "locate" to find the path of a file I need; it gives me the path, but the file doesn't exist at that path, like this:

      locate test1.txt
      /home/user/test files/text1.txt
      /home/user/test1.txt~

    The "test files" directory is where I was keeping the file. I copied it to the home directory once, but then I deleted it; I have no idea why it keeps telling me there is still a tmp file for it. It's worth mentioning that I used the command locate test1.txt~ | xargs -n1 rm to remove that tmp file, but maybe that's what caused the problem. I tried showing hidden files and checking for temp files, and didn't find it either. Any clue what happened?
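    Most likely nothing is wrong with the filesystem: locate answers from a database that updatedb refreshes (typically once a day from cron), so files deleted since the last refresh keep showing up in its output. A quick sketch to confirm:

      sudo updatedb           # rebuild the locate database now
      locate test1.txt        # entries for files deleted earlier should disappear
      ls -la ~ | grep test1   # and the directory listing shows what actually exists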

    Read the article

  • /dev/fuse "permission denied" even when member of fuse group

    - by steeef
    I have a backup script scheduled on a Debian 5.0 x86 server, via sshfs. However, when I attempt to mount the remote directory, I receive:

      failed to open /dev/fuse: Permission denied

    ls -l /dev/fuse returns:

      crwxrwxr-x 1 root fuse 10, 229 2010-11-12 09:08 /dev/fuse

    id backup returns:

      uid=501(backup) gid=501(backup) groups=501(backup),46(plugdev),108(fuse)

    The only way I can get the directory to mount is if I run chmod a+w /dev/fuse, but this is reset at some point during the day. It's a kludge though, and I'd rather figure out why the group permissions aren't working.
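    A possible angle, not a confirmed diagnosis: group membership is picked up when a session starts, so a scheduler or daemon that was started before the backup user joined the fuse group can still run the job without it, even though an interactive "id backup" looks correct. Logging the identity from inside the scheduled job makes that visible; the paths below are placeholders:

      #!/bin/sh
      # temporary debugging wrapper around the scheduled backup
      {
          date
          id                   # the uid/gid/groups the job really runs with
          ls -l /dev/fuse      # device permissions at run time
      } >> /tmp/fuse-debug.log 2>&1

      sshfs remote:/backup /mnt/backup   # the original mount command goes here

    If the logged id is missing the fuse group, restarting the scheduler (or re-creating the session) should fix it; as a stopgap, the mount can be run through sg fuse -c '...' to pick the group up explicitly.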

    Read the article

  • Why does SharePoint claim there is not enough disk space for backup when there is plenty available?

    - by Mr Shoubs
    I'm trying to run the following command:

      Backup-SPFarm -Directory E:\Backups -BackupMethod full -Verbose

    However it errors, saying there isn't enough disk space... the backup will be about 1.8 GB in size, and I have 27.52 GB free, so why does it think I need 30 GB?

      VERBOSE: Leaving BeginProcessing Method of Backup-SPFarm.
      VERBOSE: Performing operation "Backup-SPFarm" on Target "SHAREPOINTSERV".
      Backup-SPFarm : There is not enough disk space. Free additional space on your hard disk and then try again. Approximate amount of space needed: 30.12 GB. Amount of space free on disk: 27.52 GB.
      At E:\Backups\Script\BackupSharePointFarm.ps1:3 char:14
      + Backup-SPFarm <<<< -Directory E:\Backups -BackupMethod full -Verbose
          + CategoryInfo          : InvalidData: (Microsoft.Share...mdletBackupFarm:SPCmdletBackupFarm) [Backup-SPFarm], SPException
          + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletBackupFarm
      VERBOSE: Leaving ProcessRecord Method of Backup-SPFarm.
      VERBOSE: Leaving EndProcessing Method of Backup-SPFarm.

    Read the article

  • Setting up NFS server on Gentoo

    - by StackedCrooked
    I'm trying to set up an NFS server on a Gentoo VM. I've installed nfs-utils-1.2.2 and added the following line to the /etc/exports file:

      /root/svn 10.0.0.0/255.0.0.0(rw,sync,no_subtree_check)

    However, when I try to start the nfs service I get the following errors:

      gentoo-amd64-francis orig # /etc/init.d/nfs start
      FATAL: Could not load /lib/modules/2.6.24-9-pve/modules.dep: No such file or directory
       * Exporting NFS directories ...          [ ok ]
       * Starting NFS mountd ...                [ !! ]
       * Starting NFS daemon ...                [ !! ]
       * Starting NFS smnotify ...              [ ok ]

    It complains about not finding the /lib/modules/2.6.24-9-pve/modules.dep file, but the /lib/modules directory doesn't even exist on this machine. Can anyone help me get it to work?
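    A hedged reading of the error: the "pve" in 2.6.24-9-pve points at a Proxmox/OpenVZ host kernel, which usually means this "VM" is a container sharing the host's kernel, so there are no module files inside the guest and the in-kernel NFS server (which the Gentoo init script tries to load) can't be started from there. Two quick checks, and the usual escape hatches:

      uname -r           # 2.6.24-9-pve: a host-provided kernel, not one installed in the guest
      ls /lib/modules/   # missing or empty inside a container, hence the modules.dep failure

    If it is indeed a container, the options are to have NFS server support enabled on the host or to run a userspace NFS server such as unfs3 inside the guest.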

    Read the article
