Search Results

Search found 13164 results on 527 pages for 'missing'.


  • mail server administration

    - by kibs
    My Postfix does not appear to show the smtp daemon listening, and I am getting the bounce message below. The message WAS NOT relayed:
      Reporting-MTA: dns; mail.mak.ac.ug
      Received-From-MTA: smtp; mail.mak.ac.ug ([127.0.0.1])
      Arrival-Date: Wed, 19 May 2010 12:45:20 +0300 (EAT)
      Original-Recipient: rfc822;[email protected]
      Final-Recipient: rfc822;[email protected]
      Action: failed
      Status: 5.4.0
      Remote-MTA: dns; 127.0.0.1
      Diagnostic-Code: smtp; 554 5.4.0 Error: too many hops
      Last-Attempt-Date: Wed, 19 May 2010 12:45:20 +0300 (EAT)
      Final-Log-ID: 23434-08/A38QHg8z+0r7
    The mail is undeliverable and the MTA reports it as blocked. Output from the lsof -i tcp:25 command:
      master 3014 root 12u IPv4 9429 TCP *:smtp (LISTEN)
    (Postfix as a user is missing.)
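
    The Diagnostic-Code "554 5.4.0 Error: too many hops" usually means the message is being relayed back to the same Postfix instance in a loop, often because the local domain is not listed in mydestination or because a content filter re-injects mail on the same port. A minimal check, assuming a fairly standard Postfix setup (the hostname below is taken from the bounce above):
      postconf mydestination                             # mail.mak.ac.ug / mak.ac.ug should appear here if this host is the final destination
      postconf -n | grep -E 'content_filter|relayhost'   # a filter such as amavisd that hands mail back on the same port can also loop
      grep 'too many hops' /var/log/mail.log | tail      # confirm which recipient/transport loops; log path varies by distribution (e.g. /var/log/maillog)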

    Read the article

  • Check if all files in a directory exists elsewhere

    - by aioobe
    I'm about to remove an old backup directory, but before doing so I'd like to make sure that all of its files exist in a newer directory. Is there a tool for this, or am I best off doing it "manually" using find, md5sum, sorting, comparing, etc.? Clarification: if I have the following directory listings
      /path/to/old_backup/dir1/fileA
      /path/to/old_backup/dir1/fileB
      /path/to/old_backup/dir2/fileC
    and
      /path/to/new_backup/dir1/fileA
      /path/to/new_backup/dir2/fileB
      /path/to/new_backup/dir2/fileD
    then fileA and fileB exist in new_backup (fileA in its original directory, and fileB has moved from dir1 to dir2). fileC, on the other hand, is missing from new_backup, and fileD has been created. In this situation I'd like the output to be something like: fileC exists in old_backup, but not in new_backup.
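
    A minimal sketch of the manual approach mentioned above, matching files by content checksum so that moves and renames do not matter (the paths are the ones from the question):
      # collect the checksum of every file in the new backup, keyed by content only
      (cd /path/to/new_backup && find . -type f -exec md5sum {} +) | awk '{print $1}' | sort -u > /tmp/new.sums
      # report every old_backup file whose checksum has no match anywhere in new_backup
      (cd /path/to/old_backup && find . -type f -exec md5sum {} +) | while read -r sum file; do
          grep -qx "$sum" /tmp/new.sums || echo "$file exists in old_backup, but not in new_backup"
      done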

    Read the article

  • MinGW MSYS ssh error: Could not create directory '/home/<username>/.ssh'

    - by SoldOut
    I have just installed a fresh MinGW installation on Windows 7 64-bit using the Graphical User Interface Installer (the recommended approach), following the instructions given here and keeping the default options (i.e. installation in C:\MinGW), hopefully without missing any steps or messing things up in any way. However, when running the ssh command, I get the following error:
      C:\Users\Diablo> ssh username@host
      Could not create directory '/home/username/.ssh'.
      The authenticity of host 'username@host (host ip here)' can't be established.
      RSA key fingerprint is (fingerprint here).
      Are you sure you want to continue connecting (yes/no)? yes
      Failed to add the host to the list of known hosts (/home/username/.ssh/known_hosts).
    So I basically have to confirm the connection every time. Why does this happen and how do I fix it?
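
    One common cause is that the MSYS shell resolves HOME to /home/<username>, a directory that does not exist under a default C:\MinGW\msys\1.0 install, so ssh has nowhere to write known_hosts. A minimal workaround sketch, assuming you want your Windows profile to act as the home directory (the paths are assumptions about a default setup):
      # run inside the MSYS shell
      export HOME="/c/Users/$USERNAME"     # make it permanent via ~/.profile or a Windows HOME environment variable
      mkdir -p "$HOME/.ssh"
      ssh username@host                    # the host key should now be saved in /c/Users/<username>/.ssh/known_hosts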

    Read the article

  • Possible to use LVM partitions inside a vmbuilder created KVM virtual machine?

    - by Tauren
    I have an Ubuntu 9.10 host system with LVM partitions running KVM. I've been creating VMs with vmbuilder, using LVM partitions for each VM instead of image files. When I configure a VM using vmbuilder --part, the partitions in the file I'm using are created as regular partitions (sda1, sda2, etc.). What I'd like to do is use LVM inside the VM in case I need to resize the partitions at some point, but I don't see any options for doing that with the vmbuilder tool. It seems like this might be a common request, to avoid having to use kpartx, etc. Is there something I'm missing, or is this just not possible with vmbuilder?

    Read the article

  • Export-Mailbox - "an unknown error has occurred"

    - by grojo
    I am trying to move messages from a rather large mailbox to an archive mailbox, but I run into errors all the time. The command I am executing is:
      Export-Mailbox -Identity MAILBOX_FROM -TargetMailbox ARCHIVE -TargetFolder ARCHIVE_FOLDER -StartDate 2009-02-01 -EndDate 2009-02-28 -DeleteContent -Confirm:$false
    I can copy/move some messages, but run into frequent "an unknown error has occurred" failures (status code -1056749164). I run the console as an administrative user, and all permissions are set correctly as far as I can tell. I've restricted the start and end dates in case the number of messages moved/deleted was creating problems. Is there anything I am missing in my setup? Corrupted messages? Over-limit message sizes?

    Read the article

  • System Center Configuration Manager 2007 - Debugging Client Installs

    - by Dayton Brown
    Hi All: I'm having an issue installing the CCMSetup client on desktops. CCMSetup makes it to the PC, the files are there, it gets added to the services for automatic start, and it starts, but it quits almost instantly. Logs on the desktop show an entry like this:
      <![LOG[Failed to successfully complete HTTP request. (StatusCode at WinHttpQueryHeaders: 404)]LOG]!><time="14:28:51.183+240" date="06-11-2009" component="ccmsetup" context="" type="3" thread="2388" file="ccmsetup.cpp:5808">
    What am I missing? EDIT: The firewall is off on both client and server.

    Read the article

  • How can I update fontconfig to a newer version in Red Hat 5.3?

    - by user16654
    I want to update fontconfig to a newer version, but it seems that the OS is still finding the old fontconfig, and I need the newer version to build Qt. How do I make Red Hat 5.3 see the newer version? I do not know if this helps, but when I searched for fontconfig I found some files in a folder called cache. When I run yum update it tells me everything is up to date, but that version is too old and is missing FcFreeTypeQueryFace. Just send me a comment if this is the wrong site and I'll change it.
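
    A rough sketch of one way to do this without touching the system copy: build the newer fontconfig from source into its own prefix and point the Qt build at it. The version number and prefix below are examples, not a recommendation for a specific release:
      tar xf fontconfig-2.8.0.tar.gz && cd fontconfig-2.8.0
      ./configure --prefix=/opt/fontconfig && make && sudo make install
      # make the Qt configure/build pick up the new copy instead of the system one
      export PKG_CONFIG_PATH=/opt/fontconfig/lib/pkgconfig:$PKG_CONFIG_PATH
      export LD_LIBRARY_PATH=/opt/fontconfig/lib:$LD_LIBRARY_PATH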

    Read the article

  • How to disable Windows 8 lock screen?

    - by Filip
    So I took a plunge and installed Windows 8 Consumer Preview on my main home PC. So far so good, but there is one annoyance - the system "locks" the computer after a period of inactivity causing me to re-enter my password. I really would like to avoid this, but have no idea how. I already tried the power settings (no pass on wake up) and the screen saver settings with no luck. Is this some sort of bug, or am I missing something? P.S. In this case I favor convenience over security.

    Read the article

  • Best way to grow Linux software RAID 1 to RAID 10

    - by Hans Malherbe
    mdadm does not seem to support growing an array from level 1 to level 10. I have two disks in RAID 1. I want to add two new disks and convert the array to a four-disk RAID 10 array. My current strategy:
      1. Make a good backup.
      2. Create a degraded 4-disk RAID 10 array with two missing disks.
      3. rsync the RAID 1 array to the RAID 10 array.
      4. Fail and remove one disk from the RAID 1 array.
      5. Add the available disk to the RAID 10 array and wait for the resync to complete.
      6. Destroy the RAID 1 array and add the last disk to the RAID 10 array.
    The problem is the lack of redundancy at step 5. Is there a better way?
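
    A minimal sketch of the strategy above expressed as mdadm commands; the device names (/dev/md0 for the existing RAID 1, /dev/md1 for the new array, /dev/sd[a-d]1) and the filesystem are assumptions, and "missing" is the mdadm keyword for an absent member:
      # step 2: degraded 4-disk RAID 10 built from the two new disks
      mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdc1 /dev/sdd1 missing missing
      mkfs.ext4 /dev/md1 && mount /dev/md1 /mnt/new
      # step 3: copy the data across
      rsync -aHAX /old_mountpoint/ /mnt/new/
      # steps 4-5: move one disk from the RAID 1 and let the RAID 10 resync
      mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
      mdadm /dev/md1 --add /dev/sdb1
      watch cat /proc/mdstat               # wait here for the resync to finish
      # step 6: retire the RAID 1 and add its last disk
      mdadm --stop /dev/md0
      mdadm --zero-superblock /dev/sda1
      mdadm /dev/md1 --add /dev/sda1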

    Read the article

  • Sql Server differential backup : Simple vs Full recovery model

    - by MaxiWheat
    I need to better understand the backup process under SQL Server 2008. Since drive space is something of a concern for us and we want a better disaster recovery solution, I decided that we will implement differential backups throughout the day (every hour). Am I right to think that if I keep the recovery model of my databases set to Simple, a differential backup will be almost the same size as a full backup (too big to make one every hour)? I already tried switching to Full recovery and it seemed to fix the issue (the differential backups were much smaller). I have heard that the recovery model must be set to Full to use log backups (for to-the-minute recovery, etc., which we don't need), but never in relation to differential backups. So, does the recovery model really have an impact on differential backups, or am I missing something? Thank you.

    Read the article

  • How to troubleshoot a service failure?

    - by AngryHacker
    I get a GPF dialog box out of the blue fairly often (roughly two hours after I turn on the computer). It basically says that svchost.exe had a failure (see the corresponding Event Log entry below).
      Event Type: Error
      Event Source: Application Error
      Event Category: (100)
      Event ID: 1000
      Date: 5/18/2010
      Time: 7:41:16 PM
      User: N/A
      Computer: DKHA-IPSA
      Description: Faulting application svchost.exe, version 5.1.2600.5512, faulting module ole32.dll, version 5.1.2600.5512, fault address 0x0004eaa9.
    Shortly after this error pops up, the computer pretty much grinds to a halt (e.g. some UI elements on the desktop simply stop responding) and I have to do a hard reboot. How do I troubleshoot this type of thing? P.S. The PC has all the latest patches and nothing is missing in Device Manager.

    Read the article

  • can't install sun jdk on ubuntu karmic 9.10

    - by pstanton
    I've just started using a new EC2 install of Ubuntu Karmic 9.10, and my first task was to install the JDK. When running apt-get install sun-java6-jdk I get the following:
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Package sun-java6-jdk is not available, but is referred to by another package.
      This may mean that the package is missing, has been obsoleted, or is only available from another source
      E: Package sun-java6-jdk has no installation candidate
    I've tried apt-get update, as well as adding "deb http://us.archive.ubuntu.com/ubuntu/ karmic universe" to /etc/apt/sources.list. Any ideas, fellas?
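
    If I recall correctly, the Sun Java packages were moved out of multiverse and into the Canonical partner repository around the Karmic release, so enabling universe alone will not help. A sketch of what enabling that repository would look like (treat the repository line as an assumption to verify):
      echo "deb http://archive.canonical.com/ubuntu karmic partner" | sudo tee -a /etc/apt/sources.list
      sudo apt-get update
      sudo apt-get install sun-java6-jdk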

    Read the article

  • BitNami LAMP stack on ubuntu

    - by Desmond Liang
    I just installed the BitNami LAMP stack on Ubuntu. When I visit localhost/127.0.0.1, Apache returns "403 Forbidden. You don't have permission to access / on this server." I tried repointing Apache's home directory to another folder (same hard drive, same partition) that is set to 777 recursively, and still got a 403. I then changed the ownership of the directory from root/root to my username and the daemon group. Same error. Am I missing something here?
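
    A few quick checks that often narrow down a stubborn 403, sketched for a typical BitNami layout (the /opt/bitnami paths are assumptions; adjust them to wherever the stack was installed):
      tail -n 20 /opt/bitnami/apache2/logs/error_log                  # the error log usually names the exact path check that failed
      namei -m /opt/bitnami/apache2/htdocs                            # every directory on the way down needs the execute (x) bit for the daemon user
      grep -n "Deny from all" /opt/bitnami/apache2/conf/httpd.conf    # a restrictive <Directory> block overrides filesystem permissions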

    Read the article

  • Running a command or script in terminal from anywhere by adding it to the PATH, what am I doing wrong?

    - by Joe
    On OS X/Linux I want to be able to run a command/script in the terminal from anywhere which links to a program, i.e. I want to be able to run: alloy, which runs: /usr/local/share/npm/lib/node_modules/alloy/bin/alloy. I'm guessing adding to .bashrc is the best way? I've tried running:
      export PATH="$PATH:/usr/local/share/npm/lib/node_modules/alloy/bin"
    and also:
      export PATH="$PATH:/usr/local/share/npm/lib/node_modules/alloy/bin/alloy"
    Then I started a new terminal window, but the alloy command doesn't work. Am I missing something?
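
    A minimal sketch of what usually needs to be true for this to work: the PATH entry must be the directory (the first export above, not the second), the export must live in a file the new shell actually reads (on OS X, Terminal opens login shells that read ~/.bash_profile rather than ~/.bashrc), and the target must be executable:
      echo 'export PATH="$PATH:/usr/local/share/npm/lib/node_modules/alloy/bin"' >> ~/.bash_profile   # use ~/.bashrc on most Linux setups
      source ~/.bash_profile
      ls -l /usr/local/share/npm/lib/node_modules/alloy/bin/alloy     # confirm the file exists and has the execute bit
      chmod +x /usr/local/share/npm/lib/node_modules/alloy/bin/alloy  # add it if it does not
      which alloy                                                     # should now resolve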

    Read the article

  • How to stop a Linux LVM volume group?

    - by thkala
    I am currently dealing with a multiple-disk failure on a Linux LVM volume group that is backed by a RAID-5 md device. One disk has been taken out completely and another is showing a limited number of corrupt sectors, due to what seems to have been a misbehaving power supply. The problem is that once an I/O error hits, md takes the array down, since it does not have enough devices left to be operational. Were md the only thing involved, I could mdadm --stop the array and then recreate it to get all devices active again. Unfortunately, the array is a PV in an LVM volume group and I cannot seem to get the kernel to release it. vgchange -an does not seem to do anything, apart from spewing out a couple of I/O errors. I am obviously missing something, but how in the name of -insert-favorite-deity- do I get LVM to release the underlying PV without rebooting the server?
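
    A rough sketch of forcing the stack down from the top without a reboot; the volume group, logical volume, and mount point names below are placeholders:
      lsof +D /mountpoint 2>/dev/null   # is anything still holding the filesystem open?
      umount -l /mountpoint             # lazy-unmount if a normal umount refuses
      vgchange -an myvg                 # retry deactivation once nothing is mounted
      dmsetup ls                        # any leftover device-mapper entries for the VG's logical volumes?
      dmsetup remove myvg-mylv          # remove them one by one if vgchange still cannot
      mdadm --stop /dev/md0             # with the LVs gone, the PV should finally be released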

    Read the article

  • Solaris to Linux conversion: Use VxFS or GFS?

    - by w00t
    We're a Solaris shop looking at Red Hat Enterprise Linux, and one of the things we're wondering is whether we should keep Veritas Volume Manager + FileSystem or go with LVM+ext3 or Red Hat's preferred cluster filesystem solution, GFS. One of the things we like about Veritas is that it can use Veritas Volume Replicator to keep a remote copy of important filesystems. This functionality seems to be missing from Red Hat; DRBD doesn't seem to be packaged in RHEL. So my questions are:
      - Does anybody use VxFS/VxVM/VVR on Linux? Thoughts, experiences? How does it compare with LVM+ext3?
      - Is anybody using GFS? Thoughts, experiences?
      - Do you do remote replication for disaster recovery, and if so, how? Is there a standard Red Hat way?

    Read the article

  • eclipse adding java ant based project: Specified buildfile does not contain a javac task

    - by ufk
    Hello. I have a project that I wrote for Apache Tomcat. I've started working with Eclipse and want to import the project into the Eclipse IDE (I am using Eclipse 3.6.1). When I create a new project using File > New > Other > Java > Java Project from Existing Ant Buildfile and provide the build file location, I get the following error:
      Specified buildfile does not contain a javac task
    I have a Red5 project where I used the same method and it worked. What am I missing? Do I need to add something to the Ant build file to make it work? What exactly? Where can I find more information regarding this specific subject? Thanks!

    Read the article

  • microsoft ergo keyboard 4000 zoom feature

    - by d3020
    This may be an odd question, I apologize. I just got the Microsoft Ergonomic Keyboard 4000 and was curious about how the zoom feature is supposed to work. I'm using Windows 7, and in Word, IE, or when viewing an image, the zoom doesn't seem to do anything. Device Manager says that the drivers are up to date. Is there a special key combination that is needed to make it work? Not sure what I'm missing with this. Thanks.

    Read the article

  • IIS 6.0 FTP Folder Permissions

    - by Beuy
    I have an IIS FTP website set up like this:
      \ftp\users\domain\public\public
    Software that runs on clients' computers logs into the FTP server by specifying domain\public and moving to public; it then uploads or downloads files/folders in that area. I want to restrict permissions on \ftp\users\domain\public so that nothing/nobody can write files or folders there, only to \ftp\users\domain\public\public. I set up the NTFS permissions on the folder so that domain\users, public and server\users do not have the Modify right, yet I can still upload/modify files. I have also disabled inheritance from the parent folder of \ftp\users\domain\public. Any ideas on what I'm missing here? P.S. I know this is a stupid setup and makes no sense; it's some bizarre legacy application that I need to migrate to a safer environment until it can be replaced.

    Read the article

  • Excel INDIRECT function and conditional formatting - highlighting a row

    - by Ehryk
    I'm having an issue with conditional formatting using the INDIRECT function. I'm doing something similar to "Using INDIRECT and AND/IF for conditional formatting", but the only answer there isn't working for me. Basically, I want to highlight rows where B is not blank and F is blank. INDIRECT will work for ONE of the conditions, but
      =AND(INDIRECT("B"&ROW()) > 0, INDIRECT("F"&ROW()) = "")
    does not work at all. The answer in that question points to replacing the references with relative ones, so I'm thinking this should work:
      =AND($B2 > 0, $F2 = "")
    But it does not, nor does ISBLANK($F2) or ISEMPTY($F2) (the cell contains a formula that sometimes returns ""; I want the row highlighted in those cases, but only when something is in column B). Am I missing something about relative references? Why doesn't INDIRECT work with AND/OR?

    Read the article

  • Problems connecting Centos to the network when running in VMWare

    - by Sakin
    Hi, I installed CentOS on VMware running on Windows XP. When trying to configure it to connect to the internet, I get an error message when bringing up the network interface:
      [root@VMLinux ~]# /etc/init.d/network start
      Bringing up loopback interface:  [ OK ]
      Bringing up interface eth0:  Determining IP information for eth0... failed  [FAILED]
    The VM is running on a machine that has access to the network, and I tried it on two different networks that have DHCP enabled. I tried configuring VMware to use a bridged connection as well as a NAT connection. An image of Ubuntu runs fine on the same VMware. What am I missing here? Thanks.
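
    A few checks worth running inside the guest; a stale MAC address in the interface config is a common culprit on copied or re-created VMs (the file paths are the CentOS defaults):
      ifconfig eth0                                     # does the interface exist, and what MAC does it report?
      dhclient eth0                                     # run the DHCP client by hand and watch whether a lease arrives
      cat /etc/sysconfig/network-scripts/ifcfg-eth0     # if HWADDR is set here, it must match the MAC shown by ifconfig
      tail -n 50 /var/log/messages                      # dhclient and kernel NIC errors usually land here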

    Read the article

  • 1Gigabit vs 1.25Gigabit mismatch

    - by Joel Coel
    I need to re-connect the network to a small old outbuilding that hasn't been used in several years. I have to use the existing 62.5um multi-mode fiber run; this end of the fiber is already connected. For the end in the building, I was looking at this pair:
      http://www.tp-link.com/products/productDetails.asp?class=switch&content=spe&pmodel=TL-SM311LM
      http://www.tp-link.com/products/productDetails.asp?class=&content=spe&pmodel=TL-SL2210WEB
    If you look at the SFP first (first link), it's listed at 1.25Gbps. That's odd, because IIRC the fiber should really only do 1Gbps. It's also supposed to work with the switch I posted (second link), but the GBIC port on the switch also only shows 1Gbps. What am I missing here?

    Read the article

  • Apache + LDAP Auth: access to / failed, reason: require directives present and no Authoritative handler

    - by Karolis T.
    Can't solve this one; here's my .htaccess:
      AuthPAM_Enabled Off
      AuthType Basic
      AuthBasicProvider ldap
      AuthzLDAPAuthoritative on
      AuthName "MESSAGE"
      Require ldap-group cn=CHANGED, cn=CHANGED
      AuthLDAPURL "ldap://localhost/dc=CHANGED,dc=CHANGED?uid?sub?(objectClass=posixAccount)"
      AuthLDAPBindDN CHANGED
      AuthLDAPBindPassword CHANGED
      AuthLDAPGroupAttribute memberUid
    The AuthLDAPURL is correct, and the BindDN and BindPassword are correct as well (verified with ldapvi -D ..). Apache version: Apache/2.2.9 (Debian). The error message seems cryptic to me; I have AuthzLDAPAuthoritative on, so where's the problem? EDIT: The LDAP modules are loaded, so the problem is not with them being missing:
      # ls /etc/apache2/mods-enabled/*ldap*
      /etc/apache2/mods-enabled/authnz_ldap.load /etc/apache2/mods-enabled/ldap.load
    EDIT2: Solved it by changing the funky "Require ldap-group cn=CHANGED, cn=CHANGED" line to "Require valid-user". Since AuthzLDAPAuthoritative is on, no other auth methods will be used and the valid-user requirement will authenticate via LDAP. (Right? :/)
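
    One way to see whether the original ldap-group requirement could ever have matched is to reproduce the lookups with ldapsearch outside Apache. A small sketch, with BIND_DN and the CHANGED placeholders standing in for the real DNs exactly as in the .htaccess above:
      # can the bind DN see the user entries the AuthLDAPURL filter is supposed to find?
      ldapsearch -x -D "BIND_DN" -W -b "dc=CHANGED,dc=CHANGED" "(objectClass=posixAccount)" uid
      # does the group entry actually carry memberUid values that include the uid you log in with?
      ldapsearch -x -D "BIND_DN" -W -b "cn=CHANGED,cn=CHANGED" memberUid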

    Read the article

  • Serve WordPress from root of site on Bitnami EC2 Ubuntu image

    - by user57087
    I'm running Bitnami's Ubuntu WordPress image on Amazon EC2. By default the WordPress install is at /wordpress and there is a static index.html file in the root. How do I configure Apache and WordPress to serve from the root instead of /wordpress? I have tried the instructions on this page: http://digitivity.org/10/how-to-serve-your-wordpress-blog-from-the-root-directory-if-its-installed-in-a-subdirectory without success. I have also tried changing the document root to the folder serving the WordPress content, but that just breaks WordPress. I know I am missing something simple, but I'm not sure what. Any help would be appreciated.
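
    A sketch of the approach WordPress itself documents for this situation ("giving WordPress its own directory"): leave the install where it is, but put a small index.php in the document root that hands requests back to it. The /opt/bitnami paths below are assumptions about the image layout and example.com is a placeholder; adjust both to the actual instance:
      # in Settings -> General, first set "WordPress address (URL)" to http://example.com/wordpress
      # and "Site address (URL)" to http://example.com, then:
      printf '%s\n' '<?php' "define('WP_USE_THEMES', true);" "require('/opt/bitnami/apps/wordpress/htdocs/wp-blog-header.php');" > /opt/bitnami/apache2/htdocs/index.php
      rm /opt/bitnami/apache2/htdocs/index.html        # remove the static placeholder page so Apache serves index.php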

    Read the article
