Search Results

Search found 23890 results on 956 pages for 'issue'.

  • A single AD user can't log into a single Mac bound to the domain (DirectoryServices error). How can I resolve this?

    - by Ben Wyatt
    On our campus, we have about 60 Macs joined to our Active Directory domain. Most users have no problems logging into Macs, as long as their accounts are configured correctly. However, we have one particular user who is unable to log in to just some of the Macs. He has no problem with most of them, but there is one group of them (all built from the same image) that he can't log in to. The machine in question is running OS X 10.6.2. The relevant entries from secure.log are below, with the hostname and username redacted.

        Aug 16 10:32:43 hostname SecurityAgent[4411]: Could not get the user record for username from DirectoryServices.
        Aug 16 10:32:43 hostname SecurityAgent[4411]: Will sleep 1 seconds and try again (retryCount = 4)
        Aug 16 10:32:44 hostname SecurityAgent[4411]: Could not get the user record for username from DirectoryServices.
        Aug 16 10:32:44 hostname SecurityAgent[4411]: Will sleep 2 seconds and try again (retryCount = 3)
        Aug 16 10:32:46 hostname SecurityAgent[4411]: Could not get the user record for username from DirectoryServices.
        Aug 16 10:32:46 hostname SecurityAgent[4411]: Will sleep 4 seconds and try again (retryCount = 2)
        Aug 16 10:33:10 hostname SecurityAgent[4411]: Could not get the user record for username from DirectoryServices.
        Aug 16 10:33:10 hostname SecurityAgent[4411]: Will sleep 8 seconds and try again (retryCount = 1)
        Aug 16 10:33:18 hostname SecurityAgent[4411]: User info context values set for username
        Aug 16 10:33:18 hostname SecurityAgent[4411]: unknown-user (username) login attempt PASSED for auditing

    Everything I've found online suggests that our use of Mobile Accounts is causing the issue. I turned that feature off, but I still can't log in as that user. id returns a record for his account, and nothing looks out of the ordinary. Has anyone here run into this before?
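
    If it helps anyone reproduce this, here is a diagnostic sketch for comparing a working Mac against a failing one - DOMAIN and username below are placeholders for the real AD forest name and account:

        # Flush the Directory Services cache, then query the user record directly
        dscacheutil -flushcache
        dscl "/Active Directory/DOMAIN/All Domains" -read /Users/username

        # Confirm the machine's AD binding and inspect the search policy
        dsconfigad -show
        dscl /Search -read / CSPSearchPath

    If the failing image returns the record from dscl but SecurityAgent still can't, the difference between the two machines' search paths or binding settings is usually the place to look.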

  • Setting Windows 7's Recycle Bin to automatically have a default disk space allocation for deleted files from newly mounted drives

    - by galacticninja
    How do I set Windows 7's Recycle Bin to automatically have a default disk space allocation for deleted files from external hard drives and TrueCrypt-mounted volumes? I remember that in Windows XP I could set a percentage of total disk space that would automatically be used as storage capacity for deleted files by the Recycle Bin, and this was applied to all external HDs or TC-mounted volumes.

    Windows 7 defaults to the 'Don't move files to the Recycle Bin. Remove files immediately when deleted' setting for newly mounted external HDs and TC-mounted volumes. Since I am expecting deleted files to go to the Recycle Bin, this sometimes causes an 'Oops' when I delete files on external hard drives or TC-mounted volumes: Windows does not move the deleted files to the Recycle Bin, but deletes them permanently. I have to remember to manually set a custom Recycle Bin storage space for each new drive that is mounted by Windows to avoid this issue.

    I only use and mount TrueCrypt file containers, not drives. I also don't mount TrueCrypt file containers as removable drives. ('Mount volume as removable medium' is unchecked in Mount Options.)

    In my $Recycle.Bin > Properties > Security settings, 'System' and 'Administrators' are already set to 'Full Control', while 'Users' only have 'Special Permissions' checked in gray. There are no other groups. I haven't changed or edited anything in these settings. I am using Windows 7 Ultimate.
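
    Per-volume Recycle Bin settings live in the registry under the volume's GUID, so one workaround - a sketch, with {GUID} a placeholder for the real volume GUID - is to pre-seed the values from a script whenever a new volume appears:

        rem List the volume GUIDs Windows has assigned
        mountvol

        rem Pre-seed a 2 GB Recycle Bin quota (MaxCapacity is in MB) and make sure
        rem "remove immediately" (NukeOnDelete) is off for that volume
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\BitBucket\Volume\{GUID}" /v MaxCapacity /t REG_DWORD /d 2048 /f
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\BitBucket\Volume\{GUID}" /v NukeOnDelete /t REG_DWORD /d 0 /f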

  • SVN, Samba and Symbolic Links. How to get them all to play together?

    - by Camsoft
    I've got a website project under version control that relies on files from an unversioned directory on the same server via symbolic links. I'm currently storing the symbolic links in the repository. The idea is that if someone checks out a working copy on to the same server, they can edit and test the working copy of the project before committing it back to the repository. When they check out their working copy, it successfully sets up the symlinks so that the entire site works when testing.

    The users that work on the project are Windows users, so I've set up Samba shares on the server and then mapped them to network drives in Windows. People can edit their working copies directly on the server via network shares and then test them in the web browser before committing their changes back to the repository via TortoiseSVN.

    The Problem

    The problem I have is that Samba resolves the symlinks as expected, but when a user tries to commit their changes back to the repository, TortoiseSVN thinks the linked files are part of the project and tries to commit the target files to the repository, not the symlinks themselves. I tried turning off symlink support in Samba, which means that the linked files cannot be resolved; I don't really want people to have access to the linked files, nor do I want to import the linked files into the repository. The problem with this is that I get:

        Can't stat '\\webserver\projects\working\project\symlinked_file.php'. Access is denied

    Apart from the symlink problem, everything else works 100% perfectly. Users can either check out website projects to their machine and work on them (but can't test), or check them out to their space on the dev web server and work on them and fully test. So I don't want to change the workflow process, I just need a solution to the symbolic link issue. Many thanks.

    Originally posted on StackOverflow: http://stackoverflow.com/questions/2400917/svn-samba-and-symbolic-links-how-to-get-them-all-to-play-together
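
    One workaround consistent with this setup - a sketch, not a tested fix: since the symlinks only look like regular files through the Samba mapping, commits that touch them can be made on the server itself, where Subversion still sees the real links (symlinked_file.php is just the example name from the error above):

        # On the dev server itself (not through the Samba share), the links are real
        cd /path/to/working/project
        ls -l symlinked_file.php   # a genuine symlink shows "-> /path/to/target"
        svn status                 # the link should not appear as a modified regular file
        svn commit -m "Commit from the server so symlinks stay symlinks"

    Day-to-day edits could still happen over the share, with only the commits that involve linked paths done from a shell on the server.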

  • What is the ideal way to set up multiple FTP enabled web accounts on Fedora?

    - by Nicholas Flynt
    I'm setting up a test server for use as a web development platform, and I'd like to mimic as closely as I can a typical shared hosting setup. That is, I'd like my server to have multiple user FTP accounts, each of which links to a directory containing the webroot of the site, and I'd like Apache to be able to easily see and manipulate these files.

    I'll admit: I'm not as familiar with Fedora as I'd like. I run Ubuntu on my home box, and SELinux is giving me some grief. My initial plan was to have each user FTP into their home directory and put the web directory there as well, but SELinux throws a hissy fit when Apache tries to access anything outside of its web directory, so that plan was a no-go. Would it be wise to continue this route, and perhaps mount web directories in user home folders so that FTP could still be used to access them, even though Apache saw them in /var/www like it expects? Would it make more sense to set up custom FTP accounts and use a single FTP user on the server box? What's the general course of action on something like this?

    I'm using vsftpd right now to host web directories, which is why I'm liking the home directory approach (it's simple and secure), but of course there's bound to be a better way to go about it. Thanks. (I'll leave other things, like restricted DB access and such, to another post. I'm interested right now with just getting FTP and Apache to play nice in a multi-user environment.)

    PS: For the record, an issue I ran into when doing all of this was that if Apache isn't running as the same user as the FTP account is saving as, there are permissions errors when FTP creates files, requiring the remote user to chmod the files to fix it. A logical fix would be to run Apache in a special group, put all web users in this group, and have FTP access default to giving this group read/write access to everything like Apache would expect, but I never could figure out how to accomplish this. Bonus points and cake if you know a solution.
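
    For the SELinux side and the shared-group idea in the PS, here is a sketch of the usual moving parts - the booleans and context type are standard Fedora ones, but the group name webdev and user someuser are made up:

        # Let Apache follow user home directories and vsftpd write to them
        setsebool -P httpd_enable_homedirs on
        setsebool -P ftp_home_dir on

        # Label the web content so Apache is allowed to read it
        chcon -R -t httpd_sys_content_t /home/someuser/public_html

        # Shared-group approach from the PS: one group for Apache and all FTP users,
        # with the setgid bit on directories so new files inherit the group
        groupadd webdev
        usermod -aG webdev apache
        usermod -aG webdev someuser
        chgrp -R webdev /var/www
        chmod -R g+rwX /var/www
        find /var/www -type d -exec chmod g+s {} +

    You would still want vsftpd's local_umask set so new uploads come out group-writable, but the above covers the ownership side.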

  • How to handle sh: fetch: command not found

    - by Tyler Johnson
    Okay, I'm a noobie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers and server commands, etc. I've recently had a problem with all of my sites on our servers going down all at once, and then I have to go in and reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing, as it now takes 3-4 reboots for the websites to come back up. I contacted support for my hosting, but they are not being very helpful. They just keep telling me what the issue might be and basically telling me that I'm going to have to look into it and figure it out, which really isn't possible since I know nothing. Anyway, here are the things they said were possible reasons:

    They said I have "strange logs" in my Apache webserver log, error: sh: fetch: command not found. My php.ini memory limit is 256M, which is very high; it should be 32M or 64M. The server is reaching MaxClients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server are still going down.) I have some WordPress sites with plugins getting errors like:

        PHP Warning: pack(): Type H: illegal hex digit G in...
        PHP Fatal error: Cannot use object of type stdClass as array in...
        PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
        PHP Fatal error: Call to undefined function file_exists() in...
        PHP Parse error: syntax error, unexpected '<'

    I know that's a lot, but I really am at wit's end and have no idea what to do now. If anyone could maybe give me some advice or point me in the right direction, I would greatly appreciate it! Thanks!

    Oh, and here are the specs for my server:

        RAM: 2048MB
        CPU Shares: 40
        Primary Disk: 50GB
        Data Transfer: 75GB
        Port Speed: 5Mbps
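
    For what it's worth, fetch is the stock download tool on FreeBSD, so something on the box is shelling out to a command that isn't installed - on a web server that is often injected code in a compromised site. A first triage sketch, with the paths guessed at typical locations:

        # Look for anything in the web root that shells out to fetch
        grep -RIn --include='*.php' 'fetch ' /var/www | less

        # Check cron jobs that might be invoking it
        grep -RIn 'fetch' /etc/cron* /var/spool/cron 2>/dev/null

        # See which requests coincide with the errors in the Apache log
        tail -n 200 /var/log/httpd/error_log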

  • ClassNotFoundException returned for all plugins

    - by razumny
    I am trying to use a Java applet (any Java applet), but I always get a message saying "Error. Click for details". When I do so, the pop-up says:

        Application Error
        ClassNotFoundException
        jreVerification.class

    When I click the "Details" button, all I see is the following:

        Java Plug-in 10.7.2.10
        Using JRE version 1.7.0_07-b10 Java HotSpot(TM) Client VM
        User home directory = C:\Users\razumny
        ----------------------------------------------------
        c:   clear console window
        f:   finalize objects on finalization queue
        g:   garbage collect
        h:   display this help message
        l:   dump classloader list
        m:   print memory usage
        o:   trigger logging
        q:   hide console
        r:   reload policy configuration
        s:   dump system and deployment properties
        t:   dump thread list
        v:   dump thread stack
        x:   clear classloader cache
        0-5: set trace level to <n>
        ----------------------------------------------------

    I am running Windows 7 Professional, and am up to date on patches. The problem occurs in Google Chrome, Mozilla Firefox and Internet Explorer, regardless of what Java applet I am running. The error I quoted above came from here: http://java.com/en/download/installed.jsp?detect=jre

    I have attempted the following to rectify the issue:

        Uninstall and reinstall Java
        Uninstall Java, reboot, install Java
        Uninstall Java, delete all registry entries, reboot, install Java

    In addition, I have run malware and virus scans, none of which have shown anything of relevance. At this point, I am at my wit's end, and so, I turn to you.

  • Users removing Administrator from files/folders permissions

    - by user64204
    We're running Windows Server 2003 R2 with Active Directory and are having an issue with network shares whereby users, in an attempt to secure their documents, remove everybody (including the Administrator account) from their files/folders permissions. Since the Administrator no longer has read permission on them, we can't even back up the files manually, as we get permission errors.

    One solution that we've found is to change the owner of the files and directories to the Administrator account. We can then change the permissions as we wish. The problem is that this has to be done manually, so it can't really be applied to an entire share. Another solution that we've tried is to use cacls as follows:

        cacls d:\path\to\share /C /T /E /G Administrator:F

    The problem with this is that we're still getting an ACCESS DENIED error on files/folders from which Administrator was removed.

    Q1: Is there a way to restore at least read access to all files/folders to the Administrator account in a recursive fashion? That would be for the short term.

    For the long term we're looking for a solution to prevent users from removing Administrator from files/folders permissions. Since we're going to migrate to Windows Server 2008 R2 soon, we could wait until we've migrated to implement such a solution if need be.

    Q2: Is there a way to prevent users from removing Administrator from files/folders permissions on Windows Server 2003/2008?
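
    For Q1, taking ownership first is what makes the recursive grant stick, since an administrator can always seize ownership even without read access. A sketch using the built-in tools (on 2003 these may require SP2 or the Resource Kit; on 2008 they're standard):

        rem Seize ownership of the whole tree for the Administrators group
        takeown /F D:\path\to\share /A /R /D Y

        rem Then grant Administrators full control recursively
        icacls D:\path\to\share /grant Administrators:(OI)(CI)F /T

    For Q2, the usual approach is to stop granting users Full Control on their own folders: Modify is enough for day-to-day work, and without the Change Permissions right they can't strip Administrator from the ACL in the first place.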

  • Problems using InfraRecorder to back up ISOs of certain CDs

    - by Voyagerfan5761
    I've gotten into the habit of backing up my CDs as ISO files, just in case the discs should be damaged, lost, or destroyed. Using InfraRecorder, the process is pretty painless. Unfortunately, I have run into at least two discs that don't back up. I get the error message:

        Can't read source disc. Retrying from sector 252270

    Sometimes this will appear repeatedly. One of the discs is my retail copy of Star Trek: Armada II; the other is disc one of DOOM 3. Both discs run flawlessly when I put them in the drive and let Windows AutoPlay them. Armada II appears as two tracks (one data, one audio) in InfraRecorder, and the error happens at the approximate track boundary. DOOM 3's first disc, however, fails much sooner (around sector 990) and appears as one solid data track.

    Am I simply using the wrong tools for this job? InfraRecorder is a nice free tool that I can run from my flash drive and use for most tasks of this type, but it does seem to have trouble with certain things. Ideally I'd like to hear about any workarounds people have found for this issue, but if I must switch tools I'm open to it (preferably other free tools).

  • Issues with VSFTPD / FTP on Linux Ubuntu server - Steps for Troubleshooting?

    - by jnolte
    I am dealing with an issue I am unclear on how to resolve, and I have been pulling my hair out for some time. I have been trying to configure an FTP user using the following (we use this same documentation on all servers):

    Install the FTP server:

        apt-get install vsftpd

    Set local_enable and write_enable to YES and anonymous access to NO in /etc/vsftpd.conf, then restart ("service vsftpd restart") to allow the changes to take place. Add a WordPress user for FTP access in WP Admin.

    Create a fake shell for the user by adding "/usr/sbin/nologin" to the bottom of the /etc/shells file, then add an FTP user account:

        useradd username -d /var/www/ -s /usr/sbin/nologin
        passwd username

    Add these lines to the bottom of /etc/vsftpd.conf:

        userlist_file=/etc/vsftpd.userlist
        userlist_enable=YES
        userlist_deny=NO

    Add the username to the list at the top of /etc/vsftpd.userlist, restart vsftpd ("service vsftpd restart"), make sure the firewall is open for FTP ("ufw allow ftp"), and allow username to modify the /var/www directory: "chown -R /var/www

    I have also gone through everything listed on this post and had no luck. I am getting connection refused. Sorry for the poor text formatting above; I think you get the idea. This is something we do over and over, and for some reason it is not cooperating here. The setup is Ubuntu 12.04 LTS and vsftpd v2.3.5. Thank you in advance.
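
    When vsftpd refuses connections outright, it usually isn't the user configuration at all; either the daemon isn't listening or something in front of it is blocking port 21. A quick triage sketch (Ubuntu 12.04 paths):

        # Is vsftpd actually running and listening on port 21?
        service vsftpd status
        netstat -tlnp | grep ':21'

        # Any complaints at startup? vsftpd logs to syslog on Ubuntu
        grep vsftpd /var/log/syslog | tail -n 20

        # Verify the firewall rule is really in place
        ufw status verbose

        # Test locally first to rule out network/firewall issues
        ftp localhost

    One common cause on this exact version is having both listen=YES and listen_ipv6=YES set at once, which stops vsftpd from binding its IPv4 socket.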

  • Too many concurrent connections Exchange 2010. What else is there to check?

    - by hydroparadise
    I thought that I had this under control before, but for some reason, during our last email marketing promo, I started receiving the following from our mass email client (built in house):

        The message could not be sent to the SMTP server. The transport error code is 0x800ccc67. The server response was 421 4.3.2 The maximum number of concurrent connections has exceeded a limit, closing transmission channel

    There are several places I've checked to make sure that wouldn't be an issue. First I checked that the receive connector was set to receive an adequate number of connections on our relay connector (1000 connections). Then I found out about throttling policies. I created one and set all the policy properties I knew about to 1000: EWSMaxConcurrency, OWAMaxConcurrency, and CPAMaxConcurrency. Still, the email client starts receiving the error shortly after 100 have been sent, which takes about 15-30 seconds. The process is then repeatable, but the error still gets received at the same spot every time.

    Is there a rate setting that I am missing? Was there a Windows update that I missed looking at? Should the software have its own throttling feature?
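
    One setting that matches these symptoms exactly - failure after a fixed number of connections from a single host - is the per-source limit on the receive connector, which has a separate, much lower default than the overall connection cap. A sketch for the Exchange Management Shell ("Relay" is a placeholder connector name):

        # Check both the total and the per-source connection limits
        Get-ReceiveConnector "Relay" | fl MaxInboundConnection,MaxInboundConnectionPerSource

        # Raise the per-source limit, which is what a single mass-mailing host hits
        Set-ReceiveConnector "Relay" -MaxInboundConnectionPerSource 1000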

  • How to make an EC2 AMI work with Xen on Debian

    - by user67103
    Hi. Here is the scenario. We are creating a cloud app. We have a Debian box on which we create a customized image and upload it to Amazon EC2. After uploading it to the cloud we have made some more customizations and are trying to rebundle it, and we are facing some issues in rebundling it. We would like to know if we could do something like this:

        a. Create an AMI image on Debian
        b. Load it on to the Xen hypervisor, which would run on the Debian box
        c. Customize the image
        d. Save the customized image
        e. Upload it to EC2

    The issue is that I am unable to find a proper solution on how to install Xen on Debian, and whether an AMI that works on Xen will work on EC2. Any suggestions/help would be greatly appreciated. Thanks. Krishna

  • Windows 7 Explorer keeps crashing

    - by Daniel Liang
    I currently have an issue with Windows Explorer. It keeps crashing when I browse through a network drive. This is happening on several computers. I have already obtained a crash dump file, but it doesn't state much:

        Microsoft (R) Windows Debugger Version 6.12.0002.633 X86
        Copyright (c) Microsoft Corporation. All rights reserved.

        Loading Dump File [C:\LocalDumps\explorer.exe.3964.dmp]
        User Mini Dump File with Full Memory: Only application data is available

        Symbol search path is: SRV*c:\websymbols*http://msdl.microsoft.com/download/symbols
        Executable search path is:
        Windows 7 Version 7601 (Service Pack 1) MP (2 procs) Free x86 compatible
        Product: WinNt, suite: SingleUserTS
        Machine Name:
        Debug session time: Mon Oct 21 11:21:30.000 2013 (UTC - 4:00)
        System Uptime: 0 days 0:06:20.449
        Process Uptime: 0 days 0:05:54.000
        ................................................................
        ................................................................
        ....
        Loading unloaded module list
        .............
        This dump file has an exception of interest stored in it.
        The stored exception information can be accessed via .ecxr.
        (f7c.fe4): Access violation - code c0000005 (first/second chance not available)
        eax=00000000 ebx=07a3f080 ecx=00000400 edx=00000000 esi=00000002 edi=00000000
        eip=76e170f4 esp=07a3f030 ebp=07a3f0cc iopl=0         nv up ei pl zr na pe nc
        cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00000246
        ntdll!KiFastSystemCallRet:
        76e170f4 c3              ret

    I've already tried removing certain context menu items. I disabled all unnecessary start-up items. I ran memtest86 and it looks fine on that end. It also happens when I browse through my local disk. Can anyone take a look into this? Thanks!
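
    The output above only shows where the exception surfaced, not what caused it. A couple of stock WinDbg commands - a sketch, run against the same dump with symbols loading from the Microsoft server - will usually point at the guilty module, which for Explorer crashes is very often a third-party shell extension:

        .ecxr        $$ switch to the stored exception context
        kb           $$ stack trace at the crash; look for non-Microsoft DLLs
        lm           $$ list loaded modules to spot third-party shell extensions
        !analyze -v  $$ automated analysis that names the likely faulting module

    If a non-Microsoft DLL shows up in the stack, a tool like NirSoft's ShellExView can disable that extension for testing.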

  • Small TCP Window on WAN between 2 Locations

    - by Brent
    Site A: Denver datacenter, 60Mbps. Site B: Chicago, 100Mbps.

    ICMP pings:

        Packets: Sent = 176, Received = 176, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
        Minimum = 74ms, Maximum = 94ms, Average = 75ms

    File transfers between the sites never go past ~7Mbps, while a Windows Update download runs at 60Mbps+. Site to site is an IPSec VPN using two Cisco 5520's, with CPU at 3-4% and lots of memory to spare.

    The latency between the two sites is very acceptable, so I can't see why it performs so slowly when transferring between them. I have found that any type of transfer (FTP, HTTP, Windows file shares) will never go above ~7Mbps. When the WAN was first set up, I was able to get transfers at 50-60Mbps, which is what is expected given Site A's 60Mbps WAN connection. Then a few days later, I was not able to get anything going faster than ~7Mbps.

    Is there an upstream router between Denver and Chicago causing this? I want to take the blame away from our setup, as downloads from Windows Update go blazing fast, and for the first few days after the site-to-site VPN came up I was transferring VM images at 50-60Mbps.

    Our stack: HP P2000 MSA - HP C7000 Chassis - HP Flex-10 - Cisco Gigabit switch - Cisco ASA - WAN
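
    As the title suggests, the numbers fit the classic symptom of a small TCP receive window over a ~75 ms path: a 64 KB window gives roughly 64 KB x 8 / 0.075 s, which is about 7 Mbps. A quick check on the Windows hosts at either end - a sketch, assuming Vista/Server 2008 or later:

        rem Is receive-side window auto-tuning enabled?
        netsh interface tcp show global

        rem Re-enable it if it has been turned off (or knocked down by a heuristic)
        netsh interface tcp set global autotuninglevel=normal
        netsh interface tcp set heuristics disabled

    If the hosts check out, a device in the path that strips or mangles the TCP window-scaling option is the other usual suspect, and a firewall inspection policy on the ASAs would be worth reviewing.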

  • The great Vanishing Act of INetMgr.exe on my Windows 7 x64 system

    - by marc_s
    I'm facing an odd issue with the IIS Manager on Windows 7 (x64). At home, I have Win7 Professional, and when I check my IIS Manager icon in the start menu, I see it links to %windir%\system32\inetsrv\InetMgr.exe. When I launch this from the command line, it works like a charm.

    At work, however, I have Windows 7 Enterprise (x64), and when I check my link in the start menu, the entry is exactly the same. If I click on it, it works like a charm. Now if I'd like to launch it from the command line (cmd.exe or TakeCommand), however, the file just isn't there: a DIR %windir%\system32\inetsrv\*.exe shows a number of files, including an "inetmgr6.exe", but no "inetmgr.exe", and of course I can't launch it either :-(

    Strangely enough, when I look at the directory %windir%\system32\inetsrv in Windows Explorer or Windows PowerShell, I SEE the INetMgr.exe file and I can launch it - no problem. What the **** is going on here? How can I find the INetMgr.exe from my classic command line and launch it from there??

    UPDATE: ok, some updates. On my work laptop, the INetMgr.exe file appears to really be located in a directory called c:\windows\syswow64\inetsrv (I'm recalling from memory, so don't quote me on the directory name - something like that). I can see this if I search for it in e.g. PowerShell or the Windows 7 Explorer. However, from a "classic" command line like cmd.exe, it appears to be in c:\windows\system32\inetsrv ..... hmmm.... trouble is - even though I now know where the file really is, I cannot access that directory from my classic command line - not even if I'm running cmd.exe as admin with elevated privileges....... so I know where the file is, but that still doesn't solve my problem :-(
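
    This smells like WOW64 file-system redirection: a 32-bit shell that asks for %windir%\system32 is silently handed %windir%\SysWOW64 instead, which would explain the same path showing different contents in different programs. From a 32-bit shell, the virtual Sysnative alias reaches the real 64-bit system32 - a sketch:

        rem In a 32-bit cmd.exe on 64-bit Windows, this is redirected to SysWOW64:
        dir %windir%\system32\inetsrv\inetmgr.exe

        rem ...while the Sysnative alias bypasses the redirection:
        dir %windir%\sysnative\inetsrv\inetmgr.exe
        %windir%\sysnative\inetsrv\inetmgr.exe

    A 64-bit cmd.exe (the one in the real system32) would not show the discrepancy at all, which is one way to confirm the diagnosis.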

  • Apache2 VirtualHosts 403 Oddity

    - by Carson C.
    I'm sure this is something I should already understand, but I'm finding myself confused. The configs in play add up to this:

        NameVirtualHost *:80
        Listen 80

        <VirtualHost *:80>
            <Directory />
                Options FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName domain.tld
            ServerAlias *.domain.tld
            DocumentRoot /var/www/domain.tld
            <Directory /var/www/domain.tld>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    DNS is working correctly. The issue is that every variant of http://*.domain.tld/ (including http://domain.tld/) works correctly, except http://www.domain.tld/, which throws a 403. The logs state:

        client denied by server configuration: /etc/apache2/htdocs

    If I remove the first VirtualHost block from play, everything works as expected, including http://www.domain.tld. This leads me to believe that for some reason Apache is not considering www.domain.tld to match the second VirtualHost block, and is thereby falling back to deny all. This seems wrong. Shouldn't the second block match www.domain.tld?

    I've been able to resolve this, but I still don't understand why. In my original configs, I was using the real IP address of the server instead of *. Switching all instances to * as shown above made everything work as expected. Does this have something to do with the way browsers request resources?
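
    Apache has a diagnostic for exactly this question - a sketch: dump the parsed vhost table and see which block each name actually lands in, then test the Host header handling without a browser in the way. With NameVirtualHost *:80 but a VirtualHost defined on the server's literal IP, the address-based match can win before the ServerName/ServerAlias comparison ever happens, which fits the behaviour described:

        # Show how Apache resolved the vhost configuration (Debian/Ubuntu wrapper)
        apache2ctl -S

        # Exercise the name-based matching directly
        curl -v -H 'Host: www.domain.tld' http://127.0.0.1/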

  • Installing checkinstall on x86_64

    - by SephMerah
    I downloaded the source for checkinstall, checkinstall-1.6.2.tar.gz, then ran tar -xzvf checkinstall-1.6.2.tar.gz and make. It prints this error:

        [root@ip-50-63-180-135 checkinstall-1.6.2]# make
        for file in locale/checkinstall-*.po ; do \
            case ${file} in \
                locale/checkinstall-template.po) ;; \
                *) \
                    out=`echo $file | sed -s 's/po/mo/'` ; \
                    msgfmt -o ${out} ${file} ; \
                    if [ $? != 0 ] ; then \
                        exit 1 ; \
                    fi ; \
                ;; \
            esac ; \
        done
        make -C installwatch
        make[1]: Entering directory `/home/sofiane/checkinstall-1.6.2/installwatch'
        gcc -Wall -c -D_GNU_SOURCE -DPIC -fPIC -D_REENTRANT -DVERSION=\"0.7.0beta7\" installwatch.c
        installwatch.c:2942: error: conflicting types for 'readlink'
        /usr/include/unistd.h:828: note: previous declaration of 'readlink' was here
        installwatch.c:3080: error: conflicting types for 'scandir'
        /usr/include/dirent.h:252: note: previous declaration of 'scandir' was here
        installwatch.c:3692: error: conflicting types for 'scandir64'
        /usr/include/dirent.h:275: note: previous declaration of 'scandir64' was here
        make[1]: *** [installwatch.o] Error 1
        make[1]: Leaving directory `/home/sofiane/checkinstall-1.6.2/installwatch'
        make: *** [all] Error 2

    I searched extensively on this issue and this solution looks promising. Should I attempt to install checkinstall as an fpm? What would be the best way to go about that? CentOS 6.3 x86_64.

  • About to go live: virtual dedicated server or cloud?

    - by morpheous
    I am about to launch my startup company, and we will be going live in a few weeks' time. We have really tight budgetary constraints, since we are bootstrapping, and would prefer not to raise external capital. I can't use shared hosting because I need more control of the server machine (for technical reasons - e.g. using proprietary extensions to PHP, Apache and in the database layer as well) - but I want to control costs and don't want to go the fully private server route until we have determined the market size etc. So the only real alternatives AFAIK are a virtual server and the cloud.

    At the moment, cloud services seem a bit "vague" to me. My understanding is that they allow an entity to outsource its IT infrastructure, which in my mind (at least) is indistinguishable from what a hosting provider provides (at least from a functional point of view) - I would like to seek some clarification on exactly what the difference between the two is.

    Back to my original question, my requirements are:

        IT infrastructure that can scale with growth
        Ability to have control of the machine (e.g. to install our internally developed libraries etc)
        Backup software that is flexible and comprehensive enough (yet simple to use) to allow a (secured) backup strategy to be implemented

    On this issue, I have always wondered where the actual backed-up data is stored (since the physical machines are remote, and one can't get access to any actual tapes etc backed onto). I would also like some advice and recommendations in this area. Regarding data size, I am expecting the dataset to increase by a few megabytes of data every day (originally, say 10MB; in about a year's time, possibly 50MB).

    As an aside, I have decided to deploy on a Debian server (most of my additional libraries etc were compiled and built on a Debian machine). Mindful of all of the above, I would like some advice (and reasons) as to which route to take. I would also like some advice on which backup software to use, from people who have walked a similar path.
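
    On the backup requirement, at these data volumes an encrypted, incremental push to cheap off-server storage is the usual pattern. A minimal sketch with duplicity, where the paths, the GnuPG key ID and the destination host are all placeholders:

        # Encrypted incremental backup of the web root to a second machine
        duplicity --encrypt-key ABCD1234 /var/www sftp://backup@backuphost//backups/www

        # List what the remote archive currently holds
        duplicity collection-status sftp://backup@backuphost//backups/www

    This also answers where the backed-up data physically lives: on whichever second machine or storage bucket the target URL points at, rather than on tapes you could never reach anyway.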

  • Why does Exchange 2003 silently reject emails with large attachments?

    - by Cypher
    Our environment: Exchange Server 2003 Standard, single instance, running on Windows Server 2003 Standard, configured to not send/receive mail with attachments larger than 10 MB. NDRs are not enabled.

    The issue: when an external sender sends an email with an attachment larger than 10MB, Exchange, as configured, does not receive the message. However, the sender of that message does not receive any notification from his own mail server that the message could not be delivered due to attachment size. Yet if an external user tries to send an email to a non-existent user, they do receive a message from their mail server indicating that the user does not exist. Why is that, and is there anything I can do about it? It would be nice if the sender received notification that the attachment file size exceeds our limits and their message was never received...

    Update: The Exchange server has a SpamAssassin box in front of it... could that have something to do with it? Here is one of the last lines from SpamAssassin's logs when searching for my test e-mails:

        mail postfix/smtp[19133]: 2B80917758: to=, relay=10.0.0.8[10.0.0.8]:25, delay=4.3, delays=2.6/0/0/1.7, dsn=2.6.0, status=sent (250 2.6.0 Queued mail for delivery)

    My assumption is that SpamAssassin thinks the message is OK and is forwarding it off to Exchange.

    Update: I've verified that Exchange is receiving the message and generating an NDR. However, delivery of NDRs is disabled to prevent backscatter. Is there something I can do to get Exchange to send a bounce message to the sending mail server (or verify that the message is being sent) so the sending mail server can notify its sender of the bounce?
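
    Since the log shows a Postfix relay in front of Exchange, arguably the cleanest fix is to enforce the limit there: if Postfix rejects the oversized message during the SMTP conversation itself, the sender's own server is forced to generate the bounce and no backscatter is involved. A sketch for the SpamAssassin box, with 10 MB expressed in bytes:

        # Reject messages over ~10 MB at SMTP time on the Postfix relay
        postconf -e 'message_size_limit = 10485760'
        postfix reload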

  • Prevent SSL certificate being returned for a specific domain

    - by jezmck
    Apologies for a long question: We've taken on a new client whose web hosting was previously on their in-house server, which still has their Exchange/Outlook email. We now host their domain (and many others) on our server. They're complaining that they're getting errors in Outlook. I don't understand the AutoDiscover stuff at the root of the problem, but believe that I just need to stop the SSL certificate on our server being returned when requested at a particular domain:

        Yes it is, the issue lies with "{newclient}.com" being pointed to your server IP and that server has Port 443 open with an SSL certificate associated to it. So when Outlook/ActiveSync use autodiscover to find the mailbox settings it find your SSL (because 443 is open) and flags it as an error. The solution is to close 443 so its not discovered, Autodiscover will then proceed to mail.{newclient}.com via the MX / ServiceRecords and discover the correct SSL.

    I'm new here and there was no hand-over, so I don't know whether other currently hosted sites need to accept SSL connections, though I suspect some will, or may in future. This is a live server, so I can't risk trying loads of options in case I take the server offline! I feel like I should be adding something like the following to vhosts.conf:

        <VirtualHost *:443>
            ServerName {newclient}.com
            ServerAlias www.{newclient}.com
            SSLEngine Off
            SSLCertificateFile {NONE}
            SSLCertificateKeyFile {NONE}
        </VirtualHost>

    Apologies for the fact that I don't know enough about this subject to be able to ask the question more clearly!

  • Radeon 6950 - Garbling of text and graphics in certain Windows only

    - by Greg
    This morning I noticed the text in Gmail (in Firefox 4) looked a little funny (kind of thin, maybe some color fringing). I went to work and thought it might be some ClearType issue, or something with the Direct2D/DirectWrite path that FF4 uses to draw to the screen. When I came back from work (I left the computer on), the problem was much worse - way beyond ClearType nit-picking. The text was barely readable. I opened Chrome and there was no such problem. It seems like only windows that use hardware acceleration are garbled, and ones that use GDI are not. But I fired up Dragon Age and didn't notice any problems (I only really looked at the main menu though).

    Here is a link to a screen shot that illustrates the problem. Notice how the Windows Live Mesh window is completely unreadable, and the text in Firefox 4 (left) is pretty bad, while Chrome, the Windows Control Panel, and the task bar are perfectly fine. The fact that the problem shows up in screen shots, and that it only happens in certain windows, makes me confident that the problem cannot be with the monitor or DVI cable.

    I am using the AMD Radeon drivers from 4/27/11. The card I have (MSI Frozr II) came with a slight overclock (810MHz) out of the box, but it looks like when I'm on the Windows desktop it's not running at full clock (CCC reports 450MHz). Still, I underclocked it to the stock reference clock (800MHz) and it made no difference. The idle temperature according to Afterburner is 42-44 Celsius, which seems a tad high, but not enough to cause a problem - it's cold to the touch if I open up the machine.

    What the heck could be causing this? The problem varies in intensity. As we speak I'm in Firefox and things look better than they did earlier - it'll probably get worse again soon.

    Radeon 6950 (MSI Frozr II), Seasonic X 560, Core i5 2500K at stock clock speeds, 16GB RAM, Asus P8P67 M Pro

  • Word 2003 will not show up in Windows 7

    - by invadersil
    I just installed Windows 7 over the holiday and it went swimmingly well. Today I finished up a few things, like installing MS Office 2003. That went well too, until I tried to open up Word. When I try to open up Word on its own, it comes up in the application bar, but the application window does not show. I use Word as the editor in Outlook, which does work. I also discovered that I can start it up in safe mode and it will work normally, but a normal startup just doesn't show me anything. Oddly, if I start typing stuff while the app is selected in the app bar and then try to close it, it pops up a message asking if I want to save. I tried running the compatibility utility within Windows 7, but still no dice. Has anybody seen this issue yet? The other Office apps start normally.

    Edit: More info: Windows 7 Pro 64-bit. Office is patched up to SP3 (and fully updated with KBs after SP3), and last time I checked there were no outstanding updates. And I did a fresh install of Windows 7.
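
    Since Word starts clean in safe mode, two low-risk things are worth trying - a sketch, with the usual suspects being a corrupt Normal.dot or a window position saved off-screen:

        rem Start Word without add-ins or the global template
        "C:\Program Files (x86)\Microsoft Office\OFFICE11\WINWORD.EXE" /a

        rem If that helps, rename the default template so Word rebuilds it
        ren "%APPDATA%\Microsoft\Templates\Normal.dot" Normal.old

    If the window is simply off-screen, selecting Word in the taskbar and pressing Alt+Space, then M, lets you walk the window back into view with the arrow keys.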

  • Dedicated virtual setup is slow with WordPress

    - by kovshenin
    Hey. I'm running a Fedora Linux server on the Amazon EC2 platform. I'm pretty sure there's something wrong with my configuration, as it seems to be very slow. SSH sometimes takes over 30 seconds to connect, and a WordPress-generated web page could take 5 seconds to load, or it could take 20 seconds, which is pretty awkward. MySQL queries are all executed in less than a second, so I don't think that's the issue. I'm not really sure where the problem lies, but a simple page written in PHP loads instantly while a fresh WordPress installation starts lagging. The same setup works perfectly on grid hosting at MediaTemple, for instance, so I'm pretty sure I missed something. I would appreciate it if you could direct me to the right tools and articles that would help me figure this out. Thanks so much!

    Fedora Core 8, PHP 5.2.6, MySQL 5.0.45, OpenSSH 4.7p1, OpenSSL 0.9.8b. PHP is configured as a module for Apache 2.2.9; all websites are based on virtual hosts. I have some ongoing PHP scripts running from time to time in the background via cron. Thanks.
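
    The 30-second SSH stalls in particular usually point at failing reverse-DNS lookups on the server, which is common inside EC2; the same lookups can also stall Apache if hostname lookups are enabled. A quick sketch to rule that out (Fedora paths):

        # Stop sshd doing reverse DNS on each connection
        echo 'UseDNS no' >> /etc/ssh/sshd_config
        service sshd restart

        # Make sure Apache isn't resolving client hostnames per request
        grep -i HostnameLookups /etc/httpd/conf/httpd.conf   # should be "Off"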

  • How do you recreate the System Recovery environment in Windows 7?

    - by Howiecamp
    I'm running Windows 7 Home Premium RTM (64-bit) and I want to take advantage of the system recovery tools (e.g. the Command Prompt) without using the Windows 7 DVD. My understanding is that this environment (WinRE) should be installed to your HDD by default as part of the Windows 7 installation. However, when I hit F8 on boot and select "Repair", I get:

        Windows failed to start. A recent hardware or software change might be the cause. To fix the problem...
        Status: 0xc000000e
        Info: The boot selection failed because a required device is inaccessible.

    The "Info" line seems like the smoking gun. My next step was to boot from the Windows 7 DVD and choose "Repair". It indicated my Recovery Environment wasn't on the Windows 7 boot menu (perfect) and offered to fix it. I said yes and rebooted; however, same issue as above. In addition, when I booted into Windows 7 and looked at the boot menu options, the recovery/repair option was not there - only my Windows installation.

    Finally, I ran the Disk Management tool (diskmgmt.msc) and took a look at the contents of my "System Reserved" partition (which was set to "Active" as normal). It's unclear to me what the contents should look like; however, it is my understanding that the WinRE environment gets installed to this partition. (As part of the above troubleshooting I followed http://superuser.com/questions/25728/how-to-fix-windows-7-boot-process which led to http://www.sevenforums.com/tutorials/668-system-recovery-options.html.)
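
    Windows 7 ships a tool for exactly this. A sketch of the usual sequence, run from an elevated command prompt - reagentc manages the WinRE registration, and the Winre.wim it points at normally lives under a \Recovery folder on the System Reserved partition:

        rem Where does Windows think the recovery environment is, and is it enabled?
        reagentc /info

        rem Re-register and enable it (expects a valid Winre.wim to point at)
        reagentc /enable

    If /info reports no recovery image at all, the Winre.wim itself is missing and would need to be restored from the install media before /enable can succeed.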

  • What program sent which packet to the network [closed]

    - by Erik Johansson
    I would like to have a tcpdump-like program that shows which program sent a specific packet, instead of just getting the port number. This is a generic problem I've had on and off: when you have an old tcpdump file lying around, you have no way to find out which program was sending that data. The answers to "How can I identify which process is making UDP traffic on Linux?" indicate that auditd, dTrace, OProfile or SystemTap could solve this, but they don't show how - i.e. they don't show how to get the source port of the program calling bind().

    The problem I had was strange UDP packets, and since those ports are so short-lived, it took me a while to solve the issue. I solved it by running an ugly hack similar to:

        while true; do date +%s.%N; netstat -panut; done

    So I'm after either a method better than this hack, a replacement for tcpdump, or some way to get this info from the kernel so I can patch tcpdump.

    EDIT: This was asked on superuser ("tracking what programs sends to net"), with no good solution there either.
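
    One way to get per-process attribution without patching anything - a sketch using auditd, assuming an x86_64 kernel with the audit subsystem enabled (the key name netlog is arbitrary):

        # Log every connect() and sendto() together with the PID/comm that made it
        auditctl -a always,exit -F arch=b64 -S connect -S sendto -k netlog

        # Later, correlate the audit records with the capture's timestamps
        ausearch -k netlog --start recent -i

    The audit log carries timestamps, PIDs and socket addresses, so records can be matched against the packets in an old pcap after the fact rather than racing netstat in a loop.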

  • Code to update HyperV Export file

    - by Andy Schneider
    I am using the HyperV module from CodePlex to do a "config only" export from a 2008 R2 Hyper-V server. In order to import the configuration on another Hyper-V server, I need to edit the value of CopyVMStorage in the EXP file. This file is an XML file. I wrote the following code in PowerShell to do the update for me; the variable $existing is the existing EXP file:

        $xml = [xml](get-content $existing)
        $xpath = '//PROPERTY[@NAME ="CopyVmStorage"]'
        foreach ($node in $xml.SelectNodes($xpath)) {$node.Value = 'TRUE'}
        $xml.Save($existing)

    This code makes the correct changes to the XML. However, when I go to import the file on the Hyper-V server, I get an error that says the file format is incorrect. I am wondering if the encoding of the file is incorrect or if there is something else going on. If I edit the file manually in WordPad, it imports without an issue. The filename is a GUID with a .exp extension, and it appears that the file name is too long for Notepad to open. Notepad throws an error trying to open the file, which is why I went with WordPad.

    I have noticed that the file that is updated with PowerShell comes out formatted, whereas the raw file is XML all bunched together with no whitespace. Any ideas on what "file format" means in this Hyper-V error message, and how I might be able to use my code to automate this change in the XML without changing the file format?
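
    Two things change when XmlDocument.Save writes the file back: the whitespace (unless PreserveWhitespace is set before loading) and, potentially, the encoding. A hedged sketch that keeps both stable - the UTF-16 choice at the end is an assumption, so it's worth checking what encoding the original export actually uses first:

        # Keep the original formatting intact when the XML is re-saved
        $xml = New-Object System.Xml.XmlDocument
        $xml.PreserveWhitespace = $true
        $xml.Load($existing)

        $xpath = '//PROPERTY[@NAME ="CopyVmStorage"]'
        foreach ($node in $xml.SelectNodes($xpath)) { $node.Value = 'TRUE' }

        # Save through an explicit writer so the encoding is what we choose,
        # not whatever XmlDocument defaults to (assumption: the .exp is UTF-16)
        $writer = New-Object System.IO.StreamWriter($existing, $false, [System.Text.Encoding]::Unicode)
        $xml.Save($writer)
        $writer.Close()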
