Search Results

Search found 21769 results on 871 pages for 'check constraints'.


  • Why does HP Update at remote system trigger RDP printing at local system?

    - by lcbrevard
    This is obscure. When connected with RDP to another system that has HP Update installed on it, either directly running HP Update or having the notification pop up asking whether you want to run HP Update causes the local system to try to print something to a peculiarly chosen local printer. Case 1: Desktop Win 7 Ult system RDP-connected to an HP laptop Win 7 Ult system. When HP Update runs on the laptop, a dialog for XPS Writer Save As... appears on the desktop system. Even if you put in a name, nothing gets generated and the dialog repeats. And repeats. Until you (a) close the RDP connection and (b) clean out the queued entries. If HP Update pops up the request to run the update and you are not at the desk when this happens, there can be dozens of queued requests for this bogus printing. NOTE: the XPS Writer is not selected as a default printer on either system. Case 2: a (different) HP laptop Win 7 Ult system RDP-connected to an XP Pro "brand X" desktop system, but with HP printer drivers installed. If the request to run HP Update pops up on the XP system, dozens of attempts to print, in this case to a Versa Check printer driver, are queued. Dismissing the HP request, closing RDP, and cleaning out the queue are required to stop this. NOTE: the Versa Check printer is not selected as a default printer on either system. THE QUESTION: What the heck is going on here? Some kind of scripting or COM activity that is misdirected?

    Read the article

  • 750Gig Hard Drive shows full with only 315Gigs used

    - by Chris Kelly
    I have a Win7 laptop with a 750 GB C: drive. It came from the manufacturer partitioned with 714 GB usable. I installed programs, music files, etc., up to 285 GB. As of a few weeks ago it showed 285 GB used. Two weeks of house guests later, it shows the drive as full. I deleted some files, but it still shows 652 GB used on this drive while there are only 285 GB of files on it. Relevant details: I am Administrator on the laptop and have fair knowledge of what I am doing. I did not restore from backup, restore from a mirror, upgrade hard drives, or do anything else that would have touched the partition structure - just daily use as an imaging machine and for the web. I have checked the partitions under the disk administrator - no change, still partitioned with 714 GB usable. I have looked through the C: drive by hand with hidden files and folders shown - no change. I have used JDiskReport to double-check - it shows I have only 285 GB on the C: drive. I triple-checked with TreeSize run as Administrator and it also shows 285 GB on the C: drive - yet Windows 7 still shows the drive almost full. I used the Windows 7 utilities to check for disk errors, and defragged the drive. No errors were shown, and there was no change after the defrag.
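
    One consumer of space that file-level tools such as JDiskReport and TreeSize cannot see is Volume Shadow Copy storage (System Restore / Previous Versions), which can balloon after heavy file churn. A quick check from an elevated command prompt - a diagnostic sketch, not a confirmed diagnosis for this machine:

      rem Show how much space shadow copies are currently using on C:
      vssadmin list shadowstorage

      rem If that accounts for the missing space, it can be capped, e.g.:
      vssadmin resize shadowstorage /for=C: /on=C: /maxsize=20GB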

    Read the article

  • grub refuses to install to raid array

    - by ronno
    I have a software RAID 0 setup dual-booting Windows 7 and Ubuntu 12.04. The GRUB bootloader that is already on the hard drive seems to work fine. However, since the latest package update for grub, it refuses to install the new version to the hard disk. grub-install throws the following error:

      /usr/sbin/grub-probe: error: cannot find a GRUB drive for /dev/mapper/< raid name_RAID0p9. Check your device.map.
      Auto-detection of a filesystem of /dev/mapper/< raid name_RAID0p9 failed. Try with --recheck.
      If the problem persists please report this together with the output of "/usr/sbin/grub-probe --device-map="/boot/grub/device.map" --target=fs -v /boot/grub" to < [email protected]

    update-grub prints the same "/usr/sbin/grub-probe: error: cannot find a GRUB drive for /dev/mapper/< raid name_RAID0p9. Check your device.map." on every alternate line. I don't understand what exactly is going on. I'm afraid to reinstall the grub package because it might mess up the boot, which currently works fine. Is it safe to just ignore this?
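
    Since the error points at a stale device.map, one low-risk step to try before any reinstall - a sketch, assuming the stock GRUB 2 shipped with 12.04 - is regenerating the map and re-running the probe:

      # Back up the existing map, then let GRUB regenerate it
      sudo mv /boot/grub/device.map /boot/grub/device.map.bak
      sudo grub-mkdevicemap --no-floppy

      # Re-check; if this succeeds now, grub-install should too
      sudo grub-probe --target=fs -v /boot/grub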

    Read the article

  • Internet connection problem: ping OK, but Outlook and browsers don't work

    - by Ashian
    Hi, for some days now I have had a big problem on my laptop (running Windows XP SP3). When I connect to the internet I can ping web sites, but when I try to browse them, sometimes it works correctly and sometimes the connection to the server is interrupted and I have to refresh the page several times. In that case the browser shows a connection problem immediately after I click on the address bar or a link on the page (without any attempt to connect to the server). I use Firefox and Opera, and both of them have this problem. I tried another ISP and I still have this problem. I am not using any proxy server, and I have checked the proxy settings. In this case Outlook also can't connect to the mail server. The problem goes away for a while after some time passes or after I restart Windows. I checked for viruses and can't find anything. Is there any idea how I can fix it? UPDATE: Thanks for your responses. I tested them, and I also used OpenDNS settings, and that didn't help me. Last night I saw that my local web applications (such as the ADSL modem config web site, and sites that I set up on the Windows XP IIS) also won't open, and an internal communication error appears (an Opera message), which is not related to DNS settings or the internet connection.
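
    Given that the failure is instant (no connection attempt is even made) and sites hosted on the machine itself are affected too, a damaged Winsock catalog (e.g. a leftover LSP from security software) fits the symptoms. A hedged repair sketch for XP SP3, run from a command prompt and followed by a reboot:

      rem Rebuild the Winsock catalog (removes broken LSP entries)
      netsh winsock reset

      rem Optionally also reset the TCP/IP stack
      netsh int ip reset resetlog.txt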

    Read the article

  • Windows Server 2012 licensing issue preventing RDP connections?

    - by QF_Developer
    I am witnessing unusual behaviour on 1 of 5 Windows Server 2012 R2 machines (clean install) that is preventing any remote connections from being established via RDP. I have run through the prerequisites for RDP here, but I am finding that any remote connection attempt instantly stops the Software Protection Service. When I check the event logs I see the following entry: "The Software Protection Service has stopped" (Event ID: 903, Source: Security-SPP). From what I have read, Security-SPP is tasked with enforcing activation and licensing, and it appears that RDP requires this service to be in the running state. Is it possible that I have inadvertently activated this instance of Windows with a key that has already been associated with another instance (we have 5 keys as part of an MSDN subscription)? Would this be sufficient to block RDP access? When I look under System Properties (Windows Activation) it states that Windows is activated, and there are no other obvious indicators that there's a licensing issue. EDIT 1: I ran a PowerShell script to display the product keys for all servers in order to check for any duplication. For the problematic server I am getting the message "The RPC server is unavailable."
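
    Comparing the last five digits of each server's key would confirm or rule out a duplicated key. Since remote WMI is exactly what the "RPC server is unavailable" error is blocking, a sketch run locally in an elevated PowerShell session on each box:

      # Partial product key and license state of the active Windows license
      Get-WmiObject -Class SoftwareLicensingProduct |
          Where-Object { $_.PartialProductKey } |
          Select-Object Name, PartialProductKey, LicenseStatus

      # The built-in licensing script reports the same detail:
      cscript C:\Windows\System32\slmgr.vbs /dlv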

    Read the article

  • ColdFusion: Firefox can't establish a connection to the server at localhost

    - by Fransis
    I installed the ColdFusion 8 trial version on my system (XP Professional SP3). I created a folder in C:/Coldfusion8/wwwroot called "buildProject", containing an index.cfm and some other .cfm files, but I am unable to access either my project files or CFIDE/administrator. I tried the following URLs:

      http://localhost:8500/wwwroot/buildProject/
      http://localhost:8500/CFIDE/administrator/index.cfm
      http://127.0.0.1:8500/wwwroot/buildProject/
      http://127.0.0.1:8500/CFIDE/administrator/index.cfm
      http://localhost/wwwroot/buildProject/index.cfm
      http://localhost/CFIDE/administrator/index.cfm
      http://localhost/wwwroot/buildProject/
      http://localhost/CFIDE/administrator/index.cfm

    Firefox reports: "Firefox can't establish a connection to the server at 127.0.0.1:8500. The site could be temporarily unavailable or too busy. Try again in a few moments. If you are unable to load any pages, check your computer's network connection. If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web." I cleared the browsing history in both IE and FF, restarted the CF server under Control Panel > Administrative Tools > Services, and even restarted IIS - I am still getting the same error. Further, I was trying to access the site in IE/FF via CFBuilder, but still I get the error "The connection was refused when attempting to contact [URL]." My inetpub is on the D: drive whereas CF8 is on the C: drive. Also, when I check IIS 5 (Control Panel > Admin Tools) I do not find localhost under web sites or FTP sites. Kindly help me with a fix.
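
    A first check is whether the ColdFusion built-in web server is listening at all - a diagnostic sketch from a command prompt (the exact service name may differ by edition):

      rem Is anything listening on the built-in CF web server port?
      netstat -ano | findstr :8500

      rem If nothing is listening, check the ColdFusion service state:
      sc query "ColdFusion 8 Application Server"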

    Read the article

  • What does Firefox do when "scanning for viruses" after download?

    - by Joey
    Never mind the fact that Firefox is a browser and not an AV tool, but what exactly does it do after a download? Even on systems that have an up-to-date AV, this generates a pause of several seconds after the download (during which I can't open the file from within the download manager), and I have no idea what FF might be trying there. I know I can turn it off (using FF only at work anyway) but I'm wondering. Some things I can think of that it might be:
    - FF itself is an AV scanner and it loads signatures in the background and whatnot. Sounds highly unlikely, and it shouldn't need tens of seconds for 20 KiB files.
    - FF talks to the installed AV to munch the file. Sounds unneeded, given that most AV programs feature real-time protection anyway and therefore would have caught a virus already, and also because FF does this on systems without any AV installed too.
    - FF uploads the file to some online virus checker. Unlikely and stupid.
    - FF instructs some online virus checker to download the file and check it. Unlikely, and it would be a nice target for DoSing that service.
    - FF generates a hash of the file and sends that somewhere (presumably Google) to check. They then respond with either "Whoa, that hash is totally a virus" or "Nope, that MD5 doesn't look very virus-y to me".
    I'm running out of better ideas. Anyone have a clue?
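
    For what it's worth, the pause is governed by a hidden preference, which at least gives a name to search on - a sketch of switching it off via about:config, assuming a Firefox of that era:

      // about:config - controls the post-download handoff to the OS/AV scan
      browser.download.manager.scanWhenDone = false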

    Read the article

  • .htaccess with GoDaddy not working in subdomain

    - by explorex
    Hi, I have a site uploaded to a shared subdomain (which is inside a folder), and .htaccess is not working; please get the details from here. EDIT: copied from Stack Overflow: I uploaded a website to a subdomain, and every page is not working except the front page - please check it here. What could be the possible reason? I should have 8 pages at the front level and many more at the admin level, but I am getting a 404 error, as you can see. Does anyone have an idea or suggestion? UPDATE: the .htaccess file:

      RewriteEngine On
      RewriteCond %{REQUEST_FILENAME} -s [OR]
      RewriteCond %{REQUEST_FILENAME} -l [OR]
      RewriteCond %{REQUEST_FILENAME} -d
      RewriteRule ^.*$ - [NC,L]
      RewriteRule ^.*$ index.php [NC,L]

    UPDATE on URL routing: I do have a few URL routes like the one below, BUT I don't have any default route:

      $router->addRoute(
          'get-destination',
          new Zend_Controller_Router_Route('destination/get/:id/:dest-name', array(
              'controller' => 'destination',
              'action'     => 'get',
              'id'         => 'id',
              'dest-name'  => 'dest-name'
          ))
      );

    just to make the URLs look cooler; and in my navigation (which is loaded from XML) I have something like:

      <nav>
        <home>
          <label>HOME</label>
          <controller>index</controller>
          <action>index</action>
          <route>default</route>
        </home>

    since I was getting a URL problem from where the URL was routed. Please also check the phpinfo at http://websmartus.com/demo/globaltours/public_html/phpinfo.php
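
    When the same rules work in one place but return 404s on shared hosting where the site lives inside a folder, the usual suspect is the rewrite base. A hedged variant to try - the RewriteBase line is the only change, and the right path depends on which folder the subdomain is mapped to:

      RewriteEngine On
      # Set this to the folder the subdomain points at, e.g. / or /globaltours/
      RewriteBase /
      RewriteCond %{REQUEST_FILENAME} -s [OR]
      RewriteCond %{REQUEST_FILENAME} -l [OR]
      RewriteCond %{REQUEST_FILENAME} -d
      RewriteRule ^.*$ - [NC,L]
      RewriteRule ^.*$ index.php [NC,L]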

    Read the article

  • Computer restarts in a loop during Windows loading

    - by Robinson G.
    My computer restarts just after or during Windows loading (the four squares coming together), or after some time on the desktop if I'm idle. In order to start the computer I have to swap my RAM (24 GB) between different DIMM slots; for example (I have 4 slots): 1-2-4 or 2-3-4 or 1-3-4, not the same positions each boot, or it restarts in a loop. I can also change the RAM voltage from 1.5 to 1.6 V and it starts; at the next reboot I have to change it back to 1.5 V or it won't boot. And if, after all that, I succeed in booting, I have to use the computer for about 10 minutes and then it's OK; if I don't, it restarts by itself after a few minutes. If I stay in the BIOS, everything is OK - I could stay there for a whole year without it restarting. I have checked my RAM with Memtest (4x8 GB DDR3 SDRAM): OK. I have checked without the graphics card: still the same. I tried to reset the BIOS by removing and reinserting the battery: still the same. I tried to reset my BIOS, and many settings related to the RAM in the menu: still the same. CPU temps are just fine (around 35°C). I was of course thinking about the motherboard, but I want to be sure. Motherboard: MSI ZH77A-G43. RAM: 4x8 GB, dual-channel mode or not (depends on the slots I use, of course). PSU: 600 W (enough to run the whole config). CPU: i7 3770 (non-K).

    Read the article

  • Apache2.2 not responding or logging anything on Win 7

    - by Adam
    I'm having some trouble with Apache2.2 on Windows 7. For over a year it's been running no problem, but all of a sudden requests have just stopped responding. They don't time out as such, the browser just keeps on waiting forever. Nothing is recorded in either the error log (set to debug level), the access log, or Windows' Event Log. The problem showed up when I added a new VHost and restarted, however a syntax check has shown there's no problem with the config (from the little I changed), and the service does actually start error free. I've also disabled VHosts and tried with just localhost. I've tried to telnet to the web server, and it connects, but nothing happens. The prompt just goes blank and I can't type anything, and effectively become stuck. I've ensured there's a rule within Windows Firewall for Apache, and I've even disabled the entire thing just to check it wasn't the cause. Still the same. If I stop Apache however, the request fails immediately. I've uninstalled and reinstalled Apache, in the hope it might magically fix something using the default config, but still no joy. I've tried using a different port but nothing different. Does anybody have any suggestions to fix this? Or to perhaps try and figure out either if it's Apache itself not responding or something sitting between the two that's holding things up? I'm not too savvy on debugging Windows issues like this and I've been searching for hours but not found anything of use to me. Cheers Adam
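
    One way to narrow down whether Apache itself owns the hung connections, or whether something in between (an LSP, a local proxy, another service) has grabbed port 80 - a diagnostic sketch from an elevated command prompt while a request is hanging:

      rem -a all connections, -b owning executable, -n numeric, -o PID
      netstat -abno | findstr :80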

    Read the article

  • Error authenticating git repository with Redmine

    - by woni
    I've set up Redmine 2.1 on my Debian Squeeze server following this tutorial, HowTo configure Redmine for advanced git integration (I tried to use the grack path). The Redmine server is running properly, but I have a problem granting users access to git repositories. When I try to clone a repository it says:

      error: The requested URL returned error: 500 while accessing

    The Apache error.log shows this entry:

      [Fri Sep 28 15:50:56 2012] [crit] [client xx.xx.xx.xx] configuration error: couldn't check user. Check your authn provider!: /repo.git/info/refs

    It also asks me for a user and password when cloning, but it shouldn't if I understand the tutorial correctly. I'm using the Redmine authentication module:

      <VirtualHost *:80>
          ServerName my.server.at
          DocumentRoot "/var/www/my.server.at/public"
          PerlLoadModule Apache::Redmine
          <Directory "/var/www/my.server.at/public">
              Options None
              AllowOverride None
              Order allow,deny
              Allow from all
          </Directory>
          SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
          SetEnv GIT_PROJECT_ROOT /var/git/my.server.at/
          SetEnv GIT_HTTP_EXPORT_ALL
          ScriptAlias /git/ /usr/lib/git-core/git-http-backend
          <Location />
              Order allow,deny
              Allow from all
              AuthType Basic
              AuthName Git
              Require valid-user
              AuthBasicAuthoritative Off
              AuthUserFile /dev/null
              AuthGroupFile /dev/null
              PerlAccessHandler Apache::Authn::Redmine::access_handler
              PerlAuthenHandler Apache::Authn::Redmine::authen_handler
              RedmineDSN "DBI:mysql:database=redmine;host=localhost"
              RedmineDbUser "user"
              RedmineDbPass "password"
              RedmineGitSmartHttp yes
          </Location>
      </VirtualHost>

    Can someone help me, explain the error, and tell me what I can do to solve my problem?
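
    One detail worth double-checking, since "couldn't check user" typically means the authentication handler never actually loaded: the handlers are referenced as Apache::Authn::Redmine, so the module load line has to match. A hedged sketch - the Redmine.pm path shown is the usual Debian location; verify it on your system:

      PerlRequire /usr/share/redmine/extra/svn/Redmine.pm
      PerlLoadModule Apache::Authn::Redmine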

    Read the article

  • Small maximum number of connections on a Linux router

    - by Eugene
    I have a Linux box acting as a router with no iptables or other firewall and no networking applications running on it, just pure router. I've put it in a test environment that generates many TCP connections, each having unique source and destination IP, and those connections go through this router. I'm observing that number of connections successfully created rise to approximately 500 and then no more connections can be created for several minutes, then another 100 connections can be created and there is another pause, and so on. If 10 connections for each source-destination pair are created, then maximum numbers go about 10 times up, so the problem is probably with many connections from different IPs. As traffic is simply routed, it doesn't have to do with number of file descriptors, iptables connection tracking and other things often proposed to check in similar cases. The box has plenty of free RAM and CPU, both NICs are gigabit. The kernel is 2.6.32. I've already tried increasing net.core.*mem_max, net.core.netdev_max_backlog and txqueuelen on both NICs, with completely no effect. What else should I check ? Is there some rate-limit in the kernel itself ?
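
    One kernel limit that matches these numbers, assuming the unique IPs sit on directly attached subnets, is the ARP/neighbour table: its defaults (gc_thresh1/2/3 = 128/512/1024) line up with a stall at roughly 500 entries. A sketch of checking and raising the thresholds:

      # Overflow shows up in the kernel log
      dmesg | grep -i 'neighbour table overflow'

      # Current thresholds
      sysctl net.ipv4.neigh.default.gc_thresh1 \
             net.ipv4.neigh.default.gc_thresh2 \
             net.ipv4.neigh.default.gc_thresh3

      # Raise them (persist in /etc/sysctl.conf if this fixes it)
      sysctl -w net.ipv4.neigh.default.gc_thresh1=1024
      sysctl -w net.ipv4.neigh.default.gc_thresh2=2048
      sysctl -w net.ipv4.neigh.default.gc_thresh3=4096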

    Read the article

  • How to get the exit code of a script started in a screen session

    - by Bettina
    Hi folks, I am currently creating a backup script which uses screen to start a backup job with rsync inside a screen session. The backup jobs are started as follows:

      screen -dmS backup /usr/bin/rsync ...

    As soon as the rsync job is finished, the screen session terminates automatically. To make sure that the backup was successful, I would like to check the exit code of the rsync job, but unfortunately I really don't know how to get the exit code after the screen has terminated. Does someone have a good idea how to automatically check whether the rsync job was successful or not? Would be great if someone does. I already thought about using a temp file, like this:

      screen -dmS myScreen "rsync -av ... ; echo $? > /tmp/myExitCode"

    but this unfortunately does not work. Then I thought about using stderr, as in the example below:

      screen -dmS myScreen "rsync -av ... 2> /tmp/rsync-stderr"

    None of my ideas has worked out so far, since stderr is not written when I use the command above. :-( Would be great if someone has a good idea or even a solution. Cheers, Bettina
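
    The likely reason both attempts fail is that screen does not pass its argument list through a shell, so the ; and the redirection never take effect. A sketch that wraps the job in an explicit shell (paths and file names are illustrative):

      # screen runs bash; bash runs rsync and records the exit code
      screen -dmS backup bash -c '/usr/bin/rsync -av /src/ /dst/; echo $? > /tmp/backup.exit'

      # later, e.g. from the calling script:
      if [ "$(cat /tmp/backup.exit 2>/dev/null)" = "0" ]; then
          echo "backup OK"
      else
          echo "backup FAILED"
      fi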

    Read the article

  • disk write cache buffer and separate power supply

    - by HugoRune
    Windows has a setting to turn off the write-cache buffer (see image): "Turn off Windows write-cache buffer flushing on the device. To prevent data loss, do not select this check box unless the device has a separate power supply that allows the device to flush its buffer in case of power failure." Is it feasible and economical to get such a "separate power supply" for the internal SATA drives of a non-server PC? Under what name is such a power supply sold? I know that there are UPS devices that can be connected to external drives, but what is required to be able to switch this setting on safely for an internal disk? The setting has different descriptions in different versions of Windows:
    - Windows XP: "Enable write caching on the disk. This setting enables write caching in Windows to improve disk performance, but a power outage or equipment failure might result in data loss or corruption."
    - Windows Server 2003: "Enable write caching on the disk. Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power."
    - Windows Vista: "Enable advanced performance. Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power."
    - Windows 7 and 8: "Turn off Windows write-cache buffer flushing on the device. To prevent data loss, do not select this check box unless the device has a separate power supply that allows the device to flush its buffer in case of power failure."
    This article by Raymond Chen has some more detailed information about what the setting does.

    Read the article

  • .php files blank - .php5 files work

    - by Kleidi
    I have a problem with a server of mine. I've installed Virtualmin/Webmin on it for administration, and I have 1 domain on it. DNS management is external. On this domain I only have an HTML "Under Construction" index and 5 subdomains. In all those subdomains I have PHP systems running perfectly. I've tried to install WordPress on the main domain and I'm having some issues: no .php file loads. I made a phpinfo file to check, and it won't work either; only a blank page appears. When I look at its source code in the browser, the PHP code appears. I changed the extensions to .php5 and it worked perfectly. Something is going wrong, but I can't figure out what. I have checked the Apache error log and nothing appears. 3 days ago I upgraded from PHP 5.2.* to 5.4.21. The server is running CentOS 5.10.
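
    Since .php5 reaches the interpreter but .php does not, the first thing worth checking is which handler each extension is mapped to - a diagnostic sketch, assuming a standard CentOS Apache layout (under Virtualmin the mapping may also live in the per-domain vhost):

      # Where are .php and .php5 wired up?
      grep -riE 'AddHandler|AddType|SetHandler' /etc/httpd/conf /etc/httpd/conf.d

      # A typical fix is giving .php the same handler .php5 already has, e.g.:
      #   AddHandler application/x-httpd-php5 .php5 .php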

    Read the article

  • How do I play nicely with MS SQL / SQL Server 2008?

    - by acidzombie24
    Big problem. I have nearly given up. I am trying to port my prototype to use MS SQL so it will work on a server once I get it (the server will be SQL Server 2008, shared; I don't know any more than that). I tried to connect to SQL Server via the Visual Studio IDE and had no luck. I enabled TCP and named pipes and restarted the service (and the computer), still with no luck. I remembered about .mdf files, so I made one; after the obstacle of not being able to figure out the required connection string, I found that Visual Studio has it in its properties, and I successfully connected with that. Then I had a problem with nested transactions. After not being able to figure out how to check, I wondered if I could configure it to allow them somehow. I always thought all MS SQL editions were the same except for limitations, but SQL Server seems to support nested transactions, so there's no point trying to work around the problem with .mdf files, since I won't need them and really just used them to port the base of my SQL code and to check that the syntax is correct. I tried installing SQL Server Management Studio since people mentioned it several times (as a solution or at least a help). When installing it on Windows 7 it says it may not be compatible. After running it, it launched SQL Server Installation Center (64-bit), which doesn't seem to be the same thing, as I don't see a way to modify any of my server (networking) configurations or edit user permissions, etc. I am clueless what to do next. Does anyone have any ideas? I'm posting here because I think my problem is more about configuration and SQL Server than programming.
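
    Before fighting the IDE any further, it can help to confirm that the instance is reachable at all using the sqlcmd client that ships with SQL Server - a sketch, where the instance name .\SQLEXPRESS is an assumption:

      rem -S server\instance, -E Windows authentication, -Q run query and exit
      sqlcmd -S .\SQLEXPRESS -E -Q "SELECT @@VERSION"

      rem A failure here points at connectivity/configuration, not at your code.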

    Read the article

  • What are the possible problems when wget returns code 500 but the same request works in normal browsers?

    - by markus
    What should I be looking for when wget returns 500 but the same URL works fine in my web browser? I don't see any access_log entries that seem to be related to the error.

      DEBUG output created by Wget 1.14 on linux-gnu.
      <SSL negotiation info stripped out>
      ---request begin---
      GET /survey/de/tools/clear-caches/password/<some-token> HTTP/1.1
      User-Agent: Wget/1.14 (linux-gnu)
      Accept: */*
      Host: testing.thesurveylab.net
      Connection: Keep-Alive
      ---request end---
      HTTP request sent, awaiting response...
      ---response begin---
      HTTP/1.0 500 Internal Server Error
      Date: Wed, 12 Dec 2012 14:53:07 GMT
      Server: Apache/2.2.3 (CentOS)
      Set-Cookie: blueprint2-staging=8jnbmkqapl30hjkgo0u6956pd1; path=/
      Expires: Thu, 19 Nov 1981 08:52:00 GMT
      Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
      Pragma: no-cache
      Strict-Transport-Security: max-age=8640000;includeSubdomains
      X-UA-Compatible: IE=Edge,chrome=1
      Content-Length: 5
      Connection: close
      Content-Type: text/html; charset=UTF-8
      ---response end---
      500 Internal Server Error
      Stored cookie testing.thesurveylab.net -1 (ANY) / <session> <insecure> [expiry none] blueprint2-staging 8jnbmkqapl30hjkgo0u6956pd1
      Closed 3/SSL 0x0000000001f33430
      2012-12-12 15:53:07 ERROR 500: Internal Server Error.
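
    Since the browser and wget differ mainly in the headers and cookies they send, a common next step is to replay the request while impersonating the browser - a sketch; the User-Agent string is illustrative and the cookie value is taken from the debug output above:

      wget -d \
           --header='User-Agent: Mozilla/5.0 (X11; Linux x86_64) Firefox/17.0' \
           --header='Cookie: blueprint2-staging=8jnbmkqapl30hjkgo0u6956pd1' \
           'https://testing.thesurveylab.net/survey/de/tools/clear-caches/password/<some-token>'

      # If this succeeds, the application is failing on a missing header or
      # cookie rather than on anything wget-specific.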

    Read the article

  • yum trying to install el5 when I am on el6

    - by giorgio79
    When I run the following yum command I get this error:

      Package: git-1.7.10.1-1.el5.rf.x86_64 (rpmforge)
      Requires: libcurl.so.3()(64bit)

    I read that this error is due to running an el5 rpmforge repo or having some el5 packages installed. How can I solve this problem?

      $ yum install git
      Loaded plugins: fastestmirror
      Loading mirror speeds from cached hostfile
       * base: centos.kiewel-online.ch
       * epel: fedora.kiewel-online.ch
       * extras: centos.kiewel-online.ch
       * rpmforge: mirror.de.leaseweb.net
       * updates: centos.kiewel-online.ch
      Setting up Install Process
      Resolving Dependencies
      --> Running transaction check
      ---> Package git.x86_64 0:1.7.10.1-1.el5.rf will be installed
      --> Processing Dependency: perl-Git = 1.7.10.1-1.el5.rf for package: git-1.7.10.1-1.el5.rf.x86_64
      --> Processing Dependency: perl(Git) for package: git-1.7.10.1-1.el5.rf.x86_64
      --> Processing Dependency: libexpat.so.0()(64bit) for package: git-1.7.10.1-1.el5.rf.x86_64
      --> Processing Dependency: libcurl.so.3()(64bit) for package: git-1.7.10.1-1.el5.rf.x86_64
      --> Running transaction check
      ---> Package compat-expat1.x86_64 0:1.95.8-8.el6 will be installed
      ---> Package git.x86_64 0:1.7.10.1-1.el5.rf will be installed
      --> Processing Dependency: libcurl.so.3()(64bit) for package: git-1.7.10.1-1.el5.rf.x86_64
      ---> Package perl-Git.x86_64 0:1.7.10.1-1.el5.rf will be installed
      --> Finished Dependency Resolution
      Error: Package: git-1.7.10.1-1.el5.rf.x86_64 (rpmforge)
             Requires: libcurl.so.3()(64bit)
       You could try using --skip-broken to work around the problem
       You could try running: rpm -Va --nofiles --nodigest
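
    A quick way to confirm the suspicion that the rpmforge repo itself is the el5 one - a sketch; verify the correct el6 release package URL before installing anything:

      # ".el5" in this output confirms the mismatch
      rpm -q rpmforge-release

      # The repo file should reference el6 paths, not el5
      grep baseurl /etc/yum.repos.d/rpmforge.repo

      # If it is the el5 repo, remove it and install the el6 rpmforge-release rpm.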

    Read the article

  • Access an external SSH server through a restrictive proxy

    - by Cyrille
    I'm a software developer. I wish to access my computer at home through SSH. For example, I sometimes need to access my personal projects' source code to check how I handled specific problems. Unfortunately, I currently work behind an over-restrictive and anti-productive proxy that wastes a hell of a lot of everyone's time (we often have to visit websites from our smartphones or use a web proxy to check very legitimate websites for answers, and don't get me started on the other "security" overkill features we have to cope with...). Well, back to the subject: I can access my home computer from my phone (SSH, ports 22 and 80 both redirected by the router to port 22). It works, but it's quite uncomfortable. From my office computer, this is what I tried so far:

      export http_proxy=http://user:pass@proxyip:8080
      echo "user:pass" > ~/.corkscrew-auth
      echo "ProxyCommand corkscrew proxyip 8080 %h %p /home/me/.corkscrew-auth" > ~/.ssh/config
      ssh 82.23.34.56 -l me -p 80

      Proxy could not open connnection to 82.23.34.56: Forbidden
      ssh_exchange_identification: Connection closed by remote host

    (same without -p 80). Without corkscrew:

      ssh: connect to host 82.23.34.56 port 80: Connection timed out
      ssh: connect to host 82.23.34.56 port 22: Connection timed out

    Any other idea?
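
    Proxies that answer CONNECT with "Forbidden" usually allow it only to port 443, since that is what HTTPS needs. A sketch of the usual workaround, assuming you can add a port 443 to port 22 redirect on the home router:

      # ~/.ssh/config
      Host home
          HostName 82.23.34.56
          User me
          Port 443
          ProxyCommand corkscrew proxyip 8080 %h %p /home/me/.corkscrew-auth

      # then simply:
      ssh home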

    Read the article

  • Cron job checking for changes in Git repository

    - by HNygard
    We have just moved our server configs to a Git repository, so there should not be any uncommitted changes in any of the repository folders. I was thinking about how I could set up a cron job to check for uncommitted changes. How could a cron job be set up to check for changes in a Git repository? Grepping the output of the git status command might just do it; grep and cron jobs are not my strong side. Here are some sample outputs from git status. Standing in the folder containing the git repository (e.g. /path/gitrepo/) with changed files:

      $ git status
      # On branch master
      # Changes not staged for commit:
      #   (use "git add <file>..." to update what will be committed)
      #   (use "git checkout -- <file>..." to discard changes in working directory)
      #
      #       modified:   apache2/sites-enabled/000-default
      #
      # Untracked files:
      #   (use "git add <file>..." to include in what will be committed)
      #
      #       apache2/conf.d/test
      no changes added to commit (use "git add" and/or "git commit -a")

    Standing in the folder when there are no changes:

      $ git status
      # On branch master
      nothing to commit (working directory clean)

    Update: being synced up with origin is not important; there should just be no local changes. Local files that must be in place go into the .gitignore file. In addition to the server configs there are also git repos for content (static web sites, web apps, WordPress, etc.). None of the repositories should have local changes. We might use Puppet in the long run, since it's being used for development of one of the web apps.
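
    Rather than grepping the human-readable output, git has a script-friendly mode: git status --porcelain prints nothing at all when the working tree is clean. A sketch of a cron-able check (repository paths and the alert address are placeholders):

      #!/bin/sh
      # check-git-clean.sh - mail an alert if any repo has uncommitted changes
      for repo in /path/gitrepo /path/other-repo; do
          if [ -n "$(cd "$repo" && git status --porcelain)" ]; then
              echo "Uncommitted changes in $repo" \
                  | mail -s "git repo dirty: $repo" admin@example.com
          fi
      done

      # crontab entry, e.g. nightly at 02:00:
      # 0 2 * * * /usr/local/bin/check-git-clean.sh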

    Read the article

  • Exchange Server 2010 with multiple domains

    - by air
    I have one Exchange Server 2010, which is working fine with one domain. My Exchange works as follows: a POP3 collector collects emails from one master catch-all account and then delivers them to the Exchange server; this works perfectly. Now I want to add another domain to the same Exchange. I have added the new domain as a trusted domain and in the email policy, and email accounts on this new domain work fine for internal emails. Then I forwarded the new email account to the same catch-all account, but if I send an email from any external email address, the email bounces; I can see the email received by the POP3 collector but bounced by the Exchange server. To make it clearer, let me explain the logic I am working on. I have 2 domains: 1. domain1.com ([email protected]) 2. domain2.com ([email protected] --> [email protected]). Now, on my machine with the Exchange server I have the POP3 collector, which collects all emails from [email protected] and forwards them to the Exchange 2010 server. All emails to domain1.com work perfectly, but when I send an email to [email protected], the email redirects to [email protected] perfectly; however, when the Exchange server receives this email, it bounces. I have also studied the linked article and followed the whole process, but with no success. I have also checked that my DNS/MX is working fine, as the bounce message is coming from my Exchange server. EDIT: The only problem is with the accepted domain; emails come to the Exchange server and then bounce back. I tried this today: I created one user called test, then went to his properties - email; there was only one email account, [email protected]. I tried to send an email to [email protected] from the internet (email bounced). Then again I went to the test user's properties - email and added one email, [email protected]. Again I tried to send an email to [email protected] from the internet (email received). I think the only problem is with the accepted domain, but in hub transport it shows accepted. Is there any way to check whether the domain is properly accepted or not in the Exchange 2010 server? Thanks
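
    Since the open question is whether the second domain is properly accepted, the Exchange Management Shell can show this directly - a sketch; the user name is from the example above:

      # List all accepted domains and their type
      Get-AcceptedDomain | Format-Table Name, DomainName, DomainType

      # Check which addresses a recipient actually has
      Get-Mailbox test | Select-Object -ExpandProperty EmailAddresses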

    Read the article

  • How can I get write permission for the Web (Inetpub) directory on a new Win 7 machine?

    - by marcipollo
    I mirror my Web site on my laptop, and am trying to move the mirror site to a new laptop. I copied the files to the Inetpub directory, and can view them perfectly, but they are read-only (the check-mark is grey, not black), and I cannot change the permission. When I un-check the read-only attribute on the Inetpub directory, and click "apply" it displays a dialog box stating that I need administrative permission to change the attributes. (I am logged in as an administrator). When I click "continue," it pops up another dialog box saying access is denied to the attributes of the file: c:\inetpub\custerr\en-us\500-100.asp That dialog box has an "ignore" button, and if I click that, it appears to work through the directory tree setting the permissions. It leaves all of the files (leafs) set to "read-write," but the directories remain "read only." I am using 64-bit Windows 7. I stopped the IIS service while doing all of this. Might it have something to do with the fact that I copied the files from a different machine in the workgroup (my old laptop)?
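
    Two things may help here, sketched below for an elevated command prompt: for folders, a filled/grey read-only box largely reflects a mixed or system state rather than real write protection, and what actually governs writing is the NTFS ACL, which can be granted explicitly (the account name is a placeholder):

      rem Clear the read-only attribute on files throughout the tree
      attrib -R C:\inetpub\* /S /D

      rem Grant your account modify rights on the tree
      icacls C:\inetpub /grant "YourUser:(OI)(CI)M" /T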

    Read the article

  • What logs / process stats to monitor on an Ubuntu FTP server?

    - by Adam Salkin
    I am administering a server with Ubuntu Server which is running pureFTP. So far all is well, but I would like to know what I should be monitoring so that I can spot any potential stability and security issues. I'm not looking for sophisticated software, more an idea of which logs and process statistics are most useful for checking on the health of the system. I'm thinking that I can look at various parameters output from the "ps" command and compare them to spot things like memory leaks, but I would like to know what experienced admins do. Also, how do I do a disk check so that when I reboot, I don't get a message saying something like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command that I can run as a cron job late at night. How often should it be run? What things should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly grep auth.log for "Fail" or "Invalid". Is there anything else I should look at? I was logging the firewall (UFW), but I have very few open ports (FTP and SSH on a non-standard port), so looking at lists of IPs that have been blocked did not seem useful. Many thanks
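
    On the forced-check point: for ext3/ext4 the boot-time fsck is driven by mount-count and interval limits that tune2fs can show and change; a full check cannot safely run on a mounted filesystem, so a planned check is usually requested for the next reboot instead. A sketch, assuming the root filesystem is /dev/sda1:

      # Show the current limits and counters
      sudo tune2fs -l /dev/sda1 | grep -iE 'mount count|check'

      # Raise the limits so routine reboots are not interrupted
      sudo tune2fs -c 100 -i 6m /dev/sda1

      # Before a planned maintenance reboot, request a check on next boot
      sudo touch /forcefsck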

    Read the article

  • Constant crashes in Windows 7 64-bit when playing games

    - by yx.
    I've tried everything I can possibly think of to fix this problem and I'm totally out of ideas, so any help would be appreciated. The problem: whenever I fire up a game, it works for a short while with no problems and then it crashes. Either it's a hard crash, forcing me to reboot, or Windows reports that the display driver has stopped working and recovered. Here is a list of things I've already tried:
    - Drivers: tried the latest drivers (Catalyst 9.12) as well as the stock drivers that came with the video card. Also have the latest BIOS/chipset.
    - Memtest: ran Memtest86+ overnight with no problems; the Windows diagnostic tool also does not find any problems.
    - Overheating: video card/CPU temperatures are well below peak (42 and 31 Celsius respectively).
    - PSU voltage: CPUID shows that the voltage levels are all above what they should be. The PSU itself is only roughly 16 months old and is a good model.
    - HDD: no errors when checked.
    - GPU: brand new (I replaced the previous card since I thought it was the problem; apparently not).
    - Overclocking: everything is at stock levels; memory voltage is set to the manufacturer's standard.
    Specs: Motherboard: ASUS P5Q Pro. CPU: Core 2 Duo E8400 3.0 GHz. OS: Windows 7 Home Premium 64-bit. Memory: Mushkin Enhanced 4GB DDR2. GPU: Sapphire HD 5850 1GB. PSU: SeaSonic M12 600W ATX12V. DirectX: DX11. The Event Viewer after a crash always has these logged:

      A fatal hardware error has occurred.
      Reported by component: Processor Core
      Error Source: Machine Check Exception
      Error Type: Bus/Interconnect Error
      Processor ID: 1
      The details view of this entry contains further information.

      A fatal hardware error has occurred.
      Reported by component: Processor Core
      Error Source: Machine Check Exception
      Error Type: Bus/Interconnect Error
      Processor ID: 0
      The details view of this entry contains further information.

    A previous card that I had (4850 X2) also produced these errors, so I changed video cards, but the same thing is happening.

    Read the article

  • Getting PAM/user info into PHP - something like Net_Finger instead of a DB?

    - by digitaltoast
    I've got a very small user group who just need to log in, upload, check and then move specific files to a different area when ready. Right now, I use the nginx PAM auth module to log them in against their Unix accounts. As their login is their home directory, I've already got the info to send the uploads to the right area - one line of PHP and no database needed. But I'm maintaining a separate DB just so PHP can welcome them, grab their email and send them an email when processed. Yes, sure, I could use NoSQL or SQLite instead so as to not need a whole MySQL install. But it occurred to me that, as I've got all these blank user fields for phone numbers that I could populate with any data, I could use something like PHP's Net_Finger. Which failed for me with:

      sudo pear install Net_Finger
      Starting to download Net_Finger-1.0.1.tgz (1,618 bytes)
      ....done: 1,618 bytes
      could not extract the package.xml file from "/build/buildd/php5-5.5.9+dfsg/pear-build-download/Net_Finger-1.0.1.tgz"
      Download of "pear/Net_Finger" succeeded, but it is not a valid package archive
      Error: cannot download "pear/Net_Finger"

    At which point I thought I'd stop and take a Server Fault reality check: is this a really bad/dangerous/stupid idea just to stop me having to maintain details in two places rather than one? Is there a better way? Googling shows that it's not an oft-asked thing, so perhaps with good reason?
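
    For what it's worth, PHP can read the GECOS field straight out of the account database without finger at all, via the POSIX extension - a sketch; storing the email address in the GECOS "office" slot is the assumption here:

      <?php
      // gecos is the comma-separated "full name,office,work phone,home phone"
      $info = posix_getpwnam('alice');
      if ($info !== false) {
          $fields   = explode(',', $info['gecos']);
          $fullName = $fields[0];
          $email    = isset($fields[1]) ? $fields[1] : '';
          echo "Welcome back, $fullName\n";
      }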

    Read the article
