Search Results

Search found 100267 results on 4011 pages for 'user instance'.

Page 370 of 4011

  • How to configure what certificates can be issued using Web Enrollment in Windows Server 2008 R2 Enterprise?

    - by antik
    I have a CA installed on one of my Windows Servers in a small farm of systems. I've installed the Certification Authority Web Enrollment and Certificate Enrollment Web Service roles on the CA. I want to issue a Computer certificate to a computer not joined to my domain. The user attempting web enrollment has domain credentials. The user was able to navigate to https://myServerHostname/certsrv and request a User certificate successfully. However, the user needs a Computer cert as well. From the certsrv site, the user tried the following: Advanced Certificate Request, then Create and Submit a Request to this CA. However, the Computer certificate template is not available under the Certificate Template heading; he only sees "User" and "Basic EFS". How do I configure the CA to allow him to request a Computer cert for his system?
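
    Assuming the Computer template (internal name "Machine") has been added to the CA's list of templates to issue and the enrolling account has Enroll permission on it, one hedged workaround for a machine that is not domain-joined is to build the request offline with certreq instead of the web pages. All host, CA, and file names below are placeholders, not taken from the question:

        ; request.inf -- hypothetical machine certificate request
        [NewRequest]
        Subject = "CN=client01.example.com"
        KeySpec = 1
        KeyLength = 2048
        Exportable = FALSE
        MachineKeySet = TRUE
        RequestType = PKCS10

        [RequestAttributes]
        CertificateTemplate = Machine

        rem generate the request, submit it to the CA, then install the issued cert
        certreq -new request.inf request.req
        certreq -submit -config "CASERVER\My Issuing CA" request.req cert.cer
        certreq -accept cert.cer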

    Read the article

  • Apache returns 403 Forbidden for alternative port vhost

    - by Wesley
    I'm having an issue getting vhosts to work on Apache 2.2, Debian 6. I have two VirtualHosts, one on port 80 and one on port 8888. The port 80 one was created automatically by DirectAdmin; the 8888 one is custom. Its configuration is as follows:

        <VirtualHost *:8888>
            DocumentRoot /home/user/public_html/development
            ServerName www.myserver.nl
            ServerAlias myserver.nl
            <Directory "/home/user/public_html/development">
                Options +Indexes +FollowSymLinks +MultiViews
                AllowOverride All
                Order Allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Of course I also have a NameVirtualHost *:8888. The port 80 DocumentRoot is /home/user/public_html/production, which is perfectly accessible and works like a charm. The port 8888 docroot of /home/user/public_html/development is 403 Forbidden, though. I have compared the permissions for both folders; they seem fine to me:

        drwxr-xr-x 2 root root 4096 Aug 17 16:14 development
        drwxr-xr-x 4 root root 4096 Aug 18 04:29 production

    Also, the index file which is supposed to display when accessing through port 8888, located in /development/:

        -rwxr-xr-x 1 root root 41 Aug 17 16:14 index.html

    I have looked at my error_log and found many of the following entries, added to the log file only when accessing through port 8888:

        [Sat Aug 18 04:35:09 2012] [error] [client 27.32.156.232] Symbolic link not allowed or link target not accessible: /home/user/public_html

    /home/user/public_html is a symbolic link that refers to /home/user/domains/mydomain/public_html. The symbolic link has the following permissions:

        lrwxrwxrwx 1 admin admin 29 Aug 17 15:56 public_html -> ./domains/mydomain/public_html

    I'm at a loss. It seems that everything is readable or executable. I've set the Directory to FollowSymLinks in the httpd.conf file, but that doesn't seem to make a difference. If I change that directory tag to <Directory "/home/admin/public_html"> (so it has FollowSymLinks on that as well) it still does not work. Any help is greatly appreciated. If I need to post more information, let me know. I'm pretty much a beginner at this stuff.

    UPDATE: I ended up changing the configuration to point directly at the actual path of the files, avoiding the public_html symlink altogether. That worked. Thanks for the suggestions, folks.

        DocumentRoot /home/user/domains/mydomain/public_html/development

    instead of

        DocumentRoot /home/user/public_html/development
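
    The error message suggests Apache was refusing to traverse the /home/user/public_html symlink itself. As an alternative to the workaround above, a minimal sketch would be to grant FollowSymLinks on the directory that contains the symlink:

        <Directory "/home/user">
            Options +FollowSymLinks
        </Directory>

    If SymLinksIfOwnerMatch is in effect anywhere along the path, this could still fail when the link and target owners differ (here the link is owned by admin while the target's contents are owned by root).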

    Read the article

  • How to run django on localhost with nginx and uwsgi?

    - by user2426362
    How do I run Django on localhost with nginx and uwsgi? This is my config, but it does not work.

    nginx:

        server {
            listen 80;
            server_name localhost;
            access_log /var/log/nginx/localhost_access.log;
            error_log /var/log/nginx/localhost_error.log;

            location / {
                uwsgi_pass unix:///tmp/localhost.sock;
                include uwsgi_params;
            }
            location /media/ {
                alias /home/user/projects/zt/myproject/myproject/media/;
            }
            location /static/ {
                alias /home/user/projects/zt/myproject/myproject/static/;
            }
        }

    uwsgi:

        [uwsgi]
        vhost = true
        plugins = python
        socket = /tmp/localhost.sock
        master = true
        enable-threads = true
        processes = 2
        wsgi-file = /home/user/projects/zt/myproject/myproject/wsgi.py
        virtualenv = /home/user/projects/zt
        chdir = /home/user/projects/zt/myproject
        touch-reload = /home/user/projects/zt/myproject/reload

    This config works on my Ubuntu server with a normal domain (not localhost), but on localhost it does not. If I open localhost in a web browser I get the "Welcome to nginx!" page.
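
    Seeing the "Welcome to nginx!" page usually means another server block (typically the distribution's default site) is answering for localhost. A hedged first step, assuming a Debian/Ubuntu-style sites-enabled layout, is to disable the default site and reload:

        sudo rm /etc/nginx/sites-enabled/default
        sudo nginx -t && sudo service nginx reload

    Alternatively, adding default_server to the listen directive of the block above would make it the catch-all for port 80.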

    Read the article

  • Sendmail Undeliverable Redirection?

    - by Dizzle
    Good afternoon; I don't know much about sendmail, so this may be fairly easy for those of you more experienced with it. We have an account, "[email protected]", sending reports to various groups. From time to time an undeliverable message will be sent back to "[email protected]". We'd like those undeliverable messages to be rerouted, or bounced, from "[email protected]" to a group of our choosing. To carve out a scenario for clarity:

    1. [email protected] sends a report to [email protected] and [email protected]
    2. [email protected] has someone whose mail account no longer exists, triggering an undeliverable message being sent back to [email protected]
    3. Rather than having the undeliverable message sit in [email protected]'s Inbox, we'd like it to be automatically rerouted/bounced to an admin group, [email protected]

    So I guess a "rule" of sorts. I've come across this solution: Sendmail: ignore local delivery. But I don't know enough about sendmail to know whether it fits this situation. Any help is greatly appreciated.
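
    One hedged approach, assuming procmail is (or can be made) the local delivery agent for the reports account, is a recipe in that account's home directory that forwards anything sent by a mailer daemon to the admin group. The admin address below is a placeholder:

        # ~/.procmailrc for the reports account -- hypothetical sketch
        :0
        * ^FROM_MAILER
        ! admins@example.com

    ^FROM_MAILER is procmail's built-in pattern for bounce/daemon senders; everything else continues to be delivered normally.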

    Read the article

  • How can I find the original un-changed configuration file to compare with the *.rpmnew file?

    - by User
    While upgrading from CentOS 5.7 to 5.8 I received the following warnings:

        warning: /etc/sysconfig/iptables-config created as /etc/sysconfig/iptables-config.rpmnew
        warning: /etc/ssh/sshd_config created as /etc/ssh/sshd_config.rpmnew
        warning: /etc/odbcinst.ini created as /etc/odbcinst.ini.rpmnew

    (To understand why such files exist and what one can do with them, read: Why do I have .rpmnew file after an update?)

    I want to know exactly what has been changed in the default config file by comparing the old default file (the original, unchanged configuration file) with the new default file (*.rpmnew). Then I can apply the changes to my modified file (i.e. a diff merge). The problem is I don't know where I can find the original, unchanged configuration file...
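
    One hedged way to recover the pristine old default is to pull it straight out of the previously installed package rather than the filesystem. The sshd example below assumes yum-utils is installed; note that yumdownloader fetches whatever version the repositories currently offer, so getting the exact 5.7-era package may require the vault repository or a cached copy:

        # which package owns the file?
        rpm -qf /etc/ssh/sshd_config
        # fetch that package without installing it
        yumdownloader openssh-server
        # extract the stock config from the package payload into ./etc/ssh/
        rpm2cpio openssh-server-*.rpm | cpio -idmv ./etc/ssh/sshd_config
        # compare the old default against the new default
        diff etc/ssh/sshd_config /etc/ssh/sshd_config.rpmnew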

    Read the article

  • Is there any way to prevent a Mac from creating dot underscore files?

    - by SoaperGEM
    At work we're letting one of our very tech-savvy clients actually help out a little with a few development projects specific to him. However, he uses his own personal MacBook, and as he edits files on our (Windows) networks, his MacBook always creates a bunch of unnecessary meta files that we end up deleting later. For instance, it creates a file called .DS_Store in any directory he opens, as well as "dot underscore" files for each file he edits. So, for instance, if he's editing a file called "Main.php", his MacBook will create another file called "._Main.php". I know there are ways to prevent creation of .DS_Store files, but I haven't found anything about preventing creation of these hidden files prefixed with dot underscore. Is there any way to turn that off on Macs? Any way to prevent it from creating those files in the first place?
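
    For what it's worth, the commonly cited mitigation only covers half of the problem: .DS_Store creation on network volumes can be suppressed, while the ._ AppleDouble files generally have to be cleaned up after the fact. A hedged sketch (the share path is an example):

        # stop .DS_Store files on network volumes (run on the client's Mac, then log out and back in)
        defaults write com.apple.desktopservices DSDontWriteNetworkStores true

        # remove existing ._ files from a mounted share
        dot_clean -m /Volumes/ProjectShare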

    Read the article

  • How to disable Spotlight content indexing in Mac OS

    - by o.v.
    Coming from Windows, I could always set Live Search to index only file names, not their content. Is this something that can be done with Spotlight on a Mac? It used to index absolutely everything; for instance, it would return a bunch of video files for any obscure character combination typed into the search field. Right now I've disabled Spotlight entirely as per this answer, but that seems to have disabled searching altogether. For instance, Finder has yet to locate any .pdf files in a small directory as I'm typing this question (unlike Windows search, which would still work even with indexing disabled). Alternatively, if there is any way (including a trusted third-party app) to index only file names and metadata, e.g. ID3 tags, that would likely be the preferred option.
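
    Finder search depends on the Spotlight index, so with indexing off nothing is found. As far as I know there is no supported "names only" mode, but a hedged sketch for getting search back and then querying by name only:

        # re-enable indexing on the boot volume (it was turned off entirely)
        sudo mdutil -i on /
        # check indexing status
        mdutil -s /
        # query by file name only, regardless of content
        mdfind -name "report.pdf"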

    Read the article

  • Windows 7 file explorer preview window and password protected word documents

    - by Carbonara
    When using the Windows 7 Explorer with the preview pane open, you get a little preview of a file when you click on it. This includes Word documents, Excel spreadsheets, etc. My problem is when the Word document is password protected: clicking on it in Explorer automatically asks for the password in order to display the preview, whether you single- or double-click it. You then get an empty Word instance running (which allows it to display the preview) and another instance of Word with your actual file, and you're asked for the password twice in total. This is annoying and untidy. Is there a way of stopping the preview pane from trying to display password-protected documents, and thus not asking for the password for a preview?

    Read the article

  • Virtual Box - not filling entire screen

    - by jdavis
    I am new to VirtualBox and am trying to set up an instance of 64-bit Windows 7. I have the virtual machine running with Windows 7 now installed, but it only fills up a small portion of my screen. Even when I go full screen, the window stays the same size and the rest of the screen is filled with gray space. I have installed the VirtualBox Guest Additions, which allowed me to go from a resolution of 800x600 to 1024x768, but this still isn't satisfactory, as my laptop display is 1600x900. Any help on this would be most appreciated. Thanks.
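
    With the Guest Additions installed, it may also be worth asking the guest for the panel's native resolution directly from the host. A hedged sketch ("Win7" stands in for the actual VM name):

        VBoxManage controlvm "Win7" setvideomodehint 1600 900 32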

    Read the article

  • how to reduce git pulling time each time you do a make world on the Xen source

    - by Registered User
    I am compiling Xen from source, and each time I do a make world it gives one error or another. My problem is not those errors (I am trying to debug them); the problem is that each time I do a make world, Xen pulls things from the git repository again:

        + rm -rf linux-2.6-pvops.git linux-2.6-pvops.git.tmp
        + mkdir linux-2.6-pvops.git.tmp
        + rmdir linux-2.6-pvops.git.tmp
        + git clone -o xen -n git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git linux-2.6-pvops.git.tmp
        Initialized empty Git repository in /usr/src/xen-4.0.1/linux-2.6-pvops.git.tmp/.git/
        remote: Counting objects: 1941611, done.
        remote: Compressing objects: 100% (319127/319127), done.
        remote: Total 1941611 (delta 1614302), reused 1930655 (delta 1604595)
        Receiving objects: 20% (1941611/1941611), 98.17 MiB | 87 KiB/s, done.

    As the last line shows, it is still consuming my bandwidth pulling things from the internet. How can I stop this step from happening each time and use the existing git repository?
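
    A hedged, build-system-agnostic workaround is to keep a local mirror of the pvops tree and tell git to substitute it whenever the build scripts clone the upstream URL (the mirror path is an example):

        # clone the mirror once
        git clone --mirror git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git /usr/src/mirrors/xen-pvops.git

        # rewrite future clones/fetches of that URL to use the local mirror
        git config --global url."/usr/src/mirrors/xen-pvops.git".insteadOf \
            "git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git"

    Subsequent make world runs that re-clone the tree should then hit the local mirror instead of the network.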

    Read the article

  • Windows 7 - Move open programs more freely on the taskbar

    - by SalamiArmi
    When I used Windows XP last year, I had a program called Taskix which let me move the taskbar buttons for programs into any order I wanted. Now that I have Windows 7, a similar function comes built into the operating system; however, it isn't as flexible as Taskix was in XP. Theoretical situation:

    1. Open a folder named #1
    2. In folder #1, open an instance of a program
    3. Open a folder named #2
    4. In folder #2, open an instance of the same program that was opened in step 2

    After doing this, I will have (in order, on the taskbar) two folders open and two instances of the program open. My question really is: is there any way to fight this automatic ordering of buttons on the taskbar? A registry hack or something would be best, but if there's a third-party program out there, that would be nice to know about. Thanks!

    Read the article

  • Extensions disappear when I close and open Google Chrome

    - by PavanM
    I am running the latest version of Google Chrome, 23.0.1271.97 (Official Build 171054) m, on Windows 7. Any new extension I install simply disappears (not disabled, total disappearance) once I close and restart Google Chrome. This does not happen to one of my old extensions; it stays there across Chrome restarts. I tried everything Google help suggested: I created a new user profile by renaming the Defaults folder, and I checked for any permission change that the extensions might have undergone (this is not the case). I am not running in developer mode. This happens only when I close ALL instances of Google Chrome; even if one instance of Chrome is running, it doesn't happen. But I can't have an instance of Google Chrome always running :( I even reported the issue to the Google Chrome team to no avail, and new.crbug.com is offline. And I skimmed through many threads opened for the same issue, only to find souls like me. SE is my last resort :)

    Read the article

  • How to open a server port outside of an OpenVPN tunnel with a pf firewall on OSX (BSD)

    - by Timbo
    I have a Mac mini that I use as a media server running XBMC; it serves media from my NAS to my stereo and TV (which has been color calibrated with a Spyder3Express, happy). The Mac runs OSX 10.8.2 and the internet connection is tunneled for general privacy over OpenVPN through Tunnelblick. I believe my anonymous VPN provider pushes "redirect_gateway" to OpenVPN/Tunnelblick, because when the tunnel is up it effectively tunnels all non-LAN traffic in- and outbound. As an unwanted side effect, that also opens the box's server ports unprotected to the outside world and bypasses my firewall-router (Netgear SRX5308). I have run nmap from outside the LAN on the VPN IP, and the server ports on the mini are clearly visible and connectable. The mini has the following ports open: ssh/22, ARD/5900, and 8080+9090 for the XBMC iOS client Constellation.

    I also have a Synology NAS which, apart from LAN file serving over AFP and WebDAV, only serves up an OpenVPN/1194 and a PPTP/1723 server. When outside of the LAN I connect to this from my laptop over OpenVPN and over PPTP from my iPhone. I only want to connect over AFP/548 from the mini to the NAS.

    The border firewall (SRX5308) just works excellently, stable and with a very high throughput when streaming from various VOD services. My connection is a 100/10 with close to theoretical max throughput. The ruleset is as follows.

    Inbound:

        PPTP/1723     Allow always to 10.0.0.40 (NAS/VPN server) from a restricted IP range corresponding to the possible cell provider range
        OpenVPN/1194  Allow always to 10.0.0.40 (NAS/VPN server) from any

    Outbound:

        Default outbound policy: Allow Always
        OpenVPN/1194 TCP  Allow always from 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider)
        OpenVPN/1194 UDP  Allow always from 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider)
        Block always from NAS to any

    On the mini I have disabled the OSX Application Level Firewall, because it throws popups which don't remember my choices from one time to another, and that's annoying on a media server. Instead I run Little Snitch, which controls outgoing connections nicely on an application level. I have configured the excellent OSX built-in firewall pf (from BSD) as follows (Apple App Firewall tie-ins removed; # replaced with % to avoid formatting errors):

        ### macro name for external interface.
        eth_if = "en0"
        vpn_if = "tap0"
        ### wifi_if = "en1"
        ### %usb_if = "en3"
        ext_if = $eth_if
        LAN="{10.0.0.0/24}"

        ### General housekeeping rules ###
        ### Drop all blocked packets silently
        set block-policy drop

        ### all incoming traffic on external interface is normalized and fragmented
        ### packets are reassembled.
        scrub in on $ext_if all fragment reassemble
        scrub in on $vpn_if all fragment reassemble
        scrub out all

        ### exercise antispoofing on the external interface, but add the local
        ### loopback interface as an exception, to prevent services utilizing the
        ### local loop from being blocked accidentally.
        ###
        set skip on lo0
        antispoof for $ext_if inet
        antispoof for $vpn_if inet

        ### spoofing protection for all interfaces
        block in quick from urpf-failed

        #############################
        block all

        ### Access to the mini server over ssh/22 and remote desktop/5900 from LAN/en0 only
        pass in on $eth_if proto tcp from $LAN to any port {22, 5900, 8080, 9090}

        ### Allow all udp and icmp also, necessary for Constellation. Could be tightened.
        pass on $eth_if proto {udp, icmp} from $LAN to any

        ### Allow AFP to 10.0.0.40 (NAS)
        pass out on $eth_if proto tcp from any to 10.0.0.40 port 548

        ### Allow OpenVPN tunnel setup over unprotected link (en0) only to VPN provider IPs
        ### and port ranges
        pass on $eth_if proto tcp from any to a.b.8.0/24 port 1194:1201

        ### OpenVPN Tunnel rules. All traffic allowed out, only in to ports 4100-4110
        ### Outgoing pings ok
        pass in on $vpn_if proto {tcp, udp} from any to any port 4100:4110
        pass out on $vpn_if proto {tcp, udp, icmp} from any to any

    So what are my goals, and what does the above setup achieve (until you tell me otherwise :)?

    1. Full LAN access to the above ports on the mini/media server (including through my own VPN server)
    2. All internet traffic from the mini/media server is anonymized and tunneled over VPN
    3. If OpenVPN/Tunnelblick on the mini drops the connection, nothing is leaked, both because of pf and the router's outgoing ruleset. It can't even do a DNS lookup through the router.

    So what do I have to hide with all this? Nothing much really, I just got carried away trying to stop port scans through the VPN tunnel :) In any case this setup works perfectly and it is very stable.

    The problem at last! I want to run a Minecraft server, and I installed it under a separate user account on the mini server (user=mc) to keep things partitioned. I don't want this server accessible through the anonymized VPN tunnel, because there are lots more port scans and hacking attempts through that than over my regular IP, and I don't trust Java in general. So I added the following pf rules on the mini:

        ### Allow Minecraft public through user mc
        pass in on $eth_if proto {tcp,udp} from any to any port 24983 user mc
        pass out on $eth_if proto {tcp, udp} from any to any user mc

    And these additions on the border firewall:

        Inbound:  Allow always TCP/UDP from any to 10.0.0.40 (NAS)
        Outbound: Allow always TCP port 80 from 10.0.0.40 to any (needed for online account checkups)

    This works fine, but only when the OpenVPN/Tunnelblick tunnel is down. When it is up, no connection is possible to the Minecraft server from outside of the LAN; inside the LAN it is always OK. Everything else functions as intended. I believe the redirect_gateway push is close to the root of the problem, but I want to keep that specific VPN provider because of the fantastic throughput, price and service.

    The solution? How can I open up the Minecraft server port outside of the tunnel so it's only available over en0, not the VPN tunnel? Should I add a static route? But I don't know which IPs will be connecting... How secure would you estimate this setup to be, and do you have other improvements to share? I've searched extensively in the last few days to no avail... If you've read this far I bet you know the answer :)
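
    One hedged idea for the core question: pf can pin return traffic for a given rule to a specific interface and gateway with reply-to, so connections that arrive on en0 answer via the LAN gateway even while the VPN holds the default route. A sketch only; 10.0.0.1 stands in for the actual LAN gateway, and whether the user mc match and reply-to combine cleanly on OS X 10.8's pf would need testing:

        ### Minecraft reachable via en0 even when the tunnel is up (hypothetical)
        pass in on $eth_if reply-to ($eth_if 10.0.0.1) proto tcp \
            from any to any port 24983 user mc keep state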

    Read the article

  • sharing a USB printer in SOHO environment [migrated]

    - by Registered User
    Here is a situation I am facing: there is a USB printer which works only on a Windows XP machine, and there are other devices in the LAN. It is a small office/home office environment. How can this USB printer attached to the Windows XP machine be shared so that other laptops or users on the network, who have Windows 7 or Linux on their laptops, can use it? The printer model is a Canon Laser Shot LBP-1210: http://www.canon-europe.com/For_Home/Product_Finder/Printers/Laser/LaserShot_LBP1210/index.asp A print server is not available to me; I need to make it work in this situation only. What can I do? The clients are unable to connect to it. It is not a network or TCP/IP printer. If someone on a Windows 7 machine wants to use this printer to take a print, he gets an error while adding the printer to his machine (whereas the printer is a USB printer on the Windows XP machine):

    Start ---> Devices and Printers ---> Add Printer ---> Find a printer by name or IP address ---> Selected a shared printer by name ---> \\PC-Name-printer3 ---> Browse

    This gives the message "Windows can not find a driver for Canon LASER SHOT LBP-1210 on the network." What does this mean? Do I need to install some kind of software on the client machine, or on the machine where the printer is attached?

    Read the article

  • Throttling bandwidth on a per group basis

    - by Robreylen
    I am wondering if it is possible to create a bandwidth shaping/throttling script that shapes traffic based on user group. That is, if user1 and user2 are in group1, they will have 1 Mb/s download and 1 Mb/s upload, whilst if user3 and user4 are in group2, they will have 256 kb/s download and 256 kb/s upload. I've read a bit about this and found some iptables and tc implementations of a per-user solution, but I have not seen anything for a user group. Hopefully it can be implemented simply in the form of custom iptables rules and a script running with tc or the like. Here is a script I was looking into that does a system-wide throttle: http://atmail.com/kb/2009/throttling-bandwidth/ I assume per-group throttling is possible, since per-user throttling is. Thanks for any info you can provide for this question.
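
    For locally generated (upload) traffic, iptables' owner match can select by group directly, and the resulting marks can feed tc's fw classifier. A hedged sketch (eth0 and the rates follow the question; download shaping would additionally need ingress policing or an ifb device):

        # mark packets by the originating process's group
        iptables -t mangle -A OUTPUT -m owner --gid-owner group1 -j MARK --set-mark 10
        iptables -t mangle -A OUTPUT -m owner --gid-owner group2 -j MARK --set-mark 20

        # shape the marks with HTB classes
        tc qdisc add dev eth0 root handle 1: htb
        tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit
        tc class add dev eth0 parent 1: classid 1:20 htb rate 256kbit
        tc filter add dev eth0 parent 1: protocol ip prio 1 handle 10 fw flowid 1:10
        tc filter add dev eth0 parent 1: protocol ip prio 1 handle 20 fw flowid 1:20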

    Read the article

  • ssh -x: how to get the clipboard?

    - by Gupu User
    Hello! I'm connected to a server via ssh -x, and my only way to get text out of the system is the X clipboard (unless I want to take thousands of screenshots and run OCR over them). I cannot execute any programs on the other machine, because I don't have access. How can I achieve this?

    Read the article

  • Experience with asymmetrical (non-identical hardware) SQL Server 2005 / Win 2003 cluster

    - by user24161
    I am reasonably good at dealing with SQL Server clusters; I am wondering if folks have experience, good or bad, using a mix of different models of servers from the same vendor in one SQL 2005 cluster. Suppose I have one more powerful, more RAM, more shizzle box and one less powerful, less memory, less shizzle box bound together in a 2-node cluster. These would be HP DL380 and 580 machines (not that it should matter). I understand AND automate the process of managing memory for each SQL instance, so there's no memory contention when SQL instances fail over. Basically I am thinking a CLR proc will monitor the instances and self-regulate memory caps on each instance, so that they won't page or step on one another. I get the fact the instances might be slower and/or under memory pressure if they share a "lesser" node, and that's OK. The business can deal with a slower instance in a server-problem scenario. Reasonable? Any "gotchas" to watch out for?

    More info 10/28: Doing some experiments with a test cluster, I find that reconfiguring max/min memory is OK PROVIDED the instance isn't already under memory pressure. If I torture the system with a huge query that demands a big chunk of RAM, and simultaneously adjust the memory allocation to a smaller value than what is being actively used, it's possible to run the instance out of memory and have it halt and restart itself (an unhappy situation). Many ugly out-of-memory messages in the error log, crashing, burning... It's an extreme case, but good to know. It seems, then, that it would only be really safe to set this on startup of the instance, as in having a startup script that says "I am on node1, so my RAM settings are X, or I am on node two, so they are Y," like this: http://sqlblog.com/blogs/aaron_bertrand...

    Update: I am testing a SQL Agent + PowerShell solution described in more detail here.
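
    A minimal T-SQL sketch of the node-aware startup idea (node names and memory figures are examples, not from the question):

        -- run at instance startup, e.g. from a SQL Agent job scheduled to run when Agent starts
        DECLARE @node sysname;
        SET @node = CAST(SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS sysname);

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;

        IF @node = N'NODE1'          -- the bigger box
            EXEC sp_configure 'max server memory (MB)', 49152;
        ELSE                         -- the smaller box
            EXEC sp_configure 'max server memory (MB)', 16384;

        RECONFIGURE;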

    Read the article

  • "Permission denied (publickey)" error when ssh'ing to Amazon EC2 Debian AMI 05f3ef71

    - by user193537
    I have launched a Debian system using AMI 05f3ef71 in Amazon EC2, but I have no luck connecting to it using SSH as suggested in "Connecting to Your Linux/UNIX Instances Using SSH". I tried several user names: ec2-user, root, debian... None of them worked. I always get a "Permission denied (publickey)" error message. Using ec2-get-console-output instance_id as suggested doesn't work either; it requires the option "-K", and if I supply it, I get the error message "Required option '-C, --cert CERT' missing", but I have no idea what to supply there. Port 22 is open on the affected instance. Does anyone have an idea what I could try to log in to my instance?
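
    For what it's worth, official Debian AMIs commonly use the login name admin rather than root or ec2-user, and the key pair chosen at launch has to be passed explicitly. A hedged attempt (key file and hostname are placeholders):

        ssh -v -i ~/.ssh/my-keypair.pem admin@ec2-203-0-113-10.compute-1.amazonaws.com

    The -v output also shows which keys the client actually offered, which helps narrow down whether the key or the user name is being rejected.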

    Read the article

  • Error 1069 the service did not start due to a logon failure

    - by Si.
    Our CruiseControl.NET service on a Win2003 Server (VMware virtual machine) was recently changed from a service account to a user account to allow a new part of our build process to work. The new user has "Log on as a service" rights, verified by checking Local Security Settings - Local Policies - User Rights Assignment, and the user's password is set to never expire. The problem I'm facing is that every time the service is restarted, I get the 1069 error described in this question's subject. I have to go into the properties of the service (Log On tab) and re-enter the password, even though it hasn't changed and the user already has the appropriate rights. Once I enter the password and apply the changes, a prompt appears telling me that the user has been granted "Log on as a service" rights. The service will then start with no problems. Not a show stopper, but a pain nonetheless. Why isn't the password persisting with the service?
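
    As a workaround while the underlying cause is tracked down, the credentials can also be re-applied from a script instead of the Services GUI. A hedged sketch (the service name and account are examples):

        sc config CCService obj= "DOMAIN\builduser" password= "TheActualPassword"

    Note that sc requires the space after obj= and password=.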

    Read the article

  • How to create an rpm without a build step

    - by infra.user
    I'm trying to create an rpm of some code which doesn't need to be built. It just needs to run a script when it's installed on the destination system (i.e. I only need the %install portion of the spec file). I've left both the %build and %configure sections of my rpm spec file empty, yet rpmbuild continues to try to execute ./configure with a bunch of parameters. Does anyone know how I can have rpmbuild create the rpm without trying to run ./configure? Thanks.
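
    For comparison, here is a minimal no-build spec sketch (all names and paths are placeholders). %configure is a macro that expands to the ./configure invocation, so it should not appear anywhere in the spec; an empty %build section is fine:

        Name:           mytool
        Version:        1.0
        Release:        1%{?dist}
        Summary:        Scripts installed verbatim
        License:        Proprietary
        Source0:        mytool-1.0.tar.gz
        BuildArch:      noarch
        BuildRoot:      %{_tmppath}/%{name}-%{version}-%{release}-root

        %description
        Installs scripts without compiling anything.

        %prep
        %setup -q

        %build
        # intentionally empty -- nothing to compile

        %install
        rm -rf %{buildroot}
        mkdir -p %{buildroot}/opt/mytool
        cp -a * %{buildroot}/opt/mytool/

        %files
        /opt/mytool

        %post
        /opt/mytool/setup.sh || :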

    Read the article

  • how to monitor traffic at port 53 (DNS)

    - by Registered User
    I am a bit confused by the abundant tcpdump tutorials on the internet. I have a few virtual machines running on a virtualization server, where I am debugging a problem; port 53 is the one in question. I have a bridged setup where, out of 4 LAN cards on the machine in question, one is active, and it is xen-br0. I want to check whether any requests are coming in on port 53 on the server from other machines on the LAN. I also want to see whether the guest operating systems on the LAN, or any other machine, are sending traffic to port 53. Due to the abundant messages generated by tcpdump, I am finding it difficult to grep the output for the desired port. So how can I use it? If someone could give an example, that would be helpful. Thanks in advance.
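
    tcpdump can do the port filtering itself, so no grep is needed. A short sketch (the subnet is an example):

        # all DNS traffic crossing the bridge, without name/port resolution
        tcpdump -n -i xen-br0 port 53

        # only queries arriving from other LAN machines
        tcpdump -n -i xen-br0 'dst port 53 and src net 192.168.1.0/24'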

    Read the article

  • Best way to "clone" my Windows Server 2008 R2?

    - by A.B. User
    I have a Windows Server 2008 R2 machine with one physical hard drive. I have an exact copy of its hardware, which I intend to use as a redundant backup in case my server fails (hardware or software). I'd like to routinely "clone" my production server's hard drive, so that when it fails, I'll just swap in the latest clone. Is this even possible? If it is, what would be the simplest way to do this?
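
    One possibility worth mentioning is the built-in Windows Server Backup, which can take scheduled bare-metal images that are restorable onto the spare hardware. A hedged sketch (the target drive letter is an example):

        wbadmin start backup -backupTarget:E: -allCritical -vssFull -quiet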

    Read the article

  • DNS lookup fails when forwarding to subdomain

    - by Kitaro
    In order to migrate to a new mail server with few DNS problems and little downtime, I have set up a second Postfix that is currently reachable via a subdomain MX record; e.g. the main Postfix accepts mail for [email protected] while the second Postfix also accepts mail for [email protected]. I added a forwarding rule to Postfix saying that it should forward mail destined for [email protected] to [email protected] (for regular local delivery) and to [email protected]. Local delivery still works as expected, but when trying to forward the mail to the new MX, Postfix appends the domain part at the end of the forwarding address, resulting in [email protected], which of course fails, and the mail bounces. Why does Postfix mess with the alias name in that way, and how can I turn that off?
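
    The original addresses are obfuscated above, so the following is only a hedged illustration with placeholder names: when the fan-out is expressed as a virtual alias with both targets written as fully qualified addresses, Postfix should not append anything to them.

        # /etc/postfix/virtual
        info@example.com    info@example.com, info@mx2.example.com

        # activate
        postmap /etc/postfix/virtual
        postconf -e 'virtual_alias_maps = hash:/etc/postfix/virtual'
        postfix reload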

    Read the article

  • a moderator closed my question, is anyone watching? [closed]

    - by Registered User
    I do not have the requisite privileges to post many links in my questions. My genuine question was blocked by this site's moderator; is anyone watching? The internal IPs in the Apache vhost configuration file which I was posting were treated as links. The question was about using Apache as a front end to a Tomcat application. Moderators should behave more sensibly. I am new to this forum. How can you say the question is not real when your forum does not allow me to post links to snapshots so that someone can understand what I asked?

    Read the article

  • Block a Server from reaching a machine

    - by user
    I have a Windows 2003 server that I want to block from accessing a specific IP address. I want to control this from the server, because that is the machine I control. The traffic is HTTP traffic (a web service call). It uses a non-standard port, so an IP address + port combination would also work.

    Background: I have a development environment that for some reason is ignoring hosts file entries under some circumstances. These hosts file entries point the environment at services in another dev environment. When the hosts file entries are ignored, dev talks to production. This is not my question, rather the motivation for this inquiry. What I want is a failsafe to ensure dev will error out instead of happily engaging in transactions with production. I control the dev server; I do not control the firewalls or the target production machine.
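
    On Server 2003, which predates the outbound rules of the later Advanced Firewall, one hedged way to do this from the server itself is an IPsec blocking policy. The IP and port below are placeholders:

        netsh ipsec static add policy name=BlockProd
        netsh ipsec static add filterlist name=ProdFilter
        netsh ipsec static add filter filterlist=ProdFilter srcaddr=me dstaddr=203.0.113.10 protocol=TCP dstport=8085 mirrored=yes
        netsh ipsec static add filteraction name=BlockAction action=block
        netsh ipsec static add rule name=BlockProdRule policy=BlockProd filterlist=ProdFilter filteraction=BlockAction
        netsh ipsec static set policy name=BlockProd assign=y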

    Read the article
