Search Results

Search found 16455 results on 659 pages for 'hosts allow'.

Page 184/659

  • Firefox unable to load SSL Certificate Chain, while Chrome, IE do

    - by FryBurger
    I created a certificate for our IIS 6 by sending a request (created with OpenSSL) to our organization's CA. I already had trouble integrating the private key into that certificate; that has been solved, see the SO question. IIS 6.0 now uses the certificate (with TLS v1 and SSL v3), which is the 4th in the cert hierarchy. Now, if I access the intranet site, Chrome accepts the certificate and so does IE, but Firefox complains about an insecure connection and wants me to add an exception rule. If I look at the certificate as FF presents it to me, I cannot see any of the three issuers. How can this be? If I connect via openssl s_client -showcerts -connect... I only see my own certificate too, which is said to be not verified. I am quite confused now. Where is the mistake, and how can I make FF accept the certificate without forcing our users to add that exception rule? Or do I have to add all three issuer certificates into the cert store of the Win2003 server that hosts IIS 6.0?
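
    A minimal sketch of how one might confirm what IIS actually sends, assuming the issuer certificates are available locally as PEM files (the host and file names below are hypothetical). Firefox and openssl s_client only trust what the server includes in the handshake, so if only the leaf certificate shows up here, the intermediates are missing from the server-side chain:

        # list every certificate IIS presents during the TLS handshake
        openssl s_client -showcerts -connect intranet.example.com:443 < /dev/null

        # verify the leaf against the issuers locally to confirm the chain itself is sound
        cat intermediate1.pem intermediate2.pem > issuers.pem
        openssl verify -CAfile root.pem -untrusted issuers.pem leaf.pem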

  • Will the removal of NAT (with the use of IPv6) be bad for consumers? [closed]

    - by Jonathan.
    Possible Duplicate: How will IPv6 impact everyday users? (World IPv6 Day) As I understand it, when we finally make the switch to IPv6, not only will NAT be unnecessary, it is also incompatible with IPv6? Will that mean that ISPs will have to serve multiple IP addresses per customer? Will they provide a range of addresses for each customer, or, as each device connects, will it get an IP address that isn't necessarily near those of the other devices in the house? But overall, will this be bad for Internet users? Surely it will allow ISPs to see exactly how many devices are being used, and so allow them to charge for the use of additional IP addresses? And then, if that happens, what happens when you try to connect an extra device to your network? Will it simply not get an IP address? In my home we have about 15-20 devices connected at once, but for places where there are hundreds of devices, it seems like the perfect opportunity for ISPs to charge more. I think I may have it completely wrong, so is there somewhere an explanation of how things will work when IPv6 becomes the norm?

  • Spoof database connection to be local instead of remote

    - by spydon
    I am trying to connect one of our client's "as is" programs to a remote database instead of a local one. They say they have coded it to be able to do this, but for some reason the program crashes when trying to connect to a remote database. I don't have the source code, so I can't really dig much deeper than that, and the company does not provide any upgrades or custom modifications. I can successfully connect to the database through SqlDbx and HeidiSQL, so I know that the server is set up correctly. This is why I need to find a way to spoof a remote connection on port 1433 to appear like a local database connection to the program. I thought about editing the hosts file, but it will most likely crash other programs if I bind localhost to another IP than 127.0.0.1. Any ideas?
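
    One approach worth sketching, assuming the program runs on Windows and tries to reach SQL Server on 127.0.0.1:1433 (the remote address below is a placeholder): leave the hosts file alone and instead proxy that one local port to the remote server, so nothing else is redirected.

        :: forward local port 1433 to the remote SQL Server (placeholder address)
        netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=1433 connectaddress=192.0.2.10 connectport=1433

        :: confirm the proxy entry exists
        netsh interface portproxy show all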

  • Can't connect to localhost via browser. Can ping localhost.

    - by Sceptre
    I'm trying to connect to localhost through my browser to learn some Apache Tomcat stuff. When I tried to connect to localhost through Firefox, I couldn't; when I tried through IE, I could the first time, but not after that. I'm using Windows 7, and changed the hosts file to point localhost to 127.0.0.1. I can successfully ping localhost and 127.0.0.1. I have tried turning off my antivirus and the Windows Firewall, but to no avail. What am I doing wrong?
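
    A first diagnostic sketch, assuming a default Tomcat install (which listens on port 8080, not 80): check from a command prompt whether anything is actually listening on the port the browser is asked to hit (the telnet client may need to be enabled in Windows Features).

        netstat -ano | findstr :8080
        telnet localhost 8080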

  • Apache 2.2: Is it possible to redirect different 503 page based on URL?

    - by Wilson60
    Hi, I am a beginner with Apache; all my experience comes from the official docs and online tutorials. My setup is the usual Apache server in front of a Tomcat server, and I have two domains configured using VirtualHost directives in httpd.conf: www.domain-one.com and www.domain-two.com. If Tomcat is down, I wish to display a different 503 error page for each of the two domains. Is that possible? If so, can I have any guide or instructions? I searched online but couldn't find what I want; not sure if that was caused by the wrong keyword or wrong term. Thanks!!
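
    One way this is commonly handled (a hedged sketch, assuming Apache 2.2 with mod_proxy in front of Tomcat; paths and ports are assumptions): give each VirtualHost its own ErrorDocument 503 and exclude the error pages from proxying, so Apache can still serve them while Tomcat is down.

        <VirtualHost *:80>
            ServerName www.domain-one.com
            DocumentRoot /var/www/domain-one
            ErrorDocument 503 /errors/503.html
            ProxyPass /errors/ !
            ProxyPass / http://localhost:8080/
            ProxyPassReverse / http://localhost:8080/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.domain-two.com
            DocumentRoot /var/www/domain-two
            ErrorDocument 503 /errors/503.html
            ProxyPass /errors/ !
            ProxyPass / http://localhost:8080/
            ProxyPassReverse / http://localhost:8080/
        </VirtualHost>

    Each DocumentRoot carries its own errors/503.html, so the two domains show different pages.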

  • Is it possible to use the same MAC address for an entire subnet?

    - by Bruce
    I wish to add static entries to the ARP table of my machine so that it uses a dummy MAC 00:11:22:33:44:55 for any IP address within the subnet 10.0.0.0/8. Using arp -s 10.0.0.0/8 00:11:22:33:44:55 does not work. What can I do? PS - I know it might sound strange why anyone would want to do this, but kindly bear with me here. EDIT: I am using this so that the hosts do not send broadcast ARP messages. I route the packets to the appropriate last-hop router, which changes the dst MAC from the fake MAC to the MAC of the dst IP address. I can get everything working, except that I have to manually enter the fake MAC for each subnet IP address.
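
    For what it's worth, arp -s only accepts individual host addresses, not a prefix, so the usual workaround is to script the entries. A hedged sketch, shown for a single /24 slice because a full /8 would mean roughly 16 million entries:

        # address range shown is an assumption (one /24 out of the /8)
        for i in $(seq 1 254); do
            sudo arp -s 10.0.0.$i 00:11:22:33:44:55
        done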

  • Vmware peaks NFS load every 30 seconds

    - by gtirloni
    We were troubleshooting a performance problem on one of our storage servers, and after investigating almost everything in sight we saw that every 30 seconds VMware would go from 10k IOPS (NFS) to 30k, 50k, 100k or whatever the server could handle. Most of it was reads. What could cause this rise in NFS operations per second every 30 seconds? The virtual machines are managed by external customers and there isn't much in common between them. While breaking utilization down by filename, we discovered 5-10 virtual machines that contributed more to those peaks, but it still doesn't explain why every 30 seconds. There are no other peaks outside that 30-second period (i.e. it stays at an almost constant average). Is there an NFS tweak in VMware to change that 30-second period? If that's really necessary, we would like to introduce some variation so all that workload isn't dropped all at once. It's causing NFS timeouts on the ESX 3.5/4.0 hosts when the storage gets overloaded.
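
    A small observation sketch, assuming the storage server is a Linux NFS server: sampling the server-side operation counters once a second can at least show which operation types spike on the 30-second boundary (reads, getattrs, or something else), which narrows down where to look next.

        # watch the per-operation NFS server counters in (roughly) real time
        watch -n 1 "nfsstat -s -o nfs"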

  • how to stop deferred emails

    - by Will K
    I have a Postfix mail gateway, and every other host is set to use this gateway as its relay. We have some automated outgoing emails sent from some hosts. I believe the gateway tries to send a deferred status back to the system that started this, but that system is a null client, which sends but does not receive any email. Is there any way to stop sending the deferred status? e.g.

        postfix/smtp[35725]: 2F6A155C256: to=, relay=none, delay=260862, delays=260862/0.01/0/0, dsn=4.4.1, status=deferred (connect to orange.mydom.com[192.168.1.5]:25: Connection refused)

    Thanks
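
    One possible angle (a hedged sketch, assuming Postfix on the gateway and that the stuck messages are expendable): deferred mail is retried until its queue lifetime expires, so shortening the lifetimes and clearing what is already deferred stops the repeated delivery attempts back toward the null client. The lifetime values below are placeholders.

        # shorten how long undeliverable mail lingers before Postfix gives up
        postconf -e 'maximal_queue_lifetime = 1h'
        postconf -e 'bounce_queue_lifetime = 1h'
        postfix reload

        # drop the messages already sitting in the deferred queue
        postsuper -d ALL deferred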

  • Ignore Apache Default Server?

    - by Jakobud
    I run several vhosts on our Apache server. Whenever I browse the server using either its IP address or some other name that resolves to that address, but for which no virtual host entry exists, I get the generic Apache test page. I want to change the server so I can specify a virtual host to serve by default instead of the Apache default server page. I don't want to just modify the default server page either; I just need to be able to specify a virtual host to use instead. I added the following virtual host:

        <VirtualHost _default_:*>
            DocumentRoot /vhosts/default/public
        </VirtualHost>

    From what I am reading, this is supposed to take priority over all other virtual hosts as the default, but it doesn't seem to take priority over the Apache default server/host. What do I need to do here?
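
    A hedged sketch of the usual fix, assuming name-based virtual hosting on port 80 with Apache 2.2: for a given address:port, the first VirtualHost in the configuration acts as the catch-all for requests that match no ServerName, so declaring the desired default first, with a matching *:80 rather than _default_:*, usually wins over the distribution's test-page config (the ServerName below is just a placeholder).

        NameVirtualHost *:80

        # listed before all other *:80 vhosts, so it catches unmatched requests
        <VirtualHost *:80>
            ServerName default.localdomain
            DocumentRoot /vhosts/default/public
        </VirtualHost>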

  • sites now not responding on port 80 [closed]

    - by JohnMerlino
    Possible Duplicate: unable to connect site to different port I was trying to resolve an issue with getting a site running on a different port (unable to connect site to different port), but somehow it took out all my other sites. Now even the ones that were responding on port 80 are no longer responding, even though I did not touch their virtual hosts. I now get this message: "Oops! Google Chrome could not connect to mysite.com". However, ping responds:

        ping mysite.com
        PING mysite.com (64.135.12.134): 56 data bytes
        64 bytes from 64.135.12.134: icmp_seq=0 ttl=49 time=20.839 ms
        64 bytes from 64.135.12.134: icmp_seq=1 ttl=49 time=20.489 ms

    The result of telnet:

        $ telnet guarddoggps.com 80
        Trying 64.135.12.134...
        telnet: connect to address 64.135.12.134: Connection refused
        telnet: Unable to connect to remote host
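
    "Connection refused" with a working ping usually means nothing is listening on port 80 any more, so a hedged first check (assuming a Linux server with Apache) is whether the edited configuration still parses and whether Apache is still bound to the port:

        sudo apachectl configtest     # catches syntax errors introduced by the edit
        sudo apachectl -S             # dumps the parsed virtual host / Listen layout
        sudo netstat -tlnp | grep ':80 '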

  • Windows XP - Website unaccessible on single pc in LAN

    - by DorentuZ
    For several days now, a website has not been accessible from a single PC in the LAN. On the other PCs it works just fine, and it's just this one website that's not accessible, as far as I know. The website generates a timeout in every web browser I've tried (IE8, Firefox and Chrome). However, traceroute, nmap and telnet all work just fine. I've even tried multiple user accounts and safe mode, but that didn't work either. As a side note: using a Linux live CD did work, and I could access the website without any problems. The hosts file is the Windows default, and the IP and DNS settings on the network adapter are normal as well. No strange processes are running and no viruses were found. According to TCPView and netstat there are connections to the domain, but every request in the browser results in a timeout. Any idea what's happening?
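
    Since the same hardware works from a Linux live CD, the problem is likely inside the Windows networking stack rather than the network itself, so one hedged next step (Windows XP, run from an administrator command prompt, followed by a reboot) is to reset Winsock and TCP/IP:

        netsh winsock reset
        netsh int ip reset resetlog.txt
        ipconfig /flushdns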

  • Is it safe to disable clamd?

    - by mk1000
    Clamd is taking up about 5% of my memory (2 GB) on my dedicated server, and I'm wondering if I can disable it without any security risk. The server just hosts a few of my own websites. For the most part, email is received and sent through Gmail (which connects to my POP3 accounts). The only other email use case is one of my websites parsing incoming emails and grabbing the attached images and the subject line. Would there be any security risk or chance of virus infection if I disable clamd?
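
    If it does turn out to be safe for this setup, a hedged sketch of disabling it (assuming a sysvinit-style Linux distribution; the service name varies, e.g. clamd or clamav-daemon):

        sudo service clamd stop
        sudo chkconfig clamd off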

  • Equalizing Agent and Master Nagios on state change alone

    - by punith
    We have a setup where distributed Nagios instances run on multiple sites and report their data back to the main Nagios server. The problem is that they send the data back to the main Nagios server whether or not there is a state change in the host or service. Is it possible to configure the slave Nagios to check the service/host every 5 seconds but send back the data only if there is a state change? Currently it is implemented with Obsess Over Hosts/Services, which always runs the forwarding command. The Nagios version is 3. I am not an administrator but a developer, so I don't know the exact jargon; please bear with me.
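
    One hedged idea, assuming the OCSP-style setup from the Nagios 3 distributed-monitoring docs: instead of obsessing over every check result, attach an event handler to the services, since Nagios only runs event handlers on state transitions. The command path and script name below are placeholders modelled on the sample submit_check_result script.

        define command {
            command_name    forward_state_to_master
            command_line    /usr/local/nagios/libexec/eventhandlers/submit_check_result $HOSTNAME$ "$SERVICEDESC$" $SERVICESTATE$ "$SERVICEOUTPUT$"
        }

        define service {
            name            forward-on-state-change
            event_handler   forward_state_to_master
            register        0
        }

    One caveat: event handlers fire on soft and hard state transitions rather than on every OK-to-OK check, which is roughly the behaviour being asked for.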

  • Cisco ASA Multiple Public IP

    - by KGDI
    I have a Cisco ASA 5510, and articles related to the ASA and multiple public IPs say this can't be done. My question is how best to solve a scenario like this: I have 3 zones, Outside, Inside and DMZ. Outside is the Internet, Inside is the client machines, and DMZ is a zone for servers related to external and internal services. My scenario is a bit more complex, but to keep things simple this will do: I want to place an Exchange server and a web server (externally reachable) in the DMZ zone. The web server uses both TCP 80/443; the Exchange server uses 443. So to the problem: with the ASA only having one public IP, how would you make a DNAT to port 443 on both internal hosts behind one public IP? Usually, when I do this kind of scenario with Linux boxes, I use alias interfaces like eth0:0 and eth0:1 and set one public IP on each. To me this must be a pretty common scenario; any ideas on how to solve it with the ASA? /KGDI
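
    A single public IP can only map TCP/443 to one inside host, so the usual answer is either to publish one service on a different outside port or to NAT a second public address that is simply routed to the ASA; it does not need to be configured on the outside interface, which is the ASA equivalent of the Linux alias-interface trick. A hedged sketch in ASA 8.3+ syntax, with all addresses as placeholders:

        object network WEB-SRV
         host 192.168.10.10
         nat (dmz,outside) static 203.0.113.10
        object network EXCH-SRV
         host 192.168.10.20
         nat (dmz,outside) static 203.0.113.11
        access-list OUTSIDE_IN extended permit tcp any object WEB-SRV eq 443
        access-list OUTSIDE_IN extended permit tcp any object EXCH-SRV eq 443
        access-group OUTSIDE_IN in interface outside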

  • Can OpenVPN be set up so the server doesn't have interface that is part of the VPN?

    - by BCS
    I'm looking to set up a VPN (OpenVPN is my first choice, but I'm not stuck with it) in such a way that the server that hosts the VPN is not visible from within the VPN. That is: any packet that a client sends via the VPN interface will get delivered to another client's VPN interface or get dropped. In the other direction, the server shouldn't have a VPN interface at all, and normal network operations on it shouldn't be able to send packets on the VPN. Can this be done? All the docs I have found assume that clients will connect via DHCP (thus requiring that the server connect at least to that extent), but I can't think of any reason that a VPN couldn't use static IPs, or that the DHCP server couldn't be implemented inside the VPN server (see edit) without setting up a VPN interface on the server. Edit: Based on the link on bridged mode from Phil Hollenback's answer, it seems that OpenVPN does in fact have the "internal DHCP server" that I'm thinking of.

  • Looking for concise set of instructions for upgrading Vmware 5.1 to 5.5

    - by Michael Martinez
    I'm trying to find a set of instructions for upgrading VMware (ESXi and vSphere) from 5.1 to 5.5, but all I'm finding online is a bunch of separate, incomplete knowledge base articles, which makes it difficult to get an overview of what's involved. What I'd like is a single, concise document that lists the steps involved. It could be a free online article, someone's blog, a small booklet, or someone here who takes the trouble to write it out. Does such a thing exist? If so, can you provide the reference, or even provide the text here? I'm running a very small, simple environment consisting of two ESXi hosts and vSphere Standard edition. Thanks.

  • How to connect 2 virtual machines(VMWare Workstation 7.0) in a separate network?

    - by goluhaque
    There are supposed to be 2 networks: i) The first one is shared by all the virtual machines and the host (host-only). This one is easily achievable for me, as an amateurish beginner. ii) The second network is one to which only 2 of the virtual machines are connected. These 2 virtual machines should also stay connected to network (i). I understand that for the 2 virtual hosts to be connected to separate networks simultaneously, they need to have 2 IPs, and hence 2 (virtual) Ethernet interfaces?
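
    That understanding is basically right: each of the two VMs needs a second virtual NIC attached to its own VMnet. A hedged sketch of the .vmx entries involved (the same settings can be made through the VM settings dialog; VMnet2 and the adapter type are assumptions):

        ethernet1.present = "TRUE"
        ethernet1.connectionType = "custom"
        ethernet1.vnet = "VMnet2"
        ethernet1.virtualDev = "e1000"

    With both VMs given an ethernet1 on VMnet2 (a host-only network with its own subnet), they can reach each other there while ethernet0 stays on the shared host-only network.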

  • svn:externals cache and stale URLs

    - by dcaunt
    I have a subversion externals entry in a library folder which looks like this:

        Z https://svn/Z/trunk/library/Z

    but fetching it produces:

        Fetching external item into '/home/releases/50/library/Z'
        svn: OPTIONS of 'http://svn/repo/trunk/library/Z': could not connect to server (http://svn)

    The externals URL was the same, but over the HTTP protocol. Having changed the externals to point to HTTPS, I can't figure out why subversion is still trying to use the old URL. Does subversion cache the externals path, and if so, how can I clear this? If not, what else could be causing this? I can check out from the correct (HTTPS) URL fine from the server. NOTE: svn is an entry in the server's local hosts file, pointing to our subversion server's IP.
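
    A hedged guess at where the old URL is hiding: the nested working copy under library/Z keeps the URL it was originally checked out from, and the svn:externals property may also still carry the old value on the path actually being updated (e.g. a release copy made before the property change). A quick check-and-reset sketch:

        # what does the external working copy itself still point at?
        svn info /home/releases/50/library/Z

        # what does the externals property say on this particular working copy?
        svn propget svn:externals /home/releases/50/library

        # bluntest fix: remove the nested copy and let the parent update re-fetch it
        rm -rf /home/releases/50/library/Z
        svn update /home/releases/50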

  • How create a virtual network for practice?

    - by light
    I need to organize a virtual network for practice with Windows Server 2008 and several workstations running Windows. All I have for this is a laptop with a 2.10 GHz dual-core CPU, 3 GB RAM, 50 GB of free space and Windows 7 on it. I also have an external USB 3.0 hard drive with 250 GB free and a flash drive with 8 GB. What can you suggest? Because I have limited resources, I am thinking of installing ESXi 5.1 on the main disk of my laptop as a second OS alongside the installed Windows 7. I have no idea whether that will work or not, but after that I want to try to create hosts using the available space on the external hard drive. Is it possible?

  • Revoke directory access for a particular user in Solaris

    - by permissiontomars
    I need to allow directory access to a particular user on my file system, and I want this user to be unable to access any other directory (initially, anyway; it may need access to some directories later). For example: I have a directory called /opt/mydir. I want my dedicated user to only be able to access this directory, and nothing else, while all other users can access this directory as normal. I'm new to Linux and its permissions; I've read a fair bit of background material, but I'm a little confused. Is there any way to revoke permissions on a directory for a single dedicated user? A possible but flawed method would be to only allow access to /opt/mydir and exclude every other user; this won't work because I want all other users to keep working as normal, accessing the directory. I'm working on Solaris 10. Any suggestions are appreciated.
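
    The per-user mechanism that fits this on Solaris 10 is an ACL entry. A hedged sketch, assuming a UFS filesystem and a hypothetical user name "appuser": a named-user entry with no permission bits overrides the group/other bits for that user only, leaving everyone else untouched.

        # deny appuser on a directory while other users keep their normal access
        setfacl -m user:appuser:--- /some/other/dir
        getfacl /some/other/dir    # verify the entry

    On ZFS the syntax differs (NFSv4-style chmod A+user:appuser:...:deny entries), so which form applies depends on the filesystem.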

  • Getting much higher than usual brute-force attempts on cPanel

    - by UserZer0
    Although I have many client accounts on my cPanel-based server, I'm really the only one who has login information for any of the accounts. I have cPHulk set up to alert me and block after 4 failed attempts. I usually see only a handful of bots trying to get in each day (two hosts ago I never had any), but today the rate has significantly increased, to every 10 minutes or so (not like clockwork, just on average). Should I be concerned? Is there anything extra I should be doing? Are there any automated reporting services I can use? Thanks.
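
    One additional measure sometimes used (a hedged sketch; the source address is a placeholder): if only you ever log in, the cPanel/WHM login ports don't need to be reachable from the whole Internet at all.

        # allow your own IP to the cPanel (2083) and WHM (2087) SSL ports, drop the rest
        iptables -A INPUT -p tcp -m multiport --dports 2083,2087 -s 203.0.113.7 -j ACCEPT
        iptables -A INPUT -p tcp -m multiport --dports 2083,2087 -j DROP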

  • My raspberry pi server hostname doesn't work?

    - by xSpartanCx
    The people over on the rPi forums don't have any answers for me... I've got a Raspberry Pi running Raspbian Server Edition. My problem is that the only way I can SSH into it with PuTTY is through the static IP. My router doesn't recognize the hostname; it shows the MAC address as the name. This causes the Pi not to show my Apache2 website online (I think). The only way I've gotten it to work is by using my other Linux server to forward requests using virtual hosts, and that has to use the IP address too. However, now that my other server is off, the website doesn't work.
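
    For reaching the Pi by name on the LAN without relying on the router's DNS, one hedged option is mDNS: Raspbian can announce its hostname as <hostname>.local via Avahi, which Linux and OS X clients resolve out of the box (Windows may need Apple's Bonjour service). The hostname below is an assumption.

        sudo apt-get install avahi-daemon
        hostname                  # confirm the name the Pi will announce
        # from another machine: ssh pi@raspberrypi.local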

  • Redirect Domain Name to Localhost

    - by somebody
    I have a Linux test machine on which I would like to run a copy of a production webserver. This is a legacy application that does not use a property file for its server name; throughout the application the server name is hardcoded (example: open connection to myServer.myCompany.com). Is there any Linux trick I can use to redirect all requests for a certain host back to localhost? I know that in Windows I can add an entry to the hosts file and have it redirect back to localhost. How do I do this in Linux?
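
    Linux has the same mechanism; a minimal sketch (the hardcoded name is taken from the question):

        # append to /etc/hosts on the test machine
        127.0.0.1   myServer.myCompany.com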

  • Cannot connect to domain despite successful pings

    - by egtann
    Pings to my domain name work, but I can't connect via http. I've been trying various methods for a week now, but haven't come up with anything that worked. Any idea what's causing this?

        /etc/apache2/httpd.conf:

        ServerName machinename.local
        <VirtualHost *:80>
            ServerName chipperapp.com
            DocumentRoot "/Users/myusername/appname/public"
            <Directory "/Users/myusername/appname/public">
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

        /etc/hosts:

        127.0.0.1 chipperapp.com

    I can access the app from my local machine, but not on any other. I've set up dynamic DNS. Thanks!
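
    A hedged check from a machine outside the LAN (the hosts-file entry above only affects the local machine, so other machines depend on the dynamic DNS name resolving publicly and on port 80 being forwarded to this box):

        dig +short chipperapp.com     # what the name resolves to publicly
        nc -vz chipperapp.com 80      # whether port 80 is reachable there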

  • Permissions for Multiple User VPS

    - by adnymarc
    I have a Linode VPS that I recently set up and am migrating to from Mediatemple, where I had a VPS managed by Plesk. I dislike the Plesk interface and the mess it makes of a lot of things, but appreciated its ability to give multiple people access to different domains on a server. I have most everything set up the way I would like it, but am having issues with permissions for my domain directories. I am running Ubuntu 8.04 LTS and Apache 2 as my web server. Domains live in /var/www/vhosts/domainname.com, but I have to modify files as root in order to add or change files for the domains. I would like to set up access with the following criteria: each domain can have a user assigned to it (and the same user can manage multiple domains; I could even create symlinks in their home folder to their domains); certain users will have shell access and may be chrooted to the domain directory they control; and FTP needs to be set up so that content editors for each domain can upload and download without permission issues. I am relatively new to Linux sysadmin work and have searched for a good guide to help solve these issues, but haven't been able to find one yet. Thanks in advance for your help.
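
    A hedged starting point for the per-domain ownership part (assuming Ubuntu 8.04 with Apache running as www-data; the user name is a placeholder): give each domain's directory to its user, keep it group-readable for Apache, and mark directories setgid so new files inherit the group.

        sudo adduser alice
        sudo chown -R alice:www-data /var/www/vhosts/domainname.com
        sudo chmod -R g+rX /var/www/vhosts/domainname.com
        sudo find /var/www/vhosts/domainname.com -type d -exec chmod g+s {} \;

        # optional convenience symlink in the user's home directory
        sudo ln -s /var/www/vhosts/domainname.com /home/alice/domainname.com

    Chrooted shell/SFTP access is a separate piece and depends on the SSH/FTP daemon in use, so it is worth tackling after the ownership model works.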
