Search Results

Search found 12281 results on 492 pages for 'ip blocking'.

Page 360 of 492

  • Connected to internet but can't browse after trying to remove Covenant Eyes

    - by Joanna
    I recently got a MacBook Pro. It connects to Ethernet/Wi-Fi and has an internet connection, but when I open Safari or Firefox, nothing happens; I get a timeout for every website. I had Covenant Eyes on my Mac before and tried to remove it. My friends who work with computers have tried everything (ping, nslookup, etc.). Network diagnostics show no problems. I can see I'm connected through ifconfig because I get an IP, and I also get a response when pinging www.google.gr. There are no proxies set in my Network preferences. Any ideas?

    Read the article
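
    Since DNS and ICMP clearly work but HTTP times out, a leftover filtering/proxy component from Covenant Eyes is a plausible culprit. A minimal diagnostic sketch, assuming the network service is named "Wi-Fi" (on older OS X releases it may be "AirPort"):

        scutil --proxy                                   # system-wide proxy settings currently in effect
        networksetup -getwebproxy "Wi-Fi"                # per-service HTTP proxy
        networksetup -getsecurewebproxy "Wi-Fi"          # per-service HTTPS proxy
        ls /Library/LaunchDaemons /Library/LaunchAgents | grep -i covenant   # leftover launchd jobs
        sudo launchctl list | grep -i covenant           # is a filtering daemon still loaded?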

  • No response from example.com using Apache

    - by stevens-G
    I am unable to access example.com, or the server by its local IP, after restarting the server. I checked to make sure the httpd service was on. I looked at the error_log in /var/log/httpd and found nothing. I tried to restart httpd again and it says 'OK'. I'm not sure where else to check. I did move DocumentRoot from /var/www to /web-root, and that worked before restarting the server. I tried pointing it back to /var/www and am still not able to view the page. The iptables rules have not changed. Any suggestions?

    Read the article
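
    When httpd says it started but nothing answers, it helps to separate "not listening", "blocked", and "denied". A diagnostic sketch for CentOS-style systems (exact tool names vary by release); SELinux contexts are worth checking whenever a DocumentRoot has been moved with mv:

        httpd -t                          # or: apachectl configtest - syntax check of the loaded config
        ss -tlnp | grep httpd             # is httpd actually bound to :80? (netstat -tlnp on older boxes)
        curl -I http://127.0.0.1/         # does it answer locally, bypassing the network path?
        iptables -L -n -v                 # confirm the INPUT chain really is unchanged
        getenforce                        # if SELinux is enforcing...
        ls -Zd /var/www /web-root         # ...compare contexts; restorecon -Rv /var/www fixes moved files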

  • Intermediate SSL Certificates on Azure Websites

    - by amhed
    I have successfully configured an Extended Validation certificate on an Azure Website following this article: http://www.windowsazure.com/en-us/documentation/articles/web-sites-configure-ssl-certificate/ The main (non-technical) stakeholder of the web application went to great lengths to validate that our site is secure. He used this site to check the validity of our SSL: http://www.whynopadlock.com/ That site threw the following error: "SSL verification issue (Possibly mis-matched URL or bad intermediate cert.). Details: ERROR: no certificate subject alternative name matches". The certificate is installed using IP-based SSL instead of SNI. This is done because some site visitors still use Internet Explorer 8 on Windows XP, which has no support for SNI and throws a security warning. Is my certificate correctly installed? I received three .CRT files from my SSL provider: PrimaryIntermediate.crt, SecondaryIntermediate.crt and EndCertificate.crt. This is how I exported our certificate as a .PFX file for Azure: openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt

    Read the article
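
    The reported message ("no certificate subject alternative name matches") points at a host-name/SAN mismatch (e.g. www vs the bare domain) rather than only a chain problem, but the intermediates also need to be bundled into the .PFX. A sketch using the asker's file names; this is the usual openssl bundling approach, not Azure-specific advice:

        cat PrimaryIntermediate.crt SecondaryIntermediate.crt > intermediates.crt
        openssl pkcs12 -export -out myserver.pfx -inkey myserver.key \
                -in EndCertificate.crt -certfile intermediates.crt
        # afterwards, check what the site really presents (replace the host name)
        openssl s_client -connect www.example.com:443 -showcerts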

  • Port Forwarding on Actiontec GT704-WG Router Issues

    - by adamweeks
    I am trying to set up a server at a customer's location that has the Actiontec GT704-WG DSL router, and the port forwarding is not working at all. Here are the details: Server: OpenSUSE Linux box with a static IP address of 192.168.1.200; the application accepts connections on port 8060; the firewall is disabled; local connections (within the network) work properly. Router: updated to the latest firmware available; DHCP range set to 192.168.1.69-192.168.1.199 so it cannot conflict with the server; firewall set to "off"; a rule set in the "Applications" section to forward 8060 TCP and UDP to the 192.168.1.200 machine (I've tried the "TCP,UDP" option as well as both individual options). I've also tried simply putting the server in the DMZ to see if I could connect to anything, but still nothing. Looking for any clues before I call and waste hours explaining the issue to tech support.

    Read the article
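
    Before blaming the router it is worth pinning down where the connection dies; many consumer routers also lack NAT loopback, so testing the public IP from inside the LAN can fail even when forwarding works. A rough sketch (nc/netcat assumed to be installed; the LAN address is the asker's, the WAN address is whatever the router holds):

        ss -tlnp | grep 8060              # on the server: is the app bound to 0.0.0.0, not 127.0.0.1?
        nc -zv 192.168.1.200 8060         # from another LAN host: does the service answer locally?
        nc -zv <public-ip> 8060           # from OUTSIDE the LAN (e.g. a phone hotspot or remote shell)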

  • Rsync over NFS with QoS: How to view real transfer speed?

    - by Ian Mackinnon
    We have a bandwidth limit between a Linux server and a NAS, created using 'tc' with an IP filter. When writing to an NFS mount of the NAS, rsync claims a very high transfer speed for each file and then waits a long time before acknowledging that everything has finished. The total time taken is consistent with the QoS limit and the time taken by the same transfer over FTP. Why does the write to the NFS mount report higher transfer speeds than are actually happening over the network? How can I monitor the actual bandwidth of the transfer?

    Read the article
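
    rsync is measuring how fast it writes into the client's page cache and the NFS client's async write-behind buffer, not the wire; the data then drains at the shaped rate, hence the long pause at the end. A sketch of ways to watch the real rate (interface name and NAS address are placeholders):

        iftop -i eth0 -f 'host 192.168.0.50'     # live per-connection bandwidth towards the NAS
        nload eth0                               # simple interface throughput view
        mount -o remount,sync /mnt/nas           # optional: a sync NFS mount makes rsync's numbers honest, at a throughput cost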

  • nginx proxy pass redirect through load balancer

    - by Brian
    I have several backend webservers that are load-balanced using LVS. These machines have only internal, non-routable IPs on them; the load balancer is the only machine with an external IP. This setup works great. I would like to add another webserver for image serving, but it will not be part of the load-balanced pool. Is it possible to proxy_pass from the load-balanced web servers to the image server and have the response relayed back to the client? Client -> external LB -> internal web server -> internal image server. I've gotten proxy_pass working when I remove the LB from the equation, but no luck when the LB is in the path.

    Read the article
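
    This is the standard reverse-proxy pattern: the backend web servers proxy image requests to the internal image server and stream the response back out through the LB. A minimal nginx sketch, assuming the image server sits at 10.0.0.50 (hypothetical) and images live under /images/:

        location /images/ {
            proxy_pass http://10.0.0.50/;            # internal, non-routable image server
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }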

  • FTP script download from linux to windows

    - by user53864
    I'm using the following FTP script on Windows XP to download zip files from Ubuntu cloud servers. A zip file is created every day on the Ubuntu servers and I download it to Windows via this FTP script. I currently run the script manually every day, because I have to edit its last line (mget /usr/backup_02-11-2010.zip) to match today's date. I want to change the script so that, when scheduled, it downloads only today's zip file without needing to be edited each day. The date appended to the zip files is in the format dd-mm-yyyy. Need help... open server-ip-here username-here user-password-here lcd C:\Backup\files bin hash prompt mget /usr/backup_02-11-2010.zip

    Read the article
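
    The date can be generated in a wrapper batch file and the ftp script written out on the fly, so nothing needs hand-editing. A sketch assuming the stock Windows ftp.exe and wmic; the placeholders are copied from the asker's script, and wmic is used because the %date% variable's format is locale-dependent:

        @echo off
        for /f %%x in ('wmic os get localdatetime ^| find "."') do set DT=%%x
        set DAY=%DT:~6,2%
        set MONTH=%DT:~4,2%
        set YEAR=%DT:~0,4%
        (
          echo open server-ip-here
          echo username-here
          echo user-password-here
          echo lcd C:\Backup\files
          echo bin
          echo hash
          echo prompt
          echo mget /usr/backup_%DAY%-%MONTH%-%YEAR%.zip
          echo bye
        ) > "%TEMP%\ftpget.txt"
        ftp -s:"%TEMP%\ftpget.txt"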

  • Debugging Connection Issues Between Two Linux Servers

    - by clickfault
    I have two CentOS 5 servers running iptables and apf. I am having issues connecting with ssh from server 1 to server 2. I can connect from server 1 to a third server and from that third server to both 1 and 2. In all cases I am using the IP address and not a host name. I have stopped iptables and apf on all servers and it doesn't seem to change anything. What is the best way to debug this process?

    Read the article
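
    A useful order of elimination: is the TCP connection refused, silently dropped, or reset after it opens? Note that apf also writes tcp-wrappers entries, which survive stopping iptables. A sketch with placeholder addresses (server 1 = 192.168.0.1, server 2 = 192.168.0.2):

        ssh -vvv root@192.168.0.2                         # where exactly does the client hang or fail?
        nc -zv 192.168.0.2 22                             # does the port open at all?
        tcpdump -nn -i eth0 host 192.168.0.1 and port 22  # run on server 2 while retrying from server 1
        grep -v '^#' /etc/hosts.deny /etc/hosts.allow     # tcp wrappers rules left behind by apf/bfd
        iptables -L -n -v                                 # verify the counters really show an open ruleset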

  • BGP router recommendations for simple redundancy [closed]

    - by Jona
    We have two sites, each with its own internet connection, and a dedicated dark fibre between them. Each site has its own IP space and we have an AS number. We want to be resilient to failure of the internet connection at either site, and so need to buy a pair of appropriate routers. The requirements are: able to run two BGP sessions (one with the ISP, one with the other site's router); the option to take a full table from the upstream ISPs would be nice; able to provide HA gateways on the LAN side (e.g. 192.168.0.254 automatically migrates if its host router loses power); a dedicated device rather than a server running Linux/BSD; not crazy expensive. Any help/advice much appreciated.

    Read the article

  • How to configure machines in a public subnet with two gateways?

    - by Shtééf
    We have a single public /24 subnet, with a BGP router as the primary gateway. Now I'm interested in configuring a second router for redundancy. How do I deal with multiple gateways on the servers in our public subnet? I found some other questions related to multiple gateways, but they seem to deal with NAT setups. In my situation the servers all have publicly routed IP addresses, so from what I can tell it doesn't really matter which route incoming or outgoing packets take. But I figure the servers need some way of telling when one of the gateways is down and of routing around it? Is this accomplished with protocols such as OSPF, and do I need to deploy that on all my servers?

    Read the article
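
    Running a routing protocol on every server is one option, but the more common pattern is to leave each server with a single static default route pointing at a virtual gateway address that the two routers share via VRRP (or CARP); only the routers then have to notice a failure. A keepalived sketch for the routers, with placeholder addresses, offered as the usual alternative to per-server OSPF rather than as the asker's design:

        vrrp_instance GW_V4 {
            state MASTER                 # BACKUP on the second router
            interface eth0
            virtual_router_id 51
            priority 150                 # lower priority on the backup
            virtual_ipaddress {
                203.0.113.1/24           # the shared gateway IP the servers use as their default route
            }
        }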

  • Recommend a mail server setup for multiple domains

    - by Greg
    Hi all, I've just set up a new Debian web server, which I have done plenty of times before, but now I want to add a mail server, which I have never done before. I am aware of this question, but I would like someone to recommend packages and briefly explain how to use them to provide POP/IMAP access on multiple domains, a concept that has confused me for a while. I'm planning for this server to grow slowly but surely, from an initial 5 or 6 domains to about 20 in the first year, and continuing at that rate (yes, I've jumped on the cloud bandwagon). At the moment I have a DNS A record pointing to my server's IP and nothing else. I'm assuming that I need an MX record pointing there too, but I haven't read up on it yet, so that's what I'll be doing today. Hopefully reading up on the subject and the help that I get here will get my server up and running in no time. Thanks!

    Read the article
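
    A common Debian layout is Postfix for SMTP plus Dovecot for POP3/IMAP, with additional domains added as virtual domains; each new domain then only needs an MX record and a few map entries. A sketch with hypothetical domain names and addresses (the LMTP socket path assumes the usual Dovecot setup), not a full guide:

        example-one.com.        IN  MX 10  mail.example-one.com.
        mail.example-one.com.   IN  A      203.0.113.10

        # /etc/postfix/main.cf (fragment) - virtual mailbox domains delivered to Dovecot
        virtual_mailbox_domains = example-one.com example-two.com
        virtual_mailbox_maps = hash:/etc/postfix/vmailbox
        virtual_transport = lmtp:unix:private/dovecot-lmtp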

  • Pygrub with DRBD on Xen 3.2

    - by Joril
    Hi all, we have a two-node cluster using DRBD 8.2 on CentOS 5.2 64-bit. The cluster runs a few VMs on top of Xen 3.2.1; here's the configuration for an Ubuntu Jaunty VM: name = 'dev' bootloader = '/usr/bin/pygrub' memory = '512' vif = [ 'ip=192.168.1.217,mac=00:16:3E:CD:60:80' ] disk = [ 'phy:/dev/drbd24,xvda1,w', 'phy:/dev/drbd25,xvda2,w' ] As you can see, the disks are specified as "phy:", and as such pygrub doesn't know a thing about the underlying DRBD device. So my problem is that even though the VM boots just fine, it doesn't handle the state of the DRBD device. As a result, when for some reason the device gets into a secondary/secondary state, the VM won't boot, and I have to manually specify which node is primary. I read that starting with Xen 3.3 pygrub understands the "drbd:" specification, and I think that would fix my problem, but I can't upgrade Xen at the moment. Is there a workaround? For example, could I use the 3.3 version of pygrub? Thanks!

    Read the article
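
    For reference, the Xen integration shipped with DRBD (the block-drbd helper script) lets the domain config name the DRBD resource instead of the raw device, so the resource is promoted to primary when the domain starts; the resource names below are hypothetical, and whether this can be made to work on a Xen 3.2 host is exactly the open question here:

        disk = [ 'drbd:res-dev-disk1,xvda1,w',
                 'drbd:res-dev-disk2,xvda2,w' ]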

  • How to troubleshoot ping request timeouts

    - by user28317
    I have a Windows 7 and an XP machine connected to a NETGEAR wireless router. Both machines can log into the network and surf the web, and both are connecting wirelessly. I can ping the router from each machine and get a reply, and I can ping each machine from the router and get a reply. But I cannot ping either machine from the other. The subnet is 192.168.1.*: router = .1, Win7 = .10, XP = .11. The firewall is currently off on both systems; since I can ping from the router, I'm guessing that's not the problem anyway. If I try to ping Win7 from XP I get "Request timed out". If I try to ping XP from Win7 I get "Destination host unreachable". What should I do now? Thanks

    Read the article
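
    With both firewalls off, "Destination host unreachable" usually means ARP never resolves, which on wireless is often the router's AP/client isolation feature rather than either PC. A few checks (Windows commands; the netsh rules only matter once the firewalls go back on):

        rem after a failed ping on either machine: is there any ARP entry for the other one?
        arp -a
        rem Windows 7: allow inbound ICMPv4 echo when the firewall is re-enabled
        netsh advfirewall firewall add rule name="Allow ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow
        rem Windows XP equivalent
        netsh firewall set icmpsetting 8 enable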

  • Unable to connect to Cygwin from Mac OS X by ssh

    - by skyjack
    I've started an ssh server on Windows 7 using Cygwin and I'm trying to connect to it over ssh from Mac OS X Mavericks. It fails with the following error: ./ssh username@hostname -v OpenSSH_6.6, OpenSSL 1.0.1g 7 Apr 2014 debug1: Reading configuration data /usr/local/etc/ssh/ssh_config debug1: Connecting to hostname [my ip] port 22. debug1: Connection established. debug1: identity file /Users/skyjack/.ssh/id_rsa type -1 debug1: identity file /Users/skyjack/.ssh/id_rsa-cert type -1 debug1: identity file /Users/skyjack/.ssh/id_dsa type -1 debug1: identity file /Users/skyjack/.ssh/id_dsa-cert type -1 debug1: identity file /Users/skyjack/.ssh/id_ecdsa type -1 debug1: identity file /Users/skyjack/.ssh/id_ecdsa-cert type -1 debug1: identity file /Users/skyjack/.ssh/id_ed25519 type -1 debug1: identity file /Users/skyjack/.ssh/id_ed25519-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.6 ssh_exchange_identification: read: Connection reset by peer Meanwhile, I can connect successfully from Red Hat. OpenSSH version on Cygwin: OpenSSH_6.4p1, OpenSSL 1.0.1f 6 Jan 2014. OpenSSH version on Mac OS X: OpenSSH_6.6p1, OpenSSL 1.0.1g 7 Apr 2014. Please advise.

    Read the article
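
    "Connection established" followed by ssh_exchange_identification: read: Connection reset by peer means the TCP handshake succeeds but something on the Windows side drops the session before sshd sends its banner - typically the Windows firewall, an antivirus/filter driver, or sshd itself rejecting the client. Running sshd interactively on a spare port is the quickest way to see which. A sketch to run in a Cygwin terminal on the Windows 7 box:

        cygrunsrv -Q sshd                 # is the sshd service installed and running?
        /usr/sbin/sshd -d -p 2222         # debug instance; then from the Mac: ssh -p 2222 username@hostname
        netstat -an | findstr :22         # confirm what is really listening on port 22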

  • REST-based file server

    - by Chris Wenham
    I need to be able to PUT files and GET them later using nothing but HTTP, so I went searching for something that might match the terms "REST file server" or "HTTP file server" or "REST drop-box", etc. Unfortunately, these terms bring up the wrong kind of results on Google. What I want is the equivalent of an SMB fileshare over HTTP. Some ideal features: Can PUT a file of any type at http://servername/service/any/path/I/want/document.pdf Anyone with access can GET that file at the URL I PUT it at Supports AV scanning on any new file that has been PUT Supports DELETE of existing resources (files) Our shop runs Windows, but I'd be interested to know about Unix software that can do this kind of thing, too. It's to be used in an IT department for private users only. It won't be on a public-facing IP address. Does anything like this exist?

    Read the article
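
    What is described is essentially WebDAV (HTTP PUT/GET/DELETE against arbitrary paths), which both IIS (WebDAV publishing) and Apache (mod_dav) provide; AV scanning then usually comes from an on-access scanner watching the backing folder rather than from the HTTP layer, and deep paths may need their intermediate collections created first (MKCOL). A client-side sketch against a hypothetical share:

        curl -u user:pass -T document.pdf  http://servername/service/any/path/I/want/document.pdf
        curl -u user:pass -O               http://servername/service/any/path/I/want/document.pdf
        curl -u user:pass -X DELETE        http://servername/service/any/path/I/want/document.pdf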

  • Port Redirection on Mac OS X Lion

    - by Andreas
    I have tried to solve this issue using pf but with no luck. Basically, I am trying to redirect incoming traffic on port 443 to port 22. I have tried to set up a rule in a file and load it with pf, but I get a syntax error. Can anyone with more pf experience provide some insight? Here's what I've attempted: pass in on en1 proto tcp from any to any port 443 rdr-to 127.0.0.1 port 22 and pass in quick proto tcp to port 443 rdr-to 127.0.0.1 port 22 I was able to do this on Mac OS X Snow Leopard with ipfw: sudo ipfw add 1443 forward 127.0.0.1,22 ip from any to any 443 in but it doesn't work on Lion (it gives me an "Invalid argument" error).

    Read the article
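
    The rdr-to keyword belongs to the newer OpenBSD pf grammar; Lion's pf build appears to use the older, pre-4.7 syntax in which translation rules are written as rdr ... ->, which would explain the syntax error. A sketch under that assumption, with an arbitrary rule-file location:

        # /etc/pf.anchors/redirect443 (sketch)
        rdr pass on en1 inet proto tcp from any to any port 443 -> 127.0.0.1 port 22

        sudo pfctl -f /etc/pf.anchors/redirect443 -e    # load the rules and enable pf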

  • How to broadcast a command on Windows

    - by Xiao Jia
    I am going to frequently deploy different versions of a program on a cluster of Windows machines (mostly Windows XP), so I want a command-line broadcasting tool (either built-in or third-party) to (1) download a file from some URL, and (2) execute the same command, on all the machines. I googled for a very long time but got nothing related to my goal (only pages about broadcasting a message, broadcast ping, or broadcasting programmatically via TCP/IP, etc.). Is there any tool for this purpose? Or is it possible to do it pragmatically, without installing extra client programs on those machines?

    Read the article
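
    PsExec (Sysinternals) fits this well: it needs no agent installed on the targets (it uses the admin share and a temporary service), takes a list of machines from a file, and its -c switch copies a program or script to each machine before running it. A sketch with hypothetical names, credentials and flags; copying from a file share stands in for the URL download here, since XP has no built-in command-line HTTP client:

        rem computers.txt: one hostname or IP per line
        psexec @computers.txt -u DOMAIN\Administrator -p secret -c -f deploy.bat

        rem deploy.bat (copied to and run on each target) might then do something like:
        copy \\fileserver\builds\app-1.2.3.exe C:\temp\ && C:\temp\app-1.2.3.exe /install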

  • pfSense - DHCP Relay

    - by Patrick
    I have 3 pfSense boxes acting as routers on a single subnet (172.22.12.0/26). Router A - 172.22.12.1 Router B - 172.22.12.17 Router C - 172.22.12.33 I want Router A to be the only DHCP server. Router C has DHCP relay enabled that points to Router B. Router B then has DHCP relay enabled that points to Router A. Like this: Router C -- Router B -- Router A (DHCP Server) Router B gets an IP from Router A, but Router C does not. Any ideas why this configuration isn't working? Thanks.

    Read the article

  • Ping, firewall or DNS issue on Win Server 2008 R2

    - by Fred Kaiser
    I've installed Windows Server 2008 as a VM for the developers here to work on, with SQL Server 2008 and IIS 7 installed as well. I'm not quite sure why, but I can remote into that machine using the name I gave it (winserverdev), while the guys who are supposed to use the bloody thing can't. One very interesting thing is that I can connect but I can't ping it, neither by name nor by IP address. Is there anything I should be looking at to make this work? Any ideas are welcome. Thanks heaps in advance, I really appreciate it.

    Read the article
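
    Two separate things are tangled here: Server 2008 R2 drops inbound ICMP echo by default even when RDP and other access work, and the developers' machines may simply not resolve the VM's name the way the asker's machine does. A sketch that splits the two (192.0.2.25 is a placeholder for the VM's IP):

        netsh advfirewall firewall add rule name="Allow ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow
        rem on a developer machine: check name resolution separately from reachability
        nslookup winserverdev
        ping 192.0.2.25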

  • Is it possible to limit output bandwidth between eth0 and lo?

    - by mmcbro
    I'm trying to limit the bandwidth between my eth0 output (nginx proxy) and my loopback interface (Apache) by filtering on destination port. Incoming packet -> eth0 -> 0.0.0.0:80 nginx -> tc qdisc class/iptables mangle on port 2525 -> 127.0.0.1:2525 Apache I don't know if this is even possible; I'm just experimenting. My rules are the following: tc qdisc add dev eth0 root handle 1:0 htb tc class add dev eth0 parent 1:0 classid 1:10 htb rate 2mbps ceil 2mbps prio 0 tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 10 fw flowid 1:10 iptables -A OUTPUT -t mangle -p tcp --dport 2525 -j MARK --set-mark 10 I also tried the FORWARD chain, but it's still the same.

    Read the article
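
    Traffic from nginx to 127.0.0.1:2525 never touches eth0: it is routed over the loopback interface, so a qdisc and filter attached to eth0 will never see those packets. If shaping that local hop is really the goal, a sketch of the same policy attached to lo instead (rate copied from the question):

        tc qdisc add dev lo root handle 1:0 htb
        tc class add dev lo parent 1:0 classid 1:10 htb rate 2mbps ceil 2mbps
        tc filter add dev lo parent 1:0 protocol ip prio 1 handle 10 fw flowid 1:10
        iptables -t mangle -A OUTPUT -o lo -p tcp --dport 2525 -j MARK --set-mark 10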

  • How to get rid of NAT in a LAN?

    - by Alberto
    Currently the LAN I manage is organized as follows: an internal network (192.168.1.0) uses a Linux server as its gateway (internal address 192.168.1.1 on interface br0, external address 10.0.0.2 on interface br1) through NAT; the 10.0.0.0 network then has another gateway (10.0.0.1), which through another NAT connects the whole thing to the internet. What I would like to achieve is to configure the Linux server so that the first layer of NAT is no longer necessary, so that, for example, a computer on the 10.0.0.0 network can ping every computer on the 192.168.1.0 network. I deleted this iptables rule: iptables -t nat -A POSTROUTING -o br1 -j SNAT --to-source 10.0.0.2, but of course now computers on 192.168.1.0 cannot reach the internet; IP forwarding is of course enabled. What's missing here? Thanks

    Read the article
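
    Dropping the SNAT rule means packets now leave the Linux server with 192.168.1.x source addresses, and the upstream gateway (10.0.0.1) has no route for sending replies back; the missing piece is usually a return route on that gateway plus permissive FORWARD rules on the server. A sketch assuming a /24 internal network and that the 10.0.0.1 box is Linux (otherwise use its equivalent static-route setting):

        # on the Linux server (10.0.0.2)
        iptables -A FORWARD -i br0 -o br1 -j ACCEPT
        iptables -A FORWARD -i br1 -o br0 -j ACCEPT
        # on the 10.0.0.1 gateway
        ip route add 192.168.1.0/24 via 10.0.0.2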

  • rkhunter warns of inode changes but no file modification date changes

    - by Nicholas Tolley Cottrell
    I have several systems running CentOS 6 with rkhunter installed, and a daily cron job running rkhunter and reporting back via email. I very often get reports like: ---------------------- Start Rootkit Hunter Scan ---------------------- Warning: The file properties have changed: File: /sbin/fsck Current inode: 6029384 Stored inode: 6029326 Warning: The file properties have changed: File: /sbin/ip Current inode: 6029506 Stored inode: 6029343 Warning: The file properties have changed: File: /sbin/nologin Current inode: 6029443 Stored inode: 6029531 Warning: The file properties have changed: File: /bin/dmesg Current inode: 13369362 Stored inode: 13369366 From what I understand, rkhunter will usually report a changed hash and/or modification date on the scanned files too, so this leads me to think there is no real change. My question: is there some other activity on the machine that could make the inode change (the filesystem is ext4), or is this really yum making regular (roughly once a week) changes to these files as part of normal security updates?

    Read the article
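
    The quickest way to confirm the yum theory is to check whether the packages owning those binaries were updated around the time of the warnings, and if so to refresh rkhunter's stored properties (rkhunter can also be told to verify against the RPM database via the PKGMGR=RPM option in rkhunter.conf). A sketch:

        rpm -qf /sbin/fsck /sbin/ip /sbin/nologin /bin/dmesg    # which packages own the flagged files
        rpm -qa --last | head -20                               # most recently installed/updated packages
        yum history list all | head                             # did a transaction run before the report?
        rkhunter --propupd                                      # re-baseline once the changes are explained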

  • Technicolor TG582n with external DHCP server [on hold]

    - by Jack
    We have a small home setup with a Technicolor TG582n on the Plusnet ISP, and a Samba4 DC with DNS forwarding enabled. The DC forwards to the Technicolor. However, the client machines have their DNS settings set manually, which is an annoyance when using laptops on other networks. We would like to have DHCP handled by the server machine, so that when a client connects to the Technicolor it gets its IP and DNS information from the DHCP server, eliminating the need to set adapter DNS settings manually. However, I cannot find an option to disable DHCP on the Technicolor, and I am not completely clear on how one would point DHCP services at the server from the Technicolor even if there were such an option. So, how would one make the Technicolor use an external server for DHCP leases?

    Read the article
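
    The usual split is to switch off the DHCP server on the router (on this family of routers that is done through its web interface or telnet CLI; the exact menu varies by firmware) and let the Samba4/DC box answer leases, handing out itself as the DNS server and the Technicolor as the gateway. An ISC dhcpd sketch with assumed addresses:

        subnet 192.168.1.0 netmask 255.255.255.0 {
            range 192.168.1.100 192.168.1.199;
            option routers 192.168.1.254;             # the Technicolor (assumed address)
            option domain-name-servers 192.168.1.10;  # the Samba4 DC (assumed address)
        }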

  • Nginx server_name is set to mydomain.com, so why is www.mydomain.com getting served too?

    - by Lorenz Forvang
    I have my Nginx conf set up as follows: server { listen 443 ssl; server_name mydomain.com; ... } When I load https://mydomain.com, the site loads fine. But when I load https://www.mydomain.com, the site loads as well. Why is this happening? I set up the DNS records using Amazon Route 53 as: A mydomain.com xxx.xxx.xxx.xxx (IP) CNAME www.mydomain.com mydomain.com So is a request to www.mydomain.com arriving at Nginx as a request to mydomain.com? If so, how do I differentiate requests to www.mydomain.com and mydomain.com at my server?

    Read the article
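
    With only one server block listening on 443, nginx treats it as the default for any host name arriving on that port, so requests for www.mydomain.com fall into it as well. Adding a second block makes the two names distinguishable; a sketch that redirects www to the bare domain, leaving the existing block as it is (the certificate must also cover www.mydomain.com or the browser will warn before the redirect):

        server {
            listen 443 ssl;
            server_name www.mydomain.com;
            # ssl_certificate / ssl_certificate_key as in the existing server block
            return 301 https://mydomain.com$request_uri;
        }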
