Search Results

Search found 26001 results on 1041 pages for 'local ip'.

  • Iptables remote port forwarding and dynamic remote ip

    - by lbwtz2
    I want to forward a port from my remote VPS to my home server, and I am quite a newbie with iptables. The problem is that I use a dynamic DNS service to reach my home server from the internet, so I don't have a fixed IP, and iptables does not accept the hostname in these rules. The rules I want to use are:

        -t nat -A PREROUTING -p tcp -i eth0 -d xxx.xxx.xxx.xxx --dport 8888 -j DNAT --to myhome.tld:80
        -A FORWARD -p tcp -i eth0 -d myhome.tld --dport 80 -j ACCEPT

    Of course I receive a "Bad IP address" error because of myhome.tld. What can I do?
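
    One common workaround is a small script, run from cron on the VPS, that resolves the dynamic DNS name and rewrites the DNAT rule whenever the address changes. A minimal sketch, assuming a Linux VPS; the hostname, ports and cache path are placeholders, and the matching FORWARD/MASQUERADE rules still need to exist separately:

        #!/bin/sh
        # refresh-dnat.sh - re-point the port forward at the current dynamic DNS address
        HOST=myhome.tld
        PORT_IN=8888
        PORT_OUT=80
        CACHE=/var/run/ddns-dnat.ip

        NEW_IP=$(getent hosts "$HOST" | awk '{print $1; exit}')
        [ -z "$NEW_IP" ] && exit 1        # resolution failed; keep the old rule

        OLD_IP=$(cat "$CACHE" 2>/dev/null)
        if [ "$NEW_IP" != "$OLD_IP" ]; then
            # drop the rule pointing at the stale address, if there was one
            [ -n "$OLD_IP" ] && iptables -t nat -D PREROUTING -p tcp -i eth0 --dport "$PORT_IN" \
                -j DNAT --to-destination "$OLD_IP:$PORT_OUT" 2>/dev/null
            # install the rule for the fresh address
            iptables -t nat -A PREROUTING -p tcp -i eth0 --dport "$PORT_IN" \
                -j DNAT --to-destination "$NEW_IP:$PORT_OUT"
            echo "$NEW_IP" > "$CACHE"
        fi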

    Read the article

  • How to make a local Apache server public/visible?

    - by George
    I am running an Apache2 server on Fedora 13. I'd like to make it publicly accessible (visible): for example, when somebody types http://my.ip.number/ they should see what I have in my document root folder. This is just for a presentation of course work at university. Permissions are set to 755, the document root is owned by the apache user, and SELinux is temporarily disabled. But port 80 is closed. I tried to open it by adding an entry to iptables and restarting the service, with no change. I guess I am missing something big here. Help would be greatly appreciated. Note: I have a static (public, real) IP address.
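
    A minimal sketch of opening the port on a stock Fedora 13 firewall, assuming the default iptables init script is in use (the ACCEPT is inserted at the top so it lands before the trailing REJECT rule):

        # open TCP port 80 in the running firewall
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT

        # persist the rule across reboots (written to /etc/sysconfig/iptables)
        service iptables save
        service iptables restart

    If the port still shows closed from outside, the block is usually upstream of the host: a NAT router without a port forward, or an ISP filtering inbound port 80.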

    Read the article

  • Apache VirtualHost Blockhole (Eats All Requests on All Ports on an IP)

    - by Synetech inc.
    I’m exhausted. I just spent the last two hours chasing a goose that I have been after, on and off, for the past year. Here is the goal, put as succinctly as possible.

    Step 1, HOSTS file:

        127.0.0.5 NastyAdServer.com
        127.0.0.5 xssServer.com
        127.0.0.5 SQLInjector.com
        127.0.0.5 PornAds.com
        127.0.0.5 OtherBadSites.com
        …

    Step 2, Apache httpd.conf:

        <VirtualHost 127.0.0.5:80>
            ServerName adkiller
            DocumentRoot adkiller
            RewriteEngine On
            RewriteRule (\.(gif|jpg|png|jpeg)$) /p.png [L]
            RewriteRule (.*) /ad.htm [L]
        </VirtualHost>

    So basically the HOSTS file redirects designated domains to the localhost, but to a specific loopback IP address. Apache listens for any request on that address and serves either a transparent pixel graphic or an empty HTML file. Thus any page or graphic from any of the bad sites is replaced with nothing; in other words, an ad/malware/porn/etc. blocker. This works great as is (and has for me for years now).

    The problem is that these bad things are no longer limited to HTTP traffic. For example: <script src="http://NastyAdServer.com:99">, or <iframe src="https://PornAds.com/ad.html">, or a Trojan using ftp://spammaster.com/[email protected];[email protected];[email protected], or an app “phoning home” with private info in a crafted ICMP packet by pinging CardStealer.ru:99.

    Handling HTTPS is a relatively minor bump: I can create a separate VirtualHost just like the one above, replacing port 80 with 443 and adding the SSL directives. That leaves the other ports to be dealt with. I tried using * for the port, but then I get overlap errors. I tried redirecting all requests to the HTTPS server and vice versa, but neither worked; either the SSL requests wouldn’t redirect correctly, or the HTTP requests gave the “You’re speaking plain HTTP to an SSL-enabled server port…” error. Further, I cannot figure out a way to test whether other ports are being successfully redirected (I could try a browser, but what about FTP, ICMP, etc.?).

    I realize that I could just use a port blocker (e.g. ProtoWall, PeerBlock, etc.), but there are two issues with that. First, I am blocking domains with this method, not IP addresses, so to use a port blocker I would have to get each and every domain’s IP and update them frequently. Second, with the current method I can have Apache keep logs of all the ad/malware/spam/etc. requests for future analysis (my AdKiller logs are already 466 MB). I appreciate any help in successfully setting up an Apache VirtualHost blackhole. Thanks.
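
    For the extra TCP ports, one possible sketch (untested against this exact setup) is to bind the blackhole address explicitly on each additional port and use a wildcard-port VirtualHost. Note that Apache can only ever answer HTTP(S)-style requests, so FTP, ICMP and other protocols still need a firewall rule rather than a web server:

        # httpd.conf - extra ports bound only on the blackhole address.
        # Ports already covered by a global "Listen 80"/"Listen 443" must not be repeated
        # here, or Apache will refuse to start with an "address already in use" error.
        Listen 127.0.0.5:99
        Listen 127.0.0.5:8080

        # one vhost catches every port bound on 127.0.0.5
        <VirtualHost 127.0.0.5:*>
            ServerName adkiller
            DocumentRoot adkiller
            RewriteEngine On
            RewriteRule (\.(gif|jpg|png|jpeg)$) /p.png [L]
            RewriteRule (.*) /ad.htm [L]
        </VirtualHost>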

    Read the article

  • TCP/IP communication between Hyper-V host and guests

    - by Tedd Hansen
    This may be a simple one. :) I have a simple Hyper-V setup with a few guest OSes running. The host has one physical network adapter with a static IP assigned to it. The guests have network adapters assigned to "Internet" (a Hyper-V network), which is bound to the physical host network adapter (Hyper-V "External" connection type). I am not able to communicate (ping or anything else) between guests and host. I've checked the firewall and it seems fine (ports open from anywhere still don't work). I'm trying to communicate with the host's IP assigned to the same physical interface that the guests are sharing. The guests can communicate with each other just fine. I can't seem to find any relevant setting (I might just be missing it). So my question: how do I fix it so host and guests can communicate?
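
    One quick check worth ruling out: Windows firewalls drop inbound ICMP echo by default, so ping can fail even when the network path is fine. A sketch of allowing echo requests on the host (the same command works inside the guests), assuming the built-in Windows Firewall is in use:

        netsh advfirewall firewall add rule name="Allow ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow

    If ping still fails with the firewalls ruled out, the usual suspect is the virtual switch: the external virtual network needs to share the adapter with the management (host) operating system, otherwise host and guests end up on separate paths to the same NIC.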

    Read the article

  • Routing connections to passthrough a local machine

    - by xiamx
    Please tell me if what I'm trying to do is feasible. I have a router named "R" which is connected to the WAN. R allows adding rules to its routing table. There are numerous machines connected to the LAN ports of R; they all have IP addresses in 192.168.1.* assigned by DHCP on R. Among those machines there's a machine C with the IP address 192.168.1.100. I want all traffic from the other machines in the subnet to pass through machine C, where some filtering and logging will be done. Is this possible? Is there a name for what I'm trying to do? (So I can do more googling later.)
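
    A sketch of what machine C would need if it runs Linux and the other hosts' default gateway is changed (for example via R's DHCP settings) to 192.168.1.100; the interface name eth0 is an assumption:

        # let C forward packets on behalf of the other machines
        sysctl -w net.ipv4.ip_forward=1

        # log everything passing through, then let it continue (filtering rules go here too)
        iptables -A FORWARD -j LOG --log-prefix "LAN-FWD: "
        iptables -A FORWARD -j ACCEPT

        # hand the traffic on to the real router, rewriting the source so replies come back via C
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    Without the MASQUERADE rule, the return traffic would flow from R straight back to the clients and bypass C on the way in.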

    Read the article

  • Something like Dropbox for local use

    - by Casper
    I am looking for a solution to sync folder pairs between a NAS and multiple local Macs. Each of the Macs could edit files, and the other Macs should then get the changes synced automatically. Basically my own local version of Dropbox, without using cloud storage. I have looked into solutions using rsync; as I understand it, rsync is not really capable of doing a bi-directional sync. I also do not want to have to invoke the sync process by hand. I would prefer a daemon running in the background, waiting and checking for changes and then syncing them "live". The program should also be flexible enough to recognize that it sometimes (in the case of laptops) cannot reach the NAS. It should then just wait for the connection to come back, without bugging me every few minutes. I have looked into Synk, folderwatch, rsync and a few others, but I haven't really found a solution. Isn't there something like "offline folders" from Microsoft for the Mac? Thanks. PS: Just for clarification, I don't want to sync for backup purposes; I want to sync so that all Macs have a local copy of the most recent changes to the files.
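
    One candidate that may fit is Unison, which does true bi-directional sync between a Mac and an ssh-reachable NAS. A minimal profile sketch; the paths and host name are made up, and the file-watching mode needs a recent Unison build with its fsmonitor helper installed:

        # ~/.unison/nas.prf - hypothetical profile syncing a local folder with the NAS
        root = /Users/casper/Shared
        root = ssh://nas.local//volume1/Shared

        # keep running and re-sync whenever either side changes
        repeat = watch
        batch = true
        prefer = newer

    Running unison nas on each Mac (for instance from launchd) then keeps every machine converging on the latest copy of the shared folder.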

    Read the article

  • Can IP v4 and IP v6 share a single physical Ethernet?

    - by sleske
    I keep reading about the transition from IPv4 to IPv6 and the possible advantages and problems. One thing that keeps popping up is "dual-stack" networking, meaning (I believe) that a host can speak both IPv4 and IPv6. I don't quite understand how this works, however. Can a host actually transmit using IPv4 and IPv6 at the same time over the same physical Ethernet (much as, say, HTTP and FTP can be used simultaneously)? Or is the physical network strictly IPv4 or IPv6, with the "other" protocol sent via tunneling?
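
    For what it's worth, dual-stack does mean both protocols run side by side on the same wire: each Ethernet frame simply carries an EtherType of 0x0800 for IPv4 or 0x86DD for IPv6, and the host answers to an address of each family on the same NIC. A sketch of a Debian-style /etc/network/interfaces with both families on one interface (the addresses are documentation examples):

        auto eth0
        iface eth0 inet static
            address 192.0.2.10
            netmask 255.255.255.0
            gateway 192.0.2.1

        iface eth0 inet6 static
            address 2001:db8::10
            netmask 64
            gateway 2001:db8::1

    Tunneling (6to4, Teredo and the like) is only needed when some segment of the path cannot carry native IPv6.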

    Read the article

  • Where and how does Kindle Cloud Reader store downloaded books, on a Windows 7 system?

    - by einpoklum
    I use Firefox and sometimes Chrome, on Windows 7. Amazon's in-browser Kindle Cloud Reader lets you "download" books for local/offline viewing. Where are these stored, given my OS and browser combination? I've searched the Users subdirectory for my user and could not find a relevant (separate) file in there, specifically not in the Firefox and Chrome profile directories. To clarify, the files are obviously not downloaded as-is; they are stored in some potentially obfuscated format, possibly in the browser's local store and possibly elsewhere. The question is, where and how exactly? (This was originally part of an earlier question, but wasn't answered there since it was not that question's main focus.)

    Read the article

  • How to exclude IP from htaccess domain redirect

    - by ijujym
    I'm trying to write a custom redirect rule, for testing purposes, on two domains carrying exactly the same site. The code I am using is:

        RewriteEngine on
        RewriteCond %{REMOTE_ADDR} !^1\.2\.3\.4$
        RewriteCond %{HTTP_HOST} ^.*site1.com [NC]
        RewriteRule ^(.*)$ http://www.site2.com/$1 [R=301,L]

    What I want is to redirect all requests for site1 to site2, except for requests from the IP address 1.2.3.4. But currently requests from that IP are also being redirected to site2. Is there something I've missed in the settings? (Note: both domains are on the same shared hosting account.)
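
    Two things commonly bite here. First, a 301 is cached aggressively by browsers, so a redirect issued before the exception was in place can keep replaying from cache; testing with R=302 (or a fresh browser profile) rules that out. Second, on shared hosting the requests often arrive via a proxy or load balancer, in which case REMOTE_ADDR is the proxy's address rather than the visitor's. A sketch that also exempts the address when it is carried in the X-Forwarded-For header (the IP is the placeholder from the question):

        RewriteEngine on
        RewriteCond %{REMOTE_ADDR} !^1\.2\.3\.4$
        RewriteCond %{HTTP:X-Forwarded-For} !(^|,\s*)1\.2\.3\.4($|,)
        RewriteCond %{HTTP_HOST} site1\.com$ [NC]
        RewriteRule ^(.*)$ http://www.site2.com/$1 [R=302,L]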

    Read the article

  • Logging the client IP with Nginx/Varnish/Apache

    - by jetboy
    I have Nginx listening on port 443 as an SSL terminator, proxying unencrypted traffic to Varnish on the same server. Varnish 3 handles this traffic, plus traffic coming in directly on port 80. All traffic is passed, unencrypted, to Apache instances on other servers in the cluster. The Apache instances use mod_rpaf to replace the logged client IP with the contents of the X-Forwarded-For header. My problem is that when traffic comes via Nginx, the 'correct' client IP gets logged in the varnishncsa logs, but Varnish appears (understandably) to be replacing Nginx's X-Forwarded-For value with 127.0.0.1 before passing the request on, and that is what Apache logs. Is there a nice simple way to stop Varnish rewriting X-Forwarded-For if it's already populated?
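
    A sketch of a Varnish 3 vcl_recv that leaves an X-Forwarded-For set by Nginx alone and only fills it in for direct port-80 traffic. One wrinkle: the built-in vcl_recv (which appends client.ip to the header) runs after any custom vcl_recv that falls through, so this block ends with explicit returns that mirror a simplified version of the default caching decisions; it is a starting point, not a drop-in:

        sub vcl_recv {
            if (req.restarts == 0) {
                if (!req.http.X-Forwarded-For) {
                    # direct port-80 traffic: record the real client ourselves
                    set req.http.X-Forwarded-For = client.ip;
                }
                # requests proxied from the local Nginx SSL terminator already
                # carry the real client address, so the header is left untouched
            }
            if (req.request != "GET" && req.request != "HEAD") {
                return (pass);
            }
            if (req.http.Authorization || req.http.Cookie) {
                return (pass);
            }
            return (lookup);
        }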

    Read the article

  • Problem with domain getting turned to IP address for https

    - by user229133
    I have a website running on Windows Server 2003. The site is https://mysite.com/ at the IP address 111.1.1.1. When I log into the site, all my relative links that are generated using NavURL (<%# NavURL("Images/Menu/img.gif") %>) come out as "http://111.1.1.1/Images/Menu/img.gif" instead of "https://mysite.com/Images/Menu/img.gif". This causes an error because the links need to be secure. I'm sure there is a setting on the server somewhere to point to the name and not the IP, but I don't know where. Thanks for your help.

    Read the article

  • Mixed IP and Name Based Virtual Hosts with nginx

    - by nerkn
    I have set up many domains, but I don't know how to configure a site when only an IP address is given. For a domain, say foo.com, I have a setup that serves web/foo.com/htdocs; I want the IP address 88.99.66.55 to act like a domain and serve web/fook.com/htdocs.

        server {
            listen 80;
            server_name 85.99.66.55;
            location / {
                root /home/web/fook.com/htdocs;
            }
            location ~ \.(php|php3|php4|php5)$ {
                root /home/web/fook.com/htdocs;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
            }
        }

    This resulted in: [warn]: conflicting server name "85.105.65.219" on 0.0.0.0:80, ignored
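
    That warning usually means another server block already uses the same IP as its server_name. One sketch (nginx 0.8+ syntax, paths reused from the question) is to drop the IP from server_name entirely and make this block the catch-all for any request that matches no other name, which includes requests made straight to the bare IP:

        server {
            # answers requests whose Host header matches no other server_name
            listen 80 default_server;
            server_name _;

            root /home/web/fook.com/htdocs;

            location ~ \.(php|php3|php4|php5)$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
            }
        }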

    Read the article

  • Exchange 2007 automatically adding IP to block list

    - by Tim Anderson
    This puzzled me. We have all mail directed to an ISP's spam filter, then delivered to SBS 2008 Exchange. One of the ISP's IP addresses suddenly appeared in the Exchange 2007 block list, set to expire in 24 hours I think, so emails started bouncing. A quick look through the typically ponderous docs turns up nothing that says Exchange will auto-block an IP address, but nobody is admitting to adding it manually, and I think Exchange must have done it itself. Does anyone know about this behaviour, or where it is configured? Obviously one could disable block lists completely, but I'd like to know exactly why this happened.
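
    The anti-spam agents in Exchange 2007 can add temporary entries themselves: the Sender Reputation feature blocks an IP for a configurable period once its sender reputation level crosses a threshold. A few Exchange Management Shell commands that may help confirm this is what happened (a sketch; run them on the server with the anti-spam agents installed):

        # list the current block-list entries, including automatically inserted ones
        Get-IPBlockListEntry

        # sender reputation is the feature that auto-adds IPs for a limited period
        Get-SenderReputationConfig | Format-List SenderBlockingEnabled,SenderBlockingPeriod,SrlBlockThreshold

        # once identified, a specific entry can be removed by its identity
        # Remove-IPBlockListEntry -Identity <entry ID>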

    Read the article

  • Windows 7 - Static DHCP server address with dynamic IP Address

    - by mkstreet
    Is this possible? On my LAN, I would like to set up the network properties so that the DHCP server's address is static, but that server still hands out the IP addresses and DNS addresses dynamically. The reason is that some devices on the LAN will try to behave like a DHCP server. For example, we use software to push images to computers on the LAN (our computers' software configurations are centrally managed). When that imaging distribution software happens to be running, the machines being imaged get confused about which device is the DHCP server: the real one, or the machine that is sending them the image. So, to remove the confusion, I would like to set up my Windows 7 images so that the DHCP server address is statically assigned, and that server would then assign the IP addresses and DNS addresses dynamically.

    Read the article

  • IPSec VPN IP addresses

    - by Randomblue
    I have an IPSec VPN on my Windows 7 machine (all using the native Windows 7 VPN support). The host I am connecting to has different ISAKMP "Phase 1" and "Phase 2" IP addresses. As I understand it, the Phase 1 address is that of the IPsec endpoint, to which I can connect just fine. The Phase 2 address is found in their "crypto map", and the addresses need to match. At the moment, both my Phase 1 and Phase 2 addresses are configured the same. On my side I get the error "Error 791: The L2TP connection attempt failed because security policy for the connection was not found". How can I configure the Phase 2 IP address for my Windows 7 IPSec VPN to be different from the IPSec endpoint address?

    Read the article

  • Ubuntu 11.04 Static IP doesn't take

    - by mrduclaw
    I'm trying to set a static IP address in Ubuntu 11.04 (a server install). I edited my /etc/network/interfaces file to include:

        auto eth0
        iface eth0 inet static
            address 10.0.0.100
            netmask 255.255.255.0
            gateway 10.0.0.1

    When I do an /etc/init.d/networking restart this appears to take. After a while, though, 10.0.0.100 turns into something dished out by the DHCP server. My problem appears to be similar to this: "Ubuntu intrepid - static IP networking keeps restarting with DHCP". But I don't have GNOME installed. Is there anything else running in the background that could be doing this? And if so, how do I disable it?
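
    A few checks that may reveal what keeps re-running DHCP; on a server install the usual culprit is a leftover dhclient process rather than GNOME's NetworkManager (the commands are a sketch):

        # see whether a DHCP client is still running against eth0
        ps aux | grep -E 'dhclient|dhcpcd' | grep -v grep

        # if so, release its lease and stop it
        sudo dhclient -r eth0
        sudo killall dhclient

        # confirm NetworkManager isn't installed and managing the interface anyway
        dpkg -l network-manager

    After killing any stray client, ifdown eth0 && ifup eth0 (or another networking restart) should bring the static address back and keep it.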

    Read the article

  • Choosing local versus public domain name for Active Directory

    - by DSO
    What are the pros and cons of choosing a local domain name such as mycompany.local versus a publicly registered domain name such as mycompany.com (assuming that your org has registered the public name)? When would you choose one over the other?

    UPDATE: Thanks to Zoredache and Jay for pointing me to this question, which had the most useful responses. That also led me to find this Microsoft TechNet article, which states: "It is best to use DNS names that are registered with an Internet authority in the Active Directory namespace. Only registered names are guaranteed to be globally unique. If another organization later registers the same DNS domain name, or if your organization merges with, acquires, or is acquired by another company that uses the same DNS names, then the two infrastructures cannot interact with one another. Note: Using single label names or unregistered suffixes, such as .local, is not recommended."

    Combining this with mrdenny's advice, I think the right approach is to use either a registered domain name that will never be used publicly (e.g. mycompany.org, mycompany.info, etc.), or a subdomain of an existing public domain name that will never be used publicly (e.g. corp.mycompany.com). The "never used publicly" part is a business decision, so it's probably best to get sign-off from those in the company authorized to reserve domain names and subdomains. E.g. you don't want to use a registered name or subdomain that the marketing dept later wants to use for some public marketing campaign.

    Read the article

  • Windows Firewall 2008 Server - Allow only given IP in, block all others

    - by chumad
    I've got a Windows 2008 server with the built-in Windows Firewall on it. I've played around with the Advanced settings where I can set up inbound/outbound rules, but it doesn't appear that I can create a rule that says "block all incoming traffic except traffic coming from this IP address". I created a rule that blocks all, but I've found no way to create a rule that "overrides" the block rule and lets one or more IPs in. I accomplished this on a Win2k box using IPsec, but it seems that IPsec is now built into Windows Firewall. Any tips?
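
    The scoping is there, just tucked into the rule properties rather than a separate override: an allow rule can be restricted to a remote address, and the profile's default inbound action does the blocking. A sketch using netsh (the address 203.0.113.5 and the rule name are placeholders):

        rem default-deny inbound, allow outbound, on all profiles
        netsh advfirewall set allprofiles firewallpolicy blockinbound,allowoutbound

        rem allow everything in, but only from the one trusted address
        netsh advfirewall firewall add rule name="Allow trusted host" dir=in action=allow remoteip=203.0.113.5 protocol=any

    In the GUI the same thing lives on the rule's Scope tab ("Remote IP address"); with block-by-default in place, only traffic matching an allow rule gets through.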

    Read the article

  • Changed domain A records for new static ip, but no mail

    - by Tim the Enchanter
    We have recently changed our ISP. I have changed the mail and mailserver DNS A records for our domain name to point to the new external static IP address assigned to the router by the new ISP (the MX record points to mail.<mydomain> as always), but I am not getting any email (though sending email works). Do I just have to wait until the change propagates? I am slightly concerned because I can connect to the web email service exposed through the new router, which suggests that the mail.<mydomain> address change has taken effect. Have I missed something?
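
    A couple of checks that may narrow it down (substitute the real domain for the mydomain.tld and ns1.mydomain.tld placeholders). Old records can keep being served from caches until their TTL runs out, and inbound SMTP also depends on the new router forwarding TCP port 25, which is a separate path from the webmail port:

        # what resolvers currently return for the MX target and the mail host
        dig +short MX mydomain.tld
        dig +short A mail.mydomain.tld

        # ask the authoritative name server directly, bypassing cached answers
        dig +short A mail.mydomain.tld @ns1.mydomain.tld

        # the TTL on the answer shows how long stale cached copies can survive
        dig A mail.mydomain.tld +noall +answer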

    Read the article

  • How to make ssh match known_hosts to host/ip:port instead of just host/ip?

    - by Prody
    I have two machines behind a firewall, with their ssh ports forwarded to 2201 and 2202. When I ssh host -p 2201 it asks if I trust the machine; I say yes and it gets added to ~/.ssh/known_hosts. Then when I ssh host -p 2202 it doesn't let me, because there is already a known_hosts entry for this IP at ~/.ssh/known_hosts:1 (the file was empty when I started, so line 1 is the one added by the previous ssh run). This happens on CentOS 5.4. On other distros (I've tried Arch), ssh appears to match known_hosts entries against the port too, so I can have multiple fingerprints for multiple ports on the same host/IP without any problems. How can I get the same behavior on CentOS? I couldn't find anything in man ssh_config (or at least not without disabling fingerprint checking).
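
    One client-side workaround, regardless of how the installed OpenSSH records ports, is to give each forwarded machine its own alias and host-key alias in ~/.ssh/config, so the two ports get separate known_hosts entries. A sketch with made-up alias names:

        Host home-box1
            HostName host
            Port 2201
            HostKeyAlias home-box1

        Host home-box2
            HostName host
            Port 2202
            HostKeyAlias home-box2

    After that, ssh home-box1 and ssh home-box2 each check (and store) their own fingerprint.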

    Read the article

  • Set up homeserver with single IP to host multiple sites on Ubuntu [closed]

    - by Ortix92
    I am trying to set up my home server so it can function like a regular server one would rent. I am running Ubuntu 12.04 LTS with OpenPanel, and I have a single static IP address. I am used to having two addresses, pointing them to NS1.domain.tld and NS2.domain.tld, and setting up the proper DNS records. I should also mention that I am somewhat new to DNS zones. Either way, how would I go about setting this up correctly (in OpenPanel) with just a single IP address, if that is possible at all? I have also read about free solutions online, but I would like to keep everything secure and private so other people can't peer into my data somehow. Thanks!
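
    Nothing technically stops both name-server names from resolving to the same address; many registrars simply require two NS host names. A sketch of the relevant zone records, with 203.0.113.10 standing in for the single static IP and domain.tld as a placeholder:

        ; glue/A records for the name servers, both on the one static address
        ns1.domain.tld.     IN  A   203.0.113.10
        ns2.domain.tld.     IN  A   203.0.113.10

        ; the zone delegates to those two names
        domain.tld.         IN  NS  ns1.domain.tld.
        domain.tld.         IN  NS  ns2.domain.tld.

        ; the site itself
        domain.tld.         IN  A   203.0.113.10
        www.domain.tld.     IN  A   203.0.113.10

    The trade-off is a single point of failure: if the home connection goes down, DNS for the domain goes down with it, which is the usual argument for putting a secondary name server somewhere else.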

    Read the article

  • Best way to log internet traffic for office network via remote IP

    - by buzzmonkey
    We have a network of about 40 machines running either Windows XP or 7 in our office, connected via LAN switches to a single Netgear router (WNDR3700). We recently noticed that our public IP has been added to the CBL blacklist because one of our machines is infected with Torpig. I have tried Kaspersky's TDSSKiller anti-rootkit utility to find the infected machine, but all of them appear to be clean. The CBL listing advises finding the local machine that is connecting to the remote IP address (CBL has provided the range). However, our router does not have the ability to block remote IP addresses. Does anyone know of software that can log all the internet traffic, which we can then use to find the infected machine?
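
    If one machine (or a laptop running Wireshark/tcpdump) can be put on a switch mirror/SPAN port, or temporarily placed inline between the switches and the router, a capture filtered on the sinkhole address from the CBL listing will name the culprit by its source IP. A sketch with a placeholder address:

        # capture only traffic headed for the address/range CBL reported
        tcpdump -ni eth0 dst host 192.0.2.50 -w torpig-hits.pcap

        # afterwards, list which internal machines generated those packets
        tcpdump -nr torpig-hits.pcap | awk '{print $3}' | sort | uniq -c | sort -rn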

    Read the article

  • iptables change destination IP without DNAT

    - by Mad_Ady
    I'm trying to work around a broken application which insists on connecting to a server's private (and therefore unreachable) address instead of its public address (even though the relevant port is open). Changing the application is not an option. I'm trying to add iptables rules on the client(s) to change the destination IP, so that packets addressed to 192.168.251.3 go to 1.2.3.4 instead. DNAT isn't working, since 1.2.3.4 is not an IP on any of my client interfaces. Can anyone point me to the relevant documentation that would let me use MANGLE to change destination IPs?
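
    For what it's worth, the mangle table can only mark packets or tweak fields like TOS and TTL; rewriting addresses is still DNAT's job, and the --to-destination address does not need to belong to a local interface. A sketch for the client, reusing the addresses from the question:

        # rewrite locally generated packets aimed at the private address
        iptables -t nat -A OUTPUT -d 192.168.251.3 -j DNAT --to-destination 1.2.3.4

        # if the client also forwards traffic for other hosts, cover that path too
        iptables -t nat -A PREROUTING -d 192.168.251.3 -j DNAT --to-destination 1.2.3.4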

    Read the article
