Search Results

Search found 32610 results on 1305 pages for 'internet options'.


  • How to have multiple web servers and IPs on the same physical network

    - by jsigned
    I do web development out of a small office and need to have multiple physical and virtual servers that can be accessed from the internet. I also have a number of devices (computers, laptops, tablets, printers, etc.) that need connections as well. I have gotten a subnet of 8 IPs from my ISP, and while that is adequate for the web servers, it's far too small for everything that needs access to the network. My router is an ASUS RT-N16 running DD-WRT. I'm just smart enough about this routing topic to be dangerous (think two-year-old with a magic marker).

    I would like to keep my internal network NATed on the 192.168.x.x network and route the 68.69.x.x 255.255.255.248 traffic directly to the servers. The physical network consists of the 4-port DD-WRT router and an unmanaged gigabit switch. I have a fiber connection to the office that works as an Ethernet port; in other words, I can plug my laptop directly into it and have access to the internet. There is no login or password, and the router is set up to get DHCP from the ISP and to provide DHCP addresses for the internal network.

    What I've done so far is google and try different configurations with little success. In the end I decided I didn't even know how to ask the questions needed. My questions are: Is this the best way to configure the network? How do you do it? VLANs? Multiple routers? I've never had to configure a router using anything more than the GUI, so if this is command-line stuff, be gentle.
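    One possible direction, sketched below with purely hypothetical values (the documentation range 203.0.113.0/29 stands in for the real /29, and br1/vlan2 stand in for however the DD-WRT ports end up grouped), assuming the ISP routes the public subnet to the router rather than expecting the servers directly on the WAN segment:

        # put the routed /29 on a separate bridge/VLAN that the servers plug into
        ip addr add 203.0.113.1/29 dev br1
        # exclude the public subnet from NAT so the servers keep their real addresses
        iptables -t nat -I POSTROUTING -s 203.0.113.0/29 -j ACCEPT
        # keep masquerading only for the private LAN
        iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o vlan2 -j MASQUERADE

    Whether those interface names are right depends entirely on how the ports and VLANs are set up in DD-WRT, so treat this strictly as a sketch of the idea, not a working configuration.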

    Read the article

  • Enabling mod_rewrite on Amazon Linux

    - by L. De Leo
    I'm trying to enable mod_rewrite on an Amazon Linux instance. My Directory directives look like this:

        <Directory />
            Order deny,allow
            Allow from all
            Options None
            AllowOverride None
        </Directory>

        <Directory "/var/www/vhosts">
            Order allow,deny
            Allow from all
            Options None
            AllowOverride All
        </Directory>

    And then further down in httpd.conf I have the LoadModule directive:

        ... other modules...
        #LoadModule substitute_module modules/mod_substitute.so
        LoadModule rewrite_module modules/mod_rewrite.so
        #LoadModule proxy_module modules/mod_proxy.so
        ... other modules...

    I have commented out all the Apache modules not needed by WordPress. Still, when I restart httpd and then check the loaded modules with /usr/sbin/httpd -l, I get only:

        [root@foobar]# /usr/sbin/httpd -l
        Compiled in modules:
          core.c
          prefork.c
          http_core.c
          mod_so.c

    Inside the virtual host containing the WordPress site I have an .htaccess containing:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    The .htaccess is owned by apache, which is the user Apache runs under. The apachectl -t command returns Syntax OK. What am I doing wrong? What should I check?
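    A detail that often trips people up here: httpd -l only lists statically compiled modules, so a DSO module such as mod_rewrite will never appear in that output even when it is loaded. A quick check that does include shared modules:

        # list both static and shared (DSO) modules loaded by the current config
        /usr/sbin/httpd -M | grep rewrite
        # or equivalently
        apachectl -M | grep rewrite

    If rewrite_module shows up there, the module is loaded and the problem is more likely in AllowOverride or the vhost configuration than in module loading.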

    Read the article

  • Suspected router problems [closed]

    - by jordon_user
    I am new to this forum and appreciate any help you can offer. I have been troubleshooting an error with my wired internet connection. I have 3 desktops running through this working connection (router) with no problems. Two other computers, one laptop and one desktop, experience the issue. If I connect either directly to the modem, there is no problem getting on the internet. Through the router, each computer connects for a period of time, but whenever I wake the computer or restart it, the yellow exclamation mark returns. The only way I can get the connection back up is from cmd: ipconfig /release, then ipconfig /renew. Running this constantly is annoying, and I was hoping for a permanent fix. Audio is also not working; I know it's likely a driver error, but it's an old computer we put together, and if there's an easy fix it'd be great to hear from you. If you need any more info just let me know. Thanks!

    Read the article

  • Java Swing over Remote Desktop - Strange, weird GUI squashing

    - by ADTC
    I thought this question fits SuperUser more than StackOverflow because it's not about actual Java programming, though programmers might be more likely to encounter the problem. Anyway, let me start off with some stats before I ask the actual question:

    Laptop:
      - Windows 7 x32
      - Screen resolution 1024 x 768; Nvidia GeForce Go 6200
      - Connected to desktop via ad-hoc wireless network
      - Access internet via desktop

    Desktop:
      - Windows 7 x64
      - Screen resolution 1920 x 1080
      - Connected to laptop via ad-hoc wireless network
      - Access internet via cable modem

    I'm connecting to my laptop via Remote Desktop from my desktop to take advantage of the large screen; I'm doing programming on my laptop for portability reasons. Everything else runs smooth and fast over Remote Desktop, as both computers are connected directly over the ad-hoc wireless. The only problem is this: Java Swing apps don't display their GUI properly. I acquired a Java Swing application and I'm debugging it in Eclipse. Here's what I got when I ran the app: [screenshot of the squashed GUI]. Apparently there isn't anything wrong with the GUI application I'm debugging itself, because the Java Control Panel exhibits the same problem. I've searched high and low in Google about this; the closest I came to a solution is this, but sadly the use of -Dsun.java2d.nodraw=true has no effect at all. This only happens over Remote Desktop; I have tried locally and the GUI apps display properly. This isn't a dealbreaker for me, as I can stop using Remote Desktop when developing Java Swing apps. However, I would like to know if anyone has encountered this and found any solution. PS: All software involved (Eclipse, Java JRE, etc.) is the latest version.

    Read the article

  • BGP multipath & return routes

    - by Dennis van der Stelt
    I'm probably a complete n00b concerning Server Fault related questions, but our IT department makes a bold statement I wish to verify. I've searched the internet but can find nothing related to my question, so I come here. We have Threat Management Gateway 2010, and we used to just route the request to IIS; it contained the client's IP address, so we could see where it was coming from. But now they have turned on "Requests appear to come from the TMG server", so IP addresses aren't forwarded anymore; every request has the IP of the TMG server.

    The idea behind this is that, because of multipath BGP routes, the incoming request goes over route A, but the acknowledgement messages could return over route B. The claim is that because the request doesn't come from the first known source (our proxy) but instead from IIS, some smart routers on the website visitor's side don't recognize the acknowledgement message and filter it out. In other words, the response never arrives. Again, this is the claim. But I cannot find ANY resources on the internet that support this claim. I do read about BGP multipath, but more in the case that there are alternative routes when the fastest route fails for some reason. So is the claim completely bogus, or is there (some) truth to it? Can someone explain or point me to resources? Thanks in advance!

    Read the article

  • Backup hardware and strategy on distributed Windows Server 2008 network

    - by CesarGon
    This question is a follow up to this. We have a Windows Server 2008 R2 domain over a network that spans two different buildings, linked by a 100-Mbps point-to-point line. Over 60 users work in the organisation. We are planning to use DFS folders and DFS replication for file serving across the organisation. The estimated data volume is over 2 TB, and will grow at approximately 20% annually. The idea is to set up a DFS file server in each building and use DFS so that all the contents stay replicated over the 100-Mbps link. We are now considering backup hardware and strategies. We are Dell customers and, after browsing the online Dell catalogue, I can see a number of backup hardware options. My main doubts are the following:

      - Would you go for a tape library, disk backup, or are there other options worth considering?
      - Would you perform batch backups (i.e. nightly) or would you use continuous backup (i.e. while users are working)?
      - Would you use a dedicated backup server to which the tape library (or any other backup device) is attached, or is there any other alternative way of doing things?

    My experience with backup hardware and overall setup is limited, so I appreciate any good piece of advice that you may have. Thanks.
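    For rough sizing of whatever target is chosen, a back-of-the-envelope projection of the stated 2 TB growing at roughly 20% per year (a sketch, not a capacity plan):

        # projected primary data volume, assuming steady 20% annual growth from 2 TB
        awk 'BEGIN { v = 2.0; for (y = 1; y <= 5; y++) { v *= 1.2; printf "year %d: ~%.1f TB\n", y, v } }'

    That lands around 5 TB of primary data after five years, before counting retention copies, which helps when comparing tape library and disk target capacities.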

    Read the article

  • Proper network infrastructure setup: DMZ, VPN, routing hardware question

    - by NickToyota
    Greetings Server Fault universe. Here's a quick background: two weeks ago I started a new position as the systems administrator for an expanding health services company of just over 100 persons. The individual I was replacing left the company with little to no notice. Basically, I have inherited a network with one main HQ (where I am situated), which has existed for over 10 years, and five smaller offices (fewer than 20 persons). I am trying to make sense of the current setup. The network at the HQ includes:

      - Linksys RV082 router providing internet access for employees and site-to-site VPN connecting the smaller offices (using an RV042 each). We have both cable and DSL lines connected to balance traffic (however, this does not work at all and is not my main concern right now).
      - Cisco IronPort appliance. This is the main gateway for our incoming and outgoing email. It has both an external IP and an internal IP.
      - Lotus Domino inbound and outbound email servers connected to the IronPort gateway. These also have an external IP and an internal IP.
      - Two Windows 2003 and 2008 boxes running as domain controllers, with DNS of course. These also have both an external IP and an internal IP.
      - Website and webmail servers, also on both external and internal IPs.

    I am still confused as to why there are so many servers connected directly to the internet. I am seriously looking to redesign this setup with proper security practices in mind (my highest concern), and I need a proper firewall setup for the external/internal servers along with a VPN solution for about 50 employees. Budget is not a concern, as I have been given some flexibility to purchase necessary solutions. I have been told a Cisco ASA appliance may help. Does anyone out in the Server Fault universe have some recommendations? Thank you all in advance.

    Read the article

  • Question about domain name registration

    - by Obay
    I received the following email from a certain [email protected] (YYY is a company name; ZZZ is OUR company name):

        Dear Manager,

        We are a professional intellectual property rights consultant organization, mainly deal with the global domain name registration and internet intellectual property rights protection. On March. 24th, 2010, we formally received an application from YYY, they applied to register the internet brand “ZZZ” and some relevant domain names with our organization. During our preliminary investigation, we found that these domain names' keyword is fully identical with your trademark. Therefore, we need to confirm with you, whether you consigned YYY to register these domain names with us or not? Or, is YYY your business partner or distributor?

        If you have no relationship with this company, we assume that they have other purposes to obtain these domain names. Currently, we have already suspended this company's application temporarily due to the seriousness of this isuue. In order to avoid the vicious domain name grabbing, please let the relevant person make a confirmation with me via telephone or email as soon as possible. Thank you for your support to our work!

        Best Regards
        XXX
        Tel: xxxxx-xxxx xxxx
        Fax: xxxxx-xxxx xxxx
        Email: [email protected]
        www.world-wtc.cn

    This seems legit, or is it? By the way, XXX is just a first name, not a complete name.

    Read the article

  • Interrupted system call during "hg convert"

    - by Aaron Digulla
    When I run "hg convert" to convert a Subversion repository to Mercurial, I get this error: fetching revision log for "/trunk" from 1538 to 0 run hg sink post-conversion action Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 46, in _runcatch return _dispatch(ui, args) File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 454, in _dispatch return runcommand(lui, repo, cmd, fullargs, ui, options, d) File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 324, in runcommand ret = _runcommand(ui, options, cmd, d) File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 505, in _runcommand return checkargs() File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 459, in checkargs return cmdfunc() File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 453, in <lambda> d = lambda: util.checksignature(func)(ui, *args, **cmdoptions) File "/usr/lib/pymodules/python2.6/mercurial/util.py", line 386, in check return func(*args, **kwargs) File "/usr/lib/pymodules/python2.6/hgext/convert/__init__.py", line 229, in convert return convcmd.convert(ui, src, dest, revmapfile, **opts) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 398, in convert c.convert(sortmode) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 312, in convert parents = self.walktree(heads) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 109, in walktree commit = self.cachecommit(n) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 267, in cachecommit commit = self.source.getcommit(rev) File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 433, in getcommit self._fetch_revisions(revnum, stop) File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 814, in _fetch_revisions for entry in stream: File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 122, in __iter__ entry = pickle.load(self._stdout) IOError: [Errno 4] Interrupted system call abort: Interrupted system call Apparently, it is possible to restart a read on EINTR but how would I do that with pickle.load()? Also I wonder where that signal comes from? I suspect it's SIGCHILD but shouldn't popen() handle that?

    Read the article

  • DansGuardian/Squid Traffic doesn't get back to user

    - by DKNUCKLES
    I've purchased a Squid appliance that I'm attempting to implement, but the lack of documentation has left me a bit high and dry. Forgive me if this is a silly question; this is my first attempt at implementing Squid. From what I can ascertain from the documentation (or lack thereof), the users connect to DansGuardian first at port 8080, where the filtering is done, at which point it forwards the traffic to Squid at port 3128. The traffic is then sent to the internet. The setup I have is as follows:

        Gateway (MikroTik router) : 192.168.88.1
        Squid/DansGuardian        : 192.168.88.100
        Client                    : 192.168.88.238

        Client --- Gateway --- Proxy --- Internet

    I have set up a simple NAT rule to forward all traffic from the client machine (for testing purposes) to the DansGuardian. The traffic seems to get there, although I see a lot of SYN_RECV with a netstat -antp command on the virtual appliance machine. From this I gather that the traffic is NOT being routed back to the client machine.

        Proto Recv-Q Send-Q Local Address            Foreign Address          State       PID/Program name
        tcp        0      0 0.0.0.0:8080             0.0.0.0:*                LISTEN      -
        tcp        0      0 192.168.88.100:8080      192.168.88.238:55786     SYN_RECV    -
        tcp        0      0 192.168.88.100:8080      192.168.88.238:55787     SYN_RECV    -
        tcp        0      0 192.168.88.100:8080      192.168.88.238:55785     SYN_RECV    -
        tcp        0      0 192.168.88.100:8080      192.168.88.238:55788     SYN_RECV    -
        tcp        0      0 0.0.0.0:10000            0.0.0.0:*                LISTEN      -
        tcp        0      0 0.0.0.0:22               0.0.0.0:*                LISTEN      -

    Is this a routing issue or an issue with the Squid appliance?
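    One pattern worth checking (an assumption about the cause, not a confirmed diagnosis): connections stuck in SYN_RECV suggest the proxy's SYN-ACKs are going straight back to the client, which is expecting a reply from the original destination address and therefore drops them. A common fix is to source-NAT the redirected traffic on the router as well, so replies return through it. Sketched below with iptables as a stand-in for the equivalent MikroTik rules:

        # redirect the test client's web traffic to DansGuardian on the proxy box
        iptables -t nat -A PREROUTING  -s 192.168.88.238 -p tcp --dport 80 \
                 -j DNAT --to-destination 192.168.88.100:8080
        # also masquerade it, so the proxy's replies go back via the router
        iptables -t nat -A POSTROUTING -d 192.168.88.100 -p tcp --dport 8080 \
                 -j MASQUERADE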

    Read the article

  • OpenVPN bad source address from client

    - by Bogdan
    I have one problem with OpenVPN. There are a lot of dropped records in the OpenVPN log file on the server:

        Mon Oct 22 10:14:41 2012 us=726541 laptop/???:1194 MULTI: bad source address from client [192.168.1.107], packet dropped

        grep -E "^[a-z]" server.conf
        -----
        port 1194
        proto udp
        dev tun
        ca data/ca.crt
        cert data/server.crt
        key data/server.key
        dh data/dh1024.pem
        tls-server
        tls-auth data/ta.key 0
        remote-cert-tls client
        cipher AES-256-CBC
        tun-mtu 1200
        server 10.10.10.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 8.8.8.8"
        client-to-client
        client-config-dir /etc/openvpn/ccd
        route 10.10.10.0 255.255.255.0
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        max-clients 5
        status /var/log/status-openvpn.log
        log /var/log/openvpn.log
        verb 4
        auth-user-pass-verify /etc/openvpn/verify.sh via-file
        tmp-dir /tmp
        script-security 2
        -----

        cat ccd/laptop
        -----
        iroute 10.10.10.0 255.255.255.0
        -----

        cat client.conf
        -----
        remote server ip 1194
        client
        dev tun
        ping 10
        comp-lzo
        proto udp
        tls-client
        tls-auth data/ta.key 1
        pkcs12 data/vpn.laptop.p12
        remote-cert-tls server
        #ns-cert-type server
        persist-key
        persist-tun
        cipher AES-256-CBC
        verb 3
        pull
        auth-user-pass /home/user/.openvpn/users.db
        -----

    According to "Jan Just Keijser - OpenVPN 2 Cookbook", the root of the problem is incorrect config options (see the screenshot). But, as you can see, my config has such options. Could you please help me solve this problem?

    @week: verb level=6; client log:

        Mon Oct 22 16:06:02 2012 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0
        Mon Oct 22 16:06:02 2012 /sbin/ifconfig tun0 10.10.10.3 pointopoint 10.10.10.5 mtu 1500
        Mon Oct 22 16:06:02 2012 /sbin/route add -net xxxx netmask 255.255.255.255 gw 192.168.1.1
        Mon Oct 22 16:06:02 2012 /sbin/route add -net 0.0.0.0 netmask 128.0.0.0 gw 10.10.10.5
        Mon Oct 22 16:06:02 2012 /sbin/route add -net 128.0.0.0 netmask 128.0.0.0 gw 10.10.10.5
        Mon Oct 22 16:06:02 2012 Initialization Sequence Completed

        cat ccd/latop
        iroute 10.10.10.0 255.255.255.0
        ifconfig-push 10.10.10.3 10.10.10.5
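    A frequent cause of "MULTI: bad source address from client" is traffic arriving from the client's LAN (here apparently 192.168.1.0/24, given the dropped 192.168.1.107) without the server knowing that this network sits behind that client. A sketch of the usual fix, assuming 192.168.1.0/24 really is the LAN behind the laptop:

        # in the client's ccd file: declare the LAN that lives behind this client
        cat >> /etc/openvpn/ccd/laptop <<'EOF'
        iroute 192.168.1.0 255.255.255.0
        EOF
        # and in server.conf, add a matching route so the server knows where to send replies:
        #   route 192.168.1.0 255.255.255.0
        # note: an iroute for 10.10.10.0/24 (the VPN subnet itself) does not cover the client's LAN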

    Read the article

  • Need Routing help (tagged/untagged)

    - by TheCleaner
    I really need some help trying to figure out some "basic" routing. My brain is fried from being sick for a week and I'm not thinking clearly. The picture below describes my setup. I'm trying to route a user from their workstation to the Juniper SSG520 and then "out" through the internet connection. I can't move the connection, as it is physically located where the user's switch is.

    Here's what I CAN do at this point:
      - ping from the Juniper SSG520 eth3/3 to 6x.xxx.253.116 from 6x.xxx.253.114
      - ping from the x450 in the top right to 6x.xxx.253.112 from 6x.xxx.253.116

    What I CANNOT do:
      - ping from the SSG520 eth3/3 to 6x.xxx.253.112 from 6x.xxx.253.114 (basically from the Juniper box to the gateway)

    I've tried changing port 1 on the x450 to tagged for VLAN 666, but when I do that I can't even ping from the Juniper SSG520 eth3/3 to the VLAN on the x450 (6x.xxx.253.116). I need to route traffic out the eth3/3 interface on the SSG520 THROUGH the 2 x450 switches and out the internet connection. The caveat is that the 2 x450 switches are connected via fiber over distance and have tagged VLANs in them for the routing. Thoughts? http://img251.imageshack.us/img251/7752/drawing1.jpg

    Read the article

  • Identifying Httpd error log in Fedora 16

    - by Cerin
    How do you find the cause of httpd errors in Fedora 16? The new systemctl command in Fedora 16 seems to horribly obscure any useful logging info.

        [root@host ~]# systemctl start httpd.service
        Job failed. See system logs and 'systemctl status' for details.
        [root@host ~]# systemctl status httpd.service
        httpd.service - The Apache HTTP Server (prefork MPM)
                  Loaded: loaded (/lib/systemd/system/httpd.service; enabled)
                  Active: failed since Thu, 21 Jun 2012 16:26:56 -0400; 1min 23s ago
                 Process: 2119 ExecStop=/usr/sbin/httpd $OPTIONS -k stop (code=exited, status=0/SUCCESS)
                 Process: 2215 ExecStart=/usr/sbin/httpd $OPTIONS -k start (code=exited, status=1/FAILURE)
                Main PID: 1062 (code=exited, status=0/SUCCESS)
                  CGroup: name=systemd:/system/httpd.service

    So the first command fails and tells me to run another command, which simply tells me that the command returned an error code. Where's the actual error? Even more frustrating, nothing seems to have been written to the logs:

        [root@host ~]# ls -lah /var/log/httpd/
        total 8.0K
        drwx------.  2 root root 4.0K Jun 21 16:19 .
        drwxr-xr-x. 21 root root 4.0K Jun 20 16:33 ..
        -rw-r-----   1 root root    0 Jun 21 16:19 modsec_audit.log
        -rw-r-----   1 root root    0 Jun 21 16:19 modsec_debug.log
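    When the unit fails this early, the error usually gets recorded somewhere other than /var/log/httpd. A few places to look (a sketch, assuming a stock Fedora layout):

        # syntax-check the config; this prints the offending directive and line directly
        httpd -t
        # startup errors that httpd sends to syslog land here
        tail -n 100 /var/log/messages
        # if the systemd journal is available on your release:
        #   journalctl -u httpd.service
        # if SELinux denials are suspected:
        #   ausearch -m avc -ts recent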

    Read the article

  • Strange difference in bash behavior across systems

    - by pinkie_d_pie_0228
    I have two systems, an Ubuntu computer and an Android tablet. I have built and configured bash for Android to be used in adb, so it's the same version as my Ubuntu bash; they use mostly the same bashrc and configuration, and the same exact options set by shopt. However, there is a slight difference: the Android bash behaves as I expect when I try to tab-complete something using a variable, but the Ubuntu bash doesn't.

        # Android
        ls $HOME/loc<tab>  =>  ls $HOME/local    # as expected

    Basically, the variable is taken into account when completing. But then:

        # Ubuntu
        ls $HOME/loc<tab>  =>  ls \$HOME/loc     # undesired behavior

    The list of options is as follows, and is the same in both builds of bash:

        autocd:checkwinsize:cmdhist:expand_aliases:extglob:extquote:force_fignore:histappend:interactive_comments:progcomp:promptvars:sourcepath

    What can be making the Ubuntu version escape the $ instead of using it for completion as in the Android build? What can I do to make both work the same way? Any help will be greatly appreciated.
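    One difference worth ruling out (an assumption, since the exact bash builds aren't stated): newer bash releases escape $VAR during completion unless the direxpand shell option is enabled, and builds that predate the option don't list it at all, so it would not show up in a shopt comparison. A quick check on both systems:

        shopt | grep direxpand    # absent entirely on builds without the option
        shopt -s direxpand        # if present, re-enables expanding $HOME/... on tab-completion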

    Read the article

  • How do I properly configure a ZipInstaller .zic file?

    - by Iszi Rory or Isznti
    As of version 1.20, ZipInstaller is supposed to support the use of a configuration file to customize its installation options. Generally, all the options I want to use are available through the dialog, so I really haven't bothered with the configuration file until now. The problem now is that certain tools, such as PsTools from Sysinternals, do not properly show their Product Name to ZipInstaller. ZipInstaller's dialog will let you customize the Start Menu folder and Program Files folder, but that still doesn't change the Product Name that it sees for the software. So, instead of having "PsTools" in my Add/Remove Programs, I get "Sysinternals Software". For some things, the situation is even more confusing. For example, the NIST SP 800-53 Reference Database Application gets installed as "FileMaker Pro Runtime".

    To rectify this, I've tried to use the aforementioned .zic configuration file. As I understand it, it's a basic INI file you create and put in the root of the ZIP file. ZipInstaller is supposed to read that file and adjust its parameters accordingly. Mine looks like this:

        [install]
        ProductName=NIST_SP_800-53
        ProductVersion=1.4.1
        CompanyName=NIST
        Description=NIST_SP_800-53
        InstallFolder=%zi.ProgramFiles%\%zi.ProductName%
        StartMenuFolder=%zi.CompanyName%\%zi.ProductName%

    I've named it ~zipinst~.zic and placed it in the root of the ZIP file, but when I run ZipInstaller it doesn't seem to recognize any of the information I've given it in the .zic file. What might I be doing wrong here?

    Read the article

  • IPSec for LAN traffic: Basic considerations?

    - by chris_l
    This is a follow-up to my "Encrypting absolutely everything..." question. Important: this is not about the more usual IPsec setup, where you want to encrypt traffic between two LANs. My basic goal is to encrypt all traffic within a small company's LAN. One solution could be IPsec. I have just started to learn about IPsec, and before I decide on using it and dive in more deeply, I'd like to get an overview of how this could look.

      - Is there good cross-platform support? It must work on Linux, Mac OS X and Windows clients, and Linux servers, and it shouldn't require expensive network hardware.
      - Can I enable IPsec for an entire machine (so there can be no other traffic incoming/outgoing), or for a network interface, or is it determined by firewall settings for individual ports/...?
      - Can I easily ban non-IPsec IP packets? And also "Mallory's evil" IPsec traffic that is signed by some key, but not ours? My ideal conception is to make it impossible to have any such IP traffic on the LAN.
      - For LAN-internal traffic: I would choose "ESP with authentication (no AH)", AES-256, in "transport mode". Is this a reasonable decision?
      - For LAN-to-Internet traffic: How would it work with the internet gateway? Would I use "tunnel mode" to create an IPsec tunnel from each machine to the gateway? Or could I also use "transport mode" to the gateway? The reason I ask is that the gateway would have to be able to decrypt packets coming from the LAN, so it will need the keys to do that. Is that possible if the destination address isn't the gateway's address? Or would I have to use a proxy in this case?
      - Is there anything else I should consider?

    I really just need a quick overview of these things, not very detailed instructions.

    Read the article

  • Slow network file transfer (under 20KB/s) on newly built x64 Win7

    - by Mangoshake
    I am getting <20 KB/s for local network file transfers. If I transfer a very small file (less than 100 KB) it starts quickly, then slows down to <20 KB/s; all subsequent network file transfers are slow, and a reboot is needed to reset this. If I transfer a large file, it is stuck on "calculating" for a long time and then begins at <20 KB/s immediately.

    This is a newly built desktop running Windows 7 x64 SP1, with Realtek gigabit LAN from the motherboard (ASRock Extreme3 Gen3). The problematic speed is observed on the private LAN, both through Ethernet and WiFi. The router is a D-Link DIR-655. Remote Differential Compression is off. Drivers are up to date from ASRock's website. I have tested network file transfer to and from another Windows 7 laptop and a MacBook Pro, so I am fairly certain it is the desktop's problem. The slow speed also only happens in one direction, outbound from the desktop, regardless of whether I initiate the file transfer from the origin or the destination. Inbound network file transfer and internet speeds are fine, so I don't think this is a hardware issue. I am getting 74.8MB/s internet upload speed from speedtest.net (http://www.speedtest.net/result/1852752479.png). Inbound network file transfer gets around 10-15MB/s.

    I am hoping this community has some insight to help me troubleshoot this. I don't see anything obviously related in the Event Viewer, and beyond that I just don't know where else to look. Any suggestions are greatly appreciated; thank you in advance.

    Read the article

  • DNS: something is wrong?

    - by Nickolas R.
    Hello. I am configuring bind9 on a server with two network interfaces; one is connected to the LAN and the other is connected to the Internet through NAT, so BIND is not facing the Internet directly. Everything seems to work fine, and clients can do both forward and reverse lookups, but something seems strange. On the server, if I ping www.google.com once, a great amount of network activity is generated, a lot more than one would expect, so I decided to sniff the traffic with tcpdump. When loading the dump into Wireshark, I can see about 250 entries with "Standard query A" and "Standard query response". Here are some of the entries from the dump:

        DNS Standard query A www.google.com
        DNS Standard query A blackhole-1.iana.org
        DNS Standard query A blackhole-2.iana.org
        DNS Standard query response
        DNS Standard query A ns2.isc-sns.com
        DNS Standard query A ns1.isc-sns.net
        DNS Standard query A ns3.isc-sns.info
        DNS Standard query response PTR b.iana-servers.net RRSIG
        DNS Standard query A auth2.dns.cogentco.com
        DNS Standard query A ns1.crsnic.net
        DNS Standard query A ns2.nsiregistry.net
        DNS Standard query A ns3.verisign-grs.net
        DNS Standard query A ns4.verisign-grs.net
        DNS Standard query PTR 79.52.19.199.in-addr.arpa

    I do not have much experience with DNS yet, but I am pretty sure that something is wrong. Does anybody have an idea of what is going on?
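    Much of this fan-out may simply be what iterative resolution looks like from a recursing server: root, TLD and authoritative name servers all get queried, and their name servers get looked up in turn. Two quick ways to see it for yourself, assuming the standard BIND tooling is installed:

        # walk the full delegation chain for one name; each hop is a separate query
        dig +trace www.google.com
        # toggle query logging on the running named to see exactly what it asks and why
        rndc querylog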

    Read the article

  • Wireless Activity Monitoring for PCI DSS Compliance

    - by dkusleika
    In an effort to be PCI DSS compliant, I took a trustkeeper.net questionnaire. I failed the question that asks: "Is the presence of wireless access points tested for by using a wireless analyzer at least quarterly, or by deploying a wireless IDS/IPS to identify all wireless devices in use?" (SAQ #11.1)

    My only wireless access point is outside my firewall, so even if you cracked my wireless you couldn't get inside my domain (unless you cracked that too). My firewall doesn't have IPS, and I couldn't tell if it has IDS. I looked around for a wireless analyzer, but what I found was $500, which is a little pricey for a business my size. And even if I got it, I'm not sure I would understand what it tells me. Surely there are smaller/less sophisticated businesses that take credit cards and have solved this. My questions are:

      - What are the risks if someone were to crack my wireless? (Could they read all internet traffic? Just wireless traffic? Just use my internet connection?)
      - What is the best/cheapest way to test my connection point quarterly? Should I buy the $500 analyzer?

    The domain is Windows Server 2000. The firewall is a SonicWall Pro 2040. The router is an 8-port D-Link.
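    On the "cheapest way to test quarterly" side, a rough sketch of a scripted scan from any Linux laptop with a wireless card (interface name and file names are placeholders): record the visible access points each quarter and diff the result against a list of known, authorised BSSIDs.

        # record every visible AP's BSSID and ESSID in a date-stamped file
        iwlist wlan0 scan | grep -E 'Address|ESSID' > "ap-scan-$(date +%F).txt"
        # compare with the previous quarter's scan to spot anything new
        diff ap-scan-previous.txt "ap-scan-$(date +%F).txt"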

    Read the article

  • apache with php fastcgi keeps going down

    - by Josh Nankin
    I have an apache2 server configured with MPM worker and PHP FastCGI. Lately the Apache logs have been telling me that MaxClients is being reached frequently, even though it's already pretty high. My server is now constantly going down, and I see a bunch of lines like this in the log:

        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: comm with (dynamic) server "/var/local/fcgi/php-cgi-wrapper.fcgi" aborted: (first read) idle timeout (20 sec)
        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: incomplete headers (0 bytes) received from server "/var/local/fcgi/php-cgi-wrapper.fcgi"

    I can see that my php-cgi processes are pretty large (about 70 MB on average). Here's my Apache configuration for MPM worker:

        KeepAlive ON
        KeepAliveTimeout 2
        <IfModule mpm_worker_module>
            StartServers          5
            MinSpareThreads      10
            MaxSpareThreads      10
            ThreadLimit          64
            ThreadsPerChild      10
            MaxClients           20
            MaxRequestsPerChild 2000
        </IfModule>

    Here's my FastCGI Apache configuration:

        <IfModule mod_fastcgi.c>
            # One shared PHP-managed fastcgi for all sites
            Alias /fcgi /var/local/fcgi
            # IMPORTANT: without this we get more than one instance
            # of our wrapper, which itself spawns 20 PHP processes, so
            # that would be Bad (tm)
            FastCgiConfig -idle-timeout 20 -maxClassProcesses 1
            <Directory /var/local/fcgi>
                # Use the + so we don't clobber other options that
                # may be needed. You might want FollowSymLinks here
                Options +ExecCGI
            </Directory>
            AddType application/x-httpd-php5 .php
            AddHandler fastcgi-script .fcgi
            Action application/x-httpd-php5 /fcgi/php-cgi-wrapper.fcgi
        </IfModule>

    Here's my FastCGI wrapper:

        #!/bin/sh
        PHPRC="/etc/php5/apache2"
        export PHPRC
        PHP_FCGI_CHILDREN=8
        export PHP_FCGI_CHILDREN
        exec /usr/bin/php-cgi

    Any help would be very, very much appreciated!
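    Since MaxClients and PHP_FCGI_CHILDREN sizing both hinge on that ~70 MB per-process figure, it is worth measuring rather than estimating; a quick check (a sketch, RSS is reported in KB):

        # resident memory of every php-cgi process, largest last
        ps -C php-cgi -o pid=,rss=,args= | sort -k2 -n

    Eight children at roughly 70 MB each is already around 560 MB, which helps decide whether the box is swapping when all of the wrapper's children are busy.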

    Read the article

  • Debugging "clogged" TCP connections

    - by Nikratio
    I'm having trouble with an internet connection that seems to randomly "freeze" arbitrary TCP connections. The connections stay established, but no data is coming through. When this happens, netstat still shows the connection status as ESTABLISHED on both the local computer:

        Proto Recv-Q Send-Q Local Address          Foreign Address        State        PID/Program name  Timer
        tcp        0     53 192.168.0.10:41129     173.255.235.238:143    ESTABLISHED  8219/gnutls-cli    on (79.31/13/0)

    ...and the remote server:

        Proto Recv-Q Send-Q Local Address          Foreign Address        State        PID/Program name  Timer
        tcp        0      0 173.255.235.238:143    68.5.174.98:41129      ESTABLISHED  5303/imapd         off (0.00/0/0)

    However, it seems that no data at all is transferred. If I run strace on the local and remote process, both just show a repeating sequence of select calls (with different fds of course), e.g.:

        select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout)
        select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout)
        select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout)

    The internet connection overall does not seem affected; I can still establish new connections to the same service on the same server without any problems. However, the affected local applications seem to be unaware of the problem and just hang. When I look at a packet capture of this connection on the client side, the last thing that happens is that the client transmits some data, then nothing happens for about 1100 seconds, and then several TCP retransmission requests go out, with intervals increasing from 4 seconds to 130 seconds. No activity is captured after that. After about 10 minutes, the connection on the remote end disappears from netstat (I wasn't able to catch any intermediate state), but it still stays ESTABLISHED on the local end. Finally, after some more minutes, the local application aborts with a timeout and disappears from the local netstat output as well.

    Does anyone have a suggestion for how I could debug this further, to find out where the problem lies and how to fix it? Additionally, and/or as a temporary workaround: is there some way to globally reduce the timeout on the client and/or server, to reduce the time before the local application aborts?
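    On the "globally reduce the timeout" side, these are the Linux knobs that usually matter; the values below are only illustrative and trade robustness on flaky links for faster failure:

        # give up retransmitting unacknowledged data sooner (default is 15 tries, roughly 13-30 minutes)
        sysctl -w net.ipv4.tcp_retries2=8
        # probe idle connections earlier, so dead ones are detected without new data
        sysctl -w net.ipv4.tcp_keepalive_time=120
        sysctl -w net.ipv4.tcp_keepalive_intvl=30
        sysctl -w net.ipv4.tcp_keepalive_probes=4
        # note: keepalive only applies to sockets that set SO_KEEPALIVE themselves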

    Read the article

  • Port Forwarding(?) TD-W8961nd

    - by rich
    I have a bit of a weird internet setup. I am connected via a decent WiFi connection (from work), which I pick up using a Buffalo AirStation Wireless-G box. This simply picks up the signal and gives me 4 Ethernet ports to connect to. That's all fine and works as it should. I also have a TP-Link TD-W8961ND router which used to be connected to the AirStation via an Ethernet cable so I could essentially have WiFi access in my house. To cut a long story short, I can't remember how the hell I got it to work, and I can't find the notes I scribbled down on how to do it. I'm pretty sure I need to tell the router which IP to pick up the internet connection from and have the local WiFi as a separate network. How the hell I do that, I have no idea right now. Can anyone give me some advice on this? If you need more information, ask and I will provide it. Cheers in advance.

    Edit: I'm at work at the moment so I can't give 100% of the details, but I will be able to later on.

    Read the article

  • How to setup Mac server to use two gateways

    - by Brady
    I recently asked this question: "How to set Mac server to use different Gateway for internet bound traffic". The answer given works, but it has presented me with another issue that I didn't make clear in that question. Here is my network layout as it stands: [network diagram]. At the moment, outside staff members use some services on the existing "internet 1" link. Those services are hosted by the Mac server. If I change the gateway of the Mac server to the second modem, those outside staff lose visibility of those services. I don't know how to go about solving this. I want the second link to be used when the Mac server rsyncs data offsite, but everything else should use link one. How do I do this? Thanks, Scott.

    EDIT: This has been resolved by setting the default gateway on the Mac server to 192.168.1.254, thus leaving everything on the network as it was before. To get the Mac server to use the other link for rsync, I've added a route on the Mac server that sends traffic to the rsync server through the second gateway:

        sudo route add -net {server IP's}/{Netmask} 192.168.1.1

    I've awarded the answer to gravyface for pointing me to a post on how to make this route persistent on a Mac.

    Read the article

  • Installing Trac on Windows under Apache 2.2?

    - by Warren P
    Trac is a python-powered bug-tracking and project-management app. According to Trac's wiki, there are several options for installing Trac, a standalone server (tracd), or under a dedicated webserver using one of these options: FastCGI - Not available on windows. mod_wsgi - No version of mod_wsgi available for Apache 2.2.22 and Python 2.7.3-amd64 that actually runs on my system! mod_python - no longer recommended, as mod_python is not actively maintained anymore) CGI -should not be used, as the performance is far from optimal) That leaves me with zero ways to run Trac on Windows. Apache 2.2.22 with ModWSGI loading, crashes the Apache2.2 service on startup without any error logs. Disabling the line in the apache configuration to load mod_wsgi restores sanity. I just want an installation of Trac on windows with Authentication enabled. I am unable to get authenetication to work using basic tracd like this: tracd -p 8000 --basic-auth="c:\tmp,c:\tmp\Passwords.md5.txt,mycompany" c:\tmp\RootFolder And I am unable to get Mod_WSGI installed. I'm going to keep trying to figure out a combination that works, I suspect I should have installed 32 bit python instead of 64 bit python, to start with. Did I do wrong to install Python 64 bit 2.7.3? I tried again with all 32 bit components, and still can't get MOD_WSGI to work with apache 2.2.22. I'm going to try to compile mod_wsgi myself with Visual C++ Express 2010, but it seems to me that it ought to be easier than this to get Trac running on windows, with authentication. Is there a way to run Trac on Windows, under Apache, with authentication? The last "Trac on windows" article died in 2008, leaving only this internet archive link for "Trac on windows" setup.

    Read the article

  • How to serve Rails application with Passenger/Apache without domain name?

    - by grifaton
    I am trying to serve a Rails application using Passenger and Apache on an Ubuntu server. The Passenger installation instructions say I should add the following to my Apache configuration file (I assume this is /etc/apache2/httpd.conf):

        <VirtualHost *:80>
            ServerName www.yourhost.com
            DocumentRoot /somewhere/public    # <-- be sure to point to 'public'!
            <Directory /somewhere/public>
                AllowOverride all             # <-- relax Apache security settings
                Options -MultiViews           # <-- MultiViews must be turned off
            </Directory>
        </VirtualHost>

    However, I do not yet have a domain pointing at my server, so I'm not sure what I should put for the ServerName parameter. I have tried the IP address, but when I do that, restarting Apache gives:

        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Sun Jan 17 12:49:26 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Sun Jan 17 12:49:36 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results

    and pointing the browser at the IP address gives a 500 Internal Server Error. The closest I have got to something sensible is with:

        <VirtualHost efate:80>
            ServerName efate
            DocumentRoot /root/jpf/public
            <Directory /root/jpf/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    where "efate" is my server's host name. But now pointing my browser at the server's IP address just gives a page saying "It works!" (presumably this is a default page, but I'm not sure where it is being served from). I might be wrong in thinking that the reason I have been unable to get this to work is related to not having a domain name. This is the first time I have used Apache directly; any help would be most gratefully received!
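    A sketch of one way to serve the app by IP alone on a stock Ubuntu/Apache 2.2 layout (file names below are the Ubuntu defaults and may differ): keep <VirtualHost *:80> so it matches the NameVirtualHost *:80 line in ports.conf, give Apache a global ServerName to silence the FQDN warning, and disable the default site that is producing the "It works!" page.

        # a global ServerName stops the "could not reliably determine..." warning
        echo "ServerName localhost" | sudo tee /etc/apache2/conf.d/fqdn
        # the stock "It works!" page comes from the default site; disable it
        sudo a2dissite default
        # enable your own vhost (hypothetical file sites-available/railsapp holding the <VirtualHost *:80> block)
        sudo a2ensite railsapp
        sudo /etc/init.d/apache2 reload

    With only one enabled vhost, requests to the bare IP fall through to it, domain name or not.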

    Read the article
