Search Results

Search found 2788 results on 112 pages for 'symantec endpoint protect'.


  • How can I solve the apache2 httpd error "mixing * ports and non-* ports with a NameVirtualHost address is not supported"?

    - by rrc7cz
    Here is the error I get when booting up Apache2:

        * Starting web server apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        [Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        [Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        [Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        [Wed Oct 21 16:37:26 2009] [warn] NameVirtualHost *:80 has no VirtualHosts

    I first followed this guide on setting up Apache to host multiple sites: http://www.debian-administration.org/articles/412 I then found a similar question on ServerFault and tried applying the solution, but it didn't help. Here is an example of my final VirtualHost config:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.xxx.com
            ServerAlias xxx.com

            # Indexes + Directory Root.
            DirectoryIndex index.html
            DocumentRoot /var/www/www.xxx.com

            # Logfiles
            ErrorLog /var/www/www.xxx.com/logs/error.log
            CustomLog /var/www/www.xxx.com/logs/access.log combined
        </VirtualHost>

    with the domain X'd out to protect the innocent :-) Also, I have the conf.d/virtual.conf file mentioned in the guide looking like this:

        NameVirtualHost *

    The odd thing is that everything appears to work fine for two of the three sites.
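
    The [warn] line at the end is the usual giveaway for this message: conf.d/virtual.conf declares NameVirtualHost * with no port, while every VirtualHost block uses *:80, and Apache treats that as a mix of "* ports" and "non-* ports". A minimal sketch of the usual fix, assuming a stock Debian/Ubuntu layout:

        # /etc/apache2/conf.d/virtual.conf -- make the directive match the vhost blocks exactly
        NameVirtualHost *:80

        # each site definition keeps the matching form
        <VirtualHost *:80>
            ServerName www.xxx.com
            DocumentRoot /var/www/www.xxx.com
        </VirtualHost>

    After the change, apache2ctl configtest followed by a reload should confirm whether the mixed-ports error is gone.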


  • Can I subnet a subnet?

    - by Portman
    Apologies in advance for the botched terminology. I have read the Server Fault Subnet Wiki but this is more of an ISP question. I currently have a /27 block of public IPs. I give my router the first address in this pool and then use 1-to-1 NAT for all the servers behind the firewall, so that they each get their own public IP. The router/firewall is currently using (actual addresses removed to protect the guilty):

        IP Address:  XXX.XXX.XXX.164
        Subnet mask: 255.255.255.224
        Gateway:     XXX.XXX.XXX.161

    What I would like to do is break out my subnet into two separate /28 subnets, and do this in a way that is transparent to the ISP (i.e., they see me as continuing to operate a single /27). Currently, my topology looks like:

        ISP
         |
        [Router/Firewall]
         |
        [Managed Ethernet Switch]
          /         |         \
        [Server1] [Server2] [Server3] (etc)

    Instead, I would like it to look like:

             ISP
              |
          [Switch]
          /       \
        [Router1] [Router2]
         |    |    |    |
        [S1] [S2] [S3] [S4] (etc)

    As you can see, this would partition me into two separate networks. I'm struggling with what the correct IP settings would be on Router1 and Router2. Here's what I have right now:

                     Router1           Router2
        IP Address:  XXX.XXX.XXX.164   XXX.XXX.XXX.180
        Subnet mask: 255.255.255.240   255.255.255.240
        Gateway:     XXX.XXX.XXX.161   XXX.XXX.XXX.161

    Note that normally you would expect Router2 to have a gateway of .177, but I'm trying to get them both to use the gateway originally given to me by the ISP. Is subnetting like this in fact possible, or am I completely botching the most basic concepts?
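
    As a worked reference for the numbers above (hedged, since the real addresses are masked): a /27 whose gateway is .161 is the .160/27 block, and splitting it on a /28 boundary gives:

        XXX.XXX.XXX.160/27   network .160, hosts .161 - .190, broadcast .191

        first  /28:  XXX.XXX.XXX.160/28   network .160, hosts .161 - .174, broadcast .175
        second /28:  XXX.XXX.XXX.176/28   network .176, hosts .177 - .190, broadcast .191

    The catch is the one the question already hints at: the ISP gateway .161 sits inside the first /28, so Router2 (at .180 with a .240 mask) has no on-link path to it. That is why the usual answers are either to keep a single /27 edge router with routed /28s behind it, or to ask the ISP to route the space differently.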


  • Can connect through Watchguard mobile VPN, but can't ping or access network drives

    - by johnnyb10
    We're having an issue in which some of our employees can no longer connect to our network drives when out of the office. We use Watchguard Mobile VPN (we have a Watchguard Firebox firewall) and the users are able to connect. That is, their status in the VPN client says "Connected" and they have the correct IP address listed as the VPN Endpoint. The problem is, when they try to map drives, or even ping the IP address of a server on our network, it fails. Last week, we temporarily switched one of our Comcast modems to our backup DSL modem because the Comcast was accidentally shut off by Comcast, and the problem seemed to start around then. We've since switched back and the problem persists, so that doesn't seem to have been it (which makes sense). But we also made other changes at the time that might have thrown something off, although we feel like we've checked them all. Plus, some people can successfully connect to network drives through the VPN. Can someone please suggest some steps to help troubleshoot? We've checked the policies on our Watchguard box, and they seem fine. We've looked at the settings on the Mobile VPN client, but nothing seems like a probable cause. Thanks.
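
    A few client-side checks that can narrow down where the traffic dies once the tunnel says "Connected" (a sketch; the server and VPN-assigned addresses below are placeholders, not taken from the question):

        rem Confirm the VPN adapter got an address, DNS servers and no conflicting subnet
        ipconfig /all
        rem Check that a route to the office subnet points at the VPN adapter
        route print
        rem Ping a server by IP while forcing the VPN-assigned address as the source
        ping -S 192.168.50.10 10.0.0.5
        rem See where packets toward the server actually go
        tracert -d 10.0.0.5

    Comparing this output between a user who works and one who doesn't, especially the routing table and whether the home LAN overlaps the office subnet, is often enough to spot the difference.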


  • SPF hardfail and DKIM failure when recipient has e-mail forwarding

    - by Beaming Mel-Bin
    I configured hardfail SPF for my domain and DKIM message signing on my SMTP server. Since this is the only SMTP server that should be used for outgoing mail from my domain, I didn't foresee any complications. However, consider the following situation: I sent an e-mail message via my SMTP server to my colleague's university e-mail. The problem is that my colleague forwards his university e-mail to his GMail account. These are the headers of the message after it reaches his GMail mailbox:

        Received-SPF: fail (google.com: domain of [email protected] does not designate 192.168.128.100 as permitted sender) client-ip=192.168.128.100;
        Authentication-Results: mx.google.com;
            spf=hardfail (google.com: domain of [email protected] does not designate 192.168.128.100 as permitted sender) [email protected];
            dkim=hardfail (test mode) [email protected]

    (Headers have been sanitized to protect the domains and IP addresses of the non-Google parties.)

    GMail checks the last SMTP server in the delivery chain against my SPF and DKIM records (rightfully so). Since the last SMTP server in the delivery chain was the university's server and not my server, the check results in an SPF hardfail and DKIM failure. Fortunately, GMail did not mark the message as spam, but I'm concerned that this might cause a problem in the future. Is my implementation of SPF hardfail perhaps too strict? Any other recommendations or potential issues that I should be aware of? Or maybe there is a more ideal configuration for the university's e-mail forwarding procedure? I know that the forwarding server could possibly change the envelope sender, but I see that getting messy.
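
    Plain forwarding breaks SPF by design, so the common middle ground is to publish a softfail (~all) rather than a hardfail (-all) qualifier; receivers then treat unlisted senders as suspicious instead of outright unauthorized. A sketch of the two TXT records (the domain and IP are placeholders, not taken from the question):

        ; hardfail: mail from any host not listed should be rejected
        example.com.  IN  TXT  "v=spf1 mx ip4:203.0.113.25 -all"

        ; softfail: mail from unlisted hosts is flagged but normally still accepted
        example.com.  IN  TXT  "v=spf1 mx ip4:203.0.113.25 ~all"

    The cleaner long-term fix sits on the forwarding side: SRS (Sender Rewriting Scheme) on the university's server rewrites the envelope sender so SPF is evaluated against the forwarder, which is exactly the "change the envelope sender" option the question mentions.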


  • URL Rewriting on GoDaddy Virtual Server

    - by Aristotle
    I migrated a Kohana2 application from a shared-hosting environment over to a virtual dedicated server. After this migration, I can't seem to get my .htaccess file working again. I apologize up front, but over the years I have never experienced so much frustration with anything else as I do with the dreaded .htaccess file. Presently I have my project installed immediately within a directory in my public folder:

        /var/html/www/info.php          (general information about server)
        /var/html/www/logo.jpg          (some flat file)
        /var/html/www/somesite.com/     [kohana site exists here]

    So my .htaccess file is within that directory, and has the following contents:

        # Turn on URL rewriting
        RewriteEngine On

        # Installation directory
        RewriteBase /somesite.com/

        # Protect application and system files from being viewed
        # This is only necessary when these files are inside the webserver document root
        RewriteRule ^(application|modules|system) - [R=404,L]

        # Allow any files or directories that exist to be displayed directly
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d

        # Rewrite all other URLs to index.php/URL
        RewriteRule .* index.php?kohana_uri=$0 [PT,QSA,L]

        # Alternatively, if the rewrite rule above does not work try this instead:
        #RewriteRule .* index.php?kohana_uri=$0 [PT,QSA,L]

    This doesn't work. The initial controller is loaded, since index.php is called up implicitly when nothing else is in the URL. But if I try to load up some other non-default controller, the site fails. If I place the index.php back within the URL, the call to other controllers works just fine. I'm really at my wits' end, and would appreciate some direction here.
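
    The classic symptom of "works with index.php in the URL, breaks without it" after a server move is that Apache on the new box is ignoring the .htaccess entirely. A hedged first check, assuming a fairly stock Apache install (the a2enmod helper is a Debian/Ubuntu convention; on other layouts load mod_rewrite in httpd.conf instead):

        # make sure mod_rewrite is available and loaded
        a2enmod rewrite

        # in the vhost/directory config for the document root: .htaccess must be allowed
        # to override settings; many fresh installs default to AllowOverride None
        <Directory /var/html/www>
            Options FollowSymLinks
            AllowOverride All
        </Directory>

        # then verify and reload
        apachectl configtest && apachectl graceful

    If rewriting is definitely active, the next thing to re-check is the RewriteBase against the actual URL path the site is served from on the new machine.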


  • How to remove a plain text protecting single quote from all the selected cells in LibreOffice Calc?

    - by Ivan
    I've imported a CSV file whose first column contains date-time values in ISO 8601 format, like 2012-01-01T00:00:00.000Z for the first moment of the year 2012. Then, wanting LibreOffice to recognize the format (as I was looking forward to plotting a diagram), I selected the column, chose Format Cells... and entered the custom time format YYYY-MM-DDTHH:MM:SS.000Z. And this seems to work if... I edit a cell to remove a hidden single quote from its beginning (which protects the cell content from being interpreted), as all the newly formatted cells now store values like '2012-01-01T00:00:00.000Z (note the single quote; it is only visible when you edit a particular cell). And I would have to do this for every cell in the column. How can I automate it? UPDATE: I've already found a solution for my particular case: it helps to set the column format to "time" in the CSV import dialogue. But I am still curious how this could be done in case I didn't have the original .csv data file to import, but only the .ods file with the data already imported without the format specified at import time.
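
    For the .ods-only case, one workaround (a sketch, not the only route) is to re-parse the text in a helper column and then paste the result back over the original as values. The formula below assumes the text timestamps are in column A and that the locale uses semicolons as argument separators; it drops the milliseconds:

        =DATEVALUE(LEFT(A1;10)) + TIMEVALUE(MID(A1;12;8))

    Another approach that often avoids formulas entirely: select the column and run Data > Text to Columns with the column type set appropriately, which forces Calc to re-interpret the quoted text as values under the current cell format.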


  • Why are my Windows 7 updates continuously failing?

    - by Chris C.
    I'm an advanced-level user here with an odd issue. I have two Windows Updates that are failing to install, every single time. I'm getting a mysterious "Code 1" error on both updates, an error for which I'm having difficulty finding a solution. The updates in question are:

        Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
        System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]

    Because these updates are failing, the Shut Down button in my start menu always has the shield icon next to it, indicating that "new" updates will be installed on shut down. But, of course, they'll fail, and when the PC is restarted the shield icon is still there. When checking the update history and viewing the details of the failed updates, I get the following:

        Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
        Installation date: 6/29/2011 3:00 AM
        Installation status: Failed
        Error details: Code 1
        Update type: Important
        A security issue has been identified leading to MFC application vulnerability in DLL planting due to MFC not specifying the full path to system/localization DLLs. You can protect your computer by installing this update from Microsoft. After you install this item, you may have to restart your computer.
        More information: http://go.microsoft.com/fwlink/?LinkId=216803

        System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]
        Installation date: 6/28/2011 3:00 AM
        Installation status: Failed
        Error details: Code 1
        Update type: Important
        This tool is being offered because an inconsistency was found in the Windows servicing store which may prevent the successful installation of future updates, service packs, and software. This tool checks your computer for such inconsistencies and tries to resolve issues if found.
        More information: http://support.microsoft.com/kb/947821

    About My System: I'm running Windows 7 Home Premium x64 Edition. This is a custom PC build and the OS was installed fresh, not an upgrade from a previous version. I've been running this system for about 4 months. Windows Updates aside, the system is usually quite stable. Thanks in advance for your help!
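
    Two generic first steps that are often suggested for updates that fail repeatedly (a sketch, not a documented fix for "Code 1" specifically): clear the Windows Update download cache, check the system files, then retry both packages as manual downloads from Microsoft.

        rem Run from an elevated command prompt
        net stop wuauserv
        net stop bits
        ren %windir%\SoftwareDistribution SoftwareDistribution.old
        net start bits
        net start wuauserv

        rem Check protected system files
        sfc /scannow

    KB947821 (the System Update Readiness Tool) can also be downloaded and run on its own; its log at %windir%\Logs\CBS\CheckSUR.log is usually more informative than the bare error code.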


  • How does cross domain authentication work in a firewalled environment?

    - by LVLAaron
    This is a simplification and the names have been changed to protect the innocent. The assets:

        Active Directory domains:
            corp.lan
            saas.lan

        User accounts:
            [email protected]
            [email protected]

        Servers:
            dc.corp.lan (domain controller)
            dc.saas.lan (domain controller)
            server.saas.lan

    A one-way trust exists between the domains, so user accounts in corp.lan can log into servers in saas.lan. There is no firewall between dc.corp.lan and dc.saas.lan. server.saas.lan is in a firewalled zone and a set of rules exists so it can talk to dc.saas.lan. I can log into server.saas.lan with [email protected], but I don't understand how it works. If I watch firewall logs, I see a bunch of login chatter between server.saas.lan and dc.saas.lan. I also see a bunch of DROPPED chatter between server.saas.lan and dc.corp.lan. Presumably, this is because server.saas.lan is trying to authenticate [email protected], but no firewall rule exists that allows communication between these hosts. However, [email protected] can log in successfully to server.saas.lan. Once logged in, I can "echo %logonserver%" and get \\dc.corp.lan. So... I am a little confused how the account actually gets authenticated. Does dc.saas.lan eventually talk to dc.corp.lan after server.saas.lan can't talk to dc.corp.lan? Just trying to figure out what needs to be changed/fixed/altered.
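
    A couple of standard commands run on server.saas.lan can make the referral path visible (the domain names follow the question's naming):

        rem Which DC does the member server locate for each domain?
        nltest /dsgetdc:saas.lan
        nltest /dsgetdc:corp.lan

        rem State of the secure channel used for pass-through authentication
        nltest /sc_query:saas.lan

        rem Kerberos tickets currently held in the logon session
        klist

    One plausible reading of the logs, offered as an assumption rather than a diagnosis: the dropped server-to-dc.corp.lan traffic is the server trying to reach the trusted domain directly, and the logon still succeeds because dc.saas.lan performs pass-through authentication to dc.corp.lan across the trust on the server's behalf, which is the path that does have connectivity.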


  • Error 720 on VPN (PPTP) attempt

    - by Andy Shulman
    When I attempt to connect to a server running XP x64 (so essentially Server 2003) using a PPTP connection, it fails with the client-side error:

        Registering your computer on the network...
        Error 720: A connection to the remote computer could not be established. You might need to change the network settings for this configuration.

    and the server-side error:

        Event ID: 20050
        The user WINSERV3\Andy connected to port VPN8-1 has been disconnected because no network protocols were successfully negotiated.

    I have configured the router to pass both TCP packets on 1723 and GRE packets. I have used Wireshark (filtering out ARP, UDP, and all TCP ports other than 1723) to observe the packets received by the server. Wireshark does not explicitly name any protocol GRE, but it does tell me the server sent and received TCP, PPTP, PPP LCP, PPP CHAP, PPP CBCP, and PPP IPCP. The connection seems to go wrong at packet 30, where the protocol is PPP LCP, with the payload of the packet being labeled "Protocol Reject". Obviously, this is going from server to client. This would seem to lead to the conclusion that there is something wrong with my client, which runs Windows 7 Ultimate x64. However, it is able to connect to my house's router, which runs the DD-WRT firmware and is thus a PPTP endpoint. I'm thoroughly at a loss. Please help!
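
    Error 720 on a Windows 7 client quite often traces back to broken WAN Miniport devices or a damaged TCP/IP stack rather than anything on the server, so a commonly suggested (though by no means guaranteed) first step is:

        rem Reset the TCP/IP stack and the Winsock catalog, then reboot
        netsh int ip reset
        netsh winsock reset

    If that changes nothing, the usual next step is Device Manager > Network adapters (with hidden devices shown): WAN Miniport (PPTP) entries showing errors can be uninstalled and re-detected. That said, the fact that the same client negotiates PPP fine against DD-WRT does point back toward a negotiation mismatch on the XP x64 server side, so treat this as a quick elimination step only.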


  • Azure: can't ping or telnet to VM from client

    - by Raif
    I have a VM on Azure with an instance of SQL Server 2012 running on it. From my work computer and my home computer I can't get SQL Server Management Studio to connect to it. I have looked at ALL the settings recommended in numerous articles; everything is set up correctly:

        endpoint 1433, private and public
        SQL Server TCP enabled
        SQL Server TCP listening on the right port
        SQL Server using mixed auth
        Windows firewall: holes poked and then disabled, on both client and VM
        can log in from the VM using the credentials that I'm trying to use remotely

    Furthermore, I can't ping the DNS name or IP, or telnet to the address, from my local machines. I can, however, hit the IIS site from a browser using the IP. Strange. CS asked me to download MS Network Monitor, which I did, and pinged and telneted. I have the results saved but can't really make heads or tails of them. CS hasn't responded yet. I can post some info here that would help.

    EDIT: Never one to shrink from a challenge, I deleted my VM and re-did everything. Now it works, although my confidence in Azure is somewhat shaken.


  • Mac Management and Security

    - by Bart Silverstrim
    I was going through some literature on managing OS X laptops and asked someone some questions about usage scenarios when using the MacBooks. I asked someone more knowledgeable than I whether it was possible for my Mac to be taken over, if I were visiting another site for a conference or on a wifi network at a local coffee house, by policies from an OS X Server with Workgroup Manager (either legit for the site, or someone running a version of OS X Server on hardware they have hidden somewhere on the network), which apparently could be set up to do things like limit my access to Finder or impose other neat whiz-bang management features. He said that it is indeed possible: the settings would be assigned via the DHCP server, the OS X Server would assume my Mac is a guest and could hand out restrictions, and apparently my Mac will happily accept them without notifying me or giving me an option, unlike Windows, which I believe would need to be joined to a domain before it becomes "managed" by Active Directory. So my question is: as network admins and sysadmins with users traveling with MacBooks, is there a way to reasonably protect your users from having their machines hijacked without resorting to just turning off networking all the time? Or isn't this much of a security hazard? What threat does this pose to the road warriors in your businesses?


  • JAWStats statspath error on Windows

    - by crosenblum
    I have AWStats, which works fine, and JAWStats, which I am trying to get working. I have tried back and forward slashes to get the program to read the DirData files. I even moved the DirData folder outside of the Program Files folder, in case it had problems with folder names with spaces in them. Here is my config file:

        // core config parameters
        $sConfigDefaultView = "thismonth.all";
        $bConfigChangeSites = true;
        $bConfigUpdateSites = true;
        $sUpdateSiteFilename = "xml_update.php";

        // individual site configuration
        // awstats092012.noname.jumpingcrab.com.txt
        $aConfig["site1"] = array(
            "statspath"  => "C:\\Program Files\\AWStats\\DirData\\",
            "statsname"  => "awstats[MM][YYYY].yourexample.com.txt",
            "updatepath" => "C:\\Program Files\\AWStats\\wwwroot\\cgi-bin\\awstats.pl\\",
            "siteurl"    => "http://yourexample.com",
            "theme"      => "default",
            "fadespeed"  => 250,
            "password"   => "",
            "includes"   => ""
        );

    Domain names changed to protect the innocent... :P

    Here is the error message:

        An error has occured: No AWStats Log Files Found
        JAWStats cannot find any AWStats log files in the specified directory: C:\Program Files\AWStats\DirData\
        Is this the correct folder? Is your config name, site1, correct?
        Please refer to the installation instructions for more information.
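
    One thing worth double-checking, going by the comment line in the config: JAWStats only finds data if the statsname pattern reproduces the real AWStats file names in DirData exactly. A hedged sketch of the site entry with the pattern matched to the file named in the comment, and with forward slashes, which PHP on Windows accepts and which sidesteps the backslash-escaping question:

        "statspath" => "C:/Program Files/AWStats/DirData/",
        "statsname" => "awstats[MM][YYYY].noname.jumpingcrab.com.txt",

    If the real files use a different site suffix (the question notes the names were sanitized), the pattern has to match that suffix instead.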


  • What is the advantage of not running as root? [closed]

    - by Shmuel Brill
    Possible Duplicate: What's wrong with always being root?

    All modern brands of Linux highly discourage (or disable) running as root instead of as a normal user. I do not understand why. As a "normal" user, one could:

        1. Download a rogue program from the internet.
        2. Run it (after all, one isn't root -- what can it do?).
        3. It installs itself in .bashrc or .xinitrc.
        4. It writes a rogue "sudo" and "su" and adds . to the path.
        5. Not noticing that . is in the path, one runs sudo.
        6. The rogue program now has the root password and can do anything it wants in the system.

    Even if 3-6 doesn't happen, the program could still:

        1. Be part of a botnet.
        2. Read all files in the home directory and send them back (mine for SS#, credit card numbers, bank account numbers, etc).
        3. Send spam.
        4. Run a backdoor server to allow an attacker a chance to connect to the machine to determine vulnerabilities.

    It seems that the whole "permissions" thing (root/non-root) is just to prevent amateur crackers from getting into the system, so the question is: Is there a point in avoiding running as root, and is there a way to protect oneself if one wants to run unsafe code?


  • Cisco ASA: Routing packets based on where the connections started from

    - by DrStalker
    We have a Cisco ASA 5505 (version 8.2(2)) with three interfaces:

        outside: IP address 11.11.11.11, this is the default route
        inside:  IP address 10.1.1.1, this is the local subnet
        newlink: 22.22.22.22, this is a new internet connection

    We need to move VPN users from the 11.11.11.11 address to the 22.22.22.22 address, and we're using SSH on the ASA to test and sort out the routing. The problem we have is this: if we define a particular IP as being on a static route out the newlink interface, then it can SSH to 22.22.22.22 fine. If we do not define a static route, then the traffic hits the ASA, but the return traffic does not come back over newlink; presumably it gets sent over the outside interface, as that is the default route. We can't define a static route for each remote endpoint because there are dialup VPN users, who obviously change IP a lot. What we need to do is configure the ASA so that if a connection comes in on the newlink interface, the outgoing packets for that connection go over the newlink interface, not the default route. With iptables this should be do-able by marking the connection and doing mark-routing, but what is the equivalent for a Cisco ASA?


  • ZFS: Mirror vs. RAID-Z

    - by John Clayton
    I'm planning on building a file server using OpenSolaris and ZFS that will provide two primary services: be an iSCSI target for XenServer virtual machines, and be a general home file server. The hardware I'm looking at includes 2x 4-port SATA controllers, 2x small boot drives (one on each controller), and 4x big drives for storage. This allows one free port per controller for upgrading the array down the road.

    Where I'm a little confused is how to set up the storage drives. For performance, mirroring appears to be king. I'm having a hard time seeing what the benefit of using RAIDZ over mirroring would be. With this setup I can see two options: two mirrored pools in one stripe, or RAIDZ2. Both should protect against 2 drive failures and/or one controller failure... the only benefit of RAIDZ2 would be that any 2 drives could fail. The storage should be 50% of capacity in both cases, but the first should have much better performance, right?

    The other thing I'm trying to wrap my mind around is the benefit of mirrored arrays with more than two devices. For data integrity, what, if any, would be the benefit of RAIDZ over a three-way mirror? Since ZFS maintains file integrity, what does RAIDZ bring to the table... don't ZFS's integrity checks negate the value of RAIDZ's parity?
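
    For concreteness, the two candidate layouts would be created roughly like this (device names are placeholders for the four storage drives):

        # two mirrored pairs striped together: best random I/O; survives any single
        # failure and many, but not all, two-drive failures
        zpool create tank mirror c8t0d0 c8t1d0 mirror c9t0d0 c9t1d0

        # raidz2 across all four drives: survives any two drive failures,
        # at the cost of IOPS (the vdev performs roughly like a single drive)
        zpool create tank raidz2 c8t0d0 c8t1d0 c9t0d0 c9t1d0

    On the integrity question: ZFS checksums detect corruption in any layout, but it is the redundancy (mirror copies or RAIDZ parity) that gives ZFS something to repair from, so RAIDZ's parity is not made redundant by the checksums; the two mechanisms work together.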


  • Centralized Windows/Mac Patch Management that is easy to use

    - by BiggsTRC
    I'm looking for advice on what patch management solutions you would recommend based upon your experience, and also which ones you would not recommend. We have a mixed network of Windows and Mac clients. Our central servers are all Windows servers, although I have considered putting in a Mac server to better handle our Mac clients. The issue we are facing currently is that we need to maintain the patches on all of our third-party applications. Right now we use WSUS, which handles patching of Windows and some Microsoft products, but that is about it. I need something to cover the other applications, specifically things like Adobe products (Reader, Flash, Dreamweaver, etc.). Our network isn't that big (maybe 200 clients) and I don't have a person to dedicate just to patching and maintaining a patch management solution. Thus very large and complicated solutions like System Center are most likely out.

    I have recently been looking at Dell's Kace K1000 solution (http://www.kace.com/products/systems-management-appliance/). It seems simple and it provides a lot of tools in one package that I would like/need as well. I like the fact that it is self-contained in an appliance and that it is designed for situations like mine. However, I'm not sure if this is the best solution. I've also looked some at Shavlik's Netchk solution (http://www.shavlik.com/netchk-protect.aspx) but I don't need an anti-virus product. However, it looks like they might have a very good patch database.

    My question is this: What are your thoughts on these two products? Are there better products out there? Are there issues that I'm not considering? I want something that is very good at patching a broad range of products, that is simple to use, that takes a minimal amount of management (like WSUS), and that (hopefully) works with Mac and Windows.


  • Trouble with local id / remote id configuration of VPN

    - by Lynn Owens
    I have a NetGear UTM firewall and a Windows machine running NetGear's VPN client. The Windows machine I can put on the UTM network and take off of it. When I am cabled into the local (internal) network, the following configuration works:

        UTM:
            Local Id:  Local WAN IP: (the UTM's WAN IP address)
            Remote Id: User FQDN: utm_remote1.com

        Client:
            Local Id:  DNS: utm_remote1.com
            Remote Id: (the UTM's WAN IP address)
            Gateway authentication: preshared key
            Policy remote endpoint: FQDN: utm_remote1.com

    But when I'm off the UTM's internal local network and simply coming in from the internet, this does not work. It simply repeats SEND phase 1 before giving up. Since I know that the UTM WAN IP is accessible from both inside and outside the network, I figured the problem was with the client Local Id. So, I tried the following:

        UTM:
            Local Id:  Local WAN IP: (the UTM's WAN IP address)
            Remote Id: (a DN of a self-signed certificate I created for the client and uploaded into the UTM certificates)

        Client:
            Local Id:  (the DN of the aforementioned self-signed cert)
            Remote Id: (the UTM's WAN IP address)
            Gateway authentication: (the aforementioned self-signed cert)
            Policy remote endpoint: ... er, ... my choices are IP and FQDN.... Not sure what to put here

    No matter what I've tried, it just keeps repeating the SEND phase 1. Any ideas?


  • INFORMIX - listener thread err 25582

    - by Samuel Lao
    I've been digging through different forums for the last 7 days looking for a possible solution. Our database is based on Informix running on a Linux server (SUSE Linux 11). Suddenly, last Saturday, Informix began to show an error message:

        listener-thread err=-25582 oserr=0, network connection is broken

    End users started to call, reporting slow network performance to this server and moments where the database application lost its connection with the server. So we pinged the DB server, getting good responses (1 ms) without losing packets. I tried typing telnet (ipserver) 1526, which is Informix's port for the application; it works. We had to disconnect the server and enable a backup DB server which is located at another branch. It has been working, though not great, because the backup server doesn't have good specs (it is an old Dell server model). So, I scanned the main server looking for viruses using Trend Micro Server Protect; it didn't find anything (0 viruses and spyware). I checked the switches and routers, but I haven't found anything strange. What else could it be? Thanks in advance for your time and help with this issue; I would really appreciate any advice.


  • CheckPoint/Amazon VPC VPN tunnel working inconsistently

    - by Lee
    First time poster, so please be gentle and correct me if there's Server Fault etiquette I'm missing. We have two CheckPoint edge devices at sites A & B, independently managed, connecting to two Amazon private clouds. In both cases, the two Amazon VPCs are in the same community on the CheckPoint device. A VPN tunnel exists between the two CheckPoint devices as well.

    Between sites A & B and the Amazon VPC in Northern Virginia, we are unable to keep more than one tunnel up. Both will come up, but tunnel 2 will drop an hour after initiation and will not come back up while tunnel 1 is up. We believe the 1-hour period is due to IPsec phase 2 renegotiation, but can't be sure. On our side, we see the tunnel 2 remote endpoint as not responding to phase 2 negotiation. Between sites A & B and the Amazon VPC in Oregon, we have no issues: both tunnels are up and fail over properly.

    The CheckPoint gateways are using domain-based VPNs. According to CheckPoint's advice to Amazon, this won't work. Yet, in Oregon, it does. We've pursued this with Amazon and, despite the fact it's working in Oregon, they've refused to troubleshoot with us further. Can anyone suggest anything we can do to try to get this stabilized? Going to route-based VPNs is not an option for us.


  • Could this server log mean my server is being used as a proxy?

    - by So Over It
    I came across the following entry in my access.log:

        58.218.199.147 - - [05/Jun/2012:12:56:04 +1000] "GET http://proxyproxys.com/ HTTP/1.1" 200 183 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"

    Normally when I see a full URL entry in my access.log I assume it is log spam with people trying to get me to access their site. These entries are normally followed with a 404 response. The above entry is followed with a 200 'success' response! Doing some searching, it would seem that this can occur when someone is trying to use your server as a proxy. This disturbed me more, especially because the URL in question has the word proxy in it. Going to the site 'proxyproxys.com' (using hidemyass.com to protect my own identity), the site returns what appears to be some sort of 'proxy judge':

        ----------------------------------------
        HTTP_ACCEPT=text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        HTTP_ACCEPT_LANGUAGE=en-US,en;q=0.8
        HTTP_USER_AGENT=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.53 Safari/536.5
        HTTP_CONNECTION=close
        REMOTE_PORT=56355
        REMOTE_HOST=74.63.112.142
        REMOTE_ADDR=74.63.112.142
        ----------------------------------------
        CS_ProxyJudge Result=HIGH_ANONYMITY
        ----------------------------------------

    Questions:
        1) Does the 200 success mean that someone has been able to successfully use my server as a proxy?
        2) Are there other means of confirming whether my server is being used as a proxy?
        3) Can you refer me to documentation to help 'close up' my security gap if there is one?
    Thanks.
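
    Two quick checks, offered as a sketch (assuming Apache; replace example.org with the server's own public name). First, a 200 with only 183 bytes may simply be the default vhost answering the absolute-URL request with its own small page rather than fetching proxyproxys.com, which would make this log noise rather than a breach. Second, test explicitly whether the box will proxy for a third party:

        # ask your own server to act as a forward proxy; getting the third-party page
        # back is the bad sign, an error page or your own content is the good sign
        curl -v -x http://example.org:80 http://www.example.com/ -o /dev/null

        # in the Apache config, forward proxying should stay off if mod_proxy is loaded
        #   ProxyRequests Off

    If the curl test does return third-party content, locking down or unloading mod_proxy (and reviewing any reverse-proxy VirtualHosts) is the place to start.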


  • Is it possible to rate limit based on host headers? i.e. not just on ip address

    - by Blankman
    I have a web service endpoint that I am building where people will post an XML file, and it will really get pounded with over 1K requests per second. Now they are sending in these XML files via HTTP POST, but a good majority of them will be rate limited. The problem is, the rate limiting will be done by the web application by looking up the source_id in the XML, and if it is over x requests per minute, it will not be processed further. I was wondering if I could do the rate-limit check earlier in the processing somehow, and thus save the 50K file from going through the pipeline to my web servers and eating up resources. Could a load balancer make a call out to verify rate usage somehow? If this is possible, I could maybe put the source_id in a host header so even the XML file doesn't have to be parsed and loaded into memory. Is it possible to just look at host headers and not load up the entire 50K XML file into memory? I really appreciate your insights, as this takes more knowledge of the entire TCP/IP stack etc.
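
    Yes: a front-end proxy or load balancer can throttle on anything visible in the request line or headers without ever reading the body. As one illustration (nginx is an assumption here, not something named in the question), the clients would put the identifier in a header such as X-Source-Id, and the 50K XML is only forwarded for requests that pass the limit:

        # http context: one token bucket per source id, 60 requests per minute
        limit_req_zone $http_x_source_id zone=per_source:10m rate=60r/m;

        upstream backend_pool {
            server 10.0.0.5:8080;      # placeholder application server
        }

        server {
            listen 80;

            location /ingest {
                limit_req zone=per_source burst=20 nodelay;
                proxy_pass http://backend_pool;
            }
        }

    Strictly speaking the Host header names the virtual host, so per-customer hostnames keyed on $host would also work, but a custom header keeps a single public endpoint. Note that requests missing the header get an empty key and are effectively unlimited, so the application should keep its own check as a backstop.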


  • Mirror a RAID0 volume

    - by Ghostrider
    I have two SSDs running in RAID0. The capacity and speed are just great. I use Windows Home Server to do incremental daily backups. This is fine and well, and I've successfully restored from these backups. However, when one of the disks physically died, I was stuck without a working system until the replacement arrived so that I could restore the array from backup. WHS restoration takes about 5 hours, which basically means that I'm losing an entire day for the process. Is it possible to set up a kind of recovery volume for the RAID array? Use a single mechanical HDD that would be updated with an exact clone of the RAID array on a daily basis. This way, if the array goes offline for some reason, I can just boot from the mechanical HDD, lose some perf, but still be able to work. The machine in question runs Windows 7. Creating RAID01 is not an option because of the high price of the SSDs and the fact that it still doesn't protect against failure of the RAID controller. Is there any way it can be set up?
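
    One way to get close to this with what is already in Windows 7 (a sketch; the drive letters are placeholders): schedule a nightly system image of the RAID0 volume onto the mechanical disk and keep a system repair disc handy. It is not a directly bootable clone, but restoring a local image is typically much quicker than pulling a full restore over the network.

        rem Nightly system image of C: (the RAID0 volume) onto the mechanical disk D:
        rem Run from an elevated prompt, or as a scheduled task
        wbadmin start backup -backupTarget:D: -include:C: -allCritical -quiet

    If the goal really is "boot from the HDD immediately", a disk-cloning tool that writes a bootable copy to the mechanical drive on a schedule is the alternative; Windows' built-in imaging won't leave the HDD directly bootable.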


  • suggestions for firewall/router project using *BSD or Linux

    - by Adeodatus
    Hi All, I have a project in mind and I'd love to hear some ideas on some open source solutions with COTS hardware. I have a few 24- and/or 48-port managed layer 2 switches with customers potentially on each port (though it's usually about 20-30). Right now the switch has a bridged network and we backhaul the traffic to our core to a centralized DHCP server. I need to move them to a NAT solution and, while doing this, I'd like to protect the customers on each port from the customer traffic on the other ports. I also need to be able to port forward from the public side of the firewall/NAT box to specific hardware on the inside of the NAT machine (easy enough, I know). My first thought is to build an appliance-like box (the fewer moving parts the better) that can do filtering and NAT, with an RFC 1918 address range being handed out via a DHCP server on the appliance. A caching DNS server on the appliance would be a plus, since we backhaul everything to the core. I'd like to run FreeBSD but I'm open. Now, to try to limit the broadcast traffic that's visible, I was thinking of making each port on the switch a different VLAN and having the switch do trunking to the private NIC on the FreeBSD appliance. I'd probably need to do some magic on the FreeBSD NIC to get this working, but it should. We have the parts to build these systems. So, does this make sense? Are there any other solutions out there that we don't have to spend money on but can use our parts to create something? Are there any good distros that could do this already (monowall)? I may or may not admin this solution, so a secure web configuration and management tool would be a plus in the other admins' minds. Thoughts?
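
    A rough sketch of the per-port-VLAN idea on the FreeBSD side (interface names, VLAN IDs and subnets are all placeholders): trunk the customer VLANs to the appliance's inside NIC, give each VLAN its own RFC 1918 subnet, and let pf NAT them out while blocking traffic between VLANs.

        # /etc/rc.conf fragment: em0 = public side, em1 = trunk from the switch
        ifconfig_em1="up"
        vlans_em1="101 102"
        ifconfig_em1_101="inet 10.101.0.1/24"
        ifconfig_em1_102="inet 10.102.0.1/24"

        # /etc/pf.conf fragment
        ext_if    = "em0"
        cust_nets = "{ 10.101.0.0/24, 10.102.0.0/24 }"

        nat on $ext_if from $cust_nets to any -> ($ext_if)
        block in on em1.101 from any to 10.102.0.0/24    # keep customer VLANs apart
        block in on em1.102 from any to 10.101.0.0/24
        pass out on $ext_if keep state

    A DHCP server (isc-dhcpd) and a caching resolver (e.g. unbound) listening on each VLAN interface round out the appliance; pfSense and m0n0wall package essentially this combination behind a web GUI, which may cover the "secure web configuration" requirement.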


  • Connecting to RDS database from EC2 instance using bind9 CNAME alias

    - by mptre
    I'm trying to get internal DNS up and running on an EC2 instance. The main goal is to be able to define CNAME aliases for other AWS services. For example: instead of using the RDS endpoint, which might change over time, an alias mysql.company.int can be used instead. I'm using bind9 and here are my config files:

        /etc/bind/named.conf.local

            zone "company.int" {
                type master;
                file "/etc/bind/db.company.int";
            };

        /etc/bind/db.company.int

            ; $TTL 3600
            @       IN      SOA     company.int. company.localhost. (
                                    20120617        ; Serial
                                    604800          ; Refresh
                                    86400           ; Retry
                                    2419200         ; Expire
                                    604800 )        ; Negative Cache TTL
            ;
            @       IN      NS      company.int.
            @       IN      A       127.0.0.1
            @       IN      AAAA    ::1

            ; CNAME
            mysql   IN      CNAME   xxxx.eu-west-1.rds.amazonaws.com.

    The dig command assures me my alias is working as expected:

        $ dig mysql.company.int
        ...
        ;; ANSWER SECTION:
        mysql.company.int.      3600    IN      CNAME   xxxx.eu-west-1.rds.amazonaws.com.
        xxxx.eu-west-1.rds.amazonaws.com. 60 IN CNAME ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com.
        ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com. 589575 IN A zzz.zz.zz.zzz
        ...

    As far as I can understand, a reverse zone isn't needed for a simple CNAME alias. However, when I try to connect to MySQL using my newly created alias, the operation gives me a timeout:

        $ mysql -uuser -ppassword -hmysql.company.int
        ERROR 2003 (HY000): Can't connect to MySQL server on 'mysql.company.int' (110)

    Any ideas? Thanks in advance!
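
    Since the alias just chains onto the same CNAMEs as the endpoint itself, a timeout usually points at connectivity or authorization rather than at bind. A few hedged checks from the instance (the names are the placeholders used in the question):

        # do the alias and the raw endpoint resolve to the same thing?
        dig +short mysql.company.int
        dig +short xxxx.eu-west-1.rds.amazonaws.com

        # is the MySQL port reachable at all, bypassing the alias?
        nc -zv xxxx.eu-west-1.rds.amazonaws.com 3306

    If the raw endpoint times out as well, the RDS security group is the usual culprit (it needs to allow the instance's security group or address); if only the alias fails, compare the two dig answers and the resolver order in /etc/resolv.conf on the instance.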


  • How to balance the root domain using NS records?

    - by Patrick McCurley
    I have two load balancers that balance incoming traffic across multiple data centers. These work fine. I can test them out by doing an 'nslookup mydomain.com xIP'. I have now taken out DNS services with DYN.com to allow me to manage the DNS zone file, so that typing mydomain.com will ask my load balancers what IP address to resolve to.

    Step 1: the NS record for www. I set up A records (glue) for ns1 & ns2, then the corresponding NS records to delegate the DNS lookup to the balancers instead of DYN.com's nameservers:

        ns1.mydomain.com    A   [ip address of load balancer 1]
        ns2.mydomain.com    A   [ip address of load balancer 1]
        www.mydomain.com    NS  ns1.mydomain.com
        www.mydomain.com    NS  ns2.mydomain.com

    All is well: when I type www.mydomain.com, the requests get delegated to my load balancers, which provide the IP address of the endpoint, and the connection is made successfully.

    Step 2: the NS record for the root. This is where I run into problems. I need customers to be able to type 'mydomain.com' (without the www) and ALSO get delegated to the load balancers for the IP address. However, from the research I have done, and through the DYN control panel, it seems that providing an NS record for the root is not allowed, as this overrides the default NS servers. How can I delegate both the root and the www to my load balancers?
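
    The apex of a zone cannot be delegated away from the zone's own nameservers (and it cannot carry a CNAME), so the two usual routes, sketched below with placeholder IPs, are either to make the balancers authoritative for the whole zone by setting ns1/ns2 as the domain's nameservers at the registrar, or to keep DYN authoritative and publish a short-TTL A record at the apex that is kept pointed at a healthy balancer.

        ; Option 1: balancers host the entire zone (registrar points the domain at them)
        mydomain.com.        IN  NS   ns1.mydomain.com.
        mydomain.com.        IN  NS   ns2.mydomain.com.
        ns1.mydomain.com.    IN  A    192.0.2.10       ; balancer 1 (placeholder)
        ns2.mydomain.com.    IN  A    198.51.100.10    ; balancer 2 (placeholder)

        ; Option 2: DYN stays authoritative; the apex answers with a low-TTL A record
        mydomain.com.   60   IN  A    192.0.2.10

    Option 1 keeps the "ask the balancer at resolution time" behaviour for both the apex and www; option 2 trades that for simplicity, with failover handled by updating the apex record at DYN when a balancer goes down.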

