Search Results

Search found 13411 results on 537 pages for 'proxy servers'.


  • mod_access for lighttpd causes a 403 error for all POST requests

    - by Sam
    I have found on my Debian server that the lighttpd module mod_access is causing the server to respond with a 403 to all POST requests. It's very odd, as I have two servers: one runs as I'd expect and the other keeps returning these 403s. They are running identical configs for lighttpd and PHP. My lighttpd.conf is: https://gist.github.com/4269500 There is also one other custom conf: https://gist.github.com/4269508 I've opened up the servers for requests until I get this fixed; the server that works is http://mercury.isitup.org/ and the one that fails is http://venus.isitup.org/. After working out that disabling mod_access resolves the problem, I grepped all my lighttpd configs for uses of it (docs). Disabling each line I found didn't help, leading me to think this is perhaps some default behaviour (or a bug?). Has anyone come across this before, or does anyone know what configuration value I've got wrong? Versions: Debian GNU/Linux 6.0.6 (squeeze); lighttpd/1.4.28 (ssl); PHP 5.3.19-1~dotdeb.0 with Suhosin-Patch (cli).
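
    A quick way to narrow this down is to compare the two servers with the same explicit POST and to list every access rule lighttpd actually loads, includes and all. A minimal debugging sketch (the POST body is arbitrary):

        # Compare the status line each server returns for an identical POST:
        curl -i -X POST -d 'ping=1' http://venus.isitup.org/
        curl -i -X POST -d 'ping=1' http://mercury.isitup.org/

        # List every place mod_access or its directive could be configured,
        # including any included conf files:
        grep -rn 'mod_access\|url.access-deny' /etc/lighttpd/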

    Read the article

  • CompTIA A+ exam

    - by SysPrep2010
    Hello everyone, I have been in the IT field for only two years. I have been dealing with servers, firewalls, routers, switches, backup servers, and desktops. For the desktops, I have been dealing with WDS (Windows Deployment Services), not a lot of hardware. My question is this: is it really important to have an A+ cert under your belt? I don't see the point anymore. When a desktop goes down, from what I have been seeing, they just buy a new one. I mean, I can rebuild systems, they are fun, but I haven't done it in a while.

    Read the article

  • Apache log file problem

    - by Luke
    I've recently set up an Apache 2 web server, and I noticed quite a few lines in the error and access logs that start with the following sequence (but longer). Does anyone know where this comes from? ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@ ....... My setup is an Apache 2 load balancer with mod_proxy_balancer enabled and two Apache 2 web servers. All three servers write to the same log files on an NFS share. My first guess is that the problem has to do with that, since it's the only difference that comes to mind from other setups I've used in the past, but I'm not sure.
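
    The ^@ characters are how NUL bytes are displayed, and runs of NULs in shared logs are a classic symptom of several writers appending to one file over NFS without coordinated locking (overlapping writes leave zero-filled gaps). A quick check, as a sketch assuming GNU grep and the default Debian/Ubuntu log path:

        # Count log lines containing NUL bytes (-a treats the file as text,
        # -P enables \x00 in the pattern):
        grep -acP '\x00' /var/log/apache2/access.log

    If the shared NFS log is the cause, giving each server its own log file and merging them offline usually makes the NULs disappear.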

    Read the article

  • Win2008 DC in a Windows 2000 domain: can I keep the old DC?

    - by gravyface
    Will be putting a new Windows Server 2008 Standard Edition server into a single-domain network with two domain controllers, both running Windows 2000 Server. The functional level of the domain is mixed mode/2000. Until a second 2008 DC can be purchased, I'd like to leave the current Win2k operations master DC as a backup DC, as the other member servers (running 2003) have either accounting/SQL or Exchange on them. Eventually all the Win2k servers will be decommissioned, but until then I need another DC for redundancy. Following the standard process for adding a new DC, can I leave the old operations master DC (or the other backup DC) running after I transfer the FSMO roles to the new server? Will this cause any issues?
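
    Generally, once the FSMO roles are transferred, the old Windows 2000 DCs can stay online as ordinary additional domain controllers for redundancy; nothing forces an immediate demotion. One way to confirm where the roles actually landed after the transfer, as a sketch assuming the Windows support tools are installed on the box it runs from:

        C:\> netdom query fsmo
        C:\> dcdiag /test:knowsofroleholders /v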

    Read the article

  • Using Terminal Services 'query' in batch file

    - by dboarman-FissureStudios
    I have a batch file that checks several of our servers for a user, using the command: query user %userID% /server:ServerName. I want to capture the output before it goes to the screen. Is there a way to redirect the output to a variable? The basic gist of what I want to accomplish is this: we iterate through our servers: query user %userID% /server:Server1 query user %userID% /server:Server2 query user %userID% /server:Server3 query user %userID% /server:Server4 Instead of outputting a message that the user could not be found on a specific server, I would like it to notify me only when it finds the user on a server.
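
    query user sets ERRORLEVEL to a non-zero value when the user is not found, so the normal output can be discarded and only hits reported. A minimal batch sketch (the server names are placeholders):

        @echo off
        setlocal
        for %%S in (Server1 Server2 Server3 Server4) do (
            rem Discard output; query sets ERRORLEVEL 1 when no session exists
            query user %userID% /server:%%S >nul 2>&1
            if not errorlevel 1 echo %userID% is logged on to %%S
        )
        rem To capture a line of output into a variable instead, something like:
        rem for /f "skip=1 tokens=*" %%L in ('query user %userID% /server:Server1') do set "line=%%L"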

    Read the article

  • VPS for GlassFish

    - by Harry Pham
    Our small startup company plans to deploy a web application on GlassFish, and I wonder if some of the experienced users out there can answer a couple of questions. When I shop for a server I usually look at the amount of RAM, as GlassFish does require a good amount of RAM to run. Below are two sites with a significant price difference for the same amount of RAM; I wonder why? GoDaddy: http://www.godaddy.com/hosting/virtual-dedicated-servers.aspx?ci=9013 versus http://entic.net/Servers Is the plan below from GoDaddy considered good enough to run a GlassFish application? OS: Linux CentOS • RAM: 4 GB • Storage: 60 GB • Bandwidth: 2,000 GB/mo. Our web application is a social network, expected to have 2,000-4,000 users to start with.
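
    For what it's worth, on any of these plans GlassFish only uses the heap it is configured with, and the default domain ships with a small -Xmx cap. A hedged sketch for raising it with asadmin (the 512 MB default and the 2 GB figure are assumptions, not a sizing recommendation for this plan):

        # Replace the stock heap cap with ~2 GB of the 4 GB plan:
        asadmin delete-jvm-options -- -Xmx512m
        asadmin create-jvm-options -- -Xmx2048m
        asadmin restart-domain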

    Read the article

  • Assign fixed IP address via DHCP by DNS lookup

    - by Janoszen
    Preface: I'm building a virtualization environment with Ubuntu 14.04 and LXC. I don't want to write my own template, since the upgrade from 12.04 to 14.04 has shown that backwards compatibility is not guaranteed. Therefore I'm deploying my virtual machines via lxc-create, using the default Ubuntu template. The DNS for the servers is provided by Amazon Route 53, so no local DNS server is needed. I also use Puppet to configure my servers, so I want to keep the manual effort of deployment minimal. Now, the default Ubuntu template assigns IP addresses via DHCP, so I need a local DHCP server to assign IP addresses to the nodes so that I can SSH into them and get Puppet running. Since Puppet requires a proper DNS setup, assigning temporary IP addresses is not an option; the client needs to get the right hostname and IP address from the start.

    Question: Which DHCP server do I use, and how do I get it to assign the IP address based only on the host-name DHCP option, by performing a DNS lookup on that very host name?

    What I've tried: I tried to make it work using the ISC DHCP server; however, the manual clearly states:

        Please be aware that only the dhcp-client-identifier option and the
        hardware address can be used to match a host declaration, or the
        host-identifier option parameter for DHCPv6 servers. For example, it is
        not possible to match a host declaration to a host-name option. This is
        because the host-name option cannot be guaranteed to be unique for any
        given client, whereas both the hardware address and dhcp-client-identifier
        option are at least theoretically guaranteed to be unique to a given client.

    I also tried to create a class that matches the hostname, like this:

        class "my-client-name" {
            match if option host-name = "my-client-name";
            fixed-address my-client-name.my-domain.com;
        }

    Unfortunately the fixed-address option is not allowed in class statements. I can replace it with a 1-size pool, which works as expected:

        subnet 10.103.0.0 netmask 255.255.0.0 {
            option routers 10.103.1.1;
            class "my-client-name" {
                match if option host-name = "my-client-name";
            }
            pool {
                allow members of "my-client-name";
                range 10.103.1.2 10.103.1.2;
            }
        }

    However, this would require me to administer the IP addresses in two places (Amazon Route 53 and the DHCP server), which I would prefer not to do.

    About security: since this is only used in the bootstrapping phase on an internal network and is then replaced by a static network configuration by Puppet, this shouldn't be an issue from a security standpoint. I am, however, aware that the virtual machine bootstraps with "ubuntu:ubuntu" credentials, which I intend to fix once this is running.
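
    Since ISC dhcpd cannot match host-name to a fixed address, one workaround (a sketch, not a turnkey setup) is dnsmasq, whose dhcp-host= entries can match on the hostname the client sends; the mapping file can then be regenerated from Route 53 by resolving each expected hostname. The hostnames file and paths below are assumptions:

        #!/bin/sh
        # Sketch: build dnsmasq dhcp-host entries by resolving each expected
        # container hostname against Route 53 (one name per line in the
        # hypothetical /etc/lxc-hostnames list).
        out=/etc/dnsmasq.d/lxc-fixed.conf
        : > "$out"
        while read -r host; do
            ip=$(dig +short "$host.my-domain.com" | head -n1)
            [ -n "$ip" ] && echo "dhcp-host=$host,$ip" >> "$out"
        done < /etc/lxc-hostnames
        # dnsmasq.conf still needs a dhcp-range line to enable its DHCP engine.
        service dnsmasq restart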

    Read the article

  • Is it possible to host a website in the 'ether' of the Internet -- not on a server -- so that it cannot be taken down?

    - by Chris Altman
    This is a theoretical problem I am curious about. Websites are hosted on servers, and servers can be taken offline. Is it possible to host a website in the 'ether' of the Internet -- not on a server -- so that it cannot be taken down? One example is that the website is hosted on other websites, like a parasite. Another is that it is assembled through storing pieces on DNS machines, routers, etc., so that it gets assembled on the fly. The purpose is that this website could live forever because no one person can remove it. The answers I am looking for are plausible ideas/approaches on how, technically, this could be built.

    Read the article

  • Load balancing without a load balancer?

    - by Tom
    I have 4 nginx-powered image servers on their own subdomains, which users access at random. I decided to put them all behind an HAProxy load balancer to improve reliability and to see the traffic statistics from a single location. It seemed like a no-brainer. Unfortunately, the move was a complete failure, as the load balancer's 100 Mbit port was completely saturated with all requests now going through it. I was wondering what to do about this: I could get a port upgrade ($$) or return to 4 separate, randomly accessed image servers. I thought about putting HAProxy on each image server, which would in turn route to another image server if that server's nginx service was having trouble. What would you do? I would like to not have to spend too much additional money.
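
    Short of a port upgrade, plain DNS round-robin over the four servers avoids funneling all traffic through one 100 Mbit port; HAProxy can stay on the side purely for health checks or stats. A sketch of the zone records (names, TTL, and addresses are placeholders):

        ; img.example.com resolves to all four servers; each client picks one,
        ; so no single port carries the aggregate traffic.
        img.example.com.  300  IN  A  192.0.2.11
        img.example.com.  300  IN  A  192.0.2.12
        img.example.com.  300  IN  A  192.0.2.13
        img.example.com.  300  IN  A  192.0.2.14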

    Read the article

  • Web service not accessible from behind corporate firewalls - how come?

    - by Niro
    We run a SaaS serving a widget which is embedded in customer websites. The service includes static JavaScript code hosted on Amazon S3 and a dynamic part hosted on EC2 with Scalr (using Scalr name servers). We have received some feedback from users behind corporate firewalls that they can't access our service (while they can access the sites that include the widget). This doesn't make sense to me, since the service uses normal HTTP calls on port 80 and our URL is quite new, without any reason to be banned by firewalls. My questions are: 1. Why is the service not accessible, and what can I do about it? 2. Is it possible that one of the following is blocked by corporate firewalls: Amazon S3, the dynamic IP address provided by Amazon, or the Scalr name servers? Any other possible reasons, ways to check them, and remedies for this? Thanks!
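
    One way to narrow this down is to test each dependency separately from an affected corporate network; the bucket and hostname below are placeholders for the real endpoints:

        # Does the static part (S3) load at all?
        curl -sv https://s3.amazonaws.com/your-bucket/widget.js -o /dev/null
        # Does the dynamic part (the EC2/Scalr hostname) resolve, and connect?
        nslookup widget.example.com
        curl -sv http://widget.example.com/ -o /dev/null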

    Read the article

  • NAT rules between 2 network interfaces (with iptables)

    - by Simone Falcini
    This is the current network that I have:

        UBUNTU:
        eth0: ip: 212.83.10.10   bcast: 212.83.10.10   netmask: 255.255.255.255   gateway: 62.x.x.x
        eth1: ip: 192.168.1.1    bcast: 192.168.1.255  netmask: 255.255.255.0     gateway: ?

        CENTOS:
        eth0: ip: 192.168.1.2    bcast: 192.168.1.255  netmask: 255.255.255.0     gateway: 192.168.1.1

    I basically want this: specific NAT rules from the internet to specific internal servers, depending on the port. Connections incoming to port 80 must be redirected to 192.168.1.2:80; connections incoming to port 3306 must be redirected to 192.168.1.3:3306; and so on. I also need one NAT rule to allow the servers in the subnet 192.168.1.x to browse the internet: I need to route requests from eth0 to eth1 to be able to reach the internet. Can I do this on the Ubuntu machine with iptables? Thanks!
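
    A sketch of what this could look like on the Ubuntu box, assuming eth0 faces the internet, eth1 faces the 192.168.1.x subnet, and the FORWARD chain isn't otherwise locked down (persist the rules with your preferred mechanism):

        # Let the kernel route between eth0 and eth1:
        sysctl -w net.ipv4.ip_forward=1

        # Port-forward incoming connections to the internal servers:
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
            -j DNAT --to-destination 192.168.1.2:80
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 3306 \
            -j DNAT --to-destination 192.168.1.3:3306

        # Let the 192.168.1.x subnet reach the internet via eth0:
        iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE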

    Read the article

  • Why is my NTP controlled computer clock two minutes ahead?

    - by Martin Liversage
    The clock in my computer is configured to be synchronized using NTP. To verify this I have tried two NTP clients using various NTP servers. My computer and the NTP clients are in complete agreement about the current time, even across a wide range of NTP servers. I also have a GPS, and my national phone company provides an accurate clock available by calling a specific phone number. Both my GPS and the phone company agree on the current time. However, my computer is almost precisely two minutes (or 1 minute and 59 seconds) ahead of what I believe to be the "real" current time where I live. Why is my computer two minutes ahead? I realize that synchronizing clocks over the internet may not be entirely accurate, as there is latency, but two minutes is a very long time on the internet. Is NTP really two minutes ahead? I'm running Windows 7 and live in the time zone UTC+1, but I don't think that is important in understanding my problem.
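
    Since the standalone NTP clients agree with the computer, it's worth checking which source Windows itself is syncing against and its measured offset; w32tm is built into Windows 7. A sketch of the usual checks:

        w32tm /query /status
        w32tm /query /peers
        w32tm /stripchart /computer:pool.ntp.org /samples:5
        w32tm /resync /rediscover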

    Read the article

  • redundant/multi-site terminal server

    - by Adam
    Hi, we have a Hyper-V cluster running 5 virtual terminal servers using HA. We need to make this system redundant, so that if this site were to fail, our users could log into a backup system at another location and access their data via the terminal servers. Any ideas? We were thinking of maybe using a NAS which replicates the data to the other location in real time (pass-through disks), and having a similar Hyper-V cluster set up in the backup location. However, we would need to create the users in both locations and create a virtual mirror without the data, i.e. applications, directories, settings, etc. Is this the best way to achieve this? We have read that using Hyper-V pass-through disks brings a big performance degradation.

    Read the article

  • How does IBM implement the WebSphere Application Server SDK for the Sun Solaris OS?

    - by Eng Al-Rawabdeh
    I deploy the same application in IBM WAS on different OSes (Windows, AIX, and Sun Solaris), and SDK errors appeared only on the Solaris OS. I checked some sites, which said that the SDK on Solaris was built based on the Sun SDK; is that right? So I need to know whether IBM built the Solaris SDK from scratch or based it on the Sun SDK. More details: I installed the same IBM WAS application server on two servers, as follows: 1. Server1 - OS (AIX) 2. Server2 - OS (Solaris). These two servers are on the same network and have the same configuration. I then deployed a Java application (X) on both servers. Application X ran on Server1 (AIX) without any problem, but when I ran it on Server2 (Solaris) I hit an SDK issue. So I need to know: what is the difference between the AIX WAS SDK and the Solaris WAS SDK? Note: I tried Windows and it ran without any problem.
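
    As far as I know, IBM ships its own J9-based JDK with WAS on AIX, Linux, and Windows, but on Solaris (and HP-UX) WAS bundles a JDK derived from Sun's, with IBM additions, which would be consistent with behavior differing between Server1 and Server2. The bundled JDK of each install can be checked directly (the WAS_HOME path below is an assumption):

        # Print the bundled JDK's vendor/build string on each server:
        /opt/IBM/WebSphere/AppServer/java/bin/java -version
        # WAS's own version report also lists the JDK level:
        /opt/IBM/WebSphere/AppServer/bin/versionInfo.sh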

    Read the article

  • FTP client that supports 2 concurrent FTP sessions

    - by oninea
    I'm looking for an FTP client that can connect to two different FTP servers at the same time and allow file transfer or synchronization between those two servers. Basically, what I want to achieve is to transfer/synchronize files between two different sites from my local machine. Are there any clients that support this functionality? If there are none, is there an alternative way to achieve this? I've taken a look at net2ftp, a web-based FTP client, which provides almost the same functionality that I need; what I'm looking for, though, is a desktop app. Any ideas?

    Read the article

  • How to set up a staging apt repository to securely manage upgrades

    - by andreash
    Hello, I would like to be able to run automatic apt-get upgrade (once per hour) on our servers (Ubuntu 10.04) so that I don't have to do it manually on all of them (about 15). However, for production machines that's not a good idea... So here's my plan: set up a local repository for all 'approved' updates of critical packages. I would then push updated packages from upstream to our local repo after testing them, and all servers could automatically (apt + cron?) upgrade from this repository. So my question is this: how do I configure apt on the clients so that they use the local repository for all packages which exist in the local repository, and the upstream one for all other packages? Does this actually make sense, or am I missing something? Anyway, thanks for your insight! Andreas.
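
    apt pinning does exactly this split: give the local repository a priority above 1000 so that any package that exists there always wins (even over a newer upstream version), while everything else falls through to upstream at the default priority of 500. A sketch, with the internal repo hostname as an assumption:

        # /etc/apt/sources.list.d/internal.list
        deb http://apt.internal.example.com/ubuntu lucid main

        # /etc/apt/preferences.d/internal.pref
        Package: *
        Pin: origin "apt.internal.example.com"
        Pin-Priority: 1001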

    Read the article

  • Ping Unknown Host on CentOS at EC2

    - by organicveggie
    Weird problem. We have a collection of servers running CentOS 5 on EC2. The setup includes two DNS servers and two LDAP servers, and DNS has a CNAME pointing at the primary LDAP server. One machine (and only one machine) is giving me problems: I can ssh into the server using LDAP authentication, but once I'm on the machine, ping won't resolve the LDAP host, even though DNS seems to work fine. Here's ping:

        $ ping ldap.mycompany.ec2
        ping: unknown host ldap.mycompany.ec2

    Here's the output of dig:

        $ dig ldap.mycompany.ec2

        ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> ldap.mycompany.ec2
        ;; global options:  printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2893
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;ldap.mycompany.ec2.            IN      A

        ;; ANSWER SECTION:
        ldap.mycompany.ec2.     3600    IN      CNAME   ec2-hostname.compute-1.amazonaws.com.
        ec2-hostname.compute-1.amazonaws.com. 55 IN A   aaa.bbb.ccc.ddd

        ;; Query time: 12 msec
        ;; SERVER: 10.32.159.xxx#53(10.32.159.xxx)
        ;; WHEN: Tue May 31 11:16:30 2011
        ;; MSG SIZE  rcvd: 107

    And here is resolv.conf:

        $ cat /etc/resolv.conf
        search mycompany.ec2
        nameserver 10.32.159.xxx
        nameserver 10.244.19.yyy

    And here is my hosts file:

        $ cat /etc/hosts
        10.122.15.zzz   bamboo4 bamboo4.mycompany.ec2
        127.0.0.1       localhost localhost.localdomain

    And here's nsswitch.conf:

        $ cat /etc/nsswitch.conf
        passwd:     files ldap
        shadow:     files ldap
        group:      files ldap
        sudoers:    ldap files
        hosts:      files dns
        bootparams: nisplus [NOTFOUND=return] files
        ethers:     files
        netmasks:   files
        networks:   files
        protocols:  files
        rpc:        files
        services:   files
        netgroup:   files ldap
        publickey:  nisplus
        automount:  files ldap
        aliases:    files nisplus

    So DNS works the way I would expect, I can ping the LDAP server by IP address, and I can even access the box with SSH using LDAP authentication. Any suggestions?
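
    Note that ping resolves names through NSS (the hosts: line in nsswitch.conf), while dig talks to the DNS servers directly, so the two can disagree. getent exercises the same lookup path as ping:

        # Same resolver path as ping; if this fails while dig succeeds,
        # the problem is in the NSS layer, not in DNS itself:
        getent hosts ldap.mycompany.ec2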

    Read the article

  • IBM x3620 Server takes a long time to boot past UEFI to OS

    - by Joel Coel
    I have a pair of IBM System x3620 servers. These servers do fine once the operating system finally takes over, but it takes them forever to get past the new-fangled UEFI boot stage: a good five minutes or so, maybe longer. I haven't timed it, but it's the kind of thing where you go get a cup of coffee while you wait and it's still going when you come back. Normally the only time I shut these down is for a monthly maintenance cycle (usually just Windows updates), so it's not a big deal. But in the case of an outage I'd sure like to get those 5 minutes back. Is there anything I can do to tell them to just go ahead and boot already?

    Read the article

  • SQL Server Transaction Log RAID

    - by Eric Maibach
    We have three SQL Server servers, and each server has about five or six databases on it. We are in the process of moving these servers to a new SAN, and I am working out the best RAID configuration. Currently all of the log files for all of the databases share one RAID array; there is nothing else on this array except the log files, but all of the databases use this same array for their log files. I have read that it is best to have log files on separate disks, but in our case I am not sure whether it would be better to have one big array of about 8 drives holding all the log files, or to create four two-disk arrays and give some of the larger databases their own dedicated disks for their log files.
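
    Whichever layout wins, it's worth first measuring whether log writes are actually stalling on the current shared array; SQL Server's file-stats DMV reports cumulative write stalls per file. A sketch:

        -- Average write stall per transaction log file, in ms
        -- (file_id 2 is the log file in a default database layout):
        SELECT DB_NAME(database_id) AS database_name,
               1.0 * io_stall_write_ms / NULLIF(num_of_writes, 0) AS avg_write_stall_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL)
        WHERE file_id = 2
        ORDER BY avg_write_stall_ms DESC;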

    Read the article

  • Ganglia: divide colors by roles

    - by com
    Sorry for a silly question, I am still a newbie to Ganglia. In Ganglia I monitor a few important metrics for MySQL (seconds behind master, etc.). In addition, I have a few bunches of MySQL servers (every bunch has its own tasks, but all of the bunches should be monitored for seconds behind master). I am interested in whether it is possible to show all metrics on one page, with different colors for different bunches. Right now, in the metric "seconds behind master", I see all MySQL servers with colors for the different states (red is critical, gray is OK). Can I set a graph's color according to its bunch? Thanks!

    Read the article

  • How to test DNS glue record?

    - by Sunnz
    Hello, I have just set up a DNS server for my domain example.org with two name servers, ns1.example.org and ns2.example.org. I have attempted to set up glue records for ns1 and ns2 at my registrar. It seems to work for now when I do a dig example.org, but when I do a whois example.org it lists ns1.example.org and ns2.example.org without their IP addresses, which should be set up as glue records. So I am wondering: how do I check for the existence of a glue record? Do I do it with whois? I have seen .com and .net whois records that list both the name servers' host names and their IP addresses; is .org different? What's the proper way to test this? Thanks.
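
    whois output varies by registry, so the more reliable test is to ask the .org TLD servers directly: a non-recursive NS query should return the glue A records in the ADDITIONAL section. A sketch (a0.org.afilias-nst.info is one of the public .org TLD servers):

        # Ask a .org TLD server for the delegation without recursion;
        # glue A records for ns1/ns2 should appear under ADDITIONAL:
        dig +norecurse @a0.org.afilias-nst.info example.org NS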

    Read the article

  • Load Balancing a UDP server

    - by Hellfrost
    Hello StackOverflow, I have a UDP server; it is a central part of my business process. In order to handle the loads I'm expecting in the production environment, I'll probably need 2 or 3 instances of the server. The server is almost entirely stateless; it mostly collects data, and the layer above it knows how to handle the minimal amount of stale data that can arise from multiple server instances. My question is: how can I implement load balancing between the servers? I would prefer to distribute the requests as evenly as possible between the servers. I would also like to have some affinity; I mean, if client X was routed to server Y, then I want all of X's subsequent requests to go to server Y, as long as that is sensible and doesn't overload Y. By the way, it is a .NET system... what would you recommend?
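
    If the balancing can live outside the .NET code itself, one option is nginx's stream module (UDP support since nginx 1.9.13), whose hash directive provides exactly the client stickiness described above, at the cost of funneling packets through the proxy box. A sketch with placeholder addresses and port:

        # nginx.conf (stream context, not http)
        stream {
            upstream collectors {
                # Same client address -> same backend, with minimal
                # reshuffling when a server is added or removed:
                hash $remote_addr consistent;
                server 10.0.0.11:9000;
                server 10.0.0.12:9000;
                server 10.0.0.13:9000;
            }
            server {
                listen 9000 udp;
                proxy_pass collectors;
            }
        }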

    Read the article
