Search Results

Search found 13411 results on 537 pages for 'proxy servers'.


  • *Simple* way to block DDoS by number of requests

    - by Eduard Luca
    I have 3 Varnish 3.0.2 servers with Apache 2 as backends, which are load balanced by a separate HAProxy server. I need to find a very simple program (I'm not much of a sysadmin) that blocks requests from an IP if that IP has made more than X requests in Y seconds. Would something like this be achievable with a simple solution? Right now I have to block all requests manually with iptables.
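
    A hedged sketch of the iptables side, using the recent match so the blocking becomes automatic; port 80 and the 20-requests-in-10-seconds threshold are placeholder assumptions:

        # Record every new connection to port 80 in a per-source-IP list
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --name HTTP --set
        # Drop sources that opened more than 20 new connections in the last 10 seconds
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --name HTTP --update --seconds 10 --hitcount 20 -j DROP

    Note that xt_recent's default list size caps --hitcount at 20; anything above that needs the module's ip_pkt_list_tot parameter raised.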

    Read the article

  • Using both IIS6 and IIS7 with the same SQL State Server

    - by Josef
    We are trying to use new IIS7 (32-bit, Classic Mode) webs in addition to our IIS6 webs, with one SQL State Server for ASP.NET session handling. Unfortunately, the number of transactions per second on the State Server spikes (10x+) as soon as we add the new IIS7 web to the farm. Are there any known issues with the described setup?

    Read the article

  • Ping from specific network adapter on Windows

    - by Dean
    Hey, I've been troubleshooting network issues on servers with 2 NICs and on laptops with wired and wireless cards. How can I force ping and telnet traffic to be sent from a specific adapter? I know this is awkward on Windows. Turning off one of the adapters is not an option, since I am always connected through one of them. There must be some command line option to prefer one adapter over the other. Thanks
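
    A hedged sketch, assuming the adapter of interest owns 192.168.1.10 (a placeholder) and a Vista/Server 2008 or later ping: ping can pin the source address directly, while telnet generally has to be steered with a host route through the desired gateway:

        :: Ping out from the interface that owns 192.168.1.10
        ping -S 192.168.1.10 8.8.8.8
        :: Force traffic to a specific host (here 203.0.113.5) through a chosen gateway
        route add 203.0.113.5 mask 255.255.255.255 192.168.1.1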

    Read the article

  • How to scale out image hosting/serving?

    - by Continuation
    I asked this question on stackoverflow and it was suggested that I try it here: I'm building a website where users can upload photos and I'd also convert uploaded photos into thumbnails. Planning ahead, if the website gets popular, how do I scale it out so that the images (both original and thumbnails) will be stored in and served from multiple servers? Maybe a cluster? Is there any open source software that would help me in this? Thanks.

    Read the article

  • Looking for a good SNMP browser to run under Windows

    - by Littlejon
    I used to use Getif for poking around inside SNMP results from servers and devices; however, it no longer works with Windows 7 64-bit. I am hoping for an open source bit of software that will allow me to add MIBs as required, browse the MIB tree, and send a request/walk off to a server to get results. What do you all use?
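
    As a stopgap while evaluating GUI browsers, a hedged sketch using Net-SNMP's command-line tools (community string, host, and MIB path are placeholders):

        # Walk the standard system subtree on a device
        snmpwalk -v 2c -c public 192.168.1.1 system
        # Load vendor MIBs from a local directory before walking a vendor table
        snmpwalk -v 2c -c public -M +/path/to/mibs -m ALL 192.168.1.1 ifTable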

    Read the article

  • Load balancing with one server and IIS7(.5)

    - by Lieven Cardoen
    Is it possible to configure load balancing on one server with IIS7? What I would like is to have three sites in IIS7, where one site forwards requests to the other two (load balanced). The problem is that a customer of ours uses load balancing (with virtual servers), while we do not yet have a virtual environment and have only one build server. (Maybe using the Application Request Routing module?)
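
    A hedged sketch of what an ARR server farm might look like in applicationHost.config, assuming the two backend sites are bound to local ports 8081 and 8082 (both placeholders, as is the dual-loopback-address trick to keep the server entries unique); a URL Rewrite rule on the front site would then route requests to the farm:

        <webFarms>
            <webFarm name="localFarm" enabled="true">
                <server address="127.0.0.1" enabled="true">
                    <applicationRequestRouting httpPort="8081" />
                </server>
                <server address="localhost" enabled="true">
                    <applicationRequestRouting httpPort="8082" />
                </server>
            </webFarm>
        </webFarms>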

    Read the article

  • Load Balancer sftp persistence when a server goes offline

    - by Cobra Kai Dojo
    Let's say we have the following scenario: we have two identical *nix servers using a shared filesystem, and we connect through SFTP (not FTPS) to one of them to upload a file. The server goes offline and we get redirected to the second system, which is still available. My question is: would there be any connection persistence, or will the user have to log in again? I guess a relogin would be needed, because the SSH sessions are not shared between the two systems... Thanks in advance :)

    Read the article

  • DMZ and LAN on the same Windows Storage Server 2008 R2

    - by Sergei
    We are moving from an EMC Celerra NS20 to Windows Storage Server 2008 R2. I would like to use the deduplication feature in WSS (Single Instance Storage) for hosting data for our external FTP server. The idea is to use the NFS server on WSS as a datastore for a Linux FTP server located in the DMZ, and CIFS services for servers on the LAN. Using the Celerra fileserver I was able to create multiple instances of fileservers with multiple virtual interfaces and separate filesystems, so all data and networks were separated. Would it be possible to do something similar on WSS?

    Read the article

  • Dell PE R710 Lifecycle Controller

    - by rihatum
    Hi All, How would one use the Dell Lifecycle Controller on the new R710 servers? I have seen demos on YouTube where it installs Linux or Windows without inserting the installation discs. I am on the same LAN as the server and have set up iDRAC Express (we don't have DRAC Enterprise). Any suggestions? Thanks

    Read the article

  • Unable to start SQL service when TCP/IP is enabled under SSCM - SQL Server Network Configuration

    - by ebel
    I get error 10048, and this in the event history: "The SQL Server service terminated with a service-specific error. Only one usage of each socket address (protocol/network address/port) is normally permitted." Any idea how to fix this? The port is set to the default 1433. If TCP/IP is turned off, which is the default of course, the SQL service starts like a champ. I have done this configuration many times on other servers with no problem.
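
    Error 10048 means something else is already bound to the port. A hedged way to find it, assuming the default port 1433 (the PID below is a placeholder taken from the netstat output):

        :: Show which process is already listening on TCP 1433
        netstat -ano | findstr :1433
        :: Map the PID from the last column of the netstat output to a process name
        tasklist /FI "PID eq 1234"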

    Read the article

  • Is there a PowerShell way to re-apply a restored password for the IIS IUSR account?

    - by Philippe Monnet
    On one of our IIS web servers the IUSR account password suddenly expired or got corrupted. I recovered the password from the IIS metabase (using Cscript adsutil.vbs get w3svc\anonymoususerpass after switching IsSecureProperty = False), then reset the password accordingly. Now I have to re-key that password on the Directory Security tab (for the anonymous account) of all virtual directories of all web sites on that server. Is there a way to automate this using PowerShell? (I have searched so far in vain.)
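
    A hedged PowerShell sketch along the lines of the adsutil/ADSI approach already in use above; the site enumeration and property write are assumptions rather than a tested recipe, and $newPassword is a placeholder:

        # Placeholder for the password recovered from the metabase
        $newPassword = "recovered-password"
        $w3svc = [ADSI]"IIS://localhost/W3SVC"
        foreach ($site in $w3svc.psbase.Children) {
            # Only touch web sites, not other metabase nodes
            if ($site.psbase.SchemaClassName -ne "IIsWebServer") { continue }
            $root = [ADSI]("IIS://localhost/W3SVC/" + $site.psbase.Name + "/Root")
            # Re-apply the anonymous user password and persist it
            $root.Put("AnonymousUserPass", $newPassword)
            $root.SetInfo()
        }

    Virtual directories below each site root that override the anonymous account would presumably need the same treatment recursively.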

    Read the article

  • PDF Corruption When Sending with Microsoft Products

    - by Winner
    I have the same PDF corruption problem in two different offices that I provide tech support for.

    Office 1: Started in the middle of December. A PDF received from outside the office is viewable with no problems; I have no control over how it is created. If it is forwarded to anyone else, the PDF arrives corrupted. I have forwarded it to multiple people in the office and tried viewing with Reader 8, Reader 9, Sumatra, and Foxit. I have tried forwarding it to Gmail, and its viewer says the file is corrupted. If I save the PDF and attach it to a new email, it is corrupted when sent using Outlook 2003, Outlook 2007, Microsoft Live Mail, or Outlook Express. If I create the email in Thunderbird 3, Gmail, or IClient, the web client for Ipswitch IMail, it is not corrupted. I have confirmed the same results using our IMail SMTP server and using Gmail as the SMTP server. To be clear: if the message is created in Thunderbird, Gmail, or IClient and received in any of the MS products, the PDF is viewable. This office receives PDFs daily from multiple sources, and only a small subset have this problem. So far the problem PDFs come from two different companies they deal with, but not all of the PDFs from those companies are bad.

    Office 2: PDFs are created by a management system; I'm not sure what engine it uses to create them. Exact same issues.

    At both offices I noticed that the file size is wrong: one small PDF is 12 KB when viewable, but only 8 KB when it shows up corrupted. We handle the email for both offices; both are POP servers, not Exchange. IMail was updated after these issues started. I have tried different SMTP servers, and it still seems to happen only when using Microsoft products to send. Anyone else having problems with PDFs getting corrupted? Any ideas how to find a resolution?

    Read the article

  • How does the Amazon ELB health check work?

    - by diegodias
    I am having problems configuring ELB for my servers. I start 2 micro instances with the exact same configuration and try to load balance across them, but they never pass the health check (HTTP, port 80, path "/"). Ping to the website is OK, and so is telnet on port 80. How does the health check work? Am I doing anything really wrong? EDIT: Both direct browser access and GET (via curl) work correctly (status 200).
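
    For reference, the classic ELB health check opens a TCP connection to the instance and issues an HTTP GET against the configured path, counting the target healthy only if it answers 200 within the timeout. A hedged way to emulate that from another host that the instance's security group allows (the private IP is a placeholder):

        # Emulate the ELB probe: fetch / and print only the HTTP status code
        curl -s -o /dev/null -w "%{http_code}\n" --max-time 5 http://10.0.0.12/

    If this returns 200 from a peer host but the ELB still marks the instance unhealthy, the usual suspect is a security group that admits your test host but not the load balancer.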

    Read the article

  • CommunicationException when shutting down JBoss 4.2.2

    - by Brian
    I have deployed an application using JBoss 4.2.2 on a 64-bit RHEL5 server. Since there are other JBoss servers, I had to change some port configurations so that there would be no conflicts when starting the server. So right now I'm using ports-01 from the sample-bindings.xml file that came in the docs/examples/binding-manager/samples directory. In addition, below is a list of all the files I've edited to reflect the new ports:

        JBOSS_HOME/server/default/deploy/jboss-web.deployer/server.xml:
            Changed Connector port 8080 to 8180
            Changed AJP 1.3 Connector port 8009 to 8109
        JBOSS_HOME/server/default/deploy/jbossws.beans/META-INF/jboss-beans.xml:
            Changed 8080 to 8180
        JBOSS_HOME/server/default/conf/jboss-service.xml:
            Changed 8083 to 8183, 1099 to 1299, 1098 to 1298, 4444 to 4644, 4445 to 4645, 4446 to 4646, 4447 to 4647
        JBOSS_HOME/server/default/conf/jboss-minimal.xml:
            Changed 1099 to 1299, 1098 to 1298

    When I start the server (binding to localhost) everything is fine and I'm able to access the application. But when I try to shut down the server I get the following error:

        Exception in thread "main" javax.naming.CommunicationException: Could not obtain connection to any of these urls: localhost [Root exception is javax.naming.CommunicationException: Failed to connect to server localhost:1099 [Root exception is javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused]]]
            at org.jnp.interfaces.NamingContext.checkRef(NamingContext.java:1562)
            at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:634)
            at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:627)
            at javax.naming.InitialContext.lookup(InitialContext.java:392)
            at org.jboss.Shutdown.main(Shutdown.java:214)
        Caused by: javax.naming.CommunicationException: Failed to connect to server localhost:1099 [Root exception is javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused]]
            at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:274)
            at org.jnp.interfaces.NamingContext.checkRef(NamingContext.java:1533)
            ... 4 more
        Caused by: javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused]
            at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:248)
            ... 5 more
        Caused by: java.net.ConnectException: Connection refused
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
            at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
            at java.net.Socket.connect(Socket.java:525)
            at java.net.Socket.connect(Socket.java:475)
            at java.net.Socket.<init>(Socket.java:372)
            at java.net.Socket.<init>(Socket.java:273)
            at org.jnp.interfaces.TimedSocketFactory.createSocket(TimedSocketFactory.java:84)
            at org.jnp.interfaces.TimedSocketFactory.createSocket(TimedSocketFactory.java:77)
            at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:244)
            ... 5 more

    Is there any other file where I need to change 1099 to 1299, or am I missing some other step?
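
    One hedged detail worth checking first: the shutdown client defaults to jnp://localhost:1099, which this instance no longer listens on after the rebind to 1299. A sketch of pointing it at the new port:

        # Tell the shutdown client to contact the rebound JNP port instead of 1099
        ./shutdown.sh -S -s jnp://localhost:1299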

    Read the article

  • Checking the configuration of two systems to determine changes

    - by None
    We are standing up a replicant data center at work and need to ensure that the new data center is configured (nearly) identically to the original. The new data center will be differently addressed and named than the original and will have differing user accounts, but all the COTS, patches, and configurations should be the same.

    We would normally ghost the original servers and install those images onto the new machines; however, we have a few problematic pieces of COTS that require we install them outside of an image, due to how they capture the setup of the network during their installation and maintain it within their configuration information (in some cases storing it in various databases). We have tried multiple times, and this piece of COTS cannot be captured within a ghost image unless the destination machine will have an identical network setup (all the same IPs, hostnames, user accounts, etc. across the entire network) as the original. In truth, it is the setup of these special COTS that I want to audit the most, because they are difficult to install and configure in the first place.

    In light of the fact that we can't simply ghost, I'm trying to find a reasonable manner to audit the new data center and check to see if it is set up like the original (some sort of system-wide configuration audit or integrity check). I'm considering using something like Tripwire for Servers to capture the configuration on the source machines and then run an audit on the destination machines. I understand that it will still show some differences due to the minor config changes, but I'm hoping that it will eliminate the majority of the work. Here are some of the constraints I'm working under:

        - The data center is comprised of multiple Windows and Linux machines of differing versions (about 20 total)
        - I absolutely cannot ghost or snap any other type of image of these machines ... at least not in their final configuration
        - I want to audit the final configuration to ensure all of the COTS, patches, configurations, etc. are installed and set up properly (as compared to the original data center)
        - I would rather not install any additional tools on these machines ... I'd much rather run it from a standalone machine or off a DVD
        - Price of tools is important but not an impossible burden; however, getting a solution soon is important (I can't take the time to roll my own tools to do this)
        - For the COTS that stores the network information, I don't know all of the places it stores the network information ... so it would be unlikely I could find a way in the near future to adjust its setup after the installation has occurred

    Anyone have any thoughts or alternate approaches? Can anyone recommend tools that would be usable for system-wide configuration audits?

    Read the article

  • 4 Magento requests per second = 210 Mbit memcache bandwidth?

    - by Karsten
    After searching Server Fault for similar questions without success, these are my numbers for one Magento instance running on multiple servers: after Varnish, about 4 requests per second hit the webservers. The Magento cache is configured to use one separate memcache server, where I'm measuring about 210 Mbit/s of bandwidth usage. Compared to other projects, Magento and non-Magento alike, this number seems way off (as in extremely high). I'd like to get some data to compare against, or even better, any idea what exactly causes this, how to track it down, and how to improve the situation.

    Read the article

  • Header precedence: Apache vs. PHP, specific to Cache-Control & Expires

    - by David
    My company's production dynamic web servers (Apache + PHP 5.1.x) use the Apache Expires module, and http.conf contains the following clause:

        <FilesMatch ".+">
            ExpiresActive On
            ExpiresDefault "A0"
        </FilesMatch>

    If I were to set "Cache-Control" and "Expires" inside a PHP script, would they get eaten by this clause? Normally I would test this myself, but I'm having trouble convincing the Expires module to function on my workstation, and the company admins are down at the data center.
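
    For what it's worth, the mod_expires documentation says it does not replace an Expires header that the response already carries (for example one set by a script), but that is cheap to verify once a test box cooperates. A hedged sketch of the check (the URLs are placeholders, the second one a script that sets its own headers):

        # Compare caching headers on a static file vs. a PHP script that sets its own
        curl -sI http://example.com/static.png | grep -iE '^(cache-control|expires)'
        curl -sI http://example.com/sets-headers.php | grep -iE '^(cache-control|expires)'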

    Read the article

  • Varnish 503 Guru Meditation errors with pfSense and healthy Apache

    - by Fammy
    We are running a pfSense firewall / load balancer with Varnish as a service, in front of Fedora Linux webservers running Apache. We are getting intermittent 503 Guru Meditation errors, and we are a bit stuck scratching our heads because it is not easily repeatable. The timeouts are set to 30 s (connect and first byte), yet the 503 page shows instantly, not after 30 s. If you refresh immediately it may very well work instantly, and sometimes for 100 refreshes. The load average on the web servers is < 1 and on the DB server < 3; all servers (web, DB, pfSense/Varnish) are physical rather than VMs. I would have thought that if the timeouts were being hit, the 503 page would only appear after 30 s; am I mistaken? Also, when an error happens there does not appear to be any corresponding error in Apache's log files. This affects pages as well as images, so a page can load fine with 9 of its 10 images working and 1 failing. An example of the Varnish debug output is below. It says no backend connection, but I can't figure out why; if the load were high on Apache I could understand it being flaky. The machines are on the same gigabit Ethernet LAN.

        21 ReqStart     c *IP-REMOVED* 33418 1274368062
        21 RxRequest    c GET
        21 RxURL        c /fashion/
        21 RxProtocol   c HTTP/1.1
        21 RxHeader     c User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.5) Gecko/2008121622 Fedora/3.0.5-1.fc10 Firefox/3.0.5
        21 RxHeader     c Host: *ourdomain.com*
        21 RxHeader     c Accept: */*
        21 RxHeader     c Accept-Encoding: deflate, gzip
        21 VCL_call     c recv lookup
        21 VCL_call     c hash
        21 Hash         c /fashion/
        21 Hash         c *ourdomain.com*
        21 VCL_return   c hash
        21 VCL_call     c miss fetch
        21 FetchError   c no backend connection
        21 VCL_call     c error restart
        (the recv/hash/miss/FetchError sequence above repeats twice more, ending the third time with:)
        21 VCL_call     c error deliver
        21 VCL_call     c deliver deliver
        21 TxProtocol   c HTTP/1.1
        21 TxStatus     c 503
        21 TxResponse   c Service Unavailable
        21 TxHeader     c Server: Varnish
        21 TxHeader     c Content-Type: text/html; charset=utf-8
        21 TxHeader     c Content-Length: 384
        21 TxHeader     c Accept-Ranges: bytes
        21 TxHeader     c Date: Wed, 11 Apr 2012 10:36:17 GMT
        21 TxHeader     c X-Varnish: 1274368062
        21 TxHeader     c Age: 0
        21 TxHeader     c Via: 1.1 varnish
        21 TxHeader     c Connection: close
        21 TxHeader     c X-Cache: MISS
        21 Length       c 384
        21 ReqEnd       c 1274368062 1334140577.449995041 1334140577.450334787 1.794108152 0.000282764 0.000056982
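
    "no backend connection" means Varnish believed the backend was unusable at fetch time (a failing health probe or a saturated backend), which would also explain the instant 503 rather than a 30 s timeout. A hedged Varnish 3 VCL sketch of a backend with an explicit probe, so the log shows exactly when the backend flaps; host and port are placeholders:

        backend web1 {
            .host = "192.168.0.10";
            .port = "80";
            .connect_timeout = 30s;
            .first_byte_timeout = 30s;
            # Explicit health probe: 3 of the last 5 checks must answer within 2s
            .probe = {
                .url = "/";
                .interval = 5s;
                .timeout = 2s;
                .window = 5;
                .threshold = 3;
            }
        }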

    Read the article

  • WINDOWS - Deleting Temporary Internet Files through Group Policy

    - by Muhammad Ali
    I have a domain controller running Windows Server 2008 R2, and users log in to application servers running Windows Server 2003 SP2. I have applied a Group Policy to clean temporary internet files on exit, i.e. to delete all temporary internet files when users close the browser. But the Group Policy doesn't seem to work: user profile sizes keep increasing, and the majority of the space is occupied by temporary internet files, driving up disk usage. How can I enforce automatic deletion of temporary internet files?
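
    If the GPO keeps misfiring, one hedged fallback is a logoff script invoking the cleanup entry point Internet Explorer itself exposes (assuming IE7 or later on the application servers); the flag value 8 targets only Temporary Internet Files:

        :: Delete the user's Temporary Internet Files at logoff
        RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 8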

    Read the article

  • ISC DHCP - Force clients to get a new IP address, instead of being re-issued their previous lease's IP

    - by kce
    We are in the middle of a migration of our DHCP and DNS services from a Debian-based server to a Windows Server 2008 R2 implementation. The Debian server is running isc-dhcpd-V3.1.1. All of the workstations are configured to have fixed-addresses between .3 and .40 (the motivation behind that choice is mostly management/political, much like here). DHCP leases are given out in the range of .100 to .175. Statically configured servers live in the .200 block and above (which is mostly empty).

    When we move to the Windows platform, management/political considerations require me to move the IP ranges around again. We would like to keep .1 - .10 reserved for network appliances, switches, and other infrastructure; .200 will remain designated for servers. The addressing space in between should be available to clients, with IPs dynamically allocated (Edit: instead of automatic as originally mentioned) by the server. My address pool on the Windows Server looks like this:

        192.168.0.1 - 192.168.0.254 (address range for distribution)
        192.168.0.1 - 192.168.0.10 (IP addresses excluded from distribution)
        192.168.0.200 - 192.168.0.254 (IP addresses excluded from distribution)

    Currently, we have all of our clients still on the .3 - .40 range, and a few machines still active in the .100 - .175 range (although there are lots of powered-off devices that still hold expired leases with IPs from that range). Since the lease "database" isn't shared between the old and new DHCP server, how can I prevent clients from receiving a lease with an IP address that is currently being held by a client with a non-expired lease from the old DHCP server? If I just expand the range on the Debian DHCP server to 192.168.0.10 - 192.168.0.199, is there a way to force clients not to re-use their old IP address when they send their DHCPDISCOVER? Can I make the Windows DHCP server be authoritative like the ISC implementation?

    The dhcpd.conf from the Debian server:

        ddns-update-style none;
        authoritative;
        default-lease-time 43200; # 12 hours
        max-lease-time 86400;     # 24 hours

        subnet 192.168.0.0 netmask 255.255.255.0 {
            option routers 192.168.0.1;
            option subnet-mask 255.255.255.0;
            option broadcast-address 192.168.0.255;
            range 192.168.0.100 192.168.0.175;
        }

        host workstation-1 {
            hardware ethernet 00:11:22:33:44:55;
            fixed-address 192.168.0.3;
        }

        ... and so on until 192.168.0.40
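
    On the conflict question, one hedged mitigation on the Windows side is server-side conflict detection, which pings a candidate address before offering it and so catches leases still live on the Debian box. A sketch via netsh, assuming the scope ID is 192.168.0.0 (the exact keyword is an assumption from memory, so verify against netsh dhcp help):

        :: Ping each candidate address up to 2 times before offering it
        netsh dhcp server set detectconflictretry 2
        :: Keep the infrastructure and server blocks out of the dynamic pool
        netsh dhcp server scope 192.168.0.0 add excluderange 192.168.0.1 192.168.0.10
        netsh dhcp server scope 192.168.0.0 add excluderange 192.168.0.200 192.168.0.254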

    Read the article
