Search Results

Search found 99017 results on 3961 pages for 'server side events'.


  • Apache2/Shibboleth TCP connections stuck in CLOSE_WAIT

    - by RJT
    I run an Apache2 server that uses the Shibboleth daemon (shibd) as its federated authentication module. Certain connections made through Shibboleth seem to stick permanently in the CLOSE_WAIT state:

        tcp 38 0 blah.blah:57346 shib.server.:8443 CLOSE_WAIT
        tcp 38 0 blah.blah:45601 shib.server2:8443 CLOSE_WAIT
        tcp 38 0 blah.blah:41737 shib.server3:5057 CLOSE_WAIT

    From what I can find out, CLOSE_WAIT means that when the remote server disconnects, the local application is failing to close the connection as it should. I suspect shibd is responsible somehow. Needless to say, if enough CLOSE_WAIT connections accumulate, I have a problem. Trying to get rid of them by simply running /etc/init.d/networking restart does not work. In fact, networking seems to refuse to shut down and restart, and I get a "SIOCADDRT: File exists" error (i.e. networking is trying to start without having stopped first). The same happens with ifup -a. So I have two questions, one of which may be easy and one harder: what is a good way to force networking to restart and clear whatever connections are stuck in CLOSE_WAIT? And does anyone have ideas on how to fix Shibboleth and make shibd behave?
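
    A CLOSE_WAIT socket belongs to the process that opened it, so restarting the owning daemon clears it without touching the network stack at all. A minimal diagnostic sketch, assuming Debian-style init scripts (the exact service names may differ on your system):

        # list CLOSE_WAIT sockets together with the owning process
        ss -tanp state close-wait
        # or, equivalently:
        lsof -nP -i TCP -s TCP:CLOSE_WAIT

        # if shibd (or Apache) owns them, restarting that daemon releases the sockets
        /etc/init.d/shibd restart
        apache2ctl graceful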

  • chef clients behind firewall

    - by tec
    I am currently learning about Chef. What I have understood so far: I have to install chef-server on my own server or use Hosted Chef; I have to install chef-client on the servers I want to manage, aka nodes (manually or using knife bootstrap); and I have installed several Chef tools on my own PC that I can use to manage the nodes, e.g. knife. Now, in my case the special part is that the nodes sit behind a firewall/load balancer/proxy. The nodes can reach servers on the outside via NAT (HTTP works, and I can configure the Chef-specific hosts to work as well), but they can only be contacted from the outside via an SSH tunnel. There is a lot of Chef documentation available, but I did not find an answer to these questions:

    - When using knife, is it enough if I set up an SSH tunnel manually on my own PC, or does the Chef server need to contact the nodes?
    - When using knife, can I configure it to set up an SSH tunnel automatically?
    - When using the Chef server web UI, can I configure it to connect to the nodes via an SSH tunnel, or do I need a setup where I maintain the tunnel myself, e.g. using monit? Is this even possible with Hosted Chef?
    - Instead of using knife or the web UI: can I issue the same management commands directly on the nodes using chef-client?
    - Which solution would you recommend?

    Thanks a lot for taking the time to help by answering one or more of these related questions.
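
    One point that helps here: chef-client always initiates the connection to the Chef server (outbound HTTPS), so the server never needs to reach the nodes; only knife's SSH-based subcommands do, and running chef-client directly on a node also works for ad-hoc converges. A hedged sketch of working through a bastion host; the hostnames, user names and the --ssh-gateway/--ssh-port options are assumptions to adapt to your knife version:

        # open a tunnel from your PC to a node via the bastion, then bootstrap through it
        ssh -f -N -L 2222:node1.internal:22 user@bastion.example.com
        knife bootstrap localhost --ssh-port 2222 -x deployuser --sudo -N node1

        # newer knife versions can hop through the bastion directly
        knife ssh 'name:*' 'sudo chef-client' --ssh-gateway user@bastion.example.com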

  • Short POST data in HTTP

    - by Matt
    We're hosting a customer's Debian Linux web server running a PHP-based web application. The server sits behind our firewall with its own virtual interface, and port 80 is forwarded internally to a machine in the DMZ. The issue is that data POSTed to the server seems to be getting cut short for some users. It is reproducible for some users on the same box, yet the same user sending the same data from another PC on the same LAN works fine. I'm told the data gets cut to around 1140 bytes. Any idea why this might be happening? The customer is blaming our firewall, but then surely we'd have issues with other services. I suspect it's a problem with the website itself. Suggestions on how to isolate the problem would be a help. Our firewall is an Astaro. EDIT: The customer temporarily set the Ethernet frame size on the server to 500 bytes, and that made it work for now! I know some of the customers are on an internet provider that runs PPPoE.
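
    The 1140-byte cutoff plus the PPPoE detail points at a path-MTU/fragmentation problem (PPPoE lowers the MTU to 1492, and something on the path may be dropping the ICMP "fragmentation needed" messages). A hedged way to test and work around it, assuming there is a Linux gateway in the path where you can run these (the hostname is a placeholder):

        # probe the largest unfragmented payload toward an affected client (1472 = 1500 - 28)
        ping -M do -s 1472 client.example.net
        ping -M do -s 1464 client.example.net   # 1492 - 28, the usual PPPoE limit

        # clamp the TCP MSS to the discovered path MTU on the forwarding gateway
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu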

  • saslauthd authentication error

    - by James
    My server has developed an unexpected problem where I am unable to connect from a mail client. I've looked at the server logs, and the only things that look like they identify a problem are events like the following:

        Nov 23 18:32:43 hig3 dovecot: imap-login: Login: user=, method=PLAIN, rip=xxxxxxxx, lip=xxxxxxx, TLS
        Nov 23 18:32:55 hig3 postfix/smtpd[11653]: connect from xxxxxxx.co.uk[xxxxxxx]
        Nov 23 18:32:55 hig3 postfix/smtpd[11653]: warning: SASL authentication failure: cannot connect to saslauthd server: No such file or directory
        Nov 23 18:32:55 hig3 postfix/smtpd[11653]: warning: xxxxxxx.co.uk[xxxxxxxx]: SASL LOGIN authentication failed: generic failure
        Nov 23 18:32:56 hig3 postfix/smtpd[11653]: lost connection after AUTH from xxxxxxx.co.uk[xxxxxxx]
        Nov 23 18:32:56 hig3 postfix/smtpd[11653]: disconnect from xxxxxxx.co.uk[xxxxxxx]

    The problem is unusual, because just half an hour earlier, at my office, I was not being prompted for a correct username and password in my mail client. I haven't made any changes to the server, so I can't understand what would have happened to make this error occur. Searches for the error messages yield various results, with 'fixes' that I'm uncertain of (I obviously don't want to make it worse or fix something that isn't broken). When I run

        testsaslauthd -u xxxxx -p xxxxxx

    I also get the following result:

        connect() : No such file or directory

    But when I run

        testsaslauthd -u xxxxx -p xxxxxx -f /var/spool/postfix/var/run/saslauthd/mux -s smtp

    I get:

        0: OK "Success."

    I found those commands on another forum and am not entirely sure what they mean, but I'm hoping they might give an indication of where the problem lies. If it makes any difference, I'm running Ubuntu 10.04.1, Postfix 2.7.0 and Webmin/Virtualmin.
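
    Those two test results say the saslauthd socket exists at /var/spool/postfix/var/run/saslauthd/mux (inside Postfix's chroot) while the default client path does not, so Postfix's smtpd and saslauthd are probably no longer agreeing on where the socket lives. A sketch of the usual Ubuntu arrangement, assuming Debian/Ubuntu file locations and the standard init scripts:

        # make saslauthd create its socket inside the Postfix chroot
        # (edit this line in /etc/default/saslauthd)
        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"

        # let Postfix read the socket directory, then restart both services
        adduser postfix sasl
        dpkg-statoverride --force --update --add root sasl 710 /var/spool/postfix/var/run/saslauthd
        /etc/init.d/saslauthd restart
        /etc/init.d/postfix restart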

  • Mounting 2.5" SSD in Antec Atlas 550's 3.5" bay

    - by cecilkorik
    I just got some new SSDs to add to my servers; unfortunately, there doesn't appear to be anywhere to actually mount them. The cases are Antec Atlas 550 mid-tower server cases. The problem is that the 3.5" bays are intended for 3.5" hard drives (obviously), not SSDs, so drives are mounted from the bottom with rubber grommets to reduce vibration. There are NO side holes in any of the 3.5" bays; all mounting has to be done from the bottom of the drive, through the thick rubber grommets, which requires special extra-long screws with large heads. Those screws will not work with the SSD: the thread is too coarse, and 2.5" drives apparently use M3 screws with a finer pitch. The case's 3.5" bays have holes aligned for 2.5" drives, and by moving the grommets into the appropriate holes they line up with the SSD. But that's as far as I can get. I don't have, and cannot find, any screws with the finer M3 pitch that are long enough and have the wide head. I can't remove the grommets because the screw holes are much too wide without them; the entire head of the screw fits through. And again, there are no side holes, so I can't use the mounting bracket that came with the drive, nor any other 2.5"-to-3.5" bracket I've found. Here is a pic of what I am dealing with (note that the SSD is just resting on the case; it's not currently screwed in, if that's not clear). Any ideas for safely mounting these little guys without resorting to duct tape or superglue would be very appreciated. If anyone knows where I could buy the appropriate type of M3-threaded screw, or has any other ideas, please help me out here.

  • Mac Management and Security

    - by Bart Silverstrim
    I was going through some literature on managing OS X laptops and asked someone some questions about usage scenarios for the MacBooks. I asked someone more knowledgeable than I am whether it was possible for my Mac to be taken over while visiting another site for a conference, or while on the Wi-Fi network at a local coffee house, by policies from an OS X Server running Workgroup Manager (either one legitimately belonging to the site, or a copy of OS X Server someone is running on hardware hidden somewhere on the network). Apparently such a server can be set up to do things like limit my access to the Finder or impose other neat whiz-bang management features. He said that it is indeed possible: the management server would be assigned via the DHCP server, the OS X Server would treat my Mac as a guest and could hand out restrictions, and apparently my Mac would happily accept them without notifying me or giving me an option, unlike Windows, which I believe needs to be joined to a domain before it becomes "managed" by Active Directory. So my question, for network admins and sysadmins with users travelling with MacBooks, is: is there a way to reasonably protect your users from having their machines hijacked, without resorting to turning off networking all the time? Or isn't this much of a security hazard? What threat does this pose to the road warriors in your businesses?

  • How to configure keepalived on Amazon EC2?

    - by oeegee
    I read an article, "Keepalived over GRE tunnel for failover on VPS environment" (http://blog.killtheradio.net/how-tos/keepalived-haproxy-and-failover-on-the-cloud-or-any-vps-without-multicast/), but I don't know how to do the configuration, or what to call this architecture. All I know is how to set up a master/backup configuration in keepalived. What I want to know is how keepalived works here, and in particular how to use keepalived with the unicast patch module, since ELB is expensive. I want to design this:

        XMPP Server (EC2)
                 |
        -------------------------------------------------
        keepalived Master (EC2)  -  keepalived Backup (EC2)
        HAProxy #1                  HAProxy #2
        -------------------------------------------------
                 |
        Casandra#1  Casandra#2  Casandra#3  Casandra#4

    This was my first overall design:

        [Flow] ELB -- XMPP Server -- ELB -- Casandra

        ELB
         |
        XMPP#1  XMPP#2  XMPP#3  XMPP#4
         |
        ELB
         |
        Casandra#1  Casandra#2  Casandra#3  Casandra#4

    Then I changed the first design:

        [Flow] ELB -- XMPP Server -- HAProxy Master (Casandra farm) -- Casandra

        ELB
         |
        XMPP#1  XMPP#2  XMPP#3  XMPP#4
         |
        -------------------------------------------------
        keepalived Master (EC2)  -  keepalived Backup (EC2)
        HAProxy#1                   HAProxy#2
        -------------------------------------------------
         |
        Casandra#1  Casandra#2  Casandra#3  Casandra#4

    And this is the second alternative. Is it OK?

        [Flow] ELB -- HAProxy (XMPP farm) -- XMPP Server -- HAProxy (Casandra farm) -- Casandra

        ELB
         |
        HAProxy#1   HAProxy#2   HAProxy#3   HAProxy#4
        XMPP#1      XMPP#2      XMPP#3      XMPP#4
         |
        Casandra#1  Casandra#2  Casandra#3  Casandra#4

    Thanks!
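
    For reference, keepalived with unicast VRRP support (what the unicast patch in that article provides, and what later keepalived releases ship natively) lets the master and backup talk to each other directly, since EC2 does not carry multicast. A minimal sketch of the master side, with placeholder private IPs; note that on EC2 a VRRP virtual IP is not routed by itself, so the notify script would also have to reassign an Elastic IP or secondary private IP (the script name here is hypothetical):

        # /etc/keepalived/keepalived.conf on the master
        # (the backup mirrors this with state BACKUP and a lower priority)
        vrrp_instance HAPROXY_VIP {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 101
            # this instance's private IP (placeholder)
            unicast_src_ip 10.0.1.10
            unicast_peer {
                # the backup instance's private IP (placeholder)
                10.0.1.11
            }
            # hypothetical script that moves the Elastic IP / secondary private IP on failover
            notify_master "/usr/local/bin/promote.sh"
        }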

  • Gluster bricks are offline and errors in logs

    - by Roman Newaza
    I have substituted all the IP addresses with hostnames and renamed the configs (IP to hostname) in /var/lib/glusterd using a shell script. After that I restarted the Gluster daemon and the volume, then checked that all the peers are connected:

        root@GlusterNode1a:~# gluster peer status
        Number of Peers: 3

        Hostname: gluster-1b
        Uuid: 47f469e2-907a-4518-b6a4-f44878761fd2
        State: Peer in Cluster (Connected)

        Hostname: gluster-2b
        Uuid: dc3a3ff7-9e30-44ac-9d15-00f9dab4d8b9
        State: Peer in Cluster (Connected)

        Hostname: gluster-2a
        Uuid: 72405811-15a0-456b-86bb-1589058ff89b
        State: Peer in Cluster (Connected)

    I can see the mounted volume's size change on all the nodes when I run df, so new data is coming in. But recently I noticed error messages in the application log:

        copy(/storage/152627/dat): failed to open stream: Structure needs cleaning
        readfile(/storage/1438227/dat): failed to open stream: Input/output error
        unlink(/storage/189457/23/dat): No such file or directory

    Finally, I found out that some bricks are offline:

        root@GlusterNode1a:~# gluster volume status
        Status of volume: storage
        Gluster process                                 Port    Online  Pid
        ------------------------------------------------------------------------------
        Brick gluster-1a:/storage/1a                    24009   Y       1326
        Brick gluster-1b:/storage/1b                    24009   N       N/A
        Brick gluster-2a:/storage/2a                    24009   N       N/A
        Brick gluster-2b:/storage/2b                    24009   N       N/A
        Brick gluster-1a:/storage/3a                    24011   Y       1332
        Brick gluster-1b:/storage/3b                    24011   N       N/A
        Brick gluster-2a:/storage/4a                    24011   N       N/A
        Brick gluster-2b:/storage/4b                    24011   N       N/A
        NFS Server on localhost                         38467   Y       24670
        Self-heal Daemon on localhost                   N/A     Y       24676
        NFS Server on gluster-2b                        38467   Y       4339
        Self-heal Daemon on gluster-2b                  N/A     Y       4345
        NFS Server on gluster-2a                        38467   Y       1392
        Self-heal Daemon on gluster-2a                  N/A     Y       1402
        NFS Server on gluster-1b                        38467   Y       2435
        Self-heal Daemon on gluster-1b                  N/A     Y       2441

    What can I do about this? I need to fix it. Note: CPU and network usage on all four nodes is about the same.
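
    A brick shown as "Online: N" means its glusterfsd process is not running on that node; writes still land on the bricks that are up, which matches the df change and the I/O errors. A hedged recovery sketch, using the volume name from the output above (the brick log file name is an assumption; check /var/log/glusterfs/bricks/ for the real one):

        # ask glusterd to (re)start any brick processes that are down, without touching live ones
        gluster volume start storage force

        # confirm the bricks came back, and read the brick log on any node that stays down
        gluster volume status storage
        less /var/log/glusterfs/bricks/storage-1b.log

        # once all bricks are up, trigger self-heal to reconcile the replicas
        gluster volume heal storage full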

  • SSH hangs without password prompt

    - by Wilco
    I just reinstalled OS X, and for some reason I now cannot connect to a specific machine on my local network via SSH. I can SSH to other machines on the network without any problems, and other machines can SSH to the problematic one as well. I'm not sure where to start looking for problems; can anyone point me in the right direction? Here's a dump of a connection attempt:

        OpenSSH_5.1p1, OpenSSL 0.9.7l 28 Sep 2006
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to 10.0.1.7 [10.0.1.7] port 22.
        debug1: Connection established.
        debug1: identity file /Users/nwilliams/.ssh/identity type -1
        debug1: identity file /Users/nwilliams/.ssh/id_rsa type -1
        debug1: identity file /Users/nwilliams/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_4.5
        debug1: match: OpenSSH_4.5 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-cbc hmac-md5 none
        debug1: kex: client->server aes128-cbc hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '10.0.1.7' is known and matches the RSA host key.
        debug1: Found key in /Users/nwilliams/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactive
        debug1: Next authentication method: gssapi-keyex
        debug1: No valid Key exchange context
        debug1: Next authentication method: gssapi-with-mic

    ... at this point it hangs for quite a while, and then resumes ...

        debug1: Unspecified GSS failure. Minor code may provide more information
        Server not found in Kerberos database
        debug1: Unspecified GSS failure. Minor code may provide more information
        Server not found in Kerberos database
        debug1: Unspecified GSS failure. Minor code may provide more information
        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/nwilliams/.ssh/identity
        debug1: Trying private key: /Users/nwilliams/.ssh/id_rsa
        debug1: Trying private key: /Users/nwilliams/.ssh/id_dsa
        debug1: Next authentication method: keyboard-interactive
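
    The pause happens in the gssapi-with-mic step: the client tries Kerberos against that host and waits for the lookups to fail ("Server not found in Kerberos database") before falling back to public key and keyboard-interactive auth. A hedged workaround sketch, using the host address from the dump above:

        # one-off test: skip GSSAPI authentication entirely
        ssh -o GSSAPIAuthentication=no 10.0.1.7

        # or make it permanent for this host by adding to ~/.ssh/config:
        #   Host 10.0.1.7
        #       GSSAPIAuthentication no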

  • Apache mod_rewrite and mod_vhost_alias Virtual Hosts and %1

    - by Matt Wall
    I have put the main parts of my httpd.conf below. I am using %1 to get the host field, so I can add vhosts dynamically just by creating DNS entries/folders. One problem is that I need to reference this:

        HttpStreamingLiveEventPath "D:/FMSApps/%1"
        HttpStreamingContentPath "D:/FMSApps/%1"

    In Apache, when I try, say, http://test.domain.com/hds-vod/myfile.mp4.f4m, it sees the %1 in the logs and fails. Apache gives me this:

        [error] mod_jithttp [403]: No access to D:/Content/%1/DefaultContent/eve.mp4

    What I'm looking for is for D:/Content/%1/DefaultContent/eve.mp4 to become D:/Content/test/DefaultContent/eve.mp4. Does anyone have any useful resources/hints etc. to help me? Meanwhile my Google searching continues...!

        Listen 80
        ServerName main1.rtmphost.com
        AccessFileName .htaccess
        ServerSignature On
        UseCanonicalName Off
        HostnameLookups Off
        Timeout 120
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 15
        RewriteLogLevel 0
        RewriteLog logs/rewrite.log
        DocumentRoot D:/Content
        LoadModule vhost_alias_module modules/mod_vhost_alias.so
        VirtualDocumentRoot "D:/Content/%1"
        RewriteEngine On
        <Directory />
            Options None
            AllowOverride None
            Order allow,deny
            Allow from all
            Satisfy all
        </Directory>
        <IfModule f4fhttp_module>
            <Location /vod>
                HttpStreamingEnabled true
                HttpStreamingContentPath "D:/FMSApps/%1"
                Options FollowSymLinks
            </Location>
            Redirect 301 /live/events/livepkgr/events /hds-live/livepkgr
            <Location /hds-live>
                HttpStreamingEnabled true
                HttpStreamingLiveEventPath "D:/FMSApps/%1"
                HttpStreamingContentPath "D:/FMSApps/%1"
                HttpStreamingF4MMaxAge 2
                HttpStreamingBootstrapMaxAge 2
                HttpStreamingFragMaxAge -1
                Options FollowSymLinks
            </Location>
        </IfModule>

  • PHP 5.3 on IIS gives 404 error in CGI mode

    - by reinier
    I'm slowly losing my mind here. I had PHP 5.2 working fine (ISAPI) under IIS, but for some extension I needed 5.3. So, no worries, I installed it, but it turns out the ISAPI build is no longer supplied. I followed the install tutorials for FastCGI and ended up with a 500 Internal Server Error for every PHP page served. So my current situation is: I have FastCGI removed, and in my websites I have added the PHP handler (HEAD, GET, POST) and routed it to c:\php\php-cgi.exe. Result: every PHP page I try (even ones with just text) gives a 404 Not Found error, while any HTML file I put in the same folder is served without a hitch. Who can help me, please? How hard can something like this be, right? For me, apparently, very hard. Extra information:

    - I ran the installer as suggested below and set it to use FastCGI. My fcgiext.ini file now looks like this:

        [types]
        php=c:\php\php-cgi.exe
        [c:\php\php-cgi.exe]
        exepath=c:\php\php-cgi.exe

    - From the command line, a three-line PHP file with just phpinfo(); works fine.
    - From the server, the same PHP file with just phpinfo(); results in the 500 Internal Server Error.
    - From the server, a PHP file with just text works fine.
    - Changing the document types in the IIS management console to point the PHP extension directly to c:\php\php-cgi.exe results in a 404 for every PHP file.
    - The php.ini is the php.ini-production file that came in the distribution; no edits were made.
    - Setting the IIS PHP handler directly to PHP (not via FastCGI), c:\php\php-cgi.exe, results in the following: a PHP page with only text displays fine; a page with only phpinfo(); results in a 404 Not Found.

  • Set default MySQL connect charset for PHP (in RHEL)?

    - by Martijn Heemels
    We're running a hundred or so legacy PHP websites on an older server which runs Gentoo Linux. When these sites were built, latin1 was still the common charset, both in PHP and MySQL. To make sure those older sites used latin1 by default, while still allowing newer sites to use utf8 (our current standard), we set the default connect charset in php.ini:

        mysql.connect_charset = latin1
        mysqli.connect_charset = latin1
        pdo_mysql.connect_charset = latin1

    Specific, more modern sites could override this in their bootstrapping code with:

        <?php mysql_set_charset("utf8", $dsn );

    ...and all was well. Now the server is overloaded and we're no longer with that hoster, so we're moving all these sites to a faster server at our standard hoster, which uses RHEL 5 as its OS of choice. In setting up the new server I discovered, to my surprise, that the *.connect_charset directives are a Gentoo-specific patch to PHP, and RHEL's version of PHP doesn't recognize them! So how do I make PHP connect to MySQL with the latin1 charset? I thought about setting a default in my.cnf, but I would prefer not to force every app and client to default to latin1. Our policy is to use utf8, and we'd like to restrict the exception to PHP only. Also, converting every legacy site to properly use utf8 is not doable, since many are of the "touch 'em and you break 'em" kind; we simply don't have the time to go and fix them all. How would I set a default mysql/mysqli/pdo_mysql connection charset of latin1 for PHP, while still allowing individual scripts to override this to utf8 with mysql_set_charset()?

  • How to configure amavisd-new to scan only particular senders/servers?

    - by mailq
    I'd like to know how to configure amavisd-new to scan for spam only for particular clients (IPs, CIDR ranges or hostnames) or, alternatively, particular sender email domains. I know it is possible to do this based on a recipient's mail address, but not how to do it based on the sender's mail address. It is even possible to do it based on a recipient's IP address with policy banks. But my approach is to be independent of the recipient and rely only on the sender. What I want to accomplish is to scan only mail originating from Yahoo, Google, Hotmail and the other big senders, so it is easier to configure which senders should be scanned than which ones shouldn't. I know this is easier to achieve on the MTA side, but that is not part of the question, because I already have a solution on the MTA side; I want to do it in amavisd-new. And it doesn't help to know how to put senders on a whitelist, as that still means the mail goes through all the scanning and just gets a high negative score. The mail shouldn't be scanned at all unless it is sent by the big players. So which parameters in amavisd-new are the right ones to enable scanning only for particular senders?

  • Limiting bandwidth on internal interface on Linux gateway

    - by Jack Scott
    I am responsible for a Linux-based (it runs Debian) branch office router that takes a single high-speed Internet connection (eth2) and turns it into about 20 internal networks, each with a separate subnet (192.168.1.0/24 to 192.168.20.0/24) and a separate VLAN (eth0.101 to eth0.120). I am trying to restrict bandwidth on one of the internal subnets that is consistently chewing up more bandwidth than it should. What is the best way to do this? My first try was wondershaper, which I heard about on SuperUser. Unfortunately, it is useful for exactly the opposite of my situation: it is meant for the client side, not the Internet side. My second attempt used the script found at http://www.topwebhosts.org/tools/traffic-control.php, which I modified so the active part is:

        tc qdisc add dev eth0.113 root handle 13: htb default 100
        tc class add dev eth0.113 parent 13: classid 13:1 htb rate 3mbps
        tc class add dev eth0.113 parent 13: classid 13:2 htb rate 3mbps
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip dst 192.168.13.0/24 flowid 13:1
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip src 192.168.13.0/24 flowid 13:2

    What I want this to do is restrict the bandwidth on VLAN 113 (subnet 192.168.13.0/24) to 3 Mbit up and 3 Mbit down. Unfortunately, it seems to have no effect at all! I'm very inexperienced with the tc command, so any help getting this working would be appreciated.
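
    Two things in that script are worth checking. First, in tc's units "3mbps" means 3 megabytes per second; megabits are written "3mbit". Second, a root qdisc on eth0.113 only shapes traffic leaving the router toward that VLAN (download); packets coming from 192.168.13.0/24 arrive on eth0.113, so the src filter there never shapes the upload, which has to be shaped where it egresses, i.e. on eth2. A hedged sketch using the interfaces and rates from the question (the fwmark value is arbitrary; marking is used because NAT on eth2 would rewrite the source address before the qdisc sees it):

        # download: shape traffic egressing toward VLAN 113 (no default class, other traffic untouched)
        tc qdisc add dev eth0.113 root handle 13: htb
        tc class add dev eth0.113 parent 13: classid 13:1 htb rate 3mbit ceil 3mbit
        tc filter add dev eth0.113 parent 13:0 protocol ip prio 1 u32 \
            match ip dst 192.168.13.0/24 flowid 13:1

        # upload: mark packets from that VLAN before NAT, then shape them as they leave eth2
        iptables -t mangle -A PREROUTING -i eth0.113 -j MARK --set-mark 113
        tc qdisc add dev eth2 root handle 2: htb
        tc class add dev eth2 parent 2: classid 2:1 htb rate 3mbit ceil 3mbit
        tc filter add dev eth2 parent 2:0 protocol ip prio 1 handle 113 fw flowid 2:1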

  • Qmail Patching Makes me Nervous

    - by JM4
    We have a system running CentOS 5 with Plesk 8.6 and qmail. Our primary domain is hosted through Media Temple. When Plesk and qmail are hosted on a single dedicated virtual server, the system reads the primary server IP and domain and reports those when sending emails. Our pages are written in PHP, so we are using the mail() function. While our email goes out to everybody, several enterprise email domains reject our email because it shows an originating IP (our primary server IP and domain) different from the domain we list in the 'From' address. This is not modifiable. Every domain we own does of course have its own IP as well, underneath our primary server IP. I have seen several places online that provide a patch, specifically one which allows domain bindings:

        "DomainBindings -- For servers that host multiple domains or have multiple IP addresses assigned to them,
        it is sometimes useful (or important) to have qmail use a specific IP address for its outgoing mail.
        By default, qmail uses whatever address the OS chooses for all outbound connections. With this patch,
        you can specify which address to use. It uses a control file similar to smtproutes to specify the
        outbound IP address to use, based on the sender's domain (local copy) (pyropus.ca)" [Qmail Link]

    First off, I do not have netqmail installed, so I'll need to find another source; but I am also completely unfamiliar with applying patches to qmail. Will I lose email service if I patch? Is it a simple apply-and-use process? Will my existing email accounts and data be restored after the patch? I am very, very new to Unix/Linux, so this does make me a bit nervous, but I am the only person who can make the change and it is one our company HAS to have. Any ideas?

  • HAProxy won't load balance my web requests. What have I done wrong?

    - by Josh Smeaton
    I've finally got HAProxy set up and running in the way I think I want. However, it is not load balancing the web requests it receives: all requests are currently being forwarded to the first server in the cluster. I'm going to paste my configuration below; if anyone can see where I may have gone wrong, I'd appreciate it. This is my first stab at configuring web servers in a *nix environment. First up, I have HAProxy running on the same host as the first server in the Apache cluster. We are moving these servers to virtual later on, and they will have different virtual hosts, but I wanted to get this running now. Both web servers are receiving their health checks and are reporting back correctly. The haproxy?stats page correctly reports servers that are up and down; I've tested this by altering the name of the file that is checked. I haven't put any load onto these servers yet. I've just opened up the URLs in several tabs (private browsing), and had several co-workers hit the URL too. All of the traffic goes to WEB1. Am I balancing incorrectly?

        global
            maxconn 10000
            nbproc 8
            pidfile /var/run/haproxy.pid
            log 127.0.0.1 local0 debug
            daemon

        defaults
            log global
            mode http
            retries 3
            option redispatch
            maxconn 5000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen WEBHAEXT :80,:8443
            mode http
            cookie sessionbalance insert indirect nocache
            balance roundrobin
            option httpclose
            option forwardfor except 127.0.0.1
            option httpchk HEAD health_check.txt
            stats enable
            stats auth rah:rah
            server WEB1 10.90.2.131:81 cookie WEB_1 check
            server WEB2 10.90.2.130:80 cookie WEB_2 check
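
    With "cookie sessionbalance insert indirect nocache", HAProxy pins each browser to whichever backend served its first request, so browser-based tests (several tabs, or co-workers behind the same proxy) can easily all end up stuck on WEB1. One way to check the round-robin itself is to send fresh, cookie-less requests and look at which persistence cookie comes back; a sketch, with the URL as a placeholder:

        # each request below carries no cookie, so the returned cookie should alternate WEB_1 / WEB_2
        for i in $(seq 1 10); do
            curl -s -D - -o /dev/null http://your-haproxy-host/ | grep -i '^Set-Cookie: sessionbalance'
        done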

  • Prestashop is not saving Memcached settings

    - by ianenri
    I have an issue with the PrestaShop admin site. I'm trying to activate the caching system, but it doesn't save the setting. When I add a server (I'm using an Amazon ElastiCache server) it saves it, but when I select the enable option and click Save, it redirects me to "Back Office > Preferences > Performance" as a blank page with only the admin tabs visible. When I go back to those settings, I see that the caching option is still disabled. There is also a warning: "To use Memcached, you must install the Memcache PECL extension on your server. http://www.php.net/manual/en/memcache.installation.php", even though I already installed memcached via yum. I also tried modifying the settings.inc.php file, changing

        define('_PS_CACHE_ENABLED_', '0');

    to:

        define('_PS_CACHE_ENABLED_', '1');

    But then I get a 500 Internal Server Error on every page, so I prefer to leave it as it was. Any ideas? I'm using PrestaShop 1.4.6.2 with Nginx 1.0.11 and PHP-FPM 5.3.8 on a CentOS 5.7 system.
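
    The warning is the clue: installing memcached with yum installs the cache daemon, not the PHP "memcache" PECL extension that PrestaShop checks for, which is why the admin page refuses to keep the setting and forcing it by hand breaks the front end. A hedged sketch for CentOS 5 with PHP-FPM; the package name is an assumption and varies by repository:

        # install the PHP extension (one of these, depending on your PHP repo), then reload PHP-FPM
        yum install php-pecl-memcache
        # or: pecl install memcache   (then add extension=memcache.so to php.ini)

        php -m | grep -i memcache     # should list "memcache" once the extension is loaded
        service php-fpm restart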

  • 554 - Sending MTA’s poor reputation

    - by Phil Wilks
    I am running an email server on 77.245.64.44 and have recently started to have problems with remote delivery of emails sent using this server. Only about 5% of recipients are rejecting the emails, but they all share the following common message:

        Remote host said: 554 Your access to this mail system has been rejected due to the sending MTA's poor reputation.

    As far as I can tell my server is not on any blacklists, and it is set up correctly (the reverse DNS checks out and so on). I'm not even sure what the "sending MTA" is, but I assume it's my server. If anyone could shed any light on this I'd really appreciate it! Here's the full bounce message:

        Could not deliver message to the following recipient(s):

        Failed Recipient: [email protected]
        Reason: Remote host said: 554 Your access to this mail system has been rejected due to the
        sending MTA's poor reputation. If you believe that this failure is in error, please contact
        the intended recipient via alternate means.

        -- The header and top 20 lines of the message follows --

        Received: from 79-79-156-160.dynamic.dsl.as9105.com [79.79.156.160] by mail.fruityemail.com with SMTP; Thu, 3 Sep 2009 18:15:44 +0100
        From: "Phil Wilks"
        To:
        Subject: Test
        Date: Thu, 3 Sep 2009 18:16:10 +0100
        Organization: Fruity Solutions
        Message-ID:
        MIME-Version: 1.0
        Content-Type: multipart/alternative; boundary="----=_NextPart_000_01C2_01CA2CC2.9D9585A0"
        X-Mailer: Microsoft Office Outlook 12.0
        Thread-Index: Acosujo9LId787jBSpS3xifcdmCF5Q==
        Content-Language: en-gb
        x-cr-hashedpuzzle: ADYN AzTI BO8c BsNW Cqg/ D10y E0H4 GYjP HZkV Hc9t ICru JPj7 Jd7O Jo7Q JtF2 KVjt;1;YwBoAGEAcgBsAG8AdAB0AGUALgBoAHUAbgB0AC0AZwByAHUAYgBiAGUAQABzAHUAbgBkAGEAeQAtAHQAaQBtAGUAcwAuAGMAbwAuAHUAawA=;Sosha1_v1;7;{F78BB28B-407A-4F86-A12E-7858EB212295};cABoAGkAbABAAGYAcgB1AGkAdAB5AHMAbwBsAHUAdABpAG8AbgBzAC4AYwBvAG0A;Thu, 03 Sep 2009 17:16:08 GMT;VABlAHMAdAA=
        x-cr-puzzleid: {F78BB28B-407A-4F86-A12E-7858EB212295}

        This is a multipart message in MIME format.

        ------=_NextPart_000_01C2_01CA2CC2.9D9585A0
        Content-Type: text/plain; charset="us-ascii"
        Content-Transfer-Encoding: 7bit
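
    The "sending MTA" is whichever machine actually connects to the recipient's server, so it is worth confirming exactly which IP that is (the Received header above shows the message being relayed into mail.fruityemail.com from a dynamic DSL address, but what matters to the recipient is the address that hands the mail to them) and re-checking that address against the common DNS blacklists. A hedged check using 77.245.64.44 from the question and a couple of example DNSBLs:

        # reverse DNS should resolve, and ideally match the name used in your HELO
        dig +short -x 77.245.64.44

        # DNSBL lookups use the reversed octets; any answer (often 127.0.0.x) means "listed"
        dig +short 44.64.245.77.zen.spamhaus.org
        dig +short 44.64.245.77.bl.spamcop.net
        dig +short 44.64.245.77.dnsbl.sorbs.net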

  • What characteristic of networking/TCP causes a linear relation between TCP activity and latency?

    - by DeLongey
    The core of this problem is that our application uses websockets for real-time interfaces. We are testing our app in a new environment, but strangely we're noticing an increasing delay in TCP websocket packets associated with an increase in websocket activity. For example, if one websocket event occurs without any other activity in a one-minute period, the response from the server is instantaneous. However, if we slowly increase client activity, the latency of the server's responses increases with a linear relationship (each packet takes more time to reach the client the more activity there is). For those wondering, this is NOT app-related, since our logs show that our server is running and responding to requests in under 100ms, as desired. The delay starts once the server has processed the request and creates the TCP packet and sends it to the client (and not the other way around).

    Architecture: This new environment runs with a virtual IP address and uses keepalived on a load balancer to balance the traffic between instances. Two boxes sit behind the balancer and all traffic runs through it. Our host provider manages the balancer and we do not have control over that part of the architecture.

    Theory: Could this somehow be related to something buffering the packets in the new environment? Thanks for your help.
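
    One way to narrow this down is to see where the delay is added: capture on the app server and compare with a capture on the client (or one from the provider's balancer). If the packet leaves the server promptly but arrives late, the balancer or path is buffering; if the server itself sits on the data, classic culprits are Nagle's algorithm interacting with delayed ACKs on a single busy websocket connection, or a shrinking send window. A hedged sketch of the measurements; the interface, port and client address are placeholders:

        # on the app server: timestamps with inter-packet deltas for the websocket port
        tcpdump -i eth0 -ttt -n 'tcp port 8080'

        # per-connection TCP internals (rtt, cwnd, retransmits) for connections to a given client
        ss -tin dst 203.0.113.10

        # watch whether unsent data piles up in the socket's Send-Q
        watch -n1 "ss -tn dst 203.0.113.10"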

  • Network profile reverts to 'Unidentified' following Windows Update reboot

    - by user140575
    I have searched high and low for a solution to this problem. I have multiple servers running Windows 2000 Server as well as Windows Server 2003, 2003 R2, and 2008 R2, all on the same Active Directory domain. The servers normally show the network profile as Domain Network, which is fine and correct. However, when a Windows update is installed, the server changes the profile to Unidentified Network once it has rebooted, which then doesn't allow any traffic to the server. For security reasons, we can't turn the firewalls off. The only way to fix the problem is to be physically in front of the machine and change the profile back. Once the profile has been reinstated to the Domain profile, it is fine until the next month's update. This happens on all the Windows versions mentioned above, and the machines are not all identical, so it's not a hardware problem either. If anyone can help I'd be very grateful.

  • Red Hat 5.3 on HP Proliant DL380 G5 and failed drive on RAID controller

    - by thinkdreams
    I have a development ERP server here in my office that I assist with support on, and originally the DBA requested a single-drive setup for some of the drives on the server. Thus the hardware RAID controller (an HP embedded controller) looks like:

        c0d0 (2 drives) RAID-1
        c0d1 (2 drives) RAID-1
        c0d2 (1 drive)  No RAID   <-- Failed
        c0d3 (1 drive)  No RAID
        c0d4 (1 drive)  No RAID
        c0d5 (1 drive)  No RAID

    c0d2 has failed. I replaced the drive immediately with a spare using hot-swap, but c0d2 continues to mark itself as failed, even when I umount the partition. I'm loath to reboot the server since I'm concerned about it coming back up in rescue mode, but I'm afraid that's the only way to get the system to re-read the drive. I assumed there was some sort of auto-detection routine for this, but I haven't been able to figure out the proper procedure. I have installed the HP ACU CLI utilities, so I can see the hardware RAID setup. I'd really like to find out what the proper procedure should have been, where I went wrong, and how to correct it now. Obviously, it goes without saying that I should NOT have listened to the DBA and should have set the drives up as RAID-1 throughout, as was my first instinct. He wasn't worried about data loss, but it sure would have made replacing the failed drive easier. :)
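
    A single-drive ("No RAID" / RAID-0) logical drive has no redundancy, so the controller cannot rebuild it onto the replacement disk; it keeps the logical drive in a failed state until that logical drive is recreated. A hedged sketch with the ACU CLI; the slot number, logical drive number and physical drive address are assumptions to be read from the "show config" output, and recreating the logical drive destroys whatever was on it, so plan to restore from backup:

        # see controller slot, logical drives and physical drive addresses
        hpacucli ctrl all show config detail

        # confirm the replacement physical drive is seen and reports OK
        hpacucli ctrl slot=0 pd all show

        # remove the failed single-drive logical drive and recreate it on the new disk
        hpacucli ctrl slot=0 ld 3 delete forced
        hpacucli ctrl slot=0 create type=ld drives=1I:1:3 raid=0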

  • HTML Redirect issue with Apache2

    - by Vijit Jain
    I am facing an issue with ProxyPass on my Apache server on Ubuntu. I have configured Apache to handle virtual hosts on the server. There is an application that runs on the server and uses ports 8001 and 8002. I need to do something like www.example.com/demo/origin to display the contents I would see when visiting www.example.com:8000. The contents to be displayed are a set of HTML pages. This is the section of the virtual host config that has issues:

        ProxyPass /demo/vader http://www.example.com:8001/
        ProxyPassReverse /demo/vader http://www.example:8001/
        ProxyPass /demo/skywalker http://www.example.com:8002/
        ProxyPassReverse /demo/skywalker http://www.example.com:8002/

    Now when I visit example.com/demo/skywalker, I see the first page of port 8002, say the login.html page. The second page should have been www.example.com/demo/skywalker/userAction.html; instead the server shows www.example.com:8000/login.html. In the error logs I see something like:

        [Mon Nov 11 18:01:20 2013] [debug] mod_jithttp [403]: No access to /htdocs/js/demo.72fbff3c9a97f15a4fff28e19b0de909.min.js

    I do not have any folder named htdocs on the system. This is only an issue while viewing .html pages; otherwise, no such issue occurs. When I visit localhost:8001 it shows any and all contents without any errors or issues. www.example.com/demo/skywalker, www.example.com/demo/origin and www.example.com/demo/vader each display a different web page. I have also tried one more combination:

        <Location /demo/origin/>
            ProxyPass http://localhost:8000/
            ProxyPassReverse http://localhost:8000/
            ProxyHTMLURLMap http://localhost:8000/ /
        </Location>

    This fails as well. I would greatly appreciate it if anyone can help me resolve this issue.
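
    ProxyPass and ProxyPassReverse only rewrite headers (redirects, cookies), never the links inside the HTML itself, so pages that reference root-relative paths like /js/... or redirect to absolute URLs escape the /demo/skywalker prefix. The usual options are to make the backend generate relative links, or to let mod_proxy_html rewrite the markup on the way through. A hedged sketch for Ubuntu; the package name and exact directives depend on the release and mod_proxy_html version:

        # install and enable the HTML-rewriting proxy module
        apt-get install libapache2-mod-proxy-html
        a2enmod proxy proxy_http proxy_html

        # then, inside the vhost (sketch only):
        #   <Location /demo/skywalker/>
        #       ProxyPass        http://localhost:8002/
        #       ProxyPassReverse http://localhost:8002/
        #       ProxyHTMLEnable  On
        #       ProxyHTMLURLMap  http://localhost:8002/  /demo/skywalker/
        #       ProxyHTMLURLMap  /                       /demo/skywalker/
        #   </Location>
        apachectl configtest && service apache2 reload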

  • Nginx: all subdomains end up at one server block (gitlab)

    - by Alkimake
    I have installed GitLab on my server and use nginx as the HTTP server. I simply used the nginx recipe for GitLab:

        # GITLAB
        # Maintainer: @randx
        # App Version: 3.0

        upstream gitlab {
            server unix:/home/gitlab/gitlab/tmp/sockets/gitlab.socket;
        }

        server {
            listen 192.168.250.81:80;         # e.g., listen 192.168.1.1:80;
            server_name gitlab.xxx.com;       # e.g., server_name source.example.com;
            root /home/gitlab/gitlab/public;

            # individual nginx logs for this gitlab vhost
            access_log /var/log/nginx/gitlab_access.log;
            error_log /var/log/nginx/gitlab_error.log;

            location / {
                # serve static files from defined root folder;.
                # @gitlab is a named location for the upstream fallback, see below
                try_files $uri $uri/index.html $uri.html @gitlab;
            }

            # if a file, which is not found in the root folder is requested,
            # then the proxy pass the request to the upsteam (gitlab unicorn)
            location @gitlab {
                proxy_read_timeout 300;     # https://github.com/gitlabhq/gitlabhq/issues/694
                proxy_connect_timeout 300;  # https://github.com/gitlabhq/gitlabhq/issues/694
                proxy_redirect off;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_pass http://gitlab;
            }
        }

    gitlab.xxx.com works fine and serves the GitLab pages. But if I point another subdomain at the server, for example jira.xxx.com for Jira (which I normally run on port 8080), requests to it on port 80 also get the GitLab site. How can I restrict this server block to GitLab only, or alternatively redirect jira.xxx.com to jira.xxx.com:8080?
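
    Requests for any hostname that matches no server_name fall through to the default server for that listen address, and with only one server block defined, the GitLab block is that default; that is why jira.xxx.com lands there. A hedged sketch of a second server block that proxies the Jira subdomain to port 8080 (the file path and upstream address are assumptions):

        # e.g. /etc/nginx/conf.d/jira.conf
        server {
            listen 192.168.250.81:80;
            server_name jira.xxx.com;

            location / {
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_pass http://127.0.0.1:8080;
            }
        }

        # optional: make unknown hostnames return nothing instead of GitLab
        server {
            listen 192.168.250.81:80 default_server;
            server_name _;
            return 444;
        }

    After adding it, an nginx -t followed by a reload should be enough.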

  • How to make the Linux system clock run faster than real-world time?

    - by JamesThomasMoon1979
    Background: I want to monitor a running Linux system over several days. It's a custom Gentoo build with much custom software on board. This software has ongoing maintenance timers, cron scripts and other clock-driven events, and I need to verify that these scheduled events are working.

    Problem: Waiting for the system to step through its daily and weekly activity is a long wait, and modifying all the clock-based timers on the system would be time-consuming. Yet I often want to test a system's end-to-end scheduled activities without waiting a week.

    Potential solution: Have the Linux system under test appear to run through its daily cycle of activity within just a few hours.

    My question for Server Fault: Is there a way to make the system's time run faster than real-world time? My first thought is manipulating the ntp daemon to repeatedly and smoothly increment the clock. Any other ideas? And yes, I know this may have strange side effects. However, the system has no important or time-critical interactions with systems outside of itself, and this may be a valuable testing technique.
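
    A crude but effective variant of that idea is to stop ntpd and step the clock forward from a loop, so every real minute covers several minutes of "system" time; cron and most timers key off the system clock and will fire accordingly. A hedged sketch (the multiplier and the ntpd init script name are assumptions; libfaketime is an alternative if only individual processes need accelerated time):

        # stop anything that would fight the manual clock steps
        /etc/init.d/ntpd stop 2>/dev/null || true

        # roughly 10x: every real minute, jump the clock an extra 9 minutes ahead
        while true; do
            sleep 60
            date -s "now + 9 minutes"
        done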

  • Time not propagating to machines on Windows domain

    - by rbeier
    We have a two-domain Active Directory forest: ourcompany.com at the root, and prod.ourcompany.com for production servers. Time is propagating properly through the root domain, but servers in the child domain are unable to sync via NTP, so the time on these servers is starting to drift, since they're relying only on the hardware clock. When I type "net time" on one of the production servers, I get the following error:

        Could not locate a time-server.
        More help is available by typing NET HELPMSG 3912.

    When I type "w32tm /resync", I get the following:

        Sending resync command to local computer
        The computer did not resync because no time data was available.

    "w32tm /query /source" shows the following:

        Free-running System Clock

    We have three domain controllers in the prod.ourcompany.com subdomain (overkill, but the result of a migration; we haven't gotten rid of one of the old ones yet). To complicate matters, the domain controllers are all virtualized, running on two different physical hosts. But the time on the domain controllers themselves is accurate; the servers that aren't DCs are the ones having problems. Two of the DCs are running Server 2003, including the PDC emulator, and the third DC is running Server 2008. (I could move the PDC emulator role to the 2008 machine if that would help.) The non-DC servers are all running Server 2008. All other Active Directory functionality works fine in the production domain; we're only seeing problems with NTP. I can manually sync each machine to the time source (the PDC emulator) by doing the following:

        net time \\dc1.prod.ourcompany.com /set /y

    But this is just a one-off, and it doesn't cause automated time syncing to start working. I guess I could create a scheduled task which runs the above command periodically, but I'm hoping there's a better way. Does anyone have any ideas as to why this isn't working, and what we can do to fix it? Thanks for your help, Richard
