Search Results

Search found 86974 results on 3479 pages for 'visualsvn server'.

  • How to define a service's runlevel start-order position?

    - by DmitrySemenov
    I set up bind-dlz and need MySQL to start before named when the system boots. Here is what I have:

        [root@semenov]# ./test.sh
        mysql   0:off 1:off 2:on  3:on 4:on 5:on 6:off
        named   0:off 1:off 2:off 3:on 4:on 5:on 6:off
        lrwxrwxrwx. 1 root root 15 Apr 15 18:57 /etc/rc3.d/S93mysql -> ../init.d/mysql
        lrwxrwxrwx. 1 root root 15 Apr 15 18:57 /etc/rc3.d/S90named -> ../init.d/named

    Here is what I have in the mysql init script:

        # Comments to support chkconfig on RedHat Linux
        # chkconfig: 2345 84 16
        # description: A very fast and reliable SQL database engine.
        # Comments to support LSB init script conventions
        ### BEGIN INIT INFO
        # Provides: mysql
        # Required-Start: $local_fs $network $remote_fs
        # Should-Start: ypbind nscd ldap ntpd xntpd
        # Required-Stop: $local_fs $network $remote_fs
        # Default-Start: 2 3 4 5
        # Default-Stop: 0 1 6
        # Short-Description: start and stop MySQL
        # Description: MySQL is a very fast and reliable SQL database engine.
        ### END INIT INFO

    When I remove named from chkconfig and leave just mysql there, mysql starts with order number 84 (/etc/rc3.d/S84mysql -> ../init.d/mysql). But when I add named back into chkconfig, mysql's order changes to 93 (/etc/rc3.d/S93mysql -> ../init.d/mysql). As a result mysql starts after named, and named fails (no SQL available). Any ideas what I'm doing wrong? Here is what I have in the named init script:

        # chkconfig: 345 90 16
        # description: named (BIND) is a Domain Name Server (DNS) \
        # that is used to resolve host names to IP addresses.
        # probe: true
        ### BEGIN INIT INFO
        # Provides: $named
        # Required-Start: $local_fs $network $syslog
        # Required-Stop: $local_fs $network $syslog
        # Default-Start:2 3 4
        # Default-Stop: 0 1 2 3 4 5 6
        # Short-Description: start|stop|status|restart|try-restart|reload|force-reload DNS server
        # Description: control ISC BIND implementation of DNS server
        ### END INIT INFO

    Thanks, Dmitry
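
    Since both scripts carry LSB headers, chkconfig on recent Red Hat systems orders services by their declared dependencies as well as the numeric priorities, and nothing in either header tells it that named needs MySQL, so it is free to push mysql past named. One way to express the dependency is to add mysql to named's LSB block and let chkconfig re-read the headers; a minimal sketch, assuming the stock init script locations:

        # in /etc/init.d/named, inside the ### BEGIN INIT INFO block:
        # Should-Start: mysql

        # rebuild the rc symlinks so the new ordering takes effect:
        chkconfig --del named
        chkconfig --add named
        ls /etc/rc3.d/ | grep -E 'mysql|named'   # mysql should now sort before named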

  • vmdk recovery after migration from ESX 3.5 to 4 and a fallback attempt

    - by olgirard
    Hi, I've tried to migrate some VMs from my ESX 3.5i environment to a brand new vSphere 4.0 U1. The two platforms run simultaneously and share the same SAN. I migrated a VM by stopping it, unregistering it in vCenter (ESX 3.5, which I'll call esx3), registering it in vSphere (ESX 4, which I'll call esx4), and upgrading the virtual hardware before powering it up (first mistake). vMotion was enabled on esx4, which seems to have been a second mistake.

    After a day or so I encountered problems reaching the esx4 host and decided to unregister my server from esx4 and fall back to esx3. The VM refused to boot on esx3; I assumed this was due to the virtual hardware now being version 7, so I created a new VM pointing at the vmdk of the old one. Everything seemed fine until I logged into the server and discovered it was running on the original base disk, with every snapshot ignored, even those created on esx3. I tried to boot the VM on esx4 again, but it doesn't power up because "The parent virtual disk has been modified since the child was created". As a backup I have a copy of a later state of the drive, but it was generated between two snapshots (an OVF produced with Converter Standalone).

    Do I have a chance to recover at least some files from the virtual drive, or (as I think) is it all lost? I've made enough mistakes for one day. Thanks for your help.

  • CentOS running Apache Tomcat keeps getting "java.net.SocketException: Too many open files"

    - by Gerard Moroney
    We're running Apache Tomcat 7.0.41 on CentOS 6 with Java version 1.7.0_21. We were getting a lot of "too many open files" errors, so I did some research. The consensus was that it had to do with the number of open files allowed, so I did the following:

    1. Increased max files in /etc/security/limits.conf:

        soft nofile 100000
        hard nofile 100000

    2. Rebooted the server.
    3. Checked that the limits were valid for the user which runs the process:

        [app_admin@xxx ~]$ ulimit -Hn
        100000
        [app_admin@xxx ~]$ ulimit -Sn
        100000

    4. Monitored open files on the server using the lsof command.

    What I observed was that when the total open files reached circa 13000, and Tomcat had around 4500 open files, the error reappeared. I am confused: I thought this would have resolved the problem, but clearly I don't fully understand the root cause or how to set the parameter correctly. To (maybe) help: I have not modified Tomcat's server.xml (although I'm tempted); I don't want to start fiddling with that and make things worse. I'm more than happy to share more information if someone can give me some hints on where to start looking.
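
    One thing worth verifying is that the raised limit is the one the Tomcat JVM actually inherited; a daemon started at boot can bypass pam_limits entirely. A quick sketch, assuming the process name contains "tomcat":

        PID=$(pgrep -f tomcat | head -n1)
        grep 'open files' /proc/$PID/limits       # limits of the live process, not of your shell
        ls /proc/$PID/fd | wc -l                  # descriptors currently held
        lsof -p $PID | awk '{print $5}' | sort | uniq -c | sort -rn | head   # mostly sockets? mostly files?

    If the live process still shows the old default limit, the startup path (init script, jsvc, sudo) is dropping the new values; if it shows 100000 and still fails around 4500 descriptors, the ceiling being hit is likely something else, such as a limit set in Tomcat's own startup scripts.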

  • SSL Returning Blank Page, No Catalina Errors

    - by Mr.Peabody
    This is my second, maybe third, time configuring SSL with Tomcat. Earlier I had created a self-signed certificate, which worked; now using my signed certificate is proving fruitless. I am using Tomcat running on the Amazon Linux AMI. With the signed cert/keystore, my server starts normally without errors. However, when trying to navigate to the domain, the browser gives an "ERR_SSL_VERSION_OR_CIPHER_MISMATCH" error. The connector in my server.xml looks as follows:

        <Connector port="8443" maxHttpHeaderSize="8192"
                   maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
                   enableLookups="false" disableUploadTimeout="true"
                   acceptCount="100" scheme="https" secure="true" SSLEnabled="true"
                   clientAuth="false" sslProtocol="TLS"
                   keystoreFile="/home/ec2-user/.keystore/starchild.jks"
                   keystorePass="d6b5385812252f180b961aa3630df504" />

    It couldn't hurt to also mention that I'm using a wildcard certificate. Please let me know if anything looks amiss!

    EDIT: After looking into this more, I've determined there may be nothing wrong with the server.xml or the listening ports. This is looking more like an actual certificate error, as a curl request gives me:

        curl: (35) Unknown SSL protocol error in connection to jira.mywebsite.com:-9824

    (I can't figure out what the "-9824" is.) Comparing this curl against a similar setup that uses the same wildcard certificate turns up the full handshake, which is what I'd expect. I now believe the problem lies in the protocol/cipher set defaults on JIRA servers.
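
    A frequent cause of ERR_SSL_VERSION_OR_CIPHER_MISMATCH with a purchased certificate is importing the signed reply into a different keystore than the one holding the private key, leaving Tomcat with no usable key pair. Two quick checks, a sketch assuming the paths from the connector above:

        # does the keystore hold a PrivateKeyEntry (key pair), or only trustedCertEntry items?
        keytool -list -v -keystore /home/ec2-user/.keystore/starchild.jks | grep -i 'entry type'

        # which protocol versions will the connector complete a handshake for?
        openssl s_client -connect jira.mywebsite.com:8443 -tls1
        # repeat with whatever other version flags your openssl supports
        # (-ssl3, -tls1_1, -tls1_2) and compare against the working server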

  • Simulated NAT Traversal on VirtualBox

    - by Sumit Arora
    I have installed VirtualBox (with two NAT-type virtual adapters); the host is Ubuntu 10.10, the guest OpenSUSE 11.4.

    Objective: trying to simulate all four types of NAT as defined here: https://wiki.asterisk.org/wiki/display/TOP/NAT+Traversal+Testing ("Simulating the various kinds of NATs can be done using Linux iptables. In these examples, eth0 is the private network and eth1 is the public network.")

    Full cone:

        iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source <public-ip>
        iptables -t nat -A PREROUTING -i eth0 -j DNAT --to-destination <private-ip>

    Restricted cone:

        iptables -t nat -A POSTROUTING -o eth1 -p tcp -j SNAT --to-source <public-ip>
        iptables -t nat -A POSTROUTING -o eth1 -p udp -j SNAT --to-source <public-ip>
        iptables -t nat -A PREROUTING -i eth1 -p tcp -j DNAT --to-destination <private-ip>
        iptables -t nat -A PREROUTING -i eth1 -p udp -j DNAT --to-destination <private-ip>
        iptables -A INPUT -i eth1 -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i eth1 -p udp -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i eth1 -p tcp -m state --state NEW -j DROP
        iptables -A INPUT -i eth1 -p udp -m state --state NEW -j DROP

    Port-restricted cone:

        iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source <public-ip>

    Symmetric:

        echo "1" > /proc/sys/net/ipv4/ip_forward
        iptables --flush
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE --random
        iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT

    What I did: the OpenSUSE guest has two virtual adapters, eth0 and eth1; eth1 with address 10.0.3.15 (plus eth1:1 as 10.0.3.16) and eth0 with address 10.0.2.15. I then ran the stund client/server (http://sourceforge.net/projects/stun/):

    Server:

        eKimchi@linux-6j9k:~/sw/stun/stund> ./server -v -h 10.0.3.15 -a 10.0.3.16

    Client:

        eKimchi@linux-6j9k:~/sw/stun/stund> ./client -v 10.0.3.15 -i 10.0.2.15

    In all four cases it gives the same results:

        test I         = 1
        test II        = 1
        test III       = 1
        test I(2)      = 1
        is nat         = 0
        mapped IP same = 1
        hairpin        = 1
        preserver port = 1
        Primary: Open
        Return value is 0x000001

    Q1: Please let me know if anyone has ever done this. The rules should behave like a NAT as described, but nowhere is it working as one.
    Q2: How is NAT implemented in home routers (usually port-restricted)? Are those also just pre-configured iptables rules on a tuned Linux?
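
    One detail that would produce exactly these results: the stun client and server here both run on the same guest, so their packets travel over the local stack and never traverse the PREROUTING/POSTROUTING chains, and every test will then report "is nat = 0" no matter which rules are loaded. A quick way to confirm whether a rule ever sees a packet is its counters; a sketch:

        iptables -t nat -L -n -v    # per-rule packet/byte counters
        iptables -t nat -Z          # zero them, then rerun the stun client
        # counters still at 0 afterwards mean the traffic is not crossing the NAT rules;
        # the client needs to sit behind eth0 on a separate machine or VM for the simulation to apply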

  • How to find the real IP to which IPVS is routing a virtual IP

    - by Wayne Conrad
    I'm trying to find a problem server hiding behind a virtual IP (using LVS/ipvs). I've got a test program that sends requests to the virtual IP until it gets the bad response, but how can I tell to which real IP a request to the virtual IP got routed? On the box doing the virtual IP magic, here's the virtual IP configuration (for the service I care about):

        IP Virtual Server version 1.2.1 (size=4096)
        Prot LocalAddress:Port Scheduler Flags
          -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
        ...
        TCP  10.1.0.254:5025 nq
          -> 10.1.0.5:5025                Route   1      0          1
          -> 10.1.0.6:5025                Route   1      0          5
          -> 10.1.0.7:5025                Route   1      0          2
          -> 10.1.0.9:5025                Local   1      0          3
          -> 10.1.0.11:5025               Route   1      0          3
        ...

    My client program is sending TCP requests to 10.1.0.254:5025, usually getting a good response but sometimes a bad one. With this few servers, I could send my request to each server in turn until I discover the culprit, but I wonder if that technique will scale as we add servers. What means exist for me to find out where requests got routed?

    Kernel: Linux 2.6.32. OS: Debian testing (whatever that's called these days). ipvsadm is version 1.25, compiled with ipvs v1.2.1.
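
    ipvsadm can dump the live connection table, which records the real server each client connection was scheduled to; a sketch:

        ipvsadm -L -c -n | grep 5025
        # columns: pro, expire, state, source (client), virtual, destination (real server)
        # run the test client from a known source address, then read off which
        # real server its entry points at when the bad response comes back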

  • Directory directive: AuthType None but still need an AuthProvider?

    - by Steffen Winkler
    For now I just need the server to let me download files from one specific folder (in my case /opt/myFolder). The distribution is Debian 6.0.

    *edit_start* Apache version is 2.4; according to the official documentation, the Order/Allow clauses are deprecated and should not be used anymore. I'm an idiot: Apache version is 2.2. *edit_end*

    My directory directives in apache2.conf look like this:

        <IfModule dir_module>
            DirectoryIndex index.html index.htm index.php
        </IfModule>

        ServerRoot "/etc/apache2"
        DocumentRoot "/opt/myFolder"

        <Directory />
            Options FollowSymLinks
            AuthType None
            AllowOverride None
            Require all denie
        </Directory>

        <Directory "/opt/myFolder/*">
            Options FollowSymLinks MultiViews
            AllowOverride None
            AuthType None
            Require all allow
        </Directory>

    When I try to access a file inside that folder (http://myserver.de/aTestFile.zip) I get an Internal Server Error, and Apache writes the following error to its log:

        configuration error: couldn't check user. Check your authn provider!: /aTestFile.zip

    Why would I need an authn provider if I don't want any authentication? I also hope someone can explain what kind of authentication provider I'd need for that. Every time I search for these things I get pointed at people asking how to protect files/directories with passwords or restrict access to some IP addresses, which doesn't really help me.

    OK, since I have Apache version 2.2, here is the error I get when using the Order/Deny/Allow commands instead of AuthType/Require:

        Invalid command 'Order', perhaps misspelled or defined by a module not included in the server configuration.
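
    On 2.2, Require is an authentication directive (with no authn provider configured behind it you get exactly the "couldn't check user" error), and anonymous host-based access control is done with Order/Allow/Deny from mod_authz_host instead. mod_authz_host is compiled into stock Apache 2.2, so "Invalid command 'Order'" is surprising there; it's worth confirming the running version with `apache2 -v` (on 2.4 the equivalent would be "Require all granted"). A minimal 2.2-style sketch of the two blocks, with no authentication at all:

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
        </Directory>

        <Directory "/opt/myFolder/">
            Options FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>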

  • dhclient requests filling memory?

    - by shanethehat
    Dammit Jim, I'm a web developer, not a sysadmin. With that out of the way: my client has a CentOS server (6.2) that serves only a single Magento site (and the associated MySQL server), and it frequently runs out of memory, despite the site currently being open to just 5 users. I'm investigating the logs to try to figure out why the memory usage is so high, but I don't really know what I'm looking at. There are a lot of entries in /var/log/messages concerning DHCP requests, approximately one every 15 seconds, that look like this:

        Apr  7 14:23:06 s15940039 dhclient[815]: DHCPREQUEST on eth0 to 172.30.102.85 port 67 (xid=0x6b5cd2a7)

    Is this normal? I don't see anything else in here that I don't recognise, but then I'm not sure I'd know the problem if I did see it. Four days ago the server ran out of memory completely and locked up, requiring a restart. The DHCP messages did not start up again for 23 hours, but then carried on as before. I have read this question, which describes the same issue, but in my case a fresh DHCP lease does not ever seem to be issued. Is this something I should push back to the hosting provider, or have I not yet found the source of the memory problem?
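
    dhclient repeating DHCPREQUEST with no matching DHCPACK does point at something flaky on the provider's DHCP side, but dhclient itself is tiny and very unlikely to be what exhausts the machine's RAM. Before pushing back on the host, it's worth identifying the real consumer; a quick sketch:

        free -m                                    # overall picture, including swap
        ps aux --sort=-rss | head -n 15            # largest resident processes (Apache/PHP and
                                                   # MySQL are the usual suspects on a Magento box)
        grep -i 'out of memory' /var/log/messages  # did the kernel OOM killer fire before the lock-up?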

  • Open source solution for a gateway for a housing cooperative network of 150 people

    - by SirDinosaur
    I just inherited a barely functioning network for a student housing cooperative of about 150 people. In its current state, as I understand it from the previous person in charge of the network, we have working wireless access points and working Ethernet cabling going to working gigabit switches, going to a barely functioning gateway (right now a simple home router), to one of three possible outbound connections. It is possible to connect to the network through wireless or Ethernet, but especially during peak hours, packets/connections are likely to be dropped or otherwise get no response.

    My intuition tells me to replace the gateway with something that can handle multiple outbound connections (WAN) and one inbound connection (LAN), while the rest of the network seems suitable for now. I'm somewhat knowledgeable in Linux (been using Debian, after starting with Arch Linux) and I want to use as much open source as possible, but I'm confused about whether a simple server that I could easily understand would work for this situation. Do I need specialized hardware to handle the switching more effectively? If so, what are my options? (I found this; thoughts?) Or, if a Debian server would work, is there anything else I should know about the specs required for this type of server? Links to any useful information on using open source to maintain this type of network would be most appreciated. <3

    P.S. crossposted: http://redd.it/yybp2

  • apache2 + mod_fastcgi + suexec + php5.2 = unstable on high load

    I am hosting several (~30) different sites on one server with apache2 + fastcgi + suexec + php5. The sites have different loads and different script execution times (some process a request for 5-7 seconds, some in under 1 second). Sometimes, when a single site receives a very high load (all PHP instances of this site are created and in use), the whole Apache server hangs: Apache (worker MPM) creates new processes up to the upper limit, and it looks like it starts to queue ALL new requests for EVERY site, not only the one under high load, and it quickly reaches the process limits. A restart of Apache solves the problem. My config:

        FastCgiConfig -singleThreshold 1 -multiThreshold 10 -listen-queue-depth 30 \
                      -maxProcesses 80 -maxClassProcesses 12 -idle-timeout 30 \
                      -pass-header HTTP_AUTHORIZATION -pass-header If-Modified-Since \
                      -pass-header If-None-Match

    (Earlier I had the default -listen-queue-depth of 100, but it didn't change anything.) Any suggestions?

    Another question: how is this listen queue implemented? Is it one queue for the whole of Apache, or a separate queue for every defined PHP application (suexec site)? I would like to achieve something like this: when one site receives high load and its queue is full, the server bounces the next request, but only for that one site. Other sites should keep working properly.
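
    When the hang is in progress, it helps to see what each Apache worker slot is actually doing: if every slot sits in "W" (sending reply) on the same vhost, the bottleneck is that application's FastCGI sockets rather than Apache's own limits. mod_status shows this; a sketch, assuming the module is available:

        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

        # then, during a hang:
        #   apachectl fullstatus      (or: curl http://localhost/server-status)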

  • Nginx proxy hangs when proxying to itself

    - by Thomas
    I have Nginx running as a proxy for a number of services, including a Geoserver running on port 8080 with the following config:

        location ^~ /wms/ {
            rewrite ^/wms/(.*)$ /geoserver/ows$1 break;
            proxy_pass http://127.0.0.1:8080;
            proxy_connect_timeout 60s;
            proxy_read_timeout 150s;
        }

    and a proxy service to avoid SOP problems, which works as follows:

        location ^~ /proxy/?targetURL= {
            rewrite ^/proxy/?targetURL=(.*)$ $1 break;
            proxy_pass $1;
            proxy_connect_timeout 60s;
            proxy_read_timeout 150s;
        }

    My web server is also under the same domain, run by a Jetty on port 8888 and handled by the same proxy:

        location / {
            proxy_pass http://127.0.0.1:8888;
            proxy_connect_timeout 60s;
            proxy_read_timeout 150s;
        }

    From my web application I make WMS server calls for data via my proxy service. It works fine for external servers, but it hangs when I call my own internal Geoserver. My Geoserver proxy works fine; I can make WMS service queries with that URL directly. The call that hangs is basically:

        http://mywebappdomain.com/proxy/?targetURL=http://mywebappdomain.com/wms/?my_set_of_parameters

    which means the proxy rule applies and the WMS service is called on the same server. Is there an issue with nginx proxying to itself?
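
    One structural problem worth ruling out first: nginx matches location against the URI path only, never against the query string, so a prefix of "/proxy/?targetURL=" can never match as written; those requests fall through to "location /" and land on Jetty. The query argument is available as $arg_targetURL instead, and proxy_pass with a variable target additionally needs a resolver directive. A sketch of that shape (the DNS server address is a placeholder):

        location ^~ /proxy/ {
            resolver 8.8.8.8;              # required when proxy_pass uses a variable
            proxy_pass $arg_targetURL;     # value of ?targetURL=...
            proxy_connect_timeout 60s;
            proxy_read_timeout 150s;
        }

    This may not explain the self-call hang by itself, but until the location actually matches, the external-vs-internal difference in behaviour is coming from Jetty, not from the proxy block.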

  • Puppet inventory service using puppetdb

    - by Oli
    I have 3 servers set up: a puppet master using Passenger (puppet-server1), a dashboard using Passenger (puppet-server2), and puppetdb (puppet-server3). I cannot get the inventory service working in the dashboard. The puppet master is able to sign certs and hand out manifests, the nodes have checked in to the dashboard OK, and puppetdb appears to be working; log files as follows:

        2012-12-13 17:53:10,899 INFO  [command-proc-74] [puppetdb.command] [8490148f-865a-45c8-b5b5-2c8824d753dd] [replace facts] puppet-server3.test.net
        2012-12-13 17:53:11,041 INFO  [command-proc-74] [puppetdb.command] [dfcc5168-06df-41d4-9a97-77b4cd3f4a2b] [replace catalog] puppet-server3.test.net
        2012-12-13 17:55:28,600 INFO  [command-proc-74] [puppetdb.command] [b2cc0a96-0404-49f5-96ad-19c778508d3d] [replace facts] puppet-client2.test.net
        2012-12-13 17:55:28,729 INFO  [command-proc-74] [puppetdb.command] [4dc4b8f3-06df-4dad-a89a-92ac80447b99] [replace catalog] puppet-client2.test.net

    The puppet master has the following configured in puppet.conf:

        [master]
        certname = puppet-server1.test.net
        storeconfigs = true
        storeconfigs_backend = puppetdb
        reports = store, http
        reporturl = http://puppet-server2.test.net/reports/upload

    and the following in auth.conf:

        # access for puppet dashboard facts
        path /facts
        auth yes
        method find, search
        allow dashboard

    The puppet dashboard has this configured in /usr/share/puppet-dashboard/config/settings.yml:

        # Hostname of the inventory server.
        inventory_server: 'puppet-server3.test.net'
        # Port for the inventory server.
        inventory_port: 8081

    The inventory is on, and I see a link to the inventory in the dashboard, but I am getting this error:

        Inventory
        Could not retrieve facts from inventory service:
        SSL_connect SYSCALL returned=5 errno=0 state=SSLv3 read finished A

    Clearly an SSL error, but I have followed the documentation and have no idea how to fix this. Can anyone help please? Oli
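
    A handshake dying at "SSLv3 read finished A" usually means the far end dropped the connection mid-handshake, typically because puppetdb did not accept the certificate the dashboard presented (puppetdb only trusts client certs signed by the puppet CA). Probing the endpoint by hand narrows it down; a sketch where the cert/key paths are assumptions to be replaced with whatever your dashboard is configured to use:

        openssl s_client -connect puppet-server3.test.net:8081 \
            -cert /path/to/dashboard.cert.pem \
            -key  /path/to/dashboard.private_key.pem \
            -CAfile /var/lib/puppet/ssl/certs/ca.pem

    If the handshake completes there but not from the dashboard, the dashboard's certificate settings are the problem; if it fails the same way, look at puppetdb's jetty SSL configuration and whether its CA matches the master's.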

  • Add static route through DHCP

    - by MathieuK
    I'm trying to get an OS X Lion Server to provide a static route to its clients (all OS X Lion) over DHCP, but I can't get the clients to actually apply the static route. So far, I've managed to get the DHCP server (bootpd) to serve DHCP option 33 (static_route) in its offers by editing /etc/bootpd.plist and adding something like:

        <key>dhcp_option_33</key>
        <data>[some base64 goes here]</data>

    and restarting the DHCP service. On the client, I've managed to get the option requested by adding option 33 to the DHCPRequestedParameterList key:

        <key>DHCPRequestedParameterList</key>
        <array>
            ... keys snipped for brevity ...
            <integer>33</integer>
        </array>

    and rebooting the client. This makes the client request the static_route option from the DHCP server (I can see the proper output in `ipconfig getpacket en0`), but it doesn't actually apply the route. Has anyone ever succeeded in applying static_route options on OS X clients through DHCP?
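
    Two things worth checking from here. First, confirm the option really arrives intact in the reply (a malformed base64 payload would be requested but silently ignored). Second, if Lion's client simply never applies option 33 (many DHCP clients only honour the newer classless-route option 121), a manually added route is a workable stopgap. A sketch, with 10.1.2.0/24 via 192.168.1.1 as a hypothetical route:

        ipconfig getpacket en0 | grep -A2 -i option    # is option 33 present, with the bytes you expect?
        # stopgap: apply the route by hand (or from a launchd job at network-up):
        sudo route -n add -net 10.1.2.0/24 192.168.1.1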

  • 10GbE SFP+ crossover cable required? Is there such a thing?

    - by dc-patos
    To preface: this is my first experience with 10GbE networking, and I have encountered an issue for which research does not seem to document a solution. I have two servers (an older DL580 G5 and a DL380 G5), each with an HP NC522SFP dual-port 10GbE SFP+ adapter. I have purchased passive copper direct-attach cables (which look like Twinax), and they work well when I connect them to the SFP+ ports on my Dell 5524 switch. However, if I directly connect the two servers with the same cable, the link doesn't come up. I am running Windows Server 2012 Standard on each server.

    My intention is to use one of these servers as a home-brew SAN, and I would like to enable multiple 10GbE paths for iSCSI traffic. My questions: Can I connect the two adapters to each other, as I would with earlier, slower generations of Ethernet? If I can, do I require a crossover cable, or some other type of SFP+ cabling, to do this? My 10GbE SFP+ switch ports are at a premium, but server-to-server connections are doable in small numbers, and I would really like the multiple paths this would give me. Is there a simple solution?

  • Cisco ASA user authentication options - OpenID, public RSA sig, others?

    - by Ryan
    My organization has a Cisco ASA 5510 which I have set up as the firewall/gateway for one of our offices; most resources a remote user would come looking for exist inside. I've implemented the usual arrangement (basic inside networks with outbound NAT, one primary outside interface with some secondary public IPs in the PAT pool for public-facing services, a couple of site-to-site IPsec links to other branches, etc.) and I'm now working on VPN. I have the WebVPN (clientless SSL VPN) working, even traversing the site-to-site links. For the moment I'm leaving a legacy OpenVPN AS in place for thick-client VPN.

    What I would like to do is standardize on one authentication method for all VPN, then switch to the Cisco's IPsec thick-VPN server. So I'm trying to figure out what's really possible for authenticating these VPN users (thick-client and clientless). My organization uses Google Apps, and we already use dotnetopenauth to authenticate users for a couple of internal services; I'd like to be able to do the same thing for thin and thick VPN. Alternatively, a signature-based solution using RSA public keypairs (ssh-keygen style) would be useful to identify user@hardware. I'm trying to get away from legacy username/password auth, especially if it's internal to the Cisco (just another password set to manage and for users to forget). I know I can map against an existing LDAP server, but we have LDAP accounts created for only about 10% of the user base (mostly developers, for Linux shell access).

    I guess what I'm looking for is a piece of middleware which appears to the Cisco as an LDAP server but interfaces with the user's existing OpenID identity. Nothing I've seen in the Cisco documentation suggests it can do this natively. RSA public keys would be a runner-up, and much, much better than standalone or even LDAP auth. What's really practical here?

  • How to serve pages through multiple frameworks/template engines efficiently

    - by Leftium
    I would like to render a file that has both PHP tags and Web2py tags mixed together. To do this, I would like the web server to pass the file through Web2py, then through PHP. I found a method to call PHP from Web2py via Python (based on this method for running PHP on top of Django), but that method loses the benefits of any server optimizations from mod_php or FastCGI, like caching and multi-threaded operation: a new process is created for each PHP request, which is very slow. Is there a better way to efficiently render pages with both Web2py (Python) and PHP tags in the same file?

    Note that I am not looking for methods of serving PHP-only and Web2py-only files from the same server/domain. I prefer solutions for Apache 2 or Cherokee, but I'm open to using other web servers.

    Background info: I prefer to develop in Web2py, but we have this pre-existing system written in PHP. I would like to augment the PHP system with some of Web2py's features, like auth authentication/user management and the T() internationalization object. It would also make it much easier to port the PHP project to Web2py if it could be done piecemeal; since the PHP project consists of many files, it would greatly help if they did not need modification.
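
    For what it's worth, the linked method boils down to a two-pass pipeline: Web2py expands its own tags first, and the intermediate result is fed to the PHP interpreter. As a shell sketch of that pipeline (render_stage.py is a hypothetical script that prints the Web2py-rendered page to stdout; the PHP CLI then executes whatever PHP tags remain):

        python web2py.py -S myapp -R render_stage.py -A page.html | php > rendered_page.html

    The per-request interpreter start-up is exactly the cost the question wants to avoid; any faster variant would keep the PHP side resident (php-fpm or a FastCGI pool) and hand it the intermediate output over a socket instead.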

  • High apache load but zero traffic

    - by Adie
    I have a problem with a new server. It's a CentOS VPS with 1 GB of RAM, running WordPress. Traffic is under 100 visitors/hour, but Apache load gets high and the server hangs with zero free RAM; I can't even connect through SSH, and have to reboot the VPS to make it work again. Here is what the load on Apache looks like:

        Tasks:  66 total,   1 running,  65 sleeping,   0 stopped,   0 zombie
        Cpu(s):  1.6%us, 12.3%sy,  0.0%ni, 48.1%id, 23.0%wa,  4.8%hi, 10.2%si,  0.0%
        Mem:   1018776k total,   116620k used,   902156k free,     1236k buffers
        Swap:  1048568k total,  1013052k used,    35516k free,    26628k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         2949 apache    20   0  459m  42m 3732 D  3.0  4.2   0:09.23 httpd
         2959 apache    20   0  460m  29m 3744 D  2.0  3.0   0:02.72 httpd
         2968 apache    20   0  460m  26m 3808 D  2.0  2.6   0:02.27 httpd
         2972 apache    20   0  460m  24m 3784 D  2.0  2.5   0:02.44 httpd
         2986 apache    20   0  460m  29m 3784 R  2.0  2.9   0:02.40 httpd
         2969 apache    20   0  458m  29m 3864 D  1.6  3.0   0:02.63 httpd
         2974 apache    20   0  460m  25m 3820 D  1.6  2.6   0:02.43 httpd
         2990 apache    20   0  460m  23m 3920 D  1.6  2.4   0:02.36 httpd
         2994 apache    20   0  460m  31m 3756 D  1.6  3.2   0:02.62 httpd
         2956 apache    20   0  460m  26m 3740 D  1.3  2.7   0:02.73 httpd
         2957 apache    20   0  465m  22m 3644 D  1.3  2.3   0:02.80 httpd
         2967 apache    20   0  458m  24m 3764 D  1.3  2.5   0:02.60 httpd
         2970 apache    20   0  463m  25m 3764 D  1.3  2.6   0:03.07 httpd
         2971 apache    20   0  451m  22m 3792 D  1.3  2.3   0:02.47 httpd
         2973 apache    20   0  458m  25m 3768 D  1.3  2.6   0:02.52 httpd
         2987 apache    20   0  465m  20m 3772 D  1.3  2.1   0:03.02 httpd

    Sometimes the server stays up for more than 5-10 hours before the problems start.
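
    The Swap line is the tell: nearly the whole 1 GB of swap is consumed, so the box has already been pushed deep into swapping (the D states and 23% iowait fit the same picture). With each httpd child using roughly 20-40 MB resident, default prefork limits can ask for far more memory than a 1 GB VPS has. A sketch of conservative prefork settings for a box this size; the numbers are assumptions to be tuned against your measured per-child RSS:

        <IfModule prefork.c>
            StartServers          4
            MinSpareServers       2
            MaxSpareServers       8
            MaxClients           20      # ~20 children x ~30 MB leaves room for MySQL in 1 GB
            MaxRequestsPerChild 500      # recycle children so PHP memory bloat is returned
        </IfModule>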

  • postfix smtpd rejecting mail from outside network: match_list_match: no match

    - by Loopo
    My postfix (2.5.5-1.1) running on Ubuntu Server 9.04 started to reject mail arriving from outside about two weeks ago. A "manual" session via telnet shows that the connection is always closed after the "MAIL FROM: [email protected]" line is entered, with the message "Connection closed by foreign host." Doing the same from another client inside the LAN works fine. In the log files I get the line:

        lost connection after MAIL from xxxxx.tld[xxx.xxx.xxx.xxx]

    This comes after lines like:

        match_hostaddr: XXX.XXX.XXX.XXX ~? [::1]/128
        match_hostname: XXXX.tld ~? 192.168.1.0/24
        ...
        match_list_match: xxx.xxx.xxx.xxx: no match

    which seem to suggest some kind of filter that checks for allowed addresses. I have been unable to locate where this filter lives or how to turn it off; I'm not even sure that's what's causing my problem. Connections from inside the LAN don't get disconnected, even though they also show a "match_list_match: ... no match" line. I didn't change any configuration files recently; below is my main.cf as it currently stands. I don't really know what all the parameters do and how they interact; I just set it up initially and it worked fine (up to recently).

        smtpd_banner = $myhostname ESMTP $mail_name (GNU)
        biff = no
        readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/server.crt
        smtpd_tls_key_file=/etc/ssl/private/server.key
        #smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_sasl_auth_enable = no
        smtp_use_tls=no
        smtp_sasl_password_maps = hash:/etc/postfix/smtp_auth

        myhostname = XXXXXXX.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = XXXX.XXXX.com, XXXX.com, localhost.XXXXX.com, localhost
        relayhost = XXX.XXX.XXX.XXX
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.1.0/24
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        smtpd_sasl_local_domain =
        #smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_authenticated_header = yes
        broken_sasl_auth_clients = yes
        smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,reject_unauth_

    When checking the process list, postfix/smtpd runs as:

        smtpd -n smtp -t inet -u -c -o stress -v -v

    Any clues?
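
    Since the smtpd_recipient_restrictions line above is cut off mid-word, the first thing worth ruling out is a truncated or mistyped restriction name in the real main.cf; postfix warns about unknown restrictions and the session can fail as a result. A sketch of how to see what postfix actually parsed and to watch a failing session live:

        postconf -n | grep restrictions      # settings actually in effect, typos included
        tail -f /var/log/mail.log            # run the outside telnet test in parallel and look
                                             # for a warning or reject right before the
                                             # 'lost connection after MAIL' line

    The match_list_match lines themselves are only verbose-mode (-v -v) tracing of the mynetworks lookup, and seeing "no match" for an outside client is normal.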

  • Exchange MSExchangeIS Mailbox Store Error

    - by Bart Silverstrim
    The boss asked me to check whether I could figure out why he's had to restart the services on the Exchange server three mornings in a row now. While going through the system logs I ran across an error from the MSExchangeIS Mailbox Store, category General, event 9690. The message said (edited to generalize):

        Exchange store 'First Storage Group\Mailbox Store (Servername)': The logical
        size of this database (the logical size equals the physical size of the .edb
        file and the .stm file minus the logical free space in each) is 22 GB. This
        database size has exceeded the size limit of 22 GB. This database will be
        dismounted immediately.

    Hmm... this happened at five in the morning, and I'm thinking it's a pretty good hint that it leads to the culprit. Thing is, I'm not an Exchange expert, so I'm still googling around to figure out how to fix the problem. Any better guidance out there? Or am I barking up the wrong binary tree? Exchange System Manager reports that the server is "version 6.5 build 7638.2, SP2", Standard edition, which I believe is Exchange 2003. It's running on Windows Server 2003 R2 Standard, SP2.
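
    The symptoms fit the Exchange 2003 SP2 Standard database size cap: the store runs a size check in the early morning and dismounts the database when the configured limit is exceeded, and the limit can be raised by registry value (Microsoft documents a maximum of 75 GB on SP2 Standard). A sketch of the commonly documented change, with the caveat that the server- and GUID-specific key path, and the exact value name casing, must be read from your own registry rather than copied from here:

        rem key lives under HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\<ServerName>\Private-<GUID>
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\<ServerName>\Private-<GUID>" ^
            /v "Database Size Limit in GB" /t REG_DWORD /d 40

        rem restart the Information Store so the new limit is read:
        net stop MSExchangeIS
        net start MSExchangeIS

    Raising the cap only buys time; the longer-term fix is shrinking the store (mailbox limits, archiving, offline defrag) or moving to a version without the cap.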

  • systemd: enabling cherokee service as a `unit file`

    - by Calvin Cheng
    So I am learning how to use systemd to initialize my services automatically on server reboot. Of course, I first make sure I have systemd and some optional systemd-related packages installed:

        pacman -S systemd initscripts-systemd

    Installation seems to go well, and checking, I can see that systemd and its dependency libsystemd are installed, as is the optional package initscripts-systemd:

        [root@li280-195 ~]# pacman -Ss systemd
        extra/libsystemd 44-5 [installed]
            systemd client libraries
        extra/systemd 44-5 [installed]
            system and service manager
        extra/systemd-sysvcompat 2-2
            sysvinit compat symlinks for systemd
        community/initscripts-systemd 20120412-1 [installed]
            Arch specific systemd initialization/bootup scripts for systemd
        community/systemd-arch-units 20120412-2
            Arch specific Systemd unit files

    Next, I ensure that systemd is loaded when my server reboots, via GRUB in /boot/grub/menu.lst, like this:

        kernel /boot/vmlinuz root=/dev/xvda ro init=/bin/systemd

    Rebooting my server to check, everything loads up well and I can verify that systemd is operational via:

        systemctl list-unit-files

    However, I don't see my cherokee initialization script (which was simply created at /etc/rc.d/cherokee when I installed cherokee earlier via pacman -S cherokee) listed as one of my unit files. So the question is: how do I put my cherokee initialization script under systemd's control?
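
    The /etc/rc.d script won't show up as a unit by itself; writing a small native unit file is usually the cleaner route. A minimal sketch, assuming the cherokee binary lives at /usr/sbin/cherokee (check with `which cherokee`):

        # /etc/systemd/system/cherokee.service
        [Unit]
        Description=Cherokee web server
        After=network.target

        [Service]
        ExecStart=/usr/sbin/cherokee
        Restart=on-failure

        [Install]
        WantedBy=multi-user.target

        # then:
        #   systemctl daemon-reload
        #   systemctl enable cherokee.service
        #   systemctl start cherokee.service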

  • Apache 2.2 + mod_fcgid + PHP 5.4: (104) Connection reset by peer

    - by Michele Piccirillo
    On a Debian 6 VPS, I'm running PHP 5.4 via mod_fcgid on a couple of different virtual hosts, managed by Virtualmin GPL. At random, I get 500 Internal Server Errors; restarting Apache brings everything back to normal. Examining the logs, I find messages of this kind:

        [Thu Oct 04 15:39:35 2012] [warn] [client 173.252.100.117] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Thu Oct 04 15:39:35 2012] [error] [client 173.252.100.117] Premature end of script headers: index.php

    Any ideas about what is happening?

    UPDATE: I found a similar question whose author reported solving the problem by disabling APC. I tried following the advice, but I'm still getting the same errors.

    VirtualHost configuration:

        SuexecUserGroup "#1000" "#1000"
        ServerName example.com
        DocumentRoot /home/example/public_html
        ScriptAlias /cgi-bin/ /home/example/cgi-bin/
        DirectoryIndex index.html index.htm index.php index.php4 index.php5
        <Directory /home/example/public_html>
            Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
            allow from all
            AllowOverride All
            AddHandler fcgid-script .php
            AddHandler fcgid-script .php5
            FCGIWrapper /home/example/fcgi-bin/php5.fcgi .php
            FCGIWrapper /home/example/fcgi-bin/php5.fcgi .php5
        </Directory>
        <Directory /home/example/cgi-bin>
            allow from all
        </Directory>
        RemoveHandler .php
        RemoveHandler .php5
        IPCCommTimeout 61
        FcgidMaxRequestLen 1073741824

    php5.fcgi:

        #!/bin/bash
        PHPRC=$PWD/../etc/php5
        export PHPRC
        umask 022
        export PHP_FCGI_CHILDREN
        PHP_FCGI_MAX_REQUESTS=99999
        export PHP_FCGI_MAX_REQUESTS
        SCRIPT_FILENAME=$PATH_TRANSLATED
        export SCRIPT_FILENAME
        exec /usr/bin/php5-cgi

    Package versions:

        webmin-virtual-server/virtualmin-universal 3.94.gpl-2
        apache2/squeeze 2.2.16-6+squeeze8
        libapache2-mod-fcgid/squeeze 1:2.3.6-1+squeeze1
        php5 5.4.7-1~dotdeb.0
        php5-apc 5.4.7-1~dotdeb.0
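
    A classic cause of sporadic "(104) Connection reset by peer" under mod_fcgid is a lifetime mismatch: PHP exits on its own after PHP_FCGI_MAX_REQUESTS, but mod_fcgid doesn't know that budget and can hand a request to a process that is just shutting down. The usual remedy is to make mod_fcgid's own per-process request cap match (or stay below) PHP's; a sketch:

        # Apache side, e.g. in the vhost or /etc/apache2/mods-available/fcgid.conf:
        FcgidMaxRequestsPerProcess 10000

        # php5.fcgi side: keep PHP's budget at least as large
        PHP_FCGI_MAX_REQUESTS=10000
        export PHP_FCGI_MAX_REQUESTS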

  • ServerName wildcards in Apache name-based virtual hosts?

    - by Martijn Heemels
    On our LAN I've set up several 'fake' TLDs in the DNS server, with the intention of using them for Apache name-based virtual hosting. I'd like to combine this with mass virtual hosting (i.e. VirtualDocumentRoot) on an Ubuntu 10.04 LAMP server. However, I can't get it to select the right vhost! Here is a summary of the Apache config:

        NameVirtualHost 10.10.0.205

        <VirtualHost 10.10.0.205>
            ServerName *.test
            VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/
            CustomLog /var/log/apache2/access.log vhost_combined
        </VirtualHost>

        <VirtualHost 10.10.0.205>
            ServerName *.dev
            VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/
            CustomLog /var/log/apache2/access.log vhost_combined
        </VirtualHost>

    A hostname such as www.domain.com.dev correctly resolves to 10.10.0.205, but always selects the top vhost, instead of the bottom one, which matches more closely. I was under the impression that Apache would first try to match the ServerName before defaulting to the top vhost for a given IP. What am I doing wrong? Or is this not possible, and must I use another IP for each TLD? apachectl -S outputs (trimmed):

        10.10.0.205:*       is a NameVirtualHost
            default server *.test
            port * namevhost *.test
            port * namevhost *.dev
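
    ServerName is documented to take a single fully-qualified hostname, not a pattern, so neither vhost ever matches by name and the first one simply wins as the default for that IP. Wildcards belong in ServerAlias. A sketch of the same pair with the match moved there (the ServerName values are just unique placeholders):

        <VirtualHost 10.10.0.205>
            ServerName catchall.test
            ServerAlias *.test
            VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/
        </VirtualHost>

        <VirtualHost 10.10.0.205>
            ServerName catchall.dev
            ServerAlias *.dev
            VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/
        </VirtualHost>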

  • Apache MaxClients doubt

    - by Milan Babuškov
    I have a busy Apache server serving only dynamic PHP+MySQL pages. It is a prefork Apache, version 2.2.4, with the following config:

        KeepAlive off
        StartServers 8
        MinSpareServers 32
        MaxSpareServers 64
        ServerLimit 512
        MaxClients 512
        MaxRequestsPerChild 4000

    MaxClients/ServerLimit used to be set to 256, but I got the following error in error_log, so I increased it:

        [error] server reached MaxClients setting, consider raising the MaxClients setting

    It seems to work now, but I have a doubt. Looking at the MySQL log of queries, I have a couple hundred clients per second, but "ps ax" only shows 8, 9 or 10 processes running:

        [root@engine ~]# ps ax | grep http | wc -l
        10

    I even got this many processes while the above error message was appearing in error_log. This made me investigate further. When I run netstat -a, I get something like this:

        tcp        0      0 engine:http  adsl-105-143.teol.net:21453   TIME_WAIT
        tcp        0      0 engine:http  118-36.static.kds:mck-ivpip   TIME_WAIT
        tcp        0      0 engine:http  118-36.static:oce-snmp-trap   TIME_WAIT
        tcp        0      0 engine:http  118-36.static.kd:unifyadmin   TIME_WAIT
        tcp        0      0 engine:http  cable-188-2-25-29.dyna:4906   TIME_WAIT
        tcp        0      0 engine:http  adsl-105-143.teol.net:21458   TIME_WAIT
        tcp        0      0 engine:http  109-92-83-91.dynamic.:62821   TIME_WAIT
        tcp        0      0 engine:http  cable-89-216-142-192.:63576   TIME_WAIT
        tcp        0      0 engine:http  109-92-83-91.dynamic.:62819   TIME_WAIT
        tcp     1081      0 engine:http  pttnetadsl38-36.ptt.r:50496   ESTABLISHED
        tcp        0      0 engine:http  cable-188-2-36-196.dyn:4136   TIME_WAIT
        tcp        0      0 engine:http  cable-89-216-142-192.:63580   TIME_WAIT
        tcp        0      0 engine:http  cable-89-216-142-192.:63581   TIME_WAIT
        etc.

    Counting those, I get:

        [root@engine ~]# netstat -a | grep http | wc -l
        431

    Can anyone tell me what is really going on here, and how to make sure my server keeps working, given that I only use 50% of the available RAM in the machine?
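
    Those two numbers aren't in conflict: a TIME_WAIT socket is a connection the kernel is merely aging out after close (for 2MSL, typically on the order of a minute) and it holds no Apache process, so with KeepAlive off and fast PHP requests, a handful of busy children can easily leave hundreds of TIME_WAIT entries behind. Counting by state makes the picture clearer; a sketch:

        netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn
        # ESTABLISHED ~ connections an httpd child is serving right now
        # TIME_WAIT   ~ already-finished connections; kernel bookkeeping only

    The MaxClients warning fires only when all children are simultaneously busy, which is a momentary spike and a separate event from anything ps or netstat show between spikes.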

  • SSH freeze when UFW is enabled

    - by Cristian Vrabie
    I have a small Ubuntu 10.10 server and I recently noticed weird behavior (not sure if it was happening before). With ufw enabled (default deny all incoming, allow all outgoing, allow all HTTP, allow all on the random port I use for SSH), when I perform certain actions in an SSH session, the console completely freezes. The server continues to work, and if I close the console I can start another SSH session. This happens no matter where I log in from (tried from another Ubuntu machine and a Mac). The actions are fairly reproducible: for example, vim-ing some config files (though vim-ing other files works), cat-ing some other file, etc. The freeze never happens if ufw is disabled. Any idea what's going on? Thanks! Cristian

    Addition: if you're wondering, yes, I have TcpKeepAlive set to yes, and I doubt it's related (it would happen with ufw disabled too). My ufw configuration is below. Also, I don't know if it has something to do with it, but the server has two IPs: the SSH domain is configured on one, and HTTP is served (via apache2) on the other.

        Status: active
        Logging: on (low)
        Default: deny (incoming), allow (outgoing)
        New profiles: skip

        To             Action      From
        --             ------      ----
        19922/tcp      ALLOW IN    Anywhere
        9418/tcp       ALLOW IN    Anywhere
        80/tcp         ALLOW IN    Anywhere
        443/tcp        ALLOW IN    Anywhere
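
    Since the freeze appears only with ufw enabled and only on actions that push a burst of output (vim redrawing a file, cat of a large file), the quickest way to catch the culprit is to watch ufw's log while reproducing it: a dropped packet shows up as a [UFW BLOCK] line at exactly that moment. A sketch:

        sudo ufw logging medium
        sudo tail -f /var/log/ufw.log     # on some setups the entries land in /var/log/syslog or kern.log
        # in a second SSH session: trigger the freeze (vim the problem file)

    If the blocked packets turn out to be fragments or ICMP (path-MTU messages), that would also fit the "only certain files" pattern, since only responses above a certain size would be affected.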

  • Installing and running a guest OS on KVM-qemu with only serial console access

    - by nixnotwin
    I am trying to install a BSD distro with virt-install. With a Linux distro I used this:

        virt-install -n debian -r 1024 --vcpus=1 --accelerate -v \
            --disk /var/kvm/installation-disks/debian.img,size=6 \
            --nographics --network=bridge:br0,model=ne2k_pci,mac=52:54:00:66:68:09 \
            -l http://ftp.de.debian.org/debian/dists/squeeze/main/installer-amd64/current/images/ \
            -x console=ttyS0,115200

    This loads the installer directly from the online mirror. With Fedora I used this mirror:

        http://www.nic.funet.fi/pub/mirrors/fedora.redhat.com/pub/fedora/linux/releases/16/Fedora/x86_64/os/

    Are there such mirrors for FreeBSD or OpenBSD? The reason I want directly installable FTP/HTTP mirrors is that I can access my physical server only via SSH, and it doesn't have an X server or a window manager to give me a VNC GUI.

    When I tried installing CentOS 6 from an online mirror, I was able to finish the installation via serial console, but after I rebooted, the serial console never worked for me again. I tried everything possible: editing menu.lst, inittab and securetty files. Fedora 16 booted fine from the serial console but got stuck when it loaded the anaconda installer. I tried editing the FreeBSD ISO installation media by adding a serial console option to the boot options, and the installation was successful, but I couldn't boot into the system because it wasn't giving console access, and I couldn't edit any files, since a UFS partition cannot be mounted with write access on my Ubuntu Server 10.04. Only Debian squeeze worked well; it worked for me without editing a single configuration file.

    I want CLI versions of Fedora/CentOS and FreeBSD/OpenBSD, but it looks like there isn't much hope for me to have them, as I have to depend on a serial console to do everything.
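
    For the FreeBSD case at least, the "installs fine, silent after reboot" symptom is usually just the installed system's boot loader not being pointed at the serial port; the fix is two small files on the guest's filesystem, which can be edited by booting the install media once more with the same serial trick and dropping to a shell. A sketch (ttyu0 is the first serial port on FreeBSD 8+; older releases call it ttyd0):

        # /boot/loader.conf
        console="comconsole"
        comconsole_speed="115200"

        # /etc/ttys: ensure a login prompt on the serial line
        ttyu0 "/usr/libexec/getty std.115200" vt100 on secure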
