Search Results

Search found 10674 results on 427 pages for 'glib config'.

  • Monitoring a Java daemon on CentOS

    - by user111196
    I have a Java application that I run as a daemon using the YAJSW wrapper tool. I need to monitor it: if it goes down I want some kind of alert, or even an automatic restart. Is there any tool that can do this in a CentOS environment? The output of ps -ef | grep java:

        root  3109     1  0 Apr06 ?        00:04:35 /usr/java/jdk1.6.0_18/bin/java -Dwrapper.pidfile=/var/run/wrapper.commServer.pid -Dwrapper.service=true -Dwrapper.visible=false -jar /usr/local/yajsw-beta-10.2/wrapper.jar -c /usr/local/yajsw-beta-10.2/conf/wrapper.conf
        root  3132  3109  0 Apr06 ?        00:25:26 /usr/java/jdk1.6.0_18/bin/java -classpath /usr/local/yajsw-beta-10.2/./wrapperApp.jar:/usr/local -Xrs -Dwrapper.service=true -Dwrapper.console.visible=false -Dwrapper.visible=false -Dwrapper.pidfile=/var/run/wrapper.commServer.pid -Dwrapper.config=/usr/local/yajsw-beta-10.2/conf/wrapper.conf -Dwrapper.port=15003 -Dwrapper.key=4276015160565963367 -Dwrapper.teeName=4276015160565963367$1333699547154 -Dwrapper.tmpPath=/tmp org.rzo.yajsw.app.WrapperJVMMain
        root 23986 23945  0 16:53 pts/0    00:00:00 grep java

    pidof java returns: 3132 3109
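
    A hedged sketch of one option: monit can watch the wrapper's PID file and alert or restart when the process disappears. The start/stop script paths below are assumptions, not taken from the question:

        # /etc/monit.d/commserver (sketch -- script paths are hypothetical)
        check process commServer with pidfile /var/run/wrapper.commServer.pid
            start program = "/usr/local/yajsw-beta-10.2/bin/startDaemon.sh"
            stop program  = "/usr/local/yajsw-beta-10.2/bin/stopDaemon.sh"
            alert admin@example.com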

  • Disabling apc.stat causes 500 internal server error

    - by Legit
    When I turn off apc.stat, I get a 500 internal server error. The Apache error_log says:

        [Tue Jun 26 10:02:59 2012] [error] [client 127.0.0.1] PHP Warning: require(): Filename cannot be empty in /var/www/site1/public/index.php on line 17
        [Tue Jun 26 10:02:59 2012] [error] [client 127.0.0.1] PHP Fatal error: require(): Failed opening required '' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/site1/public/index.php on line 17

    I checked that line, and here is what it contains:

        require('./wp-blog-header.php');

    I don't see anything wrong with it. My current setup: APC version 3.1.10, PHP version 5.4.4. How do I resolve this error when apc.stat is disabled?
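
    One hedged explanation: with apc.stat = 0, APC serves cached files without re-stat()ing them, and relative require() paths like this one are a known trouble spot. A minimal sketch of the usual workaround is to anchor the include to an absolute path:

        // sketch: resolve the include relative to the current file's directory
        require(dirname(__FILE__) . '/wp-blog-header.php');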

  • Why can't I add a hot spare in FreeBSD?

    - by hamlet
    Why can't I add a hot spare? Can anybody help me fix it?

        # mfiutil add e1:s1 mfid0
        mfiutil: Drive 1 is not available

    My mfi status:

        # mfiutil show config
        mfi0 Configuration: 1 arrays, 1 volumes, 0 spares
            array 0 of 2 drives:
                drive 0 ( 137G) ONLINE <HITACHI HUS153014VLS300 A410 serial=JFWHSB4C> SAS enclosure 1, slot 0
                drive 1 ( 137G) ONLINE <HITACHI HUS153014VLS300 A410 serial=JFWJ3AEC> SAS enclosure 1, slot 1
            volume mfid0 (136G) RAID-1 64K OPTIMAL spans:
                array 0

        # mfiutil show events
        1468 (boot + 25s/BATTERY/WARN) - Battery removed
        1475 (boot + 52s/DRIVE/WARN) - PD 00(e1/s0) is not a certified drive
        1478 (boot + 52s/DRIVE/WARN) - PD 01(e1/s1) is not a certified drive
        1480 (boot + 64s/BATTERY/WARN) - BBU disabled; changing WB virtual disks to WT

        # mfiutil show volumes
        mfi0 Volumes:
          Id     Size    Level   Stripe  State    Cache     Name
         mfid0 (  136G)  RAID-1     64K  OPTIMAL  Disabled
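
    A hedged observation: e1:s1 is drive 1, which is already ONLINE as a member of array 0, and mfiutil only accepts unconfigured drives as hot spares. A sketch of the usual sequence, with the spare's slot (s2) as a hypothetical example:

        # list drives and look for one in UNCONFIGURED GOOD state
        mfiutil show drives
        # dedicate the unconfigured drive as a spare (slot s2 is hypothetical)
        mfiutil add e1:s2 mfid0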

  • SVN: how to change the hostname?

    - by elon
    I'd like to set up an SVN repo on a local machine, but we already have Apache running under localhost. When I use the installer from the Subversion site with the Apache option, it installs a second Apache, and when I type "localhost" in the browser I see this new Apache (not the old one). The question is how to run this new Apache under a different host name. The installer asks for one, and I did set a different name, but it still answers on localhost (nothing happens). I'd like to reach SVN via a URL such as "svnrepo", not "localhost". What can I do about it? Which lines of the config should be changed (and what else needs changing)? Another way I'm thinking of to solve this is to integrate the svn-apache module with my existing Apache, but I don't really know how to do that either (my Apache is 2.2.6).
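
    One hedged route for the second idea: load mod_dav_svn into the existing Apache 2.2 and serve the repository from a name-based virtual host. The module and repository paths below are assumptions:

        # httpd.conf (sketch -- module and repo paths are hypothetical)
        LoadModule dav_module     modules/mod_dav.so
        LoadModule dav_svn_module modules/mod_dav_svn.so

        <VirtualHost *:80>
            ServerName svnrepo
            <Location /svn>
                DAV svn
                SVNParentPath /var/svn/repos
            </Location>
        </VirtualHost>

    For the name "svnrepo" to resolve on the local machine it also needs a hosts entry, e.g. a "127.0.0.1 svnrepo" line in /etc/hosts (or C:\Windows\System32\drivers\etc\hosts).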

  • run two apache servers on one computer

    - by harry_T
    I would like to run two XAMPP Apache servers (and MySQL) on one Windows computer. My first idea was to run one under the directory XAMPP and the other under XAMPP_B. Why, you ask? I have two applications that each have to live in the "root" directory of localhost. The two servers do not have to be active at the same time, so I don't think there will be any conflicts. I will have to modify my.cnf in MySQL, plus httpd.conf, apache_start and maybe other config files as well. Or maybe someone can suggest a better way...
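
    If both instances ever do need to run at once, a hedged sketch of the handful of settings that must differ in the second copy (the ports and paths below are illustrative assumptions):

        # XAMPP_B/apache/conf/httpd.conf (sketch)
        Listen 8080
        ServerName localhost:8080
        DocumentRoot "C:/XAMPP_B/htdocs"

        # XAMPP_B/mysql/bin/my.ini (sketch) -- change the port in both
        # the [mysqld] and [client] sections
        port = 3307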

  • SSH and Active Directory authentication

    - by disserman
    Is it possible to set up a Linux (and Solaris) SSH server to authenticate users this way: say user john is a member of the group Project1_Developers in Active Directory. On server A (running Linux, with access to the AD via e.g. LDAP), the SSH server's LDAP (or other module) authentication config contains something like:

        root=Project1_Developers,Company_NIX_Admins

    When john connects to server A with his username "john" and his domain password, the server checks john's group in the domain, and if the group is Project1_Developers or Company_NIX_Admins, logs him in locally as root, with root privileges. The idea is to have only root and system users on the server, without adding user "john" to every server John can log in to. Any help, or ideas on how to achieve this or something similar? AD is preferred, but any comparable solution is also possible. P.S. Please don't open a discussion about whether logging in as root over SSH is secure, thanks. :)
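
    There is no stock sshd option that maps an AD group onto the root account, but a hedged sketch of the usual approximation: authenticate the groups against AD (via e.g. sssd or pam_ldap) and grant them passwordless root through sudo. The group names come from the question; everything else is an assumption:

        # /etc/ssh/sshd_config (sketch)
        AllowGroups project1_developers company_nix_admins

        # /etc/sudoers.d/ad-admins (sketch)
        %project1_developers  ALL=(ALL) NOPASSWD: ALL
        %company_nix_admins   ALL=(ALL) NOPASSWD: ALL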

  • vsftpd: refusing to run with writable root inside chroot

    - by MrROY
    I want to set up an anonymous-only FTP server (able to upload files). Here is my config file:

        listen=YES
        anonymous_enable=YES
        anon_root=/var/www/ftp
        local_enable=YES
        write_enable=YES
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chroot_local_user=YES
        dirmessage_enable=YES
        use_localtime=YES
        secure_chroot_dir=/var/run/vsftpd/empty
        rsa_cert_file=/etc/ssl/private/vsftpd.pem
        pam_service_name=vsftpd

    But when I try to connect:

        kan@kan:~$ ftp yxxxng.bej
        Connected to yxxx.
        220 (vsFTPd 2.3.5)
        Name (yxxxg.bej:kan): anonymous
        331 Please specify the password.
        Password:
        500 OOPS: vsftpd: refusing to run with writable root inside chroot()
        Login failed

    Can anyone help?
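
    vsftpd 2.3.5 refuses to enter a chroot whose root directory is writable. A hedged sketch of the common fix: make the chroot root read-only and upload into a writable subdirectory (the uploads name and ftp ownership are assumptions):

        chmod a-w /var/www/ftp
        mkdir /var/www/ftp/uploads
        chown ftp:ftp /var/www/ftp/uploads

    On builds that support it, allow_writeable_chroot=YES in vsftpd.conf is the alternative, but that option is not present in every 2.3.5 package.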

  • tracd multiple projects+nginx reverse proxy

    - by Xeross
    I am trying to set up nginx as a reverse proxy in front of tracd, using only one tracd instance. Here's my config for this domain:

        server {
            listen 80;
            server_name bugs.XXXXXXXX.com;
            access_log /var/log/nginx/XXXXXXXX-bugtracker.access.log proxy;

            location / {
                rewrite ^/bugtracker/(.*)$ /$1;
                rewrite ^/bugtracker$ /;
                proxy_pass http://127.0.0.1:81/bugtracker/;
                proxy_redirect default;
                proxy_set_header Host $host;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    The rewrite rules are there because all the URLs tracd emits look like /bugtracker/something. That is normal tracd behaviour, but Trac lives at bugs.XXXXXXXX.com/ and not at bugs.XXXXXXXX.com/bugtracker. How can I make tracd/Trac emit the (in this case) correct URLs?
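
    A hedged alternative to rewriting: tracd's -s/--single-env flag serves one environment at the root of its URL space, so the /bugtracker prefix never appears in generated links. The environment path below is an assumption:

        # serve a single Trac environment at / (env path is hypothetical)
        tracd -s -p 81 /var/trac/bugtracker

    The nginx location can then shrink to a plain proxy_pass http://127.0.0.1:81/; with no rewrites.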

  • Issue using Postfix as an authenticated SMTP client relay to Exchange 2010

    - by Gk
    Hi, I'm using Postfix to relay mail to Exchange 2010. Here is my config:

        relayhost = [smtp.exchange.2010]
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/relay_passwd
        smtp_sasl_security_options =
        #smtp_sasl_mechanism_filter = ntlm

    (/etc/postfix/relay_passwd contains the login information for some accounts on Exchange.) With this configuration I can relay email to Exchange. The problem: messages sent from Postfix carry the header

        X-MS-Exchange-Organization-AuthAs: Anonymous

    and are treated as unauthenticated on the Exchange side. For example, when sending to a distribution group that requires authenticated senders, I get: #550 5.7.1 RESOLVER.RST.AuthRequired; authentication required ##rfc822;[email protected]. Using Outlook with the same account as Postfix, I can send without problems. The difference I noticed between the two cases: Outlook authenticates with the NTLM mechanism, Postfix with LOGIN. Any idea?
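
    A hedged first step, given that observation: force Postfix's SASL client to offer NTLM by uncommenting the filter. This requires the Cyrus SASL NTLM plugin (e.g. a cyrus-sasl-ntlm package; the name varies by distro) to be installed:

        # main.cf (sketch)
        smtp_sasl_mechanism_filter = ntlm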

  • How to document Linux server configuration?

    - by Margaret Thorpe
    Hi, I have about 20 Linux servers whose configuration I need to document. I don't mean the detailed configuration of services, but rather user accounts, databases, database accounts, IP addresses, physical location, SSH port, etc. I know all this data lives in config files, but I want to centralize it. I'm considering just creating a spreadsheet to record it, but I wonder if there is something better (perhaps a small PHP/MySQL app) that would be more structured and complete than a hacked-together spreadsheet. What do you use?
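
    Whatever store ends up holding the inventory, the collection side can be scripted. A minimal sketch of a per-host collector (the choice of fields is an assumption; output lands in one text file per server):

        #!/bin/sh
        # gather-facts.sh -- hypothetical per-host inventory snapshot
        out="/tmp/$(hostname)-facts.txt"
        {
            echo "== host ==";       hostname; uname -r
            echo "== addresses ==";  ip addr show 2>/dev/null || ifconfig -a
            echo "== users ==";      awk -F: '$3 >= 500 {print $1}' /etc/passwd
            echo "== ssh port ==";   grep -i '^Port' /etc/ssh/sshd_config
            echo "== listeners =="; netstat -tln 2>/dev/null || ss -tln
        } > "$out"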

  • Caching DNS server (bind 9.4.2) CPU usage is extremely high

    - by Gk
    Hi, I have a caching-only DNS server which gets ~3k queries per second. Specs:

        Xeon dual-core 2.8GHz
        4GB of RAM
        CentOS 5.x (kernel 2.6.18-164.15.1.el5PAE)
        bind 9.4.2

    rndc status reports: recursive clients: 666/4900/5000. About 300 new queries (not in cache) arrive per second. With a single-threaded build, BIND always sits at 100% of one core. After I recompiled it multi-threaded, it uses nearly 200% across two cores :( There is no iowait, only sys and user time. I searched around but found no information on how BIND uses CPU. Why does it become the bottleneck? One more thing, here is the RAM usage:

        $ cat /proc/meminfo
        MemTotal:     4147876 kB
        MemFree:      1863972 kB
        Buffers:       143632 kB
        Cached:        372792 kB
        SwapCached:         0 kB
        Active:       1916804 kB
        Inactive:      276056 kB

    I've set max-cache-size to 0 so BIND can use as much RAM as it wants, but it always stops at ~2GB. Since we receive uncached queries every second, RAM should in theory be exhausted, but it isn't. Do you have any idea? TIA, -Gk
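
    Two hedged notes. On the memory ceiling: the el5PAE kernel is 32-bit, so a single named process can only address roughly 2-3GB of memory no matter what max-cache-size says, which would explain the stall near 2GB. On tuning, making the limits explicit rather than unlimited is a reasonable first experiment (the values are illustrative, not recommendations):

        // named.conf (sketch -- values are illustrative)
        options {
            recursive-clients 10000;
            max-cache-size    1024M;   // an explicit cap instead of 0 (unlimited)
        };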

  • Capistrano still asks for the 1st password even though I've set up an SSH key

    - by Greg
    Hi. Background: I've set up an SSH key to avoid having to use passwords with Capistrano, per http://www.picky-ricky.com/2009/01/ssh-keys-with-capistrano.html. A basic ssh to my server works fine without asking for a password. I'm using dreamhost.com for hosting. Issue: when I run 'cap deploy' I still get asked for the 1st password (even though the previous 2nd and 3rd password requests are now automated). The password is requested by the Capistrano step that starts with "git clone -q ssh://...". Question: is there something I've missed? How can I get "cap deploy" totally passwordless? Some excerpts from config/deploy.rb:

        set :use_sudo, false
        ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "id_rsa")]
        default_run_options[:pty] = true

    Thanks. PS: the permissions on the server are:

        drwx------ 2 mylogin pg840652 4096 2010-02-22 15:56 .ssh
        -rw------- 1 mylogin pg840652  404 2010-02-22 15:45 authorized_keys
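
    A hedged guess at the remaining prompt: the git clone runs on the deploy server, which then needs its own credentials for the repository host. The usual Capistrano 2 fix is SSH agent forwarding, so the clone reuses the locally loaded key:

        # config/deploy.rb (sketch, Capistrano 2 style)
        ssh_options[:forward_agent] = true

        # and locally, make sure the key is loaded into the agent:
        #   ssh-add ~/.ssh/id_rsa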

  • Set users as chrooted for SFTP, but allow them to log in via SSH

    - by Eghes
    I have set up an SSH server on Debian 7 for SFTP connections. I chrooted some users with this config:

        Match Group sftpusers
            ChrootDirectory /sftp/%u
            ForceCommand internal-sftp

    But if one of these chrooted users logs in via the SSH console, the login succeeds and the connection immediately closes. In the logs I see:

        Oct 17 13:39:32 xxxxxx sshd[31100]: Accepted password for yyyyyy from zzz.zzz.zzz.zzz port 7855 ssh2
        Oct 17 13:39:32 xxxxxx[31100]: pam_unix(sshd:session): session opened for user yyyyyyyyyyyy by (uid=0)
        Oct 17 13:39:32 d00hyr-ea1 sshd[31100]: pam_unix(sshd:session): session closed for user yyyyyyyyyyyy

    How can I chroot a user for SFTP only, and let the same user have a normal shell over SSH?
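
    ForceCommand internal-sftp applies to every connection the block matches, which is why the shell session closes. A hedged sketch of the usual arrangement: keep a dedicated group for sftp-only accounts and leave dual-use accounts out of it (the group name is illustrative):

        # /etc/ssh/sshd_config (sketch)
        # sftp-only accounts: chrooted, no shell
        Match Group sftponly
            ChrootDirectory /sftp/%u
            ForceCommand internal-sftp

    sshd applies the matching block to the whole connection, so a single account cannot be chrooted-for-SFTP and unchrooted-for-shell at the same time; the practical options are two groups or two accounts.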

  • What is the maximum number of virtualhosts Apache can handle?

    - by FractalizeR
    Hello. What is the maximum number of VirtualHosts Apache can handle on a single machine? (I don't mean anything load-related; let's suppose that's irrelevant to the question. And we take Apache alone, without any proxying layer like nginx in front.) I'm asking because on one forum a user reported that his Apache became unstable with more than 400 sites on a single machine. If you have a config that handles more than 400, please tell me here. Thanks.
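
    One concrete limit worth checking before Apache itself: per-vhost log files. Each CustomLog/ErrorLog keeps a file descriptor open, so hundreds of vhosts can exhaust the process's fd limit. A hedged sketch of the standard workaround is one shared log with the vhost name in the format:

        # httpd.conf (sketch): a single access log shared by all vhosts
        LogFormat "%v %h %l %u %t \"%r\" %>s %b" vcommon
        CustomLog logs/access_log vcommon

    The %v field records which virtual host served each request, and the split-logfile support script can separate the combined log afterwards.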

  • How to get Atheros ar242x wireless adapter working under Debian Linux?

    - by Mark
    Does anybody know how to get the Atheros AR242x wireless adapter working under Debian Linux (5.0.2 and/or 5.0.3)? My Debian live CDs and install CDs both dislike this card. Curiously, it seems to work on other, Debian-based distributions. Is this a free/non-free driver issue? I know Debian is strict about that. For what it's worth, the live CD doesn't seem to detect my wired LAN connection either... Specifically, this is on a Samsung R610 laptop (some versions of which have an Intel wireless adapter - this one definitely doesn't!). I've tried all sorts of things, but installing software on a live CD is obviously limited. I've also tinkered with network config files, kernel modules, etc., to no avail.
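
    A hedged pointer rather than a recipe: the AR242x is driven by the ath5k module, and support for this chip arrived in kernels newer than lenny's 2.6.26, which would explain why Debian-derived distros with newer kernels see the card. On an installed system (not a live CD), one route is a lenny-backports kernel; the archive line and package name below are assumptions worth verifying:

        # /etc/apt/sources.list (sketch)
        deb http://backports.debian.org/debian-backports lenny-backports main contrib non-free

        apt-get update
        apt-get -t lenny-backports install linux-image-2.6-686
        # after rebooting into the new kernel:
        modprobe ath5k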

  • buggy mouse click after a while

    - by sputnick
    The cursor of my mouse moves fine across my TwinView setup, but after a while - or if I start KDE via runlevels (I don't know why, but startx works better) - I can't click anything. I have replaced the mouse with another one and the problem persists, so it's not a hardware problem. It occurs under both GNOME and KDE. My config:

        PC Dell Optiplex 780
        Arch Linux x86_64 (up to date)
        xf86-input-evdev 2.6.0-4
        xorg-server 1.11.2-2
        kernel linux-3.1.1-1-ARCH

    My Xorg log contains no "EE" (error) lines. Any clue?

  • Correct password for SSH key rejected when SSH'd into machine

    - by user20342
    When I am logged into my machine directly, I can do all git operations, and when prompted for a password, the password is accepted. When I ssh into the same box and run git operations on the same repos, the password is rejected. The relevant section of .ssh/config looks like this:

        # Generic settings
        Host *
            ServerAliveInterval 600
            ControlPath /tmp/ssh-%r@%h:%p
            ControlMaster auto
            KeepAlive yes
            IdentityFile ~/.ssh/id_rsa.pub

    The transaction looks like this when I'm ssh'd into the box:

        {12-12-03 9:41}hbrown-wks2:~/workspace/spt/project@master??? hbrown% git pull
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Permission denied (publickey).
        fatal: Could not read from remote repository.
        Please make sure you have the correct access rights and the repository exists.

    Using bash does not appear to make a difference (i.e. ssh-agent /bin/bash). This is a recent development, but I can't identify the change that caused it.
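
    A hedged observation about the config itself: IdentityFile points at the public half of the key (id_rsa.pub), and the prompts echo that same path. ssh wants the private key file; when logged in directly, an agent or the default key lookup is likely papering over this. A sketch of the fix:

        # ~/.ssh/config (sketch)
        Host *
            IdentityFile ~/.ssh/id_rsa    # the private key, not the .pub file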

  • Ubuntu and MySQL server: something isn't allowing me to connect

    - by acidzombie24
    I have a question about MySQL settings (http://serverfault.com/questions/94054/remote-connections-and-mysql-on-ubuntu/94088#94088), and now I want to figure out why I cannot connect. I made sure bind-address was commented out. I can ping the server from within the VM, but from within the VM

        mysqladmin --protocol=tcp --host=self_ip ping

    fails. I also followed along and checked that my ports are open, and they look like they are. I set up Samba on the same VM and can access it with no problem. Ubuntu does not appear to have a firewall enabled either (I verified this earlier), so I am stumped as to why the server refuses my connection. Apparently the same config file works for someone else: http://www.pastie.org/742545. I am using Ubuntu 6.06 LTS purely for 'support' reasons, so hopefully this will be 'easy'?
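
    A hedged checklist for this vintage of Ubuntu/MySQL: besides bind-address, the old my.cnf may contain a skip-networking line that disables TCP entirely, and it wins even with bind-address commented out. Worth confirming both, then checking the listener:

        # /etc/mysql/my.cnf -- make sure BOTH stay commented out:
        # skip-networking
        # bind-address = 127.0.0.1

        /etc/init.d/mysql restart
        netstat -ltn | grep 3306    # should show 0.0.0.0:3306 LISTEN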

  • Nginx IP whitelist

    - by Will
    Is it possible to create an IP whitelist for my nginx proxy server without adding allow or deny lines in the config file? Could nginx consult a separate database to check whether a user is allowed to access the website? Ideally nginx would query an external database, or at minimum a list of allowed IPs in a file on the same server, so I can update the list easily without restarting nginx every time. In the future I would like to link nginx to my website: a user will log in, their IP will be linked to their account, and they will be able to update the IP to their current one to regain access. So it would be easier if I had an external list of IPs in some kind of database. Any help is appreciated.
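
    A hedged middle ground using stock nginx: the geo module can read the allow-list from an included file that a script regenerates from the database, followed by a reload (not a restart). The file path and variable name are illustrative:

        # nginx.conf (sketch, http context)
        geo $whitelisted {
            default 0;
            include /etc/nginx/whitelist.conf;   # generated file: one "1.2.3.4 1;" line per allowed IP
        }

        server {
            location / {
                if ($whitelisted = 0) { return 403; }
                proxy_pass http://backend;
            }
        }

    After regenerating whitelist.conf from the database, nginx -s reload picks it up without dropping live connections.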

  • Threading in Thunderbird based on subject

    - by MrStatic
    I am on Thunderbird 3.0.4, which is the latest (as of now) for Windows. I have edited about:config as per the Mozilla Mail/News wiki, and I have also tried with mail.correct_threading set to false. After restarting Thunderbird, it still does not thread by subject/date alone. We use DeskPro, which emails every tech each time a support ticket is created or replied to, but since these are notices they do not include In-Reply-To headers. For these emails, the subject line is exactly the same for each ticket. Curious if anyone can shed some light on this.
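
    For reference, a hedged summary of the two prefs that govern subject-only threading in Thunderbird 3 (this is my reading of the Mail/News wiki and worth re-checking there):

        // about:config (sketch)
        mail.strict_threading  = false   // permit threading on matching subjects alone
        mail.thread_without_re = true    // match subjects even when "Re:" is absent

    Threading is also a per-folder view setting (View > Sort by > Threaded), so it has to be enabled in the folder that receives the DeskPro notices.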

  • MySQL port 3306 became filtered when configured with Keepalived on Ubuntu server 12.04 lts

    - by Ludwig
    I'm configuring two load balancers (lb01 & lb02) with keepalived in front of my two MySQL servers (db01 & db02) on the standard port 3306. There is a virtual IP address (192.168.205.10) to access them, which also acts as failover, but somehow the web server in front can't reach MySQL through the VIP. Here is my config (only the MySQL part).

    Keepalived on LB01:

        virtual_server 192.168.205.10 3306 {
            delay_loop 6
            lb_algo rr
            lb_kind DR
            protocol TCP
            real_server 192.168.205.4 3306 {
                weight 10
                TCP_CHECK {
                    connect_port 3306
                    connect_timeout 2
                }
            }
        }

    Keepalived on LB02:

        virtual_server 192.168.205.10 3306 {
            delay_loop 6
            lb_algo rr
            lb_kind DR
            protocol TCP
            real_server 192.168.205.6 3306 {
                weight 10
                TCP_CHECK {
                    connect_port 3306
                    connect_timeout 2
                }
            }
        }

    I have already commented out the "bind-address=127.0.0.1" line in both servers' my.cnf, and removed all firewall rules from my Ubuntu servers (ufw and iptables). Any help? Thanks.
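
    A hedged suspicion, given lb_kind DR: with direct routing, each real server must itself own the VIP on a non-ARPing interface, or it drops packets addressed to 192.168.205.10. A sketch of the usual real-server setup, run on db01 and db02:

        # bind the VIP on loopback so the real server accepts DR traffic
        ip addr add 192.168.205.10/32 dev lo

        # keep the real servers from answering ARP for the VIP
        sysctl -w net.ipv4.conf.all.arp_ignore=1
        sysctl -w net.ipv4.conf.all.arp_announce=2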

  • How to stop Nginx sending static file requests to the CakePHP app controller when running Cake in a subfolder

    - by Throlkim
    I'm trying to run a CakePHP app from within a subfolder on nginx, but the static files are not being found and are instead passed to the app controller. Here's my current config:

        location /uniquetv {
            index index.php index.html;
            if (-f $request_filename) {
                break;
            }
            if (!-e $request_filename) {
                rewrite ^/uniquetv(.+)$ /uniquetv/webroot/$1 last;
                break;
            }
        }

        location /uniquetv/webroot {
            index index.php;
            if (!-e $request_filename) {
                rewrite ^/uniquetv/webroot/(.+)$ /uniquetv/webroot/index.php?url=$1 last;
                break;
            }
        }

    Any ideas? :)
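
    A hedged guess at why every static request falls through: without a root directive in these blocks, $request_filename resolves against nginx's default html directory, so the -f and -e tests never match a real file and everything ends up rewritten to index.php. A sketch, assuming the app lives at /var/www/uniquetv (the path is hypothetical):

        location /uniquetv {
            root /var/www;   # so $request_filename maps to /var/www/uniquetv/...
            index index.php index.html;
            if (-f $request_filename) {
                break;
            }
            rewrite ^/uniquetv(.+)$ /uniquetv/webroot/$1 last;
        }

    The /uniquetv/webroot block needs the same root so its !-e test can succeed for files that really exist.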

  • Keepalived takes several minutes to recover in a particular situation

    - by NathanE
    I've set up keepalived for a master/slave virtual IP, and it seems to work well. Both ends are hosted in almost identical VMs. If I pause the VM running the master, the slave takes over almost instantly, as expected. However, if I then unpause the master VM, the virtual IP stops responding to pings, and it takes a good 4 or 5 minutes before it starts pinging again. It seems to get desynchronised by the nature of the test (pausing/unpausing the VMs). I admit that pausing and unpausing VMs is a slightly dodgy way to test this, but it raises the concern that other scenarios could cause the same undesirable behaviour. Is this expected / by design? Is there anything I can do in the config to improve it? Thanks.
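
    A hedged reading of the multi-minute outage: after the unpaused master resumes and wins the election back, upstream switches keep stale ARP entries for the VIP until they expire. keepalived can shorten this window with more aggressive gratuitous ARP settings, on versions new enough to support them (values are illustrative):

        vrrp_instance VI_1 {
            # ... existing instance config ...
            garp_master_delay   1     # send gratuitous ARPs almost immediately on becoming master
            garp_master_refresh 10    # keep refreshing them every 10s while master
        }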

  • Puppet apache module causing 'Error 400 on SERVER: Invalid parameter identifier'

    - by Andy Shinn
    I am receiving the following error when trying to use the latest puppetlabs-apache module from GitHub (https://github.com/puppetlabs/puppetlabs-apache):

        Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter identifier at /etc/puppet/environments/apache_update/modules/apache/manifests/mod.pp:40 on node zordon.mydomain.com
        Warning: Not using cache on failed catalog
        Error: Could not retrieve catalog; skipping run

    My node config looks like:

        node 'zordon.mydomain.com' {
            include template::common
            include template::puppetagent
            include template::lamp
            User::Create <| |>
            sudo::conf { 'joe':
                priority => 60,
                content  => 'joe ALL=(ALL) NOPASSWD: ALL',
                require  => User::Create['joe'],
            }
        }

    The template::lamp class is what uses the apache module:

        class template::lamp {
            include myfirewall
            Firewall <| |>
            class { 'apache': }
            class { 'apache::mod::php': }
            class { 'apache::mod::ssl': }
            class { 'mysql::server': }
        }

    (It looks like Server Fault markup garbled the Puppet realize statements; the User::Create and Firewall lines just realize a user and two firewall rules.) I have verified that the /var/lib/puppet/lib/puppet/type/a2mod.rb type has the identifier parameter and that it has the same MD5 as on the master. I am using Puppet 3.0.1 on both agent and master. Any idea what may cause this?
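
    A hedged thing to try: "Invalid parameter identifier" usually means the catalog was compiled against a stale copy of the a2mod type. Clearing the synced plugins and re-syncing rules out the agent side (the paths are the Puppet 3 defaults):

        # on the agent: drop synced plugins and fetch fresh copies
        rm -rf /var/lib/puppet/lib/puppet
        puppet agent --test --pluginsync

    On the master side, restarting the master process after updating the module helps ensure it isn't holding a cached version of the type, since custom types are loaded once per process rather than per environment.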

  • Nginx request forking

    - by Adam
    Hi, I'm wondering if nginx can "fork" a request. Let's imagine this config:

        upstream backend {
            server localhost:8080;
            # ... more servers here
        }

        server {
            location /myloc {
                FORK-REQUEST http://my-other-url:3135/something;
                proxy_pass http://backend;
            }
        }

    I would like nginx to send a copy of the request to the URL specified by the imaginary FORK-REQUEST directive, and then load-balance the request across the backend servers and return that response to the client. Since I don't need the response from FORK-REQUEST, it would be best if that request were asynchronous, so normal processing doesn't have to wait. Is a scenario like this possible?
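
    A hedged sketch of how stock nginx does this: the mirror directive (nginx 1.13.4+) sends an in-flight copy of each request to an internal location and discards the mirrored response; on older builds, the undocumented post_action directive is the usual stand-in.

        # sketch using the mirror module (nginx >= 1.13.4)
        upstream backend {
            server localhost:8080;
        }

        server {
            location /myloc {
                mirror /_fork;              # fire-and-forget copy of the request
                proxy_pass http://backend;  # the client gets this response
            }

            location = /_fork {
                internal;
                proxy_pass http://my-other-url:3135/something;
            }
        }

    The mirrored subrequest is asynchronous from the client's point of view: its response is discarded and does not delay the main proxy_pass, though a very slow mirror backend can still tie up worker connections.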
