Search Results

Search found 3039 results on 122 pages for 'centos 6'.

  • information about /proc/pid/sched

    - by redeye
    Not sure this is the right place for this question, but here goes: I'm trying to make sense of the /proc/pid/sched and /proc/pid/task/tid/sched files for a highly threaded server process, but I was not able to find a good explanation of how to interpret them (just a few bits here: http://knol.google.com/k/linux-performance-tuning-and-measurement#). I assume this entry in procfs is related to newer kernels that run the CFS scheduler? The distro is CentOS on kernel 2.6.24.7-149.el5rt with the PREEMPT_RT patch. Any thoughts?
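
    For anyone comparing notes, a quick way to dump the scheduler stats for the process and each of its threads (a sketch; 1234 stands in for the server's PID):

        cat /proc/1234/sched
        for t in /proc/1234/task/*/sched; do echo "== $t"; cat "$t"; done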

  • Installed Percona MySQL on cPanel but getting an error

    - by user1227914
    I installed Percona MySQL on my fresh cPanel server (no databases yet) according to: http://www.ecommy.com/linux/install-...el-environment Everything seemed to be OK and the server starts fine, except some commands return this error:

        root@server [/var/lib/mysql]# mysql -A -sN information_schema -e "select * from user_statistics;"
        mysql: unknown variable 'innodb_file_per_table=1'
        root@server [/var/lib/mysql]# mysql -A
        mysql: unknown variable 'innodb_file_per_table=1'

    In my /etc/my.cnf I have:

        [mysql]
        innodb_file_per_table=1
        userstat_running=1

    I am planning on using InnoDB for the databases. Does anyone know what the problem is? Or, even better, how to fix it? I installed Percona 5.5 with yum on CentOS.
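
    One likely explanation, offered as a sketch rather than a confirmed fix: innodb_file_per_table and userstat_running are server-side options, so under the [mysql] group the command-line client tries to parse them and rejects them as unknown variables. Moving them under [mysqld] usually clears the error:

        [mysqld]
        innodb_file_per_table = 1
        userstat_running = 1

    followed by a restart of mysqld so the server picks the settings up.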

  • I need a few minutes of dedicated server a week, but not for hosting, just to convert ogg etc

    - by talkingnews
    I'm completely happy with my web hosting; it's just that I need to do one little thing they won't allow, and that's run an instance of SoX to convert about 30 MP3s to Ogg files, in various directories, a couple of times a week, done automatically in response to the detection of an MP3 upload. Probably looking at a minute of server time over the whole week. I've had unhelpful suggestions on other forums like "why not leave your home PC on 24 hours a day and then use all your ISP bandwidth to do this", which doesn't work for me. I know that I can host files on, say, Amazon S3, but is there something similar for my needs? All it would need to do would be: wget/FTP the MP3 files, convert them to Ogg, FTP the files back to my hosting. Of course, none of this would be needed if there were such a thing as a compiled binary of SoX (or any MP3-to-Ogg converter) for CentOS that I could upload without needing root access, but I've given up asking that one; always open to suggestions, though!
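
    For the conversion step itself, a minimal sketch (assuming a SoX build with MP3 read support is available on whatever box ends up doing the work; the directory is a placeholder):

        cd /path/to/uploads              # hypothetical directory holding the new MP3s
        for f in *.mp3; do
            sox "$f" "${f%.mp3}.ogg"     # SoX picks the formats from the file extensions
        done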

  • script to automatically test if a web site is available

    - by Xoundboy
    I'm a lone web developer with my own CentOS VPS hosting a few small web sites for my clients. Today I discovered my httpd service had stopped (for no apparent reason, but that's another thread). I restarted it, but now I need a way to be notified by email and/or SMS if it happens again; I don't like it when a client rings me to tell me their web site doesn't work! I know there are probably many different possibilities, including server-monitoring software. I think all I really need is a script that I can run as a cron job from my dev host (which is permanently running in my office) that attempts to load a page from my production server and, if it doesn't load within say 30 seconds, sends me an email or SMS. I'm pretty rubbish at shell scripting, hence this question. Any suggestions would be gratefully appreciated; thanks to all you clever sysadmin guys and girls out there :)
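
    A minimal sketch of that kind of check, assuming curl and a working local mail command on the dev host (the URL and address are placeholders):

        #!/bin/bash
        # check-site.sh: run from cron, e.g.  */5 * * * * /path/to/check-site.sh
        URL="http://www.example.com/"
        if ! curl --silent --fail --max-time 30 "$URL" > /dev/null; then
            echo "$URL failed to load within 30s at $(date)" | mail -s "Site down" [email protected]
        fi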

  • Why does m4 error "linux-gnu.m4 - No such file or directory" appear the first time after updating sendmail.mc?

    - by Mike B
    SendMail 8.14.x | CentOS 5.x I've noticed that if I manually update /etc/mail/sendmail.mc (for example, to enable TLS support) and then bounce sendmail, I get the following error:

        Shutting down sm-client:  [ OK ]
        Shutting down sendmail:   [ OK ]
        Starting sendmail: sendmail.mc:18: m4: cannot open `/usr/share/sendmail-cf/ostype/linux-gnu.mf': No such file or directory  [ OK ]
        Starting sm-client:       [ OK ]

    This only happens once after I update the sendmail.mc file. If I bounce sendmail again (without making any other change), I don't see the error any more. Any idea why this happens? It doesn't seem to cause any problems; I'm just curious.
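
    One way to narrow it down, sketched under the assumption of a stock CentOS sendmail-cf layout: regenerate sendmail.cf by hand before bouncing the service, so any m4 include problem shows up outside the init script:

        yum install sendmail-cf      # provides /usr/share/sendmail-cf, if it isn't there already
        make -C /etc/mail            # or: m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
        service sendmail restart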

  • Redirecting to the login page in Apache

    - by Shailesh Sutar
    I am working on OTRS and I want the OTRS login page to come up at otrs.mydomain.com. The machine is CentOS release 6.2 (Final). Currently I access it using otrs.mydomain.com/otrs/customer.pl for customer login and otrs.mydomain.com/otrs/index.pl for admin login. I changed DocumentRoot to /opt/otrs but it's not working as it should. OTRS is installed in /opt/otrs/ and I am using Apache (Server version: Apache/2.2.15 (Unix)). Now I am stuck.
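
    A sketch of one approach that avoids changing DocumentRoot at all (the /opt/otrs paths below assume a default OTRS layout and the stock otrs Apache config; adjust as needed): keep the standard ScriptAlias and just redirect the site root to the customer front end.

        <VirtualHost *:80>
            ServerName otrs.mydomain.com

            # standard OTRS aliases (usually shipped in the OTRS Apache config)
            ScriptAlias /otrs/ "/opt/otrs/bin/cgi-bin/"
            Alias /otrs-web/ "/opt/otrs/var/httpd/htdocs/"

            # send bare otrs.mydomain.com to the customer login
            RedirectMatch ^/$ /otrs/customer.pl
        </VirtualHost>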

  • quick check of open port

    - by shantanuo
    The following works as expected (I do not want to use nmap). I need to use nc (or any other command built into CentOS) in a shell script to check port 6379 on a remote server. I want the script to exit quickly if no response is received within one second, but it seems that nc waits far too long before quitting with an exit code of 1. How do I "quickly" check whether the port is listening?

        # time nc -z 1.2.3.4 1234
        real    0m21.001s
        user    0m0.000s
        sys     0m0.000s
        # echo $?
        1
        # time nc -z 1.2.3.4 6379
        Connection to 1.2.3.4 6379 port [tcp/*] succeeded!
        real    0m0.272s
        user    0m0.000s
        sys     0m0.008s
        # echo $?
        0
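
    A sketch of the usual workaround: nc has a -w option that caps how long it waits for the connection, so the closed-port case fails after about a second instead of the default TCP timeout.

        if nc -z -w 1 1.2.3.4 6379; then
            echo "port open"
        else
            echo "no response within 1s"
        fi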

  • Groups and Symlinks, is this safe?

    - by sjohns
    Hi, I'm trying to serve similar content over two websites, but don't want to keep two copies of each file, especially as they are growing. The basics: I'm running CentOS with cPanel. Is it safe to do the following? I have a folder downloads1 in /home/user1/www/downloads1/ and a second user, user2. Can I make a group (groupadd sharedfiles), add both users to the group (useradd -g sharedfiles user1 and useradd -g sharedfiles user2), then chown -R -v user1:sharedfiles downloads1/? For user2 I want /home/user2/www/downloads1, but as a symlink, like ln "downloads1" "/home/user1/www/downloads1/", giving: lrwxrwxrwx 1 user2 sharedfiles 11 May 9 14:20 downloads1 -> /home/user1/www/downloads1/ Is this a safe practice, or is there a better way to do it if I want them both to be able to share the files for distribution over Apache? Are there any drawbacks? Thanks in advance for any light shed on this. I'm not 100% sure whether this should have gone here or on Server Fault.
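
    For what it's worth, a sketch of one common pattern (assuming both users already exist and the filesystem is local): put the directory in a shared group, set the setgid bit so new files inherit that group, and symlink it into the second account.

        groupadd sharedfiles
        usermod -a -G sharedfiles user1           # usermod, not useradd, for existing users
        usermod -a -G sharedfiles user2
        chgrp -R sharedfiles /home/user1/www/downloads1
        chmod -R g+rwX /home/user1/www/downloads1
        chmod g+s /home/user1/www/downloads1      # new files/dirs inherit the group
        ln -s /home/user1/www/downloads1 /home/user2/www/downloads1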

  • Why is MySQL making the CPU run at about 80%?

    - by Robert
    MySQL is eating up about 80% of my CPU for no reason as far as I can see. Right now this server is rarely used; it's more of a test site I set up that will eventually be used for production once I fix small problems like this. I run three instances of MySQL, but it seems that the first instance is taking up all the CPU. When I turn off the first instance and leave the other two on, everything runs fine. Any suggestions? I tried SHOW PROCESSLIST and no statements are being run besides "Sleep" and the SHOW PROCESSLIST query itself (obviously) while it's using up all this CPU. my.cnf is basic; I did not optimize or change any MySQL settings. Do you think that would cause such strange behavior? The machine is running CentOS 5.7 64-bit and MySQL 5.0.95. Thanks
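
    A couple of low-impact things worth capturing while the spike is happening (just a sketch of diagnostics, not a diagnosis; point the mysqladmin calls at the busy instance's socket or port):

        top -H                          # per-thread view: is one mysqld thread spinning, or many?
        mysqladmin -u root -p status    # uptime, thread count, queries per second
        mysqladmin -u root -p extended-status | grep -Ei 'questions|innodb_rows|created_tmp'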

  • Set ReturnPath globally in Postfix

    - by Gaia
    I have Magento using sendmail and WordPress using PHPMailer to send webapp-generated mail. Occasionally someone will enter their email address incorrectly and the mail (let's say a purchase receipt) will bounce back to the return-path specified by the script. I don't want to set the return-path for each vhost, especially because it is not easily done. Ideally, WP would use the address of the blog admin and Magento would use one of the numerous email fields specified, but they default to username@machinename (in my case, username is the system user and machinename is an FQDN, but it is not the same as the actual vhost FQDN). The result is that bounced mail returns to the server and, since the server is used only for outbound SMTP, the messages sit there undelivered and, worse, unread. I'm running Postfix 2.6.6 on CentOS 6.3; is it possible to globally force a specific return-path for all messages sent via PHP on the server?
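
    One way this is commonly done in Postfix (a sketch; the address is a placeholder): rewrite the envelope sender globally with sender_canonical_maps, which is what the return-path is taken from.

        # /etc/postfix/main.cf
        sender_canonical_classes = envelope_sender
        sender_canonical_maps = regexp:/etc/postfix/sender_canonical

        # /etc/postfix/sender_canonical  (match everything, rewrite to one bounce address)
        /./    [email protected]

    A "postfix reload" afterwards picks up the change.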

  • Slow website load with CNAME, fast when using IP

    - by Nate Strandberg
    I set up two DNS servers on my network: ns1.byte-werx.com and ns2.byte-werx.com. I can ping the DNS servers and get a fairly good response time, and when I dig them I also get a fairly reasonable response, but any website I filter through them is painfully slow (upwards of 20 seconds), verifiable by performing a tracert or attempting to access the URL in a browser. The DNS servers are running CentOS 6.3 and BIND9 with 500MB of memory (I figure that should be more than enough?). I have a reverse lookup zone (1.168.192) along with two website zones (www.byte-werx.com and www.stayhomedental.com). If I access the websites using their IP the pages load nearly instantly, so I do not believe the issue is with the hosting server, but that is running Ubuntu Server 12.04 and Apache2 with 12GB memory. Any thoughts? I do not have the named.conf file in front of me, but I can edit this post to include it if you feel it would be useful. Thanks for any advice!
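
    A quick way to separate "slow DNS" from "slow web server", as a sketch (run from a client that uses the new servers):

        time dig @ns1.byte-werx.com www.byte-werx.com A      # answer time from the new servers
        time dig @8.8.8.8 www.byte-werx.com A                # compare against a public resolver
        dig @ns1.byte-werx.com www.byte-werx.com A +trace    # shows where delegation/recursion stalls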

  • Caching DNS server (bind9.2) CPU usage is so so so high

    - by Gk.
    I have a caching-only DNS server which gets ~3k queries per second. Here are the specs:

        Xeon dual-core 2.8GHz
        4GB of RAM
        CentOS 5.x (kernel 2.6.18-164.15.1.el5PAE)
        bind 9.4.2

    rndc status:

        recursive clients: 666/4900/5000

    There are about 300 new queries (not in cache) per second. Bind always uses 100% of one core with the single-threaded build. After I recompiled it multi-threaded, it uses nearly 200% across two cores :( No iowait, only sys and user. I searched around but didn't see any info about how bind uses CPU. Why does it become the bottleneck? One more thing, here is RAM usage:

        cat /proc/meminfo
        MemTotal:     4147876 kB
        MemFree:      1863972 kB
        Buffers:       143632 kB
        Cached:        372792 kB
        SwapCached:         0 kB
        Active:       1916804 kB
        Inactive:      276056 kB

    I've set max-cache-size to 0 to make sure bind can use as much RAM as it wants, but it always stops at ~2GB. Since we get uncached queries every second, theoretically RAM should eventually be exhausted, but it isn't. Do you have any idea? TIA, -Gk
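
    If nothing else, BIND's own counters give a starting point for where the CPU goes. A sketch (the dump lands wherever statistics-file points in named.conf; the path below is only a guess at the usual CentOS default):

        rndc stats     # appends query/recursion/cache counters to the statistics file
        grep -iE 'quer|recurs|cache' /var/named/data/named_stats.txt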

  • How to troubleshoot whether a zip file is valid or too big to be unzipped?

    - by mireille raad
    Hello, I am trying to unzip a file with a size of 2GB and I am getting the following error:

        unzip CLTE_C_08.zip
        Archive:  CLTE_C_08.zip
        End-of-central-directory signature not found.  Either this file is not a zipfile, or it
        constitutes one disk of a multi-part archive.  In the latter case the central directory
        and zipfile comment will be found on the last disk(s) of this archive.
        unzip:  cannot find zipfile directory in one of CLTE_C_08.zip or CLTE_C_08.zip.zip,
        and cannot find CLTE_C_08.zip.ZIP, period.

    After some googling, some people say this error appears because the file is too big, others say the file is corrupt, and others say it might not be a Unix archive. So my question: how do I find out whether the file is a valid archive on my CentOS box, and what is the command/trick to uncompress big files (if any)? Thanks in advance :)
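
    A few checks that usually sort this out, as a sketch (the last two tools may need to be installed, e.g. p7zip from EPEL or a Java JDK):

        file CLTE_C_08.zip        # is it really a zip, or e.g. a gzip/tar/partial download?
        ls -l CLTE_C_08.zip       # compare the size against the source copy
        unzip -t CLTE_C_08.zip    # integrity test; old unzip builds choke on large (ZIP64) archives
        7za x CLTE_C_08.zip       # p7zip generally handles ZIP64/large archives
        jar xf CLTE_C_08.zip      # the JDK's jar tool is another fallback extractor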

  • how to start with unmanaged vps?

    - by GaVrA
    Hello! I have a managed VPS, so whatever I need I can just ask my support and they will do it for me. Now I plan to migrate to an unmanaged VPS, so I need some guides and tips on how and where to start learning. I will have more specific questions once I start using it, but for now I just need some general answers about this topic. Thanks. Update: OK, I have decided to go for an unmanaged VPS with cPanel. The OS is CentOS 5. I contacted support only for some small (I think) things like creating new accounts in WHM, some database importing, and installing new software (rarely). What I will be using is Apache, PHP and MySQL. I think I will be able to cope with upgrading to new versions, so the thing that interests me most is security, I guess.

  • SSH Access Denied despite correct credentials being used

    - by columbo
    Hello, I have a remote CentOS server that I used to have SSH access to. Today when I try to log in via SSH I just get "Access Denied", even though I am using the correct credentials. I have Plesk 9 access, so I reset the admin password and tried to SSH using that password, but to no avail. I even created a new user with SSH access rights and tried to log in as them, but that failed with the same "Access Denied". I have rebooted. Can anyone offer any advice? There is no file manager in Plesk other than for the web domains, so I can't get at any system files to see what is going on. Any advice appreciated.
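
    Two things that often narrow this down, sketched below: the client-side verbose output shows which stage of authentication fails, and the server's auth log (if it can be reached some other way, e.g. a provider rescue console) usually names the reason. The hostname is a placeholder.

        ssh -vvv user@server.example.com     # watch which auth method is tried and refused
        tail -n 50 /var/log/secure           # on the server: sshd logs denied logins and why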

  • I don't see the running guest in virsh

    - by Louise Hoffman
    I'm using CentOS 5 with KVM. I have downloaded a KVM appliance, and when unzipped it is just a .img file; no XML file is supplied. I can start the guest with /usr/libexec/qemu-kvm -hda /data/kvm/slash.img -m 512 and it works. Now I would like to make a config file for the guest. The problem is that when I do

        # virsh -c qemu:///system list
         Id Name                 State
        ----------------------------------

        #

    I don't see the guest as expected. Does anyone know what is wrong?
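
    A sketch of why that happens and one way around it: guests started directly with qemu-kvm are invisible to libvirt, and virsh only lists domains that libvirt itself manages. Importing the image as a libvirt domain generates the XML config (virt-install --import boots the existing image instead of running an installer; option names may differ slightly on older CentOS 5 virtinst versions):

        virt-install --name slash --ram 512 \
            --disk path=/data/kvm/slash.img \
            --import
        virsh -c qemu:///system list --all
        virsh dumpxml slash        # the generated domain XML, for reference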

  • mail refused by port 25

    - by shantanuo
    When I try to send mail from my Linux (CentOS) server, the exit status is 0, but the mail never reaches its destination. The /var/log/maillog file has an entry something like this:

        Mar 18 06:33:01 app11 postfix/qmgr[22454]: F18FD9F6074: to=<[email protected]>, relay=none, delay=0.01,
        delays=0/0/0/0, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to
        alt4.gmail-smtp-in.l.google.com[74.125.45.27]: Connection refused)

    Am I blocked by Google? I tried to send mail to some other mail server and got a similar result:

        Mar 18 06:33:01 app1 postfix/smtp[15460]: connect to acsinet11.xxx.com[111.222.333.444]: Connection refused (port 25)

    How do I correct this problem?
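
    Since every destination refuses the connection, it is worth checking whether outbound port 25 is reachable from the box at all; many providers block it by default. A quick sketch:

        nc -w 5 -v gmail-smtp-in.l.google.com 25       # a working path prints a "220 ..." SMTP banner
        nc -w 5 -v alt4.gmail-smtp-in.l.google.com 25
        # refusals or timeouts to every remote MX usually mean the provider filters outbound 25;
        # the usual fix is to relay through their designated smarthost (relayhost in main.cf).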

  • Updating autoreconf

    - by AzaraT
    So I need to use autoreconf to configure a package; however, I need at least version 2.61. I'm on CentOS 5.8, and since there seems to be no package for it I went on to compile it myself. I got the source of autoconf from http://www.gnu.org/software/autoconf/ and compiled that. Sure enough, when I do autoconf -V it shows up as version 2.68, which is indeed the latest version. However, autoreconf (note the "re") still shows up as the old version 2.59, which causes me some problems. So could someone help a relatively new Linux user update autoreconf properly? Thanks
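
    autoreconf ships in the same autoconf tarball, so the usual culprit is two copies on the PATH (the distro's in /usr/bin, the new build in /usr/local/bin). A sketch of how to check and fix that, assuming the source build installed under /usr/local:

        which -a autoconf autoreconf        # list every copy found on the PATH, in order
        autoreconf --version
        export PATH=/usr/local/bin:$PATH    # make the freshly built copy win
        hash -r                             # forget bash's cached command locations
        autoreconf --version                # should now report 2.68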

  • Configuring Linux Network

    - by Reiler
    Hi, I'm working on some software that runs on a CentOS 5.x installation. Our customers are not allowed to log in to Linux; everything is done from Windows applications developed by us. So we have built a frontend for the user to configure the network setup: static/DHCP, IP address, gateway, DNS, hostname. Right now I let the user enter the information in the Windows app and then write it on the Linux server like this: the nameserver to /etc/resolv.conf; the gateway and hostname to /etc/sysconfig/network; and the IP address, netmask and BOOTPROTO (dhcp or static) to /etc/sysconfig/network-scripts/ifcfg-eth0. I also (after some time) found out that I was unable to send mail unless I wrote "127.0.0.1 hostname" in /etc/hosts. All this seems to work, but is there a better/easier way to do it? I also read the network configuration back in nearly the same way, but if I use DHCP I miss some information, for instance the IP address. I know that I can get some information from the command line (ifconfig), but I don't get, for instance, the hostname, gateway and DNS. Is there a command-line tool that will display these?
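
    For the read-back side, a sketch of commands that report the live values (including what DHCP handed out), rather than what the config files say:

        hostname                                        # current hostname
        ip addr show eth0                               # live IP address/netmask, even under DHCP
        ip route show | awk '/^default/ {print $3}'     # default gateway
        grep '^nameserver' /etc/resolv.conf             # DNS servers in use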

  • How to set the umask for a folder and its subfolders?

    - by Cyril N.
    I'm working in the same directory with some friends, and they access it via SSH. I added us all to the same group and set the setgid bit to keep the user:group values the same. But when a user creates a file or folder, the write bit is not set for the group, preventing the others from writing to it. How can I define the umask so that group write permission is added in this specific directory and its subfolders? I tried to find some help before, but I only saw guides for Fedora/CentOS, and I'm using Debian Squeeze. Thanks for your help
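
    Since umask is a per-process setting (each user's shell would have to change it), the per-directory way to get this is usually a default ACL on top of the setgid bit. A sketch, assuming the filesystem has ACL support enabled and /path/to/shared stands in for the real directory:

        chmod g+s /path/to/shared                    # new files/dirs inherit the directory's group
        setfacl -R -m g::rwX /path/to/shared         # fix up what already exists
        setfacl -R -d -m g::rwX /path/to/shared      # default ACL: new entries get group write automatically
        getfacl /path/to/shared                      # verify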

  • rsync remote to local automatic backup

    - by Mark Molina
    Because all my work is stored on a remote server, I would like to back it up automatically, weekly and monthly. My server is running CentOS 5.5, and while searching the web I found a tool named rsync. I did my first update manually by using this command in the terminal:

        sudo rsync -chavzP --stats USERNAME@IPADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    I am then prompted for that user's password, and Bob's your uncle. This backs up the necessary files from my remote server to my local device, but does somebody know how I can automate this, like running the command automatically every Sunday? EDIT: I forgot to mention that I let DirectAdmin back up the files I need and then copy those files from the remote server to a local server.
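
    A sketch of the usual way to automate exactly that command: set up key-based SSH so cron does not need a password prompt, then put the rsync call in the crontab (placeholders kept from the question; adjust the schedule and log path):

        # one-time setup: passwordless login for the backup user
        ssh-keygen -t rsa
        ssh-copy-id USERNAME@IPADDRESS

        # crontab -e  --  every Sunday at 03:00
        0 3 * * 0  rsync -chavzP --stats USERNAME@IPADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP >> /var/log/rsync-backup.log 2>&1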

  • How to watch for count of new lines in tail

    - by fl00r
    I want to do something like this:

        watch tail -f | wc -l
        #=> 43
        #=> 56
        #=> 61
        #=> 44
        #=> ...

    i.e. count the new lines coming out of tail each second (Linux, CentOS). To be more clear, I have something like this:

        tail -f /var/log/my_process/*.log | grep error

    I am reading some error messages, and now I want to count them: roughly how many errors do I get per second? One line in the log is one error in the process.
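
    One sketch of a way to get a per-second rate out of that pipeline: pv in line mode (pv is available from EPEL on CentOS) counts lines and prints the current rate, and --line-buffered keeps grep from sitting on partial buffers:

        tail -f /var/log/my_process/*.log | grep --line-buffered error | pv -l -r > /dev/null
        # -l  count lines instead of bytes
        # -r  show the current rate (lines per second)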

  • How to set up WordPress high availability

    - by Ketam
    I have installed Galera Cluster on 3 cluster nodes + 1 management node. I wanted to set it up like this: Server1: home (www.domain.com); Server2: bbPress forum (the Forum tab forwards to forum.domain.com); Server3: BuddyPress activity (the Social tab forwards to social.domain.com). The reason I am doing this is to distribute the load across the servers and balance them at the same time. However, I am having difficulty setting up Apache load balancing / mod_proxy / clustering, or anything else suitable, to get a highly available WordPress. Any suggestions on the best way to do this, or how to go about it? Another question: I copied the whole WordPress file tree to Server2, connected to the local database (the same data, since it is already in the Galera cluster), but the page comes up blank. Any advice? OS: CentOS 6.2. Thanks in advance.
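
    For the "Apache load balancing / mod_proxy" part, a minimal sketch of what a front-end balancer vhost can look like with mod_proxy_balancer (backend IPs are placeholders; this is only one piece of an HA setup and assumes mod_proxy, mod_proxy_http and mod_proxy_balancer are loaded):

        <VirtualHost *:80>
            ServerName www.domain.com

            <Proxy balancer://wpcluster>
                BalancerMember http://10.0.0.1:80
                BalancerMember http://10.0.0.2:80
                BalancerMember http://10.0.0.3:80
            </Proxy>

            ProxyPass        / balancer://wpcluster/
            ProxyPassReverse / balancer://wpcluster/
        </VirtualHost>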

  • rsyslog forward all except ldap

    - by Brian
    I have CentOS 6 servers running OpenLDAP. In rsyslog.conf, I forward the logs to my central server with this line: *.* @10.10.10.10:514 OpenLDAP seems incredibly chatty: I have 3 servers in a multi-master cluster, and those 3 servers generate twice as many logs as my other 80 servers combined. I have been unsuccessful in figuring out how to tell OpenLDAP to use a sensible log level (we never specifically set one), and since these are my main authentication sources, I'm a bit hesitant to play around with them. Is there a way to tell rsyslog to forward everything EXCEPT LOCAL4?
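
    A sketch of the selector syntax that does exactly that (slapd on CentOS logs to local4 by default, which matches the question): add local4.none to the forwarding rule, then restart rsyslog.

        # /etc/rsyslog.conf: forward everything except the local4 facility
        *.*;local4.none    @10.10.10.10:514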
