Search Results

Search found 24933 results on 998 pages for 'arch linux'.


  • How to tell X.org to reload input device module?

    - by Vi
    When X.org boots up, the Synaptics touchpad works well. But when I remove the module it falls back to /dev/input/mice and doesn't use the normal driver even when the touchpad is available again. Xorg.0.log:
      ... (II) XINPUT: Adding extended input device "Synaptics Touchpad" (type: TOUCHPAD)
      (--) Synaptics Touchpad: touchpad found
      # { rmmod psmouse && echo mem > /sys/power/state && modprobe psmouse; }
      (WW) : No Device specified, looking for one...
      (II) : Setting Device option to "/dev/input/mice" ...
    How can I tell X.org to try its InputDevice again (without restarting the X server)? P.S. rmmod psmouse is needed to prevent the Acer Extensa 5220 from crashing when resuming from suspend-to-ram. Update: Found the answer myself: running xinput set-int-prop "Synaptics Touchpad" "Device Enabled" 8 1 after reloading the kernel module re-enables the touchpad. Now suspend-to-ram works OK.
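
    Based on the update above, a minimal suspend wrapper might look like the sketch below. The device name "Synaptics Touchpad" and the psmouse module come from the question; the script name and path are hypothetical:

      #!/bin/sh
      # suspend-with-touchpad-workaround.sh (hypothetical helper script)
      rmmod psmouse                       # unload the driver that crashes on resume
      echo mem > /sys/power/state         # suspend to RAM; blocks until resume
      modprobe psmouse                    # reload the driver after waking up
      # tell the already-running X server to use the device again
      xinput set-int-prop "Synaptics Touchpad" "Device Enabled" 8 1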

    Read the article

  • MySQL - Why would SHOW SLAVE HOSTS cause a binlog dump?

    - by Rory McCann
    We're getting loads of binlog files on our MySQL 5.0.x. We have a normal master/slave replication setup with 1 master and 1 slave. Looking at /var/log/mysql.log, nearly 90% of the time the replicator connects, does a SHOW SLAVE HOSTS, and that causes a binlog dump. For example:
      7020 Query       SHOW SLAVE HOSTS
      7020 Binlog Dump Log: 'mysql-bin.029634' Pos: 13273
    However, when I do a SHOW SLAVE HOSTS on the MySQL master myself, I get no results. Occasionally when the replicator does a SHOW SLAVE HOSTS, MySQL will hang for hours. I see nothing in /var/log/syslog at the same time... What's going on here? How can I debug this more? For the record, the MySQL master and slave servers are Ubuntu Dapper.
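
    Not an answer to the SHOW SLAVE HOSTS oddity itself, but a few stock MySQL commands for seeing what the dump threads are doing and keeping binlog growth in check (the 7-day window is an arbitrary example; check the slave's position first):

      # which binlogs exist, and what each connected thread is doing
      mysql -e "SHOW BINARY LOGS; SHOW PROCESSLIST;"
      mysql -e "SHOW MASTER STATUS; SHOW SLAVE HOSTS;"
      # purge binlogs the slave no longer needs (verify Relay_Master_Log_File
      # with SHOW SLAVE STATUS on the slave before purging)
      mysql -e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;"
      # or set expire_logs_days in my.cnf for automatic rotation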

    Read the article

  • Cron Tips for not running cron jobs on holidays (the monday of a three day weekend)

    - by Poul
    We have a setup of about one hundred machines, each running cron jobs like starting and stopping services and archiving those services' log files at the end of the day to a centralized repository. One headache we have is the three-day weekend (we're closed on holidays). We don't want the services starting up on those days and connecting to our business partner's machines. We currently handle this by manually commenting out the most critical jobs and letting a bunch of errors happen all day. Not ideal. Basically, if a job has '1-5' set in the day field we want this to mean 'work days' and not 'Monday to Friday'. We have a database that keeps track of which days are indeed 'work days'. So, is it possible to override cron's day-matching algorithm, or is there some other way to easily set up a cron job so that things don't start on a Monday holiday? Thanks!
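
    One common pattern is to prefix each job with a guard command so the crontab itself never changes. This is only a sketch: the is_workday script, the holidays table and the scheduling database are hypothetical stand-ins for the database mentioned above:

      #!/bin/sh
      # /usr/local/bin/is_workday - exit 0 on a work day, non-zero otherwise
      TODAY=$(date +%F)
      # hypothetical query; replace with your real holiday lookup
      COUNT=$(mysql -N -e "SELECT COUNT(*) FROM holidays WHERE day = '$TODAY'" scheduling)
      [ "$COUNT" -eq 0 ]

    Crontab entries then become, e.g., 0 6 * * 1-5 /usr/local/bin/is_workday && /opt/services/start_all.sh, so on a Monday holiday the guard fails quietly and nothing starts.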

    Read the article

  • HAproxy - Redirect issue - Uri Variables ?

    - by Justin
    I'm using haproxy 1.5dev3 and I was wondering if there is any possible way to grab uri variables from a request to reappend the query on the end of a redirect url? What I'm trying to do is redirect from: http://www.domain.com/page/example.htm?id=1234567 to: http://www.domain.com/frame/newpage.cfm?id=1234567 redirect prefix doesn't work properly as it tries to append /page/example.htm to the end of the redirect url. Can I do some sort of rewrite to accomplish this? It would be awesome if you could use uri and queries as variables for redirection/pool selection like on F5. Please help...Thanks!
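
    A sketch of what this can look like on newer HAProxy releases (1.6 and later allow log-format expressions in redirect rules; the hostname and paths are the ones from the question, the frontend/backend names are placeholders, and the exact syntax should be checked against the version actually in use):

      frontend www
          bind :80
          # send /page/example.htm?id=1234567 to /frame/newpage.cfm?id=1234567
          http-request redirect location http://www.domain.com/frame/newpage.cfm?%[query] if { path /page/example.htm }
          default_backend app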

    Read the article

  • TIME_WAIT connections not being cleaned up after timeout period expires

    - by Mark Dawson
    I am stress testing one of my servers by hitting it with a constant stream of new network connections. tcp_fin_timeout is set to 60, so if I send a constant stream of something like 100 requests per second, I would expect to see a rolling average of 6000 (60 * 100) connections in a TIME_WAIT state. This is happening, but looking in netstat (using -o) to see the timers, I see connections like:
      TIME_WAIT timewait (0.00/0/0)
    where the timeout has expired but the connection is still hanging around, and I then eventually run out of connections. Anyone know why these connections don't get cleaned up? If I stop creating new connections they do eventually disappear, but while I am constantly creating new connections they don't; it seems like the kernel isn't getting a chance to clean them up. Are there some other config options I need to set to remove the connections as soon as they have expired? The server is running Ubuntu and my web server is nginx. It also has iptables with connection tracking; I am not sure if that would cause these TIME_WAIT connections to live on. Thanks, Mark.
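
    A few inspection commands that can help narrow this down. These are standard iproute2/netfilter tools; the conntrack sysctls only exist when connection tracking is loaded (as it is here) and may be named ip_conntrack_* on older kernels:

      # count sockets per state as the kernel sees them
      ss -tan state time-wait | wc -l
      # how long netfilter keeps TIME_WAIT entries in its own table
      sysctl net.netfilter.nf_conntrack_tcp_timeout_time_wait
      sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
      # allow reuse of TIME_WAIT sockets for new *outgoing* connections
      sysctl -w net.ipv4.tcp_tw_reuse=1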

    Read the article

  • Explanation for kernel error - Eeeek! page_mapcount went negative

    - by Aditya Advani
    Internet says this is a genuine Kernel Bug but does anyone know what triggers it?? Server running CentOS x86_64 with kernel 2.6.27.24. Here is my crash output:
      [root@u15345757 httpdocs]#
      Message from syslogd@ at Thu Aug 6 01:42:22 2009 ... u15345757 kernel: [1145736.506380] Eeek! page_mapcount(page) went negative! (-1)
      Message from syslogd@ at Thu Aug 6 01:42:22 2009 ... u15345757 kernel: [1145736.517515] page pfn = d0a3
      Message from syslogd@ at Thu Aug 6 01:42:22 2009 ... u15345757 kernel: [1145736.523814] page->flags = 10000000000083c
      Message from syslogd@ at Thu Aug 6 01:42:22 2009 ... u15345757 kernel: [1145736.532489] page->count = 2
      Message from syslogd@ at Thu Aug 6 01:42:22 2009 ... u15345757 kernel: [1145736.538741] page->mapping = ffff88001f01a110
      Message from syslogd@ at Thu Aug 6 01:42:22 2009 ... u15345757 kernel: [1145736.547924] vma->vm_ops = 0x0
      Message from syslogd@ at Thu Aug 6 01:42:22 2009 ... u15345757 kernel: [1145736.554543] [ cut here ]
      Message from syslogd@ at Thu Aug 6 01:42:23 2009 ... u15345757 kernel: [1145736.564528] invalid opcode: 0000 [1] SMP
      Message from syslogd@ at Thu Aug 6 01:42:23 2009 ... u15345757 kernel: [1145736.564528] Code: 80 e8 22 51 fd ff 48 8b 85 90 00 00 00 48 85 c0 74 19 48 8b 40 20 48 85 c0 74 10 48 8b 70 58 48 c7 c7 10 7f 7d 80 e8 fd 50 fd ff <0f> 0b eb fe 8b 77 18 41 58 5b 5d 83 e6 01 f7 de 83 c6 04 e9 df
      Broadcast message from root (pts/3) (Thu Aug 6 01:49:29 2009): The system is going down for reboot NOW!

    Read the article

  • Where is xorg.conf in Ubuntu 10.04?

    - by Mikey.B
    Hi guys, I'm in the middle of trying to set up dual monitors on Ubuntu and would like to back up my xorg.conf... The documentation I've read thus far says to do the following:
      sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf_backup
    But I don't see the xorg.conf file anywhere... Am I missing something? Where is it located?
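
    For context (hedged, since details vary by graphics driver): Ubuntu 10.04 normally runs X without any xorg.conf at all, autodetecting everything, so the file may simply not exist. If one is needed, a skeleton can be generated from a console, roughly like this:

      sudo service gdm stop            # X must not be running
      sudo Xorg -configure             # writes xorg.conf.new (usually in /root or the current directory)
      sudo cp /root/xorg.conf.new /etc/X11/xorg.conf
      sudo service gdm start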

    Read the article

  • multiple wildcard entries

    - by Murali
    My client has around 300,000 domains and they just have a wildcard for all of them:
      *    A    12.12.12.12
    Now they want to create a subdomain that points to a different IP and still have the flexibility of the wildcard, something like:
      ww1.*    A    24.24.24.24
      *        A    12.12.12.12
    It looks like in BIND the lower "*" is a catch-all and takes over every query, and hence ww1 is not working. One of the solutions offered by the IT folks was to create 300K separate zones for just "ww1" and leave the "*" wildcard. Is there any other DNS software that can achieve this task easily? Any other ways to deal with it?
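
    For what it's worth, within a single zone an explicit record takes precedence over the wildcard, so if the zone files are generated from a template, a per-zone fragment along these lines (a sketch using the example IPs above) does what ww1.* is meant to do:

      ; template zone fragment for each hosted domain
      ww1      IN  A   24.24.24.24   ; explicit name wins over the wildcard
      *        IN  A   12.12.12.12   ; everything else

    The ww1.* form is not valid wildcard syntax in BIND (the asterisk is only special as the leftmost label of a name), which would explain the behaviour described.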

    Read the article

  • Monitor LSI 3ware raid controller on ESXi

    - by aseq
    This concerns a server that runs ESXi (v. 4.x or 5.x) installed on drives that are configured into a RAID 10 using an LSI 3ware 9750 RAID controller. I would like to know if there is a way to monitor the LSI 3ware series of controllers, in particular the 9750, through ESXi, and hopefully also run the monitoring daemon LSI provides. I know you can set up a cron job to execute tw_cli through ssh on the ESXi server; however, that's not really ideal. I am not using vCenter, by the way. It would be nice to have more than just monitoring working, since the 3ware software has a very useful web client besides tw_cli.

    Read the article

  • solved: puppet master REST API returns 403 when running under passenger; works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work? 
Update I added verbose logging to the puppet master and restarted nginx; here's the additional info I see in logs Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby" On the agent machine facter fqdn and hostname both return a fully qualified host name [amisr1@blramisr195602 ~]$ sudo facter fqdn blramisr195602.XXXXXXX.com I then updated the agent configuration to add dns_alt_names = 10.209.47.31 cleaned all certificates on master and agent and regenerated the certificates and signed them on master using the option --allow-dns-alt-names [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request. [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com Signed certificate request for blramisr195602.XXXXXX.com Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem' however, that doesn't help either; I get same errors as before. Not sure why in the logs it shows comparing access rules by IP and not hostname. Is there any Nginx configuration to change this behavior?
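
    A debugging step that often helps with auth.conf problems like this is to hit the master's REST API directly with the agent's own certificate, so the Nginx/Passenger layer can be ruled in or out. This is only a sketch: the certname and master hostname are the ones from the question, and the SSL paths are the standard Puppet 3 locations:

      CERTNAME=blramisr195602.XXXXXX.com
      curl -v \
        --cert   /var/lib/puppet/ssl/certs/$CERTNAME.pem \
        --key    /var/lib/puppet/ssl/private_keys/$CERTNAME.pem \
        --cacert /var/lib/puppet/ssl/certs/ca.pem \
        -H 'Accept: yaml' \
        https://bangvmpllda02.XXXXX.com:8140/production/node/$CERTNAME

    If this returns the node definition while the agent still gets 403, the problem is likely in the headers Nginx passes to the master (X-Client-DN / X-Client-Verify) rather than in auth.conf itself; the verbose log above, where the master falls back to matching by IP address instead of certname, points the same way.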

    Read the article

  • Virtual host “Forbidden You don't have permission to access / on this server” on debian

    - by ulduz114
    Before I created a virtual host I could see "http://localhost", but after I created the virtual host I could see neither "http://localhost" nor my virtual host "http://test". Here is my virtual host config file:
      <VirtualHost test:80>
        ServerAdmin [email protected]
        ServerName test
        ServerAlias test
        DocumentRoot "/home/javad/Public/test/public"
        <Directory "/home/javad/Public/test/public/">
          Options Indexes FollowSymLinks MultiViews ExecCGI
          DirectoryIndex index.php
          AllowOverride all
          Order allow,deny
          allow from all
        </Directory>
      </VirtualHost>
    I then ran a2ensite test, added 127.0.0.1 test to the /etc/hosts file and restarted apache2 fine. But after that I cannot access http://test or even http://localhost; I get: Forbidden - You don't have permission to access / on this server. When I delete my virtual host setting I can access http://localhost again.
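
    Since the DocumentRoot lives under a home directory, a common culprit is that the Apache user cannot traverse /home/javad. A quick check (the paths are taken from the config above; nothing else is assumed):

      # show the permissions of every component of the path
      namei -m /home/javad/Public/test/public
      # the Apache user needs execute (x) on each directory in the chain
      sudo chmod o+x /home/javad /home/javad/Public /home/javad/Public/test
      sudo apache2ctl configtest && sudo service apache2 restart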

    Read the article

  • Getting Error while running RED5 server - class path resource [red5.xml] cannot be opened because it does not exist

    - by sunil221
    HI , I have installed java version "1.6.0_14" and Ant version 1.8.2 for red5 Server. when i am trying to run red5 server i am getting the following error please help Root: /usr/local/red5 Deploy type: bootstrap Logback selector: org.red5.logging.LoggingContextSelector Setting default logging context: default 11:27:39.838 [main] INFO org.red5.server.Launcher - Red5 Server 1.0.0 RC1 $Rev: 4171 $ (http://code.google.com/p/red5/) Red5 Server 1.0.0 RC1 $Rev: 4171 $ (http://code.google.com/p/red5/) SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/usr/local/red5/red5.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/usr/local/red5/lib/logback-classic-0.9.26.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 11:27:39.994 [main] INFO o.s.c.s.FileSystemXmlApplicationContext - Refreshing org.springframework.context.support.FileSystemXmlApplicationContext@39d85f79: startup date [Mon Dec 21 11:27:39 EST 2009]; root of context hierarchy 11:27:40.149 [main] INFO o.s.b.f.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [red5.xml] Exception org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from class path resource [red5.xml]; nested exception is java.io.FileNotFoundException: class path resource [red5.xml] cannot be opened because it does not exist Bootstrap complete

    Read the article

  • configuring vsftpd anonymous upload. Creates files but freezes at 0 bytes

    - by Wayne
    vsftpd on ubuntu after sudo apt-get install vsftpd Then did configuration as in the attached /etc/vsftpd.conf file. Anonymous ftp allows cd to the upload directly and allows put myfile.txt which gets created on the server but then the client hangs and never proceeds. The file on the server remains at 0 bytes. Here's the folders and permissions: root@support:/home/ftp# ls -ld . drwxr-xr-x 3 root root 4096 Jun 22 00:00 . root@support:/home/ftp# ls -ld pub drwxr-xr-x 3 root root 4096 Jun 21 23:59 pub root@support:/home/ftp# ls -ld pub/upload drwxr-xr-x 2 ftp ftp 4096 Jun 22 00:06 pub/upload root@support:/home/ftp# Here's the vsftpd.conf file: root@support:/home/ftp# grep -v '#' /etc/vsftpd.conf listen=YES anonymous_enable=YES write_enable=YES anon_upload_enable=YES dirmessage_enable=YES xferlog_enable=YES anon_root=/home/ftp/pub/ connect_from_port_20=YES chown_uploads=YES chown_username=ftp nopriv_user=ftp secure_chroot_dir=/var/run/vsftpd pam_service_name=vsftpd rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key Here's a file example that attempted to upload: root@support:/home/ftp/pub/upload# ls -l total 0 -rw------- 1 ftp nogroup 0 Jun 22 00:06 build.out This is the client attempting to upload...it is frozen at this point: $ ftp 173.203.89.78 Connected to 173.203.89.78. 220 (vsFTPd 2.0.6) User (173.203.89.78:(none)): ftp 331 Please specify the password. Password: 230 Login successful. ftp> put build.out 200 PORT command successful. Consider using PASV. 553 Could not create file. ftp> cd upload 250 Directory successfully changed. ftp> put build.out 200 PORT command successful. Consider using PASV. 150 Ok to send data.
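
    One thing worth checking in a setup like this: a file being created at 0 bytes and the client then hanging after "150 Ok to send data" is typical of a blocked data connection rather than a permissions problem (hedged, since that is a guess from the symptoms). The options below are standard vsftpd directives; the port range is arbitrary:

      # /etc/vsftpd.conf additions - enable passive mode on a fixed port range
      pasv_enable=YES
      pasv_min_port=40000
      pasv_max_port=40100
      # then allow that range (and port 20 for active mode) through any firewall, e.g.
      # iptables -A INPUT -p tcp --dport 40000:40100 -j ACCEPT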

    Read the article

  • mysqld causes high CPU load

    - by Radu
    My mysqld goes to use 99.9% of CPU for variable time (between 2 - 20 minutes), and then goes back to normal 0.1% - 5%. Checked processlist: all is normal, 1 to 20 inserts or updates that last 2 to 5 sec, and about 20 process that are in Sleep Mode (maybe because the scripts don't close the mysql connection, but are they are closed in about 5 - 10 secs, I didn't make the scripts :P but the server was running fine the last 2 years, since is was made): | 15375 | root | localhost | stoc | Query | 0 | NULL | show processlist | | 79480 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL | | 79481 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL | | 79482 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL | | 79483 | pppoe | localhost | pppoe | Query | 0 | init | UPDATE acc SET InputOctets="0", OutputOctets="0", InputPackets="unknown", OutputPackets="User | | 79484 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL | | 79485 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL | | 79486 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL Checked raid, seemns OK: [root@db2]# cat /proc/mdstat Personalities : [raid5] [raid4] [raid1] md0 : active raid1 sdd1[3] sdc1[2] sdb1[0] sda1[1] 136448 blocks [4/4] [UUUU] md1 : active raid5 sdd2[3] sdc2[2] sdb2[0] sda2[1] 12023808 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU] md3 : active raid5 sda4[1] sdd4[3] sdc4[2] sdb4[0] 203647488 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU] md2 : active raid5 sda3[1] sdd3[3] sdc3[2] sdb3[0] 24024576 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU] unused devices: <none> [root@db2]# top sees my mysqld cpu load, but nothing else seems to be wrong: [root@db2]# top top - 17:56:05 up 7 days, 3:55, 3 users, load average: 32.93, 24.72, 22.70 Tasks: 75 total, 4 running, 71 sleeping, 0 stopped, 0 zombie Cpu(s): 63.4% us, 36.6% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.0% si, 0.0% st Mem: 1988824k total, 1304776k used, 684048k free, 99588k buffers Swap: 12023800k total, 0k used, 12023800k free, 951028k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 5754 mysql 19 0 236m 57m 5108 R 99.9 2.9 21:58.76 mysqld 1 root 16 0 7216 700 580 S 0.0 0.0 0:00.39 init 2 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0 Repaired all mysql databases, reindexed raid ... I'm running out of ideeas ... Anyone has an ideea what can go wrong with this server ? Thank you
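
    A first step for spikes like this is usually to catch the offending statements with the slow query log and to sample the process list while the CPU is pegged. The directives below are for MySQL 5.0/5.1-era servers like this one (newer releases spell them slow_query_log and so on); the log path and thresholds are arbitrary choices:

      # in /etc/my.cnf under [mysqld], then restart mysqld
      log-slow-queries = /var/log/mysql-slow.log
      long_query_time  = 2
      log-queries-not-using-indexes
      # while the CPU is at 99%, sample what the server is doing every 5 seconds:
      # mysqladmin -uroot -p -i 5 processlist extended-status | grep -v Sleep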

    Read the article

  • Server high memory usage at same time every day

    - by Sam Parmenter
    Right, we moved one of our main sites onto a new AWS box with plenty of grunt, as it would allow us more control than we had before and future-proof ourselves. About a month ago we started running into issues with high memory usage at the same time every day. In the morning an export is run that writes data to a file, which is then FTPed to a local machine for processing. The issues were coinciding with the rough time of the export, but when we didn't run the export one day, the server still ran into the same problem. The export has since been run at other times of the day to monitor memory usage and see if it spikes. The conclusion is that the export is fine and barely touches the sides memory-wise: no noticeable change in memory usage. When the issue happens, its effect is to kill MySQL and require us to restart the process. We think it might be a MySQL memory issue, but it might just be that MySQL is the first to feel it. Looking at the logs, there is no particular query run before the memory usage hits 90%. When it strikes at about 9:20am, memory usage spikes from a near-constant 25% to 98% and the system very quickly kills MySQL to save itself. It usually takes about 3-4 minutes to die. There are no cron jobs running at that time of day and we haven't noticed a spike in traffic over the period of the issues. Any help would be massively appreciated! Thanks.
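
    A low-effort way to catch the culprit in the act (a sketch; the log path and the 09:10-09:30 window are arbitrary choices based on the 9:20 spike described above) is to sample per-process memory just before and during the spike:

      # crontab entry: every minute from 09:10 to 09:30, log the top memory users
      10-30 9 * * * { date; free -m; ps aux --sort=-rss | head -15; } >> /var/log/mem-sample.log
      # afterwards, also check dmesg / /var/log/syslog for oom-killer messages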

    Read the article

  • Failed to bring up eth1 in a dual-IP setup in Ubuntu

    - by lxyu
    I'm using Ubuntu 12.04. I tried to assign two IPs to the two ethernet cards in my server. The content of /etc/network/interfaces is like this:
      auto lo
      iface lo inet loopback
      auto eth0
      iface eth0 inet static
          address 114.80.156.a
          netmask 255.255.255.224
          gateway 114.80.156.b
      auto eth1
      iface eth1 inet static
          address 114.80.156.c
          netmask 255.255.255.240
          gateway 114.80.156.d
    a, b, c and d have different values, which means the two IPs are in different VLANs. But I can only bring up eth0 with this command:
      $ /etc/init.d/networking restart
      RTNETLINK answers: File exists
      Failed to bring up eth1.
      ...done.
    I have checked the question here, which shows the same problem as the one I encountered: Can only bring up one of two interfaces. But it seems it's not really solved. And in my situation, I need the 2 IPs to use 2 different gateways. So how do I fix this problem? Edit 1: changed the example config IPs from the 192.168.0.0/16 subnet to another 'real' subnet. Edit 2: the purpose of doing this is fairly simple. The IP range I was previously in doesn't have more room for new servers, and I have to move to another IP range. So I want to make the public servers bind to 2 IPs for the transition period. I only have really limited knowledge about routing and subnets. @BillThor @rackandboneman, would you please give me some keywords or links on how to set up routes for 2 IPs? And @Mike Pennington, how do you know I speak Chinese?
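
    The "RTNETLINK answers: File exists" message is typically what ifupdown prints when a second default route is added to the main routing table; the usual fix is one routing table per uplink via policy routing. A sketch using the placeholder addresses from the question (the table number is arbitrary, and the gateway line is deliberately dropped from the eth1 stanza):

      # /etc/network/interfaces, eth1 stanza - post-up hooks add a second table
      iface eth1 inet static
          address 114.80.156.c
          netmask 255.255.255.240
          post-up ip route add default via 114.80.156.d dev eth1 table 100
          post-up ip rule add from 114.80.156.c/32 table 100
          post-down ip rule del from 114.80.156.c/32 table 100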

    Read the article

  • Wget works, Ping doesn't

    - by derty
    There are some anomalies on a Virtuozzo-virtualized Debian 4 (I know, I'm going to upgrade this one asap, but there are dependencies). We run some websites on this one. A few days ago exim4 wasn't able to send mails to SOME people. I'll use live.com as an example domain! So some of these people got mails and some didn't. Some of the mails got stuck in the queue, and after 2 days they went out!! My Nagios never showed problems with the internet connection or disk space. Now I wanted to install "dig" to see how the machine resolves DNS requests, and this Debian tells me it doesn't know dig. Long story made short, Debian is able to download sites by exact IP or even with wget live.com, but it is not able to ping live.com. I'm 99% sure that the networking is right and the routing too! Some examples of my trying below:
      wget live.com             # downloads the site
      ping live.com
      ping http://www.live.com
      ping http://live.com
      returns: ping: unknown host live.com
    EDIT: I now use heise.de instead of live.com, and I found out I can ping the heise.de server by using its IP address.
      myserver:~# ping 193.99.144.85
      PING 193.99.144.85 (193.99.144.85) 56(84) bytes of data.
      64 bytes from 193.99.144.85: icmp_seq=1 ttl=248 time=12.7 ms
      64 bytes from 193.99.144.85: icmp_seq=2 ttl=248 time=12.6 ms
      64 bytes from 193.99.144.85: icmp_seq=3 ttl=248 time=12.9 ms
      64 bytes from 193.99.144.85: icmp_seq=4 ttl=248 time=13.1 ms
      64 bytes from 193.99.144.85: icmp_seq=5 ttl=248 time=13.1 ms
      --- 193.99.144.85 ping statistics ---
      5 packets transmitted, 5 received, 0% packet loss, time 4001ms
      rtt min/avg/max/mdev = 12.671/12.924/13.163/0.238 ms
    EDIT 2:
      myserver:/etc/apt# dig heise.de
      ; <<>> DiG 9.3.4-P1.2 <<>> heise.de
      ;; global options: printcmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40551
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 3
      ;; QUESTION SECTION:
      ;heise.de. IN A
      ;; ANSWER SECTION:
      heise.de. 2266 IN A 193.99.144.80
      ;; AUTHORITY SECTION:
      heise.de. 1622 IN NS ns.pop-hannover.de.
      heise.de. 1622 IN NS ns.s.plusline.de.
      heise.de. 1622 IN NS ns.plusline.de.
      heise.de. 1622 IN NS ns2.pop-hannover.net.
      heise.de. 1622 IN NS ns.heise.de.
      ;; ADDITIONAL SECTION:
      ns.plusline.de. 265 IN A 212.19.48.14
      ns.pop-hannover.de. 5113 IN A 193.98.1.200
      ns2.pop-hannover.net. 15150 IN A 62.48.67.66
      ;; Query time: 2 msec
      ;; SERVER: 193.200.112.80#53(193.200.112.80)
      ;; WHEN: Tue Oct 9 13:03:50 2012
      ;; MSG SIZE rcvd: 216
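
    Since dig and wget resolve the name while ping reports "unknown host", the tools may be taking different resolver paths (ping goes through the C library's NSS lookup). A couple of hedged checks, all standard commands:

      # what does the C library resolver itself return?
      getent hosts heise.de
      # which sources does NSS consult, and in what order?
      grep ^hosts /etc/nsswitch.conf
      # is the libc DNS NSS module present? (shipped with libc6 on Debian)
      ls /lib/libnss_dns*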

    Read the article

  • Varnish "FetchError no backend connection" error

    - by clueless-anon
    Varnishlog: 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1340829925 1.0 12 SessionOpen c 79.124.74.11 3063 :80 12 SessionClose c EOF 12 StatSess c 79.124.74.11 3063 0 1 0 0 0 0 0 0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1340829928 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1340829931 1.0 12 SessionOpen c 108.62.115.226 46211 :80 12 ReqStart c 108.62.115.226 46211 467185881 12 RxRequest c GET 12 RxURL c / 12 RxProtocol c HTTP/1.0 12 RxHeader c User-Agent: Pingdom.com_bot_version_1.4_(http://www.pingdom.com/) 12 RxHeader c Host: www.mysite.com 12 VCL_call c recv lookup 12 VCL_call c hash 12 Hash c / 12 Hash c www.mysite.com 12 VCL_return c hash 12 VCL_call c miss fetch 12 FetchError c no backend connection 12 VCL_call c error deliver 12 VCL_call c deliver deliver 12 TxProtocol c HTTP/1.1 12 TxStatus c 503 12 TxResponse c Service Unavailable 12 TxHeader c Server: Varnish 12 TxHeader c Content-Type: text/html; charset=utf-8 12 TxHeader c Retry-After: 5 12 TxHeader c Content-Length: 418 12 TxHeader c Accept-Ranges: bytes 12 TxHeader c Date: Wed, 27 Jun 2012 20:45:31 GMT 12 TxHeader c X-Varnish: 467185881 12 TxHeader c Age: 1 12 TxHeader c Via: 1.1 varnish 12 TxHeader c Connection: close 12 Length c 418 12 ReqEnd c 467185881 1340829931.192433119 1340829931.891024113 0.000051022 0.698516846 0.000074035 12 SessionClose c error 12 StatSess c 108.62.115.226 46211 1 1 1 0 0 0 256 418 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1340829934 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1340829937 1.0 netstat -tlnp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 3086/nginx tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1915/varnishd tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1279/sshd tcp 0 0 127.0.0.2:25 0.0.0.0:* LISTEN 3195/sendmail: MTA: tcp 0 0 127.0.0.2:6082 0.0.0.0:* LISTEN 1914/varnishd tcp 0 0 127.0.0.2:9000 0.0.0.0:* LISTEN 1317/php-fpm.conf) tcp 0 0 127.0.0.2:3306 0.0.0.0:* LISTEN 1192/mysqld tcp 0 0 127.0.0.2:587 0.0.0.0:* LISTEN 3195/sendmail: MTA: tcp 0 0 127.0.0.2:11211 0.0.0.0:* LISTEN 3072/memcached tcp6 0 0 :::8080 :::* LISTEN 3086/nginx tcp6 0 0 :::80 :::* LISTEN 1915/varnishd tcp6 0 0 :::22 :::* LISTEN 1279/sshd /etc/nginx/site-enabled/default server { listen 8080; ## listen for ipv4; this line is default and implied listen [::]:8080 default ipv6only=on; ## listen for ipv6 root /usr/share/nginx/www; index index.html index.htm index.php; # Make site accessible from http://localhost/ server_name localhost; location / { # First attempt to serve request as file, then # as directory, then fall back to index.html try_files $uri $uri/ /index.html; } location /doc { root /usr/share; autoindex on; allow 127.0.0.2; deny all; } location /images { root /usr/share; autoindex off; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/www; #} # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { fastcgi_pass 127.0.0.2:9000; fastcgi_index index.php; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } /etc/nginx/sites-enabled/www.mysite.com.vhost server { listen 8080; server_name www.mysite.com mysite.com.net; root /var/www/www.mysite.com/web; if 
($http_host != "www.mysite.com") { rewrite ^ http://www.mysite.com$request_uri permanent; } index index.php index.html; location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } # Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac). location ~ /\. { deny all; access_log off; log_not_found off; } location / { try_files $uri $uri/ /index.php?$args; } # Add trailing slash to */wp-admin requests. rewrite /wp-admin$ $scheme://$host$uri/ permanent; location ~* \.(jpg|jpeg|png|gif|css|js|ico)$ { expires max; log_not_found off; } location ~ \.php$ { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.2:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } include /var/www/www.mysite.com/web/nginx.conf; location ~ /nginx.conf { deny all; access_log off; log_not_found off; } } /etc/varnish/default.vcl # This is a basic VCL configuration file for varnish. See the vcl(7) # man page for details on VCL syntax and semantics. # # Default backend definition. Set this to point to your content # server. # backend default { .host = "127.0.0.2"; .port = "8080"; # .connect_timeout = 600s; #.first_byte_timeout = 600s; # .between_bytes_timeout = 600s; # .max_connections = 800; Note: uncommenting the last four options at default.vcl made no difference. cat /etc/default/varnish # Configuration file for varnish # # /etc/init.d/varnish expects the variables $DAEMON_OPTS, $NFILES and $MEMLOCK # to be set from this shell script fragment. # # Should we start varnishd at boot? Set to "yes" to enable. START=yes # Maximum number of open files (for ulimit -n) NFILES=131072 # Maximum locked memory size (for ulimit -l) # Used for locking the shared memory log in memory. If you increase log size, # you need to increase this number as well MEMLOCK=82000 # Default varnish instance name is the local nodename. Can be overridden with # the -n switch, to have more instances on a single server. INSTANCE=$(uname -n) # This file contains 4 alternatives, please use only one. ## Alternative 1, Minimal configuration, no VCL # # Listen on port 6081, administration on localhost:6082, and forward to # content server on localhost:8080. Use a 1GB fixed-size cache file. # # DAEMON_OPTS="-a :6081 \ # -T localhost:6082 \ # -b localhost:8080 \ # -u varnish -g varnish \ # -S /etc/varnish/secret \ # -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G" ## Alternative 2, Configuration with VCL # # Listen on port 6081, administration on localhost:6082, and forward to # one content server selected by the vcl file, based on the request. Use a 1GB # fixed-size cache file. # DAEMON_OPTS="-a :80 \ -T 127.0.0.2:6082 \ -f /etc/varnish/default.vcl \ -S /etc/varnish/secret \ -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G" If you need any other info let me know. I am all out of clue as to whats the problem.
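
    Given the config above (the VCL backend is 127.0.0.2:8080 and nginx is bound to 0.0.0.0:8080), a quick way to confirm whether Varnish can actually reach the backend is to test that same address by hand and watch the backend counters, using only standard tools:

      # from the varnish host: does nginx answer on the address the VCL points at?
      curl -sI -H 'Host: www.mysite.com' http://127.0.0.2:8080/
      # backend counters (a climbing backend_fail means connects are refused or timing out)
      varnishstat -1 | grep backend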

    Read the article

  • standard packages list

    - by Valintinr
    I'm learning the Puppet system and now need to do the next task. We have a few servers with the same OS (ALT Linux p6, t6) as puppet agents, and a puppet master. The agents have some packages installed, e.g. 200 packages on the first, 300 on the second, and so on, but only 180 of them are necessary. We know the names of the necessary packages but don't know the names of the others (the unnecessary packages). So the task: how can I check for or install (if not installed yet) the necessary packages, and delete the other packages, whose names we don't know? Help please. WBR, Valentin
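
    Puppet can easily ensure the 180 known packages (package { $required_list: ensure => installed }), but it has no built-in way to purge packages it has never been told about, so the "unknown extras" list usually has to be computed on the node. A hedged sketch for an RPM-based distribution like ALT Linux (required.txt is a hypothetical file holding the 180 known names, one per line):

      rpm -qa --qf '%{NAME}\n' | sort > installed.txt
      sort required.txt > wanted.txt
      # packages that are installed but not on the wanted list
      comm -23 installed.txt wanted.txt > extra.txt
      # review extra.txt before feeding it to the package manager, e.g.
      # xargs -a extra.txt apt-get remove -y    # ALT Linux uses apt-rpm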

    Read the article

  • Copying files SSH vs sFTP

    - by jackquack
    I'm a bit of a Unix noob, but this question seems super basic, yet I can't find an answer anywhere. Basically, to my knowledge, SFTP is just FTP over SSH. So why can't I drag and drop files from one folder to another on the server side like I can over SSH? Why, when I want to unpack a .tar in a server folder, does it first want to copy it to my machine and then back? Why can't it just unpack it there like it can when I'm using the command line? I know that when I use the command line it is using the resources of the remote machine, but why can't SFTP do that too? Is there a way to execute commands which I would normally do over SSH, but in a GUI? I've tried mapping the drive to my own machine, and I've tried so many SFTP clients that it's silly. Is there another class of program that I just don't know of?
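
    On the last point: SFTP only speaks a file-transfer protocol, so remote execution has to go over SSH itself. A command can still be run without opening an interactive session (user, host and paths below are placeholders):

      # unpack an archive in place on the server, no copying back and forth
      ssh user@server 'tar xvf /path/to/archive.tar -C /path/to/destination'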

    Read the article

  • changing ext4 journal data mode with remount?

    - by Amos Shapira
    I'm tweaking an ext4 file system for speed, one tweak at a time. The first tweak is to change from "data=ordered" to "data=writeback". To test this, I execute "mount -n -o remount,data=ordered /" but I keep getting "mount: / not mounted already, or bad option". From lots of googling I found many questions about similar problems, and one answer circa 2001 (about ext3) which says that you can't change the journal mode with a remount. Is this limitation still current?
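
    Two hedged alternatives if the remount keeps refusing the change: make writeback the filesystem's default mount option with tune2fs, or pass it at boot time. Both avoid changing the data mode on a live mount (the device name below is a placeholder):

      # store data=writeback as a default mount option in the superblock
      tune2fs -o journal_data_writeback /dev/sdXN
      # for the root filesystem, also add rootflags=data=writeback to the
      # kernel command line (e.g. in the GRUB entry), then reboot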

    Read the article

  • Python error after installing libboost-all-dev on debian [migrated]

    - by Cameron Metzke
    A friend of mine wanted the liboost libraries installed on our shared computer so after installing libboost-all-dev 1.49.0.1 ( A debian wheezy machine ), I get this error when using the "pydoc modules" command on the commandline. It spits out the following error -- root@debian:/usr/include/c++/4.7# pydoc modules Please wait a moment while I gather a list of all available modules... **[debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 357 [debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 230 [debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../orte/runtime/orte_init.c at line 132 -------------------------------------------------------------------------- It looks like orte_init failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during orte_init; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer): orte_ess_set_name failed --> Returned value A system-required executable either could not be found or was not executable by this user (-127) instead of ORTE_SUCCESS -------------------------------------------------------------------------- -------------------------------------------------------------------------- It looks like MPI_INIT failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during MPI_INIT; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer): ompi_mpi_init: orte_init failed --> Returned "A system-required executable either could not be found or was not executable by this user" (-127) instead of "Success" (0) -------------------------------------------------------------------------- *** The MPI_Init() function was called before MPI_INIT was invoked. *** This is disallowed by the MPI standard. *** Your MPI job will now abort. [debian:49065] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!** root@debian:/usr/include/c++/4.7# I tried looking into the problem and ended up uninstalling the following to get it to work again. openmpi common all 1.4.5-1 libibverbs-dev amd64 1.1.6-1 libopenmpi-dev amd64 1.4.5-1 mpi-default-dev amd64 1.0.1 libboost-mpi-python1.49.0 although pydoc works again, I'm assuming the packages I removed are gunna hurt somethiong else down the track ? As you guessed im not a c/c++ programmer. So I guess my question is, will this hurt something later ? is their a way to install those packages without hurting python ?

    Read the article

  • Accessing Netatalk/AFP Shares from OS X Snow Leopard

    - by j4nus_
    I recently upgraded my Ubuntu home server from 8.04 client to 10.04 server and reinstalled all the services on it. One of them is the Netatalk daemon, which I configured in a fashion similar to this website: http://www.kremalicious.com/2008/06/ubuntu-as-mac-file-server-and-time-machine-volume/ Finder recognizes my server and the AFP service, yet when I attempt to log in (using valid credentials), Finder indicates it's the wrong username and password. I've tried altering some of the config files and used my Google-fu to look for solutions, but no luck. Any tips? (This was not an issue under 8.04, if it matters.)

    Read the article

  • how to uninstall mariadb and re-install mysql ? Mysql install turns into mariadb install

    - by Suma
    I recently upgraded my centos system via the desktop. mistake! I had mariadb, phpmyadmin working just fine before - but after the upgrade they stopped. I frantically googled and tried to follow some tutorials about mariadb * mysql reinstall untill I came to this one: http://centosforge.com/node/how-replace-mysql-mariadb-centos-6-including-mysql-uninstall-instructions-and-yum-install I executed this command to remove all of mysql: yum remove mysql-server mysql-libs mysql-devel mysql* and then tried to reinstall mysql: as below - it crashes with errors as follows: ***************************************************************** [root@localhost ~]# yum install mysql-server mysql mysql-devel ***************************************************************** Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: centos.serverspace.co.uk * extras: centos.serverspace.co.uk * rpmforge: www.mirrorservice.org * updates: mirror.rmg.io Setting up Install Process Package mysql-server is obsoleted by MariaDB-server, trying to install MariaDB-server-5.5.29-1.i686 instead Package mysql is obsoleted by MariaDB-server, trying to install MariaDB-server-5.5.29-1.i686 instead Package mysql-devel is obsoleted by MariaDB-devel, trying to install MariaDB-devel-5.5.29-1.i686 instead Resolving Dependencies --> Running transaction check ---> Package MariaDB-devel.i686 0:5.5.29-1 set to be updated --> Processing Dependency: MariaDB-common for package: MariaDB-devel ---> Package MariaDB-server.i686 0:5.5.29-1 set to be updated --> Processing Dependency: libssl.so.10 for package: MariaDB-server --> Processing Dependency: libcrypto.so.10 for package: MariaDB-server --> Running transaction check ---> Package MariaDB-common.i686 0:5.5.29-1 set to be updated --> Processing Dependency: MariaDB-compat for package: MariaDB-common ---> Package MariaDB-server.i686 0:5.5.29-1 set to be updated --> Processing Dependency: libssl.so.10 for package: MariaDB-server --> Processing Dependency: libcrypto.so.10 for package: MariaDB-server --> Running transaction check ---> Package MariaDB-compat.i686 0:5.5.29-1 set to be updated ---> Package MariaDB-server.i686 0:5.5.29-1 set to be updated --> Processing Dependency: libssl.so.10 for package: MariaDB-server --> Processing Dependency: libcrypto.so.10 for package: MariaDB-server --> Finished Dependency Resolution MariaDB-server-5.5.29-1.i686 from mariadb has depsolving problems --> Missing Dependency: libcrypto.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb) MariaDB-server-5.5.29-1.i686 from mariadb has depsolving problems --> Missing Dependency: libssl.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb) Error: Missing Dependency: libcrypto.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb) Error: Missing Dependency: libssl.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest [root@localhost ~] If I now try to install libssl.10, i get asked to install glibc libraries. 2.17 and 2.7 - other discussions have said to stay clear of the as this will explode my system - I tried download 2.17 and it's huge - took ages to unzip. 
Could someone please help me to completely remove MariaDB and install MySQL, so that I don't get the above errors and get pushed over to MariaDB when I run: yum install mysql-server mysql mysql-devel. There are tons of materials on how to install MariaDB, but none I have found so far plainly explains how to go back to MySQL.
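
    The obsoleting happens because a MariaDB yum repository is still enabled, so yum keeps substituting the MariaDB-* packages for the mysql ones. A hedged cleanup sequence (the repo filename is a guess; check /etc/yum.repos.d/ for the actual name on this system):

      # remove the leftover MariaDB pieces and take the repo out of play
      yum remove MariaDB-common MariaDB-compat MariaDB-server MariaDB-devel
      mv /etc/yum.repos.d/MariaDB.repo /root/MariaDB.repo.disabled
      yum clean all
      # now the stock CentOS packages should resolve again
      yum install mysql-server mysql mysql-devel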

    Read the article
