Search Results

Search found 24965 results on 999 pages for 'linux kvm'.

Page 372/999 | < Previous Page | 368 369 370 371 372 373 374 375 376 377 378 379  | Next Page >

  • Strange DNS problem [seems to be IPv6 issue]

    - by Homer J. Simpson
    Hi, I'm experiencing strange problems with my Kubuntu 9.10 when doing DNS requests from various applications. The requests are extremely slow, so loading pages in Firefox or Konqueror, doing package installations in KPackageManager and other apps is really painful, while Opera, for example, doesn't have any problems, and ping resolves hostnames quickly as well. I checked the proxy settings of the affected applications as well as of the general system and there are none, so it doesn't seem as if there is something in between. Does anybody have an idea of what to check for possible problem sources, or how to solve this? I'm behind a DSL home router which does the DHCP (and works well with my other computer). Any kind of advice would be really helpful. Edit: It seems to be some kind of IPv6 problem, as I could get it to work by disabling IPv6 explicitly in Firefox. Is there a general solution to this?
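
    A hedged workaround, not from the original post: since disabling IPv6 in Firefox helped, this looks like the classic stall on parallel A/AAAA lookups, and it can usually be addressed system-wide. The sysctl keys below exist on Kubuntu 9.10's kernel; treat the resolv.conf option as an assumption to verify:

        # prefer IPv4 for this session
        sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
        sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1

        # make it persistent across reboots
        echo 'net.ipv6.conf.all.disable_ipv6 = 1' | sudo tee -a /etc/sysctl.conf

        # alternative: keep IPv6 but stop glibc sending A and AAAA queries in parallel
        echo 'options single-request' | sudo tee -a /etc/resolv.conf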

    Read the article

  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system with one Apache instance in front of multiple Tomcats, which in turn connect to various databases. We balance the load to the Tomcats with mod_proxy_balancer. Currently we are receiving 100 requests a second; the load on the Apache server is quite low, but due to database-heavy operations on the Tomcats, the load there is roughly 25% of what I estimate they can handle. In a few weeks there is an event happening and we estimate that our requests will jump significantly, maybe by a factor of 10. I'm doing everything I can to reduce the load on our Tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean that instead of trying to deal with too many connections which all time out, I would like Apache to somehow monitor average response time, and as soon as the response time to Tomcat gets above some threshold, display an error page. This means that users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly, instead of everyone waiting far too long for their page, eventually everyone timing out, and the database being swamped with queries whose results are never used. Hopefully this makes sense; I was looking for suggestions on how I could achieve this. Thanks.
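
    Apache cannot throttle on average response time out of the box, but a short proxy timeout plus error documents gets close to the "fail fast" behaviour described. A hedged sketch; the balancer name, backend hosts and the /overloaded.html page are placeholders:

        <Proxy balancer://mycluster>
            BalancerMember http://tomcat1:8080 timeout=10 retry=30
            BalancerMember http://tomcat2:8080 timeout=10 retry=30
        </Proxy>
        ProxyPass /app balancer://mycluster/ timeout=10
        ProxyErrorOverride On
        ErrorDocument 502 /overloaded.html
        ErrorDocument 503 /overloaded.html
        ErrorDocument 504 /overloaded.html

    Requests that cannot be answered within roughly 10 seconds then fall through to a static page instead of queueing; modules such as mod_qos offer finer-grained request limiting if that is not enough.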

    Read the article

  • Make errors when compiling HPL-2.1 on MOSIX-clustered Debian server

    - by tlake
    I'm trying to compile HPL 2.1 on a MOSIX-clustered Debian server, but the make process terminates with errors as seen below. Included are my makefile and two versions of output: one from a standard execution, and one from an execution run with the debug flag. Any help and guidance would be very much appreciated! The makefile: # ---------------------------------------------------------------------- # - shell -------------------------------------------------------------- # ---------------------------------------------------------------------- # SHELL = /bin/bash # CD = cd CP = cp LN_S = ln -s MKDIR = mkdir RM = /bin/rm -f TOUCH = touch # # ---------------------------------------------------------------------- # - Platform identifier ------------------------------------------------ # ---------------------------------------------------------------------- # ARCH = Linux_PII_CBLAS # # ---------------------------------------------------------------------- # - HPL Directory Structure / HPL library ------------------------------ # ---------------------------------------------------------------------- # TOPdir = $(HOME)/hpl-2.1 INCdir = $(TOPdir)/include BINdir = $(TOPdir)/bin/$(ARCH) LIBdir = $(TOPdir)/lib/$(ARCH) # HPLlib = $(LIBdir)/libhpl.a # # ---------------------------------------------------------------------- # - Message Passing library (MPI) -------------------------------------- # ---------------------------------------------------------------------- # MPinc tells the C compiler where to find the Message Passing library # header files, MPlib is defined to be the name of the library to be # used. The variable MPdir is only used for defining MPinc and MPlib. # MPdir = /usr/local MPinc = -I$(MPdir)/include MPlib = $(MPdir)/lib/libmpi.so # # ---------------------------------------------------------------------- # - Linear Algebra library (BLAS or VSIPL) ----------------------------- # ---------------------------------------------------------------------- # LAinc tells the C compiler where to find the Linear Algebra library # header files, LAlib is defined to be the name of the library to be # used. The variable LAdir is only used for defining LAinc and LAlib. # LAdir = $(HOME)/CBLAS/lib LAinc = LAlib = $(LAdir)/cblas_LINUX.a # # ---------------------------------------------------------------------- # - F77 / C interface -------------------------------------------------- # ---------------------------------------------------------------------- # You can skip this section if and only if you are not planning to use # a BLAS library featuring a Fortran 77 interface. Otherwise, it is # necessary to fill out the F2CDEFS variable with the appropriate # options. **One and only one** option should be chosen in **each** of # the 3 following categories: # # 1) name space (How C calls a Fortran 77 routine) # # -DAdd_ : all lower case and a suffixed underscore (Suns, # Intel, ...), [default] # -DNoChange : all lower case (IBM RS6000), # -DUpCase : all upper case (Cray), # -DAdd__ : the FORTRAN compiler in use is f2c. # # 2) C and Fortran 77 integer mapping # # -DF77_INTEGER=int : Fortran 77 INTEGER is a C int, [default] # -DF77_INTEGER=long : Fortran 77 INTEGER is a C long, # -DF77_INTEGER=short : Fortran 77 INTEGER is a C short. 
# # 3) Fortran 77 string handling # # -DStringSunStyle : The string address is passed at the string loca- # tion on the stack, and the string length is then # passed as an F77_INTEGER after all explicit # stack arguments, [default] # -DStringStructPtr : The address of a structure is passed by a # Fortran 77 string, and the structure is of the # form: struct {char *cp; F77_INTEGER len;}, # -DStringStructVal : A structure is passed by value for each Fortran # 77 string, and the structure is of the form: # struct {char *cp; F77_INTEGER len;}, # -DStringCrayStyle : Special option for Cray machines, which uses # Cray fcd (fortran character descriptor) for # interoperation. # F2CDEFS = # # ---------------------------------------------------------------------- # - HPL includes / libraries / specifics ------------------------------- # ---------------------------------------------------------------------- # HPL_INCLUDES = -I$(INCdir) -I$(INCdir)/$(ARCH) $(LAinc) $(MPinc) HPL_LIBS = $(HPLlib) $(LAlib) $(MPlib) # # - Compile time options ----------------------------------------------- # # -DHPL_COPY_L force the copy of the panel L before bcast; # -DHPL_CALL_CBLAS call the cblas interface; # -DHPL_CALL_VSIPL call the vsip library; # -DHPL_DETAILED_TIMING enable detailed timers; # # By default HPL will: # *) not copy L before broadcast, # *) call the BLAS Fortran 77 interface, # *) not display detailed timing information. # HPL_OPTS = -DHPL_CALL_CBLAS # # ---------------------------------------------------------------------- # HPL_DEFS = $(F2CDEFS) $(HPL_OPTS) $(HPL_INCLUDES) # # ---------------------------------------------------------------------- # - Compilers / linkers - Optimization flags --------------------------- # ---------------------------------------------------------------------- # CC = /usr/bin/gcc CCNOOPT = $(HPL_DEFS) CCFLAGS = $(HPL_DEFS) -fomit-frame-pointer -O3 -funroll-loops # # On some platforms, it is necessary to use the Fortran linker to find # the Fortran internals used in the BLAS library. # LINKER = ~/BLAS LINKFLAGS = $(CCFLAGS) # ARCHIVER = ar ARFLAGS = r RANLIB = echo # # ---------------------------------------------------------------------- Make output: ~/BLAS -DHPL_CALL_CBLAS -I/homes/laket/hpl-2.1/include -I/homes/laket/hpl-2.1/include/Linux_PII_CBLAS -I/usr/local/include -fomit-frame-pointer -O3 -funroll-loops -o /homes/laket/hpl-2.1/bin/Linux_PII_CBLAS/xhpl HPL_pddriver.o HPL_pdinfo.o HPL_pdtest.o /homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a /homes/laket/CBLAS/lib/cblas_LINUX.a /usr/local/lib/libmpi.so /bin/bash: /homes/laket/BLAS: Is a directory make[2]: *** [dexe.grd] Error 126 make[2]: Target `all' not remade because of errors. make[2]: Leaving directory `/homes/laket/hpl-2.1/testing/ptest/Linux_PII_CBLAS' make[1]: *** [build_tst] Error 2 make[1]: Leaving directory `/homes/laket/hpl-2.1' make: *** [build] Error 2 make: Target `all' not remade because of errors. Make -d output: Considering target file `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'. Looking for an implicit rule for `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'. Trying pattern rule with stem `libhpl.a'. Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a,v'. Trying pattern rule with stem `libhpl.a'. Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/RCS/libhpl.a,v'. Trying pattern rule with stem `libhpl.a'. Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/RCS/libhpl.a'. Trying pattern rule with stem `libhpl.a'. 
Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/s.libhpl.a'. Trying pattern rule with stem `libhpl.a'. Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/SCCS/s.libhpl.a'. No implicit rule found for `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'. Finished prerequisites of target file `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'. No need to remake target `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'. Finished prerequisites of target file `dexe.grd'. Must remake target `dexe.grd'. ~/BLAS -DHPL_CALL_CBLAS -I/homes/laket/hpl-2.1/include -I/homes/laket/hpl-2.1/include/Linux_PII_CBLAS -I/usr/local/include -fomit-frame-pointer -O3 -funroll-loops -o /homes/laket/hpl-2.1/bin/Linux_PII_CBLAS/xhpl HPL_pddriver.o HPL_pdinfo.o HPL_pdtest.o /homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a /homes/laket/CBLAS/lib/cblas_LINUX.a /usr/local/lib/libmpi.so Putting child 0x0129a2c0 (dexe.grd) PID 24853 on the chain. Live child 0x0129a2c0 (dexe.grd) PID 24853 /bin/bash: /homes/laket/BLAS: Is a directory make[2]: Reaping losing child 0x0129a2c0 PID 24853 *** [dexe.grd] Error 126 Removing child 0x0129a2c0 PID 24853 from chain. Failed to remake target file `dexe.grd'. Finished prerequisites of target file `dexe'. Giving up on target file `dexe'. Finished prerequisites of target file `all'. Giving up on target file `all'. make[2]: Target `all' not remade because of errors. make[2]: Leaving directory `/homes/laket/hpl-2.1/testing/ptest/Linux_PII_CBLAS' Reaping losing child 0x010ce900 PID 24841 make[1]: *** [build_tst] Error 2 Removing child 0x010ce900 PID 24841 from chain. Failed to remake target file `build_tst'. make[1]: Leaving directory `/homes/laket/hpl-2.1' Reaping losing child 0x00d91ae0 PID 24774 make: *** [build] Error 2 Removing child 0x00d91ae0 PID 24774 from chain. Failed to remake target file `build'. Finished prerequisites of target file `install'. make: Target `all' not remade because of errors. Giving up on target file `install'. Finished prerequisites of target file `all'. Giving up on target file `all'. Thanks!
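
    One observation from the output above: the link command starts with ~/BLAS and bash answers "/homes/laket/BLAS: Is a directory", which points at the LINKER = ~/BLAS line in the makefile. LINKER must name a compiler/linker executable, not the BLAS directory. A hedged fix for Make.Linux_PII_CBLAS, assuming a plain gcc link is enough for this CBLAS build:

        # LINKER must be an executable, not a directory
        LINKER     = /usr/bin/gcc
        LINKFLAGS  = $(CCFLAGS)

    Then rebuild with something like: make arch=Linux_PII_CBLAS clean_arch_all && make arch=Linux_PII_CBLAS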

    Read the article

  • Managing an application across multiple servers, or PXE vs cfEngine/Chef/Puppet

    - by matt
    We have an application that is running on a few boxes (5 or so, and it will grow). The hardware is identical in all the machines, and ideally the software would be as well. I have been managing them by hand up until now and don't want to anymore (static IP addresses, disabling unnecessary services, installing required packages...). Can anyone weigh the pros and cons of the following options, or suggest something more intelligent? 1: Individually install CentOS on all the boxes and manage the configs with Chef/CFEngine/Puppet. This would be good, as I have wanted an excuse to learn to use one of those applications, but I don't know if this is actually the best solution. 2: Make one box perfect and image it. Serve the image over PXE, and whenever I want to make modifications I can just reboot the boxes from a new image. How do cluster guys normally handle things like having MAC addresses in the /etc/sysconfig/network-scripts/ifcfg* files? We use InfiniBand as well, and it also refuses to start if the HWADDR is wrong. Can these be generated correctly at boot? I'm leaning towards the PXE solution, but I think monitoring with Munin or Nagios will be a little more complicated with it. Anyone have experience with this type of problem? All the servers have SSDs in them and are fast and powerful. Thanks, matt.
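
    For the HWADDR question specifically, one hedged approach is to strip the hardware addresses from the golden image and regenerate them on each boot from the live interfaces. The script name and interface list below are made up, and InfiniBand interfaces may need separate handling:

        #!/bin/bash
        # /usr/local/sbin/fix-hwaddr.sh - run once at boot, before networking starts
        for dev in eth0 eth1; do
            cfg=/etc/sysconfig/network-scripts/ifcfg-$dev
            [ -f "$cfg" ] || continue
            mac=$(cat /sys/class/net/$dev/address)
            sed -i '/^HWADDR=/d' "$cfg"
            echo "HWADDR=$mac" >> "$cfg"
        done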

    Read the article

  • How to tell statd to use portmap on a non-localhost IP address?

    - by jneves
    How can I make statd connect to an IP address other than 127.0.0.1? I have a server that is connected to 2 different networks (one public, one private). I want it to provide an NFS share for the private network only. The host is an Ubuntu 8.04 machine. The private IP address is 192.168.1.202. I changed /etc/default/portmap to add: OPTIONS="-i 192.168.1.202" The command lsof -n | grep portmap returns: portmap 10252 daemon cwd DIR 202,0 4096 2 / portmap 10252 daemon rtd DIR 202,0 4096 2 / portmap 10252 daemon txt REG 202,0 15248 13461 /sbin/portmap portmap 10252 daemon mem REG 202,0 83708 32823 /lib/tls/i686/cmov/libnsl-2.7.so portmap 10252 daemon mem REG 202,0 1364388 32817 /lib/tls/i686/cmov/libc-2.7.so portmap 10252 daemon mem REG 202,0 31304 16588 /lib/libwrap.so.0.7.6 portmap 10252 daemon mem REG 202,0 109152 16955 /lib/ld-2.7.so portmap 10252 daemon 0u CHR 1,3 960 /dev/null portmap 10252 daemon 1u CHR 1,3 960 /dev/null portmap 10252 daemon 2u CHR 1,3 960 /dev/null portmap 10252 daemon 3u unix 0xecc8c3c0 4332992 socket portmap 10252 daemon 4u IPv4 4332993 UDP 192.168.1.202:sunrpc portmap 10252 daemon 5u IPv4 4332994 TCP 192.168.1.202:sunrpc (LISTEN) portmap 10252 daemon 6u REG 0,12 289 3821511 /var/run/portmap_mapping I defined the following in /etc/hosts: 192.168.1.202 server.local In /etc/default/nfs-common I changed STATDOPTS to: STATDOPTS="--name server.local" Yet when I run /etc/init.d/nfs-common start it fails to start. The log shows: Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: Version 1.1.2 Starting Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: Flags: Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: unable to register (statd, 1, udp). An strace -f rpc.statd -n server.local results in a lot of lines, including this one: sendto(9, "\200]3\362\0\0\0\0\0\0\0\2\0\1\206\240\0\0\0\2\0\0\0\1"..., 56, 0, {sa_family=AF_INET, sin_port=htons(111), sin_addr=inet_addr("127.0.0.1")}, 16) = 56
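
    The strace line shows why registration fails: rpc.statd registers with portmap at 127.0.0.1:111, so binding portmap exclusively to 192.168.1.202 breaks local registration. A hedged alternative is to let portmap listen normally and hide the RPC ports from the public side with iptables (eth0 assumed here to be the public interface):

        # drop portmap/NFS traffic arriving on the public interface only
        iptables -A INPUT -i eth0 -p tcp --dport 111 -j DROP
        iptables -A INPUT -i eth0 -p udp --dport 111 -j DROP
        iptables -A INPUT -i eth0 -p tcp --dport 2049 -j DROP
        iptables -A INPUT -i eth0 -p udp --dport 2049 -j DROP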

    Read the article

  • Why does my server have a dir named "?"

    - by liuxingruo
    These are all the dirs on my server: ? bin boot dev etc home lib lost+found media media2 misc mnt net opt proc root sbin selinux srv sys tmp usr var Why is there a "?" dir? Thanks very much. BTW: the touch command cannot be found on my server (weird). I list the bin dir: alsacard cp dd env hostname loadkeys more ps sed tcptraceroute alsaunmute cpio df ex igawk loadkeys.static mount pwd setfont traceroute6 arch csh dmesg false ipcalc logger mountpoint raw setserial tracert awk cut dnsdomainname fgrep kbd_mode login mv red sh view basename date doexec gawk keyctl ls netstat redhat_lsb_init sleep ypdomainname bash dbus-cleanup-sockets domainname gettext kill mail nice rm sort cat dbus-daemon dumpkeys grep ksh mailx nisdomainname rmdir stty chgrp dbus-monitor echo gtar ksh93 mkdir pgawk rpm su chmod dbus-send ed gunzip link mknod ping rvi sync chown dbus-uuidgen egrep gzip ln mktemp ping6 rview tar. touch is missing; how can I get it back?
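
    A hedged recovery path, assuming this is a CentOS/RHEL-style box (the selinux and redhat_lsb_init entries suggest it): touch belongs to coreutils, so it can be reinstalled from that package, and the "?" entry is just a one-character directory name that has to be escaped so the shell does not treat it as a glob:

        rpm -qf /bin/touch            # should report coreutils
        yum reinstall coreutils       # or: rpm -Uvh --replacepkgs coreutils-<version>.rpm

        ls -lab /                     # -b prints escapes, so odd names become visible
        ls -la ./\?                   # inspect the "?" directory itself
        rmdir ./\?                    # only if it is empty and really unwanted

    A missing /bin/touch plus a stray root-level directory is also worth treating as a possible compromise indicator, so running rpm -Va and reviewing recent logins may be worthwhile.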

    Read the article

  • Did something just get borked with glibc/perl/irssi?

    - by Wayne Werner
    I was using irssi about 30 minutes ago now on Ubuntu server 12.04. Everything was perfectly fine and then all of the sudden something happened (my guess is a power failure). The box was restarted. When I logged back in and ran irssi, I got the following: *** glibc detected *** irssi: double free or corruption (out): 0x0000000002085a40 *** ======= Backtrace: ========= /lib/x86_64-linux-gnu/libc.so.6(+0x7e626)[0x7ffc01d87626] irssi(config_node_set_str+0x98)[0x491768] irssi[0x491f12] irssi[0x491e61] irssi(config_parse+0x52)[0x492112] irssi[0x48ab81] irssi(settings_init+0xd1)[0x48bf81] irssi(core_init+0x79)[0x47a849] irssi(main+0xd8)[0x4167e8] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7ffc01d2a76d] irssi[0x416b41] ======= Memory map: ======== 00400000-004d0000 r-xp 00000000 08:01 1319015 /usr/bin/irssi 006cf000-006d0000 r--p 000cf000 08:01 1319015 /usr/bin/irssi 006d0000-006dc000 rw-p 000d0000 08:01 1319015 /usr/bin/irssi 006dc000-006dd000 rw-p 00000000 00:00 0 02078000-02099000 rw-p 00000000 00:00 0 [heap] 7ffc0025b000-7ffc00270000 r-xp 00000000 08:01 655404 /lib/x86_64-linux-gnu/libgcc_s.so.1 7ffc00270000-7ffc0046f000 ---p 00015000 08:01 655404 /lib/x86_64-linux-gnu/libgcc_s.so.1 7ffc0046f000-7ffc00470000 r--p 00014000 08:01 655404 /lib/x86_64-linux-gnu/libgcc_s.so.1 7ffc00470000-7ffc00471000 rw-p 00015000 08:01 655404 /lib/x86_64-linux-gnu/libgcc_s.so.1 7ffc00471000-7ffc0073a000 r--p 00000000 08:01 1320172 /usr/lib/locale/locale-archive 7ffc0073a000-7ffc00746000 r-xp 00000000 08:01 655391 /lib/x86_64-linux-gnu/libnss_files-2.15.so 7ffc00746000-7ffc00945000 ---p 0000c000 08:01 655391 /lib/x86_64-linux-gnu/libnss_files-2.15.so 7ffc00945000-7ffc00946000 r--p 0000b000 08:01 655391 /lib/x86_64-linux-gnu/libnss_files-2.15.so 7ffc00946000-7ffc00947000 rw-p 0000c000 08:01 655391 /lib/x86_64-linux-gnu/libnss_files-2.15.so 7ffc00947000-7ffc00951000 r-xp 00000000 08:01 655392 /lib/x86_64-linux-gnu/libnss_nis-2.15.so 7ffc00951000-7ffc00b51000 ---p 0000a000 08:01 655392 /lib/x86_64-linux-gnu/libnss_nis-2.15.so Followed by many more lines. Is there anything I can do to fix this?
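
    The backtrace dies inside config_parse/settings_init, which hints that irssi is tripping over a configuration file truncated by the power failure rather than a genuinely broken glibc. A hedged first step, assuming the default ~/.irssi location:

        # move the possibly corrupted config aside and let irssi regenerate it
        mv ~/.irssi/config ~/.irssi/config.broken
        irssi
        # afterwards, diff the two files to recover any settings worth keeping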

    Read the article

  • Have a bash script remotely shut down another computer on the LAN

    - by gletscher
    Hi, I want to write a bash script that, when called, shuts down another computer on the LAN. Maybe using ssh? The other computer is an Ubuntu machine. I'm not sure how to send e.g. a sudo shutdown -h now command over ssh from within a bash script after logging in. I'm also not sure how to obtain the rights for the sudo command, and hence how to handle the communication between server and client from within a bash script. Any suggestions are greatly appreciated.
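
    A common pattern is a dedicated SSH key plus a sudoers rule on the target that allows only the shutdown command without a password. A hedged sketch; the user name, host name and key path are all made up:

        # one-time setup on the calling machine
        ssh-keygen -t rsa -f ~/.ssh/remote_shutdown -N ""
        ssh-copy-id -i ~/.ssh/remote_shutdown.pub shutdownuser@remotehost

        # on the Ubuntu target, add via visudo:
        #   shutdownuser ALL=(root) NOPASSWD: /sbin/shutdown

        # the script itself
        #!/bin/bash
        ssh -i ~/.ssh/remote_shutdown shutdownuser@remotehost 'sudo /sbin/shutdown -h now'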

    Read the article

  • iOS 7 loops on the "Trust This Computer" dialog

    - by gcb
    I'm trying to transfer files to the work iPad via my Debian 7 box. When I plug it into the computer's USB port, it shows the dialog about trusting this computer, and the computer shows a GNOME alert about the iPad being locked, saying that I should unlock it and try again. I press "Trust" on the iPad and try again in GNOME, and it starts again. Over and over. Endlessly. There are dozens of threads about this on the Apple support forums. No solution, just dozens of "me too" flags, e.g. https://discussions.apple.com/message/23082859#23082859 (44 me-toos, 2k views). Here is what I get in /var/log/messages: Oct 23 21:17:39 dotmatrix kernel: [ 1928.517766] usb 2-1.7: USB disconnect, device number 16 Oct 23 21:17:39 dotmatrix kernel: [ 1928.715441] usb 2-1.7: new high-speed USB device number 17 using ehci_hcd Oct 23 21:17:40 dotmatrix kernel: [ 1928.811031] usb 2-1.7: New USB device found, idVendor=05ac, idProduct=12ab Oct 23 21:17:40 dotmatrix kernel: [ 1928.811036] usb 2-1.7: New USB device strings: Mfr=1, Product=2, SerialNumber=3 Oct 23 21:17:40 dotmatrix kernel: [ 1928.811039] usb 2-1.7: Product: iPad Oct 23 21:17:40 dotmatrix kernel: [ 1928.811041] usb 2-1.7: Manufacturer: Apple Inc. Oct 23 21:17:40 dotmatrix kernel: [ 1928.811043] usb 2-1.7: SerialNumber: fec5e0f6a6fa18a936de3c53af661051d290275e Oct 23 21:17:40 dotmatrix mtp-probe: checking bus 2, device 17: "/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.7" Oct 23 21:17:40 dotmatrix mtp-probe: bus: 2, device: 17 was not an MTP device Oct 23 21:17:43 dotmatrix kernel: [ 1932.346505] usb 2-1.7: USB disconnect, device number 17 If I never press the trust dialog it will stay there until I remove the cable, but the log shows that it gave up 3 seconds after the cable was connected.
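
    On Debian the pairing handshake is handled by usbmuxd/libimobiledevice, and iOS 7 changed the pairing protocol, so the stock Wheezy packages tend to loop exactly like this. If a sufficiently new libimobiledevice is available (an assumption worth checking), something like the following may let the trust prompt stick:

        sudo apt-get install libimobiledevice-utils ifuse
        idevicepair pair        # tap "Trust" on the iPad when prompted
        idevicepair validate    # should confirm the pairing
        ifuse /mnt/ipad         # mount the device once pairing succeeds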

    Read the article

  • Is there a server distro with the capability of syncing live data to multiple machines?

    - by Adam Hart
    Scenario: I have a main server that is used for page building and storing master data, and is accessed by a few clients on site. This company also has multiple branches, each with its own server that users connect to locally, but they need to work with all the same data and have it synchronized across all servers in real (or close to real) time. Is there a way, or a specific server OS, that can sync live data across all of these servers? These servers would also need to be able to: configure AFP, FTP, CIFS, SMB; continue to host the web server and database server in a Microsoft environment, but move the file server off to commodity hardware. Just wondering if this is even possible.
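
    This is less a distro feature than a replication choice; one hedged illustration is a replicated GlusterFS volume between the branch servers (host names and paths below are invented), which clients then reach over SMB/NFS as before:

        # run on one server after installing glusterfs-server on both
        gluster peer probe branch2.example.com
        gluster volume create shared replica 2 \
            branch1.example.com:/export/shared branch2.example.com:/export/shared
        gluster volume start shared
        mount -t glusterfs localhost:/shared /srv/shared

    Whether "close to real time" holds over the WAN links between branches depends heavily on latency, so DRBD, scheduled rsync/unison, or a vendor sync appliance may fit slow links better.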

    Read the article

  • Traffic shaping for certain (local) users

    - by JMW
    Hello, I'm using Ubuntu 10.10 and I have a local backup user called "backup". :) I would like to limit this user to a bandwidth of 1 Mbit, no matter which software wants to connect to the network. This solution doesn't work: iptables -t mangle -A OUTPUT -p tcp -m owner --uid-owner 1001 -j MARK --set-mark 12 iptables -t mangle -A POSTROUTING -p tcp -m owner --uid-owner 1001 -j MARK --set-mark 12 tc qdisc del dev eth0 root tc qdisc add dev eth0 root handle 2 htb default 1 tc filter add dev eth0 parent 2: protocol ip pref 2 handle 50 fw classid 2:6 tc class add dev eth0 parent 2: classid 2:6 htb rate 10Kbit ceil 1Mbit tc qdisc show dev eth0 tc class show dev eth0 tc filter show dev eth0 Does anyone know how to do it? Thanks a lot in advance.
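
    One detail that stands out: the packets are marked with 12, but the fw filter matches handle 50, so nothing ever lands in class 2:6. A hedged re-sketch of the same idea with the numbers aligned (a single MARK rule in OUTPUT is enough for locally generated traffic):

        # mark traffic generated by uid 1001 (the backup user)
        iptables -t mangle -A OUTPUT -m owner --uid-owner 1001 -j MARK --set-mark 12

        # HTB: default class left effectively uncapped, backup user capped at 1 Mbit
        tc qdisc add dev eth0 root handle 2: htb default 1
        tc class add dev eth0 parent 2: classid 2:1 htb rate 1000Mbit
        tc class add dev eth0 parent 2: classid 2:6 htb rate 1Mbit ceil 1Mbit
        tc filter add dev eth0 parent 2: protocol ip handle 12 fw classid 2:6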

    Read the article

  • Why are my files in /var/lock and where did they just go?!

    - by Nicky Hajal
    I am hosting a website on Debian 5.0 & Apache2. Today one of my websites was down; Apache said it couldn't find the directory. I located the files: the whole site, once in /var/www/site, was now in /var/lock/site. All the files were present. I was confused, but figured I'd just move it back: mv /var/lock/site /var/www All looked fine... except that only the directories moved and the files appear to be lost! I am working on restoring from backups, but I would really love to know what happened and where my files went (the backups are a few days old). Thanks for your help!
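
    A hedged explanation to check: on some setups /var/lock is a small tmpfs (cleared on reboot), and mv across different filesystems is a copy-then-delete, which can behave badly if the target runs out of space mid-move. Worth verifying before assuming the data is gone (the file name below is just an example):

        mount | grep -E '/var/lock|/var/run'          # is /var/lock on tmpfs?
        df -h /var/lock /var/www                      # did either filesystem fill up?
        find / -xdev -name 'index.php' 2>/dev/null    # hunt for surviving copies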

    Read the article

  • OpenSSL missing during ./configure. How to fix?

    - by P K
    I was trying to install node.js and found OpenSSL support missing during ./configure. How can I fix it? Is it a mandatory step? # ./configure Checking for gcc : ok Checking for library dl : not found Checking for openssl : not found Checking for function SSL_library_init : not found Checking for header openssl/crypto.h : not found /home/ec2-user/node-v0.6.6/wscript:374: error: Could not autodetect OpenSSL support. Make sure OpenSSL development packages are installed. Use configure --without-ssl to disable this message.
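
    The configure check is only looking for the OpenSSL development headers; SSL support is optional (--without-ssl), but npm and anything using https will want it. A hedged fix for a yum-based box (the ec2-user path suggests Amazon Linux/CentOS):

        sudo yum install -y gcc gcc-c++ make openssl-devel
        ./configure
        make && sudo make install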

    Read the article

  • What does this RPC error message mean?

    - by user161834
    I have RHEL release 6.2 and use the NFS service (nfs-utils-1.2.3) to connect to an NFS server, and I found a lot of messages in /var/log/messages: Apr 1 11:08:35 XXX rpc.idmapd[3010]: nss_getpwnam: name '2' does not map into domain 'XXXX.com' Apr 1 11:14:26 XXX rpc.idmapd[3010]: nss_getpwnam: name '0' does not map into domain 'XXXX.com' Apr 1 11:18:36 XXX rpc.idmapd[3010]: nss_getpwnam: name '2' does not map into domain 'XXXX.com' Apr 1 11:24:27 XXX rpc.idmapd[3010]: nss_getpwnam: name '0' does not map into domain 'XXXX.com' Apr 1 11:28:37 XXX rpc.idmapd[3010]: nss_getpwnam: name '2' does not map into domain 'XXXX.com' Apr 1 11:34:27 XXX rpc.idmapd[3010]: nss_getpwnam: name '0' does not map into domain 'XXXX.com' Apr 1 11:38:37 XXX rpc.idmapd[3010]: nss_getpwnam: name '2' does not map into domain 'XXXX.com' Apr 1 11:44:28 XXX rpc.idmapd[3010]: nss_getpwnam: name '0' does not map into domain 'XXXX.com' Apr 1 11:48:37 XXX rpc.idmapd[3010]: nss_getpwnam: name '2' does not map into domain 'XXXX.com' What do these messages mean?
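
    These come from NFSv4 ID mapping: the other end is handing over bare numeric IDs ('0', '2') that rpc.idmapd cannot map into the configured domain, usually because the Domain setting differs between client and server. A hedged check of /etc/idmapd.conf on both ends (values are placeholders):

        [General]
        Domain = XXXX.com        # must be identical on client and server

        [Mapping]
        Nobody-User = nobody
        Nobody-Group = nobody

    followed by service rpcidmapd restart. The messages are otherwise harmless; unmappable names simply fall back to the nobody user.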

    Read the article

  • How can I limit the upload/download bandwidth on my CentOS server?

    - by Dan Nestor
    How can I limit the upload and download bandwidth on my CentOS server? This is a box with a single interface, eth0. Ideally, I would like a command-line solution (I've been trying to use tc), something that I could easily switch on and off in a script. So far I've been trying to do something like tc filter add dev eth0 protocol ip prio 50 u32 police rate 100kbit burst 10240 drop but I'm obviously missing a lot of knowledge and information. Can somebody help with a quick one-liner? Many thanks, Dan
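
    tc only shapes egress, so a common minimal combination is a tbf qdisc for uploads plus an ingress policer for downloads. A hedged sketch capping both directions around 1 Mbit (rates and burst sizes are placeholders to tune):

        # upload (egress) cap
        tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms

        # download (ingress) police
        tc qdisc add dev eth0 handle ffff: ingress
        tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
            match u32 0 0 police rate 1mbit burst 64k drop flowid :1

        # switch it all off again
        tc qdisc del dev eth0 root
        tc qdisc del dev eth0 ingress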

    Read the article

  • MySQL query, 2 similar servers, 2 minute difference in execution times

    - by mr12086
    I had a similar question on stack overflow, but it seems to be more server/mysql setup related than coding. The queries below all execute instantly on our development server where as they can take upto 2 minutes 20 seconds. The query execution time seems to be affected by home ambiguous the LIKE string's are. If they closely match a country that has few matches it will take less time, and if you use something like 'ge' for germany - it will take longer to execute. But this doesn't always work out like that, at times its quite erratic. Sending data appears to be the culprit but why and what does that mean. Also memory on production looks to be quite low (free memory)? Production: Intel Quad Xeon E3-1220 3.1GHz 4GB DDR3 2x 1TB SATA in RAID1 Network speed 100Mb Ubuntu Development Intel Core i3-2100, 2C/4T, 3.10GHz 500 GB SATA - No RAID 4GB DDR3 UPDATE 2 : mysqltuner output: [prod] -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.61-0ubuntu0.10.04.1 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 103M (Tables: 180) [--] Data in InnoDB tables: 491M (Tables: 19) [!!] Total fragmented tables: 38 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 77d 4h 6m 1s (53M q [7.968 qps], 14M conn, TX: 87B, RX: 12B) [--] Reads / Writes: 98% / 2% [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 463.8M (11% of installed RAM) [OK] Slow queries: 0% (12K/53M) [OK] Highest usage of available connections: 22% (34/151) [OK] Key buffer size / total MyISAM indexes: 16.0M/10.6M [OK] Key buffer hit rate: 98.7% (162M cached / 2M reads) [OK] Query cache efficiency: 20.7% (7M cached / 36M selects) [!!] Query cache prunes per day: 3934 [OK] Sorts requiring temporary tables: 1% (3K temp sorts / 230K sorts) [!!] Joins performed without indexes: 71068 [OK] Temporary tables created on disk: 24% (3M on disk / 13M total) [OK] Thread cache hit rate: 99% (690 created / 14M connections) [!!] Table cache hit rate: 0% (64 open / 85M opened) [OK] Open file limit used: 12% (128/1K) [OK] Table locks acquired immediately: 99% (16M immediate / 16M locks) [!!] InnoDB data size / buffer pool: 491.9M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Variables to adjust: query_cache_size (> 16M) join_buffer_size (> 128.0K, or always use indexes with joins) table_cache (> 64) innodb_buffer_pool_size (>= 491M) [dev] -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.62-0ubuntu0.11.10.1 [!!] 
Switch to 64-bit OS - MySQL cannot currently use all of your RAM -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 185M (Tables: 632) [--] Data in InnoDB tables: 967M (Tables: 38) [!!] Total fragmented tables: 73 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 1d 2h 26m 9s (5K q [0.058 qps], 1K conn, TX: 4M, RX: 1M) [--] Reads / Writes: 99% / 1% [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 463.8M (11% of installed RAM) [OK] Slow queries: 0% (0/5K) [OK] Highest usage of available connections: 1% (2/151) [OK] Key buffer size / total MyISAM indexes: 16.0M/18.6M [OK] Key buffer hit rate: 99.9% (60K cached / 36 reads) [OK] Query cache efficiency: 44.5% (1K cached / 2K selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 44 sorts) [OK] Temporary tables created on disk: 24% (162 on disk / 666 total) [OK] Thread cache hit rate: 99% (2 created / 1K connections) [!!] Table cache hit rate: 1% (64 open / 4K opened) [OK] Open file limit used: 8% (88/1K) [OK] Table locks acquired immediately: 100% (1K immediate / 1K locks) [!!] InnoDB data size / buffer pool: 967.7M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Increase table_cache gradually to avoid file descriptor limits Variables to adjust: table_cache (> 64) innodb_buffer_pool_size (>= 967M) UPDATE 1: When testing the queries listed here there is usually no more than one other query taking place, and usually none. Because production is actually handling apache requests that development gets very few of as it's only myself and 1 other who accesses it - could the 4GB of RAM be getting exhausted by using the single machine for both apache and mysql server? Production: sudo hdparm -tT /dev/sda /dev/sda: Timing cached reads: 24872 MB in 2.00 seconds = 12450.72 MB/sec Timing buffered disk reads: 368 MB in 3.00 seconds = 122.49 MB/sec sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 24786 MB in 2.00 seconds = 12407.22 MB/sec Timing buffered disk reads: 350 MB in 3.00 seconds = 116.53 MB/sec Server version(mysql + ubuntu versions): 5.1.61-0ubuntu0.10.04.1 Development: sudo hdparm -tT /dev/sda /dev/sda: Timing cached reads: 10632 MB in 2.00 seconds = 5319.40 MB/sec Timing buffered disk reads: 400 MB in 3.01 seconds = 132.85 MB/sec Server version(mysql + ubuntu versions): 5.1.62-0ubuntu0.11.10.1 ORIGINAL DATA : This query is NOT the query in question but is related so ill post it. 
SELECT f.form_question_has_answer_id FROM form_question_has_answer f INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29') AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%') AND f.form_question_has_answer_form_id = '174' And the explain plan for the above query is, run on both dev and production produce the same plan. +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ | 1 | SIMPLE | p2 | const | PRIMARY | PRIMARY | 4 | const | 1 | Using index | | 1 | SIMPLE | f | ref | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | const | 796 | Using where | | 1 | SIMPLE | u | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using index | | 1 | SIMPLE | p | ref | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where | | 1 | SIMPLE | f2 | ref | form_project_id | form_project_id | 4 | const | 15 | Using where | | 1 | SIMPLE | c | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ This query takes 2 minutes ~20 seconds to execute. 
The query that is ACTUALLY being run on the server is this one: SELECT COUNT(*) AS num_results FROM (SELECT f.form_question_has_answer_id FROM form_question_has_answer f INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29') AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%') AND f.form_question_has_answer_form_id = '174' GROUP BY f.form_question_has_answer_id;) dctrn_count_query; With explain plans (again same on dev and production): +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ | 1 | PRIMARY | NULL | NULL | NULL | NULL | NULL | NULL | NULL | Select tables optimized away | | 2 | DERIVED | p2 | const | PRIMARY | PRIMARY | 4 | | 1 | Using index | | 2 | DERIVED | f | ref | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | | 797 | Using where | | 2 | DERIVED | p | ref | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id,project_company_has_user_garbage_collection | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where | | 2 | DERIVED | f2 | ref | form_project_id | form_project_id | 4 | | 15 | Using where | | 2 | DERIVED | c | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where | | 2 | DERIVED | u | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_user_id | 1 | Using where; Using index | +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ On the production server the information I have is as follows. 
Upon execution: +-------------+ | num_results | +-------------+ | 3 | +-------------+ 1 row in set (2 min 14.28 sec) Show profile: +--------------------------------+------------+ | Status | Duration | +--------------------------------+------------+ | starting | 0.000016 | | checking query cache for query | 0.000057 | | Opening tables | 0.004388 | | System lock | 0.000003 | | Table lock | 0.000036 | | init | 0.000030 | | optimizing | 0.000016 | | statistics | 0.000111 | | preparing | 0.000022 | | executing | 0.000004 | | Sorting result | 0.000002 | | Sending data | 136.213836 | | end | 0.000007 | | query end | 0.000002 | | freeing items | 0.004273 | | storing result in query cache | 0.000010 | | logging slow query | 0.000001 | | logging slow query | 0.000002 | | cleaning up | 0.000002 | +--------------------------------+------------+ On development the results are as follows. +-------------+ | num_results | +-------------+ | 3 | +-------------+ 1 row in set (0.08 sec) Again the profile for this query: +--------------------------------+----------+ | Status | Duration | +--------------------------------+----------+ | starting | 0.000022 | | checking query cache for query | 0.000148 | | Opening tables | 0.000025 | | System lock | 0.000008 | | Table lock | 0.000101 | | optimizing | 0.000035 | | statistics | 0.001019 | | preparing | 0.000047 | | executing | 0.000008 | | Sorting result | 0.000005 | | Sending data | 0.086565 | | init | 0.000015 | | optimizing | 0.000006 | | executing | 0.000020 | | end | 0.000004 | | query end | 0.000004 | | freeing items | 0.000028 | | storing result in query cache | 0.000005 | | removing tmp table | 0.000008 | | closing tables | 0.000008 | | logging slow query | 0.000002 | | cleaning up | 0.000005 | +--------------------------------+----------+ If i remove user and/or project innerjoins the query is reduced to 30s. Last bit of information I have: Mysqlserver and Apache are on the same box, there is only one box for production. Production output from top: before & after. top - 15:43:25 up 78 days, 12:11, 4 users, load average: 1.42, 0.99, 0.78 Tasks: 162 total, 2 running, 160 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 50.4%sy, 0.0%ni, 49.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3772580k used, 265288k free, 243704k buffers Swap: 3905528k total, 265384k used, 3640144k free, 1207944k cached top - 15:44:31 up 78 days, 12:13, 4 users, load average: 1.94, 1.23, 0.87 Tasks: 160 total, 2 running, 157 sleeping, 0 stopped, 1 zombie Cpu(s): 0.2%us, 50.6%sy, 0.0%ni, 49.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3834300k used, 203568k free, 243736k buffers Swap: 3905528k total, 265384k used, 3640144k free, 1207804k cached But this isn't a good representation of production's normal status so here is a grab of it from today outside of executing the queries. top - 11:04:58 up 79 days, 7:33, 4 users, load average: 0.39, 0.58, 0.76 Tasks: 156 total, 1 running, 155 sleeping, 0 stopped, 0 zombie Cpu(s): 3.3%us, 2.8%sy, 0.0%ni, 93.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3676136k used, 361732k free, 271480k buffers Swap: 3905528k total, 268736k used, 3636792k free, 1063432k cached Development: This one doesn't change during or after. 
top - 15:47:07 up 110 days, 22:11, 7 users, load average: 0.17, 0.07, 0.06 Tasks: 210 total, 2 running, 208 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4111972k total, 1821100k used, 2290872k free, 238860k buffers Swap: 4183036k total, 66472k used, 4116564k free, 921072k cached
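
    One hedged observation from the two mysqltuner reports: both boxes hold roughly 0.5-1 GB of InnoDB data on the default 8 MB buffer pool, and production is the only machine also serving real Apache traffic, so its working set gets evicted and the time disappears into "Sending data" (which is InnoDB reading and filtering rows, not network transfer). A my.cnf sketch to test; the values are assumptions to balance against Apache's memory needs:

        [mysqld]
        innodb_buffer_pool_size = 1G
        table_cache             = 512
        query_cache_size        = 32M
        join_buffer_size        = 256K

    Reworking the leading-wildcard LIKE '%...%' filters, which cannot use an index, would attack the same symptom from the query side.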

    Read the article

  • Have an unprivileged non-account user ssh into another box?

    - by Daniel Quinn
    I know how to get a user to ssh into another box with a key: ssh -l targetuser -i path/to/key targethost But what about non-account users like apache? As this user doesn't have a home directory to which it can write a .ssh directory, the whole thing keeps failing with: $ sudo -u apache ssh -o StrictHostKeyChecking=no -l targetuser -i path/to/key targethost Could not create directory '/var/www/.ssh'. Warning: Permanently added '<hostname>' (RSA) to the list of known hosts. Permission denied (publickey). I've tried variations using -o UserKnownHostsFile=/dev/null and setting $HOME to /dev/null and none of these have done the trick. I understand that sudo could probably fix this for me, but I'm trying to avoid having to require a manual server config since this code will be deployed on a number of different environments. Any ideas? Here's a few examples of what I've tried that don't work: $ sudo -u apache export HOME=path/to/apache/writable/dir/ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l deploy -i path/to/key targethost Eventually, I'll be using this solution to run rsync as the apache user.
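
    Two hedged notes on the attempts above: "sudo -u apache export HOME=..." cannot work because export is a shell builtin rather than a program, and "Permission denied (publickey)" usually means the key file is not readable by apache or not accepted by the target. Passing HOME through env with an apache-writable directory (the /var/cache/apache-ssh path is made up) keeps known_hosts happy:

        sudo -u apache test -r path/to/key && echo "key readable by apache"
        sudo -u apache env HOME=/var/cache/apache-ssh \
            ssh -o StrictHostKeyChecking=no \
                -o UserKnownHostsFile=/var/cache/apache-ssh/known_hosts \
                -i path/to/key -l targetuser targethost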

    Read the article

  • How to prevent unison from synchronizing a file while it is still uploading

    - by user134600
    I use CentOS 5.8 Final. My situation is that I am running unison from cron with the entry below: */1 * * * * /usr/bin/unison > /dev/null 2>&1 and the default profile below: root = /var/www root = ssh://web02.example.com//var/www auto=true batch=true confirmbigdel=true fastcheck=true group=true owner=true prefer=newer silent=true times=true So every minute the www folder gets synchronized. My problems are: I upload a file bigger than 10 MB to www from a client as user1, where the www folder is owned by user1. If the file is still uploading when unison runs that minute, the uploaded file's owner suddenly changes to root:root. When I edit a file in the www folder and save it while unison is running, the file owner changes to root:root where it should be user1:user1. Does anyone know about this problem?
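
    A hedged mitigation rather than a root-cause fix: have clients upload to a temporary name or staging directory and rename into place when complete, and tell unison to skip the in-progress pattern so half-written files are never propagated. Example profile additions, with the patterns as placeholders for whatever the upload client actually uses:

        # in the unison profile
        ignore = Name *.part
        ignore = Name *.filepart
        ignore = Path tmp/*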

    Read the article

  • How do I configure a secondary gateway in RHEL5?

    - by Brett Ryan
    Greetings, we have been experiencing a random timeout issue with VPN users connecting to one of our servers which is causing a problem. My network administrator has instructed me to configure a secondary gateway to include the VPN connection. My current connection resides as follows, 10.1.9.1 is the internal gateway to the internet, I'd like to add 10.1.1.20 as the VPN gateway. # Broadcom Corporation NetXtreme II BCM5708S Gigabit Ethernet DEVICE=eth0 BOOTPROTO=none BROADCAST=10.1.255.255 IPADDR=10.1.1.22 IPV6_AUTOCONF=yes NETMASK=255.255.0.0 NETWORK=10.1.0.0 ONBOOT=yes GATEWAY=10.1.9.1 TYPE=Ethernet USERCTL=no IPV6INIT=no PEERDNS=yes
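
    RHEL5 only honours a single default GATEWAY, but per-interface static routes can send just the VPN subnets via 10.1.1.20. A hedged sketch, with 10.200.0.0/16 standing in for whatever range the VPN clients actually use:

        # /etc/sysconfig/network-scripts/route-eth0
        10.200.0.0/16 via 10.1.1.20 dev eth0

    then service network restart (or ip route add 10.200.0.0/16 via 10.1.1.20 dev eth0 for an immediate, non-persistent test).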

    Read the article

  • Heartbeat won't start up from a cold boot when a failed node is present

    - by Matthew
    I currently have two Ubuntu servers running Heartbeat and DRBD. Let's say one node is down... The servers are directly connected with a 1000 Mbps crossover cable on eth1 and have access to an IP camera LAN on eth0. The node that is still functioning won't start up heartbeat and provide access to the DRBD resource. I have to manually restart heartbeat with "sudo service heartbeat restart" to get everything up and running. How can I get it to start cleanly from a cold boot? Here are my ha.cf and some material from the syslog; let me know if I'm missing any information that might be of some help. http://pastebin.com/rGvzVSUq <--- Syslog http://pastebin.com/VqpaPSb5 <--- ha.cf
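
    Two timeouts usually decide cold-boot behaviour with a dead peer, so they are worth checking against the pastebins: DRBD's wait-for-connection timeouts (with the defaults, the init script can sit waiting for the missing peer) and heartbeat's initdead. A hedged sketch of the relevant fragments:

        # /etc/drbd.conf - don't block boot forever waiting for the peer
        startup {
            wfc-timeout      60;
            degr-wfc-timeout 30;
        }

        # /etc/ha.d/ha.cf - how long after boot before the peer is declared dead
        initdead 120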

    Read the article

  • Xen domain migration locking problem

    - by brodie
    I am trying to live migrate a VM (domain) between two Xen servers. I have xen locking (xend-domain-lock = yes) configured with a ocfs2 shared storage between them. This locking is working fine. If I try to start up the VM on the secondary server it refuses to start (which is correct). The problem I am having is when trying to do live migration, it seems like it is trying to remove the lock twice. The first lock it removes is for "domain test", the second is for "migrating-test" which does not exist. Should their be a lock for this "migrating-test" VM? These are the relevant options in the xen config file: (xend-relocation-server yes) (xend-relocation-port 8002) (xend-relocation-address '') (xend-relocation-hosts-allow '') (xend-domain-lock yes) (xend-domain-lock-path /var/lib/xen/lock) This is the section of the log: [2010-06-10 10:45:57 14488] DEBUG (XendDomainInfo:4054) Releasing lock for domain test [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) SUSPEND shinfo 000c6ceb [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) delta 21ms, dom0 95%, target 0%, sent 57Mb/s, dirtied 173Mb/s 111 pages 4: sent 111, skipped 0, delta 6ms, dom0 100%, target 0%, sent 606Mb/s, dirtied 606Mb/s 111 pages [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) Total pages sent= 131295 (0.99x) [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) (of which 0 were fixups) [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) All memory is saved [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) Save exit rc=0 [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:123) Domain 22 suspended. [2010-06-10 10:45:57 14488] DEBUG (XendDomainInfo:2757) XendDomainInfo.destroy: domid=22 [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2227) Destroying device model [2010-06-10 10:45:58 14488] INFO (image:567) migrating-test device model terminated [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2234) Releasing devices [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vif/0 [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vif, device = vif/0 [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vkbd/0 [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vkbd, device = vkbd/0 [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing console/0 [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = console, device = console/0 [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vbd/51712 [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vbd, device = vbd/51712 [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vfb/0 [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vfb, device = vfb/0 [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:4054) Releasing lock for domain migrating-test [2010-06-10 10:45:59 14488] ERROR (XendDomainInfo:4070) Failed to remove unmanaged directory /var/lib/xen/lock/b01515ae-9173-03cb-0cb7-06f3dfbede8b.
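
    A hedged workaround, since the failed release of the "migrating-test" lock may simply be a xend quirk: either turn the lock feature off in xend-config.sxp for the duration of migrations, or clear a stale per-domain lock directory left behind by a failed attempt (the UUID path comes from the log above; only remove it when the domain is not running anywhere):

        # xend-config.sxp - trade lock protection for working live migration
        (xend-domain-lock no)

        # or inspect/remove a stale lock by hand
        ls /var/lib/xen/lock/
        rm -rf /var/lib/xen/lock/b01515ae-9173-03cb-0cb7-06f3dfbede8b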

    Read the article

  • Hang while starting several daemons [solved]

    - by Adrian Lang
    I’m running a Debian Squeeze AMD64 server. Target runlevel after boot is runlevel 2, which includes rsyslogd, cron, sshd and some other stuff, but not dovecot, postfix, apache2, etc. The system fails to reach runlevel 2 with several symptoms: The system hangs at trying to start rsyslogd Booting into runlevel 1 works, then login from the console works Starting rsyslogd from runlevel 1 via /etc/init.d/rsyslog hangs Starting runlevel 2 with rsyslogd disabled works But then, logging in via console fails: I get the motd, and then nothing Starting sshd from runlevel 1 succeeds But then, I cannot login via ssh. Sometimes password ssh login gives me the motd and then nothing, sometimes not even this. Trying to offer a public key seems to annoy the sshd enough to not talk to me any further. When rebooting from runlevel 1, the server hangs at trying to stop apache2 (which is not running, so this really should be trivial). Trying to stop apache2 when logged in in runleve 1 does hang as well. And that’s just the stuff which fails all the time. RAM has been tested, dmesg shows no problems. I have no clue. Update: (shortened) output from rsyslogd -c4 -d called in runlevel 1 rsyslogd 4.6.4 startup, compatibility mode 4, module path '' caller requested object 'net', not found (iRet -3003) Requested to load module 'lmnet' loading module '/user/lib/rsyslog/lmnet.so' module of type 2 being loaded conf.c requested ref for 'lmnet', refcount 1 rsylog runtime initialized, version 4.6.4, current users 1 syslogd.c requested ref for 'lmnet', refcount now 2 I can kill rsyslogd with Strg+C, then. /var/log shows none of the configured log files, though. Update2: Thanks to @DerfK I still have no clue, but at least I narrowed down the problem. I’m now testing with /etc/init.d/apache2 stop (without an apache2 running, of course) which hangs as well and looks like an even more obvious failure. After some testing I found out that a file with one single line: /usr/sbin/apache2ctl configtest /dev/null 2&1 hangs, while the same line executed in an interactive shell works. I was not able to further reduce this line while, i. e. every single part, the stream redirections and the commando itself is necessary to reproduce the hang. @DerfK also pointed me to strace which gave a shallow hint about what kind of hang we have here: wait4(-1for the init scripts futex(0xsomepointer, FUTEX_WAIT_PRIVATE, 2, NULL for rsyslogd / apache2 binaries called by the init scripts The system was installed as a Debian Lenny by my hoster in autumn 2011, I upgraded it to Squeeze immediately and kept it up to date with Squeeze, which then used to be testing. There were no big changes, though. I guess I never tried to reboot the system before. Update3: I found the problem. My /etc/nsswitch.conf specified ldap as hosts lookup backup, which is not available at that time of the boot. Relying on dns solely fixes my boot problems.
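
    For anyone hitting the same symptoms, the fix described boils down to keeping ldap out of the hosts line (or only adding it once the LDAP server is reachable), so that early-boot name lookups cannot block. A minimal /etc/nsswitch.conf fragment reflecting that:

        # /etc/nsswitch.conf
        hosts: files dns
        # re-add ldap only if LDAP host lookups are genuinely needed:
        # hosts: files dns ldap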

    Read the article
