Search Results

Search found 8323 results on 333 pages for 'execute'.


  • sudo in Debian squeeze inside linux-vserver always wants password

    - by mark
    Every since I upgraded all my linux-vserver Debian guests from Lenny to Squeeze I've the apparent problem that whenever I want to use sudo it asks me for my password. Every time. I've configured sudo to have a timeout of 30 minutes: Defaults timestamp_timeout=30 . This has been configured when it was still Lenny (note: as suggested by EightBitTony I've also tried without this setting - no change). I've a hard time figuring out what the problem here is, since I think my configuration is right. I thought about it being a problem with the file used to record the timestamp, maybe a permission issue, but was unlucky to find any hard evidence. I've compared the contents of /var/lib/sudo/ between a working and a non-working system but couldn't spot any difference. The version of sudo used in both environments is 1.7.4p4-2.squeeze.3. My non-working system(s): find /var/lib/sudo/ -ls 17319289 4 drwx------ 4 root root 4096 Jan 1 1985 /var/lib/sudo/ 17319286 4 drwx------ 2 root mark 4096 Jan 1 1985 /var/lib/sudo/mark 17319312 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/6 17319361 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/9 17319490 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/10 17319326 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/4 17319491 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/2 A working system: find /var/lib/sudo -ls 2598921 4 drwx------ 5 root root 4096 Jan 1 1985 /var/lib/sudo 1999522 4 drwx------ 2 root mark 4096 Jan 1 1985 /var/lib/sudo/mark 2000781 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/8 1998998 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/17 1999459 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/26 1998930 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/24 2000771 4 -rw------- 1 root mark 40 Jun 25 11:39 /var/lib/sudo/mark/4 2000773 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/5 1999223 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/0 1998908 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/14 2000769 4 -rw------- 1 root mark 40 Jul 9 13:30 /var/lib/sudo/mark/2 2000770 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/3 2000782 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/9 2000778 4 -rw------- 1 root mark 40 Jul 8 00:11 /var/lib/sudo/mark/7 1998892 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/19 1999264 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/23 2000789 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/12 1999093 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/25 1998880 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/18 1998853 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/20 2000790 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/15 1998878 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/16 1998874 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/13 2000774 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/6 2000786 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/11 1998893 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/22 2000783 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/10 1998949 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/1 Despite the obvious (some up2date timestamps on the working system) I don't see anything wrong here, so it could be as well be a wrong track. 
Here's my current /etc/sudoers: # /etc/sudoers # # This file MUST be edited with the 'visudo' command as root. # # See the man page for details on how to write a sudoers file. # Defaults env_reset # Host alias specification # User alias specification User_Alias FULLADMIN = user1, user2, user3 # Cmnd alias specification # User privilege specification root ALL=(ALL) ALL FULLADMIN ALL = (ALL) ALL # Allow members of group sudo to execute any command # (Note that later entries override this, so you might need to move # it further down) %sudo ALL=(ALL) ALL # #includedir /etc/sudoers.d #Defaults always_set_home,timestamp_timeout=30
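
    A minimal diagnostic sketch, not from the thread above (only the user name "mark" and the /var/lib/sudo layout are taken from the post; everything else is an assumption): sudo discards timestamp tickets whose mtime looks implausible, for example older than the boot time or in the future, which is easy to hit inside a vserver guest with an odd clock, and the "Jan 1 1985" dates above point in that direction.

        #!/bin/bash
        # Compare the ticket mtimes against the current time and the boot time the guest sees.
        user=mark                      # adjust to the account that keeps getting prompted
        dir=/var/lib/sudo/$user

        date
        who -b                         # boot time as the guest reports it
        for f in "$dir"/*; do
            printf '%s  mtime: %s\n' "$f" "$(stat -c %y "$f")"
        done

        # Throw away the stale tickets and let sudo recreate them with a sane mtime:
        sudo -K
        rm -rf "$dir"                  # run as root; sudo recreates the directory on next use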

    Read the article

  • shell script over SSH ends unexpectedly after running 'ant build'

    - by YShin
    I wrote a shell script that runs on remote host to build source code with 'ant build' command, and then distribute the built binary to other servers. However, right after Ant build is over successfully(I can see the command line output saying Build was successful), the ssh session ends and whatever commands after 'ant build' does not get executed. I'm confused what might be cause of this behavior. I suspected that it might be because the 'ant build' command takes too long time, and SSH somehow quits itself after that long command. But I don't think that's correct since if I just do 'sleep 60' in place of 'ant build' command, it actually execute latter commands as intended. I'm new at shell programming, so I might have made some silly misassumption. Can someone provide a pointer to a possible cause of this problem? My shell script #!/bin/bash # Inject some variables ssh -T $SSH_USER@$SSH_URL "setenv REMOTE_BASE_DIR $REMOTE_BASE_DIR; setenv CASSANDRA_SRC_TAR_FILE $CASSANDRA_SRC_TAR_FILE; setenv CASSANDRA_SRC_DIR_NAME $CASSANDRA_SRC_DIR_NAME; setenv CLUSTER_SIZE $CLUSTER_SIZE; setenv REMOTE_REDEPLOY_SCRIPT $REMOTE_REDEPLOY_SCRIPT; /bin/bash" << 'EOF' export JAVA_HOME=/usr/lib/jvm/jdk1.7.0 cd $REMOTE_BASE_DIR/$CASSANDRA_SRC_DIR_NAME echo "## Building Cassandra source" ant clean build # Anything after this doesn't run echo "## Ant Build is over. Invoking redeploy script on remote nodes" # Invoke redeploy script for each node for (( i=0; i < CLUSTER_SIZE; i++)) do echo "## Invoking redeploy script on node-$i" done Command-line output ## Building Cassandra source Buildfile: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build.xml clean: [delete] Deleting directory /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/test [delete] Deleting directory /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/classes [delete] Deleting directory /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/src/gen-java [delete] Deleting directory /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/src/resources/org/apache/cassandra/config init: [mkdir] Created dir: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/classes/main [mkdir] Created dir: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/classes/thrift [mkdir] Created dir: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/test/lib [mkdir] Created dir: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/test/classes [mkdir] Created dir: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/src/gen-java maven-ant-tasks-localrepo: maven-ant-tasks-download: maven-ant-tasks-init: maven-declare-dependencies: maven-ant-tasks-retrieve-build: init-dependencies: [echo] Loading dependency paths from file: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/build-dependencies.xml check-gen-cli-grammar: gen-cli-grammar: [echo] Building Grammar /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/src/java/org/apache/cassandra/cli/Cli.g .... check-gen-cql2-grammar: gen-cql2-grammar: [echo] Building Grammar /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/src/java/org/apache/cassandra/cql/Cql.g ... check-gen-cql3-grammar: gen-cql3-grammar: [echo] Building Grammar /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/src/java/org/apache/cassandra/cql3/Cql.g ... 
build-project: [echo] apache-cassandra: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build.xml [javac] Compiling 43 source files to /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/classes/thrift [javac] Note: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java uses or overrides a deprecated API. [javac] Note: Recompile with -Xlint:deprecation for details. [javac] Note: Some input files use unchecked or unsafe operations. [javac] Note: Recompile with -Xlint:unchecked for details. [javac] Compiling 865 source files to /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/classes/main [javac] Note: Some input files use or override a deprecated API. [javac] Note: Recompile with -Xlint:deprecation for details. [javac] Note: Some input files use unchecked or unsafe operations. [javac] Note: Recompile with -Xlint:unchecked for details. createVersionPropFile: [mkdir] Created dir: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/src/resources/org/apache/cassandra/config [propertyfile] Creating new property file: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/src/resources/org/apache/cassandra/config/version.properties [copy] Copying 3 files to /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/classes/main build: BUILD SUCCESSFUL Total time: 32 seconds
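
    One common cause worth ruling out (an assumption, not something confirmed in the thread): the remote commands are delivered on stdin via the << 'EOF' here-document, and ant is free to read stdin, so it can swallow everything after the "ant clean build" line. A sketch of the remote-side part of the script with ant's stdin redirected; the setenv prelude from the original is omitted for brevity, and the only functional change is the "< /dev/null".

        #!/bin/bash
        # Remote-side portion of the script (everything between << 'EOF' and EOF).
        export JAVA_HOME=/usr/lib/jvm/jdk1.7.0
        cd "$REMOTE_BASE_DIR/$CASSANDRA_SRC_DIR_NAME"

        echo "## Building Cassandra source"
        ant clean build < /dev/null    # starve ant of stdin so it cannot eat the rest of the here-document

        echo "## Ant Build is over. Invoking redeploy script on remote nodes"
        for (( i = 0; i < CLUSTER_SIZE; i++ )); do
            echo "## Invoking redeploy script on node-$i"
        done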

    Read the article

  • Error installing pkgconfig via macports

    - by Greg K
    I installed Macports 1.8.2 from a DMG. That seemed to install fine. I ran sudo port selfupdate to make sure my ports tree was current. I then tried to install bindfs as I want to mount some directories in my OS X file system (like you can do with mount --bind in linux). pkgconfig and macfuse are two dependencies of bindfs. I had trouble installing bindfs due to errors installing pkgconfig, so I tried to just install pkgconfig, here's the debug output from sudo port install pkgconfig: $ sudo port -d install pkgconfig DEBUG: Found port in file:///opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig DEBUG: Changing to port directory: /opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig DEBUG: OS Platform: darwin DEBUG: OS Version: 10.3.0 DEBUG: Mac OS X Version: 10.6 DEBUG: System Arch: i386 DEBUG: setting option os.universal_supported to yes DEBUG: org.macports.load registered provides 'load', a pre-existing procedure. Target override will not be provided DEBUG: org.macports.unload registered provides 'unload', a pre-existing procedure. Target override will not be provided DEBUG: org.macports.distfiles registered provides 'distfiles', a pre-existing procedure. Target override will not be provided DEBUG: adding the default universal variant DEBUG: Reading variant descriptions from /opt/local/var/macports/sources/rsync.macports.org/release/ports/_resources/port1.0/variant_descriptions.conf DEBUG: Requested variant darwin is not provided by port pkgconfig. DEBUG: Requested variant i386 is not provided by port pkgconfig. DEBUG: Requested variant macosx is not provided by port pkgconfig. ---> Computing dependencies for pkgconfig DEBUG: Executing org.macports.main (pkgconfig) DEBUG: Skipping completed org.macports.fetch (pkgconfig) DEBUG: Skipping completed org.macports.checksum (pkgconfig) DEBUG: Skipping completed org.macports.extract (pkgconfig) DEBUG: Skipping completed org.macports.patch (pkgconfig) ---> Configuring pkgconfig DEBUG: Using compiler 'Mac OS X gcc 4.2' DEBUG: Executing org.macports.configure (pkgconfig) DEBUG: Environment: CFLAGS='-O2 -arch x86_64' CPPFLAGS='-I/opt/local/include' CXXFLAGS='-O2 -arch x86_64' MACOSX_DEPLOYMENT_TARGET='10.6' CXX='/usr/bin/g++-4.2' F90FLAGS='-O2 -m64' LDFLAGS='-L/opt/local/lib' OBJC='/usr/bin/gcc-4.2' FCFLAGS='-O2 -m64' INSTALL='/usr/bin/install -c' OBJCFLAGS='-O2 -arch x86_64' FFLAGS='-O2 -m64' CC='/usr/bin/gcc-4.2' DEBUG: Assembled command: 'cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig' checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... no checking for mawk... no checking for nawk... no checking for awk... awk checking whether make sets $(MAKE)... no checking whether to enable maintainer-specific portions of Makefiles... no checking build system type... i386-apple-darwin10.3.0 checking host system type... i386-apple-darwin10.3.0 checking for style of include used by make... none checking for gcc... /usr/bin/gcc-4.2 checking for C compiler default output file name... configure: error: C compiler cannot create executables See `config.log' for more details. 
Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77 DEBUG: Backtrace: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77 while executing "$procedure $targetname" Warning: the following items did not execute (for pkgconfig): org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install Error: Status 1 encountered during processing. I have only recently installed Xcode 3.2.2 (prior to installing macports). Am I right in thinking this the issue here: configure: error: C compiler cannot create executables
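
    A quick way to separate a MacPorts problem from a toolchain problem (a sketch; the compiler and flags are copied from the configure environment above): run the same check configure ran, by hand, and then read the config.log it left behind.

        #!/bin/bash
        # Try to build a trivial executable with the exact compiler/flags MacPorts used.
        printf 'int main(void) { return 0; }\n' > /tmp/conftest.c
        /usr/bin/gcc-4.2 -O2 -arch x86_64 -o /tmp/conftest /tmp/conftest.c \
            && echo "compiler OK" \
            || echo "compiler broken -- fix the Xcode/gcc install before retrying the port"

        # The real reason behind "cannot create executables" is spelled out here:
        less /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23/config.log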

    Read the article

  • vim-powerline colors are out of whack in urxvt

    - by komidore64
    I have attached two images showing what my vim-powerline looks like. As you can see, something has happened to the colors and I cannot figure out how to fix it. I'm running Fedora 17 on a clean install with i3 (default config) and urxvt. Here is my bashrc: # .bashrc if [[ "$(uname)" != "Darwin" ]]; then # non mac os x # source global bashrc if [[ -f "/etc/bashrc" ]]; then . /etc/bashrc fi export TERM='xterm-256color' # probably shouldn't do this fi # bash prompt with colors # [ <user>@<hostname> <working directory> {current git branch (if you're in a repo)} ] # ==> PS1="\[\e[1;33m\][ \u\[\e[1;37m\]@\[\e[1;32m\]\h\[\e[1;33m\] \W\$(git branch 2> /dev/null | grep -e '\* ' | sed 's/^..\(.*\)/ {\[\e[1;36m\]\1\[\e[1;33m\]}/') ]\[\e[0m\]\n==> " # execute only in Mac OS X if [[ "$(uname)" == 'Darwin' ]]; then # if OS X has a $HOME/bin folder, then add it to PATH if [[ -d "$HOME/bin" ]]; then export PATH="$PATH:$HOME/bin" fi alias ls='ls -G' # ls with colors fi alias ll='ls -lah' # long listing of all files with human readable file sizes alias tree='tree -C' # turns on coloring for tree command alias mkdir='mkdir -p' # create parent directories as needed alias vim='vim -p' # if more than one file, open files in tabs export EDITOR='vim' # super-secret work stuff if [[ -f "$HOME/.workbashrc" ]]; then . $HOME/.workbashrc fi # Add RVM to PATH for scripting if [[ -d "$HOME/.rvm/bin" ]]; then # if installed PATH=$PATH:$HOME/.rvm/bin fi and my Xdefaults: ! URxvt config ! colors! URxvt.background: #101010 URxvt.foreground: #ededed URxvt.cursorColor: #666666 URxvt.color0: #2E3436 URxvt.color8: #555753 URxvt.color1: #993C3C URxvt.color9: #BF4141 URxvt.color2: #3C993C URxvt.color10: #41BF41 URxvt.color3: #99993C URxvt.color11: #BFBF41 URxvt.color4: #3C6199 URxvt.color12: #4174FB URxvt.color5: #993C99 URxvt.color13: #BF41BF URxvt.color6: #3C9999 URxvt.color14: #41BFBF URxvt.color7: #D3D7CF URxvt.color15: #E3E3E3 ! 
options URxvt*loginShell: true URxvt*font: xft:DejaVu Sans Mono for Powerline:antialias=true:size=12 URxvt*saveLines: 8192 URxvt*scrollstyle: plain URxvt*scrollBar_right: true URxvt*scrollTtyOutput: true URxvt*scrollTtyKeypress: true URxvt*urlLauncher: google-chrome and finally my vimrc set nocompatible set dir=~/.vim/ " set one place for vim swap files " vundler for vim plugins ---- filetype off set rtp+=~/.vim/bundle/vundle call vundle#rc() Bundle 'gmarik/vundle' Bundle 'tpope/vim-surround' Bundle 'greyblake/vim-preview' Bundle 'Lokaltog/vim-powerline' Bundle 'tpope/vim-endwise' Bundle 'kien/ctrlp.vim' " ---------------------------- syntax enable filetype plugin indent on " Powerline ------------------ set noshowmode set laststatus=2 let g:Powerline_symbols = 'fancy' " show fancy symbols (requires patched font) set encoding=utf-8 " ---------------------------- " ctrlp ---------------------- let g:ctrlp_open_multiple_files = 'tj' " open multiple files in additional tabs let g:ctrlp_show_hidden = 1 " include dotfiles and dotdirs in ctrlp indexing let g:ctrlp_prompt_mappings = { \ 'AcceptSelection("e")': ['<c-t>'], \ 'AcceptSelection("t")': ['<cr>', '<2-LeftMouse>'], \ } " remap <cr> to open file in a new tab " ---------------------------- set showcmd set tabpagemax=100 set hlsearch set incsearch set nowrapscan set ignorecase set smartcase set ruler set tabstop=4 set shiftwidth=4 set expandtab set wildmode=list:longest autocmd BufWritePre * :%s/\s\+$//e "remove trailing whitespace " :REV to "revert" file to state of the most recent save command REV earlier 1f " disable netrw -------------- let g:loaded_netrw = 1 let g:loaded_netrwPlugin = 1 " ---------------------------- Any guidance as to fixing the statusline would be fantastic. I've found a github issue outlining almost the exact same problem, but the solution was never posted. Thank you.
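
    One guess that fits the symptoms (an assumption, not a confirmed fix): forcing TERM=xterm-256color from .bashrc inside urxvt is a known way to mangle Powerline's colours, because vim ends up driving a terminal description that is not quite the terminal it is talking to. A small sketch of letting urxvt announce its own 256-colour terminfo instead:

        # ~/.Xdefaults -- let urxvt set TERM itself
        # (assumes the rxvt-unicode-256color terminfo entry is installed):
        #   URxvt.termName: rxvt-unicode-256color
        xrdb -merge ~/.Xdefaults

        # In .bashrc, drop the blanket "export TERM=xterm-256color", or at least
        # guard it so it only upgrades a plain xterm:
        [[ "$TERM" == "xterm" ]] && export TERM=xterm-256color

        # Inside vim, confirm what Powerline is actually working with:
        #   :set t_Co?        " should report t_Co=256
        #   :echo $TERM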

    Read the article

  • Internal drives vs USB-3 with external SSD or eSATA with external SSD

    - by normstorm
    I have a need to carry VMWare Virtual Machines with me for work. These are very large files (each VM is 20GB or more) and I carry around about 40 to 50 VM's to simulate different software configurations for different client needs. Key: they won't fit on the internal hard drive of my current laptop. I currently execute the VM's from an external 7200RPM 2.5" USB-2 drive. I keep copies of the VM's on other 5400 external USB-2 drives. The VM's work from this drive, but they are slow, costing me much time and frustration. It can take upwards of 30 minutes just to make a copy of one of the VM's. They can take upwards of 10-15 minutes to fully launch and then they operate sluggishly. I am buying a new laptop (Core I7, 8GB RAM and other high-end specs). I intend to buy an SSD for the O/S volume (C:). This SSD will not be large enough to hold the VM's. I have always wanted a second internal hard drive to operate the VM's. To have two hard drives, though, I am finding that I will have to go to a 17" laptop which would be bulky/heavy. I am instead considering purchasing a 15" laptop with either an eSATA port or USB-3 ports and then purchasing two external drives. One of the drives might be an external SSD (maybe OCX brand) for operating the VM's and the other a 7400RPM 1TB hard drive for carrying around the VM's not currently in use. The question is which options would give me the biggest bang for the buck and the weight: 1) 2nd Internal SSD hard drive. This would mean buying a 17" laptop with two drive "bays". The first bay would hold an SSD drive for the C: drive. I would leave the first bay empty from the manufacture and then purchase/install an aftermarket SSD drive. This second SSD drive would have to be very large (256 GB), which would be expensive. I would still also need another external hard drive for carrying around the VM's not in use. 2) 2nd internal hard drive - 7400 RPM. Again, a 17" laptop would be required, but there are models available with on SSD drive for the C: drive and a second 7200 RPM hard drives. The second drive could probably be large enough to hold the VM's in use as well as those not in use. But would it be fast enough to drive the VM's? 3) USB-3 with External SSD. I could buy a 15" laptop with an SSD drive for the C: drive and a second hard drive for general files. I would operate the VM's from an external USB-3 SSD drive and have a third USB-3 external 7200 RPM drive for holding the VM's not in use. 4) eSATA with External SSD. Ditto, just eSATA instead of USB-3 5) USB-3 with External 7400 RPM drive. Ditto, but the drive running the VM's would be USB-3 attached 7400 RPM drives rather than SSD. 6) eSATA with External 7400 RPM drive. Dittor, but the drive running the VM's would be eSATA attached 7400 RPM drives rather than SSD. Any thoughts on this and any creative solutions?

    Read the article

  • collectd does not work

    - by bery
    I have installed collectd-5.0.0 on Fedora12 server and would like to run its service for receiving data from clients. I have enabled network plugin and rddtool plugin as commented: collectd.conf in server: BaseDir "/opt/collectd/var/lib/collectd" LoadPlugin "logfile" LoadPlugin network LoadPlugin rrdtool <Plugin network> Listen "192.168.8.37" "25826" </Plugin> collectd.conf in client: LoadPlugin logfile LoadPlugin cpu LoadPlugin network LoadPlugin memory <Plugin network> Server"192.168.8.37" "25826" </Plugin> collectd.log in server: [2011-08-03 02:36:04] Exiting normally. [2011-08-03 02:36:04] rrdtool plugin: Shutting down the queue thread. [2011-08-03 02:36:04] network plugin: Stopping receive thread. [2011-08-03 02:36:04] network plugin: Stopping dispatch thread. [2011-08-03 02:37:11] Initialization complete, entering read-loop. collectd.log in client: [2011-08-02 17:31:44] Initialization complete, entering read-loop. results thst execute netstat on server: netstat -ulpn | grep 25826 udp 0 0 192.168.8.37:25826 0.0.0.0:* 4744/collectd problem: but there is noting in "/opt/collectd/var/lib/collectd/" on ser yes,I move the port number of "25826" as your propose(But I think this is the default port for coolectd).there is no rdd files recived on server. collectd.log in client collectd [2011-08-03 10:01:36] plugin_read_thread: Handling memory'. [2011-08-03 10:01:36] plugin_read_thread: Handlingcpu'. [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = memory; plugin_instance = ; type = memory; type_instance = used; [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = cpu; plugin_instance = 0; type = cpu; type_instance = user; [2011-08-03 10:01:36] uc_update: uml/memory/memory-used: ds[0] = 280412160.000000 [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network. [2011-08-03 10:01:36] uc_update: uml/cpu-0/cpu-user: ds[0] = 0.100008 [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network. [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = memory; plugin_instance = ; type = memory; type_instance = buffered; [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = cpu; plugin_instance = 0; type = cpu; type_instance = nice; [2011-08-03 10:01:36] uc_update: uml/memory/memory-buffered: ds[0] = 344182784.000000 [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network. [2011-08-03 10:01:36] uc_update: uml/cpu-0/cpu-nice: ds[0] = 0.000000 [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network. [2011-08-03 10:01:36] network plugin: flush_buffer: send_buffer_fill = 1340 [2011-08-03 10:01:36] network plugin: network_send_buffer: buffer_len = 1340 ... [2011-08-03 10:01:36] plugin_read_thread: Next read of the cpu plugin at 1312380106.429064774. collectd.log in server collectd: [2011-08-03 20:18:08] type = network [2011-08-03 20:18:08] type = rrdtool [2011-08-03 20:18:08] network plugin: sockent_open: node = 192.168.8.37; service = 25826; [2011-08-03 20:18:08] fd = 3; calling bind' [2011-08-03 20:18:08] Done parsing/opt/collectd//share/collectd/types.db' [2011-08-03 20:18:08] interval_g = 10; [2011-08-03 20:18:08] timeout_g = 2; [2011-08-03 20:18:08] hostname_g = localhost.localdomain; [2011-08-03 20:18:08] Initialization complete, entering read-loop. It looks like, data is sending but doesn't be recived. Where is the mistake?
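
    A first check worth doing before touching the configs again (a sketch; only the address 192.168.8.37 and port 25826 come from the post): make sure the client's UDP packets actually arrive, since a Fedora host firewall silently dropping them produces exactly this "client writes, server shows nothing" picture.

        #!/bin/bash
        # On the server: watch for collectd traffic coming in from the client.
        tcpdump -ni any udp port 25826

        # If nothing shows up, check the firewall and open the port if needed:
        iptables -L INPUT -n | grep 25826 \
            || iptables -I INPUT -p udp --dport 25826 -j ACCEPT

        # If packets do arrive but no .rrd files appear under the BaseDir,
        # raise LogLevel in collectd.conf and watch for rrdtool plugin errors in the log.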

    Read the article

  • Need help merging 2 AHK scripts

    - by Mikey
    i have two functioning scripts that i want to merge into a single AHK File. My problem is that when i combine both scripts, the second script doesnt function or causes an error on script 1. Either way, script 2 ist not functioning at all. Here are some facts: Script 1 = a simple menu script where i want to assign hotkeys to. Script 2 = A small launcher script from a user named Tertius in autohotkey forum. Can someone please look at both codes and help me merge this? The INI File for script 2 looks like this: Keywords.ini npff|Firefox|Firefox gm|Gmail|http://gmail.google.com ;;;;;;;;;;;; BEGIN SCRIPT 2 DetectHiddenWindows, On SetWinDelay, -1 SetKeyDelay, -1 SetBatchLines, -1 GoSub Remin SetTimer, Remin, % 1000 * 60 Loop, read, %A_ScriptDir%\keywords.ini { LineNumber = %A_Index% Loop, parse, A_LoopReadLine, | { if (A_Index == 1) abbrevs%LineNumber% := A_LoopField else if (A_Index == 2) tips%LineNumber% := A_LoopField else if (A_Index == 3) programs%LineNumber% := A_LoopField else if (A_Index == 4) params%LineNumber% := A_LoopField } tosay := abbrevs%LineNumber% } cnt = %LineNumber% Loop { Input, Key, L1 V, % "{LControl}{RControl}{LAlt}{RAlt}{LShift}{RShift}{LWin}{RWin}" . "{AppsKey}{F1}{F2}{F3}{F4}{F5}{F6}{F7}{F8}{F9}{F10}{F11}{F12}{Left}{Right}{Up}{Down}" . "{Home}{End}{PgUp}{PgDn}{Del}{Ins}{BS}{Capslock}{Numlock}{PrintScreen}{Pause}{Escape}" If( ( Asc(Key) = 65 && Asc(Key) <= 90 ) || ( Asc(Key) = 97 && Asc(Key) <= 122 ) ) Word .= Key Else { Word := "" Continue } tipup := false Loop %cnt% { if (Word == abbrevs%A_index%) { tip := tips%A_index% ToolTip %tip% tipup := true } else { if (tipup == false) ToolTip } } } $Tab:: Loop %cnt% { if (Word != "" && Word == abbrevs%A_index%) { Word := "" StringLen, len, abbrevs%A_index% Loop %len% Send {Shift Down}{Left} Send {Shift Up}{BS} ToolTip program := programs%A_index% param := params%A_index% run, %program% %param% return } } Word := "" Send {Tab} Return ~LButton:: ~MButton:: ~RButton:: ~XButton1:: ~XButton2:: Word := "" Tooltip Return Remin: WinMinimize, %A_ScriptFullPath% - AutoHotkey v WinHide, %A_ScriptFullPath% - AutoHotkey v Return ;;;;;;;;;; END SCRIPT 2 ;;;;;;;;;;;;;; BEGIN SCRIPT 1 ;This is a working script that creates a popup menu. ; Create the popup menu by adding some items to it. Menu, MyMenu, Add, FIS 201, MenuHandler Menu, MyMenu, Add ; Add a separator line. Menu, MyMenu, Color, Lime, Single ;Define the Menu Color ; Create another menu destined to become a submenu of the above menu. Menu, Submenu1, Add, Item2, MenuHandler Menu, Submenu1, Add, Item3, MenuHandler Menu, Submenu1, Color, Yellow ;Define the Menu Color ; Create another menu destined to become a submenu of the above menu. Menu, Submenu2, Add, Item1a, MenuHandler Menu, Submenu2, Add, Item2a, MenuHandler Menu, Submenu2, Add, Item3a, MenuHandler Menu, Submenu2, Add, Item4a, MenuHandler Menu, Submenu2, Add, Item5a, MenuHandler Menu, Submenu2, Add, Item6a, MenuHandler Menu, Submenu2, Color, Aqua ;Define the Menu Color ; Create a submenu in the first menu (a right-arrow indicator). When the user selects it, the second menu is displayed. Menu, MyMenu, Add, BKRS 119, :Submenu1 Menu, MyMenu, Add ; Add a separator line below the submenu. Menu, MyMenu, Add, BKRS 201, :Submenu2 Menu, MyMenu, Add ; Add a separator line below the submenu. Menu, MyMenu, Add ; Add a separator line below the submenu. Menu, MyMenu, Add, Google Search, Google ; Add another menu item beneath the submenu. return ; End of script's auto-execute section. Capslock & LButton::Menu, MyMenu, Show ; i.e. 
press the Win-Z hotkey to show the menu. MenuHandler: MsgBox You selected %A_ThisMenuItem% from the menu %A_ThisMenu%. return ;;;;;;;;;;;;;;;;;;;;;;;; ;;;;;;;; Google Search ;;; FORMAT InputBox, OutputVar [, Title, Prompt, HIDE, Width, Height, X, Y, Font, Timeout, Default] Google: InputBox, SearchTerm, Google Search,,,350, 120 if SearchTerm < "" Run http://www.google.de/search?sclient=psy-ab&hl=de&site=&source=hp&q=%SearchTerm%&btnG=Suche return ; Make Window Transparent Space::WinSet, Transparent, 125, A ^!Space UP::WinSet, Transparent, OFF, A return ;;;;;;;;;;; END SCRIPT 1 Help is appreciated. Kind Regards, Mikey

    Read the article

  • Ubuntu 12.04 KVM running Ubuntu 12.04 with linux-image-virtual crash on boot

    - by D.Mill
    One of my VMs is stuck on "pause" in virsh. If I destroy and restart it, it will go to pause after a bit of time as "running". I can at best enter my username at login if I'm quick but it'll then shutdown. I don't know where to start with this so any help would be great!! I can access the VMs files via guestfish. the kern.log and syslog don't populate new lines. This is the last input I get from kern.log: Dec 13 11:21:08 soft201 kernel: imklog 5.8.6, log source = /proc/kmsg started. Dec 13 11:21:08 soft201 kernel: [ 0.000000] Initializing cgroup subsys cpuset Dec 13 11:21:08 soft201 kernel: [ 0.000000] Initializing cgroup subsys cpu Dec 13 11:21:08 soft201 kernel: [ 0.000000] Linux version 3.2.0-34-virtual (buildd@allspice) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #53-Ubuntu SMP Thu Nov 15 11:08:40 UTC 2012 (Ubuntu 3.2.0-34.53-virtual 3.2.33) Dec 13 11:21:08 soft201 kernel: [ 0.000000] Command line: root=UUID=61d48b48-a06a-48fb-842e-b38014086a93 ro quiet splash Dec 13 11:21:08 soft201 kernel: [ 0.000000] KERNEL supported cpus: Dec 13 11:21:08 soft201 kernel: [ 0.000000] Intel GenuineIntel Dec 13 11:21:08 soft201 kernel: [ 0.000000] AMD AuthenticAMD Dec 13 11:21:08 soft201 kernel: [ 0.000000] Centaur CentaurHauls Dec 13 11:21:08 soft201 kernel: [ 0.000000] BIOS-provided physical RAM map: Dec 13 11:21:08 soft201 kernel: [ 0.000000] BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved) Dec 13 11:21:08 soft201 kernel: [ 0.000000] BIOS-e820: 0000000000100000 - 00000000dfffc000 (usable) Dec 13 11:21:08 soft201 kernel: [ 0.000000] BIOS-e820: 00000000dfffc000 - 00000000e0000000 (reserved) Dec 13 11:21:08 soft201 kernel: [ 0.000000] BIOS-e820: 00000000feffc000 - 00000000ff000000 (reserved) Dec 13 11:21:08 soft201 kernel: [ 0.000000] BIOS-e820: 00000000fffc0000 - 0000000100000000 (reserved) Dec 13 11:21:08 soft201 kernel: [ 0.000000] BIOS-e820: 0000000100000000 - 0000000a20000000 (usable) Dec 13 11:21:08 soft201 kernel: [ 0.000000] NX (Execute Disable) protection: active Dec 13 11:21:08 soft201 kernel: [ 0.000000] DMI 2.4 present. Dec 13 11:21:08 soft201 kernel: [ 0.000000] DMI: Bochs Bochs, BIOS Bochs 01/01/2007 Dec 13 11:21:08 soft201 kernel: [ 0.000000] e820 update range: 0000000000000000 - 0000000000010000 (usable) ==> (reserved) Dec 13 11:21:08 soft201 kernel: [ 0.000000] e820 remove range: 00000000000a0000 - 0000000000100000 (usable) Dec 13 As you can see the last line gets cut off. I don't even know if this is that relevant. dmesg logs are empty. 
The qemu log for the VM returns this: 2012-12-13 12:29:47.584+0000: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 40960 -smp 14,sockets=14,cores=1,threads=1 -name numerink201 -uuid f4a889ed-a089-05d0-cc9d-9825ab1faeba -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/numerink201.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -drive file=/var/lib/libvirt/images/client.soft.fr/tmpcZAD9U.qcow2,if=none,id=drive-ide0-0-0,format=qcow2 -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -fsdev local,security_model=none,id=fsdev-fs0,path=/home/shared_folders/soft201 -device virtio-9p-pci,id=fs0,fsdev=fsdev-fs0,mount_tag=hostshare,bus=pci.0,addr=0x5 -netdev tap,fd=18,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:00:1d:b9:e7,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 127.0.0.1:0 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 char device redirected to /dev/pts/3 qemu: terminating on signal 15 from pid 28248 2012-12-13 12:30:14.455+0000: shutting down I've added more logging, libvirt.log gives me this: 2012-12-13 13:24:38.525+0000: 27694: info : libvirt version: 0.9.8 2012-12-13 13:24:38.525+0000: 27694: error : virExecWithHook:328 : Cannot find 'pm-is-supported' in path: No such file or directory 2012-12-13 13:24:38.525+0000: 27694: warning : qemuCapsInit:856 : Failed to get host power management capabilities 2012-12-13 13:24:39.865+0000: 27694: error : virExecWithHook:328 : Cannot find 'pm-is-supported' in path: No such file or directory 2012-12-13 13:24:39.865+0000: 27694: warning : lxcCapsInit:77 : Failed to get host power management capabilities 2012-12-13 13:24:39.866+0000: 27694: error : virExecWithHook:328 : Cannot find 'pm-is-supported' in path: No such file or directory 2012-12-13 13:24:39.866+0000: 27694: warning : umlCapsInit:87 : Failed to get host power management capabilities I don't really know where to go from here. I'll post whatever info you require
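
    A sketch of where to look next (the domain name and image path are taken from the qemu log above; nothing here is a confirmed diagnosis): libvirt pauses a guest when its disk hits an I/O error, and a full host filesystem under the qcow2 image is the most common way to end up with a VM that runs for a while and then drops back to paused.

        #!/bin/bash
        vm=numerink201    # domain name as it appears in the qemu log

        # Ask libvirt why the domain is paused ("paused (ioerror)" points at the disk):
        virsh domstate "$vm" --reason

        # Check free space where the image lives -- ENOSPC is the usual culprit:
        df -h /var/lib/libvirt/images
        qemu-img info /var/lib/libvirt/images/client.soft.fr/tmpcZAD9U.qcow2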

    Read the article

  • Deploying play! 2.0 application on an apache server with a reverse proxy

    - by locrizak
    I'm trying to deploy my play! 2.0 application on an Ubuntu 11.10 server and I have been running into error after error and hope someone can help me here. I am try to deploy my Play! application using a reverse proxy on Apache 2. I have enabled the apache proxy modules and configured the proxy.conf file in mods_enabled. The vhost for my domain looks like this: <Directory /var/www/stage.domain.com AllowOverride None Order Deny,Allow Deny from all </Directory <VirtualHost *:80 DocumentRoot /var/www/stage.domain.com/web ServerName stage.domain.com ServerAdmin [email protected] # ProxyRequests Off # ProxyPreserveHost On <Proxy * Order allow,deny Allow from all </Proxy # ProxyVia On # ProxyPass /play/ http://localhost:9000/ # ProxyPassReverse /play/ http://localhost:9000/ ErrorLog /var/log/ispconfig/httpd/stage.domain.com/error.log ErrorDocument 400 /error/400.html ErrorDocument 401 /error/401.html ErrorDocument 403 /error/403.html ErrorDocument 404 /error/404.html ErrorDocument 405 /error/405.html ErrorDocument 500 /error/500.html ErrorDocument 502 /error/502.html ErrorDocument 503 /error/503.html <IfModule mod_ssl.c </IfModule <Directory /var/www/stage.domain.com/web Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory <Directory /var/www/clients/client2/web7/web Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory # Clear PHP settings of this website <FilesMatch "\.ph(p3?|tml)$" SetHandler None </FilesMatch # mod_php enabled AddType application/x-httpd-php .php .php3 .php4 .php5 php_admin_value sendmail_path "/usr/sbin/sendmail -t -i [email protected]" php_admin_value upload_tmp_dir /var/www/clients/client2/web7/tmp php_admin_value session.save_path /var/www/clients/client2/web7/tmp # PHPIniDir /var/www/conf/web7 php_admin_value open_basedir /var/www/clients/client2/web7/:/var/www/clients/client2/web7/web:/va$ # add support for apache mpm_itk <IfModule mpm_itk_module AssignUserId web7 client2 </IfModule <IfModule mod_dav_fs.c # Do not execute PHP files in webdav directory <Directory /var/www/clients/client2/web7/webdav <FilesMatch "\.ph(p3?|tml)$" SetHandler None </FilesMatch </Directory # DO NOT REMOVE THE COMMENTS! # IF YOU REMOVE THEM, WEBDAV WILL NOT WORK ANYMORE! # WEBDAV BEGIN # WEBDAV END </IfModule # <Location /play/ # ProxyPass http://localhost:9000/ # SetEnv force-proxy-request-1.0 1 # SetEnv proxy-nokeepalive 1 # </Location ProxyRequests Off ProxyPass /play/ http://localhost:9000/ ProxyPassReverse /play/ localhost:9000/ ProxyPass /play http://localhost:9000/ ProxyPassReverse /play http://localhost:9000/ # SetEnv force-proxy-request-1.0 1 # SetEnv proxy-nokeepalive 1 </VirtualHost This vhost file was generated by ispconfig and I have not touched anything that was there before just added onto. As you can see by the commented out parts I have tried a lot of different things based on random tutorials I have found but all of them have ended up in Internal Server Error, 503 and most often a '502 Bad Gateway`. I can start play and it does connect successfully to my database. I can get a page to show up when there is an error and the play! stack trace error pages comes up but where everything is fine I get one of the errors above. My application.conf file looks like this: db info ....... 
application.mode=PROD logger.root=ERROR # Logger used by the framework: logger.play=INFO # Logger provided to your application: logger.application=DEBUG http.path="/play/" XForwardedSupport="127.0.0.1" And my hosts file looks like this (I have never changed or added anything to the host file): 127.0.0.1 localhost 127.0.1.1 matrix # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters Any insights onto what I might be doing wrong or if theres anything I can try please let me know! Thanks!! Edit Again the reverse proxy will work (I checked with sending to to google.com). Its when there is a successful connection to Netty. It's like Netty refuses the connection to the page. Edit 2 output from apachectl -S _default_:8081 127.0.0.1 (/etc/apache2/sites-enabled/000-apps.vhost:10) *:8090 is a NameVirtualHost default server 127.0.0.1 (/etc/apache2/sites-enabled/000-ispconfig.vhost:10) port 8090 namevhost 127.0.0.1 (/etc/apache2/sites-enabled/000-ispconfig.vhost:10) *:80 is a NameVirtualHost default server 127.0.0.1 (/etc/apache2/sites-enabled/000-default:1) port 80 namevhost 127.0.0.1 (/etc/apache2/sites-enabled/000-default:1) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7) port 80 namevhost stage.domain.com (/etc/apache2/sites-enabled/100-stage.domain.com.vhost:7) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
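
    A way to split the problem in two (a sketch; it assumes Play is listening on its default port 9000): test the application directly on the loopback first, then through Apache, so a 502 can be pinned on either Netty or the proxy configuration.

        #!/bin/bash
        # 1. From the server itself: does the Play app answer at all?
        curl -v http://127.0.0.1:9000/

        # 2. Through Apache, using the prefix the vhost proxies:
        curl -v http://stage.domain.com/play/

        # 3. If (1) works but (2) returns 502/503, check which address Netty bound to;
        #    an app started with -Dhttp.address may not be listening on 127.0.0.1:
        netstat -tlnp | grep 9000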

    Read the article

  • Users suddenly missing write permissions to the root drive c within an active directory domain

    - by Kevin
    I'm managing an active directory single domain environment on some Windows Server 2008, Windows Server 2008 R2 and Windows Server 2012 machines. Since a few weeks I got a strange issue. Some users (not all!) report that they cannot any longer save, copy or write files to the root drive c, whether on their clients (vista, win 7) nor via remote desktop connection on a Windows Server 2008 machine. Even running programs that require direct write permissions to the root drive without administrator permissions fail to do so since then. The affected users have local administrator permissions. The question I'm facing now is: What caused this change of system behavior? Why did this happen? I didn't find out yet. What was the last thing I did before it happened? The last action that was made before it happened was the rollout of a GPO containing network drive mappings for the users depending on their security group membership. All network drives are located on a linux server with samba enabled. We did not change any UAC settings, and they have always been activated. However I can't imagine that rolling out this GPO caused the problem. Has anybody faced an issue like that? Just in case: I know that it is for a specific reason that an user without administrative privileges is prevented from writing to the root drive since windows vista and the implementation of UAC. I don't think that those users should be able to write to drive c, but I try to figure out why this is happening and a few weeks ago this was still working. I also know that a user who is a member of the local administrators group does not execute anything with administrator permissions per default unless he or she executes a program with this permissions. What did I do yet? I checked the permissions of the affected programs, the affected clients/server. Didn't find something special. I checked ALL of our GPOs if there exist any restrictions that could prevent the affected users from writing to the root drive. Did not find any settings. I checked the UAC settings of the affected users and compared those to other users that still can write to the root drive. Everything similar. I googled though the internet and tried to find someone who had a similar problem. Did not find one. Has anybody an idea? Thank you very much. Edit: The GPO that was rolled out does the following (Please excuse if the settings are not named exactly like that, I translated the settings into english): **Windows Settings -- Network Drive Mappings -- Drive N: -- General:** Action: Replace **Properties:** Letter: N Location: \\path-to-drive\drivename Re-Establish connection: deactivated Label as: Name_of_the_Share Use first available Option: deactivated **Windows Settings -- Network Drive Mappings -- Drive N: -- Public: Options:** On error don't process any further elements for this extension: no Run as the logged in user: no remove element if it is not applied anymore: no Only apply once: no **Securitygroup:** Attribute -- Value bool -- AND not -- 0 name -- domain\groupname sid -- sid-of-the-group userContext -- 1 primaryGroup -- 0 localGroup -- 0 **Securitygroup:** Attribute -- Value bool -- OR not -- 0 name -- domain\another-groupname sid -- sid-of-the-group userContext -- 1 primaryGroup -- 0 localGroup -- 0 Edit: The Error-Message of an affected users says the following: Due to an unexpected error you can't copy the file. Error-Code 0x80070522: The client is missing a required permission. 
The command icacls C: shows the following: NT-AUTORITY\SYSTEM:(OI)(CI)(F) PRE-DEFINED\Administrators:(OI)(CI)(F) computername\username:(OI)(CI)(F) A colleague just told me that the primary domain controller (PDC) also changed from Windows Server 2008 to Windows Server 2012. That may also be a reason. Any suggestions?

    Read the article

  • A faulty Caviar Blue hard drive?

    - by Glister
    We have a small "homemade" server running fully updated Debian Wheezy (amd64). One hard drive installed: WDC WD6400AAKS. The motherboard is ASUS M4N68T V2. The usual load: CPU: an average of 20% Each week about 50GB of additional space is occupied. About 47GB of uploaded files and 3GB of MySQL data. I'm afraid that the hard drive may be about to fail. I saw Pre-fail on few places when I ran: root@SERVER:/tmp# smartctl -a /dev/sda smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-4-amd64] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Western Digital Caviar Blue Serial ATA Device Model: WDC WD6400AAKS-XXXXXXX Serial Number: WD-XXXXXXXXXXXXXXXXXXX LU WWN Device Id: 5 0014ee XXXXXXXXXXXXX Firmware Version: 01.03B01 User Capacity: 640,135,028,736 bytes [640 GB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Mon Oct 28 18:55:27 2013 UTC SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x85) Offline data collection activity was aborted by an interrupting command from host. Auto Offline Data Collection: Enabled. Self-test execution status: ( 247) Self-test routine in progress... 70% of test remaining. Total time to complete Offline data collection: (11580) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 136) minutes. Conveyance self-test routine recommended polling time: ( 5) minutes. SCT capabilities: (0x303f) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0027 157 146 021 Pre-fail Always - 5108 4 Start_Stop_Count 0x0032 098 098 000 Old_age Always - 2968 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 200 200 051 Old_age Always - 0 9 Power_On_Hours 0x0032 079 079 000 Old_age Always - 15445 10 Spin_Retry_Count 0x0032 100 100 051 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 100 051 Old_age Always - 0 12 Power_Cycle_Count 0x0032 098 098 000 Old_age Always - 2950 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 426 193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 2968 194 Temperature_Celsius 0x0022 111 095 000 Old_age Always - 36 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0032 200 160 000 Old_age Always - 21716 200 Multi_Zone_Error_Rate 0x0008 200 200 051 Old_age Offline - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 15444 - Error SMART Read Selective Self-Test Log failed: scsi error aborted command Smartctl: SMART Selective Self Test Log Read Failed root@SERVER:/tmp# In one tutorial I read that the pre-fail is a an indication of coming failure, in another tutorial I read that it is not true. Can you guys help me decode the output of smartctl? It would be also nice to share suggestions what should I do if I want to ensure data integrity (about 50GB of new data each week, up to 2TB for the whole period I'm interested in). Maybe I will go with 2x2TB Caviar Black in RAID4?
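
    For reading the table above (a general note plus a sketch, not a verdict on this particular drive): "Pre-fail" in the TYPE column only classifies the attribute, it is not a warning by itself; the value that stands out here is the UDMA_CRC_Error_Count raw value of 21716, which normally points at the SATA cable or port rather than the platters.

        #!/bin/bash
        dev=/dev/sda

        # The handful of attributes that matter most for impending failure:
        smartctl -A "$dev" | grep -E 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable|UDMA_CRC'

        # Reseat or replace the SATA cable, generate some I/O, then check again;
        # on a healthy link the UDMA_CRC raw value should stop climbing:
        smartctl -A "$dev" | grep UDMA_CRC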

    Read the article

  • Unable to make the session state request to the session state server

    - by Angry_IT_Guru
    For about 4-5 months now, I seem to be having this sporadic issue--mainly during our busiest time of the day between 10:30-11:45AM, where all my Windows 2003 web servers in a Microsoft NLB cluster start throwing session state server errors. A sample error is below. System.Web.HttpException: Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name. at System.Web.SessionState.OutOfProcSessionStateStore.MakeRequest(StateProtocolVerb verb, String id, StateProtocolExclusive exclusiveAccess, Int32 extraFlags, Int32 timeout, Int32 lockCookie, Byte[] buf, Int32 cb, Int32 networkTimeout, SessionNDMakeRequestResults& results) at System.Web.SessionState.OutOfProcSessionStateStore.SetAndReleaseItemExclusive(HttpContext context, String id, SessionStateStoreData item, Object lockId, Boolean newItem) at System.Web.SessionState.SessionStateModule.OnReleaseState(Object source, EventArgs eventArgs) at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) Now I'm using ASP.NET State service on a centralized back-end Windows 2003 server that all servers communicate to. I was originally using SQL Server state for a couple years as well prior to having this issue. The problem with SQL wqas that when the issue occurred, it created a blocking situation which essentially impacted all users across all servers. The product company recommended that I use the standard ASP.NET State service as that was what they technically supported. Why this would make a difference is beyond me -- but I had no choice but to try it! I have attempted to create multiple application pools, adding additional servers, chaning TCP/IP timeout from 20 to 30 seconds, and even calling Microsoft ASP.NET product support, with very little success. I even recommended that they review whether they are using read-only session state instead of read/write per page request -- as I understand that this basically causes every page to make round-trips to state server even if state isn't being used on the page. Unfortunately, the application is developed by our product company and they insist that it is something with my environment because other clients do not have these sort of issues. However, I've talked to other clients and they tell me when they've seen issues like they, they've basically had to create another web farm. This issue almost seems like I've simply reached some architectural limit within the application... Microsoft's position on the issue is that the session state needs to be reduced and the returncode being reported back from the state server indicates buffers are full. To better understand the scope of issues (rather than wait for customers to call and complain), I installed ELMAH and configured it to send me e-mails when unhandled exceptions occur. I basically get 500-1000 e-mails during the time period of high activity! 
If anyone has any other ideas I could try, or better ways to troubleshoot, I'd appreciate it.

    Read the article

  • Why did the wireless adapter stop working?

    - by AndreaNobili
    today I correctly installed the driver for the TP-LINK TL-WN725N USB wireless adapter on my RaspBerry Pi (I use RaspBian that is a Debian), then I setted up the wifi using the wpa-supplicant as explained in this tutorial: http://www.maketecheasier.com/setup-wifi-on-raspberry-pi/ This worked fine untill this evening. Then suddenly it stopped to work when I try to connect in SSH and the Raspberry is on the wireless (or rather it should be, as this is not in the list of my router's DHCP connected Client) The strange thing is that the USB wirless adapter blink so I think that this is not a driver problem. If I try to connect it by the ethernet I have no problem. It appear in my router's DHCP connected Client and I can connect to it by SSH. When I connect to it using ethernet if I perform an ifconfig command I obtain: pi@raspberrypi ~ $ ifconfig eth0 Link encap:Ethernet HWaddr b8:27:eb:2a:9f:b0 inet addr:192.168.1.9 Bcast:192.168.1.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:48 errors:0 dropped:0 overruns:0 frame:0 TX packets:59 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:6006 (5.8 KiB) TX bytes:8268 (8.0 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:8 errors:0 dropped:0 overruns:0 frame:0 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1104 (1.0 KiB) TX bytes:1104 (1.0 KiB) wlan0 Link encap:Ethernet HWaddr e8:94:f6:19:80:4c UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) So it seems that the wlan0 USB wireless adapter driver is correctly loaded. If I remove the USB wireless adapter and put it again into the USB port, the lasts lines of dmesg log is: [ 20.303172] smsc95xx 1-1.1:1.0 eth0: hardware isn't capable of remote wakeup [ 20.306340] RTL871X: set bssid:00:00:00:00:00:00 [ 20.306726] RTL871X: set ssid [g\xffffffc6isQ\xffffffffJ\xffffffec)\xffffffcd\xffffffba\xffffffba\xffffffab\xfffffff2\xfffffffb\xffffffe3F|\xffffffc2T\xfffffff8\x1b\xffffffe8\xffffffe7\xffffff8dvZ.c3\xffffff9f\xffffffc9\xffffff9a\xffffff9aD\xffffffa7\x1a\xffffffa0\x1a\xffffff8b] fw_state=0x00000008 [ 21.614585] RTL871X: indicate disassoc [ 21.908495] smsc95xx 1-1.1:1.0 eth0: link up, 100Mbps, full-duplex, lpa 0x45E1 [ 25.006282] Adding 102396k swap on /var/swap. Priority:-1 extents:1 across:102396k SSFS [ 26.247997] RTL871X: nolinked power save enter As you can see some of these line are related to the RTL871X that is my USB wireless adapter, but I don't know is that these line report an error or if it is all ok. Looking at the adapter status I obtain: pi@raspberrypi ~ $ ip link list dev wlan0 3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DORMANT qlen 1000 link/ether e8:94:f6:19:80:4c brd ff:ff:ff:ff:ff:ff As you can see the mode is DORMANT but I think that this is normal because now I am connected using ethernet. 
I tryied to set up the adapter but it seems that I obtain no result, infact: pi@raspberrypi ~ $ sudo ip link set dev wlan0 up pi@raspberrypi ~ $ ip link list dev wlan0 3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DORMANT qlen 1000 link/ether e8:94:f6:19:80:4c brd ff:ff:ff:ff:ff:ff pi@raspberrypi ~ $ sudo ip link set dev wlan0 up This is my /etc/network/interfaces file content and it is ok: auto lo iface lo inet loopback iface eth0 inet dhcp allow-hotplug wlan0 iface wlan0 inet manual wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf iface default inet dhcp and it is the /etc/wpa_supplicant/wpa_supplicant.conf that I think is ok (I did not change it compared to when it worked): ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev update_config=1 network={ ssid="MY-NETWORK" psk="mypassword" key_mgmt=WPA-PSK } and infact if I execute a network scan I correctly find MY-NETWORK in the network list,infact: pi@raspberrypi ~ $ sudo iwlist wlan0 scan | grep ESSID ESSID:"TeleTu_74888B0060AD" ESSID:"MY-NETWORK" ESSID:"FASTWEB-1-PT6NtjL4TOSe" ESSID:"DC" So I reboot the system and I remove the ethernet cable but when I try to connect again to my raspberry I obatin the following error message: andrea@andrea-virtual-machine:~$ sudo ssh [email protected] ssh: connect to host 192.168.1.9 port 22: No route to host It seems that it can't connect using wireless. What could be the problem? What am I missing? How can I solve this situation? Tnx
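
    A sketch for narrowing it down (nothing here is specific to this adapter; the interface name and config path are from the post): run the association by hand in the foreground so authentication errors are visible, instead of letting ifup start wpa_supplicant silently in the background.

        #!/bin/bash
        iface=wlan0

        sudo ifdown "$iface" 2>/dev/null
        sudo ip link set "$iface" up

        # Foreground with debug output -- watch for CTRL-EVENT-CONNECTED or auth failures:
        sudo wpa_supplicant -d -i "$iface" -c /etc/wpa_supplicant/wpa_supplicant.conf

        # In a second terminal, once associated, request a lease and check the address:
        sudo dhclient -v "$iface"
        ip addr show "$iface"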

    Read the article

  • What Apache/PHP configurations do you know and how good are they?

    - by FractalizeR
    Hello. I wanted to ask you about PHP/Apache configuration methods you know, their pros and cons. I will start myself: ---------------- PHP as Apache module---------------- Pros: good speed since you don't need to start exe every time especially in mpm-worker mode. You can also use various PHP accelerators in this mode like APC or eAccelerator. Cons: if you are running apache in mpm-worker mode, you may face stability issues because every glitch in any php script will lead to unstability to the whole thread pool of that apache process. Also in this mode all scripts are executed on behalf of apache user. This is bad for security. mpm-worker configuration requires PHP compiled in thread-safe mode. At least CentOS and RedHat default repositories doesn't have thread-safe PHP version so on these OSes you need to compile at least PHP yourself (there is a way to activate worker mpm on Apache). The use of thread-safe PHP binaries is considered experimental and unstable. Plus, many PHP extensions does not support thread-safe mode or were not well-tested in thread-safe mode. ---------------- PHP as CGI ---------------- This seems to be the slowest default configuration which seems to be a "con" itself ;) ---------------- PHP as CGI via mod_suphp ---------------- Pros: suphp allows you to execute php scipts on behalf of the script file owner. This way you can securely separate different sites on the same machine. Also, suphp allows to use different php.ini files per virtual host. Cons: PHP in CGI mode means less performance. In this mode you can't use php accelerators like APC because each time new process is spawned to handle script rendering the cache of previous process useless. BTW, do you know the way to apply some accelerator in this config? I heard something about using shm for php bytecode cache. Also, you cannot configure PHP via .htaccess files in this mode. You will need to install PECL htscanner for this if you need to set various per-script options via .htaccess (php_value / php_flag directives) ---------------- PHP as CGI via suexec ---------------- This configuration looks the same as with suphp, but I heard, that it's slower and less safe. Almost same pros and cons apply. ---------------- PHP as FastCGI ---------------- Pros: FastCGI standard allows single php process to handle several scripts before php process is killed. This way you gain performance since no need to spin up new php process for each script. You can also use PHP accelerators in this configuration (see cons section for comment). Also, FCGI almost like suphp also allows php processes to be executed on behalf of some user. mod_fcgid seems to have the most complete fcgi support and flexibility for apache. Cons: The use of php accelerator in fastcgi mode will lead to high memory consumption because each PHP process will have his own bytecode cache (unless there is some accelerator that can use shared memory for bytecode cache. Is there such?). FastCGI is also a little bit complex to configure. You need to create various configuration files and make some configuration modifications. It seems, that fastcgi is the most stable, secure, fast and flexible PHP configuration, however, a bit difficult to be configured. But, may be, I missed something? Comments are welcome!
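
    On the shared-bytecode-cache question above: with mod_fcgid the usual pattern is to let a single php-cgi parent manage its own children via PHP_FCGI_CHILDREN, so those children share one APC shared-memory segment instead of every fcgid-spawned process keeping its own copy. A sketch of such a wrapper script (paths and numbers are illustrative, not taken from the text):

        #!/bin/bash
        # e.g. /usr/local/bin/php-wrapper -- referenced from the vhost with FcgidWrapper.
        export PHP_FCGI_CHILDREN=4         # children forked by the php-cgi parent; they share APC
        export PHP_FCGI_MAX_REQUESTS=1000  # recycle children before they bloat or leak
        exec /usr/bin/php-cgi

    Worth noting: mod_fcgid's own process manager and PHP_FCGI_CHILDREN can fight each other, so with a wrapper like this the fcgid per-class process limits usually need to be kept low.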

    Read the article

  • Automatically starting svnserve on Snow Leopard

    - by Cleggy
    Note: I originally asked this question on Server Fault (http://serverfault.com/questions/148052/automatically-starting-svnserve-on-snow-leopard), but I thought this may be a more appropriate place to ask. I have installed Subversion onto my iMac running Snow Leopard, but am having trouble getting svnserve to start up automatically. As I understand it (I'm still fairly green with OSX), the best way to do that is to utilize launchd. To that end, I have created the following .plist file in the /Library/LaunchDaemons folder. If I use launchctl to execute this file, svnserve starts as expected, but it doesn't automatically start when the system starts up or I log in. <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>Disabled</key> <false/> <key>Label</key> <string>org.tigris.subversion.svnserve</string> <key>UserName</key> <string>Dave</string> <key>ProgramArguments</key> <array> <string>/opt/subversion/bin/svnserve</string> <string>--inetd</string> <string>--root=/Users/Shared/SVNrep</string> </array> <key>ServiceDescription</key> <string>Subversion Standalone Server</string> <key>Sockets</key> <dict> <key>Listeners</key> <array> <dict> <key>SockFamily</key> <string>IPv4</string> <key>SockServiceName</key> <string>svn</string> <key>SockType</key> <string>stream</string> </dict> <dict> <key>SockFamily</key> <string>IPv6</string> <key>SockServiceName</key> <string>svn</string> <key>SockType</key> <string>stream</string> </dict> </array> </dict> <key>inetdCompatibility</key> <dict> <key>Wait</key> <false/> </dict> </dict> </plist> I have tried many different configs in the .plist, including auto-starting, simplifying the listeners section, removing dependence on inetd, but they all show the same symptom. The files work when started using launchctl load, but do not automatically start up svnserve if the iMac is rebooted. If anyone here could provide any suggestions as to how to get this to work, I'd really appreciate it.
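
    For comparison, these are the steps usually needed for a LaunchDaemon to come up by itself at boot; a sketch assuming the plist really lives at /Library/LaunchDaemons/org.tigris.subversion.svnserve.plist:

        # launchd silently ignores daemon plists that are not owned by root:wheel or that are group/world-writable
        sudo chown root:wheel /Library/LaunchDaemons/org.tigris.subversion.svnserve.plist
        sudo chmod 644 /Library/LaunchDaemons/org.tigris.subversion.svnserve.plist

        # -w clears the Disabled key persistently, so the job is loaded again on every boot
        sudo launchctl load -w /Library/LaunchDaemons/org.tigris.subversion.svnserve.plist

        # verify launchd knows about the job
        sudo launchctl list | grep svnserve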

    Read the article

  • smartctl -t long isn't finishing

    - by xenoterracide
    I've been running smartctl -t long on a drive for about two days now and it seems to be stalled at 10%; the short and conveyance tests both passed. I have to send back one of the two drives I purchased, because I found bad blocks on it with badblocks (none on this drive, and it's made over a pass already). I'm just wondering if I should be concerned about this. smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Device Model: WDC WD10EARS-00Y5B1 Serial Number: WD-WMAV51582123 Firmware Version: 80.00A80 User Capacity: 1,000,204,886,016 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Mon May 10 22:19:52 2010 EDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 241) Self-test routine in progress... 10% of test remaining. Total time to complete Offline data collection: (20100) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 231) minutes. Conveyance self-test routine recommended polling time: ( 5) minutes. SCT capabilities: (0x3031) SCT Status supported. SCT Feature Control supported. SCT Data Table supported.
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 2 3 Spin_Up_Time 0x0027 131 131 021 Pre-fail Always - 6408 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 12 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0 9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 148 10 Spin_Retry_Count 0x0032 100 253 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 10 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 7 193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 174 194 Temperature_Celsius 0x0022 106 102 000 Old_age Always - 41 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Conveyance offline Completed without error 00% 99 - # 2 Extended offline Interrupted (host reset) 10% 30 - # 3 Short offline Completed without error 00% 0 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay.
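
    If the test really is stuck rather than just slow, one option is to abort it and start a fresh run, polling the self-test log rather than waiting. A sketch, assuming the drive is /dev/sda (substitute the real device node):

        # abort the self-test that appears stalled
        sudo smartctl -X /dev/sda

        # kick off a new extended test; it prints the recommended polling time (231 minutes for this drive)
        sudo smartctl -t long /dev/sda

        # check progress in the self-test log; host resets or spin-downs of the drive will interrupt the run
        sudo smartctl -l selftest /dev/sda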

    Read the article

  • FreeBSD with Vagrant - don't know how to check guest additions version

    - by joelmaranhao
    On Mac OS X 10.9.3 Picked a box from the VagrantCloud Init the vagrant box $ vagrant init chef/freebsd-9.2-i386 A `Vagrantfile` has been placed in this directory. You are now ready to `vagrant up` your first virtual environment! Please read the comments in the Vagrantfile as well as documentation on `vagrantup.com` for more information on using Vagrant. List the files $ ls -al -rw-r--r-- 1 joel staff 4831 Jun 5 17:17 Vagrantfile Vagrantfile content VAGRANTFILE_API_VERSION = "2" Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| config.vm.box = "chef/freebsd-9.2-i386" end Starting my virtual box leads to Errors $ vagrant up Bringing machine 'default' up with 'virtualbox' provider... ==> default: Box 'chef/freebsd-9.2-i386' could not be found. Attempting to find and install... default: Box Provider: virtualbox default: Box Version: >= 0 ==> default: Loading metadata for box 'chef/freebsd-9.2-i386' default: URL: https://vagrantcloud.com/chef/freebsd-9.2-i386 ==> default: Adding box 'chef/freebsd-9.2-i386' (v1.0.0) for provider: virtualbox default: Downloading: https://vagrantcloud.com/chef/freebsd-9.2-i386/version/1/provider/virtualbox.box ==> default: Successfully added box 'chef/freebsd-9.2-i386' (v1.0.0) for 'virtualbox'! ==> default: Importing base box 'chef/freebsd-9.2-i386'... ==> default: Matching MAC address for NAT networking... ==> default: Checking if box 'chef/freebsd-9.2-i386' is up to date... ==> default: Setting the name of the VM: freebsd92-i386_default_1401982167145_49633 ==> default: Fixed port collision for 22 => 2222. Now on port 2201. ==> default: Clearing any previously set network interfaces... ==> default: Preparing network interfaces based on configuration... default: Adapter 1: nat ==> default: Forwarding ports... default: 22 => 2201 (adapter 1) ==> default: Booting VM... ==> default: Waiting for machine to boot. This may take a few minutes... default: SSH address: 127.0.0.1:2201 default: SSH username: vagrant default: SSH auth method: private key default: Warning: Connection timeout. Retrying... default: Warning: Connection timeout. Retrying... ==> default: Machine booted and ready! Sorry, don't know how to check guest version of Virtualbox Guest Additions on this platform. Stopping installation. ==> default: Checking for guest additions in VM... default: The guest additions on this VM do not match the installed version of default: VirtualBox! In most cases this is fine, but in rare cases it can default: prevent things such as shared folders from working properly. If you see default: shared folder errors, please make sure the guest additions within the default: virtual machine match the version of VirtualBox you have installed on default: your host and reload your VM. default: default: Guest Additions Version: 4.2.16 default: VirtualBox Version: 4.3 ==> default: Mounting shared folders... default: /vagrant => /Users/joel/Code/anybots/operations/robot/freebsd92-i386 Vagrant attempted to execute the capability 'mount_virtualbox_shared_folder' on the detect guest OS 'freebsd', but the guest doesn't support that capability. This capability is required for your configuration of Vagrant. Please either reconfigure Vagrant to avoid this capability or fix the issue by creating the capability. Note that I have recently installed the latest version of VirtualBox, but somehow I can't find the Guest Additions.
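
    One low-friction workaround, sketched here rather than taken from the box's documentation, is to let the third-party vagrant-vbguest plugin rebuild the additions, or to sidestep the vboxsf mount on the FreeBSD guest entirely:

        # rebuild the guest additions to match the host's VirtualBox 4.3 (plugin support for FreeBSD guests is not guaranteed)
        vagrant plugin install vagrant-vbguest
        vagrant reload

        # or avoid the VirtualBox shared folder by switching the synced folder to rsync in the Vagrantfile:
        #   config.vm.synced_folder ".", "/vagrant", type: "rsync"
        vagrant reload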

    Read the article

  • IIS 7.5 Manager crashes when adding a custom error page

    - by dig412
    I'm running a local IIS 7.5 server in Win 7 Pro, and I'm trying to add a custom error page for 403 responses. When I click OK to add a custom error page for my site, IIS Manager just vanishes. The server is still running, and I can re-start IIS Manager, but the new page has not been saved. I've also tried adding it directly to web.config, but that just gives me The page cannot be displayed because an internal server error has occurred. Does anyone know why this might be happening? Edit: The event log implies that an invalid character in the path caused the crash, but It occured even when I copied & pasted a path from a valid entry. Application error log: IISMANAGER_CRASH IIS Manager terminated unexpectedly. Exception:System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. --- System.ArgumentException: Illegal characters in path. at System.IO.Path.CheckInvalidPathChars(String path) at System.IO.Path.IsPathRooted(String path) at Microsoft.Web.Management.Iis.CustomErrors.CustomErrorsForm.OnAccept() at Microsoft.Web.Management.Client.Win32.TaskForm.OnOKButtonClick(Object sender, EventArgs e) at System.Windows.Forms.Control.OnClick(EventArgs e) at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent) at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.ButtonBase.WndProc(Message& m) at System.Windows.Forms.Button.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Form.ShowDialog(IWin32Window owner) at Microsoft.Web.Management.Host.UserInterface.ManagementUIService.ShowDialogInternal(Form form, IWin32Window parent) at Microsoft.Web.Management.Host.UserInterface.ManagementUIService.Microsoft.Web.Management.Client.Win32.IManagementUIService.ShowDialog(DialogForm form) at Microsoft.Web.Management.Client.Win32.ModulePage.ShowDialog(DialogForm form) at Microsoft.Web.Management.Iis.CustomErrors.CustomErrorsPage.AddCustomError() --- End of inner exception stack trace --- at System.RuntimeMethodHandle._InvokeMethodFast(Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture) at Microsoft.Web.Management.Client.TaskList.InvokeMethod(String methodName, Object userData) at Microsoft.Web.Management.Host.UserInterface.Tasks.MethodTaskItemLine.InvokeMethod() at System.Windows.Forms.LinkLabel.OnMouseUp(MouseEventArgs e) at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks) at 
System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.Label.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at Microsoft.Web.Management.Host.Shell.ShellApplication.Execute(Boolean localDevelopmentMode, Boolean resetPreferences, Boolean resetPreferencesNoLaunch) Process:InetMgr
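
    As a workaround for the Manager crash, the same mapping can usually be declared directly in web.config. A sketch, assuming a custom page at /errors/403.html and that the httpErrors section is not locked at the server level:

        <configuration>
          <system.webServer>
            <httpErrors errorMode="Custom">
              <!-- replace the inherited 403 entry before adding our own -->
              <remove statusCode="403" subStatusCode="-1" />
              <error statusCode="403" path="/errors/403.html" responseMode="ExecuteURL" />
            </httpErrors>
          </system.webServer>
        </configuration>

    If that still produces the internal server error, the section is probably locked; running %windir%\system32\inetsrv\appcmd unlock config -section:system.webServer/httpErrors from an elevated prompt is the usual way to unlock it.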

    Read the article

  • Call to daemon in a /etc/init.d script is blocking, not running in background

    - by tony
    I have a Perl script that I want to daemonize. Basically this perl script will read a directory every 30 seconds, read the files that it finds and then process the data. To keep it simple here consider the following Perl script (called synpipe_server, there is a symbolic link of this script in /usr/sbin/) : #!/usr/bin/perl use strict; use warnings; my $continue = 1; $SIG{'TERM'} = sub { $continue = 0; print "Caught TERM signal\n"; }; $SIG{'INT'} = sub { $continue = 0; print "Caught INT signal\n"; }; my $i = 0; while ($continue) { #do stuff print "Hello, I am running " . ++$i . "\n"; sleep 3; } So this script basically prints something every 3 seconds. Then, as I want to daemonize this script, I've also put this bash script (also called synpipe_server) in /etc/init.d/ : #!/bin/bash # synpipe_server : This starts and stops synpipe_server # # chkconfig: 12345 12 88 # description: Monitors all production pipelines # processname: synpipe_server # pidfile: /var/run/synpipe_server.pid # Source function library. . /etc/rc.d/init.d/functions pname="synpipe_server" exe="/usr/sbin/synpipe_server" pidfile="/var/run/${pname}.pid" lockfile="/var/lock/subsys/${pname}" [ -x $exe ] || exit 0 RETVAL=0 start() { echo -n "Starting $pname : " daemon ${exe} RETVAL=$? PID=$! echo [ $RETVAL -eq 0 ] && touch ${lockfile} echo $PID > ${pidfile} } stop() { echo -n "Shutting down $pname : " killproc ${exe} RETVAL=$? echo if [ $RETVAL -eq 0 ]; then rm -f ${lockfile} rm -f ${pidfile} fi } restart() { echo -n "Restarting $pname : " stop sleep 2 start } case "$1" in start) start ;; stop) stop ;; status) status ${pname} ;; restart) restart ;; *) echo "Usage: $0 {start|stop|status|restart}" ;; esac exit 0 So, (if I have well understood the doc for daemon) the Perl script should run in the background and the output should be redirected to /dev/null if I execute : service synpipe_server start But here is what I get instead : [root@master init.d]# service synpipe_server start Starting synpipe_server : Hello, I am running 1 Hello, I am running 2 Hello, I am running 3 Hello, I am running 4 Caught INT signal [ OK ] [root@master init.d]# So it starts the Perl script but runs it without detaching it from the current terminal session, and I can see the output printed in my console ... which is not really what I was expecting. Moreover, the PID file is empty (or with a line feed only, no pid returned by daemon). Does anyone have any idea of what I am doing wrong ? EDIT : maybe I should say that I am on a Red Hat machine. Scientific Linux SL release 5.4 (Boron) Would it do the job if instead of using the daemon function, I use something like : nohup ${exe} >/dev/null 2>&1 & in the init script ?
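
    For the record, daemon() from /etc/init.d/functions does not put the program in the background itself; it expects the program to daemonize on its own. A sketch of a start() that detaches the script instead (variable names reuse the ones defined above):

        start() {
            echo -n "Starting $pname : "
            # the Perl script never forks, so detach it here and record the background PID
            nohup ${exe} >/dev/null 2>&1 &
            # $? only reflects whether the launch succeeded, not the script itself
            RETVAL=$?
            PID=$!
            echo
            [ $RETVAL -eq 0 ] && touch ${lockfile}
            echo $PID > ${pidfile}
        }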

    Read the article

  • Do email forms need to be sanitized before sending?

    - by levi
    I have a client that keeps getting reports from godaddy's "websiteprotection.com" stating how the website is insecure. Your website contains pages that do not properly sanitize visitor-provided input to make sure it contains no malicious content or scripts. Cross-site scripting vulnerabilities let malicious users execute arbitrary HTML or script code in another visitor's browser. Output: The request string used to detect this flaw was : /cross_site_scripting.?nasl.asp The output was : HTTP/1.1 404 Not Found\r Date: Wed, 21 Mar 2012 08:12:02 GMT\r Server: Apache\r X-Pingback:http://?CLIENTSWEBSITE.com/?xmlrpc.php\r Expires: Wed, 11 Jan 1984 05:00:00 GMT\r Cache-Control: no-cache, must-revalidate, max-age=0\r Pragma: no-cache\r Set-Cookie: PHPSESSID=?1jsnhuflvd59nb4trtquston50; path=/\r Last-Modified: Wed, 21 Mar 2012 08:12:02 GMT\r Keep-Alive: timeout=15, max=100\r Connection: Keep-Alive\r Transfer-Encoding: chunked\r Content-Type: text/html; charset=UTF-8\r \r <div id="contact-form" class="widget"><form action="http://?CLIENTSWEBSITE.c om/<script>cross_site_?scripting.nasl</script>.asp" id="contactForm" meth od="post"> It looks like it has an issue with the contact form. All the contact form does is posts an ajax request to the same page, and than a PHP script mails the data (no database stuff). Is there any a security issues here? Any ideas on how I can satisfy the security scanner? Here is the form and script: <form action="<?php echo $this->getCurrentUrl(); ?>" id="contactForm" method="post"> <input type="text" name="Name" id="Name" value="" class="txt requiredField name" /> //Some more text inputs <input type="hidden" name="sendadd" id="sendadd" value="<?php echo $emailadd ; ?>" /> <input type="hidden" name="submitted" id="submitted" value="true" /><input class="submit" type="submit" value="Send" /> </form> // Some initial JS validation, if that passes an ajax post is made to the script below //If the form is submitted if(isset($_POST['submitted'])) { //Check captcha if (isset($_POST["captchaPrefix"])) { $capt = new ReallySimpleCaptcha(); $correct = $capt->check( $_POST["captchaPrefix"], $_POST["Captcha"] ); if( ! $correct ) { echo false; die(); } else { $capt->remove( $_POST["captchaPrefix"] ); } } $dateon = $_POST["dateon"]; $ToEmail = $_POST["sendadd"]; $EmailSubject = 'Contact Form Submission from ' . get_bloginfo('title'); $mailheader = "From: ".$_POST["Email"]."\r\n"; $mailheader .= "Reply-To: ".$_POST["Email"]."\r\n"; $mailheader .= "Content-type: text/html; charset=iso-8859-1\r\n"; $MESSAGE_BODY = "Name: ".$_POST["Name"]."<br>"; $MESSAGE_BODY .= "Email Address: ".$_POST["Email"]."<br>"; $MESSAGE_BODY .= "Phone: ".$_POST["Phone"]."<br>"; if ($dateon == "on") {$MESSAGE_BODY .= "Date: ".$_POST["Date"]."<br>";} $MESSAGE_BODY .= "Message: ".$_POST["Comments"]."<br>"; mail($ToEmail, $EmailSubject, $MESSAGE_BODY, $mailheader) or die ("Failure"); echo true; die(); }
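
    At minimum, anything that ends up in the mail headers needs \r and \n stripped, or a visitor can inject extra headers and relay spam through the script. A sketch of that kind of filtering, with a hypothetical helper name and only standard PHP functions:

        <?php
        // hypothetical helper: remove CR/LF so user input cannot add mail headers
        function clean_header_value($value) {
            return str_replace(array("\r", "\n", "%0a", "%0d"), '', trim($value));
        }

        $email = filter_var($_POST['Email'], FILTER_VALIDATE_EMAIL);
        if ($email === false) {
            die('Invalid email address');
        }

        $name = htmlspecialchars(clean_header_value($_POST['Name']), ENT_QUOTES, 'UTF-8');

        $mailheader  = "From: " . $email . "\r\n";
        $mailheader .= "Reply-To: " . $email . "\r\n";

        // note: taking the recipient from the hidden sendadd field lets anyone use the script
        // to mail arbitrary addresses; hard-coding the recipient server-side avoids that
        ?>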

    Read the article

  • apache webserver unresponsive with server-status showing all child processes waiting for connection

    - by Jeff
    My setup: i have 3 nearly identical webserver machines serving the same high loaded dynamic website with simple load balancing over dns. The service has been working for over two ears with the same apache config. apache2, php5, ubuntu 8.04 linux 2.6.24-29-server My problem: since about two weeks i'm experiencing problems with this config. Nearly every day i have one small moment about 5 minutes, in which the website is unreachable. I'm still able to login to the servers over ssh. If i run htop, i see the machine simply doing nothing. i have about 1000 apache processes running, but no cpu activity. i've used the apache mod_status to debug this situation. the process scoreboard looks like this: _C.___K_______________________R._______.__K_K____K___C_______.__ _______C__________.___________________________________.________C _.____K__________K___K_WK_____._K_____________________________._ W______K__________K________.____________________._______C_______ _C_.__K__K____.._.._____________________________________C_______ _R___________K___.______C________.C_________.______._____C______ ____________KKC____K_____K__WC_________________C_____.__.____.__ _____________________C_________K______.____C______._____________ _.___C____.___.___________________________.K______.____K________ W__.___________________C.__.____K________K_______R_._.__._______ __C__C_.__________C__C_______._____W______________C_.___C_______ ____.______C_____________C________.____C____________.________._K __.__________.K_____________K_________._____C____.K__________KW_ __K.W________R_________._______.___W___________.____.__K_____W__ W___.___..________W____K Scoreboard Key: "_" Waiting for Connection, "S" Starting up, "R" Reading Request, "W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup, "C" Closing connection, "L" Logging, "G" Gracefully finishing, "I" Idle cleanup of worker, "." Open slot with no current process So the most of the processes are just waiting for connection. after about 5 minutes the situation will return to normal: i have lot least processes on every machine, the most workers have the "."-status (meaing they are open to process a request) and of course the website is reachable! so i'm trying to find something in the logs, but there is simply nothing... the apache access log is silent for about 4 minutes, the same is for the error log. i also can not figure out anything wrong in other system logs. the situation is the same on all 3 webservers (all of them have this load peak and unresposibility at the same time), so i do not thing this is hardware related. but i think, this might be related to some network (tcp) issue. any ideas? EDIT: some more information, that i have just discovered: it has just happened again. and i was able to verify that i'm also not able to connect locally when this problem occurs. i have made some connection statistics with the following command after it happend netstat -an|awk '/tcp/ {print $6}'|sort|uniq -c 109 CLOSE_WAIT 2652 ESTABLISHED 2 FIN_WAIT1 11 LAST_ACK 12 LISTEN 91 SYN_RECV 1 SYN_SENT 16 TIME_WAIT If i execute the same command some time later, i have something like this: 4 CLOSING 108 ESTABLISHED 18 FIN_WAIT1 182 FIN_WAIT2 37 LAST_ACK 12 LISTEN 50 SYN_RECV 11276 TIME_WAIT So in the normal situation i have only 100-200 open connections by clients beeing handled by apache in this moment. when i have this "crash", i have a lot more connections. what is the best way to analyse this? 
EDIT2: the important lines in apache2.conf are: KeepAlive On MaxKeepAliveRequests 20 KeepAliveTimeout 1 <IfModule mpm_prefork_module> ServerLimit 920 StartServers 30 MinSpareServers 80 MaxSpareServers 120 MaxClients 920 MaxRequestsPerChild 700 </IfModule> it is an apache2 prefork with php_mod. the server has 8GB ram and a 4gb swap partition.
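
    When it happens again, a quick data-gathering pass run from an ssh session that is already open can show whether the stall is in the accept queue or behind the workers. A sketch, assuming the Debian/Ubuntu apache2 layout:

        # snapshot of TCP states on port 80
        netstat -ntp | awk '$4 ~ /:80$/ {print $6}' | sort | uniq -c

        # listen-queue overflows mean apache stopped accepting new connections fast enough
        netstat -s | grep -i -E 'listen|overflow'

        # full scoreboard with per-request details (needs ExtendedStatus On and a text browser installed)
        apache2ctl fullstatus

        # see what the stuck workers are blocked on (database, NFS, DNS, ...)
        ps -eo pid,stat,wchan:30,cmd | grep '[a]pache2' | head -n 40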

    Read the article

  • Why am I getting a 403 error on a POST to a PHP script?

    - by John Gallagher
    Background I want to allow my users to submit a crash report which will get emailed to me. I'm using UKCrashReporter with the bundled PHP script I've modified. This code does a POST to a specified URL along with the crash report. I'm on a shared server running Linux. My main domain is synapticmishap.co.uk. The Problem When I send the crash report off, on the Cocoa side, it reports as having sent it successfully, but I don't receive an email. The code has been used in lots of other well established Cocoa projects and it was working for me a few months ago. That leads me to conclude that the problems are related to my web server setup, something I know almost nothing about. When I look at my log files, I see entries like this: IP Redacted - - [10/Jun/2010:09:47:53 +0100] "POST /synapticmishap/crashreportform.php HTTP/1.1" 403 74 "-" "UKCrashReporter" What I've tried I've tried accessing the page at http://synapticmishap.co.uk/synapticmishap/crashreportform.php via a browser. It loads fine. I've made sure the permissions on this php script are set so anyone can execute it. I've tried removing the deny entries from the section of .htaccess at various levels starting with root. I've downloaded the URLParams plugin for Firefox which allows you to simulate POSTs. I put in the URL above and tried a post with "crashlog" as the parameter and "test" as the value. This generated a 200 log entry in my log file - it seemed to work, although no mail message was sent. Code I've got the following at http://synapticmishap.co.uk/synapticmishap/crashreportform.php. I've simplified it to just the bare bones in an effort to get it working. <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <title>Crash Report</title> </head> <body> <p>This page contains super special magic which submits a crash report item to me.</p> <p>Nothing to see here - move along.</p> <?php mail( "[email protected]", "Crash Report", "\r\n\r\nThis is a test."); ?> </body> </html> This is my top level .htaccess file: RewriteEngine on # -FrontPage- IndexIgnore .htaccess */.??* *~ *# */HEADER* */README* */_vti* <Limit GET POST> order deny,allow deny from all allow from all </Limit> <Limit PUT DELETE> order deny,allow deny from all </Limit> Options All -Indexes RewriteCond %{HTTP_HOST} ^synapticmishap.co.uk$ [OR] RewriteCond %{HTTP_HOST} ^www.synapticmishap.co.uk$ RewriteCond %{HTTP_HOST} ^lapsusapp.co.uk$ [OR] RewriteCond %{HTTP_HOST} ^www.lapsusapp.co.uk$ RewriteRule ^/?$ "http\:\/\/synapticmishap\.co\.uk\/synapticmishap\/lapsuspromo\/" [R=301,L] RewriteCond %{HTTP_HOST} ^jgtutoring.co.uk$ [OR] RewriteCond %{HTTP_HOST} ^www.jgtutoring.co.uk$ RewriteRule ^/?$ "http\:\/\/synapticmishap\.co\.uk\/tutoring" [R=301,L] RewriteCond %{HTTP_HOST} ^synapticmishap.co.uk$ [OR] RewriteCond %{HTTP_HOST} ^www.synapticmishap.co.uk$ RewriteRule ^/?$ "http\:\/\/synapticmishap\.co\.uk\/synapticmishap" [R=301,L] RewriteCond %{HTTP_HOST} ^jgediting.co.uk$ [OR] RewriteCond %{HTTP_HOST} ^www.jgediting.co.uk$ RewriteRule ^/?$ "http\:\/\/synapticmishap\.co\.uk\/editing" [R=301,L] RewriteCond %{HTTP_REFERER} !^$ RewriteCond %{HTTP_REFERER} !^http://synapticmishap.co.uk/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://synapticmishap.co.uk$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.synapticmishap.co.uk/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.synapticmishap.co.uk$ [NC] RewriteCond %{HTTP_REFERER} !^http://synapticmishap.co.uk/synapticmishap/crashreportform.php/.*$ [NC] RewriteCond %{HTTP_REFERER} 
!^http://synapticmishap.co.uk/synapticmishap/crashreportform.php$ [NC] RewriteRule .*\.(jpg|jpeg|gif|png|bmp)$ - [F,NC] Help! I'm at the end of my tether with this and I'm in a very unfamiliar space with all this web stuff. I'd be most appreciative of any thoughts people had on why this isn't working. Thanks.
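
    It usually helps to replay the request from the command line and watch the server's error log at the same time. A sketch assuming curl is available and that a user-agent-based rule (mod_security is common on shared hosts) might be involved:

        # reproduce the app's POST, including its User-Agent, and show the response headers
        curl -v -A "UKCrashReporter" --data-urlencode "crashlog=test report" \
            http://synapticmishap.co.uk/synapticmishap/crashreportform.php

        # repeat with a browser-like User-Agent; if this one gets a 200, the 403 is coming from a UA-based rule
        curl -v -A "Mozilla/5.0" --data-urlencode "crashlog=test report" \
            http://synapticmishap.co.uk/synapticmishap/crashreportform.php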

    Read the article

  • All my sites are 403 but the server is running. Errors on startup

    - by Craig
    We gave access to a contractor to install a firewall and somehow while he was doing it he fracked something up. Everything went off-line about 24 hours ago and we are effectively out of business until I solve this and the person who messed up the thing is not returning calls. I found a few errors. First, I'm not a server guy - I can look at log files and normally everything runs fine. All 'services' are running according to 1and1 server monitoring and mail is being delivered just fine. The whole thing was off-line until I (probably stupidly) updated the kernel from 6.2 to 6.3 this morning and I got everything back except the http access. All the domains (~200 of them) are returning a 403 error and nothing is recorded in the access log. On every restart I see this error in the messages log file: init: Failed to spawn ttyS0 main process: unable to execute: No such file or directory and a little later these: kernel: WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted) kernel: Hardware name: X9SCL/X9SCM kernel: Modules linked in: xt_iprange iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 ext4 jbd2 serio_raw i2c_i801 i2c_core sg iTCO_wdt iTCO_vendor_support e1000e ext3 jbd mbcache raid1 sd_mod crc_t10dif ahci dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan] kernel: Pid: 367, comm: md3_raid1 Not tainted 2.6.32-220.2.1.el6.x86_64 #1 kernel: Call Trace: kernel: [<ffffffff81069997>] ? warn_slowpath_common+0x87/0xc0 kernel: [<ffffffff810699ea>] ? warn_slowpath_null+0x1a/0x20 kernel: [<ffffffff814eccc5>] ? thread_return+0x232/0x79d kernel: [<ffffffff8126a4d9>] ? cpumask_next_and+0x29/0x50 kernel: [<ffffffff813e9c05>] ? md_super_wait+0x55/0x90 kernel: [<ffffffff81090a10>] ? autoremove_wake_function+0x0/0x40 kernel: [<ffffffff813ebf46>] ? md_update_sb+0x206/0x3f0 kernel: [<ffffffff813ee922>] ? md_check_recovery+0x3f2/0x6d0 kernel: [<ffffffffa005b129>] ? raid1d+0x49/0x1050 [raid1] kernel: [<ffffffff814ed985>] ? schedule_timeout+0x215/0x2e0 kernel: [<ffffffff814ef447>] ? _spin_unlock_irqrestore+0x17/0x20 kernel: [<ffffffff813eb336>] ? md_thread+0x116/0x150 kernel: [<ffffffff81090a10>] ? autoremove_wake_function+0x0/0x40 kernel: [<ffffffff813eb220>] ? md_thread+0x0/0x150 kernel: [<ffffffff810906a6>] ? kthread+0x96/0xa0 kernel: [<ffffffff8100c14a>] ? child_rip+0xa/0x20 kernel: [<ffffffff81090610>] ? kthread+0x0/0xa0 kernel: [<ffffffff8100c140>] ? child_rip+0x0/0x20 And something is wrong with the Named/BIND resulting in the same error for all domains: zone DOMAINEXAMPLE.com/IN: loading from master file DOMAINEXAMPLE.com failed: file not found zone DOMAINEXAMPLE.com/IN: not loaded due to errors. _default/DOMAINEXAMPLE.com/IN: file not found I'm pretty sure this is not enough information to solve the problem, but I'm willing to engage someone who can work this out for me. Any help would be greatly appreciated.
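
    Since nothing reaches the access log, a short checklist for a CentOS-style box covers the usual post-update suspects, broken vhost includes and SELinux labels. A sketch; the paths are assumptions, so adjust them to the actual docroots:

        # syntax-check the configuration and list the vhosts apache actually loaded
        apachectl configtest
        httpd -S

        # if SELinux is enforcing, mislabelled docroots can return 403 across the board
        getenforce
        restorecon -Rv /var/www

        # confirm the apache user can still traverse into the document roots
        sudo -u apache ls /var/www 2>&1 | head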

    Read the article

  • mysqld service crashes on restart, after importing mysqldump #innodb

    - by ubunut
    I have 2 mysql servers. Let's call them server01 & server02. Both have the same configuration: mysqladmin Ver 8.42 Distrib 5.1.61, for redhat-linux-gnu on x86_64 [client] default-character-set=utf8 [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 max_allowed_packet = 16M default-character-set=utf8 default-collation=utf8_unicode_ci character-set-server=utf8 collation-server=utf8_unicode_ci default-storage-engine = InnoDB innodb_data_home_dir = /var/lib/mysql innodb_log_group_home_dir = /var/lib/mysql innodb_data_file_path = ibdata1:10M:autoextend innodb_additional_mem_pool_size = 2M innodb_log_file_size = 5M innodb_log_buffer_size = 8M innodb_lock_wait_timeout = 50 innodb_flush_log_at_trx_commit = 1 innodb_buffer_pool_size = 700M table_cache = 300 thread_cache_size = 4 query_cache_size = 200m query_cache_limit = 10m [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid I make a mysqldump on server01: mysqldump -uuser -ppassword --all-databases testservers.sql (most tables in these databases are innodb, some of the mysql.* tables are Innodb too) Then I import the testservers.sql on server02: mysql -uuser < testservers.sql (mysqld has been started with --skip-network). So far so good, I can login into mysql & everything seems to be ok. BUT when I exit to the shell and execute service mysqld restart, The service fails to start. stack-trace in /var/log/mysqld.log: 121022 14:53:19 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql 121022 14:53:19 [Warning] '--default-character-set' is deprecated and will be removed in a future release. Please use '--character-set-server' instead. 121022 14:53:19 [Warning] '--default-collation' is deprecated and will be removed in a future release. Please use '--collation-server' instead. 12:53:19 UTC - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=8384512 read_buffer_size=131072 max_used_connections=0 max_threads=151 thread_count=0 connection_count=0 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338324 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x267e630 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7fff3efe0be0 thread_stack 0x40000 /usr/libexec/mysqld(my_print_stacktrace+0x29) [0x84bd89] /usr/libexec/mysqld(handle_fatal_signal+0x483) [0x6a0be3] /lib64/libpthread.so.0() [0x338d60f500] /usr/libexec/mysqld(ha_resolve_by_name(THD*, st_mysql_lex_string const*)+0x81) [0x6956e1] /usr/libexec/mysqld(open_table_def(THD*, st_table_share*, unsigned int)+0xe0a) [0x60e5ba] /usr/libexec/mysqld(get_table_share(THD*, TABLE_LIST*, char*, unsigned int, unsigned int, int*)+0x20b) [0x602b0b] /usr/libexec/mysqld() [0x603597] /usr/libexec/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x7a1) [0x6079a1] /usr/libexec/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5d0) [0x608570] /usr/libexec/mysqld(open_and_lock_tables_derived(THD*, TABLE_LIST*, bool)+0x6a) [0x60877a] /usr/libexec/mysqld(plugin_init(int*, char**, int)+0x622) [0x715af2] /usr/libexec/mysqld() [0x5bd3b2] /usr/libexec/mysqld(main+0x1b3) [0x5bfc93] /lib64/libc.so.6(__libc_start_main+0xfd) [0x338d21ecdd] /usr/libexec/mysqld() [0x5087b9] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (0): is an invalid pointer Connection ID (thread ID): 0 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 121022 14:53:19 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended A typical mysqdump entry looks like this: DROP TABLE IF EXISTS `adodb_logsql`; /*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET character_set_client = utf8 */; CREATE TABLE `adodb_logsql` ( `id` bigint(10) unsigned NOT NULL AUTO_INCREMENT, `created` datetime NOT NULL, `sql0` varchar(250) NOT NULL DEFAULT '', `sql1` text, `params` text, `tracer` text, `timer` decimal(16,6) NOT NULL DEFAULT '0.000000', PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='to save some logs from ADOdb'; /*!40101 SET character_set_client = @saved_cs_client */; IF I change all occurrences of "ENGINE=InnoDB" to "ENGINE=MyISAM" before import, then the service has no problem restarting. I'm quite puzzled as to what's happening, maybe I'm just an idiot, then by all means tell me so. Any help would be greatly appreciated!
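
    A common way around this, sketched with placeholder database names, is to keep the mysql system schema out of the dump, or to run mysql_upgrade after the import so the system tables match the 5.1 binaries again:

        # dump only the application schemas instead of --all-databases
        mysqldump -uuser -p --databases app_db1 app_db2 > apps.sql

        # import on server02
        mysql -uuser -p < apps.sql

        # if the mysql schema was already overwritten, repair it in place and restart
        mysql_upgrade -uroot -p
        service mysqld restart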

    Read the article

< Previous Page | 306 307 308 309 310 311 312 313 314 315 316 317  | Next Page >