Search Results

Search found 14125 results on 565 pages for 'apache commons io'.


  • Solr authentication possible? (or apache port authentication would also work)

    - by Camran
    Currently anybody can access the Solr admin page by going to my_ip:8983/solr. I can't have it like that, so how can I make it prompt for a password or something? I have set up my server's apache2.conf to prompt for a password whenever my site is accessed via www.mydomain.com, but when another port is used, the "require password" won't show up. Any ideas how to secure this? Don't point me to the SolrSecurity wiki because it's simply too outdated; I have tried it without luck. Thanks
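    One minimal way to get that prompt is to force the admin UI through the already password-protected Apache instance. This is only a sketch, assuming Apache 2.2 on Debian/Ubuntu with mod_proxy available; the htpasswd path, user name and /solr location are illustrative, and 8983 is the port from the question:

        # create a password file and enable the proxy modules (file path is illustrative)
        sudo htpasswd -c /etc/apache2/solr.htpasswd admin
        sudo a2enmod proxy proxy_http

        # then, in the vhost or apache2.conf, proxy /solr to the local Solr port
        # and protect that location with Basic auth:
        #
        #   ProxyPass        /solr http://127.0.0.1:8983/solr
        #   ProxyPassReverse /solr http://127.0.0.1:8983/solr
        #   <Location /solr>
        #       AuthType Basic
        #       AuthName "Solr admin"
        #       AuthUserFile /etc/apache2/solr.htpasswd
        #       Require valid-user
        #   </Location>

        # block direct access to 8983 from outside so the proxy cannot be bypassed
        sudo iptables -A INPUT -p tcp --dport 8983 ! -s 127.0.0.1 -j DROP
        sudo /etc/init.d/apache2 reload

    With port 8983 unreachable from outside, the admin pages are then only available through the password-protected /solr location on port 80.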

    Read the article

  • guvcview recording video and audio out of synchronisation in Ubuntu 10.10

    - by SIJAR
    I finally got Guvcview, a great software for Logitech webcam and it does all the stuff that one wants out of it. But I'm not satisfy with the video recording, video and audio out of synchronisation also video seems to be in slow motion. Please help so that I can tweak in and get a good video recording with the webcam. Below is the log of Guvcview ------------------------------------------------------------------------------- guvcview 1.4.1 video_device: /dev/video0 vid_sleep: 0 cap_meth: 1 resolution: 640 x 480 windowsize: 1024 x 715 vert pane: 578 spin behavior: 0 mode: mjpg fps: 1/25 Display Fps: 0 bpp: 0 hwaccel: 1 avi_format: 4 sound: 1 sound Device: 4 sound samp rate: 0 sound Channels: 0 Sound delay: 0 nanosec Sound Format: 85 Pan Step: 2 degrees Tilt Step: 2 degrees Video Filter Flags: 0 image inc: 0 profile(default):/home/sijar/default.gpfl starting portaudio... bt_audio_service_open: connect() failed: Connection refused (111) bt_audio_service_open: connect() failed: Connection refused (111) bt_audio_service_open: connect() failed: Connection refused (111) bt_audio_service_open: connect() failed: Connection refused (111) Cannot connect to server socket err = No such file or directory Cannot connect to server socket jack server is not running or cannot be started language catalog= dir:/usr/share/locale type:UTF-8 lang:en_US.utf8 cat:guvcview.mo mjpg: setting format to 1196444237 capture method = 1 video device: /dev/video0 libv4lconvert: warning more framesizes then I can handle! libv4lconvert: warning more framesizes then I can handle! /dev/video0 - device 1 libv4lconvert: warning more framesizes then I can handle! libv4lconvert: warning more framesizes then I can handle! Init. UVC Camera (046d:0825) (location: usb-0000:00:1d.7-5) { pixelformat = 'YUYV', description = 'YUV 4:2:2 (YUYV)' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 160, height = 120 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 176, height = 144 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 176 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 352, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 432, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 544, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 640, height = 360 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, ... repeats a couple of times ... 
vid:046d pid:0825 driver:uvcvideo Adding control for Pan (relative) UVCIOC_CTRL_ADD - Error: Operation not permitted checking format: 1196444237 VIDIOC_G_COMP:: Invalid argument compression control not supported fps is set to 1/25 drawing controls control[0]: 0x980900 Brightness, 0:255:1, default 128 control[0]: 0x980901 Contrast, 0:255:1, default 32 control[0]: 0x980902 Saturation, 0:255:1, default 32 control[0]: 0x98090c White Balance Temperature, Auto, 0:1:1, default 1 control[0]: 0x980913 Gain, 0:255:1, default 0 control[0]: 0x980918 Power Line Frequency, 0:2:1, default 2 control[0]: 0x98091a White Balance Temperature, 0:10000:10, default 4000 control[0]: 0x98091b Sharpness, 0:255:1, default 24 control[0]: 0x98091c Backlight Compensation, 0:1:1, default 1 control[0]: 0x9a0901 Exposure, Auto, 0:3:1, default 3 control[0]: 0x9a0902 Exposure (Absolute), 1:10000:1, default 166 control[0]: 0x9a0903 Exposure, Auto Priority, 0:1:1, default 0 resolutions of format(2) = 19 frame rates of 1º resolution=6 Def. Res: 0 numb. fps:6 --------------------------------------- device #0 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 (hw:0,0) Host API = ALSA Max inputs = 2, Max outputs = 2 Def. low input latency = 0.012 Def. low output latency = 0.012 Def. high input latency = 0.046 Def. high output latency = 0.046 Def. sample rate = 44100.00 --------------------------------------- device #1 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - MIC ADC (hw:0,1) Host API = ALSA Max inputs = 2, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #2 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - MIC2 ADC (hw:0,2) Host API = ALSA Max inputs = 2, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #3 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - ADC2 (hw:0,3) Host API = ALSA Max inputs = 2, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #4 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - IEC958 (hw:0,4) Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.011 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. sample rate = 48000.00 --------------------------------------- device #5 Name = USB Device 0x46d:0x825: USB Audio (hw:1,0) Host API = ALSA Max inputs = 1, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #6 Name = front Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.012 Def. high input latency = -1.000 Def. high output latency = 0.046 Def. sample rate = 44100.00 --------------------------------------- device #7 Name = iec958 Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.011 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. 
sample rate = 48000.00 --------------------------------------- device #8 Name = spdif Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.011 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. sample rate = 48000.00 --------------------------------------- device #9 Name = pulse Host API = ALSA Max inputs = 32, Max outputs = 32 Def. low input latency = 0.012 Def. low output latency = 0.012 Def. high input latency = 0.046 Def. high output latency = 0.046 Def. sample rate = 44100.00 --------------------------------------- device #10 Name = dmix Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.043 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. sample rate = 48000.00 --------------------------------------- device #11 [ Default Input, Default Output ] Name = default Host API = ALSA Max inputs = 32, Max outputs = 32 Def. low input latency = 0.012 Def. low output latency = 0.012 Def. high input latency = 0.046 Def. high output latency = 0.046 Def. sample rate = 44100.00 ---------------------------------------------- SampleRate:0 Channels:0 Video driver: x11 A window manager is available VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_CTRL for user class controls control(0x0098091a) "White Balance Temperature" failed to set (error -1) VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_EXT_CTRLS on single controls for class: 0x009a0000 control(0x009a0902) "Exposure (Absolute)" failed to set (error -1) VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_CTRL for user class controls control(0x0098091a) "White Balance Temperature" failed to set (error -1) VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_EXT_CTRLS on single controls for class: 0x009a0000 control(0x009a0902) "Exposure (Absolute)" failed to set (error -1) Cap Video toggled: 1 (/home/sijar/Videos/Webcam) 25371756K bytes free on a total of 39908968K (used: 36 %) treshold=51200K using audio codec: 0x0055 Audio frame size is 1152 samples for selected codec IO thread started...OK [libx264 @ 0x8cbd8b0]using cpu capabilities: MMX2 SSE2 Cache64 [libx264 @ 0x8cbd8b0]profile Baseline, level 3.0 [libx264 @ 0x8cbd8b0]non-strictly-monotonic PTS shift sound by -9 ms shift sound by -9 ms shift sound by -9 ms AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... AUDIO: droping audio data (/home/sijar/Videos/Webcam) 25371748K bytes free on a total of 39908968K (used: 36 %) treshold=51200K AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... Cap Video toggled: 0 Shuting Down IO Thread AUDIO: droping audio data stop= 4426644744000 start=4416533023000 VIDEO: 146 frames in 10111.000000 ms = 14.439719 fps Stoping audio stream Closing audio stream... 
close avi Last message repeated 145 times [libx264 @ 0x8cbd8b0]frame I:2 Avg QP:14.10 size: 24492 [libx264 @ 0x8cbd8b0]frame P:103 Avg QP:16.06 size: 20715 [libx264 @ 0x8cbd8b0]mb I I16..4: 48.4% 0.0% 51.6% [libx264 @ 0x8cbd8b0]mb P I16..4: 57.5% 0.0% 0.0% P16..4: 40.2% 0.0% 0.0% 0.0% 0.0% skip: 2.3% [libx264 @ 0x8cbd8b0]final ratefactor: 62.05 [libx264 @ 0x8cbd8b0]coded y,uvDC,uvAC intra: 79.7% 92.2% 68.4% inter: 62.4% 87.5% 48.0% [libx264 @ 0x8cbd8b0]i16 v,h,dc,p: 23% 17% 41% 19% [libx264 @ 0x8cbd8b0]i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 30% 24% 26% 2% 5% 3% 3% 3% 4% [libx264 @ 0x8cbd8b0]i8c dc,h,v,p: 53% 20% 23% 4% [libx264 @ 0x8cbd8b0]ref P L0: 63.0% 37.0% [libx264 @ 0x8cbd8b0]kb/s:-0.00 total frames encoded: 0 total audio frames encoded: 0 IO thread finished...OK IO Thread finished enabling controls Cap Video toggled: 1 (/home/sijar/Videos/Webcam) 25379744K bytes free on a total of 39908968K (used: 36 %) treshold=51200K using audio codec: 0x0055 Audio frame size is 1152 samples for selected codec IO thread started...OK [libx264 @ 0x8cfba20]using cpu capabilities: MMX2 SSE2 Cache64 [libx264 @ 0x8cfba20]profile Baseline, level 3.0 [libx264 @ 0x8cfba20]non-strictly-monotonic PTS shift sound by -236 ms shift sound by -236 ms shift sound by -236 ms (/home/sijar/Videos/Webcam) 25377044K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25373408K bytes free on a total of 39908968K (used: 36 %) treshold=51200K AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... (/home/sijar/Videos/Webcam) 25370696K bytes free on a total of 39908968K (used: 36 %) treshold=51200K AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... (/home/sijar/Videos/Webcam) 25367680K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25364052K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25360312K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25356628K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25352908K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25349316K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25345552K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25341828K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25338092K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25334412K bytes free on a total of 39908968K (used: 36 %) treshold=51200K Cap Video toggled: 0 Shuting Down IO Thread stop= 4708817235000 start=4578624714000 VIDEO: 1604 frames in 130192.000000 ms = 12.320265 fps Stoping audio stream Closing audio stream... 
close avi Last message repeated 1603 times [libx264 @ 0x8cfba20]frame I:16 Avg QP:14.78 size: 42627 [libx264 @ 0x8cfba20]frame P:1547 Avg QP:16.44 size: 28599 [libx264 @ 0x8cfba20]mb I I16..4: 21.6% 0.0% 78.4% [libx264 @ 0x8cfba20]mb P I16..4: 28.1% 0.0% 0.0% P16..4: 70.5% 0.0% 0.0% 0.0% 0.0% skip: 1.4% [libx264 @ 0x8cfba20]final ratefactor: 88.17 [libx264 @ 0x8cfba20]coded y,uvDC,uvAC intra: 74.4% 95.8% 83.2% inter: 75.2% 94.6% 69.2% [libx264 @ 0x8cfba20]i16 v,h,dc,p: 27% 17% 40% 16% [libx264 @ 0x8cfba20]i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 25% 25% 21% 3% 6% 4% 5% 4% 7% [libx264 @ 0x8cfba20]i8c dc,h,v,p: 61% 18% 18% 4% [libx264 @ 0x8cfba20]ref P L0: 64.0% 36.0% [libx264 @ 0x8cfba20]kb/s:-0.00 total frames encoded: 0 total audio frames encoded: 0 IO thread finished...OK IO Thread finished enabling controls Shuting Down Thread Thread terminated... cleaning Thread allocations: 100% SDL Quit Video Thread finished write /home/sijar/.guvcviewrc OK free audio mutex closed v4l2 strutures free controls free controls - vidState cleaned allocations - 100% Closing portaudio ...OK Closing GTK... OK

    Read the article

  • Cannot log in to the desktop on ubuntu 11.10?

    - by Jichao
    The problem is, I could log in under the terminal, i could ifup eth0, i could do anything I want in the terminal, but if I use ctrl+alt+f7 goto the gnome login screen, after I input the correct password, the system just send me back to same login screen again. I have created a new user, but it didn't work. I have change all the files under ~/ to jichao:jichao(which is my username) with chown -hR jichao:jichao /home/jichao, but it didn't work too. I searched the internet, somebody said I should see the logs under /var/log/gdm, but there is not a /var/log/gdm directory in my box. Here are the tail of files under /var/log/ tail X.org.log [ 3263.348] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so [ 3263.348] (**) Dell Dell USB Keyboard: always reports core events [ 3263.348] (**) Dell Dell USB Keyboard: Device: "/dev/input/event5" [ 3263.348] (--) Dell Dell USB Keyboard: Found keys [ 3263.348] (II) Dell Dell USB Keyboard: Configuring as keyboard [ 3263.348] (**) Option "config_info" "udev:/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0/input/input29/event5" [ 3263.348] (II) XINPUT: Adding extended input device "Dell Dell USB Keyboard" (type: KEYBOARD) [ 3263.348] (**) Option "xkb_rules" "evdev" [ 3263.348] (**) Option "xkb_model" "pc105" [ 3263.348] (**) Option "xkb_layout" "us" kern.log Mar 20 09:32:58 jichao-MS-730 kernel: [ 3182.701247] input: Dell Dell USB Keyboard as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0/input/input27 Mar 20 09:32:58 jichao-MS-730 kernel: [ 3182.701392] generic-usb 0003:413C:2003.0018: input,hidraw1: USB HID v1.10 Keyboard [Dell Dell USB Keyboard] on usb-0000:00:1d.0-1.4/input0 Mar 20 09:33:02 jichao-MS-730 kernel: [ 3186.642572] usb 2-1.3: new low speed USB device number 17 using ehci_hcd Mar 20 09:33:02 jichao-MS-730 kernel: [ 3186.741892] input: Microsoft Microsoft 5-Button Mouse with IntelliEye(TM) as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3/2-1.3:1.0/input/input28 Mar 20 09:33:02 jichao-MS-730 kernel: [ 3186.742080] generic-usb 0003:045E:0047.0019: input,hidraw2: USB HID v1.10 Mouse [Microsoft Microsoft 5-Button Mouse with IntelliEye(TM)] on usb-0000:00:1d.0-1.3/input0 Mar 20 09:33:27 jichao-MS-730 kernel: [ 3212.473901] usb 2-1.3: USB disconnect, device number 17 Mar 20 09:33:28 jichao-MS-730 kernel: [ 3212.702031] usb 2-1.4: USB disconnect, device number 16 Mar 20 09:34:08 jichao-MS-730 kernel: [ 3253.022655] usb 2-1.4: new low speed USB device number 18 using ehci_hcd Mar 20 09:34:08 jichao-MS-730 kernel: [ 3253.124278] input: Dell Dell USB Keyboard as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0/input/input29 Mar 20 09:34:08 jichao-MS-730 kernel: [ 3253.124423] generic-usb 0003:413C:2003.001A: input,hidraw1: USB HID v1.10 Keyboard [Dell Dell USB Keyboard] on usb-0000:00:1d.0-1.4/input0 Mar 20 09:33:02 jichao-MS-730 kernel: [ 3186.741892] input: Microsoft Microsoft 5-Button Mouse with IntelliEye(TM) as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3/2-1.3:1.0/input/input28 Mar 20 09:33:02 jichao-MS-730 kernel: [ 3186.742080] generic-usb 0003:045E:0047.0019: input,hidraw2: USB HID v1.10 Mouse [Microsoft Microsoft 5-Button Mouse with IntelliEye(TM)] on usb-0000:00:1d.0-1.3/input0 syslog Mar 20 09:33:02 jichao-MS-730 mtp-probe: bus: 2, device: 17 was not an MTP device Mar 20 09:33:27 jichao-MS-730 kernel: [ 3212.473901] usb 2-1.3: USB disconnect, device number 17 Mar 20 09:33:28 jichao-MS-730 kernel: [ 3212.702031] usb 2-1.4: USB disconnect, device number 16 Mar 20 09:34:08 jichao-MS-730 kernel: [ 
3253.022655] usb 2-1.4: new low speed USB device number 18 using ehci_hcd Mar 20 09:34:08 jichao-MS-730 mtp-probe: checking bus 2, device 18: "/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4" Mar 20 09:34:08 jichao-MS-730 mtp-probe: bus: 2, device: 18 was not an MTP device Mar 20 09:34:08 jichao-MS-730 kernel: [ 3253.124278] input: Dell Dell USB Keyboard as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0/input/input29 Mar 20 09:34:08 jichao-MS-730 kernel: [ 3253.124423] generic-usb 0003:413C:2003.001A: input,hidraw1: USB HID v1.10 Keyboard [Dell Dell USB Keyboard] on usb-0000:00:1d.0-1.4/input0 auth.log Mar 20 09:18:52 jichao-MS-730 lightdm: pam_ck_connector(lightdm-autologin:session): nox11 mode, ignoring PAM_TTY :0 Mar 20 09:18:53 jichao-MS-730 lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "jichao" Mar 20 09:18:53 jichao-MS-730 dbus[835]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.240" (uid=104 pid=6457 comm="/usr/lib/indicator-datetime/indicator-datetime-ser") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.11" (uid=0 pid=1156 comm="/usr/sbin/console-kit-daemon --no-daemon ") Mar 20 09:19:38 jichao-MS-730 sudo: jichao : TTY=tty6 ; PWD=/home ; USER=root ; COMMAND=/bin/chown -hR jichao:jichao jicha Mar 20 09:19:39 jichao-MS-730 sudo: jichao : TTY=tty6 ; PWD=/home ; USER=root ; COMMAND=/bin/chown -hR jichao:jichao jichao Mar 20 09:20:10 jichao-MS-730 lightdm: pam_unix(lightdm-autologin:session): session closed for user lightdm Mar 20 09:20:11 jichao-MS-730 lightdm: pam_unix(lightdm-autologin:session): session opened for user lightdm by (uid=0) Mar 20 09:20:11 jichao-MS-730 lightdm: pam_ck_connector(lightdm-autologin:session): nox11 mode, ignoring PAM_TTY :0 Mar 20 09:20:12 jichao-MS-730 lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "jichao" Mar 20 09:20:12 jichao-MS-730 dbus[835]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.247" (uid=104 pid=6572 comm="/usr/lib/indicator-datetime/indicator-datetime-ser") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.11" (uid=0 pid=1156 comm="/usr/sbin/console-kit-daemon --no-daemon ") It seems that my .xsession-errors does not grow since yesterday. Here is my .xsession-error: (gnome-settings-daemon:1550): Gdk-WARNING **: The program 'gnome-settings-daemon' received an X Window System error. This probably reflects a bug in the program. The error was 'BadWindow (invalid Window parameter)'. (Details: serial 26702 error_code 3 request_code 2 minor_code 0) (Note to programmers: normally, X errors are reported asynchronously; that is, you will receive the error a while after causing it. To debug your program, run it with the --sync command line option to change this behavior. You can then get a meaningful backtrace from your debugger if you break on the gdk_x_error() function.) 
(nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed WARN 2012-03-17 19:28:46 glib <unknown>:0 Unable to fetch children: Method "Children" with signature "" on interface "org.ayatana.bamf.view" doesn't exist WARN 2012-03-17 19:28:46 glib <unknown>:0 Unable to fetch children: Method "Children" with signature "" on interface "org.ayatana.bamf.view" doesn't exist (yunio:2430): Gtk-WARNING **: ??????????????:“pixmap”, (yunio:2430): Gtk-WARNING **: ??????????????:“pixmap”, (polkit-gnome-authentication-agent-1:1601): Gtk-WARNING **: ??????????????:“pixmap”, (yunio:2430): Gtk-WARNING **: ??????????????:“pixmap”, (yunio:2430): Gtk-WARNING **: ??????????????:“pixmap”, (polkit-gnome-authentication-agent-1:1601): Gtk-WARNING **: ??????????????:“pixmap”, (polkit-gnome-authentication-agent-1:1601): Gtk-WARNING **: ??????????????:“pixmap”, (polkit-gnome-authentication-agent-1:1601): Gtk-WARNING **: ??????????????:“pixmap”, /usr/share/system-config-printer/applet.py:336: GtkWarning: ??????????????:“pixmap”, self.loop.run () (unity-window-decorator:1652): Gtk-WARNING **: ??????????????:“pixmap”, (unity-window-decorator:1652): Gtk-WARNING **: ??????????????:“pixmap”, (unity-window-decorator:1652): Gtk-WARNING **: ??????????????:“pixmap”, (unity-window-decorator:1652): Gtk-WARNING **: ??????????????:“pixmap”, common-plugin-Message: checking whether we have a device for 4: yes 
common-plugin-Message: checking whether we have a device for 5: yes common-plugin-Message: checking whether we have a device for 6: yes common-plugin-Message: checking whether we have a device for 7: yes common-plugin-Message: checking whether we have a device for 10: yes common-plugin-Message: checking whether we have a device for 8: yes common-plugin-Message: checking whether we have a device for 9: yes (gnome-settings-daemon:13791): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed [1331983727,000,xklavier.c:xkl_engine_start_listen/] The backend does not require manual layout management - but it is provided by the application ** (gnome-fallback-mount-helper:1584): DEBUG: ConsoleKit session is active 0 (gnome-fallback-mount-helper:1584): Gdk-WARNING **: gnome-fallback-mount-helper: Fatal IO error 11 (???????) on X server :0. (gdu-notification-daemon:1708): Gdk-WARNING **: gdu-notification-daemon: Fatal IO error 11 (???????) on X server :0. unity-window-decorator: Fatal IO error 11 (???????) on X server :0.0. (bluetooth-applet:1583): Gdk-WARNING **: bluetooth-applet: Fatal IO error 11 (???????) on X server :0. (nm-applet:1596): Gdk-WARNING **: nm-applet: Fatal IO error 11 (???????) on X server :0. (nautilus:3106): IBUS-WARNING **: _connection_closed_cb: Underlying GIOStream returned 0 bytes on an async read (update-notifier:1821): Gdk-WARNING **: update-notifier: Fatal IO error 11 (???????) on X server :0. applet.py: Fatal IO error 11 (???????) on X server :0. (nautilus:3106): Gdk-WARNING **: nautilus: Fatal IO error 11 (???????) on X server :0. Could you help me, Thanks.
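    A few checks that often explain this kind of login loop (a sketch, not a diagnosis; the username jichao is taken from the question, everything else is generic). Note that 11.10 uses LightDM, which is why there is no /var/log/gdm directory, and that a root-owned ~/.Xauthority or ~/.ICEauthority, or a full /home or /tmp, will bounce you straight back to the greeter:

        # dot-files that must be owned by the user, not root
        ls -la ~/.Xauthority ~/.ICEauthority ~/.xsession-errors
        # if any of them belong to root, fix the ownership
        sudo chown jichao:jichao ~/.Xauthority ~/.ICEauthority
        # a full filesystem also prevents the session from starting
        df -h /home /tmp
        # LightDM logs live here instead of /var/log/gdm
        sudo ls -l /var/log/lightdm/
        sudo tail -n 50 /var/log/lightdm/lightdm.log /var/log/lightdm/x-0-greeter.log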

    Read the article

  • ASMLib

    - by wcoekaer
    Oracle ASMlib on Linux has been a topic of discussion a number of times since it was released way back when in 2004. There is a lot of confusion around it and certainly a lot of misinformation out there for no good reason. Let me try to give a bit of history around Oracle ASMLib. Oracle ASMLib was introduced at the time Oracle released Oracle Database 10g R1. 10gR1 introduced a very cool important new features called Oracle ASM (Automatic Storage Management). A very simplistic description would be that this is a very sophisticated volume manager for Oracle data. Give your devices directly to the ASM instance and we manage the storage for you, clustered, highly available, redundant, performance, etc, etc... We recommend using Oracle ASM for all database deployments, single instance or clustered (RAC). The ASM instance manages the storage and every Oracle server process opens and operates on the storage devices like it would open and operate on regular datafiles or raw devices. So by default since 10gR1 up to today, we do not interact differently with ASM managed block devices than we did before with a datafile being mapped to a raw device. All of this is without ASMLib, so ignore that one for now. Standard Oracle on any platform that we support (Linux, Windows, Solaris, AIX, ...) does it the exact same way. You start an ASM instance, it handles storage management, all the database instances use and open that storage and read/write from/to it. There are no extra pieces of software needed, including on Linux. ASM is fully functional and selfcontained without any other components. In order for the admin to provide a raw device to ASM or to the database, it has to have persistent device naming. If you booted up a server where a raw disk was named /dev/sdf and you give it to ASM (or even just creating a tablespace without asm on that device with datafile '/dev/sdf') and next time you boot up and that device is now /dev/sdg, you end up with an error. Just like you can't just change datafile names, you can't change device filenames without telling the database, or ASM. persistent device naming on Linux, especially back in those days ways to say it bluntly, a nightmare. In fact there were a number of issues (dating back to 2004) : Linux async IO wasn't pretty persistent device naming including permissions (had to be owned by oracle and the dba group) was very, very difficult to manage system resource usage in terms of open file descriptors So given the above, we tried to find a way to make this easier on the admins, in many ways, similar to why we started working on OCFS a few years earlier - how can we make life easier for the admins on Linux. A feature of Oracle ASM is the ability for third parties to write an extension using what's called ASMLib. It is possible for any third party OS or storage vendor to write a library using a specific Oracle defined interface that gets used by the ASM instance and by the database instance when available. This interface offered 2 components : Define an IO interface - allow any IO to the devices to go through ASMLib Define device discovery - implement an external way of discovering, labeling devices to provide to ASM and the Oracle database instance This is similar to a library that a number of companies have implemented over many years called libODM (Oracle Disk Manager). 
ODM was specified many years before we introduced ASM and allowed third party vendors to implement their own IO routines so that the database would use this library if installed and make use of the library open/read/write/close,.. routines instead of the standard OS interfaces. PolyServe back in the day used this to optimize their storage solution, Veritas used (and I believe still uses) this for their filesystem. It basically allowed, in particular, filesystem vendors to write libraries that could optimize access to their storage or filesystem.. so ASMLib was not something new, it was basically based on the same model. You have libodm for just database access, you have libasm for asm/database access. Since this library interface existed, we decided to do a reference implementation on Linux. We wrote an ASMLib for Linux that could be used on any Linux platform and other vendors could see how this worked and potentially implement their own solution. As I mentioned earlier, ASMLib and ODMLib are libraries for third party extensions. ASMLib for Linux, since it was a reference implementation implemented both interfaces, the storage discovery part and the IO part. There are 2 components : Oracle ASMLib - the userspace library with config tools (a shared object and some scripts) oracleasm.ko - a kernel module that implements the asm device for /dev/oracleasm/* The userspace library is a binary-only module since it links with and contains Oracle header files but is generic, we only have one asm library for the various Linux platforms. This library is opened by Oracle ASM and by Oracle database processes and this library interacts with the OS through the asm device (/dev/asm). It can install on Oracle Linux, on SuSE SLES, on Red Hat RHEL,.. The library itself doesn't actually care much about the OS version, the kernel module and device cares. The support tools are simple scripts that allow the admin to label devices and scan for disks and devices. This way you can say create an ASM disk label foo on, currently /dev/sdf... So if /dev/sdf disappears and next time is /dev/sdg, we just scan for the label foo and we discover it as /dev/sdg and life goes on without any worry. Also, when the database needs access to the device, we don't have to worry about file permissions or anything it will be taken care of. So it's a convenience thing. The kernel module oracleasm.ko is a Linux kernel module/device driver. It implements a device /dev/oracleasm/* and any and all IO goes through ASMLib - /dev/oracleasm. This kernel module is obviously a very specific Oracle related device driver but it was released under the GPL v2 so anyone could easily build it for their Linux distribution kernels. Advantages for using ASMLib : A good async IO interface for the database, the entire IO interface is based on an optimal ASYNC model for performance A single file descriptor per Oracle process, not one per device or datafile per process reducing # of open filehandles overhead Device scanning and labeling built-in so you do not have to worry about messing with udev or devlabel, permissions or the likes which can be very complex and error prone. Just like with OCFS and OCFS2, each kernel version (major or minor) has to get a new version of the device drivers. We started out building the oracleasm kernel module rpms for many distributions, SLES (in fact in the early days still even for this thing called United Linux) and RHEL. 
The driver didn't make sense to get pushed into upstream Linux because it's unique and specific to the Oracle database. As it takes a huge effort in terms of build infrastructure and QA and release management to build kernel modules for every architecture, every linux distribution and every major and minor version we worked with the vendors to get them to add this tiny kernel module to their infrastructure. (60k source code file). The folks at SuSE understood this was good for them and their customers and us and added it to SLES. So every build coming from SuSE for SLES contains the oracleasm.ko module. We weren't as successful with other vendors so for quite some time we continued to build it for RHEL and of course as we introduced Oracle Linux end of 2006 also for Oracle Linux. With Oracle Linux it became easy for us because we just added the code to our build system and as we churned out Oracle Linux kernels whether it was for a public release or for customers that needed a one off fix where they also used asmlib, we didn't have to do any extra work it was just all nicely integrated. With the introduction of Oracle Linux's Unbreakable Enterprise Kernel and our interest in being able to exploit ASMLib more, we started working on a very exciting project called Data Integrity. Oracle (Martin Petersen in particular) worked for many years with the T10 standards committee and storage vendors and implemented Linux kernel support for DIF/DIX, data protection in the Linux kernel, note to those that wonder, yes it's all in mainline Linux and under the GPL. This basically gave us all the features in the Linux kernel to checksum a data block, send it to the storage adapter, which can then validate that block and checksum in firmware before it sends it over the wire to the storage array, which can then do another checksum and to the actual DISK which does a final validation before writing the block to the physical media. So what was missing was the ability for a userspace application (read: Oracle RDBMS) to write a block which then has a checksum and validation all the way down to the disk. application to disk. Because we have ASMLib we had an entry into the Linux kernel and Martin added support in ASMLib (kernel driver + userspace) for this functionality. Now, this is all based on relatively current Linux kernels, the oracleasm kernel module depends on the main kernel to have support for it so we can make use of it. Thanks to UEK and us having the ability to ship a more modern, current version of the Linux kernel we were able to introduce this feature into ASMLib for Linux from Oracle. This combined with the fact that we build the asm kernel module when we build every single UEK kernel allowed us to continue improving ASMLib and provide it to our customers. So today, we (Oracle) provide Oracle ASMLib for Oracle Linux and in particular on the Unbreakable Enterprise Kernel. We did the build/testing/delivery of ASMLib for RHEL until RHEL5 but since RHEL6 decided that it was too much effort for us to also maintain all the build and test environments for RHEL and we did not have the ability to use the latest kernel features to introduce the Data Integrity features and we didn't want to end up with multiple versions of asmlib as maintained by us. SuSE SLES still builds and comes with the oracleasm module and they do all the work and RHAT it certainly welcome to do the same. They don't have to rebuild the userspace library, it's really about the kernel module. 
And finally, to re-iterate a few important things: Oracle ASM does not in any way require ASMLib to function completely. ASMLib is a small set of extensions, in particular to make device management easier, but there are no extra features exposed through Oracle ASM with ASMLib enabled or disabled. Often customers confuse ASMLib with ASM; again, ASM exists on every Oracle-supported OS and on every supported Linux OS (SLES, RHEL, OL) without ASMLib. The Oracle ASMLib userspace is available from OTN, and the kernel module is shipped along with OL/UEK for every build and by SuSE for SLES for every one of their builds. The ASMLib kernel module was built by us for RHEL4 and RHEL5, but we do not build it for RHEL6, nor for the OL6 RHCK kernel - only for UEK. ASMLib for Linux is/was a reference implementation for any third-party vendor to be able to offer, if they want to, their own version for their own OS or storage. ASMLib as provided by Oracle for Linux continues to be enhanced and evolve, and for the kernel module we use UEK as the base OS kernel. Hope this helps.
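    To make the labeling and scanning workflow described above concrete, here is a minimal sketch using the oracleasm support tools; the disk label and partition name are illustrative, and exact command availability can vary with the oracleasm-support version:

        # one-time setup: pick the owner/group for the ASM devices and load the driver
        sudo oracleasm configure -i
        sudo oracleasm init
        # write an ASM label on a partition; ASM will find it by label, not by /dev name
        sudo oracleasm createdisk DATA1 /dev/sdf1
        # after a reboot, or on the other cluster nodes, rediscover the labeled disks
        sudo oracleasm scandisks
        sudo oracleasm listdisks
        # the labeled devices appear here, already owned by the right user/group,
        # and can be handed to the ASM instance as its discovery path
        ls -l /dev/oracleasm/disks/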

    Read the article

  • Can Clojure's thread-based agents handle c10k performance?

    - by elliot42
    I'm writing a c10k-style service and am trying to evaluate Clojure's performance. Can Clojure agents handle this scale of concurrency with its thread-based agents? Other high performance systems seem to be moving towards async-IO/events/greenlets, albeit at a seemingly higher complexity cost. Suppose there are 10,000 clients connected, sending messages that should be appended to 1,000 local files--the Clojure service is trying to write to as many files in parallel as it can, while not letting any two separate requests mangle the same single file by writing at the same time. Clojure agents are extremely elegant conceptually--they would allow separate files to be written independently and asynchronously, while serializing (in the database sense) multiple requests to write to the same file. My understanding is that agents work by starting a thread for each operation (assume we are IO-bound and using send-off)--so in this case is it correct that it would start 1,000+ threads? Can current-day systems handle this number of threads efficiently? Most of them should be IO-bound and sleeping most of the time, but I presume there would still be a context-switching penalty that is theoretically higher than async-IO/event-based systems (e.g. Erlang, Go, node.js). If the Clojure solution can handle the performance, it seems like the most elegant thing to code. However if it can't handle the performance then something like Erlang or Go's lightweight processes might be preferable, since they are designed to have tens of thousands of them spawned at once, and are only moderately more complex to implement. Has anyone approached this problem in Clojure or compared to these other platforms? (Thanks for your thoughts!)
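    For reference, a minimal sketch of the per-file-agent idea described above (names are illustrative, plain blocking file IO assumed): one agent per file serializes writers to that file, while send-off runs the blocking writes on its unbounded thread pool.

        (def file-agents (atom {}))  ; path -> agent guarding that file

        (defn agent-for [path]
          ;; create the agent for this path once, reuse it afterwards
          (or (get @file-agents path)
              (get (swap! file-agents
                          (fn [m] (if (contains? m path) m (assoc m path (agent nil)))))
                   path)))

        (defn append-line! [path line]
          ;; actions sent to one agent run one at a time, so writes to the same
          ;; file never interleave; different files proceed independently
          (send-off (agent-for path)
                    (fn [_] (spit path (str line \newline) :append true))))

    Whether the one-thread-per-blocked-action behaviour of the send-off pool holds up with 1,000 concurrently busy files is exactly the open question; the sketch only shows that the per-file serialization itself is essentially free to express.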

    Read the article

  • Importing tab delimited file into array in Visual Basic 2013 [migrated]

    - by JaceG
    I am needing to import a tab delimited text file that has 11 columns and an unknown number of rows (always minimum 3 rows). I would like to import this text file as an array and be able to call data from it as needed, throughout my project. And then, to make things more difficult, I need to replace items in the array, and even add more rows to it as the project goes on (all at runtime). Hopefully someone can suggest code corrections or useful methods. I'm hoping to use something like the array style sMyStrings(3,2), which I believe would be the easiest way to control my data. Any help is gladly appreciated, and worthy of a slab of beer. Here's the coding I have so far:

        Imports System.IO
        Imports Microsoft.VisualBasic.FileIO

        Public Class Main
            Dim strReadLine As String

            Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
                Dim sReader As IO.StreamReader = Nothing
                Dim sRawString As String = Nothing
                Dim sMyStrings() As String = Nothing
                Dim intCount As Integer = -1
                Dim intFullLoop As Integer = 0
                If IO.File.Exists("C:\MyProject\Hardware.txt") Then ' Make sure the file exists
                    sReader = New IO.StreamReader("C:\MyProject\Hardware.txt")
                Else
                    MsgBox("File doesn't exist.", MsgBoxStyle.Critical, "Error")
                    End
                End If
                Do While sReader.Peek >= 0 ' Make sure you can read beyond the current position
                    sRawString = sReader.ReadLine() ' Read the current line
                    sMyStrings = sRawString.Split(New Char() {Chr(9)}) ' Separate values and store in a string array
                    For Each s As String In sMyStrings ' Loop through the string array
                        intCount = intCount + 1 ' Increment
                        If TextBox1.Text <> "" Then TextBox1.Text = TextBox1.Text & vbCrLf ' Add line feed
                        TextBox1.Text = TextBox1.Text & s ' Add line to debug textbox
                        If intFullLoop > 14 And intCount > -1 And CBool((intCount - 0) / 11 Mod 0) Then
                            cmbSelectHinge.Items.Add(sMyStrings(intCount))
                        End If
                    Next
                    intCount = -1
                    intFullLoop = intFullLoop + 1
                Loop
            End Sub
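    Not a full answer, but a sketch of one way to hold the data so it can be read, replaced and extended at runtime, using the same file path as above; a List(Of String()) gives the row/column style access the question asks for, via rows(3)(2) instead of sMyStrings(3,2):

        ' load every row of the tab-delimited file into a growable list of string arrays
        Dim rows As New List(Of String())
        For Each line As String In IO.File.ReadAllLines("C:\MyProject\Hardware.txt")
            rows.Add(line.Split(ControlChars.Tab))
        Next

        Dim cell As String = rows(3)(2)   ' read row 3, column 2
        rows(3)(2) = "new value"          ' replace an item in place
        rows.Add(New String(10) {})       ' append a new 11-column row at runtime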

    Read the article

  • Using OSW (OSWatcher Black Box)

    - by Feng
    OSWatcher Black Box (OSW for short) is a small tool provided by Oracle that periodically runs standard OS commands and archives their output, giving you a history of OS performance data such as CPU, memory, swap, network IO and disk IO.
    +++ Why use OSW? OSW is by no means the only tool of its kind; similar data can be collected with mrtg, cacti, sar, nmon or Enterprise Manager Grid Control. Its advantages are that it is tiny and trivial to install, its own overhead (CPU, memory) is very low, and it keeps a history on the OS itself, so that when a problem has already happened you can go back through the data and look for the root cause instead of waiting for the problem to reproduce. Typical cases where the OSW history helps: 1. when OS resources are exhausted, it shows whether the pressure came from the database or from something else on the machine; 2. for Oracle database performance problems, it shows whether the server was short of CPU or memory or was swapping at the time, which can then be put next to the latch/mutex waits in the AWR report; 3. the AWR report only covers the database side, so the CPU, memory, swap and disk IO history from OSW completes the picture behind the top-5 wait events; 4. for ORA-04030 errors in background processes such as CJQ0, P00X or J00X, it shows the memory situation on the server at that moment; 5. when a server process appears to hang, it shows whether the whole machine was starved of CPU or memory; 6. the same goes for a hung listener; 7. for a login storm, ASH and AWR alone may not tell the whole story, while the ps snapshots collected by OSW show how the number of oracle server processes changed over time. In short, OSW records the OS-side history that the database-side reports cannot, so DBAs are encouraged to keep OSW running on their database servers.
    +++ What OSW does not do: it simply runs standard OS commands (ps, vmstat, netstat, mpstat, top) on a schedule and archives their output covering CPU, memory, disk IO and disk space; it does not monitor thresholds or raise alerts, for example when CPU usage stays above 90% or a filesystem is running out of free space.
    +++ How to install and run OSW on UNIX/Linux: 1. download OSW as described in MOS note 301137.1; 2. unpack it in any directory (/tmp, for example) - root is not required: $ tar xvf osw.tar 3. start it with $ nohup ./startOSWbb.sh 60 48 gzip & which samples every 60 seconds, keeps 48 hours of data and gzips the archived files; 4. stop it with $ ./stopOSWbb.sh The collected data ends up in the archive directory under the OSW installation.

    Read the article

  • Where's my memory?! Nginx + PHP-FPM front end webserver slows to a crawl...

    - by incredimike
    I'm not sure if I have a problem with a memory leak (as my hosting company suggests), or if we both need to read http://linuxatemyram.com. Maybe you clever people can help us out? This is a front-end webserver VM running essentially only nginx & php-fpm on RHEL 5.5. This server is powering Magento, a PHP eCommerce thinggy. The server is running in a shared environment, but we're changing that soon. Anyway.. after a reboot the server runs just fine, but within a day it will grind itself into nothingness. Pages will take literally 2 minutes to load, CPU spikes like crazy, etc.. The console is even sluggish when I SSH in. It's like my whole server is being brought to its knees. I've also been monitoring the DB server via top and tcpdumping incoming traffic. The DB stays idle for a good portion of that "slow" load time. When i start seeing queries coming from the front-end server, the page loads soon afterward. Here are some stats after me logging in during a slow-down, after restarting php-fpm: [mike@front01 ~]$ free -m total used free shared buffers cached Mem: 5963 5217 745 0 192 314 -/+ buffers/cache: 4711 1252 Swap: 4047 4 4042 [mike@front01 ~]$ top top - 11:38:55 up 2 days, 1:01, 3 users, load average: 0.06, 0.17, 0.21 Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie Cpu0 : 0.0%us, 0.3%sy, 0.0%ni, 99.3%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st Cpu1 : 0.3%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu3 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 6106800k total, 5361288k used, 745512k free, 199960k buffers Swap: 4144728k total, 4976k used, 4139752k free, 328480k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 31806 apache 15 0 601m 120m 37m S 0.0 2.0 0:22.23 php-fpm 31805 apache 15 0 549m 66m 31m S 0.0 1.1 0:14.54 php-fpm 31809 apache 16 0 547m 65m 32m S 0.0 1.1 0:12.84 php-fpm 32285 apache 15 0 546m 63m 33m S 0.0 1.1 0:09.22 php-fpm 32373 apache 15 0 546m 62m 32m S 0.0 1.1 0:09.66 php-fpm 31808 apache 16 0 543m 60m 35m S 0.0 1.0 0:18.93 php-fpm 31807 apache 16 0 533m 49m 30m S 0.0 0.8 0:08.93 php-fpm 32092 apache 15 0 535m 48m 27m S 0.0 0.8 0:06.67 php-fpm 4392 root 18 0 194m 10m 7184 S 0.0 0.2 0:06.96 cvd 4064 root 15 0 154m 8304 4220 S 0.0 0.1 3:55.57 snmpd 4394 root 15 0 119m 5660 2944 S 0.0 0.1 0:02.84 EvMgrC 31804 root 15 0 519m 5180 932 S 0.0 0.1 0:00.46 php-fpm 4138 ntp 15 0 23396 5032 3904 S 0.0 0.1 0:02.38 ntpd 643 nginx 15 0 95276 4408 1524 S 0.0 0.1 0:01.15 nginx 5131 root 16 0 90128 3340 2600 S 0.0 0.1 0:01.41 sshd 28467 root 15 0 90128 3340 2600 S 0.0 0.1 0:00.35 sshd 32602 root 16 0 90128 3332 2600 S 0.0 0.1 0:00.36 sshd 1614 root 16 0 90128 3308 2588 S 0.0 0.1 0:00.02 sshd 2817 root 5 -10 7216 3140 1724 S 0.0 0.1 0:03.80 iscsid 4161 root 15 0 66948 2340 800 S 0.0 0.0 0:10.35 sendmail 1617 nicole 17 0 53876 2000 1516 S 0.0 0.0 0:00.02 sftp-server ... Is there anything else I should be looking at, or any more information that might be useful? I'm just a developer, but the slowdowns on this system worry me and make it hard to do my work.. Help me out, ServerFault!
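    A couple of quick checks that separate "Linux ate my RAM" cache usage from real PHP-FPM growth (a sketch; the php-fpm config file locations below are typical but vary with how it was installed on RHEL 5.5):

        # the number that matters is the -/+ buffers/cache line, not "used"
        free -m
        # total resident memory of all php-fpm workers, in MB
        ps -eo rss,comm | awk '/php-fpm/ {sum += $1} END {print sum/1024 " MB"}'
        # worker count and recycling are set per pool: pm.max_children caps the total,
        # pm.max_requests recycles leaky workers, request_slowlog_timeout logs requests
        # that hang (for example, waiting on the remote DB)
        grep -R -E 'pm\.max_children|pm\.max_requests|request_slowlog_timeout' /etc/php-fpm.conf /etc/php-fpm.d/ 2>/dev/null

    Multiplying the worst-case per-worker resident size from the top output above by pm.max_children gives the memory ceiling to compare against the machine's total.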

    Read the article

  • 1600+ 'postfix-queue' processes - OK to have this many?

    - by atomicguava
    I have a Plesk 9.5.4 CentOS server running Postfix. I had been having massive problems with the mailq being full of 'double-bounce' email messages containing errors relating to 'Queue File Write Error', but I believe these are now fixed thanks to this thread. My new problem is that when I run top, I can see lots of processes called 'postfix-queue' and have fairly high load: top - 13:59:44 up 6 days, 21:14, 1 user, load average: 2.33, 2.19, 1.96 Tasks: 1743 total, 1 running, 1742 sleeping, 0 stopped, 0 zombie Cpu(s): 5.1%us, 8.8%sy, 0.0%ni, 85.3%id, 0.8%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 3145728k total, 1950640k used, 1195088k free, 0k buffers Swap: 0k total, 0k used, 0k free, 0k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1324 apache 16 0 344m 33m 5664 S 21.7 1.1 0:03.17 httpd 32443 apache 15 0 350m 36m 6864 S 14.4 1.2 0:13.83 httpd 1678 root 15 0 13948 2568 952 R 2.0 0.1 0:00.37 top 1890 mysql 15 0 689m 318m 7600 S 1.0 10.4 219:45.23 mysqld 1394 apache 15 0 352m 41m 5972 S 0.7 1.3 0:03.91 httpd 1369 apache 15 0 344m 33m 5444 S 0.3 1.1 0:02.03 httpd 1592 apache 15 0 349m 37m 5912 S 0.3 1.2 0:02.52 httpd 1633 apache 15 0 336m 20m 1828 S 0.3 0.7 0:00.01 httpd 1952 root 19 0 335m 28m 10m S 0.3 0.9 1:35.41 httpd 1 root 15 0 10304 732 612 S 0.0 0.0 0:04.41 init 1034 mhandler 15 0 11520 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1036 mhandler 15 0 11516 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1041 mhandler 17 0 11516 1156 884 S 0.0 0.0 0:00.00 postfix-queue 1043 mhandler 15 0 11512 1116 860 S 0.0 0.0 0:00.00 postfix-queue 1063 mhandler 16 0 11516 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1068 mhandler 15 0 11516 1128 860 S 0.0 0.0 0:00.00 postfix-queue 1071 mhandler 17 0 11512 1152 884 S 0.0 0.0 0:00.00 postfix-queue 1072 mhandler 15 0 11512 1116 860 S 0.0 0.0 0:00.00 postfix-queue 1081 mhandler 16 0 11516 1156 884 S 0.0 0.0 0:00.00 postfix-queue 1082 mhandler 15 0 11512 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1089 popuser 15 0 33892 1972 1200 S 0.0 0.1 0:00.02 pop3d 1116 mhandler 16 0 11516 1164 884 S 0.0 0.0 0:00.00 postfix-queue 1117 mhandler 15 0 11516 1124 860 S 0.0 0.0 0:00.00 postfix-queue 1120 mhandler 16 0 11516 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1121 mhandler 15 0 11512 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1130 mhandler 17 0 11516 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1131 mhandler 15 0 11516 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1149 root 17 -4 12572 680 356 S 0.0 0.0 0:00.00 udevd 1181 mhandler 16 0 11516 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1183 mhandler 15 0 11512 1116 860 S 0.0 0.0 0:00.00 postfix-queue 1224 mhandler 16 0 11516 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1225 mhandler 15 0 11516 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1228 apache 15 0 345m 34m 5472 S 0.0 1.1 0:04.64 httpd 1241 mhandler 16 0 11516 1156 884 S 0.0 0.0 0:00.00 postfix-queue 1242 mhandler 15 0 11512 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1251 mhandler 17 0 11516 1156 884 S 0.0 0.0 0:00.00 postfix-queue 1252 mhandler 15 0 11516 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1258 apache 15 0 349m 37m 5444 S 0.0 1.2 0:01.28 httpd When I run ps -Al | grep -c postfix-queue it returns 1618! My question is this: is this normal or is there something else going wrong with Postfix? Right now, if I run mailq it is empty, and qshape deferred / qshape active are empty too. Thanks in advance for your help.
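    The top output shows the processes running as the mhandler user, which is the Plesk mail-handler account rather than Postfix itself, so walking from one process up to its parent usually shows what keeps spawning them. A generic sketch:

        # count them (as in the question) and pick one to inspect
        ps -Al | grep -c postfix-queue
        pid=$(pgrep -o postfix-queue)          # oldest matching process
        # when it started and what its parent is
        ps -o pid,ppid,user,etime,cmd -p "$pid"
        ps -o pid,user,cmd -p "$(ps -o ppid= -p "$pid")"
        # confirm whether anything is actually stuck in the queue
        mailq | tail -n 1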

    Read the article

  • Exception with RubyAMF and Ruby 1.9 although code works

    - by Tam
    I'm getting an exception with RubyAMF using Ruby 1.9 and Rails 2.3.5. Although code afterward executes normally I'm not very comfortable with seeing such exception in the log file. Do you know what is causing it: >>>>>>>> RubyAMF >>>>>>>>> #<RubyAMF::Actions::PrepareAction:0x0000010139ff48> took: 0.00020 secs >>>>>>>> RubyAMF >>>>>>>>> #<RubyAMF::Actions::RailsInvokeAction:0x0000010139ff10> took: 0.29973 secs You have a nil object when you didn't expect it! You might have expected an instance of Array. The error occurred while evaluating nil.include? /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/attribute_methods.rb:142:in `create_time_zone_conversion_attribute?' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/attribute_methods.rb:75:in `block in define_attribute_methods' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/attribute_methods.rb:71:in `each' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/attribute_methods.rb:71:in `define_attribute_methods' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/attribute_methods.rb:242:in `method_missing' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/base.rb:2832:in `hash' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:366:in `hash' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:366:in `hash' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:366:in `[]=' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:366:in `store_object' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:234:in `write_amf3_object' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:154:in `write_amf3' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:78:in `write' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:70:in `block in run' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:56:in `upto' /Users/tammam56/lal/vendor/plugins/ruby_amf/io/amf_serializer.rb:56:in `run' /Users/tammam56/lal/vendor/plugins/ruby_amf/app/filters.rb:91:in `block in run' /Users/tammam56/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/benchmark.rb:309:in `realtime' /Users/tammam56/lal/vendor/plugins/ruby_amf/app/filters.rb:91:in `run' /Users/tammam56/lal/vendor/plugins/ruby_amf/app/filters.rb:12:in `block in run' /Users/tammam56/lal/vendor/plugins/ruby_amf/app/filters.rb:11:in `each' /Users/tammam56/lal/vendor/plugins/ruby_amf/app/filters.rb:11:in `run' /Users/tammam56/lal/vendor/plugins/ruby_amf/app/rails_gateway.rb:28:in `service' /Users/tammam56/lal/app/controllers/rubyamf_controller.rb:19:in `gateway' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/base.rb:1331:in `perform_action' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/filters.rb:617:in `call_filters' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/filters.rb:610:in `perform_action_with_filters' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/benchmarking.rb:68:in `block in perform_action_with_benchmark' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/core_ext/benchmark.rb:17:in `block in ms' /Users/tammam56/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/benchmark.rb:309:in `realtime' 
/Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/core_ext/benchmark.rb:17:in `ms' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/benchmarking.rb:68:in `perform_action_with_benchmark' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/rescue.rb:160:in `perform_action_with_rescue' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/flash.rb:146:in `perform_action_with_flash' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/base.rb:532:in `process' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/filters.rb:606:in `process_with_filters' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/base.rb:391:in `process' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/base.rb:386:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/routing/route_set.rb:437:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:87:in `dispatch' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:121:in `_call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:130:in `block in build_middleware_stack' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:29:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:29:in `block in call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/query_cache.rb:34:in `cache' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:9:in `cache' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:28:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/string_coercion.rb:25:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/head.rb:9:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/methodoverride.rb:24:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/params_parser.rb:15:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/session/cookie_store.rb:93:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/failsafe.rb:26:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/lock.rb:11:in `block in call' <internal:prelude>:8:in `synchronize' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:114:in `block in call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/reloader.rb:34:in `run' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:108:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/rails/rack/static.rb:31:in `call' 
/Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/urlmap.rb:46:in `block in call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/urlmap.rb:40:in `each' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/urlmap.rb:40:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/rails/rack/log_tailer.rb:17:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/content_length.rb:13:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/chunked.rb:15:in `call' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/handler/mongrel.rb:64:in `process' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/mongrel-1.1.5/lib/mongrel.rb:159:in `block in process_client' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/mongrel-1.1.5/lib/mongrel.rb:158:in `each' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/mongrel-1.1.5/lib/mongrel.rb:158:in `process_client' /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `block (2 levels) in run '

    Read the article

  • PHP pages working slow from time to time

    - by user1038179
    I have a VPS with a limit of 2GB of RAM and 8 CPU cores. I host 5 sites on that VPS (one of them is just for testing, with no visitors except me). All 5 sites are image galleries, like wallpaper sites. Last week I noticed a problem on one site (the main domain, used for name servers, and also the one with the most traffic and visitors). That site has two image galleries: one is an old static HTML gallery made a few years ago, and the other, main one is powered by the ZENphoto CMS. I also have that same gallery CMS on another two sites on the same VPS (one running site and one testing-only site). The other two sites use a different PHP-driven gallery. The problem is that after some time (it varies from 10 minutes to a few hours after an Apache restart), pages on the main site load very slowly, or I get a 503 Service Temporarily Unavailable error, so the pages become unavailable. But this affects only the part with the new CMS gallery; the old part of the site with static HTML pages keeps working fast and fine. The other two sites with the same CMS gallery, and the two with the different PHP-driven gallery, also keep working fine and fast at the same time. I thought it must be something with the CMS on the main site, because the other sites work fine. Then I tried to open the contact and guest book pages on the main site, which are PHP pages outside of that CMS, and they do not load either, even though the same contact PHP scripts work on the other sites at the same time. So, when the site starts to hang, ONLY PHP-generated content stops working; as I said, static pages keep working. And the problem occurs ONLY on that one main site. Then I need to restart Apache; after the restart everything works nicely and fast for some time, and then again just the PHP pages on the main site become slower. If I do not restart Apache, the slowness lasts for a while (several minutes or hours, depending on traffic), and during that time PHP-driven content loads very slowly or is unavailable on that site. After some time, at moments, everything starts to work fast again for a while, and then the cycle repeats. In hours with more traffic, PHP content loads slowly or is unavailable; in hours with less traffic it is sometimes fast and sometimes a little slower than usual. Once again: only on that main site, and only the PHP-driven pages; static pages stay fast even in peak-traffic hours, and the other sites, even with the same CMS, stay fast. Currently I have about 7000 unique visitors per day on that site, but the site worked fine even with 11500 visitors per day, and about 17000 total visitors on the VPS across all sites (about 3 pages per unique visitor).
When site start to slow down sometimes in apache status I can see something like this: mod_fcgid status: Total FastCGI processes: 37 Process: php5 (/usr/local/cpanel/cgi-sys/php5)Pid Active Idle Accesses State 11300 39 28 7 Working 11274 47 28 7 Working 11296 40 29 3 Working 11283 45 30 3 Working 11304 36 31 1 Working 11282 46 32 3 Working 11292 42 33 1 Working 11289 44 34 1 Working 11305 35 35 0 Working 11273 48 36 2 Working 11280 47 39 1 Working 10125 133 40 12 Exiting(communication error) 11294 41 41 1 Exiting(communication error) 11277 47 42 2 Exiting(communication error) 11291 43 43 1 Exiting(communication error) 10187 108 43 10 Exiting(communication error) 10209 95 44 7 Exiting(communication error) 10171 113 44 5 Exiting(communication error) 11275 47 47 1 Exiting(communication error) 10144 125 48 8 Exiting(communication error) 10086 149 48 20 Exiting(communication error) 10212 94 49 5 Exiting(communication error) 10158 118 49 5 Exiting(communication error) 10169 114 50 4 Exiting(communication error) 10105 141 50 16 Exiting(communication error) 10094 146 50 15 Exiting(communication error) 10115 139 51 17 Exiting(communication error) 10213 93 51 9 Exiting(communication error) 10197 103 51 7 Exiting(communication error) Process: php5 (/usr/local/cpanel/cgi-sys/php5)Pid Active Idle Accesses State 7983 1079 2 149 Ready 7979 1079 11 151 Ready Process: php5 (/usr/local/cpanel/cgi-sys/php5)Pid Active Idle Accesses State 7990 1066 0 57 Ready 8001 1031 64 35 Ready 7999 1032 94 29 Ready 8000 1031 91 36 Ready 8002 1029 34 52 Ready Process: php5 (/usr/local/cpanel/cgi-sys/php5)Pid Active Idle Accesses State 7991 1064 29 115 Ready When it is working nicly there is no lines with "Exiting(communication error)" Active and Idle are time active and time since last request, in seconds. Here are system info. Sysem info: Total processors: 8 Processor #1 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5440 @ 2.83GHz Speed 88.320 MHz Cache 6144 KB All other seven are the same. System Information Linux vps.nnnnnnnnnnnnnnnnn.nnn 2.6.18-028stab099.3 #1 SMP Wed Mar 7 15:20:22 MSK 2012 x86_64 x86_64 x86_64 GNU/Linux Current Memory Usage total used free shared buffers cached Mem: 8388608 882164 7506444 0 0 0 -/+ buffers/cache: 882164 7506444 Swap: 0 0 0 Total: 8388608 882164 7506444 Current Disk Usage Filesystem Size Used Avail Use% Mounted on /dev/vzfs 100G 34G 67G 34% / none System Details: Running on: Apache/2.2.22 System info: (Unix) mod_ssl/2.2.22 OpenSSL/0.9.8e-fips-rhel5 DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_fcgid/2.3.6 Powered by: PHP/5.3.10 Current Configuration Default PHP Version (.php files) 5 PHP 5 Handler fcgi PHP 4 Handler suphp Apache suEXEC on Apache Ruid2 off PHP 4 Handler suphp Apache suEXEC on Apache Configuration The following settings have been saved: fileetag: All keepalive: On keepalivetimeout: 3 maxclients: 150 maxkeepaliverequests: 10 maxrequestsperchild: 10000 maxspareservers: 10 minspareservers: 5 root_options: ExecCGI, FollowSymLinks, Includes, IncludesNOEXEC, Indexes, MultiViews, SymLinksIfOwnerMatch serverlimit: 256 serversignature: Off servertokens: Full sslciphersuite: ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP:!kEDH startservers: 5 timeout: 30 I hope, I explained my problem nicely. Any help would be nice.
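    A hedged starting point for the mod_fcgid side of this: the "Exiting(communication error)" rows above usually mean Apache lost the FastCGI socket to PHP processes that were recycled or killed mid-request, so one common first step is to make the process-manager limits explicit. The sketch below uses the Fcgid-prefixed directive names that apply to the mod_fcgid 2.3.6 shown in the status output; the numeric values are illustrative guesses for a 2GB VPS, not tuned recommendations.

      <IfModule mod_fcgid.c>
          # Cap PHP processes per class/vhost so one busy gallery cannot starve the VPS
          FcgidMaxProcessesPerClass   20
          # Recycle idle and long-lived processes predictably
          FcgidIdleTimeout            60
          FcgidProcessLifeTime        120
          # Allow slow gallery pages more time before Apache drops the FastCGI socket
          FcgidIOTimeout              90
          FcgidBusyTimeout            300
          # Keep in step with PHP_FCGI_MAX_REQUESTS in the PHP wrapper script
          FcgidMaxRequestsPerProcess  500
      </IfModule>

    If the errors persist with sane limits, the next suspects are usually slow database queries or hitting MaxClients, either of which would explain why only the busiest PHP application stalls while static pages stay fast.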

    Read the article

  • Tomcat 7 on Ubuntu 12.04 with JRE 7 not starting

    - by Andreas Krueger
    I am running a virtual server in the web on Ubuntu 12.04 LTS / 32 Bit. After a clean install of JRE 7 and Tomcat 7, following the instructions on http://www.sysadminslife.com, I don't get Tomcat 7 up and running. > java -version java version "1.7.0_09" Java(TM) SE Runtime Environment (build 1.7.0_09-b05) Java HotSpot(TM) Client VM (build 23.5-b02, mixed mode) > /etc/init.d/tomcat start Starting Tomcat Using CATALINA_BASE: /usr/local/tomcat Using CATALINA_HOME: /usr/local/tomcat Using CATALINA_TMPDIR: /usr/local/tomcat/temp Using JRE_HOME: /usr/lib/jvm/java-7-oracle Using CLASSPATH: /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar > telnet localhost 8080 Trying ::1... Trying 127.0.0.1... telnet: Unable to connect to remote host: Connection refused netstat sometimes shows a Java process, most of the times not. If it does, nothing works either. Does anyone have a solution or encountered similar situations? Here are the contents of catalina.out: 16.11.2012 18:36:39 org.apache.catalina.core.AprLifecycleListener init INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/lib/jvm/java-6-oracle/lib/i386/client:/usr/lib/jvm/java-6-oracle/lib/i386:/usr/lib/jvm/java-6-oracle/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib 16.11.2012 18:36:40 org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["http-bio-8080"] 16.11.2012 18:36:40 org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["ajp-bio-8009"] 16.11.2012 18:36:40 org.apache.catalina.startup.Catalina load INFO: Initialization processed in 1509 ms 16.11.2012 18:36:40 org.apache.catalina.core.StandardService startInternal INFO: Starting service Catalina 16.11.2012 18:36:40 org.apache.catalina.core.StandardEngine startInternal INFO: Starting Servlet Engine: Apache Tomcat/7.0.29 16.11.2012 18:36:40 org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory /usr/local/tomcat/webapps/manager Here come the results of ps -ef, iptables --list and netstat -plut: > ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 Nov16 ? 00:00:00 init root 2 1 0 Nov16 ? 00:00:00 [kthreadd/206616] root 3 2 0 Nov16 ? 00:00:00 [khelper/2066167] root 4 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 5 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 6 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 7 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 8 2 0 Nov16 ? 00:00:00 [nfsiod/2066167] root 119 1 0 Nov16 ? 00:00:00 upstart-udev-bridge --daemon root 125 1 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 157 125 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 158 125 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 205 1 0 Nov16 ? 00:00:00 upstart-socket-bridge --daemon root 276 1 0 Nov16 ? 00:00:00 /usr/sbin/sshd -D root 335 1 0 Nov16 ? 00:00:00 /usr/sbin/xinetd -dontfork -pidfile /var/run/xinetd.pid -stayalive -inetd root 348 1 0 Nov16 ? 00:00:00 cron syslog 368 1 0 Nov16 ? 00:00:00 /sbin/syslogd -u syslog root 472 1 0 Nov16 ? 00:00:00 /usr/lib/postfix/master postfix 482 472 0 Nov16 ? 00:00:00 qmgr -l -t fifo -u root 520 1 0 Nov16 ? 00:00:04 /usr/sbin/apache2 -k start www-data 523 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start www-data 525 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start www-data 526 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start tomcat 1074 1 0 Nov16 ? 00:01:08 /usr/lib/jvm/java-6-oracle/bin/java -Djava.util.logging.config.file=/usr/ postfix 1351 472 0 Nov16 ? 
00:00:00 tlsmgr -l -t unix -u -c postfix 3413 472 0 17:00 ? 00:00:00 pickup -l -t fifo -u -c root 3457 276 0 17:31 ? 00:00:00 sshd: root@pts/0 root 3459 3457 0 17:31 pts/0 00:00:00 -bash root 3470 3459 0 17:31 pts/0 00:00:00 ps -ef > iptables --list Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp dpt:http-alt ACCEPT tcp -- anywhere anywhere tcp dpt:8005 ACCEPT tcp -- anywhere anywhere tcp dpt:http-alt Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination > netstat -plut Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 *:smtp *:* LISTEN 472/master tcp 0 0 *:3213 *:* LISTEN 276/sshd tcp6 0 0 [::]:smtp [::]:* LISTEN 472/master tcp6 0 0 [::]:8009 [::]:* LISTEN 1074/java tcp6 0 0 [::]:3213 [::]:* LISTEN 276/sshd tcp6 0 0 [::]:http-alt [::]:* LISTEN 1074/java tcp6 0 0 [::]:http [::]:* LISTEN 520/apache2
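    One thing worth checking, based on the ps and netstat output above: PID 1074 is a Tomcat started under /usr/lib/jvm/java-6-oracle and it is the process bound to 8009 and 8080, while the init script reports JRE_HOME as java-7-oracle. That pattern often means a stale instance from an earlier install is still half-holding the ports. A hedged diagnostic sketch, with <stale-pid> as a placeholder:

      # Is an old Tomcat (started with Java 6) still around?
      ps -ef | grep '[o]rg.apache.catalina.startup.Bootstrap'
      netstat -lnp | grep -E ':8005|:8009|:8080'

      # Stop it cleanly; kill the leftover PID only if the script does not
      /etc/init.d/tomcat stop
      kill <stale-pid>

      # Start again and watch the log until the connectors report server startup
      /etc/init.d/tomcat start
      tail -f /usr/local/tomcat/logs/catalina.out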

    Read the article

  • How do I make solr/jetty find the installed slf4j jars in Ubuntu 12.04?

    - by J. Pablo Fernández
    I'm running Ubuntu 12.04's packaged Jetty in which I installed solr 4.3.1 (by copying the war file to /var/lib/jetty/webapps. When I start Jetty, I get this error: failed SolrRequestFilter: org.apache.solr.common.SolrException: Could not find necessary SLF4j logging jars. If using Jetty, the SLF4j logging jars need to go in the jetty lib/ext directory. The package libslf4j-java is installed, and the jars are in /usr/share/java: /usr/share/java/log4j-over-slf4j.jar /usr/share/java/slf4j-api.jar /usr/share/java/slf4j-jcl.jar /usr/share/java/slf4j-jdk14.jar /usr/share/java/slf4j-log4j12.jar /usr/share/java/slf4j-migrator.jar /usr/share/java/slf4j-nop.jar /usr/share/java/slf4j-simple.jar but somehow, Jetty and/or Solr are not finding them. How do I make them find them? or how do I install some other jars where jetty/solr would find them? The full error is: 88 [main] INFO org.mortbay.log - jetty-6.1.24 443 [main] INFO org.mortbay.log - Deploy /etc/jetty/contexts/javadoc.xml -> org.mortbay.jetty.handler.ContextHandler@cec0c5{/javadoc,file:/usr/share/jetty/javadoc} 522 [main] INFO org.mortbay.log - Extract file:/var/lib/jetty/webapps/solr.war to /var/cache/jetty/data/Jetty__8080_solr.war__solr__zdafkg/webapp 1501 [main] WARN org.mortbay.log - failed SolrRequestFilter: org.apache.solr.common.SolrException: Could not find necessary SLF4j logging jars. If using Jetty, the SLF4j logging jars need to go in the jetty lib/ext directory. For other containers, the corresponding directory should be used. For more information, see: http://wiki.apache.org/solr/SolrLogging 1501 [main] ERROR org.mortbay.log - Failed startup of context org.mortbay.jetty.webapp.WebAppContext@5329c5{/solr,file:/var/lib/jetty/webapps/solr.war} org.apache.solr.common.SolrException: Could not find necessary SLF4j logging jars. If using Jetty, the SLF4j logging jars need to go in the jetty lib/ext directory. For other containers, the corresponding directory should be used. 
For more information, see: http://wiki.apache.org/solr/SolrLogging at org.apache.solr.servlet.SolrDispatchFilter.<init>(SolrDispatchFilter.java:105) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:532) at java.lang.Class.newInstance0(Class.java:374) at java.lang.Class.newInstance(Class.java:327) at org.mortbay.jetty.servlet.Holder.newInstance(Holder.java:153) at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:92) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:662) at org.mortbay.jetty.servlet.Context.startContext(Context.java:140) at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250) at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:467) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130) at org.mortbay.jetty.Server.doStart(Server.java:224) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.mortbay.start.Main.invokeMain(Main.java:194) at org.mortbay.start.Main.start(Main.java:534) at org.mortbay.jetty.start.daemon.Bootstrap.start(Bootstrap.java:30) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243) Caused by: java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory at org.apache.solr.servlet.SolrDispatchFilter.<init>(SolrDispatchFilter.java:103) ... 36 more Caused by: java.lang.ClassNotFoundException: org.slf4j.LoggerFactory at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:392) at org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:363) ... 
37 more 1505 [main] WARN org.mortbay.log - failed org.mortbay.jetty.webapp.WebAppContext@5329c5{/solr,file:/var/lib/jetty/webapps/solr.war}: java.lang.NoClassDefFoundError: org/slf4j/Logger 1579 [main] WARN org.mortbay.log - failed ContextHandlerCollection@19d0a1: java.lang.NoClassDefFoundError: org/slf4j/Logger 1582 [main] INFO org.mortbay.log - Opened /var/log/jetty/2013_06_27.request.log 1582 [main] WARN org.mortbay.log - failed HandlerCollection@cbf30e: java.lang.NoClassDefFoundError: org/slf4j/Logger 1582 [main] ERROR org.mortbay.log - Error starting handlers java.lang.NoClassDefFoundError: org/slf4j/Logger at java.lang.Class.getDeclaredMethods0(Native Method) at java.lang.Class.privateGetDeclaredMethods(Class.java:2454) at java.lang.Class.getMethod0(Class.java:2697) at java.lang.Class.getMethod(Class.java:1622) at org.mortbay.log.Log.unwind(Log.java:228) at org.mortbay.log.Log.warn(Log.java:197) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:475) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130) at org.mortbay.jetty.Server.doStart(Server.java:224) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.mortbay.start.Main.invokeMain(Main.java:194) at org.mortbay.start.Main.start(Main.java:534) at org.mortbay.jetty.start.daemon.Bootstrap.start(Bootstrap.java:30) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243) Caused by: java.lang.ClassNotFoundException: org.slf4j.Logger at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:392) at org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:363) ... 29 more
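    Since the error itself says the SLF4J jars must go into Jetty's lib/ext, a minimal sketch (assuming the Debian layout where JETTY_HOME is /usr/share/jetty) is to symlink the packaged jars there and restart. Note that per the SolrLogging wiki referenced in the error, Solr 4.3 also expects log4j plus the jcl-over-slf4j and jul-to-slf4j bridges, which may need to be installed or downloaded separately:

      sudo mkdir -p /usr/share/jetty/lib/ext
      sudo ln -s /usr/share/java/slf4j-api.jar     /usr/share/jetty/lib/ext/
      sudo ln -s /usr/share/java/slf4j-log4j12.jar /usr/share/jetty/lib/ext/
      # log4j itself, if the liblog4j1.2-java package is installed
      sudo ln -s /usr/share/java/log4j-1.2.jar     /usr/share/jetty/lib/ext/
      sudo service jetty restart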

    Read the article

  • IIS7 unchecked in Windows component list, yet going to http://localhost still directs me to IIS7. How to get to Apache?

    - by Ed Hancock
    IIS7 was turned off on my Windows 7 system; under Control Panel > Services and Applications, no web publishing service appears. I have Apache et al. installed with WampServer. Yet when I try to access the local server I still get directed to the IIS7 welcome page. After turning off IIS7 I restarted the computer, no help; cleared the browser history, no help; deleted the IIS7 folders, no help. It is hiding somewhere and I cannot find it. Any suggestions/help would be appreciated. Ed
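    A hedged way to find and evict whatever is still answering on port 80 (service names assume a stock IIS7 install; even with the role unchecked, the kernel HTTP listener can keep serving a cached welcome page):

      :: Which PID owns port 80?
      netstat -ano | findstr :80

      :: Stop and disable the IIS services if they show up
      net stop W3SVC
      net stop WAS
      sc config W3SVC start= disabled

      :: http.sys can hold port 80 on its own; stop it too (answers Yes to dependents)
      net stop http /y

    After that, restarting Apache from the WampServer tray icon (and clearing the browser cache, which may have memorised the IIS page for localhost) should bring up the WAMP index instead.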

    Read the article

  • Why do I get the error "Only antlib URIs can be located from the URI alone,not the URI" when trying to run hibernate tools in my build.xml

    - by Casbah
    I'm trying to run hibernate tools in an ant build to generate ddl from my JPA annotations. Ant dies on the taskdef tag. I've tried with ant 1.7, 1.6.5, and 1.6 to no avail. I've tried both in eclipse and outside. I've tried including all the hbn jars in the hibernate-tools path and not. Note that I based my build file on this post: http://stackoverflow.com/questions/281890/hibernate-jpa-to-ddl-command-line-tools I'm running eclipse 3.4 with WTP 3.0.1 and MyEclipse 7.1 on Ubuntu 8. Build.xml: <project name="generateddl" default="generate-ddl"> <path id="hibernate-tools"> <pathelement location="../libraries/hibernate-tools/hibernate-tools.jar" /> <pathelement location="../libraries/hibernate-tools/bsh-2.0b1.jar" /> <pathelement location="../libraries/hibernate-tools/freemarker.jar" /> <pathelement location="../libraries/jtds/jtds-1.2.2.jar" /> <pathelement location="../libraries/hibernate-tools/jtidy-r8-20060801.jar" /> </path> <taskdef classname="org.hibernate.tool.ant.HibernateToolTask" classpathref="hibernate-tools"/> <target name="generate-ddl" description="Export schema to DDL file"> <!-- compile model classes before running hibernatetool --> <!-- task definition; project.class.path contains all necessary libs <taskdef name="hibernatetool" classname="org.hibernate.tool.ant.HibernateToolTask" classpathref="project.class.path" /> --> <hibernatetool destdir="sql"> <!-- check that directory exists --> <jpaconfiguration persistenceunit="default" /> <classpath> <dirset dir="WebRoot/WEB-INF/classes"> <include name="**/*"/> </dirset> </classpath> <hbm2ddl outputfilename="schemaexport.sql" format="true" export="false" drop="true" /> </hibernatetool> </target> Error message (ant -v): Apache Ant version 1.7.0 compiled on December 13 2006 Buildfile: /home/joe/workspace/bento/ant-generate-ddl.xml parsing buildfile /home/joe/workspace/bento/ant-generate-ddl.xml with URI = file:/home/joe/workspace/bento/ant-generate-ddl.xml Project base dir set to: /home/joe/workspace/bento [antlib:org.apache.tools.ant] Could not load definitions from resource org/apache/tools/ant/antlib.xml. It could not be found. BUILD FAILED /home/joe/workspace/bento/ant-generate-ddl.xml:12: Only antlib URIs can be located from the URI alone,not the URI at org.apache.tools.ant.taskdefs.Definer.execute(Definer.java:216) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:105) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:357) at org.apache.tools.ant.helper.ProjectHelper2.parse(ProjectHelper2.java:140) at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.parseBuildFile(InternalAntRunner.java:191) at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.run(InternalAntRunner.java:400) at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.main(InternalAntRunner.java:137) Total time: 195 milliseconds
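    For what it's worth, Ant raises exactly this message when a <taskdef> has neither a name nor a resource/file to load definitions from, so it falls back to treating the (empty) URI as an antlib. A minimal sketch is to keep your classpath reference but give the task its name, as in the commented-out variant already in the build file:

      <taskdef name="hibernatetool"
               classname="org.hibernate.tool.ant.HibernateToolTask"
               classpathref="hibernate-tools"/>

    The <hibernatetool> element inside the generate-ddl target then resolves against that task name.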

    Read the article

  • Setting spring bean property value using ref-bean

    - by Apache Fan
    Hi, I am trying to set a property value using spring. <bean id="velocityPropsBean" class="com.test.CustomProperties" abstract="false" singleton="true" lazy-init="false" autowire="default" dependency-check="default"> <property name="properties"> <props> <prop key="resource.loader">file</prop> <prop key="file.resource.loader.cache">true</prop> <prop key="file.resource.loader.class">org.apache.velocity.runtime.resource.loader.FileResourceLoader</prop> <prop key="file.resource.loader.path">NEED TO INSERT VALUE AT STARTUP</prop> </props> </property> </bean> <bean id="velocityResourcePath" class="java.lang.String" factory-bean="velocityHelper" factory-method="getLoaderPath"/> Now what i need to do is insert the result from getLoaderPath into file.resource.loader.path. The value of getLoaderPath changes so it has to be loaded at server startup. Any thoughts how i can inset the velocityResourcePath value to the property?
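    In Spring 2.x a <prop> value cannot reference another bean directly, so this needs either programmatic wiring or an upgrade. If moving to Spring 3+ is an option, a SpEL expression should be able to pull the value from the helper bean when the context starts; a hedged sketch, assuming velocityHelper exposes getLoaderPath():

      <property name="properties">
          <props>
              <prop key="resource.loader">file</prop>
              <prop key="file.resource.loader.cache">true</prop>
              <prop key="file.resource.loader.class">org.apache.velocity.runtime.resource.loader.FileResourceLoader</prop>
              <!-- Spring 3+ SpEL, evaluated once at container startup -->
              <prop key="file.resource.loader.path">#{velocityHelper.getLoaderPath()}</prop>
          </props>
      </property>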

    Read the article

  • Base64.encodeBase64URLSafeString(): "Could not find method" error in Eclipse (Android project)

    - by jax
    I have an Android project that uses the Base64.encodeBase64URLSafeString method from commons-codec. The part that does the Base64 work lives in a separate Java project, which I have added to the Android project through the "Project" tab in the Build Path. I have already linked both projects to commons-codec, thinking that might be the problem, but I still get the following error in Eclipse; both projects build without errors. Could not find method org.apache.commons.codec.binary.Base64.encodeBase64URLSafeString, referenced from method com.mydomain.android.licensegenerator.client.LicenseLoader.doSha1AndBase64Encryption What might I be doing wrong?
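    The usual cause of that VM warning is that Android itself bundles an old commons-codec (without encodeBase64URLSafeString) on the boot classpath, and it wins over the newer jar on your build path at runtime. If the project targets API level 8 or higher, one hedged workaround is the platform's own android.util.Base64, which sidesteps the clash entirely:

      import android.util.Base64;

      public final class UrlSafeBase64 {
          private UrlSafeBase64() {}

          // Mirrors commons-codec's encodeBase64URLSafeString: URL-safe alphabet,
          // no line wrapping, no trailing '=' padding. Requires API level 8+.
          public static String encode(byte[] data) {
              return Base64.encodeToString(data,
                      Base64.URL_SAFE | Base64.NO_WRAP | Base64.NO_PADDING);
          }
      }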

    Read the article

  • WstxParsingException: "Expected a text token, got START_ELEMENT"

    - by lasombra
    I have a stub generated by WSDL2Java. I send a request and the answer that comes back (used tcptrace) looks fine. However, an AxisFault is thrown: org.apache.axis2.AxisFault: com.ctc.wstx.exc.WstxParsingException: Expected a text token, got START_ELEMENT. at [row,col {unknown-source}]: [4,1313] at org.apache.axis2.AxisFault.makeFault(AxisFault.java:430) at org.tempuri.MyStub.fromOM(MyStub.java:1726) at org.tempuri.MyStub.acceptResults(MyStub.java:612) The corresponding code in MyStub.java looks like: 607: org.apache.axis2.context.MessageContext _returnMessageContext = _operationClient 608: .getMessageContext(org.apache.axis2.wsdl.WSDLConstants.MESSAGE_LABEL_IN_VALUE); 609: org.apache.axiom.soap.SOAPEnvelope _returnEnv = _returnMessageContext 610: .getEnvelope(); 611: 612: java.lang.Object object = fromOM(_returnEnv.getBody() 613: .getFirstElement(), 614: org.tempuri.AcceptQcResultsResponse.class, 615: getEnvelopeNamespaces(_returnEnv)); How do I find out which token is meant by the error? I have [row,col {unknown-source}]: [4,1313] but I don't know how to use that information.
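    The [4,1313] pair is a position in the returned SOAP envelope, so one way to see what is sitting there is to save the tcptrace-captured response to a file and walk it with a plain StAX reader, printing the parser location as it goes. A self-contained sketch (response.xml is a placeholder for the saved payload):

      import java.io.FileInputStream;
      import javax.xml.stream.Location;
      import javax.xml.stream.XMLInputFactory;
      import javax.xml.stream.XMLStreamConstants;
      import javax.xml.stream.XMLStreamReader;

      public class FindToken {
          public static void main(String[] args) throws Exception {
              XMLStreamReader r = XMLInputFactory.newInstance()
                      .createXMLStreamReader(new FileInputStream("response.xml"));
              while (r.hasNext()) {
                  int event = r.next();
                  Location loc = r.getLocation();
                  // Print events near row 4, column 1313 to see which element is there
                  if (loc.getLineNumber() == 4 && loc.getColumnNumber() >= 1250) {
                      String name = (event == XMLStreamConstants.START_ELEMENT
                              || event == XMLStreamConstants.END_ELEMENT)
                              ? " <" + r.getName() + ">" : "";
                      System.out.println("[" + loc.getLineNumber() + ","
                              + loc.getColumnNumber() + "] event=" + event + name);
                  }
              }
          }
      }

    If the element at that position is something the stub does not expect (a fault detail, an extra wrapper, a namespace mismatch), that is usually the discrepancy between the WSDL the stub was generated from and the response actually being sent.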

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • activemq-maven-plugin ignores files in the classpath?

    - by Oscar Chan
    I have been trying to get activemq-maven-plugin to run activemq with configuration in classpath of the bundle. However, I don't have much luck. It seems that the activemq-maven-plugin just ignore resources (resources/main/conf/activemq.properties) the local bundle. I checked the jar and target/classes and they are built into the right local. I am able to get plugin to run (mvn activemq:run) if I take out the PropertyPlaceholderConfigurer bean in the activemq.xml Did I do anything wrong? Here is the output [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Failed to start ActiveMQ Broker Embedded error: Could not load properties; nested exception is java.io.FileNotFoundException: class path resource [conf/activemq.properties] cannot be opened because it does not exist [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 2 seconds [INFO] Finished at: Mon May 03 15:56:05 PDT 2010 [INFO] Final Memory: 11M/79M [INFO] ------------------------------------------------------------------------ Here is the pom.xml, which I specific the plugin to look up activemq.xml via file, that works. However, in the activemq.xml <?xml version="1.0"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>oc.test</groupId> <artifactId>mq</artifactId> <version>0.1</version> <name>mq</name> <url>http://maven.apache.org</url> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.8.1</version> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.activemq.tooling</groupId> <artifactId>maven-activemq-plugin</artifactId> <version>5.3.1</version> <configuration> <configUri>xbean:file:src/main/resources/conf/activemq.xml</configUri> <fork>false</fork> <systemProperties> <property> <name>javax.net.ssl.keyStorePassword</name> <value>password</value> </property> <property> <name>org.apache.activemq.default.directory.prefix</name> <value>./target/</value> </property> </systemProperties> </configuration> <dependencies> <dependency> <groupId>org.springframework</groupId> <artifactId>spring</artifactId> <version>2.5.5</version> </dependency> </dependencies> </plugin> </plugins> </build> </project> Here is the src/main/resources/conf/activemq.xml <?xml version="1.0"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:amq="http://activemq.apache.org/schema/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd "> <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> <property name="locations"> <value>classpath:conf/activemq.properties</value> </property> <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE"/> </bean> <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="./data"> <!-- The transport 
connectors ActiveMQ will listen to --> <transportConnectors> <transportConnector name="openwire" uri="tcp://localhost:61616"/> </transportConnectors> </broker> </beans> Here is the src/main/resources/conf/activemq.properties activemq.port=61616
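    Nothing looks wrong with the resource itself; the catch seems to be that the maven-activemq-plugin resolves the xbean configuration against the plugin's own classpath rather than the module's target/classes, so classpath:conf/activemq.properties is invisible to it. A minimal workaround sketch, consistent with the xbean:file: URI already used for activemq.xml (path relative to the module root):

      <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
          <property name="locations">
              <!-- load straight from the source tree while running under the plugin -->
              <value>file:src/main/resources/conf/activemq.properties</value>
          </property>
          <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE"/>
      </bean>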

    Read the article

  • Struts 1 - struts-taglib.jar is not being found by the web application

    - by Yatendra Goel
    I am using Struts-1. I have developed a struts-based web application. I am using struts tags in my JSP pages supplied in struts-taglib.jar by inserting the following lines in the JSP file: <%@ taglib prefix="html" uri="http://struts.apache.org/tags-html" %> <%@ taglib prefix="logic" uri="http://struts.apache.org/tags-logic" %> <%@ taglib prefix="bean" uri="http://struts.apache.org/tags-bean" %> Now the application is working fine when I run it on my localsystem but when I deploy it on a server, it shows the following exception: org.apache.jasper.JasperException: The absolute uri: http://struts.apache.org/tags-html cannot be resolved in either web.xml or the jar files deployed with this application From the above exception, it seems that the application hasn't found the struts-taglib.jar file. But I have put the struts-taglib.jar in /WEB-INF/lib directory. Then where is the problem?
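    If the jar really is present in /WEB-INF/lib on the server, the usual remaining suspects are an older Servlet 2.3 container that does not scan jars for TLDs, or a broken deploy. A hedged fallback is to map the URIs explicitly in web.xml; the .tld file names below assume they have been copied out of struts-taglib.jar into /WEB-INF (and in a Servlet 2.4+ web.xml these entries go inside <jsp-config>):

      <taglib>
          <taglib-uri>http://struts.apache.org/tags-html</taglib-uri>
          <taglib-location>/WEB-INF/struts-html.tld</taglib-location>
      </taglib>
      <taglib>
          <taglib-uri>http://struts.apache.org/tags-logic</taglib-uri>
          <taglib-location>/WEB-INF/struts-logic.tld</taglib-location>
      </taglib>
      <taglib>
          <taglib-uri>http://struts.apache.org/tags-bean</taglib-uri>
          <taglib-location>/WEB-INF/struts-bean.tld</taglib-location>
      </taglib>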

    Read the article

  • PHP OCI8 and Oracle 11g DRCP Connection Pooling in Pictures

    - by christopher.jones
    Here is a screen shot from a PHP OCI8 connection pooling demo that I like to run. It graphically shows how little database host memory is needed when using DRCP connection pooling with Oracle Database 11g. Migrating to DRCP can be as simple as starting the pool and changing the connection string in your PHP application. The script that generated the data for this graph was a simple "Parts" query application run under various simulated user loads. I was running the database on a small Oracle Linux server with just 2G of memory, using PHP OCI8 1.4. Apache is in pre-fork mode, as needed for PHP. Each graph has time on the horizontal axis in arbitrary 'tick' time units. Click the image to see it full sized.

    Pooled connections

    Beginning with the top left graph: at tick time 65 I used Apache's 'ab' tool to start 100 concurrent 'users' running the application. These users connected to the database using DRCP: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl:pooled'); A second hundred DRCP users were added to the system at tick 80 and a final hundred users were added at tick 100. At about tick 110 I stopped the test and restarted Apache, which closed all the connections. The bottom left graph shows the number of statements being executed by the database per second, with some spikes for background database activity and some variability for this small test. Each extra batch of users adds another 'step' of load to the system. The top right Server Process graph shows the database server processes doing the query work for each web user. As user load is added, the DRCP server pool increases (in green). The pool starts at its default size of 4 and quickly ramps up to about (I'm guessing) 35. At tick time 100 the pool increases to my configured maximum of 40 processes. Those 40 processes do the query work for all 300 web users. When I stopped the test at tick 110, the pooled processes remained open waiting for more users to connect. If I had left the test quiet for the DRCP 'inactivity_timeout' period (300 seconds by default), the pool would have shrunk back to 4 processes. Looking at the bottom right, you can see the amount of memory being consumed by the database. During the initial quiet period about 500M of memory was in use; the absolute number is just an indication of my particular DB configuration. As the number of pooled processes increases, each process needs more memory. The shape of the memory graph echoes the Server Process graph above it. Each of the 300 web users also needs a few kilobytes, but this is almost too small to see on the graph.

    Non-pooled connections

    Compare the DRCP case with using 'dedicated server' processes. At tick 140 I started 100 web users who did not use pooled connections: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl'); This connection string change is the only difference between the two tests. At ticks 155 and 165 I started two more batches of 100 simulated users each. At about tick 195 I stopped the user load but left Apache running. Apache then gradually returned to its quiescent state, killing idle httpd processes and producing the downward slope at the right of the graphs as the persistent database connection in each Apache process was closed. The Executions per Second graph on the bottom left shows the same step increases as in the earlier DRCP case; the database is handling this load. But look at the number of server processes in the top right graph. There is now a one-to-one correspondence between Apache/PHP processes and DB server processes: each PHP process has one DB server process dedicated to it, hence the term 'dedicated server'. The memory required on the database is proportional to all those database server processes started. Almost all my system's memory was consumed, and I doubt it would have coped with any more user load.

    Summary

    Oracle Database 11g DRCP connection pooling significantly reduces database host memory requirements, allowing more system memory to be allocated to the SGA and allowing the system to scale to handle thousands of concurrent PHP users. Even for small systems, using DRCP allows more web users to be active. More information about PHP and DRCP can be found in the PHP Scalability and High Availability chapter of The Underground PHP and Oracle Manual.
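
    Since the only difference between the two tests is the connect string, the pooled version of the demo boils down to something like the sketch below (not the actual demo code; the 'parts' table, column names, schema and password are placeholders):

        <?php
        // Minimal DRCP sketch: appending ':pooled' to the Easy Connect string is what
        // routes this connection through the DRCP pool; the rest is ordinary OCI8.
        $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl:pooled');

        $s = oci_parse($c, 'SELECT part_id, description FROM parts WHERE part_id = :id');
        $id = 1;
        oci_bind_by_name($s, ':id', $id);
        oci_execute($s);

        while (($row = oci_fetch_assoc($s)) !== false) {
            echo $row['PART_ID'] . ' ' . $row['DESCRIPTION'] . "\n";
        }

        oci_free_statement($s);
        // With oci_pconnect() the underlying connection is cached across requests by
        // the PHP process, so no oci_close() is needed for this simple demo.
        ?>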

    Read the article

  • Struts 1 - struts-taglib.jar is not being found by my web application

    - by Yatendra Goel
    I am using Struts 1 and have developed a Struts-based web application. In my JSP pages I use the Struts tags supplied by struts-taglib.jar, inserting the following lines in the JSP file:

        <%@ taglib prefix="html" uri="http://struts.apache.org/tags-html" %>
        <%@ taglib prefix="logic" uri="http://struts.apache.org/tags-logic" %>
        <%@ taglib prefix="bean" uri="http://struts.apache.org/tags-bean" %>

    The application works fine when I run it on my local system, but when I deploy it on a server it throws the following exception:

        org.apache.jasper.JasperException: The absolute uri: http://struts.apache.org/tags-html cannot be resolved in either web.xml or the jar files deployed with this application

    From the exception it seems that the application has not found struts-taglib.jar, but I have put struts-taglib.jar in the /WEB-INF/lib directory. So where is the problem? Note: You can also look at http://stackoverflow.com/questions/2452492/java-problem-in-deploying-web-application for more information.

    Read the article
