Search Results

Search found 24623 results on 985 pages for 'linux'.

  • mdadm - Recovering a 'split' RAID1 array

    - by Hamza
    I have two drives that used to be part of a single RAID1 volume, but it appears one of them went offline for some time, which I noticed just now when I rebooted my system. I now seem to have two RAID volumes, as reported by:

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md126 : active raid1 sdc[1]
              2096116 blocks super 1.2 [2/1] [_U]
        md127 : active (auto-read-only) raid1 sdb[0]
              2096116 blocks super 1.2 [2/1] [U_]
        unused devices: <none>

    Not exactly sure where to go from here. How can I merge and re-sync these volumes without data loss? Thanks.
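
    A minimal sketch of the usual recovery, assuming md126 (with sdc) holds the current data and sdb is the stale half (the [_U]/[U_] flags show which slot each array considers missing). Re-adding the stale disk overwrites it, so double-check which side is current before running this:

        # stop the stray array so its member disk becomes free
        mdadm --stop /dev/md127
        # re-add the stale disk to the surviving array; it will resync from sdc
        mdadm /dev/md126 --add /dev/sdb
        # watch the rebuild progress
        watch cat /proc/mdstat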

  • Webcam error (libv4lconvert) while capturing VIDEO

    - by shadyabhi
    I get the following when I capture images using my webcam:

        libv4lconvert: Error decompressing JPEG: unknown huffman code: 0000ffff
        libv4lconvert: Error decompressing JPEG: unknown huffman code: 0000ffff
        ... (same error repeating) ...

    I also had an issue where my camera was not detected in Ubuntu, so in order to run an application that uses the webcam I have to run a command like:

        LD_PRELOAD=/usr/lib/libv4l/v4l1compat.so ./6dofhand

    What's causing these errors?
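
    As a side note, a small wrapper script keeps the LD_PRELOAD workaround out of the way; a sketch, reusing the application name from the question:

        #!/bin/sh
        # always preload the v4l1 compatibility shim for this app
        export LD_PRELOAD=/usr/lib/libv4l/v4l1compat.so
        exec ./6dofhand "$@"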

  • How to share drive space from vmware server 2 host to a guest?

    - by matnagel
    In the VMware Tools in the guest there is an option to access shares from the host. What is the way to create such shares on a VMware Server 2 host? I did not find it anywhere in Infrastructure Web Access, and I also went through the VMware Server 2 user's guide but did not see it mentioned. Can you help? The host is a 64-bit Ubuntu Server 8.04 LTS.
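
    A sketch of the other route people use: enabling shared folders by editing the guest's .vmx file directly. The key names below follow VMware's shared-folder convention from the Workstation family of products; whether Server 2 honors them is an assumption, so verify against its documentation:

        sharedFolder.maxNum = "1"
        sharedFolder0.present = "TRUE"
        sharedFolder0.enabled = "TRUE"
        sharedFolder0.readAccess = "TRUE"
        sharedFolder0.writeAccess = "FALSE"
        sharedFolder0.hostPath = "/srv/share"
        sharedFolder0.guestName = "share"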

  • Using cookies with lynx

    - by XXL
    This works:

        lynx -cfg=cfg.file $URL

    with the following contents of the cfg file:

        SET_COOKIES:TRUE
        ACCEPT_ALL_COOKIES:TRUE
        PERSISTENT_COOKIES:TRUE
        COOKIE_FILE:cookie.file

    However, this does not:

        lynx -cookies=1 -accept_all_cookies=1 -cookie_file=cookie.file $URL

    In case it helps, here's the trace:

        parse_arg(arg_name=-cookies=1, mask=1, count=2)
        parse_arg lookup(cookies=1)
        ...skip (mask 1/4)
        parse_arg(arg_name=-accept_all_cookies=1, mask=1, count=3)
        parse_arg lookup(accept_all_cookies=1)
        ...skip (mask 1/4)
        parse_arg(arg_name=-cookie_file=cookie.file, mask=1, count=4)
        parse_arg lookup(cookie_file=cookie.file)
        ...skip (mask 1/4)
        parse_arg(arg_name=$URL, mask=1, count=5)
        parse_arg startfile:$URL

    The obvious question: why? The actual difference, from what I can see, is that there is no way to trigger PERSISTENT_COOKIES:TRUE from the command line in lynx. Or have I overlooked/misunderstood something?
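
    One thing worth trying: some lynx builds expose a separate option for where cookies are saved on exit, which is what PERSISTENT_COOKIES effectively controls from the cfg file. Whether your build includes it depends on compile-time options, so treat this as a sketch:

        lynx -cookies -accept_all_cookies \
             -cookie_file=cookie.file \
             -cookie_save_file=cookie.file "$URL"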

  • How Do I Use Multiple Versions of OpenSSL ... One for Apache and one for PHP

    - by Ken S.
    I have an Apache 2.2 server (a self-compiled version) that is getting dinged during a PCI scan because it does not support TLS 1.1 or 1.2 ciphers. After some digging I found that the installed version of OpenSSL (0.9.8e) does not contain the newest TLS ciphers, so I downloaded and compiled the latest version of OpenSSL (1.0.1c) and installed it in an alternate location under /opt so it wouldn't interfere with the installed version. What I would like to do is compile Apache against the 1.0.1 libraries and keep the system-installed libraries for use with PHP, cURL, OpenSSH, etc. I'm hoping that doing it this way will allow Apache to use the newest TLS without breaking any other programs that require the old libraries. I thought I could do this by adding an entry into /etc/ld.so.conf that points to the new libraries, but I think this would conflict with the existing ones, i.e. two references to libcrypto could cause everything to have issues. The main reason for keeping the old libraries is that PHP cURLing to external servers has issues with the latest OpenSSL libs, which would require edits to our PHP code. Would love some guidance on how best to accomplish this.
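
    A sketch of the usual way to pin Apache to the alternate OpenSSL without touching /etc/ld.so.conf, assuming the 1.0.1c build lives in /opt/openssl-1.0.1c: baking an rpath into httpd makes the runtime linker resolve the new libcrypto/libssl for Apache only, while PHP, cURL and OpenSSH keep resolving the system copies.

        LDFLAGS="-Wl,-rpath,/opt/openssl-1.0.1c/lib" \
        ./configure --enable-ssl --with-ssl=/opt/openssl-1.0.1c [other options]
        make && make install

        # verify which libraries httpd actually picked up:
        ldd /usr/local/apache2/bin/httpd | grep -E 'libssl|libcrypto'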

  • xrandr detects only one display

    - by cupakob
    Hi all, I have a problem getting a picture on my TV over VGA (and also over S-Video to SCART). I first tried it through xorg.conf, but without success. After that I tried xrandr, but it detects only my laptop display. Here is the output:

        bufka [~] $ xrandr -q
        Screen 0: minimum 1680 x 1050, current 1680 x 1050, maximum 1680 x 1050
        default connected 1680x1050+0+0 0mm x 0mm
           1680x1050      50.0*    51.0     52.0

    Any suggestions on how to solve this? My video card is an Nvidia GeForce 8600M GT, the TV is an LG M227WPD, and the OS is Ubuntu Lucid.
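
    A hedged note: an output named "default" with a 0mm x 0mm physical size is usually a sign that the proprietary NVIDIA driver of that era is driving the screen; it predates RandR 1.2 and hides the real outputs from xrandr, so TV-out has to be configured through the vendor tool instead:

        lsmod | grep -w nvidia    # is the proprietary driver loaded?
        nvidia-settings           # TV-out lives under "X Server Display Configuration" (TwinView)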

  • Sign multiple domains with single Domain Key (dk-filter)

    - by Lashae
    Motivation: the private shopping website GILT sends periodic update emails from giltgroupe.bounce.ed10.net, yet all of the mails are signed with the domain keys of giltgroupe.com:

        mailed-by    giltgroupe.bounce.ed10.net
        signed-by    giltgroupe.com

    My story: I couldn't manage to sign x.com with y.com's domain key using dk-filter under Debian Lenny with Postfix. If I start the dk-filter service with the following arguments:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com,y.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    dk-filter signs with domain x.com (d=x.com). If I change the daemon arguments to:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    then emails sent from y.com are not signed at all. The dk-keys.conf file is as follows:

        *:/var/dk-filter/y.com/mail

    I managed to do the same thing with DKIM and it works perfectly; DK, however, doesn't seem to work. I have no problem signing y.com's emails with y.com's key and x.com's emails with x.com's key, which indicates there is no configuration problem. Do you have any experience/advice on how to sign emails from multiple domains with one specific chosen domain's key?
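
    For reference, the keylist passed via -s takes one pattern:keypath line (the two-field form shown above), so both domains can at least be pointed at the same key material; whether dk-filter then puts y.com or the sender's domain into d= is exactly the open question. A sketch of that dk-keys.conf:

        x.com:/var/dk-filter/y.com/mail
        y.com:/var/dk-filter/y.com/mail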

  • Advantages of a deployment tool over shell

    - by Jimmy
    Currently I have all of my deployment scripts in shell; they install about 10 programs and configure them. The way I see it, shell is a fantastic tool for this:

        - Modular: only one program per script, so I can spread the programs across different servers.
        - Simple: shell scripts are extremely simple and don't need any other software installed.
        - One-click: I only have to run the shell script once and everything is set up.
        - Agnostic: most programmers can figure out shell, and don't need to know how to use a specific program.
        - Versioning: since my code is on GitHub, a simple git pull and a restart of supervisor will run my latest code.

    My question is: with all of these advantages, why do people keep telling me to use a tool such as Ansible or Chef instead of shell?
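
    One recurring answer is idempotency: tools like Ansible or Chef describe desired state and converge to it, while a shell script replays actions, so every step must be guarded by hand to survive a second run. A small sketch of the difference (hypothetical snippet, not from the question):

        # naive: appends a duplicate line every time the script re-runs
        echo 'fs.file-max = 100000' >> /etc/sysctl.conf

        # the guard you end up writing in shell to make it idempotent
        grep -qF 'fs.file-max = 100000' /etc/sysctl.conf \
            || echo 'fs.file-max = 100000' >> /etc/sysctl.conf

    The tools also add dry-runs, change reporting and failure handling for every step by default.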

  • How long does badblocks take on a 1TB drive?

    - by Steven Don
    I'm running badblocks (or rather "e2fsck -c") on a 1TB drive, and if the progress indicator is any indication (no pun intended), it's going to take almost forever to complete. Right now it says 0.01% done, 30:20 elapsed, which would mean the whole thing would take 17 weeks or so to finish; that seems rather excessive in my book. Is that a normal amount of time for such a check, or are my suspicions correct that the drive is failing, causing the check to take only slightly less than an eternity? I found this question here, but that pertains to the number of passes done.
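
    A quick way to cross-check the failing-drive suspicion without waiting out the scan, assuming smartmontools is installed and the drive is /dev/sdX:

        smartctl -a /dev/sdX
        # watch Reallocated_Sector_Ct, Current_Pending_Sector and
        # UDMA_CRC_Error_Count: non-zero pending sectors plus a crawling
        # badblocks run usually mean the drive is retrying failed reads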

  • Route return traffic to correct gateway depending on service

    - by Marnix van Valen
    On my office network I have two internet connections and one CentOS server running a website (HTTPS on port 443). The website should be publicly accessible through the public IP of the first internet connection (ISP-1). The other internet connection, ISP-2, is the default gateway on the network. Both internet connections have routers (the household kind) with NAT, SPI firewalls, etc. The router on ISP-2 is a Netgear WNDR3700 (aka N600) with the original firmware. The problem is that the website is unreachable. It looks like incoming traffic on ISP-1 reaches the server, but the returning traffic is routed through ISP-2, effectively making the site unreachable. As far as I can tell, I can't do port-based routing on the WNDR3700. What are my options to make this work? I've been looking at implementing an iptables/routing-based solution on the server itself but haven't been able to make that work. Update: note that the server has one network interface connecting it to both routers.
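
    The usual server-side fix is policy routing: mark connections that arrive via the ISP-1 router and send their replies through a routing table whose default gateway is that router. A sketch, assuming eth0 is the single NIC, 192.168.1.1 is a hypothetical LAN address of the ISP-1 router, and table 100 is unused:

        # replies to connections marked "1" consult table 100
        ip rule add fwmark 1 table 100
        ip route add default via 192.168.1.1 dev eth0 table 100

        # remember which HTTPS connections came in through the ISP-1 router
        # (matched by its MAC address, since both routers share the segment)
        iptables -t mangle -A PREROUTING -p tcp --dport 443 \
                 -m mac --mac-source <ISP1-router-MAC> -j CONNMARK --set-mark 1
        # stamp the connection mark back onto outgoing reply packets
        iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark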

  • What is the most secure way to allow a user read access to a log file?

    - by gAMBOOKa
    My application requires read access to /var/log/messages, which belongs to user and group root. What is the minimal exposure level required on /var/log/messages so my application can read it? Presently, my plan is to change the group ownership of /var/log/messages to a new group and add root and my application user to it, but this could end up giving the application write privileges to /var/log/messages as well. OS: CentOS 5.5.
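
    If the filesystem supports POSIX ACLs (ext3 on CentOS 5 does, with the acl mount option), a per-user read grant avoids loosening group ownership at all; a sketch, with "appuser" standing in for the application's account:

        setfacl -m u:appuser:r /var/log/messages
        getfacl /var/log/messages    # verify

    Since logrotate replaces the file, the grant has to be reapplied after each rotation (e.g. from a postrotate script) or configured as a default ACL on the directory.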

  • Download all directories, files and sub-directories from an HTTP server

    - by Jack
    I want to download all directories, files, sub-directories and so on from a remote HTTP server. I found some solutions for FTP servers, but they don't work for HTTP. So far I've had no luck with wget -r or -m: it downloads all the directories in the root and their respective index.html files, but not all the files and sub-directories under them (note that a sub-directory may contain further directories, and so on). Not sure about the tags; fix them for me if needed. Note: I'm not a native English speaker, sorry for the bad English.
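
    A sketch of the wget flags that usually matter for a full recursive HTTP mirror: -np keeps wget from wandering upward, -l inf lifts the default recursion depth of 5, and -e robots=off matters because wget silently honors robots.txt over HTTP, a common reason "wget -r" stops after the index pages:

        wget -r -l inf -np -nH -e robots=off http://server/path/

    One caveat: over plain HTTP, only files linked from the generated index pages can be discovered; there is no way to list unlinked files.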

  • Problem with keyboard layout when OS X 10.6.3 -> Fedora 13

    - by user20196
    Hi, I have VMware installed on Fedora 13 (the host OS), 64-bit AMD. When I log in to it from the console, VMware works fine. I wanted to use it remotely from OS X 10.6.3, so I installed the NX client. The keyboard layout is fine as long as I do not use VMware, but when I try to run the guest OS in VMware, my keyboard is cut off completely. The host and the guest OSes are both set up for the "us" layout and a "generic 105-key" keyboard.

  • Lighttpd mod_accesslog not logging fastcgi requests

    - by zepatou
    I recently installed lighttpd to serve a Python script via mod_fastcgi. Everything works fine, except that requests handled by mod_fastcgi are not logged in the access.log file (requests on port 80 are logged, though). My lighttpd version is 1.4.28 on Debian 6.0. I used the same configuration on an Ubuntu Server 10.04 with lighttpd 1.4.26 and it worked. Here is my config.

    lighttpd.conf:

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_accesslog",
            "mod_compress",
        )
        server.document-root = "/var/www/"
        server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
        server.errorlog = "/home/log/lighttpd/error.log"
        index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" )
        accesslog.filename = "/home/log/lighttpd/access.log"
        url.access-deny = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
        server.pid-file = "/var/run/lighttpd.pid"
        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

    conf-enabled/10-fastcgi.conf:

        server.modules += ( "mod_fastcgi" )
        fastcgi.server = (
            "/" => ( (
                "min-procs"   => 1,
                "check-local" => "disable",
                "host"        => "127.0.0.1",  # local
                "port"        => 3000
            ), )
        )

    Any idea?
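
    A hedged debugging step: lighttpd 1.4 can trace how each request is dispatched, which shows whether the fastcgi-handled requests ever reach mod_accesslog on the Debian box:

        # temporarily, in lighttpd.conf:
        debug.log-request-handling = "enable"

        # then watch the errorlog while requesting a fastcgi URL:
        tail -f /home/log/lighttpd/error.log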

  • Can't work with server after modifying /lib/libc.so.6

    - by Afshin
    I have a CentOS server (a VPS). After running the command below I can't work with the server and get the same error for every action (SSH, login, ls, ...). The command:

        ln -s /lib/libc.so.1 /lib/libc.so.6 -f

    And the error:

        /sbin/shutdown: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory

    I have VNC access to the server, but because I can't log in, it's unusable. Thanks in advance.
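
    Every dynamically linked binary (including ln itself) now fails, but there are two standard escapes, usable from the VNC console or the provider's rescue mode. The exact libc filename is an assumption (CentOS 5 ships glibc 2.5), so confirm what /lib/libc-*.so is actually called first:

        # option 1: sln, the statically linked ln that ships with glibc
        /sbin/sln /lib/libc-2.5.so /lib/libc.so.6

        # option 2: invoke the dynamic loader directly and tell it where libc lives
        /lib/ld-linux.so.2 --library-path /lib /bin/ln -sf /lib/libc-2.5.so /lib/libc.so.6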

  • Keepalived for more than 20 virtual addresses

    - by cvaldemar
    I have set up keepalived on two Debian machines for high availability, but I've run into the maximum number of virtual IPs I can assign to my vrrp_instance. How would I go about configuring and failing over 20+ virtual IPs? This is the (very simple) setup:

        LB01: 10.200.85.1
        LB02: 10.200.85.2
        Virtual IPs: 10.200.85.100 - 10.200.85.200

    Each machine is also running Apache (later Nginx) binding on the virtual IPs for SSL client certificate termination and proxying to backend webservers. The reason I need so many VIPs is the inability to use VirtualHost on HTTPS. This is my keepalived.conf:

        vrrp_script chk_apache2 {
            script "killall -0 apache2"
            interval 2
            weight 2
        }

        vrrp_instance VI_1 {
            interface eth0
            state MASTER
            virtual_router_id 51
            priority 101
            virtual_ipaddress {
                10.200.85.100
                .
                . all the way to
                .
                10.200.85.200
            }
        }

    An identical configuration is on the BACKUP machine, and it's working fine, but only up to the 20th IP. I have found a HOWTO discussing this problem; basically, they suggest having just one VIP and routing all traffic "via" this one IP, and "all will be well". Is this a good approach? I'm running pfSense firewalls in front of the machines. Quote from the above link:

        ip route add $VNET/N via $VIP
        or
        route add $VNET netmask w.x.y.z gw $VIP

    Thanks in advance.

    EDIT: @David Schwartz said it would make sense to add a route, so I tried adding a static route to the pfSense firewall, but that didn't work as I expected it would. The pfSense route:

        Interface: LAN
        Destination network: 10.200.85.200/32 (virtual IP)
        Gateway: 10.200.85.100 (floating virtual IP)
        Description: Route to VIP .100

    I also made sure I had packet forwarding enabled on my hosts:

        $ cat /etc/sysctl.conf
        net.ipv4.ip_forward=1
        net.ipv4.ip_nonlocal_bind=1

    Am I doing this wrong? I also removed all VIPs from keepalived.conf so it only fails over 10.200.85.100.
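
    The 20-address ceiling appears to be tied to how many addresses fit in a VRRP advertisement, and keepalived has a dedicated escape hatch for it: virtual_ipaddress_excluded holds addresses that fail over together with the instance but are not carried in the advertisement packets. A sketch against the instance above:

        vrrp_instance VI_1 {
            ...
            virtual_ipaddress {
                10.200.85.100            # advertised (the 20-entry limit applies here)
            }
            virtual_ipaddress_excluded {
                10.200.85.101
                # ... through ...
                10.200.85.200            # moved on failover, not advertised
            }
        }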

  • Ubuntu 11.10 firewall/gateway - no client internet access

    - by Siriss
    I have read many other posts but cannot figure this out. eth0 is my external interface, connected to a Comcast modem; the server has internet access with no issues. eth1 is internal and runs DHCP for the clients. DHCP works just fine: all my clients can get an IP and ping the server, but they cannot access the internet. I am using isc-dhcp-server and have set, in /etc/default/isc-dhcp-server:

        INTERFACE="eht1"

    Here is my dhcpd.conf file, located at /etc/dhcp/dhcpd.conf:

        ddns-update-style interim;
        ignore client-updates;
        subnet 10.0.10.0 netmask 255.255.255.0 {
            range 10.0.10.10 10.0.10.200;
            option routers 10.0.10.2;
            option subnet-mask 255.255.255.0;
            option domain-name-servers 208.67.222.222, 208.67.220.220;  # OpenDNS
            # option domain-name "example.com";
            default-lease-time 21600;
            max-lease-time 43200;
            authoritative;
        }

    I have made the net.ipv4.ip_forward=1 change in /etc/sysctl.conf. Here is my interfaces file:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet dhcp

        iface eth1 inet static
            address 10.0.10.2
            netmask 255.255.255.0
            network 10.0.10.0
        auto eth1

    And finally, here is my iptables.conf file:

        # Firewall configuration written by system-config-firewall
        # Manual customization of this file is not recommended.
        *nat
        :PREROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A POSTROUTING -s 10.0.10.0/24 -o eth0 -j MASQUERADE
        #-A PREROUTING -i eth0 -p tcp --dport 59668 -j DNAT --to-destination 10.0.10.2:59668
        COMMIT
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i eth1 -j ACCEPT
        -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
        -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
        -A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
        -A INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT
        -A INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT
        -A FORWARD -s 10.0.10.0/24 -o eth0 -j ACCEPT
        -A FORWARD -d 10.0.10.0/24 -m state --state ESTABLISHED,RELATED -i eth0 -j ACCEPT
        -A FORWARD -p icmp -j ACCEPT
        -A FORWARD -i lo -j ACCEPT
        -A FORWARD -i eth1 -j ACCEPT
        #-A FORWARD -i eth0 -m state --state NEW -m tcp -p tcp -d 10.0.10.2 --dport 59668 -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        COMMIT

    I am completely stuck. I cannot figure out why the clients cannot access the internet. Am I missing a service? Is a service not running? Any help would be greatly appreciated. I tried to be as thorough as possible, but please let me know if I have missed something. Thank you!
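
    A few checks that usually localize this, run on the gateway while a client tries to reach the internet. Note also that the INTERFACE="eht1" line above contains a transposed interface name, and on recent Ubuntu packages the variable in /etc/default/isc-dhcp-server is spelled INTERFACES; both are worth verifying:

        sysctl net.ipv4.ip_forward              # must print 1 (run "sysctl -p" or reboot after editing sysctl.conf)
        iptables -t nat -L POSTROUTING -v -n    # MASQUERADE packet counters should climb while a client pings 8.8.8.8
        tcpdump -ni eth0 icmp                   # do the client's pings leave eth0 with the public source address?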

  • Xorg: How can I map AltGr to the CapsLock Key (to toggle 3rd level symbols)

    - by basweber
    Hi, like many others I don't need Caps Lock, and I want to reassign it to have the function of AltGr. I use Kubuntu 9.10, but I think there must be a solution that is distribution-independent. I have already tried setxkbmap and xmodmap. Using xmodmap I at least managed to make the Caps Lock key behave like the Delete key by following this description, but I could not manage to assign the AltGr behavior to Caps Lock.
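
    xkeyboard-config ships an option for exactly this, so a sketch of both routes (the option name should be present on a 2009-era distribution, but /usr/share/X11/xkb/rules/base.lst lists what your system actually has):

        # XKB: Caps Lock chooses the 3rd level, i.e. behaves as AltGr
        setxkbmap -option lv3:caps_switch

        # rough xmodmap equivalent (keycode 66 is Caps Lock on standard layouts)
        xmodmap -e "clear lock" -e "keycode 66 = ISO_Level3_Shift"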

  • How to understand cpu family/model/stepping fields in /proc/cpuinfo

    - by Victor Sorokin
    I have the following in cpuinfo:

        processor  : 0
        vendor_id  : AuthenticAMD
        cpu family : 15
        model      : 107
        model name : AMD Athlon(tm) 64 X2 Dual Core Processor 5600+
        stepping   : 2

    According to the Wikipedia page there are two kinds of 5600+: one made on 90nm technology, the other on 65nm. How can I tell which one I have? There seems to be no direct correspondence between the contents of cpuinfo and the info on the Wikipedia page, and the AMD site uses yet another naming scheme for its processors. How can I map the family, model and stepping values from cpuinfo to the data available from Wikipedia/AMD?
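
    The trick is that AMD's documentation lists family/model/stepping in hex while /proc/cpuinfo prints them in decimal, so convert first; a sketch:

        printf 'family %#x, model %#x, stepping %#x\n' 15 107 2
        # -> family 0xf, model 0x6b, stepping 0x2

    Family 0Fh model 6Bh can then be looked up in AMD's revision guide for Family 0Fh processors; in that family, models 0x60 and above are the 65nm revision G parts (Brisbane), while the 90nm Windsor 5600+ sits in the lower model range. Verify against the revision guide for your exact stepping.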

  • Mass editing videos on Ubuntu?

    - by rick
    Hi, I'm trying to add a watermark and a credits image to all of my old videos. I downloaded them off YouTube, so they are all FLV (H.264?). Is there software that will let me do simple edits like this in batches? I know a little bit of Python and tried looking at some of the libraries, but they all seem like overkill (and way above my head). So is there a solution besides getting some editing software and going through all my videos manually? They are mostly the same length, but it would be nice to specify a relative position for my credits, e.g. show a static image for 10 seconds when the video is at 95%.
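
    For the batch half, a sketch using ffmpeg's overlay filter, assuming a reasonably recent ffmpeg; watermark.png, the output directory and the corner offset are all placeholders. The relative-position credits would additionally need a per-file duration probe (e.g. via ffprobe) to compute the 95% mark:

        mkdir -p out
        for f in *.flv; do
            ffmpeg -i "$f" -i watermark.png \
                   -filter_complex "overlay=main_w-overlay_w-10:10" \
                   "out/${f%.flv}.mp4"
        done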

  • HELP! Free space not reclaimed after online resizing ext4 in Ubuntu 9.10

    - by TiansHUo
    My root partition was filling up, with only 500 MB left, so I wanted to resize it from 20 GB to 40 GB. I resized the partition using these steps:

        1. Using GParted to shrink another partition, making space for the ext4.
        2. Using fdisk, deleting the root partition (on /dev/sda2) and creating it again with the new size.
        3. resize2fs /dev/sda2
        4. Updating GRUB 2.

    But now the problem is that although I can boot into the new partition and it shows as 40 GB, the free space is still 500 MB. So I booted from a LiveCD and checked with e2fsck -p /dev/sda2; it reported clean. So I added the -f flag (force check); still, the drive is full.
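
    A hedged check of whether the filesystem itself ever grew (as opposed to just the partition): compare the block count the filesystem believes it has against the partition size, and if they disagree, re-run resize2fs offline. It requires the forced fsck first, which has already been done:

        dumpe2fs -h /dev/sda2 | grep 'Block count'   # the filesystem's own idea of its size
        fdisk -l /dev/sda                            # the partition size, for comparison

        # from the LiveCD, after e2fsck -f:
        resize2fs /dev/sda2                          # grows the fs to fill the partition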

  • USB forwarding from dom0 to domU

    - by Karolis T.
    What are my options for forwarding two USB-connected phones to a Xen guest? I've read about PCI passthrough (http://www.wlug.org.nz/XenPciPassthrough), but I'm sure the USB controller in the server isn't a PCI card. There's device-level forwarding, but I need to forward two devices, and this page doesn't say how to do that: http://www.olivetalks.com/2008/02/03/usb-forwarding-on-xen-it-just-does-not-work/ Would something as simple as:

        usbdevice = [ 'host:xxx', 'host:yyy', ]

    work? EDIT: I'm now starting a bounty. This is really important for me, and for other people as well; I hope someone who has this resolved will be able to help.
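
    For what it's worth, the host:... specifiers come from qemu-dm, which accepts either a bus.device or a vendor:product form; lsusb shows both (the product ID below is made up for illustration). Whether the xm config parser accepts a list of them varies by Xen version, so this is only a sketch:

        $ lsusb
        Bus 001 Device 004: ID 0421:0802 Nokia Mobile Phones ...
        # -> 'host:1.4' (bus.device) or 'host:0421:0802' (vendor:product)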

  • Processes spawned by taskset not respecting environment variables

    - by jonesy16
    I've run into an issue where an Intel-compiler-generated program that I run with taskset puts its temporary files into the working directory instead of /tmp (defined by the environment variable TMPDIR). Run by itself, the program behaves correctly; run with taskset (e.g. taskset -c 0 <program>), it seems to completely ignore the TMPDIR environment variable. I verified this with a quick bash script, test.sh:

        #!/bin/bash
        echo $TMPDIR

    When run by itself:

        $ export TMPDIR=/tmp
        $ test.sh
        /tmp

    When run through taskset:

        $ export TMPDIR=/tmp
        $ taskset -c 1 test.sh
        ""

    Another test: if I export the TMPDIR variable inside my script and then use taskset to spawn a new process, that process doesn't know about the variable:

        #!/bin/bash
        export TMPDIR=/tmp
        taskset -c 1 sh -c export

    When run, the list of exported variables does not include TMPDIR. It works correctly with any other exported environment variable. If I diff the output of:

        export

    and

        taskset -c 1 bash -c export

    then I see four changes: the taskset-spawned export doesn't have LD_LIBRARY_PATH or NLSPATH (an Intel compiler variable), SHLVL is 3 instead of 1, and TMPDIR is missing. Can anyone tell me why?
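
    One plausible explanation worth checking: LD_LIBRARY_PATH, NLSPATH and TMPDIR are all on glibc's list of "unsecure" environment variables that get stripped when a setuid/setgid binary is executed, and those are exactly the variables that disappeared. If taskset on this system is installed setuid, that would account for the sanitized environment of everything it spawns:

        $ ls -l "$(command -v taskset)"
        # an 's' in the mode, e.g. -rwsr-xr-x, marks it setuid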
