Search Results

Search found 26742 results on 1070 pages for 'linux kernel'.

  • Recover LVM2 volume group after one HDD failed

    - by Bernd
    I had two HDDs, each containing an LVM partition, and together they formed a volume group. On top of that I had two LVs, one for my / directory and one for my /home/ directory. Yesterday the HDD that held my / directory failed, and I'm trying to recover at least my /home/ directory. What I've done so far:

    1. Boot a live system
    2. Extract the LVM2 metadata from the working HDD using dd
    3. Copy the metadata to /etc/lvm/backup/vg0

    Now I'm trying to run:

        pvcreate --restore /etc/lvm/backup/vg0 --uuid "[uuid of my working hdd]" /dev/sdb2

    But I always get:

        Couldn't find device with uuid '[uuid of broken hdd]'.
        Couldn't find device with uuid '[uuid of working hdd]'.
        Device /dev/sdb2 not found (or ignored by filtering).

    I confirmed that /dev/sdb2 exists, and I've commented out all filtering settings in /etc/lvm/lvm.conf, so I don't know what might be causing pvcreate not to find the device. So: what might be the problem? Is it even possible to restore this partition? (As I'm writing this I'm starting to think it's impossible D:)

    Edit: Okay, looks like I've got it figured out. I was using an Ubuntu 8.10 CD (yeah, I know it's not supported anymore) and it seems that was the problem. When I started from an Ubuntu 10.04 CD everything worked 'fine': I could mount my LVM partitions partially, without problems. (Will answer the question in 4 hours. But if anyone has still got some hints/tips, please share! :)
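    For anyone hitting the same wall, here is a rough sketch of the usual metadata-restore sequence from a recent live CD. It is not a guaranteed recovery procedure: the device name, VG name and UUID below are placeholders and must match your own backup file.

        # restore the PV label on the surviving disk; the UUID must be the one
        # recorded for that PV inside the backup file
        pvcreate --uuid "<uuid-of-working-hdd>" \
                 --restorefile /etc/lvm/backup/vg0 /dev/sdb2

        # rebuild the VG configuration from the same backup, then activate what
        # can be activated with one PV missing (needs partial-activation support)
        vgcfgrestore -f /etc/lvm/backup/vg0 vg0
        vgchange -ay --partial vg0

        # the LV that lived entirely on the surviving disk should now mount
        mount /dev/vg0/home /mnt/home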

  • I've set an editor as default, how do I call it to open files in a shell?

    - by iight
    EDIT: I thought of a better way to phrase the question. How can I find the alias that Ubuntu is using for a different text editor? Rather than opening nano by typing nano file.txt, I'd like to be able to type sublime file.txt to open the Sublime Text editor. I don't know where to look to find these aliases. sudo update-alternatives --config editor does not show it as a choice; I only see the 'default' editors, like nano and vim.tiny.
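    For what it's worth, a command usually becomes available by being on the PATH rather than through update-alternatives. A hedged sketch follows; the Sublime Text install path is an assumption and may differ on your system:

        # see how the shell resolves a name (alias, function, or file on $PATH)
        type nano
        type sublime

        # if the editor's binary is not on the PATH, a symlink is usually enough
        # (/opt/sublime_text/sublime_text is only an example location)
        sudo ln -s /opt/sublime_text/sublime_text /usr/local/bin/sublime

        # or define a shell alias in ~/.bashrc instead
        echo "alias sublime='/opt/sublime_text/sublime_text'" >> ~/.bashrc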

  • LS command for torrent files

    - by amir-beygi
    Hi all,
    I have a directory full of torrent files, and I have to download all of them. The problem is that I have a disk quota on my remote server, and the file sizes vary (100 MB to 8 GB); if I add all of the torrent files at once, none of them will download completely. So I need a command that lists all my torrents together with their sizes, so I can pick which ones to add to the download list later.

    NOTE: the remote server runs Ubuntu 9.10 and is accessed over SSH.

    In other words, I need a command like torrentls that outputs something like:

        file1.torrent 1111MB
        file2.torrent 222MB
        file3.torrent 3333MB
        file4.torrent 444MB
        file5.torrent 5555MB
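    There is no standard torrentls, but if transmission-show (shipped with newer transmission-cli packages; on a 9.10 box it may need a backport) can be installed, a small loop along these lines should get close. Treat it as a sketch: the exact "Total Size" label can differ between transmission versions.

        #!/bin/bash
        # print each torrent's file name and the total size it would download
        for t in /path/to/torrents/*.torrent; do
            size=$(transmission-show "$t" | grep -i 'total size' | sed 's/.*: *//')
            printf '%s\t%s\n' "$(basename "$t")" "$size"
        done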

  • keyboard mappings are totally screwed after updating to kde4

    - by zeonglow
    I recently upgraded from KDE 3.5 to KDE 4, and I have been having weird issues with my keyboard. In one of the virtual consoles (e.g. when I press Ctrl+Alt+1) I can type perfectly, but in KDE several of the number keys don't work, and the left and right arrow keys don't work either.

    When I press the right arrow key, xev gives me this:

        KeyRelease event, serial 34, synthetic NO, window 0x3600001, root 0x6f, subw 0x0, time 903459, (111,55), root:(115,836),
        state 0x10, keycode 114 (keysym 0x1008ff11, XF86AudioLowerVolume), same_screen YES,
        XLookupString gives 0 bytes:
        XFilterEvent returns: False

    When I press the '3' key it toggles my Bookmarks toolbar in Firefox; in xev I get this:

        KeyPress event, serial 34, synthetic NO, window 0x3600001, root 0x6f, subw 0x0, time 999968, (94,115), root:(98,896),
        state 0x10, keycode 12 (keysym 0x1008ff30, XF86Favorites), same_screen YES,
        XLookupString gives 0 bytes:
        XmbLookupString gives 0 bytes:
        XFilterEvent returns: False

        KeyRelease event, serial 34, synthetic NO, window 0x3600001, root 0x6f, subw 0x0, time 1000032, (94,115), root:(98,896),
        state 0x10, keycode 12 (keysym 0x1008ff30, XF86Favorites), same_screen YES,
        XLookupString gives 0 bytes:
        XFilterEvent returns: False

    As this is happening at a lower level, changing the type of keyboard in the KDE menus has no effect. I'm slowly beginning to wade through the mountains of documentation about the X keyboard model, but there has to be a better way. Does anyone know what it is?

    Edit: the number row (1234567890 !) works again after deleting the entire .kde folder, but only until I change the keyboard settings from the "System Settings" applet; then it's hosed full time, regardless of what I set the settings to (restoring the default settings doesn't help).

    2nd Edit: I'm using Gentoo AMD64, and I was upgrading from KDE 3.5 to KDE 4.2. I think I had manual settings before, although I didn't change anything. I was originally running KDE without HAL, until that stopped working a year or so ago. The only customisation I made was to set the multimedia keys to control Amarok.

    3rd Edit:

        $ grep xkb /var/log/Xorg.0.log
        (**) Option "xkb_rules" "evdev"
        (**) Option "xkb_model" "evdev"
        (**) Option "xkb_layout" "us"
        (**) Option "xkb_rules" "evdev"
        (**) Option "xkb_model" "evdev"
        (**) Option "xkb_layout" "us"

    Xorg.0.log also has this to say:

        (WW) AllowEmptyInput is on, devices using drivers 'kbd', 'mouse' or 'vmmouse' will be disabled.
        (WW) Disabling Mouse1
        (WW) Disabling Keyboard1

    My xorg.conf has this in it:

        Identifier "Keyboard1"
        Driver "kbd"
        Option "AutoRepeat" "500 30"
        # Specify which keyboard LEDs can be user-controlled (eg, with xset(1))
        Option "XkbRules" "xorg"
        Option "XkbModel" "pc105"
        Option "XkbLayout" "gb"
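    Not a fix for the KDE applet itself, but a hedged way to confirm that the evdev rules/layout mismatch is the culprit and to put the map back by hand from a terminal. The values are taken from the xorg.conf and Xorg.0.log shown above; adjust as needed:

        # show the rules/model/layout the running X server is actually using
        setxkbmap -print

        # force a sane map for a UK keyboard; with evdev-managed input the
        # rules and model are normally "evdev", the layout should be "gb"
        setxkbmap -rules evdev -model evdev -layout gb

    If the keys come back after that, the KDE keyboard settings were writing a bad layout and the problem is in its kxkbrc configuration; if not, the problem sits below X in the evdev/HAL setup.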

  • Reuse remote ssh connections and reduce command/session logging verbosity?

    - by ewwhite
    I have a number of systems that rely on application-level mirroring to a secondary server. The secondary server pulls data by means of a series of remote SSH commands executed on the primary. The application is a bit of a black box, and I may not be able to make modifications to the scripts that are used.

    My issue is that the logging in /var/log/secure is absolutely flooded with requests from the service user, admin. These commands occur many times per second and have a corresponding impact on the logs. They rely on passphrase-less key exchange. The OS involved is EL5 and EL6. An example is below.

    - Is there any way to reduce the amount of logging from these actions (by user? by source?)
    - Is there a cleaner way for the developers to perform these SSH executions without spawning so many sessions? It seems inefficient.
    - Can I reuse the existing connections?

    Example log output:

        Jul 24 19:08:54 Cantaloupe sshd[46367]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46446]: Accepted publickey for admin from 172.30.27.32 port 33526 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46446]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46446]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46475]: Accepted publickey for admin from 172.30.27.32 port 33527 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46475]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46475]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46504]: Accepted publickey for admin from 172.30.27.32 port 33528 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46504]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46504]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46583]: Accepted publickey for admin from 172.30.27.32 port 33529 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46583]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46583]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46612]: Accepted publickey for admin from 172.30.27.32 port 33530 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46612]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46612]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46641]: Accepted publickey for admin from 172.30.27.32 port 33531 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46641]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46641]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46720]: Accepted publickey for admin from 172.30.27.32 port 33532 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46720]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46720]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46749]: Accepted publickey for admin from 172.30.27.32 port 33533 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46749]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46749]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46778]: Accepted publickey for admin from 172.30.27.32 port 33534 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46778]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46778]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46857]: Accepted publickey for admin from 172.30.27.32 port 33535 ssh2
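    On the "can I reuse the existing connections" part, OpenSSH connection multiplexing is worth a try if the client side (the admin user on the pulling server) can be given a per-user config; the scripts themselves do not need to change. A hedged sketch:

        # ~/.ssh/config for the admin user on the secondary server
        Host *
            ControlMaster auto
            ControlPath ~/.ssh/cm-%r@%h:%p
            ControlPersist 10m   # needs OpenSSH >= 5.6; on EL5, keep a single
                                 # master alive instead with: ssh -M -N primaryhost &

    With multiplexing, the repeated ssh invocations ride over one TCP/SSH session, so sshd logs one "Accepted publickey"/session pair instead of one per command. For the log volume itself, rsyslog can also discard or rate-limit the matching messages, but that only hides information rather than reducing the sessions.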

  • ignore ipv6 router advertisements for static addresses with bonded interfaces

    - by boran
    I need to assign static IPv6 addresses (i.e. not use autoconfigured addresses, and ignore router advertisements). This can be done as follows for a standard interface like eth0:

        iface eth0 inet6 static
            address myprefix:mysubnet::myip
            gateway myprefix:mysubnet::mygatewayip
            netmask 64
            pre-up /sbin/sysctl -q -w net.ipv6.conf.$IFACE.autoconf=0
            pre-up /sbin/sysctl -q -w net.ipv6.conf.$IFACE.accept_ra=0

    However, how can this be done for bonded interfaces? Using the "all" interface does not work. The system is Ubuntu 10.04, kernel 2.6.24-24-server.

    If one uses the above sysctl commands for bond0, networking hangs on boot, because /proc/sys/net/ipv6/conf/bond0 does not exist yet and cannot be written to. Once the system has booted, /proc/sys/net/ipv6/conf/bond0 does exist, so one workaround after booting is to add the following to /etc/rc.local:

        /sbin/sysctl -q -w net.ipv6.conf.bond0.autoconf=0
        /sbin/sysctl -q -w net.ipv6.conf.bond0.accept_ra=0
        /etc/init.d/networking restart

    This has the desired effect: the autoconfigured v6 address disappears. It seems like a bit of a hack, though; are there better solutions?
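    Two less hacky options worth testing (a sketch, untested on that exact kernel): run the sysctls as post-up commands on the bond0 stanza so they only fire after the bonding driver has created the interface, and/or change the kernel default so newly created interfaces never autoconfigure in the first place.

        # /etc/network/interfaces -- bond0 gets its sysctls only once it exists
        iface bond0 inet6 static
            address myprefix:mysubnet::myip
            netmask 64
            gateway myprefix:mysubnet::mygatewayip
            post-up /sbin/sysctl -q -w net.ipv6.conf.bond0.autoconf=0
            post-up /sbin/sysctl -q -w net.ipv6.conf.bond0.accept_ra=0

        # /etc/sysctl.conf -- make "no autoconf / no RA" the default for any
        # interface created later, so bond0 never picks up an address at all
        net.ipv6.conf.default.autoconf = 0
        net.ipv6.conf.default.accept_ra = 0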

  • Dovecot Virtual Users and Users Domain Mapping

    - by Stojko
    I have successfully compiled, configured and run Dovecot with the virtual users feature. Here's part of my /etc/dovecot.conf configuration file:

        mail_location = maildir:/home/%d/%n/Maildir
        auth default {
          mechanisms = plain login
          userdb passwd-file {
            args = /home/%d/etc/passwd
          }
          passdb passwd-file {
            args = /home/%d/etc/shadow
          }
          socket listen {
            master {
              path = /var/run/dovecot/auth-worker
              mode = 0600
            }
          }
        }

    I'm facing one issue I can't resolve myself. Is there any way to create a users-to-domains mapping and provide the username in mail_location? Examples:

    1. Currently I have /home/domain.com/user/Maildir
    2. I'd like to have /home/USER/domain.com/user/Maildir

    Can I achieve this somehow? Greets, Stojko
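    One hedged idea: since a passwd-file userdb can return a per-user home directory, you can point mail_location at ~ and let each passwd-file entry carry the full path. A sketch for Dovecot 1.x; "stojko" below is a made-up owner account, adjust to your layout:

        # /etc/dovecot.conf -- take the mailbox path from the userdb's home field
        mail_location = maildir:~/Maildir

        # /home/%d/etc/passwd -- passwd-file columns are
        # user:password:uid:gid:gecos:home:shell:extra_fields
        user:x:1001:1001::/home/stojko/domain.com/user::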

  • Debian modem problem

    - by Raafat
    I'm a new Debian user, and it looks like a very good choice for me: everything is stable, free and easy to use. The problem is that I'm using my modem to establish a dial-up (PPP) connection to the internet (a very old way I'm forced to use for now) with the KPPP application, and nothing is working properly. It seems like it doesn't recognize my modem, or something.

    I've already tried a few things, and I now know my modem is on /dev/tty0, so I made a /dev/modem link pointing to it. When I query the modem using KPPP it responds with something like:

        ATI :
        ATI0:
        ATI1:
        ...
        ATI7:

    with an empty text box next to each of these ATIs. Now, when I press "Connect" in KPPP, it says "modem ready", and that's it. By the way, my modem is an MDC AC'97.
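    A hedged way to double-check the modem outside of KPPP is wvdialconf, which probes the likely devices and writes a starting wvdial config. Note that an MDC AC'97 softmodem usually needs the slmodem userspace driver first, which typically exposes a device such as /dev/ttySL0 rather than /dev/tty0 (that device name is an assumption about that driver, and package availability depends on your repositories):

        # probe/dialer tools plus the AC'97 softmodem daemon
        apt-get install wvdial sl-modem-daemon

        # probe the serial devices and write a template /etc/wvdial.conf;
        # a real, working modem answers the ATI queries with non-empty strings
        wvdialconf /etc/wvdial.conf

        # after filling in Phone / Username / Password in /etc/wvdial.conf:
        wvdial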

  • racoon-tool doesn't generate full racoon.conf file in /var/lib/racoon/racoon.conf

    - by robthewolf
    I am using ipsec-tools/racoon to create my VPN, and racoon-tool to configure racoon.conf, but when I run racoon-tool reload it only generates the first section ("Global items"). When I run racoon-tool I get:

        # racoon-tool reload
        Loading SAD and SPD...
        SAD and SPD loaded.
        Configuring racoon...done.

    This is the entire file /var/lib/racoon/racoon.conf:

        #
        # Racoon configuration for Samuel
        # Generated on Wed Jan 5 21:31:49 2011 by racoon-tool
        #
        #
        # Global items
        #
        path pre_shared_key "/etc/racoon/psk.txt";
        path certificate "/etc/racoon/certs";

        log debug;

    I cannot find a solution anywhere as to why this is happening. Please help.
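    For comparison while debugging: an output like the above usually means no peer/connection definitions were picked up, because racoon only gets a peer section when one is generated. If racoon-tool keeps producing only the global section, racoon can also be fed a hand-written peer definition directly; a minimal pre-shared-key sketch (the address and algorithms are placeholders, not taken from the question):

        # appended to racoon.conf -- one IKE phase 1 peer and a catch-all phase 2
        remote 203.0.113.10 {
                exchange_mode main;
                proposal {
                        encryption_algorithm aes;
                        hash_algorithm sha1;
                        authentication_method pre_shared_key;
                        dh_group modp1024;
                }
        }

        sainfo anonymous {
                pfs_group modp1024;
                encryption_algorithm aes;
                authentication_algorithm hmac_sha1;
                compression_algorithm deflate;
        }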

  • ubuntu: sending mail with postfix?

    - by ajsie
    I've got some questions about how this works. Ubuntu Server comes with Postfix installed. If I want my PHP script to send a mail to, let's say, [email protected], how does it work? Do I have to point Postfix at another MTA (my ISP's MTA?) in its configuration file? And if someone sends a reply, will it reach my IP? Is it Postfix that receives it, or does that involve fetchmail?
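    If outbound mail has to go through the ISP's smarthost, the usual knob is relayhost in main.cf. A minimal hedged sketch; the relay hostname and port are placeholders for whatever the ISP publishes, and the test assumes a mail command from the mailutils package:

        # /etc/postfix/main.cf -- route all outgoing mail via the ISP's relay
        relayhost = [smtp.isp.example]:587

        # reload postfix and push a test message through the local MTA
        sudo postfix reload
        echo "test body" | mail -s "postfix relay test" someone@example.com

    For the other direction: replies only arrive at your Postfix if the domain's MX record points at your IP and port 25 is reachable; otherwise fetchmail (or similar) has to pull the mail down from a remote mailbox.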

  • Apache + Tomcat error 120006 Using mod_proxy_ajp for Load Balance

    - by Wakaru44
    I have an Apache 2 frontend with two nodes, and a backend with two instances of Tomcat 6 balanced with mod_proxy_ajp. The database is on a separate machine. All machines run RHEL: 6.2 on the frontend, 5.5 on the backend. The infrastructure is virtualized using VMware.

    This is the Apache config in one of the VirtualHosts:

        ProxyPreserveHost On
        ProxyPass / balancer://liferay/
        <Proxy balancer://liferay>
            BalancerMember ajp://lrab:8009 route=liferaya
            BalancerMember ajp://lrbb:8009 route=liferayb status=+H
            ProxySet lbmethod=byrequests nofailover=on
        </Proxy>

    The connector in Tomcat is currently configured like this:

        <!-- Define an AJP 1.3 Connector on port 8009 -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
                   URIEncoding="UTF-8" enableLookups="false" allowTrace="true" />

    Do you think it could be useful to set a maxThreads parameter, like in this post? In that case, how can I determine a proper number of threads?

    From time to time we get errors like this:

        [Tue Sep 18 17:57:02 2012] [error] ajp_read_header: ajp_ilink_receive failed
        [Tue Sep 18 17:57:02 2012] [error] (120006)APR does not understand this error code: proxy: read response failed from 192.168.1.104:8009 (lrab)

    and Apache switches to the passive node (if it's active) or fails with 503.

    Some things I have tried so far:

    - I think I have some performance issues with one of the applications. Here you can see a thread dump, but I'm not quite sure about it.
    - I also started to monitor the network connection. I have noticed that some pings are lost when I run "ping -f", so maybe it is a network issue, but the success rate is 100% (so the lost packets are only a few among the flood; maybe, I don't know, enough to break the link between Apache and Tomcat). I wrote a Python script to check connectivity with timestamps on the pings, so I can know when the network fails.
    - After sniffing the network, I can also see some RST packets, but I don't know if that is normal behaviour (some applications do that to end a network communication).
    - I have also noticed that the applications have problems communicating with the database, but I'm not even sure whether this is related. If you think so, I can post more info about it.
    - I changed the connector on the Tomcats to use the native one, but it's still the same.

    I don't even need a solution to this, just some guidance on how I can troubleshoot it better: analyze threads, monitor MySQL performance, sniff the traffic between the Apaches and the Tomcats? Ultimately, all I need is to balance the Tomcat instances in active/passive mode, so if there is another way to do that, I could give it a try.
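    On the maxThreads question, one hedged rule of thumb is to make the AJP connector's thread pool at least as large as the total number of connections the Apache workers can open, and to give the connector an explicit connectionTimeout so half-dead AJP connections don't linger; on the Apache side, health-check and retry parameters on the BalancerMember can make the 120006 failovers less abrupt. The numbers below are illustrative, not tuned values, and the ping/ttl/retry parameters assume an Apache 2.2 mod_proxy that supports them:

        <!-- Tomcat server.xml: AJP connector with an explicit pool and timeout -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
                   URIEncoding="UTF-8" enableLookups="false"
                   maxThreads="256" connectionTimeout="60000" />

        # Apache side: probe the member before use and retry failed members sooner
        BalancerMember ajp://lrab:8009 route=liferaya ping=5 ttl=60 retry=30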

  • DNS Problems (NIGHTMARES!) with BIND and Virtualmin

    - by Nyxynyx
    I have a webserver (Ubuntu 12.04 with LAMP) using Virtualmin / Webmin. Because I just moved from a cPanel system, I am having a nightmare configuring the DNS! Using intoDNS.com, the failed reports are:

        Mismatched NS records
        WARNING: One or more of your nameservers did not return any of your NS records.

        DNS servers responded
        ERROR: One or more of your nameservers did not respond:
        The ones that did not respond are: 123.123.123.123 213.251.188.141x

        Multiple Nameservers
        ERROR: Looks like you have less than 2 nameservers. According to RFC2182 section 5 you must have
        at least 3 nameservers, and no more than 7. Having 2 nameservers is also ok by me.

        Missing nameservers reported by your nameserver
        You should already know that your NS records at your nameservers are missing, so here it is again:
        ns1.mydomain.com. sdns2.ovh.net.

        SOA record
        No valid SOA record came back!

        MX Records

        WWW A Record
        ERROR: I could not get any A records for www.mydomain.com!

    Step-by-step of my attempt:

    In my domain registrar (Namecheap), I registered ns1.mydomain.com as a nameserver, pointing to the IP address of my web server, which is running bind9. The domain is set up with DNS ns1.mydomain.com and sdns2.ovh.net. sdns2.ovh.net is a secondary DNS server (SLAVE, pointing mydomain.com to the IP address of my web server).

        Webserver domain: mydomain.com
        Webserver hostname: ns4000000.ip-123-123-123.net
        Webserver IP: 123.123.123.123

    Under Virtualmin, I edited the default virtual server template: "BIND DNS records for new domains" is ns1.mydomain.com, and "Master DNS server hostname" is ns1.mydomain.com. Next I created a virtual server using that server template. This is what I've done, but it's still not working! Any ideas? I've been stuck for days; thank you for all your help!

        service bind9 status
        * bind9 is running

        lsof -i :53
        COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
        named 6966 bind 20u  IPv6 338583 0t0 TCP *:domain (LISTEN)
        named 6966 bind 21u  IPv4 338588 0t0 TCP localhost.localdomain:domain (LISTEN)
        named 6966 bind 22u  IPv4 338590 0t0 TCP ns4000000.ip-123-123-123.net:domain (LISTEN)
        named 6966 bind 512u IPv6 338582 0t0 UDP *:domain
        named 6966 bind 513u IPv4 338587 0t0 UDP localhost.localdomain:domain
        named 6966 bind 514u IPv4 338589 0t0 UDP ns4000000.ip-123-123-123.net:domain

    /etc/resolv.conf (not sure how 213.186.33.99 got here):

        nameserver 127.0.0.1
        nameserver 213.186.33.99
        search ovh.net

        host 123.123.123.123   (my web server's IP)
        13.60.245.198.in-addr.arpa domain name pointer ns4000000.ip-123-123-123.net.

        nslookup 213.186.33.99
        Server: 127.0.0.1
        Address: 127.0.0.1#53
        Non-authoritative answer:
        99.33.186.213.in-addr.arpa name = cdns.ovh.net.
        Authoritative answers can be found from:
        33.186.213.in-addr.arpa nameserver = ns.ovh.net.
        33.186.213.in-addr.arpa nameserver = dns.ovh.net.

        nslookup ns1.mydomain.com
        ;; Got SERVFAIL reply from 127.0.0.1, trying next server
        ;; connection timed out; no servers could be reached

        nslookup ns2.mydomain.com
        ;; Got SERVFAIL reply from 127.0.0.1, trying next server
        ;; connection timed out; no servers could be reached

        nslookup www.mydomain.com
        ;; Got SERVFAIL reply from 127.0.0.1, trying next server
        ;; connection timed out; no servers could be reached

        dig mydomain.com
        ; <<>> DiG 9.8.1-P1 <<>> mydomain.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 43540
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;mydomain.com.            IN    A
        ;; Query time: 0 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Thu Oct 11 11:30:09 2012
        ;; MSG SIZE  rcvd: 30

        dig ns1.mydomain.com
        ; <<>> DiG 9.8.1-P1 <<>> ns1.mydomain.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 31254
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;ns1.mydomain.com.        IN    A
        ;; Query time: 0 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Thu Oct 11 11:30:16 2012
        ;; MSG SIZE  rcvd: 34

    /etc/bind/named.conf:

        include "/etc/bind/named.conf.options";
        include "/etc/bind/named.conf.local";
        include "/etc/bind/named.conf.default-zones";

    /etc/bind/named.conf.default-zones:

        zone "." { type hint; file "/etc/bind/db.root"; };
        zone "localhost" { type master; file "/etc/bind/db.local"; };
        zone "127.in-addr.arpa" { type master; file "/etc/bind/db.127"; };
        zone "0.in-addr.arpa" { type master; file "/etc/bind/db.0"; };
        zone "255.in-addr.arpa" { type master; file "/etc/bind/db.255"; };

    /etc/bind/named.conf.local:

        zone "mydomain.com" {
            type master;
            file "/var/lib/bind/mydomain.com.hosts";
            allow-transfer { 127.0.0.1; localnets; };
        };

    /etc/bind/named.conf.options:

        options {
            directory "/var/cache/bind";
            dnssec-validation auto;
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
            // allow-recursion { 127.0.0.1; };
            // transfer-source;
        };

        named-checkconf -z
        dns_master_load: /var/lib/bind/mydomain.com.hosts:21: unexpected end of line
        dns_master_load: /var/lib/bind/mydomain.com.hosts:20: unexpected end of input
        /var/lib/bind/mydomain.com.hosts: file does not end with newline
        zone mydomain.com/IN: loading from master file /var/lib/bind/mydomain.com.hosts failed: unexpected end of input
        zone mydomain.com/IN: not loaded due to errors.
        _default/mydomain.com/IN: unexpected end of input
        zone localhost/IN: loaded serial 2
        zone 127.in-addr.arpa/IN: loaded serial 1
        zone 0.in-addr.arpa/IN: loaded serial 1
        zone 255.in-addr.arpa/IN: loaded serial 1

        iptables -L
        Chain INPUT (policy ACCEPT)
        target prot opt source destination
        ACCEPT udp  -- anywhere anywhere udp dpt:domain
        ACCEPT tcp  -- anywhere anywhere tcp dpt:20000
        ACCEPT tcp  -- anywhere anywhere tcp dpt:webmin
        ACCEPT tcp  -- anywhere anywhere tcp dpt:https
        ACCEPT tcp  -- anywhere anywhere tcp dpt:http
        ACCEPT tcp  -- anywhere anywhere tcp dpt:imaps
        ACCEPT tcp  -- anywhere anywhere tcp dpt:imap2
        ACCEPT tcp  -- anywhere anywhere tcp dpt:pop3s
        ACCEPT tcp  -- anywhere anywhere tcp dpt:pop3
        ACCEPT tcp  -- anywhere anywhere tcp dpt:ftp-data
        ACCEPT tcp  -- anywhere anywhere tcp dpt:ftp
        ACCEPT tcp  -- anywhere anywhere tcp dpt:domain
        ACCEPT tcp  -- anywhere anywhere tcp dpt:submission
        ACCEPT tcp  -- anywhere anywhere tcp dpt:smtp
        ACCEPT tcp  -- anywhere anywhere tcp dpt:ssh
        ACCEPT all  -- anywhere anywhere
        Chain FORWARD (policy ACCEPT)
        target prot opt source destination
        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination
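    Note that named-checkconf is already pointing at the immediate problem: the zone file /var/lib/bind/mydomain.com.hosts is truncated and missing its final newline, so the zone never loads and every lookup SERVFAILs. For reference, a minimal skeleton of what that file should roughly contain is below; the serial, TTLs and addresses are placeholders, and the file must end with a newline:

        ; /var/lib/bind/mydomain.com.hosts -- minimal sketch, not the real file
        $ttl 38400
        mydomain.com.   IN  SOA ns1.mydomain.com. hostmaster.mydomain.com. (
                                2012101100  ; serial
                                10800       ; refresh
                                3600        ; retry
                                604800      ; expire
                                38400 )     ; minimum
        mydomain.com.   IN  NS  ns1.mydomain.com.
        mydomain.com.   IN  NS  sdns2.ovh.net.
        mydomain.com.   IN  A   123.123.123.123
        ns1             IN  A   123.123.123.123
        www             IN  A   123.123.123.123

    After fixing the file, named-checkzone mydomain.com /var/lib/bind/mydomain.com.hosts should report "OK", and a service bind9 restart (or rndc reload) should make the SERVFAILs go away before re-testing with intoDNS.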

  • What are my options with a downloaded Rackspace cloud image?

    - by schnippy
    I've got an unresponsive Rackspace slice that has defied all attempts at access. I created an emergency image from it and then deleted the slice, downloading the files that comprise the image to a local machine. There are a number of files / assets I would still like to recover from this server if possible, but I'm not sure exactly what I can do with the image files, if anything. Here are the files I have, for what it's worth:

        emergency_########_######_cloudserver########.tar.gz.0 (5gb)
        emergency_########_######_cloudserver########.tar.gz.1 (5gb)
        emergency_########_######_cloudserver########.tar.gz.2 (5gb)
        emergency_########_######_cloudserver########.tar.gz.3 (50mb)
        emergency_########_######_cloudserver########.yml (25kb)

    Is it possible to mount this image as a drive? Are there other forensic recovery options?
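    Since the .0 through .3 files look like sequential chunks of a single split tarball, one hedged first step is simply to reassemble and list it; whether it contains plain files or a raw disk image decides the next move. The file names below follow the pattern in the question; everything else is a guess:

        # stitch the chunks back together in order and peek inside
        cat emergency_*_cloudserver*.tar.gz.[0-3] > emergency.tar.gz
        tar -tzf emergency.tar.gz | head

        # if the listing shows ordinary paths (etc/, home/, var/ ...), extract:
        mkdir recovered && tar -xzf emergency.tar.gz -C recovered

        # if it instead contains a raw filesystem image, loop-mount it read-only
        # (the image name "disk.raw" is hypothetical)
        sudo mount -o loop,ro recovered/disk.raw /mnt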

  • Why isn't "(ulimit -d 1000; firefox) &" working?

    - by MaxB
    I'm trying to limit the memory usage of Firefox to prevent it from stalling the whole system on problematic web sites. I tried, in bash:

        (ulimit -d 1000; firefox) &

    This should limit the memory usage to 1000 kB. Then I opened YouTube and noticed, in top, that Firefox is using 2.6% of the memory, or about 200 MB, and not crashing. Clearly the limit is being ignored. Why is that, and how can I enforce it correctly?
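    A likely explanation, offered tentatively: ulimit -d caps only the data segment grown via brk, while modern allocators obtain most memory through mmap, which that limit does not cover. The address-space limit (-v) is the one that usually bites:

        # cap total virtual address space instead of just the data segment;
        # the value is in kB, so this allows roughly 2 GB
        (ulimit -v 2000000; firefox) &

    Too small a value makes Firefox abort at startup, so the number needs experimenting with rather than taking 2000000 as gospel.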

  • Ubuntu dpkg error, after crash and filesystem error recovery

    - by Radian
    Ubuntu recently crashed, damaging its partition (which is EXT4), and then it was unable to boot: it couldn't mount anything and only dropped to a BusyBox prompt. So I used the live CD to run fsck on the partition, which fixed it but deleted some inodes. Now Ubuntu works, but some files are missing; for example, I lost my panel configuration and Chromium's extensions.

    The most annoying problem is that some files are corrupted. For example, when I try to install any program I get this:

        (Reading database ... 95%dpkg: unrecoverable fatal error, aborting:
         files list file for package 'libservlet2.4-java' is missing final newline

    I tried these commands:

        dpkg --configure -a
        apt-get -f install

    and, from the GUI, Synaptic Package Manager's "Fix Broken Packages". So, about this 'libservlet2.4-java' file: does anyone know what it does, where it is located, and how I can fix it or get a correct version of it? Also, is there any way I can tell Ubuntu to check ALL of its files and, if something is corrupted, recover it from the CD?
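    The complaint is about dpkg's own bookkeeping file for that package, not about the package's code. A hedged repair sketch (back the file up before touching it; reinstalling the package is the cleaner route since it rewrites the list properly):

        # the file dpkg is unhappy about lives here
        ls -l /var/lib/dpkg/info/libservlet2.4-java.list

        # crude fix: give the truncated list file its missing final newline
        sudo sh -c 'echo >> /var/lib/dpkg/info/libservlet2.4-java.list'

        # cleaner fix: reinstall the package so dpkg regenerates its metadata
        sudo apt-get install --reinstall libservlet2.4-java

        # to hunt for other packages whose installed files were damaged by fsck,
        # debsums (a separate package) can verify checksums across the system
        sudo apt-get install debsums && sudo debsums -s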

  • Is there a max thread count per mongrel?

    - by Blankman
    I don't know much about Ruby, much less about what is involved in hosting a Ruby on Rails web app. But I recall hearing someone say that they have to run multiple Mongrels because of a limit of 50 threads? Is this true (or something similar)? Why does it have this limitation?

  • Rip authentication from LDAP to local

    - by oxinabox
    We are taking a small portion of our network offline and running a separate network using that portion. (By small portion I mean 2 servers, which will be connected to 30-odd boxes that aren't usually part of our network and don't need to authenticate.) I intend to create a VM on one of the servers to provide general user services: an IRC server, remote shell, etc. I would like the users to be able to use their usual server login details. The problem is that the LDAP server that normally checks those details is not one of the servers going offline, so I need to somehow take their details off LDAP and put them on the server that is coming along.

    One suggestion I had was to set up an LDAP server locally on the VM and clone the LDAP database onto it (using something called slapcat). Is this the best way? Or can I convert the LDAP data into local authentication data?
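    Both routes are workable; a hedged sketch of each is below. The LDAP suffix and filter are placeholders for whatever your directory actually uses:

        # Option 1: clone the directory onto a local slapd on the VM
        slapcat -l users.ldif            # dump, run on the existing LDAP server
        service slapd stop               # on the VM, with slapd installed
        slapadd -l users.ldif            # load the dump into the local slapd
        service slapd start

        # Option 2: turn directory entries into local accounts instead;
        # ldapsearch pulls the posixAccount attributes, which can then be fed
        # to useradd (or the migrationtools scripts) entry by entry
        ldapsearch -x -b "dc=example,dc=com" \
            "(objectClass=posixAccount)" uid uidNumber gidNumber homeDirectory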

  • md5sum returns a different hash value than online hash generators

    - by Ravi
    On SUSE 10, md5sum myname gives the MD5 hash as:

        49b0939cb2db9d21b038b7f7d453cd5d

    The file myname contains the string "ravi", while some of the online MD5 hash generators seem to give a different hash for the same string:

        http://md5-encryption.com/
        http://www.miraclesalad.com/webtools/md5.php

    They spit out the hash for "ravi" as:

        63dd3e154ca6d948fc380fa576343ba6

    Why is there a difference in the MD5 sum for the same string "ravi"?
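    Almost certainly the file contains "ravi" plus a trailing newline (echo and most editors add one), whereas the online generators hash the four bytes alone. This is easy to verify locally:

        # hash the bare string vs. the string with a trailing newline
        printf 'ravi'   | md5sum    # should match the online generators
        printf 'ravi\n' | md5sum    # should match "md5sum myname"

        # check whether the file really ends in a newline (\n) and is 5 bytes
        od -c myname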

  • Debugging Samba/CUPS printer sharing with Windows

    - by mrdrbob
    I've got an HP Deskjet hooked up to a Slackware 12.2 box. I've got CUPS set up and can print a test page from the box just fine. I've also got Samba set up, and I have a couple of file shares that work fine. I'm trying to share that HP Deskjet out via Samba, but I can't get it to show up on any Windows system. I see the server and its file shares in Windows networking, but when I open Printers, no printer shows up. Running net view \\servername from the command line lists the file shares, but no printers.

    Here's the pertinent part of my smb.conf, if that helps:

        [global]
        workgroup = HOMENET
        security = share
        hosts allow = 192.168.1. 192.168.2. 127.
        load printers = yes
        printcap name = cups
        printing = cups
        log file = /var/log/samba.%m
        max log size = 50

        [printers]
        comment = All Printers
        path = /var/spool/samba
        browseable = no
        public = yes
        writable = no
        printable = yes
        guest only = yes

    Can anyone give me some pointers as to where to start looking for potential causes?

    Update: Running testparm shows no errors. Here's the output (minus the file shares):

        [global]
        workgroup = HOMENET
        security = SHARE
        log file = /var/log/samba.%m
        max log size = 50
        printcap name = cups
        hosts allow = 192.168.1., 192.168.2., 127.

        [printers]
        comment = All Printers
        path = /var/spool/samba
        guest only = Yes
        guest ok = Yes
        printable = Yes
        browseable = No
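    A couple of hedged checks from the Samba side before digging into Windows: confirm that smbd itself is exporting a printer share, and that the spool directory and CUPS queue name line up with the config above:

        # ask smbd directly what it exports; the printer should appear as a
        # share of type "Printer", named after the CUPS queue
        smbclient -L //servername -N

        # the spool path in smb.conf must exist and be world-writable
        ls -ld /var/spool/samba
        chmod 1777 /var/spool/samba

        # the CUPS queue name Samba will advertise
        lpstat -p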

  • Restrict SSH user to connection from one machine

    - by Jonathan
    During set-up of a home server (running Kubuntu 10.04), I created an admin user for performing administrative tasks that may require an unmounted home. This user has a home directory on the root partition of the box.

    The machine has an internet-facing SSH server, and I have restricted the set of users that can connect via SSH, but I would like to restrict it further by making admin accessible only from my laptop (or perhaps only from the local 192.168.1.0/24 range). I currently have only an AllowGroups ssh-users directive, with myself and admin as members of the ssh-users group.

    What I want is something that works like you might expect this setup to work (but it doesn't):

        $ groups jonathan
        ... ssh-users
        $ groups admin
        ... ssh-restricted-users
        $ cat /etc/ssh/sshd_config
        ...
        AllowGroups ssh-users [email protected].*
        ...

    Is there a way to do this? I have also tried this, but it did not work (admin could still log in remotely):

        AllowUsers [email protected].* *
        AllowGroups ssh-users

    with admin a member of ssh-users. I would also be fine with only allowing admin to log in with a key and disallowing password logins, but I could find no general setting for sshd; there is a setting that requires root logins to use a key, but not one for general users.
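    Two OpenSSH mechanisms fit the "key only, LAN only" version of this: a Match block in sshd_config for per-user settings, and a from= restriction on the key itself in authorized_keys. A hedged sketch; the address range is taken from the question, the key material and comment are placeholders:

        # /etc/ssh/sshd_config -- keep the group restriction, but force admin
        # to authenticate with a key only
        AllowGroups ssh-users
        Match User admin
            PasswordAuthentication no

        # ~admin/.ssh/authorized_keys -- the key is only accepted from the LAN
        from="192.168.1.*" ssh-rsa AAAA...rest-of-key... admin@laptop

    The from= option is enforced by sshd regardless of where the login attempt comes from, so together these mean admin can only get in with that key and only from the 192.168.1.x range, while other ssh-users members are unaffected.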
