Search Results

Search found 36619 results on 1465 pages for 'damn small linux'.


  • HALEVT troubleshooting: VFAT usb storage device gets mounted with root:root user:group

    - by Nova deViator
    Hi, I've been banging my head against this problem for a number of days. I'm using Halevt for automounting, and everything mostly works, but Halevt mounts external USB storage devices as root. So, as a user, I cannot write to files on them. Halevt gets run as the halevt user on boot through an /etc/init.d script. This is Ubuntu Lucid with Awesome WM. No GDM. Running halevt as my own user seems not to work (halevt runs but doesn't respond on insert). I know HAL is deprecated and removed and I should probably write my own udev rules, but until then it seems there must be a simple hack that enables mounting VFAT/NTFS devices with a specific uid/gid. This question/answer helps a lot, but not specifically with the above.
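
    A frequently suggested interim workaround (a sketch only; the mount point, uid/gid values and rule file name are assumptions, not taken from the question) is to mount VFAT media with explicit ownership options, either via /etc/fstab or a udev rule:

        # /etc/fstab: mount a known VFAT stick writable by uid/gid 1000 (UUID is a placeholder)
        UUID=XXXX-XXXX  /media/usbstick  vfat  user,noauto,uid=1000,gid=1000,umask=002  0  0

        # Or, roughly, as /etc/udev/rules.d/99-usb-vfat.rules (illustrative only)
        ACTION=="add", SUBSYSTEMS=="usb", ENV{ID_FS_TYPE}=="vfat", \
            RUN+="/bin/mount -o uid=1000,gid=1000,umask=002 /dev/%k /media/usbstick"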

    Read the article

  • Hanging of host network connections when starting KVM guest on bridge

    - by Chris Phillips
    Hi, I've a KVM system upon which I'm running a network bridge directly between all VMs and a bond0 (eth0, eth1) on the host OS. As such, all machines are presented on the same subnet, available outside of the box. The bond is doing mode 1 active/passive, with an arp_ip_target set to the default gateway, which has caused some issues in itself, but I can't see the bond configs mattering here myself.

    I'm seeing odd things most times when I stop and start a guest on the platform, in that on the host I lose network connectivity (icmp, ssh) for about 30 seconds. I don't lose connectivity on the other already running VMs though... they can always ping the default GW, but the host can't. I say "about 30 seconds" but from some tests it actually seems to be 28 seconds usually (or at least, I lose 28 pings...) and I'm wondering if this somehow relates to the bridge config. I'm not running STP on the bridge at all, the forwarding delay is set to 1 second, path cost on bond0 is lowered to 10 and port priority of bond0 is also lowered to 1. As such I don't think that the bridge should ever be able to think that bond0 is not connected just fine (as continued guest connectivity implies), yet the IP of the host, which is on the bridge device (... could that matter??), becomes unreachable. I'm fairly sure it's about the bridged networking, but at the same time, as this happens when a VM is started there are clearly loads of other things also happening, so maybe I'm way off the mark.

    Lack of connectivity:

        # ping 10.20.11.254
        PING 10.20.11.254 (10.20.11.254) 56(84) bytes of data.
        64 bytes from 10.20.11.254: icmp_seq=1 ttl=255 time=0.921 ms
        64 bytes from 10.20.11.254: icmp_seq=2 ttl=255 time=0.541 ms
        type=1700 audit(1293462808.589:325): dev=vnet6 prom=256 old_prom=0 auid=4294967295 ses=4294967295
        type=1700 audit(1293462808.604:326): dev=vnet7 prom=256 old_prom=0 auid=4294967295 ses=4294967295
        type=1700 audit(1293462808.618:327): dev=vnet8 prom=256 old_prom=0 auid=4294967295 ses=4294967295
        kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0x186 data 0x130079
        kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xffdd694a
        kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0x186 data 0x530079
        64 bytes from 10.20.11.254: icmp_seq=30 ttl=255 time=0.514 ms
        64 bytes from 10.20.11.254: icmp_seq=31 ttl=255 time=0.551 ms
        64 bytes from 10.20.11.254: icmp_seq=32 ttl=255 time=0.437 ms
        64 bytes from 10.20.11.254: icmp_seq=33 ttl=255 time=0.392 ms

    brctl output of the relevant bridge:

        # brctl showstp brdev
        brdev
         bridge id              8000.b2e1378d1396
         designated root        8000.b2e1378d1396
         root port                 0                    path cost                  0
         max age               19.99                    bridge max age         19.99
         hello time             1.99                    bridge hello time       1.99
         forward delay          0.99                    bridge forward delay    0.99
         ageing time          299.95
         hello timer            0.50                    tcn timer               0.00
         topology change timer  0.00                    gc timer                0.04
         flags

        vnet5 (3)
         port id                8003                    state             forwarding
         designated root        8000.b2e1378d1396       path cost                100
         designated bridge      8000.b2e1378d1396       message age timer       0.00
         designated port        8003                    forward delay timer     0.00
         designated cost           0                    hold timer              0.00
         flags

        vnet0 (2)
         port id                8002                    state             forwarding
         designated root        8000.b2e1378d1396       path cost                100
         designated bridge      8000.b2e1378d1396       message age timer       0.00
         designated port        8002                    forward delay timer     0.00
         designated cost           0                    hold timer              0.00
         flags

        bond0 (1)
         port id                0001                    state             forwarding
         designated root        8000.b2e1378d1396       path cost                 10
         designated bridge      8000.b2e1378d1396       message age timer       0.00
         designated port        0001                    forward delay timer     0.00
         designated cost           0                    hold timer              0.00
         flags

    I do see the new port listed as learning, but in line with the forward delay, only for 1 or 2 seconds when polling the brctl output in a loop. All pointers, tips or stabs in the dark appreciated.
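
    One pattern sometimes reported with this exact symptom (offered only as a hedged guess, not a confirmed diagnosis) is the bridge adopting the MAC address of a newly added vnet/tap port, which forces the upstream gateway to re-learn the host's ARP entry while guests stay unaffected. A quick check and a sketch of the usual workaround (the MAC shown is a placeholder; use bond0's real address):

        # Check whether the bridge's MAC changes when a guest starts or stops
        ip link show brdev | grep link/ether

        # If it does, pin the bridge MAC to bond0's address so new ports can't change it
        ip link set dev brdev address aa:bb:cc:dd:ee:ff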

    Read the article

  • (manually configured) kernel update leaves wireless in a mess

    - by Mala
    I recently upgraded my kernel from 2.6.31-gentoo-r6 to 2.6.32-gentoo-r7. In both cases, I configured everything manually. However, since the upgrade, my wireless card appears to be on the fritz. It will connect to networks just fine and remain connected, but it can only access the internet (and other hosts on the network) for about 3 seconds after connecting. Reconnecting to the network appears to fix the problem... for another 3 seconds or so. The problem is "solved" by booting into the older kernel. The relevant lspci entry is:

        02:00.0 Network controller: Intel Corporation PRO/Wireless 5300 AGN [Shiloh] Network Connection

    I'm pretty sure I have the correct drivers enabled in the kernel:

        Device Drivers --->
          Network device support --->
            Wireless LAN (IEEE 802.11) --->
              <*> Intel Wireless Wifi
              [*]   Enable LED support in iwlagn and iwl3945 drivers
              [*]   Enable Spectrum Measurement in iwlagn driver
              [*]   Enable full debugging output in iwlagn and iwl3945 drivers
              <*>   Intel Wireless WiFi Next Gen AGN (iwlagn)
              [*]     Intel Wireless WiFi 4965AGN
              [*]     Intel Wireless WiFi 5000AGN; Intel WiFi Link 1000, 6000, and 6050 Series

    I tried with the other Intel drivers enabled as well (iwl3945) and it made no difference. Is there something stupid I'm missing? Is there something I have to recompile after upgrading the kernel (a la nvidia)? Thanks, Mala
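
    One thing worth ruling out (a hedged suggestion, not a confirmed cause) is missing or mismatched iwlwifi firmware for the new kernel, which can look exactly like a link that associates but passes almost no traffic:

        # Look for firmware load errors from the iwlagn driver after connecting
        dmesg | grep -i -E 'iwl|firmware'

        # Confirm the 5000-series microcode the driver asks for is present
        # (the exact filename depends on the driver version)
        ls /lib/firmware/ | grep -i iwlwifi-5000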

    Read the article

  • Installing checkinstall on x86_64

    - by SephMerah
    I downloaded the source for checkinstall, checkinstall-1.6.2.tar.gz, and extracted it with tar -xzvf checkinstall-1.6.2.tar.gz. Then I ran make, which prints this error:

        [root@ip-50-63-180-135 checkinstall-1.6.2]# make
        for file in locale/checkinstall-*.po ; do \
            case ${file} in \
            locale/checkinstall-template.po) ;; \
            *) \
                out=`echo $file | sed -s 's/po/mo/'` ; \
                msgfmt -o ${out} ${file} ; \
                if [ $? != 0 ] ; then \
                    exit 1 ; \
                fi ; \
                ;; \
            esac ; \
        done
        make -C installwatch
        make[1]: Entering directory `/home/sofiane/checkinstall-1.6.2/installwatch'
        gcc -Wall -c -D_GNU_SOURCE -DPIC -fPIC -D_REENTRANT -DVERSION=\"0.7.0beta7\" installwatch.c
        installwatch.c:2942: error: conflicting types for 'readlink'
        /usr/include/unistd.h:828: note: previous declaration of 'readlink' was here
        installwatch.c:3080: error: conflicting types for 'scandir'
        /usr/include/dirent.h:252: note: previous declaration of 'scandir' was here
        installwatch.c:3692: error: conflicting types for 'scandir64'
        /usr/include/dirent.h:275: note: previous declaration of 'scandir64' was here
        make[1]: *** [installwatch.o] Error 1
        make[1]: Leaving directory `/home/sofiane/checkinstall-1.6.2/installwatch'
        make: *** [all] Error 2

    I searched extensively on this issue and this solution looks promising. Should I attempt to install checkinstall as an fpm? What would be the best way to go about that? CentOS 6.3 x86_64.
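
    If the fpm route is what's being weighed up, the general pattern is that fpm wraps an already-installed directory tree into an RPM. A minimal sketch (assumptions: Ruby/rubygems are available, and the Makefile honours DESTDIR for a staged install, which still presupposes the compile error above has been fixed or a newer checkinstall release is used):

        # fpm ships as a Ruby gem
        gem install fpm

        # Stage an install into a scratch directory, then wrap it in an RPM
        # (paths, package name and version here are illustrative)
        make install DESTDIR=/tmp/checkinstall-root
        fpm -s dir -t rpm -n checkinstall -v 1.6.2 -C /tmp/checkinstall-root .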

    Read the article

  • Problem about IP and computer name in Ubuntu

    - by bugbug
    I can't connect to my MySQL database because it always resolves 192.168.1.101 to ubuntu.local:

        $ mysql -uroot -padmin1234 -h192.168.1.101
        ERROR 1045 (28000): Access denied for user 'root'@'ubuntu.local' (using password: YES)

    How do I solve this problem? The file /etc/hosts on this machine contains:

        127.0.0.1 localhost
        127.0.1.1 ubuntu.ubuntu-domain ubuntu

        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        ff02::3 ip6-allhosts

    I have no idea about 'root'@'ubuntu.local' — where does it come from?
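
    The host part of that error comes from MySQL reverse-resolving the client's IP before matching grants. Two common fixes, sketched with the credentials from the question as placeholders:

        # Option 1: in /etc/mysql/my.cnf under [mysqld], then restart MySQL,
        # so accounts are matched by IP instead of resolved name
        skip-name-resolve

        # Option 2: from a session that can already connect (e.g. via localhost),
        # create a grant that matches the client however it resolves
        GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.1.%' IDENTIFIED BY 'admin1234';
        FLUSH PRIVILEGES;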

    Read the article

  • Amazon EC2 tools for Debian?

    - by Jonik
    What is the recommended way of getting command-line Amazon EC2 tools on Debian? So, basically the same as this question, but for EC2 instead of S3. Ubuntu has ec2-ami-tools and ec2-api-tools, but I couldn't find equivalent packages for Debian. A blog post titled "Install EC2 AMI & API tools in Debian" talks about installing Amazon's packages outside package management, but that seems a little clumsy.
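
    In the absence of native Debian packages, the usual manual route (a sketch; the install path, version-suffixed directory and Java location are assumptions about the local setup) is to unpack Amazon's zip and export a couple of environment variables:

        # Requires a JRE; unzip the tools somewhere stable
        unzip ec2-api-tools.zip -d /usr/local/

        # Then in ~/.bashrc (adjust the version-suffixed directory name)
        export EC2_HOME=/usr/local/ec2-api-tools-x.y.z
        export JAVA_HOME=/usr/lib/jvm/default-java
        export PATH=$PATH:$EC2_HOME/bin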

    Read the article

  • What are "build-essential" & "build-dep"?

    - by Adam Siddhi
    I am researching how to install Ruby 1.9.1 on Xubuntu 10.04 and I keep coming across build-essential and build-dep. Sometimes one of them is followed by packages and sometimes it is both preceded and followed by packages. The two examples I am looking at are:

        sudo apt-get install build-essential zlib1g zlib1g-dev zlibc libruby1.9 libxml2 libxml2-dev libxslt-dev
        sudo apt-get build-dep ruby1.9

    and

        sudo apt-get install ruby irb ri rdoc ruby1.8-dev libzlib-ruby libyaml-ruby libreadline-ruby libncurses-ruby libcurses-ruby libruby libruby-extras libfcgi-ruby1.8 build-essential libopenssl-ruby libdbm-ruby libdbi-ruby libdbd-sqlite3-ruby sqlite3 libsqlite3-dev libsqlite3-ruby libxml-ruby libxml2-dev
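
    For reference, a short sketch of the difference (the ruby1.9 source package name is taken from the question):

        # build-essential is an ordinary (meta)package: it pulls in gcc, g++, make,
        # libc development headers and dpkg-dev, i.e. the generic C/C++ toolchain
        sudo apt-get install build-essential

        # build-dep is an apt-get subcommand: it installs whatever is needed to
        # compile the named *source* package from the archive
        # (requires deb-src lines in /etc/apt/sources.list)
        sudo apt-get build-dep ruby1.9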

    Read the article

  • NFS mount mounted inside another NFS mount disappears randomly

    - by espenfjo
    I have quite an odd issue where my nested NFS mounts just disappear randomly from time to time. The fstab entries look somewhat like this:

        nfs:/home   /home/nfs        rw,hard,intr,rsize=32768,noatime,nocto,proto=tcp    0 0
        nfs:/bigdir /home/bigdir nfs rw,hard,intr,rsize=32768,noatime,nocto,proto=tcp,bg 0 0

    The issue is that from time to time the /home/bigdir folder will be empty, even though mtab thinks that the share is still mounted. nfsstat et al. also think the share is still mounted. The only thing that works is unmounting and then (re)mounting the bigdir share. The server side is a NetApp. The client side is RHEL 5.5, 2.6.18-194 kernel (yes, I know 5.8 is out, but as far as I can see there are no errata for this particular issue). I can use various hacks like automount, or mounting it to another path and then using mount --bind, but I would like to fix the underlying issue. -- Best regards, Espen Fjellvær Olsen

    Read the article

  • How can I remove all the files which have a string in them

    - by michael
    Hi, I am trying to remove all the files in a directory hierarchy which have a certain string inside the file (not in the file name — in the file content). I can list all the file names which contain the string using 'grep -r -l mystring', but how can I remove all the files returned by grep? I am trying this on Ubuntu. Thank you.
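
    A common way to wire the two together (a sketch; 'mystring' is the placeholder from the question, and the NUL-separated form guards against spaces in file names):

        # List matching files NUL-separated and delete them
        grep -rlZ 'mystring' . | xargs -0 rm --

        # Or the find-based equivalent
        find . -type f -exec grep -q 'mystring' {} \; -delete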

    Read the article

  • Create XFS volume on /dev/sg* device

    - by cpt.Buggy
    I now have a couple of Supermicro 24x2TB SATA servers and no idea how to get access to the disks. I need to create an XFS volume on each of them but really don't know how to do it, because fdisk doesn't see them.

        # sg_scan -i
        /dev/sg0: scsi0 channel=0 id=0 lun=0 [em]
            ATA       ST3250318AS       CC38  [rmb=0 cmdq=0 pqual=0 pdev=0x0]
        /dev/sg1: scsi1 channel=0 id=0 lun=0 [em]
            ATA       ST3250318AS       CC38  [rmb=0 cmdq=0 pqual=0 pdev=0x0]
        /dev/sg2: scsi6 channel=1 id=8 lun=0 [em]
            Hitachi   HDS722020ALA330   JKAO  [rmb=0 cmdq=1 pqual=1 pdev=0x0]
        ...
        /dev/sg25: scsi6 channel=1 id=31 lun=0 [em]
            Hitachi   HDS722020ALA330   JKAO  [rmb=0 cmdq=1 pqual=1 pdev=0x0]
        /dev/sg26: scsi6 channel=3 id=0 lun=0 [em]
            LSILOGIC  SASX36 A.1        7017  [rmb=0 cmdq=1 pqual=0 pdev=0xd]

        # sg_map
        /dev/sg0  /dev/sda
        /dev/sg1  /dev/sdb
        /dev/sg2
        ...
        /dev/sg25
        /dev/sg26

    I can't use fdisk and mkfs, so what should I do?
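
    As a hedged note: the /dev/sg* nodes are SCSI pass-through devices, not something a filesystem goes on; fdisk and mkfs need the corresponding /dev/sd* block devices, and the empty sg_map entries suggest the disks behind the LSI expander are not being exposed as block devices yet (driver or controller configuration to check). Once they do appear, a sketch for one disk (device name is a placeholder; GPT avoids any worry about the MS-DOS label's 2 TiB limit):

        # List SCSI block devices the kernel currently exposes
        lsscsi            # or: cat /proc/partitions

        # Label and partition one disk (sdc is a placeholder)
        parted /dev/sdc mklabel gpt
        parted /dev/sdc mkpart primary xfs 0% 100%

        # Create the XFS filesystem
        mkfs.xfs /dev/sdc1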

    Read the article

  • How do I install Apache on my Ubuntu 12.04 where it has VirtualHosts

    - by YumYumYum
    According to the docs at https://help.ubuntu.com/10.04/serverguide/httpd.html I have done the following, which is almost exactly how I always do it on Fedora, but on Ubuntu it doesn't seem to work.

    a) DNS names to IP:

        $ echo "127.0.0.1 a" > /etc/hosts
        $ echo "127.0.0.1 b" > /etc/hosts

    b) Apache virtual hosts:

        $ ls
        1  2  default  default.backup  default-ssl
        $ cat 1
        <VirtualHost *:80>
            ServerName a
            ServerAlias a
            DocumentRoot /var/www/html/a/public
            <Directory /var/www/html/a/public>
                #AddDefaultCharset utf-8
                DirectoryIndex index.php
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
        $ cat 2
        <VirtualHost *:80>
            ServerName b
            ServerAlias b
            DocumentRoot /var/www/html/b/public
            <Directory /var/www/html/b/public>
                #AddDefaultCharset utf-8
                DirectoryIndex index.php
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    c) Load them into Apache and restart the service:

        $ a2ensite 1
        $ a2ensite 2
        $ a2dissite default
        $ /etc/init.d/apache2 restart

    d) Browse the two new hosts:

        $ firefox http://a

    It does not work: both http://a and http://b always end up in /var/www/html. How do I fix it so that http://a goes to its own directory, /var/www/html/a/public, and not /var/www/html?
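
    One detail worth flagging in step (a), as a hedged observation rather than a full diagnosis: '>' truncates /etc/hosts on every echo, so only the last name survives and the localhost entries are lost. Appending keeps the file intact, and apache2ctl shows which vhost will actually answer for each name:

        # Append instead of overwrite; both names resolve to 127.0.0.1
        echo "127.0.0.1 a" >> /etc/hosts
        echo "127.0.0.1 b" >> /etc/hosts

        # Dump the parsed virtual-host configuration Apache is really using
        apache2ctl -S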

    Read the article

  • How to Find Out What Version of Display Driver is Installed

    - by Artium
    One of my favorite games, "Wolfenstein Enemy Territory", has stopped working lately. It throws a segfault during the initialization phase. I suspect that the reason is a recent update to the video card driver. The problem started after I updated Ubuntu but I do not remember if there was a driver update in the list. My question is how can I check this. How can I view the current version of the display driver installed and the date it was last updated? If I discover that this is indeed the problem, will it be possible to revert the update and stay with the previous version of the driver?
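
    A few hedged starting points for checking this on Ubuntu (package names assume the open-source X/Mesa stack; a proprietary driver would have its own packages, and glxinfo needs mesa-utils installed):

        # Which kernel/X driver is bound to the GPU
        lspci -k | grep -A 3 -i vga

        # Versions of the installed X video driver packages
        dpkg -l | grep xserver-xorg-video

        # When graphics-related packages were last upgraded
        grep " upgrade " /var/log/dpkg.log | grep -i -E 'xorg|nvidia|fglrx|mesa'

        # OpenGL renderer and version currently in use
        glxinfo | grep -i opengl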

    Read the article

  • Assign fixed IP address via DHCP by DNS lookup

    - by Janoszen
    Preface: I'm building a virtualization environment with Ubuntu 14.04 and LXC. I don't want to write my own template, since the upgrade from 12.04 to 14.04 has shown that backwards compatibility is not guaranteed. Therefore I'm deploying my virtual machines via lxc-create, using the default Ubuntu template. The DNS for the servers is provided by Amazon Route 53, so no local DNS server is needed. I also use Puppet to configure my servers, so I want to keep the manual effort on the deployment minimal. Now, the default Ubuntu template assigns IP addresses via DHCP. Therefore, I need a local DHCP server to assign IP addresses to the nodes, so I can SSH into them and get Puppet running. Since Puppet requires a proper DNS setup, assigning temporary IP addresses is not an option; the client needs to get the right hostname and IP address from the start.

    Question: What DHCP server do I use, and how do I get it to assign the IP address based only on the host-name DHCP option by performing a DNS lookup on that very host name?

    What I've tried: I tried to make it work using the ISC DHCP server; however, the manual clearly states:

        Please be aware that only the dhcp-client-identifier option and the hardware
        address can be used to match a host declaration, or the host-identifier option
        parameter for DHCPv6 servers. For example, it is not possible to match a host
        declaration to a host-name option. This is because the host-name option cannot
        be guaranteed to be unique for any given client, whereas both the hardware
        address and dhcp-client-identifier option are at least theoretically guaranteed
        to be unique to a given client.

    I also tried to create a class that matches the hostname like this:

        class "my-client-name" {
            match if option host-name = "my-client-name";
            fixed-address my-client-name.my-domain.com;
        }

    Unfortunately the fixed-address option is not allowed in class statements. I can replace it with a 1-size pool, which works as expected:

        subnet 10.103.0.0 netmask 255.255.0.0 {
            option routers 10.103.1.1;
            class "my-client-name" {
                match if option host-name = "my-client-name";
            }
            pool {
                allow members of "my-client-name";
                range 10.103.1.2 10.103.1.2;
            }
        }

    However, this would require me to administer the IP addresses in two places (Amazon Route 53 and the DHCP server), which I would prefer not to do.

    About security: Since this is only used in the bootstrapping phase on an internal network and is then replaced by a static network configuration by Puppet, this shouldn't be an issue from a security standpoint. I am, however, aware that the virtual machine bootstraps with "ubuntu:ubuntu" credentials, which I intend to fix once this is running.

    Read the article

  • iptables to block VPN-traffic if not through tun0

    - by dacrow
    I have a dedicated webserver running Debian 6 and some Apache, Tomcat, Asterisk and mail stuff. Now we needed to add VPN support for a special program. We installed OpenVPN and registered with a VPN provider. The connection works well and we have a virtual tun0 interface for tunneling.

    To achieve the goal of tunneling only a single program through the VPN, we start the program with sudo -u username -g groupname command and added an iptables rule to mark all traffic coming from groupname:

        iptables -t mangle -A OUTPUT -m owner --gid-owner groupname -j MARK --set-mark 42

    Afterwards we tell iptables to do some SNAT and tell ip route to use a special routing table for marked traffic packets.

    Problem: if the VPN fails, there is a chance that the special to-be-tunneled program communicates over the normal eth0 interface.

    Desired solution: all marked traffic should not be allowed to go directly through eth0; it has to go through tun0 first. I tried the following commands, which didn't work:

        iptables -A OUTPUT -m owner --gid-owner groupname ! -o tun0 -j REJECT
        iptables -A OUTPUT -m owner --gid-owner groupname -o eth0 -j REJECT

    It might be that the above iptables rules didn't work due to the fact that the packets are first marked, then put into tun0 and then transmitted by eth0 while they are still marked. I don't know how to de-mark them after tun0, or how to tell iptables that all marked packets may pass eth0 if they were in tun0 before or if they are going to the gateway of my VPN provider. Does someone have any idea for a solution?

    Some config info:

        # iptables -nL -v --line-numbers -t mangle
        Chain OUTPUT (policy ACCEPT 11M packets, 9798M bytes)
        num   pkts bytes target    prot opt in  out  source     destination
        1     591K   50M MARK      all  --  *   *    0.0.0.0/0  0.0.0.0/0    owner GID match 1005 MARK set 0x2a
        2    82812 6938K CONNMARK  all  --  *   *    0.0.0.0/0  0.0.0.0/0    owner GID match 1005 CONNMARK save

        # iptables -nL -v --line-numbers -t nat
        Chain POSTROUTING (policy ACCEPT 393 packets, 23908 bytes)
        num   pkts bytes target    prot opt in  out  source     destination
        1       15  1052 SNAT      all  --  *   tun0 0.0.0.0/0  0.0.0.0/0    mark match 0x2a to:VPN_IP

        # ip rule add from all fwmark 42 lookup 42
        # ip route show table 42
        default via VPN_IP dev tun0

    Read the article

  • How do I clear out the ssh-agent entries (on Mac OS X )?

    - by cwd
    I'm running Mac OS X, and it appears that after SSHing to several machines using identity files, my ssh-agent builds up a lot of identities/keys and then sometimes offers too many to a remote machine, causing them to kick me off before connecting:

        Received disconnect from 10.12.10.16: 2: Too many authentication failures for cwd

    It's pretty obvious what's happening, and this page talks about it in more detail:

        SSH servers only allow you to attempt to authenticate a certain number of times.
        Each failed password attempt, each failed pubkey/identity that is offered, etc,
        take up one of these attempts. If you have a lot of SSH keys in your agent, you
        may find that an SSH server may kick you out before allowing you to attempt
        password authentication at all. If this is the case, there are a few different
        workarounds.

    Rebooting clears the agent and then everything works OK again. I can also add this line to my .ssh/config file to force it to use password authentication:

        PreferredAuthentications keyboard-interactive,password

    Anyhow, I saw the note on the page I referenced talking about deleting keys from the agent, but I'm not sure if that applies on a Mac since they appear to be cleared after reboot anyhow. Is there a simple way to clear out all keys in the ssh-agent (the same thing that happens at reboot)?
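
    For the direct question, ssh-agent has built-in commands for this, no reboot required; the per-host IdentitiesOnly option is a common complement (the key file name below is a placeholder):

        # Remove all identities currently held by the agent
        ssh-add -D

        # Or remove just one
        ssh-add -d ~/.ssh/id_rsa_somehost

        # In ~/.ssh/config, stop offering every agent key to every host
        Host 10.12.10.16
            IdentityFile ~/.ssh/id_rsa_somehost
            IdentitiesOnly yes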

    Read the article

  • Configure host access rights in OpenLDAP

    - by Anonymous Coward
    I've set up an OpenLDAP server to authenticate users on our Ubuntu servers. The authentication works quite well, but I'd like to restrict each user's access to certain servers. I know this can be done through nss_base_something in the client's ldap.conf. However, this requires the group restrictions to be specified on the client. I wonder if the restrictions can be set completely in OpenLDAP. If they can, I'd like to know how. Thanks, AC
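
    One approach that keeps the policy mostly on the directory side (a hedged sketch: it relies on a 'host' attribute on the user entries, e.g. via the account object class, and on pam_ldap honouring pam_check_host_attr — both need verifying against the schema and client stack in use; the DN and host names are placeholders):

        # On each client, in the pam_ldap configuration (e.g. /etc/ldap.conf):
        pam_check_host_attr yes

        # In the directory, list the hosts each user may log in to (LDIF):
        dn: uid=jdoe,ou=people,dc=example,dc=com
        changetype: modify
        add: host
        host: web01.example.com
        host: web02.example.com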

    Read the article

  • Where is the xorg.conf file in Karmic Koala (Ubuntu 9.10) ?

    - by jfmessier
    I am trying to change the xorg.conf file that I used to modify under Ubuntu 9.04, so that it offers the higher resolutions of my monitor. Under 9.04 the monitor was unknown and I had to key all the resolutions into the file; under 9.10 the monitor is detected, but 9.10 does not offer the highest resolution my monitor can sustain. How can I change such a setting? Has xorg.conf moved, or been replaced? Merci :-) JF
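
    As a hedged note: Karmic typically ships with no /etc/X11/xorg.conf at all (X autoconfigures itself); the file can be created by hand, or the missing mode can be added at runtime with xrandr. A sketch, where the resolution and the output name VGA1 are placeholders to adapt (the modeline shown is cvt's output for 1680x1050 at 60 Hz):

        # Generate a modeline for the desired resolution
        cvt 1680 1050

        # Register and enable it on the relevant output (see 'xrandr -q' for output names)
        xrandr --newmode "1680x1050_60.00" 146.25 1680 1784 1960 2240 1050 1053 1059 1089 -hsync +vsync
        xrandr --addmode VGA1 "1680x1050_60.00"
        xrandr --output VGA1 --mode "1680x1050_60.00"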

    Read the article

  • FreeNAS - how to "Exclude from file" in Rsyncd (GUI)

    - by user179181
    I am trying to set up rsync tasks to pull user profiles from 11 Windows machines running DeltaCopy Server, and then configure ZFS periodic snapshot tasks as a backup solution. So far this has been working fine, although I would like to exclude certain file types like .DAT or NTUSER.DAT. My exclusion file resides on the local ZFS dataset (the receiving side) and is as follows:

        Temp
        Temporary Internet Files
        NTUSER.DAT
        NTUSER.DAT.LOG
        *.dat
        *.tmp
        *.DAT.log
        *.ost
        *.pst

    The line I typed under Auxiliary Parameters (Rsyncd Global Conf, under Services) is as follows:

        exclude from = /mnt/Storage/User_Profiles/exclude.txt

    I've tried deleting the .DAT files from the receiving end, and just as I start to get excited I click refresh and there they are again.
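
    A hedged observation: on a pull, the FreeNAS box acts as the rsync client, so parameters in its own rsyncd.conf (the Rsyncd service) don't apply to the transfer; the exclusions would normally be passed to the client-side rsync invocation instead, e.g. via the rsync task's extra-options field (the exact field name may vary between FreeNAS versions):

        # Client-side option for the pull task; path taken from the question
        --exclude-from=/mnt/Storage/User_Profiles/exclude.txt

        # Note: rsync patterns are case-sensitive, so *.dat does not match NTUSER.DAT;
        # that is why both spellings appear in the exclusion list above.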

    Read the article

  • How to change a physical partition system to LVM?

    - by Daniel Hernández
    I have a server running Debian with 3 physical partitions covering the whole disk: boot, root and swap. Now I want to replace those partitions with LVM volumes. I know how to install Debian with LVM from the start, but in this case I can't reinstall from scratch, because the provider gives me a server with remote access and the system already installed this way. How can I change these partitions using only an SSH connection, and possibly another remote server where I can put some temporary data?
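
    Without an out-of-band console this is risky, but the core of most recipes is the same: free one existing partition (commonly swap), build the LVM stack there, copy the root filesystem across, then fix fstab, the initramfs and the bootloader before rebooting. A heavily hedged sketch of just the LVM part (device names, sizes and filesystem are placeholders; a rescue console or the second server is strongly advisable):

        # Reuse the old swap partition (placeholder /dev/sda3) as the first physical volume
        swapoff -a
        pvcreate /dev/sda3
        vgcreate vg0 /dev/sda3
        lvcreate -L 8G -n root vg0
        mkfs.ext3 /dev/vg0/root

        # Copy the running root into it, then update /etc/fstab, the initramfs and GRUB
        mkdir /mnt/newroot
        mount /dev/vg0/root /mnt/newroot
        rsync -aAXx / /mnt/newroot/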

    Read the article

  • External monitor image not centered in the screen

    - by kemp
    I'm using Xorg 7.5 on a Radeon HD4870 with the FOSS radeonhd driver on my HP 6830s. The laptop has a VGA connector and I attach it to a 37" Panasonic plasma TV. It works fine except for a little annoyance: when activated, the TV screen is set to a resolution of 1360x768 (which it reports as the highest it supports), but the whole image is shifted by about 100 pixels to the right. I can't see the leftmost part of the page, and I have a black vertical bar on the right. If I change the resolution to 1024x768 there is no shifting; the image fills the entire screen with no parts hidden, but at this resolution the image is stretched. How can I tune the position on the external monitor so that the image is centered on the screen and fills it entirely?

    Read the article

  • mod_ntlm for RHEL 5.3

    - by vikasa
    I tried to compile mod_ntlm for Oracle HTTP Server but got all sorts of errors; can someone point me to a pre-compiled binary? I've tried everything at http://wiki.bestpractical.com/view/NtlmAuthentication and it's still a no-go. Thanks

    Read the article

  • install grub on disk image

    - by Dima
    I have a disk image with 2 partitions: Partition 1 has a cramfs file system (read only) and contains all the system files of the OS. Partition 2 has an ext3 file system and holds only configuration files that may be changed. How can I install the GRUB1 (GRUB legacy) boot loader into the image's MBR? I tried copying the first 446 bytes of my hard disk's MBR and copying the GRUB files to the /boot directory on the 1st (cramfs) partition. I cannot use grub-install because I have a disk image and not the disk itself. Any ideas?
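
    One hedged sketch: the GRUB legacy shell can treat an image file as a drive, which sidesteps grub-install entirely. It assumes stage1/stage2 and a menu.lst already sit under /boot/grub inside the image, and since GRUB legacy has no cramfs support they would need to live on the ext3 partition, i.e. (hd0,1); the image path is a placeholder:

        # Map the image as (hd0) inside the GRUB shell and embed stage1 into its MBR
        grub --batch <<EOF
        device (hd0) /path/to/disk.img
        root (hd0,1)
        setup (hd0)
        quit
        EOF

    An alternative is to map the partitions with losetup/kpartx and point grub at the loop device, but the grub-shell route above is usually simpler for a plain image file.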

    Read the article
