Search Results

Search found 26263 results on 1051 pages for 'linux guest'.

  • Have a bash script remotely shut down another computer on the LAN

    - by gletscher
    Hi, I want to write a bash script that, when called, shuts down another computer on the LAN, maybe using ssh? The other computer is an Ubuntu machine. I'm not sure how to send e.g. a sudo shutdown -h now command over ssh from within a bash script after logging in. I'm also not sure how to obtain the rights for the sudo command, i.e. how to handle the communication between the server and client from within a bash script. Any suggestions are greatly appreciated.
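
    A minimal sketch, assuming key-based ssh login and a NOPASSWD sudoers entry for shutdown on the target (both are assumptions, not given in the question):

      #!/bin/bash
      # Shut down a remote Ubuntu host over ssh without interactive prompts.
      # Target prerequisites (assumed): this key is in authorized_keys there, and
      # the target's sudoers contains:  user ALL=(ALL) NOPASSWD: /sbin/shutdown
      ssh -i ~/.ssh/id_rsa user@target-host 'sudo shutdown -h now'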

    Read the article

  • 0% CPU in top for all processes, but load average > 1

    - by chrisdew
    On two different servers (both Ubuntu 12.04 LTS AMD64) I have seen the following behaviour:

      top - 10:50:05 up 305 days, 21:17, 1 user, load average: 1.94, 2.52, 2.97
      Tasks: 141 total, 2 running, 139 sleeping, 0 stopped, 0 zombie
      Cpu(s): 41.5%us, 6.5%sy, 0.0%ni, 51.8%id, 0.0%wa, 0.2%hi, 0.1%si, 0.0%st
      Mem:   8178432k total, 5753740k used, 2424692k free, 159480k buffers
      Swap: 15625208k total,       0k used, 15625208k free, 4905292k cached

        PID USER PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
          1 root 20  0 23928 2072 1216 S    0  0.0  0:56.42 init
          2 root 20  0     0    0    0 S    0  0.0  0:00.01 kthreadd
          3 root RT  0     0    0    0 S    0  0.0  0:01.23 migration/0
          4 root 20  0     0    0    0 S    0  0.0  2:39.82 ksoftirqd/0
          5 root RT  0     0    0    0 S    0  0.0  0:00.00 watchdog/0
          6 root RT  0     0    0    0 S    0  0.0  0:02.99 migration/1
          7 root 20  0     0    0    0 S    0  0.0  2:32.15 ksoftirqd/1
          8 root RT  0     0    0    0 S    0  0.0  0:00.00 watchdog/1
          9 root RT  0     0    0    0 S    0  0.0  0:11.67 migration/2
         10 root 20  0     0    0    0 S    0  0.0 29:00.34 ksoftirqd/2

    The server is working fine, but top shows all processes as using 0% CPU. A reboot fixed this on an earlier machine, but I haven't yet tried it on this one. I have run top several times, so I am sure I haven't accidentally pressed '<' or '>' to sort by a different column. Sorting the process list by each of the available columns still shows 0% CPU for all displayed processes. What is going on? Is this a kernel bug? Update: if I use top -p <PID> for a known busy process, top still displays 0% CPU for that process.
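
    A quick way to cross-check top against an independent source of per-process CPU figures; pidstat is part of the sysstat package, which may need installing (an assumption):

      pidstat 1 5                                 # per-process CPU sampled from /proc each second
      ps -eo pid,pcpu,comm --sort=-pcpu | head    # lifetime-average %CPU per process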

    Read the article

  • Ubuntu and MySQL server: something isn't allowing me to connect

    - by acidzombie24
    I have a question about MySQL settings: http://serverfault.com/questions/94054/remote-connections-and-mysql-on-ubuntu/94088#94088 Now I want to figure out why I cannot connect. I made sure bind-address was commented out. I can ping the server from within the VM, but mysqladmin --protocol=tcp --host=self_ip ping fails from the same VM. I also followed along and checked whether my ports were open, and they look like they are. I set up Samba on that VM and can access it with no problem as well. It looks like Ubuntu does not have a firewall either (I figured this out before), so I am stumped as to why the server isn't allowing my connection. Apparently the config file works on another person's side: http://www.pastie.org/742545 I am using Ubuntu 6.06 LTS just because of 'support' reasons. So hopefully this will be 'easy'?
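
    A short diagnostic sketch; the paths match a stock Ubuntu MySQL install, which is an assumption:

      sudo netstat -tlnp | grep 3306          # is mysqld listening on TCP, and on which address?
      grep -n 'bind-address\|skip-networking' /etc/mysql/my.cnf   # skip-networking disables TCP entirely
      mysql -u root -p -e "SELECT user, host FROM mysql.user;"    # does a grant exist for this client host?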

    Read the article

  • Changing Corosync/Heartbeat pair's active node based on MySQL/Galera cluster state

    - by Hace
    Background: I'm planning on building a High Availability "cluster" for our Zabbix instance by placing two physical servers in one server room and two in another. In each server room, one of the physical servers will run Zabbix on RHEL and the other will run Zabbix's MySQL database, also on RHEL. I'd prefer synchronous replication for the MySQL nodes, so I'm planning on using Galera in a master-slave configuration. The Zabbix instances on the two Zabbix servers would be controlled by Heartbeat/Corosync (although Red Hat Cluster Suite is also an option). If the Zabbix server in Server Room A goes down, the one in Server Room B becomes active (and vice versa); ditto for the MySQL servers/instances. In either of those cases, however, the connection between the Zabbix server and the MySQL server becomes significantly slower, as it has to travel over the WAN.

    Question: Is it possible to configure the Heartbeat/Corosync pair to instruct the MySQL/Galera cluster to switch its master node to (if available) the one in the same server room as the active Heartbeat/Corosync node, and (more challengingly) is it possible to do the same in the other direction, i.e. have the Galera cluster change the active Heartbeat/Corosync server to be in the same room as the active MySQL master in case of a failover, in order to avoid unnecessary WAN transfers between the application and its DB?

    Theories: Most likely I can get Corosync to run something that logs in to one of the DB nodes to change the MySQL/Galera master, but I don't know if it's really possible to do anything similar in the other direction in Galera. Is it possible to define a "service" in Corosync/Heartbeat so that the Zabbix service and its MySQL service would migrate as one, if possible? Using the DB server that's behind the WAN should still be a better option than DB downtime. Am I just using too many tools to solve a problem that'd be far simpler with something else?

    Read the article

  • Failed to configure CA certificate chain

    - by kron
    Hi All, I'm trying to set up SSL on Fedora with Apache. In my vhost:

      SSLCertificateFile /your/path/to/crt.crt
      SSLCertificateKeyFile /your/path/to/key.key
      SSLCertificateChainFile /your/path/to/DigiCertCA.crt

    I had it working fine with a self-signed key, but can't get it to work with the DigiCertCA crt. When I run service httpd restart, it fails to start. This is what I get in the logs:

      [Sat Jan 29 07:57:13 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suex$
      [Sat Jan 29 07:57:13 2011] [error] Failed to configure CA certificate chain!

    Any assistance would be really appreciated! Thanks
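
    A verification sketch with openssl, reusing the paths from the question; a chain file that does not actually sign the server certificate is the usual cause of this particular error:

      # Does the intermediate actually sign the server certificate?
      openssl verify -CAfile /your/path/to/DigiCertCA.crt /your/path/to/crt.crt
      # Do certificate and key belong together? The two digests must match.
      openssl x509 -noout -modulus -in /your/path/to/crt.crt | openssl md5
      openssl rsa  -noout -modulus -in /your/path/to/key.key | openssl md5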

    Read the article

  • Is there a server distro with the capability of syncing live data to multiple machines?

    - by Adam Hart
    Scenario: I have a main server that is used for pagebuilding and storing master data, and is accessed by a few clients on site. This company also has multiple branches with their own servers that they connect to locally, but all of them need to work with the same data and have it synchronized across all servers in real (or close to real) time. Is there a way, or a specific server OS, that can sync live data across all of these servers? These servers would also need to be able to:

    - Configure AFP, FTP, CIFS, SMB
    - Continue to host their web server and database server in a Microsoft environment, but move the file server off to commodity hardware

    Just wondering if this is even possible.
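
    A minimal one-way sketch on commodity Linux, assuming inotify-tools and rsync are installed and key-based ssh between sites (all assumptions); true multi-master file replication is usually the territory of tools like GlusterFS or DRBD:

      #!/bin/bash
      # Push changes to a branch server as soon as they appear on the master.
      SRC=/srv/masterdata/                        # placeholder path
      DEST=branch1.example.com:/srv/masterdata/   # placeholder host
      while inotifywait -r -e modify,create,delete,move "$SRC"; do
          rsync -az --delete "$SRC" "$DEST"
      done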

    Read the article

  • getaddrinfo: command not found

    - by jebbie
    I've installed a new Ubuntu 12.04 on an AWS EC2 instance and everything worked fine until now. I followed the instructions in this great tutorial: http://www.exratione.com/2012/05/a-mailserver-on-ubuntu-1204-postfix-dovecot-mysql/ Now I'm at the point "installing monit", and when I restart the service I get this error message: monit: Cannot translate '(none)' to FQDN name -- Name or service not known I started googling, and someone there writes that monit uses getaddrinfo in its startup process to determine the hostname. OK, so I thought I'd try out for myself what getaddrinfo delivers, and then I got: getaddrinfo: command not found I guess something is missing on my system. Can anyone help?
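
    Nothing is missing: getaddrinfo is a C library function, not a shell command, so "command not found" is expected. The '(none)' in monit's error suggests the instance simply has no FQDN set; a sketch of the usual fix (mail.example.com is a placeholder name):

      sudo hostname mail.example.com
      echo mail.example.com | sudo tee /etc/hostname
      # Map the name locally so it resolves even without DNS:
      echo '127.0.1.1 mail.example.com mail' | sudo tee -a /etc/hosts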

    Read the article

  • Have an unprivileged non-account user ssh into another box?

    - by Daniel Quinn
    I know how to get a user to ssh into another box with a key:

      ssh -l targetuser -i path/to/key targethost

    But what about non-account users like apache? As this user doesn't have a home directory to which it can write a .ssh directory, the whole thing keeps failing with:

      $ sudo -u apache ssh -o StrictHostKeyChecking=no -l targetuser -i path/to/key targethost
      Could not create directory '/var/www/.ssh'.
      Warning: Permanently added '<hostname>' (RSA) to the list of known hosts.
      Permission denied (publickey).

    I've tried variations using -o UserKnownHostsFile=/dev/null and setting $HOME to /dev/null, and none of these have done the trick. I understand that sudo could probably fix this for me, but I'm trying to avoid requiring manual server configuration, since this code will be deployed on a number of different environments. Any ideas? Here are a few examples of what I've tried that don't work:

      $ sudo -u apache export HOME=path/to/apache/writable/dir/ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost
      $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost
      $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l deploy -i path/to/key targethost

    Eventually, I'll be using this solution to run rsync as the apache user.
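
    A sketch of the usual workaround: give the process a writable HOME via env (export is a shell builtin, so the first attempt above never reaches ssh at all), and check that the key file itself is readable by apache, since "Permission denied (publickey)" also fits an unreadable or rejected key (both points are assumptions about this setup):

      sudo -u apache env HOME=/var/cache/apache-ssh \
          ssh -o StrictHostKeyChecking=no \
              -o UserKnownHostsFile=/var/cache/apache-ssh/known_hosts \
              -i path/to/key -l deploy targethost
      # /var/cache/apache-ssh is a placeholder for any directory apache can write.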

    Read the article

  • Google Search w/ Chrome Incognito w/ Gnome Do

    - by jrc03c
    I've installed Google Chrome as my default browser in Ubuntu, and recently installed Gnome Do and enabled the Google Search plugin. The Google Search from Gnome Do works exactly as expected except for one thing: Chrome (which is normally set to open in "incognito" mode) does not open in "incognito" mode. The shortcuts on my desktop, taskbar, and menus all have the --incognito flag attached (which works just fine), but the browser refuses to open in this mode when launched from Gnome Do. Any suggestions? Also, judging by the settings for the Google Search plugin in Gnome Do, it's obvious that Gnome Do just passes the Google Search blindly to the default browser; in other words, there are no configurable settings specifically for Chrome. Any thoughts?
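
    A possible workaround from the GNOME 2 era this setup dates from: change the desktop-wide URL handler so that "default browser" itself means incognito Chrome. The gconf keys below are the standard GNOME 2 URL-handler locations, but whether Gnome Do honours them is an assumption:

      gconftool-2 --set /desktop/gnome/url-handlers/http/command \
          --type string 'google-chrome --incognito %s'
      gconftool-2 --set /desktop/gnome/url-handlers/https/command \
          --type string 'google-chrome --incognito %s'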

    Read the article

  • formatting before md device creation in RAID5

    - by kumar
    Consider creating a RAID5 device with three drives:

      mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

    After issuing this command, I can see the progress of the md device creation using cat /proc/mdstat. During the progress ITSELF, can I create a filesystem, say ext2, on the md0 device, like mkfs.ext2 /dev/md0? I am actually able to do this, and want to confirm whether doing it before 100% completion of the md device creation is CORRECT.
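
    It is safe: a new md array is fully usable during the initial resync; until it finishes you only lack full redundancy and some speed. A small sketch for watching both at once (standard mdadm and /proc interfaces):

      watch cat /proc/mdstat                               # live resync progress
      mdadm --detail /dev/md0 | grep -i 'state\|rebuild'   # array state
      mkfs.ext2 /dev/md0                                   # fine to run meanwhile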

    Read the article

  • Subversion multi checkout post-commit hook?

    - by FLX
    The title must sound strange, but I'm trying to achieve the following. SVN repo location: /home/flx/svn/flxdev SVN repo "flxdev" structure:

      + Project1
      ++ files
      + Project2
      + Project3
      + Project4

    I'm trying to set up a post-commit hook that automatically checks out on the other end when I do a commit. The post-commit doc explicitly lists the following:

      # POST-COMMIT HOOK
      #
      # The post-commit hook is invoked after a commit. Subversion runs
      # this hook by invoking a program (script, executable, binary, etc.)
      # named 'post-commit' (for which this file is a template) with the
      # following ordered arguments:
      #
      # [1] REPOS-PATH (the path to this repository)
      # [2] REV (the number of the revision just committed)

    So I made the following command to test:

      REPOS="$1"
      REV="$2"
      echo "Updated project $REPOS to $REV"

    However, when I edit files in Project1 for example, this outputs "Updated project /home/flx/svn/flxdev to 1016". I'd like this to be: "Updated project Project1 to 1016". Having this variable would allow me to perform different actions per project post-commit. How can I specify the project parameter? Thanks! Dennis
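
    A sketch using svnlook, which ships with Subversion and can report which paths a revision touched; the first path component of the first changed directory is the project name:

      #!/bin/sh
      REPOS="$1"
      REV="$2"
      # dirs-changed lists directories touched by this revision, e.g. Project1/trunk/
      PROJECT=$(svnlook dirs-changed -r "$REV" "$REPOS" | head -n 1 | cut -d/ -f1)
      echo "Updated project $PROJECT to $REV"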

    Read the article

  • samba4 not building in Arch

    - by kmplsv
      cp bin/tdbtool bin/tdbdump bin/tdbbackup /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/bin
      cp ./include/tdb.h /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/include
      cp tdb.pc /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/pkgconfig
      cp libtdb.a libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib
      rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so
      ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so
      rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1
      ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1
      mkdir -p /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"`
      cp tdb.so /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"`
      /bin/install -c -d /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8
      for I in manpages/*.8; do \
          /bin/install -c -m 644 $I /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8; \
      done
      /bin/install: cannot stat `manpages/*.8': No such file or directory
      make: *** [installdocs] Error 1
      Aborting...
      ==> ERROR: Makepkg was unable to build samba4.
      ==> Restart building samba4 ? [y/N]
      ==> -------------------------------
      ==>c

    Any ideas as to what is causing my build to fail? I assume it's an issue with manpages, but I can't figure out exactly what package it's looking for that I don't have.
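
    The failing step is installdocs: the glob manpages/*.8 matches nothing, which means the man pages were never generated earlier in the build. Samba's man pages are built with the DocBook/XSLT toolchain, so a hedged guess at the missing pieces (Arch package names, stated as assumptions for this PKGBUILD):

      sudo pacman -S libxslt docbook-xsl    # provides xsltproc and the man-page stylesheets
      # then retry the build; alternatively, patch the PKGBUILD to skip the installdocs target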

    Read the article

  • Nginx Forward SSL for single site

    - by Will.brown
    I have an nginx server set up and it works fine for HTTP; however, I would like to bypass the proxy for HTTPS connections. I want it so that when someone goes to https://ip1 (the nginx server), it bypasses nginx and forwards all traffic to https://ip2 (the web server). I do not need nginx to do this for every SSL website, just one particular website. So: client to https://ip1, to https://ip2, back to https://ip1, back to the client PC. I just want nginx to not intercept the connection, to forward it on, and on return to forward the connection back to the client. I'm guessing I do this with NAT masquerade but am not exactly sure how to do it, or whether I will also need to tell nginx to ignore SSL. Can someone help me please? This has got me stuck.
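
    A sketch of the iptables approach; nginx of this era cannot proxy raw TLS streams, so the kernel forwards port 443 around it entirely and nginx never sees the traffic (IP1/IP2 are placeholders for the two addresses in the question):

      echo 1 > /proc/sys/net/ipv4/ip_forward   # allow the box to forward packets
      # Send anything arriving for IP1:443 straight to the web server:
      iptables -t nat -A PREROUTING -d IP1 -p tcp --dport 443 \
          -j DNAT --to-destination IP2:443
      # Rewrite the source so replies route back through this box:
      iptables -t nat -A POSTROUTING -d IP2 -p tcp --dport 443 -j MASQUERADE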

    Read the article

  • winbind failing after a semi-random amount of time

    - by The Digital Ninja
    I have winbind set up to authenticate to our AD for Samba shares. This is the third such server, and the only one having any issues. It seems that after a random amount of time, Samba shares just stop working. Winbind processes seem to be running, but restarting them fixes the issue for a while. Looking at the logs has been kind of hit and miss, and I don't know exactly when it fails. One interesting thing is that it seems to be pulling from another domain controller that it shouldn't. I censored out the domain name in this example. But isn't there some way to block authentication to a domain? I'm not sure if this is a symptom or a cause of the issue.

      [2010/10/18 08:02:10, 0] winbindd/winbindd_cache.c:initialize_winbindd_cache(2577)
        initialize_winbindd_cache: clearing cache and re-creating with version number 1
      [2010/10/18 09:15:54, 1] libsmb/clikrb5.c:ads_krb5_mk_req(686)
        ads_krb5_mk_req: krb5_get_credentials failed for [email protected] (Cannot find KDC for requested realm)
      [2010/10/18 09:15:54, 1] libsmb/cliconnect.c:cli_session_setup_kerberos(624)
        cli_session_setup_kerberos: spnego_gen_negTokenTarg failed: Cannot find KDC for requested realm
      [2010/10/18 09:15:54, 0] lib/util_sock.c:write_data(1139)
        write_data: write failure. Error = Connection reset by peer
      [2010/10/18 09:15:54, 0] libsmb/clientgen.c:write_socket(242)
        write_socket: Error writing 108 bytes to socket 18: ERRNO = Connection reset by peer
      [2010/10/18 09:15:54, 0] libsmb/clientgen.c:cli_send_smb(290)
        Error writing 108 bytes to client. -1 (Connection reset by peer)
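
    A few checks worth scripting into a periodic health probe, since they exercise the same winbind-to-DC path that is failing (all ship with the samba/winbind packages):

      wbinfo -p            # is winbindd itself responding?
      wbinfo -t            # verify the machine trust account against the DC
      wbinfo -u | head     # can domain users still be enumerated?
      net ads info         # which DC and realm are actually in use right now?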

    Read the article

  • Cannot install CentOS 6.5 using UEFI usb boot

    - by Vaindil
    I am trying to dual-boot CentOS 6.5 on my desktop that is currently running Windows 8.1. I have two storage devices: an SSD with my Windows installation, and an HDD with all of my data. Both are formatted using GPT, and Windows boots using UEFI. I used the CentOS 6.5 live DVD (CentOS-6.5-x86_64-LiveDVD.iso) to create an EFI-bootable flash drive (it does boot properly in EFI mode). I receive an error, however, when CentOS is booting (see below). I have a 6.4 boot DVD which boots as expected, but it does not boot in UEFI mode and therefore doesn't play nicely with my Windows installation (I have no way to access it, even using rEFInd or any other similar tools). What do I need to do to get the device to boot properly in UEFI mode?

      Kernel panic - not syncing: Attempted to kill init!
      Pid: 1, comm: init Not tainted 2.6.32-431.el6.x86_64 #1
      Call Trace:
       [<ffffffff815271fa>] ? panic+0xa7/0x16f
       [<ffffffff81077622>] ? do_exit+0x862/0x870
       [<ffffffff8118a865>] ? fput+0x25/0x30
       [<ffffffff81077688>] ? do_group_exit+0x58/0xd0
       [<ffffffff81077717>] ? sys_exit_group+0x17/0x20
       [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
      drm_kms_helper: panic occurred, switching back to text console
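
    Before anything else, it is worth ruling out a corrupt download or a bad write to the stick, a common cause of init dying this early in a live boot. A quick sketch (/dev/sdX is a placeholder for the flash drive):

      sha256sum CentOS-6.5-x86_64-LiveDVD.iso      # compare with the checksum file published alongside the ISO
      sudo dd if=CentOS-6.5-x86_64-LiveDVD.iso of=/dev/sdX bs=4M && sync   # re-write the raw image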

    Read the article

  • Xen domain migration locking problem

    - by brodie
    I am trying to live-migrate a VM (domain) between two Xen servers. I have Xen locking (xend-domain-lock = yes) configured with OCFS2 shared storage between them. The locking is working fine: if I try to start the VM on the secondary server, it refuses to start (which is correct). The problem I am having is that when trying to do live migration, it seems to be trying to remove the lock twice. The first lock it removes is for "domain test"; the second is for "migrating-test", which does not exist. Should there be a lock for this "migrating-test" VM? These are the relevant options in the Xen config file:

      (xend-relocation-server yes)
      (xend-relocation-port 8002)
      (xend-relocation-address '')
      (xend-relocation-hosts-allow '')
      (xend-domain-lock yes)
      (xend-domain-lock-path /var/lib/xen/lock)

    This is the section of the log:

      [2010-06-10 10:45:57 14488] DEBUG (XendDomainInfo:4054) Releasing lock for domain test
      [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) SUSPEND shinfo 000c6ceb
      [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) delta 21ms, dom0 95%, target 0%, sent 57Mb/s, dirtied 173Mb/s 111 pages
      4: sent 111, skipped 0, delta 6ms, dom0 100%, target 0%, sent 606Mb/s, dirtied 606Mb/s 111 pages
      [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) Total pages sent= 131295 (0.99x)
      [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) (of which 0 were fixups)
      [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) All memory is saved
      [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) Save exit rc=0
      [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:123) Domain 22 suspended.
      [2010-06-10 10:45:57 14488] DEBUG (XendDomainInfo:2757) XendDomainInfo.destroy: domid=22
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2227) Destroying device model
      [2010-06-10 10:45:58 14488] INFO (image:567) migrating-test device model terminated
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2234) Releasing devices
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vif/0
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vif, device = vif/0
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vkbd/0
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vkbd, device = vkbd/0
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing console/0
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = console, device = console/0
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vbd/51712
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vbd, device = vbd/51712
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vfb/0
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vfb, device = vfb/0
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:4054) Releasing lock for domain migrating-test
      [2010-06-10 10:45:59 14488] ERROR (XendDomainInfo:4070) Failed to remove unmanaged directory /var/lib/xen/lock/b01515ae-9173-03cb-0cb7-06f3dfbede8b.

    Read the article

  • Basic clarification about Limited FTP/sFTP users

    - by mattewre
    I would like some clarification about the correct way to create limited users with access to my VPS, which is used as a web server with nginx. I usually do NOT install FTP and allow access via SFTP only. Is that OK for every setup? This is what I usually do to create a limited user called "admin" that should have SFTP access to the folder with the website data:

      mkdir -p /var/www/mysite.com/
      adduser admin
      adduser admin www-data
      chown -R root:root /var/www
      chmod -R 755 /var/www
      chmod -R 755 /var/www/mysite.com
      chown -R admin:www-data /var/www/mysite.com/

    It seems not to be the correct way; I always have permission problems when I upload some files (for example with Wordpress in general). I would like to create a user that works exactly like the one providers give their clients when they buy a hosting service (that is FTP; I would prefer SFTP access). It is for personal use, but I think a limited user is a lot safer to use than "root" via SFTP.
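
    A common pattern is an SFTP-only group chrooted by sshd itself, with no shell involved; a sketch appended to sshd_config (group name and paths are illustrative, and internal-sftp requires OpenSSH 4.9 or newer, an assumption about the VPS):

      sudo groupadd sftponly
      sudo usermod -aG sftponly admin
      cat <<'EOF' | sudo tee -a /etc/ssh/sshd_config
      Match Group sftponly
          ChrootDirectory /var/www
          ForceCommand internal-sftp
          AllowTcpForwarding no
      EOF
      sudo service ssh restart
      # Caveat: the chroot directory must be owned by root and not group-writable;
      # give the user a writable subdirectory (e.g. /var/www/mysite.com) inside it.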

    Read the article

  • Calculate minimum ext3 partition size for certain amount of data

    - by Daniel Beck
    The following ext3 partitions contain identical data. As we can see, the larger the partition, the more space is required for the same files:

      Filesystem  1K-blocks    Used Available Use% Mounted on
      /dev/loop11   3965777  561064   3199964  15% [...]
      /dev/loop19    573029  543843     29186  95% [...]

      Filesystem  Size  Used Avail Use% Mounted on
      /dev/loop11 3.8G  548M  3.1G  15% [...]
      /dev/loop19 560M  532M   29M  95% [...]

      Filesystem   Inodes IUsed   IFree IUse% Mounted on
      /dev/loop11 1024000  1656 1022344    1% [...]
      /dev/loop19 1024000  1656 1022344    1% [...]

    I start with a partition of fixed size that possibly wastes a lot of space, and I want to create a partition that is able to hold the same data but with (almost) minimal size. How can I reliably calculate the minimal partition size needed for storing a certain amount of data? The amount of data changes over time, and I need to automate these calculations.
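
    Rather than modelling ext3 metadata overhead by hand, resize2fs can report the minimum size an existing filesystem could shrink to, which scripts well:

      e2fsck -f /dev/loop11        # resize2fs insists on a freshly checked filesystem
      resize2fs -P /dev/loop11     # prints: Estimated minimum size of the filesystem: <blocks>
      # Capture and convert to bytes (block size from tune2fs):
      MIN_BLOCKS=$(resize2fs -P /dev/loop11 2>/dev/null | awk '{print $NF}')
      BLOCK_SIZE=$(tune2fs -l /dev/loop11 | awk '/^Block size:/ {print $NF}')
      echo $(( MIN_BLOCKS * BLOCK_SIZE )) bytes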

    Read the article

  • CentOS - dual boot from new partition

    - by Dima
    I need to install two copies of CentOS 5.5 (bank A and bank B) on different partitions of the same hard disk, and install the GRUB boot loader on another partition (visible from both banks). The boot loader should redirect the boot menu to bank A or bank B (according to the configuration). The new partition is mounted at /common_partition, and GRUB is installed on it using the following command:

      grub-install /dev/hda

    In the new partition I created the following menu.lst file:

      title BOOTCONTROL REDIRECT : PLEASE WAIT
      root (hd0,1)
      configfile /boot/menu.lst
      boot

    On my setup both partitions (bank A and bank B) are primary, and GRUB is installed on the MBR. The problem is that the new boot loader (on common_partition) does not load. What is wrong with my configuration?
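
    Two things stand out: grub-install /dev/hda writes stage1 to the MBR but takes its menu from whichever partition held /boot/grub when it ran, and a boot line after configfile is never reached, since configfile replaces the current menu rather than returning. A redirect-menu sketch in GRUB legacy syntax, written from the common partition (the bank partition numbers below are assumptions):

      cat > /common_partition/boot/grub/menu.lst <<'EOF'
      default 0
      timeout 5
      title Bank A (CentOS 5.5)
      root (hd0,0)
      configfile /boot/grub/menu.lst
      title Bank B (CentOS 5.5)
      root (hd0,2)
      configfile /boot/grub/menu.lst
      EOF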

    Read the article

  • Grub hangs at "Starting up ..." when USB flash card reader is plugged in (on Ubuntu Hardy)

    - by Laurence Gonsalves
    I have a PC with Ubuntu Hardy installed. The machine boots fine unless my USB flash card reader (one of those N-in-1 readers by MediaGear) is plugged in at startup. If the reader is plugged in, the boot process proceeds as normal until it gets to the screen that says "Starting up ...". At that point it just hangs forever. To work around this I currently leave the reader unplugged when booting, and then plug it back in after I see that Ubuntu is actually starting. This is annoying though, especially when I reboot the machine (typically for updates), forget to unplug the reader, and walk away, only to come back hours later to find the machine hung. My guess is that the presence of the reader is confusing GRUB about where to find the kernel. The weird thing is that GRUB is on the same drive as the kernel I want it to boot, so clearly the drive is still readable even when the flash card reader is plugged in. Is there some way I can tell GRUB never to go looking at the flash card reader?

    Read the article
