Search Results

Search found 41561 results on 1663 pages for 'linux command'.

  • How can I check whether a volume is mounted where it is supposed to be using Python?

    - by Ben Hymers
    I've got a backup script written in Python which creates the destination directory before copying the source directory to it. I've configured it to use /external-backup as the destination, which is where I mount an external hard drive. I just ran the script without the hard drive being turned on (or mounted) and found that it worked as normal, albeit making the backup on the internal hard drive, which has nowhere near enough space to back itself up. My question is: how can I check whether the volume is mounted in the right place before writing to it? If I can detect that /external-backup isn't mounted, I can prevent writing to it. The bonus question is why this was allowed, when the OS knows that directory is supposed to live on another device, and what would happen to the data (on the internal hard drive) should I later mount that device (the external hard drive)? Clearly there can't be two copies on different devices at the same path! Thanks in advance!
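
    One way to check (a minimal sketch; os.path.ismount() reports whether a path is itself a mount point, so it is false whenever the external drive is absent):

        import os
        import sys

        BACKUP_DIR = "/external-backup"

        # True only if something is actually mounted at /external-backup right now;
        # a bare directory on the internal disk returns False.
        if not os.path.ismount(BACKUP_DIR):
            sys.exit(BACKUP_DIR + " is not mounted - refusing to back up onto the internal disk")

        # ... safe to proceed with the copy ...

    As for the bonus question: a mount point is just an ordinary directory, so writes made while nothing is mounted simply land on the internal disk; mounting the external drive later hides (but does not delete or merge) those files underneath the mount.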

  • strange behaviour - dhclient needs to be run twice in order to connect to wireless

    - by splicer
    I am trying to connect to my wlan without the use of NetworkManager. I run the following commands after boot:

        iwconfig wlan0 enc <WEP passwd> mode managed essid <name> channel 6
        ifconfig wlan0 up
        dhclient wlan0

    At this point, dhclient stalls for ages (perhaps 2 minutes), then it returns with:

        PING 192.168.1.254 (192.168.1.254) from 192.168.1.65 wlan0: 56(84) bytes of data.
        --- 192.168.1.254 ping statistics ---
        3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3000ms
        pipe 3

    The strange thing is that when I run pkill dhclient; dhclient wlan0 right after this, it connects in under 3 seconds. Any idea what could be the cause of this problem? Edit: oh, and I did try using the -timeout flag on dhclient, but that didn't seem to make any difference (it still stalled for ages).
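
    A plausible cause (a guess from the symptoms, not a confirmed diagnosis): dhclient is started before the card has finished associating with the access point, so its first DHCP requests go nowhere and the long stall is its retry back-off. A hedged sketch that waits for association first:

        iwconfig wlan0 enc <WEP passwd> mode managed essid <name> channel 6
        ifconfig wlan0 up
        # wait until iwconfig reports an access point MAC instead of "Not-Associated"
        until iwconfig wlan0 | grep -q 'Access Point: [0-9A-Fa-f][0-9A-Fa-f]:'; do
            sleep 1
        done
        dhclient -v wlan0    # -v shows each DISCOVER/OFFER exchange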

  • Ubuntu: most menu items dark-on-dark

    - by krzysz00
    Since the Ubuntu 10.04 upgrade, most of my drop-down menus have dark-on-dark text, which becomes readable (the background changes) when selected. I don't know what's causing this, but it's a problem on both Ambiance and Radiance. Any hints?

  • Screen refresh rate in Ubuntu

    - by user24224
    Hello all, I am having problems with the refresh rate of the screen. The refresh-rate option in the monitor settings offers only one choice, 60Hz. I have an LG 24" monitor and an ATI Radeon 3870, and I have already installed the ATI driver via the Ubuntu download centre. Any idea how I can solve this one? Thanks.
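
    A hedged first step is to see which modes and rates X actually knows about, and to push a rate explicitly (the output name DVI-0 below is an assumption; use whatever xrandr reports for your connection):

        xrandr -q                                   # list outputs, modes and refresh rates
        xrandr --output DVI-0 --mode 1920x1200 --rate 75
        # if the desired mode/rate is not listed at all, generate and add one:
        cvt 1920 1200 75                            # prints a Modeline
        xrandr --newmode <paste the modeline printed by cvt>
        xrandr --addmode DVI-0 <the mode name from that modeline>

    If only 60Hz is ever offered, the monitor's EDID may genuinely only advertise 60Hz over that connection.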

  • Causes of sudden massive filesystem damage? ("root inode is not a directory")

    - by poolie
    I have a laptop running Maverick (very happily until yesterday), with a Patriot Torx SSD; LUKS encryption of the whole partition; one lvm physical volume on top of that; then home and root in ext4 logical volumes on top of that.

    When I tried to boot it yesterday, it complained that it couldn't mount the root filesystem. Running fsck, basically every inode seems to be wrong. Both home and root filesystems show similar problems. Checking a backup superblock doesn't help.

        e2fsck 1.41.12 (17-May-2010)
        lithe_root was not cleanly unmounted, check forced.
        Resize inode not valid.  Recreate? no
        Pass 1: Checking inodes, blocks, and sizes
        Root inode is not a directory.  Clear? no
        Root inode has dtime set (probably due to old mke2fs).  Fix? no
        Inode 2 is in use, but has dtime set.  Fix? no
        Inode 2 has a extra size (4730) which is invalid  Fix? no
        Inode 2 has compression flag set on filesystem without compression support.  Clear? no
        Inode 2 has INDEX_FL flag set but is not a directory.  Clear HTree index? no
        HTREE directory inode 2 has an invalid root node.  Clear HTree index? no
        Inode 2, i_size is 9581392125871137995, should be 0.  Fix? no
        Inode 2, i_blocks is 40456527802719, should be 0.  Fix? no
        Reserved inode 3 (<The ACL index inode>) has invalid mode.  Clear? no
        Inode 3 has compression flag set on filesystem without compression support.  Clear? no
        Inode 3 has INDEX_FL flag set but is not a directory.  Clear HTree index? no
        ....

    Running strings across the filesystems, I can see there are what look like filenames and user data there. I do have sufficiently good backups (touch wood) that it's not worth grovelling around to pull back individual files, though I might save an image of the unencrypted disk before I rebuild, just in case.

    smartctl doesn't show any errors, neither does the kernel log. Running a write-mode badblocks across the swap lv doesn't find problems either. So the disk may be failing, but not in an obvious way.

    At this point I'm basically, as they say, fscked? Back to reinstalling, perhaps running badblocks over the disk, then restoring from backup? There doesn't even seem to be enough data to file a meaningful bug... I don't recall that this machine crashed last time I used it. At this point I suspect a bug or memory corruption caused it to write garbage across the disks when it was last running, or some kind of subtle failure mode for the SSD. What do you think would have caused this? Is there anything else you'd try?
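
    For the "save an image before I rebuild" step, a hedged sketch (device names and the destination path are assumptions; the LUKS mapping name will be whatever cryptsetup created):

        # open the LUKS container and image the *decrypted* device, so the copy
        # stays usable even if the LUKS header is later damaged
        cryptsetup luksOpen /dev/sda2 lithe_crypt
        ddrescue /dev/mapper/lithe_crypt /mnt/external/lithe.img /mnt/external/lithe.map
        # ddrescue (package gddrescue on Ubuntu) keeps a map file, so the copy can
        # be resumed and retries bad sectors, which plain dd will not do

    Given that garbage was apparently written while the machine was running, a memtest86+ pass from the GRUB menu would also be a cheap way to rule RAM in or out.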

  • Wpa supplicant suddenly stopped working

    - by Grzenio
    Hi, recently my wireless stopped working on my Debian testing system. It just doesn't connect. The best I get (only after a reboot) is that it says it has connected but failed to get an IP address. Usually, though, it just tries to connect, disconnects straight away, connects again, and so on, so it never manages to associate correctly. I am sure it worked about a month ago; it stopped working after recent upgrades from the repository. Any ideas how to find the issue and fix it?
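
    A hedged way to narrow it down is to take the usual network manager out of the picture and drive wpa_supplicant by hand with debug output (interface name, config path and the wext driver are assumptions; -Dnl80211 may be the right choice for your card):

        # stop whatever normally manages the interface first
        /etc/init.d/network-manager stop
        wpa_supplicant -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf -D wext -d
        # in a second terminal, once the log shows CTRL-EVENT-CONNECTED:
        dhclient wlan0

    The debug log usually makes it obvious whether the failure is authentication, the driver, or something later (DHCP), which in turn suggests which recently upgraded package to suspect.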

  • Postfix: how to trigger my script when an outgoing email's status is "sent"?

    - by Laszlo Malina
    I want to run a program when Postfix has successfully sent out a mail (local or remote). I would like to pass the headers to the program and, if possible, also the destination IP or address (excluding delivery to the spam filter). One idea I have is to process Delivery Status Notifications via a unique transport program, but I'd prefer the above. My goal is to record the lifetime (events) of an email: it came in, it went out (from, to, subject, date/time, message ID, message status: bounced, sent). I only need the state of outgoing mail, because the handling of incoming and bounced mail is already working. Is it possible to trigger a program (similar to a transport pipe/spawn), or should I stay with the DSN "cheat"? Thanks in advance for any reply!
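
    Postfix itself has no "after successful delivery" hook, so a common workaround (a sketch, not Postfix's own mechanism; the log path and the helper script are assumptions) is to follow the mail log and act on status=sent entries:

        tail -F /var/log/mail.log | while read line; do
            case "$line" in
                *"status=sent"*)
                    # the queue ID is the hex token before "to=", e.g.
                    # "postfix/smtp[1234]: 3A2B1C0D9E: to=<...>, ..., status=sent (...)"
                    qid=$(printf '%s\n' "$line" | sed -n 's/.*]: \([0-9A-F]*\): to=.*/\1/p')
                    /usr/local/bin/record-sent.sh "$qid" "$line"   # hypothetical helper
                    ;;
            esac
        done

    The headers themselves are not in the log, but the queue ID lets the helper correlate the delivery with whatever was recorded when the message first arrived.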

  • How to report a bug against Ubuntu's upgrade process?

    - by Kim
    I just upgraded to Lucid and discovered a nasty bug. It prevents the system from booting and took me hours to resolve. Now I'd like to report it along with the workaround I found. The only problem is: where? Other such bugs have been filed against "update-manager", but that's just the GUI calling some scripts which do the real work. So what do I do? What should I substitute for XYZ in ubuntu-bug XYZ?
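
    One hedged way to pick a concrete target is to ask dpkg which package ships the tool that actually performed the upgrade, and file against that:

        dpkg -S "$(which do-release-upgrade)"      # reports the owning package (often update-manager-core)
        ubuntu-bug update-manager-core             # or whatever package dpkg named

    If the failure was really in a specific package's own upgrade scripts rather than in the upgrader, running ubuntu-bug against that package and attaching the logs under /var/log/dist-upgrade is the usual alternative.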

  • 284 GiB of data, 217.4 GiB of space

    - by Malfist
    I want to reinstall my OS, but I don't have the hard drive space to back up any more (I have a RAID 1 array, so I haven't done it for a while). In my /home I have 284.8 GiB of data, and I have a spare 250 GB (or 217.4 GiB) hard drive that I've been using for backup. What type of compression algorithm (if any) is capable of this kind of compression? I don't care about the time; I have a quad core, though, so something that utilizes all 4 cores would be great. I have tried 7zip with no success: it ran on one core for two days and then failed because of lack of space. Any ideas?
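
    The space failure suggests the archive was being built on an already-full disk; a hedged alternative is to stream the archive straight to the spare drive so no intermediate copy is needed, and let the compressor use all four cores (the mount point is an assumption):

        tar -cf - /home | pigz -p4 > /media/backup/home.tar.gz     # gzip, parallel
        # or, slower but usually a better ratio:
        tar -cf - /home | xz -T4 -6 > /media/backup/home.tar.xz

    Whether ~285 GiB fits in ~217 GiB depends entirely on the data: text and source code compress enormously, but photos, video and music are already compressed and will barely shrink, in which case no algorithm will close the gap.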

  • the effect of large number of files on disk space in unix filesystems

    - by user46976
    If I have a text file in Unix that contains N independent entries (e.g. records about employees, where each employee has a separate record), is it expected that this file will take up less space than if I split it into N files, each containing the entry for one employee? In other words, can one save significant space on Unix filesystems by concatenating many files together, or is the difference negligible? Thanks.
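
    Generally yes, the single file is smaller on disk, because each separate file is rounded up to whole filesystem blocks (commonly 4 KiB) and consumes an inode. A quick way to see the overhead on an existing tree:

        du -sh --apparent-size /path/to/records    # sum of the files' logical sizes
        du -sh /path/to/records                    # blocks actually allocated on disk

    For tiny records the difference can be dramatic (a 100-byte record still occupies a full block as its own file); for records much larger than the block size it is negligible.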

  • How to free up a block device that is mounted at an inaccessible place?

    - by Vi
        root@vi-notebook:~# cat /proc/mounts | grep raidy
        /dev/md0 /root/e/i/wpc2/boot/mnt/raidy reiserfs ro,nosuid,nodev,noexec,noatime 0 0
        root@vi-notebook:~# umount -n /root/e/i/wpc2/boot/mnt/raidy
        umount: /root/e/i/wpc2/boot/mnt/raidy: Transport endpoint is not connected
        root@vi-notebook:~# mount /dev/md/raidy /mnt/raidy/ -t reiserfs -o nodev,nosuid,noexec,acl,noatime
        mount: /dev/md0 already mounted or /mnt/raidy/ busy

    The only workaround I found is:

        root@vi-notebook:~# losetup /dev/loop0 /dev/md/raidy
        root@vi-notebook:~# mount /dev/loop0 /mnt/raidy/ -t reiserfs -o nodev,nosuid,noexec,acl,noatime
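
    A hedged alternative to the loop-device trick: a lazy unmount detaches the stale mount immediately and lets the kernel release /dev/md0 as soon as nothing is using it, after which a normal mount at /mnt/raidy should succeed.

        umount -l /root/e/i/wpc2/boot/mnt/raidy
        # if the device still refuses to mount, find what is holding it open:
        fuser -vm /root/e/i/wpc2/boot/mnt/raidy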

  • Sftp via shell - how is it possible?

    - by Tomasz Zielinski
    (Moved from StackOverflow: http://stackoverflow.com/questions/4589725/sftp-via-shell-how-it-is-possible) How is it possible for tools like http://mysecureshell.sourceforge.net/ to provide SFTP access merely by being specified as the shell, i.e. by typing: usermod -s /bin/MySecureShell myuser? I'm on Debian Lenny, with the default sshd/OpenSSH. Is this a feature of the SSH protocol that allows the user's shell to handle sftp commands? I can't wrap my head around this, because usually OpenSSH needs the sftp-server module (or the internal one in newer versions) - and this makes me think that sftp commands don't even hit the shell and are handled earlier or by a different code path.
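
    The piece that makes this work (to the best of my understanding; the sftp-server path below is Debian's default and is an assumption): sshd does not exec the sftp subsystem directly, it runs the configured subsystem command through the account's login shell, roughly as

        /bin/MySecureShell -c "/usr/lib/openssh/sftp-server"

    so a replacement shell sees that -c argument and can speak the SFTP protocol itself instead of starting the real sftp-server. One way to convince yourself the shell really is in the loop:

        usermod -s /bin/false testuser
        sftp testuser@localhost      # fails, because /bin/false never runs the subsystem

    (The exception is the internal-sftp subsystem, which is handled inside sshd and bypasses the shell entirely.)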

  • Perform action based on load avg

    - by sfx
    I'm running some web applications on a Debian server and sometimes have to struggle with DDoS attacks. They eat up all my resources and I can't ssh into the server any more. An idea was to drop all connections if the load average is too high, so there are still resources left for me, and to accept new connections again once the load average is low enough. Since this has to work under heavy load, I'm afraid a cronjob wouldn't be fast enough or would take too many resources. tl;dr: Is there a way to configure this behaviour when the load average is above a specific threshold?
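
    A minimal sketch of the cron/loop approach anyway, since it is cheaper than it sounds (reading /proc/loadavg and calling iptables costs almost nothing); the threshold, port and chain name are assumptions:

        #!/bin/sh
        # one-time setup (e.g. at boot):
        #   iptables -N loadshed
        #   iptables -I INPUT -j loadshed
        THRESHOLD=20
        LOAD=$(cut -d' ' -f1 /proc/loadavg | cut -d. -f1)   # integer part of 1-minute load
        iptables -F loadshed                                # start from an empty chain
        if [ "$LOAD" -ge "$THRESHOLD" ]; then
            # refuse *new* HTTP connections only; established ones and ssh stay up
            iptables -A loadshed -p tcp --dport 80 --syn -j DROP
        fi

    Dropping only new connections on port 80 keeps an existing ssh session usable; protecting ssh itself is usually better done with rate limiting or a dedicated allow rule for your own IP.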

  • Replacing compiz/metacity with openbox reduces workspaces to 1

    - by Brian
    I like to use the GNOME desktop, but I prefer to replace its window manager with openbox, with 4 workspaces. However, when I run openbox --replace, the number of workspaces available drops to 1. If I go into obconf, workspaces is still configured to be 4 (~/.config/openbox/rc.xml shows the same). I can get the workspaces to reappear by changing the value in obconf to anything else, and then back to 4.

    I have just been dealing with this problem since Ubuntu 9.04 (now up to 10.10) since I don't reboot very often. But it's really annoying to have to reset my workspaces whenever I do have to reboot. Changing the value in rc.xml and running openbox --reconfigure does not seem to have any effect. So what is obconf doing that I'm not (sends a dbus message perhaps [EDIT: watching with dbus-monitor I see no messages when changing the workspaces value in obconf])?

    I was hoping there would be a cleaner way to change the window manager than just running openbox --replace at login. So my questions are: Is there a better way to specify an alternate window manager (i.e. a way that doesn't cause the workspaces to break)? If not, how can I automatically set the number of workspaces back to 4?

    Update: I finally got around to trying what I commented on MrShunz's answer (adding WINDOW_MANAGER=/usr/bin/openbox to ~/.gnomerc). But the effect is the same as openbox --replace.
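
    For the second question, a hedged workaround is to ask the window manager (over EWMH) for 4 desktops again right after openbox takes over, e.g. from a login script:

        openbox --replace &
        sleep 2                  # give openbox a moment to become the WM
        wmctrl -n 4              # wmctrl package; sets _NET_NUMBER_OF_DESKTOPS
        # equivalent with xdotool instead of wmctrl:
        # xdotool set_num_desktops 4

    This doesn't explain why openbox ignores rc.xml on startup, but it makes the workspace count survive reboots without touching obconf.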

  • Revo 3610 not doing HDMI handshake

    - by DoomStone
    I am having a problem with my Revo 3610, which is connected to my TV via HDMI. For some reason it will not do the HDMI handshake with the TV, so the TV does not think there is anything in the HDMI port. I have tested the TV and it works fine with my laptop and DVD player. It does work sometimes, but this time it has failed for 2 days in a row, and I have tried rebooting, turning the TV off and on, and so on; nothing helps. I can trick the TV into listening to the HDMI input by connecting my laptop and then switching the HDMI cable back to the Revo; this way the image goes through nicely, but there is a big fat "Check signal cable." on the screen. I have also tried changing the resolution on the Revo, but this does not help either. Has anyone had this problem before, and if so, how did you fix it? Example: http://i.imgur.com/gguZ4.jpg
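
    From the Revo side, a hedged thing to try is forcing X to drop and re-probe the HDMI output (the output name HDMI-0 is an assumption; check xrandr's listing for the real name):

        xrandr -q                          # see what the driver calls the HDMI port
        xrandr --output HDMI-0 --off
        xrandr --output HDMI-0 --auto
        # if the TV still shows no signal, pushing a known TV mode sometimes helps:
        xrandr --output HDMI-0 --mode 1920x1080 --rate 50

    If re-probing never works, the handshake (EDID/hot-plug detect) may be failing at the cable or connector level rather than in software.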

  • syslog ip ranges to specific files using `rsyslog`

    - by Mike Pennington
    I have many Cisco / JunOS routers and switches that send logs to my Debian server, which uses rsyslogd. How can I configure rsyslogd to send these router / switch logs to a specific file, based on their source IP address? I do not want to pollute general system logs with these entries. For instance:

    - all routers in Chicago (source ip block: 172.17.25.0/24) to only log to /var/log/net/chicago.
    - all routers in Dallas (source ip block: 172.17.27.0/24) to only log to /var/log/net/dallas.

    Finally, these logs should be rotated daily for up to 30 days and compressed. NOTE: I am answering my own question
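
    A hedged sketch of one way to do it (file names are assumptions; "& ~" is the discard syntax of older rsyslog versions, newer ones use "& stop"):

        # /etc/rsyslog.d/30-network.conf
        # accept syslog from the network (UDP 514) if not already enabled:
        $ModLoad imudp
        $UDPServerRun 514
        # route by sender address, then discard so nothing reaches the general logs:
        :fromhost-ip, startswith, "172.17.25." /var/log/net/chicago
        & ~
        :fromhost-ip, startswith, "172.17.27." /var/log/net/dallas
        & ~

        # /etc/logrotate.d/netlogs
        /var/log/net/chicago /var/log/net/dallas {
            daily
            rotate 30
            compress
            missingok
            notifempty
        }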

  • How to recover a MySQL InnoDB table?

    - by Kau-Boy
    When I try to launch the Plesk administration page of the server I get the following error:

        ERROR: PleskMainDBException
        MySQL query failed: MySQL server has gone away

    The MySQL server itself is working well. However, the psa (Plesk) database seems to be somehow corrupt, and any action on this database results in a restart of the mysql process, so even queries to other databases on the same MySQL server are lost. If I try to connect to the psa database using phpMyAdmin, I can only see the number of tables the database originally had, but I am not able to open the table listing; as soon as I try, the mysql process crashes again. Connecting to the database over ssh works, and I can even run a SELECT statement against a table and get a result. I don't know if it is a Plesk error, an error in the psa database, or even in the MySQL server. Can you give me any tips on how to recover the Plesk system? Should I try to repair the Plesk installation first? And if so, how can I do it, and will all my settings get lost doing it?

    EDIT: Trying to dump the database, I get the following error:

        mysqldump: Got error: 2013: Lost connection to MySQL server during query when using LOCK TABLES

    EDIT: I could find out that the table 'data_bases' is responsible for the crash of the MySQL server process, but trying to repair it using a REPAIR TABLE statement doesn't work.

    EDIT: I have now dropped the whole database and restored it from a dump. But when I try to recreate the data_bases table I get the following error:

        ERROR 1005 (HY000) at line 24: Can't create table './psa/data_bases.frm' (errno: 121)

    I am not able to create the table again; somewhere in the MySQL system there is still some information about this table. I tested the same thing locally: if I just delete the table files and then try to create the table again, I get the same error, whereas if I drop the table through MySQL, I can create it again afterwards. But trying to drop the table through MySQL crashes the whole MySQL server. Is there any way to solve this issue?
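
    For the final error, a hedged diagnostic: errno 121 on an InnoDB CREATE TABLE usually means something with a conflicting name (most often a foreign-key constraint) is still registered in InnoDB's internal data dictionary even though the .frm file is gone. InnoDB will say what collided:

        mysql -e "SHOW ENGINE INNODB STATUS\G" | grep -A 20 'LATEST FOREIGN KEY ERROR'

    If that points at leftover constraints from the old data_bases table, recreating the table temporarily without its foreign keys (or under a different name, then renaming) often gets around the stale dictionary entry. Failing that, the heavyweight option is to dump every database, stop MySQL, remove the ibdata/ib_logfile files and re-import everything, which rebuilds the dictionary from scratch at the cost of a full reload.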
