Search Results

Search found 859 results on 35 pages for 'filesystems'.

Page 3 of 35

  • Ubuntu stops auto-mounting flash drive

    - by Brian
    It seems that after being up for a few days, my Ubuntu system refuses to auto-mount hot-plugged USB disks (i.e. flash drives). The output from dmesg shows that the kernel recognizes the device correctly. The only solution I'm aware of at the moment is to reboot (logging out may work as well, but the impact is the same since I have a bunch of stuff open and it takes a few minutes to get everything situated after startup/login). I thought gvfs-fuse-daemon was the thing responsible for managing filesystems in userspace, but killing and restarting that doesn't help. Any other ideas?
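    One workaround worth trying while the automounter is stuck (a rough sketch; this assumes udisks handles the automounting, and /dev/sdb1 is a placeholder for whatever device dmesg reports) is to mount the recognized device by hand:
      # see which device node the kernel just assigned
      dmesg | tail -n 20
      # ask udisks to mount it under /media as the automounter would (udisks2)
      udisksctl mount -b /dev/sdb1
      # or, on releases that still ship udisks1:
      udisks --mount /dev/sdb1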

    Read the article

  • (How) does deleting open files on Linux and a FAT file system work?

    - by lxgr
    It's clear to me how deleting open files works on filesystems that use inodes - unlink() just decrements the link count, and once it reaches zero and the last file handle to the file is closed, the inode is removed. But how does it work when using a file system that doesn't use inodes, like FAT32, with Linux? Some experiments suggest that deleting open files is still possible (unlike on Windows, where the unlink call wouldn't succeed), but what happens when the file system is uncleanly unmounted? How does Linux mark the files as unlinked, when the file system itself doesn't support such an operation? Is the directory entry just deleted but retained in memory (that would guarantee deletion after unmounting in any case, but would leave the file system in an inconsistent state)? Or is the deletion only marked in memory and written out when the last file handle is closed, avoiding possible corruption but restoring the deleted files after an unclean unmount?
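    The behaviour described above is easy to reproduce from a shell (a small sketch; the mount point /mnt/fat and the file name are placeholders): keep a descriptor open, unlink the file, and the data stays readable until the descriptor is closed.
      cd /mnt/fat
      echo hello > testfile
      exec 3< testfile   # keep a read descriptor open
      rm testfile        # the unlink succeeds on a vfat mount under Linux
      ls testfile        # "No such file or directory"
      cat <&3            # still prints "hello" via the open descriptor
      exec 3<&-          # closing the descriptor finally releases the space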

    Read the article

  • Is there a good FAT driver for FUSE? (Lightweight, not mountlo)

    - by Vi.
    The FUSE filesystem list shows FuseFat and FatFuse. Both are old: FatFuse is read-only, and FuseFat doesn't build and probably depends on glib. Right now I'm using mountlo for the task (mounting USB drives in a generic way without root access or suid helpers, except for fusermount itself), but it looks too heavyweight for such a task. Using FUSE to mount external storage devices is good for both security and flexibility reasons: the kernel sees only block reads and writes, while the code that deals with filesystem details runs with user privileges, letting the user mount custom filesystems and keeping kernel filesystem exploits out of reach. Is there a good vfat FUSE driver?

    Read the article

  • Problems mounting HPUX LVM+VXFS filesystem on Linux

    - by golimar
    I have a physical disk from an HPUX system that I need to access from a Debian Linux for ia64 system. From the hpux-lvm-tools project I have the tools to access the HPUX LVMs (Linux LVM has a different format) and I also have the freevxfs driver. I know beforehand that the disk has three partitions, that the biggest one contains LVM volumes, and that some of those are VxFS filesystems. I can see the partitions:
      # cat /proc/partitions
      major minor  #blocks    name
      8     32     143374744  sdc
      8     33     512000     sdc1
      8     34     142452736  sdc2
      8     35     409600     sdc3
    It finds a VG in one of the disk partitions:
      # ./vgscan_hpux
      On /dev/sdc2 - vg1328874723
      # ./pvdisplay_hpux /dev/sdc2
      PV General Information
      ----------------------
      VG Creation Time        Fri Feb 10 12:52:03 2012
      Physical Volume ID      1766760336 1328874723
      Volume Group ID         1766760336 1328874723
      Physical Volumes in VG  1766760336 1328874723
      VG Actication Mode      0 - LOCAL
      PE Size                 64 MBs
      Lvol sizes
      ----------
      lvol1 - 8 Extents - 512 MBs
      lvol2 - 192 Extents - 12288 MBs
      lvol3 - 16 Extents - 1024 MBs
      ...
      lvol21 - 13 Extents - 832 MBs
      lvol22 - 224 Extents - 14336 MBs
      lvol23 - 16 Extents - 1024 MBs
    Then I activate that VG and some new devices appear in my system:
      # ./pvactivate_hpux /dev/sdc2
      VG vg1328874723 Activated succesfully with 23 lvols.
      #
      # ll /dev/mapper/
      total 0
      crw------- 1 root root 10, 59 Nov 26 16:08 control
      lrwxrwxrwx 1 root root      7 Nov 26 16:38 vg1328874723-lvol1 -> ../dm-0
      lrwxrwxrwx 1 root root      7 Nov 26 16:38 vg1328874723-lvol10 -> ../dm-9
      ...
      lrwxrwxrwx 1 root root      7 Nov 26 16:38 vg1328874723-lvol8 -> ../dm-7
      lrwxrwxrwx 1 root root      7 Nov 26 16:38 vg1328874723-lvol9 -> ../dm-8
    But:
      # mount /dev/mapper/vg1328874723-lvol18 /mnt/tmp
      mount: you must specify the filesystem type
      # mount -t vxfs /dev/mapper/vg1328874723-lvol18 /mnt/tmp
      mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg1328874723-lvol18,
             missing codepage or helper program, or other error
             In some cases useful info is found in syslog - try dmesg | tail or so
      # lsmod |grep vxfs
      freevxfs               23905  0
    I also tried to identify the raw data with the file command and it just says 'data':
      # file -s /dev/mapper/vg1328874723-lvol18
      /dev/mapper/vg1328874723-lvol18: symbolic link to `../dm-17'
      # file -s /dev/dm-17
      /dev/dm-17: data
      #
    Any clues?
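    A hedged first check (a sketch only; the lvol device is the one from the question, and this assumes the usual VxFS layout, in which the superblock sits about 1 KiB into the volume) is to dump that area and make sure the driver is actually loaded, to see whether the logical volume really starts where the filesystem does:
      # dump the region where a VxFS superblock would normally live
      dd if=/dev/mapper/vg1328874723-lvol18 bs=512 skip=2 count=2 2>/dev/null | hexdump -C | head
      # confirm the read-only driver is loaded and check what it logged on the failed mount
      modprobe freevxfs
      dmesg | tail
    It is also worth checking which VxFS disk-layout version HP-UX created, since the read-only freevxfs driver only understands the older layouts.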

    Read the article

  • Understanding the Mounting of a Filesystem

    - by Tom H.
    I'm new to Linux and want to check my understanding of how mounting/filesystems work. I read the related manpages, but just want to be sure. I have a partition, say /dev/sda5, that is currently mounted to /home with various subdirs. It is my understanding that this means /dev/sda5 has its own portable filesystem that can be moved anywhere in the main filesystem. Questions: If I unmount /dev/sda5 from /home (# umount /home) and then mount it to /var/www/ (which is empty) (# mount -t ext3 /dev/sda5 /var/www), replace the fstab entry with "/dev/sda5 /var/www ext3 defaults,noatime,nodev 1 2" and run # mount -a, Q1) Are all of the contents of /home now accessible under /var/www/ (i.e. /home/username -> /var/www/username)? Q2) Are all of the permissions from the /home filesystem kept intact in this new location? Anything else I should be concerned with? Just want to make sure I don't go wipe/corrupt anything. Coming from Windows, the filesystem architecture takes getting used to (though I'm loving the flexibility!).
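    As a concrete check of the sequence described above (a sketch; "username" is a placeholder), ownership and permission bits can be compared before and after, since they are stored in the filesystem itself rather than at the mount point:
      umount /home
      mount -t ext3 /dev/sda5 /var/www
      mount | grep sda5            # confirm where the filesystem is now attached
      ls -ln /var/www              # numeric UIDs/GIDs: the same values as under /home
      ls -ld /var/www/username     # e.g. the old /home/username, unchanged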

    Read the article

  • Can I set up arbitrary filesystem redirection in Windows?

    - by Jon
    I am sitting in front of a Windows 7 machine that has no drive Q:. Is it possible to arrange for accesses to Q:\somedir to be redirected to an arbitrary location on the existing filesystems (for example, C:\Windows)? I would especially like a "set it and forget it" option, if one exists. I am assuming (although I have not tried it) that it is possible to use SUBST to mount an existing (empty, created for this purpose) folder as drive Q: and then MKLINK /J to create a directory symbolic link from Q:\somedir to wherever I want. However, this approach has a couple of drawbacks that I would like to avoid if possible: The drive Q: will be visible in the system. It is not as clean as I would like (removing the mounted folder will break it; a batch script needs to be manually added to the system startup). Is there a better option? If there is none and I am forced to make compromises, what is the closest I could get to the ideal solution? Assume anything is up for discussion.
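    For reference, the SUBST + MKLINK approach sketched above looks roughly like this from an elevated prompt (an illustration only; C:\MountRoot is a placeholder for the empty folder):
      mkdir C:\MountRoot
      subst Q: C:\MountRoot
      mklink /J Q:\somedir C:\Windows
      rem subst mappings do not survive a reboot, hence the startup script mentioned above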

    Read the article

  • Difference between the "bit-number" of a filesystem and that of an OS (like 32/64-bit)?

    - by learner
    I encountered 2 terms: "FAT32", a file system, and "Windows Vista 32 bit". I found that the meaning of a 32-bit OS is that the OS deals with data in chunks of a minimum size of 32 bits. I don't quite understand the depth of that, but I figure every file in that system with that OS should have a minimum size of 32 bits. I also read that these 32 bits are used to hold data about files' locations (references) and details. Which is it? I have also read that 4 GB of RAM is all that is needed at the most if you're on a 32-bit OS. But I don't understand why. If there are 32 bits to hold info about files and their locations, there can be 2^32 possible combinations of it. But I have found in many places that 2^32 is divided by 1024 three times to get 4GB. Why? Did that 2^32 become equal to 2^32 bytes? And about filesystems, I read a similar explanation for what 32 means in FAT32. It is supposed to mean that 32 bits are used to number filesystem blocks. Now how is that different from the number used for the OS above?
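    For what it's worth, the arithmetic behind the oft-quoted 4 GB figure is just the number of distinct addresses 32 bits can express, counted in bytes (a quick sketch):
      # 2^32 addressable bytes, converted to GiB by dividing by 1024 three times
      echo $(( (1 << 32) / 1024 / 1024 / 1024 ))   # prints 4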

    Read the article

  • Recommendations for distributed processing/distributed storage systems

    - by Eddie
    At my organization we have a processing and storage system spread across two dozen Linux machines that handles over a petabyte of data. The system right now is very ad hoc; processing automation and data management is handled by a collection of large Perl programs on independent machines. I am looking at distributed processing and storage systems to make it easier to maintain, evenly distribute load and data with replication, and grow in disk space and compute power. The system needs to be able to handle millions of files, varying in size from 50 megabytes to 50 gigabytes. Once created, the files will not be appended to, only replaced completely if need be. The files need to be accessible via HTTP for customer download. Right now, processing is automated by Perl scripts (that I have complete control over) which call a series of other programs (that I don't have control over because they are closed source) that essentially transform one data set into another. No data mining happening here. Here is a quick list of things I am looking for:
    Reliability: These data must be accessible over HTTP about 99% of the time, so I need something that does data replication across the cluster.
    Scalability: I want to be able to add more processing power and storage easily and rebalance the data across the cluster.
    Distributed processing: Easy and automatic job scheduling and load balancing that fits with the processing workflow I briefly described above.
    Data location awareness: Not strictly required but desirable. Since data and processing will be on the same set of nodes, I would like the job scheduler to schedule jobs on or close to the node that the data is actually on, to cut down on network traffic.
    Here is what I've looked at so far:
    Storage management:
    GlusterFS: Looks really nice and easy to use but doesn't seem to have a way to figure out what node(s) a file actually resides on to supply as a hint to the job scheduler.
    GPFS: Seems like the gold standard of clustered filesystems. Meets most of my requirements except, like GlusterFS, data location awareness.
    Ceph: Seems way too immature right now.
    Distributed processing:
    Sun Grid Engine: I have a lot of experience with this and it's relatively easy to use (once it is configured properly, that is). But Oracle got its icy grip around it and it no longer seems very desirable.
    Both:
    Hadoop/HDFS: At first glance it looked like Hadoop was perfect for my situation: distributed storage and job scheduling, and it was the only thing I found that would give me the data location awareness that I wanted. But I don't like the namenode being a single point of failure. Also, I'm not really sure if the MapReduce paradigm fits the type of processing workflow that I have. It seems like you need to write all your software specifically for MapReduce instead of just using Hadoop as a generic job scheduler.
    OpenStack: I've done some reading on this but I'm having trouble deciding if it fits well with my problem or not.
    Does anyone have opinions or recommendations for technologies that would fit my problem well? Any suggestions or advice would be greatly appreciated. Thanks!
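    On the data-location point, a small illustration of what HDFS exposes (a sketch; the path is a placeholder): fsck reports which datanodes hold each block of a file, which is exactly the hint a scheduler can use.
      hadoop fsck /archive/dataset-0001.dat -files -blocks -locations
      # ...lists every block together with the datanode addresses holding its replicas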

    Read the article

  • Is this a valid JFS partition?

    - by Coolmax
    This is my first question on StackExchange. My teacher gave me his laptop (with Fedora 16 on it) and a compact flash card with data on it. He wants access to the files on the card, but he couldn't get to them. The problem is that Linux doesn't know what type of partition it is. I suspect it is JFS:
      root@debian:~# dmesg |grep sdc
      [ 9066.908223] sd 3:0:0:1: [sdc] 3940272 512-byte logical blocks: (2.01 GB/1.87 GiB)
      [ 9066.962307] sd 3:0:0:1: [sdc] Write Protect is off
      [ 9066.962310] sd 3:0:0:1: [sdc] Mode Sense: 03 00 00 00
      [ 9066.962312] sd 3:0:0:1: [sdc] Assuming drive cache: write through
      [ 9067.028420] sd 3:0:0:1: [sdc] Assuming drive cache: write through
      [ 9067.028637] sdc: unknown partition table
      [ 9067.097065] sd 3:0:0:1: [sdc] Assuming drive cache: write through
      [ 9067.097281] sd 3:0:0:1: [sdc] Attached SCSI removable disk
    and some of the data:
      root@debian:~# hexdump -Cn 65536 /dev/sdc
      00000000  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  |................|
      *
      00008000  4a 46 53 31 01 00 00 00 48 63 0e 00 00 00 00 00  |JFS1....Hc......|
      00008010  00 10 00 00 0c 00 03 00 00 02 00 00 09 00 00 00  |................|
      00008020  00 20 00 00 00 09 20 10 02 00 00 00 00 00 00 00  |. .... .........|
      00008030  04 00 00 00 26 00 00 00 02 00 00 00 24 00 00 00  |....&.......$...|
      00008040  41 03 00 00 16 00 00 00 00 02 00 00 a0 cc 01 00  |A...............|
      00008050  37 00 00 00 69 cc 01 00 b6 d8 ac 4b 00 00 00 00  |7...i......K....|
      00008060  32 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00  |2...............|
      00008070  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  |................|
      00008080  00 00 00 00 00 00 00 00 90 15 e5 5f e3 c4 45 fa  |..........._..E.|
      00008090  9d 6a 5c b5 4f da 62 1a 00 00 00 00 00 00 00 00  |.j\.O.b.........|
      000080a0  00 00 00 00 00 00 00 00 c3 c9 01 00 ed 81 00 00  |................|
      000080b0  01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  |................|
      000080c0  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  |................|
      *
      [cut]
      *
      0000f000  4a 46 53 31 01 00 00 00 48 63 0e 00 00 00 00 00  |JFS1....Hc......|
      0000f010  00 10 00 00 0c 00 03 00 00 02 00 00 09 00 00 00  |................|
      0000f020  00 20 00 00 00 09 20 10 00 00 00 00 00 00 00 00  |. .... .........|
      0000f030  04 00 00 00 26 00 00 00 02 00 00 00 24 00 00 00  |....&.......$...|
      0000f040  41 03 00 00 16 00 00 00 00 02 00 00 a0 cc 01 00  |A...............|
      0000f050  37 00 00 00 69 cc 01 00 b6 d8 ac 4b 00 00 00 00  |7...i......K....|
      0000f060  32 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00  |2...............|
      0000f070  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  |................|
      0000f080  00 00 00 00 00 00 00 00 90 15 e5 5f e3 c4 45 fa  |..........._..E.|
      0000f090  9d 6a 5c b5 4f da 62 1a 00 00 00 00 00 00 00 00  |.j\.O.b.........|
      0000f0a0  00 00 00 00 00 00 00 00 c3 c9 01 00 ed 81 00 00  |................|
      0000f0b0  01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  |................|
      0000f0c0  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  |................|
      *
      00010000
    I'm a total newbie to filesystems. I googled and found that a JFS superblock may start at offset 0x8000. But what next? How do I mount this card? If there were a normal partition table I would expect 55 AA at bytes 510 and 511, but the first 0x8000 bytes are clear. Any help would be greatly appreciated. And sorry for my bad English :) Kind regards.
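    Given that the JFS1 signature shows up 0x8000 bytes into the raw device rather than inside a partition, a reasonable next step (a sketch; /mnt/cf is a placeholder, and mounting read-only keeps things safe) is to try mounting the whole device instead of a partition:
      modprobe jfs
      mount -t jfs -o ro /dev/sdc /mnt/cf
      # if the filesystem actually starts at some offset, a loop device can apply it:
      # losetup -o OFFSET_IN_BYTES /dev/loop0 /dev/sdc && mount -t jfs -o ro /dev/loop0 /mnt/cf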

    Read the article

  • PHP detecting filesystem encoding

    - by Evert
    Hi guys, I need to save files with non-Latin filenames on a filesystem, using PHP. I want to make this work cross-platform. How do I know what encoding I can use to write the file? I understand many modern filesystems are UTF-8 based (is this correct?), but I doubt Windows XP is (for instance). So, is there a robust detection mechanism? Evert

    Read the article

  • mkfs.xfs: libxfs_device_zero write failed: Input/output error

    - by Crazy_Bash
    I can't find a way to create a filesystem on one of my disks. First I'm getting the following output:
      [root@~]# mkfs.xfs /dev/sdb1
      mkfs.xfs: /dev/sdb1 appears to contain a partition table (dos).
      mkfs.xfs: Use the -f option to force overwrite.
    After using the -f flag:
      [root@~]# mkfs.xfs -f /dev/sdb1
      meta-data=/dev/sdb1            isize=256    agcount=32, agsize=22892696 blks
               =                     sectsz=512   attr=2
      data     =                     bsize=4096   blocks=732566272, imaxpct=5
               =                     sunit=0      swidth=0 blks
      naming   =version 2            bsize=4096   ascii-ci=0
      log      =internal log         bsize=4096   blocks=357698, version=2
               =                     sectsz=512   sunit=0 blks, lazy-count=1
      realtime =none                 extsz=4096   blocks=0, rtextents=0
      mkfs.xfs: libxfs_device_zero write failed: Input/output error
    /dev/sdb:
      Disk /dev/sdb: 3001GB
      1      1049kB  3001GB  3001GB  primary
    Linux: CentOS 6.3, Linux 1 2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
    What I've tried so far: recreating the partition with parted (rm 1).
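    Since the drive is about 3 TB, one thing worth ruling out (a sketch, not a diagnosis) is an msdos partition table, which cannot correctly describe partitions beyond 2 TiB; checking the label type and the kernel's view of the failure narrows it down:
      parted /dev/sdb print      # look at the "Partition Table:" line: msdos vs gpt
      dmesg | tail -n 30         # the real block-layer error behind "Input/output error"
      smartctl -H /dev/sdb       # basic drive health, if smartmontools is installed
      # if the label is msdos, relabelling as GPT (this destroys the partition table!):
      # parted /dev/sdb mklabel gpt
      # parted /dev/sdb mkpart primary 1MiB 100%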

    Read the article

  • EXT4-fs error (device loop0) ext4_mb_generate_buddy

    - by Rachel Nark
    I've been randomly getting these in my /var/log/messages:
      [5747511.945300] EXT4-fs error (device loop0): ext4_mb_generate_buddy: EXT4-fs: group 1: 29505 blocks in bitmap, 29455 in gd
    I've not had any luck with Google. Is this a drive failure, just a kernel glitch, or a sign I need to fsck? Kernel: 2.6.32-320.4.1.lve1.1.4.el6.x86_64 #1 SMP Wed Mar 7 06:32:27 EST 2012 x86_64 x86_64 x86_64 GNU/Linux. The DC, I think, gave me a new drive a few months ago.
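    Since the errors are on loop0, a sensible first step (a sketch; the backing-file path is a placeholder) is to find the file behind the loop device and fsck it while it is not mounted:
      losetup -a                          # shows which file backs /dev/loop0
      umount /dev/loop0                   # the filesystem must be unmounted first
      fsck.ext4 -f /path/to/backing.img   # or: e2fsck -f /dev/loop0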

    Read the article

  • Temp file folder full, but clearing it out doesn't seem to work

    - by Vegar Westerlund
    I got this error on our build server:
      MSB6003: The specified task executable could not be run.
      MSB5003: Failed to create a temporary file. Temporary files folder is full or its path is incorrect.
      The directory name is invalid. [C:\Users\swdev_build\bamboo-agent-home\xml-data\build-dir\XXXX-ZZZZZZ-JOB1\build.xml]
    This happened when trying to run an MSBuild task that runs our test suite. Using the power of Google, it seemed that this should be a problem with the %TEMP% folder running out of temp file names, apparently because a 4-digit hex name is used (for a total of 65535 temporary files). The problem is that the error persists, even after going into the %TEMP% folder and deleting everything before rebooting the machine. Does anyone know how to fix the issue and, more importantly, how to prevent it from happening again? What is the preferred way of cleaning up these temp files? Make it part of the build process? Update: Actually I cleared the TEMP folder of the only local user, but since the build server is running as the SYSTEM user, it probably has some temp folder somewhere else.

    Read the article

  • Alternative to robocopy /MIR

    - by Robin Day
    We run a number of web apps that store a lot of local data in small XML files. One part of our backup/recovery strategy is to produce a local mirror of the file system via a VPN to the hosting centre. The VPN connection is only a 12Mbps ADSL line, and while there are a lot of files and directories, the actual number of files that changes is quite small. Although the bandwidth is probably an issue, I'm seeing results such as the output below. The robocopy /MIR took 5 hours to run yet only 30 minutes to actually perform the copy. Does anyone have any suggestions as to ways to improve this? The 5 hours is now bordering on too slow, and if we can't find a way to speed this up then we're going to have to come up with a completely different solution.
                 Total     Copied    Skipped   Mismatch  FAILED   Extras
      Dirs  :    17625     6618      11007     0         0        0
      Files :    1112430   1223      1111207   0         0        0
      Bytes :    57.451 g  192.25 m  57.263 g  0         0        0
      Times :    5:01:23   0:35:55   0:00:00   4:25:27
      Speed :    93509 Bytes/sec.
      Speed :    5.350 MegaBytes/min.
      Ended :    Fri Apr 16 05:54:23 2010

    Read the article

  • How to install Ceph on an EC2 Amazon Linux AMI

    - by takaomag
    I want to test Ceph (a distributed network storage and file system) on some EC2 hosts derived from the Amazon Linux AMI (amzn-ami-2011.09.2.x86_64-ebs). The kernel version is 3.2 and btrfs is enabled, but the kernel config options related to Ceph (CONFIG_CEPH_FS and CONFIG_BLK_DEV_RBD) seem to be disabled. Do I have to build a new kernel and register it with Amazon? Or does someone know an easier way?
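    Before rebuilding anything, it is worth confirming what the running kernel actually provides (a sketch; module names as used by mainline kernels of that era):
      grep -i ceph /boot/config-$(uname -r)   # CONFIG_CEPH_FS / CONFIG_BLK_DEV_RBD: =y, =m or unset
      modprobe ceph && modprobe rbd           # succeeds if they were built as modules
    Note that the userspace daemons (ceph-osd, ceph-mon, ceph-mds) do not need these kernel options at all; they only matter for mounting CephFS or mapping RBD devices from this host.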

    Read the article

  • Can I change the file system on the OS partition on Server 2008 R2?

    - by KCotreau
    I have a client using R1Soft Continuous Data Protection backup, and two of the Server 2008 R2 boxes were erroring out with these errors:
      Unable to obtain NTFS volume data for device '\\?\Volume{f612849e-7125-11e0-8772-806e6f6e6963}': Incorrect function.
      Unable to discover information for filesytem volume '\\?\Volume{f612849e-7125-11e0-8772-806e6f6e6963}'; Unable to obtain NTFS volume
    So I backed up all the registry entries with this GUID, {f612849e-7125-11e0-8772-806e6f6e6963}, in them, and deleted them based on some VERY sparse info from R1Soft. I then decided to restore them before I rebooted, and to do a system state backup first using MS backup, and even it errored out, saying that there were FAT32 partitions. This was a major clue, as the only two computers with problems had these FAT32 partitions. I figured if MS backup can't back up something, any other program is likely to have problems. Also, now that I realized the servers had FAT32 partitions on them, the error referencing NTFS takes on more weight. The partitions on both servers have the label "OS", but on one of the computers it is given a letter, and on the other it is not. So I am thinking that if I just convert the file systems from FAT32 to NTFS, it may solve the backup problem. So the question is this: Can I just convert those partitions, and does anyone have any concrete knowledge of any major downsides, like the servers not coming back up (of course, I would do one at a time)? My thinking is that the answer is probably at least 95% no, but they are production servers, so I wanted to get some second opinions.
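    For reference, the in-place conversion being considered here is done with the built-in convert utility (an illustrative sketch; Q: is a placeholder for whatever letter the OS partition is given). It is documented as non-destructive, but a good backup first is still prudent:
      rem rule out existing FAT corruption first
      chkdsk Q:
      convert Q: /FS:NTFS
      rem a volume without a drive letter needs one assigned first (diskpart: assign letter=Q)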

    Read the article

  • Directory with 980MB of metadata and millions of files: how to delete it? (ext3)

    - by Alexandre
    Hello, so I'm stuck with this directory:
      drwxrwxrwx 2 dan users 980M 2010-12-22 18:38 sessions2
    The directory's contents are small: just millions of tiny little files. I want to wipe it from the filesystem but have been unable to. My first try was:
      find sessions2 -type f -delete
    and
      find sessions2 -type f -print0 | xargs -0 rm -f
    but I had to stop because both caused escalating memory usage. At one point it was using 65% of the system's memory. So I thought (no doubt incorrectly) that it had to do with the fact that dir_index was enabled on the system. Perhaps find was trying to read the entire index into memory? So I did this (foolishly):
      tune2fs -O^dir_index /dev/xxx
    Alright, so that should do it. Ran the find command above again and... same thing. Crazy memory usage. I hurriedly ran tune2fs -Odir_index /dev/xxx to reenable dir_index, and ran to Server Fault! Two questions:
    1) How do I get rid of this directory on my live system? I don't care how long it takes, as long as it uses little memory and little CPU. By the way, using nice find ... I was able to reduce CPU usage, so my problem right now is only memory usage.
    2) I disabled dir_index for about 20 minutes. No doubt new files were written to the filesystem in the meanwhile. I reenabled dir_index. Does that mean the system will not find the files that were written before dir_index was reenabled, since their filenames will be missing from the old indexes? If so, and I know these new files aren't important, can I maintain the old indexes? If not, how do I rebuild the indexes? Can it be done on a live system? Thanks!
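    One widely used low-memory approach for exactly this situation (a sketch; /tmp/empty is a placeholder) is to let rsync empty the directory, which in practice tends to behave much better here than find/xargs on ext3:
      mkdir /tmp/empty
      ionice -c3 nice rsync -a --delete /tmp/empty/ sessions2/   # deletes everything not present in the empty dir
      rmdir sessions2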

    Read the article

  • Kernel NTFS driver vs NTFS-3G

    - by Jack
    A more comprehensively phrased question, since I lost access to the other one. I would ask that the other one be deleted, not this one, as it should not have been migrated in the first place. There are currently two NTFS drivers available for Linux: the NTFS driver included in the kernel, and the userspace NTFS-3G driver that makes use of FUSE. By all accounts, NTFS-3G works perfectly. My question then is: if the NTFS filesystem has been successfully reverse engineered, why has the kernel NTFS team not implemented the changes in their driver? At the moment it is still marked as experimental, and there is a good chance it will destroy your data. Note: This has absolutely nothing to do with distributions...

    Read the article

  • Ext3 fs: Block bitmap for group 1 not in group (block 0). Is the fs dead?

    - by ip
    Hi, my company has a server with one big partition holding a MySQL database and PHP files. Now this partition seems to be corrupted, as reported by kernel messages when I tried to mount it manually:
      [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors: Block bitmap for group 1 not in group (block 0)!
      [329862.817846] EXT3-fs: group descriptors corrupted!
    I've tried to recover it by running tools from a PLD live CD. These are the tools I have tested, without any success:
    - e2retrieve
    - testdisk
    - photorec
    - dd_rescue/dd_rhelp
    - ddrescue
    - fsck.ext2
    - e2salvage
      dumpe2fs 1.41.3 (12-Oct-2008)
      Filesystem volume name:   /dev/sda3
      Last mounted on:          <not available>
      Filesystem UUID:          dd51610b-6de0-4392-a6f3-67160dbc0343
      Filesystem magic number:  0xEF53
      Filesystem revision #:    1 (dynamic)
      Filesystem features:      has_journal filetype sparse_super
      Default mount options:    (none)
      Filesystem state:         not clean with errors
      Errors behavior:          Continue
      Filesystem OS type:       Linux
      Inode count:              9502720
      Block count:              18987570
      Reserved block count:     949378
      Free blocks:              11555345
      Free inodes:              11858398
      First block:              0
      Block size:               4096
      Fragment size:            4096
      Blocks per group:         32768
      Fragments per group:      32768
      Inodes per group:         16384
      Inode blocks per group:   512
      Last mount time:          Wed Mar 24 09:31:03 2010
      Last write time:          Mon Apr 12 11:46:32 2010
      Mount count:              10
      Maximum mount count:      30
      Last checked:             Thu Jan 1 01:00:00 1970
      Check interval:           0 (<none>)
      Reserved blocks uid:      0 (user root)
      Reserved blocks gid:      0 (group root)
      First inode:              11
      Inode size:               128
      Journal inode:            8
      Journal backup:           inode blocks
      dumpe2fs: A block group is missing an inode table while reading journal inode
    Are there any other tools I should try before considering this disk definitely unrecoverable? Many thanks, ip
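    One avenue not in the list above (a hedged sketch; try it against an image of the disk first if at all possible) is e2fsck with an alternate superblock, since the backup superblocks also carry backup group descriptors; with 4096-byte blocks the first backup normally sits at block 32768:
      mke2fs -n /dev/sda3               # -n only prints where the superblock backups would be; it writes nothing
      e2fsck -b 32768 -B 4096 /dev/sda3
      # if that backup is also damaged, try the next one mke2fs -n reported (e.g. -b 98304)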

    Read the article

  • Troubleshooting transient Windows I/O "The parameter is incorrect." errors

    - by Kevin
    We have a set of .NET 2.0 applications running on Windows 2003 servers which have started experiencing transient "The parameter is incorrect." Windows I/O errors. These errors always happen when accessing a file share on a SAN. The fact that this error has happened with multiple applications and multiple servers leads me to believe that this is an infrastructure issue of some sort. The applications all run under the same domain account. When the errors occur they generally resolve themselves within a few minutes. I can log in to the application server once the error starts occurring and access the file share myself with no problems. I have looked at the Windows event logs and haven't found anything useful. Due to the generic nature of "The parameter is incorrect.", I am looking for additional troubleshooting suggestions for this error. A sample stack trace is below. Note that while this example was during a directory creation operation, when the problem is occurring this exception is thrown for any file system operation on the share.
      Exception 1: System.IO.IOException
      Message: The parameter is incorrect.
      Method: Void WinIOError(Int32, System.String)
      Source: mscorlib
      Stack Trace:
        at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
        at System.IO.Directory.InternalCreateDirectory(String fullPath, String path, DirectorySecurity dirSecurity)
        at System.IO.Directory.CreateDirectory(String path, DirectorySecurity directorySecurity)
        at System.IO.Directory.CreateDirectory(String path)

    Read the article

  • Experience with MooseFS?

    - by brown.2179
    Anyone have any experience using MooseFS? I want an easy distributed storage platform to store static data archive of about 10 TB and serve it to 20-40 nodes. Also I want to be able to add storage as the archive grows without having to rebuild the filesystem. I don't care if it's a bit slow. I just want it to be simple and stable. Basically from what I can see for OS X it's between MooseFS and Gluster. Any other suggestions?

    Read the article

  • Recovery of a Windows DFS partition with shadow-copy versioned files when overwritten with an older modified file

    - by patjbs
    I've noticed the following "bug" on a DFS volume with shadow copies. Pretend you have the following folders/files under shadow copy versioning, going back two weeks:
      MyDirectory
        + MyFile - Modified Date 8/1/2009
    The current date: 8/30/2009. You have another version of MyFile stored elsewhere, with a modified date of 7/1/2009. Copy your other version of MyFile into MyDirectory, overwriting the newest version. I expected that you could roll back to the version that was there when it was last imaged, say on the prior day, and recover your 8/1 version. Not the case. Now, when you go to look at previous versions for the past two weeks, the versioning of that file will be entirely lost, and you'll be stuck with your older 7/1 version. Suckage. Questions: (1) Is this intentional, and if so, what's the rationale? I assume that DFS picks up on the versioning based on the current file, and that's what's wiping out prior versions, but it seems like a fairly stupid/naive way of handling versioning to me. (2) Is there a way to backtrack out of this, without resorting to restoration from other backup media? Thanks!

    Read the article

  • Recover files from corrupt filesystem

    - by Emile 81
    My situation: I have an older 80GB IDE internal hdd with a few files on it that I would very much like to recover: some Word documents, some LaTeX documents (text files) and pictures (png, jpg, eps files), and some other text documents and Visual Studio project files. I had backed them up (not the LaTeX ones, though) using svn, but have not committed lately, and would lose a lot of work if I can't recover them. The hdd seems to have lost its filesystem; I have no idea how it came about. I know it has/had 3 NTFS partitions, and I know the files I want are on the second or third partition. I read http://superuser.com/questions/81877/recover-hard-disk-data. Partition Find and Mount did not see all the partitions using its intelligent scan. TestDisk does (I think); I followed the step-by-step instructions here, but when I try to list the files it says: "Can't open filesystem, filesystem seems damaged." I'm not sure how to proceed here, as TestDisk's wiki does not contain this error message as far as I know. I don't know if the hdd is going to fail, or whether some program has caused the filesystem to become corrupt; the hdd doesn't make a sound, so I guess that's good. I would like some guidance so I don't accidentally cause more damage. (e.g. is it OK to let TestDisk write the filesystem to disk? I'm pretty sure the partitions are listed OK, but not 100%.)

    Read the article

  • Recovering a VHD after resizing it using VBoxManage

    - by tjrobinson
    I am using VirtualBox 4.1.18 and had a virtual machine running Windows 8 RC with a single VHD, which was initially sized at 25GB (too small!). After installing the OS and some applications I ran out of disk space, so I shut down the guest and then used this command to resize the VHD to 80GB:
      C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe modifyhd "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd" --resize 81920
      0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
      C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe showhdinfo "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd"
      UUID:                 03fb26e7-d8bb-49b5-8cc2-1dc350750e64
      Accessible:           yes
      Logical size:         81920 MBytes
      Current size on disk: 24954 MBytes
      Type:                 normal (base)
      Storage format:       VHD
      Format variant:       dynamic default
      In use by VMs:        Windows 8 RC (UUID: a6e6aa57-2d3a-421b-8042-7aae566e3e0b)
      Location:             D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd
    So far so good. However, when I started the guest up again I got the dreaded:
      Fatal: No bootable medium found! system halted
    If I boot into GParted it shows a single 80GB drive as "unallocated". The option to scan for and attempt to repair a filesystem doesn't find anything. I also tried cloning the VHD into a VDI file, just in case that magically fixed it:
      C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe clonehd "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd" "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi" --format VDI
      0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
      Clone hard disk created in format 'VDI'. UUID: baf0c2c4-362f-4f6c-846a-37bb1ffc027b
      C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe showhdinfo "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi"
      UUID:                 baf0c2c4-362f-4f6c-846a-37bb1ffc027b
      Accessible:           yes
      Logical size:         81920 MBytes
      Current size on disk: 24798 MBytes
      Type:                 normal (base)
      Storage format:       VDI
      Format variant:       dynamic default
      In use by VMs:        Windows 8 RC (UUID: a6e6aa57-2d3a-421b-8042-7aae566e3e0b)
      Location:             D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi
    Is there anything else I could try to recover the drive? No, I don't have a backup :( My host OS is Windows 7 64-bit.
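    One non-destructive way to see what is left of the partition table (a sketch; the source path follows the question, win8rc.img is a placeholder) is to convert a copy to a raw image and inspect it with standard tools, for example from a Linux machine or VM:
      VBoxManage clonehd "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd" win8rc.img --format RAW
      parted win8rc.img unit s print    # does any partition table survive at all?
      testdisk win8rc.img               # deep-scan the image for lost NTFS partitions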

    Read the article
