Search Results

Search found 103 results on 5 pages for 'hfs'.

Page 1/5 | 1 2 3 4 5  | Next Page >

  • HFS for Windows by Paragon BSOD

    - by texmex5
    I installed HFS for Windows 7 yesterday, rebooted, copied some files from my internal drive to an external HFS drive, then rebooted again into the Mac side, which worked fine. Later the battery on my Mac went dead and the computer switched off. This morning I tried to boot into Windows but got a BSOD as soon as I moved my mouse on the login screen; the BSOD refers to an error with the driver HFSPLUS.SYS. Now when I try to boot into the Mac side (holding Alt while restarting), the Mac drive isn't shown, and I can only boot into Windows 7 Safe Mode. I also cannot uninstall HFS for Windows; the uninstaller in Control Panel says: "THE WINDOWS INSTALLER SERVICE COULD NOT BE ACCESSED. THIS CAN OCCUR WHEN WINDOWS INSTALLER IS NOT CORRECTLY INSTALLED". Setup: Intel Core 2 Duo P8600 @ 2.40 GHz, 4 GB RAM, Windows 7 Ultimate 32-bit.
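
    A hedged first step from Safe Mode (the driver path is my assumption based on the BSOD message; back up before touching drivers): stop Windows from loading the Paragon driver, then re-enable the Installer service so the uninstaller can run:

        ren C:\Windows\System32\drivers\hfsplus.sys hfsplus.sys.bak
        sc config msiserver start= demand
        net start msiserver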

    Read the article

  • What are 'damaged files' on external hard drive (HFS format for OS X)?

    - by dtlussier
    I have an external HD formatted to default HFS (Mac OS Extended - Journaled), and every once in a while I get a folder called DamagedFiles in the root of the volume. The folder contains a collection of links to files on the drive. In general the files seem fine; I am, for example, able to open the images or text files without a problem. Is this serious? What can I do to fix it? Any advice would be great, as I couldn't find anything on here or via Google that addressed this problem in particular. Many thanks.
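
    The DamagedFiles folder is typically left behind by the filesystem check repairing orphaned or overlapping catalog entries, so a hedged first step is to verify the volume yourself (the volume name is a placeholder):

        diskutil verifyVolume /Volumes/External
        diskutil repairVolume /Volumes/External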

    Read the article

  • Mount encrypted hfs in ubuntu

    - by pagid
    I'm trying to mount an encrypted HFS+ partition in Ubuntu. An older post describes quite well how to do it, but lacks the information on how to use encrypted partitions. What I have so far is:

        # install required packages
        sudo apt-get install hfsprogs hfsutils hfsplus loop-aes-utils
        # try to mount it
        mount -t hfsplus -o encryption=aes-256 /dev/xyz /mount/xyz

    But once I run this I get the following error:

        Error: Password must be at least 20 characters.

    So I tried typing it in twice, but that results in:

        ioctl: LOOP_SET_STATUS: Invalid argument, requested cipher or key (256 bits) not supported by kernel

    Any suggestions? Thanks.
    Edit: One thing I'm not sure about is whether I'm using the right password. My assumption is that it is my default one for these situations, but I'm not sure whether Mac OS X chose another password internally for that.
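
    That ioctl error usually points at the kernel side of the loop-crypto stack being missing rather than at a wrong password. A hedged check (module names vary by kernel build, and stock Ubuntu kernels ship cryptoloop rather than the patched loop-AES module):

        grep -i aes /proc/crypto        # is an AES cipher available at all?
        sudo modprobe cryptoloop
        sudo modprobe aes_generic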

    Read the article

  • HFS partition mounting read-only

    - by Sid
    Hey, I have an external Western Digital hard drive with two HFS partitions, with journaling disabled. When I connect it to a computer running Linux (Debian or Ubuntu), both partitions are frequently mounted read-only. In the past, mounting them on my MacBook and re-running the command to disable journaling often worked (even though it would tell me that journaling was already disabled), but I would love a solution that works every time. Thanks!
    Edit: In light of Chris Johnsen's comment below: my question is how to mount the filesystem read/write on Linux, since it does not do so automatically.
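
    A hedged way to force read-write every time (device and mount point are placeholders): check the filesystem with the hfsprogs tools, then use the Linux hfsplus driver's force option, which overrides its read-only fallback:

        sudo fsck.hfsplus /dev/sdb1
        sudo mount -t hfsplus -o force,rw /dev/sdb1 /media/external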

    Read the article

  • Need help with testdisk output

    - by dan
    I had (note the past tense) an Ubuntu 12.04 system with separate partitions for the base system and /home. It started acting wonky, so I decided to do a reinstall with 12.10, intending to reinstall only the base partition. After several seconds I realized that the installer was repartitioning the whole drive, so I pulled the power cord. I'm now trying to recover as much as I can with testdisk, but it finds roughly 100 distinct partitions when I run it - mostly HFS+ or Solaris /home (which I think is really ext4; I've never had Solaris on this machine). I've pasted an abbreviated version of the testdisk output below (the first ~100 lines, and then ~100 lines from the middle of the output). Is there a way to combine or recreate the partitions and then recover the data, or some other way to maximize what I can recover (ideally as much of the file system as possible)? I really only care about what was in /home - I'd rather not use photorec, since I don't have another 2 TB drive lying around to recover to. Thanks, Dan

        Mon Dec 10 06:03:00 2012
        Command line: TestDisk

        TestDisk 6.13, Data Recovery Utility, November 2011
        Christophe GRENIER <[email protected]>
        http://www.cgsecurity.org
        OS: Linux, kernel 3.2.34-std312-amd64 (#2 SMP Sat Nov 17 08:06:32 UTC 2012) x86_64
        Compiler: GCC 4.4
        Compilation date: 2012-11-27T22:44:52
        ext2fs lib: 1.42.6, ntfs lib: libntfs-3g, reiserfs lib: 0.3.1-rc8, ewf lib: none
        /dev/sda: LBA, HPA, LBA48, DCO support
        /dev/sda: size       3907029168 sectors
        /dev/sda: user_max   3907029168 sectors
        /dev/sda: native_max 3907029168 sectors
        Warning: can't get size for Disk /dev/mapper/control - 0 B - CHS 1 1 1, sector size=512
        /dev/sr0 is not an ATA disk

        Hard disk list
        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63, sector size=512 - WDC WD20EARS-00J2GB0, S/N:WD-WCAYY0075071, FW:80.00A80
        Disk /dev/sdb - 1013 MB / 967 MiB - CHS 1014 32 61, sector size=512 - Generic Flash Disk, FW:8.07
        Disk /dev/sr0 - 367 MB / 350 MiB - CHS 179470 1 1 (RO), sector size=2048 - PLDS DVD+/-RW DH-16AAS, FW:JD12

        Partition table type (auto): Intel
        Disk /dev/sda - 2000 GB / 1863 GiB - WDC WD20EARS-00J2GB0
        Partition table type: EFI GPT

        Analyse Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        Current partition structure:
        Bad GPT partition, invalid signature.
        search_part()
        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB
        Results
        P MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        P Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB
        interface_write()
        1 P MS Data 2048 3900753919 3900751872
        2 P Linux Swap 3900755968 3907028975 6273008
        search_part()
        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        block_group_nr 1
        recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
        recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
        block_group_nr 1
        recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
        recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
        block_group_nr 1
        recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
        recover_EXT2: s_block_group_nr=1/14584, s_mnt_count=0/27, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 477915164
        recover_EXT2: part_size 3823321312
        MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
        block_group_nr 1
        ....snip......
        MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
        MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
        MS Data 4096 3823325407 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
        MS Data 7028840 7033383 4544 FAT12, 2326 KB / 2272 KiB
        Mac HFS 67856948 67862179 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 67862176 67867407 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67862244 67867475 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 67867404 67872635 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67867472 67872703 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67872700 67877931 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67937834 67948067 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB
        Mac HFS 67938012 67948155 10144 HFS+ found using backup sector!, 5193 KB / 5072 KiB
        Mac HFS 67948064 67958297 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB
        Mac HFS 67948070 67958303 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB
        Mac HFS 67948152 67958295 10144 HFS+, 5193 KB / 5072 KiB
        Mac HFS 67958292 67968435 10144 HFS+, 5193 KB / 5072 KiB
        Mac HFS 67958300 67968533 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB
        Mac HFS 67992596 67997827 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 67997824 68003055 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67997892 68003123 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 68003052 68008283 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 68003120 68008351 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 68008348 68013579 5232 HFS+, 2678 KB / 2616 KiB
        Solaris /home 84429840 123499141 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84429952 123499253 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84493136 123562437 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84493248 123562549 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84566088 123635389 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84566200 123635501 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84571232 123640533 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84571344 123640645 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84659952 123729253 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84660064 123729365 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84690504 123759805 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84690616 123759917 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84700424 123769725 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84700536 123769837 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84797720 123867021 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84797832 123867133 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84812544 123881845 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84812656 123881957 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84824552 123893853 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84824664 123893965 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84847528 123916829 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84847640 123916941 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84886840 123956141 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84886952 123956253 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84945488 124014789 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84945600 124014901 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84957992 124027293 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84958104 124027405 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84962240 124031541 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84962352 124031653 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84977168 124046469 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84977280 124046581 39069302 UFS1, 20 GB / 18 GiB
        MS Data 174395467 178483851 4088385
        ..... snip (it keeps going on for quite a while)
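
    A hedged note on the output itself: testdisk's own hint line ('recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed') suggests the big ext4 partition at sector 2048 is still largely intact, and the HFS+/UFS1 hits look like stray signatures inside it rather than real partitions. A cautious sketch (the device name /dev/sda1 is an assumption; keep everything read-only at first):

        # After letting testdisk write just the ext4 + swap entries it finds
        # consistently, check the filesystem against the backup superblock
        # without modifying anything (-n):
        sudo e2fsck -n -b 32768 -B 4096 /dev/sda1

        # If that looks sane, mount read-only and copy /home off immediately:
        sudo mount -o ro /dev/sda1 /mnt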

    Read the article

  • Can't copy from hfs+ external

    - by Janine
    I've spent days and days trying to figure this out, but still can't. I think I've read every article on this and tried everything there is to try, and I still can't copy any file from my Mac-formatted HFS+ external drive. Sorry if there's an article I've missed. I have disabled journaling and tried all the hfsprogs commands I could find, but whenever I click on a folder on the external drive and try to copy it to my home directory, I get: "The folder xxx cannot be handled because you don't have permission to read its content." I then found an article about getting around this by copying the files through the terminal, but when I run sudo cp -r in the terminal with my external drive's path, I always get 'no such file or directory'. Does anyone have another suggestion for me? Thanks in advance!
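
    Two hedged guesses (paths here are illustrative): the cp error often just means an unquoted path with spaces in it, and the permission error can be sidestepped by remounting the volume so your own user owns everything on it:

        sudo cp -r "/media/My Passport/Some Folder" "$HOME/"
        # or remap ownership at mount time:
        sudo umount /dev/sdb2
        sudo mount -t hfsplus -o force,uid=$(id -u),gid=$(id -g) /dev/sdb2 /mnt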

    Read the article

  • How does one change the UUID of a Volume on Mac OS X 10.6?

    - by Emmel
    Does anyone know how to change the UUID of a volume? The background for this question is a duplicate-UUID issue: I have /Volumes/OldMacHD with a UUID of XYZ, and /Volumes/Mirror1 with the same UUID of XYZ (I bet that's because OldMacHD used to be part of this mirror). I got these UUIDs via 'diskutil info /dev/thatdisknumber | grep UUID'. I'd like to change the UUID of 'Mirror1'. I discovered by chance the 'hfs.util' utility, since these are HFS volumes after all. The man page for hfs.util says that the -s flag changes the UUID; however, if you run hfs.util all by itself, it shows every option besides -s! Grr. I tried it anyway: sudo /System/Library/Filesystems/hfs.fs/hfs.util -s /dev/disk4 (the RAID volume). Nothing happens: no error message, no success message, UUID exactly the same. I also tried it while the volume was unmounted. Any ideas?
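
    Two hedged guesses about why the command is silently ignored (I have not verified hfs.util's argument parsing): it may want the bare BSD name rather than a /dev path, and it may need the partition slice, unmounted, rather than the whole-disk device:

        diskutil unmount /dev/disk4s2     # slice number is an assumption
        sudo /System/Library/Filesystems/hfs.fs/hfs.util -s disk4s2
        diskutil info /dev/disk4s2 | grep UUID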

    Read the article

  • How to mount a HFS partition in Ubuntu as Read/Write?

    - by GiH
    I plugged my external hard drive (which was formatted on my Mac as HFS+ journaled) into my Ubuntu 9.04 64-bit desktop, but I can't get the drive to mount with write access. Right now all I'm getting is read access. I tried

        sudo mount -t hfsplus /dev/sdf2 "/media/Portable HD"

    but that still gave me only read access. Any ideas?
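
    A hedged variant worth trying (same device and mount point as in the question): the Linux hfsplus driver falls back to read-only for journaled volumes unless you pass force:

        sudo mount -t hfsplus -o force,rw /dev/sdf2 "/media/Portable HD"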

    Read the article

  • How to use HFS formatted pen drive in Windows 7?

    - by row-sun
    I recently used Disk Utility on my MacBook Pro to format my 8 GB pen drive to install OS X. Afterwards I reformatted the pen drive from Disk Utility as FAT32 so that I could use it in Windows, but in Windows the pen drive does not show up. When I right-click on My Computer, click Manage, and then Disk Management, the pen drive is listed there, but it doesn't show up in Explorer and I can't use it. I've tried many things but still can't use it in Windows, though I can use it in Mac OS X. Could anyone help? Thanks.
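
    If Disk Management sees the stick but Explorer doesn't, it often just lacks a drive letter or carries a partition map Windows dislikes. A hedged diskpart session ('disk 1' is a placeholder, and 'clean' erases the stick, so double-check the number against the 8 GB size):

        diskpart
        list disk
        select disk 1
        clean
        create partition primary
        format fs=fat32 quick
        assign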

    Read the article

  • gparted installed on OpenSuse shows all file system types as greyed out except for hfs

    - by cmdematos.com
    I have had this problem before and fixed it, but I don't recall how I did it and I did not record it (sadness :( ). I have all the requisite commands installed on OpenSuse to support gparted's efforts in creating any of the supported file systems. I recall that the problem was that gparted could not find the commands; in any event, all the file systems are greyed out in the context menu except for the legacy HFS partition type, which only supports < 2 GB. Even ext2 through ext4 are greyed out. How do I fix this?
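
    Since gparted greys out a filesystem when the matching mkfs tool isn't on its PATH, a hedged fix is to (re)install the formatting tools; the package names below are my best guess for OpenSuse and vary by release:

        sudo zypper install e2fsprogs dosfstools ntfsprogs xfsprogs jfsutils reiserfsprogs
        which mkfs.ext4 mkfs.vfat mkfs.ntfs    # confirm they resolve, then restart gparted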

    Read the article

  • Performance of file operations on thousands of files on NTFS vs HFS, ext3, others

    - by peterjmag
    [Crossposted from my Ask HN post. Feel free to close it if the question's too broad for Super User.] This is something I've been curious about for years, but I've never found any good discussions on the topic. Of course, my Google-fu might just be failing me... I often deal with projects involving thousands of relatively small files. This means that I'm frequently performing operations on all of those files or a large subset of them—copying the project folder elsewhere, deleting a bunch of temporary files, etc. Of all the machines I've worked on over the years, I've noticed that NTFS consistently handles these tasks more slowly than HFS on a Mac or ext3/ext4 on a Linux box. However, as far as I can tell, the raw throughput isn't actually slower on NTFS (at least not significantly); it's that the delay between each individual file is just a tiny bit longer, and that little delay really adds up over thousands of files. (Side note: from what I've read, this is one of the reasons git is such a pain on Windows, since it relies so heavily on the file system for its object database.) Granted, my evidence is merely anecdotal—I don't currently have any real performance numbers, but it's something that I'd love to test further (perhaps with a Mac dual-booting into Windows). Still, my geekiness insists that someone out there already has. Can anyone explain this, or perhaps point me in the right direction to research it further myself?
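
    If you want numbers instead of anecdotes, a crude micro-benchmark along these lines (a sketch; the file count and paths are arbitrary) isolates per-file overhead from raw throughput, and runs unchanged on HFS/ext4 and on NTFS under Cygwin or Git Bash:

        # Create, enumerate, and delete 10,000 tiny files, timing each phase.
        mkdir bench && cd bench
        time sh -c 'i=0; while [ $i -lt 10000 ]; do echo x > "f$i"; i=$((i+1)); done'
        time ls -l > /dev/null
        time sh -c 'rm -f f*'
        cd .. && rmdir bench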

    Read the article

  • HFS+ hard drive being mounted as read only

    - by DNA
    This is a recurring problem that occurs a few times a week. I have an external hard drive which is HFS+. Every couple of weeks, for no obvious reason, when I mount it by plugging it into my Ubuntu 11.10 machine, it is read-only and I can't copy any files onto it. I run gksudo nautilus and change the ownership, and it magically works for a while, but it returns to the read-only state within hours or days, without rhyme or reason. Right now my fstab doesn't have any entry for the hard drive. What gives? What in the world is going on with Linux and HFS+? This is frustrating. I can't reformat the drive because it holds almost a terabyte of data and I have no receptacle to hold it while I reformat.
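
    If the pattern is journal/unclean-shutdown related, a hedged workaround (the device name, UUID, and uid are placeholders for your drive and user) is to check the volume and then pin down a read-write mount in /etc/fstab using the hfsplus driver's force option:

        sudo fsck.hfsplus /dev/sdb1
        # /etc/fstab entry:
        # UUID=xxxxxxxx  /media/external  hfsplus  force,rw,uid=1000,gid=1000  0  0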

    Read the article

  • How to manage permissions on a shared volume for OSX and ubuntu

    - by gentmatt
    On my Mac I'm using an unjournaled HFS+ partition to share files between OS X 10.8 and Ubuntu 12.04. It seemed like a nice idea at first, because Time Machine will automatically back up the volume in OS X, but I soon noticed that OS X and Ubuntu mess with the permissions in ways that make things messy for me. In order to fully view and change files, I keep using chmod to apply permissions that allow me to fully use a document, but I don't understand why I have to keep applying the changes over and over. Is it possible to set permissions permanently, in a way both operating systems will respect? I guess 777 would work, but that doesn't seem like a smart thing to do, and as long as 'others' does not get full access (the third seven), I see a lock icon on the file in Ubuntu.
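
    One hedged way to make this stick without 777 (the path and the UID of 501 are assumptions; substitute your synced UID): give the whole tree group read/write once, and mark directories setgid so files created from either OS inherit the group:

        sudo chown -R 501:501 /Volumes/Shared
        sudo chmod -R u=rwX,g=rwX,o=rX /Volumes/Shared
        # setgid on every directory so new files keep the shared group
        sudo find /Volumes/Shared -type d -exec chmod g+s {} +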

    Read the article

  • Using rsync with link-dest from HFS to NTFS

    - by Tom
    Hi, I'm having a problem with rsync. I'm on a Mac and I'd like to sync each day's changes from my HFS+ partition to my NTFS-formatted networked drive. Pretty simple, and everything goes well, except that it re-syncs every file each time. Here's my script:

        #! /bin/sh
        snapshot_dir=/Volumes/USB_Storage/Backups
        snapshot_id=`date +%Y%m%d%H%M`

        /usr/bin/rsync -a \
          --verbose \
          --delete --delete-excluded \
          --human-readable --progress \
          --one-file-system \
          --partial \
          --modify-window=1 \
          --exclude-from=.backup_excludes \
          --link-dest ../current \
          /Users/tommybergeron/Desktop/Brainpad \
          $snapshot_dir/in-progress

        cd $snapshot_dir
        rm -rf $snapshot_id
        mv in-progress $snapshot_id
        rm -f current
        ln -s $snapshot_id $snapshot_dir/current

    Could someone help me out, please? I've been searching for two hours and I'm still clueless. Thanks so much.
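
    One plausible culprit (an assumption, not a confirmed diagnosis): ntfs-3g can't round-trip POSIX owners and permissions, so with -a every file's metadata looks changed on the next run and --link-dest never matches. A hedged variant of the rsync call drops those comparisons while keeping times and recursion:

        /usr/bin/rsync -rltD --no-perms --no-owner --no-group \
          --verbose --human-readable --progress \
          --one-file-system --partial \
          --modify-window=1 \
          --delete --delete-excluded \
          --exclude-from=.backup_excludes \
          --link-dest ../current \
          /Users/tommybergeron/Desktop/Brainpad \
          $snapshot_dir/in-progress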

    Read the article

  • OS X 10.6 Snow Leopard no longer mounting an external USB drive

    - by Brant Bobby
    I have a 1 TB generic external hard drive containing a single HFS partition. I originally formatted it using Disk Utility and it worked fine. Now, for some reason, it's not auto-mounting when I start up. Using mount at the command line gives the following error:

        $ sudo mount /dev/disk1s2 /Volumes/Test
        /dev/disk1s2 on /Volumes/Test: Incorrect super block.

    ...but if I use the mount_hfs command, it mounts and is readable:

        $ mount_hfs /dev/disk1s2 /Volumes/Test/

    fsck gives me an error about a bad super block:

        $ fsck /dev/disk1
        ** /dev/rdisk1 (NO WRITE)
        BAD SUPER BLOCK: MAGIC NUMBER WRONG

    ...but fsck_hfs -fn /dev/disk1s2 doesn't find any problems and reports that the volume appears to be OK. In Disk Utility, the drive appears to have a single MS-DOS partition, with a curious notice saying it appears to be partitioned for Boot Camp. I have the Boot Camp HFS driver installed in Windows 7, and that OS sees the drive and partition normally. What's wrong with my disk?
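
    Worth noting: plain fsck was run against the whole disk (/dev/disk1) rather than the slice (/dev/disk1s2), which by itself explains the "BAD SUPER BLOCK" message. A hedged next step using the HFS-aware tools:

        diskutil list disk1                  # confirm the slice layout
        diskutil repairVolume /dev/disk1s2   # HFS-aware verify and repair
        sudo fsck_hfs -f /dev/rdisk1s2       # same checker, on the raw slice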

    Read the article

  • On a Mac, how can I find all files on an NTFS partition that have the same name, given case *in*sensitivity?

    - by SCdF
    Here's the deal: I have a huge mess of files on an external drive that is formatted as NTFS, and I wish to copy all of these files onto my MacBook Pro. NTFS, like sane filesystems, is case-sensitive; HFS is not. Somewhere in the mess of tens of thousands of files and directories there are one or more 'duplicates' in the eyes of HFS, and these are preventing me from copying the entire directory of data onto my Mac. (Mac OS X rather unhelpfully throws a general error explaining the problem, but not the exact file; it also doesn't give you an option to skip.) What is the best approach to solving this? Does anyone know a tool that can find files and directories that have the same case-insensitive name?
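
    A minimal sketch of such a tool in the shell (run from the root of the NTFS drive; it assumes paths contain no newlines): lowercase each path and print any that has been seen before.

        find . | awk '{ key = tolower($0); if (seen[key]++) print }'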

    Read the article

  • unable to format external drive to HFS+ for Mac

    - by dtlussier
    I have an external hard drive (1 TB, Western Digital) that is currently formatted as FAT32, and I want to reformat it as a single HFS+ partition for my Mac. I realize that my Mac can read FAT32, but I want HFS+ because it has other features, like permissions, that I'd like to have. I have tried using Disk Utility to format the drive as I have done in the past, but the process fails and throws an error stating that it is unable to reformat the drive to HFS. What might be the reasons for this? Are there any diagnostics I could run on the physical disk to check whether it is healthy?
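
    If Disk Utility keeps failing, a hedged command-line alternative (the identifier disk2 is a placeholder; verify it with the first command, because this erases the drive) rewrites the partition map and formats in one step:

        diskutil list
        diskutil partitionDisk disk2 1 GPT JHFS+ "Backup" 100%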

    Read the article

  • How can I mount dd image of a partition?

    - by Puneet Arora
    I created a dd image of a partition (containing an HFS+ filesystem) on one of my disks (not the entire disk) a few days ago, using the following command:

        dd conv=sync,noerror bs=8k if=/dev/sdc2 of=/path/to/img

    How can I mount it? I tried the following, but it doesn't work:

        mount -o loop,ro -t hfsplus /path/to/img /path/to/mntDir

    It gives me:

        mount: wrong fs type, bad option, bad superblock on /dev/loop1,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try
        dmesg | tail or so

    and dmesg | tail gives me:

        [5248455.568479] hfs: invalid secondary volume header
        [5248455.568494] hfs: unable to find HFS+ superblock
        [5248462.674836] hfs: invalid secondary volume header
        [5248462.674843] hfs: unable to find HFS+ superblock
        [5248550.672105] hfs: invalid secondary volume header
        [5248550.672115] hfs: unable to find HFS+ superblock
        [5248993.612026] hfs: unable to find HFS+ superblock
        [5248998.103385] hfs: unable to find HFS+ superblock
        [5249031.441359] hfs: unable to find HFS+ superblock
        [5249036.274864] hfs: unable to find HFS+ superblock

    Is there something wrong with what I'm doing? I've searched for how to do this, but all the results I find only talk about mounting a partition from within a full-disk image, using the offset option with mount; none cover the case where the image itself is of a partition. Thanks.
    PS: I'm running 64-bit Arch Linux, and the partition on the original disk, /dev/sdc2, mounts fine.
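
    Two hedged things to check (paths as in the question): whether conv=sync padding shifted data (with bs=8k, every short read gets padded to 8 KiB, which can misalign everything after a bad sector), and whether the volume header is where the driver expects. The HFS+ volume header sits at byte offset 1024 and begins with the signature "H+":

        dd if=/path/to/img bs=1 skip=1024 count=2 2>/dev/null | xxd
        # Attach an explicit loop device and mount read-only:
        sudo losetup -f --show /path/to/img     # prints e.g. /dev/loop2
        sudo mount -t hfsplus -o ro /dev/loop2 /path/to/mntDir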

    Read the article

  • How do I mount a HFS+ dd image in OSX?

    - by Paul McMillan
    I had an HFS+-formatted drive that was going bad and wouldn't mount at all on OS X. I created an image using ddrescue on Linux and was able to save most of it. I can mount the image and see the data just fine in Linux using:

        mount -o loop -t hfsplus dd_image mountpoint

    This doesn't work on my OS X system, since hfsplus isn't a valid filesystem type there. If I try:

        mount -t hfs dd_image mountpoint

    it complains that it needs a block device. What's the fix here?
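
    On OS X the usual route is hdiutil, which can attach a raw partition image directly without a loop device:

        hdiutil attach -imagekey diskimage-class=CRawDiskImage -readonly dd_image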

    Read the article

  • Drobo not mounting. Disk repair doesn't work either.

    - by kohei
    Hi. While transferring data to my 2nd-gen Drobo, the power went out. Now the Drobo is not mounting on my OS X 10.6.3 machine. I have tried Disk Utility's repair, and this error message appears:

        Verify and Repair volume “disk1s2”
        Checking Journaled HFS Plus volume.
        Invalid key length
        Invalid record count
        Catalog file entry not found for extent
        The volume could not be verified completely.
        Volume repair complete. Updating boot support partitions for the volume as required.
        Error: Disk Utility can’t repair this disk. Back up as many of your files as possible, reformat the disk, and restore your backed-up files.

    I tried DiskWarrior too, but it doesn't work either: it tells me it needs more memory to continue, and then the software shuts down. Does anyone know a solution to this one?
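
    One hedged avenue before reformatting (the device identifiers are illustrative; check diskutil list first): fsck_hfs accepts a cache-size option, and a larger cache sometimes lets a catalog repair finish on big volumes where Disk Utility and memory-starved tools give up:

        diskutil unmountDisk /dev/disk1
        sudo fsck_hfs -f -c 2g /dev/rdisk1s2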

    Read the article

  • locked files on HFS+ home partition shared between OSX/Linux

    - by HazyBlueDot
    I dual boot into Arch Linux and OS X 10.6 on my MacBook Pro. I synced my UID between both OSes and created an HFS partition (with no journaling) to use as a shared home/Users partition. For the most part it works just as I'd expect, but sometimes when I'm booted into OS X certain files are "locked" (when I Get Info on a particular file, the "Locked" box is checked under the "General" pane; I can resolve the issue by manually unchecking the box) and/or I get "Operation not permitted" when I try deleting or chmod'ing a file. In both cases I don't see anything out of the ordinary in the permission bits displayed with ls -l, except for a trailing '@' character in the position where the sticky bit would normally appear:

        -rw-r--r--@ 1 myuser mygroup 296 Mar 29 11:44 myfile

    This '@' character shows up on ALL normal files, so it doesn't seem to be linked to the locked/operation-not-permitted situation. On the Linux side of things I never have permission problems. To the best of my limited knowledge and experience with ACLs, I've not found any ACLs on any of the files in question. For what it's worth, I do most of my file editing using emacs (Aquamacs in OS X); is it possible it is setting weird permission bits? Specifically:
    - What is the "locked" setting that OS X uses, and does it have a permission-bit equivalent (so that, at the very least, I could recursively unlock all files in my home directory from the terminal)?
    - Why might some, but not other, files get "locked" when booting into OS X?
    - What is the meaning of the '@' character?
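
    A partial answer I'm fairly confident of: Finder's "Locked" box is the BSD user-immutable flag (uchg), and the trailing '@' means the file has extended attributes; it is not a permission bit. So, from the OS X terminal:

        chflags -R nouchg ~/     # recursively clear the locked flag
        ls -l@ myfile            # list extended-attribute names and sizes
        xattr -l myfile          # dump the attributes themselves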

    Read the article
