Search Results

Search found 103 results on 5 pages for 'hfs'.

Page 3/5 | < Previous Page | 1 2 3 4 5  | Next Page >

  • Booting Ubuntu EFI on Macbook Pro Apple Bootmanager

    - by user279771
    Following http://www.rodsbooks.com/ubuntu-efi/ and https://help.ubuntu.com/community/UEFIBooting#Detect_.28U.29EFI_firmware_processor_architecture I have Ubuntu booting on my MacBook Pro with both rEFInd and the native Apple loader working. However, I would like to get the "Improving the Boot Method" setup (I believe it is called the kernel EFI stub loader) working with the Apple Boot Manager (the article only explains it for rEFInd), but I have found no articles explaining this. Can anyone help with this? What I have is the following:

        /dev/sda: Apple partitions, /boot (ext3), root (ext3), swap

    Side notes: from what I understand, I should have made my boot partition FAT32/HFS+. I can always switch it, or copy the kernel to the Apple EFI partition. (I tried creating /boot as an HFS+ partition during installation but was unable to. Even after installing hfsprogs, although I could create an HFS+ partition in GParted, I couldn't use it as /boot in Ubiquity.)
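
    For what it's worth, a minimal sketch of one way the stub might be exposed to the Apple boot picker, assuming the ESP is /dev/sda1 and the kernel was built with CONFIG_EFI_STUB; the firmware fallback path /EFI/BOOT/BOOTX64.EFI is an assumption here and is untested on Apple firmware:

        # mount the EFI System Partition and place the stub kernel at the fallback path
        sudo mount /dev/sda1 /mnt
        sudo mkdir -p /mnt/EFI/BOOT
        sudo cp /boot/vmlinuz-$(uname -r) /mnt/EFI/BOOT/BOOTX64.EFI
        sudo cp /boot/initrd.img-$(uname -r) /mnt/EFI/BOOT/initrd.img
        # nothing passes arguments to the stub at this point, so root=... and
        # initrd=\EFI\BOOT\initrd.img would have to be baked in via CONFIG_CMDLINE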

    Read the article

  • Juniper’s Network Connect ncsvc on Linux: “host checker failed, error 10”

    - by hfs
    I’m trying to log in to a Juniper VPN with Network Connect from a headless Linux client. I followed the instructions and used the script from http://mad-scientist.us/juniper.html. When running the script with the --nogui switch, the command that finally gets executed is $HOME/.juniper_networks/network_connect/ncsvc -h HOST -u USER -r REALM -f $HOME/.vpn.default.crt. I get asked for the password, a line “Connecting to…” is printed, but then the program silently stops. When adding -L 5 (most verbose logging) to the command line, these are the last messages printed to the log:

        dsclient.info state: kStateCacheCleaner (dsclient.cpp:280)
        dsclient.info --> POST /dana-na/cc/ccupdate.cgi (authenticate.cpp:162)
        http_connection.para Entering state_start_connection (http_connection.cpp:282)
        http_connection.para Entering state_continue_connection (http_connection.cpp:299)
        http_connection.para Entering state_ssl_connect (http_connection.cpp:468)
        dsssl.para SSL connect ssl=0x833e568/sd=4 connection using cipher RC4-MD5 (DSSSLSock.cpp:656)
        http_connection.para Returning DSHTTP_COMPLETE from state_ssl_connect (http_connection.cpp:476)
        DSHttp.debug state_reading_response_body - copying 0 buffered bytes (http_requester.cpp:800)
        DSHttp.debug state_reading_response_body - recv'd 0 bytes data (http_requester.cpp:833)
        dsclient.info <-- 200 (authenticate.cpp:194)
        dsclient.error state host checker failed, error 10 (dsclient.cpp:282)
        ncapp.error Failed to authenticate with IVE. Error 10 (ncsvc.cpp:197)
        dsncuiapi.para DsNcUiApi::~DsNcUiApi (dsncuiapi.cpp:72)

    What does "host checker failed" mean? How can I find out what it tried to check and what failed? The HostChecker Configuration Guide mentions that a $HOME/.juniper_networks/tncc.jar gets installed on Linux, but my installation contains no such file. From that I concluded that HostChecker is disabled for my VPN on Linux? Are the POST to /dana-na/cc/ccupdate.cgi and "host checker failed" connected or independent? By running the connection over an SSL proxy I found out that the POST data is status=NOTOK. (Funny side note: the client of the oh-so-secure VPN does not validate the server’s SSL certificate, so it is wide open to MITM attacks…) So it seems that it’s the client that closes the connection and not the server.
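
    For reference, a minimal sketch of the kind of intercepting proxy mentioned above, assuming socat and openssl are installed and vpn.example.com stands in for the real gateway (this only works because the client skips certificate validation):

        # self-signed certificate for the interceptor
        openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
            -subj "/CN=vpn.example.com" -keyout key.pem -out cert.pem
        cat key.pem cert.pem > mitm.pem
        # terminate TLS locally, log the plaintext (-v), re-encrypt towards the gateway
        socat -v openssl-listen:8443,reuseaddr,fork,cert=mitm.pem,verify=0 \
            openssl:vpn.example.com:443,verify=0
        # then direct the client at the local listener, e.g. via an /etc/hosts override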

    Read the article

  • Can't boot freshly burned Ubuntu cd

    - by user89004
    So I just burned an Ubuntu 12.04.1 PowerPC .iso to a CD for my iMac G5 running Mac OS X 10.4.11, and it won't even recognize the CD. I burned it on my dad's Windows 7 laptop, as the process is way easier (just two clicks). Mac OS X 10.4.11 gives me an error when it starts with the CD in, saying "the disk you inserted was not readable by this computer". What's funny is that I burned an Ubuntu Minimal .iso to a CD and it would totally read that and even boot it, though it gave me some errors afterwards and I couldn't install. I even tried going into Open Firmware and typing boot cd:,\tbxi but I get the error "Warning sector size mismatch can't OPEN cd:,\tbxi Can't open device or file". Was there something wrong with the .iso I burned? Mac OS X 10.4.11 won't even mount the .iso; it tells me the HFS file system is corrupt or something, but I know the .iso doesn't contain an HFS file system. Any help? I downloaded the .iso from http://cdimage.ubuntu.com/releases/precise/release/ubuntu-12.04-desktop-powerpc.iso
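
    Before reburning, a quick sanity check of the download itself is cheap; the checksums are published next to the image:

        md5sum ubuntu-12.04-desktop-powerpc.iso          # Linux/Mac (on Windows: certutil -hashfile <iso> MD5)
        # compare against http://cdimage.ubuntu.com/releases/precise/release/MD5SUMS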

    Read the article

  • Boot from Ubuntu ISO on a hfsplus partition (macbook pro)

    - by user279771
    I would like to be able to boot from an ISO stored on an HFS+ partition (the main partition on my MacBook Pro). Here is what I've done so far (writing in shorthand :D):

        grub> insmod hfs,hfsplus,loopback,part_gpt
        grub> loopback loop (hd0,gpt2)/location/to/img.iso
        grub> configfile (loop)/boot/grub/loopback.cfg
        ...

    This does not work; tab-completion of the (loop) path does not work either. However, it does work (tab-completion and all) if the ISO comes from my ext3 partition. For particular reasons I can't keep the ISO images on the ext3 partition; they need to stay on the HFS+ partition. What should be done?
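
    For comparison, a sketch of the sequence that usually works for Ubuntu ISOs from the GRUB shell; modules are loaded one per insmod command, and the /isos path, gpt2 device and casper file names are assumptions that depend on the image and disk layout:

        grub> insmod part_gpt
        grub> insmod hfsplus
        grub> insmod iso9660
        grub> insmod loopback
        grub> loopback loop (hd0,gpt2)/isos/ubuntu.iso
        grub> linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=/isos/ubuntu.iso
        grub> initrd (loop)/casper/initrd.lz
        grub> boot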

    Read the article

  • ipod not mounting

    - by rls
    Tried to connect my iPod, but got this message:

        Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sdb2,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    I have seen links to this here, but being rather green, I don't understand much: https://bugs.launchpad.net/ubuntu/+source/util-linux/+bug/734883 What do I do now? The dmesg | tail says:

        [ 2819.709437] sd 6:0:0:0: [sdb] 3901376 4096-byte logical blocks: (15.9 GB/14.8 GiB)
        [ 2819.710161] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [ 2819.735294] sdb: [mac] sdb1 sdb2
        [ 2819.738060] sd 6:0:0:0: [sdb] 3901376 4096-byte logical blocks: (15.9 GB/14.8 GiB)
        [ 2819.738671] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [ 2819.738688] sd 6:0:0:0: [sdb] Attached SCSI removable disk
        [ 2820.420130] sd 6:0:0:0: [sdb] Bad block number requested
        [ 2820.420167] hfs: unable to find HFS+ superblock
        [ 2820.612140] sd 6:0:0:0: [sdb] Bad block number requested
        [ 2820.612191] hfs: unable to find HFS+ superblock
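
    As a first step, it may be worth checking what is actually on the second slice and attempting a read-only mount by hand; a sketch, assuming the iPod still shows up as /dev/sdb:

        sudo apt-get install hfsprogs                 # HFS+ userspace tools, incl. fsck.hfsplus
        sudo file -s /dev/sdb2                        # identify the filesystem on the slice
        sudo mount -t hfsplus -o ro /dev/sdb2 /mnt    # try a read-only mount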

    Read the article

  • OS X Client & Ubuntu Server - Best way for client to access files on server?

    - by Camsoft
    I've got a local development web server running Ubuntu. I also have an iMac running OS X 10.6 which I use as a client and is my development machine. I currently have the Samba server installed on my Ubuntu server, with shares set up for all the website directories. I then use my Mac and Coda to edit the files via those shares. This generally works really well, but I noticed that my Mac was writing loads of resource-fork ._filename files everywhere. I found out the following about these files:

    These files are created on volumes that don't natively support full HFS file characteristics (e.g. ufs volumes, Windows fileshares, etc). When a Mac file is copied to such a volume, its data fork is stored under the file's regular name, and the additional HFS information (resource fork, type & creator codes, etc) is stored in a second file (in AppleDouble format), with a name that starts with "._". (These files are, of course, invisible as far as OS X is concerned, but not to other OS's; this can sometimes be annoying...)

    Does anyone know of a way of sharing files between a Mac client and a Linux server that is most compatible between the two operating systems? Ideally it needs to support the HFS filesystem so that the resource forks are not created, and it also needs to support the permissions between server and client.
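
    One option while staying on Samba is to keep the AppleDouble files out of the share entirely; a sketch for smb.conf (the share name and path are placeholders, and the veto patterns are the usual Mac metadata names):

        [websites]
            path = /var/www
            veto files = /._*/.DS_Store/
            delete veto files = yes

    Alternatively, an AFP server such as Netatalk stores resource forks natively, which avoids the ._ files altogether.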

    Read the article

  • Gray Progress Bar in Macbook Boot caused by Partition Fault

    - by Konstantin Bodnya
    Here's my problem: when I try to boot my MacBook it shows the gray progress bar. It takes a while to fill the whole bar, and after that the MacBook just shuts down. I tried to boot from the recovery partition and run Disk Utility to repair it. Disk Utility showed my "Macintosh HD" in gray and failed to repair it. I thought all my data was lost, but then I tried the following: I booted into Ubuntu from a live USB and it successfully mounted my Macintosh HD HFS+ partition. parted shows me the following partitions on my disk:

        Disk /dev/sda: 500GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt

        Number  Start   End    Size   File system  Name                  Flags
         1      20.5kB  210MB  210MB  fat32        EFI System Partition  boot
         2      210MB   499GB  499GB  hfs+         Macintosh HD
         3      499GB   500GB  650MB  hfs+         Recovery HD

    Seems legit, except for FAT32 on the EFI System Partition. Since all my data is okay and backed up, what should I do to recover the system? I don't really want to reinstall the whole system; I believe there's a command in Linux to make it all right. Thank you everyone!
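
    Since the live system can see the volume, one thing to try from Ubuntu is hfsprogs' checker against the HFS+ partition; a sketch (unmount first, and read the output before letting it write anything):

        sudo apt-get install hfsprogs
        sudo umount /dev/sda2 2>/dev/null    # the volume must not be mounted while checking
        sudo fsck.hfsplus -f /dev/sda2       # -f forces a full check of the journaled volume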

    Read the article

  • Linux USB to work as cd rom on mac

    - by user157483
    I am working on driver development for Linux USB modules. I have written a USB gadget driver, and it works as a CD-ROM on a Windows machine:

    1) I made the first partition FAT32 and ran "modprobe g_hidmass file=/dev/mmcblk0p1 cdrom=1 stall=0 removable=1". This works fine in Windows.

    2) I made the first partition an HFS partition and ran the same "modprobe g_hidmass file=/dev/mmcblk0p1 cdrom=1 stall=0 removable=1". With the HFS partition on a Mac, I get the error "The disk you inserted was not readable by this computer". In Disk Utility it shows up as a CD-ROM, but the file system is not read.

    Please help me understand how I can overcome this error.
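
    A Mac generally expects a CD-ROM to carry an ISO 9660 (optionally HFS-hybrid) filesystem rather than a bare HFS partition image, so one approach worth trying is to back the gadget with a hybrid image instead of /dev/mmcblk0p1; a sketch, with the paths as placeholders:

        genisoimage -hfs -r -o /tmp/hybrid.iso /path/to/files    # ISO 9660 + HFS hybrid image
        sudo modprobe g_hidmass file=/tmp/hybrid.iso cdrom=1 stall=0 removable=1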

    Read the article

  • Cascading S3 Sink Tap not being deleted with SinkMode.REPLACE

    - by Eric Charles
    We are running Cascading with a Sink Tap configured to store in Amazon S3 and were facing some FileAlreadyExistsExceptions (see [1]). This happened only from time to time (about 1 in 100 runs) and was not reproducible. Digging into the Cascading code, we discovered that Hfs.deleteResource() is called (among others) by BaseFlow.deleteSinksIfNotUpdate(). By the way, we were quite intrigued by the silent NPE (with the comment "hack to get around npe thrown when fs reaches root directory"). From there, we extended the Hfs tap with our own Tap to add more action in the deleteResource() method (see [2]), with a retry mechanism calling getFileSystem(conf).delete directly. The retry mechanism seemed to bring improvement, but we are still sometimes facing failures (see example in [3]): it looks like HDFS returns isDeleted=true, but asking directly afterwards whether the folder exists, we receive exists=true, which should not happen. The logs also randomly show isDeleted true or false when the flow succeeds, which suggests the returned value is irrelevant or not to be trusted. Can anybody share their own S3 experience with such behavior ("folder should be deleted, but it is not")? We suspect an S3 issue, but could it also be in Cascading or HDFS? We run on Hadoop Cloudera-cdh3u5 and Cascading 2.0.1-wip-dev.

    [1]
        org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory s3n://... already exists
            at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:132)
            at com.twitter.elephantbird.mapred.output.DeprecatedOutputFormatWrapper.checkOutputSpecs(DeprecatedOutputFormatWrapper.java:75)
            at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:923)
            at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:882)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:396)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
            at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:882)
            at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:856)
            at cascading.flow.hadoop.planner.HadoopFlowStepJob.internalNonBlockingStart(HadoopFlowStepJob.java:104)
            at cascading.flow.planner.FlowStepJob.blockOnJob(FlowStepJob.java:174)
            at cascading.flow.planner.FlowStepJob.start(FlowStepJob.java:137)
            at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:122)
            at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:42)
            at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
            at java.util.concurrent.FutureTask.run(FutureTask.java:138)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.j

    [2]
        @Override
        public boolean deleteResource(JobConf conf) throws IOException {
          LOGGER.info("Deleting resource {}", getIdentifier());
          boolean isDeleted = super.deleteResource(conf);
          LOGGER.info("Hfs Sink Tap isDeleted is {} for {}", isDeleted, getIdentifier());
          Path path = new Path(getIdentifier());
          int retryCount = 0;
          int cumulativeSleepTime = 0;
          int sleepTime = 1000;
          while (getFileSystem(conf).exists(path)) {
            LOGGER.info("Resource {} still exists, it should not... - I will continue to wait patiently...",
                getIdentifier());
            try {
              LOGGER.info("Now I will sleep " + sleepTime / 1000
                  + " seconds while trying to delete {} - attempt: {}", getIdentifier(), retryCount + 1);
              Thread.sleep(sleepTime);
              cumulativeSleepTime += sleepTime;
              sleepTime *= 2;
            } catch (InterruptedException e) {
              e.printStackTrace();
              LOGGER.error("Interrupted while sleeping trying to delete {} with message {}...",
                  getIdentifier(), e.getMessage());
              throw new RuntimeException(e);
            }
            if (retryCount == 0) {
              getFileSystem(conf).delete(getPath(), true);
            }
            retryCount++;
            if (cumulativeSleepTime > MAXIMUM_TIME_TO_WAIT_TO_DELETE_MS) {
              break;
            }
          }
          if (getFileSystem(conf).exists(path)) {
            LOGGER.error("We didn't succeed to delete the resource {}. Throwing now a runtime exception.",
                getIdentifier());
            throw new RuntimeException("Although we waited to delete the resource for "
                + getIdentifier() + ' ' + retryCount
                + " iterations, it still exists - This must be an issue in the underlying storage system.");
          }
          return isDeleted;
        }

    [3]
        INFO  [pool-2-thread-15] (BaseFlow.java:1287) - [...] at least one sink is marked for delete
        INFO  [pool-2-thread-15] (BaseFlow.java:1287) - [...] sink oldest modified date: Wed Dec 31 23:59:59 UTC 1969
        INFO  [pool-2-thread-15] (HiveSinkTap.java:148) - Now I will sleep 1 seconds while trying to delete s3n://... - attempt: 1
        INFO  [pool-2-thread-15] (HiveSinkTap.java:130) - Deleting resource s3n://...
        INFO  [pool-2-thread-15] (HiveSinkTap.java:133) - Hfs Sink Tap isDeleted is true for s3n://...
        ERROR [pool-2-thread-15] (HiveSinkTap.java:175) - We didn't succeed to delete the resource s3n://... Throwing now a runtime exception.
        WARN  [pool-2-thread-15] (Cascade.java:706) - [...] flow failed: ...
        java.lang.RuntimeException: Although we waited to delete the resource for s3n://... 0 iterations, it still exists - This must be an issue in the underlying storage system.
            at com.qubit.hive.tap.HiveSinkTap.deleteResource(HiveSinkTap.java:179)
            at com.qubit.hive.tap.HiveSinkTap.deleteResource(HiveSinkTap.java:40)
            at cascading.flow.BaseFlow.deleteSinksIfNotUpdate(BaseFlow.java:971)
            at cascading.flow.BaseFlow.prepare(BaseFlow.java:733)
            at cascading.cascade.Cascade$CascadeJob.call(Cascade.java:761)
            at cascading.cascade.Cascade$CascadeJob.call(Cascade.java:710)
            at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
            at java.util.concurrent.FutureTask.run(FutureTask.java:138)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:619)

    Read the article

  • 'Unable to mount Filesystem' Error

    - by Charles
    Trying to extract data from a 'bricked' Western Digital MyBook Live 2TB drive. I came across a forum that advised using Ubuntu (booted from a CD) on my MacBook. I managed to download and create a boot CD for Ubuntu (I like this little operating system, btw). Booted the machine with the CD and plugged in the drive (which I had extracted from its casing and placed into an external USB SATA case, plugged into the laptop). The drive is seen by Ubuntu, but each time I click on the drive it gives me the following error:

        Unable to mount 2.0 TB Filesystem
        Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sdb4,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    I am new to this and spent quite some time searching this site to see if I could find a solution to this problem without troubling anyone. I came up with a few that came close, but some of the questioners mentioned that they had lost data... which scared me from going further. I basically need to extract one particular folder from the drive. If I can get this volume 'sdb4' to mount, there is a folder called 'My_Work' which I need to back up; the rest I have/had a copy of. When I typed dmesg | tail, I got several lines, but I think the relevant ones are:

        [  406.864677] EXT4-fs (sdb4): bad block size 65536
        [  429.098776] hfs: write access to a journaled filesystem is not supported, use the force option at your own risk, mounting read-only
        [  439.786365] hfs: write access to a journaled filesystem is not supported, use the force option at your own risk, mounting read-only
        [  445.982692] EXT4-fs (sdb4): bad block size 65536
        [ 1565.841690] EXT4-fs (sdb4): bad block size 65536

    I read somewhere to try/check sudo fdisk -l /dev/sdb4. It gave me the following result:

        Disk /dev/sdb4: 1995.8 GB, 1995774623744 bytes
        255 heads, 63 sectors/track, 242639 cylinders, total 3897997312 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/sdb4 doesn't contain a valid partition table

    This is where I got frustrated and decided to try to get help on this without digging myself deeper into a hole! I understand that the answer may already be out there. If so, could someone please point me in the right direction? And if not, could someone please resolve (if possible) my situation!
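
    Since the kernel rejects the 64 KiB block size (the MyBook Live is a PowerPC NAS with 64 KiB pages), one way to pull out just that folder without mounting at all is e2fsprogs' debugfs, which reads the filesystem in userspace; a sketch, with the path to My_Work as an assumption:

        sudo debugfs -c /dev/sdb4                        # -c: catastrophic (read-only) mode
        debugfs:  ls /                                   # locate the folder first
        debugfs:  rdump /My_Work /home/ubuntu/rescue     # recursively copy it out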

    Read the article

  • MacBook Pro (OSX Lion) - shutdown automatically before reaching login screen

    - by mkk
    When I try to boot my MacBook Pro I see a progress bar on the loading screen. It gets to 1/15 or something like that and then the machine shuts down - I cannot even reach the login screen. This happened to me 2 months ago; I 'fixed' it by formatting my hard drive and installing OS X (Lion) again. This time I think the situation is a little different - I am able to enter single-user mode by pressing cmd + s. If I then type /sbin/fsck -yf, I get the error:

        ** Checking Journaled HFS Plus volume.
           The volume name is Macintosh HD
        ** Checking extents overflow file.
        ** Checking catalog file.
           Invalid node structure (4, 24704)
        ** The volume Macintosh HD could not be verified completely.
        /dev/rdisk0s2 (hfs) EXITED WITH SIGNAL 8

    but when I type exit, I get the login screen and I can log in. I have tried a lot of things, booting from the recovery partition and choosing Disk Utility to repair the disk, but I get an error that it cannot be repaired. I have googled for hours and the only real solution I have found was to buy DiskWarrior, which might fix the issue. Any other suggestions? The secondary question is what causes this issue. I thought the reason was bad sectors, but Smart Utility hasn't found any. I found a suggestion that RAM could cause this kind of issue as well, so I downloaded Rember and ran a memory test - all tests passed. Right now my workaround is entering single-user mode and then typing exit, but I am not sure how long it will 'work'. Of course I have backed up what I considered important. Thanks for the help in advance! UPDATE: I guess Smart Utility was not very useful; I managed to get an input/output error, which I believe is equivalent to a bad sector.
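
    On the bad-sector question, one way to get a second opinion is smartmontools, either on OS X or from a Linux live USB; a sketch:

        smartctl -a /dev/disk0       # OS X, with smartmontools installed
        sudo smartctl -a /dev/sda    # from a Linux live system
        # look at Reallocated_Sector_Ct and Current_Pending_Sector in the attribute table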

    Read the article

  • how to model a follower stream in appengine?

    - by molicule
    I am trying to design tables to build out a follower relationship. Say I have a stream of 140-char records that have a user, a hashtag and other text. Users follow other users, and can also follow hashtags. I am outlining the way I've designed this below, but there are two limitations in my design; I was wondering if others had smarter ways to accomplish the same goal. The issues with this are:

    1. The list of followers is copied into each record.
    2. If a new follower is added or one removed, 'all' the records have to be updated.

    The code:

        class HashtagFollowers(db.Model):
            """This table contains the followers for each hashtag"""
            hashtag = db.StringProperty()
            followers = db.StringListProperty()

        class UserFollowers(db.Model):
            """This table contains the followers for each user"""
            username = db.StringProperty()
            followers = db.StringListProperty()

        class stream(db.Model):
            """This table contains the data stream"""
            username = db.StringProperty()
            hashtag = db.StringProperty()
            text = db.TextProperty()

            def save(self):
                """On each save, all the followers for each hashtag and user
                are added into another table with this record as the parent"""
                super(stream, self).save()
                hfs = HashtagFollowers.all().filter("hashtag =", self.hashtag).fetch(10)
                for hf in hfs:
                    sh = streamHashtags(parent=self, followers=hf.followers)
                    sh.save()
                ufs = UserFollowers.all().filter("username =", self.username).fetch(10)
                for uf in ufs:
                    uh = streamUsers(parent=self, followers=uf.followers)
                    uh.save()

        class streamHashtags(db.Model):
            """The stream record is the parent of this record"""
            followers = db.StringListProperty()

        class streamUsers(db.Model):
            """The stream record is the parent of this record"""
            followers = db.StringListProperty()

    Now, to get the stream of followed hashtags:

        indexes = db.GqlQuery("""SELECT __key__ from streamHashtags where followers = 'myusername'""")
        keys = [k.parent() for k in indexes[offset:numresults]]
        return db.get(keys)

    Is there a smarter way to do this?

    Read the article

  • How can I mount an AES-128 encrypted DMG file using the NTFS file system under Windows XP?

    - by flarn2006
    I found HFSExplorer, which can handle encrypted DMG files, but it only supports HFS, and the DMG I want to open contains an NTFS filesystem. I can't just use any image mounter, since the DMG file is encrypted, and I don't want to have to copy it over to my main hard drive first. (The DMG is on an external hard drive.) If copying it is the only way, however, I'd be willing to settle for that, but I'd rather just leave it on the external hard drive in its encrypted form.

    Read the article

  • How to Enable an External USB Hard Drive with Ubuntu

    - by LarsOn
    I'm trying to install a new LaCie Hard Disk (design by Neil Poulton, 1TB, USB 2.0). GParted reports:

        /dev/sda1 (with exclamation mark and key sign)  ntfs   1 KiB
        unallocated                                            320 MiB
        /dev/sda2                                       hfs+   2.84 MiB
        unallocated                                            931.2 GiB

    When trying to create a partition with Disk Utility it says "Daemon is inhibited", so it seems I can't create the partition that way. Can you recommend how I can proceed? Thank you.
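
    If the GUI refuses, a command-line route with parted is sketched below; this destroys everything on the disk, so double-check that /dev/sda really is the LaCie drive first:

        sudo parted /dev/sda print                          # verify it is the right disk
        sudo parted /dev/sda mklabel msdos                  # new empty partition table
        sudo parted /dev/sda mkpart primary ext2 1MiB 100%  # 'ext2' is only a type hint here
        sudo mkfs.ext4 -L LaCie /dev/sda1                   # the actual filesystem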

    Read the article

  • Are filesystem operations a function of the kernel?

    - by hydroparadise
    I suppose the question would be OS-specific, so I'll take the following scenarios: Windows (NTFS), OS X (HFS), Linux (ext2, ext3, ext4). Each operating system has its default filesystem it operates on (OS X, I believe, only has the one choice available). I've noticed some utilities out there for OSes to read different file systems (which obviously are NOT part of the kernel), which got me thinking: are filesystem operations a function of a driver (i.e., potentially modular), or are they truly part of the kernel?
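
    On Linux the split is easy to see for yourself: filesystems are kernel code, but most are loadable modules rather than built-ins. A quick illustration:

        cat /proc/filesystems        # everything the running kernel can mount right now
        lsmod | grep -E 'ext4|hfs'   # filesystem drivers currently loaded as modules
        sudo modprobe hfsplus        # loading one adds hfsplus to /proc/filesystems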

    Read the article

  • Have I fixed my partition problem with os x 10.5.8? Are my GPT and MBR back to normal?

    - by David Schaap
    I'm new to Linux and I have overstepped my abilities. I tried dual-booting OS X 10.5.8 with Ubuntu 11.10 using rEFIt, but I've been having problems with partitioning. Instead of enduring more headaches, I've made the decision to simply use Ubuntu in VirtualBox. I've tried to return my HDD to normal, but I am looking for confirmation that my partitions are OK. Here is the report from Partition Inspector:

        *** Report for internal hard disk ***

        Current GPT partition table:
        #  Start LBA  End LBA    Type
        1  409640     233917359  Mac OS X HFS+

        Current MBR partition table:
        #  A  Start LBA  End LBA    Type
        1     1          234441647  ee  EFI Protective

        MBR contents:
        Boot Code: GRUB

        Partition at LBA 409640:
        Boot Code: None
        File System: HFS Extended (HFS+)
        Listed in GPT as partition 1, type Mac OS X HFS+

    Also, my HDD directory has a bunch of extra folders in it, and they appear to be Ubuntu-related, although it is no longer installed: folders like bin, sbin, cores, var, user, and so on. Those folders aren't supposed to be there, right? Thanks in advance.

    Read the article

  • 7-Zip - A Free alternative to other compression utilities

    - by TATWORTH
    At http://www.7-zip.org/download.html there is a free alternative to other compression utilities. It handles a wide variety of formats, including RAR! Here is the description from its home page:

    License: 7-Zip is open source software. Most of the source code is under the GNU LGPL license. The unRAR code is under a mixed license: GNU LGPL + unRAR restrictions. Check license information here: 7-Zip license. You can use 7-Zip on any computer, including a computer in a commercial organization. You don't need to register or pay for 7-Zip.

    The main features of 7-Zip:
      • High compression ratio in 7z format with LZMA and LZMA2 compression
      • Supported formats - packing / unpacking: 7z, XZ, BZIP2, GZIP, TAR, ZIP and WIM; unpacking only: ARJ, CAB, CHM, CPIO, CramFS, DEB, DMG, FAT, HFS, ISO, LZH, LZMA, MBR, MSI, NSIS, NTFS, RAR, RPM, SquashFS, UDF, VHD, WIM, XAR and Z
      • For ZIP and GZIP formats, 7-Zip provides a compression ratio that is 2-10% better than the ratio provided by PKZip and WinZip
      • Strong AES-256 encryption in 7z and ZIP formats
      • Self-extracting capability for 7z format
      • Integration with Windows Shell
      • Powerful File Manager
      • Powerful command line version
      • Plugin for FAR Manager
      • Localizations for 79 languages
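
    The command line version mentioned above covers the common cases in one line each; a couple of illustrative calls (7z.exe on Windows, p7zip's 7z elsewhere):

        7z a -t7z -mx=9 backup.7z myfolder    # pack a folder with maximum compression
        7z x archive.rar -odumped             # unpack any supported format (no space after -o)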

    Read the article

  • HTTP(S) based file server

    - by Michael
    I've got a server running Ubuntu 10.04. I've already got OpenSSH for ssh and sftp on it. I've been looking for a web-based (HTTP, or preferably HTTPS) file server, perhaps a web front-end to an (S)FTP server, that allows access to a specific folder and also allows uploads. It requires user authentication, preferably using PAM. This web-based solution is for users that are not allowed to use FTP software or browser extensions and don't have Flash/Java browser plugins within their corporate environments. So far I have looked into:

    Webmin: includes a file manager, however it uses Java, and I'm looking for a plugin-free implementation.
    Apache2: I was able to set up HTTPS and PAM authentication, but the bare-bones implementation doesn't include file upload (as far as I'm aware).
    HFS: haven't tried it out, because it is for Windows/Wine only and I don't want to run it under Wine.
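
    One way to cover both requirements with plain Apache is WebDAV over HTTPS with PAM-backed Basic auth via mod_authnz_external and pwauth; a hedged sketch rather than a drop-in config, assuming Ubuntu's apache2 with dav, dav_fs, ssl and authnz_external enabled (packages libapache2-mod-authnz-external and pwauth):

        DefineExternalAuth pwauth pipe /usr/sbin/pwauth

        Alias /files /srv/files
        <Location /files>
            Dav On
            SSLRequireSSL
            AuthType Basic
            AuthName "File server"
            AuthBasicProvider external
            AuthExternal pwauth
            Require valid-user
        </Location>

    Windows and OS X clients can then mount the share as a network drive without any browser plugins, and uploads are ordinary WebDAV PUTs.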

    Read the article

  • HFS+/How to convert an XFS file system to HFS+ [on hold]

    - by user219350
    I have repeatedly convinced myself of the reliability of the XFS file system, and I have been more than satisfied. I am happy with everything in Ubuntu 14.04 (great software), but there is one little "but"! Mostly I work in OS X Mavericks 10.9.3, which sees Windows 8.1 just fine and works wonders with NTFS, but it does not see the Ubuntu partition! Briefly, the hardware:

        ASRock B75 Pro3-M
        i5 3330
        GeForce GTX 650 Ti
        SATA 500GB running OS X Mavericks + Clover - the boot disk
        Toshiba 2TB running Windows 8.1 (x64) and Ubuntu 14.04 (amd64)

    If I boot from the Toshiba (where Ubuntu and the Windows boot + GRUB live), booting from Clover after a restart becomes impossible. I tried a lot of options - the Clover installation, the boot priority, and various GRUB settings - but have not found an acceptable one, and I have no desire to reinstall Clover again (Mavericks reboots in 20 seconds - excellent!). So please help with the file system: how do I convert XFS to journaled HFS+, so that Mavericks sees everything and syncs it on the Mac? Thank you for a sensible answer and your help! (Originally in Russian.)
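
    There is no in-place converter from XFS to HFS+; the usual route is copy out, reformat, copy back. A sketch, assuming the XFS data lives on /dev/sdb1 and there is enough scratch space under /backup:

        sudo rsync -aH /mnt/xfsdata/ /backup/xfsdata/   # copy everything off first
        sudo umount /mnt/xfsdata
        sudo apt-get install hfsprogs
        sudo mkfs.hfsplus -J -v "Data" /dev/sdb1        # -J: journaled HFS+, as Mavericks expects
        sudo mount /dev/sdb1 /mnt/xfsdata
        sudo rsync -aH /backup/xfsdata/ /mnt/xfsdata/

    Note that Linux mounts journaled HFS+ read-only by default, so a partition shared in both directions may be better managed from the OS X side.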

    Read the article

  • Dropped hard drive won't mount

    - by Dave DeLong
    I have a 2 TB HFS+-formatted external hard drive that got dropped a couple of days ago while transferring files onto a Macbook Pro. Now the drive's partitions won't mount. Disk Utility can see the drive, but doesn't recognize that it has any partitions. I've tried using Data Rescue 2 to recover files off of it, but it couldn't find anything. In addition, our local computer repair shop said they couldn't find anything on there either. I know that I could ship the drive off to someone like DriveSavers, but I was hoping for a cheaper option (since they start at about $500 for the attempt). Is there something else I could try on my own? Would TestDisk be able to help with something like this?
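
    TestDisk is worth a try before paying anyone; it scans the raw device for lost partition structures and is read-only until explicitly told to write. A sketch, assuming the external shows up as /dev/disk2:

        sudo testdisk /dev/disk2
        # in the menus: choose the partition table type (likely EFI GPT for a
        # 2 TB HFS+ drive), then Analyse -> Quick Search / Deeper Search; found
        # partitions can be listed and files copied off before writing anything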

    Read the article

  • How do I mount a HFS+ dd image in OSX?

    - by Paul McMillan
    I had an HFS+ formatted drive that was going bad and wouldn't mount at all on OSX. I created an image using ddrescue on linux, and was able to save most of it. I can mount the drive and see the data just fine in linux using this:

        mount -o loop -t hfsplus dd_image /Volumes/mountpoint

    This doesn't work on my OSX system since hfsplus isn't a valid filesystem type. If I try:

        mount -t hfs image mountpoint

    It complains that it needs a block device. What's the fix here?
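
    An OS X-native route is hdiutil, telling it the file is a raw device image rather than a .dmg; a sketch, assuming dd_image holds the HFS+ volume (or whole disk):

        hdiutil attach -imagekey diskimage-class=CRawDiskImage -readonly dd_image
        # if the volume doesn't auto-mount, add -nomount and then:
        #   diskutil mount readOnly /dev/diskNs2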

    Read the article

  • Archiving old, outdated hard drives

    - by Calvin
    I have a number of old hard drives that I've decided to throw out. But before I do, I'd like to keep the contents of the hard drives intact. I tried to use the ISO file format to archive them, but the major problem is that it loses file attributes and can't create directories deeper than 8 levels. I have drives with a variety of file systems - FAT, NTFS, ext2, ext3 and HFS - and I'd like to archive them without any loss of information.
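
    Two options that, unlike ISO 9660, keep attributes and arbitrary directory depth: a raw image per drive, or a file-level archive from the mounted filesystem (GNU tar 1.27+ for --xattrs); device and mount point are placeholders:

        sudo dd if=/dev/sdb bs=1M | gzip > drive1.img.gz          # exact copy, filesystem-agnostic
        sudo tar --xattrs -cpzf drive1.tar.gz -C /mnt/drive1 .    # file-level, preserves perms/xattrs

    The raw image also keeps the partition table and boot sectors, and can be loop-mounted later.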

    Read the article

  • Recovering/Creating NewWorld Partition on Mac G4 (PPC) after botched Debian Install

    - by Luis Espinal
    I was trying to install Debian 5.0.4 on a Mac G4 and, in typical geek tradition, I didn't RTFM. During installation I nuked all existing partitions, creating new ones to my liking. But as I learned later during the installation process, yaboot needs a NewWorld partition, so I can't boot the installation. I don't have any OS X CDs with me (this is a used G4 I purchased off craigslist) with which to create an HFS partition. I've re-run the Debian installer, which lets me create a partition that is supposed to be of type 'NewWorld', but the installer does not seem to like or recognize it. Any ideas how to proceed from here? Thanks.
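
    For reference, a sketch of adding the missing Apple_Bootstrap (NewWorld) partition with mac-fdisk from the installer's shell; the disk name and the assumption that free space exists at the front are both placeholders:

        mac-fdisk /dev/sda
        #   p            print the current partition map
        #   b <start>    create the ~800 KiB Apple_Bootstrap partition
        #   w, q         write the map and quit
        # then re-run the yaboot installer step so ybin can bless the new partition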

    Read the article
