Search Results

Search found 859 results on 35 pages for 'filesystems'.

Page 17 of 35

  • Changing filesystem types "safely"

    - by warren
    Back in Windows 95 OSR2 (I believe), there was a conversion tool that would take your existing FAT16 partition and change it to FAT32 non-destructively (most of the time). Are there any tools like that now for going from one file system type to another in situ, without destroying the data? For example, from ext3 to ext4? Or NTFS to XFS?
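
    For the ext3-to-ext4 case specifically, there is a commonly documented in-place upgrade path; a minimal sketch, assuming /dev/sdX1 is a placeholder for the unmounted partition (the feature change is one-way, so back up first):

        umount /dev/sdX1
        # enable the ext4 on-disk features on the existing ext3 filesystem (irreversible)
        tune2fs -O extents,uninit_bg,dir_index /dev/sdX1
        # a forced fsck is mandatory after the feature change; -D also re-optimizes directories
        e2fsck -fD /dev/sdX1
        mount -t ext4 /dev/sdX1 /mnt

    Note that only files written after the conversion use extents; existing files keep their old block maps. Cross-family conversions such as NTFS to XFS generally mean copying to a freshly made filesystem; tools like fstransform attempt it in place, but treat them as experimental.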

    Read the article

  • Trying to move Users and Program Files directories to another partition

    - by Jharwood
    Currently I've followed this guide: http://lifehacker.com/5467758/move-the-users-directory-in-windows-7 and pointed my C:\Users, C:\Program Files (x86), and C:\Program Files directories to their respective counterparts on the B: drive. I used mklink /J D:\Users B:\Users (D: was the C: drive's letter in the recovery environment), but when I come to boot, all I get is a message that the profile can't be loaded. I have to accomplish this, and I don't really mind reinstalling, as it's a fresh install anyway.
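
    A sketch of the usual junction sequence from the recovery console, assuming the drive letters described above (D: is the system volume as seen from recovery, B: is the target); the common pitfall is creating the junction before the contents have been copied with ACLs intact, which produces exactly this "profile can't be loaded" symptom:

        rem copy with permissions and attributes before touching the original
        robocopy D:\Users B:\Users /E /COPYALL /XJ
        rem remove the original only after a clean copy
        rmdir /S /Q D:\Users
        rem junction so the old path resolves to the new location
        mklink /J D:\Users B:\Users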

    Read the article

  • EXT4 external hard drive for use with multiple systems

    - by EXTdumb
    I recently bought an external hard drive to store some data on. I use Linux, but I am not a power user. If I format the drive to ext4, is it possible for the permissions to ever get screwed up so that I lose access to my data? I will be plugging the drive into several different Linux-based computers at work, and I frequently hop distros on my main home machine. I need to make sure I don't lose any data because I overlooked something; I am not familiar with ext3 or ext4. So far I have done this: formatted the drive to ext4, ran gksudo thunar and changed the permissions to my user account with all settings read/write, and wrote all the files I need to the drive. I really appreciate any help.
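
    One point worth a sketch (my addition, with /media/usbdrive as a placeholder mount point): ext4 permissions are stored as numeric UIDs, and "your" account on another machine may have a different UID, so ownership set on one computer may not match on the next:

        # only helps if every machine gives your account the same UID (1000 is typical)
        sudo chown -R 1000:1000 /media/usbdrive
        # alternative: let any user read and write; X adds execute on directories only
        sudo chmod -R a+rwX /media/usbdrive

    Either way, the data itself is not at risk from permissions alone: root on any Linux machine can always chown the files back.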

    Read the article

  • In Windows XP Professional, is there a limit on the number of files that can be contained in a single folder? [duplicate]

    - by Andrew
    This question already has an answer here: How many files can a Windows folder contain? I am running Windows XP Professional, Service Pack 3. Right now I have 4,398 files in a single folder, and Windows XP seems to read it fine. How many more files can I place in this same folder, theoretically or practically? Thanks for your time.

    Read the article

  • Should all my files be stored on my shared partition?

    - by James
    I am setting up a triple-boot hard drive and was going to use a fourth partition to share files between OSes. I was wondering if there is any point in keeping much space on each OS partition to store files, or if I should just make the shared partition big and put everything on it. Is there any difference in speed between accessing files on the shared partition and native files? Are there any other benefits or disadvantages to keeping files on either the native or the shared partition? EDIT: The OSes in question are Windows 7, Ubuntu 12.04, and OS X 10.7.4.

    Read the article

  • A space-efficient filesystem for grow-as-needed virtual disks?

    - by Steve Schnepp
    A common practice is to use non-preallocated virtual disks. Since they only grow as needed, they are perfect for fast backups, overallocation, and quick creation. Since file systems are usually designed for physical disks, they tend to use the whole available area [1] in order to increase speed [2] or reliability [3]. I'm searching for a filesystem that does the exact opposite: one that touches the minimum number of blocks needed, through aggressive block reuse. I would happily trade some performance for space efficiency. There is already a similar question, but it is rather general; I have a very specific goal: space efficiency. [1] Like page caching uses all the free physical memory. [2] Canonical example: online defragmentation. [3] Canonical example: snapshotting.
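
    A related angle (my suggestion, not from the question): even an ordinary filesystem stays compact on a thin-provisioned disk if freed blocks are handed back to the hypervisor with TRIM/discard, provided the virtual disk bus supports it. A sketch with ext4 and a placeholder device:

        # release blocks to the thin-provisioned disk as files are deleted
        mount -o discard /dev/vda1 /mnt/data
        # or reclaim in batches instead, which is often cheaper than discard-on-delete
        fstrim -v /mnt/data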

    Read the article

  • Media meta file system for Windows

    - by Chris Marisic
    I have a large assortment of media that is currently organized using folders. As the library has grown, I've started to notice that folders aren't the best at conveying meaning. Also, the growing number of folders serving as categories/tags has led to duplicated data, because I didn't realize an item was already filed under a different tag. As I started to think about this, I realized tag-cloud visualization would be tremendously powerful, and I figured there has to be something like this out there.

    Read the article

  • How do I make an encrypted disk image on Debian?

    - by Blacklight Shining
    I'm basically looking for an equivalent to OS X's encrypted sparsebundles. The solution should have support for file ACLs and should not force me to specify a size in the beginning (the image should only take up as much space as it needs) or require root access to mount and unmount. Ideally, I should be able to set two different passwords (both for the same data), but that's not too important. (I do have root access to the machine and so can install packages and such, but I would rather not have to sudo just to mount an image.)
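
    One tool frequently suggested for exactly this set of requirements (my addition, with directory names as placeholders) is gocryptfs, a FUSE overlay filesystem: the ciphertext is stored as ordinary files, so the "image" only occupies the space it needs, it mounts without root, and ACLs are handled by the underlying filesystem:

        # one-time setup: writes gocryptfs.conf into the cipher directory and asks for a password
        gocryptfs -init ~/.secret-cipher
        # mount: a plaintext view appears at ~/secret; FUSE, so no sudo needed
        gocryptfs ~/.secret-cipher ~/secret
        # unmount when done
        fusermount -u ~/secret

    The two-password requirement is the weak point of this approach; LUKS supports multiple key slots for the same data, but mapping a LUKS volume normally needs root.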

    Read the article

  • Best alternatives to recover lost directories on a FAT32 external hard drive?

    - by Sergio
    I have a 320 GB ADATA CH91 external hard drive. I suspect it has a problem with its USB connector: on certain occasions, write operations fail, causing data loss. I just lost a directory with several GBs of very useful information, and I have not attempted to write to the disk since. What tool would you recommend to recover the lost data? The disk is FAT32 formatted (only one partition), and I use both Linux and Windows. What filesystem format would you recommend to avoid future data loss? I currently only use this external hard drive on Linux, so there are several available choices (FAT, NTFS, ext3, ext4, reiser, etc.).
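
    A sketch of the usual recovery sequence (my suggestion; /dev/sdb is a placeholder for the drive): image the disk first so every recovery attempt runs against the copy, then let TestDisk search for the lost directory, falling back to PhotoRec for raw carving:

        # 1. image the flaky drive; the log file lets ddrescue resume after interruptions
        sudo ddrescue -d /dev/sdb disk.img rescue.log
        # 2. scan the image for the lost directory entries (menu-driven)
        testdisk disk.img
        # 3. last resort: carve files by signature; recovers contents but not names or paths
        photorec disk.img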

    Read the article

  • How to store data on a machine whose power gets cut at random

    - by Sevas
    I have a virtual machine (Debian) running on a physical machine host. The virtual machine acts as a buffer for data that it frequently receives over the local network (the period for this data is 0.5s, so a fairly high throughput). Any data received is stored on the virtual machine and repeatedly forwarded to an external server over UDP. Once the external server acknowledges (over UDP) that it has received a data packet, the original data is deleted from the virtual machine and not sent to the external server again. The internet connection that connects the VM and the external server is unreliable, meaning it could be down for days at a time. The physical machine that hosts the VM gets its power cut several times per day at random. There is no way to tell when this is about to happen and it is not possible to add a UPS, a battery, or a similar solution to the system. Originally, the data was stored on a file-based HSQLDB database on the virtual machine. However, the frequent power cuts eventually cause the database script file to become corrupted (not at the file system level, i.e. it is readable, but HSQLDB can't make sense of it), which leads to my question: How should data be stored in an environment where power cuts can and do happen frequently? One option I can think of is using flat files, saving each packet of data as a file on the file system. This way if a file is corrupted due to loss of power, it can be ignored and the rest of the data remains intact. This poses a few issues however, mainly related to the amount of data likely being stored on the virtual machine. At 0.5s between each piece of data, 1,728,000 files will be generated in 10 days. This at least means using a file system with an increased number of inodes to store this data (the current file system setup ran out of inodes at ~250,000 messages and 30% disk space used). Also, it is hard (not impossible) to manage. Are there any other options? Are there database engines that run on Debian that would not get corrupted by power cuts? Also, what file system should be used for this? ext3 is what is used at the moment. The software that runs on the virtual machine is written using Java 6, so hopefully the solution would not be incompatible.
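
    Not from the thread, but the standard crash-safe pattern for the flat-file option is write-to-temp, flush, then rename, since rename() within one filesystem is atomic on POSIX. A bash sketch with placeholder paths, fanning files into one directory per hour, which also sidesteps the huge-directory and inode pressure:

        SPOOL=/var/spool/buffer
        dir="$SPOOL/$(date +%Y%m%d%H)"          # one directory per hour keeps listings small
        mkdir -p "$dir"
        tmp=$(mktemp "$dir/.tmp.XXXXXX")
        cat > "$tmp"                            # write the packet payload from stdin
        sync "$tmp"                             # flush this file to disk (coreutils 8.24+; otherwise plain sync)
        mv "$tmp" "$dir/packet-$(date +%s%N)"   # atomic rename: the file is either whole or absent

    On the database side, SQLite in WAL mode is frequently cited as tolerating power loss well. Whatever is chosen, note that ext3 historically shipped with write barriers disabled by default, which undermines fsync() guarantees on a power cut; mounting with barrier=1 (or moving to ext4, where barriers default on) matters here.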

    Read the article

  • Disk fragmentation when dealing with many small files

    - by Zorlack
    On a daily basis we generate about 3.4 million small JPEG files, and we also delete about 3.4 million 90-day-old images. To date, we've dealt with this content by storing the images in a hierarchical manner. The hierarchy is something like this: /Year/Month/Day/Source/. This hierarchy allows us to efficiently delete a day's worth of content across all sources. The files are stored on a Windows 2003 server connected to a 14-disk SATA RAID6. We've started having significant performance issues when writing to and reading from the disks. This may be due to the performance of the hardware, but I suspect that disk fragmentation may be a culprit as well. Some people have recommended storing the data in a database, but I've been hesitant to do that. Another thought was to use some sort of container file, like a VHD or something. Does anyone have any advice for mitigating this kind of fragmentation? Additional info: the average file size is 8-14 KB. Format information from fsutil:

        NTFS Volume Serial Number :       0x2ae2ea00e2e9d05d
        Version :                         3.1
        Number Sectors :                  0x00000001e847ffff
        Total Clusters :                  0x000000003d08ffff
        Free Clusters :                   0x000000001c1a4df0
        Total Reserved :                  0x0000000000000000
        Bytes Per Sector :                512
        Bytes Per Cluster :               4096
        Bytes Per FileRecord Segment :    1024
        Clusters Per FileRecord Segment : 0
        Mft Valid Data Length :           0x000000208f020000
        Mft Start Lcn :                   0x00000000000c0000
        Mft2 Start Lcn :                  0x000000001e847fff
        Mft Zone Start :                  0x0000000002163b20
        Mft Zone End :                    0x0000000007ad2000
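
    One way to test the fragmentation hypothesis before committing to a redesign (my suggestion, with placeholder paths) is Sysinternals Contig, which can report per-file fragment counts without moving any data:

        rem analyze only (-a): report fragmentation, recursing into subdirectories (-s)
        contig -a -s D:\Images\2010\03\*.jpg
        rem if a hot directory really is fragmented, defragment it in place
        contig -s D:\Images\2010\03

    If the MFT rather than file data turns out to be the bottleneck, the usual mitigations for millions of tiny files are a larger cluster size chosen at format time and disabling 8.3 short-name generation (fsutil behavior set disable8dot3 1).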

    Read the article

  • What could cause a file system to spontaneously unmount or become invalid for a short time?

    - by Ichorus
    We've got DB2 LUW running on a RHEL box. We had a crash of DB2, and IBM came back and said that a file DB2 was trying to access (through open64()) had unmounted or become invalid. We have done nothing but restart the database, and things seem to be running fine. Also, the file in question looks perfectly normal now:

        $ cd /db/log/TEAMS/tmsinst/NODE0000/TEAMS/T0000000/
        $ ls -l
        total 557604
        -rw------- 1 tmsinst tmsinst 570425344 Jan 14 10:24 C0000000.CAT
        $ file C0000000.CAT
        C0000000.CAT: data
        $ lsattr C0000000.CAT
        ------------- C0000000.CAT
        $ ls -l
        total 557604
        -rw------- 1 tmsinst tmsinst 570425344 Jan 14 10:24 C0000000.CAT

    With those facts in hand (please correct me if I am misinterpreting the data), what could cause a file system to 'spontaneously unmount or become invalid for a short time'? What should my next step be? This is on Dell hardware, and we ran the vendor's diagnostic tools against it; they came back clean.
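
    A first diagnostic step I'd suggest (my addition, not from the question): the kernel logs the actual I/O errors and emergency remounts, so search the logs around the crash window:

        # I/O errors or emergency read-only remounts near the time of the crash
        grep -iE 'i/o error|read-only|remount|ext3' /var/log/messages
        # and the current kernel ring buffer, in case the event is recent
        dmesg | grep -iE 'error|remount'

    A filesystem mounted with errors=remount-ro (a common default) flips to read-only on the first detected error, at which point writes through open64() start failing even though the files later look perfectly normal; that would match these symptoms.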

    Read the article

  • Only allow root to change filesystem

    - by Uejji
    The VPS I manage uses a simple hard-link rsync archive daily backup system, saved to a loop file. This is great, because each backup only takes up as much space as what has changed each day, and all user/group permissions are kept. I would like to give users direct access to their home directories in each backup, but I'm worried about intentional or accidental destruction of backup data: as it stands now, users could change, destroy, or add to backed-up data they originally owned. I've been looking for a way to mount this filesystem with something like an ro mount option that would still allow rw access to root, but I've had absolutely no luck. In other words, I want users to be able to view and copy their backed-up data without actually being able to change it, and have that data keep its original permissions. I've got no real preference as far as the filesystem goes, as long as it's a standard Unix filesystem that can preserve permissions, support hard links, and deny write access to users without actually stripping the w permission from everything.
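
    A sketch of one approach that fits these constraints (my suggestion; paths are placeholders): expose the backup tree through a read-only bind mount. Root, writing through the original mount point, is unaffected, and permissions and hard links are untouched because it is the same filesystem underneath:

        # users browse /srv/backups-ro; root keeps writing via /backups
        mount --bind /backups /srv/backups-ro
        mount -o remount,ro,bind /srv/backups-ro

    The two-step remount is needed on older kernels; newer util-linux versions accept mount -o bind,ro in a single command.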

    Read the article

  • Strange Loss of Format on Pen Drive

    - by Kiewic
    Hi, here is a screenshot of my pen drive. The files are impossible to open, and the names have been replaced by strange characters. In Ubuntu it is even worse: the system crashes. What can I do to recover my information?
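
    Garbled names on a FAT-formatted stick usually point to a corrupted FAT or directory table; a cautious sketch (my suggestion, /dev/sdc is a placeholder for the stick), always working on a copy rather than the original:

        # image the stick first so a failed repair costs nothing
        sudo dd if=/dev/sdc of=stick.img bs=4M conv=noerror,sync
        # attempt an interactive FAT repair on the copy
        fsck.vfat -r stick.img
        # if names are still garbled, carve files out by signature (loses names and paths)
        photorec stick.img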

    Read the article

  • Is it a good idea to take onsite/offsite backups of server images?

    - by ServerAdminGuy45
    Assuming a non-virtualized environment, is it a good idea to take actual images of servers (using something like Acronis True Image) and store them on/off site? Backing up data is great, but I feel it would be good to have copies of OS images so that if hardware dies or an upgrade gets botched, I can always revert. What would be your recommended way to do this (preferably using a NAS and an online backup service)? I was talking with the Iron Mountain folks, and the service they described is geared more toward taking incremental snapshots of data. I'm not sure if there's a way to back up images incrementally such that only the changes between them are saved (that way I'm not wasting X GB each time I take an image).

    Read the article

  • How to clean up the tmp folder safely on Linux

    - by Syncopated
    I use RAM for my tmpfs /tmp: 2 GB, to be exact. Normally this is enough, but sometimes processes create files in there and fail to clean up after themselves. This can happen if they crash. I need to delete these orphaned tmp files, or else future processes will run out of space on /tmp. How can I safely garbage-collect /tmp? Some people do it by checking the last-modification timestamp, but this approach is unsafe because there can be long-running processes that still need those files. A safer approach is to combine the last-modification-timestamp condition with the condition that no process has an open file handle for the file. Is there a program/script/etc. that embodies this approach, or some other approach that is also safe? Incidentally, does Linux/Unix allow a mode of file creation where the file is deleted when the creating process terminates, even if it terminates from a crash?
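
    A sketch combining both conditions (my addition; the two-day threshold is a placeholder): fuser -s exits with status 0 when some process holds the file open, so negating it keeps only unused files:

        # delete regular files untouched for 2+ days that no process has open
        find /tmp -xdev -type f -mtime +2 ! -exec fuser -s {} \; -delete

    As for the side question: yes. The classic trick is to open() a file and unlink() it immediately while keeping the descriptor; the kernel frees the space when the last descriptor goes away, crash included. tmpfile(3) does this for you, and Linux 3.11+ offers O_TMPFILE to create such an anonymous file directly.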

    Read the article

  • What causes high CPU usage on the server during file upload?

    - by bosiang
    When I try to upload a huge file (approx. 2 GB), the server's CPU usage goes really high. What should I do to fix this? I just use a standard HTML form and PHP for the file upload. I'm sorry if I posted on the wrong forum; please point me in the right direction. Here is the result of the top command while uploading 4 files (18 MB, 38 MB, 60 MB, 33 MB):

        1904 apache  20   0 33504 5740 1952 R 28.3 0.2 0:02.19 httpd
        1905 apache  20   0 33504 5740 1952 R 28.3 0.2 0:01.99 httpd
        1903 apache  20   0 33232 6968 3060 R 28.0 0.2 0:01.98 httpd
        1910 apache  20   0 33240 6020 2248 S 11.5 0.2 0:02.85 httpd
        2133 root    20   0  2656 1124  896 R  1.6 0.0 0:00.71 top
           1 root    20   0  2864 1404 1188 S  0.0 0.0 0:03.99 init

    Here is the chunking code, although even when I don't use it (just a simple file upload), CPU usage is still this high:

        function sendRequest() {
            // clean the screen
            // bars.innerHTML = '';
            var file = document.getElementById('fileToUpload');
            for (var i = 0; i < file.files.length; i++) {
                var blob = file.files[i];
                var originalFileName = blob.name;
                var filePart = 0;
                const BYTES_PER_CHUNK = 100 * 1024 * 1024; // 100 MB chunk size
                var realFileSize = blob.size;
                var start = 0;
                var end = BYTES_PER_CHUNK;
                totalChunks = Math.ceil(realFileSize / BYTES_PER_CHUNK);
                alert(realFileSize);
                while (start < realFileSize) {
                    var chunk;
                    if (blob.webkitSlice) {        // Google Chrome
                        chunk = blob.webkitSlice(start, end);
                    } else if (blob.mozSlice) {    // Mozilla Firefox
                        chunk = blob.mozSlice(start, end);
                    }
                    uploadFile(chunk, originalFileName, filePart, totalChunks, i);
                    filePart++;
                    start = end;
                    end = start + BYTES_PER_CHUNK;
                }
            }
        }

    Read the article

  • How to work around Windows error 8x90070057?

    - by Chris
    So today I was trying to copy a simple .PDF file from a local folder on my machine to a network folder, and every time I tried to move the file I would get an error dialog box stating that I was passing a wrong parameter, with the error code 8x90070057. Does anyone know of a way to work around this error so that I can get this file copied? I have tried renaming the file with an underscore in front. I have tried copying from my local folder to my desktop and then to the remote folder. I have tried hunting down anything that might be using the file. An example of the file name is: Flowers & Trees.pdf

    Read the article

  • Maximum file size in an indexed filesystem: approach?

    - by jocco
    Hello all, I'm new to the site, and I have a question. I got this question on a test and would really like to know the correct approach to solving it. Here is the question: in an indexed filesystem, the first index block (inode) has 12 direct pointers and 1 pointer to an indirect index block. The filesystem is implemented on a disk with a disk-block size of 1024 bytes. All pointers are 32 bits. Question: what is the maximum file size (in kilobytes) on this filesystem? If possible, not just an answer but an explanation. Edit: it was multiple choice, by the way, with 4 answers: a. 13 K, b. 268 K, c. 524 K, d. 1036 K. As for my approach, I only got as far as knowing that 1 pointer is 32 bits. I also found something else here on the site which seems very useful: http://stackoverflow.com/questions/2755006/understanding-the-concept-of-inodes OK, I got this far: there are 12 blocks, and each block is 1024 bytes. 1024 * 12 = 12288 bytes, or 12 KB, directly accessible. Please correct me if I'm wrong. Each pointer is 32 bits = 4 bytes. And to be honest, at this point I'm starting to get confused, especially since my answer is way over any of my multiple-choice answers.
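
    The missing step is the indirect block; a worked computation following the question's own numbers (my addition):

        # direct:   12 pointers                       -> 12 * 1 KB  =  12 KB
        # indirect: one 1024-byte block of 4-byte pointers
        #           1024 / 4 = 256 pointers           -> 256 * 1 KB = 256 KB
        # maximum file size: 12 KB + 256 KB = 268 KB  (answer b)
        echo $(( 12 + 1024 / 4 ))    # prints 268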

    Read the article

  • Correcting a messed-up file tree in an NTFS partition

    - by Fullmooninu
    It's a really messed-up situation, but I'm quite at the end of my options. It's my personal hard drive, so it's very important to me, and yes, I have no backup =( The short story: 1) I have two discs: one with Windows, and another where I had a bit of empty space at the front of the disk so I could install Linux. The rest was occupied by a 1.8 TB NTFS partition filled with data. 2) I installed Linux, and after a while realized there was not enough space for everything, so I tried using GParted and told it to resize the NTFS partition to a smaller size. 3) The system jammed. I had to reboot, which broke the resizing operation. Here's what I did to fix it: a) Rebooted into a Linux live session and used TestDisk to deep-analyze the disk and recover the possible partitions. It found several versions of the NTFS partition, probably made during the resizing. I told TestDisk to open each of them, and only one could list its files; trying to open the other candidates showed an error message. I assumed the one without errors was the correct one, and I told TestDisk to recover that partition and write a new MBR. b) The partition had errors, and Linux has an NTFS fixing tool; I used it, but the system still had errors. c) So I booted into Windows and used chkdsk to correct all errors on the partition. d) Everything seems fine, but now, back in Windows, when I open one file, it opens another file, or part of another file. As in, some files took up the position of other files. What I think happened is that I recovered an old tree, not the most current one, and that old one just happened to be intact while the most recent one was damaged. As such, the files that were moved during the failed resizing were wrongly assumed, during the automatic correction, to be in their correct places. So when I open a file, it tries to open another one: Radiohead - Creep.mp3 will open, and it will actually be a bit of another song, or even data from a JPG. Some files seem to be all right, but others seem to have had their positions taken by other files. Does anyone know of something really powerful that can help me solve this?

    Read the article

  • Truecrypt files corrupted after moving PC into another case

    - by Dygerati
    I recently bought a new PC case and transferred all of my PC hardware into it. The only hardware modification was the addition of two identical RAM modules. The entire process went smoothly, and everything worked and booted as before. The only side effect appeared when I accessed one of my file-based hidden TrueCrypt volumes shortly thereafter: some of the files in the volume (not all) seem to be entirely corrupted. The directory and file names are garbled characters, but a few of the directories in the same volume appear and function normally. Also, all files in the non-hidden TrueCrypt volume are still intact. Is this not weird? The only other real change I can think of is that the hard drives were connected to different SATA ports on the motherboard. I really don't know the TrueCrypt encryption well enough to know what could cause this, and the fact that not all the files were corrupted makes it more bizarre still. So, first off (and I'm not too hopeful on this point), would it be possible to restore these files? I had a backup of most, but not all, of the files involved. Other than that, I'm just curious how this happened and how I can prevent it next time. Thanks!

    Read the article

  • Finding the lengths of file names and paths in a directory structure on a Linux file system

    - by Robert Nickens
    I have a problem on a Linux OS running a version of SMB: if the absolute path to a directory within a shared folder is greater than 1024 bytes, and the filename component is greater than 256 bytes, the SMB service crashes and locks out all other network-access services, like SSH and FTP, rendering the machine mute. To keep the system from crashing, I've temporarily moved a group of folders, where I think the problem path may be located, outside of the shared folder. I need to find the files and file paths that exceed these limits and rename or remove them, allowing me to return the bulk of the files to the shared folder. I've tried the find and grep commands without success. Is there a chain of commands or a script that I can use to hunt down the offending files and directories? Please advise.
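
    A sketch of one way to do this with find and awk (my suggestion; /shared is a placeholder for the share's root, and LC_ALL=C makes awk count bytes rather than multibyte characters):

        # full paths longer than 1024 bytes anywhere under the share
        find /shared -print | LC_ALL=C awk 'length($0) > 1024'
        # individual file or directory names longer than 256 bytes
        find /shared -print | LC_ALL=C awk -F/ 'length($NF) > 256'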

    Read the article
