Search Results

Search found 859 results on 35 pages for 'filesystems'.

  • can 'Percona MySQL Data Recovery' be used to recover dropped tables if the datadir filesystem is mounted as /

    - by Tom Geee
    According to Percona: "Unmount the filesystem or make it read-only if... You have filesystem corruption OR You have dropped tables in innodb_file_per_table format." If I have innodb_file_per_table enabled and accidentally dropped a table while the datadir lives within the / partition, can the data still be recovered? Obviously you can't work with an unmounted root filesystem. Our VPS host provides a default filesystem layout which we cannot customize. I'm asking in case this scenario ever comes up in the future. Edit: would mounting the / filesystem over NFS onto another system as read-only be a workaround? TIA.
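
    For the NFS idea, a minimal sketch, assuming an NFS server can run on the VPS and a recovery host at 192.168.1.50 (the address, hostname and export options are hypothetical):

        # on the VPS: export the root filesystem read-only
        echo "/ 192.168.1.50(ro,no_root_squash)" >> /etc/exports
        exportfs -ra

        # on the recovery host: mount it read-only and run the recovery tools against that copy
        mkdir -p /mnt/vps-root
        mount -t nfs -o ro vps.example.com:/ /mnt/vps-root

    Note that a read-only export only stops the recovery host from writing; MySQL and everything else on the live VPS can still overwrite the freed pages, so it is not a substitute for stopping writes (or remounting read-only) on the VPS itself.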

  • Deleting windows.edb and unchecking the Indexing service led to hard drives swapping file records

    - by linni
    I followed the instructions listed here: http://www.mydigitallife.info/2007/09/18/turn-off-and-disable-search-indexing-service-in-windows-xp/ to free up space on my hard drive by deleting the windows.edb indexing file. I also stopped the Windows Search service, as mentioned in the comments following the article. In addition to unchecking the "Allow Indexing Service to index this disk for fast file searching" check box in the properties dialog for the C:\ drive, I did the same for two USB-connected hard drives (J:\ and I:\). I'm not sure why I did that; I thought it might shrink the windows.edb file so I wouldn't have to delete it (which sounded a bit risky to me at the time). The file of course didn't shrink, so I ended up deleting it and freeing up over 3 GB of space, yeehaw. However, as soon as I had done this I could not access the USB-connected hard drives anymore. The error I got was "I:\photos is not accessible" / "The file or directory is corrupted and unreadable" when I tried to open the photos directory on I:\.

    Here is where I enter the twilight zone... I try disconnecting the I:\ USB hard drive, but XP shows me that the J:\ drive has disconnected instead and I:\ is still there. So I disconnect both drives and restart the computer. I then connect one drive, but it lists the contents of the other drive at the root level. I tried connecting the drives the other way around and the same thing happens. I took one of the hard drives to another computer, and when I connect it there it lists not its own contents but the contents of the other hard drive, and gives the same error as above when I try to access any of the folders (even folders at the root that have the same name as folders on the other drive, e.g. J:\photos and I:\photos). And no, this is not me mixing up my drive letters. Computer Management > Disk Management shows the same result as Explorer: the drive size is correct (one is 500 GB, the other 640 GB), but the drive name is that of the opposite drive, as are the contents. Also, one drive was full of data and the other almost empty, but each incorrectly shows the free space of the other drive. Somehow the USB drives seem to have switched file tables, file records, boot records or something, extremely weird!

    Even weirder, if I try to create a text file or folder on one of these drives, it works fine: accessing them, saving, whatever, all good. But accessing any other data on the drive gives me an error. Does anyone have a clue what is going on and, more importantly, how I can restore the correct folder listings so I can access my family photos? cheers, linni

  • What are the functionalities of Distributed File Systems and Distributed Storage Systems?

    - by Berkay
    I'm reading about cloud vendors' solutions for distributed storage, such as Amazon Dynamo and Google Bigtable, and I'm really confused by two terms: what are distributed file systems for in the cloud, and what are distributed storage systems for? What are the differences between these terms and their functionalities? If I understand them, I can sketch the general architecture used by the cloud vendors. Any good tutorial or web page would be appreciated. Thanks

  • running automated fsck on remote server

    - by GriffinHeart
    I had another question about df, and now I've come to the conclusion that I need to run fsck on my partition. I've been reading about it and would like some advice, if possible. The situation is this: I have no physical access to the server, and I want to run fsck. From what I've read, I just need to touch /forcefsck and fsck will run on the next reboot. My basic question is: with what arguments will that fsck run? Will it need user input to correct errors, etc.? And after running, will it save a log of what happened? If it ran like this it would be perfect; is there any way of enforcing that at reboot? fsck -v -p /machine/disk/p1 2>&1 > fscklog.txt Also, here they describe this: "it's also a good idea on debian and debian-derivatives like ubuntu to edit /etc/default/rcS on remote servers and set FSCKFIX=yes; that adds -y to the boot time fsck, so it doesn't risk the remote server being stuck waiting for someone to login at the console and run fsck." But on CentOS that doesn't seem to exist. I only have ssh access at the moment, so that is why I'm being so picky about it. Here's some info about the disks and mounted volumes on the server: http://pastebin.centos.org/33314 Thanks.
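
    For what it's worth, a sketch of the CentOS-side equivalent, assuming the init scripts on your release honour /forcefsck and /fsckoptions (worth checking in /etc/rc.d/rc.sysinit before relying on it):

        # force a filesystem check on the next reboot
        touch /forcefsck

        # rc.sysinit passes the contents of /fsckoptions to the boot-time fsck;
        # -y answers yes to every repair prompt so the boot never waits at the console
        echo "-y" > /fsckoptions

        reboot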

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to let the application code write to sub-directories such as ./a/b/c/abc.ext rather than using only ./abc.ext. I'm changing to such a sub-directory structure, and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? What's your experience? Or, in other words: assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ballpark estimate.
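
    As an illustration of the fan-out being discussed, a small sketch that buckets a file under a two-level directory derived from an MD5 of its name (the number of levels and the hex characters per level are the knobs to tune; names here are just examples):

        #!/bin/sh
        # file-level sharding: move FILE to ./ab/cd/filename.ext,
        # where "abcd" is the first 4 hex chars of md5(basename)
        file="$1"
        hash=$(printf '%s' "$(basename "$file")" | md5sum | cut -c1-4)
        dir="./${hash%??}/${hash#??}"    # first two chars / next two chars
        mkdir -p "$dir"
        mv "$file" "$dir/"

    Two hex characters per level gives 256 x 256 = 65,536 leaf directories, so three million files works out to roughly 45 files per directory, well below the point where ext3 directory listings tend to get painful (ext3's dir_index feature helps a lot here too).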

  • What does the filesystem give me?

    - by Alec
    I'm writing a specialised database. It currently sits atop ext4, with one big file that it reads and writes to. I'm wondering whether it would be worth forgoing the filesystem and reading and writing directly to the device. I already use O_DIRECT, so as far as I can tell it wouldn't require much of a code change. What might the risks, advantages and disadvantages of forgoing the filesystem be? Is it likely to improve performance, or harm it?
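
    For a rough feel for the difference, a sketch that pushes direct I/O at the existing big file and then at a raw device, assuming /dev/sdX is a scratch disk whose contents can be destroyed (paths and sizes are just examples):

        # O_DIRECT writes into the existing big file on ext4 (conv=notrunc keeps the file in place)
        dd if=/dev/zero of=/data/db.bigfile bs=1M count=1024 oflag=direct conv=notrunc

        # the same writes straight to the block device, with no filesystem in between
        dd if=/dev/zero of=/dev/sdX bs=1M count=1024 oflag=direct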

  • On Windows 7, the last access date is not changed even after reading the file?

    - by Jian Lin
    I have some files on Windows 7 and want to see at what time I read one of them this morning (the morning of February 27). But when I right-click on the file and choose Properties, I see: Accessed: Yesterday, Feb 26, 2011, 2:12:37 PM. So I open the file to read the content again, then open the Properties again, and still the Accessed date is the same (Feb 26). Even if I add a "Date Accessed" column to the folder, it still shows Feb 26. But today is Feb 27 and I have clearly "accessed" the file... so how can I see the true last accessed date?

  • Folder/File permission transfer between similar file structures

    - by Tyler Benson
    So my company has recently upgraded to a new SAN, but the person who copied all the data over must have done a drag-and-drop or basic copy to move everything. Apparently Xcopy is not something he cared to use. So now I am left with the task of duplicating all the permissions over. The structure has changed a bit (as in, more files/folders have been added) but for the most part has stayed unchanged. I'm looking for suggestions to help automate this process. Can I use XCopy to transfer ONLY permissions from one tree to another? Would it just ignore any folders/permissions that don't line up correctly? Thanks a ton in advance, Tyler

  • What was scientifically shown to support productivity when organizing/accessing files and folders?

    - by Tom Wijsman
    I have gathered terabytes of data, but it has become a habit to store files and folders in the same folder; that folder could be seen as a kind of Inbox where most files (non-installations) enter my system. This way I end up with a big collection of files that are hard to organize properly. I mostly end up making folders that match the file type, but then I still have several gigabytes of data per folder, which is not efficient enough to let me use the folder productively. I'd rather do a few clicks than have to search through the files, whether with some software product or by looking through the folder. Often the file names themselves are not meaningful, so it would be easier to recognize them if there were a few per folder rather than thousands. "Scaling in the structure of directory trees in a computer cluster" summarizes this problem as follows:

        The processes of storing and retrieving information are rapidly gaining importance in science as well as society as a whole [1, 2, 3, 4]. A considerable effort is being undertaken, firstly to characterize and describe how publicly available information, for example in the world wide web, is actually organized, and secondly, to design efficient methods to access this information.

        [1] R. M. Shiffrin and K. Börner, Proc. Natl. Acad. Sci. USA 101, 5183 (2004).
        [2] S. Lawrence and C. L. Giles, Nature 400, 107-109 (1999).
        [3] R. F. I. Cancho and R. V. Solé, Proc. R. Soc. London, Ser. B 268, 2261 (2001).
        [4] M. Sigman and G. A. Cecchi, Proc. Natl. Acad. Sci. USA 99, 1742 (2002).

    It goes on to explain how the data is usually organized by taking a general look at it, but judging from the abstract and conclusion it does not arrive at a conclusion or an approach that results in a productive organization of a directory hierarchy. So, in essence, this is a problem for which I haven't found a solution yet, and I would love to see a scientific solution to it. Searching further, I don't seem to find any useful or free papers that approach this problem, so it might be that I'm looking in the wrong place. I've also noticed that there are different ways to name this problem, which lead to different sets of papers; perhaps the paper is out there, but I'm just not using the same terms that it uses? Papers often use more scientific terms. I once heard a story about an advocate with a laptop who simply outperformed an advocate who had tons of paper, which shows how proper organization leads to productivity; but that story didn't share details on how the advocate used the laptop or how he had organized his data. In any case, it was far more useful than how most of us organize our data these days... Don't just advise me on how I should organize my data; I'm not looking for suggestions here. I would love to see statistics or scientific measurement approaches that help me confirm that a given organization actually helps me reach my goal.

  • How can I tell if ZFS (zfs-fuse) dedup/compression is applied to a particular file?

    - by asari
    I have a zfs-formatted partition using zfs-fuse for Linux (Ubuntu). I had used it for a while, and then enabled dedup and compression on it (zfs set compression=on / dedup=on). Now I think I have some files that are dedup'ed and compressed, and some files that are not yet. It was OK, but sometimes I was confused. Let's see: the following command would consume almost 4 GB of my zfs storage: cp oldfile.4GB newfile.4GB ...and this would consume almost zero: cp newfile.4GB newfile.4GB.2 This is because the old file is not yet compressed, so dedup hasn't happened, I think. My idea is: if I can find the old files that are not yet dedup'ed/compressed, I can batch copy/rename/remove them to eliminate the duplication and redundancy. But how can I check that? I know that re-copying the whole contents of my storage should work (even better with checking the timestamp of each file), but I'd be happier if I had a zfsstat-like tool that shows such file properties.
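
    One rough heuristic, assuming zfs-fuse reports allocated blocks through stat(2): compare each file's apparent size with the space it actually occupies; files written after compression was enabled usually occupy noticeably less than their apparent size. It cannot tell compression and dedup apart (and sparse files also trip it), so treat it as a hint rather than a verdict:

        # list files whose on-disk usage is less than half their apparent size
        find /tank -type f | while read -r f; do
            apparent=$(stat -c %s "$f")                                # logical size in bytes
            allocated=$(( $(stat -c %b "$f") * $(stat -c %B "$f") ))   # blocks * block size
            if [ "$allocated" -lt $(( apparent / 2 )) ]; then
                printf '%s\tapparent=%s\tallocated=%s\n' "$f" "$apparent" "$allocated"
            fi
        done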

  • Local links (in browsers) on *nix systems

    - by meder
    On Windows I can access files directly from the browser (or at least I have it configured that way currently; I forget whether it was like this natively) with the file:// protocol, so I can access files from, say, the C drive. I'm wondering what the equivalent would be for accessing my files from the browser, if that is at all possible, on a *nix system such as Debian.
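
    A quick sketch of the equivalent, with Firefox used only as an example; any browser's address bar accepts the same URLs, and the path simply starts at the filesystem root instead of a drive letter:

        # open a file or a directory listing straight from the browser
        firefox file:///etc/hosts
        firefox "file:///home/$USER/"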

  • Is there a way to create a copy-on-write copy of a directory?

    - by BCS
    I'm thinking of a situation where I would have something that creates a copy of a directory, tweaks a few files, and then does some processing on the result. This would be done fairly often, maybe a few dozen times a day. (The exact use case is testing patch submissions; dupe the code, patch it, build/test/report/etc.) What I'm looking for could be done by creating a new directory structure and populating it with hard links from the original. However, this only works if all the tools you use delete and recreate files rather than edit them in place. Is there a way to have the file system do copy-on-write for a file? Note: I'm aware that many FSs use COW at a block level (all updates are done via writes to new blocks), but this is not what I want.
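
    Two sketches of the usual approaches, assuming GNU coreutils; paths are examples. The second needs a filesystem with reflink support (e.g. btrfs), which plain ext3/ext4 does not offer:

        # 1. hard-link farm: near-instant "copy", but only safe with tools that replace
        #    files rather than editing them in place (exactly the caveat in the question)
        cp -al /src/tree /work/patch-123

        # 2. real per-file copy-on-write via reflinks, where the filesystem supports it
        cp -a --reflink=always /src/tree /work/patch-123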

  • How to clean up an unprocessed orphan inode list?

    - by bmk
    I tried to remount a filesystem, previously mounted read-only, as read-write:

        mount -o remount,rw /mountpoint

    Unfortunately it did not work:

        mount: /mountpoint not mounted already, or bad option

    dmesg reports:

        [2570543.520449] EXT4-fs (dm-0): Couldn't remount RDWR because of unprocessed orphan inode list. Please umount/remount instead

    A umount does not work either:

        umount /mountpoint
        umount: /mountpoint: device is busy.
        (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1))

    Unfortunately, neither lsof nor fuser shows any process accessing anything located under the mount point. So: how can I clean up this unprocessed orphan inode list so that I can mount the filesystem read-write again without rebooting the computer?
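
    A sketch of the usual sequence, assuming the underlying device is /dev/dm-0 (adjust to your volume); note that e2fsck can only process the orphan list once the filesystem really is unmounted:

        # check the whole mount, not just the path - this often catches users that lsof on the path misses
        fuser -vm /mountpoint

        # a lazy unmount detaches the mount point now and completes when the last user goes away
        umount -l /mountpoint

        # once it is genuinely unmounted, let e2fsck clean up the orphan inode list
        e2fsck -f /dev/dm-0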

  • scp No such file or directory

    - by Joe
    I have a confusing question for which Super User doesn't seem to have a good answer, and neither does Google. I'm trying to scp a file from a remote server to my local machine. The command is this:

        scp user@server:/path/to/source/file.gz /path/to/destination

    The error I get is:

        scp: /path/to/source/file.gz: No such file or directory

    user is my username on the server. The command syntax appears fine to me. ssh works fine, I can cd to the file's directory, and it doesn't seem to be an access control issue. Thanks.

    Edit: Thank you John, I spotted the issue. ls returned this:

        -r--r--r-- 1 nobody users 168967171 Mar 10 2009 /path/to/source/file.gz

    So the file was on a read-only file system, and the user is able to read it but not scp it. I just copied the file to a different directory, chowned it, and it worked fine. It would be good if someone could explain why this is the case, though.

  • two operating systems sharing their file systems with each other (Windows and Linux)

    - by John Kube
    I have two operating systems installed on my notebook computer, Windows Vista and Ubuntu Linux. When I boot up, I'm presented with a bootloader which allows me to choose which one I want to load. I'm interested in sharing each operating system's file system with the other, such that I could access my Windows files from Linux and vice-versa. Is this possible, and if so how would one go about setting it up? Feel free to just post a link to an existing solution if there is one. I would Google for this myself, but I don't even know what to search for, as I don't know what this is called.
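
    A sketch of the Linux-to-Windows half, assuming the Vista partition is /dev/sda1 (sudo fdisk -l will tell you) and the ntfs-3g package is installed; the other direction, reading ext partitions from Windows, needs a third-party driver such as Ext2Fsd:

        # mount the Windows partition read-write under /media/windows
        sudo mkdir -p /media/windows
        sudo mount -t ntfs-3g /dev/sda1 /media/windows

        # make it permanent with an /etc/fstab line along these lines:
        # /dev/sda1  /media/windows  ntfs-3g  defaults  0  0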

  • Relax Linux - it's just me! (filesystem permissions)

    - by Xeoncross
    One of my favorite things about Linux is also the most annoying: file system permissions. On production machines and web servers I love how everything is so secure and locked down, but on development machines it really slows me down. I'll give one example out of the many that I discover weekly. Like most people, I dual-boot Ubuntu and Windows so I can continue using the Adobe CS4 suite. I often design web themes and other things while I'm still in Windows. Later I'll boot into Ubuntu to take the themes and write the backend PHP for them. After mounting the Windows C: drive partition I can copy the template files over so I can begin editing them. However, thanks to Linux's desire to protect me, I find that after copying the files I end up with a totally locked set of files where even I don't have read-write permissions. So, after careful consideration of the tremendous risks that the HTML files pose to me, I chmod them so that I and Apache can begin using them. Now granted, the chmod process isn't that hard, but after you chmod enough files per day you get sick of doing it. I'm constantly creating, fetching, editing, and removing files from my user account, git repos, PHP, or other random processes. This is a personal development machine after all; everything changes on a day-by-day basis. So my question is: how can I get Linux to relax about what I'm doing with my HTML/JS/PHP/TXT/SQL/etc. files so that I can work faster without constantly stopping to chmod things? I pinky-promise I won't hack into my account with an HTML file. ;)
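
    A sketch of one way to take the sting out of the copy step, assuming the Windows partition is /dev/sda2 and your Ubuntu user has uid/gid 1000 (check with id); with these ntfs-3g options everything on the Windows side already appears owned by you as 644/755, so the copies need no chmod afterwards:

        # one-off mount with friendly ownership and modes
        sudo mount -t ntfs-3g -o uid=1000,gid=1000,fmask=133,dmask=022 /dev/sda2 /media/windows

        # or the equivalent /etc/fstab entry (one line):
        # /dev/sda2  /media/windows  ntfs-3g  uid=1000,gid=1000,fmask=133,dmask=022  0  0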

  • External hard drive FAT32 to NTFS conversion fails

    - by Pieter
    I'm trying to convert the FAT32 file system of an external hard drive to NTFS. Here's what happened:

        C:\Windows\system32>chkdsk G:
        The type of the file system is FAT32.
        Volume PIETEREXT created 3/19/2008 12:43
        Volume Serial Number is 1806-2E30
        Windows is verifying files and folders...
        File and folder verification is complete.
        Windows has scanned the file system and found no problems.
        No further action is required.
          488,264,768 KB total disk space.
               72,192 KB in 1,503 hidden files.
            1,281,792 KB in 40,029 folders.
          309,235,168 KB in 199,915 files.
          177,675,584 KB are available.
               32,768 bytes in each allocation unit.
           15,258,274 total allocation units on disk.
            5,552,362 allocation units available on disk.

        C:\Windows\system32>cd \

        C:\>convert g: /fs:ntfs
        The type of the file system is FAT32.
        Enter current volume label for drive G: PIETEREXT
        Volume PIETEREXT created 3/19/2008 12:43
        Volume Serial Number is 1806-2E30
        Windows is verifying files and folders...
        File and folder verification is complete.
        Windows has scanned the file system and found no problems.
        No further action is required.
          488,264,768 KB total disk space.
               72,192 KB in 1,503 hidden files.
            1,281,792 KB in 40,029 folders.
          309,235,168 KB in 199,915 files.
          177,675,584 KB are available.
               32,768 bytes in each allocation unit.
           15,258,274 total allocation units on disk.
            5,552,362 allocation units available on disk.
        Determining disk space required for file system conversion...
        Total disk space:              488384001 KB
        Free space on volume:          177675584 KB
        Space required for conversion:    975155 KB
        Converting file system
        The conversion failed.
        G: was not converted to NTFS

    I looked at the TechNet page for my error, but after closing every app the conversion was still failing halfway through. Why does it keep failing? I kept an eye on Task Manager, but it didn't look like my system resources were anywhere near depletion. I'm using Windows 8.

  • Iomega Home Media Network Hard Drive: Filesystem on the disk?

    - by JJarava
    Hi all! I've got to deal with a malfunctioning "Iomega Home Media Network Hard Drive", and I was wondering if anybody knew what file system format Iomega uses on the disk? I've been trying to find the answer online, but I've gotten nowhere, and checking an obviously malfunctioning unit is not going to give me any assurance. Thanks a lot
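
    If the disk can be pulled from the enclosure and attached to a Linux box, a sketch of how to ask it directly, assuming it shows up as /dev/sdb (device name hypothetical); NAS appliances like this commonly use a Linux-native filesystem, but that is only a guess until the probe says so:

        # show the partition layout, then identify what lives on each partition
        sudo parted /dev/sdb print
        sudo blkid /dev/sdb*
        sudo file -s /dev/sdb1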

  • Are file access times not properly maintained in Mac OS X?

    - by Ether
    I'm trying to determine how file access times are maintained by default in Mac OS X, as I'm trying to diagnose some odd behaviour I'm seeing on a new MBP Unibody (running Snow Leopard, 10.6.2). The symptoms, drilling down to the specific behaviour that seems to be causing the issue:

    - mutt is unable to switch to mailboxes which have recently received new mail
    - mail is delivered by procmail, which updates the mtime of the mbox folder it is updating, but does not alter the atime (this is how new mail detection works: by comparing atime to mtime)
    - however, both the mtime and atime of the mbox file are getting updated

    Through testing, it does not appear that atimes can be set separately in the filesystem:

        : [ether@tequila ~]$; touch test
        : [ether@tequila ~]$; touch -m -t 200801010000 test2
        : [ether@tequila ~]$; touch -a -t 200801010000 test3
        : [ether@tequila ~]$; ls -l test*
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        -rw------- 1 ether staff 0 Jan  1  2008 test2
        -rw------- 1 ether staff 0 Dec 30 11:43 test3
        : [ether@tequila ~]$; ls -lu test*
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        -rw------- 1 ether staff 0 Dec 30 11:43 test2
        -rw------- 1 ether staff 0 Dec 30 11:43 test3

    The test2 file is created with an old mtime, and its atime is set to now (as it is a new file), which is correct. However, test3 is created with an old atime that is never actually set on the file. To be sure this is not just behaviour seen with new files, let's modify an old file:

        : [ether@tequila ~]$; touch -a -t 200801010000 test
        : [ether@tequila ~]$; ls -l test
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        : [ether@tequila ~]$; ls -lu test
        -rw------- 1 ether staff 0 Dec 30 11:45 test

    So it would seem that atimes cannot be set explicitly (the atime is always reset to "now" when either mtime or atime modifications are submitted). Is this something inherent to the filesystem itself, is it something that can be changed, or am I totally crazy and looking in the wrong place?

    PS. The output of mount is:

        : [ether@tequila ~]$; mount
        /dev/disk0s2 on / (hfs, local, journaled)
        devfs on /dev (devfs, local, nobrowse)
        map -hosts on /net (autofs, nosuid, automounted, nobrowse)
        map auto_home on /home (autofs, automounted, nobrowse)

    ...and Disk Utility says that the drive is of type "Mac OS Extended (Journaled)".

  • CentOS 5.5 installation on disk image

    - by Dima
    Today, I install CentOS 5.5 using a kickstart script. I would like to install CentOS in a different way:

    1. Create a disk image (using the dd command)
    2. Create a filesystem on this disk image using mkfs.ext3
    3. Install CentOS on this filesystem
    4. Make the disk image bootable (using grub-install)
    5. Copy the disk image to the physical hard disk (using the dd command)

    I know how to do all of these items except 3. Is it possible? If yes, how can I install CentOS onto the disk image?
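
    For context, a sketch of steps 1 and 2 plus the loop mount that step 3 would work inside (size and paths are just examples); the usual route for step 3 itself is to install into that mounted tree, e.g. with yum --installroot, but that is exactly the part being asked about:

        # 1. create a sparse 8 GB disk image
        dd if=/dev/zero of=centos.img bs=1M count=0 seek=8192

        # 2. put an ext3 filesystem on it (-F because it is a regular file, not a block device)
        #    and mount it through a loop device
        mkfs.ext3 -F centos.img
        mkdir -p /mnt/image
        mount -o loop centos.img /mnt/image

        # 3. (the open question) install the distribution into /mnt/image,
        #    for example with: yum --installroot=/mnt/image groupinstall Base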

  • Program complains there is not enough disk space even though free space exists

    - by user1189899
    I have an ext3 partition mounted in ordered data mode. If a power failure occurs while a program is creating files on that partition, I see that the space usage reported is normal and I don't see any partially written files. But when I try to run the same program again after the system comes back up, it complains that there is not enough disk space, even though the free space reported is far more than required. The program always succeeds under normal conditions. Also, the problem seems to disappear when the partition is remounted. I was wondering what the right way to handle this situation would be, other than unmounting and remounting.
