Search Results

Search found 2834 results on 114 pages for 'filesystem corruption'.

  • Best alternatives to recover lost directories on a FAT32 external hard drive?

    - by Sergio
    I have a 320 GB ADATA CH91 external hard drive. I suspect it has a problem with the USB connector: on certain occasions write operations fail, causing data loss. I just lost a directory with several GBs of very useful information, and I have not written to the disk since. What tool would you recommend to recover the lost data? The disk is FAT32-formatted (a single partition) and I use it under both Linux and Windows. What filesystem would you recommend to avoid future data loss? I currently use this external drive only under Linux, so there are several available choices (FAT, NTFS, ext3, ext4, reiser, etc.).
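
    A hedged starting point (the device name and filenames below are placeholders; verify the drive with lsblk first): image the failing disk read-only, then let TestDisk hunt for the lost directory on the image; PhotoRec can carve raw files if the directory structure itself is gone. On the reformat question, NTFS and ext3/ext4 at least journal metadata, which helps after interrupted writes; FAT32 has no journal at all.

      sudo ddrescue /dev/sdb fat32.img fat32.log   # full read-only image first (package: gddrescue)
      sudo testdisk fat32.img                      # Advanced -> Undelete, or rebuild the FAT
      sudo photorec fat32.img                      # fallback: raw file carving, names are lost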

  • How to use ccache selectively?

    - by Anonymous
    I have to compile multiple versions of an app written in C++ and I am thinking of using ccache to speed up the process. ccache howtos give examples which suggest creating symlinks named gcc, g++, etc. and making sure they appear in PATH before the real gcc binaries, so ccache is used instead. So far so good, but I'd like to use ccache only when compiling this particular app, not always. Of course, I could write a shell script that creates these symlinks every time I want to compile the app and deletes them when the app is compiled, but this looks like filesystem abuse to me. Are there better ways to use ccache selectively, not always? For compiling a single source file, I could just manually call ccache instead of gcc and be done, but I have to deal with a complex app that uses an automated build system for multiple source files.
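
    Two ways to scope ccache to one project without touching the global PATH (a sketch, assuming a make-based build that honors CC/CXX; /usr/lib/ccache is where Debian-family distributions ship ready-made symlinks):

      # Per-invocation: only this build goes through ccache.
      make CC="ccache gcc" CXX="ccache g++"

      # Or prepend the distribution's symlink directory to PATH only for this build:
      PATH=/usr/lib/ccache:$PATH make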

  • Linux RAID: Can mdadm --grow a RAID1 while mounted?

    - by Chris
    I have two 500 GB drives in a RAID1 setup that I needed to upgrade for more space. I mdadm --fail'ed each drive in turn, used dd to copy each drive to its respective larger drive (2 TB each), removed the smaller drives, replaced them with the larger ones, reassembled the array and forced a resync. So now I've got a 500 GB RAID1 sitting on 2 TB drives, and I wish to grow it. The plan is to use mdadm --manage /dev/md0 --grow to grow the array, then boot a rescue CD, assemble the array in that environment, and run resize2fs on it. Can I use mdadm --grow on a mounted and live filesystem? Also, do I need more options to make sure the grow operation stays RAID1?
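
    For reference, growing the component size of a RAID1 with the filesystem mounted is generally supported, and ext3/ext4 can be grown online with resize2fs, so the rescue CD step may not even be needed (a sketch, assuming /dev/md0 carries the ext3/ext4 filesystem directly):

      mdadm --grow /dev/md0 --size=max   # expand each member to its full 2 TB
      resize2fs /dev/md0                 # online grow of the mounted filesystem

    --size changes only the per-device size; the level stays RAID1, so no extra options are needed for that.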

  • What is the secure way to isolate FTP server users on Unix?

    - by djs
    I've read the documentation for various FTP daemons and various long threads about the security implications of using a chroot environment for an FTP server when giving users write access. The vsftpd documentation, in particular, implies that using chroot_local_user is a security hazard while not using it is not. There seems to be no coverage of the implications of allowing the user access to the entire filesystem (as permitted by their user and group membership), nor of the confusion this can create. So I'd like to understand what the correct method is in practice. Should an FTP server with authenticated write-access users provide a non-chroot environment, a chroot environment, or some other option? Given that Windows FTP daemons don't have the option of chroot, they must implement isolation some other way. Do any Unix FTP daemons do something similar?
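
    For what it's worth, the arrangement usually recommended for vsftpd (a sketch, not a security guarantee) is to chroot users but keep the chroot root itself root-owned and non-writable, handing each user a writable subdirectory, which sidesteps the writable-chroot hazard the documentation warns about:

      # /etc/vsftpd.conf (excerpt)
      chroot_local_user=YES
      user_sub_token=$USER
      local_root=/home/$USER/ftp   # root-owned, mode 755: the non-writable chroot root
      # users then write only inside e.g. /home/<user>/ftp/files, which they own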

  • Unable to mount CIFS in Red Hat 6

    - by user3734522
    I am relatively new to Linux, and I am trying to mount a CIFS filesystem in Red Hat from an Openfiler instance I have on my network. The Openfiler instance authenticates against AD. I am able to connect using samba:

      smbclient '\\10.25.214.26\cluster_storage.cluster.Cluster' -U [DOMAIN]+[USERNAME]
      Enter DOMAIN+USERNAME's password:
      Domain=[DOMAIN] OS=[Unix] Server=[Samba 3.5.6]
      smb: \>

    When I attempt to mount on boot via fstab, I am told during startup that the line is bad. The mount command I based it on:

      mount -t cifs -o username=[DOMAIN]+[USERNAME], password=[my password], domain=[domain.edu] '\\10.25.214.26\cluster_storage.cluster.Cluster' /mnt/scratch

    Any help would be greatly appreciated.
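
    Two likely culprits, hedged since the actual fstab line isn't shown: fstab fields must not contain spaces (the spaces after the commas would split the options field), and the fstab convention uses forward slashes for the UNC path. A sketch of a working line, with the password moved out of fstab into a credentials file:

      # /etc/fstab -- one line, no spaces inside the options field
      //10.25.214.26/cluster_storage.cluster.Cluster /mnt/scratch cifs credentials=/root/.smbcred,domain=DOMAIN,_netdev 0 0

      # /root/.smbcred (chmod 600)
      username=USERNAME
      password=yourpassword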

  • Constructor and Destructor of a singleton object called twice

    - by Bikram990
    I'm facing a problem with a singleton object in C++. Here is the situation: I have four shared libraries (say libA.so, libB.so, libC.so, libD.so) and two executable binaries, each of which also uses another shared library (say libE.so) that deals with files. The purpose of libE.so is to write data into a file; if the executable restarts, or the file exceeds a certain size, the file is zipped and a new one is created with a timestamp in its name. libE.so uses a singleton object and exports a handler class for obtaining and using that singleton. Compression happens only in the two cases above. The loading executable can specify only the starting name of the file; the handler class provides no other control. libA.so, libB.so, libC.so and libD.so behave almost identically: each declares an object of the handler class, which gets the singleton instance from libE.so and uses it. All of these libraries are linked into both executables. If only one of the two executables runs, everything is fine; but if both run, one after the other, the file belonging to the first executable gets compressed. Debug info: the constructor and destructor of the singleton object are each called twice (per executable); the singleton object is static and never deleted explicitly; and the executable cannot exit cleanly, reporting: glibc detected *** (exe1 or exe2): double free or corruption (!prev): some_addr ***. Running the binaries under valgrind shows the error comes from the destructor of the singleton object. Thanks
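
    Construction and destruction happening twice usually means two copies of the singleton's static instance exist in the process, e.g. the translation unit defining it was linked into libE.so and also statically into the executables or another library; at exit each copy's destructor then frees the same resources. A diagnostic sketch (SingletonType is a placeholder for the real class name):

      # Look for the singleton's instance symbol defined in more than one binary.
      for bin in libA.so libB.so libC.so libD.so libE.so exe1 exe2; do
          echo "== $bin"
          nm -C --defined-only "$bin" 2>/dev/null | grep -i 'SingletonType'
      done

    If the instance shows up as defined in more than one of them, linking libE.so everywhere dynamically (and nowhere statically) is the usual fix.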

  • How to automatically copy a file uploaded by a user via FTP in Linux (CentOS)?

    - by Buttle Butkus
    An outside contractor says they need read/write/execute permissions on part of the filesystem so they can run a script. I'm OK with that, but I want to know what they're running, in case it turns out there is some nefarious code. I assume they are going to upload the file, run it, and then delete it to prevent me from finding out what they've done. How can I find out exactly what they've done? My question specifically asks for a way of automatically copying the file, which would be one solution, but if you have another, that's fine. For example, if the file could be automatically copied to /home/root/uploaded_files/, that would be awesome.
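
    One way to do this on CentOS, sketched with inotify-tools (the watched path and destination are assumptions; adjust to the contractor's actual area):

      #!/bin/bash
      # Copy every file as soon as a write to it completes, before it can be deleted.
      mkdir -p /root/uploaded_files
      inotifywait -m -r -e close_write --format '%w%f' /home/contractor |
      while read -r f; do
          cp --backup=numbered "$f" /root/uploaded_files/
      done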

  • Win 2003 Junction Point to Remote Unix Share

    - by Pogrindis
    Env: Windows Server 2003 with shared folders already established over the local domain via a Windows DC and AD. A Linux box is used as a file server, with /files/share readable and writable by all domain users; that part is not a problem. I have already transferred the files from the Windows box to /files/share on the Linux box, and I now want to create a junction point in order to prevent users saving to the Windows box. I have tried the File Server administration tools on Windows Server 2003, but they will not let me create junctions to remote servers. I have also tried mounting the remote filesystem as a drive and proceeding that way, with no joy. Does anyone have any suggestions?

  • Can't access an iSCSI volume

    - by jmiguel.rodriguez
    I have an iSCSI target at a customer site that I have been using from an old Fedora (Core 6) server. I configured it and formatted it as ext3 (a mistake, I now know) and have been working with it for some time. Now I need to access this volume from another machine. From what I've read, I can't do that safely from two machines at the same time (yes, that's the first thing I tried). So I unmounted it from the original server and tried to mount it on the new one (first under Ubuntu 10 LTS; when that failed I installed another Fedora with the same configuration), with no success. The problem: I can see all the targets on the NAS, but when I run fdisk -l to list the devices and figure out which one to mount, every target shows up with an SFS filesystem type. From the original server I also see them all as SFS (after all, they belong to my customer and I don't know what he keeps on them), except the one I manage, which shows as 'Linux'. What can I do? Thanks in advance, regards, jmiguel
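
    Worth checking before anything drastic: the type fdisk prints is just a byte in the partition table, not what is actually on disk, so the ext3 data may well be intact. A sketch from the new server (assuming the target appears as /dev/sdb):

      blkid /dev/sdb1                  # reports the real on-disk signature (ext3, etc.)
      file -s /dev/sdb1                # second opinion, ignores the partition-type byte
      mount -t ext3 /dev/sdb1 /mnt     # explicit -t skips type guessing entirely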

  • Cannot change my root password on XenServer

    - by Michlaou
    I am trying to change my root password on my XenServer 6.0. I followed these steps: at the boot: prompt, enter menu.c32; select xe-serial and press Tab; add "single" before the second triple hyphen and press Enter. The resulting line is:

      mboot.c32 /boot/xen.gz com1=115200,8n1 console=com1,vga mem=1024G dom0_max_vcpus=4 dom0_mem=752M lowmem_emergency_pool=1M crashkernel=64M@32M single --- /boot/vmlinuz-2.6-xen root=LABEL=root-rodraxar ro console=tty0 xencons=hvc console=hvc0 --- /boot/initrd-2.6-xen.img

    Output scrolls past on the screen and stops at:

      EXT3-fs: mounted filesystem with ordered data mode.

    Can you help me?
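
    One detail stands out, hedged since only the one boot line is quoted: mboot.c32 hands everything before the first --- to the Xen hypervisor and only the middle section to the Linux kernel, so a "single" placed before the first --- (as in the line above) is never seen by Linux. It likely belongs after the kernel's own arguments:

      mboot.c32 /boot/xen.gz com1=115200,8n1 console=com1,vga mem=1024G dom0_max_vcpus=4 dom0_mem=752M lowmem_emergency_pool=1M crashkernel=64M@32M --- /boot/vmlinuz-2.6-xen root=LABEL=root-rodraxar ro console=tty0 xencons=hvc console=hvc0 single --- /boot/initrd-2.6-xen.img

      # once the system drops to the single-user shell:
      passwd root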

  • Is Ceph usable with only 100 Mbps bandwidth between nodes?

    - by vaab
    I don't have great hardware, but my requirements are low. I would like to start using Ceph so as to abstract filesystem location and allow potentially easy scaling to bigger hardware in a hypothetical future. My hardware meets Ceph's hardware requirements except for the Ethernet bandwidth between the hosts: mine is 100 Mbit/s, which is much lower than the 1 Gbit/s expected by even Ceph's minimum requirements. Will I be able to use Ceph in a very small semi-production environment (with a limited number of clients)? FYI: my hardware is 2 (maybe 3) hosts, each with a 4-core Intel CPU, 24 GB RAM and 2x2 TB disks, but only 100 Mbit/s between them.

  • (Possibly) Corrupted DDCrypt self-decrypting file

    - by sca
    I recently discovered some files I had encrypted and archived on a CD using DDCrypt 2.0, a small encryption utility from back in the day. One of its functions is to create self-decrypting files: essentially EXE files into which you enter a password, after which the file is extracted onto your filesystem. Unfortunately, I can't get them to do this properly. They appear to decrypt, and then at the last moment I get the error: Error opening encrypted file C:\users\username\Temp\dde3cfa.tmp Reason: Success, and another dialog immediately says: Error extracting ddc file. I'm not really sure what this means (certainly not success), but I've examined the temporary file and it appears to be garbage data (although of the right size). Has anyone an idea of what might be done to extract the files, or dealt with a similar situation before? Any ideas are appreciated! Thank you in advance for your time and any help you can provide.

  • In Linux, is it possible to get a listing of drives' disk space usage that also shows volume labels?

    - by DavidH
    I know about df, of course, but df does not output volume labels. I have 5 USB hard drives plugged into my NAS box, and would love to know which is which. Current df output:

      Filesystem            Size  Used Avail Use% Mounted on
      /dev/sda1              27G  2.2G   24G   9% /
      none                   56M  476K   55M   1% /dev
      none                   60M     0   60M   0% /dev/shm
      none                   60M  332K   59M   1% /var/run
      none                   60M     0   60M   0% /var/lock
      none                   60M     0   60M   0% /lib/init/rw
      /dev/sde1             150G  102G   48G  68% /media/usb0
      /dev/sdb1             299G  196G  103G  66% /media/usb1
      /dev/sdc1             233G  183G   51G  79% /media/usb2
      /dev/sdd1             233G  209G   25G  90% /media/usb3
      /dev/sdf1             150G  101G   49G  68% /media/usb4
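
    A few options, sketched (availability depends on how old the util-linux on the NAS is):

      lsblk -o NAME,LABEL,FSTYPE,SIZE,MOUNTPOINT   # labels alongside mount points
      sudo blkid                                   # LABEL= and UUID= for every device
      ls -l /dev/disk/by-label/                    # symlinks from each label to its device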

  • Is there such a thing as a file-hosted container which deduplicates the data held within?

    - by Mallow
    Background: I have backups of a website which stores all of its data in a single file. This file is several gigs large, and I have many different backups of it. Most of the data within is the same, plus whatever was added or changed. I want to keep all the backups I've made through the years in case I find some horrible data corruption along the line, but storing a 10 GB file every month gets expensive. Seeking a solution: I've often thought about ways of alleviating this problem. One idea that comes up often combines a deduplicating file system with something that doesn't require its own partitioned volume on a hard drive: something like TrueCrypt's "file-hosted containers", which the TrueCrypt program can mount and dismount like a regular hard drive. Question: is there a virtual hard drive mounter that uses a file-based container with a deduplicating file system? (This question is a little awkward to put into words; if you have a better idea of how to ask it, please feel free to help out.)
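
    One concrete shape this can take (a sketch, assuming ZFS is available; the paths and pool name are placeholders): ZFS accepts a plain file as a vdev, and with dedup=on identical blocks across the monthly dumps are stored once:

      truncate -s 50G /backups/container.img            # sparse backing file
      sudo zpool create -O dedup=on bkpool /backups/container.img
      sudo zfs create bkpool/site
      cp june-dump.dat /bkpool/site/                    # later dumps share unchanged blocks
      sudo zpool export bkpool                          # "dismount" the container
      sudo zpool import -d /backups bkpool              # reattach it later

    One caveat worth knowing: ZFS keeps its dedup table in RAM, which becomes costly as the pool grows.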

  • How to protect folder privacy against unethical network administrators? [closed]

    - by Trevor Trovalds
    I just need a technical solution to keep my group's shared passwords, projects and work safe. Our network has Active Directory with public/groups/users shares and NTFS permissions, on a Windows Server 2003 box that will soon migrate to Windows Server 2008 R2. Our IT crowd is small: 2 DBAs, 4 designers, 6 developers (including me), 2 netadmins and (a lot of) tech supporters; everyone has local admin rights. The two network admins weren't the ones who set the network up; they just took over recently when the previous ones quit. We usually find them laughing at private content that users store on the group shares in AD, sabotaging documents that don't match their personal tastes, and, finally, this week we found out they stole a project we (developers and DBAs) were finishing; long before, they had presented it to the CEO as theirs without us knowing. I'm a systems analyst, and initially my group decided to store critical content, like shared passwords, inside encrypted .zip files. Unfortunately we couldn't do the same for the hundreds of other folders and files, including the stolen project, because zipping would take too long for every update. We also tried an encrypted Subversion repository over SSL, but many of the people involved in the projects (~38 at the moment) have trouble using TortoiseSVN when contributing, and we very often had to fix messed-up commits. I think these two attempts give an idea of what we've been trying to achieve. So, is there a practical "individual" protection for our extensive data, or can my hope already be euthanized? P.S.: Seriously, where I live and work, political corruption has gone wild, so reporting them is likely impracticable. Both netadmins have strong "political bonds" with the CEO and the President, hence their lousy behavior and our failed attempts to denounce them.

  • Why does the /proc filesystem show this information?

    - by liutaihua
    Running lsof | grep deleted finds processes holding open file descriptors that the system reports as deleted:

      mingetty 2031 root txt REG 8,2 15256 49021039 /sbin/mingetty (deleted)

    Looking in the /proc filesystem:

      ls -l /proc/[pid]
      lrwxrwxrwx 1 root root 0 Sep 17 16:12 exe -> /sbin/mingetty (deleted)

    but the executable actually exists, perfectly normal, at /sbin/mingetty. Some sockets are in the same situation:

      ls -l /proc/[pid]/fd
      82 -> socket:[23716953]

    yet netstat -ae | grep [socket id] can still find the socket. Why does the OS display this information?
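
    For context: (deleted) means the directory entry the process originally opened is gone, typically because the file was replaced (e.g. a package update writes a new /sbin/mingetty with a new inode) rather than removed outright; the process keeps the old inode alive, and the same applies to sockets that are open but no longer reachable by name. The old copy can even be read back (a sketch using the PID from the listing above):

      ls -l /proc/2031/exe                        # still points at the deleted inode
      cp /proc/2031/exe /tmp/old-mingetty         # recover the exact binary the process runs
      sha1sum /tmp/old-mingetty /sbin/mingetty    # differs if the on-disk file was replaced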

  • df -h overreports disk space on VPS

    - by Rincewind42
    When I run df -h on my new Ubuntu Linux vServer I get the following:

      # df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/hdv1             466G   33G  434G   7% /
      none                   16M     0   16M   0% /tmp

    Running du -sh gives:

      # du -sh
      du: cannot access `./proc/13624/task/13624/fd/4': No such file or directory
      du: cannot access `./proc/13624/task/13624/fdinfo/4': No such file or directory
      du: cannot access `./proc/13624/fd/4': No such file or directory
      du: cannot access `./proc/13624/fdinfo/4': No such file or directory
      952M    .

    The VPS should only have 5 GB of disk space, but df reports 466 GB. How can I view the correct amount of disk space?
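
    On container-style VPSes (OpenVZ/Virtuozzo, typical for a "vServer") /dev/hdv1 is a virtual device, and df simply reports whatever quota the host has configured, so the 466G likely reflects a host-side setting rather than your 5 GB plan; the du errors are just short-lived /proc entries vanishing mid-scan. A sketch to measure real usage and check the container's limits:

      du -shx /                        # -x: stay on one filesystem, which skips /proc noise
      cat /proc/user_beancounters      # present on OpenVZ; shows the real container limits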

  • Hard drive partition size wrong. How do I resize without loss of data?

    - by BreezyChick89
    Output of fsck:

      $ fsck
      fsck from util-linux 2.20.1
      e2fsck 1.42 (29-Nov-2011)
      The filesystem size (according to the superblock) is 610471680 blocks
      The physical size of the device is 536870911 blocks
      Either the superblock or the partition table is likely to be corrupt!

    It should be one partition, but it now shows 2.2 TB partitioned and 0.3 TB unpartitioned. How do I make the first partition correctly span the full 2.5 TB without destroying whatever is in either part? I did not use RAID or anything; my devices have been repeatedly corrupted by thunderstorms. Elsewhere, people seem to recommend something like:

      sudo resize2fs /dev/sdc1 610471680
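
    A hedged word of caution before running anything: the superblock claims more blocks (610471680) than the device currently provides (536870911), so either the partition shrank (the partition table may be what the storms damaged) or the superblock is corrupt; resizing to the larger number cannot work while the device is the smaller size. The usual diagnosis order, sketched:

      sudo parted /dev/sdc unit s print    # does the partition still end where it used to?
      sudo e2fsck -f /dev/sdc1             # repair metadata before any resize attempt
      # only if the partition really is the smaller size and the table can't be restored:
      # sudo resize2fs /dev/sdc1 536870911   # shrink to physical size; data beyond it is lost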

  • Is there a tool for verifying the contents of a Zip archive against the source directory's contents?

    - by Basil
    Here's the scenario: I create a ZIP archive using some GUI package like WinZip, 7-Zip or whatever, by right-clicking a directory "somename" and selecting "Compress to archive 'somename.zip'". When the archive is complete, I open it and discover that some files are missing from it (for reasons yet unknown). I want to find all the files missing from the archive without extracting it to another directory, running a directory diff, and so on. So: is there a tool (GUI or command-line, standalone or built into a compressor, for Windows or Linux, I don't care) that can walk through an archive and compare its contents against a directory on the filesystem?
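
    On Linux this can be done without extracting anything, sketched below (assuming the archive stores paths under a top-level somename/ prefix, as right-click compression usually produces):

      (cd somename && find . -type f | sed 's|^\./||' | sort) > /tmp/on-disk.txt
      unzip -Z1 somename.zip | grep -v '/$' | sed 's|^somename/||' | sort > /tmp/in-zip.txt
      comm -23 /tmp/on-disk.txt /tmp/in-zip.txt   # prints files missing from the archive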

  • Turning a running Linux system into a KVM instance on another machine

    - by Charles
    I have two physical machines that I wish to virtualize. I cannot (physically) plug the hard drives from either machine into the new machine that will act as their VM host, so I think copying the entire structure of the system over with dd is out of the question. How can I best migrate these machines from their hardware into the KVM environment? I've set up empty, unformatted LVM logical volumes to host their filesystems, on the understanding that giving the VMs a real block device achieves higher performance than an image file sitting on a filesystem. Would I be better off creating fresh OS installs and rsyncing the differences over? FWIW, the two machines to be virtualized run CentOS 5, and the host machine runs Ubuntu Server 10.04 for no particularly important reason. I doubt this matters too much, as it's still KVM and libvirt doing the work.
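
    The rsync route works well for exactly this case (a sketch; the volume names are placeholders, and it assumes root ssh access to the running CentOS boxes):

      mkfs.ext3 /dev/vg0/centos5-root
      mount /dev/vg0/centos5-root /mnt/vm
      rsync -aAXH --numeric-ids \
            --exclude="/proc/*" --exclude="/sys/*" --exclude="/dev/*" --exclude="/tmp/*" \
            root@source-box:/ /mnt/vm/
      # afterwards: chroot in to install grub, then adjust /etc/fstab and the initrd
      # for the VM's disk devices before first boot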

  • Shrink NTFS Partition Windows 2003

    - by Coops
    We have an iSCSI target provided by a CentOS server, attached to a Windows Server 2003 Standard box and formatted as NTFS. My question is this: I know we can resize the backend block device fine (LVM et al.), but how do you tell Windows that the NTFS filesystem has shrunk afterwards? (Note: we want to shrink.) I'm imagining a world of pain if it's not done correctly! This is a production box, so ideally we'd like the drive to stay mounted and online throughout, but downtime can be scheduled if need be. 90% of what I've found on the subject so far involves using the ntfsresize command from Linux to do the job, but surely Windows can do this itself? Cheers!
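
    As far as I know, Server 2003 cannot shrink NTFS natively (diskpart only gained shrink in Vista/2008), so the offline ntfsresize route really is the standard one; the order of operations matters, filesystem first, block device second (a sketch with placeholder names and sizes, run while Windows is disconnected from the LUN):

      ntfsresize --no-action --size 400G /dev/sdX1   # dry run against the iSCSI-attached LUN
      ntfsresize --size 400G /dev/sdX1               # shrink the NTFS filesystem itself
      lvreduce -L 410G /dev/vg_iscsi/lun1            # then shrink the backing LV, keeping headroom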

  • Apache .shared folder

    - by Kevin
    There are already a bunch of rules in my Apache configuration. What I want to add is the following. There are some shared folders (.shared): /var/www/.shared/, /var/www/.include/.shared/ and /var/www/.include/(.*)/.shared/. When someone visits http://domain.com/test.png, Apache should first apply the existing rules and, when the file or folder is not found, look in those .shared folders. So suppose I've got this filesystem: /var/www/.shared/dog.png, /var/www/.shared/test.gif and /var/www/domain.com/dog.png. When someone visits http://domain.com/test.gif, it must serve test.gif from the .shared folder; when someone visits http://domain.com/dog.png, it must serve dog.png from the domain.com folder (because the existing Apache rules run first).
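
    One hedged way to express the fallback with mod_rewrite in the domain.com vhost (an untested sketch; paths taken from the question, and the /sharedfiles alias name is an invention):

      RewriteEngine On
      # Serve from .shared only when the requested file does not exist locally.
      RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
      RewriteCond /var/www/.shared%{REQUEST_URI} -f
      RewriteRule ^ /sharedfiles%{REQUEST_URI} [PT,L]
      Alias /sharedfiles /var/www/.shared

    The same pair of RewriteCond lines can be repeated for the other .shared locations, ordered from most to least specific.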

  • Can I lvreduce after lvextend without losing the ext4 partition inside it?

    - by DrSAR
    In a botched attempt to move my root partition from one disk to another, I have done the following: added a new disk; partitioned it with parted (partition #3 now almost fills the disk); initialized a physical volume:

      $ pvcreate /dev/sdb3
        Physical volume "/dev/sdb3" successfully created

    extended the volume group to include the new physical volume:

      $ vgextend myvg /dev/sdb3
        Volume group "myvg" successfully extended

    and extended the logical volume (I think this is where I ballsed it up: I think I should have pvmove'd the extents to the new PV in that group instead; can someone confirm?):

      $ lvextend /dev/mapper/myvg-root /dev/sdb3

    I would now like to undo the lvextend and then proceed with the original plan of moving the contents of the old physical volume over to the new one. Can I reduce the logical volume without fear of damaging the ext4 filesystem? (I have not yet touched the ext4 filesystem that sits in /dev/mapper/myvg-root with anything like resize2fs.) If so, how do I tell it to reduce by exactly the right amount?

      $ lvreduce --by-exactly-the-amount-occupied-by-PV /dev/sdb3 /dev/mapper/myvg-root
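
    Answering both parts with a sketch (the old PV is assumed to be /dev/sda3; use pvs to see the real name): yes, pvmove was the intended step, and yes, the lvextend can be undone safely, because resize2fs was never run, so the extra extents hold no filesystem data. Reducing by exactly the new PV's allocated extent count frees everything that landed on /dev/sdb3:

      pvdisplay /dev/sdb3                        # note "Allocated PE", say it prints 12345
      lvreduce -l -12345 /dev/mapper/myvg-root   # shrink the LV by exactly those extents
      pvmove /dev/sda3 /dev/sdb3                 # original plan: migrate old PV -> new PV
      vgreduce myvg /dev/sda3                    # finally drop the old PV from the group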

  • Filename case issue over WebDAV

    - by user98365
    We are accessing a Samba-shared directory from a Windows client using the WebDAV client WebDrive, but both directories (data/ and Data/) show the same contents even though they are entirely different. I know this is because the Windows filesystem is case-insensitive while Linux is case-sensitive. Is there any solution for this? We had the same issue when viewing the share through a directly mounted Samba directory, but we solved that by editing smb.conf as described in the following link: Does Samba work well with Windows when case-sensitive names are enabled? Please help to solve this when the share is accessed via WebDAV.

  • Best server sync software/methods [closed]

    - by Meep3D
    I have a test server at home and a test server at the office. I'd like to somehow sync multiple folders in both directions automatically, so I can work at home while also keeping an offsite backup. I've tried Live Sync (Microsoft's own product), but it chokes on large numbers of files and seems a bit rudimentary. Dropbox is also a bit small and does not adapt to our filesystem setup. I have seen a few online backup services, but none seemed geared to multiple computers using the same account. I don't mind paying a monthly fee provided the service is good. Suggestions would be gratefully appreciated!
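
    If a tool suggestion is acceptable: Unison is built for exactly this two-way case (a sketch; the host and paths are placeholders, and it could run from cron on either side):

      # Two-way sync between home and office over ssh; newer copy wins on conflict.
      unison -batch -prefer newer /srv/projects ssh://office.example.com//srv/projects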
