Search Results

Search found 9383 results on 376 pages for 'disk corruption'.

  • Installing ArchLinux into Ubuntu 12.04 root

    - by Johnny
    Is it possible to install two Linux distros into one root, so that they share the same UUID and GUID, the same configs and packages, plus the same user /home folder? For example: I already have Ubuntu and Windows 7 dual-booting on my laptop. Could I install Arch's base, base-devel and kernel into the same root folder without conflicting with Ubuntu? P.S. I don't feel like repartitioning my drive again, because there's a very complicated partition hierarchy occupying the entire disk. =)

  • iotop for Linux kernel 2.6.18

    - by Lightsauce
    So it has come to my attention that iotop isn't available for 2.6.18, since that's less than 2.6.20, and iotop also requires Python 2.6+. I've done some research and came across this article: http://lserinol.blogspot.com/2009/09/io-usage-per-process-on-linux.html According to it, if processes have I/O stats in /proc/pid#/io (where pid# is the process number), this is doable regardless of the kernel version. So, in theory, I could upgrade Python to 2.6 and test out iotop.

    However, my flavor of Linux, CentOS release 5.5 (Final), currently only supports Python 2.4.3-44.el5. Uninstalling that via yum doesn't look pretty: it ends up wanting to remove 235 packages, most of which are very important! I read somewhere online (I forget the URL from yesterday) that you can install Python 2.6+ in parallel to this one and have the iotop rpm use that, but I didn't choose that route.

    I figured, what the heck, let's write iotop in bash (not copying it, but reverse engineering it without actually looking at its code or at it in use). I thought it would just grab the /proc/pid#/io file and parse the stats. So I wrote a script that collects rchar, wchar, read_bytes, and write_bytes from all the /proc/pid#/io files, sorts by each metric, and grabs the top 10 highest values. The conclusion: the data seems completely useless.

    Does anybody know any resources for advanced Linux where I can figure out what these /proc/pid#/ directories are actually reporting about disk I/O? My main goal is to figure out what exactly is causing high load on my disk. I just know it's on the / partition (/dev/sda2 in this case), and I'm not really sure how to narrow it down without the help of iotop. If I run iostat for one minute at one-second intervals, the first result it gives me shows a high kB_read/s, which makes me think it's mostly reading. However, the per-second updates after that only show values for kB_wrtn/s. This makes me think the initial value iostat gives me is misleading.
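
    For reference, a minimal sketch of the kind of script described above (an illustration, not the author's actual script). It walks /proc/<pid>/io, pairs the write_bytes counter with each process name, and prints the top 10. One thing that likely explains why the collected data "seems useless": these counters are cumulative totals since each process started, not rates, so to find what is hitting the disk right now you need to take two samples a few seconds apart and subtract. The same caveat applies to iostat, whose first line is an average since boot. Reading io for other users' processes requires root.

        #!/bin/bash
        # Rank processes by cumulative write_bytes from /proc/<pid>/io.
        # Swap write_bytes for rchar, wchar, or read_bytes as needed.
        for io in /proc/[0-9]*/io; do
            pid=${io#/proc/}; pid=${pid%/io}
            wb=$(awk '/^write_bytes:/ {print $2}' "$io" 2>/dev/null)
            [ -z "$wb" ] && continue
            # second field of /proc/<pid>/stat is the (comm) process name
            name=$(awk '{print $2}' "/proc/$pid/stat" 2>/dev/null)
            echo "$wb $pid $name"
        done | sort -rn | head -n 10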

  • MongoDB and datasets that don't fit in RAM no matter how hard you shove

    - by sysadmin1138
    This is very system dependent, but chances are near certain we'll scale past some arbitrary cliff and get into Real Trouble. I'm curious what kind of rules of thumb exist for a good RAM to disk-space ratio. We're planning our next round of systems, and need to make some choices regarding RAM, SSDs, and how much of each the new nodes will get.

    But now for some performance details! During the normal workflow of a single project run, MongoDB is hit with a very high percentage of writes (70-80%). Once the second stage of the processing pipeline hits, it's extremely read-heavy, as it needs to deduplicate records identified in the first half of processing. This is the workflow that "keep your working set in RAM" is made for, and we're designing around that assumption.

    The entire dataset is continually hit with random queries from end-user-derived sources; though the frequency is irregular, the size is usually pretty small (groups of 10 documents). Since this is user-facing, the replies need to be under the "bored now" threshold of 3 seconds. This access pattern is much less likely to be in cache, so it will very likely incur disk hits.

    A secondary processing workflow is a heavy read of previous processing runs that may be days, weeks, or even months old; it runs infrequently but still needs to be zippy. Up to 100% of the documents in the previous processing run will be accessed. No amount of cache-warming can help with this, I suspect. Finished document sizes vary widely, but the median size is about 8K.

    The high-read portion of the normal project processing strongly suggests the use of replicas to help distribute the read traffic. I have read elsewhere that 1:10 RAM-GB to HD-GB is a good rule of thumb for slow disks. As we are seriously considering much faster SSDs, I'd like to know if there is a similar rule of thumb for fast disks. I know we're using Mongo in a way where cache-everything really isn't going to fly, which is why I'm looking at ways to engineer a system that can survive such usage. The entire dataset will likely be most of a TB within half a year, and it will keep growing.
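
    For a sense of scale under the 1:10 rule of thumb cited above (a hypothetical back-of-envelope figure, not a recommendation):

        1 TB on-disk data x (1 RAM-GB / 10 HD-GB)  =>  ~100 GB RAM across the data-bearing nodes

    Whatever the right SSD ratio turns out to be, it should be looser, since a cache miss costs roughly 0.1 ms of random-read latency on an SSD versus roughly 10 ms of seek time on a slow disk; how much looser is exactly the open question here.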

  • Using SSH to transfer a web URL to a remote machine

    - by AlanTuring
    Hi, so I was doing some research in the library so I could use some pictures later on my desktop computer in my room. I have space on my lab account, which I usually SSH into, and I was wondering if URLs can be directly transferred over to a remote machine and saved on its hard disk. I was thinking something like this: scp http://click.si.edu/images/truncatedurl.jpg /home3/etc.../filename.jpg Is this possible? Thanks in advance.
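
    For what it's worth, scp copies between filesystems, so a URL is not a valid source. A common workaround (a sketch; user@lab-host is a placeholder, and it assumes wget exists on the remote host) is to run the download on the remote side over SSH:

        # fetch the image directly onto the lab machine's disk
        ssh user@lab-host 'wget -O /home3/etc.../filename.jpg http://click.si.edu/images/truncatedurl.jpg'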

  • Use a RAID Controller without drivers?

    - by cian1500ww
    I ordered an Adaptec 1420SA RAID card for my Debian Squeeze media server but didn't check whether it was compatible. It turns out it's not, because it uses something called HostRAID, which requires special drivers that aren't available for Debian. Could I still use the card as an ordinary controller and just do software RAID in the OS? I'm not looking for speed, I just need to mirror some drives that will be used for storage. The OS will reside on a disk connected to the server's onboard controller, so the system won't be booting from any drives on the Adaptec controller.
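
    Assuming the card exposes its ports as plain SATA once the HostRAID function is ignored, a software mirror on Debian would look roughly like this (a sketch; the device names are assumptions):

        # create a RAID1 mirror from the two disks on the Adaptec ports
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        mkfs.ext3 /dev/md0
        # record the array so it assembles automatically at boot
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf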

  • Allowing Apache in Ubuntu to access files on an NTFS hard drive

    - by lyrae
    I have LAMP running in Ubuntu. However, my files are located on a separate NTFS hard drive (/media/shared/mysite/). Going to http://localhost gives me a 403. How can I securely allow Apache to read/write the NTFS disk? 'shared' is currently mounted when the system boots; here's the entry in fstab:

    /dev/sda1 /media/shared ntfs-3g quiet,defaults,locale=en_US.utf8,umask=000 0 0
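
    A hedged suggestion: ntfs-3g takes ownership and permission options at mount time, so giving the partition to the Apache user with a tighter umask is safer than umask=000, which makes everything world-writable. On Debian/Ubuntu the Apache user is typically www-data (uid/gid 33; verify with id www-data), so a possible fstab line would be:

        /dev/sda1 /media/shared ntfs-3g quiet,defaults,locale=en_US.utf8,uid=33,gid=33,umask=0027 0 0

    Note that a 403 can also come from Apache's own <Directory> configuration rather than filesystem permissions, so that is worth checking first.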

  • What is the fastest method to restore MySQL replication?

    - by dwhere
    I have a MySQL (5.1) master-slave replication pair, and replication to the slave has failed. It failed because the master ran out of disk space and the relay logs became corrupt. The master is now back online and working properly. Since this error is in the log, the slave process can't simply be restarted. The server has a single 40 GB InnoDB database, and I would like to know the fastest method for getting the slave back in sync, to minimize downtime.
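
    For a 40 GB InnoDB database, the usual fastest path (a sketch; it assumes you can tolerate the dump's extra load on the master for a while) is a single-transaction dump that embeds the master's binlog coordinates, reloaded on the slave:

        # on the slave: stop the broken replication threads
        mysql -e "STOP SLAVE;"
        # on the master: consistent InnoDB dump; --master-data=1 writes the
        # CHANGE MASTER TO coordinates into the dump itself
        mysqldump --single-transaction --master-data=1 --all-databases > dump.sql
        # on the slave: reload, then resume from the recorded position
        mysql < dump.sql
        mysql -e "START SLAVE;"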

  • Raw Device Mappings

    - by Setesh
    I am new to ESXi. I am going to be using a hard disk attached to my VM with raw device mappings that connect back to our SAN. What are the recommended options to choose? Where should I store the LUN mappings, with the VM or with the datastore? Which compatibility mode should I use, physical or virtual? We are going to be using this for a database server in our dev environment.

  • How to move the files of a replicated database (SQL Server 2008 R2) to a different drive

    - by ileon
    I would appreciate it if someone could help me with the following problem: we use two SQL Server 2008 R2 databases under transactional replication (a transactional publication with updatable subscriptions). Because we ran out of disk space, we need to move the database files onto a new drive, but I don't want to break the replication. What I'm looking for are the steps required to move the files to the new drive. Thanks.
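
    A rough outline of one documented approach, sketched via sqlcmd (the database name, logical file name, and path are placeholders; check the real ones with sp_helpfile, and rehearse on a non-production copy first, since this takes the database offline briefly):

        sqlcmd -Q "ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Data, FILENAME = 'E:\NewDrive\MyDb.mdf')"
        sqlcmd -Q "ALTER DATABASE MyDb SET OFFLINE WITH ROLLBACK IMMEDIATE"
        rem physically move the .mdf/.ldf files to the new drive, then:
        sqlcmd -Q "ALTER DATABASE MyDb SET ONLINE"

    Replication tracks databases rather than file paths, so moving the files this way should not require rebuilding the publication, but that is exactly the kind of assumption to verify in dev first.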

  • Windows 7 reboot and freezing, possible power problems?

    - by mikelbring
    My Gateway LX series desktop is about 6-8 months old. When I bought it, it had Windows Vista; I then put the RC version of Windows 7 on it. About 3 months after I bought it, it would randomly reboot, actually just shut off. I monitored the temperature levels and they seemed normal. So I installed a fresh Windows 7 Ultimate OEM 64-bit. It actually got worse and would reboot more frequently. I then contacted Gateway, who said my machine was built for Windows Vista (made me chuckle) and told me to update my BIOS. So I did, and it was fixed for a good couple of months.

    Recently it started to do it again. Early on I noticed it was happening most often, if not every time, when I was either watching a Flash video or playing a Flash game. So I downloaded the drivers again, and I also downloaded my motherboard drivers. It seemed to be okay, but a week later it started doing it again, and now it's doing it even more frequently. Sometimes I would turn it on, log into Windows, and *BAM!* it would shut off.

    Now I am at the point where I can hardly get it to turn on. It freezes where it says "Starting Windows", with the Windows logo. Sometimes it says "Checking disk for consistency" or whatever and freezes there (not shutting off, just freezing). I even got the prompt to launch Startup Repair, but that also hangs at "Starting Windows"; it doesn't really freeze, it just never loads.

    I am kind of lost as to what's going on. I have a few ideas but nothing I want to pursue yet (graphics card? hard drive?). One thing I did try was booting into a live disk of Ubuntu, launching every program I could, and getting on the internet, and I never got it to reboot. That makes it sound like a Windows thing, but I have no idea. I am just stuck and would like to see if anyone has any ideas or could lead me in the right direction.

  • Slow startup of Event Viewer on Windows 7 but fast on Server 2008

    - by Tim
    Hi, since switching to Windows 7 for my desktop, I've started to get really p***ed off at the length of time it takes to start Event Viewer to display the application event log (typically 20-30 seconds of disk grinding, presumably to load and cache all the events). I've just noticed that on Server 2008 R2 it seems instantaneous. Is my experience typical? Is there any setting I can tweak to make it fast on Windows 7 as well? Tim

  • Data recovery after Windows format with Ubuntu 10.10

    - by mathew
    Hello, I had a system running Windows 7 Home Premium and Ubuntu 10.04 side by side in dual boot. I got an Ubuntu 10.10 image disk, so I decided to update. But during the installation I think I made a mistake by specifying the whole partition, and after installing Ubuntu 10.10 I saw that my Windows install and all the other data were gone; there was around 250 GB of it. Is there any way I can recover this data? I had a lot of irreplaceable photos and collections on the drive. I do have a recovery CD for my Windows, but it does not detect any Windows OS. Thank you very much. [email protected]
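
    In case it helps while waiting for answers: the first thing to try is TestDisk's deep scan to restore the old partition table; failing that, file-carving tools can often pull photos off the raw disk even after a reinstall, provided nothing more is written to it. A minimal sketch from an Ubuntu live session (the device name is an assumption; check yours with sudo fdisk -l, and carve onto a different drive):

        sudo apt-get install testdisk    # photorec ships in the testdisk package
        sudo photorec /dev/sda           # interactive; writes recovered files where you tell it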

  • Server 2012 - transparent SMB failover without shared disks, possible?

    - by TomTom
    Here is the scenario: there is a small set of data (around 200 GB) that I HAVE to keep available. These are basically shared VHD images that serve as master images for a lot of our VMs, which then run on differencing disks based off them. The whole set is "mostly read only". In more detail: a file that IS there and IS used will NEVER change. I may delete files (when absolutely not in use) and add new files, but a file that is there gets read protection set and keeps it until it is retired. Obviously, I need as much uptime as possible.

    SO FAR we handle that by keeping this directory local on every Hyper-V server. Now I am thinking of moving it into our storage fabric. Due to the "it HAS to be there" requirement, I pretty much want a shared-nothing architecture. DFS would be perfect for this: a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and all would reference them from there.

    Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers; we are trying to move to a scenario where storage is more centralized. Server 2012 supports always-on shares, but it seems that this only works with a clustered disk behind it. Is there any way around this for read-only file stores? All the documentation points to something like a shared JBOD, but that would leave me open to file system corruption.

    I really plan to go quite separate here, vertically: 2 servers, both with SSDs only for this, both with their own separate 2000 W UPS, both with enough bandwidth to handle everything thrown at them (note to everyone thinking this is 10G: that would be SLOW and EXPENSIVE compared to a nice InfiniBand backbone). The real crux is that this is obviously an edge case, as the files are read-only once they are in use.

  • How to create a ghost environment?

    - by manwood
    I want to create a ghost image of a fresh Windows XP development environment, with all the various bits of software installed and ready to go, so that when the OS gets clogged up or the main disk fails I can simply restore the image rather than having to run through the entire install and setup process all over again. What is the best way to go about doing this? Cheers.
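
    One crude but dependable way (a sketch; the device and destination paths are assumptions) is a raw sector image taken from a Linux live CD, which captures the OS, installed software, and boot sector in a single file; dedicated tools such as Clonezilla or Ghost do the same thing more efficiently by copying only used blocks:

        # boot a live CD so the XP disk is not mounted, then:
        dd if=/dev/sda of=/mnt/backup/xp-golden.img bs=4M
        # to restore later, swap if= and of=
        dd if=/mnt/backup/xp-golden.img of=/dev/sda bs=4M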

  • Linux data storage and partitioning

    - by Rajeev
    In the following output of df -h you can see that I have added a new hard drive (/dev/sdb1) and mounted it as /hdd1. My question: if I start dumping data to /opt, will that data end up on /hdd1 or on /? My goal is to utilise the new /hdd1 instead of the old disk (/dev/sda3). How can this be done?

    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda3             442G  312G   12G  86% /
    tmpfs                 1.9G     0  1.9G   0% /dev/shm
    /dev/sda1             194M   57M  128M  31% /boot
    /dev/sdb1             1.7T  201M  2.6T   1% /hdd1
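
    Short answer to the first part: data written to /opt lands on whatever filesystem is mounted there, which in the layout above is / on /dev/sda3, since /hdd1 is a separate mount point. One way to put /opt on the new drive (a sketch; it assumes /opt can be taken offline briefly):

        # copy the existing contents onto the new disk
        mkdir /hdd1/opt
        cp -a /opt/. /hdd1/opt/
        # mount it over /opt; add the same line to /etc/fstab to make it permanent:
        #   /hdd1/opt  /opt  none  bind  0 0
        mount --bind /hdd1/opt /opt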

  • Correcting a messed-up file tree in an NTFS partition

    - by Fullmooninu
    It's a real mess of a situation, but I'm quite at the end of my options. It's my personal hard drive, so it's very important to me, and yes, I have no backup =( The short story:

    1) I have two disks. One with Windows, and another where I had a bit of empty space at the front of the disk so I could install Linux. The rest was occupied by a 1.8 TB NTFS partition filled with data.

    2) I installed Linux, and after a while realized there was not enough space for everything, so I tried using GParted, telling it to resize the NTFS partition to a smaller size.

    3) The system jammed. I had to reboot, which broke the resizing operation.

    Here's what I did to fix it:

    a) Rebooted into a Linux live session and used TestDisk to deep-analyze the disk and recover the possible partitions. It found several versions of the NTFS partition, probably made during the resizing. I told TestDisk to open every one of them, and only one could list its files; opening the other candidates showed an error message. I assumed the one without errors to be the correct one, and told TestDisk to recover that partition and write a new MBR.

    b) The partition had errors. Linux has an NTFS fixing tool, and I used it, but the filesystem still had errors.

    c) So I booted into Windows and used chkdsk to correct all errors in the partition.

    d) Everything seems fine, but now, back in Windows, when I open one file, it opens another file, or part of another file. As in, some files took up the position of other files.

    What I think happened is that I recovered an old tree, not the most current one; the old one just happened to be intact, while the most recent one was damaged. As such, the files that were moved during the failed resizing were, during the automatic correction, wrongly assumed to be in their correct places. So when I open a file, it tries to open another one: Radiohead - Creep.mp3 will open and actually be a bit of another song, or even data from a JPG. Some files seem to be all right, but others seem to have had their positions taken by other files. Does anyone know of something really powerful that can help me solve this?

  • My HP desktop with Windows Vista won't boot

    - by John
    It continues to loop to a BSOD, and I can't get a DOS prompt or a repair screen. I don't have recovery disks, but I have brought up a black window with a gray title bar labeled Edit Boot Options. Below that it shows:

    Edit Windows boot options for: Windows Vista (TM) Home Premium
    Path: \WINDOWS\system32\winload.exe
    Partition: 1
    Hard Disk: 1549f232

    Then it has a place to enter something, starting with [ and, one or two lines down, ending with ]. What do I enter in this space, and is there something I can enter that would help solve my boot-up issues?

  • Minimize writes to SSD disks with Windows 7

    - by mfn
    Most people use their SSD as their primary system installation disk with Windows 7. W7 already has a lot of optimizations for SSDs, both in terms of performance and lifetime. Minimizing writes increases the lifetime of SSDs, so post each suggestion as an answer and let others vote on them.
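
    To seed the list, one commonly cited tweak (hedged: verify it on your own system): stop NTFS from updating the last-access timestamp on every file read, which otherwise turns reads into metadata writes. Windows 7 usually ships with this already disabled, so check before changing anything:

        rem query the current setting (1 = last-access updates disabled)
        fsutil behavior query DisableLastAccess
        rem disable if needed, from an elevated prompt
        fsutil behavior set DisableLastAccess 1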

  • Windows 7: Gaining administrator rights in the CLI without being prompted for a password

    - by liori
    Hello, I am trying to write a script which includes disk defragmentation as one of its steps. defrag needs administrative rights to work. I tried to use runas /user:Administrator, but it always asks me for a password (even though there isn't one set). The script needs to run unattended for a long time, and it needs to be started from a standard user account (it is actually being run by Cygwin), so I'd like to get rid of that prompt. Is this possible? Thanks,
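
    One workaround (a sketch; the task name is a placeholder): register a scheduled task once from an elevated prompt, running as SYSTEM, and have the unprivileged script merely trigger it. Depending on the task's permissions, a standard user may still need to be granted the right to run it, so test that part:

        rem one-time setup, from an elevated prompt:
        schtasks /Create /TN DefragC /TR "defrag.exe C:" /SC ONCE /ST 00:00 /RU SYSTEM
        rem later, from the unattended script:
        schtasks /Run /TN DefragC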

  • A few questions on Clonezilla

    - by user23950
    I'm trying to clone my Windows XP installation. If I back it up using Clonezilla and my XP machine is infected by a virus/spyware, would I also be bringing the whole mess along in the backup? Do I need to back up the whole partition/whole disk if I use an external hard drive for the backup? Would the data be formatted on the partition that I choose?

  • Mac/Linux Dual Boot

    - by user38008
    I'm trying to create a dual boot of Linux and Mac OS without Boot Camp, but I'm nervous that I'll screw up or lose my data. In Disk Utility I made a 45 GB partition called "linux", but I don't know how to format it, or whether that matters at all... Also, when the partition is done, I press Ctrl when booting up, select that Linux partition, and put in the live-boot USB or CD, right?
