Search Results

Search found 12648 results on 506 pages for 'disk activity'.


  • Generalized strategy for file server virtualization in XenServer

    - by Jamie
    I'm not shopping as much as I'm looking for some guidance on good idea / bad idea strategies. I'm sure I'm not in the "best practices" budget range. Currently, I have 3 Dell PowerEdges running XenServer in a pool. Each node has an Ubuntu file server serving about 6TB. One is the primary, the other two are rsync targets for backup. The 6TB is stored on each node's local storage disks as an LVM of 3x2TB virtual disks. The file server VM disks are also stored on the node-local disks. Each node also runs a smattering of lightweight VMs for web, development, Windows VMs, and stuff like that. Several of those VMs' disks reside on a QNAP NAS to play with live migration. These VMs are often clients of the primary file server (all the mail, web content and user files are stored on the file server, not on the mail, web, and Samba VMs). This all works fine, and is a major step up for us. The downside is that the QNAP is a single point of failure, and the only thing the QNAP is doing is serving migratable VM images, not client data. Someday the PowerEdge local arrays will be full, and we will have to reinvent ourselves again. Is it wise to have heavyweight VMs (like the file server, with its 6+ TB of disks) on a SAN or NAS? Would it be better to keep the VMs lightweight, have the VM images on a SAN or NAS, and have two or more NAS units act as NFS-serving file appliances? A hybrid SAN/NAS that can serve iSCSI for images and NFS for the client VMs? It seems like live migration would be a misnomer if you have to migrate a file server with its entire 6+ TB disk. I recognize there are plenty of ways to skin the cat. We've already skinned it a few ways. What makes sense?

    Read the article

  • Booting Ubuntu as VM with KVM on Ubuntu 12.04

    - by CrazycodeMonkey
    I am trying to boot my very first VM using KVM. I have Ubuntu 12.04 installed, and I made sure the BIOS had the right virtualization flag enabled for the Intel processor by running kvm-ok. I have researched this on Google and all the instructions I have found so far are outdated. E.g., most instructions talk about booting a virtual machine with the following commands: qemu-img create -f qcow2 foo.img 100G --- create a virtual disk for your VM; kvm --name foo -m 1024 -hda foo.img -cdrom whatever.iso -boot d --- this runs KVM. This command line is incomplete. First, you need to be root to run it. Second, it is missing an option for the video device. When you run this command you get the following error: "Could not initialize SDL(No available video device) - exiting". I googled this error and looked it up on Stack Overflow: http://stackoverflow.com/questions/4841908/sdl-init-failure-reason-is-no-available-video-device. The answer provided there does not work on Ubuntu 12.04. I googled the problem further and found out that I need to specify a video device, so I finally ran the following command: sudo kvm --name mymachine -m 8096 -hda myimage.img --cdrom ubuntu.iso -boot d -vga cirrus -k en-us -vnc :0. This was after I had created the myimage.img image on the drive. Now this command does not give me an error, but it just hangs. Does anyone have clear instructions on how to run a VM using KVM on Ubuntu?
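
    Not offered as a confirmed fix for the hang, but for reference, a minimal invocation of the same sort with the VNC console spelled out (memory size, image names and display number here are arbitrary placeholders) looks roughly like this:

        # create a 100 GB qcow2 disk image for the guest (grows on demand)
        qemu-img create -f qcow2 foo.img 100G

        # boot the installer ISO; -vga cirrus provides a basic emulated video card and
        # -vnc :0 exposes the console on VNC display :0 (TCP 5900) instead of using SDL
        sudo kvm -name mymachine -m 1024 -hda foo.img -cdrom ubuntu.iso -boot d \
            -vga cirrus -k en-us -vnc :0

        # attach to the console from another terminal or machine
        vncviewer localhost:0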

    Read the article

  • Virtualization in Ubuntu 9.10

    - by Jeff Dege
    I have an existing CentOS 5 installation. I would like to upgrade to Ubuntu. Thing is, I don't want to be down for as long as it will take to get my entire environment moved over - software installed, connectivity configured, etc. I'd like to take it one step at a time. But I don't really want to keep rebooting back and forth from the new OS to the old OS. That's what I did the last time I upgraded to a new OS, and it got old real fast. So, since my new motherboard is virtualization-ready (AMD Phenom II 945 quad-core), I figured I could create a virtual machine, under the new OS installation, that ran the old OS installation. The problem is that the documentation I've been able to find has been pretty sparse. I've found a lot of possibilities, and little info on which would be capable of doing what I want. I have a new Ubuntu 9.10 installation, and a second disk containing the CentOS 5 installation. And I don't know where to go next. Any help would be appreciated.
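
    As an illustration only (the device name is a placeholder, and a CentOS 5 system installed on bare metal may still want its initrd rebuilt for the emulated hardware), KVM can boot an existing installation straight from the physical disk rather than from an image file:

        # make sure the host has nothing on the CentOS disk mounted first
        # /dev/sdb is an assumption - substitute the disk that holds the CentOS 5 install
        sudo kvm -name centos5 -m 1024 -hda /dev/sdb -boot c -vnc :1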

    Read the article

  • How to reinstall bootloader after migration to SSD

    - by hijarian
    I must say, it was difficult to name this question. Basically, I need to properly reinstall the bootloader on my system, because I already have working system disks for my OSes. The long story is this: I had a large, slow HDD with a Windows 7 & Debian Wheezy dual-boot on it, perfectly bootable. Then I ordered an SSD and prepared my system partitions to fit onto the much smaller SSD. I wanted the following scheme: 128 GB Windows, 24 GB / on Debian, 86 GB /home on Debian. Strange size for /home because there's no such thing as a true 256 GB disk drive. So, I prepared such partitions on my initial HDD, installed the new SSD, loaded a GParted live USB (can't remember now what it was really called), and then just copy-pasted the partitions from the HDD to the SSD. So, now I have the following partitions across the physical disks: SSD: 128 GB copy of the original Windows partition, 24 GB copy of (presumably) the Debian /, 86 GB copy of (presumably) the Debian /home. HDD: 128 GB Windows, 24 GB / on Debian, 86 GB /home on Debian, and several other partitions with non-system data. The behavior of the system right after the Ctrl+C, Ctrl+V in GParted was as follows: no GRUB, and the system boots right into the Windows on the HDD. The BIOS is set to boot from the SSD first. I made a Debian Testing installation USB, loaded it into rescue mode, found that it identified my SSD as /dev/sda, and installed GRUB to /dev/sda. Now my system loads a GRUB which lists both Windows and Debian - from the HDD. So I am now back in the initial position. Please, how should I set up GRUB so it will load the OSes correctly from the SSD? Should I fire up my Debian, fiddle with GRUB's config and reinstall it again to the same place (on the SSD)?
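
    One thing worth checking, offered as an assumption about the cause rather than a certainty: a GParted copy/paste leaves the SSD partitions with the same filesystem UUIDs as the HDD originals, so GRUB and /etc/fstab can silently keep resolving to the HDD copies. A sketch of the usual cleanup, with device names as placeholders:

        # give the copied ext partitions fresh UUIDs (here sda is assumed to be the SSD)
        sudo tune2fs -U random /dev/sda2      # copied Debian /
        sudo tune2fs -U random /dev/sda3      # copied Debian /home
        sudo blkid                            # note the new UUIDs

        # update /etc/fstab on the SSD copy to the new UUIDs, then, booted from that copy:
        sudo grub-install /dev/sda            # GRUB into the MBR of the SSD
        sudo update-grub                      # regenerate grub.cfg (os-prober picks up Windows)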

    Read the article

  • Windows 7 Install: No drives were found

    - by Albert Bori
    I was building a computer for my wife with an older SATA hard drive that I had lying around, and when attempting to do a new install of Windows 7 on it, the installer says: "No drives were found. Click Load Driver to provide a mass storage driver for installation." I ran the diskpart command list volume, and the drive showed up as "Raw". So I formatted it to NTFS, and then it showed up as a healthy drive in diskpart. I also ran check disk on it with no errors. The Windows 7 installer STILL can't find the drive. As far as BIOS settings go, I have tried "Native IDE", AHCI, and the mixed AHCI/IDE mode (SATA slots 0-2 AHCI, 3-4 IDE). I tried all combinations... still "no drives were found". At this point, I'm just scratching my head. Using the installation DOS window, I can see and talk to the drive just fine, but the installer just doesn't see it at all. I've even written folders and files to the drive, and it still "can't be seen". Any help would be great. Items of interest: Motherboard model: Gigabyte GA-A75M-UD2H, BIOS version F5 (latest). Hard drive model: 80GB Seagate Barracuda 7200.7 ST380817AS (no other drives). Installing Windows 7 from a FAT32-formatted USB drive, which I've used for other installs.

    Read the article

  • RTorrent stops my torrents, crashes, and I have to manually re-add torrents and start them. How can I stop this cycle of doom?

    - by meder
    I cannot use Transmission, which is the best torrent client, because it's banned from one of the trackers I use, so I am forced to use rtorrent. Normally I am all for command-line programs; however, rtorrent (0.8.6/0.12.6) is simply frustrating. It is not intuitive, IMO. I have 400 MB left on the HD and that's more than enough to download this 200 MB AVI. rtorrent stops the download, though. It says [CLOSED] near the torrent. I press Ctrl+R, which invokes the local hash check, and after that's done rtorrent simply dies (wtf?). Afterwards, it gives me: rtorrent: TrackerManager::send_later() m_control->set() == DownloadInfo::STOPPED. So that leads me to open rtorrent again, hit Enter, type /home/meder/file.avi.torrent, press the down arrow, and hit Ctrl+S. I am looking for multiple things: 1. How can I tell rtorrent not to worry about disk space? Again, it stops the torrent if my HD only has 400 MB free when the torrent I'm downloading is 200 MB (there are no other torrents). 2. Why does Ctrl+R fail so hard? Why does it cause rtorrent to crash? 3. If #2 is not solvable, can someone provide an easy way to add a torrent and start it, a more efficient method than typing the torrent name, hitting the down arrow, and pressing Ctrl+S?
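
    For what it's worth, both the disk-space behaviour and the manual add/start dance are normally handled in ~/.rtorrent.rc; a sketch for the 0.8.x series, with the threshold and paths as placeholders rather than a confirmed fix for this crash:

        # ~/.rtorrent.rc
        # rtorrent closes torrents when free space in the download directory drops below
        # the close_low_diskspace threshold; set it well below the free space you actually have
        schedule = low_diskspace,5,60,close_low_diskspace=50M

        # auto-load and start any .torrent dropped into a watch directory, instead of
        # typing the path and pressing Ctrl+S by hand
        directory = ~/downloads
        schedule = watch_directory,5,5,load_start=~/watch/*.torrent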

    Read the article

  • Empty rewrite.log on Windows, RewriteLogLevel is in httpd.conf

    - by ripper234
    I am using mod_rewrite on Apache 2.2 on Windows 7, and it is working... except I don't see any logging information. I added these lines to the end of my httpd.conf: RewriteLog "c:\wamp\logs\rewrite.log" and RewriteLogLevel 9. The log file is created when Apache starts (so it's not a permission problem), but it remains empty. I thought there might be a conflicting RewriteLogLevel statement somewhere, but I checked and there isn't. What else could cause this? Could this be caused by Apache not flushing the log file? (I closed it by hitting Ctrl+C in the httpd.exe command window... this caused the access logs to be flushed to disk, but still nothing in rewrite.log.) My (partial) httpd-vhosts.conf:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName my.domain.com
            DocumentRoot c:\wamp\www\folder
            <Directory c:\wamp\www\folder>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
                <IfModule mod_rewrite.c>
                    RewriteEngine On
                    RewriteBase /
                    RewriteRule . everything-redirects-to-this.php [L]
                </IfModule>
            </Directory>
        </VirtualHost>
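
    One thing sometimes worth trying on a setup like this (a sketch, not a confirmed diagnosis): declare the rewrite log inside the virtual host that owns the rewrite rules, and use forward slashes in the Windows path, e.g.:

        <VirtualHost *:80>
            ServerName my.domain.com
            # per-vhost rewrite logging; forward slashes avoid backslash-escaping surprises
            RewriteLog "c:/wamp/logs/rewrite.log"
            RewriteLogLevel 9
            # DocumentRoot, Directory block and rewrite rules as before
        </VirtualHost>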

    Read the article

  • Advice: USB Monitoring Programming

    - by Kashif
    I need some advice about USB programming in Linux. I have to design a USB monitoring program that will keep checking the USB ports of a Linux CentOS box. As soon as a USB stick or external hard disk is connected, the program will shoot an email to a specific person with the details of the device (size, mount point, time). When the device is disconnected, it will again shoot an email to that person with the same kind of information. Meanwhile the program will also write logs to syslog/messages under the program's name, for easy tracking. Now I want to ask what the best way to develop this program is; I'm new to this field, so I know nothing about it. Should I use Perl, Bash scripting or some other language? I have no idea what the right approach is, because this program will keep running all the time to keep a check on the USB ports. I know a few commands like lsusb, fdisk (to check attached USB devices) and df -h (to get the details of a device), but I don't know how to achieve what I have in mind using these commands. Also, one more thing: in the future I will also need to modify this program for Ubuntu and Citrix XenServer, and it should be the same everywhere.
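
    As a rough sketch of one common approach (every file name, rule and address below is an assumption, not a known-good recipe for CentOS 5): a udev rule can run a small shell script whenever a USB block device is added or removed, and the script can log via logger and send the mail:

        # /etc/udev/rules.d/99-usb-notify.rules (match syntax may need adjusting for older udev)
        ACTION=="add",    SUBSYSTEM=="block", ENV{ID_BUS}=="usb", RUN+="/usr/local/bin/usb-notify.sh add %k"
        ACTION=="remove", SUBSYSTEM=="block", ENV{ID_BUS}=="usb", RUN+="/usr/local/bin/usb-notify.sh remove %k"

        #!/bin/bash
        # /usr/local/bin/usb-notify.sh - hypothetical helper; adjust the recipient address
        action="$1"
        dev="$2"
        details="$(fdisk -l "/dev/$dev" 2>/dev/null; df -h | grep "/dev/$dev")"
        logger -t usb-monitor "USB disk /dev/$dev: $action"
        printf 'Device: /dev/%s\nAction: %s\nTime: %s\n%s\n' "$dev" "$action" "$(date)" "$details" \
            | mail -s "USB $action on $(hostname)" someone@example.com &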

    Read the article

  • Poor write performance on Debian server running NFS with 22TB exported JFS filesystem

    - by user143546
    I am currently running a Debian server that is exporting a large JFS filesystem (22TB) over NFS (nfs-kernel-server). When attempting to write to the NFS share, the performance is very poor. The 22TB disk is sitting on a NAS mounted using iSCSI. Writes will burst for a moment near the expected line speed, and then sit idle for several seconds, with very little traffic measured (in the low KB/sec). The I/O wait peaks on writes. When reading from the NFS mount, the system operates at the expected speed (11MB/sec). The issue does not occur when using SFTP, rsync, or local copying (non-NFS). The issue persists between the stable and testing releases. On the same machine I have a 14TB ext4 filesystem using the exact same export configuration that does not share the issue. That share is not in regular use and thus not consuming resources.

        NFS server:
            cat /etc/exports
            /data2 10.1.20.86(rw,no_subtree_check,async,all_squash)

            cat /sys/block/sdb/queue/scheduler
            noop [deadline] cfq

            cat /etc/default/nfs-kernel-server
            RPCNFSDCOUNT=8
            RPCNFSDPRIORITY=0
            RPCMOUNTDOPTS=--manage-gids
            NEED_SVCGSSD=
            RPCSVCGSSDOPTS=

        NFS client:
            cat /etc/fstab
            10.1.20.100:/data2 /root/incoming nfs rw,noatime,soft,intr,noacl 0 2

            cat /sys/block/sdb/queue/scheduler
            noop [deadline] cfq

            cat /proc/mounts
            10.1.20.100:/data2/ /root/incoming nfs4 rw,noatime,vers=4,rsize=262144,wsize=262144,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.1.20.86,minorversion=0,addr=10.1.20.100 0 0

    This problem has me pretty stumped. Any help would be greatly welcomed. Thanks.
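
    As a hedged way to localize that kind of stall (paths below are placeholders), compare a synchronous write made locally on the server with the same write made through the NFS mount, while watching the disks; that separates the iSCSI/JFS back end from the NFS layer:

        # on the server, straight onto the JFS filesystem backing /data2
        dd if=/dev/zero of=/data2/ddtest bs=1M count=500 conv=fdatasync

        # on the client, through the NFS mount
        dd if=/dev/zero of=/root/incoming/ddtest bs=1M count=500 conv=fdatasync

        # on the server while the client test runs
        iostat -x 2        # per-device latency/utilization
        nfsstat -s         # server-side NFS operation counters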

    Read the article

  • VMware ESXi 5 - Expanded RAID 5 array - cannot access datastore

    - by Dayton Brown
    I'm using VMware ESXi 5 and had a 2 TB RAID 5 setup on an HP DL360 with a P400i RAID card. I added two more 1 TB drives and, using the SmartStart ACU, added the drives and expanded the logical disk. Now, after booting back into ESXi, the server boots but lists no available persistent storage. I've rescanned multiple times to no avail: the datastore doesn't show up. I booted to GParted and the 1.8TB partition shows up, but it shows as unknown. Anyone have any good ideas? EDIT: Final solution. So after much gnashing of teeth, it was fairly simple to solve. I purchased a 2 TB eSATA external drive and a PCI eSATA card for my server. I then used Clonezilla to image the current partitions to the new external drive. You have to check "don't check drive sizes" in advanced mode, otherwise it will yell at you for having a smaller drive. For some reason the PCI card wouldn't boot on my HP server, so I hooked the drive up to another desktop I had, booted to VMware, and copied the VMDKs to another drive. I'm going to blow out the RAID config and then create 1.5TB logical drives.

    Read the article

  • Borked Ubuntu uninstall - need to delete boot partition (I think)

    - by Max Williams
    I just got a new PC laptop with Windows 7 and wanted to install Ubuntu on it. Which I did, no problem there, by downloading the installer, burning it to DVD, then booting off the DVD and installing. Then I realised that the new Ubuntu 12.04 uses the Unity desktop, which I immediately disliked and, after some research, began to hate. So I decided (after a little googling) to install Linux Mint instead. Thinking I'd better start from scratch, I went to the Windows 7 disk manager and wiped the Ubuntu partition that had been created. Now, when I start up, I get an error from GRUB, the Ubuntu boot manager: "error: unknown filesystem, grub rescue> _", and a blinking cursor where I can enter commands. I suspect that what I've done is deleted the main Ubuntu partition but NOT deleted another partition which is a boot partition, or something like that? Can anyone tell me how I can rescue or unbork this? I'd like to either a) get back to my original Windows-only setup OR b) install Linux Mint off the DVD (which I have) into the empty partition, fixing any GRUB confusion in the process. Any suggestions? Thanks, Max. BTW please don't answer if you're just going to tell me to stick with 12.04, or install a different distro or something. I definitely want Mint and just want to fix this mess - thanks :)

    Read the article

  • NTBackup (on WS2k3) fails to back up remote server (WS2k8R2) with "Error: is not a valid drive, or you do not have access."

    - by Mark A
    We run an NTBackup job on a Windows Server 2003 R2 SP2 machine with all updates (as of Q4 2011). It works well backing up two WS2k3 servers as well as the backup server itself. However, we have been unable to successfully back up our Windows Server 2008 R2 machine ("G5-01"). It often runs for about 2GB worth of backup and then dies out with one of the error messages below. It should be more like 20GB for the full server. We have tried using the admin share (C$), an explicitly shared drive, UNC paths and mapped drives. The result is the same each time; the only thing that varies is the amount of stuff backed up before it chokes. We've also run NTBackup from the UI, from the command line and as a scheduled task. We are backing up to 400/800GB tapes and they have plenty of space available on them (blank media). Error: \\G5-01\c is not a valid drive, or you do not have access. Error: \\G5-01\c$ is not a valid drive, or you do not have access. Error: Y: is not a valid drive, or you do not have access. Error: Could not access or create backup catalog files. Verify that you have full access to the working folder and there is disk space available. The job is run as Administrator and we have no problems logging onto the server and transferring files. The Event Log on the WS2k8 machine is not much help, as it has success audits for each login. All of the hardware involved (HP DL360 G3, HP LTO Ultrium 3, Adaptec 39320A) has the latest supported drivers. We've seemingly tried a bunch of different options and are wondering where to look next to resolve the backup issue. We've been super happy with our reliable scheduled task for years, but this one is stumping us!

    Read the article

  • External HDD incorrectly detected as internal - how to change this to enable hot swap/eject?

    - by Sam
    Hi all, I have Win 7 x64 Home Premium. The HDD is a Seagate Barracuda 7200.7 ST3120827AS, 3.5", serial 3ms006n6, firmware 3.42 (no further updates), in a NexStar CX external case (drivers installed). I have three drives: a WD320 with the OS installed, a WD750 for data storage (internal), and the Seagate 120 (external), connected via an eSATA board wired to a SATA port on the motherboard (MSI P43 Neo). I tried uninstalling the HDD in Device Manager to no effect. Also, the internal WD750 is detected as an external drive and the Windows taskbar icon allows it to be ejected (unlike the Seagate). All drives are configured Online, Simple, Basic, NTFS, Active, Primary Partition (except the C drive). The Seagate was previously used as a primary disk with an XP operating system, so I deleted the volume and created/reformatted it (not quick). The HDD is no longer "Active". But that did not fix the problem. Background: originally, I installed Win 7 with the BIOS set to IDE and forgot to install the chipset drivers. Then I changed Win 7 to install the AHCI drivers, changed the BIOS to AHCI and rebooted. Win 7 loaded the drivers, but the WD HDD gave problems/crashed. I installed the chipset drivers and the latest Intel storage matrix software thingie (in safe mode). Everything worked fine after that, except for the problem of not correctly detecting the external drive. I have noticed that under the driver properties (and similarly in the registry) the two drives are configured differently (e.g. in the driver details the capabilities value for the WD is set to 0000006, CM_DEVCAP_REMOVABLE & EJECTSUPPORTED, whereas the Seagate shows 0000080 & CM_DEVCAP_SURPRISEREMOVALOK). Any easy way to configure things? I tried physically swapping the SATA connections on the mainboard without success. So far I have found that a solution to my problem might be to perform some registry changes: http://superuser.com/questions/12955/how-do-i-remove-the-option-to-eject-sata-drives-from-the-windows-7-tray-icon

    Read the article

  • Why is Windows not able to create a system partition?

    - by hughes
    I'm reinstalling Windows 7 64-bit, and I encountered an issue I've never seen before. I have a legit copy of Win 7 Professional 64-bit, and I've probably installed it a half dozen times on this machine in the past without a problem. Googling the error only brings up issues with people who are upgrading to Win 7. The drive itself seems fine: I can mount it on other systems and I can create an NTFS partition on it on other machines. I can install Ubuntu on it without any issues. Additionally, if I try using my alternate backup hard drive, the installer gives the same error. I have run diskpart from the setup prompt, and clean seems to report that all is well. However, I cannot get past the screen that says "Setup was unable to create a new system partition or locate an existing system partition." This happens regardless of whether or not the disk space is already allocated. What is causing this? How do I solve or get past this?

    Read the article

  • Booting an Asus EeePC from a LiveCD USB stick

    - by Bryan
    I have two identical Asus EeePC netbooks that are both installed with Ubuntu. One of them was sitting on the closet shelf and the battery went completely dead. When I charged the battery and tried to boot it, I got the "No init found" error. In trying to follow the suggested way to fix it posted here, I used the Startup Disk Creator on my Ubuntu 11.10 desktop machine to create a USB stick with a bootable Ubuntu 11.10 live CD on it (the netbook doesn't have a CD drive). I plugged the USB stick into the netbook with the init issues, went into the BIOS, selected the USB stick as the first boot choice, and did a hard restart. It then just stuck at the flashing underscore. Not knowing why it wasn't working, I tried booting my working netbook from the USB stick. When I got into the BIOS on the working netbook, I noticed the description in the boot-order section for the USB device was different. On the non-working netbook the description was SWISSBIT (the name of the USB stick), but on the working netbook it was just "Rem. Drive". I also noticed that on the working netbook there was an additional option under the boot-order section that allowed me to choose which hard drive to boot from. This section showed two hard drives, one of them being my USB stick. So, rather than changing the device boot order, I selected the USB stick as the hard drive to boot from first, and it worked like a champ - I was able to boot into the live CD on the USB stick. It seems the working netbook sees the live CD USB stick as a hard drive, whereas the non-working netbook sees it as a plain old USB stick. The BIOS is the exact same version on both netbooks... any idea why it works on one and not on the other?

    Read the article

  • Windows-based development environment: Hyper-V, VMware, or VirtualBox on the development machine?

    - by bleepzter
    I am a software engineer with a little bit of an informal "support" function... I am trying to figure out the best possible approach to bringing virtualization technologies into our development process. Since the code we develop is server-centric, testing it often requires a VM with specific software requirements. I used to use VMware Player (the free version) to run my VMs until both of my laptops started exhibiting issues with corrupted Windows 7 services and dying hard drives. All leads pointed to VMware, which by the way seems to be a solid product if you pay for the Workstation edition ($300). On a side note, I have always been a fan of the Windows Server product line. I think it makes for one of the best development environments out there - it is highly scalable, highly reliable, and very efficient. So, to be fair, I replaced the drives of the laptops and installed Windows Server 2008 R2, VS2010 Ultimate SP1, SQL Server 2008 R2, TFS Server 2010 and all the other tools and APIs needed to do my work properly. So now I am stuck with a bunch of VMware VMs. I don't want a repeat of what happened before, and I certainly don't want to bog down my machine with an inefficient hypervisor or services that are not needed. Furthermore, the VMDK hard-disk format used by VMware is not compatible with the VHD format of Hyper-V. It is my understanding that converting from one format to the other can only happen via Microsoft System Center Virtual Machine Manager, which I have downloaded from MSDN and have ready to install. I guess the questions at this point are: Does SCVMM run as another service in Windows? Is it a memory hog? Which is the better virtualization technology - Hyper-V or VirtualBox - in terms of efficiency, ease of use and, most importantly, memory footprint? (Keep in mind the development environment already has a ton of services running, such as TFS Server, SQL Server, IIS, etc.) How would you advise I proceed at this point so that the VMs can still be used in the test process? Thanks, Martin
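
    On the VMDK-to-VHD point specifically, SCVMM is not the only route. As a sketch (the disk file names are placeholders, and snapshot chains usually need consolidating in VMware first), qemu-img can convert the format offline, and VirtualBox ships an equivalent clone command:

        # qemu-img: "vpc" is its name for the VHD format
        qemu-img convert -f vmdk -O vpc source-disk.vmdk converted-disk.vhd

        # VirtualBox equivalent
        VBoxManage clonehd source-disk.vmdk converted-disk.vhd --format VHD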

    Read the article

  • Collect temperature and fan speed with munin from Windows 7 PC?

    - by nfm
    Hi, I'm quite fond of Munin and am also using it at home to monitor my PCs. What was super-duper easy under Linux is pretty much unsolvable for me under Windows: I'd like to monitor CPU and motherboard temperatures as well as fan speed. On Linux I'm using lm-sensors, and the plugin for Munin was basically already there. I already access some information from my Windows machine via SNMP (disk space, CPU usage, memory usage); the graphs are simple, as is the information exposed via SNMP, but they do their job. But when it comes to temperature and fan speed I'm running into a wall. My research so far suggests that Windows does not, by default, provide any out-of-the-box way to retrieve temperature/fan-speed data. Third-party applications are necessary which know how to communicate with the motherboard chips. The best I came up with is that SpeedFan exposes a shared-memory interface, and there exists a library which hooks into the Windows SNMP facility and bridges over to SpeedFan's shared-memory interface; it's called SFSNMP (site currently down). Unfortunately the library doesn't work; there's an open bug report about it at SpeedFan, but it's currently not moving (although the SFSNMP author is active there). So, unless that's going to start working any time soon, are there any alternatives? I'm not fond of buying software to get this feature, given that I take it for granted that my system exposes the information needed to monitor it properly - but anyway, don't just not answer because of this.

    Read the article

  • What is the quickest and safest way to test new software and revert all changes, if needed?

    - by calbar
    I'm looking for Windows software that will allow me to quickly create a "checkpoint", do whatever I might need to do to my computer - install programs/drivers/updates, create/delete personal files, reboot the system multiple times, open questionable attachments - and then revert the entire system back to when the checkpoint was created. Essentially I want Windows Restore Points that save my personal files and partitions, too. It sounds like disk imaging might be the ticket, but creating them is much too slow and the restore process too involved... I'm hoping to sacrifice full disaster recovery for speed. Creating a checkpoint should be as close to one-click as possible, and rolling back should be a matter of selecting a restore point and rebooting. Ding! I'm familiar with Sandboxie, True Image Home "Try and Decide", Returnil, and a number of other "virtual system" apps that actively "catch" changes and allow you to commit or reject them. I'm not interested in these for a number of reasons - I prefer the "cut and dry" restore point approach. Finally, I'll note that I've just recently become aware of Comodo Time Machine. It sounds absolutely perfect, however, a quick skim through the user forums show more than a few horror stories of corrupted, unbootable systems. Any positive personal experience with the software to suppress my superstitions, or suggestions for more established alternatives would be greatly appreciated - Comodo Time Machine seems relatively new. Thanks for your help!

    Read the article

  • Dialog box tells me there's a missing driver when installing 64-bit version of Windows 7

    - by Eikern
    I'm trying to install Windows 7 64-bit on my computer (ASUS P6T Deluxe V2, one 80GB HDD and two 1 TB HDDs). When I'm supposed to select whether I want to Upgrade or do a Custom install, I get a dialog box telling me: "Load Driver: A required CD/DVD drive device driver is missing. If you have a driver floppy disk, CD, DVD, or USB flash drive, please insert it now. Note: If the Windows installation media is in the CD/DVD drive, you can safely remove it for this step." I've tried to reach this step using a 32-bit installation disc, but that doesn't generate this message at all. Through the command window (Shift+F10) I can reach all of my drives, including the optical drive, without any problems - so what kind of device driver does the installation want? I've tried all the obvious drivers on the CD that came with my motherboard, but I can't seem to find the right one. The problem is that I don't know which device I'm supposed to load drivers for in the first place. Can anyone help me? Edit: It turned out that my downloaded image was corrupted. I borrowed a DVD from a friend of mine, which worked!

    Read the article

  • Dual-booting Windows 7 and Ubuntu

    - by CFP
    Hello everyone, I've just received my Dell Studio 17 laptop, which comes with Windows 7 x64 preinstalled. I'm having quite a hard time installing Ubuntu on it. First of all, here is how I partitioned the drive using GParted: |==Dell utility partition==|==Dell recovery partition==|==Windows 7==|[==Ubuntu==|==Data partition==]|, where [] denotes an extended partition. Here are the steps I completed: I used GParted to create this structure, keeping Windows 7 installed. I booted the Ubuntu LiveCD, installed Ubuntu on the right partition, and let it install GRUB automatically. I rebooted into Ubuntu, went back to Windows 7 with no problems, then rebooted again - and GRUB was gone. I used Super Grub Disk to restore GRUB; it didn't work. I tried to boot into Ubuntu from Super Grub Disk, but GRUB couldn't find the boot folder. I then reinstalled Ubuntu and went through the same steps, but this time SGD did boot my Ubuntu. I reverted to the previous version of GRUB and installed it on my hard drive. That worked, but trying to boot Win7 got me the "No MBR, press Ctrl+Alt+Del to reboot" error. I used the Windows 7 CD to restore the MBR (the auto wizard didn't work, I had to rebuild the MBR from the command line). Now Ubuntu is gone and 7 works fine. I've read a lot about this, and realized that many people could simply not boot Win7 again after encountering this problem. Now I'd like to restore GRUB, but I really don't want to go through the hassle of a full new cycle of installing/reinstalling everything again. Is there a GRUB guru around who can provide a detailed guide to not screwing everything up once again? Thanks a lot!
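
    For reference, the usual sequence to put GRUB back after a Windows MBR repair, sketched from an Ubuntu live CD (the partition number is a placeholder for the actual Ubuntu root partition):

        # mount the installed Ubuntu root and chroot into it
        sudo mount /dev/sda6 /mnt             # assumed Ubuntu / partition
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt

        grub-install /dev/sda                 # GRUB back into the MBR
        update-grub                           # os-prober should list Windows 7 again
        exit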

    Read the article

  • Linux: CIFS/Samba mount hangs for several minutes

    - by Pistos
    I have a small local network which has a Gentoo box and a Windows box. I mount a share originating on the Windows box onto the Gentoo box with a command like: mount -t cifs -o username=WindowsUsername,password=thepassword,uid=pistos //192.168.0.103/Users /mnt/windowsbox Most of the time, everything Just Works, and I can read and write without problems. However, every few weeks or so, the connection or the mount point seems to go dead or hang, such that any process that tries to access the mount point gets stuck in D state (disk, or I/O wait). These processes become impervious to TERM and KILL signals. Disconnecting and reconnecting the Windows box from the network does not help. The frozen state lasts for 5+ minutes. It's really frustrating and gets in the way of normal work, because it freezes Save As dialogues, ls commands, etc. If I issue a umount on the mount point, it either hangs also, or reports that the mount point is in use. Eventually, the dead state resolves itself, and the mount point gets unmounted, or it becomes possible to umount with no delay. My guess is that this happens when the connection/mount has gone idle, or when the Windows machine has been idle. I am not really sure. Why is this happening, and what can I do to prevent it? Or how can I successfully kill these D-state processes at will? Possibly related: CIFS mounts hang on read
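
    One workaround sometimes used for idle-CIFS wedges (a sketch, not a diagnosis of this particular hang) is to let autofs mount the share on demand and unmount it after a short idle timeout, so there is no long-lived mount left around to go stale:

        # /etc/auto.master
        /mnt/auto  /etc/auto.cifs  --timeout=60

        # /etc/auto.cifs (the credentials file keeps the password out of the map)
        windowsbox  -fstype=cifs,credentials=/etc/cifs-credentials,uid=pistos  ://192.168.0.103/Users

        # the share then appears at /mnt/auto/windowsbox on first access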

    Read the article

  • How do I force a restore over an existing database?

    - by Ian Boyd
    I have a database, and I want to force a restore over the top of it. I check the option "Overwrite the existing database (WITH REPLACE)", but, as expected, SSMS is unable to overwrite the existing database. Of course I don't want different filenames; I want to overwrite the existing database. How do I force a restore over an existing database? And for the Google search crawler: File '%s' is claimed by '%s'(4) and '%s'(3). The WITH MOVE clause can be used to relocate one or more files. RESTORE DATABASE is terminating abnormally. (Microsoft SQL Server, Error: 3176). Update: The script (before I deleted the database, because I needed to get it done) was:

        RESTORE DATABASE [HealthCareGovManager]
            FILE = N'HealthCareGovManager_Data',
            FILE = N'HealthCareGovManager_Archive',
            FILE = N'HealthCareGovManager_AuditLog'
        FROM DISK = N'D:\STAGING\HealthCareGovManager10232013.bak'
        WITH FILE = 1,
            MOVE N'HealthCareGovManager_Data' TO N'D:\CGI Data\HealthCareGovManager.MDF',
            MOVE N'HealthCareGovManager_Archive' TO N'D:\CGI Data\HealthCareGovManager.ndf',
            MOVE N'HealthCareGovManager_AuditLog' TO N'D:\CGI Data\HealthCareGovManager.ndf',
            MOVE N'HealthCareGovManager_Log' TO N'D:\CGI Data\HealthCareGovManager.LDF',
            NOUNLOAD, REPLACE, STATS = 10

    I used the UI to delete the existing database, so that I could use the UI to force an overwrite of the (now non-existent) database. Hopefully there can be an answer here so the next guy has one. No, nobody was in the context of the database (the error message from other connections is quite different from this one, and I only got to see this error after I killed the other connections).
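
    Worth noting as a sketch rather than a diagnosis: in the script above, both HealthCareGovManager_Archive and HealthCareGovManager_AuditLog are MOVEd to the same physical .ndf, which is the kind of collision the "claimed by" message complains about. A version of the restore with each logical file given its own target (paths are illustrative) would look like:

        RESTORE DATABASE [HealthCareGovManager]
        FROM DISK = N'D:\STAGING\HealthCareGovManager10232013.bak'
        WITH FILE = 1,
             MOVE N'HealthCareGovManager_Data'     TO N'D:\CGI Data\HealthCareGovManager.mdf',
             MOVE N'HealthCareGovManager_Archive'  TO N'D:\CGI Data\HealthCareGovManager_Archive.ndf',
             MOVE N'HealthCareGovManager_AuditLog' TO N'D:\CGI Data\HealthCareGovManager_AuditLog.ndf',
             MOVE N'HealthCareGovManager_Log'      TO N'D:\CGI Data\HealthCareGovManager.ldf',
             REPLACE, STATS = 10;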

    Read the article

  • What is the max connections via remote desktop for a small server?

    - by Jay Wen
    I have a small server running MS Server 2012. The CPU is a Xeon E3-1230 V2 @ 3.30GHz, 4 cores, 8 logical processors, with 8 GB RAM. The main drive is a Samsung 840, and the bulk storage is a 4-disk WD Black RAID 10 array in a Synology NAS enclosure. My question is: given this hardware, approximately how many users can the system support via "Remote Desktop Connection"? Assume there are no licensing limits. These are not admin users; I know there is a two-admin limit. This boils down to: what resources does one remote connection require? RAM? A percentage of the CPU? Network bandwidth? I guess the base case would be a connection where the user is inactive or simply browsing CNN. Once you know this, you know how many you could fit on the machine before something is maxed out. In reality, users would mostly be in Excel (multi-MB spreadsheets). I know the approximate resources currently required by each copy of Excel.

    Read the article

  • Boot drive not found issue after cloning using Apricorn EZgig

    - by TomWilsonFL
    A couple days ago I cloned a drive for someone using the EZgig software. Usually this goes without a hitch, but this particular drive I was cloning is quite old. When I restarted with the new drive I received the typical bootable disk not found message, so I turned it off, messed with the BIOS, restarted and it came up fine. That night I was working remotely on the computer and had to restart it. It didn't come back up; not a good sign. When the user came to the computer in the morning it was giving the same message. I have found that to make the computer boot, all I have to do is go into the BIOS and "Load Defaults", then restart. It will boot and runs great. Any thoughts on what is causing this situation? Is it MBR corruption? Are some settings being saved in the CMOS? A couple points of mention: I have already attempted looking for a BIOS update for the computer, but the newest is already installed (from 2003). When the computer reboots it either shows "None" for Primary Master, or sometimes it will just not show anything. Thanks, Tom

    Read the article

  • How can I automatically synchronize a directory tree on multiple machines?

    - by Blacklight Shining
    I have two Mac laptops and a Debian server, each with a directory that I would like to keep in sync between the three. The solution should meet the following criteria (in rough order of importance): It must not use any third-party service (e.g. Dropbox, SugarSync, Google whatever). This does not include installing additional software (as long as it's free). It must not require me to use specific directories or change my way of storing things. (Dropbox does this IIRC.) It must work in all directions (changes made on /any/ machine should be pushed to the others). All data sent must be encrypted (I have ssh keypairs set up already). It must work even when not all machines are available (changes should be pushed to a machine when it comes back online). It must work even when the /directories/ on some machines are not available (they may be stored on disk images which will not always be mounted). This can be solved for Macs by using launchd to automatically launch and kill (or in some way change the behavior of) whatever daemon is used for syncing when the images are mounted and unmounted. It must be immediate (using an event-based system, not a periodic one like cron). It must be flexible (if more machines are added, I should be able to incorporate them easily). I also have some preferences that I would like to be fulfilled, but do not have to be: It should notify me somehow if there are conflicts or other errors. It should recognize symbolic and hard links and create corresponding ones. It should allow me to create a list of exceptions (subdirectories which will not be synced at all). It should not require me to set up port forwarding or otherwise reconfigure a network; this can be solved by using an ssh tunnel with reverse port forwarding. If you have a solution that meets some, but not all, of the criteria, please contribute it in the comments as it might be useful in some way, and it might be possible to meet some of the criteria separately. What I tried, and why it didn't work: rsync and lsyncd do not support bidirectional synchronization; csync2 is designed for server clusters and does not appear to work with machines with dynamic IPs; DRBD (suggested by amotzg) involves installing a kernel module and does not appear to work on systems running OS X.
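
    One concrete option worth naming plainly (it is not mentioned in the question): Unison does true two-way synchronization over plain ssh and runs on both OS X and Debian. A minimal profile sketch, with hostnames and paths as placeholders and with the event-driven trigger still left to launchd/inotify wrappers:

        # ~/.unison/shared.prf -- run with: unison shared
        # (unison of the same major version must be installed on both ends)
        root = /Users/me/shared
        root = ssh://debian-server//home/me/shared
        batch = true
        prefer = newer
        ignore = Name .DS_Store
        ignore = Path tmp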

    Read the article
