Search Results

Search found 14013 results on 561 pages for 'remote backup'.

Page 238/561 | < Previous Page | 234 235 236 237 238 239 240 241 242 243 244 245  | Next Page >

  • MPMoviePlayerController & .m3u8 playlist

    - by Thierryb
    Hello, I would like to use an .m3u8 playlist containing remote .mp4 files with MPMoviePlayerController; has anyone succeeded with this? Must the .m3u8 contain .ts files, and if not, what is the purpose of .ts? Will the next/previous buttons be enabled once the playlist is loaded, and if not, what is the purpose of those buttons? And last question: does anyone have a sample .m3u8 file referencing remote .mp4 files to test with? Thanks a lot for your help. Thierry
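
    For reference, a stock HTTP Live Streaming playlist references .ts (MPEG-2 transport stream) segments rather than whole .mp4 files; a minimal hedged example, with placeholder URLs, looks like this:

        #EXTM3U
        #EXT-X-TARGETDURATION:10
        #EXT-X-MEDIA-SEQUENCE:0
        #EXTINF:10,
        http://example.com/video/segment0.ts
        #EXTINF:10,
        http://example.com/video/segment1.ts
        #EXT-X-ENDLIST

    The .ts segments exist so the player can fetch and switch between short chunks while streaming, which is why a playlist of plain remote .mp4 files is not the usual HLS setup.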

    Read the article

  • C# export to excel from sql server

    - by Manish Gupta
    In my C# Windows application, I am exporting SQL Server data to Excel on a remote drive, but it is too slow. If I export the data to Excel on the local drive, it is fast. How can I speed up the export when the destination is a remote drive? Thanks in advance...

    Read the article

  • Do I need VMWare vSphere?

    - by Gk
    I'm planning to use VMware to consolidate some very aged servers instead of replacing them all with new hardware. VMware vSphere sounds great, but because of a low budget I can't afford both the licenses and a SAN. Without a SAN, is vSphere worth the price? As I understand it, without a SAN, VMware HA, VMotion, and FT are unavailable. So do I need vSphere, or only the free ESXi version, assuming I only need to back up my VMs daily? Do you know of any complete backup solution for ESXi 4? TIA, -Gk
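
    For the backup part, one widely used free option on ESXi is the community ghettoVCB script; a minimal sketch of an invocation, with the VM name and config file as placeholders:

        # run in an SSH session on the ESXi host
        ./ghettoVCB.sh -m myvm -g ghettoVCB.conf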

    Read the article

  • Windows Domain Controller: Create a test environment from a production environment

    - by Robert Coggins
    I need to create a working test environment of a domain we have, with all the data from the production environment in the test environment. What is the best way to go about doing this? Here are some ideas I have, but I am not sure if there is a better/recommended way:

        1. Use VMware Converter to create a VM of one of the production DCs.
        2. Create a VM, promote it on the real domain, and move the VM to my test environment.
        3. Use some kind of backup utility to back up the domain info and restore it to a VM I created.

    Thanks in advance for any help!
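
    For option 3, a hedged sketch assuming Windows Server 2008 with the Windows Server Backup feature installed (the drive letter and timestamp are placeholders): back up the system state on a production DC, then restore it on the isolated test VM booted into Directory Services Restore Mode:

        wbadmin start systemstatebackup -backupTarget:E:
        wbadmin start systemstaterecovery -version:06/30/2010-09:00 -backupTarget:E:

    Keep the test network fully isolated from production before the restored DC comes up, or the two copies of the domain will conflict with each other.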

    Read the article

  • xcopy and timeouts

    - by acidzombie24
    Related to this question and this answer: http://serverfault.com/questions/48839/backup-on-disc-using-truecrypt-corruption-problem/50829#50829

    I want to copy the backup to my HD, but the timeout seems to be a problem. I see many single files, then I see "File creation error - Data error (cyclic redundancy check)." while it is trying to copy the next file. When I click My Computer I see nothing, and it also locks up for some seconds (30 maybe). So TrueCrypt is not a good solution if I must be able to copy files back from a volume with a corrupted sector? (Answer in the other thread, please.) Is there any way for me to change the timeout? I looked here for some flags and didn't see any: http://www.scriptlogic.com/support/CustomScripts/XCOPYCommandLineParameters.html
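
    xcopy has no timeout switch as such, but it does have /C, which keeps copying after errors such as CRC failures; a sketch with placeholder paths (/E copies subdirectories, /H includes hidden files):

        xcopy F:\backup C:\restore /C /E /H

        rem robocopy alternative: no retries, 1-second wait between attempts
        robocopy F:\backup C:\restore /E /R:0 /W:1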

    Read the article

  • How to connect a SAN from CentOS through two iSCSI targets

    - by garconcn
    I asked a similar question before. This time I want to use a separate subnet for each of the two iSCSI targets, hence the new question. I have an old Promise VTrak M500i SAN server. It comes with 2 iSCSI ports. I want to connect to two LUNs on the SAN server through two separate targets from a CentOS 5.7 64-bit server. My network setup is as follows:

    CentOS server:

        Management network - 192.168.1.1
        Storage network 1  - 192.168.5.2
        Storage network 2  - 192.168.6.2

    Promise SAN server:

        Management network - 192.168.1.2
        iSCSI Port 1 - 192.168.5.1
        iSCSI Port 2 - 192.168.6.1

    I have two Logical Drives on this SAN and they are mapped as follows:

        Index  Initiator Name        LUN Mapping
        0      iqn.2011-11:backup    (LD0,0)
        1      iqn.2011-11:template  (LD1,1)

    Basically, I want the traffic to the iqn.2011-11:backup LUN 0 to go through the 192.168.5.1 network, and the traffic to the iqn.2011-11:template LUN 1 to go through the 192.168.6.1 network. I don't use MPIO; I just want to separate the traffic to avoid congestion. How do I achieve this? I am new to SAN stuff, so please explain in as much detail as you can. Thank you.

    The following is what I am doing now. After mapping the LUNs to my pre-defined initiators, the CentOS server can discover both targets:

        [root@centos ~]# iscsiadm -m discovery -t sendtargets -p 192.168.5.1
        192.168.5.1:3260,1 iscsi-1
        192.168.6.1:3260,2 iscsi-1
        [root@centos ~]# iscsiadm -m discovery -t sendtargets -p 192.168.6.1
        192.168.6.1:3260,2 iscsi-1
        192.168.5.1:3260,1 iscsi-1
        [root@centos ~]# /etc/init.d/iscsi start
        iscsid is stopped
        Starting iSCSI daemon: [ OK ]
        [ OK ]
        Setting up iSCSI targets: Logging in to [iface: default, target: iscsi-1, portal: 192.168.6.1,3260]
        Logging in to [iface: default, target: iscsi-1, portal: 192.168.5.1,3260]
        Login to [iface: default, target: iscsi-1, portal: 192.168.6.1,3260] successful.
        Login to [iface: default, target: iscsi-1, portal: 192.168.5.1,3260] successful.
        [ OK ]
        [root@centos ~]# iscsiadm -m session
        tcp: [1] 192.168.6.1:3260,2 iscsi-1
        tcp: [2] 192.168.5.1:3260,1 iscsi-1

    When I check the LUN mapping on the SAN server for the two Logical Drives, both LUNs are connected through Port0-192.168.5.2 with the initiator defined in CentOS:

        Assigned Initiator List:
        Initiator Name      Alias                IP Address         LUN
        iqn.2011-11.centos  centos.mydomain.com  Port0-192.168.5.2  0
        iqn.2011-11.centos  centos.mydomain.com  Port1-192.168.5.2  1

    I assume the following is what I want:

        Initiator Name        Alias                IP Address         LUN
        iqn.2011-11.backup    centos.mydomain.com  Port0-192.168.5.2  0
        iqn.2011-11.template  centos.mydomain.com  Port0-192.168.6.2  1
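
    One hedged way to pin each target's traffic to one NIC is open-iscsi interface bindings; a sketch assuming the storage NICs are eth1 (192.168.5.2) and eth2 (192.168.6.2):

        # define one iface per storage NIC
        iscsiadm -m iface -I iface5 --op=new
        iscsiadm -m iface -I iface5 --op=update -n iface.net_ifacename -v eth1
        iscsiadm -m iface -I iface6 --op=new
        iscsiadm -m iface -I iface6 --op=update -n iface.net_ifacename -v eth2
        # discover each portal only through its matching iface
        iscsiadm -m discovery -t sendtargets -p 192.168.5.1 -I iface5
        iscsiadm -m discovery -t sendtargets -p 192.168.6.1 -I iface6
        # log in through the bound interfaces
        iscsiadm -m node -p 192.168.5.1:3260 -I iface5 --login
        iscsiadm -m node -p 192.168.6.1:3260 -I iface6 --login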

    Read the article

  • Why does this Robocopy command make files hidden and how can I unhide them?

    - by Brian Dant
    I am trying to make a backup to an external HD using Robocopy, and I am wondering why it makes the destination directory hidden, and subsequently why I can't unhide it. Note: I'm new to the Windows command line; this is the first command I have ever run. The command I used is:

        Robocopy D:\ I:\destination-directory /E /R:0 /DCOPY:T

    The backup worked fine, but it made the destination directory hidden. Then I tried to unhide the directory with the following command:

        ATTRIB -H "I:\destination-directory" /S /D

    and the output is:

        Not resetting system file -- I:\destination-directory

    So, why does this Robocopy command hide the destination directory, and what can I do to unhide it? I would like to use the command line to do this.
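
    A drive root like D:\ carries the Hidden and System attributes, and Robocopy copies the source directory's attributes onto the destination, so the target folder became a hidden system folder. ATTRIB refuses to clear Hidden while System is still set, which is what the "Not resetting system file" message means; clearing both bits should work:

        attrib -S -H "I:\destination-directory"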

    Read the article

  • ReportViewer Error When AsyncRendering set to false

    - by Irman
    I have a problem using ReportViewer with AsyncRendering set to false; the report shows this error message: "The underlying connection was closed: An unexpected error occurred on a receive. Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. An existing connection was forcibly closed by the remote host". The report is running on IIS 7 and Windows 7 on my local machine.

    Read the article

  • Making Thunar the default file browser without hiding the desktop icons

    - by Manu
    I really dislike Ubuntu's default file browser, nautilus, and decided to opt for a lighter alternative (Thunar or Xfe). I've used the following script to change the default to Thunar, but now all my icons are gone from the desktop! The files are still there, in /home/myid/Desktop, but they do not appear. Is there a way to show them, or is this a consequence of removing nautilus as the default file browser? Can I modify the following script* in order to keep the icons?

    *copied from https://help.ubuntu.com/...:

        ## Originally written by aysiu from the Ubuntu Forums
        ## This is GPL'ed code
        ## So improve it and re-release it

        ## Define portion to make Thunar the default if that appears to be the appropriate action
        makethunardefault() {
        ## I went with --no-install-recommends because
        ## I didn't want to bring in a whole lot of junk,
        ## and Jaunty installs recommended packages by default.
        echo -e "\nMaking sure Thunar is installed\n"
        sudo apt-get update && sudo apt-get install thunar --no-install-recommends

        ## Does it make sense to change to the directory?
        ## Or should all the individual commands just reference the full path?
        echo -e "\nChanging to application launcher directory\n"
        cd /usr/share/applications

        echo -e "\nMaking backup directory\n"
        ## Does it make sense to create an entire backup directory?
        ## Should each file just be backed up in place?
        sudo mkdir nonautilusplease

        echo -e "\nModifying folder handler launcher\n"
        sudo cp nautilus-folder-handler.desktop nonautilusplease/
        ## Here I'm using two separate sed commands
        ## Is there a way to string them together to have one
        ## sed command make two replacements in a single file?
        sudo sed -i -n 's/nautilus --no-desktop/thunar/g' nautilus-folder-handler.desktop
        sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-folder-handler.desktop

        echo -e "\nModifying browser launcher\n"
        sudo cp nautilus-browser.desktop nonautilusplease/
        sudo sed -i -n 's/nautilus --no-desktop --browser/thunar/g' nautilus-browser.desktop
        sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-browser.desktop

        echo -e "\nModifying computer icon launcher\n"
        sudo cp nautilus-computer.desktop nonautilusplease/
        sudo sed -i -n 's/nautilus --no-desktop/thunar/g' nautilus-computer.desktop
        sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-computer.desktop

        echo -e "\nModifying home icon launcher\n"
        sudo cp nautilus-home.desktop nonautilusplease/
        sudo sed -i -n 's/nautilus --no-desktop/thunar/g' nautilus-home.desktop
        sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-home.desktop

        echo -e "\nModifying general Nautilus launcher\n"
        sudo cp nautilus.desktop nonautilusplease/
        sudo sed -i -n 's/Exec=nautilus/Exec=thunar/g' nautilus.desktop

        ## This last bit I'm not sure should be included
        ## See, the only thing that doesn't change to the
        ## new Thunar default is clicking the files on the desktop,
        ## because Nautilus is managing the desktop (so technically
        ## it's not launching a new process when you double-click
        ## an icon there).
        ## So this kills the desktop management of icons completely
        ## Making the desktop pretty useless... would it be better
        ## to keep Nautilus there instead of nothing? Or go so far
        ## as to have Xfce manage the desktop in Gnome?
        echo -e "\nChanging base Nautilus launcher\n"
        sudo dpkg-divert --divert /usr/bin/nautilus.old --rename /usr/bin/nautilus && sudo ln -s /usr/bin/thunar /usr/bin/nautilus

        echo -e "\nRemoving Nautilus as desktop manager\n"
        killall nautilus

        echo -e "\nThunar is now the default file manager.
        To return Nautilus to the default, run this script again.\n"
        }

        restorenautilusdefault() {
        echo -e "\nChanging to application launcher directory\n"
        cd /usr/share/applications

        echo -e "\nRestoring backup files\n"
        sudo cp nonautilusplease/nautilus-folder-handler.desktop .
        sudo cp nonautilusplease/nautilus-browser.desktop .
        sudo cp nonautilusplease/nautilus-computer.desktop .
        sudo cp nonautilusplease/nautilus-home.desktop .
        sudo cp nonautilusplease/nautilus.desktop .

        echo -e "\nRemoving backup folder\n"
        sudo rm -r nonautilusplease

        echo -e "\nRestoring Nautilus launcher\n"
        sudo rm /usr/bin/nautilus && sudo dpkg-divert --rename --remove /usr/bin/nautilus

        echo -e "\nMaking Nautilus manage the desktop again\n"
        nautilus --no-default-window &

        ## The only change that isn't undone is the installation of Thunar
        ## Should Thunar be removed? Or just kept in?
        ## Don't want to load the script with too many questions?
        }

        ## Make sure that we exit if any commands do not complete successfully.
        ## Thanks to nanotube for this little snippet of code from the early
        ## versions of UbuntuZilla
        set -o errexit
        trap 'echo "Previous command did not complete successfully. Exiting."' ERR

        ## This is the main code
        ## Is it necessary to put an elseif in here? Or is
        ## redundant, since the directory pretty much
        ## either exists or it doesn't?
        ## Is there a better way to keep track of whether
        ## the script has been run before?
        if [[ -e /usr/share/applications/nonautilusplease ]]; then
        restorenautilusdefault
        else
        makethunardefault
        fi;

    Read the article

  • Cronjob as root?

    - by Rob
    I'm having a bit of a problem with cronjobs for backups. I've set up the following in sudo crontab -e (not under my personal account):

        0 1 * * * /backups/dobackup

    /backups/dobackup contains this:

        #!/bin/sh
        touch ITRAN
        tar -cvpjf /backups/$(date +%d.%m.%Y)_backup.tar.bz2 --exclude=/backups --exclude=/proc --exclude=/lost+found --exclude=/sys --exclude=/mnt --exclude=/media --exclude=/dev /

    The backup file is created, but the file ITRAN is not. Also, the backup file is vastly smaller than expected:

        -rw-r--r-- 1 rjrudman root  371620259 2012-06-21 12:39 21.06.2012_backup.tar.bz2
        -rw-r--r-- 1 rjrudman root 1023211449 2012-06-22 18:00 22.06.2012_backup.tar.bz2
        -rw-r--r-- 1 rjrudman root    1512785 2012-06-23 01:00 23.06.2012_backup.tar.bz2
        -rw-r--r-- 1 rjrudman root 1023272455 2012-06-24 22:41 24.06.2012_backup.tar.bz2
        -rw-r--r-- 1 rjrudman root    1514027 2012-06-25 01:00 25.06.2012_backup.tar.bz2

    The backups with much larger file sizes are created by manually running sudo /backups/dobackup. It seems the cronjob is failing at some point, but I have no idea how to debug this issue or where to start. Any ideas? Running Ubuntu 10.04.
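
    A first debugging step is to capture the job's output, since errors from cron are otherwise lost unless local mail is set up; the log path here is arbitrary:

        0 1 * * * /backups/dobackup > /backups/dobackup.log 2>&1

    Also note that cron does not start the script in /backups, so "touch ITRAN" creates ITRAN in the crontab owner's home directory; an absolute path such as "touch /backups/ITRAN" avoids that confusion.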

    Read the article

  • Connect to an irc server with password

    - by hvtuananh
    I'm writing a script in remote.ini. The script looks like:

        on 1:start:{
          server some.irc.server
          server -m another.irc.server
        }

    The script works well: when I open mIRC, it automatically connects to the 2 servers above. Now I want to connect to an IRC server that requires a password, say abcdef. How can I write the script in remote.ini to connect to this server?
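
    mIRC's /server command accepts an optional port and password after the address, so one hedged sketch (the port and password values are placeholders) is:

        on 1:start:{
          server some.irc.server 6667 abcdef
          server -m another.irc.server
        }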

    Read the article

  • WCF multithreaded calls

    - by Guy
    Hi all, we have developed a multithreaded server that receives data from multiple clients and calls different WCF services. In many cases two (or more) different clients call the server at the same time, and the server tries to call the remote WCF service from two different threads simultaneously. We have encountered some issues, especially when the remote WCF service is down. Are we doing things correctly? Is there a best practice for this scenario? Thanks, Guy.

    Read the article

  • ESXi 5.1 ghettoVCB stuck at Clone: 10% done

    - by stormdrain
    Trying to run ghettoVCB for the first time here. I am using a NAS that is set up as a datastore on the host. I did a dry run and it completed without error. The VM is ~500GB and there is only one on the host that I'm trying to back up. I proceeded to start the actual backup:

        ./ghettoVCB.sh -m vmname -g ghettoVCB.conf

    It goes through the config and looks like it's taking off:

        2013-10-24 11:43:19 -- info: CONFIG - USING GLOBAL GHETTOVCB CONFIGURATION FILE = ghettoVCB.conf
        2013-10-24 11:43:19 -- info: CONFIG - VERSION = 2013_01_11_0
        2013-10-24 11:43:19 -- info: CONFIG - GHETTOVCB_PID = 17398616
        2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/nas2tb-001/esxi4
        2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
        2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2013-10-24_11-43-18
        2013-10-24 11:43:19 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
        2013-10-24 11:43:19 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
        2013-10-24 11:43:19 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
        2013-10-24 11:43:19 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 4
        2013-10-24 11:43:19 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
        2013-10-24 11:43:19 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
        2013-10-24 11:43:19 -- info: CONFIG - LOG_LEVEL = info
        2013-10-24 11:43:19 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB-2013-10-24_11-43-18-17398616.log
        2013-10-24 11:43:19 -- info: CONFIG - ENABLE_COMPRESSION = 0
        2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
        2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
        2013-10-24 11:43:19 -- info: CONFIG - ALLOW_VMS_WITH_SNAPSHOTS_TO_BE_BACKEDUP = 0
        2013-10-24 11:43:19 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
        2013-10-24 11:43:19 -- info: CONFIG - VM_SHUTDOWN_ORDER =
        2013-10-24 11:43:19 -- info: CONFIG - VM_STARTUP_ORDER =
        2013-10-24 11:43:19 -- info: CONFIG - EMAIL_LOG = 0
        2013-10-24 11:43:19 -- info:
        2013-10-24 11:43:22 -- info: Initiate backup for vmname
        2013-10-24 11:43:22 -- info: Creating Snapshot "ghettoVCB-snapshot-2013-10-24" for serv2
        Destination disk format: VMFS thin-provisioned
        Cloning disk '/vmfs/volumes/esxi4-storage/vmname/vmname_1.vmdk'...
        Clone: 10% done.

    and it's been that way for over an hour now, stuck at "Clone: 10% done.". Thing is: I can see the vmdk on the NAS, and it looks like almost the whole thing is there. On the NAS it's showing ~430GB, but the vSphere Client summary shows it as 507GB. I don't see the vmdk on the NAS growing any more. The logfile mimics some of the above and is sitting at "Creating Snapshot..."; nothing else is coming in. Is the vmdk on the NAS showing all those GB because of the provisioning or something? i.e., is the size of the file not necessarily indicative of the amount of actual data that has been copied? Is there a reason it might be "stuck" at 10%? i.e., could it really be taking this long? Any other tips? Thanks.

    Edit: as soon as I hit the Submit button, I glanced over to see that it has incremented to 11% done. Good to know it'll be complete sometime around when the sun explodes.

    Read the article

  • How do I detect a connection break using MessageQueue

    - by Lopper
    My application, written in C#, uses the MessageQueue class in .NET to exchange messages with another remote application, and the MessageQueue should always be "connected" (heartbeat present) to the remote message queue under all circumstances. If it is not "connected", that signals that something is wrong, and my application will need to perform some actions, such as updating an event log. Is there a method in the MessageQueue class that can be used to detect such breaks in the connection?

    Read the article

  • fabric and svn password

    - by hyperboreean
    Assuming that I cannot run something like this with Fabric:

        run("svn update --password 'password' .")

    what is the proper way to pass Fabric the password for a remote interactive command line? I am not sure, but the svn server we're using might be restricted to disallow non-interactive password use, and I couldn't find a way to automatically update a remote repo.
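
    If the server does allow it, svn itself can take credentials on the command line; a hedged sketch of the command Fabric would run (the username and password are placeholders):

        svn update --username deploy --password 'secret' --non-interactive .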

    Read the article

  • Accessing Sharepoint 2007 documents outside network

    - by Ruby
    Hello, I just installed MOSS 2007 and configured it on a machine. The machine is not in a domain. When I try to create a new document in my document library (the server is in a remote location and my laptop is not part of the domain), MS Word 2007 opens, asks me for login credentials 3 times, and then disappears, leaving me with a new blank Word document. Is there any way to enable remote access to MOSS 2007 documents? Thanks :)

    Read the article

  • Cleaning Up Unused Users and Groups (Ubuntu 10.10 Server)

    - by PhpMyCoder
    Hello experts, I'm very much a beginner when it comes to Ubuntu, and I've been learning the ropes by diving in and writing a (backend-language-independent) web app framework that relies on Apache, some clever mod_rewrites, and Ubuntu permissions, groups, and users. One thing that really annoys my inner clean freak is that there are loads of users and groups created when Ubuntu is installed that are never used (or so I think). Since I'm just running a simple web app server, I would like to know: what users/groups can I remove? Since you'll probably ask for it, here's a list of all the users on my box (excluding the ones I know that I need):

        root daemon bin sys sync man lp mail uucp proxy backup list irc gnats nobody libuuid syslog

    And a list of all of the groups:

        root daemon bin sys adm tty disk lp mail uucp man proxy kmem dialout fax voice cdrom floppy tape sudo audio dip backup operator list irc src gnats shadow utmp video sasl plugdev users nogroup libuuid crontab syslog fuse mlocate ssl-cert lpadmin sambashare admin
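
    Standard Debian/Ubuntu system accounts are created by the base packages and are generally expected to exist, so the safe habit is to check ownership before touching one; a hedged sketch using "irc" as the candidate:

        # any files on the root filesystem owned by the user?
        find / -xdev -user irc -ls
        # any running processes?
        ps -u irc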

    Read the article

  • Mercurial on IIS7 connection timeout.

    - by Ronnie
    I configured Mercurial on IIS 7 and I am able to push and pull some test files without problems. If I try to push a bigger repository, the hg push command line gives this error:

        abort: error: An existing connection was forcibly closed by the remote host

    From TortoiseHg I get some more detail:

        lopen error [Errno 10054] An existing connection was forcibly closed by the remote host

    It seems to me like some kind of connection timeout for the CGI, but I already extended the CGI timeout properties in the IIS 7 configuration. What could be the problem?
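
    If hg is running as a plain CGI, the relevant IIS 7 setting is the timeout attribute of the system.webServer/cgi section; a hedged sketch raising it with appcmd (the 30-minute value is arbitrary):

        %windir%\system32\inetsrv\appcmd set config /section:system.webServer/cgi /timeout:"00:30:00"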

    Read the article

  • Force initial Google Drive sync with a non-empty folder?

    - by Terrance Shaw
    I upgraded my iMac with an SSD last night and restored from a Time Capsule backup. Everything is now working substantially zippier and overall better, with the exception of one thing: Google Drive refuses to continue to sync with the Google Drive folder that it'd been using before the upgrade, and I ultimately ended up having to just delete the folder and let it resync from scratch to get past its stubborn error (alternatively, I suppose I could've simply moved the contents, set the path to the now-empty folder, then moved them back). Is there any way to get past this particular issue (for future reference), or is it something that Google put in place to ensure that a new user doesn't go and specify their root drive as the backup destination?

    Read the article

  • Can't write to file - 'Operation not permitted' WITH sudo

    - by charliehorse55
    I am having trouble writing to a few files on an external HD. I am using it to store media files as well as my Time Machine backup. The drive is formatted as HFS+ Journaled, and other files on the drive can be written successfully. Additionally, the Time Machine backup is working perfectly.

    Permissions for the file:

        $ ls -le -@ Parks\ and\ Recreation\ -\ S01E01.avi
        -rw-rw-rw-@ 1 evantandersen staff 182950496 22 May 2009 Parks and Recreation - S01E01.avi
            com.apple.FinderInfo    32

    Things I have already tried:

        sudo chflags -N
        sudo chown myusername
        sudo chown 666
        sudo chgrp staff

    I also checked that the file is not locked (Get Info in Finder). Why can't I modify that file? Even with sudo I can't modify it at all.
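
    Plain ls does not show BSD file flags, so an immutable flag could be blocking writes even for root; a hedged sketch of things to check (same filename as above):

        # show file flags; "uchg" or "schg" here would explain the refusal
        ls -lO "Parks and Recreation - S01E01.avi"
        # clear the user-immutable flag if it is set
        sudo chflags nouchg "Parks and Recreation - S01E01.avi"
        # drop the FinderInfo extended attribute reported by ls -le -@
        xattr -d com.apple.FinderInfo "Parks and Recreation - S01E01.avi"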

    Read the article

  • Why is file_get_contents() faster than using fsockopen()?

    - by eds
    In PHP, sometimes I want to send an HTTP request to a remote site just to look at the response headers, so I build the request manually and use the fsockopen() function. However, this goes much slower than calling file_get_contents() with a remote URL (which loads the whole page content). Why is this? Is there a good alternative way to get just the response headers (to check if a page returns a 404 error, for example) that works as fast as file_get_contents()?
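
    A common cause is a hand-rolled request that omits a "Connection: close" header, leaving the fsockopen() socket waiting until the server times it out. If only the headers matter, PHP's built-in get_headers() is a simpler route; a hedged sketch with a placeholder URL:

        <?php
        // fetch only the response headers; index 0 holds the status line
        $headers = get_headers('http://example.com/page', 1);
        if (strpos($headers[0], '404') !== false) {
            echo "Page not found\n";
        }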

    Read the article

  • Recovering OS X Mail Accounts Lost in Crash

    - by Tim
    I had a hard crash on my Mac PowerBook, and when I restarted, Mail came up with only my MobileMe account still available; I cannot figure out how to restore the other eight email accounts I have. The directories in ~/Library/Mail all seem to be there. I even did an rsync of the modified .plist files from a Time Machine backup of the directory from before the crash (unfortunately, I was on travel, so the backup is more than a week old, and I'd like to try to recover from that point without having to entirely restore from Time Machine). I also did a fix-permissions pass. So my questions are: where exactly is the account information for Mac Mail kept? Any thoughts on what might have caused the failure? Why does only MobileMe come up? Any other thoughts on how to fix things?
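
    On pre-Lion OS X, Mail's account definitions live in ~/Library/Preferences/com.apple.mail.plist rather than under ~/Library/Mail, so that file is the one to bring back; a hedged sketch (the Time Machine path and usernames are placeholders, and Mail should be quit first):

        cp "/Volumes/Time Machine Backups/Backups.backupdb/MyMac/Latest/Macintosh HD/Users/tim/Library/Preferences/com.apple.mail.plist" ~/Library/Preferences/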

    Read the article

  • Failover tmpfs mirroring. Am I doing it right?

    - by user45286
    My goal is to have a certain directory available as tmpfs. There will be some modifications in this dir during server uptime, and those modifications must be synced to a non-tmpfs persistent dir on the HDD via rsync. After server boot, the latest version from the non-tmpfs persistent dir must be copied to tmpfs, and the rsync syncing started. I'm afraid that rsync will erase the non-tmpfs backup if the tmpfs dir is empty. I'm doing it this way right now:

        1. Create the tmpfs partition in /etc/fstab.
        2. In /etc/rc.local (pseudocode):
           - delete the "tmpfs rsync" cronjob from /var/spool/cron/crontabs, if there is one
           - cp -r /path/to/non-tmpfs-backup /path/to/tmpfs/dir
           - append the "tmpfs rsync" cronjob to /var/spool/cron/crontabs

    What do you think?
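
    One hedged way to keep rsync from wiping the persistent copy is a sentinel file that only exists once the tmpfs dir has been seeded; the paths and schedule below are placeholders:

        # /etc/rc.local fragment: seed tmpfs from the persistent copy, then mark it ready
        cp -a /var/backup/persistent/. /mnt/ramdisk/
        touch /mnt/ramdisk/.seeded

        # /etc/cron.d/tmpfs-sync: sync back only when the sentinel is present
        */5 * * * * root [ -f /mnt/ramdisk/.seeded ] && rsync -a --delete --exclude=.seeded /mnt/ramdisk/ /var/backup/persistent/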

    Read the article

  • Switching BIOS SATA RAID/AHCI setting causes BSOD at Windows Start - Why?

    - by thephatp
    I just changed my disk setup from:

        1x SATA HDD - primary OS disk
        2x SATA HDD - backup disks in RAID 1

    to:

        1x SATA SSD - primary OS disk
        1x SATA HDD - backup disk (no RAID)

    Everything worked great, no problem. So, since I don't have a RAID array anymore, I decided that I could change my BIOS setting to AHCI instead of RAID. I have a Gigabyte GA-P35-DS3R v1.0 mobo. These are my steps:

        1. Enter BIOS settings
        2. Integrated Peripherals: "SATA RAID/AHCI Mode" = RAID -- changed this setting to AHCI
        3. Reboot
        4. Windows Start screen shows up, but as the color orbs are spinning into focus: BSOD and immediate restart
        5. Repeated reboot several times, same outcome

    Next step:

        1. Launch BIOS settings
        2. Integrated Peripherals: "Onboard SATA/IDE Ctrl Mode" = RAID -- changed this setting to AHCI
        3. Reboot
        4. Windows Start screen shows up, but as the color orbs are spinning into focus: BSOD and immediate restart
        5. Repeated reboot several times, same outcome

    Switch both settings back to RAID, reboot, and Windows starts up just fine, no issues. What am I missing? Why can't I set it to AHCI mode without BSODs?
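
    Assuming Windows Vista/7, the usual cause is that Windows keeps the Microsoft AHCI driver (msahci) disabled while booting in RAID mode, so the boot disk vanishes the moment the BIOS switches modes. A commonly cited fix, offered here as a sketch: enable the driver in the registry while still booted in RAID mode, then change the BIOS setting:

        reg add HKLM\SYSTEM\CurrentControlSet\services\msahci /v Start /t REG_DWORD /d 0 /f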

    Read the article

< Previous Page | 234 235 236 237 238 239 240 241 242 243 244 245  | Next Page >