Search Results

Search found 10673 results on 427 pages for 'recovery disk'.


  • ESX3.5 Cluster & MD3000i -- Both servers see iSCSI Targets, Only one server can use partition.

    - by GruffTech
    Alright. First and foremost, Warning. This is a bigger-then-normal question. I like to be thorough and try to eliminate all possible "easymode" answers, as well as give everyone a feel of what i've tried. I've included several images of our setup and the problem it is having.. TLDR Version: So I've followed the guides located here: ESX Deployment Guide V1 this is the guide Dell has sent me to setup two ESX3.5 servers mounting a Dell MD3000i. It doesn't work. Both servers can't use the same storage partition on the MD3000. Both servers see it, but only one server can actually use it. (that server being whatever server created the partition on the target.) Both ESX servers are members of the Host Group. Full Version I have 2 ESX3.5 Servers (10.0.7.102, also called EPI2, and 10.0.7.103, also called EPI3.) connected to a iSCSI SAN Device (Dell MD3000i). Both ESX servers can "scan" the SAN and see the LUNS. Part One: MD3000i Storage On the MD3000i, Both servers are in my host group. I have two partitions, VM1 and VM2, both 1.6TB (vmware doesn't like anything past 2tb.) And you can even see that the ESX servers are targetting the MD3000 just fine. Part Two: The ESX Servers Figure 1. So as you can see above, Both ESX Servers (10.0.7.102 and 10.0.7.103) are able to see and scan the MD3000i SAN. Figure 2. Above is the storage both servers see. I created the storage partition on EPI2 (102). I then Extended the partition to include the second LUN for a grand total of 3.27 TB of storage. When i "rescan" on 103 (the server not mounting the partition), I get the below log in log/messages. Mar 11 10:41:18 epi3 kernel: scsi1: remove-single-device 0 0 0 failed, device busy(4). being the only line that grabs my attentions. (EPI3 is the server name) Mar 11 10:41:04 epi3 vmkiscsid[5436]: Connected to Discovery Address 192.168.130.101 Mar 11 10:41:04 epi3 vmkiscsid[5437]: Connected to Discovery Address 192.168.130.102 Mar 11 10:41:04 epi3 vmkiscsid[5438]: Connected to Discovery Address 192.168.131.101 Mar 11 10:41:04 epi3 vmkiscsid[5439]: Connected to Discovery Address 192.168.131.102 Mar 11 10:41:17 epi3 kernel: scsi singledevice 2 0 0 0 Mar 11 10:41:17 epi3 kernel: Vendor: DELL Model: MD3000i Rev: 0735 Mar 11 10:41:17 epi3 kernel: Type: Direct-Access ANSI SCSI revision: 05 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Supported VPD pages for sdb : 0x0 0x80 0x83 0x85 0x86 0x87 0xc0 0xc1 0xc2 0xc3 0xc4 0xc8 0xc9 0xca 0xd0 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Device id info for sdb: 0x1 0x3 0x0 0x10 0x60 0x1 0xe4 0xf0 0x0 0x1a 0x1a 0xa2 0x0 0x0 0x15 0xe2 0x4d 0x75 0xf6 0x99 0x53 0x98 0x0 0x54 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x2c 0x74 0x2c 0x30 0x78 0x30 0x30 0x30 0x31 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x32 0x0 0x0 0x0 0x51 0x94 0x0 0x4 0x0 0x0 0x80 0x1 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x0 0x0 0x0 0x0 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Id for sdb 0x60 0x01 0xe4 0xf0 0x00 0x1a 0x1a 0xa2 0x00 0x00 0x15 0xe2 
0x4d 0x75 0xf6 0x99 0x4d 0x44 0x33 0x30 0x30 0x30 Mar 11 10:41:17 epi3 kernel: VMWARE: Unique Device attached as scsi disk sdb at scsi2, channel 0, id 0, lun 0 Mar 11 10:41:17 epi3 kernel: Attached scsi disk sdb at scsi2, channel 0, id 0, lun 0 Mar 11 10:41:17 epi3 kernel: scan_scsis starting finish Mar 11 10:41:17 epi3 kernel: SCSI device sdb: 3509329920 512-byte hdwr sectors (1797751 MB) Mar 11 10:41:17 epi3 kernel: sdb: sdb1 Mar 11 10:41:17 epi3 kernel: scan_scsis done with finish Mar 11 10:41:17 epi3 kernel: scsi singledevice 2 0 0 1 Mar 11 10:41:17 epi3 kernel: Vendor: DELL Model: MD3000i Rev: 0735 Mar 11 10:41:17 epi3 kernel: Type: Direct-Access ANSI SCSI revision: 05 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Supported VPD pages for sdc : 0x0 0x80 0x83 0x85 0x86 0x87 0xc0 0xc1 0xc2 0xc3 0xc4 0xc8 0xc9 0xca 0xd0 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Device id info for sdc: 0x1 0x3 0x0 0x10 0x60 0x1 0xe4 0xf0 0x0 0x1a 0x1a 0x86 0x0 0x0 0xd 0xb7 0x4d 0x75 0xf2 0x77 0x53 0x98 0x0 0x54 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x2c 0x74 0x2c 0x30 0x78 0x30 0x30 0x30 0x31 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x32 0x0 0x0 0x0 0x51 0x94 0x0 0x4 0x0 0x0 0x80 0x1 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x0 0x0 0x0 0x0 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Id for sdc 0x60 0x01 0xe4 0xf0 0x00 0x1a 0x1a 0x86 0x00 0x00 0x0d 0xb7 0x4d 0x75 0xf2 0x77 0x4d 0x44 0x33 0x30 0x30 0x30 Mar 11 10:41:18 epi3 kernel: VMWARE: Unique Device attached as scsi disk sdc at scsi2, channel 0, id 0, lun 1 Mar 11 10:41:18 epi3 kernel: Attached scsi disk sdc at scsi2, channel 0, id 0, lun 1 Mar 11 10:41:18 epi3 kernel: scan_scsis starting finish Mar 11 10:41:18 epi3 kernel: SCSI device sdc: 3509329920 512-byte hdwr sectors (1797751 MB) Mar 11 10:41:18 epi3 kernel: sdc: sdc1 Mar 11 10:41:18 epi3 kernel: scan_scsis done with finish Mar 11 10:41:18 epi3 kernel: scsi1: remove-single-device 0 0 0 failed, device busy(4). Mar 11 10:41:18 epi3 kernel: scsi singledevice 1 0 0 0 Things I've Tried: Removing iSCSI targets from only 103, disabling iSCSI, rebooting, enabled iSCSI, re-adding targets, rescan. Same result. Removing partition on 102, Formatted partition on 103 instead. Same result, except flipped. 103 can use storage, 102 can not. Starting Over. Removing all iSCSI Targets on both ESX Boxes, disabling iSCSI, turning off the firewall for iSCSI, rebooting ESX. Then on the MD3000, Removed the Host Group, Removed the Host-to-Virtual Mappings, Restarted the SAN. Followed the Documentation again, same result. Both servers see the storage, but only one server can use it. Disabling and Re-enabling VMware DRS and HA. Same result. Flat-out turning off VMware DRS and HA, and doing the "start over" step to see if maybe that borked it. Same Result. I'm kinda loosing my mind here, Everything i read online says "just partition it and if the ESX boxes can see the targets, it just works".... well crap. Any ideas, any other things to try? 
    Can anyone at least point me in the right direction? I'm really tired of working from 1 am until 4 am (our maintenance hours).

    Read the article

  • memcpy segmentation fault on linux but not os x

    - by Andre
    I'm working on implementing a log based file system for a file as a class project. I have a good amount of it working on my 64 bit OS X laptop, but when I try to run the code on the CS department's 32 bit linux machines, I get a seg fault. The API we're given allows writing DISK_SECTOR_SIZE (512) bytes at a time. Our log record consists of the 512 bytes the user wants to write as well as some metadata (which sector he wants to write to, the type of operation, etc). All in all, the size of the "record" object is 528 bytes, which means each log record spans 2 sectors on the disk. The first record writes 0-512 on sector 0, and 0-15 on sector 1. The second record writes 16-512 on sector 1, and 0-31 on sector 2. The third record writes 32-512 on sector 2, and 0-47 on sector 3. Etc. So what I do is read the two sectors I'll be modifying into 2 freshly allocated buffers, copy starting at record into buf1+the calculated offset for 512-offset bytes. This works correctly on both machines. However, the second memcpy fails. Specifically, "record+DISK_SECTOR_SIZE-offset" in the below code segfaults, but only on the linux machine. Running some random tests, it gets more curious. The linux machine reports sizeof(Record) to be 528. Therefore, if I tried to memcpy from record+500 into buf for 1 byte, it shouldn't have a problem. In fact, the biggest offset I can get from record is 254. That is, memcpy(buf1, record+254, 1) works, but memcpy(buf1, record+255, 1) segfaults. Does anyone know what I'm missing?

        Record *record = malloc(sizeof(Record));
        record->tid = tid;
        record->opType = OP_WRITE;
        record->opArg = sector;
        int i;
        for (i = 0; i < DISK_SECTOR_SIZE; i++) {
            record->data[i] = buf[i];   // *buf is passed into this function
        }
        char* buf1 = malloc(DISK_SECTOR_SIZE);
        char* buf2 = malloc(DISK_SECTOR_SIZE);
        d_read(ad->disk, ad->curLogSector, buf1);
        d_read(ad->disk, ad->curLogSector+1, buf2);
        memcpy(buf1+offset, record, DISK_SECTOR_SIZE-offset);
        memcpy(buf2, record+DISK_SECTOR_SIZE-offset, offset+sizeof(Record)-sizeof(record->data));
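    One detail worth noting (an observation, not part of the question): record is a Record*, so record+DISK_SECTOR_SIZE-offset advances the pointer in sizeof(Record)-byte steps rather than single bytes, which lands far past the 528-byte allocation and lines up with the segfault. A minimal sketch of the byte-accurate version, reusing the question's Record and DISK_SECTOR_SIZE names (the helper itself is hypothetical):

        #include <stddef.h>
        #include <string.h>

        /* Hypothetical helper, not from the original project: split one Record
         * across two sector buffers using byte-accurate pointer arithmetic.
         * Casting to char* is the point -- arithmetic on a Record* moves in
         * sizeof(Record)-byte steps. */
        static void split_record(const Record *record, size_t offset,
                                 char *buf1, char *buf2)
        {
            const char *raw = (const char *)record;
            size_t first = DISK_SECTOR_SIZE - offset;           /* bytes that fit in sector N   */
            memcpy(buf1 + offset, raw, first);                   /* tail of sector N             */
            memcpy(buf2, raw + first, sizeof(Record) - first);   /* remainder into sector N + 1  */
        }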

    Read the article

  • SQL Server Log File Size Management

    - by Rob
    I have all my databases in full recovery and have the log backups happening every 15 minutes, so my log files are usually pretty small. The question is: if there is a nightly operation that causes lots of transactions and makes my log files grow, should I shrink them back down afterward? Does having a gigantic log file negatively affect database performance? Disk space isn't an issue at this time.
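    If you do decide to shrink after a one-off growth event, a minimal sketch (the database name, logical log file name and target size below are placeholders, not from the question):

        -- How full is each transaction log right now?
        DBCC SQLPERF(LOGSPACE);

        -- One-off shrink of a single database's log back to roughly 1 GB.
        USE MyDatabase;
        DBCC SHRINKFILE (MyDatabase_log, 1024);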

    Read the article

  • Error in unzipping a file from a python script running as daemon

    - by Fedrick
    I am getting an error whenever I try to run the following unzip command from a Python script which is running as a daemon.

    Command:

        unzip abcd.zip /dev/null

    Error:

        End-of-central-directory signature not found.  Either this file is not
        a zip file, or it constitutes one disk of a multi-part archive.  In the
        latter case the central directory and zipfile comment will be found on
        the last disk(s) of this archive.
        unzip: cannot find zipfile directory in one of abcd.zip or abcd.zip.zip,
        and cannot find abcd.zip.ZIP, period.

    Could anyone help me in this regard? Thanks in advance.
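    For what it's worth, daemonized scripts typically run with a working directory of / and a stripped environment, so relative paths stop resolving. A sketch (Python 3, paths hypothetical) that passes absolute paths and captures stderr instead of discarding it:

        import subprocess

        # Sketch only: use absolute paths and keep stderr, which otherwise hides
        # the real reason unzip cannot open the archive.
        result = subprocess.run(
            ["unzip", "-o", "/var/data/abcd.zip", "-d", "/var/data/extracted"],
            capture_output=True, text=True)
        if result.returncode != 0:
            print(result.stderr)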

    Read the article

  • Shortest-path algorithms which use a space-time tradeoff?

    - by Chris Mounce
    I need to find shortest paths in an unweighted, undirected graph. There are algorithms which can find a shortest path between two nodes, but this can take time. There are also algorithms for computing shortest paths for all pairs of nodes in the graph, but storing such a lookup table would take lots of disk space. What I'm wondering: Is there an algorithm which offers a space-time tradeoff that's somewhere between these two extremes? In other words, is there a way to speed up a shortest-path search, while using less disk space than would be occupied by an all-pairs shortest-path table? I know there are ways to efficiently store lookup tables for this problem, and I already have a couple of ideas for speeding up shortest-path searches using precomputed data. But I don't want to reinvent the wheel if there's already some established algorithm that solves this problem.
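    One established family of techniques in exactly this middle ground is landmark-based (ALT-style) preprocessing: precompute exact distances from a small set of landmark nodes, then use the triangle inequality to get lower bounds that prune or guide the point-to-point search. The sketch below is illustrative only; the adjacency-dict representation and the landmark choice are assumptions, not from the question:

        from collections import deque

        def bfs_distances(adj, source):
            """Exact unweighted shortest-path distances from one node (assumes a connected graph)."""
            dist = {source: 0}
            queue = deque([source])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            return dist

        def build_landmark_tables(adj, landmarks):
            """Space cost is len(landmarks) * |V|, far less than an all-pairs table."""
            return [bfs_distances(adj, lm) for lm in landmarks]

        def lower_bound(tables, s, t):
            """Triangle-inequality bound |d(L,s) - d(L,t)| <= d(s,t); usable as an
            admissible A* heuristic to speed up a point-to-point search."""
            return max(abs(tab[s] - tab[t]) for tab in tables)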

    Read the article

  • global scope of variable

    - by shantanuo
    The following shell script will check the disk space and change the variable "diskfull" to 1 if the usage is more than 10%. The last echo always shows 0. I tried "global diskfull=1" in the if clause but it did not work. How do I change the variable to 1 if the disk usage is more than 10%?

        #!/bin/sh
        diskfull=0
        ALERT=10
        df -HP | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' | while read output;
        do
          #echo $output
          usep=$(echo $output | awk '{ print $1}' | cut -d'%' -f1 )
          partition=$(echo $output | awk '{ print $2 }' )
          if [ $usep -ge $ALERT ]; then
            diskfull=1
            exit
          fi
        done
        echo $diskfull
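    For context, each stage of a pipeline in sh runs in a subshell, so diskfull=1 inside the while loop changes a copy that vanishes when the loop ends. One common workaround is to keep the loop out of the pipeline; a sketch (not a drop-in replacement, it no longer tracks the partition name):

        #!/bin/sh
        diskfull=0
        ALERT=10
        # Iterate over the usage percentages in the current shell, not a subshell.
        for usep in $(df -HP | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 }' | cut -d'%' -f1)
        do
          if [ "$usep" -ge "$ALERT" ]; then
            diskfull=1
            break
          fi
        done
        echo $diskfull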

    Read the article

  • A couple questions using fwrite/fread with data structures

    - by Nazgulled
    Hi, I'm using fwrite() and fread() for the first time to write some data structures to disk and I have a couple of questions about best practices and proper ways of doing things. What I'm writing to disk (so I can later read it back) is all user profiles inserted in a Graph structure. Each graph vertex is of the following type:

        typedef struct sUserProfile {
            char name[NAME_SZ];
            char address[ADDRESS_SZ];
            int socialNumber;
            char password[PASSWORD_SZ];
            HashTable *mailbox;
            short msgCount;
        } UserProfile;

    And this is how I'm currently writing all the profiles to disk:

        void ioWriteNetworkState(SocialNetwork *social) {
            Vertex *currPtr = social->usersNetwork->vertices;
            UserProfile *user;
            FILE *fp = fopen("save/profiles.dat", "w");
            if(!fp) {
                perror("fopen");
                exit(EXIT_FAILURE);
            }
            fwrite(&(social->usersCount), sizeof(int), 1, fp);
            while(currPtr) {
                user = (UserProfile*)currPtr->value;
                fwrite(&(user->socialNumber), sizeof(int), 1, fp);
                fwrite(user->name, sizeof(char)*strlen(user->name), 1, fp);
                fwrite(user->address, sizeof(char)*strlen(user->address), 1, fp);
                fwrite(user->password, sizeof(char)*strlen(user->password), 1, fp);
                fwrite(&(user->msgCount), sizeof(short), 1, fp);
                break;
                currPtr = currPtr->next;
            }
            fclose(fp);
        }

    Notes: The first fwrite() you see will write the total user count in the graph so I know how much data I need to read back. The break is there for testing purposes. There are thousands of users and I'm still experimenting with the code.

    My questions: After reading this I decided to use fwrite() on each element instead of writing the whole structure. I also avoid writing the pointer to the mailbox as I don't need to save that pointer. So, is this the way to go? Multiple fwrite()'s instead of a global one for the whole structure? Isn't that slower? How do I read back this content? I know I have to use fread() but I don't know the size of the strings, because I used strlen() to write them. I could write the output of strlen() before writing the string, but is there any better way without extra writes?
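    A minimal sketch of the length-prefix idea the question already raises (the helper names are illustrative, not from the original project): write each string's length before its bytes, so the reader knows exactly how much to fread() back.

        #include <stdio.h>
        #include <string.h>

        /* Write a length-prefixed string. */
        static void write_string(FILE *fp, const char *s)
        {
            size_t len = strlen(s);
            fwrite(&len, sizeof(len), 1, fp);
            fwrite(s, 1, len, fp);
        }

        /* Read a length-prefixed string back into a fixed-size buffer. */
        static int read_string(FILE *fp, char *out, size_t cap)
        {
            size_t len = 0;
            if (fread(&len, sizeof(len), 1, fp) != 1 || len >= cap)
                return -1;                 /* corrupt record or buffer too small */
            if (fread(out, 1, len, fp) != len)
                return -1;
            out[len] = '\0';
            return 0;
        }

    The alternative with no extra writes is to store the fixed-size arrays in full (NAME_SZ bytes every time) and accept the wasted padding; that trades file size for simplicity.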

    Read the article

  • Java object caching, which is faster, reading from a file or from a remote machine?

    - by Kumar225
    I am at a point where I need to take the decision on what to do when caching of objects reaches the configured threshold. Should I store the objects in an indexed file (like the one provided by JCS) and read them from the file (file I/O) when required, or have the objects stored in a distributed cache (network, serialization, deserialization)? We are using Solaris as the OS.

    Adding some more information: I am asking this so as to determine whether I can switch to distributed caching. The remote server which will hold the cache will have more memory and a better disk, and it will only be used for caching. One of the reasons we cannot increase the locally cached objects is that they are stored in the JVM heap, which has limited memory (we use a 32-bit JVM).

    Thanks, we finally ended up choosing Coherence as our cache product. It provides many cache configuration topologies: in-process vs. remote vs. disk, etc.

    Read the article

  • Lost P/W Tape Backup

    - by Jeff Rawlins
    We need to perform a complete restore of a critical server from tape backup (Veritas). We found out that the former IT director put a password on the tape, and we cannot locate or figure it out. Options for recovery?

    Read the article

  • LNK1106 with big binary resource

    - by E Dominique
    I have a rather huge .dat-file (896MB) included as a BIN resource in my project. Now I get a LNK1106 link error ("fatal error LNK1106: invalid file or disk full: cannot seek to 0x382A3920".) I use Visual Studio 2005 under Windows XP, and have tried on a 4GB RAM machine with high Virtual Memory settings and lots of disk space. I have tried a number of different optimization flags, but to no avail. Does anyone have a clue? EDIT: I have narrowed it down to a specific size of the compiled resource. If the .res file is 544078588 bytes (about 518.9MB) or larger, the error occurs. If it is smaller it works just fine. Still no solution, though...

    Read the article

  • Server restarted while rebuilding array, what to do?

    - by user239054
    It's an HP ProLiant DL380 Generation 7 that has four hard drives. One is dead, and I suppose another one is dead too, but is there any way to force the array to rebuild again? The server restarted while it was rebuilding the array. My Windows Server 2008 installation on it won't boot; it goes directly to the system recovery screen. I have an image backup; would restoring it be my only option? If I restore, will it come back to normal automatically, or will I have to configure something?

    Read the article

  • How do I merge a local branch into TFS

    - by Johnny
    hi, I did a stupid thing and branched my project on my local disk instead of doing it on the TFS. So now I have two projects on my disk: the old one which has TFS bindings and the new, which doesn't. I want to merge those changes back into the TFS project. How would I go about doing that? I can't do Compare because my local branch has no TFS bindings. There should be some way to compare the differences between the two projects locally and then meld the differences into the old project and check-in, but I can't find an easy way of doing that. Any other solutions?

    Read the article

  • Windows 7 Ultimate Restore From Another Machine

    - by Guy Thomas
    The situation: I have an old machine running Windows Server 2008 and I make a backup of selected directories. New machine: I install Windows 7 Ultimate (and Windows Server 2008 [dual boot]). I cannot restore the files that I backed up on the old machine. On the new machine, neither the Windows 7 nor the Windows Server 2008 Restore (Recovery) can 'see' the backup files, even though they are on the local machine. Is this behaviour (non-behaviour) 'by design', or am I missing something?

    Read the article

  • Are there any free .NET OCR libraries that will perform OCR on an application window directly?

    - by Kelsey
    I am looking for a free .NET OCR library that will be able to do OCR on a given application window or even an image in memory (I can take a snapshot of the application window myself). I have looked at tessnet2 and MODI, but both require an image located on disk. I need to use OCR because the application I am trying to write a script for does some wacky stuff that cannot be read using the Windows API, and I need to scrape data from the screen. I have tested both tessnet2 and MODI and they can both mostly read the text, but because this has to run in an environment that will not be able to write to disk, I need it to be able to read directly from the application window or some type of memory stream. I am thinking OCR is my only solution, but there could be other methods that I am not thinking of. Suggestions?

    Read the article

  • P2V Server 2008R2 with vmware converter 4.3.0 No source volumes

    - by Peter
    I'm trying to P2V a Windows Server 2008 R2 box to VMware vSphere 4.1 using VMware vCenter Converter Standalone 4.3. Source box: Windows Server 2008 R2, 24 GB RAM, dual quad-core CPUs, one 680 GB RAID 5 drive split into a 3 GB "Recovery" partition, a 40 GB OS partition and a 637 GB data partition. When we try to convert using the standalone converter and the built-in vCenter converter, there are no source volumes listed. Has anyone dealt with this before?

    Read the article

  • xp stop error 50 page fault in non paged area

    - by Tony
    I have a laptop that fails to boot with the BSOD error: "page_fault_in_non_paged_area" STOP: 0x00000050 (0xEC6B738D, 0x00000000, 0x8649308C, 0x00000000). The laptop has 2 memory DIMMs. I removed each DIMM one at a time and the error remained with just one DIMM installed. I have run SpinRite 6.0 on the hard drive; no errors were found. I booted to recovery mode and ran CHKDSK /R; it found and fixed errors, but I still get the stop error. Any other suggestions to try?

    Read the article

  • Saving a huge dataset into SQL table in a XML column - MS SQL, C#.Net

    - by NLV
    Hello all. I have a requirement where I have to save a dataset, which has multiple tables in it, into a SQL table's XML column by converting it into XML. The problem is that in some cases the dataset grows extremely huge and I get an OutOfMemoryException. What I basically do is convert the dataset into an XML file, save it to the local disk, and then load the XML again and send it as a stored procedure parameter. But when I write the dataset XML to disk, the file is over 700 MB, and when I load it into an XMLDocument object in memory I get an OutOfMemoryException. How can I get the dataset XML without saving it to a file and re-reading it again? Thank you, NLV

    Read the article

  • Temp file that exists only in RAM?

    - by Auraomega
    I'm trying to write an encryption program using the OTP method. In keeping with the security theory, I need the plaintext documents to be stored only in memory and never ever written to a physical drive. The tmpnam command appears to be what I need, but from what I can see it saves the file on the disk and not in RAM. Using C++, is there any (platform-independent) method that allows a file to exist only in RAM? I would like to avoid using a RAM disk method if possible. Thanks. Edit: Thanks, it's more just a learning thing for me. I'm new to encryption and just working through different methods, and I don't actually plan on using many of them (especially OTP, due to doubling the original file size because of the "pad"). If I'm totally honest, I'm a Linux user, so ditching Windows wouldn't be too bad. I'm looking into using RAM disks for now, as FUSE seems a bit overkill for a "learning" thing.
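    A minimal sketch of one portable option: keep the plaintext in a std::stringstream, which behaves like a read/write stream but lives only in the process's memory (note it can still be paged out to swap; pinning pages is a separate, OS-specific step).

        #include <sstream>
        #include <string>

        int main()
        {
            std::stringstream plaintext;        // in-memory buffer instead of a temp file
            plaintext << "attack at dawn";      // "write" to it like a file

            std::string line;
            std::getline(plaintext, line);      // "read" it back
            return 0;
        }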

    Read the article

  • Load a MySQL innodb database into memory

    - by jack
    I have a MySQL InnoDB database at 1.9 GB, as shown by the following query:

        SELECT table_schema "Data Base Name",
               sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB",
               sum( data_free ) / 1024 / 1024 "Free Space in MB"
        FROM information_schema.TABLES
        GROUP BY table_schema;

        +--------------------+----------------------+------------------+
        | Data Base Name     | Data Base Size in MB | Free Space in MB |
        +--------------------+----------------------+------------------+
        | database_name      |        1959.73437500 |   31080.00000000 |
        +--------------------+----------------------+------------------+

    My questions are: Does this mean that if I set innodb_buffer_pool_size to 2 GB or larger, the whole database can be loaded into memory, so far fewer reads from disk are needed? What does the free space of 31 GB mean? If the maximum RAM that can be allocated to innodb_buffer_pool_size is 1 GB, is it possible to specify which tables are loaded into memory while keeping others always read from disk? Thanks in advance.
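    For the first question, the buffer pool size is set in the server configuration; a sketch (the 2G value just mirrors the figure in the question, and should be adjusted to what the rest of the system can spare):

        # my.cnf
        [mysqld]
        innodb_buffer_pool_size = 2G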

    Read the article

  • Fully automated SQL Server Restore

    - by hasen j
    I'm not very fluent with SQL Server commands. I need a script to restore a database from a .bak file and move the logical data and logical log files to a specific path. I can do:

        restore filelistonly from disk='D:\backups\my_backup.bak'

    This will give me a result set with a column LogicalName. Next I need to use the logical names from the result set in the restore command:

        restore database my_db_name from disk='d:\backups\my_backups.bak'
        with file=1,
        move 'logical_data_file' to 'd:\data\mydb.mdf',
        move 'logical_log_file' to 'd:\data\mylog.ldf'

    How do I capture the logical names from the first result set into variables that can be supplied to the "move" command? I think the solution might be trivial, but I'm pretty new to SQL Server.
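    One common pattern is to capture the file list into a temp table with INSERT ... EXEC, pull the logical names into variables, and build the RESTORE as dynamic SQL. This is only a sketch: the #filelist column list below is assumed to match SQL Server 2005's FILELISTONLY output and must be verified against your version, since INSERT ... EXEC fails if the shapes differ.

        -- Assumption: column list matches your server's RESTORE FILELISTONLY output.
        CREATE TABLE #filelist (
            LogicalName nvarchar(128), PhysicalName nvarchar(260), Type char(1),
            FileGroupName nvarchar(128), Size numeric(20,0), MaxSize numeric(20,0),
            FileId bigint, CreateLSN numeric(25,0), DropLSN numeric(25,0),
            UniqueId uniqueidentifier, ReadOnlyLSN numeric(25,0), ReadWriteLSN numeric(25,0),
            BackupSizeInBytes bigint, SourceBlockSize int, FileGroupId int,
            LogGroupGUID uniqueidentifier, DifferentialBaseLSN numeric(25,0),
            DifferentialBaseGUID uniqueidentifier, IsReadOnly bit, IsPresent bit);

        INSERT INTO #filelist
        EXEC('restore filelistonly from disk=''D:\backups\my_backup.bak''');

        DECLARE @data sysname, @log sysname, @sql nvarchar(max);
        SELECT @data = LogicalName FROM #filelist WHERE Type = 'D';
        SELECT @log  = LogicalName FROM #filelist WHERE Type = 'L';

        -- Build and run the restore with the captured logical names.
        SET @sql = 'restore database my_db_name from disk=''d:\backups\my_backups.bak''
                    with file=1,
                    move ''' + @data + ''' to ''d:\data\mydb.mdf'',
                    move ''' + @log  + ''' to ''d:\data\mylog.ldf''';
        EXEC(@sql);
        DROP TABLE #filelist;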

    Read the article

  • Windows 2003 registry corrupt - endless reboot

    - by Jack
    Windows 2003 will run the loading screen, then it stops with "STOP: c0000218 registry file failure or corrupt. The registry cannot load the hive \systemroot\system32\config\security". It then starts a countdown about dumping the physical memory to disk and reboots itself again. I found "Error starting Windows SBS 2003 - STOP: c0000218", but the hive directory in that question is different from mine. Are the recovery console steps the same to try?

    Read the article

  • cannot access new drive through nfs

    - by l.thee.a
    I am running nfs-kernel-server to access the files on my Linux machine (Ubuntu, /share). The disk I have been using is full, so I have added a new disk and mounted it at /share/data. My other PC mounts the /share folder at /mnt/nfs, but cannot see the contents of /mnt/nfs/data. I have tried adding /share/data to /etc/exports, but it did not help. What do I do?
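    For reference, a sketch of what /etc/exports might look like (the client network here is a placeholder): a filesystem mounted underneath an exported directory is not exported automatically, so it needs its own entry, and the client then mounts it separately (or look at the crossmnt export option).

        /share       192.168.1.0/24(rw,sync,no_subtree_check)
        /share/data  192.168.1.0/24(rw,sync,no_subtree_check)

    After editing, re-export with exportfs -ra and mount /share/data on the client at /mnt/nfs/data.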

    Read the article

  • What is the current state of Ubuntu's transition from init scripts to Upstart? [migrated]

    - by Adam Eberlin
    What is the current state of Ubuntu's transition from init.d scripts to upstart? I was curious, so I compared the contents of /etc/init.d/ to /etc/init/ on one of our development machines, which is running Ubuntu 12.04 LTS Server. # /etc/init.d/ # /etc/init/ acpid acpid.conf apache2 --------------------------- apparmor --------------------------- apport apport.conf atd atd.conf bind9 --------------------------- bootlogd --------------------------- cgroup-lite cgroup-lite.conf --------------------------- console.conf console-setup console-setup.conf --------------------------- container-detect.conf --------------------------- control-alt-delete.conf cron cron.conf dbus dbus.conf dmesg dmesg.conf dns-clean --------------------------- friendly-recovery --------------------------- --------------------------- failsafe.conf --------------------------- flush-early-job-log.conf --------------------------- friendly-recovery.conf grub-common --------------------------- halt --------------------------- hostname hostname.conf hwclock hwclock.conf hwclock-save hwclock-save.conf irqbalance irqbalance.conf killprocs --------------------------- lxc lxc.conf lxc-net lxc-net.conf module-init-tools module-init-tools.conf --------------------------- mountall.conf --------------------------- mountall-net.conf --------------------------- mountall-reboot.conf --------------------------- mountall-shell.conf --------------------------- mounted-debugfs.conf --------------------------- mounted-dev.conf --------------------------- mounted-proc.conf --------------------------- mounted-run.conf --------------------------- mounted-tmp.conf --------------------------- mounted-var.conf networking networking.conf network-interface network-interface.conf network-interface-container network-interface-container.conf network-interface-security network-interface-security.conf newrelic-sysmond --------------------------- ondemand --------------------------- plymouth plymouth.conf plymouth-log plymouth-log.conf plymouth-splash plymouth-splash.conf plymouth-stop plymouth-stop.conf plymouth-upstart-bridge plymouth-upstart-bridge.conf postgresql --------------------------- pppd-dns --------------------------- procps procps.conf rc rc.conf rc.local --------------------------- rcS rcS.conf --------------------------- rc-sysinit.conf reboot --------------------------- resolvconf resolvconf.conf rsync --------------------------- rsyslog rsyslog.conf screen-cleanup screen-cleanup.conf sendsigs --------------------------- setvtrgb setvtrgb.conf --------------------------- shutdown.conf single --------------------------- skeleton --------------------------- ssh ssh.conf stop-bootlogd --------------------------- stop-bootlogd-single --------------------------- sudo --------------------------- --------------------------- tty1.conf --------------------------- tty2.conf --------------------------- tty3.conf --------------------------- tty4.conf --------------------------- tty5.conf --------------------------- tty6.conf udev udev.conf udev-fallback-graphics udev-fallback-graphics.conf udev-finish udev-finish.conf udevmonitor udevmonitor.conf udevtrigger udevtrigger.conf ufw ufw.conf umountfs --------------------------- umountnfs.sh --------------------------- umountroot --------------------------- --------------------------- upstart-socket-bridge.conf --------------------------- upstart-udev-bridge.conf urandom --------------------------- --------------------------- ureadahead.conf --------------------------- ureadahead-other.conf 
--------------------------- wait-for-state.conf whoopsie whoopsie.conf To be honest, I'm not entirely sure if I'm interpreting the division of responsibilities properly, as I didn't expect to see any overlap (of what framework handles which services). So I was quite surprised to learn that there was a significant amount of overlap in service references, in addition to being unable to discern which of the two was intended to be the primary service framework. Why does there seem to be a fair amount of redundancy in individual service handling between init.d and upstart? Is something else at play here that I'm missing? What is preventing upstart from completely taking over for init.d? Is there some functionality that certain daemons require which upstart does not yet have, which are preventing some services from converting? Or is it something else entirely?

    Read the article
