Search Results

Search found 14157 results on 567 pages for 'drive failure'.


  • Windows Vista will not boot; the file header checksum does not match the computed checksum

    - by Magnus
    Right out of the blue, my wife's Sony Vaio stopped booting. This not-so-fun error message displays immediately after POST: "The system cannot boot. The file is possibly corrupt. The file header checksum does not match the computed checksum." The repair option on the Vista DVD says everything is fine and dandy; it couldn't be happier or more clueless... Any ideas? Update: CHKDSK reports no issues. CHKDSK /r reports no issues. (Heck, both Windows Repair and CHKDSK could just as well tell me that I have won the lottery or that the earth is flat...) Some have reported that a memory diagnostic could help, but the memory diagnostic has just run through 5 passes and it doesn't seem to help. According to Sony, pressing F10 should bring up the restore menu, but it doesn't; the error pops up straight after the BIOS POST. It seems this error comes up before any of those options at this point, and it doesn't put a smile on my face. I have attached an external USB drive and copied all user data/documents to it. I feel an OS re-install is around the corner.
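
    A minimal sketch of one recovery step sometimes tried for this error, assuming the Vista DVD's command prompt is reachable and that Windows lives on C: as seen from the recovery environment (both drive letters are assumptions, not from the post):

        rem Offline system-file check, run from the recovery command prompt
        sfc /scannow /offbootdir=C:\ /offwindir=C:\Windows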

    Read the article

  • How can I do an SELinux filesystem relabel without rebooting first?

    - by Skaperen
    I can touch the file /.autorelabel and reboot, and during initialization on the way back up it will do the SELinux relabel for me. But I want to do this in a different situation, where the system has just been copied to a hard drive image. I can chroot to the originating file tree, or chroot to the just-populated device image, and run it. I just can't find anything that says what to run. This image is being made into an AMI on AWS EC2, and contains CentOS 6.3. But the time it takes to relabel is too long (6 minutes or more). I want to move the relabel to the image build, where the extra time is not an issue (because it happens once instead of every time an AMI is launched). I can make this relabel be the very last thing just before the filesystem is unmounted for the last time, until it becomes an AMI and will launch. I just need to know what to call to do it. I have searched man pages with no luck. I have searched system init scripts, but where /.autorelabel is detected it is unclear what is happening. Documents like http://www.centos.org/docs/5/html/5.2/Deployment_Guide/sec-sel-fsrelabel.html only tell how to do things that still really do the work after a reboot. I need the work done BEFORE the "reboot" (unmount, build AMI, and launch ready to go). The big point is ... yes, there will be a reboot ... but I want the relabel work to be done before that, so it won't be done every time an AMI is launched (because it takes so long).
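
    A minimal sketch of what a pre-boot relabel can look like, assuming the freshly populated image is mounted at /mnt/image and uses the targeted policy that ships with CentOS 6.3 (the mount point and policy paths are assumptions, not from the post):

        # Relabel the image in place; -r makes setfiles treat /mnt/image as the
        # filesystem root, and the file_contexts spec comes from the image itself
        setfiles -r /mnt/image \
            /mnt/image/etc/selinux/targeted/contexts/files/file_contexts \
            /mnt/image

    If that works, the /.autorelabel flag file can simply be left off the image so the first boot skips the relabel.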

    Read the article

  • How do I keep folders synced and backed up between two Macs using a Linux NAS (rsync?)

    - by Hultner
    I've got two primary computers, one Mac Pro and one MacBook Pro for when I'm on the go. I've also got a Linux server which also acts as a NAS. Currently I back up the entire computers to an external drive with Time Machine, which is rather useless and doesn't sync anything. What I really want to do is keep my important files synced between both computers and my NAS (which is running RAID 5); that way I'm not backing up easily replaceable system files, and I've got all my important files in 3 places, two of which are running RAID, so at least 5 drives would have to crash at the same time before actual data loss occurred. Folders I want to keep synced are basically my photo, documents, development, mamp and work folders, and then I want to keep the user Library folder backed up but not synced. I'm thinking that I'd have to use rsync, but I don't know how. Before suggesting Dropbox and similar services: I don't want to use them for several reasons, some of them being security (Dropbox obviously proved this), speed (sometimes I'll sync gigabytes of data and that will be significantly faster locally, and probably even through VPN, as I have a Gigabit pipe), space (space on my NAS is cheap and only practically limited by my needs), reliability (even if my internet were to go down I still need to be able to keep my files synced in case I need to go somewhere on the fly), and price (I already have all the hardware, and for the amount of gigabytes and bandwidth I'd need I doubt that there's any free or cheap service). Those are my main reasons for wanting to keep it local. I'm sorry for any spelling or grammatical mistakes that I might have made. I'm writing this on my smartphone from a shaky train and English isn't my mother tongue. I greatly appreciate any answers, even ones that only partly solve my problem.
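
    A minimal sketch of the backup leg of this, run from either Mac, assuming SSH access to the NAS and a share at nas:/volume/sync (hostname and path are placeholders, not from the post):

        # Mirror the chosen folders to the NAS; -a preserves permissions and times,
        # -z compresses over the wire, --delete keeps the NAS copy an exact mirror
        rsync -avz --delete ~/Documents ~/Pictures ~/Development user@nas:/volume/sync/

    Run periodically (cron or launchd) this covers the backup side; keeping the two Macs in true two-way sync usually needs something on top of plain rsync, such as unison.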

    Read the article

  • Getting Server 2008 R2 to ignore all traffic from Internet-facing NIC, leaving it to a VM

    - by Wolvenmoon
    I got Server 2008 R2 via Dreamspark and would like to start learning on it. I don't have much option but to put it on a system sitting between the Internet and my home LAN, due to electricity bills and the fact that 3 computers in an 11x11 space in 102 degree weather is pretty stygian. Currently I use a ClearOS gateway to manage everything. What I'd like to do is take my Server 2008 R2 box, which has two NICs, and drop it at the head of my network. I'd want Server 2008 R2 to ignore all traffic on the external-facing NIC and pass it to a virtual ClearOS gateway, and to put all its own Internet traffic through its other NIC - which will face the rest of my network and be the default gateway for it. The theory is to keep the potentially vulnerable Server 2008 R2 install as tucked behind a Linux box as possible, without sacrificing too much performance. This is a home network that occasionally hosts dedicated game servers and voice chat servers, so most malicious activity is in the form of drive-by, non-targeted attacks; however, I don't trust Windows Server because I don't know the OS well enough yet. So, three questions: How do I do this? Am I going to be reasonably more secure doing this than if I just let the Server 2008 R2 rig handle all the network traffic and DHCP (not an option)? And should I virtualize the Server 2008 R2 rig instead, and if so, in what? (Core 2 Duo E6600 w/ 5 gigs usable RAM)

    Read the article

  • Install Ubuntu 10.10 from loopback mounted ISO image

    - by Zifre
    I have a laptop with a faulty BIOS that has stopped booting from CDs even though it supports it (and it doesn't support booting from USB drives). I am trying to install Ubuntu 10.10 on it. I already had 9.10 installed. I tried using kexec, but it refused to accept the kernel image. Eventually I found this page which shows how to make GRUB 2 boot from an ISO file. That worked fine, and I am now running the live image from the file. (If I can get this to work, it will be my new preferred way of installing Ubuntu, as it saves CDs and boots much faster.) However, I can't install it. The installer won't make changes to the hard drive, because the partition containing the ISO is mounted (and can't be unmounted because it is in use). Even if I choose to use only other partitions that are not mounted, the installer refuses to go any further. Clearly, it should be possible using other partitions on the same disk. Is there any way to work around this issue or force the installer to go ahead?
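
    A sketch of the workaround commonly reported for this situation, assuming the live session mounted the ISO's host partition at /isodevice (the mount point is an assumption, not from the post):

        # Lazily unmount the partition holding the ISO so the installer no longer
        # sees it as busy, then start the installer from the live session
        sudo umount -l /isodevice
        ubiquity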

    Read the article

  • Variable size encrypted container

    - by Cray
    Is there an application similar to TrueCrypt, but one that can make variable-size containers, as opposed to the fixed-size (or only-growing-to-a-certain-amount) containers that TrueCrypt can make? I want this container to be mountable to a drive/folder, and the size of the outer container not to be much different from the total size of all the files that I put into the mounted folder, while still providing strong encryption. To put it another way, I want a program like TrueCrypt which not only automatically grows the container if I put in new files, but also decreases its size if some files are deleted. I know there are some issues, of course, and it would not work 100% like TrueCrypt, because TrueCrypt basically works at the sector level of the disk, leaving all filesystem control to the OS, so when I remove a file its data might as well be left there, or there might be fragmentation issues that would stop simply truncating the volume from working. But perhaps a program can be built in some other way? Instead of providing a sector-level interface, it would provide a filesystem-level interface? A filesystem inside a file which would support shrinking when files are deleted?
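
    One hedged illustration of the filesystem-level approach described above is EncFS, which encrypts file by file inside an ordinary directory, so the encrypted store grows and shrinks with its contents (mentioned as an example of the idea, not an endorsement; the paths are placeholders):

        # Ciphertext lives in ~/.vault, the cleartext view appears at ~/vault while mounted
        encfs ~/.vault ~/vault
        # ... work with files under ~/vault ...
        fusermount -u ~/vault   # unmount when done

    The trade-off versus a TrueCrypt-style volume is that file sizes, counts and directory structure remain visible in the ciphertext directory.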

    Read the article

  • Windows system restore deletes various executables and *.js files. How does it decide which files to delete?

    - by Leftium
    I restored my system from a Windows System Restore point. It solved some issues I was having, but introduced other strange problems (like my optical drive disappearing). One thing that surprised me was that several files from my Web2Py installation were deleted: the executables and *.js files, possibly some others (like favicon.ico). I did not expect this because Web2Py is basically a portable, standalone application. You just unzip it and run the executable inside, so nothing should be registered with Windows. My question is: what files does Windows System Restore delete, and how does it decide this? I'm just wondering what other files I'm missing and if there's a way to restore them (without rolling back the restore point). Perhaps it scans for certain file types (like exe, js, ico, dll) with a creation date after the restore point's creation date? Some other people who experienced a similar problem: "Dropbox: Lost Files" and "User files missing after run system restore". Update: I found some more references on how Windows System Restore works: "Understanding how System Restore in Windows Vista treats executable files" and "Why Vista's System Restore is Dangerous and What to do About it".

    Read the article

  • Create Windows AMI with instance storage

    - by Jonathan Oliver
    I have a business use case and workflow where local/instance/ephemeral storage for an EC2 instance is ideal. Unfortunately I'm coupled to a Windows platform for this particular task, and the EC2 Windows offering appears to have some deficiencies related to AMI creation. In essence, I'm trying to figure out if there's a way to attach local instance storage to a Windows EC2 instance using the typical command line interface (because the Amazon website GUI doesn't support it) and then to somehow create an AMI based upon that. I've tried creating a snapshot and then creating a Windows AMI based upon the snapshot, but of course the docs say this is unsupported and makes an unbootable AMI. In short, here's what I'm trying to do:
    - Be able to run a Windows instance (EBS/S3 instance doesn't matter).
    - Attach local instance storage as drive D:.
    - Persist that configuration as an AMI such that I can start lots of them as necessary from either the GUI, command line, or REST API.
    - Be able to take a launched instance, update software, shut down, and create another AMI based upon that. Wash, rinse, repeat.
    One other potential option, which isn't horrible but isn't ideal, is to create an AMI which has 2 EBS volumes already attached (system+apps and data). Essentially, every time I start up an instance based upon the AMI it'll create 2 new EBS volumes of pre-determined size. I'm trying to avoid that scenario if possible.
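
    A sketch of how ephemeral storage is typically mapped in at launch time with the EC2 API command line tools, which is the part the web console didn't expose; the AMI ID, instance type and device name below are placeholders, not from the post:

        # Launch from an existing Windows AMI and map the first instance-store
        # volume to xvdb; inside Windows it appears as an extra disk that can be
        # brought online and assigned D:
        ec2-run-instances ami-xxxxxxxx --instance-type m1.large \
            --block-device-mapping "xvdb=ephemeral0"

    The same block-device-mapping flag on ec2-register or ec2-create-image is what bakes the mapping into the AMI itself, so instances started from it should get the ephemeral disk without extra arguments.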

    Read the article

  • Linux: Force fsck of a read-only mounted filesystem?

    - by Timothy Miller
    I'm developing for a headless embedded appliance, running CentOS 6.2. The user can connect a keyboard, but not a monitor, and a serial console would require opening the case, something we don't want the user to have to do. This all pretty much rules out using a recovery USB drive to boot from, unless all it does is blindly reimage the hard drive. I would like to provide some recovery facilities, and I have written a tool that comes up on /dev/tty1 in place of getty to provide these functions. One such function is fsck. I have found out how to remount the root and other file systems read-only. Now that they are read-only, it should be safe to fsck them and then reboot. Unfortunately, fsck complains that the filesystems are mounted and refuses to do anything. How can I force fsck to run on a read-only mounted partition? Based on my research, this is going to have to be something obscure. "-f" just means to force repair of a clean (but unmounted) partition. I need to repair a clean or unclean mounted partition. From what I read, this is something "only experts" should do, but no one has bothered to explain how the experts do it. I'm hoping someone can reveal this to me. BTW, I've noticed that e2fsck 1.42.4 on Gentoo will let you fsck a mounted partition, even one mounted read-write, but it seems only to do so if fsck is run from a terminal, so it can ask the user if they're sure they want to do something so dangerous. I'm not sure if the CentOS version does the same thing, but it appears that fsck CAN repair a mounted partition; it just flatly refuses to when not run from a terminal. One last-resort option is for me to compile my own hacked fsck, but I'm afraid I'll mess it up in some unexpected way. Thanks! Note: Originally posted here.
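
    A sketch of the non-interactive attempt that follows from the observation above, assuming an ext filesystem on /dev/sda1 that has already been remounted read-only; whether e2fsck accepts a piped answer in place of a real terminal varies by version, so this is an assumption to test rather than a known fix:

        # Remount read-only, then force a full check; -y pre-answers the repair
        # prompts, and the piped "y" is meant to satisfy the "really continue on a
        # mounted filesystem?" question when no terminal is attached
        mount -o remount,ro /
        yes | e2fsck -f -y /dev/sda1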

    Read the article

  • Facebook Chat through XMPP protocol on Pidgin Portable - Will not Authorize

    - by Sara Neff
    I heard you can use Facebook chat on desktops now. That's awesome! What I didn't hear is that it is a pain in the butt! Not awesome! I've followed six nearly identical sets of instructions from six different websites, including the one that Facebook generates for you, to get Facebook chat connected through Pidgin. It's the latest portable version, so from what I hear the plugin is out of the question. Whenever I try to connect I get a message saying "Not Authorized" and buttons to either modify the account info or retry. NOTHING I have done has fixed this, and I can't find anything remotely useful anywhere. I am running Windows XP, and running Pidgin (portable) off of a flash drive. Someone please tell me what I have to do. I read about authorizing the chat on my actual Facebook page. I'd have tried that if I could find out how to do it, but if it's there they hid it well. HELP?!
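
    For reference, the XMPP account settings generally reported to work for Facebook chat in Pidgin at the time looked roughly like this; note that the username has to be the Facebook username set on the profile (not the e-mail address), which was a common cause of "Not Authorized". This is a sketch from memory, not taken from the post:

        Protocol:       XMPP
        Username:       your Facebook username
        Domain:         chat.facebook.com
        Resource:       Pidgin (anything works)
        Password:       your Facebook password
        Advanced:       connect server chat.facebook.com, port 5222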

    Read the article

  • Cannot log in to Windows 7 in normal mode but can in safe mode

    - by Guy
    I have a Windows 7 Ultimate computer (Shuttle) that I built myself, and in it I put a solid state drive (SSD). It's been working well for a number of months, but now when I start it there are problems. I have 2 users set up on the computer, and when I try to sign in with either user it claims that the password is incorrect. I could understand the odd typo, but I've had my wife try it as well and we've got the passwords correct. On top of that, it will remain at the login screen for 1 minute and 20 seconds and then spontaneously reboot without shutting down. So I'm trying to work out if this is a hard disk problem or something else. Any ideas? (I have a nightly backup to a WHS so it will be easy to recover, but I don't want to do that unless I have to, and I don't want to waste time putting in a new HD just to discover it's something else.) More info: If I start in Safe Mode I am able to log in with the password and all appears as normal as it can in Safe Mode. However, a normal boot continues with the same problem.

    Read the article

  • Windows Media Player 12 Library import keeps dying

    - by duckworth
    I cannot get WMP 12 to import my library. I have searched around various forums and tried all the common solutions like disabling media sharing, deleting my %LOCALAPPDATA%\Microsoft\Media Player directory and reimporting, etc., but nothing works. I have even removed the Media features from Windows setup and re-added them. I have a large mp3 collection shared on the network from another Windows box. I add the folder (tried as a mapped drive and as a UNC path) and it begins importing. About 30 minutes into the import (when the CurrentDatabase_372.wmdb hits just under 400MB) WMP stops importing, all of the icons in WMP turn to red x's, and my library is gone. I close and reopen WMP 12; the library is empty, the CurrentDatabase_372.wmdb is small, and it starts importing again. Rinse, lather, repeat. I am going nuts, as WMP 11 on Vista handles this same setup perfectly. I am at my wits' end on what else to try. I am running a legit Windows 7 Ultimate x64 RTM install. Here is a screenshot of what WMP 12 looks like when the import dies. Any other ideas? Edit: OK, I just confirmed this is definitely a problem not specific to my computer or configuration. I just did a clean installation of Windows 7 Ultimate x86 on an old test machine, opened WMP 12 and added the same network folder of mp3s, and it crashed about an hour into the import with the same appearance as the screenshot I posted above, and the library disappears. So the problem has to be one of several things:
    - The large size of the library
    - The fact that the library is on the network
    - A specific file or files causing the player to crash

    Read the article

  • Get Python to raise MemoryError instead of eating all my disk space

    - by asmeurer
    If I run a Python program with a memory leak, I would normally expect the program to eventually die with MemoryError. But instead, what happens is that all the virtual memory is used until my disk runs out of space. I am running Mac OS X 10.8 on a Retina MacBook Pro. My computer generally has between 10GB and 20GB free. Mac OS X is smart enough not to die completely when the disk runs out of space (rather, it gives me a dialog letting me force quit my GUI programs). Is there a way to make Python just die when it runs out of real memory, or some reasonable amount of virtual memory? This is what happens on Linux, as far as I can tell. I guess Mac OS X is more generous than Linux with virtual memory (the fact that I have an SSD might be part of this; I don't know just how smart OS X is with this stuff). Maybe there's a way to tell the Mac OS X kernel never to use so much virtual memory that it leaves less than, say, 5 GB free on the hard drive?
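
    A sketch of the usual shell-level approach: cap the process's address space before launching the interpreter, so large allocations fail (and Python raises MemoryError) instead of spilling into swap. The 8 GB figure and the script name are arbitrary placeholders, and how strictly OS X enforces this limit is itself uncertain:

        # Limit virtual memory for this shell and its children to roughly 8 GB
        # (the value is in kilobytes), then run the leaky program
        ulimit -v 8388608
        python leaky_script.py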

    Read the article

  • Please insert a disk into SD/MMC - Vista problem [closed]

    - by Naunidh
    Possible Duplicate: Please insert a disk into SD/MMC - Vista problem
    Hi, I tried inserting my 2GB micro SD card into the built-in card reader. On clicking the drive I get "Please insert a disk into SD/MMC". This problem is really frustrating. The card works fine on other computers, and so does the microSD-to-SD adapter. I have done the following to fix it:
    - Updated Vista and installed SP1.
    - Updated the TI drivers for FlashDrive.
    - Checked the Vaio site for updates (none required).
    - Added a new registry entry HKLM\SYSTEM\ControlSet001\Services\tifm21\Parameters\SDParam = 1 (took the hint from http://tinyurl.com/nk33tp).
    I have restarted the PC multiple times. As soon as I put the card in, the SD/MMC device icon blips, so it seems the hardware is at least detecting something. The card reader was working fine a few days back. I guess some Windows update has broken something; does anyone have any idea how to proceed? My laptop is a VGN-N365E.

    Read the article

  • SQL Server 2008 cluster hangs when a heavy load is run

    - by Billy OT
    We have a SQL Server 2008 active/active cluster running on a Windows 2008 R2 O/S, with 14GB RAM and 4 CPUs. We have set a ceiling of 12GB for SQL Server. We're running an agent job which loads 3 million records to a database. During this load the job fails and the cluster seems to attempt to fail over to the other node, but unsuccessfully, i.e. the cluster address is no longer accessible. We have to manually fail the cluster node back. Watching Task Manager during the load, we can see that memory usage hits a max of 12.5GB and CPU at times hits 100% on all 4 CPUs, but for the most part fluctuates around an average of about 60%. I suppose my question is: will a cluster try to fail over if memory or CPU are taking a heavy hit, or am I barking up the wrong tree? Also, any ideas why it wouldn't fully fail over? We've crawled through logs, of which there are a lot, and can't find anything useful. We've also tried recreating the issue, but it ran successfully at a later time. Also, 3 million rows doesn't seem like a lot, but in terms of resources should 14GB RAM and 4 CPUs not be sufficient? Further information on this: we ran the load again today and corrupted the database! We received the error message: LogWriter: Operating system error 170. It looks like, under the heavy load, the SQL cluster attempted to fail over and in doing so migrated a LUN (or drive), which meant the disk was no longer reachable (this is just our theory). The database is now 'suspect' and requires restoration. The 170 error above also indicates that, on failing over to the other node, the SQL service could not start as the resource was already in use, therefore it couldn't fail over fully? But I'm wondering why it would need to fail over in the first place. My assumptions could be completely wrong on this, so any ideas would be appreciated.

    Read the article

  • Windows 7 Black Screen On Boot, Separate Bootable VHD Works Fine

    - by David Osborn
    I have a Windows 7 x64 install with a bootable VHD (also Windows 7 x64). I was having problems getting my home server to do backups (VSS erred), so I ran chkdsk and used a tool from MS (cleanc2r.exe) to remove an empty Q: drive from the VHD that I believe was a result of installing Office 2010 Beta. (All of this was done on the bootable VHD, not the main install.) Now I can't boot into the main install. It gets past the Starting Windows screen and then goes black. I can still boot into the bootable VHD and everything works fine from there. I have tried to boot the main install in Safe Mode/Safe Mode with Networking/Safe Mode with Command Prompt and it has the same issue. I ran chkdsk /r on the main install, and after doing all the work there was a message about correcting some free space that was marked as allocated, and also that it was unable to make an entry in the event log. I tried the Startup Repair utility and it found no problems. I don't see the setting for restoring to the last known good configuration, so I couldn't do that. I don't recall installing anything new to the main install nor having hooked up any new hardware recently.

    Read the article

  • CUPS log kills Ubuntu 12.04 and sudo permissions changed

    - by peterretief
    I am using Ubuntu 12.04 as a desktop and recently had a weird crash: the log file for CUPS filled up the entire drive and wouldn't let me back in. Also, what changed was that /var/lib/sudo had changed from root to peter (me). I didn't make this change - I checked the history! I set the sudo files back to root and capped the max size for the CUPS log. Has anyone had a similar experience? It feels like someone is messing around with my settings. Is there any way to trace how the error occurred? Logs:

    auth.log:
        Jan 1 02:04:13 peter-desktop lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0)
        Jan 1 02:04:13 peter-desktop lightdm: pam_ck_connector(lightdm:session): nox11 mode, ignoring PAM_TTY :0
        Jan 1 02:06:53 peter-desktop lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0)
        Jan 1 02:06:53 peter-desktop lightdm: pam_ck_connector(lightdm:session): nox11 mode, ignoring PAM_TTY :0

    syslog:
        Jan 1 02:04:13 peter-desktop rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="903" x-info="http://www.rsyslog.com"] start
        Jan 1 02:04:13 peter-desktop rsyslogd: rsyslogd's groupid changed to 103
        Jan 1 02:04:13 peter-desktop rsyslogd: rsyslogd's userid changed to 101
        Jan 1 02:04:13 peter-desktop rsyslogd-2039: Could not open output pipe '/dev/xconsole' [try http://www.rsyslog.com/e/2039 ]
        Jan 1 02:04:13 peter-desktop bluetoothd[898]: Failed to init gatt_example plugin
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000] Initializing cgroup subsys cpuset
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000] Initializing cgroup subsys cpu
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000] Linux version 3.2.0-25-generic-pae (buildd@palmer) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #40-Ubuntu SMP Wed May 23 22:11:24 UTC 2012 (Ubuntu 3.2.0-25.40-generic-pae 3.2.18)
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000] KERNEL supported cpus:
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000] Intel GenuineIntel
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000] AMD AuthenticAMD
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000] NSC Geode by NSC
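
    A sketch of the log-size cap mentioned above, assuming the stock cupsd.conf location on 12.04 (the 1 MB figure is an arbitrary choice):

        # Cap CUPS log files at about 1 MB so a runaway error_log cannot fill the disk again
        echo "MaxLogSize 1m" | sudo tee -a /etc/cups/cupsd.conf
        sudo service cups restart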

    Read the article

  • WIM2VHD failing with "Cannot derive Volume GUID from mount point."

    - by Jacob
    I'm trying to use WIM2VHD according to the instructions on Scott Hanselman's blog post to create a Sysprepped VHD image to boot from. I've installed the WAIK, and I have my Windows 7 sources mounted as a virtual drive. When I try to run WIM2VHD like this:

        cscript WIM2VHD.wsf /wim:F:\sources\install.wim /sku:Ultimate /vhd:E:\WindowsSeven.vhd /size:30721

    I get the following log:

        Log for WIM2VHD 6.1.7600.0 on 11/2/2009 at 10:51:18.16
        Copyright (C) Microsoft Corporation. All rights reserved.
        MACHINE INFO: Build=7600 Platform=x86fre OS=Windows 7 Ultimate ServicePack= Version=6.1 BuildLab=win7_rtm BuildDate=090713-1255 Language=en-ZA
        INFO: Looking for IMAGEX.EXE...
        INFO: Looking for BCDBOOT.EXE...
        INFO: Looking for BCDEDIT.EXE...
        INFO: Looking for REG.EXE...
        INFO: Looking for DISKPART.EXE...
        INFO: Session key is E01E1ED7-C197-4814-BDE4-43B73E14FCC4
        INFO: Inspecting the WIM...
        INFO: Configuring and formatting the VHD...
        *******************************************************************************
        Error: 0: Cannot derive Volume GUID from mount point.
        *******************************************************************************
        INFO: Unmounting the VHD due to error...
        WARNING: In order to help resolve the issue, temporary files have not been deleted. They are in:
        C:\Users\Jacob\AppData\Local\Temp\WIM2VHD.WSF\E01E1ED7-C197-4814-BDE4-43B73E14FCC4
        Summary: Errors: 1, Warnings: 1, Successes: 0
        INFO: Done.

    Any ideas?

    Read the article

  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
    I have done some Googling but I cannot get a definitive answer, certainly not from the Symantec KB. I have a virtualised Win 2003 R2 server, 32-bit. It has been provisioned to me with the Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750MB. It doesn't contain .tmp directories, so it is NOT a corrupt definitions server. It does contain directories named with a date pattern YYYYMMDD.xxx. Some of these folders are 12 months old and I would like to recover the space. The Symantec forums are full of this stuff, but a lot of the postings contain links back to documents that are not specific to the Endpoint Protection client. It appears that I should be able to delete the older folders and all will be OK with a service restart; however, there is a warning about having LiveUpdate Administrator installed. Firstly, I have no idea if I have this installed or how to check, and secondly, can I just ditch these old files and restart? Regards, Gus Denton, Learning and Teaching, Uni of New South Wales, Sydney, Australia. For those trying to assist me, thank you. I have followed some instructions found on the Symantec site and assumed that the response from Nixphoe would resolve my issue. It appears that, as I am on a provisioned VM from a central IT unit, I cannot run the Symantec commands (smc -stop) from the Run prompt, as my admin creds won't get me in. Basically I need to claw back some disk space from the C: drive, which is being filled up with WSUS patches and Symantec files. I have managed to delete one Symantec cache through the LiveUpdate control panel and recovered 470MB. I suppose my last question, for those more experienced than myself, is: can I simply remove, say, the two oldest virus definition folders without completely foobaring Endpoint Protection and the server? Regards, Gus

    Read the article

  • Scheduled task does not run on Windows 2003 server on VMware unattended, runs fine otherwise

    - by lnm
    A scheduled task does not run on a Windows 2003 server on VMware. The same setup runs fine on a standalone server. The test below explains the problem; we really need to run a more complex bat file, but this shows the issue. I have a bat file that copies a file from server A to server B. I use full path names, no drive mapping. It runs fine on server B from a command prompt. I created a task that runs this bat file under a domain ID (with password) that is part of the Administrators group on both servers. The task runs fine from the Scheduled Tasks screen, and as a scheduled task as long as somebody is logged into the server. If nobody is logged in, the task does not run. There is no error message in the Task Scheduler log, just an entry that the task started, but no entry for finish or an error code. To add insult to injury, if the task copies a file in the opposite direction, from server B to server A, it runs fine as a scheduled unattended task. If I copy a file from server B to server B, the task also runs fine unattended. I recreated exactly the same setup on a standalone server. No issues at all. I checked the obvious things: the task has "run only if logged on" unchecked, the domain ID has the "log on as a batch job" privilege and logon rights, and the Task Scheduler service runs as Local System with automatic start. Any suggestions?
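
    For reference, a sketch of recreating the job from the command line so the stored credentials are definitely captured; the task name, script path, schedule and account are placeholders, not from the post:

        rem Create the job so it runs whether or not anyone is logged on;
        rem /RP * prompts for the password so it is stored with the task
        schtasks /Create /TN "CopyFileJob" /TR "D:\jobs\copyfile.bat" ^
            /SC DAILY /ST 02:00:00 /RU DOMAIN\svcaccount /RP *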

    Read the article

  • Everyone can access my Windows 7 Homegroup file shares - Even Windows XP computers

    - by Adrian Grigore
    I have 3 computers in my network, two running Windows 7 and one running Windows XP. I've set up a homegroup on both Windows 7 computers. Also, all computers are in the same workgroup. The problem is that one of the Windows 7 computers makes all shares accessible to the entire workgroup instead of just sharing to the homegroup as it should. I created the file share in Windows 7 via right-click in Explorer, then clicking on "Share For" - "Homegroup (Read/Write)" (translated from German, so the actual wording may be different). Also, when I look at the file sharing properties of that folder, Windows Explorer informs me that users must have a valid account and password for this computer to access drive shares. Unfortunately this is not true: being in the same workgroup is enough to get access. Homegroup restrictions work as expected on my other Windows 7 computer. When trying to browse those shares from the XP computer, I get a dialog asking for a login and password. What might cause the homegroup restrictions to fail, and how can I fix this?

    Read the article

  • Windows 7 libraries and folder redirection nightmare

    - by Lobuno
    Hello! In our Active Directory we deploy a policy to our clients where the personal directory (My Documents) is redirected to a file server of ours: \\server\share\username\Documents. In older systems everything worked fine. In Windows 7 some users are experiencing the following symptoms:
    - The Documents library is EMPTY.
    - Where the Documents library should be shown in Explorer, an empty white icon is displayed. No caption.
    - Right-clicking the Documents library to edit the folders that are part of the library brings the dialog up. However, that dialog is unusable: no folder is present there and clicking Add folder does nothing.
    - Deleting the library and auto-creating it doesn't solve the problem.
    - The shared directory can be accessed via UNC paths and it can be mounted as a shared drive as well. The library is still broken.
    - The shared drives are on a W2008 indexed server...
    - Using the Windows Library tool utility doesn't solve the problem.
    What can the cause of this problem be and how can it be solved?

    Read the article

  • ReadyBoost in Windows 7

    - by Robert Koritnik
    I've bought an SD card today for my photo frame, but when I inserted it into my notebook I saw I could use it for ReadyBoost. Some background: I'm a .NET developer, using VMs and developing web applications (and SharePoint). I use an HP notebook with a Core 2 Duo 2GHz + 4GB RAM + a 320GB 7200rpm HD. I simultaneously run:
    - Visual Studio 2010 with some plugins
    - SQL Server
    - Firefox with at least 10 tabs
    - Chrome with about 5 tabs
    - IIS
    - a VM with a Server 2008 machine and SharePoint
    and occasionally also Photoshop and some InDesign as well. So I don't let my machine have a break. :D Question: If I buy myself a really fast SDHC card (like the SanDisk 16GB Extreme 30MB/s - is there anything faster?) and use it with Windows 7 ReadyBoost, will I see any performance gain? Is it going to work somewhat like Seagate's Momentus hybrid drive with 4GB of solid state storage? What could I actually expect if I do put this card into my machine? And what would be the recommended size? Observations: I guess redirecting the page file to it would speed up the system. Some VMs on it would probably run faster as well, because they could run in parallel to the HD host system, I guess. Am I right or wrong?

    Read the article

  • New SSD freezing on older motherboard (intel G31)

    - by DJM
    I have an ECS G31T-M motherboard running a Core 2 Quad processor, 4GB RAM, Windows 7 32-bit, and a GeForce 9600GT. I bought a SanDisk Extreme III 120GB (SDSSDX-120G-G25) and installed it today. I'm outputting video from the 9600GT with the included TV-out adapter to component video (to my HDTV). This motherboard is SATA 2 and, from what I can tell, SSDs run on the IDE controller and there is nothing fancy for setting up advanced SSD features. I've noticed on other forums (but have not verified with ECS) that this board does not support AHCI. I have two installs of Windows 7 on two drives, the SSD and an old 500GB disk drive. When booted from the older 500GB HDD, video plays fine on the HDTV. When booting the SSD's Windows 7 install, I am freezing constantly, as in: video plays OK for a minute, then the picture freezes for 1-3 minutes (sometimes while audio continues playing) and returns for 20-30 seconds before doing the same thing again. Other tasks such as basic maneuvering through file folders seem to be no problem. Please help!! Do I need a new system for this thing to work, or could there be other fixes? I updated the firmware to R201 to no avail. DJM

    Read the article
