Search Results

Search found 26285 results on 1052 pages for 'grant back'.

Page 224/1052 | < Previous Page | 220 221 222 223 224 225 226 227 228 229 230 231  | Next Page >

  • Upgraded AGPM Server cannot connect to relocated archive

    - by thommck
    We were using Advanced Group Policy Management (AGPM) v3.0 on our Windows Server 2008 DC. It kept the archive on the C: drive. When we upgraded to AGPM v4 we relocated the archive to the D: drive. Now when we try to look at a GPO's history in GPMC we get the following error: Failed to connect to the AGPM Server. The following error occurred: The server was unable to process the request due to an internal error. For more information about the error, either turn on IncludeExceptionDetailInFaults (either from ServiceBehaviorAttribute or from the configuration behavior) on the server in order to send the exception information back to the client, or turn on tracing as per the Microsoft .NET Framework 3.0 SDK documentation and inspect the server trace logs. System.ServiceModel.FaultException (80131501) You are able to click Retry or Cancel. Retry brings up the same error, and Cancel takes you back to GPMC where the History tab displays "Archive not found". I installed the client on a Windows 7 computer (which is an unsupported setup) and it could read the server archive without any issues. I followed the TechNet article "Move the AGPM Server and the Archive" but that didn't make a difference. How can I tell the server where the archive is?

    Read the article

  • Exchange 2010 DAG Automatic Failover Testing Issue: Not always automatically failing over to the healthy copy

    - by Richard
    OK, I've got two Exchange 2010 servers that run the Client Access/Hub Transport/Mailbox roles and one Exchange 2010 server running just the Client Access/Hub Transport roles, which acts as my bridgehead. The two mailbox servers run one database set up in a DAG. Server A shows the DB Mounted and Server B shows Healthy. If I reboot Server A via the Windows GUI, Server B switches from Healthy to Mounted and I see hardly any interruption in service using Outlook 2007. Server A shows "Service down", then "Failed", then "Healthy", and leaves the DB mounted on Server B. This is how it should work, so far so good. Now if I test Server A being shut down cold, or unplug both NICs from the network to simulate a failure, Server B switches from Healthy to Mounted and Server A switches to "Service Down", but my Outlook client never connects to the DB mounted on Server B! I can connect to Server C (Client Access/Hub Transport) and get to my email and even send new email out, but incoming email doesn't deliver until Server A is brought back online and its DB goes back to Healthy status. So I don't understand why it fails over automatically when I reboot the server holding the mounted DB copy, causing very little Outlook 2007 hiccup if any, but when I shut down or disconnect the mounted DB server it DOES mount the healthy copy, yet Outlook 2007 clients can't connect. I hope the picture I'm trying to paint makes some sense; it's driving me a little batty. Any help would be appreciated!
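    For reference, here's roughly how I've been checking copy status from the Exchange Management Shell (SERVERA and SERVERB stand in for my real server names):
      Get-MailboxDatabaseCopyStatus -Server SERVERA
      Get-MailboxDatabaseCopyStatus -Server SERVERB
      Test-ReplicationHealth -Identity SERVERA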

    Read the article

  • Adaptec 6405 RAID controller turned on a red LED

    - by nn4l
    I have a server with an Adaptec 6405 RAID controller and 4 disks in a RAID 5 configuration. Staff in the data center called me because they noticed a red LED was turned on in one of the drive bays. I then checked the status using 'arcconf getconfig 1' and got the status message 'Logical devices/Failed/Degraded: 2/0/1'. The status of the logical devices was listed as 'Rebuilding'. However, I did not see any suspicious status on the affected physical device: the S.M.A.R.T. setting was 'no', the S.M.A.R.T. warnings were '0', and 'arcconf getsmartstatus 1' also reported no problems with any of the disk drives. The 'arcconf getlogs 1 events tabular' command gives lots of output (sorry, I can't paste the log file here as I only have remote console access; I could post a screenshot though). Here are some sample entries: eventtype FSA_EM_EXPANDED_EVENT grouptype FSA_EXE_SCSI_GROUP subtype FSA_EXE_SCSI_SENSE_DATA subtypecode 12 cdb 28 00 17 c4 74 00 00 02 00 00 00 00 data 70 00 06 00 00 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00 0 The 'arcconf getlogs 1 device tabular' command reports mediumErrors 1 for two of the disks. Today I checked the status of the controller again. Everything is back to normal: the controller status is now 'Logical devices/Failed/Degraded: 2/0/0' and the logical devices are all back to 'Optimal'. I was not able to check the LED status; my guess is that the red LED is off again. Now I have a lot of questions: what is a possible cause for the medium errors, and why are they not reported by the SMART log too? Should I replace the disk drives? They were purchased just a month ago. The rebuilding process took one or two days; is that normal? The disks are 2 TB each and the storage system is mostly idling. Also, the timestamps of the logs seem to show the moment of log retrieval, not the moment of the incident. Please advise, all help is much appreciated.
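    Until I understand the cause, I'm thinking of watching the controller from cron with something like the line below (the arcconf install path and the grep pattern are assumptions on my part, based on the commands I ran above):
      0 * * * * /usr/StorMan/arcconf getconfig 1 LD | grep -i 'Status of logical device' | grep -vi optimal && echo 'Adaptec logical device not Optimal' | mail -s 'RAID warning' root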

    Read the article

  • Windows 2008, IIS7 and virtual directories

    - by Thomas
    I created a virtual directory called test (C:\test) under the Default Web Site and added two simple test files (one HTML and one ASPX). I thought I had to add IUSR and NetworkService (for application pools) to C:\test and grant those users appropriate rights in order for IIS7 to serve the content. It appears that is not the case at all, as I can view any files in the virtual directory (even if I convert it to an application) without changing or adding any security settings on the C:\test folder. I just installed IIS7 with ASP.NET on Windows 2008 without changing any settings besides adding the virtual directory. Am I missing something? Even my book on IIS7 states that the user accounts should be added and appropriate rights should be granted. I added the following to answer the comments: I am referencing the file using a public IP, http://xxx.xxx.xxx.xxx/test/one.html, and neither the IP nor localhost is in my trusted sites. I am not signed in on the server at all, as I am accessing the content from my home machine and the content is on my production server. The following users/groups have access to C:\test on the server (Creator Owner, System, Administrators, Users) and the app pool is running under the default NetworkService account. I basically installed Win2008, added the IIS role with ASP.NET, then opened IIS7, added a virtual directory and copied two files to the directory to test. It works, which is great, but I want to understand why it works. How is it that IIS7 can access files in the C:\test folder without any permissions set?
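    For what it's worth, this is roughly how I had planned to check and grant the rights before I noticed it already worked (a sketch using the built-in icacls tool; read access for the IIS_IUSRS group is my assumption of what would be needed):
      icacls C:\test
      icacls C:\test /grant "IIS_IUSRS:(OI)(CI)R"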

    Read the article

  • Plesk SSL Certificate (Default cert when SSL enabled, CORRECT cert when SSL is disabled)

    - by hztetra
    I'm running Plesk 8.6.0 and have an SSL cert installed through Plesk's admin interface, but I have a bit of an issue: when I enable SSL for the site, select my cert, and restart httpd, Plesk defaults to using my self-signed default certificate. Conversely, when I disable SSL support for the domain, all of a sudden Plesk is using my new SSL certificate. Unfortunately, when I try to view any folder on the site (mydomain.tld/folder) I'm simply met with a 404 (with files placed both in httpdocs and httpsdocs). I switch SSL support back on, Plesk defaults back to the default self-signed cert, and I can then view the folders that were not previously accessible. Any ideas? One further note: I tried following http://kb.parallels.com/en/939 . Once I tried to restart httpd with the edited ssl.conf file, I received an 'httpd could not start' error. I restored the original ssl.conf file and still received the 'could not start' error, so as of now I am running without an ssl.conf file. The following is the error I receive when I attempt to reintroduce ssl.conf: Starting httpd: [Mon Aug 23 15:45:40 2010] [warn] module ssl_module is already loaded, skipping (98)Address already in use: make_sock: could not bind to address 0.0.0.0:443 no listening sockets available, shutting down Unable to open logs
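    In case it helps with diagnosis, these are the standard Apache checks I've been running on the box (nothing Plesk-specific; the error above suggests something is already bound to port 443):
      httpd -t
      httpd -S
      netstat -tlnp | grep ':443'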

    Read the article

  • "Dictionary problem." Error with VMPlayer

    - by George Mauer
    I'm pretty new to using VMware virtualization (I've been a VirtualBox user), so I'm hoping you guys can help me out. I recently got an external USB disk containing a VM for a client, downloaded VMware Player, set it up with "Open a Virtual Machine", ran it, easy as pie. After working with it a bit this morning, I shut the VM down, and now trying to start it back up again I get the "Dictionary problem" error. I tried removing the VM from my library; now it happens whenever I try to add it back in. In the meantime, I can still access other virtual machines, so it seems like the problem might be with the virtual disk. So, two questions: This is obviously not a very helpful error message. Where can I go to get more information? My Application event log doesn't contain anything from VMware. What steps can I take to fix the problem? Edit: A couple more pieces of information. I did not take any snapshots; I don't think VMware Player even has that ability. I have a zip file of (what I assume is) the state of the VM when it was sent to me. I cannot unzip it as it is huge and simply requires more HD space than I have available, but I did extract the vmx file and examine it. Other than the UUIDs and the fact that mine reads cleanShutdown = "FALSE", they are identical. The log contains the following lines: Jun 23 10:11:18.080: vmx| SNAPSHOT: SnapshotConfigInfoRead: Unable to load dict from 'E:....\MachineName.vmsd'. Jun 23 10:11:18.080: vmx| SNAPSHOT: SnapshotConfigInfoRead failed for file 'E:....\MachineName.vmx': Dictionary problem (6) Jun 23 10:11:18.082: vmx| SNAPSHOT: Snapshot_TimeStampTiers failed: Dictionary problem (6)
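    One thing I'm considering, based on the .vmsd references in that log, is moving the snapshot dictionary file aside so VMware Player rebuilds it; something like this from a command prompt, with the VM powered off (the folder path is a placeholder for my real VM directory):
      cd /d "E:\VMs\MachineName"
      ren MachineName.vmsd MachineName.vmsd.bak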

    Read the article

  • Failed Backup Job With Backup Exec 12 and AOFO

    - by Mort
    I am backing up a Windows 2003 Small Business Server with SP2. We are running Backup Exec 12 with SP4. Recently the backup job started failing on backing up the system state with the following error: V-79-57344-34110 - AOFO: Initialization failure on: "System State". Advanced Open File Option used: Microsoft Volume Shadow Copy Service (VSS). Snapshot provider error (0xE000FE7D): Access is denied. To back up or restore System State, administrator privileges are required. Check the Windows Event Viewer for details. Upon review of Symantec's website, the error indicates a credential problem. However, when I test the credentials they come back with no failures. I have found another forum here referencing a similar error and have tried what has been indicated, with no successful results. I have created new jobs based on new selection lists, with no successful results. I suspect a new update, possibly from Microsoft, may be causing this, but I have no idea which one. I am looking for feedback. Thanks.
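    In case it's relevant, this is what I've been using to sanity-check VSS on the server itself (built-in vssadmin commands, run from a command prompt on the server):
      vssadmin list writers
      vssadmin list providers
      vssadmin list shadows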

    Read the article

  • James - mail server configuration help needed

    - by Chaitanya
    Hi, I am trying to set up the James mail server on a Linux machine. The machine has a public static IP address assigned. I installed James and, in config.xml, set the servername to mydomain.com. In the DNS for mydomain.com, I created an A record, mx.mydomain.com, which points to the IP address of the above mail server machine, and then added mx.mydomain.com as the MX record for mydomain.com. In James, I created a new user, test. Then from Gmail I sent a mail to [email protected]. The mail never arrives and it didn't even bounce back. The Linux machine is behind a firewall with only ports 22, 80 and 8080 open to the external network. My question here is: do I need to open any other ports on the firewall so that the mail I send from Gmail arrives at James? If it's not a port problem, any views on solving this issue? I don't want to send mails from this server; it's only for receiving mail.
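    My working assumption is that inbound SMTP on port 25 is what's missing, since that's the port other mail servers deliver to. This is how I plan to verify it once the firewall rule is in place (mx.mydomain.com as above; the netstat check just confirms James is listening locally):
      telnet mx.mydomain.com 25
      netstat -tlnp | grep ':25'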

    Read the article

  • Windows Server 2003 R2 Terminal Server: Internet Explorer Enhanced Security won't disable for users

    - by Tubs
    Internet Explorer Enhanced Security (IEES) won't disable using the normal method of removing it from Add/Remove Programs > Windows Components. This came to light immediately after testing. IEES was disabled after Terminal Services were installed for admins and users, and after IE8 was installed. My initial thought was that there was some clash between IE8 and IE6 (which is the default on 2003 R2), so I uninstalled IE8 and reverted back to IE6. The same symptoms were displayed: when a normal user logged on, Internet Explorer Enhanced Security was enforced. I then thought it could be a problem with Terminal Server not recognising the removal, as IEES was on when it was initially installed. I uninstalled the Terminal Server components using the server roles, then reactivated and deactivated IEES. Windows Server 2003 R2 allows a limited number of users to connect to RDP by default, so I logged on as a normal user, and IEES was disabled. I then reinstalled Terminal Server and logged on as a normal user. IEES was back on. Why is this?

    Read the article

  • Does the Intel DX79TO motherboard support x8 devices (SAS HBAs) on PCIe x16 slots?

    - by Zac B
    Context: I have an Intel DX79TO motherboard and a Sun SAS3081E-S/LSI 1060E-S HBA card with a PCIe x8 interface. I plug the HBA into my mobo next to my graphics card, and the HBA power lights illuminate, but the BIOS and OSes (tried Linux, ESXi, Win7) don't see the HBA at all. Question: Does the DX79TO motherboard support non-x16/non-GPU devices in its PCIe x16 slots? According to this question, some consumer motherboards don't support this, but I can't figure out whether or not this motherboard/family does. The answer will affect whether I buy a new motherboard or RMA the SAS card, with money attached to each course, so I figured I'd ask here first. What I've Tried: I've read the spec/manuals for the motherboard and the HBA, and I didn't see anything regarding whether or not the x16 slots were back-compatible to lower lane widths/non graphics-card devices, or whether or not the card could run in wider slots than x8. I've tried contacting Intel, but that was over a month ago and I haven't yet heard anything back except an automated "we got your email!" message.

    Read the article

  • Having trouble using psservice and sc.exe between Windows Server 2008 machines

    - by Teflon Mac
    I'm trying to control services on one W2k8 machine from another; no domain, just a workgroup. The user account I'm logged in as is an administrator on both machines. I've tried both psservice and sc.exe. These work in a Windows Server 2003 environment, but it looks like I need an extra step or two due to the changed security model in 2008. Any ideas as to how to grant permission to the Service Control Manager (psservice) or OpenService (sc)? I tried running the command window with "Run as administrator" and it made no difference. With psservice I get the following: D:\mydir>psservice \\REMOTESERVER -u "adminid" -p "adminpassword" start "Display Name of Service" PsService v2.22 - Service information and configuration utility Copyright (C) 2001-2008 Mark Russinovich Sysinternals - www.sysinternals.com Unable to access Service Control Manager on \\REMOTESERVER: Access is denied. On the remote server, I get the following message in the Security log, so I know I connect and log in to the remote machine. I assume it then fails on a subsequent authorization step. The logoff message in the Security log is just that ("An account was logged off."), so no extra info there. Special privileges assigned to new logon. Subject: Security ID: REMOTESERVER\adminid Account Name: adminid Account Domain: REMOTESERVER Logon ID: 0xxxxxxxx Privileges: SeSecurityPrivilege SeBackupPrivilege SeRestorePrivilege SeTakeOwnershipPrivilege SeDebugPrivilege SeSystemEnvironmentPrivilege SeLoadDriverPrivilege SeImpersonatePrivilege sc.exe is similar. The command syntax and error differ as below, but I also see the same login message in the remote server's Security log. D:\mydir>sc \\REMOTESERVER start "Registry Name of Service" [SC] StartService: OpenService FAILED 5: Access is denied.
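    One thing I'm going to try next, on the assumption that remote UAC token filtering is what strips the admin token for workgroup accounts, is setting LocalAccountTokenFilterPolicy on the remote server and then retesting:
      reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f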

    Read the article

  • Ideal Bacula appliance?

    - by Ricket
    I'm an intern at a small company and we (the IT department of two) manage <100 client computers and a handful of servers. Currently we're using a company's appliance to handle backup; it does a small backup every night and a full backup every weekend, and a guy comes on Wednesday to take an offsite backup drive (and gives back last week's drive to swap with it). Lately this system, mainly the appliance, has been having problems, so we are looking for an alternative. I'm researching other companies but also looking into what we might expect from trying to do this ourselves. There will undoubtedly be a large learning curve, but hey, that's what serverfault is for, right? :) So anyway I was looking at Bacula. Feature list sounds great, documentation is plentiful, but it's only software. So my question is, what is the ideal backup server to run the Bacula server software on? And not only the server but other related appliances. Our current backup appliance uses only hard drives, not tape drives. It has several plugged into it at one time, in hotswap bays on the front of the machine. I couldn't help but notice though, it's hardly more than Windows XP with hard drive bays, a PCI eSATA card (which connects to another appliance extension piece with 2 more bays), and their software. Since the company will take back their appliance if/when we cancel with them, where can I go to configure a server with these kinds of things? Maybe I'm being naive, I'm sure Dell (and any other computer company) sells them in the small business section of their website, but I wanted to make sure that there's not some other more recommended place that other companies are getting their hardware from, and that I don't need anything special for Bacula.

    Read the article

  • Multiple Users use Script to Access Remote Server via Passwordless SSH

    - by jinanwow
    I am currently setting up a Linux box that is tied into Active Directory. This box will allow users to SSH into it with their AD username and password to gather information (Box A). The issue is, I am trying to create a function in /etc/bash.bashrc so that all a user has to do is type "get_info", for example, and the function will SSH into a remote machine (Box B), run a command, and output the information back to the user. The problem is that I have generated an RSA key on Box A and added it to Box B's authorized_keys, and it works fine, but how do I set this up once for the current users and any new user who logs into Box A? Is there a better approach than what I am currently doing? Essentially I just need to connect to the remote box, run a command, and output the information back to the user, and that is it. How can I allow new users to connect via a script to the remote box without having to generate RSA keys for them? The get_info function will be supplied a value, e.g. 'get_info 012345', and should return the results.
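    The rough sketch I'm leaning towards (all names and paths below are placeholders) is to keep a single key owned by a local service account and let everyone call the wrapper through sudo, so nothing has to be generated for new AD users:
      # /etc/bash.bashrc on Box A
      get_info() { sudo -u svcinfo /usr/local/bin/get_info.sh "$1"; }
      # /etc/sudoers.d/get_info on Box A
      ALL ALL=(svcinfo) NOPASSWD: /usr/local/bin/get_info.sh
      # /usr/local/bin/get_info.sh on Box A (svcinfo owns the key, mode 600)
      #!/bin/sh
      exec ssh -i /home/svcinfo/.ssh/id_rsa -o BatchMode=yes svcinfo@boxb /usr/local/bin/report.sh "$1"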

    Read the article

  • Time machine disk icon on boot disk

    - by Ben Lings
    The icon for Macintosh HD (my boot disk) shows as a Time Machine disk. There is a file, .com.apple.timemachine.supported, in the root of the disk. If I delete the file and restart the computer, the icon goes back to a normal HD icon. However, the .com.apple.timemachine.supported file is recreated at some point during boot, because when I log in again the file is back, and if I then reboot again the icon goes back to being a Time Machine one. Any ideas about what is creating this file and why? More importantly, how can I get it to stop? It looks like something thinks the boot disk should be a Time Machine volume, but what? Console.app shows the following messages at approximately hourly intervals: 19/01/2010 19:23:54 /System/Library/CoreServices/backupd[7459] Starting standard backup 19/01/2010 19:23:54 /System/Library/CoreServices/backupd[7459] Cookie file is not readable or does not exist at path: /.<12 hex digits of MAC address for en0> 19/01/2010 19:23:54 /System/Library/CoreServices/backupd[7459] Volume at path / does not appear to be the correct backup volume for this computer. (Cookies do not match) 19/01/2010 19:23:59 /System/Library/CoreServices/backupd[7459] Backup failed with error: 18 Other possibly relevant information: the boot HD isn't the original - the original failed, so this is a SuperDuper'd clone of the original drive. I used to use the same disk for a SuperDuper clone as for Time Machine. These are the same symptoms as this and this.

    Read the article

  • Scripting an automated SQL Server 2008 DR move

    - by ItsAMystery
    Hi all. We use the built-in log shipping in SQL Server to log ship to our DR site, but once a month we do a DR test which requires us to move back and forth between our live and backup servers. We run multiple (30) databases on the system, so manually backing up the final logs and disabling the jobs is too much work and takes too long. I thought no problem, I will script it, but I have run into trouble with it always complaining that the final log backup is too early to apply, even though I don't export the final log until putting the database into norecovery mode. Firstly, does anyone know a simple and reliable way of doing this? I have looked at some 3rd-party software (Red Gate SQL Backup, I think it was) but that didn't make it easy in this situation either. What I want to be able to do is basically run a script (a series of stored procedures) to get me to DR and run another to get me back with no data loss. My scripts are very simplistic at the moment, but here they are. There are two servers: primary PARIS, secondary PARIST. StartAgentJobAndWait is a script written by someone else (ta) and just checks that the jobs have finished, or quits if they never end. At the moment I am just using a test database called BOB2, but if I can get it working I will pass in the database and job names.
    From PARIS:
      /* Disable backup job */
      exec msdb..sp_update_job @job_name = 'LSBackup_BOB2', @enabled = 0
      exec PARIST.msdb..sp_update_job @job_name = 'LSCopy_PARIS_BOB2', @enabled = 0
      exec PARIST.msdb..sp_update_job @job_name = 'LSRestore_PARIS_BOB2', @enabled = 0
      exec PARIST.master.dbo.DRStage2
    On PARIST, DRStage2:
      DECLARE @RetValue varchar(10)
      EXEC @RetValue = StartAgentJobAndWait LSCopy_PARIS_BOB2, 2
      SELECT ReturnValue = @RetValue
      if @RetValue = 1
      begin
          print 'The Copy Task completed Successfully'
      END
      ELSE
          print 'The Copy task failed. This may or may not be a problem, check restore state of database'
      SELECT @RetValue = 0
      EXEC @RetValue = StartAgentJobAndWait LSRestore_PARIS_BOB2, 2
      SELECT ReturnValue = @RetValue
      if @RetValue = 1
      begin
          print 'The Restore Task completed Successfully'
      END
      ELSE
          print 'The Restore task failed. This may or may not be a problem, check restore state of database'
      exec PARIS.master.dbo.DRStage3
    On PARIS, DRStage3 (do the last log ship and move it to Trumpington):
      BACKUP log "BOB2" to disk = 'c:\drlogshipping\BOB2.bak' with compression, norecovery
      EXEC xp_cmdshell 'copy c:\drlogshipping \\192.168.7.11\drlogshipping'
      EXEC PARIST.master.dbo.DRTransferFinish
    On PARIST, DRTransferFinish:
      AS BEGIN
          restore database "BOB2" from disk = 'c:\drlogshipping\bob2.bak' with recovery

    Read the article

  • Personal VPN Solutions

    - by dragonmantank
    I want to set up a VPN for my laptop to connect back at home so that I don't have to directly expose my desktop computer to the internet. Here is what I have: Internet -> DD-WRT v24sp1-mega -> Desktop PC w/ Windows 7 Ultimate -> MacBook w/ OSX 10.6 What would be the easiest thing to do? DD-WRT has PPTP and OpenVPN built in and Windows 7 has RRAS itself but thus far I've run into some problems. Are there any other alternatives, or suggestions on getting these to work? PPTP I tried setting up PPTP directly on DD-WRT using these directions. When I tried connecting using my external IP from the MacBook I just kept getting that the remote server did not respond. OpenVPN According to the instructions here I don't have enough open nvram to set up OpenVPN. RRAS I got RRAS set up without a problem and can connect from the MacBook to the Windows 7 box while I'm on the same network. I port forwarded 1723 on the DD-WRT back to the Windows 7 box and made sure that PPTP Passthrough was enabled. Again, like PPTP, it just kept timing out.

    Read the article

  • Iomega eGo Encrypt Plus Encrypted Partition not mounting properly says "local disk"

    - by mosiac
    I'm working with an Iomega eGo 500 GB Encrypt Plus portable drive. When I first set it up, installed the software and set a user password, everything worked fine. The partition labeled "IomegaHDD" mounted properly and I could access the free space. Then I changed the ADMIN password, which required me to lock out the device, wait 60 seconds, log in to the Admin section and change the password, lock out the device again, wait 60 seconds, and then log back in with my user password. When I did that it of course unmounted the IomegaHDD partition to secure it, but when it remounts it, it now only shows up as "local disk" and will not remount properly. I had not removed the cable while doing any of this. I have since tried unplugging the drive and plugging it back in before logging in to it, but that has not worked. I'm wondering if I should remove every instance of "generic USB hub" from Device Manager and wait for it to re-add itself, or move the drive to a different set of USB ports temporarily to see if that helps. Any ideas?

    Read the article

  • Mount points disappear from network share directory listing

    - by Barakando
    When browsing a network share which contains volume mount points, said mount points disappear from the directory listing. The mount points are still accessible directly by path, just not present in the directory listing. The machine is a Vista SP1 32-bit machine. It has a network share that contains volume mount points to the volumes of the Vista machine (created using the SetVolumeMountPoint API). When browsing the network share from another computer (either Win7 64-bit, Win7 32-bit or Vista SP1 32-bit) using Windows Explorer, the following problem occurs: at first, both volume mount points, called C and D, appear fine. I browse into directory C and see all its contents properly. I go back to the root of the shared folder and now I only see D; C has disappeared from the directory listing. I enter D and see all its contents. I go back to the root of the shared folder and now it's empty; D has disappeared as well. If I manually go to \\<path to shared folder>\C from the address bar, then all is fine and I can browse its contents (same with D). The same issue does NOT occur when creating a similar share with volume mount points on Windows XP SP2 or SP3. Has anyone come across this problem? Any ideas how to work around it?

    Read the article

  • ASP.Net application can no longer write to DB after having run out of disk space

    - by remi.despres-smyth
    I'm a software developer troubleshooting a sticky problem on a client's production server, and I've got a bit of a problem. They have a virtual server running Windows Server 2008, SQL Server 2008 R1 and IIS7. It was provisioned with two partitions: one that has the OS (~15 GB), and another that has IIS' web sites (another ~15 GB). The application running on this server had been working perfectly well, up until about an hour ago, when it started throwing System.IO.IOException: "There is not enough space on disk". As soon as my client notified me, I cleared up some space on C:\, emptied the recycle bin, and restarted SQL Server and IIS. The web server came back up and the application was running, but it no longer saves information to the database. No error message comes up, the application can get information out of the DB, but it can no longer save data back to it. I rebooted the server, to no effect. I spoke with a sysadmin at the hosting company, and he says SQL Server appears to have come up fine and the database is not in read-only mode. I confirmed that, as I can add records to tables from SQL Server Management Studio. I looked at the event log immediately after trying to save an edited record in the app, and no new events appear in there that I can tell. I'm assuming this is related to having run out of space, as it was all working fine prior to that, but I'm at a bit of a loss as to what exactly needs a kick in the pants to get going again. Can anyone help me out? What the heck is going on here?
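    For what it's worth, these are the checks I'm planning to run next from Management Studio, on the assumption that the transaction log or a data file ran out of room to grow ('MyAppDb' is a placeholder for the real database name):
      DBCC SQLPERF(LOGSPACE);
      SELECT name, state_desc, is_read_only FROM sys.databases;
      SELECT name, size, max_size, growth FROM MyAppDb.sys.database_files;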

    Read the article

  • Office Communicator and cannot sync Address book error

    - by Noah
    We are trying to get OCS 2007 R2 up and running. The clients log in fine, but if I let it sit for a while, we still get the address book sync error message: "Cannot synchronize with the corporate address book. This may be because the proxy server setting in your web browser does not allow access to the address book. If the problem persists, contact your system administrator". When I try to download the file locally, this error comes up: Could not load file or assembly 'ABServerHttpHandler, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. Failed to grant minimum permission requests. (Exception from HRESULT: 0x80131417) I googled and came across this post (http://social.technet.microsoft.com/Forums/en/ocsaddressbook/thread/c28ff2d8-66a4-456c-a5ad-e445a667e8ed) which suggests removing and reinstalling .NET 2.0, but that didn't seem to resolve the issue. When we run abserver.exe -validateDB it works properly. We even tried the suggestion from Greg's blog (http://blogs.technet.com/greganth/archive/2009/03/11/office-communicator-notifications-cannot-synchronize-address-book.aspx) about restarting the web component services, but that didn't work either; still seeing the same issue. So does anyone have an idea of where we go from here?

    Read the article

  • Excel cannot access the file with IIS7 & Windows Server 2008 R2 (64-bit)

    - by user838204
    I have a web project(.Net4) that needs to access Excel file, but it ends up with the following error message: Error occured during file generation.Microsoft Excel cannot access the file 'D:\xx\xx\abc.xls'. There are several possible reasons: • The file name or path does not exist. (Actually it's there) • The file is being used by another program.(It cant happen) • The workbook you are trying to save has the same name as a currently open workbook. In IIS7, I use DefaultAppPool with the Identity "myservice" who's under the Group of Administrators. In the Authentication Page of my website under IIS, Anonymous Authentication was enabled and set to "Application pool identity" and ASP.NET Impersonation was disabled. After searching the solution for hours, I found the following but NONE of them work Create folder in C:\Windows\SysWOW64\config\systemprofile\Desktop. Plz refer:this Grant rights of "myservice" in Component Services. Plz refer:this One thing strange, there is nothing in the Group of IIS_IUSRS. Is that normal? Cause I remember at least two users (DefaultAppPool & Classic .Net AppPool). Plz tell me how to fix the access problem. I assume that's permission problem of IIS but I cant solve it. Thank you.

    Read the article

  • Configuration management in support of scientific computing

    - by Sharpie
    For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model and so I am taking this opportunity to consider how we set up the system. Basically, the steps that need to happen are: Some standard packages and libraries such as compilers and databases need to be downloaded and installed. Some custom scientific models need to be downloaded and compiled from source as they are not commonly provided as packages. New users need to be created to manage the databases and run the models. A suite of scripts that manage model-database interaction needs to be checked out from source code control and installed. Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts. I have been pondering applying tools such as Puppet, Capistrano or Fabric to automate the above steps. It seems perfectly possible to implement most of the above functionality except there are a couple usage cases that I am wondering about: During my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source. We may have to deploy on machines that are isolated from the Internet- i.e. all configuration and set up files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models. I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus as it would allow the development team to setup test installations outside of VirtualBox.
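    As a concrete example of the sort of thing I'm hoping one of these tools can express, here is a rough Puppet sketch for building one of our custom models from source (all names and paths are hypothetical):
      # install the toolchain, then build the model only if it is not already installed
      package { ['gcc', 'make', 'gfortran']:
        ensure => installed,
      }
      exec { 'install-wave-model':
        command => '/usr/bin/make install',
        cwd     => '/opt/src/wave_model',
        path    => ['/usr/bin', '/bin'],
        creates => '/usr/local/bin/wave_model',
        require => Package['make'],
      }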

    Read the article

  • Recovery of a Windows DFS partition with shadow copy versioned files when overwritten with an older modified file

    - by patjbs
    I've noticed the following "bug" on a DFS volume with shadow copies. Pretend you have a folder MyDirectory containing MyFile (modified date 8/1/2009) under shadow copy versioning, going back two weeks, and the current date is 8/30/2009. You have another version of MyFile stored elsewhere, with a modified date of 7/1/2009. Copy your other version of MyFile into MyDirectory, overwriting the newest version. I expected that you could roll back to the version that was there when it was last imaged, say on the prior day, and recover your 8/1 version. Not the case. Now, when you go to look at previous versions for the past two weeks, the versioning of that file will be entirely lost, and you'll be stuck with your older 7/1 version. Suckage. Questions: (1) Is this intentional, and if so, what's the rationale? I assume that DFS picks up on the versioning based on the current file, and that's what's wiping out prior versions, but it seems like a fairly stupid/naive way of handling versioning to me. (2) Is there a way to backtrack out of this, without resorting to restoration from other backup media? Thanks!

    Read the article

  • What tells initramfs or the Ubuntu Server boot process how to assemble RAID arrays?

    - by Brad
    The simple question: how does initramfs know how to assemble mdadm RAID arrays at startup? My problem: I boot my server and get: Gave up waiting for root device. ALERT! /dev/disk/by-uuid/[UUID] does not exist. Dropping to a shell! This happens because /dev/md0 (which is /boot, RAID 1) and /dev/md1 (which is /, RAID 5) are not being assembled correctly. What I get is /dev/md0 isn't assembled at all. /dev/md1 is assembled, but instead of using /dev/sda2, /dev/sdb2, /dev/sdc2, and /dev/sdd2, it uses /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd. To fix this and boot my server I do: $(initramfs) mdadm --stop /dev/md1 $(initramfs) mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 $(initramfs) mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 $(initramfs) exit And it boots properly and everything works. Now I just need the RAID arrays to assemble properly at boot so I don't have to manually assemble them. I've checked /etc/mdadm/mdadm.conf and the UUIDs of the two arrays listed in that file match the UUIDs from $ mdadm --detail /dev/md[0,1]. Other details: Ubuntu 10.10, GRUB2, mdadm 2.6.7.1 UPDATE: I have a feeling it has to do with superblocks. $ mdadm --examine /dev/sda outputs the same thing as $ mdadm --examine /dev/sda2. $ mdadm --examine /dev/sda1 seems to be fine because it outputs information about /dev/md0. I don't know if this is the problem or not, but it seems to fit with /dev/md1 getting assembled with /dev/sd[abcd] instead of /dev/sd[abcd]2. I tried zeroing the superblock on /dev/sd[abcd]. This removed the superblock from /dev/sd[abcd]2 as well and prevented me from being able to assemble /dev/md1 at all. I had to $ mdadm --create to get it back. This also put the super blocks back to the way they were.
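    In case it matters, this is what I'm about to try next; my understanding is that the initramfs carries its own copy of /etc/mdadm/mdadm.conf, so it has to be regenerated whenever that file (or the array layout) changes:
      mdadm --detail --scan          # compare output against the ARRAY lines in /etc/mdadm/mdadm.conf
      update-initramfs -u            # rebuild the initramfs so it picks up the current mdadm.conf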

    Read the article
