Search Results

Search found 37457 results on 1499 pages for 'sql 2008 r2'.


  • Why do my backups fail when I target a network share hosted by a Synology DS211 disk station?

    - by Larry
    My backups are failing when I target a network share hosted by a Synology DS211 disk station. They work fine if I target a different network share (e.g. \\server1\data\larry). When I run the following command:

        Wbadmin start backup -backupTarget:\\diskstation\backup-larry -include:C:

    this is what I get:

        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2004 Microsoft Corp.

        Note: The backed up data cannot be securely protected at this destination. Backups stored on a remote shared folder might be accessible by other people on the network. You should only save your backups to a location where you trust the other users who have access to the location or on a network that has additional security precautions in place.

        Retrieving volume information...
        This will back up volume WIN7(C:) to \\diskstation\backup-larry.
        Do you want to start the backup operation? [Y] Yes [N] No y

        Note: The list of volumes included for backup does not include all the volumes that contain operating system components. This backup cannot be used to perform a system recovery. However, you can recover other items if the destination media type supports it.

        The backup operation to \\diskstation\backup-larry is starting.
        Creating a shadow copy of the volumes specified for backup...
        Creating a shadow copy of the volumes specified for backup...
        The backup operation stopped before completing.

        Summary of the backup operation:
        ------------------
        The backup operation stopped before completing.
        Detailed error: Access is denied.
        Windows Backup failed to write the file: '<backup location>\WindowsImageBackup\<Computer Name>\MediaId'. Access is denied.

    The backup creates the path \\diskstation\backup-larry\WindowsImageBackup\LARRY-MYDOMAIN\ but it is empty. I definitely have read/write access on the target directory (\\diskstation\backup-larry); I have verified this both by checking the permissions and by actually copying files to that location. Any suggestions?
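    One avenue worth trying (a sketch, not a confirmed fix for the DS211): wbadmin accepts explicit credentials for a network backup target, which can sidestep cases where the backup engine's security context gets mapped to the NAS guest account rather than the account you tested the share with. The user name and password below are placeholders.

        # Hypothetical credentials; replace with an account the DiskStation
        # recognizes and that has write access to the share.
        wbadmin start backup -backupTarget:\\diskstation\backup-larry -include:C: `
            -user:diskstation\larry -password:S3cretPass -quiet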

    Read the article

  • What Defines an AD Object as "Inactive"

    - by Malnizzle
    I am going to be using some DSQUERY/DSMOVE scripts to clean up my AD domain. One option is to move inactive objects to an OU that has restrictive GPOs applied to it. Something like:

        DSQUERY computer -inactive 10 | DSMOVE -newparent <distinguished name of target OU>

    My question is: what value defines an object, both user and computer, as "inactive" for a period of time? Is it the last time a computer was logged on to for computer accounts, and for user accounts the last time the user logged on to a computer? And what if, for example, I had a web server that wasn't rebooted or logged into for a couple of months but remained powered on and functioning as normal, would it be defined as "inactive" even though it's still serving web pages and so on? Thanks for the help!
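    For what it's worth, dsquery's -inactive switch is measured in weeks and evaluates the lastLogonTimestamp attribute, which is refreshed (with up to about two weeks of replication slack) whenever the account authenticates to the domain, not only on reboots or interactive logons. A rough equivalent of the query-and-move, sketched with the RSAT ActiveDirectory PowerShell module (the target OU is a placeholder):

        # Sketch: find computer accounts inactive for ~10 weeks and stage the move.
        Import-Module ActiveDirectory

        Search-ADAccount -AccountInactive -TimeSpan (New-TimeSpan -Days 70) -ComputersOnly |
            Move-ADObject -TargetPath "OU=Inactive,DC=example,DC=com" -WhatIf  # drop -WhatIf to commit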

    Read the article

  • Representing server state with a metric

    - by Sal
    I'm using Microsoft's Performance Monitor to dump logs of RAM, CPU, network, and disk usage from multiple servers. I'd like to get a single metric that captures the state of a given variable to a good extent. For instance, disk usage is pretty stable, so if I take a single reading that says I have 50% remaining disk space, that reading will give me an accurate measure for the day. (The servers aren't doing heavy IO writing.) However, the tricky part here is monitoring CPU and network usage. The logs currently dump the % CPU usage every ten seconds. If I take a straight average of the numbers, it may not represent reality, as % CPU will be much lower during the night than during the day. (We host websites that sell appliance items.) I'd like to get an average over a span during peak hours (about 5 hours in the day) and present a daily peak-hour metric. Of course, there will most likely be some readings that come in overly spiked (if multiple users pinged the server at once) or of no use (a momentary idle state). Is there a standard distribution/test that industry uses in these situations?
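    One common convention (offered as a sketch, not a formal standard) is to report a percentile rather than a mean: the 95th percentile discards momentary spikes and idle dips while still reflecting sustained load, and network transit billing famously works this way. Assuming the peak-hour samples have been exported to a CSV with a CounterValue column (file and column names are invented):

        # Sketch: mean and 95th percentile of peak-hour CPU samples.
        $samples = Import-Csv 'C:\PerfLogs\cpu-peak.csv' |
            ForEach-Object { [double]$_.CounterValue }

        $sorted = $samples | Sort-Object
        $p95    = $sorted[[math]::Floor(0.95 * ($sorted.Count - 1))]

        "{0} samples, mean {1:N1}%, p95 {2:N1}%" -f $samples.Count, (($samples | Measure-Object -Average).Average), $p95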

    Read the article

  • Storing Attendance Data in database

    - by Ali Abbas
    So I have to store the daily attendance of the employees of my organisation from my application. The part where I need some help is the efficient way to store attendance data. After some research and brainstorming I came up with some approaches. Could you point out which one is best, and any unobvious ill effects of these approaches? The approaches are as follows:

    1. Create a single table for the whole organisation and store empid, date, presentstatus as a row for every employee every day.
    2. Create a single table for the whole organisation and store a single row for each day with a comma-delimited string of the empids which are absent. I will generate the string in my application.
    3. Create different tables for each department and follow the first method.

    Please share your views, and do mention any other good methods.
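    For what it's worth, the first approach is the conventional normalized design: one row per employee per day indexes and aggregates cleanly, while the comma-delimited string in the second defeats joins and indexing, and per-department tables in the third multiply maintenance. A sketch of the first option's table, pushed from PowerShell via Invoke-Sqlcmd (instance, database and column names are examples only):

        # Sketch: deploy the one-row-per-employee-per-day design (option 1).
        # Server/database names are placeholders; requires the SqlServer/SQLPS module.
        $ddl = @"
        CREATE TABLE dbo.Attendance (
            EmpId         int  NOT NULL,
            AttendDate    date NOT NULL,
            PresentStatus bit  NOT NULL,
            CONSTRAINT PK_Attendance PRIMARY KEY (EmpId, AttendDate)
        );
        "@
        Invoke-Sqlcmd -ServerInstance 'SQL01' -Database 'HR' -Query $ddl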

    Read the article

  • Deploy virtual PC based on snapshot using Hyper-V

    - by user27786
    I have a server with Hyper-V configured. It is easy to create a new virtual machine using the management tools, but it can take some time. I often see hosting providers that say "We can have your server ready in 5 seconds", and I wonder how they manage this. I would also like to deploy a fully functional clean server in 5 seconds. So how is this done? Is it by first installing a clean server, then taking a snapshot of that, and later deploying a new server based on that snapshot? I tried doing this, but I did not find any way to install a new virtual machine based on the snapshot. Anyone got any thoughts to share on this?
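    Hosts that advertise servers "in 5 seconds" generally don't deploy from snapshots; a common pattern is to keep a generalized (sysprepped) parent disk and give each new VM a small differencing disk that references it, so no copying happens at creation time. A rough sketch with the Hyper-V PowerShell module (shipped with later versions of Hyper-V; all paths and names are invented):

        # Sketch: near-instant VM from a differencing disk over a sysprepped parent.
        New-VHD -Path 'D:\VMs\web01.vhdx' -ParentPath 'D:\Templates\base-sysprepped.vhdx' -Differencing
        New-VM -Name 'web01' -MemoryStartupBytes 2GB -VHDPath 'D:\VMs\web01.vhdx'
        Start-VM -Name 'web01'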

    Read the article

  • What happens to encrypted mails when the CA certificate expires in my Windows domain?

    - by Wolfgang
    Does anybody know what will happen to encrypted/signed mails when a root authority certificate expires in my domain network? Can the certificate still be validated by the clients, and will the clients recognize that the certificate was valid at the time the mail was encrypted/signed? Likewise, what will happen when a migration to a new infrastructure takes place, or if I install a new root CA? Is there a need to also migrate the expired root certificate?

    Read the article

  • File open fails initially when trying to open a file located on a win2k8 share but eventually succeeds

    - by Ruddy Douglass
    Core of the problem: I receive "(0x80070002) The system cannot find the file specified" for roughly 8 to 9 seconds before the file can be opened successfully. In a nutshell, we have two COM components. Component A calls into Component B and asks for a UNC filename to write to - the filename returned doesn't exist yet, but the path does - it then does its work, creates and populates the file, and tells Component B it's done by making another COM call. At that point Component B calls MoveFile to rename the file to its "official" name. This code has worked for (literally) years. It works fine on win2k3. It works fine when it runs on win2k8 and points to a share on a win2k3 server. It also runs fine if the share is actually located on the same win2k8 machine that the code is running on. But if you run it on win2k8 and point it to a share on a different win2k8 server, it fails. Both Component A and Component B live in their own Windows Service running as a domain admin account. The shares are configured for "Everyone/Full control" in all test environments, as are the underlying folders the shares point to. All machines are in the same domain. During debugging I realized the file actually does exist by the time I check manually for it - after several iterations it occurred to me that the file doesn't "show up" until some delay passes - so I put the following loop into Component B:

        int nCounter = 0;
        while (true)
        {
            CFileStatus fs;
            if (CFile::GetStatus(tempname, fs))
                break;
            SleepEx(100, FALSE);
            nCounter++;
        }

    This loop does, in fact, exit, and nCounter is generally between 80 and 90 iterations when it does, indicating the file "appears" approximately 8 to 9 seconds later. Once the loop exits, the code can successfully rename the file and all further processing appears to work. I put a CFile::GetStatus call in Component A immediately before it calls into Component B, and that call succeeds - it can see the file and get its true size - yet the call into Component B made immediately after cannot see the file until the above delay passes. I have verified the pathnames are precisely the same, even though they would clearly have to be for the calls to eventually succeed after a pause of 8 to 9 seconds. When something like this occurs I always assume there is a bug in my code until proven otherwise, but given that (a) this code has executed properly for a very long time and (other than my diagnostic loop) has not changed, and (b) it works in all environments except the win2k8-to-win2k8 share, I'm guessing there is some OS issue here that I do not understand. Any insight would be helpful. Thanks!
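    This symptom matches the SMB2 client-side metadata caching introduced with Vista/2008: a negative ("file not found") cache entry on the calling machine can briefly mask a file that was just created on a remote 2008 share, and the default cache lifetimes are on the order of 5-10 seconds, suspiciously close to the delay observed here. A hedged sketch of the commonly cited registry workaround (run on the client; zeroing the caches trades performance for correctness, so test first):

        # Sketch: disable the SMB2 client redirector caches.
        # A reboot or Workstation service restart may be needed to take effect.
        $params = 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters'
        Set-ItemProperty -Path $params -Name FileNotFoundCacheLifetime -Value 0 -Type DWord
        Set-ItemProperty -Path $params -Name FileInfoCacheLifetime     -Value 0 -Type DWord
        Set-ItemProperty -Path $params -Name DirectoryCacheLifetime    -Value 0 -Type DWord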

    Read the article

  • IIS/SMTP - unable to move emails from inetpub/mailroot/Queue due to file lock

    - by Bryan Roth
    I have a listener that processes emails in the inetpub/mailroot/Queue directory. Once the listener is done processing an email, it proceeds to move the email to another directory. However, moving the email is not possible due to a file lock held by the process inetinfo.exe. I have noticed that this file lock is released after a period of time that ranges from several hours to several days, so the Queue directory can get pretty full. The only way I have been able to work around this is by manually stopping and starting my SMTP virtual server in IIS. Is it possible to release this file lock programmatically? If not, is it possible to expedite releasing it?
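    Not a true programmatic handle release (Sysinternals handle.exe can force-close handles, at some risk), but the manual workaround can at least be scripted and scheduled. A minimal sketch, assuming the default IIS SMTP service name SMTPSVC:

        # Sketch: bounce the IIS SMTP service so inetinfo.exe drops its handles.
        Restart-Service -Name SMTPSVC -Force

        # Optionally check how much is left in the queue afterwards:
        Get-ChildItem 'C:\inetpub\mailroot\Queue' | Measure-Object | Select-Object Count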

    Read the article

  • Windows Server: Change AD account name

    - by Bastien974
    Hello everybody, in my SBS 08 (AD, Exchange) environment, is it possible to change the name and email address of a user who is leaving? I'd like to transfer the whole account and its credentials to the new employee who is replacing him. Lots of things are set up for this user, and it would save me a lot of time if I could hand over the account like this. Thanks for your help!
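    Renaming the existing account is possible (the replacement inherits the SID, group memberships and mailbox), though many admins prefer a fresh account for audit reasons, and you'd want to reset the password first. A sketch with the ActiveDirectory module, assuming it can reach your DC (on SBS 2008 this needs the AD Management Gateway Service, or do the equivalent in the AD Users and Computers and Exchange consoles); all names are placeholders:

        # Sketch: repurpose jsmith's account for the replacement, jdoe.
        Import-Module ActiveDirectory

        Get-ADUser jsmith |
            Set-ADUser -SamAccountName jdoe -UserPrincipalName jdoe@example.local `
                -GivenName Jane -Surname Doe -DisplayName 'Jane Doe' `
                -EmailAddress jdoe@example.local

        # The object's CN is separate from the logon names:
        Get-ADUser jdoe | Rename-ADObject -NewName 'Jane Doe'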

    Read the article

  • What is the IIS application pool size used for? Why is it important?

    - by PeanutsMonkey
    I have had a request come through to increase the pool size in IIS, which I assume to be the application pool size. I have attempted to search for more information regarding what the pool size is, what it does, and its importance, as well as caveats in increasing it. I am also unsure where to find it in IIS 7 and 7.5, and what the default size is. Does changing the pool size also affect web gardens?
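    If the request is about the application pool's queue length (the HTTP.sys request queue limit, whose default is 1000 in IIS 7.x), it can be viewed and raised per pool; web gardens are governed separately by the pool's maxProcesses setting. A sketch using the WebAdministration module (the pool name is an example):

        # Sketch: inspect and raise the request queue limit for one pool.
        Import-Module WebAdministration

        Get-ItemProperty 'IIS:\AppPools\DefaultAppPool' -Name queueLength   # default 1000
        Set-ItemProperty 'IIS:\AppPools\DefaultAppPool' -Name queueLength -Value 5000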

    Read the article

  • How to address a recurring low temperature error seen at every boot-up?

    - by GregC
    After updating to the latest controller firmware, I started receiving the following error message:

        LSI 2208 ROC: Temperature sensor below error threshold on enclosure 1, sensors 5 thru 7

    Is this something I should worry about, or is it a red herring? Details: I have a Sans Digital NexentaSTOR 24-disk JBOD enclosure connected to an LSI 9286-8e RAID-on-Chip controller with two SAS cables. Seagate ES.2 3TB SAS hard drives populate every bay in the enclosure.

    Read the article

  • Deploying Office 2010 with MDT 2012 - Multiple Customisation Files?

    - by Tony Blunt
    I'm in the process of setting up a deployment share for Windows 7 and Office 2010 Pro Plus, using MDT 2012. My question involves the customisation of the Office 2010 install. I've imported Office into MDT and I've successfully created a custom MSP file to tailor the settings to our business. However, I need a number of different customisations for different groups of users. For instance, our laptop users need Outlook Anywhere configured whereas desktop users do not. Basically - what's the best way of doing this? Do I have to import Office into MDT more than once, each instance using a different MSP, and then have the task sequence select the appropriate instance? That method seems extremely wasteful, so I'm hoping there's a more intelligent way to do it. Or am I coming at this from the wrong direction? Should I be looking at Group Policy to tweak the Outlook settings in this instance? I'm just aware that there are certain things OCT can do that Group Policy cannot, so I would have thought there must be something I can do in MDT. I'm new to MDT, so any pointers will be appreciated. Thanks in advance.
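    Importing the product once and switching MSPs at install time is possible because Office setup accepts an explicit patch file via the /adminfile switch. A hedged sketch of a wrapper the task sequence could call, choosing the MSP by WMI chassis type (the codes listed are the usual "portable" values; the file names are invented):

        # Sketch: choose an OCT .MSP by chassis type, then run Office setup with it.
        $laptopCodes = 8, 9, 10, 11, 12, 14
        $chassis  = (Get-WmiObject Win32_SystemEnclosure).ChassisTypes
        $isLaptop = $chassis | Where-Object { $laptopCodes -contains $_ }

        $msp = if ($isLaptop) { 'Laptop-OutlookAnywhere.MSP' } else { 'Desktop.MSP' }
        & .\setup.exe /adminfile ".\$msp"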

    Read the article

  • Piping perfmon logs over DFS

    - by Sal
    I'm running perfmon on several servers, and I'd like all of the output to be piped to one particular server. I'm trying to do this over DFS by modifying the Root directory argument on each of the servers and placing a DFS path like so:

        Root Directory: \\PERFMON_LOG_REPOSITORY\[MY_COMP_NAME]

    The trouble is that when I make the Root directory dump the logs to a folder over DFS, I always get the following error upon starting up the Collector Set:

        When attempting to start the data collector set the following system error occurred: Access is denied
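    Access denied here usually means the collector set runs as the local SYSTEM account, whose machine credentials have no rights on the remote path; either grant each machine account write access on the share, or run the collector set under an explicit account. A sketch with logman (the set name and account are invented; the trailing * prompts for the password):

        # Sketch: run the collector set as a domain account that can write to the DFS path.
        logman update "ServerStats" -u CONTOSO\svc-perfmon *
        logman stop  "ServerStats"
        logman start "ServerStats"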

    Read the article

  • How to select a server that supports Windows scheduled file IO

    - by Kristof Verbiest
    Background: I am developing an application that needs to read data from disk with a fairly consistent throughput. It is important that this throughput is not influenced by other activity on the disk (e.g. by other processes). For this purpose, I was hoping to use the 'Scheduled File I/O' feature in Windows (through the GetFileBandwidthReservation and SetFileBandwidthReservation functions). However, this StackOverflow question has taught me that this feature is only available if the device driver supports it. Currently I have no computer at my disposal that seems to support this feature (I have an HP ProLiant server and a Dell Precision workstation). Question: If I were to order a new server, how can I know beforehand whether this feature will be supported by the device driver? How 'upscale' does the server have to be? Has anybody used this feature with success and cares to share their experiences?

    Read the article

  • Inconsistent DHCP replies with Windows 2008R2 DHCP server

    - by verbalicious
    I've got a Windows 2008R2 Standard server running DHCP services. We've noticed that certain clients are receiving inconsistent DHCP replies. We have over 175 Windows workstations in this VLAN that don't seem to have trouble getting DHCP leases. However, PXE-booting clients trying to reach our DHCP server only get a lease inconsistently. Additionally, we tried using the "dhcping" tool against our DHCP server and found that roughly two of every three requests time out with "no answer" - and this holds true even when we set the timeout value on dhcping to 20 seconds. Immediately after a failed attempt, however, we may get a DHCP lease reply with dhcping. This leads me to believe the issue isn't confined to PXE-booting clients, but is something more systemic with my LAN layer 2 or DHCP, and that possibly my 175 Windows clients are experiencing this in some form without my knowledge. We have over 30% of our scope available, so the addresses are there. I was unable to find anything in the Windows Server "DHCP-Server" log. Of course, my goal is to have my DHCP server reply to every request it receives on the LAN!

    Read the article

  • Windows Server NTFS volume: list file name encodings and any illegal file names

    - by benbradley
    I'm having to deal with a Windows Server (NTFS) file server, and our backup application appears to be failing with certain files. According to Wikipedia (https://en.wikipedia.org/wiki/NTFS#Internals), NTFS stores file names encoded in UTF-16, but according to their support team, our backup application only supports UTF-8. I'd like to confirm whether this is actually the problem by seeing the file name encoding for myself. The files that are failing appear to use plain English A-Z letters and other ASCII characters - no accents or non-English letters, etc. I suppose that even though the letters appear to be plain A-Z, the file name could still be encoded in UTF-16. Does anyone know of a utility or script that can recursively go through all files in a directory and show the encoding of each file name? Then I could try renaming to UTF-8 to see if the backup can proceed. I'm not a Windows developer, so I can't write this up myself. Presumably the encoding of the file name is stored in the file system somewhere, and therefore it should be possible to expose it.
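    A note that may reframe the search: every NTFS file name is stored as UTF-16 on disk, so there is no per-file "encoding" to inspect; what usually trips up UTF-8-only tools is a name containing characters outside ASCII (or a character illegal on Windows). A sketch that recursively flags such names, purely for inspection (the share path is a placeholder; nothing is renamed):

        # Sketch: list files whose names contain anything outside printable ASCII.
        Get-ChildItem -Path 'D:\Share' -Recurse |
            Where-Object { $_.Name -match '[^\u0020-\u007E]' } |
            ForEach-Object {
                # Show the offending code points to aid diagnosis.
                $codes = ($_.Name.ToCharArray() | ForEach-Object { 'U+{0:X4}' -f [int]$_ }) -join ' '
                '{0} -> {1}' -f $_.FullName, $codes
            }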

    Read the article

  • Timesync on HyperV with CentOS 6.2

    - by WaldenL
    I've got a CentOS VM (release 6.2) running under Hyper-V. I have the Integration Services installed (part of the base kernel now), and CentOS shows the current clocksource is hyperv_clocksource; however, the time in the VM is about 10 minutes fast after a week of uptime. My understanding of the new IC and pluggable clocksource is that this shouldn't happen any more. Is there any additional configuration necessary to get the pluggable clocksource to work? I know there are plenty of links about setting kernel options to PIT and various things like that, but those all seem to pre-date the integrated clocksource support, and as I understand it they shouldn't be needed any longer - nor should ntpd or adjtimex.

    Read the article

  • Cluster failover and strange gratuitous arp behavior

    - by lazerpld
    I am experiencing a strange Windows 2008R2 cluster-related issue that is bothering me. I feel that I have come close to what the issue is, but I still don't fully understand what is happening.

    I have a two-node Exchange 2007 cluster running on two 2008R2 servers. The Exchange cluster application works fine when running on the "primary" cluster node. The problem occurs when failing over the cluster resource to the secondary node. When failing over the cluster to the "secondary" node, which is on the same subnet as the "primary", the failover initially works and the cluster resource continues to work for a couple of minutes on the new node. This means the receiving node does send out a gratuitous ARP reply that updates the ARP tables on the network. But after some amount of time (typically within 5 minutes) something updates the ARP tables again, because all of a sudden the cluster service stops answering pings.

    So basically: I start a ping to the Exchange cluster address while it is running on the primary node. It works just great. I fail the cluster resource group over to the secondary node and I lose only one ping, which is acceptable. The cluster resource still answers for some time after being failed over, and then all of a sudden the pings start timing out. This tells me that the ARP table is initially updated by the secondary node, but then something (which I haven't found yet) wrongfully updates it again, probably with the primary node's MAC. Why does this happen - has anyone experienced the same problem? The cluster is NOT running NLB, and the problem stops immediately after failing back to the primary node, where there are no problems. Each node uses NIC teaming (Intel) with ALB. Each node is on the same subnet and has its gateway and so on entered correctly as far as I can tell.

    Edit: I was wondering if it could be related to network binding order, because the only difference I can see from node to node is in the local ARP table. On the "primary" node the ARP entries are generated with the cluster address as the source, while on the "secondary" they are generated from the node's own network card. Any input on this?

    Edit: OK, here is the connection layout.

        Cluster address: A.B.6.208/25
        Exchange application address: A.B.6.212/25
        Node A: 3 physical NICs; two teamed using Intel's teaming with the address A.B.6.210/25, called PUBLIC; the third used for cluster traffic, called PRIVATE, with 10.0.0.138/24
        Node B: 3 physical NICs; two teamed using Intel's teaming with the address A.B.6.211/25, called PUBLIC; the third used for cluster traffic, called PRIVATE, with 10.0.0.139/24

    Each node sits in a separate datacenter, connected together. The end switches are Cisco in DC1 and Nexus 5000/2000 in DC2.

    Edit: I have been testing a little more. I created an empty application on the same cluster and gave it another IP address on the same subnet as the Exchange application. After failing this empty application over, I see exactly the same problem. After one or two minutes, clients on other subnets cannot ping the virtual IP of the application - but while clients on other subnets cannot, another server from another cluster on the same subnet has no trouble pinging it. And if I then fail back to the original state, the situation reverses: now clients on the same subnet cannot ping it, and clients on other subnets can.

    We have another cluster set up the same way on the same subnet, with the same Intel network cards, the same drivers and the same teaming settings, and there we are not seeing this. So it's somewhat confusing.

    Edit: OK, done some more research. I removed the NIC teaming on the secondary node, since it didn't work anyway. After some standard problems following that, I finally managed to get it up and running again with the old NIC teaming settings on one single physical network card. Now I am not able to reproduce the problem described above, so it is somehow related to the teaming - maybe some kind of bug?

    Edit: Did some more failing over without being able to make it fail, so removing the NIC team looks like a workaround. I then tried to re-establish the Intel NIC teaming with ALB (as it was before) and I still cannot make it fail. This is annoying, because now I actually cannot pinpoint the root of the problem. It just seems to be some kind of MS/Intel hiccup - which is hard to accept, because what if the problem reoccurs in 14 days? One strange thing did happen, though: after recreating the NIC team I was not able to rename the team to "PUBLIC", which is what the old team was called. So something has not been cleaned up in Windows - even though the server HAS been restarted!

    Edit: OK, after re-establishing the ALB teaming the error came back. So I am now going to do some thorough testing, and I will get back with my observations. One thing is for sure: it is related to the Intel 82575EB NICs, ALB and gratuitous ARP.

    Read the article

  • Network printer installs driver every time I connect

    - by Patrick Schneider
    We are running a Citrix XenApp 6.5 farm and have a strange problem. We use the Ricoh UPD 3.10, and every time a user logs on to a Citrix session and gets their printers connected (via logon script), Windows shows the "Finishing installation..." dialog for every printer. The drivers are installed on all Citrix servers, yet every time a user connects a printer the dialog appears. Is there any setting to disable this behaviour?

    Read the article

  • How to make network drives appear even if disconnected?

    - by Jake
    I have the same problem as many others: network and home drives set by Group Policy and AD are not connected at Windows startup. The prime suspect is that the LAN or wireless does not connect until after user log-in; I have already given up on that. Now I just want the disconnected drives to continue to be listed in My Computer, so that if the user goes in and double-clicks a drive, it connects again. However, on some machines the drive is completely missing from My Computer. If I right-click My Computer and choose Map Network Drive again, it does work, but it's very troublesome to do that all the time. And I don't want to use a script to map the drives, because I don't want to appear to be using a hacky solution to the users. Drives listed as disconnected look more like a "built-in feature", and give users more confidence. How can I keep the disconnected drives in My Computer? I am using Windows 7 Professional and Win2k8.
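    If the goal is simply that the letter survives as a disconnected placeholder, a persistent mapping made with net use (rather than a transient logon-script mapping) is remembered by the profile and redisplayed in My Computer across logons. A minimal sketch (the drive letter and share path are examples):

        # Sketch: a mapping that persists across logons and shows as
        # "Disconnected" in My Computer until first use reconnects it.
        net use H: \\fileserver\home /persistent:yes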

    Read the article

  • create assembly from network location

    - by mjw06d
    The error I'm receiving:

        CREATE ASSEMBLY failed because it could not open the physical file "\\<server>\<folder>\<assembly>.dll": 5(Access is denied.).

    TSQL:

        exec sp_configure 'clr enabled', 1
        reconfigure
        go
        create assembly <assemblyname>
        from '\\<server>\<folder>\<assembly>.dll'
        with permission_set = safe

    How can I create an assembly from a UNC path?
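    CREATE ASSEMBLY reads the file as the SQL Server service account, not as you, so the simplest fix is often granting that account read rights on the share. An alternative that needs no file access from the server at all is to pass the DLL's bytes as a varbinary literal, which CREATE ASSEMBLY also accepts. A sketch (server, database and path are placeholders):

        # Sketch: read the DLL locally and embed it as 0x... in the CREATE ASSEMBLY.
        $bytes = [System.IO.File]::ReadAllBytes('\\server\folder\assembly.dll')
        $hex   = ($bytes | ForEach-Object { $_.ToString('X2') }) -join ''

        $sql = "CREATE ASSEMBLY myassembly FROM 0x$hex WITH PERMISSION_SET = SAFE"
        Invoke-Sqlcmd -ServerInstance 'SQL01' -Database 'mydb' -Query $sql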

    Read the article

  • P2V Server 2008R2 with VMware Converter 4.3.0: no source volumes

    - by Peter
    I'm trying to P2V a Windows Server 2008R2 box to VMware vSphere 4.1 using VMware vCenter Converter Standalone 4.3. The source box:

        Windows Server 2008R2
        24 GB RAM
        Dual quad-cores
        1 RAID 5 drive, 680 GB:
            3 GB "Recovery" partition
            40 GB OS partition
            637 GB data partition

    When we try to convert, using both the standalone converter and the built-in vCenter converter, no source volumes are listed. Has anyone dealt with this before?

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into my data warehouse. The data I was importing from the business database into the warehouse was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then to run a stored proc in the warehouse that was the MERGE statement taking the rows from the working table and updating the real fact table.

        USE Warehouse

        CREATE TABLE Integration.MergePolicy (
            PolicyId int,
            PolicyTypeKey int,
            Premium money,
            Deductible money,
            EffectiveDate date,
            Operation varchar(5)
        )

        CREATE TABLE fact.Policy (
            PolicyKey int identity primary key,
            PolicyId int,
            PolicyTypeKey int,
            Premium money,
            Deductible money,
            EffectiveDate date
        )

        CREATE PROC Integration.MergePolicy as
        begin
            begin tran

            Merge fact.Policy as tgt
            Using Integration.MergePolicy as Src
            On (tgt.PolicyId = Src.PolicyId)
            When not matched by Target then
                Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
                values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
            When matched and src.Operation = 'U' then
                Update set PolicyTypeKey = src.PolicyTypeKey,
                           Premium = src.Premium,
                           Deductible = src.Deductible,
                           EffectiveDate = src.EffectiveDate
            When matched and src.Operation = 'D' then
                Delete
            ;

            delete from Integration.MergePolicy

            commit
        end

    Notice that my work table (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, about 1.5 million rows were being inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking.

    This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement, 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans. (I was beginning now to suspect that my problem was because the work table was being stored as a heap.) Then I turned on SET STATISTICS IO and ran the sproc again. The output was quite interesting.

        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    I've reproduced the above from memory; the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages to tables stored as a heap, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.) I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables.

    Mike

    Read the article

  • Group policy not applying to security group

    - by ihavenoideawhatimdoing
    Preface: I have enough privileges to create GPOs in my OU, and have made a few of them for some simple tasks (like deploying a printer to certain users). Not actually a sysadmin... I'm a developer who is winging it. I wanted to create a GPO that would set a mapped folder for a certain security group (which I recently created and which contains only myself). I did the following:

    1. Created the GPO in MyOU - Users
    2. Removed the default Authenticated Users under Security Filtering
    3. Added the security group with my account to Security Filtering
    4. Set up the mapping via the User Configuration option
    5. Changed GPO Status to "Computer configuration settings disabled"
    6. Left WMI filtering unset
    7. Closed the GPO at this point

    Then:

    1. Logged in as the target user; ran gpupdate /force
    2. Logged out, logged in, ran gpresult /r; no mention of my GPO
    3. Rebooted
    4. Logged in, re-ran gpupdate /force
    5. Logged out, logged in, ran gpresult /r; still no mention of my GPO

    If I log in with a completely different user, their RSOP information shows that the new GPO is being ignored due to a security restriction, so it appears to be "working" for other users. I just can't get it to actually show up in RSOP for the user it should apply to. Is there anything else I can do, short of rebooting endlessly and crossing my fingers?
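    One frequent cause of exactly this pattern, worth checking: removing Authenticated Users from Security Filtering can also strip the Read permission needed even to see the GPO, so filtering by a custom group generally requires re-adding Authenticated Users with plain Read (not Apply) under the Delegation tab. A sketch with the GroupPolicy module (the GPO name is an example):

        # Sketch: give Authenticated Users Read (but not Apply) on the GPO.
        Import-Module GroupPolicy

        Set-GPPermission -Name 'Map Folder GPO' -TargetName 'Authenticated Users' `
            -TargetType Group -PermissionLevel GpoRead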

    Read the article

  • trying to install Lync 2010 and experiencing an error with the Central Management Store

    - by Itai Ganot
    I'm trying to install Lync 2010, and I'm getting stuck at the stage where I have to install or point to the local configuration store. I've tried finding it in the domain, without luck. Any recommendations?

        PS C:\Users\Administrator.ASUTA> Get-CsConfigurationStoreLocation
        WARNING: No Configuration Store location has been set.

        PS C:\Users\Administrator.ASUTA> Get-CsComputer "$env:computername.$env:userdnsdomain"
        Get-CsComputer : Cannot find location of Central Management Store in Active Directory.
        At line:1 char:15
        + Get-CsComputer <<<< "$env:computername.$env:userdnsdomain"
            + CategoryInfo          : ResourceUnavailable: (:) [Get-CsComputer], ManagementStoreNotFoundException
            + FullyQualifiedErrorId : ManagementStoreNotFound,Microsoft.Rtc.Management.Xds.GetComputerCmdlet
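    For a first server the Central Management Store does not exist until the topology has been prepared and published, so these cmdlets have nothing to find. The usual sequence is to run "Prepare first Standard Edition server" from the Deployment Wizard and then publish the topology from Topology Builder, which writes the CMS location into Active Directory. A sketch of roughly what that wizard step does (treat the exact cmdlet sequence as an assumption; FQDN and instance are placeholders, RTC being the Standard Edition default):

        # Sketch: create the CMS database and register its location in AD.
        Install-CsDatabase -CentralManagementDatabase -SqlServerFqdn lync01.asuta.local -SqlInstanceName rtc
        Set-CsConfigurationStoreLocation -SqlServerFqdn lync01.asuta.local -SqlInstanceName rtc
        Enable-CsTopology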

    Read the article
