Search Results

Search found 14708 results on 589 pages for 'sqlserver 2008'.

  • How to configure CruiseControl.Net for Windows Authentication?

    - by balu
    I am using CruiseControl.NET for continuous integration. Dashboard access currently goes through the login plugin, which authenticates and authorizes users against a set of users saved as an XML file on the CruiseControl.NET server. I now need to bring Windows Authentication into the system, so that when the web dashboard is accessed from a client machine (a local machine joined to a common server), the user is authenticated and authorized to use CruiseControl.NET features based on the logged-in user's permissions. Kindly guide me on how to go ahead with this; any resources that would help achieve it are appreciated. Thanks.

  • DFS keeps replicating almost all files

    - by Adrian Godong
    We have always had problems with DFS, but recently it has gotten worse (for no apparent reason) to the point that it's becoming harmful. We have one master server and DFS connections to four other servers. The four servers don't modify any files, so all replication always propagates from the master to the other four. The replicated directory has about 900,000 files. In recent weeks, every time we check DFS, the backlogs hold hundreds of thousands of files. For instance, right now the master server is replicating about 700,000 files to three of the four servers while the fourth one is fine. Sometimes only one is off, sometimes two, and this time three; it is never the same set of servers. It is inconceivable that something periodically touches all 900,000 files. The biggest change that happens is a scheduled update of several thousand files every six hours. Does anybody have the same problem? Is it a known issue?
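
    A useful first check is to measure the per-connection backlog from the command line rather than guessing. A minimal sketch using the built-in dfsrdiag tool, assuming DFS-R (not legacy FRS) replication; the group, folder and server names here are placeholders, not from the question:

        REM Count files pending from the master to one downstream server.
        dfsrdiag backlog /rgname:"Data Replication Group" /rfname:"DataFolder" /sendingmember:MASTER01 /receivingmember:BRANCH01

        REM Show what the master is replicating right now.
        dfsrdiag replicationstate /member:MASTER01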

  • How to change the default domain controller when querying AD in a different site?

    - by Linefeed
    We have two different locations, and at both sites we have multiple domain controllers (Win2008). In our application we use serverless binding to execute our LDAP queries (http://msdn.microsoft.com/en-us/library/ms677945(v=vs.85).aspx). If we look at the DnsHostName of LDAP://RootDSE on site B, we always get the default domain controller of site A, so all LDAP queries run much more slowly. Is there a way to change the default domain controller per site?
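
    For reference, the difference is only in the ADsPath: serverless binding lets the DC locator pick a domain controller, while a server-qualified path pins the bind to a specific DC. A hedged VBScript/ADSI sketch; the server name is a placeholder:

        ' Serverless binding - the locator chooses the DC (site A, in your case):
        Set rootDse = GetObject("LDAP://RootDSE")

        ' Server-qualified binding - forces the query against a DC in site B:
        Set rootDse = GetObject("LDAP://siteb-dc01.example.com/RootDSE")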

  • Windows server backup question

    - by serveradminguy
    Is it possible to make a backup of my Windows Server with the built-in Microsoft tool such that, as long as my Hyper-V virtual machines are stored safely elsewhere (they are not part of the backup), I can still restore Windows Server from the native backup and keep using the Hyper-V machines? In other words, if I lost my C:\ and my VMs are stored remotely, could I restore from an earlier backup and use those VMs?

  • How to log on with a local account? RODC: "There are no logon servers to process your request"

    - by g18c
    I have a site-to-site VPN, a writable DC in the main office, and a read-only DC (RODC) at my site. Today the VPN went down, and I couldn't log in to the read-only DC; the error message was "There are no logon servers to process your request". Since the RODC is a domain controller, there is no local administrator account. How can I ensure that I am always able to log on to the RODC with a known account in an emergency, when the writable DC is not available?

  • How to see when stored procedures have last run

    - by Brandon Moore
    I want to see a listing of all the stored procedures in each database on a server, along with when each stored procedure was last run. I'm pretty good with SQL, but I don't know about the statistics SQL Server keeps for this, so I'd appreciate a little help finding this info. EDIT: From the answers I'm getting, it sounds like this is not possible the way I thought it would be. I was thinking it could be done similarly to how you can see when a table was last accessed:

        SELECT t.name, user_seeks, user_scans, user_lookups, user_updates,
               last_user_seek, last_user_scan, last_user_lookup, last_user_update
        FROM   sys.dm_db_index_usage_stats i
               JOIN sys.tables t ON (t.object_id = i.object_id)
        WHERE  database_id = DB_ID()

    The script above was borrowed from a comment on http://blog.sqlauthority.com/2009/05/09/sql-server-find-last-date-time-updated-for-any-table/.
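
    For stored procedures specifically, SQL Server 2008 and later expose a similar DMV, sys.dm_exec_procedure_stats. The caveat is that it only reflects procedures whose plans are still in the plan cache, so it is not a complete history. A sketch:

        SELECT DB_NAME(database_id)                 AS database_name,
               OBJECT_NAME(object_id, database_id)  AS procedure_name,
               execution_count,
               last_execution_time
        FROM   sys.dm_exec_procedure_stats
        ORDER  BY last_execution_time DESC;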

  • Issue resolving names on Hyper-V guest with Routing and Remote Access

    - by John Sheehan
    I've got a Win2k8 Standard server running Hyper-V, with a Server 2003 web guest instance. The host is publicly available on the internet. I've created an internal private network in the Hyper-V Virtual Network Manager, set the host IP for that virtual adapter to 192.168.0.1, and set the IP on the guest to 192.168.0.2. They can ping each other and share files, but I can't browse the web on the guest, even though NSLOOKUPs are working. I've tried setting the DNS server on the guest to 192.168.0.1 and to something external like Google's 8.8.8.8, to no avail. Windows Firewall is disabled on the internal virtual network, and I've tried it both with DNS installed on the host and without it. I'm not sure which RRAS/NAT settings are relevant to pass on, so ask if you need me to clarify anything. How do I get outbound internet working on the guest VM?
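
    One thing worth ruling out before digging into RRAS: make sure the guest actually has the host (192.168.0.1) set as its default gateway, since nslookup can succeed while all other outbound traffic fails. A sketch to run inside the 2003 guest; the adapter name is an assumption:

        REM Static address with the host as the default gateway (metric 1):
        netsh interface ip set address "Local Area Connection" static 192.168.0.2 255.255.255.0 192.168.0.1 1

        REM Point DNS at an external resolver (or at the host, if DNS is installed there):
        netsh interface ip set dns "Local Area Connection" static 8.8.8.8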

  • Advantages of multiple SQL Server files with a single RAID array

    - by Dr Giles M
    Originally posted on Stack Overflow, but re-worded. Imagine the scenario: for a database I have RAID arrays R: (MDF) and T: (transaction log), and of course shared transparent usage of X: (tempdb). I've been reading around and get the impression that if you are using RAID, then adding multiple SQL Server NDF files sitting on R: within a filegroup won't yield any further improvement. Of course, adding another RAID array S: and putting an NDF file on that would. However, being a reasonably savvy software person, it's not unthinkable to hypothesise that, even for smaller MDFs sitting on one RAID array, SQL Server performs growth and locking operations (for writes) on the MDF, so adding NDFs to the filegroup, even if they sat on R:, would distribute the locking and growth operations and allow more throughput. Or does the time taken to reconstruct the data from distributed filegroups outweigh the benefit of reduced locking? I'm also aware that the behaviour and benefits may differ for tables, indexes and logs. Is there a good site that explains the benefits of multiple files when RAID is already in place?
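
    For what it's worth, adding files to a filegroup is cheap to experiment with, and pre-sizing them removes most growth contention regardless of how many files you use. A hedged sketch of the layout described above; the database name, file names and sizes are placeholders:

        -- Two extra data files in the PRIMARY filegroup, still on the R: array.
        ALTER DATABASE MyDb
        ADD FILE (NAME = MyDb_Data2, FILENAME = 'R:\Data\MyDb_Data2.ndf',
                  SIZE = 10GB, FILEGROWTH = 1GB);

        ALTER DATABASE MyDb
        ADD FILE (NAME = MyDb_Data3, FILENAME = 'R:\Data\MyDb_Data3.ndf',
                  SIZE = 10GB, FILEGROWTH = 1GB);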

  • MSMQ Resilience

    - by Paddy Carroll
    I have a requirement for a resilient MSMQ setup on VMware ESX 5. I am aware that we cannot allow the queue storage to be shared, as it must live on a physical disk mount; e.g. it can't be a CIFS or DFS share. The following constraints apply: we don't use Windows clustering, and we don't rely on hot standbys. Is there a way I can replicate the queue storage to another platform so that it can assume MSMQ duties on failure of the primary platform, using any method including queue forwarding?

  • IIS running but not serving content

    - by Kyle
    I have an internal dev server running Win2k8 R2 with the Web and FTP Server roles set up, which won't serve any content at all. Trying to connect from another host via telnet yields a connection failure:

        c:\>telnet devserver 80
        Connecting To devserver...Could not open connection to the host, on port 80: Connect failed

    Using netstat -an | find "80" on the dev server returns no connections on port 80 (a few on 1801, etc.); TcpView confirms this, listing no open connections on port 80. The following services related to the Web role are running: World Wide Web Publishing Service, Application Host Helper Service, Microsoft FTP Service (FTP connections to port 21 are granted), and Windows Process Activation Service. The default website bindings are:

        http             port 80, IP *
        net.tcp          808:*
        net.pipe         *
        net.msmq         localhost
        msmq.formatname  localhost

    When setting up a new application under the default site, the test function passes both connection and authorisation only if the 'connect as' user is a local admin; otherwise the test errors with 'invalid application path'. At no point is the W3SVC service PID bound to port 80 (it is running and bound to 21 for FTP). There is no W3SVC log directory at c:\inetpub\logs\LogFiles\ (only FTPSVC2), and no HTTPERR directory at c:\windows\system32\ or c:\windows\system32\logfiles\. There do not appear to be any related errors in the event logs. I'd really appreciate any thoughts on a good place to dig into what's (not) going on here!
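
    Two quick checks that usually narrow this down: ask HTTP.SYS what it is actually listening on, and ask IIS what bindings it thinks it has. A sketch using tools that ship with Windows 2008 R2 / IIS7:

        REM What HTTP.SYS is listening on and which URL groups are registered:
        netsh http show servicestate

        REM What IIS believes its sites and bindings are:
        %windir%\system32\inetsrv\appcmd.exe list sites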

  • Installing the DC role again after removing it on an Exchange server

    - by Kawharu
    I had a DC and Exchange 2010 installed on the same machine. I removed the DC role, and Exchange went crazy. I tried to install the DC role again to fix the problem, but ran into this error when running DCPROMO:

        Active Directory Domain Services could not create the NTDS Settings object for this Active Directory Domain Controller CN=NTDS Settings,CN=DC1,CN=Servers,CN=Manukau,CN=Sites,CN=Configuration,DC=AccessGroupnz,DC=com on the remote AD DC Server1.AccessGroupnz.com. Ensure the provided network credentials have sufficient permissions. "The DSA operation is unable to proceed because of a DNS lookup failure."

    Do you think I need to run this in an elevated command prompt, or change credentials somewhere to a domain admin? Or is it something else?
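
    Since the error itself blames a DNS lookup failure, it is worth confirming that this box can resolve the domain's DC locator records before re-running DCPROMO. A sketch, with the names taken from the error message:

        REM Can we find a DC for the domain via DNS?
        nslookup -type=SRV _ldap._tcp.dc._msdcs.AccessGroupnz.com

        REM Broader DNS health check against the remaining DC:
        dcdiag /s:Server1.AccessGroupnz.com /test:dns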

  • "The zone can be scavenged after" keeps incrementing

    - by kce
    What are you trying to do? I'm trying to enable DNS scavenging on a DNS zone that has about a hundred stale DNS records.

    What have you tried in order to make it happen? I set up DNS scavenging per everyone's favorite TechNet blog post, "Don't be afraid of DNS Scavenging. Just be patient." I first limited scavenging of the zone to specific domain controllers:

        DNSCmd . /ZoneResetScavengeServers contoso.com 192.168.1.1 192.168.1.2

    I then enabled automatic scavenging on the DNS zone and enabled DNS scavenging on one of the domain controllers. I found a few records with timestamps from a few years ago that I expected to get deleted, and verified that "Delete this record when it becomes stale" was checked and that a timestamp was actually set. Finally, I reloaded the zone and waited 14 days (the sum of the Refresh + No-Refresh periods).

    What results did you expect? I expected to see a 2501 event in the DNS server logs noting the deletion of a bunch of DNS records.

    What actually happened? Nothing. Last week the Zone Aging/Scavenging Properties showed that the zone could be scavenged after 6/12/2014 10:00:00 AM; no 2501/2502 events were recorded, and all of the records with aged timestamps are still present. The date after which the zone can be scavenged has since incremented another seven days, to 6/18/2014 10:00:00 AM. As I understand it, until that date sits at least 14 days in the past, nothing will even be eligible for scavenging, let alone actually be scavenged. The only 2501 events recorded in the logs are ones I triggered myself by right-clicking and selecting "Scavenge Stale Resource Records"; they note that scavenging will try to run again in 168 hours, which was this morning. I have had DNS scavenging enabled for a few months and have waited patiently for something to happen, and I have reloaded the zone multiple times (which resets this timestamp). What am I missing here?
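
    If you want to see what the server itself believes, dnscmd can dump the zone's aging state and force a scavenging pass on demand instead of waiting for the timer. A sketch, run on the DC that has scavenging enabled:

        REM Zone-level aging settings (refresh/no-refresh windows, scavenge servers):
        dnscmd /ZoneInfo contoso.com

        REM Server-wide scavenging interval:
        dnscmd /Info /ScavengingInterval

        REM Kick off a scavenging pass immediately:
        dnscmd /StartScavenging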

  • Virtual system drive is split between separate LUNs

    - by Tigran
    My VMware hardware guy told me that a Win2008R2 server I have has a D: drive split between two separate LUNs. He could not tell me whether that's a good thing or a bad one, just that it's not standard practice for him. Would you please explain the benefits or drawbacks of this setup? Thanks.

    EDIT: Some additional info. What happened was that I already had the D: drive allocated, then asked for more space. They said there was no more room on whatever LUN my D: drive is on, so the option they gave me was that part of the D: drive would sit on one LUN and the other part on another LUN. Hope that helps.

  • Is there a Kerberos testing tool?

    - by ixe013
    I often use openssl s_client to test and debug SSL connections (to LDAPS or HTTPS services). It allows me to isolate the problem down to SSL, without anything getting in the way. I know about klist, which allows me to purge the ticket cache. Is there a tool that would allow me to request a Kerberos ticket for a given server without even sending it? Just enough to see the whole Kerberos exchange in Wireshark, for example?
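
    On Windows 7 / Server 2008 R2 and later, the built-in klist can do this: klist get requests a service ticket for an arbitrary SPN from the KDC without ever contacting the target service, which is enough to capture the full TGS exchange in Wireshark. A sketch; the SPN is a placeholder:

        REM Start with an empty cache so the AS exchange is captured too:
        klist purge

        REM Request a service ticket for a given SPN:
        klist get HTTP/webserver.example.com

        REM Show what is now in the ticket cache:
        klist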

  • Event ID 8021 The browser was unable to retrieve a list of servers from the browser master

    - by Ash
    We have a LAN where workstations randomly lose network connectivity for brief moments, and workstations can also take a long time to log in to the domain. While troubleshooting, we have found this warning logged on a few Windows 7 workstations:

        Warning BROWSER 8021: The browser was unable to retrieve a list of servers from the browser master \\random-pc on the network \Device\NetBT_Tcpip_{BBABCDE9-D8A0-4399-93F2-492FE0848B12}. The data is the error code.

    What do these errors mean? Which computers should have the Computer Browser service enabled: workstations, servers, or both? The environment is a mix of Windows 7 and Windows XP workstations on a Windows Server SBS 2011 SP1 domain.
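
    A common arrangement is to leave the Computer Browser service running on the SBS server (so it reliably wins browser elections) and disable it on workstations, so that a random PC like \\random-pc never becomes master browser. A hedged sketch for one workstation (the machine name is a placeholder); doing this via Group Policy scales better:

        REM Stop the Computer Browser service and keep it from starting again:
        sc \\WORKSTATION01 stop browser
        sc \\WORKSTATION01 config browser start= disabled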

  • Symantec Protection Suite Enterprise Edition

    - by rihatum
    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team plans to deploy as follows:

        1 x dedicated SQL 2008 R2 server for Symantec Endpoint Protection (instead of the embedded database)
        1 x dedicated SQL 2008 R2 server for Symantec Desktop Recovery 2011 (instead of the embedded database)
        1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager management application)
        1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application

    Agent deployment: per the Symantec documentation for both products, an agent can be pushed via the management application (provided no firewalls block the required ports; we already have Windows Firewall disabled). That is the initial plan for the 3000-4000 Windows client workstations. Now my questions:

        a) If these users were distributed across two sites with an AD DC/GC in each site, how would I restrict the SEPM and Desktop Recovery management solutions to only check for users in their respective site?
        b) At present all users are in one building, but some departments are moving to a new location (with dedicated connectivity). How would we control which SEPM/management server is responsible for which site?
        c) What hardware would you recommend as a server spec for the SQL servers: 16 GB RAM, dual Xeon?
        d) What hardware would you recommend as a server spec for the management servers: 16 GB RAM each, dual Xeon and SAS disks?
        e) How would you recommend protecting these four servers themselves (2 x SQL and 2 x management)?
        f) How would you recommend storing backups for these desktops? We have a SAN and a NAS in our environment, plus one spare DAS (Dell MD3000).

    If you have anything to add or correct, that would be really helpful before diving into the actual implementation phase. Most grateful for your suggestions, recommendations and corrections. Many thanks! Rihatum

  • PHP ignores upload_tmp_dir?

    - by Matthias Vance
    LS, I am using IIS7 with PHP (FastCGI). I set upload_tmp_dir to "X:\Temp" instead of leaving it empty, but PHP is still using "C:\Windows\Temp" for some reason. I gave the following users full rights: NETWORK SERVICE, , IIS_IUSRS. I also restarted IIS after making the change. Kind regards, Matthias Vance
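
    A quick way to see what the running FastCGI process actually uses is a short test script: php_ini_loaded_file() shows whether the php.ini you edited is the one being read, and sys_get_temp_dir() is where PHP silently falls back when upload_tmp_dir is unset or not writable by the process identity. A sketch:

        <?php
        // Which ini file is loaded, and how do the temp-dir settings resolve?
        var_dump(php_ini_loaded_file());
        var_dump(ini_get('upload_tmp_dir'));
        var_dump(sys_get_temp_dir());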

  • Moving to a new dedicated server; need advice on the new setup

    - by Eric Martin
    I currently have a dedicated server that we have outgrown, and we are moving to another one. Our current setup is an 8 GB W2008R2 server running a W2008R2 IIS virtual machine under VMware. We are moving to a 2-CPU, 24 GB server with W2012 R2 and Hyper-V. On our virtual machine we run IIS 7.5 and SQL Server. SQL Server seems to want to eat up all the memory, so I had to cap it at 2 GB, which doesn't seem sufficient. My question is: when I move the virtual machine to the new server, should I create two virtual machines, one for SQL Server and one for IIS? Should I leave them both on the same virtual machine? Or should I even put SQL Server on the physical server and run IIS in the VM? I'd like some input on how this should be done; I haven't got the experience needed to make the right call. Thanks!
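
    Whichever split you choose, the usual first step is to cap SQL Server explicitly rather than letting it contend with IIS for memory. A sketch; the 16 GB figure is a placeholder to be sized against whatever RAM the VM gets:

        -- Cap SQL Server's memory so IIS (or the host) keeps some RAM.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 16384;
        RECONFIGURE;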

  • SQL Server Column Level Encryption - Rotating Keys

    - by BarDev
    We are thinking about using SQL Server column (cell) level encryption for sensitive data. There should be no problem when we initially encrypt the column, but we have a requirement that the encryption key change every year, and that requirement may be a problem. Assumption: the table that includes the sensitive column will have 500 million records. Below are the steps we have thought about implementing; during the encryption/decryption process, is the data online, and how long would the process take?

        1. Initially encrypt the column.
        2. At the new year, decrypt the column.
        3. Encrypt the column with the new key.

    Question: while the column is being decrypted/encrypted, is the data online (available to query)? Does SQL Server provide a feature that allows key changes while the data stays online? BarDev
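
    The usual pattern keeps the data online by re-encrypting in place: open both the old and the new symmetric key, then rewrite the column in small batches so each UPDATE only locks a slice of the 500 million rows. A hedged sketch; the key, certificate, table and column names are placeholders, and KeyVersion is a hypothetical tracking column marking rows already re-encrypted:

        -- Both keys open at once: DECRYPTBYKEY finds the old key via the
        -- key GUID embedded in each ciphertext value.
        OPEN SYMMETRIC KEY SSN_Key_2013 DECRYPTION BY CERTIFICATE SSN_Cert;
        OPEN SYMMETRIC KEY SSN_Key_2014 DECRYPTION BY CERTIFICATE SSN_Cert;

        DECLARE @rows int = 1;
        WHILE @rows > 0
        BEGIN
            -- Small batches keep transactions, locks and the log manageable.
            UPDATE TOP (10000) dbo.Customers
            SET    SSN_Encrypted = ENCRYPTBYKEY(KEY_GUID('SSN_Key_2014'),
                                                DECRYPTBYKEY(SSN_Encrypted)),
                   KeyVersion    = 2014
            WHERE  KeyVersion < 2014;

            SET @rows = @@ROWCOUNT;
        END

        CLOSE ALL SYMMETRIC KEYS;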

  • IIS7 defaulting to default.php instead of default.aspx

    - by emzero
    Hi guys. My client has just got a new dedicated server running Win2008 (we had 2003 before), IIS7, etc. I started setting up a little ASP.NET 2.0 web application we have, running in its own .NET 2.0 application pool. The problem is that when I browse the site root (locally or remotely), I get a 404 because the URL now points to http://domain/default.php, when it should be default.aspx. Yes, I've checked the Default Documents settings for the website, and I deleted everything but default.aspx (default.php was not even listed). Finally, if I navigate to http://domain/default.aspx, the site works perfectly and I can follow links without problems. Any idea why this is happening? Or at least where I should start looking? Thanks!
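
    When the GUI looks right but the behaviour doesn't match, appcmd can show the effective default-document list (merged across config levels) and rewrite it. A sketch; "Default Web Site" is the IIS default name and may differ on your box:

        REM Effective default documents for the site:
        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" /section:defaultDocument

        REM Remove the stray entry and put default.aspx at the front:
        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" /section:defaultDocument /-"files.[value='default.php']"
        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" /section:defaultDocument /+"files.[@start,value='default.aspx']"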
