Search Results

Search found 16890 results on 676 pages for '2008 archive'.


  • Adding a W2008 authenticating server to an existing W2003 domain?

    - by spelk
    I have an existing W2003 domain, a simple setup with one DC and a SQL Server (approx 100 users). There are issues with Windows 7 clients and login scripts, and we're now seeing much greater numbers of Windows 7 users turning up as people upgrade their PCs/laptops. What I want to do is add another server with W2008 on it and authenticate the Windows 7 clients, but leave the W2003 server running as is, to prevent disruption to the network and the existing WinXP users. Is it possible? Any advice on how to do this without major disruption to the W2003 network?
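
    If the plan is to make the new W2008 box an additional domain controller, the forest and domain have to be prepared first. A hedged sketch of the usual steps, assuming a standard W2003 forest (adprep comes from the W2008 installation media and runs on the existing W2003 schema master before the new server is promoted):

        # Sketch only: run from the W2008 media's adprep folder on the W2003 DC
        # holding the schema master role.
        adprep /forestprep
        adprep /domainprep /gpprep

        # Then, on the new W2008 server, promote it to a DC:
        dcpromo

    Existing WinXP clients keep authenticating against whichever DC answers, so the W2003 DC can stay in place untouched.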

  • Automate new AD user's home folder creation and permission setup

    - by vn.
    I know that if we set up a base folder or a profile path in the Profile tab of an AD user, we can copy that user, and the folder creation and permission setup will be automated. My problem is that not all my users have a roaming profile, and the home folder linking is done through GPO. When I copy from these users, the home folder isn't created automatically, and I have to create it manually and change the permissions and ownership on that folder, located on the file server. What should I do? A script might be nice, but it would have to run every time a new user is created, and I don't think we can hook a script into AD user creation. I'd like to avoid any manual steps and keep my GPO as it is. Using a W2008R2 DC with Windows 7 client boxes. Thanks.
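
    Since there is no built-in hook that fires on AD user creation, one workaround is a scheduled task that polls for recently created accounts and provisions their folders. A minimal sketch, assuming the ActiveDirectory module on the W2008R2 DC; \\fileserver\home$ is a placeholder share:

        # Sketch: provision home folders for accounts created in the last day.
        Import-Module ActiveDirectory

        $since = (Get-Date).AddDays(-1)
        Get-ADUser -Filter * -Properties whenCreated |
            Where-Object { $_.whenCreated -ge $since } |
            ForEach-Object {
                $path = "\\fileserver\home$\$($_.SamAccountName)"
                if (-not (Test-Path $path)) {
                    New-Item -Path $path -ItemType Directory | Out-Null
                    # Grant the user Modify rights, then hand over ownership.
                    icacls $path /grant "$($_.SamAccountName):(OI)(CI)M"
                    icacls $path /setowner $_.SamAccountName
                }
            }

    Scheduled to run hourly, this keeps the GPO-based home folder linking intact while removing the manual folder/ACL step.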

  • How do quotes/strings work in PowerShell?

    - by Casey
    I have a command line that works in the regular old Windows command shell, but somehow gets misinterpreted in PowerShell (I'm fairly new to PowerShell):

        sqlcmd -S .\SQLEXPRESS -i "f:\SQLBackups\ExpressMaint.sql" -v DB="ksuite" -v OPTYPE="DB" -v BACKUPFOLDER="f:\SQLBackups" -v REPORTFOLDER="f:\SQLBackups\Reports" -v DBRETAINUNIT="days" -v DBRETAINVAL="7"

    PowerShell seems to be stripping the drive letters out of the arguments that require paths. For example, I get the following when I attempt to run the above command in PowerShell:

        Sqlcmd: ':\SQLBackups': Invalid argument. Enter '-?' for help.

    Well, sure it's invalid without the drive letter. I have tried variations on double quoting it, escaping it, etc., but can't get it to work. What am I missing that PowerShell does differently?
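
    Two hedged workarounds, both aimed at keeping PowerShell from tokenizing the name="value" pairs before sqlcmd sees them (the stop-parsing token requires PowerShell 3.0 or later):

        # Option 1 (PowerShell 3.0+): --% passes the rest of the line verbatim.
        sqlcmd --% -S .\SQLEXPRESS -i "f:\SQLBackups\ExpressMaint.sql" -v DB="ksuite" -v OPTYPE="DB" -v BACKUPFOLDER="f:\SQLBackups" -v REPORTFOLDER="f:\SQLBackups\Reports" -v DBRETAINUNIT="days" -v DBRETAINVAL="7"

        # Option 2: single-quote each pair so it reaches sqlcmd as one literal token.
        sqlcmd -S .\SQLEXPRESS -i 'f:\SQLBackups\ExpressMaint.sql' `
            -v 'DB="ksuite"' -v 'OPTYPE="DB"' -v 'BACKUPFOLDER="f:\SQLBackups"' `
            -v 'REPORTFOLDER="f:\SQLBackups\Reports"' -v 'DBRETAINUNIT="days"' -v 'DBRETAINVAL="7"'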

  • Can't Remote into Windows Server

    - by Brian
    Hello. I have a Dell server wired into the router. I was able to connect to it with my laptop (the laptop is wireless) before my router died. My Verizon router went kaput, and I got everything else back up and running on the wireless network except remoting in (Remote Desktop), even though I can access the server through Windows Explorer just fine. Any ideas why? What do I need to check? UPDATE: Interesting scenario: Network Discovery is off; I turn it on and save, but for some reason, even after that, Network Discovery turns itself back off. No idea why that is happening. Thanks.
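
    A hedged guess at the self-reverting toggle: Network Discovery silently switches back off when the services it depends on are not running. A sketch that sets them to start automatically (the service names assumed here are the standard ones on Vista/2008-era Windows):

        # Sketch: Network Discovery depends on these services; if any of them
        # is stopped, the "on" setting does not stick.
        foreach ($svc in 'Dnscache', 'FDResPub', 'SSDPSRV', 'upnphost') {
            Set-Service -Name $svc -StartupType Automatic
            Start-Service -Name $svc
        }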

  • MSSQLServer2008\Instance, Why?

    - by Ice
    I'm aware of the possibility of creating instances, but I don't know a really good reason to do it. This way one has, by definition, at least two SQL Server services running; what is that good for? The two instances have to share all the resources, mainly the RAM. And if you have to rename the server, you will end up with an access path like NEWSQLServer\OldInstanceName. So what is the use case for instances?

  • File corruption (bad checksums) in large files copied to VMware guest

    - by AllanA
    In setting up a development lab, I've got a desktop system running ESXi 4.1.0 (free license) on SATA RAID 0 (already purchased and configured when I started this job; I'm open to hardware input as it pertains to my problem). Its guests so far include two Win2008 Server R2 64-bit VMs and one Ubuntu 10.04 64-bit VM. I'm installing onto the Windows servers. We've been copying over some fairly large files (over a gigabyte) for an installation, hoping to install more quickly from a (virtual) hard drive than from the network or from BD-ROM. The problem is that they keep coming up with different checksums from the originals. The file sizes are the same, but md5sum reports different numbers (and so does the installer, as it refuses to continue when the checksums don't match). I've tried copying directly from the BD-ROM (attaching the host system's physical drive to the guest OS). I've tried copying the large files onto a co-worker's Windows machine from his Blu-ray drive; when I do that, the checksums match. But when I copy from his machine to the VM guest over a network share, the checksums no longer match. Thinking this meant a corrupt destination drive, I deleted it in vSphere and added another freshly created drive. The problem persists. I'm not sure what to try next.
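
    One way to localize where the corruption creeps in is to hash the file at every hop (source machine, host datastore, guest). A minimal sketch using certutil, which ships with Windows; C:\install\image.zip is a placeholder path:

        # Sketch: run the same command on the source machine and in the guest;
        # the first hop where the hashes diverge is the copy step at fault.
        certutil -hashfile C:\install\image.zip MD5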

  • Can I use dissimilar HW for Win2008R2 DFS-R?

    - by cwheeler33
    The setup: Windows 2008R2 Enterprise on two machines. The roles on each server will include File Services and DC. The machines come from two different vendors (Dell/HP); the Dell is an Athlon and the HP is an Intel. Both have roughly the same CPU speed and 8GB of RAM. They have different RAID controllers and more or less the same amount of disk space (roughly 6TB). Can the servers use different types of hardware? Is there any documentation about this? The last question I have is about the network: can DFS-R be forced to use a different subnet from the regular network?

  • Why is file sharing over the internet still working, despite all firewall exceptions for file sharing being disabled?

    - by Triynko
    Every exception in my Windows Server firewall that starts with "File and Printer Sharing" is disabled (ordered by name, so that includes the domain, public (active), and private profiles). The Network and Sharing Center's options for everything except password protected sharing are off. Why would I still be able to access a network share on that server via an address like "\\my.server.com\" over the internet? The firewall is on for all profiles and blocking incoming connections by default. A "netstat -an" command on the server reveals the share connection is occurring over port 445 (SMB). I restarted the client to ensure it was actually re-establishing a new connection. Is the "Password protected sharing: On" option in Network and Sharing Center bypassing the firewall restrictions, or adding some other exception somewhere that I'm missing? EDIT: "Custom" rules are not the problem; it's the built-in rules for Terminal Services that were the problem. Can you believe port 445 (the file sharing port) has to be wide open to the internet to use Terminal Services Licensing?
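
    One hedged way to hunt down which rule is opening SMB: dump every inbound rule and search around port 445 (the exact netsh output format varies by build, so treat this as a sketch):

        # Sketch: print context around any inbound rule whose port list
        # mentions 445.
        netsh advfirewall firewall show rule name=all dir=in |
            Select-String 'LocalPort:.*445' -Context 9, 2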

  • Active Directory management with low user rights

    - by DemonWareXT
    Our problem: the client, a normal user, has to be able to reset multiple passwords at once, around 30 in one go. This would call for PowerShell or something along those lines, but for AD and PowerShell one seemingly needs to be a domain administrator. My solution would be to make a service that runs on the AD server and takes connections from a program; the service would then make the AD changes. So far so good, but I would just like to hear some other thoughts on this problem, because I surely can't be the only one who has it.
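
    For what it's worth, resetting passwords does not require domain admin if the "Reset password" right is delegated on the OU, which may remove the need for a custom service. A minimal sketch with the ActiveDirectory module; users.txt (one sAMAccountName per line) is a placeholder input file:

        # Sketch: bulk-reset ~30 accounts and force a change at next logon.
        # Runs under the delegated user's own credentials.
        Import-Module ActiveDirectory
        $newPass = Read-Host -AsSecureString 'New password'
        Get-Content .\users.txt | ForEach-Object {
            Set-ADAccountPassword -Identity $_ -Reset -NewPassword $newPass
            Set-ADUser -Identity $_ -ChangePasswordAtLogon $true
        }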

  • How to send an address into the void using the hosts file, without using 127.0.0.1?

    - by magallanes
    I have some hostnames that I want to send straight into the void using the HOSTS file, but I don't want to use 127.0.0.1. How can I do that? Why? I want to speed up some processes, but 127.0.0.1 is serving a web server, so if I use 127.0.0.1 then those processes will call my web server, consuming resources and maybe delaying the work. Right now I am using 0.0.0.0 instead of 127.0.0.1, but I am not sure if it is correct:

        0.0.0.0 crl.microsoft.com

  • PowerShell vs. GPO for installation, configuration, maintenance

    - by user52874
    My question is about using PowerShell scripts to install, configure, update, and maintain Windows 7 Pro/Ent workstations in a 2008R2 domain, versus using GPO/ADMX/MSI.

    Here's the situation: because of a comedy of cumulative corporate bumpfuggery, we suddenly found ourselves having to design, configure, and deploy a full Windows Server 2008R2 and Windows 7 Pro/Enterprise rollout on very short notice and a tight delivery schedule. Of course, I'm not a Windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate' and 'one-button' and 'it needs to Just Work'. (FWIW, I started with DEC, then moved on to Solaris and Cisco, then Linux of various flavors with a smattering of BSD nowadays. I use Windows for email and to fill out forms.) So we decided to bring in a contractor to do this for us, and they met the deadline. The system is up and mostly usable, and this is good; we would not have been able to do it ourselves. But it's the 'mostly' part that is proving to be the pain now, and I'm having to learn the Microsoft stuff anyway until/unless we can get a new contract with these guys for ongoing operations.

    Here's my question: the contractor used PowerShell almost exclusively for deployment, configuration, and updating. My intensive reading over the last week leads me to think that the generally accepted practices for deploying, configuring, and updating Microsoft stuff use elements of GPOs and ADMX templates, along with maybe some third-party tools like PolicyPak. Are there solid reasons, that I've not found yet, why PowerShell scripts would be preferred over the GPO methods? I'm going to discuss this with the contractor lead when he gets back from his vacation, and he'll be straight with me (nor do I think they set us up). But I can also see this might be a religious issue, so I would still like some background on it. Thoughts? Or weblinks? Thanks!

  • What's better for deploying a website + DB on EC2: 2 small VMs or a large one?

    - by devguy
    I'm planning the deployment of a mid-sized website with a SQL Server Standard DB. I've chosen Amazon EC2 to deploy it. I now have to choose between these two options: 1) get two small instances (1 core each, 1.7 GB of RAM each): one for the IIS front end, one for running the DB (note: these small instances can only run the 32-bit version of Win2008 Server); or 2) a single large instance (4 cores, 7.5 GB of RAM) where I'd install both IIS and SQL Server (note: this large instance can only run the 64-bit version of Win2008 Server). What's better in terms of performance, scalability, and ease of management (launching a new instance while I back up the principal instance), etc.? All suggestions and points of view are welcome!

  • Exchange server listening on port 25 but clients don't receive emails

    - by Josh R
    My Exchange server is listening on port 25; that is, I can telnet into it and send an email, but Outlook 2010/2007, OWA, and ActiveSync are not pulling down emails. Outlook 2010 specifically says "Connected to Exchange Server" and "Updating Inbox", but it never updates the inbox. Also, OWA shows some of the newer mail messages, but when I double-click on one to open it in OWA, it times out. Any idea what could be causing this? The Exchange Transport and Information Store services are both started. Thanks!
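
    A few first checks from the Exchange Management Shell, offered as generic triage rather than a diagnosis:

        # Sketch: verify required Exchange services are running, test MAPI
        # logons, and see whether mail is piling up in transport queues.
        Test-ServiceHealth
        Test-MAPIConnectivity
        Get-Queue | Sort-Object MessageCount -Descending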

  • Adding internal DNS server entries to the hosts file

    - by Param
    I have added a global DNS server IP address to one of my desktops (please see the network configuration screenshot), and after that I added both of my domain controllers' IP addresses to the hosts file, and it is working fine (please see the screenshot below for reference). Can you please advise what problems I could face if I keep my configuration this way? I am wondering whether this setting can create a problem, because the computer will be able to reach corp.abc.com easily with the help of the hosts file.

  • How to prevent a database from being restored?

    - by André
    Is there a way to prevent a database from being restored, with a DDL trigger or something? The background is that I would like to prevent a colleague from restoring a database onto a test server. So far I have had a look at DDL triggers but didn't find the right event to react to the restore action.

  • HP P410i array controller - what happens if I add memory?

    - by James
    I have a P410i array controller that only has 256MB of cache. We want to create a RAID 5, so we have procured a 512MB write-back cache module. If we install the write-back cache module, will this erase the existing RAID information? The server currently has 2 disks in RAID 1; 6 more are spares, waiting for an upgrade to create a RAID 5. The concern is that if we replace/upgrade the memory on the controller, we will wipe the existing production RAID 1 array. Thanks in advance.

  • WSUS - Auto-approve only "Needed" updates

    - by Jonathan Rioux
    I've looked through all the settings in the Automatic Approval menu, but I could not find anything about automatically approving only the needed updates. If I check, for instance, to auto-approve only the "Definition updates", it will approve all Definition updates, whether they are needed by my workstations or not. The point is that I don't want my WSUS server to download and store updates that are not needed by any of my workstations. Also, we are a lazy SMB, and we don't want to waste time manually approving updates. Is this even possible?

  • How to tune Windows 2008R2 and IIS to maximize single-file download speeds?

    - by uSlackr
    We recently put up an IIS site (on WinSvr 2008R2) that is used almost exclusively for downloading files over the internet. The data exists as a large collection of .zip files ranging from 1MB to 35GB in size. We want to allow a lot of downloads during a day (more than 500GB) but have implemented an outbound ASA throttle at 60 Mbps in order to preserve bandwidth for other uses. The total link speed is 100 Mbps. Here's the interesting part: while we can serve up multiple downloads to hit the 60 Mbps cap, we cannot get any single download to exceed 2.5 MB/s (20 Mbit/s). Is there any TCP or IIS tuning we can do to push up individual download speeds? Or something else to look at?
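
    One hedged lead: a hard per-connection ceiling like this is the classic signature of a fixed TCP receive window over a longer path, since throughput ≈ window size / round-trip time (64 KB / 25 ms ≈ 2.5 MB/s). A sketch for checking and enabling receive-window autotuning on the 2008R2 box:

        # Sketch: inspect, then enable, receive-window autotuning. Clients and
        # any middleboxes (e.g. the ASA) must also permit TCP window scaling.
        netsh interface tcp show global
        netsh interface tcp set global autotuninglevel=normal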

  • DBCC CHECKDB fails and quits the job with an ambiguous error message

    - by ddono25
    I received a notice that the DBCC CHECKDB job for all databases on one of our servers has failed the past four times it has been run. We don't have any data prior to that, but it doesn't look like it has been succeeding for a while. There are no errors in the log file, only:

        DBCC results for 'sys.sysxmlfacet'. [SQLSTATE 01000]
        Msg 0, Sev 0, State 1: Unspecified error occurred on SQL Server. Connection may have been terminated by the server. [SQLSTATE HY000]
        There are 112 rows in 1 pages for object "sys.sysxmlfacet". [SQLSTATE 01000]

    I ran a DBCC CHECKDB using sp_MSForEachDB to get more accurate results and had the same error on the same DB, but at a different point:

        DBCC results for 'NameValuePair_Greek_CI_AS'. [SQLSTATE 01000]
        Msg 0, Sev 0, State 1: Unspecified error occurred on SQL Server. Connection may have been terminated by the server. [SQLSTATE HY000]
        There are 0 rows in 0 pages for object "NameValuePair_Greek_CI_AS". [SQLSTATE 01000]

    Also, the error log states that the DBCC completed without errors for this database. I can't figure out how to track down this ambiguous issue that only happens on this database out of the dozens on this server. Any help is appreciated!
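
    A hedged next step is to re-run the check directly against the offending database with full error output, outside the maintenance job ('SuspectDB' below is a placeholder name):

        # Sketch: full error output with no informational noise; TABLOCK skips
        # the internal snapshot, in case snapshot creation is what is failing.
        sqlcmd -E -S . -Q "DBCC CHECKDB('SuspectDB') WITH NO_INFOMSGS, ALL_ERRORMSGS, TABLOCK"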

  • How can I retrieve statistics from my GhostCast server?

    - by Foxtrot
    I have a GhostCast server running for deploying images. I would like each GhostCast session to write statistics to a file (it can be multiple text files, or appending to one existing file). I know this is possible based on the options the GhostCast software provides for writing to a log file, but I would like it automated for every image being backed up or restored. I don't want my employees to have to click "write to a new file" every time. Is this possible?

  • IIS returning a plain Forbidden response with no HTTP code

    - by Alex Pineda
    I'm running a ServiceStack application on IIS. My regular services work fine and have not had any problems with permissions. My new project involves serving generated PDFs. I gave IIS_IUSRS read/write permissions to the Temp directory under my app directory, and I also allow non-SSL connections to this directory. When I browse to the file which ServiceStack is supposed to automatically serve up (e.g. http://ryu.com/Temp/201310171723337631.pdf) I get this:

        Forbidden
        Request.HttpMethod: GET
        Request.PathInfo:
        Request.QueryString:
        Request.RawUrl: /ryu/Temp/201310171723337631.pdf
        App.IsIntegratedPipeline: True
        App.WebHostPhysicalPath: C:\inetpub\ryu
        App.WebHostRootFileNames: [global.asax,global.asax.cs,web.config,bin,temp]

    Now, this doesn't look like a ServiceStack error message, more like IIS, but I'm not certain how to get to the bottom of it. Authorization settings are Allow All.

  • Caching a domain user's credentials on a local PC

    - by user630320
    We have a fully working domain in the UK, and around the world we have users who use VPN (Check Point) to connect to our domain. One of the users in the USA has a laptop which he has never logged on to before (so it has not cached his login details). Does anyone know how to cache user login information on this laptop? I have tried netdom trust to add this user to the laptop, but I was not able to. At the moment the user is logging in with a local administrator account and then using the VPN to log on to our domain, but when it comes to accessing files on the domain, he gets access denied. When he tries to log in directly, he gets "There are currently no logon servers available to service the logon request." Does anyone know how to add this user?
