Search Results

Search found 14350 results on 574 pages for 'sbs 2008'.


  • Is my XML cached by IIS7?

    - by user22817
    Hi guys, I have a Flash player on my website. Behind it there is an XML file that holds the configuration of the products shown in that Flash player. I do not understand why, if I modify something in the XML config file, the Flash player is not updated at that moment but only hours later. Do you know why? I have not defined any output caching rule in IIS7, so there is probably some default in effect? Of course I've tried Ctrl+F5, but it does not work. Thanks.

    Read the article

  • Adding multiple websites with different SSL certificates in IIS 7

    - by Timka
    I'm having trouble using SSL for 2 different websites on my IIS 7 server. Please see my setup below: website1: my.corporate.portal.com; SSL certificate for website1: *.corporate.portal.com; https/443 bound to my.corporate.portal.com. website2: client.portal.com; SSL certificate issued for: client.portal.com. When I try to bind https in IIS7 with the client's certificate, I don't have the option to enter a host name (grayed out), and as soon as I select the 'client.portal.com' cert I get the following error in IIS: At least one other site is using the same HTTPS binding and the binding is configured with a different certificate. Are you sure that you want to reuse this HTTPS binding and reassign the other site or sites to use the new certificate? If I click 'yes', my.corporate.portal.com stops using the proper SSL cert. Could you suggest something?

    Read the article

  • Can Ping but Cannot Telnet directly to SQL Server 2012 Cluster Nodes

    - by tresstylez
    We have a monitoring tool (SolarWinds Orion) that needs to connect to a 2-node failover SQL Server cluster. For reasons outside of our control we cannot monitor the cluster IP directly at this time, so we have fallen back to monitoring each cluster node IP directly. This is not working. While troubleshooting, we tested whether each cluster node was listening on the proper (fixed) port by telnet'ing to the node's IP/port -- and the telnet failed. However, telnet'ing to the cluster IP/port was SUCCESSFUL! Each node has its own IP. Each node is listening on the identical FIXED port. Each node has dynamic ports disabled. Each node can be PINGED successfully from the monitoring tool. Windows Firewall is DISABLED. How can I troubleshoot why I cannot telnet to the listening port on the cluster nodes?
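
    One way to confirm what each node's instance is actually bound to - a minimal sketch, assuming a local SQL connection can be opened on each node; the sys.dm_tcp_listener_states DMV exists from SQL Server 2012:

        -- Run locally on each cluster node: lists the IP/port pairs
        -- the instance is actually listening on for T-SQL traffic.
        SELECT ip_address, port, state_desc
        FROM sys.dm_tcp_listener_states
        WHERE type_desc = 'TSQL';

    If a node's own IP never shows up, the instance is listening only on the cluster's virtual IP, which would explain why telnet succeeds against the cluster IP but fails against the node IPs.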

    Read the article

  • If my Remote Desktop Connection Broker server goes down, can users still access my two Terminal Servers?

    - by Frank Owen
    I would like to set up the Remote Desktop Connection Broker to allow better load balancing of the two terminal servers we have, as well as allowing users to reconnect to the correct server if they get disconnected. My worry is: if I set this up and the server this service is running on goes down, do the terminal servers stop accepting connections, or do they just lose the benefit of having RDCB turned on? I don't want to add another point of failure in this equation unless I have to.

    Read the article

  • sys.dm_exec_query_stats interaction with recompilation

    - by Sam Saffron
    We use sys.dm_exec_query_stats to track down slow queries and queries that are IO offenders. This works great; we get a lot of very insightful stats. It is clearly not as accurate as running a profiler trace, as you have no idea when SQL Server will decide to chuck out an execution plan. We have quite a few queries where the wrong execution plan is cached. For example, queries like the following: SELECT TOP 30 a.Id FROM Posts a JOIN Posts q ON q.Id = a.ParentId JOIN PostTags pt ON q.Id = pt.PostId WHERE a.PostTypeId = 2 AND a.DeletionDate IS NULL AND a.CommunityOwnedDate IS NULL AND a.CreationDate > @date AND LEN(a.Body) > 300 AND pt.Tag = @tag AND a.Score > 0 ORDER BY a.Score DESC. The problem is that the ideal plan really depends on the date selected (screenshot of ideal plan): however, if the wrong plan is cached, it totally chokes when the date range is big (notice the big fat lines). To overcome this we were recommended to use either OPTION (OPTIMIZE FOR UNKNOWN) or OPTION (RECOMPILE). OPTIMIZE FOR UNKNOWN results in a slightly better plan, which is far from optimal, and executions are tracked in sys.dm_exec_query_stats. RECOMPILE results in the best plan being chosen, however no execution counts and stats are tracked in sys.dm_exec_query_stats. Is there another DMV we could use to track stats on queries with OPTION (RECOMPILE)? Is this behavior by design? Is there another way we can force recompilation while keeping stats tracked in sys.dm_exec_query_stats? Note: the framework will always execute parameterized queries using sp_executesql.
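
    For context, a minimal sketch of the kind of sys.dm_exec_query_stats query used for this sort of tracking; statements run with OPTION (RECOMPILE) never accumulate here because their plans are discarded. One possible workaround, assuming the statement can be wrapped in a stored procedure, is sys.dm_exec_procedure_stats (SQL 2008+), which aggregates at the procedure level.

        -- Top cached statements by logical reads; RECOMPILE'd statements
        -- will be missing since their plans are never cached.
        SELECT TOP 20
               qs.execution_count,
               qs.total_logical_reads,
               qs.total_worker_time,
               SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                         (CASE qs.statement_end_offset
                               WHEN -1 THEN DATALENGTH(st.text)
                               ELSE qs.statement_end_offset
                          END - qs.statement_start_offset) / 2 + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_logical_reads DESC;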

    Read the article

  • High availability virtual machines

    - by Jeremy
    I've been reading a lot about high availability virtualization, either via Hyper-V or VMware. In that context, high availability essentially means that the VM is hosted by a cluster of physical servers (nodes), so if one of the physical servers goes down, the VM can still be served by the other physical servers. So far so good: the physical cluster and the VM itself are highly available. However, the services being provided, let's say SQL Server, MSDTC, or any other service, are actually provided by the VM image and the virtualized operating system. So I imagine that there is still a point of failure at the virtual layer that isn't accounted for. Something could happen within the virtual machine itself that the physical cluster cannot account for, correct? In that instance the physical failover cluster (Hyper-V) or VMware host cannot fail over, because the issue is not with one of the servers in the physical cluster - failing over a physical node would not do any good. Does this necessitate building a virtual failover cluster on top of the physical one, or is this not necessary? Alternatively, I suppose you could skip the physical clustering and just cluster at the virtual layer (child-based failover clustering), because that should still survive a physical failure. See the image below showing parent-based (left), child-based (right) and a combination (center). Is parent-based as far as you need to go, or is child-based more appropriate?

    Read the article

  • Trust relationship between this workstation and the primary domain has failed

    - by Funky Si
    I have one laptop, used by one user, that keeps getting the following error and is unable to log in: Trust relationship between this workstation and the primary domain has failed. Can anyone help tell me what is causing this and what I can do to stop it happening? I have tried reformatting, which helps for a while; I have also tried reformatting with a new name, which also helped for a while. This problem has not affected any other user or computer, which suggests it is not a problem with the domain, but I don't know what to do to stop it happening.

    Read the article

  • Shut down FTP from IIS 6 after <X> failed login attempts

    - by Justin C
    Is there a setting in IIS 6 to turn an FTP site off after a specified number of failed login attempts? It has already been documented on this site that a Windows server sitting on a static IP address can record tens of thousands of failed login attempts a month. One server I maintain has had tens of thousands of attempts made against the FTP port. I have solid passwords in place, so I am not overly concerned. I rarely have to use FTP, so for the most part I turn it on and off as I need it. Sometimes, though, I forget to turn it off when I am done, only to find the next day that my event log is full of audit failures. I would want to set a high number in case I just messed up the password: something like, if 50 failed login attempts happen, just turn off the FTP site. Then if I need it later I can just start it again.

    Read the article

  • One vs. many domain user accounts in a server farm

    - by mjustin
    We are in the process of migrating a group of related computers (intranet servers, SQL, application servers of one application) to a new domain. In the past we used one domain user account for every computer (web1, web2, appserver1, appserver2, sql1, sqlbackup, ...) to access central Windows resources like network shares. Every computer also has a local user account with the same name. I am not sure if this is necessary, or if it would be easier to configure and maintain to use one domain user account. Are there key advantages/disadvantages to having one single user account vs. dedicated accounts per computer for this group of background servers? If I am not wrong, one advantage besides easier administration of the user account could be that moving installed applications and services around between the computers would no longer require a check of the access rights. (Except where IP addresses or ports are used.)

    Read the article

  • DFS: Error creating domain-based namespace "The namespace cannot be queried. The RPC server is unavailable."

    - by Scolytus
    I am trying to create a domain-based namespace, but when I hit "next" on the "choose type" wizard page I get the message: The namespace cannot be queried. The RPC server is unavailable. I found one article regarding this problem, and indeed the DNS resolution of the DomainDNSName was flawed. But although I fixed this DNS issue, I still get this message. A stand-alone namespace can be created. Any ideas how to solve this problem? Is it possible that this is related to the fact that I'm using a single-label domain name?

    Read the article

  • Active Directory - Join domain in a specific OU when it's a workstation

    - by Jonathan Rioux
    I would like to know how I can accomplish the following: when I join a workstation (with Windows 7) to the domain, I want that computer to be put into a specific OU - only when it is a workstation running Windows 7. This is because I have GPOs that must apply to all workstations in the domain. Can I only accomplish this using a script? Or can I set a rule like: if the computer runs Windows 7, put it into this specific OU.

    Read the article

  • Transferring from MailEnable to hMailServer

    - by air
    I have one Windows server (Plesk 9.5) with MailEnable Free Edition. I want to move from MailEnable to hMailServer, and I am looking for a program or script to transfer all email accounts, email redirects and emails from MailEnable to hMailServer. Thanks.

    Read the article

  • Recommended Win2k8 Server software to fix my RAID-0 issue

    - by Jason Kealey
    I'm running an Asus P6T V2 Deluxe. It has six SATA ports and supports onboard RAID. I am using two of those ports for a RAID-0 array of 1.5TB Seagate drives using the onboard RAID controller. One of them is giving me SMART warnings and I want to preemptively replace it. I pulled two other 1.5TB drives out of another computer and am ready to use one or both, if necessary. I can't run any SMART diagnostic software from within Windows because it only sees the hardware RAID-0 array, not each individual drive. The first thing I tried was a slow sector-by-sector copy using a free tool called EASEUS Disk Copy. I used the boot disk, copied (took like 16 hours), unplugged the defective drive and plugged the new one in its place. The motherboard didn't recognize the new drive as being part of the known setup, so it did not want to boot. The second thing I tried was using other software (I forget the name) to copy the partition from within Windows. The first software failed because I had a server operating system. I found another software (I forget the name) which supported a server OS and did a partition copy onto the new drive. This seemed to work and the OS started to boot, but it blue-screened and entered a reboot cycle. I'm assuming the software I was using was no good, as it was trying to copy the boot disk while it was in use. I am looking for recommendations on what software to use to fix my problem without doing a re-install. Everything is backed up, but my computer works fine and I'd like to avoid re-installation when possible. However, my system would be back up now if I had just started over on a second RAID array. :)

    Read the article

  • How to export User cert with private key in PKCS12 format

    - by andreas-h
    I'm running Win2008R2 and have installed an Enterprise CA. I can create user certs, but no matter what I do, I cannot export the private key. I'm using the untouched User certificate template, and the "allow export of private key" option is selected. Still, whenever I go to the export dialogue of the certificate (both as user and as administrator), I don't get asked if I want to export the private key, and the option to select PKCS12 format is grayed out. Any help is greatly appreciated!

    Read the article

  • .VBS scripts have stopped running... No idea why!

    - by Django Reinhardt
    We have two .vbs scripts, run by our Task Scheduler, that have suddenly stopped working for no reason we can fathom. We haven't significantly altered our system configuration in the last 24 hours, and the scripts have run without a hitch for months. According to the Task Scheduler the scripts just keep running and never stop, which is never the case. I stopped all running versions through the Scheduler and manually attempted to run one of the .vbs scripts. I got the following error message: Line: 15 Error: The system cannot locate the resource specified. Code: 800C0005 Source: msxml3.dll. Line 15 (or 16 to be more accurate - line 15 itself is blank, but so is line 1) is: xml.Send. What could have suddenly caused this? Looking in system32\ and sysWOW64\ shows that msxml3.dll exists. Anybody got any ideas? Thanks a lot!

    Read the article

  • Why would the server's network type change from Private to Public?

    - by Phil Hannent
    I just found a fault with a server; other users have had problems connecting to it. The setting on the network card had changed from Private (domain) network to Public (the other option being Home). The change of network profile would have caused the firewall to block a lot of normal functions. Since the event log showed no reason for the change, I am guessing it might be due to a complete shutdown we had recently, where someone powered up the machines before the domain controllers had finished booting. Can anyone confirm that this might be the case?

    Read the article

  • Create Report in Microsoft Excel - Permissions Issue in Team Foundation Server 2010

    - by sammarcow
    I am trying to use the "Create a Report in Microsoft Excel" feature of Visual Studio TFS 2010. I am being prompted for a username and password in Excel for any given Team Project when right-clicking the item "Active Tasks" and selecting "Create a Report in Microsoft Excel", found in the following path within the Team Explorer pane in Visual Studio 2010: 'Collection Name' | 'Project Name' | Work Items | Iteration 1 | Active Tasks. I am a Team Project Administrator and Collection Administrator. I checked the SharePoint site http :// 'serverName'/Reports/Pages/Folder.aspx and have full sitewide and project permissions on it. I loaded SQL Server Management Studio on the machine running the TFS instance (which is also the SQL backend for TFS) and ensured that I had the "serveradmin" and "sysadmin" roles. How do I run this report? Specifically, what permissions are required?
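
    As a quick sanity check - a sketch only; note the Excel work-item reports also read from the TFS Analysis Services cube, whose access is governed by its own role (TfsWarehouseDataReader in a default TFS 2010 install) rather than by SQL server roles - the server-level roles actually held on the data tier can be confirmed with:

        -- Returns 1 if the current login holds the role, 0 if not.
        SELECT IS_SRVROLEMEMBER('sysadmin')    AS is_sysadmin,
               IS_SRVROLEMEMBER('serveradmin') AS is_serveradmin;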

    Read the article

  • Tracking Administrator account for Domain Controller

    - by Param
    Have you ever created a Task Scheduler event notification that sends an email about a password change or a wrong logon attempt for the administrator account? OK, let me elaborate more... As we know, the Administrator / Domain Admin / Enterprise Admin account is very important, so I want to keep track of the following events: A) I must receive an email, with date, time and IP address, whenever the password of the administrator account is changed. B) I must receive an email notification, with date, time and IP address, whenever Administrator logs in successfully or unsuccessfully. I am thinking of doing the above with Task Scheduler event notifications; have you ever done it the same way? Thanks & Regards, Param

    Read the article

  • What are the best configuration settings for Wordpress and MySQL on a Win2008 + IIS7 stack?

    - by holiveira
    I currently have four blogs that use Wordpress, running at a shared hosting company. These blogs get a considerable number of visits, and I'm constantly receiving warnings from the hosting company saying that I'm consuming too much server CPU. Considering the fact that I have a dedicated server at another company with plenty of idle resources (it has a quad-core Xeon 2.5GHz and 8GB of RAM and runs Win2008), I'm planning to move the blogs to this server in order to have some more freedom. I'm currently using this server to host some web applications using ASP.Net and SQL Express. I've installed a blog to test and it worked fine, but some issues appeared and raised some questions in my mind: How do I properly set the permissions on the folders used by Wordpress plugins? I mean, what permissions should I set for the IIS_User on certain folders so that the plugins work correctly? What's the best caching plugin to use considering this is a Windows server? At the previous hosting company I used WPSuperCache, but that was a Linux stack. Or should I ignore the caching plugins and use the dynamic caching feature of IIS7? How can I optimize the MySQL server running on this machine (especially the settings regarding memory and caching)? How can I protect the admin folders against hacker attacks? I know some people will advise me not to run Wordpress on a Windows stack, but that's my only choice. I don't even know where to start managing a LAMP stack, and I have neither the time to do so nor the money to rent another server.
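
    On the MySQL question, a starting point - just a sketch using standard MySQL 5.x variable names; the right sizes depend entirely on the workload - is to inspect the current settings before touching my.ini:

        -- Current memory/cache configuration...
        SHOW VARIABLES LIKE 'query_cache%';
        SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
        SHOW VARIABLES LIKE 'key_buffer_size';
        -- ...and how the query cache is actually behaving.
        SHOW STATUS LIKE 'Qcache%';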

    Read the article

  • WSUS performance for unneeded updates

    - by mhouston100
    We have a WSUS server serving around 300 PCs and a couple of dozen servers, and a discussion came up at work as to which products to include. We have a single SQL 2005 instance on one of the servers and it has NEVER been updated. My first thought was to just tick the box for SQL 2005 and let WSUS do its thing to upgrade to at least the latest service pack. One of the other guys here is of the opinion that having updates that are relevant to only a small selection of hosts would affect the performance of WSUS as a whole, claiming that each update does a 'check' against all the hosts or something similar. My argument is that manually updating these servers is obviously not working, as the admins are not paying attention to what is needed. So my questions are: Do updates that only affect a subset of the hosts affect the overall performance of the WSUS server in relation to ALL the hosts? (Disk space is not an issue at this point.) Is there any performance justification for or against manually updating small numbers of products? Basically I'm needing a rebuttal against his argument, and I'm unable to find any concrete documentation to prove him wrong.
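
    Before approving anything, it is worth confirming what the instance is actually running; a minimal check against the SQL 2005 instance itself:

        -- Version, service pack level and edition of the instance.
        SELECT SERVERPROPERTY('ProductVersion') AS version,
               SERVERPROPERTY('ProductLevel')   AS service_pack,
               SERVERPROPERTY('Edition')        AS edition;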

    Read the article

  • Using my system as the server - need advice

    - by Ashwin
    We are deploying my web application on my system in JBoss, and I am planning to use my system as the server as well. There are no HTML or JSP pages deployed; the client requests resources and the server provides the resources in the form of objects. We are also using a database, PostgreSQL (15 tables), as the database server (this will also reside on my machine). My system configuration is Windows Vista, 2.3 GHz, 4GB RAM, 32-bit. There can be many requests coming in at the same time. So is this configuration enough? Should we go with a different operating system like Windows Server? I have never used a Windows Server OS. How will it be different from other Windows operating systems? Or if you feel some other OS would be good in this situation, please suggest it.
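
    One way to ground the "is this configuration enough" question - a sketch against the PostgreSQL side only, using the standard pg_stat_activity view - is to watch the concurrent connection count under load against the configured ceiling:

        -- How many client connections are open right now...
        SELECT count(*) AS current_connections FROM pg_stat_activity;
        -- ...versus the configured maximum.
        SHOW max_connections;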

    Read the article

  • Attempted hack on VPS, how to protect in future, what were they trying to do?

    - by Moin Zaman
    UPDATE: They're still here. Help me stop or trap them! Hi SF'ers, I've just had someone hack one of my clients' sites. They managed to change a file so that the checkout page on the site writes payment information to a text file. Fortunately or unfortunately they stuffed up - they had a typo in the code, which broke the site, so I came to know about it straight away. I have some inkling as to how they managed to do this: my website CMS has a file upload area where you can upload images and files to be used within the website. The uploads are limited to 2 folders. I found two suspicious files in these folders, and on examining the contents it looks like these files allow the hacker to view the server's filesystem, upload their own files, modify files and even change registry keys?! I've deleted some files and changed passwords, and am in the process of trying to secure the CMS and limit file uploads by extension. Anything else you guys can suggest I do to try and find out more details about how they got in, and what else can I do to prevent this in future?

    Read the article
