Search Results

Search found 90459 results on 3619 pages for 'server cache'.

  • Unable to use "Manage Content and Structure" after removing Project Server from the SharePoint farm

    - by Brian
    We're no longer using Office Project Server, and I've removed it from the farm in which it was installed. However, now that it's been removed, I am unable to access the "Manage Content and Structure" link on some of our SharePoint sites. I get an error indicating that SharePoint failed to find the XML file at location '12\Template\Features\PWSCommitments\feature.xml'. Does anyone have an idea how to fix this?
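
    This looks like a feature reference left behind by the uninstall. A possible cleanup, sketched on the assumption that PWSCommitments is the orphaned feature named in the error and that http://yoursite stands in for an affected site collection, is to deactivate the feature where it is still referenced and then force-remove its definition:

        REM Run on a farm server as an administrator; http://yoursite is a placeholder
        stsadm -o deactivatefeature -name PWSCommitments -url http://yoursite -force
        stsadm -o uninstallfeature -name PWSCommitments -force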

  • Ignoring GET parameters in Varnish VCL

    - by JamesHarrison
    Okay: I've got a site set up which has some APIs we expose to developers, in the format /api/item.xml?type_ids=34,35,37&region_ids=1000002,1000003&key=SOMERANDOMALPHANUM. In this URI, type_ids is always set; region_ids and key are optional. The important thing to note is that the key variable does not affect the content of the response. It is used for internal tracking of requests so we can identify people who make slow or otherwise unwanted requests. In Varnish, we have a VCL like this:

        if (req.http.host ~ "the-site-in-question.com") {
            if (req.url ~ "^/api/.+\.xml") {
                unset req.http.cookie;
            }
        }

    We just strip cookies out and let the backend do the rest as far as cache times are concerned (this is a hack-around, since Rails/authlogic sends session cookies with API responses). At present, though, distinct developers are effectively hitting different caches, since &key=SOMEALPHANUM is considered part of the Varnish hash for storage. This is obviously not a great solution, and I'm trying to work out how to tell Varnish to ignore that part of the URI.
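
    A minimal sketch of one way to do this, assuming Varnish 2.x/3.x regsub semantics and that the parameter is literally named key: rewrite req.url in vcl_recv before the hash is computed, so every value of key maps to the same cached object.

        sub vcl_recv {
            if (req.url ~ "^/api/.+\.xml") {
                # key in the middle of the query string: ?key=...& or &key=...&
                set req.url = regsub(req.url, "([?&])key=[^&]*&", "\1");
                # key as the last (or only) parameter: ?key=... or &key=...
                set req.url = regsub(req.url, "[?&]key=[^&]*$", "");
            }
        }

    Note that this strips key before the request reaches the backend too, so the internal request tracking would have to read the original URL from the Varnish logs instead.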

  • What is the easiest/simplest way to change the HD on a Linux server?

    - by ArmlessJohn
    Hello. I have a machine running Ubuntu Server that has been presenting some HD-related problems. Instead of reinstalling and reconfiguring everything (and to save time), we'd like to copy everything from the current hard drive to a new one and start using it. We only have a single hard drive, with a main partition and a swap partition. What tools or methods would you recommend for replacing a hard drive with minimum difficulty and chance of problems? Thank you.
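
    A low-effort route, sketched under the assumption that the old drive is /dev/sda, the new one is at least as large and appears as /dev/sdb, and both are idle (boot from live media): clone the disk block-for-block, partition table and swap included, then check the filesystem on the copy.

        # Whole-disk copy, MBR and partitions included (device names are assumptions)
        sudo dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync
        sync
        # Verify the filesystem on the copied root partition
        sudo fsck -f /dev/sdb1

    Since the old drive is already showing problems, GNU ddrescue is worth considering instead of dd: it retries and logs bad sectors rather than stalling on them.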

  • Is RAID 5 or SnapRAID the better alternative for a media server RAID system?

    - by rubo77
    I am using a RAID 5 system for my Ubuntu 12.04 XBMC media server with 5 disks. Since the data doesn't change much and a total loss wouldn't be so bad (I have another backup anyway), I am thinking about using SnapRAID. It says: "SnapRAID is mainly targeted for a home media center, where you have a lot of big files that rarely change." The main advantage for me would be power saving, since not all disks have to run all the time. Would you recommend using this (with a regular resync script once a day)?
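
    For what it's worth, the moving parts are small. A sketch of a possible layout (mount points and schedule are assumptions, not from the question): one disk holds parity, data disks carry content files, and cron runs the sync daily.

        # /etc/snapraid.conf (illustrative)
        parity /mnt/parity/snapraid.parity
        content /var/snapraid/snapraid.content
        content /mnt/disk1/snapraid.content
        disk d1 /mnt/disk1/
        disk d2 /mnt/disk2/
        disk d3 /mnt/disk3/
        disk d4 /mnt/disk4/

        # /etc/cron.d/snapraid - recompute parity once a day
        0 3 * * * root /usr/bin/snapraid sync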

  • Postfix concurrency limit with round-robin DNS

    - by goose
    Take the following internal round-robin DNS setup:

        mymta.com. IN A 172.31.1.1
        mymta.com. IN A 172.31.1.2
        mymta.com. IN A 172.31.1.3
        mymta.com. IN A 172.31.1.4
        mymta.com. IN A 172.31.1.5
        mymta.com. IN A 172.31.1.6
        mymta.com. IN A 172.31.1.7
        mymta.com. IN A 172.31.1.8
        mymta.com. IN A 172.31.1.9
        mymta.com. IN A 172.31.1.10

    Now assume the following Postfix setup (assume these are the only tweaks from the defaults in the Debian package). main.cf:

        smtp_connection_cache_destinations = mymta.com
        smtp_connection_cache_reuse_limit = 750
        smtp_destination_concurrency_limit = 75

    transport:

        * :[mymta.com]

    I would expect 75 concurrent connections spread across the 10 A records I've set in DNS. However, I'm seeing more than a few hundred connections to mymta.com, and I'm wondering if Postfix is "smart" enough to set up 75 concurrent connections for each IP address. Thoughts?
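
    Before tuning further, it may help to measure what Postfix is actually doing. A sketch, assuming this runs on the Postfix box and the remote MTAs listen on port 25: count established outbound SMTP connections grouped by destination IP and compare against the configured limit.

        # Established connections to port 25, grouped by remote IP
        netstat -tn | awk '$6 == "ESTABLISHED" && $5 ~ /:25$/ {split($5, a, ":"); print a[1]}' \
            | sort | uniq -c | sort -rn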

  • Changed Folder Redirection on Server 2008 R2, but nothing changed

    - by Robert Hurst
    I need some help, guys. I've just modified the Folder Redirection policy on my server at work so the hierarchy resembles Windows 7's user folder structure rather than XP's. I did this by the book, so I know it's right. However, I'm still seeing that all the user accounts are laid out with XP-compatible folder hierarchies. Is there something I need to do to update these accounts with the new policy?
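
    A first diagnostic step, assuming Vista/7 clients that can reach a domain controller: force a policy refresh on an affected client and confirm the redirection policy actually arrived. Folder Redirection is applied at logon, so a log-off/log-on (sometimes two, because of asynchronous policy processing) is needed after the refresh.

        REM Run on an affected client, then log off and back on
        gpupdate /force

        REM Check that the Folder Redirection settings were applied for the user
        gpresult /r /scope user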

  • Trying to upgrade SQL Server 2008 to R2 but SQL is sleeping or dead?

    - by oJM86o
    I've used the option to upgrade SQL Server 2008 to R2, but I noticed it gets to about 20-30% and then just sits on the same screen. I've left it alone for over 2 hours. The PC is definitely not frozen, because I can click Help or move the window around, but it has said "Install_sql_common_core_loc_Cpu64_1033_action: Install Files. Copying new files" for the past 2 hours. I have tried to do the install from a CD as well as from a network drive, both with the same issue. Is there anything I can check or do?
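
    When SQL Server setup looks hung, the setup log usually shows whether files are still being copied. A sketch of where to look, assuming a default install path; each setup run writes a timestamped folder containing Detail.txt.

        cd "%ProgramFiles%\Microsoft SQL Server\100\Setup Bootstrap\Log"
        REM Newest folder listed last; open Detail.txt inside it and watch the timestamps
        dir /od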

  • Per-machine decentralised DNS caching - nscd/lwresd/etc

    - by Dan Carley
    Preface: We have caching resolvers at each of our geographic network locations. These are clustered for resiliency, and their locality reduces the latency of internal requests generated by our servers. This works well, except that a vast quantity of the requests seen over the wire are lookups for the same records, generated by applications which don't perform any DNS caching of their own. Questions: Is there a significant benefit to running lightweight caching daemons on the individual servers in order to reduce repeated requests from hitting the network? Does anyone have experience of using [u]nscd, lwresd or dnscache to do such a thing? Are there any other packages worth looking at? Any caveats to beware of, besides the obvious one of caching (and negatively caching) stale results?
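
    For the nscd option specifically, host caching is switched on per-database in /etc/nscd.conf. A minimal sketch (the TTL values are illustrative assumptions, not recommendations):

        # /etc/nscd.conf - cache only the hosts database
        enable-cache            hosts   yes
        positive-time-to-live   hosts   600
        negative-time-to-live   hosts   20
        persistent              hosts   yes

    Note that nscd applies its own positive TTL regardless of the record's DNS TTL, so keeping it short limits the stale-result caveat mentioned above.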

  • SQL Service Broker enabled causes 100% CPU

    - by user40373
    I have a new set of code for a website that is using SqlCacheDependencies based on SQL commands. I have enabled SQL Service Broker and some triggers on update/insert/delete, and it is causing 100% CPU. Any ideas if I am doing something wrong, or suggestions to improve? Here are the SQL changes I ran:

        ALTER DATABASE DATABASE_NAME SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;
        GRANT SUBSCRIBE QUERY NOTIFICATIONS TO CONNECTION_USER_NAME;
        GRANT SEND ON SERVICE::sqlquerynotificationservice TO CONNECTION_USER_NAME;
        ALTER AUTHORIZATION ON DATABASE::DATABASE_NAME TO CONNECTION_USER_NAME;
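
    If the CPU is going to query-notification churn, one thing worth checking (a diagnostic sketch, not a fix) is whether subscriptions are firing and being recreated at a high rate, and whether Service Broker messages are piling up:

        -- Active query-notification subscriptions in the instance
        SELECT COUNT(*) AS active_subscriptions FROM sys.dm_qn_subscriptions;

        -- Undelivered Service Broker messages in this database (normally near zero)
        SELECT COUNT(*) AS queued_messages FROM sys.transmission_queue;

    A common culprit is a dependency query that isn't notification-eligible (for example, one using SELECT *), which makes the subscription fire immediately and resubscribe in a tight loop.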

  • IIS seems to be caching files on a system share?

    - by scott novell
    Switching over to Windows 2008 and IIS 7.5, and it seems that whenever I make a change to a CSS file on a system share, it does not show through the browser for a few minutes. The file is served through an ISAPI filter. I have turned off output caching in IIS and also turned off caching on the share itself. The browser is not caching either: even when I force a 200 rather than a cached response, the stale content comes back. Any ideas?
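
    One frequent cause of stale files served from a UNC share on Server 2008 is the SMB2 client-side metadata cache on the web server itself rather than IIS. A sketch of how to disable those caches for testing (these are real LanmanWorkstation parameters; zero disables each cache entirely, and a reboot is required):

        REM Run on the IIS server, then reboot
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v FileInfoCacheLifetime /t REG_DWORD /d 0 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v FileNotFoundCacheLifetime /t REG_DWORD /d 0 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v DirectoryCacheLifetime /t REG_DWORD /d 0 /f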

  • Recommendations for Windows Server 2008 Hyper-V VPS Host?

    - by user37042
    I'm looking for a well-respected, high-performing Windows Server 2008 Hyper-V VPS host. For my Linux hosting, I use a shared WebFaction account, so I'm spoiled by their incredible service and support. RackSpaceCloud also sounds really good, especially for Linux hosting, but it sounds like their Windows hosting is just getting off the ground. I've heard good things about SoftSysHosting, but I didn't know if there were any other VPS providers out there that people will give strong endorsements for (as I do for WebFaction every chance I get).

  • Squid closing the connection on long HTTP GET requests

    - by Rhys
    Hello. When running a database query on a specific external site we use, Squid seems to cut off the connection after a consistent period of time (just over a minute). The query is submitted through a standard web form that uses GET to query their database. Firefox 3 just displays a blank page; Internet Explorer throws a "Page Cannot Be Displayed" error (tested in v6 and v8). When we perform the same query on the same machine but bypass the Squid proxy, it works fine. The query takes about two and a half minutes to complete. There are a few timeout settings in Squid, but I honestly don't know which one to be looking at. Any possible solutions would be much appreciated. Cheers
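
    The squid.conf directives most likely involved are the connection, forwarding and read timeouts. A sketch of the knobs to experiment with (all are real directives; the values are illustrative, raised well above the two-and-a-half-minute query time):

        # /etc/squid/squid.conf
        connect_timeout 2 minutes
        forward_timeout 5 minutes
        read_timeout 15 minutes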

  • Varnish + Tomcat vs Apache + mod_jk + Tomcat

    - by Adrian Ber
    Does anyone have comparison data on the performance of putting either Varnish or Apache with mod_jk in front of Tomcat? I know the AJP connector is supposed to be faster than HTTP, but I was thinking that Varnish, being lighter and highly optimized, could perform better in combination. There is also the question of static resources (which I think will be served faster by Varnish than by Apache, even with mod_cache) versus dynamic pages.
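
    For anyone benchmarking this, the Varnish side of the test is tiny. A minimal sketch, assuming Varnish 2.x syntax and Tomcat's HTTP connector on localhost:8080:

        backend tomcat {
            .host = "127.0.0.1";
            .port = "8080";
        }

        sub vcl_recv {
            # Strip cookies on static assets so Varnish can cache them;
            # dynamic pages fall through to Tomcat untouched.
            if (req.url ~ "\.(css|js|png|jpg|gif|ico)$") {
                unset req.http.cookie;
            }
        }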

  • I'm trying to connect to a user's Remote Desktop session on my Windows Server 2003

    - by hitham
    I'm trying to connect to a user's session on my Windows Server 2003 machine from the Users tab in Task Manager, but it comes up with a "Connect Password Required" prompt. I tried the password he uses to log on to Remote Desktop, but it won't work; I've tried every password I know of, and nothing. How do I connect to his session? What password do I use, and if that's the right one, why won't it work? Thanks.
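
    One alternative on Server 2003 (a sketch; the session ID and server name below are placeholders) is to shadow the session with the built-in Terminal Services tools from within another RDP session, which, depending on Group Policy, prompts the user for consent instead of demanding a password:

        REM List sessions and their IDs
        query session /server:myserver

        REM Shadow session ID 2 (run from inside an RDP session on that server)
        shadow 2 /server:myserver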

  • How to bind a domain for MS Project Server 2010?

    - by Gk
    I've installed MS Project Server 2010 and currently connect via a URL like this one: http://mysite/pwa/. I want to connect using a new domain instead, like this: http://newsite/. I can set up redirection in IIS, but then I cannot connect with the Project client. Is there any way to do this? Thanks.
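
    Since PWA runs on SharePoint, the usual approach is an Alternate Access Mapping plus an IIS host-header binding rather than a redirect, so that the Project client treats the new URL as a first-class address. A sketch with stsadm (the zone choice and URLs are assumptions based on the placeholders in the question):

        REM Map http://newsite onto the web application that answers at http://mysite
        stsadm -o addalternatedomain -url http://mysite -incomingurl http://newsite -urlzone Extranet

    The new host name also needs a DNS record and a matching host-header binding on the IIS site.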

  • Is a dedicated server similar in setup to a VPS?

    - by Dr Hydralisk
    I was thinking about getting a dedicated server from The Planet (I may need the extra power that a VPS can't provide), but I don't know too much about how you would operate one. I have experience in setting up multiple VPSes on Linode and Slicehost: I just select my OS in their control panel, connect via SSH in PuTTY, and do my thing. Is it the same with dedicated servers (just choose your OS from the control panel, connect via SSH, and put on whatever you want)?
