Search Results

Search found 2416 results on 97 pages for 'appfabric caching'.


  • hibernate distributed 2nd level cache options

    - by ishmeister
    Not really a question but I'm looking for comments/suggestions from anyone who has experience using one or more of the following:

    * EhCache with RMI
    * EhCache with JGroups
    * EhCache with Terracotta
    * Gigaspaces Data Grid

    A bit of background: our application is read-only for the most part, but there is some user data that is read-write and some that is only written (and can also be reasonably inaccurate). In addition, it would be nice to have tools that enable us to flush and fill the cache at intervals or by admin intervention. Regarding the first option - are there any concerns about the overhead of RMI and the performance of Java serialization?

    Read the article

  • Windows Azure AppFabric: ServiceBus Queue WPF Sample

    - by xamlnotes
    The latest version of the AppFabric ServiceBus now has support for queues and topics. Today I will show you a bit about using queues and also talk about some of the best practices in using them. If you are just getting started, you can check out this site for more info on Windows Azure. One of the first things I thought of when Azure was announced was how we handle fault tolerance. Web sites hosted in Azure are not much of an issue unless they are using SQL Azure, and then you must account for potential fault or latency issues. Today I want to talk a bit about ServiceBus and how to handle fault tolerance. There is also plumbing, like connecting to the ServiceBus and so on, that you have to take care of. To demonstrate some of the things you can do, let me walk through this sample WPF app that I am posting for you to download.

    To start off, the application is going to need things like the service namespace, issuer details and so forth to make everything work. To facilitate this I created settings in the WPF app for all of these items. Then I mapped a static class to them and set the values when the program loads like so:

    StaticElements.ServiceNamespace = Convert.ToString(Properties.Settings.Default["ServiceNamespace"]);
    StaticElements.IssuerName = Convert.ToString(Properties.Settings.Default["IssuerName"]);
    StaticElements.IssuerKey = Convert.ToString(Properties.Settings.Default["IssuerKey"]);
    StaticElements.QueueName = Convert.ToString(Properties.Settings.Default["QueueName"]);

    Now I can get to each of these elements, plus some other common values or instances, directly from the StaticElements class. Now, let's look at the application. The application looks like this when it starts:

    The blue graphic represents the queue we are going to use. The next figure shows the form after items were added and the queue stats were updated. You can see how the queue has grown:

    To add an item to the queue, click the Add Order button, which displays the following dialog: After you fill in the form and press OK, the order is published to the ServiceBus queue and the form closes. The application also allows you to read the queued items by clicking the Process Orders button. As you can see below, the form shows the queued items in a list and the queue has disappeared as it is now empty. In real practice we would normally use a Windows Service or some other automated process to subscribe to the queue and pull items from it.

    I created a class named ServiceBusQueueHelper that has the core queue features we need. There are three public methods:

    * GetOrCreateQueue – Gets an instance of the queue description if the queue exists. If not, it creates the queue and returns a description instance.
    * SendMessageToQueue – This method takes an order instance and sends it to the queue. The call to the queue is wrapped in the ExecuteAction method from the Transient Fault Handling Framework, which handles all the retry logic for the queue send process.
    * GetOrderFromQueue – Grabs an order from the queue and returns a typed order. It also marks the message complete so the queue can remove it.

    Now let's turn to the WPF window code (MainWindow.xaml.cs). The constructor contains the 4 lines shown above to set up the static variables and to perform other initialization tasks.
    The next few lines set up certain features we need for the ServiceBus:

    TokenProvider credentials = TokenProvider.CreateSharedSecretTokenProvider(StaticElements.IssuerName, StaticElements.IssuerKey);
    Uri serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", StaticElements.ServiceNamespace, string.Empty);
    StaticElements.CurrentNamespaceManager = new NamespaceManager(serviceUri, credentials);
    StaticElements.CurrentMessagingFactory = MessagingFactory.Create(serviceUri, credentials);

    The next two lines update the queue name label and also set the timer to 20 seconds.

    QueueNameLabel.Content = StaticElements.QueueName;
    _timer.Interval = TimeSpan.FromSeconds(20);

    Next I call UpdateQueueStats to initialize the UI for the queue:

    UpdateQueueStats();
    _timer.Tick += new EventHandler(delegate(object s, EventArgs a)
    {
        UpdateQueueStats();
    });
    _timer.Start();
    }

    The UpdateQueueStats method is shown below. You can see that it uses the GetOrCreateQueue method mentioned earlier to grab the queue description, and from that it can get the MessageCount property.

    private void UpdateQueueStats()
    {
        _queueDescription = _serviceBusQueueHelper.GetOrCreateQueue();
        QueueCountLabel.Content = "(" + _queueDescription.MessageCount + ")";
        long count = _queueDescription.MessageCount;
        long queueWidth = count * 20;
        QueueRectangle.Width = queueWidth;
        QueueTickCount += 1;
        TickCountlabel.Content = QueueTickCount.ToString();
    }

    The ReadQueueItemsButton_Click event handler calls the GetOrderFromQueue method and adds the order to the listbox. If you look at the SendQueueMessageController, you can see the SendMessage method that sends an order to the queue. It is pretty simple: it just creates a new CustomerOrderEntity instance, fills it and then passes it to SendMessageToQueue. As you can see, all of our interaction with the queue is done through the helper class (ServiceBusQueueHelper).

    Now let's dig into the helper class. First, before you create anything like this, download the Transient Fault Handling Framework. Microsoft provides this for free and they also provide the C# source. There's a great article that shows how to use this framework with ServiceBus. I included the entire ServiceBusQueueHelper class in Listing 1. Notice the using statements for TransientFaultHandling:

    using Microsoft.AzureCAT.Samples.TransientFaultHandling;
    using Microsoft.AzureCAT.Samples.TransientFaultHandling.ServiceBus;

    The SendMessageToQueue method in Listing 1 shows how to use the async send features of ServiceBus wrapped in the Transient Fault Handling Framework. It is not much different than plain old ServiceBus calls, but it sure makes it easy to have the fault tolerance added almost for free. The GetOrderFromQueue method uses the standard synchronous methods to access the queue. The best practices article walks through using the async approach for a receive operation also. Notice that this method makes a call to Receive to get the message, then makes a call to GetBody to get a new strongly typed instance of CustomerOrderEntity to return.
    Listing 1

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.AzureCAT.Samples.TransientFaultHandling;
    using Microsoft.AzureCAT.Samples.TransientFaultHandling.ServiceBus;
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;
    using System.Xml.Serialization;
    using System.Diagnostics;

    namespace WPFServicebusPublishSubscribeSample
    {
        class ServiceBusQueueHelper
        {
            RetryPolicy currentPolicy = new RetryPolicy<ServiceBusTransientErrorDetectionStrategy>(RetryPolicy.DefaultClientRetryCount);
            QueueClient currentQueueClient;

            public QueueDescription GetOrCreateQueue()
            {
                QueueDescription queue = null;
                bool createNew = false;
                try
                {
                    // First, let's see if a queue with the specified name already exists.
                    queue = currentPolicy.ExecuteAction<QueueDescription>(() => { return StaticElements.CurrentNamespaceManager.GetQueue(StaticElements.QueueName); });
                    createNew = (queue == null);
                }
                catch (MessagingEntityNotFoundException)
                {
                    // Looks like the queue does not exist. We should create a new one.
                    createNew = true;
                }

                // If a queue with the specified name doesn't exist, it will be auto-created.
                if (createNew)
                {
                    try
                    {
                        var newqueue = new QueueDescription(StaticElements.QueueName);
                        queue = currentPolicy.ExecuteAction<QueueDescription>(() => { return StaticElements.CurrentNamespaceManager.CreateQueue(newqueue); });
                    }
                    catch (MessagingEntityAlreadyExistsException)
                    {
                        // A queue under the same name was already created by someone else,
                        // perhaps by another instance. Let's just use it.
                        queue = currentPolicy.ExecuteAction<QueueDescription>(() => { return StaticElements.CurrentNamespaceManager.GetQueue(StaticElements.QueueName); });
                    }
                }

                currentQueueClient = StaticElements.CurrentMessagingFactory.CreateQueueClient(StaticElements.QueueName);
                return queue;
            }

            public void SendMessageToQueue(CustomerOrderEntity Order)
            {
                BrokeredMessage msg = null;
                GetOrCreateQueue();

                // Use a retry policy to execute the Send action in an asynchronous and reliable fashion.
                currentPolicy.ExecuteAction
                (
                    (cb) =>
                    {
                        // A new BrokeredMessage instance must be created each time we send it. Reusing the original BrokeredMessage instance may not
                        // work as the state of its BodyStream cannot be guaranteed to be readable from the beginning.
                        msg = new BrokeredMessage(Order);

                        // Send the event asynchronously.
                        currentQueueClient.BeginSend(msg, cb, null);
                    },
                    (ar) =>
                    {
                        try
                        {
                            // Complete the asynchronous operation.
                            // This may throw an exception that will be handled internally by the retry policy.
                            currentQueueClient.EndSend(ar);
                        }
                        finally
                        {
                            // Ensure that any resources allocated by a BrokeredMessage instance are released.
                            if (msg != null)
                            {
                                msg.Dispose();
                                msg = null;
                            }
                        }
                    },
                    (ex) =>
                    {
                        // Always dispose the BrokeredMessage instance even if the send
                        // operation has completed unsuccessfully.
                        if (msg != null)
                        {
                            msg.Dispose();
                            msg = null;
                        }

                        // Always log exceptions.
                        Trace.TraceError(ex.Message);
                    }
                );
            }

            public CustomerOrderEntity GetOrderFromQueue()
            {
                CustomerOrderEntity Order = new CustomerOrderEntity();
                QueueClient myQueueClient = StaticElements.CurrentMessagingFactory.CreateQueueClient(StaticElements.QueueName, ReceiveMode.PeekLock);
                BrokeredMessage message;
                ServiceBusQueueHelper serviceBusQueueHelper = new ServiceBusQueueHelper();
                QueueDescription queueDescription;
                queueDescription = serviceBusQueueHelper.GetOrCreateQueue();

                if (queueDescription.MessageCount > 0)
                {
                    message = myQueueClient.Receive(TimeSpan.FromSeconds(90));
                    if (message != null)
                    {
                        try
                        {
                            Order = message.GetBody<CustomerOrderEntity>();
                            message.Complete();
                        }
                        catch (Exception ex)
                        {
                            throw ex;
                        }
                    }
                    else
                    {
                        throw new Exception("Did not receive the messages");
                    }
                }

                return Order;
            }
        }
    }

    I will post a link to the download demo in a separate post soon.
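    The post notes that the best practices article also covers an async receive, while Listing 1 only uses the synchronous Receive call. As a rough, hedged sketch (not part of the original sample): a receive could be wrapped in the same ExecuteAction(begin, end, error) overload that SendMessageToQueue uses. It assumes the QueueClient APM methods (BeginReceive/EndReceive) available in this SDK version, and the method name and the handleOrder callback are made up for illustration.

    // Sketch only: an async receive using the same retry-policy pattern as SendMessageToQueue in Listing 1.
    // ReceiveOrderFromQueueAsync and handleOrder are hypothetical names, not from the original sample.
    public void ReceiveOrderFromQueueAsync(Action<CustomerOrderEntity> handleOrder)
    {
        GetOrCreateQueue();
        currentPolicy.ExecuteAction
        (
            (cb) =>
            {
                // Begin the asynchronous receive; wait up to 90 seconds for a message.
                currentQueueClient.BeginReceive(TimeSpan.FromSeconds(90), cb, null);
            },
            (ar) =>
            {
                // Complete the receive. A null message means the wait timed out.
                BrokeredMessage message = currentQueueClient.EndReceive(ar);
                if (message != null)
                {
                    try
                    {
                        handleOrder(message.GetBody<CustomerOrderEntity>());
                        message.Complete();   // remove the message from the queue
                    }
                    catch
                    {
                        message.Abandon();    // make the message visible again for another attempt
                        throw;
                    }
                }
            },
            (ex) =>
            {
                // Always log exceptions.
                Trace.TraceError(ex.Message);
            }
        );
    }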

    Read the article

  • AppFabric WF4-WCF services, how to retrieve current url in codeactivity without httpcontext?

    - by tartafe
    Hi, I have developed a WF-WCF service with a code activity, and in it I want to retrieve the current URL of the service. If I disable the persistence feature of AppFabric I can retrieve the URL using HttpContext.Current.Request.Url.ToString(). If the persistence feature is enabled, the HttpContext is null. Is there a different way to retrieve the URL of the WCF service that hosts my code activity? Thanks in advance

    Read the article

  • SQL SERVER – Database Dynamic Caching by Automatic SQL Server Performance Acceleration

    - by pinaldave
    My second look at SafePeak's new version (2.1) revealed a few additional interesting features. For those of you who haven't read my previous reviews of SafePeak and are not familiar with it, here is a quick brief: SafePeak is in the business of accelerating the performance of SQL Server applications, as well as their scalability, without making code changes to the applications or to the databases. SafePeak performs database dynamic caching, by caching in memory the result sets of queries and stored procedures while keeping that cache correct and up to date. Cached queries are retrieved from the SafePeak RAM at microsecond speed and are not sent to the SQL Server. The application gets much faster results (100-500 microseconds), the load on the SQL Server is reduced (less CPU and IO) and the application or the infrastructure gets better scalability. The SafePeak solution is hosted either within your cloud servers, hosted servers or your enterprise servers, as part of the application architecture. Connection of the application is done via a change of connection strings or by adding a reroute line in the c:\windows\system32\drivers\etc\hosts file on all application servers. For those who would like to learn more about SafePeak architecture and how it works, I suggest reading this vendor's webpage: SafePeak Architecture.

    More interesting new features in SafePeak 2.1

    In my previous review of the new SafePeak I covered the first 4 things I noticed (check out my article "SQLAuthority News – SafePeak Releases a Major Update: SafePeak version 2.1 for SQL Server Performance Acceleration"):

    1. Cache setup and fine-tuning – a critical part for getting good caching results
    2. Database templates
    3. Choosing which database to cache
    4. Monitoring and analysis options by SafePeak

    Since then I had a chance to play with SafePeak some more and here is what I found.

    5. Analysis of SQL Performance (present and history): In SafePeak v.2.1 the tools for understanding performance became more comprehensive. Every 15 minutes SafePeak creates and updates various performance statistics. Each query (or procedure execution) that arrives at SafePeak gets a SQL pattern, and after it is used again there are statistics for that pattern. An important part of this product is that it understands the dependencies of every pattern (list of tables, views, user defined functions and procs). From this understanding SafePeak creates important analysis information on the performance of every object: response time from the database, response time from the SafePeak cache, average response time, percent of traffic and a breakdown of behavior. One of the interesting things this behavior column shows is how often the object is actually updated. The breakdown analysis gives the above information for queries and procedures, tables, views, databases and even the instance level. The data is shown for all arriving queries: read queries (that can be cached), but also all types of updates like DMLs, DDLs, DCLs, and even session settings queries. The stats are updated every 15 minutes and the SafePeak dashboard allows going back in time and investigating what happened within any time frame.

    6. Logon trigger, for making sure nothing corrupts SafePeak cache data: If you have an application with many parts, many servers, and many possible locations that can actually update the database, or the SQL Server is accessible to many DBAs or software engineers, each can access some database directly and make changes without going through SafePeak – this can create a potential corruption of the data stored in the SafePeak cache. To make sure the SafePeak cache is correct, all updates need to arrive through SafePeak; if a DBA accesses the database directly and makes some changes, for example, then SafePeak will simply not know about it and will not clean the SafePeak cache. In the new version, SafePeak brought a new feature called "Logon Trigger" to solve this challenge. With a special click of a button SafePeak can deploy a server logon trigger (with a CLR object) on your SQL Server that monitors all connections and informs SafePeak of any connection that is not coming from SafePeak. In the SafePeak dashboard there is an interface that allows you to control which logins can be ignored based on login names and IPs, while the rest will invoke a cache cleanup in SafePeak and actually lock the SafePeak cache until that connection is closed. It is important to note that this does not interrupt any logins, it only informs SafePeak of such connections. On the Dashboard screen in SafePeak you will be able to see those connections and then decide what to do with them. Configuration of this feature in the SafePeak dashboard can be done here: Settings -> SQL instances management -> click on instance -> Logon Trigger tab.

    Other features:

    7. User management – the ability to grant permissions to someone without changing the configuration, and to only use SafePeak as a performance analysis tool.
    8. Better reports for analysis of performance using 15 minute resolution charts.
    9. Caching of client cursors
    10. Support for IPv6

    Summary

    SafePeak is a great SQL Server performance acceleration solution for users who want immediate results for sites with performance, scalability and peak-spike challenges. Especially if your apps are packaged or 3rd party, since no code changes are done. SafePeak can significantly improve response times by reducing the network roundtrip to the database, decreasing CPU resource usage, and eliminating I/O and storage access. The SafePeak team provides a free fully functional trial at www.safepeak.com/download and actually provides one-on-one assistance during the trial.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
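    As a purely illustrative aside (not from the original article): since the article says applications are connected to SafePeak by changing the connection string (or adding a hosts-file reroute), the application-side change might look something like the sketch below. The host name, port and database are made-up placeholders.

    // Hedged sketch: pointing an ADO.NET connection at a caching proxy instead of the SQL Server directly.
    // "safepeak-host,1433" and "MyAppDb" are hypothetical placeholders, not values from the article.
    using System.Data.SqlClient;

    class ConnectionExample
    {
        // Before: Data Source points at the SQL Server itself.
        const string Direct = "Data Source=sqlserver01;Initial Catalog=MyAppDb;Integrated Security=True";

        // After: Data Source points at the proxy host; everything else stays the same,
        // so no application code changes are required.
        const string ViaProxy = "Data Source=safepeak-host,1433;Initial Catalog=MyAppDb;Integrated Security=True";

        static SqlConnection Open()
        {
            var connection = new SqlConnection(ViaProxy);
            connection.Open();
            return connection;
        }
    }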

    Read the article

  • AppFabric Caching Feedback

    - by Michael Stephenson
    We are running a survey to collect feedback around scenarios where people were experimenting with Velocity on Windows Server 2003 in the CTP but are now in a position where the beta requires Windows Server 2008. We would like to understand how important this scenario is perceived to be. If you are in the Connected Systems Community and would like to provide feedback, please complete the following survey http://www.surveymonkey.com/s/N3VKZWN

    Read the article

  • Plan Caching and Query Memory Part I – When not to use stored procedure or other plan caching mechanisms like sp_executesql or prepared statement

    - by sqlworkshops
    The most common performance mistake SQL Server developers make: SQL Server estimates the memory requirement for a query at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused for different sets of parameter values / predicates, and hence a different amount of memory can be estimated based on each set of parameter values / predicates. Common memory-allocating queries are those that perform Sorts and Hash Match operations like Hash Join, Hash Aggregation or Hash Union. This article covers Sort with examples. It is recommended to read Plan Caching and Query Memory Part II after this article, which covers Hash Match operations.

    When the plan is cached by using a stored procedure or other plan caching mechanisms like sp_executesql or prepared statements, SQL Server estimates the memory requirement based on the first set of execution parameters. Later, when the same stored procedure is called with a different set of parameter values, the same amount of memory is used to execute the stored procedure. This might lead to underestimation / overestimation of memory on plan reuse. Overestimation of memory might not be a noticeable issue for Sort operations, but underestimation of memory will lead to a spill over to tempdb, resulting in poor performance.

    This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to a spill over to tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that with Hash Match operations, overestimation of memory can actually lead to poor performance.

    To read additional articles I wrote click here.

    In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of a spill over to tempdb, unless the memory requirement for a stored procedure does not change significantly based on predicates.

    The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Enough theory, let's see an example where we sort initially 1 month of data and then use the stored procedure to sort 6 months of data.

    Let's create a stored procedure that sorts customers by name within a certain date range.

    --Example provided by www.sqlworkshops.com
    create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
    begin
          declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
          select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                where c.CreationDate between @CreationDateFrom and @CreationDateTo
                order by c.CustomerName
          option (maxdop 1)
    end
    go

    Let's execute the stored procedure initially with a 1 month date range.

    set statistics time on
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-31'
    go

    The stored procedure took 48 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.
    The estimated number of rows, 43199.9, is similar to the actual number of rows, 43200, and hence the memory estimation should be ok. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 6 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    The stored procedure took 679 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated. The estimated number of rows, 43199.9, is way different from the actual number of rows, 259200, because the estimation is based on the first set of parameter values supplied to the stored procedure, which covered 1 month in our case. This underestimation will lead to a sort spill over to tempdb, resulting in poor performance. There were Sort Warnings in SQL Profiler.

    To monitor the amount of data written to and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution; for additional information refer to the webcast: www.sqlworkshops.com/webcasts.

    Let's recompile the stored procedure and then first execute the stored procedure with the 6 month date range. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile; refer to our webcasts for further details.

    exec sp_recompile CustomersByCreationDate
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    Now the stored procedure took only 294 ms instead of 679 ms. The stored procedure was granted 26832 KB of memory. The estimated number of rows, 259200, is similar to the actual number of rows of 259200. The better performance of this stored procedure is due to better estimation of memory and avoiding a sort spill over to tempdb. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 1 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-31'
    go

    The stored procedure took 49 ms to complete, similar to our very first stored procedure execution. This stored procedure was granted more memory (26832 KB) than the necessary memory (6656 KB), based on the 6 month data estimation (259200 rows) instead of the 1 month data estimation (43199.9 rows). This is because the estimation is based on the first set of parameter values supplied to the stored procedure, which covered 6 months in this case. This overestimation did not affect performance, but it might affect the performance of other concurrent queries requiring memory, and hence overestimation is not recommended. This overestimation might affect the performance of Hash Match operations; refer to the article Plan Caching and Query Memory Part II for further details.

    Let's recompile the stored procedure and then first execute the stored procedure with a 2 day date range.

    exec sp_recompile CustomersByCreationDate
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-02'
    go

    The stored procedure took 1 ms. The stored procedure was granted 1024 KB based on 1440 rows being estimated. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 6 month date range.
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    The stored procedure took 955 ms to complete, way higher than the 679 ms or 294 ms we noticed before. The stored procedure was granted 1024 KB based on 1440 rows being estimated. But we noticed in the past that this stored procedure with a 6 month date range needed 26832 KB of memory to execute optimally without a spill over to tempdb. This is a clear underestimation of memory and the reason for the very poor performance. There were Sort Warnings in SQL Profiler. Unlike before, this was a Multiple pass sort instead of a Single pass sort. This occurs when the granted memory is too low.

    Intermediate Summary: This issue can be avoided by not caching the plan for memory-allocating queries. The other possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined date range.

    Let's recreate the stored procedure with the recompile hint.

    --Example provided by www.sqlworkshops.com
    drop proc CustomersByCreationDate
    go
    create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
    begin
          declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
          select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                where c.CreationDate between @CreationDateFrom and @CreationDateTo
                order by c.CustomerName
          option (maxdop 1, recompile)
    end
    go

    Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-30'
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    The stored procedure took 48 ms and 291 ms, in line with previous optimal execution times. The stored procedure with the 1 month date range has good estimation like before. The stored procedure with the 6 month date range also has good estimation and memory grant like before, because the query was recompiled with the current set of parameter values. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.

    Let's recreate the stored procedure with an optimize for hint for the 6 month date range.

    --Example provided by www.sqlworkshops.com
    drop proc CustomersByCreationDate
    go
    create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
    begin
          declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
          select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                where c.CreationDate between @CreationDateFrom and @CreationDateTo
                order by c.CustomerName
          option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo ='2001-06-30'))
    end
    go

    Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-30'
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    The stored procedure took 48 ms and 291 ms, in line with previous optimal execution times. The stored procedure with the 1 month date range has an overestimation of rows and memory. This is because we provided a hint to optimize for 6 months of data. The stored procedure with the 6 month date range has good estimation and memory grant because we provided a hint to optimize for 6 months of data.
    Let's execute the stored procedure with a 12 month date range using the currently cached plan for the 6 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-12-31'
    go

    The stored procedure took 1138 ms to complete. 259200 rows were estimated based on the optimize for hint value for the 6 month date range. The actual number of rows is 524160 due to the 12 month date range. The stored procedure was granted enough memory to sort the 6 month date range and not the 12 month date range, so there will be a spill over to tempdb. There were Sort Warnings in SQL Profiler.

    As we see above, the optimize for hint cannot guarantee enough memory and optimal performance compared to the recompile hint.

    This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to a spill over to tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that with Hash Match operations, overestimation of memory can actually lead to poor performance.

    Summary: A cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that will lead to compilation overhead. However, in most cases it might be ok to pay for compilation rather than spilling a sort over to tempdb, which could be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but in case one sorts more data than hinted by the optimize for hint, this will still lead to a spill. On the other side there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In the case of Hash Match operations, this overestimation of memory might lead to poor performance. When the values used in the optimize for hint are archived from the database, the estimation will be wrong, leading to worse performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in this case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts), and I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work.
This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan
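    The examples above use a stored procedure, but the introduction notes that the same plan-reuse behavior applies to sp_executesql and prepared statements. As a rough, hedged sketch (not from the original article): an application that issues a parameterized query through ADO.NET gets a cached, parameter-sniffed plan in the same way, so the first set of parameter values drives the memory grant here too. The connection string is a placeholder; the Customers table and columns are taken from the article's example.

    // Sketch only: a parameterized ADO.NET query is sent as sp_executesql and its plan is cached,
    // so the memory grant estimated for the first @From/@To values is reused on later executions.
    using System;
    using System.Data.SqlClient;

    class PlanReuseExample
    {
        static void PrintCustomers(string connectionString, DateTime from, DateTime to)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "select c.CustomerName, c.CreationDate from Customers c " +
                "where c.CreationDate between @From and @To order by c.CustomerName", connection))
            {
                command.Parameters.AddWithValue("@From", from);
                command.Parameters.AddWithValue("@To", to);
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0} {1}", reader.GetString(0), reader.GetDateTime(1));
                    }
                }
            }
        }
    }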

    Read the article

  • Per-machine decentralised DNS caching - nscd/lwresd/etc

    - by Dan Carley
    Preface: We have caching resolvers at each of our geographic network locations. These are clustered for resiliency, and their locality reduces the latency of internal requests generated by our servers. This works well. Except that a vast quantity of the requests seen over the wire are lookups for the same records, generated by applications which don't perform any DNS caching of their own. Questions: Is there a significant benefit to running lightweight caching daemons on the individual servers in order to reduce repeated requests from hitting the network? Does anyone have experience of using [u]nscd, lwresd or dnscache to do such a thing? Are there any other packages worth looking at? Any caveats to beware of, besides the obvious one of caching (and negative-caching) stale results?

    Read the article

  • Caching DNS server (bind9.2) CPU usage is so so so high.

    - by Gk
    Hi, I have a caching-only DNS server which gets ~3k queries per second. Here are the specs:

    Xeon dual-core 2.8GHz
    4GB of RAM
    CentOS 5.x (kernel 2.6.18-164.15.1.el5PAE)
    bind 9.4.2

    rndc status: recursive clients: 666/4900/5000

    About 300 new queries (not in cache) per second. Bind always uses 100% of one core with the single-thread config. After I recompiled it to multi-thread, it uses nearly 200% on two cores :( No iowait, only sys and user. I searched around but didn't see any info about how bind uses CPU. Why does it become the bottleneck? One more thing, here is the RAM usage:

    cat /proc/meminfo
    MemTotal:    4147876 kB
    MemFree:     1863972 kB
    Buffers:      143632 kB
    Cached:       372792 kB
    SwapCached:        0 kB
    Active:      1916804 kB
    Inactive:     276056 kB

    I've set max-cache-size to 0 to make sure bind can use as much RAM as it wants, but it always stops at ~2GB. Since every second we get queries that are not cached, theoretically RAM should become exhausted, but it isn't. Do you have any idea? TIA, -Gk

    Read the article

  • SharePoint 2010 HierarchicalConfig Caching Problem

    - by Damon
    We've started using the Application Foundations for SharePoint 2010 in some of our projects at work, and I came across a nasty issue with the hierarchical configuration settings. I have some settings that I am storing at the Farm level, and as I was testing my code it seemed like the settings were not being saved - at least that is what it appeared was the case at first. However, I happened to reset IIS and the settings suddenly appeared. Immediately, I figured that it must be a caching issue and dug into the code base. I found that there was a 10 second caching mechanism in the SPFarmPropertyBag and the SPWebAppPropertyBag classes. So I ran another test where I waited 10 seconds to make sure that enough time had passed to force the caching mechanism to reset the data. After 10 minutes the cache had still not cleared. After digging a bit further, I found a double lock check that looked a bit off in the GetSettingsStore() method of the SPFarmPropertyBag class:

    if (_settingStore == null || (DateTime.Now.Subtract(lastLoad).TotalSeconds) > cacheInterval)
    {
        //Need to exist so don't deadlock.
        rrLock.EnterWriteLock();
        try
        {
            //make sure first another thread didn't already load...
            if (_settingStore == null)
            {
                _settingStore = WebAppSettingStore.Load(this.webApplication);
                lastLoad = DateTime.Now;
            }
        }
        finally
        {
            rrLock.ExitWriteLock();
        }
    }

    What ends up happening here is that the outer check determines whether the _settingStore is null or the cache has expired, but the inner check only checks whether the _settingStore is null (which is never the case after the first time it has been loaded). Ergo, the cached settings are never reset. The fix is really easy: just add the cache-expiry check back into the inner if statement.

    //make sure first another thread didn't already load...
    if (_settingStore == null || (DateTime.Now.Subtract(lastLoad).TotalSeconds) > cacheInterval)
    {
        _settingStore = WebAppSettingStore.Load(this.webApplication);
        lastLoad = DateTime.Now;
    }

    And then it starts working just fine, as long as you wait at least 10 seconds for the cache to clear.

    Read the article

  • Image caching when rendering the same images on different pages

    - by HelpNeeder
    I'm told to think about caching of images that will be displayed on the page. The images will be repeated throughout the website on different pages and I'm told to figure out the best way to cache them. There could be a few to a dozen images on a single page. Here are a few questions: Will browser caching work to display the same images across different web pages? Should I rather store images in stringified form in memory, using JavaScript arrays? Store them on the hard drive using 'localStorage'? What would be the easiest yet best option for this? Are there any other alternatives? How do I force caching? Any other information would be greatly appreciated...

    Read the article

  • Microsoft Launches Pre-release Updates of AppFabric, BizTalk

    Microsoft Application Infrastructure Virtual Conference highlights integration between new cloud-savvy .NET middleware and Microsoft's integration server.

    Read the article

  • Using AppFabric session state provider, does each session get its own region?

    - by goombaloon
    I've been playing around with AppFabric Beta 2's session state provider. It appears that each new session gets its own region (named "Default_Region_XXXX", where XXXX is an apparently random sequence of numbers). If I understand regions correctly, each region is tied to a single cluster host, leaving a single point of failure. Why is each session being given its own region? Also, do sessions eventually time out and clean themselves up in the cache, or is that behavior just inherited from the cache settings? I'm wondering (in a production application scenario) if one would use a separate named cache for session state apart from other application caches? Thanks.
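    For reference, here is a minimal, hedged sketch of how regions and named caches are used explicitly through the AppFabric caching client; it is not taken from the session state provider itself, and the "SessionCache" and "UserRegion" names are made-up examples (named caches must already have been created by an administrator). It only illustrates the point raised in the question: everything placed in one region lives together on a single cache host, whereas items put without a region are partitioned across the cluster.

    // Hedged sketch of explicit region usage with the AppFabric caching client
    // (Microsoft.ApplicationServer.Caching). "SessionCache" and "UserRegion" are hypothetical names.
    using Microsoft.ApplicationServer.Caching;

    class RegionExample
    {
        static void Run()
        {
            // Reads the cache host configuration from the application config file.
            var factory = new DataCacheFactory();

            // A named cache separate from the default cache; it must already exist in the cluster.
            DataCache cache = factory.GetCache("SessionCache");

            // A region groups items so they can be enumerated and removed together,
            // but all of its contents are stored on a single cache host.
            cache.CreateRegion("UserRegion");
            cache.Put("cart", "3 items", "UserRegion");

            // Items added without a region are distributed across the cluster hosts.
            cache.Put("cart-no-region", "3 items");

            object value = cache.Get("cart", "UserRegion");
        }
    }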

    Read the article

  • Minimalistic PHP template engine with caching but not Smarty?

    - by Pekka
    There are loads of questions for "the right" PHP template engine, but none of them is focused on caching. Does anybody know a lightweight, high-quality, PHP 5 based template engine that does the following out of the box:

    * Low-level templating functions (replacements, loops, and filtering, maybe conditionals)
    * Caching of the parsed results with the possibility to set an individual TTL per item, and of course to force a reload programmatically
    * Extremely easy usage (like Smarty's)
    * Modest in polluting the namespace (the ideal solution would be one class to interact with from the outside application)
    * But not Smarty.

    I have nothing against, and often use, Smarty, but I am looking for something a bit simpler and leaner. I took a look at Fabien Potencier's Twig, which looks very nice and compiles templates into PHP code, but it doesn't do any actual caching beyond that. I need and want a template engine, as I need to completely separate code and presentation in a way that an HTML designer can understand later on, so please no fundamental discussions about whether template engines in PHP make sense. Those discussions are important, but they already exist on SO.

    Read the article

  • OBIEE 11.1.1 - How to Enable Caching in Internet Information Services (IIS) 7.0+

    - by Ahmed A
    Follow these steps to configure static file caching and content expiration if you are using the IIS 7.0 Web Server with Oracle Business Intelligence.

    Tip: Install IIS URL Rewrite, which enables Web administrators to create powerful outbound rules.

    Following are the steps to set up static file caching for the IIS 7.0+ Web Server:

    1. In the "web.config" file for the OBIEE static files virtual directory (ORACLE_HOME/bifoundation/web/app), add the outbound rule for caching shown below:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
        <system.webServer>
            <urlCompression doDynamicCompression="true" />
            <rewrite>
                <outboundRules>
                    <rule name="header1" preCondition="FilesMatch" patternSyntax="Wildcard">
                        <match serverVariable="RESPONSE_CACHE_CONTROL" pattern="*" />
                        <action type="Rewrite" value="max-age=604800" />
                    </rule>
                    <preConditions>
                        <preCondition name="FilesMatch">
                            <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/css|^text/x-javascript|^text/javascript|^image/gif|^image/jpeg|^image/png" />
                        </preCondition>
                    </preConditions>
                </outboundRules>
            </rewrite>
        </system.webServer>
    </configuration>

    2. Restart IIS.
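    As a quick way to check that the rule is taking effect (a hedged, illustrative sketch, not from the original post): request one of the static files and inspect the Cache-Control response header, which should carry the max-age value configured above. The URL below is a made-up placeholder.

    // Sketch: verify the Cache-Control header returned for a static file.
    using System;
    using System.Net;

    class CacheHeaderCheck
    {
        static void Main()
        {
            // Hypothetical URL of a static OBIEE resource served through IIS.
            var request = (HttpWebRequest)WebRequest.Create("http://myserver/analytics/res/common.css");
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                // Expecting something like "max-age=604800" once the rewrite rule is in place.
                Console.WriteLine("Cache-Control: " + response.Headers[HttpResponseHeader.CacheControl]);
            }
        }
    }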

    Read the article

  • Abstracting entity caching in XNA

    - by Grofit
    I am in a situation where I am writing a framework in XNA and there will be quite a lot of static (ish) content which wont render that often. Now I am trying to take the same sort of approach I would use when doing non game development, where I don't even think about caching until I have finished my application and realise there is a performance problem and then implement a layer of caching over whatever needs it, but wrap it up so nothing is aware its happening. However in XNA the way we would usually cache would be drawing our objects to a texture and invalidating after a change occurs. So if you assume an interface like so: public interface IGameComponent { void Update(TimeSpan elapsedTime); void Render(GraphicsDevice graphicsDevice); } public class ContainerComponent : IGameComponent { public IList<IGameComponent> ChildComponents { get; private set; } // Assume constructor public void Update(TimeSpan elapsedTime) { // Update anything that needs it } public void Render(GraphicsDevice graphicsDevice) { foreach(var component in ChildComponents) { // draw every component } } } Then I was under the assumption that we just draw everything directly to the screen, then when performance becomes an issue we just add a new implementation of the above like so: public class CacheableContainerComponent : IGameComponent { private Texture2D cachedOutput; private bool hasChanged; public IList<IGameComponent> ChildComponents { get; private set; } // Assume constructor public void Update(TimeSpan elapsedTime) { // Update anything that needs it // set hasChanged to true if required } public void Render(GraphicsDevice graphicsDevice) { if(hasChanged) { CacheComponents(graphicsDevice); } // Draw cached output } private void CacheComponents(GraphicsDevice graphicsDevice) { // Clean up existing cache if needed var cachedOutput = new RenderTarget2D(...); graphicsDevice.SetRenderTarget(renderTarget); foreach(var component in ChildComponents) { // draw every component } graphicsDevice.SetRenderTarget(null); } } Now in this example you could inherit, but your Update may become a bit tricky then without changing your base class to alert you if you had changed, but it is up to each scenario to choose if its inheritance/implementation or composition. Also the above implementation will re-cache within the rendering cycle, which may cause performance stutters but its just an example of the scenario... Ignoring those facts as you can see that in this example you could use a cache-able component or a non cache-able one, the rest of the framework needs not know. The problem here is that if lets say this component is drawn mid way through the game rendering, other items will already be within the default drawing buffer, so me doing this would discard them, unless I set it to be persisted, which I hear is a big no no on the Xbox. So is there a way to have my cake and eat it here? One simple solution to this is make an ICacheable interface which exposes a cache method, but then to make any use of this interface you would need the rest of the framework to be cache aware, and check if it can cache, and to then do so. Which then means you are polluting and changing your main implementations to account for and deal with this cache... I am also employing Dependency Injection for alot of high level components so these new cache-able objects would be spat out from that, meaning no where in the actual game would they know they are caching... if that makes sense. 
    Just in case anyone asked how I expected to keep it cache aware when I would need to new up a cacheable entity.
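    For what it's worth, here is a minimal, hedged sketch of the ICacheable interface idea described above; the member names are made up for illustration, and it assumes the question's IGameComponent plus XNA's GraphicsDevice. The drawback mentioned in the question is visible in the renderer, which has to know about the interface and decide when to re-cache.

    // Hedged sketch of the ICacheable idea from the question; names are illustrative only.
    using Microsoft.Xna.Framework.Graphics;

    public interface ICacheable
    {
        bool NeedsRecache { get; }                  // set when a child component changes
        void Cache(GraphicsDevice graphicsDevice);  // redraw children into an off-screen texture
    }

    public class SceneRenderer
    {
        public void Render(IGameComponent component, GraphicsDevice graphicsDevice)
        {
            // The rest of the framework becomes cache-aware: it must check for the interface.
            var cacheable = component as ICacheable;
            if (cacheable != null && cacheable.NeedsRecache)
            {
                // Re-caching here, mid-frame, is exactly the render-target/back-buffer
                // problem the question raises; a real implementation would schedule this
                // outside the main draw pass.
                cacheable.Cache(graphicsDevice);
            }

            component.Render(graphicsDevice);
        }
    }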

    Read the article

  • Is there some sort of CacheDependency in System.Runtime.Caching?

    - by Venemo
    I heard that .NET 4 has a new caching API. Okay, so the good old System.Web.Caching.Cache (which is, by the way, still there in .NET 4) has the ability to set so-called CacheDependency objects to determine whether a cached item is expired or not. One can also specify custom logic for determining whether a cached item is still usable by deriving a custom subclass from CacheDependency. I'm curious, is there a way to provide such logic in the new API?
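    To make the comparison concrete, here is a small, hedged sketch of the System.Web.Caching pattern the question describes: an item inserted with a file-based CacheDependency, and a custom dependency subclass that signals expiration itself. The file path and class names are illustrative only, and this shows the old API, not the System.Runtime.Caching API being asked about.

    // Sketch of the System.Web.Caching.CacheDependency usage described in the question.
    using System;
    using System.Web;
    using System.Web.Caching;

    class OldCacheExample
    {
        static void CacheWithFileDependency(string path, object value)
        {
            // The entry is evicted automatically when the file at 'path' changes.
            HttpRuntime.Cache.Insert("settings", value, new CacheDependency(path));
        }
    }

    // Custom logic: a CacheDependency subclass decides for itself when the item is stale.
    class ManualDependency : CacheDependency
    {
        public void Expire()
        {
            // Signals the cache that the dependent item should be invalidated.
            NotifyDependencyChanged(this, EventArgs.Empty);
        }
    }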

    Read the article

  • memcached vs. internal caching in PHP?

    - by waitinforatrain
    Hey, I'm working on some old(ish) software in PHP that maintains a $cache array to reduce the number of SQL queries. I was thinking of just putting memcached in its place and I'm wondering whether or not to get rid of the internal caching. Would there still be a worthwhile performance increase if I keep the internal caching, or would memcached suffice?

    Read the article
