Search Results

Search found 24620 results on 985 pages for 'microsoft chart controls'.


  • C#/.NET Little Wonders: ConcurrentBag and BlockingCollection

    - by James Michael Hare
    In the first week of concurrent collections, we began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>. The last post discussed the ConcurrentDictionary<TKey, TValue>. Finally this week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see C#/.NET Little Wonders: A Redux. Recap As you'll recall from the previous posts, the original collections were object-based containers that accomplished synchronization through a Synchronized member. With the advent of .NET 2.0, the original collections were succeeded by the generic collections which are fully type-safe, but eschew automatic synchronization. With .NET 4.0, a new breed of collections was born in the System.Collections.Concurrent namespace. Of these, the final concurrent collections we will examine are the ConcurrentBag and a very useful wrapper class called the BlockingCollection. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this informative whitepaper by the Microsoft Parallel Computing Platform team here. ConcurrentBag<T> – Thread-safe unordered collection. Unlike the other concurrent collections, the ConcurrentBag<T> has no non-concurrent counterpart in the .NET collections libraries. Items can be added and removed from a bag just like any other collection, but unlike the other collections, the items are not maintained in any order. This makes the bag handy for those cases when all you care about is that the data be consumed eventually, without regard for order of consumption or even fairness – that is, it's possible new items could be consumed before older items under the right circumstances. So why would you ever want a container that can be unfair? Well, to look at it another way, you can use a ConcurrentQueue and get the fairness, but it comes at a cost in that the ordering rules and synchronization required to maintain that ordering can affect scalability a bit. Thus sometimes the bag is great when you want the fastest way to get the next item to process, and don't care what item it is or how long it's been waiting. The way that the ConcurrentBag works is to take advantage of the new ThreadLocal<T> type (new in System.Threading for .NET 4.0) so that each thread using the bag has a list local to just that thread. This means that adding to or removing from a thread-local list requires very little synchronization. The problem comes in when a thread goes to consume an item but its local list is empty. In this case the bag performs "work-stealing" where it will rob an item from another thread that has items in its list. This requires a higher level of synchronization which adds a bit of overhead to the take operation. So, as you can imagine, this makes the ConcurrentBag good for situations where each thread both produces and consumes items from the bag, but it would be less than ideal in situations where some threads are dedicated producers and the other threads are dedicated consumers, because the work-stealing synchronization would outweigh the thread-local optimization for a thread taking its own items.
Like the other concurrent collections, there are some curiosities to keep in mind: IsEmpty(), Count, ToArray(), and GetEnumerator() lock collection Each of these needs to take a snapshot of whole bag to determine if empty, thus they tend to be more expensive and cause Add() and Take() operations to block. ToArray() and GetEnumerator() are static snapshots Because it is based on a snapshot, will not show subsequent updates after snapshot. Add() is lightweight Since adding to the thread-local list, there is very little overhead on Add. TryTake() is lightweight if items in thread-local list As long as items are in the thread-local list, TryTake() is very lightweight, much more so than ConcurrentStack() and ConcurrentQueue(), however if the local thread list is empty, it must steal work from another thread, which is more expensive. Remember, a bag is not ideal for all situations, it is mainly ideal for situations where a process consumes an item and either decomposes it into more items to be processed, or handles the item partially and places it back to be processed again until some point when it will complete.  The main point is that the bag works best when each thread both takes and adds items. For example, we could create a totally contrived example where perhaps we want to see the largest power of a number before it crosses a certain threshold.  Yes, obviously we could easily do this with a log function, but bare with me while I use this contrived example for simplicity. So let’s say we have a work function that will take a Tuple out of a bag, this Tuple will contain two ints.  The first int is the original number, and the second int is the last multiple of that number.  So we could load our bag with the initial values (let’s say we want to know the last multiple of each of 2, 3, 5, and 7 under 100. 1: var bag = new ConcurrentBag<Tuple<int, int>> 2: { 3: Tuple.Create(2, 1), 4: Tuple.Create(3, 1), 5: Tuple.Create(5, 1), 6: Tuple.Create(7, 1) 7: }; Then we can create a method that given the bag, will take out an item, apply the multiplier again, 1: public static void FindHighestPowerUnder(ConcurrentBag<Tuple<int,int>> bag, int threshold) 2: { 3: Tuple<int,int> pair; 4:  5: // while there are items to take, this will prefer local first, then steal if no local 6: while (bag.TryTake(out pair)) 7: { 8: // look at next power 9: var result = Math.Pow(pair.Item1, pair.Item2 + 1); 10:  11: if (result < threshold) 12: { 13: // if smaller than threshold bump power by 1 14: bag.Add(Tuple.Create(pair.Item1, pair.Item2 + 1)); 15: } 16: else 17: { 18: // otherwise, we're done 19: Console.WriteLine("Highest power of {0} under {3} is {0}^{1} = {2}.", 20: pair.Item1, pair.Item2, Math.Pow(pair.Item1, pair.Item2), threshold); 21: } 22: } 23: } Now that we have this, we can load up this method as an Action into our Tasks and run it: 1: // create array of tasks, start all, wait for all 2: var tasks = new[] 3: { 4: new Task(() => FindHighestPowerUnder(bag, 100)), 5: new Task(() => FindHighestPowerUnder(bag, 100)), 6: }; 7:  8: Array.ForEach(tasks, t => t.Start()); 9:  10: Task.WaitAll(tasks); Totally contrived, I know, but keep in mind the main point!  When you have a thread or task that operates on an item, and then puts it back for further consumption – or decomposes an item into further sub-items to be processed – you should consider a ConcurrentBag as the thread-local lists will allow for quick processing.  
However, if you need ordering or if your processes are dedicated producers or consumers, this collection is not ideal.  As with anything, you should performance test as your mileage will vary depending on your situation! BlockingCollection<T> – A producers & consumers pattern collection The BlockingCollection<T> can be treated like a collection in its own right, but in reality it adds a producers and consumers paradigm to any collection that implements the interface IProducerConsumerCollection<T>.  If you don’t specify one at the time of construction, it will use a ConcurrentQueue<T> as its underlying store. If you don’t want to use the ConcurrentQueue, the ConcurrentStack and ConcurrentBag also implement the interface (though ConcurrentDictionary does not).  In addition, you are of course free to create your own implementation of the interface. So, for those who don’t remember the producers and consumers classical computer-science problem, the gist of it is that you have one (or more) processes that are creating items (producers) and one (or more) processes that are consuming these items (consumers).  Now, the crux of the problem is that there is a bin (queue) where the produced items are placed, and typically that bin has a limited size.  Thus if a producer creates an item, but there is no space to store it, it must wait until an item is consumed.  Also if a consumer goes to consume an item and none exists, it must wait until an item is produced. The BlockingCollection makes it trivial to implement any standard producers/consumers process set by providing that “bin” where the items can be produced into and consumed from with the appropriate blocking operations.  In addition, you can specify whether the bin should have a limited size or can be (theoretically) unbounded, and you can specify timeouts on the blocking operations. As far as your choice of “bin”, for the most part the ConcurrentQueue is the right choice because it is fairly light and maximizes fairness by ordering items so that they are consumed in the same order they are produced.  You can use the concurrent bag or stack, of course, but your ordering would be random-ish in the case of the former and LIFO in the case of the latter. So let’s look at some of the methods of note in BlockingCollection: BoundedCapacity returns capacity of the “bin” If the bin is unbounded, the capacity is int.MaxValue. Count returns an internally-kept count of items This makes it O(1), but if you modify underlying collection directly (not recommended) it is unreliable. CompleteAdding() is used to cut off further adds. This sets IsAddingCompleted and begins to wind down consumers once empty. IsAddingCompleted is true when producers are “done”. Once you are done producing, should complete the add process to alert consumers. IsCompleted is true when producers are “done” and “bin” is empty. Once you mark the producers done, and all items removed, this will be true. Add() is a blocking add to collection. If bin is full, will wait till space frees up Take() is a blocking remove from collection. If bin is empty, will wait until item is produced or adding is completed. GetConsumingEnumerable() is used to iterate and consume items. Unlike the standard enumerator, this one consumes the items instead of iteration. TryAdd() attempts add but does not block completely If adding would block, returns false instead, can specify TimeSpan to wait before stopping. 
TryTake() attempts to take but does not block completely Like TryAdd(), if taking would block, returns false instead, can specify TimeSpan to wait. Note the use of CompleteAdding() to signal the BlockingCollection that nothing else should be added.  This means that any attempts to TryAdd() or Add() after marked completed will throw an InvalidOperationException.  In addition, once adding is complete you can still continue to TryTake() and Take() until the bin is empty, and then Take() will throw the InvalidOperationException and TryTake() will return false. So let’s create a simple program to try this out.  Let’s say that you have one process that will be producing items, but a slower consumer process that handles them.  This gives us a chance to peek inside what happens when the bin is bounded (by default, the bin is NOT bounded). 1: var bin = new BlockingCollection<int>(5); Now, we create a method to produce items: 1: public static void ProduceItems(BlockingCollection<int> bin, int numToProduce) 2: { 3: for (int i = 0; i < numToProduce; i++) 4: { 5: // try for 10 ms to add an item 6: while (!bin.TryAdd(i, TimeSpan.FromMilliseconds(10))) 7: { 8: Console.WriteLine("Bin is full, retrying..."); 9: } 10: } 11:  12: // once done producing, call CompleteAdding() 13: Console.WriteLine("Adding is completed."); 14: bin.CompleteAdding(); 15: } And one to consume them: 1: public static void ConsumeItems(BlockingCollection<int> bin) 2: { 3: // This will only be true if CompleteAdding() was called AND the bin is empty. 4: while (!bin.IsCompleted) 5: { 6: int item; 7:  8: if (!bin.TryTake(out item, TimeSpan.FromMilliseconds(10))) 9: { 10: Console.WriteLine("Bin is empty, retrying..."); 11: } 12: else 13: { 14: Console.WriteLine("Consuming item {0}.", item); 15: Thread.Sleep(TimeSpan.FromMilliseconds(20)); 16: } 17: } 18: } Then we can fire them off: 1: // create one producer and two consumers 2: var tasks = new[] 3: { 4: new Task(() => ProduceItems(bin, 20)), 5: new Task(() => ConsumeItems(bin)), 6: new Task(() => ConsumeItems(bin)), 7: }; 8:  9: Array.ForEach(tasks, t => t.Start()); 10:  11: Task.WaitAll(tasks); Notice that the producer is faster than the consumer, thus it should be hitting a full bin often and displaying the message after it times out on TryAdd(). 1: Consuming item 0. 2: Consuming item 1. 3: Bin is full, retrying... 4: Bin is full, retrying... 5: Consuming item 3. 6: Consuming item 2. 7: Bin is full, retrying... 8: Consuming item 4. 9: Consuming item 5. 10: Bin is full, retrying... 11: Consuming item 6. 12: Consuming item 7. 13: Bin is full, retrying... 14: Consuming item 8. 15: Consuming item 9. 16: Bin is full, retrying... 17: Consuming item 10. 18: Consuming item 11. 19: Bin is full, retrying... 20: Consuming item 12. 21: Consuming item 13. 22: Bin is full, retrying... 23: Bin is full, retrying... 24: Consuming item 14. 25: Adding is completed. 26: Consuming item 15. 27: Consuming item 16. 28: Consuming item 17. 29: Consuming item 19. 30: Consuming item 18. Also notice that once CompleteAdding() is called and the bin is empty, the IsCompleted property returns true, and the consumers will exit. Summary The ConcurrentBag is an interesting collection that can be used to optimize concurrency scenarios where tasks or threads both produce and consume items.  In this way, it will choose to consume its own work if available, and then steal if not.  
However, in situations where you want fair consumption or ordering, or in situations where the producers and consumers are distinct processes, the bag is not optimal. The BlockingCollection is a great wrapper around any of the concurrent queue, stack, and bag collections that allows you to add producer and consumer semantics easily, including waiting when the bin is full or empty. That's the end of my dive into the concurrent collections. I'd also strongly recommend, once again, you read this excellent Microsoft white paper that goes into much greater detail on the efficiencies you can gain using these collections judiciously (here). Technorati Tags: C#, .NET, Concurrent Collections, Little Wonders
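The post lists GetConsumingEnumerable() among the notable BlockingCollection members, but the examples above consume with TryTake() loops. As a supplement (not from the original article), here is a minimal sketch of a consumer written against GetConsumingEnumerable(); the BlockingCollection members used are real .NET 4.0 APIs, while the program structure and names are purely illustrative:

    // Minimal sketch: one producer, one consumer over a bounded BlockingCollection<int>.
    // GetConsumingEnumerable() blocks until items arrive and ends on its own once
    // CompleteAdding() has been called and the bin has been drained.
    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    class ConsumingEnumerableSketch
    {
        static void Main()
        {
            var bin = new BlockingCollection<int>(boundedCapacity: 5);

            var producer = Task.Factory.StartNew(() =>
            {
                for (int i = 0; i < 20; i++)
                {
                    bin.Add(i);            // blocks if the bin is full
                }
                bin.CompleteAdding();      // tell consumers no more items are coming
            });

            var consumer = Task.Factory.StartNew(() =>
            {
                // No need to poll IsCompleted or TryTake(): the enumeration simply
                // finishes after CompleteAdding() once the bin is empty.
                foreach (var item in bin.GetConsumingEnumerable())
                {
                    Console.WriteLine("Consuming item {0}.", item);
                }
            });

            Task.WaitAll(producer, consumer);
        }
    }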

    Read the article

  • Flex 3 - How to read data dynamically from XML

    - by Brian Roisentul
    Hi Everyone, I'm new at Flex and I wanted to know how to read an xml file to pull its data into a chart, using Flex Builder 3. Even though I've read and done some tutorials, I haven't seen any of them loading the data dynamically. For example, I'd like to have an xml like the following: <data> <result month="April-09"> <visitor> <value>8</value> <fullname>Brian Roisentul</fullname> <coid>C01111</coid> </visitor> <visitor> <value>15</value> <fullname>Visitor 2</fullname> <coid>C02222</coid> </visitor> <visitor> <value>20</value> <fullname>Visitor 3</fullname> <coid>C03333</coid> </visitor> </result> <result month="July-09"> <visitor> <value>15</value> <fullname>Brian Roisentul</fullname> <coid>C01111</coid> </visitor> <visitor> <value>6</value> <fullname>Visitor 2</fullname> <coid>C02222</coid> </visitor> <visitor> <value>12</value> <fullname>Visitor 3</fullname> <coid>C03333</coid> </visitor> </result> <result month="October-09"> <visitor> <value>10</value> <fullname>Brian Roisentul</fullname> <coid>C01111</coid> </visitor> <visitor> <value>14</value> <fullname>Visitor 2</fullname> <coid>C02222</coid> </visitor> <visitor> <value>6</value> <fullname>Visitor 3</fullname> <coid>C03333</coid> </visitor> </result> </data> and then loop through every "visitor" xml item and draw their values, and display their "fullname" when the mouse is over their line. If you need some extra info, please let me just know. Thanks, Brian

    Read the article

  • ASP.NET Grid AjaxPanel Download Issue

    - by Mahesh
    Hi, I have a Telerik grid which performs operations like searching, sorting, filtering, etc. To make customers happy, we put this control in an ajax panel for a seamless experience. Now, we have added new functionality to the grid where the customer can download the entire row information as a csv file. As the response is a file, the ajax panel tries to parse the output and throws the following exception: Microsoft JScript runtime error: Sys.WebForms.PageRequestManagerParserErrorException: The message received from the server could not be parsed. Common causes for this error are when the response is modified by calls to Response.Write(), response filters, HttpModules, or server trace is enabled. Details: Error parsing near '?'. Could you please help me in having both functionalities (Ajax and Download) in place without any error? Thanks, Mahesh
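    A common workaround for this class of error (offered as a hedged sketch, not a confirmed fix for this exact Telerik setup) is to exclude the export button from partial-page rendering so the file response travels over a full postback. With the standard ASP.NET ScriptManager that looks roughly like the code below; the control name btnExportCsv is hypothetical, and a RadAjaxManager-based page may need its own exclusion settings instead:

        // Code-behind sketch: force the CSV export button to do a full postback
        // so the ajax panel never tries to parse the file response.
        protected void Page_Load(object sender, EventArgs e)
        {
            // btnExportCsv stands in for whatever button triggers the download.
            ScriptManager scriptManager = ScriptManager.GetCurrent(this.Page);
            if (scriptManager != null)
            {
                scriptManager.RegisterPostBackControl(this.btnExportCsv);
            }
        }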

    Read the article

  • jni4net - how to set the absolute path to jni4net.j-0.7.1.0.jar

    - by w1z
    Guys, help me... I use jni4net in my WCF service. In the ctor of the service I try to create a BridgeSetup object. var bridgeSetup = new BridgeSetup(false); bridgeSetup.AddAllJarsClassPath("."); Bridge.CreateJVM(bridgeSetup); As I understand it, at this moment jni4net tries to generate jni4net.j-0.7.1.0.dll from jni4net.j-0.7.1.0.jar. It tries to find jni4net.j-0.7.1.0.jar near jni4net.n-0.7.1.0.dll and can't, so I get the following error... c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\fileprocessingservice\76f0fa69\5db44426\assembly\dl3\4fa263c6\f424b7fa_c8ccca01\jni4net.j-0.7.1.0.DLL Does anybody know how to solve the problem? Thanks..

    Read the article

  • Visual Studio 2010 RC + ASP.NET MVC 2

    - by qntmfred
    Now that ASP.NET MVC 2 is out, I tried to install it on my development machine, which already has Visual Studio 2010 RC installed and I got this error message during installation Component Microsoft ASP.NET MVC 2 has failed to install with the following error message: "A different version of ASP.NET MVC 2 is already installed on your system. Please uninstall this version before proceeding with this install." Sure enough, the MVC 2 release notes state: Note Because Visual Studio 2008 and Visual Studio 2010 RC share a component of ASP.NET MVC 2, installing the ASP.NET MVC 2 RTM release on a computer where Visual Studio 2010 RC is also installed is not supported. So my question is, though officially unsupported, if I uninstall VS 2010 RC, install MVC 2 then re-install VS 2010 RC, might this work? And would I then be able to target MVC 2 in VS2010?

    Read the article

  • nunit with testdriven.net problem in .net 4

    - by Nima
    Greetings, we are currently migrating our project to .NET 4. We also use NUnit 2.5.5 with TestDriven.Net 3. I get this error when I run the tests: Test 'TestCase1' failed: System.IO.FileNotFoundException : Could not load file or assembly 'Microsoft.VisualStudio.QualityTools.UnitTestFramework, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified. at NetSpec.TestingExtensions.ShouldEqual(Object o, Object expected) at NetSpec.TestingExtensions.ShouldBe(Object o, Object expected) Personnel\CivilServant\SubCategorySpec.cs(37,0): at Azarakhsh.Domain.Test.Personnel.CivilServant.when_validate_a_subCategoey.should_have_code() 0 passed, 1 failed, 0 skipped, took 9.35 seconds (NUnit 2.5.5).

    Read the article

  • Optimizing Solaris 11 SHA-1 on Intel Processors

    - by danx
    SHA-1 is a "hash" or "digest" operation that produces a 160 bit (20 byte) checksum value on arbitrary data, such as a file. It is intended to uniquely identify text and to verify it hasn't been modified. Max Locktyukhin and others at Intel have improved the performance of the SHA-1 digest algorithm using multiple techniques. This code has been incorporated into Solaris 11 and is available in the Solaris Crypto Framework via the libmd(3LIB), the industry-standard libpkcs11(3LIB) library, and Solaris kernel module sha1. The optimized code is used automatically on systems with a x86 CPU supporting SSSE3 (Intel Supplemental SSSE3). Intel microprocessor architectures that support SSSE3 include Nehalem, Westmere, Sandy Bridge microprocessor families. Further optimizations are available for microprocessors that support AVX (such as Sandy Bridge). Although SHA-1 is considered obsolete because of weaknesses found in the SHA-1 algorithm—NIST recommends using at least SHA-256, SHA-1 is still widely used and will be with us for awhile more. Collisions (the same SHA-1 result for two different inputs) can be found with moderate effort. SHA-1 is used heavily though in SSL/TLS, for example. And SHA-1 is stronger than the older MD5 digest algorithm, another digest option defined in SSL/TLS. Optimizations Review SHA-1 operates by reading an arbitrary amount of data. The data is read in 512 bit (64 byte) blocks (the last block is padded in a specific way to ensure it's a full 64 bytes). Each 64 byte block has 80 "rounds" of calculations (consisting of a mixture of "ROTATE-LEFT", "AND", and "XOR") applied to the block. Each round produces a 32-bit intermediate result, called W[i]. Here's what each round operates: The first 16 rounds, rounds 0 to 15, read the 512 bit block 32 bits at-a-time. These 32 bits is used as input to the round. The remaining rounds, rounds 16 to 79, use the results from the previous rounds as input. Specifically for round i it XORs the results of rounds i-3, i-8, i-14, and i-16 and rotates the result left 1 bit. The remaining calculations for the round is a series of AND, XOR, and ROTATE-LEFT operators on the 32-bit input and some constants. The 32-bit result is saved as W[i] for round i. The 32-bit result of the final round, W[79], is the SHA-1 checksum. Optimization: Vectorization The first 16 rounds can be vectorized (computed in parallel) because they don't depend on the output of a previous round. As for the remaining rounds, because of step 2 above, computing round i depends on the results of round i-3, W[i-3], one can vectorize 3 rounds at-a-time. Max Locktyukhin found through simple factoring, explained in detail in his article referenced below, that the dependencies of round i on the results of rounds i-3, i-8, i-14, and i-16 can be replaced instead with dependencies on the results of rounds i-6, i-16, i-28, and i-32. That is, instead of initializing intermediate result W[i] with: W[i] = (W[i-3] XOR W[i-8] XOR W[i-14] XOR W[i-16]) ROTATE-LEFT 1 Initialize W[i] as follows: W[i] = (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) ROTATE-LEFT 2 That means that 6 rounds could be vectorized at once, with no additional calculations, instead of just 3! This optimization is independent of Intel or any other microprocessor architecture, although the microprocessor has to support vectorization to use it, and exploits one of the weaknesses of SHA-1. Optimization: SSSE3 Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide. 
The 4 32-bit inputs to a round, W[i-6], W[i-16], W[i-28], W[i-32], all fit in one %xmm register. The following code snippet, from Max Locktyukhin's article, converted to ATT assembly syntax, computes 4 rounds in parallel with just a dozen or so SSSE3 instructions: movdqa W_minus_04, W_TMP pxor W_minus_28, W // W equals W[i-32:i-29] before XOR // W = W[i-32:i-29] ^ W[i-28:i-25] palignr $8, W_minus_08, W_TMP // W_TMP = W[i-6:i-3], combined from // W[i-4:i-1] and W[i-8:i-5] vectors pxor W_minus_16, W // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13] pxor W_TMP, W // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]) movdqa W, W_TMP // 4 dwords in W are rotated left by 2 psrld $30, W // rotate left by 2 W = (W >> 30) | (W << 2) pslld $2, W_TMP por W, W_TMP movdqa W_TMP, W // four new W values W[i:i+3] are now calculated paddd (K_XMM), W_TMP // adding 4 current round's values of K movdqa W_TMP, (WK(i)) // storing for downstream GPR instructions to read A window of the 32 previous results, W[i-1] to W[i-32] is saved in memory on the stack. This is best illustrated with a chart. Without vectorization, computing the rounds is like this (each "R" represents 1 round of SHA-1 computation): RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR With vectorization, 4 rounds can be computed in parallel: RRRRRRRRRRRRRRRRRRRR RRRRRRRRRRRRRRRRRRRR RRRRRRRRRRRRRRRRRRRR RRRRRRRRRRRRRRRRRRRR Optimization: AVX The new "Sandy Bridge" microprocessor architecture, which supports AVX, allows another interesting optimization. SSSE3 instructions have two operands, a input and an output. AVX allows three operands, two inputs and an output. In many cases two SSSE3 instructions can be combined into one AVX instruction. The difference is best illustrated with an example. Consider these two instructions from the snippet above: pxor W_minus_16, W // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13] pxor W_TMP, W // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]) With AVX they can be combined in one instruction: vpxor W_minus_16, W, W_TMP // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]) This optimization is also in Solaris, although Sandy Bridge-based systems aren't widely available yet. As an exercise for the reader, AVX also has 256-bit media registers, %ymm0 - %ymm15 (a superset of 128-bit %xmm0 - %xmm15). Can %ymm registers be used to parallelize the code even more? Optimization: Solaris-specific In addition to using the Intel code described above, I performed other minor optimizations to the Solaris SHA-1 code: Increased the digest(1) and mac(1) command's buffer size from 4K to 64K, as previously done for decrypt(1) and encrypt(1). This size is well suited for ZFS file systems, but helps for other file systems as well. Optimized encode functions, which byte swap the input and output data, to copy/byte-swap 4 or 8 bytes at-a-time instead of 1 byte-at-a-time. Enhanced the Solaris mdb(1) and kmdb(1) debuggers to display all 16 %xmm and %ymm registers (mdb "$x" command). Previously they only displayed the first 8 that are available in 32-bit mode. Can't optimize if you can't debug :-). Changed the SHA-1 code to allow processing in "chunks" greater than 2 Gigabytes (64-bits) Performance I measured performance on a Sun Ultra 27 (which has a Nehalem-class Xeon 5500 Intel W3570 microprocessor @3.2GHz). Turbo mode is disabled for consistent performance measurement. 
Graphs are better than words and numbers, so here they are: The first graph shows the Solaris digest(1) command before and after the optimizations discussed here, contained in libmd(3LIB). I ran the digest command on a half GByte file in swapfs (/tmp) and execution time decreased from 1.35 seconds to 0.98 seconds. The second graph shows the results of an internal microbenchmark that uses the Solaris libpkcs11(3LIB) library. The operations are on a 128 byte buffer with 10,000 iterations. The results show operations increased from 320,000 to 416,000 operations per second. Finally, the third graph shows the results of an internal kernel microbenchmark that uses the Solaris /kernel/crypto/amd64/sha1 module. The operations are on a 64Kbyte buffer with 100 iterations. The results show that for 1 kernel thread, operations increased from 410 to 600 MBytes/second. For 8 kernel threads, operations increased from 1540 to 1940 MBytes/second. Availability This code is in Solaris 11 FCS. It is available in the 64-bit libmd(3LIB) library for 64-bit programs and is in the Solaris kernel. You must be running hardware that supports Intel's SSSE3 instructions (for example, Intel Nehalem, Westmere, or Sandy Bridge microprocessor architectures). The easiest way to determine if SSSE3 is available is with the isainfo(1) command. For example, nehalem $ isainfo -v 64-bit amd64 applications sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu 32-bit i386 applications sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu If the output also shows "avx", Solaris executes the even-more-optimized 3-operand AVX instructions for SHA-1 mentioned above: sandybridge $ isainfo -v 64-bit amd64 applications avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu 32-bit i386 applications avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu No special configuration or setup is needed to take advantage of this code. The Solaris libraries and kernel automatically determine if they are running on SSSE3- or AVX-capable machines and execute the correctly-tuned code for that microprocessor. Summary The Solaris 11 Crypto Framework, via the sha1 kernel module and the libmd(3LIB) and libpkcs11(3LIB) libraries, incorporates a useful SHA-1 optimization from Intel for SSSE3-capable microprocessors. As with other Solaris optimizations, it comes automatically "under the hood" with the current Solaris release. References "Improving the Performance of the Secure Hash Algorithm (SHA-1)" by Max Locktyukhin (Intel, March 2010). The source for these SHA-1 optimizations used in Solaris "SHA-1", Wikipedia Good overview of SHA-1 FIPS 180-1 SHA-1 standard (FIPS, 1995) NIST Comments on Cryptanalytic Attacks on SHA-1 (2005, revised 2006)
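The factoring described in the vectorization section above can be checked independently of any SIMD code. The following small C# sketch (mine, not from the article) computes the standard SHA-1 message schedule for one block and confirms that, for rounds 32 through 79, W[i] equals (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) rotated left by 2; the random block words are arbitrary test data:

    // Sanity check (not from the article): for one 64-byte block, the standard
    // schedule  W[i] = ROTL1(W[i-3] ^ W[i-8] ^ W[i-14] ^ W[i-16])
    // matches the factored form  W[i] = ROTL2(W[i-6] ^ W[i-16] ^ W[i-28] ^ W[i-32])
    // for rounds 32..79, which is what lets more rounds be vectorized at once.
    using System;

    class Sha1ScheduleCheck
    {
        static uint Rotl(uint x, int n) { return (x << n) | (x >> (32 - n)); }

        static void Main()
        {
            var rng = new Random(42);
            var w = new uint[80];
            for (int i = 0; i < 16; i++)
            {
                // Arbitrary 32-bit words standing in for one 512-bit message block.
                w[i] = (uint)rng.Next() ^ ((uint)rng.Next() << 16);
            }
            for (int i = 16; i < 80; i++)
            {
                w[i] = Rotl(w[i - 3] ^ w[i - 8] ^ w[i - 14] ^ w[i - 16], 1);
            }

            bool allMatch = true;
            for (int i = 32; i < 80; i++)
            {
                uint factored = Rotl(w[i - 6] ^ w[i - 16] ^ w[i - 28] ^ w[i - 32], 2);
                if (factored != w[i]) { allMatch = false; }
            }
            Console.WriteLine(allMatch ? "Factored recurrence matches." : "Mismatch!");
        }
    }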

    Read the article

  • MathML or OMML to PNG w/ .NET?

    - by charliedigital
    Are there any libraries which take MathML (or, even more preferably, OMML) and outputs a .PNG file? I am putting together an export process for .docx files and, as a part of this process, I'd like to extract equations and render them as .PNG files. Word 2007 does this natively when you save a document for the web, but so far, I have not been able to find a way to do this programmatically (if anyone has an answer for that, it would be even better). So the next best thing is to take the OMML and use the Microsoft provided XSL stylesheets and transform them to MathML. Unfortunately, I haven't been able to find any (working) rendering libraries for either MathML or OMML. If there aren't any pure .NET libraries for this, I'll settle for just about anything that I can call from a commandline to output a .PNG from either MathML or OMML.

    Read the article

  • How to avoid OLEDB converting "."s into "#"s in column names?

    - by Andrew Miner
    I'm using the ACE OLEDB driver to read from an Excel 2007 spreadsheet, and I'm finding that any '.' character in column names gets converted to a '#' character. For example, if I have the following in a spreadsheet:

    Name     Amt. Due   Due Date
    Andrew   12.50      4/1/2010
    Brian    20.00      4/12/2010
    Charlie  1000.00    6/30/2010

    the name of the second column would be reported as "Amt# Due" when read with the following code:

        OleDbConnection connection = new OleDbConnection(
            "Provider=Microsoft.ACE.OLEDB.12.0; Data Source={0}; " +
            "Extended Properties=\"Excel 12.0 Xml;HDR=YES;FMT=Delimited;IMEX=1\"");
        OleDbCommand command = new OleDbCommand("SELECT * FROM MyTable", connection);
        OleDbDataReader dataReader = command.ExecuteReader();
        System.Console.WriteLine(dataReader.GetName(1));

    I've read through all the documentation I can find and I haven't found anything which even mentions that this will happen. Has anyone run into this before? Is there a way to fix this behavior?
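    I'm not aware of a driver switch that stops ACE/Jet from substituting characters like '.' in header names, so the usual workarounds are to read columns by ordinal or to map the reported names back to the real headers yourself. A hedged sketch of the mapping approach (file path, sheet name, and map contents are placeholders, not taken from the question):

        // Sketch: keep a map from the names ACE reports (with '#') back to the
        // spreadsheet's real headers (with '.'), and apply it when reading.
        using System;
        using System.Collections.Generic;
        using System.Data.OleDb;

        class ColumnNameFixSketch
        {
            static void Main()
            {
                // Path and sheet name are placeholders.
                var connection = new OleDbConnection(
                    "Provider=Microsoft.ACE.OLEDB.12.0; Data Source=C:\\Data\\Book1.xlsx; " +
                    "Extended Properties=\"Excel 12.0 Xml;HDR=YES;FMT=Delimited;IMEX=1\"");

                var headerMap = new Dictionary<string, string>
                {
                    { "Amt# Due", "Amt. Due" }   // mangled name back to the real header
                };

                using (connection)
                using (var command = new OleDbCommand("SELECT * FROM [Sheet1$]", connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        for (int i = 0; i < reader.FieldCount; i++)
                        {
                            string reported = reader.GetName(i);
                            string actual;
                            Console.WriteLine(
                                headerMap.TryGetValue(reported, out actual) ? actual : reported);
                        }
                    }
                }
            }
        }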

    Read the article

  • MIDL2003 Error in VC6 project

    - by graham.reeds
    While bug fixing I tracked a problem to an old vc6 compiled dll that hasn't been touched in nearly 3 years. After checking out the most recent source I am getting the following error when trying to compile. Processing C:\PROGRAM FILES\MICROSOFT SDK\INCLUDE\msxml.idl msxml.idl .\ocidl.idl(1524) : error MIDL2003 : redefinition : IErrorLog .\ocidl.idl(1541) : error MIDL2003 : redefinition : IPropertyBag Google gives lots of suggestions regarding Visual Studio 2002 - 2003 errors but I can't find anything that relates to Visual Studio 6 or can be applied to my problem. I did find this page but following its advice didn't fix my problem. Does anyone have any suggestions on how to fix this? (I am presuming that it did work once.) Other items of interest: I have the February 2003 Platform SDK installed, and looking at the Add/Remove Programs page I have Microsoft XML Parser and SDK, MSXML 4.0 SP2 and MSXML 6.0 Parser too.

    Read the article

  • Flowcharting tool/add-in for Visual Studio

    - by Rajah
    A 2-part question: Are flow-charts, as a tool to analyze source code, no longer considered useful? The reason I ask is that if you Google for source-code-to-flowchart tools for Visual Studio, there are no relevant results. Also, Microsoft does not seem to have a tool for this purpose. Are people just not using flowcharts that much any more? (I find them to be a great way to document complex logic in my code.) Are there any VS tools that can take .Net (C#, VB.Net) code and convert it to a flowchart? (The only tool that I found is Aivosto's Visustin, which does not provide VS integration.) Any other tools that you can recommend for this purpose?

    Read the article

  • Most popular classroom, bootcamp, or online training for ASP.NET 3.5

    - by Curtis White
    What are the most popular and highest quality training sources for ASP.NET 3.5. I am interested in both "boot camp" class room training and online self-paced training. I am interested in both training that can be applied to certification but also non certification based training in the following areas: ASP.NET 3.5, AJAX, and web security. The training should be geared to real world projects and not memorization. I am most interested to hear from Microsoft MVP's on the matter and those who personally have attended or scheduled such training.

    Read the article

  • A Quick Primer on SharePoint Customization

    - by PeterBrunone
    This one goes out to all the people who have been asked to change the way a SharePoint site looks. Management wants to know how long it will take, and you can whip that out by tomorrow, right? If you don't have time to prepare a treatise on what's involved, or if you just want to lend some extra weight to your case by quoting a blogger who was an MVP for seven years, then dive right in; this post is for you. There are three main components of SharePoint visual customization: 1) Theme – A theme encompasses all the standardized text formatting and coloring (borders, fonts, etc), including the background images of various sections. All told, there could be around 50 images involved, and a few hundred CSS (style) classes. Installing a theme once it's been created is no great feat. Given the number of pieces, of course, creating a new theme could take anywhere from a day to a week… once decisions have been made about the desired appearance. 2) Master Page – A master page provides the framework for page layout. This includes all the top and side menus, where content shows up, et cetera. Master pages have been around for a long time in ASP.NET (Microsoft's web development platform), and they do require some .NET programming knowledge. Beyond that, in SharePoint, there are a few dozen controls which the system expects to find on a given page. They're not all used at once, but if they're not there when they're needed, chaos ensues. Estimating a custom master page is difficult, as it depends on the level of customization. I've been on projects where I was brought in simply to fix some problems and add a few finishing touches, and it took 2-3 weeks. Master page customization requires a large amount of testing time to make sure that the HTML, JavaScript, CSS, and control placement all work well together. 3) Individual page layout – Each page (ideally) uses a master page for its template, but within the content areas defined by the master page, web parts can be added, removed, and configured from within the browser. The wireframe that Brent provided could most likely be completed simply by manipulating the content on the home page in this fashion, and we had allowed about a day of effort for the task. If needed, further functionality can be provided by an experienced ASP.NET developer; custom forms are a common example. This of course is a bit more in-depth than simple content manipulation and could take several days per page (or more; there's really no way to quantify this without a set of requirements). That's basically it. To recap: Fonts and coloring are done with themes, and can take anywhere from a day to a week to create (not counting creative time); required technical skills include HTML, CSS, and image manipulation. Templated layout is done with master pages, and generally requires a developer familiar with both ASP.NET and SharePoint in particular; it can have far-reaching consequences depending on the complexity of the changes, and could add weeks or months to a project.
Page layout can be as simple as content manipulation in the web browser, taking a few hours per page, or it can involve more detail, like custom forms, and can require programming expertise and significantly more development time.

    Read the article

  • Switching VS2010 to use Windows 7.1 SDK

    - by freefallr
    I've used VS2008 on my development machine for some years now, with windows SDK v7.1. I've installed VS2010, and it's using the Windows SDK v7.0a, but I need it to use the Windows 7.1 SDK (which I had installed prior to installing VS2010). When I run the Windows SDK 7.1 configuration tool, to switch the Windows SDK in use, the tool updates for VS2008, but not for VS2010. The message it reports is: "The Windows SDK Configuration Tool has successfully set Windows SDK version v7.1 as the current version for Visual Studio 2008" The configuration tool is installed with the Windows 7.1 SDK and is found here: "C:\Program Files\Microsoft SDKs\Windows\v7.1\Setup\WindowsSdkVer.exe" VS2010 continues to use WSDK 7.0a, which extremely frustrating, as I need to do DirectShow development (so I need to build the baseclasses, which aren't released with 7.0a release of WSDK). Would I be correct in assuming that it's not updating VS2010 settings because VS2010 wasn't installed at the time that I installed Windows 7.1 SDK? Can I fix this manually, or should I uninstall Windows 7.1 SDK, then reinstall it? Any other suggestions / workarounds for this?

    Read the article

  • How to Create Site using Sharepoint web services?

    - by Pari
    Hi All, I am trying to create a site on SharePoint programmatically using the SharePoint Web Services (C#). I tried the Admin.asmx service (CreateSite method), but it shows this error: "An unhandled exception of type 'System.InvalidOperationException' occurred in System.Web.Services.dll". I tried with all possible parameters. Currently I am referring to the links below: http://www.oliebol.org/blog/Lists/Posts/Post.aspx?ID=6 http://msdn.microsoft.com/en-us/library/administration.admin.createsite.aspx My Code: Admin admService = new Admin(); admService.Credentials = new NetworkCredential(username, password, domain); admService.Url = "http://mychserver/_vti_adm/admin.asmx"; admService.PreAuthenticate = true; try { String SitePath = "http://myserver/SiteDirectory/SharepointSampleSite"; admService.CreateSite(SitePath, "First Site", "Sample Site", 1033, "STS#0", "Domain\\username", username, userid, "", ""); } catch (System.Web.Services.Protocols.SoapException ex) { MessageBox.Show("Message:\n" + ex.Message + "\nDetail:\n" + ex.Detail.InnerText + "\nStackTrace:\n" + ex.StackTrace); } Thanx,

    Read the article

  • Detect Browser Refresh vs. Form Submit in ASP.Net MVC 2

    - by Eric J.
    I have an ASP.Net questionnaire application that resubmits data to the same page, showing a different question each time. There are BACK and NEXT buttons to navigate between questions. I would like to detect when the form is submitted due to a browser refresh vs. one of the buttons being pressed. I came across a WebForms approach but don't know how to apply those principles in an MVC 2 application since page events aren't available (as far as I know... I'm pretty new to Microsoft's MVC model). How would one apply that principle to MVC 2? Is there a better way to detect refresh?
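    One MVC-friendly way to tell a refresh (a re-POST of an already-handled form) from a genuine BACK/NEXT submit is to stamp each rendered form with a one-time token and compare it against the last token processed. The sketch below shows the idea only; the controller, action, field, and session key names are all made up for illustration, not taken from the question:

        // Sketch: a hidden one-time token distinguishes a browser refresh (which
        // re-posts the old token) from a fresh submit of a newly rendered form.
        using System;
        using System.Web.Mvc;

        public class QuestionnaireController : Controller
        {
            private const string LastTokenKey = "LastFormToken";   // hypothetical session key

            [HttpGet]
            public ActionResult Question(int id)
            {
                // Render this value into a hidden "formToken" field in the view.
                ViewData["FormToken"] = Guid.NewGuid().ToString("N");
                return View();
            }

            [HttpPost]
            public ActionResult Question(int id, string formToken, string direction)
            {
                var lastToken = Session[LastTokenKey] as string;
                if (lastToken != null && lastToken == formToken)
                {
                    // Same token seen again: the browser re-submitted (refresh).
                    return RedirectToAction("Question", new { id });
                }

                Session[LastTokenKey] = formToken;                    // remember what we just handled
                int nextId = direction == "BACK" ? id - 1 : id + 1;   // BACK/NEXT navigation
                return RedirectToAction("Question", new { id = nextId });
            }
        }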

    Read the article

  • installshield: Windir returns c:\documents & settings\fcuser\windows instead of c:\windows

    - by Sakhawat Ali
    We have a setup developed in InstallShield 6.3. It is a self-extracting single setup. It works fine on most Windows versions, but on Windows Server 2003 64-bit, in execution mode, when doing RD it returns the user's windows directory for WINDIR, i.e. c:\documents & settings\fcuser\windows instead of C:\Windows. According to http://support.microsoft.com/?kbid=186499 it should work fine when I change the compatibility bit of Setup, but it didn't. I tried changing the compatibility bit of these keys too (INSTRUN, SETUP and SETUP1) but it didn't work either. However, when I extract the setup from the self-extracting package and run it directly, it works fine.

    Read the article

  • Can you use the Phoenix compiler as a more powerful NGEN?

    - by TraumaPony
    In case you don't know of Phoenix, it's a compiler framework from Microsoft that's apparently going to be the foundation of all their new compilers. It can read in code from CIL, x86, x64, and IA64; and emit code in x86, x64, IA64, or CIL. Can I use it to transform a pure .Net app into a pure native app? By which I mean, it will not have to load any .Net .dll (not even mscoree), and will have the same semantics? This is excluding Reflection, of course.

    Read the article

  • Is DataGrid a necessity in WPF?

    - by Jobi Joy
    I have seen a lot of discussions going on and people asking about a DataGrid for WPF, and complaining about Microsoft for not having one in their WPF framework to date. We know that WPF is a great UI technology and has the concept of ItemsControl, DataTemplate, etc. to make great UX. WPF even has a more closely matching control, ListView, which can be easily templated to give a better UX than a traditional DataGrid-like display. And I would say a ready-made DataGrid control will kill or hide a lot of creativity and it surely will decrease the innovation in the user experience field. So what is your opinion about the need for a DataGrid in WPF as a framework component? If you feel it is necessary, is it just because the world has been so used to the DataGrid way of displaying data for many years? Some other threads having the discussion about the DataGrid are here and here. Link to WPF Toolkit - Latest WPF DataGrid

    Read the article

  • Why does ASP.Net lock when I update code with TortoiseSVN

    - by Malartre
    Hi, when I update Adobe Flash/Flex code that is not related to ASP.Net with TortoiseSVN (latest) on a Windows Server 2008, the complete website locks and stop responding. Is it ASP.Net recompiling my code, is it IIS 7 or is it Tortoise locking the file system? How can I prevent or minimize this if I need to do an update when 1000 users are using the ASP.Net website? UPDATE: Thanks to Aito and Bryan, I learned more about AppDomain. I found these two links where I discover that folder creation/deletion recycle the AppDomain in ASP.Net 2. --If TortoiseSVN creates folders in it's hidden .svn folders hierarchy, I guess it will lock the app! ASP.NET v2.0 - AppDomain recycles, more common than before http://weblogs.asp.net/owscott/archive/2006/02/21/ASP.NET-v2.0-2D00-AppDomain-recycles_2C00_-more-common-than-before.aspx FIX: ASP.NET 2.0-connected applications on a Web site may appear to stop responding http://support.microsoft.com/kb/911272 I'm testing this. Carl

    Read the article

  • Migrating test cases & defects from Quality Center to TFS 2008/2010

    - by stackoverflowuser
    Tool that can be used to migrate (or even better..synchronize) test cases and bugs between: TFS 2008 and Quality Center 9.2 (or later) TFS 2010 and Quality Center 9.2 (or later) I am aware of the following tools: Test Case Migrator (Excel/MHT) Tool TFS Bug Item Synchronizer 2.2 for Quality Center Also shai raiten mentions on his blog about QC 2 Team System 2010 migration tool that he has been working on and its done. But could not find any link for downloading the tool. http://blogs.microsoft.co.il/blogs/shair/archive/2009/12/31/quality-center-migration-to-team-system-2010-done.aspx Before jumping on coding with TFS SDK and QC components to come up with my own tool I need some inputs from the stackoverflow community.

    Read the article

  • Plan Caching and Query Memory Part II (Hash Match) – When not to use stored procedure - Most common performance mistake SQL Server developers make.

    - by sqlworkshops
    SQL Server estimates Memory requirement at compile time, when stored procedure or other plan caching mechanisms like sp_executesql or prepared statement are used, the memory requirement is estimated based on first set of execution parameters. This is a common reason for spill over tempdb and hence poor performance. Common memory allocating queries are that perform Sort and do Hash Match operations like Hash Join or Hash Aggregation or Hash Union. This article covers Hash Match operations with examples. It is recommended to read Plan Caching and Query Memory Part I before this article which covers an introduction and Query memory for Sort. In most cases it is cheaper to pay for the compilation cost of dynamic queries than huge cost for spill over tempdb, unless memory requirement for a query does not change significantly based on predicates.   This article covers underestimation / overestimation of memory for Hash Match operation. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   To read additional articles I wrote click here.   The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts  Let’s create a Customer’s State table that has 99% of customers in NY and the rest 1% in WA.Customers table used in Part I of this article is also used here.To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'. --Example provided by www.sqlworkshops.com drop table CustomersState go create table CustomersState (CustomerID int primary key, Address char(200), State char(2)) go insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers update CustomersState set State = 'NY' where CustomerID % 100 != 1 update CustomersState set State = 'WA' where CustomerID % 100 = 1 go update statistics CustomersState with fullscan go   Let’s create a stored procedure that joins customers with CustomersState table with a predicate on State. --Example provided by www.sqlworkshops.com create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1) end go  Let’s execute the stored procedure first with parameter value ‘WA’ – which will select 1% of data. set statistics time on go --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' goThe stored procedure took 294 ms to complete.  The stored procedure was granted 6704 KB based on 8000 rows being estimated.  The estimated number of rows, 8000 is similar to actual number of rows 8000 and hence the memory estimation should be ok.  There was no Hash Warning in SQL Profiler. To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.   Now let’s execute the stored procedure with parameter value ‘NY’ – which will select 99% of data. 
-Example provided by www.sqlworkshops.com exec CustomersByState 'NY' go  The stored procedure took 2922 ms to complete.   The stored procedure was granted 6704 KB based on 8000 rows being estimated.    The estimated number of rows, 8000 is way different from the actual number of rows 792000 because the estimation is based on the first set of parameter value supplied to the stored procedure which is ‘WA’ in our case. This underestimation will lead to spill over tempdb, resulting in poor performance.   There was Hash Warning (Recursion) in SQL Profiler. To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.   Let’s recompile the stored procedure and then let’s first execute the stored procedure with parameter value ‘NY’.  In a production instance it is not advisable to use sp_recompile instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile, refer to our webcasts, www.sqlworkshops.com/webcasts for further details.   exec sp_recompile CustomersByState go --Example provided by www.sqlworkshops.com exec CustomersByState 'NY' go  Now the stored procedure took only 1046 ms instead of 2922 ms.   The stored procedure was granted 146752 KB of memory. The estimated number of rows, 792000 is similar to actual number of rows of 792000. Better performance of this stored procedure execution is due to better estimation of memory and avoiding spill over tempdb.   There was no Hash Warning in SQL Profiler.   Now let’s execute the stored procedure with parameter value ‘WA’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go  The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms.    This stored procedure was granted more memory (146752 KB) than necessary (6704 KB) based on parameter value ‘NY’ for estimation (792000 rows) instead of parameter value ‘WA’ for estimation (8000 rows). This is because the estimation is based on the first set of parameter value supplied to the stored procedure which is ‘NY’ in this case. This overestimation leads to poor performance of this Hash Match operation, it might also affect the performance of other concurrently executing queries requiring memory and hence overestimation is not recommended.     The estimated number of rows, 792000 is much more than the actual number of rows of 8000.  Intermediate Summary: This issue can be avoided by not caching the plan for memory allocating queries. Other possibility is to use recompile hint or optimize for hint to allocate memory for predefined data range.Let’s recreate the stored procedure with recompile hint. --Example provided by www.sqlworkshops.com drop proc CustomersByState go create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1, recompile) end go  Let’s execute the stored procedure initially with parameter value ‘WA’ and then with parameter value ‘NY’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go exec CustomersByState 'NY' go  The stored procedure took 297 ms and 1102 ms in line with previous optimal execution times.   The stored procedure with parameter value ‘WA’ has good estimation like before.   Estimated number of rows of 8000 is similar to actual number of rows of 8000.   
The stored procedure with parameter value ‘NY’ also has good estimation and memory grant like before because the stored procedure was recompiled with current set of parameter values.  Estimated number of rows of 792000 is similar to actual number of rows of 792000.    The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.   There was no Hash Warning in SQL Profiler.   Let’s recreate the stored procedure with optimize for hint of ‘NY’. --Example provided by www.sqlworkshops.com drop proc CustomersByState go create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1, optimize for (@State = 'NY')) end go  Let’s execute the stored procedure initially with parameter value ‘WA’ and then with parameter value ‘NY’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go exec CustomersByState 'NY' go  The stored procedure took 353 ms with parameter value ‘WA’, this is much slower than the optimal execution time of 294 ms we observed previously. This is because of overestimation of memory. The stored procedure with parameter value ‘NY’ has optimal execution time like before.   The stored procedure with parameter value ‘WA’ has overestimation of rows because of optimize for hint value of ‘NY’.   Unlike before, more memory was estimated to this stored procedure based on optimize for hint value ‘NY’.    The stored procedure with parameter value ‘NY’ has good estimation because of optimize for hint value of ‘NY’. Estimated number of rows of 792000 is similar to actual number of rows of 792000.   Optimal amount memory was estimated to this stored procedure based on optimize for hint value ‘NY’.   There was no Hash Warning in SQL Profiler.   This article covers underestimation / overestimation of memory for Hash Match operation. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   Summary: Cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using recompile hint, but that will lead to compilation overhead. However, in most cases it might be ok to pay for compilation rather than spilling sort over tempdb which could be very expensive compared to compilation cost. The other possibility is to use optimize for hint, but in case one sorts more data than hinted by optimize for hint, this will still lead to spill. On the other side there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In case of Hash Match operations, this overestimation of memory might lead to poor performance. 
When the values used in optimize for hint are archived from the database, the estimation will be wrong leading to worst performance, so one has to exercise caution before using optimize for hint, recompile hint is better in this case.   I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts), I recommend you to watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL Scripts.  Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet.These are hands-on workshops with a maximum of 12 participants and not lectures. For consulting engagements click here.   Disclaimer and copyright information:This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Server side speech to text

    - by teepusink
    Hi, I'm trying to install a speech recognition engine server side (non-commercial preferred, since it's just for experimentation). The idea is to allow a user to say something from a website, and whatever he/she says will then show up on the screen as text. I've read about many available packages, ranging from Microsoft Speech to Sphinx, Julius, etc.; I'm just not sure which one will perform best and be easiest to install. Also, do I typically need root permission on my hosting to do this kind of thing? I'm using regular shared hosting right now. Thank you, Tee

    Read the article

  • web.config inheritance problem

    - by abszero
    Hello everyone, I am having an issue when I try to prevent inheritance from a parent web.config down to a child, specifically related to the appSettings node. My parent web.config contains about a dozen appSettings values, while my child web.config contains none. I originally began wrapping a location node around only my system.net, system.web, etc. nodes; however, when I ran my child application I received the following error:

    The configuration section 'appSettings' cannot be read because it is missing a section declaration

    I really wasn't sure what this meant, and a Google search didn't yield any decent results, so I attempted to move the location node to also encompass the appSettings element in my parent web.config, but to no avail. My child web.config is pretty vanilla, so I am not sure what the problem is. Any help you can provide would be greatly appreciated! Thanks.

    The child config is completely vanilla except for the addition of an XML namespace and a 'clear' in the config sections, as follows:

    <configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
      <configSections>
        <clear />
        <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
          <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
            <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
            <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
              <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere" />
              <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
              <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
              <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
            </sectionGroup>
          </sectionGroup>
        </sectionGroup>
      </configSections>

    The error in question is thrown at the appSettings section. The parent web.config is far from vanilla, as it is part of an Umbraco install.
    Here is the corresponding section from the parent config:

    <configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
      <configSections>
        <section name="urlrewritingnet" restartOnExternalChanges="true" requirePermission="false" type="UrlRewritingNet.Configuration.UrlRewriteSection, UrlRewritingNet.UrlRewriter" />
        <!-- ASPNETAJAX -->
        <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
          <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
            <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="false" allowDefinition="MachineToApplication" />
            <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
              <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="false" allowDefinition="Everywhere" />
              <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="false" allowDefinition="MachineToApplication" />
              <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="false" allowDefinition="MachineToApplication" />
            </sectionGroup>
          </sectionGroup>
        </sectionGroup>
        <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
          <section name="umbraco.presentation.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
        </sectionGroup>
      </configSections>

    The next line after the configSections is:

    <location path="." inheritInChildApplications="false">
      AppSettings, CnxStrings, System.Web, Net, WebServer settings
    </location>
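    For illustration, here is a hypothetical sketch of what that location wrapper might look like once expanded; the key and connection string values are placeholders invented for this sketch, not values from the actual Umbraco config.

    <!-- Hypothetical expansion of the location element described above (placeholder values) -->
    <location path="." inheritInChildApplications="false">
      <appSettings>
        <add key="exampleKey" value="exampleValue" />
      </appSettings>
      <connectionStrings>
        <add name="exampleDb" connectionString="Data Source=.;Initial Catalog=exampleDb;Integrated Security=True" providerName="System.Data.SqlClient" />
      </connectionStrings>
      <!-- system.web, system.net and system.webServer settings would follow here -->
    </location>

    With inheritInChildApplications="false", everything inside the location element applies only to the parent application and is not inherited by child applications.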

    Read the article

  • What does sub error code 568 mean for Ldap Error 49 with Active Directory

    - by Dean Povey
    I am writing some Java code that authenticates to Active Directory using SASL GSSAPI. Mostly this code is working fine, but for one user I am getting the response:

    javax.naming.AuthenticationException: [LDAP: error code 49 - 80090304: LdapErr: DSID-0C0904D1, comment: AcceptSecurityContext error, data 568, v1772 ]

    I know that 49 means this is an authentication failure, and that the relevant sub code is 568, but I am only aware of the following meanings for that data value:

    525 - user not found
    52e - invalid credentials
    530 - not permitted to logon at this time
    532 - password expired
    533 - account disabled
    701 - account expired
    773 - user must reset password

    So far I am unable to find an authoritative source for these error codes from Microsoft (this list is pieced together from forum posts) and I can't find anything for that 568 code. Does anyone know what it means?
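    This does not answer what 568 means, but as a small illustration of where the sub code sits in the message, here is a hypothetical Java sketch that pulls the hex data code out of the AuthenticationException text for logging; the host, principal, and password are placeholders, and a simple bind is used instead of SASL GSSAPI for brevity.

    import java.util.Hashtable;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    import javax.naming.AuthenticationException;
    import javax.naming.Context;
    import javax.naming.directory.InitialDirContext;

    public class AdSubErrorCode {
        // Matches the "data 568" fragment in Active Directory's comment string
        // (assumes the AcceptSecurityContext message format shown above).
        private static final Pattern DATA_CODE = Pattern.compile("data\\s+([0-9a-fA-F]+)");

        public static void main(String[] args) {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://ad.example.com:389");   // placeholder host
            env.put(Context.SECURITY_AUTHENTICATION, "simple");           // GSSAPI omitted for brevity
            env.put(Context.SECURITY_PRINCIPAL, "user@example.com");      // placeholder principal
            env.put(Context.SECURITY_CREDENTIALS, "secret");              // placeholder password
            try {
                new InitialDirContext(env).close();
                System.out.println("Bind succeeded.");
            } catch (AuthenticationException e) {
                Matcher m = DATA_CODE.matcher(String.valueOf(e.getMessage()));
                if (m.find()) {
                    System.out.println("AD sub error code (hex): " + m.group(1));
                } else {
                    System.out.println("No data code found in: " + e.getMessage());
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }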

    Read the article
