Search Results

Search found 283 results on 12 pages for 'randy heaps'.


  • Find your HEAPS

    - by NeilHambly
    I will not go into a full discussion of why you would want to convert a heap into a clustered table... as there are plenty of resources out there that describe the relevant pros and cons. However, you may just want to understand which of your database tables are heaps and what percentage of each database they make up. So here is a useful script I have (it uses sp_MSforeachdb to iterate through all DBs on an instance) that easily...(read more)
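    The script itself is truncated in this excerpt. A minimal sketch of the same idea (not the author's script; it assumes only sys.tables, sys.indexes and the undocumented sp_MSforeachdb procedure) might look like this:

      -- Count heap tables per database; a table is a heap when its row in
      -- sys.indexes has index_id = 0 (index_id = 1 is the clustered index).
      CREATE TABLE #HeapCounts
      (
          DatabaseName sysname,
          HeapTables   int,
          TotalTables  int
      );

      EXEC sp_MSforeachdb N'
      USE [?];
      INSERT #HeapCounts (DatabaseName, HeapTables, TotalTables)
      SELECT  DB_NAME(),
              SUM(CASE WHEN i.index_id = 0 THEN 1 ELSE 0 END),
              COUNT(*)
      FROM    sys.tables AS t
      JOIN    sys.indexes AS i
              ON  i.object_id = t.object_id
              AND i.index_id IN (0, 1);   -- exactly one row per table: heap or clustered
      ';

      SELECT  DatabaseName,
              HeapTables,
              TotalTables,
              CAST(100.0 * HeapTables / NULLIF(TotalTables, 0) AS decimal(5,2)) AS HeapPercent
      FROM    #HeapCounts
      ORDER BY HeapPercent DESC;

      DROP TABLE #HeapCounts;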

    Read the article

  • Microsoft 2010 Product Tour

    - by dmccollough
    Randy Walker, co-founder of the Northwest Arkansas .Net User Group and a Microsoft MVP, has arranged for a couple of Microsoft experts, Sarika Calla (team lead on the IDE team) and Kevin Halverson, to give presentations on the newly released Visual Studio 2010.
      June 1 – Bentonville, Arkansas: Wal-Mart .Net User Group
      June 1 – Rogers, Arkansas: Northwest Arkansas SQL Server User Group (lunch meeting)
      June 1 – Springdale, Arkansas: Tyson devLoop
      June 1 – Fayetteville, Arkansas: Northwest Arkansas .Net User Group
      June 2 – Fort Smith, Arkansas: Datatronics
      June 2 – Little Rock, Arkansas: Little Rock .Net User Group
      June 3 – Fort Worth, Texas: Fort Worth .Net User Group
    Please contact Randy Walker with questions at randy[email protected].

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here. The Example We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below: CREATE TABLE #Custs ( CustomerID INTEGER NOT NULL, TerritoryID INTEGER NULL, CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #Prods ( ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #OrdHeader ( SalesOrderID INTEGER NOT NULL, OrderDate DATETIME NOT NULL, SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, CustomerID INTEGER NOT NULL, ); GO CREATE TABLE #OrdDetail ( SalesOrderID INTEGER NOT NULL, OrderQty SMALLINT NOT NULL, LineTotal NUMERIC(38,6) NOT NULL, ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, ); GO INSERT #Custs ( CustomerID, TerritoryID, CustomerType ) SELECT C.CustomerID, C.TerritoryID, C.CustomerType FROM AdventureWorks.Sales.Customer C WITH (TABLOCK); GO INSERT #Prods ( ProductMainID, ProductSubID, ProductSubSubID, Name ) SELECT P.ProductID, P.ProductID, P.ProductID, P.Name FROM AdventureWorks.Production.Product P WITH (TABLOCK); GO INSERT #OrdHeader ( SalesOrderID, OrderDate, SalesOrderNumber, CustomerID ) SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK); GO INSERT #OrdDetail ( SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID ) SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK); The query itself is a simple join of the four tables: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #OrdDetail D ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID JOIN #OrdHeader H ON D.SalesOrderID = H.SalesOrderID JOIN #Custs C ON H.CustomerID = C.CustomerID ORDER BY P.ProductMainID ASC OPTION (RECOMPILE, MAXDOP 1); Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  The estimated query plan produced for the test query looks like this (click to enlarge): The Problem The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.  
Every join in the plan shown above estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine. A Solution At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint results in this estimated plan: That produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on). Faster Nested Loops It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #OrdDetail D JOIN #OrdHeader H ON D.SalesOrderID >= H.SalesOrderID AND D.SalesOrderID <= H.SalesOrderID JOIN #Custs C ON H.CustomerID >= C.CustomerID AND H.CustomerID <= C.CustomerID JOIN #Prods P ON P.ProductMainID >= D.ProductMainID AND P.ProductMainID <= D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER); I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  The new estimated execution plan is: This new query runs in under 2 seconds. Why Is It Faster? The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds a index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.  
The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table. A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course). The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other. The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  It’s presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system. Improving the Hash Join I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  Let’s look at the hash join plan again: Two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory. This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool. The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan. Ok, so now we understand the problems, what can we do to fix it?  
We can address the hash spilling by forcing a different order for the joins: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #Custs C JOIN #OrdHeader H ON H.CustomerID = C.CustomerID JOIN #OrdDetail D ON D.SalesOrderID = H.SalesOrderID ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER); With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery. Final Thoughts SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics. I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index. Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material. Paul White Email: [email protected] Twitter: @PaulWhiteNZ
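    The Hash Warning and Sort Warnings events referred to above are the Profiler-era names. A rough Extended Events equivalent (a sketch, assuming SQL Server 2012 or later, where the sqlserver.hash_warning and sqlserver.sort_warning events are exposed) is:

      -- Capture hash and sort spill warnings while re-running the test queries.
      CREATE EVENT SESSION SpillWarnings ON SERVER
      ADD EVENT sqlserver.hash_warning
          (ACTION (sqlserver.session_id, sqlserver.sql_text)),
      ADD EVENT sqlserver.sort_warning
          (ACTION (sqlserver.session_id, sqlserver.sql_text))
      ADD TARGET package0.event_file (SET filename = N'SpillWarnings');
      GO

      ALTER EVENT SESSION SpillWarnings ON SERVER STATE = START;
      -- ...run the hash join query, then:
      -- ALTER EVENT SESSION SpillWarnings ON SERVER STATE = STOP;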

    Read the article

  • Efficient heaps in purely functional languages

    - by Kim
    As an exercise in Haskell, I'm trying to implement heapsort. The heap is usually implemented as an array in imperative languages, but this would be hugely inefficient in purely functional languages. So I've looked at binary heaps, but everything I've found so far describes them from an imperative viewpoint, and the algorithms presented are hard to translate to a functional setting. How can I efficiently implement a heap in a purely functional language such as Haskell? Edit: By efficient I mean it should still be O(n log n), but it doesn't have to beat a C program. Also, I'd like to use purely functional programming. What else would be the point of doing it in Haskell?

    Read the article

  • Displaying/scrolling through heaps of pictures in the browser

    - by user347256
    I want to be able to browse through heaps of images in the browser, fast. The easy way (just load 2000 images and scroll) slows down the scrolling a lot, presumably because there are too many images to be kept in memory. I'd love to hear thoughts on strategies for quickly scrolling through tens of thousands of images (as if you were on your desktop) in the browser. What would the expected bottlenecks be? How to address them? How to fake things so that the user experience is still good? Examples in the wild?

    Read the article

  • If I allocate memory in one thread in C++ can I de-allocate it in another

    - by Shane MacLaughlin
    If I allocate memory in one thread in C++ (with either new or malloc), can I de-allocate it in another, or must both occur in the same thread? Ideally, I'd like to avoid this in the first place, but I'm curious to know whether it is legal, illegal or implementation-dependent. Edit: The compilers I'm currently using include VS2003, VS2008 and Embedded C++ 4.0, targeting XP, Vista, Windows 7 and various flavours of Windows CE / PocketPC & Mobile. So basically all Microsoft, but across an array of esoteric platforms.

    Read the article

  • Know more about shared pool subpool

    - by Liu Maclean
    ????T.askmaclean.com???Shared Pool?SubPool?????,????????_kghdsidx_count ? subpool ??subpool????( ???duration)???: SQL> select * from v$version; BANNER ---------------------------------------------------------------- Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi PL/SQL Release 10.2.0.5.0 - Production CORE    10.2.0.5.0      Production TNS for Linux: Version 10.2.0.5.0 - Production NLSRTL Version 10.2.0.5.0 - Production SQL> set linesize 200 pagesize 1400 SQL> show parameter kgh NAME                                 TYPE                             VALUE ------------------------------------ -------------------------------- ------------------------------ _kghdsidx_count                      integer                          7 SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 536870914; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_11783.trc [oracle@vrh8 dbs]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_11783.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x60036110 FIVE LARGEST SUB HEAPS for heap name="sga heap(1,0)"   desc=0x60036110 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f938 FIVE LARGEST SUB HEAPS for heap name="sga heap(2,0)"   desc=0x6003f938 HEAP DUMP heap name="sga heap(3,0)"  desc=0x60049160 FIVE LARGEST SUB HEAPS for heap name="sga heap(3,0)"   desc=0x60049160 HEAP DUMP heap name="sga heap(4,0)"  desc=0x60052988 FIVE LARGEST SUB HEAPS for heap name="sga heap(4,0)"   desc=0x60052988 HEAP DUMP heap name="sga heap(5,0)"  desc=0x6005c1b0 FIVE LARGEST SUB HEAPS for heap name="sga heap(5,0)"   desc=0x6005c1b0 HEAP DUMP heap name="sga heap(6,0)"  desc=0x600659d8 FIVE LARGEST SUB HEAPS for heap name="sga heap(6,0)"   desc=0x600659d8 HEAP DUMP heap name="sga heap(7,0)"  desc=0x6006f200 FIVE LARGEST SUB HEAPS for heap name="sga heap(7,0)"   desc=0x6006f200 SQL> alter system set "_kghdsidx_count"=6 scope=spfile; System altered. SQL> startup force; ORACLE instance started. Total System Global Area  859832320 bytes Fixed Size                  2100104 bytes Variable Size             746587256 bytes Database Buffers          104857600 bytes Redo Buffers                6287360 bytes Database mounted. Database opened. SQL> SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 536870914; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_11908.trc [oracle@vrh8 dbs]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_11908.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x600360f0 FIVE LARGEST SUB HEAPS for heap name="sga heap(1,0)"   desc=0x600360f0 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f918 FIVE LARGEST SUB HEAPS for heap name="sga heap(2,0)"   desc=0x6003f918 HEAP DUMP heap name="sga heap(3,0)"  desc=0x60049140 FIVE LARGEST SUB HEAPS for heap name="sga heap(3,0)"   desc=0x60049140 HEAP DUMP heap name="sga heap(4,0)"  desc=0x60052968 FIVE LARGEST SUB HEAPS for heap name="sga heap(4,0)"   desc=0x60052968 HEAP DUMP heap name="sga heap(5,0)"  desc=0x6005c190 FIVE LARGEST SUB HEAPS for heap name="sga heap(5,0)"   desc=0x6005c190 HEAP DUMP heap name="sga heap(6,0)"  desc=0x600659b8 FIVE LARGEST SUB HEAPS for heap name="sga heap(6,0)"   desc=0x600659b8 SQL> SQL> alter system set "_kghdsidx_count"=2 scope=spfile; System altered. SQL> SQL> startup force; ORACLE instance started. 
Total System Global Area  851443712 bytes Fixed Size                  2100040 bytes Variable Size             738198712 bytes Database Buffers          104857600 bytes Redo Buffers                6287360 bytes Database mounted. Database opened. SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 2; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_12003.trc [oracle@vrh8 ~]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_12003.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x600360b0 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f8d SQL> alter system set cpu_count=16 scope=spfile; System altered. SQL> startup force; ORACLE instance started. Total System Global Area  851443712 bytes Fixed Size                  2100040 bytes Variable Size             738198712 bytes Database Buffers          104857600 bytes Redo Buffers                6287360 bytes Database mounted. Database opened. SQL> oradebug setmypid; Statement processed. SQL>  oradebug dump heapdump 2; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_12065.trc [oracle@vrh8 ~]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_12065.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x600360b0 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f8d8 SQL> show parameter sga_target NAME                                 TYPE                             VALUE ------------------------------------ -------------------------------- ------------------------------ sga_target                           big integer                      0 SQL> alter system set sga_target=1000M scope=spfile; System altered. SQL> startup force; ORACLE instance started. Total System Global Area 1048576000 bytes Fixed Size                  2101544 bytes Variable Size             738201304 bytes Database Buffers          301989888 bytes Redo Buffers                6283264 bytes Database mounted. Database opened. SQL> alter system set sga_target=1000M scope=spfile; System altered. SQL> startup force; ORACLE instance started. Total System Global Area 1048576000 bytes Fixed Size                  2101544 bytes Variable Size             738201304 bytes Database Buffers          301989888 bytes Redo Buffers                6283264 bytes Database mounted. Database opened. SQL> SQL> SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 2; Statement processed. SQL>  oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_12148.trc SQL> SQL> Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options [oracle@vrh8 dbs]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_12148.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x60036690 HEAP DUMP heap name="sga heap(1,1)"  desc=0x60037ee8 HEAP DUMP heap name="sga heap(1,2)"  desc=0x60039740 HEAP DUMP heap name="sga heap(1,3)"  desc=0x6003af98 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003feb8 HEAP DUMP heap name="sga heap(2,1)"  desc=0x60041710 HEAP DUMP heap name="sga heap(2,2)"  desc=0x60042f68 _enable_shared_pool_durations:?????????10g????shared pool duration??,?????sga_target?0?????false; ???10.2.0.5??cursor_space_for_time???true??????false,???10.2.0.5??cursor_space_for_time????? SQL> alter system set "_enable_shared_pool_durations"=false scope=spfile; System altered. 
    SQL> startup force; ORACLE instance started. Total System Global Area 1048576000 bytes Fixed Size 2101544 bytes Variable Size 738201304 bytes Database Buffers 301989888 bytes Redo Buffers 6283264 bytes Database mounted. Database opened. SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 2; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_12233.trc SQL> Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options [oracle@vrh8 dbs]$ grep "sga heap" /s01/admin/G10R25/udump/g10r25_ora_12233.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x60036690 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003feb8 Conclusions: 1. _kghdsidx_count determines the number of shared pool subpools; its maximum value is 7, i.e. at most 7 shared pool subpools. 2. When durations are enabled, each subpool is further divided into up to 4 sub-partitions (durations): sga heap(1,0), sga heap(1,1), sga heap(1,2), sga heap(1,3). The number of subpools is related to cpu_count and can be set explicitly with _kghdsidx_count. In 10g, shared pool durations are used under automatic SGA management; the four durations are Session duration, Instance duration (never freed), Execution duration (freed fastest) and Free memory. Durations exist to support automatic SGA resizing: chunks with the same duration are kept in the same granule, so that whole granules can be released by the shared pool and transferred to other components such as the Buffer Cache, whose granules carry their own granule header and metadata (buffer headers and, in RAC, Lock Elements). 10gR2 changed how Buffer Cache granules store that granule header and buffer metadata, simplifying resize operations between the buffer cache and the shared pool; the 10gR2 streams pool uses a similar duration mechanism of its own. reference : http://www.oracledatabase12g.com/archives/understanding-automatic-sga-memory-management.html
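    A quicker cross-check of the subpool count, without taking a heap dump, is to query the shared pool LRU fixed table directly. This is a sketch based on commonly published scripts, assuming SYS access and that x$kghlu exposes the kghluidx and kghlushrpool columns in your version:

      -- Each shared pool subpool has its own LRU list row in x$kghlu,
      -- so the distinct index count approximates the number of subpools in use.
      SELECT COUNT(DISTINCT kghluidx) AS num_subpools
      FROM   x$kghlu
      WHERE  kghlushrpool = 1;   -- assumption: this flag marks the shared pool rows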

    Read the article

  • rand() generating the same number – even with srand(time(NULL)) in my main!

    - by Nick Sweet
    So, I'm trying to create a random vector (think geometry, not an expandable array), and every time I call my random vector function I get the same x value, though y and z are different. int main () { srand ( (unsigned)time(NULL)); Vector<double> a; a.randvec(); cout << a << endl; return 0; } using the function //random Vector template <class T> void Vector<T>::randvec() { const int min=-10, max=10; int randx, randy, randz; const int bucket_size = RAND_MAX/(max-min); do randx = (rand()/bucket_size)+min; while (randx <= min && randx >= max); x = randx; do randy = (rand()/bucket_size)+min; while (randy <= min && randy >= max); y = randy; do randz = (rand()/bucket_size)+min; while (randz <= min && randz >= max); z = randz; } For some reason, randx will consistently return 8, whereas the other numbers seem to be following the (pseudo) randomness perfectly. However, if I put the call to define, say, randy before randx, randy will always return 8. Why is my first random number always 8? Am I seeding incorrectly?

    Read the article

  • Is it so bad to have heaps of elements in your DOM?

    - by alex
    I am making a non-interactive real estate display for a shop window. I have kicked jCarousel into doing what I want: add panels via AJAX, and towards the end of the current set, go and AJAX some new panels and insert them. This works fine, but it appears that calling jQuery's remove() on the prior elements causes an ugly bump. I'm not sure if calling hide() will free up any resources, as the element will still exist (and the element will be off screen anyway). I've seen this, and tried carousel.reset() from within a callback. It just clears out all the elements. This will be running on Google Chrome on Windows XP, and will solely be displaying on LCD televisions. I am wondering, if I can't find a reasonable solution to remove the extra DOM elements, will it bring my application to a crawl, or will Chrome do some clever garbage collecting? Or, how would you solve this problem? Thanks

    Read the article

  • Microsoft 2010 Product Tour

    - by Randy Walker
    I’m proud to announce that two Microsoft employees, Sarika Calla and Kevin Halverson, who work on the Visual Studio product team, will be visiting various user groups and companies in Arkansas and Texas! Bios: Sarika Calla – Speaking about a woman’s perspective at Microsoft, Sarika is a native of India who holds a Master’s in Computer Science from Georgia Tech and has been with Microsoft for the past 8 years.  She is now a team lead on the IDE team. Kevin Halverson – With 7 years as a Microsoft employee, Kevin has expertise in LINQ expression trees, Code Model, and COM/Office interop, and has a background as a former Unix sysadmin.
      June 1 – Walmart .Net User Group
      June 1 – Northwest Arkansas SQL Server User Group (lunch meeting)
      June 1 – Tyson devLoop
      June 1 – Northwest Arkansas .Net User Group
      June 2 – Datatronics
      June 2 – Little Rock .Net User Group
      June 3 – Dallas Customer Visit *
      June 3 – Fort Worth .Net User Group *
    Please contact Randy Walker if you would like Sarika & Kevin to visit your company.

    Read the article

  • Algorithmia Source Code released on CodePlex

    - by FransBouma
    Following the release of our BCL Extensions Library on CodePlex, we have now released the source-code of Algorithmia on CodePlex! Algorithmia is an algorithm and data-structures library for .NET 3.5 or higher and is one of the pillars LLBLGen Pro v3's designer is built on. The library contains many data-structures and algorithms, and the source-code is well documented and commented, often with links to official descriptions and papers of the algorithms and data-structures implemented. The source-code is shared using Mercurial on CodePlex and is licensed under the friendly BSD2 license. User documentation is not available at the moment but will be added soon. One of the main design goals of Algorithmia was to create a library which contains implementations of well-known algorithms which weren't already implemented in .NET itself. This way, more developers out there can enjoy the results of many years of what the field of Computer Science research has delivered. Some algorithms and datastructures are known in .NET but are re-implemented because the implementation in .NET isn't efficient for many situations or lacks features. An example is the linked list in .NET: it doesn't have an O(1) concat operation, as every node refers to the containing LinkedList object it's stored in. This is bad for algorithms which rely on O(1) concat operations, like the Fibonacci heap implementation in Algorithmia. Algorithmia therefore contains a linked list with an O(1) concat feature. The following functionality is available in Algorithmia: Command, Command management. This system is usable to build a fully undo/redo aware system by building your object graph using command-aware classes. The Command pattern is implemented using a system which allows transparent undo-redo and command grouping so you can use it to make a class undo/redo aware and set properties, use its contents without using commands at all. The Commands namespace is the namespace to start. Classes you'd want to look at are CommandifiedMember, CommandifiedList and KeyedCommandifiedList. See the CommandQueueTests in the test project for examples. Graphs, Graph algorithms. Algorithmia contains a sophisticated graph class hierarchy and algorithms implemented onto them: non-directed and directed graphs, as well as a subgraph view class, which can be used to create a view onto an existing graph class which can be self-maintaining. Algorithms include transitive closure, topological sorting and others. A feature rich depth-first search (DFS) crawler is available so DFS based algorithms can be implemented quickly. All graph classes are undo/redo aware, as they can be set to be 'commandified'. When a graph is 'commandified' it will do its housekeeping through commands, which makes it fully undo-redo aware, so you can remove, add and manipulate the graph and undo/redo the activity automatically without any extra code. If you define the properties of the class you set as the vertex type using CommandifiedMember, you can manipulate the properties of vertices and the graph contents with full undo/redo functionality without any extra code. Heaps. Heaps are data-structures which have the largest or smallest item stored in them always as the 'root'. Extracting the root from the heap makes the heap determine the next in line to be the 'maximum' or 'minimum' (max-heap vs. min-heap, all heaps in Algorithmia can do both). 
Algorithmia contains various heaps, among them an implementation of the Fibonacci heap, one of the most efficient heap datastructures known today, especially when you want to merge different instances into one. Priority queues. Priority queues are specializations of heaps. Algorithmia contains a couple of them. Sorting. What's an algorithm library without sort algorithms? Algorithmia implements a couple of sort algorithms which sort the data in-place. This aspect is important in situations where you want to sort the elements in a buffer/list/ICollection in-place, so all data stays in the data-structure it already is stored in. PropertyBag. It re-implements Tony Allowatt's original idea in .NET 3.5 specific syntax, which is to have a generic property bag and to be able to build an object in code at runtime which can be bound to a property grid for editing. This is handy for when you have data / settings stored in XML or other format, and want to create an editable form of it without creating many editors. IEditableObject/IDataErrorInfo implementations. It contains default implementations for IEditableObject and IDataErrorInfo (EditableObjectDataContainer for IEditableObject and ErrorContainer for IDataErrorInfo), which make it very easy to implement these interfaces (just a few lines of code) without having to worry about bookkeeping during databinding. They work seamlessly with CommandifiedMember as well, so your undo/redo aware code can use them out of the box. EventThrottler. It contains an event throttler, which can be used to filter out duplicate events in an event stream coming into an observer from an event. This can greatly enhance performance in your UI without needing to do anything other than hooking it up so it's placed between the event source and your real handler. If your UI is flooded with events from data-structures observed by your UI or a middle tier, you can use this class to filter out duplicates to avoid redundant updates to UI elements or to avoid having observers choke on many redundant events. Small, handy stuff. A MultiValueDictionary, which can store multiple unique values per key, instead of one with the default Dictionary, and is also merge-aware so you can merge two into one. A Pair class, to quickly group two elements together. Multiple interfaces for helping with building a de-coupled, observer based system, and some utility extension methods for the defined data-structures. We regularly update the library with new code. If you have ideas for new algorithms or want to share your contribution, feel free to discuss it on the project's Discussions page or send us a pull request. Enjoy!

    Read the article

  • App.Config vs. AppName.exe.Config

    - by Randy Minder
    I'm building a Windows Service app that has configuration data stored in App.config. However, I noticed that when I build my application, an AppName.exe.config file is generated. Can someone tell me the relationship between these two files? Is the AppName.exe.config file what I install with my Windows Service app, instead of the App.config? Thanks - Randy

    Read the article

  • How to create a column that can never be updated?

    - by Randy Minder
    In SQL Server 2008, is it possible to create a column that can have a value inserted into it, but can never be updated? In other words, the column can have an initial value inserted into it, but once it contains a non-null value, it can never be changed. If possible, I would prefer to do it without using a trigger. Thanks - Randy
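    One trigger-free option is a column-level DENY, though it only binds the principals it is applied to (table owners and sysadmins bypass it); a trigger is the usual fallback when every session must be blocked. A sketch, using a hypothetical table and role:

      -- Hypothetical table: CreatedDate may be set once but never changed afterwards.
      CREATE TABLE dbo.Orders
      (
          OrderID     int IDENTITY(1,1) PRIMARY KEY,
          CreatedDate datetime NULL,
          Status      varchar(20) NOT NULL
      );
      GO

      -- Column-level permissions: DataEntryRole may update Status, never CreatedDate.
      GRANT UPDATE ON dbo.Orders (Status)      TO DataEntryRole;
      DENY  UPDATE ON dbo.Orders (CreatedDate) TO DataEntryRole;
      GO

      -- Trigger fallback: reject any change to a CreatedDate that already holds a value.
      CREATE TRIGGER dbo.trg_Orders_ProtectCreatedDate
      ON dbo.Orders
      AFTER UPDATE
      AS
      IF EXISTS (SELECT 1
                 FROM inserted AS i
                 JOIN deleted  AS d ON d.OrderID = i.OrderID
                 WHERE d.CreatedDate IS NOT NULL
                   AND (i.CreatedDate <> d.CreatedDate OR i.CreatedDate IS NULL))
      BEGIN
          RAISERROR('CreatedDate cannot be changed once set.', 16, 1);
          ROLLBACK TRANSACTION;
      END;
      GO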

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
Well, it behaves as if it were doing a recompile. And an explicit recompile adds no value at all. (we just go up to 45000 rows since we know the bigger picture now)   Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table variable and the temporary table. The table variable is faster up to 8000 rows and then not much in it up to 100,000 rows. Past the 8000 row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that Option (RECOMPILE) hint. Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable Heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’. i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey. If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 Ms. If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 Ms. If both Table Variables use a unique constraint, then the query takes 36 Ms. If neither table contains a Primary Key and both are Table Variables, it took 116383 Ms. Yes, nearly two minutes!! If both tables contain a Primary Key, one is a Table Variables and the other is a temporary table, it took 113 Ms. If one table contains a Primary Key, and both are Temporary Tables, it took 56 Ms.If both tables are temporary tables and both have primary keys, it took 46 Ms. Here we see table variables which are joined on their primary key again enjoying a  slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use option Recompile? If you take the rogue query and add the hint, then suddenly, the query drops its time down to 76 Ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans.So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis. 
If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use option (RECOMPILE) If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble, well- slow queries, unless you give the table hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context). Kevin’s Skew In describing the table-size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100000 rows in which the key column was one number (1) . To this was added eight rows with sequential numbers up to 9. When this was joined to a single-tow Table Variable with a key of 2 it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows in the table, then it adopted exactly the same execution plan as for the temporary table and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, then the plans were identical across a huge range. Conclusions Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in the TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘Heaps’, unindexed. Beware, however, since, as the number of rows increase, joins on Table Variable heaps can easily become saddled by very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Tables Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables. 
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
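    The timing harness used for the experiments above is not reproduced in this excerpt. A minimal sketch of the kind of comparison described (not Phil Factor's script; it assumes SQL Server 2008 or later and arbitrary row counts) looks like this:

      -- Join two table-variable "heaps" of random integers, with and without
      -- OPTION (RECOMPILE), and compare elapsed times.
      DECLARE @Rows int = 50000;

      DECLARE @T1 TABLE (Id int NOT NULL);
      DECLARE @T2 TABLE (Id int NOT NULL);

      INSERT @T1 (Id)
      SELECT TOP (@Rows) ABS(CHECKSUM(NEWID())) % 100000
      FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

      INSERT @T2 (Id)
      SELECT TOP (@Rows) ABS(CHECKSUM(NEWID())) % 100000
      FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

      DECLARE @Start datetime2 = SYSDATETIME();

      SELECT COUNT(*)                    -- matches between the two heaps
      FROM @T1 AS t1
      JOIN @T2 AS t2 ON t2.Id = t1.Id;

      SELECT DATEDIFF(ms, @Start, SYSDATETIME()) AS ElapsedMsWithoutRecompile;

      SET @Start = SYSDATETIME();

      SELECT COUNT(*)
      FROM @T1 AS t1
      JOIN @T2 AS t2 ON t2.Id = t1.Id
      OPTION (RECOMPILE);                -- gives the optimizer the real row counts

      SELECT DATEDIFF(ms, @Start, SYSDATETIME()) AS ElapsedMsWithRecompile;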

    Read the article

  • Why doesn't $("#RadioButtons:checked").val() work in IE?

    - by Randy Heaps
    Why doesn't $("#RadioButtons:checked").val() - id selector - work in Internet Explorer but $("input:radio[name='RadioButtons']:checked").val() - name selector - does? <input name="RadioButtons" id="RadioButtons" type="radio" value="1" checked> <input name="RadioButtons" id="RadioButtons" type="radio" value="2"> <script> alert($("#RadioButtons:checked").val()); alert($("input:radio[name='RadioButtons']:checked").val()); </script>

    Read the article

  • On Life and TechEd…

    - by MOSSLover
    I haven’t been writing here I know.  I am very sorry, but I am just too busy trying to make my personal life just as good as my SharePoint life.  So I was at TechEd again helping with the hands on labs, but I had a very long couple of weeks.  I will have a long week again.  I started out going to my friend, Randy Walker’s, wedding and I ended up at my grandmother’s condo in Fort Lauderdale.  It was a very trying week.  I had to drive 5 1/2 hours to the wedding from St. Louis and back.  So Randy is an awesome person.  He is a great guy and he was the first person ever to nominate me for MVP.  I knew it probably wasn’t going anywhere, but it was the thought that count.  I met Randy in 2008 at Tulsa Techfest.  We had fun jamming out to Rockband.  I knew he was good people back then.  He has let me vent and I have let him vent over countless personal problems.  He has always been a great friend.  So it was a no brainer when I decided to go to his wedding no matter how much driving or stress or lack of sleep it was worth it.  I am incredibly happy for him to finally find a diamond amongst a lot of coal.  To take part in his celebration was so awesome.  I thank him again for letting me participate in this ceremony. Now after Randy’s wedding I drove 5 1/2 hours landed in St. Louis, fed a cat an asthma pill hidden in wet cat food, and slept for 4 hours.  I immediately saw my best friend who dropped me off at the airport and proceeded to TechEd.  I slept 1 hour on each flight, then ended up working a 3 hour shift at TechEd.  The rest of the week was a haze of connecting with people and sleeping very little.  I got to see my friend Tasha and her husband Casey plus a billion other people.  It was a great week and then I got a call from my grandmother.  It turns out her husband was admitted to the hospital. My grandparents on my dad's side have been divorced since the 60s, which means I never got to see them together.  I always felt like they never cliqued.  When I was a kid we would always spend half our time in Chicago at grandma’s and half our time at grandpa’s houses.  We would hang out with my grandpa’s wife Bobbi and my grandma’s husband Leo.  My cousin’s always called Leo by Pappa and my brother and I would use Leo.  My cousins lived in Chicago up until my cousin Gavi was born then they moved to Philadelphia.  I remember complaining to my dad that we never visited anywhere cool just Chicago and Kansas City.  I also remember Leo teaching me and my brother, Sam, how to climb a tree and play tennis on the back of the apartment wall.  My grandfather’s was kind of stuffy and boring, but we always enjoyed my grandmother’s.  She had games and Disney Movies and toys.  Leo always made it a ton of fun to visit.  I’m not sure a lot of people knew this fact, but when I was 16 years old I saw my grandfather on my mother’s side slowly die of congestive heart failure in a nursing home.  When I was 18 I saw my grandmother on my mother’s side slowly die of an infection over a 30 day period.  I hate hospitals and I hate nursing homes, but sometimes you have to suck in your gut and be strong.  I tried to do that this weekend and I hope I did an ok job.  I really hope things get better, but if they don’t I really appreciate everything he has given me in life.  I am a terrible tennis player and I haven’t climbed a tree in ages, but they both encouraged me when encouragement from my parents was lacking.  
It was Father’s Day today and I have to at least pay tribute to Leo Morris, because he has been a good person to me all my life.  Technorati Tags: TechEd,Grandparents,Father's Day

    Read the article

  • Snow Leopard can see Windows shares in Finder but can't connect

    - by Randy Miller
    I have an iMac with the latest version of Snow Leopard on it. I have a NAS drive and a Windows machine that both show up in the Finder's 'Shared' section. However, if I click on them, Finder says "Connection Failed". Clicking on 'Connect As...' gives an error dialog that says "The server 'blah' may not exist or it is unavailable at this time." Points of interest: All machines are receiving their IP/DNS info from the router using DHCP. I have a Mac Mini on the same network that connects to the NAS drive and Windows machine perfectly with no config (i.e. worked out of the box). Both Macs are on the same version of Snow Leopard. There is no password required to access the NAS share. I've never set up a WINS server on any machine, and all machines are using 'workgroup' by default. I've tried putting "workgroup" in the Mac's workgroup entry and have tried leaving it blank; neither solves the problem. Here are some things I have tried: Finder > Connect To Server: smb://<ip address>/share. This works, but by name it does not. Terminal: mount_smbfs //<user>@<ip address>/share share. This also works by IP, but not by name, resulting in "mount_smbfs: server connection failed: No route to host". If I put the IP address of the NAS in the WINS server entry in the Mac's network setup, I can connect by name. It obviously seems to be a name resolution error, but I can't figure out why. The only thing that has changed since it used to work is that I got a new router that now gives out DHCP addresses of 192.168.x.x (all machines are DHCP clients), where it used to be 10.0.x.x. I've grep'd through everything that might have saved that old address, but can't find anything. It's also worth noting that the second Mac (the one that connects successfully) was added to the network after the router change. Please let me know if there are additional points of information needed to troubleshoot this further. Thanks, Randy

    Read the article

  • Using Substring() in XML FLWOR Queries

    - by Jonathan Kehayias
    Tonight I was monitoring the #sqlhelp hashtag on Twitter for a response to a question I asked, when Randy Knight (Twitter) asked a question about using SUBSTRING in FLWOR statements with XML. #sqlhelp Is there a way to do a SQL Type "LIKE" or "SUBSTRING" in the where clause of FLWOR statement? Need to evaluate just first n chars. By the time I posted a response, Randy had figured out how to use the contains() function to solve his problem, but I am going to blog this because...(read more)
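    The post excerpt stops before the code. A minimal sketch of the contains()/substring() approach inside a FLWOR expression (hypothetical XML document and element names, not the original example) looks like this:

      -- Filter <Customer> elements in a FLWOR expression using fn:contains()
      -- and fn:substring() on an attribute value.
      DECLARE @x xml;
      SET @x = N'
      <Customers>
        <Customer name="Randy Knight" />
        <Customer name="Randy Heaps" />
        <Customer name="Jonathan Kehayias" />
      </Customers>';

      -- contains(): any customer whose name includes "Randy"
      SELECT @x.query('
          for $c in /Customers/Customer
          where contains(string($c/@name), "Randy")
          return $c');

      -- substring(): evaluate just the first 5 characters
      SELECT @x.query('
          for $c in /Customers/Customer
          where substring(string($c/@name), 1, 5) = "Randy"
          return $c');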

    Read the article
