Search Results

Search found 7466 results on 299 pages for 'selected'.


  • How to create animated sliding windows/tabs menu?

    - by Forte
    I have created a navigation menu in YUI 2.8 as below. I have also animated the tabs using CSS transitions, but CSS transitions are not widely supported, so my animations do not work in Opera, IE, etc. Since I'm already using YUI 2.8 on that page, can somebody tell me how I can animate those tabs? When I click on any tab, it should expand vertically in a smooth, animated way. Below are the properties of a tab that change when it is selected (these are the properties that should be animated): paddings, margins, background-color, borders. Please note in the above image: there is a little space left on the right side in case #1, when the 1st tab is selected. In cases #2 and #3 there is space left on both the left and right sides. In case #4 there is some space left on the left side, when the last tab is selected.
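    Since YUI 2.x ships its own animation utility, one option (a minimal sketch, not a drop-in answer) is to drive the size and colour changes with YAHOO.util.Anim and YAHOO.util.ColorAnim instead of CSS transitions. The element ids, target values and easing below are hypothetical placeholders, and the yahoo-dom-event and animation modules are assumed to be loaded.

        // Sketch: animate a tab's height, padding and background colour with YUI 2.x
        var el = YAHOO.util.Dom.get('tabContent');   // hypothetical id of the tab body

        var grow = new YAHOO.util.Anim(el, {
            height:     { to: 250, unit: 'px' },
            paddingTop: { to: 12,  unit: 'px' }
        }, 0.4, YAHOO.util.Easing.easeOut);

        var recolor = new YAHOO.util.ColorAnim(el, {
            backgroundColor: { to: '#ffe9a8' }
        }, 0.4);

        YAHOO.util.Event.on('tabLink', 'click', function (e) {   // hypothetical tab link id
            YAHOO.util.Event.preventDefault(e);
            grow.animate();
            recolor.animate();
        });

    Margins and border widths can be animated the same way by adding them to the attribute map; ColorAnim is only needed for the colour properties.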

    Read the article

  • VS2010 (older) installer project - two or more objects have the same target location.

    - by Hamish Grubijan
    This installer project was created back in 2004 and has been upgraded ever since. There are two offending DLL files, which produce a total of 4 errors. I have searched online for this warning message and did not find a permanent fix (I did manage to make it go away once, until I did something like a clean, or built in Release and then in Debug). I also tried cleaning and then refreshing the dependencies; the duplicated entries are still in there. I also did not find a good explanation of what this error means. Additional warnings are of this nature: Warning 36: The version of the .NET Framework launch condition '.NET Framework 4' does not match the selected .NET Framework bootstrapper package. Update the .NET Framework launch condition to match the version of the .NET Framework selected in the Prerequisites Dialog Box. So, where is this Prerequisites dialog box? I want to make both things agree on .NET 4.0; I'm just having a hard time locating both of them.

    Read the article

  • Append before parent after getSelection()

    - by sanceray3
    Hi all, I have one question. I am trying to develop a little WYSIWYG editor. I would like to create an <h1> button which generates an <h1> title (by clicking the button after selecting text inside a <p> element). I obtain the selection with window.getSelection(). Now I would like to put my append("<h1>my text selected</h1>") just before the <p>. That is my problem, because my <p> elements don't have an id or class. So do you know a way to put my appended element just before the <p> in which my text was selected? I don't know if you'll understand me with my bad English! Thanks very much for your help.
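    One way to approach this (a minimal sketch in plain JavaScript, assuming the selection really does sit inside a <p>; the button id is a hypothetical placeholder) is to walk up from the selection's anchor node to the containing <p> and insert the new heading before it, so no id or class is needed on the paragraph:

        // Sketch: wrap the selected text in an <h1> placed just before its <p>
        document.getElementById('h1Button').onclick = function () {
            var sel = window.getSelection();
            if (!sel || sel.isCollapsed) return;          // nothing selected

            // Walk up from the selection's anchor node to the enclosing <p>
            var node = sel.anchorNode;
            while (node && node.nodeName.toLowerCase() !== 'p') {
                node = node.parentNode;
            }
            if (!node) return;                            // selection was not inside a <p>

            var heading = document.createElement('h1');
            heading.appendChild(document.createTextNode(sel.toString()));
            node.parentNode.insertBefore(heading, node);  // before the <p>, no id/class required
        };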

    Read the article

  • What control to use

    - by Tarscher
    Hi all, I have a list of devices that I need to filter according to options selected by the user. One such option is cooling: when the user selects cooling, only the devices with cooling are shown. If cooling is not selected, then all devices (with or without cooling) are shown. I wonder what kind of control is best to use for this. My feeling is that a checkbox is not a good fit, since it represents: no cooling (unchecked) / only cooling (checked), while I want: cooling and no cooling (unchecked) / only cooling (checked). What control is best used here? Thanks.

    Read the article

  • JQuery Slider, Need to Create a Slider with range as Current Date - 3 Months...

    - by Cozmoz
    I am new to jQuery, but my task is to create a slider in jQuery whose range runs from the current date back to the date three months ago. Slider: --------------------------------------------------------------------- (Below is just a reference; the slider should have a callout box or an area to show the currently selected date.) 1-----------------31 | 1---------28 | 1---------30 | 1------Curr Date (Jan / Feb / Mar / Apr). The data should also change according to the range selected. If anyone has come across such a problem or solution, please let me know; I would be more than grateful. Thanks!
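    One possible approach (a minimal sketch using the jQuery UI slider, which is an assumption since the question only says "jQuery"; the element ids are hypothetical) is to let the slider range over the last 90 days and translate the handle position into a date:

        // Sketch: slider over the last ~3 months, with a callout showing the selected date
        $(function () {
            var daysBack = 90;

            function dateForValue(value) {
                var d = new Date();
                d.setDate(d.getDate() - (daysBack - value));   // value == 90 means today
                return d.toDateString();
            }

            $('#dateSlider').slider({
                min: 0,
                max: daysBack,
                value: daysBack,                 // start at the current date
                slide: function (event, ui) {
                    $('#callout').text(dateForValue(ui.value));
                    // reload or filter the data for the selected date here
                }
            });

            $('#callout').text(dateForValue(daysBack));
        });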

    Read the article

  • how to get IndexPath

    - by user145883
    In an iPhone application, I'm trying to get the indexPath.row value so I can do something based on the row selected in a programmatically created table view. Can someone please tell me why the indexPath.row value is not coming out correctly? It's something like 21487...3. NSUInteger nSections = [myTableView numberOfSections]; NSUInteger nRows = [myTableView numberOfRowsInSection:nSections]; NSIndexPath *indexPath = [NSIndexPath indexPathForRow: nRows inSection:nSections]; NSLog(@"No of sections in my table view %d",nSections); NSLog(@"No of rows in my table view %d",nRows); NSLog(@"Value of selected indexPath Row %d", indexPath.row); Thanks, Aaryan
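    A likely part of the problem: sections and rows are zero-based, so passing [myTableView numberOfSections] straight into numberOfRowsInSection: asks for a section one past the last valid index. More to the point, the index path of the row the user actually tapped is delivered to the table view's delegate, so it does not need to be constructed by hand. A minimal sketch of the standard UITableViewDelegate callback, implemented in whatever class acts as the delegate (e.g. your view controller):

        // Sketch: the tapped row's index path arrives in the delegate callback
        - (void)tableView:(UITableView *)tableView
                didSelectRowAtIndexPath:(NSIndexPath *)indexPath
        {
            NSLog(@"Selected section %d, row %d", indexPath.section, indexPath.row);
            // act on the selected row here
        }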

    Read the article

  • Get records in particular date range

    - by developer
    Hi All, I have a column in the database named DateCreated. Now I want to filter records depending on the date range selected, e.g. created within 60 days, created within a year, etc. I have a variable, datecreated, that holds the range the user has selected, i.e. whether it is "Created within 60 days" or "Created within a year". if (datecreated == "Created within 60 days") { DateTime CurrTime = DateTime.Now; if (prg.Subscriber.Username == CurrentUsername && prg.Program.DateCreated<=DateTime.Now - 60) { UserPrgList.Add(new ProgramSubscriptionViewModel(prg)); } } But the above code won't work. What would be the syntax to get the records within a particular range?
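    One way to fix it (a minimal sketch reusing the names from the question, which are assumptions about the surrounding code): DateTime has no operator for subtracting a bare integer, so build the cutoff with AddDays/AddYears. Note also the comparison direction: "created within 60 days" means DateCreated is on or after the cutoff.

        // Sketch: build a cutoff date and compare against it
        DateTime cutoff;
        if (datecreated == "Created within 60 days")
            cutoff = DateTime.Now.AddDays(-60);
        else // "Created within a year"
            cutoff = DateTime.Now.AddYears(-1);

        if (prg.Subscriber.Username == CurrentUsername &&
            prg.Program.DateCreated >= cutoff)
        {
            UserPrgList.Add(new ProgramSubscriptionViewModel(prg));
        }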

    Read the article

  • Using JCA Adapter with OSB 11.1.1.3

    - by James Taylor
    In OSB 10g, to use the JCA adapters you were required to use JDeveloper to create the necessary WSDLs, XSDs, etc. using the associated adapter wizard. These files were imported into Oracle Workshop (Eclipse) and used to create the business service as you would for any other web service. In 11g, unfortunately, JDeveloper is still required, but the process has changed slightly, as described below. I have used the JCA DB adapter as an example.
    1. Start JDeveloper 11.1.1.3.
    2. Create a new SOA Application.
    3. Create a new SOA Project and call it DBAdapters. Choose the Empty Composite template.
    4. Drag a Database Adapter component to the External References panel on the composite.
    5. Provide a service name.
    6. Create a new database connection, or use an existing one.
    7. Take note of the JNDI name, e.g. eis/DB/MyConnection. This will be used to configure the DB connection in the WebLogic Console.
    8. Select a schema and stored procedure. In my example I use a stored procedure, but you can use whatever operation you require; please refer to the following link for other options: User's Guide for Technology Adapters.
    9. Once the procedure has been selected, accept the defaults and finish.
    10. Start up your OEPE version of Eclipse.
    11. Create a new Oracle Service Bus Configuration Project (you can use an existing project if you have one).
    12. Create a new Oracle Service Bus Project in the configuration project created above. Instead of importing the WSDL and XSD files, you import the jca file created in JDeveloper.
    13. In Eclipse, right-click the Oracle Service Bus Project and select Import -> Import.
    14. Choose File System.
    15. Browse to the directory where JDeveloper stores its project.
    16. Select the jca, wsdl, and xsd files based on the service you created in step 5. Also check the 'Create selected folders only' radio button.
    17. When you import, you may see a little red x indicating the files are invalid. This is due to the location of the files. Open the invalid files and fix the path in relation to where you store your files in the OSB project.
    18. Once all the files are valid, right-click the jca file and select Oracle Service Bus -> Generate Service. This will create a new Business Service.
    19. In the WebLogic Console, configure the JNDI name defined in step 7.
    20. You can now deploy your project and test.

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Microsoft Tech-Ed North America 2010 - SQL Server Upgrade, 2000 - 2005 - 2008: Notes and Best Practices from the Field

    - by ssqa.net
    It is just a week to go until Tech-Ed North America 2010 in New Orleans. This time I'm also speaking at the conference, on the subject SQL Server Upgrade, 2000 - 2005 - 2008: Notes and Best Practices from the Field... more from here.. It is a coincidence that this is the 2nd time the same talk has been selected for Tech-Ed North America on a topic I have presented at SQLBits before....(read more)

    Read the article

  • Replacing jQuery.live() with jQuery.on()

    - by Rick Strahl
    jQuery 1.9 and 1.10 have introduced a host of changes, but for the most part these changes are mostly transparent to existing application usage of jQuery. After spending some time last week with a few of my projects and going through them with a specific eye for jQuery failures I found that for the most part there wasn't a big issue. The vast majority of code continues to run just fine with either 1.9 or 1.10 (which are supposed to be in sync but with 1.10 removing support for legacy Internet Explorer pre-9.0 versions). However, one particular change in the new versions has caused me quite a bit of update trouble, is the removal of the jQuery.live() function. This is my own fault I suppose - .live() has been deprecated for a while, but with 1.9 and later it was finally removed altogether from jQuery. In the past I had quite a bit of jQuery code that used .live() and it's one of the things that's holding back my upgrade process, although I'm slowly cleaning up my code and switching to the .on() function as the replacement. jQuery.live() jQuery.live() was introduced a long time ago to simplify handling events on matched elements that exist currently on the document and those that are are added in the future and also match the selector. jQuery uses event bubbling, special event binding, plus some magic using meta data attached to a parent level element to check and see if the original target event element matches the selected selected elements (for more info see Elijah Manor's comment below). An Example Assume a list of items like the following in HTML for example and further assume that the items in this list can be appended to at a later point. In this app there's a smallish initial list that loads to start, and as the user scrolls towards the end of the initial small list more items are loaded dynamically and added to the list.<div id="PostItemContainer" class="scrollbox"> <div class="postitem" data-id="4z6qhomm"> <div class="post-icon"></div> <div class="postitemheader"><a href="show/4z6qhomm" target="Content">1999 Buick Century For Sale!</a></div> <div class="postitemprice rightalign">$ 3,500 O.B.O.</div> <div class="smalltext leftalign">Jun. 07 @ 1:06am</div> <div class="post-byline">- Vehicles - Automobiles</div> </div> <div class="postitem" data-id="2jtvuu17"> <div class="postitemheader"><a href="show/2jtvuu17" target="Content">Toyota VAN 1987</a></div> <div class="postitemprice rightalign">$950</div> <div class="smalltext leftalign">Jun. 07 @ 12:29am</div> <div class="post-byline">- Vehicles - Automobiles</div> </div> … </div> With the jQuery.live() function you could easily select elements and hook up a click handler like this:$(".postitem").live("click", function() {...}); Simple and perfectly readable. The behavior of the .live handler generally was the same as the corresponding simple event handlers like .click(), except that you have to explicitly name the event instead of using one of the methods. Re-writing with jQuery.on() With .live() removed in 1.9 and later we have to re-write .live() code above with an alternative. The jQuery documentation points you at the .on() or .delegate() functions to update your code. jQuery.on() is a more generic event handler function, and it's what jQuery uses internally to map the high level event functions like .click(),.change() etc. that jQuery exposes. Using jQuery.on() however is not a one to one replacement of the .live() function. 
While .on() can handle events directly and use the same syntax as .live() did, you'll find if you simply switch out .live() with .on() that events on not-yet existing elements will not fire. IOW, the key feature of .live() is not working. You can use .on() to get the desired effect however, but you have to change the syntax to explicitly handle the event you're interested in on the container and then provide a filter selector to specify which elements you are actually interested in for handling the event for. Sounds more complicated than it is and it's easier to see with an example. For the list above hooking .postitem clicks, using jQuery.on() looks like this:$("#PostItemContainer").on("click", ".postitem", function() {...}); You specify a container that can handle the .click event and then provide a filter selector to find the child elements that trigger the  the actual event. So here #PostItemContainer contains many .postitems, whose click events I want to handle. Any container will do including document, but I tend to use the container closest to the elements I actually want to handle the events on to minimize the event bubbling that occurs to capture the event. With this code I get the same behavior as with .live() and now as new .postitem elements are added the click events are always available. Sweet. Here's the full event signature for the .on() function: .on( events [, selector ] [, data ], handler(eventObject) ) Note that the selector is optional - if you omit it you essentially create a simple event handler that handles the event directly on the selected object. The filter/child selector required if you want life-like - uh, .live() like behavior to happen. While it's a bit more verbose than what .live() did, .on() provides the same functionality by being more explicit on what your parent container for trapping events is. .on() is good Practice even for ordinary static Element Lists As a side note, it's a good practice to use jQuery.on() or jQuery.delegate() for events in most cases anyway, using this 'container event trapping' syntax. That's because rather than requiring lots of event handlers on each of the child elements (.postitem in the sample above), there's just one event handler on the container, and only when clicked does jQuery drill down to find the matching filter element and tries to match it to the originating element. In the early days of jQuery I used manually build handlers that did this and manually drilled from the event object into the originalTarget to determine if it's a matching element. With later versions of jQuery the various event functions in jQuery essentially provide this functionality out of the box with functions like .on() and .delegate(). All of this is nothing new, but I thought I'd write this up because I have on a few occasions forgotten what exactly was needed to replace the many .live() function calls that litter my code - especially older code. This will be a nice reminder next time I have a memory blank on this topic. 
    And maybe along the way I've helped one or two of you as well to clean up your .live() code… © Rick Strahl, West Wind Technologies, 2005-2013. Posted in jQuery

    Read the article

  • Deciding which technology to use is a big decision when no technology is an obvious choice

    Deciding which technology to use in a new venture or project is a big decision for any company when no technology is an obvious choice. It is always best to analyze the current requirements of the project and also evaluate the existing technology climate, so that the correct technology for the situation at the time is selected. When evaluating the requirements of a new project, it is best initially to be open to as many technologies as possible, so a company can be sure that the right decision gets made. Another important aspect of the technology decision is what the current network and hardware environment can handle, and what would need to be adjusted if a specific technology were selected. For example, if the current network operating system is Linux, then VB6 would force a huge change in the current computing environment. However, if the current network operating system is Windows-based, then very little change would be needed to allow for VB6, if any change had to be made at all. Finally, and most importantly, an analysis should be done of the current technical employees' skills and aspirations. For example, if you have a team of Java programmers, then forcing them to build something in C# might not be an ideal situation. However, a team of VB.NET developers who want to develop something in C# would be a better situation in this example, because they are already familiar with the .NET Framework and have a desire to use the new technology. In addition to this analysis, the cost associated with building and maintaining the project is also a key factor. If two languages are ideal for a project but one technology will increase the budget or timeline by 50%, then it might not be the best choice in that situation. An ideal situation for developing C# applications would be a project built on existing Microsoft technologies: for example, a company that uses Windows Server 2008 as its network operating system, Windows XP Pro as its main desktop operating system, and Microsoft SQL Server 2008 as its primary database, and that has a team of developers experienced in the .NET Framework. In that situation, Java would be a poor technology decision based on the current computing environment and the developers' likely lack of Java experience. It would take the developers longer to build the application because they would first have to learn the language and then become comfortable with it. Although these barriers exist, it does not mean the project is not doable if the company and developers are committed to it.

    Read the article

  • Remote Desktop to Your Azure Virtual Machine

    - by Shaun
    The Windows Azure team published their new development portal and SDK 1.3 this week. This new release contains a lot of cool features; the one I'm looking forward to is Remote Desktop access to your running Windows Azure virtual machine. Configuring Remote Desktop Access: It is very simple to enable remote desktop access for an Azure service. First of all, let's create a new Windows Azure project in Visual Studio. In this example I just created a normal MVC 2 web role without any modifications. Then we right-click the Azure project node in the Solution Explorer window and select "Publish", and choose the "Deploy your Windows Azure project to Windows Azure" radio button at the top. Then select the credentials, deployment service/slot, storage and label as usual. You must have the Management API certificate uploaded to your Windows Azure account, and the certificate installed on your machine, in order to use this one-click deployment feature. If you are familiar with this dialog you will notice a link named "Configure Remote Desktop connections". This is where you enable the remote desktop feature for the service. After clicking this link, we set the remote desktop access authorization information. There are 4 things we need to configure. Certificates: we need to either create or select a certificate file in order to encrypt the access credentials; in this example I use the certificate file for my Management API. Username: the remote desktop user name for accessing the virtual machine. Password: the password for that account. Expiration: the access credentials expire after 1 month by default, but we can change that here. After that, we click the OK button to go back to the publish dialog. The next step is to go back to the new Windows Azure portal and navigate to the hosted services list. I created a new hosted service and uploaded the certificate file to it. The user name and password for accessing the Azure machine are encrypted on the local machine, sent to the Windows Azure platform, and then decrypted on the Azure side using the same certificate; this is why we need to upload the certificate file to Azure. We navigate to "Hosted Services, Storage Accounts & CDN" in the left panel, create a new hosted service named "SDK13", and select the "Certificates" node. Then we click the "Add Certificates" button and select the local certificate file and its password to install it into this Azure service. The final step is to go back to Visual Studio and, in the publish dialog, just click the OK button. Visual Studio will upload our package and configuration to our service with the remote desktop settings. Remote Desktop Access to the Azure Virtual Machine: With all of that done, let's look back at the Windows Azure development portal. If I select the web role I have just published, we can see a section named "Remote Access" on the toolbar. In this section the Enable checkbox is checked, which means this role has the Remote Desktop Access feature enabled. If we want to modify the access credentials we can simply click the Configure button, and then update the user name, password, certificate and expiration date. Now let's select the instance node under the web role. In this case I just created one instance for the demo.
    We can see that when we select the instance node, the Connect button becomes enabled. After clicking this button an RDP file is downloaded. This is a Remote Desktop configuration file that we can use to access our Azure virtual machine. Let's download it to our local machine and run it. We enter the user name and password we specified when we published our application to Azure, and then click OK. A certificate warning dialog may appear; this is because the certificate we used for encryption is not signed by a trusted provider. Just select OK in these cases, as we know the certificate is safe. Finally, the Windows Azure virtual machine appears. A Quick Look into the Azure Virtual Machine: Let's have a very quick look at our virtual machine. There are 3 disks available to us: C, D and E. Disk C stores the local resources, diagnostics information, etc. Disk D is the system disk, which contains the OS, IIS, the .NET Frameworks, etc. Disk E stores our application code. We can also see the IIS instance hosting our website on Azure, and the IP configuration of the Azure virtual machine. Summary: In this post I covered one of the new features of the Azure SDK 1.3 – Remote Desktop Access. We can enable the access per service, and all of the instances of that service can then be accessed through the remote desktop tool. With this feature we can dig into the virtual machines of our instances to see internal information such as the system events, IIS logs, system information, etc. But we should be careful about modifying system settings, for two reasons that I know of so far: 1. If we have more than one instance in our service, we should ensure that any system settings we modify are applied to all instances/virtual machines. Otherwise, since the machines sit behind the Azure load balancer, our application may not work correctly due to the different settings between the instances. 2. When a virtual machine encounters a problem and has to be moved to another physical machine, all the settings we made will disappear. Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • CodePlex Daily Summary for Sunday, September 30, 2012

    CodePlex Daily Summary for Sunday, September 30, 2012Popular ReleasesCAPTCHA Solver: Initial Release: This is the initial Release :) Still very much a WIP.MCEBuddy 2.x: MCEBuddy 2.2.17: Reccomended update to 2.2.16 Changelog for 2.2.17 (32bit and 64bit) 1. Fixed bugs around thread synchronization with new remote model (fixes cause the app to crash or hang) 2. Updated UPnP code base, faster and more reliable now 3. Now you can get audio/video properties for multiple files on main page. Selected multiple files and right click, all selected files properties will be shown. 4. Fix a bug, not able to enter a conversion task name in the GUIAggravation: Version 1.0: This version 1.0 release is pretty stable. You need the Silverlight 4 runtime, developer tools, and Experssion Blend 4 installed.Readable Passphrase Generator: KeePass Plugin 0.7.1: See the KeePass Plugin Step By Step Guide for instructions on how to install the plugin. Changes Built against KeePass 2.20Windows 8 Toolkit - Charts and More: Beta 1.0: The First Compiled Version of my LibraryPDF.NET: PDF.NET.Ver4.5-OpenSourceCode: PDF.NET Ver4.5 ????,????Web??????。 PDF.NET Ver4.5 Open Source Code,include a sample Web application project.D3 Loot Tracker: 1.4: Session name is displayed in the UI. Changes data directory for clickonce deployment so that sessions files are persisted between versions. Added a delete button in the sessions list window. Allow opening of the sessions local folder from the session list widow. Display the session name in the main window Ability to select which diablo process to hook up to when pressing new () function BUT only if multi-process support is selected in the generals settings tab menu. Session picker...CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor 1.1 Beta: Visual Ribbon Editor 1.1 Beta What's New: Fixed scrolling issue in UnHide dialog Added support for connecting via ADFS / IFD Added support for more than one action for a button Added support for empty StringParameter for Javascript functions Fixed bug in rule CrmClientTypeRule when selecting Outlook option Extended Prefix field in New Button dialogVisual Studio Icon Patcher: Version 1.5.2: This version contains no new images from v1.5.1 Contains the following improvements: Better support for detecting the installed languages The extract & inject commands won’t run if Visual Studio is running You may now run in extract or inject mode The p/invoke code was cleaned up based on Code Analysis recommendations When a p/invoke method fails the Win32 error message is now displayed Error messages use red text Status messages use green textZXing.Net: ZXing.Net 0.9.0.0: On the way to a release 1.0 the API should be stable now with this version. sync with rev. 2393 of the java version improved api better Unity support Windows RT binaries Windows CE binaries new Windows Service demo new WPF demo WindowsCE Hotfix: Fixes an error with ISO8859-1 encoding and scannning of QR-Codes. The hotfix is only needed for the WindowsCE platform.C.B.R. : Comic Book Reader: CBR 0.7: Synthesis since 0.6 : ePUB : Complete refactoring Add a new dedicated feed viewer for opds stream PDF conversion : improved with image merge Make all backstage panel scrollable Integrate the new AvalonDock 2 library. Support multi-document. 
Library explorer and Table of content are now toolboxes Designer for dynamic books is now mvvm and much better New BrowserForControl Customized xps viewer to suppress toolbars and bind it to cbr commands Add quick start manual and button ...menu4web: menu4web 1.0 - free javascript menu for web sites: menu4web 1.0 has been tested with all major browsers: Firefox, Chrome, IE, Opera and Safari. Minified m4w.js library is less than 9K. Includes 21 menu examples of different styles. Can be freely distributed under The MIT License (MIT).Rawr: Rawr 5.0.0: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...Coevery - Free CRM: Coevery 1.0.0.26: The zh-CN issue has been solved. We also add a project management module.VidCoder: 1.4.1 Beta: Updated to HandBrake 4971. This should fix some issues with stuck PGS subtitles. Fixed build break which prevented pre-compiled XML serializers from showing up. Fixed problem where a preset would get errantly marked as modified when re-opening the encode settings window or importing a new preset.Snake!: Snake 1.0: Version 1 StablePaging SharePoint ListItems using listitems position: Paginglistitems V1.0: This is a console application which has two methods both on CSOM and SOM to display the listitems in a paged manner.SharePoint Move Discussion Threads: SharePoint Move Discussion Threads ver 0.1: ver 0.1NTCPMSG: V1.1.1.0: increase the performance. Support .net framework 4.0.BlackJumboDog: Ver5.7.2: 2012.09.23 Ver5.7.2 (1)InetTest?? (2)HTTP?????????????????100???????????New Projects2D Sprite Editor: This is a 2d sprite editor. Import your sprite sheet, trace your animations frame and export the coordinates points in a simple txt file, ready to import.caifenweb1: test project.CatchThatException: This is a small logging library We created at developerpath.com to help us log exceptions. It write it to a text file and you can easilay open that txt.FsxWs - WebServices for Microsoft FSX: WebServices for MS Flight Simulator. Get flights data as JSON, KML. !! Still in SetUp phase - be patient !!GetTPB: Some training in downloading and parsing web pages, with multithreading too.JSON-RPC Client Generator (for XBMC): The goal of this project is to provide a .Net client for the XBMC JSONRPC API. The main part is not XBMC dependent and may be used for any JSON-RPC client.matlab-silhouette-pose-wtf: Whatevermfp: this is random codeMVC Grid: MVC Grid ExampleMyWebSocketTry: sssssssssssssssssssssssssssssssssssssssNetduino Console: Netduino Console is an interface with built in messaging layers that allows you as a developer to dynamically create plugins following a provided interface to iSharePoint ASP.NET Verifier: Project will allow to verify SharePoint 2010 components using ASP.NET web applicationSharepoint Custom Upload: This is a SharePoint solution that allows an administrator to customize the upload page individually for each document library in a site.. It allows you to makeWinWeb Browser Deluxe: WinWeb Browser Deluxe es un navegador web de código abierto basado en Internet Explorer hecho en Visual Basic .NET. 
Descargalo ya!writethatoutput: This is the official release page for WriteThatOutPut from developerpath.com

    Read the article

  • Virtual Box - How to open a .VDI Virtual Machine

    - by [email protected]
    How to open a .VDI Virtual Machine. Sometimes someone shares a virtual machine with us with the extension .VDI, and we may wonder what it is and what to do with it. Well, the answer is: it is a VirtualBox virtual machine. If you have not downloaded VirtualBox yet, you can do so easily; just follow this post: http://listeningoracle.blogspot.com/2010/04/que-es-virtualbox.html or http://oracleoforacle.wordpress.com/2010/04/14/ques-es-virtualbox/. OK, now with VirtualBox installed, open it and proceed with the following:
    1. Open the Virtual Media Manager.
    2. Click on Actions -> Add and select the .VDI file. Click "Ok".
    3. Now we can register the new virtual machine: click New, and click Next.
    4. Enter a name for the virtual machine and proceed to select an operating system and version (in this case it is Linux: Oracle Enterprise Linux or Red Hat). Click Next.
    5. Select the base memory amount for the virtual machine (minimum 1280 MB in our case). Click Next.
    6. Select the disk 11GR2_OEL5_32GB.vdi that was added to the Virtual Media Manager in step 2. Don't forget to leave "Boot Hard Disk (Primary Master)" selected, given it is the only disk assigned to the virtual machine. Click Next.
    7. Click Finish.
    8. This step is important: click the Settings button.
    9. On the General option, click the Advanced settings. Here you should change the default directory for saving your snapshots; my recommendation is to set it to the same directory where the .VDI file is. Otherwise you can end up with the same virtual machine and its snapshots in different paths.
    10. Now click on System and assign the correct memory (if you did not before). Note: enable "Enable IO APIC" if you are planning to assign more than one CPU to the virtual machine. Define the processors for the virtual machine; if your processor is dual core, choose 2.
    11. Select the video memory amount you want to assign to the virtual machine.
    12. Associate more storage disks with the virtual machine if you have more VDI files (not our case). The disk must be selected as IDE Primary Master.
    13. You can verify the other options, but with these changes you will be able to start the VM. Note: sometimes the VM owner may share some instructions; if so, follow them.
    14. Finally, start the virtual machine (click Start).

    Read the article

  • SQLAuthority News – Virtual Launch Event for Office 2010 – Contest – Win MS Office License

    - by pinaldave
    Office products are integral to any PC. I admit that without the Office suite I cannot survive or make a decent living. I am a blogger and use Word to write my posts. I am a SQL Server trainer and use PowerPoint as my presentation tool. I am a SQL Server consultant and use Excel to keep my work log. I cannot imagine my life without Office tools. Just like any other Microsoft product, there is a strong community around the Office tools, and please count me in. That same community is hosting a Virtual Launch Event for Office 2010 on May 25 and 26. The webcasts are FREE to attend and people can take part either online or by going to the nearest available center. The sessions will be delivered by MVPs. To register please visit: http://www.meraoffice.com. In June, a limited number of cities will be hosting Community Launch Events for Office 2010. At the launch events, attendees will get to see Office 2010 in action and learn how to do their work better with Office 2010. The details are available on http://office.merawindows.com. To support one of the largest communities, I am announcing a contest. It is very easy to take part: you just have to answer one very simple question. Contest: Choose the best option: With which Microsoft Office product is PowerPivot associated? Options: 1) PowerPoint 2) Excel 3) Word. Hint: http://search.sqlauthority.com. Rules: The winner will be awarded one copy of Office 2007 Home and Student, freely upgradeable to Office 2010 once it releases in June. Winners will be notified by email and will redeem their awards via microsoftstore.co.in. The prizes can only be shipped to India, and only Indian residents are eligible. The winner will be selected by selected community leaders and MVPs at their sole discretion and will be informed by email about the award. The most creative and informative comment will win the contest. Please spread the word about this contest. SQLAuthority.com will also send a SQL Server book to the person who generates the most traffic to this blog post using Twitter, Facebook and other social media. This competition is also open to Indian residents only. I will measure the traffic using my wordpress.com stats plugin. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Office

    Read the article

  • Multiple file upload with asp.net 4.5 and Visual Studio 2012

    - by Jalpesh P. Vadgama
    This post is part of the Visual Studio 2012 feature series. In earlier versions of ASP.NET there was no built-in way to upload multiple files at the same time; we needed a third-party control or a custom control for that. With ASP.NET 4.5 it is now possible to upload multiple files with the FileUpload control, which Microsoft has enhanced to support the HTML5 multiple attribute. There is a property called 'AllowMultiple' to support that attribute, and with it you can easily upload several files at once. So what are we waiting for? It's time to create an example. In the default.aspx file I have written the following:

<asp:FileUpload ID="multipleFile" runat="server" AllowMultiple="true" />
<asp:Button ID="uploadFile" runat="server" Text="Upload files" onclick="uploadFile_Click"/>

Here you can see that I have given the file upload control the id multipleFile and set AllowMultiple to true. I have also added a button for uploading the files. For this example I am going to upload the files into the Images folder. As you can see, I have also attached an event handler to the button's click event. Now it's time to write the server-side code:

protected void uploadFile_Click(object sender, EventArgs e)
{
    if (multipleFile.HasFiles)
    {
        foreach (HttpPostedFile uploadedFile in multipleFile.PostedFiles)
        {
            uploadedFile.SaveAs(System.IO.Path.Combine(Server.MapPath("~/Images/"), uploadedFile.FileName));
            Response.Write("File uploaded successfully");
        }
    }
}

In the code above you can see that I check whether the file upload control has files and then save them into the Images folder of the web application. Once you run the application in a browser, select two files and click the upload button; it shows a success message and you can see the uploaded files in Windows Explorer. As you can see, it's very easy to upload multiple files in ASP.NET 4.5. Stay tuned for more. Till then, happy programming. P.S.: This feature is only supported in browsers that support HTML5 multiple file upload. In other browsers it will work like the normal file upload control in ASP.NET.
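    A small extension of the handler above (not part of the original post) is sketched below: one way you might validate the posted files before saving them. The extension whitelist and size limit are illustrative assumptions, and the snippet assumes a using System.Linq directive for Contains.

protected void uploadFile_Click(object sender, EventArgs e)
{
    // Hypothetical rules: only image files, at most ~4 MB each (both values are assumptions).
    string[] allowedExtensions = { ".jpg", ".jpeg", ".png", ".gif" };
    const int maxBytes = 4 * 1024 * 1024;

    if (!multipleFile.HasFiles)
    {
        return;
    }

    foreach (HttpPostedFile uploadedFile in multipleFile.PostedFiles)
    {
        string extension = System.IO.Path.GetExtension(uploadedFile.FileName).ToLowerInvariant();

        // Skip files that do not match the whitelist or exceed the size limit.
        if (!allowedExtensions.Contains(extension) || uploadedFile.ContentLength > maxBytes)
        {
            Response.Write("Skipped: " + Server.HtmlEncode(uploadedFile.FileName) + "<br/>");
            continue;
        }

        uploadedFile.SaveAs(System.IO.Path.Combine(Server.MapPath("~/Images/"), uploadedFile.FileName));
        Response.Write("Uploaded: " + Server.HtmlEncode(uploadedFile.FileName) + "<br/>");
    }
}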

    Read the article

  • Dynamic meta description and keyword tags for your MasterPages

    - by Aamir Hasan
     Today we're going to look at a technique for dynamically inserting meta tags into your master pages. By taking control of the head tag and inserting your own HtmlMeta controls you can easily customise these tags. You might have noticed that when you create a new master page in Visual Studio your <head> tag gets decorated with a runat="server" attribute. ASP.NET doesn't add this kind of decoration to any other HTML tags (although you are free to add it if you want). So what makes the head tag special? By adding runat="server" you are actually converting the element into an HtmlHead control. That doesn't particularly matter for this tutorial, other than to note that given a reference to the head control you get all the extras that come with ASP.NET controls, such as access to its Controls collection. The HtmlMeta control lets us wrap up <meta> tags via ASP.NET code. To add a meta description we need to create an instance, set the Name property and the Content property, and then add it to the head:

ASP.NET (C#)

protected void Page_Init(object sender, EventArgs e)
{
    // Add meta description tag
    HtmlMeta metaDescription = new HtmlMeta();
    metaDescription.Name = "Description";
    metaDescription.Content = "Short, unique and keywords rich page description.";
    Page.Header.Controls.Add(metaDescription);

    // Add meta keywords tag
    HtmlMeta metaKeywords = new HtmlMeta();
    metaKeywords.Name = "Keywords";
    metaKeywords.Content = "selected,page,keywords";
    Page.Header.Controls.Add(metaKeywords);
}

ASP.NET (VB.NET)

Protected Sub Page_Init(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Init
    ' Add meta description tag
    Dim metaDescription As HtmlMeta = New HtmlMeta()
    metaDescription.Name = "Description"
    metaDescription.Content = "Short, unique and keywords rich page description."
    Page.Header.Controls.Add(metaDescription)

    ' Add meta keywords tag
    Dim metaKeywords As HtmlMeta = New HtmlMeta()
    metaKeywords.Name = "Keywords"
    metaKeywords.Content = "selected,page,keywords"
    Page.Header.Controls.Add(metaKeywords)
End Sub
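    In practice, each content page usually wants its own description and keywords. Not from the original article, but one hedged way to do this is to expose a small helper on the master page and call it from the content page; the SiteMaster class name is an assumption, so substitute your own master page class:

// In the master page code-behind (assumed class name: SiteMaster).
// Requires: using System.Web.UI.HtmlControls;
public void SetMetaTags(string description, string keywords)
{
    HtmlMeta metaDescription = new HtmlMeta { Name = "Description", Content = description };
    HtmlMeta metaKeywords = new HtmlMeta { Name = "Keywords", Content = keywords };
    Page.Header.Controls.Add(metaDescription);
    Page.Header.Controls.Add(metaKeywords);
}

// In a content page that uses this master page:
protected void Page_Init(object sender, EventArgs e)
{
    ((SiteMaster)Master).SetMetaTags(
        "Short, unique description for this particular page.",
        "page,specific,keywords");
}

This keeps the meta-tag logic in one place while letting every page decide its own values.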

    Read the article

  • Meta package / quick reference for string manipulation commands?

    - by Dylan McCall
    The latest version of the Scribes text editor lets us select some text, hit Alt+X, and then run an arbitrary command. For example, I can run the sort command and the selected text is replaced appropriately. This is quite useful but I am also not very well-versed in awk and the like. Is there something I can grab that will provide more of these commands like sort? Maybe a package with a whole bunch of handy, task-specific string manipulation commands?

    Read the article

  • Efficient way of drawing outlines around sprites

    - by Greg
    Hi, I'm using XNA to program a game, and have been experimenting with various ways to achieve a 'selected' effect on my sprites. The trouble I am having is that each clickable that is drawn in the spritebatch is drawn using more than a single sprite (each object can be made up of up to 6 sprites). I'd appreciate it if someone could advise me on how I could achieve adding an outline to my sprites of X pixels (so the outline width can be various numbers of whole pixels). Thanks in advance, Greg.
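    One common approach (offered here as a minimal sketch, not from the original question) is to draw a solid-colour silhouette of each sprite at several pixel offsets behind it, then draw the sprite itself on top. It assumes you can pre-generate a flat-white silhouette texture for each sprite (or produce one via a simple pixel shader); because each clickable is built from up to six sprites, you would draw all of the object's silhouettes first and then all of its sprites, so the outline does not overlap sibling sprites.

// Stamps the silhouette around the sprite's position to fake an outline of
// 'outlineWidth' pixels, then draws the real sprite over it.
void DrawWithOutline(SpriteBatch spriteBatch, Texture2D sprite, Texture2D silhouette,
                     Vector2 position, int outlineWidth, Color outlineColor)
{
    for (int x = -outlineWidth; x <= outlineWidth; x++)
    {
        for (int y = -outlineWidth; y <= outlineWidth; y++)
        {
            if (x == 0 && y == 0)
            {
                continue;
            }
            // The silhouette is assumed to be white where the sprite is opaque,
            // so tinting it produces the outline colour.
            spriteBatch.Draw(silhouette, position + new Vector2(x, y), outlineColor);
        }
    }

    spriteBatch.Draw(sprite, position, Color.White);
}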

    Read the article

  • TSQL Challenge 28 - SELECT TOP N articles from each category from a SQL Server 2000 database

    The challenge is to write a query that returns the articles to be displayed in the home page of the website. N number of articles from each category is to be selected where N is configured in the Categories table. Each category should select the most recent N articles. ArticleID can be used to identify the most recent articles. An article with a higher number indicates a more recent article.

    Read the article

  • RTS Movement + Navigation + Destination

    - by Oliver Jones
    I'm looking into building my own simple RTS game, and I'm trying to get my head around the movement of single and multi-selected units. (Developing in Unity.) After much research, I now know that it's a bigger task than I thought, so I need to break it down. I already have an A* navigation system with static obstacles taken into account; I don't want to worry about dynamic local avoidance right now. So I guess my first breakdown question would be: how would I go about moving multiple units to the same location? Right now my units move to the location, but because they're all told to go to the same spot, they start to 'fight' over one another to get there. I think there are two paths to go down:
1) Give each individual unit a separate destination point that is close to the 'master' destination point, and get the units to move to that.
2) Group my selected units in a flock formation, and move that entire flock towards the destination point.
Questions about each path:
1a) How can I go about finding a suitable destination point that is close to the master destination? What happens if there isn't a suitable destination point?
1b) Would this be more CPU-heavy, as it has to compute a path for each unit? (40-unit count.)
2a) Is this a good idea: not giving the units themselves a destination, but instead the flock (which holds the units within)? The units within the flock could then maintain a formation (local avoidance), though again local avoidance is not an issue at this time.
2b) I'm not sure what results I would get with a flock of 5 units versus a flock of 40 units, as the radius would be greater, which might mess up my A* navigation system. In other words, a flock of 2 units will be able to move down an alleyway, but a flock of 40 won't, and my nav system won't take that into account.
I would appreciate any feedback. Kind regards, Ollie Jones
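    A minimal sketch of path 1 (not part of the original question): spread the selected units over a small grid centred on the clicked point, so each unit gets its own nearby destination. The spacing constant and the Unit.MoveTo(Vector3) method are assumptions; MoveTo would hand the point to your existing A* system, ideally falling back to the nearest walkable node if the target cell is blocked.

// Assumes: using System.Collections.Generic; using UnityEngine;
// 'Unit' is a hypothetical component exposing MoveTo(Vector3), which requests a path.
void IssueMoveOrder(List<Unit> selectedUnits, Vector3 masterDestination)
{
    const float spacing = 1.5f; // assumed footprint of one unit, in world units
    int unitsPerRow = Mathf.CeilToInt(Mathf.Sqrt(selectedUnits.Count));

    for (int i = 0; i < selectedUnits.Count; i++)
    {
        int row = i / unitsPerRow;
        int col = i % unitsPerRow;

        // Offset each unit from the master destination so no two units share a target.
        Vector3 offset = new Vector3(
            (col - (unitsPerRow - 1) * 0.5f) * spacing,
            0f,
            (row - (unitsPerRow - 1) * 0.5f) * spacing);

        selectedUnits[i].MoveTo(masterDestination + offset);
    }
}

This keeps the per-unit pathfinding cost (the 1b concern) at one A* request per unit, which is usually acceptable for around 40 units if paths are requested once per order rather than every frame.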

    Read the article

  • Charms and the App Bar

    - by Dennis Vroegop
    Ok. I admit. I made a mistake in the last post about our planespotter app: I dedicated a full part of the hub to Social. I also had a section called Friends, but that made sense, since I said that "Friends" is a special group of people who connect to each other through our app and only our app. Social, however, is sharing our spots with Twitter, Facebook and so on. Now, we could write that functionality into a separate section of our app, but there is one small problem with that: users don't expect it. Ok, I admit: the mistake was quite deliberate, to give me an excuse to write this part. But still, the mistake is one I see a lot. People try to do things in their application that they shouldn't be doing. This always strikes me as slightly odd: why do the work when others have already done it for you and you can just use it? After all, good developers are lazy (lazy people will always try to find the easiest way to do something, and in development land this usually means the cleanest and easiest-to-support way…). So what is that part that Microsoft has done for us and we don't have to do ourselves? The answer lies on the right-hand side of your Win8 screen. This is a screenshot of my tablet (as you can see, I am writing this right now…). When I swipe my finger from outside the right edge of the screen inwards (or move the mouse to the upper right corner) this menu appears. Next to Settings and the Start button we find the Search and the Share charms. These are two ways that your app can share the information it contains with the rest of the world, or at least the rest of your system. So don't write a Search feature in your app. Don't write a Share feature in your app. It's here already. Users, once they are used to Windows 8, will use these features and expect them to work. If they don't, users won't like your app and you can kiss your dreams of everlasting fame goodbye. So use these two. What are they? Well, simply put, they are parts of a contract. In your app you say somewhere in code that you support Search and Share. When the user selects Share, the system asks the app currently in the foreground whether it supports this feature. Your app will say "But why, yes, I do!" Then the system will ask the app "Ok then, wisecrack, then share!" and you will have to provide the system with some information about the format. Other applications have subscribed to the receiving end of the Share contract: they have told the system that they support sharing (receiving) and which formats they understand. If one or more of them support the formats you specify, the user will see them. The user clicks or taps the app of their choice and the data is moved from your app to the new one. So if you say you support Facebook and Twitter, users can post data from your app to these networks by selecting Share. The same applies to Search: don't make a "search" button in your app, but use the contract to tell the system that you support search, and use that instead. Users will be grateful (remember that bar with men/women/creatures waiting for you?). The more people get to know Windows 8, the more they will use this. And if you are one of the people who wrote an app that helped them learn the system, well, that's even better. So: we don't have a Share or a Search button. We do have other buttons. Most important, we probably need a "New Spot" button. A "Filter" might be useful. Or some way to open the camera so you can add a picture to the spot. Where will we put those?
The answer is the "Appbar". This is an application- and context-aware menu that slides up from the bottom of the screen when you move your finger or mouse from below the screen into it; from above downwards works just as well. Here you see an example of the appbar from the People app. It appears whenever you slide your finger up from below or down from above. This is where you put your commands. Remember, this menu is context aware, so it will change when you are in different parts of your app or when you have selected different items. There are a few conventions when you create this appbar. First, the items on the right are "general" items, meaning they have little to do with what is on the screen right now. I think this would be a great place to add our "New Spot" icon. On the far left are items associated with the currently selected item or screen. So if you have a spot selected, the button for Add Photo should be visible there on the left-hand side. Not everything is as clear-cut as this, but this is what you should strive for. Group items together. And please note: this is the only place in Metro design where we are allowed to use lines as separators, so when you want to separate one group of icons from another, add a line. Also note the simplicity of the buttons. No colors, no lights or shadows, no 3D. After a couple of years of fancy, almost realistic-looking icons, people have finally decided that, hey, this is a virtual world: it's ok to look virtual as well. So make things as readable and clear as possible and don't try to duplicate nature. It's all about the information, remember? (If you don't remember, I'd like to point you to an older blog post of mine about the what and why of Metro.) So: think about the buttons a bit, and think about Share and Search. What will you put there? Remember, this is the way users interact with your app, and while you shouldn't judge a book by its cover when it comes to people, this isn't entirely true when it comes to apps. People DO judge an app by its looks and the way it feels. Take advantage of that. History has shown that a crappy app with a GREAT user interface gets better reviews than a GREAT app with a lousy UI… I know developers will find this extremely unfair, but that's the world we live in (no, I am not saying you should deliver rubbish apps). Next time: we'll start by building the darn thing!
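Not from the original post, but to make the "contract" idea concrete, here is a minimal sketch of what registering as a Share source can look like in a C#/XAML Metro-style app. The spot-related names (currentSpot, AircraftType, Location) are assumptions for the planespotter example:

// Requires: using Windows.ApplicationModel.DataTransfer;
// Hook up once, e.g. in OnNavigatedTo:
DataTransferManager.GetForCurrentView().DataRequested += OnDataRequested;

void OnDataRequested(DataTransferManager sender, DataRequestedEventArgs args)
{
    // 'currentSpot' is a hypothetical object holding the currently selected spot.
    args.Request.Data.Properties.Title = "Plane spotted!";
    args.Request.Data.Properties.Description = "Shared from the planespotter app.";
    args.Request.Data.SetText(
        "Spotted a " + currentSpot.AircraftType + " at " + currentSpot.Location);
}

With this in place, the system's Share charm will offer your data to any installed target app that accepts plain text; you never have to build a Facebook or Twitter button yourself.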

    Read the article

  • Auto-select external soundcard?

    - by dandelion
    Just got an external USB sound card because my internal speakers broke. The new sound card works just fine after plugging it in, going to System Settings > Sound, and then selecting the device. However, I was curious whether there is a way to make Ubuntu auto-select the device without having to go through System Settings. Can it just be selected upon plugging it in? Any and all help appreciated :)) (P.S. it's running 12.04)

    Read the article

< Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >