Search Results

Search found 33538 results on 1342 pages for 'select query'.

Page 293/1342 | < Previous Page | 289 290 291 292 293 294 295 296 297 298 299 300  | Next Page >

  • Is it possible to write a SQL query to return specific rows, but then join some columns of those rows?

    - by Rob
    I'm having trouble wrapping my head around how to write this query. A hypothetical problem that is the same as the one I'm trying to solve: say I have a table of apples. Each apple has numerous attributes, such as color_id, variety_id and the orchard_id they were picked from. The color_id, variety_id, and orchard_id all refer to their respective tables: colors, varieties, and orchards. Now, say I need to query for all apples that have color_id = '3', which refers to yellow in the colors table. I want to somehow obtain this yellow value from the query. Make sense? Here's what I was trying: SELECT * FROM apples, colors.id WHERE color_id = '3' LEFT JOIN colors ON apples.color_id = colors.id
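
    For what it's worth, a corrected form of that query might look like the sketch below: the JOIN clause has to come before WHERE, and the colors table's value column is assumed here to be called name, since the question doesn't show its definition.

      SELECT apples.*, colors.name AS color
      FROM apples
      LEFT JOIN colors ON apples.color_id = colors.id
      WHERE apples.color_id = '3';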

    Read the article

  • After grouping by, can I refer to the elements of the original IEnumerable in a LINQ query?

    - by michielvoo
    Example: from OriginalObject in ListOfOriginalObjects group new CustomObject { X = OriginalObject.A, Y = OriginalObject.B } by OriginalObject.Z into grouping select new GroupOfCustomObjects { Z = grouping.Key, C = OriginalObject.C, group = grouping } In the select part of the query, I'd like to add a property (OriginalObject.C) to the type GroupOfCustomObjects. But it seems that OriginalObject is out of scope in that part of the query. I can sort of understand why, since I am not grouping on that property and I am also not making that property part of the CustomObject that I'm grouping. One workaround is to add a property C to CustomObject, and then in the GroupOfCustomObjects read the value of the first CustomObject in the grouping. My issue with that is that I'm adding a property to an object that doesn't need it (CustomObject), just to be able to add it to the GroupOfCustomObjects. I hope I have explained this properly! Is there a way to refer to the OriginalObject that the query starts with? Thanks!
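
    A sketch of the workaround described above, in method syntax and using the question's type names; it assumes C is identical for every OriginalObject sharing the same Z, since it reads the value from the first element of each grouping.

      var result = ListOfOriginalObjects
          .GroupBy(o => o.Z, o => new CustomObject { X = o.A, Y = o.B, C = o.C })
          .Select(g => new GroupOfCustomObjects
          {
              Z = g.Key,
              C = g.First().C, // only safe if C is constant within the group
              group = g
          });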

    Read the article

  • How to build a LINQ query from text at runtime?

    - by Danvil
    I have a class A { public int X; public double Y; public string Z; // and more fields/properties ... } and a List<A> data, and I can build a LINQ query like e.g. var q = from a in data where a.X > 20 select new {a.Y, a.Z}; Then dataGridView1.DataSource = q.ToList(); displays the selection in my DataGridView. Now the question: is it possible to build the query from text the user has entered at runtime? Like var q = QueryFromText("from a in data where a.X > 20 select new {a.Y, a.Z}"); The point being that the user (having programming skills) can dynamically and freely select the displayed data.
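
    The exact query-expression string above can't be parsed without invoking the C# compiler itself, but for where/select fragments the Dynamic LINQ sample library (System.Linq.Dynamic, from the Visual Studio samples) comes close; a sketch:

      using System.Linq.Dynamic; // the Dynamic LINQ sample library

      // "X > 20" and "new(Y, Z)" are parsed at runtime from user input.
      var q = data.AsQueryable()
                  .Where("X > 20")
                  .Select("new(Y, Z)");
      dataGridView1.DataSource = q.Cast<object>().ToList();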

    Read the article

  • C#: How to make a LINQ master-detail query for a 0..n relationship?

    - by JK
    Given a classic DB structure where an Order has zero or more OrderLines and an OrderLine has exactly one Product, how do I write a LINQ query to express this? The output would be OrderNumber - OrderLine - Product Name Order-1 null null // (this order has no lines) Order-2 1 Red widget I tried this query, but it is not returning the orders with no lines var model = (from po in Orders from line in po.OrderLines select new { OrderNumber = po.Id, OrderLine = line.LineNumber, ProductName = line.Product.ProductDescription, } ) I think that the 2nd from is limiting the query to only those that have OrderLines, but I don't know another way to express it. LINQ is very non-obvious if you ask me!
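
    A sketch of the usual fix: DefaultIfEmpty() turns the second from into a left outer join, so orders with no lines survive with null placeholders.

      var model =
          from po in Orders
          from line in po.OrderLines.DefaultIfEmpty()
          select new
          {
              OrderNumber = po.Id,
              OrderLine = line == null ? (int?)null : line.LineNumber,
              ProductName = line == null ? null : line.Product.ProductDescription,
          };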

    Read the article

  • Dynamic SQL Rows & Columns...cells require subsequent query. Best approach?

    - by Pyrrhonist
    I have the following tables below City --------- CityID StateID Name Description Reports --------- ReportID HeaderID FooterID Description I'm trying to generate a grid for use in a .Net control (Gridview, Listview…separate issue about which will be the 'best' one to use for my purposes) which will assign the reports as the columns and the cities as the rows. Which cities get displayed is based on the state selected, and is easy enough: SELECT * FROM CITIES WHERE STATEID=@StateID However, the user is able to select which reports are generated for each City (Demographics, Sales, Land Area, etc.). Further, each resultant cell (City * Report) requires a sub-query on different tables, based on the city selected and the report. I.e. selecting the Sales column yields SELECT * FROM SALES WHERE CITYID=@CityID I've programmed a VERY inelegant solution using multiple queries and brute-forcing the grid to be created (line by line, row by row creation of data elements), but I'm positive there's got to be a better way of accomplishing this…? Any / all suggestions appreciated here, as the brute force approach I've gotten is slow and cumbersome…and this will have to be used often by the client, so I'm not sure it'll be acceptable in its current implementation.
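
    One way to cut down the per-cell queries is to aggregate each report table once and join it to the filtered city list, so the whole grid binds to a single result set; a sketch for a Sales column only (the Amount column is an assumption, since the Sales schema isn't shown):

      SELECT c.CityID, c.Name, s.TotalSales
      FROM Cities c
      LEFT JOIN (SELECT CityID, SUM(Amount) AS TotalSales
                 FROM Sales
                 GROUP BY CityID) s ON s.CityID = c.CityID
      WHERE c.StateID = @StateID;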

    Read the article

  • MySQL query to retrieve random combinations from two tables.

    - by Michael
    Alright, here is my issue: I have two tables, one named firstnames and the other named lastnames. What I am trying to do here is to find 100 of the possible combinations of these names for test data. The firstnames table has 5494 entries in a single column, and the lastnames table has 88799 entries in a single column. The only query that I have been able to come up with that gives some results is: select * from (select * from firstnames order by rand()) f LEFT JOIN (select * from lastnames order by rand()) l on 1=1 limit 10; The problem with this code is that it selects 1 firstname and gives every lastname that could go with it. While this is plausible, I would have to set the limit to 500000000 in order to get all the possible combinations without having only 20 first names (and I'd rather not kill my server). However, I only need 100 randomly generated entries for test data, and I cannot get that with this code. Can anyone please give me any advice?
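
    A sketch of one way to get exactly 100 random pairs without the full cross product; it relies on MySQL re-evaluating a subquery containing RAND() for every outer row (non-deterministic subqueries are not cached), and assumes the single column in each table is called name, so verify both points on your version:

      SELECT
        (SELECT name FROM firstnames ORDER BY RAND() LIMIT 1) AS firstname,
        (SELECT name FROM lastnames ORDER BY RAND() LIMIT 1) AS lastname
      FROM firstnames
      LIMIT 100;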

    Read the article

  • How does the dataset determine the return type of a scalar query?

    - by Tobias Funke
    I am attempting to add a scalar query to a dataset. The query is pretty straight forward, it's just adding up some decimal values in a few columns and returning them. I am 100% confident that only one row and one column is returned, and that it is of decimal type (SQL money type). The problem is that for some reason, the generated method (in the .designer.cs code file) is returning a value of type object, when it should be decimal. What's strange is that there's another scalar query that has the exact same SQL but is returning decimal like it should. How does the dataset designer determine the data type, and how can I tell it to return decimal?
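
    The generated method returns object because it surfaces ExecuteScalar's untyped result when the designer can't pin down the column type; if re-creating the query doesn't fix the inference, one low-friction workaround is a partial-class wrapper that converts (all names below are hypothetical):

      public partial class OrdersTableAdapter
      {
          // GetOrderTotal() stands in for the designer-generated scalar query method.
          public decimal GetOrderTotalAsDecimal()
          {
              return Convert.ToDecimal(this.GetOrderTotal());
          }
      }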

    Read the article

  • How do you DELETE rows in a MySQL table that have a field IN the results of another query?

    - by user354825
    Here's what the statement looks like: DELETE FROM videoswatched vw2 WHERE vw2.userID IN ( SELECT vw.userID FROM videoswatched vw JOIN users u ON vw.userID=u.userID WHERE u.companyID = 1000 GROUP BY userID ) That looks decent to me, and the SELECT statement works on its own (producing rows with a single column, userID). Basically, I want to delete entries in the videoswatched table where the userID in the videoswatched entry, after joining to the users table, is found to have companyID = 1000. How can I do this without getting the error in my SQL syntax? It says the error is near: vw2 WHERE vw2.userID IN ( SELECT vw.userID FROM videoswatched vw JOIN users u at line 1. Thanks!
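
    MySQL rejects an alias in the single-table DELETE syntax used above, and it also won't let a subquery select from the table being deleted; the multi-table DELETE form below sidesteps both (a sketch):

      DELETE vw
      FROM videoswatched vw
      JOIN users u ON vw.userID = u.userID
      WHERE u.companyID = 1000;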

    Read the article

  • MVC: should more specific models be populated by more precise queries too?

    - by KevinUK
    If you have a Car model with 20 or so properties (and several table joins) for a carDetail page, then your LINQ to SQL query will be quite large. If you have a carListing page which uses under 5 properties (all from 1 table), then you use a CarSummary model. Should the CarSummary model be populated using the same query as the Car model? Or should you use a separate LINQ to SQL query, which would be more precise? I am just thinking of performance, but LINQ uses lazy loading anyway, so I am wondering whether this is an issue or not.
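
    For what it's worth, deferred execution doesn't trim columns; only a projection does. A separate query that projects straight into CarSummary keeps the listing page down to the handful of columns it needs (property names below are assumed):

      var summaries = from c in db.Cars
                      select new CarSummary
                      {
                          CarId = c.CarId,
                          Make = c.Make,
                          Model = c.Model,
                          Year = c.Year,
                          Price = c.Price
                      };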

    Read the article

  • Any idea why this query always returns duplicate items?

    - by Kardo
    I want to get all Images not used by the current ItemID. I tried the subquery below, but it always returns duplicate Images: EDITED select Images.ImageID, Images.ItemStatus, Images.UserName, Images.Url, Image_Item.ItemID, Image_Item.ItemID from Images left join (select ImageID, ItemID, MAX(DateCreated) x from Image_Item where ItemID != '5a0077fe-cf86-434d-9f3b-7ff3030a1b6e' group by ImageID, ItemID having count(*) = 1) image_item on Images.imageid = image_item.imageid where ItemID is not null I guess the problem is the subquery, where I can't avoid duplicate rows: select ImageID, ItemID, MAX(DateCreated) x from Image_Item where ItemID != '5a0077fe-cf86-434d-9f3b-7ff3030a1b6e' group by ImageID, ItemID having count(*) = 1 Result: F2EECBDC-963D-42A7-90B1-4F82F89A64C7 0578AC61-3C32-4A1D-812C-60A09A661E71 F2EECBDC-963D-42A7-90B1-4F82F89A64C7 9A4EC913-5AD6-4F9E-AF6D-CF4455D81C10 42BC8B1A-7430-4915-9CDA-C907CBC76D6A CB298EB9-A105-4797-985E-A370013B684F 16371C34-B861-477C-9A7C-DEB27C8F333D 44E6349B-7EBF-4C7E-B3B0-1C6E2F19992C Table: Images ImageID uniqueidentifier UserName nvarchar(100) DateCreated smalldatetime Url nvarchar(250) ItemStatus char(1) Table: Image_Item ImageID uniqueidentifier ItemID uniqueidentifier UserName nvarchar(100) ItemStatus char(1) DateCreated smalldatetime Any kind of help is highly appreciated.
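
    If the goal is simply "images with no link to this item", one reading of the intent that can't produce duplicates from the Images side is NOT EXISTS; a sketch:

      SELECT i.ImageID, i.ItemStatus, i.UserName, i.Url
      FROM Images i
      WHERE NOT EXISTS (SELECT 1
                        FROM Image_Item ii
                        WHERE ii.ImageID = i.ImageID
                          AND ii.ItemID = '5a0077fe-cf86-434d-9f3b-7ff3030a1b6e');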

    Read the article

  • How to get the 2nd and 3rd batches of the same query result in Oracle SQL + the Yii framework?

    - by sasori
    Let's say I have 20 results in the SQL query. If I'm going to use the limit in the Yii active record, I'll obviously get the first four from the result, but what if I want to get the 2nd four and then the 3rd four from the same query result? How do I query that via SQL? e.g. $criteria2 = new CDbCriteria(); $criteria2->select = 'USERID, ADID ,ADTYPE, ADTITLE, ADDESC, PAGEVIEW, DISPPUBLISHDATE'; $criteria2->addCondition("STATUS = 1"); $criteria2->order = '"t".PAGEVIEW DESC,"t".PUBLISHDATE DESC'; $criteria2->limit = 4; $criteria2->with = array('subcat','adimages'); $result = $this->findAll($criteria2); return $result;
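
    CDbCriteria also has an offset property, so each batch is the same query with a different offset (in SQL terms, rows 5-8 and 9-12); a sketch:

      $criteria2->limit = 4;
      $criteria2->offset = 4;    // the 2nd batch (rows 5-8)
      // $criteria2->offset = 8; // the 3rd batch (rows 9-12)
      $result = $this->findAll($criteria2);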

    Read the article

  • How to query SQLite for certain rows, i.e. dividing results into pages (Perl DBI)

    - by user1380641
    Sorry for my noob question. I'm currently writing a Perl web application with an SQLite database behind it. I would like to be able to show query results in my app which might run to thousands of rows; these should be split into pages, and routing should be like /webapp/N, where N is the page number. What is the correct way to query the SQLite db using DBI in order to fetch only the relevant rows? For instance, if I show 25 rows per page, I want to query the db for rows 1-25 on the first page, 26-50 on the second page, etc.... Thanks in advance!
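
    A sketch using DBI placeholders, mapping page N (1-based) to LIMIT/OFFSET; the table and ordering column are assumptions:

      my $page     = 2;                     # e.g. taken from the /webapp/N route
      my $per_page = 25;
      my $offset   = ($page - 1) * $per_page;
      my $sth = $dbh->prepare('SELECT * FROM results ORDER BY id LIMIT ? OFFSET ?');
      $sth->execute($per_page, $offset);
      while (my $row = $sth->fetchrow_hashref) {
          # render one row of the current page
      }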

    Read the article

  • How do I use 2 include statements in a single MVC EF query?

    - by alockrem
    I am trying to write a query that includes 2 joins: 1 StoryTemplate can have multiple Stories; 1 Story can have multiple StoryDrafts. I am starting the query on the StoryDrafts object because that is where it's linked to the UserId. I don't have a reference from the StoryDrafts object directly to the StoryTemplates object. How would I build this query properly? public JsonResult Index(int userId) { return Json( db.StoryDrafts .Include("Story") .Include("StoryTemplate") .Where(d => d.UserId == userId) ,JsonRequestBehavior.AllowGet); } Thank you for any help.
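
    Since StoryTemplate hangs off Story rather than off StoryDrafts, the string-based Include takes a dotted path; a sketch, assuming Story exposes a StoryTemplate navigation property:

      public JsonResult Index(int userId)
      {
          return Json(
              db.StoryDrafts
                .Include("Story.StoryTemplate") // eager-load each draft's story and its template
                .Where(d => d.UserId == userId),
              JsonRequestBehavior.AllowGet);
      }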

    Read the article

  • How to Create Auto Playlists in Windows Media Player 12

    - by DigitalGeekery
    Are you getting tired of the same old playlists in Windows Media Player? Today we'll show you how to create dynamic auto playlists based on criteria you choose in WMP 12 in Windows 7. Auto Playlists In Library view, click on the Create playlist dropdown arrow and select Create auto playlist. On the New Auto Playlist window, type in a name for the playlist in the text box. Now we need to choose the criteria by which to filter our playlist. Select Click here to add criteria. For our example, we will create a playlist of songs that were added to the library in the last week from the Alternative genre. So, we will first select Date Added from the dropdown list. Many criteria will have additional options to configure. In the example below you will see that we have a few options to fine tune.   We will filter all the songs added to the library in the last 7 days. We will select Is After from the first dropdown list. Then select Last 7 Days from the second dropdown list. You can add multiple criteria to further filter your playlist. If you can't find the criteria you are looking for, select "More" at the bottom of the dropdown list.   This will pull up a filter window with all the criteria. Select a filter and then click OK when finished.   From the Genre dropdown, we will select Alternative. If you'd like to add Pictures, Videos, or TV Shows to your auto playlists, you can do so by selecting them from the dropdown list under And also include. You will then be able to select criteria for your pictures, videos, or TV shows from the dropdown list.   Finally, you can also add restrictions to your music, such as the number of items, duration, or total size. We will limit the duration of our playlist to one hour by selecting Limit Total Duration To… Then type in 1 hour…Click OK.   Our library is automatically filtered and a playlist is created based on the criteria we selected. When additional songs are added to the Windows Media Player library, any new songs that fit the criteria will automatically be added to the New Songs playlist. You can also save a copy of an auto playlist as a regular playlist. Switch to Playlists view by clicking Playlists from either the top menu or the navigation bar. Select the Play tab and then click Clear list to remove any tracks from the list pane.   Right-click on the playlist you want to save, select Add to, and then Play list. The songs from your auto playlist will appear as an Unsaved list on the list pane. Click Save list. Type in a name for your playlist. Your auto playlist will continue to change as you add or remove items from your Media Player library that meet the criteria you established. The new saved playlist we just created will stay as it is currently. Editing an auto playlist is easy. Right-click on the playlist and select Edit. Now you are ready to enjoy your playlist. Conclusion Auto playlists are a great way to keep your playlists fresh in Windows Media Player 12. Users can get creative and experiment with the wide variety of criteria to customize their listening experience. If you are new to playlists in Windows Media Player, you may want to check out our previous post on how to create custom playlists in Windows Media Player 12. Are you looking to get better sound from WMP 12? Take a look at how to improve playback using enhancements in Windows Media Player 12.

    Read the article

  • Understanding LINQ to SQL (11) Performance

    - by Dixin
    [LINQ via C# series] LINQ to SQL has a lot of great features, like strong typing, query compilation, deferred execution, a declarative paradigm, etc., which are very productive. Of course, these do not come for free, and one price is performance. O/R mapping overhead Because LINQ to SQL is based on O/R mapping, one obvious overhead is that changing data usually requires retrieving data first:private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { Product product = database.Products.Single(item => item.ProductID == id); // SELECT... product.UnitPrice = unitPrice; // UPDATE... database.SubmitChanges(); } } Before updating an entity, that entity has to be retrieved by an extra SELECT query. This is slower than a direct data update via ADO.NET:private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (SqlConnection connection = new SqlConnection( "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True")) using (SqlCommand command = new SqlCommand( @"UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID", connection)) { command.Parameters.Add("@ProductID", SqlDbType.Int).Value = id; command.Parameters.Add("@UnitPrice", SqlDbType.Money).Value = unitPrice; connection.Open(); command.Transaction = connection.BeginTransaction(); command.ExecuteNonQuery(); // UPDATE... command.Transaction.Commit(); } } The above imperative code specifies the “how to do” details, with better performance. For the same reason, some articles on the Internet insist that, when updating data via LINQ to SQL, the above declarative code should be replaced by:private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.ExecuteCommand( "UPDATE [dbo].[Products] SET [UnitPrice] = {0} WHERE [ProductID] = {1}", id, unitPrice); } } Or just create a stored procedure:CREATE PROCEDURE [dbo].[UpdateProductUnitPrice] ( @ProductID INT, @UnitPrice MONEY ) AS BEGIN BEGIN TRANSACTION UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID COMMIT TRANSACTION END and map it as a method of NorthwindDataContext (explained in this post):private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.UpdateProductUnitPrice(id, unitPrice); } } As a normal trade-off for O/R mapping, a decision has to be made between performance overhead and programming productivity, according to the case. From a developer's perspective, if O/R mapping is chosen, I consistently choose the declarative LINQ code, unless this kind of overhead is unacceptable. Data retrieving overhead After the O/R-mapping-specific issue, now look into the LINQ to SQL specific issues, for example, performance in the data retrieving process. The previous post has explained that the SQL translating and executing is complex. Actually, the LINQ to SQL pipeline is similar to the compiler pipeline.
It consists of about 15 steps to translate a C# expression tree to a SQL statement, which can be categorized as: Convert: Invoke SqlProvider.BuildQuery() to convert the tree of Expression nodes into a tree of SqlNode nodes; Bind: Use the visitor pattern to figure out the meanings of names according to the mapping info, like a property for a column, etc.; Flatten: Figure out the hierarchy of the query; Rewrite: For SQL Server 2000, if needed; Reduce: Remove the unnecessary information from the tree; Format: Generate the SQL statement string; Parameterize: Figure out the parameters, for example, a reference to a local variable should be a parameter in SQL; Materialize: Execute the reader and convert the result back into typed objects. So for each data retrieval, even one which looks simple: private static Product[] RetrieveProducts(int productId) { using (NorthwindDataContext database = new NorthwindDataContext()) { return database.Products.Where(product => product.ProductID == productId) .ToArray(); } } LINQ to SQL goes through the above steps to translate and execute the query. Fortunately, there is a built-in way to cache the translated query. Compiled query When such a LINQ to SQL query is executed repeatedly, CompiledQuery can be used to translate the query once and execute it multiple times:internal static class CompiledQueries { private static readonly Func<NorthwindDataContext, int, Product[]> _retrieveProducts = CompiledQuery.Compile((NorthwindDataContext database, int productId) => database.Products.Where(product => product.ProductID == productId).ToArray()); internal static Product[] RetrieveProducts( this NorthwindDataContext database, int productId) { return _retrieveProducts(database, productId); } } The new version of RetrieveProducts() gets better performance, because only when _retrieveProducts is invoked for the first time does it internally invoke SqlProvider.Compile() to translate the query expression. It also uses a lock to make sure the query is translated only once in multi-threading scenarios. Static SQL / stored procedures without translating Another way to avoid the translating overhead is to use static SQL or stored procedures, just as in the above examples. Because this is a functional programming series, this article does not dive into them. For the details, Scott Guthrie already has some excellent articles: LINQ to SQL (Part 6: Retrieving Data Using Stored Procedures) LINQ to SQL (Part 7: Updating our Database using Stored Procedures) LINQ to SQL (Part 8: Executing Custom SQL Expressions) Data changing overhead Looking into the data updating process, it also needs a lot of work: Begins a transaction; Processes the changes (ChangeProcessor): walks through the objects to identify the changes and determines the order of the changes; Executes the changes: LINQ queries may be needed to execute the changes (like the first example in this article, where an object needs to be retrieved before being changed, so the whole data retrieving process above is gone through); If there is user customization, it will be executed (for example, a table's INSERT / UPDATE / DELETE can be customized in the O/R designer). It is important to keep this overhead in mind.
Bulk deleting / updating Another thing to be aware of is bulk deleting:private static void DeleteProducts(int categoryId) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.Products.DeleteAllOnSubmit( database.Products.Where(product => product.CategoryID == categoryId)); database.SubmitChanges(); } } The expected SQL should be like:BEGIN TRANSACTION exec sp_executesql N'DELETE FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9 COMMIT TRANSACTION However, as mentioned before, the actual SQL retrieves the entities and then deletes them one by one:-- Retrieves the entities to be deleted: exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9 -- Deletes the retrieved entities one by one: BEGIN TRANSACTION exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=78,@p1=N'Optimus Prime',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0 exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=79,@p1=N'Bumble Bee',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0 -- ... COMMIT TRANSACTION And the same goes for bulk updating. This is really not efficient, and needs to be kept in mind. There are already some solutions on the Internet, like this one. The idea is to wrap the above SELECT statement in an INNER JOIN:exec sp_executesql N'DELETE [dbo].[Products] FROM [dbo].[Products] AS [j0] INNER JOIN ( SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0) AS [j1] ON ([j0].[ProductID] = [j1].[ProductID])', -- The Primary Key N'@p0 int',@p0=9 Query plan overhead The last thing is about the SQL Server query plan. Before .NET 4.0, LINQ to SQL had an issue (not sure if it is a bug). LINQ to SQL internally uses ADO.NET, but it does not set the SqlParameter.Size for a variable-length argument, like an argument of NVARCHAR type, etc. So for two queries with the same SQL but different argument lengths:using (NorthwindDataContext database = new NorthwindDataContext()) { database.Products.Where(product => product.ProductName == "A") .Select(product => product.ProductID).ToArray(); // The same SQL and argument type, different argument length.
database.Products.Where(product => product.ProductName == "AA") .Select(product => product.ProductID).ToArray(); } Pay attention to the argument length in the translated SQL:exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(1)',@p0=N'A' exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(2)',@p0=N'AA' Here is the overhead: the first query's query plan cache is not reused by the second one:SELECT sys.syscacheobjects.cacheobjtype, sys.dm_exec_cached_plans.usecounts, sys.syscacheobjects.[sql] FROM sys.syscacheobjects INNER JOIN sys.dm_exec_cached_plans ON sys.syscacheobjects.bucketid = sys.dm_exec_cached_plans.bucketid; They actually use different query plans. Again, pay attention to the argument length in the [sql] column (@p0 nvarchar(2) / @p0 nvarchar(1)). Fortunately, in .NET 4.0 this is fixed:internal static class SqlTypeSystem { private abstract class ProviderBase : TypeSystemProvider { protected int? GetLargestDeclarableSize(SqlType declaredType) { SqlDbType sqlDbType = declaredType.SqlDbType; if (sqlDbType <= SqlDbType.Image) { switch (sqlDbType) { case SqlDbType.Binary: case SqlDbType.Image: return 8000; } return null; } if (sqlDbType == SqlDbType.NVarChar) { return 4000; // Max length for NVARCHAR. } if (sqlDbType != SqlDbType.VarChar) { return null; } return 8000; } } } In the above example, the translated SQL becomes:exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'A' exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'AA' So they reuse the same query plan cache: now the [usecounts] column is 2.

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak). 1. My Switch Is Bored Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view: CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24)); CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49)); CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74)); CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99)); GO CREATE VIEW V1 AS SELECT c1 FROM dbo.T1 UNION ALL SELECT c1 FROM dbo.T2 UNION ALL SELECT c1 FROM dbo.T3 UNION ALL SELECT c1 FROM dbo.T4; Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement: ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);   ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);   ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);   ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1); We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape): INSERT dbo.V1 (c1) VALUES (1); And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view: INSERT dbo.V1 (c1) VALUES (1), (2); Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage! 2. 
Invisible Plan Operators The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it. The problem The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it: USE tempdb; GO CREATE TABLE dbo.Calendar ( dt date NOT NULL, isWeekday bit NOT NULL, theYear smallint NOT NULL,   CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt) ); GO -- Monday is the first day of the week for me SET DATEFIRST 1;   -- Add 100 years of data INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear) SELECT CA.dt, isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END, theYear = YEAR(CA.dt) FROM Sandpit.dbo.Numbers AS N CROSS APPLY ( VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113))) ) AS CA (dt) WHERE N.n BETWEEN 1 AND 36525; The following query counts the number of weekend days in 2013: SELECT Days = COUNT_BIG(*) FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear) WHERE isWeekday = 0; The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index! Explanation You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). 
All of these are correct, but that was before cost-based optimization got involved :) Cost-based optimization When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities as trace flag 8608 output (initial memo contents) shows: During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select): The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that. New Cardinality Estimates The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER; The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!): DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM; This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM; Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 22, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 21. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365 row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate.
This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it: SELECT Days = COUNT_BIG(*) OVER () FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear, isWeekday) WHERE isWeekday = 0 WITH (DROP_EXISTING = ON); There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;)  With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project Common Subexpression Spools Why Plan Operators Run Backwards Row Goals and the Top Operator Hash Match Flow Distinct Top N Sort Index Spools and Page Splits Singleton and Range Seeks Bitmaps Hash Join Performance Compute Scalar © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Running Multiple Queries in Oracle SQL Developer

    - by thatjeffsmith
    There are two methods for running queries in SQL Developer: Run Statement (Shift+Enter, F9, or its toolbar button) and Run Script (no grids, just script output, SQL*Plus-like; fine, thank you very much!). What's the Difference? There are some obvious differences between the two features, the most obvious being the format of the output delivered. But there are some other, more subtle differences here, primarily around fetching. What is Fetch? After you send your query to Oracle, it has to do 3 things: Parse, Execute, Fetch. Technically it has to do at least 2 things, and sometimes only 1. But, to get the data back to the user, the fetch must occur. If you have a 10 row query or a 1,000,000 row query, this can mean 1 or many fetches in groups of records. Ok, before I went on the Fetch tangent, I said there were two ways to run statements in SQL Developer: Run Statement Run Statement brings your query results to a grid with a single fetch. The user sees 50, 100, 500, etc rows come back, but SQL Developer and the database know that there are more rows waiting to be retrieved. The process on the server that was used to execute the query is still hanging around too. To alleviate this, increase your fetch size to 500. Every query run will come back with the first 500 rows, and rows will continue to be fetched in 500 row increments. You'll then see most of your ad hoc queries complete with a single fetch. Scroll down, or hit Ctrl+End to force a full fetch and get all your rows back. Run Script Run Script runs the contents of the worksheet (or what's highlighted) as a 'script.' What does that mean exactly? Think of this as being equivalent to running this in SQL*Plus: @my_script.sql; Each statement is executed. Also, ALL rows are fetched. So once it's finished executing, there are no open cursors left around. The more obvious difference here is that the output comes back formatted as plain old text. Run one or more commands plus SQL*Plus commands like SET and SPOOL. The Trick: Run Statement Works With Multiple Statements! It says 'run statement,' but if you select more than one statement with your mouse and hit the button, it will run each and throw the results to 1 grid for each statement. If you hover over the Query Result panel tab, SQL Developer will tell you the query used to populate that grid. This will work regardless of what you have this preference set to: DATABASE – WORKSHEET – SHOW QUERY RESULTS IN NEW TABS. Mind the fetch though! Close those cursors by bringing back all the records or closing the grids when you're done with them.

    Read the article

  • Retrieving upcoming calendar events from a Google Calendar

    - by brian_ritchie
    Google has a great cloud-based calendar service that is part of their Gmail product. Besides using it as a personal calendar, you can use it to store events for display on your web site. The calendar is accessible through Google's GData API, for which they provide a C# SDK. Here's some code to retrieve the upcoming entries from the calendar:

      public class CalendarEvent
      {
          public string Title { get; set; }
          public DateTime StartTime { get; set; }
      }

      public class CalendarHelper
      {
          public static CalendarEvent[] GetUpcomingCalendarEvents(int numberofEvents)
          {
              CalendarService service = new CalendarService("youraccount");
              EventQuery query = new EventQuery();
              query.Uri = new Uri("http://www.google.com/calendar/feeds/userid/public/full");
              query.FutureEvents = true;
              query.SingleEvents = true;
              query.SortOrder = CalendarSortOrder.ascending;
              query.NumberToRetrieve = numberofEvents;
              query.ExtraParameters = "orderby=starttime";
              var events = service.Query(query);
              return (from e in events.Entries
                      select new CalendarEvent()
                      {
                          StartTime = (e as EventEntry).Times[0].StartTime,
                          Title = e.Title.Text
                      }).ToArray();
          }
      }

    There are a few special "tricks" to make this work: the "SingleEvents" flag will flatten out recurring events; "FutureEvents", "SortOrder", and the "orderby" parameters will get the upcoming events; "NumberToRetrieve" will limit the amount coming back. I then use LINQ to Objects to put the results into my own DTO for use by my model. It is always a good idea to place data into your own DTO for use within your MVC model. This protects the rest of your code from changes to the underlying calendar source or API.

    Read the article

  • Use Expressions with LINQ to Entities

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] Recently I've been putting together a generic approach for paging the response from a WCF service. Paging changes the service signature, so it's not as simple as adding a behavior to an existing service in config, but the complexity of the paging is isolated in a generic base class. We're using the Entity Framework talking to SQL Server, so when we ask for a page using LINQ's .Take() method we get a nice efficient SQL query for just the rows we want, with minimal impact on SQL Server and network traffic. We use the maximum ID of the record returned as a high-water mark (rather than using .Skip() to go to the next record), so the approach caters for records being deleted between page requests. In the paged response we include a HasMorePages indicator, computed by comparing the max ID in the page of results to the max ID for the whole resultset - if the latter is bigger, then there are more pages. In some quick performance testing, the paged version of the service performed much more slowly than the unpaged version, which was unexpected. We narrowed it down to the code which gets the max ID for the full resultset - instead of building an efficient MAX() SQL query, EF was returning the whole resultset and then computing the max ID in the service layer. It's easy to reproduce - take this AdventureWorks query:             var context = new AdventureWorksEntities();             var query = from od in context.SalesOrderDetail                         where od.ModifiedDate >= modified                          && od.SalesOrderDetailID.CompareTo(id) > 0                         orderby od.SalesOrderDetailID                         select od;   We can find the maximum SalesOrderDetailID like this:             var maxIdEfficiently = query.Max(od => od.SalesOrderDetailID);   which produces our efficient MAX() SQL query. If we're doing this generically and we already have the ID function in a Func:             Func<SalesOrderDetail, int> idFunc = od => od.SalesOrderDetailID;             var maxIdInefficiently = query.Max(idFunc);   This fetches all the results from the query and then runs the Max() function in code. If you look at the difference in Reflector, the first call passes an Expression to the Max(), while the second call passes a Func. So it's an easy fix - wrap the Func in an Expression:             Expression<Func<SalesOrderDetail, int>> idExpression = od => od.SalesOrderDetailID;             var maxIdEfficientlyAgain = query.Max(idExpression);   - and we're back to running an efficient MAX() statement. Evidently the EF provider can dissect an Expression and build its equivalent in SQL, but it can't do that with Funcs.

    Read the article

  • Getting a Temporary Table Returned from Dynamic SQL in SQL Server 05, and parsing it

    - by gloomy.penguin
    So I was requested to make a few things.... (it is Monday morning and for some reason this whole thing is turning out to be really hard for me to explain so I am just going to try and post a lot of my code; sorry) First, I needed a table: CREATE TABLE TICKET_INFORMATION ( TICKET_INFO_ID INT IDENTITY(1,1) NOT NULL, TICKET_TYPE INT, TARGET_ID INT, TARGET_NAME VARCHAR(100), INFORMATION VARCHAR(MAX), TIME_STAMP DATETIME DEFAULT GETUTCDATE() ) -- insert this row for testing... INSERT INTO TICKET_INFORMATION (TICKET_TYPE, TARGET_ID, TARGET_NAME, INFORMATION) VALUES (1,1,'RT_ID','IF_ID,int=1&IF_ID,int=2&OTHER,varchar(10)=val,ue3&OTHER,varchar(10)=val,ue4') The Information column holds data that needs to be parsed into a table. This is where I am having problems. In the resulting table, Target_Name needs to become a column that holds Target_ID as a value for each row in the resulting table. The string that needs to be parsed is in this format: @var_name1,@var_datatype1=@var_value1&@var_name2,@var_datatype2=@var_value2&@var_name3,@var_datatype3=@var_value3 And what I ultimately need as a result (in a table or table variable): RT_ID IF_ID OTHER 1 1 val,ue3 1 2 val,ue3 1 1 val,ue4 1 2 val,ue4 And I need to be able to join on the result. Initially, I was just going to make this a function that returns a table variable but for some reason I can't figure out how to get it into an actual table variable. Whatever parses the string needs to be able to be used directly in queries so I don't think a stored procedure is really the right thing to be using. This is the code that parses the Information string... it returns in a temporary table. -- create/empty temp table for var_name, var_type and var_value fields if OBJECT_ID('tempdb..#temp') is not null drop table #temp create table #temp (row int identity(1,1), var_name varchar(max), var_type varchar(30), var_value varchar(max)) -- just setting stuff up declare @target_name varchar(max), @target_id varchar(max), @info varchar(max) set @target_name = (select target_name from ticket_information where ticket_info_id = 1) set @target_id = (select target_id from ticket_information where ticket_info_id = 1) set @info = (select information from ticket_information where ticket_info_id = 1) --print @info -- some of these variables are re-used later declare @col_type varchar(20), @query varchar(max), @select as varchar(max) set @query = 'select ' + @target_id + ' as ' + @target_name + ' into #target; ' set @select = 'select * into ##global_temp from #target' declare @var_name varchar(100), @var_type varchar(100), @var_value varchar(100) declare @comma_pos int, @equal_pos int, @amp_pos int set @comma_pos = 1 set @equal_pos = 1 set @amp_pos = 0 -- while loop to parse the string into a table while @amp_pos < len(@info) begin -- get new comma position set @comma_pos = charindex(',',@info,@amp_pos+1) -- get new equal position set @equal_pos = charindex('=',@info,@amp_pos+1) -- set stuff that is going into the table set @var_name = substring(@info,@amp_pos+1,@comma_pos-@amp_pos-1) set @var_type = substring(@info,@comma_pos+1,@equal_pos-@comma_pos-1) -- get new ampersand position set @amp_pos = charindex('&',@info,@amp_pos+1) if @amp_pos=0 or @amp_pos<@equal_pos set @amp_pos = len(@info)+1 -- set last variable for insert into table set @var_value = substring(@info,@equal_pos+1,@amp_pos-@equal_pos-1) -- put stuff into the temp table insert into #temp (var_name, var_type, var_value) values (@var_name, @var_type, @var_value) -- is this a new field? 
if ((select count(*) from #temp where var_name = (@var_name)) = 1) begin set @query = @query + ' create table #' + @var_name + '_temp (' + @var_name + ' ' + @var_type + '); ' set @select = @select + ', #' + @var_name + '_temp ' end set @query = @query + ' insert into #' + @var_name + '_temp values (''' + @var_value + '''); ' end if OBJECT_ID('tempdb..##global_temp') is not null drop table ##global_temp exec (@query + @select) --select @query --select @select select * from ##global_temp Okay. So, the result I want and need is now in ##global_temp. How do I put all of that into something that can be returned from a function (or something)? Or can I get something more useful returned from the exec statement? In the end, the results of the parsed string need to be in a table that can be joined on and used... Ideally this would have been a view but I guess it can't with all the processing that needs to be done on that information string. Ideas? Thanks!
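
Since EXEC/sp_executesql can't run inside a function, the usual vehicle for "dynamic SQL whose result you can join on" is a stored procedure captured with INSERT ... EXEC; a sketch (the wrapper procedure name is hypothetical, standing in for the parsing script above):

      CREATE TABLE #result (RT_ID int, IF_ID int, OTHER varchar(10));

      INSERT #result
          EXEC dbo.ParseTicketInformation @ticket_info_id = 1; -- hypothetical proc wrapping the parsing script

      SELECT r.*
      FROM #result r
      JOIN TICKET_INFORMATION t ON t.TICKET_INFO_ID = r.RT_ID; -- now joinable like any table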

    Read the article

  • Iteration, or copying in XSL, based on a count. How do I do it?

    - by LOlliffe
    I'm trying to create an inventory list (tab-delimited text, for the output) from XML data. The catch is, I need to take the line that I created, and list it multiple times (iterate ???), based on a number found in the XML. So, from the XML below: <?xml version="1.0" encoding="UTF-8"?> <library> <aisle label="AA"> <row>bb</row> <shelf>a</shelf> <books>4</books> </aisle> <aisle label="BB"> <row>cc</row> <shelf>d</shelf> <books>3</books> </aisle> </library> I need to take the value found in "books", and then copy the text line that number of times. The result looking like this: Aisle Row Shelf Titles AA bb a AA bb a AA bb a AA bb a BB cc d BB cc d BB cc d So that the inventory taker, can then write in the titles found on each shelf. I have the basic structure of my XSL, but I'm not sure how to do the "iteration" portion. <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" exclude-result-prefixes="xs" xmlns:xd="http://www.oxygenxml.com/ns/doc/xsl" version="2.0"> <xsl:output omit-xml-declaration="yes"/> <xsl:variable name="tab" select="'&#09;'"/> <xsl:variable name="newline" select="'&#10;'"/> <xsl:template match="/"> <!-- Start Spreadsheet header --> <xsl:text>Aisle</xsl:text> <xsl:value-of select="$tab"/> <xsl:text>Row</xsl:text> <xsl:value-of select="$tab"/> <xsl:text>Shelf</xsl:text> <xsl:value-of select="$tab"/> <xsl:text>Titles</xsl:text> <xsl:value-of select="$newline"/> <!-- End spreadsheet header --> <!-- Start entering values from XML --> <xsl:for-each select="library/aisle"> <xsl:value-of select="@label"/> <xsl:value-of select="$tab"/> <xsl:value-of select="row"/> <xsl:value-of select="$tab"/> <xsl:value-of select="shelf"/> <xsl:value-of select="$tab"/> <xsl:value-of select="$tab"/> <xsl:value-of select="$newline"/> </xsl:for-each> <!-- End of values from XML --> <!-- Iteration of the above needed, based on count value in "books" --> </xsl:template> </xsl:stylesheet> Any help would be greatly appreciated. For starters, is "iteration" the right term to be using for this? Thanks!
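
    Since the stylesheet above declares version="2.0", one approach is the XSLT 2.0 range expression: 1 to N builds a sequence, so each aisle's line can be emitted books times ("iteration" is a fair name for it). A sketch of the loop, reusing the $tab and $newline variables already defined:

      <xsl:for-each select="library/aisle">
        <xsl:variable name="aisle" select="."/>
        <xsl:for-each select="1 to xs:integer($aisle/books)">
          <!-- Titles column left blank for the inventory taker -->
          <xsl:value-of select="concat($aisle/@label, $tab, $aisle/row, $tab, $aisle/shelf, $tab, $newline)"/>
        </xsl:for-each>
      </xsl:for-each>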

    Read the article

  • Merge functionality of two xsl files into a single file (not an xsl import or include issue)

    - by anuamb
    I have two xsl files; both of them perform different tasks on the source xml, one after another. Now I need a single xsl file which will actually perform both these tasks (it's not an issue of xsl import or xsl include). Say my source xml is:

      <LIST_R7P1_1>
        <R7P1_1>
          <LVL2>
            <ORIG_EXP_PRE_CONV>#+#</ORIG_EXP_PRE_CONV>
            <EXP_AFT_CONV>abc</EXP_AFT_CONV>
            <GUARANTEE_AMOUNT>#+#</GUARANTEE_AMOUNT>
            <CREDIT_DER/>
          </LVL2>
          <LVL21>
            <AZ>#+#</AZ>
            <BZ>bz1</BZ>
            <AZ>az2</AZ>
            <BZ>#+#</BZ>
            <CZ/>
          </LVL21>
        </R7P1_1>
      </LIST_R7P1_1>

    My first xsl (tr1.xsl) removes all nodes whose value is blank or null:

      <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:template match="@*|node()">
          <xsl:if test=". != '' or ./@* != ''">
            <xsl:copy>
              <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
          </xsl:if>
        </xsl:template>
      </xsl:stylesheet>

    The output here is:

      <LIST_R7P1_1>
        <R7P1_1>
          <LVL2>
            <ORIG_EXP_PRE_CONV>#+#</ORIG_EXP_PRE_CONV>
            <EXP_AFT_CONV>abc</EXP_AFT_CONV>
            <GUARANTEE_AMOUNT>#+#</GUARANTEE_AMOUNT>
          </LVL2>
          <LVL21>
            <AZ>#+#</AZ>
            <BZ>bz1</BZ>
            <AZ>az2</AZ>
            <BZ>#+#</BZ>
          </LVL21>
        </R7P1_1>
      </LIST_R7P1_1>

    And my second xsl (tr2.xsl) does a global replace (of #+# with blank text '') on the output of the first:

      <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
        <xsl:template name="globalReplace">
          <xsl:param name="outputString"/>
          <xsl:param name="target"/>
          <xsl:param name="replacement"/>
          <xsl:choose>
            <xsl:when test="contains($outputString,$target)">
              <xsl:value-of select="concat(substring-before($outputString,$target), $replacement)"/>
              <xsl:call-template name="globalReplace">
                <xsl:with-param name="outputString" select="substring-after($outputString,$target)"/>
                <xsl:with-param name="target" select="$target"/>
                <xsl:with-param name="replacement" select="$replacement"/>
              </xsl:call-template>
            </xsl:when>
            <xsl:otherwise>
              <xsl:value-of select="$outputString"/>
            </xsl:otherwise>
          </xsl:choose>
        </xsl:template>
        <xsl:template match="text()">
          <xsl:call-template name="globalReplace">
            <xsl:with-param name="outputString" select="."/>
            <xsl:with-param name="target" select="'#+#'"/>
            <xsl:with-param name="replacement" select="''"/>
          </xsl:call-template>
        </xsl:template>
        <xsl:template match="@*|*">
          <xsl:copy>
            <xsl:apply-templates select="@*|node()"/>
          </xsl:copy>
        </xsl:template>
      </xsl:stylesheet>

    So my final output is:

      <LIST_R7P1_1>
        <R7P1_1>
          <LVL2>
            <ORIG_EXP_PRE_CONV/>
            <EXP_AFT_CONV>abc</EXP_AFT_CONV>
            <GUARANTEE_AMOUNT/>
          </LVL2>
          <LVL21>
            <AZ/>
            <BZ>bz1</BZ>
            <AZ>az2</AZ>
            <BZ/>
          </LVL21>
        </R7P1_1>
      </LIST_R7P1_1>

    My concern is that instead of these two xsl files (tr1.xsl and tr2.xsl) I only need a single xsl (tr.xsl) which gives me the final output. Say I combine the two as:

      <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:template match="@*|node()">
          <xsl:if test=". != '' or ./@* != ''">
            <xsl:copy>
              <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
          </xsl:if>
        </xsl:template>
        <xsl:template name="globalReplace">
          <xsl:param name="outputString"/>
          <xsl:param name="target"/>
          <xsl:param name="replacement"/>
          <xsl:choose>
            <xsl:when test="contains($outputString,$target)">
              <xsl:value-of select="concat(substring-before($outputString,$target), $replacement)"/>
              <xsl:call-template name="globalReplace">
                <xsl:with-param name="outputString" select="substring-after($outputString,$target)"/>
                <xsl:with-param name="target" select="$target"/>
                <xsl:with-param name="replacement" select="$replacement"/>
              </xsl:call-template>
            </xsl:when>
            <xsl:otherwise>
              <xsl:value-of select="$outputString"/>
            </xsl:otherwise>
          </xsl:choose>
        </xsl:template>
        <xsl:template match="text()">
          <xsl:call-template name="globalReplace">
            <xsl:with-param name="outputString" select="."/>
            <xsl:with-param name="target" select="'#+#'"/>
            <xsl:with-param name="replacement" select="''"/>
          </xsl:call-template>
        </xsl:template>
        <xsl:template match="@*|*">
          <xsl:copy>
            <xsl:apply-templates select="@*|node()"/>
          </xsl:copy>
        </xsl:template>
      </xsl:stylesheet>

    it outputs:

      <LIST_R7P1_1>
        <R7P1_1>
          <LVL2>
            <ORIG_EXP_PRE_CONV/>
            <EXP_AFT_CONV>abc</EXP_AFT_CONV>
            <GUARANTEE_AMOUNT/>
            <CREDIT_DER/>
          </LVL2>
          <LVL21>
            <AZ/>
            <BZ>bz1</BZ>
            <AZ>az2</AZ>
            <BZ/>
            <CZ/>
          </LVL21>
        </R7P1_1>
      </LIST_R7P1_1>

    Only the replacement is performed, but not the null/blank node removal.

    Read the article

  • How can I keep the logic that translates a ViewModel's values into a Where clause for a LINQ query out of my controller?

    - by Mr. Manager
    This same problem keeps cropping up. I have a ViewModel that doesn't have any persistent backing; it is just a ViewModel used to generate a search input form. I want to build a large where clause from the values the user entered. If the action accepts a SearchViewModel as a parameter, how do I do this without passing my ViewModel to my service layer? Services shouldn't know about ViewModels, right? Oh, and if I serialize it, then it would be a big string and the key/values would not be strongly typed. SearchViewModel (this is just a snippet): [Display(Name="Address")] public string AddressKeywords { get; set; } /// <summary> /// Gets or sets the census. /// </summary> public string Census { get; set; } /// <summary> /// Gets or sets the lot block sub. /// </summary> public string LotBlockSub { get; set; } /// <summary> /// Gets or sets the owner keywords. /// </summary> [Display(Name="Owner")] public string OwnerKeywords { get; set; } In my controller action I was thinking of something like this, but I would think all this logic doesn't belong in my controller. ActionResult GetSearchResults(SearchViewModel model){ var query = service.GetAllParcels(); if(model.Census != null){ query = query.Where(x=>x.Census == model.Census); } if (model.OwnerKeywords != null){ query = query.Where(x=>x.Owners == model.OwnerKeywords); } return View(query.ToList()); }
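
    One common answer is to keep the translation in the web layer as an extension method over IQueryable, so the service never sees the ViewModel; a sketch (the Parcel entity name is assumed):

      public static class SearchViewModelExtensions
      {
          public static IQueryable<Parcel> ApplyTo(this SearchViewModel model, IQueryable<Parcel> query)
          {
              // Each non-empty field contributes one Where clause.
              if (!string.IsNullOrEmpty(model.Census))
                  query = query.Where(x => x.Census == model.Census);
              if (!string.IsNullOrEmpty(model.OwnerKeywords))
                  query = query.Where(x => x.Owners == model.OwnerKeywords);
              return query;
          }
      }

      // In the controller:
      // return View(model.ApplyTo(service.GetAllParcels()).ToList());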

    Read the article

  • No option to select ASP.Net Version in IIS 6?

    - by GenericTypeTea
    I'm running Windows Server 2003 64bit edition. I've just installed the .Net 4 Framework in order to get a new WCF service up and running. However, I have no options anywhere in IIS 6 for selecting the ASP.Net framework version. I.e. right-clicking Properties on the website should show an ASP.Net tab from which I should be able to select v2 or v4. Does anyone know why they're not there and how I can make them appear? For the time being I've had to go into Website Properties > Home Directory > Configuration and change the .svc extension to use v4.0.30319 instead. So everything's now working for my WCF service; however, every other extension is set to v2. How can I get the tab? It's not visible on any of my 23 websites.
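
    On 64-bit Windows Server 2003 the ASP.NET tab frequently doesn't appear in IIS Manager at all; as a fallback, the framework version can be switched per site with aspnet_regiis run from the framework folder you want (a sketch; the site ID 1 is an assumption, look yours up in IIS first):

      REM Register v4 script maps for one site, recursively, from the x64 v4 folder
      cd %windir%\Microsoft.NET\Framework64\v4.0.30319
      aspnet_regiis.exe -s W3SVC/1/ROOT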

    Read the article
