Search Results

Search found 35571 results on 1423 pages for 'sql developer'.

  • SQLite: delete the oldest 25% of records in a database

    - by Steven Smethurst
    I am using a SQLite database to store values from a data logger. The data logger will eventually fill up all the available hard drive space on the computer, so I'm looking for a way to remove the oldest 25% of the logs from the database once it reaches a certain limit. I'm using the following code:

        $ret = Query( 'SELECT id AS last FROM data ORDER BY id DESC LIMIT 1;' );
        $last_id = $ret[0]['last'];
        $ret = Query( 'SELECT count(*) AS total FROM data' );
        $start_id = $last_id - $ret[0]['total'] * 0.75;
        Query( 'DELETE FROM data WHERE id < ' . round( $start_id, 0 ) );

    A journal file gets created next to the database that fills up the remaining space on the drive until the script fails. How can I stop this journal file from being created? And is there any way to combine all three SQL queries into one statement?
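    A minimal sketch of one possible approach, assuming the id column is a contiguous rowid-style key: SQLite can fold the three statements into a single DELETE with a scalar subquery, and the journal_mode PRAGMA can redirect or disable the rollback journal, at the cost of losing crash recovery if it is turned off:

        -- Keep the journal in memory (or use OFF to disable it entirely;
        -- both trade crash safety for disk space)
        PRAGMA journal_mode = MEMORY;

        -- Delete the oldest 25% in one statement: everything below
        -- MAX(id) minus 75% of the row count (assumes contiguous ids)
        DELETE FROM data
        WHERE id < (SELECT MAX(id) - CAST(COUNT(*) * 0.75 AS INTEGER) FROM data);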

  • More CPU cores may not always lead to better performance – MAXDOP and query memory distribution in the spotlight

    - by sqlworkshops
    More hardware normally delivers better performance, but there are exceptions where it can hinder performance. Understanding these exceptions and working around them is a major part of SQL Server performance tuning.

    When a memory-allocating query executes in parallel, SQL Server distributes the memory grant across the tasks executing parts of the query in parallel. In our example, the sort operator that executes in parallel divides the memory across all tasks, assuming an even distribution of rows. Common memory-allocating queries are those that perform Sort and Hash Match operations like Hash Join, Hash Aggregation, or Hash Union.

    In reality, how often are column values evenly distributed? Think about an example: are the employees working for your company distributed evenly across all Zip codes, or mainly concentrated in the headquarters? What happens when you sort a result set based on Zip codes? Do all products in the catalog sell equally, or are a few products the hot-selling items?

    One of my customers tested the example below on a 24-core server with various MAXDOP settings, and here are the results:

        MAXDOP 1: CPU time = 1185 ms, elapsed time = 1188 ms
        MAXDOP 4: CPU time = 1981 ms, elapsed time = 1568 ms
        MAXDOP 8: CPU time = 1918 ms, elapsed time = 1619 ms
        MAXDOP 12: CPU time = 2367 ms, elapsed time = 2258 ms
        MAXDOP 16: CPU time = 2540 ms, elapsed time = 2579 ms
        MAXDOP 20: CPU time = 2470 ms, elapsed time = 2534 ms
        MAXDOP 0: CPU time = 2809 ms, elapsed time = 2721 ms (all 24 cores)

    (In the same test with evenly distributed data, the elapsed time of the parallel query was always lower than that of the serial query.)

    Why does the query get slower and slower with more CPU cores / a higher MAXDOP? Maybe you can answer this question after reading the article; let me know: [email protected].

    Well, you get the point; let's see an example. The best way to learn is to practice. To create the tables below and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script.

    Let's update the Employees table with 49 out of 50 employees located in Zip code 2001:

        update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
        update Employees set Zip = 2001 where EmployeeID % 50 != 1
        go
        update statistics Employees with fullscan
        go

    Let's create the temporary table #FireDrill with all possible Zip codes:

        drop table #FireDrill
        go
        create table #FireDrill (Zip int primary key)
        insert into #FireDrill select distinct Zip from Employees
        update statistics #FireDrill with fullscan
        go

    Let's execute the query serially with MAXDOP 1:

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --First serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
              from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
              option (maxdop 1)
        go

    The query took 1011 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 799624. No Sort Warnings in SQL Server Profiler. Now let's execute the query in parallel with MAXDOP 0.
        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
              from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
              option (maxdop 0)
        go

    The query took 1912 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 799624. The estimated number of rows is the same for the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead. The Sort properties show the rows are unevenly distributed over the 4 threads, and there are Sort Warnings in SQL Server Profiler.

    Intermediate Summary: The reason for the higher duration with the parallel plan was a sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001. Now let's update the Employees table and distribute employees evenly across all Zip codes:

        update Employees set Zip = EmployeeID / 400 + 1
        go
        update statistics Employees with fullscan
        go

    Let's execute the query serially with MAXDOP 1:

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
              from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
              option (maxdop 1)
        go

    The query took 751 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
              from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
              option (maxdop 0)
        go

    The query took 661 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 784707. The Sort properties show the rows are evenly distributed over the 4 threads. No Sort Warnings in SQL Server Profiler.

    Intermediate Summary: When employees were distributed unevenly, concentrated in one Zip code, the parallel sort spilled while the serial sort performed well without spilling to tempdb. When the employees were distributed evenly across all Zip codes, neither the parallel sort nor the serial sort spilled to tempdb. This shows that uneven data distribution may negatively affect the performance of some parallel queries. For a detailed discussion of memory allocation, refer to the webcasts available at www.sqlworkshops.com/webcasts.

    Some of you might conclude from the above execution times that a parallel query is not faster even when there is no spill. Below you can see that when we join a limited set of Zip codes, the parallel query will be fastest, since it can use Bitmap Filtering.

    Let's update the Employees table with 49 out of 50 employees located in Zip code 2001:
        update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
        update Employees set Zip = 2001 where EmployeeID % 50 != 1
        go
        update statistics Employees with fullscan
        go

    Let's create the temporary table #FireDrill with a limited set of Zip codes:

        drop table #FireDrill
        go
        create table #FireDrill (Zip int primary key)
        insert into #FireDrill select distinct Zip
              from Employees where Zip between 1800 and 2001
        update statistics #FireDrill with fullscan
        go

    Let's execute the query serially with MAXDOP 1:

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
              from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
              option (maxdop 1)
        go

    The query took 989 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 785594. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
              from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
              option (maxdop 0)
        go

    The query took 1799 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 785594, and there are Sort Warnings in SQL Server Profiler. The estimated number of rows is the same for the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead.

    Intermediate Summary: The reason for the higher duration with the parallel plan, even with a limited set of Zip codes, was a sort spill. This is again due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001.

    Now let's update the Employees table and distribute employees evenly across all Zip codes:

        update Employees set Zip = EmployeeID / 400 + 1
        go
        update statistics Employees with fullscan
        go

    Let's execute the query serially with MAXDOP 1:

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
              from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
              option (maxdop 1)
        go

    The query took 250 ms to complete. The execution plan shows that 9016 KB of memory was granted while the estimated rows were 79973.8. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
              from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
              option (maxdop 0)
        go

    The query took 85 ms to complete. The execution plan shows that 13152 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.
    Here you see the parallel query is much faster than the serial query, since SQL Server is using Bitmap Filtering to eliminate rows before the hash join.

    Parallel queries are very good for performance, but in some cases they can hinder it. If one identifies the reason for these hindrances, it is possible to get the best out of parallelism. I covered many aspects of monitoring and tuning parallel queries in webcasts (www.sqlworkshops.com/webcasts) and articles (www.sqlworkshops.com/articles). I suggest you watch the webcasts and read the articles to better understand how to identify and tune parallel query performance issues.

    Summary: One has to avoid sort spills over tempdb, and the chances of spills are higher when a query executes in parallel with uneven data distribution. Parallel query brings its own advantages: reduced elapsed time and reduced work thanks to Bitmap Filtering. So it is important to understand how to avoid spills over tempdb and when to execute a query in parallel.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011 - click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan
    [email protected]
    LinkedIn: http://at.linkedin.com/in/rmeyyappan
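    As a practical footnote to the experiments above, MAXDOP can be pinned both per query and server-wide. A minimal T-SQL sketch of the two standard knobs follows; the numeric values are illustrative only, not recommendations from the article:

        -- Server-wide cap (requires appropriate permissions;
        -- 'show advanced options' must be enabled first)
        exec sp_configure 'show advanced options', 1;
        reconfigure;
        exec sp_configure 'max degree of parallelism', 8;  -- illustrative value
        reconfigure;
        go

        -- Per-query override, as used throughout the article's examples
        select e.EmployeeName, e.Zip
        from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 4);  -- illustrative value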

  • Application Performance Episode 2: Announcing the Judges!

    - by Michaela Murray
    The story so far… We're writing a new book for ASP.NET developers, and we want you to be a part of it! If you work with ASP.NET applications, and have top tips, hard-won lessons, or sage advice for avoiding, finding, and fixing performance problems, we want to hear from you! And if your app uses SQL Server, even better - interaction with the database is critical to application performance, so we're looking for database top tips too. There's a Microsoft Surface apiece for the person who comes up with the best tip for SQL Server and the best tip for .NET. Of course, if your suggestion is selected for the book, you'll get full credit, by name, Twitter handle, GitHub repository, or whatever you like. To get involved, just email your nuggets of performance wisdom to [email protected] - there are examples of what we're looking for and full competition details at Application Performance: The Best of the Web.

    Enter the judges… As mentioned in my last blogpost, we have a mystery panel of celebrity judges lined up to select the prize-winning performance pointers. We're now ready to reveal their secret identities! Judging your ASP.NET tips will be:

    Jean-Philippe Gouigoux, MCTS/MCPD Enterprise Architect and MVP Connected System Developer. He's a board member at French software company MGDIS, and teaches algorithms, security, software tests, and ALM at the Université de Bretagne Sud. Jean-Philippe also lectures at IT conferences and writes articles for programming magazines. His book Practical Performance Profiling is published by Simple-Talk.

    Nik Molnar, a New Yorker, ASP Insider, and co-founder of Glimpse, an open source ASP.NET diagnostics and debugging tool. Originally from Florida, Nik specializes in web development, building scalable, client-centric solutions. In his spare time, Nik can be found cooking up a storm in the kitchen, hanging with his wife, speaking at conferences, and working on other open source projects.

    Mitchel Sellers, Microsoft C# and DotNetNuke MVP. Mitchel is an experienced software architect, business leader, public speaker, and educator. He works with companies across the globe, as CEO of IowaComputerGurus Inc. Mitchel writes technical articles for online and print publications and is the author of Professional DotNetNuke Module Programming. He frequently answers questions on StackOverflow and MSDN and is an active participant in the .NET and DotNetNuke communities.

    Clive Tong, Software Engineer at Red Gate. In previous roles, Clive spent a lot of time working with Common LISP and enthusing about functional languages, and he's worked with managed languages since before his first real job (which was a long time ago). Long convinced of the productivity benefits of managed languages, Clive is very interested in getting good runtime performance to keep managed languages practical for real-world development.

    And our trio of SQL Server specialists, ready to select your top suggestion, are (drumroll):

    Rodney Landrum, a SQL Server MVP who writes regularly about Integration Services, Analysis Services, and Reporting Services. He's authored SQL Server Tacklebox, three Reporting Services books, and contributes regularly to SQLServerCentral, SQL Server Magazine, and Simple-Talk. His day job involves overseeing a large SQL Server infrastructure in Orlando.

    Grant Fritchey, Product Evangelist at Red Gate and SQL Server MVP. In an IT career spanning more than 20 years, Grant has written VB, VB.NET, C#, and Java. He's been working with SQL Server since version 6.0. Grant volunteers with the Editorial Committee at PASS and has written books for Apress and Simple-Talk.

    Jonathan Allen, leader and founder of the PASS SQL South West user group. He's been working with SQL Server since 1999 and enjoys performance tuning, development, and using SQL Server for business solutions. He's spoken at SQLBits and SQL in the City, as well as at local user groups across the UK. He's also a moderator at ask.sqlservercentral.com.

  • Speaking at Triangle SQL Server User Group 16 Mar 2010!

    - by andyleonard
    I'm excited to present Applied SSIS Design Patterns to the Triangle SQL Server User Group 16 Mar 2010! This is a reprise of my PASS Summit 2009 spotlight session. If you read this blog and make the meeting, introduce yourself! :{> Andy

  • Cannot convert lambda expression to type 'string' because it is not a delegate type

    - by RememberME
    I have the following code, written by another developer, on two pages of my site. It used to work just fine, but now gives the error "Cannot convert lambda expression to type 'string' because it is not a delegate type" on the Delete line with Ajax.ThemeRollerActionLink. I don't go into this section of the site often, and we recently upgraded from MVC 1.0 to 2.0; I'm guessing that's probably when it stopped working. I've looked up this error, and the recommended fix seems to be to add "using System.Linq". However, the page already has <%@ Import Namespace="System.Linq" %>.

        <% Html.Grid(Model).Columns(col =>
           {
               col.For(c => "<a href='" + Url.Action("Edit", new { userName = c }) + "' class=\"fg-button fg-button-icon-solo ui-state-default ui-corner-all\"><span class=\"ui-icon ui-icon-pencil\"></span></a>").Named("Edit").DoNotEncode();
               col.For(c => Ajax.ThemeRollerActionLink("fg-button fg-button-icon-solo ui-state-default ui-corner-all", "ui-icon ui-icon-close", "Delete",
                   new { userName = c },
                   new AjaxOptions { Confirm = "Delete User?", HttpMethod = "Delete", InsertionMode = InsertionMode.Replace, UpdateTargetId = "gridcontainer", OnSuccess = "successDeleteAssignment", OnFailure = "failureDeleteAssignment" })).Named("Delete").DoNotEncode();
               col.For(c => c).Named("User");
           }).Attributes(id => "userlist").Render(); %>

  • Legit? Two foreign keys referencing the same primary key.

    - by Ryan
    Hi All, I'm a web developer and have recently started a project with a company. Currently, I'm working with their DBA on getting the schema laid out for the site, and we've come to a disagreement regarding the design of a couple of tables; I'd like some opinions on the matter. Basically, we are working on a site that will implement a "friends" network. All users of the site will be contained in a table tblUsers with (PersonID int identity PK, etc). What I want to do is create a second table, tblNetwork, that will hold all of the relationships between users, with (NetworkID int identity PK, Owners_PersonID int FK, Friends_PersonID int FK, etc); or, conversely, remove the NetworkID and have Owners_PersonID and Friends_PersonID together form a composite primary key. This is where the DBA has his problem, saying that "he would only implement this kind of architecture in a data warehousing schema, and not for a website, and this is just another example of web developers trying to take the easy way out." Now obviously his remark was a bit inflammatory, and that has helped motivate me to find a suitable answer, but more so I'd just like to know how to do it right. I've been developing databases and programming for over 10 years, have worked with some top-notch minds, and have never heard this kind of argument. What the DBA wants to do, instead of storing both the Owners_PersonId and Friends_PersonId in the same table, is to create a third table, tblFriends, to store the Friends_PersonId, and have tblNetwork contain (NetworkID int identity PK, Owner_PersonID int FK, FriendsID int FK (from tblFriends)). All tblFriends would house is (FriendsID int identity PK, Friends_PersonID (related back to Persons)). To me, creating the third table is just excessive: it does nothing but create an alias for the Friends_PersonID, forces me to add (what I view as unneeded) joins to all my queries, and burns extra cycles performing that join on every query. Thanks for reading; comments appreciated. Ryan
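    For concreteness, here is a minimal sketch of the two-table design being proposed, using the composite-primary-key variant. The table and column names follow the question; the UserName column and the SQL Server dialect are illustrative assumptions:

        create table tblUsers (
            PersonID int identity primary key,
            UserName varchar(50) not null  -- illustrative column
        );

        create table tblNetwork (
            Owners_PersonID  int not null references tblUsers(PersonID),
            Friends_PersonID int not null references tblUsers(PersonID),
            primary key (Owners_PersonID, Friends_PersonID)  -- composite key variant
        );

        -- "Who are this user's friends?" is then a single join:
        select u.UserName
        from tblNetwork n
            inner join tblUsers u on u.PersonID = n.Friends_PersonID
        where n.Owners_PersonID = 42;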

  • Linq - how does it work??

    - by clarkeyboy
    Hey, I have just been looking into LINQ with ASP.NET. It is very neat indeed. I was just wondering - how do all the classes get populated? I mean, in ASP.NET, suppose you have a LINQ file called Catalogue, and you then use a For loop to loop through Catalogue.Products and print each product name. How do the details get stored? Does it just go through the Products table on page load and create another instance of class Product for each row, effectively copying an entire table into an array of class Product?

    If so, I think I have created a system very much like this, in the sense that there is a SiteContent module with an instance of each Manager class - for example there is UserManager, ProductManager, SettingManager and the like. UserManager contains an instance of the User class for each row in the Users table. They also contain methods such as Create, Update and Remove. These Managers and their "Items" are created on every page load. This just makes it nice and easy to access users, products, settings etc in every page as far as I, the developer, am concerned. In any subsequent pages I need to create, I just reference SiteContent.UserManager to access a list of users, rather than executing a query from within that page (i.e., this method separates data access from the workings of the page, in the same way as using code-behind separates the workings of the page from how the page is laid out).

    However, the problem is that this technique seems rather slow. It is effectively creating a database on every page load, taking data from another database. I have taken measures such as preventing, for example, the ProductManager from being created if it is not referenced on page load, so it does not load data into storage when it is not needed. My question is basically whether my technique does the exact same thing as LINQ, in the sense of duplicating data from tables into properties of classes. Thanks in advance for any advice or answers about this. Regards, Richard Clarke

  • Sharepoint OLE DB - field not updateable

    - by Pandincus
    I need to write a simple C# .NET application to retrieve, update, and insert some data in a SharePoint list. I am NOT a SharePoint developer, and I don't have control over our SharePoint server. I would prefer not to develop this in a proper SharePoint development environment, simply because I don't want to have to deploy my application on the SharePoint server - I'd rather just access the data externally. Anyway, I found out that you can access SharePoint data using OLE DB, and I tried it successfully using some ADO.NET:

        var db = DatabaseFactory.CreateDatabase();
        DataSet ds = new DataSet();
        using (var command = db.GetSqlStringCommand("SELECT * FROM List"))
        {
            db.LoadDataSet(command, ds, "List");
        }

    The above works. However, when I try to insert:

        using (var command = db.GetSqlStringCommand("INSERT INTO List ([HeaderName], [Description], [Number]) VALUES ('Blah', 'Blah', 100)"))
        {
            db.ExecuteNonQuery(command);
        }

    I get this error: Cannot update 'HeaderName'; field not updateable. I did some Googling, and apparently you cannot insert data through OLE DB! Does anyone know if there are possible workarounds? I could try using SharePoint Web Services, but I tried that initially and was having a heck of a time authenticating. Is that my only option?

  • oracle pl/sql bug: can't put_line more than 2000 characters

    - by FrustratedWithFormsDesigner
    Has anyone else noticed this phenomenon where dbms_output.put_line is unable to print more than 2000 characters at a time? The script is:

        set serveroutput on size 100000;
        declare
            big_str varchar2(2009);
        begin
            for i in 1..2009 loop
                big_str := big_str||'x';
            end loop;
            dbms_output.put_line(length(big_str));
            dbms_output.put_line(big_str);
        end;
        /

    I copied and pasted the output into an editor (Notepad++), which told me there were only 2000 characters, not the 2009 that I think should have been pasted. This also happens with a few of my test scripts - only 2000 characters get printed. I have a workaround that prints like this:

        dbms_output.put_line(length(big_str));
        dbms_output.put_line(substr(big_str,1,1999));
        dbms_output.put_line(substr(big_str,2000));

    But this adds new lines to the output, which makes it hard to read when the text you're working with is preformatted. Has anyone else noticed this? Is it really a bug or some sort of obscure feature? Is there a better workaround? Is there any other information on this out there? The Oracle version is 10.2.0.3.0, using PL/SQL Developer (from Allround Automation).
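    A generalized version of the poster's own workaround is a small helper that emits any long string in fixed-size chunks. A minimal sketch follows; the procedure name is made up for illustration, and the 2000-byte chunk size simply mirrors the limit observed above:

        create or replace procedure put_long_line(p_text in varchar2) is
            c_chunk constant pls_integer := 2000;  -- mirrors the observed limit
            l_pos   pls_integer := 1;
        begin
            while l_pos <= nvl(length(p_text), 0) loop
                dbms_output.put_line(substr(p_text, l_pos, c_chunk));
                l_pos := l_pos + c_chunk;
            end loop;
        end;
        /

    This still splits the text across multiple output lines, so it does not solve the preformatted-text complaint; it only avoids hand-placing the substr calls.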

  • How can I prevent 'objects you are adding to the designer use a different data connection...'?

    - by Timothy Khouri
    I am using Visual Studio 2010, and I have a LINQ-to-SQL DBML file that my colleagues and I are using for this project. We have a connection string in the web.config file that the DBML is using. However, when I drag a new table from my "Server Explorer" onto the DBML file, I get presented with a dialog that demands I do one of two things: allow Visual Studio to change the connection string to match the one in my Solution Explorer, or cancel the operation (meaning I don't get my table). I don't really care too much about the debate as to why the PMs/devs who made this tool didn't allow a third option - "Create the object anyway - don't worry, I'm a developer!" What I am thinking would be a good solution is if I could create a connection in the Server Explorer WITHOUT A WIZARD. If I could just paste a connection string, that would be awesome, because then the DBML designer won't freak out on me :O) If anyone knows the answer to this question, or how to do the above, please lemme know!

  • transfer database from local machine to hosting server

    - by c11ada
    Hey all, I'm trying to transfer my database from my local machine to my server. I'm using the "Publish to provider" wizard in Visual Web Developer to generate a script, and I'm then running the generated script against the server database. I keep getting the following errors; can someone please tell me where I'm going wrong?

        Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_RemoveUsersFromRoles, Line 53 - Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
        Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_RemoveUsersFromRoles, Line 58 - Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
        Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_RemoveUsersFromRoles, Line 87 - Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
        Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_RemoveUsersFromRoles, Line 92 - Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
        Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_AddUsersToRoles, Line 48 - Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
        Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_AddUsersToRoles, Line 52 - Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
        Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_AddUsersToRoles, Line 79 - Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
        Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_AddUsersToRoles, Line 83 - Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
        Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_AddUsersToRoles, Line 93 - Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
        Msg 15151, Level 16, State 1, Line 1 - Cannot find the object 'aspnet_UsersInRoles_AddUsersToRoles', because it does not exist or you do not have permission.
        Msg 15151, Level 16, State 1, Line 1 - Cannot find the object 'aspnet_UsersInRoles_RemoveUsersFromRoles', because it does not exist or you do not have permission.

    Thanks
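    The Msg 468 errors come from comparing columns whose collations differ between the two databases. A minimal sketch of the usual workaround, forcing one side of a comparison to a specific collation with the COLLATE clause (the table and column names here are illustrative, not taken from the failing procedures):

        select *
        from dbo.TableA a
            inner join dbo.TableB b
                on a.UserName = b.UserName collate SQL_Latin1_General_CP1_CI_AS;

        -- COLLATE DATABASE_DEFAULT converts to the current database's collation instead:
        -- on a.UserName = b.UserName collate DATABASE_DEFAULT

    The longer-term fix is usually to create the target database with the same collation as the source before running the generated script.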

  • oracle query with inconsistent results

    - by Spencer Stejskal
    I'm having a very strange problem: I have a complicated view that returns incorrect data when I query on a particular column. Here's an example:

        select empname, has_garnishment
        from timecard_v2
        where empname = 'Testerson, Testy';

    This returns the single result 'Testerson, Testy', 'N'. However, if I use the query:

        select empname, has_garnishment
        from timecard_v2
        where empname = 'Testerson, Testy'
        and has_garnishment = 'Y';

    this returns the single result 'Testerson, Testy', 'Y'. The second query should return a subset of the first query, but it returns a different answer. I have dissected the view and determined that this section of the view definition is where the problem arises (note: I removed all of the select clause except the parts of interest, for clarity; in the full query all joined tables are required):

        SELECT e.fullname empname
             , NVL2(ded.has_garn, 'Y', 'N') has_garnishment
        FROM timecard tc
           , orderdetail od
           , orderassign oa
           , employee e
           , employee3 e3
           , customer10 c10
           , order_misc om
           , (SELECT COUNT(*) has_garn, v_ssn
              FROM deductions
              WHERE yymmdd_stop = 0
                 OR (LENGTH(yymmdd_stop) = 7 AND to_date(SUBSTR(yymmdd_stop, 2), 'YYMMDD') sysdate)
              GROUP BY v_ssn
             ) ded
        WHERE oa.lrn(+) = tc.lrn_order
          AND om.lrn(+) = od.lrn
          AND od.orderno = oa.orderno
          AND e.ssn = tc.ssn
          AND c10.custno = tc.custno
          AND e.lrn = e3.lrn
          AND e.ssn = ded.v_ssn(+)

    One thing of note about the definition of the 'ded' subquery: the v_ssn field is a virtual field on the deductions table. I'm not a DBA, I'm a software developer, but we recently lost our DBA and the new one is still getting up to speed, so I'm trying to debug this issue. That being said, please explain things a little more thoroughly than you would for a fellow Oracle expert. Thanks
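    One hedged diagnostic that can narrow this kind of inconsistency down: run the 'ded' subquery on its own for the affected employee and compare it with what the view reports. This is only a sketch - it assumes you can look up the employee's ssn, and it omits the date filter on yymmdd_stop:

        select v_ssn, count(*) as has_garn
        from deductions
        where v_ssn = :testersons_ssn  -- bind the employee's ssn here
        group by v_ssn;

        -- If this disagrees with the view's NVL2(ded.has_garn, 'Y', 'N') output,
        -- the discrepancy is likely in the view's outer join or in how the
        -- optimizer merges the subquery, rather than in the deductions data itself.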

  • How to solve the "Digg" problem in MongoDB

    - by user193116
    A while back, a Digg developer posted this blog, "http://about.digg.com/blog/looking-future-cassandra", where he described one of the issues that were not optimally solved in MySQL. This was cited as one of the reasons for their move to Cassandra. I have been playing with MongoDB, and I would like to understand how to implement MongoDB collections for this problem. From the article, the schema for this information in MySQL is:

        CREATE TABLE Diggs (
          id      INT(11),
          itemid  INT(11),
          userid  INT(11),
          digdate DATETIME,
          PRIMARY KEY (id),
          KEY user (userid),
          KEY item (itemid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        CREATE TABLE Friends (
          id           INT(10) AUTO_INCREMENT,
          userid       INT(10),
          username     VARCHAR(15),
          friendid     INT(10),
          friendname   VARCHAR(15),
          mutual       TINYINT(1),
          date_created DATETIME,
          PRIMARY KEY (id),
          UNIQUE KEY Friend_unique (userid,friendid),
          KEY Friend_friend (friendid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    This problem is ubiquitous in social networking implementations: people befriend a lot of people, and those friends in turn digg a lot of things. Quickly showing a user what his/her friends are up to is very critical. I understand that several blogs have since provided a pure RDBMS solution with indexes for this issue; however, I am curious as to how this could be solved in MongoDB.

  • iBatis how to solve a more complex N+1 problem

    - by Alvin
    I have a database that is similar to the following:

        create table Store(storeId)
        create table Staff(storeId_fk, staff_id, staffName)
        create table Item(storeId_fk, item_id, itemName)

    The Store table is large. And I have created the following Java beans:

        public class Store {
            List<Staff> myStaff;
            List<Item> myItem;
            ...
        }
        public class Staff { ... }
        public class Item { ... }

    My question is: how can I use iBatis's result map to EFFICIENTLY map from the tables to the Java objects? I tried:

        <resultMap id="storeMap" class="my.example.Store">
            <result property="myStaff" resultMap="staffMap"/>
            <result property="myItem" resultMap="itemMap"/>
        </resultMap>

    (other maps omitted) But it's way too slow, since the Store table is VERY VERY large. I tried to follow the example in Clinton's developer guide for the N+1 solution, but I cannot wrap my mind around how to use "groupBy" for an object with 2 lists... Any help is appreciated!

  • Would you allow this type of query?

    - by user564577
    I'm exploring using an ORM tool in our development shop, in particular Entity Framework 4.0. Since we work with VERY large databases, I'm a bit concerned about the queries it generates. Doing something simple like getting clients with an address in a state looks like the query below. As a database developer or admin, would you allow this? Is it as bad as it looks? Assume every join is on a clustered index.

        SELECT [Project2].[ClientKey] AS [ClientKey],
               [Project2].[FirstName] AS [FirstName],
               [Project2].[LastName] AS [LastName],
               [Project2].[IsEnabled] AS [IsEnabled],
               [Project2].[ChangeUser] AS [ChangeUser],
               [Project2].[ChangeDate] AS [ChangeDate],
               [Project2].[C1] AS [C1],
               [Project2].[AddressKey] AS [AddressKey],
               [Project2].[ClientKey1] AS [ClientKey1],
               [Project2].[AddressTypeCode] AS [AddressTypeCode],
               [Project2].[PrimaryAddress] AS [PrimaryAddress],
               [Project2].[AddressLine1] AS [AddressLine1],
               [Project2].[AddressLine2] AS [AddressLine2],
               [Project2].[City] AS [City],
               [Project2].[State] AS [State],
               [Project2].[ZIP] AS [ZIP]
        FROM (SELECT [Distinct1].[ClientKey] AS [ClientKey],
                     [Distinct1].[FirstName] AS [FirstName],
                     [Distinct1].[LastName] AS [LastName],
                     [Distinct1].[IsEnabled] AS [IsEnabled],
                     [Distinct1].[ChangeUser] AS [ChangeUser],
                     [Distinct1].[ChangeDate] AS [ChangeDate],
                     [Extent3].[AddressKey] AS [AddressKey],
                     [Extent3].[ClientKey] AS [ClientKey1],
                     [Extent3].[AddressTypeCode] AS [AddressTypeCode],
                     [Extent3].[PrimaryAddress] AS [PrimaryAddress],
                     [Extent3].[AddressLine1] AS [AddressLine1],
                     [Extent3].[AddressLine2] AS [AddressLine2],
                     [Extent3].[City] AS [City],
                     [Extent3].[State] AS [State],
                     [Extent3].[ZIP] AS [ZIP],
                     CASE WHEN ([Extent3].[AddressKey] IS NULL) THEN CAST(NULL AS int) ELSE 1 END AS [C1]
              FROM (SELECT DISTINCT [Extent1].[ClientKey] AS [ClientKey],
                           [Extent1].[FirstName] AS [FirstName],
                           [Extent1].[LastName] AS [LastName],
                           [Extent1].[IsEnabled] AS [IsEnabled],
                           [Extent1].[ChangeUser] AS [ChangeUser],
                           [Extent1].[ChangeDate] AS [ChangeDate]
                    FROM [Common].[Clients] AS [Extent1]
                    INNER JOIN [Common].[ClientAddresses] AS [Extent2]
                        ON [Extent1].[ClientKey] = [Extent2].[ClientKey]
                    WHERE ((CAST(CHARINDEX(UPPER('D'), UPPER([Extent1].[LastName])) AS int)) > 0)
                      AND ([Extent1].[IsEnabled] = 1)
                      AND ([Extent2].[City] IS NOT NULL)
                      AND ((UPPER([Extent2].[City])) = (UPPER('Colorado Springs')))
                   ) AS [Distinct1]
              LEFT OUTER JOIN [Common].[ClientAddresses] AS [Extent3]
                  ON [Distinct1].[ClientKey] = [Extent3].[ClientKey]
             ) AS [Project2]
        ORDER BY [Project2].[ClientKey] ASC, [Project2].[FirstName] ASC, [Project2].[LastName] ASC,
                 [Project2].[IsEnabled] ASC, [Project2].[ChangeUser] ASC, [Project2].[ChangeDate] ASC,
                 [Project2].[C1] ASC
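    For comparison, a hand-written query returning roughly the same result - enabled clients whose last name matches the filter and who have an address in the given city, plus all their addresses - might look like the sketch below. It reuses the table and column names from the generated SQL, but it is an approximation, not guaranteed to be semantically identical to the EF output:

        SELECT c.ClientKey, c.FirstName, c.LastName, c.IsEnabled, c.ChangeUser, c.ChangeDate,
               a.AddressKey, a.AddressTypeCode, a.PrimaryAddress,
               a.AddressLine1, a.AddressLine2, a.City, a.State, a.ZIP
        FROM Common.Clients c
        LEFT OUTER JOIN Common.ClientAddresses a
            ON a.ClientKey = c.ClientKey
        WHERE c.IsEnabled = 1
          AND c.LastName LIKE '%D%'
          AND EXISTS (SELECT 1
                      FROM Common.ClientAddresses ca
                      WHERE ca.ClientKey = c.ClientKey
                        AND ca.City = 'Colorado Springs')
        ORDER BY c.ClientKey;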

  • Anyone have experience developing with ESQL/C for INFORMIX-SQL?

    - by Frank Developer
    Does anyone have experience developing with ESQL/C for INFORMIX-SQL, as in calling C functions within the "Perform" screen generator and the "ACE" report writer? I have ISQL without ESQL/C. I experimented with compiling a Perform screen where, in the instructions section, I put "ON BEGINNING CALL userfunc() END", and although I don't have ESQL/C, the Perform screen compiled successfully without errors! Apparently the compiler didn't reject the C call even though there's no ESQL/C or C program linked.

  • Dataset.ReadXml() - Invalid character in the given encoding

    - by NLV
    Hello all, I have saved a dataset in the SQL database in an XML column using the following code:

        XmlDataDocument dd = new XmlDataDocument(dataset);

    and I am passing this XML document as a SQL parameter using:

        param.value = new XmlNodeReader(dd);

    The XML is like:

        <NewDataSet><SubContractChangeOrders><AGCol>1</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>006</Contract_x0020_Number><ContractID>30</ContractID><ChangeOrderID>211</ChangeOrderID><Amount>0.0000</Amount><Udf_CostReimbursableFlag>false</Udf_CostReimbursableFlag><Udf_CustomerCode /><Udf_SubChangeOrderStatus /></SubContractChangeOrders><SubContractChangeOrders><AGCol>2</AGCol><SCO_x0020_Number>002</SCO_x0020_Number><Contract_x0020_Number>006</Contract_x0020_Number><ContractID>30</ContractID><ChangeOrderID>212</ChangeOrderID><Amount>0.0000</Amount><Udf_CostReimbursableFlag>false</Udf_CostReimbursableFlag></SubContractChangeOrders><SubContractChangeOrders><AGCol>3</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>111</Contract_x0020_Number><ContractID>87</ContractID><ChangeOrderID>12</ChangeOrderID><Amount>300.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>4</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>222</Contract_x0020_Number><ContractID>80</ContractID><ChangeOrderID>6</ChangeOrderID><Amount>100.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>5</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>777</Contract_x0020_Number><ContractID>79</ContractID><ChangeOrderID>5</ChangeOrderID><Amount>200.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>6</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>786</Contract_x0020_Number><ContractID>77</ContractID><ChangeOrderID>3</ChangeOrderID><Amount>100.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>7</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>787</Contract_x0020_Number><ContractID>78</ContractID><ChangeOrderID>4</ChangeOrderID><Amount>500.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>8</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con 009</Contract_x0020_Number><ContractID>219</ContractID><ChangeOrderID>78</ChangeOrderID><Amount>9000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>9</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con 010</Contract_x0020_Number><ContractID>220</ContractID><ChangeOrderID>79</ChangeOrderID><Amount>13000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>10</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con 012</Contract_x0020_Number><ContractID>222</ContractID><ChangeOrderID>83</ChangeOrderID><Amount>2300.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>11</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con 020</Contract_x0020_Number><ContractID>226</ContractID><ChangeOrderID>86</ChangeOrderID><Amount>5400.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>12</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con 021</Contract_x0020_Number><ContractID>227</ContractID><ChangeOrderID>87</ChangeOrderID><Amount>2300.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>13</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con001</Contract_x0020_Number><ContractID>208</ContractID><ChangeOrderID>72</ChangeOrderID><Amount>3000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>14</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con002</Contract_x0020_Number><ContractID>209</ContractID><ChangeOrderID>73</ChangeOrderID><Amount>400.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>15</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con003</Contract_x0020_Number><ContractID>210</ContractID><ChangeOrderID>74</ChangeOrderID><Amount>6000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>16</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con004</Contract_x0020_Number><ContractID>211</ContractID><ChangeOrderID>75</ChangeOrderID><Amount>9000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>17</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con005</Contract_x0020_Number><ContractID>213</ContractID><ChangeOrderID>76</ChangeOrderID><Amount>17000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>18</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Cont001</Contract_x0020_Number><ContractID>228</ContractID><ChangeOrderID>89</ChangeOrderID><Amount>2000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>19</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>PUR001</Contract_x0020_Number><ContractID>229</ContractID><ChangeOrderID>88</ChangeOrderID><Amount>1000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>20</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>PUR002</Contract_x0020_Number><ContractID>230</ContractID><ChangeOrderID>90</ChangeOrderID><Amount>3000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>21</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>SC-002</Contract_x0020_Number><ContractID>2</ContractID><ChangeOrderID>7</ChangeOrderID><Amount>200.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>22</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>SC-004</Contract_x0020_Number><ContractID>7</ContractID><ChangeOrderID>65</ChangeOrderID><Amount>1000.0000</Amount></SubContractChangeOrders></NewDataSet>

    I'm trying to read it back as follows:

        // dt holds the fetched row; ds is the DataSet the XML should be loaded into
        DataTable dt = new DataTable();
        DataSet ds = new DataSet();
        using (SqlConnection con = new SqlConnection("Server=#####;Initial Catalog=#####;User ID=####;Pwd=########"))
        {
            using (SqlCommand com = new SqlCommand("select * from dbo.tbl_#####", con))
            {
                using (SqlDataAdapter ada = new SqlDataAdapter(com))
                {
                    ada.Fill(dt);
                    MemoryStream ms = new MemoryStream();
                    object contractXML1 = dt.Rows[0]["SCOXML1"];
                    System.Runtime.Serialization.Formatters.Binary.BinaryFormatter bf = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
                    bf.Serialize(ms, contractXML1);
                    ms.Seek(0, SeekOrigin.Begin);
                    ds.ReadXml(ms);
                }
            }
        }

    I'm getting the following error: "Data at the root level is invalid. Line 1, position 6." Any ideas? NLV

  • Linq to LLBLGen query problem

    - by Jeroen Breuer
    Hello, I've got a Stored Procedure and I'm trying to convert it to a Linq to LLBLGen query. The query in Linq to LLBLGen works, but when I trace the query that is sent to SQL Server it is far from perfect. This is the Stored Procedure:

        ALTER PROCEDURE [dbo].[spDIGI_GetAllUmbracoProducts]
            -- Add the parameters for the stored procedure.
            @searchText nvarchar(255),
            @startRowIndex int,
            @maximumRows int,
            @sortExpression nvarchar(255)
        AS
        BEGIN
            SET @startRowIndex = @startRowIndex + 1
            SET @searchText = '%' + @searchText + '%'

            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- This is the query which will fetch all the UmbracoProducts.
            -- This query also supports paging and sorting.
            WITH UmbracoOverview AS
            (
                SELECT ROW_NUMBER() OVER(
                           ORDER BY
                               CASE WHEN @sortExpression = 'productName' THEN umbracoProduct.productName
                                    WHEN @sortExpression = 'productCode' THEN umbracoProduct.productCode
                               END ASC,
                               CASE WHEN @sortExpression = 'productName DESC' THEN umbracoProduct.productName
                                    WHEN @sortExpression = 'productCode DESC' THEN umbracoProduct.productCode
                               END DESC
                       ) AS row_num,
                       umbracoProduct.umbracoProductId,
                       umbracoProduct.productName,
                       umbracoProduct.productCode
                FROM umbracoProduct
                INNER JOIN product ON umbracoProduct.umbracoProductId = product.umbracoProductId
                WHERE (umbracoProduct.productName LIKE @searchText
                    OR umbracoProduct.productCode LIKE @searchText
                    OR product.code LIKE @searchText
                    OR product.description LIKE @searchText
                    OR product.descriptionLong LIKE @searchText
                    OR product.unitCode LIKE @searchText)
            )
            SELECT UmbracoOverview.UmbracoProductId,
                   UmbracoOverview.productName,
                   UmbracoOverview.productCode
            FROM UmbracoOverview
            WHERE (row_num >= @startRowIndex AND row_num < (@startRowIndex + @maximumRows))

            -- This query will count all the UmbracoProducts.
            -- This query is used for paging inside ASP.NET.
            SELECT COUNT(umbracoProduct.umbracoProductId) AS CountNumber
            FROM umbracoProduct
            INNER JOIN product ON umbracoProduct.umbracoProductId = product.umbracoProductId
            WHERE (umbracoProduct.productName LIKE @searchText
                OR umbracoProduct.productCode LIKE @searchText
                OR product.code LIKE @searchText
                OR product.description LIKE @searchText
                OR product.descriptionLong LIKE @searchText
                OR product.unitCode LIKE @searchText)
        END

    This is my Linq to LLBLGen query:

        using System.Linq.Dynamic;

        var q = (from up in MetaData.UmbracoProduct
                 join p in MetaData.Product on up.UmbracoProductId equals p.UmbracoProductId
                 where up.ProductCode.Contains(searchText)
                    || up.ProductName.Contains(searchText)
                    || p.Code.Contains(searchText)
                    || p.Description.Contains(searchText)
                    || p.DescriptionLong.Contains(searchText)
                    || p.UnitCode.Contains(searchText)
                 select new UmbracoProductOverview
                 {
                     UmbracoProductId = up.UmbracoProductId,
                     ProductName = up.ProductName,
                     ProductCode = up.ProductCode
                 }).OrderBy(sortExpression);

        //Save the count in HttpContext.Current.Items. This value will only be saved during 1 single HTTP request.
        HttpContext.Current.Items["AllProductsCount"] = q.Count();

        //Return the results paged.
        return q.Skip(startRowIndex).Take(maximumRows).ToList<UmbracoProductOverview>();

    This is my initial expression to process:

        value(SD.LLBLGen.Pro.LinqSupportClasses.DataSource`1[Eurofysica.DB.EntityClasses.UmbracoProductEntity]).Join(value(SD.LLBLGen.Pro.LinqSupportClasses.DataSource`1[Eurofysica.DB.EntityClasses.ProductEntity]), up => up.UmbracoProductId, p => p.UmbracoProductId, (up, p) => new <>f__AnonymousType0`2(up = up, p = p)).Where(<>h__TransparentIdentifier0 => (((((<>h__TransparentIdentifier0.up.ProductCode.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText) || <>h__TransparentIdentifier0.up.ProductName.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.Code.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.Description.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.DescriptionLong.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.UnitCode.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText))).Select(<>h__TransparentIdentifier0 => new UmbracoProductOverview() {UmbracoProductId = <>h__TransparentIdentifier0.up.UmbracoProductId, ProductName = <>h__TransparentIdentifier0.up.ProductName, ProductCode = <>h__TransparentIdentifier0.up.ProductCode}).OrderBy( => .ProductName).Count()

    And this is how the queries that are sent to SQL Server look. Select query:

        SELECT [LPA_L2].[umbracoProductId] AS [UmbracoProductId],
               [LPA_L2].[productName] AS [ProductName],
               [LPA_L2].[productCode] AS [ProductCode]
        FROM ( [eurofysica].[dbo].[umbracoProduct] [LPA_L2]
               INNER JOIN [eurofysica].[dbo].[product] [LPA_L3]
                   ON [LPA_L2].[umbracoProductId] = [LPA_L3].[umbracoProductId])
        WHERE ( ( ( ( ( ( ( ( [LPA_L2].[productCode] LIKE @ProductCode1)
                   OR ( [LPA_L2].[productName] LIKE @ProductName2))
                   OR ( [LPA_L3].[code] LIKE @Code3))
                   OR ( [LPA_L3].[description] LIKE @Description4))
                   OR ( [LPA_L3].[descriptionLong] LIKE @DescriptionLong5))
                   OR ( [LPA_L3].[unitCode] LIKE @UnitCode6))))

        Parameter: @ProductCode1 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @ProductName2 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @Code3 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @Description4 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @DescriptionLong5 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @UnitCode6 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".

    Count query:

        SELECT TOP 1 COUNT(*) AS [LPAV_]
        FROM (SELECT [LPA_L2].[umbracoProductId] AS [UmbracoProductId],
                     [LPA_L2].[productName] AS [ProductName],
                     [LPA_L2].[productCode] AS [ProductCode]
              FROM ( [eurofysica].[dbo].[umbracoProduct] [LPA_L2]
                     INNER JOIN [eurofysica].[dbo].[product] [LPA_L3]
                         ON [LPA_L2].[umbracoProductId] = [LPA_L3].[umbracoProductId])
              WHERE ( ( ( ( ( ( ( ( [LPA_L2].[productCode] LIKE @ProductCode1)
                         OR ( [LPA_L2].[productName] LIKE @ProductName2))
                         OR ( [LPA_L3].[code] LIKE @Code3))
                         OR ( [LPA_L3].[description] LIKE @Description4))
                         OR ( [LPA_L3].[descriptionLong] LIKE @DescriptionLong5))
                         OR ( [LPA_L3].[unitCode] LIKE @UnitCode6))))) [LPA_L1]

        Parameter: @ProductCode1 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @ProductName2 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @Code3 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @Description4 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @DescriptionLong5 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".
        Parameter: @UnitCode6 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%".

    As you can see, no sorting or paging is done (as there is in my Stored Procedure). This is probably done in code after all the results are fetched, which costs a lot of performance! Does anybody know how I can convert my Stored Procedure to Linq to LLBLGen the proper way?
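    For reference, the paged shape the poster wants the ORM to emit is the ROW_NUMBER pattern already used in the stored procedure. A stripped-down sketch of that target output follows (parameter names reused from the procedure, sort hard-coded to productName for brevity):

        SELECT UmbracoProductId, productName, productCode
        FROM (SELECT ROW_NUMBER() OVER (ORDER BY up.productName) AS row_num,
                     up.umbracoProductId AS UmbracoProductId,
                     up.productName, up.productCode
              FROM umbracoProduct up
              INNER JOIN product p ON up.umbracoProductId = p.umbracoProductId
              WHERE up.productName LIKE @searchText
                 OR up.productCode LIKE @searchText  -- plus the other LIKE predicates
             ) x
        WHERE x.row_num >= @startRowIndex
          AND x.row_num < @startRowIndex + @maximumRows;

    One thing worth checking is whether the OrderBy(sortExpression) call from System.Linq.Dynamic, or the Count() call, is forcing evaluation before Skip/Take, since the trace shows neither TOP/ROW_NUMBER paging nor an ORDER BY in the generated SQL.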

  • How do I Integrate Production Database Hot Fixes into Shared Database Development model?

    - by TetonSig
    We are using SQL Source Control 3, SQL Compare, and SQL Data Compare from Red Gate, plus Mercurial repositories, TeamCity, and a set of 4 environments including production. I am working on getting us to a dedicated environment per developer, but for at least the next 6 months we are stuck with a shared model.

    To summarize our current system: we have a DEV SQL Server where developers first make changes and additions. They commit their changes through SQL Source Control to a local hgdev repository. When they execute an hg push to the main repository, TeamCity listens for that and then (among other things) pushes the hgdev repository to hgrc. Another TeamCity process listens for that, does a pull from hgrc, and deploys the latest to a QA SQL Server where regression and integration tests are run. When those pass, a push from hgrc to hgprod occurs. We do a compare of hgprod to our PREPROD SQL Server and generate deployment/rollback scripts for our production release.

    Separate from the above, we have database hot fixes that need to be applied between releases. The process there is for our Operations team to make changes on the PREPROD database and then, after testing, use SQL Source Control to commit their hot fix changes to hgprod from the PREPROD database, then compare hgprod to PRODUCTION, create deployment scripts, and run them on PRODUCTION.

    If we were in a dedicated-database-per-developer model, we could simply push hgprod back to hgdev automatically and merge in the hot fix change (through TeamCity monitoring for hgprod checkins), and developers would pick it up and merge it into their local repository and database periodically. However, given that with a shared model the DEV database itself is the source of all changes, this won't work. Pushing hot fixes back to hgdev will show up in SQL Source Control as being different from the DEV SQL Server, and we would therefore need to overwrite the repository with the "change" from the DEV SQL Server.

    My only workaround so far is to have Ops assign a developer the hotfix ticket with a script attached, and then we run their hot fixes against DEV ourselves to merge them back in. I'm not happy with that solution. Other than working faster to get to a dedicated environment, are there other ways to keep this loop going automatically?

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial Microsoft recently released the first CTP of a new development environment called WebMatrix, which along with some of its supporting technologies are squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let’s face it: ASP.NET development isn’t exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it’s not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft’s attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies: IIS Developer Express IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that’s been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server such as the inability to serve custom ISAPI extensions (i.e., things like PHP or ASP classic for example), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set providing much better compatibility between development and live deployment scenarios. SQL Server Compact 4.0 Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new-it’s been around for a few years, but it’s been severely hobbled in the past by terrible tool support and the inability to support more than a single connection in Microsoft’s attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don’t have to install a special system configuration to run SQL Compact as it is a drop-in database engine: Copy the small assembly into your BIN folder (or from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill. 
Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

ASP.NET Razor View Engine

What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There's no support for a "page model" or any of the other Web Forms features of the full-page framework, just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here's a very simple example of what Razor markup looks like, along with some comment annotations:

<!DOCTYPE html>
<html>
    <head>
        <title></title>
    </head>
    <body>
    <h1>Razor Test</h1>

    <!-- simple expressions -->
    @DateTime.Now
    <hr />
    <!-- method expressions -->
    @DateTime.Now.ToString("T")

    <!-- code blocks -->
    @{
        List<string> names = new List<string>();
        names.Add("Rick");
        names.Add("Markus");
        names.Add("Claudio");
        names.Add("Kevin");
    }

    <!-- structured block statements -->
    <ul>
    @foreach(string name in names){
            <li>@name</li>
    }
    </ul>

    <!-- Conditional code -->
    @if(true) {
        <!-- Literal Text embedding in code -->
        <text>
        true
        </text>;
    }
    else
    {
        <!-- Literal Text embedding in code -->
        <text>
        false
        </text>;
    }
    </body>
</html>

Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a "page" becomes a code file, with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn't look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine, minus the overhead of the large Page object model. However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC's view implementation as well as many of the helper classes found in MVC. It's not hard to guess the motivation for this sort of view engine: for beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content.
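As a rough illustration of that compilation model, a page containing just <h1>@DateTime.Now</h1> compiles into something shaped like the following. This is a hand-written approximation, not the actual generated source, and the class and member names are assumptions:

using System;

// Hypothetical approximation of what the Razor compiler emits for "<h1>@DateTime.Now</h1>":
// markup turns into WriteLiteral() calls, expressions into Write() calls.
public class _Page_Default_cshtml : Microsoft.WebPages.WebPage
{
    public override void Execute()
    {
        WriteLiteral("<h1>");
        Write(DateTime.Now);      // expression output, written to the response
        WriteLiteral("</h1>\n");  // literal markup
    }
}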
The syntax is easier to read and grok, and much shorter to type, than ASP.NET alligator tags (<% %>), and it's easier to understand aesthetically what's happening in the markup code. Razor is also a better fit for Microsoft's vision of ASP.NET MVC: it's a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn't carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well; it makes it much easier to create script- or metadata-driven output generators for many types of applications, from code/screen generators, to simple form letters, to data merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize their scripting engines (die T4, die!) on this engine. Razor also better fits the "view-based" approach, where the view is supposed to be mostly a visual representation that doesn't hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn't be surprised if Razor becomes the new standard view engine for MVC in the future; in fact, there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that's not working currently unless you manually configure the script mappings and add the appropriate assemblies. It's possible to do it, but it's probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

WebMatrix Development Environment

To tie these three technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There's no debugging support for code at this time. Note that Razor pages don't require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that's needed to see changes while testing an application locally. It's essentially using the auto-compiling Web Project model that was introduced with .NET 2.0: all code is compiled at run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed.
WebMatrix leverages the Web Platform Installer to pull the pieces down from websites, in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes, fill in a few simple configuration options, and you end up with a running application that's ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool can also handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

Simplified Database Access

The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

@{
    var db = Database.OpenFile("FirstApp.sdf");
    string sql = "select * from customers where Id > @0";
}
<ul>
@foreach(var row in db.Query(sql, 1)){
    <li>@row.FirstName @row.LastName</li>
}
</ul>

The Query function takes a SQL statement plus any number of positional (@0, @1, etc.) SQL parameters passed as simple values. The result is returned as a collection of rows, which in turn expose dynamic properties for each of the columns, giving easy (though untyped) access to each of the fields. Likewise, Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter-passing schemes. Note that these queries use string-based SQL rather than LINQ or Entity Framework's strongly typed queries. While this may seem like a step back, it's also in line with the expectations of non-.NET script developers, who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why something like this wasn't included in .NET from the beginning, leaving developers to build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax, which uses simple, static data object methods to perform simple data tasks with one line of code, is common in scripting languages and is a good match for folks working in PHP/Python, etc. It seems Microsoft has taken great advantage of .NET 4.0's dynamic typing to provide this sort of interface for row iteration, where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files; I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn't accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code, or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies, as required.

The good, the bad, the obnoxious: it's still .NET

The beauty (or curse, depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it's still .NET. Although the syntax may look foreign, it's still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder.
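For instance, here is a minimal sketch of falling back to plain ADO.NET against full SQL Server from a helper class referenced by page code; the connection string and schema are hypothetical:

using System.Collections.Generic;
using System.Data.SqlClient;

public static class CustomerQueries
{
    // Plain ADO.NET works alongside Razor pages because it's all .NET underneath;
    // the connection string and table/column names here are assumptions for illustration.
    public static List<string> GetCustomerNames(int minId)
    {
        var names = new List<string>();
        using (var conn = new SqlConnection("Server=.;Database=FirstApp;Integrated Security=true"))
        using (var cmd = new SqlCommand("select FirstName, LastName from Customers where Id > @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", minId);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    names.Add(reader.GetString(0) + " " + reader.GetString(1));
            }
        }
        return names;
    }
}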
In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a host of HTML helpers. If you've used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy; there's a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

Who needs WebMatrix? Uhm… good question

Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity: providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started. There's also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice, and something it would be nice to eventually see in Visual Studio as well. The question is whether this is too little, too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for simpler platforms like PHP or Python, which cater to the down-and-dirty developer. Microsoft will be hard pressed to win those folks, and other hardcore PHP developers, back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it's still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect they're not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers try to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers, you are still going to have to write some C# or VB or other .NET code. If the target is a hobbyist, amateur, or non-programmer, the learning curve isn't made any easier by WebMatrix; it's just been shifted a bit further along in your development endeavor, to the point when you run out of canned components supplied either by Microsoft or the community. The database helpers are interesting, and I've actually heard a lot of discussion from various developers, who have been resisting .NET for a really long time, perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it's practically trivial to create helpers like these or pick them up from countless available libraries), but there it is. It also shows that there are plenty of developers out there who are more interested in getting stuff done easily than in necessarily following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big, life-changing issues of development rather than the bread-and-butter scenarios that many developers need to get their work accomplished.
And that, in the end, may be WebMatrix's main raison d'être: to remind Microsoft that simpler, more high-level solutions are needed to appeal to non-high-end developers, alongside the tools for high-end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it can really be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components, which would be a great addition to Visual Studio as well), the rest of the development environment just feels like crippleware, with required functionality missing: especially debugging and IntelliSense, but also general editor support. It's not clear whether this is because the product is still in an early alpha release or whether it's simply designed to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it. If anything, WebMatrix's technology pieces (which are really independent of the WebMatrix product) are what's interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of the concerns people have had, and complained about, with previous SQL Compact implementations. By far the most interesting and useful technology, though, seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone, pluggable scripting/view engine. The first winner here is going to be ASP.NET MVC, which can now have a cleaner view model that isn't made inconsistent by the baggage of non-implemented Web Forms features that don't work in MVC. But I expect that Razor will eventually end up in many other applications as a scripting and code generation engine. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there's some synergy flowing both ways between Razor and MVC. As an existing ASP.NET developer who's already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn't give you anything you need: the tools provided are minimal and offer nothing that you can't get in Visual Studio today, except the minimal Razor syntax highlighting, so there's little reason to take a step back. With Visual Studio integration coming later, there's little reason to look at WebMatrix for tooling. It's good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we've been piling on more and more new features without taking a step back to see how complicated the development/configuration/deployment process has become. Sometimes it's good to take a step, or several steps, back, take another look, and realize just how far we've come.
WebMatrix is one of those reminders, and one that will likely result in some positive changes on the platform as a whole. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, IIS7

    Read the article

  • PASS Summit 2012: keynote and Mobile BI announcements #sqlpass

    - by Marco Russo (SQLBI)
Today at PASS Summit 2012 there were several announcements during the keynote. Moreover, other news was not highlighted in the keynote but is equally important, if not more so, for the BI community. Let's start with the big news in the keynote (other details are on the SQL Server Blog):

- Hekaton: this is the codename for the in-memory OLTP technology that will appear (I suppose) in the next release of the SQL Server relational engine. The improvement in performance and scalability is impressive and it enables new scenarios. I'm curious to see whether it can also be used to improve ETL performance, and how it differs from using SSD technology.
- Updates on Columnstore: in the next major release of SQL Server, columnstore indexes will be updatable and it will be possible to create a clustered columnstore index. This is really great news for near-real-time reporting needs!
- Polybase: in 2013, SQL Server 2012 Parallel Data Warehouse (PDW) will debut, and it will include the Polybase technology. Using Polybase, a single T-SQL query can run across relational data and Hadoop data: a single query language for both. This sounds really interesting for using Big Data in a more integrated way with existing relational databases. And, of course, for loading a data warehouse using Big Data, which is the ultimate goal that we BI pros all have, right?
- SQL Server 2012 SP1: Service Pack 1 for SQL Server 2012 is available now, and it enables the use of PowerPivot for SharePoint and Power View on a SharePoint 2013 installation with Excel 2013.
- Power View works with Multidimensional cubes: the long-awaited feature of being able to use Power View with Multidimensional cubes was shown by Amir Netz in an amazing demonstration during the keynote. The interesting thing is that the data model behind it was based on a many-to-many relationship (something that is not fully supported by Power View with Tabular models). Another interesting aspect is that it is Analysis Services 2012 that supports DAX queries run on a Multidimensional model, enabling the use of any future tool that generates DAX queries on top of a Multidimensional model. There is still no information about availability, but this is *not* included in SQL Server 2012 SP1.

So what about Mobile BI? Well, even though it was not announced during the keynote, there is a dedicated session on this topic, and there is very important news in this area: for iOS, Android and Microsoft mobile platforms, the commitment is to get data exploration and visualization capabilities working by June 2013. This should impact at least Power View and SharePoint/Excel Services. This is the type of UI experience we have all been waiting for, in order to satisfy the requests coming from users and customers. The important news here is that native applications will be available for both iOS and Windows 8, so it seems that Android will initially be supported only through the web. Unfortunately we haven't seen any demo, so it's not clear what the offline navigation experience will be (or whether there will be one). But at least we know that Microsoft is working on native applications in this area. I'm not too surprised that HTML5 is not the magic bullet for all the platforms. The next PASS Business Analytics conference in 2013 seems a good place to see this in action, even if I hope we don't have to wait another six months before seeing some demos of native BI applications on mobile platforms!
Viewing Reporting Services reports on the iPad is supported starting with SQL Server 2012 SP1, which was released today. This is another good reason to install SP1 on SQL Server 2012. If you are at PASS Summit 2012, come and join me, Alberto Ferrari and Chris Webb at our book signing event tomorrow, Thursday, November 8, 2012, at the bookstore between 12:00pm and 12:30pm, or follow one of our sessions!

    Read the article

  • Exception "The operation is not valid for the state of the transaction" using TransactionScope

    - by Lanfear
We have a web service on server #1 and a database on server #2. The web service uses TransactionScope to produce a distributed transaction. Everything is correct. And we have another database on server #3. We had some problems with this server, so we reinstalled the operating system and software. We configured MSDTC and tried to use the web service on server #1 to communicate with the database on this server. Now, after the first select statement within the transaction scope, we get: "The operation is not valid for the state of the transaction". This exception is thrown on every web service request that uses a transaction scope. Server #2 and server #3 are almost identical; the difference can only be in settings. .NET Framework 3.5 SP1 and SQL Server SP3 are installed on all servers.

Full stack trace:

System.Transactions.TransactionState.EnlistPromotableSinglePhase(InternalTransaction tx, IPromotableSinglePhaseNotification promotableSinglePhaseNotification, Transaction atomicTransaction)
System.Transactions.Transaction.EnlistPromotableSinglePhase(IPromotableSinglePhaseNotification promotableSinglePhaseNotification)
System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx)
System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx)
System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction)
System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction)
System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
System.Data.SqlClient.SqlConnection.Open()
NHibernate.Connection.DriverConnectionProvider.GetConnection()
NHibernate.Impl.SessionFactoryImpl.OpenConnection()

I searched for this message but didn't find any appropriate solution. So what settings should I check, and what exactly should I do to fix it?
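For context, here is a minimal sketch of the pattern in play; the server names and connection strings are hypothetical, but the promotion behavior is the standard one: the first connection enlists in a lightweight transaction, and enlisting a second server promotes it to a distributed (MSDTC) transaction, which is where a misconfigured DTC typically surfaces as this exception:

using System.Data.SqlClient;
using System.Transactions;

class TransactionDemo
{
    static void Main()
    {
        // Hypothetical connection strings for the two database servers.
        const string server2 = "Server=server2;Database=AppDb;Integrated Security=true";
        const string server3 = "Server=server3;Database=AppDb;Integrated Security=true";

        using (var scope = new TransactionScope())
        {
            using (var conn = new SqlConnection(server2))
            {
                conn.Open(); // first enlistment: a lightweight local transaction
                using (var cmd = new SqlCommand("select 1", conn))
                    cmd.ExecuteScalar();
            }
            using (var conn = new SqlConnection(server3))
            {
                conn.Open(); // second resource manager: promotes to a distributed MSDTC transaction
                using (var cmd = new SqlCommand("select 1", conn))
                    cmd.ExecuteScalar();
            }
            scope.Complete(); // commit both or neither
        }
    }
}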

    Read the article

  • ASP.NET MVC/LINQ: What's the proper way to iterate through a Linq.EntitySet in a View?

    - by Terminal Frost
OK, so I have a strongly-typed Customer "Details" view that takes a Customer object as its model. I am using LINQ to SQL, and every Customer can have multiple (parking) Spaces. This is a FK relationship in the database, so my LINQ-generated Customer model has a "Spaces" collection. Great! Here is a code snippet from my CustomerRepository where I iterate through the Customer's parking spaces to delete all payments, then the spaces, and finally the customer:

public void Delete(Customer customer)
{
    foreach (Space s in customer.Spaces)
        db.Payments.DeleteAllOnSubmit(s.Payments);
    db.Spaces.DeleteAllOnSubmit(customer.Spaces);
    db.Customers.DeleteOnSubmit(customer);
}

Everything works as expected! Now in my "Details" view I want to populate a table with the Customer's Spaces:

<% foreach (var s in Model.Spaces) { %>
<tr>
    <td><%: s.ID %></td>
    <td><%: s.InstallDate %></td>
    <td><%: s.SpaceType %></td>
    <td><%: s.Meter %></td>
</tr>
<% } %>

I get the following error:

foreach statement cannot operate on variables of type 'System.Data.Linq.EntitySet' because 'System.Data.Linq.EntitySet' does not contain a public definition for 'GetEnumerator'

Finally, if I add this bit of code to my Customer partial class and use the foreach in the view to iterate through ParkingSpaces, everything works as expected:

public IEnumerable<Space> ParkingSpaces
{
    get { return Spaces.AsEnumerable(); }
}

The problem here is that I don't want to repeat myself. I was also thinking that I could use a ViewModel to pass a Spaces collection to the view; however, LINQ already infers and creates the Spaces property on the Customer model, so I think it would be cleanest to just use that. Am I missing something simple, or am I approaching this incorrectly? Thanks!
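One way to avoid the extra property on the partial class, sketched here with a hypothetical view model (the names are illustrative, not from the original code), is to shape the data in the controller and hand the view a plain IEnumerable<Space>:

using System.Collections.Generic;
using System.Linq;

// Hypothetical view model: the view binds to this instead of the LINQ entity,
// so the foreach in the view operates on a plain IEnumerable<Space>.
public class CustomerDetailsViewModel
{
    public Customer Customer { get; set; }
    public IEnumerable<Space> Spaces { get; set; }
}

// In the controller action (sketch):
// var customer = repository.GetCustomer(id);
// var model = new CustomerDetailsViewModel
// {
//     Customer = customer,
//     Spaces = customer.Spaces.ToList() // materialize the EntitySet up front
// };
// return View(model);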

    Read the article

  • Manual (Dynamic) LINQ subquery using IN clause

    - by immortalali-msn-com
Hi everyone, I want to query the DB through LINQ by writing manual SQL. My LINQ method is:

var q = db.TableView.Where(sqlAfterWhere);
returnValue = q.Count();

This method works fine if the value passed in the "sqlAfterWhere" variable (a String) is:

it.Name = 'xyz'

But what if I want to use an IN clause with a subquery? (I need to use 'it' before every column name in the above query for it to work.) I can't use 'it' before the subquery's columns, as it's a separate query, so what should I do? If I don't use anything and use the column names directly, it gives an error saying "<column> could not be resolved", where <column> is my column name without 'it.' at the beginning. So the query that is not working is (this is a string passed to the variable above):

it.Name IN (SELECT Name FROM TableName WHERE Address LIKE '%SomeAddress%')

The errors come out as:

Name could not be resolved
Address could not be resolved

The exact error is: "'Name' could not be resolved in the current scope or context. Make sure that all referenced variables are in scope, that required schemas are loaded, and that namespaces are referenced correctly., near simple identifier, line 6, column 25." The same error appears for "Address" as well. If I use 'it.' before these columns, it gives an error saying: "The element type 'Edm.Int32' and the CollectionType 'Transient.collection[Transient.rowtype(GroupID,Edm.Int32(Nullable=True,DefaultValue=))]' are not compatible. The IN expression only supports entity, primitive, and reference types. , near WHERE predicate, line 6, column 14." Thanks for the help
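If dropping the string-based predicate is an option, the same IN semantics can be expressed with a composed LINQ query; a minimal sketch, assuming the table and property names from the question (LINQ providers translate Contains over a subquery into SQL IN/EXISTS):

using System.Linq;

// The inner query supplies the IN list; Contains lets the provider
// compose both queries into a single SQL statement.
var innerNames = db.TableName
    .Where(t => t.Address.Contains("SomeAddress")) // translates to LIKE '%SomeAddress%'
    .Select(t => t.Name);

int returnValue = db.TableView
    .Where(v => innerNames.Contains(v.Name))       // translates to IN (subquery)
    .Count();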

    Read the article

< Previous Page | 592 593 594 595 596 597 598 599 600 601 602 603  | Next Page >