Search Results


  • Building Publishing Pages in Code

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/27/154478.aspx

    One of the mantras we developers try to follow is to ensure that the solution package we deliver to the client is complete. We build web parts, master pages, images, CSS files and other artifacts that we push to the client with a WSP (solution package), and then we have them finish the solution by adding the web parts to their site pages themselves. I am a proponent that we, the developers, should minimize this time-consuming work and build these site pages in code. I found a few blogs and some MSDN documentation, but not really a complete solution that has all these artifacts working together. What I will discuss, and provide a solution for, is a package that has:

    1.  Master Page
    2.  Page Layout
    3.  Page Web Parts
    4.  Site Pages

    Almost all of this is done in code, without the development team having to finish the site-building process by spending a few hours or days completing the site. I am not implying that we skip this during development; in fact, we build these pages incrementally while testing our web parts. I am saying that the final action in our solution is to take all these artifacts and add them to the site pages in code. The client then only needs to activate a few features and, voilà, their site appears! I had a project that had me build 8 pages like this as part of the solution.

    In this blog post, I am taking a master page solution that I have called DJGreenMaster. On my Office 365 development site it is a generic master page for a SharePoint 2010 site, with a centered three-column layout and a footer that uses a SharePoint list and web part for the footer links. I use this master page a lot in my site development; it is easy to change the color and site logo with a little CSS. I am going to add a few web parts for discussion purposes and then add those web parts to a site page in code.

    Let's look at the solution package for DJGreenMaster, as that will be the basis project for building the site pages. It is a complete solution to add a master page to a site collection, and it contains:

    1.  A Master Page module which contains the Master Page and Page Layout
    2.  The Footer module to add the Footer Web Part
    3.  Miscellaneous modules to add images, jQuery, CSS and a subsite page
    4.  Three features and two feature event receivers:
        a.  DJGreenCSS, used to add the master page CSS file to the Style Sheet Library, with an event receiver to check it in
        b.  DJGreenMaster, used to add the Master Page and Page Layout; an event receiver changes the master page to DJGreenMaster, creates the footer list and checks the files in
        c.  DJGreenMasterWebParts, which adds the Footer Web Part to the site collection

    I won't go over the code for this, as I will give it to you at the end of this blog post, and I have discussed creating a list in code in a previous post. So we have the basis to begin what is germane to this discussion: the first two requirements are completed, and I now need to add the page web parts and build the pages in code. For the page web parts I will use two downloads: a Weather Web Part from CodePlex which, for simplicity, does not use a SharePoint custom list, and a SharePoint Custom Calendar Web Part from MSDN, to which I added some jQuery so that color-coded events can exceed the built-in 10 calendar overlays.
    Here is the solution with the added projects, along with screen shots of the Weather Web Part deployed and the site calendar with jQuery. Okay, now we get to the final item: creating the publishing pages. We need to add a feature to the DJGreenMaster project, which I will name DJSitePages, and give it an event receiver. We will build the pages at the site collection level, and all of the code necessary will be contained in the event receiver. I added a reference to Microsoft.SharePoint.Publishing.dll, found in the ISAPI folder of the 14 hive. First we will add some static methods which we will call from our event receiver:

        private static void checkOut(string pagename, PublishingPage p)
        {
            if (p.Name.Equals(pagename, StringComparison.InvariantCultureIgnoreCase))
            {
                if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.None)
                {
                    p.CheckOut();
                }

                if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.Online)
                {
                    p.CheckIn("initial");
                    p.CheckOut();
                }
            }
        }

        private static void checkin(PublishingPage p, PublishingWeb pw)
        {
            SPFile publishFile = p.ListItem.File;

            if (publishFile.CheckOutType != SPFile.SPCheckOutType.None)
            {
                publishFile.CheckIn("CheckedIn");
                publishFile.Publish("published");
            }

            // In case of content approval, approve the file (publishing site only)
            if (pw.PagesList.EnableModeration)
            {
                publishFile.Approve("Initial");
            }
            publishFile.Update();
        }

    In a publishing site, CheckIn and CheckOut are required when dealing with pages. Okay, let's look at the FeatureActivated event receiver:

        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            object oParent = properties.Feature.Parent;

            if (properties.Feature.Parent is SPWeb)
            {
                currentWeb = (SPWeb)oParent;
                currentSite = currentWeb.Site;
            }
            else
            {
                currentSite = (SPSite)oParent;
                currentWeb = currentSite.RootWeb;
            }

            // create the publishing pages
            CreatePublishingPage(currentWeb, "Home.aspx", "ThreeColumnLayout.aspx", "Home");
            //CreatePublishingPage(currentWeb, "Dummy.aspx", "ThreeColumnLayout.aspx", "Dummy");
        }

    Basically we are calling the CreatePublishingPage method with four parameters: the current web, the name of the page, the page layout, and the title of the page.
    Let's look at the CreatePublishingPage method:

        private void CreatePublishingPage(SPWeb site, string pageName, string pageLayoutName, string title)
        {
            PublishingSite pubSiteCollection = new PublishingSite(site.Site);
            PublishingWeb pubSite = null;
            if (pubSiteCollection != null)
            {
                // Assign an object to the pubSite variable
                if (PublishingWeb.IsPublishingWeb(site))
                {
                    pubSite = PublishingWeb.GetPublishingWeb(site);
                }
            }

            // Search for the page layout for creating the new page.
            // If it cannot be found (== null), return, because the page
            // has to be based on an existing page layout.
            PageLayout currentPageLayout = FindPageLayout(pubSiteCollection, pageLayoutName);
            if (currentPageLayout == null)
            {
                return;
            }

            PublishingPageCollection pages = pubSite.GetPublishingPages();
            foreach (PublishingPage p in pages)
            {
                // The page already exists
                if (p.Name == pageName) return;
            }

            PublishingPage newPage = pages.Add(pageName, currentPageLayout);
            newPage.Description = pageName.Replace(".aspx", "");
            // Here you can set some properties like:
            newPage.IncludeInCurrentNavigation = true;
            newPage.IncludeInGlobalNavigation = true;
            newPage.Title = title;

            // build the page
            switch (pageName)
            {
                case "Home.aspx":
                    checkOut(pageName, newPage);
                    BuildHomePage(site, newPage);
                    break;
                default:
                    break;
            }

            // newPage.Update();
            // Now we can check in the newly created page to the Pages library
            checkin(newPage, pubSite);
        }

    The narrative of what is going on here is:

    1.  We find out whether we are dealing with a publishing web.
    2.  We get the page layout.
    3.  We create the page in the Pages list.
    4.  Based on the page name, we build that page. (Here is where we can add the methods to build multiple pages.)

    In the switch we call BuildHomePage, where all the work is done to add the web parts. Prior to adding the web parts we need to add references to the two web part projects in the solution:

        using WeatherWebPart.WeatherWebPart;
        using CSSharePointCustomCalendar.CustomCalendarWebPart;

    We can then reference them in the BuildHomePage method.
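    One helper that CreatePublishingPage relies on, FindPageLayout, is referenced but not shown in the post. The following is only a minimal sketch of what such a helper might look like, assuming it simply walks the site collection's page layouts and matches on the layout file name (the implementation is my assumption, not the author's code):

        private static PageLayout FindPageLayout(PublishingSite pubSiteCollection, string pageLayoutName)
        {
            // Walk every page layout in the site collection and match on file name
            foreach (PageLayout layout in pubSiteCollection.GetPageLayouts(true))
            {
                if (layout.Name.Equals(pageLayoutName, StringComparison.InvariantCultureIgnoreCase))
                {
                    return layout;
                }
            }

            // CreatePublishingPage treats null as "layout not found" and aborts page creation
            return null;
        }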
    Let's look at BuildHomePage:

        private static void BuildHomePage(SPWeb web, PublishingPage pubPage)
        {
            // build the pages
            // Get the web part manager for each page and do the same code as below
            // (copy and paste, change to the web parts for the page)
            SPLimitedWebPartManager mgr = web.GetLimitedWebPartManager(web.Url + "/Pages/Home.aspx",
                System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);

            WeatherWebPart.WeatherWebPart.WeatherWebPart wwp = new WeatherWebPart.WeatherWebPart.WeatherWebPart()
                { ChromeType = PartChromeType.None, Title = "Todays Weather", AreaCode = "2504627" };
            //Dictionary<string, string> wwpDic = new Dictionary<string, string>();
            //wwpDic.Add("AreaCode", "2504627");
            //setWebPartProperties(wwp, "WeatherWebPart", wwpDic);

            // Add the web part to a page layout web part zone
            mgr.AddWebPart(wwp, "g_685594D193AA4BBFABEF2FB0C8A6C1DD", 1);

            CSSharePointCustomCalendar.CustomCalendarWebPart.CustomCalendarWebPart cwp = new CustomCalendarWebPart()
                { ChromeType = PartChromeType.None, Title = "Corporate Calendar", listName = "CorporateCalendar" };

            mgr.AddWebPart(cwp, "g_20CBAA1DF45949CDA5D351350462E4C6", 1);

            pubPage.Update();
        }

    Here is what we are doing:

    1.  We get a reference to the SharePoint limited web part manager for Home.aspx.
    2.  We instantiate a new Weather Web Part and use the manager to add it to the page in a web part zone identified by ID; this is why we need a page layout whose zone IDs the developer knows.
    3.  We instantiate the Calendar Web Part and use the manager to add it to the page.
    4.  We then call the publishing page's Update method.
    5.  Lastly, the CreatePublishingPage method checks in the page just created.

    Here is a screen shot of the page right after a deploy. Okay, I know we could make a home page look much better! However, I built this whole integrated solution in less than a day, with the caveat that the Green Master was already built. So what am I saying? Build your web parts, master pages and other artifacts, and at the very end of the engagement build the pages. The client will be very happy! Here is the code for this solution: Code
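    One loose end from BuildHomePage: the setWebPartProperties helper in the commented-out lines is never shown in the post. Purely as a hypothetical sketch (my assumption of what such a helper might do, not the author's code), it could copy string values from a dictionary onto matching web part properties via reflection:

        private static void setWebPartProperties(System.Web.UI.WebControls.WebParts.WebPart webPart,
                                                 string webPartName,
                                                 Dictionary<string, string> properties)
        {
            // webPartName is kept only to match the commented-out call signature above.
            // Copy each dictionary entry onto a writable public string property of the same name.
            foreach (KeyValuePair<string, string> entry in properties)
            {
                System.Reflection.PropertyInfo prop = webPart.GetType().GetProperty(entry.Key);
                if (prop != null && prop.CanWrite && prop.PropertyType == typeof(string))
                {
                    prop.SetValue(webPart, entry.Value, null);
                }
            }
        }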

  • Foreign keys and pagination

    - by whitstone86
    This is my pagination script:

        <?php
        /***********************************
        * PhpMyCoder Paginator             *
        * Created By PhpMyCoder           *
        * 2010 PhpMyCoder                  *
        * -------------------------------- *
        * You may use this code as long    *
        * as this notice stays intact and  *
        * the proper credit is given to    *
        * the author.                      *
        ***********************************/
        ?>
        <head>
        <title>Pagination Test - Created By PhpMyCoder</title>
        <style type="text/css">
        #nav { font: normal 13px/14px Arial, Helvetica, sans-serif; margin: 2px 0; }
        #nav a { background: #EEE; border: 1px solid #DDD; color: #000080; padding: 1px 7px; text-decoration: none; }
        #nav strong { background: #000080; border: 1px solid #DDD; color: #FFF; font-weight: normal; padding: 1px 7px; }
        #nav span { background: #FFF; border: 1px solid #DDD; color: #999; padding: 1px 7px; }
        </style>
        </head>
        <?php
        // Require the file that contains the required classes
        include("pmcPagination.php");

        // PhpMyCoder Paginator
        $paginator = new pmcPagination(20, "page");

        // Connect to the database
        mysql_connect("localhost", "root", "PASSWORD");
        // Select DB
        mysql_select_db("tvguide");

        // Select only results for today and future
        $result = mysql_query("SELECT programme, channel, airdate, expiration, episode, setreminder
                               FROM epdata1 where airdate >= now() order by expiration GROUP BY airdate");

        // You can also add results to paginate here
        mysql_data_seek($result, 0);
        while ($row = mysql_fetch_array($result)) {
            $paginator->add(new paginationData($row['programme'], $row['channel'], $row['airdate'],
                                               $row['expiration'], $row['episode'], $row['setreminder']));
        }
        ?>
        <?php
        // Show the paginated results
        $paginator->paginate();
        ?><? include("pca-footer1.php"); ?>
        <?php
        // Show the navigation
        $paginator->navigation();
        ?>

    However, more than one table is involved here: epdata1 (where the airtimes for my show House M.D. are) and housemdep (episodes of the show), plus the setreminder table. How can I use foreign keys in relation to this? I'm not sure if this will work for my script, but I am willing to try. What I would like to do is select certain episodes from the housemdep table and, if any are selected, show them like this:

        House M.D. showing on Channel 1 June 6th - 8:00pm "Wilson" Set Reminder
        House M.D. showing on Channel 1 June 7th - 1:30am "Wilson" Set Reminder
        House M.D. showing on Channel 1 June 7th - 12:55pm "House's Head" Set Reminder

    or like this, if I have not selected an episode for the row:

        House M.D. showing on Channel 1 June 7th - 8:00pm "House's Head" Set Reminder
        House M.D. showing on Channel 1 June 8th - 9:00pm Set Reminder
        House M.D. showing on Channel 1 June 9th - 2:30pm Set Reminder
        House M.D. showing on Channel 1 June 7th - 8:00pm "Que Sera Sera" Set Reminder

    Foreign keys and the relationships between interlinked tables are new to me; if anyone could help I'd appreciate it. I've tried some of what Google suggested on foreign keys in another version of this script (this is a clone of the original on my localhost server running Apache and PHP 5.2.8/MySQL), but I am not sure how to implement this. Thanks.

  • How do I get a JPanel with an empty JLabel to take up space in a GridBagLayout

    - by user2888663
    I am working on a GUI for a project at school, using a GridBagLayout in Swing. I want to have a label indicating the input (a type of file, at x = 0, y = 0), followed by another label (the actual file name once selected, at x = 1, y = 0), followed by a browse button for a file chooser (at x = 2, y = 0). The label at (1,0) is initially blank; however, I want the area that the text will occupy to take up some space even when the label contains no text. I also want the space between the label at (0,0) and the button at (2,0) to remain constant. To achieve this, I'm trying to put the label onto a panel and then play with the layouts, but I can't seem to achieve the desired results. Could anyone offer some suggestions? The next three rows of the GridBagLayout will be laid out exactly the same way. Here is a link to a screen shot of the GUI.

        calibrationFileSelectionValueLabel = new JLabel("", Label.LEFT);
        calibrationFileSelectionValueLabel.setName("calibrationFileSelection");
        calibrationFileSelectionValueLabel.setMinimumSize(new Dimension(100, 0));

        calibrationFileSelectionValuePanel = new JPanel();
        calibrationFileSelectionValuePanel.setBorder(BorderFactory.createEtchedBorder());
        calibrationFileSelectionValuePanel.add(calibrationFileSelectionValueLabel);

        c.gridx = 0;
        c.gridy = 0;
        c.fill = GridBagConstraints.NONE;
        filesPanel.add(calibrationFileLabel, c);
        c.gridy = 1;
        filesPanel.add(frequencyFileLabel, c);
        c.gridy = 2;
        filesPanel.add(sampleFileLabel, c);
        c.gridy = 3;
        filesPanel.add(outputFileLabel, c);

        c.gridx = 1;
        c.gridy = 0;
        c.fill = GridBagConstraints.BOTH;
        // filesPanel.add(calibrationFileSelection, c);
        filesPanel.add(calibrationFileSelectionValuePanel, c);
        c.gridy = 1;
        // filesPanel.add(frequencyFileSelection, c);
        filesPanel.add(frequencyFileSelectionValueLabel, c);
        c.gridy = 2;
        // filesPanel.add(sampleFileSelection, c);
        filesPanel.add(sampleFileSelectionValueLabel, c);
        c.gridy = 3;
        // filesPanel.add(outputFileSelection, c);
        filesPanel.add(outputFileSelectionValueLabel, c);

        c.gridx = 2;
        c.gridy = 0;
        c.fill = GridBagConstraints.NONE;
        filesPanel.add(calibrationFileSelectionButton, c);
        c.gridy = 1;
        filesPanel.add(frequencyFileSelectionButton, c);
        c.gridy = 2;
        filesPanel.add(sampleFileSelectionButton, c);
        c.gridy = 3;
        filesPanel.add(createOutputFileButton, c);

        panelForFilesPanelBorder = new JPanel();
        panelForFilesPanelBorder.setBorder(BorderFactory.createCompoundBorder(new EmptyBorder(5, 10, 5, 10),
            BorderFactory.createEtchedBorder()));
        panelForFilesPanelBorder.add(filesPanel);

        buttonsPanel = new JPanel();
        buttonsPanel.setLayout(new FlowLayout(FlowLayout.CENTER));
        buttonsPanel.setBorder(BorderFactory.createCompoundBorder(new EmptyBorder(5, 10, 10, 10),
            BorderFactory.createEtchedBorder()));
        buttonsPanel.add(startButton);
        buttonsPanel.add(stopButton);

        basePanel.add(panelForFilesPanelBorder);
        basePanel.add(numericInputPanel);
        basePanel.add(buttonsPanel);

  • Linq2SQL vs NHibernate performance (have I gone mad?)

    - by HeavyWave
    I have written the following tests to compare the performance of Linq2SQL and NHibernate, and I find the results to be somewhat strange. Mappings are straightforward and identical for both. Both are running against a live DB. I'm not deleting Campaigns in the Linq case, but that shouldn't affect performance by more than 10 ms.

    Linq:

        [Test]
        public void Test1000ReadsWritesToAgentStateLinqPrecompiled()
        {
            Stopwatch sw = new Stopwatch();
            Stopwatch swIn = new Stopwatch();
            sw.Start();
            for (int i = 0; i < 1000; i++)
            {
                swIn.Reset();
                swIn.Start();
                ReadWriteAndDeleteAgentStateWithLinqPrecompiled();
                swIn.Stop();
                Console.WriteLine("Run ReadWriteAndDeleteAgentState: " + swIn.ElapsedMilliseconds + " ms");
            }
            sw.Stop();
            Console.WriteLine("Total Time: " + sw.ElapsedMilliseconds + " ms");
            Console.WriteLine("Average time to execute queries: " + sw.ElapsedMilliseconds / 1000 + " ms");
        }

        private static readonly Func<AgentDesktop3DataContext, int, EntityModel.CampaignDetail> GetCampaignById =
            CompiledQuery.Compile<AgentDesktop3DataContext, int, EntityModel.CampaignDetail>(
                (ctx, sessionId) => (from cd in ctx.CampaignDetails
                                     join a in ctx.AgentCampaigns on cd.CampaignDetailId equals a.CampaignDetailId
                                     where a.AgentStateId == sessionId
                                     select cd).FirstOrDefault());

        private void ReadWriteAndDeleteAgentStateWithLinqPrecompiled()
        {
            int id = 0;
            using (var ctx = new AgentDesktop3DataContext())
            {
                EntityModel.AgentState agentState = new EntityModel.AgentState();
                var campaign = new EntityModel.CampaignDetail { CampaignName = "Test" };
                var campaignDisposition = new EntityModel.CampaignDisposition { Code = "123" };
                campaignDisposition.Description = "abc";
                campaign.CampaignDispositions.Add(campaignDisposition);
                agentState.CallState = 3;
                campaign.AgentCampaigns.Add(new AgentCampaign { AgentState = agentState });
                ctx.CampaignDetails.InsertOnSubmit(campaign);
                ctx.AgentStates.InsertOnSubmit(agentState);
                ctx.SubmitChanges();
                id = agentState.AgentStateId;
            }
            using (var ctx = new AgentDesktop3DataContext())
            {
                var dbAgentState = ctx.GetAgentStateById(id);
                Assert.IsNotNull(dbAgentState);
                Assert.AreEqual(dbAgentState.CallState, 3);
                var campaignDetails = GetCampaignById(ctx, id);
                Assert.AreEqual(campaignDetails.CampaignDispositions[0].Description, "abc");
            }
            using (var ctx = new AgentDesktop3DataContext())
            {
                ctx.DeleteSessionById(id);
            }
        }

    NHibernate (the loop is the same):

        private void ReadWriteAndDeleteAgentState()
        {
            var id = WriteAgentState().Id;
            StartNewTransaction();
            var dbAgentState = agentStateRepository.Get(id);
            Assert.IsNotNull(dbAgentState);
            Assert.AreEqual(dbAgentState.CallState, 3);
            Assert.AreEqual(dbAgentState.Campaigns[0].Dispositions[0].Description, "abc");
            var campaignId = dbAgentState.Campaigns[0].Id;
            agentStateRepository.Delete(dbAgentState);
            NHibernateSession.Current.Transaction.Commit();
            Cleanup(campaignId);
            NHibernateSession.Current.BeginTransaction();
        }

    Results:

        NHibernate: Total Time: 9469 ms; average time to execute 13 queries: 9 ms
        Linq:       Total Time: 127200 ms; average time to execute 13 queries: 127 ms

    Linq lost by 13.5 times! Even with precompiled queries (both read queries are precompiled). This can't be right; although I expected NHibernate to be faster, this is just too big a difference, considering the mappings are identical and NHibernate actually executes more queries against the DB.
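    Not part of the original tests, just a hedged diagnostic sketch: one way to see exactly what Linq to SQL sends for the slow run is to attach a TextWriter to DataContext.Log and compare the emitted statements with what NHibernate issues (for example via its show_sql setting). This assumes the same AgentDesktop3DataContext and precompiled GetCampaignById query used above:

        int id = 42; // substitute any existing AgentStateId
        using (var ctx = new AgentDesktop3DataContext())
        {
            // Every statement Linq to SQL generates for this context is echoed to the console
            ctx.Log = Console.Out;
            var campaignDetails = GetCampaignById(ctx, id); // the precompiled query from the test above
            Console.WriteLine(campaignDetails != null);
        }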

  • Array not showing in Jlist but filled in console

    - by OVERTONE
    Hey there. I've been a busy debugger today; I'll give the short version. I've made an ArrayList that takes names from a database, then I put the contents of the ArrayList into an array of strings. Now I want to display the array's contents in a JList. The weird thing is it was working earlier. I have two methods; one is just a little practice to make sure I was adding to the JList correctly. So here is the key code. The layout of my class is: variables, constructor, methods. In my variables I have these three defined:

        String[] contactListNames = new String[5];
        ArrayList<String> rowNames = new ArrayList<String>();
        JList contactList = new JList(contactListNames);

    Simple enough. In my constructor I have them again:

        contactListNames = new String[5];
        contactList = new JList(contactListNames);
        // I don't have the array list defined though.
        printSqlDetails();     // printSqlDetails makes sure the connection is alright, and it's working fine
        fillContactList();     // this is the one that's causing me grief; it's where all the work happens
        // fillContactListTest(); // this was the tester that makes sure I'm adding to the JList correctly

    Here is the code for fillContactListTest():

        public void fillContactListTest()
        {
            for (int i = 0; i < 3; i++)
            {
                try
                {
                    String contact;
                    System.out.println(" please fill the list at index " + i);
                    Scanner in = new Scanner(System.in);
                    contact = in.next();
                    contactListNames[i] = contact;
                    in.nextLine();
                }
                catch (Exception e)
                {
                    e.printStackTrace();
                }
            }
        }

    Here is the main one that's supposed to work:

        public void fillContactList()
        {
            int i = 0;
            createConnection();
            ArrayList<String> rowNames = new ArrayList<String>();
            try
            {
                Statement stmt = conn.createStatement();
                ResultSet namesList = stmt.executeQuery("SELECT name FROM Users");
                try
                {
                    while (namesList.next())
                    {
                        rowNames.add(namesList.getString(1));
                        contactListNames = (String[]) rowNames.toArray(new String[rowNames.size()]);
                        // this used to print out contents of array list
                        // System.out.println("" + rowNames);
                        while (i < contactListNames.length)
                        {
                            System.out.println(" " + contactListNames[i]);
                            i++;
                        }
                    }
                }
                catch (SQLException q)
                {
                    q.printStackTrace();
                }
                conn.commit();
                stmt.close();
                conn.close();
            }
            catch (SQLException e)
            {
                e.printStackTrace();
            }
        }

    I really need help here; I'm at my wits' end. I just can't see why the first method adds to the JList with no problem but the second one won't. Both the contactListNames array and the ArrayList print fine and have the names in them, so I must be transferring them to the JList wrong. Please help. P.S. I'm aware this is long, but trust me, it's the short version.

  • Survey Data Model - How to avoid EAV and excessive denormalization?

    - by AlexDPC
    Hi everyone, my database skills are mediocre at best and I have to design a data model for survey data. I have spent some thought on this and right now I feel that I am stuck between some kind of EAV model and a design involving hundreds of tables, each with hundreds of columns (and thousands of records). There must be a better way to do this and I hope that the wise folks on this forum can help me. I have already searched various forums, but I couldn't really find a solution. If it has already been given elsewhere, please excuse me and provide me with a link so I can read up on it.

    Some assumptions about the data I have to deal with:

    - Each survey consists of 1 to n questionnaires
    - Each questionnaire consists of 100-2,000 questions (please ignore that 2,000 questions really sounds like a lot to answer...)
    - Questions can be of various types: multiple-choice, free text, a number (like age, income, percentages, ...)
    - Each survey involves 10-200 countries (these are not the respondents; the respondents are actually people in the countries)
    - Depending on the type of questionnaire, each questionnaire is answered by 100-20,000 respondents per country
    - A country can adapt the questionnaires for a survey, i.e. add, remove or edit questions
    - The data for one country is gathered in a separate database in that country. There is no possibility for online integration from the start; the data for all countries has to be integrated later. This means, for example, that if a country has deleted a question, that data must somehow be derived from what they sent in order to achieve a uniform design across all countries
    - I will have to write the integration and cleaning software, which will need to work with every country's data
    - In the end the data needs to be exported to flat files, one rectangular grid per country and questionnaire

    I have already discussed this topic with people from various backgrounds and have not come to a good solution yet. I mainly got two kinds of opinions.

    The domain experts, who are used to working with flat files (spreadsheet-style) for data processing and analysis, vote for a denormalized structure with loads of tables and columns as I described above (one table per country and questionnaire). This sounds terrible to me, because I learned that wide tables are to be avoided, it will be annoying to determine which columns are actually in a table when working with it, and the database will become cluttered with hundreds of tables (or I would even need to set up multiple databases, each with a similar yet slightly different design), etc.

    The O-O programmers vote for a strongly "normalized" design, which would effectively lead to a central table containing all the answers from all respondents to all questions. That table would either need a column of type sql_variant, or multiple answer columns with different types to store answers of different types (multiple choice, free text, ...). The former would essentially be an EAV model. I tend to follow Joe Celko here, who strongly discourages its use (he calls it OTLT or "One True Lookup Table"). The latter would imply that each row contains null cells for the not-applicable types by design.

    Another alternative I could think of would be to create one table per answer type, i.e. one for multiple-choice questions, one for free-text questions, etc. That's not so generic, it would lead to a lot of union joins I think, and I would have to add a table if a new answer type is invented.

    Sorry for boring you with all this text, and thank you for your input!
Cheers, Alex PS: I asked the same question here: http://www.eggheadcafe.com/community/aspnet/13/10242616/survey-data-model--how-to-avoid-eav-and-excessive-denormalization.aspx

  • Dataset.ReadXml() - Invalid character in the given encoding

    - by NLV
    Hello all I have saved a dataset in the sql database in an xml column using the following code. XmlDataDocument dd = new XmlDataDocument(dataset); and passing this xml document as sql parameter using param.value = new XmlNodeReader(dd); The XML is like <NewDataSet><SubContractChangeOrders><AGCol>1</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>006</Contract_x0020_Number><ContractID>30</ContractID><ChangeOrderID>211</ChangeOrderID><Amount>0.0000</Amount><Udf_CostReimbursableFlag>false</Udf_CostReimbursableFlag><Udf_CustomerCode /><Udf_SubChangeOrderStatus /></SubContractChangeOrders><SubContractChangeOrders><AGCol>2</AGCol><SCO_x0020_Number>002</SCO_x0020_Number><Contract_x0020_Number>006</Contract_x0020_Number><ContractID>30</ContractID><ChangeOrderID>212</ChangeOrderID><Amount>0.0000</Amount><Udf_CostReimbursableFlag>false</Udf_CostReimbursableFlag></SubContractChangeOrders><SubContractChangeOrders><AGCol>3</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>111</Contract_x0020_Number><ContractID>87</ContractID><ChangeOrderID>12</ChangeOrderID><Amount>300.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>4</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>222</Contract_x0020_Number><ContractID>80</ContractID><ChangeOrderID>6</ChangeOrderID><Amount>100.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>5</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>777</Contract_x0020_Number><ContractID>79</ContractID><ChangeOrderID>5</ChangeOrderID><Amount>200.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>6</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>786</Contract_x0020_Number><ContractID>77</ContractID><ChangeOrderID>3</ChangeOrderID><Amount>100.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>7</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>787</Contract_x0020_Number><ContractID>78</ContractID><ChangeOrderID>4</ChangeOrderID><Amount>500.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>8</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con 009</Contract_x0020_Number><ContractID>219</ContractID><ChangeOrderID>78</ChangeOrderID><Amount>9000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>9</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con 010</Contract_x0020_Number><ContractID>220</ContractID><ChangeOrderID>79</ChangeOrderID><Amount>13000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>10</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con 012</Contract_x0020_Number><ContractID>222</ContractID><ChangeOrderID>83</ChangeOrderID><Amount>2300.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>11</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con 020</Contract_x0020_Number><ContractID>226</ContractID><ChangeOrderID>86</ChangeOrderID><Amount>5400.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>12</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con 
021</Contract_x0020_Number><ContractID>227</ContractID><ChangeOrderID>87</ChangeOrderID><Amount>2300.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>13</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con001</Contract_x0020_Number><ContractID>208</ContractID><ChangeOrderID>72</ChangeOrderID><Amount>3000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>14</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con002</Contract_x0020_Number><ContractID>209</ContractID><ChangeOrderID>73</ChangeOrderID><Amount>400.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>15</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con003</Contract_x0020_Number><ContractID>210</ContractID><ChangeOrderID>74</ChangeOrderID><Amount>6000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>16</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con004</Contract_x0020_Number><ContractID>211</ContractID><ChangeOrderID>75</ChangeOrderID><Amount>9000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>17</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Con005</Contract_x0020_Number><ContractID>213</ContractID><ChangeOrderID>76</ChangeOrderID><Amount>17000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>18</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>Cont001</Contract_x0020_Number><ContractID>228</ContractID><ChangeOrderID>89</ChangeOrderID><Amount>2000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>19</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>PUR001</Contract_x0020_Number><ContractID>229</ContractID><ChangeOrderID>88</ChangeOrderID><Amount>1000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>20</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>PUR002</Contract_x0020_Number><ContractID>230</ContractID><ChangeOrderID>90</ChangeOrderID><Amount>3000.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>21</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>SC-002</Contract_x0020_Number><ContractID>2</ContractID><ChangeOrderID>7</ChangeOrderID><Amount>200.0000</Amount></SubContractChangeOrders><SubContractChangeOrders><AGCol>22</AGCol><SCO_x0020_Number>001</SCO_x0020_Number><Contract_x0020_Number>SC-004</Contract_x0020_Number><ContractID>7</ContractID><ChangeOrderID>65</ChangeOrderID><Amount>1000.0000</Amount></SubContractChangeOrders></NewDataSet> I'm trying to read it back as follows using (SqlConnection con = new SqlConnection("Server=#####;Initial Catalog=#####;User ID=####;Pwd=########")) { using (SqlCommand com = new SqlCommand("select * from dbo.tbl_#####", con)) { using (SqlDataAdapter ada = new SqlDataAdapter(com)) { ada.Fill(dt); MemoryStream ms = new MemoryStream(); object contractXML1 = dt.Rows[0]["SCOXML1"]; System.Runtime.Serialization.Formatters.Binary.BinaryFormatter bf = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter(); bf.Serialize(ms, contractXML1); ms.Seek(0, SeekOrigin.Begin); ds.ReadXml(ms); } } } I'm getting the following error Data at the root level is invalid. Line 1, position 6. Any ideas? NLV
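    A hedged aside, not from the original post: if the provider hands back the SCOXML1 column as a string, the BinaryFormatter round trip can be skipped and the text fed straight to ReadXml through a StringReader. BinaryFormatter prefixes its output with a binary header, which is the kind of thing that would make ReadXml complain about invalid data at the root of line 1. Using the dt and ds from the snippet above:

        // Sketch only: read the XML column text directly instead of binary-serializing it first
        object contractXML1 = dt.Rows[0]["SCOXML1"];
        using (System.IO.StringReader reader = new System.IO.StringReader(contractXML1.ToString()))
        {
            ds.ReadXml(reader); // DataSet.ReadXml accepts a TextReader
        }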

  • Linq to LLBLGen query problem

    - by Jeroen Breuer
    Hello, I've got a Stored Procedure and i'm trying to convert it to a Linq to LLBLGen query. The query in Linq to LLBGen works, but when I trace the query which is send to sql server it is far from perfect. This is the Stored Procedure: ALTER PROCEDURE [dbo].[spDIGI_GetAllUmbracoProducts] -- Add the parameters for the stored procedure. @searchText nvarchar(255), @startRowIndex int, @maximumRows int, @sortExpression nvarchar(255) AS BEGIN SET @startRowIndex = @startRowIndex + 1 SET @searchText = '%' + @searchText + '%' -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; -- This is the query which will fetch all the UmbracoProducts. -- This query also supports paging and sorting. WITH UmbracoOverview As ( SELECT ROW_NUMBER() OVER( ORDER BY CASE WHEN @sortExpression = 'productName' THEN umbracoProduct.productName WHEN @sortExpression = 'productCode' THEN umbracoProduct.productCode END ASC, CASE WHEN @sortExpression = 'productName DESC' THEN umbracoProduct.productName WHEN @sortExpression = 'productCode DESC' THEN umbracoProduct.productCode END DESC ) AS row_num, umbracoProduct.umbracoProductId, umbracoProduct.productName, umbracoProduct.productCode FROM umbracoProduct INNER JOIN product ON umbracoProduct.umbracoProductId = product.umbracoProductId WHERE (umbracoProduct.productName LIKE @searchText OR umbracoProduct.productCode LIKE @searchText OR product.code LIKE @searchText OR product.description LIKE @searchText OR product.descriptionLong LIKE @searchText OR product.unitCode LIKE @searchText) ) SELECT UmbracoOverview.UmbracoProductId, UmbracoOverview.productName, UmbracoOverview.productCode FROM UmbracoOverview WHERE (row_num >= @startRowIndex AND row_num < (@startRowIndex + @maximumRows)) -- This query will count all the UmbracoProducts. -- This query is used for paging inside ASP.NET. SELECT COUNT (umbracoProduct.umbracoProductId) AS CountNumber FROM umbracoProduct INNER JOIN product ON umbracoProduct.umbracoProductId = product.umbracoProductId WHERE (umbracoProduct.productName LIKE @searchText OR umbracoProduct.productCode LIKE @searchText OR product.code LIKE @searchText OR product.description LIKE @searchText OR product.descriptionLong LIKE @searchText OR product.unitCode LIKE @searchText) END This is my Linq to LLBLGen query: using System.Linq.Dynamic; var q = ( from up in MetaData.UmbracoProduct join p in MetaData.Product on up.UmbracoProductId equals p.UmbracoProductId where up.ProductCode.Contains(searchText) || up.ProductName.Contains(searchText) || p.Code.Contains(searchText) || p.Description.Contains(searchText) || p.DescriptionLong.Contains(searchText) || p.UnitCode.Contains(searchText) select new UmbracoProductOverview { UmbracoProductId = up.UmbracoProductId, ProductName = up.ProductName, ProductCode = up.ProductCode } ).OrderBy(sortExpression); //Save the count in HttpContext.Current.Items. This value will only be saved during 1 single HTTP request. HttpContext.Current.Items["AllProductsCount"] = q.Count(); //Returns the results paged. 
return q.Skip(startRowIndex).Take(maximumRows).ToList<UmbracoProductOverview>(); This is my Initial expression to process: value(SD.LLBLGen.Pro.LinqSupportClasses.DataSource`1[Eurofysica.DB.EntityClasses.UmbracoProductEntity]).Join(value(SD.LLBLGen.Pro.LinqSupportClasses.DataSource`1[Eurofysica.DB.EntityClasses.ProductEntity]), up => up.UmbracoProductId, p => p.UmbracoProductId, (up, p) => new <>f__AnonymousType0`2(up = up, p = p)).Where(<>h__TransparentIdentifier0 => (((((<>h__TransparentIdentifier0.up.ProductCode.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText) || <>h__TransparentIdentifier0.up.ProductName.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.Code.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.Description.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.DescriptionLong.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.UnitCode.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText))).Select(<>h__TransparentIdentifier0 => new UmbracoProductOverview() {UmbracoProductId = <>h__TransparentIdentifier0.up.UmbracoProductId, ProductName = <>h__TransparentIdentifier0.up.ProductName, ProductCode = <>h__TransparentIdentifier0.up.ProductCode}).OrderBy( => .ProductName).Count() Now this is how the queries look like that are send to sql server: Select query: Query: SELECT [LPA_L2].[umbracoProductId] AS [UmbracoProductId], [LPA_L2].[productName] AS [ProductName], [LPA_L2].[productCode] AS [ProductCode] FROM ( [eurofysica].[dbo].[umbracoProduct] [LPA_L2] INNER JOIN [eurofysica].[dbo].[product] [LPA_L3] ON [LPA_L2].[umbracoProductId] = [LPA_L3].[umbracoProductId]) WHERE ( ( ( ( ( ( ( ( [LPA_L2].[productCode] LIKE @ProductCode1) OR ( [LPA_L2].[productName] LIKE @ProductName2)) OR ( [LPA_L3].[code] LIKE @Code3)) OR ( [LPA_L3].[description] LIKE @Description4)) OR ( [LPA_L3].[descriptionLong] LIKE @DescriptionLong5)) OR ( [LPA_L3].[unitCode] LIKE @UnitCode6)))) Parameter: @ProductCode1 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @ProductName2 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @Code3 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @Description4 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @DescriptionLong5 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @UnitCode6 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". 
Count query: Query: SELECT TOP 1 COUNT(*) AS [LPAV_] FROM (SELECT [LPA_L2].[umbracoProductId] AS [UmbracoProductId], [LPA_L2].[productName] AS [ProductName], [LPA_L2].[productCode] AS [ProductCode] FROM ( [eurofysica].[dbo].[umbracoProduct] [LPA_L2] INNER JOIN [eurofysica].[dbo].[product] [LPA_L3] ON [LPA_L2].[umbracoProductId] = [LPA_L3].[umbracoProductId]) WHERE ( ( ( ( ( ( ( ( [LPA_L2].[productCode] LIKE @ProductCode1) OR ( [LPA_L2].[productName] LIKE @ProductName2)) OR ( [LPA_L3].[code] LIKE @Code3)) OR ( [LPA_L3].[description] LIKE @Description4)) OR ( [LPA_L3].[descriptionLong] LIKE @DescriptionLong5)) OR ( [LPA_L3].[unitCode] LIKE @UnitCode6))))) [LPA_L1] Parameter: @ProductCode1 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @ProductName2 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @Code3 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @Description4 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @DescriptionLong5 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @UnitCode6 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". As you can see no sorting or paging is done (like in my Stored Procedure). This is probably done inside the code after all the results are fetched. This costs a lot of performance! Does anybody know how I can convert my Stored Procedure to Linq to LLBLGen the proper way?

  • How do I Integrate Production Database Hot Fixes into Shared Database Development model?

    - by TetonSig
    We are using SQL Source Control 3, SQL Compare and SQL Data Compare from RedGate, Mercurial repositories, TeamCity, and a set of 4 environments including production. I am working on getting us to a dedicated environment per developer, but for at least the next 6 months we are stuck with a shared model.

    To summarize our current system: we have a DEV SQL Server where developers first make changes and additions. They commit their changes through SQL Source Control to a local hgdev repository. When they execute an hg push to the main repository, TeamCity listens for that and then (among other things) pushes the hgdev repository to hgrc. Another TeamCity process listens for that, pulls from hgrc and deploys the latest to a QA SQL Server where regression and integration tests are run. When those pass, a push from hgrc to hgprod occurs. We do a compare of hgprod to our PREPROD SQL Server and generate deployment/rollback scripts for our production release.

    Separate from the above, we have database hot fixes that need to be applied between releases. The process there is for our Operations team to make changes on the PREPROD database and then, after testing, to use SQL Source Control to commit their hot fix changes to hgprod from the PREPROD database, then do a compare from hgprod to PRODUCTION, create deployment scripts and run them on PRODUCTION.

    If we were in a dedicated-database-per-developer model, we could simply push hgprod back to hgdev automatically and merge in the hot fix change (through TeamCity monitoring for hgprod check-ins), and developers would pick it up and merge it into their local repository and database periodically. However, given that with a shared model the DEV database itself is the source of all changes, this won't work: pushing hot fixes back to hgdev will show up in SQL Source Control as being different from the DEV SQL Server, and therefore we would need to overwrite the repository with the "change" from the DEV SQL Server.

    My only workaround so far is to have OPS assign a developer the hot fix ticket with a script attached, and then we run their hot fixes against DEV ourselves to merge them back in. I'm not happy with that solution. Other than moving faster to a dedicated environment, are there other ways to keep this loop going automatically?

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial Microsoft recently released the first CTP of a new development environment called WebMatrix, which along with some of its supporting technologies are squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let’s face it: ASP.NET development isn’t exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it’s not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft’s attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies: IIS Developer Express IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that’s been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server such as the inability to serve custom ISAPI extensions (i.e., things like PHP or ASP classic for example), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set providing much better compatibility between development and live deployment scenarios. SQL Server Compact 4.0 Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new-it’s been around for a few years, but it’s been severely hobbled in the past by terrible tool support and the inability to support more than a single connection in Microsoft’s attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don’t have to install a special system configuration to run SQL Compact as it is a drop-in database engine: Copy the small assembly into your BIN folder (or from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill. 
Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature. ASP.NET Razor View Engine What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There’s no support for a “page model” or any of the other Web Forms features of the full-page framework, but just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here’s a very simple example of what Razor markup looks like along with some comment annotations: <!DOCTYPE html> <html>     <head>         <title></title>     </head>     <body>     <h1>Razor Test</h1>          <!-- simple expressions -->     @DateTime.Now     <hr />     <!-- method expressions -->     @DateTime.Now.ToString("T")          <!-- code blocks -->     @{         List<string> names = new List<string>();         names.Add("Rick");         names.Add("Markus");         names.Add("Claudio");         names.Add("Kevin");     }          <!-- structured block statements -->     <ul>     @foreach(string name in names){             <li>@name</li>     }     </ul>           <!-- Conditional code -->        @if(true) {                        <!-- Literal Text embedding in code -->        <text>         true        </text>;    }    else    {        <!-- Literal Text embedding in code -->       <text>       false       </text>;    }    </body> </html> Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a “page” becomes a code file with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn’t look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine minus the overhead of the large Page object model. However, there are differences: -Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC’s view implementation as well as many of the helper classes found in MVC. It’s not hard to guess the motivation for this sort of view engine: For beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content. 
The syntax is easier to read and grok and much shorter to type than ASP.NET alligator tags (<% %>) and also easier to understand aesthetically what’s happening in the markup code. Razor also is a better fit for Microsoft’s vision of ASP.NET MVC: It’s a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn’t carry all the features and object model of Web Forms with it and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well – it makes it much easier to create script or meta driven output generators for many types of applications from code/screen generators, to simple form letters to data merging applications with user customizability. For me personally this is very useful side effect and who knows maybe Microsoft will actually standardize they’re scripting engines (die T4 die!) on this engine. Razor also better fits the “view-based” approach where the view is supposed to be mostly a visual representation that doesn’t hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn’t be surprised if Razor will become the new standard view engine for MVC in the future – and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that’s not working currently unless you manually configure the script mappings and add the appropriate assemblies. It’s possible to do it, but it’s probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages. WebMatrix Development Environment To tie all of these three technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via a custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There’s no debugging support for code at this time. Note that Razor pages don’t require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that’s needed to see changes while testing an application locally. It’s essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled during run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP Editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with Wizards taking you through installation of tools, dependencies, and configuration of the database as needed. 
WebMatrix leverages the Web Platform installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes and fill in a few simple configuration options and you end up with a running application that’s ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool also can handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step. Simplified Database Access The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods: @{       var db = Database.OpenFile("FirstApp.sdf");     string sql = "select * from customers where Id > @0"; } <ul> @foreach(var row in db.Query(sql,1)){         <li>@row.FirstName @row.LastName</li> } </ul> The query function takes a SQL statement plus any number of positional (@0,@1 etc.) SQL parameters by simple values. The result is returned as a collection of rows which in turn have a row object with dynamic properties for each of the columns giving easy (though untyped) access to each of the fields. Likewise Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter passing schemes. Note these queries use string-based queries rather than LINQ or Entity Framework’s strongly typed LINQ queries. While this may seem like a step back, it’s also in line with the expectations of non .NET script developers who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why was something not included from the beginning in .NET and Microsoft made developers build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax that uses simple, static, data object methods to perform simple data tasks with one line of code are common in scripting languages and are a good match for folks working in PHP/Python, etc. Seems like Microsoft has taken great advantage of .NET 4.0’s dynamic typing to provide this sort of interface for row iteration where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn’t accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies as required. The good the bad the obnoxious - It’s still .NET The beauty (or curse depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it’s still .NET. Although the syntax may look foreign, it’s still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder. 
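    To illustrate that last point, here is a minimal sketch (mine, not from the article) of the plain ADO.NET route against the same FirstApp.sdf file, assuming the SQL Server Compact 4.0 provider assembly (System.Data.SqlServerCe) has been dropped into the bin folder as described above; the table and column names are simply reused from the Database helper example:

        using System;
        using System.Data.SqlServerCe;

        class SqlCeSketch
        {
            static void Main()
            {
                // Points at the same local .sdf file used by the Database helper example
                using (var conn = new SqlCeConnection(@"Data Source=|DataDirectory|\FirstApp.sdf"))
                {
                    conn.Open();
                    using (var cmd = new SqlCeCommand(
                        "SELECT FirstName, LastName FROM Customers WHERE Id > @id", conn))
                    {
                        cmd.Parameters.AddWithValue("@id", 1);
                        using (SqlCeDataReader reader = cmd.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                Console.WriteLine("{0} {1}", reader["FirstName"], reader["LastName"]);
                            }
                        }
                    }
                }
            }
        }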
In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a host of HTML helpers. If you've used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy; there's a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

Who needs WebMatrix? Uhm... good question

Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity: a minimal development environment and an easy-to-use script engine/language that makes it easy to get started. There's also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something it would be nice to eventually see in Visual Studio as well. The question is whether this is too little, too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for simpler platforms like PHP or Python, which cater to the down-and-dirty developer. Microsoft will be hard pressed to win those folks, and other hardcore PHP developers, back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it's still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core pieces, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect they're not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers try to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers you still are going to have to write some C# or VB or other .NET code. If the target is the hobby/amateur/non-programmer, the learning curve isn't made any easier by WebMatrix; it's just been shifted a bit further along in your development endeavor, to the point where you run out of canned components supplied either by Microsoft or the community. The database helpers are interesting; I've actually heard a lot of discussion from various developers who've been resisting .NET for a really long time perking up at the prospect of data access in .NET that's easier than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it's practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in 'getting stuff done' easily than in following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big, life-changing issues of development rather than the bread-and-butter scenarios that many developers care about to get their work accomplished.
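To put the "practically trivial" remark in concrete terms, here is a rough sketch of my own of a one-method helper in the same spirit. It is not the WebMatrix implementation, just an illustration built on ADO.NET and .NET 4.0 dynamic typing; the parameter naming convention (@p0, @p1, ...) is my own choice:

    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Dynamic;

    public static class SimpleDb
    {
        // Runs a parameterized query and yields each row as a dynamic object whose
        // properties are the column names, roughly in the spirit of Database.Query.
        public static IEnumerable<dynamic> Query(string connectionString, string sql, params object[] args)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                for (int i = 0; i < args.Length; i++)
                {
                    cmd.Parameters.AddWithValue("@p" + i, args[i]);   // positional: @p0, @p1, ...
                }
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        var row = new ExpandoObject() as IDictionary<string, object>;
                        for (int i = 0; i < reader.FieldCount; i++)
                        {
                            row[reader.GetName(i)] = reader.GetValue(i);
                        }
                        yield return row;
                    }
                }
            }
        }
    }

    // Usage sketch (connection string and table are placeholders):
    //   foreach (var row in SimpleDb.Query(connStr, "select * from customers where Id > @p0", 1))
    //       Console.WriteLine(row.FirstName + " " + row.LastName);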
And that, in the end, may be WebMatrix's main raison d'être: to bring some focus back at Microsoft to the fact that simpler, more high-level solutions are needed to appeal to non-high-end developers, alongside the tools for the high-end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it can really be a tool that either a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components) that would be a great addition to Visual Studio as well, the rest of the development environment just feels like crippleware, with required functionality missing: especially debugging and IntelliSense, but also general editor support. It's not clear whether these omissions are because the product is still in an early alpha release or whether it's simply designed that way to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it. If anything, it's WebMatrix's technology pieces (which are really independent of the WebMatrix product) that are interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of the concerns that people have had, and complained about for some time, with previous SQL Compact implementations. By far the most interesting and useful technology, though, seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone, pluggable scripting/view engine. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn't made inconsistent by the baggage of non-implemented Web Forms features that don't work in MVC. But I expect that Razor will eventually end up in many other applications as a scripting and code generation engine. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there's some synergy flowing both ways between Razor and MVC. For an existing ASP.NET developer who's already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn't give you anything that you want: the tools provided are minimal and offer nothing that you can't get in Visual Studio today, except the minimal Razor syntax highlighting, so there's little need to take a step back. With Visual Studio integration coming later, there's little reason to look at WebMatrix for tooling. It's good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we've been piling on more and more new features without taking a step back to see how complicated the development/configuration/deployment process has become. Sometimes it's good to take a step, or several steps, back, take another look, and realize just how far we've come.
WebMatrix is one of those reminders and one that likely will result in some positive changes on the platform as a whole.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET   IIS7

    Read the article

  • PASS Summit 2012: keynote and Mobile BI announcements #sqlpass

    - by Marco Russo (SQLBI)
    Today at PASS Summit 2012 there were several announcements during the keynote. Other news was not highlighted in the keynote but is equally important, if not more so, for the BI community. Let's start with the big news in the keynote (other details are on the SQL Server Blog):

    Hekaton: this is the codename for the in-memory OLTP technology that will appear (I suppose) in the next release of the SQL Server relational engine. The improvement in performance and scalability is impressive and it enables new scenarios. I'm curious to see whether it can also be used to improve ETL performance and how it differs from using SSD technology.

    Updates on Columnstore: in the next major release of SQL Server, columnstore indexes will be updatable and it will be possible to create a clustered columnstore index. This is really great news for near-real-time reporting needs!

    Polybase: SQL Server 2012 Parallel Data Warehouse (PDW), which will include the Polybase technology, will debut in 2013. With Polybase, a single T-SQL query will run across relational data and Hadoop data: a single query language for both. This sounds really interesting for using Big Data in a more integrated way with existing relational databases, and of course for loading a data warehouse using Big Data, which is the ultimate goal we BI pros all have, right?

    SQL Server 2012 SP1: Service Pack 1 for SQL Server 2012 is available now and it enables the use of PowerPivot for SharePoint and Power View on a SharePoint 2013 installation with Excel 2013.

    Power View works with Multidimensional cubes: the long-awaited feature of being able to use Power View with Multidimensional cubes was shown by Amir Netz in an amazing demonstration during the keynote. The interesting thing is that the data model behind it was based on a many-to-many relationship (something that is not fully supported by Power View with Tabular models). Another interesting aspect is that it is Analysis Services 2012 that supports DAX queries run on a Multidimensional model, enabling the use of any future tool that generates DAX queries on top of a Multidimensional model. There is still no info about availability, but note that this is *not* included in SQL Server 2012 SP1.

    So what about Mobile BI? Well, even though it was not announced during the keynote, there is a dedicated session on this topic and there is very important news in this area. For iOS, Android and Microsoft mobile platforms, the commitment is to get data exploration and visualization capabilities working by June 2013. This should impact at least Power View and SharePoint/Excel Services. This is the type of UI experience we are all waiting for, in order to satisfy the requests coming from users and customers. The important news here is that native applications will be available for both iOS and Windows 8, so it seems that Android will initially be supported only through the web. Unfortunately we haven't seen any demo, so it's not clear what the offline navigation experience will be (or whether there will be one). But at least we know that Microsoft is working on native applications in this area. I'm not too surprised that HTML5 is not the magic bullet for all the platforms. The next PASS Business Analytics conference in 2013 seems a good place to see this in action, even if I hope we don't have to wait another six months before seeing some demo of native BI applications on mobile platforms!
Viewing Reporting Services reports on iPad is supported starting with SQL Server 2012 SP1, which was released today. This is another good reason to install SP1 on SQL Server 2012. If you are at PASS Summit 2012, come and join me, Alberto Ferrari and Chris Webb at our book signing event tomorrow, Thursday 8 2012, at the bookstore between 12:00pm and 12:30pm, or follow one of our sessions!

    Read the article

  • Exception "The operation is not valid for the state of the transaction" using TransactionScope

    - by Lanfear
    We have a web service on server #1 and a database on server #2. The web service uses a transaction scope to produce a distributed transaction. Everything is correct. We also have another database on server #3. We had some problems with this server, so we reinstalled the operating system and software. We configured MSDTC and tried to use the web service from server #1 to communicate with the database on this server. Now, after the first select statement within the transaction scope, we get: "The operation is not valid for the state of the transaction". This exception is thrown on every web service request that uses a transaction scope. Server #2 and server #3 are almost identical; the difference can only be in settings. .NET Framework 3.5 SP1 and SQL Server SP3 are installed on all servers. Full stack trace:

        System.Transactions.TransactionState.EnlistPromotableSinglePhase(InternalTransaction tx, IPromotableSinglePhaseNotification promotableSinglePhaseNotification, Transaction atomicTransaction)
        System.Transactions.Transaction.EnlistPromotableSinglePhase(IPromotableSinglePhaseNotification promotableSinglePhaseNotification)
        System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx)
        System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx)
        System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction)
        System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction)
        System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
        System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        System.Data.SqlClient.SqlConnection.Open()
        NHibernate.Connection.DriverConnectionProvider.GetConnection()
        NHibernate.Impl.SessionFactoryImpl.OpenConnection()

    I searched for this message but didn't find any appropriate solution. So what settings should I check, and what exactly should I do to fix it?
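    For context, the calling pattern being described usually looks something like the sketch below. This is my own illustration, not the poster's code; the connection string and table name are placeholders. The enlistment that fails in the stack trace happens at the point where the connection is opened inside the scope:

        using System;
        using System.Data.SqlClient;
        using System.Transactions;

        class TransactionScopeSketch
        {
            static void Main()
            {
                // Placeholder connection string pointing at the remote database (server #3).
                const string connStr = "Server=server3;Database=MyDb;Integrated Security=True";

                using (var scope = new TransactionScope())
                {
                    using (var conn = new SqlConnection(connStr))
                    {
                        // Opening the connection inside the scope enlists it in the ambient
                        // transaction; once a second resource is involved, the transaction is
                        // promoted to a distributed (MSDTC) transaction.
                        conn.Open();

                        using (var cmd = new SqlCommand("select count(*) from SomeTable", conn))
                        {
                            Console.WriteLine(cmd.ExecuteScalar());
                        }
                    }

                    scope.Complete();
                }
            }
        }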

    Read the article

  • ASP.NET MVC/LINQ: What's the proper way to iterate through a Linq.EntitySet in a View?

    - by Terminal Frost
    OK, so I have a strongly typed Customer "Details" view that takes a Customer object as its model. I am using LINQ to SQL, and every Customer can have multiple (parking) Spaces. This is a FK relationship in the database, so my LINQ-generated Customer model has a "Spaces" collection. Great! Here is a code snippet from my CustomerRepository where I iterate through the Customer's parking spaces to delete all payments, then the spaces, and finally the customer:

        public void Delete(Customer customer)
        {
            foreach (Space s in customer.Spaces)
                db.Payments.DeleteAllOnSubmit(s.Payments);
            db.Spaces.DeleteAllOnSubmit(customer.Spaces);
            db.Customers.DeleteOnSubmit(customer);
        }

    Everything works as expected! Now in my "Details" view I want to populate a table with the Customer's Spaces:

        <% foreach (var s in Model.Spaces) { %>
        <tr>
            <td><%: s.ID %></td>
            <td><%: s.InstallDate %></td>
            <td><%: s.SpaceType %></td>
            <td><%: s.Meter %></td>
        </tr>
        <% } %>

    I get the following error:

        foreach statement cannot operate on variables of type 'System.Data.Linq.EntitySet' because 'System.Data.Linq.EntitySet' does not contain a public definition for 'GetEnumerator'

    Finally, if I add this bit of code to my Customer partial class and use the foreach in the view to iterate through ParkingSpaces, everything works as expected:

        public IEnumerable<Space> ParkingSpaces
        {
            get { return Spaces.AsEnumerable(); }
        }

    The problem here is that I don't want to repeat myself. I was also thinking that I could use a ViewModel to pass a Spaces collection to the View (see the sketch below); however, LINQ already infers and creates the Spaces property on the Customer model, so I think it would be cleanest to just use that. Am I missing something simple, or am I approaching this incorrectly? Thanks!
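    For illustration only, the ViewModel alternative mentioned above could look roughly like the sketch below. The names are hypothetical and this is not offered as the answer to the question, just as a picture of that option:

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical view model carrying exactly what the Details view needs.
        public class CustomerDetailsViewModel
        {
            public Customer Customer { get; set; }
            public IEnumerable<Space> Spaces { get; set; }
        }

        // In the controller action (sketch):
        //   var customer = repository.GetCustomer(id);
        //   var model = new CustomerDetailsViewModel
        //   {
        //       Customer = customer,
        //       Spaces = customer.Spaces.ToList()   // materialize the EntitySet once
        //   };
        //   return View(model);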

    Read the article

  • Manual (Dynamic) LINQ subquery using IN clause

    - by immortalali-msn-com
    Hi everyone, I want to query the DB through LINQ by writing manual SQL. My LINQ method is:

        var q = db.TableView.Where(sqlAfterWhere);
        returnValue = q.Count();

    This method queries fine if the value passed in the "sqlAfterWhere" variable (a String) is:

        it.Name = 'xyz'

    But what if I want to use an IN clause with a subquery? (I need to use 'it' before every column name in the query above for it to work.) I can't use 'it' before the subquery columns, since it's a separate query, so what should I do? If I don't use anything and use the column names directly, it gives an error saying "<column> could not be resolved", where <column> is the column name without 'it' at the beginning. So the query that is not working is (this is a string passed to the variable above):

        it.Name IN (SELECT Name FROM TableName WHERE Address LIKE '%SomeAddress%')

    The errors come out as:

        Name could not be resolved
        Address could not be resolved

    The exact error is: "'Name' could not be resolved in the current scope or context. Make sure that all referenced variables are in scope, that required schemas are loaded, and that namespaces are referenced correctly., near simple identifier, line 6, column 25." The same error appears for "Address". If I use 'it.' before these columns, it instead gives this error: "The element type 'Edm.Int32' and the CollectionType 'Transient.collection[Transient.rowtype(GroupID,Edm.Int32(Nullable=True,DefaultValue=))]' are not compatible. The IN expression only supports entity, primitive, and reference types. , near WHERE predicate, line 6, column 14." Thanks for the help.
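    As a hedged aside (not the Entity SQL string syntax the poster is asking about), the same filter is often expressed in strongly typed LINQ with Contains, which the provider can translate into an IN/EXISTS-style subquery, depending on the provider version. The entity set and property names below mirror the question, but the context members are assumptions:

        using System.Linq;

        // Sketch: strongly typed equivalent of
        //   it.Name IN (SELECT Name FROM TableName WHERE Address LIKE '%SomeAddress%')
        // 'db' is assumed to be the same context object used in the question.
        var matchingNames = db.TableName
            .Where(t => t.Address.Contains("SomeAddress"))
            .Select(t => t.Name);

        var count = db.TableView
            .Where(v => matchingNames.Contains(v.Name))
            .Count();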

    Read the article

  • Pass dataset to subreport with SQL Server Reporting Services

    - by Juliet
    I'm using SQL Server Reporting Services and the report designer that comes with Visual Studio. I've got a really big report. It's actually so large that Visual Studio hangs (sometimes for hours at a time) or just crashes when I make changes. There is precious little I can do to solve that, so I've decided to just move the bottom half of the report into a sub-report. So, I started with one enormous, unresponsive report and ended with two small, manageable reports; surprisingly, this actually works. One problem: my subreport uses the same data as my main report. Right now, it populates its dataset by re-querying the database. The extra round-trip to the database causes the report to take twice as long to generate, up from 45 minutes to 1 1/2 hours. I'd like to avoid hitting the database again and instead use the same dataset in both reports. How can I share or pass a dataset between a report and a subreport?

    Read the article

  • Using OLEDB parameters in .NET when connecting to an AS400/DB2

    - by Jeff Stock
    I have been pulling my hair out trying to figure out why I can't get parameters to work in my query. I have the code written in VB.NET trying to run a query against an AS/400. I have IBM Access for Windows installed and I am able to get queries to work, just not with parameters. Any time I include a parameter in my query (e.g. @MyParm) it doesn't work. It's like it doesn't replace the parameter with the value it should be. I get the following error:

        SQL0206: Column @MyParm not in specified tables

    Here's my code:

        Dim da As New OleDbDataAdapter
        Dim dt As New DataTable
        da.SelectCommand = New OleDbCommand
        da.SelectCommand.Connection = con
        da.SelectCommand.CommandText = "SELECT * FROM MyTable WHERE Col1 = @MyParm"
        With da.SelectCommand.Parameters
            .Add("@MyParm", OleDbType.Integer, 9)
            .Item("@MyParm").Value = 5
        End With
        ' I get the error here of course
        da.Fill(dt)

    I can replace @MyParm with a literal of 5 and it works fine. What am I missing here? I do this with SQL Server all the time, but this is the first time I am attempting it on an AS400.
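    One detail worth noting as a hedged aside: the System.Data.OleDb provider does not support named parameters in the command text, so markers like @MyParm are passed through to the server as-is; the command text needs positional '?' placeholders, with parameters added in the same order they appear. A sketch of that pattern (in C# for consistency with the other examples on this page; the provider name and connection string are placeholders) might look like this:

        using System.Data;
        using System.Data.OleDb;

        // OleDbCommand treats parameters positionally: '?' markers in the SQL text,
        // parameters added in matching order. The parameter name is only cosmetic.
        var dt = new DataTable();
        using (var con = new OleDbConnection("Provider=IBMDA400;Data Source=MyAS400;"))
        using (var da = new OleDbDataAdapter())
        {
            da.SelectCommand = new OleDbCommand("SELECT * FROM MyTable WHERE Col1 = ?", con);
            da.SelectCommand.Parameters.Add("@MyParm", OleDbType.Integer).Value = 5;
            da.Fill(dt);
        }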

    Read the article

  • Get an IDataReader from a typed List

    - by Jason Kealey
    I have a List<MyObject> with a million elements. (It is actually a SubSonic collection, but it is not loaded from the database.) I'm currently using SqlBulkCopy as follows:

        private string FastInsertCollection(string tableName, DataTable tableData)
        {
            string sqlConn = ConfigurationManager.ConnectionStrings[SubSonicConfig.DefaultDataProvider.ConnectionStringName].ConnectionString;
            using (SqlBulkCopy s = new SqlBulkCopy(sqlConn, SqlBulkCopyOptions.TableLock))
            {
                s.DestinationTableName = tableName;
                s.BatchSize = 5000;
                s.WriteToServer(tableData);
                s.BulkCopyTimeout = SprocTimeout;
                s.Close();
            }
            return sqlConn;
        }

    I use SubSonic's MyObjectCollection.ToDataTable() to build the DataTable from my collection. However, this duplicates the objects in memory and is inefficient. I'd like to use the SqlBulkCopy.WriteToServer method that takes an IDataReader instead of a DataTable, so that I don't duplicate my collection in memory. What's the easiest way to get an IDataReader from my list? I suppose I could implement a custom data reader (like here: http://blogs.microsoft.co.il/blogs/aviwortzel/archive/2008/05/06/implementing-sqlbulkcopy-in-linq-to-sql.aspx), but there must be something simpler I can do without writing a bunch of generic code.

    Edit: It does not appear that one can easily generate an IDataReader from a collection of objects. Accepting the current answer even though I was hoping for something built into the framework.

    Read the article

  • Read multiple tables from dataset in Powershell

    - by Lucas
    I am using a function that collects data from a SQL server:

        function Invoke-SQLCommand {
            param(
                [string] $dataSource = "myserver",
                [string] $dbName = "mydatabase",
                [string] $sqlCommand = $(throw "Please specify a query.")
            )
            $SqlConnection = New-Object System.Data.SqlClient.SqlConnection
            $SqlConnection.ConnectionString = "Server=$dataSource;Database=$dbName;Integrated Security=True"
            $SqlCmd = New-Object System.Data.SqlClient.SqlCommand
            $SqlCmd.CommandText = $sqlCommand
            $SqlCmd.Connection = $SqlConnection
            $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
            $SqlAdapter.SelectCommand = $SqlCmd
            $DataSet = New-Object System.Data.DataSet
            $SqlAdapter.Fill($DataSet)
            $SqlConnection.Close()
            $DataSet.Tables[0]
        }

    It works great but returns only one table. I am passing several SELECT statements, so the dataset contains multiple tables. I replaced $DataSet.Tables[0] with:

        for ($i = 0; $i -lt $DataSet.Tables.Count; $i++) {
            $DataSet.Tables[$i]
        }

    but the console only shows the content of the first table and blank lines for each record of what should be the second table. The only way to see the result is to change the code to:

        $DataSet.Tables[$i] | Out-String

    but I do not want strings, I want table objects to work with. When I assign what is returned by Invoke-SQLCommand to a variable, I can see that I have an array of DataRow objects, but only from the first table. What happened to the second table? Any help would be greatly appreciated. Thanks

    Read the article

  • Change the default SqlCommand CommandTimeout with configuration rather than recompile?

    - by robertc
    I am supporting an ASP.NET 3.5 web application, and users are experiencing a timeout error after 30 seconds when trying to run a report. Looking around the web, it seems easy enough to change the timeout in the code; unfortunately I'm not able to access the code and recompile. Is there any way to configure the default for either the web app, the worker process, IIS, or the whole machine? Here is the stack trace up to the point where it's in System.Data, in case I'm missing some other problem:

        [SqlException (0x80131904): Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.]
        System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) +1948826
        System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4844747
        System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194
        System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +2392
        System.Data.SqlClient.SqlDataReader.ConsumeMetaData() +33
        System.Data.SqlClient.SqlDataReader.get_MetaData() +83
        System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) +297
        System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) +954
        System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) +162
        System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) +32
        System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) +141
        System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior) +12
        System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior) +10
        System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) +130
        System.Data.Common.DbDataAdapter.Fill(DataTable[] dataTables, Int32 startRecord, Int32 maxRecords, IDbCommand command, CommandBehavior behavior) +162
        System.Data.Common.DbDataAdapter.Fill(DataTable dataTable) +115

    Edit: There must be something outside the code itself. I've downloaded the database and run it against the same web site installed on a test server, and there it runs for longer than 30 seconds and returns the report. I've compared the machine.config and web.config files from the .NET directory on live and test and they seem the same; I've compared the two IIS setups; and I've looked at the SQL Server configuration. The only difference is that the live server is clustered on 64-bit W2K3 while the test server is on 32-bit.
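    For reference, the code-level change the poster can't make (since the source isn't available for recompiling) is just a property on the command object. A minimal hedged sketch, with a made-up query and connection string:

        using System.Data.SqlClient;

        // The 30-second default lives on SqlCommand.CommandTimeout (per command, not per connection).
        using (var conn = new SqlConnection("Server=.;Database=Reports;Integrated Security=True"))
        using (var cmd = new SqlCommand("exec dbo.LongRunningReport", conn))
        {
            cmd.CommandTimeout = 300;   // seconds; 0 means wait indefinitely
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* consume rows */ }
            }
        }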

    Read the article

  • Database design for summarized data

    - by holden
    I have a new table I'm going to add to a bunch of other summarized data, basically to take some of the load off by calculating weekly averages. My question is whether I would be better off with one model over the other: one model with a series of fields, one per day of the week, each holding a price; or another model with a day-of-week column and an additional column for the price. I'd like to know which would save me speed and/or headaches, or at least what the trade-off is. I.e.:

        ID  OBJECT_ID  MON  TUE  WED  THU  FRI  SAT  SUN  SOURCE

    or

        ID  OBJECT_ID  DAYOFWEEK  PRICE  SOURCE

    Read the article

  • index help for a MySQL query using greater-than operator and ORDER BY

    - by Jaymon
    I have a table with at least a couple million rows and a schema of all integers that looks roughly like this:

        start  stop  first_user_id  second_user_id

    The rows get pulled using the following queries:

        SELECT * FROM tbl_name WHERE stop >= M AND first_user_id=N AND second_user_id=N ORDER BY start ASC

        SELECT * FROM tbl_name WHERE stop >= M AND first_user_id=N ORDER BY start ASC

    I cannot figure out the best indexes to speed up these queries. The problem seems to be the ORDER BY, because when I take that out the queries are fast. I've tried all different types of indexes using the standard index format:

        ALTER TABLE tbl_name ADD INDEX index_name (index_col_1, index_col_2, ...)

    And none of them seem to speed up the queries. Does anyone have any idea what index would work? Also, should I be trying a different type of index? I can't guarantee the uniqueness of each row, so I've avoided UNIQUE indexes. Any guidance/help would be appreciated. Thanks!

    Read the article

  • LINQ to Entities pulling back entire table

    - by Focus
    In my application I'm pulling back a user's "feed". This contains all of that user's activities, events, friend requests from other users, etc. When I pull back the feed I'm calling various functions to filter the request along the way:

        var userFeed = GetFeed(db);                             // Query to pull back all data
        userFeed = FilterByUser(userFeed, user, db);            // Filter for the user
        userFeed = SortFeed(userFeed, page, sortBy, typeName);  // Sort it

    The data that is returned is exactly what I need; however, when I look at a SQL Profiler trace I can see that the query that gets this data does not filter it at the database level and instead selects ALL the data in the table(s). This query does not execute until I iterate through the results in my view. All of these functions return an IEnumerable object. I was under the impression that LINQ would take all of my filters and form one query to pull back the data I want, instead of pulling back all the data and then filtering it on the server. What am I doing wrong, or what don't I understand about the way LINQ evaluates queries?
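    The distinction at play can be illustrated with a hedged sketch (the context, entity, and member names below are invented): returning IQueryable<T> keeps the expression composable so the provider can fold later Where/OrderBy calls into one SQL statement, while a method declared as returning IEnumerable<T> causes later Where calls to bind to LINQ to Objects and run in memory after the whole table has been streamed back.

        using System.Collections.Generic;
        using System.Linq;

        static class FeedQueries
        {
            // Stays composable: filters applied afterwards are translated into the SQL sent to the server.
            public static IQueryable<FeedItem> GetFeedQueryable(MyDataContext db)
            {
                return db.FeedItems;
            }

            // Because the declared return type is IEnumerable<T>, any Where applied afterwards binds to
            // Enumerable.Where and executes in memory, after the full table has been enumerated.
            public static IEnumerable<FeedItem> GetFeedEnumerable(MyDataContext db)
            {
                return db.FeedItems;
            }
        }

        // Usage sketch:
        //   var filteredInSql    = FeedQueries.GetFeedQueryable(db).Where(f => f.UserId == userId);
        //   var filteredInMemory = FeedQueries.GetFeedEnumerable(db).Where(f => f.UserId == userId);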

    Read the article

  • SQL Server Express 2005 Merge Replication using RMO causes Null Reference exception

    - by Craig Shearer
    I'm trying to use RMO to programmatically perform merge synchronization. I've basically copied the SQL Server example code, as follows:

        // Create a connection to the Subscriber.
        ServerConnection conn = new ServerConnection(subscriberName);
        MergePullSubscription subscription;

        try
        {
            // Connect to the Subscriber.
            conn.Connect();

            // Define the pull subscription.
            subscription = new MergePullSubscription(subscriptionDbName, publisherName,
                publicationDbName, publicationName, conn, false);

            // If the pull subscription exists, then start the synchronization.
            if (subscription.LoadProperties())
            {
                // Check that we have enough metadata to start the agent.
                if (subscription.PublisherSecurity != null || subscription.DistributorSecurity != null)
                {
                    subscription.SynchronizationAgent.Synchronize();
                }
                else
                {
                    throw new ApplicationException("There is insufficient metadata to " +
                        "synchronize the subscription. Recreate the subscription with " +
                        "the agent job or supply the required agent properties at run time.");
                }
            }
            else
            {
                // Do something here if the pull subscription does not exist.
                throw new ApplicationException(String.Format(
                    "A subscription to '{0}' does not exist on {1}",
                    publicationName, subscriberName));
            }
        }
        catch (Exception ex)
        {
            // Implement appropriate error handling here.
            throw new ApplicationException("The subscription could not be " +
                "synchronized. Verify that the subscription has " +
                "been defined correctly.", ex);
        }
        finally
        {
            conn.Disconnect();
        }

    I've got the server merge publication defined correctly, but when I run the above code I get a null reference exception on the call to:

        subscription.SynchronizationAgent.Synchronize();

    The stack trace is as follows:

        at Microsoft.SqlServer.Replication.MergeSynchronizationAgent.StatusEventSinkMethod(String message, Int32 percent, Int32* returnValue)
        at Test.ConsoleTest.Program.SynchronizePullSubscription() in F:\Visual Studio Projects\Test\source\Test.ConsoleTest\Program.cs:line 124

    It seems, from the stack trace, like it has something to do with the Status event, but I don't have a handler defined, and defining one makes no difference.
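    For what it's worth, attaching a handler (which the poster says they already tried) would look roughly like the sketch below. This assumes the RMO agent exposes a Status event whose event args carry a message and a percent-complete value; it is shown only to make the event named in the stack trace concrete, not as a fix:

        // Hedged sketch: subscribe to the agent's Status event before synchronizing.
        var agent = subscription.SynchronizationAgent;
        agent.Status += (sender, e) =>
        {
            // e.PercentCompleted and e.Message are assumed property names on the event args.
            Console.WriteLine("{0}% - {1}", e.PercentCompleted, e.Message);
        };
        agent.Synchronize();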

    Read the article

  • SQL Server Reporting Services Data Extention

    - by Vercinegetorix
    Hello! So... here's my story: I'm trying to create a SQL Server data extension (to be precise, I'm trying to get some sample code to run) for SSRS 2005. I've done the following:

    1. Placed the extension assembly into the ReportServer/bin folder.
    2. Placed the assembly into the Private Assemblies folder.
    3. Modified rsreportserver.config and added the assembly info to the data section.
    4. Modified rssrvpolicy.config and added a code group for the assembly with Full Trust.
    5. Modified RSReportDesigner.config in PrivateAssemblies; added the assembly to the data and the designer sections, specifying the generic query designer.
    6. Modified RSPreviewPolicy.config and added the assembly with Full Trust.

    The new Data Source type is available for selection, but when I try to view the dataset I get this error:

        The data extension DataSet could not be loaded. Check the configuration file RSReportDesigner.config.

    The location of the assembly is configured properly (I think), because I've added logging code and I can see that the constructor of the Connection object is being called. In fact, I've added logging code to every method of every class in the assembly, and as far as I can tell the failure occurs right after the connection object's constructor is called. Any ideas on how I might proceed to debug this? Thanks a lot!

    Read the article

  • what's wrong with sql statements sqlite3_column_text ??

    - by ahmet732
        if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
            NSLog(@"DB OPENED");
            // Setup the SQL statement and compile it for faster access
            const char *sqlStatement = "select name from Medicine";
            sqlite3_stmt *compiledStatement;
            int result = sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, nil);
            NSLog(@"RESULT %d", result);
            if (result == SQLITE_OK) {
                NSLog(@"RESULT IS OK...");
                // Loop through the results and add them to the feeds array
                while (sqlite3_step(compiledStatement) == SQLITE_ROW) {
                    // Read the data from the result row
                    NSLog(@"WHILE IS OK");
                    araci = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 1)];   // <-- the exception below is thrown here
                    NSLog(@"Data read");
                    NSLog(@"wow: %", araci);
                } // while
            } // if sqlite3_prepare_v2
            // Release the compiled statement from memory
            sqlite3_finalize(compiledStatement);
            NSLog(@"compiled stmnt finalized..");
        }
        sqlite3_close(database);
        NSLog(@"MA_DB CLOSED");

    Questions:

    1) Everything is all right until the marked line; when I run the app, it throws the exception below. I have one table and one row in my db; the columns are id (integer), name (varchar) and desc (varchar) (I created the table using SQLite Manager).

    2) What is the text type in SQLite Manager? Is that NSString? Also, how can I put the string values (in the table) into an NSMutableArray? (I am really having difficulty with the conversion.)

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** +[NSString stringWithUTF8String:]: NULL cString'

    Read the article

< Previous Page | 660 661 662 663 664 665 666 667 668 669 670 671  | Next Page >