Search Results

Search found 324 results on 13 pages for 'poco'.


  • Patterns to implement this grammar into C# code

    - by MexicanHacker
    Hey guys, I'm creating this little BNF grammar and I want to implement it in C# code:

    <template>  ::= <types><editors>
    <types>     ::= <type>+
    <type>      ::= <property>+
    <property>  ::= <name><type>
    <editors>   ::= <editor>+
    <editor>    ::= <name><type>(<textfield>|<form>|<list>|<pulldown>)+
    <textfield> ::= <label><property>[<editable>]
    <form>      ::= <label><property><editor>
    <list>      ::= <label><property><item-editor>
    <pulldown>  ::= <label><property><option>+
    <option>    ::= <value>

    One possible solution we have in mind is to create POCOs that carry attributes from the System.Xml.Serialization namespace, for example:

    [XmlRoot("template")]
    public class Template
    {
        [XmlElement("types")]
        public Types Types { get; set; }
    }

    However, I want to explore more solutions. What do you guys think?
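
    One hedged sketch of that annotation idea, fleshed out a little with the real System.Xml.Serialization attribute names; the class shapes are illustrative, not a definitive mapping of the grammar:

    using System.Collections.Generic;
    using System.Xml.Serialization;

    [XmlRoot("template")]
    public class Template
    {
        // <types> is a container of one or more <type> elements
        [XmlArray("types")]
        [XmlArrayItem("type")]
        public List<TypeDefinition> Types { get; set; }

        // <editors> is a container of one or more <editor> elements
        [XmlArray("editors")]
        [XmlArrayItem("editor")]
        public List<Editor> Editors { get; set; }
    }

    public class TypeDefinition
    {
        [XmlElement("property")]
        public List<Property> Properties { get; set; }
    }

    public class Property
    {
        [XmlElement("name")]
        public string Name { get; set; }

        [XmlElement("type")]
        public string Type { get; set; }
    }

    public class Editor
    {
        [XmlElement("name")]
        public string Name { get; set; }

        [XmlElement("label")]
        public string Label { get; set; }
    }

    XmlSerializer.Deserialize would then hydrate a Template instance directly from the XML; other patterns worth comparing are an XmlDocument/XPath walker, or a small recursive-descent parser if the grammar outgrows what attributes can express.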

    Read the article

  • Entity Framework 4: C# !(ReferenceEquals()) vs !=

    - by Eric J.
    Unless a class specifically overrides the behavior defined for Object, ReferenceEquals and == do the same thing... compare references. In property setters, I have commonly used the pattern

    private MyType myProperty;
    public MyType MyProperty
    {
        set
        {
            if (myProperty != value)
            {
                myProperty = value;
                // Do stuff like NotifyPropertyChanged
            }
        }
    }

    However, in code generated by Entity Framework, the if statement is replaced by

    if (!ReferenceEquals(myProperty, value))

    Using ReferenceEquals is more explicit (as I guess not all C# programmers know that == does the same thing if not overridden). Is there any difference that's escaping me between the two if-variants? Are they perhaps accounting for the possibility that POCO designers may have overridden ==? In short, if I have not overridden ==, am I safe using != instead of ReferenceEquals()?
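
    A small illustration of the practical difference (the Money type here is hypothetical, not EF-generated code): != dispatches to an overloaded operator when one exists, while ReferenceEquals is always a raw reference comparison.

    public class Money
    {
        public decimal Amount { get; set; }

        public static bool operator ==(Money a, Money b)
        {
            if (ReferenceEquals(a, b)) return true;
            if (ReferenceEquals(a, null) || ReferenceEquals(b, null)) return false;
            return a.Amount == b.Amount;
        }
        public static bool operator !=(Money a, Money b) { return !(a == b); }
        public override bool Equals(object obj) { return this == obj as Money; }
        public override int GetHashCode() { return Amount.GetHashCode(); }
    }

    public static class EqualityDemo
    {
        public static void Run()
        {
            var x = new Money { Amount = 1m };
            var y = new Money { Amount = 1m };
            System.Console.WriteLine(x != y);                        // False: overloaded operator compares values
            System.Console.WriteLine(!object.ReferenceEquals(x, y)); // True: always a reference comparison
        }
    }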

    Read the article

  • hidden form element not bound?

    - by csetzkorn
    Is it standard/intended behaviour that the value of a hidden form field is not bound to the view model (nor is a query string value)? Example:

    <%= Html.Hidden("test", "13") %>

    If my POCO view model contains a property "test", it should be bound, shouldn't it?! I have to set it explicitly in the controller at the moment, which kind of defeats the objective, doesn't it?

    bla(formviewmodel m, string test) { m.test = test; }

    Any feedback appreciated. Thanks! Christian

    Read the article

  • Convert jpg Byte[] to Texture2D

    - by Damien Sawyer
    I need to import jpeg images into a WP7/XNA app with associated metadata. The program which manages these images exports to an XML file with an encoded byte[] of the jpg files. I've written a custom importer/processor which successfully imports the reserialized objects into my XNA project. My question is: given the byte[] of the jpg, what is the best way to convert it back to Texture2D?

    // 'Standard' method for importing an image
    Texture2D texture1 = Content.Load<Texture2D>("artwork"); // Uses the standard content processor "Texture - XNA Framework" to import an image.

    // 'Custom' method
    var myCustomObject = Content.Load<CompiledBNBImage>("gamedata"); // Uses my custom content processor to return POCO "CompiledBNBImage"
    byte[] myJPEGByteArray = myCustomObject.Image; // byte[] of jpeg
    Texture2D texture2 = ???? // What is the best way to convert myJPEGByteArray to a Texture2D?

    Thanks very much for your help. :-) DS
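
    For reference, XNA 4.0 (the version WP7 uses) has a Texture2D.FromStream method that decodes PNG and JPEG data, so one hedged option, assuming myJPEGByteArray really holds raw JPEG bytes, looks like this:

    using System.IO;
    using Microsoft.Xna.Framework.Graphics;

    // Somewhere the GraphicsDevice is available, e.g. in LoadContent():
    Texture2D texture2;
    using (var stream = new MemoryStream(myJPEGByteArray))
    {
        texture2 = Texture2D.FromStream(GraphicsDevice, stream);
    }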

    Read the article

  • Is EF4 "Code Only" ready for production use?

    - by Tommy Jakobsen
    I've been looking at the new Entity Framework 4 Code Only features, and I really like them. But I'm having a hard time finding good resources on the feature. Everything seems to be spread around blogs here and there, so this makes me wonder if it's ready to be used for a serious project. What do you think? Is it ready for production use or should I use the more traditional approach (EDMX designer, POCO objects)? Also, I would like to know if there are any features that Code Only does not support yet, compared to the EDMX designer. What do you think about the Code Only feature? Is it "mature" yet? Thank you.

    Read the article

  • Linq to SQL Strange SQL Translation

    - by Master Morality
    I have a simple query that is generating some odd SQL translations, which is blowing up my code when the object is saturated.

    from x in DataContext.MyEntities
    select new
    {
        // x.EntityType is a string and EntityType.CDA is a const string...
        IsTypeCDA = x.EntityType == "CDA"
    }

    I would expect this query to translate to:

    SELECT (CASE WHEN [t0].[EntityType] = @p1 THEN 1 ELSE 0 END) AS [IsTypeCDA] ...

    Instead I get this:

    SELECT (CASE WHEN @p1 = [t0].[EntityType] THEN 1 WHEN NOT (@p1 = [t0].[EntityType]) THEN 0 ELSE NULL END) AS [IsTypeCDA] ...

    Since I'm saturating a POCO where IsTypeCDA is a bool, it blows up stating I can't assign null to bool. Any thoughts? Edit: fixed the property names so they make sense...
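
    One hedged way to sidestep the translation issue entirely (rather than a claim about why LINQ to SQL emits that CASE) is to bring the raw string back and evaluate the comparison in memory:

    var result = DataContext.MyEntities
        .Select(x => new { x.EntityType })                       // SQL only has to return the string column
        .AsEnumerable()                                          // switch to LINQ to Objects from here on
        .Select(x => new { IsTypeCDA = x.EntityType == "CDA" })  // evaluated in memory, so it is a plain bool
        .ToList();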

    Read the article

  • Entity Framework 4.0 Unit Testing

    - by Steve Ward
    Hi, I've implemented unit testing along the lines of this article, with a fake object context and IObjectSet with POCO in EF4: http://blogs.msdn.com/adonet/archive/2009/12/17/test-driven-development-walkthrough-with-the-entity-framework-4-0.aspx But I'm unsure how to implement a couple of methods on my fake object context for testing. I have CreateQuery and ExecuteFunction methods on my object context interface so that I can execute ESQL and stored procedures, but I can't (easily) implement them in my fake object context. An alternative would be to use a test double of my repository instead of a double of my object context, as suggested here: http://social.msdn.microsoft.com/Forums/en-US/adonetefx/thread/c4921443-e8a3-4414-92dd-eba1480a07ad/ But this would mean my real repository isn't being tested and would seem to just bypass the issue. Can anyone offer any recommendations?
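
    For context, the in-memory IObjectSet<T> double from that walkthrough typically looks roughly like the sketch below. It covers LINQ queries over the set, but gives no hook for CreateQuery/ExecuteFunction, which is exactly the gap the question describes; those usually end up exercised by integration tests against a real database, or by doubling the repository instead.

    using System;
    using System.Collections;
    using System.Collections.Generic;
    using System.Data.Objects;
    using System.Linq;
    using System.Linq.Expressions;

    // In-memory test double for IObjectSet<T>
    public class FakeObjectSet<T> : IObjectSet<T> where T : class
    {
        private readonly List<T> _items = new List<T>();

        public void AddObject(T entity) { _items.Add(entity); }
        public void Attach(T entity) { _items.Add(entity); }
        public void DeleteObject(T entity) { _items.Remove(entity); }
        public void Detach(T entity) { _items.Remove(entity); }

        public IEnumerator<T> GetEnumerator() { return _items.GetEnumerator(); }
        IEnumerator IEnumerable.GetEnumerator() { return _items.GetEnumerator(); }

        public Type ElementType { get { return typeof(T); } }
        public Expression Expression { get { return _items.AsQueryable().Expression; } }
        public IQueryProvider Provider { get { return _items.AsQueryable().Provider; } }
    }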

    Read the article

  • C# / Entity Framework / Linq question regarding calling a method when a class is accessed...

    - by Daniel
    So this is probably really basic, but I'm fairly new to all this. I am using Entity Framework with POCO entities. I want to call a method when a class property is set. I am trying to build an advertisement platform. I have a Customer class, a Venue class and an Advertisement class. I have my indexes set up in such a way that I can call customer.venue. However, I want to be able to call Customer.Venue.CurrentAdvertisement and have it execute a method (if CurrentAdvertisement is null) and return the current advertisement. I know I can explicitly set it every time, but I want to be able to override my classes so that whenever the CurrentAdvertisement property is accessed via LINQ it runs that method to return an ad. In order to do this I need to pass the Venue class a variable (venue name).
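
    If the goal is just "run a method the first time CurrentAdvertisement is read", one hedged option is a lazy getter on the POCO itself; the names are borrowed from the question and the loading logic is a placeholder:

    public class Venue
    {
        public string Name { get; set; }

        private Advertisement _currentAdvertisement;

        public Advertisement CurrentAdvertisement
        {
            get
            {
                // Runs only when the property is first read and nothing is cached yet
                if (_currentAdvertisement == null)
                    _currentAdvertisement = LoadCurrentAdvertisement(Name);
                return _currentAdvertisement;
            }
            set { _currentAdvertisement = value; }
        }

        // Hypothetical helper: whatever ad-selection logic the platform needs goes here
        private Advertisement LoadCurrentAdvertisement(string venueName)
        {
            return new Advertisement();
        }
    }

    public class Advertisement { }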

    Read the article

  • Getting .NET schema for a stored procedure result

    - by PHeiberg
    I have a couple of stored procedures in T-SQL where each stored procedure has a fixed schema for the result set. I need to map the result sets for each procedure to a POCO object and need the column name and type for each column in the result set. Is there a quick way of accessing the information? The best way I have found so far is accessing each stored procedure from .NET and writing my own extension method on an IDataReader/IDataRecord for dumping the information (column names and types) out. For example, a stored procedure executing the following query:

    SELECT Id, IntField, NullableIntField, VarcharField, DateField FROM SomeTable

    would require me to have the mapping information:

    Id - Guid
    IntField - System.Int32
    NullableIntField - Nullable<System.Int32>
    VarcharField - String
    DateField - DateTime
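
    A minimal sketch of the extension-method approach the question mentions, using only members that exist on IDataRecord (GetName, GetFieldType). Note that it cannot distinguish int from Nullable<int>; for that, IDataReader.GetSchemaTable exposes an AllowDBNull column.

    using System;
    using System.Data;

    public static class DataRecordExtensions
    {
        // Writes "ColumnName - System.Int32" style lines for every column in the current result set
        public static void DumpColumns(this IDataRecord record)
        {
            for (int i = 0; i < record.FieldCount; i++)
            {
                Console.WriteLine("{0} - {1}", record.GetName(i), record.GetFieldType(i).FullName);
            }
        }
    }

    Calling DumpColumns() on the reader returned by ExecuteReader() for each stored procedure would print the column/type pairs to map onto the POCOs.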

    Read the article

  • NHibernate 3 and MySQL setup tutorial

    - by ryanzec
    Since I have given up on using Entity Framework 4 as my ORM (getting it to work with MySQL and mapping table/field names like this_table/this_field to POCO property names like ThisTable/ThisField), I am now looking at NHibernate, as it seems to be the next big, well-known ORM for C# that probably will not die off any time soon. I am trying to look up some tutorials, and a lot of them have 2-2 in the configuration section, so I was wondering if those configurations would work with NHibernate 3? I am just curious whether the 2-2 refers to the version of NHibernate or something different.
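
    For what it's worth, the 2.2 in those snippets is the version of NHibernate's mapping/configuration XML schema, not the NHibernate release, and NHibernate 3.x still uses the same urn:nhibernate-mapping-2.2 namespace in hbm.xml files. A sketch with illustrative class and column names:

    <?xml version="1.0" encoding="utf-8"?>
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                       assembly="MyApp" namespace="MyApp.Domain">
      <class name="ThisTable" table="this_table">
        <id name="Id" column="id">
          <generator class="native" />
        </id>
        <property name="ThisField" column="this_field" />
      </class>
    </hibernate-mapping>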

    Read the article

  • In Entity framework, we can use Model first, DB first, Code first but how can we create table programmatically

    - by AukI
    In Entity Framework we can use 3 approaches: model first, code first, and database first, but each one of them needs a manual touch (creating the database, creating the model, or writing the POCO/entity class code) before proceeding to the next step (using EF in context). What if I want to create the database, tables and table relationships programmatically and still have the features of Entity Framework 4.3? To be more specific: from this example http://support.microsoft.com/kb/307283 we can create a database, tables and everything using SQL commands, but we can't have the advantages of Entity Framework. So if we want to have that, what should we do?
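
    One hedged sketch of how far the DbContext API (EF 4.1 and later, so including 4.3) can already take this: the model is plain code, and the database, tables and relationships are created at runtime from it. The class names here are hypothetical.

    using System.Collections.Generic;
    using System.Data.Entity;

    public class Order
    {
        public int Id { get; set; }
        public ICollection<OrderLine> Lines { get; set; }
    }

    public class OrderLine
    {
        public int Id { get; set; }
        public string Product { get; set; }
        public int OrderId { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Order> Orders { get; set; }
        public DbSet<OrderLine> OrderLines { get; set; }
    }

    class Program
    {
        static void Main()
        {
            using (var context = new ShopContext())
            {
                // Builds the database, both tables and the Order/OrderLine relationship if they don't exist yet
                context.Database.CreateIfNotExists();
            }
        }
    }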

    Read the article

  • What is Visual C++ 2005 Service Pack 1 Redistributable Package for?

    - by Stan
    I am using the Poco library, and when running my program on other machines which don't have VS2005 installed, I have to install the "Microsoft Visual C++ 2005 Service Pack 1 Redistributable Package ATL Security Update", otherwise the program gives an error when launching. What is this redistributable package for? Is there any way to avoid installing this but still have my program run well? Also, there are so many vcredist_x86.exe files out there. How can I know which one is necessary when I get an error? Thanks.

    Read the article

  • VS 2010 SP1 and SQL CE

    - by ScottGu
    Last month we released the Beta of VS 2010 Service Pack 1 (SP1).  You can learn more about the VS 2010 SP1 Beta from Jason Zander’s two blog posts about it, and from Scott Hanselman’s blog post that covers some of the new capabilities enabled with it.   You can download and install the VS 2010 SP1 Beta here. Last week I blogged about the new Visual Studio support for IIS Express that we are adding with VS 2010 SP1. In today’s post I’m going to talk about the new VS 2010 SP1 tooling support for SQL CE, and walkthrough some of the cool scenarios it enables.  SQL CE – What is it and why should you care? SQL CE is a free, embedded, database engine that enables easy database storage. No Database Installation Required SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start-up when you first access a SQL CE database, and will automatically shutdown when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications. Works with Existing Data APIs SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today. Supports Development, Testing and Production Scenarios SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free. Easy Migration to SQL Server SQL CE is an embedded database – which makes it ideal for development, testing, and light-usage scenarios.  For high-volume sites and applications you’ll probably want to migrate your database to use SQL Server Express (which is free), SQL Server or SQL Azure.  These servers enable much better scalability, more development features (including features like Stored Procedures – which aren’t supported with SQL CE), as well as more advanced data management capabilities. We’ll ship migration tools that enable you to optionally take SQL CE databases and easily upgrade them to use SQL Server Express, SQL Server, or SQL Azure.  You will not need to change your code when upgrading a SQL CE database to SQL Server or SQL Azure.  Our goal is to enable you to be able to simply change the database connection string in your web.config file and have your application just work. 
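
As a concrete illustration of the "works with existing data APIs" point above, plain ADO.NET against a SQL CE 4 database file in App_Data might look roughly like this; the connection string mirrors the Store.sdf used later in this post, and the column names are made up for the sketch.

using System;
using System.Data.SqlServerCe;

class SqlCeDemo
{
    static void Main()
    {
        // |DataDirectory| resolves to the App_Data folder of an ASP.NET application
        const string connectionString = @"Data Source=|DataDirectory|\Store.sdf";

        using (var connection = new SqlCeConnection(connectionString))
        using (var command = new SqlCeCommand("SELECT ProductName, UnitPrice FROM Products", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader.GetString(0), reader.GetDecimal(1));
            }
        }
    }
}
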
New Tooling Support for SQL CE in VS 2010 SP1 VS 2010 SP1 includes much improved tooling support for SQL CE, and adds support for using SQL CE within ASP.NET projects for the first time.  With VS 2010 SP1 you can now: Create new SQL CE Databases Edit and Modify SQL CE Database Schema and Indexes Populate SQL CE Databases within Data Use the Entity Framework (EF) designer to create model layers against SQL CE databases Use EF Code First to define model layers in code, then create a SQL CE database from them, and optionally edit the DB with VS Deploy SQL CE databases to remote servers using Web Deploy and optionally convert them to full SQL Server databases You can take advantage of all of the above features from within both ASP.NET Web Forms and ASP.NET MVC based projects. Download You can enable SQL CE tooling support within VS 2010 by first installing VS 2010 SP1 (beta). Once SP1 is installed, you’ll also then need to install the SQL CE Tools for Visual Studio download.  This is a separate download that enables the SQL CE tooling support for VS 2010 SP1. Walkthrough of Two Scenarios In this blog post I’m going to walkthrough how you can take advantage of SQL CE and VS 2010 SP1 using both an ASP.NET Web Forms and an ASP.NET MVC based application. Specifically, we’ll walkthrough: How to create a SQL CE database using VS 2010 SP1, then use the EF4 visual designers in Visual Studio to construct a model layer from it, and then display and edit the data using an ASP.NET GridView control. How to use an EF Code First approach to define a model layer using POCO classes and then have EF Code-First “auto-create” a SQL CE database for us based on our model classes.  We’ll then look at how we can use the new VS 2010 SP1 support for SQL CE to inspect the database that was created, populate it with data, and later make schema changes to it.  We’ll do all this within the context of an ASP.NET MVC based application. You can follow the two walkthroughs below on your own machine by installing VS 2010 SP1 (beta) and then installing the SQL CE Tools for Visual Studio download (which is a separate download that enables SQL CE tooling support for VS 2010 SP1). Walkthrough 1: Create a SQL CE Database, Create EF Model Classes, Edit the Data with a GridView This first walkthrough will demonstrate how to create and define a SQL CE database within an ASP.NET Web Form application.  We’ll then build an EF model layer for it and use that model layer to enable data editing scenarios with an <asp:GridView> control. Step 1: Create a new ASP.NET Web Forms Project We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET Web Forms project.  We’ll use the “ASP.NET Web Application” project template option so that it has a default UI skin implemented: Step 2: Create a SQL CE Database Right click on the “App_Data” folder within the created project and choose the “Add->New Item” menu command: This will bring up the “Add Item” dialog box.  Select the “SQL Server Compact 4.0 Local Database” item (new in VS 2010 SP1) and name the database file to create “Store.sdf”: Note that SQL CE database files have a .sdf filename extension. Place them within the /App_Data folder of your ASP.NET application to enable easy deployment. When we clicked the “Add” button above a Store.sdf file was added to our project: Step 3: Adding a “Products” Table Double-clicking the “Store.sdf” database file will open it up within the Server Explorer tab.  
Since it is a new database there are no tables within it: Right click on the “Tables” icon and choose the “Create Table” menu command to create a new database table.  We’ll name the new table “Products” and add 4 columns to it.  We’ll mark the first column as a primary key (and make it an identify column so that its value will automatically increment with each new row): When we click “ok” our new Products table will be created in the SQL CE database. Step 4: Populate with Data Once our Products table is created it will show up within the Server Explorer.  We can right-click it and choose the “Show Table Data” menu command to edit its data: Let’s add a few sample rows of data to it: Step 5: Create an EF Model Layer We have a SQL CE database with some data in it – let’s now create an EF Model Layer that will provide a way for us to easily query and update data within it. Let’s right-click on our project and choose the “Add->New Item” menu command.  This will bring up the “Add New Item” dialog – select the “ADO.NET Entity Data Model” item within it and name it “Store.edmx” This will add a new Store.edmx item to our solution explorer and launch a wizard that allows us to quickly create an EF model: Select the “Generate From Database” option above and click next.  Choose to use the Store.sdf SQL CE database we just created and then click next again.  The wizard will then ask you what database objects you want to import into your model.  Let’s choose to import the “Products” table we created earlier: When we click the “Finish” button Visual Studio will open up the EF designer.  It will have a Product entity already on it that maps to the “Products” table within our SQL CE database: The VS 2010 SP1 EF designer works exactly the same with SQL CE as it does already with SQL Server and SQL Express.  The Product entity above will be persisted as a class (called “Product”) that we can programmatically work against within our ASP.NET application. Step 6: Compile the Project Before using your model layer you’ll need to build your project.  Do a Ctrl+Shift+B to compile the project, or use the Build->Build Solution menu command. Step 7: Create a Page that Uses our EF Model Layer Let’s now create a simple ASP.NET Web Form that contains a GridView control that we can use to display and edit the our Products data (via the EF Model Layer we just created). Right-click on the project and choose the Add->New Item command.  Select the “Web Form from Master Page” item template, and name the page you create “Products.aspx”.  Base the master page on the “Site.Master” template that is in the root of the project. Add an <h2>Products</h2> heading the new Page, and add an <asp:gridview> control within it: Then click the “Design” tab to switch into design-view. Select the GridView control, and then click the top-right corner to display the GridView’s “Smart Tasks” UI: Choose the “New data source…” drop down option above.  This will bring up the below dialog which allows you to pick your Data Source type: Select the “Entity” data source option – which will allow us to easily connect our GridView to the EF model layer we created earlier.  This will bring up another dialog that allows us to pick our model layer: Select the “StoreEntities” option in the dropdown – which is the EF model layer we created earlier.  
Then click next – which will allow us to pick which entity within it we want to bind to: Select the “Products” entity in the above dialog – which indicates that we want to bind against the “Product” entity class we defined earlier.  Then click the “Enable automatic updates” checkbox to ensure that we can both query and update Products.  When you click “Finish” VS will wire-up an <asp:EntityDataSource> to your <asp:GridView> control: The last two steps we’ll do will be to click the “Enable Editing” checkbox on the Grid (which will cause the Grid to display an “Edit” link on each row) and (optionally) use the Auto Format dialog to pick a UI template for the Grid. Step 8: Run the Application Let’s now run our application and browse to the /Products.aspx page that contains our GridView.  When we do so we’ll see a Grid UI of the Products within our SQL CE database. Clicking the “Edit” link for any of the rows will allow us to edit their values: When we click “Update” the GridView will post back the values, persist them through our EF Model Layer, and ultimately save them within our SQL CE database. Learn More about using EF with ASP.NET Web Forms Read this tutorial series on the http://asp.net site to learn more about how to use EF with ASP.NET Web Forms.  The tutorial series uses SQL Express as the database – but the nice thing is that all of the same steps/concepts can also now also be done with SQL CE.   Walkthrough 2: Using EF Code-First with SQL CE and ASP.NET MVC 3 We used a database-first approach with the sample above – where we first created the database, and then used the EF designer to create model classes from the database.  In addition to supporting a designer-based development workflow, EF also enables a more code-centric option which we call “code first development”.  Code-First Development enables a pretty sweet development workflow.  It enables you to: Define your model objects by simply writing “plain old classes” with no base classes or visual designer required Use a “convention over configuration” approach that enables database persistence without explicitly configuring anything Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping Optionally auto-create a database based on the model classes you define – allowing you to start from code first I’ve done several blog posts about EF Code First in the past – I really think it is great.  The good news is that it also works very well with SQL CE. The combination of SQL CE, EF Code First, and the new VS tooling support for SQL CE, enables a pretty nice workflow.  Below is a simple example of how you can use them to build a simple ASP.NET MVC 3 application. Step 1: Create a new ASP.NET MVC 3 Project We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET MVC 3 project.  We’ll use the “Internet Project” template so that it has a default UI skin implemented: Step 2: Use NuGet to Install EFCodeFirst Next we’ll use the NuGet package manager (automatically installed by ASP.NET MVC 3) to add the EFCodeFirst library to our project.  We’ll use the Package Manager command shell to do this.  Bring up the package manager console within Visual Studio by selecting the View->Other Windows->Package Manager Console menu command.  
Then type: install-package EFCodeFirst within the package manager console to download the EFCodeFirst library and have it be added to our project: When we enter the above command, the EFCodeFirst library will be downloaded and added to our application: Step 3: Build Some Model Classes Using a “code first” based development workflow, we will create our model classes first (even before we have a database).  We create these model classes by writing code. For this sample, we will right click on the “Models” folder of our project and add the below three classes to our project: The “Dinner” and “RSVP” model classes above are “plain old CLR objects” (aka POCO).  They do not need to derive from any base classes or implement any interfaces, and the properties they expose are standard .NET data-types.  No data persistence attributes or data code has been added to them.   The “NerdDinners” class derives from the DbContext class (which is supplied by EFCodeFirst) and handles the retrieval/persistence of our Dinner and RSVP instances from a database. Step 4: Listing Dinners We’ve written all of the code necessary to implement our model layer for this simple project.  Let’s now expose and implement the URL: /Dinners/Upcoming within our project.  We’ll use it to list upcoming dinners that happen in the future. We’ll do this by right-clicking on our “Controllers” folder and select the “Add->Controller” menu command.  We’ll name the Controller we want to create “DinnersController”.  We’ll then implement an “Upcoming” action method within it that lists upcoming dinners using our model layer above.  We will use a LINQ query to retrieve the data and pass it to a View to render with the code below: We’ll then right-click within our Upcoming method and choose the “Add-View” menu command to create an “Upcoming” view template that displays our dinners.  We’ll use the “empty” template option within the “Add View” dialog and write the below view template using Razor: Step 4: Configure our Project to use a SQL CE Database We have finished writing all of our code – our last step will be to configure a database connection-string to use. We will point our NerdDinners model class to a SQL CE database by adding the below <connectionString> to the web.config file at the top of our project: EF Code First uses a default convention where context classes will look for a connection-string that matches the DbContext class name.  Because we created a “NerdDinners” class earlier, we’ve also named our connectionstring “NerdDinners”.  Above we are configuring our connection-string to use SQL CE as the database, and telling it that our SQL CE database file will live within the \App_Data directory of our ASP.NET project. Step 5: Running our Application Now that we’ve built our application, let’s run it! We’ll browse to the /Dinners/Upcoming URL – doing so will display an empty list of upcoming dinners: You might ask – but where did it query to get the dinners from? We didn’t explicitly create a database?!? One of the cool features that EF Code-First supports is the ability to automatically create a database (based on the schema of our model classes) when the database we point it at doesn’t exist.  Above we configured  EF Code-First to point at a SQL CE database in the \App_Data\ directory of our project.  When we ran our application, EF Code-First saw that the SQL CE database didn’t exist and automatically created it for us. 
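
The Dinner/RSVP model classes and the <connectionString> entry in steps 3 and 4 above were shown as screenshots in the original post; a plausible sketch of what they contain (the exact property list is illustrative):

using System;
using System.Collections.Generic;
using System.Data.Entity;

public class Dinner
{
    public int DinnerID { get; set; }
    public string Title { get; set; }
    public DateTime EventDate { get; set; }
    public string Address { get; set; }
    public virtual ICollection<RSVP> RSVPs { get; set; }
}

public class RSVP
{
    public int RsvpID { get; set; }
    public int DinnerID { get; set; }
    public string AttendeeEmail { get; set; }
    public virtual Dinner Dinner { get; set; }
}

public class NerdDinners : DbContext
{
    public DbSet<Dinner> Dinners { get; set; }
    public DbSet<RSVP> RSVPs { get; set; }
}

And the web.config connection string pointing EF Code First at a SQL CE file in \App_Data:

<connectionStrings>
  <add name="NerdDinners"
       connectionString="Data Source=|DataDirectory|NerdDinners.sdf"
       providerName="System.Data.SqlServerCe.4.0" />
</connectionStrings>
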
Step 6: Using VS 2010 SP1 to Explore our newly created SQL CE Database Click the “Show all Files” icon within the Solution Explorer and you’ll see the “NerdDinners.sdf” SQL CE database file that was automatically created for us by EF code-first within the \App_Data\ folder: We can optionally right-click on the file and “Include in Project" to add it to our solution: We can also double-click the file (regardless of whether it is added to the project) and VS 2010 SP1 will open it as a database we can edit within the “Server Explorer” tab of the IDE. Below is the view we get when we double-click our NerdDinners.sdf SQL CE file.  We can drill in to see the schema of the Dinners and RSVPs tables in the tree explorer.  Notice how two tables - Dinners and RSVPs – were automatically created for us within our SQL CE database.  This was done by EF Code First when we accessed the NerdDinners class by running our application above: We can right-click on a Table and use the “Show Table Data” command to enter some upcoming dinners in our database: We’ll use the built-in editor that VS 2010 SP1 supports to populate our table data below: And now when we hit “refresh” on the /Dinners/Upcoming URL within our browser we’ll see some upcoming dinners show up: Step 7: Changing our Model and Database Schema Let’s now modify the schema of our model layer and database, and walkthrough one way that the new VS 2010 SP1 Tooling support for SQL CE can make this easier.  With EF Code-First you typically start making database changes by modifying the model classes.  For example, let’s add an additional string property called “UrlLink” to our “Dinner” class.  We’ll use this to point to a link for more information about the event: Now when we re-run our project, and visit the /Dinners/Upcoming URL we’ll see an error thrown: We are seeing this error because EF Code-First automatically created our database, and by default when it does this it adds a table that helps tracks whether the schema of our database is in sync with our model classes.  EF Code-First helpfully throws an error when they become out of sync – making it easier to track down issues at development time that you might otherwise only find (via obscure errors) at runtime.  Note that if you do not want this feature you can turn it off by changing the default conventions of your DbContext class (in this case our NerdDinners class) to not track the schema version. Our model classes and database schema are out of sync in the above example – so how do we fix this?  There are two approaches you can use today: Delete the database and have EF Code First automatically re-create the database based on the new model class schema (losing the data within the existing DB) Modify the schema of the existing database to make it in sync with the model classes (keeping/migrating the data within the existing DB) There are a couple of ways you can do the second approach above.  Below I’m going to show how you can take advantage of the new VS 2010 SP1 Tooling support for SQL CE to use a database schema tool to modify our database structure.  We are also going to be supporting a “migrations” feature with EF in the future that will allow you to automate/script database schema migrations programmatically. Step 8: Modify our SQL CE Database Schema using VS 2010 SP1 The new SQL CE Tooling support within VS 2010 SP1 makes it easy to modify the schema of our existing SQL CE database.  
To do this we’ll right-click on our “Dinners” table and choose the “Edit Table Schema” command: This will bring up the below “Edit Table” dialog.  We can rename, change or delete any of the existing columns in our table, or click at the bottom of the column listing and type to add a new column.  Below I’ve added a new “UrlLink” column of type “nvarchar” (since our property is a string): When we click ok our database will be updated to have the new column and our schema will now match our model classes. Because we are manually modifying our database schema, there is one additional step we need to take to let EF Code-First know that the database schema is in sync with our model classes.  As i mentioned earlier, when a database is automatically created by EF Code-First it adds a “EdmMetadata” table to the database to track schema versions (and hash our model classes against them to detect mismatches between our model classes and the database schema): Since we are manually updating and maintaining our database schema, we don’t need this table – and can just delete it: This will leave us with just the two tables that correspond to our model classes: And now when we re-run our /Dinners/Upcoming URL it will display the dinners correctly: One last touch we could do would be to update our view to check for the new UrlLink property and render a <a> link to it if an event has one: And now when we refresh our /Dinners/Upcoming we will see hyperlinks for the events that have a UrlLink stored in the database: Summary SQL CE provides a free, embedded, database engine that you can use to easily enable database storage.  With SQL CE 4 you can now take advantage of it within ASP.NET projects and applications (both Web Forms and MVC). VS 2010 SP1 provides tooling support that enables you to easily create, edit and modify SQL CE databases – as well as use the standard EF designer against them.  This allows you to re-use your existing skills and data knowledge while taking advantage of an embedded database option.  This is useful both for small applications (where you don’t need the scalability of a full SQL Server), as well as for development and testing scenarios – where you want to be able to rapidly develop/test your application without having a full database instance.  SQL CE makes it easy to later migrate your data to a full SQL Server or SQL Azure instance if you want to – without having to change any code in your application.  All we would need to change in the above two scenarios is the <connectionString> value within the web.config file in order to have our code run against a full SQL Server.  This provides the flexibility to scale up your application starting from a small embedded database solution as needed. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Slide Menu with jQuery & ASP.NET

    - by Jason Ulloa
    In this post we will work on something that at times becomes a real ordeal: creating menus in our web applications. Our goal is to avoid elements that could make the page a bit slow, and for that we will use jQuery, which is a tool quite similar to AJAX, to build our menu. To create our sample menu we need three things:

    1. CSS, to apply the styles.
    2. jQuery, to handle the animations.
    3. Images, to assemble the menus.

    Our first step is to add the references to our page, to include the CSS and the scripts:

    <link rel="stylesheet" type="text/css" href="Styles/jquery.hrzAccordion.defaults.css" />
    <link rel="stylesheet" type="text/css" href="Styles/jquery.hrzAccordion.examples.css" />
    <script type="text/javascript" src="JS/jquery-1.3.2.js"></script>
    <script type="text/javascript" src="JS/jquery.easing.1.3.js"></script>
    <script type="text/javascript" src="JS/jquery.hrzAccordion.js"></script>
    <script type="text/javascript" src="JS/jquery.hrzAccordion.examples.js"></script>

    Our second step is to define the HTML that will hold the image elements and the text:

    <li>
      <div class="handle">
        <img src="images/title1.png" /></div>
      <img src="images/image_test.gif" align="left" />
      <h3>Contenido 1</h3>
      <p>Contenido de Ejemplo 1.<br>
        <br>
        Agregue todo el contenido aquí</p>
    </li>

    In the code above we have defined an item that contains an image shown inside the menu once it is expanded, an HTML h3 tag that holds the title, and a <p> element for the paragraph of text. As you can see, it is really simple. If we want to add more items, we only have to copy the previous block and add the new content. In the end, our menu should look something like this. Finally, here is the example to download.

    Read the article

  • Web application / Domain model integration using JSON capable DTOs [on hold]

    - by g-makulik
    I'm a bit confused about architectural choices for the web-application/Java/Python world. In the C/C++ world the available (open source) choices for implementing web applications are practically zero; with Java or Python the choices explode into a hard-to-sort-out mess of available 'frameworks' and application approaches. I want to sort out a clean MVC model, where the M stands for a fully blown (POCO/POJO driven) domain model (according to M. Fowler's EAA patterns) implemented in a mature OO language (Java, C++).

    The background: I have a system with certain hardware components (that introduce system-immanent active behavior) and a configuration database for system metadata and HW-component configuration data (these are usually even self-contained, since the HW-components are capable of persisting their configuration data anyway). For the configuration/status data exchange protocol with the HW-components we have chosen the Google Protobuf format, which works well for the directly wired communication with these components. This protocol is already used successfully with a Java-based GUI application via a TCP/IP connection to the main system-controlling HW-component. That application has some drawbacks and design flaws for historical reasons.

    Now we want to develop an abstract model (domain model) for configuring and monitoring those HW-components that represents a more use-case-oriented view of the overall system behavior. I have the feeling that a plain Java class model would fit best for this (a C++ implementation seems to have too much implementation/integration overhead with viable language-bridge interfaces). Google Protobuf message definitions could still serve well to describe DTO objects used to interact with a domain model API. But integrating Google Protobuf messages client side, e.g. for data binding in the current view, doesn't seem to be a good choice. I'm thinking about some extra serialization features, e.g. for JSON-based data exchange with the views/controllers. Most lightweight solutions seem to involve a Python-based presentation layer using JSON-based data transfer (I'm not sure I'm fully informed about this, though). Is there some lightweight framework available (applicable to a limited ARM Linux platform) that supports such an architecture for building a web application?

    UPDATE: According to my recent research and comments from colleagues, I've noticed that using Java (and some JVM) might not be the preferable choice for integration with Python on a limited Linux system like ours (running on ARM9 with hard-to-discuss memory and MCU costs), but C/C++ modules would do well for this (since they form the native interface to Python extensions, don't they?). I can imagine providing a domain model from an appropriate C/C++ API (though I still think it means more effort and higher skill requirements for the developers involved). I'm still searching for a good approach that supports such an architecture. I'll appreciate any pointers!

    Read the article

  • Sqlite & Entity Framework 4

    - by Dane Morgridge
    I have been working on a few client app projects in my spare time that need to persist small amounts of data, and have been looking for an easy to use embedded database. I really like db4o but I'm not wanting to open source this particular project, so it was not an option. Then I remembered that there was an ADO.NET provider for sqlite. Being a fan of sqlite in general, I downloaded it and gave it an install. The installer added tooling support for both Visual Studio 2008 & 2010, which is nice because I am working almost exclusively in 2010 at the moment. I noticed that the provider also had support for Entity Framework, but not specifically v4. I created a database using the tools that get installed with Visual Studio and all seemed to work fine. I went on to create an Entity Framework context and selected the sqlite database, and to my surprise it worked without any problems. The model showed up just like it would for any database, and so I started to write a little code to test and then.. BAM!.. Exception.

    "Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information."

    A quick bit of searching on Bing found the answer. To get it working, you need to include the following code in your web.config file:

    <startup useLegacyV2RuntimeActivationPolicy="true">
      <supportedRuntime version="v4.0" />
    </startup>

    And then everything magically works. Entity Framework 4 features worked, like lazy loading, and even the POCO templates worked. The only thing that didn't work was the model first development. The SQL generated was for SQL Server and of course wouldn't run on sqlite without some modifications. The only other oddity I found was that in order to have an auto incrementing id, you have to use the full integer data type for sqlite; a regular int won't do the trick. This translates to an Int64, or a long, when working with it in Entity Framework. Not a big deal, but something you need to be aware of. All in all, I am quite impressed with the Entity Framework support I found with sqlite. I wasn't really expecting much at all, and I was pleasantly surprised. I downloaded the ADO.NET sqlite provider from http://sqlite.phxsoftware.com/. If you want to use an embedded database with Entity Framework, give it a look. It will be well worth your time.
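
    To make that auto-increment note concrete, a hedged sketch of an entity shaped for the sqlite provider; the key is declared as long (Int64) so it maps onto sqlite's 64-bit INTEGER primary key. The class itself is just an illustration, not from the original post.

    // POCO entity used with the sqlite Entity Framework provider
    public class Note
    {
        // Maps to an INTEGER PRIMARY KEY column in sqlite, which auto-increments;
        // declaring this as int instead of long is what trips up identity generation
        public long Id { get; set; }
        public string Text { get; set; }
    }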

    Read the article

  • Adding Attributes to Generated Classes

    ASP.NET MVC 2 adds support for data annotations, implemented via attributes on your model classes.  Depending on your design, you may be using an OR/M tool like Entity Framework or LINQ-to-SQL to generate your entity classes, and you may further be using these entities directly as your Model.  This is fairly common, and alleviates the need to do mapping between POCO domain objects and such entities (though there are certainly pros and cons to using such entities directly). As an example, the current version of the NerdDinner application (available on CodePlex at nerddinner.codeplex.com) uses Entity Framework for its model.  Thus, there is a NerdDinner.edmx file in the project, and a generated NerdDinner.Models.Dinner class.  Fortunately, these generated classes are marked as partial, so you can extend their behavior via your own partial class in a separate file.  However, if for instance the generated Dinner class has a property Title of type string, you can't then add your own Title of type string for the purpose of adding data annotations to it, like this:

    public partial class Dinner
    {
        [Required]
        public string Title { get; set; }
    }

    This will result in a compilation error, because the generated Dinner class already contains a definition of Title.  How then can we add attributes to this generated code?  Do we need to go into the T4 template and add a special case that says if we're generating a Dinner class and it has a Title property, add this attribute?  Ick.

    MetadataType to the Rescue

    The MetadataType attribute can be used to define a type which contains attributes (metadata) for a given class.  It is applied to the class you want to add metadata to (Dinner), and it refers to a totally separate class to which you're free to add whatever methods and properties you like.  Using this attribute, our partial Dinner class might look like this:

    [MetadataType(typeof(Dinner_Validation))]
    public partial class Dinner {}

    public class Dinner_Validation
    {
        [Required]
        public string Title { get; set; }
    }

    In this case the Dinner_Validation class is public, but if you were concerned about muddying your API with such classes, it could instead have been created as a private class within Dinner.  Having the validation attributes specified in their own class (with no other responsibilities) complies with the Single Responsibility Principle and makes it easy for you to test that the validation rules you expect are in place via these annotations/attributes. Thanks to Julie Lerman for her help with this.  Right after she showed me how to do this, I realized it was also already being done in the project I was working on.

    Read the article

  • Building applications with WCF - Intro

    - by skjagini
    I am going to write a series of articles using Windows Communication Foundation (WCF) to develop client and server applications, and this is the first part of that series.

    What is WCF

    As Juval puts it in his Programming WCF book, WCF provides an SDK for developing and deploying services on Windows, and a runtime environment to expose CLR types as services and consume services as CLR types. Building services with WCF is incredibly easy, and its implementation provides a set of industry standards and off-the-shelf plumbing including service hosting, instance management, reliability, transaction management, security etc., such that it greatly increases productivity.

    Scenario: Let's consider a typical bank customer trying to create an account, deposit an amount and transfer funds between accounts, i.e. checking and savings. To make it interesting, we are going to divide the functionality into multiple services, each of them working with the database directly. We will run test cases with and without transactional support across services. In this post we will build contracts, services, a data access layer, and unit tests to verify end to end communication; nothing big here, and we will dig into other features of WCF in subsequent posts with incremental changes.

    In any distributed architecture we have two pieces, i.e. services and clients. Services, as the name implies, provide the functionality to execute various pieces of business logic on the server, and clients provide the interaction with the end user. Services can be built with Web Services or with WCF. Services built on WCF have the advantage of being binding independent, i.e. they can run against the TCP or HTTP protocol without any significant changes to the code.

    Solution

    Services:
    Profile: for creating a new bank customer and getting details about an existing customer (ProfileContract, ProfileService)
    Checking Account: to get the checking account balance, deposit or withdraw an amount (CheckingAccountContract, CheckingAccountService)
    Savings Account: to get the savings account balance, deposit or withdraw an amount (SavingsAccountContract, SavingsAccountService)
    ServiceHost: to host the services, i.e. run the services at a particular address, binding and contract where clients can connect
    Client: helps the end user use the services, e.g. creating an account and transferring amounts between the accounts
    BankDAL: data access layer to work with the database

    BankDAL

    It's a no-brainer to use an ORM, as many mature products are currently available on the market, including Linq2Sql, Entity Framework (EF), LLBLGen Pro etc. For this exercise I am going to use Entity Framework 4.0, CTP 5, with the code first approach. There are two approaches when working with data: data driven and code driven. In data driven we start by designing tables and their constraints in the database and generate entities in code, while in the code driven (code first) approach entities are defined in code and the metadata generated from the entities is used by EF to create the tables and table constraints. In previous versions the entity classes had to derive from EF-specific base classes. In EF 4 it is not required to derive from any EF classes; the entities are not only persistence ignorant but also enable full test driven development using mock frameworks. The application consists of 3 entities: the Customer entity, which contains customer details, and CheckingAccount and SavingsAccount to hold the respective account balances.
We could have introduced an Account base class for CheckingAccount and SavingsAccount which is certainly possible with EF mappings but to keep it simple we are just going to follow 1 –1 mapping between entity and table mappings. Lets start out by defining a class called Customer which will be mapped to Customer table, observe that the class is simply a plain old clr object (POCO) and has no reference to EF at all. using System;   namespace BankDAL.Model { public class Customer { public int Id { get; set; } public string FullName { get; set; } public string Address { get; set; } public DateTime DateOfBirth { get; set; } } }   In order to inform EF about the Customer entity we have to define a database context with properties of type DbSet<> for every POCO which needs to be mapped to a table in database. EF uses convention over configuration to generate the metadata resulting in much less configuration. using System.Data.Entity;   namespace BankDAL.Model { public class BankDbContext: DbContext { public DbSet<Customer> Customers { get; set; } } }   Entity constrains can be defined through attributes on Customer class or using fluent syntax (no need to muscle with xml files), CustomerConfiguration class. By defining constrains in a separate class we can maintain clean POCOs without corrupting entity classes with database specific information.   using System; using System.Data.Entity.ModelConfiguration;   namespace BankDAL.Model { public class CustomerConfiguration: EntityTypeConfiguration<Customer> { public CustomerConfiguration() { Initialize(); }   private void Initialize() { //Setting the Primary Key this.HasKey(e => e.Id);   //Setting required fields this.HasRequired(e => e.FullName); this.HasRequired(e => e.Address); //Todo: Can't create required constraint as DateOfBirth is not reference type, research it //this.HasRequired(e => e.DateOfBirth); } } }   Any queries executed against Customers property in BankDbContext are executed against Cusomers table. By convention EF looks for connection string with key of BankDbContext when working with the context.   We are going to define a helper class to work with Customer entity with methods for querying, adding new entity etc and these are known as repository classes, i.e., CustomerRepository   using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CustomerRepository { private readonly IDbSet<Customer> _customers;   public CustomerRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _customers = bankDbContext.Customers; }   public IQueryable<Customer> Query() { return _customers; }   public void Add(Customer customer) { _customers.Add(customer); } } }   From the above code it is observable that the Query methods returns customers as IQueryable i.e. customers are retrieved only when actually used i.e. iterated. Returning as IQueryable also allows to execute filtering and joining statements from business logic using lamba expressions without cluttering the data access layer with tens of methods.   
Our CheckingAccountRepository and SavingsAccountRepository look very similar to each other using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CheckingAccountRepository { private readonly IDbSet<CheckingAccount> _checkingAccounts;   public CheckingAccountRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _checkingAccounts = bankDbContext.CheckingAccounts; }   public IQueryable<CheckingAccount> Query() { return _checkingAccounts; }   public void Add(CheckingAccount account) { _checkingAccounts.Add(account); }   public IQueryable<CheckingAccount> GetAccount(int customerId) { return (from act in _checkingAccounts where act.CustomerId == customerId select act); }   } } The repository classes look very similar to each other for Query and Add methods, with the help of C# generics and implementing repository pattern (Martin Fowler) we can reduce the repeated code. Jarod from ElegantCode has posted an article on how to use repository pattern with EF which we will implement in the subsequent articles along with WCF Unity life time managers by Drew Contracts It is very easy to follow contract first approach with WCF, define the interface and append ServiceContract, OperationContract attributes. IProfile contract exposes functionality for creating customer and getting customer details.   using System; using System.ServiceModel; using BankDAL.Model;   namespace ProfileContract { [ServiceContract] public interface IProfile { [OperationContract] Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth);   [OperationContract] Customer GetCustomer(int id);   } }   ICheckingAccount contract exposes functionality for working with checking account, i.e., getting balance, deposit and withdraw of amount. ISavingsAccount contract looks the same as checking account.   using System.ServiceModel;   namespace CheckingAccountContract { [ServiceContract] public interface ICheckingAccount { [OperationContract] decimal? GetCheckingAccountBalance(int customerId);   [OperationContract] void DepositAmount(int customerId,decimal amount);   [OperationContract] void WithdrawAmount(int customerId, decimal amount);   } }   Services   Having covered the data access layer and contracts so far and here comes the core of the business logic, i.e. services.   
ProfileService implements the IProfile contract for creating a customer and getting customer details using the CustomerRepository.
using System; using System.Linq; using System.ServiceModel; using BankDAL; using BankDAL.Model; using BankDAL.Repositories; using ProfileContract;   namespace ProfileService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Profile: IProfile { public Customer CreateAccount( string customerName, string address, DateTime dateOfBirth) { Customer cust = new Customer { FullName = customerName, Address = address, DateOfBirth = dateOfBirth };   using (var bankDbContext = new BankDbContext()) { new CustomerRepository(bankDbContext).Add(cust); bankDbContext.SaveChanges(); } return cust; }   public Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth) { return CreateAccount(customerName, address, dateOfBirth); } public Customer GetCustomer(int id) { return new CustomerRepository(new BankDbContext()).Query() .Where(i => i.Id == id).FirstOrDefault(); }   } } From the above code you shall observe that we are calling bankDBContext’s SaveChanges method and there is no save method specific to customer entity because EF manages all the changes centralized at the context level and all the pending changes so far are submitted in a batch and it is represented as Unit of Work. Similarly Checking service implements ICheckingAccount contract using CheckingAccountRepository, notice that we are throwing overdraft exception if the balance falls by zero. WCF has it’s own way of raising exceptions using fault contracts which will be explained in the subsequent articles. SavingsAccountService is similar to CheckingAccountService. using System; using System.Linq; using System.ServiceModel; using BankDAL.Model; using BankDAL.Repositories; using CheckingAccountContract;   namespace CheckingAccountService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Checking:ICheckingAccount { public decimal? GetCheckingAccountBalance(int customerId) { using (var bankDbContext = new BankDbContext()) { CheckingAccount account = (new CheckingAccountRepository(bankDbContext) .GetAccount(customerId)).FirstOrDefault();   if (account != null) return account.Balance;   return null; } }   public void DepositAmount(int customerId, decimal amount) { using(var bankDbContext = new BankDbContext()) { var checkingAccountRepository = new CheckingAccountRepository(bankDbContext); CheckingAccount account = (checkingAccountRepository.GetAccount(customerId)) .FirstOrDefault();   if (account == null) { account = new CheckingAccount() { CustomerId = customerId }; checkingAccountRepository.Add(account); }   account.Balance = account.Balance + amount; if (account.Balance < 0) throw new ApplicationException("Overdraft not accepted");   bankDbContext.SaveChanges(); } } public void WithdrawAmount(int customerId, decimal amount) { DepositAmount(customerId, -1*amount); } } }   BankServiceHost The host acts as a glue binding contracts with it’s services, exposing the endpoints. The services can be exposed either through the code or configuration file, configuration file is preferred as it allows run time changes to service behavior even after deployment. We have 3 services and for each of the service you need to define name (the class that implements the service with fully qualified namespace) and endpoint known as ABC, i.e. address, binding and contract. 
We are using netTcpBinding and have defined a base address for each of the contracts.

<system.serviceModel>
  <services>
    <service name="ProfileService.Profile">
      <endpoint binding="netTcpBinding" contract="ProfileContract.IProfile"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Profile"/>
        </baseAddresses>
      </host>
    </service>
    <service name="CheckingAccountService.Checking">
      <endpoint binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Checking"/>
        </baseAddresses>
      </host>
    </service>
    <service name="SavingsAccountService.Savings">
      <endpoint binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Savings"/>
        </baseAddresses>
      </host>
    </service>
  </services>
</system.serviceModel>

We then have to open the services by creating a service host, which will handle the incoming requests from clients.

using System;

namespace ServiceHost
{
    class Program
    {
        static void Main(string[] args)
        {
            CreateHosts();
            Console.ReadLine();
        }

        private static void CreateHosts()
        {
            CreateHost(typeof(ProfileService.Profile), "Profile Service");
            CreateHost(typeof(SavingsAccountService.Savings), "Savings Account Service");
            CreateHost(typeof(CheckingAccountService.Checking), "Checking Account Service");
        }

        private static void CreateHost(Type type, string hostDescription)
        {
            System.ServiceModel.ServiceHost host = new System.ServiceModel.ServiceHost(type);
            host.Open();

            if (host.ChannelDispatchers != null && host.ChannelDispatchers.Count != 0
                && host.ChannelDispatchers[0].Listener != null)
                Console.WriteLine("Started: " + host.ChannelDispatchers[0].Listener.Uri);
            else
                Console.WriteLine("Failed to start:" + hostDescription);
        }
    }
}

BankClient

The client has no knowledge of the service business logic other than the functionality exposed through the contract, the endpoints and a proxy to work against. The endpoint data and service proxy can be generated by right-clicking the project references, choosing 'Add Service Reference' and entering the service endpoint address. Alternatively, if you have access to the source and both server and client are built on the .NET Framework, you can manually reference the contract dlls and update the client's configuration file to point to the service endpoint. One of the pros of the manual approach is that you don't have to work against messy generated code files.
<system.serviceModel>
  <client>
    <endpoint name="tcpProfile" address="net.tcp://localhost:1000/Profile"
              binding="netTcpBinding" contract="ProfileContract.IProfile"/>
    <endpoint name="tcpCheckingAccount" address="net.tcp://localhost:1000/Checking"
              binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/>
    <endpoint name="tcpSavingsAccount" address="net.tcp://localhost:1000/Savings"
              binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>
  </client>
</system.serviceModel>

The client uses a façade to connect to the services.

using System.ServiceModel;
using CheckingAccountContract;
using ProfileContract;
using SavingsAccountContract;

namespace Client
{
    public class ProxyFacade
    {
        public static IProfile ProfileProxy()
        {
            return (new ChannelFactory<IProfile>("tcpProfile")).CreateChannel();
        }

        public static ICheckingAccount CheckingAccountProxy()
        {
            return (new ChannelFactory<ICheckingAccount>("tcpCheckingAccount"))
                .CreateChannel();
        }

        public static ISavingsAccount SavingsAccountProxy()
        {
            return (new ChannelFactory<ISavingsAccount>("tcpSavingsAccount"))
                .CreateChannel();
        }
    }
}

With that in place, let's get our unit tests going.

using System;
using System.Diagnostics;
using BankDAL.Model;
using NUnit.Framework;
using ProfileContract;

namespace Client
{
    [TestFixture]
    public class Tests
    {
        private void TransferFundsFromSavingsToCheckingAccount(int customerId, decimal amount)
        {
            ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount);
            ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount);
        }

        private void TransferFundsFromCheckingToSavingsAccount(int customerId, decimal amount)
        {
            ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, amount);
            ProxyFacade.CheckingAccountProxy().WithdrawAmount(customerId, amount);
        }

        [Test]
        public void CreateAndGetProfileTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            const string customerName = "Tom";
            int customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)).Id;
            Customer customer = profile.GetCustomer(customerId);
            Assert.AreEqual(customerName, customer.FullName);
        }

        [Test]
        public void DepositWithDrawAndTransferAmountTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            string customerName = "Smith" + DateTime.Now.ToString("HH:mm:ss");
            var customer = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1));

            // Deposit to Savings
            ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 100);
            ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 25);
            Assert.AreEqual(125, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            // Withdraw
            ProxyFacade.SavingsAccountProxy().WithdrawAmount(customer.Id, 30);
            Assert.AreEqual(95, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));

            // Deposit to Checking
            ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 60);
            ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 40);
            Assert.AreEqual(100, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
            // Withdraw
            ProxyFacade.CheckingAccountProxy().WithdrawAmount(customer.Id, 30);
            Assert.AreEqual(70, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));

            // Transfer from Savings to Checking
            TransferFundsFromSavingsToCheckingAccount(customer.Id, 10);
            Assert.AreEqual(85, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));

            // Transfer from Checking to Savings
            TransferFundsFromCheckingToSavingsAccount(customer.Id, 50);
            Assert.AreEqual(135, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            Assert.AreEqual(30, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
        }

        [Test]
        public void FundTransfersWithOverDraftTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            string customerName = "Angelina" + DateTime.Now.ToString("HH:mm:ss");

            var customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1972, 1, 1)).Id;

            ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, 100);
            TransferFundsFromSavingsToCheckingAccount(customerId, 80);
            Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
            Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));

            try
            {
                TransferFundsFromSavingsToCheckingAccount(customerId, 30);
            }
            catch (Exception e)
            {
                Debug.WriteLine(e.Message);
            }

            Assert.AreEqual(110, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));
            Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
        }
    }
}

We are creating a new instance of the channel for every operation; we will look into instance management and how creating a new channel instance affects it in subsequent articles. The first two test cases deal with creating a customer and depositing, withdrawing and transferring money between accounts. The last case, FundTransfersWithOverDraftTest(), is interesting. The customer starts by depositing $100 into the savings account, followed by a transfer of $80 into the checking account, leaving $20 in savings. The customer then initiates a $30 transfer from Savings to Checking, which results in an overdraft exception on Savings while the $30 has already been deposited to Checking. Because we are not running both requests inside a transaction, the customer ends up with more money than the $100 he started with. In subsequent posts we will look into transaction handling. Make sure the ServiceHost project is set as the startup project and start the solution. Run the test cases either from the NUnit client or from TestDriven.Net/ReSharper, whichever is your favorite tool. Make sure you have updated the database connection string in the ServiceHost config file to point to your local database.
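Since transaction handling is deferred to a later post, here is a rough, hedged sketch of one way the transfer could be made atomic with System.Transactions. It assumes the netTcpBinding is configured with transactionFlow="true", the contract operations are marked with [TransactionFlow(TransactionFlowOption.Allowed)] and the service operations enlist in the ambient transaction; none of that is part of the sample above.

// Hypothetical sketch only: mirrors the body of TransferFundsFromSavingsToCheckingAccount,
// but requires transaction flow to be enabled on the binding and contracts.
using (var scope = new System.Transactions.TransactionScope())
{
    ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount);
    ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount);

    // If the withdrawal throws an overdraft exception, Complete() is never reached
    // and the deposit rolls back together with it.
    scope.Complete();
}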

    Read the article

  • Creating a File Upload

    - by jaullo
    To begin, let's talk a little about the FileUpload control, to give a general idea of what it is and how it works. FileUpload is an ASP.NET control that lets users select a file from any location on their machine and upload it to a predetermined directory through an ASP.NET page. By default the control is limited so that files larger than 4 MB cannot be uploaded; however, from our application's web.config we can change that value, either to increase it or to decrease it. Our example will focus on creating a web control that lets the user select a file and save it, so let's get started. The first step is to add to our page a web control that we will call Upload.ascx. Next, in our web control, we add the following markup:

<table style="width: 100%">
    <tr>
        <td colspan="3">
            <div align="center">
                <asp:Label ID="Label1" runat="server" Text="File Upload"></asp:Label>
            </div>
        </td>
    </tr>
    <tr>
        <td style="width: 456px" rowspan="2">&nbsp;</td>
        <td style="width: 386px">
            <div align="center">
                <asp:FileUpload ID="FileUpload1" runat="server" Height="24px" Width="243px" />
                <span id="Span1" runat="server" />
            </div>
        </td>
        <td rowspan="2"></td>
    </tr>
    <tr>
        <td style="width: 386px">
            <div align="center">
                <asp:ImageButton Id="btnupload" runat="server" OnClick="btnupload_Click"
                    ImageUrl="~/Styles/img/upload.png" style="text-align: center" />
            </div>
        </td>
    </tr>
    <tr>
        <td colspan="3">&nbsp;</td>
    </tr>
</table>

Our control should now look something like this. Finally, in the code-behind of our control we add the code for our button, which is responsible for reading the file selected in the FileUpload control and saving it to the specified path.

Protected Sub btnupload_Click(ByVal sender As Object, ByVal e As System.Web.UI.ImageClickEventArgs) Handles btnupload.Click
    If FileUpload1.HasFile Then
        Dim fileExt As String
        fileExt = System.IO.Path.GetExtension(FileUpload1.FileName)
        If (fileExt = ".exe") Then
            Label1.Text = "You can't upload .exe files!"
        Else
            Try
                FileUpload1.SaveAs(decrpath & _
                    FileUpload1.FileName)
                Label1.Text = "File name: " & _
                    FileUpload1.PostedFile.FileName & "<br>" & _
                    "File Size: " & _
                    FileUpload1.PostedFile.ContentLength & " kb<br>" & _
                    "Content type: " & _
                    FileUpload1.PostedFile.ContentType
            Catch ex As Exception
                Label1.Text = "ERROR: " & ex.Message.ToString()
            End Try
        End If
    Else
        Label1.Text = "You have not specified a file!"
    End If
End Sub

As we can see in the code above, we have also added elements that report the file name, the content type and the size in KB once the file has been uploaded to the server. Finally, keep in mind that decrpath is the path the file will be uploaded to, which you should change to suit your needs.
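    The article mentions that the default 4 MB limit can be changed from web.config without showing the setting. As a hedged example (the values below are illustrative only), the httpRuntime element controls that limit, and on IIS 7 and later the requestLimits element usually has to be raised as well:

<!-- Illustrative values only: raises the upload limit to roughly 20 MB -->
<system.web>
  <httpRuntime maxRequestLength="20480" />   <!-- value in KB -->
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="20971520" />   <!-- value in bytes -->
    </requestFiltering>
  </security>
</system.webServer>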

    Read the article

  • The pursuit of efficiency as the Holy Grail of healthcare IT

    - by Eloy M. Rodríguez
    The XVIII Jornadas de Informática Sanitaria en Andalucía closed last Friday with 11,500 hours of collective intelligence. Although I assume that figure comes from multiplying the hours of sessions and workshops by the number of registrants, which would not be entirely accurate since I estimate average attendance was around ninety people, I suppose it does reflect the whole if we include the volume of informal interaction that the format and the venue encourage. My subjective summary is that we are all aware that we must achieve greater efficiency in, and thanks to, ICT, and to that end we outlined some guidelines that the attendees, in their different roles, should apply and help spread. Along those lines, the standout point, I think, is the need to be very clear about where you are starting from and what you want to achieve, for which measurement is essential, with the measurements feeding back into the system so it can meet its objectives. On that note, as an anecdote, I would like to record a paradox about efficiency that was presented: given that the cost per day of hospitalization is higher at the beginning of a stay than in its final days, if we manage to become more efficient and reduce the average stay, final days of stay are freed up and used for new admissions, so the larger number of first days of stay increases the total economic cost. In that case we would improve the service to citizens but increase the cost, unless action were taken to resize hospital capacity, lowering the cost without compromising quality. Another highlighted topic was the possibility, and the need, to exploit ICT capabilities to make structural changes and move medicine from being reactive to proactive, with alerts that make it possible to act before a serious problem occurs. Another topic discussed was the real need to genuinely share responsibility with the citizen, thanks to the enormous low-cost possibilities that ICT offers, embracing a move towards collaborative health that has many challenges ahead but even more opportunities. The citizen's folder, emerging in several projects and ideas, is a step in that direction. One topic that stirred passions was the Managing Director of the Sergas complaining that ICT projects are extremely slow. Unfortunately her agenda did not allow her to stay for the debate, which was quite intense and brought up topics such as the very long administrative process, changing specifications and custom designs as factors beyond the specific efficiency of the ICT professionals involved in the projects. Finally, I want to mention a very interesting topic in line with what was said at the conference about the need to measure: the SEIS Index. The idea is to define a set of criteria, grouped into broad areas with a fine-grained breakdown, to monitor the contribution of ICT to improving health and healthcare. We were shown some preliminary versions, with the debate still open between two broad approaches: working from the major objectives down to the processes, or from the processes up to the objectives. The discussion is not merely academic, since it influences the parameters to be established. The good news is that the work is well advanced and health services will soon have a comparison tool grounded in the national reality.
    For those interested, several attendees have been tweeting the conference, so anyone who wants more detail can go to Twitter and search for the hashtag #jisa18; reading from the oldest tweet to the most recent, you can follow subjective views of what happened there. I cannot avoid a couple of self-criticisms, since I am a member of the SEIS. The first concerns the SEIS portal, which has not had the interactivity that a conference like this needed. It will soon begin to carry documents and analysis of the event, and later the more polished write-ups will appear in the I+S magazine; but in the second decade of the 21st century, considerably more is needed. The other is the regrettably low presence of the users of healthcare IT, in their roles as healthcare professionals and as citizens using health information systems. We have to be proactive so that they attend in significant numbers, because otherwise we risk becoming healthcare-IT absolutists: everything for the users, but without the users.

    Read the article

  • Implementing a modern web application with Web API on top of old services

    - by Gaui
    My company has many WCF services which may or may not be replaced in the near future. The old web application is written in WebForms and communicates straight with these services via SOAP and returns DataTables. Now I am designing a new modern web application in a modern style, an AngularJS client which communicates with an ASP.NET Web API via JSON. The Web API then communicates with the WCF services via SOAP. In the future I want to let the Web API handle all requests and go straight to the database, but because the business logic implemented in the WCF services is complicated it's going to take some time to rewrite and replace it. Now to the problem: I'm trying to make it easy in the near future to replace the WCF services with some other data storage, e.g. another endpoint, database or whatever. I also want to make it easy to unit test the business logic. That's why I have structured the Web API with a repository layer and a service layer. The repository layer has a straight communication with the data storage (WCF service, database, or whatever) and the service layer then uses the repository (Dependency Injection) to get the data. It doesn't care where it gets the data from. Later on I can be in control and structure the data returned from the data storage (DataTable to POCO) and be able to test the logic in the service layer with some mock repository (using Dependency Injection). Below is some code to explain where I'm going with this. But my question is, does this all make sense? Am I making this overly complicated and could this be simplified in any way possible? Does this simplicity make this too complicated to maintain? My main goal is to make it as easy as possible to switch to another data storage later on, e.g. an ORM and be able to test the logic in the service layer. And because the majority of the business logic is implemented in these WCF services (and they return DataTables), I want to be in control of the data and the structure returned to the client. Any advice is greatly appreciated. Update 20/08/14 I created a repository factory, so services would all share repositories. Now it's easy to mock a repository, add it to the factory and create a provider using that factory. Any advice is much appreciated. I want to know if I'm making things more complicated than they should be. So it looks like this: 1. Repository Factory public class RepositoryFactory { private Dictionary<Type, IServiceRepository> repositories; public RepositoryFactory() { this.repositories = new Dictionary<Type, IServiceRepository>(); } public void AddRepository<T>(IServiceRepository repo) where T : class { if (this.repositories.ContainsKey(typeof(T))) { this.repositories.Remove(typeof(T)); } this.repositories.Add(typeof(T), repo); } public dynamic GetRepository<T>() { if (this.repositories.ContainsKey(typeof(T))) { return this.repositories[typeof(T)]; } throw new RepositoryNotFoundException("No repository found for " + typeof(T).Name); } } I'm not very fond of dynamic but I don't know how to retrieve that repository otherwise. 2. 
Repository and service // Service repository interface // All repository interfaces extend this public interface IServiceRepository { } // Invoice repository interface // Makes it easy to mock the repository later on public interface IInvoiceServiceRepository : IServiceRepository { List<Invoice> GetInvoices(); } // Invoice repository // Connects to some data storage to retrieve invoices public class InvoiceServiceRepository : IInvoiceServiceRepository { public List<Invoice> GetInvoices() { // Get the invoices from somewhere // This could be a WCF, a database, or whatever using(InvoiceServiceClient proxy = new InvoiceServiceClient()) { return proxy.GetInvoices(); } } } // Invoice service // Service that handles talking to a real or a mock repository public class InvoiceService { // Repository factory RepositoryFactory repoFactory; // Default constructor // Default connects to the real repository public InvoiceService(RepositoryFactory repo) { repoFactory = repo; } // Service function that gets all invoices from some repository (mock or real) public List<Invoice> GetInvoices() { // Query the repository return repoFactory.GetRepository<IInvoiceServiceRepository>().GetInvoices(); } }
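To illustrate the testability the poster is aiming for, here is a hedged sketch of swapping a mock repository into the factory and exercising the service without touching any WCF endpoint. The factory, service, repository interface and Invoice type come from the question's code; MockInvoiceRepository and the test itself are hypothetical, and NUnit is assumed only because it appears elsewhere on this page.

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical mock that satisfies IInvoiceServiceRepository without any WCF call
public class MockInvoiceRepository : IInvoiceServiceRepository
{
    public List<Invoice> GetInvoices()
    {
        return new List<Invoice> { new Invoice(), new Invoice() };
    }
}

[TestFixture]
public class InvoiceServiceTests
{
    [Test]
    public void GetInvoices_ReturnsInvoicesFromRepository()
    {
        var factory = new RepositoryFactory();
        factory.AddRepository<IInvoiceServiceRepository>(new MockInvoiceRepository());

        var service = new InvoiceService(factory);

        // The service resolves the mock through the factory, so no WCF service is needed
        Assert.AreEqual(2, service.GetInvoices().Count);
    }
}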

    Read the article

  • Using ViewModel in ASP.NET MVC with FluentValidation

    - by Brian McCord
    I am using ASP.NET MVC with Entity Framework POCO classes and the FluentValidation framework. It is working well, and the validation is happening as it should (as if I were using DataAnnotations). I have even gotten client-side validation working. And I'm pretty pleased with it. Since this is a test application I am writing just to see if I can get new technologies working together (and learn them along the way), I am now ready to experiment with using ViewModels instead of just passing the actual Model to the view. I'm planning on using something like AutoMapper in my service to do the mapping back and forth from Model to ViewModel but I have a question first. How is this going to affect my validation? Should my validation classes (written using FluentValidation) be written against the ViewModel instead of the Model? Or does it need to happen in both places? One of the big deals about DataAnnotations (and FluentValidation) was that you could have validation in one place that would work "everywhere". And it fulfills that promise (mostly), but if I start using ViewModels, don't I lose that ability and have to go back to putting validation in two places? Or am I just thinking about it wrong?
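For context on what "writing the FluentValidation rules against the ViewModel" would look like, here is a hedged sketch; CustomerViewModel and its properties are made up purely for illustration and are not from the question.

using FluentValidation;

// Hypothetical ViewModel used only for this example
public class CustomerViewModel
{
    public string Name { get; set; }
    public string Email { get; set; }
}

// Validator written against the ViewModel rather than the EF model class
public class CustomerViewModelValidator : AbstractValidator<CustomerViewModel>
{
    public CustomerViewModelValidator()
    {
        RuleFor(vm => vm.Name).NotEmpty().Length(1, 100);
        RuleFor(vm => vm.Email).NotEmpty().EmailAddress();
    }
}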

    Read the article

  • ASP.Net Entity Framework, objectcontext error

    - by Chris Klepeis
    I'm building a 4 layered ASP.Net web application. The layers are: Data Layer Entity Layer Business Layer UI Layer The entity layer has my data model classes and is built from my entity data model (edmx file) in the datalayer using T4 templates (POCO). The entity layer is referenced in all other layers. My data layer has a class called SourceKeyRepository which has a function like so: public IEnumerable<SourceKey> Get(SourceKey sk) { using (dmc = new DataModelContainer()) { var query = from SourceKey in dmc.SourceKeys select SourceKey; if (sk.sourceKey1 != null) { query = from SourceKey in query where SourceKey.sourceKey1 == sk.sourceKey1 select SourceKey; } return query; } } Lazy loading is disabled since I do not want my queries to run in other layers of this application. I'm receiving the following error when attempting to access the information in the UI layer: The ObjectContext instance has been disposed and can no longer be used for operations that require a connection. I'm sure this is because my DataModelContainer "dmc" was disposed. How can I return this IEnumerable object from my data layer so that it does not rely on the ObjectContext, but solely on the DataModel? Is there a way to limit lazy loading to only occur in the data layer?
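One common way to avoid the disposed-ObjectContext error described above is to materialize the results before the context is disposed. The sketch below is a hedged variation of the question's method, not a definitive answer: it returns a fully loaded list instead of a deferred query, so the UI layer never touches the context.

public IEnumerable<SourceKey> Get(SourceKey sk)
{
    using (var dmc = new DataModelContainer())
    {
        var query = from SourceKey in dmc.SourceKeys
                    select SourceKey;

        if (sk.sourceKey1 != null)
        {
            query = query.Where(k => k.sourceKey1 == sk.sourceKey1);
        }

        // ToList() executes the query now, while the context is still alive,
        // so callers receive plain objects rather than a deferred query.
        return query.ToList();
    }
}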

    Read the article

  • EF4 querying from parent to grandchildren

    - by Hans Kesting
    I have a model with Parents, Children and Grandchildren, in a many-to-many relationship. Using this article I created POCO classes that work fine, except for one thing I can't yet figure out. When I query the Parents or Children directly using LINQ, the SQL reflects the LINQ query (a .Count() executes a COUNT in the database and so on) - fine. The Parent class has a Children property, to access its children. But (and now for the problem) this doesn't expose an IQueryable interface but an ICollection. So when I access the Children property on a particular parent, all the Parent's Children are read. Even worse, when I access the Grandchildren (theParent.Children.SelectMany(child => child.GrandChildren).Count()) then for each and every child a separate request is issued to select all data of its grandchildren. That's a lot of separate queries! Changing the type of the Children property from ICollection to IQueryable doesn't solve this. Apart from missing methods I need, like Add() and Remove(), EF just doesn't recognize the navigation property then. Are there correct ways (as in: low database interaction) of querying through children (and what are they)? Or is this just not possible?
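To make the difference concrete, here is a hedged sketch of counting the grandchildren by composing the whole expression on the context's IQueryable sets instead of walking the materialized navigation collections; the context and set names (ModelContainer, Parents) are assumptions based on the question, not its actual code.

// Hypothetical: the navigation properties are used inside a LINQ-to-Entities query,
// so EF can translate the whole thing into a single COUNT instead of issuing one
// request per child.
using (var context = new ModelContainer())
{
    int grandchildCount = context.Parents
        .Where(p => p.Id == parentId)
        .SelectMany(p => p.Children)
        .SelectMany(c => c.GrandChildren)
        .Count();
}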

    Read the article

  • mvvm - prismv2 - INotifyPropertyChanged

    - by depictureboy
    Since this is so long and rambling and really doesn't ask a coherent question: 1: what is the proper way to implement subproperties of a primary object in a viewmodel? 2: Has anyone found a way to fix the DelegateCommand.RaiseCanExecuteChanged issue, or do I need to fix it myself until MS does? For the rest of the story... continue on. In my viewmodel I have a doctor object property that is tied to my Model.Doctor, which is an EF POCO object. I have OnPropertyChanged("Doctor") in the setter, as such:

Private Property Doctor() As Model.Doctor
    Get
        Return _objDoctor
    End Get
    Set(ByVal Value As Model.Doctor)
        _objDoctor = Value
        OnPropertyChanged("Doctor")
    End Set
End Property

The only time OnPropertyChanged fires is when the WHOLE object changes. This wouldn't be a problem except that I need to know when the properties of Doctor change, so that I can enable other controls on my form (the save button, for example). I have tried to implement it this way:

Public Property FirstName() As String
    Get
        Return _objDoctor.FirstName
    End Get
    Set(ByVal Value As String)
        _objDoctor.FirstName = Value
        OnPropertyChanged("Doctor")
    End Set
End Property

This is taken from the XAMLPowerToys controls from Karl Shifflet, so I have to assume that it's correct. But for the life of me I can't get it to work. I have included PRISM in here because I am using a Unity container to instantiate my view and it IS a singleton. I am getting change notification to the viewmodel via the EventAggregator, which then populates Doctor with the new value. The reason I am doing all this is because of PRISM's DelegateCommand. So maybe that is my real issue. It appears that there is a bug in DelegateCommand that does not fire the RaiseCanExecuteChanged method on the commands that implement it, and therefore it needs to be fired manually. I have the code for that in my OnPropertyChanged event handler. Of course this isn't implemented through the ICommand interface either, so I have to break down and make my properties DelegateCommand(Of X) so that I have access to RaiseCanExecuteChanged on each command.
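As a hedged sketch of the pattern the poster is reaching for (shown in C# for brevity; the SaveCommand name and the surrounding class members are hypothetical, not from the question), the sub-property setter can raise change notification under its own name, and the PropertyChanged handler can re-evaluate the commands explicitly:

// Hypothetical viewmodel fragment: the wrapper property raises its own name,
// and every property change re-queries the commands' CanExecute state.
public string FirstName
{
    get { return _objDoctor.FirstName; }
    set
    {
        _objDoctor.FirstName = value;
        OnPropertyChanged("FirstName");   // notify for the wrapper property itself
        OnPropertyChanged("Doctor");      // optionally notify for the parent object too
    }
}

protected void OnPropertyChanged(string propertyName)
{
    var handler = PropertyChanged;
    if (handler != null)
        handler(this, new System.ComponentModel.PropertyChangedEventArgs(propertyName));

    // Prism's DelegateCommand does not watch properties by itself,
    // so CanExecute has to be re-evaluated explicitly.
    SaveCommand.RaiseCanExecuteChanged();
}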

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13  | Next Page >