Search Results

Search found 42756 results on 1711 pages for 'model based testing'.


  • Testing Routes in ASP.NET MVC with MvcContrib

    - by Guilherme Cardoso
    I've decided to write about unit testing over the next few weeks. If we decide to develop with the Test-Driven Development pattern, it's important not to forget the routes. This article shows how to test them. In SetUp I import my routes by calling the RegisterRoutes method from the Global.asax of the Project.Web project created by default. I'm using ShouldMapTo() from MvcContrib: http://mvccontrib.codeplex.com/ The controller is specified in the ShouldMapTo() signature, and we use lambda expressions for the action and the parameters that are passed to that controller.

    [SetUp]
    public void Setup()
    {
        Project.Web.MvcApplication.RegisterRoutes(RouteTable.Routes);
    }

    [Test]
    public void Should_Route_HomeController()
    {
        "~/Home".ShouldMapTo<HomeController>(action => action.Index());
    }

    [Test]
    public void Should_Route_EventsController()
    {
        "~/Events".ShouldMapTo<EventsController>(action => action.Index());

        // In this example, 44 is the Id of my Event and
        // "Concert-DevaMatri-22-January" is the title of that Event
        "~/Events/View/44/Concert-DevaMatri-22-January-"
            .ShouldMapTo<EventsController>(action => action.Read(44, "Concert-DevaMatri-22-January"));
    }

    [TearDown]
    public void Teardown()
    {
        RouteTable.Routes.Clear();
    }

    Read the article

  • A Reusable Builder Class for Javascript Testing

    - by Liam McLennan
    Continuing on my series of builders for C# and Ruby, here is the solution in Javascript. This is probably the implementation with which I am least happy. There are several parts that did not seem to fit the language. This time around I didn't bother with a testing framework, I just append some values to the page with jQuery. Here is the test code:

    var initialiseBuilder = function() {
        var builder = builderConstructor();
        builder.configure({
            'Person': function() { return {name: 'Liam', age: 26}; },
            'Property': function() { return {street: '127 Creek St', manager: builder.a('Person')}; }
        });
        return builder;
    };

    var print = function(s) {
        $('body').append(s + '<br/>');
    };

    var build = initialiseBuilder();

    // get an object
    liam = build.a('Person');
    print(liam.name + ' is ' + liam.age);

    // get a modified object
    liam = build.a('Person', function(person) { person.age = 999; });
    print(liam.name + ' is ' + liam.age);

    home = build.a('Property');
    print(home.street + ' manager: ' + home.manager.name);

    and the implementation:

    var builderConstructor = function() {
        var that = {};
        var defaults = {};

        that.configure = function(d) {
            defaults = d;
        };

        that.a = function(type, modifier) {
            var o = defaults[type]();
            if (modifier) { modifier(o); }
            return o;
        };

        return that;
    };

    I still like Javascript's syntax for anonymous methods; defaults[type]() is much clearer than the Ruby equivalent @defaults[klass].call(). You can see the striking similarity between Ruby hashes and Javascript objects. I also prefer modifier(o) to the equivalent Ruby, yield o.

    Read the article

  • Unit testing ASP.NET Web API controllers that rely on the UrlHelper

    - by cibrax
    UrlHelper is the class you can use in ASP.NET Web API to automatically infer links from the routing table without hardcoding anything. For example, the following code uses the helper to infer the location URL for a new resource,

    public HttpResponseMessage Post(User model)
    {
        var response = Request.CreateResponse(HttpStatusCode.Created, model);
        var link = Url.Link("DefaultApi", new { id = model.Id, controller = "Users" });
        response.Headers.Location = new Uri(link);
        return response;
    }

    That code uses a previously defined route, "DefaultApi", which you might configure in the HttpConfiguration object (this is the route generated by default when you create a new Web API project). The problem with UrlHelper is that it requires some initialization code before you can invoke it from a unit test (for testing the Post method in this example). If you don't initialize the HttpConfiguration and Request instances associated with the controller from the unit test, it will fail miserably. After digging into the ASP.NET Web API source code a little bit, I could figure out what the requirements for using the UrlHelper are. It relies on the routing table configuration and a few properties you need to add to the HttpRequestMessage. The following code illustrates what's needed,

    var controller = new UserController();
    controller.Configuration = new HttpConfiguration();

    var route = controller.Configuration.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
    );

    var routeData = new HttpRouteData(route,
        new HttpRouteValueDictionary
        {
            { "id", "1" },
            { "controller", "Users" }
        });

    controller.Request = new HttpRequestMessage(HttpMethod.Post, "http://localhost:9091/");
    controller.Request.Properties.Add(HttpPropertyKeys.HttpConfigurationKey, controller.Configuration);
    controller.Request.Properties.Add(HttpPropertyKeys.HttpRouteDataKey, routeData);

    The HttpRouteData instance should be initialized with the route values you will use in the controller method ("id" and "controller" in this example). Once you have correctly set up all those properties, you shouldn't have any problem using the UrlHelper. There is no need to mock anything else. Enjoy!

    Read the article

  • Service Testing made easy with SO-Aware Test Workbench

    - by cibrax
    I am happy to announce today a new addition to our SO-Aware service repository toolset: SO-Aware Test Workbench, a WPF desktop application for doing functional and load testing against existing WCF services. This tool is completely integrated with the SO-Aware service repository, which makes configuring new load and functional tests for WCF SOAP and REST services a breeze. From now on, the service repository can play a very important role in an organization by facilitating collaboration between developers and testers. Developers can create and register new services in the repository with all the related artifacts, like configuration. Testers, on the other hand, can just pick one of the existing services in the repository and create functional or load tests from there, with no need to deal with the specific details of the service implementation, location or configuration settings. Developers and testers can later use the results of those tests to modify the services or adjust different settings on the tests or the service configuration. Gustavo Machado, one of the developers behind this project, has written an excellent post describing all the functionality you can find today in the tool. You can also see the tool in action in this Endpoint Tv episode with Jesus and Ron Jacobs.

    Read the article

  • BizTalk 2009 - Error when Testing Map with Flat File Source Schema

    - by StuartBrierley
    I have recently been creating some flat file schemas using the BizTalk Server 2009 Flat File Schema Wizard. I have then been mapping these flat file schemas to a "normal" XML schema format. I had not previously had any cause to map flat files, and I ran into some trouble when testing the first of these flat file maps; with an instance of the flat file as the source, it threw an XSL transform error:

    Test Map.btm: error btm1050: XSL transform error: Unable to write output instance to the following <file:///C:\Documents and Settings\sbrierley\Local Settings\Temp\_MapData\Test Mapping\Test Map_output.xml>. Data at the root level is invalid. Line 1, position 1.

    Due to the complexity of the map in question, I decided to create a small test map using the same source and destination schemas to see if I could pinpoint the problem. Although the source message instance validated correctly against the flat file schema, when I then tested this simplified map I got the same error. After a time of fruitless head scratching and some serious Google time, I figured out what the problem was. Looking at the map properties, I noticed that I had the test map input set to "XML" - for a flat file instance this should be set to "native".

    Read the article

  • Supporting and testing multiple versions of a software library in a Maven project

    - by Duncan Jones
    My company has several versions of its software in use by our customers at any one time. My job is to write bespoke Java software for the customers based on the version of software they happen to be running. I've created a Java library that performs many of the tasks I regularly require in a normal project. This is a Maven project that I deploy to our local Artifactory and pull down into other Maven projects when required. I can't decide the best way to support the range of software versions used by our customers. Typically, we have about three versions in use at any one time. They are normally backwards compatible with one another, but that cannot be guaranteed. I have considered the following options for managing this issue:

    Separate editions for each library version: I make a separate release of my library for each version of my company software. Using some Maven cunningness I could automatically produce a tested version linked to each of the then-current company software versions. This is feasible, but not without its technical challenges. The advantage is that this would be fairly automatic and my unit tests would definitely have executed against the correct software version. However, I would have to keep updating the versions supported and may end up maintaining a large collection of libraries.

    One supported version, but others tested: I support the oldest software version and make a release against that. I then perform tests with the newer software versions to ensure it still works. I could try to make this testing automatic by having some non-deployed Maven projects that import the software library and the associated test JAR and override the company software version used. If those projects build, then the library is compatible. I could ensure these meta-projects are included in our CI server builds.

    I welcome comments on which approach is better, or a suggestion for a different approach entirely. I'm leaning towards the second option.

    Read the article

  • Testing a codebase with sequential cohesion

    - by iveqy
    I have this really simple program written in C with ncurses that's basically a front-end to sqlite3. I would like to implement TDD to continue the development and have found a nice C unit framework for this. However, I'm totally stuck on how to apply it. Take this case for example: a user types a letter 'l' that is captured by ncurses getch(), and then an sqlite3 query is run that for every row calls a callback function. This callback function prints stuff to the screen via ncurses. So the obvious way to fully test this is to simulate a keyboard and a terminal and make sure that the output is as expected. However, this sounds too complicated. I was thinking about adding an abstraction layer between the database and the UI, so that the callback function would populate a list of entries which would later be printed. In that case I would be able to check whether that list contains the expected values. However, why would I struggle with a data structure and lists in my program when sqlite3 already does this? For example, if the user wants to see the list sorted some other way, it would be expensive to throw away the list and repopulate it. I would need to sort the list, but why should I implement sorting when sqlite3 already has that? Using my original design I could just do another query, sorted differently. Previously I've only done TDD with command line applications, where it's really easy to just compare the output with what I expected. Another way would be to add a CLI interface to the program and wrap a test program around the CLI to test everything (the way git.git does with its test framework). So the question is: how do I add testing to a tightly integrated database/UI?

    Read the article

  • MVC 3 Client Validation works intermittently

    - by Gutek
    I have MVC 3 - the System.Web.Mvc product version is 3.0.20105.0, modified on 5th of Jan 2011 - I think that's the latest. I've noticed that client validation is not working as it's supposed to in the application that we are creating, so I've made a quick test. I've created a basic MVC 3 application using the Internet Application template. I've added a Test controller:

    using System.Web.Mvc;
    using MvcApplication3.Models;

    namespace MvcApplication3.Controllers
    {
        public class TestController : Controller
        {
            public ActionResult Index()
            {
                return View();
            }

            public ActionResult Create()
            {
                Sample model = new Sample();
                return View(model);
            }

            [HttpPost]
            public ActionResult Create(Sample model)
            {
                if (!ModelState.IsValid)
                {
                    return View();
                }
                return RedirectToAction("Display");
            }

            public ActionResult Display()
            {
                Sample model = new Sample();
                model.Age = 10;
                model.CardNumber = "1324234";
                model.Email = "[email protected]";
                model.Title = "hahah";
                return View(model);
            }
        }
    }

    Model:

    using System.ComponentModel.DataAnnotations;

    namespace MvcApplication3.Models
    {
        public class Sample
        {
            [Required]
            public string Title { get; set; }

            [Required]
            public string Email { get; set; }

            [Required]
            [Range(4, 120, ErrorMessage = "Oi! Common!")]
            public short Age { get; set; }

            [Required]
            public string CardNumber { get; set; }
        }
    }

    And 3 views. Create:

    @model MvcApplication3.Models.Sample
    @{
        ViewBag.Title = "Create";
        Layout = "~/Views/Shared/_Layout.cshtml";
    }
    <h2>Create</h2>
    <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>
    @*@{ Html.EnableClientValidation(); }*@
    @using (Html.BeginForm())
    {
        @Html.ValidationSummary(false)
        <fieldset>
            <legend>Sample</legend>
            <div class="editor-label">@Html.LabelFor(model => model.Title)</div>
            <div class="editor-field">@Html.EditorFor(model => model.Title) @Html.ValidationMessageFor(model => model.Title)</div>
            <div class="editor-label">@Html.LabelFor(model => model.Email)</div>
            <div class="editor-field">@Html.EditorFor(model => model.Email) @Html.ValidationMessageFor(model => model.Email)</div>
            <div class="editor-label">@Html.LabelFor(model => model.Age)</div>
            <div class="editor-field">@Html.EditorFor(model => model.Age) @Html.ValidationMessageFor(model => model.Age)</div>
            <div class="editor-label">@Html.LabelFor(model => model.CardNumber)</div>
            <div class="editor-field">@Html.EditorFor(model => model.CardNumber) @Html.ValidationMessageFor(model => model.CardNumber)</div>
            <p><input type="submit" value="Create" /></p>
        </fieldset>
        @*<fieldset>
            @Html.EditorForModel()
            <p><input type="submit" value="Create" /></p>
        </fieldset>*@
    }
    <div>
        @Html.ActionLink("Back to List", "Index")
    </div>

    Display:

    @model MvcApplication3.Models.Sample
    @{
        ViewBag.Title = "Display";
        Layout = "~/Views/Shared/_Layout.cshtml";
    }
    <h2>Display</h2>
    <fieldset>
        <legend>Sample</legend>
        <div class="display-label">Title</div>
        <div class="display-field">@Model.Title</div>
        <div class="display-label">Email</div>
        <div class="display-field">@Model.Email</div>
        <div class="display-label">Age</div>
        <div class="display-field">@Model.Age</div>
        <div class="display-label">CardNumber</div>
        <div class="display-field">@Model.CardNumber</div>
    </fieldset>
    <p>
        @Html.ActionLink("Back to List", "Index")
    </p>

    Index:

    @{
        ViewBag.Title = "Index";
        Layout = "~/Views/Shared/_Layout.cshtml";
    }
    <h2>Index</h2>
    <p>@Html.ActionLink("Create", "Create")</p>
    <p>@Html.ActionLink("Display", "Display")</p>

    Everything is default here - Create controller, AddView from the controller action with the model specified, with the proper scaffold template, and using the provided layout in the sample application. When I go to /Test/Create, client validation in most cases works only for the Title and Age fields; after clicking Create it works for all fields (Create does not go to the server). However, in some cases (after a build) Title validation is not working and Email is, or CardNumber is, or Title and CardNumber are but Email is not. But all validation never works before clicking Create. I've tried creating the form with Html.EditorForModel as well as enforcing client validation just before BeginForm:

    @{ Html.EnableClientValidation(); }

    I'm providing the source code for this sample on Dropbox - as maybe our dev environment is broken :/ I've done tests on IE 8 and Chrome 10 beta. Just in case, validation scripts are enabled in web.config:

    <appSettings>
        <add key="ClientValidationEnabled" value="true"/>
        <add key="UnobtrusiveJavaScriptEnabled" value="true"/>
    </appSettings>

    So my questions are: Is there a way to ensure that client validation will work as it is supposed to and not intermittently? Is this desired behavior and am I missing something in configuration/implementation?

    Read the article

  • Backbone.js Adding Model to Collection Issue

    - by jtmgdevelopment
    I am building a test application in Backbone.js (my first app using Backbone). The app goes like this:

    1. Load data from the server ("Plans")
    2. Build a list of plans and show it on screen
    3. There is a button to add a new plan
    4. Once the new plan is added, add it to the collection (do not save it to the server for now)
    5. Redirect to the index page and show the new collection (including the plan you just added)

    My issue is with item 5. When I save a plan, I add the model to the collection, then redirect to the initial view. At this point, I fetch data from the server. When I fetch data from the server, this overwrites my collection and my added model is gone. How can I prevent this from happening? I have found a way to do this, but it is definitely not the correct way at all. Below you will find my code examples for this. Thanks for the help.

    PlansListView View:

    var PlansListView = Backbone.View.extend({
        tagName: 'ul',
        initialize: function() {
            _.bindAll(this, 'render', 'close');
            // reset the view if the collection is reset
            this.collection.bind('reset', this.render, this);
        },
        render: function() {
            _.each(this.collection.models, function(plan) {
                $(this.el).append(new PlansListItemView({ model: plan }).render().el);
            }, this);
            return this;
        },
        close: function() {
            $(this.el).unbind();
            $(this.el).remove();
        }
    }); // end

    NewPlanView Save Method:

    var NewPlanView = Backbone.View.extend({
        tagName: 'section',
        template: _.template($('#plan-form-template').html()),
        events: {
            'click button.save': 'savePlan',
            'click button.cancel': 'cancel'
        },
        intialize: function() {
            _.bindAll(this, 'render', 'save', 'cancel');
        },
        render: function() {
            $('#container').append($(this.el).html(this.template(this.model.toJSON())));
            return this;
        },
        savePlan: function(event) {
            this.model.set({
                name: 'bad plan',
                date: 'friday',
                desc: 'blah',
                id: Math.floor(Math.random() * 11),
                total_stops: '2'
            });
            this.collection.add(this.model);
            app.navigate('', true);
            event.preventDefault();
        },
        cancel: function() {}
    });

    Router (default method):

    index: function() {
        this.container.empty();
        var self = this;
        // This is a hack to get this to work.
        // On default page load, fetch all plans from the server.
        // If the page has loaded (this.plans is defined), set the updated plans collection to the view.
        // There has to be a better way!!
        if (!this.plans) {
            this.plans = new Plans();
            this.plans.fetch({
                success: function() {
                    self.plansListView = new PlansListView({ collection: self.plans });
                    $('#container').append(self.plansListView.render().el);
                    if (self.requestedID) self.planDetails(self.requestedID);
                }
            });
        } else {
            this.plansListView = new PlansListView({ collection: this.plans });
            $('#container').append(self.plansListView.render().el);
            if (this.requestedID) self.planDetails(this.requestedID);
        }
    },

    New Plan Route:

    newPlan: function() {
        var plan = new Plan({ name: 'Cool Plan', date: 'Monday', desc: 'This is a great app' });
        this.newPlan = new NewPlanView({ model: plan, collection: this.plans });
        this.newPlan.render();
    }

    FULL CODE:

    (function($) {
        var Plan = Backbone.Model.extend({
            defaults: {
                name: '',
                date: '',
                desc: ''
            }
        });

        var Plans = Backbone.Collection.extend({
            model: Plan,
            url: '/data/'
        });

        $(document).ready(function(e) {
            var PlansListView = Backbone.View.extend({
                tagName: 'ul',
                initialize: function() {
                    _.bindAll(this, 'render', 'close');
                    // reset the view if the collection is reset
                    this.collection.bind('reset', this.render, this);
                },
                render: function() {
                    _.each(this.collection.models, function(plan) {
                        $(this.el).append(new PlansListItemView({ model: plan }).render().el);
                    }, this);
                    return this;
                },
                close: function() {
                    $(this.el).unbind();
                    $(this.el).remove();
                }
            }); // end

            var PlansListItemView = Backbone.View.extend({
                tagName: 'li',
                template: _.template($('#list-item-template').html()),
                events: {
                    'click a': 'listInfo'
                },
                render: function() {
                    $(this.el).html(this.template(this.model.toJSON()));
                    return this;
                },
                listInfo: function(event) {}
            }); // end

            var PlanView = Backbone.View.extend({
                tagName: 'section',
                events: {
                    'click button.add-plan': 'newPlan'
                },
                template: _.template($('#plan-template').html()),
                initialize: function() {
                    _.bindAll(this, 'render', 'close', 'newPlan');
                },
                render: function() {
                    $('#container').append($(this.el).html(this.template(this.model.toJSON())));
                    return this;
                },
                newPlan: function(event) {
                    app.navigate('newplan', true);
                },
                close: function() {
                    $(this.el).unbind();
                    $(this.el).remove();
                }
            }); // end

            var NewPlanView = Backbone.View.extend({
                tagName: 'section',
                template: _.template($('#plan-form-template').html()),
                events: {
                    'click button.save': 'savePlan',
                    'click button.cancel': 'cancel'
                },
                intialize: function() {
                    _.bindAll(this, 'render', 'save', 'cancel');
                },
                render: function() {
                    $('#container').append($(this.el).html(this.template(this.model.toJSON())));
                    return this;
                },
                savePlan: function(event) {
                    this.model.set({
                        name: 'bad plan',
                        date: 'friday',
                        desc: 'blah',
                        id: Math.floor(Math.random() * 11),
                        total_stops: '2'
                    });
                    this.collection.add(this.model);
                    app.navigate('', true);
                    event.preventDefault();
                },
                cancel: function() {}
            });

            var AppRouter = Backbone.Router.extend({
                container: $('#container'),
                routes: {
                    '': 'index',
                    'viewplan/:id': 'planDetails',
                    'newplan': 'newPlan'
                },
                initialize: function() {},
                index: function() {
                    this.container.empty();
                    var self = this;
                    // This is a hack to get this to work.
                    // On default page load, fetch all plans from the server.
                    // If the page has loaded (this.plans is defined), set the updated plans collection to the view.
                    // There has to be a better way!!
                    if (!this.plans) {
                        this.plans = new Plans();
                        this.plans.fetch({
                            success: function() {
                                self.plansListView = new PlansListView({ collection: self.plans });
                                $('#container').append(self.plansListView.render().el);
                                if (self.requestedID) self.planDetails(self.requestedID);
                            }
                        });
                    } else {
                        this.plansListView = new PlansListView({ collection: this.plans });
                        $('#container').append(self.plansListView.render().el);
                        if (this.requestedID) self.planDetails(this.requestedID);
                    }
                },
                planDetails: function(id) {
                    if (this.plans) {
                        this.plansListView.close();
                        this.plan = this.plans.get(id);
                        if (this.planView) this.planView.close();
                        this.planView = new PlanView({ model: this.plan });
                        this.planView.render();
                    } else {
                        this.requestedID = id;
                        this.index();
                    }
                    if (!this.plans) this.index();
                },
                newPlan: function() {
                    var plan = new Plan({ name: 'Cool Plan', date: 'Monday', desc: 'This is a great app' });
                    this.newPlan = new NewPlanView({ model: plan, collection: this.plans });
                    this.newPlan.render();
                }
            });

            var app = new AppRouter();
            Backbone.history.start();
        });
    })(jQuery);

    Read the article

  • Am I just not understanding TDD unit testing (Asp.Net MVC project)?

    - by KallDrexx
    I am trying to figure out how to correctly and efficiently unit test my ASP.NET MVC project. When I started on this project I bought Pro ASP.NET MVC, and with that book I learned about TDD and unit testing. After seeing the examples, and given that I work as a software engineer in QA at my current company, I was amazed at how awesome TDD seemed to be. So I started working on my project and went gung-ho writing unit tests for my database layer, business layer, and controllers. Everything got a unit test prior to implementation. At first I thought it was awesome, but then things started to go downhill. Here are the issues I started encountering:

    I ended up writing application code in order to make it possible for unit tests to be performed. I don't mean this in a good way, as in my code was broken and I had to fix it so the unit tests pass. I mean that abstracting out the database to a mock database is impossible due to the use of LINQ for data retrieval (using the generic repository pattern). The reason is that with LINQ to SQL or LINQ to Entities you can do joins just by doing:

    var objs = from p in _container.Projects
               select p.Objects;

    However, if you mock the database layer out, in order to have that LINQ pass the unit test you must change it to:

    var objs = from p in _container.Projects
               join o in _container.Objects on p.Id equals o.ProjectId
               select o;

    Not only does this mean you are changing your application logic just so you can unit test it, but you are making your code less efficient for the sole purpose of testability, and getting rid of a lot of the advantages of using an ORM in the first place. Furthermore, since a lot of the IDs for my models are database generated, I had to write additional code to handle the non-database tests, since IDs were never generated and I still had to handle those cases for the unit tests to pass, yet they would never occur in real scenarios. Thus I ended up throwing out my database unit testing.

    Writing unit tests for controllers was easy as long as I was returning views. However, the major part of my application (and the one that would benefit most from unit testing) is a complicated AJAX web application. For various reasons I decided to change the app from returning views to returning JSON with the data I needed. After this my unit tests became extremely painful to write, as I have not found any good way to write unit tests for non-trivial JSON. After pounding my head and wasting a ton of time trying to find a good way to unit test the JSON, I gave up and deleted all of my controller unit tests (all controller actions are focused on this part of the app so far).

    So finally I was left with testing the service layer (BLL). Right now I am using EF4; however, I had this issue with LINQ to SQL as well. I chose to do the EF4 model-first approach because, to me, it makes sense to do it that way (define my business objects and let the framework figure out how to translate them into the SQL backend). This was fine at the beginning, but now it is becoming cumbersome due to relationships. For example, say I have Project, User, and Object entities. An Object must be associated with a Project, and a Project must be associated with a User. These are not only database-specific rules, they are my business rules as well. However, say I want a unit test that checks I am able to save an Object (for a simple example). I now have to do the following code just to make sure the save worked:

    User usr = new User { Name = "Me" };
    _userService.SaveUser(usr);

    Project prj = new Project { Name = "Test Project", Owner = usr };
    _projectService.SaveProject(prj);

    Object obj = new Object { Name = "Test Object", Project = prj };
    _objectService.SaveObject(obj);

    // Perform verifications

    There are several issues with having to do all this just to perform one unit test. For starters, if I add a new dependency, such as all projects must belong to a category, I must go into EVERY single unit test that references a project, add code to save the category, then add code to add the category to the project. This can be a HUGE effort down the road for a very simple business logic change, and yet almost none of the unit tests I will be modifying for this requirement are actually meant to test that feature/requirement. If I then add verifications to my SaveProject method, so that projects cannot be saved unless they have a name with at least 5 characters, I then have to go through every Object and Project unit test to make sure that the new requirement doesn't make any unrelated unit tests fail. If there is an issue in the UserService.SaveUser() method, it will cause all Project and Object unit tests to fail, and the cause won't be immediately noticeable without digging through the exceptions.

    Thus I have removed all service layer unit tests from my project. I could go on and on, but so far I have not seen any way for unit testing to actually help me and not get in my way. I can see specific cases where I can, and probably will, implement unit tests, such as making sure my data verification methods work correctly, but those cases are few and far between. Some of my issues could probably be mitigated, but not without adding extra layers to my application, and thus making more points of failure just so I can unit test. Thus I have no unit tests left in my code. Luckily I heavily use source control, so I can get them back if I need to, but I just don't see the point. Everywhere on the internet I see people talking about how great TDD unit tests are, and I'm not just talking about the fanatical people. The few people who dismiss TDD/unit tests give bad arguments, claiming they are more efficient debugging by hand through the IDE, or that their coding skills are so amazing that they don't need it. I recognize that both of those arguments are utter bollocks, especially for a project that needs to be maintainable by multiple developers, but any valid rebuttals to TDD seem to be few and far between. So the point of this post is to ask: am I just not understanding how to use TDD and automatic unit tests?
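    For what it's worth, one common way to tame the setup duplication described above is a test data builder that centralizes construction of valid object graphs. The sketch below assumes hypothetical IUserService/IProjectService abstractions matching the services used in the excerpt; under that assumption, a newly required dependency (like a category) gets added in exactly one place:

    // Sketch only: the entities and service interfaces are assumed from the
    // question. The builder is the single place that knows how to produce a
    // valid, saved Project with all of its required associations.
    public static class TestData
    {
        public static User SavedUser(IUserService users, string name = "Me")
        {
            var usr = new User { Name = name };
            users.SaveUser(usr);
            return usr;
        }

        public static Project SavedProject(IUserService users, IProjectService projects)
        {
            // If projects later require a Category, it is added here once,
            // not in every unit test that happens to touch a Project.
            var prj = new Project { Name = "Test Project", Owner = SavedUser(users) };
            projects.SaveProject(prj);
            return prj;
        }
    }

    A test that only cares about saving an Object then shrinks to one TestData.SavedProject(...) call plus the code under test.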

    Read the article

  • Identifying Data Model Changes Between EBS 12.1.3 and Prior EBS Releases

    - by Steven Chan
    The EBS 12.1.3 Release Content Document (RCD, Note 561580.1) summarizes the latest functional and technology stack-related updates in a specific release. The E-Business Suite Electronic Technical Reference Manual (eTRM) summarizes the database objects in a specific EBS release. Those are useful references, but sometimes you need to find out which database objects have changed between one EBS release and another. This kind of information about the differences or deltas between two releases is useful if you have customized or extended your EBS instance and plan to upgrade to EBS 12.1.3. Where can you find that information?

    Answering that question has just gotten a lot easier. You can now use a new EBS Data Model Comparison Report tool: EBS Data Model Comparison Report Overview (Note 1290886.1). This new tool lists the database object definition changes between the following source and target EBS releases:

    EBS 11.5.10.2 and EBS 12.1.3
    EBS 12.0.4 and EBS 12.1.3
    EBS 12.1.1 and EBS 12.1.3
    EBS 12.1.2 and EBS 12.1.3

    For example, here's part of the report comparing Bill of Materials changes between 11.5.10.2 and 12.1.3:

    Read the article

  • The entity type String is not part of the model for the current context error [migrated]

    - by Michael V
    I am getting the following error in my controller after the view submits the collection:

    The entity type String is not part of the model for the current context.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.InvalidOperationException: The entity type String is not part of the model for the current context.

    Source Error:
    Line 51: foreach (var survey in mysurveys)
    Line 52: {
    Line 53:     db.Entry(survey).State = EntityState.Modified;
    Line 54:
    Line 55:     // db.Entry(survey).State = EntityState.Modified;

    Here is the code:

    [HttpPost]
    public ActionResult UpdateTest(FormCollection mysurveys)
    {
        System.Diagnostics.Debug.WriteLine("iam in test post" + mysurveys.Count);
        foreach (var survey in mysurveys)
        {
            db.Entry(survey).State = EntityState.Modified;
        }
        db.SaveChanges();
        return View(mysurveys);
    }

    Similar code with one record only (no foreach) works fine.
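    The exception arises because iterating a FormCollection yields string keys rather than entities, so db.Entry() is handed a String. A minimal sketch of one possible fix - assuming a Survey entity type and default model binding to a list, neither of which appears in the excerpt - would be:

    // Hypothetical fix: bind the posted fields to Survey entities so that
    // db.Entry() receives an entity known to the context, not a string key.
    [HttpPost]
    public ActionResult UpdateTest(List<Survey> mysurveys)
    {
        foreach (var survey in mysurveys)
        {
            db.Entry(survey).State = EntityState.Modified;
        }
        db.SaveChanges();
        return View(mysurveys);
    }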

    Read the article

  • Database model for keeping track of likes/shares/comments on blog posts over time

    - by gage
    My goal is to keep track of the popular posts on different blog sites based on social network activity at any given time. The goal is not to simply get the most popular posts right now, but instead to find posts that are popular compared to other posts on the same blog. For example, I follow a tech blog, a sports blog, and a gossip blog. The tech blog gets waaay more readership than the other two blogs, so in raw numbers every post on the tech blog will always outnumber views on the other two. So let's say the average tech blog post gets 500 Facebook likes and the other two get an average of 50 likes per post. Then when there is a sports blog post that has 200 fb likes and a gossip blog post with 300, while the tech blog posts today have 500 likes, I want to highlight the sports and gossip blog posts (more likes than average, versus the tech blog with a greater raw number of likes that is just average for that blog).

    The approach I am thinking of taking is to make an entry in a database for each blog post. Every x minutes (say every 15 minutes) I will check how many likes/shares/comments an entry has received on all the social networks (Facebook, Twitter, Google+, LinkedIn). So over time there will be a history of likes for each blog post, i.e. post 1234:

    after 15 min: 10 fb likes, 4 tweets, 6 g+
    after 30 min: 15 fb likes, 15 tweets, 10 g+
    ...
    after 48 hours: 200 fb likes, 25 tweets, 15 g+

    By keeping a history like this for each blog post I can know the average number of likes/shares/tweets at any given time interval. So for example, if the average number of fb likes for all blog posts 48 hrs after posting is 50, and a particular post has 200, I can mark that as a popular post and feature/highlight it. A consideration in the design is to be able to easily query the values (likes/shares) for a specific time frame, i.e. fb likes after 30 min or tweets after 24 hrs, in order to compute the averages to compare against (or should averages be stored in their own table?). If this approach is flawed or could use improvement please let me know, but it is not my main question.

    My main question is: what should a database schema for storing this info look like? Assuming that the above approach is taken, I am trying to figure out what a database schema for storing the likes over time would look like. I am brand new to databases; in doing some basic reading I see that it is advisable to make a 3NF database. I have come up with the following possible schema.

    Schema 1

    DB Popular Posts
    Table: Post
      post_id (primary key (pk))
      url
      title
    Table: Social Activity
      activity_id (pk)
      url (fk)
      type (i.e. facebook, twitter, g+)
      value
      timestamp

    This was my initial instinct (based on my very limited db knowledge). As far as I understand, this schema would be 3NF? I searched for designs of similar database models and found this question on Stack Overflow: http://stackoverflow.com/questions/11216080/data-structure-for-storing-height-and-weight-etc-over-time-for-multiple-users. The scenario in that question is similar (recording weight/height of users over time). Taking the accepted answer for that question and applying it to my model results in something like:

    Schema 2 (same as above, but break down the social activity into 2 tables)

    DB Popular Posts
    Table: Post
      post_id (pk)
      url
      title
    Table: Social Measurement
      measurement_id (pk)
      post_id (fk)
      timestamp
    Table: Social stat
      stat_id (pk)
      measurement_id (fk)
      type (i.e. facebook, twitter, g+)
      value

    The advantage I see in schema 2 is that I will likely want to access all the values for a given time, i.e. when making a measurement 30 min after a post is published, I will simultaneously check the number of fb likes, fb shares, fb comments, tweets, g+, LinkedIn. So with this schema it may be easier to get all stats for a measurement_id corresponding to a certain time, i.e. all social stats for post 1234 at time x.

    Another thought I had: since it doesn't make sense to compare the number of fb likes with the number of tweets or g+ shares, maybe it makes sense to separate each social measurement into its own table?

    Schema 3

    DB Popular Posts
    Table: Post
      post_id (pk)
      url
      title
    Table: fb_likes
      fb_like_id (pk)
      post_id (fk)
      timestamp
      value
    Table: fb_shares
      fb_shares_id (pk)
      post_id (fk)
      timestamp
      value
    Table: tweets
      tweets_id (pk)
      post_id (fk)
      timestamp
      value
    Table: google_plus
      google_plus_id (pk)
      post_id (fk)
      timestamp
      value

    As you can see, I am generally lost/unsure of what approach to take. I'm sure this is a typical type of database problem (storing measurements over time, e.g. temperature statistics) that must have a common solution. Is there a design pattern/model for this, and does it have a name? I tried searching for "database periodic data collection" or "database measurements over time" but didn't find anything specific. What would be an appropriate model to solve the needs of this problem?

    Read the article

  • Token based Authentication for WCF HTTP/REST Services: Authorization

    - by Your DisplayName here!
    In the previous post I showed how token based authentication can be implemented for WCF HTTP based services. Authentication is the process of finding out who the user is - this includes anonymous users. Then it is up to the service to decide under which circumstances the client has access to the service as a whole or to individual operations. This is called authorization. By default, my framework does not allow anonymous users and will deny access right in the service authorization manager. You can however turn anonymous access on - that means, technically, that instead of denying access, an anonymous principal is placed on Thread.CurrentPrincipal. You can flip that switch in the configuration class that you can pass into the service host/factory:

    var configuration = new WebTokenWebServiceHostConfiguration
    {
        AllowAnonymousAccess = true
    };

    But this is not enough; in addition you also need to decorate the individual operations to allow anonymous access as well, e.g.:

    [AllowAnonymousAccess]
    public string GetInfo()
    {
        ...
    }

    Inside these operations you might have an authenticated or an anonymous principal on Thread.CurrentPrincipal, and it is up to your code to decide what to do. Side note: being a security guy, I like this opt-in approach to anonymous access much better than all those opt-out approaches out there (like the Authorize attribute - or this).

    Claims-based Authorization

    Since there is a ClaimsPrincipal available, you can use the standard WIF claims authorization manager infrastructure - either declaratively via ClaimsPrincipalPermission or programmatically (see also here):

    [ClaimsPrincipalPermission(SecurityAction.Demand,
        Resource = "Claims",
        Operation = "View")]
    public ViewClaims GetClientIdentity()
    {
        return new ServiceLogic().GetClaims();
    }

    In addition, you can also turn off per-request authorization (see here for background) via the config and just use the "domain specific" instrumentation. While the code is not 100% done, you can download the current solution here. HTH (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)

    Read the article

  • How to implement friction in a physics engine based on "Advanced Character Physics"

    - by paldepind
    I have implemented a physics engine based on the concepts in the classic text Advanced Character Physics by Thomas Jakobsen. Friction is only discussed very briefly in the article, and Jakobsen himself notes how "other and better friction models than this could and should be implemented." Generally, how could one implement a believable friction model on top of the concepts from the mentioned article? And how could the resulting friction be translated into rotation on a circle? I do not want this question to be about my specific implementation, but about how to combine Jakobsen's ideas with a great friction system more generally. But here is a live demo showing the current state of my engine, which does not handle friction in any way: http://jsfiddle.net/Z7ECB/embedded/result/

    Below is a picture showing an example of how collision detection could work in an engine based on the paper. In Verlet integration the current and previous position are always stored, and based on these a new position is calculated. In every frame I calculate the distance between the circles and the lines. If this distance is less than a circle's radius, a collision has occurred and the circle is projected perpendicularly out of the offending line according to the size of the overlap (offset in the picture). Velocity is implicit in Verlet integration, so changing the position also changes the velocity. What I need to do now is to somehow determine the amount of friction on the circle and move it backwards parallel to the line in order to reduce its speed.
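    A common starting point in Verlet-based engines of this kind is Coulomb-style friction applied at the contact: split the implicit velocity (position minus previous position) into normal and tangential components and damp only the tangential part. A minimal sketch, assuming a Jakobsen-style particle with Position and PreviousPosition fields and a Vector2 type with the usual operators (the engine's actual types aren't shown in the question):

    // "normal" is the unit contact normal from the projection step above.
    void ApplyContactFriction(Particle p, Vector2 normal, float friction)
    {
        Vector2 velocity = p.Position - p.PreviousPosition;   // implicit Verlet velocity
        Vector2 vNormal  = Vector2.Dot(velocity, normal) * normal;
        Vector2 vTangent = velocity - vNormal;

        // Damp only the tangential component: friction = 0 is ice, 1 is no slip.
        // Rewriting the previous position is how the implicit velocity changes.
        p.PreviousPosition = p.Position - (vNormal + vTangent * (1f - friction));

        // For rotation on a circle of radius r, the removed tangential slip can
        // be converted to angular displacement via v = omega * r:
        // p.Angle += tangentialSlip / p.Radius (signed by the tangent direction).
    }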

    Read the article

  • T-4 Templates for ASP.NET Web Form Databound Control Friendly Logical Layers

    - by joycsharp
    I just released an open source project at CodePlex, which includes a set of T-4 templates to enable you to build logical layers (i.e. DAL/BLL) with just a few clicks! The logical layers implemented here are based on Entity Framework 4.0, ASP.NET Web Form databound control friendly, and fully unit testable. In this open source project you will get Entity Framework 4.0 based T-4 templates for the following types of logical layers:

    Data Access Layer: Entity Framework 4.0 provides an excellent ORM data access layer. It also includes support for T-4 templates as a built-in code generation strategy in Visual Studio 2010, where we can customize the default structure of the data access layer based on Entity Framework. The default structure of the data access layer has been enhanced to support mock testing in the Entity Framework 4.0 object model.

    Business Logic Layer: an ASP.NET Web Form databound control friendly business logic layer, which will enable you, in a few clicks, to quickly build databound web applications on top of ASP.NET Web Forms and Entity Framework 4.0, with great support for mock testing.

    Download it to make your web development productive. Enjoy!

    Read the article

  • Announcing SO-Aware Test Workbench

    - by gsusx
    Yesterday was a big day for Tellago Studios. After a few months of heads-down work, we announced the release of the SO-Aware Test Workbench, a tool which brings sophisticated performance testing and test visualization capabilities to the WCF world. This work has been the result of the feedback received from many of our SO-Aware and Tellago customers about how to improve WCF testing. More importantly, with the SO-Aware Test Workbench we are trying to address what has been one of the biggest challenges...(read more)

    Read the article

  • Generic and type safe I/O model in any language

    - by Eduardo León
    I am looking for an I/O model, in any programming language, that is generic and type safe. By genericity, I mean there should not be separate functions for performing the same operations on different devices (read_file, read_socket, read_terminal). Instead, a single read operation works on all read-able devices, a single write operation works on all write-able devices, and so on. By type safety, I mean operations that do not make sense should not even be expressible in the first place. Using the read operation on a non-read-able device ought to cause a type error at compile time, and similarly for using the write operation on a non-write-able device, and so on. Is there any generic and type safe I/O model?
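    One sketch of what such a model can look like - using C# interfaces purely as an illustration, with hypothetical device types - is to encode capabilities in the type system, so read is only defined for values whose type proves they are readable:

    // Capabilities as interfaces (names are made up for this sketch).
    // A device only implements the operations it supports, so an illegal
    // operation is a compile-time type error rather than a runtime failure.
    interface IReadable { int Read(byte[] buffer); }
    interface IWritable { void Write(byte[] buffer); }

    class FileDevice : IReadable, IWritable
    {
        public int Read(byte[] buffer) { return 0; /* elided */ }
        public void Write(byte[] buffer) { /* elided */ }
    }

    class SensorDevice : IReadable   // a read-only device
    {
        public int Read(byte[] buffer) { return 0; /* elided */ }
    }

    static class IO
    {
        // One generic read/write pair covers every device with that capability.
        public static int Read<T>(T device, byte[] buffer) where T : IReadable
        { return device.Read(buffer); }

        public static void Write<T>(T device, byte[] buffer) where T : IWritable
        { device.Write(buffer); }
    }

    // IO.Read(new SensorDevice(), buffer);   // compiles
    // IO.Write(new SensorDevice(), buffer);  // compile-time error: SensorDevice is not IWritable

    The same idea can be expressed with type classes or phantom types in other languages; the essential point is that readability and writability live in the type, not in runtime checks.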

    Read the article

  • Examples of permission-based authorization systems in .Net?

    - by Rachel
    I'm trying to figure out how to do roles/permissions in our application, and I am wondering if anyone knows of a good place to get a list of different permission-based authorization systems (preferably with code samples) and perhaps a list of pros/cons for each method. I've seen examples using simple dictionaries, custom attributes, claims-based authorization, and custom frameworks, but I can't find a simple explanation of when to use one over another and what the pros/cons of each method are. (I'm sure there are other ways than the ones I've listed...) I have never done anything complex with permissions/authorization before, so all of this seems a little overwhelming and I'm having trouble figuring out what information is useful to me and what isn't. What I DO know is that this is for a Windows environment using C#/WPF and WCF services. Some permission checks are done on the WCF service and some on the client. Some are business rules, some are authorization checks, and others are UI-related (such as what forms a user can see). They can be very generic, like boolean or numeric values, or they can be more complex, such as a range of values or a list of database items to be checked/unchecked. Permissions can be set at the group level, user level, branch level, or a custom level, so I do not want to use role-based authorization. Users can be in multiple groups, and users with the appropriate authorization are in charge of creating/maintaining these groups. It is not uncommon for new groups to be created, so they can't be hard-coded.
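    As a tiny illustration of the permission-based (rather than role-based) style described above - all names here are hypothetical, and no particular framework is implied - the checks can be phrased against named permission keys whose effective values are resolved through the user's groups, branch, and individual grants:

    // Purely hypothetical sketch: permissions are named keys with values,
    // resolved across user/group/branch levels by some store, rather than
    // fixed roles baked into the code.
    public interface IPermissionStore
    {
        // Returns the most specific value granted to this user for the key.
        bool TryGetValue(string userId, string permissionKey, out string value);
    }

    public static class PermissionChecks
    {
        public static bool CanSeeForm(IPermissionStore store, string userId, string formName)
        {
            string value;
            return store.TryGetValue(userId, "ui.form." + formName + ".visible", out value)
                && value == "true";
        }

        public static bool WithinLimit(IPermissionStore store, string userId, string key, int actual)
        {
            string value; int max;
            return store.TryGetValue(userId, key, out value)
                && int.TryParse(value, out max)
                && actual <= max;   // e.g. a numeric-range style permission
        }
    }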

    Read the article

  • Podcast: Oracle Introduces Oracle Communications Data Model

    - by kimberly.billings
    I recently sat down with Tony Velcich from Oracle's Communications Product Management team to learn more about the new Oracle Communications Data Model (OCDM). OCDM is a standards-based data model for Oracle Database that helps communications service providers jumpstart data warehouse and business intelligence initiatives to realize a fast return on investment by reducing implementation time and costs. Listen to the podcast.

    Read the article

  • Calculating instantaneous speed and acceleration for a simple Car software model

    - by Dylan
    I am trying to model a speedometer and tachometer for a simple software model of a car dashboard. I want this to be relatively simple, so for my purposes I won't likely simulate variables such as drag (or I'll assume that drag is a constant). But I would like to know the general formulas for:

    1) Calculating the RPM, depending on the position of a graphical slider representing the accelerator.
    2) Using this information to find the instantaneous speed (or the magnitude of the instantaneous velocity?).

    I am not sure, in the case of 2), what other independent variables I need to consider. Do I need to consider the frequency of rotation of the wheels (assuming a fixed radius), in addition to the RPM? If anyone can give me a rough explanation plus the relevant formulas, or alternatively direct me to other trusted resources online (I have had a hard time sifting through info and determining its accuracy), it would be much appreciated.
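    For a rough kinematic sketch of both questions (hedged: real engines are load-dependent, and every constant below is invented for illustration): RPM can simply chase a throttle-determined target, and with a fixed gear ratio and no wheel slip, wheel rotation is fully determined by the RPM and gearing rather than being an extra independent variable, so road speed follows from wheel RPM times wheel circumference:

    // Minimal sketch under strong simplifying assumptions (no drag, no clutch
    // slip, one fixed gear); all constants are made up.
    class CarModel
    {
        const double IdleRpm = 800.0, MaxRpm = 7000.0;
        const double GearRatio = 3.5;        // assumed single gear
        const double FinalDrive = 4.1;       // assumed differential ratio
        const double WheelRadius = 0.3;      // metres

        public double Rpm { get; private set; }

        public CarModel() { Rpm = IdleRpm; }

        // throttle: accelerator slider position in [0, 1]; dt: seconds elapsed.
        public void Update(double throttle, double dt)
        {
            // First-order response: RPM chases the throttle-determined target.
            double target = IdleRpm + throttle * (MaxRpm - IdleRpm);
            Rpm += (target - Rpm) * Math.Min(1.0, 3.0 * dt);
        }

        // Wheel RPM = engine RPM / (gear * final drive); each wheel revolution
        // covers 2*pi*r metres, so v [m/s] = wheelRpm * 2*pi*r / 60.
        public double SpeedMetresPerSecond
        {
            get { return Rpm / (GearRatio * FinalDrive) * 2.0 * Math.PI * WheelRadius / 60.0; }
        }
    }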

    Read the article

  • MEF, IServiceProvider and Testing Visual Studio Extensions

    - by Daniel Cazzulino
    In the latest and greatest version of Visual Studio, MEF plays a critical role, one that makes extending VS much more fun than it ever was. So typically, you just [Export] something, and then someone [Import]s it and that's it. MEF in all its glory kicks in and gets all your dependencies satisfied. Cool, you say, so let's now import ITextTemplating and have some T4-based codegen going! Ah, if only it were that easy. It turns out that, by default, none of the VS built-in services are exposed to MEF, apparently because there wasn't enough time to analyze the lifetime, initialization, dependencies, etc. of each one before launch, which makes perfect sense. You don't want to blindly export everything just in case. There's also the whole VS package initialization thing, which in this version of VS is not so transparently integrated with the MEF publishing side (i.e. a MEF export from a package can get instantiated before its owning package, and in fact the package can remain unloaded forever while the export continues to be visible to anyone)... Read full article
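    For readers new to the [Export]/[Import] mechanics mentioned above, here is a minimal, self-contained MEF sketch with made-up part names (plain System.ComponentModel.Composition; no VS-specific services are involved):

    using System;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;
    using System.Reflection;

    public interface IGreeter { string Greet(); }

    [Export(typeof(IGreeter))]
    public class HelloGreeter : IGreeter
    {
        public string Greet() { return "Hello from MEF"; }
    }

    public class Consumer
    {
        // MEF satisfies this import from a matching [Export] in the catalog.
        [Import]
        public IGreeter Greeter { get; set; }
    }

    public static class Program
    {
        public static void Main()
        {
            var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
            using (var container = new CompositionContainer(catalog))
            {
                var consumer = new Consumer();
                container.ComposeParts(consumer);   // wires up [Import]s from [Export]s
                Console.WriteLine(consumer.Greeter.Greet());
            }
        }
    }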

    Read the article
