Search Results

Search found 124582 results on 4984 pages for 'net code'.


  • ASP.NET MVC: What is the lifetime of a Controller instance?

    - by Kivin
    I was unable to find any documentation on the MSDN site. Is the lifetime (construction and disposal) of a Controller instance defined anywhere in the ASP.NET MVC specification? I am asking in order to determine whether it is safe to store contextual information in Controller members/properties, or whether using the HttpContext would be more appropriate.
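
    A minimal sketch of the two approaches the question weighs: per-request state in a controller field versus HttpContext.Items. The field approach is only safe if the framework constructs a fresh controller instance per request (the default controller factory does); the class and field names below are hypothetical.

        using System.Web.Mvc;

        public class OrdersController : Controller
        {
            // Option 1: per-request state in an instance field. Safe only if a new
            // controller instance is constructed for every request (the default
            // controller factory behaves this way); never safe in a static field.
            private string _requestTag;

            // Option 2: per-request state in HttpContext.Items, which lives
            // exactly as long as the current request.
            public ActionResult Index()
            {
                _requestTag = "set during this request";
                HttpContext.Items["requestTag"] = "also scoped to this request";
                return View();
            }
        }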

    Read the article

  • ASP.NET MVC Route based on Web Browser/Device (e.g. iPhone)

    - by Alex
    Is it possible, from within ASP.NET MVC, to route to different controllers or actions based on the accessing device/browser? I'm thinking of setting up alternative actions and views for some parts of my website for when it is accessed from an iPhone, to optimize its display and functionality. I don't want to create a completely separate project for the iPhone, though, as the majority of the site is fine on any device. Any ideas on how to do this?
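
    One possible approach (not from the question itself) is a custom route constraint that inspects the User-Agent header, so an iPhone-specific route can sit in front of the general one. The class, route, and controller names here are hypothetical; a minimal sketch:

        using System;
        using System.Web;
        using System.Web.Routing;

        public class IPhoneRouteConstraint : IRouteConstraint
        {
            // Match only when the request appears to come from an iPhone.
            public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                              RouteValueDictionary values, RouteDirection routeDirection)
            {
                string userAgent = httpContext.Request.UserAgent ?? string.Empty;
                return userAgent.IndexOf("iPhone", StringComparison.OrdinalIgnoreCase) >= 0;
            }
        }

    Registered before the default route (inside RegisterRoutes), an iPhone route can then point at alternative actions while everything else falls through to the normal controllers:

        routes.MapRoute(
            "ProductsIPhone",
            "Products/{id}",
            new { controller = "ProductsIPhone", action = "Details" },
            new { device = new IPhoneRouteConstraint() });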

    Read the article

  • EF 6 Code First Many to many With Payload and self referencing many to many

    - by lesley86
    I have a many-to-many relationship where one of the tables also needs a self-referencing many-to-many. Basically, a school has zero or many groups, and a group can belong to zero or many schools. The groups table also contains a parent/child many-to-many with itself, because a group can be a child of another group or have no children, a child can itself have children, and a child can have many parents or none. I created a mapping table with a payload to solve the first many-to-many problem. Code snippet:

        public class School
        {
            public virtual ICollection<SchoolGroupMap> SchoolGroupMaps { get; set; }
        }

        public class SchoolGroup
        {
            public virtual ICollection<SchoolGroupMap> SchoolGroupMaps { get; set; }
        }

        public class SchoolGroupMap
        {
            public virtual School School { get; set; }
            public virtual SchoolGroup SchoolGroup { get; set; }
        }

    I then tried modifying SchoolGroup as follows for the self-referencing many-to-many:

        public class SchoolGroup
        {
            public virtual ICollection<SchoolGroupMap> SchoolGroupMaps { get; set; }
            public virtual ICollection<SchoolGroup> Parents { get; set; }
            public virtual ICollection<SchoolGroup> Children { get; set; }
        }

    I changed the context with HasMany and an automatic mapping table (forgive me, I have been trying so many things today that I do not have the exact code). I received an error saying the properties on the classes must match. Can anyone help, please? I want to create navigation properties on the self-referencing many-to-many. A Seed example would also be appreciated. Regards.
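
    For the pure self-referencing part (Parents/Children with no payload), one approach, not the asker's exact code, is to map it in OnModelCreating with the EF6 fluent API and let EF generate the join table. The table and column names below are made up for illustration:

        using System.Data.Entity;

        public class SchoolContext : DbContext
        {
            // School, SchoolGroup and SchoolGroupMap are the classes shown above.
            // (Assumes each entity also has an int Id key, not shown in the question.)
            public DbSet<School> Schools { get; set; }
            public DbSet<SchoolGroup> SchoolGroups { get; set; }
            public DbSet<SchoolGroupMap> SchoolGroupMaps { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Self-referencing many-to-many without a payload: EF creates the
                // join table itself, so no extra map entity is required here.
                modelBuilder.Entity<SchoolGroup>()
                            .HasMany(g => g.Children)
                            .WithMany(g => g.Parents)
                            .Map(m => m.ToTable("SchoolGroupHierarchy")
                                       .MapLeftKey("ParentId")
                                       .MapRightKey("ChildId"));
            }
        }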

    Read the article

  • .NET Framework 4.0 installation is very slow

    - by Dimitri C.
    On my Windows Vista machine, it takes a full 12 minutes to install the .NET Framework 4.0. a) Is this normal? b) If not, can something be done about it? The reason I'm concerned about the speed is that it slows down the testing of our product installer considerably. Testing an installer is time-consuming already, but this new .NET Framework installer makes it almost unworkable. Detail: I did the test on a clean Vista inside a VirtualBox virtual machine. This setup does not show any performance issues in other situations. I tried both dotNetFx40_Full_x86_x64.exe and dotNetFx40_Client_x86_x64.exe; they both take approximately the same time to install.

    Read the article

  • Attempting to install .NET framework 4 *full* installs *client* instead

    - by msorens
    On a Win7 SP1 32-bit machine, I initially had .NET 4 Client installed and wanted to upgrade to .NET 4 Full. I downloaded the full installer dotNetFx40_Full_x86_x64.exe from Microsoft. After downloading, the file showed 48.11 MB, the correct size for the full package (vs. 41 MB for the client). I ran the installer and it first prompted me to repair or remove the existing package. I chose to remove, so it uninstalled the two parts, 4 Extended and 4 Client. Reboot. I reran the installer and it began installation, showing that it was installing the client. Though this raised an eyebrow, I let it run to completion, thinking it might be reporting the full install in sections. But after completion, I again ended up with 4 Extended and 4 Client installed! Obviously I am missing something; ideas...?

    Read the article

  • Why does my .NET 4 application know that .NET 4 is not installed?

    - by Tergiver
    I developed an application that targeted .NET 4 the other day and XCOPY-installed it to a Windows XP machine. I had told the owner of the machine that they would need to install .NET Framework 4 to run my app and he told me he did (not a reliable source). When I ran the application I was presented with a message box that said this app requires .NET Framework 4, would I like to install it? Clicking the Yes button took me to the Microsoft web site and a few clicks later .NET 4 was installed, and the application successfully launched. Now I normally don't develop applications that target the latest version of .NET, I always target the lowest version I can (what features do I really need?). So this was my first .NET 4 app (and I only targeted 4 because it used a library that did). In the past, XCOPY-installing .NET applications to a machine that didn't have the correct version of .NET installed resulted in the application simply crashing on startup with no useful information presented to the user. Was it built into my app because I targeted .NET X? Was it something already installed on the target machine? I love the feature, I just want to know precisely how to leverage it in the future.

    Read the article

  • EF4 Code First - Many to many relationship issue

    - by Yngve B. Nilsen
    Hi! I'm having some trouble with my EF Code First model when saving a many-to-many relationship. My models:

        public class Event
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public virtual ICollection<Tag> Tags { get; set; }
        }

        public class Tag
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public virtual ICollection<Event> Events { get; set; }
        }

    In my controller, I map one or more TagViewModels to the Tag type and send them down to my service layer for persistence. At this point, inspecting the entities shows that each Tag has both Id and Name (the Id is a hidden field and the name is a textbox in my view). The problem occurs when I try to add the Tags to the Event. Take the following scenario: the Event is already in my database and already has the related tags C# and ASP.NET. I now send the following list of tags to the service layer:

        ID  Name
        1   C#
        2   ASP.NET
        3   EF4

    I add them by first fetching the Event from the DB, so that I have an actual Event from my DbContext, and then I simply call myEvent.Tags.Add for each tag. The problem is that after SaveChanges() my DB contains this set of tags:

        ID  Name
        1   C#
        2   ASP.NET
        3   EF4
        4   C#
        5   ASP.NET

    This happens even though each Tag I save has its Id set when I save it (although I didn't fetch it from the DB).
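
    A common explanation (not stated in the question) is that the context behind the fetched Event does not know the incoming Tag objects, so it treats them all as new rows. A minimal sketch of attaching the tags that already carry a database Id before adding them; the EventsContext type and method name are hypothetical, assuming the context exposes DbSet<Event> Events and DbSet<Tag> Tags:

        using System.Collections.Generic;
        using System.Linq;

        public class EventService
        {
            public void AddTagsToEvent(EventsContext db, int eventId, IEnumerable<Tag> tags)
            {
                var ev = db.Events.Single(e => e.Id == eventId);
                foreach (var tag in tags)
                {
                    // A non-zero Id means the tag already exists in the database:
                    // attach it so EF links to the existing row instead of inserting a copy.
                    if (tag.Id != 0)
                    {
                        db.Tags.Attach(tag);
                    }
                    ev.Tags.Add(tag);
                }
                db.SaveChanges();
            }
        }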

    Read the article

  • Dissecting ASP.NET Routing

    The ASP.NET Routing framework allows developers to decouple the URL of a resource from the physical file on the web server. Specifically, the developer defines routing rules, which map URL patterns to a class or ASP.NET page that generates the content. For instance, you could create a URL pattern of the form Categories/CategoryName and map it to the ASP.NET page ShowCategoryDetails.aspx; the ShowCategoryDetails.aspx page would display details about the category CategoryName. With such a mapping, users could view details about the Beverages category by visiting www.yoursite.com/Categories/Beverages. In short, ASP.NET Routing allows for readable, SEO-friendly URLs. ASP.NET Routing was first introduced in ASP.NET 3.5 SP1 and was enhanced further in ASP.NET 4.0. ASP.NET Routing is a key component of ASP.NET MVC, but can also be used with Web Forms. Two previous articles here on 4Guys showed how to get started using ASP.NET Routing: Using ASP.NET Routing Without ASP.NET MVC and URL Routing in ASP.NET 4.0. This article aims to explore ASP.NET Routing in greater depth. We'll explore how ASP.NET Routing works underneath the covers to decode a URL pattern and hand it off to the appropriate class or ASP.NET page. Read on to learn more!
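
    As a taste of the Web Forms side of this, a route like the Categories example above can be registered with MapPageRoute (available since ASP.NET 4.0); the route name here is arbitrary and the wrapper class is just for illustration:

        using System.Web.Routing;

        public static class RouteConfig
        {
            // Typically called from Application_Start with RouteTable.Routes.
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.MapPageRoute(
                    "category-details",              // route name
                    "Categories/{categoryName}",     // URL pattern
                    "~/ShowCategoryDetails.aspx");   // page that renders the content
            }
        }

    Inside ShowCategoryDetails.aspx, the requested category is then available as Page.RouteData.Values["categoryName"].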

    Read the article

  • WebGrid Helper and Complex Types

    - by imran_ku07
        Introduction:           WebGrid helper makes it very easy to show tabular data. It was originally designed for ASP.NET Web Pages(WebMatrix) to display, edit, page and sort tabular data but you can also use this helper in ASP.NET Web Forms and ASP.NET MVC. When using this helper, sometimes you may run into a problem if you use complex types in this helper. In this article, I will show you how you can use complex types in WebGrid helper.       Description:             Let's say you need to show the employee data and you have the following classes,   public class Employee { public string Name { get; set; } public Address Address { get; set; } public List<string> ContactNumbers { get; set; } } public class Address { public string City { get; set; } }               The Employee class contain a Name, an Address and list of ContactNumbers. You may think that you can easily show City in WebGrid using Address.City, but no. The WebGrid helper will throw an exception at runtime if any Address property is null in the Employee list. Also, you cannot directly show ContactNumbers property. The easiest way to show these properties is to add some additional properties,   public Address NotNullableAddress { get { return Address ?? new Address(); } } public string Contacts { get { return string.Join("; ",ContactNumbers); } }               Now you can easily use these properties in WebGrid. Here is the complete code of this example,  @functions{ public class Employee { public Employee(){ ContactNumbers = new List<string>(); } public string Name { get; set; } public Address Address { get; set; } public List<string> ContactNumbers { get; set; } public Address NotNullableAddress { get { return Address ?? new Address(); } } public string Contacts { get { return string.Join("; ",ContactNumbers); } } } public class Address { public string City { get; set; } } } @{ var myClasses = new List<Employee>{ new Employee { Name="A" , Address = new Address{ City="AA" }, ContactNumbers = new List<string>{"021-216452","9231425651"}}, new Employee { Name="C" , Address = new Address{ City="CC" }}, new Employee { Name="D" , ContactNumbers = new List<string>{"045-14512125","21531212121"}} }; var grid = new WebGrid(source: myClasses); } @grid.GetHtml(columns: grid.Columns( grid.Column("NotNullableAddress.City", header: "City"), grid.Column("Name"), grid.Column("Contacts")))                    Summary:           You can use WebGrid helper to show tabular data in ASP.NET MVC, ASP.NET Web Forms and  ASP.NET Web Pages. Using this helper, you can also show complex types in the grid. In this article, I showed you how you use complex types with WebGrid helper. Hopefully you will enjoy this article too.  

    Read the article

  • Creating packages in code - Workflow

    This is just a quick one prompted by a question on the SSIS Forum: how to programmatically add a precedence constraint (aka workflow) between two tasks. To keep the code simple I've actually used two Sequence containers, which are often used as anchor points for a constraint. Very often this is when you have a task that you wish to execute conditionally based on an expression. If it is the first or only task in the package you need somewhere to anchor the constraint to, so that you can set the expression on it and control the flow of execution. Anyway, back to my code sample; here's a quick screenshot of the finished article. Now for the code, which is actually pretty simple, and hopefully the comments explain exactly what is going on.

        Package package = new Package();
        package.Name = "SequenceWorkflow";

        // Add the two sequence containers to provide anchor points for the constraint.
        // If you use tasks, it follows exactly the same pattern; they all derive from Executable.
        Sequence sequence1 = package.Executables.Add("STOCK:Sequence") as Sequence;
        sequence1.Name = "SEQ Start";
        Sequence sequence2 = package.Executables.Add("STOCK:Sequence") as Sequence;
        sequence2.Name = "SEQ End";

        // Add the precedence constraint. Here we use the package's constraint collection
        // as it hosts the two objects we want to constrain (link).
        // The default constraint is a basic On Success constraint, just like in the designer.
        PrecedenceConstraint constraint = package.PrecedenceConstraints.Add(sequence1, sequence2);

        // Change the settings to use a (dummy) expression only
        constraint.EvalOp = DTSPrecedenceEvalOp.Expression;
        constraint.Expression = "1 == 1";

    The complete code file is available to download below. SequenceWorkflow.cs

    Read the article

  • HTTP Module in detail

    - by Jalpesh P. Vadgama
    I know this post may sound very beginner-level, but I have already posted two topics regarding HTTP handlers and HTTP modules, and this one will explain how an HTTP module works in the system. I have already posted "What is the difference between HttpModule and HttpHandler" here, and likewise an HTTP handler example here, as people are still confused by it. In this post I am going to explain the HTTP module in detail.

    What is an HTTP Module?

    As we all know, when the ASP.NET runtime receives a request it executes a series of extensible HTTP pipeline objects. HTTP modules and HTTP handlers play an important role in extending this HTTP pipeline. HTTP modules are classes that pre- and post-process requests as they pass through the pipeline, so you could call them a kind of filter that does some processing on begin request and end request. To create an HTTP module we have to implement the System.Web.IHttpModule interface in our custom class. IHttpModule contains two methods: Dispose, where you can write your clean-up code, and Init, where you can write your custom code to handle requests. Here you can attach the event handlers that will execute at begin request and end request. Let's create an HTTP module which will simply print text in the browser with every request. Here is the code for that:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;

        namespace Experiment
        {
            public class MyHttpModule : IHttpModule
            {
                public void Dispose()
                {
                    // add clean-up code here if required
                }

                public void Init(HttpApplication context)
                {
                    context.BeginRequest += new EventHandler(context_BeginRequest);
                    context.EndRequest += new EventHandler(context_EndRequest);
                }

                public void context_BeginRequest(object o, EventArgs args)
                {
                    HttpApplication app = (HttpApplication)o;
                    if (app != null)
                    {
                        app.Response.Write("<h1>Begin Request Executed</h1>");
                    }
                }

                public void context_EndRequest(object o, EventArgs args)
                {
                    HttpApplication app = (HttpApplication)o;
                    if (app != null)
                    {
                        app.Response.Write("<h1>End Request Executed</h1>");
                    }
                }
            }
        }

    In the code above you can see that I have created two event handlers, context_BeginRequest and context_EndRequest, which execute at begin request and end request as requests are processed. In these event handlers I have simply written code to print text to the browser. Now, in order to enable this HTTP module in the HTTP pipeline, we have to add a setting to the httpModules section of web.config to tell ASP.NET which module is enabled. Below is the configuration:

        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.0" />
            <httpModules>
              <add name="MyHttpModule" type="Experiment.MyHttpModule,Experiment"/>
            </httpModules>
          </system.web>
        </configuration>

    Then I created a sample web form with the following markup:

        <form id="form1" runat="server">
            <B>test of HTTP Module</B>
        </form>

    Now let's run this web form in the browser, and you can see the output as expected.

    Technorati Tags: HTTPModule, ASP.NET, Request

    Read the article

  • Reading source code to learn

    - by perl.j
    As you develop as a programmer, IMO, you begin to see different practices, different algorithms, and "more than one way to do it". Seeing this code can be a great learning experience for you, even though you did not write the code. But is doing this only going to confuse you? For example, let's say you have a library in any language that was created by a colleague, and you have been using it for a while. You decide to look at the actual source code, regardless of how extensive it is, to get a better look at how this library is written. For the sake of example, the function you use most often from this library is the max function, which finds the larger of two numbers. But this function is a lot more complicated than it needs to be. The way it is written is confusing the heck out of you, and you don't know how it works. Will this make you a better programmer, because you realize how complicated it is for such a simple function, or will it make you a worse coder because you feel less confident? So my question, in general, is: does reading source code make you a better programmer, and if so, how? If not, why do people still do it?
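
    To make the scenario concrete, here is an invented contrast of the kind the question describes: a max written the obvious way next to one that is correct but needlessly clever.

        public static class Comparisons
        {
            // The obvious version.
            public static int Max(int a, int b)
            {
                return a > b ? a : b;
            }

            // A correct but needlessly clever version: branch-free bit twiddling
            // that tells you more about its author than about finding a maximum
            // (and silently assumes a - b never overflows).
            public static int MaxObscure(int a, int b)
            {
                int diff = a - b;
                int sign = (diff >> 31) & 1;   // 1 when a < b, 0 otherwise
                return a - sign * diff;
            }
        }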

    Read the article

  • Combining Code Review with Trust Metrics

    - by DragonFax
    I don't get the chance to take part in it at work, but I love the idea of code review, especially online open-source code review like Gerrit Code Review. I also love what trust metrics have done for forums and collective-intelligence sites on the internet like stackexchange, reddit, and wikipedia. Would it be possible to combine the two and come up with an open-source project management system? Something that ends up being mostly community-driven; perhaps a kind of wikipedia of code for a project, where submitters become popular/trusted by having lots of patches reviewed favorably by others and accepted into the trunk, and popular/trusted submitters get their patches accepted faster and more easily. I'm looking for some opinions on the idea, or perhaps pointers to where it's been done before, if that's the case. This might leave the lead maintainer little more to do than wrangle the direction of the project by fast-tracking or vetoing specific patches, and settle disputes when the CI tests break (or fix them himself). Is design by community worse than design by committee?

    Read the article

  • ASP.Net MVC 1.0 Web Hosting

    - by Saravanan I M
    I am developing a website using ASP.NET MVC 1.0. Can I host that website on a server that has only ASP.NET 2.0? I ask because my hosting provider supports only ASP.NET 2.0. Does anyone know how to host a website developed with ASP.NET MVC 1.0 on a web server that supports ASP.NET 2.0?

    Read the article

  • Easiest Way To Get Started In Dot Net

    - by Avery Payne
    Ok, so the initial search in StackOverflow shows nothing related to this question, so here it goes. Let's pretend for a moment that you're just getting started in a career in computer programming. Let's say that, for whatever reason, you decide to use the .NET framework as a basis for your programming. Let's also say that you've been exposed to some programming background, but not in .NET, so it seems foreign to you at first. And lastly, you don't have the benefit of 25 years of exposure to the Win32 API, which explains why it seems so foreign to you when you start looking at it. So the questions are:

    1. What is a comprehensive overview of what .NET is? It appears to be a combination of a runtime environment, a set of languages, a common set of libraries, and perhaps a few other things... so it's about as clear as mud. Specifically, what are the key components of .NET? (A minimal sketch follows this question list.)
    2. What is the easiest way to understand .NET programming with regard to the available APIs?
    3. Which language would best suit beginning programming out of the "stock" languages that Microsoft has to offer (C++, C#, VB, etc.)?
    4. What are some differences between .NET programming and programming in a procedural language (e.g. Pascal, Modula, etc.)?
    5. What are some differences between .NET programming and programming in a "traditional" object-oriented language (e.g. Smalltalk, Java, Python, Ruby, etc.)?
    6. As I currently understand it, the CLR provides a foundation for all of the other languages to run on. What are some of the inherent limitations of the CLR?
    7. Given the enormous amount of API to cover, would it even be worth learning a .NET language (using the Microsoft APIs) when you have no prior exposure to Win32 programming?
    8. Let's say you write a for-profit program with .NET. Can you resell the program without running afoul of licensing issues?
    9. Let's say you write a gratis (free) program with .NET. Can you offer the program to the public under a "free" license (GPL, BSD, Artistic, etc.) without running afoul of licensing issues?

    Thank you in advance for your patience.
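
    For the first question, a minimal sketch of how the pieces fit together: one C# source file that the compiler turns into an IL assembly, which the CLR loads and JIT-compiles, calling into the shared Base Class Library.

        using System;

        class HelloNet
        {
            static void Main()
            {
                // Console lives in the Base Class Library; the CLR executes the
                // JIT-compiled IL that the C# compiler produced from this file.
                Console.WriteLine(".NET = runtime (CLR) + libraries (BCL) + languages (C#, VB, ...)");
            }
        }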

    Read the article

  • Default.aspx with IIS 6.0 and .Net 4?

    - by Amitabh
    We have deployed a .NET 4 ASP.NET site on IIS 6.0. Default.aspx is configured as one of the default documents. When we access the site using the URL http://testsite we expect it to render http://testsite/Default.aspx, but instead we get a 404 Not Found error. We did not have this issue when the site was deployed on .NET 2.0. The only thing that has changed on the server is that we use .NET 4 instead of .NET 2.

    Read the article

  • Displaying an image on a LED matrix with a Netduino

    - by Bertrand Le Roy
    In the previous post, we’ve been flipping bits manually on three ports of the Netduino to simulate the data, clock and latch pins that a shift register expected. We did all that in order to control one line of a LED matrix and create a simple Knight Rider effect. It was rightly pointed out in the comments that the Netduino has built-in knowledge of the sort of serial protocol that this shift register understands through a feature called SPI. That will of course make our code a whole lot simpler, but it will also make it a whole lot faster: writing to the Netduino ports is actually not that fast, whereas SPI is very, very fast. Unfortunately, the Netduino documentation for SPI is severely lacking. Instead, we’ve been reliably using the documentation for the Fez, another .NET microcontroller. To send data through SPI, we’ll just need  to move a few wires around and update the code. SPI uses pin D11 for writing, pin D12 for reading (which we won’t do) and pin D13 for the clock. The latch pin is a parameter that can be set by the user. This is very close to the wiring we had before (data on D11, clock on D12 and latch on D13). We just have to move the latch from D13 to D10, and the clock from D12 to D13. The code that controls the shift register has slimmed down considerably with that change. Here is the new version, which I invite you to compare with what we had before: public class ShiftRegister74HC595 { protected SPI Spi; public ShiftRegister74HC595(Cpu.Pin latchPin) : this(latchPin, SPI.SPI_module.SPI1) { } public ShiftRegister74HC595(Cpu.Pin latchPin, SPI.SPI_module spiModule) { var spiConfig = new SPI.Configuration( SPI_mod: spiModule, ChipSelect_Port: latchPin, ChipSelect_ActiveState: false, ChipSelect_SetupTime: 0, ChipSelect_HoldTime: 0, Clock_IdleState: false, Clock_Edge: true, Clock_RateKHz: 1000 ); Spi = new SPI(spiConfig); } public void Write(byte buffer) { Spi.Write(new[] {buffer}); } } All we have to do here is configure SPI. The write method couldn’t be any simpler. Everything is now handled in hardware by the Netduino. We set the frequency to 1MHz, which is largely sufficient for what we’ll be doing, but it could potentially go much higher. The shift register addresses the columns of the matrix. The rows are directly wired to ports D0 to D7 of the Netduino. The code writes to only one of those eight lines at a time, which will make it fast enough. The way an image is displayed is that we light the lines one after the other so fast that persistence of vision will give the illusion of a stable image: foreach (var bitmap in matrix.MatrixBitmap) { matrix.OnRow(row, bitmap, true); matrix.OnRow(row, bitmap, false); row++; } Now there is a twist here: we need to run this code as fast as possible in order to display the image with as little flicker as possible, but we’ll eventually have other things to do. In other words, we need the code driving the display to run in the background, except when we want to change what’s being displayed. Fortunately, the .NET Micro Framework supports multithreading. In our implementation, we’ve added an Initialize method that spins a new thread that is tied to the specific instance of the matrix it’s being called on. public LedMatrix Initialize() { DisplayThread = new Thread(() => DoDisplay(this)); DisplayThread.Start(); return this; } I quite like this way to spin a thread. As you may know, there is another, built-in way to contextualize a thread by passing an object into the Start method. 
For the method to work, the thread must have been constructed with a ParameterizedThreadStart delegate, which takes one parameter of type object. I like to use object as little as possible, so instead I’m constructing a closure with a Lambda, currying it with the current instance. This way, everything remains strongly-typed and there’s no casting to do. Note that this method would extend perfectly to several parameters. Of note as well is the return value of Initialize, a common technique to add some fluency to the API and enabling the matrix to be instantiated and initialized in a single line: using (var matrix = new LedMS88SR74HC595().Initialize()) The “using” in the previous line is because we have implemented IDisposable so that the matrix kills the thread and clears the display when the user code is done with it: public void Dispose() { Clear(); DisplayThread.Abort(); } Thanks to the multi-threaded version of the matrix driver class, we can treat the display as a simple bitmap with a very synchronous programming model: matrix.Set(someimage); while (button.Read()) { Thread.Sleep(10); } Here, the call into Set returns immediately and from the moment the bitmap is set, the background display thread will constantly continue refreshing no matter what happens in the main thread. That enables us to wait or read a button’s port on the main thread knowing that the current image will continue displaying unperturbed and without requiring manual refreshing. We’ve effectively hidden the implementation of the display behind a convenient, synchronous-looking API. Pretty neat, eh? Before I wrap up this post, I want to talk about one small caveat of using SPI rather than driving the shift register directly: when we got to the point where we could actually display images, we noticed that they were a mirror image of what we were sending in. Oh noes! Well, the reason for it is that SPI is sending the bits in a big-endian fashion, in other words backwards. Now sure you could fix that in software by writing some bit-level code to reverse the bits we’re sending in, but there is a far more efficient solution than that. We are doing hardware here, so we can simply reverse the order in which the outputs of the shift register are connected to the columns of the matrix. That’s switching 8 wires around once, as compared to doing bit operations every time we send a line to display. All right, so bringing it all together, here is the code we need to write to display two images in succession, separated by a press on the board’s button: var button = new InputPort(Pins.ONBOARD_SW1, false, Port.ResistorMode.Disabled); using (var matrix = new LedMS88SR74HC595().Initialize()) { // Oh, prototype is so sad! var sad = new byte[] { 0x66, 0x24, 0x00, 0x18, 0x00, 0x3C, 0x42, 0x81 }; DisplayAndWait(sad, matrix, button); // Let's make it smile! var smile = new byte[] { 0x42, 0x18, 0x18, 0x81, 0x7E, 0x3C, 0x18, 0x00 }; DisplayAndWait(smile, matrix, button); } And here is a video of the prototype running: The prototype in action I’ve added an artificial delay between the display of each row of the matrix to clearly show what’s otherwise happening very fast. This way, you can clearly see each of the two images being displayed line by line. Next time, we’ll do no hardware changes, focusing instead on building a nice programming model for the matrix, with sprites, text and hardware scrolling. Fun stuff. By the way, can any of my reader guess where we’re going with all that? 
The code for this prototype can be downloaded here: http://weblogs.asp.net/blogs/bleroy/Samples/NetduinoLedMatrixDriver.zip

    Read the article

  • Writing the tests for FluentPath

    - by Bertrand Le Roy
    Writing the tests for FluentPath is a challenge. The library is a wrapper around a legacy API (System.IO) that wasn’t designed to be easily testable. If it were more testable, the sensible testing methodology would be to tell System.IO to act against a mock file system, which would enable me to verify that my code is doing the expected file system operations without having to manipulate the actual, physical file system: what we are testing here is FluentPath, not System.IO. Unfortunately, that is not an option as nothing in System.IO enables us to plug a mock file system in. As a consequence, we are left with few options. A few people have suggested me to abstract my calls to System.IO away so that I could tell FluentPath – not System.IO – to use a mock instead of the real thing. That in turn is getting a little silly: FluentPath already is a thin abstraction around System.IO, so layering another abstraction between them would double the test surface while bringing little or no value. I would have to test that new abstraction layer, and that would bring us back to square one. Unless I’m missing something, the only option I have here is to bite the bullet and test against the real file system. Of course, the tests that do that can hardly be called unit tests. They are more integration tests as they don’t only test bits of my code. They really test the successful integration of my code with the underlying System.IO. In order to write such tests, the techniques of BDD work particularly well as they enable you to express scenarios in natural language, from which test code is generated. Integration tests are being better expressed as scenarios orchestrating a few basic behaviors, so this is a nice fit. The Orchard team has been successfully using SpecFlow for integration tests for a while and I thought it was pretty cool so that’s what I decided to use. Consider for example the following scenario: Scenario: Change extension Given a clean test directory When I change the extension of bar\notes.txt to foo Then bar\notes.txt should not exist And bar\notes.foo should exist This is human readable and tells you everything you need to know about what you’re testing, but it is also executable code. What happens when SpecFlow compiles this scenario is that it executes a bunch of regular expressions that identify the known Given (set-up phases), When (actions) and Then (result assertions) to identify the code to run, which is then translated into calls into the appropriate methods. Nothing magical. 
Here is the code generated by SpecFlow: [NUnit.Framework.TestAttribute()] [NUnit.Framework.DescriptionAttribute("Change extension")] public virtual void ChangeExtension() { TechTalk.SpecFlow.ScenarioInfo scenarioInfo = new TechTalk.SpecFlow.ScenarioInfo("Change extension", ((string[])(null))); #line 6 this.ScenarioSetup(scenarioInfo); #line 7 testRunner.Given("a clean test directory"); #line 8 testRunner.When("I change the extension of " + "bar\\notes.txt to foo"); #line 9 testRunner.Then("bar\\notes.txt should not exist"); #line 10 testRunner.And("bar\\notes.foo should exist"); #line hidden testRunner.CollectScenarioErrors();} The #line directives are there to give clues to the debugger, because yes, you can put breakpoints into a scenario: The way you usually write tests with SpecFlow is that you write the scenario first, let it fail, then write the translation of your Given, When and Then into code if they don’t already exist, which results in running but failing tests, and then you write the code to make your tests pass (you implement the scenario). In the case of FluentPath, I built a simple Given method that builds a simple file hierarchy in a temporary directory that all scenarios are going to work with: [Given("a clean test directory")] public void GivenACleanDirectory() { _path = new Path(SystemIO.Path.GetTempPath()) .CreateSubDirectory("FluentPathSpecs") .MakeCurrent(); _path.GetFileSystemEntries() .Delete(true); _path.CreateFile("foo.txt", "This is a text file named foo."); var bar = _path.CreateSubDirectory("bar"); bar.CreateFile("baz.txt", "bar baz") .SetLastWriteTime(DateTime.Now.AddSeconds(-2)); bar.CreateFile("notes.txt", "This is a text file containing notes."); var barbar = bar.CreateSubDirectory("bar"); barbar.CreateFile("deep.txt", "Deep thoughts"); var sub = _path.CreateSubDirectory("sub"); sub.CreateSubDirectory("subsub"); sub.CreateFile("baz.txt", "sub baz") .SetLastWriteTime(DateTime.Now); sub.CreateFile("binary.bin", new byte[] {0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0xFF}); } Then, to implement the scenario that you can read above, I had to write the following When: [When("I change the extension of (.*) to (.*)")] public void WhenIChangeTheExtension( string path, string newExtension) { var oldPath = Path.Current.Combine(path.Split('\\')); oldPath.Move(p => p.ChangeExtension(newExtension)); } As you can see, the When attribute is specifying the regular expression that will enable the SpecFlow engine to recognize what When method to call and also how to map its parameters. For our scenario, “bar\notes.txt” will get mapped to the path parameter, and “foo” to the newExtension parameter. And of course, the code that verifies the assumptions of the scenario: [Then("(.*) should exist")] public void ThenEntryShouldExist(string path) { Assert.IsTrue(_path.Combine(path.Split('\\')).Exists); } [Then("(.*) should not exist")] public void ThenEntryShouldNotExist(string path) { Assert.IsFalse(_path.Combine(path.Split('\\')).Exists); } These steps should be written with reusability in mind. They are building blocks for your scenarios, not implementation of a specific scenario. Think small and fine-grained. In the case of the above steps, I could reuse each of those steps in other scenarios. Those tests are easy to write and easier to read, which means that they also constitute a form of documentation. Oh, and SpecFlow is just one way to do this. 
Rob wrote a long time ago about this sort of thing (but using a different framework) and I highly recommend this post if I somehow managed to pique your interest: http://blog.wekeroad.com/blog/make-bdd-your-bff-2/ And this screencast (Rob always makes excellent screencasts): http://blog.wekeroad.com/mvc-storefront/kona-3/ (click the “Download it here” link)

    Read the article

  • ASP.NET MVC 3 Hosting :: Deploying ASP.NET MVC 3 web application to server where ASP.NET MVC 3 is not installed

    - by mbridge
    You can build a sample ASP.NET MVC 3 application first to try out deploying it to your host. To try it out, first put it on a web server where ASP.NET MVC 3 is installed. In this posting I will tell you what files you need and where you can find them: these are the files you need to upload to get an application running on a server where ASP.NET MVC 3 is not installed. You can also deploy an ASP.NET MVC 3 web application to a server where ASP.NET MVC 3 is not installed like this: change the reference to System.Web.Helpers.dll to be a local one so it is copied to the bin folder of your application. The first file in this list is my web application DLL, and you don't need it to get ASP.NET MVC 3 running. All the other files are located in the following folder: C:\Program Files\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies\ If more files are needed in some other scenarios, please leave me a comment here. And… don't forget to convert the folder in IIS to an application.

    While developing an application locally this isn't a problem, but when you are ready to deploy your application to a hosting provider it might well be a problem if the hoster does not have the ASP.NET MVC assemblies installed in the GAC. Fortunately, ASP.NET MVC is still bin-deployable. If your hosting provider has ASP.NET 3.5 SP1 installed, then you'll only need to include the MVC DLL; if your hosting provider is still on ASP.NET 3.5, then you'll need to deploy all three. It turns out that it's really easy to do so. Also, ASP.NET MVC runs in Medium Trust, so it should work with most hosting providers' Medium Trust policies, though it's always possible that a hosting provider customizes their Medium Trust policy to be draconian.

    Deployment is easy when you know what to copy into the archive for publishing your ASP.NET MVC 3 (or later) web site. What I like to do is use the Publish feature of Visual Studio to publish to a local directory and then upload the files to my hosting provider. If your hosting provider supports FTP, you can often skip this intermediate step and publish directly to the FTP site. The first thing I do in preparation is to go to my MVC web application project and expand the References node in the project tree. Select the aforementioned three assemblies and, in the Properties dialog, set Copy Local to True. Now just right-click on your application and select Publish, which brings up the Publish wizard. Notice that in this example I selected a local directory. When I hit Publish, all the files needed to deploy my app are available in the directory I chose, including the assemblies that were in the GAC. Other ASP.NET MVC 3 articles: New Features in ASP.NET MVC 3 - ASP.NET MVC 3 First Look

    Read the article

  • Having trouble running code analysis from command prompt with msbuild

    - by devlife
    I'm using VS2010 RC while targeting .NET 3.5. I can run code analysis via Visual Studio without a problem. However, when I try to run code analysis on our CI server it isn't getting executed. When I attempt to build using msbuild 4.0 I get the following exception: C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\CodeAnalysis\Microsoft.CodeAnalysis.targets(129,9): error MSB4018: The "CodeAnalysis" task failed unexpectedly. C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\CodeAnalysis\Microsoft.CodeAnalysis.targets(129,9): error MSB4018: System.TypeLoadException: Could not load type 'System.Runtime.Versioning.TargetFrameworkAttribute' from assembly 'mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Like I said, it works fine when I run it through VS.

    Read the article

  • Searching for tasks with code – Executables and Event Handlers

    Searching packages or just enumerating through all tasks is not quite as straightforward as it may first appear, mainly because of the way you can nest tasks within other containers. You can see this illustrated in the sample package below where I have used several sequence containers and loops. To complicate this further all containers types, including packages and tasks, can have event handlers which can then support the full range of nested containers again. Towards the lower right, the task called SQL In FEL also has an event handler not shown, within which is another Execute SQL Task, so that makes a total of 6 Execute SQL Tasks 6 tasks spread across the package. In my previous post about such as adding a property expressionI kept it simple and just looked at tasks at the package level, but what if you wanted to find any or all tasks in a package? For this post I've written a console program that will search a package looking at all tasks no matter how deeply nested, and check to see if the name starts with "SQL". When it finds a matching task it writes out the hierarchy by name for that task, starting with the package and working down to the task itself. The output for our sample package is shown below, note it has found all 6 tasks, including the one on the OnPreExecute event of the SQL In FEL task TaskSearch v1.0.0.0 (1.0.0.0) Copyright (C) 2009 Konesans Ltd Processing File - C:\Projects\Alpha\Packages\MyPackage.dtsx MyPackage\FOR Counter Loop\SQL In Counter Loop MyPackage\SEQ For Each Loop Wrapper\FEL Simple Loop\SQL In FEL MyPackage\SEQ For Each Loop Wrapper\FEL Simple Loop\SQL In FEL\OnPreExecute\SQL On Pre Execute for FEL SQL Task MyPackage\SEQ Top Level\SEQ Nested Lvl 1\SEQ Nested Lvl 2\SQL In Nested Lvl 2 MyPackage\SEQ Top Level\SEQ Nested Lvl 1\SQL In Nested Lvl 1 #1 MyPackage\SEQ Top Level\SEQ Nested Lvl 1\SQL In Nested Lvl 1 #2 6 matching tasks found in package. The full project and code is available for download below, but first we can walk through the project to highlight the most important sections of code. This code has been abbreviated for this description, but is complete in the download. First of all we load the package, and then start by looking at the Executables for the package. // Load the package file Application application = new Application(); using (Package package = application.LoadPackage(filename, null)) { int matchCount = 0; // Look in the package's executables ProcessExecutables(package.Executables, ref matchCount); ... // // ... // Write out final count Console.WriteLine("{0} matching tasks found in package.", matchCount); } The ProcessExecutables method is a key method, as an executable could be described as the the highest level of a working functionality or container. There are several of types of executables, such as tasks, or sequence containers and loops. To know what to do next we need to work out what type of executable we are dealing with as the abbreviated version of method shows below. private static void ProcessExecutables(Executables executables, ref int matchCount) { foreach (Executable executable in executables) { TaskHost taskHost = executable as TaskHost; if (taskHost != null) { ProcessTaskHost(taskHost, ref matchCount); ProcessEventHandlers(taskHost.EventHandlers, ref matchCount); continue; } ... // // ... 
ForEachLoop forEachLoop = executable as ForEachLoop; if (forEachLoop != null) { ProcessExecutables(forEachLoop.Executables, ref matchCount); ProcessEventHandlers(forEachLoop.EventHandlers, ref matchCount); continue; } } } As you can see if the executable we find is a task we then call out to our ProcessTaskHost method. As with all of our executables a task can have event handlers which themselves contain more executables such as task and loops, so we also make a call out our ProcessEventHandlers method. The other types of executables such as loops can also have event handlers as well as executables. As shown with the example for the ForEachLoop we call the same ProcessExecutables and ProcessEventHandlers methods again to drill down into the hierarchy of objects that the package may contain. This code needs to explicitly check for each type of executable (TaskHost, Sequence, ForLoop and ForEachLoop) because whilst they all have an Executables property this is not from a common base class or interface. This example was just a simple find a task by its name, so ProcessTaskHost really just does that. We also get the hierarchy of objects so we can write out for information, obviously you can adapt this method to do something more interesting such as adding a property expression. private static void ProcessTaskHost(TaskHost taskHost, ref int matchCount) { if (taskHost == null) { return; } // Check if the task matches our match name if (taskHost.Name.StartsWith(TaskNameFilter, StringComparison.OrdinalIgnoreCase)) { // Build up the full object hierarchy of the task // so we can write it out for information StringBuilder path = new StringBuilder(); DtsContainer container = taskHost; while (container != null) { path.Insert(0, container.Name); container = container.Parent; if (container != null) { path.Insert(0, "\\"); } } // Write the task path // e.g. Package\Container\Event\Task Console.WriteLine(path); Console.WriteLine(); // Increment match counter for info matchCount++; } } Just for completeness, the other processing method we covered above is for event handlers, but really that just calls back to the executables. This same method is called in our main package method, but it was omitted for brevity here. private static void ProcessEventHandlers(DtsEventHandlers eventHandlers, ref int matchCount) { foreach (DtsEventHandler eventHandler in eventHandlers) { ProcessExecutables(eventHandler.Executables, ref matchCount); } } As hopefully the code demonstrates, executables (Microsoft.SqlServer.Dts.Runtime.Executable) are the workers, but within them you can nest more executables (except for task tasks).Executables themselves can have event handlers which can in turn hold more executables. I have tried to illustrate this highlight the relationships in the following diagram. Download Sample code project TaskSearch.zip (11KB)

    Read the article

  • Best way of implementing DropDownList in ASP.NET MVC 2?

    - by Kelsey
    I am trying to understand the best way of implementing a DropDownList in ASP.NET MVC 2 using the DropDownListFor helper. This is a multi-part question.

    First, what is the best way to pass the list data to the view (a sketch of the first option follows below)?

    - Pass the list in your model with a SelectList property that contains the data
    - Pass the list in via ViewData

    Second, how do I get a blank value in the DropDownList? Should I build it into the SelectList when I am creating it, or is there some other means of telling the helper to auto-create an empty value?

    Lastly, if for some reason there is a server-side error and I need to redisplay the screen with the DropDownList, do I need to fetch the list values again to pass into the view model? This data is not maintained between posts (at least not when I pass it via my view model), so I was going to just fetch it again (it's cached). Am I going about this correctly?
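
    A minimal sketch of the first option (the list carried on the view model) with the blank item supplied by the helper's optionLabel parameter; the model, controller, and GetCategories lookup are hypothetical:

        using System.Collections.Generic;
        using System.Web.Mvc;

        public class EditProductViewModel
        {
            public int? CategoryId { get; set; }
            public IEnumerable<SelectListItem> Categories { get; set; }
        }

        public class ProductsController : Controller
        {
            public ActionResult Edit(int id)
            {
                var model = new EditProductViewModel
                {
                    // Re-populate on every GET and again when redisplaying after a
                    // failed POST: the list itself is not round-tripped in the form.
                    Categories = new SelectList(GetCategories(), "Id", "Name")
                };
                return View(model);
            }

            private IEnumerable<object> GetCategories()
            {
                // Placeholder for the (cached) lookup mentioned in the question.
                return new[] { new { Id = 1, Name = "Books" }, new { Id = 2, Name = "Games" } };
            }
        }

    In the view, <%= Html.DropDownListFor(m => m.CategoryId, Model.Categories, "") %> renders the list, and the empty-string optionLabel produces the blank entry.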

    Read the article
