Search Results

Search found 71506 results on 2861 pages for 'csharp source net'.


  • Is there any real benefit to using ASP.Net Authentication with ASP.Net MVC?

    - by alchemical
    I've been researching this intensely for the past few days. We're developing an ASP.Net MVC site that needs to support 100,000+ users, and we'd like to keep it fast, scalable, and simple. We have our own SQL database tables for user, user_role, etc., and we are not using server controls. Given that there are no server controls, and that a custom MembershipProvider would need to be created anyway, is there any benefit left to using ASP.Net Auth/Membership? The alternative would seem to be custom code that drops a unique CustomerID in a cookie and authenticates with that; or, if we're paranoid about sniffers, we could encrypt the cookie as well. In this scenario (MVC, with customer data in our own tables), is there any real benefit to using the ASP.Net auth/membership framework, or is the fully custom solution a viable route?
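
    For context, the encrypted-cookie alternative described above can be sketched with nothing but the standard FormsAuthentication plumbing; the cookie name and lifetime below are arbitrary assumptions:

      // Issue an encrypted authentication cookie carrying our own CustomerID,
      // without using Membership at all. FormsAuthentication.Encrypt uses the
      // machineKey configuration, so no hand-rolled crypto is needed.
      var ticket = new System.Web.Security.FormsAuthenticationTicket(
          1,                          // version
          customerId.ToString(),      // "name": our unique CustomerID
          DateTime.Now,
          DateTime.Now.AddMinutes(30),
          false,                      // not persistent
          String.Empty);              // optional user data

      string encrypted = System.Web.Security.FormsAuthentication.Encrypt(ticket);
      Response.Cookies.Add(new HttpCookie("CustomerAuth", encrypted) { HttpOnly = true });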

    Read the article

  • Asp.Net MVC - Plugins Directory, Community etc?

    - by Jörg Battermann
    Good evening everyone, I am currently starting to dive into ASP.NET MVC and I really like what I see so far... BUT I am somewhat confused about 'drop-in' functionality (similar to what Rails has with its plugins and nowadays gems), an active community to contact, etc. For Rails there's GitHub, with one massive index of plugins/gems/code examples that are mostly Rails-related (despite GitHub's goal being generic source-code hosting); for blogs, mailing lists, etc., it's also pretty easy to find the places the other developers flock around. But for ASP.NET MVC I am somewhat lost as to where to go and look. It all seems scattered across CodePlex, private sites, Google Code hosting, etc. Is there one place (or a few places) to turn to regarding ASP.NET MVC development, sample code, etc.? Cheers and thanks, -Jörg

    Read the article

  • Advice on how to implement a code generator for ASP.NET MVC 2

    - by loviji
    Hello, I would like your advice about how best to solve my problem; suggest whatever methods and technologies you think fit. The web server runs .NET Framework 4.0, and the application is built on ASP.NET MVC 2. I have database tables in MS SQL Server, and for each table I must implement an interface for viewing, editing, and deleting, so the code generator must generate the model, controller, and views. Generation should happen after clicking a button. As the model I use the .NET Entity Framework; now I need to generate the controllers and views. So if I have a table named tableN1 with the columns below: [ID] [bigint] IDENTITY(1,1) NOT NULL, [name] [nvarchar](20) NOT NULL, [fullName] [nvarchar](50) NOT NULL, [age] [int] NOT NULL, [active] [bit] NULL. For this table, I want to generate the views and controller. Thanks.
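
    As a sketch of the brute-force approach the question describes (emitting source from table metadata after the button click), something like the following could be a starting point; MyEntities and the output path are assumptions, and T4 templates or scaffolding tools are a more robust alternative:

      using System.Text;

      static string GenerateController(string table)
      {
          // Emit a minimal controller for the given table/entity set.
          var sb = new StringBuilder();
          sb.AppendLine("using System.Linq;");
          sb.AppendLine("using System.Web.Mvc;");
          sb.AppendLine();
          sb.AppendLine("public class " + table + "Controller : Controller");
          sb.AppendLine("{");
          sb.AppendLine("    private readonly MyEntities db = new MyEntities();");
          sb.AppendLine();
          sb.AppendLine("    public ActionResult Index() { return View(db." + table + ".ToList()); }");
          sb.AppendLine("}");
          return sb.ToString();
      }

      // Usage, e.g. in the button's POST action:
      // System.IO.File.WriteAllText(
      //     Server.MapPath("~/Controllers/tableN1Controller.cs"),
      //     GenerateController("tableN1"));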

    Read the article

  • My first .NET web app - should I go straight to the MVC framework (cf. ASP.NET)?

    - by Greg
    Hi, I've done some WinForms work in C# but now have to develop a web application front end in .NET (C#). I have experience developing web apps in Ruby on Rails (and a little Java with JSP pages and the Struts MVC framework). Should I jump straight to the MVC framework, as opposed to going with classic ASP.NET? That is, both from the point of view of Microsoft's future direction and of ease of ramping up for myself. Or, if you like: given my experience to date, what would the pros/cons be for me of MVC versus ASP.NET? thanks

    Read the article

  • Simple way of converting server side objects into client side using JSON serialization for asp.net websites

    - by anil.kasalanati
     Introduction: With the growth of Web 2.0 and the need for a faster user experience, the spotlight has shifted onto JavaScript-based applications built using the REST pattern or the ASP.NET AJAX PageRequestManager. And when we are working with JavaScript, wouldn't it be much better if we could create objects in an OOAD way and easily push them to the client side? The following are the reasons why you would push server-side objects onto the client side:
    - Easy availability of the complex object.
    - The ability to use the C# compiler and rich IntelliSense to create and maintain the objects, yet use them in the JavaScript; you could run code analysis, etc.
    - A reduced number of calls to the server side, by loading data during page load.
    I would like to expand on the third point, because it proved highly beneficial to me when I was fixing the performance issues of a major website. There can be a scenario where you make multiple AJAX-based WebRequestManager calls to get the same response within a single page. This happens with a widget-based framework, when all the widgets are independent but each needs some common information, available in the framework, to load its data. So instead of making n calls we can load the needed data during page load. (The original article includes a diagram of this scenario, in which every widget needs the common information and calls the GetData web service on the server side.) Of course the result can be cached on the client side, but a better solution is to avoid the calls completely. In order to do that, we need to JSON-serialize the content and send it in the DOM.
    Example: I have developed a simple application to demonstrate the idea, and I explain it in detail here. A class called SimpleClass is serialized to JSON and sent to the client side. It inherits from a base class which implements the GetJSONString method; you can create a single such base class, and every object that needs to be pushed to the client side can inherit from it. The important thing to note is that the class should be annotated with the DataContract attribute and its members with the DataMember attribute. This is needed by the .NET DataContractSerializer, which follows an opt-in model: if you want a member sent to the client side, you need to annotate it with the DataMember attribute. So if I didn't want to send the Result, I would simply remove its DataMember attribute. This is standard WCF/.NET 3.5 behavior, but it provides the flexibility of having a full-fledged object on the server side while sending a smaller object to the client side; sometimes you may want to hide some values due to security constraints. One more thing you will notice is that I have marked the class as Serializable, so that it can be stored in the Session and used in web-farm deployment scenarios.
    Following is the implementation of the base class. It uses the default DataContractJsonSerializer; for more information or customization options, refer to the following blogs:
    http://softcero.blogspot.com/2010/03/optimizing-net-json-serializing-and-ii.html
    http://weblogs.asp.net/gunnarpeipman/archive/2010/12/28/asp-net-serializing-and-deserializing-json-objects.aspx
    The next part is pretty simple: I just need to inject this object into the aspx page. In the aspx markup I have the following lines:

      <script type="text/javascript">
          var data = (<%= SimpleClassJSON %>);
          alert(data.ResultText);
      </script>

    This outputs the content as JSON into the variable data, from which it can be assigned to any element in the DOM. You can verify the result by checking data in the Firebug console.
    Design considerations: If you have a lot of JavaScript, consider using Script#, which lets you write your JavaScript in C#; refer to Nikhil's blog at http://projects.nikhilk.net/ScriptSharp. Also ensure that you take security into consideration when exposing server-side objects to the client side; I have seen applications expose passwords and secret keys, which is not a good practice.
    The application can be tested at http://techconsulting.vpscustomer.com/Samples/JsonTest.aspx and the source code is available at http://techconsulting.vpscustomer.com/Source/HistoryTest.zip
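
    Since the excerpt describes the classes but omits their code, here is a minimal sketch of what the pattern might look like; the base-class name (JsonSerializableBase) and the members are assumptions for illustration:

      using System;
      using System.IO;
      using System.Runtime.Serialization;
      using System.Runtime.Serialization.Json;
      using System.Text;

      [Serializable]
      [DataContract]
      public abstract class JsonSerializableBase
      {
          // Serialize the runtime type of this instance to a JSON string.
          public string GetJSONString()
          {
              var serializer = new DataContractJsonSerializer(this.GetType());
              using (var stream = new MemoryStream())
              {
                  serializer.WriteObject(stream, this);
                  return Encoding.UTF8.GetString(stream.ToArray());
              }
          }
      }

      [Serializable]
      [DataContract]
      public class SimpleClass : JsonSerializableBase
      {
          [DataMember]
          public string ResultText { get; set; }

          // No [DataMember] here, so Result never reaches the client.
          public int Result { get; set; }
      }

    In the page's code-behind, SimpleClassJSON would then simply be the value returned by GetJSONString() on an instance of SimpleClass.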

    Read the article

  • Custom ASPNetMembership FailureInformation always null, OnValidatingPassword issue

    - by bigb
    As stated here, http://msdn.microsoft.com/en-us/library/system.web.security.membershipprovider.onvalidatingpassword.aspx: "When the ValidatingPassword event has completed, the properties of the ValidatePasswordEventArgs object supplied as the e parameter can be examined to determine whether the current action should be canceled and if a particular Exception, stored in the FailureInformation property, should be thrown." Here are some details/code showing why FailureInformation shouldn't always be null when password security conditions are not matched: http://forums.asp.net/t/991002.aspx. According to my Membership settings, I should get an exception saying that the password does not match the password security conditions, but that does not happen. I then tried to debug the System.Web.ApplicationServices.dll framework code (in .NET 4.0, System.Web.Security lives there) to see what really happens, but I can't step into this assembly, maybe because of this: [TypeForwardedFrom("System.Web, Version=2.0.0.0, Culture=Neutral, PublicKeyToken=b03f5f7f11d50a3a")] public abstract class MembershipProvider : ProviderBase. I can easily step into any other .NET 4.0 assembly, but not this one, and I did verify that the symbols for System.Web.ApplicationServices.dll are loaded. Now I have only one idea for how to fix it: override the OnValidatingPassword(ValidatePasswordEventArgs e) method. That's my story. Maybe someone can help: 1) Any ideas why OnValidatingPassword is not working? 2) Any ideas how to step into it?
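
    For reference, overriding the method looks roughly like the sketch below (against SqlMembershipProvider; the digit rule is a made-up example of a custom condition):

      using System.Linq;
      using System.Web.Security;

      public class CustomMembershipProvider : SqlMembershipProvider
      {
          protected override void OnValidatingPassword(ValidatePasswordEventArgs e)
          {
              base.OnValidatingPassword(e); // still raise the ValidatingPassword event

              // Hypothetical extra rule: require at least one digit.
              if (!e.Password.Any(char.IsDigit))
              {
                  e.Cancel = true;
                  e.FailureInformation = new MembershipPasswordException(
                      "Password does not meet the configured security conditions.");
              }
          }
      }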

    Read the article

  • Reuse security code between WCF and MVC.NET

    - by mrjoltcola
    First the background: I jumped into MVC.NET from the Java MVC world, so my implementation below is possibly cheating, I don't know. I avoided fooling with a custom membership provider and just implemented the base code needed to authenticate and load roles in my LogOn action. Typically I just need to check roles programmatically, and have no use for all of the other membership features, so I didn't originally think I needed a full Membership provider. I have a successful WCF project with a custom authentication and authorization layer that I did at least write per the proper API: I implemented it with custom IPrincipal, UserNamePasswordValidator and IAuthorizationPolicy classes that load from an Oracle database. In my WCF services, I use declarative security: [PrincipalPermission(SecurityAction.Demand, Role="ADMIN")]. The question (on the ASP.NET/MVC.NET side): all my reading indicates I should implement a custom Membership/Roles provider and use [Authorize(Roles="ADMIN")] on my controller actions. At this point, I don't have a true Membership provider; I'm using the same User class, implementing the IPrincipal interface, that works with the WCF security, and I plan to share common code between the WCF and ASP.NET modules. So my LogOn action is not using the FormsService (and I assume this is bad); I had commented it out, and just used my "UserService" to access the Oracle db. Note my "TODO" comment below.

      public ActionResult LogOn(LogOnModel model, string returnUrl)
      {
          log.Info("Login attempt by " + model.UserName);
          if (ModelState.IsValid)
          {
              User user = userService.findByUserName(model.UserName);

              // Commented original MembershipService code, this is probably bad
              // if (MembershipService.ValidateUser(model.UserName, model.Password))
              if (user != null && user.Authenticate(model.Password) == true)
              {
                  log.Info("Login success by " + model.UserName);
                  FormsService.SignIn(model.UserName, model.RememberMe);

                  // TODO: Override with Custom identity / roles?
                  user.AddRoles(userService.listRolesByUser(user)); // pull in roles from db

                  if (!String.IsNullOrEmpty(returnUrl))
                      return Redirect(returnUrl);
                  else
                      return RedirectToAction("Index", "Home");
              }
              else
              {
                  log.Info("Login failure by " + model.UserName);
                  ModelState.AddModelError("", "The user name or password provided is incorrect.");
              }
          }

          // If we got this far, something failed, redisplay form
          return View(model);
      }

    So can I make the above work? Can I stick the IPrincipal (User) into the CurrentContext or HttpContext? Can I integrate the custom IPrincipal I've already created without writing a full Membership/Roles provider? I currently stick the User object into the session and access it from all MVC.NET controllers with a "CurrentUser" property which grabs it from the session on demand. But this doesn't work with the [Authorize] attribute; I assume that is because it knows nothing about my custom Principal in the session, and is instead using whatever FormsService.SignIn() produces. I also found that session timeouts screw up the login redirect: the user doesn't get forwarded, and instead we get a null exception accessing User from the session, which I assume is related to my "skipping steps" to get a quick implementation. Thanks.
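
    One conventional answer, sketched here rather than taken from the thread, is to rebuild the custom principal once per request in Global.asax so that [Authorize] sees it; the cookie handling uses only standard FormsAuthentication APIs, while stashing the roles in the ticket's UserData is an assumption:

      // In Global.asax.cs. Note: Session is not yet available in
      // AuthenticateRequest, so roles must come from the ticket or the db.
      protected void Application_AuthenticateRequest(object sender, EventArgs e)
      {
          HttpCookie authCookie = Request.Cookies[FormsAuthentication.FormsCookieName];
          if (authCookie == null) return;

          FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(authCookie.Value);

          // Roles were stored pipe-delimited in UserData at SignIn time (assumption).
          string[] roles = ticket.UserData.Split('|');
          HttpContext.Current.User =
              new System.Security.Principal.GenericPrincipal(new FormsIdentity(ticket), roles);
      }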

    Read the article

  • Parallelism in .NET – Part 3, Imperative Data Parallelism: Early Termination

    - by Reed
    Although simple data parallelism allows us to easily parallelize many of our iteration statements, there are cases that it does not handle well. In my previous discussion, I focused on data parallelism with no shared state, and where every element is processed exactly the same way. Unfortunately, there are many common cases where this does not happen. If we are dealing with a loop that requires early termination, extra care is required when parallelizing. Often, while processing in a loop, once a certain condition is met it is no longer necessary to continue processing. This may be a matter of finding a specific element within the collection, or reaching some error case. The important distinction here is that it is often impossible to know, until runtime, what set of elements needs to be processed. In my initial discussion of data parallelism, I mentioned that this technique is a candidate when you can decompose the problem based on the data involved, and you wish to apply a single operation concurrently on all of the elements of a collection. This covers many of the potential cases, but sometimes, after processing some of the elements, we need to stop processing. As an example, let's go back to our previous Parallel.ForEach example with contacting a customer. However, this time we'll change the requirements slightly: we'll add an extra condition, namely that if the store is unable to email the customer, we will exit gracefully. The thinking here, of course, is that if the store is currently unable to email, the next time this operation runs it will handle the same situation, so we can just skip our processing entirely. The original, serial case, with this extra condition, might look something like the following:

      foreach(var customer in customers)
      {
          // Run some process that takes some time...
          DateTime lastContact = theStore.GetLastContact(customer);
          TimeSpan timeSinceContact = DateTime.Now - lastContact;

          // If it's been more than two weeks, send an email, and update...
          if (timeSinceContact.Days > 14)
          {
              // Exit gracefully if we fail to email, since this
              // entire process can be repeated later without issue.
              if (theStore.EmailCustomer(customer) == false)
                  break;

              customer.LastEmailContact = DateTime.Now;
          }
      }

    Here, we're processing our loop, but at any point, if we fail to send our email successfully, we just abandon this process and assume that it will get handled correctly the next time our routine is run. If we try to parallelize this using Parallel.ForEach, as we did previously, we'll run into an error almost immediately: the break statement we're using is only valid when enclosed within an iteration statement, such as foreach. When we switch to Parallel.ForEach, we're no longer within an iteration statement; we're a delegate running in a method. This needs to be handled slightly differently when parallelized.
    Instead of using the break statement, we need to utilize a new class in the Task Parallel Library: ParallelLoopState. The ParallelLoopState class is intended to give concurrently running loop bodies a way to interact with each other, and provides us with a way to break out of a loop. In order to use this, we use a different overload of Parallel.ForEach which takes an IEnumerable<T> and an Action<T, ParallelLoopState> instead of an Action<T>. Using this, we can parallelize the above operation by doing:

      Parallel.ForEach(customers, (customer, parallelLoopState) =>
      {
          // Run some process that takes some time...
          DateTime lastContact = theStore.GetLastContact(customer);
          TimeSpan timeSinceContact = DateTime.Now - lastContact;

          // If it's been more than two weeks, send an email, and update...
          if (timeSinceContact.Days > 14)
          {
              // Exit gracefully if we fail to email, since this
              // entire process can be repeated later without issue.
              if (theStore.EmailCustomer(customer) == false)
                  parallelLoopState.Break();
              else
                  customer.LastEmailContact = DateTime.Now;
          }
      });

    There are a couple of important points here. First, we didn't actually instantiate the ParallelLoopState instance; it was provided directly to us via the Parallel class. All we needed to do was change our lambda expression to reflect that we want to use the loop state, and the Parallel class creates an instance for our use. We also needed to change our logic slightly when we call Break(). Since Break() doesn't stop the program flow within our block, we needed to add an else case so we only set the property on the customer when we succeed. This same technique can be used to break out of a Parallel.For loop. That being said, there is a huge difference between using ParallelLoopState to cause early termination and using break in a standard iteration statement. When dealing with a loop serially, break will immediately terminate the processing within the closest enclosing loop statement. Calling ParallelLoopState.Break(), however, has a very different behavior. The issue is that, now, we're no longer processing one element at a time. If we break in one of our threads, there are other threads that will likely still be executing. This leads to an important observation about termination of parallel code: early termination in parallel routines is not immediate; code will continue to run after you request a termination. This may seem problematic at first, but it is something you just need to keep in mind while designing your routine. ParallelLoopState.Break() should be thought of as a request. We are telling the runtime that no elements that were in the collection past the element we're currently processing need to be processed, and leaving it up to the runtime to decide how to handle this as gracefully as possible. Although this may seem problematic at first, it is a good thing. If the runtime tried to immediately stop processing, many of our elements would be partially processed. It would be like putting a return statement in a random location throughout our loop body, which could have horrific consequences for our code's maintainability. In order to understand and effectively write parallel routines, we, as developers, need a subtle but profound shift in our thinking. We can no longer think in terms of sequential processes, but rather need to think in terms of requests to the system that may be handled differently than we'd first expect.
    This is more natural to developers who have dealt with asynchronous models previously, but is an important distinction when moving to concurrent programming models. As an example, I'll discuss the Break() method. ParallelLoopState.Break() functions in a way that may be unexpected at first. When you call Break() from a loop body, the runtime will continue to process all elements of the collection that were found prior to the element that was being processed when the Break() method was called. This is done to keep the behavior of the Break() method as close to the behavior of the break statement as possible. We can see the behavior in this simple code:

      var collection = Enumerable.Range(0, 20);
      var pResult = Parallel.ForEach(collection, (element, state) =>
      {
          if (element > 10)
          {
              Console.WriteLine("Breaking on {0}", element);
              state.Break();
          }
          Console.WriteLine(element);
      });

    If we run this, we get a result that may seem unexpected at first:

      0
      2
      1
      5
      6
      3
      4
      10
      Breaking on 11
      11
      Breaking on 12
      12
      9
      Breaking on 13
      13
      7
      8
      Breaking on 15
      15

    What is occurring here is that we loop until we find the first element where the element is greater than 10. In this case, this was found, the first time, when one of our threads reached element 11. It requested that the loop stop by calling Break() at this point. However, the loop continued processing until all of the elements less than 11 were completed, then terminated. This means that it guarantees that elements 9, 7, and 8 are completed before it stops processing. You can see that our other threads each tried to break as well, but since Break() was called on the element with a value of 11, it decides which elements (0-10) must be processed. If this behavior is not desirable, there is another option. Instead of calling ParallelLoopState.Break(), you can call ParallelLoopState.Stop(). The Stop() method requests that the runtime terminate as soon as possible, without guaranteeing that any other elements are processed. Stop() will not stop the processing within an element, so elements already being processed will continue to be processed. It will prevent new elements, even ones found earlier in the collection, from being processed. Also, when Stop() is called, the ParallelLoopState's IsStopped property will return true. This lets longer running processes poll for this value, and return after performing any necessary cleanup. The basic rule of thumb for choosing between Break() and Stop() is the following: use ParallelLoopState.Stop() when possible, since it terminates more quickly. This is particularly useful in situations where you are searching for an element or a condition in the collection; once you've found it, you do not need to do any other processing, so Stop() is more appropriate. Use ParallelLoopState.Break() if you need to more closely match the behavior of the C# break statement. Both methods behave differently than our C# break statement. Unfortunately, when parallelizing a routine, more thought and care needs to be put into every aspect of your routine than you may otherwise expect. This is due to my second observation: parallelizing a routine will almost always change its behavior. This sounds crazy at first, but it's a concept that's so simple it's easy to forget. We're purposely telling the system to process more than one thing at the same time, which means that the sequence in which things get processed is no longer deterministic.
It is easy to change the behavior of your routine in very subtle ways by introducing parallelism.  Often, the changes are not avoidable, even if they don’t have any adverse side effects.  This leads to my final observation for this post: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.
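
    To make the polling idea concrete, here is a minimal sketch (not from the original post; GetProcessingSteps, CleanUp, and IsMatch are assumed helpers) of a long-running loop body that honors Stop():

      Parallel.ForEach(customers, (customer, state) =>
      {
          foreach (var step in GetProcessingSteps(customer))
          {
              // Another thread may have requested termination via Stop();
              // bail out after performing our cleanup.
              if (state.IsStopped)
              {
                  CleanUp(customer);
                  return;
              }
              step.Execute();
          }

          // Hypothetical search condition: request termination once found.
          if (IsMatch(customer))
              state.Stop();
      });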

    Read the article

  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
    When parallelizing any routine, we start by decomposing the problem. Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element. This process is called partitioning. Partitioning our tasks is a challenging feat. There are opposing forces at work here: too many partitions adds overhead; too few partitions leaves processors idle. Striking the perfect balance between the two extremes is the goal for which we should aim. Luckily, the Task Parallel Library automatically handles much of this process. However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework to making better decisions. First off, I'd like to say that this is a more advanced topic. It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place. The default behavior in the Task Parallel Library is very well-behaved, even for unusual workloads, and should rarely be adjusted. I have found few situations where the default partitioning behavior in the TPL is not as good as, or better than, my own hand-written partitioning routines, and I recommend using the defaults unless there is a strong, measured, and profiled reason to avoid them. However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL. I indirectly mentioned partitioning while discussing aggregation. Typically, our systems will have a limited number of Processing Elements (PEs), which is the terminology used for hardware capable of processing a stream of instructions. For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading. This gives us a total of 8 PEs; theoretically, we can have up to eight operations occurring concurrently within our system. In order to fully exploit this power, we need to partition our work into Tasks. A task is a simple set of instructions that can be run on a PE. Ideally, we want to have at least one task per PE in the system, since fewer tasks means that some of our processing power will be sitting idle. A naive implementation would be to just take our data and partition it with one element in our collection being treated as one task. When we loop through our collection in parallel using this approach, we'd just process one item at a time, then reuse that thread to process the next, and so on. There's a flaw in this approach, however: it will tend to be slower than necessary, often slower than processing the data serially. The problem is that there is overhead associated with each task. When we take a simple foreach loop body and implement it using the TPL, we add overhead. First, we change the body from a simple statement to a delegate, which must be invoked. In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool's current work queue, and the ThreadPool must pull it off the queue, assign it to a free thread, then execute it. If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance. The answer, here, is to partition our collection into groups, and have each group of elements treated as a single task.
    By adding a partitioning step, we can break our total work into tasks small enough to keep our processors busy, but large enough to avoid overburdening the ThreadPool. There are two clear, opposing goals here: always try to keep each processor working, but also try to keep the individual partitions as large as possible. When using Parallel.For, the partitioning is always handled automatically. At first, partitioning here seems simple. A naive implementation would merely split the total element count up by the number of PEs in the system, and assign a chunk of data to each processor. Many hand-written partitioning schemes work in exactly this manner. This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element. However, this is rarely the case. Often, the length of time required to process an element grows as we progress through the collection, especially if we're doing numerical computations. In this case, the first PEs will finish early, and sit idle waiting on the last chunks to finish. Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations. In this situation, the first chunks will be working far longer than the last chunks. In order to balance the workload, many implementations create many small chunks, and reuse threads. This adds overhead, but does provide better load balancing, which in turn improves performance. The Task Parallel Library handles this more elaborately. Chunks are determined at runtime, and start small; they grow slowly over time, getting larger and larger. This tends to lead to near-optimum load balancing, even in odd cases such as increasing or decreasing workloads. Parallel.ForEach is a bit more complicated, however. When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance, and must be discovered at runtime. In addition, since we don't have direct access to each element, the scheduler must enumerate the collection to process it. Since IEnumerable<T> is not thread safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out. By default, it uses a partitioning method similar to the one described above. We can see this directly by looking at the Visual Partitioning sample shipped by the Task Parallel Library team, and available as part of the Samples for Parallel Programming. When we run the sample with four cores and the default Load Balancing partitioning scheme, the visualization shows one colored band per processing core: when we start (at the top), we begin with very small bands of color, and as the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen as larger and larger stripes). Most of the time this is fantastic behavior, and it will most likely outperform any custom-written partitioning. However, if your routine is not scaling well, it may be due to a failure of the default partitioning to handle your specific case. With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner. There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance. The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning.
By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available.  If not, your custom Partitioner<T> subclass would override GetPartitions(int), which returns a list of IEnumerator<T> instances.  These are then used by the Parallel class to split work up amongst processors.  When dynamic partitioning is available, GetDynamicPartitions() is used, which returns an IEnumerable<T> for each partition.  If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately. The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project.  This provides example code for implementing your own, custom allocation strategies, including a static allocator of a given chunk size.  Although implementing your own Partitioner<T> is possible, as I mentioned above, this is rarely required or useful in practice.  The default behavior of the TPL is very good, often better than any hand written partitioning strategy.
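
    For the common case where you simply want bigger chunks than per-element scheduling, the framework's built-in Partitioner.Create overloads are often enough; a minimal sketch (the array size and the chunk size of 4096 are arbitrary assumptions to be tuned by measurement):

      using System;
      using System.Collections.Concurrent;
      using System.Threading.Tasks;

      var data = new double[1000000];

      // Static range partitioning: each task receives a contiguous [from, to)
      // range and runs a plain for loop over it, amortizing the delegate
      // invocation overhead across the whole chunk.
      Parallel.ForEach(
          Partitioner.Create(0, data.Length, 4096),
          range =>
          {
              for (int i = range.Item1; i < range.Item2; i++)
              {
                  data[i] = Math.Sqrt(data[i]);
              }
          });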

    Read the article

  • Parallelism in .NET – Part 12, More on Task Decomposition

    - by Reed
    Many tasks can be decomposed using a Data Decomposition approach, but often this is not appropriate. Frequently, decomposing the problem into distinct tasks that must be performed is a more natural abstraction. However, as I mentioned in Part 1, Task Decomposition tends to be a bit more difficult than data decomposition, and can require a bit more effort. Before we begin parallelizing our algorithm based on the tasks being performed, we need to decompose our problem, and take special care of certain considerations such as ordering and grouping of tasks. Up to this point in this series, I've focused on parallelization techniques which are most appropriate when a problem space can be decomposed by data. Using PLINQ and the Parallel class, I've shown how problem spaces where there is a collection of data, and each element needs to be processed, can potentially be parallelized. However, there are many other routines where this is not appropriate. Often, instead of working on a collection of data, there is a single piece of data which must be processed using an algorithm or series of algorithms. Here, there is no collection of data, but there may still be opportunities for parallelism. As I mentioned before, in cases like this, the approach is to look at your overall routine and decompose your problem space based on tasks. The idea here is to look for discrete "tasks": individual pieces of work which can be conceptually thought of as a single operation. Let's revisit the example I used in Part 1, an application startup path. Say we want our program, at startup, to do a bunch of individual actions, or "tasks". The following is the list of duties we must perform right at startup:
    - Display a splash screen
    - Request a license from our license manager
    - Check for an update to the software from our web server
    - If an update is available, download it
    - Set up our menu structure based on our current license
    - Open and display our main welcome window
    - Hide the splash screen
    The first step in Task Decomposition is breaking up the problem space into discrete tasks. This naturally can be abstracted as seven discrete tasks. In the serial version of our program, the general process (diagrammed in the original post) is simply these seven steps in order. These tasks, obviously, provide some opportunities for parallelism. Before we can parallelize this routine, we need to analyze these tasks and find any dependencies between them. In this case, our dependencies include:
    - The splash screen must be displayed first, and as quickly as possible.
    - We can't download an update before we see whether one exists.
    - Our menu structure depends on our license, so we must check for the license before setting up the menus.
    - Since our welcome screen will notify the user of an update, we can't show it until we've downloaded the update.
    - Since our welcome screen includes menus that are customized based on the licensing, we can't display it until we've received a license.
    - We can't hide the splash until our welcome screen is displayed.
    By listing our dependencies, we start to see the natural ordering that must occur for the tasks to be processed correctly. The second step in Task Decomposition is determining the dependencies between tasks, and ordering tasks based on those dependencies. Looking at these tasks and all their dependencies, we quickly see that even a simple decomposition such as this one can get quite complicated.
    In order to simplify the problem of defining the dependencies, it's often a useful practice to group our tasks into larger, discrete tasks. The goal when grouping tasks is to make each task "group" have as few dependencies as possible on other tasks or groups, and then work out the dependencies within that group. Typically, this works best when any external dependency is based on the "last" task within the group when it's ordered, although that is not a firm requirement. This process is often called Grouping Tasks. In our case, we can easily group tasks together, effectively turning this into four discrete task groups:
    1. Show our splash screen. This needs to be left as its own task: multiple things depend on it, mainly because we want it to start before any other action, and as quickly as possible.
    2. Check for an update, and download the update if it exists. These two tasks logically group together: we only download an update if one exists, so that naturally follows. This group has one dependency as an input, and other tasks rely only on the final task within the group.
    3. Request a license, and then set up the menus. Here, we can group these two tasks together. Although we mentioned that our welcome screen depends on the license returned, it also depends on setting up the menu, which is the final task here; setting up our menus cannot happen until after our license is requested. By grouping these together, we further reduce our problem space.
    4. Display welcome and hide splash. Finally, we can display our welcome window and hide our splash screen. This task group depends on all three previous task groups; it cannot happen until all three of them have completed.
    By grouping the tasks together, we reduce our problem space and can naturally see a pattern for how this process can be parallelized. The diagram in the original post shows one approach: orange boxes mark each task group, with each task represented within. We can now effectively take these tasks and run a large portion of this process in parallel, including the portions which may be the most time consuming. We've created two parallel paths which our process execution can follow, hopefully speeding up the application startup time dramatically (see the sketch below). The main point to remember here is that, when decomposing your problem space by tasks, you need to:
    - Define each discrete action as an individual task
    - Discover dependencies between your tasks
    - Group tasks based on their dependencies
    - Order the tasks and groups of tasks
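
    A sketch of how these four groups might map onto the Task Parallel Library (the method names are stand-ins for the duties listed above):

      var splash = Task.Factory.StartNew(() => ShowSplash());

      // Group 2: check, then conditionally download (sequential within the group).
      var update = Task.Factory.StartNew(() =>
      {
          if (CheckForUpdate())
              DownloadUpdate();
      });

      // Group 3: license first, then menus.
      var license = Task.Factory.StartNew(() =>
      {
          RequestLicense();
          SetupMenus();
      });

      // Group 4 depends on all three previous groups.
      Task.Factory.ContinueWhenAll(
          new[] { splash, update, license },
          _ =>
          {
              ShowWelcomeWindow();
              HideSplash();
          });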

    Read the article

  • Should a c# dev switch to VB.net when the team language base is mixed?

    - by jjr2527
    I recently joined a new development team where the language preferences on the .NET platform are mixed:
    Dev 1: knows VB.NET, does not know C#
    Dev 2: knows VB.NET, does not know C#
    Dev 3: knows C# and VB.NET, prefers C#
    Dev 4: knows C# and VB6 (VB.NET should be pretty easy to pick up), prefers C#
    It seems to me that the thought leaders in the .NET space are almost universally C# devs. I also thought that some third-party tools didn't support VB.NET, but when I started looking into it I didn't find any good examples. I would prefer to get the whole team on C#, but if there isn't any good reason to force the issue aside from preference, then I don't think that is the right choice. Are there any reasons I should lead folks away from VB.NET?

    Read the article

  • .NET Weak Event Handlers – Part II

    - by João Angelo
    In the first part of this article I showed two possible ways to create weak event handlers: one using reflection and the other using a delegate. For this performance analysis we will further differentiate between creating a delegate by providing the type of the listener at compile time (Explicit Delegate) and creating the delegate with the type of the listener obtained only at runtime (Implicit Delegate). As expected, the performance of the reflection and delegate approaches differs significantly. With the reflection-based approach, creating a weak event handler is just storing a MethodInfo reference, while with the delegate-based approach there is the need to create the delegate which will be invoked later. So at creation time reflection clearly wins; but what about when the handler is invoked? No surprises there: performing a call through reflection every time a handler is invoked is costly. In conclusion, if you want good performance when creating handlers that only sporadically get triggered, use reflection; otherwise use the delegate-based approach. The explicit delegate approach always wins against the implicit delegate, but I find the syntax for the latter much more intuitive.

      // Implicit delegate - The listener type is inferred at runtime from the handler parameter
      public static EventHandler WrapInDelegateCall(EventHandler handler);
      public static EventHandler<TArgs> WrapInDelegateCall<TArgs>(EventHandler<TArgs> handler)
          where TArgs : EventArgs;

      // Explicit delegate - TListener is the type that defines the handler
      public static EventHandler WrapInDelegateCall<TListener>(EventHandler handler);
      public static EventHandler<TArgs> WrapInDelegateCall<TArgs, TListener>(EventHandler<TArgs> handler)
          where TArgs : EventArgs;
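
    The excerpt shows only the signatures; a minimal sketch of how the explicit-delegate variant might be implemented (an illustration using an open instance delegate and a WeakReference, not the article's actual code; it assumes an instance-method handler):

      public static EventHandler WrapInDelegateCall<TListener>(EventHandler handler)
          where TListener : class
      {
          // Hold the listener weakly so the subscription does not keep it alive.
          var weakTarget = new WeakReference(handler.Target);

          // Open instance delegate: the target is supplied at invocation time.
          var openHandler = (Action<TListener, object, EventArgs>)Delegate.CreateDelegate(
              typeof(Action<TListener, object, EventArgs>), null, handler.Method);

          return (sender, args) =>
          {
              var target = weakTarget.Target as TListener;
              if (target != null) // if the listener was collected, do nothing
                  openHandler(target, sender, args);
          };
      }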

    Read the article

  • Parallelism in .NET – Part 19, TaskContinuationOptions

    - by Reed
    My introduction to Task continuations demonstrated continuations on the Task class. In addition, I've shown how continuations allow handling of multiple tasks in a clean, concise manner. Continuations can also be used to handle exceptional situations using a clean, simple syntax. In addition to standard Task continuations, the Task class provides some options for filtering continuations automatically. This is handled via the TaskContinuationOptions enumeration, which provides hints to the TaskScheduler that it should only continue based on the operation of the antecedent task. This is especially useful when dealing with exceptions. For example, we can extend the sample from our earlier continuation discussion to include support for handling exceptions thrown by the Factorize method:

      // Get a copy of the UI-thread task scheduler up front to use later
      var uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

      // Start our task
      var factorize = Task.Factory.StartNew(() =>
      {
          int primeFactor1 = 0;
          int primeFactor2 = 0;
          bool result = Factorize(10298312, ref primeFactor1, ref primeFactor2);
          return new { Result = result, Factor1 = primeFactor1, Factor2 = primeFactor2 };
      });

      // When we succeed, report the results to the UI
      factorize.ContinueWith(task =>
          textBox1.Text = string.Format("{0}/{1} [Succeeded {2}]",
              task.Result.Factor1, task.Result.Factor2, task.Result.Result),
          CancellationToken.None,
          TaskContinuationOptions.NotOnFaulted,
          uiScheduler);

      // When we have an exception, report it
      factorize.ContinueWith(task =>
          textBox1.Text = string.Format("Error: {0}", task.Exception.Message),
          CancellationToken.None,
          TaskContinuationOptions.OnlyOnFaulted,
          uiScheduler);

    The above code works by using a combination of features. First, we schedule our task, the same way as in the previous example. However, in this case, we use a different overload of Task.ContinueWith which allows us to specify both a specific TaskScheduler (in order to have the continuation run on the UI's synchronization context) and a TaskContinuationOptions value. In the first continuation, we tell the continuation that we only want it to run when there was not an exception by specifying TaskContinuationOptions.NotOnFaulted. When our factorize task completes successfully, this continuation will automatically run on the UI thread and provide the appropriate feedback. However, if the factorize task has an exception (for example, if the Factorize method throws an exception due to an improper input value), the second continuation will run. This occurs due to the specification of TaskContinuationOptions.OnlyOnFaulted in the options. In this case, we'll report the error received to the user. We can use TaskContinuationOptions to filter our continuations by whether or not an exception occurred and whether or not a task was cancelled.
This allows us to handle many situations, and is especially useful when trying to maintain a valid application state without ever blocking the user interface.  The same concepts can be extended even further, and allow you to chain together many tasks based on the success of the previous ones.  Continuations can even be used to create a state machine with full error handling, all without blocking the user interface thread.
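
    As an illustration of such chaining (a sketch, not from the original post; LoadData, Process, Save, and Log are assumed methods):

      var load = Task.Factory.StartNew(() => LoadData());

      // Runs only when LoadData completed without an exception.
      var process = load.ContinueWith(
          t => Process(t.Result),
          TaskContinuationOptions.OnlyOnRanToCompletion);

      var save = process.ContinueWith(
          t => Save(t.Result),
          TaskContinuationOptions.OnlyOnRanToCompletion);

      // Each stage gets its own error handler; a faulted antecedent causes
      // the OnlyOnRanToCompletion continuations to be canceled, not faulted.
      load.ContinueWith(t => Log(t.Exception), TaskContinuationOptions.OnlyOnFaulted);
      process.ContinueWith(t => Log(t.Exception), TaskContinuationOptions.OnlyOnFaulted);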

    Read the article

  • How to Install vaio-control-center from source?

    - by KasiyA
    I have a problem with turning off the keyboard backlight, which I asked about here with no useful answers. After searching the internet I found a package for the VAIO control center and downloaded it from here, but I don't know how to install it. This is the output of trying one solution:

      USER@XXXXpc:~/vaio-control-center-0.1$ ls
      compile  Makefile  run  vaio-control-center  vcc
      COPYING  moc_main_window.cpp  ui_main_window.h  vaio-control-center.pro
      USER@XXXXpc:~/vaio-control-center-0.1$ ./compile
      make: *** No rule to make target `/usr/share/qt/mkspecs/linux-g++-64/qmake.conf', needed by `Makefile'. Stop.
      USER@XXXXpc:~/vaio-control-center-0.1$ ./run
      ./run: 3: ./run: ./vaio-control-center: not found
      USER@XXXXpc:~/vaio-control-center-0.1$

    Updated: I also tried @pandya's suggestion from here, and the output is as follows:

      root@user-pc:/# cd /home/user/vaio-f11-linux.control-center
      root@user-pc:/home/user/vaio-f11-linux.control-center# ls
      compile  COPYING  resource.qrc  run  sony-acpid  vaio-control-center.pro  vcc
      root@user-pc:/home/user/vaio-f11-linux.control-center# ./compile
      -su: ./compile: Permission denied
      root@user-pc:/home/user/vaio-f11-linux.control-center# sudo ./compile
      sudo: ./compile: command not found
      root@user-pc:/home/user/vaio-f11-linux.control-center# gksudo ./compile
      root@user-pc:/home/user/vaio-f11-linux.control-center#    <----- nothing happened here
      root@user-pc:/home/user/vaio-f11-linux.control-center# ./run
      -su: ./run: Permission denied
      root@user-pc:/home/user/vaio-f11-linux.control-center# sudo ./run
      sudo: ./run: command not found
      root@user-pc:/home/user/vaio-f11-linux.control-center# gksudo ./run
      root@user-pc:/home/user/vaio-f11-linux.control-center#    <----- nothing happened here

    After running that I didn't see any effect on the keyboard backlight.

    Read the article

  • Building a better .NET Application Configuration Class - revisited

    - by Rick Strahl
    Managing configuration settings is an important part of successful applications. You should be able to easily access and modify configuration values within your applications; if it's not easy, things don't get parameterized as much as they should. In this post I discuss a custom application configuration class that makes it super easy to create reusable configuration objects in your applications using a code-first approach, with the ability to persist configuration information into various types of configuration stores.
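
    The article's own class is richer than this, but the code-first idea can be sketched in a few lines (an illustration, not Rick's implementation; the setting names and the appSettings store are assumptions):

      using System.Configuration;

      public class AppSettings
      {
          public int MaxPageSize { get; set; }
          public string MailServer { get; set; }

          public AppSettings()
          {
              // Code-first defaults...
              MaxPageSize = 50;
              MailServer = "localhost";

              // ...optionally overridden from the configuration store.
              var size = ConfigurationManager.AppSettings["MaxPageSize"];
              if (size != null) MaxPageSize = int.Parse(size);

              MailServer = ConfigurationManager.AppSettings["MailServer"] ?? MailServer;
          }
      }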

    Read the article

  • Interesting/Innovative Open Source tools for indie games

    - by Gastón
    Just out of curiosity, I want to know open source tools or projects that can add some interesting features to indie games, preferably those that could otherwise only be found in big-budget games. EDIT: As suggested by The Communist Duck and Joe Wreschnig, I'm putting the examples as answers. EDIT 2: Please do not post tools like PyGame, Inkscape, Gimp, Audacity, Slick2D, Phys2D, Blender (except for interesting plugins) and the like. I know they are great tools/libraries, and some would argue essential to developing good games, but I'm looking for rarer projects. It could be something really specific or niche, like generating realistic trees and plants, or realistic AI for animals.

    Read the article

  • .NET Properties - Use Private Set or ReadOnly Property?

    - by tgxiii
    In what situation should I use a Private Set on a property versus making it a ReadOnly property? Take into consideration the two very simplistic examples below.

    First example:

      Public Class Person
          Private _name As String
          Public Property Name As String
              Get
                  Return _name
              End Get
              Private Set(ByVal value As String)
                  _name = value
              End Set
          End Property
          Public Sub WorkOnName()
              Dim txtInfo As TextInfo = Threading.Thread.CurrentThread.CurrentCulture.TextInfo
              Me.Name = txtInfo.ToTitleCase(Me.Name)
          End Sub
      End Class

      // ----------
      public class Person
      {
          private string _name;
          public string Name
          {
              get { return _name; }
              private set { _name = value; }
          }
          public void WorkOnName()
          {
              TextInfo txtInfo = System.Threading.Thread.CurrentThread.CurrentCulture.TextInfo;
              this.Name = txtInfo.ToTitleCase(this.Name);
          }
      }

    Second example:

      Public Class AnotherPerson
          Private _name As String
          Public ReadOnly Property Name As String
              Get
                  Return _name
              End Get
          End Property
          Public Sub WorkOnName()
              Dim txtInfo As TextInfo = Threading.Thread.CurrentThread.CurrentCulture.TextInfo
              _name = txtInfo.ToTitleCase(_name)
          End Sub
      End Class

      // ---------------
      public class AnotherPerson
      {
          private string _name;
          public string Name
          {
              get { return _name; }
          }
          public void WorkOnName()
          {
              TextInfo txtInfo = System.Threading.Thread.CurrentThread.CurrentCulture.TextInfo;
              _name = txtInfo.ToTitleCase(_name);
          }
      }

    They both yield the same results. Is this a situation where there's no right and wrong, and it's just a matter of preference?

    Read the article

  • ASP.NET ViewState Tips and Tricks #1

    - by João Angelo
    In User Controls or Custom Controls, DO NOT use ViewState to store non-public properties. Persisting non-public properties in ViewState results in loss of functionality if the Page hosting the controls has ViewState disabled, since the page can no longer reset the values of those non-public properties on page load. Example:

      public class ExampleControl : WebControl
      {
          private const string PublicViewStateKey = "Example_Public";
          private const string NonPublicViewStateKey = "Example_NonPublic";

          // DO
          public int Public
          {
              get
              {
                  object o = this.ViewState[PublicViewStateKey];
                  if (o == null)
                      return default(int);
                  return (int)o;
              }
              set { this.ViewState[PublicViewStateKey] = value; }
          }

          // DO NOT
          private int NonPublic
          {
              get
              {
                  object o = this.ViewState[NonPublicViewStateKey];
                  if (o == null)
                      return default(int);
                  return (int)o;
              }
              set { this.ViewState[NonPublicViewStateKey] = value; }
          }
      }

      // Page with ViewState disabled
      public partial class ExamplePage : Page
      {
          protected override void OnLoad(EventArgs e)
          {
              base.OnLoad(e);
              this.Example.Public = 10;    // Restore Public value
              this.Example.NonPublic = 20; // Compile Error!
          }
      }

    Read the article

  • Hosting a web application on discountasp.net using sql ce 5

    - by David Stanley
    I am hoping that someone has experience with this, since the DiscountASP site is very lacking in straightforward answers. I am building a lightweight web application and have decided to use SQL CE as its database. Two questions regarding this:
    1. Do I need an actual hosted database as well as the site in order for it to work?
    2. Do you know if DiscountASP supports the use of SQL CE (not with WebMatrix or any CMS builds; completely custom)?
    If they don't, do you have any experience/recommendations with getting this done?

    Read the article

  • Secure login for a game that is open source

    - by David Park
    I am making a game which I will be open-sourcing. It's a simple arcade-like game, but it requires a network connection because it is meant to be played with other people. The thing I am worrying about is: how can I be sure that the client connecting is the one I put out for the end user to play with? Kind of like sv_pure for Team Fortress 2. I was thinking of different ways to combat this, such as the server requesting the client's version or even its MD5 hash, but people with simple Java knowledge could just force a method to always return what the server wants.

    Read the article

  • Suggestions for getting an open source electron beam tracking code going

    - by Boaz
    I work in the field of accelerator physics and synchrotron radiation. High-energy electrons circulating in large rings of magnets produce x-rays that are used for a variety of different kinds of science. Running and improving these facilities requires controlling and modelling the electron beam as it circulates in the ring. A code to model this basically requires trackers to follow the electrons through the elements (something called a symplectic integrator), and then the computation of different parameters associated with this motion. The problem with these codes is that every facility has its own. In principle the code is not so complex, and as a modelling project one might think it has some general interest: who doesn't want to be able to create a track in space out of magnets and watch the electrons circulate? There is a Matlab-based code to do this called Accelerator Toolbox, but the creator of the code is no longer in the field. I put the code on SourceForge under the name atcollab. The basic resource is in C: the set of symplectic integrators, which are available in the atcollab code here. It has been useful to put the code on SourceForge in order to exchange code, but the community of users is quite small and most are too busy to put much time into collaboration, so in terms of really improving the code I don't think it has been so successful. Any piece of this picture could be recreated without that much difficulty, but overall it is a bit complex, and because each lab has its own installation with lots of add-on Matlab code, people find it hard to really work together and share code. Somehow I think we need to involve a wider community in our development, or just use some standard tools. But for that, I suppose it needs to be of some general interest; I think symplectic integrators may have some general interest, and the part about a plug-in architecture to build up the ring ought to fit other patterns. The other option is to just accept that this is not a problem of general interest, and work harder within our small community. Suggestions, or anecdotes of analogous experience, would be appreciated.
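
    For readers unfamiliar with the term, the simplest symplectic integrator fits in a few lines; this sketch (in C#, for a unit-mass particle in one dimension) is a generic illustration, not the atcollab integrators themselves:

      using System;

      // One leapfrog (drift-kick-drift) step: advance position x and momentum p
      // by dt under the force field F(x). Being symplectic, it preserves
      // phase-space volume and shows excellent long-term energy behavior.
      static void LeapfrogStep(ref double x, ref double p, double dt, Func<double, double> force)
      {
          x += 0.5 * dt * p;   // drift: half step in position
          p += dt * force(x);  // kick: full step in momentum
          x += 0.5 * dt * p;   // drift: second half step
      }

      // Example: harmonic oscillator, F(x) = -x
      // for (int i = 0; i < steps; i++) LeapfrogStep(ref x, ref p, 0.01, q => -q);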

    Read the article
