Search Results

Search found 19525 results on 781 pages for 'say'.


  • not using partial mocking? do they also mean in web-app?

    - by 01
    I'm learning Mockito, and in chapter 16 they say you should not use partial mocking in a new system. I disagree: for example, in one of my actions I use partial mocking for static framework methods, SQL calls, etc. I extracted that stuff into methods and then mock those methods in tests. Most of those methods are specific to this action and won't be called from other actions, so it isn't worth extracting them into separate components. I agree that you shouldn't use partial mocking inside frameworks, but not in hard-to-mock actions. What are the downsides of using partial mocking in a web app?
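
    For reference, a minimal sketch of the kind of partial mock described above, using Mockito's spy support; the action class and method names are hypothetical, not taken from the question:

    import static org.mockito.Mockito.*;
    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class SaveOrderActionTest {

        // Hypothetical action whose hard-to-mock pieces were extracted into methods.
        static class SaveOrderAction {
            String execute() { return process(loadFromDatabase()); }
            String loadFromDatabase() { /* JDBC / static framework call in real code */ return "row"; }
            String process(String row) { return "processed " + row; }
        }

        @Test
        public void executeUsesStubbedDatabaseCall() {
            SaveOrderAction action = spy(new SaveOrderAction());

            // Stub only the extracted method; everything else stays real (partial mock).
            doReturn("fake row").when(action).loadFromDatabase();

            assertEquals("processed fake row", action.execute());
            verify(action).loadFromDatabase();
        }
    }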

    Read the article

  • Downloading file from server (asp.net) to IE8 Content-Disposition problem with file name

    - by David
    I am downloading a file from the server/database via an aspx page. When using Content-Disposition: inline, the document opens in the correct application, but the file name is the same as the web page's. I want the document to open in, say, MS Word but with the correct file name. Here is the code that I am using:

    Response.Buffer = true;
    Response.ClearContent();
    Response.ClearHeaders();
    Response.Clear();
    Response.ContentType = MimeType(fileName); // function to return the correct MIME type
    Response.AddHeader("Content-Disposition", @"inline;filename=" + fileName);
    Response.AddHeader("Content-Length", image.Length.ToString());
    Response.BinaryWrite(image);
    Response.Flush();
    Response.Close();

    So again, I want the file to open in MS Word with the correct document file name so that the user can properly save/view it. Ideas? Thanks
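
    A commonly suggested tweak (an assumption, not something stated in the question) is to quote the file name and keep a space after the semicolon in the Content-Disposition header, which IE tends to honour more reliably; a sketch reusing the question's MimeType helper:

    Response.Clear();
    Response.ContentType = MimeType(fileName);                 // helper from the question
    Response.AddHeader("Content-Disposition",
        "inline; filename=\"" + fileName + "\"");              // quoted name, space after ';'
    Response.AddHeader("Content-Length", image.Length.ToString());
    Response.BinaryWrite(image);
    Response.Flush();
    Response.End();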

    Read the article

  • Is the switch to Dvorak worth it?

    - by Kevin Weil
    To those who were experienced ( 70 WPM, say) typists before the switch to Dvorak -- were you faster after switching? There are a couple good SO threads on Dvorak, but they are more on how to learn or reduction in typing pain than speed before/after. I know it will take me 1-2 months to feel comfortable, but I want to know if I should expect to be faster afterward. I am a programmer and type maybe 90-110 WPM on QWERTY. EDIT: I agree that coding is not typically IO-bound, and that a minimum typing speed is sufficient. This is half from curiosity, but it will be an undertaking to achieve QWERTY parity, so I want to know if I should at least expect some asymptotic improvement.

    Read the article

  • firefox: How to enable local javascript to read/write files on my PC?

    - by Nok Imchen
    Well, Greasemonkey can't read/write files on the local hard disk, and I've heard people suggesting Google Gears, but I have no idea about Gears :( So, what I've decided is to add a <script type="text/javascript" src="file:///c:/test.js"></script> tag. This test.js will use FileSystemObject to read/write files. Since "file:///c:/test.js" is a JavaScript file from the local hard disk, it should probably be able to read/write files on my local hard disk. I tried it, but Firefox prevented the "file:///c:/test.js" script from reading/writing files on the local disk :( What I want to know is: is there any setting in Firefox's about:config where we can specify that a particular script, say from a local file or from xyz.com, has read/write permission on my local disk files?

    Read the article

  • Rewrite rules for subfolders

    - by pg
    This may seem like a silly question but I can't figure it out. Let's say I have a public_html folder with various folders like Albatross, Blackbirds, Crows and Faqs. I want to make it so that any traffic to Albatross/faqs.php, Blackbirds/faqs.php, Crows/faqs.php etc. will see the file at faqs/faqs.php?bird=albatross or faqs/faqs.php?bird=crows or what have you. If I go into the Albatross folder's .htaccess file I can do this: RewriteRule faqs.php$ /faqs/faqs.php?cat=albatross [QSA] which works fine, but I want to put something in the top-level .htaccess that works for all of them, so I tried: RewriteRule faqs.php$ /faqs/faqs.php?cat=albatross [QSA] and RewriteRule /(.*)/faqs.php$ /faqs/faqs.php?cat=$1 [QSA] and even RewriteRule /albatross/faqs.php$ /faqs/faqs.php?cat=albatross [QSA] and various others, but nothing seems to work; when I go to http://www.birdsandwhatnot.com/albatross/faqs.php I see the same file the same way it's always been. Does the presence of an .htaccess file in the subfolder conflict with the higher-up .htaccess file? Am I missing something?
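
    For what it's worth, a sketch of a top-level .htaccess that usually handles this case, under two assumptions: in per-directory (.htaccess) context the matched path has no leading slash, and a subfolder's own rewrite rules override the parent's unless the subfolder opts into inheritance:

    # public_html/.htaccess (sketch)
    RewriteEngine On

    # Per-directory patterns are matched without a leading slash;
    # capture the first path segment as the category.
    RewriteRule ^([^/]+)/faqs\.php$ /faqs/faqs.php?cat=$1 [QSA,L]

    # If a subfolder (e.g. Albatross/.htaccess) has its own rewrite rules, it takes over;
    # either remove those rules or add "RewriteOptions Inherit" there.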

    Read the article

  • StructureMap : How to define default constructor by code ?

    - by cik
    Hi, I can't figure out how to define the default constructor (when overloads exist) for a type in StructureMap (version 2.5) in code. I want to get an instance of a service, and the container has to inject a Linq2Sql data context instance into it. I wrote this in my 'bootstrapper' method: ForRequestedType<MyDataContext>().TheDefault.Is.OfConcreteType<MyDataContext>(); When I run my app, I get this error: StructureMap Exception Code: 202 No Default Instance defined for PluginFamily MyNamespace.Data.SqlRepository.MyDataContext, MyNamespace.Data, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null. If I comment out all the Linq2Sql-generated constructors that I don't need, it works fine. Update: oh, and I forgot to say that I would rather not use the [StructureMap.DefaultConstructor] attribute.
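
    If I recall correctly, StructureMap's registry DSL has a SelectConstructor method for exactly this situation; a hedged sketch (whether the parameterless constructor is the one you want is an assumption about the generated data context):

    using StructureMap.Configuration.DSL;

    public class DataRegistry : Registry
    {
        public DataRegistry()
        {
            // Tell StructureMap which of the overloaded constructors to use.
            SelectConstructor<MyDataContext>(() => new MyDataContext());

            // The registration from the question stays as it was.
            ForRequestedType<MyDataContext>()
                .TheDefault.Is.OfConcreteType<MyDataContext>();
        }
    }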

    Read the article

  • Need help with NHibernate / Fluent NHibernate mapping

    - by Mark Boltuc
    Let's say you have the following table structure:

    Case:    Id (int), ReferralType (varchar(10)), ReferralId (int)
    SourceA: Id (int), Name (varchar(50))
    SourceB: Id (int), Name (varchar(50))
    SourceC: Id (int), Name (varchar(50))

    Based on the ReferralType, the ReferralId contains the id of a row in SourceA, SourceB, or SourceC. I'm trying to figure out how to map this using Fluent NHibernate (or just plain NHibernate) into an object model. I've tried a bunch of different things but I haven't been successful. Any ideas? The object model might be something like:

    public class Case
    {
        public int Id { get; set; }
        public Referral Referral { get; set; }
    }

    public class Referral
    {
        public string Type { get; set; }
        public int Id { get; set; }
        public string Name { get; set; }
    }
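
    One approach worth investigating (not confirmed by the question) is NHibernate's any-mapping, where the referenced property is a shared interface rather than the Referral wrapper above; a hedged Fluent NHibernate sketch, assuming SourceA, SourceB and SourceC are mapped entities implementing a common IReferralSource and that the meta values match what ReferralType actually stores:

    using FluentNHibernate.Mapping;

    public interface IReferralSource
    {
        int Id { get; set; }
        string Name { get; set; }
    }

    public class Case
    {
        public virtual int Id { get; set; }
        public virtual IReferralSource Referral { get; set; }
    }

    public class CaseMap : ClassMap<Case>
    {
        public CaseMap()
        {
            Table("Case");
            Id(x => x.Id);

            // <any> mapping: ReferralType says which table, ReferralId holds the key.
            ReferencesAny(x => x.Referral)
                .EntityTypeColumn("ReferralType")
                .EntityIdentifierColumn("ReferralId")
                .IdentityType<int>()
                .AddMetaValue<SourceA>("A")
                .AddMetaValue<SourceB>("B")
                .AddMetaValue<SourceC>("C");
        }
    }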

    Read the article

  • Validation in asp.net MVC - validation only to happen when "asked"

    - by jeriley
    I've got a slightly different validation requirement than the usual "when you save, validate". The system should allow someone to update a bunch of times without being "bothered" with a validation listing UNTIL they say the record is complete. I thought I might be able to pull a fast one and put [HandleError] on the method where this would fire, but that validates on every save. Is there an attribute I can put on my custom validator that can handle this, or am I going to have to write my own HandleError attribute?
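
    One way to sketch this (controller, model and action names here are hypothetical, not from the question) is to skip validation on ordinary saves and only run the validators when the user marks the item complete:

    public class DraftModel
    {
        public int Id { get; set; }
        // ... properties carrying the usual validation attributes ...
    }

    public class DraftController : Controller
    {
        [HttpPost]
        public ActionResult Save(DraftModel model, bool markComplete = false)
        {
            if (!markComplete)
            {
                ModelState.Clear();          // plain save: ignore validation errors
                SaveDraft(model);
                return RedirectToAction("Edit", new { id = model.Id });
            }

            // "Complete" requested: now run validation and show the listing if it fails.
            if (!TryValidateModel(model))
                return View("Edit", model);

            SaveFinal(model);
            return RedirectToAction("Details", new { id = model.Id });
        }

        private void SaveDraft(DraftModel model) { /* persistence omitted */ }
        private void SaveFinal(DraftModel model) { /* persistence omitted */ }
    }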

    Read the article

  • get whole url with # in php

    - by Prashant
    Hello all. I want to get the whole URL from the address bar, along with the # fragment, if any, using PHP. Say, for example, my URL is http://test.com/#/test. Using $_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI'] I am getting only http://test.com, i.e. everything after the # is stripped off. If it is not possible with PHP, suggest how to do it using JavaScript. Please help me out. Thanks.
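
    For context: browsers never send the part after # to the server, so PHP alone cannot see it; it has to be read on the client and passed along explicitly. A small JavaScript sketch (the hidden-field id is an assumption):

    // Read the full URL and the fragment on the client side.
    var fullUrl  = window.location.href;   // e.g. "http://test.com/#/test"
    var fragment = window.location.hash;   // e.g. "#/test"

    // One way to hand it to PHP: copy it into a hidden form field before submitting,
    // or append it as a normal query-string parameter on the next request.
    var field = document.getElementById('fragmentField');
    if (field) {
        field.value = fragment;
    }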

    Read the article

  • MS Access 2003 - ordering the string values for a chart not alphabetical

    - by Justin
    Here is a silly question. Let's say I have a query that produces values for a list box, for three stores: Store A 18, Store B 32, Store C 54. Now, if I ORDER BY in the SQL statement, the only thing it will do is sort ascending or descending alphabetically, but I want a certain order (only because THEY WANT A CERTAIN ORDER)... so is there a way for me to add something to the SQL to get Store B, Store C, Store A – i.e. basically, row by row, the order I want? Thanks!
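
    One common trick (table and column names below are assumptions) is to order by an expression that assigns each store an explicit rank, for example with Access's Switch function:

    SELECT StoreName, Total
    FROM StoreTotals
    ORDER BY Switch(StoreName = 'Store B', 1,
                    StoreName = 'Store C', 2,
                    StoreName = 'Store A', 3);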

    Read the article

  • Sorting Algorithms

    - by MarkPearl
    General

    Every time I go back to university I find myself wading through sorting algorithms and their implementation in C++. Up to now I haven't really appreciated their true value. However, as I discovered this last week with Dictionaries in C#, having a knowledge of some basic programming principles can greatly improve the performance of a system and make one think twice about how to tackle a problem. I'm going to cover briefly in this post the following: Selection Sort, Insertion Sort, Shellsort, Quicksort, Mergesort, Heapsort (not complete).

    Selection Sort

    Array-based selection sort is a simple approach to sorting an unsorted array. Simply put, it repeats two basic steps to achieve a sorted collection. It starts with a collection of data and repeatedly parses it, each time sorting out one element and reducing the size of the next iteration of parsed data by one. So the first iteration would go something like this: go through the entire array of data and find the lowest value, then place that value at the front of the array. The second iteration would go something like this: go through the array from position two (position one has already been sorted with the smallest value), find the next lowest value in the array, and place that value at the second position in the array. This process is repeated until the entire array has been sorted.

    A positive about selection sort is that it does not make many item movements. In fact, in a worst case scenario every item is only moved once. Selection sort is, however, a comparison-intensive sort. If you had 10 items in a collection, just to parse the collection you would have 10+9+8+7+6+5+4+3+2=54 comparisons, regardless of how sorted the collection was to start with. If you think about it, if you applied selection sort to a collection that was already sorted, you would still perform roughly the same number of iterations as if it was not sorted at all. Many of the following algorithms try to reduce the number of comparisons if the list is already sorted, leaving one with a best case and a worst case scenario for comparisons. Likewise, different approaches have different levels of item movement. Depending on what is more expensive, a comparison or an item move, one may give priority to one approach over another.

    Insertion Sort

    Insertion sort tries to reduce the number of key comparisons it performs compared to selection sort by not "doing anything" if things are sorted. Assume you had a collection of numbers in the following order: 10 18 25 30 23 17 45 35. There are 8 elements in the list. If we were to start at the front of the list, 10 18 25 & 30 are already sorted. Element 5 (23) however is smaller than element 4 (30) and so needs to be repositioned. We do this by copying the value at element 5 to a temporary holder, and then begin shifting the elements before it up one. So:

    Element 5 is copied to a temporary holder: 10 18 25 30 23 17 45 35 – T 23
    Element 4 shifts to element 5: 10 18 25 30 30 17 45 35 – T 23
    Element 3 shifts to element 4: 10 18 25 25 30 17 45 35 – T 23
    Element 2 (18) is smaller than the temporary holder, so we put the temporary holder value into element 3: 10 18 23 25 30 17 45 35 – T 23

    We now have a sorted list up to element 5. And so we would repeat the same process by moving element 6 to a temporary value and then shifting everything up by one from element 2 to element 5.
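
    As a concrete companion to the walkthrough above, a small C# sketch of array-based insertion sort (illustrative code, not from the original post):

    static void InsertionSort(int[] a)
    {
        for (int i = 1; i < a.Length; i++)
        {
            int temp = a[i];              // copy the out-of-place value to a temporary holder
            int j = i - 1;

            // Shift larger, already-sorted elements up one position.
            while (j >= 0 && a[j] > temp)
            {
                a[j + 1] = a[j];
                j--;
            }

            a[j + 1] = temp;              // drop the held value into its sorted slot
        }
    }
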
    As you can see, one major setback for this technique is the shifting of values up one – this is because up to now we have been considering the collection to be an array. If however the collection was a linked list, we would not need to shift values up, but merely remove the link from the unsorted value and "reinsert" it in a sorted position, which would reduce the number of transactions performed on the collection. So, insertion sort seems to perform better than selection sort – however, an implementation is slightly more complicated. This is typical with most sorting algorithms – generally, greater performance leads to greater complexity. Also, insertion sort performs better if a collection of data is already sorted. If for instance you were handed a sorted collection of size n, then only n comparisons would need to be performed to verify that it is sorted. It's important to note that insertion sort (array based) performs a number of item moves – every time an item is "out of place", several items before it get shifted up.

    Shellsort – Diminishing Increment Sort

    So up to now we have covered Selection Sort & Insertion Sort. Selection Sort makes many comparisons, and insertion sort (with an array) has the potential of making many item movements. Shellsort is an approach that takes the normal insertion sort and tries to reduce the number of item movements. In Shellsort, elements in a collection are viewed as sub-collections of a particular size. Each sub-collection is sorted so that the elements that are far apart move closer to their final position. Suppose we had a collection of 15 elements:

    10 20 15 45 36 48 7 60 18 50 2 19 43 30 55

    First we may view the collection as 7 sub-collections and sort each sublist, let's say at intervals of 7:

    10 60 55 – 20 18 – 15 50 – 45 2 – 36 19 – 48 43 – 7 30
    10 55 60 – 18 20 – 15 50 – 2 45 – 19 36 – 43 48 – 7 30 (Sorted)

    We then sort each sublist at a smaller interval – let's say 4:

    10 55 60 18 – 20 15 50 2 – 45 19 36 43 – 48 7 30
    10 18 55 60 – 2 15 20 50 – 19 36 43 45 – 7 30 48 (Sorted)

    We then sort elements at a distance of 1 (i.e. we apply a normal insertion sort):

    10 18 55 60 2 15 20 50 19 36 43 45 7 30 48
    2 7 10 15 18 19 20 30 36 43 45 48 50 55 60 (Sorted)

    The important thing with shellsort is deciding on the increment sequence of each sub-collection. From what I can tell, there isn't any definitive method, and depending on the order of your elements, different increment sequences may perform better than others. There are however certain increment sequences that you may want to avoid. An even-based increment sequence (e.g. 2 4 8 16 32 …) should typically be avoided because it does not allow even elements to be compared with odd elements until the final sort phase – which in a way would negate many of the benefits of using sub-collections. The performance of Shellsort in terms of the number of comparisons and item movements is hard to determine; however, it is considered to be considerably better than the normal insertion sort.

    Quicksort

    Quicksort uses a divide and conquer approach to sort a collection of items. The collection is divided into two sub-collections, and the two sub-collections are sorted and combined into one list in such a way that the combined list is sorted. The algorithm in general pseudo code is below:

    Divide the collection into two sub-collections
    Quicksort the lower sub-collection
    Quicksort the upper sub-collection
    Combine the lower & upper sub-collections together

    As hinted at above, quicksort uses recursion in its implementation.
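
    A compact C# sketch of that recursive scheme (again illustrative, not from the original post), using the last element of each range as the pivot:

    static void QuickSort(int[] a, int low, int high)
    {
        if (low >= high) return;                          // base case: zero or one element

        int pivot = a[high];                              // pivot choice: last element
        int i = low;
        for (int j = low; j < high; j++)
        {
            if (a[j] <= pivot)
            {
                int t = a[i]; a[i] = a[j]; a[j] = t;      // move smaller values to the lower part
                i++;
            }
        }
        int tmp = a[i]; a[i] = a[high]; a[high] = tmp;    // pivot lands in its final slot

        QuickSort(a, low, i - 1);                         // sort the lower sub-collection
        QuickSort(a, i + 1, high);                        // sort the upper sub-collection
    }
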
    The real trick with quicksort is to get the lower and upper sub-collections to be of equal size. The size of a sub-collection is determined by what value the pivot is. Once a pivot is determined, one partitions into sub-collections and then repeats the process on each sub-collection until you reach the base case. With quicksort, the work is done when dividing the sub-collections into lower & upper collections. The actual combining of the lower & upper sub-collections at the end is relatively simple, since every element in the lower sub-collection is smaller than the smallest element in the upper sub-collection.

    Mergesort

    With quicksort, the average-case complexity was O(n log2 n); however, the worst-case complexity was still O(n^2). Mergesort improves on quicksort by always having a complexity of O(n log2 n) regardless of the best or worst case. So how does it do this? Mergesort makes use of the divide and conquer approach to partition a collection into two sub-collections. It then sorts each sub-collection and combines the sorted sub-collections into one sorted collection. The general algorithm for mergesort is as follows:

    Divide the collection into two sub-collections
    Mergesort the first sub-collection
    Mergesort the second sub-collection
    Merge the first sub-collection and the second sub-collection

    As you can see, it still pretty much looks like quicksort – so let's see where it differs. Firstly, mergesort differs from quicksort in how it partitions the sub-collections. Instead of having a pivot, mergesort partitions each sub-collection based on size, so that the first and second sub-collections are of relatively the same size. This dividing keeps getting repeated until the sub-collections are the size of a single element. If a sub-collection is one element in size – it is now sorted! So the trick is: how do we put all these sub-collections together so that they maintain their sorted order? Sorted sub-collections are merged into a sorted collection by comparing the elements of the sub-collections and then adjusting the sorted collection. Let's have a look at a few examples.

    Assume 2 sub-collections with 1 element each: 10 & 20. Compare the first element of the first sub-collection with the first element of the second sub-collection. Take the smaller of the two and place it as the first element in the sorted collection. In this scenario 10 is smaller than 20, so 10 is taken from sub-collection 1, leaving that sub-collection empty, which means by default the next smallest element is in sub-collection 2 (20). So the sorted collection would be 10 20.

    Let's assume 2 sub-collections with 2 elements each: 10 20 & 15 19. So again we would:

    Compare 10 with 15 – 10 is the winner, so we add it to our sorted collection (10), leaving us with 20 & 15 19
    Compare 20 with 15 – 15 is the winner, so we add it to our sorted collection (10 15), leaving us with 20 & 19
    Compare 20 with 19 – 19 is the winner, so we add it to our sorted collection (10 15 19), leaving us with 20 & _
    20 is by default the winner, so our sorted collection is 10 15 19 20.

    Make sense?

    Heapsort (still needs to be completed)

    So by now I am tired of sorting algorithms and trying to remember why they were so important. I think every year I go through this stuff I wonder to myself why we are made to learn about selection sort and insertion sort if they are so bad – why didn't we just skip to Mergesort & Quicksort.
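
    Before moving on to heaps, here is the merge step from the mergesort walkthrough as a small C# sketch (assumes both inputs are already sorted); Merge(new[] { 10, 20 }, new[] { 15, 19 }) returns 10 15 19 20, matching the worked example:

    static int[] Merge(int[] left, int[] right)
    {
        var result = new int[left.Length + right.Length];
        int l = 0, r = 0, i = 0;

        // Repeatedly take the smaller front element of the two sorted sub-collections.
        while (l < left.Length && r < right.Length)
            result[i++] = left[l] <= right[r] ? left[l++] : right[r++];

        // One side is exhausted; copy whatever remains from the other.
        while (l < left.Length) result[i++] = left[l++];
        while (r < right.Length) result[i++] = right[r++];

        return result;
    }
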
    I guess the only explanation I have for this is that sometimes you learn things so that you can implement them in future – and other times you learn things so that you know it isn't the best way of implementing things and that you don't need to implement it in future. Anyhow, luckily this is going to be the last one of my sorts for today.

    The first step in heapsort is to convert a collection of data into a heap. After the data is converted into a heap, sorting begins. So what is the definition of a heap? If we have to convert a collection of data into a heap, how do we know when it is a heap and when it is not? The definition of a heap is as follows: a heap is a list in which each element contains a key, such that the key in the element at position k in the list is at least as large as the key in the element at position 2k + 1 (if it exists) and 2k + 2 (if it exists).

    Does that make sense? At first glance I'm thinking what the heck??? But then after re-reading my notes I see that we are doing something different – up to now we have really looked at data as an array or sequential collection of data that we need to sort – a heap represents data in a slightly different way. Although the data is stored in a sequential collection, for a sequential collection of data to be a valid heap it only has to be "semi sorted". Let me try and explain a bit further with an example.

    Example 1 of Potential Heap Data

    Assume we had a collection of numbers as follows: 1[1] 2[2] 3[3] 4[4] 5[5] 6[6]. For this to be a valid heap, the element with value 1 at position [1] needs to be greater than or equal to the elements at position [3] (2k + 1) and position [4] (2k + 2). So in the above example, the collection of numbers is not a valid heap.

    Example 2 of Potential Heap Data

    Let's look at another collection of numbers as follows: 6[1] 5[2] 4[3] 3[4] 2[5] 1[6]. Is this a valid heap? Well, the element with the value 6 at position 1 must be greater than or equal to the elements at position [3] and position [4]. Is 6 > 4 and 6 > 3? Yes it is. Let's look at element 5 at position 2. It must be greater than or equal to the values at [5] and [6]. Is 5 > 2 and 5 > 1? Yes it is. If you continued to examine this second collection of data you would find that it is a valid heap based on the definition of a heap.
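
    A tiny C# check of the heap property exactly as defined above, treating positions as zero-based array indices; IsHeap(new[] { 1, 2, 3, 4, 5, 6 }) is false and IsHeap(new[] { 6, 5, 4, 3, 2, 1 }) is true, matching the two examples:

    static bool IsHeap(int[] a)
    {
        for (int k = 0; k < a.Length; k++)
        {
            int left = 2 * k + 1, right = 2 * k + 2;
            if (left < a.Length && a[k] < a[left]) return false;
            if (right < a.Length && a[k] < a[right]) return false;
        }
        return true;
    }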

    Read the article

  • Applying style sheets in PyQt

    - by Jebagnanadas
    Hello all, if I apply a property to a parent widget, it is automatically applied to its child widgets too. Is there any way of preventing this? For example, if I set the background color to white on a dialog, the buttons, combo boxes and scroll bars look white too and lose their native look (I have to say it's unpleasant and ugly). Is there any way I can apply the stylesheet only to the parent widget and not to its children? Experts, help please.
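
    A hedged PyQt sketch: scope the rule with a type or objectName selector so it matches only the dialog itself instead of being inherited by every child widget (the selector syntax is standard Qt style sheets; the widget names are made up):

    import sys
    from PyQt4 import QtGui

    app = QtGui.QApplication(sys.argv)

    dialog = QtGui.QDialog()
    dialog.setObjectName("myDialog")

    # Only widgets matching the selector get the background; the button and
    # combo box inside the dialog keep their native look.
    dialog.setStyleSheet("QDialog#myDialog { background-color: white; }")

    layout = QtGui.QVBoxLayout(dialog)
    layout.addWidget(QtGui.QPushButton("OK"))
    layout.addWidget(QtGui.QComboBox())

    dialog.show()
    sys.exit(app.exec_())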

    Read the article

  • Wrapping/warping a CALayer/UIView (or OpenGL) in 3D (iPhone)

    - by jbrennan
    I've got a UIView (and thus a CALayer) which I'm trying to warp or bend slightly in 3D space. That is, imagine my UIView is a flat label which I want to partially wrap around a beer bottle (not 360 degrees around, just on one "side"). I figured this would be possible by applying a transform to the view's layer, but as far as I can tell, this transform is limited to rotation, scale and translation of the layer uniformly. I could be wrong here, as my linear algebra is foggy at this point, to say the least. How can I achieve this?
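
    For what it's worth: a CALayer is a flat plane, so a single transform can rotate or skew it in 3D but cannot actually bend it around a bottle; curving the content generally means slicing the view into vertical strips (or dropping down to OpenGL) and rotating each strip slightly. A hedged Objective-C sketch of the per-strip 3D rotation, with an illustrative perspective value:

    #import <QuartzCore/QuartzCore.h>

    // Apply a slight 3D rotation around the y-axis to one layer (e.g. one strip of the label).
    CATransform3D transform = CATransform3DIdentity;
    transform.m34 = -1.0 / 500.0;                  // adds perspective; the value is a guess to tweak
    transform = CATransform3DRotate(transform, M_PI / 12.0, 0.0, 1.0, 0.0);
    stripView.layer.transform = transform;         // stripView is a hypothetical UIView slice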

    Read the article

  • How do I get autotest (ZenTest) to see my namespaced stuff?

    - by Blaine LaFreniere
    Autotest is supposed to map my tests to a class, I believe. When I have class Foo and class FooTest, autotest should see FooTest and say, "Hey, this test corresponds to the unit Foo, so I'll look for changes there and re-run tests when changes occur." And that works, however... When I have Foo::Bar and Foo::BarTest, autotest doesn't seem to make the connection, and whenever I edit Foo::Bar, autotest does not re-run Foo::BarTest Am I doing something wrong? EDIT: File structure might be helpful. Here it is: Module and class files: lib/foo.rb lib/foo/bar.rb lib/foo/baz.rb Test files: test/unit/foo/bar.rb test/unit/baz.rb I would think that autotest is able to make the connection between Foo::Bar and Foo::BarTest, but apparently it doesn't.

    Read the article

  • jQuery - datepicker - onblur

    - by d3020
    I'm using the jQuery datepicker where I have two textboxes (start date / end date). The user clicks in the respective textbox and the calendar shows up. That is OK. What I need is this: when the user, say, clicks in the start date textbox and picks a date, I need an onblur-like event to be triggered at that point. Where do I set that event for the calendar control so it fires after a date has been selected? I tried it on the actual textbox, but that's not right because it fires before an actual date is picked from the calendar control.
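
    If the goal is "run something right after a date is picked", the datepicker's onSelect option is the usual hook rather than onblur; a sketch (the element ids are assumptions):

    $(function () {
        $("#startDate").datepicker({
            onSelect: function (dateText, inst) {
                // Fires after the user picks a date from the calendar,
                // which is what the onblur handler was meant to catch.
                $("#endDate").datepicker("option", "minDate", dateText);
            }
        });

        $("#endDate").datepicker();
    });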

    Read the article

  • Why is XHTML1.1 dated *before* XHTML1.0 ? What is the preferred XHTML today?

    - by Cheeso
    I'm not clear on the status of XHTML – v1.0 and v1.1. Can someone explain which is preferred at this point, and why? The specs from the W3C say that XHTML 1.1 predates XHTML 1.0, which is exactly counter-intuitive to me. http://www.w3.org/TR/xhtml11/ – W3C Recommendation 31 May 2001. http://www.w3.org/TR/xhtml1/ – W3C Recommendation, updated 1 August 2002. Also, I noted earlier today that the latest version of htmltidy emits XHTML 1.0 when I request xhtml. Hmmm... even though the XHTML 1.1 spec is 9 years old, it's still not supported by mainstream tools. That suggests that XHTML 1.1 is either completely unnecessary or spurious. Which one should I use if I am authoring pages today? What if I am building tools – should I bother to support both? Or do I need only one? Thanks.

    Read the article

  • How to update Child grid in asp.net using LINQ

    - by Raj Kumar
    Hi, I have an asp.net page where I am using a LinqDataSource to bind a grid. Now, whenever someone changes something in the grid, I want to update a history table, which is also shown as a child grid for each row. Let's say I have a grid with two columns, Name and Age. It also has a child grid with the columns Field and DateTime. So whenever someone changes something in the Name or Age column and saves it, a new row should be inserted into the child grid with the name of the field that was changed and the date and time when it was changed.
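
    A hedged LINQ to SQL sketch of the history insert; every entity and property name here is hypothetical, not from the question:

    // Runs when a grid row is saved; compares old and new values and logs what changed.
    void SavePerson(int id, string newName, int newAge)
    {
        using (var db = new PeopleDataContext())
        {
            var person = db.Persons.Single(p => p.Id == id);

            if (person.Name != newName)
                db.PersonHistories.InsertOnSubmit(
                    new PersonHistory { PersonId = id, Field = "Name", ChangedOn = DateTime.Now });

            if (person.Age != newAge)
                db.PersonHistories.InsertOnSubmit(
                    new PersonHistory { PersonId = id, Field = "Age", ChangedOn = DateTime.Now });

            person.Name = newName;
            person.Age = newAge;
            db.SubmitChanges();   // updates the row and inserts the history records together
        }
    }

    The child grid for each row can then simply bind to the PersonHistory records for that PersonId.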

    Read the article

  • Resolving IEnumerable<T> with Unity

    - by Mark Seemann
    Can Unity automatically resolve IEnumerable<T>? Let's say I have a class with this constructor: public CoalescingParserSelector(IEnumerable<IParserBuilder> parserBuilders) and I configure individual IParserBuilder instances in the container: container.RegisterType<IParserSelector, CoalescingParserSelector>(); container.RegisterType<IParserBuilder, HelpParserBuilder>(); container.RegisterType<IParserBuilder, SomeOtherParserBuilder>(); can I make this work without having to implement a custom implementation of IEnumerable<IParserBuilder>? var selector = container.Resolve<IParserSelector>(); So far I haven't been able to express this in any simple way, but I'm still ramping up on Unity so I may have missed something.
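
    A hedged sketch of one way this is commonly wired up in Unity: give each IParserBuilder registration a name, and map IEnumerable<IParserBuilder> to ResolveAll via a factory (exact support varies by Unity version, so treat this as an assumption to verify rather than the documented answer):

    using System.Collections.Generic;
    using Microsoft.Practices.Unity;

    var container = new UnityContainer();

    // Named registrations: unnamed ones are not returned by ResolveAll.
    container.RegisterType<IParserBuilder, HelpParserBuilder>("help");
    container.RegisterType<IParserBuilder, SomeOtherParserBuilder>("other");

    // Map IEnumerable<IParserBuilder> to "resolve every named IParserBuilder".
    container.RegisterType<IEnumerable<IParserBuilder>>(
        new InjectionFactory(c => c.ResolveAll<IParserBuilder>()));

    container.RegisterType<IParserSelector, CoalescingParserSelector>();

    var selector = container.Resolve<IParserSelector>();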

    Read the article

  • How can I define custom 'contentGroups' in a custom Flex 4 component?

    - by swidnikk
    The spark Panel component, for example, can be written in MXML, and its skin file will handle layout of the contentGroup, controlBarGroup, and titleDisplay. Notice, however, that the contentGroup doesn't appear in such markup and that the controlBarGroup accepts child MXML components. Now say I want to create a custom component that defines various required and non-required skin parts, such as 'headerGroup', 'navigationGroup', and 'accountPreferencesGroup', and write that custom component in MXML in the same way. The motivation here is that I can then create a couple of different skin files to change the look and layout of those subgroups. Reading the source of the spark Panel, there are some calls within the mx_internal namespace, such as getMXMLContent(), which is a method of the spark Group component but which I have no access to. Does the description above make sense? How can I create custom 'contentGroups' in my custom Flex 4 component that can use nested MXML child components? Should I approach this a different way?
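
    A hedged ActionScript sketch of the usual skin-part pattern (the group names come from the question; how content is pushed into each part depends on the Flex SDK version, so the spark Panel source is the reference to verify against):

    package components
    {
        import spark.components.Group;
        import spark.components.supportClasses.SkinnableComponent;

        public class AccountPage extends SkinnableComponent
        {
            // Skin parts: each skin for this component declares Groups with these ids.
            [SkinPart(required="true")]
            public var headerGroup:Group;

            [SkinPart(required="false")]
            public var navigationGroup:Group;

            [SkinPart(required="false")]
            public var accountPreferencesGroup:Group;

            override protected function partAdded(partName:String, instance:Object):void
            {
                super.partAdded(partName, instance);
                // Push content or add listeners here once the current skin creates the part.
            }
        }
    }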

    Read the article

  • A detail question when applying genetic algorithm to traveling salesman

    - by burrough
    I have read various material on this and understand the principles and concepts involved; however, none of the papers mention the details of how to calculate the fitness of a chromosome (which represents a route) when adjacent cities in the chromosome are not directly connected by an edge in the graph. For example, given the chromosome 1|3|2|8|4|5|6|7, in which each gene represents the index of a city on the graph/map, how do we calculate its fitness (i.e. the total sum of distances travelled) if, say, there is no direct edge/link between cities 2 and 8? Do we follow some sort of greedy algorithm to work out a route between 2 and 8, and add the distance of that route to the total? This problem seems pretty common when applying a GA to the TSP. Anyone who's done it before, please share your experience. Thanks.
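
    One common approach (not taken from any particular paper) is to precompute all-pairs shortest paths so that the "distance" between two cities with no direct edge is the length of the best detour; an alternative is a large penalty so such tours die out. A C# sketch of the shortest-path variant:

    // dist[i, j] starts as the direct edge length, or double.PositiveInfinity where no edge exists.
    static void FloydWarshall(double[,] dist, int n)
    {
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (dist[i, k] + dist[k, j] < dist[i, j])
                        dist[i, j] = dist[i, k] + dist[k, j];
    }

    // Fitness of a tour = total of the (possibly detoured) distances, including the return leg.
    static double TourLength(int[] tour, double[,] dist)
    {
        double total = 0;
        for (int i = 0; i < tour.Length; i++)
            total += dist[tour[i], tour[(i + 1) % tour.Length]];
        return total;
    }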

    Read the article

  • How to check for changes on remote (origin) git repository?

    - by Lernkurve
    Question: what are the Git commands to do the following workflow? Scenario: I cloned a repository and made some commits of my own to my local repository. In the meantime, my colleagues made commits to the remote repository. Now, I want to: (1) check whether there are any new commits from other people on the remote repository, i.e. "origin"; (2) say there were 3 new commits on the remote repository since my last pull, I would like to diff the remote repository's commits, i.e. HEAD~3 with HEAD~2, HEAD~2 with HEAD~1 and HEAD~1 with HEAD; (3) after knowing what changed remotely, get the latest commits from the others. My findings so far: for step 2, I know the caret notation (HEAD^, HEAD^^ etc.) and the tilde notation (HEAD~2, HEAD~3 etc.). For step 3: that is, I guess, just a git pull.
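
    A sketch of the usual commands for that workflow (the branch name master is an assumption):

    # 1. Fetch the remote state without touching your local branches.
    git fetch origin

    # New commits on origin that you don't have yet, and vice versa.
    git log --oneline HEAD..origin/master     # their new commits
    git log --oneline origin/master..HEAD     # your local-only commits

    # 2. Inspect the incoming changes, all at once or commit by commit.
    git diff HEAD...origin/master
    git log -p HEAD..origin/master

    # 3. Bring the new commits in (a merge; use --rebase to rebase instead).
    git pull origin master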

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?” There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others some of the advantages of code reviews are, Bugs are found faster Forces developers to write readable code (code that can be read without explanation or introduction!) Optimization methods/tricks/productive programs spread faster Programmers as specialists "evolve" faster It's fun “Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” Wikipedia No where does the definition mention whether its better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request for a code review both before check in and also request for a review after check in. Let’s weigh the pros and cons of the approaches independently. Code Review Before Check In or Code Review After Check In? Approach 1 – Code Review before Check in Developer completes the code and feels the code quality is appropriate for check in to TFS. The developer raises a code review request to have a second pair of eyes validate if the code abides to the recommended best practices, will not result in any defects due to common coding mistakes and whether any optimizations can be made to improve the code quality.                                             Image 1 – code review before check in Pros Everything that gets committed to source control is reviewed. Minimizes the chances of smelly code making its way into the code base. Decreases the cost of fixing bugs, remember, the earlier you find them, the lesser the pain in fixing them. Cons Development Code Freeze – Since the changes aren’t in the source control yet. Further development can only be done off-line. The changes have not been through a CI build, hard to say whether the code abides to all build quality standards. Inconsistent! Cumbersome to track the actual code review process.  Not every change to the code base is worth reviewing, a lot of effort is invested for very little gain. Approach 2 – Code Review after Check in Developer checks in, random code reviews are performed on the checked in code.                                                      Image 2 – Code review after check in Pros The code has already passed the CI build and run through any code analysis plug ins you may have running on the build server. Instruct the developer to ensure ZERO fx cop, style cop and static code analysis before check in. Code is cleaner and smell free even before the code review. No Offline development, developers can continue to develop against the source control. Cons Bad code can easily make its way into the code base. Since the review take place much later in the cycle, the cost of fixing issues can prove to be much higher. Approach 3 – Hybrid Approach The community advocates a more hybrid approach, a blend of tooling and human accountability quotient.                                                               Image 3 – Hybrid Approach 1. Code review high impact check ins. 
It is not possible to review everything, by setting up code review check in policies you can end up slowing your team. More over, the code that you are reviewing before check in hasn't even been through a green CI build either. 2. Tooling. Let the tooling work for you. By running static analysis, fx cop, style cop and other plug ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back top 10 issues every day. Mandate the manual code review of individuals who keep making it to this list of shame more often. 3. During Merge. I would prefer eliminating some of the other code issues during merge from Main branch to the release branch. In a scrum project this is still easier because cheery picking the merges is a possibility and the size of code being reviewed is still limited. Let the tooling work for you, if some one breaks the CI build often, put them on a gated check in build course until you see improvement. If some one appears on the top 10 list of shame generated via the build then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check in, you force the developer to work offline or stay put till the review is complete. What do the experts say? So I asked a few expects what they thought of “Code Review quality gate before Checking in code?" Terje Sandstrom | Microsoft ALM MVP You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even been through a CI build, and a green CI build being the main criteria for going further, f.e. to the review state. I would not like code laying around with no checkin’s. Having a requirement that code is checked in small pieces, 4-8 hours work max, and AT LEAST daily checkins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release.  But that would all be on checked-in code.  Branching is absolutely one way to ease the pain.   Another way we are using is automatic quality builds, running metrics, coverage, static code analysis.  Unfortunately it takes some time, would be great to be on CI’s – but…., so it’s done scheduled every night. Based on this we get, among other stuff,  top 10 lists of suspicious code, which is then subjected to reviews.  If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps.   None of the clients I have can afford to have every checkin reviewed, so we need to find ways around it. I don’t disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today’s enterprises. David V. Corbin | Visual Studio ALM Ranger I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having “bad” code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the “official” branch until after the review. I advocate both, depending on circumstance (especially team dynamics)   - The “pre-checkin” is usually for elements that may impact the project as a whole. Think of it as another “gate” along with passing unit tests. 
- The “post-checkin” may very well not be at the changeset level, but correlates to a review at the “user story” level.   Again, this depends on team dynamics in play…. Robert MacLean | Microsoft ALM MVP I do not think there is no right answer for the industry as a whole. In short the question is why do you do reviews? Your question implies risk mitigation, so in low risk areas you can get away with it after check in while in high risk you need to do it before check in. An example is those new to a team or juniors need it much earlier (maybe that is before checkin, maybe that is soon after) than seniors who have shipped twenty sprints on the team. Abhimanyu Singhal | Visual Studio ALM Ranger Depends on per scenario basis. We recommend post check-in reviews when: 1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all. 2. We need to trace all changes and track history. 3. We have a code promotion strategy/process in place. For risk mitigation, post checkin code can be promoted to Accepted branches. Or can be rejected. Pre Checkin Reviews are used when 1. There is a high risk factor associated 2. Reviewers are generally (most of times) have immediate availability. 3. Team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project. Thomas Schissler | Visual Studio ALM Ranger This is an interesting discussion, I’m right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well, I’d like to point out. 1.) If you do reviews per check in this is not very practical as a hard rule because this will disturb the flow of the team very often or it will lead to reduce the checkin frequency of the devs which I would not accept. 2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associate with the PBI, but then you might review code which has been changed with a later checkin and the dev maybe has already fixed the issue. Or you review the diff of the latest changeset of the PBI with the first but then you might also review changes of other PBIs. Jakob Leander | Sr. Director, Avanade In my experience, manual code review: 1. Does not get done and at the very least does not get redone after changes (regardless of intentions at start of project) 2. When a project actually do it, they often do not do it right away = errors pile up 3. Requires a lot of time discussing/defining the standard and for the team to learn it However code review is very important since e.g. even small memory leaks in a high volume web solution have big consequences In the last years I have advocated following approach for code review - Architects up front do “at least one best practice example” of each type of component and tell the team. Copy from this one. This should include error handling, logging, security etc. - Dev lead on project continuously browse code to validate that the best practices are used. Especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to “go bad” even during later code changes Agree with customer to rely on static code analysis from Visual Studio as the one and only coding standard. 
This has HUUGE benefits - You can easily tweak to reach the level you desire together with customer - It is easy to measure for both developers/management - It is 100% consistent across code base - It gets validated all the time so you never end up getting hammered by a customer review in the end - It is easy to tell the developer that you do not want code back unless it has zero errors = minimize communication You need to track this at least during nightly builds and make sure team sees total # issues. Do not allow #issues it to grow uncontrolled. On the project I run I require code analysis to have run on code before checkin (checkin rule). This means -  You have to have clean compile (or CA wont run) so this is extra benefit = very few broken builds - You can change a few of the rules to compile as errors instead of warnings. I often do this for “missing dispose” issues which you REALLY do not want in your app Tip: Place your custom CA rules files as part of solution. That  way it works when you do branching etc. (path to CA file is relative in VS) Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues in start it is IMO a MUCH better (and much cheaper) approach from helicopter perspective Tirthankar Dutta | Director, Avanade I think code review should be run both before and after check ins. There are some code metrics that are meant to be run on the entire codebase … Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… scales very well with the need to review less by containment and imposing architectural restrictions to emphasise the design. Bruno Capuano | Microsoft ALM MVP For code reviews (means peer reviews) in distributed team I use http://www.vsanywhere.com/default.aspx  David Jobling | Global Sr. Director, Avanade Peer review is the only way to scale and its a great practice for all in the team to learn to perform and accept. In my experience you soon learn who's code to watch more than others and tune the attention. Mikkel Toudal Kristiansen | Manager, Avanade If you have several branches in your code base, you will need to merge often. This requires manual merging, when a file has been changed in both branches. It offers a good opportunity to actually review to changed code. So my advice is: Merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third party tools exist: Ndepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution. Jesse Houwing | Visual Studio ALM Ranger I gave a presentation on this subject on the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html  I’d like to add a few more points: - Before/After checking is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talk/sit together or peer review, there’s no need to enforce a before-checkin policy. The peer peer-programming and regular feedback during development can take care of most of the review requirements as long as the team isn’t under stress. 
- Under stress, enforce pre-checkin reviews, it might sound strange, if you’re already under time or budgetary constraints, but it is under such conditions most real issues start to be created or pile up. - Use tools to catch most common errors, Code Analysis/FxCop was already mentioned. HP Fortify, Resharper, Coderush etc can help you there. There are also a lot of 3rd party rules you can add to Code Analysis. I’ve written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule. It’s much easier. But more importantly make sure you have a good help page explaining *WHY* it's wrong. If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It’s still better to do peer reviews and peer programming, but the most important thing is that bad quality code doesn’t make it into the important branch. So my philosophy: - Use tooling as much as possible. - Make sure the team understands the tooling and the importance of the things it flags. It’s too easy to just click suppress all to ignore the warnings. - Under stress, tighten process, it’s under stress that the problems of late reviews will really surface - Most importantly if you do reviews do them as early as possible, but never later than needed. In other words, pre-checkin/post checking doesn’t really matter, as long as the review is done before the code is released. It’ll just be much more expensive to fix any review outcomes the later you find them. --- I would love to hear what you think!

    Read the article

  • The Inkremental Architect´s Napkin - #4 - Make increments tangible

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/12/the-inkremental-architectacutes-napkin---4---make-increments-tangible.aspxThe driver of software development are increments, small increments, tiny increments. With an increment being a slice of the overall requirement scope thin enough to implement and get feedback from a product owner within 2 days max. Such an increment might concern Functionality or Quality.[1] To make such high frequency delivery of increments possible, the transition from talking to coding needs to be as easy as possible. A user story or some other documentation of what´s supposed to get implemented until tomorrow evening at latest is one side of the medal. The other is where to put the logic in all of the code base. To implement an increment, only logic statements are needed. Functionality like Quality are just about expressions and control flow statements. Think of Assembler code without the CALL/RET instructions. That´s all is needed. Forget about functions, forget about classes. To make a user happy none of that is really needed. It´s just about the right expressions and conditional executions paths plus some memory allocation. Automatic function inlining of compilers which makes it clear how unimportant functions are for delivering value to users at runtime. But why then are there functions? Because they were invented for optimization purposes. We need them for better Evolvability and Production Efficiency. Nothing more, nothing less. No software has become faster, more secure, more scalable, more functional because we gathered logic under the roof of a function or two or a thousand. Functions make logic easier to understand. Functions make us faster in producing logic. Functions make it easier to keep logic consistent. Functions help to conserve memory. That said, functions are important. They are even the pivotal element of software development. We can´t code without them - whether you write a function yourself or not. Because there´s always at least one function in play: the Entry Point of a program. In Ruby the simplest program looks like this:puts "Hello, world!" In C# more is necessary:class Program { public static void Main () { System.Console.Write("Hello, world!"); } } C# makes the Entry Point function explicit, not so Ruby. But still it´s there. So you can think of logic always running in some function. Which brings me back to increments: In order to make the transition from talking to code as easy as possible, it has to be crystal clear into which function you should put the logic. Product owners might be content once there is a sticky note a user story on the Scrum or Kanban board. But developers need an idea of what that sticky note means in term of functions. Because with a function in hand, with a signature to run tests against, they have something to focus on. All´s well once there is a function behind whose signature logic can be piled up. Then testing frameworks can be used to check if the logic is correct. Then practices like TDD can help to drive the implementation. That´s why most code katas define exactly how the API of a solution should look like. It´s a function, maybe two or three, not more. A requirement like “Write a function f which takes this as parameters and produces such and such output by doing x” makes a developer comfortable. Yes, there are all kinds of details to think about, like which algorithm or technology to use, or what kind of state and side effects to consider. 
Even a single function not only must deliver on Functionality, but also on Quality and Evolvability. Nevertheless, once it´s clear which function to put logic in, you have a tangible starting point. So, yes, what I´m suggesting is to find a single function to put all the logic in that´s necessary to deliver on a the requirements of an increment. Or to put it the other way around: Slice requirements in a way that each increment´s logic can be located under the roof of a single function. Entry points Of course, the logic of a software will always be spread across many, many functions. But there´s always an Entry Point. That´s the most important function for each increment, because that´s the root to put integration or even acceptance tests on. A batch program like the above hello-world application only has a single Entry Point. All logic is reached from there, regardless how deep it´s nested in classes. But a program with a user interface like this has at least two Entry Points: One is the main function called upon startup. The other is the button click event handler for “Show my score”. But maybe there are even more, like another Entry Point being a handler for the event fired when one of the choices gets selected; because then some logic could check if the button should be enabled because all questions got answered. Or another Entry Point for the logic to be executed when the program is close; because then the choices made should be persisted. You see, an Entry Point to me is a function which gets triggered by the user of a software. With batch programs that´s the main function. With GUI programs on the desktop that´s event handlers. With web programs that´s handlers for URL routes. And my basic suggestion to help you with slicing requirements for Spinning is: Slice them in a way so that each increment is related to only one Entry Point function.[2] Entry Points are the “outer functions” of a program. That´s where the environment triggers behavior. That´s where hardware meets software. Entry points always get called because something happened to hardware state, e.g. a key was pressed, a mouse button clicked, the system timer ticked, data arrived over a wire.[3] Viewed from the outside, software is just a collection of Entry Point functions made accessible via buttons to press, menu items to click, gestures, URLs to open, keys to enter. Collections of batch processors I´d thus say, we haven´t moved forward since the early days of software development. We´re still writing batch programs. Forget about “event-driven programming” with its fancy GUI applications. Software is just a collection of batch processors. Earlier it was just one per program, today it´s hundreds we bundle up into applications. Each batch processor is represented by an Entry Point as its root that works on a number of resources from which it reads data to process and to which it writes results. These resources can be the keyboard or main memory or a hard disk or a communication line or a display. Together many batch processors - large and small - form applications the user perceives as a single whole: Software development that way becomes quite simple: just implement one batch processor after another. Well, at least in principle ;-) Features Each batch processor entered through an Entry Point delivers value to the user. It´s an increment. Sometimes its logic is trivial, sometimes it´s very complex. Regardless, each Entry Point represents an increment. An Entry Point implemented thus is a step forward in terms of Agility. 
At the same time it´s a tangible unit for developers. Therefore, identifying the more or less numerous batch processors in a software system is a rewarding task for product owners and developers alike. That´s where user stories meet code. In this example the user story translates to the Entry Point triggered by clicking the login button on a dialog like this: The batch then retrieves what has been entered via keyboard, loads data from a user store, and finally outputs some kind of response on the screen, e.g. by displaying an error message or showing the next dialog. This is all very simple, but you see, there is not just one thing happening, but several. Get input (email address, password) Load user for email address If user not found report error Check password Hash password Compare hash to hash stored in user Show next dialog Viewed from 10,000 feet it´s all done by the Entry Point function. And of course that´s technically possible. It´s just a bunch of logic and calling a couple of API functions. However, I suggest to take these steps as distinct aspects of the overall requirement described by the user story. Such aspects of requirements I call Features. Features too are increments. Each provides some (small) value of its own to the user. Each can be checked individually by a product owner. Instead of implementing all the logic behind the Login() entry point at once you can move forward increment by increment, e.g. First implement the dialog, let the user enter any credentials, and log him/her in without any checks. Features 1 and 4. Then hard code a single user and check the email address. Features 2 and 2.1. Then check password without hashing it (or use a very simple hash like the length of the password). Features 3. and 3.2 Replace hard coded user with a persistent user directoy, but a very simple one, e.g. a CSV file. Refinement of feature 2. Calculate the real hash for the password. Feature 3.1. Switch to the final user directory technology. Each feature provides an opportunity to deliver results in a short amount of time and get feedback. If you´re in doubt whether you can implement the whole entry point function until tomorrow night, then just go for a couple of features or even just one. That´s also why I think, you should strive for wrapping feature logic into a function of its own. It´s a matter of Evolvability and Production Efficiency. A function per feature makes the code more readable, since the language of requirements analysis and design is carried over into implementation. It makes it easier to apply changes to features because it´s clear where their logic is located. And finally, of course, it lets you re-use features in different context (read: increments). Feature functions make it easier for you to think of features as Spinning increments, to implement them independently, to let the product owner check them for acceptance individually. Increments consist of features, entry point functions consist of feature functions. So you can view software as a hierarchy of requirements from broad to thin which map to a hierarchy of functions - with entry points at the top.   I like this image of software as a self-similar structure on many levels of abstraction where requirements and code match each other. That to me is true agile design: the core tenet of Agility to move forward in increments is carried over into implementation. Increments on paper are retained in code. This way developers can easily relate to product owners. Elusive and fuzzy requirements are not tangible. 
    Software production is moving forward through requirements one increment at a time, and one function at a time.

    In closing

    Product owners and developers are different - but they need to work together towards a shared goal: working software. So their notions of software need to be made compatible, they need to be connected. The increments of the product owner - user stories and features - need to be mapped straightforwardly to something which is relevant to developers. To me that´s functions. Yes, functions, not classes nor components nor micro services. We´re talking about behavior, actions, activities, processes. Their natural representation is a function. Something has to be done. Logic has to be executed. That´s the purpose of functions. Later, classes and other containers are needed to stay on top of a growing amount of logic. But to connect developers and product owners, functions are the appropriate glue. Functions which represent increments.

    Can such a small increment always be found to deliver by tomorrow evening? I boldly say yes. Yes, it´s always possible. But maybe you have to start thinking differently. Maybe the product owner needs to start thinking differently. Completion is not the goal anymore. Neither is checking the delivery of an increment through the user interface of a software. Product owners need to become comfortable using test beds for certain features. If it´s hard to slice requirements thin enough for Spinning, the reason is too little knowledge of something. Maybe you don´t yet understand the problem domain well enough? Maybe you don´t yet feel comfortable with some tool or technology? Then it´s time to acknowledge this fact. Be honest about your not knowing. And instead of trying to deliver as a craftsman, officially become a researcher. Research and check back with the product owner every day - until your understanding has grown to a level where you are able to define the next Spinning increment.

    Sometimes even thin requirement slices will cover several Entry Points, like “Add validation of email addresses to all relevant dialogs.” Validation will then be put into a dozen functions. Still, though, it´s important to determine exactly which Entry Points get affected. That´s much easier if you strive for keeping the number of Entry Points per increment to 1.

    If you like, call Entry Point functions event handlers, because that´s what they are. They all handle events of some kind, whether that´s palpable in your code or not. A public void btnSave_Click(object sender, EventArgs e) {…} might look like an event handler to you, but public static void Main() {…} is one also - for the event “program started”.
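
    A minimal C# sketch of the login Entry Point described earlier in the post, with one function per feature; all names are illustrative, not from the post:

    public class LoginDialog
    {
        // Entry Point: triggered by the Login button's click event.
        public void Login(string email, string password)
        {
            var user = LoadUser(email);                 // feature: load user for email address
            if (user == null) { ShowError("Unknown email address."); return; }

            if (!CheckPassword(user, password))         // feature: check password
            {
                ShowError("Wrong password.");
                return;
            }

            ShowNextDialog();                           // feature: show next dialog
        }

        User LoadUser(string email) { /* user store lookup omitted */ return null; }

        bool CheckPassword(User user, string password)
        {
            var hash = HashPassword(password);          // feature: hash password
            return hash == user.PasswordHash;           // feature: compare to stored hash
        }

        string HashPassword(string password) { /* real hashing omitted */ return password; }
        void ShowError(string message) { /* UI omitted */ }
        void ShowNextDialog() { /* UI omitted */ }
    }

    public class User
    {
        public string Email { get; set; }
        public string PasswordHash { get; set; }
    }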

    Read the article

  • PHP correct indentation?

    - by brainle55
    I'm a beginner PHP coder; recently I've been told that I don't indent my code correctly. They say this is wrong: if($something) { do_something(); } else { something_more(); and_more(); } While this is right: if($something) { do_something(); } else { something_more(); and_more(); } Really? I am willing to become an open-source coder in the near future, so that's why I'm asking how to write code in a good way.
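
    For reference, a sketch of the layout most PHP style guides (e.g. the PSR style) converge on: four-space indentation, the opening brace on the same line as the control structure, and else cuddled between the braces:

    <?php
    if ($something) {
        do_something();
    } else {
        something_more();
        and_more();
    }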

    Read the article

  • virtual function call from base of base

    - by th3g0dr3d
    Hi, let's say I have something like this (very simplified):

    class base {
    public:
        void invoke() { this->foo(); }
        virtual void foo() = 0;
    };

    class FirstDerived : public base {
    public:
        void foo() {
            // do stuff
        }
    };

    class SecondDerived : public FirstDerived {
        // other not related stuff
    };

    int main() {
        SecondDerived *a = new SecondDerived();
        a->invoke();
    }

    What I see in the debugger is that base::invoke() is called and tries to call this->foo(); (I expect FirstDerived::foo() to be called). The debugger takes me to some unfamiliar assembly code (probably because it jumped to unrelated areas) and after a few lines I get a segmentation fault. Am I doing something wrong here? :/ Thanks in advance
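
    For what it's worth, the cleaned-up version above does dispatch to FirstDerived::foo when invoke() is called on a fully constructed object; a quick C++ sketch, plus a comment on the classic way to produce the described crash (a guess about the real code, not something the question confirms):

    #include <iostream>

    class base {
    public:
        virtual ~base() {}
        void invoke() { this->foo(); }      // dispatches through the vtable
        virtual void foo() = 0;
    };

    class FirstDerived : public base {
    public:
        void foo() { std::cout << "FirstDerived::foo\n"; }
    };

    class SecondDerived : public FirstDerived {};

    int main() {
        SecondDerived* a = new SecondDerived();
        a->invoke();                        // prints "FirstDerived::foo"
        delete a;

        // By contrast, calling invoke() from base's constructor (or on a deleted or
        // corrupted object) would not reach FirstDerived::foo; calling a pure virtual
        // during construction is undefined behaviour, which matches the
        // "jump into unfamiliar assembly, then segfault" symptom.
        return 0;
    }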

    Read the article
