Code review: is it subjective or objective (quantifiable)?

Posted by Ram on Stack Overflow
Published on 2010-03-03

I am putting together some guidelines for code reviews. We do not have a formal process yet and are trying to formalize one. Our team is also geographically distributed.

We are using TFS for source control (we used it for task/bug tracking and project management as well, but have migrated that to JIRA) and VS2008 for development.

What are the things you look for when doing a code review?

  • These are the things I came up with:
    1. Enforce FxCop rules (we are a Microsoft shop)
    2. Check for performance (any tools?), security (thinking about using OWASP Code Crawler), and thread safety
    3. Adhere to naming conventions
    4. The code should cover edge cases and boundary conditions
    5. Handle exceptions correctly and do not swallow them (see the first sketch after this list)
    6. Check whether the functionality is duplicated elsewhere
    7. Method bodies should be small (20-30 lines), and methods should do one thing and one thing only (no side effects; avoid temporal coupling)
    8. Do not pass or return nulls in methods (see the second sketch after this list)
    9. Avoid dead code
    10. Document public and protected methods/properties/variables
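For item 5, here is a minimal sketch of what I mean by swallowing an exception versus handling it correctly (the OrderProcessor/DoWork names are purely illustrative):

```csharp
using System;

public class OrderProcessor
{
    // Bad: the exception is swallowed, so failures disappear silently
    // and the caller never learns that the operation did not complete.
    public void ProcessOrderBad(int orderId)
    {
        try
        {
            DoWork(orderId);
        }
        catch (Exception)
        {
            // empty catch block -- this is what the review should flag
        }
    }

    // Better: catch only what you can meaningfully react to, record the
    // failure, and rethrow so the error still propagates to the caller.
    public void ProcessOrderGood(int orderId)
    {
        try
        {
            DoWork(orderId);
        }
        catch (InvalidOperationException ex)
        {
            Console.Error.WriteLine("Order " + orderId + " failed: " + ex.Message);
            throw; // 'throw;' (not 'throw ex;') preserves the original stack trace
        }
    }

    private static void DoWork(int orderId)
    {
        if (orderId <= 0)
            throw new InvalidOperationException("Invalid order id.");
    }
}
```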
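And for item 8, a sketch of two common alternatives to returning null: the Try pattern and returning an empty collection (the Customer/repository names are again just for illustration):

```csharp
using System;
using System.Collections.Generic;

public class Customer
{
    public int Id;
    public string Name;
}

public class CustomerRepository
{
    private readonly Dictionary<int, Customer> store = new Dictionary<int, Customer>();

    // Instead of returning null when the customer is missing, use the
    // Try pattern so the caller is forced to check the result.
    public bool TryGetCustomer(int id, out Customer customer)
    {
        return store.TryGetValue(id, out customer);
    }

    // Instead of returning null for "no results", return an empty list;
    // callers can then iterate without a null check.
    public IList<Customer> FindByName(string name)
    {
        var matches = new List<Customer>();
        foreach (var c in store.Values)
        {
            if (c.Name == name)
                matches.Add(c);
        }
        return matches;
    }
}
```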

What other things do you generally look for?

I am trying to see if we can quantify the review process (so that it produces identical output when performed by different reviewers). Example: saying "the method body should be no longer than 20-30 lines of code" as opposed to saying "the method body should be small". A rough sketch of a checker for that particular rule follows.
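As a concrete example of quantifying one such rule, here is a minimal sketch that flags over-long methods using the Roslyn syntax API (the Microsoft.CodeAnalysis.CSharp NuGet package, which is far newer than VS2008, so treat this as an illustration of the idea rather than something our current toolchain could run):

```csharp
using System;
using System.IO;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class MethodLengthChecker
{
    const int MaxLines = 30; // the upper end of the 20-30 line guideline

    static void Main(string[] args)
    {
        // Parse the source file given on the command line.
        var tree = CSharpSyntaxTree.ParseText(File.ReadAllText(args[0]));

        // Walk every method declaration in the file.
        var methods = tree.GetRoot()
                          .DescendantNodes()
                          .OfType<MethodDeclarationSyntax>();

        foreach (var method in methods)
        {
            var span = method.GetLocation().GetLineSpan();
            int lines = span.EndLinePosition.Line - span.StartLinePosition.Line + 1;
            if (lines > MaxLines)
                Console.WriteLine("{0}: {1} lines (limit {2})",
                    method.Identifier.Text, lines, MaxLines);
        }
    }
}
```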

Or is code review inherently subjective (and would it differ from one reviewer to another)?

The objective is to have a marking system (say, -1 point for each FxCop rule violation, -2 points for not following naming conventions, +2 points for refactoring, etc.) so that developers would be more careful when they check in their code. This way, we can identify developers who consistently write good or bad code. The goal is for the reviewer to spend about 30 minutes at most on a review (I know this is subjective, given that a changeset/revision might include multiple files or huge changes to the existing architecture, but you get the general idea: the reviewer should not spend days reviewing someone's code). A rough sketch of such a scoring model follows.
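A minimal sketch of what that marking system could look like; the finding categories and point values are the hypothetical ones from above, not an established scheme:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical scoring model: each finding category carries a point
// delta, and a review's score is the sum over all its findings.
public class ReviewScorer
{
    private static readonly Dictionary<string, int> Points =
        new Dictionary<string, int>
        {
            { "FxCopViolation", -1 },  // -1 per FxCop rule violation
            { "NamingViolation", -2 }, // -2 for a naming-convention breach
            { "Refactoring", 2 },      // +2 for a useful refactoring
        };

    public int Score(IEnumerable<string> findings)
    {
        int total = 0;
        foreach (var finding in findings)
        {
            int delta;
            if (Points.TryGetValue(finding, out delta))
                total += delta; // unknown categories are simply ignored
        }
        return total;
    }
}
```

Whether a tally like this is fair across changesets of very different size is exactly the subjectivity question above; normalizing by lines changed might be one refinement.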

What other objective/quantifiable system do you follow to identify good/bad code written by developers?

Book reference: Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin
