Search Results

Search found 91655 results on 3667 pages for 'managed code'.

Page 3 of 3667

  • Best way to relate code smells to a non-technical audience?

    - by Ed Guiness
    I have been asked to present examples of code issues that were found during a code review. My audience is mostly non-technical, and I want to express the issues in such a way that I convey the importance of "good code" versus "bad code". But as I review my presentation, it seems to me I've glossed over the reasons why it is important to write good code. I've mentioned a number of reasons, including ease of maintenance and the increased likelihood of bugs in bad code, but with my "non-tech" hat on they seem unconvincing. What is your advice for helping a non-technical audience relate to the importance of good code?

    Read the article

  • What's the most effective way to perform code reviews?

    - by Paddyslacker
    I've never found the ideal way to perform code reviews, and yet my customers often require them. Each customer seems to do them in a different way, and I've never felt satisfied with any of them. What has been the most effective way for you to perform code reviews? For example: Is one person regarded as the gatekeeper for quality who reviews the code, or does the team own the standard? Do you review code as a team exercise using a projector? Is it done in person, via email, or using a tool? Do you eschew reviews and use things like pair programming and collective code ownership to ensure code quality?

    Read the article

  • Shouldn't all source code be plain text? [on hold]

    - by user61852
    Some development environments/languages save the source code you write in a binary/proprietary format that you cannot see or edit with a generic text editor. I'm not talking about compiled code, but the source code. Examples include PowerBuilder and Oracle Forms. It's OK to use proprietary technology if you want, but not being able to open the source code you wrote in a simple editor, if only to read it, seems like a very strict form of vendor lock-in. It also prevents you from using text-based version control that can show you the differences between two versions on a line-by-line basis. If the code is plain text, you don't need a license just to open it, read it, and learn from it. To avoid vendor lock-in, should it be a golden rule to avoid technologies that save your source code as anything but plain text files?

    Read the article

  • Calling a native callback from managed .NET code (when loading the managed code using COM)

    - by evilfred
    Hi, I am really confused by the multitude of misinformation about native / managed interop. I have a C++ exe which is NOT built using CLR stuff (it is not Managed C++ or C++/CLI and never will be). I would like to access some code I have in a C# assembly. I can access the C# assembly using COM. However, when my C# code detects an event I would like it to call back into my C++ code. The C++ function pointer to call back into will be provided at runtime. Note that the C++ function pointer is a function found in the exe's execution environment. I don't want the managed code to try and load up some DLL to call a function (there is no DLL). How do I pass this C++ function pointer to my C# code through .NET and have my C# code successfully call it? Thanks!
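
    One way to bridge this gap, assuming the function pointer reaches the C# side as an IntPtr (for example, as a parameter on one of the COM-visible methods), is Marshal.GetDelegateForFunctionPointer. The sketch below is illustrative only; the delegate signature, calling convention, and names are assumptions that must match the actual C++ declaration:

    [code]
    using System;
    using System.Runtime.InteropServices;

    // Assumed native signature: void __cdecl Callback(int eventCode)
    [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
    public delegate void NativeEventCallback(int eventCode);

    public class EventNotifier
    {
        private NativeEventCallback _callback;

        // COM-visible method; the C++ exe passes its function pointer
        // as an IntPtr-sized integer. No DLL is loaded.
        public void RegisterCallback(IntPtr functionPointer)
        {
            _callback = (NativeEventCallback)Marshal.GetDelegateForFunctionPointer(
                functionPointer, typeof(NativeEventCallback));
        }

        private void OnEventDetected(int eventCode)
        {
            if (_callback != null)
                _callback(eventCode);   // calls straight back into the exe's code
        }
    }
    [/code]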

    Read the article

  • Writing a Javascript library that is code-completion and code-inspection friendly

    - by Vivin Paliath
    I recently made my own Javascript library and I initially used the following pattern:

    [code]
    var myLibrary = (function () {
        var someProp = "...";
        function someFunc() { ... }
        function someFunc2() { ... }
        return {
            func: someFunc,
            func2: someFunc2,
            prop: someProp
        };
    }());
    [/code]

    The problem with this is that I can't really use code completion, because the IDE doesn't know about the properties that the function literal is returning (I'm using IntelliJ IDEA 9, by the way). I've looked at jQuery code and tried to do this:

    [code]
    (function (window, undefined) {
        var myLibrary = (function () {
            var someProp = "...";
            function someFunc() { ... }
            function someFunc2() { ... }
            return {
                func: someFunc,
                func2: someFunc2,
                prop: someProp
            };
        }());
        window.myLibrary = myLibrary;
    }(window));
    [/code]

    I tried this, but now I have a different problem: the IDE doesn't really pick up on myLibrary either. The way I'm solving the problem now is this:

    [code]
    var myLibrary = {
        func: function () { },
        func2: function () { },
        prop: ""
    };

    myLibrary = (function () {
        var someProp = "...";
        function someFunc() { ... }
        function someFunc2() { ... }
        return {
            func: someFunc,
            func2: someFunc2,
            prop: someProp
        };
    }());
    [/code]

    But that seems kinda clunky, and I can't exactly figure out how jQuery is doing it. Another question I have is how to handle functions with arbitrary numbers of parameters. For example, jQuery.bind can take 2 or 3 parameters, and the IDE doesn't seem to complain. I tried to do the same thing with my library, where a function could take 0 arguments or 1 argument. However, the IDE complains and warns that the correct number of parameters aren't being sent in. How do I handle this?

    Read the article

  • Managed LAMP platform for maximizing availability and global reach, not scalability

    - by user66819
    Assume a Linux/Apache/MySQL/PHP application for a small base of registered users. With a small userbase there are no traffic peaks, so the scalability that cloud platforms offer is not imperative. But the system is mission-critical, so availability is the primary goal. Users are also distributed across Asia, Europe, and the US, so multiple server locations that minimize users' network hops would be highly desirable. The dream: a managed VPS platform where we would configure a single server (uploading PHP and other files, manipulating the database, etc.), and the platform would automatically mirror the server in a handful of key places around the world (say, one on each US coast, one in Europe, one in East Asia). File system synchronization and MySQL replication would happen automatically. The core operating system is managed, so we don't need to do full system administration and security, and low-level backups are also done by the service provider, though we do our own backups as well. Couple this with some sort of DNS geo-detection, so users are routed to the nearest operational server... with support for HTTPS, of course. Does such a dream exist? If not, what are some approaches to accomplishing the same end with minimal time investment and minimal monthly hosting costs?

    Read the article

  • Super Joybox 5 HID 0925:8884 not recognized as joystick in Ubuntu 12.04 LTS

    - by Tim Evans
    Problem: When using the "Super JoyBox 5" 4-port PlayStation 2 to USB adapter, the device is not recognized as a joystick. No js0 is created; instead, another input eventX and mouseX are created in /dev/input. When using the directional buttons (up, down, left, right) on a PlayStation 1 controller attached to the device, the mouse cursor moves to the top, bottom, left, and right edges of the screen respectively. Buttons are unresponsive. The joypads attached to the device cannot be used in any games or other programs.

    Attempted remedies: Creating a symlink from the eventX to js0 does not solve the problem.

    Additional info: joydev is loaded and running properly according to lsmod. evtest can be run on the created eventX (sudo evtest /dev/input/event14 in my case) and the buttons and axes all register inputs. Here is a paste of evtest's diagnostic and the first couple of button events:

    [code]
    sudo evtest /dev/input/event14
    Input driver version is 1.0.1
    Input device ID: bus 0x3 vendor 0x925 product 0x8884 version 0x100
    Input device name: "HID 0925:8884"
    Supported events:
      Event type 0 (EV_SYN)
      Event type 1 (EV_KEY)
        Event code 288 (BTN_TRIGGER)
        Event code 289 (BTN_THUMB)
        Event code 290 (BTN_THUMB2)
        Event code 291 (BTN_TOP)
        Event code 292 (BTN_TOP2)
        Event code 293 (BTN_PINKIE)
        Event code 294 (BTN_BASE)
        Event code 295 (BTN_BASE2)
        Event code 296 (BTN_BASE3)
        Event code 297 (BTN_BASE4)
        Event code 298 (BTN_BASE5)
        Event code 299 (BTN_BASE6)
        Event code 300 (?)
        Event code 301 (?)
        Event code 302 (?)
        Event code 303 (BTN_DEAD)
        Event code 304 (BTN_A)
        Event code 305 (BTN_B)
        Event code 306 (BTN_C)
        Event code 307 (BTN_X)
        Event code 308 (BTN_Y)
        Event code 309 (BTN_Z)
        Event code 310 (BTN_TL)
        Event code 311 (BTN_TR)
        Event code 312 (BTN_TL2)
        Event code 313 (BTN_TR2)
        Event code 314 (BTN_SELECT)
        Event code 315 (BTN_START)
        Event code 316 (BTN_MODE)
        Event code 317 (BTN_THUMBL)
        Event code 318 (BTN_THUMBR)
        Event code 319 (?)
        Event code 320 (BTN_TOOL_PEN)
        Event code 321 (BTN_TOOL_RUBBER)
        Event code 322 (BTN_TOOL_BRUSH)
        Event code 323 (BTN_TOOL_PENCIL)
        Event code 324 (BTN_TOOL_AIRBRUSH)
        Event code 325 (BTN_TOOL_FINGER)
        Event code 326 (BTN_TOOL_MOUSE)
        Event code 327 (BTN_TOOL_LENS)
        Event code 328 (?)
        Event code 329 (?)
        Event code 330 (BTN_TOUCH)
        Event code 331 (BTN_STYLUS)
        Event code 332 (BTN_STYLUS2)
        Event code 333 (BTN_TOOL_DOUBLETAP)
        Event code 334 (BTN_TOOL_TRIPLETAP)
        Event code 335 (BTN_TOOL_QUADTAP)
      Event type 3 (EV_ABS)
        Event code 0 (ABS_X) Value 127 Min 0 Max 255 Flat 15
        Event code 1 (ABS_Y) Value 127 Min 0 Max 255 Flat 15
        Event code 2 (ABS_Z) Value 127 Min 0 Max 255 Flat 15
        Event code 3 (ABS_RX) Value 127 Min 0 Max 255 Flat 15
        Event code 4 (ABS_RY) Value 127 Min 0 Max 255 Flat 15
        Event code 5 (ABS_RZ) Value 127 Min 0 Max 255 Flat 15
        Event code 6 (ABS_THROTTLE) Value 127 Min 0 Max 255 Flat 15
        Event code 7 (ABS_RUDDER) Value 127 Min 0 Max 255 Flat 15
        Event code 8 (ABS_WHEEL) Value 127 Min 0 Max 255 Flat 15
        Event code 9 (ABS_GAS) Value 127 Min 0 Max 255 Flat 15
        Event code 10 (ABS_BRAKE) Value 127 Min 0 Max 255 Flat 15
        Event code 11 (?) Value 127 Min 0 Max 255 Flat 15
        Event code 12 (?) Value 127 Min 0 Max 255 Flat 15
        Event code 13 (?) Value 127 Min 0 Max 255 Flat 15
        Event code 14 (?) Value 127 Min 0 Max 255 Flat 15
        Event code 15 (?) Value 127 Min 0 Max 255 Flat 15
        Event code 16 (ABS_HAT0X) Value 0 Min -1 Max 1
        Event code 17 (ABS_HAT0Y) Value 0 Min -1 Max 1
        Event code 18 (ABS_HAT1X) Value 0 Min -1 Max 1
        Event code 19 (ABS_HAT1Y) Value 0 Min -1 Max 1
        Event code 20 (ABS_HAT2X) Value 0 Min -1 Max 1
        Event code 21 (ABS_HAT2Y) Value 0 Min -1 Max 1
        Event code 22 (ABS_HAT3X) Value 0 Min -1 Max 1
        Event code 23 (ABS_HAT3Y) Value 0 Min -1 Max 1
      Event type 4 (EV_MSC)
        Event code 4 (MSC_SCAN)
    Testing ... (interrupt to exit)
    Event: time 1351223176.126127, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90001
    Event: time 1351223176.126130, type 1 (EV_KEY), code 288 (BTN_TRIGGER), value 1
    Event: time 1351223176.126166, -------------- SYN_REPORT ------------
    Event: time 1351223178.238127, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90001
    Event: time 1351223178.238130, type 1 (EV_KEY), code 288 (BTN_TRIGGER), value 0
    Event: time 1351223178.238167, -------------- SYN_REPORT ------------
    Event: time 1351223180.422127, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90002
    Event: time 1351223180.422129, type 1 (EV_KEY), code 289 (BTN_THUMB), value 1
    Event: time 1351223180.422163, -------------- SYN_REPORT ------------
    Event: time 1351223181.558099, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90002
    Event: time 1351223181.558102, type 1 (EV_KEY), code 289 (BTN_THUMB), value 0
    Event: time 1351223181.558137, -------------- SYN_REPORT ------------
    Event: time 1351223182.486137, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90003
    Event: time 1351223182.486140, type 1 (EV_KEY), code 290 (BTN_THUMB2), value 1
    Event: time 1351223182.486172, -------------- SYN_REPORT ------------
    Event: time 1351223183.302130, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90003
    Event: time 1351223183.302132, type 1 (EV_KEY), code 290 (BTN_THUMB2), value 0
    Event: time 1351223183.302165, -------------- SYN_REPORT ------------
    Event: time 1351223184.030133, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90004
    Event: time 1351223184.030136, type 1 (EV_KEY), code 291 (BTN_TOP), value 1
    Event: time 1351223184.030166, -------------- SYN_REPORT ------------
    Event: time 1351223184.558135, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90004
    Event: time 1351223184.558138, type 1 (EV_KEY), code 291 (BTN_TOP), value 0
    Event: time 1351223184.558168, -------------- SYN_REPORT ------------
    [/code]

    The directional buttons on the pad are being identified as HAT0Y and HAT0X axes (that's zero, not the letter O). Apparently this device used to work flawlessly on kernel 2.4.x systems, and even as late as Ubuntu 10.04. Perhaps the joydev rules for identifying joypads have changed? Currently this kind of bug affects a few different types of controller adapters, but since this is the one that I PERSONALLY have (and that has been driving me my own special brand of crazy), it's the one I'm documenting.

    What I think should be happening instead: the device should register js0 through js3, one for each port, or a single js0 that handles all of the connected devices, with different numbered axes for each connected joypad. Either way, it should work as a joystick and stop controlling the mouse cursor. Please help!

    Read the article

  • Should we exclude code from the code coverage analysis?

    - by romaintaz
    I'm working on several applications, mainly legacy ones. Currently, their code coverage is quite low: generally between 10 and 50%. For several weeks we have had recurrent discussions with the Bangalore teams (the main part of the development is done offshore in India) regarding exclusions of packages or classes for Cobertura (our code coverage tool, even if we are currently migrating to JaCoCo). Their point of view is the following: as they will not write any unit tests on some layers of the application (1), these layers should simply be excluded from the code coverage measure. In other words, they want to limit the code coverage measure to the code that is tested or should be tested. Also, when they work on unit tests for a complex class, the benefit - purely in terms of code coverage - will go unnoticed in a large application. Reducing the scope of the code coverage measure would make this kind of effort more visible...

    The interest of this approach is that we would have a code coverage measure that indicates the current status of the part of the application we consider testable. However, my point of view is that we are somehow faking the figures. This solution is an easy way to reach a higher level of code coverage without any effort. Another point that bothers me is the following: if we show a coverage increase from one week to another, how can we tell whether this good news is due to the good work of the developers or simply due to new exclusions? In addition, we will not be able to know exactly what is considered in the code coverage measure. For example, if I have a 10,000-line application with 40% code coverage, I can deduce that 40% of my code base is tested (2). But what happens if we set exclusions? If the code coverage is now 60%, what can I deduce exactly? That 60% of my "important" code base is tested? How can I

    As far as I am concerned, I prefer to keep the "real" code coverage value, even if we can't be cheerful about it. In addition, thanks to Sonar, we can easily navigate our code base and know, for any module / package / class, its own code coverage. But of course, the global code coverage will still be low. What is your opinion on this subject? How do you do it on your projects? Thanks. (1) These layers are generally related to the UI / Java beans, etc. (2) I know that's not true. In fact, it only means that 40% of my code base
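
    The question is Java/Cobertura-specific, but for comparison it may help to see how the trade-off looks when the exclusion is declared in the code itself rather than in the tool's configuration. A sketch in C#/.NET (the class names are invented), where [ExcludeFromCodeCoverage] keeps each exclusion visible, and reviewable, at the member it affects:

    [code]
    using System.Diagnostics.CodeAnalysis;

    // UI plumbing the team has decided not to unit test: the exclusion is
    // declared on the class itself, so every line removed from the coverage
    // denominator is visible in code review rather than hidden in a config file.
    [ExcludeFromCodeCoverage]
    public class CustomerEditForm
    {
        public void BindWidgets() { /* exercised by manual/UI tests */ }
    }

    public class DiscountCalculator
    {
        // Business logic stays inside the measured code base.
        public decimal Apply(decimal price, decimal rate)
        {
            return price - price * rate;
        }
    }
    [/code]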

    Read the article

  • Code Metrics: Number of IL Instructions

    - by DigiMortal
    In my previous posting about code metrics I introduced how to measure LoC (Lines of Code) in .NET applications. Now let's take a step further and look at how to measure compiled code. This way we can get some picture of what the compiler produces. In this posting I will introduce a code metric called the number of IL instructions. NB! The number of IL instructions is not something you can use to measure the productivity of your team. If you want to get a better idea about the context of this metric and LoC, then please read my first posting about LoC.

    What are IL instructions? When code written in some .NET Framework language is compiled, the compiler produces assemblies that contain byte code. These assemblies are executed later by the Common Language Runtime (CLR), the code execution engine of the .NET Framework. The byte code is called Intermediate Language (IL) - a more general language than, for example, C# or VB.NET. You can use the ILDasm tool to convert assemblies to IL assembler so you can read them. As IL instructions are the building blocks of all .NET Framework binary code, these instructions are small and highly general - we don't want a very rich low-level language, because it would execute more slowly than a more general one. Every method or property call in a .NET Framework language corresponds to a set of IL instructions. There is no 1:1 relationship between a line in a high-level language and a line in IL assembler; there are more IL instructions than lines of C# code, for example.

    How many instructions are there? I have no general answer, because it really depends on your code. Here you can see some metrics from my current community project that is developed on SharePoint Server 2007. On average I have about 7 IL instructions per line of code. This is not a metric you should use; it is just an illustrative example so you can see the difference between the number of lines and the number of IL instructions.

    Why should I measure the number of IL instructions? Just take a look at the chart above. The compiler does something that you cannot see - it compiles your code to IL. This is not an intuitive process, because you usually cannot say exactly what the end result will be. You know it in broad terms, but you don't know it exactly. Therefore we can expect some surprises, and that's why we should measure the number of IL instructions. For example, you may find a better solution for some method in your source code. It looks nice, it works nicely, and everything seems to be okay. But on a server under load your fix may be way slower than the previous code. Although you minimized the number of lines of code, it ended up increasing the number of IL instructions.

    How to measure the number of IL instructions? My choice is NDepend, because Visual Studio is not able to measure this metric. The steps are easy. Open your NDepend project, or create a new one and add all your application assemblies to the project (you can also add a Visual Studio solution to the project). Run the project analysis and wait until it is done. You can see the overall stats in the global summary window. This is the same window I used to read the LoC and number-of-IL-instructions metrics for my chart. Meanwhile, I made some changes to my code (enabled advanced caching for the events and event registrations module) and then ran code analysis again to get results for this section of this posting. NDepend is also able to tell you exactly which parts of code have problematically many IL instructions.
    The code quality section of the CQL Query Explorer shows you how many problems there are with members in the analyzed code. If you click on the line Methods too big (NbILInstructions), you can see all the problematic members of classes in the CQL Explorer, shown in the image on the right. In my case I have 10 methods that are too big, and two of them have a horrible number of IL instructions - just take a look at the first two methods in this TOP 10. Also note the query box. NDepend has an easy, SQL-like query language for querying code analysis results. You can modify these queries if you like, and you can also define your own if the default set is not enough for you.

    What is a good result? As you can see from the query window, the number of IL instructions per member should be at most 200. Of course, as always, the fewer instructions you have, the better performing code you have. I don't mean small differences here, but big ones. For example, take a look at the first method in the warnings list. The number of IL instructions it has is huge. And believe me - this method looks awful.

    Conclusion: The number of IL instructions is a useful metric when optimizing your code. For analyzing code at a general level, to find overly long methods, you can use the LoC metric, because it is more intuitive and you can therefore handle the situation more easily. You can also use NDepend as a code metrics tool, because it has a lot of metrics to offer.
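
    NDepend counts actual IL instructions; if you only need a rough, free approximation, reflection can report the size of each method's IL stream. The sketch below (a minimal example, not the posting's method) prints the ten largest methods of an assembly in IL bytes - bytes, not instructions, since one instruction is one or more bytes, but large values flag the same suspects:

    [code]
    using System;
    using System.Linq;
    using System.Reflection;

    class IlSizeReport
    {
        static void Main(string[] args)
        {
            // Path of the assembly to inspect is passed on the command line.
            Assembly assembly = Assembly.LoadFrom(args[0]);

            var biggest = assembly.GetTypes()
                .SelectMany(t => t.GetMethods(BindingFlags.Instance | BindingFlags.Static |
                                              BindingFlags.Public | BindingFlags.NonPublic))
                .Where(m => m.GetMethodBody() != null)   // skip abstract/extern methods
                .Select(m => new { Method = m, ILBytes = m.GetMethodBody().GetILAsByteArray().Length })
                .OrderByDescending(x => x.ILBytes)
                .Take(10);

            foreach (var x in biggest)
                Console.WriteLine("{0}.{1}: {2} IL bytes",
                    x.Method.DeclaringType.Name, x.Method.Name, x.ILBytes);
        }
    }
    [/code]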

    Read the article

  • How can you get constructive criticism for your code?

    - by burnt1ce
    My team rarely does code reviews, mainly because we don't have enough time and people lack the energy and will to do so. But I would really like to know what people think about my code when they read it. This way, I have a better understanding of how other people think, and I can tailor my code accordingly so it's easier to read. So my question is: how can I get constructive criticism for my code? My intent is to understand how people think so I can write more readable code.

    Read the article

  • Is there an open-source project that can be an example of well-written code?

    - by Renato Dinhani Conceição
    The title expresses my intention. I want to see the code of a big project that can be considered a good example of good code writing (clean code, modularization, comments, etc.). I don't want to know whether the tool is good or not, but only how the code IS. Is there a project that can be used as an example? I'm asking because most great projects have their flaws - some pieces, or even whole files, look as if they were written by someone newly introduced to system development (I think maybe everyone does this in some part of their projects).

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
    A simple strategy for mapping classes to database tables might be "one table for every entity persistent class." This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both "is a" and "has a" relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don't support type inheritance - and even when it's available, it's usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy:

    - Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information.
    - Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships.
    - Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema.

    I will explain each of these strategies in a series of posts, and this one is dedicated to TPH. In this series we'll dig deeply into each of these strategies and will learn "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario.

    Inheritance Mapping with Entity Framework Code First
    All of the inheritance mapping strategies that we discuss in this series will be implemented with EF Code First CTP5. The CTP5 build of the new EF Code First library was released by the ADO.NET team earlier this month. EF Code First enables a pretty powerful code-centric development workflow for working with data. I'm a big fan of the EF Code First approach, and I'm pretty excited about the productivity and power that it brings. When it comes to inheritance mapping, not only does Code First fully support all the strategies, it also gives you ultimate flexibility to work with domain models that involve inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and is now more intuitive and concise compared to CTP4.

    A Note For Those Who Follow Other Entity Framework Approaches
    If you are following EF's "Database First" or "Model First" approaches, I still recommend reading this series: although the implementation is Code First specific, the explanations around each of the strategies apply perfectly to all approaches, be it Code First or others.

    A Note For Those Who are New to Entity Framework and Code First
    If you choose to learn EF, you've chosen well. If you choose to learn EF with Code First, you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by the ADO.NET team here. In this post, I assume you have already set up your machine to do Code First development and that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First, like Complex Types and Shared Primary Key Associations.

    A Top Down Development Scenario
    These posts take a top-down approach; they assume that you're starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C#, and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you're working bottom up, starting with existing database tables.
    I'll show some tricks along the way that help you deal with imperfect table layouts. Let's start with the mapping of entity inheritance.

    The Domain Model
    In our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We allow various billing types and represent them as subclasses of the BillingDetail class. For now, we support CreditCard and BankAccount.

    Implement the Object Model with Code First
    As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet for the base class, BillingDetail. Code First will find the other classes in the hierarchy based on the Reachability Convention.

    [code]
    public abstract class BillingDetail
    {
        public int BillingDetailId { get; set; }
        public string Owner { get; set; }
        public string Number { get; set; }
    }

    public class BankAccount : BillingDetail
    {
        public string BankName { get; set; }
        public string Swift { get; set; }
    }

    public class CreditCard : BillingDetail
    {
        public int CardType { get; set; }
        public string ExpiryMonth { get; set; }
        public string ExpiryYear { get; set; }
    }

    public class InheritanceMappingContext : DbContext
    {
        public DbSet<BillingDetail> BillingDetails { get; set; }
    }
    [/code]

    This object model is all that is needed to enable inheritance with Code First. If you put this in your application, you will be able to immediately start working with the database and do CRUD operations. Before going into details about how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries.

    Polymorphic Queries
    LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries - that is, queries for instances of a class and all instances of its subclasses. For example, consider the following query:

    [code]
    IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b;
    List<BillingDetail> billingDetails = linqQuery.ToList();
    [/code]

    Or the same query in EntitySQL:

    [code]
    string eSqlQuery = @"SELECT VALUE b FROM BillingDetails AS b";
    ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext
                                                                             .CreateQuery<BillingDetail>(eSqlQuery);
    List<BillingDetail> billingDetails = objectQuery.ToList();
    [/code]

    linqQuery and eSqlQuery are both polymorphic and return a list of objects of the type BillingDetail, which is an abstract class, but the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount.

    Non-polymorphic Queries
    All LINQ to Entities and EntitySQL queries are polymorphic: they return not only instances of the specific entity class to which they refer, but all subclasses of that class as well. Non-polymorphic queries, on the other hand, are queries whose polymorphism is restricted and which only return instances of a particular subclass. In LINQ to Entities, this can be specified by using the OfType<T>() method.
    For example, the following query returns only instances of BankAccount:

    [code]
    IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b;
    [/code]

    EntitySQL has an OFTYPE operator that does the same thing:

    [code]
    string eSqlQuery = @"SELECT VALUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b";
    [/code]

    In fact, the above query with the OFTYPE operator is a short form of the following query expression that uses the TREAT and IS OF operators:

    [code]
    string eSqlQuery = @"SELECT VALUE TREAT(b as Model.BankAccount)
                         FROM BillingDetails AS b
                         WHERE b IS OF(Model.BankAccount)";
    [/code]

    (Note that in the above query, Model.BankAccount is the fully qualified name of the BankAccount class. You need to change "Model" to your own namespace name.)

    Table per Class Hierarchy (TPH)
    An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don't have to do anything special in Code First to enable TPH; it's the default inheritance mapping strategy. This mapping strategy is a winner in terms of both performance and simplicity. It's the best-performing way to represent polymorphism - both polymorphic and non-polymorphic queries perform well - and it's even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions. Schema evolution is straightforward.

    Discriminator Column
    As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn't a property of the persistent class in our object model; it's used internally by EF Code First. By default, the column name is "Discriminator", and its type is string. The values default to the persistent class names - in this case, "BankAccount" or "CreditCard". EF Code First automatically sets and retrieves the discriminator values.

    TPH Requires Properties in Subclasses to be Nullable in the Database
    TPH has one major problem: columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map the CardType property in the CreditCard class. However, in a typical mapping scenario, Code First always creates an (INT, NOT NULL) column in the database for an int property in a persistent class. But in this case, since a BankAccount instance won't have a CardType property, the CardType field must be NULL for that row, so Code First creates an (INT, NULL) column instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity.

    TPH Violates the Third Normal Form
    Another important issue is normalization. We've created functional dependencies between nonkey columns, violating the third normal form. Basically, the value of the Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName), but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may also be achieved by proper optimization of the SQL execution plans (in other words, ask your DBA).
    Generated SQL Query
    Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement:

    [code]
    SELECT
     [Extent1].[Discriminator] AS [Discriminator],
     [Extent1].[BillingDetailId] AS [BillingDetailId],
     [Extent1].[Owner] AS [Owner],
     [Extent1].[Number] AS [Number],
     [Extent1].[BankName] AS [BankName],
     [Extent1].[Swift] AS [Swift],
     [Extent1].[CardType] AS [CardType],
     [Extent1].[ExpiryMonth] AS [ExpiryMonth],
     [Extent1].[ExpiryYear] AS [ExpiryYear]
    FROM [dbo].[BillingDetails] AS [Extent1]
    WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard')
    [/code]

    And the non-polymorphic query for the BankAccount subclass generates this SQL statement:

    [code]
    SELECT
     [Extent1].[BillingDetailId] AS [BillingDetailId],
     [Extent1].[Owner] AS [Owner],
     [Extent1].[Number] AS [Number],
     [Extent1].[BankName] AS [BankName],
     [Extent1].[Swift] AS [Swift]
    FROM [dbo].[BillingDetails] AS [Extent1]
    WHERE [Extent1].[Discriminator] = 'BankAccount'
    [/code]

    Note how Code First adds a restriction on the discriminator column and also how it only selects the columns that belong to the BankAccount entity.

    Change the Discriminator Column Data Type and Values With the Fluent API
    Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code will change the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively:

    [code]
    protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<BillingDetail>()
                    .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))
                    .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC"));
    }
    [/code]

    Changing the data type of the discriminator column is also interesting. In the above code, we passed strings to the HasValue method, but this method has been defined to accept an object:

    [code]
    public void HasValue(object value);
    [/code]

    Therefore, if we pass it a value of type int, for example, then Code First not only uses our desired values (i.e. 1 and 2) in the discriminator column but also changes the column type to (INT, NOT NULL):

    [code]
    modelBuilder.Entity<BillingDetail>()
                .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))
                .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2));
    [/code]

    Summary
    In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design - after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy, which doesn't expose you to this problem.

    References
    ADO.NET team blog
    Java Persistence with Hibernate book
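
    To make the TPH behaviour concrete, here is a small usage sketch against the model above (not part of the original post; connection-string setup and sample values are assumed). Both entities are stored as rows of the single BillingDetails table, and the polymorphic query materializes the correct CLR subtypes from the discriminator:

    [code]
    using (var context = new InheritanceMappingContext())
    {
        context.BillingDetails.Add(new BankAccount { Owner = "Joe", BankName = "...", Swift = "..." });
        context.BillingDetails.Add(new CreditCard { Owner = "Joe", CardType = 1 });
        context.SaveChanges();

        // Prints "BankAccount: Joe" and "CreditCard: Joe".
        foreach (BillingDetail detail in context.BillingDetails)
            Console.WriteLine("{0}: {1}", detail.GetType().Name, detail.Owner);
    }
    [/code]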

    Read the article

  • Thou shalt not put code on a pedestal - Code is a tool, no more, no less

    - by Ralf Westphal
    "Write great code and everything else becomes easier" is what Paul Pagel believes in. That's his version of an adage by Brian Marick that he cites: "treat code as an end, not just a means." And he concludes: "My post-Agile world is software craftsmanship." I wonder if that's really the way to go. Will "simply" writing great code lead the software industry into the light? He's alluding to the philosopher Kant, who proposed that human beings should never be treated as a means, but always as an end. But should we transfer this ethical statement into the world of software? I doubt it.

    Reason #1: Human beings are categorially different from code. They are autonomous entities who need to find a way of living happily together. To Kant it seemed this goal could only be reached if nobody (ab)used a human being for his or her purposes, because using a human being, i.e. treating it as a means, would contradict the fundamental autonomy and freedom of human beings. People should hold a symmetric view of their relationships: since nobody wants to be (ab)used, nobody should (ab)use anybody else. If you want to be treated decently, with respect, in accordance with your own free will - which means as an end - then do the same to other people. Code is dead; it's a product, a tool for people to reach their goals. No company spends any money on code other than to save money or earn money in the long run. Code is not a puppy. Enterprises do not commission software development just to feel good in its company. Code is not a buddy. Code is a slave, if you will. A mechanical slave, a non-tangible robot. Code is a tool, is a tool. And if we start to treat it differently, if we elevate its status unduly... I guess that will contort our relationship in a counterproductive way. Please get me right: just because something is "just a tool", "just a product", does not mean we should not be careful while designing, building, and using it. Right to the contrary. We should be very careful when writing code - but not for the code's sake! We should be careful because we respect our customers, who are fellow human beings who should be treated as an end. If we are careless, neglectful, or ignorant when producing code on their behalf, then we're using them. Being sloppy means you're caring more for yourself than for your customer. You're then treating the customer as a means to fulfill some of your own needs. That's plain unethical behavior.

    Reason #2: The focus should always be on your purpose, not on any tool. But if code is treated as an end, then the focus is on the code. That might sound right, because where else should your focus be as a software developer? But, well, I'd say your focus should be on delivering value to your customer. Because in the end your customer does not care if you write a single line of code. She just wants her problem to be solved. Solving problems is the purpose of any contractor. Code must be treated just as a means, a tool we know how to handle very well. But if we're really trying to be craftsmen, then we should be conscious about exactly that and act ethically. That means we must never be so focused on our tool as to be unable to suggest better solutions to the problems of our customers than code.

    I'm all with Paul when he urges us to "write great code". Sure, if you need to write code, then by all means do so. Write the best code you can think of - and then try to improve it.
    Paul has all the best intentions when he signs Brian's "treat code as an end" - but as we all know, "the road to hell is paved with best intentions" ;-) Yes, I can imagine a "hell of code focus". In fact, I don't need to imagine it; I see it quite often. Because code hell is wherever two developers stand together and are so immersed in talking about all sorts of coding tricks, design patterns, code smells, technologies, platforms, and tools that they lose sight of the big picture. Talking about TDD or SOLID or refactoring is a sign of consciousness - relative to the "cowboy coder's" view of the world. But from yet another point of view, TDD, SOLID, and refactoring are just cures for ailments within a system. And I fear that if "writing great code" is the only focus, or the main focus, of software development, then we as an industry lose the ability to see that. Focus draws a line around something; it defines a horizon for perception and thinking. So if we focus on code, our horizon ends where "the land of code" ends. I don't think that should be our professional attitude.

    So what about Software Craftsmanship as the next big thing after Agility? I think Software Craftsmanship has an important message for all software developers and beyond. But to make it the successor of the Agility movement seems to miss a point. Agility never claimed to solve all software development problems, I'd say, so to blame it for having missed out on certain aspects is wrong. If I had to summarize Agility in one word, I'd say "value". Agility put value for the customer back into software development. Focus on delivering value early and often - that's Agility's mantra. All else follows from that. And I ask you: is that obsolete? Is delivering value not hip anymore? No, surely not. That's our very purpose as software developers. So how can Agility become obsolete and need to be replaced? We need to do away with this "either/or" thinking. It's either Agility or Lean or Software Craftsmanship or whatnot. Instead we should start integrating concepts and movements. Think "both/and". Think Agility plus Software Craftsmanship plus Lean plus whatnot. We don't need to tear anything down from a pedestal and replace it with a new idol. Instead we should do away with pedestals and arrange whatever is helpful in a circle. Then we can turn to concepts and movements for whatever they are best at. After 10 years of Agility we should be able to identify what it was good at - and keep that. Keep Agility around and add whatever Agility was lacking or never concerned with. Add whatever is at the core of Software Craftsmanship. Add whatever is at the core of Lean, etc. But don't call out the age of Post-Agility - because it had better never arrive. Once we lose Agility's core, we lose focus on the customer.

    Read the article

  • Performance of Managed C++ vs. Unmanaged/Native C++

    - by bsobaid
    I am writing a very high performance application that handles and processes hundreds of events every millisecond. Is unmanaged C++ faster than managed C++? And why? Managed C++ deals with the CLR instead of the OS, and the CLR takes care of memory management, which simplifies the code and is probably also more efficient than code written by "a programmer" in unmanaged C++ - or is there some other reason? When using managed code, how can one avoid dynamic memory allocation, which causes a performance hit, if it is all transparent to the programmer and handled by the CLR? So, coming back to my question: is managed C++ more efficient in terms of speed than unmanaged C++, and why?
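
    On the allocation part of the question: managed memory management is not entirely out of the programmer's hands. One common tactic in hot paths (shown in C# here, but the same applies to C++/CLI) is to preallocate value-type buffers once, so the steady-state, per-event path allocates nothing and the GC stays idle. A minimal sketch - the event type and buffer size are invented for illustration:

    [code]
    // A value type: an array of these is one contiguous allocation,
    // and writing into a slot allocates nothing per event.
    struct MarketEvent
    {
        public long Timestamp;
        public double Price;
    }

    class EventProcessor
    {
        // One up-front allocation, reused as a ring buffer forever.
        private readonly MarketEvent[] _buffer = new MarketEvent[4096];
        private int _next;

        public void OnEvent(long timestamp, double price)
        {
            int i = _next++ & (_buffer.Length - 1); // length is a power of two
            _buffer[i].Timestamp = timestamp;
            _buffer[i].Price = price;
        }
    }
    [/code]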

    Read the article

  • Managed C++ prospects

    - by Srikanth
    Has anyone tried coding in managed C++? I have a few questions:

    - How productive is the language compared to C#?
    - Is there any restriction on the type of projects that can be written?
    - Can we write a web application in managed C++?
    - Is it possible to mix managed and unmanaged C++ code in one application?
    - Is MFC still valid in managed C++?
    - Will it be the best option when considering migration of a VC++ application?

    Read the article

  • managed beans as managed properties

    - by Sean
    I am using JSF 1.1 on WebSphere 6.1. I am building search functionality within an application and am having some issues. I've stripped out the extras and am left with the following four managed beans:

    - SearchController - controller bean, session scope
    - SearchResults - session scope (stores the results)
    - ProductSearch - session scope (stores the search conditions)
    - ResultsBacking - backing bean for the DataTable, used to determine which row was clicked, request scope

    The SearchController bean has the other three as managed properties. All except ResultsBacking are session scoped. If there is only one item in the search results, I want to bring up that record directly. I call setFirst(0) on the data table in the ResultsBacking method (I want to reuse the existing method that handles which item was clicked, so this is called right after the setFirst). When I go to do another search, I get an IllegalArgumentException when calling getRowData on the data table. According to the API, this is thrown 'if now(sic) row data is available at the currently specified row index'. I'm confused as to why this happens: it works the first time but not the second. Do I need to remove the ResultsBacking on a new search to get rid of the old state?

    Read the article

  • Code bases for desktop and mobile versions of the same app

    - by Code-Guru
    I have written a small Java Swing desktop application. It seems like a natural step to port it to Android, since I am interested in learning how to program for that platform. I believe that I can reuse some of my existing code base. (Of course, exactly how much reuse I can get out of it will only be determined as I start coding the Android app.) Currently I am hosting my Java Swing app on Sourceforge.net and use Git for version control. As I start creating the Android app, I am considering two options:

    1. Add the Android code to my existing repository, creating separate directories and Java packages for the Android-specific code and resources.
    2. Create a new Sourceforge project (or even host it elsewhere) and create a new Git repository.
       a. With a new repository, I can simply add the files from my original project that I will reuse. (I don't particularly like this option, as it will be difficult to modify both copies of the same file in both repositories.)
       b. Or I can branch the original repository. This adds the difficulty of merging changes to shared source files.

    Mostly I am trying to decide between choices 1 and 2b. If I'm going to branch the existing repository, what advantages are there to hosting it as a separate SF project (or even using another OSS hosting service), as opposed to keeping all my source code in the current SF project?

    Read the article

  • How do you put a price on your source code?

    - by deviDave
    I was asked to sell the source code of a small utility app I wrote years ago, together with this app's existing users. I tried investigating how to put a price on the source code and haven't come up with a good solution so far. I first tried searching the net, but the information I found there is somehow far from reality. Then I found a few people who also sold their source code together with its users. But their prices seem unrealistic (too high). For example, one person had an app priced at around $200 per user, and he had 80 users. He sold the source with the users for $30k. How did he come up with this price? Is it a good price if I price the code by the formula: num_of_users x app_price + app_price x num_of_new_users_in_one_year? This means that I count the price by selling each user for the price of the app, then adding the amount of money I would earn in one year from this app. If this is a good formula, what shall I do with source code that has no users yet?
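
    For illustration, plugging the quoted numbers into that formula: 80 existing users x $200 = $16,000, so a $30k sale price matches the formula if roughly 70 new users (another $14,000 at $200 each) were expected within the year - one plausible reconstruction of how that seller could have arrived at his price.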

    Read the article

  • Generating EF Code First model classes from an existing database

    - by Jon Galloway
    Entity Framework Code First is a lightweight way to "turn on" data access for a simple CLR class. As the name implies, the intended use is that you're writing the code first and thinking about the database later. However, I really like how Entity Framework Code First works, and I want to use it in existing projects and projects with pre-existing databases. For example, MVC Music Store comes with a SQL Express database that's pre-loaded with a catalog of music (including genres, artists, and songs), and while it may eventually make sense to load that seed data from a different source, for the MVC 3 release we wanted to keep using the existing database. While I'm not getting the full benefit of Code First - writing code which drives the database schema - I can still benefit from the simplicity of the lightweight code approach.

    Scott Guthrie blogged about how to use Entity Framework with an existing database, looking at how you can override the Entity Framework Code First conventions so that it can work with a database which was created following other conventions. That gives you the information you need to create the model classes manually. However, it turns out that with Entity Framework 4 CTP 5, there's a way to generate the model classes from the database schema. Once the grunt work is done, of course, you can go in and modify the model classes as you'd like, but you can save the time and frustration of figuring out things like mapping SQL database types to .NET types. Note that this template requires Entity Framework 4 CTP 5 or later. You can install EF 4 CTP 5 here.

    Step One: Generate an EF Model from your existing database

    The code generation system in Entity Framework works from a model. You can add a model to your existing project and delete it when you're done, but I think it's simpler to just spin up a separate project to generate the model classes. When you're done, you can delete the project without affecting your application, or you may choose to keep it around in case you have other database schema updates which require model changes. I chose to add the model classes to the Models folder of a new MVC 3 application. Right-click the folder and select "Add / New Item..." Next, select ADO.NET Entity Data Model from the Data Templates list, and name it whatever you want (the name is unimportant). Next, select "Generate from database." This is important - it's what kicks off the next few steps, which read your database's schema. Now it's time to point the Entity Data Model Wizard at your existing database. I'll assume you know how to find your database - if not, I covered that a bit in the MVC Music Store tutorial section on Models and Data. Select your database, uncheck "Save entity connection settings in Web.config" (since we won't be using them within the application), and click Next. Now you can select the database objects you'd like modeled. I just selected all tables and clicked Finish. And there's your model. If you want, you can make additional changes here before going on to generate the code.

    Step Two: Add the DbContext Generator

    Like most code generation systems in Visual Studio lately, Entity Framework uses T4 templates, which allow for some control over how the code is generated. K. Scott Allen wrote a detailed article on T4 Templates and the Entity Framework on MSDN recently, if you'd like to know more. Fortunately for us, there's already a template that does just what we need without any customization.
    Right-click a blank space in the Entity Framework model surface and select "Add Code Generation Item..." Select the Code group in the Installed Templates section and pick the ADO.NET DbContext Generator. If you don't see this listed, make sure you've got EF 4 CTP 5 installed and that you're looking at the Code templates group. Note that the DbContext Generator template is similar to the EF POCO template which came out last year, but with "fix up" code (unnecessary in EF Code First) removed. As soon as you do this, you'll see two terrifying Security Warnings - unless you click the "Do not show this message again" checkbox the first time. It will also be displayed (twice) every time you rebuild the project, so I checked the box and no immediate harm befell my computer (fingers crossed!). Here's the payoff: two templates (filenames ending with .tt) have been added to the project, and they've generated the code I needed. The "MusicStoreEntities.Context.tt" template built a DbContext class which holds the entity collections, and the "MusicStoreEntities.tt" template built a separate class for each table I selected earlier. We'll customize them in the next step. I recommend copying all the generated .cs files into your application at this point, since accidentally rebuilding the generation project will overwrite your changes if you leave them there.

    Step Three: Modify and use your POCO entity classes

    Note: I made a bunch of tweaks to my POCO classes after they were generated. You don't have to do any of this, but I think it's important that you can - they're your classes, and EF Code First respects that. Modify them as you need for your application, or don't. The Context class derives from DbContext, which is what turns on the EF Code First features. It holds a DbSet<T> for each entity. Think of DbSet<T> as a simple List<T>, but with Entity Framework features turned on.

    [code]
    //------------------------------------------------------------------------------
    // <auto-generated>
    //     This code was generated from a template.
    //
    //     Changes to this file may cause incorrect behavior and will be lost if
    //     the code is regenerated.
    // </auto-generated>
    //------------------------------------------------------------------------------

    namespace EF_CodeFirst_From_Existing_Database.Models
    {
        using System;
        using System.Data.Entity;

        public partial class Entities : DbContext
        {
            public Entities()
                : base("name=Entities")
            {
            }

            public DbSet<Album> Albums { get; set; }
            public DbSet<Artist> Artists { get; set; }
            public DbSet<Cart> Carts { get; set; }
            public DbSet<Genre> Genres { get; set; }
            public DbSet<OrderDetail> OrderDetails { get; set; }
            public DbSet<Order> Orders { get; set; }
        }
    }
    [/code]

    It's a pretty lightweight class as generated, so I just took out the comments, set the namespace, removed the constructor, and formatted it a bit. Done. If I wanted, though, I could have added or removed DbSets, overridden conventions, etc.

    [code]
    using System.Data.Entity;

    namespace MvcMusicStore.Models
    {
        public class MusicStoreEntities : DbContext
        {
            public DbSet<Album> Albums { get; set; }
            public DbSet<Genre> Genres { get; set; }
            public DbSet<Artist> Artists { get; set; }
            public DbSet<Cart> Carts { get; set; }
            public DbSet<Order> Orders { get; set; }
            public DbSet<OrderDetail> OrderDetails { get; set; }
        }
    }
    [/code]

    Next, it's time to look at the individual classes. Some of mine were pretty simple - for the Cart class, I just need to remove the header and clean up the namespace:

    [code]
    //------------------------------------------------------------------------------
    // <auto-generated>
    //     This code was generated from a template.
    //
    //     Changes to this file may cause incorrect behavior and will be lost if
    //     the code is regenerated.
    // </auto-generated>
    //------------------------------------------------------------------------------

    namespace EF_CodeFirst_From_Existing_Database.Models
    {
        using System;
        using System.Collections.Generic;

        public partial class Cart
        {
            // Primitive properties
            public int RecordId { get; set; }
            public string CartId { get; set; }
            public int AlbumId { get; set; }
            public int Count { get; set; }
            public System.DateTime DateCreated { get; set; }

            // Navigation properties
            public virtual Album Album { get; set; }
        }
    }
    [/code]

    I did a bit more customization on the Album class. Here's what was generated:

    [code]
    //------------------------------------------------------------------------------
    // <auto-generated>
    //     This code was generated from a template.
    //
    //     Changes to this file may cause incorrect behavior and will be lost if
    //     the code is regenerated.
    // </auto-generated>
    //------------------------------------------------------------------------------

    namespace EF_CodeFirst_From_Existing_Database.Models
    {
        using System;
        using System.Collections.Generic;

        public partial class Album
        {
            public Album()
            {
                this.Carts = new HashSet<Cart>();
                this.OrderDetails = new HashSet<OrderDetail>();
            }

            // Primitive properties
            public int AlbumId { get; set; }
            public int GenreId { get; set; }
            public int ArtistId { get; set; }
            public string Title { get; set; }
            public decimal Price { get; set; }
            public string AlbumArtUrl { get; set; }

            // Navigation properties
            public virtual Artist Artist { get; set; }
            public virtual Genre Genre { get; set; }
            public virtual ICollection<Cart> Carts { get; set; }
            public virtual ICollection<OrderDetail> OrderDetails { get; set; }
        }
    }
    [/code]

    I removed the header, changed the namespace, and removed some of the navigation properties. One nice thing about EF Code First is that you don't have to have a property for each database column or foreign key. In the Music Store sample, for instance, we build the app up using Code First and start with just a few columns, adding in fields and navigation properties as the application needs them. EF Code First handles the columns we've told it about and doesn't complain about the others. Here's the basic class:

    [code]
    using System.ComponentModel;
    using System.ComponentModel.DataAnnotations;
    using System.Web.Mvc;
    using System.Collections.Generic;

    namespace MvcMusicStore.Models
    {
        public class Album
        {
            public int AlbumId { get; set; }
            public int GenreId { get; set; }
            public int ArtistId { get; set; }
            public string Title { get; set; }
            public decimal Price { get; set; }
            public string AlbumArtUrl { get; set; }
            public virtual Genre Genre { get; set; }
            public virtual Artist Artist { get; set; }
            public virtual List<OrderDetail> OrderDetails { get; set; }
        }
    }
    [/code]

    It's my class, not Entity Framework's, so I'm free to do what I want with it.
    I added a bunch of MVC 3 annotations for scaffolding and validation support, as shown below:

    [code]
    using System.ComponentModel;
    using System.ComponentModel.DataAnnotations;
    using System.Web.Mvc;
    using System.Collections.Generic;

    namespace MvcMusicStore.Models
    {
        [Bind(Exclude = "AlbumId")]
        public class Album
        {
            [ScaffoldColumn(false)]
            public int AlbumId { get; set; }

            [DisplayName("Genre")]
            public int GenreId { get; set; }

            [DisplayName("Artist")]
            public int ArtistId { get; set; }

            [Required(ErrorMessage = "An Album Title is required")]
            [StringLength(160)]
            public string Title { get; set; }

            [Required(ErrorMessage = "Price is required")]
            [Range(0.01, 100.00, ErrorMessage = "Price must be between 0.01 and 100.00")]
            public decimal Price { get; set; }

            [DisplayName("Album Art URL")]
            [StringLength(1024)]
            public string AlbumArtUrl { get; set; }

            public virtual Genre Genre { get; set; }
            public virtual Artist Artist { get; set; }
            public virtual List<OrderDetail> OrderDetails { get; set; }
        }
    }
    [/code]

    The end result was that I had working EF Code First model code for the finished application. You can follow along through the tutorial to see how I built up to the finished model classes, starting with simple two- or three-property classes and building up to the full working schema. Thanks to Diego Vega (on the Entity Framework team) for pointing me to the DbContext template.
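
    Once the generated classes are copied into the application, consuming them is ordinary Code First. A minimal sketch (the query itself is just an illustration, not from the article):

    [code]
    using System;
    using System.Linq;
    using MvcMusicStore.Models;

    class Demo
    {
        static void Main()
        {
            using (var db = new MusicStoreEntities())
            {
                // The virtual navigation properties enable lazy loading.
                var cheapAlbums = db.Albums
                                    .Where(a => a.Price < 10.00m)
                                    .OrderBy(a => a.Title)
                                    .ToList();

                foreach (var album in cheapAlbums)
                    Console.WriteLine("{0} ({1:C})", album.Title, album.Price);
            }
        }
    }
    [/code]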

    Read the article

  • Is there an open source code analyzer for Java that I can adapt my software metrics algorithm to?

    - by daneshkohan
    I am doing my master's dissertation and I have developed a software metric. I need to implement my metric on top of an open source tool. I have found PMD and Checkstyle on sourceforge.net, but there is no adequate explanation of their code, and I couldn't find their source code to customize them. I would appreciate it if you could suggest an open source tool for Java whose code I can customize.

    Read the article

  • Is commented-out code really always bad?

    - by nikie
    Practically every text on code quality I've read agrees that commented-out code is a bad thing. The usual example is that someone changed a line of code and left the old line there as a comment, apparently to confuse people who read the code later on. Of course, that's a bad thing. But I often find myself leaving commented-out code in another situation: I write a computational-geometry or image processing algorithm. To understand this kind of code, and to find potential bugs in it, it's often very helpful to display intermediate results (e.g. draw a set of points to the screen or save a bitmap file). Looking at these values in the debugger usually means looking at a wall of numbers (coordinates, raw pixel values) - not very helpful. Writing a debugger visualizer every time would be overkill. I don't want to leave the visualization code in the final product (it hurts performance, and usually just confuses the end user), but I don't want to lose it, either. In C++, I can use #ifdef to conditionally compile that code, but I don't see much difference between this:

    [code]
    /*
    // Debug Visualization: draw set of found interest points
    for (int i = 0; i < count; i++)
        DrawBox(pts[i].X, pts[i].Y, 5, 5);
    */
    [/code]

    and this:

    [code]
    #ifdef DEBUG_VISUALIZATION_DRAW_INTEREST_POINTS
    for (int i = 0; i < count; i++)
        DrawBox(pts[i].X, pts[i].Y, 5, 5);
    #endif
    [/code]

    So, most of the time, I just leave the visualization code commented out, with a comment saying what is being visualized. When I read the code a year later, I'm usually happy I can just uncomment the visualization code and literally "see what's going on". Should I feel bad about that? Why? Is there a superior solution?

    Update: S. Lott asks in a comment: Are you somehow "over-generalizing" all commented code to include debugging as well as senseless, obsolete code? Why are you making that overly-generalized conclusion? I recently read Robert Martin's "Clean Code", which says: "Few practices are as odious as commenting-out code. Don't do this!" I've looked at the paragraph in the book again (p. 68); there's no qualification, no distinction made between different reasons for commenting out code. So I wondered if this rule is over-generalizing (or if I misunderstood the book), or if what I do is bad practice, for some reason I didn't know.
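
    For comparison, C# offers a third option beyond comments and #ifdef: the Conditional attribute. Calls to a method marked with it are removed by the compiler unless the named symbol is defined, so the visualization code keeps compiling without shipping in release builds. A sketch of the idea, with invented names:

    [code]
    using System.Diagnostics;

    struct Point { public int X, Y; }

    static class DebugVis
    {
        // Every call site to this method is stripped by the compiler
        // unless the DEBUG_VISUALIZATION symbol is defined.
        [Conditional("DEBUG_VISUALIZATION")]
        public static void DrawInterestPoints(Point[] pts, int count)
        {
            for (int i = 0; i < count; i++)
                DrawBox(pts[i].X, pts[i].Y, 5, 5);
        }

        static void DrawBox(int x, int y, int w, int h) { /* drawing stub */ }
    }
    [/code]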

    Read the article

  • If your unit test code "smells", does it really matter?

    - by Buttons840
    Usually I just throw my unit tests together using copy and paste and all kinds of other bad practices. The unit tests usually end up looking quite ugly - they're full of "code smell" - but does this really matter? I always tell myself that as long as the "real" code is "good", that's all that matters. Plus, unit testing usually requires various "smelly hacks" like stubbing functions. How concerned should I be over poorly designed ("smelly") unit tests?

    Read the article
