Search Results

Search found 30111 results on 1205 pages for 'best practices analyzer'.


  • Is Assert.Fail() considered bad practice?

    - by Mendelt
    I use Assert.Fail a lot when doing TDD. I'm usually working on one test at a time, but when I get ideas for things I want to implement later I quickly write an empty test where the name of the test method indicates what I want to implement, as a sort of to-do list. To make sure I don't forget, I put an Assert.Fail() in the body. When trying out xUnit.Net I found they hadn't implemented Assert.Fail. Of course you can always Assert.IsTrue(false), but this doesn't communicate my intention as well. I got the impression Assert.Fail was left out on purpose. Is this considered bad practice? If so, why?

    @Martin Meredith: That's not exactly what I do. I do write a test first and then implement code to make it work. Usually I think of several tests at once, or I think about a test to write while I'm working on something else. That's when I write an empty failing test as a reminder. By the time I get to writing the test, I neatly work test-first.

    @Jimmeh: That looks like a good idea. Ignored tests don't fail, but they still show up in a separate list. I'll have to try that out.

    @Matt Howells: Great idea. NotImplementedException communicates intention better than Assert.Fail() in this case.

    @Mitch Wheat: That's what I was looking for. It seems it was left out to prevent it being abused in ways other than the way I abuse it.
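
    A minimal C# sketch of the three placeholder styles discussed above, assuming xUnit.net; the test names are made up and only illustrate the to-do idea:

        using System;
        using Xunit;

        // Placeholder "to-do" tests in xUnit.net, which deliberately has no Assert.Fail.
        public class TodoTests
        {
            [Fact]
            public void ParsesNestedGroups_Todo()
            {
                // Closest equivalent of Assert.Fail: always fails, with a message.
                Assert.True(false, "Not implemented yet");
            }

            [Fact(Skip = "Not implemented yet")]
            public void HandlesEmptyInput_Todo()
            {
                // Skipped test: reported in its own list instead of failing the run.
            }

            [Fact]
            public void SupportsUnicode_Todo()
            {
                // Throwing keeps the intent explicit and fails until implemented.
                throw new NotImplementedException();
            }
        }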

    Read the article

  • What are the 'big' advantages of having POCOs with an ORM?

    - by bonefisher
    One advantage that comes to my mind: if you use POCO classes for ORM mapping, you can easily switch from one ORM to another, provided both support POCOs. Having an ORM with no POCO support, e.g. one where mappings are done with attributes like the DataObjects.Net ORM, is not an issue for me, because even with POCO-supporting ORMs and their generated proxy entities you have to be aware that the entities are actually DAO objects bound to some context/session, e.g. serializing them is a problem, etc.
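
    For reference, a rough sketch of what "POCO" means here (the class is hypothetical): a plain class with no ORM base type, no mapping attributes and no reference to any data-access framework, which is exactly what makes it portable between ORMs:

        using System;

        // A POCO entity: nothing in it ties it to a particular ORM.
        public class Customer
        {
            public Guid Id { get; set; }
            public string Name { get; set; }
            public DateTime CreatedOn { get; set; }
        }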

    Read the article

  • C# 4.0 Named Parameters - should they always be used when calling non-Framework methods?

    - by David Neale
    I realise this is a hugely subjective topic, but here is my current take: when calling methods which do not form part of the .NET BCL, named parameters should always be used, as the method signatures may well change, especially during the development cycle of my own applications. Although they might appear more verbose, they are also far clearer. Is the above a reasonable approach to calling methods, or have I overlooked something fundamental?
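
    For reference, a minimal sketch of the C# 4.0 syntax being discussed; the method and parameter names are invented for illustration:

        public static class InvoiceExample
        {
            // A hypothetical method on one of my own (non-BCL) types.
            public static void CreateInvoice(string customer, decimal amount, bool sendEmail) { }

            public static void Main()
            {
                // Positional call: relies on remembering the parameter order.
                CreateInvoice("Acme Ltd", 125.50m, true);

                // Named call: more verbose, but self-documenting and easier to keep
                // correct while the signature is still evolving.
                CreateInvoice(customer: "Acme Ltd", amount: 125.50m, sendEmail: true);
            }
        }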

    Read the article

  • What's wrong with the File.Exists() method?

    - by Arseny
    Reading some answers with code samples, I notice that those where this method is mentioned tend to attract criticism. I'm using this method in my code, so I'd like to know if someone can give me a detailed explanation of why this method is not recommended and what the alternative approaches are.
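
    One commonly cited reason is the check-then-act race: the file can be deleted or become inaccessible between the File.Exists call and the actual open. A hedged sketch of that pattern and of the usual alternative (the path is hypothetical):

        using System;
        using System.IO;

        class FileExistsExample
        {
            static void Main()
            {
                const string path = @"C:\temp\data.txt"; // hypothetical path

                // Check-then-act: the file may disappear between these two lines,
                // so the check gives a false sense of safety.
                if (File.Exists(path))
                {
                    Console.WriteLine(File.ReadAllText(path));
                }

                // Usual alternative: attempt the operation and handle the failure.
                try
                {
                    Console.WriteLine(File.ReadAllText(path));
                }
                catch (IOException ex) // FileNotFoundException derives from IOException
                {
                    Console.WriteLine("Could not read file: " + ex.Message);
                }
            }
        }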

    Read the article

  • Is code clearness killing application performance?

    - by Jorge Córdoba
    As today's code gets more complex by the minute, it needs to be designed to be maintainable - meaning easy to read and easy to understand. That being said, I can't help but remember the programs that ran a couple of years ago, such as Winamp or some games, where you needed a high-performance program because your 486 at 100 MHz wouldn't play MP3s with that beautiful MP3 player which consumed all of your CPU cycles. Now I run Media Player (or whatever), start playing an MP3, and it eats up 25-30% of one of my four cores. Come on!! If a 486 could do it, how can playback take up so much processor to do the same thing? I'm a developer myself, and I always used to advise: keep your code simple, don't prematurely optimize for performance. It seems that we've gone from "try to use the least amount of CPU possible" to "if it doesn't take too much CPU, it's all right". So, do you think we are killing performance by ignoring optimizations?

    Read the article

  • When are global variables acceptable?

    - by dsimcha
    Everyone here seems to hate global variables, but I see at least one very reasonable use for them: they are great for holding program parameters that are determined at program initialization and not modified afterwards. Do you agree that this is an exception to the "globals are evil" rule? Is there any other exception that you can think of, besides quick-and-dirty throwaway code where basically anything goes? If not, why are globals so fundamentally evil that you don't believe there are any exceptions?
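
    A minimal C# sketch of the pattern being described - settings resolved once at startup and treated as read-only afterwards; the setting names are hypothetical:

        using System;

        // Global state, but effectively immutable after initialization.
        public static class ProgramSettings
        {
            public static string LogDirectory { get; private set; }
            public static int WorkerThreads { get; private set; }

            // Called exactly once, at program startup.
            public static void Initialize(string[] args)
            {
                LogDirectory = args.Length > 0 ? args[0] : @".\logs";
                WorkerThreads = args.Length > 1 ? int.Parse(args[1]) : Environment.ProcessorCount;
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                ProgramSettings.Initialize(args);
                Console.WriteLine("Logging to " + ProgramSettings.LogDirectory);
            }
        }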

    Read the article

  • CSS selectors: should I minimise my use of the class attribute in the HTML, or optimise for speed?

    - by Laurent Bourgault-Roy
    As I was working on a small website, I decided to use the PageSpeed extension to check whether there were any improvements I could make so the site loads faster. However, I was quite surprised when it told me that my use of CSS selectors was "inefficient". I was always told that you should keep the usage of the class attribute in the HTML to a minimum, but if I understand correctly what PageSpeed tells me, it's much more efficient for the browser to match directly against a class name. That makes sense to me, but it also means that I need to put more CSS classes in my HTML, and it makes my .css file a little harder to read. I usually tend to write my CSS like this:

        #mainContent p.productDescription em.priceTag { ... }

    which makes it easy to read: I know this will affect the main content, that it affects something in a paragraph tag (so I won't start to put all sorts of layout code in it) that describes a product, and that it's something that needs emphasis. However, it seems I should rewrite it as

        .priceTag { ... }

    which removes all context information from the style. And if I want to use differently formatted price tags (for example, one in a list in the sidebar and one in a paragraph), I need to use something like

        .paragraphPriceTag { ... }
        .listPriceTag { ... }

    which really annoys me, since I seem to be duplicating the semantics of the HTML in my classes. It also means I can't put common styles in an unqualified .priceTag { ... }, so I need to replicate the style in both CSS rules, making it harder to make changes. (For that I could use multiple class selectors, but IE6 doesn't support them.) I believe making code harder to read for the sake of speed has never really been considered good practice, except where it is critical, of course. This is why people use PHP/Ruby/C# etc. instead of C/assembly to code their sites: it's easier to write and debug. So I was wondering: should I stick with few CSS classes and complex selectors, or should I go the optimisation route and drop my fancy CSS selectors for the sake of speed? Does PageSpeed make over-the-top recommendations? On most modern computers, will it even make a difference?

    Read the article

  • Observer pattern and violation of Single Responsibility Principle

    - by Devil Jin
    I have an applet which repaints itself once the text has changed.

    Design 1:

        //MyApplet.java
        public class MyApplet extends Applet implements Listener {
            private DynamicText text = null;

            public void init() {
                text = new DynamicText("Welcome");
            }

            public void paint(Graphics g) {
                g.drawString(text.getText(), 50, 30);
            }

            // implement Listener update() method
            public void update() {
                repaint();
            }
        }

        //DynamicText.java
        public class DynamicText implements Publisher {
            // implements Publisher interface methods
            // notify listeners whenever text changes
        }

    Isn't this a violation of the Single Responsibility Principle, where my applet not only acts as an Applet but also has to do the Listener's job? In the same way, the DynamicText class not only generates the dynamic text but also updates the registered listeners.

    Design 2:

        //MyApplet.java
        public class MyApplet extends Applet {
            private AppletListener appLstnr = null;

            public void init() {
                appLstnr = new AppletListener(this);
                // applet stuff
            }
        }

        // AppletListener.java
        public class AppletListener implements Listener {
            private Applet applet = null;

            public AppletListener(Applet applet) {
                this.applet = applet;
            }

            public void update() {
                this.applet.repaint();
            }
        }

        // DynamicText
        public class DynamicText {
            private TextPublisher textPblshr = null;

            public DynamicText(TextPublisher txtPblshr) {
                this.textPblshr = txtPblshr;
            }

            // call textPblshr.notifyListeners whenever text changes
        }

        public class TextPublisher implements Publisher {
            // implements Publisher interface methods
        }

    Q1. Is design 1 an SRP violation?
    Q2. Is composition a better choice here to remove the SRP violation, as in design 2?

    Read the article

  • Must-have tools for better quality code

    - by leon
    I just started my real development career and I want to know what set of tools/strategies the community uses to write better-quality code. To start, I use astyle to format my code, doxygen to document my code, and gcc -Wall -Wextra -pedantic and clang -Wall -Wextra -pedantic to check all warnings. What tools/strategies do you use to write better code? This question is open to all languages and all platforms.

    Read the article

  • Are there any downsides in using C++ for network daemons?

    - by badcat
    Hey guys! I've been writing a number of network daemons in different languages over the past years, and now I'm about to start a new project which requires a new custom implementation of a proprietary network protocol. The protocol is pretty simple - some basic JSON-formatted messages which are transmitted with some basic frame wrapping so that clients know a message has arrived completely and is ready to be parsed. The daemon will need to handle a number of connections (about 200 at the same time), do some management of them and pass messages along, like in a chat room. In the past I've mostly used C++ to write my daemons, often with the Qt4 framework (the network parts, not the GUI parts!), because that's what I also used for the rest of the projects and it was simple to do and very portable. This usually worked just fine, and I didn't have much trouble. Having been a Linux administrator for a good while now, I've noticed that most of the network daemons in the wild are written in plain C (of course some are written in other languages too, but I get the feeling that 80% of the daemons are written in plain C). Now I wonder why that is. Is this due to a purely historic UNIX background (like KISS), or for plain portability, or to reduce bloat? What are the reasons not to use C++ or any "higher-level" languages for things like daemons? Thanks in advance!

    Update 1: For me, using C++ is usually more convenient because I have objects with getter and setter methods and such. Plain C's "context" objects can be a real pain at some point - especially when you are used to object-oriented programming. Yes, I'm aware that C++ is a superset of C and that C code is basically C++, but that's not the point. ;)

    Read the article

  • Partial class or "chained inheritance"

    - by Charlie boy
    Hi. From my understanding, partial classes are a bit frowned upon by professional developers, but I've come across a bit of an issue: I have made an implementation of the RichTextBox control that uses user32.dll calls for faster editing of large texts. That results in quite a bit of code. Then I added spell-checking capabilities to the control; this was done in another class, also inheriting from the RichTextBox control. That also makes up a fair bit of code. These two pieces of functionality are quite separate, but I would like them to be merged so that I can drop one control on my form that has both fast editing and spell checking built in. I feel that simply adding the code from one class to the other would result in too large a code file, especially since there are two very distinct areas of functionality, so I seem to need another approach. Now to my question: to merge these two classes, should I make the spell-checking RichTextBox inherit from the fast-edit one, which in turn inherits from RichTextBox? Or should I make the two classes partials of a single class, thus making them more "equal", so to speak? This is more a question of OO principles and an exercise on my part than me trying to reinvent the wheel; I know there are plenty of good text-editing controls out there. But this is just a hobby for me, and I just want to know how this kind of solution would be handled by a professional. Thanks!
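
    For illustration only (the class names are hypothetical), the two shapes being compared look roughly like this:

        using System.Windows.Forms;

        // Option 1: chained inheritance - each layer adds one concern.
        public class FastEditRichTextBox : RichTextBox
        {
            // user32.dll-based fast-editing members would live here.
        }

        public class SpellCheckingRichTextBox : FastEditRichTextBox
        {
            // spell-checking members layered on top of the fast-edit behaviour.
        }

        // Option 2: one control split across two files via partial classes.
        // File: MyRichTextBox.FastEdit.cs
        public partial class MyRichTextBox : RichTextBox
        {
            // fast-editing members
        }

        // File: MyRichTextBox.SpellCheck.cs
        public partial class MyRichTextBox
        {
            // spell-checking members
        }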

    Read the article

  • Giving the script tag an ID

    - by The Code Pimp
    Hi guys, I came across a scenario where giving a <script> element an "ID" would solve a problem easily. However, after reading about the script tag at w3schools and QuirksMode, it seems doing so could have some unforeseen consequences. Has anyone come across any such issues with modern browsers such as Chrome, Safari, FF3 and up, and IE7 and up? Thanks

    Read the article

  • Are regexes really maintainable?

    - by Rich Bradshaw
    Any code I've seen that uses regexes tends to use them as a black box: put in a string, apply the magic regex, get out a string. This doesn't seem a particularly good idea in production code, as even a small change can often result in a completely different regex. Apart from cases where the standard is permanent and unchanging, are regexes the way to do things, or is it better to try different methods?
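
    One mitigation worth noting: most regex engines let a pattern be laid out and commented like ordinary code. A minimal C# sketch, with a made-up phone-number pattern:

        using System;
        using System.Text.RegularExpressions;

        class RegexReadabilityExample
        {
            static void Main()
            {
                // IgnorePatternWhitespace allows whitespace and # comments inside the
                // pattern, so the regex is no longer a single opaque line.
                var usPhone = new Regex(@"
                    ^ \(? (?<area>\d{3}) \)?   # optional parentheses around the area code
                    [-\s.]?                    # optional separator
                    (?<prefix>\d{3})           # exchange prefix
                    [-\s.]?                    # optional separator
                    (?<line>\d{4}) $           # line number
                ", RegexOptions.IgnorePatternWhitespace);

                Console.WriteLine(usPhone.IsMatch("(555) 123-4567")); // True
            }
        }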

    Read the article

  • How does the verbosity of identifiers affect the performance of a programmer?

    - by DR
    I have always wondered: are there any hard facts which indicate that either shorter or longer identifiers are better? Example: clrscr() as opposed to ClearScreen(). Short identifiers should be faster to read because there are fewer characters, but longer identifiers often resemble natural language more closely and therefore should also be faster to read. Are there other aspects which suggest either a short or a verbose style? EDIT: Just to clarify, I didn't ask "What would you do in this case?". I asked for reasons to prefer one over the other, i.e. this is not a poll question. Please, if you can, add some reasoning as to why one would prefer one style over the other.

    Read the article

  • Javascript clarity of purpose

    - by JesDaw
    JavaScript usage has become remarkably more sophisticated and powerful in the past five years. One aspect of this sort of functional programming I struggle with, especially given JavaScript's peculiarities, is how to make clear, either through comments or through the code itself, just what is happening. Often this sort of code takes a while to decipher, even if you understand the prototypal, first-class-function JavaScript way. Any thoughts or techniques for making perfectly clear what your code does and how, in JavaScript? I've asked this question elsewhere but haven't gotten much response.

    Read the article

  • Small-scale web site - global JavaScript file style/format/pattern - improving maintainability

    - by yaya3
    I frequently create (and inherit) small to medium websites where I have the following sort of code in a single file (normally named global.js or application.js or projectname.js). If functions get big, I normally put them in a separate file and call them at the bottom of the file, in the $(document).ready() section. If I have a few functions that are unique to certain pages, I normally have another switch statement for the body class inside the $(document).ready() section. How could I restructure this code to make it more maintainable? Note: I am less interested in the functions' innards and more in the structure, and how different types of functions should be dealt with. I've also posted the code here - http://pastie.org/999932 - in case it makes things any easier.

        var ProjectNameEnvironment = {};

        function someFunctionUniqueToTheHomepageNotWorthMakingConfigurable () {
            $('.foo').hide();
            $('.bar').click(function(){
                $('.foo').show();
            });
        }

        function functionThatIsWorthMakingConfigurable(config) {
            var foo = config.foo || 700;
            var bar = 200;
            return foo * bar;
        }

        function globallyRequiredJqueryPluginTrigger (tooltip_string) {
            var tooltipTrigger = $(tooltip_string);
            tooltipTrigger.tooltip({
                showURL: false
                ...
            });
        }

        function minorUtilityOneLiner (selector) {
            $(selector).find('li:even').not('li ul li').addClass('even');
        }

        var Lightbox = {};

        Lightbox.setup = function(){
            $('li#foo a').attr('href','#alpha');
            $('li#bar a').attr('href','#beta');
        }

        Lightbox.init = function (config){
            if (typeof $.fn.fancybox == 'function') {
                Lightbox.setup();
                var fade_in_speed = config.fade_in_speed || 1000;
                var frame_height = config.frame_height || 1700;
                $(config.selector).fancybox({
                    frameHeight : frame_height,
                    callbackOnShow: function() {
                        var content_to_load = config.content_to_load;
                        ...
                    },
                    callbackOnClose : function(){
                        $('body').height($('body').height());
                    }
                });
            } else {
                if (ProjectNameEnvironment.debug) {
                    alert('the fancybox plugin has not been loaded');
                }
            }
        }

        // ---------- order of execution -----------

        $(document).ready(function () {
            urls = urlConfig();

            (function globalFunctions() {
                $('.tooltip-trigger').each(function(){
                    globallyRequiredJqueryPluginTrigger(this);
                });
                minorUtilityOneLiner('ul.foo')
                Lightbox.init({
                    selector : 'a#a-lightbox-trigger-js',
                    ...
                });
                Lightbox.init({
                    selector : 'a#another-lightbox-trigger-js',
                    ...
                });
            })();

            if ( $('body').attr('id') == 'home-page' ) {
                (function homeFunctions() {
                    someFunctionUniqueToTheHomepageNotWorthMakingConfigurable ();
                })();
            }
        });

    Read the article

  • How should my team decide between 3-tier and 2-tier architectures?

    - by j0rd4n
    My team is discussing the future direction we take our projects. Half the team believes in a pure 3-tier architecture, while the other half favors a 2-tier architecture.

    Project assumptions:

    - Enterprise business applications
    - Business logic needed between user and database
    - Data validation necessary
    - Service-oriented (prefer RESTful services)
    - Multi-year maintenance plan
    - Support for hundreds of users

    The 3-tier camp favors:

    - Persistence layer <== Domain layer <== UI layer
    - A service boundary between at least the persistence layer and the domain layer; the domain layer might have a service boundary within it
    - Translations between each layer (clean DTO separation)
    - Hand-rolled persistence unless we can find creative yet elegant automation

    The 2-tier camp favors:

    - Entity Framework + WCF Data Services layer <== UI layer
    - Business logic kept in WCF Data Services interceptors
    - Minimal translation between layers - favor faster coding

    So that's the high-level argument. What considerations should we take into account? What experiences have you had with either approach?
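
    To make the "clean DTO separation" point concrete, here is a minimal hedged sketch (all type names are hypothetical) of a domain entity that stays behind the service boundary and the DTO that crosses it instead:

        using System;

        // Domain layer: rich entity with behaviour; never leaves the service boundary.
        public class Order
        {
            public Guid Id { get; private set; }
            public decimal Total { get; private set; }

            public Order()
            {
                Id = Guid.NewGuid();
            }

            public void AddItem(decimal price, int quantity)
            {
                if (quantity <= 0) throw new ArgumentOutOfRangeException("quantity");
                Total += price * quantity;
            }
        }

        // Contract: flat, serializable shape handed to the UI tier.
        public class OrderDto
        {
            public Guid Id { get; set; }
            public decimal Total { get; set; }
        }

        public static class OrderTranslator
        {
            // The explicit translation step the 3-tier camp is arguing for.
            public static OrderDto ToDto(Order order)
            {
                return new OrderDto { Id = order.Id, Total = order.Total };
            }
        }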

    Read the article

  • How to mix TDD and RAII

    - by f4
    I'm trying to write extensive tests for my new project, but I have a problem. Basically, I want to test MyClass. MyClass makes use of several other classes whose real work I don't need/want for the purpose of the test, so I created mocks (I use gtest and gmock for testing). But MyClass instantiates everything it needs in its constructor and releases it in the destructor. That's RAII, I think. So I thought I should create some kind of factory which creates everything and passes it to MyClass's constructor. That factory could have its own fake for testing purposes. But that's no longer RAII, right? Then what's the good solution here?

    Read the article

  • Is it Bad Practice to use C++ only for the STL containers?

    - by gmatt
    First, a little background... In what follows, I use C, C++ and Java for coding (general) algorithms - not GUIs and fancy programs with interfaces, but simple command-line algorithms and libraries. I started out learning about programming in Java. I got pretty good with Java and I learned to use the Java containers a lot, as they tend to reduce the complexity of bookkeeping while guaranteeing great performance. I intermittently used C++, but I was definitely not as good with it as with Java and it felt cumbersome. I did not know C++ well enough to work in it without having to look up every single function, so I quickly reverted to sticking to Java as much as possible. I then made a sudden transition into cracking and hacking in assembly language, because I felt I was concentrating too much attention on a much too high-level language and I needed more experience with how a CPU interacts with memory and what's really going on with the 1's and 0's. I have to admit this was one of the most educational and fun experiences I've had with computers to date. For obvious reasons, I could not use assembly language to code on a daily basis; it was mostly reserved for fun diversions. After learning more about the computer through this experience, I realized that C++ is much closer to the "level of 1's and 0's" than Java is, but I still felt it to be incredibly obtuse - like a Swiss army knife with far too many gizmos to do any one task with elegance. I decided to give plain vanilla C a try, and I quickly fell in love. It was a happy medium between simplicity and enough "micromanagement" not to abstract away what is really going on. However, I did miss one thing about Java: the containers. In particular, a simple container (like the STL vector) that expands dynamically in size is incredibly useful, but quite a pain to have to implement in C every time. Hence my code currently looks almost entirely like C, with containers from C++ thrown in - the only feature I use from C++. I'd like to know if it's considered okay in practice to use just one feature of C++ and ignore the rest in favor of C-style code?

    Read the article

  • Should I make sure arguments aren't null before using them in a function?

    - by Nathan W
    The title may not really explain what I'm trying to get at; I couldn't think of a better way to describe what I mean. I was wondering if it is good practice to check the arguments that a function accepts for nulls or empty values before using them. I have this function, which just wraps some hash creation, like so:

        Public Shared Function GenerateHash(ByVal FilePath As IO.FileInfo) As String
            If (FilePath Is Nothing) Then
                Throw New ArgumentNullException("FilePath")
            End If

            Dim _sha As New Security.Cryptography.MD5CryptoServiceProvider
            Dim _Hash = Convert.ToBase64String(_sha.ComputeHash(New IO.FileStream(FilePath.FullName, IO.FileMode.Open, IO.FileAccess.Read)))
            Return _Hash
        End Function

    As you can see, it just takes an IO.FileInfo as an argument, and at the start of the function I check to make sure that it is not Nothing. I'm wondering, is this good practice, or should I just let it get to the actual hasher and then throw the exception because it is null? Thanks.
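
    As a hedged aside, the trade-off in C# terms (the names are hypothetical) is essentially which exception the caller sees and how clearly it points at the bad argument:

        using System;
        using System.IO;
        using System.Security.Cryptography;

        public static class Hasher
        {
            public static string GenerateHash(FileInfo filePath)
            {
                // Guard clause: fails immediately, naming the offending parameter.
                if (filePath == null)
                    throw new ArgumentNullException("filePath");

                using (var md5 = MD5.Create())
                using (var stream = filePath.OpenRead())
                {
                    return Convert.ToBase64String(md5.ComputeHash(stream));
                }
            }

            public static string GenerateHashWithoutGuard(FileInfo filePath)
            {
                // Without the guard, a null argument surfaces later as a
                // NullReferenceException on filePath.OpenRead(), which says
                // nothing about which argument was wrong.
                using (var md5 = MD5.Create())
                using (var stream = filePath.OpenRead())
                {
                    return Convert.ToBase64String(md5.ComputeHash(stream));
                }
            }
        }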

    Read the article

  • Is this 2D array initialization a bad idea?

    - by Brendan Long
    I have something I need a 2D array for, but for better cache performance, I'd rather have it actually be a normal array. Here's the idea I had, but I don't know if it's a terrible idea:

        const int XWIDTH = 10, YWIDTH = 10;

        int main(){
            int * tempInts = new int[XWIDTH * YWIDTH];
            int ** ints = new int*[XWIDTH];
            for(int i=0; i<XWIDTH; i++){
                ints[i] = &tempInts[i*YWIDTH];
            }

            // do things with ints

            delete[] ints[0];
            delete[] ints;
            return 0;
        }

    So the idea is that instead of newing a bunch of arrays (and having them placed in different places in memory), I just point into one array I allocated all at once. The reason for the delete[] (int*) ints; is that I'm actually doing this in a class, and it would save [trivial amounts of] memory not to keep the original pointer around. Just wondering if there are any reasons this is a horrible idea, or if there's an easier/better way. The goal is to be able to access the array as ints[x][y] rather than ints[x*YWIDTH+y].

    Read the article
