Search Results

Search found 8692 results on 348 pages for 'patterns and practices'.


  • Best practice for Google App Engine and Git

    - by systempuntoout
    I'm developing a small pet project on Google App Engine and I would like to keep the code under source control on GitHub; this will allow a friend of mine to check out and modify the sources. I just have a directory with all the sources (call it PetProject), and the Google App Engine development server points to that directory. Is it correct to create a repo directly from the PetProject directory, or is it preferable to create a second directory (a mirror of the development PetProject directory)? In the latter case, any time my friend releases something new, I need to pull from Git and then copy the modified files into the development PetProject directory. If I decide to keep the repo inside the development directory, is skipping .git in the yaml file enough? What's the best practice here?

    Read the article

  • Why should I use a container div in HTML?

    - by lara.robertson
    I am currently learning HTML/CSS and have noticed that a common technique is to place a generic container div at the root of the body tag:

        <html>
          <head> ... </head>
          <body>
            <div id="container">
              ...
            </div>
          </body>
        </html>

    Is there a valid reason for doing this? Why can't the CSS just reference the body tag?

    Read the article

  • Good tools for keeping the content in test/staging/live environments synchronized

    - by David Stratton
    I'm looking for recommendations on automated folder synchronization tools to keep the content in our three environments synchronized automatically. Specifically, we have several applications where a user can upload content (via a File Upload page or a similar mechanism), such as images, PDF files, Word documents, etc. In the past, we had the user doing this on our live server, and as a result our test and staging servers had to be synchronized manually. Going forward, we will have them upload content to the staging server, and we would like some software to automatically copy the files to the test and live servers, either on a scheduled basis or as the files get uploaded.

    I was planning on writing my own component and either setting it up as a scheduled task or using a FileSystemWatcher, but it occurred to me that this has probably already been done, and I might be better off with a synchronization tool that already exists.

    On our web site there are a limited number of folders that we want to keep synchronized. In these folders it is all or nothing: we want to make sure the folders are EXACT duplicates. This should make it fairly straightforward, and I would think that any software that can synchronize folders would be OK, except that we also would like the software to log changes. (This rules out simple batch files.)

    So I'm curious: if you have a similar environment, how did you solve the challenge of keeping everything synchronized? Are you aware of a tool that is reliable and will meet our needs? If not, do you have a recommendation for something that comes close, or better yet, an open source solution where we can get the code and modify it as needed (preferably .NET)?

    Added: Also, I DID google this first, but there are so many options that I am interested mostly in knowing what actually works well versus what they SAY works, which is why I'm asking here.

    Read the article

  • Does the optimizer filter subqueries with outer where clauses

    - by Mongus Pong
    Take the following query:

        select *
        from
        (
          select a, b from c
          UNION
          select a, b from d
        )
        where a = 'mung'

    Will the optimizer generally work out that I am filtering a on the value 'mung' and consequently push that filter down into each of the queries in the subquery? Or will it run each query within the subquery union and return the results to the outer query for filtering (as the query would perhaps suggest)? In that case the following query would perform better:

        select *
        from
        (
          select a, b from c where a = 'mung'
          UNION
          select a, b from d where a = 'mung'
        )

    Obviously query 1 is best for maintenance, but is it sacrificing much performance for this? Which is best?

    Read the article

  • Should Java try blocks be scoped as tightly as possible?

    - by isme
    I've been told that there is some overhead in using the Java try-catch mechanism. So, while it is necessary to put methods that throw checked exceptions within a try block to handle the possible exception, it is good practice performance-wise to limit the size of the try block to contain only those operations that could throw exceptions.

    I'm not so sure that this is a sensible conclusion. Consider the two implementations below of a function that processes a specified text file. Even if it is true that the first one incurs some unnecessary overhead, I find it much easier to follow. It is less clear where exactly the exceptions come from just from looking at the statements, but the comments clearly show which statements are responsible. The second one is much longer and more complicated than the first. In particular, the nice line-reading idiom of the first has to be mangled to fit the readLine call into a try block.

    What is the best practice for handling exceptions in a function where multiple exceptions could be thrown in its definition?

    This one contains all the processing code within the try block:

        void processFile(File f) {
            try {
                // construction of FileReader can throw FileNotFoundException
                BufferedReader in = new BufferedReader(new FileReader(f));
                // call of readLine can throw IOException
                String line;
                while ((line = in.readLine()) != null) {
                    process(line);
                }
            } catch (FileNotFoundException ex) {
                handle(ex);
            } catch (IOException ex) {
                handle(ex);
            }
        }

    This one contains only the methods that throw exceptions within try blocks:

        void processFile(File f) {
            FileReader reader;
            try {
                reader = new FileReader(f);
            } catch (FileNotFoundException ex) {
                handle(ex);
                return;
            }
            BufferedReader in = new BufferedReader(reader);
            String line;
            while (true) {
                try {
                    line = in.readLine();
                } catch (IOException ex) {
                    handle(ex);
                    break;
                }
                if (line == null) {
                    break;
                }
                process(line);
            }
        }
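
    An aside not from the original question: since Java 7, try-with-resources offers a third option that keeps the try block small without mangling the read loop. A minimal sketch, assuming the same hypothetical process and handle helpers used above:

        void processFile(File f) {
            // try-with-resources closes the reader automatically; assumes the
            // hypothetical process(...) and handle(...) helpers from the question.
            try (BufferedReader in = new BufferedReader(new FileReader(f))) {
                String line;
                while ((line = in.readLine()) != null) {
                    process(line);
                }
            } catch (IOException ex) {
                // FileNotFoundException is a subclass of IOException,
                // so a single handler covers both failure modes.
                handle(ex);
            }
        }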

    Read the article

  • Parent child class relationship design pattern

    - by Jeremy
    I have a class which has a list of child items. Is there a design pattern I can apply to these classes so that I can access the parent instance from the child, and that enforces rules such as not being able to add a child to multiple parents, etc.?
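
    A minimal sketch of one common convention, using hypothetical Parent and Child names: make the parent's add method the only way to attach a child, and have it reject children that already belong to another parent.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        class Parent {
            private final List<Child> children = new ArrayList<>();

            public void add(Child child) {
                // Enforce the "one parent at a time" rule in a single place.
                if (child.getParent() != null) {
                    throw new IllegalStateException("child already belongs to a parent");
                }
                children.add(child);
                child.setParent(this);
            }

            public List<Child> getChildren() {
                return Collections.unmodifiableList(children);
            }
        }

        class Child {
            private Parent parent;

            public Parent getParent() {
                return parent;
            }

            // Package-private so only Parent.add can re-parent a child.
            void setParent(Parent parent) {
                this.parent = parent;
            }
        }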

    Read the article

  • Service Layer Pattern - Could we avoid the service layer on a specific case?

    - by lidermin
    Hi, we are trying to implement an application using the Service Layer pattern, because our application needs to connect to multiple other applications too. Googling on the web, we found this link to a demonstrative graphic for the "right" way to apply the pattern: martinfowler.com - Service Layer Pattern. But now we have a question: what if our system needs to implement some business logic only for our application (like some maintenance data for the system itself) that we don't need to share with other systems? Based on that graphic, it seems it would be unnecessary to implement a service layer just for that; it would be more practical to skip the service layer and just go from the user interface to the business layer (for example). What would be the right way to implement the Service Layer pattern in this case? What would you suggest for a scenario like the one I described? Thanks in advance.
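
    A sketch only, with hypothetical names, of one commonly suggested compromise: keep the service layer everywhere for consistency, but let application-only operations be one-line delegations to the business layer, so the extra layer adds almost no code.

        // Hypothetical names; the service stays thin for app-only logic.
        interface MaintenanceRepository {
            void deleteEntriesOlderThan(int days);
        }

        class MaintenanceService {
            private final MaintenanceRepository repository;

            MaintenanceService(MaintenanceRepository repository) {
                this.repository = repository;
            }

            // Application-only use case: a simple pass-through, so the UI still
            // talks to a service, but no real logic lives in this layer.
            public void archiveStaleEntries(int olderThanDays) {
                repository.deleteEntriesOlderThan(olderThanDays);
            }
        }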

    Read the article

  • Choosing between instance methods and separate functions?

    - by StackedCrooked
    Adding functionality to a class can be done by adding a method or by defining a function that takes an object as its first parameter. Most programmers I know would choose the solution of adding an instance method. However, I sometimes prefer to create a separate function. For example, in the sample code below, Area and Diagonal are defined as separate functions instead of methods. I find it better this way because I think these functions provide enhancements rather than core functionality. Is this considered good or bad practice? If the answer is "it depends", then what are the rules for deciding between adding a method and defining a separate function?

        class Rect
        {
        public:
            Rect(int x, int y, int w, int h) :
                mX(x), mY(y), mWidth(w), mHeight(h)
            {
            }

            int x() const { return mX; }
            int y() const { return mY; }
            int width() const { return mWidth; }
            int height() const { return mHeight; }

        private:
            int mX, mY, mWidth, mHeight;
        };

        int Area(const Rect & inRect)
        {
            return inRect.width() * inRect.height();
        }

        float Diagonal(const Rect & inRect)
        {
            return std::sqrt(std::pow(static_cast<float>(inRect.width()), 2) +
                             std::pow(static_cast<float>(inRect.height()), 2));
        }

    Read the article

  • What are the virtues of using XML comments in .NET?

    - by Michal Czardybon
    I can't understand the virtues of using XML comments. I know they can be converted into nice documentation external to the code, but the same can be achieved with the much more concise Doxygen syntax. In my opinion the XML comments are wrong, because:
    - They obfuscate the comments and the code in general (they are more difficult for humans to read).
    - Less code can be viewed on a single screen, because <summary> and </summary> take additional lines.
    - They suggest that all method parameters have to be commented, whereas 90% of them are obvious and SHOULD be left uncommented.
    The only problem I have with this is that my point of view seems to be in the minority. Why?

    Read the article

  • Is there any reason to use C instead of C++ for embedded development?

    - by Piotr Czapla
    Question

    I have two compilers for my hardware: C++ and C89. I'm thinking about using C++ with classes but without polymorphism (to avoid vtables). The main reasons I'd like to use C++ are:
    - I prefer to use "inline" functions instead of macro definitions.
    - I'd like to use namespaces, as prefixes clutter the code.
    - I see C++ as a bit more type-safe, mainly because of templates and verbose casting.
    - I really like overloaded functions and constructors (used for automatic casting).
    Do you see any reason to stick with C89 when developing for very limited hardware (4 KB of RAM)?

    Conclusion

    Thank you for your answers, they were really helpful! I thought the subject through and I will stick with C, mainly because:
    - It is easier to predict the actual code in C, and this is really important if you have only 4 KB of RAM.
    - My team consists mainly of C developers, so the advanced features of C++ wouldn't be used frequently.
    - I've found a way to inline functions in my C compiler (C89).
    It is hard to accept one answer, as you provided so many good answers. Unfortunately I can't create a wiki and accept it, so I will choose the one answer that made me think the most.

    Read the article

  • Interface and base class mix, the right way to implement this

    - by Lerxst
    I have some user controls for which I want to specify properties and methods. They inherit from a base class, because they all have properties such as "Foo" and "Bar", and the reason I used a base class is so that I don't have to manually implement all of these properties in each derived class. However, I want a method that exists only in the derived classes, not in the base class, as the base class doesn't know how to "do" the method, so I am thinking of using an interface for this. If I put it in the base class, I have to define some body for it to return a value (which would be invalid), and always make sure that the overriding method does not call the base method. Is the right way to go about this to use both the base class and an interface to expose the method? It seems very round-about, but every way I think about doing it seems wrong... Let me know if the question is not clear; it's probably a dumb question but I want to do this right.
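
    A Java-flavored sketch of the alternative (the question sounds like .NET user controls, and all names here are hypothetical): marking the base class abstract lets it declare the method without supplying an invalid body, so an interface is only needed if unrelated types must share the same contract.

        abstract class BaseControl {
            private String foo;
            private String bar;

            public String getFoo() { return foo; }
            public void setFoo(String value) { foo = value; }
            public String getBar() { return bar; }
            public void setBar(String value) { bar = value; }

            // Declared but not implemented: derived controls must provide it,
            // and there is no base version that could be called by mistake.
            public abstract void refresh();
        }

        class ChartControl extends BaseControl {
            @Override
            public void refresh() {
                // chart-specific behavior goes here
            }
        }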

    Read the article

  • Should non-English member names be changed to English?

    - by M.A. Hanin
    Situation: automatically generated members, such as MenuStrip items, have their (automatically generated) names based on the text entered when the item was created. My most common situation is creating a menu strip and adding menu items by entering their text (using the graphical designer). Since my GUI is in Hebrew, all these members have a name which contains a Hebrew string, something like "(hebrew-text)ToolStripItem". When I create event handlers, they "inherit" the Hebrew text: "(hebrew-text)ToolStripMenuItem_Click". This actually works well; IntelliSense has no problem with Hebrew text, and neither does the compiler. The question is: should I change these names (or prevent them from being created in the first place)? What are the possible consequences of keeping those names?

    Read the article

  • What is the best way to organize directories within a large grails application?

    - by egervari
    What is the best way to organize directories within a large Grails application? In a typical Spring application, we'd have myproject/domain/, myproject/web/controllers and myproject/services. Since Grails puts these artifacts in their own directories... and then just uses the same base project package for everything, what is the best practice? Use the same sub-package name for domain objects, controllers, and services too? Ken

    Read the article

  • What ways are there for cleaning an R environment from objects?

    - by Tal Galili
    I know I can use ls() and rm() to see and remove objects that exist in my environment. However, when dealing with an "old" .RData file, one sometimes needs to pick an environment apart to find what to keep and what to leave out. What I would like is a GUI-like interface that allows me to see the objects, sort them (for example, by their size), and remove the ones I don't need (for example, via a check-box interface). Since I imagine such a system is not currently implemented in R, what ways do exist? What do you use for cleaning old .RData files? Thanks, Tal

    Read the article

  • C++ interpreter conceptual problem

    - by Jan Wilkins
    I've built an interpreter in C++ for a language I created. One main problem in the design was that I had two different types in the language: number and string. So I have to pass around a struct like:

        class myInterpreterValue
        {
            myInterpreterType type;
            int intValue;
            string strValue;
        };

    Objects of this class are passed around millions of times a second, for example during a countdown loop in my language. Profiling pointed out that 85% of the performance is eaten by the allocation function of the string template. This is pretty clear to me: my interpreter has a bad design and doesn't use pointers enough. Yet I don't have an option: I can't use pointers in most cases, as I just have to make copies. What can I do about this? Is a class like this a better idea?

        vector<string> strTable;
        vector<int> intTable;

        class myInterpreterValue
        {
            myInterpreterType type;
            int locationInTable;
        };

    This way the class only knows what type it represents and the position in the table. However, this again has disadvantages: I'd have to add temporary values to the string/int tables and then remove them again, which would eat a lot of performance again. Help: how do interpreters for languages like Python or Ruby do this? They somehow need a struct that represents a value in the language, something that can be either int or string.

    Read the article

  • Java interface and abstract class issue

    - by George2
    Hello everyone, I am reading the book Hadoop: The Definitive Guide, http://www.amazon.com/Hadoop-Definitive-Guide-Tom-White/dp/0596521979/ref=sr_1_1?ie=UTF8&s=books&qid=1273932107&sr=8-1 In chapter 2 (page 25), it is mentioned: "The new API favors abstract classes over interfaces, since these are easier to evolve. For example, you can add a method (with a default implementation) to an abstract class without breaking old implementations of the class." What does this mean (especially, what does "breaking old implementations of the class" mean)? I would appreciate it if anyone could show me a sample of why, from this perspective, an abstract class is better than an interface. thanks in advance, George
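
    A minimal sketch of what the book is getting at, with hypothetical names and assuming pre-Java-8 interfaces (no default methods), which is the situation the Hadoop API faced:

        // Version 1 of a library type, published as an interface:
        interface RecordReader {
            String next();
        }

        // A client's implementation, written against version 1:
        class LineRecordReader implements RecordReader {
            public String next() { return "line"; }
        }

        // If version 2 added "void close();" to the interface, LineRecordReader
        // would no longer compile until its author updated it: that is what
        // "breaking old implementations" means.

        // The same evolution on an abstract class is backward compatible,
        // because the new method ships with a default implementation:
        abstract class AbstractRecordReader {
            public abstract String next();

            // Added later; existing subclasses inherit it unchanged.
            public void close() {
                // default: nothing to release
            }
        }

        class OldReader extends AbstractRecordReader {
            public String next() { return "line"; }  // still compiles after the addition
        }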

    Read the article

  • Should I use the Model-View-ViewModel (MVVM) pattern in Silverlight projects?

    - by Jon Galloway
    One challenge with Silverlight controls is that when properties are bound to code, they're no longer really editable in Blend. For example, if you've got a ListView that's populated from a data feed, there are no elements visible when you edit the control in Blend. I've heard that the MVVM pattern, originated by the WPF development community, can also help with keeping Silverlight controls "blendable". I'm still wrapping my head around it, but here are some explanations:
    - http://www.nikhilk.net/Silverlight-ViewModel-Pattern.aspx
    - http://mark-dot-net.blogspot.com/2008/11/model-view-view-model-mvvm-in.html
    - http://www.ryankeeter.com/silverlight/silverlight-mvvm-pt-1-hello-world-style/
    - http://jonas.follesoe.no/YouCardRevisitedImplementingTheViewModelPattern.aspx
    One potential downside is that the pattern requires additional classes, although not necessarily more code (as shown by the second link above). Thoughts?

    Read the article

  • Checking for nil in view in Ruby on Rails

    - by seaneshbaugh
    I've been working with Rails for a while now, and one thing I find myself constantly doing is checking to see whether some attribute or object is nil in my view code before I display it. I'm starting to wonder if this is always the best idea. My rationale so far has been that since my application(s) rely on user input, unexpected things can occur. If I've learned one thing from programming in general, it's that users inputting things the programmer didn't think of is one of the biggest sources of run-time errors. By checking for nil values I'm hoping to sidestep that and have my views gracefully handle the problem. The thing is, though, for various reasons I typically have similar nil or invalid-value checks in either my model or controller code. I wouldn't call it code duplication in the strictest sense, but it just doesn't seem very DRY. If I've already checked for nil objects in my controller, is it okay if my view just assumes the object truly isn't nil? For attributes that can be nil and are displayed, it makes sense to me to check every time, but for the objects themselves I'm not sure what the best practice is. Here's a simplified but typical example of what I'm talking about:

    controller code

        def show
          @item = Item.find_by_id(params[:id])
          @folders = Folder.find(:all, :order => 'display_order')
          if @item == nil or @item.folder == nil
            redirect_to(root_url) and return
          end
        end

    view code

        <% if @item != nil %>
          display the item's attributes here
          <% if @item.folder != nil %>
            <%= link_to @item.folder.name, folder_path(@item.folder) %>
          <% end %>
        <% else %>
          Oops! Looks like something went horribly wrong!
        <% end %>

    Is this a good idea or is it just silly?

    Read the article

  • [MySQL/PHP] Avoid using RAND()

    - by Andrew Ellis
    So... I have never had a need to do a random SELECT on a MySQL DB until this project I'm working on. After researching, it seems the general consensus is that using RAND() is a bad idea. I found an article that explains how to do another type of random select. Basically, if I want to select 5 random elements, should I do the following (I'm using the Kohana framework here)? If not, what is a better solution? Thanks, Andrew

        <?php

        final class Offers extends Model
        {
            /**
             * Loads a random set of offers.
             *
             * @param   integer $limit
             * @return  array
             */
            public function random_offers($limit = 5)
            {
                // Find the highest offer_id
                $sql = '
                    SELECT MAX(offer_id) AS max_offer_id
                    FROM offers
                ';
                $max_offer_id = DB::query(Database::SELECT, $sql)
                    ->execute($this->_db)
                    ->get('max_offer_id');

                // Check to make sure we're not trying to load more offers
                // than there really are...
                if ($max_offer_id < $limit)
                {
                    $limit = $max_offer_id;
                }

                $used = array();
                $ids = '';
                for ($i = 0; $i < $limit; )
                {
                    $rand = mt_rand(1, $max_offer_id);
                    if (!isset($used[$rand]))
                    {
                        // Flag the ID as used
                        $used[$rand] = TRUE;

                        // Set the ID
                        if ($i > 0) $ids .= ',';
                        $ids .= $rand;

                        ++$i;
                    }
                }

                $sql = '
                    SELECT offer_id, offer_name
                    FROM offers
                    WHERE offer_id IN(:ids)
                ';
                $offers = DB::query(Database::SELECT, $sql)
                    ->param(':ids', $ids)
                    ->as_object()
                    ->execute($this->_db);

                return $offers;
            }
        }

    Read the article

  • Implementing the producer-consumer pattern with the new .NET 4.0 features

    - by bitbonk
    With all the new parallel programming features in .NET 4.0, what would be a simple and fast way to implement the producer-consumer pattern (where at least one thread is producing and enqueuing task items and one other thread executes (dequeues) these tasks)? Can we benefit from all these new APIs? What is your preferred implementation of this pattern?
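
    For illustration only: the question is about .NET 4.0 (where a blocking concurrent collection plays this role), but the shape of the pattern is the same in any language. A minimal Java sketch with a hypothetical poison-pill shutdown signal:

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class ProducerConsumerSketch {
            public static void main(String[] args) throws InterruptedException {
                BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);
                final String POISON = "POISON";   // hypothetical "no more work" marker

                Thread producer = new Thread(() -> {
                    try {
                        for (int i = 0; i < 100; i++) {
                            queue.put("task-" + i);   // blocks when the queue is full
                        }
                        queue.put(POISON);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });

                Thread consumer = new Thread(() -> {
                    try {
                        while (true) {
                            String task = queue.take();  // blocks when the queue is empty
                            if (task.equals(POISON)) break;
                            System.out.println("executing " + task);
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });

                producer.start();
                consumer.start();
                producer.join();
                consumer.join();
            }
        }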

    Read the article

  • Best practice for Python Assert

    - by meade
    Is there a performance or code maintenance issue with using assert as part of the standard code instead of using it just for debugging purposes? Is

        assert x >= 0, 'x is less than zero'

    better or worse than

        if x < 0:
            raise Exception, 'x is less than zero'

    Also, is there any way to set a business rule like "if x < 0 raise error" that is always checked without the try/except/finally, so that if at any time throughout the code x is < 0 an error is raised? For example, if you set assert x < 0 at the start of a function, then anywhere within the function where x becomes less than 0 an exception is raised.

    Read the article

  • What is the proper way to code a read-while loop in Scala?

    - by ARKBAN
    What is the "proper" of writing the standard read-while loop in Scala? By proper I mean written in a Scala-like way as opposed to a Java-like way. Here is the code I have in Java: MessageDigest md = MessageDigest.getInstance( "MD5" ); InputStream input = new FileInputStream( "file" ); byte[] buffer = new byte[1024]; int readLen; while( ( readLen = input.read( buffer ) ) != -1 ) md.update( buffer, 0, readLen ); return md.digest(); Here is the code I have in Scala: val md = MessageDigest.getInstance( hashInfo.algorithm ) val input = new FileInputStream( "file" ) val buffer = new Array[ Byte ]( 1024 ) var readLen = 0 while( readLen != -1 ) { readLen = input.read( buffer ) if( readLen != -1 ) md.update( buffer, 0, readLen ) } md.digest The Scala code is correct and works, but feels very un-Scala-ish. For one it is a literal translation of the Java code, taking advantage of none of the advantages of Scala. Further it is actually longer than the Java code! I really feel like I'm missing something, but I can't figure out what. I'm fairly new to Scala, and so I'm asking the question to avoid falling into the pitfall of writing Java-style code in Scala. I'm more interested in the Scala way to solve this kind of problem than in any specific helper method that might be provided by the Scala API to hash a file. (I apologize in advance for my ad hoc Scala adjectives throughout this question.)

    Read the article
