Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.


  • Why is there a perception that VB.NET is good for small to medium-sized applications but not for enterprise-class projects?

    - by Gens
    I love VB.NET very much. Coding VB.NET in Visual Studio is just like typing messages: smooth, fast and simple. Any error is flagged instantly. The OO capability of VB.NET is good enough. But in almost any discussion of .NET languages, there is a perception that VB.NET is good for small to medium-sized applications and not for large-scale projects. Why is there such a perception? Or am I missing something about VB.NET?

    Read the article

  • When are Getters and Setters Justified

    - by Winston Ewert
    Getters and setters are often criticized as not being proper OO. On the other hand, most OO code I've seen has extensive getters and setters. When are getters and setters justified? Do you try to avoid using them? Are they overused in general? If your favorite language has properties (mine does), then such things are also considered getters and setters for this question. They are the same thing from an OO methodology perspective; they just have nicer syntax. Sources for getter/setter criticism (some taken from comments to give them better visibility):

        http://www.javaworld.com/javaworld/jw-09-2003/jw-0905-toolbox.html
        http://typicalprogrammer.com/?p=23
        http://c2.com/cgi/wiki?AccessorsAreEvil
        http://www.darronschall.com/weblog/2005/03/no-brain-getter-and-setters.cfm
        http://www.adam-bien.com/roller/abien/entry/encapsulation_violation_with_getters_and

    To state the criticism simply: getters and setters allow you to manipulate the internal state of objects from outside the object. This violates encapsulation. Only the object itself should care about its internal state.

    Procedural version of the code:

        struct Fridge {
            int cheese;
        }

        void go_shopping(Fridge fridge) {
            fridge.cheese += 5;
        }

    Mutator version of the code:

        class Fridge {
            int cheese;

            void set_cheese(int _cheese) { cheese = _cheese; }
            int get_cheese() { return cheese; }
        }

        void go_shopping(Fridge fridge) {
            fridge.set_cheese(fridge.get_cheese() + 5);
        }

    The getters and setters made the code much more complicated without affording proper encapsulation. Because the internal state is still accessible to other objects, we don't gain a whole lot by adding them.

    The question has been previously discussed on Stack Overflow:

        http://stackoverflow.com/questions/565095/java-are-getters-and-setters-evil
        http://stackoverflow.com/questions/996179
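
    For contrast, here is a minimal sketch of the alternative the critics usually advocate: expose behavior instead of state, so the object stays in charge of its own data. (The sketch is in Java and is purely illustrative; the method names are my own, not code from the cited articles.)

        // Hypothetical encapsulated version: callers say what they want done,
        // instead of reading and writing the field from outside.
        class Fridge {
            private int cheese;

            void addCheese(int amount) {
                if (amount < 0) {
                    throw new IllegalArgumentException("amount must be non-negative");
                }
                cheese += amount;
            }

            int cheeseCount() {
                return cheese;
            }
        }

        class Shopping {
            static void goShopping(Fridge fridge) {
                fridge.addCheese(5);   // no get/set round-trip; the Fridge owns its state
            }
        }

    With this shape, the internal representation can change later without touching callers, which is the encapsulation the plain getter/setter version does not provide.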

    Read the article

  • Does software rot refer primarily to performance, or to messy code?

    - by Kazark
    Wikipedia's definition of software rot focuses on the performance of the software. This is a different usage than I am used to; I had thought of it much more in terms of the cleanliness and design of the code, that is, in terms of the code having all the standard quality characteristics: readability, maintainability, and so on. Now, performance is likely to go down when the code becomes unreadable, because no one knows what is going on. But does the term software rot refer specifically to performance? Or am I right in thinking it refers to the cleanliness of the code? Or is this perhaps a case of multiple senses of the term being in common usage: from the user's perspective it has to do with performance, but for the software craftsman it has to do more specifically with how the code reads?

    Read the article

  • How to improve my grasp of concepts for interviews

    - by Rahul Mehta
    Hi, I recently had an interview, and the interviewer told me to improve my grasp of the concepts. For example, he asked me about the types of arrays, and I answered that there are two: simple arrays and associative arrays. He also asked me why I use PDO, and I answered that it lets us use any database (e.g. Oracle or MySQL) and that it helps against SQL injection; but when he asked me how it helps against SQL injection, I did not have a correct answer. He also asked me about persistent connections; I have used mysql_pconnect, but I don't know where or how it should be used. Is there a standard way to go about improving these concepts? Please suggest. Thanks
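
    (On the PDO point specifically: the protection against SQL injection comes from prepared statements with bound parameters, where the query text and the user-supplied values are sent to the database separately. A minimal sketch of the idea, written here in Java/JDBC rather than the poster's PHP, with the table and column names invented for illustration; PDO's prepare/bindValue/execute follows the same pattern.)

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        class UserLookup {
            // The "?" placeholder is filled in as data, never parsed as SQL,
            // so a value like "x' OR '1'='1" cannot change the query's meaning.
            static String findEmail(Connection conn, String username) throws Exception {
                String sql = "SELECT email FROM users WHERE username = ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, username);
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next() ? rs.getString("email") : null;
                    }
                }
            }
        }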

    Read the article

  • How should calculations be handled in a document database

    - by Morten
    Ok, so I have a program that basically logs errors into a NoSQL database. Right now there is just a single model for an error, and it is stored as a document in the NoSQL database. Basically I want to summarize across different errors and produce a summary of the "types" of errors that occurred. Traditionally, in a SQL database, this aggregation would be done with groupings, sums and averages, but in a NoSQL database I assume I need to use MapReduce. My current model seems unfit for the task; how should I change the way I store my "models" in order to make statistical analysis easy? Would a NoSQL database even be the right tool for this type of problem? I'm storing things in Google App Engine's BigTable, so there are some limitations to keep in mind as well.
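
    For the kind of summary described above (a count of errors per type), the map/reduce step boils down to grouping by a key and counting. A minimal in-memory sketch in Java, purely to illustrate the shape of the aggregation; ErrorRecord and its fields are assumptions, and this is not the App Engine API:

        import java.util.List;
        import java.util.Map;
        import java.util.stream.Collectors;

        class ErrorSummary {
            // Hypothetical error document; only the field we group on matters here.
            record ErrorRecord(String type, String message) {}

            // "Map" emits the type as the key; "reduce" counts occurrences per key.
            static Map<String, Long> countByType(List<ErrorRecord> errors) {
                return errors.stream()
                        .collect(Collectors.groupingBy(ErrorRecord::type, Collectors.counting()));
            }
        }

    In a document store the same idea usually appears either as a map/reduce job run over the documents or as a pre-aggregated counter document that is updated whenever an error is logged, which trades write cost for cheap reads.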

    Read the article

  • Is the C programming language still used?

    - by Pankaj Upadhyay
    I am a C# programmer, and most of my development is for websites, along with a few Windows applications. As far as C goes, I haven't used it in a long time, as there was no need to. It came as a surprise when one of my friends said that she needs to learn C for testing jobs, while I was helping her learn C#. I figured that someone would only learn C for testing if there is development being done in C. To my knowledge, all the development related to COM and hardware design is done in C++. Therefore, learning C doesn't make sense if you are going to use C++ anyway. I also don't believe in learning it just for its historical significance, so why waste time and money on learning C? Is C still used in any kind of new software development or anything else?

    Read the article

  • Does learning to develop for iOS create a lock-in?

    - by Jungle Hunter
    If I begin my career (first job) developing on the iOS platform, does that lock me into iOS and Mac OS X development only? By "locking me in" I mean: will that create barriers to switching technologies later, since I would mainly be working with Objective-C? If so, does that limit my career choices? I'm interested in comparing this with Android development, which, if pursued, will leave me with Java skills (correct me if I'm wrong) that I can use elsewhere.

    Read the article

  • Typical text encoding+BOM, and EOL behavior on mobile devices

    - by Dan W
    Typical things to worry about when dealing with text are the BOM/signature, the encoding, and the end-of-line (EOL) character(s). I know that Windows often favours \r\n (CR+LF) and Mac/Linux favours \n (LF), but how about mobile devices such as the iPhone and Android? Do typical apps on those platforms favour one or the other? Also, which text encodings are mobiles most likely to use: UTF-8, ISO-8859-1, or even Windows-1252 (or another default codepage), or maybe even UTF-16? And if they use UTF-8/16, are they likely to need (or require not having) a BOM/signature? What is the typical behavior here?
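
    Whatever a given platform favours, code that consumes text from elsewhere usually has to cope with all of these variants. A small defensive sketch in Java (an illustration only, not tied to any particular mobile SDK):

        import java.nio.charset.StandardCharsets;

        class TextNormalizer {
            // The UTF-8 BOM/signature is the byte sequence EF BB BF, which decodes to U+FEFF.
            static String stripBom(String s) {
                return (!s.isEmpty() && s.charAt(0) == '\uFEFF') ? s.substring(1) : s;
            }

            // Normalize CRLF and lone CR to LF so the rest of the code sees one convention.
            static String normalizeEol(String s) {
                return s.replace("\r\n", "\n").replace("\r", "\n");
            }

            public static void main(String[] args) {
                byte[] raw = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF, 'h', 'i', '\r', '\n'};
                String text = new String(raw, StandardCharsets.UTF_8);
                System.out.print(normalizeEol(stripBom(text)));   // prints "hi" and a newline
            }
        }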

    Read the article

  • ASP.NET and C# learning curve [closed]

    - by Mashael
    My friend wants to become a web developer. However, he doesn't know how to start if he is going to become an ASP.NET developer. He found a book titled 'Beginning ASP.NET 4: in C# and VB (Wrox Programmer to Programmer)' by Imar Spaanjaars, but he is not sure whether this is the right place to start, because he has no knowledge of OOP. Does he have to learn C# first before reading such a book, or is it OK to start with it, assuming that the book will teach some of the fundamentals of C#?

    Read the article

  • Is there any reason lazy initialization couldn't be built into Java?

    - by Renesis
    Since I'm working on a server with absolutely no non-persisted state for users, every User-related object we have is rolled out on every request. Consequently I often find myself doing lazy initialization of properties of objects that may go unused.

        protected EventDispatcher dispatcher = new EventDispatcher();

    Becomes...

        protected EventDispatcher<EventMessage> dispatcher;

        public EventDispatcher<EventMessage> getEventDispatcher() {
            if (dispatcher == null) {
                dispatcher = new EventDispatcher<EventMessage>();
            }
            return dispatcher;
        }

    Is there any reason this couldn't be built into Java?

        protected lazy EventDispatcher dispatcher = new EventDispatcher();
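
    There is no lazy field modifier in Java, but the boilerplate can at least be factored out once. A small sketch of a memoizing wrapper (my own illustration; it assumes nothing about EventDispatcher beyond what is shown above, and it is deliberately not thread-safe, matching the per-request objects described in the question):

        import java.util.function.Supplier;

        class Lazy<T> {
            private Supplier<T> delegate;
            private T value;

            Lazy(Supplier<T> delegate) {
                this.delegate = delegate;
            }

            // Assumes the supplier never produces null.
            T get() {
                if (value == null) {
                    value = delegate.get();
                    delegate = null;   // let the supplier (and anything it captures) be collected
                }
                return value;
            }
        }

        // Usage would then look roughly like:
        //   private final Lazy<EventDispatcher<EventMessage>> dispatcher =
        //           new Lazy<>(() -> new EventDispatcher<EventMessage>());
        //   ... dispatcher.get() ...

    For what it's worth, Scala's lazy val and Kotlin's by lazy are language-level versions of exactly this idiom.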

    Read the article

  • How to access an encrypted INI file from C on an embedded system with little RAM

    - by Mawg
    I want to encrypt an INI file using a Delphi program on a Windows PC. Then I need to decrypt and access it in C on an embedded system with little RAM. I will do that once and fetch all the info; I will not be continuously accessing the INI file whenever my program needs data from it. Any advice as to which encryption to use? Nothing too heavyweight, just good enough for "security through obscurity", and FOSS for both Delphi and C. And how can I decrypt and get all the info from the INI file using as little RAM as possible, and then free any allocated RAM? I hope that someone can help.

    [Update] I am currently using an Atmel UC3, although I am not sure if that will be the final choice. It has 512 kB flash and 128 kB RAM. For the INI file, I am talking of at most 8 sections, with a total of at most 256 entries, each at most 8 chars. I chose INI (but am not married to it) because I have had major problems in the past when the format of a data file changes, no matter whether binary or text. For text, I prefer the free format of INI (on the PC), but I suppose I could switch to line_1=data_1, line_2=data_2 and accept that if I add new fields in future software releases they must come at the end, even if it is not pretty when read directly by humans. I suppose if I choose a fixed-format text file then I never need to get more than one line into RAM at a time ...

    Read the article

  • Single IBAction for multiple UIButtons versus single IBAction for single UIButton

    - by Miraaj
    While using storyboards, there are two different approaches that my teammates follow.

    Approach 1: bind a unique action to each button, i.e.:

        Done button   - bound to - doneButtonAction
        Cancel button - bound to - cancelButtonAction

    OR

    Approach 2: bind a single action to multiple buttons, i.e.:

        Done button   - bound to - commonButtonAction
        Cancel button - bound to - commonButtonAction

    Then in commonButtonAction they prefer to use a switch, like this:

        - (IBAction)commonButtonAction:(id)sender
        {
            UIButton *button = (UIButton *)sender;
            switch (button.tag) {
                case 201: // done button
                    [self doneButtonAction:sender];
                    break;
                case 202: // cancel button
                    [self cancelButtonAction:sender];
                    break;
                default:
                    break;
            }
        }

        - (void)cancelButtonAction:(id)sender
        {
            // no interesting stuff, simple dismiss of view :-(
        }

        - (void)doneButtonAction:(id)sender
        {
            // some interesting stuff ;-)
        }

    The reasoning they give for approach 2 is that, during a code walkthrough of each view controller, anyone can easily identify where to find the code related to button actions. Others reject this idea because they say that adding an extra switch is unnecessary and is not common practice. What are your views?

    Read the article

  • Which programming language should I learn? [on hold]

    - by Ashkan
    I'm Ashkan and I'm from Iran. I started programming when I was 13 and I have learned a lot of stuff since then, but now I'm totally lost. Since I live in Iran, there are no counselors or professionals out there to help me, so I decided to ask here. I started with Visual Basic, and after a year I started to learn HTML, CSS, JavaScript and jQuery. For the past 6 months I've been learning PHP, and I have a basic understanding of OOP. I want to move to America to continue my studies, and I was wondering which programming language would help me the most to get there. Should I learn C++ or Java, or should I study computer science and math? Also, since we are not in a good place financially, I want a programming language that helps me in college and lets me make some money. Thanks in advance, and sorry for my poor English skills.

    Read the article

  • Self Documenting Code Vs. Commented Code

    - by Phill
    I had a search but didn't find what I was looking for; please feel free to link me if this question has already been asked. Earlier this month this post was made:

        http://net.tutsplus.com/tutorials/php/why-youre-a-bad-php-programmer/

    Basically, to sum it up, you're a bad programmer if you don't write comments. My personal opinion is that code should be descriptive and mostly not require comments, unless the code cannot be self-describing. In the example given:

        // Get the extension off the image filename
        $pieces = explode('.', $image_name);
        $extension = array_pop($pieces);

    the author said this code should be given a comment. My personal opinion is that the code should be a descriptive function call:

        $extension = GetFileExtension($image_filename);

    However, in the comments someone actually made just that suggestion:

        http://net.tutsplus.com/tutorials/php/why-youre-a-bad-php-programmer/comment-page-2/#comment-357130

    The author responded by saying the commenter was "one of those people", i.e. a bad programmer. What are everyone else's views on self-describing code vs. commented code?
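
    For what it's worth, the suggested refactoring looks much the same in most languages; a small sketch in Java rather than the article's PHP (names are illustrative only):

        class ImageNames {
            // The method name carries the intent, so the call site needs no comment.
            static String getFileExtension(String filename) {
                int dot = filename.lastIndexOf('.');
                return (dot >= 0 && dot < filename.length() - 1) ? filename.substring(dot + 1) : "";
            }

            public static void main(String[] args) {
                String extension = getFileExtension("photo.final.jpg");
                System.out.println(extension);   // prints "jpg"
            }
        }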

    Read the article

  • Using WebStorm for Razor Syntax MVC

    - by Jay Stevens
    I am building a lot of client-side-heavy, SPA-like apps with VS2010 and MVC 3/4. VS2010's JavaScript/HTML/CSS editing (mostly JavaScript) is interminably slow and sluggish. I'd love to use something like JetBrains' WebStorm to edit my .CSHTML files (which have embedded JavaScript, etc., because I am using Razor to pop in URL names and so on). WebStorm seems to have all of the things I want: better language recognition ("intellisense") and the ability to integrate additional outside libraries (I'm using Kendo), etc. Is this possible? How do you get WebStorm to recognize the @""-invoked Razor language inserts? Any help or suggestions would be appreciated.

    Read the article

  • Would learning any (linguistic) language in particular further your programming career?

    - by Anonymous
    It seems apparent that English is the dominant international language for programming, based on previous P.SE questions (though a highly upvoted comment correctly points out that asking a question like that on a predominantly English-language site will skew the results). However, is there a benefit in learning a foreign language for software development? For example, do the Chinese have completely different software tools, languages, technologies, etc.? How about Japanese, Russian, and other non-Latin-based languages? Is there an entire world of software development languages, tools and so on that only exists in these other languages? Or do people who know these languages use the same tools and languages we know and love?

    Read the article

  • What Are Some Advantages/Disadvantages of Using C over Assembly?

    - by Daniel
    I'm currently studying engineering in telecommunications and electronics, and we have migrated from assembler to C in microprocessor programming. I have doubts that this is a good idea. What are some advantages and disadvantages of C compared to assembly?

    The advantages/disadvantages I see are:

    Advantages:

        - C syntax is a lot easier to learn than assembler syntax.
        - C is easier to use for writing more complex programs.
        - Learning C is somehow more productive than learning assembler, because there is more development material and tooling around C than around assembler.

    Disadvantages:

        - Assembler is a lower-level programming language than C, which makes it well suited to programming directly against the hardware.
        - It is a lot more flexible, allowing you to work with memory, interrupts, micro-registers, etc.

    Read the article

  • Is code that terminates on a random condition guaranteed to terminate?

    - by Simon Campbell
    If I had code which terminated based on whether a random number generator returned a particular result (as follows), would it be 100% certain that the code would terminate if it was allowed to run forever?

        while (random(MAX_NUMBER) != 0):    # random returns a random number between 0 and MAX_NUMBER
            print('Hello World')

    I am also interested in any distinctions between purely random and the deterministic random that computers generally use. Assume the seed is not able to be known in the case of the deterministic random. Naively it could be suggested that the code will exit: after all, every number has some possibility, and there is all of time for that possibility to be exercised. On the other hand, it could be argued that there is a random chance it may never meet the exit condition; the generator could generate 1 "randomly" until infinity. (I suppose one would question the validity of a random number generator that was a deterministic generator returning only 1s "randomly", though.)
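
    A short worked calculation, assuming random(MAX_NUMBER) is an ideal, independent, uniform draw from 0..MAX_NUMBER (an idealization that a real pseudo-random generator only approximates):

        P(one iteration does not return 0)  =  MAX_NUMBER / (MAX_NUMBER + 1)
        P(still looping after n iterations) =  (MAX_NUMBER / (MAX_NUMBER + 1))^n  ->  0  as n -> infinity

    So the loop terminates with probability 1 ("almost surely"), even though no finite number of iterations guarantees termination; an infinite run of non-zero values has probability zero but is not logically impossible. With a deterministic generator, the argument only holds to the extent that its output behaves like the ideal uniform sequence.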

    Read the article

  • Storing lots of large strings with frequent "appends" and few reads

    - by Thiago Moraes
    In my current project, I need to store a very long ASCII string for each instance of a given object. This string will receive about two appends per minute and will not be retrieved very often. The worst-case scenario is a 5-10 MB string. I'll have thousands of instances of my object, and I'm worried that storing all those strings in the filesystem would not be optimal, but I can't think of a better solution. Can anyone suggest an alternative? Maybe a key-value store? In that case, which one? Any other thoughts?
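
    For reference, the filesystem baseline being weighed here is just an append-mode write keyed by object id. A minimal sketch in Java (the file layout and naming are my assumptions, not the project's):

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.StandardOpenOption;

        class StringLog {
            private final Path dir;

            StringLog(Path dir) {
                this.dir = dir;
            }

            // Appends are cheap and sequential; the full string is only read on the rare retrieval.
            void append(String objectId, String chunk) throws IOException {
                Path file = dir.resolve(objectId + ".log");
                Files.writeString(file, chunk, StandardCharsets.UTF_8,
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            }

            String readAll(String objectId) throws IOException {
                return Files.readString(dir.resolve(objectId + ".log"), StandardCharsets.UTF_8);
            }
        }

    Whether a key-value store beats this mostly comes down to how the filesystem copes with thousands of small files and whether atomicity and replication of the appends matter.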

    Read the article

  • Is there any practical use for the empty type in Common Lisp?

    - by Pedro Rodrigues
    The Common Lisp spec states that nil is the name of the empty type, but I've never found any situation in Common Lisp where I felt the empty type was useful or necessary. Is it there just for completeness' sake (and would removing it cause no harm to anyone)? Or is there really some practical use for the empty type in Common Lisp? If yes, then I would prefer an answer with a code example. For example, in Haskell the empty type can be used when binding foreign data structures, to make sure that no one tries to create values of that type without going through the data structure's foreign interface (although in this case the type is not really empty).

    Read the article

  • OO Design, how to model Tonal Harmony?

    - by David
    I have started to write a program in C++11 that would analyse chords, scales, and harmony. The biggest problem I am having in my design phase is that the note 'C' is a note, a type of chord (Cmaj, Cmin, C7, etc.), and a type of key (the key of C major, C minor). The same issue arises with intervals (minor 3rd, major 3rd). I am using a base class, Token, that is the base class for all 'symbols' in the program. So, for example:

        class Token {
        public:
            typedef shared_ptr<Token> pointer_type;
            Token() {}
            virtual ~Token() {}
        };

        class Command : public Token {
        public:
            Command() {}
            pointer_type execute();
        };

        class Note : public Token;
        class Triad : public Token;
        class MajorTriad : public Triad;   // CMajorTriad, etc.
        class Key : public Token;
        class MinorKey : public Key;       // natural minor, harmonic minor, etc.
        class Scale : public Token;

    As you can see, creating all the derived classes (CMajorTriad, C, CMajorScale, CMajorKey, etc.) would quickly become ridiculously complex, even before including all the other notes and the enharmonics. Multiple inheritance would not work either, i.e.:

        class C : public Note, Triad, Key, Scale

    A class C cannot be all of these things at the same time; it is contextual. Polymorphism will not work here either (how would I determine which super methods to perform? calling every superclass constructor should not happen here). Are there any design ideas or suggestions that people have to offer? I have not been able to find anything on Google about modelling tonal harmony from an OO perspective. There are just far too many relationships between all the concepts here.
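
    One direction worth considering, offered purely as an illustration (in Java rather than the original C++, with invented names): make the pitch class 'C' a plain value that the other concepts reference, so each concept has a pitch class rather than is one.

        class Harmony {
            // The letter "C" is just a value; by itself it is not a Note, a Chord, or a Key.
            enum PitchClass { C, C_SHARP, D, D_SHARP, E, F, F_SHARP, G, G_SHARP, A, A_SHARP, B }

            // Each musical concept *has* a pitch class (composition) instead of inheriting from it.
            record Note(PitchClass pitch, int octave) {}
            record Chord(PitchClass root, String quality) {}      // e.g. new Chord(PitchClass.C, "maj")
            record Key(PitchClass tonic, boolean major) {}

            public static void main(String[] args) {
                Chord cMajor = new Chord(PitchClass.C, "maj");
                Key cMajorKey = new Key(PitchClass.C, true);
                System.out.println(cMajor + " in " + cMajorKey);
            }
        }

    Whether the chord C major and the key of C major share behaviour then becomes a question for their own methods (or shared interfaces), not for a single inheritance hierarchy rooted at Token.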

    Read the article

  • How do developers verify that software requirement changes in one system do not violate a requirement of downstream software systems?

    - by Peter Smith
    In my work, I do requirements gathering, analysis, and design of business solutions in addition to coding. There are multiple software systems and packages, and developers are expected to work on any of them, instead of being assigned to make changes to only one system or just a few systems. How do developers ensure they have captured all of the necessary requirements and resolved any conflicting requirements?

    An example of this type of scenario: Bob the developer is asked to modify the problem ticket system for a hypothetical utility repair business. They contract with a local utility company to provide this service. The old system provides a mechanism for an external customer to create a ticket indicating a problem with utility service at a particular address. There is a scheduling system and an invoicing system that are dependent on this data. Bob's new project is to modify the ticket placement system to allow multiple addresses to be entered by a landlord or other end customer with multiple properties. The invoicing system bills per ticket, but should be modified to bill per address. What practices would help Bob discover that the invoicing system needs to be changed as well? How might Bob discover what other systems in his company might need to be changed in order to support the new business model? Let's say there is a documented specification for each system involved, but there are many systems and Bob is not familiar with all of them. End of example.

    We're often in this scenario, and we do have design reviews, but management places ultimate responsibility for any defects (business process or software process) on the developer who is doing the design and the work. Some organizations seem to be better at this than others. How do they manage to detect and solve conflicting or incomplete requirements across software systems? We currently have a lot of tribal knowledge and just a few developers who understand the entire business and software chain. This seems highly ineffective and leads to problems at the requirements level.

    Read the article

  • Using an Apt Repository for Paid Software Updates

    - by Scott Warren
    I'm trying to determine a way to distribute software updates for a hosted/on-site web application that may have weekly and/or monthly updates. I don't want the customers who use the on-site product to have to worry about updating it manually; I just want it to download and install automatically, a la Google Chrome. I'm planning on providing an OVF file with Ubuntu and the software installed and configured.

    My first thought on how to distribute the software is to create six Apt repositories/channels (not sure which would be better at this point) that will be accessed over SSH using keys, so if a customer doesn't renew their subscription we can disable their account:

        Beta        - Used internally on test data to check the package for major defects.
        Internal    - Used internally on live data to check the package for defects (dogfooding stage).
        External 1  - Deployed to 1% of our user base (randomly selected) to check for defects.
        External 9  - Deployed to 9% of our user base (randomly selected) to check for defects.
        External 90 - Deployed to the remaining 90% of users.
        Hosted      - Deployed to the hosted environment.

    It will take a sign-off at each stage to move into the next repository, in case problems are reported.

    My questions to the community are: Has anyone tried something like this before? Can anyone see a downside to this type of procedure? Is there a better way?

    Read the article

  • Can an agile shop really score 12 on the Joel Test?

    - by Simon
    I really like the Joel test, use it myself, and encourage my staff and interviewees to consider it carefully. However I don't think I can ever score more than 9 because a few points seem to contradict the Agile Manifesto, XP and TDD, which are the bedrocks of my world. Specifically: the questions about schedule, specs, testers and quiet working conditions run counter to what we are trying to create and the values that we have adopted in being genuinely agile. So my question is: is it possible for a true Agile shop to score 12?

    Read the article

  • Prevent Eclipse Java Builder from Compiling Java-Like Source

    - by redjamjar
    I'm in the process of writing an Eclipse plugin for my programming language Whiley (see http://whiley.org). The plugin is working reasonably well, although there's lots to do. Two pieces of the jigsaw are:

        1. I've created a "Whiley Builder" by subclassing the incremental project builder. This handles building and cleaning of "*.whiley" files.
        2. I've created a content type called "Whiley Source Files" for "*.whiley" files, which extends "org.eclipse.jdt.core.javaSource" (this follows Andrew Eisenberg's suggestion).

    The advantage of having the content type extend javaSource is that it immediately fits into the package explorer, etc. In principle, I could flesh out ICompilationUnit to provide more useful info, although I haven't done that yet. The disadvantage is that the Java builder is trying to compile my Whiley files, and it obviously can't.

    Originally, I had the Java builder run first, then the Whiley builder. Superficially, this actually worked out quite well, since all of the errors from the Java builder were discarded by the Whiley builder (for Whiley files). However, I actually want the Whiley builder to run first, as this is the best way for me to resolve dependencies between Java and Whiley files. Which leads me to my question: can I stop the Java builder from trying to compile certain Java-like resources? Specifically, in my case, those with the "*.whiley" extension. As an alternative, I was wondering whether my Whiley Builder could somehow update the resource delta to remove those files which it has dealt with. Thoughts?
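
    One avenue worth checking (an assumption of this edit, not something verified against the plugin): JDT source folders accept exclusion patterns, so the .whiley resources can be filtered out of what the Java builder processes, e.g. in the project's .classpath:

        <classpathentry kind="src" path="src" excluding="**/*.whiley"/>

    The same setting is reachable through Project Properties > Java Build Path > Source > Excluded, which may be enough to keep the Java builder away from Whiley sources without reordering the builders.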

    Read the article
