Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.


  • Builder Pattern: When to fail?

    - by skiwi
    When implementing the Builder pattern, I often find myself confused about when to let building fail, and I even manage to take a different stand on the matter every few days. First, some explanation: by failing early I mean that building an object should fail as soon as an invalid parameter is passed in, i.e. inside the SomeObjectBuilder itself. By failing late I mean that building an object can only fail on the build() call, which implicitly calls the constructor of the object to be built. Then some arguments:
    In favor of failing late: a builder class should be no more than a class that simply holds values. Moreover, it leads to less code duplication.
    In favor of failing early: a general principle in software development is that you want to detect issues as early as possible, so the most logical places to check would be the builder class's constructor, its 'setters', and ultimately the build method.
    What is the general consensus on this?
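
    A minimal Java sketch of the two options, assuming a hypothetical SomeObject with a single "size" parameter that must be positive (both names are invented for illustration):

      // Hypothetical example: SomeObject requires size > 0.
      public final class SomeObject {
          private final int size;

          SomeObject(int size) {
              // Failing late: the constructor is the last line of defence.
              if (size <= 0) {
                  throw new IllegalArgumentException("size must be positive: " + size);
              }
              this.size = size;
          }

          public static final class Builder {
              private int size;

              // Failing early: reject the bad value the moment it is passed in.
              public Builder size(int size) {
                  if (size <= 0) {
                      throw new IllegalArgumentException("size must be positive: " + size);
                  }
                  this.size = size;
                  return this;
              }

              public SomeObject build() {
                  return new SomeObject(size); // still fails late if the early check is removed
              }
          }
      }

    Keeping the check only in the constructor is the fail-late variant; duplicating it in the setter is the fail-early variant, at the cost of some duplication.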

    Read the article

  • Is it a good practice to wrap all primitives and Strings?

    - by Amogh Talpallikar
    According to Jeff Bay's essay on Object Calisthenics, one of the practices is said to be "Wrap all primitives and Strings". Can anyone elaborate on this? In languages that already have wrappers for primitives, like C# and Java, and where collections can use generics so you are sure what type goes into the collection, do we need to wrap strings inside their own classes? Does it have any other advantage?
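
    As a hedged illustration (the class name and validation rule below are invented, not taken from the essay), wrapping a raw String in a small Java domain type gives validation and behaviour a single home, and a List<CustomerId> says more than a List<String> even with generics:

      // Hypothetical wrapper: a raw String becomes a first-class domain concept.
      public final class CustomerId {
          private final String value;

          public CustomerId(String value) {
              // The validation lives here once, instead of at every call site.
              if (value == null || value.isEmpty()) {
                  throw new IllegalArgumentException("CustomerId must not be empty");
              }
              this.value = value;
          }

          public String value() {
              return value;
          }

          @Override
          public String toString() {
              return value;
          }
      }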

    Read the article

  • How can we plan projects realistically while accounting for support issues?

    - by Thomas Clayson
    We're having a problem at work: we're trying to schedule work so that we can assess time scales and set deadline dates. The problem is that it's difficult to plan a project without knowing everything that's going to happen. For instance, right now we've planned all our projects through the start of December, but in that time we will have various in-house and external meetings, teleconferences and extra work. It's all well and good to say that a project will take three weeks, but if there is a week's worth of interruption in that time then the completion date will be pushed back a week. The problem is threefold:
    1. When we schedule projects, the time scales are taken literally. If we estimate three weeks, the deadline is set for three weeks' time, the client is told, and there is no room for extension.
    2. Interim work and the like means we lose productive time on the project.
    3. Sometimes clients don't have the time we need to do the work, so they'll come to us and say they need a project done by the end of the month even when we think the work will take two months - not to mention the work we already have.
    We have a Gantt chart which we are trying to fill in with all our projects, and we fill in timesheets, but they're not compared to the Gantt chart at all. This makes it difficult to say "Well, we scheduled three weeks for this project, but we've lost a week here, so the deadline has to move back a week." It's also not professional to keep missing deadlines we've communicated to the client. How do other people deal with this type of situation? How do you manage the planning of projects? How much "extra" time do you schedule into a project to account for non-project work that occurs during it? How do you deal with support issues, bugs, and other things you can't account for during planning?
    Update: Lots of good answers, thank you.

    Read the article

  • Can someone explain to me C#'s coding convention?

    - by AedonEtLIRA
    I recently started working with Unity3D, primarily scripting in C#. As I normally program in Java, the differences aren't too great, but I still referred to a crash course just to make sure I am on the right track. My biggest curiosity about C#, however, is that it capitalizes the first letter of its method names (e.g. Java: getPrime(), C#: GetPrime() - Pascal case?). Is there a good reason for this? I understand from the crash course page I read that it's apparently the convention for .NET and that I have no way of changing it, but I'm curious to hear why it was done like this as opposed to the (relatively) normal camel case that, say, Java uses. Note: I understand that languages have their own coding conventions (Python method names are all lowercase, which is also relevant to this question), but I've never really understood why this isn't formalised into a single standard.

    Read the article

  • Design patterns and multiple programming languages

    - by Eduard Florinescu
    I am referring here to the design patterns found in the GoF book. First, as I see it, there are a few peculiarities to design patterns when you know multiple languages; for example, in Java you really need a Singleton, but in Python you can do without it - you just write a module. I saw somewhere a wiki trying to write out all the GoF patterns for JavaScript, and all the entries were empty, I guess because adapting them is a daunting task. If someone who uses design patterns and programs in multiple languages supporting the OOP paradigm can give me a hint on how I should approach design patterns - an approach that might help me in all the languages I use (Java, JavaScript, Python, Ruby) - I'd appreciate it. Can I write good applications without knowing the GoF design patterns exactly, or do I need just some of them that are crucial (and if so, which ones)? Are there alternatives to GoF for specific languages, and should a programmer or a team make their own set of design patterns?
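
    For reference, a minimal Java sketch of the Singleton the question mentions (the single-element enum is one common Java adaptation; in Python the same role is usually played by a plain module, and the class name here is invented):

      // One common Java form of the GoF Singleton: a single-element enum.
      public enum AppConfig {
          INSTANCE;

          private String value = "default";

          public String getValue() { return value; }
          public void setValue(String value) { this.value = value; }
      }

      // Usage: AppConfig.INSTANCE.getValue();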

    Read the article

  • How to include a serious personal project in a resume?

    - by mob1lejunkie
    My brother has come up with an interesting business idea that could be commercialised. For over a month I have been creating the foundation for a SaaS product. I have been treating this as a commercial project, designing it using patterns and best practices. One of the reasons I want to include this in my resume is that my full-time job doesn't involve currently trendy ASP.NET technologies (e.g. LINQ/entity relationships, jQuery, ASP.NET MVC 3, Silverlight, etc.), so the resume lacks impact. In my full-time job I work on a seven-year-old, well-designed product, and since our data and web layers work well it would be foolish to re-engineer them only because recruiters think LINQ, ASP.NET MVC and jQuery are cool. How can I include my personal project in my resume so that it doesn't sound like an experiment or a quick-and-dirty pet project? Many thanks.

    Read the article

  • Is there a constant for "end of time"?

    - by Nick Rosencrantz
    In some systems, the time value 9999-12-31 is used as the "end of time" - the end of the time that the computer can handle. But what if that changes? Wouldn't it be better to define this time as a built-in constant? In C and other programming languages there is usually a constant such as INT_MAX (or similar) for the largest value an integer can hold. Why is there no similar MAX_TIME constant, i.e. a value set to the "end of time", which for many systems is 9999-12-31? To avoid the problem of hardcoding a wrong year (9999), couldn't these systems introduce such a constant for the "end of time"?
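
    A hedged Java sketch of what such a constant can look like in practice: java.time already exposes sentinel extremes (LocalDate.MAX is +999999999-12-31), and a system that must keep 9999-12-31 can at least name that value once instead of hardcoding it everywhere (the class and constant names below are invented):

      import java.time.LocalDate;

      public final class TimeConstants {
          // Library-provided extreme in java.time.
          public static final LocalDate LIBRARY_END_OF_TIME = LocalDate.MAX;

          // Hypothetical system-specific sentinel, named once so a later change is a single edit.
          public static final LocalDate END_OF_TIME = LocalDate.of(9999, 12, 31);

          private TimeConstants() {}
      }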

    Read the article

  • How should an API use HTTP Basic authentication?

    - by user1626384
    When an API requires that a client authenticate to it, I've seen two different scenarios used, and I am wondering which one I should use for my situation.
    Example 1: an API is offered by a company to allow third parties to authenticate with a token and secret using HTTP Basic.
    Example 2: an API accepts a username and password via HTTP Basic to authenticate an end user. Generally the user gets a token back for future requests.
    My setup: I will have a JSON API that I use as the backend for a mobile app and a web app. It seems like good practice for both the mobile and web app to send along a token and secret so that only these two apps can access the API, blocking any other third party. But the mobile and web apps also allow users to log in and submit posts, view their data, etc., so I would want them to log in via HTTP Basic as well on each request. Do I somehow use a combination of both of these methods, or do I only send the end user's credentials (username and token) on each request? If I only send the end user's credentials, do I store them in a cookie on the client?
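
    A hedged Java 11+ sketch of the mechanics only (the endpoint, header name and credential values are invented): the app-level token and secret go into the Basic Authorization header, while the end user's token travels in a separate header.

      import java.net.URI;
      import java.net.http.HttpClient;
      import java.net.http.HttpRequest;
      import java.net.http.HttpResponse;
      import java.nio.charset.StandardCharsets;
      import java.util.Base64;

      public class ApiClientSketch {
          public static void main(String[] args) throws Exception {
              // Hypothetical app-level credentials identifying the mobile or web client.
              String appToken = "app-token";
              String appSecret = "app-secret";
              String basic = Base64.getEncoder().encodeToString(
                      (appToken + ":" + appSecret).getBytes(StandardCharsets.UTF_8));

              HttpRequest request = HttpRequest.newBuilder()
                      .uri(URI.create("https://api.example.com/posts")) // hypothetical endpoint
                      .header("Authorization", "Basic " + basic)        // app authentication
                      .header("X-User-Token", "user-session-token")     // hypothetical end-user token
                      .GET()
                      .build();

              HttpResponse<String> response = HttpClient.newHttpClient()
                      .send(request, HttpResponse.BodyHandlers.ofString());
              System.out.println(response.statusCode());
          }
      }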

    Read the article

  • What should I expect from a systems engineering university program?

    - by Trufa
    Tomorrow I'm starting a series of interviews to decide which university I should choose to get a degree in systems engineering. I know this is a serious university, but I would like some feedback about what I should expect or "demand" from a university. My experience in the technology field is (obviously) limited, and I would like to know what to look for to judge whether a university is good or not, especially in the following areas:
    Infrastructure: what are the essentials? What are big pluses?
    Theoretical vs. practical: how practical should it be? What is a "good" mix?
    Programming languages, frameworks, etc.: which are ideal for learning? Which are in most demand?
    Latest technologies: what should they be teaching right now to "prove" they are up to date?
    Qualification system: what exam methods do you think are ideal for this kind of degree - good ol' Q&A, multiple choice, projects, or a fair mix?
    What other points do you think I should care about? What isn't important? Thanks in advance. I realize this might be a very subjective topic, so I tried to make it as specific and on topic as I could, but any recommendations are of course welcome. I also understand that none of these questions will guarantee that a university is good, but they might give me another reference point when the moment to choose comes.

    Read the article

  • How do I explain to HR that my work experience is relevant even if it doesn't match the keywords in the job description?

    - by Dmitri
    I am looking at a job with a great company, in a field that I really want to be in. Unfortunately, what they want is someone with "experience with ASP (VB), T-SQL in a production environment." But I've never done anything except with FOSS tools: C, Ruby (plain and RoR), Perl, MySQL, etc. I'm thinking that I could probably pick up VB without much trouble (I took a class that used it in college and was impressed by how easy it was to construct Windows UIs, but I've never needed it since). Is there any way that I can demonstrate that my experience is similar? What would the equivalent experience be in the FOSS world?

    Read the article

  • Design Anti-Patterns - C# - Do you call this a God object?

    - by Reddy S R
    I am writing a Portfolio module for my web site, and it has three components: Gallery Category, Gallery, and Gallery Images. I am doing all the request handling (creating, reading, updating, and other operations) for the above three components in one class, Portfolio. DB handling for the Portfolio module is done in another file. My question is: even just for request-handling purposes, is it reasonable to do all the operations in one class? -Reddy
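
    As a hedged Java sketch (all the names below are invented), the usual alternative is one narrow handler per component, optionally behind a thin facade, rather than a single class that does everything:

      // One narrow handler per component instead of a single do-everything class.
      interface GalleryCategoryHandler {
          void create(String name);
          String read(long id);
          void update(long id, String name);
      }

      interface GalleryHandler { /* create/read/update for galleries */ }

      interface GalleryImageHandler { /* create/read/update for gallery images */ }

      // A thin facade can still offer one entry point without owning all the logic.
      class PortfolioFacade {
          private final GalleryCategoryHandler categories;
          private final GalleryHandler galleries;
          private final GalleryImageHandler images;

          PortfolioFacade(GalleryCategoryHandler c, GalleryHandler g, GalleryImageHandler i) {
              this.categories = c;
              this.galleries = g;
              this.images = i;
          }
      }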

    Read the article

  • Is catching general exceptions really a bad thing?

    - by Bob Horn
    I typically agree with most code analysis warnings, and I try to adhere to them. However, I'm having a harder time with this one: CA1031: Do not catch general exception types. I understand the rationale for this rule. But, in practice, if I want to take the same action regardless of the exception thrown, why would I handle each one specifically? Furthermore, if I handle specific exceptions, what happens if the code I'm calling changes to throw a new exception in the future? Now I have to change my code to handle that new exception, whereas if I simply catch Exception, my code doesn't have to change. For example, if Foo calls Bar, and Foo needs to stop processing regardless of the type of exception thrown by Bar, is there any advantage in being specific about the type of exception I'm catching?
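
    A Java sketch of the same trade-off the question describes for C# (Foo, Bar and the exception types below are placeholders): catching the general type is shorter and survives new exception types, while catching specific types keeps genuinely unexpected failures from being swallowed.

      // Foo, Bar and the exception types are placeholders for illustration.
      class Bar {
          void run() throws java.io.IOException {
              // ... may throw IOException today, maybe a new type after a refactor
          }
      }

      class Foo {
          private final Bar bar = new Bar();

          // Option 1: catch everything - stops processing on any failure,
          // but also swallows bugs such as NullPointerException.
          void processCatchAll() {
              try {
                  bar.run();
              } catch (Exception e) {
                  System.err.println("stopping: " + e.getMessage());
              }
          }

          // Option 2: catch only what Bar is documented to throw - a new exception
          // type forces this code to change, which is the cost the question weighs.
          void processCatchSpecific() {
              try {
                  bar.run();
              } catch (java.io.IOException e) {
                  System.err.println("stopping: " + e.getMessage());
              }
          }
      }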

    Read the article

  • Unit testing in Django

    - by acjohnson55
    I'm really struggling to write effective unit tests for a large Django project. I have reasonably good test coverage, but I've come to realize that the tests I've been writing are definitely integration/acceptance tests, not unit tests at all, and critical portions of my application are not being tested effectively. I want to fix this ASAP. Here's my problem: my schema is deeply relational and heavily time-oriented, giving my model objects high internal coupling and lots of state. Many of my model methods query based on time intervals, and I've got a lot of auto_now_add going on in timestamped fields. So take a method that looks like this, for example:

      def summary(self, startTime=None, endTime=None):
          # ... logic to assign a proper start and end time
          # if none was provided, probably using datetime.now()
          objects = self.related_model_set.manager_method.filter(...)
          return sum(object.key_method(startTime, endTime) for object in objects)

    How does one approach testing something like this? Here's where I am so far. It occurs to me that the unit-testing objective should be: given some mocked behavior of key_method on its arguments, does summary filter and aggregate correctly to produce the right result? Mocking datetime.now() is straightforward enough, but how can I mock out the rest of the behavior? I could use fixtures, but I've heard pros and cons of using fixtures for building my data (poor maintainability being a con that hits home for me). I could also set up my data through the ORM, but that can be limiting, because then I have to create related objects as well, and the ORM doesn't let you mess with auto_now_add fields manually. Mocking the ORM is another option, but not only is it tricky to mock deeply nested ORM methods, the logic in the ORM code gets mocked out of the test, and mocking seems to make the test really dependent on the internals and dependencies of the function under test. The toughest nuts to crack seem to be functions like this one, which sit on top of a few layers of models and lower-level functions and are very dependent on time, even though they may not be super complicated. My overall problem is that no matter how I slice it, my tests look far more complex than the functions they are testing.

    Read the article

  • What is 'work' in the Pomodoro Technique?

    - by Sachin Kainth
    I just started using the Pomodoro Technique today, and I am trying to work out what I should and should not do during my 25-minute work time. During my first 25-minute stint I started to write some code and realised that I had done something similar in a related project, so I opened that solution to copy and paste the existing code. The question is: is this allowed? Also, if during my 25 minutes I realise that there is an important work-related email I need to send, can I do that, or should it wait for the next 25 minutes or the break? I am writing this question during my (now extended) 5-minute break - is this work or is it a break? I really would appreciate some guidance, as I really want to use the Pomodoro Technique to focus better on my work. Another thing that happened to me was that an Adobe AIR update alert came up on my desktop during the 25 minutes. Should I ignore such things until the break? Sachin

    Read the article

  • What's the difference between MariaDB and MySQL?

    - by Chris J. Lee
    What's the difference between MariaDB and MySQL? I'm not very familiar with either; I'm primarily a front-end developer. Are they syntactically similar? Where do these two differ? Wikipedia only mentions the difference in licensing: MariaDB is a community-developed branch of the MySQL database, the impetus being the community maintenance of its free status under the GPL, as opposed to the uncertainty of MySQL's license status under its current ownership by Oracle.

    Read the article

  • Are More Comments Better in High-Turnover Environments?

    - by joshin4colours
    I was talking with a colleague today. We work on code for two different projects. In my case, I'm the only person working on my code; in her case, multiple people work on the same codebase, including co-op students who come and go fairly regularly (every 8-12 months). She said that she is liberal with her comments, putting them all over the place. Her reasoning is that this helps her remember where things are and what things do, since much of the code wasn't written by her and could be changed by someone other than her. Meanwhile, I try to minimize the comments in my code, putting them only in places with an unobvious workaround or bug. However, I have a better understanding of my code overall and have more direct control over it. My opinion is that comments should be minimal and the code should tell most of the story, but her reasoning makes sense too. Are there any flaws in her reasoning? It may clutter the code, but it could ultimately be quite helpful if many people are working on it in the short to medium run.

    Read the article

  • Learning to be a good developer: what parts can you skip over?

    - by Andrew M
    I have set myself the goal of becoming a decent developer by this time next year. By this I mean full experience of the development 'lifecycle', a few good apps/sites/webapps under my belt, and, most importantly, being able to work at a steady pace without getting sidelined for hours by some should-know-this-already technique. I'm not starting from scratch. I've written a lot of HTML/CSS, SQL, JavaScript, Python and VB.NET, and studied other languages like C and Java. I know about things like OOP, design patterns, TDD, complexity, computational linguistics, pointers/references, functional programming, and other academic/theoretical matters. It's just that I can't say I've really done these things yet. So I want to get up to speed, and I want to know what things I can leave until a later date. For instance, studying algorithms and the maths behind them is interesting and all, but so far I've hardly needed to write anything but the most basic nested loops. Investigating assembly to have a clearer picture of low-level operations would be cool... but I imagine it rarely impinges on daily work. On the other hand, looking at a functional programming language might help me write programs that are more comprehensible and less prone to hidden failures (at the moment I'm finding the biggest difficulty is when the complexity of the app exceeds my capacity to understand it - for instance, passing data around was fine... until I had to start doing it with AJAX, which was a painful step up). I could spend time working through case studies of design patterns, but I'm not sure how many of them get used in 'real life'. I'm a programmer with basic abilities - what skills should I focus on developing? (Also, my Unix skills are very weak, as is my knowledge of Windows configuration... not sure how much time I should spend on those.)

    Read the article

  • Career paths after web development?

    - by Mike
    I know this is open-ended, but I'm just curious what you've done after your web development career, or whether you've stayed loyal to it. I have a feeling (and have read/heard) that web development salaries top out at a certain amount, even after 10-15 years of experience. The reason I ask is that I graduated last summer with a BS in Chemical Engineering but have not been able to find a job in California. I've been doing web design/development since high school and thought that I should start a career, even if it's not related to my major, rather than lose more time. Even though I'd really like to have an engineering career, I don't think that will happen. Do you have any suggestions or experiences regarding choices after, or ways to enhance, a career after several years in web development? Thanks!
    Update: Thanks for the responses, guys! One more question: is it likely that you'd be accepted into an MS/PhD program if you've been out of university for a couple of years, or with semi-related job experience? Would I be a bit of a misfit with a BS in ChemE studying CS/CompE for an MS?

    Read the article

  • Where's Randall Hyde?

    - by user1124893
    This probably doesn't belong here, but I couldn't think of any other Stack Exchange site that would fit it. Quick question: whatever happened to Randall Hyde, author of The Art of Assembly, HLA, and other works? I ask because I was just exploring some of the content on his website, and a lot of it is now gone. His website was hosted on Apple's MobileMe, and as of the writing of this question, Apple closed off all MobileMe content a few days ago. Correct me if I'm wrong, but didn't Apple warn of this a year in advance? If so, then where's Randall Hyde? Come to think of it, all of the content on his website that I have seen is several years old. A lot of it is rather useful but unfinished.

    Read the article

  • Can Clojure's set and map syntax be added to other Lisp dialects?

    - by Cedric Martin
    In addition to creating lists using parentheses, Clojure lets you create vectors using [ ], maps using { } and sets using #{ }. Lisp is always said to be a very extensible language in which you can easily create DSLs and the like. But is Lisp so extensible that you can take any Lisp dialect and relatively easily add support for Clojure's vectors, maps and sets (which are all functions in Clojure)? I'm not necessarily asking about cons or similar operations actually working on these structures: what I'd like to know is whether other dialects could be modified so that the source code would look like Clojure's source code (that is, using matching [ ], { } and #{ } in addition to ( )). Note that if it cannot be done, this is not a criticism of Lisp: what I'd like to know is, technically, what would have to be done, or what cannot be done, if one were to add such a thing.

    Read the article

  • How can you become a competent web application security expert without breaking the law?

    - by hal10001
    I find this to be equivalent to undercover police officers who join a gang, do drugs and break the law as a last resort in order to enforce it. To be a competent security expert, I feel hacking has to be a constant hands-on effort. Yet that requires finding exploits, testing them on live applications, and being able to demonstrate those exploits with confidence. For those who consider themselves "experts" in web application security, what did you do to learn the art without actually breaking the law? Or is this the gray area that nobody likes to talk about because you have to bend the law to its limits?

    Read the article

  • How to prepare for the GRE Computer Science Subject Test?

    - by Maddy.Shik
    How do I prepare for the GRE Computer Science subject test? Are there any standard textbooks I should follow? I want to score as competitively as possible. What are some good references? Is there anything that top schools like CMU, MIT, and Stanford would expect? For example, Cormen et al. is considered very good for algorithms. Please suggest standard textbooks for each subject covered by the test, like computer architecture, database design, operating systems, discrete math, etc.

    Read the article

  • Which topics should be covered in a basic undergraduate C++ course?

    - by Gulshan
    I have a young lecturer friend who is going to teach the undergraduate C++ course in CS. He asked me for some suggestions on how the course should be organized, and now I am asking you. I have seen many trends in universities that lead to a nasty experience of C++, so please make suggestions from a professional programmer's point of view. For your information, the students taking the course took a course like "Introduction to Programming with C" in the previous semester.

    Read the article

  • Release an upgraded iOS app with a different revenue model

    - by tassock
    I am starting a new iOS project and initially plan to release a simple free version to gather feedback. I don't intend to monetize or market this initial version. However, I believe "version 2" of this app will be good enough to pay for. I would prefer to release version 2 as an upgrade from version 1 rather than as a separate app. This way I can reserve a name for the app, and it will also be easier to keep everything in a single repository. Are there any downsides to this approach? It's my understanding that I can change the price of an app at any time, so transitioning to a paid app shouldn't be an issue, should it?

    Read the article

  • Any valid reason to Nest Master Pages in ASP.Net rather than Inherit?

    - by James P. Wright
    I'm currently in a debate at work, and I cannot fathom why someone would intentionally avoid inheritance with master pages. For reference, here is the project setup:
      BaseProject
        MainMasterPage
          SomeEvent
      SiteProject
        SiteMasterPage (nests MainMasterPage)
      OtherSiteProject
        MainMasterPage (from BaseProject)
    The debate came up because some code in BaseProject needs to know about SomeEvent. With the setup above, the code in BaseProject needs to call this.Master.Master. That same code in BaseProject also applies to OtherSiteProject, where the page is accessed simply as this.Master. SiteMasterPage has no code differences, only HTML differences. If SiteMasterPage inherits MainMasterPage rather than nesting it, then all the code is valid as this.Master. Can anyone think of a reason to use a nested master page here instead of an inherited one?

    Read the article
