Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.

  • What tools, libraries, or framework is needed to create a completely offline Javascript application?

    - by makerofthings7
    I am interested in creating an HTML application that can run as disconnected from the server as possible. Two examples of this include OWA in Exchange 2013 and, to a lesser extent, the client available at www.ripple.com. With the focus on OWA in Exchange 2013, what is needed to replicate that offline functionality in a different application? A list of technologies, frameworks, etc. would be immensely helpful.

    Read the article

  • Is 'Protection' an acceptable Java class name?

    - by jonny
    This comes from a closed thread at Stack Overflow, where there are already some useful answers, though a commenter suggested I post here. I hope this is OK! I'm trying my best to write good, readable code, but often have doubts about my work! I'm creating some code to check the status of some protected software, and have created a class with methods to check whether the software in use is licensed (there is a separate Licensing class). I've named the class 'Protection', which is currently accessed via the creation of an appProtect object. The methods in the class allow checking a number of things about the application, in order to confirm that it is in fact licensed for use. Is 'Protection' an acceptable name for such a class? I read somewhere that if you have to think too long about the names of methods, classes, objects etc., then perhaps you may not be coding in an object-oriented way. I've spent a lot of time thinking about this before making this post, which has led me to doubt the suitability of the name! In creating (and proofreading) this post, I'm starting to seriously doubt my work so far. I'm also thinking I should probably rename the object to applicationProtection rather than appProtect (though am open to any comments on this too). I'm posting nonetheless, in the hope that I'll learn something from others' views/opinions, even if they're simply confirming I've "done it wrong"!

    Read the article

  • When does the "Do One Thing" paradigm become harmful?

    - by Petr
    For the sake of argument, here's a sample function that prints the contents of a given file line by line.
    Version 1:
        void printFile(const string & filePath) {
            fstream file(filePath, ios::in);
            string line;
            while (file.good()) {
                getline(file, line);
                cout << line << endl;
            }
        }
    I know it is recommended that functions do one thing at one level of abstraction. To me, though, the code above does pretty much one thing and is fairly atomic. Some books (such as Robert C. Martin's Clean Code) seem to suggest breaking the above code into separate functions.
    Version 2:
        void printLine(const string & line) {
            cout << line << endl;
        }

        void printLines(fstream & file) {
            string line;
            while (file.good()) {
                getline(file, line);
                printLine(line);
            }
        }

        void printFile(const string & filePath) {
            fstream file(filePath, ios::in);
            printLines(file);
        }
    I understand what they want to achieve (open file / read lines / print line), but isn't it a bit of overkill? The original version is simple and in some sense already does one thing - prints a file. The second version will lead to a large number of really small functions which may be far less legible than the first version. Wouldn't it be better, in this case, to have the code in one place? At which point does the "Do One Thing" paradigm become harmful?

    Read the article

  • Is referencing a bug/issue in the commit message considered good practice?

    - by Christian P
    I'm working on a project where we have the source control set up to automatically write notes in the bug tracker. We simply write the bug issue ID in the commit message, and the commit message is added as a note in the bug tracker. I can see only a few downsides to this practice: if, sometime in the future, the source code gets separated from the bug-tracking software (or the reported bugs/issues are somehow lost), or if someone is looking through the commit history but doesn't have access to our bug tracker. My question is whether having a bug/issue reference in the commit message is considered good practice. Are there some other downsides?

    Read the article

  • MonoGame: reliable enough to be accepted on the iOS, Win 8 and Android stores?

    - by Serguei Fedorov
    I love XNA; it simplifies rendering code to where I don't have to deal with it, it runs on C#, and it has a fairly large community and documentation. I would love to be able to use it for games across many platforms. However, I am a little bit concerned about how well it will be received by platform owners; Apple has very tight rules about the code base, but Android does not. Microsoft's new Windows 8 platform seems to be pretty lenient, but I am not sure how they would respond to an XNA project being pushed to the app store (given they suddenly decided to dump it and force developers to use C++/Direct3D). So the bottom line is: is it safe to invest time and energy into a project that runs on MonoGame? In the end, is it possible to see my game on multiple platforms and not be shot down with a useless product?

    Read the article

  • When not to use Spring to instantiate a bean?

    - by Rishabh
    I am trying to understand what the correct usage of Spring would be - not syntactically, but in terms of its purpose. If one is using Spring, then should Spring code replace all bean instantiation code? When should one use, or not use, Spring to instantiate a bean? Maybe the following code sample will help you understand my dilemma:
        List<ClassA> caList = new ArrayList<ClassA>();
        for (String name : nameList) {
            ClassA ca = new ClassA();
            ca.setName(name);
            caList.add(ca);
        }
    If I configure Spring, it becomes something like:
        List<ClassA> caList = new ArrayList<ClassA>();
        for (String name : nameList) {
            ClassA ca = (ClassA) SomeContext.getBean(BeanLookupConstants.CLASS_A);
            ca.setName(name);
            caList.add(ca);
        }
    I personally think using Spring here is an unnecessary overhead, because the first version is simpler to read and understand, and it isn't really a good place for dependency injection: I am not expecting that there will be multiple/varied implementations of ClassA that I would like the freedom to replace via Spring configuration at a later point in time. Am I thinking correctly? If not, where am I going wrong?
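    For illustration, a minimal sketch of the usual middle ground, assuming a hypothetical ClassA with no container-managed dependencies of its own: if it ever did need them, a prototype-scoped bean fetched through an injected ObjectFactory keeps the loop free of static context lookups, while plain new remains perfectly idiomatic otherwise. The class name ClassAAssembler and field classAFactory are illustrative, not from the original post:
        // Hypothetical sketch: assumes ClassA is registered with @Scope("prototype"),
        // so every getObject() call yields a fresh, fully wired instance.
        import org.springframework.beans.factory.ObjectFactory;
        import org.springframework.beans.factory.annotation.Autowired;
        import java.util.ArrayList;
        import java.util.List;

        public class ClassAAssembler {
            @Autowired
            private ObjectFactory<ClassA> classAFactory;

            public List<ClassA> build(List<String> nameList) {
                List<ClassA> caList = new ArrayList<ClassA>();
                for (String name : nameList) {
                    ClassA ca = classAFactory.getObject(); // created by the container, dependencies injected
                    ca.setName(name);
                    caList.add(ca);
                }
                return caList;
            }
        }
    The design point is that the container is only worth involving when ClassA itself needs injected collaborators or configurable implementations; for a plain value-like object, direct instantiation is the simpler choice.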

    Read the article

  • How to prevent useless content from loading on the page in Responsive Design

    - by Ícaro Leandro
    In Responsive Design, we hide elements on the page with @media queries and display: none in the CSS. OK, but in my system, browsers narrower than 800px must not merely hide some content - they must avoid loading it at all. I mean, on a desktop with more than 800px of screen the page loads fully; on mobile devices, or even on a desktop with less than 800px, some content should not be loaded. I want to make the page load faster in these browsers. The system is built in PHP and has some JavaScript. Thanks...

    Read the article

  • Naming a predicate: "precondition" or "precondition_is_met"?

    - by RexE
    In my web app framework, each page can have a precondition that needs to be satisfied before it can be displayed to the user. For example, if user 1 and user 2 are playing a back-and-forth role-playing game, user 2 needs to wait for user 1 to finish his turn before he can take his turn. Otherwise, the user is shown a waiting page. This is implemented with a predicate:
        def precondition(self):
            return user_1.completed_turn
    The simplest name for this API is precondition, but this leads to code like if precondition(): ..., which is not really obvious. It seems to me more accurate to call it precondition_is_met(), but I'm not sure about that either. Is there a best practice for naming methods like this?

    Read the article

  • MVC Validation with ModelState.isValid through a wizard

    - by Emmanuel TOPE
    I'm working on a small educational project on MVC 3, and I'm facing a small problem when attempting to handle validation in my application through a wizard. I tried to benefit from the ability of MVC 3 to deliver the content of a different view from the same URL when handling an [HttpPost] method on a page. In my case, my main model class contains about ten [Required] properties that I would like to expose through a small wizard in 3 steps. I want the user to be able to enter his personal information in the first step, then respond to some questions in the second step, and finally receive a confirmation mail from the web application with his credentials in the last step. I can't reach the last step because of the ModelState.IsValid check that I use to handle validation, which can't pass if I define some properties as [Required] but don't put them on the first view. As the answers to those questions come down to a couple of choices, I thought I might use nullable bool? properties in order to avoid validation issues, but I know that's not the proper way. Is there someone who would like to help me find a way to extend my validation to those three steps? Thanks in advance, and sorry for my English, I'm not a native speaker.

    Read the article

  • Is making my own copyright licence safe?

    - by abcd
    I've seen various open source libraries (actually I've seen it for assets as well) having a home-baked license in the following manner:
        SomeGuy's License:
        1. You can use this code freely in commercial projects and modify it as you wish, but not sell it
        2. If you want to sell a modified version, drop me an email first, or give credits to me
    Edit: The above example is ambiguous, so I am giving another one; I want to know if 3 lines of license will hold some ground:
        SomeGuy's License:
        1. You can use this code in a commercial project as a 3rd party library
        2. You can't sell it as a derivative work
    I know that such a license is not polished at all - for example, the Creative Commons set of licenses seem to be short, but actually have some large legal text underneath - but I wonder if at least some level of protection can be gained with a hobby license like that. My question is, could this hold any ground in court, or would the corporate lawyers of company X tear it apart?

    Read the article

  • How to understand Linux kernel source code for a beginner?

    - by Amit Chavan
    Hi, I am a student interested in working on memory management, particularly the page replacement component of the Linux kernel. What are the different guides that can help me begin understanding the kernel source? I have tried to read the books Understanding the Linux Virtual Memory Manager by Mel Gorman and Understanding the Linux Kernel by Cesati and Bovet, but they do not explain the flow of control through the code. They only end up explaining the various data structures used and the work various functions perform. This makes the code more confusing. My project deals with tweaking the page replacement algorithm in a mainstream kernel and analysing its performance for a set of workloads. Is there a flavor of the Linux kernel that would be easier to understand (if not the linux-2.6.xx kernel)?

    Read the article

  • Ways to organize interface and implementation in C++

    - by Felix Dombek
    I've seen that there are several different paradigms in C++ concerning what goes into the header file and what goes into the cpp file. AFAIK, most people, especially those from a C background, do:
    foo.h:
        class foo {
        private:
            int mem;
            int bar();
        public:
            foo();
            foo(const foo&);
            foo& operator=(foo);
            ~foo();
        };
    foo.cpp:
        #include "foo.h"
        int foo::bar() { return mem; }
        foo::foo() { mem = 42; }
        foo::foo(const foo& f) { mem = f.mem; }
        foo& foo::operator=(foo f) { mem = f.mem; return *this; }
        foo::~foo() {}
        int main(int argc, char *argv[]) { foo f; }
    However, my lecturers usually teach C++ to beginners like this:
    foo.h:
        class foo {
        private:
            int mem;
            int bar() { return mem; }
        public:
            foo() { mem = 42; }
            foo(const foo& f) { mem = f.mem; }
            foo& operator=(foo f) { mem = f.mem; return *this; }
            ~foo() {}
        };
    foo.cpp:
        #include "foo.h"
        int main(int argc, char* argv[]) { foo f; }
        // other global helper functions, DLL exports, and whatnot
    Originally coming from Java, I have also always stuck to this second way, for several reasons: I only have to change something in one place if the interface or method names change, I like the different indentation of things in classes when I look at their implementation, and I find names more readable as foo compared to foo::foo. I want to collect pros and cons for either way. Maybe there are even still other ways? One disadvantage of my way is of course the need for occasional forward declarations.

    Read the article

  • WCF Keep Alive: Whether to disable keepAliveEnabled

    - by Lijo
    I have a WCF web service hosted in a load-balanced environment. I do not need any WCF session-related functionality in the service.
    Question: In which scenarios will performance be best with
        keepAliveEnabled = false
        keepAliveEnabled = true
    Reference (from "Load Balancing"): By default, the BasicHttpBinding sends a connection HTTP header in messages with a Keep-Alive value, which enables clients to establish persistent connections to the services that support them. This configuration offers enhanced throughput because previously established connections can be reused to send subsequent messages to the same server. However, connection reuse may cause clients to become strongly associated to a specific server within the load-balanced farm, which reduces the effectiveness of round-robin load balancing. If this behavior is undesirable, HTTP Keep-Alive can be disabled on the server using the KeepAliveEnabled property with a CustomBinding or user-defined Binding.

    Read the article

  • Why are people using C instead of C++? [closed]

    - by Darth
    Possible Duplicate: When to use C over C++, and C++ over C? Many times I've stumbled upon people saying that C++ is not always better than C. A great example here would be the Linux kernel, where they simply decided to use C instead of C++ because it had better compilers at the time. But that was many years ago and a lot has changed. So the question is, why are people still using C over C++? I guess there are probably some cases (like embedded devices) where there simply isn't a good C++ compiler, or am I wrong here? What are the other cases when it is better to go with C instead of C++?

    Read the article

  • Customizing the NUnit GUI for data-driven testing

    - by rwong
    My test project consists of a set of input data files which are fed into a piece of legacy third-party software. Since the input data files for this software are difficult to construct (not something that can be done intentionally), I am not going to add new input data files. Each input data file will be subject to a set of "test functions". Some of the test functions can be invoked independently. Other test functions represent the stages of a sequential operation - if an earlier stage fails, the subsequent stages do not need to be executed. I have experimented with the NUnit parametrized test cases (TestCaseAttribute and TestCaseSourceAttribute), passing in the list of data files as test cases. I am generally satisfied with the ability to select the input data for testing. However, I would like to see if it is possible to customize the GUI's tree structure, so that the "test functions" become the children of the "input data". For example:
        File #1
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
            StageOneTest
            StageTwoTest
            StageThreeTest
        File #2
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
            StageOneTest
            StageTwoTest
            StageThreeTest
    This will be useful for identifying the stage that failed during the processing of a particular input file. Are there any tips and tricks that will enable the new tree layout? Do I need to customize NUnit to get this layout?

    Read the article

  • How can calculus and linear algebra be useful to a system programmer?

    - by Victor
    I found a website saying that calculus and linear algebra are necessary for systems programming. Systems programming, as far as I know, is about OS development, drivers, utilities and so on. I just can't figure out how calculus and linear algebra can be helpful in that. I know that calculus has several applications in science, but in this particular field of programming I just can't imagine how calculus can be so important. The information was on this site: http://www.wikihow.com/Become-a-Programmer Edit: Some answers here are explaining algorithm complexity and optimization. When I asked this question I was trying to be more specific about the area of systems programming. Algorithm complexity and optimization can be applied to any area of programming, not just systems programming. That may be why I wasn't able to come up with that line of thinking at the time of the question.

    Read the article

  • Why is iOS "jailbreaking" CPU specific? [closed]

    - by Ted Wong
    Recently, iOS 6 was "jailbroken", but only on the Apple A4 CPU. Why is the "jailbreaking" process specific to a CPU? From Wikipedia: ... "iOS jailbreaking is the process of removing the limitations imposed by Apple on devices running the iOS operating system through the use of hardware/software exploits - such devices include the iPhone, iPod touch, iPad, and second generation Apple TV. Jailbreaking allows iOS users to gain root access to the operating system" ...

    Read the article

  • Is Microsoft's LCG random generator patented?

    - by user396672
    I need a very simple pseudo-random generator (no specific quality requirements) and I found that Microsoft's variant of the LCG algorithm, used for the rand() C runtime library function, fits my needs (gcc's one seems too complex). I found the algorithm here: http://rosettacode.org/wiki/Linear_congruential_generator#C However, I worry the algorithm (including its "magic numbers", i.e. coefficients) may be patented or restricted for use in some other way. Is it allowed to use this algorithm without any licence or patent restrictions or not? I can't use the library rand() because I need my results to be exactly reproducible on different platforms.
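    As an aside on the reproducibility point (not the patent question): the recurrence is small enough to implement by hand instead of calling the platform's rand(). A minimal Java sketch, assuming the MSVC-style constants listed on the linked Rosetta Code page (multiplier 214013, increment 2531011, output taken from bits 16-30); the exact coefficients should be verified against that page:
        // Minimal linear congruential generator sketch.
        public final class MsvcStyleLcg {
            private int state;

            public MsvcStyleLcg(int seed) {
                this.state = seed;
            }

            // Returns a value in [0, 32767], like the C runtime's rand().
            public int next() {
                state = state * 214013 + 2531011;  // 32-bit wraparound plays the role of mod 2^32
                return (state >>> 16) & 0x7FFF;    // keep bits 16..30 of the new state
            }
        }
    Because Java's int arithmetic wraps at 32 bits just like the unsigned arithmetic in the C version, the same seed yields the same sequence on any platform.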

    Read the article

  • Requesting quality analysis test cases ahead of an implementation/change

    - by arin
    Recently I was assigned to work on a major requirement that falls somewhere between a change request and an improvement. The previous implementation was done (badly) by a senior developer who left the company, and did so without leaving a trace of documentation. Here were my initial steps in approaching this problem: Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes". Knowing the widespread use and effects of this requirement, had it come to a point where the requirement could not be finished prior to release, I asked if it would be a viable option to scrap the current state and revert back to the state prior to the ex-senior's implementation. The answer was "Most likely: no". Understanding that the requirement was coming from higher management, and due to its complexity, I asked for all usability test cases to be written (by QA) prior to the implementation and given to me, to aid my comprehension of the task. This was a big no-no for the folks in management, as they failed to understand this approach. Knowing that I had to insist on my request and on the responsibility for this requirement, I insisted, and have fallen out of favor with some of the folks, leaving me in a state of "baffledness". Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement, trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly? P.S.: The change request/improvement was cancelled and the implementation was reverted back to the prior state due to the complexity of the problem and lack of time. This only happened after a 2-hour-long meeting with other seniors in order to convince the aforementioned folks.

    Read the article

  • Are there any examples of a temporal field/object updater?

    - by Bryan Agee
    The system in question has numerous examples of temporal objects and fields - ones whose value depends on the point in time at which you ask. An example of this would be someone's rate of pay: there are different answers depending on when you ask, and there may be constraints, e.g. whether there can ever be more than one of a certain temporal object concurrently, etc. Ideally, there would be an object that handles those constraints when a new state/stateful object is introduced; when a new value is set, it would prevent creating negative ranges and overlaps. Martin Fowler has written some great material on this (such as this description of Temporal Objects), but what I've found of it tends to be entirely theoretical, with no concrete implementations. PHP is the target language, but examples in any language would be most helpful.
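    Since the question welcomes examples in any language, here is a minimal Java sketch of one way to enforce the constraints described (no negative ranges, no overlaps) in a simple append-only temporal property; the class and method names are illustrative, not taken from Fowler's article:
        import java.time.LocalDate;
        import java.util.Map;
        import java.util.TreeMap;

        // A value that varies over time: each entry takes effect on its key date
        // and stays in force until the next entry's effective date.
        public final class TemporalProperty<V> {
            private final TreeMap<LocalDate, V> history = new TreeMap<>();

            // Record a new value effective from the given date. Only dates after the
            // latest recorded one are accepted, so overlapping or zero/negative ranges
            // cannot be created in this append-only model.
            public void put(LocalDate effectiveFrom, V value) {
                if (!history.isEmpty() && !effectiveFrom.isAfter(history.lastKey())) {
                    throw new IllegalArgumentException(
                        "New value must take effect after " + history.lastKey());
                }
                history.put(effectiveFrom, value);
            }

            // Answer "what was the value on this date?".
            public V get(LocalDate asOf) {
                Map.Entry<LocalDate, V> entry = history.floorEntry(asOf);
                if (entry == null) {
                    throw new IllegalStateException("No value recorded on or before " + asOf);
                }
                return entry.getValue();
            }
        }
    Hypothetical usage for a rate of pay: put(LocalDate.of(2011, 1, 1), "50000"), then put(LocalDate.of(2012, 6, 1), "55000"); get(LocalDate.of(2012, 1, 15)) returns the 2011 value. A fuller implementation would also allow retroactive corrections, which is where Fowler's discussion of audit dimensions comes in.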

    Read the article

  • Still about Perl vs Python, but (to me) slightly different from what has been asked [closed]

    - by B Chen
    Being a newbie to coding, I read on this site that Perl is still as viable as it has been, while Python, quoting someone else's post, is good but just "snake oil" (not sure what this refers to exactly, though). So from the responses in that post, I got the gist that Perl is good and worth learning. My question is - pardon me for phrasing it in this "non-programmer's" way - which one should I learn FIRST? (I am actually currently learning R.) Here is the background info: (a) I will be using it mostly for data mining and statistical analysis. (b) Will there be a "first" and "later" issue with learning either Perl or Python? That is, after I become competent with one language, would there be a need to learn the second one (for a similar task)? (c) If there are circumstances where I must learn the second one, would learning Perl FIRST be better than learning Python? I hope to learn as much as possible from exchanging info here, so please provide more than just "it depends" type of info. Great many thanks to all who choose to respond to my query.

    Read the article

  • How does one pronounce "cron" as in "cron job"?

    - by Rooke
    Before someone ban-hammers this question as they do with all other pronunciation questions, let me explain its relevance. Verbal communication among co-workers and partners is important; today I was on a conference call with people discussing what I thought was something to do with "Chrome", as in Google Chrome. I pronounce the "cron" in "cron job" with a short O, much like "tron", "gone," or "pawn", but this individual pronounced it with a long O, as in "hone", "bone", or "stone" (notice the e at the end of all of those!). Is there a standard pronunciation? Or is this a matter of opinion? For example, there's nothing ambiguous about the pronunciation of "Firefox", but debate is raging over "potato" and "tomato".

    Read the article

  • What is the difference between Static code analysis and code review?

    - by Xander
    I just wanted to know what the difference is between static code analysis and code review. How is each of these done? What tools are available today for code review / static analysis of PHP? I would also like to know about good code review tools for any language. Thanks in advance. Xander Cage. Note: I am asking this because I was not able to understand the difference. Please, I expect some answers other than "I am Mr. Geek and you asked an irrelevant bla bla... this is closed". I know this sounds mean, but I am sorry.

    Read the article

  • What are the IEEE and ACM good for?

    - by Joshua Fox
    Membership in the IEEE and ACM is sometimes portrayed as a sign of professionalism. But all that is involved, as far as I can tell, is sending them your money. In return, besides the potential resume line, these organizations sponsor conferences and journals. I can always attend a conference, or subscribe to or submit a paper to a journal, whether I am a member or not. If being a member makes some of that cheaper, or is a prerequisite for admission, then OK, but I still don't see the purpose of these organizations. The answer, as far as I can gather, is that their most important value is to provide some reading material. I'd suggest that this is not worth the money given the wide availability of other valuable reading material.

    Read the article

  • Best arguments for/against introducing ORM technology into a company's dev process

    - by james
    I have started using ORM technology in the last few years. My first exposure was NHibernate; I then moved on to Linq 2 Sql and Entity Framework. The issue I have, however, is that in some organisations I have found strong opposition to introducing ORM tools. They usually have a number of reasons: they have a lot of built-up SQL skills in the team, and are worried about the underlying SQL that ORMs generate; they have DBAs who like to be able to see the SQL an app uses so that they can review it for best practice; they are worried about performance (some people have "heard" that ORMs aren't as performant, but have no real proof themselves - there may well be some truth in this! :). So, I'm looking for the best or most convincing arguments that you have put forward FOR the use of ORM tools. Equally, I would be interested in the arguments against, too. Note: this is NOT a discussion over which ORM I should use.

    Read the article
