Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.


  • What is an effective way to convert a shared memory-mapped system to another data access model?

    - by Rob Jones
    I have a code base that is designed around shared memory. Each process that needs to access the memory maps it into its own address space. The data structures in the shared memory are accessed directly; that is, there is no API. For example, assume the following:

        typedef struct {
            int x;
            int y;
            struct {
                int a;
                int b;
            } z;
        } myStruct;

        myStruct s;

    Then a process might map the structure into its address space as:

        myStruct *s = mapGlobalMem();

    and use it as:

        int tmpX = s->x;

    The majority of the information in the global structure is configuration information that is set once and read many times. I would like to store this information in a database and develop an API to access the database instead. The problem is that these references are sprinkled throughout the code, so I need a way to parse the code and identify the global-structure references that will have to be refactored. I've looked into using ANTLR to create a parser that identifies references to a small set of structures and enters them into a custom symbol table; I could then use that symbol table to identify which source files need to be refactored. It looks like a promising approach. What other approaches are there? I'm looking for a programmatic approach, of course; there are far too many source files to examine each one visually. This is all ordinary ANSI C, nothing else.
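
    A crude but quick first pass, before committing to a full ANTLR grammar, is a purely lexical scan for member accesses through the known global pointers. It cannot resolve types the way a real symbol table can, but it narrows the search to candidate files fast. A minimal sketch in Python, where the names in GLOBAL_PTRS are hypothetical stand-ins for the variables actually returned by mapGlobalMem():

        import os
        import re
        import sys

        # Hypothetical names of the mapped-structure pointers; replace with
        # the variable names actually used in the code base.
        GLOBAL_PTRS = ("shmCfg", "shmState")

        # Matches e.g. "shmCfg->x" or "shmState -> z.a", lexically only.
        pattern = re.compile(r"\b(?:%s)\s*->\s*\w+" % "|".join(GLOBAL_PTRS))

        for root, _dirs, files in os.walk(sys.argv[1]):
            for name in files:
                if not name.endswith((".c", ".h")):
                    continue
                path = os.path.join(root, name)
                with open(path) as f:
                    for lineno, line in enumerate(f, 1):
                        if pattern.search(line):
                            print("%s:%d: %s" % (path, lineno, line.strip()))

    A scan like this under-reports wherever the pointer is aliased locally (myStruct *p = s;), so the parser-based route is still the reliable one; this only shortens the list it has to cover.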

    Read the article

  • Is it a good idea to require committing only working code?

    - by Astronavigator
    Sometimes I hear people say things like "all committed code must be working". Some articles even describe how to create svn or git hooks that compile and test the code before a commit is accepted. In my company we usually create one branch per feature, and one programmer usually works in that branch. I often make non-compilable commits (maybe one in a hundred, and, I think, with good reason). It seems to me that a requirement of "always compilable/stable" commits conflicts with the idea of committing frequently: a programmer would rather make one commit a week than test the whole project's stability and compilability ten times a day. For compilable code only, I use tags and a few selected branches (trunk, etc.).

    I see these reasons to commit code that is not fully working, or not even compilable:

    - If I am developing a new feature, it is hard to make it work while writing only a few lines of code.
    - If I am editing an existing feature, it is likewise sometimes hard to keep the code working the whole time.
    - If I am changing a function's prototype or interface, I may have to make hundreds of changes, and not mechanical but intellectual ones; each of them could reasonably be a commit of its own (but if I want every commit to be stable, I must make one commit instead of a hundred).

    In all these cases, insisting on stable commits would mean commits containing very many changes, and it would be very hard to find out later what happened in a given commit. Another aspect of the problem is that code that compiles is not guaranteed to work properly.

    So is it a good idea to require every commit to be stable and compilable? Does it depend on the branching model or the VCS? In your company, is it forbidden to make non-compilable commits? Is it a bad idea (and why) to reserve only selected branches (including trunk) and tags for stable versions?
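
    For reference, the hook-based gate mentioned above is only a few lines. A minimal sketch of a git pre-commit hook in Python (saved as .git/hooks/pre-commit and made executable), where the make targets are placeholders for whatever build and test commands the project really uses:

        #!/usr/bin/env python
        import subprocess
        import sys

        # Placeholder commands; substitute the project's real build and tests.
        CHECKS = (["make"], ["make", "test"])

        for cmd in CHECKS:
            if subprocess.call(cmd) != 0:
                sys.stderr.write("pre-commit: '%s' failed, commit aborted\n"
                                 % " ".join(cmd))
                sys.exit(1)  # any non-zero exit status blocks the commit

    Note that a hook under .git/hooks only binds the clone it lives in; enforcing the policy for a whole team takes a server-side hook (e.g., pre-receive) or CI gating instead.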

    Read the article

  • How to advance in my JavaScript skills? [closed]

    - by IlyaD
    I have been using JavaScript for about two years now, and I feel that I can only do really basic stuff: simple algorithms, and mostly jQuery for interactive elements on web pages. As I need to do more advanced things, I get the feeling that my knowledge is lacking. In most cases, when I find a piece of code it takes me quite some time to understand it, and I don't understand why it is written the way it is. I have no background in computer science, so I'm not sure whether I should go back to the basics or get an advanced JavaScript book or course. How can I make the jump from using JS for scripting to becoming a real programmer?

    Read the article

  • Should we use an outside CMS?

    - by SomeKittens
    I work at a web design and development shop. Everything we do is centered around the Joomla! CMS, and I'm a bit worried: if anything goes wrong with Joomla (a major security flaw is revealed, or Joomla folds and ceases development), we're sunk. I'm meeting with the CEO to plan the next few steps for our company. Should I recommend that we create our own in-house CMS, or am I just being paranoid about a single point of failure?

    Read the article

  • Does a good programmer need to have good spatial sense?

    - by user297318
    Do you need a good spatial sense to be a good programmer? I have next to none (I think it has to do with the differing vision in my two eyes). I have only coded fairly small things so far, and I wonder whether this interferes with the ability to "imagine" how the pieces of the code fit together in a more complex program. Sorry for my English; I'm Austrian and not so used to writing in it. Thanks for your answers.

    Read the article

  • Interfacing with payment systems

    - by etranger
    Hello all. I'm a complete newbie to using online payment systems for web projects, and can't really think of where to start. Let's assume that the web system in question needs to generate some income online, and that the business idea and functionality are in place, while organizing the cash flow is the only unsolved problem. The points of interest are how custom-developed software interfaces with payment systems, and how the resulting income becomes available to the owner. I do understand that there are probably hundreds of systems out there, but to be more specific about which of them would suit, I'd have to know how they work, and that's where I don't feel I understand much. Thanks in advance.

    Read the article

  • What issues carry the highest risk in a software project?

    - by Mehrdad
    Clearly, software projects differ from other industries in many respects: quality assurance, measurement of project progress, and much else. The unique characteristics of software projects also make the risk-management process unique. Many issues in a project can lead to unacceptable delay or to failure to deliver business value; they can even turn the project into a complete disaster. What are the deadliest risk factors in a software project, and how do you analyze, prevent, and handle them? I'm particularly interested in issues that can be detected at the beginning and should be watched throughout the project (for example, a third-party API that the application depends on and that lacks documentation). Please share your experiences if they are relevant.

    Read the article

  • Defining a function that is both a generator and recursive [on hold]

    - by user96454
    I am new to Python, so this code might not be particularly clean; it's just for learning purposes. I had the idea of writing a function that would display the directory tree below a specified path. Then I added the global variable number_of_py to count how many Python files were in that tree, which also worked. Finally, I decided to turn the whole thing into a generator, and the recursion broke. My understanding of generators is that once next() is called, Python executes the body of the function and "yields" a value each time until it hits the end of the body. Can someone explain why this doesn't work? Thanks.

        import os
        from sys import argv

        script, path = argv
        number_of_py = 0
        lines_of_code = 0

        def list_files(directory, key=''):
            global number_of_py
            files = os.listdir(directory)
            for f in files:
                real_path = os.path.join(directory, f)
                if os.path.isdir(real_path):
                    list_files(real_path, key=key + ' ')
                else:
                    if real_path.split('.')[-1] == 'py':
                        number_of_py += 1
                        with open(real_path) as g:
                            yield len(g.read())
                        print key + real_path

        for i in list_files(argv[1]):
            lines_of_code += i

        print 'total number of lines of code: %d' % lines_of_code
        print 'total number of py files: %d' % number_of_py
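
    The recursion breaks because calling a generator function does not run its body; it only creates a generator object. The recursive list_files(real_path, ...) call above builds such an object and immediately discards it, so subdirectories are never walked. Re-yielding the recursive call's values restores the traversal. A minimal sketch of a corrected generator (kept close to the question's code; Python 3 could replace the inner loop with yield from):

        import os

        def list_files(directory, key=''):
            for f in os.listdir(directory):
                real_path = os.path.join(directory, f)
                if os.path.isdir(real_path):
                    # A recursive call only creates a generator; its values
                    # must be re-yielded for the recursion to take effect.
                    for n in list_files(real_path, key=key + ' '):
                        yield n
                elif real_path.endswith('.py'):
                    print(key + real_path)
                    with open(real_path) as g:
                        # Note: len(g.read()) counts characters, not lines;
                        # g.read().count('\n') would count lines.
                        yield len(g.read())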

    Read the article

  • Why do memory-managed languages retain the `new` keyword?

    - by Channel72
    The new keyword in languages like Java, JavaScript, and C# creates a new instance of a class. This syntax seems to have been inherited from C++, where new is used specifically to allocate a new instance of a class on the heap and return a pointer to it. In C++, this is not the only way to construct an object: you can also construct one on the stack without using new, and in fact this way of constructing objects is much more common in C++.

    So, coming from a C++ background, the new keyword in languages like Java, JavaScript, and C# seemed natural and obvious to me. Then I started to learn Python, which doesn't have the new keyword; in Python, an instance is constructed simply by calling the constructor, as in f = Foo(). At first this seemed a bit off to me, until it occurred to me that there's no reason for Python to have new, because everything is an object, so there's no need to disambiguate between various constructor syntaxes.

    But then I thought: what is really the point of new in Java? Why should we say Object o = new Object(); rather than just Object o = Object();? In C++ there is definitely a need for new, since we need to distinguish between allocating on the heap and allocating on the stack, but in Java all objects are constructed on the heap, so why even have the keyword? The same question can be asked about JavaScript. In C#, which I'm much less familiar with, I think new may have some purpose in distinguishing between object types and value types, but I'm not sure.

    Regardless, it seems to me that many languages which came after C++ simply "inherited" the new keyword without really needing it. It's almost like a vestigial keyword: we don't seem to need it for any reason, and yet it's there. Am I correct about this, or is there some compelling reason that new needs to exist in C++-inspired memory-managed languages like Java, JavaScript, and C#?

    Read the article

  • Using qTip2 in Cognos "content" customization

    - by Jonathan
    I'd like to know how to make a call to a query in Cognos 8 using qTip2. Where do I plug in the required "content"? For instance, in the example I'm working from, Ajax was calling a wiki server that pulled an image and content dynamically. It goes without saying that I need to plug my own content in there, but what exactly do I plug in? I know we have ASP.NET and IIS on our Cognos server side, but where can I just plug in a span for the data to appear dynamically in my scrollable qTip2?

    Read the article

  • "A", "an", and "the" in method and function names: What's your take?

    - by Mike Spross
    I'm sure many of us have seen method names like these at one point or another:

        UploadTheFileToTheServerPlease
        CreateATemporaryFile
        WriteTheRecordToTheDatabase
        ResetTheSystemClock

    That is, method names that are also grammatically correct English sentences, and that include extra words purely to make them read like prose. Personally, I'm not a huge fan of such "literal" method names; I prefer to be succinct while still being as clear as possible. To me, words like "a", "an", and "the" look plain awkward in method names, and they make names needlessly long without really adding anything useful. I would prefer the following names for the previous examples:

        UploadFileToServer
        CreateTemporaryFile
        WriteOutRecord
        ResetSystemClock

    In my experience, the shorter style is far more common than writing out the lengthier names, but I have seen both, and I was curious what other people think of the two approaches. So, are you in the "method names that read like prose" camp, or the "method names that say what I mean but read out loud like a bad foreign-language-to-English translation" camp?

    Read the article

  • How do you organize your projects?

    - by Sergio
    Do you have a particular style of organizing projects? For example, I'm currently creating a project for a couple of schools here in Bolivia, and this is how I organized it:

        TutoMentor (solution)
            TutoMentor.UI (WinForms project)
            TutoMentor.Data (class library project)

    How exactly do you organize your projects? Do you have an example of something you organized and are proud of? Can you share a screenshot of your Solution pane? In the UI area of my application, I'm having trouble deciding on a good scheme for organizing the different forms and where they belong.

    Edit: What about organizing the different forms inside the .UI project? Where and how should I group the forms? Putting them all at the root level of the project is a bad idea.

    Read the article

  • Is it bad practice to use the same name for arguments and members?

    - by stijn
    Sometimes I write constructor code like

        class X
        {
        public:
            X( const int numberOfThingsToDo ) :
                numberOfThingsToDo( numberOfThingsToDo )
            {
            }

        private:
            int numberOfThingsToDo;
        };

    or in C#:

        class X
        {
            public X( int numberOfThingsToDo )
            {
                this.numberOfThingsToDo = numberOfThingsToDo;
            }

            private int numberOfThingsToDo;
        }

    I think the main reason is that when I come up with a suitable member name, I see no reason to use a different one for the argument initializing it, and since I'm also no fan of using underscores, the easiest thing is just to pick the same name; after all, it's suitable. Is this considered bad practice, however? Any drawbacks (apart from shooting yourself in the foot by forgetting the this in C#)?
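
    For comparison, this is the unremarkable default in languages with an explicit receiver. A small Python sketch of the same class, where the mandatory self. prefix removes the shadowing risk that the C# version leaves open:

        class X:
            def __init__(self, number_of_things_to_do):
                # Parameter and attribute share a name; the explicit `self.`
                # makes the assignment unambiguous, so nothing is shadowed.
                self.number_of_things_to_do = number_of_things_to_do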

    Read the article

  • Which features of user story management should an agile team look for?

    - by Sonja Dimitrijevic
    In my research study, I need to identify the key features of user story management tools that can be used to support agile development. So far, I identified the following general groups of features:

    - user role modeling and personas support,
    - user stories and epics management,
    - acceptance testing support,
    - high-level release planning,
    - low-level iteration planning, and
    - progress tracking.

    Each group contains some specific features, e.g., support for story points, writing of acceptance tests, etc. Which features of user story management should an agile team look for, especially when switching from tangible tools (index cards, pin boards, and big visible charts) to a software tool? Are some features more important than the others? Many thanks in advance!

    Read the article

  • Everything has an Interface [closed]

    - by Shane
    Possible Duplicate: Do I need to use an interface when only one class will ever implement it?

    I am taking over a project where every single concrete class implements an interface. The vast majority of these interfaces are implemented by a single class that shares a similar name and the exact same methods (e.g., MyCar and MyCarImpl). Almost no two classes in the project implement more than the interface that shares their name. I know the general recommendation is to code to an interface rather than an implementation, but isn't this taking it a bit too far? The system might be more flexible, in that it is easier to add a new class that behaves very much like an existing one; however, it is significantly harder to read through the code, and method changes now require two edits instead of one. Personally, I create interfaces only when there is a need for multiple classes to share the same behavior; I subscribe to YAGNI, so I don't create something unless I see a real need for it. Am I doing it all wrong, or is this project going way overboard?

    Read the article

  • Python Multiprocessing with Queue vs ZeroMQ IPC

    - by Imraan
    I am busy writing a Python application using ZeroMQ, implementing a variation of the Majordomo pattern as described in the ZGuide. I have a broker as an intermediary between a set of workers and clients. I want to do extensive logging for every request that comes in, but I do not want the broker to waste time doing it; the broker should hand the logging off to something else. I have thought of two ways:

    1. Create workers that are only for logging, and use the ZeroMQ IPC transport.
    2. Use multiprocessing with a Queue.

    I am not sure which one is better, or faster for that matter. The first option lets me reuse the worker base classes I already use for my normal workers, but the second option seems quicker to implement. I would like advice or comments on the above, or possibly a different solution.
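
    For a sense of what the second option involves, here is a minimal sketch of a dedicated logging process fed through a multiprocessing.Queue; the record format and log destination are placeholders, not taken from the question:

        import multiprocessing

        LOG_FILE = 'requests.log'  # placeholder destination

        def log_worker(queue):
            # Dedicated process that drains the queue, so the broker never
            # blocks on disk I/O. A None sentinel tells it to shut down.
            with open(LOG_FILE, 'a') as f:
                while True:
                    record = queue.get()
                    if record is None:
                        break
                    f.write('%s\n' % record)
                    f.flush()

        if __name__ == '__main__':
            q = multiprocessing.Queue()
            worker = multiprocessing.Process(target=log_worker, args=(q,))
            worker.start()
            # Inside the broker's request loop, logging becomes one cheap put:
            q.put('client 42 -> service echo')  # placeholder request record
            q.put(None)  # sentinel: ask the worker to exit
            worker.join()

    The ZeroMQ variant is structurally the same idea (for example, a PUSH socket in the broker and a PULL socket in the logger over an ipc:// endpoint), which is why the deciding factor is usually code reuse and operational fit rather than raw speed.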

    Read the article

  • Your experiences with TDD [closed]

    - by SkonJeet
    In your experience, does TDD prove to be a useful approach in all development projects? Do you take the TDD approach even when working on an existing project? And how does mocking tie in with a TDD discipline? I'm not looking for opinions; I'm looking for developers' advice, tips, and learning resources on using TDD, based on their experience. I'm going to spend the day equipping myself with enough knowledge about TDD to start taking small steps towards using it, but I don't know to what extent I should be using it.

    Read the article

  • Does learning a functional language make a better OOP programmer?

    - by GavinH
    As a Java/C#/C++ programmer I hear a lot of talk about functional languages, but have never found a need to learn one. I've also heard that the higher level of thinking introduced in functional languages makes you a better OOP/procedural language programmer. Can anyone confirm this? In what ways does it improve your programming skills? What is a good choice of language to learn with the goal of improving skills in a less sophisticated language?

    Read the article

  • What are the so-called "levels" of understanding multithreading?

    - by Dan Tao
    I seem to remember reading somewhere some list of four "levels" of understanding multithreading. This may have been in a formal publication, or it may have been in an extremely informal context (even in a Stack Overflow question, for example). Unfortunately I don't remember who referred to them or precisely what they were. I seem to recall that they were roughly:

    1. Total ignorance
    2. Awareness mixed with incompetence
    3. Relative competence mixed with fear
    4. True understanding

    My intention is to refer to these levels, with a reference, in a blog post I'm writing, but I can't for the life of me remember where I first encountered the list. Brief Google searches have proved unfruitful.

    Read the article

  • How to apply verification and validation to the following example

    - by user970696
    I have been following the verification and validation questions here with my colleagues, yet we are unable to see the slight differences between the two, probably because of a language barrier in technical English. An example:

    Requirement specification: the user wants to control the lights in four rooms by remote commands sent from the UI, for each room separately.

    Functional specification: the UI will contain four checkboxes, labelled according to the rooms they control.

    - When a checkbox is checked, a signal is sent to the corresponding light, and a green dot appears next to the checkbox.
    - When a checkbox is unchecked, a turn-off signal is sent to the corresponding light, and a red dot appears next to the checkbox.

    Let me start with what I have learned here. Verification, according to many great answers here, ensures that the product reflects the specified requirements. Since the functional spec is produced by the supplier based on the customer's requirements, it is verified for completeness and correctness; then the design document is checked against the functional spec (does it design the four checkboxes?), and the source code against the design (is there code for the four checkboxes, for the functions that send the signals, and so on; is it traceable to the requirements?).

    Okay, the product is built and we need to test it, that is, validate it. Here comes our trouble in understanding. Validation should ensure that the product meets the requirements for its specific intended use, which is basically the business requirement (does it work? can I control the lights from the UI?). But testers will certainly work from the functional spec, making sure the checkboxes are there, working, labelled, and so on; they are basically checking whether the requirements in the functional spec were met in the final product. Isn't that verification? (It should not be; let's stick to ISO 12207, in which only validation is the actual testing.)

    Read the article

  • How to manage and estimate unstructured requirements received from customers

    - by user20358
    A lot of the time I receive a software system's requirements from our customers in a very unstructured format. It is usually a bunch of "product development" people on the customer's side who come up with these "proposed solutions" to the business problems they have. While they are the experts in the business domain, a lot of the time they don't have the solutions right. This results in:

    - multiple versions of the same requirement;
    - two requirements getting mixed into one;
    - a few versions of the requirement later, the requirements that were combined getting separated out again, each taking some of the new additions with it.

    How do you work with requirements that come in like this and sort them out into proper use cases before development begins? What tools can we use to track a particular requirement's history, from the first time it was conceived until the time it crystallizes into a proper use case? Estimating work against requirements received in this fashion is a nightmare that ends in mistakes in understanding the requirement correctly and in estimating the effort against it correctly. Any tips, tools, or tricks to make this activity more manageable? I'm just trying to get some insights from someone more experienced than I am in requirements management and effort estimation.

    Read the article

  • What are DRY, KISS, SOLID, etc. classified as?

    - by Morgan Herlocker
    Is something like DRY a design pattern, a methodology, or something in between? These principles have no specific implementations that could necessarily be demonstrated (even if you can easily demonstrate a case that does NOT follow something like KISS; see The Daily WTF for a plethora of examples), nor do they fully explain a development process the way a methodology generally would. Where does that leave these kinds of rules of thumb?

    Read the article

  • Should a programmer be indispensable?

    - by Tim
    As a programmer or system administrator, you could strive either to have your fingers in every system or to isolate yourself as much as possible and become an easily substituted cog. The advantages of the latter include being able to take vacations and not being on call, while the former means you would always have something to do and would be very difficult to fire. Aiming for either extreme would require a conscious effort. Leaving aside the obvious ethical considerations, which should one strive for?

    Read the article

  • Writing acceptance test cases

    - by HH_
    We are integrating a testing process into our SCRUM process. My new role is to write acceptance tests for our web applications so that they can be automated later. I have read a lot about how test cases should be written, but none of it gave me practical advice for writing test cases for complex web applications; instead, the sources offer conflicting principles that I find hard to apply:

    - Test cases should be short. Take the example of a CMS. Short test cases are easy to maintain, and their inputs and outputs are easy to identify. But what if I want to test a long series of operations (e.g., adding a document, sending a notification to another user, the other user replying, the document changing state, the user getting a notice)? It rather seems to me that test cases should represent complete scenarios, but I can see how that produces overly complex test documents.
    - Tests should identify inputs and outputs. What if I have a long form with many interacting fields with different behaviors? Do I write one test for everything, or one for each field?
    - Test cases should be independent. How can I apply that if testing the upload operation requires that the connect operation succeed? And how does it apply to writing test cases? Should I write a test for each operation, with each test declaring its dependencies, or should I rewrite the whole scenario for each test? (See the sketch after this list for one common way of handling such dependencies.)
    - Test cases should be lightly documented. This principle is specific to agile projects. Do you have any advice on how to implement it?

    Although I thought that writing acceptance test cases was going to be simple, I find myself overwhelmed by every decision I have to make (FYI: I am a developer, not a professional tester). So my main question is: what steps or advice do you have for writing maintainable acceptance test cases for complex applications? Thank you.
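
    On the independence point, one common resolution is to move the dependency (a successful connect) into shared setup rather than into each test body. A minimal sketch using pytest fixtures, in which myapp, connect, and upload_document are hypothetical stand-ins for the application's real operations:

        import pytest

        from myapp import connect, upload_document  # hypothetical helpers

        @pytest.fixture
        def session():
            # The connect step becomes setup: tests that need a live session
            # declare this fixture instead of re-testing the connection.
            s = connect(user='tester', password='secret')
            yield s
            s.close()

        def test_upload_succeeds(session):
            # One operation per test; the precondition lives in the fixture,
            # so a connect failure shows up as an error, not a test failure.
            result = upload_document(session, 'report.pdf')
            assert result.ok

    Each test then stays short and independent in the sense that matters: it exercises one operation, while its prerequisites are satisfied (and reported separately) by the setup layer.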

    Read the article

  • What makes Instagram so valuable? [closed]

    - by ????
    If, as the FAQ says, topics about the business side of the computer industry are allowed here, I'd like to find out why Instagram can be so valuable that it was acquired for $1 billion (USD). To put it simply, isn't it just a photo-enhancement app (making a photo look vintage, for example) plus sharing those photos on Facebook? I ask because, in contrast, Playfish had superb Facebook games, many of them, and much more sophisticated ones (such as Restaurant City and Pet Society), and Playfish was acquired for merely $400 million. Some companies, such as RockYou, had the number-one app on Facebook and weren't acquired even for a low price like $200 million. And now a photo-filter-and-sharing app is a business considered to be worth a billion dollars. Why is that?

    Read the article
