
  • Questions about software licensing

    - by iwayneo
    I've been having a discussion about licensing and open source software. The other party claims that licensing is easy: if you're going to build a product, you can use any open source project and make money by selling that code. My concern is that if I create a website or app using a project under the GPL, the restrictions aren't so straightforward. Correct me if I'm wrong on each of these scenarios:

    1. I create an iPhone app using GPL code and put that app into the App Store: the code must be made freely available to the people buying that app.
    2. I create a website that my client hosts: they must have access to the code.
    3. I create a website as SaaS that my client "leases" but does not own, though it is hosted on their infrastructure: they must have access to that code.

    Am I right about each of those assumptions? Are there any other issues I should be aware of under other licenses?

    Read the article

  • What are the so-called "levels" of understanding multithreading?

    - by Dan Tao
    I seem to remember reading somewhere a list of four "levels" of understanding multithreading. It may have been in a formal publication, or in an extremely informal context (such as a Stack Overflow question). Unfortunately I don't remember who referred to them or precisely what they were. I seem to recall that they were roughly:

    1. Total ignorance
    2. Awareness mixed with incompetence
    3. Relative competence mixed with fear
    4. True understanding

    My intention is to refer to these levels, with a reference, in a blog post I'm writing; but I can't for the life of me remember where I first encountered the list. Brief Google searches have proved unfruitful.

    Read the article

  • How to document requirements for an API systematically?

    - by Heinrich
    I am currently working on a project where I have to analyze the requirements that two given IT systems, both of which use cloud computing, place on a cloud API. In other words, I have to analyze what these systems require from a cloud API so that they could switch to it while still accomplishing their current goals. Let me give an example of some informal requirements for Project A:

    - When starting virtual machines in the cloud through the API, it must be possible to specify the memory size, CPU type, operating system and an SSH key for the root user.
    - It must be possible to monitor the inbound and outbound network traffic per hour, per virtual machine.
    - The API must support the assignment of public IPs to a virtual machine and the retrieval of those IPs.
    - ...

    In a later stage of the project I will analyze some cloud computing standards that standardize cloud APIs, to find out where the possible shortcomings in the current standards are. A finding could, and probably will, be that a certain standard does not support monitoring resource usage and thus is not currently usable. I am trying to find a way to systematically write down and classify my requirements; the way I currently have them written down (like the three points above) feels too informal. I have read a couple of requirements engineering and software architecture books, but they all focus too much on details and implementation. I really only care about the functionality provided through the API/interface, and I don't think UML diagrams etc. are the right choice for me. I think the requirements I have collected so far can be described as user stories, but is that already enough for a sophisticated requirements analysis? Probably I should go "one level deeper"... Any advice or learning resources?
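
    One lightweight way to make such requirements more systematic, without jumping to UML, is to record each one as a structured entry with explicit classification fields. Below is a minimal Python sketch of one hypothetical scheme; the field names, categories and IDs are assumptions for illustration, not anything from the project:

        from dataclasses import dataclass, field

        @dataclass
        class ApiRequirement:
            """One functional requirement a system places on a cloud API."""
            req_id: str             # e.g. "A-01"
            system: str             # which system needs it, e.g. "Project A"
            category: str           # e.g. "provisioning", "monitoring", "networking"
            capability: str         # the operation the API must expose
            parameters: list = field(default_factory=list)  # inputs that must be settable
            priority: str = "must"  # MoSCoW-style: must / should / could / won't

        reqs = [
            ApiRequirement("A-01", "Project A", "provisioning",
                           "start a virtual machine",
                           ["memory size", "CPU type", "operating system", "root SSH key"]),
            ApiRequirement("A-02", "Project A", "monitoring",
                           "report inbound/outbound traffic per hour, per VM"),
        ]

        # e.g. if a standard lacks monitoring support, list the requirements it misses:
        gaps = [r.req_id for r in reqs if r.category == "monitoring"]
        print(gaps)

    Classifying by category also maps cleanly onto the later standards analysis: each standard either covers a category or it doesn't.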

    Read the article

  • Why does java.util.ArrayList allow adding null?

    - by Alfredo O
    I wonder why java.util.ArrayList allows adding null. Is there any case where I would want to add null to an ArrayList? I am asking because in a project we had a bug where some code was adding nulls to the list, and it was hard to spot where. Obviously a NullPointerException was eventually thrown, but not until other code tried to access the element. The problem was how to locate the code that added the null object. It would have been much easier if ArrayList had thrown an exception at the point where the element was being added.
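
    A common remedy, independent of language, is to decorate the collection so that the add itself fails fast; in Java that would be a List wrapper whose add() calls Objects.requireNonNull. A minimal Python sketch of the same fail-fast idea (illustrative only, not the Java API):

        class NonNullList(list):
            """A list that rejects None at insertion time, so the stack trace
            points at the code that tried to add it, not at a later read."""

            def append(self, item):
                if item is None:
                    raise ValueError("None is not allowed in this list")
                super().append(item)
            # A full version would guard insert() and extend() the same way.

        items = NonNullList()
        items.append("ok")
        items.append(None)  # raises here, at the add site, not at access time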

    Read the article

  • Situations that require protecting files against tampering when stored on a user's computer

    - by Joel
    I'm making a 'Pokémon Storage System' with a client/server model, and as part of that I was thinking of storing an inventory file on the user's computer which I do not wish to be edited except by my program. An alternative would be to store the inventory file on the server instead and control its editing by sending commands to the server. But I was wondering: are there any situations which require files to be stored on a user's computer where editing would be undesirable, and if so, how do you protect the files? I was thinking AES with some sort of checksum?
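
    If the goal is detecting edits rather than hiding the data, the standard tool is a keyed MAC, not AES plus a checksum (encryption alone does not stop tampering, and a plain checksum can simply be recomputed by whoever edits the file). A minimal Python sketch, with the caveat that any key shipped inside the client can be extracted, so this only truly helps when the key stays server-side:

        import hmac
        import hashlib

        SECRET_KEY = b"server-side-secret"  # hypothetical; must not ship with the client

        def sign(data: bytes) -> bytes:
            return hmac.new(SECRET_KEY, data, hashlib.sha256).digest()

        def verify(data: bytes, tag: bytes) -> bool:
            return hmac.compare_digest(sign(data), tag)

        inventory = b'{"pikachu": 1, "potions": 5}'
        tag = sign(inventory)                     # store the tag alongside the file
        assert verify(inventory, tag)
        assert not verify(inventory + b"x", tag)  # any modification is detected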

    Read the article

  • Algorithm to calculate trajectories from vector field

    - by cheeesus
    I have a two-dimensional vector field: for each point (x, y) I have a vector (u, v), where u and v are functions of x and y. This vector field canonically defines a set of trajectories, i.e. the paths a particle would take if it follows the vector field. (The original post included an image showing the vector field in red and four partly visible trajectories in dark red.) I need an algorithm which efficiently calculates some trajectories for a given vector field. The trajectories must satisfy some kind of minimum denseness in the plane (for every point in the plane we must have a "nearby" trajectory), or some other condition that yields a reasonable set of trajectories. I could not find anything useful on Google, and Stack Exchange doesn't seem to cover the topic either. Before I start devising such an algorithm by myself: are there any known algorithms for this problem? What are they called, and which keywords should I search for?
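
    In flow visualization this is known as streamline placement; "evenly spaced streamlines" (Jobard and Lefer, 1997) is a good search term, and their algorithm addresses exactly the denseness condition: new streamlines are seeded a fixed distance d_sep from existing ones, and integration stops when a trajectory comes within d_test of another. The core building block is numerically integrating the field. A minimal Python sketch with fixed-step RK4, assuming the field is given as a function of a point:

        import numpy as np

        def trace_streamline(field, seed, h=0.01, steps=1000):
            """Integrate a trajectory through `field` from `seed` with fixed-step RK4.
            `field(p)` returns the vector (u, v) at point p = (x, y)."""
            p = np.asarray(seed, dtype=float)
            path = [p.copy()]
            for _ in range(steps):
                k1 = field(p)
                k2 = field(p + 0.5 * h * k1)
                k3 = field(p + 0.5 * h * k2)
                k4 = field(p + h * k3)
                p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
                path.append(p.copy())
            return np.array(path)

        # Example: the circular field (u, v) = (-y, x); trajectories are circles.
        circular = lambda p: np.array([-p[1], p[0]])
        print(trace_streamline(circular, seed=(1.0, 0.0), steps=10)[-1])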

    Read the article

  • What is an effective way to convert a shared memory-mapped system to another data access model?

    - by Rob Jones
    I have a code base that is designed around shared memory. Each process that needs to access the memory maps it into its own address space. The data structures in the shared memory are directly accessed; that is, there is no API. For example, assume the following:

        typedef struct {
            int x;
            int y;
            struct {
                int a;
                int b;
            } z;
        } myStruct;

        myStruct s;

    A process might access this structure as:

        myStruct *s = mapGlobalMem();

    and use it as:

        int tmpX = s->x;

    The majority of the information in the global structure is configuration information that is set once and read many times. I would like to store this information in a database and develop an API to access it. The problem is that these references are sprinkled throughout the code. I need a way to parse the code and identify the global-structure references that will need to be refactored. I've looked into using ANTLR to create a parser that identifies references to a small set of structures and enters them into a custom symbol table; I could then use this symbol table to identify which source files need to be refactored. It looks like a promising approach. What other approaches are there? Of course, I'm looking for a programmatic approach; there are far too many source files to examine each one visually. This is all ordinary ANSI C, nothing else.
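
    Short of a full C parser, a useful first pass is a purely lexical scan for member accesses through the names under which the shared structure is reachable. It will have false positives and negatives compared to ANTLR or to libclang's Python bindings (clang.cindex), which give a real AST, but it is a cheap way to scope the refactoring. A hypothetical sketch; the variable names to search for are assumptions to be adjusted per code base:

        import re
        from pathlib import Path

        # Names under which the shared structure is reachable (assumed).
        GLOBAL_VARS = ["s", "gShared"]

        # Matches e.g. "s->x" or "s->z.a" for any of the known variable names.
        ACCESS_RE = re.compile(
            r"\b(%s)\s*->\s*[A-Za-z_][\w.]*" % "|".join(map(re.escape, GLOBAL_VARS))
        )

        def scan(root="."):
            for path in Path(root).rglob("*.c"):
                text = path.read_text(errors="ignore")
                for lineno, line in enumerate(text.splitlines(), 1):
                    for match in ACCESS_RE.finditer(line):
                        print(f"{path}:{lineno}: {match.group(0)}")

        if __name__ == "__main__":
            scan()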

    Read the article

  • Is it a good idea to require that only working code be committed?

    - by Astronavigator
    Sometimes I hear people say things like "all committed code must be working". In some articles people even describe how to create svn or git hooks that compile and test code before a commit. In my company we usually create one branch per feature, and one programmer usually works in that branch. I often (around 1 commit in 100, and, I think, with good reason) make non-compilable commits. It seems to me that a requirement of "always compilable/stable" commits conflicts with the idea of frequent commits: a programmer would rather make one commit a week than test the whole project's stability and compilability ten times a day. For compilable code only, I use tags and selected branches (trunk etc.). I see these reasons to commit code that is not fully working or not compilable:

    - If I am developing a new feature, it is hard to make it work in just a few lines of code.
    - If I am editing an existing feature, it is likewise sometimes hard to keep the code working at every step.
    - If I am changing a function's prototype or interface, I may have to make hundreds of changes, and not mechanical but intellectual ones; any one of them could reasonably be its own commit (but if I want all commits to be stable, I must make 1 commit instead of 100).

    In all these cases, making only stable commits means making commits containing many, many changes, and it becomes very hard to find out "what happened in this commit?". Another aspect of the problem is that compiling code gives no guarantee of it working properly. So, is it a good idea to require every commit to be stable/compilable? Does it depend on the branching model or the version control system? In your company, is it forbidden to make non-compilable commits? Is it a bad idea (and why) to use only selected branches (including trunk) and tags for stable versions?

    Read the article

  • What issues lead people to use Japanese-specific encodings rather than Unicode?

    - by Nicolas Raoul
    At work I come across a lot of Japanese text files in Shift-JIS and other encodings. This causes many mojibake (unreadable character) problems for all computer users. Unicode was intended to solve this sort of problem by defining a single character set for all languages, and the UTF-8 serialization is recommended for use on the Internet. So why doesn't everybody switch from Japanese-specific encodings to UTF-8? What issues with, or disadvantages of, UTF-8 are holding people back? EDIT: The W3C lists some known problems with Unicode; could this be a reason too?
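
    For what it's worth, the mechanical conversion itself is trivial; the hard parts are organizational (legacy tools, file names, vendor extensions that don't round-trip cleanly). A minimal Python sketch for re-encoding one file, with hypothetical file names; note that "cp932" (the Windows variant of Shift-JIS) is often the safer choice for files produced on Windows, since it covers the vendor extensions:

        # Re-encode a Shift-JIS/CP932 text file as UTF-8.
        with open("legacy.txt", encoding="cp932") as src:
            text = src.read()

        with open("legacy.utf8.txt", "w", encoding="utf-8") as dst:
            dst.write(text)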

    Read the article

  • How do I organize a GUI application for passing around events and for setting up reads from a shared resource?

    - by Savanni D'Gerinel
    My tools here are GTK and Haskell. My questions are probably pretty trivial for anyone who has done significant GUI work, but I've been off in the equivalent of CGI applications for my whole career. I'm building an application that displays tabular data, displays the same data in graph form, and has an edit field for both entering new data and editing existing data. After asking about sharing resources, I decided that all of the data involved will be stored in an MVar so that every component can just read the current state from it. All of that works, but now it is time to rearrange the application so that it can be interactive. With that in mind, I have three widgets: a TextView (for editing), a TreeView (for displaying the data), and a DrawingArea (for displaying the data as a graph). I think I need to do two things, and the core of my question is: are these the right things, or is there a better way?

    Thing the first: all event handlers, the functions that will be called any time a redisplay is needed, need to be written at a high level and then passed into the function that actually constructs the widget. For instance:

        drawStatData :: DrawingArea -> MVar Core.ST -> (Core.ST -> SetRepWorkout.WorkoutStore) -> IO ()
        createStatView :: (DrawingArea -> IO ()) -> IO VBox

        createUI :: MVar Core.ST -> (Core.ST -> SetRepWorkout.WorkoutStore) -> IO HBox
        createUI storeMVar field = do
            graphs <- createStatView (\area -> drawStatData area storeMVar field)
            hbox <- hBoxNew False 10
            boxPackStart hbox graphs PackNatural 0
            return hbox

    In this case, createStatView builds up a VBox that contains a DrawingArea to graph the data, and potentially other widgets. It attaches drawStatData to the realize and exposeEvent events of the DrawingArea. I would do something similar for the TreeView, but I am not completely sure what, since I have not done it yet; what I have in mind would involve replacing the TreeModel every time the TreeView needs to be updated. My alternative to the above would be:

        drawStatData :: DrawingArea -> MVar Core.ST -> (Core.ST -> SetRepWorkout.WorkoutStore) -> IO ()
        createStatView :: IO (VBox, DrawingArea)

    but in this case I would arrange createUI like so:

        createUI :: MVar Core.ST -> (Core.ST -> SetRepWorkout.WorkoutStore) -> IO HBox
        createUI storeMVar field = do
            (graphbox, graph) <- createStatView
            hbox <- hBoxNew False 10
            boxPackStart hbox graphbox PackNatural 0
            on graph realize (drawStatData graph storeMVar field)
            on graph exposeEvent (do
                liftIO $ drawStatData graph storeMVar field
                return False)
            return hbox

    I'm not sure which is better, but that does lead me to...

    Thing the second: it will be necessary to rig up an event system so that various events can send signals all the way to my widgets. I'm going to need a mediator of some kind to pass events around and to translate application-semantic events into the actual events that my widgets respond to. Is it better to pass my addressable widgets up the call stack to the level where the mediator lives, or to pass the mediator down the call stack and have the widgets register directly with it?

    So, in summary, my questions:

    1. Pass widgets up the call stack to a global mediator, or pass the global mediator down and have the widgets register themselves with it?
    2. Pass my redraw functions to the builders and have the builders attach them to the constructed widgets, or pass the constructed widgets back and have a higher level attach the redraw functions (and potentially link some widgets together)?
    3. Any books or wikis about GUI application architecture, preferably coherent architectures where people aren't arguing about minute details?

    The application in its current form (displays data but does not write data or allow for much interaction) is available at https://bitbucket.org/savannidgerinel/fitness . You can run it from the root directory with:

        runhaskell -isrc src/Main.hs data/

    or:

        cabal build
        dist/build/fitness/fitness data/

    You may need to install libraries first, but cabal should tell you which ones.

    Read the article

  • What shall I include in a 10-week web technologies course?

    - by Iain
    In September I will be teaching a university module on web technologies. The session will be available to 1st-year (freshman) students who don't necessarily have any programming knowledge or know how the web works. In the 2nd semester I will be teaching Flash, which is my specialism, so I know exactly what I am going to teach there; but in the 1st semester I will be teaching web standards technologies: HTML, CSS, JS, jQuery, PHP and MySQL. Where I need advice is how to proportion the emphasis for each part, and which parts of each technology to cover. Another real issue I'm struggling with is how much of the bad old ways I should teach. Do they need to know about bold as well as strong, etc.?

    UPDATE: based on your feedback I will only be teaching the latest version of everything: CSS3, HTML5, etc.

    I'm not sure exactly how long the semester will be, but I'm guessing about 10-12 weeks. Each session is a 2-hour lab. Obviously there's only so much I can cover in that time, and it will be up to the students to go and research this stuff properly on W3Schools etc. My ideas so far:

    - Lesson 0: course intro and overview of the current tech landscape. What is out there, what will we be learning, what won't we? What is a web server, a URL, etc.? Looking at different example websites and discussing how they work.
    - Lesson 1: HTML basics (head, body, title, img, table, a, lists, h1, strong, etc.)
    - Lesson 2: CSS for styling and layout: fonts, web fonts, float, etc.
    - Lesson 3: intro to programming in JS (variables, loops, conditionals, functions)
    - Lesson 4: more JS programming fundamentals, DOM manipulation
    - Lesson 5: jQuery: making things fly about and look cool
    - Lesson 6: XML and Ajax
    - Lesson 7: PHP basics: syntax, server-side principles
    - Lesson 8: PHP and MySQL: forms, logins, saving user info
    - Lesson 9: don't know
    - Lesson 10: don't know

    Please let me know if you think this is the right order, what I have missed, how to use any spare sessions, etc. Thanks :)

    UPDATE BASED ON RESPONSES: Thanks for all your responses; some great stuff. To be absolutely clear, this is not a computer science course; it is a practical module on a creative technology course. The emphasis definitely has to be on making cool things work rather than on understanding how the backbone of the internet works. That can come later, if the students are interested. At the end of the module I would like the students to be able to produce a web page or pages that do something cool, using some or all of the technologies I cover. Many of these topics are of course far beyond the scope of a 2-hour session, but I do not have the option of reducing the syllabus; I will just have to explain what each technology does and encourage the students to research it in their own time.

    Read the article

  • Breaking up a large PHP object used to abstract the database. Best practices?

    - by John Kershaw
    Two years ago, a single object with functions such as $database->get_user_from_id($ID) was thought to be a good idea. The functions return objects (not arrays), and the front-end code never worries about the database. This was great until the database started growing. There are now 30+ tables and around 150 functions in the database object; it's getting impractical and unmanageable, and I'm going to break it up. What is a good solution to this problem? The project is large, so there's a limit to how much I can change. My current plan is to create a class per table that extends the current object, and have the database object contain an instance of each. So, assuming "user" is a table, the example above would turn into $database->user->get_user_from_id($ID). Instead of one large file, we would have a file for every table.
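
    The plan described is essentially one table data gateway per table, composed behind a facade so existing call sites keep a single entry point. A minimal sketch of the shape, in Python for brevity since the idea is language-neutral; query_one/query_all stand in for whatever the existing connection layer actually provides:

        class TableGateway:
            """Base class holding the shared connection; one subclass per table."""
            def __init__(self, connection):
                self.db = connection

        class UserGateway(TableGateway):
            def get_user_from_id(self, user_id):
                return self.db.query_one("SELECT * FROM user WHERE id = ?", user_id)

        class InvoiceGateway(TableGateway):
            def get_invoices_for_user(self, user_id):
                return self.db.query_all("SELECT * FROM invoice WHERE user_id = ?", user_id)

        class Database:
            """Facade: one file per table, one shared entry point for callers."""
            def __init__(self, connection):
                self.user = UserGateway(connection)
                self.invoice = InvoiceGateway(connection)

        # db = Database(connection)
        # db.user.get_user_from_id(42)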

    Read the article

  • Is encoding needed in this decryption?

    - by Lijo
    I have an encryption/decryption scenario as shown below.

    Encryption flow:

        [Clear-text ID string as input] -> [(ASCII GetBytes) + encoding] -> [encryption as byte array]
        -> [database column is VarBinary] -> [pass byte[] as a VarBinary parameter to the SP for comparison]

    Decryption flow:

        [ID stored as VarBinary in the database] -> [read as byte array]
        -> [(decrypt as byte array) + encoding + (ASCII GetString)] -> [show as string in the UI]

    My question is about the decryption scenario. After decryption I get a byte array, and I then apply an encoding (IBM037). Is that correct? Is there something wrong in the flow shown above?

        private static byte[] GetEncryptedID(string id)
        {
            Interface_Request input = new Interface_Request();
            input.RequestText = EncodeTo64(id);
            input.RequestType = Encryption;

            ProgramInterface inputRequest = new ProgramInterface();
            inputRequest.Test_Trial_Request = input;

            using (KTestService operation = new KTestService())
            {
                return operation.KTrialOperation(inputRequest).Test_Trial_Response.ResponseText;
            }
        }

        private static string GetDecryptedID(byte[] id)
        {
            Interface_Request input = new Interface_Request();
            input.RequestText = id;
            input.RequestType = Decryption;

            ProgramInterface request = new ProgramInterface();
            request.Test_Trial_Request = input;

            using (KTestService operationD = new KTestService())
            {
                ProgramInterface1 response = operationD.KI014Operation(request);
                byte[] decryptedValue = response.ICSF_AES_Response.ResponseText;

                Encoding sourceByteFormat = Encoding.GetEncoding("IBM037");
                Encoding destinationByteFormat = Encoding.ASCII;

                // Convert from one byte format to the other (IBM037 to ASCII)
                byte[] ibmEncodedBytes = Encoding.Convert(sourceByteFormat, destinationByteFormat, decryptedValue);
                return System.Text.ASCIIEncoding.ASCII.GetString(ibmEncodedBytes);
            }
        }

        private static byte[] EncodeTo64(string toEncode)
        {
            byte[] dataInBytes = System.Text.ASCIIEncoding.ASCII.GetBytes(toEncode);

            Encoding destinationByteFormat = Encoding.GetEncoding("IBM037");
            Encoding sourceByteFormat = Encoding.ASCII;

            // Convert from one byte format to the other (ASCII to IBM037)
            byte[] ebcdicBytes = Encoding.Convert(sourceByteFormat, destinationByteFormat, dataInBytes);
            return ebcdicBytes;
        }

    Read the article

  • Please tell us how you made Agile work for you

    - by Paul
    I've been seeing many questions related to Agile. There seems to be confusion between the people who are doing Agile successfully and those of us who don't understand it. So I'm wondering if some of the successful teams would be willing to give the rest of us some examples of how you succeeded. Some of the things I wonder:

    - What steps did you use? (Talk to users, mock up, tests, code, testing, whatever.)
    - What tools helped you?
    - Did you generate any artifacts other than a working implementation?
    - How did you prevent spaghetti architecture/code?
    - How do you bring new team members up to speed, or is the team stable for the project?
    - How did you determine exit criteria, or was it open-ended (the scope of the project)?
    - Did you do this as contracting? How did you develop a contract up front?
    - Did the business do any up-front work, or did they come to the table with "we want to implement a bleh bleh blah"?
    - What types of tests did you use (unit, integration, UAT), or did the process make some or all of those unnecessary?

    Bonus: do you have any links to "how to do Agile" articles, books, etc.? The wiki describes what, but not how (to the uninitiated). At least to me, this is not a duplicate.

    Read the article

  • Software developer resume template/builder/etc

    - by codecraig
    As a software developer, I'm curious what tools, apps, templates, etc. other developers use for creating a resume: LinkedIn, ceevee.com, the GitHub resume generator, and so on. I use some of the things I've mentioned, but I don't really like the outdated style of resume created by ceevee.com, and I'm not much of a designer. Does anyone have any templates/apps/services that are nice in terms of design and ease of use?

    Read the article

  • How should I practice web server administration?

    - by Astyanax
    Security students can practice their skills with software like OWASP's WebGoat or sites similar to "hackthissite". Students interested in operating systems can study MINIX and PintOS, write shell scripts, or study POSIX system calls. What would be the best course of action for practicing server administration? Is there any such software or resource available that teaches these skills in small lessons, or is it totally up to you? I've practiced live FreeBSD server administration and managed VMs (CentOS, Gentoo, Debian) under VirtualBox, but I always feel that this isn't enough and that I must push myself harder. So, what would you recommend? What has worked for you?

    Read the article

  • What is the situation with OpenGL under Ubuntu Unity and GNOME 3?

    - by user827992
    In a GNU/Linux distribution, Xorg is usually installed as the main graphical server. It operates with a client-server logic: a special window is designated as the desktop environment, and this special window handles all the eye-candy stuff like decorations, icons and effects. The problem is that the latest UIs rely heavily on hardware acceleration: Unity is an overlay on Compiz, and GNOME Shell also requires an active GPU driver to work well. My problem is:

    - On the same OS I can find multiple implementations of OpenGL; which one is handling my OpenGL buffer?
    - How is the OpenGL buffer managed compared to the other windows?
    - How can I be sure that my OpenGL implementation is glued to the hardware and is not routed through the client-server logic of Xorg?

    For example, I have tried the Clutter library and have experienced problems under both Unity and GTK/GNOME, but none under other OSes.

    Read the article

  • How should I handle using two databases with a legacy PHP application?

    - by Toby Allen
    I have a legacy PHP application that was written in 2004 and uses MSSQL as the database backend. At this stage MSSQL is still supported by PHP, but only just, via a Microsoft driver. I have looked at converting to MySQL via automated tools, which work quite well, but I have quite complex views which would need a lot of individual work to convert, and I don't have a great deal of time for this. Many tools I wish to use and frameworks I would like to move the application to don't support MSSQL, so I was considering adding new features using a new MySQL database. Does anyone have opinions on the pros and cons of using two separate database backends in a single application?

    Read the article

  • ORM and component-based architecture

    - by EagleBeek
    I have joined an ongoing project, where the team calls their architecture "component-based". The lowest level is one big database. The data access (via ORM) and business layers are combined into various components, which are separated according to business logic. E.g., there's a component for handling bank accounts, one for generating invoices, etc. The higher levels of service contracts and presentation are irrelevant for the question, so I'll omit them here. From my point of view the separation of the data access layer into various components seems counterproductive, because it denies us the relational mapping capabilities of the ORM. E.g., when I want to query all invoices for one customer I have to identify the customer with the "customers" component and then make another call to the "invoices" component to get the invoices for this customer. My impression is that it would be better to leave the data access in one component and separate it from business logic, which may well be cut into various components. Does anybody have some advice? Have I overlooked something?

    Read the article

  • What scenarios are implementations of Object Management Group (OMG) Data Distribution Service best suited for?

    - by mindcrime
    I've always been a big fan of asynchronous messaging and pub/sub implementations, but coming from a Java background I'm most familiar with JMS-based messaging systems such as JBoss MQ, HornetQ, ActiveMQ, OpenMQ, etc. I've also loosely followed the discussion of AMQP. But I recently became aware of the Data Distribution Service specification from the Object Management Group, and found that there are a couple of open-source implementations: OpenSplice and OpenDDS. It sounds like this stuff is focused on the kind of high-volume scenarios one tends to associate with financial trading exchanges and the like. My current interest is more along the lines of notifications related to activity-stream processing (think Twitter/Facebook), and I am wondering if the DDS servers are worth looking into further. Could anyone who has practical experience with this technology, and/or a deep understanding of it, comment on how useful it is and what scenarios it is best suited for? How does it stack up against more "traditional" JMS servers and/or AMQP (or even STOMP or OpenWire, etc.)? Edit: FWIW, I found some information in this Stack Overflow thread. Not a complete answer, but anybody else finding this question might also find that thread useful, hence the added link.

    Read the article

  • Team Software Development using Ruby on Rails

    - by Panoy
    I used to work alone on small to medium-sized programming projects and have no experience working in a team environment. Currently there will be three of us in an in-house software development team tasked with developing a number of applications for an academic institution. We have decided to use the web for the majority of the projects and are planning to choose Ruby on Rails, and I would like to ask for your input, advice and approaches regarding software development as a team using the RoR web framework. One thing that has really confounded me is how you divide the programming tasks of a project when all three of you are doing the coding. It's obvious that we as developers approach a problem in a modular way and finish modules one after another. If the project consists of three modules, should each of us focus on one module? Would it be faster that way? What if the three of us focused on one module first (that's what I'd really prefer)? Is using a distributed version control system such as Git the answer to this type of problem? Please also share your tips and experiences with team software development. Cheers!

    Read the article

  • Everything has an Interface [closed]

    - by Shane
    Possible duplicate: Do I need to use an interface when only one class will ever implement it?

    I am taking over a project where every single concrete class implements an interface. The vast majority of these interfaces are implemented by a single class that shares a similar name and the exact same methods (e.g., MyCar and MyCarImpl). Almost no two classes in the project implement more than the interface that shares their name. I know the general recommendation is to code to an interface rather than an implementation, but isn't this taking it a bit too far? The system might be more flexible in that it is easier to add a new class that behaves very much like an existing class. However, it is significantly harder to read through the code, and method changes now require two edits instead of one. Personally, I only create interfaces when there is a need for multiple classes to have the same behavior. I subscribe to YAGNI, so I don't create something unless I see a real need for it. Am I doing it all wrong, or is this project going way overboard?

    Read the article

  • Programming languages with a Lisp-like syntax extension mechanism

    - by Giorgio
    I have only limited knowledge of Lisp (I'm trying to learn a bit in my free time), but as far as I understand, Lisp macros make it possible to introduce new language constructs and syntax by describing them in Lisp itself. This means that a new construct can be added as a library, without changing the Lisp compiler/interpreter. This approach is very different from that of other programming languages. For example, if I wanted to extend Pascal with a new kind of loop or some particular idiom, I would have to extend the syntax and semantics of the language and then implement the new feature in the compiler. Are there other programming languages outside the Lisp family (i.e., apart from Common Lisp, Scheme, Clojure(?), Racket(?), etc.) that offer a similar possibility to extend the language within the language itself?

    EDIT: Please avoid extended discussion and be specific in your answers. Instead of a long list of programming languages that can be extended in some way or another, I would like to understand from a conceptual point of view what is specific to Lisp macros as an extension mechanism, and which non-Lisp programming languages offer concepts that come close.

    Read the article

  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable:

        # module xfactory.py
        import x

        class XFactory:
            _registry = {}

            def get_x(self, arg1, arg2, use_cache=True):
                if use_cache:
                    hash_id = hash((arg1, arg2))
                    if hash_id in self._registry:
                        return self._registry[hash_id]
                    obj = x.X(arg1, arg2)
                    self._registry[hash_id] = obj
                    return obj
                return x.X(arg1, arg2)

        # module x.py
        class X:
            # ...

    Is it a good pattern? (I know it's not the actual Factory pattern.) Is there anything I should change?

    Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store as values in the _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving the pickle files the filenames that contain hash_id).

    Except now the validity of the cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created these objects. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies and reloads it while the program is running. So far I have ignored this danger, since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage.

    What can I do? I suppose I could make the hash_id more robust by calculating the hash of a tuple that contains the arguments arg1 and arg2, as well as the filename and last-modified date for x.py and every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to the _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe, since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation.

    Thus, I figured I might as well give up some safety and invalidate the cache only when there is an obvious mismatch. This means that class X would have a class-level cache-validation identifier that should be changed whenever the developer believes a change has happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry. Since developers may forget to update the validation identifier, or may not realize that they invalidated existing cache entries, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation would include the traits as well.

    I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that does all of this already. Ideally, I'd like to combine in-memory and disk-based caching.
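
    For the disk layer, a minimal sketch under the post's own assumptions (immutable X, a class-level validation token the developers bump on breaking changes); note that hashlib is used instead of the built-in hash(), which is not stable across interpreter runs, and that keying by repr() assumes the arguments have stable, unambiguous representations:

        import os
        import pickle
        import hashlib

        CACHE_DIR = "x_cache"  # hypothetical location

        def cache_key(version_token, *args):
            """Stable key from the validation token plus the arguments."""
            raw = repr((version_token, args)).encode()
            return hashlib.sha256(raw).hexdigest()

        def get_or_build(version_token, builder, *args):
            os.makedirs(CACHE_DIR, exist_ok=True)
            path = os.path.join(CACHE_DIR, cache_key(version_token, *args) + ".pkl")
            if os.path.exists(path):
                with open(path, "rb") as f:
                    return pickle.load(f)
            obj = builder(*args)
            with open(path, "wb") as f:
                pickle.dump(obj, f)
            return obj

        # Usage (assuming class X defines a CACHE_VERSION token):
        # obj = get_or_build(X.CACHE_VERSION, X, arg1, arg2)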

    Read the article

  • If you had two projects with the same specification and only one was developed using TDD, how could you tell?

    - by Andrew
    I was asked this question in an interview, and it has been bugging me ever since. You have two projects, both with the same specification, but only one of them was developed using Test Driven Development. You are given the source for both, but with the tests removed from the TDD project. How can you tell which one was developed using TDD? All I was able to muster was something about the classes being more "broken up" into smaller chunks and having more visible APIs; not my proudest moment. I would be very interested to hear a good answer to this question.

    Read the article
