Search Results

Search found 16554 results on 663 pages for 'programmers identity'.


  • Google Closure Compiler - what does the name mean?

    - by mikez302
    I am curious about the Google Closure Compiler. Why did they name it that? Does it have anything to do with lexical closures? EDIT: I tried researching it in the FAQ and documentation, as well as doing Google searches such as "closure compiler name", but I couldn't find anything definite, which is why I am asking here. I don't think I will get a profoundly helpful answer, but I was hoping I could at least satisfy my curiosity. I am not trying to solve a specific problem; I am just curious.

    Read the article

  • Rails: Law of Demeter Confusion

    - by user2158382
    I am reading a book called Rails AntiPatterns, and it talks about using delegation to avoid breaking the Law of Demeter. Here is their prime example. They believe that calling something like this in the controller is bad (and I agree):

        @street = @invoice.customer.address.street

    Their proposed solution is to do the following:

        class Customer
          has_one :address
          belongs_to :invoice

          def street
            address.street
          end
        end

        class Invoice
          has_one :customer

          def customer_street
            customer.street
          end
        end

        @street = @invoice.customer_street

    They state that since you only use one dot, you are not breaking the Law of Demeter here. I think this is incorrect, because you are still going through customer to go through address to get the invoice's street. I primarily got this idea from a blog post I read: http://www.dan-manges.com/blog/37 In the blog post the prime example is:

        class Wallet
          attr_accessor :cash
        end

        class Customer
          has_one :wallet

          # attribute delegation
          def cash
            @wallet.cash
          end
        end

        class Paperboy
          def collect_money(customer, due_amount)
            if customer.cash < due_amount
              raise InsufficientFundsError
            else
              customer.cash -= due_amount
              @collected_amount += due_amount
            end
          end
        end

    The blog post states that although there is only one dot (customer.cash instead of customer.wallet.cash), this code still violates the Law of Demeter: "Now in the Paperboy collect_money method, we don't have two dots, we just have one in 'customer.cash'. Has this delegation solved our problem? Not at all. If we look at the behavior, a paperboy is still reaching directly into a customer's wallet to get cash out." EDIT: I completely understand and agree that this is still a violation, that I need to create a method in Wallet called withdraw that handles the payment for me, and that I should call that method inside the Customer class. What I don't get is that, by the same reasoning, my first example still violates the Law of Demeter, because Invoice is still reaching directly into Customer to get the street. Can somebody help me clear up this confusion? I have been searching for the past two days trying to let this topic sink in, but it is still confusing.
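    For what it's worth, the behaviour-based fix the blog post is driving at looks roughly like the sketch below (written in Java rather than Ruby only because the idea is language-agnostic; all names are illustrative): the paperboy tells the customer to pay, and the customer tells its wallet to withdraw, so nobody reaches through another object to pull data out.

        // Tell-don't-ask version of the wallet example: each object exposes
        // behaviour on its own data instead of handing the data out.
        class Wallet {
            private int cash;

            int withdraw(int amount) {
                if (amount > cash) throw new IllegalStateException("insufficient funds");
                cash -= amount;
                return amount;
            }
        }

        class Customer {
            private final Wallet wallet = new Wallet();

            int pay(int amount) {
                return wallet.withdraw(amount);   // Customer decides how to pay
            }
        }

        class Paperboy {
            private int collectedAmount;

            void collectMoney(Customer customer, int dueAmount) {
                collectedAmount += customer.pay(dueAmount);
            }
        }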

    Read the article

  • Java EE Web Services study guides

    - by Marthin
    I'm going for the Java EE 6 Web Services Developer certificate, but I'm having a hard time finding some solid study guides and mock exams. I already have the JPA certification and will very soon have the EJB one, so I'm not new to this stuff, but I've looked at CodeRanch and other places and all the information seems a bit outdated. So any tips for books, mock exams (free or not), tutorials or other guides would be very much appreciated. EDIT: I will of course read all the JSRs needed.

    Read the article

  • How do you choose to use a specific programming language?

    - by Jesús Bracamonte
    I was having a chat with teammates about how you choose a programming language for a project, which led me to think that there are many criteria for choosing one at the start of a project but no real standard. Do you choose a programming language for its syntax and semantics? Or do you choose one because it has the best support for doing certain things? Or because it has better libraries? Or do you choose it for the paradigm? What criteria do you use to choose a language when you are going to do a project?

    Read the article

  • iOS: Versioned static frameworks vs Git Submodules and included code

    - by drekka
    For the last couple of years I've been building static frameworks of common APIs for my iOS projects. I can build a universal binary containing all the architectures (i386, armv6, armv7) and wrap it up in a .framework directory structure. I then store this in a directory based on the version of the framework, for example ..../myAPI/v0.1.0/myAPI.framework. Once I have this framework I can easily add it to a project, and if I want to advance the version, I merely change the framework search paths to the later version. This works, but the approach is very similar to what I would use in the Java world. Recently I've been reading about using Git submodules and static framework subprojects in Xcode 4. I'm wondering whether my current approach is something I should consider retiring, and what the pros/cons of the new approach are. I'm wary of just including code, because I've already had issues in a work project which had (effectively) multiple versions of a third-party API. Any opinions?

    Read the article

  • Future of Hadoop? [closed]

    - by Shekhar
    I am a software developer with 4 years of experience and a little experience in Hadoop. Now I am getting a new project and I'll be working fully on Hadoop. As Hadoop is still evolving, I would like to know whether Hadoop is really going to be a widely used technology in the future. Will it be something like the Java EE platform, or will it die soon just like some other technologies? What do you guys think about the Hadoop platform?

    Read the article

  • What follows after lexical analysis?

    - by madflame991
    I'm working on a toy compiler (for a simple language like PL/0) and I have my lexer up and running. At this point I should start working on building the parse tree, but before I start I was wondering: how much information can one gather from just the string of tokens? Here's what I have gathered so far:

    - One can already do syntax highlighting with only the list of tokens: numbers, operators and keywords get coloured accordingly.
    - Autoformatting (indenting) should also be possible. How? Specify for each token type how many white spaces or newline characters should follow it. Also, while printing tokens, maintain an alignment variable: when the code printer reads "{" it increments the alignment variable by 1, and it decrements it by 1 for "}"; whenever it starts printing on a new line, the code printer indents according to this alignment variable.
    - In languages without nested subroutines one can get a complete list of subroutines and their signatures. How? Just read what follows the "procedure" or "function" keyword until you hit the first ")" (this should work fine in a Pascal-like language with no nested subroutines).
    - In languages like Pascal you can even determine local variables and their types, as they are declared in a special place (OK, you can't handle initialization as well, but you can parse sequences like "var a, b, c: integer").
    - Detection of recursive functions may also be possible, or even a graph representation of which subroutine calls which. If one can identify the body of a function, then one can also search for mentions of other functions' names.
    - Gathering statistics about the code, such as the number of lines, instructions, or subroutines.

    EDIT: I clarified why I think some processes are possible. As I read comments and responses I realise that the answer depends very much on the language that I'm parsing.
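    As a concrete illustration of the auto-indenting point, here is a minimal sketch in Java (the Token record is a stand-in for whatever the real lexer produces; it is not tied to any particular grammar):

        import java.util.List;

        // Brace-driven pretty-printer: the indentation level is derived purely
        // from "{" and "}" tokens, exactly as described above.
        public class TokenFormatter {
            record Token(String text) {}

            static String autoFormat(List<Token> tokens) {
                StringBuilder out = new StringBuilder();
                int indent = 0;                                // the "alignment variable"
                for (Token t : tokens) {
                    String s = t.text();
                    if (s.equals("}")) indent = Math.max(0, indent - 1);
                    if (out.length() == 0 || out.charAt(out.length() - 1) == '\n') {
                        out.append("    ".repeat(indent));     // indent at the start of a line
                    }
                    out.append(s);
                    if (s.equals("{")) { indent++; out.append('\n'); }
                    else if (s.equals(";") || s.equals("}")) { out.append('\n'); }
                    else { out.append(' '); }
                }
                return out.toString();
            }

            public static void main(String[] args) {
                System.out.print(autoFormat(List.of(
                        new Token("if"), new Token("(x)"), new Token("{"),
                        new Token("y = 1"), new Token(";"), new Token("}"))));
            }
        }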

    Read the article

  • Erlang web frameworks survey

    - by Zachary K
    (Inspired by similar question on Haskel) There are several web frameworks for Erlang like Nitrogen, Chicago Boss, and Zotonic, and a few more. In what aspects do they differ from each other? For example: features (e.g. server only, or also client scripting, easy support for different kinds of database) maturity (e.g. stability, documentation quality) scalability (e.g. performance, handy abstraction) main targets Also, what are examples of real-world sites / web apps using these frameworks?

    Read the article

  • Clustering and custom applications

    - by Ahmed ilyas
    I was not entirely sure what tags to put on this, but I hope it's OK. This is just a general question about clustering and applications. Let's say we have a clustered environment set up, and we cluster SQL Server (I don't know exactly how it's done, but let's just say it has been done for the sake of argument). Now if a website or application is trying to access that database for read/write (say an ASP.NET app or a C# WinForms app) and during that time SQL Server goes down, it takes a couple of minutes for the clustering failover to take effect and switch to another node. What happens during this time? I think the connection will time out or fail. But is there a way to place the request in some pipeline, so that when the cluster node is back up or switched over, it continues as normal? As you can see, I know very little about clustering! What about your own custom .NET apps? Would there be a special way to develop them? I know that you can, say, create a simple Hello World app and cluster it, but it wouldn't be something you could see in terms of a UI, so it would effectively need to be developed as a Windows Service, or perhaps as a standard console app which runs without waiting for user input, and you wouldn't see any output from it (unless you redirect output somewhere else). What I'm getting at here is: for those who have experience with, or have developed, a clustered application in .NET, how did you do it and what are the things to be aware of? For example, we have cloud services, which are fundamentally built on clustering: if there is an outage, another node takes its place and service resumes as normal, and we don't really see much of that downtime.
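    One client-side technique that helps ride out a short failover window (a sketch, not a claim about how SQL Server clustering itself behaves): wrap the connection attempt in a retry loop with a backoff, so the request is effectively held in the application until the surviving node starts accepting connections. A minimal Java/JDBC illustration with placeholder connection details:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        public class RetryingConnect {
            // Retries the connection a few times while the cluster fails over.
            // The URL and credentials are placeholders, not a real configuration.
            static Connection connectWithRetry(String url, String user, String pass)
                    throws SQLException, InterruptedException {
                SQLException last = null;
                for (int attempt = 1; attempt <= 5; attempt++) {
                    try {
                        return DriverManager.getConnection(url, user, pass);
                    } catch (SQLException e) {
                        last = e;                          // likely a transient failover error
                        Thread.sleep(attempt * 2000L);     // back off before the next attempt
                    }
                }
                throw last;                                // still down after all retries
            }
        }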

    Read the article

  • When creating a library published on CodePlex, how "bad" would it be for the unit-test projects to rely on commercial products?

    - by Lasse V. Karlsen
    I have started a project on CodePlex for a WebDAV server implementation for .NET, so that I can host a WebDAV server in my own programs. This is both a learning/research project (WebDAV + the server portion) and a project I think I can have a lot of fun with, both in terms of making it and using it. However, I see a need to mock types here in order to unit test properly. For instance, I will be relying on HttpListener for the web server portion of the WebDAV server, and since this type has no interface and is sealed, I cannot easily make mocks or stubs of it, unless I use something like TypeMock. So if I used TypeMock in the unit-test projects of this library, how bad would that be for potential users? The projects are made in C# 3.5 for .NET 3.5 and 4.0, and the project files were created with Visual Studio 2010 Professional. The actual class libraries you would end up referencing in your software would of course not be encumbered with anything remotely like this, only the unit-test libraries. What are your thoughts on this? As an example, in my old (private) code-base I have the ability to initiate a WebDAV server with just this: var server = new WebDAVServer(); This constructs, and owns, an HttpListener instance internally, and I would like to verify through unit tests that if I dispose of this server object, the internal listener is disposed of. If, on the other hand, I use the overload where I hand it a listener object, that object should not be disposed of. Short of exposing the internal listener object to the outside world, something I'm a bit loath to do, how can I ensure in a good way that the object was disposed of? With TypeMock I can mock away parts of this object even though it isn't accessed through interfaces. The alternative would be for me to wrap everything in wrapper classes, where I have complete control.

    Read the article

  • How to decide whether to implement an operation as Entity operation vs Service operation in Domain Driven Design?

    - by Louis Rhys
    I am reading Evans's Domain-Driven Design. The book says that there are entities and there are services. If I were to implement an operation, how do I decide whether I should add it as a method on an entity or put it in a service class? E.g. myEntity.DoStuff() or myService.DoStuffOn(myEntity)? Does it depend on whether other entities are involved? If it involves other entities, should it be a service operation? But entities can have associations and can be traversed from there too, right? Does it depend on whether the operation is stateless or not? But a service can also access an entity's state, right? In myService.DoStuffOn it can have code like if (myEntity.IsX) doSomething(); which means that it will depend on the state. Or does it depend on complexity? How do you define complex operations?
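    A rough illustration of the usual rule of thumb (in Java, with made-up domain names): behaviour that concerns a single entity's own state lives on the entity, while an operation that coordinates several entities, and has no state of its own, goes into a domain service.

        // Entity: owns and guards its own state.
        class Account {
            private long balanceCents;

            void deposit(long amountCents) {
                if (amountCents <= 0) throw new IllegalArgumentException("amount must be positive");
                balanceCents += amountCents;
            }

            void withdraw(long amountCents) {
                if (amountCents > balanceCents) throw new IllegalStateException("insufficient funds");
                balanceCents -= amountCents;
            }
        }

        // Domain service: stateless coordination across more than one entity.
        class TransferService {
            void transfer(Account from, Account to, long amountCents) {
                from.withdraw(amountCents);
                to.deposit(amountCents);
            }
        }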

    Read the article

  • Why is Internet Explorer the only browser to be referred to by version when talking about compatibility?

    - by Rue Leonheart
    Whenever I read something or hear someone talking about HTML5, CSS and JavaScript support, they always refer to Internet Explorer with a version number, such as Internet Explorer 6 or Internet Explorer 9, but they refer to Google Chrome, Firefox, Safari and the others without version numbers. Shouldn't they also specify the version numbers in which certain web technologies are unsupported for the other browsers, instead of just for Internet Explorer?

    Read the article

  • Examples of data-centric LOB applications in Silverlight with creative design?

    - by Alexander Galkin
    I am a database developer and I have rather limited experience with interface design. I am currently working on a free-time project which is mostly data-centric, and I would like to develop an eye-catching interface for it using Silverlight. What I am looking for are examples of nice and interesting LOB applications in Silverlight that don't use paid frameworks. So far, I could only find something like the Telerik sample application, but it uses a lot of Telerik controls that I can't afford to buy.

    Read the article

  • Maintainability of Boolean logic - Is nesting if statements needed?

    - by Vaccano
    Which of these is better for maintainability?

        if (byteArrayVariable != null)
            if (byteArrayVariable.Length != 0)
                // Do something with byteArrayVariable

    OR

        if ((byteArrayVariable != null) && (byteArrayVariable.Length != 0))
            // Do something with byteArrayVariable

    I prefer reading and writing the second, but I recall reading in Code Complete that doing things like that is bad for maintainability. This is because you are relying on the language not to evaluate the second part of the if when the first part is false, and not all languages do that. (The second part will throw an exception if evaluated with a null byteArrayVariable.) I don't know if that is really something to worry about or not, and I would like general feedback on the question. Thanks.
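    For reference, the languages involved here do pin this down: C# defines && as short-circuiting (as does Java, shown below), and VB.NET provides AndAlso for the same purpose, so the second operand is guaranteed not to be evaluated when the first is false. A Java rendering of the guarded check:

        // && is specified to short-circuit, so the length check is never
        // evaluated when the array reference is null.
        static boolean hasData(byte[] byteArrayVariable) {
            return byteArrayVariable != null && byteArrayVariable.length != 0;
        }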

    Read the article

  • Is There a Real Advantage to Generic Repository?

    - by Sam
    I was reading through some articles on the advantages of creating generic repositories for a new app (example). The idea seems nice because it lets me use the same repository to do several things for several different entity types at once:

        IRepository repo = new EfRepository(); // Would normally pass through IoC into constructor
        var c1 = new Country() { Name = "United States", CountryCode = "US" };
        var c2 = new Country() { Name = "Canada", CountryCode = "CA" };
        var c3 = new Country() { Name = "Mexico", CountryCode = "MX" };
        var p1 = new Province() { Country = c1, Name = "Alabama", Abbreviation = "AL" };
        var p2 = new Province() { Country = c1, Name = "Alaska", Abbreviation = "AK" };
        var p3 = new Province() { Country = c2, Name = "Alberta", Abbreviation = "AB" };
        repo.Add<Country>(c1);
        repo.Add<Country>(c2);
        repo.Add<Country>(c3);
        repo.Add<Province>(p1);
        repo.Add<Province>(p2);
        repo.Add<Province>(p3);
        repo.Save();

    However, the rest of the implementation of the repository has a heavy reliance on LINQ:

        IQueryable<T> Query();
        IList<T> Find(Expression<Func<T, bool>> predicate);
        T Get(Expression<Func<T, bool>> predicate);
        T First(Expression<Func<T, bool>> predicate);
        // ... and so on

    This repository pattern worked fantastically for Entity Framework, and pretty much offered a one-to-one mapping of the methods available on DbContext/DbSet. But given the slow uptake of LINQ by data access technologies outside of Entity Framework, what advantage does this provide over working directly with the DbContext? I attempted to write a PetaPoco version of the repository, but PetaPoco doesn't support LINQ expressions, which makes creating a generic IRepository interface pretty much useless unless you only use it for the basic GetAll, GetById, Add, Update, Delete, and Save methods and utilize it as a base class. Then you have to create specific repositories with specialized methods to handle all the "where" clauses that I could previously pass in as a predicate. Is the generic repository pattern useful for anything outside of Entity Framework? If not, why would someone use it at all instead of working directly with Entity Framework? Edit: The original link doesn't reflect the pattern I was using in my sample code. Here is an (updated link).
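    To make the trade-off concrete: without something like LINQ expression trees, the reusable part of a generic repository typically shrinks to plain CRUD, and every ad-hoc "where" clause becomes a named method on a type-specific repository instead of a predicate supplied by the caller. A rough Java sketch of that reduced shape (names are illustrative only):

        import java.util.List;
        import java.util.Optional;

        // The generic part: only the operations every entity shares.
        interface Repository<T, ID> {
            Optional<T> findById(ID id);
            List<T> findAll();
            void add(T entity);
            void update(T entity);
            void delete(T entity);
        }

        // The specific part: each filter the callers need gets its own method.
        interface CountryRepository extends Repository<Country, Long> {
            Optional<Country> findByCountryCode(String code);
        }

        class Country {
            Long id;
            String name;
            String countryCode;
        }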

    Read the article

  • Starting to Program C++ and Java

    - by user0321
    So as the title states, I'm trying to start programming in C++ and Java. I took C++ and Java courses in high school and I'm trying to get back into it. Of course, all I want to get working now is a simple "Hello World" program. A couple of things: I want to use an IDE, and I've decided on Eclipse; I'm just confused about how to go about downloading and using it. For Java: I get stuck right on the download page. They show Eclipse Classic, Eclipse IDE for Java Developers and Eclipse IDE for Java EE Developers. I have only programmed in Notepad and compiled from the command prompt. Question 1: Which version of Eclipse should I download? Question 2: Do I need to install the Java JDK, or does it come built into Eclipse? For C++: I guess I download the separate Eclipse IDE for C/C++ Developers? I'm not too sure. I remember using Microsoft Visual C++, and I remember it being weird. Anyway, Question 3: Which version of Eclipse should I download? Question 4: Does C++ have a development kit, or does it come built into Eclipse?
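    For what it's worth, the Java program being aimed at is tiny; once Eclipse and a JDK are installed, this is the whole thing (the class name is arbitrary):

        // The classic first program: prints one line to standard output.
        public class HelloWorld {
            public static void main(String[] args) {
                System.out.println("Hello, World!");
            }
        }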

    Read the article

  • How does one handle sensitive data when using Github and Heroku?

    - by Jonas
    I am not yet accustomed to the way Git works (and I wonder if anyone besides Linus is ;)). If you use Heroku to host your application, you need to have your code checked into a Git repo. If you work on an open-source project, you will most likely share this repo on GitHub or another Git host. Some things should not be checked into the public repo: database passwords, API keys, certificates, etc. But these things still need to be part of the Git repo, since you use it to push your code to Heroku. How do you work with this use case? Note: I know that Heroku or PHPFog can use server variables to circumvent this problem. My question is more about how to "hide" parts of the code.
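    The usual pattern (which Heroku's config vars and the "server variables" mentioned in the note implement) is to keep the secret out of the repository entirely and read it from the environment at runtime, so the repo only ever contains the name of the setting. A minimal Java illustration with a made-up variable name:

        // Reads a secret from the environment instead of from a file committed to Git.
        // DATABASE_PASSWORD is an arbitrary example name.
        public class Config {
            public static String databasePassword() {
                String value = System.getenv("DATABASE_PASSWORD");
                if (value == null) {
                    throw new IllegalStateException("DATABASE_PASSWORD is not set");
                }
                return value;
            }
        }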

    Read the article

  • Composition vs aggregation - which does this example show?

    - by meWantToLearn
    Composition and aggregation are both confusing to me. Does my code sample below indicate composition or aggregation?

        class A {
            public static function getData($id) {
                // something
            }

            public static function checkUrl($url) {
                // something
            }
        }

        class B {
            public function executePatch() {
                $data = A::getData(12);
            }

            public function readUrl() {
                $url = A::checkUrl('http/erere.com');
            }

            public function storeData() {
                // something not related to class A at all
            }
        }

    Is class B a composition of class A, or is it an aggregation of class A? Does composition purely mean that if class A gets deleted, class B does not work at all, and aggregation that if class A gets deleted, the methods in class B that do not use class A will still work?
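    For contrast, the distinction is usually illustrated with object lifetimes rather than static calls (the code above is arguably a plain dependency of B on A, which is neither). A Java sketch with made-up classes: in composition the container creates and owns the part, so the part dies with it; in aggregation the container merely refers to objects that live independently.

        // Composition: the Engine is created inside the Car and never shared,
        // so it cannot outlive the Car.
        class Engine {
            void start() { /* ... */ }
        }

        class Car {
            private final Engine engine = new Engine();

            void drive() { engine.start(); }
        }

        // Aggregation: the Team only refers to Players created elsewhere; they
        // keep existing even if the Team is discarded.
        class Player {
            final String name;
            Player(String name) { this.name = name; }
        }

        class Team {
            private final java.util.List<Player> roster = new java.util.ArrayList<>();

            void add(Player p) { roster.add(p); }   // handed in from outside
        }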

    Read the article

  • What do you do if you reach a design dead-end in evolutionary methods like Agile or XP?

    - by Dipan Mehta
    As I was reading Martin Fowler's famous blog post Is Design Dead?, one of the striking impressions I got was that, since in Agile methodology and Extreme Programming the design as well as the programming is evolutionary, there are always points where things need to be refactored. It may be possible that when a programmer's level is good, and they understand design implications and don't make critical mistakes, the code continues to evolve. However, what is the ground reality in a normal context? On a normal day, given that significant development has already gone into a product, when a critical change occurs in the requirements, isn't it a constraint that, however much we wish otherwise, fundamental design aspects cannot be modified (without throwing away a major part of the code)? Isn't it quite likely that one reaches a dead end on any further possible improvement of the design and requirements? I am not advocating any non-Agile practice here, but I want to hear from people who practice agile or iterative or evolutionary development methods about their real experiences. Have you ever reached such dead ends? How have you managed to avoid or escape them? Or are there measures to ensure that the design remains clean and flexible as it evolves?

    Read the article

  • Algorithm to reduce calls to mapping API

    - by aidan
    A random distribution of points lies on a map. This data lies behind an API, and I want to grab the complete set of points within a given bounding box. I can query the API with the bounding box, and the API will return the set of points that fall within that box. The problem is that the API limits the result set to 10 items, with no pagination and no indication of whether more points have been omitted. So I made a recursive algorithm that takes a bounding box and requests the points that lie within it. If the result set is exactly 10 items, then I split the bounding box into four quadrants and recurse. It works fine, but my question is this: if I want to minimize the number of API calls, what is the optimal way to split the bounding box? Splitting it into quadrants was just an arbitrary decision. When there are a lot of points on the map, I have to drill down many levels before I start getting meaningful results, so I imagine it might be faster to split the box into, say, 9, 16, or more sections. But if I do that, then I eventually get to a point where a lot of requests are returning 0 results, which isn't so efficient. Also, does the size of the limit on the result set affect the answer? (This is all assuming that I have no prior knowledge of the nominal point density in the bounding box.)
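    A minimal sketch of the recursive subdivision described above (Java; the Api interface and the Point/Box records are placeholders for the real service), parameterised by the number of splits per side so a fan-out of 4, 9 or 16 sub-boxes can be compared:

        import java.util.ArrayList;
        import java.util.List;

        public class BoxCrawler {
            static final int LIMIT = 10;   // the API's hard result cap

            record Point(double lat, double lon) {}
            record Box(double minLat, double minLon, double maxLat, double maxLon) {}

            interface Api { List<Point> query(Box box); }   // stand-in for the real API call

            // If a box comes back at the limit, the result may be truncated, so the
            // box is subdivided and each sub-box is queried recursively.
            static List<Point> fetchAll(Api api, Box box, int splitsPerSide) {
                List<Point> result = api.query(box);
                if (result.size() < LIMIT) {
                    return result;                          // definitely complete
                }
                List<Point> all = new ArrayList<>();
                double dLat = (box.maxLat() - box.minLat()) / splitsPerSide;
                double dLon = (box.maxLon() - box.minLon()) / splitsPerSide;
                for (int i = 0; i < splitsPerSide; i++) {
                    for (int j = 0; j < splitsPerSide; j++) {
                        Box sub = new Box(box.minLat() + i * dLat, box.minLon() + j * dLon,
                                          box.minLat() + (i + 1) * dLat, box.minLon() + (j + 1) * dLon);
                        all.addAll(fetchAll(api, sub, splitsPerSide));
                    }
                }
                return all;
            }
        }

    Note that points sitting exactly on a sub-box boundary may be returned more than once, so some de-duplication may be needed depending on how the API treats box edges.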

    Read the article

  • Just interviewed, turned down, now got an email asking to chat with recruiter. No response. What should I do? [closed]

    - by Lambert
    I was turned down after two interviews by a prominent company for an internship, and only a couple of days later I was asked when I had 10-15 minutes to chat today. Of course I would love to, so I emailed back within just 10 minutes of their email, let them know what times I was available, and asked when the best time would be and whether I should go somewhere or expect a phone call. There has been no reply since yesterday afternoon, and the recruiter wanted to talk to me today. I don't want to lose this opportunity, but I have no way to contact the recruiter other than by email, and the recruiter hasn't responded to my emails from yesterday, even though we were supposed to talk today. What's the best thing I can do (preferably within the next few hours!) to get the job? Is that even likely why she emailed me, or is a different reason more probable? Any ideas?

    Read the article

  • How to share code as open source?

    - by Ethel Evans
    I have a little program that I wrote for a local group to handle a somewhat complicated scheduling problem: scheduling multiple meetings in multiple locations that change weekly according to certain criteria. It's a niche need, but I wouldn't be surprised if there are other groups that could use software like this. In fact, we've had requests from others for directions on starting a group like this, and if their groups get as big, they might also want special software to help with scheduling. I plan to continue developing the program and eventually make it an online web app, but a very simple alpha version is complete as a console app. I'd like to make it available as open source, but I have no idea what kind of process I should go through first. Right now, all I have is Java code, not even thoroughly unit-tested. I haven't shown the code to anyone else. There is no documentation. I don't know where I would put the code so others could access it. I don't know anything about licensing it. I don't know what kind of support people will expect from me if I release it as open source. I have no idea what else I should worry about. Can someone outline for me (or post an article that outlines) the process of taking open source software from "coded" to "completed / available"? I really don't want to embarrass myself by doing things weirdly.

    Read the article

  • Help migrating from VB style programming to OO programming [closed]

    - by Agent47DarkSoul
    Being a hobbyist Java developer, I quickly took to OO programming and understood its advantages over the procedural C code I wrote in college, but I could never grasp VB event-based code (weird, right?). The bottom line is that OOP came naturally to me. Currently I work in a small development firm developing C# applications. My peers here are rather attached to the VB style of programming: most of the C# code written is VB6 event-handling code in C#'s skin. I tried explaining OOP and its advantages to them, but it wasn't clear to them, maybe because I have never been much of a VB programmer. So can anybody suggest any resources (books, web articles) on how to migrate from VB-style to OO-style programming?

    Read the article

  • Pattern for a class that does only one thing

    - by Heinzi
    Let's say I have a procedure that does stuff:

        void doStuff(initialParams) { ... }

    Now I discover that "doing stuff" is quite a complex operation. The procedure becomes large, I split it up into multiple smaller procedures, and soon I realize that having some kind of state would be useful while doing stuff, so that I need to pass fewer parameters between the small procedures. So, I factor it out into its own class:

        class StuffDoer {
            private someInternalState;

            public Start(initialParams) { ... }

            // some private helper procedures here
        }

    And then I call it like this:

        new StuffDoer().Start(initialParams);

    or like this:

        new StuffDoer(initialParams).Start();

    And this is what feels wrong. When using the .NET or Java APIs, I never call new SomeApiClass().Start(...); which makes me suspect that I'm doing it wrong. Sure, I could make StuffDoer's constructor private and add a static helper method:

        public static DoStuff(initialParams) {
            new StuffDoer().Start(initialParams);
        }

    But then I'd have a class whose external interface consists of only one static method, which also feels weird. Hence my question: is there a well-established pattern for this type of class, which has only one entry point and no "externally recognizable" state, i.e., instance state is only required during execution of that one entry point?
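    This is essentially the "method object" shape; one common way to keep the single entry point while hiding both the constructor and the working state is a static entry method over a private, throwaway instance. A rough Java sketch (all names illustrative):

        // One public static entry point; the instance and its scratch state only
        // exist for the duration of a single run.
        public final class StuffDoer {
            private final String initialParams;
            private int internalState;

            private StuffDoer(String initialParams) {   // not constructible from outside
                this.initialParams = initialParams;
            }

            public static void doStuff(String initialParams) {
                new StuffDoer(initialParams).run();
            }

            private void run() {
                stepOne();
                stepTwo();
            }

            private void stepOne() { internalState = initialParams.length(); }
            private void stepTwo() { /* uses internalState ... */ }
        }

    Callers then write StuffDoer.doStuff("params"); and never see the stateful instance at all.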

    Read the article

  • Updating large icon in iTunes Connect

    - by Shaggy Frog
    Just wanted to see if I understand properly how and when one can change the "Large Icon" for an iOS app in iTunes Connect. My specific questions are marked below. To start, the facts (as I gather them) from version 6.6 of the iTC guide (March 2, 2011):

    - The Large Icon is a "locked" piece of version information.
    - "You will only be permitted to edit Locked version information when your app is in an Editable state."
    - The "Editable" states are: Prepare For Upload, Waiting For Upload, Waiting For Review, Waiting For Export Compliance, Upload Received, Rejected, Developer Rejected, Invalid Binary, Missing Screenshot.

    Am I missing anything up to this point? If not, am I correct to say that the only time I can change an app's Large Icon is when I update the application? Here's a more specific use case:

    - My app is currently on sale, version 2.0.
    - I have version 2.1 ready, and I want the update to coincide with a sale, so I also put a "SALE" banner on top of my large icon (what most devs are doing).
    - I have to upload this "SALE" Large Icon when I upload the binary. If I wait until it's been reviewed, it's too late, and I'll have to developer-reject the binary so I can fix it. Is this correct?
    - Say I want the sale to last a week. At the end of that week, I'll want to switch my Large Icon back to the pre-"SALE" version. Will I necessarily have to upload a new binary at that time?

    (Also posted on the Developer Forums, but it's getting no love there...)

    Read the article
