Search Results

Search found 25198 results on 1008 pages for 'non programmers'.


  • Does one's native spoken language affect quality of code?

    - by Xepoch
    There is a school of thought in linguistics that problem solving is very much tied to the syntax, semantics, grammar, and flexibility of one's own native spoken language. Working with various international development teams, I can clearly see a mental culture (if you will) in the codebase. Programming language aside, the German code is quite different from what my colleagues in India write. Likewise, code from Middle America is distinctly different from code written in Coastal America (actually, IBM noticed this years ago). Do you notice with your international colleagues (from ANY country) that coding style and problem solving are in line with native tongues?

    Read the article

  • JavaScript Rookie Question: Define Variables Inline

    - by Dylan Kinnett
    I'm proficient with HTML and CSS, but I'm still pretty shaky when it comes to JavaScript. That said, I've been able to build a site using the Internet Archive Book Reader, which relies on reader.js. Here's a copy of one of my versions of reader.js: https://gist.github.com/dylan-k/ed4efed2384e221d46cc It's a good site, but I find I have to repeat things a lot. Basically, I have one copy of reader.js for every page/book featured on the site. It seems there must be a better way. I reuse the script, making copies, just so that I can change lines 28, 80, 83, and 84. Is there a way I could include just one copy of reader.js and then use a <script> tag to define those 4 lines for the individual pages?

    Read the article

  • Should I ditch AJAX in client-side web development when I've got a WebSocket open?

    - by jt0dd
    I was thinking that maybe I should forget AJAX (HTTP) requests when I've got a WebSocket open between client and server, but I decided I should ask here to check whether this could be a bad practice for some reason I'm not thinking of. Once the socket is open, there's less syntax (often meaning simpler error handling) involved in passing information between client and server with Socket.io (just one example of a WebSocket library). Is there some obvious situation where a WebSocket (Socket.io, for example) isn't capable of handling something that an AJAX request could do easily?

    Read the article

  • Help me classify this type of software architecture

    - by Alex Burtsev
    I've read some books about software architecture, but I can't properly classify the architecture we are using in our project. It's some kind of enterprise architecture, but which exactly... SOA, ESB (Enterprise Service Bus), message bus, event-driven SOA; there are so many terms in enterprise software. The system is based on custom XML message exchanges between services (it's not SOAP, nor any other XML-based standard, just plain XML). These messages represent notifications (state changes) that are applied to the domain model (it's not like CRUD, where you serialize the whole domain object and pass it to a service for persistence). The system is centralized, and system participants use different programming languages and frameworks (C++, C#, Java). Also, messages are not processed the moment they are received: they are stored first, and processing begins on demand. It's called SOA + EDA :-)

    Read the article

  • What to do when a project is too difficult to continue developing?

    - by MaxWell
    As a developer, can you tell your project manager that an application is unworkable? Or, if you're a project manager, how would you need this presented to you in order to be convinced? This isn't about "how to work on a poor project"; it assumes you cannot. I can provide an example of the situation if anyone thinks it's important, but I'm trying to avoid proposed solutions for "plodding through".

    Read the article

  • How are lookaheads propagated in the "channel" method of building an LALR parser?

    - by greenoldman
    The method is described in the Dragon Book; however, I read about it in "Parsing Techniques" by D. Grune and C.J.H. Jacobs. I start from my understanding of building channels for the NFA: channels are built once; they are like water channels with a current; you "drop" lookahead symbols in the right places (sources) of the channel and they propagate with the "current"; when a symbol propagates there are no barriers (the only things needed for propagation are the presence of a channel and its direction/current), i.e. a lookahead cannot just die out of the blue. Is that right? If I am correct, then the eof lookahead should be present in all states, because its source is the start production, and all other production states are reachable from the start state. How the DFA is made out of this NFA is not perfectly clear to me -- the authors of the mentioned book write about preserving channels, but I see no purpose for that if you have already propagated the lookaheads. If the channels have to be preserved, are they cut off from the source if the DFA state does not include the source NFA state? I assume not -- the channels still run between DFA states, not only within a given DFA state. In effect, eof should still be present in all items in all states. But when you take a look at the DFA presented in the book (the PDF is from the errata): DFA for LALR (fig. 9.34 in the book, p. 301), you will see there are items without eof in the lookahead. The grammar for this DFA is:

        S -> E
        E -> E - T
        E -> T
        T -> ( E )
        T -> n

    So how was it computed -- when was eof dropped, and on what condition? Update: it is a textual PDF, so here are the two interesting DFA states (# is eof):

        State 1:
        S -> •E      [#]
        E -> •E-T    [#-]
        E -> •T      [#-]
        T -> •n      [#-]
        T -> •(E)    [#-]

        State 6:
        T -> (•E)    [#-)]
        E -> •E-T    [-)]
        E -> •T      [-)]
        T -> •n      [-)]
        T -> •(E)    [-)]

    The arc from state 1 to state 6 is labeled (.

    Read the article

  • Ruby on Rails background API polling

    - by Matthew Turney
    I need to build a free/busy calendar integration with Zimbra. Unlike Outlook, it seems, Zimbra requires polling its API. I need to be able to grab the free/busy data in background tasks for tens of thousands of users on a regular interval, preferably every few minutes. What would be the best way to implement this in a Rails application without bogging down our current Resque tasks? I have considered moving this process to something like Node.js, or to something similar in Ruby. The biggest problem is that we have no control over the IO, as each client's Zimbra instance could be slow, and we don't want to create a huge backlog of tasks. Thoughts and ideas?

    Read the article

  • How broad should a computer science/engineering student's focus be?

    - by AskQuestions
    I have less than two years of college left and I still don't know what to focus on. But this is not about me; this is about being a future developer. I realize that questions like "Which language should I learn next?" are not really popular, but I think my question is broader than that. I often see people write things like "You have to learn many different things. Being a developer is not about learning one programming language or technology and then doing that for the rest of your life." Well, sure, but it's impossible to really learn everything thoroughly. Does that mean one should just learn the basics of everything and then learn some things more thoroughly AFTER getting a particular job? I mean, the best way to learn programming is by actually programming stuff... but projects take time. Does an average developer really switch between (for example) being a web developer, doing artificial intelligence and machine learning related work, and programming close to the hardware? I mean, I know a lot of different things, but I don't feel proficient in any of them. If I want to find a job as a web developer (that's just an example) after I finish college, shouldn't I do some web-related project (maybe using something I still don't know) rather than try to learn functional programming? So, the question is: how broad should a computer science student's field of focus be? One programming language is surely far too narrow, but what is too broad?

    Read the article

  • Why is an anemic domain model considered bad in C#/OOP, but very important in F#/FP?

    - by Danny Tuppeny
    In a blog post on F# for fun and profit, it says: "In a functional design, it is very important to separate behavior from data. The data types are simple and 'dumb'. And then separately, you have a number of functions that act on those data types. This is the exact opposite of an object-oriented design, where behavior and data are meant to be combined. After all, that's exactly what a class is. In a truly object-oriented design in fact, you should have nothing but behavior -- the data is private and can only be accessed via methods. In fact, in OOD, not having enough behavior around a data type is considered a Bad Thing, and even has a name: the 'anemic domain model'." Given that in C# we seem to keep borrowing from F# and trying to write more functional-style code, how come we're not borrowing the idea of separating data and behavior, and even consider it bad? Is it simply that the definition doesn't fit with OOP, or is there a concrete reason that it's bad in C# that for some reason doesn't apply in F# (and is in fact reversed)? (Note: I'm specifically interested in the differences in C#/F# that could change the opinion of what is good/bad, rather than in individuals who may disagree with either opinion in the blog post.)
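
    To make the contrast concrete, here is roughly what I mean by the two styles (a made-up sketch; the account types are just for illustration):

        using System;

        // "Anemic"/functional style: a dumb, immutable data type,
        // with behavior kept in separate functions that act on it.
        public class AccountData
        {
            public decimal Balance { get; private set; }
            public AccountData(decimal balance) { Balance = balance; }
        }

        public static class AccountFunctions
        {
            public static AccountData Deposit(AccountData account, decimal amount)
            {
                return new AccountData(account.Balance + amount);
            }
        }

        // OO style: the data is private and can only be changed through
        // behavior defined on the type itself.
        public class Account
        {
            private decimal balance;

            public void Deposit(decimal amount)
            {
                if (amount <= 0)
                    throw new ArgumentException("Amount must be positive.", "amount");
                balance += amount;
            }
        }

    As I read the blog post, the first shape is the norm in F#, while in C# literature it's exactly what gets labeled "anemic".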

    Read the article

  • Finishing an iteration early

    - by f1dave
    I'd like some input on this from those working with agile methodologies... On a current project we're finding that development on our planned user stories finishes some time before the end of the iteration, and that the testing effort and business acceptance are what actually drag us out longer towards the end. This means that the devs in question have spare time, and they're essentially going out to the iteration+1 backlog and starting work on cards there before our current iteration's cards are 'done'. As iteration manager, I want to put a stop to this - I want a more team-oriented approach where the group takes ownership of getting all the cards done, as opposed to "Well, dev's done, so what do I dev next?" The problem I face is convincing the team of this. On one hand, I understand why the devs don't want to test the code they've written (they do write unit tests, of course, but the manual testing to be done could be influenced by their bias). The team sees working ahead as making our next iterations easier, because a lot of the work is done before we start. I see this as screwing with the whole system of planning/actuals - but it's difficult to convince the team why this matters. What advice can you guys and girls give? How do we stop devs from reaching ahead? What should they be doing instead? How much of a problem is this in the scheme of things, if things are still getting done?

    Read the article

  • How to import EJB project classes into a web project in Eclipse?

    - by Jhonnytunes
    I have three projects in Eclipse: a Maven EJB project, a Maven Wicket webapp project, and an enterprise application project that includes the EJB and web projects I mentioned before as modules. My question is: how can I use classes from the EJB project in my web project? Let's say ClassA is in the EJB project and I want to write, from the web project, ClassA cl = new ClassA(). I won't be allowed to, since the web project doesn't contain ClassA; the EJB project does. I want to use the EJB project as the business logic/JPA part. I also want to include some non-bean classes there that I want to use from the web. I don't know if that's possible. The web project has no business logic, just the Wicket pages calling the EJBs and using their classes. I'm new to the EE world; I've never made an EAR file including a WAR and an EJB JAR for deployment on GlassFish. Thanks in advance.

    Read the article

  • My Client wants to convert me from a Contractor to an Employee. I'd like part of the Headhunter's fee. Is this fair?

    - by Bob Kaufman
    I am happily working on a contract for a good, solid company. This client is very happy with my work and has asked me to consider converting to full-time employment. My problem is the headhunter. This firm has not been entirely upfront with me throughout this contract. A mistake was made by the firm that benched me for several days, costing me those days' pay. Inadequate healthcare coverage left me with a bill of several thousand dollars after my wife's brief hospital stay. My feeling is that I did the work that earned me the invitation to work full-time for this company. Asking for 1/3 of the commission I figure they're going to receive would nicely counterbalance the inequities that I perceive were dealt to me. Is this an (un)reasonable or (in)appropriate request?

    Read the article

  • How common is prototyping as the first stage of development?

    - by EpsilonVector
    I've been taking some software design courses in the past few semesters, and while I see the benefit in a lot of the formalism, I still feel like it doesn't tell me anything about the program itself. You can't tell how the program is going to operate from the use case spec, even though it discusses what the program can do, and you can't tell anything about the user experience from the requirements document, even though it can include QA requirements. Sequence diagrams are about as good a description of how the software works as the call stack -- in other words, a very limited, highly partial view of the overall system -- and a class diagram is great for describing how the system is built, but is utterly useless in helping you figure out what the software needs to be. Where in all this formalism is the bottom line: how the program looks, how it operates, and what experience it gives? Doesn't it make more sense to design off of that? Isn't it better to figure out how the program should work via a prototype and strive to implement it for real? I know I'm probably suffering from being taught engineering by theoreticians, but I've got to ask: do they do this in the industry? How do people figure out what the program actually is, not just what it should conform to? Do people prototype a lot? Or do they mostly use formal tools like UML, and I just haven't gotten the hang of using them yet?

    Read the article

  • When following SRP, how should I deal with validating and saving entities?

    - by Kristof Claes
    I've been reading Clean Code and various online articles about SOLID lately, and the more I read about it, the more I feel like I don't know anything. Let's say I'm building a web application using ASP.NET MVC 3, and I have a UsersController with a Create action like this:

        public class UsersController : Controller
        {
            public ActionResult Create(CreateUserViewModel viewModel)
            {
            }
        }

    In that action method I want to save a user to the database if the data that was entered is valid. Now, according to the Single Responsibility Principle, an object should have a single responsibility, and that responsibility should be entirely encapsulated by the class; all its services should be narrowly aligned with that responsibility. Since validation and saving to the database are two separate responsibilities, I guess I should create two separate classes to handle them, like this:

        public class UsersController : Controller
        {
            private ICreateUserValidator validator;
            private IUserService service;

            public UsersController(ICreateUserValidator validator, IUserService service)
            {
                this.validator = validator;
                this.service = service;
            }

            public ActionResult Create(CreateUserViewModel viewModel)
            {
                ValidationResult result = validator.IsValid(viewModel);
                if (result.IsValid)
                {
                    service.CreateUser(viewModel);
                    return RedirectToAction("Index");
                }
                else
                {
                    foreach (var errorMessage in result.ErrorMessages)
                    {
                        ModelState.AddModelError(String.Empty, errorMessage);
                    }
                    return View(viewModel);
                }
            }
        }

    That makes some sense to me, but I'm not at all sure this is the right way to handle things. It is, for example, entirely possible to pass an invalid instance of CreateUserViewModel to the IUserService class. I know I could use the built-in DataAnnotations, but what about when they aren't enough? Imagine that my ICreateUserValidator checks the database to see if there is already another user with the same name... Another option is to let the IUserService take care of the validation, like this:

        public class UserService : IUserService
        {
            private ICreateUserValidator validator;

            public UserService(ICreateUserValidator validator)
            {
                this.validator = validator;
            }

            public ValidationResult CreateUser(CreateUserViewModel viewModel)
            {
                var result = validator.IsValid(viewModel);
                if (result.IsValid)
                {
                    // Save the user
                }
                return result;
            }
        }

    But I feel I'm violating the Single Responsibility Principle here. How should I deal with something like this?
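
    A third option I've been toying with is a small decorator around IUserService (just a sketch, reusing the interfaces above and assuming IUserService.CreateUser returns a ValidationResult as in the second example), so that validation and saving each stay in their own class:

        // Sketch only: a validating decorator wrapping the "real" IUserService.
        public class ValidatingUserService : IUserService
        {
            private readonly ICreateUserValidator validator;
            private readonly IUserService inner;

            public ValidatingUserService(ICreateUserValidator validator, IUserService inner)
            {
                this.validator = validator;
                this.inner = inner;
            }

            public ValidationResult CreateUser(CreateUserViewModel viewModel)
            {
                var result = validator.IsValid(viewModel);
                if (!result.IsValid)
                {
                    return result;
                }

                // The inner, undecorated service only saves; it never sees invalid input.
                return inner.CreateUser(viewModel);
            }
        }

    The controller would then depend only on IUserService, and the container would wrap the plain UserService in the decorator.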

    Read the article

  • Handling Deployment to Multiple Environments

    - by JayGee
    How should I handle deploying web applications to multiple servers? Constraints: I have dev, test, and prod environments. No build server is available. Developers can't deploy to prod; the people who do deploy to prod copy files from test to prod, and they don't have VS installed. Currently, the way it's handled is with web.config transforms. However, deploying to prod involves putting prod code on the test server, from where it's copied over. The problem: sometimes simple mistakes are made, such as forgetting to change test back to the right environment after a deployment, or the test config gets moved to prod instead of the prod config. So the question is: what is the best way to prevent these mistakes? My first thought is to let the app determine which server it's on at runtime and use the appropriate settings/connection strings/etc. However, the server names could change in the not-too-distant future, so if multiple apps hard-code them, that would mean updating all of them. The easiest way to handle that situation would be to place a DLL in the GAC that determines the environment. Are there any drawbacks or possible complications that this would cause? Or is there a better solution to the problem than this?
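
    To make my first thought concrete, this is roughly the shared helper I have in mind (just a sketch; the APP_ENVIRONMENT variable name is made up, and ops would set it once per server instead of apps hard-coding server names):

        using System;

        // Sketch of a shared "which environment am I on?" helper.
        // It reads one machine-level setting instead of hard-coded server names.
        public static class DeploymentEnvironment
        {
            public static string Current
            {
                get
                {
                    // Ops sets APP_ENVIRONMENT=Prod (or Test, Dev) once per box.
                    string value = Environment.GetEnvironmentVariable(
                        "APP_ENVIRONMENT", EnvironmentVariableTarget.Machine);
                    return string.IsNullOrEmpty(value) ? "Dev" : value;
                }
            }

            public static bool IsProduction
            {
                get { return string.Equals(Current, "Prod", StringComparison.OrdinalIgnoreCase); }
            }
        }

    Each app would call DeploymentEnvironment.Current (or IsProduction) to pick its settings, so a renamed server only needs its one machine-level variable set.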

    Read the article

  • Benchmarking CPU processing power

    - by Federico Zancan
    Even though many computer benchmarking tools are already available, I'd like to write my own, starting with processing-power measurement. I'd like to write it in C under Linux, but other language alternatives are welcome. I thought of starting with floating-point operations per second, but that's just one idea. I also thought it would be sensible to record the number of CPU cores, the amount of RAM, and the like, to more consistently associate results with the CPU architecture. How would you proceed with the task of measuring CPU computing power? And on top of that: I worry about the background load induced by concurrently running services; is it correct to run the benchmark as a standalone process (possibly detached from the OS environment)?
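
    Something like this crude floating-point loop is my starting point (sketched in C# only to show the shape, since I'd port it to C; a serious benchmark would pin a thread per core, warm up, and average several runs):

        using System;
        using System.Diagnostics;

        // Crude single-threaded estimate of floating-point throughput:
        // time a fixed number of multiply-adds and divide by the elapsed seconds.
        class FlopsSketch
        {
            static void Main()
            {
                const long iterations = 200000000;
                double x = 1.0000000001;
                double acc = 0.0;

                Stopwatch sw = Stopwatch.StartNew();
                for (long i = 0; i < iterations; i++)
                {
                    x *= 1.0000000001;   // one multiply
                    acc += x;            // one add
                }
                sw.Stop();

                double flops = (2.0 * iterations) / sw.Elapsed.TotalSeconds;
                // The accumulator is printed only so the loop can't be optimized away.
                Console.WriteLine("~{0:F2} GFLOPS (ignore: {1})", flops / 1e9, acc);
                Console.WriteLine("Logical cores: {0}", Environment.ProcessorCount);
            }
        }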

    Read the article

  • Artificial Intelligence implemented in x86 Assembly? [closed]

    - by Bigyellow Bastion
    Okay, so I decided that for my upcoming operating system I'll do basically everything in x86 assembly, using only 16-bit mode. I will need to write the software to host on it once I have something up and going, and I'll definitely post the source and a VM-executable file. But for now I'm stuck on the idea of implementing the AI code for some of the games I'm making to host on it. AI in assembly is tedious, and sometimes seems almost impossible, especially complex AI (I'm talking SNES Super Mario World 2: Yoshi's Island AI here, by the way, not Pong AI). I was thinking it would be such a hassle that I'd have to bring in a higher-level language to work some of this out, like maybe C++ or C#, but then I'd have to go through more work linking it into a flat binary that my OS will host, and that adds unnecessary work I wanted to avoid (I don't want a complex system; I want everything as bare-bones as possible, avoiding libraries, APIs, and linkable formats for now, to make everything more directly accessible to the kernel's API).

    Read the article

  • Are there any "best practices" on cross-device development?

    - by vstrien
    Developing for smartphones in the way the industry is currently doing it is relatively new. Of course, there has been enterprise-level mobile development for several decades, but the platforms have changed: think of the move from stylus input to touch input (different screen resolutions, different control layouts, etc.), or of new ways of handling multitasking on mobile platforms (e.g. WP7's "tombstoning"). The way these platforms work isn't totally new (the iPhone has been around for quite a while now, for example), but at the moment, developing a functionally equivalent application for both desktop and smartphone comes down to developing two applications from the ground up. Especially with the birth of Windows Phone, with the .NET platform on board and Silverlight as the UI language, it's becoming appealing to promote the reuse of (parts of) the UI. Still, it's fairly obvious that the needs of an application on a smartphone (or tablet) are very different from the needs of a desktop application, so an (almost) one-to-one conversion will be impossible. My question: are there "best practices", pitfalls, etc. documented about developing "cross-device" applications (for example, developing an app for both the desktop and the smartphone/tablet)? I've been looking at weblogs, scientific papers and more for a week or so, but what I've found so far is only about "migratory interfaces".

    Read the article

  • When using membership provider, do you use the user ID or the username?

    - by Chris
    I've come across this in a couple of different applications I've worked on. They all used the ASP.NET Membership provider for user accounts and for controlling access to certain areas, but when we've gotten down into the code, I've noticed that in one we're passing around the string-based username, like "Ralph Waters", while in another we're passing around the Guid-based user ID from the membership table. Now, both seem to work. You can make methods which get by username or get by user ID, but both have felt somewhat "funny". When you pass a string like "Ralph Waters", you're essentially passing two separate words that make up a unique identifier. And with a Guid, you're passing around a string/number combination which can be cast and made unique. So my question is this: when using the Membership provider, which do you use to get back to the user, the username or the user ID? Thanks all!
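
    For reference, this is the kind of pair of lookups I keep seeing (sketch only; both overloads exist on the standard Membership class):

        using System;
        using System.Web.Security;

        public static class UserLookups
        {
            // Lookup by username (the string, e.g. "Ralph Waters").
            public static MembershipUser GetByUserName(string userName)
            {
                return Membership.GetUser(userName);
            }

            // Lookup by provider user key, which is a Guid with the
            // default SqlMembershipProvider.
            public static MembershipUser GetByUserId(Guid userId)
            {
                return Membership.GetUser(userId);
            }
        }

    With the default SqlMembershipProvider, (Guid)user.ProviderUserKey round-trips to the UserId column in the membership table.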

    Read the article

  • Best Java Book(s) for an Experienced Developer

    - by Steven Elliott Jr
    I have been a .NET developer for about the past five or six years, give or take. I have never done any professional Java development, and the last time I really touched it was probably back in college. I have been toying with the Scala language a little bit, but nothing serious. Recently, I've been offered an opportunity to do some pretty cool work, but using Java instead of .NET. I think I can get by alright with my current skill set, meaning I already know how to program well and am familiar with languages such as C# and C++, so the syntax and all that language stuff are really not a problem. What I need is a really good reference book and a book about how to think in Java. Each language/framework/stack tries to address things a certain way, and I'm sure Java is no different. What are some great Java books that you simply can't live without? Are there any books that talk about the most important parts of Java that must be understood before all else? As a side note, I will be doing mostly Java web development, though I'm not really 100% sure what they are using for persistence, frameworks, servers, etc. Thanks again for the consideration.

    Read the article

  • Proposal for a new position at work

    - by Seth P.
    I have an idea at work for a new Product Manager position at our office. I work with several developers, and it would be helpful to have someone working in a kind of "Scrum Master" capacity, dividing out assignments and making sure they get completed. This position does not currently exist; however, I feel that I have enough evidence to indicate it would be very helpful for our business. What is the best way to present this proposal to my boss? Is there a specific template that you know of for proposing a new position? It should describe the qualifications for the position, its responsibilities, and what metrics we would use to measure it. Thanks. UPDATE: Following Anna's suggestion, I gave more details about this specific position. However, I would ideally like the most generic way to present a new position to my boss.

    Read the article

  • Why not use unmanaged/unsafe code in C#?

    - by user613326
    There is an option in C# to execute code unchecked, outside the managed runtime's safety guarantees (unsafe code). It's generally not advised to do so, as managed code is much safer and it overcomes a lot of problems. However, I am wondering: if you're sure your code won't cause errors, and you know how to handle memory, then why (if you like fast code) follow the general advice? I am wondering this since I wrote a program for a video camera, which required some extremely fast bitmap manipulation. I made some fast graphical algorithms myself, and they work excellently on the bitmaps using unmanaged code. Now I wonder in general: if you're sure you don't have memory leaks or risks of crashes, why not use unmanaged code more often? PS: About my background, I kind of rolled into this programming world and I work alone (I have done so for a few years), so I hope this software design question isn't that strange. I don't really have other people, like a teacher, to ask such things.
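
    To give an idea of the kind of thing I mean, here is a stripped-down version of the sort of loop I use: inverting a 24bpp bitmap through a raw pointer. It needs the /unsafe compiler switch; error handling and pixel-format checks are left out.

        using System.Drawing;
        using System.Drawing.Imaging;

        public static class FastBitmap
        {
            // Invert every pixel of a bitmap by walking the raw bytes with a pointer.
            // The image is locked as 24bpp (3 bytes per pixel: B, G, R).
            public static unsafe void Invert(Bitmap bmp)
            {
                Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
                BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadWrite,
                                               PixelFormat.Format24bppRgb);
                try
                {
                    byte* row = (byte*)data.Scan0.ToPointer();
                    for (int y = 0; y < data.Height; y++)
                    {
                        byte* p = row;
                        for (int x = 0; x < data.Width * 3; x++)
                        {
                            p[x] = (byte)(255 - p[x]);
                        }
                        row += data.Stride;   // Stride can exceed Width * 3 due to padding.
                    }
                }
                finally
                {
                    bmp.UnlockBits(data);
                }
            }
        }

    The same loop written with GetPixel/SetPixel is dramatically slower, which is what pushes me towards the unsafe version.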

    Read the article

  • Should a programmer "think" for the client?

    - by P.Brian.Mackey
    I have gotten to the point where I hate requirements gathering. Customers are too vague for their own good. In an agile environment, where we can show the client a piece of work through to completion, it's not too bad, as we can make small, regular corrections/updates to functionality. In a "waterfall"-type environment (requirements first, nearly complete product next), things can get ugly. This kind of environment has led me to constantly question requirements. E.g., the customer wants to "automatically convert input to the number 1" (referring to a Qty in an order). But what they don't think about is that the "input" could be a simple typo. An "x" in a textbox could be a "whoops", not "I want 1 of those toothpaste products". But there's so much in the air with requirements that I could stand and correct for hours on end hashing out what they want. This just isn't healthy. Working for a corporation, I could try to adjust the culture to fit the agile model that would help us (no small job, and above my pay grade). Or I could sweep the ugly details under the rug and hope for the best. Maybe my customer is trying to get too close to the code? How does one handle the problem of "thinking for the client" without pissing them off with too many questions?

    Read the article

  • Do certain corporations hold more weight on a resume?

    - by Ryan
    Would a developer/tester position at Google, Apple, Microsoft, etc. (any large tech company most people have heard of) be more valuable on a resume than working as a developer/tester somewhere where tech isn't the main objective (a shipping company, restaurant chain, insurance company, etc.)? Let's say you have two offers, and you only plan to stay with whichever company for 5 years before trying to get a better position somewhere else: one at Google with a starting salary of $60,000, and one at some insurance company with a starting salary of $80,000. I guess what I'm trying to say is... with universities, if someone graduates from MIT or Carnegie Mellon, they can pretty much get a job anywhere. Does someone seem more valuable after having worked at a company like Google, Apple, Microsoft, etc.? In other words, would taking the lower-paying job be better in the long run since it's at Google, or would it be better to take the higher-paying job at the insurance company?

    Read the article

  • Is there such a thing as too much experience?

    - by sunpech
    For modern software developers, is there such a thing as having too much experience with a certain technology or programming language? To a recruiter, interviewer, or hiring company, could there be cases where a particular candidate has so much experience in a certain area or technology that it works against the candidate being hired? I'm not talking about cases where a senior developer is applying for an entry-level developer position and has a lot of experience in that sense. Nor am I talking about cases where a candidate is outright lying (e.g. 20+ years of experience with Ruby on Rails). I've overheard this in conversations between hiring managers/developers during happy hours, yet I'm not quite sure I fully understand what they mean.

    Read the article
