Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • How should I pronounce the :: and -> in PHP?

    - by Renesis
    When I read these lines aloud to someone: $controller->process(); UserManager::getInstance(); How should the -> and :: be pronounced? Reading out the characters themselves is cumbersome, and I don't know of any nicknames for them. Being a developer who is used to C-style syntax, I'd like to say "dot", but I'd like something that is easy to say and that people can easily understand. It would also be good to know if there are any pronunciations that have become de facto standards among teams of developers.


  • How to price code reviews to encourage good behavior?

    - by Chris Clark
    I work for a company that has a hosted .NET internet application with many clients. Those clients often want to write customizations for our application. We have APIs to hook into the app, but the customizations themselves are written in .NET. This is a shared, secure hosting environment, and we have to code review these customizations before we can deploy them in our datacenter, to ensure that they don't degrade performance, crash our servers, or open any security vulnerabilities. We charge for these code reviews. The current pricing model is simply a function of the number of lines of code. I think this is a bad idea for a variety of reasons, but primarily because, if we are interested in verifying that the code works as expected, we should be incentivizing good, readable code, not compaction. I would like to propose a pricing model that incorporates some or all of the following as inputs (a sketch of such a model follows below):
      - Lines of code
      - Cyclomatic complexity
      - Average function length
      - Number of functions
    Are there any other metrics I should incorporate, or other ideas for how we can reasonably create pricing for code reviews that encourages safe and understandable code?
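    For illustration, a minimal sketch in Python of a pricing function along these lines; the base fee, thresholds and weights are invented for the example and would need calibration against real review effort:

        def review_price(loc, avg_cyclomatic, avg_function_length,
                         base_fee=200.0, per_line=0.50):
            # Invented weighting: a flat base fee plus a per-line charge that
            # scales up when functions are complex or long, so dense code is
            # not cheaper to review than clear code.
            complexity_factor = 1.0 + max(0.0, (avg_cyclomatic - 5) * 0.10)
            length_factor = 1.0 + max(0.0, (avg_function_length - 30) * 0.02)
            return base_fee + loc * per_line * complexity_factor * length_factor

        # 500 lines of simple code vs. 300 lines of dense code:
        print(review_price(500, avg_cyclomatic=3, avg_function_length=15))   # 450.0
        print(review_price(300, avg_cyclomatic=12, avg_function_length=80))  # 710.0

    Under this hypothetical weighting, the shorter but denser submission costs more to review, which is exactly the incentive the question is after.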


  • How to keep your third-party libraries up to date?

    - by Joonas Pulakka
    Let's say that I have a project that depends on 10 libraries, and within my project's trunk I'm free to use any versions of those libraries. So I start with the most recent versions. Then each of those libraries gets an update once a month (on average). Keeping my trunk completely up to date would then require updating a library reference every three days. This is obviously too much. Even though version 1.2.3 is usually a drop-in replacement for version 1.2.2, you never know without testing. Unit tests aren't enough; if it's a DB / file engine, you have to ensure that it works properly with files that were created with older versions, and maybe vice versa. If it has something to do with GUI, you have to visually inspect everything. And so on. How do you handle this? Some possible approaches (each expressible as a dependency specification, sketched below):
      - If it ain't broke, don't fix it. Stay with your current version of the library as long as you don't notice anything wrong with it when used in your application, no matter how often the library vendor publishes updates. Small incremental changes are just waste.
      - Update frequently in order to keep each change small. Since you'll have to update some day in any case, it's better to update often, so that you notice any problems early, when they're easy to fix, instead of jumping over several versions and letting potential problems accumulate.
      - Something in between. Is there a sweet spot?
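    For concreteness, the three policies can be written straight into a dependency specification; this is a hypothetical pip-style requirements file with invented package names:

        # "If it ain't broke": freeze exactly, update only when forced.
        libfoo==1.2.2
        # Middle ground: accept patch releases of 1.2 automatically.
        libbar~=1.2.0
        # Update frequently: take any 2.x release as soon as it appears.
        libbaz>=2.0,<3.0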


  • In a multidisciplinary team, how much should each member's skills overlap?

    - by spade78
    I've been working in embedded software development for this small startup, and our team is pretty small: about 3-4 people. We're responsible for all engineering, which involves an RF device controlled by an embedded microcontroller that connects to a PC host running some sort of data collection and analysis software. I have come to develop these two guidelines when I work with my colleagues:
      - Define a clear separation of responsibilities, and make sure each person's contribution to the final product doesn't overlap.
      - Don't assume your colleagues know everything about their responsibilities. I assume there is some technology that I will need to be competent at to properly interface with the work of my colleagues.
    The first point is pretty easy for us. I do firmware, one guy does the RF, another does the PC software, and the last does the DSP work. Nothing overlaps in the sense of two people's work being mixed into the final product. For that to happen, one guy has to hand off work to another guy, who will vet it and integrate it himself. The second point is the heart of my question. I've learned the hard way not to trust the knowledge of my colleagues absolutely, no matter how many years of experience they claim to have. At least not until they've demonstrated it to me a couple of times. So whenever I develop a piece of firmware that interfaces with some technology I don't know, I try to learn it and develop a piece of test code that helps me understand what they're doing. That way, if my piece of the product comes into conflict with another piece, I have some knowledge about possible causes. For example, the PC guy has started implementing his GUIs in .NET WPF (C#), using LibUsbDotNet for USB access. So I've been learning C# and the .NET USB library that he uses, and I built a little console app to help me understand how that USB library works. Now, all this takes extra time and energy, but I feel it's justified, as it gives me a foothold for confronting integration problems. Also, I like learning this new stuff, so I don't mind. On the other hand, I can see how this can turn into a time sink for work that won't make it into the final product and may never turn into a problem. So how much experience/skills overlap do you expect in your teammates relative to your own skills? Does this issue go away as teams get bigger and more diverse?


  • "continue" and "break" for static analysis

    - by B. VB.
    I know there have been a number of discussions of whether break and continue should be considered harmful in general (with the bottom line being, more or less, that it depends: in some cases they enhance clarity and readability, but in other cases they do not). Suppose a new project is starting development, with plans for nightly builds that include a run through a static analyzer. Should the project's coding guidelines avoid (or strongly discourage) the use of continue and break, even if that can sacrifice a little readability and require excessive indentation? I'm most interested in how this applies to C code. Essentially, can the use of these control operators significantly complicate the static analysis of the code, possibly resulting in additional false negatives: faults that would have been flagged had break or continue not been used? (Of course, a complete static analysis proving the correctness of an arbitrary program is an undecidable proposition, so please keep responses to any hands-on experience you have with this, rather than theoretical impossibilities.) Thanks in advance!
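    To make the trade-off concrete, here is the same search written with and without break; Python is used for brevity, but the shape of the transformation is the same in C:

        def first_negative_with_break(xs):
            found = None
            for x in xs:
                if x < 0:
                    found = x
                    break              # early exit: one extra control path
            return found

        def first_negative_without_break(xs):
            found = None
            i = 0
            # The exit condition is folded into the loop guard, which some
            # analyzers may handle more uniformly, at the cost of an index
            # variable and a deeper condition.
            while i < len(xs) and found is None:
                if xs[i] < 0:
                    found = xs[i]
                i += 1
            return found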


  • Communication between state machines with hidden transitions

    - by slartibartfast
    The question emerged for me in embedded programming, but I think it applies to quite a number of general networking situations, e.g. when a communication partner fails. Assume we have application logic (a program) running on a computer, and a gadget connected to that computer via a serial interface such as RS232. The gadget has a red/green/blue LED and a button which disables the LED. The LED's color can be driven by software commands over the serial interface, and the state (red/green/blue/off) is read back and causes a reaction in the application logic. Asynchronous behaviour of the application logic with regard to the LED color, down to a certain delay (depending on the execution cycle of the application), is tolerated. What we essentially have is a resource (the LED) which cannot be reserved and handled atomically by software, because the (organic) user can at any time press the button to interfere with or break the software's attempt to switch the LED color. Stripping this example of its physical outfit, I dare say that we have two communicating state machines, A (application logic) and G (gadget), where G executes state changes unbeknownst to A (and also the other way round, but this is not significant in our example), and only A can be modified at a reasonable price. A needs to see the reaction and state of G in one piece of information, which may be (slightly) outdated but must not be inconsistent with respect to the short time window in which this information was generated on the side of G. What I am looking for is a concise method to handle such a situation in embedded software (i.e. no layer/framework like CORBA etc. available): a programming technique which is able to map the complete behaviour of both participants onto classical interfaces of a classical programming language (C in this case). To complicate matters (or rather, to generalize), a simple high-frequency communication cycle from A to G and back (IOW: A rapidly polling G) is not an option because of technical restrictions (delay of serial com, A not always active, etc.). What I currently see as a general solution is:
      - the application logic A as one thread of execution
      - an adapter object (proxy) PG (representing G inside the computer), together with the serial driver, as another thread
      - a communication object between the two (A and PG) which is transactionally safe to exchange
    The two execution contexts (threads) on the computer may be multi-core, or just interrupt-driven, or tasks in an RTOS. The com object contains the following data:
      - suspected state (written by A): effectively a member of the power set of states in G (in our case: red, green, blue, off, red_or_green, red_or_blue, red_or_off... etc.)
      - command data (written by A): test_if_off, switch_to_red, switch_to_green, switch_to_blue
      - operation status (written by PG): operation_pending, success, wrong_state, link_broken
      - new state (written by PG): red, green, blue, off
    The idea of the com object is that A writes whichever (set of) state it thinks G is in, together with a command. (Example: suspected state = "red_or_green", command = "switch_to_blue".) Notice that the commands issued by A will not work if the user has switched off the LED, and A needs to know this. PG will pick up such a com object, try to send the command to G, receive its answer (or a timeout), and set the operation status and new state accordingly. A will take back the object once it is no longer at operation_pending and can react to the outcome. The com object could of course be separated (into two objects, one for each direction), but I think it is convenient in nearly all instances to have the command close to the result. I would like to have major flaws pointed out, or to hear an entirely different view on such a situation.
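    A minimal sketch of the proposed com object and its exchange slot, in Python for brevity (the question targets C, where the slot would be a mutex-guarded buffer or an atomic buffer swap); the states and commands follow the LED example:

        import threading
        from dataclasses import dataclass

        @dataclass
        class ComObject:
            suspected_state: frozenset        # subset of G's states, e.g. {"red", "green"}
            command: str                      # e.g. "switch_to_blue"
            status: str = "operation_pending" # written by PG
            new_state: str = ""               # written by PG

        class Slot:
            """Single-slot, lock-guarded exchange between A and the proxy PG."""
            def __init__(self):
                self._lock = threading.Lock()
                self._obj = None

            def post(self, obj):              # A submits a request
                with self._lock:
                    self._obj = obj

            def peek_pending(self):           # PG picks up work to forward to G
                with self._lock:
                    o = self._obj
                    return o if o and o.status == "operation_pending" else None

            def collect(self):                # A takes the object back when done
                with self._lock:
                    o = self._obj
                    if o and o.status != "operation_pending":
                        self._obj = None
                        return o
                    return None

        # A suspects the LED is red or green and asks for blue:
        slot = Slot()
        slot.post(ComObject(frozenset({"red", "green"}), "switch_to_blue"))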


  • PowerShell programming conventions

    - by Tahir Hassan
    Do you follow any conventions when programming in PowerShell? For example, in scripts which are to be maintained long-term, do you:
      - Use the real cmdlet name or an alias?
      - Specify cmdlet parameter names in full or only partially (dir -Recurse versus dir -r)?
      - Enclose string arguments for cmdlets in quotes (New-Object 'System.Int32' versus New-Object System.Int32)?
      - Specify the types of parameters when writing functions and filters?
      - Write cmdlets in the (official) correct case?
      - Write keywords like BEGIN...PROCESS...END in uppercase only?
    Thanks for any replies.


  • Exposing an API through a DLL

    - by MageNewbie
    I have a C++ application, and I would like to expose an API from it that allows me to control the C++ app from a VB6 app. I want to expose the API through a DLL file. Is this a viable option (is it possible)? I haven't been able to find any literature on using DLLs in this way. In fact, from what I have read it seems like this is not possible, because DLLs get their own new instance of their data for every application they are linked into. If you have met these requirements in an application you built, or if you're knowledgeable on the subject, please give me a push in the right direction.


  • How to handle multiple API calls using JavaScript/jQuery

    - by James Privett
    I need to build a service that will call multiple APIs at the same time and then output the results on the page (think of how a price comparison site works, for example). The idea is that as each API call completes, its results are sent to the browser immediately, and the page grows progressively until all processes are complete. Because these API calls may take several seconds each to return, I would like to do this via JavaScript/jQuery in order to create a better user experience. I have never done anything like this before using JavaScript/jQuery, so I was wondering if there are any frameworks or advice that anyone would be willing to share.
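    The question asks specifically about JavaScript/jQuery; purely to illustrate the underlying pattern (fire all requests at once, render each result the moment it arrives), here is a sketch in Python's asyncio with simulated providers:

        import asyncio

        async def fetch_quote(provider):
            # Placeholder for a real HTTP call; each provider takes its own time.
            await asyncio.sleep(provider["delay"])
            return f"{provider['name']}: {provider['price']}"

        async def main():
            providers = [
                {"name": "A", "price": 100, "delay": 2.0},
                {"name": "B", "price": 95,  "delay": 0.5},
                {"name": "C", "price": 110, "delay": 1.2},
            ]
            tasks = [asyncio.create_task(fetch_quote(p)) for p in providers]
            # as_completed yields each result as soon as it is ready, so the
            # output grows progressively instead of waiting for the slowest API.
            for done in asyncio.as_completed(tasks):
                print(await done)

        asyncio.run(main())

    In jQuery the same effect falls out of attaching a completion callback to each request individually rather than waiting on all of them together.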


  • Are there web frameworks/tools that optimize for speed of development?

    - by Ahmet Yildirim
    I've been a PHP web developer for about two and a half years now. A while ago I started using the CodeIgniter framework to shorten the development process, and I have developed four websites with it. It has been really tiring and boring due to code repetition, which was worst in the form-input-handling functions in controllers. So in my last project I developed a general form-input-handling function, which led to the realisation that development could get even faster with more automation. What I think my development lacks is the use of CRUD and code-generation tools (a sketch follows below), but I am wondering if there are any other utilities that shorten the development process. Which web development languages or frameworks are more inclined towards code-generation utilities?
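    As a sketch of what such a code-generation utility can look like, here is a minimal Python example that stamps out per-field form-handling functions from a field list; the field names and template are invented:

        FIELDS = ["title", "body", "author_email"]

        TEMPLATE = '''def handle_{name}(request, errors):
            value = request.form.get("{name}", "").strip()
            if not value:
                errors.append("{name} is required")
            return value
        '''

        def generate_handlers(fields):
            # One validation/extraction function per form field, replacing
            # the hand-written repetition described above.
            return "\n".join(TEMPLATE.format(name=f) for f in fields)

        print(generate_handlers(FIELDS))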


  • A precise definition of "programming paradigm"

    - by Kazark
    Wikipedia defines programming paradigm thus: "a fundamental style of computer programming", which is echoed in the descriptive text of the paradigms tag on this site. I find this a disappointing definition. Anyone who knows the words programming and paradigm could do about that well without knowing anything else about it. There are many styles of computer programming at many levels of abstraction; within any given programming paradigm, multiple styles are possible. For example, Bob Martin says in Clean Code (p. 13): "Consider this book a description of the Object Mentor School of Clean Code. The techniques and teachings within are the way that we practice our art. We are willing to claim that if you follow these teachings, you will enjoy the benefits that we have enjoyed, and you will learn to write code that is clean and professional. But don't make the mistake of thinking that we are somehow 'right' in any absolute sense." Thus Bob Martin is not claiming to have the correct style of object-oriented programming, even though he, if anyone, might have some claim to doing so. And even within his school of programming, we might have different styles of formatting the code (K&R, etc.). There are many styles of programming at many levels. So how can we define programming paradigm rigorously, to distinguish it from other categories of programming styles? "Fundamental" is somewhat helpful, but not specific. How can we define the phrase in a way that communicates more than the separate meanings of the two words? In other words, how can we define it in a way that provides additional meaning for someone who speaks English but isn't familiar with a variety of paradigms?


  • How do software projects go over budget and under-deliver?

    - by Carlos
    I've come across this story quite a few times here in the UK: the NHS Computer System. Summary: we're spunking £12 billion on some health software with barely anything working. I was sitting in the office discussing this with my colleagues, and we had a little think about it. From what I can see, all the NHS needs is a database plus a middle tier of drugs/hospitals/patients/prescriptions objects, and various GUIs for doctors and nurses to look at. You'd also need to think about security and scalability. And you'd need to sit around a hospital/pharmacy/GP's office for a bit to figure out what they need. But, all told, I'd say I could knock together something with that kind of structure in a couple of days, and maybe throw in a month or two to make it work at scale.* If I had a few million quid, I could probably hire some really excellent designers to make a maintainable codebase, and also buy appropriate hardware to run the system on. I hate to trivialize something that seems to have caused so much trouble, but to me it looks like just a big distributed CRUD + UI system. So how on earth did this project bloat to £12B without producing much useful software? As I don't think the software sounds so complicated, I can only imagine that something about how it was organised caused this mess. Is outsourcing the problem? Did the software designers fail to understand the medical business? What are your experiences with projects that have gone over budget and under-delivered? What are best practices for large projects? Have you ever worked on such a project? EDIT: *This bit seemed to get a lot of attention. What I mean is that I could probably do this for, say, 30 users, spending a few tens of thousands of pounds. I'm not including stuff I don't know about the medical industry and government, but I think most people who've been around programming are familiar with that kind of database/front-end design. My point is that the NHS project looks like a BIG version of this, with bells and whistles, notably security. But surely a budget millions of times larger than mine could provide this?


  • My boss is feuding with his boss. My workload is expanding. What should I do?

    - by steve
    These two have always had a somewhat shaky relationship, even when they were on the same level. The other guy was recently promoted to director, and now my boss reports to him. On the surface they appear to get along, but my boss despises the man and badmouths him every chance he gets (to peers, subordinates, etc.). He believes that the director is setting him up to fail. The director and upper management are holding my boss responsible for the team's not-so-great performance as of late. He's been playing games to make my boss look bad. Due to layoffs, we don't have the manpower to deliver the results that we did before, but expectations have not been lowered, and my boss is taking the heat for it. Now he's on the warpath and starting to micromanage. He's giving everyone more work. He's forcing us mid-level guys to take responsibility for the level-one techs' performance. I'm spending less and less time coding and more time babysitting vendors, techs, etc. I'm not so sure that's a bad thing, because I'm sort of burnt out on coding, but I don't really care for the idea of having to be responsible for others' poor performance. Isn't that the manager's job? Anyway, do you have any suggestions for dealing with this situation?


  • How do you cope with change in open source frameworks that you use for your projects?

    - by Amy
    It may be a personal quirk of mine, but I like keeping code in living projects up to date - including the libraries/frameworks that they use. Part of it is that I believe a web app is more secure if it is fully patched and up to date. Part of it is just a touch of obsessive compulsiveness on my part. Over the past seven months, we have done a major rewrite of our software. We dropped the Xaraya framework, which was slow and essentially dead as a product, and converted to Cake PHP. (We chose Cake because it gave us the chance to do a very rapid rewrite of our software, and enough of a performance boost over Xaraya to make it worth our while.) We implemented unit testing with SimpleTest, and followed all the file and database naming conventions, etc. Cake is now being updated to 2.0. And, there doesn't seem to be a viable migration path for an upgrade. The naming conventions for files have radically changed, and they dropped SimpleTest in favor of PHPUnit. This is pretty much going to force us to stay on the 1.3 branch because, unless there is some sort of conversion tool, it's not going to be possible to update Cake and then gradually improve our legacy code to reap the benefits of the new Cake framework. So, as usual, we are going to end up with an old framework in our Subversion repository and just patch it ourselves as needed. And this is what gets me every time. So many open source products don't make it easy enough to keep projects based on them up to date. When the devs start playing with a new shiny toy, a few critical patches will be done to older branches, but most of their focus is going to be on the new code base. How do you deal with radical changes in the open source projects that you use? And, if you are developing an open source product, do you keep upgrade paths in mind when you develop new versions?


  • Do TODO comments make sense?

    - by Ivan Crojach Karacic
    I am working on a fairly big project and got the task of doing some translations for it. There were tons of labels that hadn't been translated, and while I was digging through the code I found this little piece of code: //TODO translations. This made me think about the usefulness of these comments to yourself (and others?), because I get the feeling that once most developers get a certain piece of code done, and it does what it's supposed to do, they never look at it again until they have to maintain it or add new functionality. So this TODO may be lost for a long time. Does it make sense to write these comments, or should they be written on a whiteboard/paper/somewhere else, where they remain in the focus of developers?
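    One observation: unlike a whiteboard, in-code TODOs are greppable, so they need not be lost if something surfaces them routinely. A minimal sketch of a scanner that could run in a nightly build:

        import os
        import re

        TODO = re.compile(r"(?://|#)\s*TODO[:\s]*(.*)", re.IGNORECASE)

        def find_todos(root="."):
            # Walk the tree and report every TODO with its file and line
            # number, so forgotten markers resurface in every build log.
            for dirpath, _, files in os.walk(root):
                for name in files:
                    if not name.endswith((".py", ".c", ".js", ".java")):
                        continue
                    path = os.path.join(dirpath, name)
                    with open(path, encoding="utf-8", errors="ignore") as fh:
                        for lineno, line in enumerate(fh, 1):
                            match = TODO.search(line)
                            if match:
                                print(f"{path}:{lineno}: TODO {match.group(1).strip()}")

        if __name__ == "__main__":
            find_todos()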


  • What good books are out there on program execution models? [on hold]

    - by murungu
    Can anyone out there name a few books that address the topic of program execution models? I want a book that can answer questions such as:
      - What is the difference between interpreted and compiled languages, and what are the performance consequences at runtime?
      - What is the difference between lazy evaluation, eager evaluation and short-circuit evaluation?
      - Why would one choose one evaluation strategy over another?
      - How do you simulate lazy evaluation in a language that favours eager evaluation? (One common answer is sketched below.)
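    On the last question, the usual trick in an eagerly evaluated language is to wrap expressions in zero-argument functions (thunks) so they are only computed on demand; a minimal sketch in Python:

        def expensive(label):
            print(f"computing {label}...")
            return 42

        # Eager: both arguments are computed before pick_eager even runs.
        def pick_eager(cond, a, b):
            return a if cond else b

        # Simulated lazy: arguments arrive as thunks, and only the chosen
        # one is ever forced.
        def pick_lazy(cond, a, b):
            return a() if cond else b()

        pick_eager(True, expensive("a"), expensive("b"))     # computes both
        pick_lazy(True, lambda: expensive("a"),
                        lambda: expensive("b"))              # computes only "a"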


  • Best Java Book(s) for an Experienced Developer

    - by Steven Elliott Jr
    I have been a .NET developer for about the past five or six years, give or take. I have never done any professional Java development, and the last time I really touched it was probably back in college. I have been toying with the Scala language a little bit, but nothing serious. Recently I've been offered an opportunity to do some pretty cool work, but using Java instead of .NET. I think I can get by alright with my current skill set, meaning I already know how to program well and am familiar with languages such as C# and C++, so the syntax and all that language stuff are really not a problem. What I need is a really good reference book and a book about how to think in Java. Each language/framework/stack tries to address things a certain way, and I'm sure Java is no different. What are some great Java books that you simply can't live without? Are there any books that cover the most important parts of Java that must be understood before all else? As a side note, I will be doing mostly Java web development; I'm not really 100% sure what they are using for persistence, framework, server, etc. Thanks again for the consideration.


  • How to test issues in a local development environment that can only be introduced by clustering in production?

    - by Brian Reindel
    We recently clustered an application, and it came to light that, because of how we're doing SSL offloading via the load balancer in production, it didn't work right. I had to mimic this functionality on my local machine by SSL offloading Apache with a proxy, but it still isn't a 1-to-1 comparison. Similar issues can arise when dealing with stateful applications and sticky sessions. What would be the industry standard for testing this kind of production "black box" scenario in a local environment, especially as it relates to clustering?
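    The Apache setup the question mentions can be approximated in very little code; a hedged sketch in Python of a local TLS-terminating proxy (standing in for the load balancer), with certificate paths and ports as placeholders and error handling and socket shutdown omitted:

        import socket
        import ssl
        import threading

        def pipe(src, dst):
            # Copy bytes one way until the connection closes.
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)

        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain("localhost.crt", "localhost.key")  # placeholder paths

        # Terminate TLS on 8443 and forward plain HTTP to the app on 8080,
        # mimicking SSL offloading at the load balancer.
        listener = socket.create_server(("127.0.0.1", 8443))
        with ctx.wrap_socket(listener, server_side=True) as tls:
            while True:
                client, _ = tls.accept()
                backend = socket.create_connection(("127.0.0.1", 8080))
                threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
                threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

    As the question notes, though, this still isn't a 1-to-1 comparison with the real balancer's behaviour.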


  • Scripting custom drawing in a Delphi application with IF/THEN/ELSE statements?

    - by Jerry Dodge
    I'm building a Delphi application which displays a blueprint of a building, including doors, windows, wiring, lighting, outlets, switches, etc. I have implemented a very lightweight script of my own to call drawing commands on the canvas, which is loaded from a database. For example, one command is ELP 1110,1110,1290,1290,3,8388608, which draws an ellipse with the parameters 1110x1110 to 1290x1290, a pen width of 3, and the color 8388608 converted from an integer to a TColor. What I'm now doing is implementing objects with common drawing routines, and I'd like to use my scripting engine, but this calls for IF/THEN/ELSE statements and such. For example, when I'm drawing a light, if the light is turned on, I'd like to draw it yellow, but if it's off, I'd like to draw it gray. My current scripting engine has no recognition of such statements; it just accepts simple drawing commands which correspond to TCanvas methods. Here's the procedure I've developed (incomplete) for executing a drawing command on a canvas:

        function DrawCommand(const Cmd: String; var Canvas: TCanvas): Boolean;
        type
          TSingleArray = array of Single;
        var
          Br: TBrush;
          Pn: TPen;
          X: Integer;
          P: Integer;
          L: String;
          Inst: String;
          T: String;
          Nums: TSingleArray;
        begin
          Result := False;
          if Assigned(Canvas) then begin
            Br := Canvas.Brush;
            Pn := Canvas.Pen;
            if Length(Cmd) > 5 then begin
              L := UpperCase(Cmd);
              if Pos(' ', L) > 0 then begin
                Inst := Copy(L, 1, Pos(' ', L) - 1);
                Delete(L, 1, Pos(' ', L));
                L := L + ',';
                SetLength(Nums, 0);
                X := 0;
                while Pos(',', L) > 0 do begin
                  P := Pos(',', L);
                  T := Copy(L, 1, P - 1);
                  Delete(L, 1, P);
                  SetLength(Nums, X + 1);
                  Nums[X] := StrToFloatDef(T, 0);
                  Inc(X);
                end;
                Br.Style := bsClear;
                Pn.Style := psSolid;
                Pn.Color := clBlack;
                if Inst = 'LIN' then begin
                  Pn.Width := Trunc(Nums[4]);
                  if Length(Nums) > 5 then begin
                    Br.Style := bsSolid;
                    Br.Color := Trunc(Nums[5]);
                  end;
                  Canvas.MoveTo(Trunc(Nums[0]), Trunc(Nums[1]));
                  Canvas.LineTo(Trunc(Nums[2]), Trunc(Nums[3]));
                  Result := True;
                end else if Inst = 'ELP' then begin
                  Pn.Width := Trunc(Nums[4]);
                  if Length(Nums) > 5 then begin
                    Br.Style := bsSolid;
                    Br.Color := Trunc(Nums[5]);
                  end;
                  Canvas.Ellipse(Trunc(Nums[0]), Trunc(Nums[1]), Trunc(Nums[2]), Trunc(Nums[3]));
                  Result := True;
                end else if Inst = 'ARC' then begin
                  Pn.Width := Trunc(Nums[8]);
                  Canvas.Arc(Trunc(Nums[0]), Trunc(Nums[1]), Trunc(Nums[2]), Trunc(Nums[3]),
                    Trunc(Nums[4]), Trunc(Nums[5]), Trunc(Nums[6]), Trunc(Nums[7]));
                  Result := True;
                end else if Inst = 'TXT' then begin
                  Canvas.Font.Size := Trunc(Nums[2]);
                  Br.Style := bsClear;
                  Pn.Style := psSolid;
                  T := Cmd;
                  Delete(T, 1, Pos(' ', T));
                  Delete(T, 1, Pos(',', T));
                  Delete(T, 1, Pos(',', T));
                  Delete(T, 1, Pos(',', T));
                  Canvas.TextOut(Trunc(Nums[0]), Trunc(Nums[1]), T);
                  Result := True;
                end;
              end else begin
                // No space found: not a valid command
              end;
            end;
          end;
        end;

    What I'd like to know is: what's a good lightweight third-party scripting engine I could use to accomplish this? I would hate to implement parsing of IF, THEN, ELSE, END, IFELSE, IFEND, and all those necessary commands. I simply need the ability to tell the scripting engine that if certain properties meet certain conditions, it needs to draw the object a certain way. The light example above is only one scenario, but the same solution needs to be applicable to other scenarios as well, such as a door being open or closed, locked or unlocked, drawn differently accordingly. This needs to be implemented at the object's script-drawing level. I can't hard-code any of these scripting/drawing rules; the drawing needs to be controlled based on the current state of the object, and I may also wish to draw a light a certain shade or darkness depending on how dimmed the light is.
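    Short of adopting a full script engine, one lightweight pattern for the stated requirement (state-dependent drawing without hard-coded rules) is to key command lists on object state in the database, so the script language itself stays free of IF/THEN; a sketch in Python for brevity, with hypothetical color values in the question's command format:

        # Hypothetical state-keyed command table: the renderer looks up the
        # command list for the object's current state, so conditional logic
        # lives in the data, not in the script language.
        YELLOW = 65535    # assumed TColor-style integers
        GRAY = 8421504

        DRAW_RULES = {
            ("light", "on"):  [f"ELP 1110,1110,1290,1290,3,{YELLOW}"],
            ("light", "off"): [f"ELP 1110,1110,1290,1290,3,{GRAY}"],
        }

        def commands_for(kind, state):
            return DRAW_RULES.get((kind, state), [])

        for cmd in commands_for("light", "off"):
            print(cmd)    # each command would be fed to DrawCommand

    This covers discrete states (open/closed, locked/unlocked); continuously dimmed lights would still need the command to be computed rather than just looked up.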


  • Automated architecture validation

    - by P.Brian.Mackey
    I am aware that the TFS 2010 Ultimate edition can create and validate architecture diagrams. For example, I can create a new modeling project, then:
      - add a Layer Diagram
      - add a layer called View
      - add a BL layer
      - add a DL layer
    I can then validate this architecture as part of the build process when someone tries to check code into TFS. In other words, if the View references the DL, then the compilation process will fail and the check-in will not be allowed. For those without an MSDN Ultimate license, can FxCop or some third-party utility be used to validate architecture in an automated fashion? I would prefer a TFS-installable plugin, but a local VS plugin will do.
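    For those without the Ultimate license, the heart of such a layer check is small enough to script; a hedged sketch in Python (layer folder names and namespace patterns are invented to match the example) that fails a build when the View layer references the DL:

        import os
        import re
        import sys

        # Hypothetical rules: a regex that must not appear in each layer's sources.
        FORBIDDEN = {
            "View": re.compile(r"\busing\s+MyApp\.DL\b"),    # View must not see DL
            "BL":   re.compile(r"\busing\s+MyApp\.View\b"),  # BL must not see View
        }

        def check(root):
            failures = []
            for layer, pattern in FORBIDDEN.items():
                for dirpath, _, files in os.walk(os.path.join(root, layer)):
                    for name in files:
                        if name.endswith(".cs"):
                            path = os.path.join(dirpath, name)
                            with open(path, encoding="utf-8", errors="ignore") as fh:
                                if pattern.search(fh.read()):
                                    failures.append(path)
            return failures

        if __name__ == "__main__":
            bad = check(sys.argv[1] if len(sys.argv) > 1 else ".")
            for path in bad:
                print("layer violation:", path)
            sys.exit(1 if bad else 0)

    Wired into the build as a failing step, this gives the same check-in gate, though without the diagram.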


  • Any Practical Alternative to the Signals + Slots model for GUI Programming?

    - by IntermediateHacker
    The majority of GUI toolkits nowadays use the signals + slots model. It was Qt and GTK+, if I am not wrong, who pioneered it. You know, the widgets or graphical objects (sometimes even ones that aren't displayed) send signals to the main-loop handler. The main-loop handler then calls the events, callbacks or slots assigned for that widget / graphical object. There are usually default (and in most cases virtual) event handlers already provided by the toolkit for handling all pre-defined signals; therefore, unlike previous designs where the developer had to write the entire main loop and the handler for each and every message himself (think WINAPI), the developer only has to worry about the signals he needs in order to implement new functionality. Now this design is used in most modern toolkits as far as I know: there are Qt, GTK+, FLTK, etc., and there is Java Swing. C# even has a language feature for it (events and delegates), and Windows Forms has been developed on this design. In fact, over the last decade, this design for GUI programming has become a kind of unwritten standard, since it increases productivity and provides greater abstraction. However, my question is: is there any alternative design that is parallel or practical for modern GUI programming? In other words, is the signals + slots design the only practical one in town? Is it feasible to do GUI programming with any other design? Are any modern (preferably successful and popular) GUI toolkits built on an alternative design?
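    For reference, the model itself is tiny; a minimal signals-and-slots sketch in Python:

        class Signal:
            """Minimal signal: slots are plain callables connected at runtime."""
            def __init__(self):
                self._slots = []

            def connect(self, slot):
                self._slots.append(slot)

            def emit(self, *args):
                for slot in self._slots:
                    slot(*args)

        class Button:
            def __init__(self):
                self.clicked = Signal()

            def click(self):    # stand-in for the toolkit's event dispatch
                self.clicked.emit(self)

        button = Button()
        button.clicked.connect(lambda sender: print("slot called by", sender))
        button.click()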


  • Why fork a library for your own application?

    - by Mr. Shickadance
    Why should a programmer ever fork a library for inclusion in a widely used application? I ask this question because I was reading an article about why Chromium isn't packaged for many Linux distros like Fedora. Apparently it's largely due to the fact that Google has forked a number of libraries, modified them, and included them in Chromium. This has driven up the complexity of packaging releases. There are a number of reasons why this can be a bad thing, but how strong a case can you actually make for doing so in a large, widely used application such as Chromium? The original article: http://ostatic.com/blog/making-projects-easier-to-package-why-chromium-isnt-in-fedora Isn't it usually worth the effort to make slight modifications to your own program in order to use a popular and well-developed library?


  • HTML Canvas: Should my app x, y values be global?

    - by Joe
    I have a large file of functions, each of which is responsible for drawing a particular part of the application. My app has x and y parameters that I use in the setup function to move the whole app around in order to try different placements. So, naturally, I need to use these x and y values to anchor each function's component rendering, so that everything moves in unison if the global x,y values ever change. My question is: is it bad practice/architecture to have these x,y values in the global namespace and to have each function directly access them, like so? function renderFace() { var x = App.x; var y = App.y; // drawing here } Is there a better way I'm missing?


  • Should I ditch a creative pet project in favor of one that would demonstrate skills more applicable to an employer?

    - by Hart Simha
    I am currently working on a project on GitHub that I think would be a good demonstration of my initiative, creativity and enthusiasm. It is an educational game I am developing in pygame that enables the user to improve their development productivity by learning vim, specifically with Python, though learning to code faster with vim should transfer to any language. I think this is something that might have mass appeal and benefit a lot of people in a measurable way. However, I am graduating from college in a month (my degree is computer science with a minor in English), with no experience that is relevant to helping me get any kind of job in the field, and a GPA that doesn't tout my merits. I could pursue a career in game development, but it's not necessarily what I'm most interested in, and I see myself applying to startups around the country. To the places I am looking at applying, showing that I have experience with pygame is going to be largely irrelevant, except as a demonstration of my ability to code, period. A lot of skills that ARE more marketable, such as data modeling, GIS, mobile development, JavaScript, the .NET framework, and various web development technologies, are not going to be showcased by this project (on the upside, employers do like to see familiarity with git and Python). I'm wondering if I should sink all my free time in the next couple of months into this project, since I'm motivated by and interested in it, and whether the value of being able to demonstrate ambition and 'good ideas' (for lack of a better term, and in my own opinion) will compensate for the absence of more sought-after skills. I am probably at a point where I should either commit fully to this project now or put it on the back burner in favor of something else, and I am leaning towards continuing with what I am already working on, because I think it's a great idea and something achievable for me with enough dedication over the next couple of months. But the most important thing to me is being able to get a job out of college, which I am exceedingly concerned about, as the professional landscape I am navigating for the first time is a lot more intimidating than I could have anticipated, with almost every job (even short-term contract positions) requiring years of experience, which I lack. Oh, and in case anyone is interested, my repository is here: www.github.com/hmsimha/vimagine


  • Hadoop and Object Reuse: Why?

    - by Andrew White
    In Hadoop, objects passed to reducers are reused. This is extremely surprising and hard to track down if you're not expecting it. Furthermore, the original tracker for this "feature" doesn't offer any evidence that the change actually improved performance (unless I missed it): "It would speed up the system substantially if we reused the keys and values [...] but I think it is worth doing." This seems completely counter to this very popular answer. Is there some credence to the Hadoop developers' claim? Is there something "special" about Hadoop that would invalidate the notion that object creation is cheap?
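    The surprise is easy to reproduce; a sketch of the reuse pattern in Python (Hadoop does the equivalent with a single Writable instance handed to the reducer's iterator):

        class Value:
            """Stand-in for a Hadoop Writable."""
            def __init__(self):
                self.n = 0

        def reused_values(nums):
            v = Value()          # one instance, recycled every iteration
            for n in nums:
                v.n = n          # overwritten in place: no per-record allocation
                yield v

        kept = list(reused_values([1, 2, 3]))  # a naive caller stores references
        print([v.n for v in kept])             # [3, 3, 3]: every element is the same object

    The allocation that reuse avoids is exactly the "cheap object creation" the linked answer defends, which is why the two claims feel at odds.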

