Search Results

Search found 16554 results on 663 pages for 'programmers identity'.


  • Working with fubar/refuctored code

    - by Keyo
    I'm working with some code which was written by a contractor who left a year ago, leaving a number of projects with buggy, disgustingly bad code. This is what I call cowboy PHP, say no more. Ideally I'd like to leave the project as is and never touch it again, but things break, requirements change, and it needs to be maintained. Part A needs to be changed. There is a bug I cannot reproduce. Part A is connected to parts B, D, and E. This kind of work gives me a headache and makes me die a little inside. It kills my motivation and productivity. To be honest, I'd say it's affecting my mental health. Perhaps, being at the start of my career, I'm being naive to think production code should be reasonably clean. I would like to hear from anyone else who has been in this situation before. What did you do to get out of it? I'm thinking that, long term, I might have to find another job. Edit: I've moved on from this company now, to a place where idiots are not employed. The code isn't perfect but it's at least manageable and peer reviewed. There are a lot of people in the comments below telling me that software is messy like this. Sure, I don't agree with the way some programmers do things, but this code was seriously mangled. The guy who wrote it tried to reinvent every wheel he could, and badly. He stopped getting work from us because of his bad code, which nobody on the team could stand. If it were easy to refactor, I would have. Eventually, after many 'just do this small 10-minute change' situations had ballooned into hours of lost time (regardless of who on the team was doing the work), my boss finally caved in and it was rewritten.

    Read the article

  • How to break the "PHP is a bad language" paradigm? [closed]

    - by dukeofgaming
    PHP is not a bad language (or at least not as bad as some may suggest). I had teachers who didn't even know PHP was object oriented until I told them. I've had clients that immediately distrust us when we say we are PHP developers and question us for not using chic languages and frameworks such as Django or RoR, or "enterprise and solid" languages such as Java and ASP.NET. Facebook is built on PHP. There are plenty of solid projects that power the web, like Joomla and Drupal, that are used in the enterprise and governments. There are frameworks and libraries that have some of the best architectures I've seen across all languages (Symfony 2, Doctrine). PHP has the best documentation I've seen and a big community of professionals. PHP has advanced OO features such as reflection and interfaces, not to mention that PHP now supports horizontal reuse natively and cleanly through traits. There are bad programmers and script kiddies that give PHP a bad reputation, but they power the PHP community at the same time, and because it is so easy to get stuff done in PHP you can often do things the wrong way, granted, but why blame the language? Now, to boil this down to an actual answerable question: what would be a good, solid, short, and sweet argument to avoid being frowned upon, stop prejudice in one fell swoop, and defend your honor when you say you are a PHP developer? (Free cookie with the whipped cream to those with empirical evidence of convincing someone —client or other— on the spot.) P.S.: We use Symfony, and the code ends up being beautiful and maintainable.

    Read the article

  • Using prefix incremented loops in C#

    - by KChaloux
    Back when I started programming in college, a friend encouraged me to use the prefix increment operator ++i instead of the postfix i++, citing that there was a slight chance of better performance with no real chance of a downside. I realize this is true in C++, and it's become a general habit that I continue to follow. I'm led to believe that it makes little to no difference when used in a loop in C#, regardless of data type. Apparently the ++ operator can't be overridden. Nevertheless, I like the appearance more, and don't see a direct downside to it. It did astonish a coworker just a moment ago, though; he made the (fairly logical) assumption that my loop would terminate early as a result. He's a self-taught programmer, and apparently never came across the C++ convention. That made me question whether or not the equivalent behavior of prefix and postfix increment and decrement operators in loops is well known enough. Is it acceptable for me to continue using ++i in looping constructs because of style preference, even though it has no real performance benefit? Or is it likely to cause confusion amongst other programmers? Note: This is assuming the ++i convention is used consistently throughout all code.
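
    A minimal C# sketch of the point in question (illustrative only, not taken from the original post): in a standalone for loop the prefix and postfix forms behave identically, because the value of the increment expression is discarded; the difference only appears when that value is used.

        using System;

        class IncrementDemo
        {
            static void Main()
            {
                // Both loops print "0 1 2 3 4": the loop discards the value of the
                // increment expression, so ++i and i++ behave identically here.
                for (int i = 0; i < 5; ++i) Console.Write(i + " ");
                Console.WriteLine();
                for (int i = 0; i < 5; i++) Console.Write(i + " ");
                Console.WriteLine();

                // The difference only matters when the expression's value is used.
                int a = 0, b = 0;
                int x = ++a;  // a is incremented first, so x == 1
                int y = b++;  // y gets the old value, so y == 0 and b == 1
                Console.WriteLine("x={0}, y={1}", x, y);
            }
        }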

    Read the article

  • Tried teaching myself to program before college, accidentally overwhelmed myself, tips?

    - by Gunnar Keith
    I'm sixteen, I'm overly interested in programming, and I'm currently taking IT classes during my mornings in high school. Last year, I tried teaching myself to code. It was quite exciting, but all I did was watch TheNewBoston's videos on YouTube for Python. After his tutorials, I just did research, made some CMD programs, and that's it. After that, I got cocky and got my feet wet in many other languages: Java, C++, C#, Perl, Ruby... and it overwhelmed me, which made it less fun to code. I want to go to college for a 2-year programming course, and I want to make writing code my profession. But how do you recommend I attack re-learning it all again? Start with Python? Don't even try? Also, I'm not 100% at math, but I'm good friends with a lot of programmers who say they suck at math but manage to code just fine. I'm not looking for negative feedback. I just want the proper head start on things before college.

    Read the article

  • Which reference provides your definition of "elegant" or "beautiful" code?

    - by Donnied
    This question is phrased in a very specific way - it asks for references. There was a similar question posted which was closed because it was considered a duplicate of a "good code" question. The Programmers FAQ points out that answers should have references - otherwise it's just an unproductive sharing of (seemingly) baseless opinions. There is a difference between the shortest code and the most elegant code. This becomes clear in several seminal texts: Dijkstra, E. W. (1972). The humble programmer. Communications of the ACM, 15(10), 859–866. Kernighan, B. W., & Plauger, P. J. (1974). Programming style: Examples and counterexamples. ACM Comput. Surv., 6(4), 303–319. Knuth, D. E. (1984). Literate programming. The Computer Journal, 27(2), 97–111. doi:10.1093/comjnl/27.2.97 They all note the importance of clarity over brevity. Kernighan & Plauger (1974) provide descriptions of "good" code, but "good code" is certainly not synonymous with "elegant". Knuth (1984) describes the importance of exposition and "excellence of style" to elegant programs. He cites Hoare, who argues that code should be self-documenting. Dijkstra (1972) indicates that beautiful programs optimize efficiency but are not opaque. This sort of conversation is qualitatively different than a random sharing of opinions. Therefore, the question: Which reference provides your definition of "elegant" or "beautiful" code? "Which *reference*" is not subjective - anything else will most likely shut the thread down, so please supply *references*, not opinions.

    Read the article

  • Named output parameters vs return values

    - by Abyx
    Which code is better?

        // C++
        void handle_message(...some input parameters..., bool& wasHandled)
        void set_some_value(int newValue, int* oldValue = nullptr)

        // C#
        void handle_message(...some input parameters..., out bool wasHandled)
        void set_some_value(int newValue, out int oldValue)

    or

        bool handle_message(...some input parameters...) ///< Returns -1 if message was handled
        // (sorry, this documentation was broken a year ago and we're too busy to fix it)

        int set_some_value(T newValue)
        // (well, it's obvious what this function returns, so I didn't write any documentation for it)

    The first one doesn't have (and doesn't need) any documentation. It's self-documenting code. The output value clearly says what it means, and it's really hard to make a change like this:

        - void handle_message(Message msg, bool& wasHandled) {
        -     wasHandled = false;
        -     if (...) { wasHandled = true; ...
        + void handle_message(Message msg, int& wasHandled) {
        +     wasHandled = -1;
        +     if (...) { wasHandled = ...;

    With return values, such a change could be done easily:

          /// Return true if message was handled
        - bool handle_message(Message msg) {
        + int handle_message(Message msg) {
          ...
        - return true;
        + return -1;

    Most compilers don't (and can't) check documentation written in comments. Programmers also tend to ignore comments while editing code. So, again, the question is: if a subroutine has a single output value, should it be a procedure with a well-named, self-documenting output parameter, or should it be a function that returns an unnamed value and has a comment describing it?

    Read the article

  • Sucking Less Every Year?

    - by AdityaGameProgrammer
    "Sucking Less Every Year" - Jeff Atwood. I came across this insightful article. Quoting directly from the post: "I've often thought that sucking less every year is how humble programmers improve. You should be unhappy with code you wrote a year ago. If you aren't, that means either A) you haven't learned anything in a year, B) your code can't be improved, or C) you never revisit old code. All of these are the kiss of death for software developers." How often does this happen or not happen to you? How long before you see an actual improvement in your coding? A month? A year? Do you ever revisit your old code? How often does your old code plague you? Or how often do you have to deal with your technical debt? It is definitely very painful to fix old bugs and dirty code that we may have written to quickly meet a deadline, and in some cases those quick fixes mean we may have to rewrite most of the application/code. No arguments about that. Some of the developers I have come across argued that they were already at the evolved stage where their coding doesn't need improvement or can't be improved anymore. Does this happen? If so, how many years of coding in a particular language does one expect this to take? Related: Ever look back at some of your old code and grimace in pain? Star Wars Moment in Code: "Luke! I am your code!" "No! Impossible! It can't be!"

    Read the article

  • Which language meets my needs? [closed]

    - by Gerald Goward
    I am a junior C# developer, working for half a year now. In my company I am working on some enterprise projects, and after doing it for quite some time I understood that I don't like enterprise projects. I have my own browser game written in PHP+MySQL with some simple HTML+CSS, and I currently have 300 active players (those who entered the game at least once per 5 days) :) After thinking for quite some time I understood that I am interested in: 1) web development, AND 2) standalone programs (but not enterprise ones). 3) Development for mobile platforms is also nice, Android/iOS. The 1st and 2nd categories are what I want the most; Android/iOS is good too. I am NOT interested in big systems which are hard to integrate, and I am not interested in enterprise systems. In the future I would like to start my own business/projects. I would like to create my own projects and/or create a small company of programmers to create and release our own products. Please tell me what programming language(s)/technologies you would advise me to learn for this. Thanks a lot! UPD: It's NOT a "which language is better" or any flame/holy-war-generating topic, since I ask for the language that best suits my EXACT needs. I believe C++ is better for low-level coding, while PHP is good for web development and Objective-C was made for iOS. I am still a newbie at programming, so don't hate me please.

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1: Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state for so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end state of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking? I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every processor tick takes time, electricity, and machine resources away from other processes. So I understand that there may be more than one perspective to answer this question from. If immutability is safe, given certain bounds or assumptions, I want to know what the borders of the "safety zone" are exactly. Some examples of possible boundaries: I/O; exceptions/errors; interfaces with programs written in other languages; interfaces with other machines (physical, virtual, or theoretical). Special thanks to @JimmaHoffa for his comment which started this question! Part 2: Multi-processor programming is often used as an optimization technique - to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones? Summary: I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
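
    As a rough illustration of the aggregation point raised above (a sketch under assumed names, not code from the question): collecting results from several threads into a mutable list needs a lock, while the immutable-snapshot version still needs an atomic publish step, here a compare-and-swap loop with Interlocked.CompareExchange. ImmutableList comes from the System.Collections.Immutable package.

        using System;
        using System.Collections.Generic;
        using System.Collections.Immutable;
        using System.Threading;
        using System.Threading.Tasks;

        class AggregationSketch
        {
            // Mutable aggregate: every update must hold the lock.
            static readonly object Gate = new object();
            static readonly List<int> MutableResults = new List<int>();

            // Immutable aggregate: each update publishes a new snapshot atomically.
            static ImmutableList<int> snapshot = ImmutableList<int>.Empty;

            static void AddWithLock(int value)
            {
                lock (Gate) { MutableResults.Add(value); }
            }

            static void AddWithImmutableSnapshot(int value)
            {
                ImmutableList<int> current, updated;
                do
                {
                    current = snapshot;            // read the current snapshot
                    updated = current.Add(value);  // build a new snapshot; the old one is untouched
                }
                while (Interlocked.CompareExchange(ref snapshot, updated, current) != current);
            }

            static void Main()
            {
                Parallel.For(0, 1000, i => { AddWithLock(i); AddWithImmutableSnapshot(i); });
                Console.WriteLine("{0} vs {1}", MutableResults.Count, snapshot.Count);
            }
        }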

    Read the article

  • How do you motivate peers to become better developers?

    - by Brian Rasmussen
    In my experience there seem to be two kinds of developers (if we simplify matters a great deal, of course). On the one hand we have the developers who may do a perfectly acceptable job, but who do not really care about the computer science part of their craft. They usually know few languages/technologies and are happy to let things stay that way. For whatever reason, they don't try to improve their computer science skills unless this is required in their current position. On the other hand, we have the geeks, or the pragmatic programmers if you subscribe to that idea. They play around with other languages and technologies and usually have knowledge about several topics outside the technical domain of their current job. I would like to see more developers who are enthusiastic about software development. If you share this point of view, what do you do to push your peers in that direction? Edit: a follow-up question inspired by one of the answers: As non-managers, should we really care about this? And why/why not?

    Read the article

  • Is an app that does nothing but link to a web site functional enough to meet Apple's iOS guidelines?

    - by Pointy
    I don't hang out on Programmers enough to know whether this question is "ok", so my apologies if not. I tried to make the title obvious so at least it can be closed quickly :-) The question is simple. My employer wants "home screen presence" (or at least the possibility thereof) on iOS devices (also Android but I'm mostly interested in Apple at the moment). Our actual application will be a pure web-delivered mobile-friendly application, so what we want on the homescreen is basically something that just acts as a link to bring up Safari (or Chrome now I guess; not important). I'm presuming that that's more-or-less possible; if not then that would be interesting too. I know that the Apple guidelines are such that low-functionality apps are generally rejected out of hand. There are a lot of existing apps that seem (to me) less functional than a link to something useful, but I'm not Apple of course. Because this seems like a not-too-weird situation, I'm hoping that somebody knows it's either definitely OK (maybe because there are many such apps) or definitely not OK. Note that I know about things like PhoneGap and I don't want that, at least not at the moment.

    Read the article

  • What's the best way to learn/increase problem-solving skills?

    - by tucaz
    Hi all! I'm not sure this is the right place to ask this question, nor whether this is the right way to ask it, but I hope you'll help me if it is not. I have worked as a programmer since I was 15 (I'll be 24 next week), so learning programming logic was somewhat natural during the course of my career, and I think it helped me develop pretty good problem-solving skills. One thing none of us (programmers) can deny is that programming logic helps us in a lot of fields outside computer programming. So I'd say it is a very valuable skill that one should learn. My girlfriend is not a programmer and graduated from college in an unrelated course (Foreign Relations) because she didn't know what to study back then. As the years passed she discovered that she liked logistics and started to work in it almost two years ago. However, since she does not have a technical background (not even basic math) she is really having a hard time with it. She is already trying to catch up with math, but even simple questions/brain-teasers are hard for her. For example, trying to find the missing numbers of this sequence: 0, 1, 1, 2, 3, 5, 8, _, _, 34 and so on. We know that this is Fibonacci, but even if we didn't we would probably be able to get to the correct answer just by "guessing" (using our acquired problem-solving skills). I'm not sure if problem-solving skills or logic is the correct name for it, but this is what I mean: quickly solve problems, brain-teasers, find patterns, have a "sharp" mind. So, the question is: what is the best way for someone to learn this kind of skill without being a programmer (or studying algorithms and such)? If you say it is a book, could you please recommend one? Thanks a lot!
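
    Purely as an aside on the example (not part of the original question): the rule is that each term is the sum of the two before it, so the blanks are 13 and 21. A throwaway C# snippet that prints the sequence:

        using System;

        class FibonacciSketch
        {
            static void Main()
            {
                // 0 1 1 2 3 5 8 13 21 34 -- each term is the sum of the previous two.
                int previous = 0, current = 1;
                Console.Write("{0} {1}", previous, current);
                for (int i = 2; i < 10; i++)
                {
                    int next = previous + current;
                    Console.Write(" {0}", next);
                    previous = current;
                    current = next;
                }
                Console.WriteLine();
            }
        }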

    Read the article

  • Was API hooking done as needed for Stuxnet to work? I don't think so

    - by The Kaykay
    Caveat: I am a political science student and I have tried my level best to understand the technicalities; if I still sound naive please overlook that. In the Symantec report on Stuxnet, the authors say that once the worm infects a 32-bit Windows computer which has a WinCC setup on it, Stuxnet does many things, and that it specifically hooks the function CreateFileA(). This function is the route which the worm uses to actually infect the .s7p project files that are used to program the PLCs, i.e. when the PLC programmer opens a .s7p file, control transfers to the hooked function CreateFileA_hook() instead of CreateFileA(). Once Stuxnet gains control it covertly inserts code blocks into the PLC without the programmer's knowledge and hides them from his view. However, it should be noted that there is also one more function, called CreateFileW(), which does the same task as CreateFileA(), but the two work on different character sets: CreateFileA works with the ANSI character set and CreateFileW works with wide characters, i.e. the Unicode character set. Farsi (the language of the Iranians) is a language that needs the Unicode character set, not ANSI characters. I'm assuming that the developers of any famous commercial software (for example, WinCC) that will be sold in many countries would take 'localization' and/or 'internationalization' into consideration while it is being developed, in order to make the product fail-safe, i.e. the software developers would use Unicode while compiling their code and not just ANSI. Thus, I think that CreateFileW() would have been invoked on a WinCC system in Iran instead of CreateFileA(). Do you agree? My question is: if Stuxnet has hooked only the function CreateFileA(), then based on the above assumption is there a significant chance that it did not work at all? I think my doubt will get clarified if: my assumption is proved wrong, or the Symantec report is proved incorrect. Please help me clarify this doubt. Note: I had posted this question on the general Stack Exchange website and did not get the kind of responses I was looking for, so I'm posting it here.

    Read the article

  • How do you compare job offers from companies in different countries?

    - by Danny Tuppeny
    This isn't really a programmer-specific question, but I'm not sure of a more appropriate place, and I think the users of this site are best able to answer the question in the context of programmers. Relocating to the US seems fairly common in the programming industry. I live in the UK, and maybe one day, I might do it too. So, if that day comes - how would you go about comparing job offers? Benefits are fairly easy to compare, but given the differences in cost of living, how would you go about comparing salaries and the quality of living you'll have? In a country where the cost of living is lower, you might be able to accept a lower salary (based on exchange rate) and still have the same quality of living. But what can you do to ensure this? In some cases, you may even take a "pay rise" in terms of exchange rate, but end up far worse off. How can you compare job offers across different countries to get an idea of the salary you would need in order to not feel you've gone "backwards"?

    Read the article

  • What do you do to make sure you take proper/enough breaks, while avoiding unwanted side-effects of break taking?

    - by blueberryfields
    preamble It seems to me that computer programmers are one of a select few groups of people who actually take pleasure from sitting in front of computers for long periods of time. Most people in other professions actively dislike their time at computers, and do their best to avoid it (so, I assume, they don't have problems taking breaks). At least for me, having external cues for taking breaks, and clear instructions on what to do with each break (stretch, go for a walk, close my eyes, look into a distance of preferably a few km and focus on faraway objects, etc.), is a must. So far, I've just been making up the breaks and tools to get them as I go along, based on what looks to be low-specificity information found on the net (generic stuff a la ergonomics advice for office staff). This has led to all sorts of side effects - loss of attention as I get distracted if I walk around, breaks in flow with alarm clocks interrupting my thoughts, and people around me assuming I'm low on work due to the frequency of my walking around compared to everyone else. /preamble tl;dr: Taking breaks is important. My internal break-taking system doesn't work, and ad-hoc ones have unwanted side effects. What do you do to make sure you take proper breaks? How do you avoid unwanted side effects, such as getting distracted, interrupting flow, or giving your co-workers the impression you're spending a lot of time goofing off?

    Read the article

  • My first time in the gambling industry

    - by sfrj
    I am a Java enterprise developer with almost 3 years of professional experience. Soon I am going to have a face-to-face interview with a company in the gambling industry. I already successfully did a phone screening, and now for the personal interview I suppose they will ask me about some kind of whiteboard problem or system design task. I think I am in the right place to ask about this, and would appreciate it a lot if someone would give me some tips or share something related to his own experience. The things I am most interested in regarding my interview are: What are the most common challenges for programmers in this industry? Any idea or suggestion on a whiteboard problem they may ask me? Could you point me to some links where I can find information on the topic or sample problems in this industry? I personally find this question very interesting, and not just for me. Also, I think the answers given can help others in a similar situation. What I want to say with this last comment is: please avoid answers like www.google.com and so on...

    Read the article

  • What's the best practice to do SOA exception handling?

    - by sun1991
    Here's some interesting debate going on between me and my colleague when it comes to handling SOA exceptions. On one side, I support what Juval Lowy said in Programming WCF Services, 3rd Edition: "As stated at the beginning of this chapter, it is a common illusion that clients care about errors or have anything meaningful to do when they occur. Any attempt to bake such capabilities into the client creates an inordinate degree of coupling between the client and the object, raising serious design questions. How could the client possibly know more about the error than the service, unless it is tightly coupled to it? What if the error originated several layers below the service—should the client be coupled to those low-level layers? Should the client try the call again? How often and how frequently? Should the client inform the user of the error? Is there a user? By having all service exceptions be indistinguishable from one another, WCF decouples the client from the service. The less the client knows about what happened on the service side, the more decoupled the interaction will be." On the other side, here's what my colleague suggests: "I believe it's simply incorrect, as it does not align with best practices in building a service-oriented architecture and it ignores the general idea that there are problems that users are able to recover from, such as not keying a value correctly. If we considered only system exceptions, perhaps this idea holds, but system exceptions are only part of the exception domain. User-recoverable exceptions are the other part of the domain and are likely to happen on a regular basis. I believe the correct way to build a service-oriented architecture is to map user-recoverable situations to checked exceptions, then to marshal each checked exception back to the client as a unique exception that client application programmers are able to handle appropriately. Marshal all runtime exceptions back to the client as a system exception, along with the stack trace, so that it is easy to troubleshoot the root cause." I'd like to know what you think about this. Thank you.
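
    A hedged WCF-flavored sketch of the colleague's position (the ValidationFault type, its members, and the service names are invented for illustration): user-recoverable conditions are declared as typed faults on the contract, while everything unexpected surfaces as a generic, indistinguishable fault, which is the behavior Lowy describes.

        using System;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Hypothetical fault detail for a user-recoverable condition.
        [DataContract]
        public class ValidationFault
        {
            [DataMember] public string FieldName { get; set; }
            [DataMember] public string Message { get; set; }
        }

        [ServiceContract]
        public interface IOrderService
        {
            // Declaring the fault makes the recoverable case part of the contract,
            // so clients can handle it without knowing the service's internals.
            [OperationContract]
            [FaultContract(typeof(ValidationFault))]
            void PlaceOrder(string productCode, int quantity);
        }

        public class OrderService : IOrderService
        {
            public void PlaceOrder(string productCode, int quantity)
            {
                if (quantity <= 0)
                {
                    // User-recoverable: surfaced as a typed fault the client can act on.
                    throw new FaultException<ValidationFault>(
                        new ValidationFault { FieldName = "quantity", Message = "Quantity must be positive." },
                        new FaultReason("Validation failed."));
                }

                // Any unexpected exception thrown below this point reaches the client
                // as a generic, indistinguishable fault (unless explicitly wrapped).
            }
        }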

    Read the article

  • How do I architect 2 plugins that share a common component?

    - by James
    I have an object that takes in data and spits out a transformed output, called IBaseItem. I also have two parsers, IParserA and IParserB. These parsers transform external data (in format dataA and dataB respectively) to a format usable by my IBaseItem (baseData). I want to create 2 systems, one that works with dataA and one that works with dataB. They will allow the user to enter data, match it to the right plugins/implementations, and transform the data to outData. I want to write these traffic cops myself, but have other people provide the parsers and baseitem logic, and as such am implementing these items as plugins (hence the use of interfaces). Other programmers can choose to implement 1 or both parsers. Q: How should I structure the way base items and parsers are associated, stored, and loaded into each of my programs? Class relations: (diagram omitted). What I've tried: Initially I thought there should be a different dll for each of my 2 traffic cops, with each having a parser and baseitem in it. However, the duplication of baseitem logic doesn't seem right (especially if the baseitem logic changes). I then thought the baseitems could all have their own dll, and then somehow associate parsers and baseitems (GUIDs?), but I don't know if implementing the overhead of id/association is adding too much complexity.
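
    One possible shape for the association, sketched under assumptions (the single IParser interface with a DataFormat property and the reflection-based loader are additions for illustration, not something from the question): keep IBaseItem and the parser contract in a shared assembly referenced by both traffic cops, let each plugin advertise which external format it handles, and have each traffic cop load only the matching parsers.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Reflection;

        // ---- Shared contracts assembly, referenced by both traffic cops and all plugins ----
        public interface IBaseItem
        {
            string Transform(string baseData);   // baseData -> outData
        }

        public interface IParser
        {
            string DataFormat { get; }           // e.g. "dataA" or "dataB"
            string Parse(string externalData);   // external format -> baseData
        }

        // ---- Inside a traffic cop: discover the parsers for the format it owns ----
        public static class ParserLoader
        {
            public static IEnumerable<IParser> LoadParsers(Assembly pluginAssembly, string format)
            {
                return pluginAssembly.GetTypes()
                    .Where(t => typeof(IParser).IsAssignableFrom(t) && !t.IsAbstract && !t.IsInterface)
                    .Select(t => (IParser)Activator.CreateInstance(t))
                    .Where(p => p.DataFormat == format)
                    .ToList();
            }
        }

    This keeps the baseitem logic in a single shared place; the two traffic cops then differ only in which format string they ask the loader for.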

    Read the article

  • Benefits of Masters of Engineering Professional Practice for the lowly (yet aspiring) programmer

    - by Peter Turner
    I've been looking into in-state online degree programs 'to fit my busy lifestyle' (i.e. three children, a wife, and an hour-and-a-half commute). One interesting one I've found is the Master of Engineering in Professional Practice. It looks more useful and practical than an MBA in project management. I'll contact the admissions department there about the specifics, but here I'm just asking in general: do the courses in this degree apply to software engineering/development in even an abstract sense? The university I'm looking at does not have a Software Engineering major in the school of engineering. I'm not interested in architecture astronomy, but I am interested in helping my company succeed and being able to communicate technical information at a high and effective level, as well as being able to lead my co-programmers toward a more robust end product. So my multipart question is: What might be the real benefit to me and my brain, and how do I convince my boss (the owner of the company, who does do some tuition reimbursement) that even though it doesn't say anything about software, it might still do us some good? Oh, and how do I get past the fact that a master's degree would make me more qualified to be the project manager than... the project manager? (who is my supervisor)

    Read the article

  • Explicitly pass context object versus injecting with IoC

    - by SonOfPirate
    I have a layered service application where the service layer delegates operations into the domain layer for execution. Many of these operations need to know the context under which they are operating. (The context includes the identity of the current user, culture information, etc., received from the caller.) For example, I have an API method that returns a list of announcements. The list is based on the current user's role, and each announcement is localized to their culture. The API is a thin facade that delegates to an Application Service in my domain layer. The Application Service method obviously needs to know the context of the current request/operation, as another call to the same API from another user should result in a different list. Within this method, we also have logging that uses some of the context information, so we have a clear understanding of the context in which the operation was performed (this is especially useful if something goes wrong). While this is a contrived example, in the real world my Application Services will coordinate operations with many collaborating components, any number of which also need the context information. My choice is between passing the context to the Application Service, which would then pass it along with any calls to collaborators, or having the IoC container satisfy the dependency that the Application Service and any collaborators have on the context. I am wondering if it is considered good/bad, best practice/code smell, etc. to pass the context object as a parameter to the domain methods, or if injecting the context via an IoC container is preferred. (EDIT: I should mention that the context object is instantiated per request.)
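
    A minimal sketch of the two options being weighed (IOperationContext and the service names are invented for illustration; the per-request lifetime would be configured in whatever IoC container is used): either the context travels as an explicit parameter, or it is constructor-injected.

        using System;
        using System.Collections.Generic;

        // Hypothetical per-request context.
        public interface IOperationContext
        {
            string UserId { get; }
            string Culture { get; }
        }

        // Option 1: the context is an explicit parameter on every operation.
        public class AnnouncementServiceExplicit
        {
            public IReadOnlyList<string> GetAnnouncements(IOperationContext context)
            {
                // ...filter by the role of context.UserId, localize to context.Culture...
                return Array.Empty<string>();
            }
        }

        // Option 2: the context is constructor-injected; the IoC container is
        // configured to resolve IOperationContext once per request.
        public class AnnouncementServiceInjected
        {
            private readonly IOperationContext context;

            public AnnouncementServiceInjected(IOperationContext context)
            {
                this.context = context;
            }

            public IReadOnlyList<string> GetAnnouncements()
            {
                // Same logic; collaborators that need the context declare it in their
                // own constructors instead of widening every method signature.
                return Array.Empty<string>();
            }
        }

    Since the question notes the context is created per request, the injected variant relies on the container scoping that instance to the request.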

    Read the article

  • Which web site gives the most accurate indication of a programmer's capabilities?

    - by Jerry Coffin
    If you were hiring programmers, and could choose between one of (say) the top 100 coders on topcoder.com, or one of the top 100 on stackoverflow.com, which would you choose? At least to me, it would appear that topcoder.com gives a more objective evaluation of pure ability to solve problems and write code. At the same time, despite obvious technical capabilities, this person may lack any hint of social skills -- he may be purely a "lone coder", with little or no ability to help/work with others, may lack mentoring ability to help transfer his technical skills to others, etc. On the other hand, stackoverflow.com would at least appear to give a much better indication of peers' opinion of the coder in question, and the degree to which his presence is useful and helpful to others on the "team". At the same time, the scoring system is such that somebody who just throws up a lot of mediocre (or even poor) answers will almost inevitably accumulate a positive total of "reputation" points -- a single up-vote (perhaps just out of courtesy) will counteract the effects of no fewer than 5 down-votes, and others are discouraged (to some degree) from down-voting because they have to sacrifice their own reputation points to do so. At the same time, somebody who makes little or no technical contribution seems unlikely to accumulate a reputation that lands them (even close to) the top of the heap, so to speak. So, which provides a more useful indication of the degree to which a particular coder is likely to be useful to your organization? If you could choose between them, which set of coders would you rather have working on your team?

    Read the article

  • How Can I Effectively Interview an Oracle Candidate?

    - by Tim Medora
    First, I browsed through SO for matching questions and didn't find one, but please point me in the right direction if this exact question has already been asked. I work with and around programmers of various skill levels on various platforms. I would consider my skills to be strong in terms of relational database design, query development, and basic performance tuning and administration. I'm mid-level when it comes to database theory. My team is looking to me to ensure that we have the best talent on staff, in this case, an engineer experienced in Oracle administration. To me, a well-rounded database administrator, regardless of platform, should also be competent in developing against the database so that is also a requirement. However my database skills are centralized around SQL Server 200x with experience in a few other products like SAP MaxDB, Access, and FoxPro. How can I thoroughly assess the skills of an Oracle engineer? I can ask high-level database theory questions and talk about routine tasks that are common across platforms, but I want to dig deep enough that I can be confident in the people I hire. Normally, I would alternate very specific questions that have a right/wrong answer with architectural questions that might have several valid answers. Does anyone have an interview template, specific questions, or any other knowledge that they can share? Even knowing the meaningful Oracle-related certifications would be a help. Thank you. EDIT: All the answers have been very helpful so far and I have given upvotes to everyone. I'm surprised that there are already 3 close votes on this question as "off topic". To be clear, I am specifically asking how a MS SQL Server engineer (like myself) can effectively interview a person with different but symbiotic skills. The question has already received specific, technical answers which have improved my own database design and programming skills. If this is more appropriate as a community wiki, please convert it.

    Read the article

  • Is there value in having new developers (graduates) start as testers / bug-fixers?

    - by Nico Huysamen
    Hi Programmers community. What are your thoughts on the following: Is there value in having new developers (graduates) start as testers / bug-fixers? There are two schools of thought here that I have come across. Having new developers (graduates) start as testers / bug-fixers / doing SLA (Service Level Agreement) work gets them familiar with the code base. It also allows them the opportunity to learn how to read [other people's] code. Furthermore, by fixing bugs, they will learn certain bad and good practices, which could hopefully help them in the future. The other way of thinking, though, is that if you immediately start new developers on something like testing / bug-fixing / SLA work, their appetite for the development world might go away, and/or they might leave the company and you potentially lose out on a great future resource. Is there a balance that should be kept between these two? Currently where I work there is no clear-cut definition of what new starters do. Some go directly on to client work, while others fall into the SLA world. Should companies have such a policy? Or should it be handled on a case-by-case or opportunity basis? Hope to hear from some of you that have experience in this field. Thanks!

    Read the article

  • Code base migration - old versioning system to modern

    - by JohnP
    Our current code base is contained in a versioning system that is old and outdated (Visual SourceSafe 5.0, mid-1990s), and contains a mix of packages that are no longer used, ones that are being used but no longer updated, and newer code. It is also a mix of 4 languages, and includes libraries for some of our system implementations (such as Dialogic and Sun Tzu {Clipper}). This breaks down into the following categories:

        - Legacy code - no longer used (systems that have been retired or replaced, etc.)
        - Legacy code - in current use (no intention of upgrades or minor bug fixes, only major fixes if needed)
        - Current code - in current use, and will be used for future versions/development
        - Support libraries - for both legacy and current code (some of the legacy libraries are no longer available as well)

    We would like to migrate this to a newer versioning system as we will be adding more developers and expanding our reach to include remote programmers. When migrating, how do you structure it? Do you just perform a dump of all the data and then import it into the new system, or do you segregate according to type before you bring it into the new system? Do you set up a separate area for libraries, or keep them with the relevant packages? Do you separate by language, system, both? A general outline and methodology is fine; it doesn't need to be broken down to the individual program level.

    Read the article

  • Will people respect a Masters of Science in IT w/software engineering concentration from RPI?

    - by twneale
    Here's my thing: I got my undergraduate degree in political science, then a law degree. Then I figured out that I love programming and I'm pretty good at it too. It's fun and rewarding enough for me that I'd prefer to do it for a living over almost any form of pure law practice. So I'm looking at getting a master's degree to put some weight behind a possible career switch. If I actually want to develop software (web, in particular), would people in programming circles respect a Master of Science in IT? Specifically, consider as an example the MS in IT from Rensselaer Polytechnic Institute (with a concentration in software engineering). Here's the home page: http://www.rpi.edu/IT/graduate/masters_program.html In particular, I mean to draw a contrast between IT as specifically contemplated by the RPI master's program (an interdisciplinary tech/business program) and other MS degrees in computer science or software engineering that focus more on the science and technical aspects. I guess I want to make sure that other programmers would respect my credentials and not consider me different or underqualified based on the connotations of the phrase "IT". I believe RPI has an unimpeachable reputation for hard science, and the program seems excellent, but it still matters to me how people in industry would perceive it.

    Read the article
