Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.

Page 183/605 | < Previous Page | 179 180 181 182 183 184 185 186 187 188 189 190  | Next Page >

  • dependency injection example project suggestion

    - by TokenMacGuy
    I'm exploring dependency injection and trying to make the exercise as pythonic as possible; existing dependency injection frameworks seem very java-like. I've made some pretty good progress building my own framework, but I could really use a model project to validate the framework against. An ideal suggestion would be something that is hard without dependency injection, but is otherwise conceptually trivial.
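
    As a rough illustration of the kind of toy project that fits that description (the report/clock/data-source idea and every name below are hypothetical, not from the question), a minimal constructor-injection sketch in Python might look like:

        from datetime import datetime

        class Report:
            """Conceptually trivial, but awkward to test if it reached for datetime.now() itself."""
            def __init__(self, clock, source):   # collaborators are injected, not hard-coded
                self._clock = clock
                self._source = source

            def render(self):
                return f"{self._clock():%Y-%m-%d}: {sum(self._source())} items"

        # Production wiring
        report = Report(clock=datetime.now, source=lambda: [1, 2, 3])

        # Test wiring: both collaborators swapped without touching Report
        fake = Report(clock=lambda: datetime(2020, 1, 1), source=lambda: [])
        assert fake.render() == "2020-01-01: 0 items"

    A framework's job is then only the wiring at the bottom, which is why something date- or I/O-heavy (a report generator, a notifier, a cache with expiry) tends to show the benefit clearly.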

    Read the article

  • Should I give the answer to a failed interview coding exercise?

    - by GlenH7
    We had a senior-level interview candidate fail a nuance of the FizzBuzz question*. I mean, really, utterly, completely failed the question - not even close. I even coached him through to thinking about using a loop and that 3 and 5 were really worth considering as special cases. He blew it. Just for QA purposes, I gave the same exact question to three teammates, gave them 5 minutes, and then came back to collect their pseudo-code. All of them nailed it, and none of them had seen the question before. Two asked what the trick was... On a different logic exercise, the candidate showed some understanding of some of the features available within the language he chose to use (C#). So it's not as if he had never written a line of code. But his logic still stunk. My question is whether or not I should have given him the answer to the logic questions. He knew he blew them, and acknowledged it later in the interview. On the other hand, he never asked for the answer or what I was expecting to see. I know coding exercises can be used to set candidates up for failure (again, see the second link from above). And I really tried to help him home in on answering the core of the question. But this was a senior-level candidate, and FizzBuzz is, frankly, ridiculously easy even accounting for interview jitters. I felt like I should have shown him a way of solving the problem so that he could at least learn from the experience. But again, he didn't ask. What's the right way to handle that situation? *Okay, that's not the link to the actual FizzBuzz question, but it is a good P.SE discussion around FizzBuzz and links to the various aspects of it.
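
    For reference, a minimal solution along the lines being coached (a loop, with 3 and 5 as the special cases) is only a few lines; this sketch is in Python rather than C#:

        def fizzbuzz(n):
            """Multiples of 3 -> Fizz, of 5 -> Buzz, of both -> FizzBuzz, else the number."""
            out = []
            for i in range(1, n + 1):
                if i % 15 == 0:
                    out.append("FizzBuzz")
                elif i % 3 == 0:
                    out.append("Fizz")
                elif i % 5 == 0:
                    out.append("Buzz")
                else:
                    out.append(str(i))
            return out

        print("\n".join(fizzbuzz(15)))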

    Read the article

  • How to design software when using BDD?

    - by Léster
    I'm working on a project right now and it's my first project using BDD. Up till now, the user stories have proven themselves a very valuable weapon to understand requirements and to specify the solution in a comprehensive, easy to understand language. My question is this: now that my user stories are complete, how do I design my solution? I understand that I derive behavior tests from my user stories, and I have to do UI design, but am I supposed to use good ol' UML? I'm under the impression that when using user stories, UML is left out; is this correct? Léster
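
    For what it's worth, the behaviour tests derived from a story can stay very close to the story's own wording; the account/withdrawal story and names below are purely illustrative, not from the question:

        # Story (hypothetical): "As a customer I can withdraw cash if my balance covers it."
        class Account:
            def __init__(self, balance):
                self.balance = balance

            def withdraw(self, amount):
                if amount > self.balance:
                    raise ValueError("insufficient funds")
                self.balance -= amount

        def test_withdrawal_within_balance():
            account = Account(balance=100)   # Given an account with a balance of 100
            account.withdraw(30)             # When the customer withdraws 30
            assert account.balance == 70     # Then the balance is 70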

    Read the article

  • A new name for unit tests

    - by Will
    I never used to like unit testing. I always thought it increased the amount of work I had to do. Turns out, that's only true in terms of the actual number of lines of code you write, and furthermore it is completely offset by the increase in the number of lines of useful code that you can write in an hour with tests and test-driven development. Now I love unit tests, as they allow me to write useful code that quite often works the first time! (knock on wood) I have found that people are reluctant to do unit tests or start a project with test-driven development if they are under strict timelines or in an environment where others don't do it, so they don't. Kinda like a cultural refusal to even try. I think one of the most powerful things about unit testing is the confidence that it gives you to undertake refactoring. It also gives newfound hope that I can give my code to someone else to refactor/improve, and if my unit tests still work, I can use the new version of the library that they modified, pretty much, without fear. It's this last aspect of unit testing that I think needs a new name. The unit test is more like a contract of what this code should do now, and in the future. When I hear the word testing, I think of mice in cages, with multiple experiments done on them to see the effectiveness of a compound. This is not what unit testing is: we're not trying out different code to see which is the most effective approach, we're defining what outputs we expect with what inputs. In the mice example, unit tests are more like the definitions of how the universe will work, as opposed to the experiments done on the mice. Am I on crack, or does anyone else see this refusal to do testing, and do they think it's for a similar reason that people don't want to do it? What reasons do you / others give for not testing? What do you think their motivations are in not unit testing? And as a new name for unit testing that might get over some of the objections, how about jContract? (A bit Java-centric, I know :) Or Unit Contracts?
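
    A tiny sketch of that "contract" reading (the parse_price function and its rules are invented for illustration): the test states what the code promises for given inputs, so any refactored implementation that keeps the promise can be dropped in.

        def parse_price(text):
            """Turn a price string such as '$19.99' into a float, rounded to cents."""
            return round(float(text.strip().lstrip("$")), 2)

        def test_parse_price_contract():
            assert parse_price("$19.99") == 19.99   # accepts a leading dollar sign
            assert parse_price("  5 ") == 5.0       # tolerates surrounding whitespace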

    Read the article

  • Does anyone actually use the /// comment blocks?

    - by Rachel
    Someone once said we should prefix all our methods with the /// <summary> comment blocks (C#) and I am wondering if that is true or not. I started to use them and found they annoyed me quite a bit, so stopped using them except for libraries and static methods. They're bulky and I'm always forgetting to update them. Do you recommend using them? Why? EDIT: I normally use // comments all the time, it's just the /// <summary> blocks I was wondering about

    Read the article

  • Object-Oriented equivalent of LISP's progn function?

    - by Archer
    I'm currently writing a LISP parser that iterates through some AutoLISP code and does its best to make it a little easier to read (changing prefix notation to infix notation, changing setq assignments to "=" assignments, etc.) for those who aren't used to LISP code or have only learned object-oriented programming. While adding the commands LISP uses to a "library" of LISP commands, I came across the LISP command "progn". The only problem is that it looks like progn simply executes code in a specific order and sometimes (not usually) assigns the last value to a variable. Am I incorrect in assuming that, when translating progn into object-oriented terms, I can simply forgo the progn function and print the statements that it contains? If not, what would be a good equivalent for progn in an object-oriented language?
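
    A small sketch of that reading in Python (the helper and names are illustrative, not from the question): since progn evaluates its forms in order and yields the last value, it can be rendered either with a trivial helper or as plain sequential statements with the last result assigned.

        def progn(*values):
            """Arguments are already evaluated left to right; keep only the last value."""
            return values[-1] if values else None

        log = []
        x = progn(log.append("side effect"), 40 + 2)   # runs both, keeps the last value
        assert x == 42 and log == ["side effect"]

        # ...which is the same as simply emitting the statements in order:
        # log.append("side effect")
        # x = 40 + 2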

    Read the article

  • What is a widely accepted term for a string variable that would probably contain a file path and file name?

    - by Peter Turner
    For functions that need to index files in a directory and rename them FileName0001, FileName0002, etc., I often need to write a function that splits the file name from the file path and renames the file. When I put the file name and file path back together, I don't have a very good name for the variable that contains both of them, and I usually just wind up concatenating them every time I want to use them (usually passing them as parameters to functions labeled either filename or filepath), so I never really know what I'm doing until I notice a lot of files being written in the same directory as my binaries. Anyway, what do I call a file name and a file path together? I don't want to call it File, because that usually means the binary information behind the file. I don't want to call it URI, because that usually means I've got some sort of protocol, which I don't. I just want a good way to denote "c:\somedir\somedir\somedir\somefile.txt" so as to deconfuse this mess I've just realized I'm in. Please don't just list your personal preference. I think an excellent answer should "cite its sources" (as in, provide a link to a repository with a good example of the code being used as I described).
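
    Not a ruling on the name, but as one data point from a widely used standard library: Python's os.path and pathlib simply call the combined directory-plus-file-name string a path, and split it like this (the example path is the one from the question):

        from pathlib import PureWindowsPath

        p = PureWindowsPath(r"c:\somedir\somedir\somedir\somefile.txt")
        print(p.name)     # somefile.txt                 (file name with extension)
        print(p.stem)     # somefile                     (file name without extension)
        print(p.suffix)   # .txt
        print(p.parent)   # c:\somedir\somedir\somedir   (the directory part)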

    Read the article

  • How to Detect Sprites in a SpriteSheet?

    - by IAE
    I'm currently writing a sprite sheet unpacker similar to Alferd Spritesheet Unpacker. Now, before this is sent to gamedev, this isn't necessarily about games. I would like to know how to detect a sprite within a sprite sheet, or more abstractly, a shape inside an image. Given this sprite sheet: I want to detect and extract all individual sprites. I've followed the algorithm detailed in Alferd's blog post, which goes like this: determine the predominant color and dub it the BackgroundColor; iterate over each pixel and check ColorAtXY == BackgroundColor; if false, we've found a sprite - keep going right until we find the BackgroundColor again, backtrack one, go down, and repeat until the BackgroundColor is reached; create a box from the starting location to the ending location; repeat this until all sprites are boxed up; combine overlapping boxes (or boxes within a very short distance). The resulting non-overlapping boxes should contain the sprites. This implementation is fine, especially for small sprite sheets. However, I find the performance too poor for larger sprite sheets, and I would like to know what algorithms or techniques can be leveraged to speed up finding the sprites. A second implementation I considered, but have not tested yet, is to find the first pixel, then use a backtracking algorithm to find every connected pixel. This should find a contiguous sprite (it breaks down if the sprite is something like an explosion where particles are no longer part of the main sprite). The cool thing is that I can immediately remove a detected sprite from the sprite sheet. Any other suggestions?
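
    A rough sketch of the second idea (flood-filling connected pixels and taking their bounding box), assuming the sheet is available as a plain 2D list of pixel values and the background colour is already known; it is illustrative only, not the algorithm from the linked post:

        from collections import deque

        def find_sprites(pixels, background):
            """Return bounding boxes (min_x, min_y, max_x, max_y) of connected non-background regions."""
            height, width = len(pixels), len(pixels[0])
            seen = [[False] * width for _ in range(height)]
            boxes = []
            for y in range(height):
                for x in range(width):
                    if seen[y][x] or pixels[y][x] == background:
                        continue
                    # Flood fill from this pixel, tracking the bounding box as we go.
                    min_x = max_x = x
                    min_y = max_y = y
                    queue = deque([(x, y)])
                    seen[y][x] = True
                    while queue:
                        cx, cy = queue.popleft()
                        min_x, max_x = min(min_x, cx), max(max_x, cx)
                        min_y, max_y = min(min_y, cy), max(max_y, cy)
                        for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                            if (0 <= nx < width and 0 <= ny < height
                                    and not seen[ny][nx]
                                    and pixels[ny][nx] != background):
                                seen[ny][nx] = True
                                queue.append((nx, ny))
                    boxes.append((min_x, min_y, max_x, max_y))
            return boxes  # overlapping or nearby boxes can still be merged afterwards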

    Read the article

  • Workflow versioning

    - by Nitra
    I believe I have a fundamental misunderstanding when it comes to workflow engines, which I would appreciate your help in sorting out. I'm not sure if my misunderstanding is specific to the workflow engine I'm using, or if it's a general misunderstanding. I happen to use Windows Workflow Foundation (WWF). TLDR version: WWF allows you to implement business processes in long-running workflows (think months or even years). Once started, the workflows can't be changed. But what business process can't change at any time? And if a business process changes, wouldn't you want your software to reflect this change for already started 'instances' of the business process? What am I missing? Background: In WWF you define a workflow by combining a set of activities. There are different types of activities - some of them are for flow control, such as the IfElseActivity and the WhileActivity, while others allow you to perform actual tasks, such as the CodeActivity, which allows you to run .NET code, and the InvokeWebServiceActivity, which allows you to call web services. The activities are combined into a workflow using a visual designer. You pretty much drag-and-drop activities from a toolbox to a designer area and connect the activities to each other. The workflow and activities have input parameters, output parameters and variables. We have a single workflow which sometimes runs in a matter of a few days, but it may run for 5-6 months. WWF takes care of persisting the workflow state (what activity we are currently executing, what the variable values are, and so on). So far I think WWF makes sense. Some people will prefer to implement a software representation of a business process using a visual designer over writing all of it in code. So what's the issue then? What I don't really get is the following: WWF is designed to take care of long-running workflows, but at the same time it has no built-in functionality that allows you to modify running workflows. So if you model a business process using a workflow and run it for 6 months, you had better hope that the business process does not change, because if it does, you'll have to have multiple versions of the workflow executing at the same time. This seems like a fundamental design mistake to me, but at the same time it seems more likely that I've misunderstood something. For us, this has had some real-world effects: We release new versions every month, but some workflows may run for a year. This means that we have several versions of the workflow running in parallel, in other words several versions of the business logic. This is the same as having many different versions of your code running in production in the same system at the same time, which becomes a bit hard to understand for users (depending on whether they clicked a 'Start' button 9 or 10 months ago, the software will behave differently). Our workflow refers to different types of entities, and since WWF has now persisted and serialized these, we can't really refactor the entities, because then existing workflows can't be resumed (deserialization will fail). We've received some suggestions on how to handle this: (1) When we create a new version of the workflow, cancel all running workflows and create new ones. But in our workflows there's a lot of manual work involved, and if we start from scratch a lot of people would have to redo their work. (2) Track what has been done in the workflow, and when you create a new one, skip the activities which have already been executed.
I feel that this alternative may work for simple workflows, but it becomes hairy to automatically figure out which activities to skip if there's major refactoring done to a workflow. (3) When we create a new version of the workflow, upgrade old versions using the new WWF 4.5 functionality for upgrading workflows. But then we would have to skip using the visual designer and write code to inject activities in the right places in the workflow. According to MSDN, this upgrade functionality is only intended for minor bug fixes and not larger changes. What am I missing?

    Read the article

  • Is a coding standard even needed anymore?

    - by SomeKittens
    I know that it's been proven that a coding standard helps enormously. However, there are many different tools and IDEs that will format to whatever standard the programmer prefers. So long as the code's neat/commented (and not a spaghetti mess), I don't see the need for a coding standard. Are there any arguments for the development of a coding standard (we don't have one, but I was looking into creating one)?

    Read the article

  • Agile bug fixing - what's the preferred process for testing?

    - by Andrew Stephens
    When a bug is fixed, the dev sets its status to "resolved" and the bug is reassigned back to the person who created it. In our case this is usually the product owner - we don't have dedicated testers. But what's a good process for controlling how/when the PO tests the software? Should he be given the latest build after each bug is resolved/checked in? Or what about every morning? Or should he only receive a build at (or close to) the end of the iteration, to include all of that iteration's new functionality and bug fixes? We are using TFS, by the way.

    Read the article

  • What should I learn to create web-services like ones listed? [closed]

    - by Gerald Blizz
    I am very inspired by websites like imgur, dropbox, screencloud, maybe w3schools... you get my point. Fresh web services with some new idea - not big portals, but something simple yet useful and used by many people, something simple and new. What aspects of my developer career should I focus on to be able to build such things on my own, if I have enough ideas? (Sure, if it ends up being popular I can get more developers to help me and so on, but at first I can do it alone, right?) I am currently a PHP web developer; I know HTML+CSS+JS+AJAX+jQuery. But even so, there is still web design, and there are a lot of paths: websites for enterprise, startups, web services, entertainment websites and serious bank/document-flow systems, frameworks used for big systems, different approaches for little ones, etc. Which path should I take to be able to start my own projects like the ones I listed above, which inspire me?

    Read the article

  • Looking for information about the HP OpenView Service Desk API, or understanding an API without any information about it

    - by Zagorulkin Dmitry
    Good day, folks. I am very confused by this situation: I need to implement a system based on the HP OpenView Service Desk 4.5 API, but that product has reached the end of its support period and no information is available on the official site. I am looking for information about this API (articles, samples, etc.). Right now I only have web-api.jar and the Javadoc, and the methods in the Javadoc are poorly documented. If you have any info, please share it with me. Thanks. Second question: are there methods for understanding an API (one with a huge number of methods) when it is not documented or the information is not available? PS: If this question does not belong here, I will delete it.

    Read the article

  • How to design a scriptable communication emulator?

    - by Hawk
    Requirement: We need a tool that simulates a hardware device that communicates via RS232 or TCP/IP to allow us to test our main application which will communicate with the device. Current flow: User loads script Parse script into commands User runs script Execute commands Script / commands (simplified for discussion): Connect RS232 = RS232ConnectCommand Connect TCP/IP = TcpIpConnectCommand Send data = SendCommand Receive data = ReceiveCommand Disconnect = DisconnectCommand All commands implement the ICommand interface. The command runner simply executes a sequence of ICommand implementations sequentially thus ICommand must have an Execute exposure, pseudo code: void Execute(ICommunicator context) The Execute method takes a context argument which allows the command implementations to execute what they need to do. For instance SendCommand will call context.Send, etc. The problem RS232ConnectCommand and TcpIpConnectCommand needs to instantiate the context to be used by subsequent commands. How do you handle this elegantly? Solution 1: Change ICommand Execute method to: ICommunicator Execute(ICommunicator context) While it will work it seems like a code smell. All commands now need to return the context which for all commands except the connection ones will be the same context that is passed in. Solution 2: Create an ICommunicatorWrapper (ICommunicationBroker?) which follows the decorator pattern and decorates ICommunicator. It introduces a new exposure: void SetCommunicator(ICommunicator communicator) And ICommand is changed to use the wrapper: void Execute(ICommunicationWrapper context) Seems like a cleaner solution. Question Is this a good design? Am I on the right track?
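
    A sketch of the second option in Python (the interface names follow the question, but the code itself, including the placeholder TcpIpCommunicator, is illustrative only): the connect commands set the communicator on a shared broker, so Execute keeps a void-style signature and only the broker is ever passed around.

        class TcpIpCommunicator:
            """Placeholder standing in for the real TCP/IP communicator."""
            def __init__(self, host, port):
                self._address = (host, port)

            def send(self, data):
                print(f"sending {data!r} to {self._address}")

        class CommunicationBroker:
            """Commands talk to this wrapper, never to a communicator directly."""
            def __init__(self):
                self._communicator = None

            def set_communicator(self, communicator):
                self._communicator = communicator

            def send(self, data):
                self._communicator.send(data)

        class TcpIpConnectCommand:
            def __init__(self, host, port):
                self._host, self._port = host, port

            def execute(self, broker):
                broker.set_communicator(TcpIpCommunicator(self._host, self._port))

        class SendCommand:
            def __init__(self, data):
                self._data = data

            def execute(self, broker):
                broker.send(self._data)

        def run(commands):
            """The command runner keeps its simple shape: the same context for every command."""
            broker = CommunicationBroker()
            for command in commands:
                command.execute(broker)

        run([TcpIpConnectCommand("localhost", 5000), SendCommand(b"ping")])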

    Read the article

  • Is this how dynamic language copes with dynamic requirement?

    - by Amumu
    The question is in the title. I want to have my thinking verified by experienced people. You can add more or disregard my opinion, but give me a reason. Here is an example requirement: Suppose you are required to implement a fighting game. Initially, the game only includes fighters, who can attack each other. Each fighter can punch, kick or block incoming attacks. Fighters can have various fighting styles: Karate, Judo, Kung Fu... That's it for the simple universe of the game. In an OO language like Java, it can be implemented similar to this: abstract class Fighter { int hp, attack; void punch(Fighter otherFighter); void kick(Fighter otherFighter); void block(Fighter otherFighter); }; class KarateFighter extends Fighter { //...implementation...}; class JudoFighter extends Fighter { //...implementation... }; class KungFuFighter extends Fighter { //...implementation ... }; This is fine if the game stays like this forever. But somehow the game designers decide to change the theme of the game: instead of a simple fighting game, the game evolves to become an RPG, in which characters can not only fight but perform other activities, i.e. a character can be a priest, an accountant, a scientist, etc. At this point, to make it more generic, we have to change the structure of our original design: Fighter is not used to refer to a person anymore; it refers to a profession. The same goes for the specialized classes of Fighter (KarateFighter, JudoFighter, KungFuFighter). Now we have to create a generic class named Person. However, to adapt to this change, I have to change the method signatures of the original operations: class Person { int hp, attack; List<Profession> skillSet; }; abstract class Profession {}; class Fighter extends Profession { void punch(Person otherFighter); void kick(Person otherFighter); void block(Person otherFighter); }; class KarateFighter extends Fighter { //...implementation...}; class JudoFighter extends Fighter { //...implementation... }; class KungFuFighter extends Fighter { //...implementation ... }; class Accountant extends Profession { void calculateTax(Person p) { //...implementation...}; void calculateTax(Company c) { //...implementation...}; }; //... more professions... Here are the problems: To adapt to the method changes, I have to fix the places where the changed methods are called (refactoring). Every time a new requirement is introduced, the current structural design has to be broken to adapt to the changes. This leads to the first problem. A rigid structure makes code reuse hard. A function can only accept the predefined types, but it cannot accept future unknown types. A written function is bound to its current universe and has no way to accommodate the new types without modifications or a rewrite from scratch. I see Java has a lot of deprecated methods. OO is an extreme case because it has inheritance to add to the complexity, but in general for a statically typed language, types are very strict. In contrast, a dynamic language can handle the above case as follows: ;;fighter1 punch fighter2 (defun perform-punch (fighter1 fighter2) ...implementation... ) ;;fighter1 kick fighter2 (defun perform-kick (fighter1 fighter2) ...implementation... ) ;;fighter1 blocks attacks from fighter2 (defun perform-block (fighter1 fighter2) ...implementation... ) fighter1 and fighter2 can be anything as long as they have the required data for the calculation, or the required methods (duck typing). You don't have to change from the type Fighter to Person.
    In the case of Lisp, because Lisp essentially revolves around a single data structure - the list - it's even easier to adapt to changes. However, other dynamic languages can exhibit similar behavior as well. I work primarily with static languages (mainly C and Java, though working with Java was a long time ago). I started learning Lisp and some other dynamic languages this year, and I can see how it helps improve my productivity.
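
    The same point can be made in Python with duck typing (the names below are illustrative): perform_punch never names a type, so it keeps working when fighters later become professions attached to a Person, as long as the objects involved still carry hp and attack.

        def perform_punch(attacker, target):
            target.hp -= attacker.attack     # no type named, only the attributes used

        class Fighter:                       # the original universe
            def __init__(self, hp, attack):
                self.hp, self.attack = hp, attack

        class Person:                        # the later universe: skills live elsewhere
            def __init__(self, hp, attack, skill_set=()):
                self.hp, self.attack = hp, attack
                self.skill_set = list(skill_set)

        perform_punch(Fighter(100, 10), Person(80, 5))   # both shapes work unchanged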

    Read the article

  • How can QA prevent defects?

    - by user970696
    According to Software Testing by Srinivasan Desikan and Gopalaswamy Ramesh, and also to the ISTQB textbooks, quality assurance consists of, e.g., reviewing products, inspections and walkthroughs to see if all standards are being followed, and this is a preventive activity. I cannot see how this can be preventive. For reference: "Defect prevention (Quality Assurance)" - Software Testing by Srinivasan Desikan, Gopalaswamy Ramesh. "Quality Assurance (QA) tries to go one step further. Instead of concentrating on post-facto defect detection and correction, it focusses on the prevention of defects from the very start." - Managing Global Software Projects, page 110. "QA deals with prevention of defects in the product being developed." - Software Testing and Quality Assurance.

    Read the article

  • Is there a variable width font that does not change width when adding effects like bold, italic?

    - by George Bailey
    NetBeans has a word wrap feature now - but if the font changes width when bold, then it gets all jumpy and sometimes hard to work with. Edit: It turns out that even with Courier New, NetBeans' word wrap still jumps up and down lines at a time at random. I guess this question no longer needs an answer. However, it seems that there is no answer (at least nobody has brought one up yet). I am currently using Comic Sans MS, which gets wider when bold.

    Read the article

  • Programming and Ubiquitous Language (DDD) in a non-English domain

    - by Sandor Drieënhuizen
    I know there are some questions already here that are closely related to this subject, but none of them take Ubiquitous Language as the starting point, so I think that justifies this question. For those who don't know: Ubiquitous Language is the concept of defining a (both spoken and written) language that is equally used across developers and domain experts to avoid inconsistencies and miscommunication due to translation problems and misunderstanding. You will see the same terminology show up in code, conversations between any team member, functional specs and whatnot. So, what I was wondering about is how to deal with Ubiquitous Language in non-English domains. Personally, I strongly favor writing programming code entirely in English, including comments, but of course excluding constants and resources. However, in a non-English domain, I'm forced to make a decision either to: 1) write code reflecting the Ubiquitous Language in the natural language of the domain; 2) translate the Ubiquitous Language to English and stop communicating in the natural language of the domain; or 3) define a table that specifies how the Ubiquitous Language translates to English. Here are some of my thoughts based on these options: 1) I have a strong aversion to mixed-language code, that is, coding using type/member/variable names etc. that are non-English. Most programming languages 'breathe' English to a large extent and most of the technical literature, design pattern names etc. are in English as well. Therefore, in most cases there's just no way of writing code entirely in a non-English language, so you end up with mixed languages. 2) This will force the domain experts to start thinking and talking in the English equivalent of the UL, something that will probably not come naturally to them and therefore hinders communication significantly. 3) In this case, the developers communicate with the domain experts in their native language while the developers communicate with each other in English and, most importantly, they write code using the English translation of the UL. I'm sure I don't want to go for the first option and I think option 3 is much better than option 2. What do you think? Am I missing other options?

    Read the article

  • Which .NET REST approach/technology/tool should I use?

    - by SonOfPirate
    I am implementing a RESTful web service and several client applications that are mostly in Silverlight. I am finding a litany of options for developing both the server-side and client-side of the API but am not sure which is the best approach. I'm concerned about stability as well as a platform that will continue to exist a few months from now. We started using the REST Starter Kit with .NET 3.5 but moved to the new WCF Web API when updating to .NET 4.0. All of their documentation indicates that WCF Web API is the replacement for the RSK. However, Web API is only in Preview 4 and does not include support for Silverlight or Windows Phone 7 clients (yet). WCF Web API looks like a wrapper on top of the WCF WebHttp Services stuff provided in the System.ServiceModel.Web library which makes me think that maybe it would be simpler to just go with the built-in stuff but Web API does offer some nice features. I am specifically tied-up trying to determine the best course for the client-side. My main requirement is that I need to support deserializing into my client-side objects quickly and easily. The Web API offers a nice client library but doesn't have a Silverlight version. I'd like to use the latest approach and the toolset that is being actively developed and supported. Is the REST Starter Kit really obsolete? Has anyone had any success implementing the WCF Web API toolkit? Is there merit to using either of these over the built-in WCF WebHttp Services features found in System.ServiceModel.Web? Is there a single solution that works for any client (web, Silverlight, etc.)? What suggestions do you have?

    Read the article

  • Is it possible to execute keyboard input programmatically in Linux?

    - by Taylor Hawkes
    For example, is there a Linux command, or a way from a program (C++, Python, or other), to enter a series of inputs that are interpreted as though they came from the keyboard? I have a bad case of RSI from typing. To ease my pain I developed a voice-controlled interface using PocketSphinx and a custom grammar to run a number of very common commands, e.g. "open chrome", "open vim" - basically what is shown here, but with slightly different tools: http://bloc.eurion.net/archives/2008/writing-a-command-and-control-application-with-voice-recognition/ I have run into some limitations, as I can only execute a command-line command for a given voice command. Rather than having a "voice command" - "command line command" mapping, I would like to have a "voice command" - "keyboard input" mapping. So when my active window is a browser and I type + n, a new tab opens; if I'm in vim, a new vim tab opens. Any suggestions, ideas, tools or approaches to this problem would be much appreciated. I understand the answer may not be simple, but I would like to develop it nonetheless.
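
    One common route under X11 (an assumption about the setup, not something stated in the question) is to have the voice handler shell out to the xdotool utility, which can synthesise key chords and literal typing into whatever window currently has focus:

        import subprocess

        def press(chord):
            """Send a key chord such as 'ctrl+t' to the currently focused window."""
            subprocess.run(["xdotool", "key", chord], check=True)

        def type_text(text):
            """Type literal text into the currently focused window."""
            subprocess.run(["xdotool", "type", text], check=True)

        # e.g. mapping a recognised voice command to keystrokes instead of a shell command:
        # press("ctrl+t")          # new tab in most browsers
        # type_text(":tabnew\n")   # new tab in vim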

    Read the article

  • How to manage a developer who has poor communication skills

    - by djcredo
    I manage a small team of developers on an application which is at the mid-point of its lifecycle, within a big firm. This unfortunately means there is commonly a 30/70 split of programming tasks to "other technical work". This work includes: working with DBA / Unix / network / load-balancer teams on various tasks; placing and managing orders for hardware or infrastructure in different regions; running tests that have not yet been migrated to CI; analysis; support / investigation. It's fair to say that the developers would all prefer to be coding rather than doing these more mundane tasks, so I try to hand out the fun programming jobs evenly amongst the team. Most of the team was hired because, though they may not have the elite programming skills to write their own compiler / game engine / high-frequency trading system etc., they are good communicators who "can get stuff done", work with other teams, and can somewhat navigate the complex bureaucracy here. They are good developers, but they are also good all-round technical staff. However, one member of the team probably has above-average coding skills but below-average communication skills. Traditionally, the previous development manager tended to give him the programming tasks and not the more mundane tasks listed above. However, I don't feel that this is fair to the rest of the team, who have shown an aptitude for developing the well-rounded skillset that is commonly required in a big-business IT department. What should I do in this situation? If I continue to give him more programming work, I know that it will be done faster (and conversely, I would expect him to complete the other work more slowly). But it goes against my principles, and promotes the idea that you can carve out a "comfortable niche" for yourself simply by being bad at the tasks you don't like.

    Read the article

  • .NET licenses and project worths millions

    - by Ivan Tanasijevic
    I have a question about .NET licenses. I heard that in the case where a project becomes worth millions, Microsoft has rights to a great percentage of that amount. If this is true, then how do things stand with a social network built with ASP.NET MVC? Is it the same situation as with profit that comes from selling software, given that in this case the profit comes from marketing rather than from directly selling the software?

    Read the article

  • Databases and the CI server

    - by mlk
    I have a CI server (Hudson) which merrily builds, runs unit tests and deploys to the development environment, but I'd now like to get it running the integration tests. The integration tests will hit a database, and that database will constantly be changed to contain the data relevant to the test in question. This, however, leads to a problem - how do I make sure the database is not being splatted with data for one test and that data then being overridden by a second project before the first set of tests completes? I am currently using the "hope" method, which is not working out too badly at the moment, but mostly due to the fact that we only have a small number of integration tests set up on CI. As I see it, I have the following options: Test-local (in-memory) databases: I'm not sure if any in-memory databases handle all the scariness of Oracle's triggers and packages etc., and anything less I don't feel would be a worthwhile test. CI executor-local databases: A fair amount of work would be needed to set this up and keep them up to date, but it's definitely an option (most of the work is already done to keep the current CI database up to date). A single "integration test" executor: Likely the easiest to implement, but it would mean the integration tests could fall quite far behind. Locking the database (or a set of tables). I'm sure I've missed some ways (please add them). How do you run database-based integration tests on the CI server? What issues have you had, and what method do you recommend? (Note: While I use Hudson, I'm happy to accept answers for any CI server; the ideas I'm sure will be portable, even if the details are not.) Cheers, Mlk
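
    As a sketch of the first option only (SQLite standing in for an in-memory database; it will not exercise Oracle-specific triggers or packages, which is exactly the trade-off noted above), each test can own a throwaway database so parallel builds cannot trample each other's data:

        import sqlite3
        import unittest

        class OrderRepositoryTest(unittest.TestCase):
            def setUp(self):
                # A fresh, private database per test: nothing to lock, nothing to share.
                self.db = sqlite3.connect(":memory:")
                self.db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

            def tearDown(self):
                self.db.close()

            def test_insert_and_count(self):
                self.db.execute("INSERT INTO orders (total) VALUES (9.99)")
                count = self.db.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
                self.assertEqual(count, 1)

        if __name__ == "__main__":
            unittest.main()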

    Read the article

  • Should I list this work experience on my resume? [closed]

    - by Phoenix
    I am currently working at a company. I did an internship before this job with a prestigious company, and the project itself was challenging, but it was in its initial phases, so there were no tight schedules, and we ended up brainstorming for the first month and spending the second month actually setting up our hardware, which was Linux servers in a lab and a cluster administrator for the servers. I then wrote an add-in task which runs on the server and uses an existing API to collect some statistics from the servers in the cluster and feed them into another entity, which is basically an algorithm that calculates how the load on the servers should be automatically balanced. Neither of these things went into production by the time I left the company, and I'm not even sure of their current state. Does it make sense to include it on my resume, then? I also worked as a software engineer right out of school at another prestigious company for 9 months. I was involved in some bug fixes before the product launched, and I don't even recollect the exact fixes I made to the product. So, does it make sense to have these experiences on my resume? Will people question me about them, and will saying it was bug fixes and mentioning what kind of fixes be enough to justify my work experience there?

    Read the article

  • Do employers prefer software engineering over CS majors?

    - by Joey Green
    I'm in grad school at a university that was one of the first to have a software engineering accredited program. My undergrad is in CS. An employer recently recruited at our university and hired 5 SE majors. None of them were CS. Do employers prefer software engineering majors? The reason I ask is because I can focus on many different areas during my graduate studies and really want to take the classes that will help me land a great job. Right now I'm either going to use CUDA and parallelize an advanced ray-tracer for a graduate project or do research on non-photo-realistic rendering in augmented reality. Pursuing these would leave very little SE classes in my schedule. If I went the software engineering route, I would probably either do research into data-oriented programming or software design complexity. Sometimes I think when I'm 40 and look back will it matter at all? For some reason I'm thinking not.

    Read the article
