Search Results

Search found 16554 results on 663 pages for 'programmers identity'.

Page 132/663

  • Where should I draw the line between unit tests and integration tests? Should they be separate?

    - by Earlz
    I have a small MVC framework I've been working on. Its code base definitely isn't big, but it's no longer just a couple of classes. I finally decided to take the plunge and start writing tests for it (yes, I know I should've been doing that all along, but its API was super unstable up until now). Anyway, my plan is to make it extremely easy to test, including integration tests. An example integration test would go something along these lines: fake HTTP request object → MVC framework → HTTP response object → check the response is correct. Because this is all doable without any state or special tools (browser automation etc.), I could actually do it with ease in a regular unit test framework (I use NUnit). Now the big question: where exactly should I draw the line between unit tests and integration tests? Should I only test one class at a time (as much as possible) with unit tests? Also, should integration tests be placed in the same test project as my unit tests?
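    A minimal sketch of where that line can sit, written JUnit-style in Java (the question uses NUnit/C#, but the shape is the same); the framework types here are hypothetical stand-ins for the asker's own classes, stubbed out so the example is self-contained:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class FrameworkBoundaryTest {

            // Hypothetical stand-ins for the asker's own framework types.
            static class FakeHttpRequest {
                final String method, path;
                FakeHttpRequest(String method, String path) { this.method = method; this.path = path; }
            }
            static class HttpResponse {
                final int status; final String body;
                HttpResponse(int status, String body) { this.status = status; this.body = body; }
            }
            static class Router {
                String resolve(String path) { return path.startsWith("/articles/") ? "ArticlesController#show" : "404"; }
            }
            static class Framework {
                // Integration surface: request in, response out, everything in between is real.
                static HttpResponse handle(FakeHttpRequest request) {
                    String route = new Router().resolve(request.path);
                    return route.equals("404") ? new HttpResponse(404, "") : new HttpResponse(200, "Article 42");
                }
            }

            @Test
            public void integration_fakeRequestTravelsThroughTheWholeStack() {
                // Integration level: routing, dispatch and response generation exercised together.
                HttpResponse response = Framework.handle(new FakeHttpRequest("GET", "/articles/42"));
                assertEquals(200, response.status);
                assertEquals("Article 42", response.body);
            }

            @Test
            public void unit_routerAloneResolvesTheRoute() {
                // Unit level: one class (the router) in isolation.
                assertEquals("ArticlesController#show", new Router().resolve("/articles/42"));
            }
        }

    Many teams keep both kinds in one test project but in separate namespaces or categories, so the fast unit suite can be run on its own.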

    Read the article

  • What web technology to use for a web app?

    - by Chris
    I want to get the opinions of the people of Stack Overflow. I am creating a web application that ideally will have some sort of desktop notification. I would love to do this in HTML5 but can't, as it needs to run on IE 8 and below. I have looked at Flex, but I'm not 100% sure how to achieve desktop notifications when running as a web app. Has anyone had this dilemma, or does anyone know of anything that would be the best fit? All opinions are welcome; they will help me out a lot.

    Read the article

  • What projects did you have on your CV when you got your first junior web developer job?

    - by CodeNoob
    What sort of projects should one have completed, and at what level/standard should these be, before one could justifiably start applying for junior web development jobs? I'm basically trying to find out exactly what other self-taught (front-end or back-end) web developers have done before they felt they had a realistic chance of getting their first junior development job. I'm hoping for more specific answers than 'I joined an open source project' or 'I did some freelance work'. What was the project? What tasks had you completed on this project?

    Read the article

  • How to learn iOS game development using Swift? Good starting point?

    - by hamobi
    I've published a simple app on the App Store using Objective-C. That was a good learning experience, but I never grew to love the language. Later on I jumped into learning cocos2d in order to begin developing a game, but Objective-C always seemed really cumbersome to write. Eventually I put my project aside. Now that Swift has come out, it has made me think about developing games again. I know that Xcode has some project types geared towards game development, but since I'm a beginner in this area I really need some hand-holding (books / tutorials) to get started. cocos2d seems like it's really stuck in that Objective-C world. What's the best way for a beginner to learn game development using Swift?

    Read the article

  • What is so good about Linux? [closed]

    - by Chris Bridgett
    Self-explanatory question title. I've only ever used Windows OSes (except Mac OS X at friends' places, occasionally, years ago), and when diving into the world of programming, Linux is a name that comes up in every other tutorial or article. All my web hosts run Linux, and a lot of programming literature covers how to go about various tasks on Linux as well as Windows, but other than the odd rave I read years ago about Linux being less resource-intensive, I haven't really given it much thought. In any article I read about whether Linux should be used for 'regular' use, it's shunned, since the Windows applications I'm familiar with usually require the Windows API and there's no end of 'hacking' to get various programs working on Linux. As far as I understand, a GUI is optional on Linux too? This all sounds very noobish I'm sure, but we all start somewhere, so: What is Linux good for? What should Linux be used for?

    Read the article

  • Unit testing: explicit tests for variable state in dynamically typed languages

    - by kris welsh
    I have heard that a desirable quality of unit tests is that they test for each scenario independently. I realised whilst writing tests today that when you compare a variable with another value in a statement like: assertEquals("foo", otherObject.stringFoo); you are really testing three things:
    - the variable you are testing exists and is within scope
    - the variable you are testing is of the expected type
    - the value of the variable you are testing is what you expect it to be
    Which to me raises the question of whether you should test for each of these explicitly, so that a test failure would occur on the specific line that tests for that problem:
        assertTrue(stringFoo);
        assertTrue(stringFoo.typeOf() == "String");
        assertEquals("foo", otherObject.stringFoo);
    For example, if the variable was an integer instead of a string, the test case failure would be on line 2, which would give you more feedback on what went wrong. Should you test for this kind of thing explicitly, or am I overthinking this?
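    A sketch of the explicit variant, JUnit-style in Java (the question's snippets look JUnit-like); the OtherObject class is a made-up stand-in, with its field typed as Object to mimic a dynamically typed variable:

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertNotNull;
        import static org.junit.Assert.assertTrue;
        import org.junit.Test;

        public class StringFooTest {

            static class OtherObject {
                Object stringFoo = "foo"; // in a dynamic language this could hold anything
            }

            @Test
            public void stringFooIsAsExpected() {
                OtherObject otherObject = new OtherObject();

                assertNotNull(otherObject.stringFoo);                // 1. it exists
                assertTrue(otherObject.stringFoo instanceof String); // 2. it has the expected type
                assertEquals("foo", otherObject.stringFoo);          // 3. it has the expected value
            }
        }

    Worth noting: a failing assertEquals in most frameworks already reports the expected and actual values (and usually makes a type mismatch obvious), so many people rely on that message rather than the extra assertions.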

    Read the article

  • How to create a Semantic Network like WordNet based on Wikipedia?

    - by Forbidden Overseer
    I am an undergraduate student and I have to create a Semantic Network based on Wikipedia. This Semantic Network would be similar to WordNet (except that it is based on Wikipedia and is concerned with "streams of text/topics" rather than simple words, etc.), and I am thinking of using the Wikipedia XML dumps for the purpose. I guess I need to learn XML parsing and "some other things" related to NLP and probably machine learning, but I am in no way sure about anything involved after the XML parsing. Is parsing the XML dump into text a good starting step? Any alternatives? What would be the steps involved after parsing the XML into text to create a functional Semantic Network? What are the things/concepts I should learn in order to do them? I am not directly asking for book recommendations, but if you have read a book/article that teaches anything related/helpful, please mention it. This may include a reference to already existing implementations regarding the subject. Please correct me if I was wrong somewhere. Thanks!
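    As a starting point, a minimal Java sketch of that first step, assuming the standard MediaWiki XML dump layout (each article in a <page> element with a <title> and a <revision><text>); it streams the dump with StAX so the whole file never has to fit in memory, and hands each article to whatever NLP step comes next:

        import java.io.FileInputStream;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        public class DumpReader {
            public static void main(String[] args) throws Exception {
                XMLInputFactory factory = XMLInputFactory.newInstance();
                XMLStreamReader reader =
                        factory.createXMLStreamReader(new FileInputStream(args[0]));

                String title = null;
                while (reader.hasNext()) {
                    if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                        String name = reader.getLocalName();
                        if (name.equals("title")) {
                            title = reader.getElementText();
                        } else if (name.equals("text")) {
                            process(title, reader.getElementText());
                        }
                    }
                }
                reader.close();
            }

            static void process(String title, String wikitext) {
                // Placeholder for the real work: strip wiki markup, extract terms and
                // links, and add nodes/edges to the semantic network.
                System.out.println(title + " (" + wikitext.length() + " chars)");
            }
        }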

    Read the article

  • MVC: Why put the business logic in the model? What happens when I have multiple types of storage?

    - by Steffen Winkler
    I always thought that the business logic has to be in the controller and that the controller, since it is the 'middle' part, stays static, while the model and view have to be encapsulated via interfaces; that way you could change the business logic without affecting anything else, program multiple models (one for each database/type of storage) and dozens of views (for different platforms, for example). Now I read in this question that you should always put the business logic into the model and that the controller is deeply connected with the view. To me, that doesn't really make sense, and it implies that each time I want to support another database/type of storage I have to rewrite my whole model, including the business logic. And if I want another view, I have to rewrite both the view and the controller. Can someone explain why that is, or where I went wrong? Currently, the whole thing doesn't really make sense to me.
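    A small sketch (hypothetical names, not from the question) of why business logic in the model doesn't force a rewrite per storage type: the rules live in the domain object, each storage backend is just another implementation of a repository interface, and the controller only coordinates (load, call the rule, save, choose a view):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Optional;

        class Account {
            private int balance;

            Account(int balance) { this.balance = balance; }

            // Business rule lives in the model, independent of storage and UI.
            void withdraw(int amount) {
                if (amount > balance) {
                    throw new IllegalStateException("insufficient funds");
                }
                balance -= amount;
            }

            int balance() { return balance; }
        }

        interface AccountRepository {
            Optional<Account> find(String id);
            void save(String id, Account account);
        }

        // One implementation per storage type; the model above never changes.
        class InMemoryAccountRepository implements AccountRepository {
            private final Map<String, Account> store = new HashMap<>();
            public Optional<Account> find(String id) { return Optional.ofNullable(store.get(id)); }
            public void save(String id, Account account) { store.put(id, account); }
        }
        // class JdbcAccountRepository implements AccountRepository { ... }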

    Read the article

  • Programming Practice/Test Contest?

    - by Emmanuel
    My situation: I'm on a programming team, and this year we want to weed out the weak link by making a competition to get the best coder from our group of candidates. The focus is on IEEEXtreme-like contests. What I've done: I've been trying for 2 weeks already to find a practice or test site, like UVa or CodeChef. The plan after I find one: send the candidates a list of direct links to the problems (making that the contest's problem list), get them to email me their correct answers' code at the time the judge says they have solved them, and accept the fastest one into the team. Issues: We had already practiced on UVa (and on Programming Challenges too), so our former teammate (who will be in the candidate group) already has an advantage if we used it. CodeChef has all its answers public, and since it shows the latest ones it will be extremely hard to verify whether an answer was copied. I've found other sites, like SPOJ, but they share at least some problems with CodeChef, so they inherit CodeChef's issue. So, what alternatives do you think there are? Any site that may work? Any place to get all the material to set up a Mooshak or similar contest (as in the material to get the problems; instructions to set up the server itself are easy to google)? Any other ideas?

    Read the article

  • Rules and advice for logging?

    - by Nick Rosencrantz
    In my organization we've put together some rules / guidelines about logging that I would like you to add to or comment on. We use Java, but you may comment on logging rules and advice in general.

    Use the correct logging level:
    - ERROR: Something has gone very wrong and needs fixing immediately.
    - WARNING: The process can continue without fixing. The application should tolerate this level, but the warning should always get investigated.
    - INFO: Information that an important process has finished.
    - DEBUG: Only used during development.

    Make sure that you know what you're logging. Avoid letting the logging influence the behavior of the application; the function of the logging should be to write messages in the log. Log messages should be descriptive, clear, short and concise. There is not much use for a nonsense message when troubleshooting.

    Put the right properties in log4j, so that the right method and class are written automatically. Example (dated file, web):

        log4j.rootLogger=ERROR, DATEDFILE
        log4j.logger.org.springframework=INFO
        log4j.logger.waffle=ERROR
        log4j.logger.se.prv=INFO
        log4j.logger.se.prv.common.mvc=INFO
        log4j.logger.se.prv.omklassning=DEBUG
        log4j.appender.DATEDFILE=biz.minaret.log4j.DatedFileAppender
        log4j.appender.DATEDFILE.layout=org.apache.log4j.PatternLayout
        log4j.appender.DATEDFILE.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p [%C{1}.%M] - %m%n
        log4j.appender.DATEDFILE.Prefix=omklassning.
        log4j.appender.DATEDFILE.Suffix=.log
        log4j.appender.DATEDFILE.Directory=//localhost/WebSphereLog/omklassning/

    Log values: please log values from the application.

    Log prefix: state which part of the application the logging is written from, preferably with a prefix agreed on for the project, e.g. PANDORA_DB.

    The amount of text: be careful that there is not too much logging text; it can influence the performance of the app.

    Logging format: there are several variants and methods to use with log4j, but we would like uniform use of the following format when we log exceptions:

        logger.error("PANDORA_DB2: Error fetching deadline in TP210_RAPPORTFRIST", e);

    In the example above it is assumed that we have set the log4j properties so that the class and the method are written automatically.

    Always use the logger and not the following: System.out.println(), System.err.println(), e.printStackTrace(). If the web app uses our framework you can get very detailed error information from EJB, if you use try-catch in the handler and log according to the model above.

    In our project we use a conversion pattern with which method and class names are written out automatically. Here we use two different patterns, one for the console and one for the DatedFileAppender:

        log4j.appender.CONSOLE.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
        log4j.appender.DATEDFILE.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

    In both examples above the method and class will be written out. In the console the row number will also be written out.

    toString(): please have a toString() for every object. For example:

        @Override
        public String toString() {
            StringBuilder sb = new StringBuilder();
            sb.append(" DwfInformation [ ");
            sb.append("cc: ").append(cc);
            sb.append("pn: ").append(pn);
            sb.append("kc: ").append(kc);
            sb.append("numberOfPages: ").append(numberOfPages);
            sb.append("publicationDate: ").append(publicationDate);
            sb.append("version: ").append(version);
            sb.append(" ]");
            return sb.toString();
        }

    instead of a special method which makes these outputs:

        public void printAll() {
            logger.info("inbet: " + getInbetInput());
            logger.info("betdat: " + betdat);
            logger.info("betid: " + betid);
            logger.info("send: " + send);
            logger.info("appr: " + appr);
            logger.info("rereg: " + rereg);
            logger.info("NY: " + ny);
            logger.info("CNT: " + cnt);
        }

    So is there anything you can add, comment on or find questionable in these ways of using the logging? Feel free to answer or comment even if it is not related to Java; Java and log4j are just one implementation of how this is reasoned.
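    For illustration, a minimal sketch of the agreed exception-logging format from the rules above (the handler, the DAO call and the message text are hypothetical): one logger per class, the project prefix in the message, and the exception passed as the second argument so the stack trace goes to the log rather than stdout:

        import org.apache.log4j.Logger;

        public class RapportfristHandler {

            private static final Logger logger = Logger.getLogger(RapportfristHandler.class);

            public void updateRapportfrist(long id) {
                try {
                    // dao.fetchFrist(id); ... business call that may fail
                } catch (Exception e) {
                    // Never System.out.println / e.printStackTrace(); always the logger.
                    logger.error("PANDORA_DB2: error fetching deadline in TP210_RAPPORTFRIST, id=" + id, e);
                    throw new RuntimeException(e);
                }
            }
        }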

    Read the article

  • SO-overflow-induced passivity - how to cope?

    - by Ruben
    After not really working on my pet project for a while, I discovered Stack Overflow, and upon perusing it more intensely I was quite amazed. I'm a bit of a perfectionist, so when I found eye-openers here highlighting many of the mistakes I made, I first wanted to fix everything. However, it's a pet project for a reason: I'm self-taught and I'm studying psychology, so programming skills can never become priority one (though it often helps, even in this field). Issues that stuck out were:
    - numerous security issues (e.g. CSRF prevention and bcrypt eluded me)
    - not object-oriented (at least the PHP part; the JS part mostly is)
    - no PHP framework used, so many of my DIY takes on commonly tackled components (auth, ...) are either bad or inefficient
    - really poor MySQL usage (no prepared statements, the mysql extension, heard about setting proper indices two days ago)
    - using MooTools even though jQuery seems to be fashionable, so there's probably always going to be better integration with services I'd like to use (like Google Visualization)
    So, my SO-induced frenzy turned into passivity. I can't do it all (soon) in the rather small amount of spare time I can spend working on my project. I can leave some of the issues be in good conscience (speed stuff: an unfinished and unpublished project will never become popular, right?). There's no clear conscience without good security though, and if I don't use a framework for auth and other complex stuff I'll regret having to do it myself. One obvious answer would probably be going open source, but I think the project would need to become more impressive before others would commit to it. I can't afford to employ someone either. I do think the project deserves being worked on, though. How should I tackle it anyway? What's the best practice for little-practice people?

    Read the article

  • Need clarification concerning Windows Azure

    - by SnOrfus
    I basically need some confirmation and clarification concerning Windows Azure with respect to a Silverlight application using RIA Services. In a normal Silverlight app that uses RIA Services you have two projects:
    - App
    - App.Web
    where App is the default client-side Silverlight and App.Web is the server-side code where your RIA services go. If you create a Windows Azure app and add a WCF Web Services Role, you get:
    - App (Azure project)
    - App.Services (WCF Services project)
    In App.Services, you add your RIA DomainService(s). You would then add another project to this solution that would be the client-side Silverlight that accesses the RIA Services in the App.Services project. You can then add the entity model to App.Services, or to another project that is referenced by App.Services (if that division is required for unit testing etc.), and connect that entity model to either a SQL Server db or a SQL Azure instance. Is this correct? If not, what is the general 'layout' for building an application with the following tiers:
    - UI (Silverlight 4)
    - Services (RIA Services)
    - Entity/Domain (EF 4)
    - Data (SQL Server)

    Read the article

  • Gradual approaches to dependency injection

    - by JW01
    I'm working on making my classes unit-testable, using dependency injection. But some of these classes have a lot of clients, and I'm not ready to refactor all of them to start passing in the dependencies yet. So I'm trying to do it gradually: keeping the default dependencies for now, but allowing them to be overridden for testing. One approach I'm considering is just moving all the "new" calls into their own methods, e.g.: public MyObject createMyObject(args) { return new MyObject(args); } Then in my unit tests, I can just subclass this class and override the create functions, so they create fake objects instead. Is this a good approach? Are there any disadvantages? More generally, is it okay to have hard-coded dependencies, as long as you can replace them for testing? I know the preferred approach is to explicitly require them in the constructor, and I'd like to get there eventually. But I'm wondering if this is a good first step.
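    A minimal sketch of that first step (hypothetical names): the production class keeps its default dependency but obtains it through an overridable factory method, and the test subclass swaps in a fake:

        interface PaymentGateway {
            void charge(String orderId);
        }

        class RealPaymentGateway implements PaymentGateway {
            public void charge(String orderId) { /* talks to the real service */ }
        }

        class OrderProcessor {
            public void process(String orderId) {
                // Dependency is obtained through an overridable factory method (the seam).
                createPaymentGateway().charge(orderId);
            }

            // Default, hard-coded dependency; tests override this instead of injecting.
            protected PaymentGateway createPaymentGateway() {
                return new RealPaymentGateway();
            }
        }

        // In the unit tests: subclass and swap in a fake.
        class FakePaymentGateway implements PaymentGateway {
            int charges = 0;
            public void charge(String orderId) { charges++; }
        }

        class TestableOrderProcessor extends OrderProcessor {
            final FakePaymentGateway fake = new FakePaymentGateway();
            @Override protected PaymentGateway createPaymentGateway() { return fake; }
        }

    Once most clients have been migrated, the factory method can be replaced by a constructor parameter.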

    Read the article

  • Methods of learning / teaching programming

    - by Mark Avenius
    When I was in school, I had a difficult time getting into programming because of a catch-22 in the learning process: I didn't know how to write anything because I didn't know what keywords and commands meant (for example, as a student I would think, "what does this using namespace std; thing do anyway?"), and I didn't know what keywords and commands meant because I hadn't written anything. This basically led me to spending countless long nights cursing the compiler as I made minor tweaks to my assignments until they would compile (and hopefully perform whatever operation they were supposed to). Is there a teaching/learning method that anyone uses that gets around this catch-22? I am trying to make this non-argumentative, which is why I don't want to know the 'best' method, but rather which methods exist.

    Read the article

  • Do you think natively compiled languages have reached their EOL?

    - by Yuval A
    If we look at the major programming languages in use today it is pretty noticeable that the vast majority of them are, in fact, interpreted. Looking at the largest piece of the pie we have Java and C# which are both enterprise-ready, heavy-duty, serious programming languages which are basically compiled to byte-code only to be interpreted by their respective VMs (the JVM and the CLR). If we look at scripting languages, we have Perl, Python, Ruby and Lua which are all interpreted (either from code or from bytecode - and yes, it should be noted that they are absolutely not the same). Looking at compiled languages we have C which is nowadays used in embedded and low-level, real-time environments, and C++ which is still alive and kicking, when you want to get down to serious programming as close to the hardware as you can, but still have some nice abstractions to help you with day to day tasks. Basically, there is no real runner-up compiled language in the distance. Do you feel that languages which are natively compiled to executable, binary code are a thing of the past, taken over by interpreted languages which are much more portable and compatible? Does C++ mark an end of an era? Why don't we see any new compiled languages anymore? I think I should clarify: I do not want this to turn into a "which language is better" discussion, because that is not the issue at hand. The languages I gave as example are only examples. Please focus on the question I raised, and if you disagree with my statement that compiled languages are less frequent these days, that is totally fine, I am more than happy to be proved mistaken.

    Read the article

  • Are Language Comparisons Meaningful?

    - by Prasoon Saurav
    Dr Bjarne Stroustrup in his book D&E says Several reviewers asked me to compare C++ to other languages. This I have decided against doing. Thereby, I have reaffirmed a long-standing and strongly held view: "Language comparisons are rarely meaningful and even less often fair" . A good comparison of major programming languages requires more effort than most people are willing to spend, experience in a wide range of application areas, a rigid maintenance of a detached and impartial point of view, and a sense of fairness. I do not have the time, and as the designer of C++, my impartiality would never be fully credible. -- The Design and Evolution of C++ (Bjarne Stroustrup) Do you agree with his statement that "Language comparisons are rarely meaningful and even less often fair"? Personally, I think that comparing a language X with Y makes sense because it gives many more reasons to love/despise X/Y :-P What do you think?

    Read the article

  • Newbie worried about CASE tool.

    - by Jason Evans
    Hi there. I'm looking for some guidance on CASE tools and whether my concerns are valid. Recently I was in a meeting between my employer and an external software company which has a CASE tool currently in beta. They demonstrated this tool to us, showing how you build a UML model in Enterprise Architect (or something like it) and then, through their tool, that UML model is transformed into a Visual Studio project, with C# files, stored procedures for SQL Server, code for the data layer, WCF stuff, logging code and all sorts. Now, admittedly, I don't see the point in this, as in I'm not convinced it will save that much time (plus it feels like overkill). The tool authors said that a trial of the tool at another company had saved a team there 5 weeks of development time (from 6 weeks down to about 1 week). I find the accuracy of that estimate hard to believe. My main concern is whether using this tool is going to slow down my productivity. For example, say I have a UML model from which I built a VS solution. Now I want to rename a class method to something else; will this mean having to update the UML model first and then rebuilding the code? Is this how CASE tools normally work? Something I will need to check with the authors is the structure of the generated VS solution. I like the Domain Driven Design way of project structure - Infrastructure, Services, Model, etc. I doubt very much this tool will do that. Also, I've been playing around with Entity Framework Code First and think it's a great way to build the data model. I have nice repositories, unit of work classes and other design patterns that work well with EF. I have data annotations and stuff like that working great. By not having EF (the CASE tool uses its own data layer code) I'm concerned that this tool's data layer code might not be as nice to integrate into the UoW pattern, repositories, etc. This I will need to verify when I get a closer look at the generated code. What are other people's experiences with CASE tools? Am I being paranoid about nothing? Am I being unfair - are my negative impressions unfounded? EDIT: I like to use TDD/BDD for building my code, and using a CASE tool looks like it will make this difficult. Again, any feedback on this would be great. Cheers. Jas.

    Read the article

  • Au revoir, Python?

    - by GuySmiley
    I'm an ex-C++ programmer who's recently discovered (and fallen head-over-heels for) Python. I've taken some time to become reasonably fluent in Python, but I've encountered some troubling realities that may lead me to drop it as my language of choice, at least for the time being. I'm writing this in the hope that someone out there can talk me out of it by convincing me that my concerns are easily circumvented within the bounds of the Python universe. I picked up Python while looking for a single flexible language that will allow me to build end-to-end working systems quickly on a variety of platforms. These include:
    - web services
    - mobile apps
    - cross-platform client apps for PC
    Development speed is more of a priority for the time being than execution speed. However, in order to improve performance over time without requiring major rewrites or architectural changes, I think it's imperative to be able to interface easily with Java. That way, I can use Java to optimize specific components as the application scales, without throwing away any code. As far as I can tell, my requirement for an enterprise-capable, platform-independent, fast language with a large developer base means it would have to be Java. .NET or C++ would not cut it due to their respective limitations. Also, Java is clearly de rigueur for most mobile platforms. Unfortunately, tragically, there doesn't seem to be a good way to meet all these demands. Jython seems to be what I'm looking for in principle, except that it appears to be practically dead, with no one developing, supporting, or using it to any great degree. And Jython also seems too married to the Java libraries, as you can't use many of the CPython standard libraries with it, which has a major impact on the code you end up writing. The only other option that I can see is to use JPype wrapped in marshalling classes, which may work, although it seems like a pain and I wonder if it would be worth it in the long run. On the other hand, everything I'm looking for seems to be readily available by using JRuby, which seems to be much better supported. As things stand, I think this is my best option. I'm sad about this because I absolutely love everything about Python, including the syntax. The Perl-like constructs in Ruby just feel like such a step backwards to me in terms of readability, but at the end of the day most of the benefits of Python are available in Ruby as well. So I ask you - am I missing something here? Much of what I've said is based on what I've read, so is this summary of the current landscape accurate, or is there some magical solution to the Python-Java divide that will snuff out these concerns and allow me to comfortably stay in my happy Python place?

    Read the article

  • Small projects using the cathedral model: does open-source lower security?

    - by Anto
    We know of Linus' law: With enough eyeballs all bugs are shallow In general, people seem to say that open-source software is more secure because of that very thing, but... There are many small OSS projects with just 1 or 2 developers (the cathedral model, as described by ESR). For these projects, does releasing the source-code actually lower the security? For projects like the Linux kernel there are thousands of developers and security vulnerabilities are quite likely going to be found, but when just some few people look through the source code, while allowing crackers (black hat hackers) to see the source as well, is the security lowered instead of increased? I know that the security advantage closed-source software has over OSS is security through obscurity, which isn't good (at all), but it could help to some degree, at least by giving those few devs some more time (security through obscurity doesn't help with the if but with the when). EDIT: The question isn't whether OSS is more secure than non-OSS software but if the advantages for crackers are greater than the advantages for the developers who want to prevent security vulnerabilities from being exploited.

    Read the article

  • Practical Meta Programming System (MPS)

    - by INTPnerd
    This is in regards to Meta Programming System or MPS by JetBrains. Thus far, from my efforts to learn how to use MPS, I have only learned its basic purpose and that it is very complex. Is MPS worth learning? Is there anyone who already effectively uses MPS to create their own languages and editors for those languages and uses these created editors as their primary way of programming? If so, what types of programs have they made with this? What are the advantages and disadvantages of working with MPS? What is the best way to learn MPS?

    Read the article

  • Learning curve regarding the transition from Windows to Linux from a Java developer perspective [closed]

    - by Geek
    I am a Java developer who has worked on the Windows platform all along. Now I have changed jobs, and my new job requires me to do development work in a Red Hat Linux environment. The IDE they use is JDeveloper. I do not have any prior experience with Linux or JDeveloper. So what suggestions would you give me so that I can have a smooth and incremental transition from Windows to Linux? I do not want to short-circuit my learning curve; I want to learn it the correct way. Any suggestions regarding good books, links etc. that will help me get started are welcome.

    Read the article

  • Can an aggregate root hold references to members of another aggregate root?

    - by Rushino
    Hello, I know outside aggregates can't change anything inside an aggregate without going through its root. That said, I would like to know whether an aggregate root can hold references to members (objects inside) of another aggregate root (following DDD rules). Example: a Calendar contains a list of phases, which contain a list of sequences, which contain a list of assignations. Calendar is the root because phases, sequences and assignations only make sense in the context of a calendar. You also have Students and groups of students (called Groups). Is it possible (following DDD rules) for Groups to hold references to assignations, or does one need to go through the root to access assignations from groups? Thanks.
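    One common DDD answer, sketched below with the question's names (and flattening the phase/sequence nesting for brevity, so this is an assumption about the model, not the asker's actual code): an aggregate may hold only the identity of something inside another aggregate, never a direct object reference; reads and changes still go through the owning root.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        class AssignationId {
            final String value;
            AssignationId(String value) { this.value = value; }
        }

        class Assignation {
            final AssignationId id;
            Assignation(AssignationId id) { this.id = id; }
        }

        class Calendar {
            // The root owns its assignations; all lookups and changes go through it.
            private final Map<String, Assignation> assignations = new HashMap<>();

            void addAssignation(Assignation a) { assignations.put(a.id.value, a); }

            Assignation assignation(AssignationId id) { return assignations.get(id.value); }
        }

        class Group {
            // The other aggregate is referenced by identity only, never by object.
            private final List<AssignationId> assignationIds = new ArrayList<>();

            void assign(AssignationId id) { assignationIds.add(id); }

            List<AssignationId> assignationIds() { return assignationIds; }
        }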

    Read the article

  • When do you call yourself a programmer?

    - by benhowdle89
    "A programmer, computer programmer or coder is someone who writes computer software" from Wikipedia If you do frontend development using jQuery/CSS/HTML do you call yourself a programmer? If you develop PHP applications that deal with databases, do you call yourself a programmer? Are you only a programmer if you write applications for desktops and mobiles? Is the web a place where the line between developer and programmer stops?

    Read the article

  • Should I choose Doctrine 2 or Propel 1.5/1.6, and why?

    - by Billy ONeal
    I'd like to hear from those who have used Doctrine 2 (or later) and Propel 1.5 (or later). Most comparisons between these two object relational mappers are based on old versions -- Doctrine 1 versus Propel 1.3/1.4 -- and both ORMs went through significant redesigns in their recent revisions. For example, most of the criticism of Propel seems to center around the "ModelName Peer" classes, which are deprecated in 1.5 in any case. Here's what I've accumulated so far (and I've tried to make this list as balanced as possible...):

    Propel pros:
    - Extremely IDE friendly, because actual code is generated instead of relying on PHP magic methods. This means IDE features like code completion are actually helpful.
    - Fast (in terms of database usage -- no runtime introspection is done on the database).
    - Clean migration between schema versions (at least in the 1.6 beta).
    - Can generate PHP 5.3 models (i.e. namespaces).
    - Easy to chain a lot of things into a single database query with things like useXxx methods (see the "code completion" video above).

    Propel cons:
    - Requires an extra build step, namely building the model classes. Generated code needs rebuilding whenever the Propel version is changed, a setting is changed, or the schema changes. This might be unintuitive to some, and custom methods applied to the model are lost. (I think?)
    - Some useful features (i.e. version behavior, schema migrations) are in beta status.

    Doctrine pros:
    - More popular.
    - Doctrine Query Language can express potentially more complicated relationships between data than is easily possible with Propel's ActiveRecord strategy.
    - Easier to add reusable behaviors when compared with Propel.
    - DocBlock-based commenting for building the schema is embedded in the actual PHP instead of a separate XML file.
    - Uses PHP 5.3 namespaces everywhere.

    Doctrine cons:
    - Requires learning an entirely new programming language (Doctrine Query Language).
    - Implemented in terms of "magic methods" in several places, making IDE autocomplete worthless.
    - Requires database introspection and thus is slightly slower than Propel by default; caching can remove this, but the caching adds considerable complexity.
    - Fewer behaviors are included in the core codebase. Several features Propel provides out of the box (such as Nested Set) are available only through extensions.
    - Freakin' HUGE :)

    I have gleaned this, though, only through reading the documentation available for both tools -- I've not actually built anything yet. I'd like to hear from those who have used both tools, to share their experience on the pros/cons of each library, and what their recommendation is at this point :)

    Read the article

  • Logical progressions through the job market

    - by Philluminati
    I'm 5 years out of an unrecognised university where I did Software Engineering. My first job was VB.NET; one job was Python, Linux and web development. I feel cast as a web developer. I'd love a role doing C, but no one is interested in juniors if the applicant hasn't already got 3 years of C development experience. I've done some C and a drop of open source coding, but I'll never have the confidence to convince someone I know absolutely what I'm doing. Do I just spend more and more time letting life pass me by as I sit in my room on a Friday night writing a C problem "for the sake of learning more C"? Basically, I'm just not sure I want to continue my career if it's going to involve nothing but high-level, machine-abstracted business logic. As interested as I am in low-level development, and as much as I enjoy reading books by Tanenbaum, I struggle to see how I can make the jump, and I just feel life would be easier if I got a job in a cafe in Amsterdam rolling spliffs for customers. My ideal job, being a paid member of the Fedora development team, seems so far away, without anyone to pay me to learn the skills to get there, and the only way would be to literally spend weeks and weeks of my life contributing code for free, without recognition and without any guarantees at the end. Not that I've contributed anything at all so far. Are there any career paths that are logically set out so that jumping between roles is "correctly" incremental, and where hard work and learning do eventually lead to the kind of places I might want to go? [ and also getting paid at the same time? ]

    Read the article
