Search Results

Search found 11553 results on 463 pages for 'bad programmer'.


  • what languages are good selling points on resume? [closed]

    - by Thomas Galvin
    I have a good amount of experience with C# and Java at the moment, but after my education and whatnot I wish to be capable in more than just two high-level, comparatively limited languages, and from what I've seen, languages like C(++) or PHP are in demand at the moment. I've thought about learning the following:

    C. Very standard, lightweight, and available on everything. However, it is very old and mostly procedural.

    C++. Standard like C, but I've read in some places that it encourages bad program design and the use of dodgy libraries - though similar things have been said about C too, so I'll take that with a grain of salt.

    D. Quite new but looks promising; will it be relevant or applicable in the future, though?

    PHP. With the internet becoming ever more important, I think this might be the one to go with, but the code itself isn't very intuitive.

    CoffeeScript (or plain JavaScript). With Microsoft's new idea of HTML5+JS for everything under the sun, this doesn't look like a bad choice. However, things do change, and I wish to be primarily a software dev, not a web dev.

    So out of the above list, or any others that you could suggest, what would you say I should begin to focus on? What is your opinion on staying with C#?

    Read the article

  • Companies and Ships

    - by TechnicalWriting
    I have worked for small, medium, large, and extra large companies, and they have something in common with ships. These metaphors have been used before, I know, but I will have a go at them.

    The small company is like a speed boat, exciting and fast, and can turn on a dime, literally. Captain and crew share a lot of the work. A speed boat has a short range and needs to refuel a lot. It has difficulty getting through bad weather. (Small companies often live quarter to quarter. By the way, if a larger company is living quarter to quarter, it is taking on water.)

    The medium company is like a battleship. It can maneuver, has a longer range, and the crew is focused on its mission. Its main concern is the other battleships trying to blow it out of the water, but it can respond quickly. Bad weather can jostle it, but it can get through most storms.

    The large company is like an aircraft carrier; a floating city. It is well-provisioned and can carry a specialized load for a very long range. Because of its size and complexity, it has to be well-organized to be effective, and most of its functions are specialized (with little to no functional cross-over). There are many divisions and layers between Captain and crew. It is not very maneuverable; it has to set its course well in advance and have a plan of action.

    The extra large company is like a cruise liner. It also has to be well-organized, and changes in direction are often slow. Some of the people are hard at work behind the scenes to run the ship; others can be along for the ride. They sail the same routes over and over again (often happily), with the occasional cosmetic face-lift to the ship and entertainment. It should stay in warm, friendly waters and avoid risky speed through fields of icebergs.

    I have enjoyed my career on the various Ships of Technical Writing, but I get most of my juice from the battleship, where I am closer to the campaign and my contributions have the greater impact on success.

    Mark Metcalfe
    www.linkedin.com/in/MarkMetcalfe

    Read the article

  • Is version history really sacred or is it better to rebase?

    - by dukeofgaming
    I've always agreed with Mercurial's mantra that history is sacred; however, now that Mercurial comes bundled with the rebase extension, and rebasing is a popular practice in git, I'm wondering if it can really be regarded as a "bad practice", or at least bad enough to avoid using. In any case, I'm aware of rebasing being dangerous after pushing.

    On the other hand, I see the point of trying to package 5 commits into a single one to make it look niftier (especially in a production branch). Personally, though, I think it would be better to be able to see the partial commits to a feature where some experimentation was done, even if it is not as nifty. Seeing something like "Tried to do it way X but it is not as optimal as Y after all, doing it Z taking Y as base" would, IMHO, have good value to those studying the codebase and following the developers' train of thought.

    My very opinionated (as in dumb, visceral, biased) point of view is that programmers like rebase to hide mistakes... and I don't think this is good for the project at all. So my question is: have you really found it valuable to have such "organic commits" (i.e. untampered history) in practice? Or, conversely, do you prefer to run into nifty, well-packed commits and disregard the programmers' experimentation process? Whichever one you chose, why does that work for you? (Having other team members keep history, or alternatively, rebasing it.)

    Read the article

  • What's wrong with circular references?

    - by dash-tom-bang
    I was involved in a programming discussion today where I made some statements that basically assumed, axiomatically, that circular references (between modules, classes, whatever) are generally bad. Once I got through with my pitch, my coworker asked, "What's wrong with circular references?" I've got strong feelings on this, but it's hard for me to verbalize them concisely and concretely. Any explanation that I may come up with tends to rely on other items that I also consider axioms ("can't use in isolation, so can't test", "unknown/undefined behavior as state mutates in the participating objects", etc.), but I'd love to hear a concise reason why circular references are bad that doesn't take the kinds of leaps of faith that my own brain does, having spent many hours over the years untangling them to understand, fix, and extend various bits of code. Edit: I am not asking about homogeneous circular references, like those in a doubly-linked list or a pointer-to-parent. This question is really asking about "larger scope" circular references, like libA calling libB which calls back into libA. Substitute 'module' for 'lib' if you like. Thanks for all of the answers so far!
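
    To make the "larger scope" case concrete, here is a minimal sketch (my own illustration, with hypothetical class names, not code from the discussion) of the kind of cycle in question; notice that neither class can be constructed, loaded, or unit-tested without dragging in the other:

        <?php
        // Hypothetical example: OrderService and InvoiceService each depend
        // on the other, so neither can be used or tested in isolation.
        class OrderService
        {
            private $invoices;

            public function __construct()
            {
                // Constructing an OrderService forces an InvoiceService to exist.
                $this->invoices = new InvoiceService($this);
            }

            public function close($orderId)
            {
                $this->invoices->issueFor($orderId);
            }

            public function reopen($orderId)
            {
                // Called back from InvoiceService: the cycle means a change
                // to either class can silently break the other.
            }
        }

        class InvoiceService
        {
            private $orders;

            public function __construct(OrderService $orders)
            {
                $this->orders = $orders;
            }

            public function issueFor($orderId)
            {
                $issued = false; // pretend invoicing failed
                if (!$issued) {
                    $this->orders->reopen($orderId); // reaches back into the caller
                }
            }
        }

    Breaking a cycle like this usually means extracting an interface or an event so the dependency points one way only.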

    Read the article

  • Which things instantly ring alarm bells when looking at code? [closed]

    - by FinnNk
    I attended a software craftsmanship event a couple of weeks ago, and one of the comments made was "I'm sure we all recognize bad code when we see it", and everyone nodded sagely without further discussion. This sort of thing always worries me, as there's that truism that everyone thinks they're an above-average driver. Although I think I can recognize bad code, I'd love to learn more about what other people consider to be code smells, as it's rarely discussed in detail on people's blogs and only in a handful of books. In particular, I think it'd be interesting to hear about anything that's a code smell in one language but not another.

    I'll start off with an easy one: code in source control that has a high proportion of commented-out code. Why is it there? Was it meant to be deleted? Is it a half-finished piece of work? Maybe it shouldn't have been commented out and was only done while someone was testing something out? Personally I find this sort of thing really annoying, even if it's just the odd line here and there; but when you see large blocks interspersed with the rest of the code, it's totally unacceptable. It's also usually an indication that the rest of the code is likely to be of dubious quality as well.

    Read the article

  • Graphics hardware warning when updating to 14.04

    - by pacomet
    As I use Ubuntu at work, I only update to LTS versions, but now I'm not sure if I can/should. My computer is now ten years old; I would replace it if it were mine, but as it is owned by my employer I have to work with it. It's not a bad one, and it runs fine (this was not true when it still had Windows on it ;-). When updating to 14.04, the updater warns about possible bad/slow performance with Unity 3D, so I stopped updating, since I am at work and it is not my own computer. As I understand from http://askubuntu.com/a/438958/25305, the Nvidia GeForce FX 5500 graphics card is still supported in 14.04. Now, in 12.04, I have driver version 173, and Unity 2D runs fine for me.

    Output of /usr/lib/nux/unity_support_test -p:

        OpenGL vendor string: NVIDIA Corporation
        OpenGL renderer string: GeForce FX 5500/AGP/SSE2
        OpenGL version string: 2.1.2 NVIDIA 173.14.39
        Not software rendered: yes
        Not blacklisted: no
        GLX fbconfig: yes
        GLX texture from pixmap: yes
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: no

    Should I update? Is it better to stay with 12.04? Thanks

    Read the article

  • how to fully unit test functions and their internal validation

    - by Patrick
    I am just now getting into formal unit testing and have come across an issue in testing separate internal parts of functions. I have created a base class for data manipulation (i.e. moving files, chmodding files, etc.), and in moveFile() I have multiple levels of validation to pinpoint when a moveFile() fails (i.e. source file not readable, destination not writeable). I can't seem to figure out how to force a couple of particular validations to fail without tripping the previous validations. Example: I want the copying of a file to fail, but by the time I've gotten to the actual copying, I've checked for everything that can go wrong before copying.

    Code snippet (bad code on the fifth line...):

        // if the change permissions is set, change the file permissions
        if($chmod !== null) {
            $mod_result = chmod($destination_directory.DIRECTORY_SEPARATOR.$new_filename, $chmod);
            if($mod_result === false ||
               $source_directory.DIRECTORY_SEPARATOR.$source_filename == '/home/k...../file_chmod_failed.qif') {
                DataMan::logRawMessage('File permissions update failed on moveFile [ERR0009] - ['.$destination_directory.DIRECTORY_SEPARATOR.$new_filename.' - '.$chmod.']', sfLogger::ALERT);
                return array('success' => false, 'type' => 'Internal Server Error [ERR0009]');
            }
        }

    So how do I simulate the copy failing? My stop-gap measure was to perform a validation on the filename being copied and, if its absolute path matched my testing file, force the failure. I know it is very bad to put testing code into the actual code that will run on the production server, but I'm not sure how else to do it. Note: I am on PHP 5.2, symfony, using lime_test(). EDIT: I am testing the chmodding and ensuring that array('success' => false, 'type' => ...) is returned.
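
    For what it's worth, one standard way out of this, sketched below with hypothetical names (and reshaping DataMan into an instance class rather than the asker's static one), is to route the filesystem calls through a small injectable wrapper, so a test can substitute a stub that fails on demand and the magic-filename check disappears from production code:

        <?php
        // A thin seam around the native filesystem calls.
        interface FileSystem
        {
            public function copy($source, $dest);
            public function chmod($path, $mode);
        }

        // Production implementation: delegates straight to PHP's built-ins.
        class NativeFileSystem implements FileSystem
        {
            public function copy($source, $dest) { return copy($source, $dest); }
            public function chmod($path, $mode) { return chmod($path, $mode); }
        }

        // Test double: behaves normally except that chmod always fails.
        class FailingChmodFileSystem extends NativeFileSystem
        {
            public function chmod($path, $mode) { return false; }
        }

        // Hypothetical reshaped DataMan that receives its filesystem.
        class DataMan
        {
            private $fs;

            public function __construct(FileSystem $fs)
            {
                $this->fs = $fs;
            }

            public function moveFile($source, $dest, $chmod = null)
            {
                if (!$this->fs->copy($source, $dest)) {
                    return array('success' => false, 'type' => 'Copy failed');
                }
                if ($chmod !== null && !$this->fs->chmod($dest, $chmod)) {
                    return array('success' => false, 'type' => 'Internal Server Error [ERR0009]');
                }
                return array('success' => true);
            }
        }

        // In a lime test: force the chmod branch without magic filenames.
        // $dm = new DataMan(new FailingChmodFileSystem());
        // $result = $dm->moveFile('/tmp/a.qif', '/tmp/b.qif', 0644);
        // $t->is($result['success'], false, 'chmod failure is reported');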

    Read the article

  • Ternary and Artificial Intelligence

    - by user2957844
    Not much of a programmer myself; however, I have been thinking about the future of AI. If a fully functional AI is programmed in a binary environment, as is used in current computing, would that create a bit of a black-and-white personality? As in just yes/no, on/off, 1/0? I will use the Skynet computer from the Terminator series as a bad analogy: it is brought online and comes to the conclusion that humanity should just be destroyed so the problem is resolved; basically its only options were to fire the missiles or not. (The films do not really go into what its moves would have been after doing such a thing, but that goes into the realm of AI evolution, so it does not really fit with this question.) It may also have been badly programmed.

    Now, the human mind has been likened to a ternary system, which allows our "out of the box" thinking along with all the other wonderful things our minds can do. So, would it not be more prudent to create a functional ternary system and program an AI using it, so the resulting personality would be able to benefit from the third "maybe" (so to speak) option? I understand that in binary there are ways to get around the whole yes/no etc. way of things; however, the basic operations are still just 1s and 0s. Again using the above bad Skynet analogy: if it had had that third "maybe" option as part of its core system, it might have decided not to launch, due to being able to make sense of the intricacies of human nature and the politics of such a move, etc.

    In effect, my question is: Would an AI benefit more from ternary computing as opposed to binary, due to the inclusion of -1, or 2, dependent on the system ("maybe," as I call it)?
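
    To make the "-1, 0, +1" digit set concrete, here is a small sketch (my own illustration, not from the question) that prints an integer in balanced ternary, the representation usually meant when people talk about a trit being "yes/no/maybe":

        <?php
        // Print an integer in balanced ternary, where each trit is
        // -1 (written T), 0, or +1.
        function toBalancedTernary($n)
        {
            if ($n === 0) {
                return '0';
            }
            $digits = '';
            while ($n !== 0) {
                $r = $n % 3;
                if ($r < 0) {
                    $r += 3; // normalize PHP's remainder for negative $n
                }
                if ($r === 2) {           // digit -1, carry one upward
                    $digits = 'T' . $digits;
                    $n = (int)(($n + 1) / 3);
                } else {                  // digit 0 or +1
                    $digits = $r . $digits;
                    $n = (int)(($n - $r) / 3);
                }
            }
            return $digits;
        }

        echo toBalancedTernary(5), "\n";  // 1TT  (9 - 3 - 1)
        echo toBalancedTernary(-8), "\n"; // T01  (-9 + 0 + 1)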

    Read the article

  • recovering raid 0 hard disk

    - by Hiawatha
    I bumped into a huge (for me) problem. I was running a dual boot system (Win 7 / Linux) and at some point I decided to test Fedora (I am new to Linux). My hard disk configuration: 3 hard disks of 1 TB each, 2 set to RAID 0 with Windows running on it and 1 for Linux. After installing Fedora from a live USB, I found out that Windows 7 is not in GRUB anymore, and while booting it shows a RAID error. I installed Ubuntu back and ran Disk Utility to check; now one of the disks in the RAID 0 set has a failed (READ) error. The first has 5 bad sectors and the second has 1 bad sector. Now I don't know what to do or how to repair it; furthermore, I don't know which data I could provide to get help. I tried ntfsfix and got this output:

        Mounting volume... NTFS signature is missing. FAILED
        Attempting to correct errors... NTFS signature is missing. FAILED
        Failed to startup volume: Invalid argument
        NTFS signature is missing.
        Trying the alternate boot sector
        Unrecoverable error
        Volume is corrupt. You should run chkdsk.

        # sudo ntfs-3g -o force,rw /dev/sdb /media/windows
        NTFS signature is missing.
        Failed to mount '/dev/sdb': Invalid argument
        The device '/dev/sdb' doesn't seem to have a valid NTFS.
        Maybe the wrong device is used? Or the whole disk instead of a
        partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

    Read the article

  • What is a good way to refactor a large, terribly written code base by myself? [closed]

    - by AgentKC
    Possible Duplicate: Techniques to re-factor garbage and maintain sanity? I have a fairly large PHP code base that I have been writing for the past 3 years. The problem is, I wrote this code when I was a terrible programmer and now it's tens of thousands of lines of conditionals and random MySQL queries everywhere. As you can imagine, there are a ton of bugs and they are extremely hard to find and fix. So I would like a good method to refactor this code so that it is much more manageable. The source code is quite bad; I did not even use classes or functions when I originally wrote it. At this point, I am considering rewriting the whole thing. I am the only developer and my time is pretty limited. I would like to get this done as quickly as possible, so I can get back to writing new features. Since rewriting the code would take a long time, I am looking for some methods that I can use to clean up the code as quickly as possible without leaving more bad architecture that will come back to haunt me later. So this is the basic question: What is a good way for a single developer to take a fairly large code base that has no architecture and refactor it into something with reasonable architecture that is not a nightmare to maintain and expand?
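
    One low-risk way to start, as a sketch under assumptions (the class, table, and column names here are hypothetical, not from the question): leave the old page scripts running, but corral each raw MySQL query you touch behind a small class, so structure accumulates one bug fix at a time instead of in a big-bang rewrite:

        <?php
        // Hypothetical first step: move inline queries behind one class.
        // Old pages keep working; each bug fix migrates one query in here.
        class UserRepository
        {
            private $db;

            public function __construct(PDO $db)
            {
                $this->db = $db;
            }

            // Replaces a raw "SELECT ... WHERE email = '$email'" block that
            // was pasted into several page scripts (and removes the
            // injection risk as a side effect).
            public function findByEmail($email)
            {
                $stmt = $this->db->prepare(
                    'SELECT id, name, email FROM users WHERE email = ?'
                );
                $stmt->execute(array($email));
                return $stmt->fetch(PDO::FETCH_ASSOC);
            }
        }

        // In a legacy page, the old inline query block shrinks to:
        // $users = new UserRepository($pdo);
        // $user  = $users->findByEmail($_GET['email']);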

    Read the article

  • Which is the best way to catch an expiring domain name? [closed]

    - by newspeak
    I know a similar question has been asked, but I really don't know what to do. There is a .com domain which is currently in its redemption period and should become available again within a month. I was wondering what the best way is to get it at a reasonable price. I don't think it's a highly valuable domain: it shows very bad ranking and has 0 exact-match searches according to AdWords. Why it is valuable to me is very simple: I have a project responding to this name. I already own the .net domain and would love to have the .com.

    I discovered the domain was going to be available thanks to an email I received from a backorder site. I did some research, and these guys have a bad reputation on the web. I did further research and found that more reputable (at least in theory) companies are the likes of snapnames, pool, namejet, godaddy, etc. I am a bit suspicious of using these drop-catching services: What if they shill bids? What if they push it into auction even if I'm the only person interested? What if I raise attention and interest in the domain by backordering?

    I would rather wait for it to be deleted and become available again, so I can register it manually. It is really not an interesting domain name, and I don't think anyone would care to have it. But what if the domain is already being watched by the domain-industry sharks? I did a whois lookup, and my desired domain's nameservers point to domcollect.com, which appears to be an auction site. What if I decide to wait for manual registration and miss the chance to get it? I'm willing to spend the $60-70 fees these sites require, but not really more than that. Suggestions? Thank you very much. I'm a bit confused and undecided.

    Read the article

  • Checking Exadata image and OS versions, GI and DB patches

    - by Liu Maclean
    Check the Exadata Image & OS versions and the GI & DB patches (sundiag, exacheck):

        cellserv ==> imageinfo
        dbhost   ==> /usr/local/bin/imagehistory

    Also check the version of the switch. Log in to the switch and execute the following command:

        [root@myswitch-1 sbin]# version

        [root@dmorlsw-ib2 sbin]# cd /usr/local/bin
        [root@dmorlsw-ib2 bin]# ls -lrt version
        -rwxr-xr-x 1 root root 20356 Apr 4 2011 version

    Output will look as below:

        [root@dmorlsw-ib2 ~]# version
        SUN DCS 36p version: 1.3.3-2
        Build time: Apr 4 2011 11:15:19
        SP board info:
        Manufacturing Date: 2009.05.05
        Serial Number: "NCD3X0178"
        Hardware Revision: 0x0006
        Firmware Revision: 0x0102
        BIOS version: NOW1R112
        BIOS date: 04/24/2009

        ib8# cat /sys/class/infiniband/is4_0/fw_ver
        7.2.300
        ib8# cat /sys/class/dmi/id/bios_version
        NOW1R112
        ib8# nm2version
        NM2-36p version: 1.0.1-1
        Build time: Sep 14 2009 12:52:51
        ComExpress info:
        Manufacturing Date: 2009.08.19
        Serial Number:
        Hardware Revision: 0x0006
        Firmware Revision: 0x0102

        { case `uname` in
            Linux ) ILOM="/usr/bin/ipmitool sunoem cli" ;;
            SunOS ) ILOM="/opt/ipmitool/bin/ipmitool sunoem cli" ;;
          esac ;
          ImageInfo="/opt/oracle.cellos/imageinfo" ;
          uname -srm ;
          head -1 /etc/*release ;
          uptime | cut -d, -f1 ;
          $ILOM "show /SP system_description system_identifier" | grep = ;
          $ImageInfo -activated -node -status -ver | grep -v ^$ ;
        } | tee /tmp/ExaInfo.log

        $GRID_HOME/OPatch/opatch lsinv -all -oh $GRID_HOME | tee /tmp/OPatchInv.log
        $ORACLE_HOME/OPatch/opatch lsinv -all | tee -a /tmp/OPatchInv.log

        cat /tmp/ExaInfo.log
        Linux 2.6.18-128.1.16.0.1.el5 x86_64
        ==> /etc/enterprise-release <==
        Enterprise Linux Enterprise Linux Server release 5.3 (Carthage)
        ==> /etc/redhat-release <==
        Enterprise Linux Enterprise Linux Server release 5.3 (Carthage)
        20:37:56 up 458 days
        system_description = SUN FIRE X4170 SERVER, ILOM v3.0.6.10.b, r52264
        system_identifier = Sun Oracle Database Machine
        Active image version: 11.2.1.2.3
        Active image activated: XXXX-XX-XX 12:27:12 +0800
        Active image status: success
        Active node type: COMPUTE
        Inactive image version: undefined

    FileName: OPatchInv.log

        ...
        Oracle Home       : /u01/app/11.2.0/grid
        Central Inventory : /u01/app/oraInventory
           from           : /etc/oraInst.loc
        OPatch version    : 11.2.0.1.2
        OUI version       : 11.2.0.1.0
        OUI location      : /u01/app/11.2.0/grid/oui
        ...
        --------------------------------------------------------------------------------
        List of Oracle Homes:
          Name                          Location
          Ora11g_gridinfrahome1         /u01/app/11.2.0/grid
          OraDb11g_home1                /u01/app/oracle/product/11.2.0/dbhome_1
        --------------------------------------------------------------------------------
        Installed Top-level Products (1):
        Oracle Grid Infrastructure                                           11.2.0.1.0
        ...
        Interim patches (2) :
        Patch  9524394      : applied on Thu Jun 03 20:46:05 CST 2010
        ... {TRACKING BUG FOR 11.2.0.1 DB MACHINE BUNDLE PATCH 3}
        Patch  9455587      : applied on Fri Apr 02 18:27:47 CST 2010
        ... {MERGE REQUEST ON TOP OF 11.2.0.1.0 FOR BUGS 8483425 8667622 8702731 8730804}
        Rac system comprising of multiple nodes
          Local node  = dbserv01
          Remote node = dbserv02
          Remote node = dbserv03
          Remote node = dbserv04
        --------------------------------------------------------------------------------
        OPatch succeeded.
        ...
        Oracle Home       : /u01/app/oracle/product/11.2.0/dbhome_1
        ...
        Oracle Database 11g                                                  11.2.0.1.0
        ...
        Interim patches (5) :
        Patch  8888434      : applied on Sat Jan 08 00:27:33 CST 2011
        ... {AIX-ASM-CF: LMHB TERMINATE INSTANCE WHEN OFFLINE ONE FAILGROUP IN ASM DG}
        Patch  8730312      : applied on Thu Jun 03 21:30:03 CST 2010
        ... {FWD MERGE FOR BASE BUG 8715387 FOR 12G}
        Patch  9502717      : applied on Thu Jun 03 21:25:54 CST 2010
        ... {LMS HIT ORA-600 [KJBLDRMNEXTPKEY:SEEN] AND CRASHED THE INSTANCE}
        { + the same 2 as GI above }

    Check the cell server cache policy:

        cell08# MegaCli64 -LDInfo -Lall -aALL | grep 'Current Cache Policy'
        Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU

        cell09# MegaCli64 -LDInfo -Lall -aALL | grep 'Current Cache Policy'
        Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
        Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU

    Cache policy is in WB. Would recommend proactive battery replacement. Example:

        a. /opt/MegaRAID/MegaCli/MegaCli64 -LDGetProp -Cache -LALL -aALL   #### (will list the cache policy)
        b. /opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp -WB -LALL -aALL      #### (will try to change the policy from xx to WB)

    Note that a policy change to WB will not come into effect immediately:

        Set Write Policy to WriteBack on Adapter 0, VD 0 (target id: 0) success
        Battery capacity is below the threshold value

    Check the cell BBU status:

        cell08# /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -a0
        BBU status for Adapter: 0
        BatteryType: iBBU
        Voltage: 4061 mV
        Current: 0 mA
        Temperature: 36 C
        BBU Firmware Status:
        Charging Status : None
        Voltage : OK
        Temperature : OK
        Learn Cycle Requested : No
        Learn Cycle Active : No
        Learn Cycle Status : OK
        Learn Cycle Timeout : No
        I2c Errors Detected : No
        Battery Pack Missing : No
        Battery Replacement required : No
        Remaining Capacity Low : Yes
        Periodic Learn Required : No
        Battery state:
        GasGuageStatus:
        Fully Discharged : No
        Fully Charged : Yes
        Discharging : Yes
        Initialized : Yes
        Remaining Time Alarm : No
        Remaining Capacity Alarm: No
        Discharge Terminated : No
        Over Temperature : No
        Charging Terminated : No
        Over Charged : No
        Relative State of Charge: 99 %
        Charger System State: 49168
        Charger System Ctrl: 0
        Charging current: 0 mA
        Absolute state of charge: 21 %
        Max Error: 2 %
        Exit Code: 0x00

    Check the BBU on all cells with dcli (the quoting here is reconstructed; the original command was missing a closing quote):

        dcli -g ~/cell_group -l root -t '{ uname -srm ; head -1 /etc/*release ; uptime | cut -d, -f1 ; imagehistory ; ipmitool sunoem cli "show /SP system_description system_identifier" | grep = ; ipmitool sunoem cli "show /SP/policy FLASH_ACCELERATOR_CARD_INSTALLED" ; /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -a0 | egrep -i "BBU|Battery|Charge:|Fully|Low|Learn" ; }' | tee /tmp/ExaInfo.log

        Target cells: ['cellserv01', 'cellserv02', 'cellserv03', 'cellserv04', 'cellserv05', 'cellserv06', 'cellserv07']
        cellserv01: Linux 2.6.18-128.1.16.0.1.el5 x86_64
        cellserv01: ==> /etc/enterprise-release <==
        cellserv01: Enterprise Linux Enterprise Linux Server release 5.3 (Carthage)
        cellserv01:
        cellserv01: ==> /etc/redhat-release <==
        cellserv01: Enterprise Linux Enterprise Linux Server release 5.3 (Carthage)
        cellserv01: 01:17:39 up 635 days
        cellserv01: Version : 11.2.1.2.1
        cellserv01: Image activation date : 2011-03-25 11:59:34 -0800
        cellserv01: Imaging mode : fresh
        cellserv01: Imaging status : success
        cellserv01:
        cellserv01: Version : 11.2.1.2.3
        cellserv01: Image activation date : 2011-04-13 12:15:46 +0800
        cellserv01: Imaging mode : patch
        cellserv01: Imaging status : success
        cellserv01:
        cellserv01: Version : 11.2.1.2.6
        cellserv01: Image activation date : 2011-05-27 23:08:22 +0800
        cellserv01: Imaging mode : patch
        cellserv01: Imaging status : success
        cellserv01:
        cellserv01: system_description = SUN FIRE X4275 SERVER, ILOM v3.0.6.10.b, r52264
        cellserv01: system_identifier = Sun Oracle Database Machine
        cellserv01: Connected. Use ^D to exit.
        cellserv01: -> show /SP/policy FLASH_ACCELERATOR_CARD_INSTALLED
        cellserv01: show: No matching properties found.
        cellserv01:
        cellserv01: -> Session closed
        cellserv01: Disconnected
        cellserv01: BBU status for Adapter: 0
        cellserv01: BatteryType: iBBU
        cellserv01: BBU Firmware Status:
        cellserv01: Learn Cycle Requested : No
        cellserv01: Learn Cycle Active : No
        cellserv01: Learn Cycle Status : OK
        cellserv01: Learn Cycle Timeout : No
        cellserv01: Battery Pack Missing : No
        cellserv01: Battery Replacement required : No
        cellserv01: Remaining Capacity Low : Yes
        cellserv01: Periodic Learn Required : No
        cellserv01: Battery state:
        cellserv01: Fully Discharged : No
        cellserv01: Fully Charged : Yes
        cellserv01: Relative State of Charge: 99 %
        cellserv01: Absolute state of charge: 21 %

        dcli -l root -g /root/all_group '/opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -a0' > BBU.out

    Check ipmi:

        dcli -g ~/cell_group -l root -t '{ ipmitool sunoem cli "show /SP/policy FLASH_ACCELERATOR_CARD_INSTALLED" | grep = ; MegaCli64 -LDInfo -Lall -aALL | grep "Current Cache Policy" ; }' | tee /tmp/ExaCells.log

    Read the article

  • Paying great programmers more than average programmers

    - by Kelly French
    It's fairly well recognized that some programmers are up to 10 times more productive than others. Joel mentions this topic on his blog, and there is a whole blog devoted to the idea of the "10x productive programmer". In the years since the original study, the general finding that "there are order-of-magnitude differences among programmers" has been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al. 2000).

    Fred Brooks mentions the wide range in the quality of designers in his "No Silver Bullet" article:

        The differences are not minor--they are rather like the differences between Salieri and Mozart. Study after study shows that the very best designers produce structures that are faster, smaller, simpler, cleaner, and produced with less effort. The differences between the great and the average approach an order of magnitude.

    The study that Brooks cites is: H. Sackman, W.J. Erikson, and E.E. Grant, "Exploratory Experimental Studies Comparing Online and Offline Programming Performance," Communications of the ACM, Vol. 11, No. 1 (January 1968), pp. 3-11.

    The way programmers are paid by employers these days makes it almost impossible to pay the great programmers a large multiple of the entry-level salary. When the starting salary for a just-graduated entry-level programmer (we'll call him Asok, from Dilbert) is $40K, even if the top programmer (we'll call him Linus) makes $120K, that is only a multiple of 3. I'd be willing to bet that Linus does much more than 3 times what Asok does, so why wouldn't we expect him to get paid more as well?

    Here is a quote from Stroustrup: "The companies are complaining because they are hurting. They can't produce quality products as cheaply, as reliably, and as quickly as they would like. They correctly see a shortage of good developers as a part of the problem. What they generally don't see is that inserting a good developer into a culture designed to constrain semi-skilled programmers from doing harm is pointless, because the rules/culture will constrain the new developer from doing anything significantly new and better."

    This leads to two questions. I'm excluding self-employed programmers and contractors; if you disagree, that's fine, but please include your rationale. It might be that the self-employed or contract programmers are where you find the top-10 earners, but please provide an explanation/story/rationale along with any anecdotes.

    [EDIT] I thought up some other areas in which talent/ability affects pay:

    Financial traders (commodities, stock, derivatives, etc.)
    Designers (fashion, interior decorators, architects, etc.)
    Professionals (doctors, lawyers, accountants, etc.)
    Sales

    Questions:

    Why aren't the top 1% of programmers paid like A-list movie stars?
    What would the industry be like if we did pay the "smart and gets things done" programmers 6, 8, or 10 times what an intern makes?

    [Footnote: I posted this question after submitting it to the Stack Overflow podcast. It was included in episode 77, and I've written more about it as a Codewright's Tale post, 'Of Rockstars and Bricklayers'.]

    Epilogue: It's probably unfair to exclude contractors and the self-employed. One aspect of the highest earners in other fields is that they are free agents; the competition for their skills is what drives up their earning power. This means they cannot be interchangeable or otherwise treated as a plug-and-play resource. I liked the example in one answer of a major league baseball team trying to field two first-basemen. Also, something that Joel mentioned in the Stack Overflow podcast (#77): there are natural dynamics that shrink any extreme performance/pay range between the highs and lows. One is the peer pressure of organizations to pay within a given range; another is the likelihood that the high performer will realize their undercompensation and seek greener pastures.

    Read the article

  • Vista/7: How to get glass color?

    - by Ian Boyd
    How do you use DwmGetColorizationColor? The documentation says it returns two values:

    a 32-bit 0xAARRGGBB containing the color used for glass composition
    a boolean parameter that is true "if the color is an opaque blend" (whatever that means)

    Here's a color that i like, a nice puke green: You can notice the color is greeny, and the translucent title bar (against a white background) shows the snot color very clearly. i try to get the color from Windows:

        DwmGetColorizationColor(dwColorization, bIsOpaqueBlend);

    And i get:

        dwColorization: 0x0D0A0F04
        bIsOpaqueBlend: false

    According to the documentation this value is of the format AARRGGBB, and so contains:

        AA: 0x0D (13)
        RR: 0x0A (10)
        GG: 0x0F (15)
        BB: 0x04 (4)

    This supposedly means that the color is (10, 15, 4), with an opacity of ~5.1%. But if you actually look at this RGB value, it's nowhere near my desired snot green. Here is (10, 15, 4) with zero opacity (the original color), and (10, 15, 4) with 5% opacity against a white/checkerboard background.

    So the question is: how to get the glass color in Windows Vista/7? i tried using DwmGetColorizationColor, but that doesn't work very well. A person with the same problem, but a nicer shiny picture to attract you squirrels: So, it boils down to – DwmGetColorizationColor is completely unusable for applications attempting to apply the current color onto an opaque surface. i love this guy's screenshots much better than mine. Using his screenshots as a template, i made up a few more sparklies. For the last two screenshots, the alpha-blended chip is a true partially transparent PNG, blending to your browser's background. Cool! (i'm such a geek)

    Edit 2: Had to arrange them in rainbow color. (i'm such a geek)
    Edit 3: Well now i of course have to add Yellow.

    Undocumented/Unsupported/Fragile Workarounds

    There is an undocumented export from DwmApi.dll at entry point 137, which we'll call DwmGetColorizationParameters:

        HRESULT GetColorizationParameters_Undocumented(out DWMCOLORIZATIONPARAMS params);

        struct DWMCOLORIZATIONPARAMS {
            public UInt32 ColorizationColor;
            public UInt32 ColorizationAfterglow;
            public UInt32 ColorizationColorBalance;
            public UInt32 ColorizationAfterglowBalance;
            public UInt32 ColorizationBlurBalance;
            public UInt32 ColorizationGlassReflectionIntensity;
            public UInt32 ColorizationOpaqueBlend;
        }

    We're interested in the first parameter: ColorizationColor. We can also read the value out of the registry:

        HKEY_CURRENT_USER\Software\Microsoft\Windows\DWM
            ColorizationColor: REG_DWORD = 0x6614A600

    So you pick your poison of creating appcompat issues. You can rely on an undocumented API (which is bad, bad, bad, and can go away at any time), or use an undocumented registry key (which is also bad, and can go away at any time).

    See also:
    Is there a list of valid parameter combinations for GetThemeColor / Visual Styles API
    How does Windows change Aero Glass color?
    DWM - Colorization Color Handling
    Using DWMGetColorizationColor
    Retrieving Aero Glass base color for opaque surface rendering

    i've been wanting to ask this question for over a year now. i always knew that it's impossible to answer, and that the only way to get anyone to actually pay attention is to have colorful screenshots; developers are attracted to shiny things. But on the downside it means i had to put all kinds of work into making the lures.

    Read the article

  • How to generate a Program template by generating an abstract class

    - by Byron-Lim Timothy Steffan
    I have the following problem. The first step is to implement a program which follows a specific protocol on startup; therefore, functions such as onInit(), onConfigRequest(), etc. will be necessary. (These are triggered, e.g., by an incoming message on a TCP port.)

    My goal is to generate a class, for example an abstract one, which has abstract functions such as onInit(). A programmer should just inherit from this base class and merely override these abstract functions. The rest, such as the protocol handling, should simply happen in the background (using the code of the base class) and should not need to appear in the programmer's code.

    What is the correct design strategy for such tasks? And how do I deal with the fact that the static Main method is not inheritable? What are the key tags for this problem? (I have trouble searching for a solution since I lack clear terms for it.) The goal is to create some sort of library/class which - included in one's code - results in executables following the protocol.

    EDIT (new explanation): Okay, let me try to explain in more detail. In this case programs should be clients within a client-server architecture. We have a client-server connection via TCP/IP. Each program needs to follow a specific protocol upon program start: as soon as my program starts and gets connected to the server, it will receive an Init message (TcpClient); when this happens it should trigger the function onInit(). (Should this be implemented by an event system?) After onInit(), an acknowledgement message should be sent to the server. Afterwards there are some other steps, e.g. a config message from the server which triggers onConfig(), and so on.

    Let's concentrate on the onInit function. The idea is that onInit (and onConfig and so on) should be the only functions the programmer needs to edit, while the overall protocol messaging is hidden from him. Therefore, I thought an abstract class with the abstract methods onInit() and onConfig() in it should be the right thing. The static Main method I would like to hide, since within it, e.g., there will be some part which connects to the TCP port, reacts to the Init message, and calls the onInit function. Two problems here: 1. the static Main method can't be inherited, can it? 2. I cannot call abstract functions from the static Main method in the abstract master class.

    Let me give a pseudo-example of my idea:

        public abstract class MasterClass
        {
            static void Main(string[] args)
            {
                // 1. open TCP connection
                // 2. wait for the Init message from the server
                // 3. onInit();
                // 4. send acknowledgement that the Init routine ended successfully
                // 5. wait for the Config message from the server
                // 6. ...
            }

            public abstract void onInit();
            public abstract void onConfig();
        }

    I hope you get the idea now! The programmer should afterwards inherit from this MasterClass and merely need to edit the functions onInit and so on. Is this possible? How? What else do you recommend for solving this?

    EDIT: The strategy idea provided below is a good one! Check out my comment on that.

    Read the article

  • Anyone willing to help out a javascript n00b? :-)

    - by Splynx
    Since I am asking for a lot, and know it, the following is a wall of text for those who might show some interest and want to know a little before offering their help to me. First a little about my level of programming skills, and a little about what I ask for.

    Where I'm at: I am not totally new to Javascript, and have dabbled a little with PHP earlier - well, have dabbled a lot with PHP in fact, but never got good at it because I program alone. And I have until now never used forums to get help etc., other than searching to see if anyone else had my problem before and what the solution was. So I am not an intuitive or talented programmer; I'm more of a very meticulous programmer, and you would be surprised how far you can get with if else... (OK, that's a joke, hehe). My solutions are usually (I am guessing here) not the best ones - and slow, I take it - and the code is usually too long, and I have to look up most of the stuff I use (really, a lot of it is not done "freehand").

    I have a LOT of experience with HTML and CSS, and have always done well-formed markup. I am also really into x-browsing, and always require that my work validates when it's done. I worry about optimizing a lot, too: I work with sprites for images, minimize the number of HTTP requests, use H1, H2, etc. where it is logically correct, and use the correct elements rather than just div, span, or p everything... So because I am a workhorse and very meticulous I can actually pull off some quite "advanced" features, but it's always the basics that bite me in the end. Not fully understanding the syntax and so on usually gives me problems.

    I have recently discovered jQuery - which is a lot of fun... But I want to use it for the DOM node manipulation/handling only. As I mentioned, I worry about optimizing, and jQuery used for everything seems... well, not optimal. It strikes me that doing it yourself when possible is faster than accessing another script that may take a whole lot of other considerations into perspective when handling your variables and objects (and I am just guessing here, since I, as explained, know nothing). So that's where I'm at... As mentioned, I just started with javascript for "real", so I do not have much to show, but at the end of my WOT you can see two unfinished scripts I have made, so you can see where I'm at roughly - just check out the URL without the /feedback.html for the second example (I am only allowed to post 1 link since I am also an SO n00b) (and for those rushing over to a validation service, remember I wrote "when it's done"...).

    What I ask for: I am figuring this... I have a piece of code I am working on at the moment, and this little project has taught me a whole lot already, and I have "grown" a lot as a javascript programmer. If I add a whole lot of comments to the script and explain what it is intended to do, will you then show me where:

    I am writing incorrect code - making mistakes
    Where/how my code could be more optimal
    Where I am just simply being a muppet

    The code I want to use as the background for the tuition is the one here: http://projects.1000monkeys.dk/feedback.html Use Firebug and have a quick look-see...

    Read the article

  • Pigs in Socks?

    - by MightyZot
    My wonderful wife Annie surprised me with a cruise to Cozumel for my fortieth birthday. I love to travel. Every trip is ripe with adventure, crazy things to see and experience. For example, on the way to Mobile Alabama to catch our boat, some dude hauling a mobile home lost a window and we drove through a cloud of busting glass going 80 miles per hour! The night before the cruise, we stayed in the Malaga Inn and I crawled UNDER the hotel to look at an old civil war bunker. WOAH! Then, on the way to and from Cozumel, the boat plowed through two beautiful and slightly violent storms. But, the adventures you have while travelling often pale in comparison to the cult of personalities you meet along the way.  :)

    We met many cool people during our travels and we made some new friends. Todd and Andrea are in the publishing business (www.myneworleans.com) and teaching, respectively. Erika is a teacher too and Matt has a pig on his foot. This story is about the pig. Without that pig on Matt's foot, we probably would have hit a buoy and drowned.

    Alright, so…this pig on Matt's foot…this is no henna tatt, this is a man's tattoo. Apparently, getting tattoos on your feet is very painful because there is very little muscle and fat and lots of nifty nerves to tell you that you might be doing something stupid. Pig and rooster tattoos carry special meaning for sailors of old. According to some sources, having a tattoo of a pig or rooster on one foot or the other will keep you from drowning. There are many great musings as to why a pig and a rooster might save your life. The most plausible in my opinion is that pigs and roosters were common livestock tagging along with the crew. Since they were shipped in wooden crates, pigs and roosters were often counted amongst the survivors when ships succumbed to Davy Jones' Locker. I didn't spend a whole lot of time researching the pig and the rooster, so consider these musings as you would a grain of salt. And, I was not able to find a lot of what you might consider credible history regarding the tradition. What I did find was a comfort, or solace, in the maritime tradition.

    Seems like raw traditions like the pig and the rooster are in danger of getting lost in a sea of non-permanence. I mean, what traditions are us old programmers and techies leaving behind for future generations? Makes me wonder what Ward Christensen has tattooed on his left foot.  I guess my choice would have to be a Commodore 64.

    (I met Ward, by the way, in an elevator after he received his Dvorak awards in 1992. He was a very non-assuming individual sporting business casual and was very much a "sailor" of an old-school programmer. I can't remember his exact words, but I think they were essentially that he felt it odd that he was getting an award for just doing his work. I'm sure that Ward doesn't know this…he couldn't have set a more positive example for a young 22 year old programmer. Thanks Ward!)

    Read the article

  • Why Software Sucks...and What You Can Do About It – book review

    - by DigiMortal
    How do our users see the products we are writing for them, and how happy are they with our work? Are they able to get their work done without fighting with cool features and crashes, or are they just switching off the resistance part of their brain to survive our software? Yeah, the overall picture of the software usability landscape is not very nice. Okay, it is not even nice. But, fortunately, Why Software Sucks...and What You Can Do About It by David S. Platt explains everything.

    Why Software Sucks… is a book for software users, but I consider it a must-read also for developers, and especially for their managers, whose politics often kill all usability topics as soon as they appear. For managers, usability is a soft topic that can be manipulated in whatever way best suits the current state of the project. Although developers are not UI designers and usability experts, they are still very often forced to deal with these topics, and this is how usability problems start (of course, designers are also able to produce designs that are stupid and too hard for users to use, but this blog here is about development).

    I found this book to be very interesting and funny reading. It is not a humor book, but it explains everything in a way that makes you remember very well what you just read. It took me about three evenings to go through this book, and I am still enjoying what I found and how the author explains our weird young working field to end users. I suggest this book to all developers - while you are demanding that your management hire or outsource a usability expert, you are at least causing less pain to end users. So, go and buy this book, just like I did. And… say thanks to Mr. Platt :)

    There is one more book I suggest you read if you are interested in usability: Don't Make Me Think: A Common Sense Approach to Web Usability, 2nd Edition by Steve Krug.

    Editorial review from Amazon

    Today's software sucks. There's no other good way to say it. It's unsafe, allowing criminal programs to creep through the Internet wires into our very bedrooms. It's unreliable, crashing when we need it most, wiping out hours or days of work with no way to get it back. And it's hard to use, requiring large amounts of head-banging to figure out the simplest operations.

    It's no secret that software sucks. You know that from personal experience, whether you use computers for work or personal tasks. In this book, programming insider David Platt explains why that's the case and, more importantly, why it doesn't have to be that way. And he explains it in plain, jargon-free English that's a joy to read, using real-world examples with which you're already familiar. In the end, he suggests what you, as a typical user, without a technical background, can do about this sad state of our software—how you, as an informed consumer, don't have to take the abuse that bad software dishes out.

    As you might expect from the book's title, Dave's expose is laced with humor—sometimes outrageous, but always dead on. You'll laugh out loud as you recall incidents with your own software that made you cry. You'll slap your thigh with the same hand that so often pounded your computer desk and wished it was a bad programmer's face. But Dave hasn't written this book just for laughs. He's written it to give long-overdue voice to your own discovery—that software does, indeed, suck, but it shouldn't.

    Table of contents

    Acknowledgments
    Introduction
    Chapter 1: Who're You Calling a Dummy?
        Where We Came From
        Why It Still Sucks Today
        Control versus Ease of Use
        I Don't Care How Your Program Works
        A Bad Feature and a Good One
        Stopping the Proceedings with Idiocy
        Testing on Live Animals
        Where We Are and What You Can Do
    Chapter 2: Tangled in the Web
        Where We Came From
        How It Works
        Why It Still Sucks Today
        Client-Centered Design versus Server-Centered Design
        Where's My Eye Opener?
        It's Obvious—Not!
        Splash, Flash, and Animation
        Testing on Live Animals
        What You Can Do about It
    Chapter 3: Keep Me Safe
        The Way It Was
        Why It Sucks Today
        What Programmers Need to Know, but Don't
        A Human Operation
        Budgeting for Hassles
        Users Are Lazy
        Social Engineering
        Last Word on Security
        What You Can Do
    Chapter 4: Who the Heck Are You?
        Where We Came From
        Why It Still Sucks Today
        Incompatible Requirements
        OK, So Now What?
    Chapter 5: Who're You Looking At?
        Yes, They Know You
        Why It Sucks More Than Ever Today
        Users Don't Know Where the Risks Are
        What They Know First
        Milk You with Cookies?
        Privacy Policy Nonsense
        Covering Your Tracks
        The Google Conundrum
        Solution
    Chapter 6: Ten Thousand Geeks, Crazed on Jolt Cola
        See Them in Their Native Habitat
        All These Geeks
        Who Speaks, and When, and about What
        Selling It
        The Next Generation of Geeks—Passing It On
    Chapter 7: Who Are These Crazy Bastards Anyway?
        Homo Logicus
        Testosterone Poisoning
        Control and Contentment
        Making Models
        Geeks and Jocks
        Jargon
        Brains and Constraints
        Seven Habits of Geeks
    Chapter 8: Microsoft: Can't Live With 'Em and Can't Live Without 'Em
        They Run the World
        Me and Them
        Where We Came From
        Why It Sucks Today
        Damned if You Do, Damned if You Don't
        We Love to Hate Them
        Plus ça Change
        Growing-Up Pains
        What You Can Do about It
        The Last Word
    Chapter 9: Doing Something About It
        1. Buy
        2. Tell
        3. Ridicule
        4. Trust
        5. Organize
    Epilogue
    About the Author

    Read the article

  • Type Casting variables in PHP: Is there a practical example?

    - by Stephen
    PHP, as most of us know, has weak typing. For those who don't, PHP.net says: "PHP does not require (or support) explicit type definition in variable declaration; a variable's type is determined by the context in which the variable is used." Love it or hate it, PHP re-casts variables on the fly. So, the following code is valid:

        $var = "10";
        $value = 10 + $var;
        var_dump($value); // int(20)

    PHP also allows you to explicitly cast a variable, like so:

        $var = "10";
        $value = 10 + $var;
        $value = (string)$value;
        var_dump($value); // string(2) "20"

    That's all cool... but, for the life of me, I cannot conceive of a practical reason for doing this. I don't have a problem with strong typing in languages that support it, like Java. That's fine, and I completely understand it. Also, I'm aware of, and fully understand the usefulness of, type hinting in function parameters. The problem I have with type casting is explained by the above quote. If PHP can swap types at will, it can do so even after you force-cast a type, and it can do so on the fly when you need a certain type in an operation. That makes the following valid:

        $var = "10";
        $value = (int)$var;
        $value = $value . ' TaDa!';
        var_dump($value); // string(8) "10 TaDa!"

    So what's the point? Can anyone show me a practical application or example of type casting - one that would fail if type casting were not involved? I ask this here instead of SO because I figure practicality is too subjective.

    Edit in response to Chris' comment: Take this theoretical example of a world where user-defined type casting makes sense in PHP:

    You force-cast variable $foo as int -- (int)$foo.
    You attempt to store a string value in the variable $foo.
    PHP throws an exception!! <--- That would make sense. Suddenly the reason for user-defined type casting exists!

    The fact that PHP will switch things around as needed makes the point of user-defined type casting vague. For example, the following two code samples are equivalent:

        // example 1
        $foo = 0;
        $foo = (string)$foo;
        $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo;

        // example 2
        $foo = 0;
        $foo = (int)$foo;
        $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo;

    UPDATE: Guess who found himself using typecasting in a practical environment? Yours truly. The requirement was to display money values on a website for a restaurant menu. The design of the site required that trailing zeros be trimmed, so that the display looked something like the following:

        Menu Item 1 .............. $ 4
        Menu Item 2 .............. $ 7.5
        Menu Item 3 .............. $ 3

    The best way I found to do that was to cast the variable as a float:

        $price = '7.50'; // a string from the database layer.
        echo 'Menu Item 2 .............. $ ' . (float)$price;

    PHP trims the float's trailing zeros, and then recasts the float as a string for concatenation.
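
    One further practical case (a small sketch of my own, not from the original post): json_encode preserves the difference between the string "7.50" and the number 7.5, so when API consumers expect JSON numbers, you end up casting the values coming out of the database layer:

        <?php
        // Database drivers commonly return every column as a string.
        $row = array('id' => '42', 'price' => '7.50');

        // Without casts, consumers get strings: {"id":"42","price":"7.50"}
        echo json_encode($row), "\n";

        // Casting restores the intended JSON types: {"id":42,"price":7.5}
        echo json_encode(array(
            'id'    => (int)$row['id'],
            'price' => (float)$row['price'],
        )), "\n";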

    Read the article

  • C# 4.0: COM Interop Improvements

    - by Paulo Morgado
    Dynamic resolution, as well as named and optional arguments, greatly improves the experience of interoperating with COM APIs such as the Office Automation Primary Interop Assemblies (PIAs). But, in order to ease COM interop development even further, a few COM-specific features were also added to C# 4.0.

    Omitting ref

    Because of a different programming model, many COM APIs contain a lot of reference parameters. These parameters are typically not meant to mutate a passed-in argument, but are simply another way of passing value parameters. Specifically for COM methods, the compiler allows declaring the method call passing the arguments by value; it will automatically generate the necessary temporary variables to hold the values in order to pass them by reference, and will discard their values after the call returns. From the point of view of the programmer, the arguments are being passed by value. This method call:

        object fileName = "Test.docx";
        object missing = Missing.Value;
        document.SaveAs(ref fileName,
            ref missing, ref missing, ref missing, ref missing,
            ref missing, ref missing, ref missing, ref missing,
            ref missing, ref missing, ref missing, ref missing,
            ref missing, ref missing, ref missing);

    can now be written like this:

        document.SaveAs("Test.docx",
            Missing.Value, Missing.Value, Missing.Value, Missing.Value,
            Missing.Value, Missing.Value, Missing.Value, Missing.Value,
            Missing.Value, Missing.Value, Missing.Value, Missing.Value,
            Missing.Value, Missing.Value, Missing.Value);

    And because all parameters that are receiving the Missing.Value value have that value as their default, the method call can even be reduced to this:

        document.SaveAs("Test.docx");

    Dynamic Import

    Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from the context of the call, but has to explicitly perform a cast on the returned values to make use of that knowledge. These casts are so common that they constitute a major nuisance. To make the developer's life easier, it is now possible to import the COM APIs in such a way that variants are instead represented using the type dynamic, which means that COM signatures now have occurrences of dynamic instead of object. This means that members of a returned object can now be easily accessed, or assigned into a strongly typed variable, without having to cast. Instead of this code:

        ((Excel.Range)(excel.Cells[1, 1])).Value2 = "Hello World!";

    this code can now be used:

        excel.Cells[1, 1] = "Hello World!";

    And instead of this:

        Excel.Range range = (Excel.Range)(excel.Cells[1, 1]);

    this can be used:

        Excel.Range range = excel.Cells[1, 1];

    Indexed And Default Properties

    A few COM interface features are still not available in C#. At the top of the list are indexed properties and default properties. As mentioned above, these will be possible if the COM interface is accessed dynamically, but they will not be recognized by statically typed C# code.

    No PIAs – Type Equivalence And Type Embedding

    For assemblies identified with PrimaryInteropAssemblyAttribute, the compiler will create equivalent types (interfaces, structs, enumerations and delegates) and embed them in the generated assembly. To reduce the final size of the generated assembly, only the used types and their used members will be generated and embedded.
Although this makes development and deployment of applications using the COM components easier because there’s no need to deploy the PIAs, COM component developers are still required to build the PIAs.

    Read the article

  • Who broke the build?

    - by Martin Hinshelwood
    I recently sent round a list of broken builds at SSW and asked for them to be fixed, or deleted if they are not being used. My colleague Peter came back with a couple of questions, which I love, as it tells me that at least one person reads my email. I think first we need to answer a couple of other questions related to builds in general.

    Why do we want the build to pass?

    - Any developer can pick up a project and build it.
    - Standards can be enforced.
    - Constant quality is maintained.
    - Problems in code are identified early.

    What could a failed build signify?

    - Developers have not built and tested their code properly before checking in.
    - Something added depends on a local resource that is not under version control or does not exist on the target computer.
    - Developers are not writing tests to cover common problems.
    - There are not enough tests to cover problems.

    Now that we know why, let's answer Peter's questions.

    Where is this list? (Can we see it somehow?)

    You can normally only see the builds listed for each project, but you also have a little application called "Build Notifications" on your computer; it is installed when you install Visual Studio 2010.

    Figure: Starting the build notification application on Windows 7.

    Once you have it open (it may disappear into your system tray), click "Options" and select all the projects you are involved in. This application only lists projects that have builds, so don't worry if a project is not listed; that just means you are about to set up a build, right? I just selected ALL projects that have builds.

    Figure: All builds are listed here.

    In addition to seeing the list, you will also get toast notifications of build failures.

    How can we get more info on what broke the build? (Who is interesting too, to point the finger, but more important is what.)

    The only thing worse than breaking the build is continuing to develop on a broken build!

    Figure: I have highlighted the users who either are bad for breaking the build, or very bad for not fixing it.

    To find out what is wrong with a build, you need to open the build definition. You can open a web version by double-clicking the build in the image above, or you can open it from "Team Explorer": connect to your project, expand the "Builds" tree, and open the build by double-clicking on it.

    Figure: Opening a build is easy; double-click it and then open a build run from the list.

    Figure: Good example; the build and tests have passed.

    Figure: Bad example; there are 133 errors preventing POK from being built on the build server.

    For identifying failures, see:

    - Solution: Getting Silverlight to build on Team Build 2010 RC
    - Solution: Testing Web Services with MSTest on Team Build
    - Finding the problem on a partially succeeded build

    So, Peter asked about blame; let's have a look and see.

    Figure: The build has been broken for so long I have no idea when it was broken, but everyone on this list is to blame (I am there too).

    The rest of the history is lost in the sands of time; there is no way to tell when the build was originally broken, or by whom, or even if it ever worked in the first place. Builds should be protected by the team that uses them, and the only way to do that is to have the team own them. It is fine for me to go in and set up a build, but the ownership for a build should always reside with the person who broke it last.

    Conclusion

    This is an example of a pointless build. Let's be honest: if you have a system like TFS in place and builds are constantly left broken, or not added to projects, then your developers don't yet understand the value. I have found that adding a Gated Check-in helps instil that understanding of value. If you prevent developers from checking in without passing the basic quality gate of "your code builds on another computer", it makes them look more closely at why they can't check in. I have had builds fail because one developer had a "D" drive but the build server did not; that is exactly what these builds are there to catch.

    If you want to know what builds to create and why, I wrote a post on "Do you know the minimum builds to create on any branch?"

    Technorati Tags: TFS2010, Gated Check-in, Builds, Build Failure, Broken Build
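    As an aside, the status that Build Notifications surfaces is also available programmatically. Below is a minimal sketch of querying recent build results with the TFS 2010 client API; the collection URL and project name are placeholders of my own, and it assumes the Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.Build.Client assemblies are referenced.

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.Build.Client;

        class BuildStatusReport
        {
            static void Main()
            {
                // Placeholder collection URL and project name; substitute your own.
                var collection = new TfsTeamProjectCollection(
                    new Uri("http://tfs.example.com:8080/tfs/DefaultCollection"));
                var buildServer = collection.GetService<IBuildServer>();

                // List every stored build for the project: definition, outcome,
                // who requested it, and when it finished.
                foreach (IBuildDetail build in buildServer.QueryBuilds("MyProject"))
                {
                    Console.WriteLine("{0} | {1} | requested by {2} | finished {3}",
                        build.BuildDefinition.Name,
                        build.Status,       // Succeeded, PartiallySucceeded, Failed, ...
                        build.RequestedFor,
                        build.FinishTime);
                }
            }
        }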

    Read the article

  • How do I prove or disprove "god" objects are wrong?

    - by honestduane
    Problem Summary: Long story short, I inherited a code base and a development team I am not allowed to replace, and the use of God Objects is a big issue. Going forward, I want to have us refactor things, but I am getting push-back from the teams who want to do everything with God Objects "because it's easier", and this means I would not be allowed to refactor. I pushed back citing my years of dev experience and that I'm the new boss who was hired to know these things, and so did the third-party offshore company's account sales rep. This is now at the executive level, my meeting is tomorrow, and I want to go in with a lot of technical ammo to advocate best practices, because I feel it will be cheaper in the long run for the company (and I personally feel that is what the third party is worried about). My issue is that, from a technical level, I know it's good long term, but I'm having trouble with the ultra short term and the 6-month term. While it's something I "know", I can't prove it with references and cited resources outside of one person (Robert C. Martin, aka Uncle Bob), and that is what I am being asked to do: I have been told that having data from one person, and only one person, is not a good enough argument.

    Question: What are some resources I can cite directly (title, year published, page number, quote) by well-known experts in the field that explicitly say this use of "God" Objects/Classes/Systems is bad (or good, since we are looking for the most technically valid solution)?

    Research I have already done: I have a number of books here, and I have searched their indexes for the terms "god object" and "god class". I found that, oddly, the term is almost never used; the copy of the GoF book I have, for example, never uses it (at least according to the index in front of me). I have found it in two books, per the below, but I want more I can use. I checked the Wikipedia page for "God Object", and it is currently a stub with few reference links, so although I personally agree with what it says, it doesn't give me much I can use in an environment where personal experience is not considered valid. The book it cites is also considered too old to be valid by the people I am debating these technical points with; the argument they are making is that "it was once thought to be bad but nobody could prove it, and now modern software says 'god' objects are good to use". I personally believe that this statement is incorrect, but I want to prove the truth, whatever it is.

    In Robert C. Martin's "Agile Principles, Patterns, and Practices in C#" (ISBN: 0-13-185725-8, hardcover), page 266 states: "Everybody knows that god classes are a bad idea. We don't want to concentrate all the intelligence of a system into a single object or a single function. One of the goals of OOD is the partitioning and distribution of behavior into many classes and many functions." It then goes on to say that it is sometimes better to use God Classes anyway (citing micro-controllers as an example).

    In Robert C. Martin's "Clean Code: A Handbook of Agile Software Craftsmanship", page 136 (and only this page) talks about the "God class" and calls it out as a prime example of a violation of the "classes should be small" rule he uses to promote the Single Responsibility Principle, starting on page 138.

    The problem I have is that all my references and citations come from the same person (Robert C. Martin), and so they count as a single source. I am being told that because he is just one guy, my desire to not use "God Classes" is invalid and not accepted as a standard best practice in the software industry. Is this true? Am I doing things wrong from a technical perspective by trying to keep to the teaching of Uncle Bob?

    God Objects and Object Oriented Programming and Design: The more I think of this, the more I think this is something you learn when you study OOP that is never explicitly called out; it's implicit in good design, is my thinking (feel free to correct me, please, as I want to learn). The problem is I "know" this, but not everybody does, so in this case it's not considered a valid argument: I am effectively calling it out as universal truth when in fact most people are statistically ignorant of it, since statistically most people are not programmers.

    Conclusion: I am at a loss on what to search for to get the best additional results to cite, since they are making a technical claim and I want to know the truth and be able to prove it with citations, like a real engineer/scientist, even if I am biased against god objects due to my personal experience with code that used them. Any assistance or citations would be deeply appreciated.
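    To make the trade-off concrete, here is a contrived before/after sketch of my own (an illustration, not an excerpt from any of the cited books): the god class owns persistence, formatting, and business rules at once, while the refactored version partitions those responsibilities per the Single Responsibility Principle.

        using System;

        // Before: one class with several unrelated reasons to change.
        class OrderGod
        {
            public decimal Subtotal { get; set; }
            public void SaveToDatabase() { /* SQL lives here */ }
            public string RenderInvoiceHtml() { return "<html>...</html>"; }
            public decimal CalculateTax() { return Subtotal * 0.2m; }
            public void EmailCustomer() { /* SMTP lives here */ }
        }

        // After: each class has exactly one reason to change.
        class Order
        {
            public decimal Subtotal { get; set; }
        }

        class TaxCalculator
        {
            // The flat 20% rate is a stand-in for real business rules.
            public decimal CalculateTax(Order order) { return order.Subtotal * 0.2m; }
        }

        class OrderRepository
        {
            public void Save(Order order) { /* persistence only */ }
        }

        class InvoiceRenderer
        {
            public string RenderHtml(Order order) { return "<html>...</html>"; }
        }

        class Program
        {
            static void Main()
            {
                var order = new Order { Subtotal = 100m };
                Console.WriteLine(new TaxCalculator().CalculateTax(order)); // prints 20.0
            }
        }

    A change to the tax rules now touches TaxCalculator alone; in the god class, the same change sits beside the persistence and rendering code and risks breaking both.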

    Read the article

  • 10 Best Programming Podcast 2010 Edition

    - by mbcrump
    This list is in no particular order; these are just the 10 best programming podcasts that I have found so far.

    - Stack Overflow Podcast - Jeff Atwood (of codinghorror.com) and Joel Spolsky (of joelonsoftware.com) discuss the development of their new programming community, StackOverflow.com. [This podcast hasn't been updated in a while, but it's always great to hear more from Jeff Atwood]
    - Hanselminutes - A weekly audio talk show with noted web developer and technologist Scott Hanselman, hosted by Carl Franklin. Scott discusses utilities and tools, gives practical how-to advice, and discusses ASP.NET or Windows issues and workarounds. [This podcast has recently started covering random topics like diabetes, plane travel, and geek relationship tips. I am not sure if Scott is trying to move to a more mainstream audience or not]
    - Herding Code - A weekly discussion featuring K. Scott Allen (odetocode.com), Kevin Dente, Scott Koon (lazycoder.com), and Jon Galloway. [Great all-around podcast that I would recommend to all]
    - Deep Fried Bytes - An audio talk show with a Southern flavor hosted by technologists and developers Keith Elder and Chris Woodruff. The show discusses a wide range of topics including application development, operating systems, and technology in general. Anything is fair game if it plugs into the wall or takes a battery. [This is one that just keeps getting better]
    - Dot Net Rocks - .NET Rocks! is an Internet audio talk show for Microsoft .NET developers. [One of the first, and usually very high quality content]
    - Connected Show - A podcast covering new Microsoft technology for the developer community. The show is hosted by Dmitry Lyalin and Peter Laudati. [This and Polymorphic are two of my favorite podcasts; Dmitry is a great host and I would recommend this to all]
    - Polymorphic Podcast - Object-oriented development, architecture, and best practices in .NET. [Craig is an ASP.NET MVP and a great presenter. His podcast is great, and it could only be better if he recorded it more often]
    - ASP.NET Podcast - Wallace B. (Wally) McClure presents interviews and short technical talks on .NET technologies. [Has great information on ASP.NET, of course, as well as iPhone dev]
    - Ruby on Rails Podcast - News and interviews about the Ruby language and the Rails website framework. [Even though I am not a Ruby programmer, I've found this podcast very interesting]
    - Software Engineering Radio - A podcast targeted at the professional software developer. The goal is to be a lasting educational resource, not a newscast. Every ten days, a new episode is published that covers all topics in software engineering. Episodes are either tutorials on a specific topic or an interview with a well-known character from the software engineering world. All SE Radio episodes are original content; they do not record conferences or talks given in other venues. Each episode comprises two speakers to ensure a lively listening experience. SE Radio is an independent and non-commercial organization. [Another excellent podcast; I would recommend any programmer add this to his/her drive home]

    If I have missed something, please feel free to email me and it might make the 2011 list. =)

    Read the article

  • Amanda Todd&ndash;What Parents Can Learn From Her Story

    - by D'Arcy Lussier
    Amanda Todd was a bullied teenager who committed suicide this week. Her story has become headline news due in part to the YouTube video she posted telling her story.

    The story is heartbreaking for so many reasons, but I wanted to talk about what we as parents can learn from this. Being the dad of two girls, one of whom is 10, I'm very aware of the dangers that the internet holds. When I saw her story, one thing jumped out at me: unmonitored internet access at an early age. My daughter (then 9) came home from a friend's place once and asked if she could be in a YouTube video with her friend. Apparently this friend was allowed to do whatever she wanted on the internet, including posting goofy videos. This set off warning bells, and we ensured our daughter realized the dangers and that she was not to ever post videos of herself.

    In looking at Amanda's story, the access to unmonitored internet time, along with being a young girl flattered by an online predator, were the key events that ultimately led to her suicide. Yes, the reaction of her classmates and "friends" was horrible as well; I'm not diminishing that. But our youth don't yet fully understand that what they do on the internet today may follow them forever, and that the people they meet online aren't necessarily who they claim to be. So what can we as parents learn from Amanda's story?

    Parents Shouldn't Feel Bad About Being Internet Police

    Our job as parents is in part to protect our kids and keep them safe, even if they don't like our measures. This includes monitoring, supervising, and restricting their internet activities. In our house we have a family computer in the living room where the kids can watch videos and surf the web. It's in plain view of everyone, so you can't hide what you're looking at. If our daughter goes to a friend's place, we ask about what they did and what they played. If the computer comes up, we ask about what they did on it. Luckily our daughter is very up front and honest in telling us things, so we have very open discussions.

    Parents Need to Be Honest About the Dangers of the Internet

    I'm sure every generation says that "kids grow up so fast these days", but in our case the internet really does expose our kids to things they otherwise wouldn't experience. One wrong word in a Google search, a click of a link in a spam email, or just general curiosity can expose a child to things they aren't ready for or should never be exposed to (and I'm not just talking about adult material; have you seen some of the graphic pictures from war zones posted on news sites recently?). Our stance as parents has been to be open about discussing the dangers with our kids before they encounter any content: be proactive instead of reactionary. Part of this is alerting them to the monsters that lurk on the internet as well. As kids explore the world wide web, they're eventually going to encounter some chat room, some Facebook friend invite, or some other personal connection with someone. More than ever, kids need to be educated on the dangers of engaging with people online and sharing personal information. You can think of it as an evolved version of the discussion our parents had with us about using the phone: "Don't say 'I'm home alone', don't say when mom or dad get home, don't tell them any information, etc."

    Parents Need to Talk Self-Worth at Home

    Katie makes the point better than I ever could (one bad word towards the end): our children need to understand their value beyond what the latest issue of TigerBeat says, or the media that keeps flaunting physical attributes over intelligence and character, or a society that puts focus on status and wealth. They also have to realize that just because someone pays you a compliment, that doesn't mean you should ignore personal boundaries and limits. What does this have to do with the internet? Well, in days past, if you wanted to be social you had to go out somewhere. Now you can video chat with any number of people from the comfort of wherever your laptop happens to be, and not just text but full HD video with sound! While innocent children head online in the hopes of meeting cool people, predators with bad intentions are heading online too. As much as we try to monitor their online activity and be honest about the dangers of the internet, the human side of our kids isn't something we can control. But we can try to influence them to see themselves as not needing to search out the acceptance of complete strangers online. Way easier said than done, but ensuring self-worth is discussed, encouraged, and celebrated is a step in the right direction.

    Parental Wake-Up Call

    This post is not a critique of Amanda's parents. The reality is that cyber bullying and abuse are happening every day, and there are millions of parents who have no clue it's happening to their children. Amanda's story is a wake-up call that our children's online activities may be putting them in danger. My heart goes out to the parents of this girl. As a father of daughters, I can't imagine what I would do if I found my daughter having to hide in a ditch to avoid a mob, or had to call 911 to report that my daughter had attempted suicide by drinking bleach, or had to deal with a child turning to drugs, alcohol, or cutting to cope. It would be horrendous if we as parents didn't re-evaluate our family internet policies in light of this event. And in the end, Amanda's video was meant to bring attention to her plight and encourage others going through the same thing. We may not be kids, but we can still honour her memory by helping safeguard our children.

    Read the article

  • Video games, content strategy, and failure - oh my.

    - by Roger Hart
    Last night was the CS London group's event, Content Strategy, Manhattan Style. Yes, it's a terrible title, feeling like a self-conscious grasp for chic, sadly commensurate with the venue. Fortunately, this was not commensurate with the event itself, which was lively, relevant, and engaging. Although mostly if you're a consultant.

    This is a strong strain in current content strategy discourse, and I think we're going to see it remedied quite soon, not least in Paris on Friday. A lot of the bloggers, speakers, and commentators in the sphere are consultants, or part of agencies and other consulting organisations, and a lot of the talk is about how you sell content strategy to your clients. This is completely acceptable. Of course it is. And it's actually useful if that's something you regularly have to do. To an extent, it's even portable to those of us who have to sell content strategy within an organisation; we're still competing for credibility and resource. What we're doing less of is living in the beginning of a project.

    This was touched on by Jeffrey MacIntyre (albeit in a your-clients kind of a way), who described "the day two problem". Companies, he suggested, build websites for launch day and forget about the need for them to be ongoing entities. Consultants, agencies, or even internal folks on short projects will live through Day Two quite often: the trainwreck moment where somebody realises that even if the content is right (which it often isn't), and on time (which it often isn't), it'll be redundant, outdated, or inaccurate by the end of the week/month/fickle social media attention cycle.

    The thing about living through a lot of Day Two is that you see a lot of failure. Nothing succeeds like failure? Failure is good. When it's structured right, it's an awesome tool for learning; that's kind of how video games work. I'm chewing over a whole blog post about this, but basically, in game-like learning you try, fail, and go round the loop again. Success eventually yields joy. It's a relatively well-known phenomenon, and it works best when the failing step is acutely felt but extremely inexpensive. Dying in Portal is highly frustrating and surprisingly characterful, but the save points are well designed and the reload unintrusive. The barrier to re-entry into the loop is very low, as is the cost of your failure out in meatspace. So it's easy (and fun) to learn. Yeah, spot the difference with business failure.

    As an external content strategist, you get to rock up with a big old folder full of other companies' Day Two (and ongoing day two hundred) failures. You can't send the client round the learning loop, although you may well be there because they've been round it once, but you can show them other people's round trip. It's not as compelling, but it's not bad.

    What about internal content strategists? We can still point to things that are wrong, and there are some very compelling tools at our disposal: content inventories, user testing, and analytics, for instance. But if we're picking up big, organically sprawling legacy content, Day Two may well be a distant memory, and the felt experience of web content failure is unlikely to be immediate to many people in the organisation. What to do? My hunch here is that the first task is to create something immediate and felt, but that it probably needs to be a success: something quickly doable and visible, a content problem solved with a measurable business result. Now, that's a tall order; but scrape off the "quickly" and it's the whole reason we're here.

    At Red Gate, I've started with the textbook fear-and-passion introduction to content strategy. In fact, I just typo'd that as "contempt strategy", and it isn't a bad description. Yelling "look at this, our website is rubbish!" gets you the initial attention, but it doesn't make you many friends, and if you don't produce something pretty sharp-ish, it's easy to lose the momentum you built up for change.

    The first thing I've done, after the visual content inventory, is to delete a bunch of stuff. About 70% of the SQL Compare web content has gone, in fact. This is a really, really cheap operation. It's visible, and it's powerful. It's cheap because you don't have to create any new content. It's not free, however, because you do have to validate your deletions. This means analytics, actually reading that content, and talking to people whose business purposes that content has to serve. If nobody outside the company uses it, and nobody inside the company thinks they ought to, that's a no-brainer for the delete list.

    The payoff here is twofold. There's the nebulous, hard-to-illustrate "bad content does user experience and brand damage" argument, and there's the "nobody has to spend time (money) maintaining this now" argument. One or both are easily felt, and the second at least should be measurable. But that's just one approach, and I'd be interested to hear from any other internal content strategy folks about how they get buy-in, maintain momentum, and generally get things done.
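    Since validating deletions against analytics is the repeatable part of this, here is a minimal sketch of a first-pass filter; the CSV file name and its url,pageviews layout are assumptions of my own, and zero traffic only nominates a page for the delete list, it doesn't decide.

        using System;
        using System.IO;
        using System.Linq;

        class DeleteListBuilder
        {
            static void Main()
            {
                // Assumed export format: "url,pageviews", one row per page.
                var candidates = File.ReadLines("pageviews.csv")
                    .Skip(1) // skip the assumed header row
                    .Select(line => line.Split(','))
                    .Where(cols => cols.Length == 2 && cols[1].Trim() == "0")
                    .Select(cols => cols[0]);

                // Each candidate still needs a human read-through and a check
                // with the people whose business purposes it has to serve.
                foreach (var url in candidates)
                {
                    Console.WriteLine("Delete candidate: " + url);
                }
            }
        }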

    Read the article

< Previous Page | 104 105 106 107 108 109 110 111 112 113 114 115  | Next Page >