Search Results

Search found 107 results on 5 pages for 'hunch'.


  • Atomikos rollback doesn't clear JPA persistence context?

    - by HDave
    I have a Spring/JPA/Hibernate application and am trying to get it to pass my JUnit integration tests against H2 and MySQL. Currently I am using Atomikos for transactions and C3P0 for connection pooling. Despite my best efforts, one of my DAO integration tests is failing with org.hibernate.NonUniqueObjectException. In the failing test I create an object with the "new" operator, set the ID and call persist on it. @Test @Transactional public void save_UserTestDataNewObject_RecordSetOneLarger() { int expectedNumberRecords = 4; User newUser = createNewUser(); dao.persist(newUser); List<User> allUsers = dao.findAll(0, 1000); assertEquals(expectedNumberRecords, allUsers.size()); } In the previous test method I do the same thing (createNewUser() is a helper method that creates an object with the same ID every time). I am sure that creating and persisting a second object with the same ID is the cause, but each test method is in its own transaction and the object I created is bound to a private test method variable. I can even see in the logs that Spring Test and Atomikos are rolling back the transaction associated with each test method. I would have thought the rollback would have cleared the persistence context too. On a hunch, I added a call to dao.clear() at the beginning of the faulty test method and the problem went away! So rollback doesn't clear the persistence context? If not, then who does? My EntityManagerFactory config is as follows: <bean id="myappTestLocalEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="myapp-core" /> <property name="persistenceUnitPostProcessors"> <bean class="com.myapp.core.persist.util.JtaPersistenceUnitPostProcessor"> <property name="jtaDataSource" ref="myappPersistTestJdbcDataSource" /> </bean> </property> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="showSql" value="true" /> <property name="database" value="$DS{hibernate.database}" /> <property name="databasePlatform" value="$DS{hibernate.dialect}" /> </bean> </property> <property name="jpaProperties"> <props> <prop key="hibernate.transaction.factory_class">com.atomikos.icatch.jta.hibernate3.AtomikosJTATransactionFactory</prop> <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop> <prop key="hibernate.connection.autocommit">false</prop> <prop key="hibernate.format_sql">true</prop> <prop key="hibernate.use_sql_comments">true</prop> </props> </property> </bean>
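
    A minimal sketch (an editor's addition, not from the original post) that generalizes the workaround described above: explicitly clearing the persistence context before every test, so entities touched by a previous, rolled-back test cannot collide with the next one. It assumes a Spring-injected EntityManager; the class name and context location are illustrative.

        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;
        import org.junit.Before;
        import org.junit.runner.RunWith;
        import org.springframework.test.context.ContextConfiguration;
        import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
        import org.springframework.transaction.annotation.Transactional;

        // The context file name here is illustrative, not from the post.
        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration("classpath:test-context.xml")
        @Transactional
        public abstract class AbstractClearingDaoTest {

            @PersistenceContext
            private EntityManager entityManager;

            @Before
            public void clearPersistenceContext() {
                // A JTA rollback undoes the database changes from the previous test,
                // but it does not guarantee that the entities it touched have been
                // detached from a long-lived persistence context, so detach them
                // explicitly before each test runs.
                entityManager.clear();
            }
        }

    Whether rollback alone should detach entities depends on how the EntityManager is scoped in this Atomikos/JTA setup; treating clear() as part of test setup keeps each test independent either way.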


  • How can I get my Web API app to run again after upgrading to MVC 5 and Web API 2?

    - by Clay Shannon
    I upgraded my Web API app to the funkelnagelneu versions using this guidance: http://www.asp.net/mvc/tutorials/mvc-5/how-to-upgrade-an-aspnet-mvc-4-and-web-api-project-to-aspnet-mvc-5-and-web-api-2 However, after going through the steps (it seems all this should be automated, anyway), I tried to run it and got, "A project with an Output Type of Class Library cannot be started directly" What in Sam Hills Brothers Coffee is going on here? Who said this was a class library? So I opened Project Properties, and changed it (it was marked as "Class Library" for some reason - it either wasn't yesterday, or was and worked fine) to an Output Type of "Windows Application" ("Console Application" and "Class Library" being the only other options). Now it won't compile, complaining: "*Program 'c:\Platypus_Server_WebAPI\PlatypusServerWebAPI\PlatypusServerWebAPI\obj\Debug\PlatypusServerWebAPI.exe' does not contain a static 'Main' method suitable for an entry point...*" How can I get my Web API app back up and running in view of this quandary? UPDATE Looking in packages.config, two entries seem chin-scratch-worthy: <package id="Microsoft.AspNet.Providers" version="1.2" targetFramework="net40" /> <package id="Microsoft.Web.Infrastructure" version="1.0.0.0" targetFramework="net40" /> All the others target net451. Could this be the problem? Should I remove these packages? UPDATE 2 I tried to uninstall the Microsoft.Web.Infrastructure package (its description leads me to believe I don't need it; also, it has no dependencies) via the NuGet package manager, but it tells me, "NuGet failed to install or uninstall the selected package in the following project(s). [mine]" UPDATE 3 I went through the steps again, and found that I had missed one step. I had to change this entry in the application web.config file: <dependentAssembly> <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" /> <bindingRedirect oldVersion="1.0.0.0-5.0.0.0" newVersion="5.0.0.0" /> </dependentAssembly> (from "4.0.0.0" to "5.0.0.0") ...but I still get the same result - it rebuilds/compiles, but won't run, with "A project with an Output Type of Class Library cannot be started directly" UPDATE 4 Thinking about the err msg, that it can't directly open a class library, I thought, "Sure you can't/won't -- this is a web app, not a project." So I followed a hunch, closed the project, and reopened it as a website (instead of reopening a project). That has gotten me further, at least; now I see a YSOD: Could not load file or assembly 'System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified. UPDATE 5 Note: The project is now (after being opened as a web site) named "localhost_48614" And...there is no longer a "References" folder...?!?!? As to that YSOD I'm getting, the official instructions (http://www.asp.net/mvc/tutorials/mvc-5/how-to-upgrade-an-aspnet-mvc-4-and-web-api-project-to-aspnet-mvc-5-and-web-api-2) said to do this, and I quote: "Update all elements that contain “System.Web.WebPages.Razor” from version “2.0.0.0” to version “3.0.0.0”." UPDATE 6 When I select Tools > Library Package Manager > Manage NuGet Packages for Solution now, I get, "Operation failed. Unable to locate the solution directory. Please ensure that the solution has been saved." 
So I save it, and it saves it with this funky new name (C:\Users\clay\Documents\Visual Studio 2013\Projects\localhost_48614\localhost_48614.sln) I get the Yellow Strip of Enlightenment across the top of the NuGet Package Manager telling me, "Some NuGet packages are missing from this solution. Click to restore from your online package sources." I do (click the "Restore" button, that is), and it downloads the missing packages ... I end up with the 30 packages. I try to run the app/site again, and ... the erstwhile YSOD becomes a compilation error: The pre-application start initialization method Start on type System.Web.Mvc.PreApplicationStartCode threw an exception with the following error message: Could not load file or assembly 'System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.. Argghhhh!!! (and it's not even talk-like-a-pirate day).


  • Get Image Source URLs from a Different Page Using JS

    - by SDD
    Everyone: I'm trying to grab the source URLs of images from one page and use them in some JavaScript in another page. I know how to pull in images using JQuery .load(). However, rather than load all the images and display them on the page, I want to just grab the source URLs so I can use them in a JS array. Page 1 is just a page with images: <html> <head> </head> <body> <img id="image0" src="image0.jpg" /> <img id="image1" src="image1.jpg" /> <img id="image2" src="image2.jpg" /> <img id="image3" src="image3.jpg" /> </body> </html> Page 2 contains my JS. (Please note that the end goal is to load images into an array, randomize them, and using cookies, show a new image on page load every 10 seconds. All this is working. However, rather than hard code the image paths into my javascript as shown below, I'd prefer to take the paths from Page 1 based on their IDs. This way, the images won't always need to be titled "image1.jpg," etc.) <script type = "text/javascript"> var days = 730; var rotator = new Object(); var currentTime = new Date(); var currentMilli = currentTime.getTime(); var images = [], index = 0; images[0] = "image0.jpg"; images[1] = "image1.jpg"; images[2] = "image2.jpg"; images[3] = "image3.jpg"; rotator.getCookie = function(Name) { var re = new RegExp(Name+"=[^;]+", "i"); if (document.cookie.match(re)) return document.cookie.match(re)[0].split("=")[1]; return''; } rotator.setCookie = function(name, value, days) { var expireDate = new Date(); var expstring = expireDate.setDate(expireDate.getDate()+parseInt(days)); document.cookie = name+"="+value+"; expires="+expireDate.toGMTString()+"; path=/"; } rotator.randomize = function() { index = Math.floor(Math.random() * images.length); randomImageSrc = images[index]; } rotator.check = function() { if (rotator.getCookie("randomImage") == "") { rotator.randomize(); document.write("<img src=" + randomImageSrc + ">"); rotator.setCookie("randomImage", randomImageSrc, days); rotator.setCookie("timeClock", currentMilli, days); } else { var writtenTime = parseInt(rotator.getCookie("timeClock"),10); if ( currentMilli > writtenTime + 10000 ) { rotator.randomize(); var writtenImage = rotator.getCookie("randomImage") while ( randomImageSrc == writtenImage ) { rotator.randomize(); } document.write("<img src=" + randomImageSrc + ">"); rotator.setCookie("randomImage", randomImageSrc, days); rotator.setCookie("timeClock", currentMilli, days); } else { var writtenImage = rotator.getCookie("randomImage") document.write("<img src=" + writtenImage + ">"); } } } rotator.check() </script> Can anyone point me in the right direction? My hunch is to use JQuery .get(), but I've been unsuccessful so far. Please let me know if I can clarify!


  • Ruby erb template- try to change layout- get error

    - by nigel curruthers
    Hi there! I'm working my way through adapting a template I have been given that is basically a list of products for sale. I want to change it from a top-down list into a table layout. I want to end up with something as follows: <div id= 'ladiesproducts'> <% ladies_products = hosting_products.find_all do |product| product.name.match("ladies") end %> <table><tbody> <% [ladies_products].each do | slice | %> <tr> <% slice.each do | product | %> <td> <h4><%= product.name.html %></h4> <p><%= product.description %></p> <% other parts go here %> </td> <% end %> </tr> <% end %> </tbody></table> </div> This works fine for the layout that I am trying to achieve. The problem I have is when I paste back the <% other parts go here %> part of the code. I get an internal error message on the page. I am completely new to Ruby so am just bumbling my way through this really. I have a hunch that I'm neglecting something that is probably very simple. The <% other parts go here %> code is as follows: <input type='hidden' name='base_renewal_period-<%= i %>' value="<%= product.base_renewal_period %>" /> <input type='hidden' name='quoted_unit_price-<%= i %>' value="<%= billing.price(product.unit_price) %>" /> <p><input type='radio' name='add-product' value='<%= product.specific_type.html %>:<%= i %>:base_renewal_period,quoted_unit_price,domain' /><%= billing.currency_symbol.html %><%= billing.price(product.unit_price, :use_tax_prefs) %> <% if product.base_renewal_period != 'never' %> every <%= product.unit_period.to_s_short.html %> <% end %> <% if product.setup_fee != 0 %> plus a one off fee of <%= billing.currency_symbol.html %><%= sprintf("%.2f", if billing.include_tax? then billing.price(product.setup_fee) else product.setup_fee end) %> <% end %> <% if product.has_free_products? %> <br /> includes free domains <% product.free_products_list.each do | free_product | %> <%= free_product["free_name"] %> <% end %> <% end %> * </p> <% i = i + 1 %> <% end %> <p><input type='submit' value='Add to Basket'/></p> </form> <% unless basket.nil? or basket.empty? or no_upsell? %> <p><a href='basket?add-no-product=package'>No thank you, please continue with my order ...</a></p> <% end %> <% if not billing.tax_applies? %> <% elsif billing.include_tax? %> <p>* Includes <%= billing.tax_name %></p> <% else %> <p>* Excluding <%= billing.tax_name %></p> <% end %> If anyone can point out what I'm doing wrong or what I'm missing or failing to change I would GREATLY appreciate it! Many thanks in advance. Nigel


  • How to diagnose frequent segfaults

    - by Andreas Gohr
    My server is logging frequent segmentation faults to /var/log/kern.log in different tools. So far I've seen them in Perl, PHP and rsync. All installed software is up-to-date Debian packages. Here's an exerpt from the log file: Mar 2 01:07:54 gaz kernel: [ 5316.246303] imapsync[4533]: segfault at 8b ip 00007fb448c98fe6 sp 00007ffff571dd68 error 4 in libperl.so.5.10.1[7fb448bd7000+164000] Mar 2 01:17:42 gaz kernel: [ 5904.354307] php5-cgi[4441]: segfault at 2bb3dc8 ip 0000000002bb3dc8 sp 00007fffbeeaae48 error 15 Mar 2 02:54:05 gaz kernel: [11687.922316] php5-cgi[4495]: segfault at 2d7acf9 ip 0000000002d7acf9 sp 00007fff60c6eb18 error 15 Mar 2 10:50:08 gaz kernel: [40250.390322] BUG: unable to handle kernel paging request at 00000000024b03f0 Mar 2 10:50:08 gaz kernel: [40250.390341] IP: [<00000000024b03f0>] 0x24b03f0 Mar 2 10:50:08 gaz kernel: [40250.390353] PGD 208c71067 PUD 21c811067 PMD 209329067 PTE 8000000211c88067 Mar 2 10:50:08 gaz kernel: [40250.390365] Oops: 0011 [#1] SMP Mar 2 10:50:08 gaz kernel: [40250.390373] last sysfs file: /sys/devices/pci0000:00/0000:00:12.0/host4/target4:0:0/4:0:0:0/block/sdb/stat Mar 2 10:50:08 gaz kernel: [40250.390386] CPU 1 Mar 2 10:50:08 gaz kernel: [40250.390392] Modules linked in: cpufreq_userspace cpufreq_stats cpufreq_powersave cpufreq_conservative xt_recent xt_tcpudp iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ ipv4 ip6table_filter ip6_tables xt_DSCP xt_TCPMSS ipt_LOG ipt_REJECT iptable_mangle iptable_filter xt_multiport xt_state xt_limit xt_conntrack nf_conntrack_ftp nf_conntrack ip_tables x_tables loop snd _hda_codec_atihdmi snd_hda_intel snd_hda_codec snd_hwdep snd_pcm radeon snd_timer ttm snd drm_kms_helper soundcore drm snd_page_alloc i2c_algo_bit shpchp i2c_piix4 edac_core pcspkr k8temp evdev edac_m ce_amd pci_hotplug i2c_core button ext3 jbd mbcache dm_mod powernow_k8 aacraid 3w_9xxx 3w_xxxx raid10 raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx raid1 raid0 md_mod sata_nv sata_sil sata_via sd_mod crc_t10dif ata_generic ahci pata_atiixp ohci_hcd libata r8169 mii thermal ehci_hcd processor thermal_sys scsi_mod usbcore nls_base [last unloaded: scsi_wait_scan] Mar 2 10:50:08 gaz kernel: [40250.390566] Pid: 11482, comm: munin-limits Not tainted 2.6.32-5-amd64 #1 MS-7368 Mar 2 10:50:08 gaz kernel: [40250.390576] RIP: 0010:[<00000000024b03f0>] [<00000000024b03f0>] 0x24b03f0 Mar 2 10:50:08 gaz kernel: [40250.390586] RSP: 0018:ffff88021cc8dec0 EFLAGS: 00010286 Mar 2 10:50:08 gaz kernel: [40250.390593] RAX: 000000001ddc1000 RBX: 0000000000000010 RCX: ffffffff810f9904 Mar 2 10:50:08 gaz kernel: [40250.390600] RDX: 0000000000000000 RSI: ffffea0007688200 RDI: 0000000000000286 Mar 2 10:50:08 gaz kernel: [40250.390608] RBP: 00000000ffffffea R08: 0000000000000025 R09: 7865542f30312e35 Mar 2 10:50:08 gaz kernel: [40250.390615] R10: 000000d01cc8ddf8 R11: 0000000000000246 R12: ffff88021cc8def8 Mar 2 10:50:08 gaz kernel: [40250.390622] R13: 0000000002295010 R14: 00000000022c9db0 R15: 0000000002488d78 Mar 2 10:50:08 gaz kernel: [40250.390630] FS: 00007f3b3c8b2700(0000) GS:ffff880008d00000(0000) knlGS:0000000000000000 Mar 2 10:50:08 gaz kernel: [40250.390641] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 Mar 2 10:50:08 gaz kernel: [40250.390648] CR2: 00000000024b03f0 CR3: 000000021c5d1000 CR4: 00000000000006e0 Mar 2 10:50:08 gaz kernel: [40250.390656] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 Mar 2 10:50:08 gaz kernel: [40250.390663] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 
0000000000000400 Mar 2 10:50:08 gaz kernel: [40250.390671] Process munin-limits (pid: 11482, threadinfo ffff88021cc8c000, task ffff88021bf59530) Mar 2 10:50:08 gaz kernel: [40250.390681] Stack: Mar 2 10:50:08 gaz kernel: [40250.390687] ffffffff810f1d4a ffff880208c63228 0000000000000000 00007fffc2dcecc0 Mar 2 10:50:08 gaz kernel: [40250.390697] <0> 00000000024ba2b0 0000000002295010 ffffffff810f1e3d 0000000000000004 Mar 2 10:50:08 gaz kernel: [40250.390712] <0> ffff88021bf59530 ffff88021c4edc00 ffffffff812fe0b6 ffff88021c4edc60 Mar 2 10:50:08 gaz kernel: [40250.390732] Call Trace: Mar 2 10:50:08 gaz kernel: [40250.390742] [<ffffffff810f1d4a>] ? vfs_fstatat+0x2c/0x57 Mar 2 10:50:08 gaz kernel: [40250.390750] [<ffffffff810f1e3d>] ? sys_newstat+0x11/0x30 Mar 2 10:50:08 gaz kernel: [40250.390760] [<ffffffff812fe0b6>] ? do_page_fault+0x2e0/0x2fc Mar 2 10:50:08 gaz kernel: [40250.390768] [<ffffffff812fbf55>] ? page_fault+0x25/0x30 Mar 2 10:50:08 gaz kernel: [40250.390777] [<ffffffff81010b42>] ? system_call_fastpath+0x16/0x1b Mar 2 10:50:08 gaz kernel: [40250.390783] Code: Bad RIP value. Mar 2 10:50:08 gaz kernel: [40250.390791] RIP [<00000000024b03f0>] 0x24b03f0 Mar 2 10:50:08 gaz kernel: [40250.390799] RSP <ffff88021cc8dec0> Mar 2 10:50:08 gaz kernel: [40250.390805] CR2: 00000000024b03f0 Mar 2 10:50:08 gaz kernel: [40250.391051] ---[ end trace 1cc1473b539c7f6e ]--- Mar 2 11:42:20 gaz kernel: [43382.242301] php5-cgi[10963]: segfault at d81160 ip 0000000000d81160 sp 00007fff3adcb058 error 15 Mar 2 21:51:14 gaz kernel: [79916.418302] php5-cgi[20089]: segfault at 1c59dc8 ip 0000000001c59dc8 sp 00007fff9b877fb8 error 15 Mar 3 03:45:01 gaz kernel: [101143.334305] munin-update[22519] general protection ip:7f516dce204c sp:7fff6049a978 error:0 in libperl.so.5.10.1[7f516dc7d000+164000] Mar 3 11:22:37 gaz kernel: [128599.570307] php5-cgi[22888]: segfault at 36485a8 ip 00000000036485a8 sp 00007fff2d56e1c8 error 15 Mar 4 08:32:17 gaz kernel: [204779.842304] php5-cgi[22090]: segfault at 18 ip 0000000000689e5e sp 00007fff677a6a48 error 6 in php5-cgi[400000+6f9000] Mar 4 10:01:02 gaz kernel: [210104.434706] rsync[22236] general protection ip:7f14a07137f9 sp:7fff88f940b8 error:0 in libc-2.11.2.so[7f14a069d000+158000] Mar 4 11:32:22 gaz kernel: [215584.262316] BUG: unable to handle kernel paging request at 00000000ffffff9c Mar 4 11:32:22 gaz kernel: [215584.262331] IP: [<00000000ffffff9c>] 0xffffff9c Mar 4 11:32:22 gaz kernel: [215584.262343] PGD 0 Mar 4 11:32:22 gaz kernel: [215584.262350] Oops: 0010 [#2] SMP Mar 4 11:32:22 gaz kernel: [215584.262359] last sysfs file: /sys/devices/pci0000:00/0000:00:12.0/host4/target4:0:0/4:0:0:0/block/sdb/stat Mar 4 11:32:22 gaz kernel: [215584.262371] CPU 1 Mar 4 11:32:22 gaz kernel: [215584.262378] Modules linked in: cpufreq_userspace cpufreq_stats cpufreq_powersave cpufreq_conservative xt_recent xt_tcpudp iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 ip6table_filter ip6_tables xt_DSCP xt_TCPMSS ipt_LOG ipt_REJECT iptable_mangle iptable_filter xt_multiport xt_state xt_limit xt_conntrack nf_conntrack_ftp nf_conntrack ip_tables x_tables loop snd_hda_codec_atihdmi snd_hda_intel snd_hda_codec snd_hwdep snd_pcm radeon snd_timer ttm snd drm_kms_helper soundcore drm snd_page_alloc i2c_algo_bit shpchp i2c_piix4 edac_core pcspkr k8temp evdev edac_mce_amd pci_hotplug i2c_core button ext3 jbd mbcache dm_mod powernow_k8 aacraid 3w_9xxx 3w_xxxx raid10 raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx raid1 raid0 md_mod sata_nv sata_sil sata_via sd_mod 
crc_t10dif ata_generic ahci pata_atiixp ohci_hcd libata r8169 mii thermal ehci_hcd processor thermal_sys scsi_mod usbcore nls_base [last unloaded: scsi_wait_scan] Mar 4 11:32:22 gaz kernel: [215584.262552] Pid: 1960, comm: proxymap Tainted: G D 2.6.32-5-amd64 #1 MS-7368 Mar 4 11:32:22 gaz kernel: [215584.262563] RIP: 0010:[<00000000ffffff9c>] [<00000000ffffff9c>] 0xffffff9c Mar 4 11:32:22 gaz kernel: [215584.262573] RSP: 0018:ffff880209257e00 EFLAGS: 00010212 Mar 4 11:32:22 gaz kernel: [215584.262580] RAX: ffff8801514eb780 RBX: ffffffff810efb2d RCX: 0000000000000000 Mar 4 11:32:22 gaz kernel: [215584.262590] RDX: 0000000000000020 RSI: 0000000000000001 RDI: ffff8801514eb780 Mar 4 11:32:22 gaz kernel: [215584.262600] RBP: 00000000ffffffe9 R08: 0000000000000000 R09: 0000000000000000 Mar 4 11:32:22 gaz kernel: [215584.262611] R10: ffff880209257e78 R11: ffffffff81152c7c R12: 0000000000000001 Mar 4 11:32:22 gaz kernel: [215584.262622] R13: 0000000000008001 R14: 0000000000000024 R15: 00000000ffffff9c Mar 4 11:32:22 gaz kernel: [215584.262633] FS: 00007fca4de35700(0000) GS:ffff880008d00000(0000) knlGS:0000000000000000 Mar 4 11:32:22 gaz kernel: [215584.262644] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 Mar 4 11:32:22 gaz kernel: [215584.262650] CR2: 00000000ffffff9c CR3: 00000001c9cbb000 CR4: 00000000000006e0 Mar 4 11:32:22 gaz kernel: [215584.262661] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 Mar 4 11:32:22 gaz kernel: [215584.262671] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Mar 4 11:32:22 gaz kernel: [215584.262682] Process proxymap (pid: 1960, threadinfo ffff880209256000, task ffff88021c4b1c40) Mar 4 11:32:22 gaz kernel: [215584.262693] Stack: Mar 4 11:32:22 gaz kernel: [215584.262698] ffffffff810f8566 ffff880209257e78 ffff88021c7bf000 ffff88021c7bf0c8 Mar 4 11:32:22 gaz kernel: [215584.262709] <0> 0000800000000000 ffff88021fc0f000 ffff880209257e78 00000000fffffffe Mar 4 11:32:22 gaz kernel: [215584.262724] <0> ffffffff810e5881 ffff880209257f48 0000000000000286 ffff88021fc0f000 Mar 4 11:32:22 gaz kernel: [215584.262743] Call Trace: Mar 4 11:32:22 gaz kernel: [215584.262753] [<ffffffff810f8566>] ? do_filp_open+0xa7/0x94b Mar 4 11:32:22 gaz kernel: [215584.262763] [<ffffffff810e5881>] ? virt_to_head_page+0x9/0x2a Mar 4 11:32:22 gaz kernel: [215584.262771] [<ffffffff810f9904>] ? user_path_at+0x52/0x79 Mar 4 11:32:22 gaz kernel: [215584.262779] [<ffffffff810cfec1>] ? get_unmapped_area+0xd7/0x139 Mar 4 11:32:22 gaz kernel: [215584.262787] [<ffffffff811019d5>] ? alloc_fd+0x67/0x10c Mar 4 11:32:22 gaz kernel: [215584.262795] [<ffffffff810eceaf>] ? do_sys_open+0x55/0xfc Mar 4 11:32:22 gaz kernel: [215584.262804] [<ffffffff81010b42>] ? system_call_fastpath+0x16/0x1b Mar 4 11:32:22 gaz kernel: [215584.262811] Code: Bad RIP value. Mar 4 11:32:22 gaz kernel: [215584.262819] RIP [<00000000ffffff9c>] 0xffffff9c Mar 4 11:32:22 gaz kernel: [215584.262828] RSP <ffff880209257e00> Mar 4 11:32:22 gaz kernel: [215584.262833] CR2: 00000000ffffff9c Mar 4 11:32:22 gaz kernel: [215584.263077] ---[ end trace 1cc1473b539c7f6f ]--- As you can see there are segfaults, a general protection fault and a Kernel Oops. My first guess was that there's a Hardware problem of some sort and I asked my Hoster (it's a rented root server) to do a full hardwarecheck - they did, but couldn't find any problem. I don't know what and how they checked but their support team is usually quite good. I ran memtester and cpuburn myself and couldn't find any error either. 
Unfortunately I have no reliable way to reproduce these segfaults, they seem to be more or less random. On a hunch I disabled the firewall of the system and ran one of the programs that segfaulted regularily (imapsync) and it seemed to take longer to segfault than before, so the problem might be related to the network stack. Or could just be a random thing. Here are the kernel specs: # uname -a Linux gaz 2.6.32-5-amd64 #1 SMP Wed Jan 12 03:40:32 UTC 2011 x86_64 GNU/Linux # cat /etc/debian_version 6.0 # lsmod Module Size Used by cpufreq_userspace 1992 0 cpufreq_stats 2659 0 cpufreq_powersave 902 0 cpufreq_conservative 5162 0 xt_recent 5977 0 xt_tcpudp 2319 0 iptable_nat 4299 0 nf_nat 13388 1 iptable_nat nf_conntrack_ipv4 9833 3 iptable_nat,nf_nat nf_defrag_ipv4 1139 1 nf_conntrack_ipv4 ip6table_filter 2384 0 ip6_tables 15075 1 ip6table_filter xt_DSCP 1995 0 xt_TCPMSS 2919 0 ipt_LOG 4518 0 ipt_REJECT 1953 0 iptable_mangle 2817 0 iptable_filter 2258 0 xt_multiport 2267 0 xt_state 1303 0 xt_limit 1782 0 xt_conntrack 2407 0 nf_conntrack_ftp 5537 0 nf_conntrack 46535 6 iptable_nat,nf_nat,nf_conntrack_ipv4,xt_state,xt_conntrack,nf_conntrack_ftp ip_tables 13899 3 iptable_nat,iptable_mangle,iptable_filter x_tables 12845 13 xt_recent,xt_tcpudp,iptable_nat,ip6_tables,xt_DSCP,xt_TCPMSS,ipt_LOG,ipt_REJECT,xt_multiport,xt_state,xt_limit,xt_conntrack,ip_tables loop 11799 0 radeon 573996 0 ttm 39986 1 radeon drm_kms_helper 20065 1 radeon snd_hda_codec_atihdmi 2251 1 drm 142359 3 radeon,ttm,drm_kms_helper snd_hda_intel 20019 0 i2c_algo_bit 4225 1 radeon pcspkr 1699 0 i2c_piix4 8328 0 snd_hda_codec 54244 2 snd_hda_codec_atihdmi,snd_hda_intel i2c_core 15712 5 radeon,drm_kms_helper,drm,i2c_algo_bit,i2c_piix4 snd_hwdep 5380 1 snd_hda_codec snd_pcm 60503 2 snd_hda_intel,snd_hda_codec snd_timer 15582 1 snd_pcm snd 46446 5 snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_timer soundcore 4598 1 snd evdev 7352 3 snd_page_alloc 6249 2 snd_hda_intel,snd_pcm k8temp 3283 0 edac_core 29261 0 edac_mce_amd 6433 0 shpchp 26264 0 pci_hotplug 21203 1 shpchp button 4650 0 ext3 106518 2 jbd 37085 1 ext3 mbcache 5050 1 ext3 dm_mod 53754 0 powernow_k8 10978 1 aacraid 59779 0 3w_9xxx 28684 0 3w_xxxx 20569 0 raid10 17809 0 raid456 44500 0 async_raid6_recov 5170 1 raid456 async_pq 3479 2 raid456,async_raid6_recov raid6_pq 77179 2 async_raid6_recov,async_pq async_xor 2478 3 raid456,async_raid6_recov,async_pq xor 4380 1 async_xor async_memcpy 1198 2 raid456,async_raid6_recov async_tx 1734 5 raid456,async_raid6_recov,async_pq,async_xor,async_memcpy raid1 18431 3 raid0 5517 0 md_mod 73824 7 raid10,raid456,raid1,raid0 sata_nv 19166 0 sata_sil 7412 0 sata_via 7928 0 sd_mod 29889 8 crc_t10dif 1276 1 sd_mod ata_generic 3047 0 ahci 32374 6 r8169 29229 0 mii 3210 1 r8169 thermal 11674 0 pata_atiixp 3489 0 libata 133632 6 sata_nv,sata_sil,sata_via,ata_generic,ahci,pata_atiixp ohci_hcd 19212 0 ehci_hcd 31151 0 processor 29935 1 powernow_k8 thermal_sys 11942 2 thermal,processor scsi_mod 122149 5 aacraid,3w_9xxx,3w_xxxx,sd_mod,libata usbcore 122034 3 ohci_hcd,ehci_hcd nls_base 6377 1 usbcore # free total used free shared buffers cached Mem: 8166128 1228036 6938092 0 140412 782060 -/+ buffers/cache: 305564 7860564 Swap: 2102456 0 2102456 So, basically my questions are: How can I diagnose this further? Is there any data in the log above that could help me to isolate the troublemaker? Are there any known problems with the above hardware/software I overlooked when googling for it? 
Is there a way to prevent the kernel from autoloading modules? (I probably don't need all of these modules, and one of them might be the culprit.)


  • Null-free "maps": Is a callback solution slower than tryGet()?

    - by David Moles
    In comments to "How to implement List, Set, and Map in null free design?", Steven Sudit and I got into a discussion about using a callback, with handlers for "found" and "not found" situations, vs. a tryGet() method, taking an out parameter and returning a boolean indicating whether the out parameter had been populated. Steven maintained that the callback approach was more complex and almost certain to be slower; I maintained that the complexity was no greater and the performance at worst the same. But code speaks louder than words, so I thought I'd implement both and see what I got. The original question was fairly theoretical with regard to language ("And for argument sake, let's say this language don't even have null") -- I've used Java here because that's what I've got handy. Java doesn't have out parameters, but it doesn't have first-class functions either, so style-wise, it should suck equally for both approaches. (Digression: As far as complexity goes: I like the callback design because it inherently forces the user of the API to handle both cases, whereas the tryGet() design requires callers to perform their own boilerplate conditional check, which they could forget or get wrong. But having now implemented both, I can see why the tryGet() design looks simpler, at least in the short term.) First, the callback example: class CallbackMap<K, V> { private final Map<K, V> backingMap; public CallbackMap(Map<K, V> backingMap) { this.backingMap = backingMap; } void lookup(K key, Callback<K, V> handler) { V val = backingMap.get(key); if (val == null) { handler.handleMissing(key); } else { handler.handleFound(key, val); } } } interface Callback<K, V> { void handleFound(K key, V value); void handleMissing(K key); } class CallbackExample { private final Map<String, String> map; private final List<String> found; private final List<String> missing; private Callback<String, String> handler; public CallbackExample(Map<String, String> map) { this.map = map; found = new ArrayList<String>(map.size()); missing = new ArrayList<String>(map.size()); handler = new Callback<String, String>() { public void handleFound(String key, String value) { found.add(key + ": " + value); } public void handleMissing(String key) { missing.add(key); } }; } void test() { CallbackMap<String, String> cbMap = new CallbackMap<String, String>(map); for (int i = 0, count = map.size(); i < count; i++) { String key = "key" + i; cbMap.lookup(key, handler); } System.out.println(found.size() + " found"); System.out.println(missing.size() + " missing"); } } Now, the tryGet() example -- as best I understand the pattern (and I might well be wrong): class TryGetMap<K, V> { private final Map<K, V> backingMap; public TryGetMap(Map<K, V> backingMap) { this.backingMap = backingMap; } boolean tryGet(K key, OutParameter<V> valueParam) { V val = backingMap.get(key); if (val == null) { return false; } valueParam.value = val; return true; } } class OutParameter<V> { V value; } class TryGetExample { private final Map<String, String> map; private final List<String> found; private final List<String> missing; public TryGetExample(Map<String, String> map) { this.map = map; found = new ArrayList<String>(map.size()); missing = new ArrayList<String>(map.size()); } void test() { TryGetMap<String, String> tgMap = new TryGetMap<String, String>(map); for (int i = 0, count = map.size(); i < count; i++) { String key = "key" + i; OutParameter<String> out = new OutParameter<String>(); if (tgMap.tryGet(key, out)) { found.add(key + ": " + out.value); } else { 
missing.add(key); } } System.out.println(found.size() + " found"); System.out.println(missing.size() + " missing"); } } And finally, the performance test code: public static void main(String[] args) { int size = 200000; Map<String, String> map = new HashMap<String, String>(); for (int i = 0; i < size; i++) { String val = (i % 5 == 0) ? null : "value" + i; map.put("key" + i, val); } long totalCallback = 0; long totalTryGet = 0; int iterations = 20; for (int i = 0; i < iterations; i++) { { TryGetExample tryGet = new TryGetExample(map); long tryGetStart = System.currentTimeMillis(); tryGet.test(); totalTryGet += (System.currentTimeMillis() - tryGetStart); } System.gc(); { CallbackExample callback = new CallbackExample(map); long callbackStart = System.currentTimeMillis(); callback.test(); totalCallback += (System.currentTimeMillis() - callbackStart); } System.gc(); } System.out.println("Avg. callback: " + (totalCallback / iterations)); System.out.println("Avg. tryGet(): " + (totalTryGet / iterations)); } On my first attempt, I got 50% worse performance for callback than for tryGet(), which really surprised me. But, on a hunch, I added some garbage collection, and the performance penalty vanished. This fits with my instinct, which is that we're basically talking about taking the same number of method calls, conditional checks, etc. and rearranging them. But then, I wrote the code, so I might well have written a suboptimal or subconsciously penalized tryGet() implementation. Thoughts?
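
    For what it's worth, here is a small timing harness (an editor's addition, not part of the original experiment) that applies the usual micro-benchmark hygiene the results above hint at: warm-up runs so the JIT has compiled both code paths before timing starts, System.nanoTime() instead of currentTimeMillis(), and averaging over many iterations. It assumes the CallbackExample and TryGetExample classes from the post are compiled in the same package.

        import java.util.HashMap;
        import java.util.Map;

        public class LookupBenchmark {

            static Map<String, String> buildMap(int size) {
                Map<String, String> map = new HashMap<String, String>();
                for (int i = 0; i < size; i++) {
                    map.put("key" + i, (i % 5 == 0) ? null : "value" + i);
                }
                return map;
            }

            static long averageNanos(Runnable task, int warmups, int runs) {
                for (int i = 0; i < warmups; i++) {
                    task.run();                  // warm-up iterations are not timed
                }
                System.gc();                     // reduce (not eliminate) GC noise
                long start = System.nanoTime();
                for (int i = 0; i < runs; i++) {
                    task.run();
                }
                return (System.nanoTime() - start) / runs;
            }

            public static void main(String[] args) {
                final Map<String, String> map = buildMap(200000);
                long callback = averageNanos(new Runnable() {
                    public void run() { new CallbackExample(map).test(); }
                }, 5, 20);
                long tryGet = averageNanos(new Runnable() {
                    public void run() { new TryGetExample(map).test(); }
                }, 5, 20);
                System.out.println("Avg. callback (ns): " + callback);
                System.out.println("Avg. tryGet() (ns): " + tryGet);
            }
        }

    Even with this, in-process timing only gives a rough signal; the broader claim in the thread, that the two designs do essentially the same work in a different order, is what the numbers would need to confirm.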


  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come. So, if we start at the start, how did you get started in programming? When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that. How did you learn to program? Was there someone helping you out? Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!” When did you think this might be something that you actually wanted to do as a career? For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it. Do you think you learnt much about programming at university? Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed like TCP. Those are really important for examples for how to approach things. Did you do any internships while you were at university? Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then. Was there just not a culture of looking after your code? 
There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn. I was going to ask how you found working with people who were more experienced than you… When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product which I’d written and said what they didn’t like. By that point, it was way too late and I’d disagree with them. Do you think the lack of code reviews was a bad thing? I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight and then the right way to do it is continuously and in very, very small chunks. Have you had any significant projects you’ve worked on outside of a job? When I was a teenager I wrote all sorts of stuff. I used to write games, I derived how to do isometric projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week. Could you explain a little more about NAct? It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allow you to say this piece of data belongs to this thread of execution, and nobody else can read it. 
If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors, NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like. Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases? It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about. When you’re using actors, you presumably still have to write your code in a different way from you would otherwise using single-threaded code. You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks so that you can use “async” and “await” to await asynchronously for it. But other than that, you can still stick things in classes so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump. If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop? Usually pretty easy. If you can identify even one boundary where things look like messages and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult. If we do get 1000-core desktop PCs, do you think actors will scale up? It’s hard. There are always in the order of twenty to fifty actors in my whole program because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism. Do you think it helps if actors get baked into the language, similarly to Erlang? Erlang is excellent in that it has thread-local garbage collection. 
C# doesn’t, so there’s a limit to how well C# actors can possibly scale because there’s a single garbage collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language. Speaking of language design, do you have a favourite programming language? I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to write tests so much. When you say it doesn’t have some of the niggles? C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#. And what about going even further with the type system to remove the need for tests, to something like Haskell? Or is that a step too far? I’m quite a pragmatist, I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it. What about things in C#? For instance, there’s contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile? I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong. On a slightly different note, how do you like to debug code? I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition, I will only identify the experiment that’s going to binary chop most effectively and repeat rather than trying to guess anything. I suppose it’s quite top-down. Is most of the time then spent in the debugger? 
Absolutely, if at all possible I will never debug using print statements or logs. I don’t really hold much stock in outputting logs. If there’s any bug which can be reproduced locally, I’d rather do it in the debugger than outputting logs. And with SmartAssembly error reporting, there’s not a lot that can’t be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I’ll use logs. But I hate using logs. You stare at the log, trying to guess what’s going on, and that’s exactly what I don’t like doing. You have to just look at it and see does this look right or wrong. We’ve covered how you get to grip with bugs. How do you get to grips with an entire codebase? I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn’t too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code’s really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I’ve learnt the codebase I might be able to fix all the bugs in an hour, but I’d rather be using them as an aim while I’m learning the codebase. If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand? Keep distinct concepts in different places. And name your stuff so that it’s obvious which concepts live there. You shouldn’t have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I’m talking from a very much SmartAssembly point of view because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is. And what about tests? Do you think they help at all? I’ve never had the opportunity to learn a codebase which has had tests, I don’t know what it’s like! What about when you’re actually developing? How useful do you find tests in finding bugs or regressions? Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn’t happen very often that a test finds a bug in the first place. I don’t really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there’s a way that this code doesn’t work. 
Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are. Does that mean that when you write your code the first time, there are no tests? Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature because I’m not good enough at writing tests to think of bugs that I would have written into the code. So not writing regression tests for all of your code hasn’t affected you too badly? There are different kinds of features. Some of them just always work, and are just not flaky, they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time. How do you think your programming style has changed over time? I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor, I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible. If you had to summarise the essence of programming in a pithy sentence, how would you do it? Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from. So you think it’s an art rather than a science? It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art. 
You can get away with doing architecture and programming entirely by having a good engineering mind, but you’re not going to produce anything nice. You’re not going to have joy doing it if you’re an engineering mind. Architects who are just engineering minds are not going to enjoy their job. I suppose engineering is the foundation on which you build the art. Exactly. How do you think programming is going to change over the next ten years? There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript’s unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it’s dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them. What about the idea of compiling other languages, possibly statically-typed, to JavaScript? It’s a reasonable idea. I would like to do it, but I don’t think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don’t write big code. And having your errors come up in the same place, whether they’re statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I’d teach them JavaScript. If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in? The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you’re experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference, does static typing make a blind bit of difference. So it’s not really about size of codebase. If I go and write up a tiny program, I’m still going to get value out of writing it in C# using ReSharper because I’m experienced with C# and ReSharper enough to be able to write code five times faster if I have that help. Any other visions of the future? Nobody’s going to use actors. Because everyone’s going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors. More Red Gater Coder interviews
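
    NAct itself is C#, but the core idea Alex describes translates into a short sketch in any language. The following Java fragment (an editor's addition with illustrative names, not Red Gate code) shows the shape of it: one thread of execution owns the mutable state, and everything else interacts with it only by posting messages to its mailbox and receiving answers back asynchronously.

        import java.util.HashMap;
        import java.util.Map;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class CounterActor {

            // Owned exclusively by the actor's single thread; never touched directly
            // from outside, so no locks are needed and lock-ordering deadlocks
            // cannot occur.
            private final Map<String, Integer> counts = new HashMap<String, Integer>();

            // The "mailbox": a single-threaded executor processes messages in order.
            private final ExecutorService mailbox = Executors.newSingleThreadExecutor();

            /** Fire-and-forget message: bump a counter. */
            public void increment(final String key) {
                mailbox.execute(new Runnable() {
                    public void run() {
                        Integer current = counts.get(key);
                        counts.put(key, current == null ? 1 : current + 1);
                    }
                });
            }

            /** Query message: the caller gets a Future instead of blocking the actor. */
            public Future<Integer> current(final String key) {
                return mailbox.submit(new Callable<Integer>() {
                    public Integer call() {
                        Integer value = counts.get(key);
                        return value == null ? 0 : value;
                    }
                });
            }

            public void shutdown() {
                mailbox.shutdown();
            }
        }

    Callers on any thread can post increment() and current() messages safely; as long as no actor blocks inside its own mailbox waiting on another actor's Future, the "don't sit around waiting for other actors" rule mentioned in the interview holds and the deadlock problems of plain locking do not come back.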

