Search Results

Search found 42465 results on 1699 pages for 'xml simple'.

  • Antenna Aligner part 1: In the beginning.

    - by Chris George
    Picture the scene: it's 9pm, I'm in my caravan (yes I know, I've heard all the jokes!) with my family, and I'm trying to tune the TV by moving the aerial, retuning, moving the aerial again, retuning... 45 minutes and much cursing later, I succeed. Surely there must be an easier way than this? Aha, an app; there must be an app for that? So I searched the AppStore for such an app, but curiously drew a blank. Then the seeds of the idea started to grow. I can code, I work in a software house with lots of very clever people, surely I can make an app that points to the nearest digital TV transmitter! Not having looked into app development before, I investigated how one goes about making an iPhone app and was quickly greeted by a now familiar answer: "Buy a Mac!". That was not an option for many reasons, mostly wife-related! My dreams were starting to fade until one of my colleagues pointed out that within Red Gate, the very company I work for, there was ongoing development on a piece of software that would allow me to write an app using Visual Studio on a Windows machine: Nomad! Once I signed up for the beta program I got to work learning the jQuery Mobile / PhoneGap framework. Within a couple of hours I had written (in Visual Studio), built in the cloud (using Nomad) and published (via TestFlight) my first iPhone app onto my iPhone! It didn't do much, but it was a step in the right direction. To be continued...

    Read the article

  • Will HTML5 make Silverlight redundant?

    - by Laila
    One of the great features of Adobe AIR v2, launched this month, was its support for some of the 2008 draft of HTML5. The HTML5 specification was started in 2004, but the full spec will probably not be approved by W3C until around 2022. One might have thought it would take years yet to reach the point where any browsers were remotely HTML5-compliant, but enough of HTML5 is published and agreed to make a lot of it possible, and Safari and Adobe have got there thanks to Apple's open-source WebKit. The race for HTML5 has been fuelled by the demand from Apple and Google for advanced graphics, typography, animations and transitions without having to rely on third-party browser plug-ins such as Adobe Flash or Silverlight. There is good reason for this haste: Flash doesn't support touch devices and has been slow in supporting hardware video decoders such as H.264. There is a strong requirement to do all that Flash can do in an open-standards way. Those with proprietary solutions remain sniffy. In AIR 2, Adobe pointedly disables the HTML5 <video> and <audio> tags that allow basic playing of media content, saying that the specification is not final and there is still no standard for the supported formats, and adding that Safari implements a 'disjoint set' of codecs. Microsoft also has little interest in HTML5, as it has so much invested in Silverlight. Google stands to gain by Adobe AIR for Android, as it will allow a lot of applications to be migrated easily to the platform, so it sees Apple's war on Flash as a way of gaining market share. Why do we care? Because HTML5/CSS3 provides facilities far beyond HTML4, bringing the reality of browser-based applications a lot closer. Probably most generally useful is the advanced typography: Safari and AIR both already support a way of reflowing text in a container across an arbitrary number of columns, and page-specific fonts can also be specified. Then there is 2D drawing, video, transitions, local storage, AJAX navigation and mutable DOM prototypes. HTML5 is likely to provide the base functionality that is required, but it is too early to be certain that it will render Flash, Silverlight or JavaFX obsolete. In the meantime, Adobe AIR provides the best vehicle for developing HTML5/CSS3 applications without a twinge of worry about browser incompatibilities. Cheers, Laila

    Read the article

  • Migrating from OCS 2007 R2 to Lync: Part 2

    In the story so far, Johan has described how to check that the migration from OCS to Lync is supported, and how to determine the requirements for the new installation. This was followed by a walk-through of the preparation of Active Directory and the installation of the first Lync Front End Server with a Mediation Server co-located. Now Johan tackles merging the OCS configuration and connecting to the outside world, followed by testing, performing and then validating the migration.

    Read the article

  • Microsoft Access as a Weapon of War

    - by Damon Armstrong
    A while ago (probably a decade ago, actually) I saw a report on a tracking system maintained by a U.S. Army artillery control unit.  This system was capable of maintaining a bearing on various units in the field to help avoid friendly fire.  I consider the U.S. Army to be the most technologically advanced fighting force on Earth, but to my terror I saw something on the title bar of an application displayed on a laptop behind one of the soldiers they were interviewing: Tracking.mdb. Oh yes.  Microsoft Office Suite had made it onto the battlefield.  My hope is that it was just running as a front-end for a more proficient database (no offense, Access people), or that the soldier was tracking something else like KP duty or fantasy football scores.  But I could also see the corporate equivalent of a pointy-haired boss walking into a cube and asking someone who had piddled with Access to build a database for HR forms.  Except this pointy-haired boss would have been a general, the cube would have been a tank, and the HR forms would have been targets that, if something went amiss, would have been hit by a 500lb artillery round. Hope that soldier could write a good query.

    Read the article

  • Calling all developers building ASP.NET applications

    - by Laila Lotfi
    We know that developers building desktop apps have to contend with memory management issues, and we'd like to learn more about the memory challenges ASP.NET developers are facing. To be more specific, we're carrying out some exploratory research leading into the next phase of development on ANTS Memory Profiler, and our development team would love to speak to developers building ASP.NET applications. You don't need to have ever used ANTS Memory Profiler – this will be a more general conversation about:
    - your current site architecture, and how you manage the memory requirements of your applications on your back-end servers and web services;
    - how you currently diagnose memory leaks, and where you do this (on the production server, during the testing phase, or whether you normally manage to catch them all during local development);
    - what specific memory problems you've experienced, if any.
    Of course, we'll compensate you for your time with a $50 Amazon voucher (or the equivalent in other currencies), and our development team's undying gratitude. If you'd like to participate, please just drop me a line on [email protected].

    Read the article

  • Smartassembly 5: it lives! Early Access builds now available

    - by Bart Read
    I'm pleased to announce that, late last week, we put out the first early access build for Smartassembly 5, Red Gate's fantastic code protection and error reporting tool, which we acquired last September. You can download it via: http://www.red-gate.com/messageboard/viewforum.php?f=116 It's obviously pretty early days, so please do not try to use this to protect a production application, but we've already done a lot of work in some key areas: We're simplifying and streamlining the licensing model (you won't see this yet, but a lot of the work on this has already been done). We've improved usability of the product, with a better menu, reordering of project settings, and better defaults. We've also fixed a load of bugs, which I'll let Alex blog about in more detail. On a slightly more trivial level, the curly braces are also no more. Over the coming weeks, we'll be adding more improvements, and starting usability tests. If you're interested in getting involved in the latter, please drop an email to [email protected].

    Read the article

  • Antenna Aligner part 2: Finding the right direction

    - by Chris George
    Last time I managed to get "my first app(tm)" built, published and running on my iPhone. This was really cool, a piece of my code running on my very own device. OK, so I'm easily pleased! The next challenge was actually trying to determine what it was I wanted this app to do, and how to do it. Reverting to good old paper and pen, I started sketching out designs for the app. I knew I wanted it to get a list of transmitters, and that clicking on a transmitter would display a compass-type view, with an arrow pointing the right way. I figured there would not be much point in continuing until I knew I could do the graphical part of the project, i.e. the rotating compass, so armed with that reasoning (plus the fact I just wanted to get on and code!), I once again dived into Visual Studio. Using my friend (Google) I found some example code for getting the compass data from the phone using the PhoneGap framework:

        // onSuccess: Get the current heading
        function onSuccess(heading) {
            alert('Heading: ' + heading);
        }

        navigator.compass.getCurrentHeading(onSuccess, onError);

    Using the Ripple mobile emulator this showed that it was successfully getting the compass heading. But it didn't work when uploaded to my phone. It turns out that the examples I had been looking at were for PhoneGap 1.0, and Nomad uses PhoneGap 1.4.1. In 1.4.1, getCurrentHeading provides a compass object to onSuccess, not just a numeric value, so the code now looks like:

        // onSuccess: Get the current magnetic heading
        function onSuccess(heading) {
            alert('Heading: ' + heading.magneticHeading);
        }

        navigator.compass.getCurrentHeading(onSuccess, onError);

    So the lesson learnt from this... read the documentation for the version you are actually using! This does, however, lead to compatibility problems with Ripple, as it only supports 1.0, which is a real pain. I hope the Ripple system is updated sometime soon.

    Read the article

  • XSLT: a variation on the pagination problem

    - by MarcoS
    I must transform some XML data into a paginated list of fields. Here is an example. Input XML:

        <?xml version="1.0" encoding="UTF-8"?>
        <data>
            <books>
                <book title="t0"/>
                <book title="t1"/>
                <book title="t2"/>
                <book title="t3"/>
                <book title="t4"/>
            </books>
            <library name="my library"/>
        </data>

    Desired output:

        <?xml version="1.0" encoding="UTF-8"?>
        <pages>
            <page number="1">
                <field name="library_name" value="my library"/>
                <field name="book_1" value="t0"/>
                <field name="book_2" value="t1"/>
            </page>
            <page number="2">
                <field name="book_1" value="t2"/>
                <field name="book_2" value="t3"/>
            </page>
            <page number="3">
                <field name="book_1" value="t4"/>
            </page>
        </pages>

    In the above example I assume that I want at most 2 fields named book_n (with n ranging between 1 and 2) per page. <page> tags must have a number attribute. Finally, the field named library_name must appear only in the first <page>. Here is my current solution using XSLT:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0"
                        exclude-result-prefixes="trx xs">
            <xsl:output method="xml" indent="yes" omit-xml-declaration="no"/>
            <xsl:variable name="max" select="2"/>
            <xsl:template match="//books">
                <xsl:for-each-group select="book" group-ending-with="*[position() mod $max = 0]">
                    <xsl:variable name="pageNum" select="position()"/>
                    <page number="{$pageNum}">
                        <xsl:for-each select="current-group()">
                            <xsl:variable name="idx"
                                select="if (position() mod $max = 0) then $max else position() mod $max"/>
                            <field value="{@title}">
                                <xsl:attribute name="name">book_<xsl:value-of select="$idx"/></xsl:attribute>
                            </field>
                        </xsl:for-each>
                        <xsl:if test="$pageNum = 1">
                            <xsl:call-template name="templateFor_library"/>
                        </xsl:if>
                    </page>
                </xsl:for-each-group>
            </xsl:template>
            <xsl:template name="templateFor_library">
                <xsl:for-each select="//library">
                    <field name="library_name" value="{@name}"/>
                </xsl:for-each>
            </xsl:template>
        </xsl:stylesheet>

    Is there a better/simpler way to perform this transformation?
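    For comparison only – this sketch is not part of the question above, and the class and variable names (PaginateSketch, Max) are invented for illustration – the same pagination logic can be expressed outside XSLT, in C# with LINQ to XML. Integer division by the page size produces the page groups, and the first group alone receives the library_name field:

        using System;
        using System.Linq;
        using System.Xml.Linq;

        class PaginateSketch
        {
            const int Max = 2; // at most 2 book_n fields per page

            static void Main()
            {
                XDocument data = XDocument.Parse(
                    "<data><books>" +
                    "<book title='t0'/><book title='t1'/><book title='t2'/>" +
                    "<book title='t3'/><book title='t4'/>" +
                    "</books><library name='my library'/></data>");

                string libraryName =
                    (string)data.Descendants("library").First().Attribute("name");

                var pages = data.Descendants("book")
                    // page = zero-based page index, slot = 1-based position within the page
                    .Select((book, i) => new { book, page = i / Max, slot = i % Max + 1 })
                    .GroupBy(x => x.page)
                    .Select(g => new XElement("page",
                        new XAttribute("number", g.Key + 1),
                        // library_name appears only on the first page
                        g.Key == 0
                            ? new XElement("field",
                                new XAttribute("name", "library_name"),
                                new XAttribute("value", libraryName))
                            : null,
                        g.Select(x => new XElement("field",
                            new XAttribute("name", "book_" + x.slot),
                            new XAttribute("value", (string)x.book.Attribute("title"))))));

                Console.WriteLine(new XElement("pages", pages));
            }
        }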

    Read the article

  • Antenna Aligner Part 3: Kaspersky

    - by Chris George
    Quick one today. Since starting this project, I've been encountering times when Nomad fails to build my app. It would then take repeated attempts to see a build go through successfully. Rob, who works on Nomad at Red Gate, investigated this, and it showed that certain parts of the message required to trigger the 'cloud build' were not getting through to the Nomad app, causing the HTTP connection to stall until timeout. After much scratching of heads, it turns out that the Kaspersky Internet Security system I have installed on my laptop at home was being very aggressive and was causing the problem. Perhaps it's trying to protect me from myself? Anyway, while the Nomad guys investigate with Kaspersky, we came up with an interim solution: setting Visual Studio to be a trusted application in the Kaspersky settings, and setting it not to scan network traffic. Hey presto! This worked and I have not had a single build problem since (other than losing internet connection, or that embarrassing moment when you blame everyone else, then realise you've accidentally switched off the wireless on your laptop).

    Read the article

  • Exceptional PowerShell DBA Pt 3 - Collation and Fragmentation

    In this final look into his everyday essentials, Laerte Junior provides some useful scripts for the DBA that use an alternative way of error-logging. He shows how to use a PowerShell script to check and, if necessary, to defragment your indexes, write data to a SQL Server table, and change the collation for a table. Being an exceptional DBA just got a little easier.

    Read the article

  • Getting baseline and performance stats - the easy way.

    - by fatherjack
    OK, pretty much any DBA worth their salt has read Brent Ozar's (Blog | Twitter) post about getting a baseline of your server's performance counters, and then collecting the same counters at regular intervals afterwards, so that you can track performance trends and evidence how you are making your servers faster, or how they cope with extra load, without costing your boss any money for hardware upgrades. No? Well, go read it now. I can wait a while, as there is a great video there too... http://www.brentozar.com/archive/2006/12/dba-101-using-perfmon-for-sql-performance-tuning/ ...(read more)

    Read the article

  • ReSharper C# Live Template for Read-Only Dependency Property and Routed Event Boilerplate

    - by Bart Read
    Following on from my previous post, where I shared a Live Template for quickly declaring a normal read-write dependency property and its associated property change event boilerplate, here's an unsurprisingly similar template for creating a read-only dependency property:

        #region $PROPNAME$ Read-Only Property and Property Change Routed Event

        private static readonly DependencyPropertyKey $PROPNAME$PropertyKey =
            DependencyProperty.RegisterReadOnly(
                "$PROPNAME$", typeof( $PROPTYPE$ ), typeof( $DECLARING_TYPE$ ),
                new PropertyMetadata( $DEF_VALUE$, On$PROPNAME$Changed ) );

        public static readonly DependencyProperty $PROPNAME$Property =
            $PROPNAME$PropertyKey.DependencyProperty;

        public $PROPTYPE$ $PROPNAME$
        {
            get { return ( $PROPTYPE$ ) GetValue( $PROPNAME$Property ); }
            private set { SetValue( $PROPNAME$PropertyKey, value ); }
        }

        public static readonly RoutedEvent $PROPNAME$ChangedEvent =
            EventManager.RegisterRoutedEvent(
                "$PROPNAME$Changed",
                RoutingStrategy.$ROUTINGSTRATEGY$,
                typeof( RoutedPropertyChangedEventHandler<$PROPTYPE$> ),
                typeof( $DECLARING_TYPE$ ) );

        public event RoutedPropertyChangedEventHandler<$PROPTYPE$> $PROPNAME$Changed
        {
            add { AddHandler( $PROPNAME$ChangedEvent, value ); }
            remove { RemoveHandler( $PROPNAME$ChangedEvent, value ); }
        }

        private static void On$PROPNAME$Changed(
            DependencyObject d, DependencyPropertyChangedEventArgs e )
        {
            var $DECLARING_TYPE_var$ = d as $DECLARING_TYPE$;
            var args = new RoutedPropertyChangedEventArgs<$PROPTYPE$>(
                ( $PROPTYPE$ ) e.OldValue,
                ( $PROPTYPE$ ) e.NewValue );
            args.RoutedEvent = $DECLARING_TYPE$.$PROPNAME$ChangedEvent;
            $DECLARING_TYPE_var$.RaiseEvent( args );$END$
        }

        #endregion

    The only real difference here is the addition of the DependencyPropertyKey, which allows your implementation to set the value of the dependency property without exposing the setter code to consumers of your type. You'll probably find that you create read-only dependency properties much less often than read-write properties, but this should still save you some typing when you do need to do so. Technorati Tags: resharper, live template, c#, dependency property, read-only, routed events, property change, boilerplate, wpf
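    To see what the template produces in practice, here is roughly how it might expand for a hypothetical read-only Progress property on an invented ProgressPanel control (ProgressPanel, Progress, the Bubble routing strategy and the 0.0 default are all illustrative placeholders, not part of the template itself):

        using System.Windows;
        using System.Windows.Controls;

        public class ProgressPanel : Control
        {
            private static readonly DependencyPropertyKey ProgressPropertyKey =
                DependencyProperty.RegisterReadOnly(
                    "Progress", typeof( double ), typeof( ProgressPanel ),
                    new PropertyMetadata( 0.0, OnProgressChanged ) );

            public static readonly DependencyProperty ProgressProperty =
                ProgressPropertyKey.DependencyProperty;

            public double Progress
            {
                get { return ( double ) GetValue( ProgressProperty ); }
                // only ProgressPanel itself can set the value, via the key
                private set { SetValue( ProgressPropertyKey, value ); }
            }

            public static readonly RoutedEvent ProgressChangedEvent =
                EventManager.RegisterRoutedEvent(
                    "ProgressChanged",
                    RoutingStrategy.Bubble,
                    typeof( RoutedPropertyChangedEventHandler<double> ),
                    typeof( ProgressPanel ) );

            public event RoutedPropertyChangedEventHandler<double> ProgressChanged
            {
                add { AddHandler( ProgressChangedEvent, value ); }
                remove { RemoveHandler( ProgressChangedEvent, value ); }
            }

            private static void OnProgressChanged(
                DependencyObject d, DependencyPropertyChangedEventArgs e )
            {
                var panel = d as ProgressPanel;
                var args = new RoutedPropertyChangedEventArgs<double>(
                    ( double ) e.OldValue,
                    ( double ) e.NewValue );
                args.RoutedEvent = ProgressPanel.ProgressChangedEvent;
                panel.RaiseEvent( args );
            }
        }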

    Read the article

  • Injecting the mailer service, got "The service definition 'mailer' does not exist"?

    - by Gremo
    This is the class where the mailer service should be injected:

        namespace Gremo\SkebbyBundle\Transport;

        use Swift_Mailer;
        use Gremo\SkebbyBundle\Message\AbstractSkebbyMessage;

        class MailerTransport extends AbstractSkebbyTransport
        {
            /**
             * @var \Swift_Mailer
             */
            private $mailer;

            public function __construct(Swift_Mailer $mailer)
            {
                $this->mailer = $mailer;
            }

            /**
             * @param \Gremo\SkebbyBundle\Message\AbstractSkebbyMessage $message
             * @return void
             */
            public function executeTransport(AbstractSkebbyMessage $message) { /* ... */ }
        }

    The service id is gremo_skebby.transport.mailer, placed inside the mailer.xml file:

        <parameters>
            <parameter key="gremo_skebby.converter.swift_message.class">
                Gremo\SkebbyBundle\Converter\SwiftMessageConverter
            </parameter>
            <parameter key="gremo_skebby.transport.mailer.class">
                Gremo\SkebbyBundle\Transport\MailerTransport
            </parameter>
        </parameters>

        <services>
            <service id="gremo_skebby.converter.swift_message"
                     class="%gremo_skebby.converter.swift_message.class%"
                     public="false" />
            <service id="gremo_skebby.transport.mailer"
                     class="%gremo_skebby.transport.mailer.class%"
                     public="false"
                     parent="gremo_skebby.tranport.abstract_transport">
                <argument type="service" id="mailer" />
            </service>
        </services>

    When I try to inject gremo_skebby.transport.mailer into another service (a helper) I get:

        Symfony\Component\DependencyInjection\Exception\InvalidArgumentException:
        The service definition "mailer" does not exist.

    That is strange, because the command php app/console container:debug shows that the mailer service actually exists:

        mailer    container    Swift_Mailer

    I'm loading the mailer.xml file dynamically in the extension:

        if (in_array($configs['transport'], array('http', 'rest', 'mailer'))) {
            $loader->load('transport.xml');
            $loader->load($configs['transport'] . '.xml');
            $container->setAlias('gremo_skebby.transport',
                'gremo_skebby.transport.' . $configs['transport']);
        }
        $loader->load('skebby.xml');

    ... and the gremo_skebby.transport service is injected into the gremo_skebby.skebby service (skebby.xml):

        <parameters>
            <parameter key="gremo_skebby.skebby.class">
                Gremo\SkebbyBundle\Skebby
            </parameter>
        </parameters>

        <services>
            <service id="gremo_skebby.skebby" class="%gremo_skebby.skebby.class%">
                <argument type="service" id="gremo_skebby.transport" />
            </service>
        </services>

    A quick test gives me the same InvalidArgumentException:

        public function testSkebbyWithMailerTransport()
        {
            $loader = new GremoSkebbyExtension();
            $container = new ContainerBuilder();
            $config = $this->getEmptyConfiguration();
            $loader->load(array($config), $container);
            $this->assertTrue($container->hasDefinition('gremo_skebby.transport.mailer'));
            $this->assertTrue($container->hasDefinition('gremo_skebby.skebby'));
            $this->assertInstanceOf('Gremo\SkebbyBundle\Skebby',
                $container->get('gremo_skebby.skebby'));
        }

    Read the article

  • Got that Friday feeling?

    - by Rebecca Amos
    Saturday is just around the corner, and we're all starting to wrap up for the weekend. If you're the DBA, that 'Friday feeling' might be as much about checking and preparing your SQL Servers for the next two days as about looking forward to spending time with friends and family. Whether you're double-checking your disaster recovery strategy, or know that it's your turn to be on-call this weekend, it's likely you're preparing for the worst, just in case. The fact that you're making these checks, and caring about both your servers and your users, means that you might be an exceptional DBA. You're already putting in that extra effort to make other people's lives easier. So why not take some time for your professional development and enter the Exceptional DBA Awards? If you're looking for some inspiration for your entry, download our Judges' Top Tips poster for advice on what the judges are looking for from this year's entrants. Not only will you be boosting your professional development, but you could win full conference registration for the 2011 PASS Summit in Seattle (where the awards ceremony will take place), four nights' hotel accommodation, and a copy of Red Gate's SQL DBA Bundle. So take some time out for yourself this weekend and get started on your entry: www.exceptionaldba.com

    Read the article

  • Inside the DLR – Invoking methods

    - by Simon Cooper
    So, we've looked at how a dynamic call is represented in a compiled assembly, and how the dynamic lookup is performed at runtime. The last piece of the puzzle is how the resolved method gets invoked, and that is the subject of this post.

    Invoking methods

    As discussed in my previous posts, doing a full lookup and bind at runtime each and every single time the callsite gets invoked would be far too slow to be usable. The results obtained from the callsite binder must be cached, along with a series of conditions to determine whether the cached result can be reused. So, firstly, how are the conditions represented? These conditions can be anything; they are determined entirely by the semantics of the language the binder is representing. The binder has to be able to return arbitrary code that is then executed to determine whether the conditions apply or not. Fortunately, .NET 4 has a neat way of representing arbitrary code that can be easily combined with other code – expression trees. All the callsite binder has to return is an expression (called a 'restriction') that evaluates to a boolean, returning true when the restriction passes (indicating the corresponding method invocation can be used) and false when it doesn't. If the bind result is also represented in an expression tree, these can be combined easily, like so:

        if ([restriction is true]) {
            [invoke cached method]
        }

    Take my example from my previous post:

        public class ClassA
        {
            public static void TestDynamic()
            {
                CallDynamic(new ClassA(), 10);
                CallDynamic(new ClassA(), "foo");
            }

            public static void CallDynamic(dynamic d, object o)
            {
                d.Method(o);
            }

            public void Method(int i) {}
            public void Method(string s) {}
        }

    When the Method(int) method is first bound, along with an expression representing the result of the bind lookup, the C# binder will return the restrictions under which that bind can be reused. In this case, it can be reused if the types of the parameters are the same:

        if (thisArg.GetType() == typeof(ClassA) && arg1.GetType() == typeof(int)) {
            thisClassA.Method(i);
        }

    Caching callsite results

    So, now, it's up to the callsite to link these expressions returned from the binder together in such a way that it can determine which one, from the many it has cached, it should use. This caching logic is all located in the System.Dynamic.UpdateDelegates class. It'll help if you've got this type open in a decompiler to have a look yourself. For each callsite, there are three layers of caching involved:

    1. The last method invoked on the callsite.
    2. All methods that have ever been invoked on the callsite.
    3. All methods that have ever been invoked on any callsite of the same type.

    We'll cover each of these layers in order.

    Level 1 cache: the last method called on the callsite

    When a CallSite<T> object is first instantiated, the Target delegate field (containing the delegate that is called when the callsite is invoked) is set to one of the UpdateAndExecute generic methods in UpdateDelegates, corresponding to the number of parameters to the callsite and the existence of any return value. These methods contain most of the caching, invoke, and binding logic for the callsite. The first time this method is invoked, the UpdateAndExecute method finds there aren't any entries in the caches to reuse, and invokes the binder to resolve a new method. Once the callsite has the result from the binder, along with any restrictions, it stitches some extra expressions in, and replaces the Target field in the callsite with a compiled expression tree similar to this (in this example I'm assuming there's no return value):

        if ([restriction is true]) {
            [invoke cached method]
            return;
        }
        if (callSite._match) {
            _match = false;
            return;
        }
        else {
            UpdateAndExecute(callSite, arg0, arg1, ...);
        }

    Woah. What's going on here? Well, this resulting expression tree is actually the first level of caching. The Target field in the callsite, which contains the delegate to call when the callsite is invoked, is set to the above code compiled from the expression tree into IL, and then into native code by the JIT. This code checks whether the restrictions of the last method that was invoked on the callsite (the 'primary' method) match, and if so, executes that method straight away. This means that, the next time the callsite is invoked, the first code that executes is the restriction check, executing as native code! This makes the restriction check on the primary cached delegate very fast. But what if the restrictions don't match? In that case, the second part of the stitched expression tree is executed. What this section should be doing is calling back into the UpdateAndExecute method again to resolve a new method. But it's slightly more complicated than that. To understand why, we need to understand the second and third level caches.

    Level 2 cache: all methods that have ever been invoked on the callsite

    When a binder has returned the result of a lookup, as well as updating the Target field with a compiled expression tree, stitched together as above, the callsite puts the same compiled expression tree in an internal list of delegates, called the rules list. This list acts as the level 2 cache. Why use the same delegate? Stitching together expression trees is an expensive operation. You don't want to do it every time the callsite is invoked. Ideally, you would create one expression tree from the binder's result, compile it, and then use the resulting delegate everywhere in the callsite. But, if the same delegate is used both to invoke the callsite in the first place and in the caches, that means each delegate needs two modes of operation: an 'invoke' mode, for when the delegate is set as the value of the Target field, and a 'match' mode, used when UpdateAndExecute is searching for a method in the callsite's cache. Only in the invoke mode would the delegate call back into UpdateAndExecute. In match mode, it would simply return without doing anything. This mode is controlled by the _match field in CallSite<T>. The first time the callsite is invoked, _match is false, and so the Target delegate is called in invoke mode. Then, if the initial restriction check fails, the Target delegate calls back into UpdateAndExecute. This method sets _match to true, then calls all the cached delegates in the rules list in match mode to try and find one that passes its restrictions, and invokes it. However, there needs to be some way for each cached delegate to inform UpdateAndExecute whether it passed its restrictions or not. To do this, as you can see above, it simply re-uses _match, and sets it to false if it did not pass the restrictions. This allows the code within each UpdateAndExecute method to check for cache matches like so:

        foreach (T cachedDelegate in Rules) {
            callSite._match = true;
            cachedDelegate(); // sets _match to false if restrictions do not pass
            if (callSite._match) {
                // passed restrictions, and the cached method was invoked;
                // set this delegate as the primary target to invoke next time
                callSite.Target = cachedDelegate;
                return;
            }
            // no luck, try the next one...
        }

    Level 3 cache: all methods that have ever been invoked on any callsite with the same signature

    The reason for this cache should be clear: if a method has been invoked through a callsite in one place, then it is likely to be invoked on other callsites in the codebase with the same signature. Rather than living in the callsite, the 'global' cache for callsite delegates lives in the CallSiteBinder class, in the Cache field. This is a dictionary, typed on the callsite delegate signature, providing a RuleCache<T> instance for each delegate signature. This is accessed in the same way as the level 2 callsite cache, by the UpdateAndExecute methods. When a method is matched in the global cache, it is copied into the callsite and Target cache before being executed.

    Putting it all together

    So, how does this all fit together? That, in essence, is how the DLR performs its dynamic calls nearly as fast as statically compiled IL code (I've omitted some implementation and performance details): extensive use of expression trees, compiled to IL and then into native code; multiple levels of caching, the first of which executes immediately when the dynamic callsite is invoked; and a clever re-use of compiled expression trees that can be used in completely different contexts without being recompiled. All in all, a very fast and very clever reflection caching mechanism.
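    To make the 'stitching' idea concrete, here is a minimal, hypothetical sketch – plain expression trees only, not the actual UpdateDelegates implementation – of compiling an "if the restriction passes, invoke the cached rule, otherwise rebind" delegate. The restriction mirrors the arg1.GetType() == typeof(int) check from the ClassA example, and the two Console.WriteLine calls stand in for the cached invocation and the call back into the binder:

        using System;
        using System.Linq.Expressions;

        class RestrictionSketch
        {
            static void Main()
            {
                // the callsite argument, typed object, just as the cached rule sees it
                ParameterExpression arg = Expression.Parameter(typeof(object), "arg");

                // restriction: the argument's runtime type is int
                Expression restriction = Expression.TypeEqual(arg, typeof(int));

                // stand-ins for [invoke cached method] and the UpdateAndExecute fallback
                Expression invokeCached = Expression.Call(
                    typeof(Console).GetMethod("WriteLine", new[] { typeof(string) }),
                    Expression.Constant("restriction passed: invoking cached rule"));
                Expression rebind = Expression.Call(
                    typeof(Console).GetMethod("WriteLine", new[] { typeof(string) }),
                    Expression.Constant("restriction failed: rebinding"));

                // stitch: if (restriction) { cached } else { rebind }
                Expression<Action<object>> stitched = Expression.Lambda<Action<object>>(
                    Expression.IfThenElse(restriction, invokeCached, rebind), arg);

                // compiled to IL, then JITted to native code, as the Target delegate is
                Action<object> target = stitched.Compile();

                target(10);    // prints: restriction passed: invoking cached rule
                target("foo"); // prints: restriction failed: rebinding
            }
        }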

    Read the article

  • Finding bugs is difficult, right?

    - by Laila
    Something I hear developers tell us all the time is that they take pride in being a developer, and that bugs are a dent in that pride. Someone once told me, "I know I have found bugs years later, and it's the worst feeling in the world." So how can you avoid that sinking feeling when you find out a bug has been in production for months before someone lets you know about it? Besides, let's face it: hearing about a bug often means a world of pain, because it can take hours to track down where the problem is, and more hours (if not days) to fix it. And during that time you're not working on something new, and that, my friends, is really frustrating! So to cheer you up, we've created a Bug Hunt game, where you battle against the clock to spot bugs. We've really enjoyed putting this together and hope you enjoy playing it too. Once you're done with the bug hunt, we explain how easy it can be to find and fix bugs in real life, using a neat mechanism that we call Automated Error Reporting. Play the game now.

    Read the article

  • What Counts For a DBA: Fitness

    - by Louis Davidson
    If you know me, you can probably guess that physical exercise is not really my thing. There was a time in my past when it was a larger part of my life, but even then never in the same sort of passionate way as for a number of our SQL friends.  For me, mental exercise satisfies what I believe to be the same inner need that drives people to run farther than I like to drive on most Saturday mornings, and it is certainly just as addictive. Mental fitness shares many common traits with physical fitness, especially the need to attain it through repetitive training. I only wish that mental training burned off a bacon cheeseburger in the same manner as jogging around a dewy park on a Saturday morning does. In physical training, there are at least two goals, the first of which is to be physically able to do a task. The second is to train the brain to perform the task without thinking too hard about it. No matter how long it has been since you last rode a bike, you will almost certainly be able to hop on and start riding without thinking about the process of pedaling or balancing. If you've never ridden a bike, you could be a physics professor/Olympic athlete and still crash the first few times you try, even though you are as strong as an ox and your knowledge of the physics of bicycle riding makes the concept child's play. For programming tasks, the process is very similar. As a DBA, you will come to know intuitively how to back up, optimize, and secure database systems. As a data programmer, you will work to instinctively use the clauses of Transact-SQL DML so that, when you need to group data three ways (and not four), you will know to use the GROUP BY clause with GROUPING SETS without resorting to a search engine.  You have the skill; making it natural then requires repetition, and experience is the primary requirement, not simply learning about a topic. The hardest part of being really good at something is this difference between knowledge and skill. I have recently taken several informative training classes with Kimball University on data warehousing and ETL. Now I have a lot more knowledge about designing data warehouses than before. I have also done a good bit of data warehouse designing of late and have started to improve to some level of proficiency with the theory. Yet, for all of this head knowledge, it is still a struggle to take what I have learned and apply it to the designs I am working on.  Data warehousing is still a task that is not yet deeply ingrained in my brain's muscle memory. On the other hand, relational database design is something that, no matter how much or how little I may get to do it, I am comfortable doing. I have done it as a profession now for well over a decade, I teach classes on it, and I also do (and continue to do) a lot of mental training beyond the work day. Sometimes the training is just basic education, some from reading blogs and attending sessions at PASS events.  My best training comes from spending time working on other people's design issues in forums (though not nearly as much as I would like to lately). Working through other people's problems is a great way to exercise your brain on problems with which you're not immediately familiar. The final bit of exercise I find useful for cultivating mental fitness as a data professional is also probably the nerdiest thing that I will ever suggest you do.  Akin to running in place, the idea is to work through designs in your head.

    I have designed more than one database system that would revolutionize grocery store operations, sales at my local Target store, the ordering process at Amazon, and ways to improve Disney World operations to get me through a line faster (some of which they are starting to implement without any of my help). Never are the designs truly fleshed out, but they are enough to work through structures and processes.  On "paper", I have designed database systems to catalog things as trivial as my Lego creations, rental car companies, and my audio and video collections. Once I get the database designed mentally, sometimes I will create the database, add some data (often using Red Gate's Data Generator), and write a few queries to see if a concept was realistic, but I will rarely fully flesh out the database, since I have no desire to do any user interface programming anymore.  The mental training allows me to keep in practice for when the time comes to do the work I love the most for real... even if I have been spending most of my work time lately building data warehouses.  If you are really strong of mind and body, perhaps you can mix a mental run with a physical run; though don't run off a cliff while contemplating how you might design a database to catalog the trees on a mountain... that would be contradictory to the purpose of both types of exercise.

    Read the article

  • In search of database delivery practitioners and enthusiasts

    - by Claire Brooking
    We know from speaking with many of you at tradeshows and user groups that database delivery is not a factory production line. During planning, evaluation, quality control, and disaster mitigation, the people having their say at each step means that successful database deployment is a carefully managed course of action. With so many factors involved at every stage, we would love to find a way for our software to help out, by simplifying processes, speeding them up or joining together the people and the steps that make it all happen. We’re hoping our new research group for database delivery (SQL Server and Oracle) will help us understand the views and experiences of those of you out there in the trenches managing database changes. As part of our new group, we’ll be running a variety of research sessions, including surveys and phone interviews, over coming months. If you have opinions to share on Continuous Integration or Continuous Delivery for databases, we’d love to hear from you. Your feedback really will count as the product teams at Red Gate build plans. For some of our more in-depth sessions, we’ll also be offering participants an Amazon voucher as a thank-you for your time. If you’re not yet practising automated database deployment processes, but are contemplating or planning it, please do consider joining our research group too. If you’d like to sign up to the group and find out more, please fill in a quick form online, and we’ll be in touch to let you know about new research opportunities you might be interested in. We look forward to hearing your stories!

    Read the article

  • Managing Data Growth in SQL Server

    'Help, my database ate my disk drives!' Many DBAs spend most of their time dealing with variations of the problem of database processes consuming too much disk space. This happens because of errors such as incorrect configurations for recovery models, data growth for large objects, and queries that overtax TempDB resources. Rodney describes, with some feeling, the errors that can lead to this sort of crisis for the working DBA, and their solution.

    Read the article

  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without the footnotes isn't an academic paper. Journalists wouldn't base a news article on facts that they can't verify. So why would anyone publish reports without being able to say where the data has come from and be confident of its quality, in other words, without knowing its lineage (sometimes referred to as 'provenance' or 'pedigree')? The number and variety of data sources, both traditional and new, increases inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey to our report, from its source, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined. When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven't been left out, or that the sources are good? Even when we're using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which they came. You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack-trace. If you're an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you're a developer, working on a pipeline, it provides the context you need to track down the bug. If you're a manager, or an auditor, it lets you know the right things are being done. Lineage tracking is part of good data governance. Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the eco-system. To get a full picture of what's going on in your Hadoop system, you need to capture both Falcon lineage and the data-exhaust of other tools that Falcon can't orchestrate. However, the problem is bigger even than that. Often, Hadoop is just one piece in a larger processing workflow. The next step of the challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where 'after' could be a data analysis environment like R, an application, or even directly an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with any tool they require.

    The real solution, therefore, is to create a consistent method by which to bring lineage data from these various disparate sources into the data analysis platform that you use, rather than being forced to use the tool that manages the pipeline for the lineage and a different tool for the data analysis. The key is to keep your logs, keep your audit data from every source, bring them together, and use the data analysis tools to trace the paths from raw data to the answer that data analysis provides.

    Read the article

  • Software Tuned to Humanity

    - by Phil Factor
    I learned a great deal from a cynical old programmer who once told me that the ideal length of time for a compiler to do its work was the same time it took to roll a cigarette. For development work, this is oh so true. After intently looking at the editing window for an hour or so, it was a relief to look up, stretch, focus the eyes on something else, and roll the possibly-metaphorical cigarette. This was software tuned to humanity. Likewise, a user's perception of the "ideal" time that an application should take to move from frame to frame, to retrieve information, or to process their input has remained remarkably static for about thirty years, at around 200 ms. Anything else appears, and always has, to be either fast or slow. This could explain why commercial applications, unlike games, simulations and communications, aren't noticeably faster now than they were when I started programming in the Seventies. Sure, they do a great deal more, but the SLAs that I negotiated in the 1980s for application performance are very similar to what they are nowadays. To prove to myself that this wasn't just some rose-tinted misperception on my part, I cranked up a Z80-based Jonos CP/M machine (1985) in the roof-space. Within 20 seconds from cold, it had loaded WordStar and I was ready to write. OK, I got it wrong: some things were faster 30 years ago. Sure, I'd now have all sorts of animations, wizzy graphics, and other comforting features, but it seems a pity that we have used all that extra CPU and memory to increase the scope of what we develop, and the graphical prettiness, but not to speed up the processes needed to complete a business procedure. Never mind the weight, the response time's great! To achieve 200 ms response times on a Z80, or similar, performance considerations influenced everything one did as a developer. If it meant writing an entire application in assembly code, applying every smart algorithm and shortcut imaginable to get the application to perform to spec, then so be it. As a result, I'm a dyed-in-the-wool performance freak and find it difficult to change my habits. Conversely, many developers now seem to feel quite differently. While all will acknowledge that performance is important, it's no longer the virtue it once was, and other factors such as user experience now take precedence. Am I wrong? If not, then perhaps we need a new school of development technique to rival Agile, dedicated once again to producing applications that smoke the rear wheels rather than pootle elegantly to the shops; one that forgoes skeuomorphism, cute animation, or architectural elegance in favor of the smell of hot rubber. I struggle to name an application I use that is truly notable for its blistering performance, and would dearly love one to do my everyday work – just as long as it doesn't go faster than my brain.

    Read the article

  • Exploring In-memory OLTP Engine (Hekaton) in SQL Server 2014 CTP1

    The continuing drop in the price of memory has made fast in-memory OLTP increasingly viable. SQL Server 2014 allows you to migrate the most-used tables in an existing database to memory-optimised 'Hekaton' technology, but how you balance between disk tables and in-memory tables for optimum performance requires judgement and experiment. What is this technology, and how can you exploit it? Rob Garrison explains.

    Read the article

  • Test-driven Database Development – Why Bother?

    Test-Driven Development is a practice that can bring many benefits, including better design and less-buggy code, but is it relevant to database development, where the process of development tends to be much more interactive, and the culture more test-oriented? Greg reviews the support for TDD for databases, and suggests that it is worth giving it a try for the range of advantages it can bring to team-working.

    Read the article
