Search Results

Search found 31586 results on 1264 pages for 'custom result'.

  • FileNotFoundException when running WiX CustomAction with COMAdmin interop

    - by dabcabc
    I am trying to create a WiX custom action which will allow me to shut down and clear down a COM+ package as part of an upgrade installation, or create and configure a new COM+ package as part of the initial installation. I previously had this running as a CustomAction within a standard Visual Studio MSI, but that only allows the custom action to be executed after the files have been copied - which will fail, as the package will still be running.

    COMAdmin.dll has been added as a reference to the CustomAction project and is set to CopyLocal=true, and Interop.COMAdmin.dll is present in the bin folder of the custom action project. The answer to this question seems to suggest that it should work. I am getting the following exception within the MSI log when trying to install:

        MSI (s) (C4:04) [10:40:34:205]: Invoking remote custom action. DLL: C:\WINDOWS\Installer\MSI119.tmp, Entrypoint: BeforeInstall
        SFXCA: Extracting custom action to temporary directory: C:\WINDOWS\Installer\MSI119.tmp-\
        SFXCA: Binding to CLR version v2.0.50727
        Calling custom action MyCustomAction!MyCustomAction.CustomActions.BeforeInstall
        Exception thrown by custom action:
        System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.IO.FileNotFoundException: Could not load file or assembly 'Interop.COMAdmin, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.
        File name: 'Interop.COMAdmin, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'
           at MyCustomAction.CustomActions.BeforeInstall(Session session)
        WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value .
        --- End of inner exception stack trace ---
           at System.RuntimeMethodHandle._InvokeMethodFast(Object target, Object arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner)
           at System.RuntimeMethodHandle.InvokeMethodFast(Object target, Object arguments, Signature sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner)
           at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object parameters, CultureInfo culture, Boolean skipVisibilityChecks)
           at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object parameters, CultureInfo culture)
           at Microsoft.Deployment.WindowsInstaller.CustomActionProxy.InvokeCustomAction(Int32 sessionHandle, String entryPoint, IntPtr remotingDelegatePtr)

    Read the article

  • Environment change in a rake task

    - by Mellon
    I am developing a Rails v2.3 app with a MySQL database and the mysql2 gem. I ran into a weird situation when changing the environment in a rake task. (All my settings and configuration for the environment and database are correct; that is not the problem.) Here is my simple story. I have a rake task like the following:

        namespace :db do
          task :do_something => :environment do
            # 1. run under the 'development' environment
            my_helper.run_under_development_env

            # 2. change to the 'custom' environment
            RAILS_ENV = 'custom'
            Rake::Task['db:create']
            Rake::Task['db:migrate']

            # 3. change back to the 'development' environment
            RAILS_ENV = 'development'

            # 4. But it still runs in the 'custom' environment - why?
            my_helper.run_under_development_env
          end
        end

    The rake task is quite simple. What it does is:

    1. First, run a method from my_helper under the "development" environment.
    2. Then change to the "custom" environment and run db:create and db:migrate. Up to this point everything is fine; the environment did change to "custom".
    3. Then change back to the "development" environment.
    4. Run the helper method again under the "development" environment.

    But although I changed the environment back to "development" in step 3, the last method still runs in the "custom" environment. Why, and how do I get rid of this?

    P.S. I have also checked a post describing a similar situation, and tried the solution given there (in step 2):

        ActiveRecord::Base.establish_connection('custom')
        Rake::Task['db:create']
        Rake::Task['db:migrate']

    to change the database connection instead of changing the environment. But db:create and db:migrate still run against the "development" database, although the linked post said they should run against the "custom" database... weird.

    Read the article

  • How to configure custom binding to consume this WS secure Webservice using WCF?

    - by Soeteman
    Hello all, I'm trying to configure a WCF client to be able to consume a webservice that returns the following response message: Response message <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns0="http://myservice.wsdl"> <env:Header> <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" env:mustUnderstand="1" /> </env:Header> <env:Body> <ns0:StatusResponse> <result> ... </result> </ns0:StatusResponse> </env:Body> </env:Envelope> To do this, I've constructed a custom binding (which doesn't work). I keep getting a "Security header is empty" message. My binding: <customBinding> <binding name="myCustomBindingForVestaServices"> <security authenticationMode="UserNameOverTransport" messageSecurityVersion="WSSecurity11WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11" securityHeaderLayout="Strict" includeTimestamp="false" requireDerivedKeys="true"> </security> <textMessageEncoding messageVersion="Soap11" /> <httpsTransport authenticationScheme="Negotiate" requireClientCertificate ="false" realm =""/> </binding> </customBinding> My request seems to be using the same SOAP and WS Security versions as the response, but use different namespace prefixes ("o" instead of "wsse"). Could this be the reason why I keep getting the "Security header is empty" message? Request message <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <o:UsernameToken u:Id="uuid-d3b70d1f-0ebb-4a79-85e6-34f0d6aa3d0f-1"> <o:Username>user</o:Username> <o:Password>pass</o:Password> </o:UsernameToken> </o:Security> </s:Header> <s:Body> <getPrdStatus xmlns="http://myservice.wsdl"> <request xmlns="" xmlns:a="http://myservice.wsdl" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> ... </request> </getPrdStatus> </s:Body> </s:Envelope> How do I need to configure my WCF client binding to be able to consume this webservice? Any help greatly appreciated! Sander

    Read the article

  • How to make Spring load a JDBC Driver BEFORE initializing Hibernate's SessionFactory?

    - by Bill_BsB
    I'm developing a Spring (2.5.6) + Hibernate (3.2.6) web application to connect to a custom database. For that I have a custom JDBC driver and a custom Hibernate dialect. I know for sure that these custom classes work (hard-coded stuff in my unit tests). The problem, I guess, is the order in which things get loaded by Spring. Basically:

    1. The custom database initializes.
    2. Spring loads the beans from web.xml.
    3. Spring loads the servlet beans (applicationContext.xml).
    4. Hibernate kicks in: it shows its version and all the properties correctly loaded.
    5. Hibernate's HbmBinder runs (maps all my classes).
    6. LocalSessionFactoryBean - building new Hibernate SessionFactory.
    7. DriverManagerConnectionProvider - using driver: MyCustomJDBCDriver at CustomDBURL.
    8. I get a SQLException: No suitable driver found for CustomDBURL.
    9. Hibernate loads the custom dialect.
    10. My CustomJDBCDriver finally gets registered with the DriverManager (log messages).
    11. SettingsFactory runs.
    12. SchemaExport runs (hbm2ddl).
    13. I get a SQLException: No suitable driver found for CustomDBURL (again?!).

    The application gets deployed successfully, but there are no tables in my custom database. Things that I have tried so far:

    - Different techniques for passing the Hibernate properties: embedded in the 'sessionFactory' bean, or loaded from a hibernate.properties file. Nothing worked, but I haven't tried a hibernate.cfg.xml file or a dataSource bean yet.
    - MyCustomJDBCDriver has a static initializer block that registers itself with the DriverManager.
    - Different combinations of lazy initialization (lazy-init="true") of the Spring beans, but nothing worked.

    My custom JDBC driver should be the first thing to be loaded - not sure if by Spring, but...! Can anyone give me a solution for this, or maybe a hint about what else I could try? I can provide more details (huge stack traces, for instance) if that helps. Thanks in advance.

    Read the article

  • Surely easy, but I am not able to do it: jQuery UI widget header spacing

    - by alex
    I was making this simple trial, but can anyone tell me why the distance from the border of the DIV to the H2 header is so large? How can I reduce it? I don't want that space...

        Prova WIDGET
        <link rel="stylesheet" href="jquery-ui-1.8.custom/css/smoothness/jquery-ui-1.8.custom.css" type="text/css">
        <link rel="stylesheet" href="jquery-ui-1.8.custom/development-bundle/ui/jquery-ui-1.8.custom.css" type="text/css">
        <script src="jquery-ui-1.8.custom/development-bundle/jquery-1.4.2.js" type="text/javascript"></script>
        <script src="jquery-ui-1.8.custom/js/jquery-ui-1.8.custom.min.js" type="text/javascript"></script>
        <script type="text/javascript">
            $(themify);
            function themify(){
                $("div").addClass("ui-widget ui-widget-content ui-corner-all");
                $("input").addClass("ui-button ui-button-text");
                $(":header").addClass("ui-widget-header ui-corner-all"); //ui-widget
            }
        </script>
        <style>#test{display:none}</style>
        <script type="text/javascript">
            function rendiVisibile(){
                if(document.getElementById("test").style.display = "none"){
                    $("#test").css({"width":"200px","float":"right","text-align":"center"});
                    $("#test").show("slide",{},1000);
                }
            }
        </script>
        </head>
        <body>
        <h2>Tentativo widget con DIV</h2>
        <form action="">
            <input type="button" value="Submit" id="pulsante" onclick="rendiVisibile()";><br/></br>
            <div id="test">
                <h2>CIAO</h2>
                Un saluto
            </div>
        </form>
        </body>
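    Not part of the original question, but one way to experiment with removing that gap is to override the heading's default margins and the padding that the themed header class adds, after themify() has run. A small sketch - the selectors and values are guesses, not taken from the post:

        // Sketch: shrink the gap between the DIV border and the H2 header.
        // Assumes the space comes from the h2's default top/bottom margins
        // plus the padding of the ui-widget-header theme class.
        $("#test h2").css({ margin: 0, padding: "0.1em 0" });
        $("#test").css({ padding: 0 });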

    Read the article

  • Git for Websites / post-receive / Separation of Test and Production Sites

    - by Walt W
    Hi all, I'm using Git to manage my website's source code and deployment, and currently have the test and live sites running on the same box. Following this resource http://toroid.org/ams/git-website-howto originally, I came up with the following post-receive hook script to differentiate between pushes to my live site and pushes to my test site: while read ref do #echo "Ref updated:" #echo $ref -- would print something like example at top of file result=`echo $ref | gawk -F' ' '{ print $3 }'` if [ $result != "" ]; then echo "Branch found: " echo $result case $result in refs/heads/master ) git --work-tree=c:/temp/BLAH checkout -f master echo "Updated master" ;; refs/heads/testbranch ) git --work-tree=c:/temp/BLAH2 checkout -f testbranch echo "Updated testbranch" ;; * ) echo "No update known for $result" ;; esac fi done echo "Post-receive updates complete" However, I have doubts that this is actually safe :) I'm by no means a Git expert, but I am guessing that Git probably keeps track of the current checked-out branch head, and this approach probably has the potential to confuse it to no end. So a few questions: IS this safe? Would a better approach be to have my base repository be the test site repository (with corresponding working directory), and then have that repository push changes to a new live site repository, which has a corresponding working directory to the live site base? This would also allow me to move the production to a different server and keep the deployment chain intact. Is there something I'm missing? Is there a different, clean way to differentiate between test and production deployments when using Git for managing websites? As an additional note in light of Vi's answer, is there a good way to do this that would handle deletions without mucking with the file system much? Thank you, -Walt PS - The script I came up with for the multiple repos (and am using unless I hear better) is as follows: sitename=`basename \`pwd\`` while read ref do #echo "Ref updated:" #echo $ref -- would print something like example at top of file result=`echo $ref | gawk -F' ' '{ print $3 }'` if [ $result != "" ]; then echo "Branch found: " echo $result case $result in refs/heads/master ) git checkout -q -f master if [ $? -eq 0 ]; then echo "Test Site checked out properly" else echo "Failed to checkout test site!" fi ;; refs/heads/live-site ) git push -q ../Live/$sitename live-site:master if [ $? -eq 0 ]; then echo "Live Site received updates properly" else echo "Failed to push updates to Live Site" fi ;; * ) echo "No update known for $result" ;; esac fi done echo "Post-receive updates complete" And then the repo in ../Live/$sitename (these are "bare" repos with working trees added after init) has the basic post-receive: git checkout -f if [ $? -eq 0 ]; then echo "Live site `basename \`pwd\`` checked out successfully" else echo "Live site failed to checkout" fi

    Read the article

  • How can I get the returned data from a jQuery Ajax request?

    - by Oguz
        function isNewUsername(str){
            var result;
            $.post('/api/isnewusername', {username:str}, function(data) {
                result = data.result;
            }, "json");
            return result;
        }

    So, my problem is very simple, but I cannot figure it out. I want to access result from the isNewUsername function. I am very curious about the answer, because I have spent an hour on it. Thank you.
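    Not from the original question, but the usual way around this is to treat the value asynchronously: $.post returns before the server responds, so result is still undefined at the return statement. A minimal sketch that hands the value to a callback instead (the caller shown is illustrative):

        // Sketch: pass a callback instead of returning from an async call.
        function isNewUsername(str, callback) {
            $.post('/api/isnewusername', { username: str }, function (data) {
                // This runs later, when the server has responded.
                callback(data.result);
            }, "json");
        }

        // Illustrative usage:
        isNewUsername("someName", function (isNew) {
            console.log("username is new? " + isNew);
        });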

    Read the article

  • How to apply GROUP_CONCAT in a MySQL query

    - by Query Master
    How can I apply GROUP_CONCAT to this query? If you have any idea or an alternative solution, please share it - help is definitely appreciated (see the query and the required result below).

    Query:

        SELECT WEEK(cpd.added_date) AS week_no, COUNT(cpd.result) AS death_count
        FROM cron_players_data cpd
        WHERE cpd.player_id = 81 AND cpd.result = 2 AND cpd.status = 1
        GROUP BY WEEK(cpd.added_date);

    (The query's output was shown in a result screenshot.)

    Result required:

        23,24,25 AS week_no
        2,3,1 AS death_count

    Read the article

  • Why does my code show garbled output?

    - by zjm1126
        class sss(webapp.RequestHandler):
            def get(self):
                url = "http://www.google.com/"
                result = urlfetch.fetch(url)
                if result.status_code == 200:
                    self.response.out.write(result.content)

    and this view shows: [screenshot]

    When I change the code to this:

                if result.status_code == 200:
                    self.response.out.write(result.content.decode('utf-8').encode('gb2312'))

    it shows: [screenshot]

    So, what should I do? Thanks.

    Read the article

  • Declare the Ajax web service call's OnSuccess method anonymously

    - by user333113
    I write a lot of Ajax JavaScript code and have a little design problem I'm not totally satisfied with. A lot of times I end up writing something like this:

        var typeOfPopup;

        function RetrievePopupContent(_typeOfPopup) {
            switch (_typeOfPopup) {
                case Popup1:
                    WebService.RetrievePopup1Content(param1, param2, DisplayPopup, OnError);
                    break;
                case Popup2:
                    WebService.RetrievePopup2Content(param1, param2, DisplayPopup, OnError);
                    break;
            }
            typeOfPopup = _typeOfPopup;
        }

        function DisplayPopup(result) {
            switch (typeOfPopup) {
                case Popup1:
                    $get('Popup1').innerHTML = result;
                    break;
                case Popup2:
                    $get('Popup2').innerHTML = result;
                    break;
            }
        }

    Alright, this is a simplified example of what I mean - often I end up with far worse code, I believe. What I don't like is the global state variable outside the functions. One solution I wasn't thinking about when writing this code is to send a context object. I believe you could write something like this:

        function RetrievePopupContent(typeOfPopup) {
            switch (typeOfPopup) {
                case Popup1:
                    WebService.RetrievePopup1Content(param1, param2, DisplayPopup, OnError, typeOfPopup);
                    break;
                case Popup2:
                    WebService.RetrievePopup2Content(param1, param2, DisplayPopup, OnError, typeOfPopup);
                    break;
            }
        }

        function DisplayPopup(result, typeOfPopup) {
            switch (typeOfPopup) {
                case Popup1:
                    $get('Popup1').innerHTML = result;
                    break;
                case Popup2:
                    $get('Popup2').innerHTML = result;
                    break;
            }
        }

    Is this the recommended way? What I also want to do is to be able to write something like this:

        function RetrievePopupContent(typeOfPopup) {
            switch (typeOfPopup) {
                case Popup1:
                    WebService.RetrievePopup1Content(param1, param2, new function(result) {
                        $get('Popup1').innerHTML = result;
                    }, OnError);
                    break;
                case Popup2:
                    WebService.RetrievePopup2Content(param1, param2, new function(result) {
                        $get('Popup2').innerHTML = result;
                    }, OnError);
                    break;
            }
        }

    Is this possible at all - to declare the callback function anonymously? I am grateful for all opinions on the two options I mentioned, and also for new alternatives to get rid of the global variables I have used this way.
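    Not from the original question, but regarding the third variant: it works once the new keyword is dropped - a plain anonymous function expression can be passed directly as the success callback, and because each callback closes over its own branch, no global variable is needed. A small sketch along the lines of the question's own names:

        // Sketch: anonymous success callbacks, no global typeOfPopup needed.
        function RetrievePopupContent(typeOfPopup) {
            switch (typeOfPopup) {
                case Popup1:
                    WebService.RetrievePopup1Content(param1, param2, function (result) {
                        $get('Popup1').innerHTML = result; // anonymous callback
                    }, OnError);
                    break;
                case Popup2:
                    WebService.RetrievePopup2Content(param1, param2, function (result) {
                        $get('Popup2').innerHTML = result;
                    }, OnError);
                    break;
            }
        }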

    Read the article

  • BoundingBox created from mesh extends to the origin, making it bigger

    - by Gunnar Södergren
    I'm working on a level-based survival game and I want to design my scenes in Maya and export them as a single model (with multiple meshes) into XNA. My problem is that when I try to create Bounding Boxes(for Collision purposes) for each of the meshes, the are calculated from origin to the far-end of the current mesh, so to speak. I'm thinking that it might have something to do with the position each mesh brings from Maya and that it's interpreted wrongly... or something. Here's the code for when I create the boxes: private static BoundingBox CreateBoundingBox(Model model, ModelMesh mesh) { Matrix[] boneTransforms = new Matrix[model.Bones.Count]; model.CopyAbsoluteBoneTransformsTo(boneTransforms); BoundingBox result = new BoundingBox(); foreach (ModelMeshPart meshPart in mesh.MeshParts) { BoundingBox? meshPartBoundingBox = GetBoundingBox(meshPart, boneTransforms[mesh.ParentBone.Index]); if (meshPartBoundingBox != null) result = BoundingBox.CreateMerged(result, meshPartBoundingBox.Value); } result = new BoundingBox(result.Min, result.Max); return result; } private static BoundingBox? GetBoundingBox(ModelMeshPart meshPart, Matrix transform) { if (meshPart.VertexBuffer == null) return null; Vector3[] positions = VertexElementExtractor.GetVertexElement(meshPart, VertexElementUsage.Position); if (positions == null) return null; Vector3[] transformedPositions = new Vector3[positions.Length]; Vector3.Transform(positions, ref transform, transformedPositions); for (int i = 0; i < transformedPositions.Length; i++) { Console.WriteLine(" " + transformedPositions[i]); } return BoundingBox.CreateFromPoints(transformedPositions); } public static class VertexElementExtractor { public static Vector3[] GetVertexElement(ModelMeshPart meshPart, VertexElementUsage usage) { VertexDeclaration vd = meshPart.VertexBuffer.VertexDeclaration; VertexElement[] elements = vd.GetVertexElements(); Func<VertexElement, bool> elementPredicate = ve => ve.VertexElementUsage == usage && ve.VertexElementFormat == VertexElementFormat.Vector3; if (!elements.Any(elementPredicate)) return null; VertexElement element = elements.First(elementPredicate); Vector3[] vertexData = new Vector3[meshPart.NumVertices]; meshPart.VertexBuffer.GetData((meshPart.VertexOffset * vd.VertexStride) + element.Offset, vertexData, 0, vertexData.Length, vd.VertexStride); return vertexData; } } Here's a link to the picture of the mesh(The model holds six meshes, but I'm only rendering one and it's bounding box to make it clearer: http://www.gsodergren.se/portfolio/wp-content/uploads/2011/10/Screen-shot-2011-10-24-at-1.16.37-AM.png The mesh that I'm refering to is the Cubelike one. The cylinder is a completely different model and not part of any bounding box calculation. I've double- (and tripple-)-checked that this mesh corresponds to this bounding box. Any thoughts on what I'm doing wrong?

    Read the article

  • Does Altova StyleVision support generating this specific Word XML / WordML list, numbering, and bullet markup, or must it be extended with a custom external XSLT?

    - by Alex S
    PS: I know this is specific to Altova and their dev tools, but just like Eclipse and Visual Studio it is one of the most widely used toolkits for XML-related development and programming. So please do not hate, ban, or vote this down.

    Linked below is a section of documentation for the Word ML / Word XML numbering, list, and bullet markup. The markup is pretty extensive. I am wondering whether this can be replicated via StyleVision, or whether this is a limitation that needs to be extended with an external XSLT.

    Key links to the markup documentation:
    http://officeopenxml.com/WPnumbering.php
    http://officeopenxml.com/WPnumberingAbstractNum.php
    Also: /WPnumberingLvl.php

    Short outline of the documentation there - Numbering, Levels and Lists:
    - Overview
    - Defining a Numbering Scheme
    - Defining a Particular Level
      ++ Numbering Level Text
      ++ Numbering Format
      ++ Displaying as Numerals Only
      ++ Restart Numbering
      ++ Picture or Image as Numbering Symbol
      ++ Justification
      ++ Overriding a Numbering Definition

    If StyleVision supports the above, where and how inside StyleVision can I access or use these properties/attributes for the markup? From what I've gathered, I think it does not. In the past I have written XSL-FO and XSL-WordML by hand, so I could write an add-on external XSLT containing the Word-specific markup for this purpose.

    Given that the limitation exists, the questions now are:
    - Where and how do I create and link such an XSLT inside StyleVision, so as to apply it and extend these capabilities of StyleVision?
    - How could I make it apply only to the Word ML / Word XML output styling, and have it deactivated/disabled for HTML and PDF output?

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • Metro: Creating an IndexedDbDataSource for WinJS

    - by Stephen.Walther
    The goal of this blog entry is to describe how you can create custom data sources which you can use with the controls in the WinJS library. In particular, I explain how you can create an IndexedDbDataSource which you can use to store and retrieve data from an IndexedDB database. If you want to skip ahead, and ignore all of the fascinating content in-between, I’ve included the complete code for the IndexedDbDataSource at the very bottom of this blog entry. What is IndexedDB? IndexedDB is a database in the browser. You can use the IndexedDB API with all modern browsers including Firefox, Chrome, and Internet Explorer 10. And, of course, you can use IndexedDB with Metro style apps written with JavaScript. If you need to persist data in a Metro style app written with JavaScript then IndexedDB is a good option. Each Metro app can only interact with its own IndexedDB databases. And, IndexedDB provides you with transactions, indices, and cursors – the elements of any modern database. An IndexedDB database might be different than the type of database that you normally use. An IndexedDB database is an object-oriented database and not a relational database. Instead of storing data in tables, you store data in object stores. You store JavaScript objects in an IndexedDB object store. You create new IndexedDB object stores by handling the upgradeneeded event when you attempt to open a connection to an IndexedDB database. For example, here’s how you would both open a connection to an existing database named TasksDB and create the TasksDB database when it does not already exist: var reqOpen = window.indexedDB.open(“TasksDB”, 2); reqOpen.onupgradeneeded = function (evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true }); }; reqOpen.onsuccess = function () { var db = reqOpen.result; // Do something with db }; When you call window.indexedDB.open(), and the database does not already exist, then the upgradeneeded event is raised. In the code above, the upgradeneeded handler creates a new object store named tasks. The new object store has an auto-increment column named id which acts as the primary key column. If the database already exists with the right version, and you call window.indexedDB.open(), then the success event is raised. At that point, you have an open connection to the existing database and you can start doing something with the database. You use asynchronous methods to interact with an IndexedDB database. For example, the following code illustrates how you would add a new object to the tasks object store: var transaction = db.transaction(“tasks”, “readwrite”); var reqAdd = transaction.objectStore(“tasks”).add({ name: “Feed the dog” }); reqAdd.onsuccess = function() { // Tasks added successfully }; The code above creates a new database transaction, adds a new task to the tasks object store, and handles the success event. If the new task gets added successfully then the success event is raised. Creating a WinJS IndexedDbDataSource The most powerful control in the WinJS library is the ListView control. This is the control that you use to display a collection of items. If you want to display data with a ListView control, you need to bind the control to a data source. The WinJS library includes two objects which you can use as a data source: the List object and the StorageDataSource object. 
The List object enables you to represent a JavaScript array as a data source and the StorageDataSource enables you to represent the file system as a data source. If you want to bind an IndexedDB database to a ListView then you have a choice. You can either dump the items from the IndexedDB database into a List object or you can create a custom data source. I explored the first approach in a previous blog entry. In this blog entry, I explain how you can create a custom IndexedDB data source. Implementing the IListDataSource Interface You create a custom data source by implementing the IListDataSource interface. This interface contains the contract for the methods which the ListView needs to interact with a data source. The easiest way to implement the IListDataSource interface is to derive a new object from the base VirtualizedDataSource object. The VirtualizedDataSource object requires a data adapter which implements the IListDataAdapter interface. Yes, because of the number of objects involved, this is a little confusing. Your code ends up looking something like this: var IndexedDbDataSource = WinJS.Class.derive( WinJS.UI.VirtualizedDataSource, function (dbName, dbVersion, objectStoreName, upgrade, error) { this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error); this._baseDataSourceConstructor(this._adapter); }, { nuke: function () { this._adapter.nuke(); }, remove: function (key) { this._adapter.removeInternal(key); } } ); The code above is used to create a new class named IndexedDbDataSource which derives from the base VirtualizedDataSource class. In the constructor for the new class, the base class _baseDataSourceConstructor() method is called. A data adapter is passed to the _baseDataSourceConstructor() method. The code above creates a new method exposed by the IndexedDbDataSource named nuke(). The nuke() method deletes all of the objects from an object store. The code above also overrides a method named remove(). Our derived remove() method accepts any type of key and removes the matching item from the object store. Almost all of the work of creating a custom data source goes into building the data adapter class. The data adapter class implements the IListDataAdapter interface which contains the following methods: · change() · getCount() · insertAfter() · insertAtEnd() · insertAtStart() · insertBefore() · itemsFromDescription() · itemsFromEnd() · itemsFromIndex() · itemsFromKey() · itemsFromStart() · itemSignature() · moveAfter() · moveBefore() · moveToEnd() · moveToStart() · remove() · setNotificationHandler() · compareByIdentity Fortunately, you are not required to implement all of these methods. You only need to implement the methods that you actually need. In the case of the IndexedDbDataSource, I implemented the getCount(), itemsFromIndex(), insertAtEnd(), and remove() methods. If you are creating a read-only data source then you really only need to implement the getCount() and itemsFromIndex() methods. Implementing the getCount() Method The getCount() method returns the total number of items from the data source. So, if you are storing 10,000 items in an object store then this method would return the value 10,000. 
Here’s how I implemented the getCount() method: getCount: function () { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore().then(function (store) { var reqCount = store.count(); reqCount.onerror = that._error; reqCount.onsuccess = function (evt) { complete(evt.target.result); }; }); }); } The first thing that you should notice is that the getCount() method returns a WinJS promise. This is a requirement. The getCount() method is asynchronous which is a good thing because all of the IndexedDB methods (at least the methods implemented in current browsers) are also asynchronous. The code above retrieves an object store and then uses the IndexedDB count() method to get a count of the items in the object store. The value is returned from the promise by calling complete(). Implementing the itemsFromIndex method When a ListView displays its items, it calls the itemsFromIndex() method. By default, it calls this method multiple times to get different ranges of items. Three parameters are passed to the itemsFromIndex() method: the requestIndex, countBefore, and countAfter parameters. The requestIndex indicates the index of the item from the database to show. The countBefore and countAfter parameters represent hints. These are integer values which represent the number of items before and after the requestIndex to retrieve. Again, these are only hints and you can return as many items before and after the request index as you please. Here’s how I implemented the itemsFromIndex method: itemsFromIndex: function (requestIndex, countBefore, countAfter) { var that = this; return new WinJS.Promise(function (complete, error) { that.getCount().then(function (count) { if (requestIndex >= count) { return WinJS.Promise.wrapError(new WinJS.ErrorFromName(WinJS.UI.FetchError.doesNotExist)); } var startIndex = Math.max(0, requestIndex - countBefore); var endIndex = Math.min(count, requestIndex + countAfter + 1); that._getObjectStore().then(function (store) { var index = 0; var items = []; var req = store.openCursor(); req.onerror = that._error; req.onsuccess = function (evt) { var cursor = evt.target.result; if (index < startIndex) { index = startIndex; cursor.advance(startIndex); return; } if (cursor && index < endIndex) { index++; items.push({ key: cursor.value[store.keyPath].toString(), data: cursor.value }); cursor.continue(); return; } results = { items: items, offset: requestIndex - startIndex, totalCount: count }; complete(results); }; }); }); }); } In the code above, a cursor is used to iterate through the objects in an object store. You fetch the next item in the cursor by calling either the cursor.continue() or cursor.advance() method. The continue() method moves forward by one object and the advance() method moves forward a specified number of objects. Each time you call continue() or advance(), the success event is raised again. If the cursor is null then you know that you have reached the end of the cursor and you can return the results. Some things to be careful about here. First, the return value from the itemsFromIndex() method must implement the IFetchResult interface. In particular, you must return an object which has an items, offset, and totalCount property. Second, each item in the items array must implement the IListItem interface. Each item should have a key and a data property. 
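As an illustration (the keys and values here are invented, not taken from the sample app), the object that the itemsFromIndex() promise completes with has the following shape, per the IFetchResult and IListItem contracts just described:

    // Illustrative only: a fetch result for requestIndex 9 with countBefore 1 and countAfter 1.
    var exampleFetchResult = {
        items: [
            { key: "8", data: { id: 8, name: "Walk the dog" } },
            { key: "9", data: { id: 9, name: "Feed the dog" } },      // the requested item
            { key: "10", data: { id: 10, name: "Water the plants" } }
        ],
        offset: 1,        // position of the requested item within the items array
        totalCount: 42    // total number of objects in the object store
    };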
Implementing the insertAtEnd() Method When creating the IndexedDbDataSource, I wanted to go beyond creating a simple read-only data source and support inserting and deleting objects. If you want to support adding new items with your data source then you need to implement the insertAtEnd() method. Here’s how I implemented the insertAtEnd() method for the IndexedDbDataSource: insertAtEnd:function(unused, data) { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function(store) { var reqAdd = store.add(data); reqAdd.onerror = that._error; reqAdd.onsuccess = function (evt) { var reqGet = store.get(evt.target.result); reqGet.onerror = that._error; reqGet.onsuccess = function (evt) { var newItem = { key:evt.target.result[store.keyPath].toString(), data:evt.target.result } complete(newItem); }; }; }); }); } When implementing the insertAtEnd() method, you need to be careful to return an object which implements the IItem interface. In particular, you should return an object that has a key and a data property. The key must be a string and it uniquely represents the new item added to the data source. The value of the data property represents the new item itself. Implementing the remove() Method Finally, you use the remove() method to remove an item from the data source. You call the remove() method with the key of the item which you want to remove. Implementing the remove() method in the case of the IndexedDbDataSource was a little tricky. The problem is that an IndexedDB object store uses an integer key and the VirtualizedDataSource requires a string key. For that reason, I needed to override the remove() method in the derived IndexedDbDataSource class like this: var IndexedDbDataSource = WinJS.Class.derive( WinJS.UI.VirtualizedDataSource, function (dbName, dbVersion, objectStoreName, upgrade, error) { this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error); this._baseDataSourceConstructor(this._adapter); }, { nuke: function () { this._adapter.nuke(); }, remove: function (key) { this._adapter.removeInternal(key); } } ); When you call remove(), you end up calling a method of the IndexedDbDataAdapter named removeInternal() . Here’s what the removeInternal() method looks like: setNotificationHandler: function (notificationHandler) { this._notificationHandler = notificationHandler; }, removeInternal: function(key) { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function (store) { var reqDelete = store.delete (key); reqDelete.onerror = that._error; reqDelete.onsuccess = function (evt) { that._notificationHandler.removed(key.toString()); complete(); }; }); }); } The removeInternal() method calls the IndexedDB delete() method to delete an item from the object store. If the item is deleted successfully then the _notificationHandler.remove() method is called. Because we are not implementing the standard IListDataAdapter remove() method, we need to notify the data source (and the ListView control bound to the data source) that an item has been removed. The way that you notify the data source is by calling the _notificationHandler.remove() method. Notice that we get the _notificationHandler in the code above by implementing another method in the IListDataAdapter interface: the setNotificationHandler() method. 
You can raise the following types of notifications using the _notificationHandler: · beginNotifications() · changed() · endNotifications() · inserted() · invalidateAll() · moved() · removed() · reload() These methods are all part of the IListDataNotificationHandler interface in the WinJS library. Implementing the nuke() Method I wanted to implement a method which would remove all of the items from an object store. Therefore, I created a method named nuke() which calls the IndexedDB clear() method: nuke: function () { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function (store) { var reqClear = store.clear(); reqClear.onerror = that._error; reqClear.onsuccess = function (evt) { that._notificationHandler.reload(); complete(); }; }); }); } Notice that the nuke() method calls the _notificationHandler.reload() method to notify the ListView to reload all of the items from its data source. Because we are implementing a custom method here, we need to use the _notificationHandler to send an update. Using the IndexedDbDataSource To illustrate how you can use the IndexedDbDataSource, I created a simple task list app. You can add new tasks, delete existing tasks, and nuke all of the tasks. You delete an item by selecting an item (swipe or right-click) and clicking the Delete button. Here’s the HTML page which contains the ListView, the form for adding new tasks, and the buttons for deleting and nuking tasks: <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <title>DataSources</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.1.0.RC/css/ui-dark.css" rel="stylesheet" /> <script src="//Microsoft.WinJS.1.0.RC/js/base.js"></script> <script src="//Microsoft.WinJS.1.0.RC/js/ui.js"></script> <!-- DataSources references --> <link href="indexedDb.css" rel="stylesheet" /> <script type="text/javascript" src="indexedDbDataSource.js"></script> <script src="indexedDb.js"></script> </head> <body> <div id="tmplTask" data-win-control="WinJS.Binding.Template"> <div class="taskItem"> Id: <span data-win-bind="innerText:id"></span> <br /><br /> Name: <span data-win-bind="innerText:name"></span> </div> </div> <div id="lvTasks" data-win-control="WinJS.UI.ListView" data-win-options="{ itemTemplate: select('#tmplTask'), selectionMode: 'single' }"></div> <form id="frmAdd"> <fieldset> <legend>Add Task</legend> <label>New Task</label> <input id="inputTaskName" required /> <button>Add</button> </fieldset> </form> <button id="btnNuke">Nuke</button> <button id="btnDelete">Delete</button> </body> </html> And here is the JavaScript code for the TaskList app: /// <reference path="//Microsoft.WinJS.1.0.RC/js/base.js" /> /// <reference path="//Microsoft.WinJS.1.0.RC/js/ui.js" /> function init() { WinJS.UI.processAll().done(function () { var lvTasks = document.getElementById("lvTasks").winControl; // Bind the ListView to its data source var tasksDataSource = new DataSources.IndexedDbDataSource("TasksDB", 1, "tasks", upgrade); lvTasks.itemDataSource = tasksDataSource; // Wire-up Add, Delete, Nuke buttons document.getElementById("frmAdd").addEventListener("submit", function (evt) { evt.preventDefault(); tasksDataSource.beginEdits(); tasksDataSource.insertAtEnd(null, { name: document.getElementById("inputTaskName").value }).done(function (newItem) { tasksDataSource.endEdits(); document.getElementById("frmAdd").reset(); lvTasks.ensureVisible(newItem.index); }); }); document.getElementById("btnDelete").addEventListener("click", function () { if 
(lvTasks.selection.count() == 1) { lvTasks.selection.getItems().done(function (items) { tasksDataSource.remove(items[0].data.id); }); } }); document.getElementById("btnNuke").addEventListener("click", function () { tasksDataSource.nuke(); }); // This method is called to initialize the IndexedDb database function upgrade(evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true }); } }); } document.addEventListener("DOMContentLoaded", init); The IndexedDbDataSource is created and bound to the ListView control with the following two lines of code: var tasksDataSource = new DataSources.IndexedDbDataSource("TasksDB", 1, "tasks", upgrade); lvTasks.itemDataSource = tasksDataSource; The IndexedDbDataSource is created with four parameters: the name of the database to create, the version of the database to create, the name of the object store to create, and a function which contains code to initialize the new database. The upgrade function creates a new object store named tasks with an auto-increment property named id: function upgrade(evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true }); } The Complete Code for the IndexedDbDataSource Here’s the complete code for the IndexedDbDataSource: (function () { /************************************************ * The IndexedDBDataAdapter enables you to work * with a HTML5 IndexedDB database. *************************************************/ var IndexedDbDataAdapter = WinJS.Class.define( function (dbName, dbVersion, objectStoreName, upgrade, error) { this._dbName = dbName; // database name this._dbVersion = dbVersion; // database version this._objectStoreName = objectStoreName; // object store name this._upgrade = upgrade; // database upgrade script this._error = error || function (evt) { console.log(evt.message); }; }, { /******************************************* * IListDataAdapter Interface Methods ********************************************/ getCount: function () { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore().then(function (store) { var reqCount = store.count(); reqCount.onerror = that._error; reqCount.onsuccess = function (evt) { complete(evt.target.result); }; }); }); }, itemsFromIndex: function (requestIndex, countBefore, countAfter) { var that = this; return new WinJS.Promise(function (complete, error) { that.getCount().then(function (count) { if (requestIndex >= count) { return WinJS.Promise.wrapError(new WinJS.ErrorFromName(WinJS.UI.FetchError.doesNotExist)); } var startIndex = Math.max(0, requestIndex - countBefore); var endIndex = Math.min(count, requestIndex + countAfter + 1); that._getObjectStore().then(function (store) { var index = 0; var items = []; var req = store.openCursor(); req.onerror = that._error; req.onsuccess = function (evt) { var cursor = evt.target.result; if (index < startIndex) { index = startIndex; cursor.advance(startIndex); return; } if (cursor && index < endIndex) { index++; items.push({ key: cursor.value[store.keyPath].toString(), data: cursor.value }); cursor.continue(); return; } results = { items: items, offset: requestIndex - startIndex, totalCount: count }; complete(results); }; }); }); }); }, insertAtEnd:function(unused, data) { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function(store) { var reqAdd = store.add(data); reqAdd.onerror = that._error; reqAdd.onsuccess = function (evt) { var reqGet 
= store.get(evt.target.result); reqGet.onerror = that._error; reqGet.onsuccess = function (evt) { var newItem = { key:evt.target.result[store.keyPath].toString(), data:evt.target.result } complete(newItem); }; }; }); }); }, setNotificationHandler: function (notificationHandler) { this._notificationHandler = notificationHandler; }, /***************************************** * IndexedDbDataSource Method ******************************************/ removeInternal: function(key) { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function (store) { var reqDelete = store.delete (key); reqDelete.onerror = that._error; reqDelete.onsuccess = function (evt) { that._notificationHandler.removed(key.toString()); complete(); }; }); }); }, nuke: function () { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function (store) { var reqClear = store.clear(); reqClear.onerror = that._error; reqClear.onsuccess = function (evt) { that._notificationHandler.reload(); complete(); }; }); }); }, /******************************************* * Private Methods ********************************************/ _ensureDbOpen: function () { var that = this; // Try to get cached Db if (that._cachedDb) { return WinJS.Promise.wrap(that._cachedDb); } // Otherwise, open the database return new WinJS.Promise(function (complete, error, progress) { var reqOpen = window.indexedDB.open(that._dbName, that._dbVersion); reqOpen.onerror = function (evt) { error(); }; reqOpen.onupgradeneeded = function (evt) { that._upgrade(evt); that._notificationHandler.invalidateAll(); }; reqOpen.onsuccess = function () { that._cachedDb = reqOpen.result; complete(that._cachedDb); }; }); }, _getObjectStore: function (type) { type = type || "readonly"; var that = this; return new WinJS.Promise(function (complete, error) { that._ensureDbOpen().then(function (db) { var transaction = db.transaction(that._objectStoreName, type); complete(transaction.objectStore(that._objectStoreName)); }); }); }, _get: function (key) { return new WinJS.Promise(function (complete, error) { that._getObjectStore().done(function (store) { var reqGet = store.get(key); reqGet.onerror = that._error; reqGet.onsuccess = function (item) { complete(item); }; }); }); } } ); var IndexedDbDataSource = WinJS.Class.derive( WinJS.UI.VirtualizedDataSource, function (dbName, dbVersion, objectStoreName, upgrade, error) { this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error); this._baseDataSourceConstructor(this._adapter); }, { nuke: function () { this._adapter.nuke(); }, remove: function (key) { this._adapter.removeInternal(key); } } ); WinJS.Namespace.define("DataSources", { IndexedDbDataSource: IndexedDbDataSource }); })(); Summary In this blog post, I provided an overview of how you can create a new data source which you can use with the WinJS library. I described how you can create an IndexedDbDataSource which you can use to bind a ListView control to an IndexedDB database. While describing how you can create a custom data source, I explained how you can implement the IListDataAdapter interface. You also learned how to raise notifications — such as a removed or invalidateAll notification — by taking advantage of the methods of the IListDataNotificationHandler interface.

    Read the article

  • Changing the prompt in telnet

    - by wim
    With some help from people on here, I was able to set a custom prompt in an ssh session (thanks!). Now I need to do the same in telnet, but I'm not sure of what syntax I could use for that. Basically the telnet prompt is just a > character, I need to modify it to something I can more reliably detect in automation jobs. Hope this makes sense. From inside telnet, trying to escape that command with a bang like !PS1=spam and !PS2=eggs did not change it. wim@wim-acer:~$ ssh [email protected] -i ~/.ssh/guest_nopassphrase -t "export PS1='Sending a custom prompt \w \$ '; exec sh" Sending a custom prompt ~ $ set HOME='/var/tmp' IFS=' ' LOGNAME='guest' PATH='/sbin:/usr/sbin:/bin:/usr/bin' PPID='1128' PS1='Sending a custom prompt \w $ ' PS2='> ' PS4='+ ' PWD='' SHELL='/bin/sh' TERM='xterm' USER='guest' Sending a custom prompt ~ $ telnet localhost <snip> Entering character mode Escape character is '^]'. > !set CONSOLE='/dev/ttyp0' HOME='/var/tmp' IFS=' ' LOGNAME='root' PATH='/sbin:/bin:/usr/sbin:/usr/bin' PPID='546' PREVLEVEL='N' PS1='\w \$ ' PS2='> ' PS4='+ ' PWD='/var/tmp' RESPAWN_COUNT='1' RESPAWN_LAST='0' RESPAWN_MAX='5' RESPAWN_TIME='5' ROOTDEV='/dev/sla1' RUNLEVEL='5' SHELL='/bin/false' TERM='linux' USER='root' > > Connection closed by foreign host Sending a custom prompt ~ $ Connection to 192.168.1.124 closed. wim@wim-acer:~$

    Read the article

  • BinaryWrite exception "OutputStream is not available when a custom TextWriter is used" in MVC 2 ASP.

    - by Grant
    Hi, I have a view rendering a stream using the response BinaryWrite method. This all worked fine under ASP.NET 4 using the Beta 2 but throws this exception in the RC release: "HttpException" , "OutputStream is not available when a custom TextWriter is used." <%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage" %> <%@ Import Namespace="System.IO" %> <script runat="server"> protected void Page_Load(object sender, EventArgs e) { if (ViewData["Error"] == null) { Response.Buffer = true; Response.Clear(); Response.ContentType = ViewData["DocType"] as string; Response.AddHeader("content-disposition", ViewData["Disposition"] as string); Response.CacheControl = "No-cache"; MemoryStream stream = ViewData["DocAsStream"] as MemoryStream; Response.BinaryWrite(stream.ToArray()); Response.Flush(); Response.Close(); } } </script> </script> The view is generated from a client side redirect (jquery replace location call in the previous page using Url.Action helper to render the link of course). This is all in an iframe. Anyone have an idea why this occurs?
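    A commonly suggested way around this error (offered here only as a sketch, since the original project cannot be tested) is to move the binary write out of the view entirely and return a FileContentResult from a controller action, so MVC writes the bytes to the response itself instead of going through the view's TextWriter. The controller, action, and helper names below are hypothetical; the content type, file name, and stream stand in for the ViewData values used above.

    using System.IO;
    using System.Web.Mvc;

    namespace MyApp.Controllers   // hypothetical namespace
    {
        public class DocumentController : Controller
        {
            // Hypothetical action: File() builds the FileContentResult, sets the
            // content type, and adds the content-disposition header for us.
            public ActionResult Download()
            {
                MemoryStream stream = GetDocumentStream();   // stands in for ViewData["DocAsStream"]
                string contentType = "application/pdf";      // stands in for ViewData["DocType"]
                string fileName = "report.pdf";              // stands in for ViewData["Disposition"]

                return File(stream.ToArray(), contentType, fileName);
            }

            private MemoryStream GetDocumentStream()
            {
                // Placeholder for however the document stream is actually produced.
                return new MemoryStream();
            }
        }
    }

    With this in place, the view that called Response.BinaryWrite() is no longer needed for the download path.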

    Read the article

  • WPF: How to set column width with auto fill in ListView with custom user control.

    - by powerk
    A ListView with a DataTemplate in a GridViewColumn: <ListView Name ="LogDataList" IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding LogDataCollection}" Background="Cyan"> <ListView.View> <GridView AllowsColumnReorder="true" ColumnHeaderToolTip="Event Log Information"> <GridViewColumn Header="Event Log Name" Width="100"> <GridViewColumn.CellTemplate> <DataTemplate> <l:MyTextBlock Height="25" DataContext="{Binding LogName, Converter={StaticResource DataFieldConverter}}" HighlightMatchCase="{Binding Element}" Loaded="EditBox_Loaded"/> </DataTemplate> </GridViewColumn.CellTemplate> </GridViewColumn> ... </GridView> </ListView.View> </ListView> I have no idea how to make the column width auto-fill, although I have tried a number of approaches. The general idea I have seen demonstrated is: <ListView Name ="LogDataList" IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding LogDataCollection}" Background="Cyan"> <ListView.Resources> <Style x:Key="ColumnWidthStyle" TargetType="{x:Type GridViewColumn}"> <Style.Setters> <Setter Property="HorizontalContentAlignment" Value="Stretch" > </Setter> </Style.Setters> </Style> </ListView.Resources> <ListView.View> <GridView AllowsColumnReorder="true" ColumnHeaderToolTip="Event Log Information"> <GridViewColumn Header="Event Log Name" DisplayMemberBinding="{Binding Path=LogName}" HeaderContainerStyle="{StaticResource ColumnWidthStyle}"> That works, but it does not meet my requirement: I need to customize the DataTemplate with my custom user control (MyTextBlock) because of the extra HighlightMatchCase property and the DataContext binding. In other words, how can I make the column width fill the remaining space? I really appreciate your help.
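    One workaround that is often suggested for a templated column (shown here only as a sketch; it has not been verified against the layout above) is to redistribute the leftover width from code-behind whenever the ListView resizes. It assumes the ListView declares SizeChanged="LogDataList_SizeChanged" in XAML and that the first column is the one that should absorb the remaining space.

    using System.Windows;
    using System.Windows.Controls;

    namespace LogViewerDemo   // hypothetical namespace
    {
        public partial class MainWindow : Window
        {
            // Assumes the ListView adds SizeChanged="LogDataList_SizeChanged".
            private void LogDataList_SizeChanged(object sender, SizeChangedEventArgs e)
            {
                var listView = (ListView)sender;
                var gridView = listView.View as GridView;
                if (gridView == null || gridView.Columns.Count == 0)
                    return;

                // Width already taken by every column except the first one.
                double used = 0;
                for (int i = 1; i < gridView.Columns.Count; i++)
                    used += gridView.Columns[i].ActualWidth;

                // Leave rough padding for the vertical scrollbar and borders.
                double remaining = listView.ActualWidth - used
                                   - SystemParameters.VerticalScrollBarWidth - 10;
                if (remaining > 0)
                    gridView.Columns[0].Width = remaining;   // stretch the templated column
            }
        }
    }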

    Read the article

  • Setting custom WCF-binding behaviour via .config file - why doesn't this work?

    - by Andrew Shepherd
    I am attempting to insert a custom behavior into my service client, following the example here. I appear to be following all of the steps, but I am getting a ConfigurationErrorsException. Is there anyone more experienced than me who can spot what I'm doing wrong? Here is the entire app.config file. <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <behaviors> <endpointBehaviors> <behavior name="ClientLoggingEndpointBehaviour"> <myLoggerExtension /> </behavior> </endpointBehaviors> </behaviors> <extensions> <behaviorExtensions> <add name="myLoggerExtension" type="ChatClient.ClientLoggingEndpointBehaviourExtension, ChatClient, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null"/> </behaviorExtensions> </extensions> <bindings> </bindings> <client> <endpoint behaviorConfiguration="ClientLoggingEndpointBehaviour" name="ChatRoomClientEndpoint" address="http://localhost:8016/ChatRoom" binding="wsDualHttpBinding" contract="ChatRoomLib.IChatRoom" /> </client> </system.serviceModel> </configuration> Here is the exception message: An error occurred creating the configuration section handler for system.serviceModel/behaviors: Extension element 'myLoggerExtension' cannot be added to this element. Verify that the extension is registered in the extension collection at system.serviceModel/extensions/behaviorExtensions. Parameter name: element (C:\Documents and Settings\Andrew Shepherd\My Documents\Visual Studio 2008\Projects\WcfPractice\ChatClient\bin\Debug\ChatClient.vshost.exe.config line 5) I know that I've correctly written the reference to the ClientLoggingEndpointBehaviourExtensionobject, because through the debugger I can see it being instantiated.
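    For comparison, this is the general shape the registered extension type needs to have (a sketch only; ClientLoggingEndpointBehaviour stands in for whatever IEndpointBehavior the project already defines). A frequently reported cause of this particular ConfigurationErrorsException on .NET 3.x is that the type attribute in behaviorExtensions must match typeof(ClientLoggingEndpointBehaviourExtension).AssemblyQualifiedName exactly, character for character, so that string is worth double-checking as well.

    using System;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Configuration;
    using System.ServiceModel.Description;
    using System.ServiceModel.Dispatcher;

    namespace ChatClient
    {
        // Sketch of the extension element the config entry must resolve to.
        public class ClientLoggingEndpointBehaviourExtension : BehaviorExtensionElement
        {
            public override Type BehaviorType
            {
                get { return typeof(ClientLoggingEndpointBehaviour); }
            }

            protected override object CreateBehavior()
            {
                return new ClientLoggingEndpointBehaviour();
            }
        }

        // Stub of the behaviour itself, assumed to already exist in the project.
        public class ClientLoggingEndpointBehaviour : IEndpointBehavior
        {
            public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
            public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime) { }
            public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
            public void Validate(ServiceEndpoint endpoint) { }
        }
    }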

    Read the article

  • When to use custom exceptions vs. existing exceptions vs. generic exceptions

    - by Ryan Elkins
    I'm trying to figure out what the correct form of exceptions to throw would be for a library I am writing. One example of what I need to handle is logging a user in to a station. They do this by scanning a badge. Possible things that could go wrong include: their badge is deactivated; they don't have permission to work at this station; the badge scanned does not exist in the system; they are already logged in to another station elsewhere; the database is down; or an internal DB error occurs (happens sometimes if the badge didn't get set up correctly). An application using this library will have to handle these exceptions one way or another. It's possible they may decide to just say "Error" or they may want to give the user more useful information. What's the best practice in this situation? Create a custom exception for each possibility? Use existing exceptions? Use Exception and pass in the reason (throw new Exception("Badge is deactivated.");)? I'm thinking it's some sort of mix of the first two, using existing exceptions where applicable, and creating new ones where needed (and grouping exceptions where it makes sense).
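    As a sketch of that "mix of the first two" approach (all names here are illustrative, not part of the original library): domain failures get their own exceptions under a common base so callers can catch coarsely or precisely, while infrastructure failures such as the database being down are left to the existing exception types (SqlException, DbException, and so on).

    using System;

    // Common base for failures that are specific to the badge-login domain.
    public class BadgeLoginException : Exception
    {
        public BadgeLoginException(string message) : base(message) { }
        public BadgeLoginException(string message, Exception inner) : base(message, inner) { }
    }

    public class BadgeDeactivatedException : BadgeLoginException
    {
        public BadgeDeactivatedException() : base("Badge is deactivated.") { }
    }

    public class BadgeNotFoundException : BadgeLoginException
    {
        public BadgeNotFoundException() : base("Badge does not exist in the system.") { }
    }

    public class StationPermissionException : BadgeLoginException
    {
        public StationPermissionException() : base("No permission to work at this station.") { }
    }

    // Callers can catch BadgeLoginException for a generic "Error" message, or the
    // specific subclasses when they want to show more detail; database failures
    // simply propagate as the existing framework exception types.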

    Read the article

  • Is it possible to create a custom, animated MKAnnotationView?

    - by smountcastle
    I'm trying to simulate the user location animation in MapKit (where-by the user's position is represented by a pulsating blue dot). I've created a custom subclass of MKAnnotationView and in the drawRect method I'm attempting to cycle through a set of colors. Here's a simpler implementation of what I'm doing: - (void)drawRect:(CGRect)rect { float magSquared = event.magnitude * event.magnitude; CGContextRef context = UIGraphicsGetCurrentContext(); if (idx == -1) { r[0] = 1.0; r[1] = 0.5; r[2] = 0; b[0] = 0; b[1] = 1.0; b[2] = 0.5; g[0] = 0.5; g[1] = 0; g[2] = 1.0; idx = 0; } // CGContextSetRGBFillColor(context, 1.0, 1.0 - magSquared * 0.015, 0.211, .6); CGContextSetRGBFillColor(context, r[idx], g[idx], b[idx], 0.75); CGContextFillEllipseInRect(context, rect); idx++; if (idx > 3) idx = 0; } Unfortunately this just causes the annotations to be one of the 3 different colors and doesn't cycle through them. Is there a way to force the MKAnnotations to continually redraw so that it appears to be animated?
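    drawRect: only runs when the view is invalidated, so cycling colours inside it will not animate on its own. One approach that is often suggested (a sketch only, producing a pulsing effect rather than the exact user-location look) is to attach a repeating Core Animation to the annotation view's layer, for example from the map view delegate's mapView:didAddAnnotationViews: callback.

    #import <MapKit/MapKit.h>
    #import <QuartzCore/QuartzCore.h>

    // Hypothetical helper: pulse an annotation view's opacity indefinitely.
    static void AddPulseAnimation(MKAnnotationView *annotationView)
    {
        CABasicAnimation *pulse = [CABasicAnimation animationWithKeyPath:@"opacity"];
        pulse.fromValue = [NSNumber numberWithFloat:1.0f];
        pulse.toValue = [NSNumber numberWithFloat:0.3f];
        pulse.duration = 1.0;
        pulse.autoreverses = YES;
        pulse.repeatCount = HUGE_VALF;
        [annotationView.layer addAnimation:pulse forKey:@"pulse"];
    }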

    Read the article

  • How to do custom display and auto-select in django admin multi-select field?

    - by rsp
    I'm new to django, so please feel free to tell me if I'm doing this incorrectly. I am trying to create a django ordering system. My order model: class Order(models.Model): ordered_by = models.ForeignKey(User, limit_choices_to = {'groups__name': "Managers", 'is_active': 1}) in my admin ANY user can enter an order, but ordered_by must be someone in the group "managers" (this is the behavior I want). Now, if the logged in user happens to be a manager I want it to automatically fill in the field with that logged in user. I have accomplished this by: class OrderAdmin(admin.ModelAdmin): def formfield_for_foreignkey(self, db_field, request, **kwargs): if db_field.name == "ordered_by": if request.user in User.objects.filter(groups__name='Managers', is_active=1): kwargs["initial"] = request.user.id kwargs["empty_label"] = "-------------" return db_field.formfield(**kwargs) return super(OrderAdmin, self).formfield_for_foreignkey(db_field, request, **kwargs) This also works, but the admin puts the username as the display for the select box by default. It would be nice to have the user's real name listed. I was able to do it with this: class UserModelMultipleChoiceField(forms.ModelMultipleChoiceField): def label_from_instance(self, obj): return obj.first_name + " " + obj.last_name class OrderForm(forms.ModelForm): ordered_by = UserModelChoiceField(queryset=User.objects.all().filter(groups__name='Managers', is_active=1)) class OrderAdmin(admin.ModelAdmin): form = OrderForm My problem: I can't to both of these. If I put in the formfield_for_foreignkey function and add form = OrderForm to use my custom "UserModelChoiceField", it puts the nice name display but it won't select the currently logged in user. I'm new to this, but my guess is that when I use UserModelChoiceField it "erases" the info passed in via formfield_for_foreignkey. Do I need to use the super() function somehow to pass on this info? or something completely different?
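    One way to reconcile the two (a sketch only, not a verified fix) is to stop declaring ordered_by explicitly on OrderForm and instead hand the custom field class to formfield_for_foreignkey through the form_class kwarg, so the initial value set there is still honoured. Note that the field subclasses ModelChoiceField here, since ordered_by is a single ForeignKey; the label_from_instance override is the same idea as above.

    from django import forms
    from django.contrib import admin
    from django.contrib.auth.models import User


    class UserModelChoiceField(forms.ModelChoiceField):
        def label_from_instance(self, obj):
            return obj.first_name + " " + obj.last_name


    class OrderAdmin(admin.ModelAdmin):
        def formfield_for_foreignkey(self, db_field, request, **kwargs):
            if db_field.name == "ordered_by":
                kwargs["queryset"] = User.objects.filter(groups__name='Managers', is_active=1)
                kwargs["form_class"] = UserModelChoiceField  # keeps the real-name labels
                if request.user in User.objects.filter(groups__name='Managers', is_active=1):
                    kwargs["initial"] = request.user.id
            return super(OrderAdmin, self).formfield_for_foreignkey(db_field, request, **kwargs)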

    Read the article

  • Should an Event that has no arguments define its own custom EventArgs or simply use System.EventArgs

    - by Mike Rosenblum
    I have an event that is currently defined with no event arguments. That is, the EventArgs it sends is EventArgs.Empty. In this case, it is simplest to declare my event handler as: EventHandler<System.EventArgs> MyCustomEvent; I do not plan on adding any event arguments to this event, but it is possible that the code could need to change in the future. Therefore, I am leaning towards having all my events always create an empty event args type that inherits from System.EventArgs, even if there are no event args currently needed. Something like this: public class MyCustomEventArgs : EventArgs { } And then my event definition becomes the following: EventHandler<MyCustomEventArgs> MyCustomEvent; So my question is this: is it better to define my own MyCustomEventArgs, even if it does not add anything beyond inheriting from System.EventArgs, so that event arguments could be added in the future more easily? Or is it better to explicitly define my event as returning System.EventArgs, so that it is clearer to the user that there are no extra event args? I am leaning towards creating custom event arguments for all my events, even if the event arguments are empty. But I was wondering if others thought that making it clearer to the user that the event arguments are empty would be better? Much thanks in advance, Mike
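    For what it's worth, the "custom but currently empty" version costs very little to write and keeps the delegate signature stable if arguments are added later. A minimal sketch (the type and member names are illustrative):

    using System;

    public class MyCustomEventArgs : EventArgs
    {
        // Intentionally empty today; properties can be added later without
        // changing the event signature or breaking existing subscribers.
    }

    public class Publisher
    {
        public event EventHandler<MyCustomEventArgs> MyCustomEvent;

        protected virtual void OnMyCustomEvent()
        {
            EventHandler<MyCustomEventArgs> handler = MyCustomEvent;
            if (handler != null)
            {
                handler(this, new MyCustomEventArgs());
            }
        }
    }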

    Read the article

< Previous Page | 195 196 197 198 199 200 201 202 203 204 205 206  | Next Page >