Search Results

Search found 97532 results on 3902 pages for 'user acceptance testing'.


  • Do admin=true and root have the same privileges on AIX?

    - by Boaz Tirosh
    Does a user in /etc/security/user with the parameter admin set to true (admin = true) have the same privileges as the root user? According to IBM (full information here): admin Defines the administrative status of the user. Possible values are: true The user is an administrator. Only the root user can change the attributes of users defined as administrators. false The user is not an administrator. This is the default value. Is there another type of user, or are admin and root the same?

    Read the article

  • How can I find the names of AD Group policies that a user/pc is using?

    - by Russ
    I am having trouble locating some settings in group policy so I can make changes, due to the convoluted nature of our policies. What I would like to be able to do is go to a specific PC and see what group policies are being applied, so I can focus on those policies. My goal would be to clean up the GPs a bit, while allowing me to "walk the tree" to see what people have implemented and what is worthless. Thanks. EDIT: In this specific case, I am looking to find which GP the mapped drives are configured in. (User Configuration -- Preferences -- Windows Settings -- Drive Maps)

    Read the article

  • Is it safe to remove Per user queued Windows Error Reporting?

    - by Rewinder
    I was cleaning up my laptop hard-disk, running Windows 7, and as part of the process I ran the Disk Cleanup utility. To my surprise I saw 2 items in the list that were quite large (both ~300MB). Per user queued Windows Error Reporting System queued Windows Error Reporting I guess I had never noticed these, because they were never that big. So, what are these items? Any particular reason why they became so large all of a sudden? And finally, is it safe to remove them?

    Read the article

  • Windows 8 Remote Desktop only allows one user at a time?

    - by segmentation fault
    I tried connecting to Windows 8 using its built-in Remote Desktop feature, but for some inexplicable reason, it requires that no users are logged in on the target machine before a remote user can log in. This has never been a problem with rdesktop on Unixen; I could rdesktop from as many machines as I wanted and any logged-in users would never notice a thing. What's the problem with Windows? Any way to allow concurrent local and remote logins to a Windows 8 machine without hacks or cracks? The "guides" on how to do this that show up in the Google results all suggest replacing a system DLL with a hacked one, but that's not acceptable.

    Read the article

  • Always asks for the administrator password even though my username and password are correct; I cannot log in

    - by sulit09
    Why does the computer always ask for the administrator password? Whenever I change any setting on my computer and then enter my administrator password, it still does not work, even though the password and user name are correct. This PC is connected to a domain, but when I try the same Windows administrator login on another PC it works. What is the problem? If I try to run cmd as administrator it asks for the administrator password again, and if I try to open the network properties it asks for the administrator password again. How do I solve this problem? My new PC is running Windows 7.

    Read the article

  • Excel Macro To Lookup a User Entered String, and return data from the field next to it

    - by CJG
    On worksheet A, a user is prompted to enter a product number, such as BCI610. On worksheet B somewhere, that value exists. I want Excel to look up/find that value, and then return the data in the cell that is right next to it, one column to the right, by copying that data and pasting it somewhere in worksheet A. If I enter BCI610, it should return the value M332651, because that is the number in the cell immediately to the right of BCI610. I tried VLookup and HLookup, but to no avail... Any suggestions?

    Read the article

  • How to change the user interface language in Firefox to avoid so-called localization?

    - by hugh featherstone
    Since yesterday my Firefox thinks that everyone in Belgium speaks Flemish. This is a typically American problem: Americans NEVER seem to know where languages are spoken (I think they just guess) but for some reason believe that language localization software is cool. It is NOT, it is a curse. So suddenly I no longer have my user interface in English (which was fine) but now in Flemish, which looks and reads like a tongue disease and is not even the second language of the area I live in (the first language here is German, the second is French). How can I change back to English?

    Read the article

  • The Incremental Architect's Napkin – #3 – Make Evolvability inevitable

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/04/the-incremental-architectacutes-napkin-ndash-3-ndash-make-evolvability-inevitable.aspx The easier something is to measure, the more likely it will be produced. Deviations between what is and what should be can be readily detected. That´s what automated acceptance tests are for. That´s what sprint reviews in Scrum are for. It´s no wonder our software looks like it looks. It has all the traits whose conformance with requirements can easily be measured. And it´s lacking traits which cannot easily be measured. Evolvability (or Changeability) is such a trait. If an operation is correct, if an operation is fast enough, that can be checked very easily. But whether Evolvability is high or low, that cannot be checked by taking a measure or two. Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function or Cyclomatic Complexity or test coverage. But there is no threshold value signalling “evolvability too low”; also Evolvability is hardly tangible for the customer. Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it´s needed like any other requirement. Or even more. Because without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is built. Such fundamental importance is in stark contrast with its immeasurability. To compensate for this, Evolvability must be put at the very center of software development. It must become the hub around which everything else revolves. Since we cannot measure Evolvability, though, we cannot simply start watching it more. Instead we need to establish practices to keep it high (enough) at all times. Chefs have known that for a long time. That´s why everybody in a restaurant kitchen is constantly seeing to cleanliness. Hygiene is important, as is having clean tools at standardized locations. Only then can the health of the patrons be guaranteed and production efficiency kept constantly high. Still, a kitchen´s level of cleanliness is easier to measure than software Evolvability. That´s why important practices like reviews, pair programming, or TDD are not enough, I guess. What we need to keep Evolvability in focus and high is… to continually evolve. Change must not be something to avoid but to embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often. Scrum´s sprints of 4, 2, even 1 week are too long. Kanban´s flow of user stories is too unreliable; it takes as long as it takes. Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance checked by the customer (or his/her representative, e.g. a Product Owner). For me there are several reasons for such a fixed and short cycle time for each increment: Clear expectations: Absolute estimates (“This will take X days to complete.”) are near impossible in software development, as explained previously. Too much unplanned research and engineering work lurks in every feature. And then there are pervasive interruptions of work by peers and management. However, the smaller the scope, the better our absolute estimates become. That´s because we understand better what the requirements really are and what the solution should look like.
    But maybe more importantly, the shorter the timespan, the more we can control how we use our time. So much can happen over the course of a week or longer timespans. But if push comes to shove I can block out all distractions and interruptions for a day or possibly two. That´s why I believe we can give rough absolute estimates on 3 levels: Noon, Tonight, Tomorrow. Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you how long it will take you to implement a user story or bug fix, you can say, “It´ll be fixed by noon.”, or you can say, “I can manage to implement it until tonight before I leave.”, or you can say, “You´ll get it by tomorrow night at the latest.” Yes, I believe all else would be naive. If you´re not confident you can get something done by tomorrow night (some 34h from now) you just cannot reliably commit to any timeframe. That means you should not promise anything, and you should not even start working on the issue. So when estimating, use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories. If you like absolute estimates, here you go. But don´t do deep estimates. Don´t estimate dozens of issues; don´t think ahead (“Issue A is a Tonight, then B will be a Tomorrow, after that it´s C as a Noon, finally D is a Tonight - that´s what I´ll do this week.”). Just estimate so Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues. To be blunt: Yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future. But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow. Trust through reliability: Our trade is lacking trust. Customers don´t trust software companies/departments much. Managers don´t trust developers much. I find that perfectly understandable in the light of what we´re trying to accomplish: delivering software in the face of uncertainty by means of material goods production. Customers as well as managers still expect software development to be close to the production of houses or cars. But that´s a fundamental misunderstanding. Software development is development. It´s basically research. As software developers we´re constantly executing experiments to find out what really provides value to users. We don´t know what they need, we just have mediated hypotheses. That´s why we cannot reliably deliver on preposterous demands. So trust is out of the window in no time. If we switch to delivering in short cycles, though, we can regain trust. Because estimates - explicit or implicit - of up to 32 hours at most can be satisfied. I´d say: reliability over scope. It´s more important to reliably deliver what was promised than to cover a lot of requirement area. So when in doubt promise less - but deliver without delay. Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always. Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow. So don´t wait for some Kanban board to show you that flow can be improved by scheduling smaller stories. You don´t need to learn that the hard way. Just start with small batches of three different sizes. Fast feedback: What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end?
    Why let the mental model of the issue and its solution dissipate? If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly you probably are not in the mood anymore to go back to something you deemed done a long time ago. It´s boring, it´s frustrating to open up that mental box again. Learning is harder the longer it takes from event to feedback. Effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises. Checking finished issues for acceptance is the most important task of a Product Owner. It´s even more important than planning new issues. Because as long as work started is not released (accepted) it´s potential waste. So before starting new work, better make sure work already done has value. By putting the emphasis on acceptance rather than planning, true pull is established. As long as planning and starting work is more important, it´s a push process. Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow. After acceptance the developer(s) can start working on the next issue. Flexibility: As if reliability/trust and fast feedback for less waste weren´t enough economic incentive, there is flexibility. After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time. Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it´s neither D nor E, but G. And after G it´s D. With Spinning, priorities can be changed every 32 hours at the latest. And nothing is lost. Because what got accepted is of value. It provides an incremental value to the customer/user. Or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty. I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It´s unnecessary at best, and inflexible and wasteful at worst. If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty. Where the path is not clear, cannot be clear, make small steps so you´re able to change your course at any time. Premature completion: Customers/management are used to predetermining budgets. They want to know exactly how much to pay for a certain amount of requirements. That´s understandable. But it does not match the nature of software development. We should know that by now. Maybe there´s somewhere in the world some team who can consistently deliver on scope, quality, time, and budget. Great! Congratulations! I, however, haven´t seen such a team yet. Which does not mean it´s impossible, but I think it´s nothing I can recommend to strive for. Rather I´d say: Don´t try this at home. It might hurt you one way or the other. However, what we can do is allow customers/management to stop work on features at any moment.
    With Spinning, every 32 hours a feature can be declared finished - even though it might not be completed according to its initial definition. I think progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours? Isn´t it more important to constantly move forward? Step by step. We´re not running sprints, we´re not running marathons, not even ultra-marathons. We´re in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases). Whoever can only think in terms of completed requirements shuts out the chance for saving money. The requirements for a feature are mostly uncertain. So how does a Product Owner know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed. After each 4–32 hour increment the Product Owner can do an experiment (or invite users to an experiment) to see if a particular trait of the software system is already good enough. And if so, she can switch her attention to a different aspect. In the end, requirements A, B, C then could be finished just 70%, 80%, and 50%. What the heck? It´s good enough - for now. 33% money saved. Wouldn´t that be splendid? Isn´t that a stunning argument for any budget-sensitive customer? You can save money and still get what you need? Pull on practices: So far, in addition to more trust, more flexibility, less money spent, Spinning led to “doing less”, which also means less code, which of course means higher Evolvability per se. Last but not least, though, I think Spinning´s short acceptance cycles have one more effect. They exert pull-power on all sorts of practices known for increasing Evolvability. If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn´t 90% of the developer community practicing automated tests consistently? I think the answer is simple: Because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good enough correctness - at least in the short term. The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become a very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger. With Spinning that´s different. If you´re practicing Spinning you cannot avoid all those practices. With Spinning you very quickly realize you cannot deliver reliably even on your 32 hour promises. Spinning thus is pulling on developers to adopt principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do that. Because first the Product Owner and then management will notice an increasing difficulty in delivering value within 32 hours.
    There, finally, emerges a way to measure Evolvability: The more frequently developers tell the Product Owner there is no way to deliver anything worthy of feedback until tomorrow night, the poorer Evolvability is. Don´t count the “WTF!”, count the “No way!” utterances. In closing: For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development but be implemented within a framework ensuring (enough) Evolvability. Since Evolvability cannot be measured easily, I think we need to put software development “under pressure”. Software needs to be changed more often, in smaller increments. Each increment being relevant to the customer/user in some way. That does not mean each increment is worthy of shipment. It´s sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales. Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather sales should look at a stream of accepted increments (or incremental releases) and scoop from that whatever they find valuable. Sales and marketing need to realize they should work on what´s there, not what might be possible in the future. But I digress… In my view a Spinning cycle - which is not easy to reach, which requires practice - is the core practice to compensate for the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that´s the challenge we need to accept if we´re serious about increasing Evolvability. Fortunately higher Evolvability is not the only outcome of Spinning. Customers/management will like the increased flexibility and “getting more bang for the buck”.

    Read the article

  • Tablet design guide, Endeca patterns now available

    - by JuergenKress
    UX Direct, an Oracle program that offers consultants, partners, and customers the same scientifically proven and reusable user experience best practices that Oracle uses to build Oracle Applications, recently added links to a new design guide for creating tablet-based solutions for enterprise applications, and to the recently published Endeca User Interface Design Pattern Library. The tablet design guide is available from the UX Direct Home page. Tap the button under “Latest patterns & tools” for “Oracle Applications UX Tablet Guide.” It provides basic help for designers, developers, and project managers trying to approach tablet design and testing from an enterprise point of view. To hear what developers are saying about it, follow the links from this post on the User Experience Assistance blog. The newly released Endeca User Interface Design Pattern Library is also available from the UX Direct Home page and from a post on the User Experience Assistance blog. It describes principled ways to solve common user interface (UI) design problems related to search, faceted navigation, and discovery. The link between Simplified UI and Oracle UX strategy, plus content you can share on the cloud, ADf, tailoring, and more Simplified User Interface in Oracle Fusion Applications Fronts Oracle Cloud Offerings This new article on Simplified UI has just been posted on Usable Apps. Learn about the three themes - simplicity, mobility, and extensibility – that Simplified UI embodies. These same principles are guiding the development of the next generation of the Oracle user experience. Oracle's Applications User Experience Strategy: One Cloud User Experience, with Optimized UIs Where and How You Want This podcast from Misha Vaughan, Director, User Experience, is now available on the Oracle University Knowledge Center. It is available for partners and Oracle employees at this iLearning Link. Oracle Partner Builds User Experience That Hits Right Note for New Employees This new article on the Usable Apps website explores the experience of consultants at IntraSee as they implement a PeopleSoft onboarding process for Invesco, a global asset management company. The Feng Shui of Fusion This article in Oracle Scene is from Grant Ronald, Director of Product Management, on the Tools of Fusion: Oracle JDeveloper and Oracle ADF. Hands-On Workshop with Fusion Applications and ADF UX Desktop Design Patterns This post on the Voice of User Experience, or VoX, blog from Misha Vaughan describes a new kind of workshop for partners and a handful of internal Oracle sales folks on extending Oracle Fusion Applications and building custom applications with Application Development Framework (ADF) while maintaining the Oracle user experience. To learn more about the content that was delivered during this three-day workshop, visit the Usable Apps blog. Recent posts from a new blog series take a look at several of the topics discussed during the workshop. Applications User Experience Fundamentals Visual Design for any Enterprise User Interface / Art School in a Box Wireframing / Blueprinting Usable Applications Concepts. Tailoring videos This blog post from Richard Bingham, Applications Architect, on the Fusion Applications Developer Relations blog provides links to several videos that show many customization and development tasks using the Oracle Fusion Applications platform. 
SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: UX,Architecture,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata “To Roman Numerals”. And whether ITDD wasn´t a violation of TDD´s principle of leaving out “advanced topics like mocks”. I´d like to respond to his questions with this article. There´s more to say than fits into a commentary. Mocks and TDD: I don´t see in what way TDD avoids or is opposed to mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access you can´t avoid using mocks. Because you don´t want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That´s not what you usually see in TDD demonstrations. However, there´s a reason for that, as I tried to explain. I don´t use mocks as proxies for “expensive” resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced” or if you don´t want to use a tool like JustMock, then you don´t need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata. ITDD for “To Roman Numerals”: gustav asked for the kata “To Roman Numerals”. I won´t explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently. 1. Analyse: A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals”, as written in this “requirement document”, is not enough to start software development. Coding should only begin if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that´s your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that´s a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a wikipedia article.
    They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases I then try to develop an understanding of the problem domain. I´ll spare you that. The domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What´s more difficult, though, might be to find an efficient solution to convert into them automatically. 2. Solve: The usual TDD demonstration skips a solution finding phase. Like the interface exploration it´s mixed in with the implementation. But I don´t think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They´re simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code you had better have a plan for what to code. This does not mean you have to do “Big Design Up-Front”. It just means: Have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that´s limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand. Because it should mirror the process described by the logical or conceptual solution. Here´s my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers), there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base required. But what about IV, which looks like a contradiction to the above rule? It is not – if you accept that roman “digits” are not limited to single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some “digits” get subtracted. Here´s the list of roman “digits” with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M} Since I take IV, IX etc. as “digits”, translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1) then translation is a three phase process: Find all the factors, translate the factors found, compile the roman representation. Translation is just a look-up.
    Finding, though, needs some calculation: Find the highest remaining factor fitting into the value, remember it, and subtract it from the value; repeat with the remaining value and the remaining factors. Please note: This is just an algorithm. It´s not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test: test the translation, test the compilation, test finding the factors. Testing the translation primarily means to check if the map of factors and digits is comprehensive. That´s simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected I can slow down and repeat this process. 3. Implement: First I write a test for the acceptance test cases. It´s red because there´s no implementation even of the API. That´s in conformance with “TDD lore”, I´d say: Next I implement the API: The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in. Because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution: My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That´s what I´m trying now – one by one. I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition: As you can see I dare to test a private method. Yes. That´s a white box test. But as you´ll see it won´t make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here´s the implementation to satisfy the test: It´s as simple as possible. Just how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It´s a pattern: if you need to do something repeatedly, separate the tests for doing it once and for doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science: Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mind´s eye – others need to work their way towards it. Having two tests I find more important. Now for the next low hanging fruit: compilation. It´s even simpler than translation. A single test is enough, I guess.
    And normally I would not even have bothered to write that one, because the implementation is so simple. I don´t need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions: Again, I´m faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That´s left for a drill down with a test of the fake function: There are two main equivalence partitions, I guess: either the first factor is appropriate or some later one. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there´s always a matching factor. Which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied: Great, I can move on. Now for more than a single factor: Interestingly not just one test becomes green now, but all of them. Great! You might say that then I must not have done the simplest thing possible. And I would reply: I don´t care. I did the most obvious thing. But I also find this loop very simple. Even simpler than the recursion I had briefly thought of during the problem solving phase. And by the way: The acceptance tests also went green: Mission accomplished. At least functionality-wise. Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren´t perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That´s why I rather call it “cleanup”. First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant. Which leads to a small conversion in Find_factors(): And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class: And this is the final production code: Test coverage as reported by NCrunch is 100%: Reflexion: Is this the smallest possible code base for this kata? Surely not. You´ll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So-called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case. That´s why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution. That´s the leaves of the functional decomposition tree.
    Where things became fuzzy because the design did not cover any more details, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level, endure red high level tests, while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them. I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API. The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatically. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable – and not left to a refactoring step at the end which is often skipped for various reasons.   PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won´t skip them because of that. And there are no shortcuts during implementation because of that.
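    The implementation steps above refer to code that appeared as screenshots in the original post and is not reproduced in this text. Purely as an illustration of the algorithm described in the Solve section, here is a minimal C# sketch; the names ArabicRomanConversions, ToRoman, Find_factors and Translate are taken from the article, while everything else (the signatures and the tuple-based factor table) is an assumption and not the article's actual code:

    using System.Collections.Generic;
    using System.Linq;

    class ArabicRomanConversions
    {
        // Sketch only: factor table of roman "digits", including the subtractive pairs, ordered by value.
        static readonly (int Value, string Digit)[] Factors =
        {
            (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
            (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
            (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")
        };

        public static string ToRoman(int arabic)
        {
            var factors = Find_factors(arabic);       // e.g. 1954 -> 1000, 900, 50, 4
            var digits = factors.Select(Translate);   // e.g. 1000 -> "M"
            return string.Concat(digits);             // compile the roman representation
        }

        // Find the highest remaining factor fitting into the value, subtract it, repeat with the remainder.
        static IEnumerable<int> Find_factors(int value)
        {
            foreach (var f in Factors)
                while (value >= f.Value)
                {
                    yield return f.Value;
                    value -= f.Value;
                }
        }

        // Translation is just a look-up in the factor table.
        static string Translate(int factor) =>
            Factors.First(f => f.Value == factor).Digit;
    }

    For example, ToRoman(1954) walks the factor table, collects 1000, 900, 50 and 4, and compiles "MCMLIV" - the same Find factors / Translate / Compile flow the article describes.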

    Read the article

  • ASP.NET MVC2: Whereabouts do I set Current.User from a Session variable?

    - by Adam Brown
    Hi, I'm trying to get authentication and authorisation working in an ASP.NET MVC2 site. I've read loads of tutorials and some books which explain how you can use attributes to require authorisation (and membership in a role) like this: [Authorize(Roles="Admin")] public ActionResult Index() { return View(); } I've made classes that implement IIdentity and IPrincipal, I create a userPrincipal object once the user has successfully logged in and I add it to a session variable. When the user goes to another page I want it to set the HttpContext.Current.User to the object that I stored in the session, something like this: if (Session["User"] != null) { HttpContext.Current.User = Session["User"] as MyUser; } My question is: Whereabouts do I put the code directly above? I tried Application_AuthenticateRequest but it tells me Session state is not available in this context. Many thanks.

    Read the article

  • Why can't all users log in to a network computer using the <network computer name>/<user name> form?

    - by Haris
    There is a file server in my office that is not connected to my office LAN. In this server there are folders that are shared. This server is connected to a switch and a wireless router. Everyone who wants to access this server uses a wireless network connection. They log in to this server by providing a user name and password registered on the server. Some people can log in to my office file server by providing a user name in (the server name)/(user name) format, while others must use (the server IP address)/(user name) format. Why is it like this? I need everyone to be able to access the file server by providing a user name in (the server name)/(user name) format. I have tried to change the %SystemRoot%\system32\drivers\etc\hosts file, as some suggested, but it won't work. Any other suggestions?

    Read the article

  • Trouble with RSpec's with method

    - by Thiago
    Hi there, I've coded the following spec: it "should call user.invite_friend" do user = mock_model(User, :id => 1) other_user = mock_model(User, :id => 2) User.stub!(:find).with(user.id).and_return(user) User.stub!(:find).with(other_user.id).and_return(other_user) user.should_receive(:invite_friend).with(other_user) post :invite, { :id => other_user.id }, {:user_id => user.id} end But I'm getting the following error when I run the specs: NoMethodError in 'UsersController POST invite should call user.invite_friend' undefined method `find' for #<Class:0x86d6918> app/controllers/users_controller.rb:144:in `invite' ./spec/controllers/users_controller_spec.rb:13: What's the mistake? Without .with it works just fine, but I want different return values for different arguments to the stub method. The following controller's actions might be relevant: def invite me.invite_friend(User.find params[:id]) respond_to do |format| format.html { redirect_to user_path(params[:id]) } end end def me User.find(session[:user_id]) end

    Read the article

  • Ruby on Rails Join Table Associations

    - by Eef
    Hey, I have a Ruby on Rails application set up like so: User Model has_and_belongs_to_many :roles Role Model has_many :transactions has_and_belongs_to_many :users Transaction Model belongs_to :role This means that a join table is used called roles_users and it also means that a user can only see the transactions that have been assigned to them through roles, usage example: user = User.find(1) transactions = user.roles.first.transactions This will return the transactions which are associated with the first role assigned to the user. If a user has 2 roles assigned to them, at the moment to get the transactions associated with the second role I would do: transactions = user.roles.last.transactions I am basically trying to figure out a way to set up an association so I can grab the user's transactions via something like this, based on the roles defined in the association between the user and roles: user = User.find(1) transactions = user.transactions I am not sure if this is possible. I hope you understand what I am trying to do.

    Read the article

  • How do these user/userParam references relate to the Customer and Account lookups?

    - by plath
    In the following code example how do the user/userParam references relate to the Customer and Account lookups and what is the relationship between Customer and Account? // PersistenceManager pm = ...; Transaction tx = pm.currentTransaction(); User user = userService.currentUser(); List<Account> accounts = new ArrayList<Account>(); try { tx.begin(); Query query = pm.newQuery("select from Customer " + "where user == userParam " + "parameters User userParam"); List<Customer> customers = (List<Customer>) query.execute(user); query = pm.newQuery("select from Account " + "where parent-pk == keyParam " + "parameters Key keyParam"); for (Customer customer : customers) { accounts.addAll((List<Account>) query.execute(customer.key)); } } finally { if (tx.isActive()) { tx.rollback(); } }

    Read the article

  • Unit testing authorization in a Pylons app fails; cookies aren't been correctly set or recorded

    - by Ian Stevens
    I'm having an issue running unit tests for authorization in a Pylons app. It appears as though certain cookies set in the test case may not be correctly written or parsed. Cookies work fine when hitting the app with a browser. Here is my test case inside a paste-generated TestController: def test_good_login(self): r = self.app.post('/dologin', params={'login': self.user['username'], 'password': self.password}) r = r.follow() # Should only be one redirect to root assert 'http://localhost/' == r.request.url assert 'Dashboard' in r This is supposed to test that a login of an existing account forwards the user to the dashboard page. Instead, what happens is that the user is redirected back to the login. The first POST works, sets the user in the session and returns cookies. Although those cookies are sent in the follow request, they don't seem to be correctly parsed. I start by setting a breakpoint at the beginning of the above method and see what the login response returns: > nosetests --pdb --pdb-failure -s foo.tests.functional.test_account:TestMainController.test_good_login Running setup_config() from foo.websetup > /Users/istevens/dev/foo/foo/tests/functional/test_account.py(33)test_good_login() -> r = self.app.post('/dologin', params={'login': self.user['username'], 'password': self.password}) (Pdb) n > /Users/istevens/dev/foo/foo/tests/functional/test_account.py(34)test_good_login() -> r = r.follow() # Should only be one redirect to root (Pdb) p r.cookies_set {'auth_tkt': '"4c898eb72f7ad38551eb11e1936303374bd871934bd871833d19ad8a79000000!"'} (Pdb) p r.request.environ['REMOTE_USER'] '4bd871833d19ad8a79000000' (Pdb) p r.headers['Location'] 'http://localhost/?__logins=0' A session appears to be created and a cookie sent back. The browser is redirected to the root, not the login, which also indicates a successful login. If I step past the follow(), I get: > /Users/istevens/dev/foo/foo/tests/functional/test_account.py(35)test_good_login() -> assert 'http://localhost/' == r.request.url (Pdb) p r.request.headers {'Host': 'localhost:80', 'Cookie': 'auth_tkt=""\\"4c898eb72f7ad38551eb11e1936303374bd871934bd871833d19ad8a79000000!\\"""; '} (Pdb) p r.request.environ['REMOTE_USER'] *** KeyError: KeyError('REMOTE_USER',) (Pdb) p r.request.environ['HTTP_COOKIE'] 'auth_tkt=""\\"4c898eb72f7ad38551eb11e1936303374bd871934bd871833d19ad8a79000000!\\"""; ' (Pdb) p r.request.cookies {'auth_tkt': ''} (Pdb) p r <302 Found text/html location: http://localhost/login?__logins=1&came_from=http%3A%2F%2Flocalhost%2F body='302 Found...y. '/149> This indicates to me that the cookie was passed in on the request, although with dubious escaping. The environ appears to be without the session created on the prior request. The cookie has been copied to the environ from the headers, but the cookies in the request seems incorrectly set. Lastly, the user is redirected to the login page, indicating that the user isn't logged in. Authorization in the app is done via repoze.who and repoze.who.plugins.ldap with repoze.who_friendlyform performing the challenge. I'm using the stock tests.TestController created by paste: class TestController(TestCase): def __init__(self, *args, **kwargs): if pylons.test.pylonsapp: wsgiapp = pylons.test.pylonsapp else: wsgiapp = loadapp('config:%s' % config['__file__']) self.app = TestApp(wsgiapp) url._push_object(URLGenerator(config['routes.map'], environ)) TestCase.__init__(self, *args, **kwargs) That's a webtest.TestApp, by the way. 
The encoding of the cookie is done in webtest.TestApp using Cookie: >>> from Cookie import _quote >>> _quote('"84533cf9f661f97239208fb844a09a6d4bd8552d4bd8550c3d19ad8339000000!"') '"\\"84533cf9f661f97239208fb844a09a6d4bd8552d4bd8550c3d19ad8339000000!\\""' I trust that that's correct. My guess is that something on the response side is incorrectly parsing the cookie data into cookies in the server-side request. But what? Any ideas?

    Read the article

  • Query for model by key

    - by Jason Hall
    What I'm trying to do is query the datastore for a model where the key is not the key of an object I already have. Here's some code: class User(db.Model): partner = db.SelfReferenceProperty() def text_message(self, msg): user = User.get_or_insert(msg.sender) if not user.partner: # user doesn't have a partner, find them one # BUG: this line returns 'user' himself... :( other = db.Query(User).filter('partner =', None).get() if other: # connect users else: # no one to connect to! The idea is to find another User who doesn't have a partner, that isn't the User we already know. I've tried filter('key !=', user.key()), filter('__key__ !=', user.key()) and a couple others, and nothing returns another User who doesn't have a partner. filter('foo !=', user.key()) also returns nothing, for the record.

    Read the article

  • Eager load this rails association

    - by dombesz
    Hi, I have a Rails app which has a list of users. I have different relations between users, for example worked with, friend, preferred. When listing the users I have to decide if the current user can add a specific user to his friends. -if current_user.can_request_friendship_with(user) =add_to_friends(user) -else =remove_from_friends(user) -if current_user.can_request_worked_with(user) =add_to_worked_with(user) -else =remove_from_worked_with(user) The can_request_friendship_with(user) looks like: def can_request_friendship_with(user) !self.eql?(user) && !self.friendships.find_by_friend_id(user) end My problem is that this means in my case 4 queries per user. Listing 10 users means 40 queries. Could I somehow eager load this?

    Read the article

  • Converting a ResultSet to a List

    - by akshay
    How can I convert a ResultSet to a List? I am using the following code but it's not working properly: private List<User> convertToList(ResultSet rs) { List<User> userList = new ArrayList(); User user = new User(); try { while (rs.next()) { user.setId(rs.getInt("id")); user.setUsername(rs.getString("username")); user.setFname(rs.getString("fname")); user.setLname(rs.getString("lname")); user.setUsertype(rs.getInt("usertype")); user.setPasswd(rs.getString("passwd")); user.setEmail(rs.getString("email")); userList.add(user); } } catch (SQLException ex) { Logger.getLogger(UserDAO.class.getName()).log(Level.SEVERE, null, ex); } finally { closeConnection(); } return userList; }

    Read the article

  • Automatically "upgrade" user settings from previous version of app.config file?

    - by SqlRyan
    Every time I compile my app and the version number changes (I have an auto-incrementing build number), I lose the user-configured app.config settings, since they're stored in the AppData folder for a specific version. Essentially, every release of my application starts from scratch as far as user settings go. While this is a mild annoyance in development, it raises the question as I approach deployment/release - if I use the app.config to store my user settings, will the user's personalized settings be hosed every time they install a patch that changes the version number of my app? If so, is there an easy way to "upgrade" the settings from the previous release? I know that using HKCU in the registry is another option, but I like the ease of the My.Settings namespace, and I'd like to stay with app.config. Another SO question asks something similar, though the answer doesn't seem that clear. Will setting my MSI so it asks the user to upgrade be enough to preserve these user-level settings?

    Read the article

  • Java AD Authentication across Trusted Domains

    - by benjiisnotcool
    I am trying to implement Active Directory authentication in Java which will be run from a Linux machine. Our AD set-up will consist of multiple servers that share trust relationships with one another, so for our test environment we have two domain controllers: test1.ad1.foo.com, which trusts test2.ad2.bar.com. Using the code below I can successfully authenticate a user from test1 but not from test2:

    public class ADDetailsProvider implements ResultSetProvider {
        private String domain;
        private String user;
        private String password;

        public ADDetailsProvider(String user, String password) {
            // extract domain name
            if (user.contains("\\")) {
                this.user = user.substring((user.lastIndexOf("\\") + 1), user.length());
                this.domain = user.substring(0, user.lastIndexOf("\\"));
            } else {
                this.user = user;
                this.domain = "";
            }
            this.password = password;
        }

        /* Test from the command line */
        public static void main(String[] argv) throws SQLException {
            ResultSetProvider res = processADLogin(argv[0], argv[1]);
            ResultSet results = null;
            res.assignRowValues(results, 0);
            System.out.println(argv[0] + " " + argv[1]);
        }

        public boolean assignRowValues(ResultSet results, int currentRow) throws SQLException {
            // Only want a single row
            if (currentRow >= 1) return false;
            try {
                ADAuthenticator adAuth = new ADAuthenticator();
                LdapContext ldapCtx = adAuth.authenticate(this.domain, this.user, this.password);
                NamingEnumeration userDetails = adAuth.getUserDetails(ldapCtx, this.user);
                // Fill the result set (throws SQLException).
                while (userDetails.hasMoreElements()) {
                    Attribute attr = (Attribute) userDetails.next();
                    results.updateString(attr.getID(), attr.get().toString());
                }
                results.updateInt("authenticated", 1);
                return true;
            } catch (FileNotFoundException fnf) {
                Logger.getAnonymousLogger().log(Level.WARNING,
                    "Caught File Not Found Exception trying to read cris_authentication.properties");
                results.updateInt("authenticated", 0);
                return false;
            } catch (IOException ioe) {
                Logger.getAnonymousLogger().log(Level.WARNING, "Caught IO Exception processing login");
                results.updateInt("authenticated", 0);
                return false;
            } catch (AuthenticationException aex) {
                Logger.getAnonymousLogger().log(Level.WARNING,
                    "Caught Authentication Exception attempting to bind to LDAP for [{0}]", this.user);
                results.updateInt("authenticated", 0);
                return true;
            } catch (NamingException ne) {
                Logger.getAnonymousLogger().log(Level.WARNING,
                    "Caught Naming Exception performing user search or LDAP bind for [{0}]", this.user);
                results.updateInt("authenticated", 0);
                return true;
            }
        }

        public void close() {
            // nothing needed here
        }

        /**
         * This method is called via a Postgres function binding to access the
         * functionality provided by this class.
         */
        public static ResultSetProvider processADLogin(String user, String password) {
            return new ADDetailsProvider(user, password);
        }
    }

    public class ADAuthenticator {

        public ADAuthenticator() throws FileNotFoundException, IOException {
            Properties props = new Properties();
            InputStream inStream = this.getClass().getClassLoader()
                .getResourceAsStream("com/bar/foo/ad/authentication.properties");
            props.load(inStream);
            this.domain = props.getProperty("ldap.domain");
            inStream.close();
        }

        public LdapContext authenticate(String domain, String user, String pass)
                throws AuthenticationException, NamingException, IOException {
            Hashtable env = new Hashtable();
            this.domain = domain;
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://test1.ad1.foo.com:" + 3268);
            env.put(Context.SECURITY_AUTHENTICATION, "simple");
            env.put(Context.REFERRAL, "follow");
            env.put(Context.SECURITY_PRINCIPAL, (domain + "\\" + user));
            env.put(Context.SECURITY_CREDENTIALS, pass);
            // Bind using specified username and password
            LdapContext ldapCtx = new InitialLdapContext(env, null);
            return ldapCtx;
        }

        public NamingEnumeration getUserDetails(LdapContext ldapCtx, String user) throws NamingException {
            // List of attributes to return from LDAP query
            String returnAttributes[] = {"ou", "sAMAccountName", "givenName", "sn", "memberOf"};
            // Create the search controls
            SearchControls searchCtls = new SearchControls();
            searchCtls.setReturningAttributes(returnAttributes);
            // Specify the search scope
            searchCtls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            // Specify the user to search against
            String searchFilter = "(&(objectClass=*)(sAMAccountName=" + user + "))";
            // Perform the search
            NamingEnumeration answer = ldapCtx.search("dc=dev4,dc=dbt,dc=ukhealth,dc=local", searchFilter, searchCtls);
            // Only care about the first tuple
            Attributes userAttributes = ((SearchResult) answer.next()).getAttributes();
            if (userAttributes.size() <= 0) throw new NamingException();
            return (NamingEnumeration) userAttributes.getAll();
        }
    }

    From what I understand of the trust relationship, if test1 receives a login attempt for a user in test2, it should forward the login attempt on to test2, working this out from the user's domain name. Is this correct, am I missing something, or is this not possible using the method above?

    --EDIT--
    The stack trace from the LDAP bind is:

    {java.naming.provider.url=ldap://test1.ad1.foo.com:3268, java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory, java.naming.security.authentication=simple, java.naming.referral=follow}
    30-Oct-2012 13:16:02 ADDetailsProvider assignRowValues
    WARNING: Caught Authentication Exception attempting to bind to LDAP for [trusttest]
    Auth error is [LDAP: error code 49 - 80090308: LdapErr: DSID-0C0903A9, comment: AcceptSecurityContext error, data 52e, v1db0]
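
    One thing that is often suggested for this kind of cross-domain setup (a sketch only, not a verified fix for the trust configuration above) is to bind with the user's userPrincipalName instead of the DOMAIN\user form, since error 49 with data 52e generally indicates that the credentials were rejected rather than that the referral failed. The UPN suffix ad2.bar.com below is an assumption based on the question's second domain; the global catalog host and port are taken from the code above.

        import java.util.Hashtable;
        import javax.naming.Context;
        import javax.naming.NamingException;
        import javax.naming.ldap.InitialLdapContext;
        import javax.naming.ldap.LdapContext;

        // Minimal sketch: bind against the global catalog with a UPN-style principal.
        public class UpnBindSketch {
            public static LdapContext bind(String user, String pass) throws NamingException {
                Hashtable<String, Object> env = new Hashtable<>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
                env.put(Context.PROVIDER_URL, "ldap://test1.ad1.foo.com:3268");
                env.put(Context.SECURITY_AUTHENTICATION, "simple");
                // UPN form (user@suffix) instead of DOMAIN\user; the suffix is assumed here.
                env.put(Context.SECURITY_PRINCIPAL, user + "@ad2.bar.com");
                env.put(Context.SECURITY_CREDENTIALS, pass);
                return new InitialLdapContext(env, null);
            }
        }

    Another variant commonly tried is pointing PROVIDER_URL directly at a domain controller for ad2.bar.com when the supplied domain indicates a user from that domain.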

    Read the article

  • Can I do this in only one query?

    - by Paté
    Merry Christmas everyone. I know my way around SQL, but I'm having a hard time figuring this one out. First, here are my (example) tables:

    User: id, name
    friend: from //userid, to //userid

    If user 1 is friends with user 10, there is a row (1,10). User 1 cannot be friends with user 10 unless user 10 is also friends with user 1, so you have both (1,10) and (10,1). It may look odd, but I need those two rows per relationship. Now I'm trying to write a query that selects the users who share the most mutual friends with a given user. For example, user 1 is friends with users 10, 9 and 7, and user 8 is also friends with 10, 9 and 7, so I want to suggest user 8 to user 1 as someone to invite (like Facebook). I want the first 10 people with the most mutual friends. The output would look like: User, NumOfMutualFriends. I don't know if that can be done in a single query? Thanks in advance for any help.
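
    One way this is often written (a sketch only - it assumes the friend table described above; because from and to are reserved words in most dialects they appear here as from_id and to_id, and LIMIT is the MySQL/PostgreSQL spelling) is a self-join that walks from the given user's friends to their friends, excludes people already befriended, and counts per candidate:

        -- Suggest the 10 users who share the most friends with user 1
        SELECT f2.to_id  AS suggested_user,
               COUNT(*)  AS num_mutual_friends
        FROM friend f1                                   -- user 1 -> each of his friends
        JOIN friend f2 ON f2.from_id = f1.to_id          -- each friend -> that friend's friends
        WHERE f1.from_id = 1
          AND f2.to_id <> 1                              -- skip user 1 himself
          AND f2.to_id NOT IN (                          -- skip existing friends
              SELECT to_id FROM friend WHERE from_id = 1)
        GROUP BY f2.to_id
        ORDER BY COUNT(*) DESC
        LIMIT 10;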

    Read the article

  • Creating a dynamic, extensible C# Expando Object

    - by Rick Strahl
    I love dynamic functionality in a strongly typed language because it offers us the best of both worlds. In C# (or any of the main .NET languages) we now have the dynamic type that provides a host of dynamic features for the static C# language. One place where I've found dynamic to be incredibly useful is in building extensible types or types that expose traditionally non-object data (like dictionaries) in easier to use and more readable syntax. I wrote about a couple of these for accessing old school ADO.NET DataRows and DataReaders more easily for example. These classes are dynamic wrappers that provide easier syntax and auto-type conversions which greatly simplifies code clutter and increases clarity in existing code. ExpandoObject in .NET 4.0 Another great use case for dynamic objects is the ability to create extensible objects - objects that start out with a set of static members and then can add additional properties and even methods dynamically. The .NET 4.0 framework actually includes an ExpandoObject class which provides a very dynamic object that allows you to add properties and methods on the fly and then access them again. For example with ExpandoObject you can do stuff like this:dynamic expand = new ExpandoObject(); expand.Name = "Rick"; expand.HelloWorld = (Func<string, string>) ((string name) => { return "Hello " + name; }); Console.WriteLine(expand.Name); Console.WriteLine(expand.HelloWorld("Dufus")); Internally ExpandoObject uses a Dictionary like structure and interface to store properties and methods and then allows you to add and access properties and methods easily. As cool as ExpandoObject is it has a few shortcomings too: It's a sealed type so you can't use it as a base class It only works off 'properties' in the internal Dictionary - you can't expose existing type data It doesn't serialize to XML or with DataContractSerializer/DataContractJsonSerializer Expando - A truly extensible Object ExpandoObject is nice if you just need a dynamic container for a dictionary like structure. However, if you want to build an extensible object that starts out with a set of strongly typed properties and then allows you to extend it, ExpandoObject does not work because it's a sealed class that can't be inherited. I started thinking about this very scenario for one of my applications I'm building for a customer. In this system we are connecting to various different user stores. Each user store has the same basic requirements for username, password, name etc. But then each store also has a number of extended properties that is available to each application. In the real world scenario the data is loaded from the database in a data reader and the known properties are assigned from the known fields in the database. All unknown fields are then 'added' to the expando object dynamically. In the past I've done this very thing with a separate property - Properties - just like I do for this class. But the property and dictionary syntax is not ideal and tedious to work with. I started thinking about how to represent these extra property structures. One way certainly would be to add a Dictionary, or an ExpandoObject to hold all those extra properties. 
But wouldn't it be nice if the application could actually extend an existing object that looks something like this as you can with the Expando object:public class User : Westwind.Utilities.Dynamic.Expando { public string Email { get; set; } public string Password { get; set; } public string Name { get; set; } public bool Active { get; set; } public DateTime? ExpiresOn { get; set; } } and then simply start extending the properties of this object dynamically? Using the Expando object I describe later you can now do the following:[TestMethod] public void UserExampleTest() { var user = new User(); // Set strongly typed properties user.Email = "[email protected]"; user.Password = "nonya123"; user.Name = "Rickochet"; user.Active = true; // Now add dynamic properties dynamic duser = user; duser.Entered = DateTime.Now; duser.Accesses = 1; // you can also add dynamic props via indexer user["NickName"] = "AntiSocialX"; duser["WebSite"] = "http://www.west-wind.com/weblog"; // Access strong type through dynamic ref Assert.AreEqual(user.Name,duser.Name); // Access strong type through indexer Assert.AreEqual(user.Password,user["Password"]); // access dyanmically added value through indexer Assert.AreEqual(duser.Entered,user["Entered"]); // access index added value through dynamic Assert.AreEqual(user["NickName"],duser.NickName); // loop through all properties dynamic AND strong type properties (true) foreach (var prop in user.GetProperties(true)) { object val = prop.Value; if (val == null) val = "null"; Console.WriteLine(prop.Key + ": " + val.ToString()); } } As you can see this code somewhat blurs the line between a static and dynamic type. You start with a strongly typed object that has a fixed set of properties. You can then cast the object to dynamic (as I discussed in my last post) and add additional properties to the object. You can also use an indexer to add dynamic properties to the object. To access the strongly typed properties you can use either the strongly typed instance, the indexer or the dynamic cast of the object. Personally I think it's kinda cool to have an easy way to access strongly typed properties by string which can make some data scenarios much easier. To access the 'dynamically added' properties you can use either the indexer on the strongly typed object, or property syntax on the dynamic cast. Using the dynamic type allows all three modes to work on both strongly typed and dynamic properties. Finally you can iterate over all properties, both dynamic and strongly typed if you chose. Lots of flexibility. Note also that by default the Expando object works against the (this) instance meaning it extends the current object. You can also pass in a separate instance to the constructor in which case that object will be used to iterate over to find properties rather than this. Using this approach provides some really interesting functionality when use the dynamic type. To use this we have to add an explicit constructor to the Expando subclass:public class User : Westwind.Utilities.Dynamic.Expando { public string Email { get; set; } public string Password { get; set; } public string Name { get; set; } public bool Active { get; set; } public DateTime? ExpiresOn { get; set; } public User() : base() { } // only required if you want to mix in seperate instance public User(object instance) : base(instance) { } } to allow the instance to be passed. 
When you do you can now do:[TestMethod] public void ExpandoMixinTest() { // have Expando work on Addresses var user = new User( new Address() ); // cast to dynamicAccessToPropertyTest dynamic duser = user; // Set strongly typed properties duser.Email = "[email protected]"; user.Password = "nonya123"; // Set properties on address object duser.Address = "32 Kaiea"; //duser.Phone = "808-123-2131"; // set dynamic properties duser.NonExistantProperty = "This works too"; // shows default value Address.Phone value Console.WriteLine(duser.Phone); } Using the dynamic cast in this case allows you to access *three* different 'objects': The strong type properties, the dynamically added properties in the dictionary and the properties of the instance passed in! Effectively this gives you a way to simulate multiple inheritance (which is scary - so be very careful with this, but you can do it). How Expando works Behind the scenes Expando is a DynamicObject subclass as I discussed in my last post. By implementing a few of DynamicObject's methods you can basically create a type that can trap 'property missing' and 'method missing' operations. When you access a non-existant property a known method is fired that our code can intercept and provide a value for. Internally Expando uses a custom dictionary implementation to hold the dynamic properties you might add to your expandable object. Let's look at code first. The code for the Expando type is straight forward and given what it provides relatively short. Here it is.using System; using System.Collections.Generic; using System.Linq; using System.Dynamic; using System.Reflection; namespace Westwind.Utilities.Dynamic { /// <summary> /// Class that provides extensible properties and methods. This /// dynamic object stores 'extra' properties in a dictionary or /// checks the actual properties of the instance. /// /// This means you can subclass this expando and retrieve either /// native properties or properties from values in the dictionary. /// /// This type allows you three ways to access its properties: /// /// Directly: any explicitly declared properties are accessible /// Dynamic: dynamic cast allows access to dictionary and native properties/methods /// Dictionary: Any of the extended properties are accessible via IDictionary interface /// </summary> [Serializable] public class Expando : DynamicObject, IDynamicMetaObjectProvider { /// <summary> /// Instance of object passed in /// </summary> object Instance; /// <summary> /// Cached type of the instance /// </summary> Type InstanceType; PropertyInfo[] InstancePropertyInfo { get { if (_InstancePropertyInfo == null && Instance != null) _InstancePropertyInfo = Instance.GetType().GetProperties(BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly); return _InstancePropertyInfo; } } PropertyInfo[] _InstancePropertyInfo; /// <summary> /// String Dictionary that contains the extra dynamic values /// stored on this object/instance /// </summary> /// <remarks>Using PropertyBag to support XML Serialization of the dictionary</remarks> public PropertyBag Properties = new PropertyBag(); //public Dictionary<string,object> Properties = new Dictionary<string, object>(); /// <summary> /// This constructor just works off the internal dictionary and any /// public properties of this object. /// /// Note you can subclass Expando. /// </summary> public Expando() { Initialize(this); } /// <summary> /// Allows passing in an existing instance variable to 'extend'. 
/// </summary> /// <remarks> /// You can pass in null here if you don't want to /// check native properties and only check the Dictionary! /// </remarks> /// <param name="instance"></param> public Expando(object instance) { Initialize(instance); } protected virtual void Initialize(object instance) { Instance = instance; if (instance != null) InstanceType = instance.GetType(); } /// <summary> /// Try to retrieve a member by name first from instance properties /// followed by the collection entries. /// </summary> /// <param name="binder"></param> /// <param name="result"></param> /// <returns></returns> public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; // first check the Properties collection for member if (Properties.Keys.Contains(binder.Name)) { result = Properties[binder.Name]; return true; } // Next check for Public properties via Reflection if (Instance != null) { try { return GetProperty(Instance, binder.Name, out result); } catch { } } // failed to retrieve a property result = null; return false; } /// <summary> /// Property setter implementation tries to retrieve value from instance /// first then into this object /// </summary> /// <param name="binder"></param> /// <param name="value"></param> /// <returns></returns> public override bool TrySetMember(SetMemberBinder binder, object value) { // first check to see if there's a native property to set if (Instance != null) { try { bool result = SetProperty(Instance, binder.Name, value); if (result) return true; } catch { } } // no match - set or add to dictionary Properties[binder.Name] = value; return true; } /// <summary> /// Dynamic invocation method. Currently allows only for Reflection based /// operation (no ability to add methods dynamically). /// </summary> /// <param name="binder"></param> /// <param name="args"></param> /// <param name="result"></param> /// <returns></returns> public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result) { if (Instance != null) { try { // check instance passed in for methods to invoke if (InvokeMethod(Instance, binder.Name, args, out result)) return true; } catch { } } result = null; return false; } /// <summary> /// Reflection Helper method to retrieve a property /// </summary> /// <param name="instance"></param> /// <param name="name"></param> /// <param name="result"></param> /// <returns></returns> protected bool GetProperty(object instance, string name, out object result) { if (instance == null) instance = this; var miArray = InstanceType.GetMember(name, BindingFlags.Public | BindingFlags.GetProperty | BindingFlags.Instance); if (miArray != null && miArray.Length > 0) { var mi = miArray[0]; if (mi.MemberType == MemberTypes.Property) { result = ((PropertyInfo)mi).GetValue(instance,null); return true; } } result = null; return false; } /// <summary> /// Reflection helper method to set a property value /// </summary> /// <param name="instance"></param> /// <param name="name"></param> /// <param name="value"></param> /// <returns></returns> protected bool SetProperty(object instance, string name, object value) { if (instance == null) instance = this; var miArray = InstanceType.GetMember(name, BindingFlags.Public | BindingFlags.SetProperty | BindingFlags.Instance); if (miArray != null && miArray.Length > 0) { var mi = miArray[0]; if (mi.MemberType == MemberTypes.Property) { ((PropertyInfo)mi).SetValue(Instance, value, null); return true; } } return false; } /// <summary> /// Reflection helper method to invoke a 
method /// </summary> /// <param name="instance"></param> /// <param name="name"></param> /// <param name="args"></param> /// <param name="result"></param> /// <returns></returns> protected bool InvokeMethod(object instance, string name, object[] args, out object result) { if (instance == null) instance = this; // Look at the instanceType var miArray = InstanceType.GetMember(name, BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Instance); if (miArray != null && miArray.Length > 0) { var mi = miArray[0] as MethodInfo; result = mi.Invoke(Instance, args); return true; } result = null; return false; } /// <summary> /// Convenience method that provides a string Indexer /// to the Properties collection AND the strongly typed /// properties of the object by name. /// /// // dynamic /// exp["Address"] = "112 nowhere lane"; /// // strong /// var name = exp["StronglyTypedProperty"] as string; /// </summary> /// <remarks> /// The getter checks the Properties dictionary first /// then looks in PropertyInfo for properties. /// The setter checks the instance properties before /// checking the Properties dictionary. /// </remarks> /// <param name="key"></param> /// /// <returns></returns> public object this[string key] { get { try { // try to get from properties collection first return Properties[key]; } catch (KeyNotFoundException ex) { // try reflection on instanceType object result = null; if (GetProperty(Instance, key, out result)) return result; // nope doesn't exist throw; } } set { if (Properties.ContainsKey(key)) { Properties[key] = value; return; } // check instance for existance of type first var miArray = InstanceType.GetMember(key, BindingFlags.Public | BindingFlags.GetProperty); if (miArray != null && miArray.Length > 0) SetProperty(Instance, key, value); else Properties[key] = value; } } /// <summary> /// Returns and the properties of /// </summary> /// <param name="includeProperties"></param> /// <returns></returns> public IEnumerable<KeyValuePair<string,object>> GetProperties(bool includeInstanceProperties = false) { if (includeInstanceProperties && Instance != null) { foreach (var prop in this.InstancePropertyInfo) yield return new KeyValuePair<string, object>(prop.Name, prop.GetValue(Instance, null)); } foreach (var key in this.Properties.Keys) yield return new KeyValuePair<string, object>(key, this.Properties[key]); } /// <summary> /// Checks whether a property exists in the Property collection /// or as a property on the instance /// </summary> /// <param name="item"></param> /// <returns></returns> public bool Contains(KeyValuePair<string, object> item, bool includeInstanceProperties = false) { bool res = Properties.ContainsKey(item.Key); if (res) return true; if (includeInstanceProperties && Instance != null) { foreach (var prop in this.InstancePropertyInfo) { if (prop.Name == item.Key) return true; } } return false; } } } Although the Expando class supports an indexer, it doesn't actually implement IDictionary or even IEnumerable. It only provides the indexer and Contains() and GetProperties() methods, that work against the Properties dictionary AND the internal instance. The reason for not implementing IDictionary is that a) it doesn't add much value since you can access the Properties dictionary directly and that b) I wanted to keep the interface to class very lean so that it can serve as an entity type if desired. 
    Implementing IDictionary (or even IEnumerable) also causes LINQ extension methods to pop up on the type, which obscures the property interface and would only confuse the purpose of the type. IDictionary and IEnumerable are also problematic for XML and JSON serialization - the XML Serializer doesn't serialize IDictionary<string,object>, nor does the DataContractSerializer. The JavaScriptSerializer does serialize, but it treats the entire object like a dictionary and doesn't serialize the strongly typed properties of the type, only the dictionary values, which is also not desirable. Hence the decision to stick with only implementing the indexer to support the user["CustomProperty"] functionality and to leave iteration functions to the publicly exposed Properties dictionary. Note that the Dictionary used here is a custom PropertyBag class I created to allow serialization to work. One important aspect for my apps is that whatever custom properties get added have to be accessible to AJAX clients, since the particular app I'm working on is a Single Page Web app where most of the Web access is through JSON AJAX calls. PropertyBag can serialize to XML and one-way serialize to JSON using the JavaScript serializer (not the DCS serializers though). The key components that make Expando work in this code are the Properties dictionary and the TryGetMember() and TrySetMember() methods. The Properties collection is public, so if you choose you can access the collection explicitly to get better performance or to manipulate the members in internal code (like loading up dynamic values from a database). Notice that TryGetMember() and TrySetMember() both work against the dictionary AND the internal instance to retrieve and set properties. This means that user["Name"] works against native properties of the object, as does user["Name"] = "RogaDugDog".

    What's your Use Case? This is still an early prototype, but I've plugged it into one of my customer's applications and so far it's working very well. The key features for me were the ability to easily extend the type with values coming from a database and to expose those values in a nice and easy-to-use manner. I'm also finding that using this type of object for ViewModels works very well for adding custom properties to view models. I suspect there will be lots of uses for this - I've been using the extra-dictionary approach to extensibility for years - using a dynamic type to make the syntax cleaner is just a bonus here. What can you think of to use this for?

    Resources: Source Code and Tests (GitHub) | Also integrated in Westwind.Utilities of the West Wind Web Toolkit | West Wind Utilities NuGet

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp  .NET  Dynamic Types

    Read the article
