Search Results

Search found 9439 results on 378 pages for 'telepath needed'.

Page 77/378

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals" - and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I'd like to respond to his questions with this article; there's more to say than fits into a comment.

    Mocks and TDD

    I don't see how TDD avoids or opposes mocks. TDD and mocks are orthogonal: TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps, less need arises for mocks. But then... if the functionality you need to implement requires "expensive" resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain: I don't use mocks as proxies for "expensive" resources. Rather, they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as "advanced", or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by doing a kata.

    ITDD for "To Roman Numerals"

    gustav asked for the kata "To Roman Numerals". I won't explain the requirements again; you can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase; it should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case to me means describing the API of the required functionality. "[D]esign a program to work with Roman numerals", as written in this "requirement document", is not enough to start software development from. Coding should only begin once the interface between the "system under development" and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order; it might be necessary to show several interface mock-ups to the customer - even if that's your fellow developer. Designing the interface is a task of its own, and it should not be mixed with implementing the required functionality behind the interface. Unfortunately, this happens quite often in TDD demonstrations: TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks and activities.

    In the case of this kata, the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases are given from the point of view of a customer. So I just "invent" some acceptance test cases by picking roman numerals from a Wikipedia article. They are supposed to be just "typical examples" without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that: the domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult might be finding an efficient way to convert into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution-finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done; I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code, you'd better have a plan for what to code. This does not mean you have to do "Big Design Up-Front". It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited, so the logical solution does not need to be limited or preliminary or tentative either. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman "encoding" is different. There is no base (like 10 for arabic numbers); there are just digits of different values, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1], and the number is still the sum of the members of this list. The roman "encoding" thus is simpler than the arabic: each "digit" can be taken at face value; no multiplication with a base is required. But what about IV, which looks like a contradiction to the above rule? It is not - if you accept roman "digits" that are not limited to single characters. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as "digits" themselves. Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Before, it would have been understood as M-C+M+L-I+V - which is more difficult, because here some "digits" get subtracted. Here's the list of roman "digits" with their values:

    {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as "digits", translating an arabic number becomes trivial. I just need to find the values of the roman "digits" making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is this process:

    - Find all the factors
    - Translate the factors found
    - Compile the roman representation

    Translation is just a look-up. Finding, though, needs some calculation:

    - Find the highest remaining factor fitting in the value
    - Remember it and subtract it from the value
    - Repeat with the remaining value and the remaining factors

    Please note: this is just an algorithm, not code - even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

    With this solution in hand, I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are several aspects to test: the translation, the compilation, and finding the factors. Testing the translation primarily means checking whether the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: first check whether an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check whether an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check whether a number consisting of the same factor twice (e.g. 2000=[M,M]) is processed correctly. Finally check whether an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It's red, because there's no implementation - not even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red, of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests as quickly as possible, but to implement my solution in a stepwise manner. That I do by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level; I just have to implement them correctly. That's what I'm trying now - one by one.

    I start with a simple "detail function", Translate(), and with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible - right how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and for doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away; splitting it into two steps here is just for "educational purposes". How small your implementation steps are is a matter of your programming competency: some "see" the final code right away before their mental eye - others need to work their way towards it. Having two tests is what I find more important.

    Now for the next low-hanging fruit: compilation. It's even simpler than translation; a single test is enough, I guess. And normally I would not even have bothered to write that one, because the implementation is so simple - I don't need to test .NET framework functionality. But again: it serves the educational purpose.

    Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions, but still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again I'm faking the implementation first: I focus on just the first test case, no looping yet. Faking lets me stay on a high level of abstraction; I can write down the implementation of the solution without bothering myself with the details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one is. The implementation seems easy, and both test cases are green. (Of course this only works on the premise that there's always a matching factor - which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on. Now for more than a single factor. Interestingly, not just one test becomes green now, but all of them. Great! You might say that then I must not have done the simplest thing possible. And I would reply: I don't care; I did the most obvious thing. But I also find this loop very simple - even simpler than the recursion I had briefly thought of during the problem-solving phase. And by the way: the acceptance tests went green too. Mission accomplished - at least functionality-wise.

    Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, though, because I wrote the code in a top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet, but this way I saved myself from refactoring tediousness. At the end some refactoring is required after all - but maybe in a different way than you would expect. That's why I rather call it "cleanup". First I remove duplication: there are two places where factors are defined, in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me; they only have temporary value; they are brittle. Only acceptance tests need to remain. However, I carry over the single-"digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman "digits". This leaves my final test class and the final production code. Test coverage as reported by NCrunch is 100%.

    Reflection

    Is this the smallest possible code base for this kata? Surely not; you'll find more concise solutions on the internet. But LOC are of relatively little concern - as long as I can understand the code quickly. So-called "elegant" code, however, often is not easy to understand. The same goes for KISS code - especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did: I first understood and solved the problem on a conceptual level, then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution - the leaves of the functional decomposition tree. Where things became fuzzy because the design did not cover the details - as with Find_factors() - I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: encapsulation of the implementation details was not compromised - private methods could naturally stay private, and I did not need to make them internal or public just to be able to test them; and I was able to write focused tests for small aspects of the solution, with no need to test everything through the solution root, the API.

    The bottom line for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development - being a researcher, being an engineer, being a craftsman - are represented as different phases. First find out what there is. Then devise a solution. Then code the solution - manifest it in code. Writing tests first is a good practice, but it should not be taken dogmatically, and above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable - and not left to a refactoring step at the end, which often gets skipped for various reasons.

    PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that, and there are no shortcuts during implementation because of that.
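    For reference, here is a minimal C# sketch of the solution structure described above - the factor map, Find_factors(), Translate(), and the compile step. The class and method names follow the article; the bodies are a reconstruction for illustration, not the original code:

        // Reconstruction of the solution described in the article; bodies are a sketch.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class ArabicRomanConversions
        {
            // The map of roman "digits" and their values, highest factor first.
            private static readonly Tuple<int, string>[] _factorMap =
            {
                Tuple.Create(1000, "M"), Tuple.Create(900, "CM"), Tuple.Create(500, "D"),
                Tuple.Create(400, "CD"), Tuple.Create(100, "C"), Tuple.Create(90, "XC"),
                Tuple.Create(50, "L"),   Tuple.Create(40, "XL"), Tuple.Create(10, "X"),
                Tuple.Create(9, "IX"),   Tuple.Create(5, "V"),   Tuple.Create(4, "IV"),
                Tuple.Create(1, "I")
            };

            public string ToRoman(int arabic)
            {
                var factors = Find_factors(arabic);
                var digits = Translate(factors);
                return Compile(digits);
            }

            // Find the highest remaining factor fitting in the value,
            // remember and subtract it, repeat with the remaining value.
            private static IEnumerable<int> Find_factors(int value)
            {
                foreach (var factor in _factorMap.Select(f => f.Item1))
                    while (value >= factor)
                    {
                        yield return factor;
                        value -= factor;
                    }
            }

            // Translation is just a look-up in the factor map.
            private static IEnumerable<string> Translate(IEnumerable<int> factors) =>
                factors.Select(f => _factorMap.First(m => m.Item1 == f).Item2);

            // Compilation concatenates the translated "digits".
            private static string Compile(IEnumerable<string> digits) =>
                string.Concat(digits);
        }

    For example, ToRoman(1954) finds the factors 1000, 900, 50, 4 and yields "MCMLIV".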

    Read the article

  • Homemaking a 2d soft body physics engine

    - by Griffin
    Hey, so I've decided to code my own 2D soft-body physics engine in C++, since apparently none exist, and I'm starting with only a general idea/understanding of how the physics work and could be simulated: by giving points, and the connections between points, properties such as elasticity, density, mass, shape retention, friction, stickiness, etc. What I want is a starting point: resources and helpful examples/sites that could give me the specifics needed to actually build this, such as the equations and required physics knowledge. It would be great if anyone out there would also share their attempts or ideas. Finally, I was wondering if it would be possible to: use the source code of an existing 3D engine such as Bullet and transform it to be 2D-based? Or use the source code of a 2D rigid-body physics engine such as Box2D as a starting point?
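    For the mass-spring starting point specifically, the core equations are just Hooke's law plus a damping term, integrated each frame. A minimal sketch of that update loop - written in C# for readability, though the math ports directly to C++; all names and constants are illustrative, not from any existing engine:

        // Minimal damped mass-spring core - one way a 2D soft body is usually simulated.
        using System;
        using System.Collections.Generic;

        public struct Vec2
        {
            public float X, Y;
            public Vec2(float x, float y) { X = x; Y = y; }
            public static Vec2 operator +(Vec2 a, Vec2 b) => new Vec2(a.X + b.X, a.Y + b.Y);
            public static Vec2 operator -(Vec2 a, Vec2 b) => new Vec2(a.X - b.X, a.Y - b.Y);
            public static Vec2 operator *(Vec2 a, float s) => new Vec2(a.X * s, a.Y * s);
            public float Length() => (float)Math.Sqrt(X * X + Y * Y);
            public static float Dot(Vec2 a, Vec2 b) => a.X * b.X + a.Y * b.Y;
        }

        public class PointMass
        {
            public Vec2 Position, Velocity, Force;
            public float Mass = 1f;
        }

        public class Spring
        {
            public PointMass A, B;
            public float RestLength;      // preserves shape ("shape retention")
            public float Stiffness = 50f; // Hooke's k ("elasticity")
            public float Damping = 0.5f;  // bleeds off oscillation energy

            public void Apply()
            {
                Vec2 delta = B.Position - A.Position;
                float dist = delta.Length();
                if (dist == 0f) return;
                Vec2 dir = delta * (1f / dist);
                float stretch = dist - RestLength;                        // Hooke term
                float relSpeed = Vec2.Dot(B.Velocity - A.Velocity, dir);  // damping term
                float f = Stiffness * stretch + Damping * relSpeed;
                A.Force += dir * f;     // pull A toward B when stretched
                B.Force += dir * (-f);  // and B toward A
            }
        }

        public static class SoftBodySim
        {
            // One step: accumulate forces, then integrate (semi-implicit Euler).
            public static void Step(List<PointMass> points, List<Spring> springs, float dt)
            {
                foreach (var p in points) p.Force = new Vec2(0f, -9.81f * p.Mass); // gravity
                foreach (var s in springs) s.Apply();
                foreach (var p in points)
                {
                    p.Velocity += p.Force * (dt / p.Mass);
                    p.Position += p.Velocity * dt;
                }
            }
        }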

    Read the article

  • Grant a user permissions on www-data owned /var/www

    - by George Pearce
    I have a simple web server setup for some websites, with a layout something like: site1: /var/www/site1/public_html/, site2: /var/www/site2/public_html/. I have previously used the root user to manage files, and then given them back to www-data when I was done (WordPress sites, needed for WP uploads to work). This probably isn't the best way. I'm trying to find a way to create another user (let's call it user1) that has permission to edit files in site1, but not site2, without stopping the files being 'owned' by www-data. Is there any way for me to do this?

    Read the article

  • "Unable to create OpenGL 3.3 context (flags 0, profile 1)"

    - by Tsvetan
    Trying to run any of McKesson's well-known tutorials on a friend's laptop results in the aforementioned exception. I read that in order to run applications which use OpenGL 3.3, you must have at least an ATI HD or Nvidia 8xxx series GPU. He has an ATI HD-class graphics processor, which (maybe) rules out that cause. I also read that this error can be caused by old drivers. He updated his drivers, but that didn't solve the problem. The tutorials are built as described in the book's introduction, and glsdk is installed. If more information is needed, say so and I will provide it. What are the other possible reasons for this kind of exception? And how can I fix them?

    Read the article

  • Approaching Java/JVM internals

    - by spinning_plate
    I've programmed in Java for about 8 years and I know the language quite well as a developer, but my goal is to deepen my knowledge of the internals. I've taken undergraduate courses in PL design, but they were very broad academic overviews (in Scheme, IIRC). Can someone suggest a route to start delving into the details? Specifically, are there particular topics (say, garbage collection) that might be more approachable or be a good starting point? Is there a decent high-level book on the internals of the JVM and the design of the Java programming language? My current approach is going to be to start with the JVM spec and research as needed.

    Read the article

  • Best Practice - XML To Excel

    - by MemLeak
    I have to read a big XML file with a lot of information. Afterwards I extract the needed information (~20 points (columns) / ~80 relevant data items (rows, some of them with sub-datasets)) and write it out to an Excel file. My question is how to handle the extraction (of the unused data): should I copy the whole file and delete the unused parts, and then write it to Excel? Is it a good approach to create objects for each column? Or should I write the whole XML to Excel and start deleting rows in Excel? What would be a performant and acceptable solution?
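    If the stack is .NET, one commonly suggested shape for this is: stream the XML once, project only the needed columns into small row objects, and write those rows out (CSV opens directly in Excel). A rough sketch, assuming C#; the element names, Row shape, and CSV output are all made up for the example:

        // Sketch: stream a big XML file, keep only the needed columns, write CSV.
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Xml;
        using System.Xml.Linq;

        public record Row(string Id, string Name, string Value); // ~20 columns in practice

        public static class XmlToExcel
        {
            public static IEnumerable<Row> ReadRows(string path)
            {
                using var reader = XmlReader.Create(path);  // streams; no full load
                reader.MoveToContent();
                while (reader.ReadToFollowing("record"))    // hypothetical element name
                {
                    var e = (XElement)XNode.ReadFrom(reader); // materialize one record only
                    yield return new Row(
                        (string)e.Attribute("id") ?? "",
                        (string)e.Element("name") ?? "",
                        (string)e.Element("value") ?? "");
                }
            }

            public static void WriteCsv(IEnumerable<Row> rows, string path)
            {
                // CSV opens directly in Excel; a real exporter would escape commas/quotes.
                File.WriteAllLines(path,
                    new[] { "Id,Name,Value" }
                    .Concat(rows.Select(r => $"{r.Id},{r.Name},{r.Value}")));
            }
        }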

    Read the article

  • [News] The 10 reasons why HTML 5 is not about to replace Silverlight

    While more and more voices are calling for Flash and Silverlight to be replaced by the upcoming HTML 5 specification, Bart Czernicki explains the 10 reasons why that is not about to happen: "HTML 5 is the next update to the HTML standard that powers the web. There are many new exciting features being added like the canvas element, local offline storage, drag and drop and video playback support. HTML needed to evolve and added these features in order to stay relevant as the de facto markup language that can provide a rich web experience." A well-argued piece, well worth reading.

    Read the article

  • Frame Interpolation issues for skeletal animation

    - by sebby_man
    I'm trying to animate in-between keyframes for skeletal animation, but I'm having some issues. Each joint is represented by a quaternion, and there is no translation component. When I try to slerp between the orientations at two keyframes, I get a very wacky animation. I know my skinning equation is right, because the animation is perfectly fine when the pose lands directly on a keyframe rather than in between two. I'm using glm's built-in mix function to do the slerp, so I don't think there are any problems with the actual slerp implementation. There's really only one thing left that could be wrong here: I must not be in the correct space to do the slerp. Right now the orientations are in joint-local space. Do I have to be in world space? In some other space along the way? I have the bind-pose matrix and the world-space transformation matrix at my disposal if those are needed.
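    For what it's worth, the usual pipeline keeps the slerp in joint-local (parent-relative) space and only afterwards concatenates parent transforms to build world-space matrices. A sketch of that order of operations, written in C# with System.Numerics for illustration; the joint layout (parents stored before children) and all names are assumptions:

        // Sketch: slerp each joint's LOCAL rotation, then compose with the parent.
        using System.Numerics;

        public static class SkeletonPose
        {
            // key0/key1: local joint rotations at the two keyframes.
            // parent[i]: index of joint i's parent, -1 for the root.
            // Assumes joints are ordered so a parent always precedes its children.
            public static Matrix4x4[] Interpolate(
                Quaternion[] key0, Quaternion[] key1, int[] parent, float t)
            {
                var world = new Matrix4x4[key0.Length];
                for (int i = 0; i < key0.Length; i++)
                {
                    // The interpolation happens per joint, in local space...
                    Quaternion local = Quaternion.Slerp(key0[i], key1[i], t);
                    Matrix4x4 m = Matrix4x4.CreateFromQuaternion(local);
                    // ...and only then is the parent's world transform applied
                    // (row-vector convention: child * parentWorld).
                    world[i] = parent[i] < 0 ? m : m * world[parent[i]];
                }
                return world;
            }
        }

    If wacky results persist even with local-space slerp, the usual next suspect is quaternion sign: take the shorter arc by negating one of the pair when their dot product is negative.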

    Read the article

  • Installing ZFS changed my sudoers file

    - by MaxMackie
    Following the wiki's advice, I installed ubuntu-zfs. However, once everything installed correctly and I tried installing another application via apt-get, I get a weird issue with my sudoers file:

        max@host:~$ sudo apt-get install deluge deluge-web
        sudo: /etc/sudoers.d/zfs is mode 0644, should be 0440
        >>> /etc/sudoers.d/README: /etc/sudoers.d/zfs near line 18 <<<
        sudo: parse error in /etc/sudoers.d/README near line 18
        sudo: no valid sudoers sources found, quitting
        *** glibc detected *** sudo: double free or corruption (!prev): 0x08909d08 ***
        ======= Backtrace: =========
        ....

    Why has zfs messed with the sudoers file? I can post the backtrace if needed.

    Read the article

  • How to stop fan running always on Asus P8P76LE motherboard with ATI Radeon HD6900

    - by Chris Good
    I'm using Ubuntu 12.04 LTS. I'm not sure if it is the CPU (i7) fan or the video card fan. I've tried using lm-sensors and fancontrol:

        $ sudo sensors-detect
        Now follows a summary of the probes I have just done.
        Just press ENTER to continue:
        Driver `w83627ehf':
          * ISA bus, address 0x290
            Chip `Nuvoton NCT6776F Super IO Sensors' (confidence: 9)
        Driver `coretemp':
          * Chip `Intel digital thermal sensor' (confidence: 9)
        To load everything that is needed, add this to /etc/modules:
        # Chip drivers
        coretemp
        w83627ehf

    Like many people, I'm also getting this error:

        /usr/sbin/pwmconfig: There are no pwm-capable sensor modules installed

    Here is the output of sensors:

        $ sensors
        radeon-pci-0100
        Adapter: PCI adapter
        temp1:         +71.0°C
        coretemp-isa-0000
        Adapter: ISA adapter
        Physical id 0: +44.0°C  (high = +80.0°C, crit = +98.0°C)
        Core 0:        +44.0°C  (high = +80.0°C, crit = +98.0°C)
        Core 1:        +40.0°C  (high = +80.0°C, crit = +98.0°C)
        Core 2:        +43.0°C  (high = +80.0°C, crit = +98.0°C)
        Core 3:        +42.0°C  (high = +80.0°C, crit = +98.0°C)

    I'm hoping someone has already solved this for my configuration, because this seems to be a problem for many people and there are many different suggestions.

    Read the article

  • What is the best policy for allowing clients to change email?

    - by Steve Konves
    We are developing a web application with a fairly standard registration process which requires a client/user to verify their email address before they are allowed to use the site. The site also allows users to change their email address after verification (with a re-type email field as well). What are the pros and cons of having the user re-verify their email? Is this even needed?

    EDIT: Summary of the answers and comments below:

    - Over-verification annoys people, so don't use it unless critical
    - Use a "re-type email" field to prevent typos
    - Beware of overwriting known-good data with merely potentially good data
    - Send email to the old address for notification; to the new address for verification
    - Don't assume that the user still has access to the old email
    - Identify the impact of an incorrect email if the account is compromised
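    A sketch of the flow that summary describes - keep the known-good address until the new one is confirmed, notify the old address, and verify the new one. All types and names here are illustrative, not from any particular framework:

        // Illustrative email-change flow: verify the new address, notify the old one.
        using System;

        public class UserAccount
        {
            public string Email { get; set; }         // verified, known-good
            public string PendingEmail { get; set; }  // awaiting verification
            public string PendingToken { get; set; }
        }

        public class EmailChangeService
        {
            public void RequestChange(UserAccount user, string newEmail)
            {
                user.PendingEmail = newEmail;
                user.PendingToken = Guid.NewGuid().ToString("N");
                // Verification goes to the NEW address...
                Send(newEmail, "Confirm your new address",
                     "Confirm: https://example.com/confirm?token=" + user.PendingToken);
                // ...while the OLD address only gets a notification.
                Send(user.Email, "Your email address is being changed",
                     "If you did not request this, contact support.");
            }

            public void Confirm(UserAccount user, string token)
            {
                if (user.PendingEmail != null && user.PendingToken == token)
                {
                    user.Email = user.PendingEmail; // only now overwrite good data
                    user.PendingEmail = null;
                    user.PendingToken = null;
                }
            }

            private void Send(string to, string subject, string body)
            {
                /* SMTP delivery elided in this sketch */
            }
        }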

    Read the article

  • Using Java classes in JRuby

    - by kerry
    I needed to do some reflection today, so I fired up interactive JRuby (jirb) to inspect a jar. Surprisingly, I couldn't remember exactly how to use Java classes in JRuby. So I did some searching on the internet and found this to be a common question with many answers. I figure I will document it here in case I forget again in the future. Add the jar's folder to the load path, require it, then use it!

        $: << '/path/to/my/jars/'
        require 'myjar'

        # so we don't have to reference it absolutely every time (optional)
        include Java::com.goodercode

        my_object = SomeClass.new

    Read the article

  • Java EE and GlassFish Server Roadmap Update

    - by John Clingan
    2013 has been a stellar year for both the Java EE and GlassFish Server communities. On June 12, Oracle and its partners announced the release of Java EE 7, which delivers on three major themes - HTML5, developer productivity, and meeting enterprise demands. The online event attracted over 10,000 views in the first two days! During the online event, Oracle also announced the availability of GlassFish Server Open Source Edition 4, the world's first Java EE 7 compatible application server. The primary role of GlassFish Server Open Source Edition has been, and continues to be, driving adoption of the latest release of the Java Platform, Enterprise Edition. Oracle also announced the Java EE 7 SDK, which bundles GlassFish Server Open Source Edition 4, as a Java EE 7 learning aid. Last, Oracle publicly announced the Java EE 7 reference implementation based on GlassFish Server Open Source Edition 4. Java EE is a popular platform, as evidenced by the 20+ Java EE 6 compatible implementations available to choose from.

    After the launch of Java EE 7 and GlassFish Server Open Source Edition 4, we began planning the Java EE 8 roadmap, which was covered during the JavaOne Strategy Keynote. To summarize, there is a lot of interest in improving on HTML5 support and Cloud, and in investigating NoSQL support. We received a lot of great feedback from the community and customers on what they would like to see in Java EE 8. As we approached JavaOne 2013, we started planning the GlassFish Server roadmap. What we announced at JavaOne was that GlassFish Server Open Source Edition 4.1 is scheduled for 2014. Here is an update to that roadmap:

    - GlassFish Server Open Source Edition 4.1 is scheduled for 2014
    - We are planning updates as needed to GlassFish Server Open Source Edition, which is commercially unsupported
    - As we head towards Java EE 8: the trunk will eventually transition to GlassFish Server Open Source Edition 5 as a Java EE 8 implementation, and the Java EE 8 Reference Implementation will be derived from GlassFish Server Open Source Edition 5. This replicates what has been done in past Java EE and GlassFish Server releases.
    - Oracle will no longer release future major releases of Oracle GlassFish Server with commercial support - specifically, Oracle GlassFish Server 4.x with commercial Java EE 7 support will not be released. Commercial Java EE 7 support will be provided from WebLogic Server.

    Expanding on that last bullet: new and existing Oracle GlassFish Server 2.1.x and 3.1.x commercial customers will continue to be supported according to the Oracle Lifetime Support Policy. Oracle recommends that existing commercial Oracle GlassFish Server customers begin planning to move to Oracle WebLogic Server, which is a natural technical and license migration path forward:

    - Applications developed to Java EE standards can be deployed to both GlassFish Server and Oracle WebLogic Server
    - GlassFish Server and Oracle WebLogic Server have implementation-specific deployment descriptor interoperability (here and here)
    - GlassFish Server 3.x and Oracle WebLogic Server share quite a bit of code, so there are quite a few configuration and (extended) feature similarities. Shared code includes JPA, JAX-RS, WebSockets (pre JSR 356 in both cases), CDI, Bean Validation, JAX-WS, JAXB, and WS-AT.
    - Both Oracle GlassFish Server 3.x and Oracle WebLogic Server 12c support Oracle Access Manager, Oracle Coherence, Oracle Directory Server, Oracle Virtual Directory, Oracle Database, and Oracle Enterprise Manager, and are entitled to support for the underlying Oracle JDK.

    To summarize, Oracle is committed to the future of Java EE. Java EE 7 has been released and planning for Java EE 8 has begun. GlassFish Server Open Source Edition continues to be the strategic foundation for the Java EE reference implementation going forward. And for developers, updates will be delivered as needed to continue to deliver a great developer experience for GlassFish Server Open Source Edition. We are planning for GlassFish Server Open Source Edition 5 as the foundation for the Java EE 8 reference implementation, as well as bundling GlassFish Server Open Source Edition 5 in a Java EE 8 SDK, which is the most popular distribution of GlassFish. This will allow GlassFish releases to be more focused on the Java EE platform and community-driven requirements. We continue to encourage community contributions, bug reports, participation on the GlassFish forum, etc. Going forward, Oracle WebLogic Server will be the single strategic commercially supported application server from Oracle.

    Disclaimer: The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

    Read the article

  • How to make an arc'd, but not mario-like jump in python, pygame [duplicate]

    - by PythonInProgress
    This question already has an answer here: Arc'd jumping method? (2 answers); Analysis of Mario game Physics [closed] (6 answers). I have looked at many, many questions similar to this and cannot find a simple answer that includes the needed code. What I am trying to do is raise the y value of a square for a certain amount of time, then raise it a bit more, then a bit more, then lower it twice. I can't figure out how to use acceleration/friction, and might want to do that too. P.S. - can someone tell me if I should post this on Stack Overflow or not? Thanks all! Edit: What I am looking for is not Mario-like physics, but a simple equation that can be used to increase and then decrease the height over a few seconds.
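    For the record, the simple equation being asked for is the projectile parabola. A sketch of the math, in screen coordinates where y grows downward: pick a jump speed v0 (pixels per second) and a gravity g (pixels per second squared); then the square's vertical position t seconds into the jump is y(t) = y0 - v0*t + (g/2)*t^2. It rises while t < v0/g, peaks v0^2/(2*g) pixels above the start, and lands again at t = 2*v0/g. For example, v0 = 300 and g = 600 give a jump that peaks 75 pixels up at 0.5 seconds and lands at 1.0 seconds; the numbers are illustrative and just need scaling to taste.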

    Read the article

  • Migration from XNA to SharpDX

    - by Wouter
    My fear is that XNA has reached the end of the road. To keep up with the latest technology a shift to another game framework might be needed. We have many games in a large codebase, all based on XNA. My question is, how much work would it be to migrate to SharpDX and are there other possibilities? Our code base mainly uses basic 3D rendering and the SpriteBatch, no fancy shader stuff. Update: I should have mentioned we only use 2.5D, we have a simple engine that builds textured quads to render text and animated sprites. Also for sound we use XACT (what else..) with some effects.

    Read the article

  • Avoiding That Null Reference!

    - by TheJuice
    A coworker had this conversation with another of our developers. Names have been changed to protect the guilty. Clueless: hey! Clueless: I am using the ?? operator for null check below Nice Guy: hey Clueless: FundLoanRequestBoatCollateral boatCollateral = request.BoatCollateral ?? null; Nice Guy: that's not exactly how it works Clueless: I want to achive: FundLoanRequestBoatCollateral boatCollateral = request.BoatCollateral != null ? request.BoatCollateral : null; Clueless: isnt that equal to:  FundLoanRequestBoatCollateral boatCollateral = request.BoatCollateral ?? null; Nice Guy: that is functionally equivalent to FundLoanRequestBoatCollateral boatCollateral = request.BoatCollateral Nice Guy: you're checking if it's null and if so setting it to null Clueless: yeah Clueless: if its null I want to set it to null Nice Guy: if it's null then you're already going to set it to null, no special logic needed Clueless: I wanted to avoid a null reference if BoatCollateral is null   The sad part of all of this is that "Clueless" has been with our company for years and has a Master's in Computer Science.
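    For the record: a ?? b evaluates to a when a is non-null, and to b otherwise - so a ?? null is always just a. The operator earns its keep only when the right-hand side is an actual fallback. A hedged example, with a hypothetical factory method:

        // ?? supplies a fallback for the null case.
        // CreateDefault() is hypothetical - any non-null fallback works here.
        FundLoanRequestBoatCollateral boatCollateral =
            request.BoatCollateral ?? FundLoanRequestBoatCollateral.CreateDefault();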

    Read the article

  • Retain only dependencies of a given package

    - by Karthik
    I am trying to have a minimal version of Ubuntu, having only those packages necessary to run mininet. So I would like a solution where I would be able to uninstall all the packages from the system except those that are needed to run mininet. An even more preferable solution would be the ability to create a new ISO with mininet and all its dependencies. Is there any program out there that is already capable of this? If not, please guide me on how I can solve my problem. Note: I need to retain not only the direct dependencies of mininet, but also the dependencies of those dependencies, and so on. The final system should be able to run mininet without a hitch.

    Read the article

  • Multiple Object Instantiation

    - by Ricky Baby
    I am trying to get my head around object-oriented programming as it pertains to web development (more specifically PHP). I understand inheritance and abstraction etc., and know all the "buzz-words" like encapsulation and single purpose, and why I should be doing all this. But my knowledge falls short when it comes to actually creating objects that relate to the data I have in my database. Creating a single object that represents a single entity makes sense, but what are the best practices when creating 100, 1,000 or 10,000 objects of the same type? For instance, when trying to display a list of the items, ideally I would like to be consistent with the objects I use, but where exactly should I run the query/get the data to populate the object(s), as running 10,000 queries seems wasteful? As an example, say I have a database of cats and I want a list of all black cats. Do I need to set up a FactoryObject which grabs the data needed for each cat from my database, then passes that data into each individual CatObject and returns the results in an array/object? Or should I pass each CatObject its identifier and let it populate itself in a separate query?
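    One common answer, sketched below: have the factory run a single query for the whole list and hydrate each object from a row, so 10,000 objects cost one query rather than 10,000. The sketch is shown in C#, though the shape is identical in PHP; the data-access interface is a stand-in, not a real library:

        // Sketch of the "one query, many objects" factory approach.
        using System.Collections.Generic;

        public class Cat
        {
            public int Id { get; }
            public string Name { get; }
            public string Color { get; }

            // Populated from an already-fetched row - it never queries itself.
            public Cat(int id, string name, string color)
            {
                Id = id; Name = name; Color = color;
            }
        }

        // Minimal stand-in so the sketch is self-contained.
        public interface IDatabase
        {
            IEnumerable<IDictionary<string, object>> Query(string sql, params object[] args);
        }

        public class CatFactory
        {
            private readonly IDatabase _db;
            public CatFactory(IDatabase db) { _db = db; }

            public List<Cat> GetByColor(string color)
            {
                var cats = new List<Cat>();
                // One round-trip for the whole set, instead of one query per cat.
                foreach (var row in _db.Query(
                    "SELECT id, name, color FROM cats WHERE color = @color", color))
                {
                    cats.Add(new Cat((int)row["id"], (string)row["name"], (string)row["color"]));
                }
                return cats;
            }
        }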

    Read the article

  • Flattening a Jagged Array with LINQ

    - by PSteele
    Today I had to flatten a jagged array. In my case it was a string[][], and I needed to make sure every single string contained in that jagged array was set to something (non-null and non-empty). LINQ made the flattening very easy. In fact, I ended up making a generic version that I could use to flatten any type of jagged array (assuming it's a T[][]):

        private static IEnumerable<T> Flatten<T>(IEnumerable<T[]> data)
        {
            return from r in data
                   from c in r
                   select c;
        }

    Then checking that the data was valid was easy:

        var flattened = Flatten(data);
        bool isValid = !flattened.Any(s => String.IsNullOrEmpty(s));

    You could even use method grouping and reduce the validation to:

        bool isValid = !flattened.Any(String.IsNullOrEmpty);

    Technorati Tags: .NET, LINQ, Jagged Array
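    As an aside, the query-syntax Flatten above is equivalent to a direct SelectMany call, so the whole check can also be written in one line:

        bool isValid = !data.SelectMany(r => r).Any(String.IsNullOrEmpty);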

    Read the article

  • Do we need use case levels or not?

    - by Gabriel Šcerbák
    I guess no one would argue for decomposing use cases - that is just wrong. However, sometimes it is necessary to specify use cases which are on a lower, more technical level, like for example authentication and authorization, which give the actor value but are further from his business needs. Cockburn argues for levels when needed and explains how to move use cases to/from different levels and how to determine the right level. On the other hand, e.g. Bittner argues against use case levels, although he uses subflows and, at the end of his book, mentions that at least two levels are needed most of the time. My question is: do you find use case levels necessary, helpful or unwanted? What are the reasons? Am I missing some important arguments?

    Read the article

  • Casing of COM-Interop registered components

    - by Marko Apfel
    During a refactoring I realized that renaming components which will be registered for COM-Interop must be done carefully. In my case I changed the casing of XyzToolbar to XyzToolBar. On the development machine everything worked fine. But after installing the modified assemblies on the production machine, the toolbar was not visible. Running regasm with the new assemblies helped, and that was the hint: we use WiX to build the setup, and during setup development the heat tool extracted the needed registry keys. Those keys still contained the old name, XyzToolbar. Refreshing the names corrected the problem.

    Read the article

  • TFS vs. Star Team comparison

    - by ryanabr
    I have a sales call today in which the person I am talking to is interested in what TFS would give them over StarTeam. The first thing I believe I can say is that TFS is cheaper - especially if you are doing MSFT development already and your team members have MSDN subscriptions, as the CALs for TFS are covered by the MSDN subscription. The other thing that I noticed about StarTeam was all of the references to 'readiness' and 'integration'. While that is great, it means that other tools will be needed to provide the features that are already bundled with TFS, like SharePoint integration, as well as Analysis Services and Reporting Services to provide visibility on the web with reports on project health and team velocity. Below is a quick table that I was able to throw together to answer some high-level questions:

        Feature                   | TFS                                        | StarTeam
        Work Items                | X                                          | X
        Work Item custom Queries  | X                                          | X
        Customizable Work Items   | X                                          |
        Web Portal View           | X                                          | X
        Reporting                 | X                                          | Integration
        Version Control           | X                                          | X
        Build Management          | X                                          | Integration
        Integrated Test Suite     | X                                          | Integration
        Cost                      | Free for first 5 / MSDN sub covers others  | $7500 / seat

    Read the article

  • obiee 10g teradata Solaris deployment

    - by user554629
    I have 3-4 years worth of notes on proper Teradata deployment across multiple operating systems - a topic too large to cover succinctly in a blog entry. I'm trying something new: document a specific situation, consolidate the facts, document diagnostic procedures, and then clone the structure to cover other obiee deployments (11g and other operating systems). This blog entry may be revised frequently.

    Getting started

    - obiee 10g certification: pg 24-25, Teradata V2R5.1.x - V2R6.2, Client 13.10, certified 10.1.3.4.1
    - obiee 10g documentation: Deployment Guide, Server Administration, Install/Config Guide
    - obiee overview
    - teradata connectivity downloads (requires registration)
    - solaris odbc drivers: sparc 13.10: choose 13.10.00.04 (ReadMe); sparc 14.00: probably would work, but not certified by Oracle on 10g

    I assume you have obiee 10.1.3.4.1 installed; 10.1.3.4.2 would be a better choice. The Teradata odbc install requires root for Solaris pkgadd. Only one version of Teradata odbc can be installed; symbolic links to the current version are created in /usr/lib at install.

    obiee implementation background

    Database access has two types of implementation: native and odbc. Native drivers use DB vendor client interfaces for access; odbc drivers are provided by the DB vendor for DB access. Teradata is an odbc-interface database. odbc drivers require an ODBC Driver Manager; obiee uses the Merant DataDirect driver manager. obiee servers communicate with one another using odbc. The internal odbc driver is implemented by the obiee team and requires the Merant Driver Manager. Teradata supplies a Driver Manager, which is built by Merant, but it should not be used in obiee. The nqsserver shared library deployment looks like this:

        OBIEE Server <-> DataDirect Manager <-> Teradata Driver <-> Teradata Database

    nqsserver startup

        $ cd $BI/setup
        $ . ./sa-init64.sh
        $ run-sa.sh autorestart64

    The following files are referenced from setup: .variant.sh, user.sh, NQSConfig.INI, DBFeatures.INI, $ODBCINI (odbc.ini), sqlnet.ora

    How does nqsserver connect to Teradata?

    A Teradata DSN is created in the RPD (TD71). setup/odbc.ini contains:

        [ODBC Data Sources]
        TD71=tdata.so
        [TD71]
        Driver=/opt/tdodbc/odbc/drivers/tdata.so
        Description=Teradata V7.1.0
        DBCName=###.##.##.###
        LastUser=
        Username=northwind
        Password=northwind
        Database=
        DefaultDatabase=northwind

    setup/user.sh contains:

        LIBPATH=/opt/tdicu/lib_64:/usr/odbc/lib:/usr/odbc/drivers:/usr/lpp/tdodbc/odbc/drivers:$LIBPATH
        export LIBPATH

    setup/.variant.sh contains:

        if [ "$ANA_SERVER_64" = "1" ]; then
          ANA_BIN_DIR=${SAROOTDIR}/server/Bin64
          ANA_WEB_DIR=${SAROOTDIR}/web/bin64
          ANA_ODBC_DIR=${SAROOTDIR}/odbc/lib64

    setup/sa-run.sh contains:

        . ${ANA_INSTALL_DIR}/setup/.variant.sh
        . ${ANA_INSTALL_DIR}/setup/user.sh
        logfile="${SAROOTDIR}/server/Log/nqsserver.out.log"
        ${ANA_BIN_DIR}/nqsserver -quiet >> ${logfile} 2>&1 &

    nqsserver is running: nqsserver produces $BI/server/nqsserver.log. At startup, the native database drivers connect and record DB versions. tdata.so is not loaded until a Teradata DB connection is attempted.

    Teradata odbc client installation

    Accept all the defaults for pkgadd. Install in /opt.

        $ mkdir odbc
        $ cd odbc
        $ gzip -dc ../tdodbc__solaris_sparc.13.10.00.04.tar.gz | tar -xf -
        $ sudo su
        # pkgadd -d . TeraGSS
        # pkgadd -d . tdicu1310
        # pkgadd -d . tdodbc1310

    Directory notes:

    - /opt/teradata/client/13.10/odbc_64/lib/tdata.so is the 64-bit obiee library loaded by nqsserver.
    - /opt/teradata/client/13.10/odbc_64/lib is not needed in LD_LIBRARY_PATH
    - /opt/teradata/client/13.10/tdicu/lib64 is needed in LD_LIBRARY_PATH
    - /usr/odbc should not be referenced; it is a link to 32-bit libraries
    - LD_LIBRARY_PATH_64 should not be used.

    Useful bash functions and aliases

        export SAROOTDIR=/export/home/dw_adm/OracleBI
        export TERA_HOME=/opt/teradata/client/13.10
        export ORACLE_HOME=/export/home/oracle/product/10.2.0/client
        export ODBCINI=$SAROOTDIR/setup/odbc.ini
        export TD_ICU_DATA=$TERA_HOME/tdicu/lib64
        alias cds="alias | grep '^alias cd' | sed 's/^alias //' | sort"
        alias cdtd="cd $TERA_HOME; ls"
        alias cdtdodbc="cd $TERA_HOME/odbc_64; ls -l"
        alias cdtdicu="cd $TERA_HOME/tdicu/lib64; ls -l"
        alias cdbi="cd $SAROOTDIR; ls"
        alias cdbiodbc="cd $SAROOTDIR/odbc; ls -l"
        alias cdsetup="cd $SAROOTDIR/setup; ls -ltr"
        alias cdsvr="cd $SAROOTDIR/server; ls"
        alias cdrep="cd $SAROOTDIR/server/Repository; ls -ltr"
        alias cdsvrcfg="cd $SAROOTDIR/server/Config; ls -ltr"
        alias cdsvrlog="cd $SAROOTDIR/server/Log; ls -ltr"
        alias cdweb="cd $SAROOTDIR/web; ls"
        alias cdwebconfig="cd $SAROOTDIR/web/config; ls -ltr"
        alias cdoci="cd $ORACLE_HOME; ls"
        pkgfiles() { pkgchk -l $1 | awk '/^Pathname/ {print $2}'; }
        pkgfind()  { pkginfo | egrep -i $1 ; }

    Examples:

        $ pkgfind td
        $ pkgfiles tdodbc1310 | grep 64
        $ cds
        $ cdtdodbc
        $ cdsetup
        $ cdsvrlog
        $ cdweblog

    Read the article

  • AWR Performance Report and Read by Other Session Waits

    - by user702295
    For the questions regarding "read by other session" and its relation to "db file sequential/scattered read", the logic is like this: when a "db file sequential/scattered read" is done, the blocks are either already in the cache or still on disk. Since any operation on blocks is done in the cache, and since the issue here is "read by other session", I will describe the case where the blocks are on disk. Process A reads the needed block from disk into the cache. During that time, if process B (and C, and others) needs the same block, it will wait on "read by other session". A and B can be threads of the same process running in parallel, or unrelated processes - for example, two processes each doing a full table scan on mdp_matrix. Solutions include lowering the number of processes competing for the same blocks and increasing PCTFREE. If it is a full table scan, maybe an index is missing, which could result in fewer blocks being read into the cache, and so on.

    Read the article

  • Free forum engine with good anti-attack mechanisms

    - by macias
    I am looking for a forum engine (for discussions) with good attack countermeasures built in. Windows (preferably) or Linux. Free (as in beer). I am thinking about registration flooding and attacks that lock user accounts. For registration, such an engine should have at least:

    - captcha
    - blocking multiple registrations from the same IP
    - providing a login (for logging in) and a user name (for displaying the author of posts)

    For logging in:

    - no blocking on multiple tries - instead, after the Xth try, sending a token via mail, the third piece needed for the next login; without it, logging in will be impossible (similar to an activation process)

    The engine should be designed with two ideas in mind: protecting the engine against attacks, and zero penalty for decent users. Thank you in advance for your help and recommendations.

    Read the article
