Search Results

Search found 66916 results on 2677 pages for 'real time strategy'.


  • What do I need to know about Data Structures and Algorithms in the "real" world?

    - by Ray T Champion
    I just finished the data structures and algorithms course in school. I took it during the summer, so it was a 6-week course instead of the 16-week course offered during the regular semester; not only was the course hard, it moved really fast. My question is: what do I need to know about data structures in the real world? I understand what they do and how they work, for the most part, but I had a really tough time coding them. I wouldn't be able to write the code for a binary tree class or a balanced tree class from scratch. Is that bad? Should I retake the course, or is knowing how these structures work sufficient, without being able to write the classes from scratch?
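
    For a sense of scale, the kind of "from scratch" exercise the question refers to is usually quite small. The following is only an illustrative sketch (mine, not from the original post) of a minimal, unbalanced binary search tree in Python:

        class Node:
            # One tree node: a key plus left/right child links.
            def __init__(self, key):
                self.key = key
                self.left = None
                self.right = None

        class BinarySearchTree:
            # Bare-bones BST supporting insert and membership tests.
            def __init__(self):
                self.root = None

            def insert(self, key):
                # Walk down from the root until an empty slot is found.
                if self.root is None:
                    self.root = Node(key)
                    return
                node = self.root
                while True:
                    if key < node.key:
                        if node.left is None:
                            node.left = Node(key)
                            return
                        node = node.left
                    else:
                        if node.right is None:
                            node.right = Node(key)
                            return
                        node = node.right

            def contains(self, key):
                # Go left for smaller keys, right for larger, until found or exhausted.
                node = self.root
                while node is not None:
                    if key == node.key:
                        return True
                    node = node.left if key < node.key else node.right
                return False

        # Example: t = BinarySearchTree(); t.insert(5); t.contains(5)  -> True

    The balancing logic of an AVL or red-black tree is considerably more involved and is typically looked up rather than reproduced from memory.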

    Read the article

  • Managing accounts on a private website for a real-life community

    - by Smudge
    I'm looking at setting up a walled-in website for a real-life community of people, and I was wondering if anyone has any experience with managing member accounts for this kind of thing. Some conditions that must be met: This community has a set list of real-life members, each of whom would be eligible for one account on the website. We don't expect or require that they all sign up; it is purely opt-in, but we anticipate that many of them would be interested in the services we are setting up. Some of the community members' emails are known, but some of them have fallen off the grid over the years, so ideally there would be a way for them to get back in touch with us through the public-facing side of the site (and we'd want to manually verify the identity of anyone who does so). Their names are known, and for similar projects in the past we have assigned usernames derived from their real-life names. This time, however, we are open to other approaches, such as letting them specify their own username or getting rid of usernames entirely. The specific web technology we will use (e.g. Drupal, Joomla, etc.) is not really our concern right now -- I am more interested in how this can be approached in the abstract. Our database already includes the full member roster, so we can email many of them generated links to a page where they can create an account (and internally we can require that these accounts be paired with a known member). Should we have them specify their own usernames, or are we fine letting them use their registered email address to log in? Are there any paradigms for walled-in community portals that help address security issues if, for example, one of their email accounts is compromised? We don't anticipate attempted break-ins being much of a threat, because nothing about this community is high-profile, but we do want to address security concerns. In addition, we want to make the sign-up process as painless for the members as possible, especially given the fact that we can't just make sign-ups open to anyone. I'm interested to hear your thoughts and suggestions! Thanks!

    Read the article

  • Strategy for avoiding duplicate object ids for data shared across devices using iCloud

    - by rmaddy
    I have a data-intensive iOS app that is not using Core Data, nor does it support iCloud syncing (yet). All of my objects are created with unique keys. I use a simple long long initialized with the current time; then, as I need a new key, I increment the value by 1. This has all worked well for a few years with the app running isolated on a single device. Now I want to add support for automatic data sync across devices using iCloud. As my app is written, there is the possibility that two objects created on two different devices could end up with the same key, and I need to avoid that. I'm looking for ideas for solving this issue. I have a few requirements that the solution must meet: 1) The key needs to remain a single integral data type. Converting all existing keys to a compound key or to a string or other type would affect the entire code base and likely result in more bugs than it's worth. 2) The solution can't depend on an Internet connection. A user must be able to run the app and add data even with no Internet connection, and the data should still resolve properly later when it syncs through iCloud once a connection is available. I'll accept one exception to this rule: if no other option is available, I may be open to requiring an Internet connection the first time the app's data is initialized. One idea I have been toying around with is logically splitting the integer key into two parts: the high 4 or 5 bits could be used as some sort of device id while the rest represents the actual key. The fuzzy part is figuring out how to come up with non-conflicting device ids that fit in a few bits. This should be viable since I don't need to deal with millions of devices, just the few devices that would be shared by a given iCloud account. I'm open to suggestions. Thanks.
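
    To make the bit-splitting idea at the end of the question concrete, here is a small sketch (my own illustration with made-up field widths, not code from the original post) of packing a device id into the high bits of a 64-bit key and a per-device counter into the rest:

        #include <stdint.h>
        #include <inttypes.h>
        #include <stdio.h>

        /* Hypothetical layout: 5 high bits for the device id (up to 32 devices
           per iCloud account), 59 low bits for the per-device counter. */
        #define DEVICE_BITS   5
        #define COUNTER_BITS  (64 - DEVICE_BITS)
        #define COUNTER_MASK  ((UINT64_C(1) << COUNTER_BITS) - 1)

        /* Compose a key from a device id and the locally incremented counter. */
        static uint64_t make_key(uint64_t device_id, uint64_t counter)
        {
            return (device_id << COUNTER_BITS) | (counter & COUNTER_MASK);
        }

        /* Recover the two halves, e.g. for debugging or conflict checks. */
        static uint64_t key_device(uint64_t key)  { return key >> COUNTER_BITS; }
        static uint64_t key_counter(uint64_t key) { return key & COUNTER_MASK; }

        int main(void)
        {
            uint64_t key = make_key(3, 1356393600); /* device 3, a time-seeded counter */
            printf("device=%" PRIu64 " counter=%" PRIu64 "\n",
                   key_device(key), key_counter(key));
            return 0;
        }

    Existing keys (seeded from the current time, so far below 2^59) already fit in the counter portion with device id 0, so old data would not need rewriting. The hard part the poster identifies -- agreeing on the other device ids without a server -- is not addressed by this sketch.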

    Read the article

  • Task-It Webinar - Building a real-world application with RadControls for Silverlight 4

    Yesterday I held a live webinar on Building a real-world application with RadControls for Silverlight 4. Thank you to all of those that attended, but if you did not have a chance to catch it, you can watch a recorded version here: Building a real-world application with RadControls for Silverlight 4. I wasn't able to get too deep into the inner workings of the app because of time limitations, but over the upcoming weeks I will dig deeper in my blog posts, and potentially some videos.

    Read the article

  • ADF Real World Developers Guide Book Review

    - by Grant Ronald
    I'm halfway through my review of "Oracle ADF Real World Developer's Guide" by Jobinesh Purushothaman - unfortunately some work deadlines derailed me from completing it by now, but here goes. First thing, Jobinesh works in the Oracle Product Management team with me, so he is a colleague. That declaration aside, it's clear that this is someone who has done the "real world" side of ADF development, and that comes out in the book. He addresses newbies and experienced developers alike. He introduces the ADF building blocks like entity objects and view objects, but also goes into some of the nitty-gritty details as well. There is a pro and a con to this approach: having only just learned about an entity or view object, you might then be blown away by some of the lower-level details of coding or lifecycle. In that respect, you might consider this a book you could read 3 or 4 times, maybe skipping some elements on the first read so that on the next read you have a better grounding for the more advanced topics. One of the key things he addresses is breaking down what happens behind the scenes. At first this may not seem important, since you trust the framework to do everything for you - but having an understanding of what goes on is essential as you move through development. For example, on page 58 he explains the full lifecycle of what happens when you execute a query. I think this is a great feature of the book. You see this elsewhere too; for example, he explains the full lifecycle of what goes on when a page is accessed: which files are involved, the JSF lifecycle, etc. He also sprinkles the book with best practices and advice which go beyond the standard features of ADF and really hit the mark in terms of "real world" guidance. So in summary, this is a great ADF book, well written and covering a mass of information. If you are brand new to ADF it's still valid, given that it does start with the basics - but you might want to read the book 2 or 3 times, skipping the advanced stuff on the first read. For those who have some basics already, it's going to be an awesome way to cement your knowledge and take it to the next level. And for the ADF experts, you are still going to pick up some great ADF nuggets. Advice: every ADF developer should have one!

    Read the article

  • Rails timezone differences between Time and DateTime

    - by kjs3
    I have the timezone set: config.time_zone = 'Mountain Time (US & Canada)'. Creating a Christmas event from the console: c = Event.new(:title => "Christmas")

    Using Time:

        c.start = Time.new(2012,12,25)             # => 2012-12-25 00:00:00 -0700  (has correct offset)
        c.end   = Time.new(2012,12,25).end_of_day  # => 2012-12-25 23:59:59 -0700  (same deal)

    Using DateTime:

        c.start = DateTime.new(2012,12,25)             # => Tue, 25 Dec 2012 00:00:00 +0000  (no offset)
        c.end   = DateTime.new(2012,12,25).end_of_day  # => Tue, 25 Dec 2012 23:59:59 +0000  (same)

    I've carelessly been using DateTime, thinking that input was assumed to be in config.time_zone, but there's no conversion when this gets saved to the db; it's stored just the same as the return values (formatted for the db). Using Time is really no big deal, but do I need to manually apply the offset any time I'm using DateTime and want it to be in the correct zone?
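
    A commonly suggested pattern (my suggestion, not part of the original question; exact inspect output varies by Rails version) is to build timestamps through Time.zone, so that ActiveSupport applies config.time_zone for you:

        # Assumes config.time_zone = 'Mountain Time (US & Canada)' as above.
        c.start = Time.zone.local(2012, 12, 25)
        # => Tue, 25 Dec 2012 00:00:00 MST -07:00   (an ActiveSupport::TimeWithZone)
        c.end   = Time.zone.local(2012, 12, 25).end_of_day
        # => Tue, 25 Dec 2012 23:59:59 MST -07:00

        # Note: in_time_zone does NOT reinterpret an existing DateTime; it converts
        # the same UTC instant into the configured zone:
        DateTime.new(2012, 12, 25).in_time_zone
        # => Mon, 24 Dec 2012 17:00:00 MST -07:00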

    Read the article

  • Making the user change the time in Android

    - by Casebash
    Android doesn't appear to provide a way for a user application to change the system time. What I would like to do instead is to get the user to change the time. It is easy to open up the Date & Time settings: startActivity(new Intent(android.provider.Settings.ACTION_DATE_SETTINGS)); What I would like to know is: Is it possible to link directly to the set-time option? Is it possible to check that the user set the time correctly? I am aware of the TIME_CHANGED broadcast message, but I can't find any documentation on it.
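
    On the second point, the broadcast the question mentions is exposed as Intent.ACTION_TIME_CHANGED (the string "android.intent.action.TIME_SET"). A minimal receiver, sketched here as an illustration rather than taken from the question, could detect that the user actually changed the clock after being sent to the settings screen:

        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;
        import android.content.IntentFilter;

        public class TimeChangeWatcher extends BroadcastReceiver {

            // Fired whenever the user (or the system) sets the clock.
            @Override
            public void onReceive(Context context, Intent intent) {
                if (Intent.ACTION_TIME_CHANGED.equals(intent.getAction())) {
                    // Re-check the time here, e.g. compare System.currentTimeMillis()
                    // against a trusted reference and decide whether it now looks correct.
                }
            }

            // Register dynamically, e.g. from an Activity's onResume().
            public static TimeChangeWatcher register(Context context) {
                TimeChangeWatcher watcher = new TimeChangeWatcher();
                context.registerReceiver(watcher, new IntentFilter(Intent.ACTION_TIME_CHANGED));
                return watcher;
            }
        }

    Whether the new value is "correct" still has to be judged by the app itself, for example against a network time source.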

    Read the article

  • Have set Expiration time: Still getting "Query string present but no explicit expiration time"

    - by oligofren
    I have one local Apache instance running with mod_cache (+ disk & mem) enabled, and it seems to cache content from my app server fine. My app server sets Expires and Last-Modified headers. Yet, when deploying on a production server with the same modules enabled, I am getting the following error in my logs:

        blablabla not cached. Reason: Query string present but no explicit expiration time

    Any clues on why Apache is not caching content? The only difference is the Apache version. Locally I am running 2.2. This is from my config:

        CacheRoot "/var/cache/apache2/"
        CacheEnable disk /

    This is an example of the response:

        < HTTP/1.1 200 OK
        < Date: Mon, 19 Nov 2012 16:09:13 GMT
        < Server: Sun GlassFish Enterprise Server v2.1.1
        < X-Powered-By: Servlet/2.5
        < Expires: Tue Nov 20 05:00:00 CET 2012
        < Last-Modified: Mon Nov 19 17:09:13 CET 2012
        < Cache-Control: no-transform
        < Content-Type: application/x-javascript
        < Transfer-Encoding: chunked
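
    One thing worth ruling out (my observation, not part of the original post): depending on the version, mod_cache may only honour an Expires value that parses as a standard HTTP date, i.e. the RFC 1123 form with a comma and a GMT time. The value shown above ("Tue Nov 20 05:00:00 CET 2012") is not in that form. Well-formed headers for the same instants would look like:

        Expires: Tue, 20 Nov 2012 04:00:00 GMT
        Last-Modified: Mon, 19 Nov 2012 16:09:13 GMT

    A Cache-Control: max-age=... response header is another way to express the freshness lifetime without any date formatting, though whether mod_cache accepts it for query-string URLs also depends on the version.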

    Read the article

  • issues with Nginx + Passenger Production setup - Loading time/request time delay

    - by Dani Cela
    I'm having a bit of an issue relating to request time. I have NGINX as a proxy server for a Ruby on Rails app running Passenger. I also have a PostgreSQL database server which is running on its own VM, separate from my nginx/application server. My issue is that when I try to access my products page, which does a lot of database queries, my query takes maybe 3-4 seconds. The second I flood the web server with requests, I choke out the web server and requests take almost 20-30 seconds to process. The Rails server and database server do not crash, and the usage is not that high. Each server has more than enough memory, and even CPU usage on the Rails server isn't more than 85% - that's high, but it's not maxing it out. Is my problem related to my nginx proxy server? I don't really know how to fully explain this, so if you have a question please ask it and I can clarify what I mean. EDIT: to see exactly what I mean regarding the database query, see http://207.245.4.215/products

    Read the article

  • HTML5 time tag not recognized by IE8 when cloning

    - by matsientst
    I have been having trouble getting IE to recognize the new time tag in this context. This all works great in Firefox. Here is the code:

        var origComment = $('.articleComment:first div');
        if (origComment.length > 0) {
            var commentHtml = origComment.clone(true);
            commentHtml.find('time').text('today');
            var html = '<article class="' + ((side == 'LEFT') ? '' : 'that') + '">' + commentHtml.html() + '</article>';
            $(html).insertAfter('.articleComment:last');
        }

    The HTML looks something like this:

        <article class="articleComment that">
            <div id="156" class="parent">
                <div class="byline">
                    <p>Posted <time pubdate="pubdate" datetime="2010-05-07T09:11:08">today</time> by<br/>
                        <a class="username" href="/u/matt">matt</a>
                    </p>
                    <p class="report"><a href="#">Report?</a></p>
                </div>
                <div class="comment">left</div>
            </div>
        </article>

    IE can find the time tag, but it returns a collection of 2 elements; I assume the opening and closing tags. However, I cannot access it to modify it. I have tried val(), html() and text(). I also can't drop down to the actual HTMLElement: I can't get(0).innerHTML. But if I call .get(0).tagName, it actually is the time tag I've got. Any ideas? I hope this makes sense.
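
    For what it's worth (a suggestion on my part, not something from the original question): IE 8 and earlier parse unknown elements such as <time> and <article> as empty nodes that cannot contain children unless the element names are registered with document.createElement before the markup is parsed, which matches the "collection of 2 elements" symptom. The usual workaround is the HTML5 shiv pattern, run in the <head> for old IE only:

        <!--[if lt IE 9]>
        <script>
          // Registering the new HTML5 element names makes the old IE parser
          // treat them as normal container elements instead of splitting them.
          document.createElement('article');
          document.createElement('time');
        </script>
        <![endif]-->

    Fragments created later through innerHTML or jQuery can still need the same trick applied to the fragment's document (this is what the html5shiv/innerShiv scripts automate), so pulling in one of those scripts is generally simpler than hand-rolling it.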

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • ffmpeg - How to determine if -movflags faststart is enabled? PHP

    - by IIIOXIII
    While I am able to encode an mp4 file which I can plan on my local windows machine, I am having trouble encoding files to mp4 which are readable when streaming by safari, etc. After a bit of reading, I believe my issue is that I must move the metadata from the end of the file to the beginning in order for the converted mp4 files to be streamable. To that end, I am trying to find out if the build of ffmpeg that I am currently using is able to use the -movflags faststart option through php - as my current outputted mp4 files are not working when streamed online. This is the way I am now echoing the -help, -formats, -codecs, but I am not seeing anything about -movflags faststart in any of the lists: exec($ffmpegPath." -help", $codecArr); for($ii=0;$ii<count($codecArr);$ii++){ echo $codecArr[$ii].'</br>'; } Is there a similar method of determining if -movflags fastart is available to my ffmpeg build? Any other way? Should it be listed with any of the previously suggested commands? -help/-formats? Can someone that knows it is enabled in their version of ffmpeg check to see if it is listed under -help or -formats, etc.? TIA. EDIT: COMPLETE CONSOLE OUTPUT FOR BOTH THE CONVERSION COMMAND AND -MOVFLAGS COMMAND BELOW: COMMAND: ffmpeg_new -i C:\vidtests\Wildlife.wmv -s 640x480 C:\vidtests\Wildlife.mp4 OUTPUT: ffmpeg version N-54207-ge59fb3f Copyright (c) 2000-2013 the FFmpeg developers built on Jun 25 2013 21:55:00 with gcc 4.7.3 (GCC) configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-av isynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enab le-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetyp e --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --ena ble-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-l ibopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libsp eex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo- amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs -- enable-libxvid --enable-zlib libavutil 52. 37.101 / 52. 37.101 libavcodec 55. 17.100 / 55. 17.100 libavformat 55. 10.100 / 55. 10.100 libavdevice 55. 2.100 / 55. 2.100 libavfilter 3. 77.101 / 3. 77.101 libswscale 2. 3.100 / 2. 3.100 libswresample 0. 17.102 / 0. 17.102 libpostproc 52. 3.100 / 52. 3.100 [asf @ 00000000002ed760] Stream #0: not enough frames to estimate rate; consider increasing probesize Guessed Channel Layout for Input Stream #0.0 : stereo Input #0, asf, from 'C:\vidtests\Wildlife.wmv' : Metadata: SfOriginalFPS : 299700 WMFSDKVersion : 11.0.6001.7000 WMFSDKNeeded : 0.0.0.0000 comment : Footage: Small World Productions, Inc; Tourism New Zealand | Producer: Gary F. 
Spradling | Music: Steve Ball title : Wildlife in HD copyright : -¬ 2008 Microsoft Corporation IsVBR : 0 DeviceConformanceTemplate: AP@L3 Duration: 00:00:30.09, start: 0.000000, bitrate: 6977 kb/s Stream #0:0(eng): Audio: wmav2 (a[1][0][0] / 0x0161), 44100 Hz, stereo, fltp , 192 kb/s Stream #0:1(eng): Video: vc1 (Advanced) (WVC1 / 0x31435657), yuv420p, 1280x7 20, 5942 kb/s, 29.97 tbr, 1k tbn, 1k tbc [libx264 @ 00000000002e6980] using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64 [libx264 @ 00000000002e6980] profile High, level 3.0 [libx264 @ 00000000002e6980] 264 - core 133 r2334 a3ac64b - H.264/MPEG-4 AVC cod ec - Copyleft 2003-2013 - http://www.videolan.org/x264.html - options: cabac=1 r ef=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed _ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pski p=1 chroma_qp_offset=-2 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 deci mate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_ adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=2 5 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.6 0 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to 'C:\vidtests\Wildlife.mp4': Metadata: SfOriginalFPS : 299700 WMFSDKVersion : 11.0.6001.7000 WMFSDKNeeded : 0.0.0.0000 comment : Footage: Small World Productions, Inc; Tourism New Zealand | Producer: Gary F. Spradling | Music: Steve Ball title : Wildlife in HD copyright : -¬ 2008 Microsoft Corporation IsVBR : 0 DeviceConformanceTemplate: AP@L3 encoder : Lavf55.10.100 Stream #0:0(eng): Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 6 40x480, q=-1--1, 30k tbn, 29.97 tbc Stream #0:1(eng): Audio: aac (libvo_aacenc) ([64][0][0][0] / 0x0040), 44100 Hz, stereo, s16, 128 kb/s Stream mapping: Stream #0:1 -> #0:0 (vc1 -> libx264) Stream #0:0 -> #0:1 (wmav2 -> libvo_aacenc) Press [q] to stop, [?] for help frame= 53 fps= 49 q=29.0 size= 0kB time=00:00:00.13 bitrate= 2.9kbits/ frame= 63 fps= 40 q=29.0 size= 0kB time=00:00:00.46 bitrate= 0.8kbits/ frame= 74 fps= 35 q=29.0 size= 0kB time=00:00:00.83 bitrate= 0.5kbits/ frame= 85 fps= 32 q=29.0 size= 0kB time=00:00:01.20 bitrate= 0.3kbits/ frame= 95 fps= 30 q=29.0 size= 0kB time=00:00:01.53 bitrate= 0.3kbits/ frame= 107 fps= 28 q=29.0 size= 0kB time=00:00:01.93 bitrate= 0.2kbits/ Queue input is backward in time [mp4 @ 00000000003ef800] Non-monotonous DTS in output stream 0:1; previous: 7616 , current: 7063; changing to 7617. This may result in incorrect timestamps in th e output file. 
frame= 118 fps= 28 q=29.0 size= 113kB time=00:00:02.30 bitrate= 402.6kbits/ frame= 129 fps= 26 q=29.0 size= 219kB time=00:00:02.66 bitrate= 670.7kbits/ frame= 141 fps= 26 q=29.0 size= 264kB time=00:00:03.06 bitrate= 704.2kbits/ frame= 152 fps= 25 q=29.0 size= 328kB time=00:00:03.43 bitrate= 782.2kbits/ frame= 163 fps= 25 q=29.0 size= 431kB time=00:00:03.80 bitrate= 928.1kbits/ frame= 174 fps= 24 q=29.0 size= 568kB time=00:00:04.17 bitrate=1116.3kbits/ frame= 190 fps= 25 q=29.0 size= 781kB time=00:00:04.70 bitrate=1359.9kbits/ frame= 204 fps= 25 q=29.0 size= 1006kB time=00:00:05.17 bitrate=1593.1kbits/ frame= 218 fps= 25 q=29.0 size= 1058kB time=00:00:05.63 bitrate=1536.8kbits/ frame= 229 fps= 25 q=29.0 size= 1093kB time=00:00:06.00 bitrate=1490.9kbits/ frame= 239 fps= 24 q=29.0 size= 1118kB time=00:00:06.33 bitrate=1444.4kbits/ frame= 251 fps= 24 q=29.0 size= 1150kB time=00:00:06.74 bitrate=1397.9kbits/ frame= 265 fps= 24 q=29.0 size= 1234kB time=00:00:07.20 bitrate=1402.3kbits/ frame= 278 fps= 25 q=29.0 size= 1332kB time=00:00:07.64 bitrate=1428.3kbits/ frame= 294 fps= 25 q=29.0 size= 1403kB time=00:00:08.17 bitrate=1405.7kbits/ frame= 308 fps= 25 q=29.0 size= 1547kB time=00:00:08.64 bitrate=1466.4kbits/ frame= 323 fps= 25 q=29.0 size= 1595kB time=00:00:09.14 bitrate=1429.5kbits/ frame= 337 fps= 25 q=29.0 size= 1702kB time=00:00:09.60 bitrate=1450.7kbits/ frame= 351 fps= 25 q=29.0 size= 1755kB time=00:00:10.07 bitrate=1427.1kbits/ frame= 365 fps= 25 q=29.0 size= 1820kB time=00:00:10.54 bitrate=1414.1kbits/ frame= 381 fps= 25 q=29.0 size= 1852kB time=00:00:11.07 bitrate=1369.6kbits/ frame= 396 fps= 26 q=29.0 size= 1893kB time=00:00:11.57 bitrate=1339.5kbits/ frame= 409 fps= 26 q=29.0 size= 1923kB time=00:00:12.01 bitrate=1311.8kbits/ frame= 421 fps= 25 q=29.0 size= 1967kB time=00:00:12.41 bitrate=1298.3kbits/ frame= 434 fps= 25 q=29.0 size= 1998kB time=00:00:12.84 bitrate=1274.0kbits/ frame= 445 fps= 25 q=29.0 size= 2018kB time=00:00:13.21 bitrate=1251.3kbits/ frame= 458 fps= 25 q=29.0 size= 2048kB time=00:00:13.64 bitrate=1229.5kbits/ frame= 471 fps= 25 q=29.0 size= 2067kB time=00:00:14.08 bitrate=1202.3kbits/ frame= 484 fps= 25 q=29.0 size= 2189kB time=00:00:14.51 bitrate=1235.5kbits/ frame= 497 fps= 25 q=29.0 size= 2260kB time=00:00:14.94 bitrate=1238.3kbits/ frame= 509 fps= 25 q=29.0 size= 2311kB time=00:00:15.34 bitrate=1233.3kbits/ frame= 523 fps= 25 q=29.0 size= 2429kB time=00:00:15.81 bitrate=1258.1kbits/ frame= 535 fps= 25 q=29.0 size= 2541kB time=00:00:16.21 bitrate=1283.5kbits/ frame= 548 fps= 25 q=29.0 size= 2718kB time=00:00:16.64 bitrate=1337.5kbits/ frame= 560 fps= 25 q=29.0 size= 2845kB time=00:00:17.05 bitrate=1367.1kbits/ frame= 571 fps= 25 q=29.0 size= 2965kB time=00:00:17.41 bitrate=1394.6kbits/ frame= 580 fps= 25 q=29.0 size= 3025kB time=00:00:17.71 bitrate=1398.7kbits/ frame= 588 fps= 25 q=29.0 size= 3098kB time=00:00:17.98 bitrate=1411.1kbits/ frame= 597 fps= 25 q=29.0 size= 3183kB time=00:00:18.28 bitrate=1426.1kbits/ frame= 606 fps= 24 q=29.0 size= 3279kB time=00:00:18.58 bitrate=1445.2kbits/ frame= 616 fps= 24 q=29.0 size= 3441kB time=00:00:18.91 bitrate=1489.9kbits/ frame= 626 fps= 24 q=29.0 size= 3650kB time=00:00:19.25 bitrate=1553.0kbits/ frame= 638 fps= 24 q=29.0 size= 3826kB time=00:00:19.65 bitrate=1594.7kbits/ frame= 649 fps= 24 q=29.0 size= 3950kB time=00:00:20.02 bitrate=1616.3kbits/ frame= 660 fps= 24 q=29.0 size= 4067kB time=00:00:20.38 bitrate=1634.1kbits/ frame= 669 fps= 24 q=29.0 size= 4121kB time=00:00:20.68 bitrate=1631.8kbits/ frame= 682 fps= 24 
q=29.0 size= 4274kB time=00:00:21.12 bitrate=1657.9kbits/s
[... per-frame progress lines trimmed ...]
frame= 901 fps= 24 q=-1.0 Lsize= 6717kB time=00:00:30.10 bitrate=1828.0kbits/s
video:6211kB audio:472kB subtitle:0 global headers:0kB muxing overhead 0.513217%
[libx264 @ 00000000002e6980] frame I:8   Avg QP:21.77 size: 39744
[libx264 @ 00000000002e6980] frame P:433 Avg QP:25.69 size: 11490
[libx264 @ 00000000002e6980] frame B:460 Avg QP:29.25 size:  2319
[... detailed macroblock/reference statistics trimmed ...]
[libx264 @ 00000000002e6980] kb/s:1692.37

AND THE -MOVFLAGS COMMAND:

C:\XSITE\SITE>ffmpeg_new -i C:\vidtests\Wildlife.mp4 -movflags faststart C:\vidtests\Wildlife_fs.mp4

AND THE -MOVFLAGS OUTPUT:

ffmpeg version N-54207-ge59fb3f Copyright (c) 2000-2013 the FFmpeg developers
  built on Jun 25 2013 21:55:00 with gcc 4.7.3 (GCC)
  configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
  libavutil      52. 37.101 / 52. 37.101
  libavcodec     55. 17.100 / 55. 17.100
  libavformat    55. 10.100 / 55. 10.100
  libavdevice    55.  2.100 / 55.  2.100
  libavfilter     3. 77.101 /  3. 77.101
  libswscale      2.  3.100 /  2.  3.100
  libswresample   0. 17.102 /  0. 17.102
  libpostproc    52.  3.100 / 52.  3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\vidtests\Wildlife.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    title           : Wildlife in HD
    encoder         : Lavf55.10.100
    comment         : Footage: Small World Productions, Inc; Tourism New Zealand | Producer: Gary F. Spradling | Music: Steve Ball
    copyright       : © 2008 Microsoft Corporation
  Duration: 00:00:30.13, start: 0.036281, bitrate: 1826 kb/s
    Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 640x480, 1692 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc
    Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s
[libx264 @ 0000000004360620] using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64
[libx264 @ 0000000004360620] profile High, level 3.0
Output #0, mp4, to 'C:\vidtests\Wildlife_fs.mp4':
    Stream #0:0(eng): Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 640x480, q=-1--1, 30k tbn, 29.97 tbc
    Stream #0:1(eng): Audio: aac (libvo_aacenc) ([64][0][0][0] / 0x0040), 44100 Hz, stereo, s16, 128 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (h264 -> libx264)
  Stream #0:1 -> #0:1 (aac -> libvo_aacenc)
Press [q] to stop, [?] for help
[... per-frame progress lines trimmed ...]
Starting second pass: moving header on top of the file
frame= 902 fps= 28 q=-1.0 Lsize= 6109kB time=00:00:30.14 bitrate=1659.8kbits/s dup=1 drop=0
video:5602kB audio:472kB subtitle:0 global headers:0kB muxing overhead 0.566600%
[libx264 @ 0000000004360620] frame I:8   Avg QP:20.52 size: 39667
[libx264 @ 0000000004360620] frame P:419 Avg QP:25.06 size: 10524
[libx264 @ 0000000004360620] frame B:475 Avg QP:29.03 size:  2123
[... detailed macroblock/reference statistics trimmed ...]
[libx264 @ 0000000004360620] kb/s:1524.54

    Read the article

  • Where is the iPhone's Date & Time getting its time zone list

    - by johnbdh
    I can get a list of time zones with [NSTimeZone knownTimeZoneNames], but that only gives the time zone IDs, which include one or two cities per time zone. The Date & Time settings screen has a great list of cities, and I have seen a few other apps with the same, if not similar, lookup lists. Where do these lists come from? I do need to relate a picked city to its time zone, like Date & Time does. Thanks, John
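    One practical starting point is sketched below in Swift as a rough, hypothetical helper: it derives city names from the zone identifiers themselves (TimeZone.knownTimeZoneIdentifiers is the Swift counterpart of [NSTimeZone knownTimeZoneNames]). The splitting logic and the resulting names are assumptions for illustration only; this approximates, but is not, the curated city database Settings ships.

        import Foundation

        // Rough sketch: derive a "city" display name from each known zone identifier
        // (e.g. "America/Los_Angeles" -> "Los Angeles") and map it back to its TimeZone.
        // This only approximates the Settings list; it is not Apple's curated database.
        func cityToTimeZoneTable() -> [String: TimeZone] {
            var table: [String: TimeZone] = [:]
            for identifier in TimeZone.knownTimeZoneIdentifiers {
                guard let zone = TimeZone(identifier: identifier),
                      let cityPart = identifier.split(separator: "/").last else { continue }
                table[cityPart.replacingOccurrences(of: "_", with: " ")] = zone
            }
            return table
        }

        // Usage: look up the zone behind a picked city name.
        let table = cityToTimeZoneTable()
        if let zone = table["Los Angeles"] {
            print(zone.identifier)                                   // America/Los_Angeles
            print(zone.localizedName(for: .generic, locale: .current) ?? "")
        }

    The full city list shown in Settings is larger than what the identifiers alone give you, so an app that needs something comparable typically ships its own city/zone table, for example one derived from the IANA tz database's zone.tab data.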

    Read the article

  • Strategy for building an application to replace a large spreadsheet

    - by Dan Walmsley
    I'm working on an application that is going to replace a rather large spreadsheet. The spreadsheet is used to budget purchases and things like that. It is the largest spreadsheet I have ever seen, and it requires a lot of manual data entry, so this application is going to automate much of that. But as I'm working on this, I've noticed it's slow going. And I got to thinking this must be a common situation: many companies will start with something like a spreadsheet, then when it gets too big to maintain, they have a custom application built. So is there anything out there (a framework or similar) that handles this sort of thing, migrating a spreadsheet to a custom application? I've had a quick Google but haven't really seen the kind of thing I'm looking for. It's too late for this project, but I thought it would be worth having a look for next time. How do you guys tackle this problem?

    Read the article

  • Subtracting Delphi Time Ranges from a Date Range, Calculating Remaining Time

    - by Anagoge
    I'm looking for an algorithm that will help calculate the length of the working time in a workday. It would take an input date range, then allow subtracting partially or completely intersecting time-range slices from it; the result would be the number of minutes (or the fraction/multiple of a day) left in the original date range after the various non-working slices have been subtracted out. For example:
    Input date range: 1/4/2010 11:21 am - 1/5/2010 3:00 pm
    Subtract out any partially or completely intersecting slices like this:
    - Remove all day Sunday
    - Non-Sundays: remove 11:00 - 12:00
    - Non-Sundays: remove time after 5:00 pm
    - Non-Sundays: remove time before 8:00 am
    - Non-Sundays: remove time 9:15 - 9:30 am
    Output: # of minutes left in the input date range
    I don't need anything overly general; I could hardcode the rules to simplify the code. If anyone knows of sample code, a library/function somewhere, or has some pseudo-code ideas, I'd love something to start with. I didn't see anything in DateUtils, for example. Even a basic function that calculates the number of minutes of overlap between two date ranges, to subtract out, would be a good start.
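    As one possible starting point, here is a minimal sketch of that last building block, written in Swift rather than Delphi, with illustrative dates and names that are not from the original project: the number of minutes two date ranges overlap, subtracted once per non-working slice.

        import Foundation

        // Minutes of overlap between two date ranges; 0 if they do not intersect.
        // Applying the business rules (Sundays, lunch breaks, etc.) means calling this
        // once per non-working slice and subtracting the result from the total.
        func overlapMinutes(_ a: DateInterval, _ b: DateInterval) -> Double {
            guard let overlap = a.intersection(with: b) else { return 0 }
            return overlap.duration / 60
        }

        let calendar = Calendar.current
        let start = calendar.date(from: DateComponents(year: 2010, month: 1, day: 4, hour: 11, minute: 21))!
        let end   = calendar.date(from: DateComponents(year: 2010, month: 1, day: 5, hour: 15))!
        let workRange = DateInterval(start: start, end: end)

        // One non-working slice: 11:00 am - 12:00 pm on January 4th.
        // It only partially intersects the range, so just 39 minutes are removed.
        let lunchStart = calendar.date(from: DateComponents(year: 2010, month: 1, day: 4, hour: 11))!
        let lunch = DateInterval(start: lunchStart, duration: 60 * 60)

        var remainingMinutes = workRange.duration / 60         // total minutes in the range
        remainingMinutes -= overlapMinutes(workRange, lunch)   // drop the intersecting part
        print(remainingMinutes)

    One caveat with this approach: if the non-working slices can overlap each other (for example "before 8:00 am" on a Sunday that is already excluded wholesale), they need to be merged before subtracting, otherwise the same minutes are removed twice. In Delphi the same primitive can presumably be built on DateUtils routines such as MinutesBetween.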

    Read the article

  • Trying to retrace our SEO domain redirect strategy

    - by dans
    An SEO consultant built a copy of my company's e-commerce site on another domain whose name contained our product's keywords (i.e., as if Levi's built a duplicate site on bluejeans.com). They then referenced a lot of the images on the actual website from the other domain (as if Levis.com had images on it referenced like: img src="http://www.bluejeans.com/jeans-front.jpg"). But when you tried to reach the site by typing the name into the browser, you were redirected to the regular website, so the site wasn't really used for any purpose except, I guess, SEO. Since I didn't think this was doing anything GOOD for us at the time, I deleted the duplicate site and let its hosting expire, only to watch our search engine position rankings fall dramatically. Any ideas as to what was going on there? I want to get it back to understand its impact, but I don't know how it was set up. I contacted our host and they have no idea how it was set up. I suspect there was some sort of redirect in play, or something?

    Read the article

  • AngularJS: showing the time portion of a datetime

    - by J. Davidson
    Hi, I have the following input, which displays a datetime: <div ng-repeat="item in items"> <input type="text" ng-model="item.name" /> <input ng-model="item.time" /> </div> The issue I have is that the time is in the following format: "2002-11-28T14:00:00Z". I want to display just the time portion, for which I would have to apply the filter date: 'hh:mm a'. I tried ng-model="labor.start_time | date: 'hh:mm a'". Please let me know how I can show only the time portion in the input box. I can't use a span tag, because the user can change the time, so it has to be shown in an input tag. Thanks

    Read the article

  • JD Edwards World Reporting Made Easy with Real Time Reporting Tools from The GL Company

    Fred talks to Paul Yarwood, US Operations General Manager, and Richard Crotty, North America Business Development Manager, of The GL Company, an Oracle Certified Partner, and to Denise Grills, Senior Director of Marketing and Product Strategy for Oracle's JD Edwards World products. They discuss how the finance departments of JD Edwards World customers can take complete control of their management reporting with a true inquiry, consolidation, and reporting solution from The GL Company, freeing the finance team from depending on IT time and resources.

    Read the article

  • What's Microsoft's strategy on Windows CE development?

    - by Heinzi
    Lots of specialized mobile devices use Windows CE or Windows Mobile. I'm not talking about smartphones here -- I know that Windows Phone 7 is Microsoft's current technology of choice there. I'm talking about barcode readers, embedded devices, industry PDAs with specialized hardware, etc. -- the kind of devices (Example 1, Example 2) where Windows Phone Silverlight development is not an option (no P/Invoke to access the hardware, etc.). Since direct Compact Framework support has been dropped in Visual Studio 2010, the only option for developing for these devices currently is to use outdated development tools (VS 2008), which already start to cause trouble on modern machines (e.g. there's no supported way to make the Windows Mobile Device Emulator's network stack work on Windows 7). Thus, my question is: what are Microsoft's plans regarding these mobile devices? Will they allow native applications on Windows Phone, such that, for example, barcode reader drivers can be developed and then accessed from Silverlight applications? Will they re-add "native" Compact Framework support to Visual Studio and just haven't found the time yet? Or will they leave this niche market?

    Read the article
