Search Results

Search found 195 results on 8 pages for 'estimation'.


  • SQL SERVER – How to Force New Cardinality Estimation or Old Cardinality Estimation

    - by Pinal Dave
    After reading my initial two blog posts on the new cardinality estimation, I received quite a few questions. Once I received these questions, I felt I should have clarified a few things earlier, when I started to write about cardinality. Before continuing with this blog, if you have not read them already, I suggest you read the following two blog posts: SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014 and SQL SERVER – Cardinality Estimation and Performance – SQL in Sixty Seconds #072.
    Q: Will the new cardinality estimation improve the performance of all of my queries?
    A: Remember, there is no 0-or-1 logic when it comes to estimation. The general assumption is that most queries will benefit from the new cardinality estimation introduced in SQL Server 2014. That is why the generic advice is to set the compatibility level of the database to 120, which is for SQL Server 2014.
    Q: Is it possible that after switching to the new cardinality estimation logic by setting the compatibility level to 120, I get degraded performance for a few queries?
    A: Yes, it is possible. However, the number of queries where this happens should be very small.
    Q: Can I still run my database at an older compatibility level and force a few queries to use the newer cardinality estimation logic? If yes, how?
    A: Yes, you can do that. You will need to force your query with trace flag 2312 to use the newer cardinality estimation logic.

      USE AdventureWorks2014
      GO
      -- Old Cardinality Estimation
      ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110
      GO
      -- Using New Cardinality Estimation
      SELECT [AddressID],[AddressLine1],[City]
      FROM [Person].[Address]
      OPTION(QUERYTRACEON 2312);
      -- Using Old Cardinality Estimation
      SELECT [AddressID],[AddressLine1],[City]
      FROM [Person].[Address];
      GO

    Q: Can I run my database at the newer compatibility level and force a few queries to use the older cardinality estimation logic? If yes, how?
    A: Yes, you can do that. You will need to force your query with trace flag 9481 to use the older cardinality estimation logic.

      USE AdventureWorks2014
      GO
      -- NEW Cardinality Estimation
      ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120
      GO
      -- Using New Cardinality Estimation
      SELECT [AddressID],[AddressLine1],[City]
      FROM [Person].[Address];
      -- Using Old Cardinality Estimation
      SELECT [AddressID],[AddressLine1],[City]
      FROM [Person].[Address]
      OPTION(QUERYTRACEON 9481);
      GO

    I guess I have covered most of the questions I have received so far. If I have missed any, please send them again and I will include them. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Web Application Tasks Estimation

    - by Ali
    I know the answer depends on the exact project and its requirements, but I am asking about the average percentage of total time that goes into each task of web application development, statistically, from your own experience. While developing a (database-driven) web application, what percentage of time does each of the following activities usually take?
    -- database creation & all related stored procedures
    -- server-side development
    -- client-side development
    -- layout setting and design
    I know there are lots of great web application developers around here, and each of you has done a fair amount of web development; as a result, there could be an almost fixed percentage of time going to each of the above activities for standard projects.
    Update: I am not expecting someone to tell me a number of hours; I am asking about the average percentage of time that goes into each of the activities, as per your experience, i.e. server-side development 50%, client-side development 20%, etc. I repeat, there will be lots of cases that differ from the standard depending on the exact requirements of each web application project, but here I am asking about the average for a standard (no special requirements) web project.

    Read the article

  • Sales Manager: "Why is time-estimation so complex?"

    - by Tim
    A few days ago a sales manager asked me that question, but at that moment I didn't have an answer he could understand. He isn't a programmer! At the moment I work on a product which is over 8 years old. Nobody thought about architecture or evolvability. I have a swamp of code in front of me every day which is not tested. Because of that, time estimates are very difficult for me. How can I describe that problem to a salesman? Not only my swamp-code problem, but in general!

    Read the article

  • How do great enterprises estimate software development efforts?

    - by Ed Pichler
    I was learning about how to estimate software development effort, and would like to know how successful enterprises estimate their projects. How do they work out how long a system will take to develop? What are the modern techniques for doing this, and which of them do these enterprises use? Articles and interviews with employees of those enterprises would be interesting. I asked on the Project Management site of StackExchange too.

    Read the article

  • Software cost estimation

    - by David Conde
    I've seen at my workplace (a university) most students making the software cost estimation of their final diploma work using COCOMO. My guess is that this way of estimating costs is somewhat old (COCOMO dates from 1981), hence my question: how do you estimate costs in your software? I've seen things like: Cost = (HoursOfWork + EstimatedIdle) * HourlyRate. That's not what I want; I'm looking for a properly (scientifically) defined cost model. EDIT: I've found some related questions on SO: "What are some of the software cost estimation methods and models?" and "How do you estimate the cost of developing software requirements?"
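
    For reference, the basic COCOMO model the question mentions reduces to two power laws: effort = a·KLOC^b person-months and schedule = c·effort^d months, with published 1981 coefficients per project class. A minimal sketch (the 32 KLOC input is a made-up example):

      # Basic COCOMO (Boehm, 1981): coefficients (a, b, c, d) by project class.
      COEFFS = {
          "organic":      (2.4, 1.05, 2.5, 0.38),
          "semidetached": (3.0, 1.12, 2.5, 0.35),
          "embedded":     (3.6, 1.20, 2.5, 0.32),
      }

      def cocomo(kloc: float, mode: str = "organic"):
          a, b, c, d = COEFFS[mode]
          effort = a * kloc ** b        # person-months
          schedule = c * effort ** d    # calendar months
          return effort, schedule

      effort, months = cocomo(32.0, "semidetached")  # hypothetical 32 KLOC project
      print(f"~{effort:.0f} person-months over ~{months:.0f} months")

    Multiplying the effort by a loaded rate then gives a cost figure; the later COCOMO II revision adds cost drivers on top of the same shape.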

    Read the article

  • LTO(4) tape shelf life estimation?

    - by emilp
    LTO tapes, Maxell in this case, are often marketed as having 30 years or more of shelf life when stored under "optimal conditions". Is there a way to get a good estimate of the shelf life, given parameters such as relative humidity, temperature, etc.? Obsolescence of the tapes aside, is there a way of determining the impact on shelf life of any deviation from the optimal? In other words, how many years are lost when storing, say, 1 degree above the specified range? Regards, Emil
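
    Media aging is commonly modeled with an Arrhenius-type temperature acceleration factor, which gives a feel for the "one degree above spec" question. The sketch below is purely illustrative: the activation energy is an assumed value, not a Maxell specification.

      import math

      K_BOLTZMANN = 8.617e-5    # eV/K

      def acceleration_factor(t_store_c: float, t_rated_c: float,
                              ea_ev: float = 0.8) -> float:
          """Arrhenius factor: how much faster chemical aging runs when
          storing at t_store vs. the rated t_rated. ea_ev is an assumption."""
          t_store = t_store_c + 273.15
          t_rated = t_rated_c + 273.15
          return math.exp((ea_ev / K_BOLTZMANN) * (1 / t_rated - 1 / t_store))

      rated_life_years = 30.0
      af = acceleration_factor(24.0, 23.0)         # 1 degree C above an assumed rating
      print(f"~{rated_life_years / af:.1f} years") # aging runs 'af' times faster

    Under these made-up parameters one degree costs roughly two to three years of shelf life; real models (e.g. Eyring-type) add humidity as a separate multiplicative term.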

    Read the article

  • SQL SERVER – Cardinality Estimation and Performance – SQL in Sixty Seconds #072

    - by Pinal Dave
    Yesterday I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. I discussed the newly introduced cardinality estimation in SQL Server 2014 and how it improves the performance of queries. The cardinality estimation logic is responsible for the quality of query plans and is largely responsible for improving performance for any query. This logic was not updated for quite a while, but in the latest version, SQL Server 2014, this logic is re-designed. The new logic now incorporates various assumptions and algorithms of OLTP and warehousing workloads. I hope my earlier blog post clearly explained how the new cardinality estimation logic improves performance. If not, I suggest you watch the following quick video where I explain this concept in extremely simple words. You can download the code used in this course from Simple Demo of New Cardinality Estimation Features of SQL Server 2014. Action Item: Here are the blog posts I have previously written; you can read them over here: Simple Demo of New Cardinality Estimation Features of SQL Server 2014, Pluralsight Course. You can subscribe to my YouTube Channel for frequent updates. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Video

    Read the article

  • Calculation of Milestones/Task list

    - by sugar
    My project manager assigned me a task to estimate the development time for an iPad application. Let's assume that I gave an estimate of 15 working days. He thought that was too many days, and the client needed the changes to the application urgently (as in most cases). So he told me: "I am going to assign two developers, including you, and as per my understanding and experience it won't take more than seven working days."
    Clarifications: I was given the task of estimating the development time for an individual. How could I be sure that two developers are going to finish it within 7 days? (I am new to the team and I hardly know the others' abilities.)
    Questions: Why do most project managers / team leaders assume that if one developer requires N days, then two developers will require N/2 days? Do they think of developers as software production machines? Should a team member (a developer, not a team lead or anyone more senior) estimate other developers' work? I didn't deny anything in the meeting, but what would be the appropriate answer to convince them that the N/2 formula they follow is not correct?
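
    One way to show a manager why N/2 fails is to put a number on coordination: every pair of developers is a communication channel that eats time. A toy model (the overhead constant is invented purely to illustrate the curve):

      from math import comb

      def calendar_days(solo_days: float, devs: int,
                        pair_overhead: float = 0.25) -> float:
          """Toy Brooks's-law model: each pair of devs spends a fixed
          fraction of every day coordinating. Constants are illustrative."""
          effective_devs = devs - pair_overhead * comb(devs, 2)
          if effective_devs <= 0:
              return float("inf")   # the team spends all day in meetings
          return solo_days / effective_devs

      for n in (1, 2, 3, 5, 8):
          print(n, "devs ->", round(calendar_days(15, n), 1), "days")

    With these numbers a 15-day solo task takes about 8.6 days with two developers, not 7.5, and adding people past a point makes it slower again.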

    Read the article

  • Delivering estimates and client expectations?

    - by FishOrDie
    When a client asks for an estimate of how long it would take to develop different sections of an app, is it best to give them a total amount, or what it would take for each section? Is it better/more common to give a range of hours/days or just a single number? Do you think most clients feel that if a programmer says it should take 50 hours, they should be billed for 50 hours? If I say it would take 50 and it actually takes 60, do I tell them in advance that I'm going over on my estimate, or just charge what was originally quoted?
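
    On the range question: one common way to turn a gut feeling into a defensible range is a three-point (PERT) estimate. A minimal sketch with made-up inputs:

      def pert(optimistic: float, likely: float, pessimistic: float):
          """Three-point (PERT/beta) estimate: expected hours and a rough
          standard deviation from optimistic/most-likely/pessimistic values."""
          expected = (optimistic + 4 * likely + pessimistic) / 6
          sigma = (pessimistic - optimistic) / 6
          return expected, sigma

      e, s = pert(40, 50, 80)    # hypothetical section of the app
      print(f"quote {e:.0f}h, likely range {e - s:.0f}-{e + s:.0f}h")

    Quoting the range (here roughly 47-60 hours) rather than a single number also defuses the billing question: the client sees up front that 60 hours was inside the estimate.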

    Read the article

  • When row estimation goes wrong

    - by Dave Ballantyne
    Whilst working at a client site, I hit upon one of those issues where you are not sure whether it is something entirely new, a bug, or a gap in your knowledge. The client had a large query that needed optimizing. The query itself looked pretty good: no UDFs, UNION ALL was used rather than UNION, and most of the predicates were sargable other than one or two minor ones. There were a few extra joins that could be eradicated, and having fixed up the query I then started to dive into the plan. I could see all manner of spills in the hash joins and the sort operations; these are caused when SQL Server has not reserved enough memory and has to write to tempdb, a VERY expensive operation that is generally avoidable. These, however, are a symptom of a bad row estimation somewhere else, and when that bad estimation is combined with other estimation errors, chaos can ensue. Working my way back down the plan, I found the cause, and the more I thought about it the more I became convinced that the optimizer could be making a much more intelligent choice.
    The first step is to reproduce, and I was able to simplify the query down to a single join between two tables, Product and ProductStatus. From a business point of view this is quite fundamental: find the status of particular products to show whether they are 'active', 'inactive' or whatever. The query itself couldn't be any simpler. In the estimated plan (ignore the "!" warning, which is a missing index), Products has 27,984 rows and the join outputs 14,000. The actual plan shows how bad that estimation of 14,000 is: every row in Products has a corresponding row in ProductStatus. This is unsurprising; in fact it is guaranteed, as there is a trusted FK relationship between the two columns. There is no way that the actual output of the join can be different from the input. The optimizer is already partly aware of the foreign key metadata, and that can be seen in the simplification stage. If we drop the Description column from the query, the join to ProductStatus is optimized out. It serves no purpose to the query: there is no data required from the table, and the optimizer knows that the FK will guarantee that a matching row exists, so it has been removed. Surely the same should be applied to the row estimations in the initial example, right? If you think so, please upvote this Connect item.
    So what are our options for fixing this error? Simply changing the join to a left join will cause the optimizer to think that we could allow the rows not to exist, and a subselect would also work. However, this is a client site; I'm not able to change each and every query where this join takes place, but there is a more global switch that will fix this error: trace flag 2301. This is described, perhaps loosely, as "Enable advanced decision support optimizations". We can test this on the original query in isolation by using the "QUERYTRACEON" option, and lo and behold our estimated plan now has the 'correct' estimation. Many thanks go to Paul White (b|t) for his help and for keeping me sane through this.

    Read the article

  • Best techniques for estimation

    - by viswanathan
    What are the possible techniques for arriving at a good estimate? We use the Delphi estimation technique for estimating tasks. What other, better, ways are there to do so? Also, what would be the do's and don'ts when giving an estimate?
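
    For readers unfamiliar with it, (Wideband) Delphi iterates anonymous expert estimates toward consensus. A toy sketch of the aggregation step, with invented numbers:

      from statistics import median, pstdev

      # Hypothetical anonymous estimates (days) across Delphi rounds:
      rounds = [
          [3, 8, 5, 21, 6],   # round 1: wide spread, discuss the outliers
          [5, 8, 6, 12, 6],   # round 2: after discussing assumptions
          [6, 7, 6, 9, 7],    # round 3: converging
      ]

      for i, estimates in enumerate(rounds, 1):
          print(f"round {i}: median={median(estimates)} "
                f"spread={pstdev(estimates):.1f}")
      # Stop when the spread is acceptably small; quote the final median.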

    Read the article

  • SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014

    - by Pinal Dave
    SQL Server 2014 has a new cardinality estimation logic/algorithm. The cardinality estimation logic is responsible for the quality of query plans and is largely responsible for improving performance for any query. This logic was not updated for quite a while, but in the latest version, SQL Server 2014, this logic is re-designed. The new logic now incorporates various assumptions and algorithms of OLTP and warehousing workloads. "Cardinality estimates are a prediction of the number of rows in the query result. The query optimizer uses these estimates to choose a plan for executing the query. The quality of the query plan has a direct impact on improving query performance." ~ Source: MSDN
    Let us see a quick example of how cardinality estimation improves performance for a query. I will be using the AdventureWorks database for my example. Before we start with this demonstration, remember that even though you have SQL Server 2014, to see the effect of the new cardinality estimates you will need your database compatibility mode set to 120, which is for SQL Server 2014. If your server instance is SQL Server 2014 but you have set your database compatibility mode to 110 or any other earlier version, you will get performance from your query like an older version of SQL Server. Now we will execute the following query in two different compatibility modes and see its performance. (Note that my SQL Server instance is of version 2014.)

      USE AdventureWorks2014
      GO
      -- -------------------------------
      -- NEW Cardinality Estimation
      ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120
      GO
      EXEC [dbo].[uspGetManagerEmployees] 44
      GO
      -- -------------------------------
      -- Old Cardinality Estimation
      ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110
      GO
      EXEC [dbo].[uspGetManagerEmployees] 44
      GO

    Result of Statistics IO

    Compatibility level 120:
      Table 'Person'. Scan count 0, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'Employee'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'Worktable'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Compatibility level 110:
      Table 'Worktable'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'Person'. Scan count 0, logical reads 137, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'Employee'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    You will notice that in the case of compatibility level 110 there are 137 logical reads from table Person, whereas in the case of compatibility level 120 there are only 6 logical reads from table Person. This drastically improves the performance of the query. If we enable the execution plan, we can see the same as well. I hope you will find this quick example helpful. You can read more about this in my latest Pluralsight Course.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • difficulty based time estimation software

    - by Frankie
    Some months ago I found a piece of project-management / time-estimation software that would ask you to sort your tasks by difficulty (1, 2 or 3) and would then estimate the time you would take to deploy. The system would auto-adapt as you were working. I've forgotten the software's name. For the past few days I've been digging through emails and searching Google with no results. Can anyone pin down the software's name from my description? It's not http://www.fogcreek.com (though I've found that to be a great piece of software). Thank you in advance.

    Read the article

  • Noise Estimation / Noise Measurement in Image

    - by Drazick
    Hello. I want to estimate the noise in an image. Let's assume the model of an image + white noise. Now I want to estimate the noise variance. My method is to calculate the local variance (3×3 up to 21×21 blocks) of the image and then find areas where the local variance is fairly constant (by calculating the local variance of the local-variance matrix). I assume those areas are "flat", hence the variance there is almost "pure" noise. Yet I don't get consistent results. Is there a better way? Thanks.
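
    The local-variance approach described is workable; a minimal sketch of it (the block size and the "flat area" percentile cutoff are arbitrary tuning choices):

      import numpy as np
      from scipy.ndimage import uniform_filter

      def estimate_noise_variance(img: np.ndarray, size: int = 7) -> float:
          """Estimate white-noise variance from the flattest image regions:
          build a local-variance map, then average its smallest values,
          where signal contributes least and variance is mostly noise."""
          img = img.astype(np.float64)
          local_mean = uniform_filter(img, size)
          local_mean_sq = uniform_filter(img * img, size)
          local_var = local_mean_sq - local_mean ** 2
          cutoff = np.percentile(local_var, 10)   # assumed "flat" threshold
          return float(local_var[local_var <= cutoff].mean())

    A common, more stable alternative is a robust estimate on a high-pass residual, e.g. Donoho's sigma = MAD / 0.6745 applied to the finest wavelet detail coefficients, which is less sensitive to block-size choices.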

    Read the article

  • Cardinality Estimation Bug with Lookups in SQL Server 2008 onward

    - by Paul White
    Cost-based optimization stands or falls on the quality of cardinality estimates (expected row counts). If the optimizer has incorrect information to start with, it is quite unlikely to produce good quality execution plans except by chance. There are many ways we can provide good starting information to the optimizer, and even more ways for cardinality estimation to go wrong. Good database people know this, and work hard to write optimizer-friendly queries with a schema and metadata (e.g. statistics) that reduce the chances of poor cardinality estimation producing a sub-optimal plan. Today, I am going to look at a case where poor cardinality estimation is Microsoft's fault, and not yours.

    SQL Server 2005

      SELECT th.ProductID, th.TransactionID, th.TransactionDate
      FROM Production.TransactionHistory AS th
      WHERE th.ProductID = 1
      AND th.TransactionDate BETWEEN '20030901' AND '20031231';

    The query plan on SQL Server 2005 is as follows (if you are using a more recent version of AdventureWorks, you will need to change the year on the date range from 2003 to 2007): There is an Index Seek on ProductID = 1, followed by a Key Lookup to find the Transaction Date for each row, and finally a Filter to restrict the results to only those rows where Transaction Date falls in the range specified. The cardinality estimate of 45 rows at the Index Seek is exactly correct. The table is not very large, there are up-to-date statistics associated with the index, so this is as expected. The estimate for the Key Lookup is also exactly right. Each lookup into the Clustered Index to find the Transaction Date is guaranteed to return exactly one row. The plan shows that the Key Lookup is expected to be executed 45 times. The estimate for the Inner Join output is also correct – 45 rows from the seek joining to one row each time gives 45 rows as output. The Filter estimate is also very good: the optimizer estimates 16.9951 rows will match the specified range of transaction dates. Eleven rows are produced by this query, but that small difference is quite normal and certainly nothing to worry about here. All good so far.

    SQL Server 2008 onward

    The same query executed against an identical copy of AdventureWorks on SQL Server 2008 produces a different execution plan: the optimizer has pushed the Filter conditions seen in the 2005 plan down to the Key Lookup. This is a good optimization – it makes sense to filter rows out as early as possible. Unfortunately, it has made a bit of a mess of the cardinality estimates. The post-Filter estimate of 16.9951 rows seen in the 2005 plan has moved with the predicate on Transaction Date. Instead of estimating one row, the plan now suggests that 16.9951 rows will be produced by each clustered index lookup – clearly not right! This misinformation also confuses SQL Sentry Plan Explorer: Plan Explorer shows 765 rows expected from the Key Lookup (it multiplies a rounded estimate of 17 rows by 45 expected executions to give 765 rows total).

    Workarounds

    One workaround is to provide a covering non-clustered index (avoiding the lookup avoids the problem, of course):

      CREATE INDEX nc1
      ON Production.TransactionHistory (ProductID)
      INCLUDE (TransactionDate);

    With the Transaction Date filter applied as a residual predicate in the same operator as the seek, the estimate is again as expected. We could also force the use of the ultimate covering index (the clustered one):

      SELECT th.ProductID, th.TransactionID, th.TransactionDate
      FROM Production.TransactionHistory AS th WITH (INDEX(1))
      WHERE th.ProductID = 1
      AND th.TransactionDate BETWEEN '20030901' AND '20031231';

    Summary

    Providing a covering non-clustered index for all possible queries is not always practical, and scanning the clustered index will rarely be optimal. Nevertheless, these are the best workarounds we have today. In the meantime, watch out for poor cardinality estimates when a predicate is applied as part of a lookup. The worst thing is that the estimate after the lookup join in the 2008+ plans is wrong. It's not hopelessly wrong in this particular case (45 versus 16.9951 is not the end of the world) but it easily can be much worse, and there's not much you can do about it. Any decisions made by the optimizer after such a lookup could be based on very wrong information – which can only be bad news. If you think this situation should be improved, please vote for this Connect item. © 2012 Paul White – All Rights Reserved twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • Two Weeks As A Software Estimation Rule of Thumb?

    - by Todd Williamson
    I saw a blog posting that spoke to me: http://james-iry.blogspot.com/2010/10/how-to-estimate-software.html Oddly, this is the kind of estimate that I tend to give on smaller projects. Just about everything is "two weeks", as that is comfortably far enough out. I once had an instructor walk us through how to create a more detailed estimate, wherein we already had the requirements up front, etc., and even after all the careful tabulation the final instruction was "Now that you have all this documentation, go ahead and double it." Agile practitioners seem to like two weeks as a sprint length, too. Is there something magical about two weeks? Is it a hrair number for our psyches, or some other kind of crutch? Do you have an immediate default fall-back schedule strategy when you are pressed for an initial delivery date?

    Read the article

  • Traffic estimation for a multiplayer flash game

    - by Steve Addington
    Hey, I want to know if my rough traffic estimations are right. This would be for a pretty simple realtime flash game in the style of haxball (but not a soccer game); here's a video of it: http://www.youtube.com/watch?v=z_xBdFg1RcI So here comes my estimation; I don't know if it is realistic! I hope someone can help me. Consider the packet attached as a typical one sent every 200 ms; at 148 bytes + 64 bytes of header, it makes around a 200-byte packet. The server will receive 200 bytes × 6 players × 5 times a second = 6000 bytes/s = 5.85 KB/s = 46.9 kbit/s, plus it has to send it all back to the players, so at this point we are at 94 kbit/s. Once the server has received all the information, it performs the definitive calculation and sends the new positions to all players in a bigger packet of around 900 bytes, which has to be delivered to all 6, making 900 bytes × 6 players × 5 times a second = 27000 bytes/s = 26 KB/s = 210 kbit/s. Overall that would be 26 KB per second, which is like 130 MB of traffic per hour for a 6-player room. But somehow I think the numbers are too high? That would be really a lot of traffic for such a simple game. Did I calculate something wrong?
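
    Redoing the post's arithmetic as a quick sanity check (all packet sizes and rates are the poster's own assumptions):

      TICKS = 5                  # state updates per second
      PLAYERS = 6
      IN_PKT = 148 + 64          # client packet + headers, bytes (~200 B)
      OUT_PKT = 900              # server state packet, bytes

      inbound = IN_PKT * PLAYERS * TICKS       # ~6,360 B/s from the clients
      outbound = OUT_PKT * PLAYERS * TICKS     # 27,000 B/s to the clients
      total = inbound + outbound               # the server carries both ways

      print(f"in {inbound * 8 / 1000:.0f} kbit/s, "
            f"out {outbound * 8 / 1000:.0f} kbit/s")
      print(f"per room: {total * 3600 / 1e6:.0f} MB/hour")   # ~120 MB/hour

    So the ~130 MB/hour figure is in the right ballpark; the usual lever is shrinking the 900-byte state packet (e.g. delta-encoding) rather than lowering the tick rate.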

    Read the article

  • Google Adsense threshold estimation

    - by Wladimir Ivanov
    I have an electronic music blog with traffic mainly from North America, Western Europe and Russia. Daily I get about 100 unique visitors with 150-200 pageviews. Should I start AdSense, or do I need to work on increasing the traffic stats first? Can you suggest another appropriate monetizing option for this case? How much time would it take me to hit the $100 AdSense payout barrier with the given traffic statistics? Thanks in advance.
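
    Back-of-the-envelope only: AdSense earnings are usually reasoned about in RPM (revenue per 1,000 pageviews), and the RPM below is a pure guess, not a quote for any niche.

      pageviews_per_day = 175    # middle of the stated 150-200 range
      rpm = 1.50                 # assumed $ per 1,000 pageviews; varies wildly

      daily = pageviews_per_day / 1000 * rpm
      print(f"~${daily:.2f}/day -> ~{100 / daily / 365:.1f} years "
            f"to the $100 payout")

    At these assumptions the $100 payout is roughly a year away, which is why growing traffic first is usually the advice at this level.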

    Read the article

  • What to look for in estimating a PowerBuilder Conversion Project?

    - by tekiegreg
    Hi there, I've been trying to write a spec for a PowerBuilder 9 to 11.5 migration of a relatively complex application. Granted, PowerBuilder is not really my specialty, and I'm having issues trying to justify an estimate for this part of the project (and the PowerBuilder people I've been talking with have had some personal issues lately and are out of communication). These are some of the metrics that we have and can evaluate:
    - PBL files
    - Main windows
    - DataWindows
    - Functions (no, we don't have the source available on this project)
    What metrics in particular are helpful, and how long would any given "unit" such as a DataWindow take?

    Read the article

  • How do I account for changed or forgotten tasks in an estimate?

    - by Andrew
    To handle task-level estimates and time reporting, I have been using (roughly) the technique that Steve McConnell describes in Chapter 10 of Software Estimation. Specifically, when the time comes for me to create task-level estimates (right before coding begins on a project), I determine the tasks at a fairly granular level so that, whenever possible, I have no tasks with a single-point, 50%-confidence estimate greater than four hours. That way, the task estimation process helps with constructing the software while helping me not to forget tasks during estimation. I come up with a range of hours possible for each task also, and using the statistical calculations that McConnell describes along with my historical accuracy data, I can generate estimates at other confidence levels when desired. I feel like this method has been working fairly well for me. We are required to put tasks and their estimates into TFS for tracking, so I use the estimates at the percentage of confidence I am told to use. I am unsure, however, what to do when I do forget a task, or I end up needing to do work that does not neatly fall within one of the tasks I estimated. Of course, trying to avoid this situation is best, but how do I account for forgotten/changed tasks? I want to have the best historical data I can to help me with future estimates, but right now, I basically am just calculating whether I made the 50%-confidence estimate and whether I made it inside the ranged estimate. I'll be happy to clarify what I'm asking if needed -- let me know what is unclear.
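
    For concreteness, the statistical roll-up McConnell describes looks roughly like the sketch below: sum the per-task means, combine the spreads in quadrature, and read off other confidence levels. The task ranges, the normal approximation and the independence assumption are all illustrative.

      import math

      # Hypothetical task ranges (hours) at roughly the 10th-90th percentiles
      tasks = [(2, 6), (1, 3), (4, 12), (3, 8)]

      # Under a normal assumption a 10th-90th range spans ~2.56 sigma
      means = [(lo + hi) / 2 for lo, hi in tasks]
      sigmas = [(hi - lo) / 2.56 for lo, hi in tasks]

      total_mean = sum(means)
      total_sigma = math.sqrt(sum(s * s for s in sigmas))  # independent tasks

      for pct, z in [(50, 0.0), (75, 0.67), (90, 1.28)]:
          print(f"{pct}% confidence: {total_mean + z * total_sigma:.1f} hours")

    A forgotten task is then just a new (low, high) entry: logging it separately, rather than stretching an existing task, keeps the historical ranges honest.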

    Read the article

  • Handling "related" work within a single agile work item

    - by Tesserex
    I'm on a project team of 4 devs, myself included. We've been having a long discussion on how to handle extra work that comes up in the course of a single work item. This extra work is usually things that are slightly related to the task, but not always necessary to accomplish the goal of the item (that may be an opinion). Examples include but are not limited to:
    - refactoring of the code changed by the work item
    - refactoring code neighboring the code changed by the item
    - re-architecting the larger code area around the ticket. For example, if an item has you changing a single function, you realize the entire class could now be redone to better accommodate this change.
    - improving the UI on a form you just modified
    When this extra work is small we don't mind. The problem is when this extra work causes a substantial extension of the item beyond the original feature point estimation. Sometimes a 5 point item will actually take 13 points of time. In one case we had a 13 point item that in retrospect could have been 80 points or more. There are two options going around in our discussion for how to handle this.
    We can accept the extra work in the same work item, and write it off as a mis-estimation. Arguments for this have included:
    - We plan for "padding" at the end of the sprint to account for this sort of thing.
    - Always leave the code in better shape than you found it. Don't check in half-assed work.
    - If we leave refactoring for later, it's hard to schedule and may never get done.
    - You are in the best mental "context" to handle this work now, since you're waist deep in the code already. Better to get it out of the way now and be more efficient than to lose that context when you come back later.
    We draw a line for the current work item, and say that the extra work goes into a separate ticket. Arguments include:
    - Having a separate ticket allows for a new estimation, so we aren't lying to ourselves about how many points things really are, or having to admit that all of our estimations are terrible.
    - The sprint "padding" is meant for unexpected technical challenges that are direct barriers to completing the ticket requirements. It is not intended for side items that are just "nice-to-haves".
    - If you want to schedule refactoring, just put it at the top of the backlog.
    - There is no way for us to properly account for this stuff in an estimation, since it seems somewhat arbitrary when it comes up. A code reviewer might say "those UI controls (which you actually didn't modify in this work item) are a bit confusing, can you fix that too?", which is like an hour, but they might say "Well, if this control now inherits from the same base class as the others, why don't you move all of this (hundreds of lines of) code into the base and rewire all this stuff, the cascading changes, etc.?", and that takes a week.
    - It "contaminates the crime scene" by adding unrelated work into the ticket, making our original feature point estimates meaningless.
    - In some cases, the extra work postpones a check-in, causing blocking between devs.
    Some of us are now saying that we should decide some cut-off, like if the additional stuff is less than 2 FP it goes in the same ticket, and if it's more, make it a new ticket. Since we're only a few months into using Agile, what's the opinion of all the more seasoned Agile veterans around here on how to handle this?

    Read the article

  • Is there any project estimation tool to give estimate for web design/ development work?

    - by jitendra
    Is there any project estimation tool which gives estimates for web design/development work? I don't have to calculate price; I just want to calculate estimated time. Just for example, for things like:
    - Page creation (layout in XHTML)
    - CSS creation
    - Content creation (Word to HTML, including images on some pages)
    - Bulk PDF upload
    - PHP script for a form
    - Testing all pages
    I need something like:
      Item                 Quantity   Time per task (min)   Estimated total
      PDF upload           30         2                     60 min (1 hr)
      Pages with images    30         15                    450 min (7.5 hrs)
    Is there any simple calculator powered by jQuery, where we can add/remove custom items to calculate the time? Or any other free online/offline tool?
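
    Lacking a ready-made tool, the arithmetic itself is trivial to script. A minimal sketch (the item names and minutes come from the question's example; everything else is invented):

      # (item, quantity, minutes each) -- example figures only
      tasks = [
          ("PDF upload",        30, 2),
          ("Pages with images", 30, 15),
          ("CSS creation",       1, 240),
      ]

      total = 0
      for name, qty, minutes in tasks:
          subtotal = qty * minutes
          total += subtotal
          print(f"{name:<18} {qty:>3} x {minutes:>3} min = {subtotal / 60:.1f} h")
      print(f"{'Total':<18} {total / 60:.1f} h")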

    Read the article

  • proper concurrent users estimation case studies

    - by golemwashere
    I've been asked to size a web architecture for an excessive number of concurrent users (hundreds of thousands). I'm having a hard time convincing these people that unless you are in the top 5 websites of your country, it's quite hard to hit those numbers. Can anyone provide some real-world case studies with stats for total and concurrent users, explaining the usual ratio between the two?
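
    For a first-order sanity check, Little's law (L = λ·W) relates average concurrency to arrival rate and session length; the inputs below are invented to show the shape of the ratio:

      daily_visits = 500_000       # assumed total visits per day
      avg_session_sec = 300        # assumed 5-minute average session

      arrival_rate = daily_visits / 86_400          # lambda, visits per second
      concurrent = arrival_rate * avg_session_sec   # Little's law: L = lambda * W
      print(f"~{concurrent:.0f} concurrent users on average")   # ~1,700

    Even half a million visits a day yields under two thousand average concurrent users here, so sustained hundreds of thousands concurrent implies truly enormous totals (peaks run a small multiple of the average).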

    Read the article

  • Altruistic network connection bandwidth estimation

    - by datenwolf
    Assume two peers, Alice and Bob, connected over an IP network. Alice and Bob are exchanging packets of lossy compressed data which are generated and consumed in real time (think a VoIP or video chat application). The service is designed to cope with little available bandwidth, but relies on low latencies. Alice and Bob would mark their connection with an appropriate QoS profile. Alice and Bob want to use variable-bitrate compression and would like to consume all of the leftover bandwidth available on the connection between them, but would voluntarily reduce the consumed bitrate depending on the state of the network. However, they'd like to retain a stable link, i.e. avoid interruptions in their decoded data stream caused by congestion and by the delay until the bandwidth gets adjusted. It is, however, perfectly acceptable for them to lose a few packets. TL;DR: Alice and Bob want to implement a VoIP protocol from scratch, and are curious about bandwidth and congestion control. What papers and resources do you suggest for Alice and Bob to read? Mainly in the area of bandwidth estimation and congestion control.
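
    The usual reading list starts with TFRC (RFC 5348) and the congestion-control literature around real-time media. The core mechanism Alice and Bob need can be sketched as loss-driven additive-increase/multiplicative-decrease of the encoder bitrate; everything below is an illustrative toy, not a vetted protocol:

      def adapt_bitrate(current_kbps: float, loss_rate: float, rtt_ms: float,
                        floor: float = 16.0, ceil: float = 512.0) -> float:
          """Toy AIMD adapter for a VBR codec: probe upward gently while the
          network is clean, back off multiplicatively on loss or rising delay.
          All thresholds and constants are invented for illustration."""
          if loss_rate > 0.02 or rtt_ms > 400:        # congestion signals
              return max(floor, current_kbps * 0.7)   # multiplicative decrease
          return min(ceil, current_kbps + 8.0)        # additive increase

      # Called once per feedback interval (e.g. each receiver report):
      rate = 64.0
      rate = adapt_bitrate(rate, loss_rate=0.0, rtt_ms=80)    # -> 72.0
      rate = adapt_bitrate(rate, loss_rate=0.05, rtt_ms=250)  # -> 50.4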

    Read the article
