Search Results

Search found 228 results on 10 pages for 'bankers rounding'.


  • Developing an Implementation Plan with Iterations by Russ Pitts

    - by user535886
    Ok, so you have come to grips with the idea that applying the iterative concept, as defined by OUM, is simply breaking up the project effort you have estimated for each phase into one or more six-week calendar-duration blocks of work. The idea is that the business user(s) or key recipient(s) of the work product(s) being developed never go longer than six weeks without some sort of review or prototyping of the work results for an iteration…"think-a-little", "do-a-little", and "show-a-little" in a six-week or shorter timeframe…ideally the business user(s) or key recipient(s) are involved throughout. You also understand the OUM concept that you only plan for that which you have knowledge of. Stated further, a project plan is initially developed at a high level and becomes more detailed as project knowledge grows. Agreeing to this concept means you also have to admit the fallacy that one can plan with precision beyond six weeks into a project…anything beyond six weeks is a best guess in most cases when dealing with software implementation projects. Project planning, as defined by OUM, begins with the Implementation Plan view, which is a very high-level perspective of the effort estimated for each of the five OUM phases, as well as the number of iterations within each phase. You might wonder how you can predict the number of iterations for each phase at this early point in the project. Remember, project planning is not an exact science; it is initially high-level and abstract in nature, and becomes more detailed and precise as the project proceeds. So where do you start in defining iterations for each phase of a project? The following are three easy steps to initially define the number of iterations for each phase:

    Step 1 => Start with identifying the known factors. Prior to starting a project you should know:
    · The agreed-upon time period for an iteration (e.g. 6 weeks, or 4 weeks, or…) within a phase (it is recommended to keep the iteration time period consistent within a phase, if not for the entire project)
    · The number of resources available for the project
    · The total number of man-days (effort) you have estimated for each of the five OUM phases of the project
    · The number of work days in a week

    Step 2 => Calculate the man-days of effort required for an iteration within a phase. Let's assume for the sake of this example there are 10 project resources, and you have estimated 2,536 man-days of work effort for the elaboration phase of the project. Let's also assume a week for this project is defined as 5 business days, and that each iteration in the elaboration phase will last a calendar duration of 6 weeks. A simple calculation produces the effort required for a single iteration:

        (Number of resources * days per week) * duration of iteration = man-days required per iteration
        (10 resources * 5 days/week) * 6 weeks = 300 man-days of effort required per iteration

    Step 3 => Calculate the number of iterations that can occur within a phase. Next, divide the man-days of effort estimated for the phase by the man-days required per iteration:

        (man-days of effort estimated for phase / man-days required per iteration) = number of iterations for phase
        (2,536 man-days of estimated effort / 300 man-days per iteration) = 8.45 iterations, which should be rounded to a whole number such as 9 iterations*

    *Note - this is an approximate calculation, not an exact science. This particular example is a simple one, which assumes all resources are utilized throughout the phase, including tech resources, etc. (round down or up to a whole number based on project factor considerations). It is also best in many cases to round up to the higher number, as this provides some calendar scheduling contingency.
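
    For illustration, here is the three-step calculation as a small Python sketch (the function and parameter names are mine, not OUM's); it rounds up to build in the scheduling contingency the note recommends:

        import math

        def iterations_for_phase(phase_effort, resources, days_per_week=5, weeks_per_iteration=6):
            """Approximate the number of iterations for an OUM phase (not an exact science)."""
            per_iteration = resources * days_per_week * weeks_per_iteration  # man-days per iteration
            return math.ceil(phase_effort / per_iteration)                   # round up for contingency

        # Elaboration-phase example: 2,536 man-days, 10 resources -> 300 man-days/iteration -> 8.45 -> 9
        print(iterations_for_phase(2536, 10))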

    Read the article

  • Code Golf: Leibniz formula for Pi

    - by Greg Beech
    I recently posted one of my favourite interview whiteboard coding questions in "What's your most controversial programming opinion?", which is to write a function that computes Pi using the Leibniz formula. It can be approached in a number of different ways, and the exit condition takes a bit of thought, so I thought it might make an interesting code golf question. Shortest code wins! Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...), with more terms giving greater accuracy, write a function that calculates Pi to within 0.00001. Edit: 3 Jan 2008. As suggested in the comments, I changed the exit condition to be within 0.00001, as that's what I really meant (an accuracy of 5 decimal places is much harder due to rounding, so I wouldn't want to ask that in an interview, whereas "within 0.00001" is an easier exit condition to understand and implement). Also, to answer the comments: my intention was that the solution should compute the number of iterations, or check when it had done enough, but there's nothing to prevent you from pre-computing the number of iterations and using that number. I really asked the question out of interest to see what people would come up with.
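
    For reference, a straightforward (non-golfed) Python version; the alternating-series error bound 4/(k+2) gives a safe exit condition:

        def leibniz_pi(tolerance=0.00001):
            """Sum 4 * (1 - 1/3 + 1/5 - ...) until the estimate is within tolerance of Pi."""
            total, k, sign = 0.0, 1, 1.0
            while True:
                total += sign / k
                if 4.0 / (k + 2) < tolerance:  # the next term bounds the remaining error
                    return 4.0 * total
                sign, k = -sign, k + 2

        print(leibniz_pi())  # approximately 3.14159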

    Read the article

  • Accurate least-squares fit algorithm needed

    - by ggkmath
    I've experimented with the two ways of implementing a least-squares fit (LSF) algorithm shown here. The first code is simply the textbook approach, as described by Wolfram's page on LSF. The second code rearranges the equation to minimize machine errors. Both codes produce similar results for my data. I compared these results with Matlab's p=polyfit(x,y,1) function, using correlation coefficients to measure the "goodness" of fit and compare each of the 3 routines. I observed that while all 3 methods produced good results, at least for my data, Matlab's routine had the best fit (the other 2 routines had similar results to each other). Matlab's p=polyfit(x,y,1) function uses a Vandermonde matrix V (an n x 2 matrix) and QR factorization to solve the least-squares problem. In Matlab code, it looks like:

        V = [x1,1; x2,1; x3,1; ... xn,1]  % this line is pseudo-code
        [Q,R] = qr(V,0);
        p = R\(Q'*y);  % performs same as p = V\y

    I'm not a mathematician, so I don't understand why it would be more accurate. Although the difference is slight, in my case I need to obtain the slope from the LSF and multiply it by a large number, so any improvement in accuracy shows up in my results. For reasons I can't get into, I cannot use Matlab's routine in my work. So, I'm wondering if anyone has a more accurate equation-based approach recommendation I could use that is an improvement over the above two approaches, in terms of rounding errors/machine accuracy/etc. Any comments appreciated! Thanks in advance.
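
    If a library-based route is acceptable, the QR approach ports directly; here is a rough NumPy sketch of the same factorization (variable names mirror the Matlab above, data values are made up):

        import numpy as np

        def linfit_qr(x, y):
            """Slope/intercept via QR factorization of the Vandermonde matrix;
            better conditioned than forming the normal equations directly."""
            V = np.column_stack([x, np.ones_like(x)])  # n x 2 Vandermonde matrix
            Q, R = np.linalg.qr(V)                     # economy-size QR
            return np.linalg.solve(R, Q.T @ y)         # p = R \ (Q' * y)

        x = np.array([0.0, 1.0, 2.0, 3.0])
        y = np.array([1.1, 2.9, 5.2, 7.1])
        print(linfit_qr(x, y))  # [slope, intercept]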

    Read the article

  • Why are floating point values so prolific?

    - by Kibbee
    So, title says it all. Why are floating point values so prolific in computer programming? Due to problems like rounding errors, and not being able to accurately represent numbers such as 0.1, I really can't see how they got as far as they did. I understand that computation is faster with floating point numbers; however, I can think of only a few cases where they are actually the right data type to be using. If you sit back and think about every time you used a floating point value, how many times did you say, "well, some error would be OK, as long as the result was a few microseconds faster"? It really makes me think because Jeff was talking about NP-completeness, and how heuristics give an answer that is kind of right. And, well, computers shouldn't do that. They should give you the answer that is correct. Yet we see floating point values used in many applications where they are simply not valid. What really bugs me isn't that floating point exists, but that in many languages there isn't even a viable non-floating-point decimal alternative. A lot of programmers doing financial applications have to fall back to storing the number of cents in an integer field, which brings with it all kinds of other problems. Why do floats continue to be so prolific, even though they can't represent the real answer, and we expect computers to be accurate? [EDIT] Just to clarify: I was talking about base-2 floating point, not base-10 floating point. .NET offers the Decimal data type, which is a base-10 floating point value that offers a much better representation of the numbers we deal with on a daily basis in most computer programs. I find it hard to believe that even modern languages like Java don't support base-10 floating point values, unless you want to move into the realm of things like BigDecimal, which isn't really the right answer either in a lot of situations.
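
    A quick Python illustration of the base-2 problem and the base-10 alternative the poster is asking for:

        from decimal import Decimal

        print(sum([0.1] * 10))             # 0.9999999999999999 -- 0.1 has no exact binary form
        print(sum([Decimal("0.1")] * 10))  # 1.0 -- a base-10 type keeps the value exact, but slower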

    Read the article

  • Converting Milliseconds to Timecode

    - by Jeff
    I have an audio project I'm working on using BASS from Un4seen. This library mainly works in BYTES, but I have a conversion in place that lets me show the current position of the song in milliseconds, knowing that MS = Samples * 1000 / SampleRate and that Samples = Bytes * 8 / Bits / Channels. So here's my main issue, and it's fairly simple... I have a function in my project that converts the milliseconds to timecode in Mins:Secs:Milliseconds.

        Public Function ConvertMStoTimeCode(ByVal lngCurrentMSTimeValue As Long)
            ConvertMStoTimeCode = CheckForLeadingZero(Fix(lngCurrentMSTimeValue / 1000 / 60)) & ":" & _
                                  CheckForLeadingZero(Int((lngCurrentMSTimeValue / 1000) Mod 60)) & ":" & _
                                  CheckForLeadingZero(Int((lngCurrentMSTimeValue / 10) Mod 100))
        End Function

    Now the issue comes within the seconds calculation. Any time the MS calculation is over .5, the seconds place rounds up to the next second, so 1.5 seconds actually prints as 2.5 seconds. I know for sure that the Int conversion rounds down, and I know my math is correct as I've checked it in a calculator 100 times. I can't figure out why the number is rounding up. Any suggestions?
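
    If this is classic VB/VBA, a likely culprit is that the Mod operator rounds its floating-point operands to whole numbers (using banker's rounding) before taking the remainder, so 1500 / 1000 = 1.5 becomes 2 before Int ever runs. Doing the whole conversion in integer division sidesteps that; a sketch in Python (the VB equivalent would use the \ operator throughout):

        def ms_to_timecode(ms):
            """Mins:Secs:Centiseconds using only integer division, so nothing can round up."""
            minutes = ms // 60000
            seconds = (ms // 1000) % 60
            centis = (ms // 10) % 100
            return f"{minutes:02d}:{seconds:02d}:{centis:02d}"

        print(ms_to_timecode(1500))  # 00:01:50 -- 1.5 s stays 1 s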

    Read the article

  • Can someone look over the curriculum for this major & give me your thoughts? Computing & Security Technology

    - by scottsharpejr
    My goal is to become a good web developer. I'm interested in learning how to build complex websites as well as how to write web applications. I want skills that will enable me to write apps for <--insert hottest web trend here-- (Facebook & iPhone apps, for example). This is one of my goals as far as tech is concerned. I'd also like to have a broad knowledge of different areas of IT. I'm looking into majoring in "Computing & Security Technology". The program is offered by Drexel in conjunction with my CC. It's a 4 year degree. Can someone take a look at the pdf below? It outlines every course I must take. http://www.drexelatbcc.org/academics/PDF/CST_CT.pdf For degree requirements with links to course descriptions, see drexel.edu/catalog/degree/ct.htm With electives I can go up to Web Development 4. Based on my goals of web development & wanting a well-rounded education in information technology, what do you think of the curriculum? How will I fare entering the job market with this degree? My goals here are a little different. I'd like to work for 2 to 3 companies over the course of 6-7 years, working with and learning different areas of IT. I'd like to stay with a company an average of 2-3 years before moving on. My end goal is to go into business for myself (IT related). I appreciate any and all advice the community here can give me! :) Could someone also explain to me their interpretation of this major? Thanks! P.S. I already know XHTML & CSS. I am just now starting to experiment with PHP.

    Read the article

  • compact Number formatting behavior in Java (automatically switch between decimal and scientific notation)

    - by kostmo
    I am looking for a way to format a floating point number dynamically in either standard decimal format or scientific notation, depending on the value of the number. For moderate magnitudes, the number should be formatted as a decimal with trailing zeros suppressed. If the floating point number is equal to an integral value, the decimal point should also be suppressed. For extreme magnitudes (very small or very large), the number should be expressed in scientific notation. Alternately stated: if the number of characters in the expression as standard decimal notation exceeds a certain threshold, switch to scientific notation. I should have control over the maximum number of digits of precision, but I don't want trailing zeros appended to express the minimum precision; all trailing zeros should be suppressed. Basically, it should optimize for compactness and readability. For example:
    · 2.80000 -> 2.8
    · 765.000000 -> 765
    · 0.0073943162953 -> 0.00739432 (limit digits of precision—to 6 in this case)
    · 0.0000073943162953 -> 7.39432E-6 (switch to scientific notation if the magnitude is small enough—less than 1E-5 in this case)
    · 7394316295300000 -> 7.39432E+15 (switch to scientific notation if the magnitude is large enough—for example, when greater than 1E+10)
    · 0.0000073900000000 -> 7.39E-6 (strip trailing zeros from the significand in scientific notation)
    · 0.000007299998344 -> 7.3E-6 (rounding from the 6-digit precision limit causes this number to have trailing zeros, which are stripped)
    Here's what I've found so far: the .toString() method of the Number class does most of what I want, except it doesn't upconvert to integer representation when possible, and it will not express large integral magnitudes in scientific notation. Also, I'm not sure how to adjust the precision. The "%G" format string to the String.format(...) function allows me to express numbers in scientific notation with adjustable precision, but does not strip trailing zeros. I'm wondering if there's already some library function out there that meets these criteria. I guess the only stumbling block for writing this myself is having to strip the trailing zeros from the significand in scientific notation produced by %G.
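
    For what it's worth, the behavior described is very close to C-style %g (as opposed to Java's %G), which strips trailing zeros and switches notation based on the exponent; a quick Python demonstration of the seven examples at 6 significant digits:

        for x in [2.80000, 765.000000, 0.0073943162953, 0.0000073943162953,
                  7394316295300000, 0.0000073900000000, 0.000007299998344]:
            print(f"{x:.6g}")  # 2.8, 765, 0.00739432, 7.39432e-06, 7.39432e+15, 7.39e-06, 7.3e-06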

    Read the article

  • How can I round money values to the nearest $5.00 interval?

    - by Frank Developer
    I have an Informix-SQL based Pawnshop app which calculates an estimate of how much money should be loaned to a customer, based on the weight and purity of gold. The minimum the pawnshop lends is $5.00. The pawnshop employee will typically lend amounts which end in either 5 or 0; examples: 10, 15, 20, 100, 110, 125, etc. They do this so as to not run into shortage problems with $1.00 bills. So, if for example my system calculates the loan should be $12.49, then round it to $10; $12.50 to $15.00; $13.00 to $15.00; $17.50 to $20.00; and so on!.. The employee can always override the rounded amount if necessary. Is it possible to accomplish this within the instructions section of a perform screen, or would I have to write a cfunc and call it from within perform?.. Are there any C library functions which perform interval rounding of money values?.. On another note, I think the U.S. Government should discontinue the use of pennies so that businesses can round amounts to the nearest nickel; it would save so much time and weight in our pockets!
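
    The interval-rounding logic itself is a line of integer arithmetic; a Python sketch of the rule described above (working in cents to avoid float error; porting it to a C cfunc is straightforward):

        def round_to_nearest_5(amount, minimum=5.00):
            """Round to the nearest $5.00 (midpoints like $12.50 round up), never below the minimum loan."""
            cents = round(amount * 100)               # work in integer cents
            nearest = ((cents + 250) // 500) * 500    # nearest multiple of 500 cents
            return max(nearest / 100.0, minimum)

        for a in (12.49, 12.50, 13.00, 17.50):
            print(a, "->", round_to_nearest_5(a))     # 10.0, 15.0, 15.0, 20.0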

    Read the article

  • Dealing with Imprecise Drawing in CAD Drawing

    - by Graviton
    I have a CAD application that allows the user to draw lines and polygons and all that. One thorny problem I face is that user drawing can be highly imprecise. For example, a user might want to draw two rectangles that are connected to each other, so there should be one line shared by the two rectangles. However, it's easy for the user to, instead of drawing one line, draw two lines that are so close to each other that on screen you would mistake them for the same line, except that they aren't when you zoom in a little bit. My application requires the user to draw the lines properly (or my preprocessing must be able to do auto-correction), or else my internal algorithm won't be able to process the inputs correctly. What is the best strategy to combat this kind of problem? I am thinking about rounding the point coordinates to a certain degree of precision. Although I can't exactly pinpoint the problem with this approach, I feel that it is not the correct way of doing things and that it will introduce a new set of problems. Any idea?
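
    For comparison, the usual alternative to blind coordinate rounding is snapping: when a new vertex lands within a tolerance of an existing one, reuse the existing vertex, so near-duplicates collapse to a single shared point. A naive O(n^2) Python sketch of the idea (the tolerance value is illustrative):

        def snap_points(points, tol=0.01):
            """Snap each point to the first previously-seen point within `tol`."""
            reps, out = [], []
            for x, y in points:
                for rx, ry in reps:
                    if (x - rx) ** 2 + (y - ry) ** 2 <= tol ** 2:
                        out.append((rx, ry))
                        break
                else:
                    reps.append((x, y))
                    out.append((x, y))
            return out

        print(snap_points([(0.0, 0.0), (1.0, 0.0), (1.002, 0.001)]))
        # [(0.0, 0.0), (1.0, 0.0), (1.0, 0.0)] -- the stray vertex joins its neighbor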

    Read the article

  • Storing high precision latitude/longitude numbers in iOS Core Data

    - by Bryan
    I'm trying to store latitude/longitudes in Core Data. These end up having anywhere from 6 to 20 digits of precision, and for whatever reason I had them as floats in Core Data; it's rounding them and not giving me the exact values back. I tried the "decimal" type, with no luck either. Are NSStrings my only other option? EDIT NSManagedObject:

        @interface Event : NSManagedObject { }
        @property (nonatomic, retain) NSDecimalNumber * dec;
        @property (nonatomic, retain) NSDate * timeStamp;
        @property (nonatomic, retain) NSNumber * flo;
        @property (nonatomic, retain) NSNumber * doub;

    Here's the code for a sample number that I store into Core Data:

        NSNumber *n = [NSDecimalNumber decimalNumberWithString:@"-97.12345678901234567890123456789"];

    Code to access it again:

        NSNumber *n = [managedObject valueForKey:@"dec"];
        NSNumber *f = [managedObject valueForKey:@"flo"];
        NSNumber *d = [managedObject valueForKey:@"doub"];

    Printed values:

        Printing description of n: -97.1234567890124
        Printing description of f: <CFNumber 0x603f250 [0xfef3e0]>{value = -97.12345678901235146441, type = kCFNumberFloat64Type}
        Printing description of d: <CFNumber 0x6040310 [0xfef3e0]>{value = -97.12345678901235146441, type = kCFNumberFloat64Type}
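
    What's being observed here is the ceiling of IEEE-754 doubles, which carry only about 15-17 significant decimal digits; any backing store that round-trips through a double will show the same truncation. Python reproduces it exactly (and for what it's worth, 7 decimal places of a degree is already roughly centimetre-level resolution):

        x = -97.12345678901234567890123456789
        print(repr(x))  # -97.12345678901235 -- a double cannot hold more digits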

    Read the article

  • How to make UISlider output nice rounded numbers exponentially?

    - by RickiG
    Hi. I am implementing a UISlider a user can manipulate to set a distance. I have never used the CocoaTouch UISlider, but in other frameworks' sliders there is usually a variable for setting the "step" and other "helper" properties. The documentation for the UISlider deals only with a max and min value, and the output is always a 6-decimal float with a linear relation to the position of the slider knob. I guess I will have to implement the desired functionality step by step. To the user, the min/max values range from 10 m to 999 km, and I am trying to implement this in an exponential way that will feel natural to the user. That is, the user gets a feeling of control over the values, big or small, and the "output" has reasonable values: values like 10 m, 200 m, 2.5 km, 150 km, etc., instead of 1.2342356 m or 108.93837756 km. I would like the step size to be 10 m for the first 200 m, then maybe 50 m up to 500 m; then, when passing the 1000 m mark, it starts to deal with kilometres, so step size = 1 km up until 50 km, then maybe 25 km steps, etc. Any way I go about this, I end up doing a lot of rounding and a lot of calculations wrapped in a forest of if statements and NSString/NSNumber conversions, each time the user moves the slider just a little. I was hoping someone could lend me a bit of inspiration/math help or make me aware of a leaner approach to solving this problem. My last idea is to populate an array with 100 string values and have the slider's int value correspond to a string; this is not very flexible, but doable. Thank you in advance for any help given :)
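
    One leaner approach: treat the 0-1 slider value as an exponent, then snap the mapped distance to two significant digits. That yields "nice" values at every scale without the if-forest; the explicit step tiers described above could replace the two-digit snap. A Python sketch of the math (the bounds and the snap rule are assumptions to tune):

        import math

        def slider_to_metres(t, lo=10.0, hi=999_000.0):
            """Exponential interpolation from [0,1] to [lo, hi] metres,
            snapped to 2 significant digits so outputs read like 10 m, 250 m, 2.5 km."""
            metres = lo * math.exp(t * math.log(hi / lo))
            step = 10.0 ** (math.floor(math.log10(metres)) - 1)  # 10% of the magnitude
            return round(metres / step) * step

        for t in (0.0, 0.25, 0.5, 0.75, 1.0):
            print(slider_to_metres(t))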

    Read the article

  • Problem with JavaScript arithmetic

    - by Lynn
    I have a form for my customers to add budget projections. A prominent user wants to be able to show dollar values in either dollars, kilo-dollars, or mega-dollars. I'm trying to achieve this with a group of radio buttons that call the following JavaScript function, but am having problems with rounding that make the results look pretty crummy. Any advice would be much appreciated! Lynn

        function setDollars(new_mode) {
            var factor;
            var myfield;
            var myval;
            var cur_mode = document.proj_form.cur_dollars.value;
            if (cur_mode == new_mode) {
                return;
            } else if ((cur_mode == 'd') && (new_mode == 'kd')) {
                factor = "0.001";
            } else if ((cur_mode == 'd') && (new_mode == 'md')) {
                factor = "0.000001";
            } else if ((cur_mode == 'kd') && (new_mode == 'd')) {
                factor = "1000";
            } else if ((cur_mode == 'kd') && (new_mode == 'md')) {
                factor = "0.001";
            } else if ((cur_mode == 'md') && (new_mode == 'kd')) {
                factor = "1000";
            } else if ((cur_mode == 'md') && (new_mode == 'd')) {
                factor = "1000000";
            }
            document.proj_form.cur_dollars.value = new_mode;
            var cur_idx = document.proj_form.cur_idx.value;
            var available_slots = 13 - cur_idx;
            var td_name;
            var cell;
            var new_value;
            // Adjust dollar values for projections
            for (i = 1; i < 13; i++) {
                var myfield = eval('document.proj_form.proj_' + i);
                if (myfield.value == '') {
                    myfield.value = 0;
                }
                var myval = parseFloat(myfield.value) * parseFloat(factor);
                myfield.value = myval;
                if (i < cur_idx) {
                    document.getElementById("actual_" + i).innerHTML = myval;
                }
            }
        }
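
    The crumminess is binary floating point at work: factors like 0.001 have no exact binary representation, so scaled values pick up long decimal tails. Two common fixes are dividing by an exact integer (1000) instead of multiplying by "0.001", and rounding or formatting for display (in JavaScript, Number.prototype.toFixed). A quick Python illustration of the failure and both fixes:

        val = 0.1 + 0.2
        print(val)              # 0.30000000000000004 -- the classic binary-float tail
        print(round(val, 6))    # 0.3 -- round for storage at a fixed precision
        print(f"{val:.2f}")     # '0.30' -- or format only for display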

    Read the article

  • Raphael SVG VML Implement Multi Pivot Points for Rotation

    - by Cody N
    Over the last two days I've effectively figured out how NOT to rotate Raphael elements. Basically I am trying to implement multiple pivot points on an element to rotate it by mouse. When a user enters rotation mode, 5 pivots are created: one for each corner of the bounding box and one in the center of the box. While the mouse is down and moving, it is simple enough to rotate around the pivot using Raphael's element.rotate(degrees, x, y) and calculating the degrees based on the mouse position and atan2 to the pivot point. The problem arises after I've rotated the element, bbox, and the other pivots: their x,y positions are the same; only their viewport is different. In an SVG-enabled browser I can create new pivot points based on matrixTransform and getCTM. However, after creating the first set of new pivots, with every subsequent rotation the pivots get further away from the transformed bbox due to rounding errors. The above is not even an option in IE, since it is VML-based and cannot account for the transformation. Is the only effective way to implement element rotation to use rotate absolute, or to rotate around the center of the bounding box? Is it possible at all to create multiple pivot points for an object and update them after mouseup so they remain at the corners and center of the transformed bbox?
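
    One way to stop the drift: never derive new pivots from already-transformed pivots. Keep the untransformed bbox corners plus a running total angle, and recompute each pivot from scratch on mouseup; rounding error then never accumulates across rotations. The transform is plain trigonometry, sketched here in Python:

        import math

        def rotate_point(px, py, cx, cy, total_degrees):
            """Position of an untransformed point (px, py) after rotating the
            whole element by total_degrees about (cx, cy)."""
            t = math.radians(total_degrees)
            dx, dy = px - cx, py - cy
            return (cx + dx * math.cos(t) - dy * math.sin(t),
                    cy + dx * math.sin(t) + dy * math.cos(t))

        # Corner of a 100x100 box rotated 90 degrees about its center:
        print(rotate_point(0, 0, 50, 50, 90))  # approximately (100, 0)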

    Read the article

  • How to create a dynamic panel layout for this logo creation wizard?

    - by Rebol Tutorial
    I want to create a wizard for the logo badge below with 3 parameters. I can make the title dynamic but for image and gradient it's hardcoded because I can't see how to make them dynamic. Code follows after pictures:

        custom-styles: stylize [
            lab: label 60x20 right bold middle font-size 11
            btn: button 64x20 font-size 11 edge [size: 1x1]
            fld: field 200x20 font-size 11 middle edge [size: 1x1]
            inf: info font-size 11 middle edge [size: 1x1]
            ari: field wrap font-size 11 edge [size: 1x1] with [flags: [field tabbed]]
        ]
        panel1: layout/size [
            origin 0 space 2x2 across
            styles custom-styles
            h3 "Parameters" font-size 14 return
            lab "Title" fld_title: fld "EXPERIMENT" return
            lab "Logo" fld_logo: fld "http://www.rebol.com/graphics/reb-logo.gif" return
            lab "Gradient" fld_gradient: fld "5 55 5 10 10 71.0.6 30.10.10 71.0.6"
        ] 278x170
        panel2: layout/size [
            ; layout (window client area) size is 278x170 at the end of the spec block
            at 0x0
            ; put the banner on the top left corner
            box 278x170 effect [
                ; default box face size is 100x100
                draw [
                    anti-alias on
                    line-width 2.5  ; number of pixels in width of the border
                    pen black       ; color of the edge of the next draw element
                    fill-pen radial 100x50 5 55 5 10 10 71.0.6 30.10.10 71.0.6
                    ; the draw element
                    box      ; another box drawn as an effect
                    15       ; size of rounding in pixels
                    0x0      ; upper left corner
                    278x170  ; lower right corner
                ]
            ]
            pad 30x-150
            Text fld_title/text font [name: "Impact" size: 24 color: white]
            image http://www.rebol.com/graphics/reb-logo.gif
        ] 278x170
        main: layout [
            vh2 "Logo Badge Wizard"
            guide pad 20
            button "Parameters" [panels/pane: panel1 show panels]
            button "Rendering" [show panel2 panels/pane: panel2 show panels]
            button "Quit" [Unview]
            return
            box 2x170 maroon
            return
            panels: box 278x170
        ]
        panel1/offset: 0x0
        panel2/offset: 0x0
        panels/pane: panel1
        view main

    Read the article

  • Where is REBOL's fill-pen documented (to get a glow effect on a rounded rectangle)?

    - by Rebol Tutorial
    There is some discussion about fill-pen here: http://www.mail-archive.com/[email protected]/msg02019.html But I can't see documentation for the cubic, diamond, etc. fill-pen effects in REBOL's official docs. I'm trying to draw a rounded rectangle with a glowing effect, but I don't really understand the parameters I'm playing with, so I can't get exactly what I'd like (I'd like the glow effect starting from the center, not from the dark top-left corner):

        view layout [
            box 278x185 effect [
                ; default box face size is 100x100
                draw [
                    anti-alias on   ; information for the next draw element (not required)
                    line-width 2.5  ; number of pixels in width of the border
                    pen black       ; color of the edge of the next draw element
                    ; fill pen is a little complex:
                    ;fill-pen 10x10 0 90 0 1 1 0.0.0 255.0.0 255.0.255
                    fill-pen radial 20x20 5 55 5 5 10 0.0.0 55.0.5 55.0.5
                    ; the draw element
                    box      ; another box drawn as an effect
                    15       ; size of rounding in pixels
                    0x0      ; upper left corner
                    278x170  ; lower right corner
                ]
            ]
        ]

    Read the article

  • Change Powerpoint chart data with .NET

    - by mc6688
    I have a PowerPoint template that contains 1 slide, and on that slide is a chart. I'd like to be able to manipulate that chart's data using .NET. So far I have code that:
    1. Unzips the PowerPoint file.
    2. Unzips the embedded Excel file (ppt\embeddings\Microsoft_Office_Excel_Worksheet1.xlsx).
    3. Successfully manipulates the data in the Excel sheet and zips it back up.
    4. Opens and manipulates ppt\charts\chart1.xml.
    5. Zips up the PowerPoint and delivers it to the user.
    The result of this is a PowerPoint file that shows a blank chart. But when I click on the chart and go to edit data, it updates the data and shows the correct chart. I believe my problem is with the chart1.xml that I am generating. I have compared my generated version with a version created by PowerPoint and they are almost identical. The only differences are in the values for <c:crossAx> and <c:axId>. There are also some rounding differences in the data, but I do not feel like that would result in a blank chart. Is there another file that I need to edit? Does anyone have any ideas as to what else I should try to get this working?
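
    Not an answer to the axId question, but since those were the fields that differed, it is worth checking that the <c:axId> values referenced from the plot area match the ids declared on the <c:catAx>/<c:valAx> elements; mismatched axis ids are a plausible cause of a chart that stays blank until Edit Data forces a refresh. For reference, here is the zip round-trip sketched in Python (file names and the edit are hypothetical; a real implementation would rewrite the XML with a parser):

        import zipfile

        SRC, DST = "template.pptx", "out.pptx"  # hypothetical paths

        with zipfile.ZipFile(SRC) as zin, zipfile.ZipFile(DST, "w", zipfile.ZIP_DEFLATED) as zout:
            for item in zin.infolist():
                data = zin.read(item.filename)
                if item.filename == "ppt/charts/chart1.xml":
                    # Cached series values live in <c:numCache>/<c:pt>/<c:v> elements.
                    data = data.replace(b"<c:v>1.0</c:v>", b"<c:v>2.0</c:v>")  # illustrative edit
                zout.writestr(item, data)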

    Read the article

  • OpenGL fast texture drawing with vertex buffer objects. Is this the way to do it?

    - by Matthew Mitchell
    Hello. I am making a 2D game with OpenGL. I would like to speed up my texture drawing by using VBOs; currently I am using immediate mode. I generate my own coordinates when I rotate and scale a texture. I also have the functionality of rounding the corners of a texture, using the polygon primitive to draw those. I was thinking: would it be fastest to make a VBO with vertices for the sides of the texture, with no offset included, so I can then use glViewport, glScale (or glTranslate? What is the difference, and which is most suitable here?) and glRotate to move the drawing position for my texture? Then I could use the same VBO, unchanged, to draw the texture each time, and would only need to change the VBO when I add coordinates for the rounded corners. Is that the best way to do this? What things should I look out for while doing it? Is it really fastest to use GL_TRIANGLES instead of GL_QUADS on modern graphics cards? Thank you for any answer.
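
    For reference, a common pattern along those lines, sketched with PyOpenGL names: keep one unit-quad VBO bound and place each sprite with the matrix stack. (glViewport maps the scene to the window and is not meant for per-sprite positioning; glTranslate moves the quad while glScale resizes it, and the translate-rotate-scale order below applies scale first, then rotation, then position.)

        from OpenGL.GL import (GL_TRIANGLES, glDrawArrays, glPopMatrix,
                               glPushMatrix, glRotatef, glScalef, glTranslatef)

        def draw_sprite(x, y, w, h, angle_deg):
            """Draw the currently bound unit-quad VBO as a w-by-h sprite at (x, y), rotated."""
            glPushMatrix()
            glTranslatef(x, y, 0.0)           # position the sprite
            glRotatef(angle_deg, 0, 0, 1)     # rotate about the sprite's own origin
            glScalef(w, h, 1.0)               # stretch the unit quad to sprite size
            glDrawArrays(GL_TRIANGLES, 0, 6)  # two triangles for the quad
            glPopMatrix()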

    Read the article

  • How to check if a number is a power of 2

    - by configurator
    Today I needed a simple algorithm for checking if a number is a power of 2. The algorithm needs to be:
    · Simple
    · Correct for any ulong value
    I came up with this simple algorithm:

        private bool IsPowerOfTwo(ulong number)
        {
            if (number == 0)
                return false;

            // This loop shifts through the powers of 2; the value becomes 0 after the
            // last shift (from binary 1000...0000 to 0000...0000), so the loop
            // condition breaks out.
            for (ulong power = 1; power > 0; power = power << 1)
            {
                if (power == number)
                    return true;
                if (power > number)
                    return false;
            }
            return false;
        }

    But then I thought: how about checking if log2(x) is an exactly round number? But when I checked for 2^63+1, Math.Log returned exactly 63 because of rounding. So I checked whether 2 to the power 63 is equal to the original number - and it is, because the calculation is done in doubles and not in exact numbers:

        private bool IsPowerOfTwo_2(ulong number)
        {
            double log = Math.Log(number, 2);
            double pow = Math.Pow(2, Math.Round(log));
            return pow == number;
        }

    This returned true for the given wrong value: 9223372036854775809. Does anyone have any suggestion for a better algorithm?
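
    The classic constant-time answer is the bit trick: a power of two has exactly one bit set, so clearing the lowest set bit, n & (n - 1), must give zero. In C# that is number != 0 && (number & (number - 1)) == 0; demonstrated here in Python, where integers are exact at any size:

        def is_power_of_two(n):
            return n > 0 and (n & (n - 1)) == 0

        print(is_power_of_two(9223372036854775808))  # 2**63 -> True
        print(is_power_of_two(9223372036854775809))  # 2**63 + 1 -> False, no double rounding involved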

    Read the article

  • Why do GLSL's arithmetic functions yield such different results on the iPad than on the simulator?

    - by cheeesus
    I'm currently chasing some bugs in my OpenGL ES 2.0 fragment shader code, which is running on iOS devices. The code runs fine in the simulator, but on the iPad it has huge problems, and some of the calculations yield vastly different results; for example, I got 0.0 on the iPad and 4013.17 in the simulator, so I'm not talking about small differences which could be the result of rounding errors. One of the things I noticed is that, on the iPad, float1 = pow(float2, 2.0); can yield results which are very different from the results of float1 = float2 * float2; Specifically, when using pow(x, 2.0) on a variable containing a larger negative number like -8, it seemed to return a value which satisfied the condition if (powResult <= 0.0). Also, the result of both operations (pow(x, 2.0) as well as x*x) yields different results in the simulator than on the iPad. The floats used are mediump, but I get the same behavior with highp. Is there a simple explanation for those differences? I'm narrowing the problem down, but it takes so much time, so maybe someone can help me here with a simple explanation.
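
    One plausible explanation for the pow case: the GLSL spec leaves pow(x, y) undefined for x < 0, because GPUs typically lower it to exp2(y * log2(x)), and log2 of a negative number does not exist; the simulator runs the shader on the host CPU, which may compute it differently. A Python sketch of that lowering:

        import math

        def gpu_style_pow(x, y):
            """pow as many GPUs implement it: exp2(y * log2(x)); undefined for x < 0."""
            return 2.0 ** (y * math.log2(x))

        print(gpu_style_pow(8.0, 2.0))  # 64.0
        print((-8.0) * (-8.0))          # 64.0 -- x*x is the safe way to square
        # gpu_style_pow(-8.0, 2.0)      # math domain error, mirroring the undefined GPU result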

    Read the article

  • Weird flex date issue

    - by CodeMonkey
    Flex is driving me CRAZY, and I think it's some weird gotcha with how it handles leap years and non-leap years. Here's my example. I have the dateDiff method below that finds the number of days or milliseconds between two dates. If I run the following four statements:

        dateDiff("date", new Date(2010, 0, 1), new Date(2010, 0, 31));
        dateDiff("date", new Date(2010, 1, 1), new Date(2010, 1, 28));
        dateDiff("date", new Date(2010, 2, 1), new Date(2010, 2, 31));
        dateDiff("date", new Date(2010, 3, 1), new Date(2010, 3, 30));

    you would expect to get 30, 27, 30, 29 as the number of days between the dates. The weird part is that I get 29 when comparing March 1 to March 31. Why is that? Is it something to do with February only having 28 days? If anyone has ANY input on this, it would be greatly appreciated.

        public static function dateDiff(datePart:String, startDate:Date, endDate:Date):Number {
            var _returnValue:Number = 0;
            switch (datePart) {
                case "milliseconds":
                    _returnValue = endDate.time - startDate.time;
                    break;
                case "date":
                    // TODO: Need to figure out DST problem i.e. 23 hours at DST start, 25 at end.
                    // Math.floor causes rounding down error with DST start at dayOfYear
                    _returnValue = Math.floor(dateDiff("milliseconds", startDate, endDate) / (1000 * 60 * 60 * 24));
                    break;
            }
            return _returnValue;
        }
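
    The leap year is likely a red herring; the TODO comment in the code already names the culprit. Assuming a US time zone, daylight saving time began on March 14 in 2010, so March 1 to March 31 in local time is only 29 days and 23 hours, and Math.floor drops that to 29. Rounding instead of flooring absorbs the missing hour; the arithmetic, checked in Python:

        ms = (30 * 24 - 1) * 3600 * 1000   # 29 days 23 hours, in milliseconds
        print(ms // 86_400_000)            # 29 -- what Math.floor produces
        print(round(ms / 86_400_000))      # 30 -- rounding tolerates the DST hour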

    Read the article

  • Retain numerical precision in an R data frame?

    - by David
    When I create a data frame from numeric vectors, R seems to truncate the value below the precision that I require in my analysis: data.frame(x=0.99999996) returns 1 (see update 1). I am stuck when fitting spline(x, y) and two of the x values are set to 1 due to rounding while y changes. I could hack around this, but I would prefer to use a standard solution if available. Example: here is an example data set:

        d <- data.frame(x = c(0.668732936336141, 0.95351462456867, 0.994620622127435,
                              0.999602102672081, 0.999987126195509, 0.999999955814133,
                              0.999999999999966),
                        y = c(38.3026509783688, 11.5895099585560, 10.0443344234229,
                              9.86152339768516, 9.84461434575695, 9.81648333804257,
                              9.83306725758297))

    The following solution works, but I would prefer something that is less subjective:

        plot(d$x, d$y, ylim=c(0,50))
        lines(spline(d$x, d$y), col='grey')                      # bad fit
        lines(spline(d[-c(4:6),]$x, d[-c(4:6),]$y), col='red')   # reasonable fit

    Update 1: Since posting this question, I realized that R will print 1 even though the data frame still contains the original value; e.g. dput(data.frame(x=0.99999999996)) returns

        structure(list(x = 0.99999999996), .Names = "x", row.names = c(NA, -1L), class = "data.frame")

    Update 2: After using dput to post this example data set, and with some pointers from Dirk, I can see that the problem is not in the truncation of the x values but in the limits of the numerical errors in the model that I have used to calculate y. This justifies dropping a few of the equivalent data points (as in the example red line).

    Read the article

  • How to make this in Python

    - by user2980882
    The number reduction game. Rules of the game:
    · The first player to write a 0 wins.
    · To start the game, Player 1 picks any whole number greater than 1, say 18.
    · The players take turns reducing the number by either:
        o Subtracting 1 from the number his/her opponent just wrote, OR
        o Halving the number his/her opponent just wrote, rounding down if necessary.
    Write a Python program that lets two players play the number reduction game. Your program should:
    1. Ask Player 1 to enter the starting number.
    2. Use a while-loop to allow the players to take turns reducing the number until someone wins.
    3. Each time a player enters a positive number (not 0), inform the other player what his/her choices are and ask him/her to enter the next number.
    4. Declare the winner when someone enters 0.
    Example session:
        Player 1, enter a number greater than 1: 16
        Player 2, your choices are 15 or 8: 15
        Player 1, your choices are 14 or 7: 7
        Player 2, your choices are 6 or 3: 3
        Player 1, your choices are 2 or 1: 2
        Player 2, your choices are 1 or 1: 1
        Player 1, your choices are 0 or 0: 0
        Player 1 wins
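
    A minimal sketch matching the example session (input validation left out for brevity):

        def play():
            n = int(input("Player 1, enter a number greater than 1: "))
            player = 2
            while True:
                a, b = n - 1, n // 2  # subtract 1, or halve rounding down
                n = int(input(f"Player {player}, your choices are {a} or {b}: "))
                if n == 0:
                    print(f"Player {player} wins")
                    return
                player = 2 if player == 1 else 1

        play()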

    Read the article

  • Same source, multiple targets with different resources (Visual Studio .Net 2008)

    - by Mike Bell
    A set of software products differ only by their resource strings, binary resources, and by the strings/graphics/product keys used by their Visual Studio setup projects. What is the best way to create, organize, and maintain them? i.e. All the products essentially consist of the same core functionality customized by graphics, strings, and other resource data to form each product. Imagine you are creating a set of products like "Excel for Bankers", "Excel for Gardeners", "Excel for CEOs", etc. Each product has the same functionality, but differs in name, graphics, help files, included templates, etc. The environment in which these are being built is: vanilla Windows.Forms / Visual Studio 2008 / C# / .Net. The ideal solution would be easy to maintain. e.g. If I introduce a new string / new resource, projects I haven't added the resource to should fail at compile time, not run time. (And subsequent localization of the products should also be feasible.) Hopefully I've missed the blindingly obvious and easy way of doing all this. What is it?

    ============ Clarification(s) ================
    By "product" I mean the package of software that gets installed by the installer and sold to the end user. Currently I have one solution, consisting of multiple projects (including a setup project), which builds a set of assemblies and creates a single installer. What I need to produce are multiple products/installers, all with similar functionality, which are built from the same set of assemblies but differ in the set of resources used by one of the assemblies. What's the best way of doing this?

    ------------ The 95% Solution -----------------
    Based upon Daminen_the_unbeliever's answer, a resource file per configuration can be achieved as follows:
    1. Create a class library project ("Satellite").
    2. Delete the default .cs file and add a folder ("Default").
    3. Create a resource file in the folder, "MyResources".
    4. Properties - set CustomToolNamespace to something appropriate (e.g. "XXX").
    5. Make sure the access modifier for the resources is "Public".
    6. Add the resources.
    7. Edit the source code. Refer to the resources in your code as XXX.MyResources.ResourceName.
    8. Create configurations for each product variant ("ConfigN").
    9. For each product variant, create a folder ("VariantN").
    10. Copy and paste the MyResources file into each VariantN folder.
    11. Unload the "Satellite" project, and edit the .csproj file.
    12. For each "VariantN/MyResources" <Compile> or <EmbeddedResource> tag, add a Condition="'$(Configuration)' == 'ConfigN'" attribute (see the sketch below).
    13. Save, reload the .csproj, and you're done...
    This creates a per-configuration resource file, which can (presumably) be further localized. Compile error messages are produced for any configuration where a resource is missing. The resource files can be localized using the standard method (create a second resources file (MyResources.fr.resx) and edit .csproj as before). The reason this is a 95% solution is that resources used to initialize forms (e.g. form titles, button texts) can't be easily handled in the same manner - the easiest approach seems to be to overwrite these with values from the satellite assembly.
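
    For illustration, the conditioned items from steps 11-12 might look like this in the .csproj (folder, file, and configuration names are the hypothetical ones from the steps above):

        <EmbeddedResource Include="Variant1\MyResources.resx" Condition="'$(Configuration)' == 'Config1'" />
        <EmbeddedResource Include="Variant2\MyResources.resx" Condition="'$(Configuration)' == 'Config2'" />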

    Read the article

  • Different Flavors of Leases Back On

    - by Theresa Hickman
    Given the continued interest regarding the proposed changes to lease accounting, I decided to write another entry on this controversial topic with colorful commentary from our resident accounting expert, Seamus Moran.

    Background (A History Lesson)
    Back in 1976, the FASB issued FAS 13, "Accounting for Leases," which permitted leases to be either operating leases or capital (finance) leases. In substance, operating leases are a form of off-balance-sheet financing. According to Seamus, operating leases date back to the launch of the Boeing 707 in the 1950s. Because the aircraft was so much more expensive than previous aircraft, the industry came up with the operating lease concept to accommodate the jetliners that dominated air transport. How it worked was that the bank would buy the plane and lease it to the airline. Because the bank never controlled or flew the plane, it never placed the asset on its balance sheet, and because the airline never owned the plane, it didn't place it on its balance sheet either. The airline simply treated the monthly lease payments as rental expenses on the P&L.

    August 2010 Original Lease Accounting Changes
    In August 2010, FASB and IASB decided to overhaul lease accounting as part of their joint commitment "to insure that investors and other users of financial statements are provided useful, transparent, and complete information about leasing transactions in the financial statements." Some say that the current lease accounting standards are broken because they keep assets off the balance sheet, hidden from investors' view. The original proposal abolished operating leases and permitted only capital leases: all leases would be recorded on the balance sheet as assets and liabilities. The asset side would reflect the right to use the asset for the leased term, and the liability side would reflect the obligation to make lease payments.

    Why Companies Were Freaking Out
    According to the SEC, the financial impact of the aforementioned lease changes was estimated to add more than $1.3 trillion of operating lease obligations to corporate balance sheets. Many companies in various industries, especially retail, are concerned because the changes are significant and will impact existing leases, with no grandfather clause for existing operating leases. Of course, the banks and airlines I mentioned earlier really hate this because neither wants to report the airplane (now costing around $60M) as an asset. Regular companies were concerned that they would have to report routine short-term leases of real estate or equipment as fixed assets, even though these were really just longer-term rentals. One company we spoke to leased roadside billboards and really did not consider them to be fixed assets in any way. Obviously, these changes would have had a profound and lasting effect on a company's financial and real estate strategies and would significantly impact its financial statements. Financial statements would show higher depreciation and interest expense with significantly higher total assets and debt. In terms of financial metrics, companies are negatively impacted: the changes would raise a company's debt-to-capital ratio to reflect the higher debt compared to equity, hurt return on assets because companies would appear more asset-intensive, and decrease EPS, lowering shareholder ROI.

    Feb. 2011 Recent Update
    The comment period on leases closed in December 2010. The FASB and the IASB have met several times since then and published their initial responses to the input they received from the various interested parties. They are "redeliberating" the principles involved in lease accounting. Some of the issues they are looking at include:
    · The core definition of a lease. This will articulate principles on what is a lease and what is "not-a-lease." One theory or supposition is that they might define a lease as the transfer of certain, but not all, major ownership attributes for a certain period of time. So a year's lease of an aircraft might be a "lease," but a year's lease of half a floor in an office building would be "not-a-lease." The ownership attributes transferred from the core owner to the user are different: the airline must maintain, paint, and do whatever it needs to do on the aircraft, whereas the office renter will have strictly limited rights with respect to the rented space.
    · The differences between a lease contract and a service contract. Even if they are called "leases" for the purposes of commercial law, a service contract might not be accounted for as a lease.
    · The accounting to be done by the lessee. They would define when the bank or landlord would retain the asset on their balance sheet, and perhaps by implication, when the lessor would not need to include the asset on theirs. So if the finance house keeps the airplane or office on its balance sheet, the tenant doesn't need to. I'm not sure that I can draw the opposite conclusion, where the finance house doesn't report but the tenant must.
    · The difference, if any, between a financing lease and other leases, and the implications for the accounting.
    · The present value calculation when renewable terms exist. They have reduced the circumstances in which one must look at the renewable terms of a lease in calculating the present value. In most circumstances, you will use the lease term rather than the potential renewable term.
    Their latest discussion took place this past week, and its contents were not available at the time of writing this entry. For more details, the results of the discussions are posted on both the FASB and the IASB websites.

    Implied Software Changes
    Whatever the final rules turn out to be, all ERP systems, such as Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards, and Oracle Hyperion, will need to change their software to accommodate the new rules. The following lists some changes that might have to be made to accounting software, depending on what the final standards will be in June 2011:
    · Lease tracking may require modifications, with tracking of additional lease details that might require a centralized repository to maintain.
    · Accounting may need to be modified, as there are many changes to how capital leases and the new "other than finance" leases are accounted for on both the lessee and lessor side. For example, valuation, amortization, and disclosure will be considerably different, requiring different types of data to be captured.
    · Companies may need to modify their chart of accounts depending on how they want to track leases, which could then impact financial reporting and consolidation.
    · Business processes may require changes, which could then impact internal controls.
    · Software applications may need to perform more advanced computations on leases.
    · Reports and KPIs may need to reflect new operating metrics.

    Hold Onto Your Seats
    Before you redo all your lease agreements and call your software vendors asking when the changes to the software will be made, remember that the rules are not finalized yet and, from appearances, will not reflect the proposals in the exposure draft. Not only are there objections to putting the operating lease assets on anyone's balance sheet, there are lots of objections to the subjectivity and the data required for the valuation. According to Seamus, there is huge opposition from New York bankers, the airlines, the EU, the Communist Party of China (since it impacts their exporting business), and Republicans (hearing complaints from small and large businesses). Even if everyone can agree on the proposed changes, 2013 might be the earliest that companies would need to change how they report leases. The Boards will finish their deliberations in April, May, or June 2011. As we've seen with other exposure drafts, if the changes are minor and the principles meet the general acceptance consensus criteria, the standard could be finalized at that time. However, if substantial changes are made, a fresh exposure draft, comment period, and review period might be involved, too. Seamus added an interesting perspective: even if the proposed changes do pass, don't you think our customers, such as Boeing, GE Capital, United Airlines, etc., will be clever enough to come up with a new kind of financing arrangement that complies with the new accounting? How about the large retail customers, such as Best Buy and Macerich? Don't you think they might simply cut deals around retail locations with new contracts that prevent their leases from being capital leases? Instead of blindly adapting the software to meet the principles outlined in the final standard, our software needs to accommodate how businesses will respond to the new rules. We cannot know our customers' responses until the rules are finalized. Oracle is aware of the potential changes and is staying abreast of the developments through our domain expertise staff, our relationships with customers, our market awareness, and, of course, our relationships with the Big 4. This is part of our normal process with respect to worldwide regulatory compliance. Oracle products have been IFRS and GAAP compliant for years, and we will continue to maintain those standards going forward.

    Read the article

  • When is my View too smart?

    - by Kyle Burns
    In this posting, I will discuss the motivation behind keeping View code as thin as possible when using patterns such as MVC, MVVM, and MVP. Once the motivation is identified, I will examine some ways to determine whether a View contains logic that belongs in another part of the application. While the concepts that I will discuss are applicable to most any pattern which favors a thin View, any concrete examples that I present will center on ASP.NET MVC. Design patterns that include a Model, a View, and other components such as a Controller, ViewModel, or Presenter are not new to application development. These patterns have, in fact, been around since the early days of building applications with graphical interfaces. The reason that these patterns emerged is simple: the code running closest to the user tends to be littered with logic and library calls that center around implementation details of showing and manipulating user interface widgets, and when this type of code is interspersed with application domain logic, it becomes difficult to understand and much more difficult to adequately test. By removing domain logic from the View, we ensure that the View has a single responsibility of drawing the screen, which, in turn, makes our application easier to understand and maintain. I was recently asked to take a look at an ASP.NET MVC View because the developer reviewing it thought that it possibly had too much going on in the view. I looked at the .CSHTML file, and the first thing that occurred to me was that it began with 40 lines of code declaring member variables and performing the necessary calculations to populate these variables, which were later either output directly to the page or used to control some conditional rendering action (such as adding a class name to an HTML element or not rendering another element at all). This exhibited both of what I consider the primary heuristics (or code smells) indicating that the View is too smart:
    · Member variables – in general, variables in View code are an indication that the Model to which the View is being bound is not sufficient for the needs of the View and that the View has had to augment that Model. Notable exceptions to this guideline include variables used to hold information specifically related to rendering (such as a dynamically determined CSS class name or the depth within a recursive structure for indentation purposes) and variables used to facilitate looping through collections while binding.
    · Arithmetic – as with member variables, the presence of arithmetic operators within View code is an indication that the Model servicing the View is insufficient for its needs. For example, if the Model represents a line item in a sales order, it might seem perfectly natural to "normalize" the Model by storing the quantity and unit price in the Model and multiplying these within the View to show the line total. While this does seem natural, it introduces a business rule to the View code and makes it impossible to test that the rounding of the result meets the requirement of the business without executing the View. Within View code, arithmetic should only be used for activities such as incrementing loop counters and calculating element widths.
    In addition to the two characteristics of a "Smart View" that I've discussed already, this View also exhibited another heuristic that commonly indicates to me the need to refactor a View and make it a bit less smart. That characteristic is the existence of Boolean logic that either does not work directly with properties of the Model or works with too many properties of the Model. Consider the following code, and consider how logic that does not work directly with properties of the Model is just another form of the "member variable" heuristic covered earlier:

        @if(DateTime.Now.Hour < 12)
        {
            <div>Good Morning!</div>
        }
        else
        {
            <div>Greetings</div>
        }

    This code performs business logic to determine whether it is morning. A possible refactoring would be to add an IsMorning property to the Model, but in this particular case there is enough similarity between the branches that the entire branching structure could be collapsed by adding a Greeting property to the Model and using it similarly to the following:

        <div>@Model.Greeting</div>

    Now let's look at some complex logic around multiple Model properties:

        @if (Model.PageNumber + Model.NumbersToDisplay == Model.PageCount
                || (Model.PageCount != Model.CurrentPage
                    && !Model.DisplayValues.Contains(Model.PageCount)))
        {
            <div>There's more to see!</div>
        }

    In this scenario, not only is the View code difficult to read (you shouldn't have to play "human compiler" to determine the purpose of the code), but it is also complex enough to be at risk for logical errors that cannot be detected without executing the View. Conditional logic that requires more than a single logical operator should be looked at more closely to determine whether the condition should be evaluated elsewhere and exposed as a single property of the Model. Moving the logic above outside of the View and exposing a new Model property would simplify the View code to:

        @if(Model.HasMoreToSee)
        {
            <div>There's more to see!</div>
        }

    In this posting I have briefly discussed some of the more prominent heuristics that indicate a need to push code from the View into other pieces of the application. You should now be able to recognize these symptoms when building or maintaining Views (or the Models that support them) in your applications.

    Read the article
