Search Results

Search found 1065 results on 43 pages for 'calculation'.


  • How to display a border-bottom only if table cells are not empty (CSS)

    - by Polarpro
Hey there, I've got a FileMaker calculation that generates an HTML page with several tables. If the calculation finds values for certain fields, the result would be <table> <tr><td>Example value 1</td></tr> <tr><td>Example value 2</td></tr> ... </table> If the calculation finds no values to be displayed, the result would simply be <table> </table> In the first case, I want the table to display a border at the bottom (or any other horizontal line); in the second case, I don't want to display a border at the bottom. I cannot find a way to get this done using CSS... Thanks in advance :-)
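    One CSS-only approach (a sketch, assuming the empty case really contains no <tr> elements at all): attach the border to the cells of the last row, so a table without rows never renders a line.

        /* only tables that actually contain rows get a bottom border */
        table tr:last-child td {
            border-bottom: 1px solid #333;
        }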


  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
Part II – Solving Big Problems with Oracle R Enterprise

    In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger because the amount of data is much bigger - so much bigger that our approach in the previous post won't scale to meet the real-world needs. From our previous post, here are the challenges we need to conquer:

    - The actual data that needs to be used lives in a database, not in a spreadsheet
    - The actual data is much, much bigger - too big to fit into the normal R memory space and too big to want to move across the network
    - The overall process needs to run fast - much faster than a single processor
    - The actual data needs to be kept secured - another reason not to want to move it from the database and across the network
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes

    In this post, we will show how we moved from the sample data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof-of-concept.

    Getting started with the Database

    At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise - we had sample data but we did not have the full customer data yet, so our database was empty. But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create(). The code also shows how we can access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA. If we go to SQL*Plus, we can also check out our new IRR_DATA table. At this point, we have our sample data loaded in the database as a normal Oracle table called IRR_DATA, so we then proceeded to test our R function against database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, then called our IRR function. This worked. No SQL coding required!

    Going from Crawling to Walking

    Now that we have shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts. In other words, we wanted to implement the split-apply-combine technique we discussed in the first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1 Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)
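    The original post showed these calls as screenshots; a rough sketch of what the code likely looked like (the table, data, and function names come from the text, everything else is assumed):

        # load the sample data into a database table and use it like a data.frame
        ore.create(SimpleMWRRData, table = "IRR_DATA")
        dim(IRR_DATA)                      # transparently queries the table

        # first test: one account, pulled into local R memory (account id assumed)
        acct <- ore.pull(IRR_DATA[IRR_DATA$ACCOUNT == "ACCT0001", ])
        SimpleMWRR(acct)

        # split by ACCOUNT in the database, apply SimpleMWRR per group, combine
        results <- ore.groupApply(IRR_DATA, IRR_DATA$ACCOUNT,
                                  function(dat) SimpleMWRR(dat))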
The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running. What actually happens is that ore.groupApply uses the Oracle database to perform the work. The Oracle database is what actually splits the IRR_DATA table by ACCOUNT. Then the Oracle database takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function. Finally, the Oracle database combines all the individual results from the calls to the R function.

    This is significant because now the embedded R engine only needs to deal with the data for a single account at a time. Whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem, since we only need to fit the data for a single account in R memory (not the data for all of the accounts). Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA never has to leave the database server. This is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once.

    From Walking to Running

    ore.groupApply is rather nice, but it still has the drawback that I run it from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE: Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is the way we actually did these calculations in the customer proof. Oracle R Enterprise provides a SQL equivalent to ore.groupApply, which it refers to as "rqGroupEval". To use rqGroupEval via SQL, a bit of simple setup is needed: the Oracle database needs to know the structure of the input table and the grouping column, which we are able to define using the database's pipelined table function mechanisms. Here is the setup script: At this point, our initial setup of rqGroupEval is done for the IRR_DATA table. The next step is to define our R function to the database. We do that via a call to ORE's rqScriptCreate. Now we can test it. The SQL you use to run rqGroupEval uses the Oracle database pipelined table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input; you can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional inputs to the R function. The third argument is the text of a dummy select statement, which the database uses to identify the columns and datatypes to expect the R function to return. The fourth argument is the column of the input table to split/group by.
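    The query itself was shown as a screenshot in the original post. A rough sketch of the pattern it describes (all literals assumed; the fifth argument is explained next):

        select *
          from table(irr_dataGroupEval(
                 cursor(select * from IRR_DATA),            -- input cursor
                 cursor(select 1 "ore.connect" from dual),  -- extra parameters for R
                 'select cast(''a'' as varchar2(20)) ACCOUNT, 1 IRR from dual',
                                                            -- dummy select: output shape
                 'ACCOUNT',                                 -- split/group-by column
                 'myIRRscript'));                           -- registered R script name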
The final argument is the name of the R function as you defined it when you called rqScriptCreate().

    The Real-World Results

    In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions. For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (and 96 threads if you count hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features. Let's look at the SQL used in the customer proof: notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function. That is all we need to do to enable Oracle to use parallel R engines. Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran it during the proof of concept (you may need to right-click on these images and view them full-screen to see everything). From the above, you can notice a few things (numbers 1 through 5 correspond with the highlighted numbers on the images):

    - The SQL completed in 110 seconds (1.8 minutes)
    - We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation)
    - We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation)
    - We ran with 72 degrees of parallelism spread across 4 database servers
    - Most of our 110 seconds was spent in the "External Procedure call" event
    - On average, we performed 8,200 executions of our R function per second (911k accounts / 110s)
    - On average, each execution was passed 110 rows of data (103m detail rows / 911k accounts)
    - On average, we did 41,000 single-time-period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods)
    - On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s)

    R + Oracle R Enterprise: Best of R + Best of Oracle Database

    This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.
In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world. In that post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data easily between Hadoop, R, and the Oracle Database.


  • EPM 11.1.2.2.000 release - considerations

    - by THE
(guest article by Nancy) Please be aware that with the upcoming release of EPM v11.1.2.2.000, it is highly recommended you first read the "ORACLE® ENTERPRISE PERFORMANCE MANAGEMENT SYSTEM 11.1.2.2.000 Readme" prior to installing this release. We want to highlight the "Installation Information" section, which includes the following late-breaking information:

    Business Rules Migration to Calculation Manager: Oracle Hyperion Calculation Manager has replaced Oracle Hyperion Business Rules as the mechanism for designing and managing business rules; therefore, Business Rules is no longer released with EPM System Release 11.1.2.2. If you are applying 11.1.2.2 as a maintenance release, or upgrading to Release 11.1.2.2, and have been using Business Rules in an earlier release, you must migrate to Calculation Manager rules in Release 11.1.2.2. (See the Oracle Enterprise Performance Management System Installation and Configuration Guide.)

    Planning User Interface Enhancements: This release of Planning includes a large number of user interface enhancements, as described in Oracle Hyperion Planning New Features. To optimize performance with these new features, you must implement the following recommended configuration.

    - Server: 64-bit, 16 GB physical RAM
    - Client: Optimized for Internet Explorer 9 and Firefox 10 or higher
    - Client-to-Server Connectivity: High-speed internet connection or VPN connection between client and server, with client-to-server ping time < 150 milliseconds for best performance

    The new, improved Planning user interface requires efficient browsers to handle the interactivity provided through Web 2.0-like functionality. In our testing, Internet Explorer 7, Internet Explorer 8, and Firefox 3.x are not sufficient to handle such interactivity, and the responsiveness in these browser versions is not as fast as the user interface in previous releases of Planning. For this reason, we strongly recommend that you upgrade your browser to Internet Explorer 9 or Firefox 10 to get responsiveness similar to what you experienced in previous releases. In some instances, the response times in Internet Explorer 7, Internet Explorer 8 and Firefox 3.x could be acceptable, so we suggest that you adopt the new user interface only after you conduct an end-user response test and are satisfied with the results for these browser versions. Note that it is still possible to leverage the old user interface and features from Planning Release 11.1.2.1. (For more information, see "Using the Planning Release 11.1.2.1 User Interface and Features" in the Oracle Hyperion Planning Administrator's Guide.)

    IBM HTTP Server and IIS Default Ports: Both IBM HTTP Server and IIS Web Server use 80 as their default port. If you are using WebSphere, you must change one of these defaults so that there is no port conflict. If you have further questions, please use the Planning or Essbase MOS Community.


  • Can't save data for a member in a data form

    - by RahulS
Implied sharing is an old thing; everyone knows the reasons for it and the solutions. Still, a little theory first: with Essbase implied sharing, some members are shared even if you do not explicitly set them as shared. These members are implied shared members. When an implied share relationship is created, each implied member assumes the other member's value. Essbase assumes (or implies) a shared member relationship in these situations:

    1. A parent has only one child
    2. A parent has only one child that consolidates to the parent

    In a Planning form that contains members with an implied sharing relationship, when a value is added for the parent, the child assumes the same value after the form is saved. Likewise, if a value is added for the child, the parent usually assumes the same value after the form is saved. For example, when a calculation script or load rule populates an implied share member, the other implied share member assumes the value of the member populated by the calculation script or load rule. The last value calculated or imported takes precedence. The result is the same whether you refer to the parent or the child as a variable in a calculation script. For more information have a look at: http://docs.oracle.com/cd/E17236_01/epm.1112/hp_admin_11122/ch14s11.html

    Now the issue we are going to talk about: we lose data on save even when the parent is dynamic calc and has a single child. A dynamic calc parent with a single child: if we design the form with the following selection, in the data form we will find the parent below the member, and this is by design - whenever you make a selection using commands to select all the members below a parent, the children will always appear before the parent. Let's try to enter data and save it. Now, try to change the way we selected members. Here we go. The question again is why this behavior occurs:

    1. Data from a Planning data form passes to Essbase row by row.
    2. Because in the data form the child member appears before the parent,
    3. data first goes to Essbase for the child (SingleStoreChild).
    4. Then, when Planning passes the data for the parent, there was #Missing or no data,
    5. which overwrites the child's data to #Missing. PS: As we know, dynamic calc members are calculated on the fly and are not allocated any memory in Essbase; here the parent was dynamic calc and was pointing to the same memory as the child in the background, so when Planning passed data to Essbase for the second row, it updated the child with missing data. (A little confusing; let me know if you need more explanation.)
    6. As one of the solutions, just change the order of appearance of the parent and child.

    Cheers..!!! Rahul S. https://www.facebook.com/pages/HyperionPlanning/117320818374228


  • Announcement - Advisor Webcast HFM - Calc Manager

    - by THE
Stay tuned for next week's Advisor Webcast. Greg and Tanya are going to run a 45-minute session on HFM and Calc Manager: Advisor Webcast: New Features and Improvements in HFM and Calculation Manager 11.1.2.2.300, on Wednesday, 14 Nov 2012, 16:00 CET. Per the registration Note 1494304.1: this webcast is intended for people responsible for the operation and maintenance of an Oracle Hyperion Financial Management application. This overview of new features and changes in HFM and Calc Manager, as well as upgrade paths and certified products, is intended to support the decision process for a product upgrade. TOPICS WILL INCLUDE:

    - New features and enhancements in Hyperion Financial Management
    - Adding custom dimensions to existing applications
    - Enhancements in the Smart View, ICT, Equity Pickup and Taskflows modules
    - Changes in the user interface
    - Enhanced Copy Application Utility
    - New in Calculation Manager
    - Financial Management script-to-graphical conversion


  • Advisor Webcast: Hyperion Planning: Migrating Business Rules to Calc Manager

    - by inowodwo
As you may be aware, EPM 11.1.2.1 was the terminal release of Hyperion Business Rules (see Hyperion Business Rules Statement of Direction, Doc ID 1448421.1). This webcast aims to help you migrate from Business Rules to Calc Manager. Date: January 10, 2013 at 3:00 pm GMT (London) / 4:00 pm (Berlin, GMT+01:00) / 7:00 am Pacific / 8:00 am Mountain / 10:00 am Eastern. TOPICS WILL INCLUDE:

    - Calculation Manager in 11.1.2.2
    - Migration considerations
    - How to migrate HBR rules from 11.1.2.1 to Calculation Manager 11.1.2.2
    - How to migrate the security of the Business Rules
    - How to approach troubleshooting and known issues with migration

    For registration details please go to Migrating Business Rules to Calc Manager (Doc ID 1506296.1). Alternatively, to view all upcoming webcasts go to Advisor Webcasts: Current Schedule and Archived Recordings (Doc ID 740966.1) and choose Oracle Business Analytics from the drop-down menu.


  • Force netsh/arp binding multicast IP address with specific MAC address

    - by Olivier
I would like to set up a binding from an IP address to a MAC address using netsh. The goal is to bind an IP address which is a multicast address (224.224.x.y) to a given MAC address (which is NOT the one calculated from the multicast IP address: 01:00:5e:X:Y:Z). It used to work with Windows XP (was it a bug that happened to be "perfect" for my needs?), but Windows 7/8/8.1 forces the MAC address to the calculated one instead of letting me put what I want! (http://nettools.aqwnet.com/ipmaccalc/ipmaccalc.php shows the MAC address calculation for a multicast IP address.) Thus I'm doing the following. Listing existing mappings:

        netsh.exe interface ip show neighbors "Ethernet"
        Interface 12 : Ethernet
        Internet address    Physical address    Type
        224.0.0.22          01-00-5e-XX-YY-ZZ   static

    Then adding my interface mapping manually:

        netsh.exe interface ip add neighbors "Ethernet" "224.xxx.yyy.zzz" "00-80-EE-UU-VV-WW"

    Finally, listing my mappings again:

        netsh.exe interface ip show neighbors "Ethernet"
        Interface 12 : Ethernet
        Internet address    Physical address    Type
        224.0.0.22          01-00-5e-XX-YY-ZZ   static
        224.xxx.yyy.zzz     01-00-5e-UU-VV-WW   static

    As you can see, the MAC address of the second entry (the one I just made) has been replaced by the calculated MAC address corresponding to my IP address. The calculation is done as follows (values shown in hex): UU=(xxx-128), VV=yyy, WW=zzz. But I don't want that behavior. My IP address and MAC address cannot be changed, and I must associate them exactly as given. Does anybody know how to disable MAC address substitution/calculation in netsh? Thanks, Olivier.
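    (For reference, the substitution Olivier describes is the standard RFC 1112 mapping: the low 23 bits of the multicast IP are copied into the 01-00-5e MAC prefix, which matches the UU=(xxx-128) rule above when xxx >= 128. A small sketch of that calculation:)

        # compute the multicast MAC that Windows substitutes for a multicast IP
        def multicast_mac(ip):
            o = [int(x) for x in ip.split('.')]
            return '01-00-5e-%02x-%02x-%02x' % (o[1] & 0x7f, o[2], o[3])

        print(multicast_mac('224.129.5.7'))  # 01-00-5e-01-05-07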


  • How to dynamically expand a string in C

    - by sa125
Hi - I have a function that recursively makes some calculations on a set of numbers. I want to also pretty-print the calculation in each recursion call by passing the string from the previous calculation and concatenating it with the current operation. A sample output might look like this:

        3
        (3) + 2
        ((3) + 2) / 4
        (((3) + 2) / 4) x 5
        ((((3) + 2) / 4) x 5) + 14
        ... and so on

    So basically, the second call gets 3 and appends + 2 to it, the third call gets passed (3) + 2, etc. My recursive function prototype looks like this:

        void calc_rec(int input[], int length, char * previous_string);

    I wrote 2 helper functions to help me with the operation, but they implode when I test them:

        /**********************************************************************
         * dynamically allocate and append new string to old string and return a pointer to it
         **********************************************************************/
        char * strapp(char * old, char * new) {
            // find the size of the string to allocate
            int len = sizeof(char) * (strlen(old) + strlen(new));
            // allocate a pointer to the new string
            char * out = (char*)malloc(len);
            // concat both strings and return
            sprintf(out, "%s%s", old, new);
            return out;
        }

        /**********************************************************************
         * returns a pretty math representation of the calculation op
         **********************************************************************/
        char * mathop(char * old, char operand, int num) {
            char * output, *newout;
            char fstr[50]; // random guess.. couldn't think of a better way.
            sprintf(fstr, " %c %d", operand, num);
            output = strapp(old, fstr);
            newout = (char*)malloc( 2*sizeof(char)+sizeof(output) );
            sprintf(newout, "(%s)", output);
            free(output);
            return newout;
        }

        void test_mathop() {
            int i, total = 10;
            char * first = "3";
            printf("in test_mathop\n");
            while (i < total) {
                first = mathop(first, "+", i);
                printf("%s\n", first);
                ++i;
            }
        }

    strapp() returns a pointer to newly appended strings (works), and mathop() is supposed to take the old calculation string ("(3)+2"), a char operand ('+', '-', etc.) and an int, and return a pointer to the new string, for example "((3)+2)/3". Any idea where I'm messing things up? Thanks.
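    For what it's worth, a corrected sketch of the helpers (assumptions about the intent: the null terminator needs room, mathop should receive '+' as a char rather than a string, and i must be initialized; sizeof(output) only measures the pointer, not the string):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        char *strapp(char *old, char *new) {
            // +1 leaves room for the terminating '\0' that sprintf writes
            char *out = malloc(strlen(old) + strlen(new) + 1);
            sprintf(out, "%s%s", old, new);
            return out;
        }

        char *mathop(char *old, char operand, int num) {
            char fstr[50];
            sprintf(fstr, " %c %d", operand, num);
            char *output = strapp(old, fstr);
            // 2 parentheses + '\0'; strlen, not sizeof, gives the string length
            char *newout = malloc(strlen(output) + 3);
            sprintf(newout, "(%s)", output);
            free(output);
            return newout;
        }

        void test_mathop(void) {
            int i = 0;                          // was uninitialized
            char *first = "3";
            while (i < 10) {
                first = mathop(first, '+', i);  // '+' as a char, not a string
                printf("%s\n", first);
                ++i;
            }
        }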


  • C# How to calculate with the first 10 digits of a 12-digit bill number?

    - by Sef
Hello, let's assume I have a bill number that has 12 digits: 823 45678912. My question is, how exactly do I do calculations with the first 10 digits? To verify whether a given bill number is correct, I have to perform the following calculation: (first 10 digits) % 97 = result. If the result of the calculation is the same as the last 2 digits of my bill number, then it is verified. Thanks in advance.
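    A minimal sketch of that check in C# (assuming the bill number arrives as a 12-character digit string; long is needed because 10 digits overflow int):

        static bool IsValidBillNumber(string bill)
        {
            if (bill.Length != 12) return false;
            long first10 = long.Parse(bill.Substring(0, 10));  // e.g. 8234567891
            int last2 = int.Parse(bill.Substring(10, 2));      // e.g. 12
            return first10 % 97 == last2;
        }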


  • When should a uniform be used in shader programming?

    - by Phineas
    In a vertex shader, I calculate a vector using only uniforms. Therefore, the outcome of this calculation is the same for all instantiations of the vertex shader. Should I just do this calculation on the CPU and upload it as a uniform? What if I have ten such calculations? If I upload a lot of uniforms in this way, does CPU-GPU communication ever get so slow that recomputing such values in the vertex shader is actually faster?


  • Python: Creating Dynamic Functions

    - by jigar
I have an issue where I want to create a dynamic function which will do some calculation based on values retrieved from a database. I am clear about my internal calculation, but my question is how to create the dynamic class. My structure is something like this:

        class xyz:
            def Project():
                start = 2011-01-03
            def Phase1():
                effort = '2d'
            def Phase2():
                effort = '3d'
            def Phase3():
                effort = '4d'

    Now I want to generate all those PhaseX() functions dynamically. Can anyone suggest how to achieve such a thing in Python? Waiting for a positive reply. Regards, thank you.
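    One way to do this is to build the methods in a loop and attach them with setattr (a sketch, assuming the efforts come from a database query):

        def make_phase(effort):
            def phase():
                return effort
            return phase

        class Project(object):
            start = '2011-01-03'

        # e.g. efforts as fetched from the database
        for i, effort in enumerate(['2d', '3d', '4d'], start=1):
            setattr(Project, 'Phase%d' % i, staticmethod(make_phase(effort)))

        print(Project.Phase2())  # '3d'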


  • How to check how many bits are in a byte array?

    - by newandfresh
I'm creating a download speed test, and I'm downloading an 800-megabit file to a byte[] in a memory stream with webClient.DownloadDataAsync(new Uri(link), memStreamArray); How can I check how many bits are in memStreamArray while downloading? I need this so I can do a size / time calculation to get the speed in real time. I'm planning on performing this calculation in the webClient.DownloadProgressChanged event.
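    Since a byte is always 8 bits, counting the received bytes is enough. A sketch of the calculation inside DownloadProgressChanged (startTime is an assumed field set when the download starts):

        void OnDownloadProgressChanged(object sender, DownloadProgressChangedEventArgs e)
        {
            double seconds = (DateTime.Now - startTime).TotalSeconds;
            double bits = e.BytesReceived * 8.0;       // bytes received so far, in bits
            double mbps = bits / seconds / 1000000.0;  // megabits per second
            Console.WriteLine("{0:F2} Mbit/s", mbps);
        }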


  • Performance of vector::size(): is it as fast as reading a variable?

    - by zoli2k
I have to do an extensive calculation on a big vector of integers. The vector size is not changed during the calculation, and the size of the vector is frequently accessed by the code. What is faster in general: using the vector::size() function, or using a helper constant vectorSize storing the size of the vector? I know that compilers are usually able to inline the size() function when the proper compiler flags are set; however, making a function inline is something that a compiler may do but cannot be forced.
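    A common way to sidestep the question, since the vector is never resized during the calculation (a sketch; an optimizing compiler typically reduces size() to a pointer subtraction anyway):

        #include <cstddef>
        #include <vector>

        long long sum(const std::vector<int>& v) {
            long long total = 0;
            const std::size_t n = v.size();  // hoisted once; valid while v is not resized
            for (std::size_t i = 0; i < n; ++i)
                total += v[i];
            return total;
        }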


  • The Incremental Architect's Napkin - #1 - It's about the money, stupid

    - by Ralf Westphal
Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/24/the-incremental-architectacutes-napkin---1---itacutes-about-the.aspx

    Software development is an economic endeavor. A customer is only willing to pay for value. Whatever makes a software valuable is required to become a trait of the software. We as software developers thus need to understand and then find a way to implement requirements. Whether, or to what extent, a customer really can know beforehand what's going to be valuable for him/her in the end is a topic of constant debate. Some aspects of the requirements might be less foggy than others. Sometimes the customer does not know what he/she wants. Sometimes he/she is certain to want something - but then is not happy when it's delivered. Nevertheless, requirements exist. And developers will only be paid if they deliver value. So we had better focus on doing that. Although it might sound trivial, I think it's important to state the corollary: we need to be able to trace anything we do as developers back to some requirement.

    You decide to use Go as the implementation language? Well, what's the customer's requirement this decision is linked to? You decide to use WPF as the GUI technology? What's the customer's requirement? You decide in favor of a layered architecture? What's the customer's requirement? You decide to put code in three classes instead of just one? What's the customer's requirement behind that? You decide to use MongoDB over MySql? What's the customer's requirement behind that? Etc. I'm not saying any of these decisions are wrong. I'm just saying: whatever you decide, be clear about the requirement that's driving your decision. You have to be able to answer the question: why do you think X will deliver more value to the customer than the alternatives? Customers are not interested in romantic ideals of hard-working, well-meaning, quality-focused craftsmen. They don't care how and why you work - as long as what you deliver fulfills their needs. They want to trust you to recognize this as your top priority - and then deliver. That's all.

    Fundamental aspects of requirements

    If you're like me, you're probably not used to such scrutiny. You want to be trusted as a professional developer - and decide quite a few things following your gut feeling. Or by relying on "established practices". That's ok in general and most of the time - but still… I think we should be more conscious about our decisions, which would make us more responsible, even more professional. But without further guidance it's hard to reason about many of the myriad decisions we have to make over the course of a software project. What I found helpful in this situation is structuring requirements into fundamental aspects. Instead of one large heap of requirements there are then smaller blobs, and with them it's easier to check whether a decision falls in their scope. Sure, every project has its very own requirements, but all of them belong to just three major categories, I think. Any requirement pertains to either functionality, non-functional aspects, or sustainability. For short, I call those aspects:

    - Functionality, because such requirements describe which transformations a software should offer. For example: a calculator software should be able to add and multiply real numbers. An auction website should enable you to set up an auction anytime or to find auctions to bid for.
    - Quality, because such requirements describe how functionality is supposed to work, e.g. fast or secure. For example: a calculator should be able to calculate the sinus of a value much faster than you could in your head. An auction website should accept bids from millions of users.
    - Security of Investment, because functionality and quality need not just be delivered in any way. It's important to the customer to get them quickly - and not only today but over the course of several years. This aspect introduces time into the "requirements equation".

    Security of Investment (SoI) sure is a non-functional requirement. But I think it's important not to subsume it under the Quality (Q) aspect, because SoI has quite special properties. For one, SoI for software means something completely different from what it means for hardware. If you buy hardware (a car, a hair dryer) you find it a worthwhile investment if the hardware does not change its functionality or quality over time. A car still running smoothly with hardly any rust spots after 10 years of daily usage would be a very secure investment. So for hardware (or material products, if you like), "unchangeability" (in the face of usage) is desirable. With software you want the contrary: software that cannot be changed is a waste. SoI for software means "changeability". You want to be sure that the software you buy/order today can be changed, adapted, and improved over an unforeseeable number of years so as to fit changes in its usage environment.

    But that's not the only reason why the SoI aspect is special. On top of changeability[1] (or evolvability) comes immeasurability. Evolvability cannot readily be measured by counting something. Whether the changeability is as high as the customer wants it cannot be determined by looking at metrics like Lines of Code, Cyclomatic Complexity, or Afferent Coupling. They may give a hint… but they are far, far from precise. That's because of the nature of changeability: it's different from performance or scalability. It's also because a customer cannot tell upfront "how much" evolvability he/she wants. Whether requirements regarding Functionality (F) and Q have been met, a customer can tell you very quickly and very precisely: a calculation is missing, the calculation takes too long, the calculation time degrades with increased load, the calculation is accessible to the wrong users, etc. That's all very, or at least comparatively, easy to determine. But changeability… that's a whole different thing. Nevertheless, over time the customer will develop a feeling for whether changeability is good enough or degrading. He/she just has to check the development of the frequency of "WTF"s from developers ;-)

    F and Q are "timeless" requirement categories: customers want us to deliver on them now. Just focusing on the now, though, is rarely beneficial in the long run. So SoI adds a counterweight to the requirements picture. Customers want SoI - whether they know it or not, whether they state it explicitly or not.

    In closing

    A customer's requirements are not monolithic. They are not all made the same. Rather, they fall into different categories. We as developers need to recognize these categories when confronted with some requirement - and take them into account. Only then can we make truly professional decisions, i.e. conscious and responsible ones.

    [1] I call this fundamental trait of software "changeability" and not "flexibility" to distinguish to whom it's a concern. "Flexibility" to me means the software as is can easily be adapted to a change in its environment, e.g. by tweaking some config data or adding a library which gets picked up by a plug-in engine. "Flexibility" thus is a matter of some user. "Changeability", on the other hand, to me means the software can easily be changed in its structure to adapt it to new requirements. That's a matter of the software developer.


  • Load DLLs from an Environment Variable Path from a service

    - by Paulo Manuel Santos
We install the MATLAB Runtime on a machine, then we restart a .NET Windows service that invokes methods from the MATLAB Runtime. The problem is that we receive TypeInitializationException errors until we restart Windows. We think this happens because environment variables are not refreshed for services until restart, and MATLAB uses the %Path% variable to reference its core DLLs. My question is: do you think I can change the %Path% variable so that MATLAB will use it when referencing the core DLLs for its engine? Or is it possible to add a directory to the runtime DLL-loading mechanism of .NET so that those MATLAB core DLLs would be referenced correctly without restarting the machine? Here is the exception we get:

        System.TypeInitializationException: The type initializer for 'MatlabCalculation.Calculation' threw an exception.
        ---> System.TypeInitializationException: The type initializer for 'MathWorks.MATLAB.NET.Utility.MWMCR' threw an exception.
        ---> System.DllNotFoundException: Unable to load DLL 'mclmcrrt710.dll': Kan opgegeven module niet vinden. (Exception from HRESULT: 0x8007007E)
           at MathWorks.MATLAB.NET.Utility.MWMCR.mclmcrInitialize()
           at MathWorks.MATLAB.NET.Utility.MWMCR..cctor()
        --- End of inner exception stack trace ---
           at MatlabCalculation.Calculation..cctor()
        --- End of inner exception stack trace ---
           at MatlabCalculation.Calculation.Finalize()

    ("Kan opgegeven module niet vinden" = "The specified module could not be found")
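    One avenue that may be worth trying (a sketch, not a verified fix; the runtime path below is an assumption - use the actual MCR install location): a service can update PATH for its own process at startup, before the first MATLAB type is touched, so the native loader can find mclmcrrt710.dll even though the service inherited the pre-install environment.

        // prepend the MATLAB Compiler Runtime directory to this process's PATH
        string mcr = @"C:\Program Files\MATLAB\MATLAB Compiler Runtime\v710\runtime\win32";
        Environment.SetEnvironmentVariable("PATH",
            mcr + ";" + Environment.GetEnvironmentVariable("PATH"));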


  • Generating permutation in Python with specific rule

    - by twfx
Let's say a = [A, B, C, D], where each element has a weight w and is set to 1 if selected, 0 otherwise. I'd like to generate the permutations in the order below:

        1,1,1,1
        1,1,1,0
        1,1,0,1
        1,1,0,0
        1,0,1,1
        1,0,1,0
        1,0,0,1
        1,0,0,0

    Let w = [1,2,3,4] for items A, B, C, D, and max_weight = 4. For each permutation, if the accumulated weight exceeds max_weight, stop calculating that permutation and move to the next one. For example:

        1,1,1   --> 6 > 4, exceeded, stop, move to next
        1,1,1   --> 6 > 4, exceeded, stop, move to next
        1,1,0,1 --> 7 > 4, finished, move to next
        1,1,0,0 --> 3, finished, move to next
        1,0,1,1 --> 8 > 4, exceeded, stop, move to next
        1,0,1,0 --> 4, finished, move to next
        1,0,0,1 --> 5 > 4, finished, move to next
        1,0,0,0 --> 1, finished, move to next

    [1,0,1,0] is the best combination that does not exceed max_weight = 4. My questions are:

    1. What algorithm generates the required permutations? Or can you suggest any way I could generate them?
    2. As the number of elements can be up to 10000, and the calculation stops as soon as the accumulated weight for a branch exceeds max_weight, it is not necessary to generate all permutations before calculating. How can the algorithm in (1) generate the permutations on the fly?
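    A depth-first recursion gives exactly this order (trying 1 before 0 at each position) and prunes a branch the moment its running weight exceeds max_weight, so selections are generated on the fly rather than up front. A sketch (names assumed):

        def best_selection(w, max_weight):
            best_sel, best_w = None, -1
            def rec(i, sel, acc):
                nonlocal best_sel, best_w
                if acc > max_weight:
                    return                    # prune: this branch is already too heavy
                if i == len(w):
                    if acc > best_w:
                        best_sel, best_w = sel[:], acc
                    return
                for bit in (1, 0):            # 1 before 0 gives the required order
                    sel.append(bit)
                    rec(i + 1, sel, acc + bit * w[i])
                    sel.pop()
            rec(0, [], 0)
            return best_sel, best_w

        # for ~10000 elements, convert the recursion to an explicit stack
        # (Python's default recursion limit is around 1000)
        print(best_selection([1, 2, 3, 4], 4))   # ([1, 0, 1, 0], 4)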


  • Solving problems involving more complex data structures with CUDA

    - by Nils
So I have read a bit about CUDA and GPU programming. I noticed a few things, such as that access to global memory is slow (so shared memory should be used) and that the execution paths of threads in a warp should not diverge. I also looked at the (dense) matrix multiplication example described in the programmer's manual, and at the n-body problem. The trick in both implementations seems to be the same: arrange the calculation in a grid (which it already is in the case of the matrix multiplication); then subdivide the grid into smaller tiles; fetch the tiles into shared memory and let the threads calculate as long as possible, until they need to reload data from global memory into shared memory. In the case of the n-body problem, the calculation for each body-body interaction is exactly the same (page 682): bodyBodyInteraction(float4 bi, float4 bj, float3 ai). It takes two bodies and an acceleration vector. The body vector has four components: its position and its weight. When reading the paper, the calculation is easy to understand. But what if we have a more complex object, with a dynamic data structure? For now, just assume that we have an object (similar to the body object presented in the paper) which has a list of other objects attached, and the number of objects attached is different for each thread. How could I implement that without having the execution paths of the threads diverge? I'm also looking for literature which explains how different algorithms involving more complex data structures can be effectively implemented in CUDA.
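    A common answer (a sketch, not from the paper) is to flatten the per-object lists into two arrays, CSR-style: one array holding all attached objects back to back, and an offsets array saying where each object's list starts. Every thread then runs the same loop, only with a different trip count, so there are no divergent branches - at worst some threads in a warp idle while others finish their longer lists.

        // names assumed; bodyBodyInteraction would be the function from the paper
        __global__ void interactLists(const float4* bodies,
                                      const int* offsets,   // length N+1, CSR offsets
                                      const int* neighbors, // all lists concatenated
                                      float3* accel, int N)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= N) return;
            float3 ai = make_float3(0.0f, 0.0f, 0.0f);
            for (int k = offsets[i]; k < offsets[i + 1]; ++k) {
                // ai = bodyBodyInteraction(bodies[i], bodies[neighbors[k]], ai);
            }
            accel[i] = ai;
        }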


  • Ajax Form submission in Google App Engine with jQuery

    - by user271785
I could not figure out why this is not working: I need to send a request to the server, generate a fragment of HTML in Python with the meanCal method, and then have that fragment embedded into the submitting HTML file using the calculation method and shown dynamically in the dyContent div. All of this should happen on a single click of the submit button in a form. Any suggestions? Thanks in advance. The submitting HTML:

        <div id="dyContent" style="height: 200px;">
            waiting for user...
            {{ mgs }}
        </div>
        <div id="leturetext">
            <form id="mean" method="post" action="/calculation">
                <select name="meanselect">
                    <option value=10>example</option>
                    <option value=11>exercise</option>
                </select>
                <input type="button" name="btnMean" value="Check Results" />
            </form>
        </div>
        <script type="text/javascript">
        $(document).ready(function() {
            //$("#btnMean").live("click", function() {
            $("#mean").submit(function(){
                $.ajax({
                    type: "POST",
                    cache: false,
                    url: "/meanCal",
                    success: function(html) {
                        $("#dyContent").html(html);
                    }
                });
                return false;
            });
        });
        </script>

    Python:

        class MainHandler(webapp.RequestHandler):
            def get(self):
                path = self.request.path
                if doRender(self, path):
                    return
                doRender(self, 'index.htm')

        class calculationHandler(webapp.RequestHandler):
            def post(self):
                doRender(self, 'Diagnostic_stats.htm', {'mgs': "refreshed.", })
            def get(self):
                doRender(self, 'Diagnostic_stats.htm')

        class meanHandler(webapp.RequestHandler):
            def get(self):
                global GL
                index = self.request.get('meanselect'.value)
                if (index == 10):
                    allData = GL.exampleData
                    dataString = ','.join(map(str, allData))
                    dataMean = (str)(stats.lmean(allData))
                    doRender(self, 'Result.htm', {
                        'dataIn': dataString,
                        'MEAN': "Example Mean is: " + dataMean,
                    })
                    return
                else:
                    allData = GL.exerciseData
                    dataString = ','.join(map(str, allData))
                    dataMean = (str)(stats.lmean(allData))
                    doRender(self, 'Result.htm', {
                        'dataIn': dataString,
                        'MEAN': "Exercise Mean is: " + dataMean,
                    })

        def main():
            global GL
            GL = GlobalVariables()
            application = webapp.WSGIApplication(
                [('/calculation', calculationHandler),
                 ('/meanCal', meanHandler),
                 ('.*', MainHandler),
                 ], debug=True)
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()
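    A few mismatches stand out (hedged guesses, not a verified fix): the $.ajax call sends a POST, but meanHandler only defines get(); no form data is ever sent with the request; and 'meanselect'.value is not valid Python, while self.request.get() returns a string, so comparing against the integer 10 will always fail. A sketch of the matching pieces:

        // client: actually send the <select> value with the POST
        $.ajax({
            type: "POST",
            url: "/meanCal",
            data: $("#mean").serialize(),
            success: function(html) { $("#dyContent").html(html); }
        });

        # server: handle POST and compare strings
        class meanHandler(webapp.RequestHandler):
            def post(self):
                index = self.request.get('meanselect')  # '10' or '11', as a string
                if index == '10':
                    pass  # same example branch as before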


  • iPhone slide view passing variables

    - by sebastyuiop
Right, I'm trying to make an app that has a calculation that involves a stopwatch. When a button on the calculation view is clicked, a stopwatch slides in from the bottom. This all works fine; the problem I can't get my head around is how to send the recorded time back to the previous controller to update a text field. I've simplified the code and stripped out most irrelevant stuff. Many thanks.

    CalculationViewController.h

        #import <UIKit/UIKit.h>
        @interface CalculationViewController : UIViewController <UITableViewDelegate, UITableViewDataSource> {
            IBOutlet UITextField *inputTxt;
        }
        @property (nonatomic, retain) UITextField *inputTxt;
        - (IBAction)showTimer:(id)sender;
        @end

    CalculationViewController.m

        #import "CalculationViewController.h"
        #import "TimerViewController.h"
        @implementation CalculationViewController
        - (IBAction)showTimer:(id)sender {
            TimerViewController *timerView = [[TimerViewController alloc] init];
            [self.navigationController presentModalViewController:timerView animated:YES];
        }

    TimerViewController.h

        #import <UIKit/UIKit.h>
        @interface TimerViewController : UIViewController {
            IBOutlet UILabel *time;
            NSTimer *myTicker;
        }
        - (IBAction)start;
        - (IBAction)stop;
        - (IBAction)reset;
        - (void)showActivity;
        @end

    TimerViewController.m

        #import "TimerViewController.h"
        #import "CalculationViewController.h"
        @implementation TimerViewController
        - (IBAction)start {
            myTicker = [NSTimer scheduledTimerWithTimeInterval:1.0 target:self selector:@selector(showActivity) userInfo:nil repeats:YES];
        }
        - (IBAction)stop {
            [myTicker invalidate];
            #Update inputTxt on calculation view here
            [self dismissModalViewControllerAnimated:YES];
        }
        - (IBAction)reset {
            time.text = @"0";
        }
        - (void)showActivity {
            int currentTime = [time.text intValue];
            int newTime = currentTime + 1;
            time.text = [NSString stringWithFormat:@"%d", newTime];
        }
        @end
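    The usual Cocoa answer here is a delegate protocol: the timer controller reports its result back to whoever presented it. A sketch with assumed names:

        // TimerViewController.h - declare the callback the presenter implements
        @protocol TimerViewControllerDelegate <NSObject>
        - (void)timerDidFinishWithTime:(NSString *)recordedTime;
        @end
        // and add: @property (assign) id<TimerViewControllerDelegate> delegate;

        // TimerViewController.m - inside stop, before dismissing:
        //     [self.delegate timerDidFinishWithTime:time.text];

        // CalculationViewController adopts the protocol, sets
        // timerView.delegate = self before presenting, and implements:
        // - (void)timerDidFinishWithTime:(NSString *)recordedTime {
        //     inputTxt.text = recordedTime;
        // }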


  • Rewriting a for loop in pure NumPy to decrease execution time

    - by Statto
I recently asked about trying to optimise a Python loop for a scientific application, and received an excellent, smart way of recoding it within NumPy which reduced execution time by a factor of around 100 for me! However, calculation of the B value is actually nested within a few other loops, because it is evaluated at a regular grid of positions. Is there a similarly smart NumPy rewrite to shave time off this procedure? I suspect the performance gain for this part would be less marked, and the disadvantages would presumably be that it would not be possible to report back to the user on the progress of the calculation, that the results could not be written to the output file until the end of the calculation, and possibly that doing this in one enormous step would have memory implications? Is it possible to circumvent any of these?

        import numpy as np
        import time

        def reshape_vector(v):
            b = np.empty((3,1))
            for i in range(3):
                b[i][0] = v[i]
            return b

        def unit_vectors(r):
            return r / np.sqrt((r*r).sum(0))

        def calculate_dipole(mu, r_i, mom_i):
            relative = mu - r_i
            r_unit = unit_vectors(relative)
            A = 1e-7
            num = A*(3*np.sum(mom_i*r_unit, 0)*r_unit - mom_i)
            den = np.sqrt(np.sum(relative*relative, 0))**3
            B = np.sum(num/den, 1)
            return B

        N = 20000                       # number of dipoles
        r_i = np.random.random((3,N))   # positions of dipoles
        mom_i = np.random.random((3,N)) # moments of dipoles
        a = np.random.random((3,3))     # three basis vectors for this crystal
        n = [10,10,10]                  # points at which to evaluate sum
        gamma_mu = 135.5                # a constant

        t_start = time.clock()
        for i in range(n[0]):
            r_frac_x = np.float(i)/np.float(n[0])
            r_test_x = r_frac_x * a[0]
            for j in range(n[1]):
                r_frac_y = np.float(j)/np.float(n[1])
                r_test_y = r_frac_y * a[1]
                for k in range(n[2]):
                    r_frac_z = np.float(k)/np.float(n[2])
                    r_test = r_test_x + r_test_y + r_frac_z * a[2]
                    r_test_fast = reshape_vector(r_test)
                    B = calculate_dipole(r_test_fast, r_i, mom_i)
                    omega = gamma_mu*np.sqrt(np.dot(B,B))
                    # write r_test, B and omega to a file
            frac_done = np.float(i+1)/(n[0]+1)
            t_elapsed = (time.clock()-t_start)
            t_remain = (1-frac_done)*t_elapsed/frac_done
            print frac_done*100,'% done in',t_elapsed/60.,'minutes...approximately',t_remain/60.,'minutes remaining'
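    One possible direction (a sketch, assuming a modern NumPy and that all n[0]*n[1]*n[2] grid points fit in memory alongside the field arrays): build every test point in a single broadcast instead of three nested loops, then evaluate them in chunks, which keeps progress reporting possible and bounds the memory use.

        # build all fractional grid coordinates at once (result shape: (3, n_points))
        fx = np.arange(n[0], dtype=float) / n[0]
        fy = np.arange(n[1], dtype=float) / n[1]
        fz = np.arange(n[2], dtype=float) / n[2]
        gx, gy, gz = np.meshgrid(fx, fy, fz, indexing='ij')
        points = (gx[..., None] * a[0] + gy[..., None] * a[1]
                  + gz[..., None] * a[2]).reshape(-1, 3).T

        # evaluate in chunks: progress can still be reported between chunks,
        # and results can be written out as each chunk completes
        for start in range(0, points.shape[1], 100):
            chunk = points[:, start:start + 100]
            # loop (or vectorize further) over the columns of chunk,
            # calling calculate_dipole for each test point as before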


  • Is this a design pattern?

    - by Michel
Hi all, I have to build a financial data report, and in making the calculation there are a lot of 'if-then' situations: if it's a large client, subtract 10%; if its postal code equals '10101', add 10%; if the day is a Saturday, make a difficult calculation; etc. I once read about this kind of example, and what they did was (I hope I remember it well) create a class with some base info and make it possible to add all kinds of calculation objects to it. To put what I remembered in pseudocode:

        Basecalc bc = new baseCalc();
        // put the info in the bc so other objects can do their if
        bc.Add(new Largecustomercalc());
        bc.Add(new PostalcodeCalc());
        bc.Add(new WeekdayCalc());

    Then the bc would run the Calc() methods of all of the added Calc objects. As I type this, I think all the Calc objects must be able to see the Basecalc properties to correctly perform their calculation logic. So all the ifs are in the different Calc objects and not ALL in the Basecalc. Does this make sense? I was wondering: is this some kind of design pattern?
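    This is essentially the Strategy pattern: each calculation is encapsulated behind a common interface, and the base object just runs whichever strategies were registered (the aggregate-and-run part is sometimes described as a simple rules pipeline). A minimal sketch in Python, all names assumed:

        class Rule:
            def apply(self, ctx, amount):
                return amount

        class LargeCustomerRule(Rule):
            def apply(self, ctx, amount):
                return amount * 0.90 if ctx['large_customer'] else amount

        class PostalCodeRule(Rule):
            def apply(self, ctx, amount):
                return amount * 1.10 if ctx['postal_code'] == '10101' else amount

        class BaseCalc:
            def __init__(self, ctx):
                self.ctx, self.rules = ctx, []
            def add(self, rule):
                self.rules.append(rule)
            def calc(self, amount):
                for rule in self.rules:   # every rule sees the shared base info
                    amount = rule.apply(self.ctx, amount)
                return amount

        bc = BaseCalc({'large_customer': True, 'postal_code': '10101'})
        bc.add(LargeCustomerRule())
        bc.add(PostalCodeRule())
        print(bc.calc(100.0))  # 100 * 0.9 * 1.1 = 99.0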


  • c++ floating point precision loss: 3015/0.00025298219406977296

    - by SigTerm
The problem: Microsoft Visual C++ 2005 compiler, 32-bit Windows XP SP3, AMD 64 X2 CPU. Code:

        double a = 3015.0;
        double b = 0.00025298219406977296;
        //*((unsigned __int64*)(&a)) == 0x40a78e0000000000
        //*((unsigned __int64*)(&b)) == 0x3f30945640000000
        double f = a/b; //3015/0.00025298219406977296;

    The result of the calculation (i.e. "f") is 11917835.000000000 (*((unsigned __int64*)(&f)) == 0x4166bb4160000000), although it should be 11917834.814763514 (i.e. *((unsigned __int64*)(&f)) == 0x4166bb415a128aef). That is, the fractional part is lost. Unfortunately, I need the fractional part to be correct. Questions: 1) Why does this happen? 2) How can I fix the problem? Additional info:

    0) The result is taken directly from the "watch" window (it wasn't printed, and I didn't forget to set the printing precision). I also provided the hex dump of the floating point variable, so I'm absolutely sure about the calculation result.

    1) The disassembly of f = a/b is:

        fld qword ptr [a]
        fdiv qword ptr [b]
        fstp qword ptr [f]

    2) f = 3015/0.00025298219406977296; yields the correct result (f == 11917834.814763514, *((unsigned __int64*)(&f)) == 0x4166bb415a128aef), but it looks like in this case the result is simply calculated at compile time:

        fld qword ptr [__real@4166bb415a128aef (828EA0h)]
        fstp qword ptr [f]

    So, how can I fix this problem? P.S. I've found a temporary workaround (I need only the fractional part of the division, so I simply use f = fmod(a,b)/b at the moment), but I still would like to know how to fix this problem properly - double precision is supposed to give about 16 decimal digits, so such a calculation isn't supposed to cause problems.
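    (For what it's worth: the symptom - a hardware double divide coming back with only about 24 bits of mantissa, so that numbers near 1.2e7 land on whole integers - is exactly what happens when something in the process, Direct3D being a common culprit, has switched the x87 precision control down to single precision. A hedged sketch of restoring it with the MSVC CRT:)

        #include <float.h>

        unsigned int prev;
        _controlfp_s(&prev, _PC_53, _MCW_PC);  // back to 53-bit (double) precision
        double f = a / b;                      // fractional part is preserved again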


  • Optimum number of threads while multitasking

    - by Gun Deniz
I know similar questions have been asked, but I think my case is a little bit different. Let's say I have a computer with 8 cores and infinite memory, running a Linux OS. I have a calculation software called Gaussian that can take advantage of multithreading, so I set its thread count to 8 for a single calculation for maximum speed. However, I really can't decide what to do when I need to run, for instance, 8 calculations simultaneously. In that case, should I set the thread count to 1 (8 threads total, spawned across 8 processes) or keep it at 8 (64 threads total, spawned across 8 processes) for each job? Does it really matter much? A related question: does the OS automatically distribute the threads across different cores?


  • Calculate age in days/months/years in OpenOffice

    - by Sanjay
I need to find an age in days/months/years in OpenOffice. There is DATEDIF() in Microsoft Excel; you can use it to find the difference in days/months/years between two dates.

    Age calculation: you can calculate a person's age based on their birthday and today's date. The calculation uses the DATEDIF() function. DATEDIF() is not documented in Excel 5, 7 or 97, but it is in 2000. (Makes you wonder what else Microsoft forgot to tell us!)

        Birth date:      01-Jan-60
        Years lived:     52    =DATEDIF(C8,TODAY(),"y")
        and the months:  4     =DATEDIF(C8,TODAY(),"ym")
        and the days:    30    =DATEDIF(C8,TODAY(),"md")

    One can calculate with the formula below, but it is cumbersome for extracting the months. This method gives you an age which may potentially have decimal places representing the months: if the age is 20.5, the .5 represents 6 months.

        Birth date:  01-Jan-60
        Age is:      52.41     =(TODAY()-C23)/365.25
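    OpenOffice Calc historically ships without DATEDIF, but it has its own interval functions that may cover this (a sketch; assumes the birth date sits in C8, and that the Type argument 0 means whole intervals rather than calendar boundaries):

        whole years:   =YEARS(C8;TODAY();0)
        whole months:  =MONTHS(C8;TODAY();0)
        days total:    =TODAY()-C8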

