Search Results

Search found 2911 results on 117 pages for 'numerical analysis'.

Page 3/117 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • SSDT gotcha – Moving a file erases code analysis suppressions

    - by jamiet
    I discovered a little wrinkle in SSDT today that is worth knowing about if you are managing your database schemas using SSDT. In short, if a file is moved to a different folder in the project then any code analysis suppressions that reference that file will disappear from the suppression file. This makes sense if you think about it, because the paths stored in the suppression file are no longer valid, but you probably won’t be aware of it until it happens to you. If you don’t know what code analysis is or you don’t know what the suppression file is then you can probably stop reading now; otherwise read on for a short demo. Let’s create a new project and add a stored procedure to it called sp_dummy. Naming stored procedures with an sp_ prefix is generally frowned upon, and hence SSDT static code analysis will look for occurrences of this and flag them. So, the next thing we need to do is turn on static code analysis in the project properties: A subsequent build causes a code analysis warning as we might expect: Let’s suppose we actually don’t mind stored procedures with sp_ prefixes; we can just right-click on the message to suppress it and get rid of it: That causes a suppression file to get created in our project: Notice that the suppression file contains a relative path to the file that has had the suppression placed upon it. Now if we simply move the file within our project to a new folder, notice that the suppression that we just created gets removed from the suppression file: As I alluded to above, this behaviour is intuitive because the path originally stored in the suppression file is no longer relevant, but you’re probably not going to be aware of it until it happens to you and messages that you thought you had suppressed start appearing again. Definitely one to be aware of. @Jamiet

    Read the article

  • Keyword Analysis Tools

    One of the most essential free webmaster tools is a great analytics program. Free website analysis for websites and blogs is vital for success and involves plenty of capabilities, like traffic analysis. Free website analysis should also show which pages are viewed the most.

    Read the article

  • Windows Azure Platform TCO/ROI Analysis Tool

    - by kaleidoscope
    Microsoft has released a tool to help you figure out how much money you can save by switching to Windows Azure from your on-premises solution. The tool will provide you with a customized estimate of the potential cost savings you (or your company or organization) may achieve by building on the Windows Azure Platform. Upon completion of the TCO and ROI Calculator profile analysis, you will be presented with a detailed report which shows estimated line-item costs for an accurate TCO and a 1 to 3 year ROI analysis for you or your company or organization. You should not interpret the analysis report you receive as part of this process to be a commitment on the part of Microsoft, and Microsoft makes no guarantees regarding the accuracy of any information presented in the report. You should not view the results of this report as a substitute for engaging with a third-party expert to independently evaluate you or your company’s specific computing needs. The analysis report you will receive is for informational purposes only. For more information check this link. Geeta, G

    Read the article

  • SEO Secrets - On-Page Competition Analysis

    SEO Competition Analysis--in order to be complete--must always be undertaken in two equally important steps. Step One is the Off-Page Competition Analysis. Step Two is the On-Page Competition Analysis. This article will cover the second step--in the context of assessing your competitors' websites. On-Page Elements--as evident from the term, these are the website qualities that can be found within the web pages of your competitors' sites.

    Read the article

  • FxCop / Code Analysis with VS2010 Ultimate

    - by Cuartico
    I've been getting some information about this, but I still can't find a proper answer. I was asked recently at my company to "run an FxCop analysis on that code and tell me the results". OK, I have VS2010 Ultimate, which has Code Analysis, but before making any comment I browsed the internet because I want to implement the best choice... So, let's say I'm going to use the same rules on both analyzers: Should I recommend using one over the other? Should I say "hey, that's kind of old, let's use Code Analysis!"? Should I get the same results on different computers? (From what I understand, FxCop gives you some "points", and from what I've read it sometimes gives different points on different computers; I don't know whether this happens with Code Analysis.) Thanks, any help would be appreciated.

    Read the article

  • Visual Studio Code Analysis: CA0001 Error Running Code Analysis - object reference not set to an instance of an object

    - by sturdytree
    For a WPF application being developed in VS 2012 (Ultimate), the application runs fine when code analysis is disabled for a particular project. Enabling it results in the error above. This was working fine until recently (i.e. running with code analysis enabled for that project), and the only recent change I can think of is removing NHibernate Profiler (using NuGet). I will be grateful for any pointers on how to debug this, or on how to see a more detailed log/error message.

    Read the article

  • C++ static code analysis tool on Windows

    - by KTC
    What C++ static code analysis tools are there on Microsoft Windows, and which would you recommend? Please state whether a particular tool relies on Cygwin and whether it costs money. One per post, as per voting up and down. Similar question: http://stackoverflow.com/questions/141498/what-open-source-c-static-analysis-tools-are-available

    Read the article

  • Disable Code analysis warnings .NET

    - by acidzombie24
    In Visual Studio I can run Code Analysis on my .NET project. I am running the basic correctness rules and have 85 warnings, which is a little much. Also, the majority of them are in external code. How do I disable specific warnings so I can focus on the more important ones? I tried the below, but it does not recognize Code Analysis warnings. (I first tried it without the CA prefix.) #pragma warning disable CA1820 CA1065 CA2100
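    If it helps, with the FxCop-based Code Analysis of that Visual Studio generation the #pragma approach doesn't apply; the supported routes are (a) unchecking the rule in the rule set selected on the project's Code Analysis tab, which silences it project-wide, or (b) suppressing individual occurrences with the SuppressMessage attribute, usually collected in a GlobalSuppressions.cs file (right-clicking a warning offers "Suppress Message" to generate these). A minimal sketch follows; the rule categories and names are quoted from memory and the Scope/Target members are made-up placeholders, so copy the exact values Visual Studio generates for your own warnings.

      // GlobalSuppressions.cs (sketch) - the CODE_ANALYSIS compilation symbol must be
      // defined for these attributes to be honored by Code Analysis.
      using System.Diagnostics.CodeAnalysis;

      // Suppress one specific occurrence reported against a specific member.
      // Scope/Target below are hypothetical; "Suppress Message -> In Suppression File"
      // generates the real ones for you.
      [assembly: SuppressMessage("Microsoft.Performance",
          "CA1820:TestForEmptyStringsUsingStringLength",
          Scope = "member",
          Target = "MyApp.Data.Helper.#IsBlank(System.String)",
          Justification = "External code; not worth changing here.")]

      [assembly: SuppressMessage("Microsoft.Security",
          "CA2100:ReviewSqlQueriesForSecurityVulnerabilities",
          Scope = "member",
          Target = "MyApp.Data.Repository.#ExecuteQuery(System.String)",
          Justification = "Query text is built from constants only.")]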

    Read the article

  • Code Analysis In Python

    - by Jerub
    What tools are good to use for code analysis in Python? I have a large source repository split across multiple projects, and I would like to be able to run tools across the directories to see details like cyclomatic complexity, and perhaps be able to spot errors using static analysis. Ideally, I would like to be able to produce a report about the health of the source code, so we can spot problem areas that need to be addressed.

    Read the article

  • SQL Server Analysis Services, DNS, AD, Kerberos, Connection Issues

    - by ScaleOvenStove
    Running into a very weird issue. Converting servers to Windows 2008/SQL 2008. Have a server, SERVER_A, brand new, set up with Win2k8/Sql2k8 - works. Have a server, SERVER_B, running Windows 2003/SQL 2005. I want to migrate from SERVER_B to SERVER_A. I have all dbs, cubes, etc. set up on SERVER_A and it is mimicking functionality. Since users are using Excel to connect to SSAS, their connection string has SERVER_B in it. What I want to do is change DNS on the network to point SERVER_B (by name) at the IP of SERVER_A. I have successfully done this with another server, SERVER_C, but I need to do it with SERVER_B. What I have found is that with SERVER_C, after changing DNS, I had to remove SERVER_C from AD and then it worked. I could connect to SERVER_C (DB), SERVER_C (SSAS default instance) and SERVER_C (SSAS named instance), and it all was actually connecting to SERVER_A. I tried to do the same with SERVER_B, and no luck. Changed DNS, removed it from AD, and it wouldn't connect. Found out that there were some SPNs set up in AD, so removed those and tried again. I then could connect to SERVER_B (DB) and SERVER_B (SSAS named instance), but not SERVER_B (SSAS default instance). I could connect to SERVER_B (SSAS default instance WITH the port #), but I need to be able to connect without the port number. I am at a loss as to why I can't connect to the default instance without a port #. Not sure if it is SPNs in AD, or another AD issue, or something else. Pretty sure it isn't something on the server (because SERVER_C works!). Any insight or suggestions would be greatly appreciated!
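    One Kerberos-specific thing to check (an assumption on my part about what's happening, not a definite diagnosis): SSAS uses SPNs of the form MSOLAPSvc.3/<server name> for the default instance, and named-instance connections resolve through SQL Browser while by-name default-instance connections do not. If the SPNs you removed for SERVER_B were never re-created against the account that runs SSAS on SERVER_A, a by-name connection to the default instance can fail while the port-number connection still succeeds. Something along these lines, with placeholder domain/account names, is what I'd try from a domain admin prompt:

      REM DOMAIN\ssas_svc is a placeholder for the account running SSAS on SERVER_A.
      REM See what is currently registered against the alias name:
      setspn -Q MSOLAPSvc.3/SERVER_B
      REM Register the alias (NetBIOS and FQDN forms) against that service account:
      setspn -S MSOLAPSvc.3/SERVER_B DOMAIN\ssas_svc
      setspn -S MSOLAPSvc.3/SERVER_B.yourdomain.local DOMAIN\ssas_svc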

    Read the article

  • Text tagging/analysis tool for Mac

    - by Mark Porter
    I'm a doctoral student doing research in the humanities. As part of my research I have gathered together a lot of interview text. To analyse this data I want to be able to easily tag sections of text with keywords (the tags need to be able to overlap, and perhaps be organised hierarchically) and later be able to collate those sections from across multiple files. I need to be able to do this on a Mac. It feels like a simple task but I can't find any software for doing it that isn't either horribly clunky or a massive overkill worth hundreds of pounds. Is there any good software for doing this, or are there any good ways of doing it with other software?

    Read the article

  • apache log analysis tools for multiple virtual hosts?

    - by shreddd
    I am interested in getting a side-by-side usage comparison of all the virtual hosts being served up by my Apache server. In the simplest case, I want to see a list (or bar chart) with each virtual host and the number of requests/traffic on that site. I've been playing around with Webalizer and AWStats, but I haven't been able to compare multiple virtual hosts in the same infographic. Does anyone have any suggestions on tools for doing this (or on how I might use the above tools to do so)?

    Read the article

  • Extreme Optimization – Numerical Algorithm Support

    - by JoshReuben
    Function Delegates
    Many calculations involve the repeated evaluation of one or more user-supplied functions, e.g. numerical integration. The EO MathLib provides delegate types for common function signatures, and the FunctionFactory class can generate new delegates from existing ones.
    - RealFunction delegate – takes one Double parameter; it can encapsulate most of the static methods of the System.Math class, as well as the classes in the Extreme.Mathematics.SpecialFunctions namespace:
      var sin = new RealFunction(Math.Sin);
      var result = sin(1);
    - BivariateRealFunction delegate – takes two Double parameters:
      var atan2 = new BivariateRealFunction(Math.Atan2);
      var result = atan2(1, 2);
    - TrivariateRealFunction delegate – represents a function that takes three Double arguments.
    - ParameterizedRealFunction delegate – represents a function taking one Integer and one Double argument that returns a real number. The Pow method implements such a function, but the arguments need to be re-ordered:
      static double Power(int exponent, double x) { return ElementaryFunctions.Pow(x, exponent); }
      ...
      var power = new ParameterizedRealFunction(Power);
      var result = power(6, 3.2);
    - ComplexFunction delegate – represents a function that takes an Extreme.Mathematics.DoubleComplex argument and also returns a complex number.
    - MultivariateRealFunction delegate – represents a function that takes an Extreme.Mathematics.LinearAlgebra.Vector argument and returns a real number.
    - MultivariateVectorFunction delegate – represents a function that takes a Vector argument and returns a Vector.
    - FastMultivariateVectorFunction delegate – represents a function that takes an input Vector argument and an output Matrix argument, avoiding object construction.

    The FunctionFactory class
    The RealFromBivariateRealFunction and RealFromParameterizedRealFunction helper methods transform a BivariateRealFunction or a ParameterizedRealFunction into a RealFunction delegate by fixing one of the arguments and treating this as a new function of a single argument:
      var tenthPower = FunctionFactory.RealFromParameterizedRealFunction(power, 10);
      var result = tenthPower(x);
    Note: there is no direct way to do this programmatically in C# (though see the plain-C# sketch at the end of this excerpt) - in F# you have partial application, where you supply a subset of the arguments (as a travelling closure) that the function expects. When you omit arguments, F# generates a new function that holds onto/remembers the arguments you passed in and "waits" for the other parameters to be supplied.
      let sumVals x y = x + y
      let sumX = sumVals 10
      // Note: no 2nd param supplied.
      // sumX is a new function generated from the partially applied sumVals,
      // ie "sumX is a partial application of sumVals."
      let sum = sumX 20
      // Invokes sumX, passing in the expected int (parameter y from the original)
      val sumVals : int -> int -> int
      val sumX : (int -> int)
      val sum : int = 30
    The RealFunctionsToVectorFunction and RealFunctionsToFastVectorFunction helper methods combine an array of delegates returning a real number or a vector into vector or matrix functions. The resulting vector function returns a vector whose components are the function values of the delegates in the array:
      var funcVector = FunctionFactory.RealFunctionsToVectorFunction(
          new MultivariateRealFunction(myFunc1),
          new MultivariateRealFunction(myFunc2));

    The IterativeAlgorithm<T> abstract base class
    Iterative algorithms are common in numerical computing: a method is executed repeatedly until a certain condition is reached, approximating the result of a calculation with increasing accuracy until a certain threshold is reached. If the desired accuracy is achieved, the algorithm is said to converge. This base class is derived from by many classes in the Extreme.Mathematics.EquationSolvers and Extreme.Mathematics.Optimization namespaces, as well as by the ManagedIterativeAlgorithm class, which contains a driver method that manages the iteration process.

    The ConvergenceTest abstract base class
    This class is used to specify algorithm termination, convergence and results - it calculates an estimate for the error, and signals termination of the algorithm when the error is below a specified tolerance.
    - Termination criteria - specify the success condition as the difference between some quantity and its actual value being within a certain tolerance, in one of two ways: the absolute error is the difference between the result and the actual value; the relative error is the difference between the result and the actual value relative to the size of the result.
    - Tolerance property - specifies the trade-off between accuracy and execution time. The lower the tolerance, the longer it will take for the algorithm to obtain a result within that tolerance. Most algorithms in the EO NumLib have a default value of MachineConstants.SqrtEpsilon, which gives slightly less than 8 digits of accuracy.
    - ConvergenceCriterion property - specifies under what condition the algorithm is assumed to converge, using the ConvergenceCriterion enum: WithinAbsoluteTolerance / WithinRelativeTolerance / WithinAnyTolerance / NumberOfIterations.
    - Active property - selectively ignore certain convergence tests.
    - Error property - returns the estimated error after a run.
    - MaxIterations / MaxEvaluations properties - other termination criteria: if the algorithm cannot achieve the desired accuracy, it still has to end, according to an absolute boundary.
    - Status property - indicates how the algorithm terminated, using the AlgorithmStatus enum values: NoResult / Busy / Converged (ended normally - the desired accuracy has been achieved) / IterationLimitExceeded / EvaluationLimitExceeded / RoundOffError / BadFunction / Divergent / ConvergedToFalseSolution. After the iteration terminates, the Status should be inspected to verify that the algorithm terminated normally. Alternatively, you can set ThrowExceptionOnFailure to true.
    - Result property - returns the result of the algorithm. This property contains the best available estimate, even if the desired accuracy was not obtained.
    - IterationsNeeded / EvaluationsNeeded properties - return the number of iterations required to obtain the result and the number of function evaluations.

    Concrete Types of Convergence Test classes
    - SimpleConvergenceTest class - tests whether a value is close to zero or very small compared to another value.
    - VectorConvergenceTest class - tests convergence of vectors. This class has two additional properties. The Norm property specifies which norm is to be used when calculating the size of the vector, using the VectorConvergenceNorm enum values: EuclidianNorm / Maximum / SumOfAbsoluteValues. The ErrorMeasure property specifies how the error is to be measured, using the VectorConvergenceErrorMeasure enum values: Norm / Componentwise.
    - ConvergenceTestCollection class - represents a combination of tests. The Quantifier property is a ConvergenceTestQuantifier enum that specifies how the tests in the collection are to be combined: Any / All.

    The AlgorithmHelper class
    AlgorithmHelper inherits from IterativeAlgorithm<T> and exposes two methods for convergence testing: the IsValueWithinTolerance<T> method determines whether a value is close to another value to within an algorithm's requested tolerance, and the IsIntervalWithinTolerance<T> method determines whether an interval is within an algorithm's requested tolerance.
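    As an aside on the "fix one argument" idea above: plain C# closures get most of the way there without the FunctionFactory helpers. The sketch below is mine, not part of the original post, and deliberately uses no Extreme Optimization types - only standard Func<> delegates and System.Math - to illustrate the pattern, plus a bare-bones iterate-until-converged loop with the kind of absolute/relative error checks the ConvergenceTest description talks about.

      // Sketch only - standard C#, no Extreme Optimization library types involved.
      using System;

      static class PartialApplicationSketch
      {
          // A two-argument function, analogous in shape to a ParameterizedRealFunction.
          static double Power(int exponent, double x) => Math.Pow(x, exponent);

          static void Main()
          {
              // "Fix" the integer argument with a closure - the plain-C# counterpart
              // of what RealFromParameterizedRealFunction is described as doing.
              Func<double, double> tenthPower = x => Power(10, x);
              Console.WriteLine(tenthPower(2.0)); // 1024

              // Minimal iterative loop: Newton's iteration for sqrt(2), stopping when
              // either the absolute or the relative step size falls below tolerance
              // (akin to the WithinAnyTolerance convergence criterion).
              double estimate = 1.0, previous;
              const double tolerance = 1e-12;
              int iterations = 0, maxIterations = 100;
              do
              {
                  previous = estimate;
                  estimate = 0.5 * (estimate + 2.0 / estimate);
                  iterations++;
              }
              while (Math.Abs(estimate - previous) > tolerance &&                      // absolute error still too big
                     Math.Abs(estimate - previous) > tolerance * Math.Abs(estimate) && // relative error still too big
                     iterations < maxIterations);

              Console.WriteLine($"sqrt(2) ~ {estimate} after {iterations} iterations");
          }
      }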

    Read the article

  • Perl numerical sorting: how to ignore leading alpha character [migrated]

    - by Luke Sheppard
    I have a 1,660 row array like this: ... H00504 H00085 H00181 H00500 H00103 H00007 H00890 H08793 H94316 H00217 ... And the leading character never changes. It is always "H" then five digits. But when I do what I believe is a numerical sort in Perl, I'm getting strange results. Some segments are sorted in order, but then a different segment starts up. Here is a segment after sorting: ... H01578 H01579 H01580 H01581 H01582 H01583 H01584 H00536 H00537 H00538 H01585 H01586 H01587 H01588 H01589 H01590 ... What I'm trying is this: my @sorted_array = sort {$a <=> $b} @raw_array; But obviously it is not working. Anyone know why?

    Read the article

  • Should static analysis warnings fail the CI build?

    - by Cara
    Our team is investigating various options for static analysis in our project, and have mixed opinions about whether we want our Continuous Integration build to fail because of warnings from static analysis. The argument against failing the build is that there are often exceptions to the rules, and attempting to work around them just to make the build succeed reduces productivity. A better approach would be to generate reports with the build, and regularly dedicate developer time to addressing the reported issues. The counter-argument is that it is easy for the technical debt to build up if the bugs are not addressed immediately. Also, if the build fails when a potential bug is introduced, the amount of time required to fix it is reduced. What are your thoughts?

    Read the article

  • Configuring Team System Code Analysis via a FxCop rules file

    - by Ian G
    Is there any way to configure the code analysis rules in Visual Studio Team System to match those in an FxCop configuration file and keep them in sync automatically? Not all the developers on the team have Team System, so keeping the rules we are currently running in an FxCop file is required so everyone can run the same set, but it would be nice for those who have it to be able to run them in the IDE. We're introducing static analysis to an existing project, so turning on everything now isn't a useful option. (We are not using Team Foundation Server for source control, if that makes any difference.)

    Read the article

  • matlab fit exp2

    - by HelloWorld
    I'm unsuccessfully looking for documentation of the fit function using 'exp2' (a sum of two exponentials). How to call the function is clear: [curve, gof] = fit(x, y, 'exp2'); But since there are multiple ways to fit a sum of exponentials, I'm trying to find out what algorithm is used. In particular, what happens when I'm fitting one exponential (the raw data) with a bit of noise, and how are the exponents spread? I've simulated several cases, and it seems that it "drops" all the weight onto the second set of coefficients, but raw data analysis often shows different behavior. Does anyone have suggestions for documentation?

    Read the article

  • Static source code analysis with LLVM

    - by Phong
    I recently discovered the LLVM (low level virtual machine) project, and from what I have heard it can be used to perform static analysis on source code. I would like to know if it is possible to extract the different function calls made through function pointers (finding the caller function and the callee function) in a program. I couldn't find this kind of information on the website, so it would be really helpful if you could tell me whether such a library already exists in LLVM, or point me in the right direction on how to build it myself (existing source code, references, tutorials, examples...). EDIT: With my analysis I actually want to extract caller/callee function calls. In the case of a function pointer, I would like to return a set of possible callees. Both caller and callee must be defined in the source code (this does not include third-party functions in a library).

    Read the article

  • android spectrum analysis of streaming input

    - by TheBeeKeeper
    For a school project I am trying to make an Android application that, once started, will perform a spectrum analysis of live audio received from the microphone or a Bluetooth headset. I know I should be using an FFT, and have been looking at moonblink's open source audio analyzer ( http://code.google.com/p/moonblink/wiki/Audalyzer ), but I am not familiar with Android development, and his code is turning out to be too difficult for me to work with. So I suppose my questions are: are there any easier Java-based or open source Android apps that do spectrum analysis that I can reference? Or is there any helpful information that can be given, such as the steps that need to be taken to get the microphone input, put it into an FFT algorithm, then display a graph of frequency and pitch over time from its output? Any help would be appreciated, thanks.

    Read the article

  • SQLAuthority News – Download Whitepaper – SQL Server Analysis Services to Hive

    - by pinaldave
    The SQL Server Analysis Services is a very interesting subject and I have always enjoyed learning about it. You can read my earlier article over here. Big Data is my new interest and I have been exploring it recently. During this weekend this blog post caught my attention and I enjoyed reading it. Big Data is the next big thing. The growth is predicted to be 60% per year till 2016. There is no single solution to the growing need for big data available in the market right now, nor is there one solution in the business intelligence eco-system. However, the need for a solution is ever increasing. I am personally a Klout user. You can see my Klout profile over here. I do understand what Klout is trying to achieve - a single place to measure the influence of a person. However, it works a bit mysteriously. There are plenty of social media sites available currently in the internet world. The biggest problem all the social media sites face is that everybody opens an account but hardly anyone logs back in. To overcome this issue and have returning visitors, Klout has come up with a system where visitors can give 5/10 K+ to other users in a particular area. Looking at all the activities Klout is doing, it is indeed a big consumer of Big Data as well as an early adopter of big data and Hadoop based systems. Klout has up to 1 trillion rows of data to be analyzed, as well as a nearly thousand-terabyte warehouse. While Hive, the language used for Big Data, supports ad-hoc queries using HiveQL, there are always better solutions. The alternate solution would be using SQL Server Analysis Services (SSAS) along with HiveQL. As there is no direct method to achieve this, there are a few common workarounds already in place. A new ODBC driver from Klout has broken through the limitation, and the SQL Server Relational Engine can be used as an intermediate stage before SSAS. In this white paper the same solutions have been discussed in depth. The white paper discusses the following important concepts:
    - The Klout Big Data solution
    - Big Data Analytics based on Analysis Services
    - Hadoop/Hive and Analysis Services integration
    - Limitations of direct connectivity
    - Pass-through queries to linked servers
    - Best practices and lessons learned
    This white paper discusses all the important concepts which have enabled Klout to go to the next level with all its offerings, as well as helped efficiency by offering a few out-of-the-box solutions. I personally enjoyed reading this white paper and I encourage all of you to do so. SQL Server Analysis Services to Hive Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology

    Read the article

  • Rawr Code Clone Analysis – Part 0

    - by Dylan Smith
    Code Clone Analysis is a cool new feature in Visual Studio 11 (vNext). It analyzes all the code in your solution and attempts to identify blocks of code that are similar, and thus candidates for refactoring to eliminate the duplication. The power lies in the fact that the blocks of code don't need to be identical for Code Clone to identify them; it will report Exact, Strong, Medium and Weak matches indicating how similar the blocks of code in question are. People that know me know that I'm enthusiastic about both writing clean code, and taking old crappy code and making it suck less. So the possibilities for this feature have me pretty excited if it works well - and that's a big if that I'm hoping to explore over the next few blog posts. I'm going to grab the Rawr source code from CodePlex (a World of Warcraft gear calculator program), run Code Clone Analysis against it, then go through the results one by one and refactor where appropriate, blogging along the way. My goals with this blog series are twofold:
    - Evaluate and demonstrate Code Clone Analysis
    - Provide some concrete examples of refactoring code to eliminate duplication and improve the code-base
    Here are the initial results. Code Clone Analysis has found:
    - 129 Exact Matches
    - 201 Strong Matches
    - 300 Medium Matches
    - 193 Weak Matches
    Also indicated is that there was a total of 45,181 potentially duplicated lines of code that could be eliminated through refactoring. Considering the entire solution only has 109,763 lines of code, if true, the number of duplicated lines is pretty significant. In the next post we’ll start examining some of the individual results and determine if they really do indicate a potential refactoring.

    Read the article

  • The Sensemaking Spectrum for Business Analytics: Translating from Data to Business Through Analysis

    - by Joe Lamantia
    One of the most compelling outcomes of our strategic research efforts over the past several years is a growing vocabulary that articulates our cumulative understanding of the deep structure of the domains of discovery and business analytics. Modes are one example of the deep structure we’ve found. After looking at discovery activities across a very wide range of industries, question types, business needs, and problem solving approaches, we've identified distinct and recurring kinds of sensemaking activity, independent of context. We label these activities Modes: explore, compare, and comprehend are three of the nine recognizable modes. Modes describe *how* people go about realizing insights. (Read more about the programmatic research and formal academic grounding and discussion of the modes here: https://www.researchgate.net/publication/235971352_A_Taxonomy_of_Enterprise_Search_and_Discovery) By analogy to languages, modes are the 'verbs' of discovery activity. When applied to the practical questions of product strategy and development, the modes of discovery allow one to identify what kinds of analytical activity a product, platform, or solution needs to support across a spread of usage scenarios, and then make concrete and well-informed decisions about every aspect of the solution, from high-level capabilities to which specific types of information visualizations better enable these scenarios for the types of data users will analyze.
    The modes are a powerful generative tool for product making, but if you've spent time with young children, or had a really bad hangover (or both at the same time...), you understand the difficulty of communicating using only verbs. So I'm happy to share that we've found traction on another facet of the deep structure of discovery and business analytics. Continuing the language analogy, we've identified some of the ‘nouns’ in the language of discovery: specifically, the consistently recurring aspects of a business that people are looking for insight into. We call these discovery Subjects, since they identify *what* people focus on during discovery efforts, rather than *how* they go about discovery as with the Modes.
    Defining the collection of Subjects people repeatedly focus on allows us to understand and articulate sense making needs and activity in a more specific, consistent, and complete fashion. In combination with the Modes, we can use Subjects to concretely identify and define scenarios that describe people’s analytical needs and goals. For example, a scenario such as ‘Explore [a Mode] the attrition rates [a Measure, one type of Subject] of our largest customers [Entities, another type of Subject]’ clearly captures the nature of the activity — exploration of trends vs. deep analysis of underlying factors — and the central focus — attrition rates for customers above a certain set of size criteria — from which follow many of the specifics needed to address this scenario in terms of data, analytical tools, and methods. We can also use Subjects to translate effectively between the different perspectives that shape discovery efforts, reducing ambiguity and increasing impact on both sides of the perspective divide. For example, from the language of business, which often motivates analytical work by asking questions in business terms, to the perspective of analysis.
    The question posed to a Data Scientist or analyst may be something like “Why are sales of our new kinds of potato chips to our largest customers fluctuating unexpectedly this year?” or “Where can we innovate, by expanding our product portfolio to meet unmet needs?”. Analysts translate questions and beliefs like these into one or more empirical discovery efforts that more formally and granularly indicate the plan, methods, tools, and desired outcomes of analysis. From the perspective of analysis this second question might become, “Which customer needs of type ‘A', identified and measured in terms of ‘B’, that are not directly or indirectly addressed by any of our current products, offer 'X' potential for ‘Y' positive return on the investment ‘Z' required to launch a new offering, in time frame ‘W’? And how do these compare to each other?”. Translation also happens from the perspective of analysis to the perspective of data, in terms of availability, quality, completeness, format, volume, etc.
    By implication, we are proposing that most working organizations — small and large, for profit and non-profit, domestic and international, and in the majority of industries — can be described for analytical purposes using this collection of Subjects. This is a bold claim, but simplified articulation of complexity is one of the primary goals of sensemaking frameworks such as this one. (And, yes, this is in fact a framework for making sense of sensemaking as a category of activity - but we’re not considering the recursive aspects of this exercise at the moment.)
    Compellingly, we can place the collection of subjects on a single continuum — we call it the Sensemaking Spectrum — that simply and coherently illustrates some of the most important relationships between the different types of Subjects, and also illuminates several of the fundamental dynamics shaping business analytics as a domain. As a corollary, the Sensemaking Spectrum also suggests innovation opportunities for products and services related to business analytics.
    The first illustration below shows Subjects arrayed along the Sensemaking Spectrum; the second illustration presents examples of each kind of Subject. Subjects appear in colors ranging from blue to reddish-orange, reflecting their place along the Spectrum, which indicates whether a Subject addresses more the viewpoint of systems and data (Data centric and blue), or people (User centric and orange). This axis is shown explicitly above the Spectrum. Annotations suggest how Subjects align with the three significant perspectives of Data, Analysis, and Business that shape business analytics activity. This rendering makes explicit the translation and bridging function of Analysts as a role, and analysis as an activity.
    Subjects are best understood as fuzzy categories [http://georgelakoff.files.wordpress.com/2011/01/hedges-a-study-in-meaning-criteria-and-the-logic-of-fuzzy-concepts-journal-of-philosophical-logic-2-lakoff-19731.pdf], rather than tightly defined buckets. For each Subject, we suggest some of the most common examples: Entities may be physical things such as named products, or locations (a building, or a city); they could be Concepts, such as satisfaction; or they could be Relationships between entities, such as the variety of possible connections that define linkage in social networks. Likewise, Events may indicate a time and place in the dictionary sense; or they may be Transactions involving named entities; or take the form of Signals, such as ‘some Measure had some value at some time’ - what many enterprises understand as alerts.
    The central story of the Spectrum is that though consumers of analytical insights (represented here by the Business perspective) need to work in terms of Subjects that are directly meaningful to their perspective — such as Themes, Plans, and Goals — the working realities of data (condition, structure, availability, completeness, cost) and the changing nature of most discovery efforts make direct engagement with source data in this fashion impossible. Accordingly, business analytics as a domain is structured around the fundamental assumption that sense making depends on analytical transformation of data. Analytical activity incrementally synthesizes more complex and larger scope Subjects from data in its starting condition, accumulating insight (and value) by moving through a progression of stages in which increasingly meaningful Subjects are iteratively synthesized from the data, and recombined with other Subjects. The end goal of ‘laddering’ successive transformations is to enable sense making from the business perspective, rather than the analytical perspective. Synthesis through laddering is typically accomplished by specialized Analysts using dedicated tools and methods.
    Beginning with some motivating question such as seeking opportunities to increase the efficiency (a Theme) of fulfillment processes to reach some level of profitability by the end of the year (Plan), Analysts will iteratively wrangle and transform source data Records, Values and Attributes into recognizable Entities, such as Products, that can be combined with Measures or other data into the Events (shipment of orders) that indicate the workings of the business. More complex Subjects (to the right of the Spectrum) are composed of or make reference to less complex Subjects: a business Process such as Fulfillment will include Activities such as confirming, packing, and then shipping orders. These Activities occur within or are conducted by organizational units such as teams of staff or partner firms (Networks), composed of Entities which are structured via Relationships, such as supplier and buyer. The fulfillment process will involve other types of Entities, such as the products or services the business provides. The success of the fulfillment process overall may be judged according to a sophisticated operating efficiency Model, which includes tiered Measures of business activity and health for the transactions and activities included. All of this may be interpreted through an understanding of the operational domain of the business's supply chain (a Domain).
    We'll discuss the Spectrum in more depth in succeeding posts.

    Read the article

  • Which tool can list writing access to a specific variable in C?

    - by Lichtblitz
    Unfortunately I'm not even sure how this sort of static analysis is called. It's not really control flow analysis because I'm not looking for function calls and I don't really need data flow analysis because I don't care about the actual values. I just need a tool that lists the locations (file, function) where writing access to a specific variable takes place. I don't even care if that list contained lines that are unreachable. I could imagine that writing a simple parser could suffice for this task but I'm certain that there must be a tool out there that does this simple analysis. As a poor student I would appreciate free or better yet open source tools and if someone could tell me how this type of static analysis is actually called, I would be equally grateful! EDIT: I forgot to mention there's no pointer arithmetic in the code base.

    Read the article
