Search Results

Search found 61882 results on 2476 pages for 'verify dmi pool data'.

  • HPCM 11.1.2.2.x - How to find data in an HPCM Standard Costing database

    - by Jane Story
    When working with a Hyperion Profitability and Cost Management (HPCM) Standard Costing application, there is often a requirement to check data or allocated results using reporting tools, e.g. Smart View. To do this, you retrieve data directly from the Essbase databases related to your HPCM model. For information, running reports is covered in Chapter 9 of the HPCM user documentation. The aim of this blog is to provide a quick guide to finding this data for reporting in the HPCM-generated Essbase database in v11.1.2.2.x of HPCM.

    In order to retrieve data from an HPCM-generated Essbase database, it is important to understand each of the following dimensions in the Essbase database and where data is located within them:

    Measures dimension - identifies Measures.
    AllocationType dimension - identifies Direct Allocation data or Genealogy Allocation data.
    Point Of View (POV) dimensions - there must be at least one, and a maximum of four.
    Business dimensions: Stage Business dimensions - identified by the Stage prefix; Intra-Stage dimensions - identified by the _Intra suffix.

    Essbase outlines and reporting are explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s02.html. For additional details on reporting measures, please review this section of the documentation: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/apas03.html

    Reporting requirements in HPCM quite often start with identifying non-balanced items in the Stage Balancing report. The following documentation link provides help with identifying some of the items within the Stage Balancing report: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/generatestagebalancing.html

    The following are some types of data upon which you may want to report: Stage Data (Direct Input, Assigned Input Data, Assigned Output Data, Idle Cost/Revenue, Unassigned Cost/Revenue, Overdriven Cost/Revenue), Direct Allocation Data, and Genealogy Allocation Data.

    Stage Data

    Stage Data consists of: Direct Input, i.e. input data, the starting point of your allocation, e.g. in Stage 1; Assigned Input Data, i.e. the cost/revenue received from a prior stage (i.e. stage 2 and higher); and Assigned Output Data, i.e. for each stage, the data that will be assigned forward (assigned post-stage data). Reporting on this data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s03.html. Select the following:

    Measures - Direct Input: CostInput, RevenueInput. Assigned Input (from previous stages): CostReceivedPriorStage, RevenueReceivedPriorStage. Assigned Output (to subsequent stages): CostAssignedPostStage, RevenueAssignedPostStage.
    AllocationType - DirectAllocation.
    POV - one member from each POV dimension.
    Stage Business dimensions - any members of the stage business dimensions for the stage you wish to see Stage data for.
    All other dimensions - NoMember.

    Idle/Unassigned/Overdriven

    To view Idle, Unassigned or Overdriven Cost/Revenue, first select the stage for which you want to view this data. If multiple stages have unassigned/idle data, resolve the earliest stage first and re-run the calculation, as differences in early stages will create unassigned/idle data in later stages. Select the following:

    Measures - Idle: IdleCost, IdleRevenue. Unassigned: UnAssignedCost, UnAssignedRevenue. Overdriven: OverDrivenCost, OverDrivenRevenue.
    AllocationType - DirectAllocation.
    POV - one member from each POV dimension.
    Stage Business dimensions - all the stage business dimensions in the stage with unassigned/idle/overdriven data; zoom in on each dimension to find which individual members have unassigned/idle/overdriven data.
    All other dimensions - NoMember.

    Direct Allocation Data

    Direct allocation data shows the data received by a destination intersection from a source intersection where one or more direct assignments exist. Reporting on direct allocation data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s04.html. You would select the following to report direct allocation data:

    Measures - CostReceivedPriorStage.
    AllocationType - DirectAllocation.
    POV - one member from each POV dimension.
    Stage Business dimensions - any members of the SOURCE stage business dimensions and the DESTINATION stage business dimensions for the direct allocations for the stage you wish to report on.
    All other dimensions - NoMember.

    Genealogy Allocation Data

    Genealogy allocation data shows the indirect data relationships between stages. Genealogy calculations run in the HPCM Reporting database only. Reporting on genealogy data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s05.html. Select the following:

    Measures - CostReceivedPriorStage.
    AllocationType - GenealogyAllocation (IndirectAllocation in 11.1.2.1 and prior versions).
    POV - one member from each POV dimension.
    Stage Business dimensions - any stage business dimension members from the STARTING stage, the INTERMEDIATE stage(s) and the ENDING stage in the genealogy.
    All other dimensions - NoMember.

    Notes

    If you still don't see data after checking the above, check that the calculation has been run. A couple of indicators can help with that: note the size of the Essbase cube before and after calculations to ensure that a calculation was run against the database you are examining; export the Essbase data to a text file to confirm that some data exists; and examine the date and time on the task area to see when, if ever, calculations were run and what choices were used (e.g. Genealogy choices). If data does not exist where you expect it, it could be that no calculations/genealogy were run, that no calculations were successfully run, or that the model/data at the feeder location were absent or incompatible, resulting in no allocation, e.g. no driver data.

    Smart View Invocation from HPCM

    From version 11.1.2.2.350 of HPCM (this version will be GA shortly), it is possible to invoke Smart View directly from HPCM. There is guided navigation before the Smart View invocation, and it is then possible to see the selected value(s) in Smart View.

    Click to Download HPCM 11.1.2.2.x - How to find data in an HPCM Standard Costing database (right-click or option-click the link and choose "Save As..." to download this PDF file)

    Read the article

  • Integrating Data Mining into your BI Solution (Presentation)

    I recently gave a live meeting presentation to the UK User Group on Integrating Data Mining into your BI Solution. In it I talk about and demo ways of using your data mining models inside Integration Services, Analysis Services and Reporting Services. This is the first in a series of presentations I will be doing for the UG as I try to get the word out that Data Mining can be for the masses. You can download my deck and my live meeting recording from here.

    Read the article

  • Google Webmaster Tools Data Highlighter says "Failed to load data, please try again later"

    - by George Garside
    I seem to be unable to access the data highlighter in Google Webmaster Tools since I attempted to start a new highlight on a page. Clicking the red Start Highlighting button to open the tagger did nothing, so I refreshed. Now, the page loads without the middle content section, then a few seconds later shows the following error: Failed to load data, please try again later. I can't get any of the middle section to load, even the list of current pages/page sets that have been highlighted—this error shows. I thought it may be a Google service outage, but other sites' data highlighters work fine. It also seems coincidental that it stopped working after I attempted to start highlighting—I was able to list the existing pages and page sets fine before that, and still am able to access the service on other sites. I've tried clearing browser data and have tried Google Chrome as well—same problem. What's happened?

    Read the article

  • Exporting Master Data from Master Data Services

    This white paper describes how to export master data from Microsoft SQL Server Master Data Services (MDS) using a subscription view, and how to import the master data into an external system using SQL Server Integration Services (SSIS). The white paper provides a step-by-step sample for creating a subscription view and an SSIS package.

    Read the article

  • Big Data – Buzz Words: What is NewSQL – Day 10 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the relational database. In this article we will take a quick look at what NewSQL is. What is NewSQL? NewSQL stands for the new breed of scalable, high-performance SQL database vendors. The products sold by NewSQL vendors are horizontally scalable. NewSQL is not a kind of database but a term for vendors who support emerging data products with relational database properties (like ACID, transactions etc.) along with high performance. Products from NewSQL vendors usually rely on in-memory data for speedy access and offer immediate scalability. The term NewSQL was coined by 451 Group analyst Matthew Aslett in this particular blog post. On the definition of NewSQL, Aslett writes: "NewSQL" is our shorthand for the various new scalable/high performance SQL database vendors. We have previously referred to these products as 'ScalableSQL' to differentiate them from the incumbent relational database products. Since this implies horizontal scalability, which is not necessarily a feature of all the products, we adopted the term 'NewSQL' in the new report. And to clarify, like NoSQL, NewSQL is not to be taken too literally: the new thing about the NewSQL vendors is the vendor, not the SQL. In other words, NewSQL incorporates the concepts and principles of Structured Query Language (SQL) and NoSQL languages. It combines the reliability of SQL with the speed and performance of NoSQL. Categories of NewSQL There are three major categories of NewSQL: New Architecture - in this framework each node owns a subset of the data, and queries are split into smaller queries that are sent to the nodes to process the data, e.g. NuoDB, Clustrix, VoltDB. MySQL Engines - highly optimized storage engines for SQL with the interface of MySQL are examples of this category, e.g. InnoDB, Akiban. Transparent Sharding - these systems automatically split a database across multiple nodes, e.g. ScaleArc. Summary In simple words, NewSQL is a kind of database that follows relational database principles and provides scalability like NoSQL. Tomorrow In tomorrow's blog post we will discuss the role of cloud computing in Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Big Data Learning Resources

    - by Lara Rubbelke
    I have recently had several requests from people asking for resources to learn about Big Data and Hadoop. Below is a list of resources that I typically recommend. I'll update this list as I find more resources. Let's crowdsource this... Tell me your favorite resources and I'll get them on the list!

    Books and Whitepapers
    Planning for Big Data (free e-book) - Great primer on the general Big Data space. This is always my recommendation for people who are new to Big Data and are trying to understand it....(read more)

    Read the article

  • The Best Articles for Backing Up and Syncing Your Data

    - by Lori Kaufman
    World Backup Day is March 31st, and we decided to provide you with some useful information to make backing up your data easier. We've published articles about backing up various types of data and settings, both offline and online. There are all kinds of settings on your computer to back up in addition to your personal data, such as Wi-Fi passwords, drivers, and settings for programs like web browsers, Office, and Windows Live Writer. There are also many tools available to help you keep your data and settings backed up.

    Read the article

  • Distortion in format of data in wordpad file when shifted from windows XP to windows 7

    - by Harpreet
    I have many data files which were set to open in WordPad in Windows XP. Those files have a particular format for the data, like the following:

        Name of Data file
        No. of data columns
        Name of data in column_1
        Name of data in column_2
        ...
        Name of data in column_n
        column_1 column_2 column_3 ... column_n

    Now my computer has been formatted and the OS has changed to Windows 7, but when I open my data files in WordPad the above format is no longer present; the format in WordPad in Windows 7 appears distorted. Does anyone know how to restore the format shown above, which is what the data used to look like in XP? I have attached a snapshot of the new, distorted format as seen in WordPad in Windows 7. The snapshot shows 100 column names, but only 5 data columns are present when there should actually be 100 data columns.

    Read the article

  • how to recover a deleted ntfs partition with its data intact after installing ubuntu 13.04

    - by Anson Varghese
    I've installed ubuntu 13.04 onto my hp 2231tx computer. During installation all of my data was erased. I didn't know all three of my partitions would be deleted, and I was shocked to find that all of my personal data was gone. I didn't know what to do to resolve this problem, so I searched Google for an answer. I found a program called TestDisk and used it to recover about half of my data. This recovered data didn't include my personal photos and videos. Is there a way to recover the other half?

    Read the article

  • Data Loading Issues? Try the new Demantra Data Load Guided Resolution

    - by user702295
    Hello! Do you have data loading issues? Perhaps you are trying the new partial schema export tool. New to Demantra is the Data Load Guided Resolution, document 1461899.1. This interactive guide will help you locate known solutions to previously discovered issues quickly, from performance issues and ORA and ODPM errors to collections-related issues that have no hard error number. The guide covers the diagnosis of data being imported into Demantra and data being exported from Demantra. Contact me with any questions or suggestions. Thank You!

    Read the article

  • E-Book on big data (featuring Analysts, Customers and more)

    - by Jean-Pierre Dijcks
    As we are gearing up for Openworld, here is a nice E-book on big data to start paging through. It contains Gartner's take on big data, customer and partner interviews and a lot more good info. Enjoy the read so you come prepared for Openworld!! Read the E-Book here. For those coming to Oracle Openworld (or the Americas Cup races around the same time), you can find big data sessions via this URL. Enjoy!!

    Read the article

  • Choices in Architecture, Design, Algorithms, Data Structures for effective RDF Reasoning and Querying in a Big Data Environment [on hold]

    - by user2891213
    As part of my academic project I would like to know what choices in architecture, design, algorithms and data structures we need in order to provide effective and efficient RDF reasoning and querying in a Big Data environment. Basically I want to get info regarding the below points:

    What are the systems and software for an appropriate architecture?
    What kind of API layer(s) would we need on top of the Big Data stores to make this possible?
    The indexing structures we will need.
    The appropriate algorithms, including appropriate algorithms for query planning across Big Data stores.
    The performance analysis and cost models we will need to justify the design decisions we have made along the way.

    Can anyone please provide pointers? Thanks, David

    Read the article

  • Best Data Structure For Time Series Data

    - by TriParkinson
    Hi all, I wonder if someone could take a minute out of their day to give their two cents on my problem. I would like some suggestions on what would be the best data structure for representing, on disk, a large data set of time series data. The main priority is speed of insertion, with other priorities in decreasing order: speed of retrieval, size on disk, size in memory, speed of removal. I have seen that B+ trees are often used in databases because of their fast search times, but what about their insertion times? Is a linked list really the way to go? Thanks in advance for your time, Tri
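
    A minimal sketch (not from the thread) of the trade-off the question raises: if timestamps arrive mostly in order, a plain append to the end of a sorted sequence is O(1) amortized, while binary search still gives O(log n + k) range retrieval. The class and names below are illustrative, not a real storage engine; a disk-backed version would append records to a file per segment instead of a list.

        import bisect

        class AppendOnlySeries:
            """Illustrative in-memory time-series store: O(1) amortized insert
            for in-order timestamps, O(log n + k) range retrieval."""

            def __init__(self):
                self._times = []     # kept sorted; in-order appends keep it so
                self._values = []

            def insert(self, t, v):
                if not self._times or t >= self._times[-1]:
                    self._times.append(t)       # common fast path: in-order arrival
                    self._values.append(v)
                else:
                    i = bisect.bisect_left(self._times, t)  # rare out-of-order case, O(n)
                    self._times.insert(i, t)
                    self._values.insert(i, v)

            def range(self, t0, t1):
                lo = bisect.bisect_left(self._times, t0)
                hi = bisect.bisect_right(self._times, t1)
                return list(zip(self._times[lo:hi], self._values[lo:hi]))

        s = AppendOnlySeries()
        for t, v in [(1.0, 10.0), (2.0, 11.5), (3.0, 9.8)]:
            s.insert(t, v)
        print(s.range(1.5, 3.0))   # [(2.0, 11.5), (3.0, 9.8)]

    B+ trees pay for general-purpose insertion; data written in timestamp order can exploit the append-only fast path, which is why log-structured designs are popular for this workload.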

    Read the article

  • Data Structure Behind Amazon S3s Keys (Filtering Data Structure)

    - by dimo414
    I'd like to implement a data structure similar to the lookup functionality of Amazon S3. For those of you who don't know what I'm talking about, Amazon S3 stores all files at the root, but allows you to look up groups of files by common prefixes in their names, thereby replicating the power of a directory tree without the complexity of it. The catch is, both lookup and filter operations are O(1) (or close enough that even on very large buckets - S3's disk equivalents - both operations might as well be O(1)). So in short, I'm looking for a data structure that functions like a hash map, with the added benefit of efficient (at the very least not O(n)) filtering. The best I can come up with is extending HashMap so that it also contains a (sorted) list of contents, and doing a binary search for the range that matches the prefix, and returning that set. This seems slow to me, but I can't think of any other way to do it. Does anyone know either how Amazon does it, or a better way to implement this data structure?
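
    For illustration, the approach the poster describes - a hash map plus a sorted key list with a binary search over the prefix range - takes only a few lines of Python with the standard bisect module. Names are illustrative, and this says nothing about how Amazon actually implements S3 listing:

        import bisect

        class PrefixMap:
            """Hash-map gets plus sorted-key prefix scans, as the question sketches."""

            def __init__(self):
                self._map = {}
                self._keys = []      # sorted list of all keys

            def put(self, key, value):
                if key not in self._map:
                    bisect.insort(self._keys, key)   # O(n) worst case; batch/amortize in practice
                self._map[key] = value

            def get(self, key):
                return self._map[key]

            def scan(self, prefix):
                """All (key, value) pairs whose key starts with prefix, in key order."""
                lo = bisect.bisect_left(self._keys, prefix)
                # simplification: assumes no key character sorts at or above U+FFFF
                hi = bisect.bisect_left(self._keys, prefix + "\uffff")
                return [(k, self._map[k]) for k in self._keys[lo:hi]]

        m = PrefixMap()
        for k in ("photos/2010/a.jpg", "photos/2011/b.jpg", "docs/readme.txt"):
            m.put(k, len(k))
        print(m.scan("photos/"))   # the two photo keys, not docs/readme.txt

    The scan is O(log n + k) for k matches rather than O(n), so the poster's idea is less slow than it may seem; the costly part is keeping the key list sorted under heavy insertion.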

    Read the article

  • MySQL: Blank row in table after LOAD DATA INFILE

    - by Tom
    Hi, I'm uploading a large amount of data from a CSV (I'm doing it via MySQL Workbench):

        LOAD DATA INFILE 'C:/development/mydoc.csv'
        INTO TABLE mydatabase.mytable
        CHARACTER SET utf8
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\r';

    However, I'm noticing that it keeps adding an empty line full of nulls/zeros after the last record. I'm guessing it's because of the "LINES TERMINATED" clause. However, I need that to load the data in correctly. Is there some way around this, or some better SQL, to avoid the blank row in the table? Thanks
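
    A plausible culprit (an assumption, not a confirmed answer from the thread) is a terminator mismatch: a Windows-generated CSV usually ends lines with \r\n, and a final terminator after the last record then parses as one empty line. A quick way to inspect what terminators the file actually contains before choosing the LINES TERMINATED BY value - a small Python check, with the path taken from the question:

        # Count which line terminators the CSV actually uses, and whether it
        # ends with one (a trailing terminator is what yields the empty row).
        with open("C:/development/mydoc.csv", "rb") as f:
            data = f.read()

        crlf = data.count(b"\r\n")
        cr = data.count(b"\r") - crlf    # bare \r not followed by \n
        lf = data.count(b"\n") - crlf    # bare \n not preceded by \r
        print("CRLF:", crlf, "bare CR:", cr, "bare LF:", lf)
        print("ends with a terminator:", data.endswith((b"\r\n", b"\r", b"\n")))

    If CRLF dominates, LINES TERMINATED BY '\r\n' is likely the right setting, and the trailing blank row should disappear.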

    Read the article

  • Update tableview instantly as data pushed in core data iphone

    - by user336685
    I need to update the table view as soon as content is pushed into the Core Data database. For this, AppDelegate.m contains the following code:

        NSManagedObjectContext *moc = [self managedObjectContext];
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"FeedItem" inManagedObjectContext:moc]];
        // for loop
        // push data into Core Data & then save context
        [moc save:&error];
        ZAssert(error == nil, @"Error saving context: %@", [error localizedDescription]);
        // for loop ends

    This code triggers the following code from RootViewController.m:

        - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller {
            [[self tableView] beginUpdates];
        }

    But this updates the table view only at the end of the for loop; the table does not get updated immediately after each push into the db. I tried the following code but that didn't work:

        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
            // In the simplest, most efficient, case, reload the table view.
            [self.tableView reloadData];
        }

    I have been stuck with this problem for several days. Please help. Thanks in advance for a solution.

    Read the article

  • Core Data data type for just the date - not including time

    - by Jason
    I am new to Core Data, and it seems like a great way to manage the data store. However, I am also very memory-conscious, due to the fact that the iPhone doesn't have that much of it. I was a little surprised to see that the data types are so limited - e.g. there is a Date type which also includes the time, but no Date type for just the date! All the time information takes up precious bytes of memory. If I just wanted an attribute with the date (e.g. 2/15/2010 rather than 2/15/2010 02:34:48), how could I do this? Is it possible?

    Read the article

  • two-part dice pool mechanic

    - by bythenumbers
    I'm working on a dice mechanic/resolution system based off of the Ghost/Echo (hereafter shortened to G/E) tabletop RPG. Specifically, since G/E can be a little harsh in dealing out consequences and failure, I was hoping to soften the system and add a little more player control, as well as offer the chance for players to evolve their characters into something unique, right from creation. So, here's the mechanic: Players roll 2d12 against the two statistics for their character (each is a number from 2-11, and may be rolled above or below depending on the nature of the action attempted; rolling your stat exactly always fails). Depending on the success of that roll, they add dice to the pool rolled for a modified G/E style action. The acting player gets two dice anyhow, and I am debating offering a bonus die for each success, or a single bonus die for succeeding on both of the statistic-compared rolls. Once the size of the dice pool is set, the entire pool is rolled, and the players are allowed to assign rolled dice to a goal and a danger. Assigned results are judged as follows: 1-4 means the attempted goal fails, or the danger comes true. 5-8 is a partial success at the goal, or partially avoiding the danger. 9-12 means the goal is achieved, or the danger avoided. My concerns are twofold: Firstly, that the two-stage action is too complicated, with two rolls to judge separately before anything can happen. Secondly, that the statistics involved go too far in softening the game. I've run some basic simulations, and the approximate statistics follow:

                   2 dice   up to 3 dice   up to 4 dice
        failure    ~33%     ~25%           ~20%
        partial    ~33%     ~35%           ~35%
        success    ~33%     ~40%           ~45%

    I'd appreciate any advice that addresses my concerns or offers to refine my simulation (right now the first roll is statistically modeled as sign(1d12-1d12), where 0 is a success).
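
    For what it's worth, the first stage is easy to Monte Carlo rather than model analytically. The sketch below follows the post's sign(1d12-1d12) model (treating 0 as a success, as stated), uses the one-bonus-die-per-success variant, and assumes the highest die goes to the goal and the next highest to the danger - that assignment strategy is my assumption, not part of the post:

        import random
        from collections import Counter

        def band(die):
            # 1-4 failure, 5-8 partial, 9-12 success
            return "failure" if die <= 4 else "partial" if die <= 8 else "success"

        def trial(rng):
            # Stage 1: two stat checks, each modeled as sign(1d12 - 1d12),
            # with 0 (a tie) counting as a success, per the post's note.
            successes = sum(1 for _ in range(2)
                            if rng.randint(1, 12) - rng.randint(1, 12) >= 0)
            pool = 2 + successes            # one-bonus-die-per-success variant: 2 to 4 dice
            dice = sorted((rng.randint(1, 12) for _ in range(pool)), reverse=True)
            # Assumed strategy: highest die to the goal, next highest to the danger.
            return pool, band(dice[0]), band(dice[1])

        rng = random.Random(1)
        N = 100_000
        goal = {2: Counter(), 3: Counter(), 4: Counter()}
        for _ in range(N):
            pool, g, _danger = trial(rng)
            goal[pool][g] += 1
        for size in (2, 3, 4):
            total = sum(goal[size].values())
            print(size, "dice:",
                  {b: f"{goal[size][b] / total:.0%}" for b in ("failure", "partial", "success")})

    Swapping in the single-bonus-die variant or a different assignment strategy is a one-line change, which makes it easy to compare the two options being debated.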

    Read the article

  • Exim - Sender verify failed - rejected RCPT

    - by Newtonx
    While checking Exim's log messages I found many entries containing "Sender verify failed" and "rejected RCPT". I'm not an Exim expert, and I'm afraid Exim is not delivering 100% of emails to recipients, because our email marketing application is getting a lower open rate. Can someone help me understand these log messages? Is it my server saying "No Such User Here", or a remote server? 174.111.111.11 represents my server IP. Thanks

    Exim log

        2010-10-02 14:00:19 SMTP connection from myserverdomain.com.br () [174.111.111.11]:54514 I=[174.111.111.11]:25 closed by QUIT
        2010-10-02 14:00:19 SMTP connection from [174.111.111.11]:54515 I=[174.111.111.11]:25 (TCP/IP connection count = 2)
        2010-10-02 14:00:19 H=myserverdomain.com.br () [174.111.111.11]:54515 I=[174.111.111.11]:25 Warning: Sender rate 672.4 / 1h
        2010-10-02 14:00:19 H=myserverdomain.com.br () [174.111.111.11]:54515 I=[174.111.111.11]:25 sender verify fail for <[email protected]>: No Such User Here
        2010-10-02 14:00:19 H=myserverdomain.com.br () [174.111.111.11]:54515 I=[174.111.111.11]:25 F=<[email protected]> rejected RCPT <[email protected]>: Sender verify failed
        2010-10-02 14:00:19 SMTP connection from myserverdomain.com.br () [174.111.111.11]:54515 I=[174.111.111.11]:25 closed by QUIT
        2010-10-02 14:00:19 SMTP connection from [174.111.111.11]:54516 I=[174.111.111.11]:25 (TCP/IP connection count = 2)
        2010-10-02 14:00:19 H=myserverdomain.com.br () [174.111.111.11]:54516 I=[174.111.111.11]:25 Warning: Sender rate 673.3 / 1h
        2010-10-02 14:00:19 H=myserverdomain.com.br () [174.111.111.11]:54516 I=[174.111.111.11]:25 sender verify fail for <[email protected]>: No Such User Here
        2010-10-02 14:00:19 H=myserverdomain.com.br () [174.111.111.11]:54516 I=[174.111.111.11]:25 F=<[email protected]> rejected RCPT <[email protected]>: Sender verify failed
        2010-10-02 14:00:19 SMTP connection from myserverdomain.com.br () [174.111.111.11]:54516 I=[174.111.111.11]:25 closed by QUIT
        2010-10-02 14:00:19 SMTP connection from [174.111.111.11]:54517 I=[174.111.111.11]:25 (TCP/IP connection count = 2)
        2010-10-02 14:00:19 H=myserverdomain.com.br () [174.111.111.11]:54517 I=[174.111.111.11]:25 Warning: Sender rate 674.3 / 1h
        2010-10-02 14:00:20 H=myserverdomain.com.br () [174.111.111.11]:54517 I=[174.111.111.11]:25 sender verify fail for <Luciene_souza_vasconcellos=hotmail.com--2723--bounce@e-mydomain.com.br>: No Such User Here
        2010-10-02 14:00:20 H=myserverdomain.com.br () [174.111.111.11]:54517 I=[174.111.111.11]:25 F=<Luciene_souza_vasconcellos=hotmail.com--2723--bounce@e-mydomain.com.br> rejected RCPT <[email protected]>: Sender verify failed

    Read the article

  • apport-collect fails with "certificate verify failed" when trying to report a bug on launchpad

    - by Francesco
    I am trying to report a bug but I get:

        root@beagle:/usr/lib/python2.7/dist-packages/apport# apport-collect <bug_id>
        ERROR: connecting to Launchpad failed: [Errno 1] _ssl.c:504: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        You can reset the credentials by removing the file "/root/.cache/apport/launchpad.credentials"

    Moreover, Firefox tells me "Certificate is not currently valid for bugs.launchpad.net." What can I do?

    Read the article

  • Set and Verify the Retention Value for Change Data Capture

    - by AllenMWhite
    Last summer I set up Change Data Capture for a client to track changes to their application database to apply those changes to their data warehouse. The client had some issues a short while back and felt they needed to increase the retention period from the default 3 days to 5 days. I ran this query to make that change:

        sp_cdc_change_job @job_type='cleanup', @retention=7200

    The value 7200 represents the number of minutes in a period of 5 days. All was well, but they recently asked how they can verify...(read more)
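
    The excerpt cuts off before the verification step. Independent of the article, the cleanup job's current retention (in minutes) can be read back from the msdb.dbo.cdc_jobs table; below is a sketch that does so from Python via pyodbc, with the connection string as a placeholder:

        import pyodbc

        # Placeholder connection string; point it at the instance hosting the
        # CDC-enabled database (assumes ODBC Driver 17 is installed).
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=myserver;DATABASE=msdb;Trusted_Connection=yes;"
        )
        cur = conn.cursor()
        # One row per CDC job; retention (minutes) is set on the cleanup job.
        cur.execute("SELECT job_type, retention FROM msdb.dbo.cdc_jobs "
                    "WHERE job_type = 'cleanup'")
        for job_type, retention in cur.fetchall():
            print(f"{job_type}: {retention} minutes ({retention / 1440:.1f} days)")

    With @retention=7200, this prints 5.0 days, confirming the change described above.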

    Read the article
