
Quick Tips on Data Recast and Data Validation

I was working on a data recast process that involved applying some complex mappings to the source data before it could be loaded to the target cube. This is quite a common exercise in applications related to Essbase, Planning and HFM.

I used a database to hold all those mappings along with the source data. I would recommend this approach: you have full control over which mappings you are applying, and since a data recast is a repeatable process, you can keep track of the mapping changes you make along the way.
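As a rough sketch of what that looks like (the table and column names below are invented for illustration, not from my actual project), the mapping boils down to a join between the source data and the mapping table. Here is a minimal version using SQLite through Python:

    # Minimal sketch: apply a source-to-target account mapping with a join.
    # Table and column names (src_data, acct_map) are hypothetical.
    import sqlite3

    conn = sqlite3.connect("recast.db")
    cur = conn.cursor()
    cur.executescript("""
        CREATE TABLE IF NOT EXISTS src_data (account TEXT, period TEXT, amount REAL);
        CREATE TABLE IF NOT EXISTS acct_map (src_account TEXT, tgt_account TEXT);
    """)

    # An inner join applies the mapping; source rows with no mapping drop out,
    # so comparing row counts before and after is a free validation check.
    mapped = cur.execute("""
        SELECT m.tgt_account, d.period, SUM(d.amount)
        FROM src_data d
        JOIN acct_map m ON m.src_account = d.account
        GROUP BY m.tgt_account, d.period
    """).fetchall()
    conn.close()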

After the mappings were applied, I had to manually add data to a few accounts and subtract data from a few others. I am not an expert in SQL, so Excel pivot tables to the rescue. All good, and the data was ready to be loaded to the cube.
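For what it's worth, the same add/subtract adjustment can also be scripted instead of done in a pivot table. A minimal sketch with pandas, assuming made-up account names and adjustment amounts:

    # Sketch: apply manual adjustments per account before the load.
    # Account names and amounts are purely illustrative.
    import pandas as pd

    data = pd.DataFrame({
        "Account": ["A100", "A200", "A300"],
        "Amount":  [500.0, 250.0, 100.0],
    })
    adjustments = {"A100": 50.0, "A300": -25.0}  # add to A100, subtract from A300

    data["Amount"] += data["Account"].map(adjustments).fillna(0.0)
    print(data.pivot_table(index="Account", values="Amount", aggfunc="sum"))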

Tip #1: Standardization is the key
Always use a standard load rule and a standard data load format for the data file: sort the columns from the dimension with the fewest members to the one with the most, and keep the dense dimension at the end as the data columns. Also, use a standard delimiter in the rule file to prevent future modifications.
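To make that concrete, a standardized file (dimension and member names invented for illustration) would put the sparse dimensions first and the dense dimension's members as the data columns:

    Entity,Product,Account,BegBal,Q1,Q2,Q3,Q4
    E100,P10,A100,#Mi,100,120,130,140
    E100,P20,A200,#Mi,80,90,95,110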

Tip #2: Header is the identity
Have a header record to identify the dimension for each column. A dimension is identifiable on its own if its members carry prefixes; otherwise, always include a header record.
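For example, when members are plain numbers, a row like 100,2000,500 is ambiguous, but a header record (dimension names invented here) makes each column unambiguous:

    Entity,Account,Data
    100,2000,500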

Tip #3: Know your data first
Often I get requests from friends and friends of friends (somehow they reach me through a common circle) saying they hit the error "Essbase encountered a field in the data source that it did not recognize". This generally happens when you have a number in a member column where Essbase expects a member name. There are also cases where the member name really is a number, and when you use Excel to do some transformation, Excel converts the number to exponential format (1e+10). When you import the data into Excel or transform it using "Text to Columns", define the column data format as "Text" as shown below.

[Screenshot: Excel "Text to Columns" wizard with the column data format set to "Text"]
Refer to the error document here for the list of possible error messages you might encounter.
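If you script the transformation instead of using Excel, the same coercion pitfall applies: most CSV readers will happily turn a numeric-looking member name into a float. A minimal sketch with pandas (the file name is hypothetical) that forces every column to stay text:

    # Sketch: read every column as text so a numeric member name such as
    # "10000000000" is never coerced to 1e+10. File name is hypothetical.
    import pandas as pd

    df = pd.read_csv("recast_data.csv", dtype=str, keep_default_na=False)
    print(df.dtypes)  # every column should report as object (i.e. string)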

Tip #4: Special characters are not special
If you have special characters in your member names, ensure that the delimiter in the rule file is not one of those characters. If you have spaces in your member names, surround the names with quotes, like "Opening Inventory", and make sure your delimiter is not a space. If you are saving the file as a CSV and a member name contains ",", follow the same principle as for spaces.
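If you generate the file from a script, letting a CSV library handle the quoting removes this entire class of problems. A small sketch with Python's csv module (member names are examples):

    # Sketch: csv.writer adds the surrounding quotes for you.
    import csv

    rows = [["E100", "Opening Inventory", "1,500"]]  # space and comma inside fields
    with open("load_file.csv", "w", newline="") as f:
        writer = csv.writer(f, quoting=csv.QUOTE_NONNUMERIC)
        writer.writerows(rows)
    # Produces: "E100","Opening Inventory","1,500"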

Tip #5: Avoid Copy Paste
Do not copy the data from Excel into a text file and then update the delimiter by hand.

All the above tips act as checkpoints to ensure that you have a repeatable process and are following a standard methodology. (I used a relational database to store the mappings, but a database is not mandatory; whatever process you are comfortable with works. Just make sure you don't change the process each time you have to run the mappings and load the data again.) Have a broader vision of how the process can be replicated for any future data recasts. I do not have access to ODI / FDMEE, which is another option, albeit a time-consuming one.

I was all set to get the data out and had the right delimiter set. I got the data out of Excel, loaded it to the cube and started validation. The data doesn't tie. But data doesn't lie. Where did it go wrong?

Tip #6: Data doesn't lie
(Recommended for small data sets.) Validation always happens at the top level. Since ours is an ASO cube, I didn't have to run any aggregations. I love ASO. I really love ASO (Awesome Storage Option :) ). I quickly copied the data file into Excel, added the missing members to the file and tadaaaaa, I realized what went wrong. I was loading data to 5 periods (4 periods and BegBal), the BegBal column was completely empty, and the load rule shifted the other 4 period columns to the left: Q1 data got loaded to BegBal, Q2 to Q1 and so on. I added #Mi in the BegBal column and all was good.
This was due to a violation of Tip #5.
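A scripted version of the #Mi fix above, as a sketch (the BegBal column name is from my example; the file names are hypothetical): fill the empty column before the load so the period columns cannot shift.

    # Sketch: replace empty cells in the BegBal column with #Mi so the load
    # rule sees a value in every field and the columns stay aligned.
    import pandas as pd

    df = pd.read_csv("recast_data.csv", dtype=str, keep_default_na=False)
    df["BegBal"] = df["BegBal"].replace("", "#Mi")
    df.to_csv("recast_data_fixed.csv", index=False)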
That's all I have for today. Let me know if you have any more tips to add; you can post them in the comments and I will add them as updates to this blog post.

Happy learning!
