...
The following types of information can be captured in the meta model on a per-attribute basis for a given entity (see the illustrative record after this list).
Data Set Description - The module, entity, and description of the data set.
Attribute Description - Name, Description, Format, and Length. This can be captured at both the data model and BI model levels.
Attribute Security - Masking requirement and masking information.
Attribute Transforms - Transform rule to be applied on inbound and outbound. By default, trim is applied to all categorical attributes.
Attribute Filtering - Whether the attribute should be part of the data filtering process and, if so, which rule.
Attribute IQM - Whether the attribute should be utilized in IQM match. For more information on IQM, please refer to IQM FAQs and BAPCore documentation.
Attribute EDA - Whether the attribute should be utilized in EDA analysis.
Attribute Calendar - Whether the attribute should be utilized in calendar join operations.
Attribute Validation - Whether the attribute should be part of the data validation process and, if so, which rule.
Attribute Quality - Whether the attribute should be part of the data quality process and, if so, which rule.
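To make this concrete, here is a minimal sketch of a single per-attribute record in R; the column names and values below are illustrative assumptions, not the platform's actual schema:

```r
# One illustrative meta-model record for a single attribute;
# column names are assumptions based on the facets listed above
attribute_record <- data.frame(
  module_name        = "BAPRAM",          # data set description
  entity_name        = "Customer",
  attribute_name     = "customer_type",   # attribute description
  attribute_format   = "character",
  attribute_length   = 30,
  masking_required   = "NO",              # attribute security
  transform_rule     = "upper(trim(customer_type)) as customer_type",
  filter_rule        = "customer_type is not null",
  iqm_code_role      = "YES",             # attribute IQM
  eda_dimension_role = "YES",             # attribute EDA
  stringsAsFactors   = FALSE
)
```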
What are the data types supported in the meta model?
...
Special character issue in filter and validation rules - The filter records process will fail, and the error log will indicate a 'special character error'. Avoid special characters when defining filter and validation rules. If you define the meta model in Excel or CSV, export it to an R file using the dput function, search for special characters, and remove any you find before saving the meta model to the core data lake.
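A minimal sketch of that check in R, assuming the meta model was authored as a CSV; the file path and rule column names are illustrative:

```r
# Export the meta model to an R source file so special characters are visible
meta_model <- read.csv("meta_model.csv", stringsAsFactors = FALSE)
dput(meta_model, file = "meta_model.R")

# Flag cells in the rule columns containing characters outside a safe set
rule_cols <- c("filter_rule", "validation_rule")  # illustrative column names
for (col in intersect(rule_cols, names(meta_model))) {
  bad <- grepl("[^A-Za-z0-9_ ,.()=<>'-]", meta_model[[col]])
  if (any(bad)) {
    cat("Special characters in", col, "rows:",
        paste(which(bad), collapse = ", "), "\n")
  }
}
```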
No parent defined in meta model - If no parent information (i.e. parent location and parent attributes) is defined in the meta model, no nested data will be created.
For integers, the default should be '0'; for numerics, '0.01'; and for characters, 'NOT AVAILABLE'.
The impute method for numeric and integer columns should be 'mean'; for character columns, it should be 'DEFAULT'.
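Put together, the default and impute conventions look like this sketch (the attribute and column names are illustrative):

```r
# Illustrative defaults and impute methods per attribute type
defaults <- data.frame(
  attribute_name = c("customer_age", "account_balance", "customer_type"),
  attribute_type = c("integer", "numeric", "character"),
  default_value  = c("0", "0.01", "NOT AVAILABLE"),
  impute_method  = c("mean", "mean", "DEFAULT"),
  stringsAsFactors = FALSE
)
```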
Meta model column names should be all lower case.
IQM metric match should be 'YES' only for integer and numeric columns that are meaningful to data analysis. For example, you would set customer_credit_score and customer_age to 'YES' but not postal_code (even if it is defined as an integer).
IQM codes should be 'YES' only for character columns that are meaningful categorical attributes for data analysis. For example, you would set customer_type and customer_group to 'YES' but not customer_name or customer_id.
Entity Name should be exactly as defined in the HDFS folder structure, since the entity name is used to dynamically look up data lake paths.
EDA Dimensions should be 'YES' only for character columns that are meaningful categorical attributes for data analysis. For example, you would set customer_type and customer_group to 'YES' but not customer_name or customer_id. as_of_date should be set to 'NO' for eda_dimension.
EDA Metrics should be 'YES' only for integer and numeric columns that are meaningful to data analysis. For example, you would set customer_credit_score and customer_age to 'YES' but not postal_code (even if it is defined as an integer). as_of_date should be set to 'NO' for eda_metric.
EDA iterate by should be as_of_date and only as_of_date. This allows EDA analytics to be created on a daily basis.
entity_attribute_nested_key_role should always be set to 'YES' when the attribute is a parent lookup key and its in-use indicator is also set to 'YES'.
Entity Attribute Calendar Join Key should only be specified for as_of_date and for no other key.
Each record in the meta model must be unique (entity names and object/BI names all need to be unique).
Parent Lookup Location should be a relative path rather than an absolute path (e.g., /BigAnalytixsPlatform/BAPRAM/Customer/FDL/Stage).
The IQM processor should be run with the nesting records processor ON; if you are not nesting, turn IQM off.
Regarding nested attributes
We recommend keeping entity_attribute_compare_key_role and entity_attribute_parent_lookup_key_role as NO for as_of_date if the dates do not match between the two entities. If you keep them as YES, nesting will introduce NULLs for those rows where as_of_date does not match between the two entity data sets, and each NULL will be replaced using the impute method defined in the meta model for the given attribute. For other matching ID columns between two entities, entity_attribute_compare_key_role and entity_attribute_parent_lookup_key_role should be YES, with entity_attribute_parent_lookup_location set to the FDL/Stage path of the parent entity. This will nest the column between the two entities based on the defined common attribute (see the sketch below).
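As a sketch, the nesting settings for a child entity that nests Customer attributes by customer_id might look like the rows below; only the parent lookup path comes from this page, the rest is illustrative:

```r
# Illustrative role settings for two attributes of a child entity:
# as_of_date stays out of the nesting keys, customer_id drives the lookup
nesting_rows <- data.frame(
  entity_attribute_name                   = c("as_of_date", "customer_id"),
  entity_attribute_compare_key_role       = c("NO", "YES"),
  entity_attribute_parent_lookup_key_role = c("NO", "YES"),
  entity_attribute_parent_lookup_location =
    c("", "/BigAnalytixsPlatform/BAPRAM/Customer/FDL/Stage"),
  stringsAsFactors = FALSE
)
```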
Regarding entity_attribute_calendar_join_key_role
If the data size is small, do not set entity_attribute_calendar_join_key_role to YES for any of the attributes. (NOTE: In Spark, partitioning is an expensive operation when the dataset is too small or too big.)
If the data size is very large (say, greater than 100 GB), the best practice is likewise not to partition the dataset, i.e. do not set entity_attribute_calendar_join_key_role to YES for any of the attributes.
If the data size is large and you do set entity_attribute_calendar_join_key_role to YES for an attribute, make sure that column is a date column and that the actual data is in the 'YYYY-MM-DD' format. (NOTE: The general rule of thumb is that there should be one column called as_of_date that serves all these purposes.) The value of as_of_date should not be 'NOT AVAILABLE' when calendar_join_key_role is YES for it. A format check along these lines is sketched below.
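A minimal sketch of that format check in R, assuming the entity data is already in a data frame; the function and column names are illustrative:

```r
# Verify the calendar join key is 'YYYY-MM-DD' and never 'NOT AVAILABLE'
check_calendar_key <- function(entity_df, col = "as_of_date") {
  vals <- as.character(entity_df[[col]])
  bad  <- !grepl("^\\d{4}-\\d{2}-\\d{2}$", vals) | vals == "NOT AVAILABLE"
  if (any(bad)) {
    stop(sum(bad), " rows in '", col, "' are not valid 'YYYY-MM-DD' dates")
  }
  invisible(TRUE)
}
```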
...
Here are a couple of things to keep in mind when writing transform rules:
Do not put 'SELECT' at the beginning of the query; it is prepended automatically when the backend processor runs.
The SQL query should end in "as <column_name>" (see the example after this list).
The transform rules you define are executed in the SDL-FDL workload when the TRANSFORM_RECORDS_PROCESSOR is defined in the Ingest Model.
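For example, a rule that upper-cases and trims a column could be stored as the fragment below (the attribute name is illustrative); note that it has no leading SELECT and ends in the "as <column_name>" clause:

```r
# Illustrative transform rule as stored in the meta model
transform_rule <- "upper(trim(customer_type)) as customer_type"
```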