...
Special character issue in filter and validation rules - the filter-records process will fail and the error log will indicate 'special character error'. Avoid special characters when defining filter and validation rules. If you are defining the meta-model in Excel or CSV, export it to an R file using the dput function, search for special characters, and remove any you find before saving the meta-model to the core data lake.
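As an illustration, special characters can be spotted with a minimal base-R sketch; the file names and the allowed-character pattern below are assumptions made for the example, not platform requirements.

# Read the meta-model exported from Excel/CSV (file name is assumed for illustration)
meta <- read.csv("meta_model.csv", stringsAsFactors = FALSE)

# Export to an R file with dput so the raw representation can be inspected
dput(meta, file = "meta_model.R")

# Flag cells containing characters outside a conservative allowed set
# (letters, digits, underscore, space, dot, comma, slash and hyphen)
has_special <- sapply(meta, function(col) grepl("[^A-Za-z0-9_ .,/-]", col))
which(has_special, arr.ind = TRUE)  # rows and columns to clean before saving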
No parent is defined in meta model - If no parent information is defined in the meta model, i.e. parent location and parent attributes, no nested data will be created.
For integer columns, the default value should be '0'; for numeric columns, it should be '0.01'; for character columns, it should be 'NOT AVAILABLE'.
The impute method for numeric and integer columns should be mean; for character columns, it should be DEFAULT.
Meta-model column names should be all lower case.
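A minimal sketch of how these three conventions might be checked in R is shown below; the column names data_type, default_value and impute_method are assumptions about the meta-model layout, not confirmed names.

# File and column names are assumed for illustration
meta <- read.csv("meta_model.csv", stringsAsFactors = FALSE)
names(meta) <- tolower(names(meta))  # column names must be all lower case

# Expected defaults and impute methods per data type (per the rules above)
expected_default <- c(integer = "0", numeric = "0.01", character = "NOT AVAILABLE")
expected_impute  <- c(integer = "mean", numeric = "mean", character = "DEFAULT")

# List rows whose default value or impute method deviates from the convention
bad_default <- meta$default_value != expected_default[meta$data_type]
bad_impute  <- meta$impute_method != expected_impute[meta$data_type]
meta[bad_default | bad_impute, ]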
IQM metric match should be 'YES' only for integer and numeric columns, and only for those that are meaningful to data analysis. For example, you would set customer_credit_score and customer_age to 'YES' but not postal_code (even if it is defined as an integer).
IQM codes should be 'YES' only for character columns, and only for categorical columns that are meaningful to data analysis. For example, you would set customer_type and customer_group to 'YES' but not customer_name or customer_id.
Entity Name should be exactly as defined in the HDFS folder structure; the entity name is used to dynamically look up data lake paths.
EDA Dimensions should be 'YES' only for character columns that are categorical and meaningful to data analysis. For example, you would set customer_type and customer_group to 'YES' but not customer_name or customer_id. as_of_date should be set to 'NO' for eda_dimension.
EDA Metrics should be 'YES' only for integer and numeric columns that are meaningful to data analysis. For example, you would set customer_credit_score and customer_age to 'YES' but not postal_code (even if it is defined as an integer). as_of_date should be set to 'NO' for eda_metric.
EDA iterate by should be the as_of_date and only the as_of_date. This allows EDA analytics to be created on a daily basis.
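Pulling the IQM and EDA rules above together, an illustrative fragment of meta-model rows could look like the sketch below; the attribute and column names are assumptions made for the example, not the platform's confirmed schema.

# Illustrative rows only; column and attribute names are assumed, not confirmed
meta_rows <- data.frame(
  attribute_name   = c("customer_credit_score", "customer_type", "postal_code", "customer_id", "as_of_date"),
  data_type        = c("integer", "character", "integer", "character", "character"),
  iqm_metric_match = c("YES", "NO",  "NO", "NO", "NO"),
  iqm_codes        = c("NO",  "YES", "NO", "NO", "NO"),
  eda_metric       = c("YES", "NO",  "NO", "NO", "NO"),
  eda_dimension    = c("NO",  "YES", "NO", "NO", "NO"),
  eda_iterate_by   = c("NO",  "NO",  "NO", "NO", "YES"),
  stringsAsFactors = FALSE
)
meta_rows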
entity_attribute_nested_key_role should always be set to 'YES' when the attribute is a parent lookup key and its in-use indicator is also set to 'YES'.
Entity Attribute Calendar Join Key should only be specified as the as_of_date and no other key.
Each record in the meta model must be unique (entity names and object/BI names must all be unique).
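A quick way to verify this before uploading is sketched below in base R; the file name and the entity_name/object_name column names are assumptions for the example.

meta <- read.csv("meta_model.csv", stringsAsFactors = FALSE)
# Names that appear more than once (both lists should come back empty)
meta$entity_name[duplicated(meta$entity_name)]
meta$object_name[duplicated(meta$object_name)]
# Fully duplicated records, if any
meta[duplicated(meta), ]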
Parent Lookup location should have a relative path instead of an absolute path (e.g. /BigAnalytixsPlatform/BAPRAM/Customer/FDL/Stage).
The IQM processor should be run with the nesting records processor ON; if you are not nesting, turn IQM off.
What if one module's entities depend on another module's entities (e.g. BAPOIM (FieldActivity) depends on BAPRAM (ServicePoint))?
In this case, when uploading the meta-model for the parent entity (ServicePoint in this example), you need to change the parent module name to the current module name. (In the example, for ServicePoint, change the module name from 'BAPRAM' to 'BAPOIM'.)
I have a large number of small files in the Meta folder that may impede read performance. What should I do?
You can consolidate multiple Meta files into one using the consolidate functionality. This assumes that the schema of all files in the meta-model folder is the same.
Can I consolidate Meta files?
Yes. As noted in the previous answer, the consolidate functionality merges multiple Meta files into one, provided all files in the meta-model folder share the same schema.
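If you want to consolidate by hand outside the platform tooling, a minimal base-R sketch is shown below; it assumes the Meta files are CSV copies pulled to a local folder and all share the same schema, and the folder path and output file name are hypothetical.

meta_dir <- "meta"  # hypothetical local copy of the Meta folder
files <- list.files(meta_dir, pattern = "\\.csv$", full.names = TRUE)
parts <- lapply(files, read.csv, stringsAsFactors = FALSE)
# Row-binding only works cleanly because every file shares the same schema
consolidated <- do.call(rbind, parts)
write.csv(consolidated, "meta_consolidated.csv", row.names = FALSE)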