Getting started with Sprngy
Registration
The first step is to register on sprngy.com and select your product preferences. Sprngy provides a trial so you can familiarize yourself with the product set.
Once registration is complete, Sprngy deploys the cloud resources and sends a confirmation email with a link to the user interface when they are ready for use. This usually takes 15-20 minutes.
Getting Started with Sprngy’s Unified Data Analytics Platform (HPEL)
Using the link to the user interface, you can:
Set up Application: Define the data entities grouped under an ‘application’. For example, a ‘Billing’ application can have entities like ‘client’, ‘services’, ‘bills’ and ‘payments’. Relationships between the entities can also be defined here.
Upload Data: Upload data in CSV files for the entities defined (see the sketch after this list).
Define metadata blueprint: Define the ‘Meta Model’ using the steps provided in the interface. A bulk upload feature is also available as an alternative.
Define Data processing preferences: Using the screen provided in the interface, define data processing preferences in the ‘Ingest Model'. A bulk upload feature is also available.
Run workloads: Workloads can now be run to process the data, in two steps:
Staging Data Lake (SDL) to Fast Data Lake (FDL): This runs Sprngy’s pre-built data profiling pipelines based on the Meta Model and Ingest Model definitions. This workload results in a pristine data layer.
Fast Data Lake (FDL) to Business Data Lake (BDL): This runs Sprngy’s pre-built data correlation pipelines based on the Meta Model and Ingest Model definitions. This workload results in a business-ready data layer that provides rapid access to curated and correlated data.
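To make the upload and bulk-upload steps concrete, the sketch below prepares a CSV file for the ‘client’ entity and a Meta Model definition for the ‘Billing’ example. This is a minimal, hypothetical Python sketch: the column names, file names and JSON layout are assumptions for illustration only, not Sprngy’s documented formats; refer to the user interface guide for the actual templates.

import csv
import json

# Illustrative only: column names and the bulk-upload layout below are assumptions,
# not Sprngy's documented formats. See the user interface guide for the actual
# templates expected by the Upload Data and bulk upload screens.

# Prepare a CSV file for the 'client' entity of the 'Billing' application.
clients = [
    {"client_id": "C001", "name": "Acme Corp", "email": "billing@acme.example"},
    {"client_id": "C002", "name": "Globex Inc", "email": "ap@globex.example"},
]

with open("client.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["client_id", "name", "email"])
    writer.writeheader()
    writer.writerows(clients)

# Sketch of a Meta Model bulk-upload definition for the same application,
# listing entity attributes and a relationship between 'bills' and 'client'.
meta_model = {
    "application": "Billing",
    "entities": {
        "client": {"key": "client_id", "attributes": ["name", "email"]},
        "bills": {"key": "bill_id", "attributes": ["client_id", "amount", "due_date"]},
    },
    "relationships": [
        {"from": "bills", "to": "client", "on": "client_id", "type": "many-to-one"},
    ],
}

with open("billing_meta_model.json", "w") as f:
    json.dump(meta_model, f, indent=2)

Keeping these definitions in version-controlled files makes the bulk upload repeatable across trial and production environments.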
Other features the administrative interface provides are:
Define Data Import Routines: Import routines (including connection details) for importing data from other sources can be defined using the ‘Import Model’. Data import routines allow you to mirror data from sources into the raw data lake to offload operational reporting workloads, and to ingest curated and correlated data into the business data lake for advanced analysis. A hypothetical connection definition is sketched after this list.
Define Analytic Models: Algorithms to support machine learning modelling requirements can be defined using ‘Analytic Models’. This is for advanced use cases.
User Management: Features to manage access and permissions for the administrative interface.
Manage Workloads: Features to monitor workload status and to re-run or delete workloads as required.
View Logs: Interface to filter and view application logs.
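As a point of reference for the Import Model, the sketch below shows the kind of information a connection definition for a data import routine might capture. The field names, source system and schedule are assumptions for illustration only, not Sprngy’s documented schema.

import json

# Illustrative only: these field names are assumptions about what an Import Model
# connection definition might capture; they are not Sprngy's documented schema.
# Credentials should come from a secret store, never be hard-coded.
import_routine = {
    "name": "billing_db_mirror",
    "source": {
        "type": "postgresql",            # hypothetical source system to mirror
        "host": "billing-db.internal",   # hypothetical host
        "port": 5432,
        "database": "billing",
        "credentials_ref": "secrets/billing_db",  # reference, not a literal password
    },
    "tables": ["client", "services", "bills", "payments"],
    "target": "raw_data_lake",           # mirror into the raw data lake
    "schedule": "0 2 * * *",             # nightly run, cron syntax
}

with open("billing_import_routine.json", "w") as f:
    json.dump(import_routine, f, indent=2)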
The user interface guide provides in-depth information for all the above features.
Getting Started with Sprngy’s Federated Query & Business Intelligence (HPQL)
Sprngy’s HPQL provides SprngyBI, which offers visualization and query features. With SprngyBI, you can:
Connect to Business Data: SprngyBI provides pre-built connectors to data lakes, relational databases and NoSQL databases.
Create and Share Visualizations: SprngyBI’s rich no-code visualization tools can be used to create and share visualizations in minutes.
Perform ad-hoc analysis with SQL Queries: SprngyBI provides a SQL editor for ad-hoc analysis of data; an example query is sketched below.
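As a point of reference for the SQL editor, the snippet below holds the kind of ad-hoc query you might run against the ‘Billing’ example. The table and column names are hypothetical and are not part of SprngyBI; the query is kept in a small Python string only so it sits alongside the other sketches in this guide.

# Illustrative only: 'bills' and 'client' refer to the hypothetical 'Billing'
# example; this is the style of ad-hoc analysis you might paste into the
# SprngyBI SQL editor.
ADHOC_QUERY = """
SELECT c.name           AS client_name,
       SUM(b.amount)    AS total_billed,
       COUNT(b.bill_id) AS bill_count
FROM   bills b
JOIN   client c ON c.client_id = b.client_id
GROUP  BY c.name
ORDER  BY total_billed DESC
LIMIT  10;
"""

print(ADHOC_QUERY)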
Recommended Previous Reading
Recommended Next Reading
Building Applications with Sprngy
Copyright © Sprngy Corporation. All rights reserved. Not to be reproduced or distributed without express written consent.