
Registration

The first step is to register on sprngy.com and select your product preferences. Sprngy provides a trial to help you get familiar with the product set.

After registration is complete, Sprngy deploys the cloud resources and sends a confirmation email with a link to the administrative user interface once they are ready for use. This usually takes 15-20 minutes.

Getting Started with Sprngy’s Unified Data Analytics Platform (HPEL)

Using the link to the administrative user interface, you can:

  • Set up Application: Define the data entities grouped under an ‘application’. For example, a ‘Billing’ application can have entities like ‘client’, ‘services’, ‘bills’, and ‘payments’. Relationships between the entities are also defined here.

  • Upload Data: Upload data in CSV files for the entities you defined (an illustrative example follows this list).

  • Define Metadata Blueprint: Define the ‘Meta Model’ using the steps provided in the interface. A bulk upload feature is also available as an alternative (a conceptual sketch follows this list).

  • Define Data Processing Preferences: Using the screen provided in the interface, define data processing preferences in the ‘Ingest Model’. A bulk upload feature is also available.

  • Run Workloads: Workloads can now be run to process the data. Workloads run in two steps (a conceptual illustration follows this list):

    • Staging Data Lake (SDL) to Fast Data Lake (FDL): This step runs Sprngy’s pre-built data profiling pipelines based on the Meta Model and Ingest Model definitions, resulting in a pristine data layer.

    • Fast Data Lake (FDL) to Business Data Lake (BDL): This step runs Sprngy’s pre-built data correlation pipelines based on the Meta Model and Ingest Model definitions, resulting in a business-ready data layer that provides rapid access to curated and correlated data.

...

  • Define Data Import Routines: Import routines (including connection details) for importing data from other sources can be defined using the ‘Import Model’. This is for advanced use cases. Data import routines allow you to mirror data from source systems into the raw data lake to offload operational reporting workloads, and to ingest curated and correlated data into the business data lake for advanced analysis (see the sketch after this list).

  • Define Analytic Models: Algorithms to support machine learning modelling requirements can be defined using ‘Analytic Models’. This is for advanced use cases.

  • User Management: Features to manage access and permissions for the administrative interface.

  • Manage Workloads: Features to monitor workload status and to re-run or delete workloads as required.

  • View Logs: An interface to filter and view application logs.

The user interface guide provides in-depth information for all the above features.
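
The sketches below are illustrative Python examples, not part of Sprngy itself; every file, column, and configuration name is an assumption made for the ‘Billing’ example above. First, a minimal sketch of what entity CSV files for the Upload Data step might look like, with a shared client_id column expressing the client-to-bills relationship defined during application setup:

```python
import csv

# Illustrative only: entity files and column names are assumptions,
# not a schema prescribed by Sprngy.

# 'client' entity -- one row per client.
with open("client.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["client_id", "client_name", "onboarded_on"])
    writer.writerow(["C001", "Acme Corp", "2023-01-15"])
    writer.writerow(["C002", "Globex Inc", "2023-03-02"])

# 'bills' entity -- client_id is the column that carries the
# client -> bills relationship defined in the application setup.
with open("bills.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["bill_id", "client_id", "bill_date", "amount"])
    writer.writerow(["B1001", "C001", "2023-02-01", "1250.00"])
    writer.writerow(["B1002", "C002", "2023-04-01", "980.50"])
```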
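
The actual Meta Model and Ingest Model bulk-upload formats are documented in the user interface guide; the sketch below only illustrates the kind of information a metadata blueprint and ingest preferences typically capture. Every key name here is a hypothetical placeholder.

```python
import json

# Conceptual sketch only: the real Meta Model / Ingest Model bulk-upload
# format is defined in the Sprngy user interface guide. All keys below
# are hypothetical placeholders.
billing_blueprint = {
    "application": "Billing",
    "entities": {
        "client": {
            "fields": {
                "client_id": {"type": "string", "nullable": False},
                "client_name": {"type": "string", "nullable": False},
                "onboarded_on": {"type": "date", "nullable": True},
            },
            "primary_key": ["client_id"],
        },
        "bills": {
            "fields": {
                "bill_id": {"type": "string", "nullable": False},
                "client_id": {"type": "string", "nullable": False},
                "bill_date": {"type": "date", "nullable": False},
                "amount": {"type": "decimal", "nullable": False},
            },
            "primary_key": ["bill_id"],
            "references": {"client_id": "client.client_id"},
        },
    },
    # Ingest preferences: e.g. how duplicates and bad records are handled.
    "ingest": {"on_duplicate": "keep_latest", "on_invalid_row": "quarantine"},
}

with open("billing_blueprint.json", "w") as f:
    json.dump(billing_blueprint, f, indent=2)
```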
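
The SDL-to-FDL and FDL-to-BDL workloads are run from the administrative interface and use Sprngy’s pre-built pipelines; you do not write them yourself. Purely to illustrate the two-step idea (profile and cleanse first, then correlate into a business-ready shape), here is a minimal pandas sketch over the sample CSVs above; the cleansing rules and aggregation are assumptions, not Sprngy’s pipeline logic.

```python
import pandas as pd

# Step 1 (conceptually SDL -> FDL): profile and cleanse the raw entity
# data so downstream processing works from a pristine layer.
clients = pd.read_csv("client.csv", parse_dates=["onboarded_on"])
bills = pd.read_csv("bills.csv", parse_dates=["bill_date"])

bills = bills.dropna(subset=["bill_id", "client_id"])   # drop incomplete rows
bills = bills.drop_duplicates(subset=["bill_id"])        # de-duplicate on the key
bills["amount"] = pd.to_numeric(bills["amount"], errors="coerce")

# Step 2 (conceptually FDL -> BDL): correlate entities into a
# business-ready, query-friendly shape.
billing_by_client = (
    bills.merge(clients, on="client_id", how="left")
         .groupby(["client_id", "client_name"], as_index=False)["amount"]
         .sum()
         .rename(columns={"amount": "total_billed"})
)
print(billing_by_client)
```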
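
Likewise, the Import Model is configured through the interface rather than hand-written code. As a rough analogy for what an import routine does (connection details plus a routine that mirrors source tables into the raw data lake), the following sketch copies tables from a hypothetical SQLite source into CSV snapshots; the database, table, and folder names are all assumptions.

```python
import csv
import sqlite3
from pathlib import Path

# Illustration only: the database, table, and folder names are hypothetical
# stand-ins for a real operational source and raw data lake location.
SOURCE_DB = "operational_billing.db"
RAW_LAKE_DIR = Path("raw_data_lake/billing")
RAW_LAKE_DIR.mkdir(parents=True, exist_ok=True)

def mirror_table(table_name: str) -> None:
    """Copy one source table into the raw data lake as a CSV snapshot."""
    with sqlite3.connect(SOURCE_DB) as conn:
        cursor = conn.execute(f"SELECT * FROM {table_name}")
        columns = [desc[0] for desc in cursor.description]
        with open(RAW_LAKE_DIR / f"{table_name}.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(columns)
            writer.writerows(cursor.fetchall())

for table in ("client", "bills", "payments"):
    mirror_table(table)
```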

Getting Started with Sprngy’s Federated Query & Business Intelligence (HPQL)

Sprngy’s HPQL provides SprngyBI, with visualization and query features. With SprngyBI, you can:

Recommended Previous Reading

Little bit about Sprngy...

Sprngy Features

Sprngy Architecture

Sprngy Product Flavors

Recommended Next Reading

Guide to Admin User Interface

Building Applications with Sprngy

Copyright © Springy Corporation. All rights reserved. Not to be reproduced or distributed without express written consent.