
Sprngy implements pipelines and data lakes that give the business high-performance, secure data access while supporting traceability and auditing. Each data lake is organized into three layers: raw (stage) data, pristine data, and business data. Storage is columnar, which keeps access fast and costs low.
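To make the three layers concrete, here is a minimal, hypothetical sketch in Python: a column-oriented table (a dict of column lists, standing in for columnar storage) flows from the raw layer through cleaning into a business view. The layer names mirror the text above; the functions and data are invented for illustration and are not Sprngy's actual API.

```python
# Hypothetical sketch of the three data-lake layers described above.
# A dict of column -> values stands in for columnar storage.

raw_layer = {                      # raw/stage layer: data exactly as ingested
    "order_id": ["A1", "A2", "A2", "A3"],
    "amount":   ["10.5", "n/a", "n/a", "7.25"],
}

def to_pristine(raw):
    """Pristine layer: drop duplicate ids and unparseable amounts."""
    seen, pristine = set(), {k: [] for k in raw}
    for i, oid in enumerate(raw["order_id"]):
        if oid in seen:
            continue
        try:
            value = float(raw["amount"][i])
        except ValueError:
            continue                # unclean row never reaches pristine
        seen.add(oid)
        pristine["order_id"].append(oid)
        pristine["amount"].append(value)
    return pristine

def to_business(pristine):
    """Business layer: arrange pristine data for one business use (totals)."""
    return {"total_orders": len(pristine["order_id"]),
            "total_amount": sum(pristine["amount"])}

pristine_layer = to_pristine(raw_layer)
business_layer = to_business(pristine_layer)
print(business_layer)  # {'total_orders': 2, 'total_amount': 17.75}
```

In a real columnar store (e.g. Parquet-backed), each layer would live in its own storage area, but the flow is the same: raw is never mutated, pristine is derived from it, and business views are derived from pristine.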

Sprngy uses a model-driven approach to data processing. Whether you are profiling data, correlating data, importing from an RDBMS, or running custom algorithms, everything is done by defining models that serve as blueprints for the processing you want to perform. You can define an ETL pipeline without writing code for it; define models that clean your data so users always work with pristine data; define models that arrange data to mirror specific business uses; and define models that import data from wherever it lives, applying transformations during import if you want. You can also write algorithms that derive specific insights from the pristine data.
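The core idea above, a model as a blueprint rather than code, can be sketched in a few lines of Python: a declarative model (plain data, no logic) is interpreted by a small engine. The model schema, operation names, and table data here are invented for illustration; Sprngy's actual model format will differ.

```python
# Hypothetical sketch of model-driven processing: the model is pure
# data, and a generic engine executes it. Nothing here is Sprngy's
# actual model schema.

etl_model = {
    "source": "orders",
    "steps": [
        {"op": "filter", "column": "status", "equals": "shipped"},
        {"op": "rename", "from": "amt", "to": "amount"},
    ],
}

def run_model(model, tables):
    """Interpret the model's steps against an in-memory table."""
    rows = [dict(r) for r in tables[model["source"]]]
    for step in model["steps"]:
        if step["op"] == "filter":
            rows = [r for r in rows if r.get(step["column"]) == step["equals"]]
        elif step["op"] == "rename":
            for r in rows:
                r[step["to"]] = r.pop(step["from"])
    return rows

tables = {"orders": [
    {"status": "shipped", "amt": 10.5},
    {"status": "pending", "amt": 3.0},
]}
print(run_model(etl_model, tables))
# [{'status': 'shipped', 'amount': 10.5}]
```

Because the model is data rather than code, it can be validated, versioned, and reused without the author ever writing the transformation logic by hand, which is the point of the model-driven approach described above.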

For advanced users, the models offer customization features that can be implemented in standard SQL. Versioning is built around the models to provide traceability.
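The two ideas in this paragraph, SQL-based customization and model versioning, can be illustrated together with a short, hypothetical sketch. Here sqlite3 stands in for the query engine, and the table, column names, and version records are invented; only the pattern (each model version stored with its SQL, so any past version remains auditable) reflects the text.

```python
# Hypothetical sketch: customization expressed as plain SQL, with each
# model version kept alongside its SQL for traceability. sqlite3 is a
# stand-in engine; names are illustrative, not Sprngy's schema.
import sqlite3

model_versions = [
    {"version": 1, "sql": "SELECT order_id, amount FROM orders"},
    {"version": 2, "sql": "SELECT order_id, amount FROM orders WHERE amount > 5"},
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("A1", 10.5), ("A2", 3.0)])

# Run the latest model version; earlier versions stay on record,
# so an auditor can see exactly what SQL produced any past result.
latest = max(model_versions, key=lambda m: m["version"])
rows = conn.execute(latest["sql"]).fetchall()
print(latest["version"], rows)  # 2 [('A1', 10.5)]
```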
