Attunity Ltd., a provider of data integration and big data management software solutions, is launching a new solution, Attunity Compose for Hive, which automates the creation and continuous loading of operational and historical data stores in a data lake.
Integrated with Attunity Replicate to support large-scale and real-time data ingestion, Attunity Compose for Hive eliminates the manual, time-consuming work of ETL development, helping companies get value from Hadoop more quickly and cost-effectively.
Attunity Compose for Hive is part of Attunity’s Compose family of products, which is designed to help organizations automate the creation of analytics-ready datasets. The new offering aims to alleviate the manual and time-consuming process of moving data into data lakes, especially when there are large amounts of data that are constantly changing – and all enterprise data changes, said Dan Potter, vice president, product management and marketing, at Attunity. “We are taking the automatic capabilities that we have with Attunity Compose to be able to build structures and allow people quick access to data. With Compose, we are creating operational data stores and historic data stores directly in Hadoop Hive based on the source information they have.”
Using the new Attunity Compose for Hive, said Potter, enterprises can accelerate ROI for data analytics on Hadoop by rapidly enabling Hive SQL-based systems of record on the platform. This capability enables enterprises to address a variety of analytics use cases, including the need for real-time dashboards and reporting against an operational data store that is continuously updated from source systems. The solution also supports advanced and predictive analytics that are more accurately modeled using historical data stores.
“With Attunity Compose for Hive, we not only create the structure for the Hive-based data stores – operational data store and historic data store – but we solve the problem of those continuous updates so you don’t need to do any coding,” said Potter. “It is really quite complex and messy today and it is one of the barriers to people getting value from the data lake. We have completely automated that process.”
According to Attunity, the new solution can help accelerate ROI for data analytics on Hadoop by automating the data pipeline to create and continuously update both operational and historical systems of record; enabling data consistency between source transactional systems and data lakes; and leveraging the latest advanced SQL capabilities in Hive, including the new ACID MERGE statement (available with Apache Hive 2.2, part of Hortonworks Data Platform 2.6).
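To illustrate the kind of work the solution automates, the Hive ACID MERGE statement referenced above applies inserts, updates, and deletes from a staging table of replicated changes to a target table in a single operation. The sketch below is a generic example of that statement, not Attunity's generated code; the table and column names are hypothetical, and the target must be a transactional (ACID) Hive table.

```sql
-- Hypothetical example: apply captured changes from a staging table
-- (loaded by a replication tool) to an operational data store in Hive.
-- Requires Hive 2.2+ with ACID enabled; 'ods.customers' must be a
-- transactional table, e.g. stored as ORC with TBLPROPERTIES
-- ('transactional'='true').
MERGE INTO ods.customers AS t
USING staging.customer_changes AS s
ON t.customer_id = s.customer_id
WHEN MATCHED AND s.change_op = 'D' THEN DELETE
WHEN MATCHED THEN UPDATE SET
  name       = s.name,
  city       = s.city,
  updated_at = s.updated_at
WHEN NOT MATCHED THEN INSERT VALUES
  (s.customer_id, s.name, s.city, s.updated_at);
```

Before MERGE was available, the same result typically required multi-step INSERT OVERWRITE rewrites of the target table, which is part of the hand-coded complexity the product is positioned to remove.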
For more information, go to www.attunity.com.