Iterative, the MLOps company dedicated to streamlining the workflows of data scientists and machine learning (ML) engineers, today announced a new open-source compute orchestration tool built on Terraform by HashiCorp, Inc., the leader in multi-cloud infrastructure automation software.
Terraform Provider Iterative (TPI) is the first tool built on HashiCorp’s Terraform to simplify ML training on any cloud, helping infrastructure and ML teams save significant time and money on maintaining and configuring their training resources.
Built on Terraform by HashiCorp, an open-source infrastructure-as-code tool that provides a consistent CLI workflow for managing hundreds of cloud services, TPI lets data scientists deploy workloads without having to manage the underlying infrastructure.
Data scientists often need substantial computational resources when training ML models, including expensive GPU instances that must be provisioned for a training experiment and then de-provisioned to save on costs. Terraform helps teams specify and manage compute resources, and TPI complements it with additional functionality customized for machine learning use cases:
Just-in-time compute management – TPI automatically provisions compute resources for an experiment and de-provisions them once the run finishes, helping to reduce costs by up to 90%.
Automated spot instance recovery – ML teams can train experiments on spot instances without worrying about losing their progress if an instance is reclaimed. TPI automatically migrates the training job to a new spot instance so the workload can pick up where it left off.
Consistent tooling for data scientists and DevOps engineers – TPI gives data science and software development teams a shared language and workflow. This simplifies compute management and helps ML models reach production faster.
With TPI, data scientists configure the resources they need once and can deploy anywhere in minutes. Once TPI is part of an ML experiment pipeline, users can deploy on AWS, GCP, Azure, on-prem, or Kubernetes.
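To illustrate, a single TPI task definition can target any of these backends by changing one field. The sketch below is a minimal example assuming TPI's `iterative_task` resource; the exact machine names and field values are illustrative, so consult the provider's documentation before use:

```hcl
terraform {
  required_providers {
    iterative = { source = "iterative/iterative" }
  }
}

provider "iterative" {}

resource "iterative_task" "train" {
  cloud   = "aws"   # switch to "gcp", "az", or "k8s" to deploy elsewhere
  machine = "m+t4"  # illustrative size-plus-GPU spec; names vary by TPI version
  spot    = 0       # request a spot instance at the automatic market price

  # Sync the working directory up, run the script, and retrieve outputs;
  # on spot termination TPI re-provisions and resumes from the synced state.
  storage {
    workdir = "."
    output  = "results"
  }

  script = <<-END
    #!/bin/bash
    pip install -r requirements.txt
    python train.py
  END
}
```

Running `terraform init` followed by `terraform apply` provisions the machine and launches the job, and `terraform destroy` tears everything down once the experiment is done.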
“We chose Terraform as the de facto standard for defining the infrastructure-as-code approach,” said Dmitry Petrov, co-founder and CEO of Iterative. “TPI extends Terraform to fit with machine learning workloads and use cases. It can handle spot instance recovery and lets ML jobs continue running on another instance when one is terminated.”
Iterative.ai, the company behind Iterative Studio and the popular open-source tools DVC and CML, enables data science teams to build models faster and collaborate better with data-centric machine learning tools. Iterative’s developer-first approach to MLOps delivers model reproducibility, governance, and automation across the ML lifecycle, all integrated tightly with software development workflows. Iterative is a remote-first company, backed by True Ventures, Afore Capital, and 468 Capital.