Unleashing the power of Declarative Computing

Insights from our journey building Gradient, the world’s first AI compute optimization engine

Imagine a world where you could simply tell your data infrastructure what you want it to achieve, rather than meticulously configuring every detail. That’s exactly what Sync Co-founder and CEO, Jeff Chou, and Alation Co-founder and CEO, Satyen Sangani, discuss in this episode of the Data Radicals podcast.  

“Instead of having to pick the right resources and set all the configuration settings, just declare the outcomes that you want!” – Jeff Chou, Co-founder & CEO of Sync Computing

Tune into this insightful conversation to learn more about Gradient, the world’s first AI compute optimization engine, or read on for a summary of the key points.  

What is Declarative Computing?

Declarative Computing flips the script on traditional infrastructure management. Instead of specifying exact resources and cluster configurations, you simply input your desired outcomes, such as cost or runtime, and the appropriate resources are allocated based on your goals and needs.

Declarative Compute

Today, data engineers endure the trial-and-error process of configuring clusters for their jobs. This is manual, error-prone, time-consuming, and soul-crushing work. By shifting the focus from managing resources to achieving business goals, declarative computing enables users to simplify infrastructure management. It also aligns data infrastructure management more closely with business objectives, so that performance improvements translate directly into tangible benefits.
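To make the contrast concrete, here is a minimal sketch in Python. The dictionaries, field names, and values are illustrative assumptions, not Gradient’s actual API: the first shows the knobs an engineer picks by hand today, the second shows the outcome-based request that Declarative Computing calls for.

```python
# Hypothetical illustration (not the actual Gradient API): imperative cluster
# configuration versus a declarative, outcome-based request.

# Imperative: the engineer picks every knob by hand and hopes it fits the job.
imperative_cluster_config = {
    "node_type_id": "i3.2xlarge",    # guessed instance type
    "num_workers": 12,               # guessed cluster size
    "spark_conf": {
        "spark.sql.shuffle.partitions": "400",  # guessed Spark tuning
    },
}

# Declarative: the engineer states the outcomes that matter to the business,
# and an optimizer chooses (and keeps re-choosing) the configuration.
declarative_request = {
    "job_name": "nightly_etl",
    "goals": {
        "max_runtime_minutes": 45,     # SLA the job must meet
        "objective": "minimize_cost",  # what to optimize within that SLA
    },
}
```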

Learn more about Declarative Computing

The power of Gradient

Gradient embodies the essence of Declarative Computing. Want to minimize costs while hitting a specific runtime and latency? Just say the word, and Gradient will figure out the optimal configuration for your jobs using ML-powered optimization.

It’s not just another optimization tool; it’s a paradigm shift in how we approach Databricks workloads.

Gradient’s advanced machine-learning algorithms are the key to our success. These models continue to improve using a closed-loop feedback system. After each run, performance and cost data are streamed back into Gradient, so the models can tie those outcomes to the configuration changes that produced them. As a result, projections and recommendations improve every time a job runs.
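As a rough illustration of that closed-loop idea (a toy sketch with made-up names, not Gradient’s actual models or API), the loop below records each run’s configuration, runtime, and cost, then uses that history to recommend the configuration that meets a runtime goal at the lowest observed cost:

```python
# Toy sketch of a closed-loop feedback system: outcomes flow back in,
# recommendations flow back out. All names and logic here are illustrative.
from dataclasses import dataclass, field

@dataclass
class RunOutcome:
    num_workers: int        # configuration used for this run
    runtime_minutes: float  # observed runtime
    cost_dollars: float     # observed cost

@dataclass
class FeedbackLoop:
    history: list = field(default_factory=list)

    def record(self, outcome: RunOutcome) -> None:
        """Stream a run's performance and cost back into the loop's history."""
        self.history.append(outcome)

    def recommend(self, max_runtime_minutes: float) -> int:
        """Pick the cheapest configuration seen so far that still meets the SLA.
        A real system would fit a predictive model rather than do a lookup."""
        feasible = [o for o in self.history if o.runtime_minutes <= max_runtime_minutes]
        if not feasible:
            # Nothing has met the SLA yet, so scale up from the largest run seen.
            return max((o.num_workers for o in self.history), default=8) + 4
        return min(feasible, key=lambda o: o.cost_dollars).num_workers

loop = FeedbackLoop()
loop.record(RunOutcome(num_workers=8, runtime_minutes=60.0, cost_dollars=12.0))
loop.record(RunOutcome(num_workers=16, runtime_minutes=35.0, cost_dollars=18.0))
print(loop.recommend(max_runtime_minutes=45))  # -> 16
```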

Looking to the future

While Gradient currently focuses on Databricks Jobs, our vision extends far beyond. We’re setting our sights on other data clouds and even GPUs in the cloud. The concept behind Declarative Computing can be applied to any repeatable workload, including ETL jobs, open-source Spark, EMR on AWS, serverless functions, Fargate, ECS, Kubernetes, GPUs, and any other system that runs scheduled jobs.

Our end goal is to build a universal platform that effortlessly manages diverse computing environments. We hope you join us on this journey! 

Conclusion

The shift to Declarative Computing isn’t just about technological advancement; it’s about aligning your data infrastructure directly with your business goals. Instead of guessing what resources you need to run your jobs now and in the future, simply share your goals with Gradient and it will manage your clusters for you. 

As cloud infrastructure evolves, the need for agile and cost-effective solutions becomes increasingly critical. Gradient continuously monitors and adapts to changes in your data pipelines, so your data infrastructure is always optimal. Join us as we build a future where data infrastructure optimization is effortless through AI automation.

Interested in learning more about Gradient and what it can do for your organization?

Book a personalized demo here!