Introducing: Gradient for Databricks
Databricks optimization made easy
Wow, the day is finally here! It’s been a long journey, but we’re so excited to announce our newest product: Gradient for Databricks.
Check out our promo video here!
The quick pitch
Gradient is a new tool that helps data engineers know when and how to optimize their Databricks jobs and lower costs – without sacrificing performance.
For the math geeks out there, the name Gradient comes from the vector calculus operator commonly used in optimization algorithms (e.g. gradient descent).
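(For those same math geeks: it’s the ∇ in the classic gradient descent update rule, $\theta_{t+1} = \theta_t - \eta\,\nabla f(\theta_t)$, which iteratively steps a parameter vector downhill along the negative gradient of a cost function $f$, with step size $\eta$.)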
Over the past 18 months of development we’ve worked with data engineers around the world to understand their frustrations when trying to optimize their Databricks jobs. Some of the top pains we heard were:
- “I have no idea how to tune Apache Spark”
- “Tuning is annoying, I’d rather focus on development”
- “There are too many jobs at my company. Manual tuning does not scale”
- “But our Databricks costs are through the roof and I need help”
How did companies get here?
We’ve worked with companies around the world who absolutely love using Databricks. So how did so many companies (and their engineers) get to this efficiency pain point? At a high level, the story typically goes like this:
- “The Honeymoon” phase: We are starting to use Databricks and the engineers love it
- “The YOLO” phase: We need to build faster, let’s scale up ASAP. Don’t worry about efficiency.
- “The Tantrum” phase: Uh oh. Everything on Databricks is exploding, especially our costs! Help!
So what did we do?
We wanted to attack the “Tantrum” problem head on. Internally, our data science, engineering, and product teams worked hand in hand with early design partners using our Spark Autotuner to figure out how to deliver a deeply technical solution that was also easy and intuitive. We poured all of that feedback on the biggest problems into building Gradient:
| User Problem | What Gradient Does |
| --- | --- |
| I don’t know when, why, or how to optimize my jobs | Gradient continuously monitors your clusters, notifies you when a new optimization is detected, estimates the cost/performance impact, and outputs a JSON configuration file so the change is easy to apply (see the illustrative config snippet after this table). |
| I use Airflow or Databricks Workflows to orchestrate our jobs, so everything I use must integrate easily | Our new Python libraries and quick-start tutorials for Airflow and Databricks Workflows make it easy to plug Gradient into your favorite orchestrator (a DAG sketch follows below). |
| I just want to state my runtime requirements and still have my costs lowered | Simply set your ideal max runtime, and we’ll configure the cluster to hit your goal at the lowest possible cost. |
| My company wants us to use autoscaling for our jobs clusters | Whether you use autoscaled or fixed clusters, Gradient can optimize both (or even switch from one to the other). |
| I have hundreds of Databricks jobs, so I need batch import for optimization to be practical | Provide your Databricks token, and we’ll do all the heavy lifting of automatically fetching your qualified jobs and importing them into Gradient (the batch-import sketch below shows the kind of API call involved). |
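To make the first row concrete: a Databricks jobs cluster is defined by a JSON spec, and Gradient’s recommendation file targets fields like instance type and worker count. The snippet below is purely illustrative – the field names follow the standard Databricks cluster spec, but the values are made up and not an actual Gradient recommendation:

```json
{
  "spark_version": "13.3.x-scala2.12",
  "node_type_id": "i3.xlarge",
  "num_workers": 8,
  "aws_attributes": {
    "availability": "SPOT_WITH_FALLBACK",
    "first_on_demand": 1
  }
}
```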
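And here’s a minimal sketch of what applying a recommended config from an Airflow DAG could look like, using the standard Databricks provider for Airflow. A couple of assumptions to flag: `gradient_recommendation.json` is a hypothetical local copy of the config file above, and the actual Gradient library integration is covered in the quick-start tutorials rather than reproduced here.

```python
import json
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import (
    DatabricksSubmitRunOperator,
)

# Hypothetical: a local copy of the Gradient-recommended cluster config.
with open("gradient_recommendation.json") as f:
    recommended_cluster = json.load(f)

with DAG(
    dag_id="nightly_etl",
    start_date=datetime(2023, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_job = DatabricksSubmitRunOperator(
        task_id="run_spark_job",
        databricks_conn_id="databricks_default",  # standard Airflow connection
        new_cluster=recommended_cluster,          # apply the recommended spec
        spark_python_task={"python_file": "dbfs:/jobs/etl.py"},  # placeholder job
    )
```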
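Finally, for a sense of what batch import involves under the hood: the Databricks Jobs 2.1 REST API lets a token holder enumerate a workspace’s jobs, which is the kind of call an importer relies on. A minimal sketch follows – the workspace URL and token are placeholders, and Gradient’s importer does all of this for you:

```python
import requests

WORKSPACE_URL = "https://my-workspace.cloud.databricks.com"  # placeholder
TOKEN = "dapi..."  # Databricks personal access token (placeholder)

# List jobs in the workspace via the Jobs 2.1 API (pagination omitted for brevity).
resp = requests.get(
    f"{WORKSPACE_URL}/api/2.1/jobs/list",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"limit": 25},
)
resp.raise_for_status()

for job in resp.json().get("jobs", []):
    print(job["job_id"], job["settings"]["name"])
```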
We want to hear from you!
Our early customers made Gradient what it is today, and we want to make sure it keeps meeting companies’ needs. We made getting started super easy (just log in to the app here). Feel free to browse the docs here. Please let us know how we did via Intercom (available in both the docs and the app).
More from Sync:
Choosing the right Databricks cluster: Spot instances vs. on-demand clusters, All-Purpose Compute vs. Jobs Compute
Databricks Compute Comparison: Classic Jobs vs Serverless Jobs vs SQL Warehouses