Is Databricks’s autoscaling cost-efficient?
Here at Sync we are always trying to learn about and optimize complex cloud infrastructure, with the goal of sharing more knowledge with the community. In our previous blog post we outlined a few high-level strategies companies employ to squeeze more efficiency out of their cloud data platforms. One very popular response from mid-sized to…
The top 6 lessons learned about why companies struggle with cloud data efficiency
Here at Sync, we’ve spoken with companies of all sizes, from some of the largest companies in the world to 50-person startups who desperately need to improve their cloud costs and efficiencies for their data pipelines. Especially in today’s uncertain economy, companies worldwide are implementing best practices and utilizing SaaS tools in an effort…
We’re hiring, let’s build.
What are we building? At Sync we’re building something that is really hard. We’re trying to disrupt a $100B industry where some of the world’s biggest companies live. On top of that, we’re attacking a layer in the tech stack that is mired in complexity, history, and evolution. So why do we think we’re going…
Sync Autotuner for Apache Spark – API Launch!
The Sync Autotuner has enabled developers, data engineers, and data scientists, from small startups to large enterprises, to easily tune their Spark jobs and reduce costs, improve runtime, or both. Infrastructure tuning can significantly impact data engineering productivity. Most developers and data engineers will tell you that trying to figure out the optimal Spark and cluster…
Sync presentation on global optimization of Apache Spark at the 2022 Databricks conference
We were thrilled to present our work at the 2022 Databricks conference in San Francisco! Below is a recording of our talk, where we introduce the Apache Spark Autotuner and the longer-term orchestration work.