Case Study

Restaurant Chain Delivers Machine Learning on AWS


The Customer:

Already doing $10+ billion in annual sales, a top-5 U.S. restaurant chain turned to machine learning to keep delivering exceptional quality and customer experience as they continued to grow.


The Challenge They Faced:

The company needed to get hundreds of models into production using AWS and SageMaker. They realized that highly manual processes and individually built ML pipelines were ultimately unscalable; they needed a framework to streamline the operation of the AWS ML pipelines they relied on to ensure consistently high-quality food and service at scale.


How We Helped:

phData created an opinionated workflow or “assembly line” for getting ML models trained, deployed,
updated, and secured both efficiently and reliably using cloud-native AWS technologies.


What We Got Done:

The new framework has streamlined the restaurant company’s ability to get ML models into production faster, more efficiently, and with less risk, putting their ambitious goal of deploying 400+ forecasting models throughout 2020 within reach and helping them deliver superior food quality and service at increasingly massive scale.

Full story: Leading U.S. Restaurant Chain Delivers Quality At-Scale Through Machine Learning on AWS

A top U.S. restaurant chain, already doing $10+ billion in annual sales, knew that to sustain their aggressive growth trajectory, they needed to continue making good on their core brand promise: delivering exceptional quality and customer service at a competitive price. They weren’t historically a tech-focused company. But to ensure consistency across many hundreds of restaurant locations, they needed to become one.

They launched several machine learning (ML) projects on AWS designed to maintain their differentiation across many hundreds of locations. Among them was a set of computer vision models that ultimately became key to how they ensure food quality (e.g., what does a “good” sandwich look like versus a “bad” one?), as well as prototype models intended to improve order speed and accuracy at drive-thrus as they continue to grow.

However, it wasn’t long before the company understood that machine learning, like the restaurant business, becomes much more complex to do well at an increasingly massive scale.

The ML learning curve

Originally, each ML model was hand-built in Jupyter on a laptop, deployed in a way that required manual runs, and maintained entirely by hand. Because the company was, in effect, constantly reinventing the wheel, it took significant time and developer resources to get each new model into production. And for their computer vision use case, which had to recognize the many different items on their menu, they needed to train and deploy a lot of new models.

ML models were being cobbled together in scattered notebooks, Python programs, and R scripts, without the processes or controls of a central repository and version control. The company had no way of tracking the life of a model from training through to production, and no central solution for monitoring the performance and quality of ML models over time.
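The missing capability described above, a central record of each model's journey from training to production, can be sketched minimally. This is purely an illustration on our part; the class names and lifecycle stages below are hypothetical, not the customer's actual system:

```python
# Illustrative sketch of a central model registry that tracks each model
# version through a fixed lifecycle: training -> staging -> production.
# All names and stages here are hypothetical.
from dataclasses import dataclass, field

STAGES = ("training", "staging", "production")

@dataclass
class ModelRecord:
    name: str
    version: int
    stage: str = "training"
    history: list = field(default_factory=list)  # stages already passed through

class ModelRegistry:
    def __init__(self):
        self._models = {}  # model name -> list of ModelRecord versions

    def register(self, name: str) -> ModelRecord:
        """Create the next version of a model, starting in 'training'."""
        versions = self._models.setdefault(name, [])
        record = ModelRecord(name, len(versions) + 1)
        versions.append(record)
        return record

    def promote(self, record: ModelRecord) -> str:
        """Move a model to the next lifecycle stage and return it."""
        idx = STAGES.index(record.stage)
        if idx == len(STAGES) - 1:
            raise ValueError(f"{record.name} is already in production")
        record.history.append(record.stage)
        record.stage = STAGES[idx + 1]
        return record.stage

registry = ModelRegistry()
m = registry.register("fry-demand-forecast")
registry.promote(m)         # training -> staging
print(registry.promote(m))  # staging -> production: prints "production"
```

Even a registry this simple answers the questions the company could not: which version of a model is live, and how it got there.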

The lack of these best practices contributed to serious inefficiencies and risks.

Ultimately, the restaurant company saw that to deliver quality food and service at-scale, they also needed to deliver technology at-scale. But making machine learning efficient and automated was easier said than done.

Quality on the line

Solving these problems meant building a system of standards, processes, and automated workflows robust enough to get ML models into production and ensure availability — a challenge even for seasoned data scientists.

The restaurant company had been supplementing their team with university students who had talent, but not necessarily experience. And given the complex set of variables unique to the restaurant industry (for example, the linguistic quirks of patrons ordering at the drive-thru), a hodgepodge of off-the-shelf solutions wasn’t an option. Realizing they needed proven data and ML experts, they ultimately decided to partner with phData.

The phData ML team worked to understand the company’s requirements, then created an opinionated workflow, or “assembly line,” based on a standard infrastructure stack including Airflow, AWS SageMaker, and AWS Batch. The team combined this infrastructure foundation with information architecture, process, automation, and best practices for getting ML models trained, deployed, updated, and secured.
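One way such an "assembly line" standardizes training is by generating every SageMaker training-job request from the same template, so hundreds of models follow identical conventions. The sketch below is our own hypothetical illustration, not phData's actual implementation; the bucket, role ARN, and image URI are placeholders:

```python
# Hypothetical sketch: build a standardized request for
# boto3's sagemaker_client.create_training_job(), so that every
# model in the "assembly line" uses the same naming and S3 layout.
# All resource names below are placeholders.

def build_training_job_request(model_name: str, version: int,
                               image_uri: str, role_arn: str,
                               bucket: str) -> dict:
    """Return the request dict for create_training_job()."""
    return {
        "TrainingJobName": f"{model_name}-v{version}",
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": f"s3://{bucket}/{model_name}/train/",
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": f"s3://{bucket}/{model_name}/output/"},
        "ResourceConfig": {
            "InstanceType": "ml.m5.xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 50,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

if __name__ == "__main__":
    req = build_training_job_request(
        "sandwich-quality", 3,
        image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/cv-train:latest",
        role_arn="arn:aws:iam::123456789012:role/SageMakerRole",
        bucket="example-ml-artifacts",
    )
    print(req["TrainingJobName"])  # sandwich-quality-v3
```

An orchestrator such as Airflow can then submit these requests on a schedule, making each new model a configuration change rather than a hand-built pipeline.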

As a result, phData was able to integrate the new ML components and workflow seamlessly with the existing ecosystem of cloud and DevOps tools the company was already using, such as Jenkins and AWS CloudFormation, to manage infrastructure operations.

Scaling technology to scale quality customer experience

The new automated workflow the company created with phData has streamlined their ability to get ML models into production faster, more efficiently, and with less risk.


The customer now has a foundation not only for their computer vision models, but for all the ML initiatives they’re depending on to scale their brand promise of superior food quality and customer experience. They’re hitting the ground running, working toward an ambitious goal of deploying 400+ forecasting models throughout 2020.

And they’re excited to continue working with phData, harnessing data and machine learning to power a range of innovative use cases: from using speech processing to automate drive-thru orders, to micro-forecasting demand (accounting for complex factors like weather and traffic) to predict how many baskets of fries should be cooking at any moment and improve drive-thru efficiency.

Take the next step with phData.

Learn how phData can help solve your most challenging data analytics and machine learning problems.