Maximizing Value with AI: A Review of "The AI Playbook"
Introduction
In an era where machine learning (ML) and artificial intelligence (AI) are no longer futuristic concepts but present-day imperatives, The AI Playbook by Eric Siegel emerges as a crucial guide for organizations navigating the complex landscape of AI implementation. As someone deeply entrenched in data science and AI, I've witnessed firsthand the challenges plaguing AI projects, which often lead to disappointingly high failure rates. It's this very issue that I addressed in my article, "How I Know Your Data Science/ML Project Will Fail Before You Even Begin."
Could this book provide the roadmap that so many organizations desperately need? Could it be useful both for data scientists and business executives? As I delved into its pages, I found myself nodding along, recognizing this playbook's potential for data-informed executives, data specialists, and product managers.
Summary
- Many organizations struggle with implementation and face high ML project failure rates. The AI Playbook addresses these challenges by providing a practical guide for both business leaders and technical professionals.
- Traditional ML projects often focus too heavily on technology, leading to deployment failures. The book reframes ML initiatives as business projects, emphasizing operational improvements and change management.
- Successful ML projects require a structured approach that aligns technical work with business objectives. The bizML framework (Value, Target, Performance, Fuel, Algorithm, Launch) provides a step-by-step guide for implementing ML projects with a focus on business value.
- Data quality and quantity are often overlooked in favor of sophisticated algorithms. The book stresses that investing in better data usually yields more significant improvements than pursuing the latest ML techniques.
- There's often a communication gap between technical and business professionals. The book bridges this divide by explaining technical concepts in accessible terms for business leaders while emphasizing business context for data scientists.
- Many ML projects fail due to a lack of cross-functional collaboration and change management. The book provides practical advice on fostering collaboration between teams and managing the organizational changes that come with ML implementation.
- The AI Playbook is recommended for executives, data scientists, and product managers involved in ML initiatives. It earns a 5/5 star rating for its practical wisdom and grounded approach to making ML work for business.
Three Key Insights for Executives, AI Practitioners, and Product Managers
1) Reframing ML Projects as Business Initiatives
One of the most impactful insights from The AI Playbook is the fundamental reframing of ML projects. I have always believed that ML initiatives should not be viewed as technology projects but as business initiatives to improve operational performance. Eric does a great job weaving this theme throughout the book. This shift in perspective is crucial for several reasons:
- Focus on Business Value: The book underscores a crucial point: the most sophisticated ML models and pristine datasets are worthless if they don't alter business outcomes. By framing ML projects as business initiatives, Eric encourages organizations to prioritize tangible results over technical perfection, ensuring that ML efforts translate into real-world improvements. While this sounds obvious, it is surprisingly uncommon in practice.
- Change Management: This reframing puts change management squarely on the agenda. Many ML projects fail because they don't recognize that deploying ML means changing how the business operates. By viewing ML projects as business initiatives, organizations are more likely to employ proper change management techniques, increasing the chances of successful deployment.
- Bridging the Knowledge Gap: Eric calls for a meeting in the middle between business and technical professionals. He argues that business leaders must develop a certain level of data literacy to guide ML projects effectively. In contrast, technical professionals must deepen their understanding of business fundamentals and outcomes. As the author puts it, "To drive business with [ML], you must fully grasp its fundamentals, even if you aren't working 'under the hood.'" He likens this to driving a car—you don't need to know how the engine works, but you do need to understand acceleration, momentum, friction, and the rules of the road.
- Collaborative Approach: This reframing necessitates a more collaborative approach to ML projects. Rather than siloing the technical work, successful projects involve deep collaboration between data scientists and business stakeholders at every step, from defining performance goals to preparing data and developing the model.
- Success Stories: Eric provides compelling examples of successful ML projects prioritizing business objectives. One standout case is UPS's use of ML to predict package deliveries, which saved the company $350 million and 185 million miles annually. This success wasn't just about the technology—it required an incremental rollout, extensive change management, and training to get staff to trust and effectively use the new system.
By reframing ML projects as business initiatives, organizations can avoid the common pitfall of developing technically impressive models that fail to deploy or deliver value. This perspective shift ensures that ML projects are driven by business needs, supported by proper change management, and executed through close collaboration between technical and business professionals.
2) The bizML Framework with a Practical Example
At the heart of The AI Playbook is the bizML framework, a six-step process designed to guide organizations through successful ML project implementation. This framework is particularly valuable because it emphasizes business value from the outset and provides a structured approach that both technical and non-technical professionals can follow.
The six steps of bizML are:
- Value: Establish the deployment goal. This step outlines how ML will enhance business operations. It clearly defines the expected business impact, setting the foundation for the entire project.
- Target: Establish the prediction goal. You specify the exact outcome the model will forecast for each instance. This precision is crucial as it directly influences the model's relevance to business objectives.
- Performance: Establish the evaluation metrics. This step determines how the model's effectiveness is measured. It ensures alignment between technical performance and business success criteria.
- Fuel: Prepare the data. This phase involves structuring and refining the data to meet the project's needs. It's a critical step that often requires significant time and resources but is essential for the model's success.
- Algorithm: Train the model. During this step, the chosen algorithm(s) processes the prepared data to create a predictive model. The resulting model encapsulates the patterns and insights derived from the data.
- Launch: Deploy the model. This final step involves implementing the model in real-world operations. It's where the model's predictions are translated into actionable insights that drive business improvements.
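Before the worked example, here's one minimal and entirely hypothetical way a team might record these six artifacts as a lightweight project charter in Python; the `BizMLCharter` class and its field values are my own illustration, not a template from the book:

```python
from dataclasses import dataclass

@dataclass
class BizMLCharter:
    """One record per ML project, capturing the six bizML artifacts."""
    value: str        # deployment goal: the business improvement sought
    target: str       # prediction goal: what is forecast, per case
    performance: str  # evaluation metrics tied to business success
    fuel: str         # the data to be prepared, and where it comes from
    algorithm: str    # the modeling approach to train on that data
    launch: str       # how predictions will change day-to-day operations

# A charter previewing the churn example walked through next.
charter = BizMLCharter(
    value="Reduce annual churn from 15% to 12% to protect revenue",
    target="Will this customer cancel within the next 3 months?",
    performance="Lift at the top decile, recall, precision, campaign ROI",
    fuel="18 months of per-customer usage, support, and billing history",
    algorithm="An interpretable classifier, such as gradient boosting",
    launch="Daily scoring in the CRM feeding tiered retention campaigns",
)
```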
Let's break down each step and see how the steps work together in practice, using a customer churn prediction project for a SaaS company (a short code sketch consolidating the key arithmetic follows the walkthrough):
- Value: The SaaS company's annual churn rate is currently 15%, and it wants to reduce it to 12%. The goal is to implement a system that identifies at-risk customers and triggers personalized retention campaigns to achieve $15 million in saved revenue.
- Calculations:
- Current customers: 100,000
- Average customer Lifetime Value (LTV): $5,000
- Current annual churn: 15,000 customers (15% of 100,000)
- Goal annual churn: 12,000 customers (12% of 100,000)
- Current lost revenue: 15,000 * $5,000 = $75 million
- Goal lost revenue: 12,000 * $5,000 = $60 million
- Potential saved revenue: $75 million - $60 million = $15 million annually
- Assumptions:
- The calculations assume a linear relationship between churn rate and revenue loss.
- The intervention strategies are assumed to lead to the desired reduction directly. These assumptions need validation through pilot studies or A/B testing.
- The LTV remains constant and is not affected by the retention efforts.
- Target: The prediction goal is: "Will a customer cancel their subscription within the next 3 months?" This timeframe is chosen because historical data shows that 80% of churning customers show signs of disengagement 2-4 months before canceling.
- Performance: The team establishes the following metrics:
- Lift: > 3 at the top decile (i.e., the churn rate among the top 10% of predicted churners should exceed 45%, given the 15% baseline). The team prioritizes lift over accuracy because churn prediction is an imbalanced classification problem: lift measures how much more concentrated churners are in the model-identified group than in the overall customer base. Why not accuracy? With a 15% churn rate, a model that predicts no one will churn is already 85% accurate.
- Recall (percentage of actual churners identified): > 70%
- Precision (percentage of identified churners who actually churn): > 60%
- ROI of retention campaigns: > 300%. Calculations:
- Estimated cost of retention campaign per customer: $500
- Value of retained customer: $5,000 (LTV)
- ROI = (Benefit - Cost) / Cost = ($5,000 - $500) / $500 = 900% = 9x
- The 300% (3x) ROI target is conservative given this calculation
- Fuel: The team prepares 18 months of historical data that is aggregated and normalized on a per-customer basis, including:
- Usage metrics: login frequency, feature utilization, user activity levels
- Support interactions: number of tickets, resolution time, satisfaction scores
- Billing history: payment delays, plan changes, add-on purchases
- Account details: industry, company size, contract length
- Churn events: date and reason for cancellation
- Algorithm: The data science team develops and tests several classification models, evaluating both predictive performance and interpretability (a sketch of how these metrics are computed appears after the results below). They select the GBM for its strong performance and its feature-importance output, which supports stakeholder buy-in and actionable insights. The Deep Neural Network scored marginally better, but the team judged the GBM's interpretability more valuable, and gradient boosting is generally highly effective on structured data like this. Sample model outputs:
- Logistic Regression: 2.5 lift; 65% recall; 55% precision
- Random Forest: 3.2 lift; 72% recall; 62% precision
- Gradient Boosting Machine (GBM): 3.5 lift; 75% recall; 65% precision
- Deep Neural Network: 3.6 lift; 76% recall; 67% precision
- Launch: The chosen model is integrated into the company's CRM system. It scores all customers daily and flags those with > 60% churn probability. These high-risk customers are automatically entered into a tiered intervention program:
- 60-75% risk: Customers receive a triggered email campaign with product usage tips to increase engagement.
- 75-90% risk: Personalized outreach from the customer success team to address specific issues and provide tailored support.
- > 90% risk: Comprehensive account review and a custom retention offer from the sales team to prevent churn.
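To tie the steps together, below is a minimal sketch, written by me rather than taken from the book, of the arithmetic behind the Value, Performance, and Launch figures above; the function names and thresholds simply mirror the hypothetical scenario:

```python
def saved_revenue(customers: int, ltv: float, churn_now: float, churn_goal: float) -> float:
    """Annual revenue protected by moving churn from churn_now to churn_goal."""
    return customers * ltv * (churn_now - churn_goal)

def campaign_roi(benefit: float, cost: float) -> float:
    """ROI as a multiple of cost: (benefit - cost) / cost."""
    return (benefit - cost) / cost

def lift_at_top_decile(top_decile_churn_rate: float, baseline_churn_rate: float) -> float:
    """How concentrated churners are in the model's riskiest 10%."""
    return top_decile_churn_rate / baseline_churn_rate

def retention_tier(churn_probability: float) -> str:
    """Map a churn score to the tiered intervention program above."""
    if churn_probability > 0.90:
        return "account review + custom retention offer"
    if churn_probability > 0.75:
        return "personalized customer-success outreach"
    if churn_probability > 0.60:
        return "triggered email campaign with usage tips"
    return "no intervention"

# Value: 100,000 customers, $5,000 LTV, 15% -> 12% churn.
print(saved_revenue(100_000, 5_000, 0.15, 0.12))   # ~15,000,000 ($15M saved)
# Performance: $5,000 benefit vs. a $500 campaign cost.
print(campaign_roi(5_000, 500))                    # 9.0, i.e., 900% ROI
# Performance: 45% churn in the top decile vs. a 15% baseline.
print(lift_at_top_decile(0.45, 0.15))              # ~3.0 lift
# Launch: an 82% churn score lands in the middle tier.
print(retention_tier(0.82))                        # personalized outreach
```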
After six months, the results show:
- The churn rate was reduced from 15% to 13.8% (annualized)
- $3 million in retained revenue (projected $6 million annually)
- 600 customers retained (from 7,500 expected to churn to 6,900 actual churns)
These results show that the company is progressing towards its goal of reducing churn to 12% and saving $15 million annually, but it has not yet fully achieved it. The project demonstrates promising initial results, with potential for further improvement as the team continues to refine its approach.
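On the modeling side, here's a minimal sketch of how the Performance metrics in the model comparison above can be computed on a labeled validation set; `churn_metrics` and the synthetic data are my own illustration, not the team's actual evaluation code:

```python
import numpy as np

def churn_metrics(y_true, scores, threshold=0.60, top_frac=0.10):
    """Recall, precision at a flag threshold, and lift in the top decile.

    y_true: 1 = churned, 0 = retained; scores: predicted churn probabilities.
    """
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    flagged = scores > threshold
    true_positives = np.sum(flagged & (y_true == 1))
    recall = true_positives / max(y_true.sum(), 1)
    precision = true_positives / max(flagged.sum(), 1)
    # Lift: churn rate in the top 10% of scores vs. the overall base rate.
    k = max(int(len(scores) * top_frac), 1)
    top = np.argsort(scores)[::-1][:k]
    lift = y_true[top].mean() / y_true.mean()
    return recall, precision, lift

# Synthetic validation set: 15% base churn rate, noisy but informative scores.
rng = np.random.default_rng(0)
y = (rng.random(10_000) < 0.15).astype(int)
s = np.clip(0.15 + 0.40 * y + rng.normal(0, 0.20, 10_000), 0, 1)
recall, precision, lift = churn_metrics(y, s)
print(f"recall={recall:.0%}  precision={precision:.0%}  lift={lift:.1f}")
```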
The project's success relied on more than just the technical solution:
- Cross-functional collaboration: Recurring meetings between data science, customer success, and sales teams to refine the intervention strategies based on model insights.
- Change management: A month-long training program for the customer success team on interpreting model outputs and tailoring retention strategies.
- Ongoing monitoring: Bi-weekly model performance reviews and quarterly retraining to adapt to changing customer behaviors and business conditions.
This detailed example demonstrates how the bizML framework ensures business needs drive ML projects, establishing clear, quantifiable goals and metrics for successful deployment. By following this approach, the SaaS company avoided the pitfall of developing a technically impressive model that fails to deliver real business value, instead creating a system that demonstrably improved their bottom line while providing a clear path for continued improvement.
3) The Importance of Data Quality and Quantity
Throughout The AI Playbook, Eric consistently emphasizes the critical role of data in ML projects. He argues that improving data often yields better results than focusing solely on algorithmic sophistication. This insight is encapsulated in a quote from Peter Norvig, Google's Director of Research: "We don't have better algorithms than anyone else. We just have more data."
The book expands on this concept, noting that while more data is beneficial, the quality of that data is equally crucial. Norvig further clarifies, "More data beats clever algorithms, but better data beats more data." This perspective is vital for organizations looking to enhance their ML capabilities, suggesting that investments in data quality and quantity are often more beneficial than pursuing the latest ML algorithms.
Key points The AI Playbook makes about the importance of data include:
- Data as the Source of Predictive Power: Data encodes prior happenings, the experience from which ML will learn. ML software is only as good as the data you give it.
- The Integrity of Output Variables: While ML is robust to noise within input variables, the output variable's integrity is critical. It must align with ground truth as it guides the learning process and serves to evaluate model performance.
- Representativeness of Training Data: Training data must be representative of the data encountered during deployment; the learning cases must reflect the same "reality" in which the model will be used (a minimal drift-check sketch follows this list).
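To make the representativeness point concrete, here's a minimal drift-check sketch using the population stability index (PSI), a common way to compare a feature's training distribution against what the model sees in deployment; the code and example data are my own illustration, not from the book:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI = sum((p_i - q_i) * ln(p_i / q_i)) over bins built from training data.

    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift.
    """
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range deployment values
    p = np.histogram(expected, bins=edges)[0] / len(expected)
    q = np.histogram(actual, bins=edges)[0] / len(actual)
    p, q = np.clip(p, 1e-6, None), np.clip(q, 1e-6, None)  # avoid log(0)
    return float(np.sum((p - q) * np.log(p / q)))

# Hypothetical example: average login frequency drifts downward after launch.
rng = np.random.default_rng(42)
train_logins = rng.normal(20, 5, 10_000)    # training period
deploy_logins = rng.normal(16, 5, 10_000)   # deployment period
print(f"PSI: {population_stability_index(train_logins, deploy_logins):.2f}")
```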
By highlighting the crucial role of data, Eric provides a valuable counterpoint to the often algorithm-centric discussions in the ML field. Improving data quality and quantity usually yields more significant payoffs than focusing solely on algorithmic improvements. This insight can help organizations prioritize their efforts and investments in ML projects, potentially leading to better outcomes and more successful deployments; it can also incentivize business teams to prioritize and invest in how data is created, collected, and analyzed.
Final Thoughts and Rating
The AI Playbook stands out as an invaluable resource in the crowded field of AI and machine learning literature. Its unique strength lies in its practical, business-focused approach to implementing ML projects, providing a much-needed bridge between technical expertise and business acumen.
Key Strengths
- Practical, Actionable Advice: The bizML framework offers a clear, step-by-step guide for implementing ML projects, making it immediately applicable to organizations of all sizes and across various industries.
- Emphasis on Business Value: By consistently focusing on the business impact of ML projects, the book ensures readers never lose sight of the ultimate goal: improving operational performance and driving tangible results.
- Bridging Technical and Business Understanding: The book excels at explaining technical concepts in accessible terms for business leaders while also emphasizing the importance of business context for data scientists and ML practitioners.
- Change Management Focus: The book's emphasis on the human side of ML implementation, including change management and cross-functional collaboration, addresses a critical yet often overlooked ingredient of successful ML deployments.
- Data-Centric Approach: By highlighting the paramount importance of data quality and quantity, the book provides a valuable counterpoint to the often algorithm-centric discussions in the ML field.
Areas for Potential Expansion
While The AI Playbook is comprehensive, there are a few areas where it could potentially expand:
- Building Data Literacy: While the book emphasizes the importance of data literacy, additional guidance on building this capability across organizations would be valuable. However, this topic could be a book in itself and depends heavily on each organization's culture and starting point.
- Data Preparation: The book acknowledges that data preparation typically consumes up to 80% of an ML project's technical effort. While it addresses this challenge, it could delve deeper into the root causes and solutions. From my experience, the primary issue is often cultural: many businesses underinvest in data management, leading to poor data quality and accessibility. The book could emphasize more strongly that robust data preparation is non-negotiable, and offer strategies for improving data collection processes, strengthening governance, and fostering a data-centric culture within organizations.
Recommendations
The AI Playbook is a must-read for several key audiences:
- Executives and Business Leaders: The book provides the knowledge needed to effectively sponsor and guide ML initiatives, helping to bridge the gap between technical possibilities and business realities.
- Data Scientists and ML Practitioners: For technical professionals, this book offers invaluable insights into the business side of ML projects, helping them better align their work with organizational goals and increase the chances of successful deployment. The appendix also includes some excellent resources on how to frame and communicate your projects.
- Product Managers: The bizML framework aligns well with product management principles, making this book particularly useful for product managers working on data-informed products or features.
Rating: 5/5 Stars
The AI Playbook earns a full 5-star rating for its unique and invaluable contribution to the field of applied ML. It is an excellent resource on how data projects can best deliver business value in practice.
The book's strength lies in its practical wisdom. By reframing ML projects as business initiatives, providing a clear framework for implementation, and emphasizing the critical role of data and change management, Eric has created a playbook that addresses the real-world challenges of ML deployment.
Moreover, the book's emphasis on cross-functional collaboration and mutual understanding between business and technical professionals is exactly what the industry needs to increase the success rate of ML projects.
In an era where AI and ML are often hyped beyond recognition, The AI Playbook offers a grounded, practical approach that helps organizations realize the true potential of these powerful technologies. It's not just a book about ML; it's a book about how to make ML work for your business. For anyone involved in ML projects, from conception to deployment, this book is an essential read.