
Creating Intelligent Products (MEAP V01): Generative AI, advanced analytics, smart automation

Leo Porter
Creating Intelligent Products
1. Creating better products with AI
2. Mapping out the modern AI landscape
3. How AI is transforming industries
4. Index
1 Creating better products
with AI
This chapter covers
When and how to get started on your AI journey
The mental model of an AI system
How to structure your team and steer it toward AI success
What the product development process looks like in AI

AI offers huge potential both for automating repetitive tasks and for astonishing creativity. As in most areas of innovation, some companies are quick to jump on the train: AI features such as chat, personalization, and tailored recommendations are becoming ubiquitous in software products.[1] On the other hand, the “laggards” - those companies who miss out on the relevant developments - will most likely fall behind as they watch their peers go from their first attempts at AI to more mature strategies, acquiring relevant AI expertise and building their moats along the journey. And then there are the innovators and enthusiasts who try to use AI for the sake of AI, squeezing it into their products without validating the market opportunity and the actual value it provides to users.

The path to AI leadership is not straightforward. Many companies assume that all they need to do to include AI in their offering is hire AI experts and let them create the technical magic. This approach often leads them straight into the integration fallacy: even if these experts produce the best models and algorithms, their outputs often get stuck at the level of playgrounds, sandboxes, and demos, and never really become full-fledged parts of the production system. Over the years, I have seen a great deal of frustration from data scientists whose technically impeccable AI outputs were never fully integrated and leveraged in user-facing products. Rather, they had the status of experimental projects that gave internal stakeholders the impression of riding the AI wave. With the broad proliferation and utility of AI since the launch of ChatGPT in 2022, companies can no longer afford to use AI as a “lighthouse” feature whose main purpose is to show off their technological acumen.

To successfully integrate AI, it is crucial to ensure that the amazing technological capabilities of AI models meet open user needs and
desires, are properly reflected in the user experience, and add real
value to the overall offering of the business. In many teams and
companies, this is the job of the Product Manager (PM) - but there
are many scenarios where other team members, like UX designers,
business-savvy engineers, and the founders of early-stage startups,
carry this responsibility.[2] This book addresses the learning needs
for developing and launching successful AI products.

Managing the development of AI products is very different from the traditional software development process, with its relatively clear split between backend and frontend components. In AI, you will not only need to add new roles and skills
to your team but also ensure closer collaboration between the
different parties. For example, if you are working on a virtual
assistant, your UX designers will have to understand prompt
engineering to create an effortless user flow. Your data annotators
need to be aware of your brand and the “character traits” of your
virtual assistant to create training data that are well-aligned, and you
as a product manager need to grasp and scrutinize the architecture
of your data pipeline to make sure it meets the governance
requirements and concerns of your users.

This book will provide the necessary guidance and background knowledge to execute on these responsibilities. Throughout the book, you will acquire the following skills:
Critically evaluate AI opportunities and ensure you are using AI
for the right things
Smoothly communicate with engineers and other technical roles
and achieve optimal internal alignment of your AI efforts
Create beautiful user experiences with AI
Communicate the value but also the limitations of your AI
products to users

Product management is an exciting and strategic job mainly because it requires you to look at your company and its products from many
different perspectives - the business, the market, your users and
customers, internal talent, and the available technological
possibilities. This book will explain these perspectives, giving you a
holistic overview of the challenges and trade-offs you will face when
developing AI products and features.

1.1 How AI can enhance your products
AI is a general-purpose technology and can be used to improve a
wide range of applications in different industries. AI can take
different roles in a product. It can be at the center stage from the
beginning of your product development, or enhance your product
with some “nice-to-have” features. Understanding the role of AI in
your product is important to formulate your product strategy and
especially to estimate the feasibility and the resources needed for
your development. In this section, we will look at the different roles
of AI, as well as some specific examples of AI-driven products.

1.1.1 What is the role of AI in your product?


One of the first things you should consider is the role of AI in your
product. This depends on your general business situation - you could
be building a company from scratch on the basis of AI, pivoting
towards an AI product, or adding AI features to an existing product.
Let’s consider the three main scenarios:

Adding AI features: You already have a product and are enhancing it with one or more AI features. For example, with
the emergence of pre-trained Large Language Models, modern
collaboration apps such as Notion and Miro have quickly
implemented AI features (cf. Figure 1.1). This not only extends
the value of their product but also helps them maintain (or
build) the image of an innovative company that is on top of the
newest technological developments.
Building a product driven by AI: Here, the AI is front and
center and provides your core competitive advantage. For
example, GitHub Copilot is a tool that makes autocomplete suggestions as you code along in your favorite code editor. If it were not for the AI, the product could not provide added value. As AI becomes more powerful, it offers more and more
opportunities to increase productivity by automating work that
was previously carried out by humans. For example, in customer
service, AI can help deal with a large number of standard,
frequently recurring requests, thus allowing humans to focus on
more complex, rare cases. In this scenario, AI and automation
are at the core of the product’s value.
Leveraging AI for internal product improvement: AI
doesn’t need to be at the front and center of your product and
positioning. It can also help you improve your products while
staying behind the scenes. For example, if you are building a
highly scalable product with a huge user base, AI can help you
analyze the paths users take through your product, detect
friction and other issues, and even make suggestions on how to
optimize these. Using AI, you can also personalize your
messaging and product experience for fine-grained user
segments or even individuals.

Figure 1.1 The “Ask AI” feature was introduced in Notion to speed up
users’ work with their content
1.1.2 Vertical and horizontal use cases for AI
To further define and position your product, you need to understand
whether it addresses horizontal or vertical use cases. A horizontal
use case is one that is relevant across many different industries and
occupations. For example, a spell-checking tool like Grammarly addresses the rather universal need to write correctly. It can be used not only by workers in many different industries, but also by students and other individuals. By contrast, a vertical use case is focused on a specific industry. Examples are drug discovery in
healthcare, autonomous driving in the automotive industry, and
algorithmic trading in the finance industry.

Depending on whether your product has a horizontal or a vertical focus, the PM job will look different. Horizontal products require you to understand a broad market of potentially diverse users and to identify a common denominator of needs shared by all of these users. By contrast, the market for vertical products is narrower, but also more demanding in terms of domain knowledge. Thus, you will need to bring rich domain knowledge to the table and likely also fine-tune AI models to the domain.

In terms of horizontal use cases, about 75% of the value that generative AI use cases could deliver falls across four areas,
namely customer operations, marketing and sales, software
engineering, and R&D.
In terms of industry-specific applications, banking, high tech,
and life sciences are predicted to see the biggest impact as a
percentage of their revenues from generative AI. For example,
in banking, the technology could deliver value equal to an
additional $200 billion to $340 billion annually if use cases such
as automated customer service, anti-money laundering (cf.
section 3.1.2), and AI-driven content creation were fully
implemented. In high tech, the potential mainly stems from
optimizations of the software development process, while life
sciences can tremendously benefit from the automation of drug
discovery and development (cf. section 3.2.2).

1.1.3 Examples of AI products


AI is already present in a large and growing number of real-life
products. This can be both physical products like cars and medical
devices, and digital products like shopping websites and your
favorite virtual assistant. AI has a broad potential to disrupt not only
scalable B2C applications but also many activities of our work life.
For instance, investment professionals rely on AI to support their decisions
with better insights and analytics, while sales and marketing people
enjoy the relief when AI automates the repetitive, routinized part of
their jobs. AI can cover a broad range of skills - for example, robotic
applications heavily rely on sensorimotor skills, AI-powered text
generation relies on linguistic virtuosity, and applications like mental
health assistants even manifest social and emotional competence.
Thus, the range of tasks and applications that can be enhanced with
AI is massive and diverse. For inspiration and a more concrete
understanding of AI products, let’s look at three specific examples
where AI creates the primary “magic” of a product.

Personalized recommendations with Netflix

The Netflix recommendation system is an AI-powered feature designed to enhance the viewer's streaming experience. Unlike
traditional streaming platforms, Netflix leverages AI to provide
tailored content suggestions.

When using Netflix, users may find that the platform recommends
shows and movies that align with their preferences, often
introducing them to unexpected content they turn out to enjoy. The
recommendation system achieves this by analyzing not only the
behavior of a specific user but also of other users with similar tastes.
The analyzed data includes viewing history, duration, and user
interactions. As users engage with the platform, the AI continually
refines its recommendations, aiming to improve user satisfaction.
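The behavior described above - recommending content based on the tastes of similar users - is the idea behind collaborative filtering. The toy sketch below illustrates it with invented usernames and ratings; Netflix's actual system is vastly more sophisticated:

```python
import math

# Toy ratings: users x movies (0 = not yet watched).
# All names and numbers are invented for illustration.
ratings = {
    "ann":   [5, 4, 0, 0],
    "ben":   [4, 5, 1, 0],  # similar taste to ann
    "carol": [1, 0, 5, 4],  # very different taste
}

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def recommend(user, ratings):
    """Recommend the unwatched movie with the highest
    similarity-weighted rating across the other users."""
    mine = ratings[user]
    scores = [0.0] * len(mine)
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(mine, theirs)  # how alike are our tastes?
        for m, r in enumerate(theirs):
            scores[m] += sim * r
    unwatched = [m for m in range(len(mine)) if mine[m] == 0]
    return max(unwatched, key=lambda m: scores[m])

print(recommend("ann", ratings))  # → 2 (the movie ben, her closest peer, rated)
```

The key design choice is that ben's opinions weigh more than carol's because ben's rating vector is more similar to ann's, which is exactly the "other users with similar tastes" signal described above.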

Netflix's recommendation system shows how AI technology transforms content consumption into a personalized experience. As
users expect more and more personalization from digital products,
the entertainment industry offers interesting opportunities for using
AI to serve broad audiences in a tailored way.

Augmented sales intelligence with Salesforce Einstein

Salesforce Einstein allows companies to scale and elevate their customer relationship management and sales strategies. Imagine
being in a high-stakes sales meeting where precision is crucial.
Salesforce Einstein serves as an AI collaborator, providing you with
tailored information that simplifies and supports your decision-
making. By analyzing customer data, it predicts purchasing trends
and offers recommendations for an effective sales pitch, empowering
your team to secure deals more effectively.
What sets Salesforce Einstein apart is its versatility. It extends
beyond sales assistance to automate repetitive tasks, provide
insights into customer behavior, and offer highly accurate sales
forecasts. This multifaceted AI tool acts as a valuable asset to
business operations.

Salesforce Einstein shows how organizations can optimize their revenue potential by automating repetitive or data-intensive tasks.
As marketing and sales become more data-driven and quantitative,
using AI can help make sense of these large amounts of information
and streamline the sales efforts in a business. While Salesforce
Einstein is an industry-overarching solution for sales and customer
relationship management, there are also many opportunities for
niche products that focus on the needs of specific industries.

Autonomous driving with Tesla Autopilot

Let’s move into the physical domain, where AI can be leveraged in combination with robotics, IoT (Internet of Things), and sensors. Picture yourself driving your Tesla on a highway as Autopilot seamlessly takes control of your vehicle. You lean back while the AI
skillfully navigates through traffic, maintains optimal speeds, and
even changes lanes with a simple tap. It's akin to having a reliable
co-pilot who anticipates every driving maneuver, ensuring a smooth
journey.

The Tesla Autopilot harnesses insights from millions of miles of driving data, constantly improving and adapting with each software
update. However, while comfort and convenience are highly
attractive selling points, safety is by far the more important
requirement that producers of autonomous vehicles are struggling
with. And the reality of Tesla’s advanced driver assistance system
(ADAS) is sobering - as of 2023, it is claimed to have caused 17
deaths and hundreds of accidents.[3] To sidestep the tremendous
responsibility that comes with offering the full package of
autonomous driving, many smaller AI companies target specific
components of the driving experience, like automated steering or
parking. Bigger companies such as Tesla and large automotive
producers integrate these components into a larger system. They
also invest the resources for continuous testing, rigorous fulfillment
of the safety expectations, and the management of the associated
risks and incidents.

In this section, we have seen that AI can be leveraged at multiple levels of a product: as individual features, as the core competitive advantage, or as a behind-the-scenes enabler that improves the product. AI can be used in a broad range of products to enhance
their value in various ways, for example by personalizing the user
experience, augmenting the intelligence of the user, and increasing
their comfort and convenience. Oftentimes, successful AI products
live in areas with a lot of data and/or repetitive tasks that can be
automated. Chapter 3 will provide a deep dive into the potential of
AI for specific industries. For now, feel free to pause and brainstorm
AI opportunities for your own products. The next sections will dive
into the process of AI development and will help you assess the
feasibility and value of your ideas.

1.2 Starting your AI journey


If your company is developing or planning to develop digital
products, the moment to consider and plan your AI strategy is now.
On the one hand, AI is offering huge opportunities since the quality
of AI models keeps improving and AI is becoming generally available to
a broad variety of companies and users. Developments in
foundational models and computing infrastructure are accelerating
AI expansion. On the other hand, as AI is becoming a commodity
rather than a differentiator, companies that ignore AI simply run the
risk of being left behind by their competition. Of course, this does
not mean that you need to start “doing AI” in your next sprint,
iteration, or even quarter. Rather, you should have a clear view of
the opportunities this technology opens up for your business, and a
plan to address these at a time that is right for your business and
your market.

One of the challenges when implementing AI is finding the right balance between building a lightweight MVP (“minimum viable
product”) and setting up a solid foundation for long-term
development. In traditional software development, agile
methodologies support this process. However, with AI, most of us
start with a larger number of unknowns due to the following two
reasons:

Rate at which AI is evolving: The power, capabilities, and risks of AI are not completely understood, and they are evolving at dramatic speed.
Skills available in the companies: Many companies lack the
skills, knowledge, and planning needed for efficient AI
development.

This book will help you understand the building blocks behind decision-making for AI features and products, embracing the whole journey from the MVP to a mature AI-driven product:

The “build-or-buy” decision: Do you want to develop your AI expertise internally, or rather build on top of existing APIs and services while reducing your development effort?
A review of your data situation: Do you already have suitable
data, and if not, what can you do to start collecting them ASAP?
Is it necessary to conduct large-scale data annotation?
Basic infrastructure and setup: If you are going for your own
implementation, which cloud provider will you use? How will you
set up Machine Learning operations (MLOps) to manage the lifecycle of your data and models?
Iterative choice and refinement of your UX concept: Are you
fully automating a task or rather providing assistance? The latter
option involves not only a more interactive UX, but also
educating your users about how they can best interact with the
AI and leverage the “partnership” between human and AI.
Differentiation: Can you generate a competitive advantage using
AI? If yes, where exactly will your moat be - is it in the data,
the market, your internal expertise, or other internal
advantages?
Risk and compliance requirements: This building block
addresses the critical need for managing risks and ensuring
compliance in AI development. It involves assessing and
mitigating potential risks associated with your AI system, such
as biases in data or decision-making, security vulnerabilities,
and unintended consequences. Compliance requirements,
including legal and regulatory obligations, must be carefully
considered. This could encompass data privacy (e.g., GDPR),
industry-specific regulations, and ethical considerations.

Making these strategic considerations will force you and your team
to learn “on the job” and acquire AI knowledge that can be reused
for many different AI features in the future. The more thoughtful
and systematic you are about these aspects from the beginning, the
more straightforward your journey will be.

With this broader picture in the background, let’s now consider three
factors to assess whether your market-side opportunities can be
successfully addressed with AI: the availability and differentiation of
your data, the nature of your learning problems, and a couple of “red flags” that indicate AI might not be the right solution.

1.2.1 Competing on data


Data is a must for starting your AI journey. It can come in different
shapes and formats, but it must contain the learning signals based
on which you can train your model. For some learning problems, you
can just use data that occurs “in the wild”. Let’s say you want to
train a model that can generate text in a lower-resource language like
Hebrew - in this case, using a large quantity of Hebrew text that you
can scrape online will provide a sufficient learning signal. However, if
you want to perform a more specialized task, say sentiment analysis
for Hebrew movie reviews, you will need a set of Hebrew texts
paired with sentiment annotations, for example positive/negative
labels, that can be produced by humans.
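In code, such an annotated dataset is simply a collection of (text, label) pairs. The examples below are invented for illustration (and written in English for readability):

```python
# A minimal annotated sentiment dataset: each example pairs a raw text
# with a human-provided label. All examples are invented.
dataset = [
    ("A gripping plot and wonderful acting.", "positive"),
    ("I fell asleep after twenty minutes.",   "negative"),
    ("One of the best films of the year.",    "positive"),
    ("Predictable, slow, and far too long.",  "negative"),
]

# Split the pairs into parallel sequences of inputs and targets,
# the shape most training pipelines expect.
texts, labels = zip(*dataset)
print(len(texts))  # → 4
```

Whatever tooling you use downstream, the annotation effort boils down to producing many such pairs at consistent quality.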

You might already have an internal dataset to start with - or just a vague feeling of sitting on a wealth of organizational data that is not
appropriately utilized and leveraged. With the right data engineering
effort, you can use this data to kick off a self-reinforcing cycle called
the “data flywheel”. At the beginning, your flywheel will enable
learning. In later sections, we will see that the data flywheel can
also turn into a vicious cycle that propagates negative characteristics
such as toxicity and bias.

How does the data flywheel work? First, a model is trained based on
the available data. Once in production, this model helps you
generate useful AI outputs. These outputs increase the value of your
product and attract more users, which in turn create more data,
better outputs, and even more users - and on it goes:

Figure 1.2 The data flywheel


A classic example of a data flywheel is Netflix’s recommendation
feature. In the beginning, Netflix would just recommend the most
popular videos. Over time, they collected data about views and
ratings, which could be used to train a recommendation algorithm.
The more personalized recommendations make the product more popular, and the additional users and views make the algorithm more and more accurate.

Having an initial dataset allows you to quickly kick-start your development - but what if this kind of data is not (yet) available?
This is a common situation, also called the “cold start” problem, and
you will need to make an extra effort to bootstrap the initial dataset.
In Chapter 11 (Designing your data strategy), we will take a detailed
look into strategies of data collection, annotation, and augmentation.

1.2.2 Understanding your learning problem


Once you have assessed your overall data situation, it’s time to think
about your learning problem. Just like human life, successful AI is
all about learning. As humans, we constantly learn and enrich our
world knowledge from the rich variety of multi-sensory inputs (for
example, sounds, emotions, and visuals) around us. We also
optimize our actions and behavior based on the results of our past
actions. While AI and machine learning are more limited in the
spectrum of inputs and actions, they work along the same lines.
Each AI task can be characterized by its inputs and its outcome - the
knowledge or skill acquired by the AI model once the learning is
done. For the learning to be successful, the inputs have to contain
appropriate learning signals - those bits of information that allow the
model to generalize to new examples of the learning problem. Let’s
return to our example of sentiment analysis on movie reviews. The
traditional input to a sentiment classifier is a set of (text, sentiment
label) examples. The outcome of the learning is for the model to be
able to assign sentiment labels to new, unseen texts. A learning
signal is some attribute of the text that is indicative of the
sentiment label. Consider the following review:

Text: I really enjoyed watching this movie - the plot was fantastic, and anyway Anthony Hopkins is one of my favorite
actors.

Sentiment label: Positive

The expressions really enjoyed, fantastic, and favorite are sure indicators of positive sentiment. The model will see these over and
over again as it loops through other examples and learns that they
correlate with the positive label. Now, let’s consider the following
text:

Text: On Saturday, I went to the cinema with my boyfriend. We bought some ice cream and popcorn and watched a movie with
Anthony Hopkins.

Sentiment label: ?

Pretty dry, isn’t it? Just as we humans wouldn’t be able to confidently assess the sentiment behind this text, the sentiment model will struggle to pick up learning signals from which it can
generalize to new reviews.
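To make the notion of a learning signal concrete, here is a deliberately naive word-counting classifier; the training examples are invented. Words like fantastic that recur with one label come to dominate the score for that label - exactly the correlation described above:

```python
from collections import Counter

# Toy training set: words such as "enjoyed" and "fantastic" recur with
# the positive label and become learning signals. Examples are invented.
train = [
    ("i really enjoyed this movie the plot was fantastic", "positive"),
    ("enjoyed every minute a fantastic cast",              "positive"),
    ("boring plot i almost fell asleep",                   "negative"),
    ("a boring and forgettable movie",                     "negative"),
]

# Count how often each word co-occurs with each label.
counts = {"positive": Counter(), "negative": Counter()}
for text, label in train:
    counts[label].update(text.split())

def predict(text):
    """Score a new text by summing, per label, how often its words
    appeared under that label during training."""
    scores = {label: sum(c[w] for w in text.split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

print(predict("a fantastic movie i enjoyed"))  # → positive
print(predict("what a boring plot"))           # → negative
```

A "dry" text like the cinema-date review above contains mostly neutral words that occur under both labels, so its scores nearly tie - the code-level counterpart of a missing learning signal.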

Now, try to evaluate your current learning problem from three different perspectives:

Complexity: Does the problem require complex handling logic? If your problem is rather simple and well-defined and can be
solved with a manageable set of rules or control structures,
machine learning will most likely be overkill. For example, if you
are managing an investment portfolio whose strategy is based
on a couple of financial indicators such as market cap, EBITDA,
and revenue, you might consider supporting your investment
decisions using hard-coded rules. However, if the problem
involves high variability and numerous edge cases, AI might be
your best bet. This will apply to most tasks that work with
unstructured data, like the sentiment analysis task outlined
above.
Scale: Is the scale of the problem large enough? The higher the
number of predictions that you will make with your model, the
higher the ROI. If you are managing one investment portfolio of
10 companies that you need to review quarterly, you might as
well do so manually and spare the overhead of an AI initiative.
However, if you are managing ten portfolios in dynamic
industries that need to be rebalanced weekly - well, you might
consider seeking support from AI.
Number of classes: Does the task require individual personalization, or can its complexity be reduced? For example, suppose you are considering automating the creation of your content for product marketing. Can you break down your customer base
into a smaller number of customer segments with shared
demographics that expose similar behavior and preferences? If
you can do this without oversimplifying things, you might be
able to significantly simplify your model and cover it with a
couple of proven rules (cf. the point on Complexity).
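The Complexity criterion above can be made concrete. The hypothetical portfolio screen below is a case where a few hard-coded conditions on structured indicators suffice and machine learning would be overkill; the field names and thresholds are invented:

```python
# A hypothetical rule-based investment screen: a handful of explicit
# conditions on structured indicators. No learning needed - the logic
# is simple, well-defined, and fully explainable. Thresholds invented.
def passes_screen(company):
    return (
        company["market_cap"] >= 1_000_000_000   # large enough
        and company["ebitda_margin"] >= 0.15     # profitable enough
        and company["revenue_growth"] >= 0.05    # still growing
    )

candidates = [
    {"name": "Acme",   "market_cap": 2e9, "ebitda_margin": 0.22, "revenue_growth": 0.08},
    {"name": "Globex", "market_cap": 5e8, "ebitda_margin": 0.30, "revenue_growth": 0.12},
]
print([c["name"] for c in candidates if passes_screen(c)])  # → ['Acme']
```

If, by contrast, the decision depended on unstructured inputs such as news text or earnings-call transcripts, no manageable set of such rules would cover the variability, and learning from data becomes the better bet.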

1.2.3 Exclusion criteria


Beyond these positive signals, there are also some business-level risks that you should assess and manage before jumping into your

Your problem requires perfect accuracy. Machine learning is probabilistic for a reason - in most cases, it outputs
predictions with a high likelihood but doesn’t ensure complete
certainty. Thus, most algorithms will not be able to deliver
100% accuracy.
You need a white box. Your organization is accountable for
your AI outputs and you need to provide an explanation for
every output. This is quite doable for many of the simpler
algorithms, but the more powerful your algorithm, the larger its
black-box component, and the more difficult it becomes to
ensure full traceability of the results. For many critical AI
applications, explainability is not only important to win the trust
of the users but is also an important component of AI
regulations such as the EU AI Act.
Your AI solution risks falling behind the data. The data
distribution underlying your problem is constantly changing, and
your algorithm does not have the “time” (aka required data
quantity) to adapt to these changes.
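One common way to live with the lack of perfect accuracy is to let the system act only on high-confidence predictions and route the rest to a human. This is a sketch of that pattern; the cases, labels, and threshold are invented:

```python
# Hypothetical model outputs: each case comes with a predicted label
# and a confidence score. A frequent production pattern is to act
# automatically only above a confidence threshold and escalate the rest.
predictions = [
    ("refund request",  "approve", 0.97),
    ("account closure", "approve", 0.55),  # too uncertain to automate
    ("refund request",  "reject",  0.91),
]

THRESHOLD = 0.8  # invented; tune against the cost of a wrong decision

def triage(predictions, threshold=THRESHOLD):
    """Split cases into those safe to handle automatically and those
    that should be reviewed by a human."""
    auto, human = [], []
    for case, label, confidence in predictions:
        (auto if confidence >= threshold else human).append(case)
    return auto, human

auto, human = triage(predictions)
print(auto)   # → ['refund request', 'refund request']
print(human)  # → ['account closure']
```

This does not make the model more accurate, but it turns "no 100% accuracy" from a blocker into a manageable operating cost.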

In this section, we have discussed the need for companies to consider AI strategies early, highlighting the importance of balancing
MVP development with long-term planning. We introduced seven key
building blocks for AI decision-making and discussed the importance
of learning signals in AI projects. We also took a look at "red flags"
related to AI implementation. Once you have taken stock of your
general situation and envisioned AI strategy, it’s time to zoom in on
specific market opportunities and the AI systems and products that
can address them. In the next section, I introduce a mental model
which will help you validate, design, and develop these systems and
products.

1.3 The mental model of an AI system
In this section, we will consider the building blocks of an AI system
in the context of your business and product. Throughout the book,
we will use the following mental model to describe AI products and
systems:
Figure 1.3 Mental model of an AI system

This mental model encourages builders to think holistically, create a clear understanding of their target product, and update it with new
insights and inputs along the way. The model can be used as a tool
to ease collaboration, align the diverse perspectives inside and
outside the AI team, and build successful products based on a
shared vision. It can be applied not only to new, AI-driven products
but also to AI features that are incorporated into existing products.

The following sections will provide an overview of each of the components. We will start with the business aspects - the
opportunity and the value - and then move to product- and
technology-related elements. While product managers are often well
aware of the business and UX side, AI developments also require an
in-depth understanding of the technical aspects including the data
and the algorithm, since they influence the quality, capabilities, and
limitations of the final product. We will use the running example of
sentiment analysis on movie reviews on a streaming platform,
envisioned to support users when they choose what movie they
want to watch. In the course of the book, we will reuse this model to
analyze many other AI features, products, and solutions.

1.3.1 Opportunity
You might be excited by all the cool stuff you can now do with AI -
but to create value with a new feature or product, you need to back
it with a market opportunity and build something your customers
need and love. In the ideal world, opportunities arise from
customers who say what they need or want. These can be unmet
needs, pain points with the current way of doing things, or desires, i.e. those “wishlist” items that customers are willing to pay for. You
can dig for this information in existing customer feedback, as found
in product reviews and the notes from your sales and success teams.
For example, you might find that users of your streaming site are
complaining that there is no reliable way to rank movies by quality
criteria. Beyond this, you can also conduct proactive customer
research using tools like surveys and interviews. For a detailed walk-
through of the discovery of customer-facing opportunities, you can
consult Teresa Torres’s book Continuous Discovery Habits.

Beyond those needs that are explicitly communicated by customers, there is a myriad of other sources of opportunities, such as:

Market positioning: AI is “trendy” and helps reinforce the image of a business as innovative, high-tech, future-proof, etc.
For example, it can elevate your business from a streaming
website to an AI-powered service and differentiate it from
competitors. This trick should be applied with caution and
combined with other opportunities; otherwise, you risk losing
credibility.
Competitors: When your competitors make a move, it is likely
they have already done the underlying research and validation.
After some time, you can actually see whether the development
was successful. Use this information to learn, iron out the
mistakes, and optimize your own solution. For example, suppose your
competitors have already implemented sentiment analysis. After
some research, you learn that their classifiers are trained on
generic sentiment data, and users complain about the low
quality. This is the time for you to move ahead with a tailored,
domain-focused sentiment analysis that leads to higher accuracy
and more delighted users.
Regulations: Megatrends such as technological disruption and
globalization force regulators to tighten their requirements.
Regulations create pressure and are hard to compromise on,
which makes them a bullet-proof source of opportunity. While sentiment
analysis of movies will hardly address a reasonable regulation,
new requirements like mandatory sustainability reporting for
large companies introduce new, resource-intensive tasks and
activities and open up a myriad of opportunities for automation
and AI.
Enabling technologies: Emerging technologies and
development leaps in existing technologies, such as the wave of
generative AI in 2022-23, can open up new ways of doing
things, or elevate existing applications to a new level. For
example, conversational interfaces and virtual assistants have
existed for decades. Large Language Models significantly
improved their usability and quality, thus enabling a large-scale
proliferation and adoption.

Finally, in the modern product world, opportunities are often less explicit and formal and can be directly validated in experiments,
which allows for a more agile and speedy process of experimentation
and development. Thus, in product-led growth[4], team members
can come up with their own hypotheses without a strict data-driven
argument. These hypotheses can be formulated in a piecemeal
fashion, like modifying a prompt or changing the local layout of
some UX elements, which makes them easy to implement, deploy,
and test. By removing the pressure to provide a priori data for each
new suggestion, this approach leverages the intuition and creativity
of all team members while enforcing a direct validation of the
suggestions. Let’s say that you have already implemented sentiment
analysis on your streaming site - now, your team can start
experimenting with different ways of visualizing and explaining the
results. When starting out, tests can be run with the employees of
the company and in controlled user tests. At some point, you will
want to validate your tweaks “in the wild”, releasing them to your
real users and measuring metrics such as usage and satisfaction.

1.3.2 Value
To dig out the value of your AI offering, you first need to map it to a
specific business problem that you framed accordingly (use case)
and figure out the ROI (return on investment). ROI can be measured
along different dimensions. It forces you to shift away from the
technology and the specific features and focus on the user-side
benefits of the solution. These can be:

Increased efficiency: For example, in customer service, AI can
handle routine requests, free up the human workforce for more
complex requests, and thus lead to a higher throughput.
Efficiency gains often go hand-in-hand with cost savings, since
less human effort is required to perform the same amount of
work.
Increased accuracy: For example, in fraud detection, AI
algorithms can analyze vast amounts of data and identify
patterns and anomalies that are difficult for humans to detect.
A more personalized experience: For example, in financial
services, AI can advise customers on how to optimize their
investment and spending decisions based on the knowledge of
their individual context and situation. Without AI, this job would
require the help of human experts, which is an expensive and
hardly scalable offering.
Convenience: The integration of generative AI in collaboration
tools such as Notion reduces turn-around time and friction for
teams that are using the tools. Notion’s Ask AI feature, for instance,
uses ChatGPT to seamlessly assist users with writing and content
creation.
Competitive edge: For example, in the investment space,
having access to unique data-driven insights creates an
information advantage that sets you apart from your
competitors and allows you to make more informed investment
decisions.
Compliance: For example, by using automated compliance checks
against relevant regulations, companies can improve their
regulatory readiness, avoid fines, and build a strategic
relationship with regulators.

While efficiency and accuracy ROI can be directly quantified, for “softer”, less immediate gains like convenience and personalization,
you will need to think of more indirect proxy metrics like user
satisfaction. Finally, thinking in terms of end-user value will not only
shift your focus from concrete features to the value that you can
eventually charge your users for. As a welcome side effect, it can also
reduce technical detail in your public communication. This will
prevent you from accidentally inviting unwanted competition to the
party.

A fundamental aspect of value that will be emphasized in this book is
the impact of AI solutions on sustainability and our society. For
example, in subsequent chapters, we will talk about AI systems that
allow users to optimize the usage of scarce natural resources such
as water. Modern companies are subject not only to tighter
sustainability regulations, but also to close public scrutiny.
Sustainability is a key area where the interests of the company
intersect with those of regulators and society, and it should be
actively explored for its value-creation potential.

Getting back to sentiment analysis on movie reviews, the main value
lies in increased efficiency and accuracy: with a clear quantitative
indicator of the popularity of a film, your users spend less time
browsing through movies to make their choice. They are also more
likely to pick high-quality movies based on the sentiment data you
provide. Finally, over time, your AI might also create emotional and
reputational value for your brand, differentiating it as a streaming
platform that is “in the know”, respects the time and voice of its
users, and quickly steers them to high-quality entertainment.
In later chapters, we will see how you can formulate and
communicate your AI value to various stakeholders, as well as guide
the development to maximally realize this value.

1.3.3 Data
For any kind of AI and machine learning, you need to collect and
prepare your data so it reflects the real-life inputs and provides the
right learning signal for your model (cf. Section 1.2.2). There are
different ways to get your hands on a decent dataset:

You can use an existing dataset. This can either be a standard
machine learning dataset or a dataset with a different
initial purpose that you adapt for your task. In our sentiment
analysis example, you could use the IMDB Movie Reviews
Dataset which contains sentiment-annotated movie reviews to
bootstrap an initial model. The downside of this solution is that
existing datasets will probably not fully reflect the distribution of
your real-life data.
You can annotate the data manually to create the right
learning signals. For example, if you have a set of movie
reviews, human annotators can sit down and assign a sentiment
label to each review. This will move your data closer to the real-
world distribution, thus improving the accuracy of the final
model. Annotation can be done either internally or using an
external provider or a crowdsourcing service such as Amazon
Mechanical Turk. The quality of crowdsourced data is often
lower, so you might need to annotate more to compensate for
the noise.
If you have an annotated dataset but find out that it is not large
enough for learning, you can add more examples to it using
data augmentation. Thus, in our sentiment analysis example,
you could generate paraphrases of the reviews in your dataset
to increase the number of samples per sentiment label.
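To make the augmentation idea concrete, here is a minimal sketch of lexical augmentation by synonym swapping. The synonym table is a toy assumption; a production pipeline would rather use a paraphrase model or a lexical resource such as WordNet.

```python
import random

# Toy synonym table - an illustrative assumption, not a real resource.
SYNONYMS = {
    "great": ["excellent", "wonderful"],
    "boring": ["dull", "tedious"],
    "movie": ["film"],
}

def augment(review, rng=random.Random(42)):
    """Create a paraphrased copy of a review by swapping known synonyms."""
    words = review.split()
    swapped = [rng.choice(SYNONYMS[w]) if w in SYNONYMS else w for w in words]
    return " ".join(swapped)

print(augment("a great movie"))  # e.g. "a wonderful film"
```

Because the sentiment label is preserved under paraphrasing, each swap yields an extra training sample for the same label.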
Especially in early-stage companies, you might be facing the “cold-
start” problem - while AI is planned as an integral part of your
product, you don’t yet have any users from which to collect real-life
data and need to get creative in your data acquisition. In most
cases, a skillful combination of the different methods will yield the
best results. As a PM, you should always take a close look at your
compiled dataset and see whether you need to optimize its
composition by adding examples for underrepresented phenomena
and features and discarding irrelevant or redundant information.

Once the data is assembled, your technical team will engage in data
cleaning. It is good to keep an eye on these activities and make sure
you understand the various steps and their purpose. Most
importantly, you should make sure that semantically relevant
information is kept, while useless information is discarded to reduce
the noise and ease the follow-up learning.

Finally, you need to decide on a sampling strategy. Most real-life
datasets are unbalanced - some classes are more prominent than
others. This can be appropriate for many learning problems, but
unbalanced datasets can also lead to harmful, discriminatory
outcomes, for example when certain racial groups are
underrepresented in the data. Make sure that the selected sampling
strategy does not introduce this kind of bias into your model.
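As a quick sanity check before training, the class balance of a dataset can be inspected with a few lines of code. This is an illustrative sketch; the 0.8 threshold in `is_imbalanced` is an arbitrary assumption to be adapted to your problem.

```python
from collections import Counter

def class_distribution(labels):
    """Return the share of each class in a list of labels."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

def is_imbalanced(labels, threshold=0.8):
    """Flag a dataset where a single class exceeds `threshold` of all samples."""
    return max(class_distribution(labels).values()) > threshold

labels = ["positive"] * 90 + ["negative"] * 10
print(class_distribution(labels))  # {'positive': 0.9, 'negative': 0.1}
print(is_imbalanced(labels))       # True
```

The same check can be run per demographic group to surface the kind of representation gaps discussed above.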

In Chapter XXX, we will take an in-depth look at the various
methods for collecting, annotating, and preprocessing your data for
AI training and inference.

1.3.4 Intelligence
The actual learning power and intelligence of your AI system reside
in the models you are using. In terms of the core AI models, there
are several approaches that you can use individually or in
combination.
Using rules

If your problem is relatively simple and low-dimensional, consider
using manually coded rules to solve it. This approach is also called
“symbolic AI” and has several advantages, especially at the beginning
of your AI journey.
First, it will speed you up and potentially even allow you to launch
your “AI” without iterating through the whole cycle of training and
deploying a machine learning model. Second, by manually dissecting
the problem, you and your team will acquire an in-depth
understanding of the underlying phenomenon and the relevant
features, which can serve as a great basis for more advanced
solutions. Third, with a rule-based model in place, you can collect
usage data to utilize for training. And finally, rule-based models can
not only yield a relatively high precision, but also provide predictable
and explainable outputs.

One of the biggest issues of rule-based approaches is recall - for
most real-life problems, you will not be able to cover all edge cases
with manual rules. You should only use this approach if you are
confident that errors are either infrequent or less “visible”, so that
they don’t discourage your users or harm the reputation of your product. In
our example, you could use a sentiment lexicon (collection of
positive and negative words), check how many indicators of each
polarity occur in a given movie review, and decide on the sentiment
label. However, the overall accuracy of this approach will be low: on
the one hand, sentiment lexicons don’t cover all the possible words
for each polarity. On the other hand, language is infinitely versatile,
and the rule-based approach will miss linguistic and stylistic
constructions such as negation, irony, and sarcasm, which are
relevant to determining the sentiment of a text. You can go on to
model these patterns using syntactic rules, but the quantity and
complexity of the rules can escalate and make your system
impossible to maintain. Thus, while there was a lot of
experimentation with rules at the advent of sentiment analysis, the
focus of researchers and practitioners quickly shifted to the training
and use of statistical machine learning models.
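A minimal lexicon-based classifier along these lines might look as follows. The word lists are toy examples; real sentiment lexicons such as VADER or SentiWordNet are far larger. Note how the second call illustrates the negation problem described above.

```python
# Toy polarity lexicons - illustrative stand-ins for a real sentiment lexicon.
POSITIVE = {"brilliant", "moving", "masterpiece", "gripping"}
NEGATIVE = {"dull", "predictable", "disappointing", "tedious"}

def lexicon_sentiment(review):
    """Label a review by counting polarity words; 'neutral' on a tie."""
    tokens = review.lower().replace(",", " ").replace(".", " ").split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(lexicon_sentiment("A gripping, moving masterpiece"))  # positive
print(lexicon_sentiment("Not dull at all"))  # negative - the negation is missed
```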

An example where rules can be applied rather successfully is
compliance. For example, in the finance industry, rules can be used
to ensure that financial transactions adhere to specific legal and
regulatory requirements, where strict rules need to be followed to
avoid non-compliance. In this case, the learning domain is explicitly
defined up-front by humans, and translating it into formal
rules is much easier.

Training your statistical ML model from scratch

In general, this approach works well for simpler but highly specific
problems for which you own distinguished know-how or decent
datasets. Simple problems can often be solved with simple machine
learning methods such as logistic regression, which are
computationally less expensive than fancy deep learning methods.
However, there are other cases where your problem will be more
complex, so you might also consider training a deep neural network
from scratch. In the case of sentiment analysis, training sentiment
models from scratch was a widespread and well-performing approach
before the rise of pre-trained foundation models, and both simple
ML algorithms and deep learning can yield good accuracy.
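As an illustration of the “simple ML” end of the spectrum, a sentiment classifier can be trained from scratch in a few lines with scikit-learn. The four training reviews are toy data standing in for a real labeled corpus such as the IMDB dataset mentioned earlier.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set - a real model would be fitted on thousands of reviews.
reviews = [
    "a brilliant and moving film",
    "gripping story, wonderful acting",
    "dull, predictable and far too long",
    "a tedious and disappointing sequel",
]
labels = ["positive", "positive", "negative", "negative"]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["a wonderful and gripping film"])[0])  # positive
```

The same pipeline can later be swapped for a deep model without changing the surrounding training and evaluation code.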

Our world is changing quickly, and so are the distributions of the
data that go into our AI models. One of the main advantages of machine
learning models is that they can be easily adapted to new data
distributions. If your model is small and simple, it can be regularly
retrained with fresh data; if it is more complex and expensive to
train, you can regularly update it with the new data. Thus, the
knowledge and behavior of your model can be kept up-to-date with
new data and patterns. This is in contrast to complex rule-based
systems, where changing one rule can lead to the collapse of the
whole system.
Fine-tuning a pre-trained model

As more and more pre-trained models become available, and portals
such as Hugging Face offer model repositories as well as standard
code to work with the models, fine-tuning these models has become
the go-to method to try, and it is what has made AI so popular in
recent years. When you work with a pre-trained model, you can
benefit from the investment that someone else has already made in the
data, training, and evaluation of the model, which already “knows” a
lot of stuff about the world. All you need to do is fine-tune the model
using a task-specific dataset, which can be much smaller than the
dataset used originally for pre-training. You will still need to consider
the issues of model selection, the infrastructure overhead of using a
bigger model (especially at inference time), and the maintenance
schedule of the model.

Prompting an existing model

While fine-tuning of pre-trained models became popular with the
publication of BERT in 2018, GPT-3 kicked off the trend of direct
prompting in 2020. Advanced models of the GPT family, as well as
models from other providers such as Anthropic and AI21 Labs, are available
for inference via API, but most of them don’t allow for direct fine-
tuning of the model. With prompting, all the domain- and task-
specific information required for a task is passed at inference time in
the prompt. This can include specific content to be used, examples
of analogous tasks (few-shot prompting), as well as instructions for
the model to follow. Prompting allows you to unlock the rich
cognitive capabilities of large models. In Chapter 5, we provide a
detailed introduction to essential prompting methods as well as
general guidelines and best practices.
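To make few-shot prompting concrete, here is a sketch that assembles a prompt string from labeled examples and a new review. The helper name and the exact wording are illustrative assumptions; the phrasing would be tuned against the specific model in use, and the resulting string would be sent to the model’s inference API.

```python
def build_few_shot_prompt(review, examples):
    """Assemble a few-shot sentiment prompt for a large language model.

    `examples` is a list of (review_text, label) pairs shown to the model
    before the review to be classified.
    """
    lines = ["Classify the sentiment of each movie review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {review}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("A gripping masterpiece.", "positive"),
    ("Two tedious hours I will never get back.", "negative"),
]
print(build_few_shot_prompt("Wonderful acting and a moving story.", examples))
```

The trailing "Sentiment:" cues the model to complete the prompt with a label, which is the core mechanic of few-shot prompting.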

Evaluating your system

No matter which approach you choose in the end, one of the most
important components of a sound AI strategy is the testing and
evaluation of the system. It will not only allow you to make sure that
the performance of your model stays reliable over time, but will also
help you understand the ROI of your optimization efforts. Even if
you decide to alter your approach to AI - for example, by replacing a
rule-based model with machine learning - your evaluation
methodology can be applied to benchmark the results and justify the
change. Two of the most common evaluation metrics for classification
tasks like sentiment analysis are precision and recall:

Precision: Precision measures the accuracy of the positive
predictions made by the model. For sentiment analysis on movie
reviews, precision would answer the question: "Of all the
reviews predicted as positive, how many are truly positive?" For
example, if the model predicts 100 movie reviews as positive,
and 80 of them are truly positive while 20 are false alarms
(negative reviews mistakenly classified as positive), the
precision is 80%.
Recall: Recall measures the model's ability to capture all the
truly positive reviews. In our example, it would answer the
question: "Of all the actual positive reviews, how many did the
model correctly identify?" Thus, if there are 150 actual positive
reviews, and the model correctly identifies 120 of them while
missing 30, the recall is 80%.

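The two definitions can be captured in a few lines of code. The function below is a minimal sketch operating on lists of gold and predicted labels, reproducing the numbers from the examples above.

```python
def precision_recall(y_true, y_pred, positive="positive"):
    """Compute precision and recall of the positive class from label lists."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Precision example from the text: 100 predicted positives, 80 of them correct.
print(precision_recall(["positive"] * 80 + ["negative"] * 20,
                       ["positive"] * 100)[0])  # 0.8

# Recall example from the text: 150 actual positives, 120 of them found.
print(precision_recall(["positive"] * 150,
                       ["positive"] * 120 + ["negative"] * 30)[1])  # 0.8
```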
Precision and recall form a delicate trade-off in machine learning.
When you aim to improve precision (reduce false positives) in tasks
like sentiment analysis, you often do so at the cost of lower recall
(missing some true positives). Conversely, if you strive for higher
recall (capturing more true positives), precision may decrease due to
a higher chance of false positives. Finding the optimal balance
depends on the specific application and the consequences of false
positives vs. false negatives. As an example, imagine you are
implementing a process for medical diagnosis of a life-threatening
condition. The first step is an automated X-ray analysis, and if
alarming signs are found, the patient is then further inspected by
human experts. In this case, the cost of missing a true positive is
much higher than the cost of inspecting a false positive, so you will
tend to favor recall over precision.

In later chapters, we will dive deeper into the various intelligence
paradigms as well as the evaluation methods and metrics that are
appropriate in specific scenarios.

1.3.5 User experience
The user experience of AI products is a fascinating area - after all,
we hope to “partner” with an AI that can supercharge our
intelligence without sidestepping it. This human-AI partnership
requires a thoughtful and sensible discovery and design process.
Already in the 1950s, John McCarthy, one of the founding fathers of
AI, complained: “As soon as it works, no one calls it AI anymore.”
Well, the implication here is that a great deal of what we technically
consider AI doesn’t work as reliably as desired, and one of the UX
challenges is how to address the unavoidable errors that will be
made by the AI. Some key considerations for a successful user
interface are:

Onboard and educate your users: AI is a relatively new
technology, and a smooth but thorough onboarding experience
is necessary for successful usage. Onboarding can be more
complex than for the average digital product - after all, many AI
systems will require you to also onboard custom data or even
integrate the digital interface with physical devices like sensors.
Maintain an appropriate level of AI awareness: While
mature, time-tested systems such as Netflix’s recommendation
feature can just do their work in the background, newer systems
such as ADAS and products that rely on LLMs should
keep the user aware that AI is at work and can produce wrong
and unpredictable outputs. For example, you can do this by
highlighting the error-prone portions of the output with so-
called confidence scores (numeric values that signal how
confident the AI is about the correctness of its output).
Build trust: Amidst the wave of public AI interest triggered by
ChatGPT, initial user adoption was high, but end user demand
soon plateaued and user retention was much lower when
compared to other digital products.[5] One of the core pillars of
user retention is building trust in the reliable and reasonably
correct functioning of the AI system. According to Don Norman,
“trust develops over time and is based on experience, along
with continual reliable interaction.”[6] Building trust in an AI
product that is per se prone to errors involves transparency,
clear communication, and a focus on continuous improvement.
Give the users control - both perceived and actual: At the
input, you can provide additional options that advanced users
can use to flexibly configure the input to steer the behavior of
the model. At the output, you can allow the user to fix errors
made by the AI and iteratively achieve the optimal output.
Finally, you should incentivize the user to provide feedback that
flows back into your training and allows you to improve the
algorithm. As users see the gradual improvements, they will
provide more feedback, get back to the product more often, and
naturally nurture your data flywheel.

The effort you invest into the UI will depend on how much of a task
you are automating. If you are doing full automation without giving
the user options to correct and iterate, the UI can be relatively
simple. However, in most scenarios, AI will assist or augment the
user, who will still be required to make the final decisions and
perform parts of the task (“augmented intelligence”). In that case,
the user interface will be more complex: it should clearly show the
labor distribution between human and AI, highlight error potentials,
and allow the user to iteratively get to the desired outcome. As part
of your preparation, you should carefully study the original workflow
of the user into which your application will be introduced. This
requires a clear understanding not only of the steps it will replace in
the workflow, but also of the new steps it might introduce, like prompt
engineering for generative AI applications.

To get inspiration for your UX, you can start broadly and look at the
best practices from different areas of design such as HMI (human-
machine interaction) and industrial design. You can then narrow
down your perspective and look at comparable implementations by
competitors. When you do this, a range of design patterns will
emerge, like autocomplete, command palettes, copilots, and direct
chat. To identify the most suitable pattern, consider the following
questions and requirements:

Integration with the “host” system: What do your users
need to do to benefit from your AI - is it switching windows,
going to an additional application or browser tab, or using a
browser extension? The tighter the coupling of the AI with the
original app, the smoother the UX - however, it obviously also
comes at the cost of flexibility, since you will need to adjust your
integration for each single “host” system.
Accuracy: How critical is the correctness of the result? For
example, if you are suggesting autocompletes, your
commitment is low - the users can choose whether they go your
way or continue on their own. However, if you are providing a
standalone application that will be opened intentionally to get
help on a specific task, the expectations are higher, and you
should prioritize the accuracy requirement.
Speed and latency: How long is the user willing to wait for a
response? If we are talking about supporting the user “on the
fly” as they are doing their job, for example in a coding co-pilot,
speed is key. You don’t want your feature to spit out a response
when the user has already advanced several steps into the
workflow. On the other hand, if you help your users “initialize” a
task in a more open manner, for example in generating an initial
draft of a text, you can expect more goodwill.
Incentives to provide feedback: You should try to get as
much feedback from the user as possible. This is quite
straightforward in click-through features, like search and
recommendation: clicks provide positive feedback. On the other
hand, with features such as chat, you will have a harder time
motivating the user. The classical thumbs up/down buttons and
comment fields can be useful, but expect many users to skip
them due to lack of time and/or interest. A useful approach can
be to “hide” more advanced and useful features, such as
downloads and sharing features, behind a mandatory feedback
mechanism (cf. Midjourney, where the user has to select one of
the generated images before it can be downloaded).

What could a possible user interface look like for our sentiment
analysis example? You could display aggregated (averaged) sentiment scores
on the detail page of each movie and also offer rankings and fine-
grained filters. You can use pop-ups and textual elements to keep
the user aware of and alert to the AI functionality. And while this
feature is purely analytical and does not involve iteration and error
fixing on the part of the user, you can try and collect feedback after
the user watches the movie, and think of attractive incentives and
rewards for users who provide high-quality feedback.

In later chapters, we will take an in-depth look at the UX design
patterns for AI and provide guidelines for implementing a good
human-AI partnership.

1.3.6 Non-functional requirements
Beyond the data, algorithm, and UX, which enable you to implement
specific functionality, so-called non-functional requirements (NFRs)
such as accuracy, latency, scalability, reliability, and data governance
ensure that the user indeed gets the envisioned value. The concept
of NFRs comes from software development but is not yet
systematically accounted for in the domain of AI. Rather, these
requirements are picked up in an ad-hoc fashion as they come up
during user research, ideation, development, and operation of AI
capabilities.

You should try to understand and define your NFRs as early as
possible, since different NFRs come into play at different points
in your journey. For example, privacy needs to be considered
starting at the very initial step of data selection. Accuracy is most
sensitive in the production stage when users start using your system
online, potentially stressing it with unexpected inputs, and scalability
is a more strategic consideration that comes into play when your
business scales the number of users and/or requests or the
spectrum of offered functionality.

In this book, we will be very explicit about identifying and managing
NFRs and especially focus on the trade-offs between different NFRs.
For example, one of the most important trade-offs is between output
quality on the one hand, and latency and the use of compute
resources on the other hand. To achieve higher quality, you need to
train your model with more data and/or more parameters, which
affects not only resource use, but also training and inference speed.
To allow users to manage this trade-off, most pretrained Large
Language Models are offered in different sizes - for example, the
Llama model is offered in sizes from 7B to 65B parameters.
Additionally, when enriching your data, you will also need to make
sure that all the conditions for the privacy and security of your data
are checked off. How you prioritize the different requirements will
depend on the available computational resources, the expected UX,
and the impact of the decisions made or supported by the AI.

When adding sentiment analysis to your movie streaming platform,
you could decide to focus on accuracy and adaptability. After all,
providing highly accurate data about movie evaluation is the core
value behind the feature. Since the views of users change over time,
you need to make sure that your model adapts accordingly, and that
recent reviews get more weight in the final sentiment than older
reviews. On the other hand, latency might be of lower importance
since your data will be processed in batches, for example overnight,
and getting real-time updates is not exactly critical for your users.
Also, you will most probably not face any privacy challenges if you
present your users with aggregated sentiment scores that don’t
allow a traceback to the original individuals who posted them.
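One way to give recent reviews more weight, as described above, is an exponentially decaying average over the review scores. This is an illustrative sketch: the half-life of 180 days and the [0, 1] score scale are arbitrary assumptions to be tuned for your platform.

```python
from datetime import date

def weighted_sentiment(reviews, today, half_life_days=180):
    """Average sentiment scores with exponentially decaying weights,
    so that recent reviews count more than older ones.

    `reviews` is a list of (score, review_date) pairs, scores in [0, 1].
    """
    weighted_sum = total_weight = 0.0
    for score, review_date in reviews:
        age = (today - review_date).days
        weight = 0.5 ** (age / half_life_days)  # halves every half_life_days
        weighted_sum += score * weight
        total_weight += weight
    return weighted_sum / total_weight

reviews = [
    (1.0, date(2024, 6, 1)),  # recent, positive
    (0.0, date(2022, 6, 1)),  # two years old, negative
]
print(round(weighted_sentiment(reviews, today=date(2024, 6, 1)), 2))  # 0.94
```

Since the scores are aggregated in batch (for example overnight), this computation fits the low-latency requirements discussed above.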

In later chapters, we will develop a systematic approach to NFRs in
AI, which will allow you to plan ahead, define your requirements and
balance the related trade-offs.

In this section, we have learned a mental model of AI systems that
will be used as a “blueprint” for AI features and products throughout
the book. In your own work, this model can be used as a tool for the
discussion, planning, and definition of AI products by a cross-
disciplinary product team, as well as for alignment with the business
department. It allows you to bring together the diverse perspectives of
product managers, UX designers, data scientists, engineers, and
other team members.

1.4 Setting up your team for AI success
Now that we have defined the mental model and the main
components to take care of, let’s look at how you can set up your
team for AI success. We will first look at the internal composition of
your product development team that will join you on the journey.
Then, we will see how you can successfully manage the interfaces
with other functions in your business and bring all important
stakeholders on board.

1.4.1 Team composition
Product teams can be set up in different ways. In the traditional
waterfall approach, PMs write the requirements, designers create
mockups and engineers produce the software. This approach
encourages silos, increases the need for back-and-forth rework, and,
in general, is not very efficient for building products that customers
love. In product management, a more modern and agile approach to
structuring your team is the product trio. Here, the product
manager, the designer(s) and the engineer(s) work side-by-side
during the whole product development lifecycle, communicating and
collaborating during all stages. Thus, iterations between these
functions are inherently programmed into the team structure. Since
the interaction is more direct and efficient, silos and the need for
rework are reduced. To build a deeper understanding of the work of
a product trio, the reader is referred to Continuous Discovery Habits
by Teresa Torres.

In this book, recommendations about the structure and the
management of the team will be based on the assumption that you
are working in the product-trio structure, and are extending it with
the needed AI roles. However, most recommendations will also be
valid for other, more traditional team setups. Even if your
organization does not (yet) implement product trios, you can still
attempt to recreate the workflows and patterns in a traditional
organizational structure and eventually convince management of the
related benefits.

Whatever the product team structure, when starting on your AI
journey, you will need to plan for a cross-disciplinary team that
combines the necessary AI, data, and domain expertise. The
following table lists the core roles in a team that is developing AI
products:

Table 1.1 Core roles in a team developing AI systems

Product management
  Product manager: The product manager identifies the
  opportunities, customer needs, and the larger business objectives
  that the AI system will fulfill, communicates the requirements and
  success criteria, and guides the team in implementing this vision.

Software development
  Backend engineer: The backend engineer builds the software,
  participating in the full life cycle of the product. Compared to
  frontend engineers, who work on the client-facing side of
  software, backend engineers focus on the server-side components
  of the system.
  Frontend engineer: Frontend engineers plan, design, build, and
  implement the user interface of the system, working closely with
  UI and UX designers.
  DevOps engineer: The DevOps engineer plays a key role in
  bridging the gap between development and IT operations, focusing
  on automating and streamlining the software delivery and
  infrastructure management processes to enable faster, more
  reliable, and continuous software releases.
  Software architect: The software architect is responsible for
  designing and structuring the overall system, making high-level
  design decisions, and defining architectural patterns and
  principles to ensure the scalability, maintainability, and
  performance of the application.

AI development
  Machine learning engineer (ML engineer): The ML engineer is
  responsible for designing and developing ML systems and ensuring
  a smooth process for the training, evaluation, and integration of
  ML models. They also set up the infrastructure for experiments,
  models, and data, called MLOps.
  Data scientist: The data scientist is a professional who collects
  large amounts of data using analytical, statistical, and
  programming skills, and uses this data to train ML models to
  fulfill the business needs.
  Data engineer: The data engineer implements processes and
  systems to process data, monitor data quality, and grant and
  manage data access for key stakeholders.
  MLOps engineer: The MLOps engineer plays a crucial role in
  managing the end-to-end machine learning lifecycle, focusing on
  automating ML workflows to ensure the scalable development of AI
  systems.
  Prompt engineer: The prompt engineer writes and manages the
  prompts for applications that use foundation models.
  AI engineer: The AI engineer uses pre-built AI components, such
  as models, plug-ins, and agents, and combines them into AI
  systems that address specific use cases.

Data and knowledge
  Data annotator: The data annotator labels training data for the
  training, fine-tuning, and evaluation of AI models and systems.
  They can also be involved in the creation of guidelines for data
  annotation.
  Knowledge engineer: The knowledge engineer builds knowledge
  representations for AI systems, especially symbolic systems, such
  as rules, ontologies, etc.
  Domain expert: The domain expert contributes specialized
  knowledge for domain applications. For example, they can assist
  in the initial development of the knowledge representations as
  well as in the evaluation and improvement of the different
  versions of the system.

User experience
  UX designer: The UX designer ideates, tests, and designs the
  user experience of the product.
  UI designer: The UI designer designs the graphical layout and
  elements of the product, which implement the functional
  experience as envisioned by the UX designer.
  Conversational designer: The conversational designer designs the
  conversational elements, flow, and persona of virtual assistants
  and other conversational applications.
  Content designer: The content designer creates the wording
  around AI products and interfaces. This involves educating users,
  managing expectations, creating AI awareness, and communicating
  the limitations of the AI product.

Not all of these roles are required on every team, and some of them
will be needed only during specific stages of development. For
example, conversational designers are only needed if you are
building applications with a chat or voice interface. If you are
building a product in knowledge-intensive domains such as medicine
or law, recruiting and employing domain experts can be pretty
expensive. Often, you might want to get their input mainly at the
beginning to make sure your data and knowledge structures are on
the right track, and at more advanced stages for testing and
evaluating the system.
The depth - and cost - of the required AI development expertise will
depend on many factors, like the competitive situation, the
complexity of your AI endeavors and your data, and the envisioned
scale. However, by far the most important factor will be whether you
gravitate towards building or buying most of your AI stack. Doing it
yourself can help you create a moat and a competitive advantage,
but it is an advanced exercise for which you will need to have AI
ninjas on your team. On the other hand, keep in mind that more and
more AI infrastructure, tooling, and models are becoming available
via managed cloud services. You can leverage those to reduce the
internal effort and recruiting overhead, speed up time-to-market,
and follow a more agile approach to AI.[7]

Now, here are the most important ways to bring AI competence to
your company and product, sorted by increasing in-house AI
expertise:

Outsource your AI development. More and more
outsourcing providers are specializing in AI. This is a highly
convenient option, especially if you are looking to build
something quickly, or if you are planning to mostly use standard
components and don’t have the ambition to generate a strategic
advantage from AI.
Hire external consultants to enable your team. This can
be a great complement if you already have some initial AI
expertise in-house. External consultants can enable and coach
your team, helping you avoid the hidden pitfalls on the
AI journey that can only be spotted based on experience. At
the same time, your engineers will still be implementing large
parts of the solution and building up internal know-how and
intellectual property.
Train your engineers or encourage them to acquire AI
skills. AI is here to stay. By letting your engineers upskill in this
area, you can promote employee development and career
growth. Besides, most engineers are curious folks who are
eager for new learning experiences and will appreciate the new
opportunity.
Hire data scientists and ML engineers. While there is a
severe shortage of AI skills at the moment[8], you can
differentiate yourself and attract top-notch talent by establishing
a work culture that promotes creativity and engineering
excellence. This book provides you with the required guidance.

Beyond data talent, you might also need to involve domain experts.
This is especially important for domains where tasks are highly
individual and require rich knowledge (e.g., medical treatments) or
practical experience (e.g., financial advisory).

Finally, plan your data annotation. The resources will depend on the
degree of automation you can build into your data annotation effort,
as well as the required level of expertise. Data labeling has grown
into an industry, and in the case of standardized tasks like sentiment
analysis, you can consider outsourcing your annotation tasks, in
which case you will also need to supply clear guidelines for the
annotation. You can also build up an internal workforce for data
annotation and manage the whole lifecycle of your training data
internally. However, especially when using sample-efficient learning
methods like few-shot learning, your existing team may well be able
to come up with the required training examples without the
additional cost and overhead of a dedicated annotation team. Lastly,
you might need more specialized expertise if you are building highly
industry-specific applications. For instance, imagine that you want to
detect a rare type of cancer from X-rays. In this case, you will need
to recruit subject-matter experts if they are not available internally.
When doing this, you should understand that data annotation is a
tedious task and most highly qualified experts will not want to spend
months of their time on it, so try to keep your sample size
manageable.
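To see why few-shot learning can lighten the annotation burden, consider the sketch below: instead of a large labeled dataset, the team supplies a handful of examples that are formatted directly into a prompt for a foundation model. This is a minimal, vendor-neutral sketch; the function name and prompt layout are illustrative assumptions, not the API of any specific model provider.

```python
# Hypothetical sketch: a few team-supplied labeled examples are
# formatted into a few-shot prompt for a sentiment task, replacing
# a large annotated training set. The prompt format is an
# illustrative assumption, not a vendor-specific template.

def build_few_shot_prompt(examples, query,
                          task="Classify the sentiment as positive or negative."):
    """Format labeled examples plus a new input into a few-shot prompt."""
    lines = [task, ""]
    for text, label in examples:
        lines.append(f"Text: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The unlabeled query comes last; the model is expected to
    # continue the pattern and fill in the label.
    lines.append(f"Text: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("The onboarding flow was effortless.", "positive"),
    ("The app crashes every time I export.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Support resolved my issue in minutes.")
print(prompt)
```

In practice, this string would be sent to whichever foundation model your stack uses; the point is that two or three examples per class, written by the product team itself, can stand in for a dedicated annotation effort.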

1.4.2 Managing interfaces


Figure 1.4 The interfaces of product management

One of the things that makes product management so exciting is its
intersection with technology, business, and user experience. On the
other hand, according to McKinsey, “one of the primary reasons that
ML projects fail is because of a lack of buy-in from various
stakeholders, including the data, technology, line-of-business, … and
compliance teams.”[9] Product managers need to have a 360° view
of these core aspects of the business, reconcile requirements from
different business functions, and actively manage the related
communications, interfaces, and trade-offs.

On the technology and engineering side, PMs research and
combine the different needs and perspectives of users, the business,
engineers, and designers. They consolidate them into requirements
and acceptance criteria, which the engineers and designers develop
against. Together with the engineers, they also negotiate and
balance the trade-off between managing technical debt - those
imperfections and compromises that accumulate in the code as your
team dashes forward to launch a new feature or product - and new
development. This is a common area of tension between engineers,
who want to eliminate technical debt, and business stakeholders,
who want to quickly launch and provide more value to users. Finally,
with your feature or product launched, your engineers will throw
themselves into other development challenges. At this stage, you
need to monitor usage and user satisfaction, adopt a proactive
approach to mitigate the risks associated with AI, and prioritize and
resolve the related issues.
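Post-launch monitoring can be made concrete with very little machinery. Assuming you log explicit feedback events (for example, thumbs up/down on AI outputs), the sketch below aggregates them into a satisfaction rate per feature and flags features that fall below a target. The event schema and the 80% target are illustrative assumptions, not a prescribed setup.

```python
# Hypothetical sketch: aggregate explicit feedback events
# (feature name, thumbs_up flag) into per-feature satisfaction
# rates and flag features below a target. The schema and the
# 0.8 target are illustrative assumptions.
from collections import defaultdict

def satisfaction_by_feature(events, target=0.8):
    """Return ({feature: rate}, sorted list of features below target)."""
    ups = defaultdict(int)
    totals = defaultdict(int)
    for feature, thumbs_up in events:
        totals[feature] += 1
        if thumbs_up:
            ups[feature] += 1
    rates = {f: ups[f] / totals[f] for f in totals}
    flagged = sorted(f for f, r in rates.items() if r < target)
    return rates, flagged

events = [
    ("summarize", True), ("summarize", True), ("summarize", False),
    ("autocomplete", True), ("autocomplete", False), ("autocomplete", False),
]
rates, flagged = satisfaction_by_feature(events)
print(rates, flagged)
```

A dashboard built on a rate like this gives the PM an early-warning signal to prioritize against, rather than waiting for complaints to surface.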

From the business perspective, your AI efforts need to be aligned
with the vision, mission, market positioning, and strategy of the
company. The value, but also the limitations, of the AI system need
to be clearly defined and communicated in business/user terms.
Ideally, you can find a way to quantify this value and visualize it in
an easily accessible place - this way, you will have an easier time
convincing internal stakeholders to support your future AI
ambitions.[10] Keep in mind that the business value can be broader
than the user-facing value - for example, with an aggressive AI
strategy, you can establish an innovative, future-oriented image that
will boost the value of your brand. You also need to plan and manage
the cost of AI development and operation, which involves a range of
different components like data, development, infrastructure, and
maintenance. Finally, think about AI governance, security, and risk -
by anticipating potential issues, you will avoid many hassles further
down the road and, again, build trust among management and
internal stakeholders.

In the user experience corner, explainability and transparency are
key to winning the trust of your users. If you go for
augmented intelligence with a human-in-the-loop (HITL), you also
need to manage the “human fallacy”: humans are quick to
overestimate machine intelligence and forget that they are
communicating with machines. The UX should guide your users and
clearly alert them when their participation is required for the AI to
work properly. Beyond the UX in the product, think about how you
can educate users about the value, usage, and limitations of AI
features. This is an iterative process - as you collect usage data and
feedback about the pain points and preferences of your users, you
can address them with focused content and communications.