Generative AI in Action (MEAP V02) – Amit Bahree
Generative AI in Action
1. Introduction to generative AI
2. Introduction to large language models
3. Working through an API – generating text
4. From pixels to pictures – generating images
5. What else can AI generate?
6. Index
1 Introduction to generative AI
This chapter covers
Introducing generative AI
Examples of what can be generated using generative AI
Guidance for organizations adopting generative AI
Understanding key use cases for generative AI
Comparing generative AI to other types of AI
Artificial intelligence (AI) is familiar and has been around for years. We all use
it daily – when we run a search, see a product recommendation, listen to a curated
playlist, or accept a suggested word while typing on a phone, we are using
AI-powered scenarios. Yet everything about AI feels new today, with the world
abuzz about one branch in particular: generative AI.
In addition to the technical underpinnings, this book presents real-world use cases
that show the practical applications of generative AI. Readers will receive guidance
on developing and deploying generative AI models in enterprise settings, covering
topics such as prompts, prompt engineering, model finetuning, and evaluation. We
will also explore emerging application architecture patterns, best practices, and
integration patterns with existing systems and enterprise workflows. As generative
AI continues to evolve, the book will highlight emerging tools and trends that
enterprises should be aware of, including prompt engineering, explainable AI,
transfer learning, and reinforcement learning (specifically reinforcement learning
from human feedback – RLHF).
Lastly, readers will gain an understanding of the challenges and risks associated
with generative AI, such as how to use corporate and private data, guard rails and
ethical considerations, data privacy, security, safety considerations, etc., enabling
them to make informed decisions when integrating these technologies into their
organizations.
Machine learning and deep learning provide the fundamental techniques we need to
understand before diving into generative AI. They give us the toolkit to navigate
the AI landscape: data engineering, model training, and inference. We will apply
these principles as we progress through this book, but we will not cover them in
depth; entire books are dedicated to both topics, and readers should consult those
for details.
At its simplest, Machine Learning (ML) is the scientific discipline that focuses on
how computers can learn from data. Instead of explicitly programming computers
to carry out tasks, in Machine Learning, we set out to develop algorithms that can
learn from and make predictions or decisions based on data. This data-driven
decision-making is applicable in numerous real-world scenarios, ranging from spam
filtering in emails to recommendation systems on e-commerce platforms.
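To make the idea of learning from data concrete, here is a small illustrative sketch (not from any production system): a tiny perceptron that learns weights for spam-indicating words from a handful of labeled examples, rather than relying on hand-written rules. The vocabulary and examples are invented for illustration.

```python
# Learning from data: instead of hand-writing "if 'free' then spam" rules,
# a tiny perceptron learns word weights from labeled examples.

VOCAB = ["free", "winner", "meeting", "prize", "report", "urgent"]

def features(text):
    """Bag-of-words feature vector: how often each vocabulary word appears."""
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

def train(examples, epochs=20, lr=0.5):
    weights = [0.0] * len(VOCAB)
    bias = 0.0
    for _ in range(epochs):
        for text, label in examples:  # label: 1 = spam, 0 = not spam
            x = features(text)
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            err = label - pred  # update weights only when the prediction is wrong
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

def predict(model, text):
    weights, bias = model
    x = features(text)
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

examples = [
    ("free prize winner urgent", 1),
    ("claim your free prize", 1),
    ("urgent winner free", 1),
    ("quarterly report meeting", 0),
    ("meeting notes and report", 0),
    ("project report for the meeting", 0),
]
model = train(examples)
```

The key point is that the decision rule is never written by hand; it is induced from the labeled data, which is exactly what distinguishes machine learning from explicit programming.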
Deep Learning (DL), a subset of Machine Learning, takes this concept further. It
uses artificial neural networks with several layers – called ‘deep’ learning. These
networks attempt to simulate the behavior of the human brain—albeit in a
simplified form—to ‘learn’ from large amounts of data. While a neural network with
a single layer can still make approximate predictions, additional hidden layers can
help optimize the accuracy. Deep learning drives many AI applications today and
helps execute tasks with improved efficiency, speed, and scale.
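As a minimal illustration of why hidden layers matter, the network below computes XOR – a function a single-layer network famously cannot represent. The weights are hand-set for clarity rather than learned.

```python
# A single-layer network cannot compute XOR; adding one hidden layer makes
# it possible. Weights here are hand-set for illustration, not learned.

def step(x):
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: h1 fires if at least one input is on,
    # h2 fires only if both inputs are on.
    h1 = step(x1 + x2 - 0.5)
    h2 = step(x1 + x2 - 1.5)
    # Output layer: "at least one, but not both" = XOR.
    return step(h1 - h2 - 0.5)
```

Deep networks stack many such layers (with learned, continuous weights), which is what lets them approximate far more complex functions than a single layer could.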
Below are a few areas where generative AI is being used today. We expect to see
even more innovative and creative applications as generative AI technology
develops.
Images: Create realistic images of people, objects, and scenes that do not exist
in the real world. This technology is used for various purposes, such as creating
virtual worlds for gaming and entertainment, generating realistic product
images for e-commerce, and creating training data for other AI models.
Videos: Create videos that do not exist in the real world. This technology is
used for various purposes, such as creating special effects for movies and TV
shows, generating training data for other AI models, and creating personalized
video content for marketing and advertising.
Text (Language): Create realistic text, such as news articles, blog posts, and
creative writing. This technology is used for various purposes, such as
generating content for websites and social media, creating personalized
marketing materials, and producing synthetic data.
Text (Code): Generative AI models augment and assist developers as they write
code. In GitHub's research, 88% of developers using its Copilot feature
reported feeling more productive, and 96% said they were faster on repetitive
tasks.
Music: Generative AI models are being used to create new music that is
original and creative. This technology is used for various purposes, such as
creating music for movies and TV shows, generating personalized playlists, and
creating training data for other AI models.
We’ll dive into the specifics of how generative AI works in the next chapter, but for
now, let’s discuss what you can generate using this technology and how it can help
your enterprise.
In this example, we will use OpenAI’s GPT-4 model to extract the first name,
company name, location, email, and phone number from the text below.
Extract the name, company, email, and phone number from the text below:
Hello. My name is Amit Bahree. I’m calling from Acme Insurance, Seattle,
WA. My colleague mentioned that you are interested in learning about our
comprehensive benefits policy. Could you give me a call back at (555)
111-2222 when you get a chance so we can go over the benefits? I can be
reached Monday to Friday during normal business hours. If you want, you
can also email me at aweomseinsrance@acme.com. Thanks, Amit.
We can see the entities extracted by the model in the output below in Figure 1.3.
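As a rough sketch of how such an extraction call might look in code (the model name, message framing, and helper function are our own illustrative choices, using the v1-style OpenAI Python SDK), one could write:

```python
import os

# Abbreviated version of the call transcript from the example above.
TRANSCRIPT = (
    "Hello. My name is Amit Bahree. I'm calling from Acme Insurance, "
    "Seattle, WA. Could you give me a call back at (555) 111-2222? "
    "You can also email me at aweomseinsrance@acme.com. Thanks, Amit."
)

def build_messages(text):
    """Frame the entity-extraction task as a chat prompt."""
    return [
        {"role": "system", "content": "You extract entities from text."},
        {"role": "user",
         "content": "Extract the name, company, location, email, and phone "
                    "number from the text below:\n\n" + text},
    ]

# Only call the API when a key is configured; the request pattern is the
# same regardless of the extraction task.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=build_messages(TRANSCRIPT),
    )
    print(response.choices[0].message.content)
```

Note how the task is expressed entirely in natural language in the user message; no entity schema or parsing rules are coded anywhere.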
Given the broad nature of the underlying model (called a foundational model –
covered in the next chapter), it is also robust to grammar and spelling
mistakes. For example, we misspelled "practices" as "pracrtices" and
"architects" as "architectes", and the model understood both.
Write a short book launch email for new Generative AI book that is available at
all book stores. The book covers Introduction to Generative AI, what are the
different types and use cases, and best pracrtices for enterprises including how
they can incorporate this new AI technology in their business processes. The
book is published by Manning and the author is Amit Bahree. The name of the
book is "Generative AI in Action: Real-World Applications and Case Studies of
AI for Enterprises". The target audience is developers and enterprise
architectes and the tone is friendly and exciting.
1. What should be the subject line of the email?
2. What should be the body of the email?
And one gets an answer something like this, which is rather good.
1. Subject Line: "Explore the Power of Generative AI with our New Book:
'Generative AI in Action' by Amit Bahree!"
2. Email Body:
Dear [Recipient's Name],
I am thrilled to announce the release of a remarkable new book, "Generative
AI in Action: Real-World Applications and Case Studies of AI for Enterprises",
authored by Amit Bahree and published by Manning.
Our dynamic business environment is calling for cutting-edge solutions, and
Generative AI is just that. It is one of the most exciting advancements in the
tech industry and can be a game-changer for your business. Whether you're a
developer or an enterprise architect, this book is your ticket to harnessing the
power of Generative AI.
"Generative AI in Action" unravels the complexities of Generative AI. It
provides an accessible introduction and dives deep into its different types, use
cases, and how it can revolutionize business processes. Furthermore, it offers
the best practices for enterprises looking to incorporate this state-of-the-art
technology into their operations.
Here's what you can expect from this enlightening read:
- A comprehensive introduction to Generative AI.
- Exploration of various types of Generative AI and their real-world applications.
- Insightful case studies demonstrating how businesses leverage Generative AI.
- Valuable guidance on the best practices for implementing Generative AI in
your enterprise.
We've carefully curated this book to make Generative AI accessible to anyone
eager to understand and use it. So whether you're new to AI or an experienced
professional seeking to stay at the forefront of technological advancements,
"Generative AI in Action" is the resource you've been waiting for.
We're excited to share that "Generative AI in Action" is now available at all
leading bookstores. Grab your copy today and start your journey towards
transforming your business with Generative AI.
Thank you for your continued support and happy reading!
Best Regards,
[Your Name]
[Your Position]
[Your Contact Information]
1.3.3 Generating Images
We are not restricted to only text; we can generate images using a prompt. To be
clear, the model is not finding a similar image that satisfies these attributes; rather,
it is generating something new that does not exist. It does not matter if we ask for
something whimsical that will not exist in the real world – such as a dog writing a
book with the Taj Mahal in the background. The AI can deconstruct and
comprehend each aspect and then use that to create something new – similar to
how an artist would. In the example below in Figure 1.4, we use OpenAI’s DALL-E 3
model to generate the images from the prompt.
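A sketch of how such an image request might look with the OpenAI Python SDK is shown below; the prompt wording and parameters are illustrative, and the call only runs when an API key is configured:

```python
import os

# Illustrative prompt for the whimsical scene described above.
PROMPT = ("A dog writing a book, with the Taj Mahal in the background, "
          "digital art style")

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI
    client = OpenAI()
    result = client.images.generate(
        model="dall-e-3",
        prompt=PROMPT,
        size="1024x1024",
        n=1,
    )
    # The API returns a temporary URL for the generated image.
    print(result.data[0].url)
```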
And you get an answer like this, including the steps required to get started,
which is impressive. Of course, this is just an illustrative example of the
model's power – understanding the context and rules of the request, including
the programming language, the software development kit (SDK), and the packages
to use, and then generating code. Note that the generated code does not follow
established best practices (e.g., one should not put an API key in the code).
import openai
We can run this and see the output shown in Figure 1.5 below. Note that, given
the non-deterministic nature of AI, we get a slightly different response each
time we run this. In Chapter 3 – Generating Text, we will see how to control
some of this using different options and nudge the model toward more
deterministic output. This behavior stems from how large language models
(LLMs), a type of generative AI model, work. We will explore LLMs in the next
chapter.
For example, we can ask the model to solve a simple math equation (such as the
example below), and it will not only solve it but also explain the steps and
give us the answer.
Here is the response, where we see the model working through its steps and
showing us the exact “thought” process it underwent. Achieving something like this
that can generalize across a wider domain space with traditional AI would be
difficult, if not impossible.
Generative AI, on the other hand, can understand the intent of the question
because it has a broader understanding of the world and can generate
step-by-step processes. Another aspect that enables this is the phenomenon of
emergent behavior in generative AI models: abilities, such as outlining a
step-by-step process, that are not present in any individual component of the
model but emerge from the interaction of those components. The next chapter
will cover emergent behavior in more detail when introducing large language
models.
There are many reasons why an AI system hallucinates, but often it is because
the underlying model cannot distinguish fact from fiction in its training data.
LLMs are trained to generate coherent, context-aware text rather than factually
accurate responses. They also tend to hallucinate when the prompt or context is
inaccurate or irrelevant to the task. We will cover hallucinations, and
techniques to minimize them, later in the book.
Table 1.2 below outlines a few enterprise use cases – these are generic because
they are more horizontal examples applicable to multiple industries. Table 1.3 later
outlines some industry-specific use cases.
These are just a few enterprise use cases for generative AI. As the technology
continues to develop, we can expect to see even more innovative and impactful
applications for generative AI in the years to come.
If enterprises do not set up proper ethical guidelines for generative AI, it could lead
to unintended consequences. Generative AI can create misinformation and fake
content, including fake news articles, fake images, sensitive content, and malicious
content. A picture is not worth a thousand words anymore; some images are so
good that it is getting harder to distinguish fake from real images. In some cases,
the generated output could also infringe on third-party copyrighted material. Adding
human oversight and contribution can help address some of these challenges.
Generative AI models can also unintentionally amplify biases present in training
data, leading to problematic outputs that perpetuate stereotypes and harmful
ideologies.
Companies need a solid plan for using generative AI and for ensuring it aligns
with their business goals – for example, how it will affect sales, marketing,
commerce, service, and IT jobs. And where decisions are life-and-death, ensure
there is a human in the loop making the final call, with the AI assisting as a
co-pilot.
On the other hand, generative AI leans toward a probabilistic approach, where the
outcome is calculated based on probabilities influenced by the input data and
learned patterns. This allows these AI systems to create outputs that were neither
hard-coded nor explicitly taught to the system. Generative AI needs the classical
data science process, enhancing and complementing it in many ways. Generative AI
can help deal with new types of data and content, evaluate the quality and validity
of generated outputs, and ensure the ethical and responsible use of generative AI.
Classical data science and generative AI must work together to create value and
impact from data.
One significant difference is that the way we "talk" to these newer generative
AI models is by prompting them, as we have seen in the earlier examples. More
formally, a prompt is a set of instructions that tells the generative AI system
what kind of content we want it to create. The better the prompt, the better
the results. A prompt should be tailored to the type of response you want and
to the specifics of the generative AI you use. We will cover prompt engineering
in more detail later.
Prompting, in some ways, allows us to be more expressive and not only outline the
requirements and the intent but also capture empathy and emotion via language.
Prompt engineering is a rising new discipline for developers and enterprises –
one that is still more art than science.
Prompt: Write a funny haiku about prompt engineering vs. traditional AI.
AI generated text:
Table 1.4 outlines broad categories to show differences between generative and
traditional AI architectures. It is also important to note that there is a lot of overlap
between different types of AI. For example, a chatbot might use both generative
and predictive models - generating responses based on a learned understanding of
language and predicting what type of response a user will likely find most helpful.
Creation vs. Prediction: Traditional AI focuses on prediction or classification
tasks – identifying what something is or forecasting what will happen next
based on existing data. Generative AI creates new content and outputs that did
not exist in the original data.

Hosting and Inference: Relative to generative AI, traditional AI models are
less complex and require fewer computing resources, allowing them to run on a
range of hardware, from small edge devices to large cloud clusters and
everything in between. This cloud-to-edge flexibility is a huge advantage for
enterprises. Generative models are very large and complex; for the most part,
they are available only on large cloud compute nodes via an API. This has
advantages – the knowledge of the world encoded in these foundational models is
available to everyone – as well as constraints that one needs to consider.
Note: There is a growing set of smaller open-source models that can run
on-premises, but today they are still experimental and nascent, even though
many claim quality similar to OpenAI's models. Given the broad spectrum that
generative AI covers, a new set of benchmarks is emerging, such as Stanford's
Holistic Evaluation of Language Models (HELM –
https://crfm.stanford.edu/helm/latest/) and Hugging Face's recently published
Open LLM Leaderboard –
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard.

Training method: Generative models require a different method of training
(self-supervision and multi-task learning), which is longer and much more
expensive because of the massive scale of data, model sizes, and computing
resources needed. The costs and complexity of managing this are enormous, and
we will touch on them later in the chapter.

Training dataset: Generative AI models are typically trained on large datasets
of existing content, while traditional AI models are typically trained on
smaller, labeled datasets. For example, a generative AI model for image
generation might be trained on millions of images, while a discriminative model
for image classification might be trained on a few thousand labeled images.

Model complexity: Generative AI models are often more complex than other types
of AI models because they need to learn the patterns and relationships in the
data in order to generate new content similar to the existing content.

Adaptation approach: Traditional AI offers few adaptive techniques beyond
labeling more data and going through a full ML loop of training, deploying, and
evaluating. Generative AI models, with their vast world knowledge, can instead
be tailored to specific needs and tasks – or used to distill internal, private,
and proprietary knowledge – via adaptation. Various techniques exist, depending
on the needs.
Many organizations may get swept up in the excitement and the fear of missing
out on generative AI, which can appear magical. The truth, however, is that
having a foundational model like GPT-4, a large language model, makes no
difference by itself. Like any other external software, these advanced AI
systems must be implemented and connected to the enterprise's lines of business
and processes. We will see examples of how to do this in subsequent chapters.
At a high level, the overall approach should not change much; enterprises are
still advised to take a thoughtful and strategic approach when incorporating
generative AI. Below are a few key considerations, spanning the strategic,
business, and technical dimensions that most enterprises need to weigh.
Crawl, walk, and run – Start small and do not rush in to do too much too soon.
Start with a small pilot project to evaluate, learn, and adapt. This is a complex
technology, and it takes time to develop and deploy effective generative AI
applications. Do not expect to see results overnight.
Define clear objectives and the right use cases – It is important for enterprises
to carefully evaluate potential use cases and select those that are most likely to
deliver value. The selected use case will guide the choice of AI models, data
preparation, and resource allocations. Some generative AI applications are
more mature and have a proven record of success; for others, it is still early
days.
Establish governance policies – Generative AI can generate data, some of
which may be sensitive or harmful. Enterprises must establish governance
policies to ensure this data is used responsibly and securely. These policies
should address issues such as data ownership, data privacy, and data security.
Establish Responsible AI and Ethical Governance – Considering the ethical
implications of using generative AI is important. Establish a separate set of
responsible AI and ethics policies that reflect the company's values; these are
also important for managing the company's reputation and brand. Concerns
include bias in AI outputs, the potential misuse of generated content,
hallucinations and incorrect details in generated content, and the implications
of automating tasks that humans previously performed. A robust AI governance
and ethics framework can help manage these risks.
Experiment and iterate – Unlike traditional software, AI – and generative AI in
particular – is nondeterministic; depending on the model parameters and
settings, the output can differ considerably. As with any AI application, it is
essential to take an iterative approach when implementing generative AI. Start
with smaller projects, learn from the outcomes, and gradually scale up. This
approach helps to manage risk and build practical experience.
Design for failure – Most generative AI models today are commercially available
as a cloud API. As such, they are complex and have a considerable latency
compared to more traditional APIs. Enterprises should adhere to cloud best
practices and design for failure. Enterprises should factor in best practices of
retry mechanics, including exponential backoff policies, caching, security, etc.
Expand existing Architecture – These new generative AI endpoints are just
additional pieces of the overall system. As such, most organizations will want to
keep their existing architecture guidance and practices and expand their
existing architecture and best practices, not start from scratch. There are new
constructs that need to be incorporated – things such as context windows,
tokens, embeddings, etc.
Manage cost – Generative AI is complex and much more expensive. Cost is
typically metered differently (in tokens, for example) rather than per API
call. Much of this is new for enterprises, and the costs can easily get out of
hand.
Complement traditional AI – In most cases, generative AI will assist the
investments enterprises have already made in traditional AI. The two sets of
technologies are not mutually exclusive but rather support each other.
Open-source vs. commercial models – Some models are commercially available,
such as Azure OpenAI's GPT models, while others, such as Stable Diffusion, are
open source. Depending on the use case, it is important to validate which
models to use, what the licensing allows, and which legal and regulatory
aspects are already covered.
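The retry-with-exponential-backoff practice mentioned above can be sketched as a small helper (illustrative only; production systems would typically add jitter, logging, and circuit breaking):

```python
import time

def with_retries(call, max_attempts=4, base_delay=1.0,
                 retriable=(TimeoutError, ConnectionError), sleep=time.sleep):
    """Retry a flaky call, doubling the delay after each failure."""
    for attempt in range(max_attempts):
        try:
            return call()
        except retriable:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

One would then wrap a model call as, for example, `with_retries(lambda: client.chat.completions.create(...))`, so transient timeouts do not bubble up to users.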
From an enterprise perspective, there are new aspects of generative AI to
consider – most, if not all, of these add to existing architecture best
practices and guidance rather than throwing anything out. We will cover the
details later in the book, but at a high level, new architectural patterns must
be accounted for. Many of these we have touched on earlier in this chapter; the
key ones are:
Prompts, and later, we will see how to consider engineering and managing
aspects around prompts, including tokens and context windows.
Model adaptation to make the output better for specific tasks.
Integrating generative AI into existing enterprise line-of-business systems –
these new AI models alone do not solve a business problem.
Design for failure – something that is not new per se when building mission-
critical systems, but many still take shortcuts.
Cost and ROI – These generative AI systems are tremendously expensive, in large
part because the underlying compute is very expensive. The costs will come down
over time, but they must be consciously planned and designed for upfront. For
example, the cost of GPT-3.5 Turbo from OpenAI came down by 90% compared to
GPT-3, while its quality improved.
Implement policies and approaches for open-source (OSS) vs. commercial models.
Every week, newer models powering generative AI systems are released – some
commercial, others OSS, with different licensing structures.
Vendor – A few vendors are production-ready for enterprises today, with more
coming soon. Two of the most mature are OpenAI and Azure OpenAI – similar
offerings, where the former targets smaller companies and startups and the
latter targets enterprises. Google is also releasing its generative AI suite on
Google Cloud, and there have been announcements from Amazon. In addition, many
well-funded startups have announced similar products. Enterprises will want to
evaluate each vendor holistically and identify which one to take a dependency
on.
First, you need to define your goals clearly. What problems are you trying to solve
with generative AI? Where can it add the most value? This could be anything from
content creation in marketing to improving customer service with chatbots,
predictive modeling for business strategies, or even product or service innovation.
In our example, we are implementing an Enterprise ChatGPT – like OpenAI's
ChatGPT, but deployed and run in an enterprise setting, using internal and
proprietary data, with access limited to authorized users.
Second, we need to ensure the right resources are in place – people with the
right skills, the appropriate hardware and software infrastructure, defined
measures of success, and proper governance and ethics guidelines.
Third, consider the data. In our example, the Enterprise ChatBot needs access
to enterprise data that is relevant, high-quality, and permitted for the user.
This data must be ingested and indexed so it can be used to answer proprietary
questions. Before that, the data must be managed properly, ensuring privacy and
legal compliance. Remember, the quality of the data fed in will influence the
quality of the output.
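To illustrate the ingest-and-index step, here is a deliberately simple word-overlap retriever; a real Enterprise ChatGPT would use embedding models and a vector store, which we cover later in the book. The sample documents are invented:

```python
import re

def tokenize(text):
    """Lowercase and split into word tokens, dropping punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def build_index(documents):
    """'Ingest' step: precompute a token set for each document."""
    return [(doc, tokenize(doc)) for doc in documents]

def retrieve(index, question, top_k=1):
    """Return the documents sharing the most words with the question."""
    q = tokenize(question)
    scored = sorted(index, key=lambda item: len(q & item[1]), reverse=True)
    return [doc for doc, _ in scored[:top_k]]

docs = [
    "Employees accrue 20 days of paid vacation per year.",
    "The dental plan covers two cleanings per year.",
    "Expense reports are due by the fifth of each month.",
]
index = build_index(docs)
```

The retrieved passages would then be placed into the model's prompt so it can answer from proprietary data it was never trained on.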
Next, we need to integrate the Enterprise ChatBot into the line-of-business
applications that address the use case and problem at hand. As an enterprise,
we will also want to address the risks associated with generative AI and
implement corporate guidance around safety and responsible AI.
1.8 Summary
Generative AI can be used for multiple use cases – entity extraction,
generating specific and personalized text, generating images, generating code,
interpreting and solving logical problems, and generating music.
Generative AI use cases can be horizontal across most industries (such as
customer services and personalized marketing) or industry-specific (such as
fraud detection in finance or personalized treatment plans in healthcare).
Traditional AI predominantly operates in pre-defined narrow swim lanes and
can act only in those dimensions, unlike generative AI, which is broader and
allows for more flexibility.
We outlined approach and architecture considerations for enterprises adopting
and implementing generative AI.
2 Introduction to large
language models
This chapter covers
Overview of Large Language Models (LLMs)
Key use cases powered by LLMs
Foundational Models and their impact on AI Development
New architecture concepts for LLMs - prompts, prompt
engineering, embeddings, tokens, model parameters, context
window, and emergent behavior
Comparison of open-source and commercial LLMs
Note:
This cut-off is important because, after this point, the model is not
aware of any events, advancements, new concepts, or changes in
language usage. For example, the training data for the GPT models was
cut off in September 2021, meaning the model does not know about
real-world events or advancements in various fields beyond that point.
The key point is that while these models can generate text based on
the data they were trained on, they do not learn or update their
knowledge after the training cut-off. They cannot access or retrieve
real-time information from the internet or any external database.
Their responses are generated purely based on patterns they have
learned during their training period.
Note:
It's worth noting that all these methods have their pros and cons:
Base LLMs are versatile and can handle many tasks without
additional training. However, they might not be as accurate or
reliable as you'd like for specific tasks or domains – especially in
an enterprise setting.
Instruction-based usage can be very effective for some tasks
but requires careful prompt crafting and doesn't fundamentally
change the model's capabilities. This is where many of the
prompt engineering techniques and best practices apply.
Fine-tuning can yield excellent results for specific tasks or
domains. However, it requires additional resources and comes
with the risk of overfitting the training data, which could limit
the model's ability to generalize to new examples.
Azure OpenAI and OpenAI are both services that provide access to
OpenAI's powerful language models, but they have some key
differences. Generally speaking, OpenAI caters more to individual
developers, small and medium businesses, and startups. Azure OpenAI,
on the other hand, is aimed at enterprises that need additional
security, availability in different parts of the world, and
regulatory compliance.
As a result, the APIs between the two are similar but not identical.
It is important to note that the underlying models are the same;
Azure OpenAI has its own deployment that incorporates the additional
features most enterprises require.
This model starts with the input text – the prompt. The prompt is
first converted into a sequence of tokens using tokenization. Each
token is then converted into a numerical vector via a process called
embedding, which acts as the encoder input.
Once all the layers have processed the information, the model
outputs a prediction for the next token in the learned sequence. This
outcome is converted back to text and is the response we see.
This process runs in an iterative loop and occurs for each new token
generated, hence creating a coherent text output. The final text that
the model generates is an emergent property of this layered,
iterative process. The final output sequence is also called a
Completion.
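The iterative loop described above can be sketched in a few lines. Here a simple lookup table plays the role of the model's next-token prediction (a stand-in for the transformer); the structure of the loop – predict a token, append it, feed the sequence back in, repeat until done – is the part being illustrated. All names here are hypothetical.

```python
# Toy "model": the most likely next token for each current token.
NEXT = {
    "<start>": "the", "the": "dog", "dog": "ran", "ran": "home", "home": "<end>",
}

def generate(prompt_token, max_tokens=10):
    completion = []
    token = prompt_token
    for _ in range(max_tokens):
        token = NEXT.get(token, "<end>")  # predict the next token
        if token == "<end>":              # stop when the model signals the end
            break
        completion.append(token)          # append and iterate
    return " ".join(completion)

print(generate("<start>"))  # -> the dog ran home
```

Each pass through the loop corresponds to one generated token; the joined result is the completion.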
2.6.1 Prompts
A prompt is how we "talk" to these models. A prompt is just text
describing the task we are trying to achieve using natural language.
The output of these models is also text. The ability to express our
intention in this manner (natural language) instead of conforming to
the input restrictions of a machine makes prompts powerful. Crafting
or designing the text in the prompt is akin to programming the
model and creating a new paradigm called Prompt Engineering,
which we cover later in the book.
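Since a prompt is just natural-language text, crafting one programmatically is ordinary string building. The sketch below shows a hypothetical summarization prompt template; the function name and wording are illustrative assumptions, not a prescribed format.

```python
def build_summarization_prompt(document, max_sentences=2):
    # Describe the task in natural language, then supply the input text.
    return (
        f"Summarize the following text in at most {max_sentences} sentences.\n\n"
        f"Text:\n{document}\n\n"
        f"Summary:"
    )

prompt = build_summarization_prompt("Generative AI creates new content.")
print(prompt)
```

The same template can be reused across documents, which is the starting point for the prompt engineering techniques covered later.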
2.6.2 Tokens
Tokens are the basic units of text that a large language model (LLM)
uses to process both the request and the response, i.e., to
understand and generate text. Tokenization is the process of
converting text into a sequence of smaller units called tokens.
Tokens are how we converse with these models, which makes them
one of the most fundamental elements in understanding LLMs.
Tokens are the new currency when incorporating LLMs into your
application or solution. Tokens directly correlate with the cost of
running models – both the monetary cost and the cost to the user
experience in terms of latency and throughput. The more tokens, the
more processing the model has to do, which means more
computational resources are required, resulting in lower
performance and higher latency.
Tokens are the pieces of text that LLMs process; a token can be an
individual character, a word, a sub-word, or even a larger linguistic
unit, depending on the tokenization algorithm. A rough rule of
thumb is that one token is approximately four characters, or 0.75
words, of English text. For most LLMs today, the token limit they
support (the context window) includes both the input prompt and
the response.
Let's look at an example: Figure 2.4 below shows how the
sentence "I have a white dog named Champ" gets tokenized (using
OpenAI's tokenizer in this case). Each block represents a different
token; in this example, we use eight tokens.
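The rules of thumb above can be turned into a rough estimator, useful for ballparking cost before calling a real tokenizer. This is an assumption-laden sketch (the function name is made up, and real token counts vary by tokenizer and language); for exact counts you would use the model's own tokenizer.

```python
def estimate_tokens(text):
    # Average the two rules of thumb for English text:
    # ~4 characters per token, and ~0.75 words per token.
    by_chars = len(text) / 4
    by_words = len(text.split()) / 0.75
    return round((by_chars + by_words) / 2)

print(estimate_tokens("I have a white dog named Champ"))  # -> 8
```

For this sentence the estimate happens to match the eight tokens shown in Figure 2.4, but treat the function only as an approximation.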