
Innovation Guide for AI Coding Assistants

Published 24 October 2023 - ID G00796830 - 27 min read

By Analyst(s): Arun Batchu, Philip Walsh, Jim Scheibmeir, Hema Nair, Manjunath Bhat,
Tigran Egiazarov

Initiatives: Software Engineering Technologies; Adopt Modern Architectures and Technologies; Artificial Intelligence

AI coding assistants enable developers to write code faster, boosting developer productivity and developer experience. Software engineering leaders must move quickly and decisively to harness the benefits of AI coding assistants while also mitigating security, quality and legal risks.

Overview
Key Findings
■ Software engineering leaders face a paralyzing number of options for AI coding assistants today (over 40 and growing in 2023, up from just a handful in 2022). Suppliers continuously evolve capabilities, complicating leaders’ ability to decide how to adopt these tools.

■ Early adopters of AI coding assistants have reported results that range from overly
optimistic to outright dismissive, making it difficult for software engineering leaders
to understand and calculate the potential return on investment.

■ Developers are racing ahead of their employers in using AI coding assistants, resorting to unapproved versions to get their job done when they lack access to enterprise-approved versions. This behavior greatly increases risks of enterprise code and data being used for training models, as evidenced by a prominent incident.

■ Challenges are inhibiting enterprisewide adoption of AI coding assistants: developers frustrated with errors in generated code, security leaders concerned by privacy and security issues, and legal and compliance leaders raising concerns about intellectual property.

Recommendations
Software engineering leaders should:


■ Break the cycle of decision paralysis by choosing a supplier that has a growing
number of paying enterprise customers and a compelling product roadmap, and
whose product has must-have capabilities such as toxicity and bias filters; code
completion and generation; and the ability to explain, debug, refactor and translate
code.

■ Assess the real productivity gains by carefully choosing key metrics to measure
gains and designing hands-on experiments to measure impacts for use cases
relevant to their enterprise context — instead of succumbing to vendor or peer claims
of productivity gains.

■ Provide an enterprise-ready alternative to personal AI coding assistants for their developers by fast-tracking proofs of concept and pilots of the product from their chosen supplier — instead of blocking access to AI coding assistants or taking a wait-and-see approach.

■ Drive enterprisewide adoption of AI coding assistants by creating a cross-functional task force of engineering, architecture, security and legal experts to identify and mitigate risks continually, from the evaluation phase to enterprise rollout.

Strategic Planning Assumption(s)


By 2028, 75% of enterprise software engineers will use AI coding assistants, up from less
than 10% in early 2023.

By 2028, systematic adoption of AI coding assistants in 2023 will result in at least 36%
compounded developer productivity growth.

Contribute to Beta Research

The following research is a work in progress that does not represent our final position. We
invite you to provide constructive feedback to help shape this research as it evolves. All
relevant updates and feedback will be incorporated into the final research.

Table of Contents

■ Market Definition

■ Emerging Technology Map


■ Market Trends

■ Market Evolution

■ Benefits and Use Cases

■ Piloting and Evaluating Vendors

■ Managing Risks

■ Representative Vendors

Market Definition

Gartner defines AI coding assistants as technology that helps developers write code by
using foundation models trained on millions of lines of code and code-related
documentation such as open-source software repositories, stack exchange data and, in
some cases, proprietary data that the supplier has access to. Developers prompt the
assistants with natural language and code snippets to generate new code. These tools
also analyze, explain, debug and refactor code; generate documentation; and translate
between programming languages. AI coding assistants support multiple programming
and natural languages, and they integrate into programming environments such as code
editors, command-line terminals and chat interfaces.

Standard features. AI coding assistants must be able to perform code completion. This
includes the ability to plug in to a code editor, generate code and generate comments.

Must-have features. AI coding assistants must also be able to:

■ Filter for toxicity and bias.

■ Enable developers to chat with the foundation model. This includes generating,
explaining, debugging, refactoring and translating code.

■ Enable developers to explore problem-solving approaches by asking free-form natural language questions.

■ Generate unit tests.

Optional features. AI coding assistants may also provide:


■ Deployment of private instances of the model

■ Security filters and intellectual property filters

■ Customization to enterprise codebase; this includes retrieval-augmented generation (RAG) grounded in enterprise repositories and fine-tuning of base foundation models with enterprise code

■ Training data transparency

■ Tight integration between editor and chat

■ Ability to debug errors

■ API access to foundation model

Emerging Technology Map

The Emerging Technology Map for AI coding assistants outlines the key use cases,
capabilities and trends in this market (see Figure 1).


Figure 1: Emerging Technology Map for AI Coding Assistants

Market Trends

Developers are racing ahead of their employers.

More than 80% of developers are already using these tools to write code for personal
projects, according to a May 2023 Stack Overflow survey. 1 After a slow start, enterprises
are accelerating their adoption of AI coding assistants too. In our May 2023 survey, only
13% of IT leaders reported that they had implemented AI coding assistants, with 36%
currently investigating and 29% planning on investigating in 2024. 2 In a more recent
Gartner survey for 2024, the numbers jumped — with 17% having already deployed, 23% in
the deployment stage, 20% piloting and 13% in the planning stage. 3

SaaS vendors are introducing their own AI coding assistants.


SaaS vendors (such as Salesforce and ServiceNow) have doubled down on providing their own AI coding assistants and models. 4,5 Some vendors are focused on selling general-purpose assistants, while others are focused on assistants that optimize developer experience on their proprietary platforms.

Startups are offering private model instances.

Startups are offering private instances of foundation models to meet emerging enterprise needs for accessing these models in an air-gapped environment. Private instances can be customized to an enterprise’s codebase through prompt engineering techniques or fine-tuning. These offerings improve the utility and relevance of AI-generated code by accounting for enterprise context and patterns. Further, private model instances provide greater transparency on the provenance of training datasets, which eases concerns about security and legal risks.

Open-source communities are expanding the market.

Open-source communities further increase the range of options as they continuously release and improve code foundation models and AI coding assistants. While most of these offerings are community-supported, commercially supported open-source models are also emerging.

Tools use a family of foundation models, not just one.

AI coding assistant vendors are using a family of models in their offerings, instead of one
large model. Different models are optimized for different use cases. For example, an AI
coding assistant will use different models for autocompletion, chat and code generation.
AI coding assistants are also providing direct access to the code-specific models beyond
the interactive assistants, which enables enterprises to build novel, custom solutions.

Vendors are releasing domain-specific AI coding assistants and models.

Vendors are offering domain-specific models and assistants that produce higher-quality
code. Domain-specific assistants and models outperform generic assistants in certain
areas, such as specialized infrastructure-as-code (IaC) languages, specific programming
language and unit tests. Vendors that have proprietary languages are also developing AI
coding assistants specific to their ecosystem, and vendors in the modernization business
are building tools to help with those initiatives.


Market Evolution
Back to top

When OpenAI introduced ChatGPT in November 2022, AI coding assistants captured the
attention of developers across the globe. AI coding assistants have now become one of
the most hyped technologies in the history of software development.

Since the release of ChatGPT, the number of commercial AI coding assistants has
exploded from just a handful in 2022 to over 40 in 2023, and it is still growing. 6 Their
capabilities have also evolved rapidly from just the ability for developers to complete code
faster to a much richer set that includes code generation, explanation and debugging. The
open-source community has also introduced a variety of code foundation models, some
of which are already offered as a service. We expect that open-source and commercial AI
coding assistants will continue to proliferate.

Vendors and early adopters of AI coding assistants have reported results that range from overly optimistic to outright dismissive, making it difficult for software engineering leaders to calculate the potential return on investment. 7,8,9

Vendors will rapidly adopt each other’s innovations and will continue to differentiate
themselves across various dimensions:


1. Vendors will introduce a wide variety of new coding assistants everywhere a
developer touches code, including the outer loop of DevOps (integration, deployment
and operations code).

■ Specialized AI coding assistants will assist with IaC languages. We will also
see assistants that are tailored to database languages, especially SQL, and
command-line terminals.

■ Code editors and integrated development environments (IDEs) will offer AI coding assistants through standard plug-in mechanisms. AI coding assistant vendors will integrate with more editors. IDE vendors will be forced to compete with both open-source and commercial offerings.

■ SaaS vendors will offer AI coding assistants for their developer platforms and proprietary coding languages, via popular code editors and IDEs as well as their proprietary web-based programming interfaces (see Cloud Development Environments in the Hype Cycle for Software Engineering, 2023).

■ Popular open-source code editors will be modified to offer generative AI capabilities natively, instead of as plug-ins.

2. AI coding assistants will continue to improve the quality of generated code by selectively using an organization’s own enterprise code context. They will use both fine-tuning and prompt-engineering methods. Vendors will offer one or both of these methods.

3. Vendors will introduce AI coding assistants that can understand complex code
dependencies and system boundaries. These tools will combine programming
language understanding methods with code foundation models to translate to a
target programming language ecosystem. They will help with refactoring initiatives
that remediate architecture technical debt.

4. Some vendors will allow a choice of curated third-party models, including open
source.

5. We will see widespread enterprise adoption of general-purpose coding assistants by 2026. Today’s dominant players will continue to gain market share, but newer players will capture niche domains (such as safety-critical and functional coding) and will appeal to organizations that have unique constraints (such as air-gapped environments).


6. Most AI coding assistants will change traditional programming paradigms steadily
and incrementally. Some startups and established vendors will attempt to reimagine
application development environments and disrupt the developer ecosystem.

Benefits and Use Cases

AI coding assistants boost developer productivity and developer experience by enabling several key use cases (see Figure 2).

Early adopters of AI coding assistants have reported results that range from overly
optimistic to outright dismissive, making it difficult for software engineering leaders to
calculate the potential return on investment. 7,8,9 Leaders should estimate the impact of AI
coding assistants on developer productivity and developer experience by designing and
conducting proofs of concept and pilots tailored to company-specific use cases.

Figure 2: AI Coding Assistant Benefits and Use Cases

Code Completion


Developers use code autocomplete features provided by code editors to boost the speed
at which they complete their programming tasks. As they type, the code editor helps them
discover what is possible. With a keystroke, it helps them complete the line.

However, developers still need to recall the libraries, methods and functions to call — an
increasingly impossible task, as the number of options keeps multiplying. They also have
to fill in the details after the autocomplete, and autocomplete features only address one
line of code at a time. Code editors are not able to suggest alternate ways of solving the
problem.

AI coding assistants enhance the code completion use case in the following ways:

■ Integration with development environments — AI coding assistants integrate directly into the code editors that developers use. This seamless integration allows for real-time code completion.

■ Predictive power — AI coding assistants not only look at the code and comments
above the cursor but can also scan any following lines to learn the context. This
context awareness enables them to predict complex code structures such as loops
and function blocks.

■ Relevance and accuracy — AI coding assistants excel at patterns they have seen
before, especially from open-source data that was part of their training set. They are
particularly adept at predicting boilerplate and repetitive code.

■ Developer experience — AI coding assistants free up developers from menial tasks, enabling them to focus on the creative and complex aspects of coding that are the most interesting.

■ Semantic understanding — AI coding assistants are capable of understanding the semantics and naming conventions of variables and methods within the context of the code file.

■ Contextual awareness — Beyond the code file, AI coding assistants can use context
from other files that the developer has open in the code editor, as well as other
gleanable metadata, to improve their predictions.

AI coding assistants can also generate comments and documentation strings (docstrings) that explain the purpose and functionality of methods. This capability is helping developers increase the number of useful comments and docstrings in their codebases.
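
To make the interaction concrete, the sketch below shows a typical completion: the developer types the comment and the function signature, and the assistant proposes the docstring and body. The suggestion shown is hypothetical output, not tied to any particular vendor.

# The developer types the comment and the signature; the assistant
# proposes the docstring and body (hypothetical suggestion shown inline).

# Compute the median of a list of numbers.
def median(values: list[float]) -> float:
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:  # odd count: middle element
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2  # even count: average the middle two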


Unit Test Generation
One useful variant of code completion is developers’ ability to prompt the AI assistant to
generate unit tests. Unit tests are essential for improving program reliability, but they are
tedious to produce. Thus, developers write and maintain few unit tests. There is increasing
evidence from our inquiry data that developers who use AI coding assistants are writing
more unit tests than before. However, developers may substitute the rigor and benefits of
test-driven development (TDD) with the ease of generating tests with these tools.
Developers may also accept unit tests that are neither complete nor rigorous. Leaders should coach their teams to work with AI coding assistants on unit test writing, not delegate it to them entirely.
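
For example, prompted with a small numeric helper such as the median() function from the previous sketch, an assistant might generate pytest tests like the following. The statistics_utils module name is an assumption for illustration; note that the final test asserts a contract the author may never have specified, which is exactly what reviewers should question.

import pytest

from statistics_utils import median  # assumed module holding the median() helper

def test_median_odd_count():
    assert median([3.0, 1.0, 2.0]) == 2.0

def test_median_even_count():
    assert median([1.0, 2.0, 3.0, 4.0]) == 2.5

def test_median_empty_list_raises():
    # A generated test can encode a contract nobody agreed to; review
    # whether raising an exception here is actually the intended behavior.
    with pytest.raises(IndexError):
        median([])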

Infrastructure-as-Code (IaC) Script Generation and Completion


AI coding assistants enable developers to complete and generate shell scripts, as well as infrastructure-related, domain-specific languages. Some IaC languages are proprietary, and AI coding assistants trained on open-source data without sufficient proprietary data cannot produce high-quality completions or generations for them. In this case, vendors supporting these languages are releasing proprietary models trained on the code and data they have access to.

Code Understanding
AI coding assistants enable developers to paste pieces of code into the chat interface and
get explanations in natural language. They also allow developers to highlight code within
the code editors for which they need an explanation. With these features, AI coding
assistants increase developers’ ability to understand complex and unfamiliar code, and
even code in an unfamiliar programming language.

Some vendors are combining the power of code foundation models with program
understanding — the science of making sense of the structure and semantics of a
program, and the programs to which it is connected, to generate a code graph. This
combination helps developers improve their understanding of code dependencies and
helps architects ensure systems’ structure and components adhere to enterprise design
principles.

Code Generation
Conversational chat interfaces in AI coding assistants, backed by code foundation
models, enable developers to explore and generate large chunks of new code — even
entire programs — by prompting the AI with free-form natural language.


AI coding assistants are rapidly adding these chat interfaces in conjunction with code
completion. This allows developers to use free-form natural language to generate code
and to use the code completion interface dynamically. Vendors are increasingly
integrating these two form factors to multiply the boost in developer productivity.
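
As a minimal illustration of this flow, the prompt below is the kind of free-form request a developer might type into the chat, and the program is the kind of output an assistant could return (the orders.csv file name is a placeholder):

# Prompt: "Write a Python script that reads a CSV file and prints row
#          counts grouped by the values in the 'status' column."
#
# Hypothetical assistant output:

import csv
from collections import Counter

def count_by_status(path: str) -> Counter:
    """Count rows per 'status' value in a CSV file with a header row."""
    with open(path, newline="") as f:
        return Counter(row["status"] for row in csv.DictReader(f))

if __name__ == "__main__":
    for status, count in count_by_status("orders.csv").most_common():
        print(f"{status}: {count}")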

Code Debugging
When developers get stuck debugging errors without an AI coding assistant, they rely on
peers to help them or search the internet for fixes. The conversational chat form factor of
an AI coding assistant is helpful in debugging errors (as long as the model has been
trained on the versions of code that have errors).

Pull Request Summarization


When developers using a Git-based version control system are ready to get their code merged, they issue what is called a “pull request” for peers to review the changes before committing them to the integration branch. Automatically generated pull request descriptions save time, improve communication and review efficiency, and enhance reasoning about code changes. The AI feature analyzes the diff and the developer’s work to create a narrative pull request summary that provides context and reasoning about the updates. Descriptions follow pull request templates and walk reviewers through the code revisions, so reviewers can better understand the purpose and implications of the changes. By generating summaries of code diffs automatically, the feature reduces manual documentation effort and streamlines the review process.
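
A minimal sketch of how such a feature fits together is shown below, assuming a hypothetical complete() helper that stands in for whichever model API the chosen vendor exposes:

import subprocess

def complete(prompt: str) -> str:
    """Placeholder for a call to the chosen vendor's model API (assumption)."""
    raise NotImplementedError("wire this to your AI coding assistant's API")

def summarize_pull_request(base: str = "main", head: str = "HEAD") -> str:
    """Build a narrative pull request description from the diff between two refs."""
    diff = subprocess.run(
        ["git", "diff", f"{base}...{head}"],
        capture_output=True, text=True, check=True,
    ).stdout
    prompt = (
        "Summarize the following code changes as a pull request description. "
        "Walk through the revisions in a narrative format and explain the "
        "purpose and implications of each change:\n\n" + diff
    )
    return complete(prompt)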

Code Translation
An emerging use case for AI coding assistants is translating code from one programming
language into another. This translation capability, also known as transpilation, is helping
developers proficient in one language become productive in other languages more quickly.
The ability to translate programming languages helps developers rewrite programs.

Some developers are combining this translation capability with code explainability and
generation to help with their modernization efforts. However, one-to-one translation may
not result in well-written, targeted code and may require refactoring.
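
As a sketch, this capability is often exposed as a prompt over the source snippet. The complete() helper below is the same assumed stand-in for a vendor model call used earlier; treat the output as a starting point for review and refactoring:

def complete(prompt: str) -> str:
    """Placeholder for a vendor model call (assumption)."""
    raise NotImplementedError

def translate_code(source: str, source_lang: str, target_lang: str) -> str:
    """Ask the model for a behavior-preserving translation between languages."""
    prompt = (
        f"Translate the following {source_lang} code to idiomatic {target_lang}. "
        f"Preserve behavior and do not add features:\n\n{source}"
    )
    return complete(prompt)

# Example usage:
# java_src = open("LegacyParser.java").read()
# print(translate_code(java_src, "Java", "Python"))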

Code and System Refactoring


Refactoring is the process of simplifying the structure of code while preserving its semantics. Popular IDEs provide built-in refactoring assistance for popular programming languages, but AI coding assistants go further. They make refactoring easier by making proactive suggestions, including in editors that do not have refactoring capabilities for a target programming language.

While code refactoring within a program is useful, some AI coding assistants are enabling
developers to refactor larger portions of code across multiple programs. Vendors are
combining code graphs (i.e., network structures of code and their dependencies) with the
reasoning ability of code foundation models to achieve large-scale refactoring.
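
The code graph idea can be pictured with a toy sketch: the function below builds a module-level dependency map for a Python repository from import statements alone. Commercial tools resolve much richer relationships (calls, types, data flow), so this only illustrates the kind of structure involved.

import ast
from pathlib import Path

def import_graph(repo_root: str) -> dict[str, set[str]]:
    """Map each Python file in a repo to the top-level modules it imports."""
    graph: dict[str, set[str]] = {}
    for path in Path(repo_root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(errors="ignore"))
        except SyntaxError:
            continue  # skip files that do not parse
        deps: set[str] = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                deps.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module.split(".")[0])
        graph[str(path.relative_to(repo_root))] = deps
    return graph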

Code Modernization and Technical Debt Reduction


A handful of vendors are introducing sophisticated AI coding assistants that understand
complex program dependency structure across a large swath of programs. This approach
helps developers and system architects reduce technical debt and modernize their code.
These leading vendors are introducing new user interfaces including dependency
visualizations, code pattern search, conversational chat and change impact analysis. The
vendors are targeting mission-critical systems that have monolithic architectures and are
written in older programming languages that few developers understand.

Fine-Tuning Foundation Models


The open-source community and commercial vendors are providing access to code
foundation models. This access has opened up new possibilities for improving developer
productivity and developer experience, such as fine-tuning the models with business
context code and data for more relevant, stylized code that incorporates enterprise code
patterns.
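
For illustration, a fine-tuning pipeline typically starts by exporting enterprise code into the vendor’s training-record format. The sketch below assumes a JSONL file of prompt/completion pairs; the actual schema, and the usefulness of such naive records, varies by vendor.

import json
from pathlib import Path

def build_finetuning_dataset(repo_root: str, out_path: str) -> int:
    """Emit one assumed prompt/completion record per Python file in a repo."""
    count = 0
    with open(out_path, "w") as out:
        for path in Path(repo_root).rglob("*.py"):
            record = {
                "prompt": f"# File: {path.name}\n",  # minimal context prompt
                "completion": path.read_text(errors="ignore"),
            }
            out.write(json.dumps(record) + "\n")
            count += 1
    return count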

Acceptance Test Generation


An emerging use case is the use of foundation models, trained on natural and
programming languages, to generate acceptance tests (e.g., in the form of Gherkin) from
user stories. Using such a model, clients have not only converted user stories expressed in
natural language into executable acceptance tests but also improved existing tests.
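
A sketch of this flow appears below, again with an assumed complete() stand-in for a vendor model call; the Gherkin in the trailing comment shows the shape of output such a model might produce for the sample story.

def complete(prompt: str) -> str:
    """Placeholder for a vendor model call (assumption)."""
    raise NotImplementedError

def generate_acceptance_test(user_story: str) -> str:
    """Ask the model to turn a user story into a Gherkin scenario."""
    prompt = (
        "Convert this user story into a Gherkin scenario with "
        "Given/When/Then steps:\n\n" + user_story
    )
    return complete(prompt)

# generate_acceptance_test("As a returning customer, I want to reset my "
#                          "password so that I can regain access to my account.")
# Plausible output:
#   Feature: Password reset
#     Scenario: Returning customer resets a forgotten password
#       Given a registered customer who has forgotten their password
#       When they request a reset and follow the emailed link
#       Then they can sign in with the new password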

Piloting and Evaluating Vendors


The goal of piloting AI coding assistants is to establish proofs of concept (POCs) and gauge possible use cases. The pilot should not be about making long-term vendor decisions, because vendors are rapidly evolving their offerings and adding new features.

To pilot and evaluate vendors for AI coding assistants, software engineering leaders
should:

■ Start by establishing a clear value hypothesis tied to business metrics while mitigating threats. According to the 2022 Gartner AI Use Case ROI Survey, organizations with the greatest AI maturity are more likely to define their business metrics at the ideation phase of every AI use case. 10 These leading organizations place a significantly greater emphasis on risk mitigation upfront when evaluating use cases. A common metric is the time saved in a developer’s day or week. Another is cycle time improvement in user story delivery. Yet another is improvement in developer satisfaction. An emerging metric is whether the time and effort saved can be redirected to improving the quality of software delivered.

At the same time, the 2023 Gartner Security in Software Engineering Survey found that
while two-thirds of respondents believe generative AI improves productivity, more than
half also believe generative AI in coding makes their organization more vulnerable to
threats. 11 Put a risk mitigation plan in place by monitoring and correcting generated code
for security vulnerabilities; replicas from the AI’s training dataset; and variable, method
and API hallucinations.

■ Establish a cross-functional task force of engineering, architecture, security and legal experts. The task force should be involved early in the vendor selection process to assess risks such as generated security vulnerabilities and copyrighted code and documentation. The team should also monitor the generated output for bias, explicit material and other ethical challenges.

■ Narrow down a list of vendors for POCs. Rule out vendors that do not meet the pilot
team’s risk assessment criteria.

■ Baseline key metrics before using AI coding assistants. Use common developer productivity and experience frameworks, such as DevOps Research and Assessment (DORA) metrics and the SPACE framework, which covers satisfaction and well-being, performance, activity, communication and collaboration, and efficiency and flow (see How to Measure and Improve Developer Productivity). A minimal baselining sketch follows this list.


■ Assess the impact of AI coding assistants using both quantitative metrics and
qualitative survey data. Avoid relying on any single category of metrics. Use a
combination of activity-based output metrics and value-based impact metrics, as
discussed above. Developer surveys are a common method to assess the impact.
Software engineering intelligence platforms can also help measure the impact of AI
coding assistants (see Innovation Insight for Software Engineering Intelligence
Platforms).

■ Select top-performing tools based on POC results and user feedback. Expand the
pilot across an increasingly diverse user base over the next several weeks to validate
the impact and efficacy of risk mitigation measures. Refine and iterate the pilot
quarterly to address key pain points, expand to new use cases and scale successes
(see How to Pilot Generative AI). Establish mechanisms to allow users to learn from
each other. A simple internal chat can evolve into a community of practice and allow
mentors to develop and disseminate best practices.
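
As referenced in the baselining step above, the following sketch computes one DORA-style baseline, median lead time for changes, before a pilot begins. The record fields are assumptions about an organization’s own delivery data, not a standard schema.

from datetime import datetime
from statistics import median

# Replace with records exported from your CI/CD system (field names assumed).
deployments = [
    {"committed": "2023-09-01T09:00", "deployed": "2023-09-02T15:00"},
    {"committed": "2023-09-03T10:30", "deployed": "2023-09-03T18:30"},
]

def lead_time_hours(record: dict) -> float:
    """Hours from commit to production deployment for one change."""
    start = datetime.fromisoformat(record["committed"])
    end = datetime.fromisoformat(record["deployed"])
    return (end - start).total_seconds() / 3600

baseline = median(lead_time_hours(r) for r in deployments)
print(f"Baseline median lead time for changes: {baseline:.1f} hours")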

Table 1 summarizes the key actions that software engineering leaders should take to pilot
and evaluate vendors for AI coding assistants.


Table 1: Action Plan for Piloting and Evaluating Vendors
(Enlarged table in Appendix)

Managing Risks

The cross-functional task force of engineering, architecture, security and legal experts must identify and mitigate the risks of using AI coding assistants, both upfront during vendor selection and on an ongoing basis. The team should take action to manage the following risks of adopting AI coding assistants.

Low code and text quality. Most training data comes from open domains such as open source. If the training data is not filtered for problems such as insecure code, code with nonpermissive licenses, toxic material or biased content, AI coding assistants can reproduce such content.


■ Mitigation strategy: Train developers to evaluate code and text quality before using it. Give preference to vendors that apply demonstrable, responsible AI methods to mitigate such problems. In addition, use technologies in your DevOps pipeline to catch such problems automatically, should they slip past manual inspection.

Use of consumer versus enterprise-grade AI coding assistants. Vendors use prompts and conversations to improve models, so models trained on prompts could expose proprietary code and associated data to third parties, who can put it to competitive or, worse, harmful use. 12

■ Mitigation strategy: Use only enterprise-grade AI coding assistants. Carefully evaluate terms and conditions to match your organization’s privacy and security posture.

Intellectual property lawsuits. Pending lawsuits against a vendor may halt adoption of
the affected vendor’s solution and decelerate the overall adoption of AI coding assistants.

■ Mitigation strategy: Insist on intellectual property indemnification. See Quick Answer: How to Protect IP When Using GenAI for Software Engineering.

Regional regulations. Legal constraints may prevent organizations from adopting their
preferred tools.

■ Mitigation strategy: Actively test multiple AI coding assistants that comply with
existing and emerging regulations.

Diminished realized productivity gains. Productivity gains in coding activities may be dwarfed by the overall inefficiency of an organization’s software development life cycle.

■ Mitigation strategy: Seek to eliminate bottlenecks outside the coding activities that
diminish the productivity gains. Use software engineering intelligence platforms to
help accomplish this goal.

Lack of proper training. Ineffective training will dampen the return on investment for AI
coding assistants.


■ Mitigation strategy: Make self-service training materials available to developers,
including online courses and books. Provide training via mentors who have
mastered use of the preferred AI coding assistant. Develop communities of practice
to facilitate peer-to-peer learning.

Lack of verification. Trusting generated code and text without verification may lead to low
production quality.

■ Mitigation strategy: Introduce stringent unit test and code review stage gates, as well as automated code inspection tools. Train your engineers to detect hallucinations and errors such as nonexistent methods and APIs, inefficient code, vulnerabilities, and fabricated values such as dates, numbers, keys and URLs.
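
One such automated inspection can be sketched in a few lines of Python: parse the generated code and flag imports that do not resolve in the target environment, a cheap first check for hallucinated libraries.

import ast
import importlib.util

def missing_imports(generated_code: str) -> list[str]:
    """Return imported top-level modules that do not resolve locally."""
    modules: set[str] = set()
    for node in ast.walk(ast.parse(generated_code)):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    return [m for m in sorted(modules) if importlib.util.find_spec(m) is None]

print(missing_imports("import os\nimport totally_made_up_lib"))  # ['totally_made_up_lib']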

Representative Vendors

The list of vendors below is not exhaustive. This section is intended to provide a better understanding of the technology and its vendor offerings.


Table 2: Representative Vendors for AI Coding Assistants
(Enlarged table in Appendix)

Evidence
1. 2023 Developer Survey, Stack Overflow.

2. 2023 Gartner IT Leader Poll on Generative AI for Software Engineering. This survey was conducted online from 2 through 8 May 2023 to gather data regarding the current and expected use of generative AI in software engineering. In total, 91 IT leaders who are members of Gartner’s Research Circle, a Gartner-managed panel, participated. Participants were primarily from North America (n = 44) and EMEA (n = 33); other respondents came from Asia/Pacific (n = 12) and Latin America (n = 2). Disclaimer: Results of this survey do not represent global findings or the market as a whole but reflect the sentiments of the respondents and companies surveyed.


3. 2024 Gartner Technology Adoption Roadmap for Large Enterprises Survey. This survey was conducted through an online panel survey among more than 600 respondents from North America, EMEA and Asia/Pacific across industries and enterprises with annual revenue of more than $1 billion. This research summarizes findings from more than 120 respondents identified as software engineering leaders. These results will allow software engineering leaders to cut through vendor hype to determine which technologies to invest in and when, in order to remain competitive among peers.

4. Einstein for Salesforce Developers, Salesforce.

5. Generative AI, ServiceNow.

6. Gartner’s Secondary Research Service (SRS) team contributed to validating the vendor profiles, which included information about product offerings and supported use cases. The research was conducted by the SRS team members Mujtaba Shamim and Romita Datta Chaudhuri.

7. Research: Quantifying GitHub Copilot’s Impact on Developer Productivity and Happiness, The GitHub Blog.

8. Announcing New Tools for Building With Generative AI on AWS, AWS Machine Learning Blog.

9. Westpac Sees 46 Percent Productivity Gain From AI Coding Experiment, iTnews.

10. 2022 Gartner AI Use Case ROI Survey. This survey sought to understand where organizations have been most successful in deploying AI use cases and figure out the most efficient indicators they have established to measure those successes. The research was conducted online from 31 October through 19 December 2022 among 622 respondents from organizations in the U.S. (n = 304), France (n = 113), the U.K. (n = 106) and Germany (n = 99). Quotas were established for company sizes and industries to ensure a good representation across the sample. Organizations were required to have developed AI to participate. Respondents were required to be in a manager role or above and have a high level of involvement with the measuring stage and at least one stage of the life cycle from ideating to testing AI use cases. Disclaimer: The results of this survey do not represent global findings or the market as a whole, but reflect the sentiments of the respondents and companies surveyed.


11. 2023 Gartner Security in Software Engineering Survey. This survey was conducted online from 7 June through 14 July 2023. It sought to understand the different aspects of security practices, such as responsibilities, skills, metrics, requirements, processes, tools and technologies, and security roles in software engineering. In total, 300 software engineers and security professionals, up to the role of senior vice president and across industries, participated. The respondents were from North America (n = 178), EMEA (n = 76) and Asia/Pacific (n = 46). Disclaimer: The results of this survey do not represent global findings or the market as a whole, but reflect the sentiments of the respondents and companies surveyed.

12. Samsung Bans Staff’s AI Use After Spotting ChatGPT Data Leak, Bloomberg.

Document Revision History


Innovation Insight for ML-Powered Coding Assistants - 21 November 2022

Recommended by the Authors


Some documents may not be available as part of your current Gartner subscription.

Assessing How Generative AI Can Improve Developer Experience
Emerging Tech: Generative AI Code Assistants Are Becoming Essential to Developer Experience
Quick Answer: Should Software Engineering Teams Use ChatGPT to Generate Code?
Quick Answer: How Can Generative AI Tools Speed Up Software Delivery?
Quick Answer: How Can Generative AI be Used to Improve Testing Activities?
Quick Answer: How to Ensure Quality in AI-Generated Code
Quick Answer: Can We Use ChatGPT for Code Transformation and Modernization?
Quick Answer: How to Protect IP When Using GenAI for Software Engineering
Quick Answer: Mitigating the Top Five Security Risks of AI Coding
Hype Cycle for Software Engineering, 2023


© 2023 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of
Gartner, Inc. and its affiliates. This publication may not be reproduced or distributed in any form
without Gartner's prior written permission. It consists of the opinions of Gartner's research
organization, which should not be construed as statements of fact. While the information contained in
this publication has been obtained from sources believed to be reliable, Gartner disclaims all warranties
as to the accuracy, completeness or adequacy of such information. Although Gartner research may
address legal and financial issues, Gartner does not provide legal or investment advice and its research
should not be construed or used as such. Your access and use of this publication are governed by
Gartner's Usage Policy. Gartner prides itself on its reputation for independence and objectivity. Its
research is produced independently by its research organization without input or influence from any
third party. For further information, see "Guiding Principles on Independence and Objectivity." Gartner
research may not be used as input into or for the training or development of generative artificial
intelligence, machine learning, algorithms, software, or related technologies.


Table 1: Action Plan for Piloting and Evaluating Vendors

Step 1: Planning (4-8 weeks)
Goal: Set a clear value hypothesis. Formalize the pilot team. Make a risk assessment framework.
Details: Identify business outcomes and KPIs to measure success. Involve the task force to assess cost and technical feasibility, as well as to identify legal and security risks upfront.
People involved: Senior engineers; engineering leaders; data science team; risk team.

Step 2: Research and Design (1-4 weeks)
Goal: Establish baseline metrics. Devise an assessment plan and mechanisms. Create a vendor shortlist.
Details: Identify metrics to assess the impact on developer experience and productivity. Design assessment mechanisms such as surveys and guidance on running POC experiments. Assess vendors based on the criteria for cost, technical capability and risk established in the planning phase.
People involved: Senior engineers; engineering leaders; data science team.

Step 3: POC Evaluation (2-4 weeks)
Goal: Shortlist top-performing vendors. Start testing tools.
Details: Test tools across a variety of use cases. Document the impact on developer experience and productivity.
People involved: Small group of engineers; engineering leaders.

Step 4: Piloting (4-8 weeks)
Goal: Validate tool effectiveness. Validate risk mitigation strategy. Identify pain points and opportunities.
Details: Expand the pilot to include a diverse user base. Offer training, quantify time savings and assess user experience.
People involved: Tens to hundreds of people, including product team(s); platform team; risk team.

Step 5: Rollout (4-8 weeks)
Goal: Successfully implement AI coding assistants across the organization with minimal disruptions.
Details: Execute phased organizational rollout.
People involved: All teams across the organization; COE; platform engineering team; mentors.

Step 6: Iteration (ongoing)
Goal: Sustain optimized usage and continually improve tool capabilities.
Details: Continuously monitor and evaluate using real-time analytics platforms.
People involved: COE; CoP; data science team.

COE = center of excellence; CoP = community of practice; KPIs = key performance indicators

Source: Gartner



Table 2: Representative Vendors for AI Coding Assistants

Anthropic (Claude): Code completion, code understanding, code generation, code debugging, code translation

Amazon (CodeWhisperer): Code completion, code generation

CircleCI (Ponicode): Unit test generation

Codeium (Codeium): Code completion, code understanding, code generation, code debugging, code and system refactoring, code translation, code applications

Diffblue (Diffblue Cover): Unit test generation

GitHub (Copilot): Code completion, code understanding, code generation, code debugging, code refactoring, code translation

GitLab (Duo): Code completion, code understanding, code generation, code debugging, code refactoring, code translation

Google (Codey, Duet AI): Code completion, code understanding, code generation, code debugging, code translation, code applications

Hugging Face (SafeCoder): Code completion, code understanding, code generation, code debugging, code translation

IBM (watsonx): Code and system refactoring

Meta (Code Llama): Code completion, code understanding, code generation, code translation, code applications

Morphis Tech (K.Explorer): Code understanding, code generation, code debugging, code and system refactoring

OpenAI (ChatGPT, GPT-4, GPT-3.5 and other foundation models): Code completion, code understanding, code generation, code debugging, code translation, code applications

Parasoft (Jtest): Unit test generation

Red Hat (Ansible Lightspeed): Code completion, code understanding, code debugging

Replit (Replit): Code completion, code understanding, code generation, code debugging, code refactoring, code translation

Salesforce (Apex GPT): Code completion, code generation

ServiceNow (Now Assist): Code completion, code generation

Sourcegraph (Cody): Code completion, code understanding, code generation, code debugging, code and system refactoring, code translation

Stability AI (StableCode): Code completion, code understanding, code generation, code translation, code applications

Tabnine (Tabnine): Code completion, code understanding, code generation, code debugging, code refactoring, code translation

Source: Gartner (October 2023)
