AI Career Guide
How a machine learning engineer cut his teeth in AI
What it takes to succeed in an AI role

AI creates 40,000 new roles at Accenture
Why the firm plans to hire thousands of AI experts as part of a $3bn investment

Navigating AI: Security red flags to watch out for
A security research lab founder advises on the risks of AI
How a machine learning engineer cut his teeth in AI
What it takes to succeed in an AI role
At 8am, Saurabh Agarwal can be found flipping through the pages of the latest artificial intelligence (AI) research papers over coffee to keep pace with the dynamic field. Agarwal, a machine learning (ML) engineer at MavQ, an India-based AI platform company, has seen AI developments unfolding since the day he dipped his toes into the technology.

A computer science graduate from Jaipur in North India, Agarwal started fiddling with the power of data during a full-time internship in data engineering and cloud. He soon found himself cutting his teeth deeper into data analytics and building data pipelines for ML models.

Under the guidance of mentors such as Gaurav Kheterpal, a Salesforce trailblazer, Mulesoft specialist and multicloud expert, and with the support of AI communities, Agarwal cemented his grip on ML modelling. For over two years, he learnt how to develop ML models from scratch and, later, got the hang of deploying them.

Besides scaling models, he also handles models for converting paper documents into digital formats while working on deep learning. Alongside his work, he is finishing an executive course in AI and ML from the Birla Institute of Technology and Science, Pilani.

"The sheer joy of seeing what data can do is exciting. In the previous IT boom, we had a faint idea of what's possible, but back then data was limited. Today, the exponential jump in data consumption has made models easy and resource-friendly," says Agarwal.

"In the next five years, I see a lot of scope for MLOps [machine learning operations] and model productisation. It is very rewarding to be involved in scaling models at my company, and also making sure that they are not bulky or expensive. Keeping a good tab on costs and competitive edge makes this job thrilling every day," he adds.

Lucrative career
Machine learning can be a lucrative career in India. According to Agarwal, an average ML engineer's salary package can range from Rs 15-20 lakhs (US$18,000-24,000) per annum, with higher-level domains commanding Rs 60-70 lakhs per annum as one gets more proficient.

Besides MLOps, which involves productisation and optimisation of ML models, applied ML and advanced modelling are other hot domains. But those who are clear and sharp about their specific domain will do the best, according to Agarwal.

"Anyone with a good command of software engineering will find it easy to get in. One has to be comfortable and cognisant about data – that's the basic trait. Before you get into ML modelling, an appreciation and a knack for how data affects business is paramount in this profession," he says.

Explainability and the ethics of modelling
He also points out the importance of explainability and the ethics of modelling: "One has to be able to ask, 'Why is this model telling us this?' and be aware of its implications."

Agarwal spends most of his day leading a team of 15 to 20 people to productise ML models. He is also involved in benchmarking models against the top ones in the industry. "We have to constantly check how accurate, how good and how fast we are vis-à-vis the top crust in the industry. Right now, we are either on par or better than the big names," he says.

Agarwal is confident that AI will only amplify data's potential for business in the future. "As models and deep learning get better, we will find AI working to our advantage. For that, scaling and executing models while confronting the 'black box' problem would be crucial," he adds, referring to the challenge of understanding how decisions are made by some complex AI models.

Agarwal usually wraps up his day with a jog or a badminton match. On the track, he knows when to sprint and when to pause, something that comes naturally to a good AI professional.

His advice to ML and AI aspirants is twofold: choose your domain well and have an innate love for data. "AI is very vast. Understand your own domain. Do not hype it or dismiss it. It will not take away all jobs. The calculator never did. We need to adapt to AI and make it augment us. It's that simple," he says.
AI creates 40,000 new roles at Accenture
Why the firm plans to hire thousands of AI experts as part of a $3bn investment

Accenture will invest $3bn over three years in its data and artificial intelligence (AI) practice, which will include adding 40,000 AI experts to the workforce, making acquisitions and training existing staff.

AI-skilled staff will be added through hiring, training and acquisition, said Accenture.

The IT services giant said it would use the data and AI practice to develop services for 19 specific industry sectors based on AI, to take advantage of what it described as the generative AI "megatrend" in its recent technology vision report.

Following the $3bn investment announcement, Paul Daugherty, group chief executive at Accenture Technology, said: "Over the next decade, AI will be a megatrend, transforming industries, companies, and the way we live and work, as generative AI transforms 40% of all working hours.

"Our expanded data and AI practice brings together the full power and breadth of Accenture in creating industry-specific solutions that will help our clients harness AI's full potential to reshape their strategy, technology and ways of working, driving innovation and value responsibly and faster than ever before."

As part of the investment, Accenture said it would double its AI talent pool to 80,000.

Accenture CEO Julie Sweet said: "Companies that build a strong foundation of AI by adopting and scaling it now, where the technology is mature and delivers clear value, will be better positioned to reinvent, compete and achieve new levels of performance. Our clients have complex environments, and at a time when the technology is changing rapidly, our deep understanding of ecosystem solutions allows us to help them navigate quickly and cost-effectively to make smart decisions."

Company-wide team
Accenture has already established a company-wide team, the generative AI and large language model (LLM) center of excellence. It has also published its A new era of generative AI for everyone study, advising firms on the use of the technology.

In March, when announcing its annual tech vision, Accenture said the distance between digital and physical worlds would "collapse" over the next 10 years, with the adoption of generative AI accelerating change. A study of nearly 5,000 senior executives across the globe, which was part of Accenture's tech vision report, When atoms meet bits: The foundations of our new reality, revealed 96% believe the convergence of the digital and physical worlds will transform business in the next 10 years.
Navigating AI: Security red flags to watch out for
A security research lab founder advises on the risks of AI
Lou Steinberg, founder and managing partner of CTM Insights, a cyber security research lab and incubator, doesn't watch movies about artificial intelligence (AI) because he believes what he sees in real life is enough.

Steinberg has also worn other hats, including a six-year tenure as chief technology officer of TD Ameritrade, where he was responsible for technology innovation, platform architecture, engineering, operations, risk management and cyber security. He has worked with US government officials on cyber issues as well. Recently, after a White House meeting with tech leaders about AI, Steinberg spoke about the benefits and downsides of having AI provide advice and complete tasks.

Firms with agendas might try to skew training data to get people to buy their cars, stay in their hotels, or eat at their restaurants. Hackers may also change training data to advise people to buy stocks that are being sold at inflated prices. They may even teach AI to write software with built-in security issues, he contended.

In an interview with Computer Weekly, Steinberg drilled down into these red flags and how organisations can mitigate the risks of AI.

What would you say are the top three things we should really be worried about right now when it comes to AI?
Steinberg: My short- to medium-term concerns with AI are in three main areas.

First of all, AI- and machine learning-powered chatbots and decision support tools will return inaccurate results that are misconstrued as accurate, as they use untrustworthy training data and lack traceability.

Second, the lack of traceability means we don't know why AI gives the answers it gives – though Google is taking an interesting approach by providing links to supporting documentation that a user can assess for credibility.

Third, attempts to slow the progress of AI, while well meaning, will slow the pace of innovation in Western nations while countries like China continue to advance. While there have been examples of internationally respected bans on research, such as human cloning, AI advancement is not likely to be slowed globally.
How soon can bad actors jail-break AI, and what would that mean for society?
People have already gotten past guardrails built into tools like ChatGPT through prompt engineering. For example, a chatbot might refuse to generate code that is obviously malware, but will happily create one function at a time that can be combined to create malware.

Jail-breaking of AI is already happening today, and will continue as both the guardrails and the attacks gain in sophistication. The ability to attack poorly protected training data and bias the outcome is an even larger concern. Combined with the lack of traceability, we have a system without feedback loops to self-correct.

When will we get past the black box problem of AI?
Great question. As I said, Google appears to be trying to reinforce answers with pointers to supporting data. That helps, though I would rather see a chain of steps that led to a decision. Transparency and traceability are key.

Who can exploit AI the most? Governments? Big tech? Hackers?
All of the above can and will exploit AI to analyse data, support decision-making and synthesise new outputs. Exploiting AI comes down to whether the use cases will be good or bad for society.

If made by a tech company, it will be to gain commercial advantage, ranging from selling you products to detecting fraud to personalising medicine and medical diagnoses. Businesses will also tap cost savings by replacing humans with AI, whether to write movie scripts, drive a delivery truck, develop software, or board an airplane by using facial recognition as a boarding pass.

Many hackers are also profit-seeking, and will try to steal money by guessing bank account passwords or replicating a person's …
detection will create cases where even real images, like your medical scans, are untrustworthy. CTM has technology to isolate untrustworthy portions of data and images, without requiring everything to be thrown out. We are working on a new way to detect synthetic deepfakes.