10 Ethical Dilemmas in Science


The pseudoscience of skincare

No matter how much you spend, you’re still going to get old

By 2025, the global skincare market is estimated to be worth $189.3 billion. But even sooner than that, by 2020, a subset of skincare
dubbed “skin tech” is expected to be worth $12.8 billion. Beauty
devices such as LED masks, electronic face scrubbers and
microneedlers, facial massagers, smart mirrors, skincare cameras, and
handheld machines that deliver a microcurrent to your skin are just
some of the at-home skin tech that people are investing in these days.

Buying youth
Fueled by concern over both premature aging and skin diseases (and,
of course, a good dose of vanity), more and more people are willing to
shell out hundreds of dollars to closely monitor and treat their skin.
But many of these devices have little or no reliable scientific evidence
to back them up. Even research done by the manufacturers
themselves isn’t terribly convincing since it usually involves very few
test subjects and includes no other lifestyle information about
participants for context. But most consumers never see that research
anyway.

Now, the fact that there’s little independent research on beauty tech is
partly because the only people who want to pay to prove that a device
works are the ones selling it. And some devices simply don’t lend
themselves to double-blind placebo-controlled studies – how are you
going to administer a placebo for a microneedler, for example?

There’s no doubt that our obsession with beauty and with the
“quantified self” is driving the propagation of questionable beauty tech.
But is this really letting us live our best lives or is it adding an extra
layer of anxiety about how we should look?

Putting aside the cultural issues at play, the main question is whether
this tech does any good. In cases where it helps detect skin cancer,
sure. But how about when it gives you a wrinkle score? Who are you
being scored against?

Our faces, ourselves?

Nevertheless, beauty tech companies continue to capitalize on our obsessions with ourselves and our trust in anything that sounds like it’s based on scientific evidence. Their marketing schemes tout devices as the means to achieve personalized care and advice. The “experts” they employ to help legitimize their products are not scientists but celebrities, aestheticians, and, often, dermatologists who aren’t involved in legitimate scientific research or who have their own brands to market.

The “evidence” that beauty tech devices “work” is often measured in terms of whether or not the client sees improvement, a less-than-objective measure of efficacy. But that doesn’t stop the press from
helping market this beauty tech and parroting meaningless phrases
such as “clinically tested” or “dermatologist-approved.” This isn’t
science.

In fact, skin tech is the very definition of pseudoscience. Companies that make skin tech devices adopt the statements and practices of scientists but don’t follow the scientific method, especially when it comes to replicating or challenging results. Now, this doesn’t mean
nothing on the market can change the way your skin looks – you
might have had great luck with a device. No one is telling you not to
use what makes you happy.

To be fair, much of the tech is physically harmless (partly because it doesn’t do much of anything at all). But what do we make of the
psychological harm that the constant measurement of beauty does to
us? And how do we become more intelligent consumers of other goods
when we’re so easily duped by pseudoscientific claims?

In the end, we’re all going to age (if we’re lucky).

AI and gamification in hiring


It seems like the use of AI is a badge of honor for companies trying to
recruit fresh talent, and yet 2019 saw a consistent flow of warnings that the technology is simply not sophisticated enough to be reliable or ethical.

Now, companies are touting their use of AI or “deep learning” (which is
a subset of AI) in the hiring process. They hope to use algorithms to
find patterns in employment data and the results of games and tests
to determine the best candidates to fill their positions. And despite
years of concern about algorithmic bias, they’re even claiming that AI
can help make companies more diverse!
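
To make the worry about algorithmic bias concrete, here is a minimal, hypothetical sketch (Python, with invented data and feature names) of how a model trained on historical hiring decisions can quietly reproduce a past preference rather than correct for it:

```python
# Hypothetical sketch: a hiring model trained on past decisions.
# All features, weights, and data below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Invented signals: a test score, a "culture fit" game score, and a proxy
# feature (say, attended a particular school) that happened to correlate
# with who got hired in the past.
test_score = rng.normal(0, 1, n)
game_score = rng.normal(0, 1, n)
proxy = rng.integers(0, 2, n)

# Historical decisions that favored the proxy group independent of ability --
# exactly the kind of pattern the model will happily learn.
hired = (test_score + 1.5 * proxy + rng.normal(0, 1, n)) > 1.0

X = np.column_stack([test_score, game_score, proxy])
model = LogisticRegression().fit(X, hired)

# The learned weights show the model reproducing the historical preference
# for the proxy group rather than correcting for it.
print(dict(zip(["test_score", "game_score", "proxy"], model.coef_[0].round(2))))
```

Nothing in the math flags the proxy coefficient as a problem; auditing for it is a separate, deliberate step.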

There’s a lot of information available online about each and every one
of us and companies are already incorporating what they can into their
detailed assessments. But there’s a big difference between what a
company can learn about a candidate and what
they should know about them. How much data is a company entitled
to gather about a (potential) worker anyway? If HR can already look at
your social media accounts and credit history, is scraping the web for
your data and using it to quantify your fitness for a job really pushing
the boundary? Well, yes.

Are you your data?

More questions arise when companies employ neurological games or emotion-sensing facial recognition as part of their assessments,
claiming that they can give deeper insight into a person’s abilities and
personality. But how deep do they go? Gamification is already being
trialed in health assessments, so if it can give us insight into, say,
someone’s mental health, a company theoretically can’t use that
information in hiring without violating the Americans with Disabilities
Act.

Can machines really do a better job of determining a “good fit” for a company than humans can? Are we really just a set of data points to
be gathered and crunched instead of the full context of our resumes
and interviews? And if so, isn’t that kind of depressing?

Predatory journals
Just because a piece of research is published in a journal doesn’t mean
it’s legitimate. But how are we to know that? In this age of being able
to find nearly anything online, what’s stopping us from coming across
the abstract of a piece of bad research and taking it as scientifically
valid, whether we’re students working on a project or journalists
reporting on a story? The answer is: almost nothing.

What are predatory journals?


• Predatory journals are ones that lack ethical editorial practices such
as peer review and have such low publishing standards that they’ll
publish just about anything (sometimes for a price). Researchers
estimate that there are roughly 8000 of these journals and they exist
in every field, providing fodder for further research and headlines for
stories that we’re all duped into believing even though the evidence
doesn’t hold.

It’s no surprise that these have popped up if you think about the
opportunities that come with supposed expertise in an area. In
academia, those on the tenure track must publish, and they are
judged by committees made up of faculty from various fields. Those
who don’t know what to make of the reputation of various journals
often fall into the trap of judging scholars based on quantity rather
than quality. Many scholars have been duped by invitations from
predatory journals to publish their work there – after an important
paper that they’ve spent months or years writing has been rejected by
legitimate journals (sometimes simply because they receive more
manuscripts than they can possibly publish), they may be happy just
to be published somewhere.

But while predatory journals do pose a danger to the integrity and reliability of published research and damage the legitimacy of
publishing, they aren’t all full of garbage research. The occasional
piece from a well-known or senior researcher who was duped into
publishing there helps them gain and maintain credibility. Sometimes
they even list scholars on their editorial board without their permission
in an effort to look more legitimate (and sometimes scholars are so
chuffed to be asked that they say yes to offers without much
investigation).
Predatory authors

• Not all authors in predatory journals are innocent victims. Some seek
out these “come one, come all” repositories so they can pad their CVs.
These “predatory authors” aren’t interested in critical evaluation of
their work, only publication, and as a result, we see the publication of
pseudo-scientific claims and flat-out bad research that can’t be
replicated and should have never seen the light of day.

University of Colorado Denver librarian and researcher Jeffrey Beall coined the term “predatory publishing” and published a list of
predatory publishers from 2010-2017. It was meant to help
researchers identify and avoid journals with unethical practices. But
because there was too much grey area for some, it’s since been
removed after complaints from some journals and scholars (despite
being lauded by most). Other lists are now run anonymously.

These days we’re simply swimming in fake data with no real way to tell
– sometimes even the scientists themselves can’t – if research is
legitimate. Do you know how to tell if an article or journal is
legitimate? And how do we measure this problem against another
known issue in scientific publishing – the replication crisis that plagues
research in even the most prestigious journals?

The HARPA SAFEHOME proposal


In an effort to prevent mass shootings, the White House is considering
a controversial proposal from the Suzanne Wright Foundation to try to
predict gun violence by monitoring the mentally ill.

The proposal – called SAFEHOME – is the brainchild of Bob Wright, who runs the organization named after his wife and whose stated goal is to
fund research into pancreatic cancer. SAFEHOME, which stands for
Stopping Aberrant Fatal Events by Helping Overcome Mental Extremes,
would be part of a larger proposal to establish a new government
department called the Health Advanced Research Projects Agency
(HARPA), modeled after DARPA.

Estimated to cost $40-60 billion, SAFEHOME would employ tracking software to detect signs of mental instability that could foreshadow
mass shootings. If this sounds like pseudoscience, that’s because it is.
Just because we have AI and machine learning that can process
massive amounts of behavioral data doesn’t mean it actually works.
Neurobehavioral technology is still in its infancy.

Now, the idea of an advanced health research agency isn’t necessarily controversial, but the idea that it should come from an outside source
certainly is. Questions remain about how HARPA would be run. But
that’s a separate conversation.

Do we have the tech?


• The concern with SAFEHOME is whether or not it’s possible or ethical
to use technology such as phones and smartwatches or devices such
as an Amazon Echo to try to detect mental illness or the propensity of
the mentally ill to become violent. We don’t currently have reliable
benchmarks to predict that behavior, and the kind of monitoring this
would involve has the potential to create massive privacy violations.
And what happens to the falsely accused?

Many have seen this project as an effort to deflect attention away from
the gun rights debate, and it remains to be seen just what would
happen to those identified as high risk by the algorithms SAFEHOME
might employ.

While the project would begin by collecting and processing data from
volunteers who would give up any expectation of privacy, it does open
the door to a future where the government can surveil those deemed
mentally ill. And there’s no way of knowing yet how transparent the
project would be.

Arresting the innocent

And what happens if we do come up with even remotely reliable benchmarks for predicting gun violence? How do we intervene? Can
you punish someone because an algorithm predicted they might be
violent even though they haven’t yet committed a crime (like a real-
life Minority Report)? Even gun advocates might raise questions about
whether people can be barred from owning weapons based on a
predictive algorithm bound to produce millions of false positives.

It’s also important to point out that many mass shootings in the U.S. have been perpetrated by white supremacists and not the
mentally ill (since racism is not classified as a mental illness).

ClassDojo and classroom surveillance


ClassDojo is an online behavior management system “actively used in
95% of all K-8 schools in the U.S. and 180 countries” with “1 in 6 U.S.
families with a child under 14,” according to the software maker’s
website.

Its intention is to foster positive student behaviors and it allows children to earn something called “Dojo Points” based on their
classroom conduct. In order to earn points, teachers have to monitor
and record behavior in some seemingly objective way and they can
even project real-time scores to the rest of the class or to parents
(with whom teachers can now limit face-to-face interaction).

A culture of surveillance
• Critics are concerned about the privacy of this information as well as
the psychological effects that constant measurement has on children.
It’s meant to reinforce good behavior and help teachers and parents
intervene with students exhibiting “bad” behavior. But is it yet another
insidious tool in our new surveillance culture as well as our obsession
with the quantification of the self?

ClassDojo claims the concerns voiced by critics are wrong:

“The picture painted of ClassDojo by the article and by some online pundits is in stark contrast to the way real classrooms and households
experience ClassDojo: positive moments, ways to improve, and causes
for celebration, all shared over a safe, private 2-way communication
channel between teacher and parent.”

This reveals a fundamental difference in the way people think about monitoring, measuring, and sharing information about children and to
what extent it’s acceptable and helpful. Even if we put aside privacy
concerns about the data being hacked or misused, the constant
quantification of behavior and achievement can be anxiety-inducing
and promote unhealthy competition. There’s no doubt that ClassDojo is right about providing “positive moments,” but that’s not what critics
are worried about. They’re worried about the shame-inducing negative
moments and the already high-anxiety classroom culture that kids
experience.

Good behavior – but at what price?

And while the ClassDojo team is made up of parents and former teachers and is no doubt trying to solve the difficult problem of creating a better classroom and alleviating parental concerns about
their kids, we need to think about the psychological consequences (in
addition to the privacy concerns). In fact, this is a great example of a
group of people trying to do good but simply batting away the ethical
issues surrounding their technology. Simply replying that the majority
of adults have a good experience using the software isn’t a helpful
response.

[Photo: a first-grade classroom at the elementary school in the village of Ait Sidi Hsain, near Meknes, Morocco. Photo: Arne Hoel]

Since the data gathered by ClassDojo includes teacher-assessed behavior and character traits and assigns them a value, one might also
wonder what counts as a model student and how much room that
leaves for diversity (not only in terms of ethnicity but neurodiversity as
well).

ClassDojo isn’t going anywhere any time soon – in fact, its use will
probably grow. But that’s why it’s important for people to keep asking
questions about whether and how students are benefitting and how
this exceptionally personal data is being kept safe (especially
since schools are popular targets for hackers and ransomware).

We could certainly use some good, independent research on how it’s affecting children, not just in terms of their scores, but their sense of
self.

Grinch bots
We first heard about Grinch Bots in 2017 when online entities began
using cyberbots to snap up popular goods as soon as they hit the
market or went on sale for Black Friday. The goal is to increase
demand and control the supply for everything from children’s toys to
event tickets and make money by jacking up their resale prices on
sites like eBay.

While states such as New York have tried to crack down on these
cyber retail stalkers, they’re tough to find and they can get past
CAPTCHA software by employing humans to do that work for very little
money.

Fighting the bots


• This happens all over the world, but in the U.S., Congress tried to
block “cyberscalping” by passing the Better Online Ticket Sales (BOTS)
Act of 2016. However, that law only applies to event tickets and still
isn’t very effective since these so-called “grinches” are adept at
making hundreds of accounts. The Stopping Grinch Bots Act of 2018 was
introduced last year and is now awaiting more action from the House
Committee on Energy and Commerce. But this would simply make it
illegal to resell all products purchased by automated bots and doesn’t
apply to the rest of the world. This is the tricky thing about online
commerce (and the Internet in general) – it transcends legal
boundaries.

The cybersecurity firm Radware found that a huge amount of online traffic comes from bots in the days leading up to Black Friday/Cyber
Monday.

According to their blog:

“While it appears that internet traffic is at its annual high during the
prep days before Black Friday/Cyber Monday, 37% of that traffic is
comprised of bots, not holiday shoppers…Bad bots are at their highest
level a few days prior to Black Friday/Cyber Monday,
representing 96.6% of total traffic to retailers’ login pages. This
indicates that bot masters are using this time as preparation days
before the surge in customer shopping.”

So gangs of cybercriminals are swimming in limited-time offers on goods that you’ll now have to buy at a mark-up on secondary sale
sites. More nefarious thieves even break into customer accounts and
drain them of points and other rewards – such as digital currency. And
they’re not limited to Christmas. Any time there’s a sale or special
event or offering, these bots and their masters are on the job.

No, it’s not fair, but there’s very little consumers can do other than to
check on and protect their accounts with new passwords every now
and then. But retailers are also in a bind. Millions of bot hits on big
sales days can slow down web traffic, causing customers to go
somewhere else.

Either way, we can expect to pay more for products cybercriminals have their eyes on, whether it’s due to price inflation on third-party sites or a rise in the cost of goods from the money retailers will have
to invest in more effective cybersecurity.

Maybe now is the time to think about just how much you really need
that shiny new item.

Project Nightingale
Project Nightingale was announced in November of this year and
raised approximately one round of panic before the news cycle moved on to the next big thing. But just because it’s not on the front page
doesn’t mean it’s gone away (and just because the project could start
in the U.S. doesn’t mean it’s irrelevant to the rest of the world).

First, let’s start with Ascension, the Catholic health care system that
also happens to be the second-largest in the United States. With
roughly 2,600 hospitals, doctors’ offices and other related facilities
spread over 21 states, it holds tens of millions of patient records – and
these records have comprehensive health information on millions of
Americans. It’s a valuable resource for anyone wanting to do health
research. 

Then along came Google, a company that has had a rough few years
PR-wise and has largely lost the public’s trust (even as we continue to
use it every day). When it was announced that Google was developing
software to compile, store and search medical records and that both
companies had signed a Health Insurance Portability and
Accountability Act (HIPAA) agreement, the goal was clear – Ascension
was going to transfer the health records to the Google Cloud.

Now, as far as we know, the agreement states that Google can’t do anything with these records other than provide services to Ascension. But when The Wall Street Journal first brought the partnership, dubbed “Project Nightingale,” to light in November of 2019, they also reported
that neither doctors nor patients had been informed of what was
happening with these records and that roughly 150 Google employees
had access to the data.

In the best of circumstances, these employees are trustworthy people, the records are well protected by a powerful tech company, and what
comes out of Project Nightingale will be new insight into human health
that benefits us all.

But the way consumers found out and the fact that it wasn’t a
transparent process naturally raised some suspicions, especially in the
privacy arena. In fact, these suspicions still exist since most of what
we know comes from anonymous insiders. Those insiders also report
that Ascension employees have expressed concern over some of the
ways Google intends to handle the data, claiming that it is not HIPAA-
compliant. Google denies this.

Google Cloud exec Tariq Shaukat has promised that patient data gathered from the project will not be combined with any Google
customer data, and standard legal agreements about medical data
sharing generally prohibit the use of this kind of data for other
purposes. But Google has had its fair share of controversies when it
comes to medical data. Take, for example, DeepMind (acquired by Google in 2014), whose AI-powered diagnosis app Streams was built on over 1.6 million patient records that were shared unlawfully. Or
the lawsuit faced by Google and the University of Chicago Medical
Center after a collaboration on patient records that the Center did not
get consent to share.

It’s important to remember that other large tech companies – such as Microsoft and Apple – are also launching health projects. But this only
raises more questions about what right we have to our own healthcare
data and how it’s used.

If we’ve learned anything over the last decade, it’s that secure data
can be hacked and anonymized data can be de-anonymized. So this
raises an important question – what could possibly go wrong? And
we’re about to find out since the partnership has now triggered a
federal inquiry.

Student tracking software


There’s a good chance a college or university already knows quite a bit
about a student before they even send in an application – and some of
the data they gather affects admissions decisions.

Colleges are watching you


• College websites use cookies to track things like clicks and time
spent on various pages to gauge student interest. The software they
employ can send your name, contact information, and other identifying
details such as ethnicity to admissions officers.

Earlier this year, The Washington Post investigated one instance of this increasingly common practice. Internal university records sent to the
paper revealed that the University of Wisconsin-Stout had employed
this software and – in just one example of its employment – received a
file on an applicant showing that she was a graduating high school
senior from Little Chute, Wisconsin, was of Mexican descent, and had
applied to the university. They also received a list of 27 pages she had
clicked on when she visited the UW-Stout website and how long she
spent on each page. She was also assigned an “affinity index,” gauging
just how likely she was to accept an offer of admission.
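
The consulting firms behind these systems don’t publish how an “affinity index” is actually calculated, but a toy sketch (Python, with invented signals and weights) shows how easily page clicks, time on site, and purchased financial data can be collapsed into a single score:

```python
# Hypothetical illustration only: the real scoring models are proprietary,
# so every signal and weight here is invented.
from dataclasses import dataclass

@dataclass
class Prospect:
    pages_viewed: int        # admissions pages clicked
    seconds_on_site: float   # total time spent on the site
    emails_opened: int       # recruitment emails opened
    estimated_income: float  # household income estimate from a data broker

def affinity_index(p: Prospect) -> float:
    """Toy 'likelihood of enrolling' score on a 0-100 scale."""
    score = (
        2.0 * min(p.pages_viewed, 30)
        + 0.01 * min(p.seconds_on_site, 3600)
        + 5.0 * min(p.emails_opened, 5)
        + 10.0 * (p.estimated_income > 100_000)  # quietly rewards wealth
    )
    return min(score, 100.0)

# A student with spotty internet access or untracked email activity scores
# low no matter how interested she is -- the discrimination critics describe.
print(affinity_index(Prospect(pages_viewed=27, seconds_on_site=1500,
                              emails_opened=3, estimated_income=45_000)))
```
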
This type of tracking is not uncommon nor is it new. The Post reported
that “at least 44 public and private universities in the United States
work with outside consulting companies to collect and analyze data on
prospective students, by tracking their Web activity or formulating
predictive scores to measure each student’s likelihood of enrolling.”

This potential violation of student privacy is also not confined to
admissions. In 2014, Harper College, a two-year community college in
Illinois, employed tracking software to try to identify which students
were unlikely to complete their degrees. The goal was timely
intervention, but it’s not always the thought that counts when it comes
to collecting this kind of data – especially when schools are not
transparent about it.

Colleges justify the use of this software and their collaborations with
outside consulting firms in a variety of ways. For schools with smaller
endowments, they claim that identifying students who are likely to be
able to pay tuition helps keep them afloat. For those with thousands of
applicants, knowing who is most likely to accept an offer can save time
and money.

Schools are “scoring” your interest


• But when colleges assign scores to students based on income and
interest, it strips applications of much of their context, and it also
discriminates against low-income students or those without dedicated
Internet access.

To make matters worse, these actions – which can easily be seen as privacy violations – don’t have to be reported to students. It’s nearly
impossible to know if and how a school is tracking you. And it might
also be illegal – at least in the U.S. – though it will likely take a lawsuit
to find out. Technically, colleges must disclose to students how they
use and share their data according to the Family Educational Rights
and Privacy Act (FERPA). Some schools have tried to get around this
by dubbing the consulting companies they work with “school officials.”

Opt out at your own risk


• Most schools don’t let students opt out of data collection, stating that
they simply don’t have to visit their websites if they don’t want to be
tracked. Others force students to e-mail admissions officials to
formally opt out of tracking, but one wonders how that information
goes over when it comes time to make an admissions decision.

These sorts of predictive analytics have the potential to harm a prospective student’s chances of admission based on an algorithm that
assumes high-income students with high online engagement are ideal
candidates. It also encourages schools to spend the vast majority of
their resources on these “ideal” students to the detriment of those who
have a low “affinity index” score. 

Gaming the system


• This digital privileging puts already disadvantaged students at a
further disadvantage and it increases the already high anxiety level of
college applicants. Then again, now that students know it’s happening,
what’s stopping them from mindlessly engaging with websites to game
the tracking system? And if schools are spending millions on this
software only to have it be misleading in the end, wouldn’t that money
have been better spent on improving their educational offerings?

The corruption of tech ethics


Can anyone be a tech ethicist?

• Just because tech ethics raises interesting questions about whether
we should pursue certain projects doesn’t mean that’s all there is to
this field of inquiry. But you wouldn’t know that by the plethora of
people now claiming to be experts in this area who think the extent of
their job is to sit around and ponder what might be “right” or “wrong.”

Everyone from philosophers in other fields who now find tech commentary to be a way to get their name in the news, to journalists
throwing out “should we or shouldn’t we” questions and calling it tech
ethics, to lawyers who confuse ethics with legality is muddying the
waters when it comes to meaningful and organized inquiry. That
doesn’t mean we shouldn’t be encouraging people to ask these
questions (after all, that’s what the Top 10 List is for!), but we need to
keep in mind that as a field, tech ethics is supposed to lead to answers
and actions that benefit humanity.

What tech ethics isn’t


• Equally, when “ethical decision making” is turned into a 1-day
seminar for business leaders or policymakers, it turns ethics training
into something to merely check off on a list of “skills.” Having
someone with minimal training in a meeting doesn’t mean a company
is engaged in ethical inquiry about its practices.

For tech companies, “ethics” is now simply a word to throw around to
indicate that they’ve thought a little harder about something before
moving forward. It’s an easy out that allows them to self-
regulate instead of having a real expert interrogate their products and
practices.

Of course, this is not to say that one must have a Ph.D. in philosophy
to be a tech ethicist. In fact, if academics keep the field to themselves,
it’s unlikely ever to make it out of the so-called Ivory Tower.

The point is that people who “do” ethics need to have rigorous training
and understand the frameworks for ethical decision making.
Otherwise, ethics turns into a merry-go-round where resolutions are
made on the fly by people who use whatever evidence they want in
order to decide if something is right or wrong.

Tech ethics is a formal field of inquiry


• We can build as many ethics committees/councils as we want and
force science and engineering students to take a class in ethics to
make them better ethical decision-makers, but that doesn’t erase the
need for dedicated, independent inquiry based on a formal framework.
While that might sound stodgy, and old frameworks might not always
apply to new tech, it’s the ability to organize thoughts into a formal
framework that allows ethics to move forward instead of whirl around
as a series of open-ended questions.

And while interdisciplinary inquiry is deeply important, we must not confuse slapping together a group of people to mull over problems
(whether it’s a center for ethical inquiry or an ethics board), vague
statements of a company’s ethical principles, or the employment of an
“ethics officer” with actually doing tech ethics in a way that safely and
honestly regulates technology and its applications. This allows
companies to get away with “ethics theater” instead of doing the hard
work.

Deepfakes
In a world where we believe whatever we like to hear, is there any
reliable way to stop the spread of misinformation?

These days, just about anyone can download deepfake software to
create fake videos or audio recordings that look and sound like the real
thing. While there has been a lot of fear surrounding the damage they
might do in the future, until now, deepfakes have been limited
to superimposing faces into porn or swapping out audio to make it sound like politicians are saying something controversial. Because the
Internet is full of fact-checkers, most deepfakes have been outed
almost immediately.

What’s at stake when it comes to deepfakes?


• But people are justifiably concerned about their potential to do great
harm in the near future. From ruining marriages to interfering with
democratic elections, the creation of these fakes can have major
consequences. Part of the problem is that people seek out and believe
things that justify their worldview, so even if someone calls out a
deepfake video or audio recording, there are people bound to still
believe in them.

Of course, video and audio can already be easily manipulated or taken out of context, but the application of deep learning to create hard-to-
identify fakes is something we need to take seriously, especially as
they become more sophisticated.

Deepfakes use technology called generative adversarial networks (GANs) in which two machine learning models play off of one another
to create a nearly impossible-to-detect forgery. While one model
creates the video, the other attempts to detect signs that it’s a fake.
Once the detection model can no longer find vulnerabilities in the
forgery, it’s ready to be uploaded for all sorts of nefarious purposes.
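
To make that mechanism concrete, here is a minimal sketch of a GAN training loop (PyTorch, on toy one-dimensional data rather than video) showing the two roles described above: a generator that produces forgeries and a discriminator that tries to detect them:

```python
# Minimal GAN sketch: real deepfake systems use large convolutional networks
# on video and audio, but the adversarial loop is the same.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # "Real" data the generator tries to imitate: samples from N(4, 1).
    return torch.randn(n, 1) + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator (the "detector") to tell real from fake.
    real = real_batch(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (bce(discriminator(real), torch.ones(64, 1))
              + bce(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator (the "forger") to fool the detector.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Generated samples should now cluster near 4 -- the point at which the
# detector can no longer reliably separate real from fake.
print(generator(torch.randn(1000, 8)).mean().item())
```

The same dynamic is why detection is an arms race: any reliable detector can, in principle, be folded back into training to produce fakes that evade it.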

Are deepfakes free speech?


• There’s still the unresolved question of whether deepfakes are illegal.
Are they covered by the First Amendment? Does intellectual property
law come into play when a video is manipulated?

Companies such as Facebook and Microsoft have recently launched challenges to encourage people to create tools to detect deepfakes
before they become easy enough for anyone to create. DARPA has
been looking into deepfake detection since 2016, a year before the
first fake video appeared on Reddit. 

There are two federal bills currently under consideration in the U.S.,
the Malicious Deep Fake Prohibition Act and the DEEPFAKES
Accountability Act. California, New York, and Texas are also attempting
to build state legislation to regulate them. But one wonders, do
politicians understand the technology well enough to regulate it? Will
those laws conflict with First Amendment rights? Is there even a way
to regulate the technology or should we be concentrating on its
weaponization instead?

One way governments are trying to stop the creation and spread of
deepfakes is by regulating social media, the most common platform on
which they are shared. But tech companies have already proved
themselves largely immune to this type of regulation.

So what can we do about this new high-tech wave of disinformation? Regulate it? Provide more resources for media literacy to make them
less of a threat? Rely on sharing platforms to detect and ban them (or
hold them accountable for their dissemination)?
