DS Mock2 Updated


1. The GPS receiver in the watch takes the information signals from at least three
satellites.

2. The signals transmitted are the coordinates of the three different satellites.

3. The difference between the time each signal is sent (timestamped by the
satellite's atomic clock) and the time it is received is used to calculate the
distance to each satellite; these distances are used to show the routes of his
training runs by calculating his position in two dimensions.

4. The method of trilateration is used to determine position from the distances
to satellites and is used to determine Jaime's exact position on a sphere.




Reasons for Sharing Personal Health Information

• The university may have access to data analytics tools that can interrogate
his personal health information and give him feedback on his health/fitness
(personalized health analytics feedback).
• The university may be able to provide additional health-related information /
suggest additional health improvement patterns.
• The university may be able to analyze Jaime's data against other and/or
larger data sets / compare Jaime's health to global standards.
• Jaime may accept that his data is "out there" already, so there is no harm in
re-sharing it (values).
• Jaime may wish to contribute towards research at the University of Sierra
Nevada; his data could contribute to serving humanity (values).

Reasons for Not Sharing Personal Health Information

• There may be no way of knowing which other third parties the university is
sharing Jaime's data with; lack of trust and privacy concerns (values, systems).
• The university may impose conditions that may mean the data is not used for
the purposes it was intended, so Jaime may encounter a sense of deception
(values).
• Once the data is shared, it is hard to guarantee that it is deleted when it
is no longer needed (values).
• Patient privacy is a concern: is data anonymized, and does the university
have sufficient security measures in place? Lack of security.
• The sharing of Jaime's data may have unintentional consequences (systems,
accountability).

When to depend on an app to track health

Reasons to Rely on Health App

• The health app may be more consistent in its advice than a human doctor
(systems).
• The health app will not be influenced by the patient during the consultation
/ will be completely objective (systems).
• The health app may lead to savings (for the user of the app) so that other
treatments that are currently not available may become possible as money is
freed up / the health app may be more cost-effective for the user than visiting
a specialist.
• The health app is available 24/7 (systems).
• It is convenient, the app is available immediately, and Jaime would not have
to visit a doctor or sports scientist to get advice (systems).
• The information from the app is available immediately without any delays
(systems).
• The health app can be updated almost instantaneously, whereas doctors would
have to attend courses to ensure new procedures, etc., are explained (systems).
• The app is available in any location so could, e.g., be used on holiday and
the information and advice would still be available (systems, ubiquity).

Reasons Not to Rely on Health App

• The health app may be based on a generic profile and not have sufficient
background data to make a meaningful diagnosis, unlike a human doctor.
• Users may not trust the health app due to reliability and integrity issues.
• The quality of the data being collected may be poor and the advice might not
be reliable.
• Negative results/unreliability could result in anxiety or cause Jaime to
overexert himself in order to follow the advice of the app.
• Jaime may trust his judgment more than the app.
When to trust AI to call police

Reasons to Trust AI Systems

• It can analyze large amounts of data, so police decisions will be based on
greater information; enhanced police decisions with a large amount of analysed
data (systems).
• It can provide more information to police to assist them in carrying out
their job (systems).
• Standard responses to dangers can eliminate human error, such as panic
responses or ill-judged responses / increase reliability and consistency.
• There is less chance of not noticing a dangerous situation, whereas humans
can get distracted; improved awareness of the surrounding environment (systems).
• The AI system can react faster than a human to suspicious circumstances;
swift response to suspicion.

Reasons Not to Trust AI Systems

• Unlike humans, the AI system cannot make decisions based on ethical criteria;
it lacks ethical judgement.
• Creating rules that take into account all of the possible ethical dilemmas
may not be possible, and the AI cannot always determine right from wrong
(systems).
• If unsupervised learning is used, the AI may self-learn and arrive at
decisions that may not be appropriate (systems, algorithms).
• There may be inherent biases in the algorithms.
• The AI could become unreliable due to a glitch in the system / technical
problems.
• There might be a new situation never anticipated by the AI system, leading to
processing errors.
• The AI system may be seen as a form of monitoring, raising concerns about
the loss of privacy.
• There is the problem of accountability. At what point can the AI system, or
the programmers, etc., be held accountable for an error?

Sentencing criminals using artificial intelligence (AI)

In 10 states in the United States, artificial intelligence (AI) software is used for sentencing
criminals. Once criminals are found guilty, judges need to determine the lengths of their prison
sentences. One factor used by judges is the likelihood of the criminal re-offending*.
The AI software uses machine learning to determine how likely it is that a
criminal will re-offend. This result is presented as a percentage; for example,
the criminal has a 90% chance of re-offending. Research has indicated that AI
software is often, but not always, more reliable than human judges in
predicting who is likely to re-offend.
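How a machine-learning model can turn case data into a percentage like this can be sketched as follows. This toy logistic model, its single invented feature (number of prior offences) and its made-up data are illustrative assumptions only, not the actual proprietary software.

```python
import math

# Toy illustration (not the real sentencing software): a logistic model maps
# a defendant feature to a re-offending probability, reported as a percentage.
# Hypothetical data: (number of prior offences, re-offended? 0/1).
data = [(0, 0), (1, 0), (2, 0), (3, 1), (4, 1), (5, 1)]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(5000):            # stochastic gradient descent on log-loss
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))
        w -= lr * (p - y) * x
        b -= lr * (p - y)

def risk_percent(priors):
    # Convert the model's probability into the percentage a judge would see.
    p = 1 / (1 + math.exp(-(w * priors + b)))
    return round(100 * p)

print(risk_percent(5))  # many priors -> high predicted risk
print(risk_percent(0))  # no priors -> low predicted risk
```

The controversy in the Loomis case was precisely that defendants could not inspect how such weights were learned or which features drove the score.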

There is general support for identifying people who are unlikely to re-offend, as they do not need
to be sent to prisons that are already overcrowded.

Recently, Eric Loomis was sentenced by the state of Wisconsin using proprietary AI software.
Eric had to answer over 100 questions to provide the AI software with enough information for it
to decide the length of his sentence. When Eric was given a six-year sentence, he appealed and
wanted to see the algorithms that led to this sentence. Eric lost the appeal.

On the other hand, the European Union (EU) has passed a law that allows citizens to challenge
decisions made by algorithms in the criminal justice system.

1. Explain two problems the developers of the AI system could encounter


when gathering the data that will be input into the AI system. [4]
Answers may include:

Incompatibility: Different datasets may require conversion to a single format
for integration.
Structural Mismatches: Varying database structures may result in fields not
aligning, leading to unusable integrated data.

• The different datasets may not be compatible with each other…
• which means the data has to be converted to a single format.
• The various databases may have different structures…
• fields in one database may not correspond to fields in other databases / a
lot of the integrated data is of no value.

_Award [1] for identifying a problem the developers of the AI system will
encounter when gathering the data that will be input into the AI system and [1]
for a development of that reason up to a maximum of [2]._

2. Identify two characteristics of artificial intelligence (AI) systems. [2]

Answers may include:

• Ability to seem intelligent


• Power to copy intelligent human behaviour
• Capacity to learn
• Decision-making ability
• Adaptation to circumstances
• Well-defined goals
• Problem-solving skills
• Reasoning ability
• Autonomy
• Flexibility
_Award [1] for identifying each characteristic of artificial intelligence
systems up to a maximum of [2]._

3. The developers of the AI software decided to use supervised machine


learning to develop the algorithms in the sentencing software. Identify two
advantages of using supervised learning.
[2]

Answers may include:

• More accurate (than unsupervised and reinforcement learning).


• Easier to compare outcomes to predicted results / easier to train the algorithms.
Award [1] for identifying each advantage of using supervised learning up to a maximum
of [2].
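The second advantage above can be illustrated with a minimal supervised-learning sketch. The tiny nearest-neighbour classifier and its labelled data here are hypothetical, not part of the sentencing software: the point is that because every example carries a known label, predicted outcomes can be compared directly against the expected results.

```python
# Minimal supervised learning sketch (1-nearest-neighbour, invented data):
# labelled training examples let us check predictions against known outcomes.
train = [((1.0, 1.0), "low-risk"), ((1.2, 0.8), "low-risk"),
         ((4.0, 4.2), "high-risk"), ((4.5, 3.9), "high-risk")]
test = [((1.1, 0.9), "low-risk"), ((4.2, 4.0), "high-risk")]

def predict(point):
    # Label of the closest training example (squared Euclidean distance).
    return min(train, key=lambda t: sum((a - b) ** 2
                                        for a, b in zip(t[0], point)))[1]

correct = sum(predict(x) == label for x, label in test)
print(f"accuracy: {correct}/{len(test)}")  # labels make accuracy measurable
```

In unsupervised learning no such labels exist, which is why training and validating the algorithm is harder.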

4. To what extent should the decisions of judges be based on algorithms rather


than their knowledge and experience? [8] When judges can use AI in courts

Answers may include:

Arguments for Using Algorithms

• We don't know what is going on inside a judge's mind, so it's a black box too
(transparency).
• Judges can be biased, so bias mitigation is a necessity.
• This would be a more standardized process, i.e., more uniform and logical.
• Risk assessment tools could lead to less incarceration and less crime.
• This software could be used in conjunction with a judge's decision, i.e., to
provide guidance and decision support.
• It could be one factor among many, i.e., not the determinative factor.
• The software could be used to identify outliers, i.e., people very likely to
commit a crime or people very unlikely to commit a crime.
• The software could be based on the experience of many judges/experienced
experts in this field, so it could be more reliable than the judgment of a
single person (collective expertise).

Arguments Against Using Algorithms

• The algorithms should not be "black boxes", i.e., they should be revealed
(transparency).
• It's almost impossible to define fairness, so how can the algorithm be fair?
• Biases could be incorporated into the software because of the attitudes of
the human beings who created it.
• The value of the algorithm depends on the data it uses, and criminal justice
data is often unreliable.
• Algorithms look at group behavior, not individual behavior.
• Algorithms don't look at how different factors interact (at least not yet),
so they may be unable to link crime events.
• Is the AI system fit for purpose; will it give appropriate sentences?
• It may shift the power too far towards AI/machine learning.
• It does not determine who is accountable if a sentence is
unjust/inappropriate (lack of accountability).
In part (c) of this question it is expected there will be a balance between the terminology related
to digital systems and the terminology related to social and ethical impacts.

Keywords: laws, standardized, transparency, accountability, judgement, bias, fairness,


automation, machine learning, algorithm, reliability, change, power, systems, values

5. Outline one problem that may arise if proprietary software rather than
open-source software is used to develop algorithms. [2]

Answers may include:

• Proprietary software cannot be easily adapted to the needs of the user.


• Changes to the software may take a long time / may not meet the needs of the user / may
be driven by commercial interests.

Award [1] for identifying a problem of using proprietary software and [1] for a
development of that problem up to a maximum of [2]._

6. The developers of the AI software used visualizations as part of the


development process.

Explain one reason why visualizations would be used as part of the


development process. [2]

Answers may include:

• Visualizes the flow of data through an information system…


• which could make a potentially complex system understandable by a non-technical
audience.
Award [1] for identifying a reason why visualizations may be used as part of the
development process and [1] for a development of that reason up to a maximum of [2].
They don't want to see an exact version of themselves, as they
will feel that they are constantly monitored.
When to use AI to recommend activities

Reasons to Rely on AI System Recommendations

• The recommendations are likely to be based on the greatest data set possible
and use data analytics techniques: real-time interactions, etc. (systems).
• The use of data analytics techniques / AI should mean that they will be
appropriate for individual cruise ship passengers, customized for each
passenger based on data.
• Can create a unique experience for passengers through recommended activities,
etc.
• Can provide changes to passenger experience quicker than if done without AI
(systems).
• Ensures compatibility between different devices or platforms.

Reasons Not to Rely on AI System Recommendations

• The recommendations may not be sufficiently customized to each passenger's
needs if enough data is not collected (systems).
• The AI system may be expensive; the increased amount of customer satisfaction
may not be sufficient to justify this cost.
• There may be particular characteristics of the passengers that the AI system
may not be able to understand (systems).
• This may be a gimmick and an example of technological determinism (values,
ethics).
When to use VPN in blocked countries

Reasons to Use a VPN

• The service is legitimate and cannot be accessed by any other means; a VPN is
a legitimate/legal content access method.
• Rajesh may not accept laws that prevent the use of VPNs, because he believes
it is a personal right to avoid surveillance.
• Rajesh may believe that the blocking of the service is unlawful/inappropriate
or that the degree of censorship/surveillance is draconian.
• There may be technical advantages of using a service like a VPN (systems).
• Through a VPN, Rajesh can browse the web in complete anonymity (systems).
• Rajesh may find media access through a VPN more economical compared with
subscribing to the same services in a different country (regulations, ISP).

Reasons Not to Use a VPN

• Rajesh should abide by the laws of the country; by circumventing the laws he
may place himself at risk. It is too risky in countries that prevent the use of
VPNs.
• The content may be deemed to be inappropriate in the host country and has
been blocked for religious/security/political reasons; respect for host laws
(values).
• The use of a VPN may reduce the speed of downloading or streaming (systems).
• Rajesh may not be able to download or stream films despite having a
subscription to a VPN because a few broadcasting apps use anti-VPN technology
to restrict content outside a specific region.
Fake news

We see and hear news every day and trust that the information provided is accurate. That belief
may soon end.

Artificial intelligence (AI) software is now being developed that can produce fake video footage
of public figures using recordings of their own voices. Using as little as one minute of user-
generated content (data), it can reproduce a particular person’s voice. The developer of this
software demonstrated the results by using the voices of Bill Clinton, George Bush and Barack
Obama in a computer-generated conversation.

Once a person’s voice has been reproduced, a fake video can be created by processing hundreds
of videos of the person’s face. Video footage of politicians is often used, as there is so much
data available online.

Law professor John Silverman commented that, as humans we tend to believe what we see, and
the increased number of tools to make fake media that is unrecognizable from real media is
going to prove a major challenge in the future.

Discuss the claim that companies who develop software that can create fake videos of politicians
should be accountable for the fake videos posted by users of their software on social media
platforms.
Answers may include:

Software companies should be accountable (claim)

• Although there may be no legal requirement for the software company to monitor the
videos of users of their software, there may be ethical reasons why they should, and it is
not appropriate to hide behind an end-user agreement when the software has the potential
to develop fake videos (ethics, values, transparency).
• The software company has not been seen to take all reasonable steps to prevent the fake
videos being posted / the software company can be proved to be acting outside the spirit
of the law or maliciously (ethics, values).
• The software company has allowed users to purchase the software in countries where
there may be rigorous censorship laws, knowing that its use may be seen as unlawful
(ethics, values).

Software companies should not be accountable (counter-claim)

• If the software company has positioned itself as a responsible developer, and their policy
documentation explicitly shows that they are practising what they preach (values, ethics)
and have acted responsibly by minimizing the potential harm that may be caused.
• It is unrealistic and unenforceable, however well intentioned, for the software company to
be accountable for the content of the videos.
• If the end-user agreement stated explicitly that the user would be accountable, would that
clause be enforceable by the software company?
• It is hard to determine at what point the software company would be accountable, as the
software itself does not have the capability to cause harm; it is the user who does so
(values).
• At what point is a video considered fake? Is a spoof video a fake video (media
authenticity)?
• What happened to free speech or freedom of expression (values, ethics)?

In this question it is expected there will be a balance between the terminology related to digital
systems and the terminology related to social and ethical impacts.

Keywords: politics, political speech, lobbying, machine learning, speech recognition, image
analysis, deepfakes, media authenticity, synthetic digital media, monitoring, accountability,
responsibility, change, expression, power, values, ethics

Refer to HL paper 1 Section B markbands when awarding marks. These can be found under the
'Your tests' tab > supplemental materials > Digital society markbands and guidance document.
Q.2

A cashless society Tap and Go - Benefit

In the near future, it is possible that cash will not be accepted as a means of payment in Sweden.
People are already using alternative ways of paying, such as mobile payment, card payment and
internet payment. Currently, over 95% of citizens in Sweden have internet access.

Many people in Sweden claim there are advantages of using an app developed by Swish. The
Swish app allows friends to share a restaurant bill, pay where credit or debit cards are not
accepted, for babysitting or parking tickets, or make a donation at church.

However, other people in Sweden claim that making the Swish app the only means of payment
may increase inequalities within the country.

Discuss whether countries should pass legislation making apps such as Swish the only means of
payment.

Answers may include:

Benefits of apps such as Swish being the only form of payment (claim)

• No need to carry cash (digitalization) / credit cards – no risk of being stolen/no risk of not
having enough cash. Availability of payment with no cash or card
• No risk of credit card being used fraudulently (values). Fraud protection
• Transactions are recorded – there is proof of payment, and the system should increase
transparency / acceptability. direct proof of payment
• Payments are made immediately – no need to wait until person has time to go to bank to
get cash (feasibility). Immediate recorded transactions; no need to visit bank to withdraw.
• Can solve other problems regarding money – bills can be shared, one person pays, and
money is transferred (feasibility). Flexible bill sharing between friends.
• In an emergency, money can be transferred to dependents without them being close (e.g.,
children at university) (feasibility). Emergency money transfer.
• Allows money transactions between individuals. Ease of Peer-to-peer transactions.
• Can limit the amount of money to be transferred, thus preventing individuals spending
more than they have available (acceptability / feasibility). Spending limit control.
• Many people are already used to the app, so it would be a good choice if the country were
going cashless (acceptability). User acceptance
• Easier for users to track budgets / spending, as all transactions are digital (transparency).
Easier Budget tracking
Disadvantages of apps such as Swish being the only form of payment (counter-claim)

• Swedish banks will be able to obtain more data on their users’ transaction habits (privacy
concern, ethics, values). bank surveillance; privacy concern
• Is not available to people who do not have a bank account, so a potential digital divide
concern (equity). Digital divide; not available to people with no bank account (elderly)
• May be problematic for tourists who may not have the app or who cannot link the
purchase to their bank account (systems, equity). Inconvenient for tourists.
• Removes the anonymity of the payee in transactions – the app may store a user’s
transaction history. This would include date, item, recipient of the money and cost of the
item (values, ethics). no anonymity
• The bank controls (power) the maximum amount of money that can be transferred, which
may limit a person’s spending and may not be appropriate in certain situations
(acceptability). Bank-controlled spending limits
• It may not be technically possible to make the transition from a society that uses cash for
transactions to one that does not (change, systems). Different society transition challenges
• If a single app is used, it would give Swish an unfair monopoly over the technology
(power, values, ethics). Monopoly concerns; unfairness.
• Digital divide – smartphone ownership and use by mature adults.
• If a person loses their phone, breaks it, or its battery runs out, they have no way to pay for
anything (systems, equity). Dependency on smartphones and networks.
• Failure in / lack of phone network coverage could affect when and where people could
use the app (systems, equity).
• Failure of the system / technical issues / down time would prevent people from making
transactions (systems, feasibility). Technical issues

Accept implicit and explicit references to apps such as the Swish app.

In this question it is expected there will be a balance between the terminology related to digital
systems and the terminology related to social and ethical impacts.

Keywords: business, families, digital divide, access, inclusion, acceptability, feasibility, equity,
digitalization, anonymity, privacy, change, power, systems, values, ethics

Refer to HL paper 1 Section B markbands when awarding marks. These can be found under the
'Your tests' tab > supplemental materials > Digital society markbands and guidance document.
[pic 1, pic 2 and pic 3: source images not reproduced]
How technology companies use influencers to promote their products [2]
Answers may include:

• Providing the influencer with the product for free, so that the influencer agrees to post
about the product on social media sites.
• Tech companies use social media data to find influencers who will target their market.
That way, the influencer’s message is focused on their specific market.

_
Award [1] for identifying a way technology companies use influencers to promote their products
and [1] for a development of that way up to [2] marks._

Why technology companies work with influencers to promote their


products [2]

Answers may include:

• It is more cost effective than target advertising, as the company can select the tech
influencer by their follower demographic.
• Influencers are viewed as experts, so followers will trust endorsements and product
mentions.
• Influencer content may be a personal narrative, which helps differentiate posts from the
type of features- or sales-driven ones a brand might do for the same product on their own
feed.

_
Award [1] for one reason why technology companies work with influencers to promote their
products and [1] for the development of that reason up to [2] marks._

Answers must describe each picture.

Answers may include:

• Increased from 1:29 in 2013 to 2:25 in 2019. pic 1


• Usage ranges from 1:55 in Europe to 3:31 in South America. pic 2
• Usage ranges from those over the age of 55 with the lowest use (1:15) to those aged 16–
24 with the highest usage (3:05 hours). pic 3

_
Award [1] for each relevant trend up to [2] marks. _

Source C = Digital App


Source D = Written School Guide

Answers may include:

Type of solution: (format)

• Source C is a technology-based intervention where the usage can be controlled by the


app, whereas Source D is a non-technological intervention where the use is controlled by
the user. Which may be more effective in promoting responsible use?

Agency: (controllers)

• Source D encourages input from family members for setting rules; likewise, source C
provides data for family discussions and allows anyone to enforce screen-free time. In
both cases, each family member has agency in the solution.

Ease of use: (user-friendly)

• Source C requires parents to configure the app, and this may not be done correctly so less
effective; likewise, the rules created by families (Source D) may not be complete/have
gaps/not deal with all eventualities and therefore be less effective.

Monitoring: (observation)

• Source C provides evidence of usage, which could lead the discussions on responsible
use and rule setting, whereas Source D relies on self-reporting for the discussion and
negotiation of rules only with no hard data to determine the effectiveness of the rules.

Data collection: (gathering)


• The app in Source C could share data/behaviours/insights that may be used to
improve/determine the effectiveness of the solution, whereas there is no tangible and
immediate method of data collection for third parties in Source D.

Answers may include:

Role of parents: (Commitment and strictness)

• Setting/negotiating limits and goals (Source C and D); outlining rules to be followed in order to reduce negative usage.
• Following through with consequences; responsibility and intensive follow-up.
• Modelling good practice by moderating their usage in front of their children (Source A
and B). A good role model
• Use of technology-based solutions as noted in Source C.
• Setting boundaries and promoting non-technology activities, e.g., screen-free
evenings/holidays, charging overnight in a common space (Source C and D).
• Controlling access through data plans and encouraging children to pay for their own data
after the limit has been reached (responsibility, accountability).
• Watching for warning signs, including spending too much time alone, not getting enough
sleep, worse physical health, and not taking part in healthy activities (mental health).

Role of schools: (Advice and guidance)

• Policies may ban/restrict/monitor the use of social media/mobile phones to encourage
face-to-face communication (Source B), e.g., strictly banning the use of mobile
communication within school boundaries except for educational purposes.
• May have clear and consistent policies/guidelines/consequences for use, e.g., in class
only for educational use.
• Promote the educational use of mobile devices, e.g., educational apps, time management,
self-management, podcasts; inform students about the benefits of mobile devices in
general.
• Online safety as part of a pastoral programme/curriculum (education).
**Responsibility of other stakeholders**

Role of government: (Financing and law enforcement)

• Not all parents are aware of safety issues and need to be educated themselves (education).
Governments could play a role with campaigns/laws to raise awareness targeted at
parents (Source A). Special awareness campaigns for parents
• National campaigns targeted at children for alternatives to phone usage/social media
(Source A and B).
• Enforce serious laws; changes in laws, such as an increase in the age for access to social
media sites (e.g., COPPA, the Children's Online Privacy Protection Act).
• Healthcare funding for research into and support for addiction/mental health; providing
financial assistance or support.

Role of social media companies: (Cooperation and responsibility)

• Accounts may need to have more rigorous age-verification methods before usage.
• Activate time limits settings in apps such as Facebook and Instagram.
• Mobile devices can track/monitor usage such as screen time, apps used.
• Analyse data collected to provide insights that may be used to improve/promote safety
and healthy habits (digital literacy).

Marking notes: It is not necessary to explicitly refer to each source to achieve the highest mark
band, but there must be an explicit reference to at least two sources. To achieve the highest
markband the sources must be synthesized in an integrated manner rather than a systematic
analysis of each individual source.

**Keywords:** education, responsibility, accountability, privacy, anonymity, monitoring,


surveillance, apps, social media, regulations, policies, laws, health, addiction, mental well-being,
change, power, systems, ethics, values
Why there has been a significant decrease in in-person communication
with teens from 2012 to 2018 [4]

Answers may include:

• Increase in number of friends on social media due to social media being more widely
available.
• Increase in friends in different countries/time zones due to increased coverage worldwide
of social media sites.
• Increase in friends in different countries/cultures due to the use of translation tools for
communication in multiple languages.
• Increase in access to mobile technology due to the decrease in cost of phones/data.
• Increase in use of asynchronous communication methods as asynchronous
communication has become more socially acceptable.
• More people to communicate with due to the increase in number of people in 2018 with
access to mobile technology/social media.
• Increase in number of ways to communicate in addition to in-person, which allows
people who were previously unable to communicate to hold discussions.

_
Award [1] for identifying each reason why there has been a significant decrease in in-person
communication with teens from 2012 to 2018 and [1] for a development of that reason up to [2]
marks. Award a maximum [4] marks. _
Background information not needed.

Autonomous vehicles (e.g., Uber)

Autonomous Vehicle Readiness Index (AVRI): A measure assessing a country's readiness and preparedness for
autonomous vehicles.

Ethical decision-making model (e.g., Markkula): A framework guiding ethical decision-making to align with pre-identified
principles.

Society of Automotive Engineers (SAE) Scale: Classifies levels of vehicle automation features.

Transport network company (TNC): Provides transportation services via online platforms or apps typically offering
ridesharing.
(With reference to the proposed interventions, information included in the sources and your
own inquiries, recommend a digital intervention that would most effectively address the
challenge of access to healthcare and medicine in remote and rural communities.) [12 marks]

Structure of the evaluation of the intervention:

1. Equity - does the intervention address the needs, claims, and interests of those affected
by the challenge?

2. Acceptability - Do specific affected people and/or communities view the intervention as


acceptable?

3. Cost - What are the financial, social, cultural, and environmental costs associated with
the intervention?

4. Feasibility - Is the intervention technically, socially, and politically feasible? What are
some of the barriers?

5. Innovation - Is the intervention innovative in its approach? Has it been attempted


before?

6. Ethics - Is the intervention ethically sound, and who determines the ethical status of the
intervention?

7. Recommendation -

(nothing)
