SESSION 4

AI Ethics

• Ethical Issues Around AI
• AI Bias and AI Access

4.1 INTRODUCTION

There is a popular saying, "With power comes responsibility", which is true for everyone and everything. Everything? Aren't you surprised? Since when are 'things' expected to be responsible? Well, you know the answer to this question yourself: ever since things have become intelligent and powerful, even though artificially (AI). And when we talk of responsibility, it involves ethical, moral and legal responsibility, and even more. In this session, we shall explore ethics pertaining to artificial intelligence, AI bias, and the advantages and disadvantages of AI. So, let us begin our discussion.

4.2 ETHICAL ISSUES AROUND AI

The dictionary defines ethics as "the moral principles that govern a person's or a group's behaviour or actions" or "the moral correctness of a conduct or action". In short, ethics are the moral responsibility of anyone or anything that can impact others. Since AI has gained so much power that it can change the lives of people, the ethical issues around it must also be examined, as AI can have an enormous impact on societies and even nations. Major ethical issues of AI are:

• Bias and Fairness
• Accountability
• Transparency
• Safety
• Human-AI interaction
• Trust, privacy and control
• Cyber security and malicious use
• Automation and impact over jobs
• Democracy, civil rights, robot rights, etc.

Figure 4.1 lists various ethical issues surrounding AI, grouped into four areas: AI Setup (bias and fairness, accountability, transparency), AI Actions (safety, human-AI interaction, cyber security and malicious use, trust, privacy and control), AI Impact (automation, impact over jobs, auditability, interpretability) and AI Future (control of AI over things and people, human rights vs. robot rights).

[Figure 4.1 Ethical Issues of AI]

4.2.1 Examples of AI Ethical Issues

Let us discuss some examples of AI ethical issues.
1. Bias and Fairness

Ethically, an AI system should be free from all types of biases and be fair. For example, an AI system designed for picking candidates for a job must not be biased against any gender, race, colour, sexuality and so forth. It should be free from all such prejudices and be totally fair.

2. Accountability

AI learns and evolves over time and data. What if an evolved algorithm makes some big mistake? Who would be accountable for it? For instance, when an autonomous Tesla car hit a random pedestrian during a test, Tesla was blamed, not the human test driver sitting inside, and certainly not the algorithm itself. But what if the program was created by dozens of different people and was also modified with each incident as more data became available? Can the developers or the testers be blamed then?

3. Transparency

Transparency means nothing is hidden and everything that AI performs is explainable. Transparency ensures that there is full information and knowledge about these:

• the data used, its range, interval, sources, etc.
• whether the models used are appropriate for the context and make sense
• whether the models are thoroughly tested
• why particular decisions are made

4. Safety

AI technology, tools and practices should be implemented in such a way that they cause no direct or indirect harm to data, people or the outcomes. AI practices must be safe to ensure the well-being of individual persons and the public welfare, and must uphold public trust through the responsible use of technologies.

Figure 4.2 summarises the qualities of ethical AI: Explainable (ability to explain its workings in language people can understand), Transparent (clear, consistent and understandable in its working; ability to see how results vary with changing inputs), Ethical (ethical purpose, build and use), Auditable (allows third parties to assess data inputs and provide assurance that the outputs can be trusted) and Fair (eliminates or reduces the impact of bias on certain users).

[Figure 4.2 Qualities of Ethical AI]
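Explainability, one of the qualities above, means an AI system can say why a particular decision was made. As a toy sketch (the rule names and thresholds below are hypothetical, not from any real system), a decision function can return its reasons along with its output:

```python
# A toy sketch of "explainable" AI: the decision function returns not only its
# decision but also the reasons behind it. The rules and thresholds here are
# hypothetical, for illustration only.
def loan_decision(income, existing_debt):
    reasons = []
    if income < 20000:
        reasons.append("income below the 20000 threshold")
    if existing_debt > 0.5 * income:
        reasons.append("debt exceeds 50% of income")
    decision = "reject" if reasons else "approve"
    return decision, reasons

decision, why = loan_decision(income=18000, existing_debt=12000)
print(decision, why)   # the 'why' list makes the decision transparent and auditable
```

A system built this way can be audited: a third party can check each stated reason against the input data instead of facing an unexplained verdict.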
5. Human-AI Interaction

With the evolution of AI, many AI technologies, such as humanoid robots, have actions and appearance similar to those of human beings or other living beings. Humans very easily attribute mental properties to objects, and empathise with them, especially when the outer appearance of these objects is similar to that of living beings. Thus, AI must be responsible enough and must not exploit this (i.e., looks and actions similar to living beings) to deceive humans (or animals) into attributing more intellectual or even emotional significance to robots or AI systems than they deserve. In short, AI must not deceive humans or other living beings, and it must not threaten or violate human dignity in any way.

6. Trust, Privacy and Control

Improved AI "faking" technologies make what once was reliable evidence unreliable; this has already happened to digital photos, sound recordings, and video. It will soon be quite easy to create (rather than alter) "deepfake" text, photos, and video material with any desired content. Soon, sophisticated real-time interaction with persons over text, phone, or video will be faked too. So, we cannot fully trust digital interactions while we are, at the same time, increasingly dependent on such interactions. Thus, it is the ethical responsibility of the creator and user of AI to ensure that these are not misused.

Note: Deepfake is a technology that can generate fake digital photos, sound recordings, and video, which look just as original as possible.

7. Cyber Security and Malicious Use

AI can also be put to malicious use, such as automated cyber attacks, large-scale misuse of personal data, and manipulation of public opinion. It is the ethical responsibility of the creators and users of AI to guard against such malicious use.

8. Automation and Impact over Jobs

AI and robotics are leading to increased automation in all types of fields and industries, and at many places robots are replacing humans too. This will lead to many humans losing their jobs.
But AI does not mean that jobs are reduced; it just means that the nature of jobs and work is predominantly changing. Thus, it is the ethical responsibility of an organisation to upgrade the skillset of its workers so that they are ready for futuristic, AI-oriented jobs. It is the ethical responsibility of governments too (equally, and even more) to bring appropriate changes to education, training, internships and opportunities for their people, keeping in mind the evolving nature of jobs and the impact of AI on them.

9. Human Rights in the Age of AI

AI has generated new forms of threats, and this has led to a new discussion: how do we protect human rights in the age of AI? There are many consequences of the use and application of AI in our lives, such as:

• With smart cities, people create a trail of data for nearly every aspect of their lives, which reveals minute details about them. AI can process and analyse all this data, and the data is available not only to the government but also to potential advertisers. This is a huge risk to data privacy and protection; it violates the human right to privacy.

• Decisions based on AI-processed data and algorithms may be biased, depending upon the accuracy or inaccuracy and the bias of the algorithm. For example, many facial recognition software systems being developed have shown bias towards fair-skinned people. This leads to biased decisions and violates the human right to a fair chance and justice.

• As AI can process humongous sets of data, it can analyse the huge symptoms dataset of a person and predict many possible future ailments and diseases. Using this analysis, health insurance companies may deny insurance to some people and thus violate the human right to affordable healthcare.

AI Ethics: AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.

There are many more such incidents and examples where AI can be used to violate human rights (some more examples are given in Fig. 4.3).
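The "data trail" risk described above can be made concrete with a toy sketch (all records, identifiers and field names below are invented for illustration): individually harmless records, once linked by an automated system, combine into a sensitive personal profile.

```python
# Toy illustration of a privacy risk: separate, harmless-looking data trails,
# once joined on a common identifier, reveal a detailed health profile.
# Every record here is made up for illustration.
transport_log = [{"user": "u42", "metro_exit": "City Hospital", "time": "09:10"}]
purchase_log  = [{"user": "u42", "item": "blood pressure monitor"}]
search_log    = [{"user": "u42", "query": "cardiologist near me"}]

def profile(user_id, *logs):
    """Merge every record about one user from several independent logs."""
    merged = {}
    for log in logs:
        for record in log:
            if record["user"] == user_id:
                merged.update({k: v for k, v in record.items() if k != "user"})
    return merged

print(profile("u42", transport_log, purchase_log, search_log))
# Individually innocuous records combine into a sensitive health profile.
```

This is why the mere availability of many small data trails, each harmless on its own, can still violate the right to privacy once AI-scale linking and analysis is applied.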
Thus, it is important for governments and authorities to ensure the protection of human rights from various forms of misuse of AI.

Figure 4.3 shows three situations of human rights violations through AI, with exemplary cases:

• Situation I: The input of AI conflicts with human rights. Exemplary cases: use of data without or against the explicit will of customers; disproportionate use of intimate and personal data of individuals by public institutions.

• Situation II: The use of AI in certain areas conflicts with human rights. Exemplary cases: infringement of the right to opinion due to excessive use of algorithms in social media; replacement of democratic decisions by AI decisions (robotocracy).

• Situation III: The output of AI leads to unintended human rights violations. Exemplary cases: unlawful discrimination in job applications based on ethnicity; illicit discrimination of women in the public health system.

[Figure 4.3 Human Rights Violations and AI]

4.3 AI BIAS AND AI ACCESS

AI Bias (Artificial Intelligence Bias) is an important term which you should know about. 'Bias', as you must be knowing, means inclination or prejudice for or against one person or group, especially in a way considered to be unfair. When AI programs, tools and algorithms exhibit any kind of bias, it is called AI bias. Let us understand it with the help of examples.

Example 1

In 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination. It used a computer program to determine which applicants would be invited for interviews, and the list of applicants it picked was found to be biased against women and those with non-European names. The computer program had been developed to match human admissions decisions, using the data the school had collected and the way it had formed the final list, doing so with 90 to 95 percent accuracy.
As the human decisions were biased, the same bias crept into the computer algorithm.

Example 2

In a similar incident in the US, a healthcare algorithm, which was being used to decide about extra medical facilities for people, produced faulty results that favoured white patients over black patients. Again, the bias in the algorithm crept in from the data and the history of human-made decisions in the past.

[A collage of news headlines on AI bias, e.g., "AI expert calls for end to UK use of 'racially biased' algorithms", "Millions of black people affected by racial bias in health-care algorithms", "Artificial Intelligence has a gender bias", "The Best Algorithms Struggle to Recognize Black Faces Equally"]

Having seen such examples, let us now define AI bias formally.

AI Bias: AI bias is an anomaly (irregularity or abnormality) in the result produced through AI-based programs and algorithms because of prejudiced (discriminatory) assumptions made during the algorithm development process or prejudices in the training data.

4.3.1 Possible Bias in Data Collection

Data plays an important role in an AI model's or algorithm's functioning. An AI model or algorithm is trained to function in a certain way using a huge sample set of data, known as training data. Now, if this training data is biased, the result produced by the AI model/algorithm (that used this data) will also be biased. Let us understand what bias in data collection is.

Training Data in AI

Training data is a huge collection of labelled information that's used to build an AI model (e.g., a machine learning model). The training data usually consists of annotated text, images, video, or audio. Through training data, an AI model learns to perform its task at a high level of accuracy.
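A minimal sketch of what "labelled training data" looks like in code (the dataset, feature names and the toy "model" below are invented for illustration; a real machine learning algorithm is far more sophisticated):

```python
# A minimal sketch of labelled training data: each example pairs some feature
# values with a label. The records and the toy "model" are hypothetical.
training_data = [
    # ({feature: value, ...}, label)
    ({"experience_years": 5, "degree": "yes"}, "shortlist"),
    ({"experience_years": 1, "degree": "no"},  "reject"),
    ({"experience_years": 4, "degree": "yes"}, "shortlist"),
    ({"experience_years": 0, "degree": "yes"}, "reject"),
]

def train(data):
    """Toy 'training': learn the minimum experience seen among shortlisted examples."""
    shortlisted = [f["experience_years"] for f, label in data if label == "shortlist"]
    return min(shortlisted)          # the learned threshold

def predict(threshold, features):
    """Apply the learned threshold to a new, unlabelled example."""
    return "shortlist" if features["experience_years"] >= threshold else "reject"

threshold = train(training_data)
print(threshold)                                                   # learned from labels
print(predict(threshold, {"experience_years": 3, "degree": "yes"}))  # new candidate
```

The key point the sketch shows is that the behaviour of the model comes entirely from the labelled examples; if those labels or examples carry a prejudice, the trained model inherits it.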
Training data is used to store people's characteristics in the form of feature values. Bias in a data collection happens if some data representing a feature, group, ethnicity, etc. is under-represented or over-represented. For instance, consider this: a minority group's preferred colour for a car is 'Red' because of their cultural influence. Now, if another dataset records that 'Red' is also a preferred colour choice of aggressive drivers, then without adequate representation, a model may link the minority group with aggressive driving. That is an AI bias.

Reasons for AI Bias in Data

Other than over- and under-representation, there are many more reasons that cause or contribute to AI bias. These are:

(i) Bias in data collection
(ii) Flawed and unbalanced data collection

Note: Bias in data collection refers to flawed or unbalanced data collection, with over- or under-representation of data related to specific features, groups or ethnicity, etc. in the final data collection.
(iii) Under- or over-representation of specific features
(iv) Wrong assumptions
(v) No proper bias testing
(vi) No bias mitigation (i.e., reducing the severity of bias)

Human Biases in Data: reporting bias, stereotypical bias, group attribution error, selection bias, overgeneralization, sampling error, non-sampling error, insensitivity to sample size, correspondence bias, in-group bias, out-group homogeneity bias, historical unfairness, implicit associations (associations of concepts (e.g., black, gay) and evaluations (e.g., good, bad)), implicit stereotypes, prejudice.

Human Biases in Collection and Annotation: bias blind spot, confirmation bias, subjective validation, experimenter's bias, choice-supportive bias, halo effect (the overall impression of a person influences the judgement about the person), neglect of probability, anecdotal fallacy, illusion of validity.

Ensuring Data Fairness

It is very important for the data collectors to do the following for the fairness of data and, thereby, fair decision-making:

• Identifying the correlation of features with data (the data should be diverse)
• Identifying the correlation among sets of data, while studying the impact of data and minimising the impact so as to have fair decisions
• Observing biases in human decisions and the data collected
• Ensuring "balanced" data
• Supervised decision-making
• Regular bias testing, by learning about biases and inducing fairness through thorough and repeated testing of data and modification of training data

Note: A given AI model is fair if the outputs are independent of sensitive parameters (e.g., gender, race, sexuality, religious faith, disability, etc.)
for a specific task that is already affected by social discrimination.

[A figure on Trusted AI Principles: Responsible, Accountable, Transparent, Empowering and Inclusive, e.g., seeking and leveraging user feedback for continuous improvement, being transparent about machine-driven recommendations, and respecting the values of all users, not just those of the builders]

Implications of Biases in AI Technology

AI bias can lead to biased decisions and nullify the intended use of AI technology in a specific context. You have already read some examples of AI biases earlier. Here are some more real-life examples of how AI bias impacts decisions and leads to biased results:

• COMPAS. COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a software used by the US courts to judge the probability of a defendant (the person accused of committing a crime) becoming a recidivist (a person who repeats a previously committed crime). Due to the heavily biased data, the model predicted twice as many false positives for recidivism among black offenders as among white offenders.

• Amazon's Hiring. In 2014, Amazon developed an AI recruiting system to streamline its hiring process. It was found to be discriminatory against women, as the data used to train the model was from the past 10 years, where most selected applicants were men due to the male dominance in the tech industry. Amazon scrapped this system in 2018.

• US Healthcare. The US healthcare system used an AI model that assigned a lower risk rate to black people than to white people for the same disease.
This was because the model was optimized for cost, and since black people were perceived as being less able to pay, the model ranked their health risk lower than that of their white counterparts. This resulted in lower healthcare standards for black people.

• Twitter Image Cropping. In September 2020, Twitter users found out that the image cropping algorithm favoured white faces over black faces; i.e., when an image with a different aspect ratio than the preview window is posted on Twitter, the algorithm crops parts of the image and shows only a certain portion as the preview. This AI model often showed the white faces in the preview window in pictures containing both white and black faces.

• Facebook's Advertisement Algorithm. In 2019, Facebook allowed advertisers to target people based on their race, gender, and religion. This led to jobs like nursing and secretarial work being targeted at women, while jobs like janitor and taxi driver were targeted at men, especially men of colour. The model also learned that real estate ads had a better click-through rate when shown to white people, resulting in a lack of real-estate advertisements shown to minority people.

These are just a few common examples of AI bias. There are many instances of unfair AI practices, with or without the knowledge of the developer.

Reducing and Mitigating AI Bias

Let us now learn how AI practitioners can reduce AI biases in data collections and decisions:

(i) Thorough Research. The data collectors must research in advance the users or subjects about whom the data is being collected. They should be aware of both the general results and the odd results of the data.

(ii) Diversity of Team. The team working on data collection or algorithm development must be diverse, so that no one person or team has a major influence on the data and the decision-making algorithm.

(iii) Data Diversity. Combine inputs from multiple sources to ensure data diversity.

(iv) Standardised Data Labelling.
The team must have a standardised way of labelling, so that accurate, consistent and standardised data labels are used in data collection.

(v) Identify Bias-proneness. The team should identify possible occurrences of biases among data sets and use multi-pass annotation (multiple sets of annotators label the data) so as to minimise possible bias.

(vi) Data Review. Enlist the help of someone with domain expertise to review the collected and/or annotated data. Someone from outside the team may see biases that the team has overlooked.

(vii) Regular Data Analysis. The team should keep track of errors and problem areas, so as to respond to and resolve them quickly.

(viii) Regular Bias Testing. The team must test the collected data, the training data and the overall performance of the algorithm against biases, and use approaches and tools to mitigate the biases.

[A figure summarising the eight measures: Thorough Research, Diversity of Team, Data Diversity, Standardised Data Labelling, Identify Bias-proneness, Data Review, Regular Data Analysis, Regular Bias Testing]

Digital Resource Links

https://www.youtube.com/watch?v=BfHaRUt7EXU&t=271s
https://www.youtube.com/watch?v=fA5xpRngbKM
https://www.youtube.com/watch?v=24cSV_xIRjU&t=9s

Advantages and Disadvantages of AI

Advantages

• Use of AI reduces human error.
• Use of AI helps in lessening repetitive work.
• AI technology provides digital assistance.
• AI technology aids in faster and more accurate decisions.

Disadvantages

• Use of AI can result in cost overruns.
• There is a dearth of talent in the AI domain as of now.
• There is a lack of practical AI-based products.
• There is a big potential for misuse of AI.

Activity: BALLOON DEBATE

Divide the whole class into an even number of teams. Each team may consist of 3-4 students. Two teams get the same theme. The two teams with the same theme will be labelled the 'Pro Team' and the 'Against Team' respectively. Allot the themes to the teams from the following:

Cost of AI projects, Applications of AI, Scope of Error, Impact on Society, Impact on Jobs, Scope of Misuse, Role in Nation Building, Impact on Countries' Relationships (role of AI in war technology and the power of a nation)

• The 'Pro Team' will talk about the positive outcomes of AI as per their allotted theme.
• The 'Against Team' will talk against the outcomes of AI as per their allotted theme.
• The teams should be creatively unique in pinpointing why AI is beneficial/harmful for the society.

Finally, compile all the points of the teams after a proper discussion on each point, under the headings Theme, Pros and Cons.

Check Point

1. Which of the following are some ethical issues around AI?
   (a) Bias and Fairness
   (b) Accountability
   (c) Cyber Security and Malicious use
   (d) All of these

2. 'For hiring purposes, a big firm made use of an AI-based software and it resulted in hiring very few women in the past 5 years.' Which AI ethical issue is related to this?
   (a) Transparency
   (b) Bias and Fairness
   (c) Trust, privacy and control
   (d) Automation and impact over jobs

3. Results based on incomplete or prejudiced data, produced by AI tools, are signified as:
   (a) AI fairness
   (b) Deepfake
   (c) Robotics
   (d) AI bias

4. 'A college student created a fake picture of his fellow student using AI's deepfake technology and registered it for a competition.' Which AI ethical issue is related to this?
   (a) Transparency
   (b) Bias and Fairness
   (c) Trust, privacy and control
   (d) Automation and impact over jobs

5. 'A big data analytics company illegally used the social media data of the people of a constituency to know about their political preferences, and used it to affect the result of the election in that area.' Which AI ethical issue is related to this?
   (a) Human rights and AI
   (b) Cyber security and malicious use
   (c) Transparency
   (d) Safety

6. 'An automobile company laid off many factory workers ever since it started using robots, heavily affecting the employment and livelihood of these workers.' Which AI ethical issue is related to it?
   (a) Transparency
   (b) Bias and Fairness
   (c) Trust, privacy and control
   (d) Automation and impact over jobs

7. The data used by AI algorithms to build AI models is known as ______ data.
   (a) Fair
   (b) Training
   (c) Testing
   (d) Annotated

8. What is/are the reason(s) behind AI bias in data?
   (a) Flawed and unbalanced data collection
   (b) No proper bias testing and mitigation
   (c) Wrong assumptions
   (d) All of the above
   (e) None of these

9. Which of the following measure(s) will ensure Data Fairness in AI?
   (a) Ensuring balanced data
   (b) Observing biases in human decisions and the data collected
   (c) Supervised decision-making
   (d) Regular bias testing
   (e) All of the above
   (f) None of these

10. Which of the following measure(s) are useful for reducing and mitigating AI bias?
   (a) Team diversity
   (b) Data diversity
   (c) Data review
   (d) Bias testing
   (e) All of the above
   (f) None of these

Competency Based Questions

11. Facebook research developed two AI-based chatbots, namely Alice and Bob. But after some time, Facebook shut down these chatbots. The reason given was that Alice and Bob developed their own language after a few exchanges and started interacting in that language. This case is a clear indicator of the AI ethical issue of:
   (a) Trust, Privacy and Control
   (b) Accountability
   (c) Human rights and AI
   (d) All of these

12.
A big multinational company established a huge campus in XYZ country, and it also built a residential campus for its thousands of employees in that country. Nearly 90% of these company employees now work during nights because of the time zone difference from the head office. Thus, during the daytime, the electricity consumption is very low in the company campus and residential complex.

The XYZ country's government is implementing new technologies in various sectors. For power distribution, it got an AI-based software developed by a big software development firm. This AI-based power distribution software distributes power smartly, as per the consumption pattern of an area.

Since the MNC campus and complex use very little power during the daytime, the AI software has cut power in that area without accounting for the right use of power. For this discrepancy, which AI ethical issue is related to it?
   (a) Bias and Fairness
   (b) Accountability
   (c) Transparency
   (d) Human-AI interaction

13. ABC manufacturing company is a hugely successful company. It keeps innovating its product packaging, keeping in mind ease of use and quality. Recently, XYZ company came up with a unique design of packaging boxes which proved highly useful. However, it wants that no packaging box should have any design fault. To ensure this, the XYZ company employed an AI-based software. It was shown a huge collection of images showing the correct design of the packaging box. After repeated use of this huge collection of images, the AI software can now remove boxes with incorrect design.

Such a huge collection of data used for teaching AI software to work in a certain way is called ______ data.
   (a) AI
   (b) Testing
   (c) Training

14. In 2014, Amazon developed an AI recruiting system to streamline their hiring process.
It was found to be discriminatory against women, as the data used to train the model was from the past 10 years, where most selected applicants were men due to the male dominance in the tech industry. Amazon scrapped this system in 2018. This is an example of the AI ethical issue:
   (a) Bias and Fairness
   (b) Accountability
   (c) Transparency
   (d) Human-AI interaction
   (e) All of these

15. Ananya has recently bought a robot to do cleaning, including mopping. The robot senses the dirty floor, calculates the average time taken to mop, senses obstacles and changes direction, and also goes for auto-charging. Her housemaid is feeling insecure, as if she would not be required in the near future. This is a problem related to:   [CBSE 2021-22 (Term 1)]
   (a) AI Bias
   (b) AI Ethics
   (c) AI Creating Unemployment
   (d) Data Privacy

LET US REVISE

• Various ethical issues of AI are: Bias and Fairness, Accountability, Transparency, Safety, Human-AI interaction, Trust, Privacy and Control, Malicious use of AI, Impact over jobs, Human and Robot rights and so forth.

• Deepfake is a technology that can generate fake digital photos, sound recordings, and video, which look just as original as possible.

• AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.

• AI bias is an anomaly (irregularity or abnormality) in the result produced through AI-based programs and algorithms because of prejudiced (discriminatory) assumptions made during the algorithm development process or prejudices in the training data.

• Training data is a huge collection of labelled information that's used to build an AI model (e.g., a machine learning model).
Solution Time

Recap: Some possible reasons for AI bias are: bias in data collection; flawed and unbalanced data collection; under- or over-representation of specific features; wrong assumptions; no proper bias testing; no bias mitigation (i.e., reducing the severity of bias). Bias in data collection refers to flawed or unbalanced data collection, with over- or under-representation of data related to specific features, groups or ethnicity in the final data collection. A given AI model is fair if the outputs are independent of sensitive parameters (e.g., gender, race, sexuality, religious faith, disability, etc.) for a specific task that is already affected by social discrimination.

1. Name some ethical issues of AI.
Ans. Safety, transparency, bias and fairness, accountability.

2. Match the following with respect to ethics in AI:
   (i) Transparent    (a) Clear, consistent and understandable in its working; ability to see how results can vary with changing inputs
   (ii) Fair          (b) Eliminates or reduces the impact of bias on certain users
   (iii) Auditable    (c) Allows third parties to assess data inputs and provide assurance that the outputs can be trusted
   (iv) Explainable   (d) Ability to explain the workings in language people can understand
Ans. (i)-(a); (ii)-(b); (iii)-(c); (iv)-(d)

3. List two examples of human rights violations by AI.
Ans. (i) Collecting personal data of people without their knowledge. (ii) AI bias.

4. Name some ethical issues related to: (i) AI setup, (ii) AI actions, (iii) AI impact, (iv) AI future.
Ans. (i) Bias and fairness, Accountability, Transparency
     (ii) Safety, Human-AI interaction, Trust, Privacy and control
     (iii) Automation, Impact over jobs, Auditability and Interpretability
     (iv) Control of AI over things and people, Human rights, Robot rights

5. What is deepfake?
Ans. Deepfake is a technology that can generate fake digital photos, sound recordings, and video, which look just as original as possible.

6. What is AI bias? Give an example.
Ans.
AI bias is an anomaly (irregularity or abnormality) in the result produced through AI-based programs and algorithms because of prejudiced (discriminatory) assumptions made during the algorithm development process or prejudices in the training data. For example, a health insurance AI program preferred whites over blacks for extra healthcare services because of an AI bias.

7. What is fairness of an AI model?
Ans. A given AI model is fair if the outputs are independent of sensitive parameters (e.g., gender, race, sexuality, religious faith, disability, etc.) for a specific task that is already affected by social discrimination.

8. What is the role of training data in AI?
Ans. An AI model or algorithm is trained to function in a certain way using a sample set of data, known as training data. The completeness of representation (neither over- nor under-represented) and the balance of the data (correct and unbiased representation) affect the outcome of the AI model. If this training data is biased, the result produced by the AI model/algorithm (that used this data) will also be biased. Thus, it is very important that the training data used for an AI model be complete, correct, balanced and unbiased.

9. Define AI ethics.
Ans. AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.

10. How would you ensure that bias does not creep into data?
Ans. One can ensure that bias does not creep in, or gets minimised, by the following steps:
• Identifying the correlation of features with data (the data should be diverse).
• Identifying the correlation among sets of data, while studying the impact of data and minimising the impact so as to have fair decisions.
• Observing biases in human decisions and the data collected.
• Ensuring "balanced" data.
• Supervised decision-making.
• Regular bias testing, by learning about biases and inducing fairness through thorough and repeated testing of data and modification of the training data.

GLOSSARY

AI Bias: An anomaly (irregularity or abnormality) in the result produced through AI-based programs and algorithms because of prejudiced (discriminatory) assumptions made during the algorithm development process or prejudices in the training data.

AI Fairness: Assurance of fair outputs of an AI model by minimising the impact and role of sensitive parameters (e.g., gender, race, sexuality, religious faith, disability, etc.) for tasks and situations prone to social discrimination.

Training Data: A collection of labelled, annotated information that's used to build an AI model.

Assignment

1. What do you understand by AI ethics?
   (a) A set of principles and techniques to ensure that the conduct of AI models is fair and right, per se
   (b) Restriction of tasks to be done via AI
   (c) Embedding control data in AI models
   (d) None of these

2. In an AI-based model, the machine is trained with huge amounts of data, called ______, which helps it in training itself around the data.
   (a) Machine Learning
   (b) Artificial Intelligence
   (c) Training Data
   (d) Deepfake
How is deepfake related to Al ethics 2 What is AI bias 2 Give some examples of AI bias. Discuss the bias in data collection of Al. How is training data useful for AI ? Why should it be fair ? How would you assure that the data used for an AI model in fair and unbiased ? What are the possible reasons of Al-bias ? Why is diversity of data collection team important for Al ? Why is Regular bias testing important for a training data ? Discuss some principles which will make as AI model trusted. Discuss how AI may be used in human rights violations. PRACTICAL ASSIGNMENT Study various cases of Al biases around you. 1 2. a a Figure out how training data could have resulted in AI bias, ée, find out the probable reasons, such as: * Was the data balanced ? ™ Was the data over- or under-represented 2 ™ Was bias testing proper ? * Was the data development team diverse and unbiased 2 and so forth. Suggest the solutions for the probable reasons that you find.
