
Computer Ethics

Introduction

Computer ethics is a set of rules for correct conduct in the world of computing. The use of computers is
growing steadily, so rules, or ways of behaving, are needed in the computing world, and computer ethics
maintains them: it teaches people how to behave properly when using a computer. As computer use
increases, so does the number of norms that computer ethics introduces. It helps us increase our privacy
when using computers and protect ourselves from viruses and from the improper use of software. The
existence of computer ethics is bound up with the appearance and development of computers. Many people
believed that the advent of computers could harm society, overlooking their positive sides; they expected
computers to take over many of the jobs people hold. With computer ethics, however, such negative notions
fade, and people gain a true picture of the appearance and development of computers. Many works have
been written on computer ethics, all so that people can understand the real contribution computers make
today. The rules that computer ethics defines show us how to use computers correctly and how to avoid
offences while working with them. Computer education is now being introduced in primary schools, so that
people are properly informed in time about the use of computers and the benefits they bring. Computer
ethics teaches users how to deal with risks and how to increase their security. Computer users should bear
in mind that, although computers operate under strict rules, software programs were created by people and
embody human intentions.
History of Computer Ethics
1940 - 1970
Computer ethics as a discipline was founded by Norbert Wiener, a professor of mathematics and
engineering at the Massachusetts Institute of Technology (MIT), in the early 1940s. The science Wiener
advocated was called cybernetics. Even though this was at the height of the Second World War, Wiener
foresaw that cybernetics combined with electronic computers would have enormous social and ethical
implications, and that once the war was over the world would go through a second industrial revolution,
resulting in many new ethical challenges and opportunities.
Wiener then published his book Cybernetics, in which he described the new branch of applied science in
detail and identified the social and ethical consequences of electronic computers. In 1950 he published the
book The Human Use of Human Beings, which established him as the founder of computer ethics, even
though he never used that term as the name of the new discipline. In this book he explored a wide range of
ethical problems most likely to be created by computer and information technology. Some of these
problems are important even today, some sixty years later. Among the topics are: computers and security,
computers and (un)employment, the responsibilities of computing professionals, artificial intelligence,
computers and religion, and many other important subjects.
As often happens, because Norbert Wiener was ahead of his time, and did not yet know that he was in fact
the founder of a new field, many of his colleagues, and people in general, regarded him as an eccentric
scientist given to fantasy. The claims in his books would be borne out only two decades later.
Progress in the new discipline lay dormant for some twenty years, until Donn Parker observed that certain
computer crimes had begun to occur (for example, robbing a bank with the help of a computer) and drew
up a code of conduct under which such acts would be punishable. This code was adopted. Over the next
two or three decades, Parker wrote books and articles and gave talks and workshops in the field of
computer ethics.
1970 - 1980
In this period, Joseph Weizenbaum, a computer scientist at the Massachusetts Institute of Technology,
provoked by the reaction of those around him to his software simulation of a psychotherapist named
ELIZA, wrote the book Computer Power and Human Reason, which is foundational literature in computer
ethics. ELIZA had been attacked by his colleagues, who believed his software would put them out of work.
Weizenbaum's book, like his talks, inspired many thinkers and projects in the field of computer ethics.
The term "computer ethics" was first introduced by Walter Maner, who noticed that new ethical questions
arise when computers take part in medical procedures. He developed a new course, at first only
experimental, that bore this name, and within a few years both the term and the course were accepted
everywhere. In the course he offered study materials, discussion topics and a great deal of pedagogical
advice so that university professors could teach the same course themselves. Maner's work had an
enormous influence on the further development of computer ethics: it prompted the introduction of many
new courses in the area, and he gained several important followers.
No other major events in computer ethics took place in this period; its most important development came
later.
1980 - 1990
At the start of this period, the public in America and Europe grew concerned about numerous consequences
of practices tied to information technology. Worrying phenomena included crime committed by computer,
accidents caused by computer failures, invasions of privacy through computer databases, and huge,
expensive lawsuits over software ownership. In this period computer ethics underwent an enormous
expansion.
One of Maner's followers and collaborators, Terrell Ward Bynum, besides working to promote computer
ethics as a discipline, in 1985 edited an issue of the journal Metaphilosophy entitled "Computers and
Ethics", containing the essay "What Is Computer Ethics?", which today is considered foundational
literature for the field.
The same year, Deborah Johnson published the first official textbook on computer ethics, bearing that very
title. She defined computer ethics as a field that examines the ways in which computers raise new moral
questions and dilemmas. Interestingly, although Johnson followed Maner's movement, she did not believe
that computers would create entirely new ethical problems, which led to debates between the two. In these
debates Johnson tried to show that computers merely transform, or build upon, old ethical problems.
Since then, the field of computer ethics has developed very quickly. Around 1985, research centres and
computer ethics courses began to be set up, conferences held, and textbooks and journal articles written.
Of the research centres, those in Europe and Australia proved the most important for the future of
computer ethics. The ETHICOMP series of conferences was also established.

1990 - present
In 1990, Gotterbarn founded the Software Engineering Ethics Research Institute (SEERI) at East
Tennessee State University. Later, together with Simon Rogerson, he developed computer software to help
individuals, companies and organisations prepare ethical analyses and assess the likely ethical impacts of
software engineering projects. This software product is called SoDIS.

In 1991, Bynum and Maner held the first international conference on computer ethics, which many
considered a huge step forward for the field. At this conference views were shared by philosophers,
computing professionals, lawyers, business leaders, government representatives, journalists and other
people from many important branches of society. The conference produced a body of teaching materials,
monographs and video programmes. [10]
In 1995 a second generation in the development of computer ethics began, one that paid more attention to
practical activities. At one of the ETHICOMP conferences, Krystyna Górniak-Kocikowska boldly declared
that computer ethics would soon grow into a globally developed ethics, present in every culture on the
planet. Later events only confirmed her hypothesis. In 1999, Deborah Johnson opposed it, stating that
computer ethics would not evolve but would remain the old ethics with a new twist.
Górniak's hypothesis is of great significance for computer ethics because it was provocative and revived
interest in the claims Maner had made at the same conference. There Górniak predicted that computer
ethics would develop into something much more than the mere applied ethics it was at the time: a global
ethics, accepted by every culture on Earth.
Johnson's hypothesis, by contrast, claims exactly the opposite, even though at first glance the two look
identical.
This means that thinkers in the field have offered the world two very different views of the ethical
importance and ethical development of computer technology. One view, Deborah Johnson's, rests on the
claim that the problems brought by information technology are the old ones with a new twist. This view
can be called conservative, since it holds that the basic ethical theories built up over the centuries will
remain untouched, and hence that computer ethics as a separate branch of applied philosophy will
disappear. The other view belongs to the trio of Wiener, Maner and Górniak. These three scholars see
computer technology as revolutionary: their view rests on the claim that people will have to re-examine
the basic principles of ethics, and with them the very definition of human life.
Violations of Computer Ethics
People commit many violations of ethical rules when using computers, largely because they do not
perceive those violations as serious. A large share of computer users use software illegally, use someone
else's computer without obtaining permission, or use other people's passwords. People's low awareness is
one of the causes of violations of ethical rules; to enter the computing world properly, we must respect
them. Violations of computer ethics grow along with the number of irregularities committed by computer
users: as computer viruses, worms and hacking multiply, computer crime spreads, and with it the rules of
computer ethics are broken. Privacy is one of the topics that has concerned computer ethics for a long
time. The convenience of using computers to store and search information puts privacy at risk, because
the easier it is to store and search information, the greater and easier the possibility of access to other
people's information. Many violations are connected with the topic of software ownership. Richard
Stallman and others started the Free Software Foundation; they hold that all information should be free
and that all programs should be available for copying, studying and modifying. Others argue that software
companies and programmers would not invest so many months of work if that investment were not
returned to them in the form of sales.
Structure
Computer ethics is made up of rules that dictate the proper way of operating in the world of computers.

The number of these rules grows as computer technology develops. Together, the rules that make up
computer ethics foster the correct functioning of computer technology. Because computer ethics touches
on several topics, the rules can in fact be classified according to the topic from which they arise. The rules
that make up computer ethics give us an accurate vision of computers and of how they help us in the real
world; they keep us from seeing the development of computers as competition out to take our jobs. For
people to start respecting these rules, they must first be acquainted with them. To that end, computer
education should be introduced in schools, helping users handle computers correctly from an early age.
Every disregard of these rules creates a wrong way of approaching the world of computers.
Significance
Computer ethics is a very important factor for proper functioning in the world of computers. The rules it
comprises should be respected by all computer users. If they are not, many violations will occur, with
negative consequences for everyone who uses computers. By working through the topics that concern it,
computer ethics helps people form a correct vision of computers. Wrong visions of computers can lead
people to develop a kind of aversion to them, and thereby hold back their own progress in the computing
world. As technology develops over time, new rules need to be defined and applied. Without the rules of
computer ethics, privacy and security would be at a very low level, which suits no computer user, since
nobody wants to put their privacy and security at risk. Those who respect the rules of computer ethics will
never commit irregularities for which they could be punished; but for the rules to be respected, users must
know them and be able to apply them. Only then will the development of the computing world move in
the right direction, so that we all benefit from it rather than suffer harms that affect us negatively. Actions
taken while using a computer that cause harm to other users should be punished. The number of people
engaged in computer ethics keeps growing as computers develop; they work on defining codes of
computer ethics to be respected by users. The sooner people apply these rules, the more effective the
prevention of mistakes will be, and the easier it will be to avoid the punishments that follow such
mistakes, or rather offences.
Ten Commandments of Computer Ethics
 1. Do not use the computer to harm other people.
 2. Do not interfere with other people's work.
 3. Do not snoop around in other people's files.
 4. Do not use the computer to steal.
 5. Do not use the computer to bear false witness.
 6. Do not use software for which you have not paid.
 7. Do not use another person's user account without proper authorisation.
 8. Do not appropriate other people's intellectual output.
 9. Think about the social consequences of the programs you create.
 10. Use the computer in a way that shows respect for human feelings.
Computer ethics

Computer Ethics is a part of practical philosophy which deals with how computing professionals should
make decisions regarding professional and social conduct. Margaret Anne Pierce, a professor in the
Department of Mathematics and Computers at Georgia Southern University has categorized the ethical
decisions related to computer technology and usage into three primary influences:

 1. The individual's own personal code.
 2. Any informal code of ethical conduct that exists in the work place.
 3. Exposure to formal codes of ethics.

Foundation

To understand the foundation of computer ethics, it is important to look into the different schools of ethical
theory. Each school of ethics influences a situation in a certain direction and pushes the final outcome of
ethical reasoning.

Relativism is the belief that there are no universal moral norms of right and wrong. Within relativistic
ethical belief, ethicists distinguish two connected but different structures: the subjective (moral) and the
cultural (anthropological). Moral relativism is the idea that each person decides what is right and wrong for
themselves. Anthropological relativism is the idea that right and wrong are decided by a society's actual
moral belief structure.

Deontology is the belief that people’s actions are to be guided by moral laws, and that these moral laws are
universal. The origins of Deontological Ethics are generally attributed to the German philosopher Immanuel
Kant and his ideas concerning the Categorical Imperative. Kant believed that in order for any ethical school
of thought to apply to all rational beings, they must have a foundation in reason. Kant split this school into
two categorical imperatives. The first categorical imperative states to act only from moral rules that you can
at the same time will to be universal moral laws. The second categorical imperative states to act so that you
always treat both yourself and other people as ends in themselves, and never only as a means to an end.

Utilitarianism is the belief that an action is good if it benefits someone and bad if it harms someone. This
ethical belief can be broken down into two different schools, Act Utilitarianism and Rule Utilitarianism.
Act Utilitarianism is the belief that an action is good if its overall effect is to produce more happiness than
unhappiness. Rule Utilitarianism is the belief that we should adopt moral rules which, if followed by
everybody, would lead to a greater level of overall happiness.

Social contract is the concept that for a society to arise and maintain order, a morality-based set of rules
must be agreed upon. Social contract theory has influenced modern government and is heavily involved
with societal law. Philosophers like John Rawls, Thomas Hobbes, John Locke, and Jean-Jacques Rousseau
helped create the foundation of social contract theory.

Virtue Ethics is the belief that ethics should be more concerned with the character of the moral agent (virtue),
rather than focusing on a set of rules dictating right and wrong actions, as in the cases of deontology and
utilitarianism, or a focus on social context, such as is seen with Social Contract ethics. Although concern for
virtue appears in several philosophical traditions, in the West the roots of the tradition lie in the work of
Plato and Aristotle, and even today the tradition’s key concepts derive from ancient Greek philosophy.

The conceptual foundations of computer ethics are investigated by information ethics, a branch of
philosophical ethics established by Luciano Floridi. The term computer ethics was first coined by Dr. Walter
Maner, a professor at Bowling Green State University. Since the 1990s the field has started being integrated
into professional development programs in academic settings.

History

The concept of computer ethics originated in 1950 when Norbert Wiener, an MIT professor and inventor of
an information feedback system called "cybernetics", published a book called "The Human Use of Human
Beings" which laid out the basic foundations of computer ethics and made Norbert Wiener the father of
computer ethics.

Later on, in 1966, another MIT professor by the name of Joseph Weizenbaum published a simple program
called ELIZA which performed natural language processing. In essence, the program functioned like a
psychotherapist, using only open-ended questions to encourage patients to respond. The program applied
pattern-matching rules to human statements to figure out its replies.
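ELIZA's transformation of a statement into a reply can be illustrated with a minimal pattern-matching sketch. The rules and phrasings below are invented for illustration; Weizenbaum's original script was far larger and more subtle:

```python
import re

# Each rule pairs a regular expression over the user's statement with a
# response template; the captured fragment is reflected back at the
# speaker, in the style of an open-ended therapist's question.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT_REPLY = "Please go on."  # open-ended prompt when nothing matches

def reply(statement: str) -> str:
    """Return the response of the first matching rule, or a default
    open-ended prompt when no rule applies."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT_REPLY
```

For example, `reply("I am unhappy.")` reflects the captured fragment back as "Why do you say you are unhappy?", which is essentially all the "understanding" the program had.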

A bit later during the same year the world's first computer crime was committed. A programmer was able to
use a bit of computer code to stop his banking account from being flagged as overdrawn. [citation needed]
However, there were no laws in place at that time to stop him, and as a result he was not charged. To make
sure another person did not follow suit, an ethics code for computers was needed.

Sometime further into the 1960s, Donn Parker of SRI International, an author on computer crimes,[3]
led the development of the first code of ethics in the field of computer technology.[citation needed]

In 1970, a teacher and researcher in medical ethics by the name of Walter Maner noticed that ethical
decisions are much harder to make when computers are added. He saw the need for a separate branch of
ethics for dealing with computers. The term "computer ethics" was thus coined.

During the same decade, the ACM (Association for Computing Machinery) decided to adopt a professional
code of ethics, and by the middle of the 1970s new privacy and computer crime laws had been put in place
in the United States as well as in Europe.

In the year 1976 Joseph Weizenbaum made his second significant addition to the field of computer ethics.
He published a book titled "Computer Power and Human Reason", which argued that while artificial
intelligence may benefit the world, it should never be allowed to make the most important decisions,
because it does not have human qualities such as wisdom. By far the most important point he makes in the book is
the distinction between choosing and deciding. He argued that deciding is a computational activity while
making choices is not and thus the ability to make choices is what makes us humans.

At a later time during the same year Abbe Mowshowitz, a professor of Computer Science at the City College
of New York, published an article titled "On approaches to the study of social issues in computing". This
article identified and analyzed technical and non-technical biases in research on social issues present in
computing.

In 1978, the Right to Financial Privacy Act was adopted, drastically limiting the government's ability to
search bank records.

During the same year Terrell Ward Bynum, the professor of Philosophy at Southern Connecticut State
University as well as Director of the Research Center on Computing and Society there, developed the first
ever curriculum for a university course on computer ethics. To make sure he kept the interests of students
alive in computer ethics, he launched an essay contest where the subject students had to write about was
computer ethics. In 1985, he published a journal issue entitled "Computers and Ethics", which turned out to
be his most famous publication to date.

In 1984, the Small Business Computer Security and Education Act was adopted; it informed Congress on
matters related to computer crimes against small businesses.

In 1985, James Moor, Professor of Philosophy at Dartmouth College in New Hampshire, published an essay
called "What Is Computer Ethics?". In this essay Moor states that computer ethics includes the following: "(1)
identification of computer-generated policy vacuums, (2) clarification of conceptual muddles, (3)
formulation of policies for the use of computer technology, and (4) ethical justification of such policies."

During the same year, Deborah Johnson, Professor of Applied Ethics and Chair of the Department of
Science, Technology, and Society in the School of Engineering and Applied Sciences of the University of
Virginia, published the first major computer ethics textbook. It not only became the standard-setting
textbook for computer ethics, but also set the research agenda for the next ten years.

In 1988, a librarian at St. Cloud University by the name of Robert Hauptman, came up with "information
ethics", a term that was used to describe the storage, production, access and dissemination of information.
Around the same time, the Computer Matching and Privacy Protection Act was adopted; it regulated the
government's computer matching of records, such as for verifying eligibility for benefit programs and for
identifying debtors.

The 1990s was the time when computers were reaching their pinnacle and the combination of computers
with telecommunication, the internet, and other media meant that many new ethical issues were raised.

In the year 1992, the ACM adopted a new set of ethical rules called the "ACM Code of Ethics and
Professional Conduct", which consisted of 24 statements of personal responsibility.

Three years later, in 1995, Górniak-Kocikowska, a Professor of Philosophy at Southern Connecticut State
University, Coordinator of the Religious Studies Program, and a Senior Research Associate in the
Research Center on Computing and Society, put forward the idea that computer ethics would eventually
become a global ethical system and, soon after, would replace ethics altogether, becoming the standard
ethics of the information age.

In 1999, Deborah Johnson revealed her view, which was quite contrary to Kocikowska's belief, and stated
that computer ethics will not evolve but rather be our old ethics with a slight twist.

Internet Privacy

Internet Privacy is one of the key issues that has emerged since the evolution of the World Wide Web.
Millions of internet users expose personal information on the internet in order to sign up or register for
thousands of different services. This practice exposes them on the internet in ways some may not realize.
In other cases individuals do not expose themselves; rather, governments, large corporations and small
businesses leave the personal information of their clients, citizens or the general public exposed on the
internet. One prime example is Google Street View and its online photographic mapping of urban areas,
including residences. Although this advanced global mapping is a wondrous technique for helping people
find locations, it also exposes everyone on the internet to only moderately restricted views of suburbs,
military bases, accidents, and inappropriate content in general. This has raised major concerns all across
the world (source: CSC300 lecture notes, University of Toronto, 2011; see also the Electronic Privacy
Information Center).

Another example of privacy issues concerning Google is the tracking of searches. A feature within search
allows Google to keep track of searches so that advertisements will match your search criteria, which in
turn means using people as products: if you are not paying for a service online, instead of being the
consumer you may very well be the product.

There is an ongoing discussion about what privacy means and whether it is still needed. With the increase in
social networking sites, more and more people are allowing their private information to be shared publicly.
On the surface, this may be seen as someone listing private information about themselves on a social
networking site; below the surface, it may be the site that is sharing the information, not the individual.
This is the idea of an Opt-In versus an Opt-Out situation. Many privacy statements specify whether the
policy is Opt-In or Opt-Out. Under an Opt-In policy, the individual's information is not shared unless they
explicitly tell the company issuing the policy to share it; under an Opt-Out policy, their information will be
shared unless the individual tells the company not to share it.
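The two defaults can be made concrete in a short sketch; the class and function names here are hypothetical, purely for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    name: str
    consent_given: Optional[bool] = None  # None means the user never answered

def may_share(user: User, policy: str) -> bool:
    """Decide whether a user's data may be shared under a given policy.

    Under "opt-in", silence means no sharing; under "opt-out", silence
    means sharing is allowed. An explicit answer always overrides the
    default.
    """
    if user.consent_given is not None:
        return user.consent_given
    return policy == "opt-out"  # only the default differs between policies
```

The ethical weight falls entirely on the silent user: under opt-in they are protected by default, while under opt-out their information flows unless they actively object.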

Internet Control

Given the internet's vastness and ease of accessibility, the number of users it sees grows very quickly
every day. People from all over the world now accept the internet as a common means of access to their
information, news, social networking and personal entertainment. Now that the demographic makeup of
Internet users increasingly mirrors the demographics of society as a whole, it's important to examine the
relationship between higher-order consumer behavior constructs and Internet use to gain deeper insights into
the behavior of consumers on the internet. As more businesses and corporations come to understand this, the
idea of the internet having a source of control comes into thought.

The term "internet control" however, is a rather broad, catch-all category that subsumes both censorship and
surveillance. As such, it is sensitive to violations of both the right to freedom of expression and the right to
privacy.[5] With the internet's massive popularity, power struggles of the world have begun to transfer onto
the internet, and the issue of internet control becomes more prevalent. Independent users, businesses, search
engines, and every other possible source of information try to control, manipulate, bias and censor the
information they present on the internet, whether they realize it or not. This gives the public a view of
issues or events that may or may not have been modified, which can easily bend opinion in frightening ways.

There are many real life examples of this. Some of the most evident deal with companies trying to get the
public to buy-in to certain things by controlling the way you see things online. Similarly, companies can also
include hidden code in proprietary software that scans the computer for various programs that are installed
and other files that may be contained on the computer. This practice is comparable to unethical methods of
fleet management: a service-oriented company operates a number of similar devices, hired by consumers
and installed at the customer's premises, and keeps a survey of the precise status of each device: which
software components are or are not installed, whether the device is in use, what state it is in, and so forth.

Another important aspect of internet control is how news is now delivered to us. The internet gives news
companies control in the sense that they have the ability to alter the way the public views certain issues or
events through modified information, which can easily bend opinion in frightening ways. International
news can spread across the globe in very little time, sometimes with little confirmation of what is real and
what is not. This form of "internet control" can easily be used to influence the way people
perceive certain topics and ideas. It is an issue that is being seen worldwide. In China, technological
development and social transformation provide the basic structural conditions. A fledgling civil society of
online communities and offline civic associations, the logic of social production in the internet economy, and
the creativity of Chinese internet users combine to sustain online activism under conditions of growing
political control of the internet in China.

Moreover, the broad topic of internet control is still expanding, showing that information, spam and
censorship have moved from paper and television to the internet and computers. As more people tune into
the web, the power struggles of the world will transfer more and more onto the internet in the quest for
control, user dominance, bias and censorship.

Computer Reliability

In computer networking, a reliable protocol is one that provides reliability properties with respect to the
delivery of data to the intended recipient(s), as opposed to an unreliable protocol, which does not provide
notifications to the sender as to the delivery of transmitted data. A reliable multicast protocol may ensure
reliability on a per-recipient basis, as well as provide properties that relate the delivery of data to different
recipients, such as total ordering, atomicity, or virtual synchrony. Reliable protocols typically incur more
overhead than unreliable protocols, and as a result, are slower and less scalable. This often is not an issue for
unicast protocols, but it may be a problem for multicast protocols. TCP, the main protocol used in the
Internet today, is a reliable unicast protocol. UDP, often used in computer games or other situations where
speed is an issue and the loss of a little data is not as important because of the transitory nature of the data, is
an unreliable protocol. Often, a reliable unicast protocol is also connection-oriented. For example, TCP is connection-oriented, with the virtual circuit ID consisting of source and destination IP addresses and port numbers. Some unreliable protocols are connection-oriented as well. These include ATM
and Frame Relay, on which a substantial part of all Internet traffic is passed.
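The contrast between TCP's connection-oriented reliability and UDP's fire-and-forget delivery shows up directly in the sockets API. A minimal Python sketch over the loopback interface (addresses and payloads are illustrative):

```python
import socket
import threading

# --- TCP: reliable, connection-oriented ---
# Connection setup establishes the "virtual circuit" described above
# (source/destination addresses and ports); data then arrives intact and
# in order, or the sender is notified of failure.

def tcp_receiver(listener: socket.socket, results: list) -> None:
    conn, _ = listener.accept()
    with conn:
        chunks = []
        while data := conn.recv(1024):   # read until the peer closes
            chunks.append(data)
        results.append(b"".join(chunks))

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

received: list = []
server = threading.Thread(target=tcp_receiver, args=(listener, received))
server.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))      # handshake: the connection setup
client.sendall(b"reliable payload")      # queued for in-order delivery
client.close()
server.join()
listener.close()

# --- UDP: unreliable, connectionless ---
# sendto() returns as soon as the datagram is handed to the network stack;
# the sender gets no notification of delivery or loss.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"fire-and-forget", ("127.0.0.1", 9))  # discard port; no ACK comes back
udp.close()

print(received[0].decode())
```

Over loopback even the UDP datagram will almost certainly arrive; the point is that the UDP sender would never learn if it did not, whereas the TCP sender would see an error.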

Identifying issues

Identifying ethical issues as they arise, as well as defining how to deal with them, has traditionally been
problematic. In solving problems relating to ethical issues, Michael Davis proposed a unique problem-
solving method. In Davis's model, the ethical problem is stated, facts are checked, and a list of options is
generated by considering relevant factors relating to the problem. The actual action taken is influenced by
specific ethical standards.

Some questions in computer ethics

There are a number of computer-based ethical dilemmas that are frequently discussed. One set of issues deals with some of the new ethical dilemmas that have emerged, or taken on new form, with the rise of the Internet and social networking. There are now many ways to gain information about others that were not available, or easily available, before the rise of computers. Thus ethical issues about the storage of personal information are becoming an ever-increasing concern. With more storage of personal data for social networking arises the problem of selling that information for monetary gain. This gives rise to different ethical situations regarding access, security, and the use of hacking in positive and negative situations.

Situations regarding copyright infringement of software, music, and movies are increasingly discussed with the rise of file-sharing programs and protocols such as Napster, Kazaa, and BitTorrent. The ethical question that arises from software piracy is: is it immoral or wrong to copy software, music, or movies?

A second set of questions, pertaining to the Internet and its societal influence, concerns the values that some may wish to promote via the Internet. Some have claimed that the Internet is a "democratic technology". Does the Internet foster democracy and freedom of speech? What are the ethical implications of this process for the world? Does the digital divide raise ethical issues that society is morally obligated to address by spreading the ability to access different forms of electronic communication?

Ethical standards

Various national and international professional societies and organizations have produced codes of ethics to give basic behavioral guidelines to computing professionals and users. They include:

 Association for Computing Machinery: ACM Code of Ethics and Professional Conduct
 British Computer Society: BCS Code of Conduct & Code of Good Practice
 Australian Computer Society: ACS Code of Ethics and ACS Code of Professional Conduct
 IEEE: IEEE Code of Ethics
 Computer Ethics Institute: Ten Commandments of Computer Ethics

Rebecca Herold

Introduction to Computer Ethics

The consideration of computer ethics fundamentally emerged with the birth of computers. There was concern
right away that computers would be used inappropriately to the detriment of society, or that they would
replace humans in many jobs, resulting in widespread job loss. To grasp fully the issues involved with
computer ethics, it is important to consider the history. The following provides a brief overview of some
significant events.

Consideration of computer ethics is recognized to have begun with the work of MIT professor Norbert
Wiener during World War II in the early 1940s, when he helped to develop an anti-aircraft cannon capable of
shooting down fast warplanes. This work resulted in Wiener and his colleagues creating a new field of
research that Wiener called cybernetics, the science of information feedback systems. The concepts of cybernetics, combined with developing computer technologies, led Wiener to draw ethical conclusions about what is now called information and communication technology (ICT), predicting its social and ethical consequences. Wiener published The Human Use of Human Beings in 1950,
which described a comprehensive foundation that is still the basis for computer ethics research and analysis.

In the mid-1960s, Donn B. Parker, at the time with SRI International in Menlo Park, CA, began examining
unethical and illegal uses of computers and documenting examples of computer crime and other unethical
computerized activities. He published "Rules of Ethics in Information Processing" in Communications of the
ACM in 1968, and headed the development of the first Code of Professional Conduct for the Association for
Computing Machinery, which was adopted by the ACM in 1973.

During the late 1960s, Joseph Weizenbaum, a computer scientist at MIT, created a computer
program that he called ELIZA that he scripted to provide a crude imitation of "a Rogerian psychotherapist
engaged in an initial interview with a patient." People had strong reactions to his program, some psychiatrists
fearing it showed that computers would perform automated psychotherapy.

Weizenbaum wrote Computer Power and Human Reason in 1976, in which he expressed his concerns about
the growing tendency to see humans as mere machines. His book, MIT courses, and many speeches inspired
many computer ethics thoughts and projects.

Walter Maner is credited with coining the phrase "computer ethics" in the mid-1970s when discussing the
ethical problems and issues created by computer technology, and taught a course on the subject at Old
Dominion University. From the late 1970s into the mid-1980s, Maner's work created much interest in
university-level computer ethics courses. In 1978, Maner published the Starter Kit in Computer Ethics,
which contained curriculum materials and advice for developing computer ethics courses. Many university
courses were put in place because of Maner's work.

In the 1980s, social and ethical consequences of information technology, such as computer-enabled crime,
computer failure disasters, privacy invasion using computer databases, and software ownership lawsuits,
were being widely discussed in America and Europe. James Moor of Dartmouth College published "What Is Computer Ethics?" in the journal Metaphilosophy, and Deborah Johnson of Rensselaer Polytechnic Institute published Computer Ethics, the first textbook in the field, in the mid-1980s. Other significant books about
computer ethics were published within the psychology and sociology field, such as Sherry Turkle's The
Second Self, about the impact of computing on the human psyche, and Judith Perrolle's Computers and
Social Change: Information, Property and Power, about a sociological approach to computing and human
values.

Walter Maner and Terrell Bynum held the first international multidisciplinary conference on computer ethics in 1991.
For the first time, philosophers, computer professionals, sociologists, psychologists, lawyers, business
leaders, news reporters, and government officials assembled to discuss computer ethics. During the 1990s,
new university courses, research centers, conferences, journals, articles, and textbooks appeared, and
organizations like Computer Professionals for Social Responsibility, the Electronic Frontier Foundation, and
the Association for Computing Machinery-Special Interest Group on Computers and Society (ACM-
SIGCAS) launched projects addressing computing and professional responsibility. Developments in Europe
and Australia included new computer ethics research centers in England, Poland, Holland, and Italy. In the
U.K., Simon Rogerson, of De Montfort University, led the ETHICOMP series of conferences and established
the Centre for Computing and Social Responsibility.

Regulatory Requirements for Ethics Programs

When creating an ethics strategy, it is important to look at the regulatory requirements for ethics programs.
These provide the basis for a minimal ethical standard upon which an organization can expand to fit its own
unique organizational environment and requirements. An increasing number of regulatory requirements
related to ethics programs and training now exist.

The 1991 U.S. Federal Sentencing Guidelines for Organizations (FSGO) outline minimal ethical
requirements and provide for substantially reduced penalties in criminal cases when federal laws are violated
if ethics programs are in place. Reduced penalties provide strong motivation to establish an ethics program.
Effective November 1, 2004, the FSGO was updated with additional requirements:

 In general, board members and senior executives must assume more specific responsibilities for a
program to be found effective:
o Organizational leaders must be knowledgeable about the content and operation of the compliance
and ethics program, perform their assigned duties exercising due diligence, and promote an
organizational culture that encourages ethical conduct and a commitment to compliance with the
law.
o The commission's definition of an effective compliance and ethics program now has three
subsections:
 Subsection (a) - the purpose of a compliance and ethics program
 Subsection (b) - seven minimum requirements of such a program
 Subsection (c) - the requirement to periodically assess the risk of criminal conduct and design,
implement, or modify the seven program elements, as needed, to reduce the risk of criminal conduct

The purpose of an effective compliance and ethics program is "to exercise due diligence to prevent and
detect criminal conduct and otherwise promote an organizational culture that encourages ethical conduct and
a commitment to compliance with the law." The new requirement significantly expands the scope of an
effective ethics program and requires the organization to report an offense to the appropriate governmental
authorities without unreasonable delay.

The Sarbanes-Oxley Act of 2002 introduced accounting reform and requires attestation to the accuracy of
financial reporting documents:

 Section 103, "Auditing, Quality Control, and Independence Standards and Rules," requires the board to:
o Register public accounting firms
o Establish, or adopt, by rule, "auditing, quality control, ethics, independence, and other standards
relating to the preparation of audit reports for issuers"
 New Item 406(a) of Regulation S-K requires companies to disclose:
o Whether they have a written code of ethics that applies to their senior officers
o Any waivers of the code of ethics for these individuals
o Any changes to the code of ethics
 If companies do not have a code of ethics, they must explain why they have not adopted one.
 The U.S. Securities and Exchange Commission approved a new governance structure for the New York
Stock Exchange (NYSE) in December 2003. It includes a requirement for companies to adopt and
disclose a code of business conduct and ethics for directors, officers, and employees, and promptly
disclose any waivers of the code for directors or executive officers. The NYSE regulations require all
listed companies to possess and communicate, both internally and externally, a code of conduct or face
delisting.

In addition to these, organizations must monitor new and revised regulations from U.S. regulatory agencies,
such as the Food and Drug Administration (FDA), Federal Trade Commission (FTC), Bureau of Alcohol,
Tobacco, and Firearms (BATF), Internal Revenue Service (IRS), and Department of Labor (DoL), and many
others throughout the world. Ethics plans and programs need to be established within the organization to
ensure that the organization is in compliance with all such regulatory requirements.

Example Topics in Computer Ethics

When establishing a computer ethics program and accompanying training and awareness programs, it is
important to consider the topics that have been addressed and researched. The following topics, identified by
Terrell Bynum, are good to use as a basis.

Computers in the Workplace. Computers can pose a threat to jobs as people feel they may be replaced by
them. However, the computer industry already has generated a wide variety of new jobs. When computers do
not eliminate a job, they can radically alter it. In addition to job security concerns, another workplace
concern is health and safety. It is a computer ethics issue to consider how computers impact health and job
satisfaction when information technology is introduced into a workplace.

Computer Crime. With the proliferation of computer viruses, spyware, phishing and fraud schemes, and
hacking activity from every location in the world, computer crime and security are certainly topics of
concern when discussing computer ethics. Besides outsiders, or hackers, many computer crimes, such as
embezzlement or planting of logic bombs, are committed by trusted personnel who have authorization to use
company computer systems.

Privacy and Anonymity. One of the earliest computer ethics topics to arouse public interest was privacy.
The ease and efficiency with which computers and networks can be used to gather, store, search, compare,
retrieve, and share personal information make computer technology especially threatening to anyone who
wishes to keep personal information out of the public domain or out of the hands of those who are perceived
as potential threats. The variety of privacy-related issues generated by computer technology has led to
reexamination of the concept of privacy itself.

Intellectual Property. One of the more controversial areas of computer ethics concerns the intellectual
property rights connected with software ownership. Some people, like Richard Stallman, who started the
Free Software Foundation, believe that software ownership should not be allowed at all. He claims that all
information should be free, and all programs should be available for copying, studying, and modifying by
anyone who wishes to do so. Others, such as Deborah Johnson, argue that software companies or
programmers would not invest weeks and months of work and significant funds in the development of
software if they could not get the investment back in the form of license fees or sales.

Professional Responsibility and Globalization. Global networks such as the Internet and conglomerates of
business-to-business network connections are connecting people and information worldwide. Such
globalization issues that include ethics considerations include:

 Global laws
 Global business
 Global education
 Global information flows
 Information-rich and information-poor nations
 Information interpretation

The gap between rich and poor nations, and between rich and poor citizens in industrialized countries, is very
wide. As educational opportunities, business and employment opportunities, medical services, and many
other necessities of life move more and more into cyberspace, gaps between the rich and the poor may
become even worse, leading to new ethical considerations.

Common Computer Ethics Fallacies


Although computer education is starting to be incorporated in the lower grades of elementary schools, the lack of early computer education for most current adults has led to several documented, generally accepted fallacies that apply to nearly all computer users. As technology advances, these fallacies will change; new ones will arise, and some of the original fallacies will no longer exist as children learn at an earlier age about computer use, risks, security, and other associated information. There are more fallacies than are described here, but Peter S. Tippett identified the following computer ethics fallacies, which have been widely discussed and are generally accepted as representative of the most common.

The Computer Game Fallacy. Computer users tend to think that computers will generally prevent them
from cheating and doing wrong. Programmers in particular tend to believe that an error in syntax will prevent a program from working, so if a program does work, it must be working correctly and preventing bad things or mistakes from happening. Even computer users in general have gotten the
message that computers work with exacting accuracy and will not allow actions that should not occur. Of
course, what computer users often do not consider is that although the computer operates under very strict
rules, the software programs are written by humans and are just as susceptible to allowing bad things to
happen as people often are in their own lives. Along with this, there is also the perception that a person can
do something with a computer without being caught, so that if what is being done is not permissible, the
computer should somehow prevent them from doing it.

The Law-Abiding Citizen Fallacy. Laws provide guidance for many things, including computer use.
Sometimes users confuse what is legal with regard to computer use with what is reasonable behavior for
using computers. Laws basically define the minimum standard about which actions can be reasonably
judged, but such laws also call for individual judgment. Computer users often do not realize they also have a
responsibility to consider the ramifications of their actions and to behave accordingly.

The Shatterproof Fallacy. Many, if not most, computer users believe that they can do little harm
accidentally with a computer beyond perhaps erasing or messing up a file. However, computers are tools that
can harm, even if computer users are unaware of the fact that their computer actions have actually hurt
someone else in some way. For example, sending an email flame to a large group of recipients is the same as
publicly humiliating them. Most people realize that they could be sued for libel for making such statements
in a physical public forum, but may not realize they are also responsible for what they communicate and for
their words and accusations on the Internet. As another example, forwarding e-mail without permission of
the author can lead to harm or embarrassment if the original sender was communicating privately without
expectation of his message being seen by any others. Also, using e-mail to stalk someone, to send spam, and
to harass or offend the recipient in some way also are harmful uses of computers. Software piracy is yet
another example of using computers to, in effect, hurt others.

Generally, the shatterproof fallacy is the belief that whatever a person does with a computer can do only minimal harm, affecting perhaps a few files on the computer itself; it is a failure to consider the impact of actions before taking them.

The Candy-from-a-Baby Fallacy. Illegal and unethical activity, such as software piracy and plagiarism, are
very easy to do with a computer. However, just because it is easy does not mean that it is right. Because of
the ease with which computers can make copies, it is likely almost every computer user has committed
software piracy of one form or another. Software Publisher's Association (SPA) and Business Software Alliance (BSA) studies reveal that software piracy costs companies billions of dollars. Copying a retail
software package without paying for it is theft. Just because doing something wrong with a computer is easy
does not mean it is ethical, legal, or acceptable.

The Hacker's Fallacy. Numerous reports and publications describe the commonly accepted hacker belief that it is acceptable to do anything with a computer as long as the motivation is to learn and not to gain or make a profit from such activities. This so-called hacker ethic is explored in more depth in the following section.

The Free Information Fallacy. A somewhat curious opinion of many is the notion that information "wants
to be free," as mentioned earlier. It is suggested that this fallacy emerged from the fact that it is so easy to
copy digital information and to distribute it widely. However, this line of thinking completely ignores the fact that the copying and distribution of data is completely under the control and whim of the people who do it,
and to a great extent, the people who allow it to happen.

Hacking and Hacktivism

Hacking is an ambivalent term, most commonly perceived as being part of criminal activities. However,
hacking has been used to describe the work of individuals who have been associated with the open-source
movement. Many of the developments in information technology have resulted from what has typically been
considered as hacking activities. Manuel Castells considers hacker culture as the "informationalism" that
incubates technological breakthrough, identifying hackers as the actors in the transition from an
academically and institutionally constructed milieu of innovation to the emergence of self-organizing
networks transcending organizational control.

A hacker was originally a person who sought to understand computers as thoroughly as possible. Soon
hacking came to be associated with phreaking, breaking into phone networks to make free phone calls, which
is clearly illegal.

The Hacker Ethic. The idea of a hacker ethic originates in the activities of the original hackers at MIT and
Stanford in the 1950s and 1960s. Steven Levy outlined the so-called hacker ethic as follows:

1. Access to computers should be unlimited and total.
2. All information should be free.
3. Authority should be mistrusted and decentralization promoted.
4. Hackers should be judged solely by their skills at hacking, rather than by race, class, age, gender, or
position.
5. Computers can be used to create art and beauty.
6. Computers can change your life for the better.

The hacker ethic has three main functions:

1. It promotes the belief of individual activity over any form of corporate authority or system of ideals.
2. It supports a completely free-market approach to the exchange of and access to information.
3. It promotes the belief that computers can have a beneficial and life-changing effect.

Such ideas are in conflict with a wide range of computer professionals' various codes of ethics.

Ethics Codes of Conduct and Resources

Several organizations and groups have defined the computer ethics their members should observe and
practice. In fact, most professional organizations have adopted a code of ethics, a large percentage of which
address how to handle information. To provide the ethics of all professional organizations related to
computer use would fill a large book. The following are provided to give you an opportunity to compare
similarities between the codes and, most interestingly, to note the differences and sometimes contradictions
in the codes followed by the various diverse groups.

The Code of Fair Information Practices. In 1973 the Secretary's Advisory Committee on Automated
Personal Data Systems for the U.S. Department of Health, Education and Welfare recommended the
adoption of the following Code of Fair Information Practices to secure the privacy and rights of citizens:

1. There must be no personal data record-keeping systems whose very existence is secret;
2. There must be a way for an individual to find out what information is in his or her file and how the
information is being used;
3. There must be a way for an individual to correct information in his records;

4. Any organization creating, maintaining, using, or disseminating records of personally identifiable
information must assure the reliability of the data for its intended use and must take precautions to
prevent misuse; and
5. There must be a way for an individual to prevent personal information obtained for one purpose
from being used for another purpose without his consent.

Internet Activities Board (IAB) (now the Internet Architecture Board) and RFC 1087. RFC 1087 is a
statement of policy by the Internet Activities Board (IAB) posted in 1989 concerning the ethical and proper
use of the resources of the Internet. The IAB "strongly endorses the view of the Division Advisory Panel of
the National Science Foundation Division of Network, Communications Research and Infrastructure," which
characterized as unethical and unacceptable any activity that purposely:

 Seeks to gain unauthorized access to the resources of the Internet,
 Disrupts the intended use of the Internet,
 Wastes resources (people, capacity, computer) through such actions,
 Destroys the integrity of computer-based information, or
 Compromises the privacy of users.

Computer Ethics Institute (CEI). In 1991 the Computer Ethics Institute held its first National Computer
Ethics Conference in Washington, D.C. The Ten Commandments of Computer Ethics were first presented in
Dr. Ramon C. Barquin's paper prepared for the conference, "In Pursuit of a 'Ten Commandments' for
Computer Ethics." The Computer Ethics Institute published them as follows in 1992:

1. Thou Shalt Not Use a Computer to Harm Other People.
2. Thou Shalt Not Interfere with Other People's Computer Work.
3. Thou Shalt Not Snoop around in Other People's Computer Files.
4. Thou Shalt Not Use a Computer to Steal.
5. Thou Shalt Not Use a Computer to Bear False Witness.
6. Thou Shalt Not Copy or Use Proprietary Software for Which You Have Not Paid.
7. Thou Shalt Not Use Other People's Computer Resources without Authorization or Proper
Compensation.
8. Thou Shalt Not Appropriate Other People's Intellectual Output.
9. Thou Shalt Think about the Social Consequences of the Program You Are Writing or the System
You Are Designing.
10. Thou Shalt Always Use a Computer in Ways That Insure Consideration and Respect for Your
Fellow Humans.

National Conference on Computing and Values. The National Conference on Computing and Values
(NCCV) was held on the campus of Southern Connecticut State University in August 1991. It proposed the
following four primary values for computing, originally intended to serve as the ethical foundation and
guidance for computer security:

1. Preserve the public trust and confidence in computers.
2. Enforce fair information practices.
3. Protect the legitimate interests of the constituents of the system.
4. Resist fraud, waste, and abuse.

The Working Group on Computer Ethics. In 1991, the Working Group on Computer Ethics created the
following End User's Basic Tenets of Responsible Computing:

1. I understand that just because something is legal, it isn't necessarily moral or right.
2. I understand that people are always the ones ultimately harmed when computers are used
unethically. The fact that computers, software, or a communications medium exists between me
and those harmed does not in any way change moral responsibility toward my fellow humans.

3. I will respect the rights of authors, including authors and publishers of software as well as authors
and owners of information. I understand that just because copying programs and data is easy, it is
not necessarily right.
4. I will not break into or use other people's computers or read or use their information without their
consent.
5. I will not write or knowingly acquire, distribute, or allow intentional distribution of harmful
software like bombs, worms, and computer viruses.

National Computer Ethics and Responsibilities Campaign (NCERC). In 1994, a National Computer
Ethics and Responsibilities Campaign (NCERC) was launched to create an "electronic repository of
information resources, training materials and sample ethics codes" that would be available on the Internet for
IS managers and educators. The National Computer Security Association (NCSA) and the Computer Ethics
Institute cosponsored NCERC. The NCERC Guide to Computer Ethics was developed to support the
campaign.

The goal of NCERC is to foster computer ethics awareness and education. The campaign does this by
making tools and other resources available for people who want to hold events, campaigns, awareness
programs, seminars, and conferences or to write or communicate about computer ethics. NCERC is a
nonpartisan initiative intended to increase understanding of the ethical and moral issues unique to the use,
and sometimes abuse, of information technologies.

(ISC)2 Code of Ethics. The following is an excerpt from the (ISC)2 Code of Ethics preamble and canons, by
which all CISSPs and SSCPs must abide. Compliance with the preamble and canons is mandatory to
maintain certification. Conflicts between the canons should be resolved in the order in which the canons are listed; the canons are not equal, and conflicts between them are not intended to create ethical binds.

Code of Ethics Preamble.

 Safety of the commonwealth, duty to our principals, and to each other requires that we adhere, and
be seen to adhere, to the highest ethical standards of behavior.
 Therefore, strict adherence to this Code is a condition of certification.

Code of Ethics Canons.

Protect society, the commonwealth, and the infrastructure

 Promote and preserve public trust and confidence in information and systems.
 Promote the understanding and acceptance of prudent information security measures.
 Preserve and strengthen the integrity of the public infrastructure.
 Discourage unsafe practice.

Act honorably, honestly, justly, responsibly, and legally

 Tell the truth; make all stakeholders aware of your actions on a timely basis.
 Observe all contracts and agreements, express or implied.
 Treat all constituents fairly. In resolving conflicts, consider public safety and duties to principals,
individuals, and the profession in that order.
 Give prudent advice; avoid raising unnecessary alarm or giving unwarranted comfort. Take care to be
truthful, objective, cautious, and within your competence.
 When resolving differing laws in different jurisdictions, give preference to the laws of the jurisdiction
in which you render your service.

Provide diligent and competent service to principals

 Preserve the value of their systems, applications, and information.

 Respect their trust and the privileges that they grant you.
 Avoid conflicts of interest or the appearance thereof.
 Render only those services for which you are fully competent and qualified.

Advance and protect the profession

 Sponsor for professional advancement those best qualified. All other things equal, prefer those who
are certified and who adhere to these canons. Avoid professional association with those whose
practices or reputation might diminish the profession.
 Take care not to injure the reputation of other professionals through malice or indifference.
 Maintain your competence; keep your skills and knowledge current.
 Give generously of your time and knowledge in training others.

Organizational Ethics Plan of Action

Peter S. Tippett has written extensively on computer ethics. He provided the following action plan to help
corporate information security leaders to instill a culture of ethical computer use within organizations:

1. Develop a corporate guide to computer ethics for the organization.
2. Develop a computer ethics policy to supplement the computer security policy.
3. Add information about computer ethics to the employee handbook.
4. Find out whether the organization has a business ethics policy, and expand it to include computer
ethics.
5. Learn more about computer ethics and spread what is learned.
6. Help to foster awareness of computer ethics by participating in the computer ethics campaign.
7. Make sure the organization has an E-mail privacy policy.
8. Make sure employees know what the E-mail policy is.

Fritz H. Grupe, Timothy Garcia-Jay, and William Kuechler identified the following selected ethical bases for
IT decision making:

Golden Rule: Treat others as you wish to be treated. Do not implement systems that you would not wish to
be subjected to yourself. Is your company using unlicensed software even though your company itself sells
software?

Kant's Categorical Imperative: If an action is not right for everyone, it is not right for anyone. Does
management monitor call center employees' seat time, but not its own?

Descartes' Rule of Change (also called the slippery slope): If an action is not repeatable at all times, it is not
right at any time. Should your Web site link to another site, "framing" the page, so users think it was created
and belongs to you?

Utilitarian Principle (also called universalism): Take the action that achieves the most good. Put a value on
outcomes and strive to achieve the best results. This principle seeks to analyze and maximize the benefit that
IT delivers to the covered population within acknowledged resource constraints. Should customers using
your Web site be asked to opt in or opt out of the possible sale of their personal data to other companies?

Risk Aversion Principle: Incur least harm or cost. Given alternatives that have varying degrees of harm and
gain, choose the one that causes the least damage. If a manager reports that a subordinate criticized him in an
e-mail to other employees, who would do the search and see the results of the search?
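Read side by side, the utilitarian and risk-aversion bases above amount to two different selection rules over the same set of alternatives: one maximizes the probability-weighted good, the other minimizes the worst-case harm. The following sketch is purely illustrative; the alternative names and scores are hypothetical, not drawn from the authors' text.

```python
# Two selection rules over the same alternatives (hypothetical scores).
# Each alternative maps to a list of (probability, value) outcome pairs;
# positive values are benefits, negative values are harms.
alternatives = {
    "opt_in_consent":  [(0.9, 5), (0.1, -1)],
    "opt_out_default": [(0.7, 10), (0.3, -6)],
}

def expected_value(outcomes):
    """Utilitarian principle: weight each outcome by its probability."""
    return sum(p * v for p, v in outcomes)

def worst_case(outcomes):
    """Risk-aversion principle: consider only the most harmful outcome."""
    return min(v for _, v in outcomes)

# Utilitarian choice: maximize expected good.
utilitarian = max(alternatives, key=lambda a: expected_value(alternatives[a]))

# Risk-averse choice: pick the alternative whose worst outcome is mildest.
risk_averse = max(alternatives, key=lambda a: worst_case(alternatives[a]))

print(utilitarian, risk_averse)
```

With these scores the two bases disagree: the expected-value rule favors the riskier alternative with the higher average payoff, while the risk-aversion rule prefers the one whose worst outcome does the least damage. That divergence is precisely why Grupe, Garcia-Jay, and Kuechler list them as distinct bases for IT decision making.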

Avoid Harm: Avoid malfeasance or "do no harm." This basis implies a proactive obligation of companies to
protect their customers and clients from systems with known harm. Does your company have a privacy
policy that protects, rather than exploits, customers?

No Free Lunch Rule: Assume that all property and information belong to someone. This principle is
primarily applicable to intellectual property that should not be taken without just compensation. Has your
company used unlicensed software? Or hired a group of IT workers from a competitor?

Legalism: Is it against the law? Moral actions may not be legal, and vice versa. Might your Web advertising
exaggerate the features and benefits of your products? Are you collecting information illegally on minors?

Professionalism: Is an action contrary to codes of ethics? Do the professional codes cover a case and do they
suggest the path to follow? When you present technological alternatives to managers who do not know the
right questions to ask, do you tell them all they need to know to make informed choices?

Evidentiary guidance: Is there hard data to support or deny the value of taking an action? This is not a
traditional "ethics" value, but it is a significant factor in IT policy decisions about the impact of systems on
individuals and groups. It involves probabilistic reasoning, where outcomes can be predicted from hard
evidence gathered through research. Do you assume that PC users are satisfied with IT's service, or has data
been collected to determine what they really think?

Client/customer/patient choice: Let the people affected decide. In some circumstances, employees and
customers have a right to self-determination through the informed consent process. This principle
acknowledges a right to self-determination in deciding what is "harmful" or "beneficial" for their personal
circumstances. Are your workers subjected to monitoring in places where they assume that they have
privacy?

Equity: Will the costs and benefits be equitably distributed? Adherence to this principle obligates a company
to provide similarly situated persons with the same access to data and systems. This can imply a proactive
duty to inform and make services, data, and systems available to all those who share a similar circumstance.
Has IT made intentionally inaccurate projections as to project costs?

Competition: This principle derives from the marketplace where consumers and institutions can select among
competing companies, based on all considerations such as degree of privacy, cost, and quality. It recognizes
that to be financially viable in the market, one must have data about what competitors are doing and
understand and acknowledge the competitive implications of IT decisions. When you present a build or buy
proposition to management, is it fully aware of the risk involved?

Compassion/last chance: Religious and philosophical traditions promote the need to find ways to assist the
most vulnerable parties. Refusing to take unfair advantage of users or others who do not have technical
knowledge is recognized in several professional codes of ethics. Do all workers have an equal opportunity to
benefit from the organization's investment in IT?

Impartiality/objectivity: Are decisions biased in favor of one group or another? Is there an even playing
field? IT personnel should avoid potential or apparent conflicts of interest. Do you or any of your IT
employees have a vested interest in the companies that you deal with?

Openness/full disclosure: Are persons affected by this system aware of its existence, aware of what data are
being collected, and knowledgeable about how it will be used? Do they have access to the same information?
Is it possible for a Web site visitor to determine what cookies are used and what is done with any information
they might collect?

Confidentiality: IT is obligated to determine whether data it collects on individuals can be adequately
protected to avoid disclosure to parties whose need to know is not proven. Have you reduced security
features to hold expenses to a minimum?

Trustworthiness and honesty: Does IT stand behind ethical principles to the point where it is accountable for
the actions it takes? Has IT management ever posted or circulated a professional code of ethics with an
expression of support for seeing that its employees act professionally?

How a Code of Ethics Applies to CISSPs

In 1998, Michael Davis described a professional ethics code as a "contract between professionals."
According to this explanation, a profession is a group of persons who want to cooperate in serving the same
ideal better than they could if they did not cooperate. Information security professionals, for example, are
typically thought to serve the ideal of ensuring the confidentiality, integrity, and availability of information
and the security of the technology that supports the information use. A code of ethics would then specify
how professionals should pursue their common ideals so that each may do his or her best to reach the goals
at a minimum cost while appropriately addressing the issues involved. The code helps to protect
professionals from certain stresses and pressures (such as the pressure to cut corners with information
security to save money) by making it reasonably likely that most other members of the profession will not
take advantage of their good conduct under such pressures. An ethics code also protects members of a
profession from certain consequences of competition, and encourages cooperation and support among the
professionals.

Considering this, an occupation does not need society's recognition to be a profession. It needs only
cooperative action among its members in serving a certain ideal. Once an occupation
becomes recognized as a profession, society historically has found reason to give the occupation special
privileges (for example, the sole right to do certain kinds of work) to support serving the ideal in question (in
this case, information security) in the way the profession serves society.

Understanding a code of ethics as a contract between professionals, it can be explained why each information
security professional should not depend upon only his or her private conscience when determining how to
practice the profession, and why he or she must take into account what a community of information security
professionals has to say about what other information security professionals should do. What others expect
of information security professionals is part of what each should take into account in choosing what to do
within professional activities, especially if the expectation is reasonable.

The ethics code provides a guide to what information security professionals may reasonably expect of one
another, basically setting forth the rules of the game. Just as athletes need to know the rules of football to
know what to do to score, computer professionals also need to know computer ethics to know, for example,
whether they should choose information security and risk reduction actions based completely and solely
upon the wishes of an employer, or, instead, also consider information security leading practices and legal
requirements when making recommendations and decisions.

A code of ethics should also provide a guide to what computer professionals may expect other members of
our profession to help each other do. Keep in mind that people are not merely members of this or that
profession. Each individual has responsibilities beyond the profession and, as such, must face his or her own
conscience, along with the criticism, blame, and punishment of others, as a result of actions. These
consequences cannot be escaped simply by claiming that the profession dictated the decision.

Information security professionals must take their professional code of ethics and apply it appropriately to
their own unique environments. To assist with this, Donn B. Parker describes the following five ethical
principles that apply to processing information in the workplace, and also provides examples of how they
would be applied.

1. Informed consent. Try to make sure that the people affected by a decision are aware of your planned
actions and that they either agree with your decision, or disagree but understand your intentions. Example:
An employee gives a copy of a program that she wrote for her employer to a friend, and does not tell her
employer about it.

2. Higher ethic in the worst case. Think carefully about your possible alternative actions and select the
beneficial necessary ones that will cause the least, or no, harm under the worst circumstances. Example: A
manager secretly monitors an employee's email, which may violate his privacy, but the manager has reason
to believe that the employee may be involved in a serious theft of trade secrets.

3. Change of scale test. Consider that an action you may take on a small scale, or by you alone, could result
in significant harm if carried out on a larger scale or by many others. Examples: A teacher lets a friend try
out, just once, a database that he bought to see if the friend wants to buy a copy, too. The teacher does not let
an entire classroom of his students use the database for a class assignment without first getting permission
from the vendor. A computer user thinks it's okay to use a small amount of her employer's computer services
for personal business, since others' use is unaffected.

4. Owners' conservation of ownership. As a person who owns or is responsible for information, always make
sure that the information is reasonably protected and that ownership of it, and rights to it, are clear to users.
Example: A vendor who sells a commercial electronic bulletin board service with no proprietary notice at
logon, loses control of the service to a group of hackers who take it over, misuse it, and offend customers.

5. Users' conservation of ownership. As a person who uses information, always assume others own it and
their interests must be protected unless you explicitly know that you are free to use it in any way that you
wish. Example: A hacker discovers a commercial electronic bulletin board with no proprietary notice at logon,
and informs his friends, who take control of it, misuse it, and offend other customers.

Computer Ethics: Basic Concepts and Historical Overview

Computer ethics is a new branch of ethics that is growing and changing rapidly as computer technology also
grows and develops. The term "computer ethics" is open to interpretations both broad and narrow. On the
one hand, for example, computer ethics might be understood very narrowly as the efforts of professional
philosophers to apply traditional ethical theories like utilitarianism, Kantianism, or virtue ethics to issues
regarding the use of computer technology. On the other hand, it is possible to construe computer ethics in a
very broad way to include, as well, standards of professional practice, codes of conduct, aspects of computer
law, public policy, corporate ethics--even certain topics in the sociology and psychology of computing.

In the industrialized nations of the world, the "information revolution" already has significantly altered many
aspects of life -- in banking and commerce, work and employment, medical care, national defense,
transportation and entertainment. Consequently, information technology has begun to affect (in both good
and bad ways) community life, family life, human relationships, education, freedom, democracy, and so on
(to name a few examples). Computer ethics in the broadest sense can be understood as that branch of applied
ethics which studies and analyzes such social and ethical impacts of information technology.

In recent years, this robust new field has led to new university courses, conferences, workshops, professional
organizations, curriculum materials, books, articles, journals, and research centers. And in the age of the
world-wide-web, computer ethics is quickly being transformed into "global information ethics".

• 1. Some Historical Milestones
• 2. Defining the Field of Computer Ethics
• 3. Example Topics in Computer Ethics
  o 3.1 Computers in the Workplace
  o 3.2 Computer Crime
  o 3.3 Privacy and Anonymity
  o 3.4 Intellectual Property
  o 3.5 Professional Responsibility
  o 3.6 Globalization
  o 3.7 The Metaethics of Computer Ethics
• Bibliography
• Other Internet Resources
• Related Entries

1. Some Historical Milestones

1940s and 1950s

Computer ethics as a field of study has its roots in the work of MIT professor Norbert Wiener during World
War II (early 1940s), in which he helped to develop an antiaircraft cannon capable of shooting down fast
warplanes. The engineering challenge of this project caused Wiener and some colleagues to create a new
field of research that Wiener called "cybernetics" -- the science of information feedback systems. The
concepts of cybernetics, when combined with digital computers under development at that time, led Wiener
to draw some remarkably insightful ethical conclusions about the technology that we now call ICT
(information and communication technology). He perceptively foresaw revolutionary social and ethical
consequences. In 1948, for example, in his book Cybernetics: or control and communication in the animal
and the machine, he said the following:
It has long been clear to me that the modern ultra-rapid computing machine was in principle an ideal central
nervous system to an apparatus for automatic control; and that its input and output need not be in the form of
numbers or diagrams. It might very well be, respectively, the readings of artificial sense organs, such as
photoelectric cells or thermometers, and the performance of motors or solenoids.... We are already in a
position to construct artificial machines of almost any degree of elaborateness of performance. Long before
Nagasaki and the public awareness of the atomic bomb, it had occurred to me that we were here in the
presence of another social potentiality of unheard-of importance for good and for evil. (pp. 27-28)

In 1950 Wiener published his monumental book, The Human Use of Human Beings. Although Wiener did
not use the term "computer ethics" (which came into common use more than two decades later), he laid
down a comprehensive foundation which remains today a powerful basis for computer ethics research and
analysis.

Wiener's book included (1) an account of the purpose of a human life, (2) four principles of justice, (3) a
powerful method for doing applied ethics, (4) discussions of the fundamental questions of computer ethics,
and (5) examples of key computer ethics topics. [Wiener 1950/1954, see also Bynum 1999]

Wiener's foundation of computer ethics was far ahead of its time, and it was virtually ignored for decades.
On his view, the integration of computer technology into society will eventually constitute the remaking of
society -- the "second industrial revolution". It will require a multi-faceted process taking decades of effort,
and it will radically change everything. A project so vast will necessarily include a wide diversity of tasks
and challenges. Workers must adjust to radical changes in the work place; governments must establish new
laws and regulations; industry and businesses must create new policies and practices; professional
organizations must develop new codes of conduct for their members; sociologists and psychologists must
study and understand new social and psychological phenomena; and philosophers must rethink and redefine
old social and ethical concepts.

1960s

In the mid 1960s, Donn Parker of SRI International in Menlo Park, California began to examine unethical
and illegal uses of computers by computer professionals. "It seemed," Parker said, "that when people entered
the computer center they left their ethics at the door." [See Fodor and Bynum, 1992] He collected examples
of computer crime and other unethical computerized activities. He published "Rules of Ethics in Information
Processing" in Communications of the ACM in 1968, and headed the development of the first Code of
Professional Conduct for the Association for Computing Machinery (eventually adopted by the ACM in
1973). Over the next two decades, Parker went on to produce books, articles, speeches and workshops that
re-launched the field of computer ethics, giving it momentum and importance that continue to grow today.
Although Parker's work was not informed by a general theoretical framework, it is the next important
milestone in the history of computer ethics after Wiener. [See Parker, 1968; Parker, 1979; and Parker et al.,
1990.]

1970s

During the late 1960s, Joseph Weizenbaum, a computer scientist at MIT, created a computer
program that he called ELIZA. In his first experiment with ELIZA, he scripted it to provide a crude imitation
of "a Rogerian psychotherapist engaged in an initial interview with a patient". Weizenbaum was shocked at
the reactions people had to his simple computer program: some practicing psychiatrists saw it as evidence
that computers would soon be performing automated psychotherapy. Even computer scholars at MIT became
emotionally involved with the computer, sharing their intimate thoughts with it. Weizenbaum was extremely
concerned that an "information processing model" of human beings was reinforcing an already growing
tendency among scientists, and even the general public, to see humans as mere machines. Weizenbaum's
book, Computer Power and Human Reason [Weizenbaum, 1976], forcefully expresses many of these ideas.
Weizenbaum's book, plus the courses he offered at MIT and the many speeches he gave around the country
in the 1970s, inspired many thinkers and projects in computer ethics.

In the mid 1970s, Walter Maner (then of Old Dominion University in Virginia; now at Bowling Green State
University in Ohio) began to use the term "computer ethics" to refer to that field of inquiry dealing with
ethical problems aggravated, transformed or created by computer technology. Maner offered an experimental
course on the subject at Old Dominion University. During the late 1970s (and indeed into the mid 1980s),
Maner generated much interest in university-level computer ethics courses. He offered a variety of
workshops and lectures at computer science conferences and philosophy conferences across America. In
1978 he also self-published and disseminated his Starter Kit in Computer Ethics, which contained curriculum
materials and pedagogical advice for university teachers to develop computer ethics courses. The Starter Kit
included suggested course descriptions for university catalogs, a rationale for offering such a course in the
university curriculum, a list of course objectives, some teaching tips and discussions of topics like privacy
and confidentiality, computer crime, computer decisions, technological dependence and professional codes
of ethics. Maner's trailblazing course, plus his Starter Kit and the many conference workshops he conducted,
had a significant impact upon the teaching of computer ethics across America. Many university courses were
put in place because of him, and several important scholars were attracted into the field.

1980s

By the 1980s, a number of social and ethical consequences of information technology were becoming public
issues in America and Europe: issues like computer-enabled crime, disasters caused by computer failures,
invasions of privacy via computer databases, and major law suits regarding software ownership. Because of
the work of Parker, Weizenbaum, Maner and others, the foundation had been laid for computer ethics as an
academic discipline. (Unhappily, Wiener's ground-breaking achievements were essentially ignored.) The
time was right, therefore, for an explosion of activities in computer ethics.

In the mid-80s, James Moor of Dartmouth College published his influential article "What Is Computer
Ethics?" (see discussion below) in Computers and Ethics, a special issue of the journal Metaphilosophy
[Moor, 1985]. In addition, Deborah Johnson of Rensselaer Polytechnic Institute published Computer
Ethics [Johnson, 1985], the first textbook -- and for more than a decade, the defining textbook -- in the field.
There were also relevant books published in psychology and sociology: for example, Sherry Turkle of MIT
wrote The Second Self [Turkle, 1984], a book on the impact of computing on the human psyche; and Judith
Perrolle produced Computers and Social Change: Information, Property and Power [Perrolle, 1987], a
sociological approach to computing and human values.

In the early 80s, the present author (Terrell Ward Bynum) assisted Maner in publishing his Starter Kit in
Computer Ethics [Maner, 1980] at a time when most philosophers and computer scientists considered the
field to be unimportant [See Maner, 1996]. Bynum furthered Maner's mission of developing courses and
organizing workshops, and in 1985, edited a special issue of Metaphilosophy devoted to computer ethics
[Bynum, 1985]. In 1991 Bynum and Maner convened the first international multidisciplinary conference on
computer ethics, which was seen by many as a major milestone of the field. It brought together, for the first
time, philosophers, computer professionals, sociologists, psychologists, lawyers, business leaders, news
reporters and government officials. It generated a set of monographs, video programs and curriculum
materials [see van Speybroeck, July 1994].

1990s

During the 1990s, new university courses, research centers, conferences, journals, articles and textbooks
appeared, and a wide diversity of additional scholars and topics became involved. For example, thinkers like
Donald Gotterbarn, Keith Miller, Simon Rogerson, and Dianne Martin -- as well as organizations like
Computer Professionals for Social Responsibility, the Electronic Frontier Foundation, ACM-SIGCAS --
spearheaded projects relevant to computing and professional responsibility. Developments in Europe and
Australia were especially noteworthy, including new research centers in England, Poland, Holland, and Italy;
the ETHICOMP series of conferences led by Simon Rogerson and the present author; the CEPE conferences
founded by Jeroen van den Hoven; and the Australian Institute of Computer Ethics headed by Chris Simpson
and John Weckert.

These important developments were significantly aided by the pioneering work of Simon Rogerson of De
Montfort University (UK), who established the Centre for Computing and Social Responsibility there. In
Rogerson's view, there was need in the mid-1990s for a "second generation" of computer ethics
developments:

The mid-1990s has heralded the beginning of a second generation of Computer Ethics. The time has come to
build upon and elaborate the conceptual foundation whilst, in parallel, developing the frameworks within
which practical action can occur, thus reducing the probability of unforeseen effects of information
technology application [Rogerson, Spring 1996, 2; Rogerson and Bynum, 1997].

2. Defining the Field of Computer Ethics

From the 1940s through the 1960s, therefore, there was no discipline known as "computer ethics"
(notwithstanding the work of Wiener and Parker). However, beginning with Walter Maner in the 1970s,
active thinkers in computer ethics began trying to delineate and define computer ethics as a field of study.
Let us briefly consider five such attempts:

When he decided to use the term "computer ethics" in the mid-70s, Walter Maner defined the field as one
which examines "ethical problems aggravated, transformed or created by computer technology". Some old
ethical problems, he said, are made worse by computers, while others are wholly new because of information
technology. By analogy with the more developed field of medical ethics, Maner focused attention upon
applications of traditional ethical theories used by philosophers doing "applied ethics" -- especially analyses
using the utilitarian ethics of the English philosophers Jeremy Bentham and John Stuart Mill, or the
rationalist ethics of the German philosopher Immanuel Kant.

In her book, Computer Ethics, Deborah Johnson [1985] defined the field as one which studies the way in
which computers "pose new versions of standard moral problems and moral dilemmas, exacerbating the old
problems, and forcing us to apply ordinary moral norms in uncharted realms," [Johnson, page 1]. Like Maner
before her, Johnson recommended the "applied ethics" approach of using procedures and concepts from
utilitarianism and Kantianism. But, unlike Maner, she did not believe that computers create wholly new
moral problems. Rather, she thought that computers gave a "new twist" to old ethical issues which were
already well known.

James Moor's definition of computer ethics in his article "What Is Computer Ethics?" [Moor, 1985] was
much broader and more wide-ranging than that of Maner or Johnson. It is independent of any specific
philosopher's theory; and it is compatible with a wide variety of methodological approaches to ethical
problem-solving. Over the past decade, Moor's definition has been the most influential one. He defined
computer ethics as a field concerned with "policy vacuums" and "conceptual muddles" regarding the social
and ethical use of information technology:

A typical problem in computer ethics arises because there is a policy vacuum about how computer
technology should be used. Computers provide us with new capabilities and these in turn give us new
choices for action. Often, either no policies for conduct in these situations exist or existing policies seem
inadequate. A central task of computer ethics is to determine what we should do in such cases, that is,
formulate policies to guide our actions.... One difficulty is that along with a policy vacuum there is often a
conceptual vacuum. Although a problem in computer ethics may seem clear initially, a little reflection
reveals a conceptual muddle. What is needed in such cases is an analysis that provides a coherent conceptual
framework within which to formulate a policy for action [Moor, 1985, 266].

Moor said that computer technology is genuinely revolutionary because it is "logically malleable":

Computers are logically malleable in that they can be shaped and molded to do any activity that can be
characterized in terms of inputs, outputs and connecting logical operations.... Because logic applies
everywhere, the potential applications of computer technology appear limitless. The computer is the nearest
thing we have to a universal tool. Indeed, the limits of computers are largely the limits of our own creativity
[Moor, 1985, 269]

According to Moor, the computer revolution is occurring in two stages. The first stage was that of
"technological introduction" in which computer technology was developed and refined. This already
occurred in America during the first forty years after the Second World War. The second stage -- one that the
industrialized world has only recently entered -- is that of "technological permeation" in which technology
gets integrated into everyday human activities and into social institutions, changing the very meaning of
fundamental concepts, such as "money", "education", "work", and "fair elections".

Moor's way of defining the field of computer ethics is very powerful and suggestive. It is broad enough to be
compatible with a wide range of philosophical theories and methodologies, and it is rooted in a perceptive
understanding of how technological revolutions proceed. Currently it is the best available definition of the
field.

Nevertheless, there is yet another way of understanding computer ethics that is also very helpful--and
compatible with a wide variety of theories and approaches. This "other way" was the approach taken by
Wiener in 1950 in his book The Human Use of Human Beings, and Moor also discussed it briefly in "What Is
Computer Ethics?" [1985]. According to this alternative account, computer ethics identifies and analyzes the
impacts of information technology upon human values like health, wealth, opportunity, freedom, democracy,
knowledge, privacy, security, self-fulfillment, and so on. This very broad view of computer ethics embraces
applied ethics, sociology of computing, technology assessment, computer law, and related fields; and it
employs concepts, theories and methodologies from these and other relevant disciplines [Bynum, 1993]. The
fruitfulness of this way of understanding computer ethics is reflected in the fact that it has served as the
organizing theme of major conferences like the National Conference on Computing and Values (1991), and
it is the basis of recent developments such as Brey's "disclosive computer ethics" methodology [Brey 2000]
and the emerging research field of "value-sensitive computer design". (See, for example, [Friedman, 1997],
[Friedman and Nissenbaum, 1996], [Introna and Nissenbaum, 2000].)

In the 1990s, Donald Gotterbarn became a strong advocate for a different approach to defining the field of
computer ethics. In Gotterbarn's view, computer ethics should be viewed as a branch of professional ethics,
which is concerned primarily with standards of practice and codes of conduct of computing professionals:

There is little attention paid to the domain of professional ethics -- the values that guide the day-to-day
activities of computing professionals in their role as professionals. By computing professional I mean anyone
involved in the design and development of computer artifacts... The ethical decisions made during the
development of these artifacts have a direct relationship to many of the issues discussed under the broader
concept of computer ethics [Gotterbarn, 1991].

With this professional-ethics definition of computer ethics in mind, Gotterbarn has been involved in a
number of related activities, such as co-authoring the third version of the ACM Code of Ethics and
Professional Conduct and working to establish licensing standards for software engineers [Gotterbarn, 1992;
Anderson, et al., 1993; Gotterbarn, et al., 1997].

3. Example Topics in Computer Ethics

No matter which re-definition of computer ethics one chooses, the best way to understand the nature of the
field is through some representative examples of the issues and problems that have attracted research and
scholarship. Consider, for example, the following topics:

 3.1 Computers in the Workplace
 3.2 Computer Crime
 3.3 Privacy and Anonymity
 3.4 Intellectual Property
 3.5 Professional Responsibility
 3.6 Globalization
 3.7 The Metaethics of Computer Ethics

3.1 Computers in the Workplace

As a "universal tool" that can, in principle, perform almost any task, computers obviously pose a threat to
jobs. Although they occasionally need repair, computers don't require sleep, they don't get tired, they don't go
home ill or take time off for rest and relaxation. At the same time, computers are often far more efficient than
humans in performing many tasks. Therefore, economic incentives to replace humans with computerized
devices are very high. Indeed, in the industrialized world many workers already have been replaced by
computerized devices -- bank tellers, auto workers, telephone operators, typists, graphic artists, security
guards, assembly-line workers, and on and on. In addition, even professionals like medical doctors, lawyers,
teachers, accountants and psychologists are finding that computers can perform many of their traditional
professional duties quite effectively.

The employment outlook, however, is not all bad. Consider, for example, the fact that the computer industry
already has generated a wide variety of new jobs: hardware engineers, software engineers, systems analysts,
webmasters, information technology teachers, computer sales clerks, and so on. Thus it appears that, in the
short run, computer-generated unemployment will be an important social problem; but in the long run,
information technology will create many more jobs than it eliminates.

Even when a job is not eliminated by computers, it can be radically altered. For example, airline pilots still
sit at the controls of commercial airplanes; but during much of a flight the pilot simply watches as a
computer flies the plane. Similarly, those who prepare food in restaurants or make products in factories may
still have jobs; but often they simply push buttons and watch as computerized devices actually perform the
needed tasks. In this way, it is possible for computers to cause "de-skilling" of workers, turning them into
passive observers and button pushers. Again, however, the picture is not all bad because computers also have
generated new jobs which require new sophisticated skills to perform -- for example, "computer assisted
drafting" and "keyhole" surgery.

Another workplace issue concerns health and safety. As Forester and Morrison point out [Forester and
Morrison, 140-72, Chapter 8], when information technology is introduced into a workplace, it is important to
consider likely impacts upon health and job satisfaction of workers who will use it. It is possible, for
example, that such workers will feel stressed trying to keep up with high-speed computerized devices -- or
they may be injured by repeating the same physical movement over and over -- or their health may be
threatened by radiation emanating from computer monitors. These are just a few of the social and ethical
issues that arise when information technology is introduced into the workplace.

3.2 Computer Crime

In this era of computer "viruses" and international spying by "hackers" who are thousands of miles away, it
is clear that computer security is a topic of concern in the field of Computer Ethics. The problem is not so
much the physical security of the hardware (protecting it from theft, fire, flood, etc.), but rather "logical
security", which Spafford, Heaphy and Ferbrache [Spafford, et al, 1989] divide into five aspects:

1. Privacy and confidentiality
2. Integrity -- assuring that data and programs are not modified without proper authority
3. Unimpaired service
4. Consistency -- ensuring that the data and behavior we see today will be the same tomorrow
5. Controlling access to resources

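
Of the five aspects, integrity is the one most readily automated in practice. As a minimal illustration (a
sketch in Python, not drawn from the text; the sample data are invented), a program's bytes can be hashed
while the system is known to be good, and any later modification made without proper authority will change
the hash:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def integrity_check(data: bytes, recorded_digest: str) -> bool:
    """Aspect 2 (integrity): detect modification since the digest was recorded."""
    return sha256_of(data) == recorded_digest

# Record a digest while the program is known to be unmodified...
original = b"PRINT 'HELLO'"
digest = sha256_of(original)

# ...and later verify that nothing has been altered.
assert integrity_check(original, digest)             # unmodified: check passes
assert not integrity_check(b"PRINT 'H4CK'", digest)  # tampered: detected
```

The same comparison, run on yesterday's and today's stored copies, also bears on aspect 4 (consistency).
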
Malicious kinds of software, or "programmed threats", provide a significant challenge to computer security.
These include "viruses", which cannot run on their own, but rather are inserted into other computer
programs; "worms" which can move from machine to machine across networks, and may have parts of
themselves running on different machines; "Trojan horses" which appear to be one sort of program, but
actually are doing damage behind the scenes; "logic bombs" which check for particular conditions and then
execute when those conditions arise; and "bacteria" or "rabbits" which multiply rapidly and fill up the
computer's memory.

Computer crimes, such as embezzlement or planting of logic bombs, are normally committed by trusted
personnel who have permission to use the computer system. Computer security, therefore, must also be
concerned with the actions of trusted computer users.

Another major risk to computer security is the so-called "hacker" who breaks into someone's computer
system without permission. Some hackers intentionally steal data or commit vandalism, while others merely
"explore" the system to see how it works and what files it contains. These "explorers" often claim to be
benevolent defenders of freedom and fighters against rip-offs by major corporations or spying by
government agents. These self-appointed vigilantes of cyberspace say they do no harm, and claim to be
helpful to society by exposing security risks. However, every act of hacking is harmful, because any known
successful penetration of a computer system requires the owner to thoroughly check for damaged or lost data
and programs. Even if the hacker did indeed make no changes, the computer's owner must run through a
costly and time-consuming investigation of the compromised system [Spafford, 1992].

3.3 Privacy and Anonymity

One of the earliest computer ethics topics to arouse public interest was privacy. For example, in the mid-
1960s the American government already had created large databases of information about private citizens
(census data, tax records, military service records, welfare records, and so on). In the US Congress, bills
were introduced to assign a personal identification number to every citizen and then gather all the
government's data about each citizen under the corresponding ID number. A public outcry about "big-brother
government" caused Congress to scrap this plan and led the US President to appoint committees to
recommend privacy legislation. In the early 1970s, major computer privacy laws were passed in the USA.
Ever since then, computer-threatened privacy has remained as a topic of public concern. The ease and
efficiency with which computers and computer networks can be used to gather, store, search, compare,
retrieve and share personal information make computer technology especially threatening to anyone who
wishes to keep various kinds of "sensitive" information (e.g., medical records) out of the public domain or
out of the hands of those who are perceived as potential threats. During the past decade, commercialization
and rapid growth of the internet; the rise of the world-wide-web; increasing "user-friendliness" and
processing power of computers; and decreasing costs of computer technology have led to new privacy issues,
such as data-mining, data matching, recording of "click trails" on the web, and so on [see Tavani, 1999].
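
The "data matching" mentioned above is easy to see in miniature: two records that are innocuous in isolation
can, once joined on a shared identifier, yield a profile that neither database held on its own. A minimal
sketch (Python; the databases, identifiers and field names are all invented for illustration):

```python
# Two hypothetical databases, each fairly harmless on its own.
medical = {"id-107": {"diagnosis": "hypertension"}}
purchases = {"id-107": {"recent_purchase": "life insurance"}}

def match_records(*databases):
    """Join records across databases on a shared ID -- the essence of data matching."""
    profiles = {}
    for db in databases:
        for record_id, fields in db.items():
            profiles.setdefault(record_id, {}).update(fields)
    return profiles

# The combined profile reveals more than either source did alone.
profiles = match_records(medical, purchases)
print(profiles["id-107"])  # diagnosis and purchase now sit in one record
```
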

The variety of privacy-related issues generated by computer technology has led philosophers and other
thinkers to re-examine the concept of privacy itself. Since the mid-1960s, for example, a number of scholars
have elaborated a theory of privacy defined as "control over personal information" (see, for example,
[Westin, 1967], [Miller, 1971], [Fried, 1984] and [Elgesem, 1996]). On the other hand, philosophers Moor
and Tavani have argued that control of personal information is insufficient to establish or protect privacy,
and "the concept of privacy itself is best defined in terms of restricted access, not control" [Tavani and Moor,
2001] (see also [Moor, 1997]). In addition, Nissenbaum has argued that there is even a sense of privacy in
public spaces, or circumstances "other than the intimate." An adequate definition of privacy, therefore, must
take account of "privacy in public" [Nissenbaum, 1998]. As computer technology rapidly advances --
creating ever new possibilities for compiling, storing, accessing and analyzing information -- philosophical
debates about the meaning of "privacy" will likely continue (see also [Introna, 1997]).

Questions of anonymity on the internet are sometimes discussed in the same context with questions of
privacy and the internet, because anonymity can provide many of the same benefits as privacy. For example,
if someone is using the internet to obtain medical or psychological counseling, or to discuss sensitive topics
(for example, AIDS, abortion, gay rights, venereal disease, political dissent), anonymity can afford
protection similar to that of privacy. Similarly, both anonymity and privacy on the internet can be helpful in
preserving human values such as security, mental health, self-fulfillment and peace of mind. Unfortunately,
privacy and anonymity also can be exploited to facilitate unwanted and undesirable computer-aided activities
in cyberspace, such as money laundering, drug trading, terrorism, or preying upon the vulnerable (see [Marx,
2001] and [Nissenbaum, 1999]).

3.4 Intellectual Property

One of the more controversial areas of computer ethics concerns the intellectual property rights connected
with software ownership. Some people, like Richard Stallman who started the Free Software Foundation,
believe that software ownership should not be allowed at all. He claims that all information should be free,
and all programs should be available for copying, studying and modifying by anyone who wishes to do so
[Stallman, 1993]. Others argue that software companies or programmers would not invest weeks and months
of work and significant funds in the development of software if they could not get the investment back in the
form of license fees or sales [Johnson, 1992]. Today's software industry is a multibillion dollar part of the
economy; and software companies claim to lose billions of dollars per year through illegal copying
("software piracy"). Many people think that software should be ownable, but "casual copying" of personally
owned programs for one's friends should also be permitted (see [Nissenbaum, 1995]). The software industry
claims that millions of dollars in sales are lost because of such copying. Ownership is a complex matter,
since there are several different aspects of software that can be owned and three different types of ownership:
copyrights, trade secrets, and patents. One can own the following aspects of a program:

1. The "source code" which is written by the programmer(s) in a high-level computer language like
Java or C++.
2. The "object code", which is a machine-language translation of the source code.
3. The "algorithm", which is the sequence of machine commands that the source code and object code
represent.
4. The "look and feel" of a program, which is the way the program appears on the screen and interfaces
with users.
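
The difference between items 1 and 2 can be observed directly in any language implementation. A small sketch
(Python; the one-line program is invented for illustration) compiles a piece of source code and exposes the
machine-oriented "object code" that the interpreter actually executes:

```python
# Item 1: the "source code" -- human-readable text written by a programmer.
source = "total = price * quantity"

# Item 2: the "object code" -- the machine-oriented translation of that source.
code_obj = compile(source, "<example>", "exec")
bytecode = code_obj.co_code  # the raw instruction bytes

print(type(source).__name__)    # the source is ordinary text: str
print(type(bytecode).__name__)  # the translation is machine-level: bytes
```

Both forms express the same underlying algorithm (item 3), which is why the list treats them as distinct
ownable aspects of one program.
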
A very controversial issue today is owning a patent on a computer algorithm. A patent provides an exclusive
monopoly on the use of the patented item, so the owner of an algorithm can deny others use of the
mathematical formulas that are part of the algorithm. Mathematicians and scientists are outraged, claiming
that algorithm patents effectively remove parts of mathematics from the public domain, and thereby threaten
to cripple science. In addition, running a preliminary "patent search" to make sure that your "new" program
does not violate anyone's software patent is a costly and time-consuming process. As a result, only very large
companies with big budgets can afford to run such a search. This effectively eliminates many small software
companies, stifling competition and decreasing the variety of programs available to the society [The League
for Programming Freedom, 1992].

3.5 Professional Responsibility

Computer professionals have specialized knowledge and often have positions with authority and respect in
the community. For this reason, they are able to have a significant impact upon the world, including many of
the things that people value. Along with such power to change the world comes the duty to exercise that
power responsibly [Gotterbarn, 2001]. Computer professionals find themselves in a variety of professional
relationships with other people [Johnson, 1994], including:

employer -- employee

client -- professional

professional -- professional

society -- professional

These relationships involve a diversity of interests, and sometimes these interests can come into conflict with
each other. Responsible computer professionals, therefore, will be aware of possible conflicts of interest and
try to avoid them.

Professional organizations in the USA, like the Association for Computing Machinery (ACM) and the
Institute of Electrical and Electronics Engineers (IEEE), have established codes of ethics, curriculum
guidelines and accreditation requirements to help computer professionals understand and manage ethical
responsibilities. For example, in 1991 a Joint Curriculum Task Force of the ACM and IEEE adopted a set of
guidelines ("Curriculum 1991") for college programs in computer science. The guidelines say that a
significant component of computer ethics (in the broad sense) should be included in undergraduate education
in computer science [Turner, 1991].

In addition, both the ACM and IEEE have adopted Codes of Ethics for their members. The most recent ACM
Code (1992), for example, includes "general moral imperatives", such as "avoid harm to others" and "be
honest and trustworthy". And also included are "more specific professional responsibilities" like "acquire and
maintain professional competence" and "know and respect existing laws pertaining to professional work."
The IEEE Code of Ethics (1990) includes such principles as "avoid real or perceived conflicts of interest
whenever possible" and "be honest and realistic in stating claims or estimates based on available data."

The Accreditation Board for Engineering Technologies (ABET) has long required an ethics component in
the computer engineering curriculum. And in 1991, the Computer Sciences Accreditation
Commission/Computer Sciences Accreditation Board (CSAC/CSAB) also adopted the requirement that a
significant component of computer ethics be included in any computer sciences degree granting program that
is nationally accredited [Conry, 1992].

It is clear that professional organizations in computer science recognize and insist upon standards of
professional responsibility for their members.

3.6 Globalization

Computer ethics today is rapidly evolving into a broader and even more important field, which might
reasonably be called "global information ethics". Global networks like the Internet and especially the world-
wide-web are connecting people all over the earth. As Krystyna Gorniak-Kocikowska perceptively notes in
her paper, "The Computer Revolution and the Problem of Global Ethics" [Gorniak-Kocikowska, 1996], for
the first time in history, efforts to develop mutually agreed standards of conduct, and efforts to advance and
defend human values, are being made in a truly global context. So, for the first time in the history of the
earth, ethics and values will be debated and transformed in a context that is not limited to a particular
geographic region, or constrained by a specific religion or culture. This may very well be one of the most
important social developments in history. Consider just a few of the global issues:

Global Laws

If computer users in the United States, for example, wish to protect their freedom of speech on the internet,
whose laws apply? Nearly two hundred countries are already interconnected by the internet, so the United
States Constitution (with its First Amendment protection for freedom of speech) is just a "local law" on the
internet -- it does not apply to the rest of the world. How can issues like freedom of speech, control of
"pornography", protection of intellectual property, invasions of privacy, and many others be governed by
law when so many countries are involved? If a citizen in a European country, for example, has internet
dealings with someone in a far-away land, and the government of that land considers those dealings to be
illegal, can the European be tried by the courts in the far-away country?

Global Cyberbusiness

The world is very close to having technology that can provide electronic privacy and security on the internet
sufficient to safely conduct international business transactions. Once this technology is in place, there will be
a rapid expansion of global "cyberbusiness". Nations with a technological infrastructure already in place will
enjoy rapid economic growth, while the rest of the world lags behind. What will be the political and
economic fallout from rapid growth of global cyberbusiness? Will accepted business practices in one part of
the world be perceived as "cheating" or "fraud" in other parts of the world? Will a few wealthy nations widen
the already big gap between rich and poor? Will political and even military confrontations emerge?

Global Education

If inexpensive access to the global information net is provided to rich and poor alike -- to poverty-stricken
people in ghettos, to poor nations in the "third world", etc. -- for the first time in history, nearly everyone on
earth will have access to daily news from a free press; to texts, documents and art works from great libraries
and museums of the world; to political, religious and social practices of peoples everywhere. What will be
the impact of this sudden and profound "global education" upon political dictatorships, isolated communities,
coherent cultures, religious practices, etc.? As great universities of the world begin to offer degrees and
knowledge modules via the internet, will "lesser" universities be damaged or even forced out of business?

Information Rich and Information Poor

The gap between rich and poor nations, and even between rich and poor citizens in industrialized countries,
is already disturbingly wide. As educational opportunities, business and employment opportunities, medical
services and many other necessities of life move more and more into cyberspace, will gaps between the rich
and the poor become even worse?

3.7 The Metaethics of Computer Ethics

Given the explosive growth of Computer ethics during the past two decades, the field appears to have a very
robust and significant future. Two important thinkers, however, Krystyna Gorniak-Kocikowska and Deborah
Johnson, have recently argued that computer ethics will disappear as a separate branch of ethics. In 1996
Gorniak-Kocikowska predicted that computer ethics, which is currently considered a branch of applied
ethics, will eventually evolve into something much more.[1] According to her hypothesis, "local" ethical
theories like Europe's Benthamite and Kantian systems and the ethical systems of other cultures in Asia,
Africa, the Pacific Islands, etc., will eventually be superseded by a global ethics evolving from today's
computer ethics. "Computer" ethics, then, will become the "ordinary" ethics of the information age.

In her 1999 ETHICOMP paper [Johnson, 1999], Johnson expressed a view which, upon first sight, may seem
to be the same as Gorniak's.[2] A closer look at the Johnson hypothesis reveals that it is a different kind of
claim than Gorniak's, though not inconsistent with it. Johnson's hypothesis addresses the question of whether
or not the name "computer ethics" (or perhaps "information ethics") will continue to be used by ethicists and
others to refer to ethical questions and problems generated by information technology. On Johnson's view, as
information technology becomes very commonplace -- as it gets integrated and absorbed into our everyday
surroundings and is perceived simply as an aspect of ordinary life -- we may no longer notice its presence. At
that point, we would no longer need a term like "computer ethics" to single out a subset of ethical issues
arising from the use of information technology. Computer technology would be absorbed into the fabric of
life, and computer ethics would thus be effectively absorbed into ordinary ethics.

Taken together, the Gorniak and Johnson hypotheses look to a future in which what we call "computer
ethics" today is globally important and a vital aspect of everyday life, but the name "computer ethics" may
no longer be used.

Eckart Scheerer

TOWARD A HISTORY OF COGNITIVE SCIENCE

It is not easy to write a history of cognitive science. The term itself is of recent origin, and there are
even opinions that there is no such thing as a unified cognitive science. But even those who do believe in a
unified cognitive science disagree about its definition and scope. As a result, its history is a history of
diverse disciplines viewed from the standpoint of their contribution to the new discipline. Cognitive science
arose in the United States and is only now in the process of internationalization. It follows, as a rule,
that the history of the cognitive movement is described from an American point of view. This account is no
exception, although several attempts have been made to paint a more balanced picture.

The term "cognitive science" was introduced by Longuet-Higgins in 1973 and gained wider currency only toward
the end of the seventies. In 1975 the Alfred Sloan Foundation -- a private research-funding agency in New
York -- supported an interdisciplinary program in the cognitive sciences, which was a major achievement at
the time and played an important role in the institutionalization of the new discipline. In 1977 a journal
appeared under the title Cognitive Science. A decade earlier, the label "cognitive" had first been used to
designate what was then a new approach in psychology, and the other sciences then followed that practice. It
thus becomes clear that the title "cognitive science" was applied to a group of sciences that were already
"cognitive".

The introduction of the term "cognitive science" is not so significant an event that a history of the subject
should begin with it. That does not mean, however, that it is devoid of historical interest: the process was
important in institutionalizing the new discipline, launching new journals, founding professional
associations, and establishing degree programs at many universities. The act also serves as a reference point
for organizing the history of the cognitive movement up to its crystallization as a new discipline. For that
purpose it is useful to consult the report of the State of the Art Committee on Cognitive Science submitted
to the Sloan Foundation in 1978.

The authors of that report define cognitive science as "the study of the principles by which intelligent
entities interact with their environment". The definition is then extended in two ways. The first is
extensional: a list of the subfields of cognitive science and of its interdisciplinary connections. The
subdomains are computer science, psychology, philosophy, linguistics, anthropology and neuroscience. To this
day not all of the interdisciplinary connections have attained the status of specialties, and their number is
too large to enumerate exhaustively. Suffice it to say that they include neurophysiology, neuropsychology,
psycholinguistics, the philosophy of psychology and cybernetics. The second extension of the definition of
cognitive science is intensional: "to discover the representational and processing capacities of the mind and
their structural and functional representation in the brain". To fulfill this general goal, cognitive science
embraces several more specific tasks. It considers abstract descriptions of mental capacities in terms of
their structure, functioning and content; it investigates the various ways in which cognitive functioning can
be realized by physical systems; it seeks to characterize the mental processes of living organisms; and it
studies the neural mechanisms of cognition.

The statement of the goals of cognitive science sounds impartial, but if we look at its terminology we notice
that it is colored by a certain theoretical commitment. Take, for example, the terms "representation" and
"processing". Mental representations are internal states of systems, defined by their semantic reference to
external objects or events. If we want to explain the behavior of a system we must appeal to mental
representations; neither the internal structure of the system nor the structure of the environment serves
that purpose. This is the "representational metapostulate" accepted by most contemporary adherents of the
cognitive sciences. The concept of processing is more specific and expresses two ideas. First, mental
representations are described in a purely formal and at the same time physicalistic way, i.e. as symbolic
expressions or orderings of certain elementary, discrete states that correspond to the physical states of the
system. Second, according to Pylyshyn, all semantic distinctions relevant to the system are specified through
the formal, syntactic structure of the underlying symbolic expressions and their transformations. In short,
cognition is the manipulation of semantically interpreted physical symbols. This is the essence of what
became known as the "computational theory of mind", a concept that has taken the place of older ones such as
the "physical symbol system theory" or the "information-processing approach".

According to the computational theory of mind there is a meaningful level of abstraction at which the same
scientific generalizations apply to physical systems of very different material composition -- for example,
digital computers and the living brain. This is the level of the program, also known as "software". Every
program has two aspects. First, it has an algorithmic structure: a purely formal ordering of symbolic
transformations aimed at solving some given task. Second, a program presupposes a certain functional
architecture of the system -- for example, its elementary operations, its mode of control, the arrangement of
its components and the ways of gaining access to them. The functional architecture is embodied in some
programming language and can take many different forms within the same physical system; conversely, the same
functional architecture can be realized in different physical systems. Consequently, the actual physical
properties of the system -- its "hardware" -- can be disregarded, and the "software" of the symbolic system
can be defined as the true subject matter of the cognitive sciences.

Although it is not accepted by all researchers in cognitive science, the computational theory of mind may be
regarded as the "orthodox" view, and with good reason: it brings previously divergent disciplines together in
a unifying way and reduces their diversity to a principled division of labor. For example, artificial
intelligence and cognitive psychology share a common object of investigation, yet differ in their
methodologies and in the particular physical systems with which they are concerned.

Among the disciplines of cognitive science, psychology has a special though twofold status. On the one hand,
it has been transformed by the "cognitive revolution" far more deeply, through borrowing from kindred
disciplines. On the other hand, cognitive psychology is the only component that cannot be separated from its
parent discipline without destroying the latter's identity: it seems inconceivable that there could be a
psychology without cognitive psychology. What would such a psychology be? As a result, cognitive science will
either group itself around psychology as its core, or it will forever remain a "meaningless umbrella" over
essentially independent specialties characterized by a certain degree of theoretical interpretation. But even
in that case psychology would remain indispensable to the understanding of cognition. It is therefore
justified to focus a history of cognitive science on psychology.

Would there be a psychology without cognitive psychology? In fact there was one: behaviorism, which dominated
American psychology between 1930 and 1950. This is precisely why there was a "cognitive revolution" in the
United States and not elsewhere: cognitive psychology arose as a protest against behaviorism. The theoretical
goal of behaviorism was "the prediction and control of behavior". Beyond that, behaviorism was never
cohesive, and in the forties it split into various "schools" or "systems". There was a common striving to
"fill in" the stimulus-response formulas with various hypothetical processes in the organism, a striving
known as "neobehaviorism". The internal processes thus introduced, however, were not of a cognitive kind. In
Hull's "behavior system", for example, there was no place for perception. True, there were mechanisms that
paralleled cognitive functions, such as anticipation, but they were constructed merely as internal replicas
of stimulus-response connections. There was also Tolman's "purposive" or "cognitive" behaviorism, a kind of
amalgam of behaviorism (as a methodological orientation) and Gestalt psychology (as far as the substantive
assumptions are concerned). Within neobehaviorism, however, Tolman failed to attract a larger following, and
by becoming entangled in numerous controversies with other behaviorists he contributed to the rejection of
that paradigm.

By about 1950 neobehaviorism was no longer a productive paradigm. Endless controversies over poorly defined
issues dominated theorizing. Internal processes multiplied more and more, although the prediction and control
of behavior probably still remained the basic goal. The time was ripe for a radical change. One such change
occurred within the movement itself, when Skinner, in his analysis of behavior, argued for a return to the
study of overt behavior alone; to this day he remains the chief opponent of the cognitive movement. On the
other hand, internal processes could now be studied in their own right, and not merely in the interest of
predicting and controlling behavior. As a result, the traditional cognitive categories of psychology could
again be used as thematic organizers of research, although there was still a certain tendency to link them to
the stimulus-response concepts that had been characteristic of the behaviorist era.

The "New Look" movement emphasized the motivational determination of perception and concerned itself with
"the way the perceptual process interacts with other forms of psychological functioning". Miller's celebrated
paper "The Magical Number Seven" emphasized the importance of forming higher-order units and of active
elaboration in memory. In A Study of Thinking (1956), Bruner, Goodnow and Austin described concept formation
as a process based on the learner's various active strategies. These three breakthroughs occurred in the
United States, in the setting of basic research. Broadbent, by contrast, the author of pioneering work on
selective attention and short-term memory (1958), worked in Great Britain in the context of human-factors
research.

It is often said that the mid-fifties were the critical years for the formation of the cognitive-psychology
movement. But this holds only when other lines of development outside psychology are taken into account.
Around 1956 there were several promising new approaches within psychology, but something resembling a new
paradigm appeared roughly ten years later. It was then called "cognitive psychology", after Neisser, or the
"information-processing approach", after Haber. The task of the new approach was to trace the vicissitudes of
sensory information as it is subjected to various processing operations. The new approach was able to devise
experimental procedures, such as the decomposition of reaction times (Sternberg, 1966) or backward masking
(Sperling, 1963), that could be used to separate stages of processing such as stimulus encoding or
recognition. Thinking in terms of a linear ordering of processing stages was, and to some extent still is,
dominant in the information-processing approach.

In that sense the approach takes on some characteristics of neobehaviorism. On the other hand, it uses a different theoretical vocabulary, partly borrowed from the new disciplines that emerged soon after the Second World War.
The first of these was Shannon's mathematical theory of communication of 1948. It equipped psychology with a new statistic (the information measure, the "bit"), which was greeted with great expectations, since it seemed likely to solve the ancient problems of how to measure structure and order. It turned out, however, that this information statistic has only limited applicability in psychology, mainly because no unit of measurement could be specified independently of the state of the human information-processing system; "bits" of information were displaced by "chunks" of varying size. A more lasting effect of information theory was that it encouraged psychologists to conceptualize the cognitive system in terms of "information channels" of limited capacity, a tendency that became especially strong in the psychology of attention. The resulting "capacity models" of attention were subjected to critical re-examination by Neumann in 1987. Finally, the enduring non-technical use of "information" as a central concept in cognitive psychology goes back to psychology's assimilation of information theory in the 1950s.
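Shannon's information measure can be made concrete with a short calculation (an illustrative sketch added here; the function name is ours): the information carried by a source with outcome probabilities p_i is H = -sum(p_i * log2(p_i)) bits.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2(p)), measured in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A choice among 8 equally likely alternatives carries exactly 3 bits.
print(entropy_bits([1 / 8] * 8))        # 3.0

# A biased source carries less information per symbol than a fair one.
print(entropy_bits([0.9, 0.1]) < entropy_bits([0.5, 0.5]))  # True
```

The "chunk" problem described above is visible even in this sketch: the formula needs a fixed alphabet of outcomes, whereas the effective unit for a human observer depends on how the material is encoded, which the formula cannot capture.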
The influence of cybernetics on cognitive psychology is harder to disentangle. The new field itself, as presented by Wiener, was an amalgam of heterogeneous ideas, and its influence was strong but diffuse. One influential textbook that presents psychology from the information-processing point of view uses "the terminology of cybernetics as construed by Wiener, yet neither Wiener nor cybernetics is mentioned anywhere". Nevertheless, a more specific influence can be recognized: the assimilation of concepts from control theory, for example hierarchical control and feedback loops. For American psychology the work was done most effectively by Miller, Galanter, and Pribram, who proposed replacing the traditional thinking in stimulus-response chains with a new functional unit, the "test-operate-test-exit" (TOTE) unit. That unit was not merely a protest against behaviorism; it can also be read as a critique (prophetic at the time) of conceptualizing information processing as a single sequence of linear stages. From another theoretical angle, the critique was summed up by Neisser, but to this day the issue has not been fully laid to rest.
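The TOTE unit can be sketched as a simple feedback loop (an illustrative sketch; the example and the names are ours, not Miller, Galanter, and Pribram's): instead of a stimulus triggering a fixed response, a test is repeatedly applied to the state of the world, and the operate phase runs until the incongruity disappears.

```python
def tote(test, operate, state, max_steps=100):
    """Test-Operate-Test-Exit: operate on the state until the test passes."""
    for _ in range(max_steps):
        if test(state):          # Test: has the goal been reached?
            return state         # Exit
        state = operate(state)   # Operate: act to reduce the incongruity
    raise RuntimeError("goal not reached within max_steps")

# The classic illustration: hammer a nail until it is flush with the board.
print(tote(test=lambda height: height <= 0,
           operate=lambda height: height - 1,
           state=3))            # 0
```

The loop, not the stimulus-response link, is the unit of analysis: the same operate step can run zero, one, or many times depending on feedback.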
The third influence came from the newborn field of artificial intelligence.
Artificial intelligence is a part of, but not identical with, computer science. The term appeared in 1956 at the "two-month study of artificial intelligence" conducted at Dartmouth College in New Hampshire, in the United States, which was to proceed "on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it".
The first programmable computers were built during the Second World War. A prime mover was Konrad Zuse, a German engineer, who in 1941 built his first "program-controlled computer". In 1945 he developed a high-level programming language, the "plan calculus" (Plankalkül), and much earlier he already had a clear conception of what later became known as "artificial intelligence", which he himself called "applied mathematical logic". His work, however, was interrupted by the conditions of the war, so as far as the international development of artificial intelligence is concerned, the decisive impulses came from Great Britain and the United States.
Computer science and artificial intelligence owe two important concepts to the English mathematician Alan Turing. The first is the proof that a single hypothetical machine can compute all functions that are effectively computable. The Turing machine (as it was later named) consists of a control unit capable of assuming a finite number of states, a tape (theoretically infinitely long) divided into squares, each carrying a symbol from some finite set of symbols, and a read-write unit. Second, regarding the question "can machines think", Turing held that the answer would be "yes" if an intermediary communicating by teleprinter with a machine and with a human being cannot determine which is the person and which the machine. The "Turing test" was for a long time a guiding principle in the computer simulation of mental processes, but today it is no longer considered adequate.
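The tape-and-control-unit description above can be turned into a few lines of code (a minimal sketch, assuming a one-tape machine with a transition table; the encoding is ours, not Turing's original notation):

```python
def run_turing(transitions, tape, state="start", blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine.

    transitions: (state, symbol) -> (new_state, written_symbol, move),
    where move is -1 (left), +1 (right), or 0; the state "halt" stops it.
    """
    tape = dict(enumerate(tape))     # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = transitions[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A machine that flips every bit and halts at the first blank square.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing(flip, "1011"))  # 0100
```

The finite transition table is the "control unit capable of a finite number of states"; the dictionary stands in for the unbounded tape.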
A significant advance in the logical design of computers was made by John von Neumann around 1945. The "von Neumann machine" works on two principles. First, the stored-program concept: instructions and data are stored together in a single medium. Second, the sequential execution of instructions, realized through a program counter capable of automatic incrementing. With the construction of computers on the von Neumann model, the stage was set for the emergence of artificial intelligence.
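Both principles can be seen in a toy machine (an illustrative sketch; the instruction names are invented for this example): program and data share a single memory, and a program counter that increments automatically steps through the instructions.

```python
def run_stored_program(memory, max_steps=100):
    """A toy von Neumann machine: one memory for instructions and data,
    a program counter, and a single accumulator."""
    pc, acc = 0, 0
    for _ in range(max_steps):
        op, arg = memory[pc]
        pc += 1                      # automatic increment of the counter
        if op == "LOAD":
            acc = memory[arg]        # data fetched from the same memory
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory
    raise RuntimeError("no HALT reached")

# Cells 0-3 hold the program, cells 4-6 hold the data, side by side.
mem = {0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", 0),
       4: 2, 5: 3, 6: 0}
print(run_stored_program(mem)[6])  # 5
```

Because instructions and data live in one medium, a program could in principle read or rewrite itself, the property that later made symbol-manipulating programs conceivable.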
However, obstacles still remained: a computer program had to be written in "machine code", that is, in the form of numbers directly tied to the machine's hardware. Deeper insight was needed to develop higher-level programming languages, as well as to take up the idea of artificial intelligence: further development showed that computers are not restricted to manipulating numbers, that they can perform non-numerical tasks, and that they can be made to understand at least a limited subset of some natural language. Chronologically the two developments coincided: at IBM, around 1955, one research team was designing FORTRAN, while researchers working on chess-playing programs competed for computing time on the same machine.
Artificial intelligence was on firm ground once it was realized that "the computer can be viewed as a means of processing information, not just numbers". But while it is easy to summarize the beginnings of artificial intelligence, its later development is extraordinarily complex and resists any attempt at concise summary. Newell, for example, identified no fewer than thirty contested issues in the intellectual history of artificial intelligence, many of which are still unresolved.
There are two ways of viewing the history of artificial intelligence: one emphasizes continuity, the other discontinuity. The continuity story, told for example by De Mey on the basis of an earlier classification by Michie, divides the development of artificial intelligence into four stages: monadic, structural, contextual, and cognitive. The meaning of these stages is best illustrated by the work on the machine translation of languages and on natural-language understanding. The monadic stage corresponds to word-for-word translation, in which each word is treated as a self-contained entity. At the structural stage syntactic analysis was introduced, but sentences were still treated as independent units. The contextual stage was reached when the linguistic, semantic, or pragmatic context of a sentence was used to disambiguate the words occurring within it. At this level the context is still in large part specific to the situation and must be taken in by the system together with the signal. At the cognitive level "context becomes something supplied by the perceiver"; it is essentially "world knowledge" that enables the system to detect contextual elements congruent with the system's knowledge. Typical forms of world knowledge are frames and scripts. Both concepts refer to knowledge of stereotyped situations, but a "frame" refers more to the impersonal aspects of a situation (for example, the spatial layout of a doctor's office), while a "script" refers to its social aspects and temporal organization (for example, a visit to the doctor).
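The difference between the two knowledge structures can be illustrated with toy data structures (purely illustrative; the slot and scene names are ours, not from any published frame or script system):

```python
# A frame captures the impersonal layout of a stereotyped situation.
doctors_office_frame = {
    "type": "frame",
    "slots": {"waiting_room": "chairs, magazines",
              "examination_room": "table, instruments",
              "reception_desk": "appointment book"},
}

# A script captures the social roles and temporal order of the situation.
doctor_visit_script = {
    "type": "script",
    "roles": ["patient", "receptionist", "doctor"],
    "scenes": ["check in", "wait", "examination", "pay", "leave"],
}

def expected_next(script, current_scene):
    """World knowledge lets a system predict what should happen next."""
    scenes = script["scenes"]
    i = scenes.index(current_scene)
    return scenes[i + 1] if i + 1 < len(scenes) else None

print(expected_next(doctor_visit_script, "wait"))  # examination
```

This is exactly the "context supplied by the perceiver": the prediction comes from stored knowledge of the stereotyped situation, not from the input signal.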
It is not clear whether the cognitive stage is in fact the crowning achievement of a continuous development of artificial intelligence; it can equally be seen as the expression of a paradigm change that took place in the period between 1965 and 1980. The paradigm change has many aspects, and only the more important ones can be mentioned. One change occurred along the dimension of generality. Earlier programs in artificial intelligence aimed to be general models of high-level human cognitive abilities (for example, the General Problem Solver of Newell, Shaw, and Simon of 1958), with chess playing and theorem proving as the prototypes. Today's programs are highly specific in content: they are "expert systems". A second change lies along the dimension of search versus knowledge. Because of limitations in processing speed and available memory, the early programs were directed at finding effective procedures, often heuristic in nature. Today data structures and their use in the representation of knowledge have become more important. Finally, artificial intelligence has always oscillated between pure research and an applied perspective, but the balance has now shifted toward engineering; "toys" have given way to real tasks.
Although "knowledge engineering" may be the future name of "artificial intelligence", aspirations to generality have not been abandoned. Uncovering the "architecture of cognition", in Anderson's phrase, remains as legitimate a research goal as "human problem solving" was a decade or so earlier.
Anderson, the author of The Architecture of Cognition, is an experimental psychologist who combines in his research the roles of psychologist and artificial-intelligence worker. That fact brings us to the relationship between psychology and artificial intelligence. From the very beginning the two have been very close; one of the first papers by the pioneers of artificial intelligence appeared in the leading psychology journal. The exchange of ideas went both ways, though it is fair to say that psychology was the more strongly influenced by artificial intelligence. Be that as it may, the flow of ideas was not one-directional: we can distinguish several turning points in the evolution of the computer metaphor for mental processes.
At first the emphasis lay on hardware and its similarities to the brain. Indeed, the development of digital computers was itself inspired by analogies drawn with the nervous system, and John von Neumann spoke of the components of computers as their "organs". The popular idea of an "electronic brain" obviously derives from this way of thinking. With the advance of higher programming languages and artificial intelligence, hardware comparisons became less popular. Neisser, for example, made it clear that in appealing to computer simulations of mental processes he was interested not in the computer as a physical system but in computer programs. Closer examination shows, however, that there was still a good deal of theorizing about hardware constraints. More specifically, the typical architecture of a von Neumann machine (control, storage, input and output), its serial mode of operation, and its capacity limitations were reproduced in the flow-chart models of human information processing. Developments in the field of computer hardware were instrumental in detaching models of human information processing from limited capacity and from concepts of serial processing, moving them toward parallel processing and unconstrained capacity. In the end it became clear that the comparisons to be drawn were not with the physical structure of the computer but with its functional architecture as determined by software constraints such as the operating system and the programming languages. More recently the idea of a modular organization of the mind has attracted wide attention. Modules are self-contained, automatically functioning components of the system, which may or may not have an actual physical correlate in the brain. In sum, the hardware analogy has never been entirely abandoned.
Nevertheless, most researchers in the cognitive sciences prefer to examine the software analogies between computers and the human brain. One of the levels at which this is done may be called "molecular"; it involves the elementary operations available in a given higher programming language and the way they are ordered into programs. For example, list-processing languages (such as LISP) are often considered better suited for psychologically relevant programs in artificial intelligence, because they permit definitions consisting of a condition part and an action part, as well as hierarchically organized production systems. The idea here is that "the actual organization of human programs closely resembles that of a production system".
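The condition-action organization referred to here can be sketched as a minimal production system (an illustrative sketch written in Python rather than LISP; the rule contents are invented): rules fire against a working memory of facts until quiescence.

```python
def run_productions(rules, memory):
    """rules: list of (condition, action) pairs; condition tests the working
    memory (a set of facts), action returns the facts the rule would add."""
    while True:
        for condition, action in rules:          # match rules in order
            new_facts = action(memory) - memory
            if condition(memory) and new_facts:  # fire only if something is new
                memory |= new_facts
                break                            # conflict resolution: restart
        else:
            return memory                        # no rule fired: quiescence

rules = [
    (lambda m: "goal: boil water" in m, lambda m: {"kettle filled"}),
    (lambda m: "kettle filled" in m,    lambda m: {"kettle on"}),
]
print(sorted(run_productions(rules, {"goal: boil water"})))
```

Each rule is exactly a condition part and an action part; hierarchies arise when an action asserts a subgoal that other rules then match.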
At the molar level the programming language is no longer relevant, and the computer-mind comparison centers on certain abstract similarities between the two. In the first phase of artificial intelligence, interest centered mainly on procedures or methods, such as means-ends analysis in problem solving. Today the emphasis has shifted to structures of representation and knowledge, often with a holistic flavor, such as the concepts of "frame" and "script" already mentioned. Representation and processing cannot be fully separated from each other, and with the advance of holistic representational schemes the idea of top-down processing, as opposed to bottom-up, data-driven processing, has gained wide acceptance. One of the great ironies in the history of science is that certain key concepts of the anti-mechanistic theories in psychology (Gestalt psychology, for example) became popular through artificial intelligence, which necessarily rests on a mechanistic foundation.
Artificial intelligence and psychology became allies without worrying about philosophical questions, but the questions surrounding the "mechanization of thought processes" clearly have a philosophical dimension. This is an important, though certainly not the only, problem of present-day philosophy of mind, which is among the core disciplines of the cognitive sciences. In the United States the history of philosophy took a course similar to that of psychology. Behaviorism was active not only in psychology but in philosophy as well. The reign of behaviorism was broken by philosophers such as Putnam and Fodor, who proposed a functionalist approach, according to which mental states are functional, that is, defined by their functional role in the cognitive system. Although mental states occur in certain physical systems, they cannot be reduced to physiological states, for each mental state, if defined functionally, corresponds to a great variety of physiological states. On the other hand, any causal account of the behavior of a cognitive system must include reference to mental states, since behavior is in principle independent of the stimuli; that is, it is not possible to give, in purely physical terms, the relevant parameters that determine behavior in a given situation.
Functionalism is the philosophical platform of the computational theory of mind, and in general its adherents incline toward the idea of "strong" artificial intelligence, that is, toward the view that a suitably programmed computer actually has mental states. This idea was challenged by Searle, on the grounds that mental states can be produced only by a living system and are functions of the biochemical properties of the brain. Somewhat earlier Dreyfus had argued, from the position of philosophical phenomenology, that the essential features of human experience cannot be captured by formal models of the artificial-intelligence type, because they are rooted in the body and in social traditions that cannot be verbalized. Probably because his particular philosophical foundations were unpalatable, his challenge was not taken very seriously by the artificial-intelligence community. Nevertheless, the problems posed by artificial intelligence continue to attract the attention of philosophers.
The rise of the cognitive-science movement is unthinkable without the contribution of generative linguistics. Three phases can be distinguished. First, Chomsky's review of Skinner's Verbal Behavior was a landmark in the rejection of behaviorism in linguistics and in psychology. Second, in his work on the theory of syntax Chomsky adopted a computational perspective, emphasizing the formal operations defined over representations, and his theory supplied formal procedures to the "contextual phase" of artificial intelligence, making practical parsing schemes applicable. But in the seventies Chomskyan linguistics and artificial intelligence began to drift apart, for several reasons. One of them is the competence-performance distinction: researchers in artificial intelligence are interested in performance, linguists in competence. Further, there was a linguistic inclination to separate syntax from semantics and to give syntax priority over semantics, whereas in the work on simulating natural-language processing the syntax/semantics dichotomy proved impossible to maintain. Finally, there was Chomsky's move toward a biological, hardware view of the "language faculty", which was acceptable only to those who believed in the "modularity of mind". Computational linguistics still exists, but it is no longer a simple product or application of the generative approach in linguistics.
The computational theory of mind takes a less paradoxical view of the relevance of the neurosciences to the cognitive sciences, a view that combines physicalism and mentalism. The theory is physicalist in that it assumes that the manipulation of symbols is achieved by purely physical means. It is mentalist in that it accepts that symbol manipulation cannot be explained in purely physical terms alone, but only with reference to mental states that are semantically defined. The neurosciences and the cognitive sciences operate at different levels of explanation, the physical versus the symbolic-semantic, so knowledge about the nervous system (or about semiconductor chips) is not needed in cognitive-science explanations, unless the goal is "translation", that is, the transformation of physical quantities into symbols.
Not all cognitive scientists share this perspective on the neurosciences. In fact, there is a growing tendency to model mental processes in a theoretical vocabulary that captures the essential properties of the neuronal substrate of the mind. The movement is not confined to neurophysiologists but has adherents among artificial-intelligence workers, psychologists, and philosophers. Thus we have cognitive neurobiology, cognitive neuropsychology, and even neurophilosophy.
From the standpoint of cognitive neuroscience, history looks different than it does from the computational standpoint. There are common roots, to be sure, such as the paper by McCulloch and Pitts in which symbolic logic was applied to the analysis of neural networks, an important element in the development of digital computers. But the paths diverged in the fifties. While the standard approach of artificial intelligence was being developed, Frank Rosenblatt was working on what he called "perceptrons", self-organizing neuron-like networks presumed capable of pattern recognition and abstraction. Work on perceptrons came to an end when Minsky and Papert showed that they could not solve some of the tasks their inventor had set for them. Perceptrons consisted of neuron-like elements working in parallel; starting from a given initial state in which the elements are randomly arranged and interconnected, they are able to organize themselves into stable networks. More recently, advances in computer design (parallel computation, decentralized computer networks) have stimulated new interest in the general ideas underlying perceptrons and have led to a new paradigm called the "new connectionism" or the "parallel distributed processing" approach. Today parallel distributed processing is the chief alternative to the standard, symbolically oriented paradigm of artificial intelligence. It is engaged in the study of the "microstructure of cognition", the internal structure of more macroscopic cognitive achievements such as word recognition.
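Rosenblatt's central idea survives in today's connectionism and can be stated in a few lines (a minimal sketch of the classical perceptron learning rule, not of Rosenblatt's original hardware): weights are nudged toward each misclassified example until the two classes are separated.

```python
def train_perceptron(samples, lr=0.1, epochs=100):
    """samples: list of (inputs, target) pairs with target in {0, 1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        errors = 0
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            delta = target - out                 # zero when classified correctly
            if delta:
                w = [wi + lr * delta * xi for wi, xi in zip(w, x)]
                b += lr * delta
                errors += 1
        if errors == 0:                          # converged: classes separated
            break
    return w, b

# Logical AND is linearly separable, so the rule converges.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
       for x, _ in and_data])  # [0, 0, 0, 1]
```

Minsky and Papert's point can be seen by replacing AND with XOR, which no single such unit can separate; that is precisely the limitation that multi-layer parallel distributed processing models later overcame.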
A biological perspective on cognition does not necessarily entail a focus on the brain. It can also mean that organism and environment are treated as a single ecological system, in which the organism is engaged in the direct pickup of behaviorally relevant information from the environment. This view goes back to the work of Gibson. Ecological realism, as it is called today, holds that the "nested properties of organism and environment", taken as objective states of affairs, are the sole objects of perception; it rejects explanations of cognition in terms of neural or mental states, as well as the computer metaphor of mind. Ecological realism accepts physicalism, but not in the sense of the computational theory of mind; instead, it works on the development of an "ecological physics" capable of describing the physical environment in terms of invariants that specify the potential actions available to the organism. A physical approach is applied to the organism as well, but the relevant theories are statistical mechanics and irreversible thermodynamics, theories that are also attractive for parallel distributed processing.
Does cognitive science have a unified theoretical platform? No, despite the pretensions of the computational approach. It may not even be appropriate to press toward theoretical unification. Probably the domain of cognition is so heterogeneous that it demands a certain theoretical pluralism. This author thinks that the rival paradigms could readily divide the field among themselves. Sensorimotor processes could be handed over to ecological realism; symbolic processes in the narrower sense (external and internal speech, and with them much of the thought process) could remain the proper domain of computational theory. The microstructure of cognition could be assigned to parallel distributed processing, which does not exclude symbol manipulation as a legitimate level of analysis in the cognitive sciences. Working out the details of such a compromise, and finding out whether it is workable, is an important task for the future.
On the other hand, the current paradigms in the cognitive sciences do not form a closed universe. Cognition and its rediscovery is an American phenomenon, as were the rise and dominance of behaviorism. European psychology developed a number of cognitive approaches of its own, such as Piaget's genetic epistemology, the cultural-historical approach of Vygotsky and his followers in the then Soviet Union, and the Gestalt psychology of Köhler, Wertheimer, and Koffka. In fact, the cognitive approach in psychology was first postulated by Otto Selz, who also anticipated many of the principles of the computational approach. Contemporary cognitive science has assimilated some ideas from the non-behaviorist tradition in psychology, but many others remain to be discovered. It should be pointed out that the various components of cognitive science ought to resist complete domination by the artificial-intelligence movement. Ultimately, artificial intelligence is reasonable only insofar as it simulates natural intelligence, and the principles of natural intelligence should be studied in their own right, even in the interest of artificial intelligence.
This article has concentrated on intellectual history, but that is only one side of the story. Compared with psychology, linguistics, and the other subdisciplines of cognitive science, artificial intelligence is "big science", involving large expenditures on equipment and personnel. Indeed, from its very beginnings research in artificial intelligence has been funded by the military-industrial complex, and certain discontinuities in its development can be explained only by the investment of research grants in particular projects and their withdrawal when those projects failed to live up to their sponsors' expectations. Research in artificial intelligence is confined to the most industrialized nations of the world. Until conditions are created that allow its geographical and political base to be broadened, artificial-intelligence research will serve only to reinforce the technological and economic dominance of a handful of nations over the rest of the world. Cognitive scientists should be aware of such implications of their work. In this sense artificial intelligence ought to come under the control of natural intelligence.
Svetislav Bulatović

THE BIRTH OF A NEW TRANSDISCIPLINARY SCIENCE:

THE COGNITIVE RENAISSANCE OF THE 21ST CENTURY

A world fragmented by specialization, turned into a labyrinth of countless separate parts, calls for a communicative renewal of its knowledge and self-understanding. Ariadne's red thread out of the labyrinth is the new transdisciplinary cognitive science, an epistemological prism for the directed gathering of a broad spectrum of knowledge and experience. This mental radiance must penetrate the walls that industrial society has raised between science and people, so as to open a space for the birth of a new mind capable of understanding the changed world.
The information-based social system, the unlimited expansion of new technologies, and humanity's changed relation to the world demand a thorough renewal of the traditional sciences. What is at stake is the creation of a new, unified cognitive science that abolishes the inherited self-sufficiency of scientific specialization. The information society cannot do without its integral information, which no longer recognizes the Cartesian division into the "natural" and the "social". Cognitive science will contain within itself the knowledge and experience of all the relevant special sciences, while reducing their very specialization to a technical moment of the expert system.

It demands a kind of thinking that does not stop at the boundaries of industrially defined knowledge and skills. High tech, high touch confronts us with a world that is, above all, a single body of information. That is why the new cognitive science, which asks of the new thinker not memory and technique but above all a renaissance-like restoration of thinking as a whole, represents not only the foundation of a new human education but also the dominant, indeed the only possible, model of communication among the scientific disciplines. The new world needs, for the umpteenth time, a new human being, that is, an omnipresent science grounded in the unbounded identity of world and thought.
How do we experience this world, and why do we experience it in just this way? What is carried by the words we utter to ourselves and the images we create in our heads? To what extent is our brain terra incognita? Is the knowledge we possess merely a technique for mastering nature, or is it something much more?

Cognitive science does not shy away from such questions, at once metaphysical and exact, but through them confronts the whole. The answers may be endlessly complex, but the very possibility of posing the whole question is an essential quality of the information society as against the industrial one. Clearly, with such a multidisciplinary approach, this new science is a child of the end of the twentieth century. The idea of an interdisciplinary approach found its first realization in the treatment of human thinking itself.
The first beginning

That beginning of the new science took place in September 1948, at the famous Hixon Symposium organized by the California Institute of Technology, where the new ideas were first clearly set out.

Since then cognitive science has traveled the road from methodological groping to a more precise definition of its concept. It seems that at least three conditions had to be met before it could be born.

First of all, the inadequacy of the behaviorist approach to the study of human thinking had to be clearly demonstrated. Second, the limitations that the individual social sciences carried within themselves had to be recognized and accepted. And finally, the advent of computers provided the strongest impulse for the formulation of the basic principles of the new science.
The turning point was in fact the lecture of the famous American psychologist Karl Lashley at the Hixon Symposium, in which he opposed the two basic dogmas of behaviorism. Lashley's conception of the nervous system consisted of always-active, hierarchically organized units that are controlled from some internal center rather than by external stimuli. Since then the model of a dynamic, permanently active system has played a dominant role in all thinking about the functioning of the human brain, as against a static system based on the concept of the reflex arc and a linked chain of neurons.
[Diagram: philosophy, psychology, linguistics, artificial intelligence, anthropology, and the neurosciences, joined by connections of two kinds: strong interdisciplinary links and weak interdisciplinary links.]

Fig. 1. Links among the cognitive sciences
The lectures of the other early theorists of cognitive science, von Neumann, Herbert Simon, Warren McCulloch, and others, pointed out the basic directions in which the new science would move. Figure 1 shows the scientific disciplines that, at the present stage of the development of cognitive science, form the core of its research. This firm union of heterogeneous disciplines defines the general definition and the specific research domain of the new science. Cognitive science, then, consists of contemporary intuitions and investigations, consistent with experience and with experimental test, into numerous epistemological questions, chief among them those concerning the nature of human knowledge, its components, sources, development, and distribution.
Aspects of the new science

The approach so defined determines the general aspects of, and the constraints on, the present stage of development of cognitive science:

1. In considering human cognitive activities, one must specify precisely which mental processes are at issue, that is, at which level of analysis one is speaking. In other words, cognitive science holds that human thinking can, for scientific purposes, be represented and described by symbols, schemata, images, and other forms of mental representation. On the other hand, at least at this stage of the development of cognitive science, the analysis must be kept entirely separate from the lowest level of representation, which includes biological, physical, chemical, and neurological analyses (at the level of the functioning of the nerve cell), and likewise from the highest level, which comprises sociological, artistic, and culturological representations. These constraints can be represented as in Figure 2.

Fig. 2. Levels of representation:
1 - lowest level of representation (physics, chemistry, neurophysiology),
2 - middle level of representation (cognitive science),
3 - highest level of representation (art)

2. At the foundation of the contemporary interpretation and understanding of how the human brain functions lies the computer. By this we mean not only the computer as the most powerful means of performing experiments and of confirming or refuting various hypotheses about the process of thinking but also, more significantly, the computer as the most accurate and most reliable model of the functioning of the human brain discovered to date.
3. The third general aspect of cognitive science is the elimination of certain factors that significantly influence the processes of thinking about and knowing the world around us. By this we mean those elements which, at the present phase of the new science's development, cannot be included in it, since they would hamper, and often prevent, further research: above all, research on the influence of affects, of historical and geopolitical factors, and of reactions to the environment in which one lives.

Theoretical foundations

Rapid development of science and computers. Thanks to the German logician Frege, who introduced a new form of logic that permits the manipulation of abstract symbols, the English mathematicians and philosophers Russell and Whitehead succeeded, at the very beginning of the twentieth century, in reducing the basic laws of arithmetic to elementary logical propositions. That theoretical contribution influenced an entire generation of mathematically oriented thinkers. Another important contribution is the work of the mathematician Turing, the father of artificial intelligence, on defining and proving that an unbounded number of symbol-manipulating programs can be formulated and executed on a machine operating on the principle of a binary code (the Turing machine). Building on Turing's work, von Neumann elaborated the idea of the stored program: the idea that a computer can be controlled by a program held in the machine's internal memory. In short, these principles, refined over the following decades, produced the great "boom" and rapid development of computer science that continues to this day.
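The symbol-manipulating machine described above can be made concrete with a minimal simulator. The sketch below is an illustration only, not part of the original article: a transition table plays the role of the "stored program" that drives a read/write head over a tape, and the example machine simply flips every bit of a binary input and halts at the first blank.

```python
# Minimal Turing machine simulator: a finite transition table
# (state, symbol) -> (new symbol, head move, new state) manipulates
# symbols on an unbounded tape, in the spirit of Turing's model.

def run_turing_machine(program, tape, state="start", blank="_"):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        new_symbol, move, state = program[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example program: write the complement of each bit, move right,
# and halt on the first blank cell.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(invert, "10110"))  # -> 01001
```

The same simulator runs any program expressed as such a table, which is precisely the stored-program idea: the behavior of the machine is data held in memory, not wiring.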
Neuron model. The basic idea of the neuron model, developed by McCulloch and Pitts, is very simple, although the mathematical analysis underlying it is by no means trivial. These two American scientists showed that the operations of nerve cells and their synaptic connections with other cells (a neural network) can be modeled by logical expressions. The model allows a neuron to be viewed as an active element that "fires" another neuron across a synapse in the same way that one proposition entails another (true or false). The analogy extends to the electric circuit, in which a signal at the input either passes through the circuit or does not. The neuron model, then, is a theoretical assumption linking the brain and the computer. On the basis of the Turing machine and the neuron model it is concluded that at least one tremendously powerful machine, the human brain, can be thought of as a computer insofar as it functions on logical principles.
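The claim that nerve-cell operations can be modeled by logical expressions can be sketched in a few lines. The following is a standard textbook illustration of a McCulloch-Pitts unit, not code from the article; the particular weights and thresholds are conventional choices.

```python
# A McCulloch-Pitts unit: the neuron "fires" (outputs 1) when the
# weighted sum of its binary inputs reaches a fixed threshold.

def mp_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, single units realize
# elementary logical expressions:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)  # both must fire
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)  # one suffices
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)  # inhibitory input

print(AND(1, 1), OR(0, 1), NOT(1))  # -> 1 1 0
```

Chaining such units, as one proposition "causes" the next, is exactly the logical-network picture the paragraph describes.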
Information theory. The work of Claude Shannon likewise forms one of the foundations of cognitive science. Working at the Massachusetts Institute of Technology, he elaborated the basic notions of information theory: information can be thought of entirely apart from its content, that is, it can be treated as a decision between two available alternatives. The basic unit of information is the bit, the quantity of information needed to choose one of two equally probable alternatives.
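The definition of the bit generalizes directly: choosing one of n equally probable alternatives requires log2(n) bits, one per binary decision. A minimal check of that definition (an illustration, not from the article):

```python
import math

# Information content of a choice among n equally probable
# alternatives, measured in bits (binary decisions).
def bits(n_alternatives):
    return math.log2(n_alternatives)

print(bits(2))  # -> 1.0  one yes/no decision
print(bits(8))  # -> 3.0  three successive halvings of 8 options
```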
Cybernetics. The broad field of the theory of control and communication, whether in machines or in living beings, has been named cybernetics, after Wiener's book. A firm bond between control and communication is also present in one utterly basic notion, the message, regardless of whether it is transmitted by an electrical, mechanical or nervous medium. In that sense the ideas and investigations of cybernetics and of information theory are closely connected.
Neuropsychological syndromes. A significant contribution to the development of cognitive science came from an at first glance distant scientific field, neuropsychology, which deals with cognitive disabilities and other mental pathologies caused by brain damage. These investigations intensified during and after the Second World War, when, sadly, large numbers of brain-damaged patients were available, and their consequences were such that the existing model of brain functioning based on the reflex-arc principle had to be replaced. Similar investigations, carried out today in special laboratories, provide many inspiring suggestions about how the brain functions in normal individuals.

The road into a new world

Cognitive science stands before the mystery of the new world that surrounds us, yet the greatest secret of that world remains we ourselves. Numerous thinkers and researchers have contributed substantially to illuminating it in recent decades, and each of them deserves separate attention: Putnam's functionalism, Dennett's intentionalism and Fodor's new epistemology in philosophy; the Atkinson-Shiffrin model of memory, Shepard's theory of mental representations and Schank's theory of natural-language understanding in psychology; McCarthy's artificial-intelligence language LISP, Winograd's phenomenal program SHRDLU and Minsky's top-down concept of frames in artificial intelligence; Chomsky's syntactic structures and generative grammar in linguistics; the canons of Claude Lévi-Strauss in anthropology; and Sperry and Gazzaniga's studies of the bisected brain and Pribram's holographic view of the nervous system in neuroscience.
We cannot discuss all of them at greater length here, but cognitive science, created and defined in this way, already serves today as the working model for grounding education and for organizing scientific communication in societies that have completed the industrial period of their development. It is the pulsating core of the development of new technologies and of a different human self-understanding. It is already a real paradigm of the new world.
Perhaps it is best to close with the prophetic words of one of the most significant cognitive scientists, Norbert Wiener: "Information is information, not matter or energy. No materialism that does not accept this will survive." The fact that the first step on the road before us is the hardest should not keep us from setting out upon it ourselves.

Taken from:
Galaksija magazine.

