
BIRTH OF ARTIFICIAL INTELLIGENCE



Department of Oral and Dentomaxillofacial Radiology, Faculty of Dentistry, Istanbul University-Cerrahpasa, Istanbul, Turkey; CAST (Cerrahpaşa Research, Simulation and Design Laboratory), Istanbul University-Cerrahpasa, Istanbul, Turkey; Health Biotechnology Joint Research and Application Center of Excellence, Esenler, Istanbul, Turkey

"Son dönemde önemli gelişmeler kaydedilen yapay zeka veya makine öğrenimi teknolojisi, modern anlamda yaklaşık 1950'lerde ortaya çıktı. Teknolojinin ilerlemesiyle birlikte günümüzde hemen her alanda aktif bir şekilde kullanımı bulunmaktadır. "


The period before 1950

The 'McCulloch-Pitts neuron' is a mathematical model proposed in 1943 to represent signal conduction in nerve cells, as part of the effort to understand the human mind and its decision-making process. This architecture forms the basis of today's popular artificial neural networks [2].
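
For readers who want to see the model concretely, here is a minimal sketch of a McCulloch-Pitts unit in Python: binary inputs are summed, and the unit 'fires' only if the sum reaches a threshold. The two-input AND configuration is a textbook illustration, not a detail from the original paper.

    def mcculloch_pitts_neuron(inputs, threshold):
        # Fire (1) if enough excitatory inputs are active, otherwise stay silent (0).
        return 1 if sum(inputs) >= threshold else 0

    # With two binary inputs and a threshold of 2, the unit computes logical AND.
    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, "->", mcculloch_pitts_neuron([x1, x2], threshold=2))
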
In his 1950 article, the British mathematician Alan Turing proposed asking the question 'Can machines think?'. 'The Imitation Game' described in the article involves a human interrogator communicating through text alone with a human and a machine placed in separate rooms, while the machine tries to convince the interrogator that it is human. The claim is that if a machine can compete with a human on equal terms (text-based communication) and deceive the human interrogator, the machine can be said to show intelligence. In the variants of this game developed over time, developers have deliberately built systems that make human-like errors (e.g. delayed typing, wrong answers), which has called into question the perception that 'machines are never wrong' [3].

The period between 1950 and 1970

The term 'artificial intelligence' was first used at a workshop organized by John McCarthy at Dartmouth in 1956. The name aroused more curiosity in society than the researchers expected, but it also raised expectations of the related technologies. It is worth asking whether interest in and development of these technologies would be the same today if a less evocative name, such as 'computable learning', had been chosen instead of 'artificial intelligence' [4, 5].

In 1958, Ord. Prof. Dr. Cahit Arf gave a talk titled 'Can machines think, and how might they think?' in the public education conference series held at the opening of Atatürk University's academic year [6]. In the same year, the earlier 'McCulloch-Pitts neuron' led to the development of the 'perceptron' algorithm, and in 1962 a multi-layer perceptron architecture was proposed. Although this model still lacked the elements of deep learning, it has been described as an 'extreme learning machine' for its time [7, 8]. In the mid-1960s, the DENDRAL project was under way at Stanford University, and the ELIZA program was being developed at the Massachusetts Institute of Technology [9, 10].
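
A minimal sketch of the perceptron learning rule may help make the idea concrete; here it is shown learning logical OR. The learning rate, epoch count, and the OR task are illustrative choices, not details from Rosenblatt's work.

    def train_perceptron(samples, epochs=10, lr=0.1):
        # samples: list of (inputs, target) pairs with binary targets.
        weights = [0.0] * len(samples[0][0])
        bias = 0.0
        for _ in range(epochs):
            for inputs, target in samples:
                # Predict with a step function over the weighted sum.
                activation = sum(w * x for w, x in zip(weights, inputs)) + bias
                prediction = 1 if activation >= 0 else 0
                # Nudge the weights and bias in proportion to the error.
                error = target - prediction
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
        return weights, bias

    # Learn logical OR from its four input/output pairs.
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    print(train_perceptron(data))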

The period between 1970 and 1990

In 1974, Stanford University launched the SUMEX-AIM project, which shared computing resources over national network infrastructures such as ARPANET [11]. The MYCIN project, for example, was an expert system developed to recommend appropriate antibiotics for a patient. This rule-based system, built on the knowledge of specialists who were in short supply at the time, was designed as a tool to help less experienced physicians choose the right antibiotic [12]. Later, new software based on MYCIN was developed: the antibiotic-specific elements were removed, producing a common platform (EMYCIN) on which developers could design expert systems for any scenario they wished [13]. In his 1986 master's thesis, Fikret Uluğ ported the EMYCIN software to a different programming language to build a similar platform and developed two expert systems, one for 'car engine fault diagnosis' and one for 'financial advice' [14].
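
To illustrate what 'rule-based' means in practice, here is a toy forward-chaining engine in the spirit of MYCIN-style expert system shells. The rules, condition names, and conclusions below are invented for this example and bear no relation to MYCIN's actual knowledge base.

    # Each rule: (set of conditions that must all hold, conclusion to add).
    RULES = [
        ({"fever", "productive_cough"}, "suspect_bacterial_infection"),
        ({"suspect_bacterial_infection", "penicillin_allergy"}, "avoid_penicillin"),
    ]

    def forward_chain(facts, rules):
        # Fire any rule whose conditions are satisfied, repeating until
        # no new conclusion can be derived.
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"fever", "productive_cough", "penicillin_allergy"}, RULES))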

The period after 1990

These systems were first conceived in theory and then developed with the technological means of the day. What they could do was limited, and their margin of error was higher than today's; special care was therefore taken to ensure that the decisions made by the machine were explainable. At times progress stalled, in periods of stagnation known as 'AI winters'. Over time, the computing power of computers increased, and from the 1990s onward machine learning techniques such as deep learning were developed. In the late 1990s, convolutional neural network models were introduced, and they later attracted wide attention for their success in image processing [15]. A process that began with recognizing objects such as cats and dogs has continued with serious advances in areas such as autonomous driving. In 2016, the AI system AlphaGo, developed by DeepMind, announced its victory to the world by beating the world champion in a five-game match of Go [16]. However, this software was developed specifically for that game, and its superiority over humans is limited to it: the same software cannot play another game such as chess or sudoku, or make clinical decisions, unless such tasks are included in its training.
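
The core operation that gives convolutional neural networks their success in image processing is sliding a small numeric filter over an image. The sketch below shows that operation alone, with a classic vertical-edge kernel and a made-up 4x4 'image'; a real CNN would learn many such kernels from data rather than using a hand-picked one.

    import numpy as np

    def convolve2d(image, kernel):
        # Each output pixel is a weighted sum of a small neighborhood of the input.
        kh, kw = kernel.shape
        out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    # A tiny synthetic image: dark on the left, bright on the right.
    image = np.array([[0, 0, 1, 1]] * 4, dtype=float)
    # A classic vertical-edge detector; the response peaks at the dark/bright border.
    edge_kernel = np.array([[-1, 0, 1]] * 3, dtype=float)
    print(convolve2d(image, edge_kernel))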

Growth of baby AI

Historically, studies aimed at unraveling the human mind led researchers to the question of how a mind could be transferred into a machine. Algorithms were then developed to compute the desired output by analyzing the problem mathematically. It should not be forgotten that these systems' perception of the universe we live in is limited to the data presented to them. Moreover, these data are often just numbers, and the system has no idea what they actually represent. This calls for careful use of such systems. With AI, big data can be evaluated tirelessly, and many improvements can be achieved in healthcare. For instance, better-quality images can perhaps be obtained by enhancing the data coming from an imaging device's sensor, but with today's technology the final decision still rests with the user. In addition, although models such as deep learning and convolutional neural networks now achieve high success in some tasks, they are described as 'black boxes' because the mechanism behind the decisions they derive from statistical calculations cannot be fully explained. Meanwhile, the development of new architectures such as the generative adversarial network (GAN) is exciting.
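
To make the 'just numbers' point concrete, the sketch below shows what a model actually receives when given a grayscale image: simply an array of intensity values. The values are invented for illustration; nothing in the array itself says 'tooth' or 'lesion', and any meaning exists only in the statistical patterns a model learns during training.

    import numpy as np

    # A made-up stand-in for a grayscale image as a model sees it: only numbers.
    image = np.array([
        [12, 15, 240, 238],
        [10, 14, 236, 241],
        [11, 13, 239, 237],
    ], dtype=np.uint8)

    # The model can compute statistics over these values, but labels such as
    # 'anatomy' or 'artifact' are not in the data; they come from training.
    print(image.shape, image.dtype, image.mean())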

Position of AI in healthcare

In summary, AI is a tool and can be useful when used correctly. It does not tire like a human and does not make decisions based on emotions, but both human and machine can 'make mistakes'. Today we use various AI systems in daily life, whether knowingly or not. The digital transformation in healthcare and the growth of digital patient data are facilitating the development of AI applications, both integrated into existing systems and as standalone software. The uses of these technologies are not limited to clinical decision support systems: similar systems can manage the stock of a healthcare facility or help organize healthcare services across a country, and artificial limbs can be controlled by processing electrical signals collected from the brain. Although handing the entire healthcare process over to AI-equipped robots remains a distant dream, knowing the advantages and disadvantages of these systems today plays a key role in increasing the benefit obtained from them [17, 18].

Misconceptions

× AI does not make mistakes.
Artificial intelligence applications are, in essence, statistical analyses of numerical data, and their results may contain errors. Although the ability to learn quickly from large volumes of data makes AI look smart, that does not mean it will not make mistakes.

× An intelligent AI system can be dangerous if it learns new things while hiding them from humans.
Today's artificial intelligence models are trained for a specific purpose, and they cannot transfer their knowledge of one subject to another (unless programmed to do so). Moreover, most software is delivered after the developers have completed training, and the model no longer learns anything new. Although some models update themselves with data from the user (where this feature is included), such updates are limited to the intended topics. Unless a model is deliberately developed to cause harm, particularly by learning new things, these systems are safe, at least in intent.

× The complex structure of artificial intelligence makes its emotions difficult to understand, and it can be dangerous if hurt.
Producing models that can understand and imitate human emotions is an interesting but difficult topic. Unless this is part of its training, a model's sole purpose is to perform the desired task, independent of emotional decisions.

× Artificial intelligence may take our jobs in the future.
The future cannot be predicted with certainty, but it is useful to consider different possible scenarios. These technologies are more likely to change professions than to end them, and may put those who are slow to adapt at a disadvantage in professional life. Perhaps they will help work be completed faster, reducing working hours, or allow companies to reduce staff numbers; it is also possible that new professions will emerge in the coming years. Although the acceleration of technological development is dizzying, trying to understand these developments, rather than closing ourselves off from them, is probably the best thing to do.

1. Orhan, K., Amasya, H. Tıptan Diş Hekimliğine Yapay Zeka. In: Orhan, K., Jagtap, R. (Eds.) Diş Hekimliğinde Yapay Zeka. Springer, Cham (2023).

https://www.researchgate.net/publication/367260858_Dis_Hekimliginde_Yapay_Zeka_Uygulamalari

2. McCulloch, W.S., Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5, 115–133 (1943).

https://link.springer.com/article/10.1007/BF02478259

3. Turing, A.M. Computing machinery and intelligence. Mind LIX(236), 433–460 (1950).

https://link.springer.com/chapter/10.1007/978-1-4020-6710-5_3

4. McCarthy, J., et al. A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. (2006).

https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/1904

5. Moor, J. The Dartmouth College artificial intelligence conference: the next fifty years. (2006).

https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/1911

6. Arf, C. Makine düşünebilir mi ve nasıl düşünebilir. Atatürk Üniversitesi-Üniversite Çalışmalarını Muhite Yayma ve Halk Eğitimi Yayınları Konferanslar Serisi (1), 91–103 (1959).

https://www.mbkaya.com/hukuk/cahit-arf-makine-dusunebilir-mi-orjinal.pdf

7. Rosenblatt, F. The perceptron: a probabilistic model for information storage and organization in the brain. (1958).

https://psycnet.apa.org/record/1959-09865-001

8. Rosenblatt, F. Principles of Neurodynamics. Spartan Books (1962).

https://apps.dtic.mil/sti/pdfs/AD0256582.pdf

9. Weizenbaum, J. ELIZA—a computer program for the study of natural language communication between man and machine. (1966).

https://dl.acm.org/doi/pdf/10.1145/365153.365168

10. Lederberg, J. Systematics of organic molecules, graph topology and Hamilton circuits: a general outline of the DENDRAL system. Interim report (1966).

https://ntrs.nasa.gov/api/citations/19660004786/downloads/19660004786.pdf

11. Freiherr, G. The Seeds of Artificial Intelligence: SUMEX-AIM. US Department of Health, Education, and Welfare, Public Health Service (1980).

https://ia803104.us.archive.org/8/items/seedsofartificia00frei/seedsofartificia00frei.pdf

12. Van Melle, W. MYCIN: a knowledge-based consultation program for infectious disease diagnosis. (1978).

https://www.sciencedirect.com/science/article/abs/pii/S0020737378800492

13. Weiss, S. The EXPERT and CASNET consultation systems. (1979).

https://ipsj.ixsq.nii.ac.jp/ej/?action=pages_view_main&active_action=repository_view_main_item_detail&item_id=59480&item_no=1&page_id=13&block_id=8

14. Ulug, F. EMYCIN-Prolog expert system shell. Naval Postgraduate School, Monterey, CA (1986).

https://apps.dtic.mil/sti/citations/ADA178496

15. LeCun, Y., et al. Gradient-based learning applied to document recognition. (1998).

https://ieeexplore.ieee.org/abstract/document/726791

16. Wang, F.-Y., et al. Where does AlphaGo go: from Church-Turing thesis to AlphaGo thesis and beyond. (2016).

https://ieeexplore.ieee.org/abstract/document/7471613

17. Yüce, F., Taşsöker, M. Diş hekimliğinde yapay zeka uygulamaları. 7tepe Klinik Dergisi 19(2), 141–149 (2023).

https://yeditepedentaljournal.org/jvi.aspx?pdir=yeditepe&plng=tur&un=YDJ-05668&look4=

18. Zaim Gökbay, İ. Tıpta yapay zeka uygulamaları: antik çağdan yapay zekaya teşhis ve tedavi yöntemlerinin gelişim sürecinde klinik karar destek sistemlerinin evrimine genel bakış. İstanbul Üniversitesi Yayınevi, İstanbul, 673–692 (2021).

https://avesis.istanbul.edu.tr/yayin/ef87cac9-d422-4636-81e0-58ea68b8ef06/tip-bilisimi