History, Applications, and Risk Analysis of Neural Networks: A Literature Review

Cristina Cristina, Ade Kurniawan


A neural network is a form of artificial intelligence that can learn, grow, and adapt in a dynamic environment. The history of neural networks begins in 1890, when the American psychologist William James published "The Principles of Psychology", the first work to set out a number of facts about the structure and function of the brain. The development of neural networks is commonly divided into four epochs: the Camelot era, the Depression, the Renaissance, and the Neoconnectionism era. Neural networks in use today are not 100 percent accurate; they remain in use because they outperform alternative computing models. Applications of neural networks include pattern recognition, signal analysis, robotics, and expert systems. Risk analysis of a neural network begins with a hazard and operability study (HAZOP). Specifying the neural network's requirements well helps in determining its contribution to system hazards and in validating the control or mitigation of those hazards. After the HAZOP stage and the requirements stage comes design: neural networks undergo a repeated design-train-test development cycle. At the design stage, the hazard analysis should consider the design aspects of the development, including the network's architecture, size, intended use, and so on. Development then continues through the implementation, testing, installation and inspection, and operation phases, and ends with maintenance.







This work is licensed under a Creative Commons Attribution 4.0 International License.


Copyright: JPIT (Jurnal Informatika: Jurnal Pengembangan IT) p-ISSN: 2477-5126 (print), e-ISSN 2548-9356 (online) 


