
Learning representations by crystallized back-propagating errors

With larger artificial neural networks (ANN) and deeper neural architectures, common methods for training ANN, such as backpropagation, are key to learning success. Their role becomes particularly important when interpreting and controlling structures that evolve through machine learning. This work aims to extend previous research on backpropagation-based methods by presenting a modified, full-gradient version of the backpropagation learning algorithm that preserves (or rather crystallizes) selected neural weights while leaving other weights adaptable (or rather fluid). In a design-science-oriented manner, a prototype of a feedforward ANN is demonstrated and refined using the new learning method. The results show that the so-called crystallizing backpropagation increases the possibilities for controlling and interpreting neural structures, while learning can be carried out as usual. Since the algorithm establishes neural hierarchies, ANN compartments begin to function in terms of cognitive levels. This study shows the importance of dealing with ANN in hierarchies through backpropagation and introduces learning methods as novel ways of interacting with ANN. Practitioners will benefit from this interactive process because they can restrict neural learning to specific architectural components of an ANN and can focus further development on specific areas of higher cognitive levels without the risk of destroying valuable ANN structures.
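The following is a minimal sketch of the weight-freezing idea described in the abstract, not the authors' published algorithm: a boolean mask marks selected weights as crystallized (frozen), and only the remaining fluid weights receive gradient updates during backpropagation. The mask-based formulation, the toy single-layer network, and all names here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer network: y = sigmoid(x @ W)
W = rng.normal(scale=0.5, size=(4, 2))       # weights
crystallized = np.zeros_like(W, dtype=bool)  # True = frozen ("crystallized") weight
crystallized[:, 0] = True                    # e.g. freeze all weights feeding output unit 0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, target, W, crystallized, lr=0.1):
    """One backpropagation step that skips crystallized weights (hypothetical sketch)."""
    y = sigmoid(x @ W)
    delta = (y - target) * y * (1.0 - y)   # error signal for squared-error loss
    grad = np.outer(x, delta)              # full gradient w.r.t. W
    grad[crystallized] = 0.0               # crystallized weights stay fixed
    return W - lr * grad                   # fluid weights adapt as usual

x = rng.normal(size=4)
target = np.array([0.0, 1.0])
W_new = train_step(x, target, W, crystallized)

# Frozen column is unchanged; the fluid column has moved.
assert np.allclose(W_new[:, 0], W[:, 0])
assert not np.allclose(W_new[:, 1], W[:, 1])
```

In this reading, restricting learning to chosen compartments of the network is just a matter of choosing the mask, which matches the abstract's claim that practitioners can confine further development to specific architectural components without destroying existing structures.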

Metadata
Author details:Marcus Grum
DOI:https://doi.org/10.1007/978-3-031-42505-9_8
ISBN:978-3-031-42504-2
ISBN:978-3-031-42505-9
Title of parent work (English):Artificial intelligence and soft computing
Publisher:Springer
Place of publishing:Cham
Editor(s):Leszek Rutkowski, Rafał Scherer, Marcin Korytkowski, Witold Pedrycz, Ryszard Tadeusiewicz, Jacek M. Zurada
Publication type:Conference Proceeding
Language:English
Date of first publication:2023/09/14
Publication year:2023
Release date:2024/03/07
Tag:NMDL; artificial neural networks; backpropagation; cognitive levels; knowledge crystallization; second-order conditioning
Number of pages:23
First page:78
Last page:100
Organizational units:Wirtschafts- und Sozialwissenschaftliche Fakultät / Wirtschaftswissenschaften / Fachgruppe Betriebswirtschaftslehre
DDC classification:3 Social sciences / 33 Economics / 330 Economics
Peer review:Not determinable