
Learning representations by crystallized back-propagating errors

With larger artificial neural networks (ANN) and deeper neural architectures, common methods for training ANN, such as backpropagation, are key to learning success. Their role becomes particularly important when interpreting and controlling structures that evolve through machine learning. This work extends previous research on backpropagation-based methods by presenting a modified, full-gradient version of the backpropagation learning algorithm that preserves (or rather crystallizes) selected neural weights while leaving other weights adaptable (or rather fluid). In a design-science-oriented manner, a prototype of a feedforward ANN is demonstrated and refined using the new learning method. The results show that the so-called crystallizing backpropagation improves the controllability and interpretability of neural structures, while learning can proceed as usual. Since the algorithm establishes neural hierarchies, ANN compartments start to function in terms of cognitive levels. This study shows the importance of dealing with ANN in hierarchies through backpropagation and introduces learning methods as novel ways of interacting with ANN. Practitioners benefit from this interactive process because they can restrict neural learning to specific architectural components of the ANN and can focus further development on specific areas of higher cognitive levels without the risk of destroying valuable ANN structures.
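The abstract describes the core mechanism only at a high level: a full-gradient backpropagation pass in which selected weights are frozen ("crystallized") while the rest remain trainable ("fluid"). As an illustration only, the following is a minimal sketch of that idea in NumPy. It is not the paper's reference implementation; the names and hyperparameters (crystal_mask, lr, layer sizes) are assumptions made for this example.

```python
import numpy as np

# Minimal sketch (not the paper's reference implementation): full-gradient
# (batch) backpropagation in a small feedforward network, with a boolean
# "crystal mask" that freezes selected weights. Masked ("crystallized")
# weights keep their values; the remaining ("fluid") weights adapt as usual.
# crystal_mask, lr, and the layer sizes are illustrative assumptions.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy regression data.
X = rng.normal(size=(64, 3))
y = rng.normal(size=(64, 1))

W1 = rng.normal(scale=0.5, size=(3, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

# True = crystallized (frozen); here we freeze the input weights of the
# first four hidden units, i.e. a compartment whose structure should be kept.
crystal_mask = np.zeros_like(W1, dtype=bool)
crystal_mask[:, :4] = True
W1_initial = W1.copy()

lr = 0.1
for step in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1)
    out = h @ W2

    # Backward pass for a squared-error loss (full batch gradient).
    d_out = (out - y) / len(X)
    grad_W2 = h.T @ d_out
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    grad_W1 = X.T @ d_h

    # Crystallization: zero the gradient on masked weights so they stay
    # exactly at their current values, while fluid weights keep learning.
    grad_W1[crystal_mask] = 0.0

    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

# The crystallized compartment is untouched; the fluid weights have moved.
assert np.allclose(W1[crystal_mask], W1_initial[crystal_mask])
print("crystallized weights preserved; final MSE:",
      float(np.mean((sigmoid(X @ W1) @ W2 - y) ** 2)))
```

Zeroing the gradient on masked entries keeps the crystallized compartment identical across every training step, which is what makes its structure stable enough to interpret or to build further components on top of.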

Metadata
Author: Marcus Grum (ORCiD, GND)
DOI: https://doi.org/10.1007/978-3-031-42505-9_8
ISBN: 978-3-031-42504-2
ISBN: 978-3-031-42505-9
Title of parent work (English): Artificial Intelligence and Soft Computing
Publisher: Springer
Place of publication: Cham
Editors: Leszek Rutkowski, Rafał Scherer, Marcin Korytkowski, Witold Pedrycz, Ryszard Tadeusiewicz, Jacek M. Zurada
Publication type: Conference publication
Language: English
Date of first publication: 14 September 2023
Year of publication: 2023
Release date: 7 March 2024
Keywords/tags: NMDL; artificial neural networks; backpropagation; cognitive levels; knowledge crystallization; second-order conditioning
Number of pages: 23
First page: 78
Last page: 100
Organizational units: Faculty of Economics and Social Sciences / Economics / Business Administration Group
DDC classification: 3 Social Sciences / 33 Economics / 330 Economics
Peer review: Not determinable