Technical capacity for development of sequentially training neural networks in industrial architecture design
https://doi.org/10.21285/2227-2917-2025-1-155-164
EDN: QQZRBY
Abstract
This paper analyzes the possibilities for developing sequentially learning neural networks in the context of industrial architecture design. The study revealed a lack of empirical data on applying neural network methods to automate the creation of architectural models of industrial objects in the modern information environment; moreover, the available studies are limited to a general description of the problem and offer no comprehensive solutions. The applied methods are based on training neural networks on images, adapted here for modeling autonomous behavior in architectural design. The proposed standards stipulate that algorithmically controlled neural networks serve as a tool facilitating subsequent design rather than as a source of knowledge. Theoretical and experimental studies were conducted to develop new engineering design techniques for creating a model of autonomous behavior in architecture. The results demonstrate that integrating engineering design with neural network methods contributes to the standardization of design processes, increasing the functional efficiency and quality of realized objects. The developed methods can be used at early design stages to move from conventional visual approaches to algorithmically controlled models, opening opportunities for the automation of engineering design.
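The paper itself presents no code, so the fragment below is only an illustrative sketch of what sequential (continual) training on image data can look like in practice, not the author's method. A small convolutional network is trained on several image tasks arriving one after another, with an L2 pull toward previously learned weights as a crude stand-in for continual-learning regularizers; the network architecture, the synthetic 32x32 "design images", and all hyperparameters are assumptions introduced for illustration.

    # Minimal sketch of sequential (continual) training on image tasks.
    # Synthetic tensors stand in for labeled architectural imagery; the
    # L2 anchor toward previous weights is a simplified regularizer, not
    # the method described in the paper.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    class SmallCNN(nn.Module):
        def __init__(self, num_classes: int = 4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 8 * 8, num_classes)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    def make_task(n: int = 64):
        """Hypothetical stand-in for one task's labeled 32x32 RGB images."""
        return torch.randn(n, 3, 32, 32), torch.randint(0, 4, (n,))

    model = SmallCNN()
    criterion = nn.CrossEntropyLoss()
    anchor = None  # snapshot of weights after the previous task
    lam = 1e-2     # strength of the pull toward previously learned weights

    for task_id in range(3):  # three tasks arriving sequentially
        x, y = make_task()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(20):
            opt.zero_grad()
            loss = criterion(model(x), y)
            if anchor is not None:  # penalize drift from earlier tasks
                loss = loss + lam * sum(
                    ((p - a) ** 2).sum()
                    for p, a in zip(model.parameters(), anchor)
                )
            loss.backward()
            opt.step()
        # freeze a copy of the weights to anchor the next task's training
        anchor = [p.detach().clone() for p in model.parameters()]
        print(f"task {task_id}: final loss {loss.item():.3f}")

In a real design pipeline, make_task would be replaced by loaders over successive batches of project imagery, and the quadratic anchor could be swapped for a weighted penalty such as elastic weight consolidation.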
About the Author
K. A. Merkushev, Russian Federation
Konstantin A. Merkushev, Postgraduate Student
76 Lenin Ave., Chelyabinsk 454080
Author ID: 1233716
Competing Interests:
The author declares no conflict of interest regarding the publication of this article.
For citation:
Merkushev K.A. Technical capacity for development of sequentially training neural networks in industrial architecture design. Izvestiya vuzov. Investitsii. Stroitelstvo. Nedvizhimost. 2025;15(1):155-164. (In Russ.) https://doi.org/10.21285/2227-2917-2025-1-155-164. EDN: QQZRBY