GAO Yan, WU Xiaodong, TIAN Jiayi. Analysis and prediction of the ground subsidence due to the spatial form of underground karst caves based on machine learning[J]. Acta Scientiarum Naturalium Universitatis Sunyatseni, 2023, 62(02): 83-92. DOI: 10.13471/j.cnki.acta.snus.2022D019.
Analysis and prediction of the ground subsidence due to the spatial form of underground karst caves based on machine learning
To understand the characteristics of ground subsidence over underground karst caves caused by engineering activities, numerical simulation based on the finite element method is adopted to analyze the surface subsidence response of ground containing caverns of different forms under vertical load. The influences of cavern size, burial depth, cavern shape, and load magnitude are explored separately. The results show that the larger the cavern, the shallower the burial depth, the larger the shape coefficient, and the larger the load, the greater the resulting ground subsidence. The ground subsidence curves are all bell-shaped and follow a Gaussian distribution law. Grey correlation analysis shows that the maximum ground subsidence is most sensitive to the size and shape of the cavern; that is, the cavern geometry has an important effect on ground subsidence. The subsidence curves obtained from the numerical simulations are used to train a deep neural network, and the error between the predicted and computed values after training is within 5%. The deep neural network can therefore serve as an effective method for predicting ground subsidence caused by the construction of foundations over underground karst caves.
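The claim that the subsidence curves are bell-shaped and follow a Gaussian law can be sketched with a Peck-type settlement trough, s(x) = s_max · exp(−x²/(2i²)), where s_max is the maximum subsidence above the cavern centreline and i is the trough-width parameter. Taking logarithms linearises the relation, so both parameters can be recovered from a simulated subsidence curve by a simple least-squares line of ln s against x². The values below (s_max = 12 mm, i = 8 m) are assumed for illustration only and are not taken from the paper.

```python
import math

def fit_gaussian_trough(xs, ss):
    """Recover (s_max, i) of s(x) = s_max * exp(-x^2 / (2 i^2))
    from subsidence samples via log-linear least squares:
    ln s = ln s_max - x^2 / (2 i^2)."""
    u = [x * x for x in xs]                 # independent variable: x^2
    v = [math.log(s) for s in ss]           # dependent variable: ln s
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    slope = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v)) / \
            sum((ui - mu) ** 2 for ui in u)
    intercept = mv - slope * mu
    s_max = math.exp(intercept)             # intercept = ln s_max
    i = math.sqrt(-1.0 / (2.0 * slope))     # slope = -1 / (2 i^2)
    return s_max, i

# Synthetic "simulation output" with assumed s_max = 12 mm, i = 8 m.
xs = [x - 30.0 for x in range(61)]          # horizontal distance (m)
ss = [12.0 * math.exp(-x * x / (2.0 * 8.0 ** 2)) for x in xs]
s_max, i = fit_gaussian_trough(xs, ss)
print(round(s_max, 2), round(i, 2))  # → 12.0 8.0
```

In the paper's workflow, curves of this form (one per combination of cavern size, depth, shape, and load) are the training targets for the deep neural network; the Gaussian fit is only a compact way to check that a simulated curve conforms to the stated distribution law.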