Abstract
This paper presents a unified framework for the vibration and stability analysis of sandwich-type smart metamaterial concrete plates comprising functionally graded graphene origami-enabled auxetic concrete metamaterials (FGGOEAMs) and piezoelectric sensor-actuator layers. Higher-order shear deformation theory is combined with piezoelastic constitutive relations to account for the electromechanical interactions between the core and the actuator layers, thereby capturing the coupled behavior of the system. The effective properties of the graphene origami-enabled composites are estimated with micromechanical models assisted by genetic programming. The governing equations of motion are derived using Hamilton's principle and solved with the differential quadrature approach (DQA), providing an efficient formalism for analyzing complex multiphysics interactions. The results are verified against published studies, and the global mode shapes of the system are examined. The presented approach combines advanced micromechanical modeling, higher-order continuum formulations, and numerical simulation to analyze the energy management and stability characteristics of metamaterial concrete plates. The system exhibits improved stability and tunable energy management, providing a robust design platform for multifunctional smart metamaterial concrete structures. By bridging analytical modeling, numerical simulation, and active design, this work contributes to the development of smart materials for energy harvesting, structural health monitoring, and stability enhancement in future engineering applications.
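To illustrate the differential quadrature approach (DQA) named in the abstract, the following minimal sketch computes first-order DQ weighting coefficients on a Chebyshev-Gauss-Lobatto grid (a common choice) and differentiates a test function. The grid size and test function are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dq_weights(x):
    """First-order differential quadrature weighting matrix for arbitrary
    grid points x (Quan-Chang / Shu explicit formulation)."""
    n = len(x)
    a = np.zeros((n, n))
    # M'(x_i) = prod over k != i of (x_i - x_k)
    M = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i])
                  for i in range(n)])
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i, j] = M[i] / ((x[i] - x[j]) * M[j])
    # diagonal terms follow from the row-sum property of DQ matrices
    np.fill_diagonal(a, -a.sum(axis=1))
    return a

# Chebyshev-Gauss-Lobatto points on [0, 1], a typical DQ grid
n = 15
x = 0.5 * (1 - np.cos(np.pi * np.arange(n) / (n - 1)))
A = dq_weights(x)
# differentiate sin(x); the DQ derivative should closely match cos(x)
err = np.max(np.abs(A @ np.sin(x) - np.cos(x)))
print(err)
```

In a plate analysis, the same weighting matrix is applied along each spatial direction to discretize the governing equations into an algebraic eigenvalue problem.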
Key Words
auxetic structures; graphene origami; metamaterials; piezoelectric layers; smart structures and systems; vibration analysis
Address
(1) Jianfeng Du:
School of Intelligent Construction, Luzhou Vocational and Technical College, Luzhou 646000, Sichuan, China;
(2) Jianfeng Du:
Luzhou Key Laboratory of Intelligent Construction and Low-carbon Technology, Luzhou 646000, Sichuan, China;
(3) Zheyang Yuan:
Luzhou Huaxi Green Building Materials Development Co., Ltd., Luzhou 646000, Sichuan, China;
(4) Mohamadhasan Babaee:
Department of Mechanical Engineering, Tehran Branch, Islamic Azad University, Tehran, Iran;
(5) Murat Yaylacı:
Department of Civil Engineering, Recep Tayyip Erdogan University, 53100, Rize, Turkey;
(6) Murat Yaylacı:
Turgut Kiran Maritime Faculty, Recep Tayyip Erdogan University, 53900, Rize, Turkey.
Abstract
This study proposes an image acquisition system that corrects geometric distortion in images for structural inspection. The proposed system is composed of a main camera, a light-emitting diode (LED) module, a displacement sensor, and a movement system. Images taken in industrial environments are typically distorted, which increases the difficulty of damage inspection, so a calibration procedure is required to correct the distortion. However, two main problems remain: (1) most existing studies focus on lens distortion without considering geometric distortion caused by camera angles, and (2) most image acquisition systems rely on markers such as checkerboards, which are inconvenient to deploy. The proposed image acquisition system is developed to alleviate these problems. It employs the LED module to project a specialized light pattern instead of physical markers, and geometric calibration is performed by selecting corner points of the projected pattern. Furthermore, the movement system supports linear travel of up to 3000 mm and full 360° rotation, providing flexibility in experimental setups. Experts are thus able to perform precise inspection with simple image preprocessing. Experiments conducted in both indoor and outdoor environments demonstrate the effectiveness of the proposed image acquisition system, which achieves a lower root mean square error.
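The corner-point calibration described above can be sketched as a planar homography estimated from four point correspondences. The following minimal example (corner coordinates are hypothetical, not from the paper) solves for the homography with the direct linear transform and rectifies the oblique view of a projected pattern.

```python
import numpy as np

def homography(src, dst):
    """3x3 homography mapping four src points to four dst points
    via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the null vector of A (last right singular vector)
    _, _, Vt = np.linalg.svd(np.array(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply a homography to an (N, 2) array of points."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

# hypothetical corners of the projected LED pattern seen at an oblique angle
src = [(120, 80), (510, 60), (540, 430), (90, 410)]
# target corners of the rectified (fronto-parallel) pattern
dst = [(0, 0), (500, 0), (500, 400), (0, 400)]
H = homography(src, dst)
rectified = apply_h(H, np.array(src, float))
```

In practice the same `H` would be used to warp the whole image, so that damage measurements are taken on a geometrically corrected view.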
Address
(1) Jinho Song, Jaewon Park, Sungsik Yoon:
Department of Artificial Intelligence, Hannam University, 70 Hannam-ro, Daejeon 34430, Republic of Korea;
(2) Sangmok Lee:
Dam Safety Management Center, Korea Water Resources Corporation (K-water), 200 Sintanjin-ro, Daejeon 34350, Republic of Korea;
(3) In-Ho Kim:
Department of Civil Engineering, Kunsan National University, 558 Daehak-ro, Kunsan 54150, Republic of Korea.
Abstract
Because it is difficult to extract and interpret multi-dimensional spatial feature information from two-dimensional images, three-dimensional modeling and scene segmentation of structures are necessary. At present, most three-dimensional reconstruction technologies suffer from expensive data acquisition equipment, long modeling times, and poor robustness under low texture. To address these concerns, this study proposes an integrated laser and visible-light method based on the built-in lidar sensor of a smartphone to reduce cost and achieve fast reconstruction of structural 3D point cloud models. In addition, a lightweight segmentation backbone network based on a spatio-temporal redundancy downsampling mechanism was developed and deployed, enabling real-time segmentation and semantic understanding of the environment. An unfurnished apartment was selected to verify the performance of the proposed method, and the generated floor plan was compared with ground-truth measurements. Results show that the robustness of point cloud reconstruction was enhanced and that the mIoU of scene segmentation on the test set reached 65.2% at 200 FPS. The dimensions of the generated floor plan also achieved high accuracy (over 93%, and mostly up to 97%), enabling engineers to quickly and accurately understand on-site spatial structure information.
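The mIoU figure quoted above is the standard mean intersection-over-union metric for semantic segmentation. A minimal sketch of how it is computed (the toy label maps below are illustrative, not data from the study):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across semantic classes,
    skipping classes absent from both prediction and ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))

# toy two-class label maps (0 = background, 1 = structure)
gt = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1]])
pred = np.array([[0, 0, 1, 0],
                 [0, 1, 1, 1]])
miou = mean_iou(pred, gt, 2)  # both classes have IoU 3/5 here
```

The same metric, averaged over the test-set classes, yields the 65.2% reported for the lightweight segmentation network.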
Key Words
3D structural reconstruction; Lidar sensor; LSSNet; point cloud; scene segmentation
Address
(1) Yanzhi Qi, Zhi Ding:
Department of Civil Engineering, Hangzhou City University, Hangzhou, 310015, China;
(2) Yanzhi Qi, Yaozhi Luo:
Institute of Structural Engineering, Zhejiang University, Hangzhou, 310058, China;
(3) Yanzhi Qi, Zhi Ding:
Key Laboratory of Safe Construction and Intelligent Maintenance for Urban Shield Tunnels of Zhejiang Province, Hangzhou, 310015, China.
Abstract
Most bridge inspection records are kept in analog formats as inspection maps with cracks annotated by hand, which limits their usefulness in modern digital asset management systems. Although deep learning has achieved strong performance on crack detection in photographic imagery, these methods depend heavily on color and texture cues that are absent in inspection maps. To address this gap, this paper presents a deep-learning framework designed to segment cracks directly from binary inspection maps. A synthetic dataset generation pipeline was developed using AutoLISP scripts to simulate cracks in CAD-based bridge elevations, reducing the need for manual annotations. Building on advances in general-purpose segmentation architectures, we design a customized model that incorporates an edge-sensitive decoding module, a structure-aware loss combining geometric and pixel-level accuracy, and a sliding-window inference strategy for processing large, high-resolution drawings. The model is trained on synthetic data and evaluated on real inspection maps to test its generalization ability. Results show that the proposed method consistently outperforms widely used segmentation baselines in both quantitative accuracy and visual clarity. Ablation studies further confirm the contribution of each architectural component. Beyond static segmentation, the framework enables time-series visualization of crack evolution, supporting condition tracking across historical records. This approach provides a scalable and practical solution for digitizing analog inspection data, making it compatible with Building Information Modeling (BIM) and digital twin systems. By transforming long-term inspection archives into actionable digital resources, the proposed method enhances efficiency, continuity, and data-driven decision-making in bridge maintenance workflows.
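The sliding-window inference strategy mentioned above can be sketched as follows: a large drawing is tiled into overlapping windows, the model is run on each tile, and overlapping predictions are averaged back into a full-size map. Window size, stride, and the identity `model` below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sliding_window_infer(image, model, win=512, stride=256):
    """Tile a large drawing into overlapping windows, run `model` on each
    tile, and average overlapping predictions into a full-size map."""
    H, W = image.shape
    assert H >= win and W >= win, "drawing must be at least one window in size"
    prob = np.zeros((H, W))
    count = np.zeros((H, W))
    # window origins, with an extra position so right/bottom edges are covered
    ys = sorted(set(list(range(0, H - win + 1, stride)) + [H - win]))
    xs = sorted(set(list(range(0, W - win + 1, stride)) + [W - win]))
    for y in ys:
        for x in xs:
            prob[y:y + win, x:x + win] += model(image[y:y + win, x:x + win])
            count[y:y + win, x:x + win] += 1
    return prob / count  # every pixel is covered by at least one window

# smoke test with a dummy model that returns a constant probability map
out = sliding_window_infer(np.zeros((768, 768)),
                           lambda tile: np.ones(tile.shape),
                           win=512, stride=256)
```

Averaging in the overlap regions suppresses boundary artifacts that would appear if each tile were predicted independently and stitched hard-edged.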
Key Words
crack; digitalization; inspection map; transformer
Address
(1) Jihun Shin:
Department of Smartcity, Chung-Ang University, 84 Heukseok-ro, Dongjak-gu, Seoul, Republic of Korea;
(2) Chang-Su Shim:
Department of Civil and Environmental Engineering, Chung-Ang University, 84 Heukseok-ro, Dongjak-gu, Seoul, Republic of Korea.