Journal of Graphics, 2023, Vol. 44, No. 6

Table of Contents

    Cover
    Cover of issue 6, 2023
    2023, 44(6): 0. 
    Contents
    Table of Contents for Issue 6, 2023
    2023, 44(6): 1. 
    Review
    A review of computer-aided classification prediction of Parkinson's disease based on machine learning
    WEN Jin-yu, FANG Mei-e
    2023, 44(6): 1065-1079.  DOI: 10.11996/JG.j.2095-302X.2023061065

    Parkinson's disease (PD) is among the top ten most challenging diseases according to the World Health Organization, placing a substantial burden on patients and their families. Currently, treatment can only offer partial relief from clinical symptoms and cannot achieve a complete cure. Therefore, early auxiliary diagnosis holds significant practical value for PD patients. This research conducted a comprehensive analysis of computer-aided diagnosis techniques for PD classification prediction both in China and abroad. It also summarized research endeavors utilizing machine learning models to assist in the early detection of PD, aiming to guide early intervention and prevent disease progression. Common prediction methods involved data preprocessing, feature selection, and classification. Traditional machine learning methods might not be as effective when dealing with large datasets or high data complexity, making deep learning or improved machine learning methods more promising for improving prediction accuracy. Furthermore, there has been a growing focus on diagnosis based on brain structural images of PD patients with cognitive impairment. Research on cognitive dysfunction followed a progressive trajectory, emphasizing the need for early screening and timely intervention. Future research should further explore computer-aided diagnostic techniques based on machine learning methods and apply them to the early classification prediction of PD, aiming to enhance the accuracy of medical diagnosis and elevate the quality of diagnosis and treatment.
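The three-stage pipeline the review names (preprocessing, feature selection, classification) can be sketched minimally. Everything below is illustrative and not taken from any surveyed study: the function name, the z-score step, the class-separation filter, and the nearest-centroid classifier are all stand-in choices.

```python
import numpy as np

def pd_screening_pipeline(X_train, y_train, X_test, k=2):
    """Toy three-stage pipeline: preprocessing (z-score), feature
    selection (top-k by class separation), classification (nearest
    class centroid). Purely illustrative stand-ins."""
    mu, sd = X_train.mean(0), X_train.std(0) + 1e-9
    Xtr, Xte = (X_train - mu) / sd, (X_test - mu) / sd      # preprocessing
    # feature selection: absolute difference of class means per feature
    sep = np.abs(Xtr[y_train == 1].mean(0) - Xtr[y_train == 0].mean(0))
    idx = np.argsort(sep)[-k:]
    Xtr, Xte = Xtr[:, idx], Xte[:, idx]
    # classification: nearest centroid
    c0, c1 = Xtr[y_train == 0].mean(0), Xtr[y_train == 1].mean(0)
    d0 = np.linalg.norm(Xte - c0, axis=1)
    d1 = np.linalg.norm(Xte - c1, axis=1)
    return (d1 < d0).astype(int)
```

In practice each stage would be replaced by the stronger methods the review surveys (e.g. deep feature extractors and learned classifiers).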

    Current status and progress of research on the application of complex networks in the field of industrial design
    DING Man, LI Peng-hui, ZHANG Yi-fei, MA Hong-kun
    2023, 44(6): 1080-1090.  DOI: 10.11996/JG.j.2095-302X.2023061080

    With the rapid advancement of network science research, complex networks have become one of the research focal points in various fields. Industrial design, as a highly integrated cross-disciplinary domain, is no exception. Complex network theory provides a more in-depth research perspective, bringing brand-new opportunities to this field. To advance research and foster the application of complex networks in industrial design, a review was conducted of related studies in China and abroad. It summarized the research methods and progress, offering further analysis. Firstly, complex network theory was presented through three aspects: basic concepts, basic models, and main research content. Subsequently, it reviewed the application of complex network theory in industrial design in terms of product structure design, product appearance design, and design demand analysis. Finally, taking into account the existing development of relevant research, it analyzed and summarized the application of complex network research results in industrial design, as well as the research needs for node selection and network construction in the actual design process. It also presented and discussed the research hotspots and development trends of complex networks in industrial design.

    A review of neural radiance field for autonomous driving scene
    CHENG Huan, WANG Shuo, LI Meng, QIN Lun-ming, ZHAO Fang
    2023, 44(6): 1091-1103.  DOI: 10.11996/JG.j.2095-302X.2023061091

    The neural radiance field (NeRF) is a crucial technology for reconstructing realistic visual effects and synthesizing novel views of scenes. It primarily renders synthetic 3D scenes based on 2D image data captured by cameras, extrapolating from known views to unknown views, so that users can observe synthetic views from different viewpoints to enhance human-computer interaction. As a method for novel view synthesis and 3D reconstruction, NeRF exhibited significant research and application value in the fields of robotics, autonomous driving, virtual reality, and digital twins. Its integration with autonomous driving scenarios allowed for high-quality reconstruction of complex driving scenes and the simulation of different scenes under adverse conditions. This could enrich training data for autonomous driving systems, enhance their accuracy and safety at minimal cost, and validate the effectiveness of autonomous driving algorithms. Given NeRF's important application prospects in autonomous driving scenes and its limited coverage in existing reviews, firstly, traditional explicit 3D scene representation methods were reviewed to introduce the implicit representation of scenes, namely NeRF, together with the principle of NeRF technology. Secondly, a discussion and analysis were conducted regarding the challenges encountered when combining NeRF with autonomous driving scenes, including the problems of sparse-view reconstruction, large-scale scene reconstruction, motion scenes, training acceleration, and the synthesis of autonomous driving scenes. Finally, insights were provided into the future development directions of NeRF technology.
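The rendering principle mentioned above can be made concrete with the standard NeRF volume-rendering quadrature, which composites per-sample densities and colors along a ray:

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Numerical volume rendering used by NeRF-style methods:
    alpha_i = 1 - exp(-sigma_i * delta_i),
    T_i = prod_{j<i} (1 - alpha_j),
    C = sum_i T_i * alpha_i * c_i."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0), weights
```

An opaque first sample (very large density) fully occludes later samples, which is the behavior the weights encode.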

    YOLOv8 with bi-level routing attention for road scene object detection
    WEI Chen-hao, YANG Rui, LIU Zhen-bing, LAN Ru-shi, SUN Xi-yan, LUO Xiao-nan
    2023, 44(6): 1104-1111.  DOI: 10.11996/JG.j.2095-302X.2023061104

    With the continuous increase in the number of motor vehicles, the road traffic environment has become increasingly complex; in particular, changes in light conditions and complex backgrounds can interfere with the accuracy and precision of target detection algorithms. Meanwhile, the diverse shapes of targets in road scenes can pose challenges to the detection task. In response to these challenges, a method named YOLOv8n_T was proposed. Building on the YOLOv8 backbone network, it incorporated a D_C2f block utilizing deformable convolution to enhance feature learning for targets under complex backgrounds, making it more adaptable to the diverse and complex scenarios of road targets. Furthermore, the model incorporated a bi-level routing attention module to adaptively query and filter out irrelevant regions, retaining only the most relevant ones. For small targets such as pedestrians and traffic lights on the road, a small-target detection layer was added. Experimental results demonstrated that the proposed YOLOv8n_T could significantly enhance the precision of target detection in road scenarios, with an average precision increase of 6.8 percentage points compared to the original YOLOv8n and 11.2 percentage points compared to YOLOv5n on the BDD100K dataset.
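The core operation behind the deformable convolution used in the D_C2f block is reading features at learned, non-integer offsets from each kernel position, which requires bilinear interpolation. A minimal sketch of that sampling step (a generic formulation, not the paper's implementation):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample a single-channel feature map (H x W) at a fractional
    (x, y) location via bilinear interpolation; this is what lets a
    deformable convolution evaluate learned fractional offsets."""
    h, w = img.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    x0, y0 = max(x0, 0), max(y0, 0)
    fx, fy = x - x0, y - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```

In a full deformable layer, an auxiliary convolution predicts one (dx, dy) offset per kernel tap, and each tap is gathered with this interpolation before the weighted sum.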

    Image Processing and Computer Vision
    Research on image privacy detection based on deep transfer learning
    WANG Da-fu, WANG Jing, SHI Yu-kai, DENG Zhi-wen, JIA Zhi-yong
    2023, 44(6): 1112-1120.  DOI: 10.11996/JG.j.2095-302X.2023061112

    Addressing the absence of an early warning mechanism for image privacy leakage in current social media platforms, an optimization scheme for image privacy target detection based on the YOLOv8 model was proposed, thus reducing the risk of privacy leakage when users share images. Building upon the YOLOv8 baseline model, the bottleneck transformer (BoT) module was integrated into the backbone network, capturing global contextual information and modeling long-range dependencies of targets. Concurrently, the bidirectional feature pyramid network (BiFPN) structure was introduced to improve the neck network, facilitating the deep fusion of multi-scale features. On this basis, using a deep transfer learning method, the YOLOv8 pre-trained model was fine-tuned to achieve automatic detection of image privacy. A privacy image dataset was constructed using the LabelImg annotation tool, and common YOLO-series models were compared with the improved YOLOv8 in the transfer learning setting. The results demonstrated that YOLOv8 exhibited strong performance among the baseline models, while the F1 and mAP@.5 values of the improved model proposed in this study reached 0.885 and 0.927, respectively, a 4.0% and 3.4% increase compared with YOLOv8. The significantly enhanced detection accuracy makes the model well-suited for image privacy detection in various application scenarios.

    Spiking neural network-based navigation and obstacle avoidance algorithm for complex scenes
    DING Jian-chuan, XIAO Jin-tong, ZHAO Ke-xin, JIA Dong-qing, CUI Bing-de, YANG Xin
    2023, 44(6): 1121-1129.  DOI: 10.11996/JG.j.2095-302X.2023061121

    Spiking neural networks (SNNs) have been widely applied in the field of mobile robot navigation and obstacle avoidance due to their low power consumption and temporal processing capabilities. However, existing SNN models are relatively simple and struggle with obstacle avoidance in complex scenarios, such as dynamic obstacles with varying speeds and environmental noise interference. To tackle these challenges, a complex-scene navigation and obstacle avoidance algorithm based on SNNs was proposed. This algorithm employed attention mechanisms to enhance obstacle avoidance capabilities for dynamic obstacles, enabling the model to make more accurate obstacle avoidance decisions by focusing more on the information of dynamic obstacles. Additionally, a dynamic spiking threshold was designed based on biological inspiration, allowing the model to adaptively adjust the firing of spiking signals to cope with environments containing noise interference. Experimental results demonstrated that the proposed algorithm exhibited optimal navigation and obstacle avoidance performance within virtual complex scenes. Across the three designed complex scenes (variable-speed dynamic scenes, input interference, and weight interference), the navigation and obstacle avoidance success rates reached 86.5%, 79.0%, and 76.2%, respectively. This research provided a new approach and method for solving the problem of robot navigation and obstacle avoidance in complex scenarios.
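The "dynamic spiking threshold" idea can be illustrated with a leaky integrate-and-fire (LIF) neuron whose threshold rises after each spike and decays back to a baseline. This is a generic adaptive-threshold sketch, not the paper's exact formulation; all constants are illustrative.

```python
def lif_adaptive(inputs, tau=0.9, v_th0=1.0, beta=0.2, decay=0.95):
    """LIF neuron with an adaptive firing threshold: each spike raises
    the threshold by beta, and the threshold then decays back toward
    the baseline v_th0, damping responses to noisy bursts."""
    v, th, spikes = 0.0, v_th0, []
    for x in inputs:
        v = tau * v + x                       # leaky membrane integration
        s = 1 if v >= th else 0
        spikes.append(s)
        if s:
            v = 0.0                           # hard reset after a spike
            th += beta                        # raise threshold (adaptation)
        th = v_th0 + decay * (th - v_th0)     # decay toward baseline
    return spikes
```

Transient noise just after a spike is less likely to trigger a second spike, which is the adaptive behavior the abstract describes.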

    Detail-enhanced multi-exposure image fusion method
    XIA Xiao-hua, LIU Xi-heng, YUE Peng-ju, ZOU Yi-qing, JIANG Li-jun
    2023, 44(6): 1130-1139.  DOI: 10.11996/JG.j.2095-302X.2023061130

    A detail-enhanced multi-exposure image fusion method was proposed to address the problem that the bright and dark regions of the sequence images obtain low weights, resulting in the loss of details in those regions of the fused image. Wavelet decomposition was conducted on the sequence images and on the image fused according to the weight map. This process extracted the low-frequency components of the fused image and the high-frequency components of its edge regions, and fused them with the high-frequency components from the non-edge regions of the sequence images. Finally, the detail-enhanced fused image was obtained through the inverse wavelet transform. In experiments, nine sets of classical multi-exposure image sequences were selected, and the method was compared with nine multi-exposure image fusion algorithms in terms of both subjective comparison and objective evaluation. The results demonstrated that the proposed method, which combined spatial-domain and frequency-domain image fusion, could effectively solve the problem of detail loss in the bright and dark areas of fused images, while avoiding the ringing artifacts often encountered in frequency-domain image fusion methods. The fused images were realistic, natural, and exhibited vibrant colors. The mean image information entropy and the mean image gradient of the fused images obtained by the proposed method were 7.6555 and 7.0273, respectively, ranking first and second among the ten multi-exposure image fusion algorithms. Considering both subjective and objective evaluation results, the proposed method outperformed the nine comparative methods.
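The spatial/frequency combination can be illustrated with a single-level 2-D Haar transform: keep the low-frequency band of the weight-map fusion and take the stronger high-frequency (detail) coefficients from a better-exposed source. This sketch simplifies the paper's edge/non-edge split to a max-magnitude rule, so it is an illustration of the idea rather than the method itself.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar wavelet transform (even dimensions assumed)."""
    a = (img[0::2] + img[1::2]) / 2.0              # row averages
    d = (img[0::2] - img[1::2]) / 2.0              # row details
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    a = np.zeros((h, 2 * w)); d = np.zeros((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.zeros((2 * h, 2 * w))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse_detail(weighted_fused, best_exposed):
    """Keep the low band of the weight-map fusion; per high band, take
    the larger-magnitude coefficient from either source."""
    llf, *hf = haar_dwt2(weighted_fused)
    _, *hb = haar_dwt2(best_exposed)
    hsel = [np.where(np.abs(f) >= np.abs(b), f, b) for f, b in zip(hf, hb)]
    return haar_idwt2(llf, *hsel)
```

The transform pair is exactly invertible, so detail injection happens only where the second source carries stronger high-frequency energy.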

    Multi-scale view synthesis based on neural radiance field
    FAN Teng, YANG Hao, YIN Wen, ZHOU Dong-ming
    2023, 44(6): 1140-1148.  DOI: 10.11996/JG.j.2095-302X.2023061140

    To address the problem of blurring and jaggedness in neural radiance fields (NeRF) for multi-scale view synthesis tasks, we proposed multi-scale neural radiance fields (MS-NeRF). This learning framework enhanced the quality of synthesized target views by incorporating view features and viewpoint features at different scales. First, for target views at different scales, a multi-level wavelet convolutional neural network was employed to extract target view features. These view features served as priors to supervise the network in synthesizing target scene views. Second, the sampling region of rays cast from the viewpoint camera through each pixel of the target view was enlarged, thus preventing the blurred and jagged rendering results caused by sampling only a single ray per pixel. Finally, through training with view features and viewpoint features at different scales, the deep neural network with a progressive structure learned the mapping from view features and viewpoint features to the target view, enhancing the robustness of the network in synthesizing views at different scales. Experimental results demonstrated that MS-NeRF could reduce training costs and improve the visual quality of synthesized target views compared to existing methods.

    High-capacity clipped robust image steganography based on multilevel invertible neural networks
    LI Hong-xuan, ZHANG Song-yang, REN Bo
    2023, 44(6): 1149-1161.  DOI: 10.11996/JG.j.2095-302X.2023061149

    Image steganography aims to safeguard information confidentiality by embedding secret information into carrier images while evading detection by observers. However, during the transmission, the edges of the carrier images are often prone to cropping due to resolution limitations, making it challenging to recover continuous hidden information from the edge-missing carrier images. Another challenge in image steganography is how to enhance the effective payload capacity without being detected. To address these challenges, we proposed a data-driven image steganography algorithm that employed a high-capacity and clipped robust multilevel invertible steganography network (CR-MISN). This network had the capability to recover the continuous secret images as fully as possible from carrier images with damaged edges. Furthermore, the algorithm exhibited a high degree of flexibility, allowing for the steganography of large-sized images with different specifications by altering channel numbers in the multilevel cascading of image branches. Experimental results demonstrated that the proposed method outperformed other state-of-the-art methods in terms of visual imperceptibility, quality metrics, and cropping recovery on various public datasets.

    Point cloud classification model incorporating external attention and graph convolution
    ZHOU Rui-chuang, TIAN Jin, YAN Feng-ting, ZHU Tian-xiao, ZHANG Yu-jin
    2023, 44(6): 1162-1172.  DOI: 10.11996/JG.j.2095-302X.2023061162

    In response to the challenge of insufficiently extracting local features from disordered and unstructured point cloud data, a point cloud classification model fusing external attention and graph convolution was proposed. Firstly, the point cloud data was constructed into a local directed graph, and then the graph convolution fused with external attention was employed for feature extraction to capture richer and more representative local features. Next, residual structures were introduced to build a deeper network and fuse feature information at different levels, enhancing the network performance. Finally, the point cloud data with a tree-like hierarchical structure was mapped to a hyperbolic space with negative curvature, thereby enhancing the ability of point cloud data representation. Embedding computation was also performed in the hyperbolic space to obtain the final classification results. Experiments were conducted on the standard publicly available datasets ModelNet40 and ScanObjectNN. The results demonstrated that the overall classification accuracy of the model on different datasets reached 93.8% and 82.8%, respectively, improving the overall accuracy of the model by 0.3% to 4.9%, compared to the current mainstream high-performance models, exhibiting strong robustness.
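The external attention block fused into the graph convolution has a compact closed form: attention is computed against a small learnable external memory rather than within the point set itself, following the double normalization of Guo et al.'s external attention. A minimal numpy sketch (the memories are passed in as arguments here; in the actual model they are learned parameters):

```python
import numpy as np

def external_attention(feats, Mk, Mv):
    """External attention between N x d features and an S x d external
    memory pair (Mk, Mv): softmax over the token axis, then L1
    normalization over the memory axis, then aggregation through Mv."""
    A = feats @ Mk.T                               # N x S similarity
    A = np.exp(A - A.max(axis=0, keepdims=True))
    A = A / A.sum(axis=0, keepdims=True)           # softmax over tokens
    A = A / (A.sum(axis=1, keepdims=True) + 1e-9)  # L1 norm over memory units
    return A @ Mv                                  # N x d output
```

Because S is small and shared across all samples, the cost is linear in the number of points, which is why it pairs well with per-point graph features.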

    Future frame prediction based on multi-branch aggregation for lightweight video anomaly detection
    HUANG Shao-nian, WEN Pei-ran, QUAN Qi, CHEN Rong-yuan
    2023, 44(6): 1173-1182.  DOI: 10.11996/JG.j.2095-302X.2023061173

    Video anomaly detection in complex scenes holds significant research value and has practical applications. Despite the remarkable performance of current prediction-based methods, they suffer from challenges such as large model parameter counts. To address these problems, we proposed a lightweight model based on multi-branch aggregation for frame prediction. The proposed model leveraged Transformer units as basic structures, with multi-branch aggregation reducing the model parameters significantly. This not only reduced computational costs but also enhanced detection accuracy. Building on this foundation, we designed a multi-branch Transformer fusion encoder to extract the temporal motion features of normal events. The proposed encoder utilized a multi-branch connection operation to achieve multi-layer feature fusion, elevating the encoder's feature optimization ability. Moreover, a multi-branch clustering decoder was developed using K-means to mitigate the impact of normal feature diversity on anomaly detection performance. Experiments were conducted on three public datasets: UCSD Ped2, CUHK Avenue, and ShanghaiTech. The results demonstrated that the proposed model outperformed current mainstream algorithms, achieving better detection performance at lower computational cost.
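The scoring intuition behind a K-means-based decoder can be sketched in a few lines: features far from every prototype of "normal" are treated as anomalous. This is illustrative only; in the paper the clustering lives inside the multi-branch decoder rather than as a post-hoc scorer.

```python
import numpy as np

def anomaly_score(feat, centers):
    """Distance from a test feature to the nearest cluster center
    learned from normal events; larger means more anomalous."""
    return float(np.min(np.linalg.norm(centers - feat, axis=1)))
```

A frame-level decision would then threshold this score (or a normalized version of it) over time.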

    Knee cysts detection algorithm based on Mask R-CNN integrating global-local attention module
    ZHANG Li-yuan, ZHAO Hai-rong, HE Wei, TANG Xiong-feng
    2023, 44(6): 1183-1190.  DOI: 10.11996/JG.j.2095-302X.2023061183

    Accurately detecting knee cysts is an effective means of facilitating the early diagnosis and treatment of various knee-related diseases. However, the task of detecting knee cysts can be challenging because their imaging features are similar to those of other lesions in MR imaging, such as intra-articular effusion and cystic tumors. Therefore, a Mask R-CNN multi-task learning model incorporating global-local attention modules was proposed to simultaneously implement the automatic recognition, detection, and segmentation of knee cysts in MRI. Firstly, the method utilized the channel attention mechanism to achieve the weighted fusion of global and local features of knee images, forming a feature map with multi-scale information. This map provided more accurate discriminative features for the model. Secondly, a multi-task uncertainty loss function was introduced, which employed homoscedastic uncertainty to indicate the relative confidence of each task. It adaptively adjusted the task weights and automatically searched for the optimal solution. Finally, the GrabCut method was utilized to generate masks based on pre-labeled bounding boxes to further construct knee MRI datasets, enhancing the quality and efficiency of data annotation. The experimental results demonstrated that the proposed method could accurately identify cystic knee lesions in MRI, with an average accuracy of 92.3% for detection and 92.8% for segmentation. These results outperformed other comparison methods in terms of effectiveness.
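Homoscedastic-uncertainty task weighting is commonly written in the form of Kendall et al. (2018): with s_i = log(sigma_i^2), the combined loss is sum_i exp(-s_i) * L_i + s_i. A minimal sketch of that formula (the s_i would be learned jointly with the network; here they are plain inputs):

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_vars):
    """Multi-task loss with homoscedastic uncertainty weighting:
    L = sum_i exp(-s_i) * L_i + s_i, with s_i = log(sigma_i^2).
    A larger learned s_i down-weights an uncertain task, while the
    +s_i regularizer keeps s_i from growing without bound."""
    task_losses, log_vars = np.asarray(task_losses), np.asarray(log_vars)
    return float(np.sum(np.exp(-log_vars) * task_losses + log_vars))
```

With all s_i = 0 this reduces to a plain sum of the task losses, which is a useful sanity check.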

    Video captioning based on semantic guidance
    SHI Jia-hao, YAO Li
    2023, 44(6): 1191-1201.  DOI: 10.11996/JG.j.2095-302X.2023061191

    Video captioning aims to automatically generate a sentence of text for a given input video, summarizing the events in the video. This technology finds application in various fields, including video retrieval, short video title generation, assisting visually impaired individuals, and security monitoring. However, existing methods tend to overlook the role of semantic information in description generation, resulting in insufficient ability of the model to describe key information. To address this issue, a video captioning model based on semantic guidance was designed. The model as a whole adopted the encoder-decoder framework. In the encoding stage, a semantic enhancement module was employed to generate key entities and predicates. Subsequently, a semantic fusion module was utilized to generate the overall semantic representation. In the decoding stage, a word selection module was adopted to select the appropriate word vector, guiding description generation to efficiently leverage semantic information and enhance attention to the key semantics in the results. Finally, experiments demonstrated that the model could achieve CIDEr scores of 107.0% and 52.4% on two widely used datasets, MSVD and MSR-VTT, respectively, outperforming the state-of-the-art models. User studies and visualization results corroborated that the descriptions generated by the model aligned well with human comprehension.

    Palette-based semi-interactive low-light Thangka images enhancement
    ZHANG Chi, ZHANG Xiao-juan, ZHAO Yang, YANG Fan
    2023, 44(6): 1202-1211.  DOI: 10.11996/JG.j.2095-302X.2023061202

    As one of the important expressions of Regong art, Thangka has garnered increasing popularity due to its complex structure, bright colors, clear lines, and exquisite paintings. However, capturing Thangka images in dimly lit temple settings often presents challenges such as uneven illumination, high noise, color distortion, and loss of detail information. To address these issues, a semi-interactive low illumination Thangka image enhancement method based on color palettes was proposed. Firstly, based on the Retinex model, a low-illumination enhancement network, RCUNet, incorporating the convolutional block attention module (CBAM) and U-Net, was designed. Through the designed loss function, iterative training was conducted to reconstruct the illumination, reflection, and noise maps, thus synthesizing an enhanced result. For interaction, the main colors of the enhanced image were extracted and corresponding color palettes were generated using an improved K-means algorithm. Then, modifying these color palettes further improved the colors of the enhanced image. Finally, compared with several currently popular enhancement methods, quantitative and qualitative comparison experiments were undertaken on the Thangka datasets. The experimental results demonstrated that this method could yield the best results in three indicators: NIQE, PIQE, and PSNR scores.
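The palette-extraction step can be illustrated with plain K-means over the pixels of the enhanced image. The paper uses an improved K-means; this sketch uses vanilla Lloyd iterations with a deterministic farthest-point initialization, so it is a simplified stand-in.

```python
import numpy as np

def extract_palette(pixels, k=3, iters=20):
    """K-means over N x 3 RGB pixels, returning k palette colors and a
    per-pixel label. Farthest-point initialization keeps it deterministic."""
    pixels = np.asarray(pixels, dtype=float)
    centers = [pixels[0]]
    for _ in range(k - 1):   # add the pixel farthest from chosen centers
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers], axis=0)
        centers.append(pixels[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):   # Lloyd iterations
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels
```

Editing a palette color and recoloring the pixels of its cluster is then the semi-interactive adjustment the abstract describes.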

    Computer Graphics and Virtual Reality
    DynArt ChatGPT: a platform for generating dynamic intangible cultural heritage new year paintings
    JIN Cong, ZHOU Man-ling, ZHANG Jun-song, WANG Hong-liang, ZHANG Jia-yi, WANG Jing, XU Ming-liang
    2023, 44(6): 1212-1217.  DOI: 10.11996/JG.j.2095-302X.2023061212

    ChatGPT has attracted cross-disciplinary interest due to its conversation and reasoning capabilities. New Year paintings are a vital component of China's intangible cultural heritage, historically serving as a primary means of publicity. As a form of commodity production, they hold immense publicity and economic value. In modern society, dominated by science and technology, the preservation and development of China's intangible cultural heritage has encountered three major challenges: economic development, technological updating, and cultural change, resulting in issues such as a lack of inheritance, difficulties in innovation, and inadequate protection. In order to promote and develop traditional Chinese arts, an idea was envisioned: could the conversational capabilities of ChatGPT be combined with traditional arts from the realm of intangible cultural heritage? Based on this idea, we constructed the dynamic intangible cultural heritage New Year paintings generation system (DynArt ChatGPT). DynArt ChatGPT could extract keywords from the samples provided to ChatGPT and then generate a description related to those samples based on the keywords. The description was then input into the Lumen5 model, which could generate the corresponding dynamic video according to the provided description. Experiments demonstrated that this dynamic painting generation system could offer a new interpretation of some well-known Chinese folktales.

    Zero-shot text-driven avatar generation based on depth-conditioned diffusion model
    WANG Ji, WANG Sen, JIANG Zhi-wen, XIE Zhi-feng, LI Meng-tian
    2023, 44(6): 1218-1226.  DOI: 10.11996/JG.j.2095-302X.2023061218

    Avatar generation holds significant implications for various fields, including virtual reality and film production. To address the challenges associated with data volume and production costs in existing avatar generation methods, we proposed a zero-shot text-driven avatar generation method based on a depth-conditioned diffusion model. The method comprised two stages: conditional human body generation and iterative texture refinement. In the first stage, a neural network was employed to establish the implicit representation of the avatar. Subsequently, a depth-conditioned diffusion model was utilized to guide the neural implicit field in generating the required avatar model based on user input. In the second stage, the diffusion model was employed to generate high-precision inferred texture images, leveraging the texture prior obtained in the first stage. The texture representation of the avatar model was enhanced through an iterative optimization scheme. With this method, users could create realistic avatars with vivid characteristics from text descriptions alone. Experimental results substantiated the effectiveness of the proposed method, showing that it could yield high-quality, realistic avatars in response to given text prompts.

    Visualization comparison of historical figures cohorts
    CHEN Yi-tian, ZHANG Wei, TAN Si-wei, ZHU Rong-chen, WANG Yi-chao, ZHU Min-feng, CHEN Wei
    2023, 44(6): 1227-1238.  DOI: 10.11996/JG.j.2095-302X.2023061227

    In the realm of historical research, conducting cohort comparisons among historical figures holds great significance. The aim is to extract similarities and differences between cohorts based on the various characteristics of historical figures, thereby gaining a deeper understanding of historical events. However, existing visualization systems predominantly concentrate on exploring single cohorts and their internal differences, neglecting hierarchical comparisons between cohorts. In order to bridge this gap, a visual analysis method was proposed for comparing cohorts of historical figures. The proposed approach involved the extraction and visualization of the characteristics of historical figure cohorts, leveraging structured historical figure data to facilitate multi-dimensional cohort comparisons. We developed an interactive visualization system that employed flower-based visual metaphors to encode the diverse features of these cohorts. Historians could interactively explore cohorts and study the correlations, similarities, and differences among them. Following feedback evaluations by historians and case analyses, the historical figure cohort comparison visualization system demonstrated its effectiveness as a robust tool for cohort comparison research. Through this system, historians can obtain in-depth insights into historical events.

    Optimizing the multi-part layout to minimize the empty travel distance of nozzle
    XING Xiao-yue, TAO Xiu-ting, WANG Shu-fang, PAN Wan-bin
    2023, 44(6): 1239-1250.  DOI: 10.11996/JG.j.2095-302X.2023061239

    The layout of multiple parts in the printing chamber often directly impacts the efficiency and capability of batch manufacturing for fused filament fabrication (FFF). Meanwhile, the total empty travel distance of the printing nozzle exerts a significant impact on that efficiency and capability. Therefore, an optimized multi-part layout method was proposed, taking the empty travel distance into account. The method improved the efficiency of FFF batch manufacturing and the use of the printing chamber's available space by reducing the empty travel distance of the printing nozzle between parts. Firstly, to balance efficiency (such as the acceleration of geometric interference detection) against the accuracy of empty travel distance calculation, the proposed approach constructed a corresponding voxel-based surrogate model, layer by layer, for each part. Then, based on a greedy strategy and particle swarm optimization (PSO), a method was proposed to determine, pair by pair, the placement order and position of each voxel-based surrogate model. Finally, the voxel-based surrogate models were replaced with their corresponding parts, and the input part set was compactly arranged. Experiments were conducted on a set of complex-shaped parts. The results showed that the proposed method could reduce the total empty travel distance of the printing nozzle by 31.17% compared to Magics under the same measuring standard. Furthermore, compared with the layout function in existing commercial software, this method exhibited great potential for significantly reducing the total empty travel distance of the printing nozzle in batch manufacturing with FFF, especially for sets of complex-shaped parts.
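The quantity being minimized, the nozzle's empty travel between parts, and the greedy ordering idea can be sketched as follows. This is a 2-D stand-in with hypothetical per-part start/end points; the paper's actual search couples the greedy strategy with PSO over placements of voxel surrogates.

```python
import math

def empty_travel(order, endpoints):
    """Total empty-travel distance when the nozzle finishes part i at
    endpoints[i][1] and jumps to the start point endpoints[j][0] of the
    next part in the order."""
    d = 0.0
    for a, b in zip(order, order[1:]):
        (x1, y1), (x2, y2) = endpoints[a][1], endpoints[b][0]
        d += math.hypot(x2 - x1, y2 - y1)
    return d

def greedy_order(endpoints):
    """Greedy ordering: always jump next to the part whose start point
    is nearest the nozzle's current position."""
    left = set(range(len(endpoints)))
    order = [0]; left.remove(0)
    while left:
        cx, cy = endpoints[order[-1]][1]
        nxt = min(left, key=lambda j: math.hypot(endpoints[j][0][0] - cx,
                                                 endpoints[j][0][1] - cy))
        order.append(nxt); left.remove(nxt)
    return order
```

Even this toy ordering beats a naive sequential order whenever a nearer part is available, which is the effect the layout optimization scales up across layers.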

    Industrial Design
    Gamification drives in personal carbon footprint Apps
    YAO Shan-liang, LIU Xiang-xiang, WANG Yuan-yuan
    2023, 44(6): 1251-1258.  DOI: 10.11996/JG.j.2095-302X.2023061251

    In order to solve the problem that personal carbon footprint Apps on the market fail to attract users in a sustainable manner, the Octalysis framework was introduced to explore the hidden gamification driving factors behind such products. Firstly, target users' behavioral preferences were collected through questionnaires, and users were divided into four groups: achievement, cooperation, exploration, and competition; these were analyzed quantitatively in combination with the entropy value method to determine the importance of each driver in personal carbon footprint Apps. Secondly, the personal carbon footprint App "GreenerMe" was planned, gamification elements were proposed, and a static evaluation system for the App's gamified Octalysis framework was constructed. Finally, the user journey was analyzed in terms of the user groups and the four stages of discovery, introduction, shaping, and finality, and the evaluation system was then dynamically optimized to determine the appropriate gamification elements for the product and the timing of their placement. This study aims to assist personal carbon footprint App designers in gaining a more accurate and comprehensive understanding of their users, and to provide an effective gamification strategy for personal carbon footprint Apps, ensuring that the product effectively motivates users to engage in and sustain carbon reduction.
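The entropy value method used to weight the drivers is a standard construction: normalize each indicator column, compute its information entropy, and weight indicators by their degree of divergence (1 - entropy). A sketch, assuming the questionnaire indicators are arranged as columns of a positive matrix:

```python
import numpy as np

def entropy_weights(X):
    """Entropy value method: columns with more dispersion carry more
    information and therefore receive larger weights."""
    P = X / X.sum(axis=0, keepdims=True)           # column-wise proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    E = -(P * logs).sum(axis=0) / np.log(n)        # entropy in [0, 1]
    d = 1.0 - E                                    # degree of divergence
    return d / d.sum()                             # normalized weights
```

An indicator on which all respondents agree exactly has entropy 1 and thus weight 0, which is the intended behavior.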

    Optimization design of operating state information of molded case circuit breaker based on eye tracking
    LI He-sen, CHEN Ying, YAO Da-wei
    2023, 44(6): 1259-1266.  DOI: 10.11996/JG.j.2095-302X.2023061259

    The molded case circuit breaker (MCCB) is an indispensable basic electrical component within industrial power systems. In order to enhance the visual recognition efficiency of the operational status information of MCCBs, an improved design method for the open/close symbols of MCCBs was proposed. Analysis of existing open/close symbol designs revealed the reasons for the limited visibility of operational status information. By conducting eye-tracking experiments and utilizing evaluation indicators such as first entry time, first fixation duration, total fixation duration, hot-spot distribution, fixation progression, and subjective questionnaires, the design issues of the current operational status information of MCCBs were comprehensively analyzed. Based on the experimental conclusions, key points for improving the size and color of the open/close symbols were proposed. The superiority of the improved open/close symbol design was verified through comparative analysis against the eye-tracking data collected before the improvement. Furthermore, the research on improving open/close symbols can serve as a reference for enhancing the visualization of operational status information in similar electrical products, and eye-tracking experiments can provide key technical support for analyzing and enhancing the visualization of operational status information in MCCBs.
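Two of the indicators named above, first entry time and total dwell (fixation) duration, can be computed directly from a raw gaze stream. A simplified sketch for a single rectangular area of interest (AOI); the function name, the fixed-rate sampling assumption, and the axis-aligned AOI are all illustrative, not the experiment's actual tooling:

```python
def aoi_metrics(samples, aoi, hz=60):
    """First-entry time and total dwell time for one AOI.
    samples: list of (x, y) gaze points at a fixed sampling rate hz;
    aoi: axis-aligned box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = aoi
    dt = 1.0 / hz
    first_entry, dwell = None, 0.0
    for i, (x, y) in enumerate(samples):
        if x0 <= x <= x1 and y0 <= y <= y1:
            dwell += dt
            if first_entry is None:
                first_entry = i * dt
    return first_entry, dwell
```

Real eye-tracking pipelines first group raw samples into fixations (e.g. by dispersion or velocity thresholds) before computing such indicators.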

    Total Contents
    Total Contents of 2023
    2023, 44(6): 1267. 
    Published as
    Published as 6, 2023
    2023, 44(6): 1268. 