Journal of Graphics

Table of Contents

    29 February 2020, Volume 41 Issue 1

    Robust boolean operations for complex non-closed surfaces
    LIU Kai-zheng, LIU Li-gang
    2020, 41(1): 1-9.  DOI: 10.11996/JG.j.2095-302X.2020010001
    Boolean operation is a common approach for constructing complex solid geometries in
    computer-aided geometric design. Since its introduction in the 1980s, most research on it has
    traded off efficiency against robustness, and most algorithms strictly require that input meshes
    have no cavities or boundary edges, which guarantees that the inputs can serve as boundary
    representations of solids. Quite different from the methods above, this paper proposes an
    efficient, robust and widely applicable Boolean operation method that can also be applied to
    non-solid meshes. Firstly, we merge the input meshes into one mesh, resolve the intersections
    produced by the merge, split the result into patches along non-manifold edges, and identify all
    cells surrounded by those patches. Next, we compute the winding number of each cell with the help
    of added virtual patches, tag each cell's properties with respect to every input mesh, and thereby
    obtain the correct result of the Boolean operation.
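    For orientation, here is a minimal per-point sketch of the generalized winding number (sum of signed solid angles, Van Oosterom-Strackee formula), the quantity the cell-classification step relies on. It assumes numpy and does not reproduce the paper's virtual-patch construction or per-cell evaluation.

```python
import numpy as np

def winding_number(point, vertices, triangles):
    """Generalized winding number of `point` with respect to a triangle soup.

    vertices:  (V, 3) float array
    triangles: (T, 3) int array of vertex indices
    Returns ~1.0 for points enclosed by a closed outward-oriented surface,
    ~0.0 outside, and fractional values for open surfaces.
    """
    total = 0.0
    for tri in triangles:
        a, b, c = (vertices[i] - point for i in tri)
        la, lb, lc = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
        numer = np.dot(a, np.cross(b, c))
        denom = (la * lb * lc + np.dot(a, b) * lc
                 + np.dot(a, c) * lb + np.dot(b, c) * la)
        total += 2.0 * np.arctan2(numer, denom)   # signed solid angle of this triangle
    return total / (4.0 * np.pi)
```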
    Boundary processing method of symmetric area and surface tension calculation based on surface particle extraction
    ZHU Xiao-lin, ZHANG Yi-qun, GUO Qing-wei
    2020, 41(1): 10-17.  DOI: 10.11996/JG.j.2095-302X.2020010010
    Fluid-solid boundary processing has always been a focus of fluid simulation, and the boundary
    force method and the virtual particle method are commonly used to handle the fluid-solid boundary.
    The boundary force method prevents penetration by having particles laid on the boundary exert
    repulsive forces, but the computation of these boundary forces limits the simulation speed. The
    virtual particle method generates virtual particles at the boundary, but as the number of fluid
    particles increases, the number of virtual particles required grows as well, slowing the
    computation and causing separation between the fluid and the boundary. To solve these problems,
    this paper proposes a symmetric-area boundary processing method that meets real-time requirements
    while preserving fidelity; as the number of fluid particles grows, its time cost increases markedly
    more slowly than that of traditional methods, which makes it better suited to the simulation of
    complex scenes, and it avoids separation between the fluid and the boundary. The CSF (continuum
    surface force) method is a common way of handling surface tension: it converts the surface tension
    into a volume force, which greatly reduces the influence of the surface shape on the curvature
    computation, whereas curvature in fact depends only on the shape of the surface. To address this
    problem, the CSF method is improved and a surface tension calculation method based on surface
    particle extraction is proposed, which reduces the curvature error of the traditional CSF method
    and improves the calculation speed. The simulation results verify the effectiveness of the
    proposed method.
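    For context, a minimal sketch of the classic colour-field CSF surface tension force in SPH, i.e. the baseline formulation the paper improves on. It assumes numpy, a brute-force neighbour search, and given particle positions, masses and densities; it is not the paper's surface-particle-extraction variant.

```python
import numpy as np

def poly6_grad(r_vec, r, h):
    # gradient of the poly6 smoothing kernel
    return -945.0 / (32.0 * np.pi * h**9) * r_vec * (h*h - r*r)**2

def poly6_lap(r, h):
    # Laplacian of the poly6 smoothing kernel
    return -945.0 / (32.0 * np.pi * h**9) * (h*h - r*r) * (3.0*h*h - 7.0*r*r)

def csf_forces(pos, mass, rho, h, sigma, thresh=1e-6):
    """pos: (N, 3) float positions; mass, rho: (N,) arrays; returns (N, 3) forces."""
    forces = np.zeros_like(pos)
    for i in range(len(pos)):
        normal = np.zeros(3)
        lap_c = 0.0
        for j in range(len(pos)):
            r_vec = pos[i] - pos[j]
            r = np.linalg.norm(r_vec)
            if 0.0 < r < h:
                coef = mass[j] / rho[j]
                normal += coef * poly6_grad(r_vec, r, h)  # gradient of the colour field
                lap_c += coef * poly6_lap(r, h)           # Laplacian of the colour field
        ln = np.linalg.norm(normal)
        if ln > thresh:                                   # only near the free surface
            kappa = -lap_c / ln                           # curvature estimate
            forces[i] = sigma * kappa * normal            # F = sigma * kappa * n
    return forces
```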
    Adaptive piecewise inverse scale space algorithm for scattered data fitting
    ZHONG Yi-jun1, LI Chong-jun2
    2020, 41(1): 18-26.  DOI: 10.11996/JG.j.2095-302X.2020010018
    Scattered data reconstruction has been widely studied in signal processing and computer
    graphics. In recent years, obtaining sparse representations of scattered data by sparse
    optimization has also become a hot topic at the intersection of optimization and surface
    reconstruction. In this paper, we establish the connection between surface fitting of scattered
    data and piecewise sparsity in the principal shift-invariant (PSI) space generated by a B-spline,
    and introduce piecewise sparsity into the Bregman inverse scale space (ISS) algorithm. In addition,
    an adaptive piecewise ISS (aP_ISS) algorithm is developed to solve the scattered data
    reconstruction problem. Through an analysis of piecewise sign consistency, a performance guarantee
    for the adaptive piecewise ISS algorithm is obtained, and the parameter selection of aP_ISS can be
    avoided. Numerical experiments on surface reconstruction from scattered data show that the
    algorithm not only fits the surface effectively but also preserves the piecewise sparsity of the
    surface coefficients.
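    For readers unfamiliar with ISS-type methods, here is a minimal sketch of the linearized Bregman iteration, a standard discretization of the Bregman inverse scale space flow for sparse recovery. The paper's adaptive piecewise variant (aP_ISS) and its B-spline PSI-space setting are not reproduced, and the parameters below are illustrative.

```python
import numpy as np

def linearized_bregman(A, f, mu=5.0, step=None, iters=500):
    """Approximately solve min ||u||_1 subject to A u = f (sparse recovery sketch)."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2            # conservative step size
    v = np.zeros(n)                                       # accumulated residual correlation
    u = np.zeros(n)
    for _ in range(iters):
        v += step * A.T @ (f - A @ u)                     # inverse-scale-space style update
        u = mu * np.sign(v) * np.maximum(np.abs(v) - 1.0, 0.0)   # soft shrinkage
    return u
```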
    Leaf floating simulation based on GAN
    FENG Qian-tai, YANG Meng, FU Hui
    2020, 41(1): 27-34.  DOI: 10.11996/JG.j.2095-302X.2020010027
    Inspired by progress in deep learning, we propose a method based on the generative
    adversarial network (GAN) to simulate falling leaves, in order to achieve realistic leaf floating
    in 3D rendering, virtual reality and other environments. Firstly, a training data set of
    six-element time series of leaf falling under different environmental conditions (wind level,
    disturbance level, etc.) is produced in the laboratory or in the field, and the dual-channel
    auxiliary classifier generative adversarial network (DACGAN) model is trained on this data set.
    Secondly, the trained model can output 3D leaf data according to specified conditions (wind level,
    disturbance level, etc.). Finally, the generated data can be used in a variety of graphics
    rendering environments. Compared with traditional mathematics-based control of leaf floating and
    keyframe-based hand animation, the experimental results indicate that our method makes it possible
    to capture the realism of falling leaves. In addition, as computer vision and deep learning
    develop, real-world 3D data becomes easier to acquire, the generative model can be further
    optimized, and the cost of DACGAN will decrease.
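    As a rough illustration of the conditional-generation idea only (not the paper's dual-channel DACGAN), here is a PyTorch sketch of a generator mapping noise plus wind and disturbance levels to a six-element-per-frame leaf trajectory. The network sizes, sequence length and condition encoding are assumptions.

```python
import torch
import torch.nn as nn

class LeafTrajectoryGenerator(nn.Module):
    def __init__(self, noise_dim=32, cond_dim=2, seq_len=120, state_dim=6):
        super().__init__()
        self.seq_len, self.state_dim = seq_len, state_dim
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, seq_len * state_dim),
        )

    def forward(self, noise, cond):
        # cond: (batch, 2) = [wind level, disturbance level]
        x = torch.cat([noise, cond], dim=1)
        return self.net(x).view(-1, self.seq_len, self.state_dim)

# Sample one trajectory for wind level 3, disturbance level 1.
gen = LeafTrajectoryGenerator()
traj = gen(torch.randn(1, 32), torch.tensor([[3.0, 1.0]]))
```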
    Physics-based algorithm for natural rime growth simulation
    LI Ye1, YANG Meng1, SHANGGUAN Da-yan2, YANG Gang1
    2020, 41(1): 35-43.  DOI: 10.11996/JG.j.2095-302X.2020010035
    In order to simulate the two main kinds of winter rime, soft rime and hard rime, a
    physics-based algorithm for rime growth simulation is proposed by analyzing the morphology of the
    two kinds of rime under different wind speeds and temperatures. The algorithm calculates the length
    of the rime using an icing-conductor model based on thermodynamic principles. Diffusion-limited
    aggregation is then introduced into the algorithm to simulate the morphology of soft rime, with
    each growing point of the rime acting as a condensation nucleus, and a segmentation method is used
    to simulate the needle-like rime morphology. A Perlin noise function, built from a stochastic noise
    function and a smoothing function, is used to simulate the wind field. The experimental results
    show that the method can simulate natural soft rime and hard rime realistically and efficiently.
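    A minimal sketch of plain 2-D diffusion-limited aggregation, the classic process the paper adapts for soft-rime morphology. The icing-conductor length model and the Perlin-noise wind field are omitted, and the grid size and particle count are illustrative.

```python
import numpy as np
rng = np.random.default_rng(0)

def grow_dla(size=101, n_particles=400):
    grid = np.zeros((size, size), dtype=bool)
    grid[size // 2, size // 2] = True                 # seed (condensation nucleus)
    for _ in range(n_particles):
        x, y = rng.integers(1, size - 1, 2)           # release a random walker
        while True:
            nx, ny = x + rng.integers(-1, 2), y + rng.integers(-1, 2)
            if not (0 < nx < size - 1 and 0 < ny < size - 1):
                x, y = rng.integers(1, size - 1, 2)   # walked off the domain: restart
                continue
            x, y = nx, ny
            if grid[x - 1:x + 2, y - 1:y + 2].any():  # touching the aggregate: stick
                grid[x, y] = True
                break
    return grid
```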
    Overviewing of visual analysis approaches for clustering high-dimensional data
    ZHANG Rong1, CHEN Yi1, ZHANG Meng-lu1, MENG Ke-xin2
    2020, 41(1): 44-56.  DOI: 10.11996/JG.j.2095-302X.2020010044
    Visual clustering analysis uses visualization and interaction techniques to help users analyze
    the clustering process and results from multiple perspectives, so as to find hidden structures and
    relationships within the original data. However, because of the “curse of dimensionality” of
    high-dimensional data, cluster analysis faces many challenges, such as setting the parameters of
    the clustering model, capturing data features, and interpreting and visualizing the results.
    Starting from the problems encountered in clustering high-dimensional data, this paper first
    summarizes the data processing methods commonly used in the clustering process and compares their
    performance. These methods largely alleviate the “curse of dimensionality” and help users explore
    the clustering patterns in the data. Then, since the clustering results obtained by different data
    processing methods call for different ways of analyzing and understanding the internal structure
    and rules hidden in the clusters, this paper surveys the available visual analysis approaches for
    clustering high-dimensional data and divides them into two categories: approaches based on
    dimensionality reduction and approaches based on subspace clustering. Finally, the current
    opportunities and challenges in this field are discussed.
    Virtual reality pottery modeling system based on leap motion
    LIN Ying-ying, CAI Rui-fan, ZHU Yu-zhen, TANG Xiang-jun, JIN Xiao-gang
    2020, 41(1): 57-65.  DOI: 10.11996/JG.j.2095-302X.2020010057
    The demanding conditions of pottery making restrict the spread of ceramic culture. To address
    this, an immersive pottery modelling system built on Leap Motion and HTC Vive is developed, which
    enables users to go through the real pottery production process with their bare hands, free from
    physical constraints. In terms of structure, the pottery is divided into three parts: the top, the
    side and the bottom. The side part derives from a simple generic homogeneous cylinder whose profile
    deformation is controlled by a Gaussian function. In terms of interaction, based on the real
    pottery-making scene as well as the characteristics of the pottery structure, the system designs a
    series of user-friendly hand gestures and designates their operating ranges. Combined with motion
    capture and virtual reality technology, this improves the sense of reality and immersion of the
    experience while keeping the pottery deformation controllable. To demonstrate the system's
    performance, this paper conducts a contrast experiment and proposes a target-practice variance
    computing method. In addition, a questionnaire covering four evaluation indicators (immersion,
    completion, operative difficulty and enjoyment) is carried out, verifying the system's superior
    user experience.
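    A small sketch of the Gaussian-controlled profile deformation idea for the side wall: the radius at each height is offset by a Gaussian bump centred on the hand-contact height, so a pinch or push deforms the wall smoothly. Parameter names and values are illustrative assumptions, not the system's actual implementation.

```python
import numpy as np

def deform_profile(heights, base_radius, touch_height, amount, sigma=0.05):
    """Return the new wall radius at each height of the generic cylinder.

    heights:      (N,) sample heights along the pot wall
    base_radius:  current radius at each height (or a scalar)
    touch_height: height where the hand touches the wall
    amount:       signed push (+ outward, - inward) applied at touch_height
    sigma:        spread of the Gaussian falloff
    """
    falloff = np.exp(-((heights - touch_height) ** 2) / (2.0 * sigma ** 2))
    return base_radius + amount * falloff

# Example: pinch the wall inward at 60% of the pot height.
z = np.linspace(0.0, 1.0, 200)
radii = deform_profile(z, base_radius=0.3, touch_height=0.6, amount=-0.05)
```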
    Non-touch heart rate estimation based on the low-rank and sparse matrix decomposition
    HUANG Ji-feng1, BAI Guo-chen2, XIONG Nai-xue2, WEI Jian-guo2
    2020, 41(1): 66-72.  DOI: 10.11996/JG.j.2095-302X.2020010066
    Heart rate, as a vital physiological parameter, plays an important role in medical care,
    criminal investigation, information security, etc. Studies in computer vision have shown that heart
    rate signals can be obtained from videos captured by an ordinary webcam. Current methods achieve
    relatively good results in ideal experimental environments, but their robustness is poor under
    natural conditions with head shaking, noise and shadows. In this study, we locate the region of
    interest by detecting face landmarks, which reduces the interference of detection errors caused by
    head shaking. Based on low-rank and sparse matrix decomposition, this paper proposes a non-touch
    heart rate estimation model that denoises the blood volume pulse (BVP) signal matrix in the
    frequency domain, so as to tackle the problems arising from capturing heart rate signals with
    cameras in a non-touch way. We tested the model on the MAHNOB-HCI dataset, and the results show
    that the proposed model outperforms existing methods, with a mean error ratio of 3.25%.
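    For context, a minimal sketch of generic low-rank plus sparse decomposition (robust PCA) via the inexact augmented Lagrange multiplier scheme, the kind of tool behind denoising a BVP signal matrix. The paper's rPPG pipeline (landmark-based ROI, frequency-domain processing) is not reproduced, and the parameter defaults are the usual textbook choices.

```python
import numpy as np

def rpca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose a float matrix M into low-rank L plus sparse S (M ~ L + S)."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))             # usual sparsity weight
    if mu is None:
        mu = 0.25 * m * n / np.abs(M).sum()        # usual penalty parameter
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(max_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # sparse update: elementwise soft thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y += mu * (M - L - S)                      # multiplier (dual) update
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S
```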
    Image dehazing combining dark channel prior and Hessian regular term
    GAO Zhu-zhu, WEI Wei-bo, PAN Zhen-kuan, ZHAO Hui
    2020, 41(1): 73-80.  DOI: 10.11996/JG.j.2095-302X.2020010073
    The contrast and visibility of outdoor images taken in hazy weather are seriously degraded.
    Current image dehazing methods generally hold that dehazing performance depends heavily on an
    accurate transmission map. The second-order Hessian regular term can preserve fine structures and
    suppress staircase artifacts, which helps to improve image contrast and visibility. Therefore, in
    this paper the dark channel prior method is first used to obtain the atmospheric light value and
    the initial transmission map, and a second-order variational model incorporating the Hessian
    regular term is then proposed to refine the initial transmission map and the dehazed image. To
    improve the efficiency of the proposed dehazing model, a corresponding alternating direction
    method of multipliers (ADMM) solver was designed: auxiliary variables are introduced and the
    Lagrangian multipliers are updated iteratively until the energy functional converges. Finally,
    simulation experiments were carried out on the foggy image database (LIVE Image Defogging) to test
    the proposed method. Visual and quantitative evaluation of the defogged images shows that the
    results produced by the proposed model are clear and natural, with well-preserved texture details.
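    For reference, a minimal sketch of the dark channel prior stage that supplies the atmospheric light and the initial transmission map, following the well-known He et al. recipe (the mean over the brightest dark-channel pixels is a common simplification). The Hessian-regularised refinement and its ADMM solver are not reproduced, and the patch size and omega are the customary defaults.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # img: (H, W, 3) float array in [0, 1]
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_atmospheric_light(img, dark, top=0.001):
    flat = dark.ravel()
    idx = np.argsort(flat)[-max(1, int(top * flat.size)):]   # brightest dark-channel pixels
    return img.reshape(-1, 3)[idx].mean(axis=0)

def initial_transmission(img, A, omega=0.95, patch=15):
    return 1.0 - omega * dark_channel(img / A, patch)

def dehaze(img, t0=0.1):
    dark = dark_channel(img)
    A = estimate_atmospheric_light(img, dark)
    t = initial_transmission(img, A)
    J = (img - A) / np.maximum(t, t0)[..., None] + A          # scene radiance recovery
    return np.clip(J, 0.0, 1.0)
```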
    Digital recognition method of bank card based on CNN
    LI Shang-lin, WANG Lu-da, LIU Dong
    2020, 41(1): 81-87.  DOI: 10.11996/JG.j.2095-302X.2020010081
    Because of the many interference factors when photographing a bank card, such as uncertain
    shooting angles, complex lighting conditions and diverse card backgrounds, bank card digit
    recognition in natural shooting scenes is very challenging. Therefore, a bank card recognition
    framework based on a convolutional neural network (CNN) is proposed. Firstly, the digit region of
    the target bank card is obtained through a series of image processing steps, such as projection
    correction, edge detection and morphological operations. Secondly, a convolutional neural network
    trained on an augmented dataset is applied to the extracted digit region by sliding-window
    recognition, and the initial card number sequence is output to generate a digit graph. Finally, a
    smoothing optimization algorithm is proposed that takes the initial card number graph as input and
    optimizes it; the digit sequence is then divided into individual digits and the final result is
    output. The experimental results show that the algorithm significantly improves the accuracy of
    bank card digit recognition and segmentation, and it remains robust for bank cards with more
    complex images.
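    A hypothetical sketch of the sliding-window pass and score smoothing: each window crop of the rectified digit strip is scored by a trained digit classifier (a stand-in callable here), and the per-position score curves are smoothed before the sequence is split into digits. Window width, stride and kernel size are illustrative, not the paper's values.

```python
import numpy as np

def sliding_window_scores(strip, classify, win_w=32, stride=4):
    """strip: (H, W) grayscale digit region; classify: crop -> (10,) class scores."""
    h, w = strip.shape
    positions, scores = [], []
    for x in range(0, w - win_w + 1, stride):
        crop = strip[:, x:x + win_w]
        scores.append(classify(crop))              # per-class confidence at this position
        positions.append(x)
    return np.array(positions), np.vstack(scores)

def smooth(scores, k=3):
    # moving-average smoothing of each class curve along the window-position axis
    kernel = np.ones(k) / k
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"),
                               axis=0, arr=scores)
```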
    Face image recognition based on basis function iteration of discrete cosine transform
    YU Wan-bo, WANG Xiang-xiang, WANG Da-qing
    2020, 41(1): 88-92.  DOI: 10.11996/JG.j.2095-302X.2020010088
    Research on image processing and recognition by means of non-linear chaotic methods is
    receiving increasing attention. In the existing literature, one method constructs a dynamic system
    from a sinusoidal auxiliary function and the image, and iteratively generates chaotic attractors as
    image features. To further explore the characteristics of such attractors as image features and
    improve recognition, this paper uses a discrete cosine transform (DCT) basis function matrix
    instead of a sine function to iteratively generate approximate chaotic attractors for face
    recognition. First, the diversity and oscillation of the DCT basis function matrix are analyzed.
    Then, the DCT basis function matrix and the image matrix are used to construct the iterative
    expression, and the proposed iterative algorithm generates the attractor. The attractor is
    transformed by the fast Fourier transform, correlation coefficients are computed, and the face
    image is recognized. On the Yalefaces database, when every image is used for training, the
    recognition rate reaches 100%; when only the first five images of each group are used to extract
    features, the recognition rate exceeds 85%. On the CMU PIE database, when every image is used for
    training, the recognition rate exceeds 99%. This attractor approach can serve as a method of
    low-level image feature extraction, which still needs further study.
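    A sketch of the surrounding machinery only: the orthonormal DCT-II basis matrix, the FFT-magnitude feature, and correlation-coefficient matching. The paper's actual iterative expression is not given here; `iterate_attractor` is a clearly labelled placeholder.

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (x + 0.5) * k / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C

def iterate_attractor(C, image, iters=50):
    # HYPOTHETICAL placeholder: the paper defines its own coupling of the DCT
    # basis matrix with the image; this stand-in only shows where the iteration sits.
    x = image.astype(float) / (np.abs(image).max() + 1e-12)
    for _ in range(iters):
        x = np.sin(np.pi * (C @ x))
    return x

def attractor_feature(image, iters=50):
    C = dct_basis(image.shape[0])
    a = iterate_attractor(C, image, iters)
    return np.abs(np.fft.fft2(a)).ravel()              # FFT-magnitude feature

def recognize(feature, gallery_features, labels):
    corrs = [np.corrcoef(feature, g)[0, 1] for g in gallery_features]
    return labels[int(np.argmax(corrs))]               # best-correlated training face
```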
    FocusNet: coarse-to-fine small object detection network
    ZHOU Li-wang1, PAN Tian-xiang1, YANG Ze-xi2, WANG Bin1
    2020, 41(1): 93-99.  DOI: 10.11996/JG.j.2095-302X.2020010093
    Object detection, one of the fundamental problems in deep learning, has been studied
    extensively. The self-service freezer is an important application of artificial intelligence in the
    retail industry: object detection is used to find goods in pictures captured by cameras inside the
    freezer, and tasks such as commodity classification follow. Due to hardware limitations, only fast
    but less accurate models can currently be applied in practice, and their detection accuracy is much
    worse for small objects. Exploiting the characteristics of data collected in self-service freezers,
    such as a uniform background and small object extent, a coarse-to-fine two-stage method called
    FocusNet is proposed, built on a previous mainstream one-stage detector, to tackle object detection
    under these special conditions. The experimental results show that FocusNet outperforms the
    previous method by about 8.3% on small objects and 3.5% overall.
    A CT image segmentation method for liver tumor by an improved FCN
    DUAN Jie1,2, CUI Zhi-ming1,2, SHEN Yi1,2, FENG Wei1, WU Hong-jie1, FENG Yu-qing1,2
    2020, 41(1): 100-107.  DOI: 10.11996/JG.j.2095-302X.2020010100
    Accurate medical image segmentation is a necessary step in assisting disease diagnosis and
    surgical planning. Automatic segmentation of liver tumors has always been difficult because of
    blurred borders and the low contrast of abdominal organs. Aiming at the low accuracy of the
    traditional fully convolutional network (FCN) in end-to-end segmentation, this paper proposes a CT
    image liver tumor segmentation method based on a convolutional multi-scale fusion FCN. Firstly, the
    original CT image dataset is preprocessed by contrast enhancement and denoising. Secondly, the
    designed FCN is trained on the processed dataset. Finally, a network model capable of accurately
    segmenting liver tumors is obtained. The experiments adopt a variety of evaluation indicators to
    assess the segmentation results and compare the method with several common segmentation networks.
    The experimental results show that the proposed method can accurately segment liver tumors of
    various shapes and sizes in CT images with good segmentation quality, providing reliable support
    for clinical diagnosis.
    Human body reconstruction based on improved piecewise hinge transformation
    ZHANG Xiao-meng, FANG Xian-yong, WANG Lin-bo, TIAN Li-li, SUN You-wei
    2020, 41(1): 108-115.  DOI: 10.11996/JG.j.2095-302X.2020010108
    Existing single-image human body modeling methods still cannot effectively handle complex
    occlusions of body parts caused by the arms, by clothes, or by changes of viewpoint. To solve this
    problem, using the distribution characteristics of the skeletal joints in the SMPL model, we design
    a human body reconstruction method that improves the traditional piecewise hinge transformation
    model. The method uses accurate annotations of the skeletal joints to identify the nodes of the
    model transformation and, combined with the image contour boundary constraint map, proposes a
    non-rigid registration method of forward piecewise regression and probability expectation
    minimization (FPR-PEM). Thin-plate splines are linearly interpolated at the deformed joints within
    the iterative model to keep the point cloud shape on the model surface independent, which
    effectively registers non-rigidly deformed models under various postures and better handles the
    reconstruction challenges brought by complex occlusion. Regression adjustments are then performed
    on the model posture to achieve accurate human body modeling. Experimental results show that the
    proposed method effectively builds a fine and smooth reconstructed human body model.
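    As a small illustration of the thin-plate-spline interpolation step mentioned above, here is generic SciPy usage with made-up control points and displacements; the FPR-PEM registration itself is not reproduced.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
ctrl_pts = rng.uniform(-1, 1, (20, 3))          # points sampled near a joint
ctrl_disp = 0.05 * rng.normal(size=(20, 3))     # their known displacements

# Thin-plate-spline interpolant of the displacement field.
tps = RBFInterpolator(ctrl_pts, ctrl_disp, kernel="thin_plate_spline")

query = rng.uniform(-1, 1, (100, 3))            # surface points to deform
deformed = query + tps(query)
```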
    Two level nested hybrid algorithm for solving JSSP with complex associated constraints
    LUO Ya-bo, YU Han-Lin
    2020, 41(1): 116-124.  DOI: 10.11996/JG.j.2095-302X.2020010116
    The job shop scheduling problem (JSSP) includes two coupled sub-problems, “equipment
    allocation” and “process sequencing”. Current research mainly focuses on small-scale problems with
    serial processing. The feasible domain becomes very complicated when there are complex associated
    constraints, such as parallel or even nested relationships among processes, and for large problems
    it is difficult even to obtain a feasible solution. To address these difficulties, building on the
    strengths of the genetic algorithm for “allocation” problems and of ant colony optimization for
    “sequencing” problems, a two-level nested model and its basic ideas are proposed. Through a series
    of improvement strategies, such as process-based integer encoding, multi-node crossover based on
    machine type, mutation by gene exchange within intervals based on equipment type, feasible path
    construction based on reverse traversal, and pheromone spreading and updating based on the shortest
    processing time, a two-level nested hybrid algorithm that integrates the genetic algorithm and ant
    colony optimization in the same loop is constructed. For medium-scale problems, comparative
    experiments are carried out with the genetic algorithm, the ant colony optimization algorithm, the
    two-level nested ant colony algorithm, and the proposed two-level nested hybrid algorithm combining
    the two. The experimental results verify the reliability and superiority of the proposed algorithm
    and provide a new idea and method for solving JSSP with complex associated constraints.
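    For orientation, a sketch of the classic operation-based integer encoding for JSSP together with a makespan decoder, i.e. the kind of representation such improvement strategies operate on. The two-level GA/ACO nesting and the complex associated constraints are not reproduced.

```python
def decode_makespan(chromosome, jobs):
    """jobs[j] = [(machine, proc_time), ...] in technological order.

    In the chromosome each job id appears once per operation; the k-th
    occurrence of job j means "schedule job j's k-th operation next".
    """
    n_machines = 1 + max(m for ops in jobs for m, _ in ops)
    job_ready = [0.0] * len(jobs)          # time each job's previous operation finishes
    mach_ready = [0.0] * n_machines        # time each machine becomes free
    next_op = [0] * len(jobs)              # index of each job's next operation
    for j in chromosome:
        machine, t = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready[machine])
        job_ready[j] = mach_ready[machine] = start + t
        next_op[j] += 1
    return max(job_ready)

# Toy 2-job, 2-machine instance; chromosome [0, 1, 1, 0] interleaves the jobs.
jobs = [[(0, 3), (1, 2)], [(1, 2), (0, 4)]]
print(decode_makespan([0, 1, 1, 0], jobs))   # -> 7.0
```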
    A lightweight strong physical unclonable function design based on LFSR
    HOU Shen1,2, GUO Yang1, LI Tun1, LI Shao-qing1
    2020, 41(1): 125-131.  DOI: 10.11996/JG.j.2095-302X.2020010125
    The physical unclonable function (PUF) is a new type of hardware security primitive. It
    protects chips from over-manufacturing and illegal copying, and can be implemented in ASICs and
    FPGAs. PUFs can be used for secure key generation and chip authentication. The strong PUF is an
    important category of PUF, with a challenge-response pair (CRP) space large enough for device
    identity authentication. As a classic strong PUF design, the arbiter PUF is expensive in hardware
    overhead and less than ideal in uniqueness, which makes it unsuitable for resource-constrained
    applications such as embedded systems and IoT devices. To decrease the hardware overhead, a new
    lightweight strong PUF design is proposed: a linear feedback shift register (LFSR) is used as
    obfuscating logic for a weak PUF to obtain a large number of responses. The structure is simple and
    easy to implement. The PUF design is implemented and evaluated on a 28 nm FPGA development board.
    The experimental results show that the randomness of the PUF is 49.8%, the uniqueness is 50.25%,
    and the hardware overhead is very small.
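    A minimal sketch of the LFSR-as-obfuscation idea: one external challenge is expanded into a stream of internal challenges that address a weak PUF, which is stood in here by a random lookup table. The register width, tap positions and weak-PUF model are illustrative assumptions, not the paper's design.

```python
import numpy as np

def lfsr_stream(seed, taps, width, n_steps):
    """Yield successive `width`-bit states of a Fibonacci LFSR seeded with `seed`."""
    state = seed & ((1 << width) - 1)
    for _ in range(n_steps):
        yield state
        fb = 0
        for t in taps:                       # XOR the tapped bits to form the feedback
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

rng = np.random.default_rng(7)
weak_puf = rng.integers(0, 2, size=1 << 16)  # stand-in for a 16-bit-addressable weak PUF

def strong_puf_response(challenge, n_bits=64):
    # The 16-bit LFSR expands one challenge into n_bits internal challenges.
    states = lfsr_stream(challenge, taps=(15, 13, 12, 10), width=16, n_steps=n_bits)
    return [int(weak_puf[s]) for s in states]

print(strong_puf_response(0xACE1)[:8])
```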
    A product conceptual design method based on experimental analysis and numerical simulation technology
    LI Tian-zeng, HUANG Hong-mei, CHEN Jia-jing, LAI Chun-min
    2020, 41(1): 132-140.  DOI: 10.11996/JG.j.2095-302X.2020010132
    In order to drive the formation and output of product conceptual design solutions intuitively
    and reasonably, a product conceptual design method based on experimental analysis and numerical
    simulation technology is proposed, making use of the complementary advantages of experimental tests
    and numerical simulation. Firstly, through preliminary investigation and analysis of the design
    object, an appropriate design focus is determined. Secondly, the functional characteristics of
    existing products are analyzed and summarized by means of experimental tests and numerical
    simulation, and the breakthrough point of the design is found. Then, the product mechanism is
    analyzed in depth and, on this basis, reasonable solutions are put forward and verified. Finally,
    by discussing the relationship between form and function, the final product form is determined and
    the product design concept is output. The comprehensive application of experimental and numerical
    analysis methods in product design forms a new, evidence-based design process, which ensures the
    scientific rationality of scheme generation and output and helps to improve product design
    efficiency and quality. The proposed method is exemplified by the design of a mini desk fan.
    Ankle-foot orthosis design based on parametric reverse modeling
    ZHANG Fang-lan, CHEN Rui-ying, SHAO Shuai, ZHANG Jun-yao, JI Wen-jia
    2020, 41(1): 141-147.  DOI: 10.11996/JG.j.2095-302X.2020010141
    In order to model the ankle-foot orthosis (AFO) quickly and accurately and to improve both the
    efficiency of individualized customization and the wearing comfort of the patient, a design method
    for the ankle-foot orthosis based on parametric reverse modeling is proposed. The method uses 3D
    scanning to obtain the original digital model of the foot and the calf, and the basic digital model
    is established through point cloud data processing and optimization. The personalized shape feature
    data points are extracted with Grasshopper to rebuild the digital model of the ankle-foot orthosis.
    A six-stage AFO parametric reverse modeling program is created, and customized AFO design can be
    realized simply by changing parameters. Finally, the feasibility of the proposed method is verified
    by a case of customized AFO design for a stroke patient.
    Spatial and semantic data integration method and application of BIM and GIS
    ZHAI Xiao-hui, SHI Jian-yong
    2020, 41(1): 148-157.  DOI: 10.11996/JG.j.2095-302X.2020010148
    With the development of digital city and smart city construction, the integration of building
    information modeling (BIM) and geographic information systems (GIS) has received wide academic
    attention. Current integration mainly focuses on converting geometric and semantic information
    between the two data standards, IFC and CityGML, but problems remain, such as data errors and loss,
    lack of geometric-semantic coherence, and poor application extensibility. This paper proposes a
    multi-scale 3D city data model that accounts for both building entities and geographic objects, and
    studies the extraction, processing and transformation of the spatial and semantic data of BIM and
    GIS. An integrated application framework is designed accordingly, verified, and preliminarily
    applied to a 3D visualization platform. The approach achieves a full fusion of BIM and GIS
    information in terms of geometry, semantics and precision, avoids the information loss caused by
    traditional data transformation, supports multilevel storage, loading and display of multi-scale
    spatial and semantic data, and helps to achieve efficient, high-accuracy integration of large-scale
    building and city data.
    Quality guarantee strategy of effective teaching from the perspective of teachers
    ZHANG Xiu-fen1, YU Gang2, HU Zhi-yong1, XUE Jun-fang1, QIAN Shao-hua1
    2020, 41(1): 158-163.  DOI: 10.11996/JG.j.2095-302X.2020010158
    In order to overcome the lack of universal adaptability in existing quality guarantee systems
    for curriculum teaching, this paper examines the current state of curriculum teaching and explores
    a curriculum quality guarantee strategy based on effective teaching theory. Through a literature
    review and data analysis of the factors influencing curriculum teaching quality, seven major
    influencing factors centering on teachers and students were sorted out. A two-stage hierarchical
    quality guarantee model for curriculum teaching was then established on the basis of effective
    teaching theory; the model has the advantage of supporting self-evaluation and continuous
    improvement. Furthermore, a teaching quality self-assessment form for teachers was designed to
    realize the internal evaluation and continuous improvement of curriculum teaching quality, and a
    formula for the degree of correlation between evaluation indexes and influencing factors was
    established. In addition, based on curriculum teaching reform practice, four quality guarantee
    strategies were proposed from the perspective of teachers: elaborate course teaching design,
    flexible teaching modes, building a community of interest between teachers and students, and giving
    full play to the role of model teachers. Finally, this paper analyzes the effect of the proposed
    strategies in the teaching practice of the postgraduate courses Computer Graphics and Digital Rapid
    Prototyping, comparing the teaching effects of the classes of 2017 and 2018. The excellence rates
    of the final grades in the classes of 2017 and 2018 are 5% and 28% respectively, which shows that
    the teaching effect of the class of 2018, after the reform, improved greatly. The proposed approach
    therefore provides implications for implementing curriculum teaching quality guarantee strategies
    in colleges and universities.
    Course reform of engineering graphics against the background of emerging engineering education
    LUAN Ying-yan, WANG Ying, HE Rui
    2020, 41(1): 164-168.  DOI: 10.11996/JG.j.2095-302X.2020010164
    Based on the background and goals of emerging engineering education, a new concept of talent
    cultivation and a new structure of teaching design are established, new teaching methods and new
    standards of teaching evaluation are adopted, and a new curriculum structure is implemented. Driven
    by the effect-oriented requirement of cultivating innovative abilities, a curriculum reform program
    and “supply side” teaching reform are carried out by establishing a teaching model that emphasizes
    the cultivation of students' innovative abilities and discipline integration. The teaching effect
    is improved by adjusting the teaching plan, reforming the course content, and establishing matched
    teaching practice. The basic framework of the teaching reform plan is expounded and analyzed in
    terms of the reform of the teaching content system, the teaching methods, and teaching reform
    cases. Compared with the traditional teaching method in terms of learning effects and students'
    recognition, the feasibility and effectiveness of the teaching mode for cultivating innovative
    ability are discussed.