Research Article
Innovative Practice and Transformation of Python Programming Course Empowered by AI Large Models
Xiaoxuan Wu*,
Zhize Wu,
Zhengmao Li
Issue:
Volume 13, Issue 2, April 2025
Pages:
11-15
Received:
20 March 2025
Accepted:
8 April 2025
Published:
14 April 2025
Abstract: This paper examines how AI large models can improve the teaching quality and learning outcomes of Python programming courses. Starting from the fundamentals of AI large models and the current state of Python programming instruction, it explores innovative practices for applying large models across all aspects of course teaching (syntax knowledge, application scenarios, and hands-on practice), including model-assisted lectures, project-driven learning as the core, and a real-time feedback and evaluation system. These practices aim to optimize programming exercises, stimulate student creativity, and offer new ideas and a practical reference for the reform of programming courses in colleges and universities. At the same time, the challenges that large models pose to teaching assistance are examined dialectically, and reasonable, scientifically grounded suggestions are given for cultivating versatile, innovative talents with both deep technical skills and a broad knowledge base.
Research Article
Mathematical Modeling and Empirical Analysis of Multi-source Teaching Evaluation Data
Yingcan Wang,
Jianxin Zhao*
Issue:
Volume 13, Issue 2, April 2025
Pages:
16-20
Received:
21 March 2025
Accepted:
19 April 2025
Published:
29 April 2025
DOI:
10.11648/j.si.20251302.12
Abstract: To address the inconsistency between student evaluation satisfaction data and supervisory evaluation conclusions in university teaching assessments, this study introduces structured data such as class size and course intensity to establish a "normalization-entropy weight-TOPSIS" evaluation methodology. Using a semester's teaching evaluation dataset from the Naval Submarine Academy as an empirical testbed, we conducted a comparative analysis of four data preprocessing strategies: 1) RobustScaler enhanced with a Sigmoid transformation, 2) conventional RobustScaler, 3) Z-score standardization, and 4) Min-Max normalization. The experimental design evaluated each method's capacity to harmonize student feedback with expert evaluations through correlation analysis and distribution-pattern verification. The empirical validation shows that, within the proposed framework, assessment scores derived from data preprocessed by the Sigmoid-enhanced RobustScaler achieved the highest correlation with supervisory ratings, improving the correlation coefficient by 19.0% over traditional methods that rely solely on student satisfaction rates. The proposed approach thus performs well in this practical scenario, effectively resolving the contradiction inherent in single-source evaluation systems that rely exclusively on student feedback.
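To make the pipeline concrete, the following is a minimal sketch of the "normalization-entropy weight-TOPSIS" workflow the abstract describes, assuming standard formulations of the Sigmoid-enhanced RobustScaler, the entropy-weight method, and TOPSIS. All function names, the toy data, and the choice of indicators are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the "normalization-entropy weight-TOPSIS" pipeline (assumed details).
import numpy as np

def sigmoid_robust_scale(X):
    """Strategy 1: RobustScaler-style scaling (median / IQR) squashed through
    a sigmoid so outliers land smoothly inside (0, 1). Illustrative assumption."""
    median = np.median(X, axis=0)
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    iqr = np.where(q3 - q1 == 0, 1.0, q3 - q1)   # guard against zero spread
    return 1.0 / (1.0 + np.exp(-(X - median) / iqr))

def entropy_weights(X):
    """Entropy-weight method: indicators with greater dispersion (lower entropy)
    receive larger weights. Expects X already normalized to positive values."""
    P = X / X.sum(axis=0)                        # column-wise proportions
    P = np.clip(P, 1e-12, None)                  # avoid log(0)
    entropy = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - entropy                            # degree of diversification
    return d / d.sum()

def topsis_scores(X, weights):
    """TOPSIS closeness to the ideal solution, assuming all indicators are
    benefit-type (larger is better)."""
    V = (X / np.sqrt((X ** 2).sum(axis=0))) * weights  # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)               # closeness coefficient per course

# Toy data: rows = courses, columns = hypothetical multi-source indicators
# (student satisfaction, class size, course intensity).
rng = np.random.default_rng(0)
data = rng.uniform(60, 100, size=(8, 3))
normalized = sigmoid_robust_scale(data)
scores = topsis_scores(normalized, entropy_weights(normalized))
print(np.argsort(scores)[::-1])                  # course ranking, best first
```

Swapping `sigmoid_robust_scale` for plain RobustScaler, Z-score, or Min-Max scaling reproduces the four-way comparison; the paper's finding is that the sigmoid-enhanced variant yields scores most correlated with supervisory ratings.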