A Brief Study of The Use of Pattern Recognition in Online Learning: Recommendation for Assessing Teaching Skills Automatically Online Based
Rudy Hartanto, Gadjah Mada University
Indah Soesanti, Gadjah Mada University
Abstract
Online learning has become the norm for a generation of students raised with advanced information and communication technology, and smart education can draw on pattern recognition to support it. Manual assessment of teaching skills is subjective and inconsistent; pattern recognition can address these problems in a system for assessing the non-verbal aspects of teaching. This study reviews pattern recognition in online learning in terms of its functions, modalities, and algorithms, with particular attention to the recognition of the non-verbal aspects of teaching skills. The literature study was carried out through the stages of planning, selection, extraction, and execution, and 86 articles were reviewed. The first finding is that pattern recognition in online learning serves functions such as engagement recognition, attention detection, emotion recognition, learning behavior and learning activity recognition, authentication, and teaching training, drawing on four modality classes (visual, audio, biosignal, and behavioral), with the convolutional neural network (CNN) as the most widely used learning algorithm. Secondly, all modalities except the behavioral one, together with the CNN algorithm, can be used for assessing teaching skills. Early development of the non-verbal aspect assessment system can start with Facial Expression Recognition (FER) and Hand Gesture Recognition (HGR). Future analysis needs to focus on the characteristics of the technology, the meaningfulness of the content, and the appropriate teaching mode. Ultimately, the hope is that prospective teachers will have technology that makes it easier for them to practice teaching and to receive objective assessments.
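As a concrete illustration of the recommended starting point, the sketch below shows how a CNN-based facial expression recognizer of the kind surveyed here might be structured. It is a minimal sketch in PyTorch under assumed settings (48x48 grayscale face crops in the style of FER-2013 and seven emotion classes); the architecture and labels are illustrative and do not reproduce any specific model from the reviewed studies.

```python
# Minimal sketch of a CNN facial-expression classifier (PyTorch).
# Input size, depth, and emotion labels are illustrative assumptions,
# not the specific models evaluated in the reviewed papers.
import torch
import torch.nn as nn

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

class SimpleFERNet(nn.Module):
    def __init__(self, num_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = SimpleFERNet()
    faces = torch.randn(4, 1, 48, 48)       # a batch of cropped, grayscale face images
    probs = model(faces).softmax(dim=1)     # per-class emotion probabilities
    print(probs.argmax(dim=1))              # predicted emotion index per face
```

In an assessment setting, per-frame predictions of this kind would feed a higher-level, objective score of the non-verbal aspects of a teaching session.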
References
S. Phuyal, D. Bista, and R. Bista, “Challenges, Opportunities and Future Directions of Smart Manufacturing: A State of Art Review,” Sustain. Futur., vol. 2, p. 100023, 2020.
M. Fukuyama, “Society 5.0: Aiming for a New Human-Centered Society,” Japan SPOTLIGHT, Special Article, pp. 47–50, 2018.
G. Elhussein, T. A. Leopold, and S. Zahidi, “Schools of the Future: Defining New Models of Education for the Fourth Industrial Revolution,” Cologny/Geneva, 2020.
M. E. Gladden, “Who Will Be the Members of Society 5.0? Towards an Anthropology of Technologically Posthumanized Future Societies,” Soc. Sci., vol. 8, no. 5, p. 148, May 2019.
Keidanren, “Toward realization of the new economy and society.” Japan, 2016.
A. J. Balloni and P. H. de Souza Bermejo, “Governance, Sociotechnical Systems and Knowledge Society: Challenges and Reflections,” in International Conference on ENTERprise Information Systems, 2010, pp. 42–51.
M. Aparicio, F. Bacao, and T. Oliveira, “Trends in the e-Learning Ecosystem: A Bibliometric Study,” in AMCIS 2014 Proceedings, 2014, pp. 1–11.
S. Mokhtar, J. A. Q. Alshboul, and G. O. A. Shahin, “Towards Data-driven Education with Learning Analytics for Educator 4.0,” in Journal of Physics: Conference Series, 2019, vol. 1339, p. 012079.
M. Maria, F. Shahbodin, and N. C. Pee, “Malaysian higher education system towards industry 4.0 – Current trends overview,” in Proceedings of The 3rd International Conference on Applied Science And Technology (ICAST’18), 2018, p. 020081.
S. Y. Tan, D. Al-Jumeily, J. Mustafina, A. Hussain, A. Broderick, and H. Forsyth, “Rethinking Our Education to Face the New Industry Era,” in 10th International Conference on Education and New Learning Technologies, 2018, pp. 6562–6571.
C. Sima, “Generations BB, X, Y, Z, α - the changing consumer in the hospitality industry,” in The Routledge Handbook of Hotel Chain Management, M. Ivanova, S. Ivanov, and V. P. Magnini, Eds. Oxon: Routledge, 2016, pp. 471–479.
M. Cristea, G. G. Noja, D. E. Dănăcică, and P. Ştefea, “Population ageing, labour productivity and economic welfare in the European Union,” Econ. Res. Istraživanja, vol. 33, no. 1, pp. 1354–1376, Jan. 2020.
M. Hen and M. Goroshit, “Social–emotional competencies among teachers: An examination of interrelationships,” Cogent Educ., vol. 3, no. 1, pp. 1–9, Feb. 2016.
F. Bambaeeroo and N. Shokrpour, “The impact of the teachers’ non-verbal communication on success in teaching,” J. Adv. Med. Educ. Prof., vol. 5, no. 2, pp. 51–59, 2017.
R. W. Kaps and J. K. Voges, “Nonverbal Communications : A Commentary on Body Language in the Aviation Teaching Environment,” J. Aviat. Educ. Res., vol. 17, no. 1, pp. 43–52, 2007.
Akinola, “The use of non-verbal communication in the teaching of English language,” J. Adv. Linguist., vol. 4, no. 3, pp. 428–433, 2014.
H. Fawad and I. A. Manarvi, “Student feedback & systematic evaluation of teaching and its correlation to learning theories, Pedagogy & Teaching skills,” in Proceedings of 2014 IEEE International Conference of Teaching, Assessment and Learning (TALE), 2014, pp. 398–404.
M. Sulaiman, Z. H. Ismail, A. A. Aziz, and A. Zaharim, “Lesson study: Assessing pre-service teacher’s performance of teaching chemistry,” in 2011 3rd International Congress on Engineering Education (ICEED), 2011, pp. 208–213.
R. Barmaki and C. E. Hughes, “Embodiment analytics of practicing teachers in a virtual immersive environment,” J. Comput. Assist. Learn., vol. 34, no. 4, pp. 387–396, Aug. 2018.
C. Okoli, “A Guide to Conducting a Standalone Systematic Literature Review,” Commun. Assoc. Inf. Syst., vol. 37, no. 43, pp. 879–910, 2015.
O. Zawacki-Richter, V. I. Marín, M. Bond, and F. Gouverneur, “Systematic review of research on artificial intelligence applications in higher education – where are the educators?,” Int. J. Educ. Technol. High. Educ., vol. 16, no. 1, p. 39, Dec. 2019.
N. J. van Eck and L. Waltman, “VOSviewer: Visualizing scientific landscapes.” Universiteit Leiden and CWTS, 2020.
J. Yi, B. Sheng, R. Shen, W. Lin, and E. Wu, “Real Time Learning Evaluation Based on Gaze Tracking,” in 14th International Conference on Computer-Aided Design and Computer Graphics, 2015, pp. 157–164.
K. O. Bailey, J. S. Okolica, and G. L. Peterson, “User identification and authentication using multi-modal behavioral biometrics,” Comput. Secur., vol. 43, pp. 77–89, 2014.
X. Song, S. Zhu, Y. Wei, Q. Sun, and L. Zhao, “Research on Information Engineering with Information Technology for E-Learning Based on Face Recognition,” Adv. Mater. Res., vol. 977, pp. 460–463, 2014.
Z. Zainuddin and A. S. Laswi, “Implementation of The LDA Algorithm for Online Validation Based on Face Recognition,” in Journal of Physics: Conference Series. International Conference on Computing and Applied Informatics 2016, 2017, pp. 1–6.
R. Sivakumar and R. Sivakumar, “A Convolutional Neural Network based Fingerprint and Face Biometric Multi-modal for Educational Authentication System,” Int. J. Adv. Sci. Technol., vol. 29, no. 3, pp. 3486–3497, 2020.
O. Ruiz-Vivanco, A. Gonzalez-Eras, J. Cordero, and L. Barba-Guaman, “Monitoring for the Evaluation Process On-Line Prototype Based on OpenFace Algorithm,” in Technology Trends. CITT 2018. Communications in Computer and Information Science, 2019, pp. 500–509.
R. Tsankova, O. Marinov, M. Durcheva, and E. Varbanova, “Synergy Effect of the TeSLA Project in Management of Engineering Higher Education,” in Proceedings of the 9th International Conference on Management of Digital EcoSystems, 2017, pp. 259–264.
B. Akhmetov, I. Tereykovsky, A. Doszhanova, and L. Tereykovskaya, “Determination of Input Parameters of the Neural Network Model, Intended for Phoneme Recognition of a Voice Signal in the Systems of Distance Learning,” Int. J. Electron. Telecommun., vol. 64, no. 4, pp. 425–432, 2018.
N. A. Mahadi, M. A. Mohamed, A. I. Mohamad, M. Makhtar, M. F. A. Kadir, and M. Mamat, “A Survey of Machine Learning Techniques for Behavioral-Based Biometric User Authentication,” in Recent Advances in Cryptography and Network Security, InTech, 2018, pp. 43–59.
M. Murshed, M. A. A. Dewan, F. Lin, and D. Wen, “Engagement Detection in e-Learning Environments using Convolutional Neural Networks,” in 2019 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech), 2019, pp. 80–86.
T. Huang, Y. Mei, H. Zhang, S. Liu, and H. Yang, “Fine-grained Engagement Recognition in Online Learning Environment,” in 2019 IEEE 9th International Conference on Electronics Information and Emergency Communication (ICEIEC), 2019, pp. 338–341.
L. Geng, M. Xu, Z. Wei, and X. Zhou, “Learning Deep Spatiotemporal Feature for Engagement Recognition of Online Courses,” in 2019 IEEE Symposium Series on Computational Intelligence (SSCI), 2019, pp. 442–447.
H. Zhang, X. Xiao, T. Huang, S. Liu, Y. Xia, and J. Li, “An Novel End-to-end Network for Automatic Student Engagement Recognition,” in 2019 IEEE 9th International Conference on Electronics Information and Emergency Communication (ICEIEC), 2019, pp. 342–345.
M. A. A. Dewan, F. Lin, D. Wen, M. Murshed, and Z. Uddin, “A Deep Learning Approach to Detecting Engagement of Online Learners,” in 2018 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), 2018, pp. 1895–1902.
G. M. Notaro and S. G. Diamond, “Development and Demonstration of an Integrated EEG, Eye-Tracking, and Behavioral Data Acquisition System to Assess Online Learning,” in ICETC ’18 Proceedings of the 10th International Conference on Education Technology and Computers, 2018, pp. 105–111.
L. Stanca, R. Lacurezeanu, and C. Felea, “Analysis for Visualizing Actors’ Experiences in E-learning Spaces to Create an Active Learner’s Profile,” in 2019 6th International Conference on Control, Decision and Information Technologies (CoDIT), 2019, pp. 1882–1888.
D. Canedo, A. Trifan, and A. J. R. Neves, “Monitoring Students’ Attention in a Classroom Through Computer Vision,” in Highlights of Practical Applications of Agents, Multi-Agent Systems, and Complexity: The PAAMS Collection, 2018, pp. 371–378.
A. Edwards et al., “Sensor-based Methodological Observations for Studying Online Learning,” in SmartLearn’17, 2017, pp. 25–30.
J. Wang, P. Antonenko, M. Celepkolu, Y. Jimenez, E. Fieldman, and A. Fieldman, “Exploring Relationships Between Eye Tracking and Traditional Usability Testing Data,” Int. J. Human–Computer Interact., vol. 35, no. 6, pp. 483–494, Apr. 2019.
M. Fahimipirehgalin, F. Loch, and B. Vogel-Heuser, “Using Eye Tracking to Assess User Behavior in Virtual Training,” in Intelligent Human Systems Integration, 2020, pp. 341–347.
T. Robal, Y. Zhao, C. Lofi, and C. Hauff, “Webcam-based Attention Tracking in Online Learning: A Feasibility Study,” in IUI 2018, 2018, pp. 189–197.
H. Chen, M. Yan, S. Liu, and B. Jiang, “Gaze inspired subtitle position evaluation for MOOCs videos,” in Proceedings of SPIE - The International Society for Optical Engineering, 2017, vol. 10443, p. 1044318.
Y. Uğurlu, “User Attention Analysis for E-learning Systems Using Gaze and Speech Information,” in Information Science, Electronics and Electrical Engineering (ISEEE), 2014.
U. Stickler and L. Shi, “Eyetracking methodology in SCMC : A tool for empowering learning and teaching,” ReCALL, vol. 29, no. 2, pp. 160–177, 2017.
B. Hu, X. Li, S. Sun, and M. Ratcliffe, “Attention Recognition in EEG-Based Affective Learning Research Using CFS+KNN Algorithm,” IEEE/ACM Trans. Comput. Biol. Bioinforma., vol. 15, no. 1, pp. 38–45, Jan. 2018.
A. Belle, R. Hobson, and K. Najarian, “A physiological signal processing system for optimal engagement and attention detection,” in 2011 IEEE International Conference on Bioinformatics and Biomedicine Workshops (BIBMW), 2011, pp. 555–561.
M. R. D. da Silva, M. Postma-Nilsenova, and F. Hermens, “Wandering Mice, Wandering Minds: Using Computer Mouse Tracking to Predict Mind Wandering,” in 22nd International Conference on Circuits, Systems, Communications and Computers (CSCC 2018), 2018.
A. Elbahi and M. N. Omri, “Conditional Random Fields For Web User Task Recognition Based On Human Computer Interaction,” in WSCG 2015. 23rd International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, 2015.
N. Gerard et al., “Detection of Subject Attention in an Active Environment Through Facial Expressions Using Deep Learning Techniques and Computer Vision,” in Advances in Neuroergonomics and Cognitive Engineering. AHFE 2020. Advances in Intelligent Systems and Computing, vol 1201, Switzerland: Springer, Cham, 2021, pp. 326–332.
J. Chen, N. Luo, Y. Liu, L. Liu, K. Zhang, and J. Kolodziej, “A hybrid intelligence-aided approach to affect-sensitive e-learning,” Computing, vol. 98, no. 1, pp. 215–233, 2016.
Y. Te Ku, H. Y. Yu, and Y. C. Chou, “A Classroom Atmosphere Management System for Analyzing Human Behaviors in Class Activities,” in 1st International Conference on Artificial Intelligence in Information and Communication, ICAIIC 2019, 2019, pp. 224–231.
B. E. Zakka and H. Vadapalli, “Detecting Learning Affect in E-Learning Platform Using Facial Emotion Expression,” in Proceedings of the 11th International Conference on Soft Computing and Pattern Recognition (SoCPaR 2019), 2021, pp. 217–225.
Y. Ling, SmartPlayer: Inferring Learners’ Emotions While They Are Watching Videos. Springer Singapore, 2019.
J. Xu, Z. Huang, M. Shi, and M. Jiang, “Emotion Detection in E-learning Using Expectation-Maximization Deep Spatial-Temporal Inference Network,” in Advances in Computational Intelligence Systems. UKCI 2017. Advances in Intelligent Systems and Computing, vol 650, 2018, pp. 245–252.
A. Sun, Y.-J. Li, Y.-M. Huang, and Q. Li, “Using Facial Expression to Detect Emotion in E-learning System: A Deep Learning Method,” in Emerging Technologies for Education. SETE 2017. Lecture Notes in Computer Science, vol 10676, 2017, pp. 446–455.
B. Xu, X. Li, H. Liang, and Y. Li, “Research on professional talent training technology based on multimedia remote image analysis,” EURASIP J. Image Video Process., vol. 39, pp. 1–9, Dec. 2019.
G. Li and Y. Wang, “Research on learner’s emotion recognition for intelligent education system,” in IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), 2018, pp. 754–758.
D. Yang, A. Alsadoon, P. W. C. Prasad, A. K. Singh, and A. Elchouemi, “An Emotion Recognition Model Based on Facial Recognition in Virtual Learning Environment,” in Procedia Computer Science, 2018, vol. 125, pp. 2–10.
C. Bian, Y. Zhang, F. Yang, W. Bi, and W. Lu, “Spontaneous facial expression database for academic emotion inference in online learning,” IET Comput. Vis., vol. 13, no. 3, pp. 329–337, 2019.
F. D’Errico, M. Paciello, B. De Carolis, A. Vattanid, G. Palestra, and G. Anzivino, “Cognitive Emotions in E-Learning Processes and Their Potential Relationship with Students’ Academic Adjustment,” Int. J. Emot. Educ., vol. 10, no. 1, pp. 89–111, 2018.
C. Ma, C. Sun, D. Song, X. Li, and H. Xu, “A Deep Learning Approach for Online Learning Emotion Recognition,” in 13th International Conference on Computer Science & Education (ICCSE), 2018, pp. 1–5.
B. D. Miljković, A. V. Petojević, and M. R. Žižović, “On-line Student Emotion Monitoring as a Model of Increasing Distance Learning Systems Efficiency,” New Educ. Rev., vol. 47, no. 2, pp. 225–240, 2017.
L. Jie, Z. Xiaoyan, and Z. Zhaohui, “Speech Emotion Recognition of Teachers in Classroom Teaching,” in 2020 Chinese Control And Decision Conference (CCDC), 2020, pp. 5045–5050.
Z. Liu, W. Zhang, J. Sun, H. N. H. Cheng, X. Peng, and S. Liu, “Emotion and associated topic detection for course comments in a MOOC platform,” in International Conference on Educational Innovation through Technology, 2016, pp. 15–19.
X. Feng, Y. Wei, X. Pan, L. Qiu, and Y. Ma, “Academic Emotion Classification and Recognition Method for Large-scale Online Learning Environment—Based on A-CNN and LSTM-ATT Deep Learning Pipeline Method,” Int. J. Environ. Res. Public Health, vol. 17, no. 6, p. 1941, Mar. 2020.
L. Shen, V. Callaghan, and R. Shen, “Affective e-Learning in residential and pervasive computing environments,” Inf. Syst. Front., vol. 10, no. 4, pp. 461–472, Sep. 2008.
M. Soltani and H. Zarzour, “Facial Emotion Detection in Massive Open Online Courses,” in Trends and Advances in Information Systems and Technologies. WorldCIST’18 2018. Advances in Intelligent Systems and Computing, 2018, vol. 745, pp. 277–286.
S. Park and J. Ryu, “Exploring Preservice Teachers’ Emotional Experiences in an Immersive Virtual Teaching Simulation through Facial Expression Recognition,” Int. J. Human–Computer Interact., vol. 35, no. 6, pp. 521–533, Apr. 2019.
H. Abe, T. Kamizono, K. Kinoshita, K. Baba, S. Takano, and K. Murakami, “Towards activity recognition of learners in on-line lecture,” J. Mob. Multimed., vol. 11, no. 3–4, pp. 205–212, 2015.
L. Catrysse, D. Gijbels, and V. Donche, “It is not only about the depth of processing: What if eye am not interested in the text?,” Learn. Instr., vol. 58, pp. 284–294, 2018.
F. Novita Sari, P. Insap Santosa, and S. Wibirama, “Comparison expert and novice scan behavior for using e-learning,” in Proc. SPIE 10443, Second International Workshop on Pattern Recognition, 104430E, 2017.
B. V. Thiyagarajan, “Revolution in teaching/MOOCs by the use of real-time face detection,” in 2015 IEEE International Conference on Electrical, Computer and Communication Technologies (ICECCT), 2015, pp. 1–7.
Z. Zhan, L. Zhang, H. Mei, and P. S. W. Fong, “Online Learners’ Reading Ability Detection Based on Eye-Tracking Sensors,” Sensors, vol. 16, no. 1457, pp. 1–17, 2016.
R. Moro and M. Bielikova, “Utilizing Gaze Data in Learning: From Reading Patterns Detection to Personalization,” in Late-breaking Results and Workshop Proceedings of the 23rd Conference on User Modeling, Adaptation, and Personalization, 2015.
B. The and M. Mavrikis, “A Study On Eye Fixation Patterns of Students in Higher Education Using an Online Learning System,” in ACM Proceeding of the Sixth International Conference on Learning Analytics & Knowledge, 2016, pp. 408–416.
Y. Bandung, K. Mutijarsa, and L. B. Subekti, “Design and Implementation of Video Conference System with Object Tracking for Distance Learning,” pp. 6–11, 2017.
B. Edwiranda, B. C. Purba, and Y. Bandung, “Design and Implementation of Real-time Object Tracking System based on Viola-Jones Algorithm for Supporting Video Conference,” 2018 12th Int. Conf. Telecommun. Syst. Serv. Appl., pp. 1–6, 2018.
C. Lwande, L. Muchemi, and R. Oboko, “Behaviour Prediction in a Learning Management System,” in 2019 IST-Africa Week Conference (IST-Africa), 2019, pp. 1–10.
B. Wu and J. Xiao, “Mining online learner profile through learning behavior analysis,” in Proceedings of the 10th International Conference on Education Technology and Computers - ICETC ’18, 2018, pp. 112–118.
S. Fatahi, H. Moradi, and E. Farmad, “Behavioral Feature Extraction to Determine Learning Styles in E-Learning Environments,” in International Conference e-Learning, 2015, pp. 66–72.
Q. T. Nguyen, H. Tieu Binh, T. D. Bui, and P. D. N.T., “Student postures and gestures recognition system for adaptive learning improvement,” in 2019 6th NAFOSTED Conference on Information and Computer Science (NICS), 2019, pp. 494–499.
T. Xu, Z. Feng, W. Zhang, X. Yang, and P. Yu, “Depth based Hand Gesture Recognition for Smart Teaching,” 2018 Int. Conf. Secur. Pattern Anal. Cybern. SPAC 2018, pp. 387–390, 2018.
J. Wang, T. Liu, and X. Wang, “Human hand gesture recognition with convolutional neural networks for K-12 double-teachers instruction mode classroom,” Infrared Phys. Technol., vol. 111, p. 103464, Dec. 2020.
P. B. Santos, C. V. Wahle, and I. Gurevych, “Using Facial Expressions of Students for Detecting Levels of Intrinsic Motivation,” 2018 IEEE 14th Int. Conf. e-Science, pp. 323–324, 2018.
P. Utami, F. Pahlevi, D. Santoso, N. Fajaryati, B. Destiana, and M. Ismail, “Android-based applications on teaching skills based on TPACK analysis,” in IOP Conference Series: Materials Science and Engineering, 2019, vol. 535, p. 012009.
R. Barmaki and C. Hughes, “Gesturing and Embodiment in Teaching: Investigating the Nonverbal Behavior of Teachers in a Virtual Rehearsal Environment,” in The Eighth AAAI Symposium on Educational Advances in Artificial Intelligence 2018 (EAAI-18) Gesturing, 2018, pp. 7893–7899.
Y. J. Lee, L.-Y. Chen, and M.-H. Lee, “Developing an Online System to Evaluate EFL College Students’ Voice Performance in English Public Speaking,” Int. J. Technol. Learn., vol. 23, no. 2, pp. 1–13, 2016.
E. Hettiarachchi, E. Mor, M. A. Huertas, and A.-E. Guerrero-Roldan, “Introducing a Formative E-Assessment System to Improve Online Learning Experience and Performance,” J. Univers. Comput. Sci., vol. 21, no. 8, pp. 1001–1021, 2015.
P. Utami, G. P. Cikarge, M. E. Ismail, and S. Hashim, “Teaching Aids in Digital Electronics Practice through Integrating 21st Century Learning Skills using a conceptual approach,” in Journal of Physics: Conf. Series, pp. 1–9.
P. Utami, R. Hartanto, and I. Soesanti, “A Study on Facial Expression Recognition in Assessing Teaching Skills: Datasets and Methods,” in Procedia Computer Science, 2019, vol. 161, pp. 544–552.
Y. Wang, Q. Liu, W. Chen, Q. Wang, and D. Stein, “Effects of instructor’s facial expressions on students’ learning with video lectures,” Br. J. Educ. Technol., vol. 00, no. 00, pp. 1–15, 2018.
S. Li, J. Zheng, and Y. Zheng, “Towards a new approach to managing teacher online learning: Learning communities as activity systems,” Soc. Sci. J., 2019.
P. Mishra and M. J. Koehler, “Technological Pedagogical Content Knowledge: A Framework for Teacher Knowledge,” Teach. Coll. Rec., vol. 108, no. 6, pp. 1017–1054, 2006.
S. Baddeley, “TPACK is missing an ‘E,’” Edtech in The Classroom, 2020. [Online]. Available: https://simonbaddeley64.wordpress.com/2020/01/15/tpack-is-missing-an-e/. [Accessed: 20-Sep-2020].
P. Sudira, “Model Pembelajaran Vokasional W2CPATK,” in Pembelajaran Vokasional Era Revolusi Industri 4.0, Yogyakarta: UNY Press, 2020, p. 271.
H. Tanaka, S. Sakti, G. Neubig, T. Toda, and S. Nakamura, “Modality and Contextual Differences in Computer Based Non-verbal Communication Training,” in 4th IEEE International Conference on Cognitive Infocommunications, 2013, pp. 127–132.
M. Castañer and O. Camerino, “The Teacher’s Body Communicates. Detection of Paraverbal Behaviour Patterns,” in The Temporal Structure of Multimodal Communication, L. Hunyadi and I. Szekrenyes, Eds. 2020, pp. 159–159.
R. I. Arends, Learning to Teach, 9th ed. New York: McGraw-Hill.
C. Ma, C. Sun, D. Song, X. Li, and H. Xu, “A Deep Learning Approach for Online Learning Emotion Recognition,” in 2018 13th International Conference on Computer Science & Education (ICCSE), 2018, no. Iccse, pp. 1–5.
G. Du, M. Chen, C. Liu, B. Zhang, and P. Zhang, “Online robot teaching with natural human-robot interaction,” IEEE Trans. Ind. Electron., vol. 65, no. 12, pp. 9571–9581, 2018.
S. D. Craig, J. Twyford, N. Irigoyen, and S. A. Zipp, “A Test of Spatial Contiguity for Virtual Human’s Gestures in Multimedia Learning Environments,” J. Educ. Comput. Res., vol. 53, no. 1, pp. 3–14, 2015.
DOI: https://doi.org/10.21831/elinvo.v7i1.51354
Copyright (c) 2022 Elinvo (Electronics, Informatics, and Vocational Education)
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
ISSN 2477-2399 (online) || ISSN 2580-6424 (print)