
Multimodal Emotion Recognition Based on the S-ELM-LUPI Model

Acknowledgements pp. 4-5
Abstract (in Chinese) pp. 5-7
Abstract pp. 7-8
1 INTRODUCTION pp. 15-22
    1.1 Research Background and Significance p. 15
    1.2 Research Objectives pp. 15-16
    1.3 Research Content pp. 16-19
    1.4 Research Statement pp. 19-20
    1.5 Thesis Structure pp. 20-22
2 LITERATURE REVIEW pp. 22-34
    2.1 Human Computer Interaction Overview pp. 22-25
        2.1.1 HCI Evolution pp. 24-25
        2.1.2 HCI Current Research p. 25
    2.2 Emotion Recognition (ER) General Concepts pp. 25-27
        2.2.1 Human Emotion Expression pp. 25-27
        2.2.2 Affective Computing p. 27
    2.3 Emotion Recognition Research pp. 27-33
        2.3.1 Audio and Visual Unimodal-based Related Work pp. 27-28
        2.3.2 Multimodal Emotion Recognition pp. 28-33
    2.4 Conclusion of the Chapter pp. 33-34
3 DATA COLLECTION AND FEATURE EXTRACTION pp. 34-56
    3.1 Data Collection pp. 34-41
        3.1.1 Dataset Collection Design pp. 34-37
        3.1.2 Emotion Recognition-based Dataset Categories pp. 37-41
    3.2 Feature Extraction pp. 41-53
        3.2.1 Feature Construction pp. 42-44
        3.2.2 Feature Selection p. 44
        3.2.3 Facial Expression Feature Extraction pp. 44-52
        3.2.4 Audio-based Feature Extraction pp. 52-53
    3.3 Data Preprocessing pp. 53-55
        3.3.1 Data Cleaning p. 54
        3.3.2 Normalization pp. 54-55
        3.3.3 Missing Values p. 55
    3.4 Conclusion of the Chapter pp. 55-56
4 SPARSE EXTREME LEARNING MACHINE - LEARNING USING PRIVILEGED INFORMATION (S-ELM-LUPI) METHOD FOR CLASSIFICATION pp. 56-87
    4.1 Data Classification Outline pp. 56-63
        4.1.1 Classical Machine Learning Process pp. 57-58
        4.1.2 Mathematical Expression for the Classification Task pp. 58-60
        4.1.3 Learning Using Privileged Information (LUPI) Paradigm pp. 60-62
        4.1.4 Formulation of Learning Using Privileged Information pp. 62-63
    4.2 Sparse Extreme Learning Machine pp. 63-69
        4.2.1 Extreme Learning Machine Method Basics pp. 63-67
        4.2.2 Sparse Extreme Learning Machine Method pp. 67-69
    4.3 Sparse ELM-LUPI for Classification pp. 69-85
        4.3.1 Theoretical Principle pp. 69-73
        4.3.2 Proposed Method Evaluation Results and Analysis pp. 73-85
    4.4 Conclusion of the Chapter pp. 85-87
5 MULTIPLE FEATURE FUSION FOR UNIMODAL EMOTION RECOGNITION pp. 87-120
    5.1 Human Emotion Expression Rule pp. 87-90
        5.1.1 Emotion Recognition Using One Modality p. 88
        5.1.2 Multiple Feature Extraction pp. 88-90
    5.2 Fusion Description pp. 90-94
        5.2.1 Serial Fusion Process pp. 90-92
        5.2.2 Semi-serial Fusion Method pp. 92-94
    5.3 Audio-based Emotion Recognition pp. 94-104
        5.3.1 Audio-based Dataset Description pp. 94-95
        5.3.2 Audio-based Multiple Feature Extraction pp. 95-96
        5.3.3 Audio-based Emotion Recognition Experiments Setup pp. 96-97
        5.3.4 Audio-based Emotion Recognition Results and Discussion pp. 97-104
    5.4 Facial Expression Recognition pp. 104-118
        5.4.1 Facial Expression Datasets Description pp. 104-106
        5.4.2 Facial-based Multiple Feature Extraction pp. 106-107
        5.4.3 Facial Expression Recognition Experiments Description pp. 107-108
        5.4.4 Facial Expression Recognition Results and Discussion pp. 108-118
    5.5 Conclusion of the Chapter pp. 118-120
6 MULTIMODAL EMOTION RECOGNITION USING THE LUPI PARADIGM pp. 120-141
    6.1 Multimodal Emotion Recognition (MER) pp. 120-126
        6.1.1 Emotion Definition, Challenges and Opportunities pp. 121-122
        6.1.2 Multimodal Information Fusion for Emotion Recognition pp. 122-123
        6.1.3 Privileged Information in Multimodal Emotion Recognition pp. 123-124
        6.1.4 Proposed Method for MER Fusion pp. 124-126
    6.2 Dataset Description pp. 126-130
        6.2.1 eNTERFACE'05 Audio-Visual Dataset Preprocessing pp. 127-128
        6.2.2 Multimodal Feature Extraction pp. 128-130
    6.3 Multimodal Emotion Recognition Experiments Design pp. 130-132
        6.3.1 Fusion Procedure pp. 130-131
        6.3.2 Objectives of the Experiments pp. 131-132
    6.4 Multimodal-based Experiment Results and Discussion pp. 132-140
        6.4.1 Performance Evaluation pp. 132-134
        6.4.2 Comparison to Other Methods pp. 134-137
        6.4.3 Improvement Evaluation pp. 137-139
        6.4.4 Evaluation of Individual Modality Contribution pp. 139-140
    6.5 Conclusion of the Chapter pp. 140-141
7 CONCLUSION AND FUTURE DIRECTIONS pp. 141-145
    7.1 General Conclusion pp. 141-142
    7.2 Future Work pp. 142-145
References pp. 145-165
Author's Biography and Research Achievements During Study pp. 165-168
Dissertation Data Sheet p. 168

The thesis comprises 168 pages in total.