Manchot Research Group II "Decision Making with the Help of Artificial Intelligence Methods" - Use Case Economy "Social Governance and Compliance".
In the second funding period of the Manchot Research Group II "Decision Making with the Help of Artificial Intelligence Methods", the findings from the first funding period on employee reactions to the use of AI in human resources will be extended.
Through various qualitative and quantitative studies, a guideline and an action plan for companies will be developed.
If your company is interested in this research project and would like to support us with your experience, please contact us.
Further information on the Manchot Research Group II can be found here.
Contact partner at the HHU Düsseldorf: Prof. Dr. Marius Wehner
Project duration: 01.01.2022 – 31.12.2024
Collaborative project between HTW Berlin and Heinrich Heine University Düsseldorf.
The project "Fair Enough" will deal with how the fairness of learning analytics systems can be checked and audited.
The social consequences of algorithmic decision-making are determined by the interplay of the implemented algorithms, the data they use, and user behavior. While computer science judges the fairness of algorithmic processes primarily by quantitative, formal-analytical criteria, which cannot all be satisfied at the same time, users judge fairness based on their subjective, individual perception and on social norms. The project therefore investigates the topic from two complementary sides:
1. Development of practical methods to assess the fairness of learning analytics systems and their data (HTW Berlin),
2. Investigation of users' requirements and expectations regarding the fairness of learning analytics systems (HHU Düsseldorf).
As a result, the project aims to deliver a tool in the form of a six-step guideline for auditing the fairness of learning analytics systems, one that considers both the fairness of the system's data and algorithms and the way the system's outputs are used.
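The tension between formal fairness criteria mentioned above can be made concrete with a small sketch. All groups, outcomes, and predictions below are invented for illustration and are not taken from the project: when two groups have different base rates, a classifier may satisfy one criterion (here, equal true-positive rates, i.e. equal opportunity) while violating another (equal positive-prediction rates, i.e. demographic parity).

```python
# Illustrative sketch (invented data): two common formal fairness
# criteria evaluated on toy predictions, showing they need not agree.

def rates(records):
    """Return (positive-prediction rate, true-positive rate) for a group.

    Each record is a (true outcome, model prediction) pair with values 0/1.
    """
    pos_pred = sum(1 for y, yhat in records if yhat == 1) / len(records)
    positives = [(y, yhat) for y, yhat in records if y == 1]
    tpr = sum(1 for y, yhat in positives if yhat == 1) / len(positives)
    return pos_pred, tpr

# Group A has a base rate of 0.5, group B a base rate of 0.25.
group_a = [(1, 1), (1, 1), (0, 1), (0, 0)]
group_b = [(1, 1), (0, 0), (0, 0), (0, 1)]

ppr_a, tpr_a = rates(group_a)
ppr_b, tpr_b = rates(group_b)

# Demographic parity compares positive-prediction rates across groups;
# equal opportunity compares true-positive rates. Here the second gap is
# zero while the first is not.
print("demographic parity gap:", abs(ppr_a - ppr_b))  # 0.25
print("equal opportunity gap:", abs(tpr_a - tpr_b))   # 0.0
```

With differing base rates, closing the demographic-parity gap would necessarily open a gap in one of the error-rate criteria, which is the formal impossibility the project description alludes to.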
As part of the project, we are conducting experiments with learners and teachers, among others. If you are interested in our research project, please feel free to contact us.
Contact person at HHU Düsseldorf: Prof. Dr. Marius Wehner
Project duration: 01.03.2021 – 29.02.2024
Project Manager: Prof. Dr. Katharina Simbeck
Project assistant at the HHU Düsseldorf: Lynn Schmodde
Project staff member at the HTW Berlin: Linda Fernsel
The Cranfield Network on International Strategic Human Resource Management (CRANET) has been collecting international data for 30 years with the aim of identifying trends in human resource management.
Under the motto "Hello Future. Hello HR", 47 universities worldwide are cooperating to create a valid data basis for researching relevant practices and trends in human resource management.
The current focus of the CRANET 2021 study is to explore topics such as:
HR practices in national and international comparison.
Impact of the COVID-19 pandemic on HRM and HR practices.
Analysis of changes in "lived" HR practices over time.
Data will be collected until the end of September 2021 using standardized questionnaires distributed to national organizations by the respective network partners. The special feature of the survey is that the questionnaires are identical across countries, apart from language and, where necessary, country-specific modifications. They thus provide a unique, comparable database for science and practice, which serves as a basis for relevant research.
We would like to express our sincere thanks to HRpepper Management Consultants for their kind support of the German Cranet survey in 2021.
The 2015/16 studies provide, for example, interesting initial findings on:
The increasing importance of digitalization in recruiting, talent management and learning.
The use of flexible and agile ways of working, such as interactive learning spaces, flexible organizational structures and working models, etc.
The uncertainty of HR departments in dealing with these issues and in preparing for digital transformation.
Contact person at HHU:
Joint project between HTW Berlin and Heinrich Heine University Düsseldorf.
Description of the joint project:
The project "LADi - Learning Analytics and Discrimination" addressed whether and to what extent implicit discrimination based on gender, age, origin or learning type can be fostered or prevented by the use of algorithmic evaluations in digital learning systems and processes.
The use of digital learning platforms generates data about learners, their usage behavior, and their learning processes (e.g., demographic data, tests taken, activity data). This data can be evaluated by means of learning analytics and used to measure and predict learning success or to individualize learning processes - for example, by means of interventions by teachers or by adapting teaching materials. At the same time, learning analytics harbors the risk of disadvantage and discrimination, for example if learners are predicted to have lower learning success due to certain characteristics.
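To make the mechanism described above concrete, here is a minimal sketch of how activity data could feed an at-risk prediction that triggers a teacher intervention. The scoring function, weights, and learner data are all invented for illustration; they are not the method of any system studied in the project.

```python
# Hypothetical sketch: combine activity data into a heuristic at-risk
# score. Lower engagement yields a higher score (roughly 0..1).

def at_risk_score(logins_per_week, tests_passed, forum_posts):
    """Weighted engagement heuristic; all weights are invented."""
    engagement = 0.5 * logins_per_week + 2.0 * tests_passed + 0.3 * forum_posts
    return max(0.0, 1.0 - engagement / 10.0)

# (logins per week, tests passed, forum posts) - invented learners
learners = {
    "learner_a": (5, 3, 4),  # active: low risk expected
    "learner_b": (1, 0, 0),  # inactive: high risk expected
}

for name, features in learners.items():
    score = at_risk_score(*features)
    flag = "intervene" if score > 0.5 else "ok"
    print(name, round(score, 2), flag)
```

Even in this toy form, the risk the project investigates is visible: any learner characteristic that correlates with the chosen activity features will shift predictions for the whole group carrying that characteristic.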
Results of the joint project:
The studies conducted within the "LADi" project show that possible discrimination against certain groups must be kept in mind when applying learning analytics. Under- or over-representation of certain groups in the training data leads to biases in the decisions of the algorithm used. Furthermore, potential discrimination based on demographic data, learning behavior, and degree of disability was identified in the use of learning analytics. Several experiments also showed that relying on learning analytics as the sole decision-making authority carries various risks. For example, a conjoint analysis showed that teachers rely heavily on the automatic assessment recommendations of learning platforms, while the learning behavior or ethnicity of the learners represented there is hardly, if at all, taken into account in the assessment. A study with learners demonstrated that they generally perceive the use of learning analytics as fair, whereas another experiment showed that teachers can be influenced by an algorithm's recommendation: they tended to lower their rating when the platform gave a negative recommendation (warning), but did not raise it when the recommendation was positive. The results thus illustrate both the advantages of learning analytics and the problems its use can create.
The results of the LADi project thus enrich, among other things, the scientific and social discussions around the topics of fairness and transparency in the context of algorithmic decision-making and measures to prevent discrimination.
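The representation effect reported above can be sketched in a few lines. All scores, outcomes, and group sizes are invented, and the simple threshold classifier stands in for whatever models the studies actually used: when a single decision threshold is tuned for overall accuracy on data in which one group is strongly under-represented, the threshold that looks best overall can perform much worse for that group.

```python
# Invented example of under-representation bias: pick the score cutoff
# with the highest accuracy on pooled training data, then compare how
# that cutoff serves each group.

def best_threshold(data):
    """Choose the cutoff maximizing training accuracy over all scores."""
    candidates = sorted({score for score, _ in data})
    def accuracy(t):
        return sum((score >= t) == passed for score, passed in data) / len(data)
    return max(candidates, key=accuracy)

def group_accuracy(data, t):
    return sum((score >= t) == passed for score, passed in data) / len(data)

# (score, actually passed) - group A learners tend to score higher and
# dominate the training data (40 samples vs. 3).
group_a = [(60, False), (70, True), (80, True), (90, True)] * 10
group_b = [(40, False), (50, True), (55, True)]

threshold = best_threshold(group_a + group_b)
print("threshold:", threshold)
print("accuracy on group A:", group_accuracy(group_a, threshold))
print("accuracy on group B:", group_accuracy(group_b, threshold))
```

The pooled optimum sits where the majority group's passing learners begin, so every passing learner in the under-represented group falls below it; this is the kind of bias the project's results attribute to skewed initial data.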
Project duration: 01.11.2018 – 28.02.2022
External project website: https://digi-ebf.de/learning-analytics-und-diskriminierung
Publications from the collaborative project:
Yun, H., Riazy, S., Fortenbacher, A. & Simbeck, K. (2019). Code of Practice for Sensor-Based Learning. In: Pinkwart, N. & Konert, J. (Eds.), DELFI 2019. Bonn: Gesellschaft für Informatik e.V., pp. 199-204. DOI: 10.18420/delfi2019_326
Riazy, S. & Simbeck, K. (2019). Predictive Algorithms in Learning Analytics and their Fairness. In: Pinkwart, N. & Konert, J. (Eds.), DELFI 2019. Bonn: Gesellschaft für Informatik e.V., pp. 223-228. DOI: 10.18420/delfi2019_305
Köchling, A. & Riazy, S. (2019). Fluch oder Segen? Big Data und Learning Analytics im Lernkontext. weiter bilden, (4), 17–20. www.die-bonn.de/id/37212/about/html/
Riazy, S.; Weller, S. and Simbeck, K. (2020). Evaluation of Low-threshold Programming Learning Environments for the Blind and Partially Sighted. In Proceedings of the 12th International Conference on Computer Supported Education - Volume 2: CSEDU, ISBN 978-989-758-417-6, pages 366-373. DOI: 10.5220/0009448603660373
Riazy, S.; Simbeck, K.; Woestenfeld, R. and Traeger, M. (2020). Prior Knowledge as a Predictor for Persistence. In Proceedings of the 12th International Conference on Computer Supported Education - Volume 1: CSEDU, ISBN 978-989-758-417-6, pages 137-144. DOI: 10.5220/0009324201370144
Riazy, S.; Simbeck, K. and Schreck, V. (2020). Fairness in Learning Analytics: Student At-risk Prediction in Virtual Learning Environments. In Proceedings of the 12th International Conference on Computer Supported Education - Volume 1: CSEDU, ISBN 978-989-758-417-6, pages 15-25. DOI: 10.5220/0009324100150025
Köchling, A. and Riazy, S. (2020). "Big Data et Learning Analytics: bienfait ou fléau?" Education Permanente, (2) 2020.
Köchling, A., Riazy, S., Wehner, M. C., & Simbeck, K. (2021). Highly Accurate, But Still Discriminatory. Business & Information Systems Engineering, 63(1), 39-54. DOI: 10.1007/s12599-020-00673-w
Riazy, S., Simbeck, K., Schreck, V. (2021). Systematic Literature Review of Fairness in Learning Analytics and Application of Insights in a Case Study. In: Lane, H.C., Zvacek, S., Uhomoibhi, J. (eds) Computer Supported Education. CSEDU 2020. Communications in Computer and Information Science, vol 1473. Springer, Cham. DOI: 10.1007/978-3-030-86439-2_22
Riazy, S., Simbeck, K., Traeger, M., Woestenfeld, R. (2021). The Effect of Prior Knowledge on Persistence, Participation and Success in a Mathematical MOOC. In: Lane, H.C., Zvacek, S., Uhomoibhi, J. (eds) Computer Supported Education. CSEDU 2020. Communications in Computer and Information Science, vol 1473. Springer, Cham. DOI: 10.1007/978-3-030-86439-2_21
Rzepka, N., Müller, H.-G., and Simbeck, K. (2021). What you apply is not what you learn! Examining students' strategies in German capitalization tasks. In Proceedings of the 14th International Conference on Educational Data Mining.
Mai, L., Köchling, A., and Wehner, M. C. (2021). ‘This Student Needs to Stay Back’: To What Degree Would Instructors Rely on the Recommendation of Learning Analytics? In Proceedings of the 13th International Conference on Computer Supported Education - Volume 1: CSEDU, ISBN 978-989-758-502-9; ISSN 2184-5026, 189-197. DOI: 10.5220/0010449401890197.
Mai, L., Köchling, A., Schmodde, L., and Wehner, M. C. (2021). Teacher vs. Algorithm: Learners' Fairness Perception of Learning Analytics Algorithms. In Lingnau, Andreas (Ed.), DELFI 2021 - 19. Fachtagung Bildungstechnologien der GI: Hochschule Ruhr West. ISBN: 978-3-946757-03-0, urn:nbn:de:hbz:1393-opus4-7338, pp. 130-145.
Mai, L.; Köchling, A. & Wehner, M. C. (2022). "This Student Needs to Stay Back": To What Degree Would Instructors Rely on the Recommendation of Learning Analytics? SN Computer Science, 3, 259. DOI: 10.1007/s42979-022-01137-6
Rzepka, N., Simbeck, K., Müller, H.-G. and Pinkwart, N. (2022). Keep It Up: In-session Dropout Prediction to Support Blended Classroom Scenarios. Proceedings of the 14th International Conference on Computer Supported Education - Volume 2: CSEDU, SciTePress, 2022, ISBN 978-989-758-562-3
Rzepka, N., Simbeck, K., Müller, H.-G. and Pinkwart, N. (2022). Fairness of In-session Dropout Prediction. Proceedings of the 14th International Conference on Computer Supported Education - Volume 2: CSEDU, SciTePress, 2022, ISBN 978-989-758-562-3
Practical contributions from the joint project:
Köchling, A. and Riazy, S. (2020). Learning Analytics: Wann ist Personalisierung diskriminierend? www.forumbd.de/blog/learning-analytics-wann-ist-personalisierung-diskriminierend
Köchling, A. and Kaiser, H. (2020). Learning Analytics: Die digitale Zukunft des Lernens. www.netzwerk-digitale-bildung.de/learning-analytics-die-digitale-zukunft-des-lernens/
Köchling, A. and Nieter, A. (2020). Wie digital ist die Bildungspraxis? www.codingkids.de/wissen/status-quo-wie-digital-ist-die-bildungspraxis-1
Interview with Jun.-Prof. Dr. Marius Wehner in DUZ - Magazin für Wissenschaft und Gesellschaft: "Die finale Entscheidung sollten immer Menschen treffen" ("The final decision should always be made by humans"). www.duz.de/media/duzDe/issues/d0c2kw/cff7y8/web/html5/index.html
Sporn, Z. and Rzepka, N. (2021). How Babbel Kept Students Learning Languages Through the First Lockdown www.babbel.com/en/magazine/how-babbel-kept-students-learning-languages-throughlockdown
The project "Manchot AI" addresses the question of how using algorithms and artificial intelligence (AI) to promote talent within a company can lead to discrimination and compliance violations.
The increasing use of AI and digitally available data in HR makes new kinds of information available for talent management:
Activity-based data: e.g., absenteeism, performance metrics, organizational metrics,
Personal data: e.g., demographic characteristics, relationship status, children, religion,
Subjective data: e.g., manager ratings, social network data.
Alongside these new opportunities, however, new problems and legal issues arise when AI is used to identify talent. Cases of implicit discrimination against applicants have already been identified in AI-supported online job searches and personnel selection. Such discrimination is also conceivable in personnel measures for career advancement.
In the "Manchot AI" project, companies that already use algorithmic processes for talent development in their day-to-day operations will first be interviewed in order to gather expert experience on the practical use of AI for talent development.
Subsequently, realistic scenarios for the use of AI in the context of talent management will be developed, representing the use of algorithmic processes to varying degrees.
For more information on this project, see: www.heicad.hhu.de
If you as a company are interested in this research project and would like to support us with your experience, please feel free to contact us.
Contact person at HHU Düsseldorf: Prof. Dr. Marius Wehner
Project duration: 01.01.2019 – 31.12.2021