For this year’s edition (the third in a row) of the Recommender Systems in Human Resources workshop, to be held at the ACM RecSys Conference in Singapore, I co-authored three accepted papers:
Enhancing Resume Content Extraction in Question Answering Systems through T5 Model Variants
Y. Luo, F. Lu, V. Pal, and D. Graus, “Enhancing resume content extraction in question answering systems through T5 model variants,” in RecSys in HR’23: The 3rd Workshop on Recommender Systems for Human Resources, 2023.
This paper is based on the MSc Data Science thesis of Yuxin, who was supervised by Feng. Yuxin applied Large Language Models (mT5) to do Question Answering over resumes for information extraction.
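For a flavour of what this looks like in code, here is a minimal sketch of generative question answering over a resume with an mT5 variant via the Hugging Face transformers library. The checkpoint name, prompt format, and example resume are illustrative assumptions of mine, not the setup from the paper; a checkpoint fine-tuned on resume QA data would be needed to get sensible answers.

```python
# Illustrative sketch only (not the paper's setup): question answering over a
# resume snippet with an mT5 variant, in the T5 text-to-text style.
from transformers import AutoTokenizer, MT5ForConditionalGeneration

model_name = "google/mt5-small"  # assumed checkpoint; in practice a QA-fine-tuned variant
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)

resume = "2018-2023: Data Engineer at Acme Corp. Skills: Python, SQL, Spark."  # made-up resume snippet
question = "Which skills does the candidate list?"

# Text-to-text formulation: question + resume context in, answer text out.
inputs = tokenizer(f"question: {question} context: {resume}", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```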
Career Path Recommendations for Long-term Income Maximization: A Reinforcement Learning Approach
S. Avlonitis, D. Lavi, M. Mansoury, and D. Graus, “Career path recommendations for long-term income maximization: a reinforcement learning approach,” in RecSys in HR’23: The 3rd Workshop on Recommender Systems for Human Resources, 2023.
This paper is based on Spyros’ MSc AI thesis (from 2022); he was jointly supervised by me and Dor. Spyros applied reinforcement learning to career path recommendations, leveraging Randstad’s rich data to simulate an environment in which agents can apply for jobs, be hired, and receive salaries.
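To sketch the general recipe (and only that: the jobs, salaries, and hiring probabilities below are all made up, and the paper’s environment is far richer), here is a toy tabular Q-learning agent in which states are job titles, actions are job applications, rewards are salaries, and the discount factor makes the agent trade off immediate pay against long-term income.

```python
# Toy career-path MDP (assumptions only, not the paper's environment or data):
# states are job titles, actions are applications, rewards are salaries.
import random
from collections import defaultdict

jobs = ["junior_dev", "senior_dev", "lead_dev"]
salary = {"junior_dev": 3000, "senior_dev": 4500, "lead_dev": 6000}  # made-up monthly salaries
hire_prob = {("junior_dev", "senior_dev"): 0.6,
             ("senior_dev", "lead_dev"): 0.4}  # assumed hiring probabilities

alpha, gamma, epsilon = 0.1, 0.95, 0.2  # learning rate, discount, exploration
Q = defaultdict(float)                  # Q[(current_job, applied_job)]

def step(state, action):
    """Apply for `action`: get hired with some probability, then earn the salary of the resulting job."""
    hired = random.random() < hire_prob.get((state, action), 0.1)
    next_state = action if hired else state
    return next_state, salary[next_state]

for episode in range(2000):
    state = "junior_dev"
    for _ in range(10):  # ten career steps per episode
        if random.random() < epsilon:
            action = random.choice(jobs)
        else:
            action = max(jobs, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        best_next = max(Q[(next_state, a)] for a in jobs)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print(max(jobs, key=lambda a: Q[("junior_dev", a)]))  # recommended next application
```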
Enhancing PLM Performance on Labour Market Tasks via Instruction-based Finetuning and Prompt-tuning with Rules
J. Vrolijk and D. Graus, “Enhancing PLM performance on labour market tasks via instruction-based finetuning and prompt-tuning with rules,” arXiv preprint arXiv:2308.16770, 2023.
This paper is based on Jarno’s work on using structured taxonomy data to train and fine-tune Large Language Models for different downstream tasks, such as relation classification, entity linking, and question answering.
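As a rough illustration of the idea (the relations, field names, and templates below are assumptions of mine, not the paper’s data or prompts), taxonomy triples can be verbalised into instruction-style training examples for a task like relation classification:

```python
# Illustrative only: turning taxonomy triples into instruction-style training
# examples for fine-tuning a PLM on a labour-market task (relation classification).
taxonomy = [
    ("Python", "is_skill_of", "Data Engineer"),              # made-up triples
    ("Data Engineer", "is_child_of", "Engineering Occupations"),
]

def to_instruction_example(head, relation, tail):
    """Verbalise one triple as an (instruction, input, output) record."""
    return {
        "instruction": "Classify the relation between the two labour-market entities.",
        "input": f"Entity 1: {head}. Entity 2: {tail}.",
        "output": relation,
    }

train_examples = [to_instruction_example(*triple) for triple in taxonomy]
print(train_examples[0])
```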
See the full list of accepted papers.
