Together with Helen Hulsker I took part in a webinar on governing AI by Dataiku; read the blurb here:
There are many obstacles for Data Scientists in effectively governing the use of AI, such as overcoming the bias inherent in historical data, agreeing with the business on the boundaries of governing AI, and, perhaps most importantly, ensuring the adoption of good AI practices across the organisation.
So join Randstad and Dataiku as we set the stage around implementing AI in a high-risk domain from a technical/practical and jurisdictional perspective.
Excited to be giving a keynote at the International Workshop on Fair, Effective And Sustainable Talent management using data science (FEAST workshop), part of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2021).
I’ll give a talk about some of our recent data science projects at Randstad Groep Nederland, spanning our job/candidate recommender systems, work on bias and bias mitigation, knowledge graphs, and deep embeddings for learning to match candidates to vacancies.
The AI doomsday scenarios, ignited by books such as The Filter Bubble (2011) and Weapons of Math Destruction (2016), are slowly being superseded by more pragmatic and nuanced views of AI. Views in which we acknowledge we’re in control of AI, and able to design systems in ways that reflect values of our choice.
This shift can be seen in the rising involvement of computer scientists, e.g., through books such as The Ethical Algorithm (2019) or Understand, Manage, and Prevent Algorithmic Bias (2019): books that describe and acknowledge the challenges and complexities of algorithmic fairness, but at the same time offer concrete methods and tools for fairer and more ethical algorithms. The shift can also be seen in how the methods described in these books have already found their way into the offerings of all major cloud providers. For example, at the FAccT 2021 tutorial “Responsible AI in Industry: Lessons Learned in Practice,” Microsoft, Google, and Amazon demoed their fair AI solutions to the multidisciplinary audience of the FAccT community.
The message is clear: we can (and should!) operationalize algorithmic fairness.
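To make this concrete, here is a minimal, hypothetical sketch of one way fairness gets operationalized in practice: computing the demographic parity difference, a standard group fairness metric. The function names and data below are made up for illustration; libraries such as fairlearn offer production-grade versions of this metric.

```python
# Demographic parity difference: the gap in positive-outcome rates
# between two groups. A value of 0 means both groups are selected
# at the same rate; larger values signal potential disparate impact.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Illustrative (made-up) hiring decisions: 1 = selected, 0 = rejected
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # selection rate 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")
```

Metrics like this are exactly what makes fairness auditable: once a gap is measured, it can be monitored over time and fed into mitigation techniques.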
We received news that our workshop proposal “RecSys in HR: Workshop on Recommender Systems for Human Resources” was accepted for inclusion in the 15th ACM Conference on Recommender Systems (RecSys 2021) program! That means we’ll be running a full-day workshop with (research and position) papers, keynotes, and a panel (all TBD) during the conference which will be held in Amsterdam, 27th September-1st October 2021.
That was fun! The Online Anti-Discrimination Hackathon ran last weekend, a hackathon co-organized by Ministerie van OCW, Inspectie SZW, Ministerie van BZK, and Hackathon Factory, which revolved around “gender discrimination in data collection and labeling for automated assessment and selection of candidates.” Having ample experience in designing and developing AI-powered candidate selection systems, and with the risks of algorithmic bias, I was happy to contribute to this hackathon with a few colleagues at Randstad Groep Nederland in several ways.
Talk on Algorithmic Bias Mitigation in Automated Recruitment
First, I gave a (virtual, pre-recorded) talk on algorithmic matching, algorithmic bias, and bias mitigation in the domain of automated recruitment. More specifically, I shared how and where we use AI and recommender systems to facilitate job and candidate matching at Randstad Groep Nederland, and spoke more generally about the challenges of bias and the opportunities of bias mitigation. I showcased both examples of misuse of AI, resulting in discriminatory systems, and examples of how AI can be used to actively reduce or mitigate bias in the recruitment process. See the recording of my talk below:
In addition, a colleague and I joined live interactive roundtable sessions during the hackathon, and we brought in a panel of four subject-matter experts for one-on-one sessions with hackathon teams and participants.
David Graus, Randstad Groep Nederland, Commissie AI: “AI offers enormous opportunities to break filter bubbles and reduce bias”
David has a background in search engine technology, on which he has built a career focused on developing personalization and recommender systems. In his current role, David leads the data scientists at Randstad Groep Nederland and is involved in all facets of building AI systems, from ideation to actually bringing systems into production and monitoring them. “From experience I know that, as a designer and builder of AI, you have the means to set up personalization and recommender systems on an ethically sound foundation. AI therefore offers great opportunities to break filter bubbles and reduce bias. As a member of the Commissie AI, I hope to help the sector do so.”
A fun piece by Lubach, but as a filter bubble denier, and as someone who earns a good part of his living building similar “algorithms,” I feel compelled to add some nuance 😉.
After attending the beautiful virtual 14th ACM Conference on Recommender Systems (RecSys 2020), I am happy to start looking forward to RecSys 2021, which will be held in Amsterdam!
I am super excited to share that I’ve joined the organizing committee of RecSys 2021 as local outreach chair, which means I’ll assist the other chairs and connect the (local) industry and companies to the conference.
Work with impact. At Randstad Groep Nederland IT you keep the country moving, enabling people across sectors to do their work, getting pizza on your table and your suitcase on the plane. Your AI solutions mean tomorrow’s recruiter is smarter and faster but still embodies our human forward approach, combining tech with a personal touch and putting people first – including you. You’ll be constantly experimenting, working on new NLP use cases and matching systems, or expanding our self-service data platform. If you bring the idea, we will provide the freedom to explore, so you can help us shape the world of work.
Data Science @ RGN
Randstad IT is organized in a variation of the Spotify Engineering Model, with squads, tribes, and chapters. Our data science chapter comprises 12 data scientists, data engineers, and machine learning engineers, spread over 3 departments (IT, finance, and marketing) and 6 different teams. These teams work on recommender systems for algorithmic job matching, natural language processing and information extraction, forecasting, and more. We are further interested in AI fairness and auditing, explainability, and transparency.
Who are you?
We’re looking for students studying AI, data science, or related programs, for either graduation projects or regular internships. Fluency in Python is required, and we expect our interns to work autonomously. That said, as an intern you’ll be a fully fledged member of our chapter, which means you’ll benefit from the knowledge shared within it.
This mission perfectly fits my personal conviction that knowledge and understanding of technology, through media and algorithmic literacy rather than fear and repression, is vital as we progress into our technology-infused future! See, e.g., what I wrote about the neutrality of algorithms, or “algorithmic literacy.”
Prior to joining their board, I had been following SETUP for a couple of years, attending some of their meetups and giving a talk at one of their events in 2018, “leven met algoritmen” (living with algorithms). I am very excited to start as a board member and help set up SETUP’s future!