The development of algorithmic applications in many areas raises increasing ethical concerns about their short- and long-term consequences, whether these concern transparency, justice, security, or other values. To identify and address them, numerous reports have established guidelines to assist developers and lawmakers in designing and regulating AI systems. These reports aim to be as generic as possible so that they remain relevant to as many automated decision-making systems as possible. As a result, however, linking their abstract principles to actual situations, and to concrete methods for addressing them, can be difficult.
While AI ethics tends to focus on machine learning applications, many applications in the field of Operations Research raise ethical concerns as well. This has been known for some time, but recent cases reveal a new scale. For example, in France, Parcoursup's assignment algorithm affects the academic paths of hundreds of thousands of students each year, raising questions about the criteria used and their legitimacy. Another example concerns navigation tools, which have a significant impact on traffic and can lead to dangerous situations depending on the circumstances. Like these examples, many OR applications already have a considerable impact on society and call for ethical scrutiny. Indeed, the issues that may arise from OR algorithms are diverse and depend on the type of problem addressed. A first step towards designing appropriate ways to tackle these issues is therefore to identify them within their context of use. In this presentation, we focus on personnel scheduling problems and, more specifically, analyze the ethical issues related to employees' working conditions that these problems can involve, in order to outline some research avenues.