Learning with Combinatorial Optimization Layers: a Probabilistic Approach
Guillaume Dalle 1, Léo Baty 2, Louis Bouvier 2, Axel Parmentier 2

1: CERMICS, École des Ponts ParisTech; Ministère de la Transition écologique et solidaire
2: CERMICS, École des Ponts ParisTech

Combinatorial optimization (CO) layers in machine learning (ML) pipelines are a powerful tool to tackle data-driven decision tasks, but they come with two main challenges. First, the solution of a CO problem often behaves as a piecewise-constant function of its objective parameters. Since ML pipelines are typically trained using stochastic gradient descent, this absence of gradient information is very detrimental. Second, standard ML losses do not work well in combinatorial settings. A growing body of research addresses these challenges through diverse methods. Unfortunately, the lack of well-maintained implementations slows down the adoption of CO layers.
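To make the first challenge concrete, the linear CO layers considered here can be written in the following standard form (generic notation, assumed here for illustration rather than quoted from the paper):

    y^\star(\theta) = \operatorname*{arg\,max}_{y \in \mathcal{Y}} \; \theta^\top y

where \mathcal{Y} is a finite set of feasible solutions (paths, matchings, schedules, etc.). Since \mathcal{Y} is finite, the map \theta \mapsto y^\star(\theta) is piecewise constant: its Jacobian is zero almost everywhere and undefined where the argmax jumps, so backpropagation through such a layer yields no useful descent direction.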

In this work, building on previous work, we introduce a probabilistic perspective on CO layers, which lends itself naturally to approximate differentiation and the construction of structured losses. We recover many approaches from the literature as special cases, and we also derive new ones. Based on this unifying perspective, we present InferOpt.jl, an open-source Julia package that 1) turns any CO oracle with a linear objective into a differentiable layer, and 2) defines adequate losses to train pipelines containing such layers. Our library works with arbitrary optimization algorithms, and it is fully compatible with Julia's ML ecosystem. We demonstrate its capabilities on several applications from the operations research literature.
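As an illustration of point 1), a perturbation-based layer replaces the oracle's output with the expectation \hat{y}_\varepsilon(\theta) = \mathbb{E}\left[\operatorname*{arg\,max}_{y \in \mathcal{Y}} (\theta + \varepsilon Z)^\top y\right], where Z is a standard Gaussian vector; this expectation is smooth in \theta and can be estimated by Monte Carlo sampling. The minimal sketch below shows how such a layer might be assembled with InferOpt.jl. The PerturbedAdditive and FenchelYoungLoss names follow the package documentation at the time of writing and may differ across versions; the toy one_hot_argmax oracle, the hyperparameters, and the training target are illustrative assumptions.

    using InferOpt, Zygote

    # Toy CO oracle: one-hot argmax over the objective vector θ.
    # Any black-box maximizer y*(θ) = argmax_{y ∈ Y} θᵀy could be plugged
    # in instead; the oracle itself is never differentiated through.
    function one_hot_argmax(θ; kwargs...)
        y = zero(θ)
        y[argmax(θ)] = one(eltype(θ))
        return y
    end

    # Additive-perturbation layer: outputs a Monte Carlo estimate of
    # E[one_hot_argmax(θ + εZ)] with Z a standard Gaussian vector,
    # which is a smooth function of θ.
    layer = PerturbedAdditive(one_hot_argmax; ε=0.1, nb_samples=20)

    # Fenchel-Young loss associated with the same perturbation,
    # suitable for supervised training against target solutions.
    loss = FenchelYoungLoss(layer)

    θ = randn(5)
    y_target = one_hot_argmax(randn(5))

    # The layer and the loss are differentiable end to end:
    ŷ = layer(θ)                                      # smoothed prediction
    g = Zygote.gradient(θ -> loss(θ, y_target), θ)[1] # gradient w.r.t. θ

In a full pipeline, θ would be produced by a neural network (e.g., a Flux.jl model), and the gradient above would flow back into its weights through Julia's automatic differentiation.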
