Thibault Laugel
TRAIL (AXA - Sorbonne Université)
Verified email at stanford.edu
Title · Cited by · Year
The dangers of post-hoc interpretability: Unjustified counterfactual explanations
T Laugel, MJ Lesot, C Marsala, X Renard, M Detyniecki
IJCAI'19: Proceedings of the 28th International Joint Conference on …, 2019
250 · 2019
Comparison-based inverse classification for interpretability in machine learning
T Laugel, MJ Lesot, C Marsala, X Renard, M Detyniecki
Information Processing and Management of Uncertainty in Knowledge-Based …, 2018
250* · 2018
Defining locality for surrogates in post-hoc interpretability
T Laugel, X Renard, MJ Lesot, C Marsala, M Detyniecki
ICML 2018 Workshops: ICML 2018 Workshop on Human Interpretability in Machine …, 2018
119 · 2018
Imperceptible adversarial attacks on tabular data
V Ballet, X Renard, J Aigrain, T Laugel, P Frossard, M Detyniecki
NeurIPS 2019 Workshops: NeurIPS 2019 Workshop on Robust AI in Financial …, 2019
106 · 2019
Issues with post-hoc counterfactual explanations: a discussion
T Laugel, MJ Lesot, C Marsala, M Detyniecki
ICML 2019 Workshops: ICML 2019 Workshop on Human In the Loop Learning (HILL), 2019
62 · 2019
Unjustified classification regions and counterfactual explanations in machine learning
T Laugel, MJ Lesot, C Marsala, X Renard, M Detyniecki
ECML PKDD 2019, 37-54, 2020
33 · 2020
How to choose an explainability method? Towards a methodical implementation of XAI in practice
T Vermeire, T Laugel, X Renard, D Martens, M Detyniecki
ECML-PKDD 2021 Workshop: XKDD, 521-533, 2021
23 · 2021
Achieving diversity in counterfactual explanations: a review and discussion
T Laugel, A Jeyasothy, MJ Lesot, C Marsala, M Detyniecki
Proceedings of the 2023 ACM Conference on Fairness, Accountability, and …, 2023
17 · 2023
When mitigating bias is unfair: multiplicity and arbitrariness in algorithmic group fairness
T Laugel, N Krco, V Grari, JM Loubes, M Detyniecki
arXiv preprint arXiv:2302.07185, 2023
15* · 2023
Understanding prediction discrepancies in classification
X Renard, T Laugel, M Detyniecki
Machine Learning 113 (10), 7997-8026, 2024
11* · 2024
Integrating prior knowledge in post-hoc explanations
A Jeyasothy, T Laugel, MJ Lesot, C Marsala, M Detyniecki
International Conference on Information Processing and Management of …, 2022
11 · 2022
Detecting potential local adversarial examples for human-interpretable defense
X Renard, T Laugel, MJ Lesot, C Marsala, M Detyniecki
ECML PKDD 2018 Workshops: Nemesis 2018, UrbReas 2018, SoGood 2018, IWAISe …, 2019
11 · 2019
On the overlooked issue of defining explanation objectives for local-surrogate explainers
R Poyiadzi, X Renard, T Laugel, R Santos-Rodriguez, M Detyniecki
arXiv preprint arXiv:2106.05810, 2021
10 · 2021
A general framework for personalising post hoc explanations through user knowledge integration
A Jeyasothy, T Laugel, MJ Lesot, C Marsala, M Detyniecki
International Journal of Approximate Reasoning 160, 108944, 2023
8 · 2023
Understanding surrogate explanations: the interplay between complexity, fidelity and coverage
R Poyiadzi, X Renard, T Laugel, R Santos-Rodriguez, M Detyniecki
arXiv preprint arXiv:2107.04309, 2021
6 · 2021
Knowledge Integration in XAI with Gödel Integrals
A Jeyasothy, A Rico, MJ Lesot, C Marsala, T Laugel
2023 IEEE International Conference on Fuzzy Systems (FUZZ), 1-6, 2023
4 · 2023
Local post-hoc interpretability for black-box classifiers
T Laugel
Sorbonne Université, 2020
3* · 2020
Why do explanations fail? A typology and discussion on failures in XAI
C Bove, T Laugel, MJ Lesot, C Tijus, M Detyniecki
arXiv preprint arXiv:2405.13474, 2024
2 · 2024
On the Fairness ROAD: Robust Optimization for Adversarial Debiasing
V Grari, T Laugel, T Hashimoto, S Lamprier, M Detyniecki
ICLR 2024, 2024
2 · 2024
Intégration de connaissances en XAI avec les intégrales de Gödel (Knowledge integration in XAI with Gödel integrals)
A Jeyasothy, A Rico, MJ Lesot, C Marsala, T Laugel
Rencontres Francophones sur la Logique Floue et ses Applications (LFA), 2023
2 · 2023