Conference paper · Year: 2024

Coarse-to-Fine Concept Bottleneck Models

Abstract

Deep learning algorithms have recently gained significant attention due to their impressive performance. However, their high complexity and uninterpretable mode of operation hinder their confident deployment in real-world safety-critical tasks. This work targets ante-hoc interpretability, and specifically Concept Bottleneck Models (CBMs). Our goal is to design a framework that admits a highly interpretable decision-making process with respect to human-understandable concepts, at two levels of granularity. To this end, we propose a novel two-level concept discovery formulation leveraging: (i) recent advances in vision-language models, and (ii) an innovative formulation for coarse-to-fine concept selection via data-driven and sparsity-inducing Bayesian arguments. Within this framework, concept information does not rely solely on the similarity between the whole image and general unstructured concepts; instead, we introduce the notion of a concept hierarchy to uncover and exploit more granular concept information residing in patch-specific regions of the image scene. As we show experimentally, the proposed construction not only outperforms recent CBM approaches, but also yields a principled framework towards interpretability.
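The abstract describes concept activations computed at a coarse (whole-image) level and a fine (patch) level, with a sparsity-inducing selection over which concepts feed the final classifier. The sketch below illustrates that idea under stated assumptions and is not the authors' implementation: the CLIP-like image, patch, and concept-text embeddings, the sigmoid gates standing in for the paper's Bayesian concept-selection mechanism, and the max-over-patches pooling are all illustrative choices.

# Minimal sketch (assumptions labeled in comments): a two-level concept-bottleneck
# head that scores an image against text concepts at whole-image and patch level,
# gates the activations sparsely, and classifies from the gated bottleneck.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoarseToFineCBMHead(nn.Module):
    def __init__(self, n_coarse: int, n_fine: int, n_classes: int):
        super().__init__()
        # Sigmoid gate logits: a simplification standing in for the paper's
        # sparsity-inducing Bayesian concept selection (assumption).
        self.coarse_gate = nn.Parameter(torch.zeros(n_coarse))
        self.fine_gate = nn.Parameter(torch.zeros(n_fine))
        self.classifier = nn.Linear(n_coarse + n_fine, n_classes)

    def forward(self, img_emb, patch_emb, coarse_txt, fine_txt):
        # img_emb:    (B, D)    whole-image embedding from a CLIP-like encoder
        # patch_emb:  (B, P, D) patch embeddings from the same encoder
        # coarse_txt: (Kc, D)   coarse concept text embeddings
        # fine_txt:   (Kf, D)   fine-grained concept text embeddings
        img_emb = F.normalize(img_emb, dim=-1)
        patch_emb = F.normalize(patch_emb, dim=-1)
        coarse_txt = F.normalize(coarse_txt, dim=-1)
        fine_txt = F.normalize(fine_txt, dim=-1)

        # Coarse level: image-to-concept cosine similarity.
        coarse_act = img_emb @ coarse_txt.T                    # (B, Kc)
        # Fine level: best-matching patch per concept (max over patches).
        fine_act = (patch_emb @ fine_txt.T).max(dim=1).values  # (B, Kf)

        # Sparse concept selection via gates (illustrative simplification).
        coarse_act = coarse_act * torch.sigmoid(self.coarse_gate)
        fine_act = fine_act * torch.sigmoid(self.fine_gate)

        bottleneck = torch.cat([coarse_act, fine_act], dim=-1)
        return self.classifier(bottleneck), bottleneck

In this reading, the gated coarse and patch-level concept activations form the interpretable bottleneck, and inspecting which concepts survive the gates is what provides the two-level explanation.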
Main file: 2310.02116v2.pdf (3.74 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04709806, version 1 (26-09-2024)

Identifiers

  • HAL Id: hal-04709806, version 1

Cite

Konstantinos Panousis, Dino Ienco, Diego Marcos. Coarse-to-Fine Concept Bottleneck Models. NeurIPS 2024 - 38th Annual Conference on Neural Information Processing Systems, Dec 2024, Vancouver (BC), Canada. ⟨hal-04709806⟩