Interpretable and Accurate Fine-grained Recognition via Region Grouping

Paper: https://openaccess.thecvf.com/content_CVPR_2020/papers/Huang_Interpretable_and_Accurate_Fine-grained_Recognition_via_Region_Grouping_CVPR_2020_paper.pdf

Code: https://github.com/zxhuang1698/interpretability-by-parts

We present an interpretable deep model for fine-grained visual recognition. At the core of our method lies the integration of region-based part discovery and attribution within a deep neural network. Our model is trained using image-level object labels, and provides an interpretation of its results via the segmentation of object parts and the identification of their contributions towards classification. To facilitate the learning of object parts without direct supervision, we explore a simple prior of the occurrence of object parts. We demonstrate that this prior, when combined with our region-based part discovery and attribution, leads to an interpretable model that remains highly accurate. Our model is evaluated on major fine-grained recognition datasets, including CUB-200, CelebA and iNaturalist. Our results compare favorably to state-of-the-art methods on classification tasks, and outperform previous approaches on the localization of object parts.
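To make the two core ideas concrete, here is a minimal numpy sketch (not the authors' released code) of soft region grouping and an occurrence prior: per-pixel part logits are turned into soft part-assignment maps, features are pooled per part region, and a regularizer nudges each part's average occurrence toward a target rate. All shapes, variable names, and the target rate of 0.5 are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

B, K, C, H, W = 4, 5, 8, 7, 7                    # batch, parts, channels, spatial (assumed)
rng = np.random.default_rng(0)
feats = rng.standard_normal((B, C, H, W))        # stand-in for a backbone feature map
part_logits = rng.standard_normal((B, K, H, W))  # per-pixel part scores

# Region grouping: softly assign every pixel to one of K parts,
# so each pixel's assignment weights sum to 1 across parts.
assign = softmax(part_logits, axis=1)            # (B, K, H, W)

# Part features: average the feature map over each soft part region;
# these per-part vectors would feed the classifier / attribution head.
area = assign.sum(axis=(2, 3))                   # (B, K) soft region sizes
pooled = np.einsum('bkhw,bchw->bkc', assign, feats) / area[..., None]

# Occurrence prior: a part "occurs" if some pixel assigns strongly to it;
# penalize deviation of the batch-average occurrence from a target rate.
occurrence = assign.max(axis=(2, 3))             # (B, K)
prior_loss = float(((occurrence.mean(axis=0) - 0.5) ** 2).mean())
```

In training, `prior_loss` would be added to the classification loss so that parts are neither always-on nor collapsed to a single region; the paper's actual prior and attribution mechanism are more involved than this sketch.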

