Commit f3db293

add input gradient tutorial; update README
1 parent 36206fe commit f3db293

File tree

3 files changed: +1504 −2 lines

README.md

Lines changed: 4 additions & 1 deletion
@@ -167,6 +167,7 @@ Two dimensions (representations of explanation results and types of the target m
 | [ForgettingEvents](https://github.com/PaddlePaddle/InterpretDL/blob/master/interpretdl/interpreter/forgetting_events.py) | Dataset-Level | Differentiable |
 | [TIDY (Training Data Analyzer)](https://github.com/PaddlePaddle/InterpretDL/blob/master/tutorials/TIDY.ipynb) | Dataset-Level | Differentiable |
 | [Consensus](https://github.com/PaddlePaddle/InterpretDL/blob/master/interpretdl/interpreter/consensus.py) | Features | Cross-Model |
+| [Generic Attention](https://github.com/PaddlePaddle/InterpretDL/blob/master/interpretdl/interpreter/generic_attention.py) | Input Features | Specific: Bi-Modal Transformers |

 \* LRP requires that the model is of specific implementations for relevance back-propagation.
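The new Generic Attention interpreter implements the relevance-propagation rule of Chefer et al. for Transformers: a running relevance map is updated layer by layer as R ← R + Ā·R, where Ā is the head-averaged, ReLU-clipped elementwise product of the attention weights and their gradients. A minimal single-modality NumPy sketch of that update (shapes and names here are illustrative, not InterpretDL's API):

```python
import numpy as np

def update_relevance(R, attn, attn_grad):
    """One Generic-Attention-style relevance update for a self-attention layer.

    R         : (tokens, tokens) running relevance map, initialized to identity.
    attn      : (heads, tokens, tokens) attention weights of the layer.
    attn_grad : (heads, tokens, tokens) gradient of the target score w.r.t. attn.
    """
    # Head-average of the gradient-weighted attention, negative parts clipped.
    A_bar = np.maximum(attn_grad * attn, 0.0).mean(axis=0)
    # Residual-style accumulation: R <- R + A_bar @ R.
    return R + A_bar @ R

rng = np.random.default_rng(0)
tokens, heads = 5, 4
R = np.eye(tokens)                       # relevance starts as the identity
for _ in range(3):                       # three stacked attention layers
    attn = rng.random((heads, tokens, tokens))
    grad = rng.standard_normal((heads, tokens, tokens))
    R = update_relevance(R, attn, grad)
# Row i of R scores how much each token contributed to token i's output.
```

Because Ā is non-negative and R starts at the identity, the accumulated map stays non-negative with diagonal entries of at least one.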

@@ -179,7 +180,7 @@ Two dimensions (representations of explanation results and types of the target m
 ## Planning Algorithms

 * Intermediate Features Interpretation Algorithm
-  - [ ] Bi-Modal Explanation
+  - [ ] More Transformers Specific Interpreters

 * Dataset-Level Interpretation Algorithms
   - [ ] Influence Function
@@ -214,6 +215,8 @@ Two dimensions (representations of explanation results and types of the target m
 * `Perturbation`: [Evaluating the visualization of what a deep neural network has learned.](https://arxiv.org/abs/1509.06321)
 * `Deletion&Insertion`: [RISE: Randomized Input Sampling for Explanation of Black-box Models.](https://arxiv.org/abs/1806.07421)
 * `PointGame`: [Top-down Neural Attention by Excitation Backprop.](https://arxiv.org/abs/1608.00507)
+* `Generic Attention`: [Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers](https://arxiv.org/abs/2103.15679)

 # Copyright and License
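Among the evaluation entries above, `Deletion&Insertion` refers to RISE's faithfulness metrics: features are removed (or inserted) in order of decreasing saliency, and the model's score is tracked along the way. A toy NumPy sketch of the deletion side, averaging the score curve as a discrete stand-in for its AUC (the `model` callable is hypothetical, not an InterpretDL interface):

```python
import numpy as np

def deletion_score(model, image, saliency, steps=10):
    """Deletion metric: zero out pixels from most to least salient and
    average the model's score along the curve (lower means more faithful)."""
    flat = image.flatten().astype(float)
    order = np.argsort(saliency.flatten())[::-1]   # most salient first
    scores = [model(flat)]
    chunk = max(1, flat.size // steps)
    for i in range(0, flat.size, chunk):
        flat[order[i:i + chunk]] = 0.0             # "delete" the next chunk
        scores.append(model(flat))
    return float(np.mean(scores))

# Toy model: the score is the mean intensity, so a saliency map that ranks
# bright pixels first deletes the evidence sooner and scores lower.
image = np.arange(16.0).reshape(4, 4)
good_saliency = image.copy()    # perfect ranking of the bright pixels
bad_saliency = -image           # worst possible ranking
model = lambda x: x.mean()
auc_good = deletion_score(model, image, good_saliency)
auc_bad = deletion_score(model, image, bad_saliency)
# A faithful saliency map yields auc_good < auc_bad.
```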

tutorials/Input_Gradient.ipynb

Lines changed: 1499 additions & 0 deletions
Large diffs are not rendered by default.

tutorials/README.md

Lines changed: 1 addition & 1 deletion
@@ -55,7 +55,7 @@ The available (and planning) tutorials are listed below:
 [Ernie1.0 in Chinese](https://github.com/PaddlePaddle/InterpretDL/blob/master/tutorials/ernie-1.0-zh-chnsenticorp.ipynb) ([on NBViewer](https://nbviewer.org/github/PaddlePaddle/InterpretDL/blob/master/tutorials/ernie-1.0-zh-chnsenticorp.ipynb))
 as examples. For text visualizations, NBViewer gives better and colorful rendering results.

-- Input Gradient Interpreters (to appear). This tutorial introduces the input gradient based interpretation algorithms.
+- [Input Gradient Interpreters](https://github.com/PaddlePaddle/InterpretDL/blob/master/tutorials/Input_Gradient.ipynb). This tutorial introduces the input gradient based interpretation algorithms.

 - LIME and Its Variants (to appear). This tutorial introduces the LIME algorithms and many advanced improvements based on LIME.
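The newly linked tutorial covers input-gradient methods: the saliency of each input feature is the derivative of the target score with respect to that feature, optionally multiplied by the feature value (Gradient×Input). For a linear score the gradient is exactly the weight vector, which allows a minimal hand-computed sketch without an autodiff framework (the model and names below are illustrative, not the tutorial's code):

```python
import numpy as np

# A linear "model": score(x) = w @ x. Its input gradient is exactly w,
# so both saliency variants can be computed by hand here.
w = np.array([0.5, -2.0, 0.0, 1.0])
x = np.array([1.0, 1.0, 3.0, -1.0])

grad = w                          # d(score)/dx for a linear model
vanilla = np.abs(grad)            # vanilla gradient saliency
grad_x_input = np.abs(grad * x)   # Gradient x Input saliency

# Feature 2 has a large value but zero weight: both variants assign it
# zero importance, while feature 1 (weight -2.0) dominates.
```

In the tutorial, `grad` would instead come from back-propagating a class score through the network to the input pixels.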
