
Commit 29015ac

update images in asl-20/asl-23/asl-25 notebooks.
1 parent 0089659 commit 29015ac

4 files changed, +32 -26 lines changed

Diff for: asl-15-classifier-deployment-dpu.ipynb (+14 -14)

@@ -344,7 +344,7 @@
 "outputs": [],
 "source": [
 "\n",
-"model = keras.models.load_model('tf2_asl_classifier3.h5')\n"
+"model = keras.models.load_model('tf2_asl_classifier13.h5')\n"
 ]
 },
 {
@@ -484,7 +484,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"quantized_model.save('tf2_asl_classifier_quantized.h5')"
+"quantized_model.save('tf2_asl_classifier13_quantized.h5')"
 ]
 },
 {
@@ -672,39 +672,39 @@
 ],
 "source": [
 "!vai_c_tensorflow2 \\\n",
-" --model ./tf2_asl_classifier_quantized.h5 \\\n",
+" --model ./tf2_asl_classifier13_quantized.h5 \\\n",
 " --arch ./arch/B4096/arch-zcu104.json \\\n",
-" --output_dir ./model/B4096/ \\\n",
+" --output_dir ./model_vgg16/B4096/ \\\n",
 " --net_name asl_classifier\n",
 "\n",
 "!vai_c_tensorflow2 \\\n",
-" --model ./tf2_asl_classifier_quantized.h5 \\\n",
+" --model ./tf2_asl_classifier13_quantized.h5 \\\n",
 " --arch ./arch/B3136/arch-kv260.json \\\n",
-" --output_dir ./model/B3136/ \\\n",
+" --output_dir ./model_vgg16/B3136/ \\\n",
 " --net_name asl_classifier\n",
 "\n",
 "!vai_c_tensorflow2 \\\n",
-" --model ./tf2_asl_classifier_quantized.h5 \\\n",
+" --model ./tf2_asl_classifier13_quantized.h5 \\\n",
 " --arch ./arch/B2304/arch-b2304-lr.json \\\n",
-" --output_dir ./model/B2304/ \\\n",
+" --output_dir ./model_vgg16/B2304/ \\\n",
 " --net_name asl_classifier\n",
 "\n",
 "!vai_c_tensorflow2 \\\n",
-" --model ./tf2_asl_classifier_quantized.h5 \\\n",
+" --model ./tf2_asl_classifier13_quantized.h5 \\\n",
 " --arch ./arch/B1152/arch-b1152-hr.json \\\n",
-" --output_dir ./model/B1152/ \\\n",
+" --output_dir ./model_vgg16/B1152/ \\\n",
 " --net_name asl_classifier\n",
 "\n",
 "!vai_c_tensorflow2 \\\n",
-" --model ./tf2_asl_classifier_quantized.h5 \\\n",
+" --model ./tf2_asl_classifier13_quantized.h5 \\\n",
 " --arch ./arch/B512/arch-b512-lr.json \\\n",
-" --output_dir ./model/B512/ \\\n",
+" --output_dir ./model_vgg16/B512/ \\\n",
 " --net_name asl_classifier\n",
 "\n",
 "!vai_c_tensorflow2 \\\n",
-" --model ./tf2_asl_classifier_quantized.h5 \\\n",
+" --model ./tf2_asl_classifier13_quantized.h5 \\\n",
 " --arch ./arch/B128/arch-b128-lr.json \\\n",
-" --output_dir ./model/B128/ \\\n",
+" --output_dir ./model_vgg16/B128/ \\\n",
 " --net_name asl_classifier\n"
 ]
 },
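The six compile invocations in the hunk above are identical apart from the DPU architecture file and output directory, which is why a path rename like this commit's touches twelve lines. As a sketch (not the notebook's own code), the same commands can be generated from one table; the arch-file paths and names below are copied from the diff, and the helper name is ours:

```python
# Generate the vai_c_tensorflow2 command lines from the hunk above.
# Arch names and JSON paths are taken verbatim from the diff.
ARCHS = [
    ("B4096", "arch/B4096/arch-zcu104.json"),
    ("B3136", "arch/B3136/arch-kv260.json"),
    ("B2304", "arch/B2304/arch-b2304-lr.json"),
    ("B1152", "arch/B1152/arch-b1152-hr.json"),
    ("B512",  "arch/B512/arch-b512-lr.json"),
    ("B128",  "arch/B128/arch-b128-lr.json"),
]

def compile_command(arch_name, arch_json,
                    model="tf2_asl_classifier13_quantized.h5",
                    out_root="model_vgg16", net_name="asl_classifier"):
    """Build one vai_c_tensorflow2 shell command as a string."""
    return (f"vai_c_tensorflow2 --model ./{model} --arch ./{arch_json} "
            f"--output_dir ./{out_root}/{arch_name}/ --net_name {net_name}")

commands = [compile_command(name, path) for name, path in ARCHS]
```

With this layout, renaming the quantized model or the output root (as this commit does) is a one-argument change instead of a dozen edits.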

Diff for: asl-20-classifier-compatibility-dpu.ipynb (+11 -5)

@@ -19,7 +19,7 @@
 "4. Inspect model for Vitis-AI_DPU compatibility\n",
 "5. Iterate until a compatible model has been defined\n",
 "\n",
-"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/VGG16_03_asl_vitis_ai.png' width=1000 align='center'><br/>"
+"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/MobileNetV2_03_asl_vitis_ai.png' width=1000 align='center'><br/>"
 ]
 },
 {
@@ -204,7 +204,7 @@
 " )\n",
 "```\n",
 "\n",
-"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/VGG16_01_imagenet.png' width=1000 align='center'><br/>"
+"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/MobileNetV2_01_imagenet.png' width=1000 align='center'><br/>"
 ]
 },
 {
@@ -7294,7 +7294,7 @@
 "### 4.1 VGG Convolutional Base\n",
 "We begin by creating a model of the VGG-16 convolutional base. We can do this by instantiating the model and setting `include_top = False`, which excludes the fully connected layers. In this notebook, we will instantiate the model with weights that were learned by training the model on the ImageNet dataset.\n",
 "\n",
-"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/VGG16_10_pretrained_base.png' width=1000 align='center'><br/>"
+"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/MobileNetV2_10_pretrained_base.png' width=1000 align='center'><br/>"
 ]
 },
 {
@@ -7796,7 +7796,7 @@
 "### 4.2 Add the Classification Layer (attempt 1)\n",
 "Since we intend to train and use the model to classify hand signals from the ASL dataset (which has 29 classes), we will need to add our own classification layer. In this example, we have chosen to use just a single fully connected dense layer that contains 256 nodes, followed by a softmax output layer that contains 29 nodes for each of the 29 classes. The number of dense layers and the number of nodes per layer is a design choice, but the number of nodes in the output layer must match the number of classes in the dataset.\n",
 "\n",
-"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/VGG16_02_asl.png' width=1000 align='center'><br/>"
+"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/MobileNetV2_02_asl.png' width=1000 align='center'><br/>"
 ]
 },
 {
@@ -14647,7 +14647,13 @@
 "### 4.4 Add the Classification Layer (attempt 2)\n",
 "Since we intend to train and use the model to classify hand signals from the ASL dataset (which has 29 classes), we will need to add our own classification layer. In this example, we have chosen to use just a single fully connected dense layer that contains 256 nodes, followed by a softmax output layer that contains 29 nodes for each of the 29 classes. The number of dense layers and the number of nodes per layer is a design choice, but the number of nodes in the output layer must match the number of classes in the dataset.\n",
 "\n",
-"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/VGG16_03_asl_vitis_ai.png' width=1000 align='center'><br/>"
+"It turns out that during model compilation, the dense layer of size 1280 is not supported on the smaller DPU B128 architecture.\n",
+"\n",
+"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/MobileNetV2_02_asl_B128_issue.png' width=1000 align='center'><br/>\n",
+"\n",
+"For this reason we reduce the size of the last layers from 1280 to 1000.\n",
+"\n",
+"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/MobileNetV2_03_asl_vitis_ai.png' width=1000 align='center'><br/>"
 ]
 },
 {

Diff for: asl-23-classifier-fine-tuning.ipynb (+6 -6)

@@ -19,7 +19,7 @@
 "4. Add our custom classifier layer for the ASL dataset\n",
 "5. Train the model (the last four layers of the feature extractor, plus the classifier)\n",
 "\n",
-"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/VGG16_06_asl_fine_tuning.png' width=1000 align='center'><br/>"
+"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/MobileNetV2_06_asl_fine_tuning.png' width=1000 align='center'><br/>"
 ]
 },
 {
@@ -330,7 +330,7 @@
 " )\n",
 "```\n",
 "\n",
-"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/VGG16_01_imagenet.png' width=1000 align='center'><br/>\n"
+"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/MobileNetV2_01_imagenet.png' width=1000 align='center'><br/>\n"
 ]
 },
 {
@@ -845,7 +845,7 @@
 "### 4.1 VGG Convolutional Base\n",
 "We begin by creating a model of the VGG-16 convolutional base. We can do this by instantiating the model and setting `include_top = False`, which excludes the fully connected layers. In this notebook, we will instantiate the model with weights that were learned by training the model on the ImageNet dataset.\n",
 "\n",
-"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/VGG16_10_pretrained_base.png' width=1000 align='center'><br/>\n"
+"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/MobileNetV2_10_pretrained_base.png' width=1000 align='center'><br/>\n"
 ]
 },
 {
@@ -1348,7 +1348,7 @@
 "\n",
 "In the previous section, we set the `trainable` attribute of the convolutional base to `True`. This now allows us to \"freeze\" a selected number of layers in the convolutional base so that only the last few layers in the convolutional base are trainable. \n",
 "\n",
-"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/VGG16_11_freeze_layers.png' width=1000 align='center'><br/>\n",
+"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/MobileNetV2_11_freeze_layers.png' width=1000 align='center'><br/>\n",
 "\n",
 "#### How to freeze only a few layers?\n",
 "There are two ways to specify which layers in the model are trainable (tunable). \n",
@@ -1871,7 +1871,7 @@
 "### 4.3 Add the Classification Layer\n",
 "Since we intend to train and use the model to classify hand signals from the ASL dataset (which has 29 classes), we will need to add our own classification layer. In this example, we have chosen to use just a single fully connected dense layer that contains 256 nodes, followed by a softmax output layer that contains 29 nodes for each of the 29 classes. The number of dense layers and the number of nodes per layer is a design choice, but the number of nodes in the output layer must match the number of classes in the dataset.\n",
 "\n",
-"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/VGG16_12_add_classifier.png' width=1000 align='center'><br/>"
+"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/MobileNetV2_12_add_classifier.png' width=1000 align='center'><br/>"
 ]
 },
 {
@@ -2540,7 +2540,7 @@
 "source": [
 "### 4.6 Compile and Train the Model\n",
 "\n",
-"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/VGG16_06_asl_fine_tuning.png' width=1000 align='center'><br/>"
+"<img src='https://github.com/AlbertaBeef/asl_tutorial/raw/2022.2/images/MobileNetV2_06_asl_fine_tuning.png' width=1000 align='center'><br/>"
 ]
 },
 {

Diff for: asl-25-classifier-deployment-dpu.ipynb (+1 -1)

@@ -11,7 +11,7 @@
 "\n",
 "This notebook describes how to quantize and compile a TensorFlow2 model with Vitis-AI for deployment.\n",
 "\n",
-"<img src='./images/VGG16_06_asl_fine_tuning.png' width=1000 align='center'><br/>"
+"<img src='./images/MobileNetV2_06_asl_fine_tuning.png' width=1000 align='center'><br/>"
 ]
 },
 {
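Every image change in this commit follows the same pattern: a `VGG16_<nn>_<name>.png` filename becomes `MobileNetV2_<nn>_<name>.png`, with the rest of the URL untouched. As a sketch, a small hypothetical helper (the function name is ours, not from the repository) captures the rename applied across all four notebooks:

```python
import re

def retarget_image(url: str) -> str:
    """Rewrite a VGG16 image filename to its MobileNetV2 counterpart,
    mirroring the renames in this commit. URLs without a matching
    VGG16_*.png filename are returned unchanged."""
    return re.sub(r"VGG16_(\d+_\w+\.png)", r"MobileNetV2_\1", url)
```

For example, `retarget_image('./images/VGG16_06_asl_fine_tuning.png')` returns `'./images/MobileNetV2_06_asl_fine_tuning.png'`, matching the asl-25 hunk above.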

0 commit comments
