Commit 7d91625

Editing README
1 parent 1d002c4 commit 7d91625

2 files changed

README.md

Lines changed: 15 additions & 11 deletions
@@ -4,7 +4,7 @@
 **Dataset used: [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).
 This dataset has more than 50,000 images of 43 classes.**

-**I was able to reach a +99% validation accuracy, and a 98.1% testing accuracy.**
+**I was able to reach a +99% validation accuracy and a 97.6% testing accuracy.**

 ## Pipeline architecture:
 - **Load The Data.**
@@ -119,11 +119,9 @@ In this step, we will apply several preprocessing steps to the input images to a
 3. Local Histogram Equalization.
 4. Normalization.

-1.
-**Shuffling**: In general, we shuffle the training data to increase randomness and variety in training dataset, in order for the model to be more stable. We will use `sklearn` to shuffle our data.
+1. **Shuffling**: In general, we shuffle the training data to increase randomness and variety in the training dataset, so that the model is more stable. We will use `sklearn` to shuffle our data.
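
The shuffling step can be sketched with `sklearn.utils.shuffle`; toy arrays stand in here for the real training images, and the names `X_train`/`y_train` are assumptions:

```python
import numpy as np
from sklearn.utils import shuffle

# Toy stand-ins for the real training set; X_train/y_train are assumed names.
X_train = np.arange(12).reshape(4, 3)   # 4 "images", 3 features each
y_train = np.array([0, 1, 2, 3])        # one label per image

# Shuffle images and labels together so each image keeps its label.
X_train, y_train = shuffle(X_train, y_train, random_state=0)
```

Passing both arrays in one call is what keeps image/label pairs aligned after shuffling.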

-2.
-**Grayscaling**: In their paper ["Traffic Sign Recognition with Multi-Scale Convolutional Networks"](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf) published in 2011, P. Sermanet and Y. LeCun stated that using grayscale images instead of color improves the ConvNet's accuracy. We will use `OpenCV` to convert the training images into grey scale.
+2. **Grayscaling**: In their paper ["Traffic Sign Recognition with Multi-Scale Convolutional Networks"](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf), published in 2011, P. Sermanet and Y. LeCun stated that using grayscale images instead of color improves the ConvNet's accuracy. We will use `OpenCV` to convert the training images into grayscale.

 <figure>
 <img src="./traffic-signs-data/Screenshots/Gray.png" width="1072" alt="Combined Image" />
@@ -132,8 +130,7 @@ In this step, we will apply several preprocessing steps to the input images to a
 </figcaption>
 </figure>
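
A minimal sketch of the conversion, written in plain NumPy using the same BT.601 luminance weights that OpenCV's `cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)` applies:

```python
import numpy as np

# Toy RGB batch in place of the real 32x32 training images.
rng = np.random.default_rng(0)
rgb = rng.random((2, 32, 32, 3)).astype(np.float32)

# ITU-R BT.601 weights: gray = 0.299 R + 0.587 G + 0.114 B,
# the same weighted sum cv2.COLOR_RGB2GRAY computes.
weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
gray = rgb @ weights  # one channel per image, shape (2, 32, 32)
```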

-3.
-**Local Histogram Equalization**: This technique simply spreads out the most frequent intensity values in an image, resulting in enhancing images with low contrast. Applying this technique will be very helpfull in our case since the dataset in hand has real world images, and many of them has low contrast. We will use `skimage` to apply local histogram equalization to the training images.
+3. **Local Histogram Equalization**: This technique spreads out the most frequent intensity values in an image, enhancing images with low contrast. It is very helpful in our case, since the dataset contains real-world images, many of which have low contrast. We will use `skimage` to apply local histogram equalization to the training images.

 <figure>
 <img src="./traffic-signs-data/Screenshots/Equalized.png" width="1072" alt="Combined Image" />
@@ -142,8 +139,7 @@ In this step, we will apply several preprocessing steps to the input images to a
 </figcaption>
 </figure>
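
One way to sketch this step with `skimage` is CLAHE (`exposure.equalize_adapthist`), which equalizes histograms over local tiles rather than the whole image; the exact routine and parameters used in the project are assumptions here:

```python
import numpy as np
from skimage import exposure

# Toy low-contrast grayscale image with values squeezed into [0.4, 0.6].
rng = np.random.default_rng(0)
img = 0.4 + 0.2 * rng.random((32, 32))

# Contrast Limited Adaptive Histogram Equalization: spreads intensities
# tile-by-tile, boosting local contrast without over-amplifying noise.
equalized = exposure.equalize_adapthist(img, clip_limit=0.03)
```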

-4.
-**Normalization**: Normalization is a process that changes the range of pixel intensity values. Usually the image data should be normalized so that the data has mean zero and equal variance.
+4. **Normalization**: Normalization is a process that changes the range of pixel intensity values. Image data should usually be normalized so that it has zero mean and equal variance.
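
A common sketch of this step (the exact scheme used in the project is an assumption) maps 8-bit pixel values into roughly zero-mean form:

```python
import numpy as np

# Toy batch of 8-bit grayscale images.
rng = np.random.default_rng(0)
X = rng.integers(0, 256, size=(4, 32, 32)).astype(np.float32)

# Shift and scale so pixel values land in [-1, 1) with roughly zero mean.
X_norm = (X - 128.0) / 128.0
```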

 <figure>
 <img src="./traffic-signs-data/Screenshots/Normalized.png" width="1072" alt="Combined Image" />
@@ -174,7 +170,7 @@ LeNet-5 is a convolutional network designed for handwritten and machine-printed

 **LeNet-5 architecture:**
 <figure>
-<img src="./traffic-signs-data/Screenshots/LeNet.png" width="1072" alt="Combined Image" />
+<img src="LeNet.png" width="1072" alt="Combined Image" />
 <figcaption>
 <p></p>
 </figcaption>
@@ -213,7 +209,7 @@ VGGNet was first introduced in 2014 by K. Simonyan and A. Zisserman from the Uni

 **VGGNet architecture:**
 <figure>
-<img src="./traffic-signs-data/Screenshots/VGGNet.png" width="1072" alt="Combined Image" />
+<img src="VGGNet.png" width="1072" alt="Combined Image" />
 <figcaption>
 <p></p>
 </figcaption>
@@ -342,6 +338,14 @@ Number of new testing examples: 5
 </figcaption>
 </figure>

+Now we'll feed these 5 images into the VGGNet model and output the top 5 softmax probabilities for each prediction.
+
+<figure>
+<img src="./traffic-signs-data/Screenshots/TopSoft.png" width="1072" alt="Combined Image" />
+<figcaption>
+<p></p>
+</figcaption>
+</figure>
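
The top-5 step can be sketched in NumPy; dummy logits stand in for the trained VGGNet's output, and in TensorFlow the equivalent would be `tf.nn.top_k(tf.nn.softmax(logits), k=5)`:

```python
import numpy as np

# Dummy logits for 5 images over the 43 sign classes; the real values
# would come from the trained VGGNet.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 43))

# Numerically stable softmax over the class axis.
shifted = logits - logits.max(axis=1, keepdims=True)
probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)

# Indices and probabilities of the 5 most likely classes per image,
# sorted from most to least probable.
top5_idx = np.argsort(probs, axis=1)[:, ::-1][:, :5]
top5_prob = np.take_along_axis(probs, top5_idx, axis=1)
```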

 The VGGNet model was able to predict the right class for each of the 5 new test images. Test Accuracy = 100.0%.
 In all cases, the model was very certain (80% - 100%).