Train and Deploy an ALPR for Colombian License Plates with Tiny-YoloV4 and Jetson Nano without Internet

Rafael Rodriguez
7 min read · Sep 18, 2020


There is a wide range of tutorials and Medium articles about training YoloV4 for custom object detection, but what about deploying? Why limit deployment to a web service on a cloud provider such as AWS or Azure when you can run it locally? That is what we will do here.

For a small parking lot, the number of free spaces is hard to determine automatically. We can detect whether a car occupies a certain place and count the occupied spaces, and we can also count vehicles at the entry.

Final result: the detected car and plate positions

Steps :

1. The Architecture of the Network

2. Components

3. Labeling License Plates, Cars, and Motorcycles

4. Installing Darknet in Jetson Nano

5. Running Basic Test

1. ARCHITECTURE OF THE NETWORK

To deploy this solution, we should put cameras in places close to car circulation. The locations could be the entry, the exit, and the main route of the parking lot, to capture most or all of the license plates and count the number of vehicles in certain parking places.

Installing cameras in different places forces us to make significant investments in data connections and power. An alternative is to use wifi cameras and reuse some of the electrical installations already available in the parking lot.

The following is the proposed network architecture:

LABELING THE IMAGES

Here is the tedious step: to train a detector, we have to label images. There are many tools for that; here we will use a program from Yonghye Kwon available on GitHub, https://github.com/developer0hye/Yolo_Label. There are a couple of quirks in this app that you should be aware of:

  • If you can’t see the complete window, change your computer’s screen resolution, because the app window cannot be resized
  • Sometimes the box drawn over the image does not close entirely, but don’t worry about that

Put all the car images in a folder named car and all the motorcycle images in a folder named motorcycle. After that, put both folders in a new folder named ParkingImages and create a new file named obj.names inside it, containing the words car, plate, and motorcycle, one word per line.

Run YoloLabel.exe and let’s label the images. You have to do this for both the car and motorcycle folders. We can’t merge the photos yet because we need a train/test split with a balanced number of images of each class.

Folder Structure
Inside the ParkingImages Folder
obj.names File
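The layout above can also be created with a short Python sketch like this (the folder and file names follow the ones described in the previous paragraph):

```python
import os

base = "ParkingImages"
for sub in ("car", "motorcycle"):
    os.makedirs(os.path.join(base, sub), exist_ok=True)

# obj.names holds one class name per line; the line order defines the class IDs
with open(os.path.join(base, "obj.names"), "w") as f:
    f.write("car\nplate\nmotorcycle\n")
```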

After that, we have to run the following script to split your data into train and test sets. Here it is; put it in the ParkingImages folder and run it with one of the following commands.

$ python TrainTestSplit.py

or

$ python3 TrainTestSplit.py

Note: if you don’t have the imutils library installed, run the following command:

$ pip install imutils
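The split script itself is embedded above; in case you cannot access it, a minimal stand-in that writes train.txt and test.txt could look like this (the 80/20 ratio and the data/ path prefix are my assumptions, not necessarily what the original script uses):

```python
import glob
import os
import random

# Gather every labeled image from both class folders
images = sorted(glob.glob("car/*.jpg") + glob.glob("motorcycle/*.jpg"))
random.seed(42)       # fixed seed so the split is reproducible
random.shuffle(images)

split = int(0.8 * len(images))  # assumed 80% train / 20% test
with open("train.txt", "w") as f:
    f.writelines("data/" + os.path.basename(p) + "\n" for p in images[:split])
with open("test.txt", "w") as f:
    f.writelines("data/" + os.path.basename(p) + "\n" for p in images[split:])
```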

After running the script, merge the car and motorcycle images into a folder named data and delete the car and motorcycle folders. You will end up with the following structure.

At the end of this step, you should have a folder named data with all your images and a .txt label file for each image. You will also have obj.names with the names of the objects to detect (in our case car, plate, and motorcycle), plus train.txt and test.txt, two files containing the paths of the images used to train and test Tiny-YoloV4.
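For reference, each per-image .txt file produced by Yolo_Label uses the standard YOLO annotation format: one line per object, with a class ID followed by the box center and size, all normalized to the image dimensions. A quick sketch of decoding one such line (the values here are made up):

```python
# YOLO label line: <class_id> <x_center> <y_center> <width> <height>, all in [0, 1]
line = "0 0.512 0.634 0.210 0.115"   # hypothetical "car" annotation

fields = line.split()
class_id = int(fields[0])
xc, yc, w, h = map(float, fields[1:])

# Convert back to pixel coordinates for a 1280x720 image
img_w, img_h = 1280, 720
left = int((xc - w / 2) * img_w)
top = int((yc - h / 2) * img_h)
print(class_id, left, top)   # → 0 520 415
```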

TRAINING TINY-YOLOV4

Here is the exciting part: we will train our custom object detector using Colab, Google’s open platform for running Python code on a GPU for free.

“We will not reinvent the wheel”.

With this phrase in mind, we will follow a GitHub repo to train Tiny-YoloV4.

Our training process is based on Github: https://github.com/reubenbf/quick-yolov4-tiny

But first, we have to compress the ParkingImages folder into ParkingImages.zip and upload it to the root folder of your Google Drive, because it is needed in step 5.

I made some modifications to avoid using Roboflow; my Colab notebook is here. The notebook explains how to upload the data and where to place the folder with the images we labeled in the previous steps.

At the end of this step, you should finish with 4 main files:

  • yolov4-tiny-obj_best.weights — you can find it in /darknet/backup/
  • yolov4-tiny-obj.cfg — you can find it in /darknet/cfg/
  • obj.data — you can find it in /darknet/data/
  • obj.names — you can find it in /darknet/data/
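For reference, obj.data is a small darknet configuration file; for our three classes it typically looks like this (the exact paths depend on how the Colab notebook laid things out):

```
classes = 3
train = data/train.txt
valid = data/test.txt
names = data/obj.names
backup = backup/
```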

That’s what we need to deploy on the Jetson Nano.

INSTALLING DARKNET IN JETSON NANO

To install Darknet on Jetson Nano, you have to follow these steps:

$ mkdir ${HOME}/project
$ cd ${HOME}/project
$ git clone https://github.com/AlexeyAB/darknet.git
$ cd darknet

Verify you are in the darknet folder with the pwd command, as shown. In my case, rafael is in the path, but this part of the path may differ for you:

$ pwd
/home/rafael/Documents/project/darknet

Delete the existing Makefile in the darknet folder and replace it with this one, which contains all the configuration needed to compile on the Jetson Nano.
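The linked Makefile is the one to use; for context, the key settings it enables for the Nano are typically the following (the CUDNN_HALF, OPENCV, and LIBSO values are my assumptions; the Nano’s Maxwell GPU is compute capability 5.3):

```
GPU=1
CUDNN=1
CUDNN_HALF=0
OPENCV=1
LIBSO=1
ARCH= -gencode arch=compute_53,code=[sm_53,compute_53]
```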

Run the following command to compile darknet

$ make

# to get the max out of the Jetson Nano
$ sudo nvpmodel -m 0
$ sudo jetson_clocks

With this part done, we are ready to run the first test.

RUNNING BASIC TEST

First of all, let’s get an image to test with. You can download one from www.tucarro.com.co; to download it, open the image in a different tab and change the extension from .webp to .jpeg manually. Here is an example:

https://http2.mlstatic.com/spark-life-full-aire-acondicionada-D_NQ_NP_650151-MCO43229406559_082020-F.webp

becomes

https://http2.mlstatic.com/spark-life-full-aire-acondicionada-D_NQ_NP_650151-MCO43229406559_082020-F.jpeg

and save it as car.jpeg in the darknet folder.

Copy the files from the training step as follows:

  • yolov4-tiny-obj_best.weights to the /darknet folder
  • yolov4-tiny-obj.cfg to the /darknet folder
  • obj.data — rename it to coco.data and move it to the /darknet/cfg folder
  • obj.names to the /darknet/data folder

Then run the detector:
$ ./darknet detector test cfg/coco.data \
yolov4-tiny-obj.cfg \
yolov4-tiny-obj_best.weights \
car.jpeg
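If you prefer to drive this from Python, the same command can be wrapped with subprocess (a sketch; it assumes you run it from the darknet folder with the files placed as listed above):

```python
import subprocess

def detect(image, data="cfg/coco.data",
           cfg="yolov4-tiny-obj.cfg",
           weights="yolov4-tiny-obj_best.weights"):
    """Run darknet on a single image and return its console output.

    darknet also writes the annotated image to predictions.jpg."""
    cmd = ["./darknet", "detector", "test", data, cfg, weights, image]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# Example (only works on the Nano with darknet compiled):
# print(detect("car.jpeg"))
```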

After that, you will see the image with the bounding boxes.

Predictions

As we can see, there are some false positives. The first image is closer to the images used to train the model, so selecting the training images is crucial. It is a trade-off between accuracy (trying to get similar photos by controlling the position, the camera used to take the pictures, and the light conditions) and generic usage (trying to get many images from different angles, cameras, and light conditions).

NEXT STEPS

  • Script to run the detector in real-time
  • Segment Characters and read the License Plate


Written by Rafael Rodriguez

Electrical Engineer | Telecommunication Engineer | Scrum Master | Computer Vision Engineer | Web Developer rrodriguezmarulanda@gmail.com
