Now just run darknet:

!cd $DN; ./darknet detector -dont_show -map train data/obj.data cfg/yolov3-train.cfg darknet53.conv.74

It ran, saying:

608 x 608
Create 64 permanent cpu-threads

Until it stopped with:

Tensor Cores are disabled until the first 3000 iterations are reached.
(next mAP calculation at 1000 iterations) 100/500200: loss=771.2 hours left=810.0
100: 771.169434, 816.347351 avg loss, 0.000000 rate, 5.082846 seconds, 6400 images, 810.018514 hours left
Saving weights to /content/darknet/yolo-license-plates/yolov3-train_last.weights
Couldn't open file: /content/darknet/yolo-license-plates/yolov3-train_last.weights

Darknet doesn't create the backup directory itself, so I added a mkdir, plus date calls to time the run.

!mkdir $DN/yolo-license-plates/
!date
!cd $DN; ./darknet detector -dont_show -map train data/obj.data cfg/yolov3-train.cfg darknet53.conv.74
!date

Details can be found on the Darknet website.

Since Colab will time out due to its usage limits, we should save the weights to our Google Drive instead. Let's also change the paths to avoid spaces.

from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)

root_dir = "/content/gdrive/MyDrive"
base_dir = root_dir + '/Colab Notebooks/Cars/'

Then make the directory on Drive and update obj.data so its backup entry points there. No quotes around the path inside the file, since Darknet would treat them as part of it.

!mkdir "$root_dir/yolo-license-plates"
!echo -e 'classes = 1\ntrain = $DN/data/train.txt\nvalid = $DN/data/test.txt\nlabels = $DN/data/labels.txt\nnames = $DN/data/cars/classes.txt\nbackup = $root_dir/yolo-license-plates' > $DN/data/obj.data
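Colab expands the Python variables on the ! line before it reaches the shell, so with DN set to /content/darknet (as the log above shows), the resulting obj.data should look roughly like this:

classes = 1
train = /content/darknet/data/train.txt
valid = /content/darknet/data/test.txt
labels = /content/darknet/data/labels.txt
names = /content/darknet/data/cars/classes.txt
backup = /content/gdrive/MyDrive/yolo-license-plates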

This way the weights get saved directly to our Google Drive and we don't lose our progress when Colab cuts us off.
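If a session does get cut off, training can be resumed from the last checkpoint Darknet saved to Drive. A command along these lines should work; the _last.weights name matches what Darknet printed above:

!cd $DN; ./darknet detector -dont_show -map train data/obj.data cfg/yolov3-train.cfg "$root_dir/yolo-license-plates/yolov3-train_last.weights"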

Once there are .weights files in yolo-license-plates, you can run detection with them. You'll need the .weights file, the matching .cfg, and classes.txt.
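As a minimal sketch of what that looks like, here is inference with OpenCV's DNN module. The file names are assumptions based on this training run, car.jpg is a hypothetical test image, and the thresholds are arbitrary:

import cv2

CFG = "yolov3-train.cfg"                # assumed: the training cfg from this run
WEIGHTS = "yolov3-train_last.weights"   # assumed: a checkpoint from yolo-license-plates
CLASSES = "classes.txt"

# Load class names (just one here, since obj.data says classes = 1)
with open(CLASSES) as f:
    class_names = [line.strip() for line in f if line.strip()]

# Load the Darknet model through OpenCV's DNN module
net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
model = cv2.dnn_DetectionModel(net)
# 608 x 608 matches the network size printed when training started
model.setInputParams(size=(608, 608), scale=1/255, swapRB=True)

img = cv2.imread("car.jpg")  # hypothetical test image
class_ids, scores, boxes = model.detect(img, confThreshold=0.5, nmsThreshold=0.4)
for class_id, score, box in zip(class_ids, scores, boxes):
    print(class_names[int(class_id)], float(score), box)  # box is (x, y, w, h)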

I wasted a lot of time, since Google Colab's usage limits take about a day to reset. Truly annoying, but they want to push you into paying, which is clearly overkill in this case.

The TPU runtime was just as slow as the CPU, so clearly Darknet did not use it.