With the [announcement](https://blog.tensorflow.org/2020/07/tensorflow-2-meets-object-detection-api.html) that the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) is now compatible with TensorFlow 2, I tried to test the new models published in the [TF2 model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md) and train them with my custom data.
This tutorial should be useful for those who already have experience with the API. However, I will also include all the details and working examples for newcomers who are trying to use the Object Detection API for the first time, so hopefully it will be easy for beginners to get started and run their own detection models.
## Roadmap
This tutorial takes you from installation, through running a pre-trained detection model, to training and evaluating your own models on a custom dataset.
1. [Installation](#installation)
2. [Inference with pre-trained models](#inference-with-pre-trained-models)
3. [Preparing your custom dataset for training](#preparing-your-custom-dataset-for-training)
4. Training an object detection model with your custom dataset
5. Exporting your trained model for inference
## Installation
The examples in this repo are tested with Python 3.6 and TensorFlow 2.2.0, but they are expected to work with other TensorFlow 2.x versions and Python 3.5 or higher.
For more installation options, please refer to the original [installation guide](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2.md).
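For a quick overview, the main steps in that guide roughly boil down to the commands below (a condensed sketch of the official instructions, assuming the `protoc` protobuf compiler is already installed; the exact commands may differ on your platform):

```bash
# Clone the models repo, which contains the Object Detection API.
git clone https://github.com/tensorflow/models.git
cd models/research

# Compile the protobuf definitions used by the API.
protoc object_detection/protos/*.proto --python_out=.

# Install the object_detection package (TF2 variant).
cp object_detection/packages/tf2/setup.py .
python -m pip install .

# Optional: sanity-check the installation.
python object_detection/builders/model_builder_tf2_test.py
```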
To run the examples in this repo, you will need some more dependencies:
```bash
# install OpenCV python package
pip install opencv-python
pip install opencv-contrib-python
```
## Inference with pre-trained models
To go through the tutorial, clone this repo and follow the instructions step by step.
You can also select a subset of classes to detect by passing their IDs to the `--class_ids` argument as a comma-delimited string. For example, `--class_ids "1,3"` will run detection only for the classes "person" and "car", which have IDs 1 and 3 respectively (you can check the IDs and labels in the [coco labelmap](models/mscoco_label_map.pbtxt)). If you omit this argument, all objects in the provided labelmap will be detected.
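Just as an illustration, restricting detection to these two classes could look like the sketch below; the script name `detect_objects.py` and every flag except `--class_ids` are assumptions here, so use the actual inference script and arguments described in the instructions above:

```bash
# Hypothetical example: run a pre-trained detector on a folder of images,
# keeping only the "person" (1) and "car" (3) classes.
# The script name and all flags other than --class_ids are assumptions.
python detect_objects.py \
    --model_path models/efficientdet_d0_coco17_tpu-32/saved_model \
    --path_to_labelmap models/mscoco_label_map.pbtxt \
    --images_dir data/samples \
    --class_ids "1,3"
```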
In this tutorial, I am going to use the interesting [raccoon dataset](https://github.com/datitran/raccoon_dataset) collected by [Dat Tran](https://dat-tran.com/).
The raccoon dataset contains a total of 200 images with 217 raccoons, which makes it a suitable size for tutorial examples.
The original [dataset repo](https://github.com/datitran/raccoon_dataset) provides several scripts for handling the dataset and randomly selecting train and test splits of 160 and 40 images, respectively.
However, for convenience and to reduce the effort needed, I have included the dataset images and annotations in this repo (in [data/raccoon_data/](data/raccoon_data/)) and split them manually, taking the first 160 images for training and the last 40 images for testing.
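If you want to reproduce a similar ordered split yourself, a rough sketch is shown below; the `images/` subfolder name is an assumption about the folder layout, so adjust the paths to the actual structure of [data/raccoon_data/](data/raccoon_data/):

```bash
# Sketch: list the image files in a stable (version-sorted) order, then take
# the first 160 names for training and the last 40 for testing.
# The images/ subfolder name is an assumption about the dataset layout.
ls data/raccoon_data/images | sort -V > all_images.txt
head -n 160 all_images.txt > train_images.txt
tail -n 40  all_images.txt > test_images.txt
```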
I recommend checking the original [dataset repo](https://github.com/datitran/raccoon_dataset), along with this [article](https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9) written by the author of the dataset.
The first step in training your model is to generate [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord) files from the dataset annotations.
TFRecord is a binary file format that makes dealing with large datasets more efficient; you can read more about TFRecords in this [article](https://medium.com/mostly-ai/tensorflow-records-what-they-are-and-how-to-use-them-c46bc4bbb564).
The TensorFlow Object Detection API provides examples for generating TFRecords from annotations that follow the same format as the Pascal VOC or Oxford Pet datasets (you can see the instructions [here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/preparing_inputs.md)).
In general, however, your data annotations may come in any format, so let's first generate an intermediate CSV file and then use it to generate our TFRecords.
First, we need to convert the XML annotation files to CSV, using the script [provided in the raccoon dataset repo](https://github.com/datitran/raccoon_dataset/blob/master/xml_to_csv.py). I took this file, refined it a little, and used the argparse package to pass the paths as arguments; you can find the refined file in [data_gen/xml_to_csv.py](data_gen/xml_to_csv.py).
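For example, a conversion run could look like the following sketch; the flag names and folder layout here are assumptions for illustration, so check the script's `--help` output for the actual arguments:

```bash
# Hypothetical usage sketch: convert the training-split XML annotations into a
# single CSV file. The flag names and paths are assumptions, not the script's
# documented interface.
python data_gen/xml_to_csv.py \
    --xml_dir data/raccoon_data/train/annotations \
    --csv_output data/raccoon_data/train_labels.csv
```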
After generating the CSV file, use it to generate the TFRecord file.
The TensorFlow detection repo provides a good [tutorial](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/using_your_own_dataset.md) on working with your custom data and generating TFRecords.
I used the examples provided, fixed some issues to make them work with TF2, and used argparse to make the script easier to reuse with other datasets in the future.
You can find my file in [data_gen/generate_tfrecord.py](data_gen/generate_tfrecord.py), and you can use it as follows:
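(The sketch below only illustrates the general shape of the call; the flag names and paths are assumptions, so check the script's `--help` output for the actual arguments.)

```bash
# Hypothetical usage sketch: build a training TFRecord from the CSV generated
# in the previous step. All flag names and paths are assumptions for
# illustration only.
python data_gen/generate_tfrecord.py \
    --csv_input data/raccoon_data/train_labels.csv \
    --images_dir data/raccoon_data/train/images \
    --labelmap_path models/raccoon_labelmap.pbtxt \
    --output_path data/raccoon_data/train.record
```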