Commit e626e219 authored by Radim Tylecek's avatar Radim Tylecek

Merge branch 'master' of gitlab.inf.ed.ac.uk:3DRMS/Challenge2018

parents b7753b02 4bb7fc4a
## Data
### Download
_IMPORTANT_: Please install [git lfs](https://git-lfs.github.com/) before cloning this repository to retrieve PLY files.
The following commands are suggested for downloading the repository:
```
git lfs install
git config --global credential.helper 'cache --timeout=28800'
git config --global http.sslVerify false
git clone https://username@gitlab.inf.ed.ac.uk/3DRMS/Challenge2018.git
```
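If git lfs was not installed before cloning, the PLY files come down as small text pointer files instead of real models. A quick sanity check (pure Python; the detection relies on the documented first line of the Git LFS pointer format):

```python
def is_lfs_pointer(data: bytes) -> bool:
    """Git LFS pointer files are small text files starting with this spec line."""
    return data.startswith(b"version https://git-lfs.github.com/spec/v1")

# A real binary PLY starts with b"ply"; an un-fetched LFS pointer with the spec line.
print(is_lfs_pointer(b"version https://git-lfs.github.com/spec/v1\noid sha256:abc"))  # True
print(is_lfs_pointer(b"ply\nformat binary_little_endian 1.0\n"))                      # False
```

If a model file turns out to be a pointer, run `git lfs pull` inside the clone to fetch the actual content.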
_NOTE_: Due to a bug in the Gitlab server, valid PLY files are not downloaded with the ZIP web link. You can still download them individually via the web interface.
You can also use the [alternative ZIP download](https://drive.google.com/drive/folders/1Rc36NjEyola_wNlFI-Edzd7GSxkmre2h?usp=sharing). Extract it e.g. with `7z e 3DRMS2018.zip.001`.
### Semantic Labels and Calibration
* File [`labels.yaml`](https://gitlab.inf.ed.ac.uk/3DRMS/Challenge2018/blob/master/calibration/labels.yaml) - semantic label definition list
* File [`colors.yaml`](https://gitlab.inf.ed.ac.uk/3DRMS/Challenge2018/blob/master/calibration/colors.yaml) - label color definition (for display)
* File [`calibration/camchain-DDDD.yaml`](https://gitlab.inf.ed.ac.uk/3DRMS/Challenge2018/blob/master/calibration/camchain-2017-05-16-09-53-50.yaml) - camera rig calibration (for real data)
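The label and color definitions are plain YAML files, loadable e.g. with PyYAML. The file structure sketched below is illustrative only; the real `labels.yaml`/`colors.yaml` may use different keys:

```python
import yaml

# Illustrative YAML snippets -- the actual files in the repository may differ.
labels_yaml = """
labels:
  - {id: 0, name: void}
  - {id: 1, name: grass}
"""
colors_yaml = """
colors:
  - {id: 1, rgb: [0, 255, 0]}
"""

labels = {e['id']: e['name'] for e in yaml.safe_load(labels_yaml)['labels']}
colors = {e['id']: tuple(e['rgb']) for e in yaml.safe_load(colors_yaml)['colors']}
print(labels[1], colors[1])  # grass (0, 255, 0)
```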
### Training (Synthetic data)
| Sequence | 0001 | 0128 | 0160 | 0224 |
| -------- | ---- | ---- | ---- | ---- |
| clear | 1000 | 1000 | 1000 | 1000 |
| cloudy | 1000 | 1000 | 1000 | 1000 |
| overcast | 1000 | 1000 | 1000 | 1000 |
| sunset | 1000 | 1000 | 1000 | 1000 |
| twilight | 1000 | 1000 | 1000 | 1000 |
| _Total_ | 5000 | 5000 | 5000 | 5000 |
| _Stereo pairs_ | 2500 | 2500 | 2500 | 2500 |

Total 20k images / 10k annotated stereo pairs / 25 GB.
* File `model_RRRR_SSSS.ply` - point cloud of scene SSSS with semantic labels (field `scalar_s`) at resolution RRRR
* Higher resolution point clouds are available from [here](https://drive.google.com/drive/folders/1n6wQbXVtL2dcUWTvOigL2HsVpywCet6y?usp=sharing) (too large for this repository)
* Folders `EEEE_SSSS` - sequences rendered from scene SSSS in environment EEEE
  * Subfolders `vcam_X`
    * Files `vcam_X_fXXXXX_gtr.png` - GT annotation with label set IDs (indexed bitmap)
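Since the `_gtr.png` annotations store label IDs as an indexed bitmap, they can be loaded into an integer array (e.g. with PIL's `Image.open`, where mode `P` yields the IDs directly) and colored for display by table lookup against `colors.yaml`. A pure-NumPy sketch with a made-up two-entry palette:

```python
import numpy as np

# Hypothetical palette: label id -> RGB (the real values come from colors.yaml).
palette = np.zeros((256, 3), dtype=np.uint8)
palette[1] = (0, 255, 0)    # e.g. grass
palette[2] = (128, 64, 0)   # e.g. soil

label_ids = np.array([[1, 1], [2, 0]], dtype=np.uint8)  # stand-in for a loaded bitmap
rgb = palette[label_ids]    # fancy indexing maps (H, W) ids -> (H, W, 3) colors
print(rgb.shape)            # (2, 2, 3)
```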
Python:
```python
import numpy as np
# Depth maps are raw big-endian float32 values, 640x480, stored column-major.
with open('training/clear_0001/vcam_0/vcam_0_f00001_dmap.bin', 'rb') as f:
    x = np.fromfile(f, dtype='>f4')
a = np.reshape(x, [480, 640], order='F')
```
| sunset_0288 | 1000 |
| twilight_0288 | 1000 |
| _Total_ | 5000 |
| _Stereo pairs_ | 2500 |

Total 2 GB.
* Folders `EEEE_SSSS` - sequences rendered from scene SSSS in environment EEEE
  * Subfolders `vcam_X`
We will use distance thresholds of 1cm, 2cm, 3cm, 5cm, and 10cm.
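The measures follow the usual MVS benchmark definitions [1, 2]: accuracy is the fraction of reconstructed points lying within the threshold of the ground truth, completeness the fraction of ground-truth points within the threshold of the reconstruction. A brute-force NumPy sketch on toy point clouds (the real evaluation runs against the provided GT models):

```python
import numpy as np

def nn_dist(a, b):
    """For each point in a (N,3), distance to its nearest neighbour in b (M,3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # (N, M) pairwise
    return d.min(axis=1)

def accuracy_completeness(rec, gt, thr):
    acc = float((nn_dist(rec, gt) <= thr).mean())   # reconstruction -> GT
    comp = float((nn_dist(gt, rec) <= thr).mean())  # GT -> reconstruction
    return acc, comp

gt = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])
rec = np.array([[0.005, 0, 0], [1.5, 0, 0]])
print(accuracy_completeness(rec, gt, 0.01))  # acc = 0.5, comp ~ 0.33
```

For full-size point clouds the pairwise-distance matrix is too large; a k-d tree (e.g. `scipy.spatial.cKDTree`) would replace the brute-force `nn_dist`.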
### Results
We include results produced by our baseline methods in the `results` folder.
* Reconstruction: COLMAP [3], run on each test sequence separately and on all sequences merged, with distance to GT computed.
* Semantic segmentation: SegNet [TBA]
* Recommended viewer: [CloudCompare](http://www.cloudcompare.org/) - turn off normals to see the scalar fields properly
#### References
* [1] Seitz et al., A Comparison and Evaluation of Multi-View Stereo Reconstruction Algorithms, CVPR 2006
* [2] Schöps et al., A Multi-View Stereo Benchmark with High-Resolution Images and Multi-Camera Videos, CVPR 2017
* [3] Schönberger et al., Structure-from-Motion Revisited, CVPR 2016
## Submission Categories
## Contact
For questions and requests, please contact `rtylecek@inf.ed.ac.uk` and `sattlert@inf.ethz.ch`.
## Credits