- Formation of Discrete Image
- Choices of Sampling Grid
- 2x Resize Operation
- Libraries
- Literature
- Choices of Origin
- Improvements in Detectron & Detectron2
- Box Regression Transform
- Flip Augmentation
- Anchor Generation
- RoIAlign
- Paste Mask
- Point-based Algorithms
- Summary
Technically, an image is a function that maps a continuous domain, e.g. a box [0, X] × [0, Y], to intensities such as (R, G, B). To store it in computer memory, an image is discretized to an array `array[H][W]`, where each element `array[i][j]` is a pixel.

How does discretization work? How does a discrete pixel relate to the abstract notion of the underlying continuous image? These basic questions play an important role in computer graphics & computer vision algorithms.
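As a minimal sketch of what discretization means, the snippet below point-samples a toy continuous "image" function onto an H×W array. Which continuous point each `array[i][j]` represents is a convention; this sketch assumes the half-integer pixel-center convention (pixel (i, j) samples ((j + 0.5)·X/W, (i + 0.5)·Y/H)), which is one common choice, not the only one. The function names here are illustrative, not from the article.

```python
import numpy as np

# A toy continuous "image": intensity as a function of (x, y) on [0, X] x [0, Y].
def continuous_image(x, y):
    return np.sin(x) * np.cos(y)

def discretize(f, X, Y, H, W):
    # Assumed convention: pixel (i, j) samples the continuous point at its
    # half-integer center, ((j + 0.5) * X / W, (i + 0.5) * Y / H).
    ys = (np.arange(H) + 0.5) * Y / H
    xs = (np.arange(W) + 0.5) * X / W
    xx, yy = np.meshgrid(xs, ys)
    return f(xx, yy)

img = discretize(continuous_image, X=np.pi, Y=np.pi, H=4, W=4)
print(img.shape)  # (4, 4)
```

Shifting the sampling grid (e.g. sampling at integer positions j·X/W instead) yields a different, equally valid discretization of the same continuous image, which is exactly why the choice of grid matters downstream.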
This article discusses these low-level details and how they affect our CNN models and deep learning libraries. If you ever wonder which resize function to use, or whether to add/subtract 0.5 or 1 to some pixel coordinates, you may find answers here. Interestingly, these details have contributed to many accuracy improvements in Detectron and Detectron2.
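To make the "add/subtract 0.5" question concrete, here is a sketch of two coordinate mappings that different resize implementations use when upsampling by 2x. Both conventions appear in real libraries, and which one a resize function uses determines whether the result is shifted by a fraction of a pixel. The function names are mine, for illustration only.

```python
def src_coord_integer_centers(dst_i, scale=2):
    # Convention A: treat integer pixel indices themselves as sample points,
    # so destination index dst_i maps back to src = dst_i / scale.
    return dst_i / scale

def src_coord_half_integer_centers(dst_i, scale=2):
    # Convention B: treat pixel centers as sitting at half-integer positions,
    # so src = (dst_i + 0.5) / scale - 0.5.
    return (dst_i + 0.5) / scale - 0.5

# The two conventions already disagree at destination pixel 0:
print(src_coord_integer_centers(0))       # 0.0
print(src_coord_half_integer_centers(0))  # -0.25
```

Mixing the two conventions across operations (resize, RoI pooling, mask pasting) introduces systematic half-pixel shifts, which is the kind of subtle error the article traces through Detectron and Detectron2.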
Read in full here:
This thread was posted by one of our members via one of our news source trackers.