The Beginning of Computer Vision
Researchers began developing computer-enabled vision technologies as early as the 1950s, starting with simple two-dimensional imaging for statistical pattern recognition. It wasn’t until 1978, when researchers at the MIT AI Lab developed a bottom-up approach for extrapolating 3D models from 2D computer-created “sketches,” that CV’s practical applications became obvious. Since then, image recognition technologies have splintered into different categories based on general use case.

Machine Vision vs. Computer Vision – Commonalities
Both computer vision and machine vision use image capture and analysis to perform tasks with speed and accuracy human eyes can’t match. With this in mind, it’s probably more productive to describe these closely related technologies by their commonalities, distinguishing them by their specific use cases rather than their differences. Computer vision and machine vision systems share most of the same components and requirements:
- An imaging device containing an image sensor and a lens
- An image capture board or frame grabber (digital cameras with a modern interface may not require one)
- Application-appropriate lighting
- Software that processes the images via a computer or an internal system, as in many “smart” cameras
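To make the last item concrete, here is a minimal sketch of what the processing software stage might look like, using Python and OpenCV. The camera index, threshold value, and the bright-pixel check are illustrative assumptions, not part of the original description of these systems.

```python
# Minimal sketch of the "software" stage in a vision system: grab a frame from
# an imaging device and run a simple threshold-based check on it.
# Assumes OpenCV (cv2) is installed and a camera is available at index 0.
import cv2

capture = cv2.VideoCapture(0)   # imaging device (camera at index 0)
ok, frame = capture.read()      # frame acquisition; cameras with a modern digital
capture.release()               # interface deliver frames without a frame grabber

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)               # reduce to one channel
    _, mask = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)   # segment bright regions
    bright_fraction = cv2.countNonZero(mask) / mask.size         # crude image statistic
    print(f"Bright-pixel fraction: {bright_fraction:.2%}")       # illustrative output
else:
    print("No frame captured - check the camera connection.")
```

In a “smart” camera, this kind of logic runs on the device itself; otherwise it runs on an attached computer, but the capture-then-process flow is the same.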