Machine Vision Localization System


Revision as of 10:58, 16 January 2009



This is a machine vision system that tracks multiple targets and sends out their positions via the serial port. It is based on the Indoor_Localization_System, but has several enhancements and bug fixes. Refer to Indoor_Localization_System for a basic overview of the system setup and the workings of the pattern identification algorithm.

Major Enhancements/Changes

  • The system will now mark the targets with an overlay and display coordinate data onscreen.
  • The serial output is now formatted for the XBee radio using the XBee's API mode with escape characters.
  • The calibration routine has been improved, and only needs to be performed once.
  • A command interface for sending out commands via the serial port has been added.
  • The system will discard targets too close to the edge of the camera frame to prevent misidentification due to clipping.
  • The origin of the world coordinate is now in the middle, not the lower left corner.
  • The GUI windows displaying the camera frames are now full sized instead of thumbnails. If your monitor isn't big enough, you can resize them.
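The serial output change above targets the XBee's API mode with escape characters (AP=2). As a hedged sketch of what that framing involves (the actual payload layout of the position packets is not given here), the standard XBee API frame is a 0x7E start delimiter, a two-byte length, the frame data, and a one-byte checksum, with reserved bytes after the delimiter escaped:

```python
# Sketch of XBee API-mode (AP=2) framing. The position-packet contents are
# an assumption; only the framing/escaping rules shown are standard XBee.
ESCAPE_BYTES = {0x7E, 0x7D, 0x11, 0x13}  # delimiter, escape char, XON, XOFF

def escape(data: bytes) -> bytes:
    """Escape reserved bytes: prefix with 0x7D and XOR the byte with 0x20."""
    out = bytearray()
    for b in data:
        if b in ESCAPE_BYTES:
            out += bytes([0x7D, b ^ 0x20])
        else:
            out.append(b)
    return bytes(out)

def api_frame(frame_data: bytes) -> bytes:
    """Wrap frame data in an API-mode frame with escaping applied."""
    length = len(frame_data).to_bytes(2, "big")
    checksum = (0xFF - (sum(frame_data) & 0xFF)) & 0xFF
    # In AP=2 mode everything after the 0x7E start delimiter is escaped.
    return b"\x7e" + escape(length + frame_data + bytes([checksum]))
```

For example, `api_frame(b"\x7e")` escapes the payload byte that collides with the start delimiter, producing `7E 00 01 7D 5E 81`.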

Major Bug Fixes

  • Two major memory leaks fixed.
  • Calibration matrices are now calculated correctly.
  • File handling bug that caused an off-by-one error in LoadTargetData() fixed.

Camera Calibration Routine

The camera calibration routine used is explained in the document Image_Formation_and_Camera_Calibration.pdf by Professor Ying Wu.
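The details of the routine are in that document; as a minimal illustration of what the resulting calibration matrices are used for, the sketch below applies a 3x3 planar homography to map an image point to floor-plane world coordinates. The function name and the nested-list matrix layout are assumptions for illustration, not the system's actual API:

```python
# Hypothetical sketch: map an image point (u, v) to world coordinates on the
# floor plane using a 3x3 homography H (row-major nested lists), as produced
# by a planar camera calibration.
def apply_homography(H, u, v):
    # Homogeneous transform: [x, y, w]^T = H * [u, v, 1]^T
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    # Divide out the homogeneous scale to get Euclidean coordinates.
    return x / w, y / w
```

A pure translation homography shifts points directly, e.g. moving the world origin from a frame corner to the frame center as described in the changes above.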

Target Patterns

The target patterns and preprocessor can be downloaded at Indoor_Localization_System#Pre-processing_program_source_with_final_patterns:.


Setting up the Cameras

The cameras should be set up according to Indoor_Localization_System with one caveat: targets at the edge of the camera frame will now be discarded. This prevents misidentification of patterns if one or more dots in the pattern fall off the screen, but it also means that there must be enough overlap that when the target is in the dead-zone of one camera, it is picked up by another camera.
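The dead-zone check described above can be sketched as a simple margin test on the target centroid. The margin width is an assumption here (the wiki does not state the actual threshold), as is the function name:

```python
# Sketch of the edge dead-zone check. MARGIN_PX is an assumed parameter;
# the actual pixel threshold used by the system is not given in the wiki.
MARGIN_PX = 20

def target_in_dead_zone(cx, cy, frame_w, frame_h, margin=MARGIN_PX):
    """Return True if a target centroid lies too close to the frame edge,
    where clipped pattern dots could cause misidentification."""
    return (cx < margin or cy < margin or
            cx > frame_w - margin or cy > frame_h - margin)
```

Targets for which this returns True would be discarded, so adjacent cameras must overlap enough to cover each other's margins.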

Screenshots:

  • Machine vision single frame.png
  • Machine vision four frames.png
  • Machine vision calibration.png