Video Processing Latency Correction

Video Processing can analyze camera images to detect a target and calculate the angle to that target. That angle can then be used to rotate a turret to fire at the target.

Video Processing algorithms are increasingly sophisticated, employing techniques such as neural networks. However, increased sophistication is often accompanied by increased processing time. For example, a neural network-based detection algorithm may take up to 200 milliseconds to process a single video frame.

If a robot's Video Processor acquires an image and calculates a Target Angle, but the robot rotates 2 degrees while that processing occurs, the target angle will no longer be correct. The error is caused by the delay between (a) when the video frame was acquired and (b) when the robot controller later receives the detection event.

By acquiring a real-time sensor timestamp in the Video Processor and including it with the detection event, a robot application that maintains a timestamped Orientation History can calculate how far the robot has rotated since the frame was acquired and correct the target angle accordingly.

To address this challenge, SF2 works together with an IMU (e.g., navX-MXP or navX-Micro), using the sensor timestamp to determine the latency and looking back into the SF2 Orientation History to calculate the change in orientation over that interval.
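
Concretely, the correction reduces to a single subtraction and an addition. The following Java fragment is illustrative only; its names are hypothetical and not part of the SF2 API:

    final class LatencyCorrection {
        // historicalYawDegrees: robot yaw at the frame's sensor timestamp.
        // currentYawDegrees:    robot yaw when the detection event arrives.
        static double correctTargetAngle(double visionAngleDegrees,
                                         double historicalYawDegrees,
                                         double currentYawDegrees) {
            double changeInAngle = currentYawDegrees - historicalYawDegrees;
            return visionAngleDegrees + changeInAngle;
        }
    }

For example, if the robot yaw was 30.0 degrees when the frame was acquired and is 32.0 degrees when the detection event arrives, 2.0 degrees is added to the vision-calculated target angle.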

For more details on the algorithm used, please see the Video Processing Latency Correction Algorithm Description.

LabVIEW Example

The SF2 Video Processing Latency Correction LabVIEW example shows how to make small modifications to the LabVIEW “FRC RoboRIO Robot Project” to correct target detection angles for Video Processing latency, using a navX-Model device and the SF2 Orientation History Buffer.

RobotMain.vi

RobotMain.vi is modified to initialize communication with the navX-MXP (in this case, over the MXP SPI bus) and to construct the Orientation History Buffer with the appropriate size.
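
While this section describes the LabVIEW changes, the equivalent initialization can be sketched in Java for reference. The sketch below assumes the navX-FRC AHRS class and the SF2 navXSensor and OrientationHistory classes, with constructor signatures approximating those in the published SF2 Java examples; verify the exact signatures against the libraries you have installed.

    import com.kauailabs.navx.frc.AHRS;
    import com.kauailabs.sf2.frc.navXSensor;
    import com.kauailabs.sf2.orientation.OrientationHistory;
    import edu.wpi.first.wpilibj.SPI;

    public class Robot /* extends TimedRobot */ {
        AHRS navx;
        navXSensor navxSensor;
        OrientationHistory orientationHistory;

        public void robotInit() {
            // Open communication with the navX-MXP over the MXP SPI bus.
            navx = new AHRS(SPI.Port.kMXP);
            // Wrap the AHRS so SF2 can record its timestamped orientations.
            navxSensor = new navXSensor(navx, "Drivetrain Orientation");
            // Size the Orientation History Buffer to hold roughly 10 seconds
            // of samples; it only needs to span the worst-case vision
            // processing latency (e.g., 200 ms), plus a healthy margin.
            orientationHistory = new OrientationHistory(navxSensor,
                navx.getRequestedUpdateRate() * 10);
        }
    }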

Vision Processing.vi

Vision Processing.vi is modified to capture both the angle calculated by the Vision Processor and the navX-Model device timestamp (acquired as soon as possible after the video image being processed is captured, before any video processing occurs). Then, when a detection event occurs (a Java sketch of these steps appears after the list):

  1. The past navX-Model device orientation (at the time the image was originally acquired) is retrieved from the Orientation History.vi.
  2. The current orientation is retrieved from the navX-Model device.
  3. The change in angle is calculated as:
    • change_in_angle = current_navx_angle - historical_navx_angle
  4. The change in angle is added to the angle calculated by the Vision Processor.
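
Although the steps above describe LabVIEW VIs, the same logic can be sketched in Java. This sketch assumes the navx and orientationHistory fields from the initialization sketch earlier, as well as an OrientationHistory getYawDegreesAtTime() lookup and the AHRS getYaw() method; treat these names as assumptions to verify against the SF2 and navX-FRC APIs.

    // Called when a detection event arrives from the Vision Processor.
    // visionTargetAngle:    angle to the target, as calculated by the
    //                       Vision Processor from the (now stale) frame.
    // frameSensorTimestamp: navX-Model device timestamp captured just after
    //                       the video frame was acquired, before processing.
    double latencyCorrectedTargetAngle(double visionTargetAngle,
                                       long frameSensorTimestamp) {
        // 1. Past orientation (when the image was acquired), from history.
        float historicalYaw =
            orientationHistory.getYawDegreesAtTime(frameSensorTimestamp);
        // 2. Current orientation, read directly from the navX-Model device.
        float currentYaw = navx.getYaw();
        // 3. Change in angle while the frame was being processed.
        double changeInAngle = currentYaw - historicalYaw;
        // 4. Add the change to the angle calculated by the Vision Processor.
        return visionTargetAngle + changeInAngle;
    }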


Java Example

C++ Example