Most ship collisions and grounding accidents are caused by errors made by watchkeeping personnel (WP) on the bridge. To avert such accidents, the International Maritime Organization (IMO) adopted a resolution on the Bridge Navigation Watch Alarm System (BNWAS), which detects operator incapacity. However, the system defined in the resolution is very basic and vulnerable to abuse, so a more advanced system for monitoring the behaviour of WP is needed to mitigate watchkeeping errors. In this research, a Bridge Navigation Watch Monitoring System (BNWMS) is proposed to achieve this task, together with an architecture for training a BNWMS model. The literature reveals that vision-based sensors can produce the input data required for model training. 2D body poses of the same person are estimated from multiple camera views using a deep learning-based pose estimation algorithm. The estimated 2D poses are then projected into 3D space, with a maximum error of 8 mm, using multiple-view computer vision techniques. Finally, the obtained 3D poses are plotted on a bird's-eye-view bridge plan to compute a heatmap of body motions that captures both spatial and temporal information. The results show that motion heatmaps convey significant information about the behaviour of WP within a defined time interval. This automated generation of motion heatmaps is a novel approach that provides input data for the proposed BNWMS.
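The final step of the pipeline, accumulating 3D poses into a bird's-eye-view motion heatmap, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bridge plan dimensions, grid resolution, and the function name `motion_heatmap` are assumptions for the sake of the example.

```python
import numpy as np

def motion_heatmap(poses_3d, plan_size=(10.0, 6.0), grid=(50, 30)):
    """Accumulate a bird's-eye-view motion heatmap from 3D body poses.

    poses_3d:  array of shape (T, J, 3) -- T frames, J joints, (x, y, z) in metres.
    plan_size: bridge plan extent (x_max, y_max) in metres (hypothetical values).
    grid:      heatmap resolution (cells along x, cells along y).
    """
    heat = np.zeros(grid)
    # Drop the height coordinate and flatten frames and joints into one point set.
    xy = poses_3d[..., :2].reshape(-1, 2)
    # Map plan coordinates to grid cell indices, clipped to the plan boundary.
    ix = np.clip((xy[:, 0] / plan_size[0] * grid[0]).astype(int), 0, grid[0] - 1)
    iy = np.clip((xy[:, 1] / plan_size[1] * grid[1]).astype(int), 0, grid[1] - 1)
    # Count how often any joint falls in each cell (unbuffered in-place add).
    np.add.at(heat, (ix, iy), 1)
    return heat
```

Because every frame contributes counts to the grid, cells visited often over the time interval accumulate high values, which is how the heatmap encodes temporal as well as spatial behaviour.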