SeerVision designs camera steering algorithms that turn any camera-motor system into a robotic cameraman. Our software accesses the live video stream, identifies objects of interest, and then actively steers the camera, following the same cinematography rules a cameraman would and keeping the object of interest in specific areas of the frame.
We use state-of-the-art computer vision algorithms to detect and visually track anything in the camera's live stream.
Optimization-based controllers smoothly steer the camera's view to any point in the frame, rejecting vibrations in the process.
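To give a flavor of the idea (this is an illustrative sketch, not SeerVision's actual controller), an optimization-based approach can trade off tracking accuracy against smoothness: the camera trajectory is chosen to stay close to the noisy per-frame target positions while penalizing acceleration, so high-frequency jitter in the measurements is filtered out of the motion. The function name and weights below are hypothetical.

```python
import numpy as np

def smooth_trajectory(targets, accel_weight=50.0):
    """Hypothetical sketch: fit a camera trajectory x that minimizes
    ||x - targets||^2 + accel_weight * ||second differences of x||^2,
    damping measurement jitter ("vibrations") in the resulting motion."""
    n = len(targets)
    # Second-difference operator D: (D x)[k] = x[k] - 2 x[k+1] + x[k+2]
    D = np.zeros((n - 2, n))
    for k in range(n - 2):
        D[k, k], D[k, k + 1], D[k, k + 2] = 1.0, -2.0, 1.0
    # The quadratic cost is minimized by the normal equations:
    # (I + w * D^T D) x = targets
    A = np.eye(n) + accel_weight * (D.T @ D)
    return np.linalg.solve(A, np.asarray(targets, dtype=float))

# Noisy target positions (e.g. jittery detections of a moving subject)
rng = np.random.default_rng(0)
noisy = np.sin(np.linspace(0.0, 3.0, 60)) + 0.05 * rng.standard_normal(60)
smooth = smooth_trajectory(noisy)
```

The single `accel_weight` knob captures the trade-off: larger values yield calmer camera motion at the cost of looser tracking.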
Be your own director. Define any camera movement relative to an object on the live view and let our algorithms take care of making the video.
SeerVision is an ETH spin-off from the Automatic Control Laboratory of ETH Zurich. The team's expertise lies in designing motor movement trajectories that satisfy performance specifications such as limited acceleration between set-points, smooth average velocity, and an optimized trade-off between tracking accuracy and overall motor movement.
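As a minimal sketch of what "limited acceleration between set-points" can mean in practice (an assumption for illustration, not SeerVision's implementation), the simulation below drives a motor toward a set-point while never exceeding a hard acceleration bound: at each step it accelerates toward the target, but caps its speed at the value from which it can still brake to a stop in the remaining distance, producing a trapezoidal-style velocity profile. All names and parameters are hypothetical.

```python
def move_to(target, steps=400, pos=0.0, vel=0.0,
            a_max=2.0, v_max=1.0, dt=0.01):
    """Hypothetical sketch: acceleration-limited move to a set-point.
    Returns the sampled position trajectory."""
    traj = [pos]
    for _ in range(steps):
        err = target - pos
        # Highest speed from which we can still brake to zero within |err|
        brake = (2.0 * a_max * abs(err)) ** 0.5
        v_des = min(v_max, brake) * (1.0 if err >= 0 else -1.0)
        # Move velocity toward the desired value, respecting the
        # acceleration limit a_max at every step
        dv = max(-a_max * dt, min(a_max * dt, v_des - vel))
        vel += dv
        pos += vel * dt
        traj.append(pos)
    return traj

trajectory = move_to(1.0)  # one-unit move under the limits above
```

Because the velocity change per step is clamped to `a_max * dt`, the resulting motion ramps up, cruises at `v_max`, and decelerates smoothly into the set-point instead of jerking the camera.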