SCENARIO-BASED RULE ENGINE

The core Video Analytics element of VISuite AI is Ipsotek’s patented Scenario-Based Rule Engine (SBRE), a powerful tool for precisely defining behaviors of interest as they unfold in dynamic, complex real-world environments.

The SBRE is derived from a conceptual description of activities as they would be explained by a human operator. It allows multiple Video Analytics and AI events to contribute to a single alarm trigger and can fuse inputs from multiple sensors to form a robust alarm. The SBRE can differentiate between complex scenarios such as a vehicle stopping opposite a sensitive building with the driver leaving the vehicle and walking away, versus a taxi stopping at the same location to let a passenger out. Another example is differentiating a passenger in an airport who puts a bag on the floor and stands next to it from a person who puts a bag down and actually abandons it.
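
As a purely illustrative example, the sequencing logic behind the abandoned-bag case might be sketched as follows in Python. The event names, fields and time threshold are assumptions for illustration only and do not reflect the actual SBRE configuration model: an alarm fires only when a bag-placed event is followed by its owner leaving the area for longer than a set time, so a passenger standing next to their bag never completes the sequence.

from dataclasses import dataclass

@dataclass
class Event:
    kind: str         # e.g. "bag_placed" or "person_left_zone" (hypothetical names)
    track_id: int     # identifier of the object the event refers to
    owner_id: int     # identifier of the person associated with the object
    timestamp: float  # seconds since the start of observation

def abandoned_bag_alarms(events, max_linger_s=60.0):
    """Alarm only when a placed bag's owner leaves the area for longer
    than max_linger_s after putting it down."""
    placed = {}   # bag track_id -> (owner_id, time the bag was placed)
    alarms = []
    for ev in sorted(events, key=lambda e: e.timestamp):
        if ev.kind == "bag_placed":
            placed[ev.track_id] = (ev.owner_id, ev.timestamp)
        elif ev.kind == "person_left_zone":
            for bag_id, (owner, t_placed) in placed.items():
                if owner == ev.owner_id and ev.timestamp - t_placed > max_linger_s:
                    alarms.append((bag_id, ev.timestamp))
    return alarms

# The bag's owner walks away 90 seconds after putting it down -> one alarm.
events = [
    Event("bag_placed", track_id=7, owner_id=3, timestamp=10.0),
    Event("person_left_zone", track_id=3, owner_id=3, timestamp=100.0),
]
print(abandoned_bag_alarms(events))   # [(7, 100.0)]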

The SBRE’s strength is also demonstrated when events from different sensors are merged in a single information fusion engine with multimodal detection capability. For example, the SBRE can combine events from Facial Recognition with Video Analytics tracking and access control to differentiate between a security guard backtracking through arrival gates (entering airport airside through the arrival gates) and a person attempting to breach airport security.
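
A minimal sketch of that kind of fusion, assuming hypothetical inputs (a wrong-way tracking event, the set of staff identities returned by Facial Recognition, and recent access-control badge events; none of these names come from the VISuite AI API): the alarm is suppressed only when independent sensors confirm the person is authorized.

def classify_wrong_way_event(person_id, recognised_staff, recent_badge_swipes):
    """Return 'authorised_backtrack' for staff with a matching badge event,
    otherwise 'security_breach'."""
    is_staff = person_id in recognised_staff
    has_badge = person_id in recent_badge_swipes
    return "authorised_backtrack" if (is_staff and has_badge) else "security_breach"

# A guard recognised by Facial Recognition who also badged through is not alarmed;
# an unknown person triggers a breach alarm.
print(classify_wrong_way_event("guard_42", {"guard_42"}, {"guard_42"}))   # authorised_backtrack
print(classify_wrong_way_event("unknown_1", {"guard_42"}, {"guard_42"}))  # security_breach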

In short, the SBRE makes decisions and executes actions by fusing Video Analytics, AI and other multimodal information to automate and enhance surveillance.

ARTIFICIAL INTELLIGENCE

VISuite AI benefits from a versatile GPU-powered deep learning engine. The AI solution provides customers with highly customized object classification, detection, and tracking capabilities for projects with niche requirements.

AI is an exciting technology that opens up many new application and deployment avenues for VISuite AI. Some of the areas where AI adds value are:

➢ Improved performance, especially in challenging environments

➢ Delivering bespoke and customized solutions

➢ Automatic configuration of Video Analytics systems

ENHANCED SITUATIONAL AWARENESS

VISuite AI actively monitors large networks of cameras and tracks objects in real time throughout each scene. Advanced trackers and AI detectors maintain a track on every individual, vehicle or object, generating rich and accurate metadata unique to each object in the scene.

A tracked object’s shape, class, appearance, colors, speed and trajectory on a geomap (time and location) are some of the information collected and fed to the Scenario-Based Rule Engine, which decides whether an operator should be alerted to an incident. Besides triggering events, this information can also be used to provide statistical information and to support post-event analytics and forensic investigations.
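
For illustration, a per-object metadata record of the kind described above might look like the following Python sketch; the field names and the toy decision rule are assumptions made here, not the actual VISuite AI schema.

track_metadata = {
    "track_id": 1024,
    "class": "vehicle",
    "appearance": {"colour": "red", "bounding_box": [412, 220, 510, 300]},
    "speed_kmh": 32.5,
    "trajectory": [  # (timestamp, latitude, longitude) samples on the geomap
        (1717400000.0, 51.5074, -0.1278),
        (1717400005.0, 51.5075, -0.1280),
    ],
}

def should_alert(meta, restricted_zone, speed_limit_kmh=10.0):
    """Toy decision rule: alert if the object's latest position falls inside
    a restricted zone while exceeding a speed threshold."""
    _, lat, lon = meta["trajectory"][-1]
    (lat_min, lon_min), (lat_max, lon_max) = restricted_zone
    inside = lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
    return inside and meta["speed_kmh"] > speed_limit_kmh

zone = ((51.5070, -0.1285), (51.5080, -0.1275))
print(should_alert(track_metadata, zone))  # True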

EFFICIENT AND FAST CONFIGURATION

VISuite AI is designed not only with bespoke customization in mind, but also for rapid deployment and efficient configuration. Ipsotek’s solutions can be deployed across large projects such as safe cities in a fraction of the time required by traditional Video Analytics systems. This is achieved through Ipsotek’s patented scene calibration and rule templating.

➢ Automatic AI-based calibration: For large-scale deployments, VISuite AI offers automatic scene calibration. This tool collects information from the scene for a short period of time and derives the critical parameters required for calibration. The process greatly reduces calibration effort, which is ideal for roll-outs to hundreds or even thousands of cameras. The calibration is also used to calculate the GPS coordinates of each tracked object relative to the GPS coordinates of the camera (see the sketch after this list).

➢ Rule Templating: VISuite AI provides a library of mature and tested rules for a variety of different solutions. These are made available to all of Ipsotek’s customers and dramatically simplify the configuration of a new system. These Scenario-Based solution templates can be applied directly to cameras and enable a high-performance solution to be configured in minutes by simply adjusting the detection virtual zones (Areas of Interest) to match the camera’s field of view (FoV). Solution templates can also be applied to a large number of cameras simultaneously by using conceptual virtual zones.
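
As a rough illustration of the GPS calculation mentioned under automatic calibration, the sketch below assumes the calibration step has already converted an object’s image position into a ground-plane offset in metres east and north of the camera; the conversion to latitude and longitude then follows the standard small-offset approximation. The function name and inputs are assumptions for illustration, not the method actually used by VISuite AI.

import math

EARTH_RADIUS_M = 6_371_000.0

def offset_to_gps(cam_lat, cam_lon, east_m, north_m):
    """Convert an east/north ground-plane offset in metres from the camera
    into an approximate latitude/longitude for the tracked object."""
    d_lat = math.degrees(north_m / EARTH_RADIUS_M)
    d_lon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(cam_lat))))
    return cam_lat + d_lat, cam_lon + d_lon

# Example: an object detected 30 m east and 45 m north of the camera.
print(offset_to_gps(51.5074, -0.1278, east_m=30.0, north_m=45.0))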