At the core of so-called AI cameras are advanced analytics – many of them AI- or deep learning-based – that raise the overall performance and accuracy of the system. Whereas these analytics used to reside mostly on the backend server, they are increasingly moving to the edge thanks to cameras' growing processing power.
The key components in an AI camera are the image sensor and the processor. A good image sensor captures high-quality video even in poorly lit conditions, enhancing the camera's recognition capability. An advanced processor, meanwhile, is required to run the AI algorithms.
“Compared with conventional security cameras, AI cameras excel in their GPU chipsets, which provide stronger computing power, lower power consumption, and better heat dissipation,” said Max Fang, IP Project Product Director at Hikvision.
With AI becoming more and more dominant, more chip options besides the CPU and GPU have emerged, for example the NPU (neural processing unit), TPU (tensor processing unit) and SoC (system on a chip).
“The computational demands of neural networks require new hardware architecture designed to deliver massive math computations. Up until recently, users had to choose between a GPU (graphical processing unit) or a CPU (central processing unit). GPU architecture more closely matches the demands for neural network processing, and as a result, most of the industry has been developed on GPU hardware from NVIDIA,” said Lei Bennett, VP of Product Management for Security at FLIR Systems. “Today, chip designers are launching hybrid chip architectures optimized for neural network processing demands that combine a CPU core with what is commonly referred to as a neural network fabric or a sea of multipliers. This has led to the availability of extremely powerful chipsets that are much less expensive and operate on an order of magnitude lower power.”
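The scale of the "massive math computations" Bennett describes can be illustrated with a back-of-the-envelope calculation (not from the article): the multiply-accumulate (MAC) operations needed by a single convolutional layer, using hypothetical layer dimensions chosen purely for illustration.

```python
# Rough estimate of multiply-accumulate (MAC) operations for one
# convolutional layer -- the kind of math a "sea of multipliers"
# accelerator is built to execute in parallel.
def conv_macs(out_h, out_w, out_channels, kernel, in_channels):
    # Each output value requires kernel * kernel * in_channels multiplications.
    return out_h * out_w * out_channels * kernel * kernel * in_channels

# Hypothetical layer: 112x112 output, 64 output channels,
# 3x3 kernel, 32 input channels.
macs = conv_macs(112, 112, 64, 3, 32)
print(f"{macs:,} MACs for a single layer")  # 231,211,008 MACs
```

At video frame rates, a network of dozens of such layers quickly reaches billions of operations per second, which is why dense multiplier arrays outperform general-purpose CPU cores on both throughput and power for this workload.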
“Most modern surveillance cameras today include, inside the enclosure, one or more standard SoC (system on a chip) integrated circuits on which the device’s firmware and operating system (typically Linux) run,” said Jeff Whitney, VP of Marketing at Arecont Vision Costar. “The SoC in many cameras will be integrated with or supplemented by a chipset providing the video analytics and/or AI intelligence, typically from a third-party vendor. This is where the additional intelligence capability of AI is added to the camera, although some vendors may initially opt to add the AI components to an external device interfaced directly with the camera.”
How AI cameras can differentiate
With more and more AI cameras being rolled out, most built on similar key components, manufacturers inevitably face the challenge of differentiation: how to make their products stand out?
One way is to differentiate by the accuracy and performance of the AI algorithm. “The key is the accuracy and range performance of the AI algorithm. This is highly impacted by the quality of the training data used to train a neural network. There are many points of failure in how you build your training data, so this is the critical differentiator,” Bennett said.
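Bennett's point about accuracy as a differentiator can be made concrete with the standard precision and recall metrics commonly used to compare detection algorithms. The numbers below are hypothetical, chosen only to show how two models trained on different data might score on the same test set.

```python
# Precision and recall, the usual yardsticks for comparing the
# accuracy of two detection models.
def precision_recall(true_pos, false_pos, false_neg):
    precision = true_pos / (true_pos + false_pos)  # how many alerts were real
    recall = true_pos / (true_pos + false_neg)     # how many events were caught
    return precision, recall

# Hypothetical results on the same test set: model A trained on
# clean, diverse data; model B trained on noisier, narrower data.
p_a, r_a = precision_recall(true_pos=95, false_pos=5, false_neg=5)
p_b, r_b = precision_recall(true_pos=80, false_pos=30, false_neg=20)
print(f"model A: precision={p_a:.2f} recall={r_a:.2f}")  # 0.95 / 0.95
print(f"model B: precision={p_b:.2f} recall={r_b:.2f}")  # 0.73 / 0.80
```

The gap between the two sets of numbers, driven entirely by the training data behind each model, is what buyers actually experience as fewer false alarms and fewer missed events.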
Alternatively, they can differentiate on non-AI features, for example time-to-market and the ability to meet certain vertical demands. “The most effective approach for AI deployments may be to focus on specific market needs or requirements. This could include partnering with specialist vendors or systems integrators with a practice and expertise in markets such as retail, warehousing, transportation, hospitality, or education, rather than a generalist approach that attempts to serve the general-purpose camera market,” Whitney said. “This focus on specific market needs will be especially true early on, as AI remains a promising but immature technology. Focusing on specific applications will result in better results at this early stage of the market.”
View the original article by William Pao on A&S International Magazine.