Seeing Eye To AI – How smart video shapes the edge
The evolution of intelligent video technology continues at a rapid pace. As in many other industries, the onset of the COVID-19 pandemic accelerated timelines, and the world of video and artificial intelligence (AI) continues to evolve rapidly in 2021. As demand grows for video, and for AI to make sense of visual data, the number of cameras and the volume of data they produce are increasing rapidly, forcing the creation of new edge architectures.

Cameras and AI in Traffic Management

A new generation of “smart” use cases has developed. In “smart cities”, for example, cameras and AI analyze traffic patterns and adjust traffic lights to improve vehicle flow, reduce congestion and pollution, and increase pedestrian safety. “Smart factories” can leverage AI to detect defects or deviations in the production line in real time, adapt to reduce errors, and implement effective quality assurance measures. As a result, costs can be drastically reduced through automation and earlier fault detection.

Evolution of smart video

The evolution of smart video is happening alongside other advances in technology and data infrastructure, such as 5G. As these technologies come together, they change the way we think about the edge, and they generate demand for specialized storage. Below are some of the biggest trends we are seeing.

More volume means better quality

The number and variety of cameras continue to increase, and each new advancement brings new capabilities. More cameras let you see and capture more, whether that means more coverage or more angles. It also means more real-time video can be captured and used to train AI. Quality also continues to improve, with higher resolutions (4K video and above).
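To make the data growth from more cameras at higher resolutions concrete, here is a rough back-of-the-envelope sketch. The 15 Mbit/s figure is an assumed average bitrate for a 4K stream, chosen for illustration only; it is not a number from the article.

```python
# Rough sketch: daily data volume from high-resolution cameras.
# BITRATE_MBPS is an assumed average 4K stream bitrate (illustrative).

BITRATE_MBPS = 15
SECONDS_PER_DAY = 24 * 3600

# Mbit/day -> MB/day -> GB/day
gb_per_camera_day = BITRATE_MBPS * SECONDS_PER_DAY / 8 / 1000
print(f"~{gb_per_camera_day:.0f} GB per camera per day")

# Scaling out: a hypothetical deployment of 100 such cameras.
print(f"~{100 * gb_per_camera_day / 1000:.1f} TB per day for 100 cameras")
```

Even at a modest assumed bitrate, a single always-on camera writes on the order of 160 GB per day, which is why fleet growth translates so directly into new storage demands at the edge.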
The more detailed the video, the more information can be extracted from it, and the more efficient AI algorithms can become. In addition, newer cameras transmit not only the main video stream but also additional low-bitrate streams used for low-bandwidth surveillance and AI pattern matching.

Smart Cameras Work 24/7

Whether used for traffic management, security or manufacturing, many of these smart cameras work 24/7, 365 days a year, which presents a unique challenge: storage technology must be able to keep up. On one hand, storage has evolved to offer the high data transfer and write speeds needed to ensure high-quality video capture. On the other, the storage on the camera itself must provide the longevity and reliability essential to any workflow.

Real-world context is essential for understanding endpoints. Whether for business, scientific research or our personal lives, we are seeing new types of cameras capable of capturing new types of data. Given the potential benefits of using and analyzing this data, the importance of reliable data storage has never been more evident.

Consider Context When Designing Storage Technology

When we design storage technology, we need to consider context, such as location and form factor. We have to think about the accessibility of the cameras (or lack thereof): are they on top of a tall building, or perhaps in the middle of an isolated jungle? Such locations may also need to withstand extreme temperature variations. All of these possibilities must be taken into account to ensure continuous, durable and reliable recording of critical video data.

Chipsets Improve Artificial Intelligence (AI) Capabilities

Improved computational capabilities of cameras mean processing takes place at the device level, enabling real-time decisions at the edge.
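The on-device processing pattern described above can be sketched in a few lines: the camera scores every frame locally and only compact event metadata, not raw video, crosses the network. Everything here is a hypothetical illustration — `score_frame` is a placeholder standing in for an on-camera neural network, not a real API.

```python
# Illustrative sketch of edge inference: score frames on the device,
# emit only above-threshold events upstream (not the raw video).
from dataclasses import dataclass

@dataclass
class Event:
    frame_id: int
    score: float

def score_frame(frame_id: int) -> float:
    """Placeholder for on-camera inference (e.g. a defect/anomaly score)."""
    return 0.9 if frame_id % 100 == 0 else 0.1  # toy logic for the demo

def process_stream(frame_ids, threshold: float = 0.5):
    """Keep inference at the edge; forward only the events that matter."""
    return [Event(f, score_frame(f)) for f in frame_ids
            if score_frame(f) >= threshold]

events = process_stream(range(300))
print(f"{len(events)} events uploaded out of 300 frames")
```

The design choice this illustrates is bandwidth asymmetry: the device-level compute turns a continuous high-bitrate stream into a sparse trickle of decisions, which is what makes real-time responses at the edge practical.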
We are seeing new chipsets for cameras that deliver enhanced AI capability, and more advanced chipsets add neural network processing for deep learning analysis on the camera itself. AI is getting smarter and more efficient.

Cloud Must Support Deep Learning Technology

Even as camera and recorder chipsets gain more computing power, in today’s intelligent video solutions most video analytics and deep learning is still done on discrete or cloud-based video analytics devices. To support these new AI workloads, the cloud has undergone a transformation of its own. Cloud-based neural network processors have adopted massive GPU clusters or custom FPGAs, and they ingest thousands of hours of training video and petabytes of data. These workloads depend on the high capacities of enterprise-class hard drives (HDDs), which already reach 20TB per drive, and on high-performance enterprise flash devices, platforms and SSDs.

5G technology facilitates camera installations

Dependence on wired and wireless internet enabled the scalability and ease of installation that fueled the explosive adoption of security cameras, but only where LAN and WAN infrastructure already existed. 5G removes many of these obstacles to deployment, opening up many options for camera placement and easing installation at a metropolitan scale. With this ease of deployment comes increased scalability, driving new use cases and new advancements in camera and cloud design. For example, cameras can now be stand-alone, with direct connectivity to a centralized cloud, as they no longer depend on a local network. Emerging 5G-capable cameras are also designed to load and run third-party applications, which can bring expanded capabilities.
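The cloud-side scale mentioned above — thousands of hours of training video, petabytes of data, 20TB drives — can be made concrete with some rough arithmetic. The bitrate and the 10,000-hour figure are assumptions for illustration, not numbers from the article.

```python
# Rough arithmetic on cloud training-data scale (assumed figures).

BITRATE_MBPS = 15                            # assumed 4K stream bitrate
hours = 10_000                               # "thousands of hours" of video

# Mbit -> MB -> TB
tb = hours * 3600 * BITRATE_MBPS / 8 / 1e6
print(f"{hours} hours of video ~= {tb:.1f} TB")

# 20 TB enterprise HDDs needed just to hold 1 PB of training data:
print(f"1 PB needs {1000 / 20:.0f} x 20 TB drives")
```

Under these assumptions, raw training footage alone runs into tens of terabytes, and each petabyte of accumulated data occupies dozens of the highest-capacity drives available — which is why these workloads lean so heavily on enterprise HDD and flash capacity.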
Yet with this greater autonomy, these cameras will need even more dynamic storage. They will require new combinations of endurance, capacity, performance and energy efficiency to optimally handle the variability of new application-driven functions.

Paving the way for the edge storage revolution

It’s a brave new world for intelligent video, and it’s as complex as it is exciting. Architectural changes are being made to handle new workloads and to prepare for even more dynamic capabilities at the edge and endpoints. At the same time, deep learning analytics continues to evolve at the back end and in the cloud. Understanding changes in workload, whether at the camera, recorder or cloud level, is essential to ensure that new architectural changes are matched by continuous innovation in storage technology.