Analysis of a Real-Time Monitoring System Built on the Hadoop Image Processing Interface
Traditional security systems aim to detect crimes after the fact, whereas real-time monitoring can help prevent them before they happen. Implementing conventional security measures is also time-consuming and usually requires constant human intervention. A standalone automated security system would make surveillance economically viable and far more responsive. By applying facial, object, and behavior recognition to the video feeds of CCTV cameras, various criminal activities can be detected automatically, assisting authorities in taking timely action. Covering a large number of CCTVs spread over a wide area generates an enormous amount of data and demands substantial processing power. We therefore use Hadoop's image processing interface to distribute the processing task across a cloud network and to improve communication between authorities in different domains.

At present, security systems in almost all places operate in a rather passive manner. The CCTV cameras in such systems record video and transmit it to a human supervisor, which makes the whole arrangement prone to human error. Rapid response is not possible, although it is necessary in many situations to stop an adversary. All processing happens locally, with only limited cloud capability. Such a static system is outdated and is itself at risk of misuse and hacking. We therefore propose a modern, dynamic system that operates in the cloud, provides powerful real-time monitoring, and is arguably cheaper than existing systems.

Video from several CCTV cameras will reach a local station, where the streams are fed to preliminary object recognition algorithms and undergo a selection process. After this initial object recognition step, each video stream is divided into small units comprising several frames.
These units are mapped to the respective processing nodes, and their results are reduced to obtain the final output.

The authors of [1] proposed a scalable video processing system on a Hadoop network. Their system uses FFmpeg for video coding and OpenCV for image processing, and features a face tracking component that groups together multiple images of the same person. The captured video stream is stored in the Hadoop Distributed File System (HDFS). However, the system does not specify proper security mechanisms, and storing such an amount of raw video in HDFS is not cost-effective. The system in [2] used Nvidia CUDA-enabled Hadoop clusters to improve server performance by exploiting the parallel processing capability of the CUDA cores present in Nvidia GPUs, demonstrating an AdaBoost-based face detection algorithm on a Hadoop network. Equipping clusters with Nvidia GPUs increases their cost, but CUDA cores potentially offer massive improvements in image processing tasks; our goal, however, is to implement the system on existing hardware to minimize cost. The authors of [3] used the Hadoop framework to process astronomical images, implementing a scalable image processing pipeline that enabled cloud computation over astronomical data. They used an existing C++ library and JNI to call this library from Hadoop for image processing. Although they were successful, many optimizations were not performed and Hadoop was not properly integrated with the C++ library. A survey in [4] describes the security services provided by the Hadoop framework. The services the framework needs, such as authentication, access control, and integrity, are discussed, along with what Hadoop does and does not provide. Hadoop has several security vulnerabilities that can be exploited to mount a replay attack or to view files stored on an HDFS node.
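The map/reduce split of frame units described earlier can be sketched in plain Python, in the style of a Hadoop Streaming job. This is a minimal illustrative sketch, not the actual system: the `detect_objects` function is a hypothetical placeholder for a real OpenCV/TensorFlow detector, and frames are represented by integer ids.

```python
# Sketch of distributing frame batches via map/reduce (Hadoop Streaming style).
# detect_objects is a hypothetical stand-in for a real model on decoded frames.

def detect_objects(frame):
    """Placeholder detector: returns a list of (label, confidence) pairs."""
    # Fake a detection keyed on the frame id, purely for illustration.
    return [("person", 0.9)] if frame % 2 == 0 else []

def map_phase(frame_batch):
    """Mapper: emit (label, 1) for every confident detection in a batch."""
    for frame in frame_batch:
        for label, conf in detect_objects(frame):
            if conf > 0.5:
                yield (label, 1)

def reduce_phase(pairs):
    """Reducer: aggregate detection counts per label across all batches."""
    counts = {}
    for label, n in pairs:
        counts[label] = counts.get(label, 0) + n
    return counts

# Each batch would be processed on a different node; the reducer merges results.
batches = [[0, 1, 2], [3, 4, 5]]
all_pairs = [p for b in batches for p in map_phase(b)]
result = reduce_phase(all_pairs)
print(result)  # {'person': 3}
```

In a real deployment the mapper and reducer would read key/value pairs from standard input and write them to standard output, letting Hadoop handle the shuffle between the two phases.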
Hence, according to the authors, a good integrity check method and a good authorization check method are necessary. The object recognition approach in [5] provides an effective way to recognize a three-dimensional object from a two-dimensional image. In the stated methodology, certain characteristics of an object remain constant regardless of viewing angle; extracting these features specifically saves a considerable amount of resources compared to legacy object recognition systems that recreate entire 3D objects using depth analysis. As shown in [6], the original eigenfaces fail to accurately classify faces when the data comes from different angles and light sources, as in our problem. We therefore use the concept of TensorFaces: a vector space of images captured at multiple angles is subjected to N-mode SVD with multilinear analysis to recognize faces. Behavior recognition can be performed as shown in [7]: features are extracted from the video stream and applied to feature descriptors, template events, and event/behavior templates; the output is mapped from the feature space to the behavior label space, where a classifier labels it as normal or abnormal. The system proposed in [8] presents a cost-effective, reliable, efficient, and scalable monitoring system in which data is stored using a peer-to-peer (P2P) scheme. It avoids overloading a single data center by dividing the load among multiple peer nodes, and it provides an authentication module between peer nodes and directory nodes. However, it does not present any method for computer vision or integrity checking. The work in [9] offers an open-source Hadoop video processing interface (HVPI) integrating C/C++ applications into the Hadoop framework. It provides a read/write interface that lets developers store, retrieve, and analyze video data in HDFS. Relying only on the security available in the Hadoop framework for video data may result in poor performance, and security is not addressed in the HVPI.
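The N-mode SVD at the heart of the TensorFaces idea can be illustrated with NumPy: the face data tensor is unfolded along each mode, and an ordinary SVD of each unfolding yields that mode's basis. This is a hedged sketch; the random tensor and its dimensions (people x viewpoints x pixels) stand in for real face data.

```python
# Sketch of N-mode SVD (multilinear analysis) as used by TensorFaces.
# The random tensor below is illustrative, not real face data.
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move `mode` to the front, flatten the other axes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def n_mode_svd(tensor):
    """Return the left singular vectors of each mode's unfolding."""
    return [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0]
            for m in range(tensor.ndim)]

rng = np.random.default_rng(0)
faces = rng.standard_normal((4, 3, 16))   # people x viewpoints x pixels
U_people, U_views, U_pixels = n_mode_svd(faces)
print(U_people.shape, U_views.shape, U_pixels.shape)  # (4, 4) (3, 3) (16, 12)
```

Each factor matrix spans the variation along one mode (identity, viewpoint, pixels), which is what lets the method separate "who" from "seen from where" in a way the flat eigenface decomposition cannot.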
TensorFlow, a machine learning system described in [10], provides tools to implement multiple training algorithms and optimizations across many large-scale devices. It uses data-flow graphs to represent computational state and the operations that change that state. TensorFlow works well with the Hadoop framework to distribute processing onto existing hardware. To provide real-time recognition, various pre-processing steps are performed to improve the performance of Hadoop and of the neural network. The whole process can be divided into the following phases:

- Video Collection: the video stream coming from a capture device such as a CCTV camera is converted into a HIPI Image Bundle (HIB) object using tools such as hibImport and hibInfo. The HIB then undergoes preprocessing using the Culler class and a video encoder such as FFmpeg. At this stage, various..
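The bundling step of the Video Collection phase, in which decoded frames are grouped into fixed-size units before being shipped to worker nodes (analogous to packing images into a HIB), can be sketched as follows. The bundle size is an assumption for illustration, not a HIPI default, and integers stand in for decoded frames.

```python
# Sketch of grouping a frame sequence into fixed-size bundles before
# distribution, analogous to packing images into a HIPI Image Bundle.

def bundle_frames(frames, bundle_size):
    """Split a frame sequence into bundles of at most `bundle_size` frames."""
    return [frames[i:i + bundle_size]
            for i in range(0, len(frames), bundle_size)]

frames = list(range(10))            # stand-ins for decoded CCTV frames
bundles = bundle_frames(frames, 4)
print(bundles)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Bundling many small frames into larger units matters on Hadoop because HDFS and MapReduce perform poorly with large numbers of tiny files; a bundle gives each map task a usefully sized block of work.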