Human Face Recognition: Description of Methods
Human face recognition is a difficult problem in computer vision. Early experiments in machine vision tended to focus on toy problems in which the observed world was carefully controlled and constructed: boxes shaped like regular polygons were identified, or simple objects like scissors were used. In most cases the image background was carefully controlled to provide excellent contrast between the analyzed objects and the surrounding world. Face recognition clearly does not fall into this category of problems. It is challenging because it is a real-world problem: the human face is a complex, natural object whose contours and features are generally not easy to identify automatically. For this reason, it is difficult to develop a mathematical model of the face that can be used as prior knowledge when analyzing a particular image.

Applications of facial recognition are widespread. Perhaps the most obvious is human-machine interaction. Computers could be made easier to use if, when one simply sat down at a terminal, the computer could identify the user by name and automatically load their personal preferences. Such identification could even improve other technologies such as voice recognition: if the computer can identify the person speaking, the observed voice patterns can be classified more accurately against that person's known voice.

Facial recognition technology could also have uses in security, as one of many mechanisms used to identify an individual. As a security measure it has the advantage that it can be performed quickly, perhaps even in real time, and does not require significant equipment to implement.
Nor does it pose any particular difficulty for the subject being identified, as is the case with retinal scanners. It has the disadvantage, however, of not being a foolproof authentication method, since the appearance of the human face is subject to various sporadic day-to-day modifications (shaving, hairstyle, acne, etc.) as well as progressive changes over time (aging). For this reason, facial recognition is perhaps best used as a complement to other identification techniques.

A final area where facial recognition techniques could be useful is search engine technology. In combination with face detection systems, users could be enabled to search for specific people in images, either by providing an image of the person to be found or, for known people, simply by providing the person's name. One specific application of this technology is in criminal photo databases. This environment is ideally suited to automated face recognition, since poses are standardized and lighting and scale remain constant. Clearly, this type of technology could extend online searches beyond the textual clues typically used when indexing information.

Facial Recognition: Development Through History

Facial analysis is one of the most relevant applications of image analysis. It is a real challenge to build an automated system that matches the human ability to recognize faces. Although humans are quite good at identifying familiar faces, we are not very proficient at handling large numbers of unfamiliar faces. Computers, with almost unlimited memory and computing speed, are expected to overcome these human limitations. Facial recognition remains an unsolved problem and a technology in high demand: a simple search for "facial recognition" in the IEEE Digital Library yields 9,422 results, 1,332 of them from the single year 2009. Many industrial sectors are interested in what it could offer.
Some examples include video surveillance, human-computer interaction, photo cameras, virtual reality and law enforcement. This multidisciplinary interest stimulates research and attracts various disciplines; the problem is not limited to computer vision research. Face recognition is a relevant topic in the fields of pattern recognition, neural networks, computer graphics, image processing and psychology. In fact, the first work on this subject was carried out in the 1950s in psychology [21], focusing on issues such as facial expression, the interpretation of emotions and the perception of gestures.

Engineering began to look at facial recognition in the 1960s. One of the first researchers on this topic was Woodrow W. Bledsoe. In 1960, Bledsoe, among other researchers, started Panoramic Research, Inc., in Palo Alto, California. The majority of the work done by this company involved AI-related contracts from the US Department of Defense and various intelligence agencies [4]. In 1964 and 1965, Bledsoe, along with Helen Chan and Charles Bisson, worked on using computers to recognize human faces [14, 15]. Because funding for this research was provided by an anonymous intelligence agency, little of the work was published. Bledsoe later continued his research at the Stanford Research Institute. He designed and implemented a semi-automatic system: certain facial coordinates were selected by a human operator, and the computer then used this information for recognition. He described most of the problems that facial recognition still suffers from, even 50 years later: lighting variations, head rotation, facial expression and aging.

Research on this topic continued, attempting to measure subjective facial features such as ear size or the distance between the eyes. For example, this approach was used at Bell Labs by A. Jay Goldstein, Leon D. Harmon, and Ann B. Lesk.
They described a vector of 21 subjective features, such as ear protrusion, eyebrow weight and nose length, as a basis for recognizing faces using pattern classification techniques. In 1973, Fischler and Elschlager attempted to measure similar features automatically [34]. Their algorithm used local template matching and a global fit measure to find and measure facial features. There were other approaches in the 1970s; some tried to define a face as a set of geometric parameters and then perform pattern recognition based on those parameters. But the first fully automated facial recognition system was developed by Kanade in 1973. He designed and implemented a facial recognition program that ran on a computer system designed for this purpose. The algorithm automatically extracted sixteen facial parameters. In his work, Kanade compared this automated extraction with human or manual extraction, finding only a small difference. He obtained a correct identification rate of 45-75% and demonstrated that better results were obtained when irrelevant features were left out.

In the 1980s, various approaches were actively pursued, most of them continuing previous trends. Some work attempted to improve the methods used to measure subjective features; for example, Mark Nixon presented a geometric measurement of eye spacing [5]. The template matching approach was improved with strategies such as "deformable templates". This decade also brought new approaches: some researchers built facial recognition algorithms using artificial neural networks [1]. The first mention of eigenfaces in image processing, a technique that would become the dominant approach in the following years, was made by L. Sirovich and M. Kirby in 1986 [10]. Their method was based on principal component analysis (PCA): the goal was to represent an image in a lower dimension without losing much information, and then to reconstruct it [6].
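The PCA idea just described can be sketched in a few lines: stack flattened face images, subtract the mean face, and keep the top principal components as "eigenfaces". This is a minimal illustration of the technique, not the original authors' implementation, and the tiny 64-pixel synthetic "faces" in the usage example are assumptions made purely for demonstration.

```python
import numpy as np

def eigenfaces(images, k):
    """Compute the mean face and the top-k eigenfaces (principal
    components) of a set of flattened face images.
    `images` is an (n_samples, n_pixels) array."""
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data yields the principal components as rows of vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(image, mean, components):
    """Represent a face by its coordinates in eigenface space."""
    return components @ (image - mean)

def nearest_identity(query, gallery, mean, components):
    """Classify by nearest neighbour among the projected gallery faces."""
    q = project(query, mean, components)
    return min(gallery, key=lambda name: np.linalg.norm(
        project(gallery[name], mean, components) - q))
```

With a handful of noisy samples per identity, a new noisy image of the same face projects close to its gallery entry in eigenface space, which is exactly the dimensionality-reduction-then-compare idea behind the approach.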
Their work would later give rise to many new facial recognition algorithms. The 1990s saw the wide recognition of the eigenface approach as the basis of the state of the art, along with the first industrial developments. In 1992, Matthew Turk and Alex Pentland of MIT presented work using eigenfaces for recognition [11]. Their algorithm was able to locate, track and classify a subject's head. Since the 1990s, the field of facial recognition has received much attention, with a notable increase in the number of publications. Many approaches have been adopted, leading to different algorithms; some of the most relevant are PCA, ICA, LDA and their derivatives. These approaches and algorithms will be discussed later in this work.

Views on Recognition Algorithm Design

The most obvious facial features were used in the early days of facial recognition. This was a sensible approach to mimicking the human ability to recognize faces. Researchers attempted to measure the importance of certain intuitive features [2] (mouth, eyes, cheeks) and geometric measurements (distance between the eyes [8], width-to-length ratio). This remains a relevant issue today, mainly because removing certain features or parts of a face can lead to better performance [4]. In other words, it is crucial to decide which facial features contribute to good recognition and which contribute nothing but extra noise.

However, the introduction of abstract mathematical tools such as eigenfaces created another approach to face recognition: it became possible to compute similarities between faces while bypassing these human-relevant features. This point of view allowed a new level of abstraction, leaving aside the anthropocentric approach. Some characteristics relevant to humans are still taken into account; for example, skin color [9, 3] is an important feature for face detection.
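The geometric, parameter-based view described above (distances and ratios between intuitive features such as the eyes and mouth) can be sketched as follows. The landmark names and the two particular ratios are illustrative assumptions; real early systems, such as Kanade's, measured many more parameters.

```python
import numpy as np

def feature_vector(landmarks):
    """Build a small geometric feature vector in the spirit of early
    parameter-based recognition: distances between landmarks, normalized
    by the inter-ocular distance so the vector is scale-invariant."""
    le, re, nose, ml, mr = (np.asarray(landmarks[k], dtype=float) for k in
                            ("left_eye", "right_eye", "nose",
                             "mouth_left", "mouth_right"))
    eye_dist = np.linalg.norm(re - le)               # distance between the eyes
    mouth_width = np.linalg.norm(mr - ml)            # width of the mouth
    nose_len = np.linalg.norm(nose - (le + re) / 2)  # eye midpoint to nose tip
    return np.array([mouth_width / eye_dist, nose_len / eye_dist])

def identify(query, gallery):
    """Nearest-neighbour matching of a query vector against enrolled ones."""
    return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - query))
```

Because every measurement is divided by the eye distance, the same face photographed at a different scale yields the same vector, which is one reason such ratios were preferred over raw distances.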
The localization of certain features, such as the mouth or eyes, is also used to perform normalization before the feature extraction stage [12]. In summary, a designer can apply to these algorithms the knowledge provided by psychology, neurology or simple observation. On the other hand, it is essential to abstract from such knowledge and attack the problem from a purely mathematical or computational point of view.

Structure of a Face Recognition System

Face recognition is a term that covers several sub-problems. Different classifications of these problems appear in the bibliography. Some of them are explained in this section, and finally a general or unified classification is proposed.

Generic Face Recognition System

The input to a facial recognition system is always an image or video stream. The output is an identification or verification of the subject(s) appearing in the image or video. Some approaches [15] define a facial recognition system as a three-step process - see Figure 1.1. From this point of view, the face detection and feature extraction phases could run simultaneously.

Figure 1.1: A generic face recognition system.

Face detection is defined as the process of extracting faces from scenes: the system positively identifies a certain region of the image as a face. This procedure has many applications, such as face tracking, pose estimation and compression. The next step, feature extraction, involves obtaining the relevant facial features from the data. These features may be certain facial regions, variations, angles or measurements, which may or may not be relevant to humans (e.g., eye spacing). This phase has other applications, such as facial feature tracking or emotion recognition. Finally, the system recognizes the face. In an identification task, the system reports an identity from a database. This phase involves a comparison method, a classification algorithm and an accuracy measure.
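The three stages of Figure 1.1 can be sketched as a composition of functions. The detection and feature-extraction stages below are deliberately toy stand-ins (a fixed central crop and intensity profiles), assumed purely for illustration; a real system would substitute an actual detector, feature extractor and classifier at each stage.

```python
import numpy as np

def detect_face(image):
    """Stage 1 -- face detection: locate and return the face region.
    Toy stand-in: assume the face occupies the central half of the image."""
    h, w = image.shape
    return image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

def extract_features(face):
    """Stage 2 -- feature extraction: reduce the face region to a vector.
    Toy stand-in: concatenated column and row intensity means."""
    return np.concatenate([face.mean(axis=0), face.mean(axis=1)])

def recognize(features, gallery):
    """Stage 3 -- recognition: nearest neighbour among enrolled identities."""
    return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - features))

def pipeline(image, gallery):
    """Detection -> feature extraction -> recognition, as in Figure 1.1."""
    return recognize(extract_features(detect_face(image)), gallery)
```

Keeping the stages as separate functions mirrors the modularity described above: detection and extraction could be merged, reordered or run in parallel without changing the recognition stage.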
This phase uses methods common to many other fields that also perform classification: sound engineering, data mining, etc. These phases can be merged, or new ones can be added. Consequently, many different technical approaches can be found for solving a face recognition problem: face detection and recognition could be performed in tandem, or expression analysis could be performed before normalizing the face [10].

The Face Detection Problem

Structure

Face detection is a concept that includes many sub-problems. Some systems detect and locate faces at the same time; others first perform a detection routine and then, if it is positive, attempt to locate the face. Some tracking algorithms may then be required - see Figure 1.2.

Figure 1.2: The face detection process.

Face detection algorithms generally share common steps. First, some dimensionality reduction of the data is performed in order to obtain an acceptable response time. Some preprocessing may also be done to adapt the input image to the algorithm's prerequisites. Then some algorithms analyze the image as is, while others try to extract certain relevant facial regions. The next phase usually involves extracting facial features or measurements. These are then weighted, evaluated or compared to decide whether there is a face and where it is located. Finally, some algorithms have a learning routine and incorporate new data into their models. Face detection is thus a two-class problem, in which one must decide whether or not there is a face in an image. This approach can be [10].
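The two-class, window-scanning formulation can be sketched as follows. Here a normalized cross-correlation score against a fixed template plays the role of the face/non-face decision; the template and the threshold are toy assumptions standing in for a trained classifier.

```python
import numpy as np

def sliding_window_detect(image, template, threshold=0.9):
    """Scan the image with a window the size of `template`; score each
    position with normalized cross-correlation and declare 'face' where
    the score exceeds `threshold`. Returns a list of (row, col, score)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    hits = []
    h, w = image.shape
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            win = image[y:y + th, x:x + tw]
            wc = win - win.mean()                 # zero-mean window
            denom = np.linalg.norm(wc) * t_norm
            score = (wc * t).sum() / denom if denom else 0.0
            if score > threshold:
                hits.append((y, x, score))
    return hits
```

Every window position is an independent two-class decision, which is the framing described above; practical detectors add image pyramids for scale and far cheaper per-window classifiers.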