  • Essay / What is digital image acquisition?

    Digital image acquisition is the formation of a numerically coded representation of the visual properties of an object. Gathering such information without any physical contact with the object or phenomenon is the essence of remote sensing. Jensen (2000) defines remote sensing as a technique for measuring information about an object without touching it (cited in Liu, 2014). More specifically, the term refers to the use of satellite- or aircraft-based sensor technologies to detect and classify transmitted signals such as electromagnetic radiation. Satellite radiation sensors can be divided into two categories, passive and active. In a passive system, the sun acts as the main source of electromagnetic radiation, while an active system transmits its own energy downward and detects the portion reflected back from the Earth. Several theories and methods describe how an image is captured from a satellite in terms of space and time. To explain image acquisition in more depth: it is the act of retrieving an image from a hardware source for subsequent processing. Image acquisition is a crucial step, because if no image is obtained, none of the later processing stages can be performed. Three main principles govern the arrangement of sensors used to convert energy into digital images: incoming energy reflected from the object of interest, an optical system that focuses that energy, and a sensing device that can detect and measure it.
To summarize the theory: the combination of input electrical power and a sensor material that responds to the detected energy converts that energy into a voltage. The waveform of the output voltage is the sensor's response, and each sensor's response is then digitized into a digital quantity. Mishra, Kumar, and Shukla (2017) define image acquisition as the action of capturing images before they are analyzed. Image acquisition can be performed with several sensor arrangements. First, it can be performed with a single sensor. One of the most common single sensors is the photodiode, which is made of silicon and whose output voltage waveform is proportional to the incident light. To improve the selectivity of such a sensor, a filter can be placed in front of it, so that the sensor output favors the band passed by the filter. To obtain a two-dimensional image with a single sensor, there must be relative motion between the sensor and the object in both the x and y directions; typically, rotation provides movement in one direction while linear displacement provides movement along perpendicular lines. Second, image acquisition can be done with a linear sensor, also called a sensor strip, which is used far more often than the single sensor. Here the individual sensors are arranged in a line, so the strip images one direction at once, and motion perpendicular to the strip produces the image in the other direction. Finally, a digital image can also be acquired with a matrix (array) sensor, in which individual sensors are arranged in a two-dimensional pattern.
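The single-sensor pipeline described above, scene energy converted to a voltage and then digitized while the sensor is scanned in x and y, can be illustrated with a minimal sketch. The scene function, sensor gain and 8-bit quantizer here are all hypothetical stand-ins, not values from the essay:

```python
# Sketch: acquiring a 2-D digital image with a single sensor by scanning
# it in the x and y directions. Scene, gain and quantizer are hypothetical.

def scene_intensity(x, y):
    """Hypothetical light intensity (0.0..1.0) at scene position (x, y)."""
    return ((x + y) % 16) / 15.0

def sensor_voltage(intensity, gain=5.0):
    """The sensor converts incident light energy into an output voltage."""
    return gain * intensity

def quantize(voltage, v_max=5.0, levels=256):
    """Digitize the analogue voltage into an integer digital number (DN)."""
    dn = round(voltage / v_max * (levels - 1))
    return max(0, min(levels - 1, dn))

def acquire(width, height):
    """Scan the single sensor over the scene, one pixel per (x, y) step."""
    return [[quantize(sensor_voltage(scene_intensity(x, y)))
             for x in range(width)]
            for y in range(height)]

image = acquire(8, 4)   # 4 rows of 8 digital numbers, each in 0..255
```

A sensor strip would produce one whole row of this loop at a time, and a matrix sensor the whole 2D array at once.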
Large numbers of electromagnetic detectors, and some ultrasonic detectors, are arranged in such ordered arrays. This pattern is found above all in digital cameras, where the most common sensor is the CCD array, used in digital cameras and other light-sensing devices. The response of each element is proportional to the light energy projected onto its surface. CCD arrays are widely used in astronomy and other applications that require images with low noise; noise can be reduced further by letting the sensor integrate the input light signal over minutes or even hours. The advantage of a 2D array is that a complete image can be acquired simply by focusing the energy pattern onto the array surface: because the array coincides with the focal plane, each element records an output proportional to the light it receives. The term image acquisition also covers image compression, a process whose objective is to produce a compact digital representation of the signal. Compression techniques divide into two groups according to whether or not the original image can be reconstructed exactly from the compressed one: lossless and lossy. In lossless compression, every reconstructed data sample must equal its original value. In lossy compression, used mainly in image- and video-processing applications, the reconstructed values need not match the originals exactly, so a slight, controlled loss in the data is accepted.
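The lossless/lossy distinction can be demonstrated concretely. As a minimal sketch, zlib stands in for a lossless coder, and dropping the four low-order bits of each 8-bit pixel is a deliberately crude lossy scheme chosen only for illustration:

```python
# Sketch contrasting lossless and lossy compression on 8-bit pixel values.
import zlib

pixels = bytes([10, 10, 10, 12, 200, 201, 199, 10] * 32)

# Lossless: every reconstructed sample equals the original exactly.
compressed = zlib.compress(pixels)
restored = zlib.decompress(compressed)
assert restored == pixels                     # perfect reconstruction

# Lossy: quantize to 16 brightness levels; a bounded error is accepted.
lossy = bytes((p >> 4) << 4 for p in pixels)
max_error = max(abs(a - b) for a, b in zip(pixels, lossy))
assert max_error < 16                         # error bounded by one level
```

The lossless branch wins nothing unless the data is redundant; the lossy branch always shrinks the information content, which is why it dominates image and video applications where a small error is invisible.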
To obtain a usable image from a remote sensing satellite, several steps must be followed. Satellite image processing operations can be divided into four main groups: image rectification and restoration, image enhancement, image classification and, finally, information extraction. Broadly, the goal is to convert raw image data into accurate data and to remove any noise present in it. The data must be recorded and maintained in digital form so that it can be stored on a computer or disk, and appropriate hardware, software and an image-analysis system are required to carry out this process. Several commercially developed software packages can be used for remote sensing image processing and analysis, such as SAGA GIS and InterImage. The first step, pre-processing, also known as image rectification and restoration, is crucial. Its objective is to ensure that the platform-specific radiometric and geometric data are accurate and precise, meaning error-free. This preliminary operation is grouped into radiometric and geometric corrections. Radiometric corrections are essential because variations arise in scene illumination, viewing geometry, atmospheric conditions, and sensor noise and response. These effects differ with the specific sensor and platform used to acquire the data and with the conditions during acquisition, so the data must be converted or adjusted before different acquisitions can be compared. Examples of radiometric corrections include compensating for sensor distortion and for unwanted sensor or atmospheric noise. Geometric corrections, in turn, address geometric distortions caused by variations in sensor and Earth geometry.
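A minimal sketch of a radiometric correction, using the common linear gain/offset model to convert raw digital numbers (DN) to at-sensor radiance, followed by dark-object subtraction to remove a uniform haze term. The calibration coefficients and pixel values here are hypothetical, not from any real sensor:

```python
# Sketch: radiometric correction with hypothetical calibration coefficients.

GAIN = 0.05    # hypothetical calibration gain (radiance units per DN)
OFFSET = 1.0   # hypothetical calibration offset

def dn_to_radiance(dn):
    """Linear sensor model: radiance = gain * DN + offset."""
    return GAIN * dn + OFFSET

def dark_object_subtraction(radiances):
    """Assume the darkest pixel should be ~0; subtract its value everywhere
    to remove a uniform additive atmospheric (haze) contribution."""
    haze = min(radiances)
    return [r - haze for r in radiances]

raw_dns = [20, 35, 60, 120, 255]
radiance = [dn_to_radiance(dn) for dn in raw_dns]
corrected = dark_object_subtraction(radiance)   # darkest pixel now 0.0
```

Real workflows use per-band calibration tables published with the sensor, but the structure, a linear DN-to-radiance conversion followed by an atmospheric adjustment, is the same.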
Noise in an image, such as systematic striping (banding) and dropped lines, is due to distortions in sensor response and transmission, and these must be corrected before the next stage. Other errors cannot be corrected this way, which is why a geometric registration process is carried out. Geometric registration identifies image coordinates, as (row, column) pairs, at particular locations called ground control points (GCPs). Matching the GCPs in the distorted image to their precise positions on a map is called image-to-map registration. The second stage of satellite image processing is image enhancement. Its main objective is to improve the visual appearance of the imagery so as to facilitate visual interpretation and analysis. Common types of image enhancement found in GIS and image-processing tools are contrast enhancement, linear stretching, histogram equalization, density slicing and edge enhancement. Contrast enhancement adjusts the brightness of the image to suit the display system: the original values are remapped onto a wider available range, which enhances the contrast between targets and background. Linear stretching recalibrates the original brightness values into a new distribution. In histogram equalization, the original brightness values are modified to approximate a uniform intensity distribution, while density slicing maps intervals of brightness into discrete colors. Finally, edge enhancement boosts contrast in local regions to accentuate transitions between areas of contrasting brightness. It is very important to study the image histogram before performing any enhancement.
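Two of the enhancement operations above, linear contrast stretching and histogram equalization, are simple enough to sketch directly. The pixel values below are hypothetical 8-bit digital numbers:

```python
# Sketch: linear contrast stretch and histogram equalization on 8-bit data.

def linear_stretch(pixels, out_max=255):
    """Rescale the occupied brightness range onto the full display range."""
    lo, hi = min(pixels), max(pixels)
    return [round((p - lo) / (hi - lo) * out_max) for p in pixels]

def histogram_equalize(pixels, levels=256):
    """Remap brightness so the cumulative distribution becomes ~uniform."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    n = len(pixels)
    return [round((cdf[p] - 1) / (n - 1) * (levels - 1)) for p in pixels]

dim = [50, 60, 60, 70, 80, 90]      # narrow brightness range (low contrast)
stretched = linear_stretch(dim)      # now spans the full 0..255 range
equalized = histogram_equalize(dim)  # repeated values keep equal ranks
```

The stretch preserves the shape of the histogram while widening it; equalization reshapes it, which is why the two operations give different results on the same input.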
The histogram is usually displayed in three bands: red, green and blue. Image classification is the process of assigning land-cover classes to pixels; for example, a land-cover dataset might be divided into forest, urban, agricultural and other classes. There are three main image-classification methods in remote sensing: unsupervised classification, supervised classification, and object-based image analysis. Unsupervised and supervised classification are the most common and widely used, but in recent years object-based image analysis has gained ground because it handles high-resolution data well. First, in unsupervised classification, pixels are grouped into clusters based on their properties. This is the most basic technique because it requires no training samples; it involves only two steps, generating clusters and assigning classes to them. In supervised classification, the analyst selects training samples that the software then applies to the entire image; it involves three steps: selecting training areas, generating signature files and classifying. Finally, object-based classification groups pixels into objects of different shapes and sizes through a process such as multiresolution segmentation or segment mean shift. This produces image objects of varying scale that are more meaningful because they represent true features of the scene. With object-based classification, objects can be classified by their texture, context and geometry, and can be created and classified using multiple bands. However, choosing a higher-resolution image does not by itself guarantee a better land-cover map.
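The two unsupervised steps, generating clusters and assigning pixels to them, can be sketched with a tiny one-dimensional k-means. The brightness values and cluster count below are illustrative, not a real dataset:

```python
# Sketch: unsupervised classification of pixel brightness via 1-D k-means.
# No training samples are needed - only the number of clusters k.

def kmeans_1d(pixels, k, iterations=20):
    # Initialise cluster centres spread evenly across the observed range.
    lo, hi = min(pixels), max(pixels)
    centres = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iterations):
        # Assignment step: each pixel joins its nearest cluster centre.
        clusters = [[] for _ in range(k)]
        for p in pixels:
            nearest = min(range(k), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        # Update step: each centre moves to the mean of its members.
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres

pixels = [12, 14, 15, 200, 205, 210, 90, 95]   # e.g. water, urban, crops
centres = sorted(kmeans_1d(pixels, k=3))        # three brightness clusters
```

A supervised classifier would instead be given labelled training pixels for each class; an object-based approach would first segment the image and then classify whole segments rather than individual pixels.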
What matters is selecting the image-classification technique that yields an accurate result. To obtain a better land-cover map, we need to know when to use pixel-based classification (supervised or unsupervised) and when to choose object-based classification. The main factor to consider is spatial resolution, which reflects the number of pixels used in constructing an image: the more pixels, the higher the spatial resolution. For a low-spatial-resolution image, either pixel-based or object-based classification will work well. For a high-spatial-resolution image, however, object-based classification will give more precise and accurate results. A case study conducted by the University of Arkansas comparing object-based and pixel-based classification found that object-based classification outperforms pixel-based classification because it uses both spectral and contextual information, thus providing results with greater precision that we can rely on. As mentioned earlier, remote sensing instruments are divided into two groups, passive and active. A passive instrument detects natural energy reflected or emitted by the target, the most common external source being sunlight, whereas an active instrument supplies its own radiation and detects the portion reflected back. Passive instruments include the radiometer, the imaging radiometer, the spectrometer and the spectroradiometer. The radiometer measures the intensity of electromagnetic radiation in particular bands of the electromagnetic spectrum. The imaging radiometer, in turn, adds a scanning capability to produce a 2D array of.