Article

A Gaussian Process Model for Color Camera Characterization: Assessment in Outdoor Levantine Rock Art Scenes

Department of Cartographic Engineering, Geodesy, and Photogrammetry, Universitat Politècnica de València, 46022 València, Spain
* Author to whom correspondence should be addressed.
Current address: Camino de Vera s/n, Edificio 7i, 46022 Valencia, Spain.
These authors contributed equally to this work.
Submission received: 23 September 2019 / Revised: 17 October 2019 / Accepted: 21 October 2019 / Published: 23 October 2019
(This article belongs to the Special Issue Sensors for Cultural Heritage Monitoring)

Abstract

In this paper, we propose a novel approach to the colorimetric camera characterization procedure based on a Gaussian process (GP). GPs are powerful and flexible nonparametric models for multivariate nonlinear functions. To validate the GP model, we compare its results with those of a second-order polynomial model, which is the most widely used regression model for characterization purposes. We applied the methodology to a set of raw images of rock art scenes collected with two different Single Lens Reflex (SLR) cameras. A leave-one-out cross-validation (LOOCV) procedure was used to assess the predictive performance of the models in terms of CIE XYZ residuals and ΔE*ab color differences. Values of less than 3 CIELAB units were achieved for ΔE*ab. The output sRGB characterized images show that both regression models are suitable for practical applications in cultural heritage documentation. However, the results show that colorimetric characterization based on the Gaussian process provides significantly better results, with lower residuals and ΔE*ab values. We also analyzed the noise induced in the output image by the camera characterization. As the noise depends on the specific camera, proper camera selection is essential for the photogrammetric work.

1. Introduction

Accurate recording of color is one of the fundamental tasks in many scientific disciplines, such as chemistry, industry, medicine, or geosciences to name just a few. Color measurement is a crucial aspect in archaeology and specifically in rock art documentation [1,2]. The correct measurement of color allows researchers to study, diagnose, and describe rock art specimens and detect chromatic changes or alterations over time. High-precision metric models together with reliable color information data sets provide essential information in modern conservation and preservation works.
The appropriate description of color is not a trivial issue in cultural heritage documentation [3,4]. Color is a matter of perception, which largely depends on the subjectivity of the observer. Therefore, correct color registration requires objective colorimetric measurement described in rigorous color spaces. Usually, color spaces defined by the CIE are used as the standard reference framework for colorimetric measurement and management.
To avoid damaging the pigment, direct contact measurements with colorimeters or spectrophotometers on painted rock art panels are not allowed. Instead, indirect and noninvasive methods for color determination are required. Thus, the use of digitization techniques with conventional digital cameras to support rock art documentation is becoming more and more frequent [5,6,7,8].
The color information obtained from digital images can be easily captured, stored, and processed. The drawback of such color information lies in the response recorded by the sensor, which is not strictly colorimetric. The RGB responses do not satisfy the Luther–Ives condition, so RGB data are not a linear combination of CIE XYZ coordinates [9]. If the values recorded in the RGB channels were proportional to the input energy, a simple linear relationship between the RGB data acquired by the digital camera and the CIE XYZ tristimulus values would exist. However, in general, the spectral sensitivities of the three RGB channels are not linear combinations of the color-matching functions [10]. The signals generated by digital cameras are therefore referred to as "device dependent". Thus, a transformation to convert the input RGB data into device-independent color spaces is necessary.
A widely accepted approach to establish the mathematical relationships between the original RGB data and well-defined independent color spaces is the procedure of digital camera characterization [10]. Different techniques are used for colorimetric camera characterization. Numerous papers have been written regarding common techniques, such as polynomial transformation with least-squares regressions [9,10], interpolation from look-up tables [11], artificial neural networks [12], and principal component analysis [13]. Further studies have focused on optimizing characterization, including the use of pattern search optimization [14], multiple regression [15], root-polynomial regression [16], or spectral reflectance reconstruction [17,18,19].
The colorimetric characterization of digital cameras based on polynomial models is an appropriate starting point; such models are widely accepted, mathematically simple, and require smaller training sets and less computing time [20,21]. Previous experiments using second-order polynomials applied to rock art paintings also gave good results [22]. However, polynomials tend to be rigid models that suffer from over- or underfitting depending on the amount of data provided. Furthermore, polynomial models are known to generalize unreliably, especially when extrapolating or when modeling wiggly functions [23]. Therefore, it is desirable to improve the results by means of flexible, robust, and more accurate models.
In this work, we introduce a novel approach for documenting rock art paintings based on a Gaussian process (GP) model. GPs are natural, flexible nonparametric models for N-dimensional functions, with multivariate predictors (input variables) in each dimension [24,25]. The defining property of a GP model is that any finite set of function values is jointly distributed as a multivariate Gaussian. A GP is completely defined by its mean and covariance functions. The covariance function is the crucial ingredient of a Gaussian process, as it encodes the correlation structure that characterizes the relationships between function values at different inputs. GPs not only allow nonlinear effects and implicit interactions between covariates, but also improve the generalization of function values for both interpolation and extrapolation purposes. Due to their generality and flexibility, GPs are of broad interest across machine learning and statistics [25,26].
GP models are formulated and estimated within a Bayesian framework, and all inference is based on the multivariate posterior distribution. Computing the posterior distribution is often difficult, and for this reason different computational approaches can be used. Markov chain Monte Carlo (MCMC) is a sampling approach that provides samples from the joint posterior distribution of the parameters [27,28].
The GP model results were compared to the common approach based on polynomial regression models. The main advantage of nonparametric over parametric models is their flexibility [29,30]. In a parametric framework, the shape of the functional relationship is a prespecified, either linear or nonlinear, function, limiting the flexibility of the modeling. In a nonparametric framework, the shape of the functional relationship is completely determined by the data, allowing for a higher degree of adaptability.
The goodness of fit and predictive performance of the models are assessed by analyzing the adjustment residuals and the leave-one-out cross-validation (LOOCV) residuals. The quality of the characterized image is also evaluated in terms of colorimetric accuracy by means of color differences between observed and fitted colors using the CIE framework. In addition, we evaluate the noise induced in the output image by the characterization, which is a recognized drawback for some image applications such as image matching or pattern recognition. The induced noise is evaluated by computing and comparing the coefficients of variation of digital values between the original and output images.

2. Materials and Methods

2.1. Case Study: Cova dels Cavalls

The working area is a rock art scene located in Shelter II of the Cova dels Cavalls site in the county of Tírig, province of Castelló (Spain). This cave is part of one of the most singular rock art sites of the Mediterranean Basin in the Iberian Peninsula, which has been listed as UNESCO World Heritage since 1998 [31].
The images were taken using two different SLR digital cameras, a Sigma SD15 and a Fujifilm IS PRO. The images contain the hunting scene located in the central part of this emblematic archaeological site. Parameters such as focal length, exposure time, aperture, and ISO were controlled during the photographic sessions for both cameras. Photographs were taken in raw format under homogeneous illumination conditions.
The main difference between the Fujifilm IS PRO and the Sigma SD15 cameras is their integrated sensors. The Fujifilm incorporates a 12-megapixel Super CCD imaging sensor, with a resolution of 4256 × 2848 pixels and a color filter array (CFA) with a Bayer pattern. The use of a CFA implies that the color registered at every individual pixel is not acquired directly but results from interpolation between channels. On the other hand, the Sigma carries a three-layer CMOS Foveon®X3 sensor of 2640 × 1760 pixels, which makes it a true trichromatic digital camera [32]. The main advantage of this sensor is its ability to capture color without any interpolation.

2.2. Image-Based Camera Characterization Methodology

The output RGB digital values registered by the camera depend on three main factors: the sensor architecture, the lighting conditions, and the object being imaged. Even assuming the same object and lighting conditions, other factors can still produce different RGB responses within and across scenes. Some elements such as the internal color filters or user settings (exposure time, aperture, white balance, and so on) can modify the output digital values. As a result, the original RGB data registered by the sensor cannot be used rigorously for the quantitative determination of color, and native RGB camera color spaces are said to be device dependent. A way to transform the signal captured by the camera sensor into a physically based, device-independent color space is by means of camera characterization (see workflow in Figure 1).
To carry out the characterization, various training and test datasets are required. An important aspect of the camera characterization process is the establishment of the working color spaces, namely the input RGB data and the output tristimulus coordinates [10]. In the preliminary stages of the study, four different transformations between color spaces were tested: RGB–CIE XYZ, RGB–CIELAB, LMS–CIE XYZ, and LMS–CIELAB. The transformation that worked best was RGB–CIE XYZ, whose results are reported in the rest of the paper.
On the other hand, the digital RGB values are available only after a complex process driven by the built-in software and electronics of the camera [33]. Usually, a set of preprocessing operations, e.g., demosaicing, white balance, gamut mapping, color enhancement, or compression, is applied automatically to the raw image (Figure 2). It is thus preferable to work with raw data rather than processed or compressed RGB image files.
The raw RGB training and test sample data were extracted from the images using our software pyColourimetry, which was developed in previous research. This software was written in the Python programming language following CIE recommendations. It allows raw RGB data extraction from conventional camera formats and implements other colorimetric functions such as transformation among color spaces, color difference calculation, and spectral data treatment [34].
Also, the data acquisition includes the direct measurement of the tristimulus values of the patches present in the color chart and the raw RGB data extraction from the digital image. Thus, a color chart has to be included as a colorimetric reference in the photographic shot to carry out the camera characterization. For this experiment, we used an X-Rite ColorChecker SG Digital Color Chart as a color standard. This chart is routinely used in digital photography for color calibration. It consists of an array of 140 color patches, which is assumed to be enough to cover the color domain present in the scene as well as to provide training and test data sets for analyzing the results after the camera characterization.
CIE XYZ values for the ColorChecker patches have to be known prior to undertaking the camera characterization. An accepted option is to use the tristimulus values provided by the manufacturer. Nevertheless, it is highly recommended to carry out a new measurement, preferably with a colorimeter or spectrophotometer, under the setup of the specific experiment. The spectral reflectance data were acquired using a Konica Minolta CM-600d spectrophotometer, following CIE recommendations (2° standard observer under D65 illuminant). CIE XYZ coordinates can be obtained by transforming the spectral data using well-known CIE formulae [35].
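As an illustration of this transformation, the following minimal sketch (in Python, the language of the pyColourimetry toolchain) integrates sampled spectral data into CIE XYZ tristimulus values. The function and array names are hypothetical placeholders; the color-matching function and illuminant tables are assumed to be sampled at the same wavelengths as the measured reflectances.

```python
import numpy as np

def reflectance_to_xyz(reflectance, illuminant, cmf_x, cmf_y, cmf_z):
    """Convert sampled spectral reflectance to CIE XYZ tristimulus values.

    All arguments are 1-D arrays sampled at the same wavelengths (e.g.,
    400-700 nm at 10 nm steps): the patch reflectance, the illuminant
    spectral power distribution (D65), and the 2-degree standard observer
    color-matching functions.
    """
    # Normalizing constant so that a perfect white (R = 1) yields Y = 100
    k = 100.0 / np.sum(illuminant * cmf_y)
    X = k * np.sum(reflectance * illuminant * cmf_x)
    Y = k * np.sum(reflectance * illuminant * cmf_y)
    Z = k * np.sum(reflectance * illuminant * cmf_z)
    return np.array([X, Y, Z])
```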
To visualize the tristimulus coordinates, it is necessary to perform a final transformation of the CIE XYZ values into the sRGB color space, which is compatible with most digital devices. This transformation is carried out based on the technical recommendations published by the International Electrotechnical Commission [36]. Thus, the final outcome of the characterization consists of an sRGB output image for each regression model.
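A minimal sketch of this final CIE XYZ to sRGB step, using the linear matrix and piecewise transfer function published in IEC 61966-2-1 [36]; the function name is ours, and the input is assumed to be scaled so that the reference white has Y = 1.

```python
import numpy as np

# Linear transformation matrix from CIE XYZ (D65 white, Y in [0, 1])
# to linear sRGB, as published in IEC 61966-2-1
M_XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                          [-0.9689,  1.8758,  0.0415],
                          [ 0.0557, -0.2040,  1.0570]])

def xyz_to_srgb(xyz):
    """Map CIE XYZ tristimulus values (Y in [0, 1]) to 8-bit sRGB."""
    rgb_linear = M_XYZ_TO_SRGB @ np.asarray(xyz)
    rgb_linear = np.clip(rgb_linear, 0.0, 1.0)   # clip out-of-gamut values
    # Piecewise sRGB transfer function (gamma encoding)
    rgb = np.where(rgb_linear <= 0.0031308,
                   12.92 * rgb_linear,
                   1.055 * rgb_linear ** (1 / 2.4) - 0.055)
    return np.round(rgb * 255).astype(np.uint8)
```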
Once the digital camera is colorimetrically characterized, it can be used for the rigorous measurement of color simulating a colorimeter [37]. Using a characterized camera, we can obtain accurate color information over complete scenes, which is a very important requirement to properly analyze rock art. The use of conventional cameras for color measurement allows researchers to take pictures with low-cost recording devices, suitable for carrying out heritage documentation tasks using noninvasive methodologies [22].

2.3. Gaussian Processes for Camera Characterization

The main aim of camera characterization is to find the mapping function between the RGB color values and the CIE XYZ tristimulus coordinates:
$$f : \mathbf{RGB} \in \mathbb{R}^3 \longrightarrow \mathbf{XYZ} \in \mathbb{R}^3$$
Commonly, this multivariate mapping function is divided into independent functions for each single XYZ tristimulus value. In this paper, we propose a Gaussian process (GP) to estimate these functions, with different model parameters, θ1, θ2, and θ3, for each mapping function:
$$f_1 : \mathbf{RGB} \in \mathbb{R}^3 \xrightarrow{\;GP(\theta_1)\;} X \in \mathbb{R}$$
$$f_2 : \mathbf{RGB} \in \mathbb{R}^3 \xrightarrow{\;GP(\theta_2)\;} Y \in \mathbb{R}$$
$$f_3 : \mathbf{RGB} \in \mathbb{R}^3 \xrightarrow{\;GP(\theta_3)\;} Z \in \mathbb{R}$$

2.3.1. Gaussian Process Model

A GP is a stochastic process which defines a distribution over a collection of random variables [24,25]. The defining property of a GP is that any finite set of its random variables is jointly distributed as a multivariate normal distribution. A GP is completely characterized by its mean and covariance functions, which control the a priori behavior of the function. GPs can be used as prior probability distributions for latent functions in generalized linear models [38]. However, in this paper, we focus on GPs in linear models (a normal outcome), as we can assume that the CIE XYZ color coordinates are normally distributed.
A GP for a normal outcome y = {y_1, y_2, …, y_n} ∈ ℝ^n, paired with a matrix of D input variables (predictors) X = {x_1, x_2, …, x_n} ∈ ℝ^(n×D), consists of defining a multivariate Gaussian distribution over y conditioned on X:
$$\mathbf{y} \mid X \sim \mathcal{N}\left(\mu(X),\; K(X \mid \theta) + \sigma^2 I\right)$$
where μ(X) is an n-vector, K(X|θ) is an n × n covariance matrix, σ² is the noise variance, and I is the n × n identity matrix. The mean function μ : X ∈ ℝ^(n×D) → ℝ^n can be anything, although a linear model or even zero is usually recommended. The covariance function K(·|θ) : X ∈ ℝ^(n×D) → ℝ^(n×n) must yield a positive semidefinite matrix [25,26]. In this work, we use the squared exponential covariance function, which is the most commonly used member of the Matérn class of isotropic covariance functions [25]. The squared exponential covariance function for two observed points i and j (i, j = 1, …, n) takes the form
$$K(X \mid \theta)_{ij} = \alpha^2 \exp\left(-\frac{1}{2} \sum_{d=1}^{D} \frac{1}{\ell_d^2}\,(x_{d,i} - x_{d,j})^2\right)$$
where θ = {α, ℓ}; α is the marginal variance parameter, which controls the overall scale or magnitude of the range of values of the GP; and ℓ = {ℓ_d}, d = 1, …, D, are the lengthscale parameters, which control the smoothness of the function in the direction of the d-th predictor, so that the larger the lengthscale, the smoother the function.
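To make the model concrete, the following sketch implements the squared exponential covariance and the posterior predictive mean of a zero-mean GP for a single tristimulus channel. The fixed hyperparameters passed as arguments are a simplification for illustration; in this work they are inferred within the Bayesian framework described in the next section, and each of the three mappings f1, f2, and f3 uses its own set.

```python
import numpy as np

def sq_exp_kernel(X1, X2, alpha, lengthscales):
    """Squared exponential covariance between two sets of RGB inputs.

    X1 is (n1, D) and X2 is (n2, D); alpha and lengthscales are the
    marginal standard deviation and per-dimension lengthscales above.
    """
    A = X1 / lengthscales                        # scale each dimension by l_d
    B = X2 / lengthscales
    sq_dist = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return alpha**2 * np.exp(-0.5 * sq_dist)

def gp_predict_mean(X_train, y_train, X_new, alpha, lengthscales, sigma):
    """Posterior predictive mean of a zero-mean GP at new RGB inputs."""
    K = sq_exp_kernel(X_train, X_train, alpha, lengthscales)
    K += sigma**2 * np.eye(len(X_train))         # add observation noise variance
    K_s = sq_exp_kernel(X_new, X_train, alpha, lengthscales)
    return K_s @ np.linalg.solve(K, y_train)     # E[f(X_new) | y]
```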

2.3.2. Bayesian Inference

Bayesian inference is based on the joint posterior distribution p(θ, σ² | y, X) of the parameters given the data, which is proportional to the product of the likelihood and the prior distributions,
$$p(\theta, \sigma^2 \mid \mathbf{y}, X) \propto p(\mathbf{y} \mid \theta, \sigma^2, X)\; p(\theta)\; p(\sigma^2)$$
In the previous equation,
$$p(\mathbf{y} \mid \theta, \sigma^2, X) = \prod_{i} \mathcal{N}\left(y_i \mid 0,\; K_{ii}(X \mid \theta) + \sigma^2\right)$$
is the likelihood of the model, where the mean function μ(X) has been set to zero for the sake of simplicity, and
$$p(\alpha) = \mathcal{N}(\alpha \mid 0, 10), \qquad p(\ell) = \prod_{d=1}^{D} \mathrm{Gamma}(\ell_d \mid 1, 0.1), \qquad p(\sigma^2) = \mathcal{N}^{+}(\sigma^2 \mid 0, 10)$$
are the prior distributions of the parameters of the model. These correspond to weakly informative prior distributions based on prior knowledge about the magnitude of the parameters.
Predictive inference for new function values ỹ at a new sequence of input values X̃ can be computed by integrating over the joint posterior distribution
$$p(\tilde{\mathbf{y}} \mid \mathbf{y}) = \int p(\tilde{\mathbf{y}} \mid \theta, \sigma^2, \tilde{X})\; p(\theta, \sigma^2 \mid \mathbf{y}, X)\; d\theta\, d\sigma^2$$
To estimate both the parameter posterior distribution and the posterior predictive distribution for this model, simulation methods and/or distributional approximation methods [38] must be used. Simulation methods based on MCMC [27] are general sampling methods for obtaining samples from the joint posterior distribution. For quick inferences and large data sets, where iterative simulation algorithms are too slow, modal and distributional approximations can be efficient alternatives.
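The sampler is not spelled out here, so the following is only a sketch under stated assumptions: a random-walk Metropolis MCMC over the log-hyperparameters of one GP, reusing sq_exp_kernel from the sketch above. The prior scales are read as standard deviations and the Gamma prior is taken in its rate parameterization, both assumptions on our part; in practice, a dedicated probabilistic programming tool would typically be preferred.

```python
import numpy as np

def gp_log_marginal(X, y, alpha, ls, sigma2):
    """Log marginal likelihood log p(y | theta, sigma^2, X) of the zero-mean GP."""
    K = sq_exp_kernel(X, X, alpha, ls) + sigma2 * np.eye(len(y))
    L = np.linalg.cholesky(K)
    z = np.linalg.solve(L, y)
    return -0.5 * z @ z - np.log(np.diag(L)).sum() - 0.5 * len(y) * np.log(2 * np.pi)

def log_prior(alpha, ls, sigma2):
    """Weakly informative priors; scales read as standard deviations (assumption)."""
    lp = -0.5 * (alpha / 10.0) ** 2       # alpha ~ N+(0, 10)
    lp += -0.1 * ls.sum()                 # l_d ~ Gamma(1, 0.1), rate parameterization
    lp += -0.5 * (sigma2 / 10.0) ** 2     # sigma^2 ~ N+(0, 10)
    return lp

def metropolis_gp(X, y, n_iter=5000, step=0.05, seed=0):
    """Random-walk Metropolis over phi = log(alpha, l_1..l_D, sigma^2)."""
    rng = np.random.default_rng(seed)
    D = X.shape[1]
    phi = np.zeros(D + 2)                 # start at alpha = l_d = sigma^2 = 1

    def log_post(phi):
        alpha, ls, sigma2 = np.exp(phi[0]), np.exp(phi[1:D + 1]), np.exp(phi[-1])
        # phi.sum() is the log-Jacobian of the log reparameterization
        return (gp_log_marginal(X, y, alpha, ls, sigma2)
                + log_prior(alpha, ls, sigma2) + phi.sum())

    lp, samples = log_post(phi), []
    for _ in range(n_iter):
        prop = phi + step * rng.standard_normal(phi.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            phi, lp = prop, lp_prop
        samples.append(np.exp(phi))
    return np.array(samples)              # posterior draws of (alpha, l, sigma^2)
```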

2.4. Second-Order Polynomial Model

This is the most widespread model in practical camera characterization. The N-dimensional collections of random observations X, Y, and Z are the CIE color variables, where X_i, Y_i, and Z_i represent the color coordinates of the ith observation (i = 1, …, N). Each N-dimensional variable X, Y, and Z is considered to follow a normal distribution depending on an underlying second-order polynomial function f and a noise variance σ²,
$$p(\mathbf{X} \mid f_x, \sigma_x) = \mathcal{N}(\mathbf{X} \mid f_x, \sigma_x^2 I)$$
$$p(\mathbf{Y} \mid f_y, \sigma_y) = \mathcal{N}(\mathbf{Y} \mid f_y, \sigma_y^2 I)$$
$$p(\mathbf{Z} \mid f_z, \sigma_z) = \mathcal{N}(\mathbf{Z} \mid f_z, \sigma_z^2 I)$$
where I is the N × N identity matrix. The latent second-order polynomial functions f_x, f_y, and f_z take the form
$$f_x = a_0 + a_1 R + a_2 G + a_3 B + a_4 RG + a_5 RB + a_6 GB + a_7 R^2 + a_8 G^2 + a_9 B^2$$
$$f_y = b_0 + b_1 R + b_2 G + b_3 B + b_4 RG + b_5 RB + b_6 GB + b_7 R^2 + b_8 G^2 + b_9 B^2$$
$$f_z = c_0 + c_1 R + c_2 G + c_3 B + c_4 RG + c_5 RB + c_6 GB + c_7 R^2 + c_8 G^2 + c_9 B^2$$
where the vectors a = (a_0, …, a_9), b = (b_0, …, b_9), and c = (c_0, …, c_9) contain the polynomial coefficients, and R, G, and B are the N-dimensional variables in the input RGB space.
The likelihood functions of the variables X, Y, and Z, given the coefficients a, b, c, the variance parameters σ² = {σ_x², σ_y², σ_z²}, and the variables R, G, and B, take the form
$$p(\mathbf{X} \mid \mathbf{a}, \sigma_x, \mathbf{R}, \mathbf{G}, \mathbf{B}) = \prod_{i}^{N} \mathcal{N}(X_i \mid \mathbf{a}, \sigma_x^2, R_i, G_i, B_i)$$
$$p(\mathbf{Y} \mid \mathbf{b}, \sigma_y, \mathbf{R}, \mathbf{G}, \mathbf{B}) = \prod_{i}^{N} \mathcal{N}(Y_i \mid \mathbf{b}, \sigma_y^2, R_i, G_i, B_i)$$
$$p(\mathbf{Z} \mid \mathbf{c}, \sigma_z, \mathbf{R}, \mathbf{G}, \mathbf{B}) = \prod_{i}^{N} \mathcal{N}(Z_i \mid \mathbf{c}, \sigma_z^2, R_i, G_i, B_i)$$
where the subscript i represents the ith observed value.
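For comparison, a minimal sketch of the classical least-squares version of this model, which builds the 10-term design matrix of the equations above and returns point estimates of a, b, and c; the paper estimates the same polynomial within the Bayesian framework described next, and the function names are ours.

```python
import numpy as np

def poly2_design_matrix(R, G, B):
    """10-term second-order polynomial design matrix built from RGB vectors."""
    ones = np.ones_like(R)
    return np.column_stack([ones, R, G, B, R*G, R*B, G*B, R**2, G**2, B**2])

def fit_poly2(rgb_train, xyz_train):
    """Least-squares point estimates of the coefficients a, b, and c.

    rgb_train and xyz_train are (N, 3) arrays; the returned matrix is
    (10, 3), with one coefficient column per CIE XYZ channel.
    """
    R, G, B = rgb_train.T
    M = poly2_design_matrix(R, G, B)
    coefs, *_ = np.linalg.lstsq(M, xyz_train, rcond=None)
    return coefs
```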

Bayesian Inference

The joint posterior distributions are proportional to the product of the likelihood and prior distributions:
$$p(\mathbf{a}, \sigma_x^2 \mid \mathbf{X}) \propto p(\mathbf{X} \mid \mathbf{a}, \sigma_x^2, \mathbf{R}, \mathbf{G}, \mathbf{B})\; p(\mathbf{a})\; p(\sigma_x^2)$$
$$p(\mathbf{b}, \sigma_y^2 \mid \mathbf{Y}) \propto p(\mathbf{Y} \mid \mathbf{b}, \sigma_y^2, \mathbf{R}, \mathbf{G}, \mathbf{B})\; p(\mathbf{b})\; p(\sigma_y^2)$$
$$p(\mathbf{c}, \sigma_z^2 \mid \mathbf{Z}) \propto p(\mathbf{Z} \mid \mathbf{c}, \sigma_z^2, \mathbf{R}, \mathbf{G}, \mathbf{B})\; p(\mathbf{c})\; p(\sigma_z^2)$$
We set vague prior distributions p(a) = N(a | 0, 1000), p(b) = N(b | 0, 1000), p(c) = N(c | 0, 1000), and p(σ) = N⁺(σ | 0, 1) for the hyperparameters a, b, c, and σ, respectively, based on prior knowledge about the magnitude of the parameters.
Predictive inference for new function values X̃, Ỹ, and Z̃ at a new sequence of input values R̃, G̃, and B̃ can be computed by integrating over the joint posterior distributions
$$p(\tilde{\mathbf{X}} \mid \mathbf{X}) = \int p(\tilde{\mathbf{X}} \mid \mathbf{a}, \sigma_x^2, \tilde{\mathbf{R}}, \tilde{\mathbf{G}}, \tilde{\mathbf{B}})\; p(\mathbf{a}, \sigma_x^2 \mid \mathbf{X})\; d\mathbf{a}\, d\sigma_x^2$$
$$p(\tilde{\mathbf{Y}} \mid \mathbf{Y}) = \int p(\tilde{\mathbf{Y}} \mid \mathbf{b}, \sigma_y^2, \tilde{\mathbf{R}}, \tilde{\mathbf{G}}, \tilde{\mathbf{B}})\; p(\mathbf{b}, \sigma_y^2 \mid \mathbf{Y})\; d\mathbf{b}\, d\sigma_y^2$$
$$p(\tilde{\mathbf{Z}} \mid \mathbf{Z}) = \int p(\tilde{\mathbf{Z}} \mid \mathbf{c}, \sigma_z^2, \tilde{\mathbf{R}}, \tilde{\mathbf{G}}, \tilde{\mathbf{B}})\; p(\mathbf{c}, \sigma_z^2 \mid \mathbf{Z})\; d\mathbf{c}\, d\sigma_z^2$$
Simulation methods based on MCMC are used for estimating both the parameter posterior distribution and the posterior predictive distribution of these models.

2.5. Model Checking and Comparison

For model assessment, common checks of normality, magnitude, and tendencies in the fitted and predicted residuals are used. Fitted residuals are useful for identifying outliers or misspecified models and indicate the goodness of fit for the sampled patches. Furthermore, the performance of each model was assessed using the LOOCV approach [39]. The LOOCV procedure has been used multiple times in color science [40,41,42,43]; its origins can be traced back to early practical statistics methods [39], and it is routinely used in modern data science applications [44].
In our study, the LOOCV consists of setting aside an individual patch and fitting the prediction model on the remaining patches. The predicted value for the held-out patch is then compared to its observed value, which gives a measure of the predictive accuracy of the model. This yields an average of the predictive accuracy for unobserved patches as well as individual quality indicators for each color patch.
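A model-agnostic sketch of this procedure; the fit and predict callables are placeholders standing in for either the GP or the polynomial model:

```python
import numpy as np

def loocv_residuals(rgb, xyz, fit, predict):
    """Leave-one-out CIE XYZ residuals for a characterization model.

    rgb and xyz are (N, 3) arrays of patch data; fit(rgb, xyz) returns a
    fitted model, and predict(model, rgb_i) maps one RGB triplet to XYZ.
    """
    n = len(rgb)
    residuals = np.empty_like(xyz, dtype=float)
    for i in range(n):
        keep = np.arange(n) != i                 # hold out patch i
        model = fit(rgb[keep], xyz[keep])
        residuals[i] = xyz[i] - predict(model, rgb[i])
    return residuals
```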
In addition to the residual analysis, the models must also be assessed using colorimetric metrics [1]. Hence, a LOOCV procedure was also conducted to assess the predictive performance in terms of color differences. In classical colorimetry, color difference metrics are determined using formulas based on the CIELAB color space, such as ΔE*ab, also known as the CIE76 color difference [35].
The CIE XYZ color space is not uniform; that is, equal distances in this space do not represent equally perceptible differences between color stimuli. In contrast, CIELAB coordinates are nonlinear functions of CIE XYZ and more perceptually uniform than the CIE XYZ color space [35,45]. The ΔE*ab values between the theoretical and predicted tristimulus coordinates are computed, which gives a measure of the model adjustment in a well-defined color metric.
Other modern color difference formulas which take ΔE*ab as a reference have been developed by the CIE. An example is the CIEDE2000 formula, which includes corrections for variations in color difference perception due to lightness, chroma, hue, and chroma–hue interaction [46,47,48]. Note that CIEDE2000 was designed for specialized industry applications [49]. To use the CIEDE2000 formula, a number of specific requirements have to be fulfilled, including the sample size (greater than 4 degrees of visual angle), the sample–sensor separation (contact), the background field (uniform, neutral gray), and the sample homogeneity (textureless). These conditions usually cannot be guaranteed in the working environments found in rock art documentation. Therefore, it seems more appropriate to use the CIE76 formula herein instead of CIEDE2000 to determine color differences.
ΔE*ab is calculated as the Euclidean distance between two color stimuli in CIELAB coordinates
$$\Delta E^{*}_{ab} = \sqrt{(\Delta L^{*})^2 + (\Delta a^{*})^2 + (\Delta b^{*})^2}$$
where ΔL*, Δa*, and Δb* are the differences between the L*, a*, and b* coordinates of the two color stimuli.
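A minimal sketch of this metric, together with the standard CIE XYZ to CIELAB conversion it relies on; the D65/2° reference white is assumed, matching the measurement setup:

```python
import numpy as np

# D65 reference white for the 2-degree standard observer
WHITE_D65 = np.array([95.047, 100.0, 108.883])

def xyz_to_lab(xyz, white=WHITE_D65):
    """CIE XYZ to CIELAB using the standard CIE formulae."""
    t = np.asarray(xyz) / white
    delta = 6 / 29
    f = np.where(t > delta**3, np.cbrt(t), t / (3 * delta**2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e_ab(xyz1, xyz2):
    """CIE76 color difference between two stimuli given in CIE XYZ."""
    return np.linalg.norm(xyz_to_lab(xyz1) - xyz_to_lab(xyz2))
```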
Numerous guides seek to quantify the maximum tolerance for an acceptable color difference so that it is imperceptible to human vision. This concept is known as the "Just Noticeable Difference" (JND). A good reference is found in the Metamorfoze guideline, which employs the CIE76 color difference formula and establishes a color accuracy of 4 CIELAB units [50].

2.6. Induced Noise Analysis

The radiometric response of a digital camera is the outcome of a number of factors, such as electromagnetic radiation, sensor electronics, the optical system, and so forth [51,52,53,54,55]. The noise present on a single image is basically composed of two components: the photoresponse noise of every sensor element (pixel) and the spatial nonuniformity or fixed pattern noise of the sensor array [56,57].
The nonlinear transformation functions in camera characterization models modify the input data, which are themselves affected by noise. During the camera characterization, noise is transferred from the original image to the characterized image and transformed in different ways. In this paper, the analysis of noise is carried out by comparing the coefficients of variation in the original and the characterized images. The noise assessment was conducted on four selected patches of the color checker (C7, D7, C8, and D8).
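A sketch of this comparison for a single chart patch; the per-channel coefficient of variation is computed over all pixels inside the patch (the array names are hypothetical):

```python
import numpy as np

def coefficient_of_variation(patch_pixels):
    """Per-channel coefficient of variation (std/mean) of a homogeneous
    color chart patch, given as an (n_pixels, 3) array of digital values."""
    return patch_pixels.std(axis=0, ddof=1) / patch_pixels.mean(axis=0)

# Hypothetical usage: compare noise before and after the characterization
# cv_raw  = coefficient_of_variation(raw_patch_C7)    # original raw RGB
# cv_srgb = coefficient_of_variation(srgb_patch_C7)   # characterized sRGB
```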

3. Results and Discussion

3.1. Model Performance Assessment

For model assessment, we processed the CIE XYZ residuals and the ΔE*ab color difference values obtained after the characterization procedure.

3.1.1. CIE XYZ Residuals

Table 1 summarizes the fitted CIE XYZ and LOOCV residual values achieved after the characterization process for the two cameras used in the study. The histograms of the fitted and LOOCV residuals for both images are shown in Figure 3. Although both methodologies give satisfactory results and fit the input RGB data appropriately, all summary statistics and histograms clearly show that the GP outperforms the second-order polynomial regression model in both images. The standard deviation values, which represent the mean error, as well as the maximum and minimum residuals, are lower using the GP than the second-order polynomial model.
Thus, given the results achieved using the GP, a notable improvement can be observed compared with the common second-order polynomial models; that is, a higher adjustment correlation coefficient and a greater predictive capacity were achieved. An improvement in the predictive capacity (LOOCV residuals) implies better model generalization, that is, a better capacity for interpolation and extrapolation. This is a key aspect of the characterization procedure, as the output digital image is the result of applying the established model.

3.1.2. ΔE*ab Color Differences

The ΔE*ab color differences (Equation (10)) obtained between the theoretical and predicted values allowed us to assess the colorimetric quality achieved after the adjustment. The results obtained for ΔE*ab are shown numerically in Table 2 and graphically in Figure 4 (Sigma SD15) and Figure 5 (Fujifilm IS PRO), where the red line delimits the JND tolerance established at 4 CIELAB units.
Under a strict colorimetric criterion, the average and median values for ΔE*ab obtained using both regression models are less than 3 CIELAB units, that is, lower than the JND. However, the ΔE*ab color differences obtained confirm that the adjustment based on the GP model offers better results than the second-order polynomial regression for both cameras. The mean and median ΔE*ab values in both images are similar using the second-order polynomial regression model.
The main improvement is clearly observed in the maximum ΔE*ab values obtained. Although the average LOOCV ΔE*ab is only slightly lower using the GP, the maximum LOOCV ΔE*ab values decrease significantly with this model. The maximum values obtained are 17.814 and 13.021 for the Sigma SD15 and the Fujifilm IS PRO images, respectively, using the second-order polynomial regression model, whereas the maximum LOOCV ΔE*ab values using the GP are ~8 CIELAB units (Table 2).
Moreover, the number of patches with ΔE*ab greater than 4 units (JND) clearly decreases for both images after applying the GP (Figure 4 and Figure 5). Thus, the GP improvement achieved in the adjustment is noticeable in colorimetric terms, reaching lower magnitude residuals (Table 1) and ΔE*ab values (Table 2), which means better model fits and higher predictive capabilities.

3.1.3. Analysis of Color Chart Patches

The use of the LOOCV procedure in this study is twofold: it allows overall model checking as well as the analysis of the individual patches used in the camera characterization. Values of ΔE*ab less than 4 CIELAB units (the JND value, represented by the red line in the plots in Figure 4 and Figure 5) are achieved for the majority of the patches. It is also clearly observed that the Fujifilm IS PRO image gives better results than the Sigma SD15 image, particularly after applying the GP (cf. Figure 5b with Figure 4b).
Table 3 displays the percentage of patches with a LOOCV ΔE*ab greater than 4 CIELAB units for the different regression models. Once again, the GP model gives slightly better results than the second-order polynomial model, especially for the Fujifilm IS PRO digital camera.
In particular, eight patches show the highest ΔE*ab values in both images regardless of the model applied (A8, B4, B9, E4, G4, H3, H9, and M3). These patches can be easily identified on the X-Rite ColorChecker (Figure 6a) as well as on the CIE chromaticity diagram (Figure 6b). The worst results are found in patches E4, H9, B4, and G4 (blue, green, purple, and red, respectively). Note that patches E4 (blue), G4 (red), and H9 (green) are near the vertices of the triangle that delimits the color gamut, that is, the chromatic domain, of the sRGB color space (white line in Figure 6b).
Thus, the nature and colorimetric characteristics of the color patches used as the training sample set affect the overall accuracy of the model used [58]. The colors with the highest ΔE*ab values in these patches correspond to saturated colors that are commonly found in artificial or industrial objects, but not in natural scenes such as those found in archaeological applications. We have to keep in mind that color charts used as colorimetric references are usually designed mainly for industrial processes or photographic applications. Therefore, purple (B4, H3, and M3), blue (E4), green (H9 and B9), and bright red (G4) colors will not likely be present in archaeological scenes. Consequently, these color patches could be removed from the training sample during the characterization process without affecting the global accuracy.
Previous research shows that a proper selection of patches, such as skin tone colors, provides suitable results for the camera characterization procedure applied to rock art paintings [59,60]. In Spanish Levantine rock art paintings, it is more frequent to find reddish or black colors (only dark reds in the Cova dels Cavalls) in the pigments, and skin tone or brown colors in the support. These patches clearly work correctly regardless of the regression model used.

3.1.4. Induced Noise Results

The two cameras used in this study have different built-in sensors. The Sigma SD15 camera incorporates a Foveon®X3 sensor, whereas the Fujifilm IS PRO carries a Super CCD sensor. The variation coefficients were computed and compared between the input RGB image and the output sRGB characterized image for the two mathematical transformations. The pixel variability evaluation was conducted on a reduced group of ColorChecker patches with homogeneous reflectance (C7, D7, C8, and D8).
Table 4 shows the variation coefficients for the raw RGB digital values from the original images before the camera characterization (Figure 7a,e). It is informative to contrast these values with the variation coefficients obtained for the CIE XYZ (Table 5) and the sRGB transformed data (Table 6), respectively. For a brief overview, Table 7 summarizes the variation coefficients obtained.
Moreover, as the degree of the polynomial model can affect the results achieved in terms of noise, we also included the comparison of the variation coefficients for the linear model [61]. Our outcomes show essentially the same results for the second-order and linear models, which are still slightly worse than the GP result (cf. Table 5 and Table 6). The trend in the induced noise results is that the noise depends on the sensor, and is therefore different for each camera regardless of the mathematical model.
The coefficients obtained for the digital values in the original image indicate that the noise generated directly by the Foveon®X3 sensor is greater than the noise in the Super CCD (Table 4). The Super CCD sensor appears to respond better in the raw data collection stage in terms of noise variability.
The overall results also confirm the superior behavior of the Super CCD compared with the Foveon®X3 sensor (cf. Table 5 and Table 6). Table 7 shows that none of the regression models applied to the Fujifilm IS PRO image increased the original variability coefficients. In fact, the coefficients decrease slightly after applying the GP, with the opposite behavior present in the Foveon®X3 sensor. Both the CIE XYZ and the sRGB coefficients increase significantly for the Foveon®X3 sensor, regardless of the characterization model used.
Additionally, Figure 7 displays comparative noise images resulting from the different color space transformations during the camera characterization. Greater noise is produced by the Foveon®X3 sensor than by the Super CCD sensor. It is evident that the best results in terms of noise are obtained for the Fujifilm IS PRO camera (cf. Figure 7b–d,f–h). The architecture of each sensor is different, as are its characteristics and operation. Therefore, with regard to image noise, the effect produced by the sensor differs depending on the camera used. It follows that the digital camera selected for the photographic work is a crucial aspect to be taken into account in archaeological applications.

3.2. Output sRGB Characterized Images

The original raw images were successfully transformed into the same device-independent color space by means of the two regression models applied (Figure 8). Both the GP and the second-order polynomial model gave similar results. Even an experienced observer is unable to perceive differences between the images generated with the two regression models for either SLR digital camera (cf. Figure 8c–f). The JND threshold of 4 CIELAB units is suitable in most practical applications (specifically in this study), and shows that human vision cannot perceive the improvement obtained with the GP (cf. the GP and second-order polynomial mean ΔE*ab values in Table 2).

3.3. ΔE*ab Mapping Images

To verify the colorimetric quality achieved using both the GP and the polynomial regression model, we mapped the ΔE*ab between the two characterized sRGB images obtained (Figure 9). The predominant green color (ΔE*ab < 2 units) observed in the mapping images shows that the results obtained are very satisfactory regardless of the model applied. Again, the best results were obtained for the Fujifilm IS PRO image (Figure 9b). Nevertheless, for common applications, both regression models can be used, since they offer successful results.
The detail of the color chart shown in Figure 9 displays some patches marked in red, that is, with ΔE*ab color difference values greater than 4 CIELAB units (JND). The red color is also found on the edge of the ColorChecker. Indeed, it was on the color chart background support where we found most of the red pixels. It is well known that color depends on the incident lighting; thus, changes in geometry produce local changes of illumination in some parts of the scene. This means that in certain areas the initial hypothesis of homogeneous lighting is not fulfilled, and the regression model does not fit the input data well in shaded areas. This circumstance also reflects the importance of illumination in colorimetry.

3.4. Rock Art Specimen Detail Images

We also compared the results obtained for two different rock art details present in the scene: the wounded animal detail (upper right corner) and the hunting scene (lower left corner) (Figure 10). For each detail selected from the scene, we show the clip of the original image, the output (characterized) image after applying the different regression models, and the color difference mapping between both models. To facilitate the identification of the specimens, a mask has been applied to the latter image (Figure 11 and Figure 12).
Better results are observed in the images characterized with the Fujifilm IS PRO. The ΔE*ab color differences obtained for the pigments were under 2 CIELAB units, hence the predominant green color (Figure 11g and Figure 12g). A limited set of pixels marked in red (ΔE*ab > 4 CIELAB units) is observed in areas where the homogeneous lighting hypothesis is not fulfilled due to geometry changes in the support (Figure 11g). On the other hand, the yellow values (ΔE*ab ~ 2–3 units) present in the Sigma SD15 image can be attributed to the slight improvement the GP brings to the camera characterization (Figure 11c and Figure 12c). Therefore, it is important to correctly select the camera to be used in the characterization process for archaeological research.
It can be seen that the images have been successfully characterized, independently of the regression model used. All details in the characterized images present the same ranges of colors, as well as homogeneous lighting for both cameras (cf. Figure 11b,d,f,h; Figure 12b,d,f,h).

4. Conclusions

The use of digital images to support cultural heritage documentation techniques has undergone unprecedented advances in recent decades. However, the original RGB data provided by digital cameras cannot be used for rigorous color measurement and communication. To address the lack of colorimetric rigor of the input RGB data recorded by the sensor, it is necessary to conduct a colorimetric camera characterization; alternatively, color profiles can also be used.
In this paper, the experimental assessment of a GP model has been carried out and compared with a common second-order polynomial model. Although both regression models yielded good results, the GP provides an improvement in colorimetric terms and fits the original raw RGB data better. The lower CIE XYZ residual values achieved for the adjustment and the lower ΔE*ab color differences support the use of a GP as a proper model for characterizing digital cameras. However, for practical purposes, the final sRGB characterized images derived from both the GP and the second-order polynomial model can be used successfully in cultural heritage documentation and preservation tasks.
Additionally, the GP regression model has been tested on two SLR digital cameras with different built-in sensors to analyze the performance of the model in terms of pixel variability. The noise results show that similar outcomes were obtained regardless of the regression model used. However, the results also reveal that the induced noise highly depends on the camera sensor: it is clearly significant in the Foveon®X3 but not in the Super CCD. Thus, the correct choice of digital camera is a key factor to be taken into consideration in the camera characterization procedure.
The camera characterization procedure allows clear identification of the different pigments used in the scene, proper separation from the support, the production of more accurate digital tracings, and accurate color measurement for monitoring aging effects on pigments. This methodology proves to be highly applicable not only in cultural heritage documentation tasks, but also in any scientific or industrial discipline where correct registration of color is required.

Author Contributions

Conceptualization, A.M.-T.; Formal analysis, A.M.-T.; Funding acquisition, J.L.L.; Methodology, A.M.-T. and G.R.-M.; Software, A.M.-T.; Supervision, Á.M.-M. and J.L.L.; Writing—original draft, A.M.-T. and G.R.-M.; Writing—review & editing, Á.M.-M. and J.L.L.

Funding

This research was partly funded by the Research and Development Aid Program PAID-01-16 of the Universitat Politècnica de València through an FPI-UPV-2016 Sub 1 grant.

Acknowledgments

The authors would like to thank the Dirección General de Cultura y Patrimonio de la Conselleria d'Educació, Investigació, Cultura i Esport de la Generalitat Valenciana for the authorization to carry out the 3D documentation at the Cova dels Cavalls site in Tírig (Castelló, Spain).

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
CIE    Commission Internationale de l'Éclairage
GP     Gaussian process
LOOCV  Leave-one-out cross-validation
MCMC   Markov chain Monte Carlo
Res    Fitted CIE XYZ residuals
RL     CIE XYZ LOOCV residuals
SLR    Single Lens Reflex

References

1. Iturbe, A.; Cachero, R.; Cañal, D.; Martos, A. Virtual digitization of caves with parietal Paleolithic art from Bizkaia. Scientific analysis and dissemination through new visualization techniques. Virtual Archaeol. Rev. 2018, 9, 57–65.
2. Ruiz, J.F.; Pereira, J. The colors of rock art. Analysis of color recording and communication systems in rock art research. J. Archaeol. Sci. 2014, 50, 338–349.
3. Boochs, F.; Bentkowska-Kafel, A.; Degrigny, C.; Karaszewski, M.; Karmacharya, A.; Kato, Z.; Picollo, M.; Sitnik, R.; Trémeau, A.; Tsiafaki, D.; et al. Colour and Space in Cultural Heritage: Key Questions in 3D Optical Documentation of Material Culture for Conservation, Study and Preservation. Int. J. Herit. Digit. Era 2014, 3, 11–24.
4. Gaiani, M.; Fabrizio, I.A.; Ballabeni, A.; Remondino, F. Securing Color Fidelity in 3D Architectural Heritage Scenarios. Sensors 2017, 17, 2437.
5. González-Aguilera, D.; Muñoz-Nieto, A.; Gómez-Lahoz, J.; Herrero-Pascual, J.; Gutierrez-Alonso, G. 3D Digital Surveying and Modelling of Cave Geometry: Application to Paleolithic Rock Art. Sensors 2009, 9, 1108–1127.
6. Robert, E.; Petrognani, S.; Lesvignes, E. Applications of digital photography in the study of Paleolithic cave art. J. Archaeol. Sci. Rep. 2016, 10, 847–858.
7. Fernández-Lozano, J.; Gutiérrez-Alonso, G.; Ruiz-Tejada, M.Á.; Criado-Valdés, M. 3D digital documentation and image enhancement integration into schematic rock art analysis and preservation: The Castrocontrigo Neolithic rock art (NW Spain). J. Cult. Herit. 2017, 26, 160–166.
8. López-Menchero, V.M.; Marchante, Á.; Vincent, M.; Cárdenas, Á.J.; Onrubia, J. Combined use of digital nightlight photography and photogrammetry in the process of petroglyphs documentation: The case of Alcázar de San Juan (Ciudad Real, Spain). Virtual Archaeol. Rev. 2017, 8, 64–74.
9. Hong, G.; Luo, M.R.; Rhodes, P.A. A study of digital camera colorimetric characterization based on polynomial modeling. Color Res. Appl. 2001, 26, 76–84.
10. Westland, S.; Ripamonti, C.; Cheung, V. Characterisation of Cameras. In Computational Colour Science Using MATLAB®, 2nd ed.; John Wiley & Sons: Chichester, UK, 2012; pp. 143–157.
11. Hung, P. Colorimetric calibration in electronic imaging devices using a look-up-table model and interpolations. J. Electron. Imaging 1993, 1, 53–61.
12. Cheung, V.; Westland, S. Color Camera Characterisation Using Artificial Neural Networks. In Proceedings of the Tenth Color Imaging Conference: Color Science and Engineering Systems, Technologies, Applications, Scottsdale, AZ, USA, 12 November 2002; Volume 1, pp. 117–120.
13. Vrhel, M.J.; Trussell, H.J. Color correction using principal components. Color Res. Appl. 1992, 17, 328–338.
14. Bianco, S.; Gasparini, F.; Russo, A.; Schettini, R. A New Method for RGB to XYZ Transformation Based on Pattern Search Optimization. IEEE Trans. Consum. Electron. 2007, 53, 1020–1028.
15. Yoon, C.R.; Cho, M.S. Colorimetric characterization for digital camera by using multiple regression. In Proceedings of the IEEE Region 10 Conference, TENCON 99, 'Multimedia Technology for Asia-Pacific Information Infrastructure', Cheju Island, Korea, 15–17 September 1999; Volume 1, pp. 585–588.
16. Finlayson, G.; Mackiewicz, M.; Hurlbert, A. Color Correction Using Root-Polynomial Regression. IEEE Trans. Image Process. 2015, 24, 1460–1470.
17. Connah, D.; Westland, S.; Thomson, M. Recovering spectral information using digital camera systems. Color. Technol. 2001, 117, 309–312.
18. Liang, J.; Wan, W. Optimized method for spectral reflectance reconstruction from camera responses. Opt. Express 2017, 25, 28273–28287.
19. Heikkinen, V. Spectral Reflectance Estimation Using Gaussian Processes and Combination Kernels. IEEE Trans. Image Process. 2018, 27, 3358–3373.
20. Bianco, S.; Schettini, R.; Vanneschi, L. Empirical modeling for colorimetric characterization of digital cameras. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; Volume 1, pp. 3469–3472.
21. Cheung, V.; Westland, S.; Connah, D.; Ripamonti, C. A comparative study of the characterisation of color cameras by means of neural networks and polynomial transforms. Color. Technol. 2004, 120, 19–25.
22. Molada-Tebar, A.; Lerma, J.L.; Marqués-Mateu, Á. Camera characterization for improving color archaeological documentation. Color Res. Appl. 2018, 43, 47–57.
23. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2013.
24. Bernardo, J.; Berger, J.; Dawid, A.; Smith, A. Regression and classification using Gaussian process priors. Bayesian Stat. 1998, 6, 475–501.
25. Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning; The MIT Press: Cambridge, MA, USA, 2006.
26. Neal, R.M. Monte Carlo implementation of Gaussian process models for Bayesian regression and classification. arXiv 1997, arXiv:physics/9701026.
27. Brooks, S.; Gelman, A.; Jones, G.; Meng, X. Handbook of Markov Chain Monte Carlo; CRC Press: Boca Raton, FL, USA, 2011.
28. Durmus, A.; Moulines, E.; Pereyra, M. Efficient Bayesian computation by proximal Markov chain Monte Carlo: When Langevin meets Moreau. SIAM J. Imaging Sci. 2018, 11, 473–506.
29. Green, P.J.; Silverman, B.W. Nonparametric Regression and Generalized Linear Models: A Roughness Penalty Approach; CRC Press: Boca Raton, FL, USA, 1993.
30. Ruppert, D.; Wand, M.P.; Carroll, R.J. Semiparametric regression during 2003–2007. Electron. J. Stat. 2009, 3, 1193–1256.
31. UNESCO. Rock Art of the Mediterranean Basin on the Iberian Peninsula. Available online: http://whc.unesco.org/en/list/874 (accessed on 10 June 2019).
32. Sigma Corporation. Direct Image Sensor Sigma SD15. Available online: http://www.sigma-sd.com/SD15/technology-colorsensor.html (accessed on 10 June 2019).
33. Ramanath, R.; Snyder, W.E.; Yoo, Y.; Drew, M.S. Color image processing pipeline. IEEE Signal Process. Mag. 2005, 22, 34–43.
34. Molada-Tebar, A.; Lerma, J.L.; Marqués-Mateu, Á. Software development for colorimetric and spectral data processing: PyColourimetry. In Proceedings of the 1st Congress in Geomatics Engineering, Valencia, Spain, 5–6 July 2017; Volume 1, pp. 48–53.
35. CIE. Colorimetry; Commission Internationale de l'Éclairage: Vienna, Austria, 2004.
36. IEC. IEC/4WD 61966-2-1: Colour Measurement and Management in Multimedia Systems and Equipment—Part 2-1: Default RGB Colour Space—sRGB; International Electrotechnical Commission: Geneva, Switzerland, 1998.
37. Martinez-Verdu, F.; Pujol, J.; Vilaseca, M.; Capilla, P. Characterization of a digital camera as an absolute tristimulus colorimeter. Proc. SPIE 2003, 5008, 197.
38. Gelman, A.; Carlin, J.B.; Stern, H.S.; Dunson, D.B.; Vehtari, A. Bayesian Data Analysis, 3rd ed.; CRC Press: Boca Raton, FL, USA, 2013.
39. Stone, M. Cross-validatory choice and assessment of statistical predictions. J. R. Stat. Soc. Ser. B Methodol. 1974, 36, 111–147.
40. Vazquez-Corral, J.; Connah, D.; Bertalmío, M. Perceptual Color Characterization of Cameras. Sensors 2014, 14, 23205–23229.
41. Xiong, W.; Funt, B.; Shi, L.; Kim, S.S.; Kang, B.H.; Lee, S.D.; Kim, C.Y. Automatic white balancing via gray surface identification. Color Imaging Conf. 2007, 1, 143–146.
42. Li, B.; Xu, D.; Xiong, W.; Feng, S. Illumination-independent descriptors using color moment invariants. Opt. Eng. 2009, 48, 027005.
43. Qian, Y.; Chen, K.; Nikkanen, J.; Kamarainen, J.K. Recurrent Color Constancy. Proc. IEEE Int. Conf. Comput. Vis. 2017, 1, 5459–5467.
44. James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning with Applications in R; Springer: New York, NY, USA, 2013.
45. CIE. Colorimetry—Part 4: CIE 1976 L*a*b* Colour Space; Commission Internationale de l'Éclairage: Vienna, Austria, 2007.
46. Luo, M.R.; Cui, G.; Rigg, B. The development of the CIE 2000 color-difference formula: CIEDE2000. Color Res. Appl. 2001, 26, 340–350.
47. Melgosa, M.; Alman, D.H.; Grosman, M.; Gómez-Robledo, L.; Trémeau, A.; Cui, G.; García, P.; Vázquez, D.; Li, C.; Luo, M.R. Practical demonstration of the CIEDE2000 corrections to CIELAB using a small set of sample pairs. Color Res. Appl. 2013, 38, 429–436.
48. Sharma, G.; Wu, W.; Dalal, E.N. The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Res. Appl. 2005, 30, 21–30.
49. CIE. Colorimetry—Part 6: CIEDE2000 Colour-Difference Formula; Commission Internationale de l'Éclairage: Vienna, Austria, 2014.
50. Van Dormolen, H. Metamorfoze Preservation Imaging Guidelines: Image Quality, Version 1.0; Society for Imaging Science and Technology: Springfield, VA, USA, 2012.
51. Lebrun, M.; Buades, A.; Morel, J.M. A nonlocal Bayesian image denoising algorithm. SIAM J. Imaging Sci. 2013, 6, 1665–1688.
52. Colom, M.; Buades, A.; Morel, J.M. Nonparametric noise estimation method for raw images. JOSA A 2014, 31, 863–871.
53. Sur, F.; Grédiac, M. Measuring the noise of digital imaging sensors by stacking raw images affected by vibrations and illumination flickering. SIAM J. Imaging Sci. 2015, 8, 611–643.
54. Zhang, Y.; Wang, G.; Xu, J. Parameter Estimation of Signal-Dependent Random Noise in CMOS/CCD Image Sensor Based on Numerical Characteristic of Mixed Poisson Noise Samples. Sensors 2018, 18, 2276.
55. Naveed, K.; Ehsan, S.; McDonald-Maier, K.D.; Ur Rehman, N. A Multiscale Denoising Framework Using Detection Theory with Application to Images from CMOS/CCD Sensors. Sensors 2019, 19, 206.
56. Riutort-Mayol, G.; Marqués-Mateu, Á.; Seguí, A.E.; Lerma, J.L. Grey level and noise evaluation of a Foveon X3 image sensor: A statistical and experimental approach. Sensors 2012, 12, 10339–10368.
57. Marqués-Mateu, Á.; Lerma, J.L.; Riutort-Mayol, G. Statistical grey level and noise evaluation of Foveon X3 and CFA image sensors. Opt. Laser Technol. 2013, 48, 1–15.
58. Chou, Y.; Luo, M.R.; Li, C.; Cheung, V.; Lee, S. Methods for designing characterisation targets for digital cameras. Color. Technol. 2013, 129, 203–213.
59. Shen, H.; Cai, P.; Shao, S.; Xin, J.H. Reflectance reconstruction for multispectral imaging by adaptive Wiener estimation. Opt. Express 2007, 15, 15545–15554.
60. Molada-Tebar, A.; Marqués-Mateu, Á.; Lerma, J.L. Camera Characterisation Based on Skin-Tone Colours for Rock Art Recording. Proceedings 2019, 19, 12.
61. Yamakabe, R.; Monno, Y.; Tanaka, M.; Okutomi, M. Tunable color correction between linear and polynomial models for noisy images. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3125–3129.
Figure 1. Schematic diagram designed for the camera characterization.

Figure 2. Raw images versus processed images workflow.
Figure 3. CIE XYZ residual histograms after the adjustment: (a,b,e,f) Gaussian process; (c,d,g,h) second-order polynomial; (a,c,e,g) CIE XYZ residuals; (b,d,f,h) LOOCV residuals.
Figure 4. Sigma SD15 ΔE*ab values for the X-Rite patches: (a) ΔE*ab; (b) LOOCV ΔE*ab.

Figure 5. Fujifilm IS PRO ΔE*ab values for the X-Rite patches: (a) ΔE*ab; (b) LOOCV ΔE*ab.

Figure 6. Patches with higher LOOCV ΔE*ab values: (a) on the X-Rite ColorChecker; (b) on the CIE chromaticity diagram.

Figure 7. Noise comparative detail sRGB images after characterization: (a–d) Sigma SD15; (e–h) Fujifilm IS PRO; (a,e) original raw images; (b,f) Gaussian process; (c,g) second-order polynomial model; (d,h) linear model.

Figure 8. Original and output sRGB characterized images: (a,c,e) Sigma SD15; (b,d,f) Fujifilm IS PRO; (a,b) original; (c,d) GP; (e,f) second-order polynomial model.

Figure 9. ΔE*ab difference mapping images between the GP and second-order polynomial characterization models: (a) Sigma SD15; (b) Fujifilm IS PRO.

Figure 10. Selected rock art scenes: (a) Sigma SD15; (b) Fujifilm IS PRO. (A) Wounded animal detail. (B) Hunting scene.

Figure 11. Wounded animal images: (a–d) Sigma SD15; (e–h) Fujifilm IS PRO; (a,e) original image; (b,f) GP characterized image; (c,g) ΔE*ab comparative image; (d,h) second-order characterized image.

Figure 12. Hunting scene images: (a–d) Sigma SD15; (e–h) Fujifilm IS PRO; (a,e) original image; (b,f) GP characterized image; (c,g) ΔE*ab comparative image; (d,h) second-order characterized image.
Table 1. Fitted CIE XYZ residuals (Res) and LOOCV residuals (RL) after the characterization.

Sigma SD15 Image

| Model | Statistic | CIE X Res | CIE X RL | CIE Y Res | CIE Y RL | CIE Z Res | CIE Z RL |
|---|---|---|---|---|---|---|---|
| Gaussian process | Max. | 1.88 | 4.53 | 1.80 | 4.38 | 1.28 | 3.66 |
| Gaussian process | Min. | −2.48 | −4.59 | −3.21 | −5.80 | −1.50 | −4.54 |
| Gaussian process | Std. Dev. | 0.73 | 1.39 | 0.77 | 1.48 | 0.49 | 1.01 |
| Second-order polynomial | Max. | 7.02 | 7.36 | 6.87 | 7.18 | 4.80 | 5.09 |
| Second-order polynomial | Min. | −4.97 | −5.46 | −4.70 | −5.15 | −3.59 | −3.93 |
| Second-order polynomial | Std. Dev. | 1.77 | 1.92 | 1.82 | 1.98 | 1.29 | 1.38 |

Fujifilm IS PRO Image

| Model | Statistic | CIE X Res | CIE X RL | CIE Y Res | CIE Y RL | CIE Z Res | CIE Z RL |
|---|---|---|---|---|---|---|---|
| Gaussian process | Max. | 2.20 | 4.42 | 1.98 | 3.71 | 1.54 | 3.33 |
| Gaussian process | Min. | −3.72 | −4.25 | −2.63 | −3.84 | −1.35 | −1.80 |
| Gaussian process | Std. Dev. | 0.94 | 1.46 | 0.78 | 1.20 | 0.45 | 0.74 |
| Second-order polynomial | Max. | 7.08 | 7.53 | 6.48 | 6.90 | 3.15 | 3.35 |
| Second-order polynomial | Min. | −4.25 | −4.69 | −3.12 | −3.31 | −1.96 | −2.45 |
| Second-order polynomial | Std. Dev. | 1.75 | 1.92 | 1.45 | 1.58 | 0.49 | 0.86 |
Table 2. ΔE*ab summary of the statistical results after the characterization.

Sigma SD15 Image

| Statistic | GP ΔE*ab | GP LOOCV ΔE*ab | Second-Order ΔE*ab | Second-Order LOOCV ΔE*ab |
|---|---|---|---|---|
| Max. | 8.251 | 8.906 | 15.086 | 17.814 |
| Mean | 1.755 | 2.440 | 2.561 | 2.751 |
| Median | 1.403 | 2.180 | 2.135 | 2.205 |
| Std. Dev. | 1.479 | 1.863 | 1.836 | 2.022 |

Fujifilm IS PRO Image

| Statistic | GP ΔE*ab | GP LOOCV ΔE*ab | Second-Order ΔE*ab | Second-Order LOOCV ΔE*ab |
|---|---|---|---|---|
| Max. | 8.213 | 8.409 | 12.634 | 13.021 |
| Mean | 1.817 | 2.285 | 2.753 | 2.958 |
| Median | 1.486 | 1.913 | 2.101 | 2.281 |
| Std. Dev. | 1.457 | 1.729 | 2.186 | 2.305 |
Table 3. Percentage of patches with LOOCV ΔE*ab > 4 CIELAB units.

| Residual Type | Model | Sigma SD15 Patches | Sigma SD15 % | Fujifilm IS PRO Patches | Fujifilm IS PRO % |
|---|---|---|---|---|---|
| Model | Gaussian process | 12 | 8.57 | 11 | 7.86 |
| Model | Second-order | 19 | 13.57 | 31 | 22.14 |
| LOOCV | Gaussian process | 23 | 16.43 | 22 | 15.71 |
| LOOCV | Second-order | 20 | 14.29 | 31 | 22.14 |
Table 4. Noise variation coefficients of the original RGB digital numbers.

| Patch | Sigma SD15 R | Sigma SD15 G | Sigma SD15 B | Fujifilm IS PRO R | Fujifilm IS PRO G | Fujifilm IS PRO B |
|---|---|---|---|---|---|---|
| C7 | 0.020488 | 0.014114 | 0.009923 | 0.011586 | 0.01077 | 0.01223 |
| C8 | 0.036863 | 0.019129 | 0.017545 | 0.013394 | 0.01170 | 0.00851 |
| D7 | 0.018434 | 0.016139 | 0.016432 | 0.008498 | 0.01294 | 0.01039 |
| D8 | 0.015118 | 0.014687 | 0.015908 | 0.010909 | 0.01162 | 0.00933 |
Table 5. Variation coefficients of the output CIE XYZ coordinates.

Sigma SD15 Image

| Patch | GP X | GP Y | GP Z | Second-Order X | Second-Order Y | Second-Order Z | Linear X | Linear Y | Linear Z |
|---|---|---|---|---|---|---|---|---|---|
| C7 | 0.03673 | 0.01413 | 0.03559 | 0.02447 | 0.02620 | 0.03129 | 0.01988 | 0.04851 | 0.03192 |
| C8 | 0.01290 | 0.06014 | 0.09700 | 0.01360 | 0.05988 | 0.09379 | 0.01916 | 0.04617 | 0.10635 |
| D7 | 0.05272 | 0.05006 | 0.05985 | 0.07844 | 0.03741 | 0.06928 | 0.03755 | 0.05664 | 0.06164 |
| D8 | 0.01933 | 0.04773 | 0.08251 | 0.01365 | 0.05254 | 0.07137 | 0.01632 | 0.04362 | 0.07257 |

Fujifilm IS PRO Image

| Patch | GP X | GP Y | GP Z | Second-Order X | Second-Order Y | Second-Order Z | Linear X | Linear Y | Linear Z |
|---|---|---|---|---|---|---|---|---|---|
| C7 | 0.00824 | 0.01098 | 0.00967 | 0.00905 | 0.01275 | 0.01009 | 0.01037 | 0.01197 | 0.00969 |
| C8 | 0.01054 | 0.01162 | 0.01730 | 0.01104 | 0.01179 | 0.01782 | 0.00983 | 0.01078 | 0.01831 |
| D7 | 0.01058 | 0.01286 | 0.01040 | 0.00929 | 0.01163 | 0.01066 | 0.00812 | 0.01156 | 0.00994 |
| D8 | 0.00552 | 0.01033 | 0.01639 | 0.00653 | 0.01080 | 0.01632 | 0.00618 | 0.00916 | 0.01390 |
Table 6. Variation coefficients of the output sRGB digital numbers.

Sigma SD15 Image

| Patch | GP sR | GP sG | GP sB | Second-Order sR | Second-Order sG | Second-Order sB | Linear sR | Linear sG | Linear sB |
|---|---|---|---|---|---|---|---|---|---|
| C7 | 0.09597 | 0.02311 | 0.01805 | 0.01602 | 0.02768 | 0.07291 | 0.01751 | 0.03312 | 0.07113 |
| C8 | 0.02104 | 0.06967 | 0.07489 | 0.07404 | 0.06745 | 0.01955 | 0.08095 | 0.05876 | 0.02398 |
| D7 | 0.01982 | 0.03815 | 0.03506 | 0.03982 | 0.03447 | 0.04274 | 0.03732 | 0.03072 | 0.03744 |
| D8 | 0.03110 | 0.05567 | 0.05300 | 0.04821 | 0.05360 | 0.02538 | 0.04778 | 0.04715 | 0.02661 |

Fujifilm IS PRO Image

| Patch | GP sR | GP sG | GP sB | Second-Order sR | Second-Order sG | Second-Order sB | Linear sR | Linear sG | Linear sB |
|---|---|---|---|---|---|---|---|---|---|
| C7 | 0.01693 | 0.00721 | 0.00478 | 0.00528 | 0.00825 | 0.01490 | 0.00782 | 0.00954 | 0.01624 |
| C8 | 0.00712 | 0.00884 | 0.01200 | 0.01286 | 0.00922 | 0.00737 | 0.01034 | 0.00803 | 0.00671 |
| D7 | 0.00462 | 0.00746 | 0.00585 | 0.00588 | 0.00748 | 0.00531 | 0.00969 | 0.00632 | 0.00577 |
| D8 | 0.00163 | 0.00767 | 0.00927 | 0.00967 | 0.00776 | 0.00257 | 0.00613 | 0.00696 | 0.00324 |
Table 7. Summary of the variation coefficients obtained.

| Model | Color Space | Sigma SD15 Image | Fujifilm IS PRO Image |
|---|---|---|---|
| Original | RGB | 0.01790 | 0.01099 |
| Gaussian process | CIE XYZ | 0.04739 | 0.01120 |
| Gaussian process | sRGB | 0.04463 | 0.00778 |
| Second-order | CIE XYZ | 0.04766 | 0.01148 |
| Second-order | sRGB | 0.04349 | 0.00810 |
| Linear | CIE XYZ | 0.04774 | 0.01082 |
| Linear | sRGB | 0.04271 | 0.00807 |
