Artificial Neural Networks for Environmental and Biochemical Modelling
Artificial neural networks (ANNs) form a group of machine learning techniques inspired by biological neurons. A neural network is a computer model whose architecture mimics the knowledge-acquisition and organizational skills of the human brain. Specifically, ANNs consist of a number of interconnected processing elements, commonly referred to as neurons. The neurons are logically arranged into two or more layers and interact with each other via weighted connections; these scalar weights determine the nature and strength of the influence between the interconnected neurons. In a fully connected network, each neuron is connected to all neurons in the next layer. There is an input layer, where data are presented to the network, and an output layer, which holds the response of the network to the input. It is the intermediate layers, also known as hidden layers, that enable these networks to represent and compute complicated associations between patterns. Neural networks essentially learn through the adaptation of their connection weights.
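The layered, weighted-sum structure described above can be sketched in a few lines. The following is a minimal, illustrative forward pass of a one-hidden-layer network with logistic (sigmoid) activations; the layer sizes and random weights are arbitrary stand-ins, not values from the study.

```python
import numpy as np

def logistic(x):
    # Sigmoid activation, applied element-wise
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, W1, b1, W2, b2):
    # Each hidden neuron computes a weighted sum of all inputs
    # (the scalar connection weights), passed through a nonlinearity.
    hidden = logistic(W1 @ x + b1)
    # The output layer does the same over the hidden activations.
    return logistic(W2 @ hidden + b2)

# Illustrative shapes only: 3 inputs, 4 hidden neurons, 2 outputs
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)
y = mlp_forward(np.array([0.5, -1.0, 2.0]), W1, b1, W2, b2)
print(y.shape)  # one activation per output neuron
```

Learning then amounts to adjusting W1, b1, W2, b2 so the outputs match observed targets.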
The aim of this study is to present how ANNs may serve as efficient modelling schemes for tackling major environmental-health problems, providing key information (environmental or biochemical) at several stages of the source-to-effect continuum. This is demonstrated in three different case studies: (a) assessment of the exposure of gasoline station employees to benzene, (b) assessment of PM air pollution, and (c) parameterization of physiology-based biokinetic (PBBK) models designed to estimate internal exposure to a large number of industrial chemicals (xenobiotics) and pharmaceutical products. For the needs of the present study, a multi-layer perceptron (MLP) network was utilized. This is a feed-forward artificial neural network model that maps sets of input data onto sets of appropriate outputs. It is a modification of the standard linear perceptron, with three or more layers of neurons (nodes) and nonlinear activation functions. Such architectures provide more powerful models, since they can distinguish data that are not linearly separable, i.e., not separable by a single hyperplane. Sigmoid units with the logistic activation function are employed at both the hidden and the output layers. Beyond this basic architecture, the neural network design was optimized to meet the efficiency and accuracy needs of each case study. Several training algorithms were applied and evaluated; the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton algorithm proved to be the most efficient and was therefore used. The BFGS algorithm is widely used to solve unconstrained nonlinear optimization problems. It is derived from Newton's method, a class of hill-climbing optimization techniques that seek a stationary point of a function, where the gradient is 0.
Specifically, Newton's method assumes that the function can be locally approximated as a quadratic Taylor expansion in the region around the optimum and uses the first and second derivatives to find the stationary point.
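As a small, self-contained illustration of the quasi-Newton idea (not the study's actual training code), SciPy's BFGS implementation can be used to find the stationary point of a simple quadratic; BFGS builds up a curvature (inverse-Hessian) approximation from successive gradient evaluations instead of computing second derivatives directly.

```python
import numpy as np
from scipy.optimize import minimize

# Example objective: f(w) = (w0 - 1)^2 + 10*(w1 - 2)^2, minimum at (1, 2)
def f(w):
    return (w[0] - 1.0) ** 2 + 10.0 * (w[1] - 2.0) ** 2

def grad(w):
    # Analytic gradient; BFGS only needs first derivatives and
    # approximates the curvature information Newton's method would use.
    return np.array([2.0 * (w[0] - 1.0), 20.0 * (w[1] - 2.0)])

res = minimize(f, x0=np.zeros(2), jac=grad, method="BFGS")
print(res.x)  # converges to the stationary point, where the gradient is 0
```

In ANN training, f would be the network's error over the training set and w the vector of connection weights.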
(a) Prediction of exposure to benzene for gasoline station employees.
In the case of predicting the exposure of gasoline station employees to benzene, an ANN with five input nodes was created, one per input parameter: the amount of gasoline traded hourly (affecting benzene emissions through evaporation and traffic into the gasoline station), wind speed (affecting benzene concentrations through dispersion), ambient temperature (affecting benzene emissions through evaporation and, to a smaller extent, dispersion), traffic flow on the road in front of the filling station (affecting benzene emissions close to the gasoline station) and urban background benzene concentration (affecting overall benzene levels). The hidden layer included ten nodes (found experimentally to provide the best results) and the output layer three, one for each category of employees: cashiers, miscellaneous activities and employees dealing with car refuelling. The overall performance of the model was very good, with R2 close to 0.9 for the two categories of employees working outdoors; performance was lower for cashiers (R2 = 0.8), since additional parameters related to indoor sources were not included in the model.
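A network of this shape (5 inputs, 10 logistic hidden units, 3 outputs) can be sketched with scikit-learn's MLPRegressor. This is only an architectural sketch: the data below are synthetic stand-ins for the five predictors and three employee-category exposures, and scikit-learn's `lbfgs` solver is a limited-memory variant of the BFGS algorithm used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
# Fabricated placeholders for the five predictors: gasoline traded hourly,
# wind speed, ambient temperature, traffic flow, urban background benzene.
X = rng.uniform(size=(200, 5))
# Three synthetic targets standing in for the three employee categories
# (cashiers, miscellaneous activities, refuelling).
Y = np.column_stack([X @ rng.uniform(size=5) for _ in range(3)])

model = MLPRegressor(hidden_layer_sizes=(10,),   # ten hidden nodes
                     activation="logistic",      # sigmoid units
                     solver="lbfgs",             # quasi-Newton training
                     max_iter=2000, random_state=0)
model.fit(X, Y)
print(model.score(X, Y))  # coefficient of determination R^2
```

With real monitoring data, the score would of course be computed on held-out observations rather than the training set.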
(b) Prediction of atmospheric PMx concentrations.
The second case study was the prediction of atmospheric PM concentrations in the city of Thessaloniki (Greece). Over the last couple of years, the use of biomass as a heating source was allowed in Greece as a CO2-neutral means of space heating in the large metropolitan areas of Athens and Thessaloniki, affecting more than half of the country's population. At the same time, the use of light heating diesel was heavily taxed, while Greece faced a financial crisis with significant repercussions on the average household income. This resulted in a significant elevation of PM2.5 and PM10 concentrations during wintertime, systematically above 100 μg/m3, with concentrations rising up to 180 μg/m3 (and even 220 μg/m3 on extreme occasions) when favored by specific weather conditions. Last winter the Greek government revealed a plan for tackling acute pollution episodes in an attempt to temper the adverse effects on public health. Thus, a forecasting model for PMx concentrations would be very useful both for increasing public awareness and for ensuring cost-effective public health protection. In this study, data from 2 different classes of monitoring stations were used, one representative of urban background and one representative of typical traffic sites. For each of the two groups of monitors, an optimal ANN was designed and developed. The input layer of each ANN consisted of 8 neurons, corresponding to the following predictors: PM10 and PM2.5 concentrations of the previous day, wind speed and direction, temperature, humidity, precipitation, and one categorical variable identifying the day of the week (weekday or weekend). The output layer consisted of 2 nodes, corresponding to the output parameters, namely the average daily PM10 and PM2.5 concentrations for each station. The hidden layer consisted of 14 neurons, experimentally found to yield the best results.
The overall performance of the developed ANN was validated against an independent validation set. The model predicted the concentrations of the following days satisfactorily, with an overall R2 of around 0.8.
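The validation metric used throughout, the coefficient of determination, can be computed directly from held-out observations and predictions. The PM10 values below are hypothetical, for illustration only.

```python
import numpy as np

def r_squared(observed, predicted):
    # R^2 = 1 - SS_res / SS_tot, computed on an independent validation set
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - np.mean(observed)) ** 2)
    return 1.0 - ss_res / ss_tot

obs = np.array([40.0, 55.0, 90.0, 120.0, 60.0])   # hypothetical daily PM10, µg/m3
pred = np.array([45.0, 50.0, 85.0, 110.0, 70.0])  # model forecasts for those days
print(round(r_squared(obs, pred), 3))  # → 0.933
```

An R2 near 0.8 on such a set means the model explains about 80% of the day-to-day variance in concentrations.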
(c) Prediction of biochemical parameters entering a PBBK model.
The third case study is the application of ANNs to a completely different problem of high relevance to the study of interactions between environmental stressors and human health. The fate of chemicals upon human exposure is mathematically described by physiology-based biokinetic (PBBK) models. The latter are continuously gaining ground in regulatory toxicology, describing in quantitative terms the absorption, distribution, metabolism and elimination (ADME) processes in the human body, with a focus on the effective dose at the expected target site. A major problem for the inclusion of these models in the regulatory risk assessment process is their lack of generalization, which is mainly due to the lack of proper parameterization of PBBK models for many compounds, especially new chemicals. Chemical-specific inputs include partition coefficients as well as metabolic parameters such as the maximal velocity (Vmax) and the Michaelis affinity constant (Km), or the intrinsic clearance (Vmax/Km). These parameters can be obtained from independent measurements (in vitro and in vivo) or using algorithms within their valid domain of application, such as quantitative structure-activity relationships (QSARs). QSAR models are regression or classification models used in the chemical and biological sciences and engineering for data-poor chemicals, allowing the prediction of biological properties of a chemical compound on the basis of structural information. In this work, Abraham's solvation equation was used for estimating biological properties. Abraham's model takes into account the excess molar refraction, the compound dipolarity/polarizability, the solute effective or summation hydrogen-bond acidity, the solute effective or summation hydrogen-bond basicity and the McGowan characteristic volume. These parameters comprised the input for the ANN platform we developed.
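The five descriptors listed above are the terms of Abraham's linear free-energy relationship, which in its commonly cited form reads:

```latex
\log SP = c + eE + sS + aA + bB + vV
```

where $SP$ is the solute property being modelled, $E$ the excess molar refraction, $S$ the dipolarity/polarizability, $A$ and $B$ the summation hydrogen-bond acidity and basicity, $V$ the McGowan characteristic volume, and $c, e, s, a, b, v$ the fitted system coefficients. In the present approach the descriptors $E, S, A, B, V$ feed the ANN, which replaces the linear combination above with a nonlinear mapping.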
Several network architectures were tested, and optimal results were obtained using a twelve-node hidden layer. For each chemical-specific biological property, a different ANN was used. Coupling ANNs to Abraham's model parameters produced remarkable results. Until now, the capability of Abraham's model to predict the Michaelis-Menten constant was rather poor (R2 up to 0.35); with our coupled ANN-Abraham's solvation equation method, R2 went up to 0.88 for the 55 chemicals we investigated. For the rest of the parameters (partition coefficients, Vmax), prediction against experimental values was consistently strong (R2 always above 0.9).
Overall, our ANN generation platform produced very useful tools for several sub-domains of the health and environment interactome. As a caveat, we need to keep in mind that these algorithms are insufficient on their own to improve the predictive capability of the overall model; understanding the underlying physical or biological process significantly facilitates the identification of the key parameters governing the process of interest, and thus their selection as inputs to the respective ANN model.
See more of this Group/Topical: Computing and Systems Technology Division