Neural Network Modeling of T Regulatory Cells
The lymph nodes of the immune system contain many T lymphocytes. Once an antibody or a microbial intruder is detected by the body, a metabolic pathway change causes T cells to differentiate into effector T cells that combat the intruder. However, the attack on the intruder must be regulated; otherwise an overactive immune system risks attacking the body's own tissue. This is the role of the T regulatory cells (Tregs). These cells are highly dependent on the protein Foxp3 for their differentiation and for the immune-suppression mechanisms that maintain immune homeostasis. Suppression mechanisms include production of anti-inflammatory cytokines, cell-cell contact, and modification of the activation state and function of antigen-presenting cells (APCs).
The action of Tregs has important applications in preventing organ transplant rejection and in calming an overactive immune system to avert a cytokine storm. At present we can only describe the general trend of the T cell population shift for a given cytokine stimulus, so we wish to quantify this behavior with a model. We hypothesize that cytokine stimulation such as IL-6, TGF-β, and indole will shift the T cell population and make the cells more stable once transplanted.
While the metabolic pathway is, in principle, a solvable research problem, characterizing it fully would require a very long period of time and funding. Instead, we wish to model the T cell population upon stimulation, model its stability, and then solve the inverse problem of finding the stimulation profile required for a given stable population. This can lead to faster and more efficient experimentation.
A recurrent neural network is required because, in reality, the metabolic system depends on its previous state in addition to its current state. Artificial neural networks (ANNs) are inspired by the function of the brain, in particular by how the brain learns to adapt to new situations and how neurons fire their action potentials. These networks have been used to classify data, organize and cluster inputs, fit functions, and reproduce time-dependent profiles. Applications include predicting secondary protein structure, classifying tumors as benign or malignant, recognizing visual and auditory stimuli, and performing corporate financial analysis.
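As a minimal illustration of this idea (not our actual model), the sketch below implements a single recurrent unit in plain Python/NumPy whose hidden state depends on both the current stimulus and the previous state; all sizes and weights are placeholders.

```python
import numpy as np

# Minimal recurrent unit: the hidden state h depends on the current
# input x_t AND the previous hidden state, mirroring a metabolic system
# whose response depends on its history. All sizes and weights here are
# illustrative placeholders, not the trained model's parameters.
rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5                  # e.g. three cytokine inputs
W_in = rng.normal(size=(n_hidden, n_in))
W_rec = rng.normal(size=(n_hidden, n_hidden))
b = np.zeros(n_hidden)

def step(x_t, h_prev):
    """One time step: combine the current stimulus with the previous state."""
    return np.tanh(W_in @ x_t + W_rec @ h_prev + b)

h = np.zeros(n_hidden)
for x_t in rng.normal(size=(10, n_in)):  # a 10-step stimulation profile
    h = step(x_t, h)
```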
The artificial and biological networks share several structural analogies. The dendrites serve as the input of information, and the cell body is where all inputs to the neuron are accumulated and processed. In the ANN, transfer functions such as sigmoidal and radial basis functions introduce nonlinearity into the neuron's output. The synaptic terminals are where neurons connect with each other, and in biological systems synapses vary in strength. This is represented in the ANN by weights on the inputs of each neuron; these weights serve as the parameters of the model and are adjusted during learning.
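To make the analogy concrete, here is a sketch of a single artificial neuron: the weighted inputs (synaptic strengths) are accumulated, as in the cell body, and passed through a sigmoidal transfer function. The numbers are arbitrary examples.

```python
import numpy as np

def sigmoid(z):
    # Sigmoidal transfer function: introduces nonlinearity in the output
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 0.3])   # "dendrites": incoming signals
w = np.array([0.8, 0.1, -0.4])   # weights: synaptic strengths (parameters)
bias = 0.1

# "Cell body": accumulate the weighted inputs, then apply the transfer function
output = sigmoid(w @ x + bias)
```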
Learning, i.e., training a neural network, is an optimization problem: adjust the parameters, commonly referred to as weights, to find the minimum possible error between the network output and the target output.
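In its simplest form this is gradient descent on the mean squared error; the toy example below performs one weight update for a one-parameter linear model, purely to illustrate the principle (our training algorithm is specified here only as error minimization).

```python
import numpy as np

# One gradient-descent step on the mean squared error (MSE) for the toy
# model y = w * x. The error between output and target drives the update.
x = np.array([1.0, 2.0, 3.0])
target = np.array([2.0, 4.0, 6.0])
w, lr = 0.5, 0.1                         # initial weight, learning rate

pred = w * x
mse = np.mean((pred - target) ** 2)      # error to be minimized
grad = np.mean(2 * (pred - target) * x)  # d(MSE)/dw
w -= lr * grad                           # adjust the parameter downhill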
Bayesian learning is an appealing option because we face an Occam's razor problem: the tradeoff between the number of parameters and accuracy versus generalization. A model with many parameters (a more complex model) may fit the data precisely, but only data very similar to the training data, or just the training data itself. A model with fewer parameters can generalize reasonably well over a wide range of data within the training range.
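A common practical form of this tradeoff, used for example in Bayesian regularization, is to minimize a weighted sum of the data error and a complexity penalty on the weights. The sketch below shows only that objective; alpha and beta are placeholder hyperparameters (in a fully Bayesian treatment they are inferred from the data).

```python
import numpy as np

def regularized_objective(weights, pred, target, alpha=0.01, beta=1.0):
    """Occam-style tradeoff: fit the data (E_D) while penalizing model
    complexity through the sum of squared weights (E_W). alpha and beta
    are placeholder hyperparameters here."""
    E_D = np.mean((pred - target) ** 2)  # data misfit
    E_W = np.sum(weights ** 2)           # complexity penalty
    return beta * E_D + alpha * E_W
```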
A network can be uniquely defined by its structure and the transfer functions present. The structure refers to the number of hidden layers (the layers other than the input and output layers) and the number of neurons in each hidden layer. Each neuron applies a transfer function to introduce nonlinearity.
The networks we tested ranged from one to two hidden layers, with each hidden layer containing 1 to 10 neurons, giving 110 different structures. We also trained the same structures using different transfer functions.
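Enumerating those candidates is straightforward; the snippet below lists the 10 one-layer and 100 two-layer layouts (110 in total), independent of the transfer functions tried.

```python
# Candidate structures: 1-2 hidden layers, each with 1 to 10 neurons.
structures = [(n1,) for n1 in range(1, 11)]            # 10 one-layer layouts
structures += [(n1, n2) for n1 in range(1, 11)
                        for n2 in range(1, 11)]        # 100 two-layer layouts
assert len(structures) == 110
```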
The networks had to be retrained multiple times to get as close as possible to the global minimum, since the training algorithms converge to a local minimum but do not guarantee a global one.
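A standard way to do this is random-restart training: train from several random initializations and keep the best result. A minimal sketch follows, where train_once is a hypothetical stand-in for one training run.

```python
def multi_restart(train_once, n_restarts=10):
    """Retrain from different initial weights and keep the network with
    the lowest training error, to get closer to the global minimum.
    `train_once` is a hypothetical helper: seed -> (trained net, MSE)."""
    best_net, best_mse = None, float("inf")
    for seed in range(n_restarts):
        net, mse = train_once(seed)
        if mse < best_mse:
            best_net, best_mse = net, mse
    return best_net, best_mse
```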
The trained networks were then tested on both training and testing data, and we compared their mean squared error (MSE) and graphical output.
The network inputs were the stimulation profiles of TGF-β, IL-6, and indole; the output of the network was the composition profile of the T cell population.
The training data, consisting of inputs and target outputs, is fed to the model; the network adjusts its weights and produces an estimate. This output is compared with the targets, and the resulting error determines how the weights are tuned to give better estimates.
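The sketch below makes this loop explicit for a one-hidden-layer network trained by gradient descent on randomly generated stand-in data; the shapes, values, and choice of tanh are illustrative assumptions, not the study's data or final architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))   # stand-in stimulation profiles (3 inputs)
T = rng.normal(size=(50, 2))   # stand-in target composition profiles

n_hidden = 5
W1 = rng.normal(scale=0.5, size=(3, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, 2))
lr = 0.01

for epoch in range(500):
    H = np.tanh(X @ W1)        # forward pass: network produces an estimate
    Y = H @ W2
    err = Y - T                # compare the estimate with the targets
    # Backward pass: the error determines how the weights are tuned
    gW2 = H.T @ err / len(X)
    gW1 = X.T @ ((err @ W2.T) * (1 - H ** 2)) / len(X)
    W2 -= lr * gW2
    W1 -= lr * gW1
```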
Once the MSEs of the network structures became available, the best networks with a reasonably small number of parameters were chosen and evaluated on new testing data.
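A sketch of that selection step, with hypothetical field names: among networks whose training MSE is within a tolerance of the best, prefer the one with the fewest parameters before evaluating it on the held-out testing data.

```python
def select_best(candidates, tolerance=1.05):
    """candidates: list of dicts with hypothetical 'train_mse' and
    'n_params' keys. Prefer fewer parameters among near-best fits."""
    best_mse = min(c["train_mse"] for c in candidates)
    near_best = [c for c in candidates
                 if c["train_mse"] <= tolerance * best_mse]
    return min(near_best, key=lambda c: c["n_params"])
```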