Yazar "Kockan, Umit" seçeneğine göre listele
Now showing 1 - 2 of 2
Item: ARTIFICIAL-NEURAL-NETWORK PREDICTION OF HEXAGONAL LATTICE PARAMETERS FOR NON-STOICHIOMETRIC APATITES (INST ZA KOVINSKE MATERIALE IN TEHNOLOGIE, 2014)
Kockan, Umit; Ozturk, Fahrettin; Evis, Zafer
In this study, hexagonal lattice parameters (a and c) and unit-cell volumes of non-stoichiometric apatites of M10(TO4)6X2 are predicted from their ionic radii with artificial neural networks. A multilayer-perceptron network is used for training. The results indicate that the Bayesian regularization method, with four neurons in the hidden layer using a tansig activation function and one neuron in the output layer using a purelin function, gives the best results. The errors in the predicted lattice parameters a and c are less than 1 % and 2 %, respectively. On the other hand, errors of about 3 % were encountered for both lattice parameters of non-stoichiometric apatites with exact formulas containing T-site ions that were not used to train the artificial neural network.

Item: Artificial-neural-network prediction of hexagonal lattice parameters for non-stoichiometric apatites (2014)
Kockan, Umit; Ozturk, Fahrettin; Evis, Zafer
In this study, hexagonal lattice parameters (a and c) and unit-cell volumes of non-stoichiometric apatites of M10(TO4)6X2 are predicted from their ionic radii with artificial neural networks. A multilayer-perceptron network is used for training. The results indicate that the Bayesian regularization method, with four neurons in the hidden layer using a tansig activation function and one neuron in the output layer using a purelin function, gives the best results. The errors in the predicted lattice parameters a and c are less than 1 % and 2 %, respectively. On the other hand, errors of about 3 % were encountered for both lattice parameters of non-stoichiometric apatites with exact formulas containing T-site ions that were not used to train the artificial neural network.
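For readers who want a feel for the kind of network the abstract describes, a minimal sketch follows. It is not the authors' code: it assumes scikit-learn's MLPRegressor, approximates MATLAB's tansig/purelin pair with tanh hidden units and a linear output, and stands in for Bayesian regularization (trainbr) with a fixed L2 penalty; the ionic-radius inputs and lattice-parameter targets are illustrative placeholders, not data from the paper.

```python
# Sketch of a multilayer perceptron with four tanh hidden neurons (cf. tansig)
# and a linear output (cf. purelin), mapping ionic radii to hexagonal lattice
# parameters a and c. All numbers below are placeholders for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical inputs: ionic radii (angstroms) of the M-, T- and X-site ions.
X = np.array([
    [0.99, 0.17, 1.33],   # illustrative Ca/P/F-like composition
    [1.18, 0.26, 1.81],   # illustrative Sr/S/Cl-like composition
    [1.00, 0.29, 1.37],   # illustrative mixed composition
])
# Hypothetical targets: lattice parameters a and c (angstroms).
y = np.array([
    [9.37, 6.88],
    [9.95, 7.20],
    [9.50, 6.95],
])

# L2 penalty (alpha) is a crude stand-in for Bayesian regularization.
model = MLPRegressor(hidden_layer_sizes=(4,), activation="tanh",
                     solver="lbfgs", alpha=1e-3, max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict(X))
```

In this setup the output layer is linear by construction (MLPRegressor uses an identity output activation for regression), so only the hidden-layer size and activation need to be set to mirror the 4-neuron tansig / 1-neuron purelin arrangement reported in the abstract.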