ISSN 2224-087X (Print)
ISSN 2224-0888 (Online)

Collected scientific papers
"Electronics and information technologies"

(In 1966-2010 published under the title "Electrical engineering")

(Certificate of State Registration 17618-6468 from February 11, 2011)


Issue 9

Issue 9, Pages: 3-23
DOI: https://doi.org/10.30970/eli.9.3
CALCULATION METHODS IN PLASMONICS. 1. THE MIE THEORY AND THE QUASI-STATIC APPROXIMATION
I. Bolesta, A. Demchuk, O. Kushnir, I. Kolych
Plasmonics is a new branch of electronics. It deals with nanoscale objects and the optical frequency range, so direct experimental research in plasmonics requires complex and expensive experiments; it is often more advantageous to carry out a computational experiment first. The article presents the first part of a review of computational methods that are actively used in plasmonics problems. The size effect modifies the dielectric permittivity of metals at sufficiently small particle sizes; to take this influence into account in the calculations, the authors use the Drude model. The paper proposes to use the Mie theory for metallic particles that are close to spherical in shape and whose mutual interaction can be neglected. To demonstrate the possibilities of such calculations, a method for determining the size distribution of metal particles in sols from the optical spectra of nanoparticles is presented. The method involves computing the spectra of a set of particles of different sizes; each spectrum from the set contributes to the overall spectrum. To determine the contribution of particles of each size to the overall spectrum, Monte Carlo methods were used. This method is patented by the authors, and its efficiency is confirmed by microscopic studies. For metal particles whose dimensions are much smaller than the wavelength of light, optical spectra can be calculated in the quasi-static approximation. This approximation is not limited to spherical shapes; the formulas presented in the article are most applicable to ellipsoidal particles. The optical spectra change depending on the particle shape, and a single particle exhibits two maxima of plasmonic absorption. The energy splitting of the spectra allows the shape parameters of the metal particles to be estimated. This conclusion follows from a comparison of the calculation results with data obtained by atomic force microscopy.
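As a minimal illustration of the quasi-static (dipole) calculation discussed in the abstract, the sketch below combines a size-corrected Drude permittivity with the dipole polarizability of a sphere. The numerical parameters are only rough, silver-like assumptions and are not taken from the paper.

```python
import numpy as np

# Rough, silver-like Drude parameters (assumptions, not values from the paper)
EPS_INF = 5.0        # background permittivity
HBAR_WP = 9.1        # plasma energy, eV
HBAR_G0 = 0.021      # bulk damping, eV
V_F     = 1.39e6     # Fermi velocity, m/s
HBAR    = 6.582e-16  # eV*s

def eps_drude(energy_ev, radius_m, a_size=1.0):
    """Drude permittivity with size-corrected damping gamma = gamma_bulk + A*vF/R."""
    gamma = HBAR_G0 + a_size * HBAR * V_F / radius_m
    return EPS_INF - HBAR_WP**2 / (energy_ev**2 + 1j * gamma * energy_ev)

def sigma_ext_quasistatic(energy_ev, radius_m, eps_medium=1.78):
    """Extinction cross-section (m^2) of a small sphere in the dipole (quasi-static) limit."""
    wavelength = 1239.84e-9 / energy_ev                 # photon energy -> vacuum wavelength, m
    k = 2 * np.pi * np.sqrt(eps_medium) / wavelength    # wave number in the host medium
    eps = eps_drude(energy_ev, radius_m)
    alpha = 4 * np.pi * radius_m**3 * (eps - eps_medium) / (eps + 2 * eps_medium)
    return k * np.imag(alpha)

energies = np.linspace(2.0, 4.0, 400)                   # eV
spectrum = sigma_ext_quasistatic(energies, radius_m=10e-9)
print("plasmon peak near %.2f eV" % energies[np.argmax(spectrum)])
```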
PDF Version

Issue 9, Pages: 24-31
DOI: https://doi.org/10.30970/eli.9.24
FEATURES OF APPLICATION OF IN SITU AND EX SITU MEASUREMENT TECHNIQUES FOR DETERMINATION OF $\mathrm{^{137}Cs}$ SOIL CONTAMINATION
V. Grabovskyi, O. Dzendzelyuk
In this paper we present the results of measuring the $\mathrm{^{137}Cs}$ soil contamination density of the same area using the in situ and ex situ measurement techniques. The soil contamination density is understood as the activity of the radionuclide contained in a 20 cm thick near-surface layer of soil per square metre of surface. Measurements were carried out during 2003-2015 on the territory of the Shatsk Biological-Geographical Station of Ivan Franko National University of Lviv, located on the western bank of Lake Pisochne (Shatsk district of the Volyn region). For the ex situ technique, a stationary gamma spectrometer equipped with a semiconductor Ge(Li) detector was used; for the in situ technique, a Virtuoso gamma radiometer (manufactured by Sparing-Vist, Lviv) equipped with a scintillation CsI detector was employed. The experimental results obtained by both methods were used to create three-dimensional soil contamination maps of the investigated area with the Surfer 8 software package. The peculiarities of application, advantages and disadvantages of both measurement technologies are analyzed. It is shown that, provided the coefficients accounting for the absorption of gamma radiation by the soil above the radionuclide are defined and applied correctly, the results of the in situ measurements correlate with the results of the ex situ technique with acceptable accuracy. The results obtained by both methods give a reliable picture of the distribution of the radionuclide content in the soils. The observed differences in the minimum and maximum values of the radioactive contamination of the investigated soils and in the localization of the corresponding sites are explained both by the peculiarities of the measurement techniques and by the natural decay of the radionuclide present in the soil. The in situ method of radiological measurement requires less time, simplifies the analysis and is cheaper than the ex situ technique; acceptable reliability of the results is ensured if the correction coefficients used by the application software packages are applied correctly. It is shown that the obtained distributions of the soil radionuclide contamination density can vary somewhat depending on the method of their determination. It is concluded that, in general, both radiological measurement techniques provide only a certain averaged estimate, rather than an exact picture, of the soil radionuclide contamination density.
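The definition of contamination density quoted above (activity of the 20 cm surface layer per square metre) amounts to a simple conversion; the sketch below shows it, with the soil bulk density and the depth-absorption correction factor taken as placeholder values rather than the coefficients actually used by the authors' software.

```python
def areal_activity(specific_activity_bq_kg, bulk_density_kg_m3=1400.0, layer_m=0.20):
    """Activity of the 20-cm near-surface soil layer per square metre of surface, Bq/m^2."""
    return specific_activity_bq_kg * bulk_density_kg_m3 * layer_m

# ex situ: specific activity of a sampled 20-cm core, assumed bulk density 1400 kg/m^3
print(areal_activity(50.0))        # 50 Bq/kg -> 14000 Bq/m^2

# in situ: a count-rate-derived estimate multiplied by a depth-absorption correction
# factor (placeholder value; the real coefficients are not given in the abstract)
k_absorption = 1.25
print(12000.0 * k_absorption)      # -> 15000 Bq/m^2, comparable to the ex situ figure
```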
PDF Version

Issue 9, Pages: 32-39
DOI: https://doi.org/10.30970/eli.9.32
DIFFERENTIAL JONES MATRIX FOR CHOLESTERIC
S. Nastyshyn, I. Bolesta, Yu. Nastishin
The formalism of the Jones matrices is a powerful theoretical tool widely used to calculate the parameters of the electric field of a light wave at the output of an optical system or medium. The Jones calculus is based on the linearity of the vector relation between the electric field of a light wave incident on ($\bar{E}_0$) and exiting ($\bar{E}$) from an optical system or medium through the Jones matrix $J$, such that $\bar{E}=J\bar{E}_0$. In this approach, the matrix $J$ describes an optical element as a whole and does not contain any information about its internal structure; for this reason it is called the integral Jones matrix (IJM). Although the IJM was introduced for optical systems with discrete elements, it is often used to model optically inhomogeneous media and appears to be even more popular in the literature than the differential Jones matrix (DJM) approach, which was specially developed by Jones for this type of problem. In this paper we apply the DJM approach to the description of the optical properties of a cholesteric liquid crystal (cholesteric). A cholesteric is a chiral nematic whose director $\bar{n}$ spontaneously twists around an axis $\bar{Z}\perp\bar{n}$. The optical properties of the cholesteric were first described by Mauguin [Bulletin de la Société Française de Minéralogie et de Cristallographie. - 1911. - Vol. 34. - P. 71] using the method of the Maxwell differential equations within a model in which the azimuth of the diagonal dielectric permittivity tensor varies linearly along the coordinate axis $\bar{Z}\perp\bar{n}$. The first attempt to derive the integral Jones matrix for a twisted crystal was made by Jones [J. Opt. Soc. Am. - 1948. - Vol. 37. - P. 671]. An attempt to obtain the cholesteric IJM was made by Chandrasekhar and Rao [Acta Cryst. - 1968. - Vol. A24. - P. 445]. The results of both approaches appear to be applicable over the whole spectral range except the selective reflection band. We show that the wave numbers of the eigenwaves propagating in the cholesteric are the eigenvalues, and the electric field vectors of these eigenwaves are the eigenvectors, of the cholesteric DJM. Taking advantage of this finding, we derive the cholesteric DJM in the local coordinate system for any light wavelength, including the spectral band of selective reflection.
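For readers unfamiliar with the two formalisms, the sketch below builds an integral Jones matrix numerically as a product of many thin local retarder slabs whose optic axis twists linearly with depth (the Mauguin-type model mentioned above). It is only an illustrative forward-propagation sketch with made-up material parameters; like the IJM results cited in the abstract, such a product does not describe the selective reflection band, which is the regime the authors' DJM derivation addresses.

```python
import numpy as np

def rot(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def retarder(delta):
    """Jones matrix of a thin uniaxial slab with retardation delta, axes along x and y."""
    return np.array([[np.exp(-1j * delta / 2), 0.0], [0.0, np.exp(1j * delta / 2)]])

def cholesteric_jones(wavelength, pitch, dn, thickness, n_slabs=2000):
    """Integral Jones matrix as a product of thin slabs whose axis twists linearly with z."""
    dz = thickness / n_slabs
    j_total = np.eye(2, dtype=complex)
    for i in range(n_slabs):
        phi = 2 * np.pi * (i + 0.5) * dz / pitch        # local director azimuth
        delta = 2 * np.pi * dn * dz / wavelength        # local phase retardation
        j_total = rot(phi) @ retarder(delta) @ rot(-phi) @ j_total
    return j_total

J = cholesteric_jones(wavelength=650e-9, pitch=350e-9, dn=0.2, thickness=5e-6)
print(np.round(J @ np.array([1.0, 0.0]), 3))            # output field for x-polarized input
```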
PDF Version

Issue 9, Pages: 40-47
DOI: https://doi.org/10.30970/eli.9.40
MODELING OF PERCOLATION PHENOMENA IN 3D NANOTUBE SYSTEM
Yu. Olenych, I. Karbovnyk, Ya. Shmygelsky, H. Klym
In this paper, percolation phenomena in a system of straight nanotubes are analyzed and an appropriate model is proposed. The algorithm for finding the probability of nanotube percolation is implemented using three-dimensional graphics visualization tools. The influence of the geometric sizes of the nanotubes and of their spatial orientation on the probability of percolation cluster formation is studied. Based on the analysis of the dependence of the percolation probability on the limiting values of the dispersion of the polar and azimuthal angles that determine the nanotube orientation in 3D space, the basic regularities of conductive cluster formation in isotropic and anisotropic nanotube systems are established. The optimum values of the investigated system parameters for observing percolation are found.
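A Monte-Carlo estimate of the percolation probability along the lines sketched in the abstract can be written compactly. In the sketch below, tubes are straight segments of finite radius, two tubes are considered connected when their axes approach closer than one diameter, and spanning from the bottom to the top face of a unit box is tested with a union-find structure. All numerical parameters (tube number, length, radius, angle spreads) are arbitrary illustrations, not the values studied in the paper.

```python
import numpy as np

def seg_dist(p1, q1, p2, q2):
    """Minimum distance between the 3D segments [p1,q1] and [p2,q2]."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    c, b = d1 @ r, d1 @ d2
    denom = a * e - b * b
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = (b * s + f) / e
    if t < 0.0:
        t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
    elif t > 1.0:
        t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return np.linalg.norm(p1 + s * d1 - (p2 + t * d2))

def find(par, i):                      # union-find with path halving
    while par[i] != i:
        par[i] = par[par[i]]
        i = par[i]
    return i

def percolates(n_tubes, length, radius, dtheta, dphi=2 * np.pi, box=1.0, rng=None):
    """One realization: drop tubes with restricted angular spread and test for a
    cluster connecting the bottom (z=0) and top (z=box) faces of the box."""
    rng = rng or np.random.default_rng()
    centers = rng.uniform(0.0, box, (n_tubes, 3))
    theta = rng.uniform(0.0, dtheta, n_tubes)           # polar angle spread
    phi = rng.uniform(0.0, dphi, n_tubes)               # azimuthal angle spread
    axes = np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=1)
    p, q = centers - 0.5 * length * axes, centers + 0.5 * length * axes
    BOT, TOP = n_tubes, n_tubes + 1                     # two virtual electrode nodes
    par = list(range(n_tubes + 2))
    for i in range(n_tubes):
        if min(p[i, 2], q[i, 2]) <= 0.0:
            par[find(par, i)] = find(par, BOT)
        if max(p[i, 2], q[i, 2]) >= box:
            par[find(par, i)] = find(par, TOP)
        for j in range(i):
            if seg_dist(p[i], q[i], p[j], q[j]) <= 2 * radius:
                ri, rj = find(par, i), find(par, j)
                if ri != rj:
                    par[ri] = rj
    return find(par, BOT) == find(par, TOP)

for spread in (np.pi / 6, np.pi / 2, np.pi):
    runs = [percolates(100, length=0.6, radius=0.02, dtheta=spread) for _ in range(10)]
    print("polar-angle spread %.2f rad: P(percolation) ~ %.1f" % (spread, sum(runs) / len(runs)))
```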
PDF Version

Issue 9, Pages: 48-62
DOI: https://doi.org/10.30970/eli.9.48
GLOBAL SEISMIC TOMOGRAPHY MODELS IN THE EXPLORATION OF THE EARTH’S STRUCTURE
V. Fourman
Global tomographic models of the Earth's upper mantle obtained with modern methods are analyzed. The results of different geophysical methods are reviewed in order to characterize the structure of the upper mantle. Based on these data one may assume that the corresponding low-temperature anomalies also extend to these depths. Based on a joint interpretation of gravity and seismic data, residual mantle gravity anomalies are determined; positive anomalies are found near active plate boundaries. The gravity influence of the temperature-induced mantle inhomogeneities determined from seismic tomography data has been removed from the total mantle anomaly field, and compositional gravity anomalies are thus obtained. The use of the gravity field to determine the density distribution of the upper mantle has a long history, but the vast majority of such works are regional, that is, they describe the properties of the lithosphere within specific tectonic structures. The basis of such works is deep seismic sounding geotraverses, which provide additional information that limits the range of possible solutions of the inverse problem of gravimetry. The specificity of the interpretation techniques used does not allow a direct comparison of the results obtained for different structures. In particular, density variations are always determined relative to some "standard" model, which can vary widely. Even a simple comparison of the average density of the continental and oceanic mantle remains a problem. Thus, only a global study by a single method allows a comparative analysis of isolated structures. It is known that the use of the gravity field alone, without additional information, does not allow a reliable result to be obtained. Global three-dimensional images of the internal structure of the Earth, created on the basis of variations in the velocities of seismic waves, have become one of the main achievements of geophysics of the last decade. These models made it possible to compare the evolution of geostructures that are often located on opposite sides of the Earth. The bulk of the global tomographic models is represented by distributions of shear-wave velocities. This is because, for the modern system of seismic stations, a more or less homogeneous coverage of the upper mantle can be obtained only from the analysis of surface waves, which is determined mainly by variations of horizontally and vertically polarized shear waves and has insignificant sensitivity to longitudinal wave velocities. At the same time, tomographic models represented by variations of longitudinal waves can be obtained only from the analysis of the travel times of body waves, so the horizontal resolution of such a model is directly tied to the density of the network of seismic stations. In addition, body waves do not provide sufficient vertical resolution in the upper mantle, even when a large number of stations is available. Thus, the analysis of global tomographic models makes it possible to draw conclusions about the temperature distribution in the upper mantle (and about the structure of the thermal roots of the continents) without interpreting heat-flux data. The generalization of the latest results of seismic, gravity and thermal studies of continental roots makes it possible to state that seismic tomography remains the only method that gives a spatial picture of the upper mantle.
The greatest values of mantle anomalies are found near the subduction zones surrounding the Pacific Ocean, as well as in the Alpine-Mediterranean folded belt, which represents the intra-continental zone of collision of lithospheric plates.
PDF Version

Issue 9, Pages: 63-77
DOI: https://doi.org/10.30970/eli.9.63
MODELING THE INTERACTION OF PARAMETERS OF GEOPHYSICAL PROCESSES IN DEEP STRUCTURES OF THE EARTH
V. Fourman, M. Khomyak, L. Khomyak
The main problems of tectonics are considered, and a review is given of the main open questions that have governed investigations of the physical picture not only of the structures but also of the processes and interactions in the deep shells of our planet. Depending on the convergence speed, the age of the lithosphere and the direction of motion of the interacting plates, several types of subduction zones are distinguished. It is pointed out that investigation of the deep structure, composition and geodynamics of the continental and oceanic lithosphere makes it possible to distinguish the systems connected with the global processes of the Earth’s development (rifts, uncompensated deeps, continents, oceans). Direct measurements with modern geodetic instruments make it possible to build an objective model of recent movements of the Earth’s crust. The basis for such a discussion is the principle of similarity of a number of physical phenomena at different scale levels of the structural hierarchy of the tectonic structures of the Earth’s lithosphere. The patterns of formation, accumulation and development of cracks are similar at different scale levels. It is necessary to identify the influence of each of the listed factors separately, which, of course, is possible only for cases in which the impact of all other factors is weakened. The main purpose here is to construct a model of the density distribution of the medium using the information obtained from the analysis of other geophysical fields. When the same geological environment is studied in a seismic field, the most important factor is the formation of its seismic image; the task of constructing a velocity model of the environment plays only an auxiliary role, since the seismic imaging approach goes far beyond the approach of a seismic model. In general, the impact of the scale of research on the role of its various tasks is determined by the ratio of the size of the object under study to the size of the objects available for direct observation. For this purpose it is not enough to construct physical models of the investigated objects; their high-resolution imaging in geophysical fields and their geological classification in these fields are also required. In computer simulation, the influence of such rheological parameters of the geological environment as layering, anisotropy, ductility and viscosity must be taken into account, at the local and regional levels, in relation to the tasks of tectonophysics. It is clear that this pattern manifests itself differently when fields of different types are used; however, it can be traced in one way or another in gravimetric, magnetometric, seismometric and electrometric studies. For example, in regional studies the main goal of the gravimetric method is to construct a density model of the crust. In the study of local structures, the possibility of quantitative interpretation is reduced because of the greater complexity of the investigated objects, namely because of their mutual influence in the gravity field. Here the situation, as a rule, can be changed by the use of data from other geophysical methods, that is, by combining various geophysical methods.
PDF Version

Issue 9, Pages: 78-85
DOI: https://doi.org/10.30970/eli.9.78
SPECTRUM TRANSFORMATION OF THE RESTORED SIGNAL WITH REGULAR AND IRREGULAR SAMPLING
V. Parubochyi, R. Shuwar, D. Afanassyev
The paper deals with the spectrum transformation of a signal restored from regular and irregular samples. Irregular sampling is studied as a method of obtaining a noise-like spectrum of the error of the restored signal. Hence the order of the output low-pass filter can be reduced, or in certain cases this filter can be omitted. The most interesting area of application for this method may be the reproduction of a digital bitmap image. To simplify the problem, the error spectrum transformation is studied for the one-dimensional sampling case.
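A quick numerical illustration of the effect described above: the same tone is restored by a crude zero-order hold from regular and from jittered sampling instants, and the error spectra are compared. The reconstruction scheme, sampling rates and jitter magnitude are arbitrary assumptions for illustration, not the authors' setup; the point is only that the regular case concentrates the error in discrete image components while the irregular case spreads it into a noise-like floor.

```python
import numpy as np

rng = np.random.default_rng(0)
fs_fine = 20000.0                           # dense grid standing in for the continuous signal
t = np.arange(0.0, 1.0, 1.0 / fs_fine)
x = np.sin(2 * np.pi * 50.0 * t)            # 50 Hz test tone

def reconstruct_hold(sample_times, t_fine, signal):
    """Zero-order-hold restoration from samples taken at the given instants."""
    samples = np.interp(sample_times, t_fine, signal)
    idx = np.clip(np.searchsorted(sample_times, t_fine, side='right') - 1,
                  0, len(sample_times) - 1)
    return samples[idx]

fs = 1000.0
t_reg = np.arange(0.0, 1.0, 1.0 / fs)                                   # regular grid
t_irr = np.sort(t_reg + rng.uniform(-0.4 / fs, 0.4 / fs, t_reg.size))   # jittered grid

for name, ts in (("regular  ", t_reg), ("irregular", t_irr)):
    err = reconstruct_hold(ts, t, x) - x                 # restoration error
    spec = np.abs(np.fft.rfft(err)) / err.size           # 1 Hz per bin for this grid
    band = spec[200:]                                    # well above the 50 Hz tone
    print(name, "error-spectrum peak/mean above 200 Hz: %.1f" % (band.max() / band.mean()))
```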
PDF Version

Issue 9, Pages: 86-93
DOI: https://doi.org/10.30970/eli.9.86
PECULIARITIES OF CATASTROPHIC LANDING OF QUADROCOPTER
B. Blahitko, Yu. Mochulsky
Most articles on unmanned quadrocopter flight implicitly assume that all four motor-propeller pairs and the control scheme are intact. In practice, problems often arise somewhere in the chain: control circuit - motor - propeller. This work considers the features of the landing of an unmanned quadrocopter in the event of failure of one of the four motor-propeller pairs. The main features of the emergency landing of the quadrocopter are determined by mathematical modelling. In good condition, the nose and tail motors create a moment of forces that rotates the quadrocopter about the vertical axis clockwise. Therefore, after an emergency cut-off of the nose motor, the yaw angle begins to decrease. At the same time the tail motor creates a moment under which the pitch angle begins to decrease (the quadrocopter "lowers" its nose). The yaw and pitch rotations create a gyroscopic moment of forces, which begins to rotate the quadrocopter about the longitudinal axis, that is, a roll appears. Having made a little more than one turn about the transverse axis, the quadrocopter begins to rotate in the opposite direction, and subsequently the pitch oscillates in the vicinity of $-150^{\circ}$. The quadrocopter makes almost five turns in the negative direction around the longitudinal axis and falls to the ground on its right motor. After an emergency shutdown of the tail motor, the pitch angle begins to increase (the quadrocopter "lowers" its tail). Having made more than one turn in the positive direction about the transverse axis, the quadrocopter starts to rotate in the opposite direction, and subsequently the pitch oscillates in the vicinity of $+150^{\circ}$. The quadrocopter makes almost five turns in the positive direction around the longitudinal axis and falls to the ground on its left motor. In good condition, the right and left motors create a moment of forces that rotates the quadrocopter about the vertical axis counterclockwise. Accordingly, after an emergency cut-off of the right motor, the roll angle begins to increase (the quadrocopter "lowers" its right side). The yaw and roll rotations create a gyroscopic moment of forces, which begins to rotate the quadrocopter about the transverse axis. Having made a little more than one turn about the longitudinal axis, the quadrocopter begins to rotate in the opposite direction, and subsequently the roll oscillates in the vicinity of $+150^{\circ}$. The quadrocopter makes almost five turns in the positive direction around the transverse axis and falls to the ground on its nose motor. After an emergency cut-off of the left motor, the roll angle begins to decrease (the quadrocopter "lowers" its left side). Having made more than one turn in the negative direction about the longitudinal axis, the quadrocopter starts to rotate in the opposite direction, and subsequently the roll oscillates in the vicinity of $-150^{\circ}$. The quadrocopter makes almost five turns in the negative direction around the transverse axis and falls to the ground on its tail motor. The horizontal speed at the moment of landing, as well as the attitude angles, is unpredictable. Methods of safe landing of a quadrocopter in the event of failure of one of the four motor-propeller pairs are proposed. The basis of the proposed methods is the use of a parachuting effect. The parachuting is achieved by forcibly cutting off the power of the motor located at the opposite end of the same yoke as the faulty motor. As a result, the vertical speed of the quadrocopter at the moment of landing is reduced significantly and approaches a relatively safe value. The horizontal components of the speed remain zero at all times, that is, the quadrocopter falls vertically down. The roll and pitch angles during the fall remain zero, that is, the quadrocopter always lands on its chassis.
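The proposed parachuting rule (cut the motor at the opposite end of the same yoke as the failed one) can be caricatured with a one-dimensional descent model. The mass, thrust and drop height below are invented for illustration, and the tumbling case is treated crudely as producing no useful upward thrust, so this is only a sketch of the idea, not the paper's full rigid-body simulation.

```python
# Illustrative parameters only (not taken from the paper): a 1.2 kg quadrocopter whose
# motors each give about 5 N of thrust at the throttle used during descent.
MASS, G, T_MOTOR = 1.2, 9.81, 5.0

def opposite_motor(failed):
    """Motor at the opposite end of the same yoke: nose<->tail (0,1), right<->left (2,3)."""
    return {0: 1, 1: 0, 2: 3, 3: 2}[failed]

def touchdown_speed(parachuting, h0=20.0, dt=0.001):
    """Integrate 1-D vertical motion after a failure and return the landing speed, m/s."""
    # With the pairing rule the two remaining motors thrust symmetrically and upward;
    # without it the craft tumbles, so (crudely) no useful upward thrust is assumed.
    thrust = 2 * T_MOTOR if parachuting else 0.0
    h, v = h0, 0.0
    while h > 0.0:
        v += (-G + thrust / MASS) * dt
        h += v * dt
    return abs(v)

print("nose motor fails -> also cut motor", opposite_motor(0))
print("touchdown speed with parachuting rule: %.1f m/s" % touchdown_speed(True))
print("touchdown speed while tumbling:        %.1f m/s" % touchdown_speed(False))
```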
PDF Version

Issue 9, Pages: 94-105
DOI: https://doi.org/10.30970/eli.9.94
ZIPF’S AND HEAPS’ LAWS FOR THE NATURAL AND SOME RELATED RANDOM TEXTS
O. Kushnir, V. Buryi, S. Grydzhan, L. Ivanitskyi, S. Rykhlyuk
We have generated randomized Chomsky texts and Miller's monkey random texts (RTs) based on a source natural text (NT) and examined their rank-frequency dependences, Pareto distributions, word-frequency probability distributions, and vocabularies as functions of text length. Here a Chomsky RT is an NT randomized so that its 'words' are the sequences of letters and blanks between the nearest occurrences of some preset letter (e.g., the letter i). We have compared the exponents appearing in the different power laws that describe the word statistics of the NTs and RTs, and analyzed how well the theoretical relationships among those exponents are fulfilled in practice. We have shown empirically that the exponents $\alpha$ and $\eta$ of Zipf's law and the word-probability distribution for the Chomsky RTs are limited by the inequalities $\alpha<1$ and $\eta>1$, while their Heaps' exponent should be approximately equal to unity. We have also compared our results with those obtained for the monkey texts and have shown that the vocabulary of the Chomsky texts is richer than that of the monkey texts. Heaps' law holds to an extraordinarily good approximation for the Chomsky RTs, similarly to the RTs generated by the intermittent silence process and unlike sufficiently long NTs, which reveal slightly convex vocabulary-versus-text-length dependences when plotted on a double logarithmic scale.
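A minimal sketch of the kind of processing described above: Miller's monkey text generation, the Chomsky-style 'word' extraction, and log-log fits for the Zipf and Heaps exponents. The alphabet size, space probability and text length are arbitrary assumptions; a natural text has to be supplied separately (hypothetical file name in the comment) to reproduce the NT/Chomsky comparison.

```python
import numpy as np
from collections import Counter

def zipf_exponent(words, top=500):
    """Slope of the rank-frequency dependence on a double logarithmic scale."""
    freqs = np.array(sorted(Counter(words).values(), reverse=True), float)[:top]
    ranks = np.arange(1, freqs.size + 1)
    return -np.polyfit(np.log(ranks), np.log(freqs), 1)[0]

def heaps_exponent(words, n_points=40):
    """Slope of vocabulary size versus text length on a double logarithmic scale."""
    seen, vocab = set(), []
    for w in words:
        seen.add(w)
        vocab.append(len(seen))
    idx = np.unique(np.geomspace(1, len(words) - 1, n_points).astype(int))
    return np.polyfit(np.log(idx + 1.0), np.log(np.array(vocab)[idx]), 1)[0]

def chomsky_randomize(text, marker='i'):
    """'Words' are the strings between successive occurrences of a preset letter."""
    return [w for w in text.split(marker) if w]

def monkey_text(alphabet='abcdefghij', p_space=0.2, n_chars=200000, seed=1):
    """Miller's monkey: letter keys and the space bar are hit at random."""
    rng = np.random.default_rng(seed)
    keys = list(alphabet) + [' ']
    probs = [(1.0 - p_space) / len(alphabet)] * len(alphabet) + [p_space]
    return ''.join(rng.choice(keys, n_chars, p=probs))

monkey_words = monkey_text().split()
print('monkey: Zipf %.2f, Heaps %.2f'
      % (zipf_exponent(monkey_words), heaps_exponent(monkey_words)))

# For the NT / Chomsky-RT comparison, load any natural text of your own, e.g.:
#   import re
#   text = open('some_novel.txt', encoding='utf-8').read().lower()
#   nt_words, ch_words = re.findall(r'[a-z]+', text), chomsky_randomize(text)
```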
PDF Version

Issue 9, Pages: 106-112
DOI: https://doi.org/10.30970/eli.9.106
PROS AND CONS OF THE PROOF OF STAKE CONSENSUS ALGORITHM. DIFFERENCES IN NETWORK SECURITY BETWEEN PROOF OF WORK AND PROOF OF STAKE
O. Vashchuk, R. Shuwar
A consensus algorithm is a mechanism that allows a network to be protected against attacks; it works by imposing rules on the network participants. Proof of Work is a consensus algorithm based on solving a computationally hard problem. This algorithm requires significant computing power to maintain its operation and is therefore resource-intensive. An alternative algorithm, Proof of Stake, does not require as many resources to maintain network operation, but has a number of shortcomings. The article describes the main aspects of the operation of the Proof of Work and Proof of Stake consensus algorithms, as well as their objectivity and the main requirements for such algorithms in terms of the CAP theorem. The comparison between the algorithms shows their vulnerabilities to attacks, their features of operation and their strengths.
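To make the contrast concrete, here is a toy sketch of the two mechanisms: brute-forcing a nonce under a difficulty target (Proof of Work) versus picking a validator with probability proportional to its stake (Proof of Stake). It is a didactic sketch only; real networks add block headers, adjustable difficulty, randomness beacons, slashing and much more, and none of the numbers below come from the article.

```python
import hashlib
import random

def proof_of_work(block_data: str, difficulty_bits: int = 20):
    """Brute-force a nonce until the block hash has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1

def proof_of_stake_pick(stakes: dict, seed: str):
    """Choose the next validator with probability proportional to the coins staked."""
    rng = random.Random(seed)                 # deterministic, derived from chain state
    total = sum(stakes.values())
    r = rng.uniform(0.0, total)
    acc = 0.0
    for validator, stake in stakes.items():
        acc += stake
        if r <= acc:
            return validator

print(proof_of_work("block #1", 18))
print(proof_of_stake_pick({"alice": 60, "bob": 30, "carol": 10}, seed="prev-block-hash"))
```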
PDF Version

Issue 9, Pages: 113-119
DOI: https://doi.org/10.30970/eli.9.113
CREATING AI FOR GAMES WITH UNREAL ENGINE 4
V. Kushnir, B. Koman
Game AI in Unreal Engine 4 is based on decision trees and is called a Behavior Tree. The advantage of this approach to developing AI is its wide use in the game industry for building AI bots: it allows building not just a simple AI but a large model that makes the game more interesting. Nevertheless, this method has a disadvantage: it is hard to build a large system if you are only starting to develop AI with it. In this paper the Unreal Engine 4 game engine is presented and it is shown how artificial intelligence can be developed with it. The article also shows a good starting point for using this method to quickly build a large and capable AI.
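In Unreal Engine 4 a Behavior Tree is authored as an asset with C++/Blueprint task, decorator and service nodes, which cannot be reproduced here verbatim; the Python sketch below only illustrates the Selector/Sequence logic such a tree encodes, with invented bot actions.

```python
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Runs children in order; fails as soon as one child fails (like UE4's Sequence node)."""
    def __init__(self, *children): self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Runs children in order; succeeds as soon as one child succeeds (like UE4's Selector)."""
    def __init__(self, *children): self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == SUCCESS:
                return SUCCESS
        return FAILURE

class Condition:
    def __init__(self, fn): self.fn = fn
    def tick(self, ctx): return SUCCESS if self.fn(ctx) else FAILURE

class Action:
    def __init__(self, name): self.name = name
    def tick(self, ctx):
        print("bot:", self.name)
        return SUCCESS

# A bot that attacks when the player is visible, otherwise patrols (made-up behaviour).
root = Selector(
    Sequence(Condition(lambda ctx: ctx["player_visible"]),
             Action("chase player"), Action("attack")),
    Action("patrol waypoints"),
)
root.tick({"player_visible": False})   # -> patrol waypoints
root.tick({"player_visible": True})    # -> chase player, attack
```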
PDF Version

Issue 9, Pages: 120-124
DOI: https://doi.org/10.30970/eli.9.120
FINGERPRINT RECOGNITION IN INEXPENSIVE BIOMETRIC SYSTEM
L. Monastyrskii, V. Lozunskii, Ya. Boyko, B. Sokolovskii
Nowadays fingerprint recognition is the most developed biometric identification method. Each person has a unique papillary pattern, which makes identification possible. Typically, the parameters used by fingerprint recognition algorithms are the line endings of the papillary pattern, line bifurcations and singular points of the fingerprint. The features of the papillary pattern are converted into a unique code that is unambiguously associated with the fingerprint; these fingerprint codes are stored in a database. The advantages of the method are a quite simple fingerprint scanning procedure and the low cost of scanning devices. The disadvantages are the sensitivity to tiny damage caused by cuts and scratches. Many scanners also have problems with dry skin and with the skin of elderly people; the number of "unreadable" people can vary from less than one percent up to tens of percent for cheap scanners. However, modern fingerprint scanners are equipped with temperature sensors, force-sensitive resistors, etc., which enhance the protection of the system against tampering. Human and technical factors, the hardware implementation of the biometric devices, the recognition algorithms and the technical realization of the system are very important for the fingerprint recognition process. The human factor, that is, the way a finger is placed (pressed) on the biometric scanner, is an important cause both of reduced fingerprint recognition speed and of false triggering. In particular, correct registration of the characteristic points needed to form a high-quality fingerprint pattern when the finger is placed obliquely (crookedly) on the biometric scanner increases the speed of identification of a person. The hardware implementation of biometric devices depends on the quality of the biometric sensor and on the hardware of the system. In particular, at a resolution greater than or equal to 500 dpi a fairly high-quality fingerprint image can be obtained and subsequently converted into a digital model based on characteristic points. The hardware platform with the built-in processor determines the speed of matching the fingerprint against the fingerprint patterns stored in memory. An inexpensive biometric system for person identification by fingerprint recognition was created on the basis of an optoelectronic scanner and an Arduino microcontroller. We obtained fingerprints in BMP format and processed them with an improved algorithm; the operation of the system was tested in practice. On the basis of the described technology, a security system, a digital door lock for a smart house, or a system of working time accounting for an office can be created. The article gives the wiring scheme of the optical scanner and the Arduino Uno microcontroller. It also shows the block diagram of the working fingerprint identification system, which consists of an optical scanning system, an image buffer, two convolution buffers and a library of templates. The person identification time varied from one to three seconds depending on the size of the fingerprint library.
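In the authors' system the matching runs on the scanner module and the Arduino firmware, whose code is not given in the abstract; the sketch below is a generic minutiae-comparison routine (pre-aligned templates, hypothetical distance and angle tolerances) that only illustrates the kind of characteristic-point matching the abstract describes.

```python
import math

def match_score(minutiae_a, minutiae_b, dist_tol=12.0, angle_tol=math.radians(20)):
    """Fraction of minutiae in template A that find a close counterpart in template B.
    Each minutia is (x, y, angle); the templates are assumed to be pre-aligned."""
    matched, used = 0, set()
    for (xa, ya, ta) in minutiae_a:
        for j, (xb, yb, tb) in enumerate(minutiae_b):
            if j in used:
                continue
            close = math.hypot(xa - xb, ya - yb) <= dist_tol
            d_angle = abs((ta - tb + math.pi) % (2 * math.pi) - math.pi)  # wrap-aware
            if close and d_angle <= angle_tol:
                matched += 1
                used.add(j)
                break
    return matched / max(len(minutiae_a), 1)

# made-up minutiae lists (pixel coordinates and ridge angles in radians)
probe    = [(120, 80, 0.30), (200, 150, 1.20), (90, 210, 2.00)]
template = [(122, 83, 0.35), (201, 148, 1.15), (300, 40, 0.60)]
print("match score:", round(match_score(probe, template), 2))   # 0.67
```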
PDF Version

Issue 9, Pages: 125-134
DOI: https://doi.org/10.30970/eli.9.125
MAJOR TRENDS IN THE DEVELOPMENT OF ADAPTIVE METHODS OF TRANSPORT FLOW MANAGEMENT
A. Klimovich, V. Shuts
The adaptive algorithms on which current traffic systems are based have existed for many decades. Information technologies have developed significantly over this period, which makes their application in the field of transport increasingly relevant. This paper analyses modern trends in the development of adaptive traffic flow control methods. The most promising directions in the field of intelligent transport systems (ITS) are reviewed: high-speed wireless communication between vehicles and road infrastructure based on technologies such as DSRC and WAVE; traffic jam prediction from features such as traffic flow intensity, congestion and vehicle velocity using machine learning, fuzzy logic rules and genetic algorithms; and the application of driver assistance systems to increase vehicle autonomy. The advantages of such technologies for the safety, efficiency and usability of transport are shown. A multi-agent approach is described, which uses V2I communication between vehicles and an intersection controller to improve the efficiency of control thanks to more complete traffic flow information and the possibility of giving orders to individual vehicles. A number of algorithms that use this approach to create a new generation of adaptive transport systems are presented. The change in the intensity of traffic flows observed during the day requires a corresponding change in traffic management parameters, such as cycle times and the duration of enabling signals. Adaptive control, owing to the presence of feedback from the traffic flow, makes it possible to take into account both daily changes in intensity and its fluctuations due to the random arrival of vehicles. Systems based on adaptive management have been in place for the last decades, and their application in both metropolitan areas and smaller cities has proven to be effective. However, the modern development of information technologies and intelligent transport systems allows qualitatively new methods of traffic management to be created, aimed at improving the convenience, efficiency and safety of transport. Today there are dozens of different implementations of adaptive transport management systems, the most common being SCOOT and SCATS. Modern achievements in the field of ITS can significantly expand the current capabilities of adaptive transport management and create more advanced systems through the use of advanced sensors, electronics, computer and communication technologies, and innovative management strategies. One of the important directions of ITS development is the use of wireless telecommunications. Research conducted in this area in the 2000s showed that existing Wi-Fi technology did not meet the objectives. To solve these problems, a new addition to the Wi-Fi standard, IEEE 802.11p, was created. The new protocol is based on DSRC (Dedicated Short Range Communication) technology, which serves for short-range communication. The next-generation technology is called WAVE (Wireless Access in Vehicular Environments) and provides high-speed data transmission. The shortest-range wireless networks are used for data exchange between devices inside the car, for example for communication between the driver's smartphone and the car's systems. V2V communication includes the exchange of data with vehicles passing nearby or moving on the same route, as well as emergency broadcasting to vehicles located nearby. V2I communication uses the roadside infrastructure for data exchange and network connectivity with vehicles. The car can also have a direct Internet connection via a cellular network.
The services developed at this point include a cooperative alert system, a collision detection system, a cooperative intersection safety system, and warnings about approaching emergency vehicles or road-work zones. Another ITS direction is congestion prediction. In the last few decades the most common road forecasting techniques have been based on the Kalman filter and the autoregressive integrated moving average (ARIMA) model. Currently much attention is paid to methods that can perform forecasting based on several features, including traffic flow, degree of road occupancy and speed. Such algorithms include support vector machines (SVM), neural networks (NN), systems based on fuzzy logic rules (FRBS) and genetic algorithms (GA). The most effective methods to date can predict the occurrence of congestion 5 to 30 minutes ahead with an accuracy of 95%. The last decade has been characterized by active development in the field of autonomous and unmanned vehicles, as demonstrated by the following projects: VIAC (2007-10), HAVEit (2008-11), Cybercars-2 and CityMobil (2005-08 and 2008-11) [13], the GCDC competition (2009-11), e-Safety (2002-13), the DARPA competitions and Google's driverless car. Work on advanced driver assistance systems (ADAS) is conducted in such areas as lane-change assistance, pedestrian safety and collision mitigation systems, adaptive headlight control, parking assistance, night vision, cruise control, and driver monitoring systems that can detect a drowsy driver and warn about dangerous situations. The next step is the creation of cooperative adaptive cruise control systems based on V2V interaction, traffic sign and traffic light recognition systems, and systems that use information from digital maps, for example to select an appropriate speed before a sharp turn. The development of ADAS will affect vehicle safety requirements, and over time the use of such systems will become mandatory, making the vehicles more autonomous. Multi-agent systems (MAS) consist of autonomous intelligent agents interacting with each other and with a passive environment in which the agents exist and which they can affect. The use of MAS to control an intersection is made possible by the development of V2I and V2V communications. The intersection is equipped with a controller implementing an algorithm for controlling the traffic light phases. In the case of autonomous vehicles, the traffic light can perform only a secondary function, since the main commands can be transmitted through V2I communication. The controller has a specific range; on entering it, a car transmits information about its position, speed and direction of movement. The controller collects this information from all vehicles and performs phase planning based on the data received. If necessary, commands are sent to the vehicles so that they can adjust their actions. The proposed controller algorithms can differ significantly from system to system: some focus on better planning of traffic light phases using more complete information on traffic flows, others monitor the trajectories of vehicles for more efficient and safe passage through the intersection, and others propose abandoning traffic light regulation altogether and actively using the possibilities of wireless communication through message exchange.
The quality of the proposed algorithms is evaluated by simulation, which shows a significant increase in the efficiency of the intersection in comparison with classical traffic light control. In addition to controlling an isolated intersection, MAS can be used in centralized traffic management systems for an entire city, which handle route formation, travel time planning and coordination of individual traffic light objects in order to avoid congestion. Most of the existing methods of adaptive transport management are based on technologies that have been available for several decades. The article presents a number of technologies, such as devices providing high-speed wireless telecommunications with cars and advanced driver assistance systems, whose emergence and spread will lead to the creation of a new generation of adaptive transport management systems. Such systems will be able to collect detailed descriptions of traffic flows, including information on the route, speed and position of individual vehicles, and to transmit individual commands to these vehicles, allowing urban transport systems to cope with the constant growth in the number of vehicles and traffic volumes observed around the world.
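A toy version of the multi-agent V2I scheme surveyed above (vehicles report position and speed, the intersection controller plans the next phase) might look as follows; the demand weighting, the 50 m scale and the two-phase layout are arbitrary assumptions, not any of the reviewed algorithms.

```python
from dataclasses import dataclass

@dataclass
class VehicleReport:
    """A V2I message sent when a car enters controller range."""
    approach: str        # 'N', 'S', 'E' or 'W'
    distance_m: float    # distance to the stop line
    speed_mps: float     # current speed

class IntersectionController:
    """Toy phase planner: serve the pair of approaches with the largest demand,
    weighting nearby slow vehicles (queues) more than distant fast ones."""
    PHASES = {"NS": ("N", "S"), "EW": ("E", "W")}

    def __init__(self):
        self.reports = []

    def receive(self, report):
        self.reports.append(report)

    def plan_phase(self):
        demand = {name: 0.0 for name in self.PHASES}
        for r in self.reports:
            weight = 1.0 / (1.0 + r.distance_m / 50.0) * (2.0 if r.speed_mps < 2.0 else 1.0)
            for name, approaches in self.PHASES.items():
                if r.approach in approaches:
                    demand[name] += weight
        self.reports.clear()
        return max(demand, key=demand.get)

ctrl = IntersectionController()
for rep in (VehicleReport("N", 20, 0.5), VehicleReport("N", 35, 1.0), VehicleReport("E", 120, 14.0)):
    ctrl.receive(rep)
print("next green phase:", ctrl.plan_phase())   # -> 'NS'
```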
PDF Version

Issue 9, Pages: 135-143
DOI: https://doi.org/10.30970/eli.9.135
DEVELOPMENT OF PHASE-SHIFTING DEVICE FOR IMPLEMENTATION OF THREE-STEP INTERFEROMETRIC METHOD WITH ARBITRARY PHASE SHIFTS OF REFERENCE WAVE
L. Muravsky, A. Drymalyk, G. Gaskevych, . Stasyshyn
Two basic approaches to implementing stepwise and smooth phase shifts of a wavefront in a two-beam optical interferometer are considered, and the advantages and drawbacks of both procedures are analyzed. It is noted that the smooth phase shift procedure removes the oscillations of the phase shifting element (PSE) that occur during the stepwise procedure. A phase-shifting device is developed for realizing the three-step phase shifting interferometry (PSI) method with two arbitrary phase shifts of the reference wave within the angular interval $(0, \pi)$. This device does not need a calibration procedure, because the three-step PSI method allows any phase shift angle to be determined by calculating the correlation coefficient between two recorded interferograms. The correlation coefficient can be considered as a normalized scalar product of two interferograms, represented by centered multidimensional vectors, or as the cosine of the phase shift angle between these vectors. Therefore, the developed device is much simpler and cheaper than its calibrated prototypes. It contains a PSE consisting of a piezoelectric transducer (PZT) and a mirror attached to the PZT, and an electronic unit for smooth linear motion of the mirror in the reference beam of the interferometer. The basic characteristics of the developed electronic unit are considered. The operating principle of the electronic unit of the phase-shifting device is to provide a smoothly rising supply voltage that initiates a smooth change of the PZT dimensions; therefore, the mirror attached to the PZT also moves smoothly. The PSE is located in the reference beam of the two-beam interferometer. If the reference beam strikes the mirror perpendicularly to the mirror plane, the beam wavefront is shifted by a distance twice the displacement of the mirror. To verify the reliability of the developed phase-shifting device, we have built an experimental setup, based on the Twyman-Green interferometer architecture, for the formation, recording and processing of interferograms of test surfaces. In this setup, the three-step PSI method with arbitrary phase shifts of the reference wave is carried out by recording three interferograms during the smooth phase shift of the mirror attached to the PZT. A comparative analysis of the stepwise and smooth phase shift procedures for recording interferograms in the experimental setup has shown that their modulation transfer functions (MTFs) are similar for small interferogram exposures. In particular, it is shown that for small interferogram exposure times, whose ratio to the voltage rise time over the angular interval $(0, \pi)$ does not exceed 0.06, the MTF of an interferogram recorded with the smooth phase shift of the reference beam does not differ from the interferogram MTF obtained with the stepwise phase shift.
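The key relation stated above, that the correlation coefficient of two centered interferograms equals the cosine of the phase-shift angle, is easy to demonstrate numerically; the synthetic fringe pattern below is made up for the test, and the recovered angle is only approximate because the fringe count is finite.

```python
import numpy as np

def phase_shift_from_interferograms(i1, i2):
    """Estimate the unknown phase-shift angle as the arccosine of the correlation
    coefficient (normalized scalar product of the two centered interferograms)."""
    a = i1 - i1.mean()
    b = i2 - i2.mean()
    corr = np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b))
    return np.arccos(np.clip(corr, -1.0, 1.0))

# synthetic check: two fringe patterns of the same test surface shifted by 1.1 rad
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
phase = 40 * x + 5 * (x - 0.5) ** 2            # arbitrary object phase, rad
I1 = 1 + np.cos(phase)
I2 = 1 + np.cos(phase + 1.1)
print(round(phase_shift_from_interferograms(I1, I2), 3))   # ~1.1 rad
```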
PDF Version

Issue 9, Pages: 144-149
DOI: https://doi.org/10.30970/eli.9.144
DEVELOPMENT OF AN EDDY-CURRENT FLAW DETECTOR WITH A U-TYPE MAGNETIC CIRCUIT FOR COMPENSATION OF EXTERNAL INTERFERENCE
D. Trushakov, S. Rendzinyak, O. Kozlovskyi
The features of the interaction of an eddy-current transducer with a ferromagnetic conducting medium during flaw detection of ferromagnetic components and pieces of equipment are considered, taking into account the influence of external factors such as temperature, electromagnetic interference, etc. Existing approaches do not give a clear answer to the question of how to improve not only the accuracy of measurements but also the ability to determine the nature of the defects of the controlled sample. The goal of the research is to increase the accuracy of eddy-current testing by compensating external interference. An improvement of the measuring system of a resonance eddy-current flaw detector is proposed, with a transformer coupling, two connected measuring oscillatory circuits and a differential sensor having two measuring coil blocks. These design changes have practically no effect on the ability of the eddy-current flaw detector to detect defects of different types and do not reduce its sensitivity to the anisotropy of the properties of the controlled sample or to the gap between the transducer sensor and the controlled product. This is particularly important for eddy-current flaw detectors based on the resonance method of detuning from the gap effect, in which the resonance frequency and the quality factor of the measuring oscillatory circuit are monitored. From the results of the research, the dependence of the eddy-current flux on the magnetic flux of the primary coil and on the properties of the controlled sample was established. It is shown that the transformer transducer is more resistant to external factors, since the measuring coils are connected back-to-back and the EMFs caused by external factors mutually compensate.
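As a rough illustration of the quantities being monitored (resonance frequency and quality factor of the measuring circuit), the sketch below models the tested sample as a reflected impedance coupled into a series resonant circuit; the circuit values and the coupling model are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

# Illustrative circuit values (not from the paper): measuring coil L1, R1 tuned by C,
# coupled to eddy-current loops in the sample modelled as a shorted secondary L2, R2.
L1, R1, C = 1e-3, 5.0, 10e-9
L2, R2 = 0.2e-3, 2.0

def circuit_response(k):
    """Resonance frequency and quality factor of the measuring circuit for coupling k.
    The sample is represented by the reflected impedance (w*M)^2 / (R2 + j*w*L2)."""
    M = k * np.sqrt(L1 * L2)
    w = 2 * np.pi * np.logspace(4, 6, 200000)          # 10 kHz .. 1 MHz
    z_refl = (w * M) ** 2 / (R2 + 1j * w * L2)
    z_total = R1 + 1j * w * L1 + z_refl + 1 / (1j * w * C)   # series resonant circuit
    i = np.abs(1.0 / z_total)                          # current for 1 V excitation
    w0 = w[np.argmax(i)]
    half = i >= i.max() / np.sqrt(2)
    bandwidth = w[half][-1] - w[half][0]
    return w0 / (2 * np.pi), w0 / bandwidth            # f0 (Hz), Q

for k in (0.0, 0.2, 0.4):                              # stronger coupling to the sample
    f0, q = circuit_response(k)
    print(f"k={k:.1f}: f0={f0/1e3:.1f} kHz, Q={q:.1f}")
```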
PDF Version

Issue 9, Pages: 150-163
DOI: https://doi.org/10.30970/eli.9.150
INFLUENCE OF LOW-TEMPERATURE ANNEALING IN VACUUM ON PHYSICAL PROPERTIES OF SINGLE CRYSTAL $n$-$\mathrm{Cd_xHg_{1-x}Te}$ $(x \approx 0.19)$
V. Belyukh, B. Pavlyk
One of the outstanding problems arising in the growth of $\mathrm{Cd_xHg_{1-x}Te}$ crystals and in the fabrication of devices based on them is the thermodynamic stability of fairly pure single-crystal $n$-$\mathrm{Cd_xHg_{1-x}Te}$ $(x \approx 0.19-0.21)$. Our research is directed at the further study of the thermodynamic stability of $n$-$\mathrm{Cd_xHg_{1-x}Te}$ and continues our experiments on the low-temperature annealing of this alloy system. The $\mathrm{Cd_xHg_{1-x}Te}$ crystals were grown from basic components of 6N or 7N purity class and were additionally doped with indium from the melt, so that a lightly doped n-type material [$N_D-N_A \approx (1-4)\times 10^{14}\ \mathrm{cm^{-3}}$] was obtained. The $\mathrm{Cd_xHg_{1-x}Te}$ samples used in this investigation had the composition $x \approx 0.19$. Samples prepared in the classical Hall configuration were annealed in vacuum ($10^{-3}$ torr) at $T = 373$ K. The influence of low-temperature annealing in vacuum on the physical properties of $n$-$\mathrm{Cd_xHg_{1-x}Te}$ was studied on the basis of measurements of the Hall coefficient $R_H$ and conductivity $\sigma$ versus temperature, performed with an automated dc system (CAMAC standard) in the temperature range 80-273 K. Already after the first stage of annealing, lasting 4 hours, a clear inversion of the conduction type ($n$-type to $p$-type) in the $n$-$\mathrm{Cd_xHg_{1-x}Te}$ sample $(x \approx 0.19)$ was attained, and after the next stage of annealing (annealing time 4 hours) we recorded a clear change of sign of the Hall coefficient in the temperature dependence $R_H(1/T)$. A theoretical analysis of the obtained experimental data was carried out on the basis of models for $n$- and $p$-$\mathrm{Cd_xHg_{1-x}Te}$. This allowed us to estimate the acceptor concentration $N_A$ and donor concentration $N_D$, the electron/hole mobility ratio $b=\mu_e/\mu_h$, and the acceptor binding energy $\varepsilon_A$. The simulation results showed that, in the case of annealing in vacuum, the reason for the inversion of the conductivity type is the thermal generation of acceptor-type defects (Hg vacancies). The conclusion that passivation of the surface of this material with its high mobility of charge carriers is necessary to maintain the stability of its physical parameters was fully confirmed.
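The sign reversal of $R_H$ reported above is the classic mixed-conduction effect; the sketch below evaluates the two-carrier weak-field Hall coefficient with illustrative mobilities and concentrations (not the values fitted in the paper) to show how a modest electron population flips the sign when the mobility ratio $b=\mu_e/\mu_h$ is large.

```python
E_CHARGE = 1.602e-19   # C

def hall_coefficient(n, p, mu_e, mu_h):
    """Mixed-conduction (two-carrier) Hall coefficient in the weak-field limit."""
    return (p * mu_h**2 - n * mu_e**2) / (E_CHARGE * (n * mu_e + p * mu_h) ** 2)

# Illustrative numbers only: a large electron/hole mobility ratio b = mu_e/mu_h,
# typical of a narrow-gap semiconductor, makes R_H change sign as soon as a small
# electron concentration appears on top of acceptor-generated holes.
mu_e, mu_h = 10.0, 0.05            # m^2/(V*s), b = 200
p_holes = 2e21                     # holes from acceptor-type defects, m^-3
for n_electrons in (1e15, 1e16, 1e17, 1e18):
    r_h = hall_coefficient(n_electrons, p_holes, mu_e, mu_h)
    print(f"n = {n_electrons:.0e} m^-3 -> R_H = {r_h:+.2e} m^3/C")
```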
PDF Version

Issue 9, Pages: 164-171
DOI: https://doi.org/10.30970/eli.9.164
X-RAY LUMINESCENCE OF $\mathrm{Tl_4CdI_6}$ CRYSTALS
M. Solovyov, O. Futey, V. Franiv, V. Solovyov, V. Stakhura, A. Franiv, A. Kashuba
We report on the study of low-temperature (80-130 K) X-ray luminescence spectra of $\mathrm{Tl_4CdI_6}$ crystals. The temperature behavior of the studied X-ray luminescence spectra and the identification of their maxima are presented. The recombination mechanisms associated with the natural anisotropy of the compounds under study are discussed, and the possibility of their practical application is analyzed.
PDF Version


© Ivan Franko National University of Lviv, 2011

Developed and supported by the Laboratory of high performance computing systems