
Authors

Dli Maxim I.

Degree
Dr. Sci. (Eng.), Professor, Information Technologies in Economics and Management Department, Branch of the National Research University “MPEI” in Smolensk, Smolensk; Leading Researcher, Synergy University
E-mail
midli@mail.ru
Location
Smolensk, Russia
Articles

Constructing integrated model for risk management of metallurgical enterprise

A risk management method for a metallurgical enterprise is proposed. The specific features of its activities are considered, and the risk management process is decomposed so that every task is mathematically described and assigned to one of the steps resulting from the decomposition.

Method for intellectual management of industrial enterprise information resources

A method for managing the information resources of an industrial enterprise that describes the separate components of a control system is considered. The method is based on a set of interconnected mathematical models. The models presented incorporate modified methods of graph theory, fuzzy logic and cognitive modeling.

Information and transport networks projects management under uncertainty

The article deals with the problem of managing projects for the development of enterprise information and transport networks. A formalized statement of the problem is presented, as well as a modification of the ant colony algorithm that uses fuzzy logic and fuzzy production rules to take account of the uncertainty of demand at different nodes.
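The ant colony idea behind this abstract can be sketched in a few lines: ants repeatedly walk a network, choosing edges by pheromone level and inverse length, and pheromone concentrates on good routes. This is only a minimal illustration of the base algorithm; the graph, parameters and the fuzzy demand rules of the article are not reproduced here, and all values are hypothetical.

```python
import random

# Tiny illustrative ant-colony step for route selection in a network.
# Graph: adjacency dict with edge lengths (hypothetical values).
graph = {
    'A': {'B': 2.0, 'C': 5.0},
    'B': {'D': 2.0},
    'C': {'D': 1.0},
    'D': {},
}

pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
ALPHA, BETA, RHO = 1.0, 2.0, 0.5  # pheromone weight, heuristic weight, evaporation

def run_ant(start, goal):
    """Walk from start to goal, choosing edges by pheromone and 1/length."""
    path, node = [], start
    while node != goal:
        edges = list(graph[node].items())
        weights = [pheromone[(node, v)] ** ALPHA * (1.0 / d) ** BETA
                   for v, d in edges]
        total = sum(weights)
        v = random.choices([v for v, _ in edges],
                           [w / total for w in weights])[0]
        path.append((node, v))
        node = v
    return path

def update_pheromone(paths):
    for key in pheromone:
        pheromone[key] *= (1.0 - RHO)          # evaporation
    for path in paths:
        length = sum(graph[u][v] for u, v in path)
        for edge in path:
            pheromone[edge] += 1.0 / length    # deposit, stronger on short paths

random.seed(1)
for _ in range(30):                            # 30 generations of 10 ants
    update_pheromone([run_ant('A', 'D') for _ in range(10)])
```

After a few generations the pheromone concentrates on the shorter route A-B-D; in the article's setting the edge attractiveness would additionally be modulated by fuzzy production rules describing demand at the nodes.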

A three-level fuzzy cognitive model for region innovation development analysis

The necessity of using cognitive maps to simulate the innovative development of a region is substantiated. The main modeling innovation lies in fuzzy cognitive maps: a new kind of fuzzy cognitive map incorporating the uncertainty and variability of system performance is elaborated.
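The core mechanics of a fuzzy cognitive map can be shown in a short sketch: concepts hold activation levels in [0, 1], and each iteration propagates activations through a signed weight matrix and a squashing function. The concepts and weights below are hypothetical, not the article's three-level regional model.

```python
import numpy as np

# Illustrative fuzzy cognitive map update (hypothetical concepts and weights).
# Concepts: innovation funding, R&D activity, regional output.
W = np.array([
    [0.0, 0.0, 0.3],   # funding is reinforced by regional output
    [0.7, 0.0, 0.0],   # R&D activity is driven by funding
    [0.0, 0.6, 0.0],   # regional output is driven by R&D activity
])

def step(x, W):
    """One FCM iteration: new activation = squashed weighted sum of causes."""
    return 1.0 / (1.0 + np.exp(-W @ x))        # logistic squashing to (0, 1)

x = np.array([0.8, 0.1, 0.1])                  # initial concept activations
for _ in range(50):                            # iterate toward a fixed point
    x = step(x, W)
```

With small weights the iteration is a contraction, so the map settles into a steady activation pattern that can be read as the long-run mutual influence of the concepts.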

The trajectory estimation model for project management in creation and organization of high-technology industrial products production

Projects for creating and organizing the production of high-technology industrial products include the interrelated tasks (activities) of envisioning, planning, design and development. Such projects have several specific features: different structural relationships between activities, a high level of information uncertainty and a large number of controlled parameters. The number of these parameters depends on external factors and on the internal connections of the project. These peculiarities make it necessary to modify widely practiced formal methods and models of project management. The article describes the requirements for project models on which the proposed method of model creation is based. The method includes the following stages: decomposing the project into subprojects; creating network models of the subprojects; creating a model that consists of activities belonging to different subprojects and having common input/output connections; and identifying the project goals with the help of indicators. The indicators may be of different types: quantitative (point and interval estimates) and qualitative. The indicators for each goal-oriented project state are integrated with the proposed algorithm and form the project trajectory. The considered model makes it possible to estimate the project trajectory at various points in time. As a result, project management becomes stable under uncertainty.
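The idea of integrating point, interval and qualitative indicators into one trajectory point can be sketched as below. The indicator names, the qualitative scale and the weights are hypothetical illustrations, not the article's algorithm.

```python
# Illustrative aggregation of mixed project indicators into one trajectory
# point (names, scale and weights are hypothetical, not from the article).
QUAL_SCALE = {'low': 0.25, 'medium': 0.5, 'high': 0.75}

def to_score(value):
    """Map a point, interval or qualitative indicator to a [0, 1] score."""
    if isinstance(value, tuple):               # interval estimate -> midpoint
        return (value[0] + value[1]) / 2.0
    if isinstance(value, str):                 # qualitative estimate
        return QUAL_SCALE[value]
    return float(value)                        # point estimate, already in [0, 1]

def trajectory_point(indicators, weights):
    """Weighted aggregate of all indicator scores at one time point."""
    total = sum(weights.values())
    return sum(weights[k] * to_score(v) for k, v in indicators.items()) / total

# Project state at one checkpoint: schedule is a point estimate, budget an
# interval, quality a qualitative grade.
state = {'schedule': 0.8, 'budget': (0.5, 0.7), 'quality': 'high'}
point = trajectory_point(state, {'schedule': 2.0, 'budget': 1.0, 'quality': 1.0})
```

Repeating this aggregation at successive checkpoints yields a sequence of scores, i.e. a trajectory that can be compared against goal-oriented project states.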

Simulation modeling and fuzzy logic in real-time decision-making of airport services

Decision making by the aircraft services of an international airport that handles intensive aircraft traffic and ground handling has become a very topical issue. It was once believed that this intensity is limited only by the number of runways, but nowadays a large accumulation of aircraft on the airport apron creates difficulties no less complex than aircraft take-offs and landings. Solving such problems with the "crisp" methods of queuing theory yields little. This article deals with modern "fuzzy" methods based on simulation modeling and fuzzy logic.

Outsource-using integration methods of business entities information systems

Today the performance of business entities is inextricably linked with the sharing of information resources, a problem that can be solved by information systems integration. Since IT is often not a company's core function, integration is carried out through IT outsourcing, and the choice of architecture patterns and the management of the development project become key. This article suggests information systems integration architecture patterns relevant to modern conditions. For effective project management, a model of the interrelation between information system life cycle stages and development project stages is suggested that takes the influence of information risks into account. A decision support system architecture is suggested for minimizing information risks during the development project. The practical use of the results is expected in the management of business entities' information system integration projects.

Economical information system lifecycle management based on decentralized application theory

Effective business processes become a competitive edge of an organization and enable it to respond to changes in the external environment in a timely manner. Research and development (innovation) is the key business process providing the basic value of the organization's products and services, and information system lifecycle management is one of its components. The goal of this work is to improve the efficiency of software development and maintenance projects by reducing transaction costs. The article suggests an economical information system lifecycle management model based on decentralized application theory. The model aims to reduce the cost of information search by securely storing the lifecycle process outputs and project documentation versions; to reduce the cost of coordination by automating the verification of lifecycle process outputs; and to reduce the cost of contracting by using self-executing smart contracts, eliminating the need to establish "trust" relations between parties to the lifecycle. The practical use of the results is expected in developing information system lifecycle management tools.
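The tamper-evident storage of lifecycle outputs that decentralized applications provide can be illustrated with a plain hash chain: each record carries the hash of the previous one, so any later edit is detectable. This is a minimal sketch of the principle, not the article's blockchain or smart-contract implementation.

```python
import hashlib
import json

# Illustrative tamper-evident log of lifecycle outputs: each record stores the
# hash of the previous record, so any edit invalidates the rest of the chain.
def add_record(chain, payload):
    prev_hash = chain[-1]['hash'] if chain else '0' * 64
    body = json.dumps({'payload': payload, 'prev': prev_hash}, sort_keys=True)
    chain.append({'payload': payload, 'prev': prev_hash,
                  'hash': hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    prev_hash = '0' * 64
    for rec in chain:
        body = json.dumps({'payload': rec['payload'], 'prev': prev_hash},
                          sort_keys=True)
        if rec['prev'] != prev_hash or \
           rec['hash'] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = rec['hash']
    return True

log = []
add_record(log, 'requirements v1 approved')
add_record(log, 'design document v2 published')
ok_before = verify(log)
log[0]['payload'] = 'requirements v1 rejected'   # simulate tampering
ok_after = verify(log)
```

A smart contract would perform the `verify` step automatically before releasing a payment or advancing the lifecycle stage, which is exactly the coordination cost the abstract says is being reduced.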

Formation of the structure of the intellectual system of analyzing and rubricating unstructured text information in different situations

The analysis of electronic text documents written in natural language is one of the most important tasks implemented in systems for the automated analysis of linguistic information. Today the most complicated problem is analyzing unstructured text documents coming to various organizations and authorities through electronic communications. The increasing volume of such documents leads to the need to rubricate incoming messages, i.e. to solve a classification task. The analysis of scientific works in this field has shown the impossibility of constructing a unified model for rubricating unstructured electronic text documents in various situations. The main reasons are the lack of statistical data, the dynamism of the thesaurus and the small size of the incoming documents. To solve this problem, we propose a multimodel approach to rubrication characterized by the combined use of intellectual and probabilistic-statistical methods of text document analysis. The choice of a specific model is carried out using fuzzy logic algorithms based on the proposed characteristics (the size of the document, the degree of rubric thesaurus intersection, the frequency of meaningful keywords, etc.). The implementation of the proposed multimodel approach will improve the accuracy of attributing unstructured electronic text documents to concrete rubrics, taking into account their specificity and the various objectives of practical application in the organization.
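A fuzzy choice among candidate models driven by document characteristics can be sketched with triangular membership functions over one such characteristic, document size. The breakpoints and model names below are hypothetical stand-ins for the article's fuzzy logic algorithms.

```python
# Illustrative fuzzy choice between rubrication models by document size
# (membership breakpoints and model names are hypothetical).
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def choose_model(n_words):
    degrees = {
        'fuzzy decision tree': tri(n_words, -1, 0, 120),       # short appeals
        'probabilistic model': tri(n_words, 80, 250, 500),     # medium documents
        'neural classifier':   tri(n_words, 400, 1500, 10**9)  # long, data-rich
    }
    return max(degrees, key=degrees.get)

model = choose_model(60)
```

A full system would combine several such characteristics (thesaurus intersection, keyword frequency) in a rule base rather than a single variable, but the selection principle is the same: the model with the highest firing strength wins.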

Developing the economic information system for automated analysis of unstructured text documents

A study of the tasks and methods of automated text rubrication was conducted, and their prospects for the analysis of unstructured electronic text documents were evaluated taking into account the peculiarities of appeals received by the authorities from citizens. The architecture of an information system for the automated analysis of such documents is developed. It implements the proposed multi-model approach to rubrication based on the integrated use of intelligent and probabilistic-statistical methods. The procedure for processing citizens' appeals received by the authorities using the document management system and the developed information system is given.

Algorithms for the formation of images of the states of objects for their analysis by deep neural networks

Algorithms for visualizing numerical data that characterize the state of objects and systems of various nature, with the aim of finding hidden patterns in them using convolutional neural networks, are presented. The algorithms use methods for obtaining images from numerical data on the basis of the discrete Fourier transform of time series fragments, as well as visualization using three-component system diagrams when such a three-component representation of the system is possible. The software implementation of the proposed algorithms was performed in the Linux environment in Python 3 using the Keras open neural network library, which runs on top of the TensorFlow machine learning framework. For training the neural network, an Nvidia graphics processor supporting the CUDA parallel computing architecture was used, which significantly reduced the training time. The proposed approach to recognizing the states of objects from their visualized data is based on recognizing not the boundaries or shapes of the figures in the images but their textures. Also presented is a program that generates sets of images for training and testing convolutional neural networks in order to pre-tune them and assess the quality of the proposed algorithms.
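The image-forming step, turning a time series into a 2-D array of DFT magnitudes that a CNN can consume, can be sketched with NumPy alone. The fragment length and normalization below are illustrative choices, not the article's exact pipeline.

```python
import numpy as np

# Illustrative conversion of a 1-D signal into a 2-D "image" of DFT magnitudes
# of its fragments, so a CNN can look for hidden patterns (sizes are arbitrary).
def series_to_image(series, fragment_len):
    n_fragments = len(series) // fragment_len
    rows = []
    for i in range(n_fragments):
        fragment = series[i * fragment_len:(i + 1) * fragment_len]
        spectrum = np.abs(np.fft.rfft(fragment))    # magnitude spectrum
        rows.append(spectrum)
    image = np.array(rows)
    return image / image.max()                       # normalize to [0, 1]

t = np.arange(1024)
signal = np.sin(0.2 * t) + 0.1 * np.random.default_rng(0).standard_normal(1024)
img = series_to_image(signal, 64)
```

Each row of `img` is the spectrum of one fragment, so slowly drifting spectral content shows up as vertical texture, which matches the abstract's point that the networks key on textures rather than shapes.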

Using fuzzy decision trees to rubricate unstructured small-sized text documents

Every day, a large number of appeals (statements, proposals or complaints) submitted in unstructured text form are received through the Internet portals and e-mail of public authorities. The quality and speed of automatic processing of such electronic messages directly depend on the correctness of their classification (rubrication), which consists in assigning the received message to one or several thematic rubrics that determine the directions of the departments' work. The choice of a mathematical approach to analysis and rubrication directly depends on the characteristics of incoming appeals. The analysis of their specifics (small size, the presence of errors, a free style of problem statement, etc.) has revealed the impossibility of using classical approaches to the classification of text documents. The article suggests using the apparatus of fuzzy decision trees for rubricating small unstructured text documents arriving at the Internet portals and e-mail of public authorities. It allows classification under conditions of rubric intersection and a lack of the statistical information needed to apply probabilistic and neural network methods. The proposed document rubrication model is distinguished by its consideration of the syntactic relationships and roles of words in sentences, based on a binary fuzzy decision tree. The tree is constructed from the results of analyzing the degree of rubric thesaurus intersection and the distances between rubrics in an n-dimensional feature space.
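The distinctive property of a fuzzy decision tree, that a document descends both branches of every split with complementary degrees and so reaches every leaf with some membership, can be sketched as follows. The features, thresholds and rubrics are hypothetical; the article's tree is built from thesaurus-intersection analysis rather than hand-set splits.

```python
import math

# Illustrative binary fuzzy decision tree: every internal node splits on one
# text feature with a soft (sigmoid) membership, and a document reaches all
# leaves with some degree. Features and rubrics are hypothetical.
def soft_split(x, threshold, width=1.0):
    """Degree to which x is 'high' relative to threshold, in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-(x - threshold) / width))

# (feature, threshold, low subtree, high subtree); leaves are rubric names.
TREE = ('housing_terms', 3,
        ('transport_terms', 2, 'other', 'transport'),
        'housing')

def classify(doc, node, degree=1.0, scores=None):
    scores = {} if scores is None else scores
    if isinstance(node, str):                         # leaf: accumulate degree
        scores[node] = scores.get(node, 0.0) + degree
        return scores
    feature, threshold, low, high = node
    mu = soft_split(doc.get(feature, 0), threshold)
    classify(doc, high, degree * mu, scores)          # 'high' branch
    classify(doc, low, degree * (1.0 - mu), scores)   # complementary branch
    return scores

doc = {'housing_terms': 6, 'transport_terms': 0}
scores = classify(doc, TREE)
best = max(scores, key=scores.get)
```

Because the two branch degrees always sum to the incoming degree, the leaf scores form a partition of unity, which makes the output directly usable as membership degrees over intersecting rubrics.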

Analysis of the influence of the architecture of the input layers of convolution and subsampling of a deep neural network on the quality of image recognition

The results of a study of the influence of the characteristics of the convolution and subsampling layers at the input of a deep convolutional neural network on the quality of pattern recognition are presented. For the convolution layer, the variable parameter was the size of the convolution kernel; for the subsampling layer, it was the size of the receptive field, which determines which region of the input features is processed to form the layer's output. All the listed parameters defining the architecture of the input convolution and subsampling layers have to be selected by neural network developers based on their experience and known good practice. This choice is influenced by a preliminary analysis of the parameters of the processed images: image size, number of color channels, and the features that determine the classification of recognizable objects into different classes (recognition by silhouette, by texture, and so on). To take these factors into account when creating the architecture of the input convolution and subsampling layers, it is proposed to use numerical characteristics calculated from the analysis of histograms of the input images and the variances of pixel color intensity. Histograms are constructed both for the entire image and for its fragments, and the total variance and the local variances of the fragments are calculated and compared. Based on these comparisons, recommendations were developed for choosing the size of the convolution kernel, which reduces the time needed to search for a suitable neural network architecture. The influence of the above parameters on the quality of image recognition by a convolutional neural network was studied experimentally, using a network created in Python with the Keras and TensorFlow libraries. To visualize and control the learning process of the neural network, the cross-platform TensorBoard tool was used. Network training was carried out on an Nvidia GeForce GTX 1060 GPU supporting the CUDA parallel computing architecture.
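The local-versus-total variance comparison can be sketched with NumPy: split the image into fragments, count how many fragments carry variance comparable to the whole image, and map that to a kernel size. The fragment size, thresholds and the two candidate kernel sizes are hypothetical, not the article's recommendations.

```python
import numpy as np

# Illustrative heuristic: compare local fragment variances with the total image
# variance and pick a larger convolution kernel for smooth, low-detail images
# (fragment size and thresholds are hypothetical, not from the article).
def suggest_kernel(image, fragment=8):
    h, w = image.shape
    total_var = image.var()
    local_vars = [
        image[i:i + fragment, j:j + fragment].var()
        for i in range(0, h - fragment + 1, fragment)
        for j in range(0, w - fragment + 1, fragment)
    ]
    # Share of fragments whose variance is comparable to the total variance:
    detail = np.mean([v > 0.5 * total_var for v in local_vars])
    return 3 if detail > 0.3 else 7        # fine detail -> small kernel

rng = np.random.default_rng(0)
noisy = rng.standard_normal((64, 64))                  # detail everywhere
smooth = np.outer(np.linspace(0, 1, 64), np.ones(64))  # gentle gradient
k_noisy, k_smooth = suggest_kernel(noisy), suggest_kernel(smooth)
```

The intuition matches the abstract: when detail is spread across many fragments, a small kernel preserves it; when variance lives only at the image scale, a larger kernel is cheaper without losing information.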

Rubrication of text documents based on fuzzy difference relations

One of the key areas of informatization of public authorities is developing and implementing systems for the automated processing of electronic appeals (applications, complaints, suggestions) of individuals and legal entities that arrive on official government websites and portals. Rubrication plays an important role in solving this problem. It consists in distributing the appeals among thematic rubrics that determine the activity areas of the departments processing them and preparing the corresponding responses. The results of the analysis of the specific features of such text messages (small size, lack of markup, presence of errors, thesaurus unsteadiness, etc.) confirmed the impossibility of using traditional approaches to rubrication and justified the feasibility of using data mining methods. The article proposes a new approach to the analysis and rubrication of electronic unstructured text documents arriving on official websites and portals of public authorities. It involves the formation of a tree-like structure of the rubric field based on fuzzy relations of difference between the syntactic characteristics of documents. The analysis is based on determining the fuzzy correspondence of these documents, by their syntactic characteristics, to the values of the cluster centers; it is carried out sequentially from the root to the leaves of the constructed fuzzy decision tree. The proposed rubrication method has been implemented in software and tested in the automated processing and analysis of appeals (applications, complaints and suggestions) of citizens received by the Administration of Smolensk Region. This made it possible to ensure prompt and high-quality updating of rubrics and document analysis under conditions of a non-stationary thesaurus and changing importance of rubric words.

Rubrication of text information based on the voting of intellectual classifiers

The practical implementation of the concept of electronic government is one of the priorities of Russian state policy. The organization of effective interaction between authorities and citizens is an important element of this concept. In addition to providing public services, it should include the processing of electronic appeals (applications, complaints, suggestions, etc.). Research has shown that the speed and efficiency of appeal processing largely depend on the quality of determining the thematic rubric, i.e. on solving the rubrication task. The analysis of citizens' appeals received by the e-mail and official websites of public authorities has revealed several specific features (small size, errors in the text, free presentation style, description of several problems) that prevent the successful application of traditional approaches to their rubrication. To solve this problem, it has been proposed to use various methods of intellectual analysis of unstructured text data (in particular, fuzzy logical algorithms, fuzzy decision trees, fuzzy pyramidal networks, neuro-fuzzy classifiers, and convolutional and recurrent neural networks). The article describes the conditions of applicability of six intellectual classifiers proposed for rubricating electronic citizens' appeals. They are based on such factors as the size of the document, the degree of intersection of thematic rubrics, the dynamics of their thesauruses, and the amount of accumulated statistical information. For situations where a specific model cannot make an unambiguous choice of thematic rubric, it is proposed to use a classifier voting method, which can significantly reduce the probability of rubrication errors through the weighted aggregation of the solutions obtained by several models selected using fuzzy inference.
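The weighted-aggregation step of classifier voting can be sketched in a few lines: each model reports a rubric and a confidence, and votes are summed with per-model weights. The model names, weights and rubrics are hypothetical illustrations, not the article's six classifiers.

```python
# Illustrative weighted voting of several text classifiers: each model returns
# a rubric and a confidence; votes are aggregated with per-model weights
# (models, weights and rubrics here are hypothetical).
def vote(predictions, weights):
    """predictions: {model: (rubric, confidence)}; returns winner and totals."""
    totals = {}
    for model, (rubric, confidence) in predictions.items():
        totals[rubric] = totals.get(rubric, 0.0) + weights[model] * confidence
    return max(totals, key=totals.get), totals

preds = {
    'fuzzy_tree':    ('housing', 0.6),
    'fuzzy_network': ('transport', 0.9),
    'cnn':           ('housing', 0.7),
}
weights = {'fuzzy_tree': 1.0, 'fuzzy_network': 0.8, 'cnn': 1.2}
winner, totals = vote(preds, weights)
```

Here two moderately confident models outvote one highly confident model because of their combined weight, which is exactly the error-smoothing effect the abstract attributes to voting.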

Creation of a chemical-technological system digital twin using the Python language

Currently, when modeling complex technological processes in cyber-physical systems, procedures for creating so-called "digital twins" (DT) have become widespread. DT are virtual copies of real objects that reflect their main properties at various stages of the life cycle. The use of digital twins allows real-time monitoring of the current state of the simulated system and also provides additional opportunities for engineering and deeper customization of its components to improve product quality. The development of digital twin technology is facilitated by the ongoing Fourth Industrial Revolution, which is characterized by the massive introduction of cyber-physical systems into production processes. These systems are based on the latest technologies for data processing and presentation and have a complex structure of information links between their components. When creating digital twins of the elements of such systems, it is advisable to use programming languages that allow visualization of the simulated processes and provide a convenient, well-developed apparatus for working with complex mathematical dependencies. The Python programming language has these characteristics. In the article, a chemical-technological system based on a horizontal-grate machine is considered as an example of a cyber-physical system. This system is designed to implement the process of producing pellets from apatite-nepheline ore mining waste. The article describes various aspects of creating a digital twin of the elements that carry out the chemical-technological drying process, applied to a single pellet. The digital twin is implemented in Python 3.7.5 and provides visualization of the process in the form of a three-dimensional interactive model built with the VPython library. The algorithm of the digital twin software is described, as well as the information system interface, the types of input and output information, and the results of modeling the investigated chemical-technological process. It is shown that the developed digital twin can be used in three versions: independently (Digital Twin Prototype), as an instance of a digital twin (Digital Twin Instance), and as part of a set of digital twins (Digital Twin Aggregate).
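The computational core of a single-pellet drying twin can be illustrated with a simplified lumped-parameter model: moisture relaxes toward an equilibrium value at a temperature-dependent rate, integrated with an explicit Euler step. This is a stand-in sketch, not the article's model; all constants are hypothetical, and the VPython visualization layer is omitted.

```python
# Illustrative lumped-parameter model of drying a single pellet: moisture
# relaxes toward its equilibrium value at a temperature-dependent rate.
# This is a simplified stand-in, not the article's model; all constants
# are hypothetical.
def dry(moisture0, temp_c, minutes, dt=0.1):
    k = 0.01 * (1.0 + 0.02 * (temp_c - 100.0))  # drying rate grows with temp
    m_eq = 0.01                                  # equilibrium moisture fraction
    m, t = moisture0, 0.0
    history = [m]
    while t < minutes:
        m += -k * (m - m_eq) * dt                # explicit Euler step
        t += dt
        history.append(m)
    return history

hist = dry(moisture0=0.15, temp_c=150.0, minutes=60)
```

A twin would drive a model like this with live gas-temperature measurements and render the state (e.g. pellet color by moisture) in the 3-D scene each time step.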

Valery Pavlovich Meshalkin Turns 80

V. P. Meshalkin is the founder of the new scientific direction "Theoretical foundations of engineering, reliability assurance and logistics management of the energy and resource efficiency of chemical-technological systems for the output of high-quality products". The article describes the main scientific achievements of Academician V. P. Meshalkin, a leading scientist in several fields of study, such as the analysis and synthesis of highly reliable energy-saving chemical-technological systems and the management of low-waste production facilities with optimal specific consumption of raw materials, energy, water and structural materials. The main projects currently carried out under the general guidance of Academician of the Russian Academy of Sciences V. P. Meshalkin are presented, including projects on the development of scientific foundations for the rational use of mineral raw materials and methods of engineering and management of energy-efficient, environmentally safe digitalized production facilities for industrial waste processing.

Preliminary assessment of the pragmatic value of information in the classification problem based on deep neural networks

A method is proposed for the preliminary assessment of the pragmatic value of information in the problem of classifying the state of an object based on deep recurrent long short-term memory networks. The purpose of the study is to develop a method for predicting the state of a controlled object while minimizing the number of prognostic parameters used, through a preliminary assessment of the pragmatic value of information. This is an especially urgent task when processing big data, characterized not only by significant volumes of incoming information but also by its arrival rate and variety of formats. Big data is now generated in almost all areas of activity due to the widespread introduction of the Internet of Things. The method is implemented by a two-level scheme for processing input information. At the first level, a Random Forest machine learning algorithm is used, which has significantly fewer adjustable parameters than the recurrent neural network used at the second level for the final, more accurate classification of the state of the controlled object or process. Random Forest was chosen for its ability to assess the importance of variables in regression and classification problems, which is used in determining the pragmatic value of the input information at the first level of the processing scheme. For this purpose, a parameter reflecting this value is selected, and the input variables, ranked by importance, are selected to form training datasets for the recurrent network. The proposed data processing method with a preliminary assessment of the pragmatic value of information is implemented as a MATLAB program and has shown its efficiency in an experiment on model data.
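The first-level filtering idea, ranking input variables by importance and keeping only the informative ones for the second-level network, can be sketched as below. The article uses Random Forest importances; here a cheap class-separability score stands in for them, and the synthetic data is purely illustrative.

```python
import numpy as np

# Illustrative first-level filter: rank input features by a simple class-
# separability score and keep only the top ones for the second-level model.
# The article uses Random Forest importances; this cheap score is a stand-in.
def separability(x, y):
    """|difference of class means| / pooled std for a binary label y."""
    x0, x1 = x[y == 0], x[y == 1]
    pooled = np.sqrt((x0.var() + x1.var()) / 2.0) + 1e-12
    return abs(x0.mean() - x1.mean()) / pooled

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
X = rng.standard_normal((500, 5))
X[:, 2] += 2.0 * y                     # only feature 2 carries the class signal

scores = [separability(X[:, j], y) for j in range(X.shape[1])]
top = int(np.argmax(scores))           # feature kept for the second level
```

Only the top-ranked variables would then be assembled into training sequences for the LSTM, shrinking both the dataset and the network's input dimension.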

A method for classifying mixing devices using deep neural networks with an expanded receptive field

The paper presents the results of research aimed at developing a method and software tools for identifying the class of a mixing device by its resistance coefficient through experimental data processing. Currently, the main methods for studying mixing devices are finite element methods, as well as procedures for estimating turbulent transfer parameters using laser Doppler measurements and chemical methods of sample analysis. These methods require expensive equipment and provide results only for certain types of equipment, which makes it difficult to extend the conclusions to a wider class of devices with different impeller designs. The proposed method involves processing the results of an experiment in which a point light source forming a beam directed vertically upwards is located at the bottom of a container filled with a transparent liquid, and a mixing device with variable rotation frequency is placed in the container. When performing experiments in real conditions, small deviations in the size and location of the mixing device lead to hard-to-predict fluctuations of the funnel surface. The image of a single marker therefore describes a trajectory that is difficult to predict; under certain conditions it can intersect with the trajectories of other markers or be interrupted at the moment the marker is obscured by a stirrer blade passing over it. The resulting marker image is related to changes in the rotational speed of the blades by a rather complex dependence. To identify this dependence, it is proposed to use deep neural networks operating in parallel in two channels. Each channel analyzes the video signal from the surface of the stirred liquid and the time sequence characterizing the change in the rotation speed of the device's blades. Neural networks of different architectures are used in the channels: a convolutional neural network in one channel and a recurrent one in the other. The results of the two data processing channels are aggregated according to the majority rule. The computational novelty of the proposed algorithm lies in the expansion of the receptive field of each network through the mutual conversion of images and time sequences. As a result, each network is trained on a larger amount of data in order to identify hidden regularities. The effectiveness of the method is confirmed by testing a software application developed in the MATLAB environment.

Multilevel algorithms for evaluating and making decisions on the optimal control of an integrated system for processing fine ore raw materials

The results of studies aimed at developing multi-level decision-making algorithms for managing the energy and resource efficiency and the technogenic and environmental safety of a complex multi-stage system for processing fine ore raw materials (MSPFORM) are presented. A distinctive feature of such a system is its multidimensional, multiscale nature, which manifests itself in the presence of two options for implementing the technological processes for processing finely dispersed ore raw materials, the need to take into account the interaction of the aggregates included in the system, and the hierarchy of descriptions of the processes occurring in them: mechanical, thermophysical, hydrodynamic, and physical-chemical. Such a variety of processes makes the research interdisciplinary and complicates the construction of analytical, interconnected mathematical models. This situation motivated the use of artificial intelligence methods, such as deep machine learning and fuzzy logic, to describe and analyze the processes. The scientific component of the research results consists in the developed generalized structure of the MSPFORM, the conceptual basis of multilevel algorithms for evaluating and making decisions on the optimal control of this system, and the proposed composition of the parameters and form of the optimization criterion. The task of the study was to analyze possible options for processing ore raw materials and to develop a concept for constructing the MSPFORM that allows its functioning to be optimized by the criterion of energy and resource efficiency while meeting environmental safety requirements. The application of evolutionary algorithms to the problem of optimizing the MSPFORM by the criterion of minimum energy consumption is announced and its stages are specified. The structure of the block for neuro-fuzzy analysis of information about process parameters in the MSPFORM is presented; it is based on deep recurrent and convolutional neural networks, as well as a fuzzy inference system. The results of a simulation experiment testing the software implementation of this block in the MATLAB environment are presented.

An intelligent model for managing the risks of violation of the characteristics of electromechanical devices in a multi-stage system for processing ore raw materials

The results of studies on the development of the structure of an intelligent model for managing the risks of violating the characteristics of electromechanical devices in a multi-stage system for processing ore raw materials are presented. Such devices are involved in all cycles of the technological process, so assessing this risk for them is an urgent task. A method for assessing such risks is proposed. It is based on estimating the remaining useful life of equipment from characteristics predicted by a deep recurrent neural network, with further generalization of the results of this estimation in a fuzzy inference block. Recurrent neural networks with long short-term memory were used; they are among the most powerful tools for time series regression problems, including predicting values over long intervals. The use of deep neural networks to predict the characteristics of electromechanical devices yielded high prediction accuracy, which made it possible to apply a relatively less accurate recursive least squares method for the iterative process of estimating the remaining useful life of equipment. This approach allows the estimate to be refined continually as new measurements of the characteristics of the electromechanical devices become available. The results of a model experiment with a software implementation of the proposed method, performed in the MATLAB 2021a environment, are presented; they showed the consistency of the program modules and produced a risk assessment consistent with the expected dynamics of its change.
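The iterative estimation step can be illustrated with a standard recursive least squares fit of a linear degradation trend, refined as each new measurement arrives, from which a remaining-useful-life figure follows. The data, trend and failure threshold are synthetic illustrations, not the article's equipment characteristics.

```python
import numpy as np

# Illustrative recursive least squares fit of a linear degradation trend
# x(t) = a + b*t, refined as each new measurement arrives; the remaining
# useful life is the time left until a failure threshold is crossed.
# Data and threshold are synthetic, not from the article.
def rls_update(theta, P, t, x, lam=1.0):
    phi = np.array([1.0, t])                       # regressor [1, t]
    k = P @ phi / (lam + phi @ P @ phi)            # gain vector
    theta = theta + k * (x - phi @ theta)          # correct the estimate
    P = (P - np.outer(k, phi @ P)) / lam           # update covariance
    return theta, P

theta = np.zeros(2)
P = np.eye(2) * 1000.0                             # large initial uncertainty
for t in range(50):                                # true trend: 10 - 0.1 * t
    x = 10.0 - 0.1 * t + 0.01 * np.sin(t)          # measurement with ripple
    theta, P = rls_update(theta, P, float(t), x)

a, b = theta
threshold = 2.0
rul = (threshold - a) / b - 49                     # time left after t = 49
```

Each new measurement tightens the trend estimate without refitting the whole history, which is what makes the constant refinement described in the abstract cheap enough to run online.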

Fuzzy model of a multi-stage chemical-energy-technological system for processing fine ore raw materials

The results of a study aimed at building a software model of a multi-stage integrated system for processing finely dispersed ore raw materials are presented. Such raw materials can include processed waste from mining and processing plants handling apatite-nepheline and other types of ores, which accumulates in large volumes in tailing dumps. These dumps pose a significant environmental threat to the territories adjacent to the plants because of weathering, dust formation, and the penetration of chemical compounds and substances hazardous to human health into the soil and aquifers. Improving existing production processes and developing new technological systems for mining and processing plants, including applying the principles of the circular economy and waste recycling, therefore justifies the relevance of the chosen research area. The proposed program model is based on trainable trees of fuzzy inference systems (blocks) of the first and second types. This approach avoids the excessive complication of fuzzy rule bases that arises when a single fuzzy block is used to build a multi-parameter model of the entire multi-stage system. Using several fuzzy inference blocks that describe the behavior of individual units, configured in accordance with the physical structure of the system, permits relatively simple rule sets for the individual blocks, while jointly tuning their parameters when training the tree of fuzzy blocks achieves high accuracy of the solutions obtained. The novelty of the results lies in the proposed software fuzzy model of an integrated system for processing finely dispersed ore raw materials. The results of a simulation experiment conducted in the MatLab environment with a synthetic data set generated in Simulink are presented; the trained fuzzy model reproduced the parameters and variables of the test part of the synthetic set with good fidelity. Read more...
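The tree-of-blocks idea can be illustrated with a toy example: small fuzzy blocks model individual stages, and a root block aggregates their outputs, mirroring the physical stage structure. For brevity this sketch uses zero-order Sugeno blocks rather than the article's trainable type-1/type-2 systems, and all membership functions, rule consequents, stage names and wiring are illustrative assumptions, not the trained model.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_block(x, rules):
    """Zero-order Sugeno block: rules = [(membership_fn, crisp_output)]."""
    weights = [mf(x) for mf, _ in rules]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * out for w, (_, out) in zip(weights, rules)) / total

# Stage 1 (hypothetical): grinding quality from normalized feed size
grind_rules = [
    (lambda x: tri(x, -0.5, 0.0, 0.6), 0.9),   # fine feed  -> high quality
    (lambda x: tri(x, 0.4, 1.0, 1.5), 0.3),    # coarse feed -> low quality
]
# Stage 2 (hypothetical): flotation recovery from normalized reagent dose
flot_rules = [
    (lambda x: tri(x, -0.5, 0.3, 0.8), 0.8),
    (lambda x: tri(x, 0.6, 1.0, 1.5), 0.4),
]
# Root block aggregates the stage outputs (fed here as their mean)
root_rules = [
    (lambda x: tri(x, -0.5, 0.0, 0.7), 0.2),   # poor stages -> low yield
    (lambda x: tri(x, 0.5, 1.0, 1.5), 0.95),   # good stages -> high yield
]

g = fuzzy_block(0.5, grind_rules)
f = fuzzy_block(0.7, flot_rules)
yield_est = fuzzy_block((g + f) / 2.0, root_rules)
print(round(yield_est, 3))
```

Because each block has only a handful of rules, the overall model stays simple even as stages are added — the property the abstract credits to the tree structure.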

Fuzzy dynamic ontological model for precedent-based decision support in energy-intensive systems management

The article discusses the features of applying the precedent-based approach to managing complex energy-intensive systems, where various energy, technical, environmental and operational indicators must be taken into account along with the uncertainty of many internal and external factors. This results in a large amount of semi-structured information expressed on various scales, which makes the precedent-based approach promising. The proposed fuzzy ontological model for precedent-based decision support is described; it is characterized by dynamic concepts as well as concepts in the form of numerical and linguistic variables on different scales. An algorithm for assessing the proximity of precedents on the basis of the ontological model is proposed; it is distinguished by taking into account the dynamics of changes in the state of the controlled systems. The developed fuzzy inference algorithms for precedent-based decision support allow both linguistic and numerical variables as input characteristics of the fuzzy production model and support various logical connectives between rule antecedents. The software implementing the developed model and algorithms is described. Particular attention is paid to the modified fuzzy inference component, implemented in Python 3.8.7; its user interface is built with the cross-platform graphics library Tkinter. The results of computational experiments with real data from the operation of an energy-intensive system for processing fine ore raw materials, including a conveyor-type roasting machine, are presented. Minimization of the specific total cost of thermal and electrical energy served as the criterion of management-decision effectiveness. The results showed that the proposed model and software achieve an outcome comparable to that obtained with complex analytical dependencies, while reducing time and financial costs. Read more...
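The core of any precedent-based approach is a proximity measure over cases that mix numerical and linguistic variables. The following is a minimal sketch of that idea only: it scores one static case against one precedent with a weighted mean of per-feature similarities, and deliberately omits the article's ontological structure and dynamic aspects. The feature names, scales and weights are invented for illustration.

```python
# Ordinal encoding for linguistic labels (an illustrative assumption)
LEVELS = {"low": 0.0, "medium": 0.5, "high": 1.0}

def feature_sim(a, b, scale=None):
    """Similarity in [0, 1] for one feature, numeric or linguistic."""
    if isinstance(a, str):
        return 1.0 - abs(LEVELS[a] - LEVELS[b])
    return 1.0 - min(abs(a - b) / scale, 1.0)

def case_proximity(current, precedent, spec):
    """Weighted mean of per-feature similarities; spec = {name: (weight, scale)}."""
    total_w = sum(w for w, _ in spec.values())
    s = sum(w * feature_sim(current[k], precedent[k], sc)
            for k, (w, sc) in spec.items())
    return s / total_w

# Hypothetical feature specification for a roasting-machine case base
spec = {
    "temperature": (2.0, 100.0),   # numeric, characteristic scale ~100 deg
    "energy_use":  (1.0, 50.0),    # numeric
    "wear":        (1.0, None),    # linguistic: low / medium / high
}
current   = {"temperature": 640.0, "energy_use": 30.0, "wear": "medium"}
precedent = {"temperature": 600.0, "energy_use": 35.0, "wear": "high"}
print(round(case_proximity(current, precedent, spec), 3))  # 0.65
```

The precedent with the highest proximity to the current situation would then supply the candidate control decision.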

A method for solving the inverse kinematics problem based on reinforcement learning for controlling robotic manipulators

A method for solving the inverse kinematics problem for a three-link robotic manipulator is proposed, based on reinforcement learning, one of the types of machine learning. In the general case, the task is to find the laws of change of the manipulator's generalized coordinates that provide the specified kinematic parameters of the gripping device. In the analytical approach, inverse kinematics is computed from the Denavit–Hartenberg parameters with subsequent numerical matrix calculations. However, given the kinematic redundancy of multi-link manipulators, this approach is labor-intensive and cannot automatically account in real time for changes in the external environment or the specifics of the robot's field of application. An urgent research task is therefore to develop a solution containing a self-learning block that solves the inverse kinematics problem under a changing external environment whose behavior is unknown in advance. The proposed method simulates the process of achieving the control goal (positioning the manipulator's gripping device at a given point in space) by trial and error. At each learning step a reward function reflecting progress toward the goal is calculated and used to control the robot. In the method, the agent is a recurrent artificial neural network, and the environment whose state is observed and assessed is the robotic manipulator. The recurrent network takes the history of the manipulator's movement into account and overcomes the difficulty that different combinations of angles between links can lead to the same point in the workspace. The method was tested on a virtual model of the robot built with the MatLAB Robotics System Toolbox and the Simscape environment, which showed the high efficiency of the proposed solution in terms of the "time – accuracy" criterion. Read more...
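The reward-driven trial-and-error loop described above can be illustrated with a planar three-link arm: the reward is the negative distance between the end effector and the target, and a crude random-perturbation search stands in for the recurrent-network agent of the article. The link lengths, target point and search settings are assumptions for the sketch, not the article's setup.

```python
import math
import random

L1, L2, L3 = 1.0, 0.8, 0.5        # link lengths (illustrative)
target = (1.2, 0.9)               # desired gripper position (illustrative)

def forward(q1, q2, q3):
    """End-effector position of a planar three-link arm."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2) + L3 * math.cos(q1 + q2 + q3)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2) + L3 * math.sin(q1 + q2 + q3)
    return x, y

def reward(q):
    """Closer to the target -> higher reward (negative distance)."""
    x, y = forward(*q)
    return -math.hypot(x - target[0], y - target[1])

random.seed(0)
best = [0.0, 0.0, 0.0]           # initial joint angles
for _ in range(5000):            # trial-and-error exploration
    cand = [b + random.gauss(0.0, 0.2) for b in best]
    if reward(cand) > reward(best):
        best = cand

print(round(-reward(best), 4))   # residual distance to the target
```

Because the arm is redundant, many joint-angle triples reach the same point; the search simply keeps the first ones it finds, whereas the article's recurrent agent uses movement history to resolve this ambiguity.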

A method for predicting bank customer churn based on an ensemble machine learning model

The results of research aimed at developing a method for predicting the churn of commercial bank clients using machine learning models (including deep artificial neural networks) for processing client data, together with software tools implementing this method, are presented. The object of the study is a commercial bank, and the subject is its activities in the B2C segment, which covers commercial interaction between businesses and individuals. The relevance of the chosen area is determined by banks' increased activity in introducing digital services to reduce non-operating costs associated, in particular, with retaining clients, since attracting new clients costs much more than keeping existing ones. The scientific novelty of the results lies in the developed method for predicting client churn and in the algorithm underlying the software that implements it. The proposed ensemble forecasting model is based on three classification algorithms: k-means, random forest and a multilayer perceptron. To aggregate the outputs of the individual models, a trainable tree of Mamdani-type fuzzy inference systems is used. The ensemble model is trained in two stages: first the three classifiers are trained, and then the tree of fuzzy inference systems is trained on the data obtained from their outputs. In the proposed method, the ensemble model implements a static forecast whose results are used in a dynamic forecast performed in two versions: one based on the recursive least squares method and one based on a convolutional neural network. Model experiments on a synthetic dataset taken from the Kaggle website showed that the ensemble model achieves a higher quality of binary classification than each model individually. Read more...
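The two-stage ensemble structure can be sketched in miniature: three base models emit churn scores, and a trainable aggregator merges them into one probability. Here the base models are stubs and the aggregator is a simple logistic combiner standing in for the article's trained tree of Mamdani fuzzy systems; the scores, weights and bias are illustrative assumptions.

```python
import math

def aggregate(scores, weights, bias):
    """Logistic combination of base-model scores -> churn probability."""
    z = bias + sum(w * s for w, s in zip(weights, scores))
    return 1.0 / (1.0 + math.exp(-z))

# Stand-in outputs of the three base models for one client
scores = [0.8,   # distance-based score derived from k-means clustering
          0.7,   # random forest churn probability
          0.9]   # multilayer perceptron churn probability

weights = [1.5, 2.0, 2.0]   # hypothetical aggregator parameters
bias = -3.0                 # learned jointly in the second training stage

p = aggregate(scores, weights, bias)
print(p > 0.5)              # classify as "likely to churn" above 0.5
```

Training would fit `weights` and `bias` (or, in the article, the fuzzy-tree parameters) on the base models' outputs, which is exactly the second stage of the described procedure; the static prediction `p` would then feed the dynamic forecast.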