
Journal archive

№ 5 (101), October 2022

Contents:

Software engineering

Algorithmic efficiency

An original adaptive multi-index clustering algorithm, the "Managed vegetation index", is proposed for the comprehensive assessment of the impact of chemical pollution on forests from satellite photographs. The algorithm is distinguished by an adaptive procedure for forming pixel clusters that represent, across several spectral channels of the image, each type of vegetation state of a forest stand in the zones of chemical pollution of forest tracts, and by a procedure for calculating the weighted average values of complex vegetation indices for each pollution zone, which makes it possible, from these index values, to determine various biological, phytological and physico-chemical states of forest areas. To solve the complex problem of constructing complex indices linked to ecological zones, a simple idea is used: the quality of modeling and forecasting is improved by expanding the amount of information involved. The problem can be solved by statistical analysis of the distribution of pixels whose membership in ecological zones is known in advance. The development of the algorithm is based on the following prerequisites: (1) a linear combination of individual classical vegetation indices of the state of forest areas can form a new specialized complex vegetation index that identifies ecological zones in forest areas according to the levels of impact of chemical pollution from industrial enterprises; (2) specialized complex vegetation indices can be constructed as weighted average linear combinations of classical vegetation indices. With adaptive selection of the weight coefficients, such specialized complex indices can reflect various biological, physico-chemical and ecological characteristics of the state of forests based on clustering of satellite image pixels. As a result of clustering, the proposed algorithm yields more accurate estimates of the total areas of the ecological zones of forest tracts, which can serve as a basis for assessing the degree of ecological degradation of forest tracts and the resulting environmental damage.
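As a rough illustration of the idea (not the authors' implementation), the Python sketch below builds a complex index as a weighted linear combination of two classical indices, clusters pixels into candidate ecological zones, and reports the weighted-average index per zone; the spectral data, index choices and weights are synthetic placeholders.

```python
# Illustrative sketch only: synthetic reflectances, illustrative indices and weights.
import numpy as np
from sklearn.cluster import KMeans

def classical_indices(red, nir):
    """Return a (n_pixels, 2) matrix of classical vegetation indices."""
    ndvi = (nir - red) / (nir + red + 1e-9)   # Normalized Difference Vegetation Index
    dvi = nir - red                           # Difference Vegetation Index
    return np.column_stack([ndvi.ravel(), dvi.ravel()])

def complex_index(indices, weights):
    """Complex index as a weighted linear combination of classical indices."""
    return indices @ weights

# Synthetic reflectances standing in for two spectral channels of a satellite image
rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.4, size=(100, 100))
nir = rng.uniform(0.2, 0.7, size=(100, 100))

X = classical_indices(red, nir)

# Adaptive clustering of pixels into candidate ecological zones
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Weighted-average value of the complex index within each zone
weights = np.array([0.7, 0.3])                # illustrative weight coefficients
ci = complex_index(X, weights)
for zone in range(3):
    mask = labels == zone
    print(f"zone {zone}: area={mask.sum()} px, mean complex index={ci[mask].mean():.3f}")
```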

The oil industry is the leading sector of the Russian economy: it makes the largest contribution to the country's budget, creates a huge number of jobs and fully meets domestic demand for oil and petroleum products. In Russia, crude oil is transported from fields to consumers (primarily refineries) by five modes of transport, of which pipeline transport is the most widespread: it carries 83% of crude oil and 30% of oil products. The most important elements of the pipeline system are tank farms, which are used to collect and store oil at the junctions of pipeline sections and at points of transshipment to other modes of transport. They are classified as especially hazardous industrial facilities and are therefore subject to extremely stringent design and construction requirements. The most important stage in the construction of a tank farm is site selection, which is carried out on the basis of economic criteria and engineering requirements. In order to reduce the number of candidate locations to which a survey party must travel, it is proposed to preselect the most promising territories by solving a multi-criteria optimization problem. The large number of criteria makes heuristic methods necessary, among which swarm optimization algorithms based on modeling the collective behavior of various living organisms are widely used. To solve this problem, it is proposed to use bacterial optimization algorithms, which make it possible to take into account both favorable and negative factors. Fuzzy logic elements can be added to the classical algorithm: it is proposed to set the initial positions of bacteria using fuzzy inference systems whose input parameters are the available statistics and expert assessments. In general, the proposed approach can be used to select sites for the construction of various hazardous industrial facilities for which a large number of parameters must be taken into account.
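The sketch below illustrates the chemotaxis (tumble-and-swim) step of a bacterial-type optimizer on a toy two-criterion site-selection cost; the criteria, weights and coordinates are hypothetical stand-ins for the favorable and negative factors discussed above, not the authors' model.

```python
# Simplified chemotaxis loop of a bacterial foraging optimizer (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def cost(p):
    """Smaller is better: close to a pipeline node, far from a settlement (toy criteria)."""
    pipeline, settlement = np.array([2.0, 3.0]), np.array([8.0, 8.0])
    d_pipe = np.linalg.norm(p - pipeline)
    d_town = np.linalg.norm(p - settlement)
    return 0.6 * d_pipe - 0.4 * d_town        # weighted combination of criteria

# Initial bacterium positions (in the paper these could come from a fuzzy inference system)
bacteria = rng.uniform(0, 10, size=(20, 2))
step = 0.2

for _ in range(100):                           # chemotaxis iterations
    for i, b in enumerate(bacteria):
        direction = rng.normal(size=2)
        direction /= np.linalg.norm(direction) # random tumble direction
        candidate = b + step * direction
        if cost(candidate) < cost(b):          # swim only if the candidate site improves
            bacteria[i] = candidate

best = min(bacteria, key=cost)
print("most promising site:", np.round(best, 2), "cost:", round(cost(best), 3))
```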

The paper presents an overview and comparative analysis of four weighting methods for multi-criteria decision-making problems based on pairwise comparisons: AHP, DEMATEL, BWM and SWARA. It is demonstrated by examples that the reliability of the evaluations largely depends on the correct use of the pairwise comparison tool: evaluations are given on a verbal scale, then converted into quantitative values, and then the criteria priorities are calculated. All stages of pairwise comparison are multivariate. In particular, the validity of this decision-making tool depends on the choice of the numerical scale and the method of prioritization. Given their importance, a set of concepts relating to linguistic variables, linguistic pairwise comparison matrices and the numerical scale (scale function) is presented in detail. It is demonstrated that the pairwise comparison matrix in AHP carries more information, and that this information is sufficient for the unambiguous implementation of the DEMATEL, BWM and SWARA methods. Although a solution obtained from a larger amount of input information is usually considered more reliable, it cannot be argued that the decisions produced by AHP are more significant. The emphasis in this study is on the transformation of the numerical scale. This transformation is directly related to the mental representation of the verbal scale, since the decision maker forms the scale according to his mental representation. It is demonstrated that compression of the numerical scale leads to an alignment of priorities. The trend is the same for all types of numerical scales and prioritization methods, but the process occurs at different speeds. Scales with a smaller number of gradations are characterized by a decrease in the degree of priority on the numerical scale, which leads to a decrease in the difference in weights. In particular, this difference can be adjusted by scaling.
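The sketch below shows one common way of deriving priorities from a pairwise comparison matrix, the row geometric mean method, and how compressing the numerical scale pulls the priorities together; the 3x3 matrix and the Saaty-style scale values are illustrative, not taken from the paper.

```python
# Deriving criteria priorities from a pairwise comparison matrix (illustrative data).
import numpy as np

# Verbal judgements already mapped to a numerical scale (1 = equal, 3 = moderate, 5 = strong)
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

g = np.prod(A, axis=1) ** (1.0 / A.shape[0])   # row geometric means
w = g / g.sum()                                 # normalized priority vector
print("priorities:", np.round(w, 3))

# Compressing the numerical scale (here: taking square roots of the judgements)
# pulls the priorities closer together, illustrating the alignment effect.
A_compressed = np.sqrt(A)
g2 = np.prod(A_compressed, axis=1) ** (1.0 / A.shape[0])
print("compressed-scale priorities:", np.round(g2 / g2.sum(), 3))
```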

Models and methods

The paper presents the results of research aimed at developing a method and software tools for identifying the class of a mixing device by its resistance coefficient through experimental data processing. Currently, the main methods for studying mixing devices are finite element methods, as well as procedures for estimating turbulent transfer parameters using laser dopplerometry and chemical methods of sample analysis. These methods require expensive equipment and provide results only for certain types of equipment, which makes it difficult to extend the conclusions to a wider class of devices with different impeller designs. The proposed method involves processing the results of an experiment in which a point light source forming a beam directed vertically upwards is located at the bottom of a container filled with a transparent liquid, and a mixing device with variable rotation frequency is placed in the container. When experiments are performed in real conditions, small deviations in the size and placement of the mixing device lead to difficult-to-predict fluctuations of the funnel surface, so the image of a single marker traces a trajectory that is hard to predict: under certain conditions it can intersect the trajectories of other markers or be interrupted when the marker is covered by a stirrer blade passing over it. The resulting marker image is related to the change in the rotational speed of the blade by a rather complex dependence. To identify this dependence, it is proposed to use deep neural networks operating in parallel in two channels. Each channel analyzes the video signal from the surface of the stirred liquid and the time sequence characterizing the change in the rotation speed of the device's blades. Neural networks of different architectures are used in the channels: a convolutional neural network in one channel and a recurrent one in the other. The results of the two data processing channels are aggregated according to the majority rule. The computational novelty of the proposed algorithm lies in the expansion of the receptive field of each network through the mutual conversion of images and time sequences; as a result, each network is trained on a larger amount of data and can identify hidden regularities. The effectiveness of the method is confirmed by testing it with a software application developed in the MatLab environment.
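A minimal two-channel sketch of the idea is shown below, written in Keras rather than the MatLab application used in the paper; the input shapes, layer sizes, number of classes and the probability-averaging stand-in for the majority rule are illustrative assumptions.

```python
# Two-channel sketch: CNN over marker frames, LSTM over the rotation-speed sequence.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_classes = 4

# Channel 1: convolutional network over marker-image frames
cnn = keras.Sequential([
    layers.Input((64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(n_classes, activation="softmax"),
])

# Channel 2: recurrent network over the rotation-speed time sequence
rnn = keras.Sequential([
    layers.Input((50, 1)),
    layers.LSTM(32),
    layers.Dense(n_classes, activation="softmax"),
])

frames = np.random.rand(8, 64, 64, 1).astype("float32")   # stand-in video frames
speeds = np.random.rand(8, 50, 1).astype("float32")        # stand-in speed sequences

# Aggregate the two channels: here by averaging class probabilities and taking the
# arg-max, a simple stand-in for the majority rule described in the paper.
probs = (cnn.predict(frames, verbose=0) + rnn.predict(speeds, verbose=0)) / 2
print("predicted device classes:", probs.argmax(axis=1))
```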

The article discusses issues related to the development of a flexible intelligent software package for solving the problem of optimal planning of multi-assortment production. Such industries are characterized by a large range of products and many types and configurations of equipment, and as the dimension of the problem increases, the number of possible production schedules grows exponentially. It is therefore extremely important to develop a specialized complex for effective optimal planning and scheduling that is tailored to the characteristics of various multi-assortment industries. The purpose of this work is to increase the productivity of multi-assortment enterprises and reduce production time by developing methods and algorithms for schedule optimization in the form of a problem-oriented software package. The article presents a mathematical formulation of the optimization problem and a set of mathematical models and algorithms for forming objective functions for the optimal scheduling of reconfigurable productions. The study is based on methods of scheduling theory, optimization and evolutionary computation, and on tools for the object-oriented development of complex software systems and databases. The proposed software package provides various intelligent user interfaces, supplemented by databases of products, equipment and technological regulations, a library of objective functions and mathematical optimization methods, an expert system tuning module, and an interactive system for visualizing the resulting production plans in the form of a Gantt chart and the decision tree of the optimization problem. The software package was tested on data from polymer and metallurgical enterprises in Russia and Germany and confirmed its effectiveness in solving planning problems. Implementation of the proposed software package makes it possible to ensure efficient loading of enterprise equipment, reduce production costs and simplify managerial decision-making in the course of production planning.
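As a toy illustration of why the number of schedule variants explodes and heuristics become necessary, the sketch below evaluates a makespan objective for a tiny batch-to-equipment assignment by exhaustive search; the batches, durations and greedy assignment rule are hypothetical, not the package's models.

```python
# Toy makespan objective for assigning product batches to equipment units.
from itertools import permutations

durations = {"batch_A": 4, "batch_B": 2, "batch_C": 3, "batch_D": 5}   # hours (illustrative)
n_units = 2

def makespan(order):
    """Assign batches in the given order to the earliest-free equipment unit."""
    free_at = [0] * n_units
    for batch in order:
        unit = free_at.index(min(free_at))
        free_at[unit] += durations[batch]
    return max(free_at)

# Exhaustive search is feasible only for tiny instances; the number of schedule
# variants grows exponentially, which is why the package relies on heuristics.
best = min(permutations(durations), key=makespan)
print("best order:", best, "makespan:", makespan(best), "h")
```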

Currently, there is an acute problem of disposing of the waste of mining and processing plants, which accumulates in significant volumes in the adjacent territories and poses a serious threat to the environment. In this regard, the creation of technological systems for processing ore waste and the improvement of their information support are an urgent area of research. An example of such a system is a complex chemical and energy technology system for the production of yellow phosphorus from waste apatite-nepheline ores. The purpose of the study was to develop a model for collecting data on the parameters of the heat treatment of pelletized phosphate ore raw materials in such a system, as well as a method for identifying dependencies between these parameters. Identifying dependencies in the information support of the yellow phosphorus production system will improve the quality of its functioning with respect to management criteria and energy and resource efficiency. To achieve this goal, the following tasks were solved: choosing the mathematical concept underlying the method, constructing an algorithm, creating software implementing the method, and conducting model experiments. The method is based on deep recurrent long short-term memory (LSTM) neural networks, which have a high generalizing ability and are used for regression and classification of multidimensional time sequences, the form in which the parameters of a chemical and energy technology system are usually presented. The method is implemented as an application created in the MatLab 2021 environment. The application interface allows the user to interactively conduct experiments with various sets of input and output parameters to identify the relationships between them, as well as to change the hyperparameters of the neural networks. As a result of the application's operation, a repository of trained neural networks is created that model the relationships found between the specified parameters of the technological system and can be applied in decision support, management and engineering systems.
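A minimal sketch of an LSTM regressor of the kind described is given below, written in Keras rather than the MatLab 2021 application from the paper; the window length, number of parameters and data are synthetic placeholders.

```python
# LSTM regression over a window of multivariate process parameters (illustrative).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

window, n_inputs = 30, 4          # 30 time steps of 4 heat-treatment parameters (assumed)

model = keras.Sequential([
    layers.Input((window, n_inputs)),
    layers.LSTM(64),
    layers.Dense(1),               # predicted output parameter (regression)
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(256, window, n_inputs).astype("float32")   # stand-in sequences
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

print("predicted value for a new sequence:", model.predict(X[:1], verbose=0)[0, 0])
```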

Defense software

Currently, the development of information exchange methods and means of communication has a significant impact on the level of innovation potential of all industrial and economic entities, as well as of their group formations, such as regional complexes. It is necessary to note the high degree of integration and interdependence of the elements and processes of such systems, which are closely interconnected by networks of various kinds. Among them, one can highlight the interaction between participants of a scientific and industrial cluster within the framework of innovative activities, which should make it possible to transfer and receive various kinds of data, both open and confidential. At the current stage, there are few applied tools for ensuring confidentiality in the implementation of these processes. For example, the problem is partially solved by traffic tunnelling systems based on OpenVPN or WireGuard tunnels, while other software solutions provide the capabilities of an extensible cloud (Nextcloud). However, an analysis of the functionality of these solutions reveals shortcomings that prevent their use in the innovative development processes of complex production and economic systems. Existing traffic tunnelling solutions are not adapted for deployment on a corporate scale with a flexible organisational structure, while solutions based on Nextcloud suffer from complex server configuration and the cost of the initial software setup. To solve these problems, the article proposes an intelligent traffic tunnelling system based on additional means of automated initial OpenVPN connection initialization performed by a neural module. A dynamic digital fingerprint distribution system with two-way key exchange is used as the authorization server. The developed software solution was tested and compared with existing analogues. The experiment leads to the conclusion that the developed solution is not inferior to existing methods in a number of aspects and can subsequently be used to ensure secure information and communication exchange between industrial and economic entities in clusters during the implementation of innovative processes.
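A small sketch of the kind of automated OpenVPN client initialization mentioned above: generating a minimal client configuration from parameters chosen for a cluster participant. The host name, port and certificate file names are hypothetical placeholders, and the neural selection step from the paper is omitted.

```python
# Render a minimal OpenVPN client configuration (placeholder host and file names).
from pathlib import Path

def make_client_config(remote_host: str, port: int = 1194) -> str:
    """Build a minimal OpenVPN client configuration as a string."""
    return "\n".join([
        "client",
        "dev tun",
        "proto udp",
        f"remote {remote_host} {port}",
        "remote-cert-tls server",
        "ca ca.crt",          # CA issued by the cluster's authorization server (assumed)
        "cert client.crt",    # per-participant certificate ("digital fingerprint", assumed)
        "key client.key",
    ])

Path("participant.ovpn").write_text(make_client_config("vpn.cluster.example"))
print(Path("participant.ovpn").read_text())
```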

Processes and systems modeling

The paper discusses the implementation of an adaptive testing system based on artificial neural network (ANN) modules, which should solve the problem of intelligently choosing the next question and thus form an individual testing trajectory. The aim of the work is to increase the accuracy with which the ANN determines the difficulty level of the next test question for two types of architectures: a feedforward neural network (FNN) and a recurrent network with long short-term memory (LSTM). The data affecting the quality of training are analyzed, and input-layer architectures of the feedforward ANN that significantly improved the quality of the networks are considered. To solve the problem of choosing the thematic block of the question, a hybrid module structure is proposed that includes the ANN itself and a software module for algorithmic processing of the results obtained from the ANN. The feasibility of using feedforward ANNs in comparison with the LSTM architecture was studied, the input parameters of the network were identified, and various architectures and training parameters of the ANN were compared (weight-update algorithms, loss functions, number of training epochs, batch sizes). The choice of a feedforward network in the structure of the hybrid module for selecting a thematic block is substantiated. The results were obtained using the high-level Keras library, which makes it possible to start quickly at the initial stages of research and obtain first results. As is traditional, training was carried out over a large number of epochs.
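A minimal Keras sketch of the feedforward variant is shown below: predicting the difficulty level of the next question from a few features of the test taker's recent results. The three input features and five difficulty levels are assumptions for illustration, not the paper's exact configuration.

```python
# Feedforward classifier for the difficulty level of the next question (illustrative).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_levels = 5
model = keras.Sequential([
    layers.Input((3,)),                       # e.g. share of correct answers,
    layers.Dense(32, activation="relu"),      # current difficulty, answer time (assumed)
    layers.Dense(32, activation="relu"),
    layers.Dense(n_levels, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.rand(500, 3).astype("float32")              # stand-in testing logs
y = np.random.randint(0, n_levels, size=500)
model.fit(X, y, epochs=5, batch_size=32, verbose=0)        # a handful of epochs

print("next-question difficulty level:", model.predict(X[:1], verbose=0).argmax())
```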

The growing use of computer technology makes digital signal processing (DSP) techniques for signals converted into numerical data sets particularly relevant. For the most part, these techniques are quite complex, and their use is not always justified for a wide range of applications. This sustains interest in heuristic algorithms that are based on simplified approaches and quickly yield approximate estimates with the least amount of work. This paper discusses a method for the mathematical processing of a pulsed (single) aperiodic signal with a high level of noise by approximating its shape with a piecewise linear function whose parameters are determined by the method of least squares. A brief justification of the method is given, based on an analysis of the stochastic nature of the noise component. A numerical analysis of the spectral composition of the signals before and after processing is performed, as well as a comparison with other common methods: filtering and coherent averaging. It is shown that piecewise linear approximation of the waveform can effectively separate the useful signal from the noise component, does not require complex algorithmic constructions, and can be implemented in program code in any high-level language. The developed method is applicable to all types of signals and is most effective for processing single aperiodic pulses that cannot be repeated. The proposed approach can also be used in the educational process when studying the basics of programming, and for solving economic problems based on determining trend lines by parametric methods.
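A short sketch of the approach on synthetic data: a noisy single pulse is approximated segment by segment with least-squares lines over fixed breakpoints. The pulse shape, noise level and number of segments are illustrative, not taken from the paper.

```python
# Piecewise linear least-squares approximation of a noisy single pulse (synthetic data).
import numpy as np

t = np.linspace(0, 1, 400)
pulse = np.exp(-((t - 0.3) / 0.08) ** 2)                   # "true" single aperiodic pulse
noisy = pulse + 0.3 * np.random.default_rng(2).normal(size=t.size)

breakpoints = np.linspace(0, 1, 21)                         # 20 linear segments (illustrative)
approx = np.empty_like(noisy)
for a, b in zip(breakpoints[:-1], breakpoints[1:]):
    mask = (t >= a) & (t <= b)
    k, c = np.polyfit(t[mask], noisy[mask], deg=1)          # least-squares line per segment
    approx[mask] = k * t[mask] + c

rms_before = np.sqrt(np.mean((noisy - pulse) ** 2))
rms_after = np.sqrt(np.mean((approx - pulse) ** 2))
print(f"RMS error: {rms_before:.3f} (raw) -> {rms_after:.3f} (piecewise linear)")
```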

Software engineering

Industry 4.0 is an initiative that involves building smart factories, supply chains and production processes. One of the key related concepts is digital twins, which enable forecasting and planning using real-time data in complex models. The concept involves working with large amounts of data, both when developing systems from scratch and when building them on the basis of existing modeling software. The tasks of processing, storing and using such data streams are solved daily by large Internet companies that operate on the data of millions of users to build business processes. Such companies have been developing systems with a microservice architecture for ten or more years, which allows them to build scalable and deterministic systems for processing data flows. However, within the framework of our task it became necessary to use modeling programs to build a digital twin, which posed an integration problem, since programs for building models are not adapted to work within microservice systems. The way out of this situation is to create data exchange drivers. An example of such a simulation program is Unisim Design. The paper formulates the problem of extracting data from a program that was not originally adapted to work within a software package that implies constant interaction between its parts. A solution has been found and implemented that makes it possible to obtain data from this program without using commercial software or closed libraries.
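A sketch of the data-exchange-driver idea is given below: attach to the simulator through COM automation on Windows and republish selected values over HTTP for the microservice system. The ProgID, case path and object/attribute names are hypothetical placeholders and are not the actual Unisim Design API or the authors' implementation.

```python
# Hypothetical data-exchange driver: read values from a simulator via COM and serve them as JSON.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

import win32com.client  # pywin32

app = win32com.client.Dispatch("UnisimDesign.Application")     # hypothetical ProgID
case = app.SimulationCases.Open(r"C:\models\twin_case.usc")    # hypothetical call and path

def read_snapshot() -> dict:
    """Pull a few stream values from the open case (object and attribute names are placeholders)."""
    stream = case.Flowsheet.MaterialStreams.Item("FEED")        # hypothetical call
    return {"temperature": stream.Temperature.GetValue("C"),
            "pressure": stream.Pressure.GetValue("kPa")}

class SnapshotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(read_snapshot()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# Expose the driver to the rest of the microservice system over plain HTTP
HTTPServer(("0.0.0.0", 8080), SnapshotHandler).serve_forever()
```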

IT management

Performance management

There are a number of strategic tasks in the system of higher education whose solution by traditional methods is impossible or very difficult. One of these tasks is the management of the student contingent. The complexity of this process is determined by the requirement that the university meet various key indicators while ensuring the quality of education. The aim of the study is to improve the management of an educational institution's student contingent on the basis of data management. Universities accumulate a huge amount of diverse information, the analysis of which makes it possible to base decisions on data rather than intuition. Analyzing such a large body of information is impossible without modern Business Intelligence products and technologies. This paper sets out the task of creating a decision support system (DSS) for contingent management and describes the range of questions to which this system will quickly give answers, helping an analyst or the head of a university to make decisions. The research methods comprise a methodology for creating the DSS, with a description of the main results of each stage, as well as methods of statistical data analysis. Introducing the DSS into the daily activities of a higher education institution makes it possible to respond quickly to changes in academic achievement, forecast contingent retention and potential budget losses, and assess the number of vacancies and qualitative performance. The system allows the rector of the university to monitor the dynamics of the main indicators on a weekly basis and gives a view of the university from the founder's point of view. Further research is aimed at developing the information system by adding advisory functions, as well as expanding the range of questions the system can quickly answer: evaluating the activities of the teaching staff by key indicators, estimating the costs of implementing a particular area of training, and others.
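An illustrative pandas sketch of the kind of weekly indicator such a DSS could report: the retention share of budget-funded students per faculty and the potential budget loss from expulsions. The table columns and the per-student subsidy figure are hypothetical, not taken from the paper.

```python
# Toy weekly indicator: retention of budget-funded students and potential budget loss.
import pandas as pd

students = pd.DataFrame({
    "faculty": ["IT", "IT", "Economics", "Economics", "Economics"],
    "funding": ["budget", "budget", "budget", "contract", "budget"],
    "status":  ["active", "expelled", "active", "active", "expelled"],
})
per_student_subsidy = 250_000  # rubles per year, illustrative figure

def retained_share(s):
    return (s == "active").mean()

def potential_loss(s):
    return (s == "expelled").sum() * per_student_subsidy

budget = students[students["funding"] == "budget"]
report = budget.groupby("faculty").agg(
    retained=("status", retained_share),
    potential_loss=("status", potential_loss),
)
print(report)
```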