
Articles

№ 1(115), February 24, 2025
Rubric: Research of processes and systems
Author: Perevaryukha A.

Buy the article

Download the first page

Analysis of epidemic processes is one of the oldest tasks for the application of modeling methods in the study of society. Despite the availability of many approaches to developing epidemic models, experts were unable to produce a timely, acceptable forecast for the spread of coronavirus that was still ongoing in the winter of 2024. With new waves, the updated virus returned once again after victory over the infection had been declared. We identify the possibilities and problems of modeling frameworks based on modifications of SIR models at the current epidemic stage of a virus that continues to mutate. The global dynamics of infections changed its oscillation mode twice: after the peak in the spring of 2022 and in the winter of 2024. After the global Omicron wave, local epidemics acquired an asynchronous character based on the formation and attenuation of a series of waves. The frequency of individual infection peaks already varied significantly across regions in 2020; in some countries, frequent short waves of large amplitude developed. We classified the scenarios according to the characteristic features of their nonlinear dynamics. We propose a method for modeling the abrupt development of viral spread based on equations with threshold regulation functions, which describe variants of the formation of infection outbreaks, and situational damping functions, which determine the form of oscillating attenuation in the number of infections. In the model, the fading trend after the primary wave is interrupted by a mass infection event, which induces an outbreak of infections followed by a new regime of fluctuation attenuation. Our computational experiment simulates the development of an extreme peak after the wave-attenuation stage of a local epidemic as a bifurcation scenario for the reactivation of SARS-CoV-2 coronavirus activity, attributed to a crowding effect.
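The threshold mechanism described in this abstract can be illustrated with a toy computation. Below is a minimal sketch, assuming a standard SIR system integrated with Euler steps and extended with a hypothetical threshold trigger; the function simulate and all parameter values (beta, gamma, i_min, pulse) are illustrative stand-ins, not the article's equations.

```python
# Minimal sketch: a classic SIR model plus a hypothetical threshold
# trigger. Once infections decay below i_min, a mass-infection event
# re-seeds the outbreak, producing a reactivated peak after the
# attenuation stage. All parameter values are illustrative.

def simulate(days=600, dt=0.1, beta=0.35, gamma=0.1,
             i_min=1e-4, pulse=5e-3):
    steps = int(days / dt)
    s, i = 0.999, 0.001          # susceptible and infected fractions
    curve, triggered = [], False
    for k in range(steps):
        s += -beta * s * i * dt
        i += (beta * s * i - gamma * i) * dt
        # threshold regulation: one re-seeding event when the fading
        # trend crosses the lower threshold
        if not triggered and k * dt > 100 and i < i_min:
            i += pulse                    # mass infection event
            s = min(s + 0.3, 1.0 - i)     # waning immunity refills S
            triggered = True
        curve.append(i)
    return curve

curve = simulate()
# height of the reactivated wave (after day 150, i.e. index 1500)
print(f"reactivated peak: {max(curve[1500:]):.4f}")
```

Running the sketch reproduces the qualitative pattern named in the abstract: a primary wave, a long fading tail, and a second peak once the threshold event fires.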
№ 1(115), February 24, 2025
Rubric: Data protection
Authors: Belim S., Belim S., Munko S.

Buy the article

Download the first page

All steganographic methods are tailored to a specific container file format, and text documents with markup are the most difficult objects for such methods. The article proposes a model for embedding hidden data into the control tags of structured text documents. The model uses the document's tree structure and embeds data into free leaf nodes; this approach adds hidden data without affecting how the document is displayed. Two steganographic methods are implemented on the basis of this model. The first embeds hidden data into the tags of an HTML document: the embedding procedure adds dummy tags and style classes to free leaf nodes, and the extraction procedure uses an embedding identifier, a role played by the name of the new class. The name-generation algorithm is based on the embedding key and a hash function, and the format of the identifiers matches the format of the names in the source document. This naming method allows the hidden message blocks to be distributed randomly across free leaf nodes. The second method embeds steganographic inserts into XML documents: hidden data is added to the attributes of free leaf nodes, and the method requires adding two new attributes. An optional structure describes both attributes; its format is indistinguishable from the structures already present in the document. The embedding identifier is likewise derived from the embedding key and the number of the embedded block. The data representation uses an encryption algorithm with an additional key. Both methods mask the embedded data to counteract source-code steganalysis. Steganalysis of such methods has exponential algorithmic complexity, so both methods are practical only for large files.
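As an illustration of the identifier scheme, here is a hedged sketch of how class names could be derived from an embedding key and a block number via a hash function. The names block_identifier, embed, and extract are hypothetical, and the span-based insertion is a toy substitute for the article's embedding into free leaf nodes of the parse tree.

```python
import hashlib
import re

# Hypothetical sketch of the class-name identifier idea: block
# identifiers are derived from the embedding key and the block number
# via a hash, so inserted class names look like ordinary ones.

def block_identifier(key: str, block_no: int, length: int = 8) -> str:
    digest = hashlib.sha256(f"{key}:{block_no}".encode()).hexdigest()
    # keep it letter-led so it is a valid CSS class name
    return "c" + digest[:length]

def embed(html: str, message: bytes, key: str) -> str:
    # toy embedding: append invisible spans before </body>; the
    # article's method instead targets free leaf nodes of the tree
    inserts = []
    for i, byte in enumerate(message):
        cls = block_identifier(key, i)
        inserts.append(f'<span class="{cls}" data-v="{byte}"></span>')
    return html.replace("</body>", "".join(inserts) + "</body>")

def extract(html: str, length: int, key: str) -> bytes:
    out = []
    for i in range(length):
        cls = block_identifier(key, i)
        m = re.search(f'class="{cls}" data-v="(\\d+)"', html)
        out.append(int(m.group(1)))
    return bytes(out)

doc = "<html><body><p>text</p></body></html>"
stego = embed(doc, b"hi", "secret-key")
print(extract(stego, 2, "secret-key"))  # b'hi'
```

Because the block order is recovered from the key rather than from document position, the blocks themselves can be scattered randomly, which is the property the abstract relies on.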
№ 2(116), April 25, 2025
Rubric: Performance management
Authors: Bulygina O. V., Kulyasov N., Vorotilova M., Yartsev D.

Buy the article

Download the first page

Line personnel occupy the vast majority of positions in many organizations, which makes timely and successful filling of such vacancies important. Candidates for these positions are sought through mass recruitment, which is characterized by high labor intensity, budgetary and time constraints, and the need for regular repetition due to high staff turnover. These features make it impossible to carry out the process without modern software. Since mass recruitment does not require finding the best candidate for each vacancy and is limited to screening specialists against formal criteria from their resumes, the bulk of labor and time costs falls on the primary selection of candidates. Existing software lacks the functionality to automate this process effectively: given the need to process large volumes of multidimensional data, it provides neither comprehensive accounting of the different types of candidate characteristics nor automatic adjustment of selection criteria according to their priority for the vacancy being filled. To solve the problem, an automated method for forming a pool of candidates for line positions was developed. It is based on the combined use of an adaptive neuro-fuzzy inference system and a bio-inspired algorithm that mimics the behavior of a fish school. The developed hybrid method was implemented as a computer program in Python. Testing showed that the optimization algorithm converges, and comparison with manual selection confirmed the method's promise for mass recruitment of line personnel.
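For readers unfamiliar with the fish-school metaphor, the following simplified sketch shows the core movement rules (individual and collective-instinctive steps only; the full algorithm also has feeding and volitive steps). The fitness function is a stand-in, since the article couples the search with an ANFIS score that is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(w):
    # toy objective: prefer balanced, normalized criterion weights
    w = np.abs(w) / (np.abs(w).sum() + 1e-12)
    return -np.var(w)

def fish_school_search(dim=5, school=30, iters=200, step=0.1):
    x = rng.uniform(-1.0, 1.0, (school, dim))
    f = np.array([fitness(xi) for xi in x])
    for _ in range(iters):
        # individual movement: keep random steps that improve fitness
        cand = x + rng.uniform(-step, step, x.shape)
        fc = np.array([fitness(c) for c in cand])
        improved = fc > f
        # collective-instinctive movement: the whole school drifts
        # along the mean displacement of the successful fish
        drift = (cand[improved] - x[improved]).mean(axis=0) \
            if improved.any() else 0.0
        x[improved] = cand[improved]
        x = x + drift
        f = np.array([fitness(xi) for xi in x])
        step *= 0.99             # shrink steps as the school converges
    return x[f.argmax()]

best = fish_school_search()
print(np.round(np.abs(best) / np.abs(best).sum(), 3))
```

In the article's setting, the fitness call would score a weight vector through the neuro-fuzzy system against the vacancy's priorities rather than through this toy variance objective.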
№ 2(116), April 25, 2025
Rubric: Models and methods
Authors: Berezkin E., Shuvalov V. B.

Buy the article

Download the first page

Performance reliability indicators characterize the operability of the “test object – test tool” system and depend significantly on the performance reliability parameters of the testing equipment. Consequently, they can serve as criteria for selecting the necessary tools at the design stage of digital devices and for assessing their effectiveness. The paper proposes quantitative criteria for assessing the effectiveness of a hardware testing method, based on the assumption that a digital device computes a certain generalizing function whose values depend on a set of quantities reflecting the device's individual operating modes and can be classified as correct only if the operating device is free of errors. To quantify the performance reliability of the equipment, it is proposed to use the probability that the digital device as a whole functions error-free, provided that no detectable fault is present. The computed evaluation value makes it possible to select the best of several candidate test-circuit options or to synthesize a new one. Test procedures organized on various principles, and their combinations, are considered. An optimization problem of placing test circuits in the device under test is formulated, and a technique for solving it under certain restrictions is proposed. A distinctive feature of the proposed approach is that it eliminates the need for the conditional probabilities of detected faults on which known methods rely, values that are very labor-intensive to obtain in practice. The method of rational placement of control circuits is illustrated with the example of a control signal block.
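The proposed criterion can be illustrated numerically. The sketch below is our own simplification, not the paper's derivation: it assumes a per-device fault probability q, a test coverage c (the chance a present fault is detected), and no false alarms, and computes the probability that the device is error-free given that the test circuits report no fault.

```python
# Hedged numeric sketch of the evaluation criterion: P(device is
# error-free | no fault detected), under our simplifying assumptions
# of independent faults and alarm-free fault-less operation.

def p_ok_given_no_alarm(q: float, c: float) -> float:
    p_ok_and_quiet = 1.0 - q                 # no fault, hence no alarm
    p_quiet = (1.0 - q) + q * (1.0 - c)      # no fault, or undetected fault
    return p_ok_and_quiet / p_quiet

# comparing two candidate test-circuit options by this criterion
for coverage in (0.80, 0.95):
    print(f"coverage={coverage:.2f} -> "
          f"P(ok | quiet)={p_ok_given_no_alarm(0.01, coverage):.5f}")
```

Higher coverage pushes the criterion toward 1, which is exactly what makes it usable for ranking test-circuit options at design time.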
№ 2(116), April 25, 2025
Rubric: Software engineering
Authors: Lyutikova L., Kazakova E.

Buy the article

Download the first page

This paper proposes a method for analyzing incomplete and inaccurate data in order to identify factors for predicting the volume of mudflows. The analysis is based on the mudflow activity inventory for the south of Russia, which is poorly formalized, has missing values in the mudflow-type field, and requires significant additional processing. Because the cadastral records lack information on mudflow type, the primary objective of the study is to develop and apply a methodology for classifying mudflow types in order to fill in the missing data. To this end, a comparative study of machine learning methods was performed, including neural networks, support vector machines, and logistic regression. The experimental results indicate that the neural-network model achieves the highest prediction accuracy among the methods considered; however, the support vector machine showed higher sensitivity for classes represented by only a few examples in the test sample. We therefore concluded that an integrated approach combining the strengths of both methods is appropriate and can improve overall classification accuracy in this subject area. Forecasting the volume of material removal and clustering the data revealed nonlinear dependencies, incompleteness, and poor structuring of the data even after the missing mudflow-type values were filled in, which required a transition from numerical to categorical data. This transition increased the model's robustness to outliers and noise, allowing a highly accurate forecast of a one-time removal. Since the forecast itself does not reveal the factors influencing its result, an analysis was conducted to identify these factors and present the discovered patterns as logical rules. Logical rules were formed using two methods: associative analysis and the construction of a logical classifier. Associative analysis yielded rules reflecting some patterns in the data, which, as it turned out, needed significant correction. The developed logical methods made it possible to refine and correct the patterns identified by the associative rules, which in turn ensured the determination of a set of factors influencing mudflow volume.
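The classifier-comparison step might look roughly as follows in scikit-learn. The features and labels here are synthetic placeholders, since the cadastre data is not reproduced, and balanced accuracy stands in for the per-class sensitivity the abstract discusses.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import balanced_accuracy_score

# Synthetic stand-in for the cadastre features and mudflow-type labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "svm": SVC(class_weight="balanced"),  # favors rare-class sensitivity
    "mlp": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    pipe.fit(X_tr, y_tr)
    score = balanced_accuracy_score(y_te, pipe.predict(X_te))
    print(f"{name}: balanced accuracy = {score:.3f}")
```

An integrated approach of the kind the abstract recommends would then combine, for example, the network's overall accuracy with the SVM's predictions on the under-represented classes.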
№ 2(116), April 25, 2025
Rubric: Algorithmic efficiency
Authors: Trefilov P., Romanova M., Venets V.

Buy the article

Download the first page

Probabilistic models for forecasting and assessing the reliability of navigation parameters in intelligent transportation systems are proposed. The relevance of the study is driven by the need to enhance the reliability of robotic transportation systems operating in dynamically changing urban environments, where sensor failures, signal distortions, and a high degree of data uncertainty are possible. The proposed approach applies probabilistic analysis and statistical control to detect anomalies in navigation parameters such as coordinates, speed, and orientation. The concept of navigation data reliability is introduced as a quantitative measure of the correspondence between measured parameters and the actual state of the system. Key validity criteria are defined: confidence probability, significance level, and confidence coefficients. To improve the reliability of parameter estimation, a combination of statistical analysis methods and filtering algorithms is proposed. Forecasting involves preliminary data processing aimed at smoothing noise and verifying data consistency. Outlier detection is performed using statistical methods, including confidence intervals and variance minimization. A forecasting model based on the Kalman filter and dynamic updating of probabilistic estimates has been developed. The integration of these methods into a unified system minimizes the impact of random and systematic errors, ensuring a more accurate assessment of navigation parameters. The proposed approach is applicable to the development of navigation systems for autonomous robots and unmanned vehicles, enabling them to adapt to external conditions without precise a priori data.
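A common concrete form of this combination is a Kalman filter with a confidence gate on the innovation. The sketch below is our illustration of that general idea, not the article's model: a one-dimensional constant-velocity state, illustrative noise levels, and a k-sigma gate that skips measurements falling outside the confidence interval.

```python
import numpy as np

# 1-D constant-velocity Kalman filter with a confidence gate: a
# measurement outside the k-sigma innovation interval is treated as
# an outlier and skipped. All noise levels are illustrative.

def kalman_with_gate(zs, dt=1.0, q=1e-3, r=0.5, k_sigma=3.0):
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.zeros((2, 1))
    P = np.eye(2)
    estimates = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # innovation and its variance
        y = z - (H @ x)[0, 0]
        S = (H @ P @ H.T + R)[0, 0]
        if abs(y) <= k_sigma * np.sqrt(S):   # confidence gate
            K = P @ H.T / S                  # Kalman gain
            x = x + K * y
            P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0, 0])
    return estimates

zs = np.sin(np.linspace(0, 6, 60)) \
    + np.random.default_rng(2).normal(0, 0.3, 60)
zs[30] += 10.0                               # injected sensor glitch
est = kalman_with_gate(zs)
print(f"estimate at glitch: {est[30]:.2f}")  # stays near the true track
```

The gate is one simple realization of the validity criteria named in the abstract: the significance level fixes k_sigma, and the innovation variance S supplies the confidence interval dynamically.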