IT management

Performance management
The subject of the research is models and algorithms for scheduling the projects of a target integrated program of ecological rehabilitation of a region, whose distinguishing feature is the absence of technological links between the program's projects. The purpose of the work is to formulate an economic-mathematical model for determining rational deadlines for implementing the projects of the target environmental rehabilitation program in the absence of technological dependencies between projects, and to compare and improve project scheduling algorithms. The result of the work is an economic-mathematical model that includes two criteria, minimizing the duration of the target comprehensive program and maximizing the progressiveness of achieving its goals, together with a system of constraints on the annual amount of investment and on the relationships between the desired start and end dates of the program's projects. Two well-known variants of the algorithm for sequentially assigning projects to the calendar plan are considered, along with a modification of the algorithm that makes it possible to form an optimal set of projects for each year of program implementation. The proposed modification, which relies on solving the problem of finding a set of projects that is optimal with respect to the criteria, makes it possible to satisfy both criteria of the economic-mathematical model. To test the operability and effectiveness of the analyzed alternative algorithms, a software package was developed in VBA-Excel. Numerical calculations are presented that demonstrate the advantage of the developed algorithm. Conclusions are drawn about the expediency of using this algorithm and about the possibility of adjusting it to take into account technological relationships between program projects, which would significantly expand its scope of application.
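The sequential-assignment idea described above can be sketched as a simple greedy heuristic: projects with no technological links are packed into program years under an annual investment limit, preferring higher "goal progressiveness" per unit of cost. This is an illustrative sketch only, not the authors' VBA-Excel algorithm; the project data, field names, and the ratio-based priority rule are all assumptions.

```python
# Hypothetical sketch: sequential assignment of independent projects to
# program years under an annual investment limit. Each project has a
# one-year cost and a "progressiveness" weight; every cost is assumed
# to fit within one annual budget (otherwise the loop would not finish).

def schedule(projects, annual_budget):
    """Greedily pack projects year by year, preferring higher
    progressiveness per unit of cost (a simple knapsack heuristic)."""
    remaining = sorted(projects, key=lambda p: p["weight"] / p["cost"], reverse=True)
    plan = []  # plan[y] = names of projects carried out in year y
    while remaining:
        budget = annual_budget
        year_set, deferred = [], []
        for p in remaining:
            if p["cost"] <= budget:
                year_set.append(p["name"])
                budget -= p["cost"]
            else:
                deferred.append(p)
        plan.append(year_set)
        remaining = deferred
    return plan

projects = [
    {"name": "A", "cost": 4, "weight": 9},
    {"name": "B", "cost": 3, "weight": 5},
    {"name": "C", "cost": 5, "weight": 6},
    {"name": "D", "cost": 2, "weight": 2},
]
print(schedule(projects, annual_budget=7))  # → [['A', 'B'], ['C', 'D']]
```

A greedy pass like this addresses only the duration criterion; an exact per-year selection (as in the modified algorithm of the paper) would solve a small knapsack problem for each year instead of using the ratio heuristic.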
The MatInf research data management system (RDMS) has been developed to support teams of researchers working with large volumes of data from high-throughput experiments in inorganic materials science. One of the key features of the system is its architecture, which fully supports user-defined data types specified after deployment. This is achieved through flexible system configuration, late binding of data types to web services, and integration of data validation, extraction, and visualization mechanisms. As part of the work, a detailed analysis of the requirements for an RDMS was carried out, which made it possible to formulate the main functional principles of the system, including support for data on chemical compounds, flexible object typing, access control, and management of relationships between data. The developed relational database structure implements an information storage model that allows new object types related to the subject area to be created. The additional use of links between objects (a graph structure) makes it possible to manage effectively the relationships between experimental results, materials, and the methods of their synthesis. To ensure flexibility and extensibility, the system supports integration with external API services for handling user-defined data formats and provides API access to the data, enabling integration with other systems such as machine learning tools. The RDMS is built on ASP.NET Core and the relational DBMS Microsoft SQL Server, which ensures its reliability, performance, and scalability. Examples of using the system to accumulate experimental data, document experiments, and improve the reproducibility of research are presented.

The open architecture and free distribution of the system make it a fairly universal tool for the digitalization of research in inorganic materials science, allowing the platform to be adapted to various tasks, including support for new data types and integration with external analytical tools. The novelty of the development lies in the absence of freely available alternative solutions capable of maintaining typed storage of materials science data (and thus search by the quantitative composition of a material), supporting an extensible system of user-defined types, and integrating arbitrary formats of research documents without changing the system core.
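The late-binding idea behind such flexible typing can be illustrated with a minimal sketch: object types are registered at run time, after deployment, rather than compiled into the system. This is not the actual MatInf schema or API; the `TypeRegistry` class, the `Sample` type, and its validators are invented for the example.

```python
# Illustrative sketch (not the actual MatInf implementation): late
# binding of user-defined object types via a runtime type registry.

class TypeRegistry:
    def __init__(self):
        self._types = {}  # type name -> {field name: validator function}

    def register(self, name, fields):
        """Define a new object type after deployment; no code changes."""
        self._types[name] = fields

    def validate(self, name, obj):
        """Check that an object has every field and each field passes
        its validator."""
        fields = self._types[name]
        return all(f in obj and check(obj[f]) for f, check in fields.items())

registry = TypeRegistry()
# A user-defined "Sample" type added at run time: composition is a
# dict of element -> molar fraction that must sum to 1.
registry.register("Sample", {
    "composition": lambda c: abs(sum(c.values()) - 1.0) < 1e-9,
    "synthesis_method": lambda m: isinstance(m, str),
})

sample = {"composition": {"Fe": 0.7, "Ni": 0.3}, "synthesis_method": "arc melting"}
print(registry.validate("Sample", sample))  # → True
```

Storing the per-field composition as typed data, rather than as an opaque document, is what enables search by the quantitative composition of a material.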
Software engineering

Algorithmic efficiency
In space-ground interferometry systems, spatially separated antenna systems are used to improve the filling of the UV plane and the quality of synthesized images, thus forming variable projections of the spacecraft to ground receiving station baseline. The increasing amount of information from highly sensitive receiving devices leads to rapid filling of the onboard memory with scientific data. To read data quickly from the onboard memory device, it is reasonable to increase the rate of data transmission on the spacecraft-NSPI link. A variant of correcting group errors in a communication channel is considered that uses algorithms of interleaved lossless coding/decoding distributed according to linear and pseudo-random laws, which makes it possible to increase the survivability of memory elements when transmitting data from autonomous systems in near and deep space. Some memory elements can be switched off until a certain moment in time. The group errors arising in this case can be transformed into single errors in accordance with the interleaving algorithms, after which the single errors are corrected by a redundant noise-resistant code. In the event of failure of certain memory elements, the same algorithms can transform the resulting group errors into single errors with subsequent correction in the same way. The application of lossless interleaving codes under the conditions of a normalized basic packet frame makes it possible to increase the reliability of the electronic memory elements. Nowadays, the development of computer technology allows data accumulation and input via a high-speed bus, which significantly increases the recording speed and the volume of stored data. A variant of the recording system is proposed that can be used in satellite data reception centers and radio astronomy data processing centers. The versatility of this system makes it possible to build in a video converter or decoder and to expand the SSD data array.
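The transformation of group errors into single errors described above can be shown with a minimal block interleaver distributed according to a linear law (the pseudo-random variant differs only in the permutation used). The frame size and the burst position below are illustrative.

```python
# Minimal sketch of linear (block) interleaving: symbols are written
# into an n_rows x n_cols matrix by rows and read out by columns, so a
# burst of up to n_rows consecutive channel errors lands in different
# rows and appears as isolated single errors after deinterleaving.

def interleave(data, n_rows, n_cols):
    assert len(data) == n_rows * n_cols
    return [data[r * n_cols + c] for c in range(n_cols) for r in range(n_rows)]

def deinterleave(data, n_rows, n_cols):
    assert len(data) == n_rows * n_cols
    return [data[c * n_rows + r] for r in range(n_rows) for c in range(n_cols)]

frame = list(range(12))            # a 3 x 4 frame of symbols
sent = interleave(frame, 3, 4)

# A group error corrupts 3 consecutive transmitted symbols:
corrupted = sent[:]
for i in (4, 5, 6):
    corrupted[i] = -1

received = deinterleave(corrupted, 3, 4)
# After deinterleaving the burst is spread into single errors, one per
# row of 4, each correctable by a single-error-correcting code.
print([i for i, v in enumerate(received) if v == -1])  # → [2, 5, 9]
```

The interleaving itself adds no redundancy (hence "lossless"); error correction is still performed by the redundant noise-resistant code applied to the deinterleaved rows.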
Random number generators produce uniformly distributed numbers with no correlation between them that would allow one to predict the next generated number better than random guessing. Such generators are used in various tasks, such as modeling and information security. Hardware devices that transform a random physical process into a data stream are used as generators. Another type of generator is the software generator, which uses a formula or algorithm to obtain each new number. Software generators, although faster, are believed to lack a proven property of randomness. There are a number of known cases when patterns were found in such generators, which led to the refusal to use them. Test suites exist for checking a generator; if all tests are passed successfully, the generator is recommended for use. The purpose of this article is to develop an algorithm for testing bit sequences for randomness. A new algorithm for testing random number generators is proposed that can be included in the general set of statistical tests required for verification. Unlike previously known tests based on the entropy approach, the new test uses the autocorrelation of the generated data. It is shown that the proposed test makes it possible to identify deviations from randomness in generators that have previously passed known statistical tests. During the experiments, the output sequences of a cipher, a hash function, and several pseudo-random number generators were tested. As the analysis of the results showed, some generators exhibit autocorrelation in the generated data. The idea of the test is that any generator has a period within which all numbers from the generated set (the alphabet) appear in the sequence. In the ideal case, this period will be close to the size of the alphabet or slightly larger. However, if some symbols are repeated more often, the period increases significantly.

In other words, an increase in the period may indicate either a deviation from the uniform distribution or the presence of autocorrelation in the generated data. The latter phenomenon is the area of interest of this article. The research uses elements of probability theory and mathematical statistics, and the experiments were conducted on a large volume of generated numbers.
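The coverage-period idea described above can be sketched as follows: count how many draws a generator needs before every symbol of the alphabet has appeared at least once, and compare against a uniform baseline. This is an illustrative sketch, not the authors' test statistic; the deliberately biased generator and the alphabet size are assumptions for the demonstration.

```python
# Sketch of the coverage-period idea: a generator with repeated symbols
# needs far more draws to cover the whole alphabet than a uniform one.
import random

def coverage_period(stream, alphabet_size):
    """Number of symbols consumed until every alphabet value is seen."""
    seen, count = set(), 0
    for symbol in stream:
        count += 1
        seen.add(symbol)
        if len(seen) == alphabet_size:
            return count
    return None  # stream ended before full coverage

def uniform_stream(n, rng):
    while True:
        yield rng.randrange(n)

def biased_stream(n, rng):
    # Deliberately non-uniform: symbol 0 is drawn far too often.
    while True:
        yield 0 if rng.random() < 0.75 else rng.randrange(n)

rng = random.Random(1)
n = 256
good = coverage_period(uniform_stream(n, rng), n)
bad = coverage_period(biased_stream(n, rng), n)
# The biased generator needs noticeably more draws for full coverage.
print(good < bad)
```

For a truly uniform source the expected coverage time follows the coupon-collector result, about n·ln n draws for an alphabet of size n, which gives the baseline against which an inflated period signals non-uniformity or autocorrelation.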
Information security

Models and methods
The study proposes a hierarchical fuzzy rule-based model that makes it possible to assess the state of a client of the banking ecosystem and to calculate his or her credit rating. In addition, the proposed model makes it possible to form multiple intermediate assessments. These intermediate assessments are obtained at the outputs of individual fuzzy rule-based models that form a hierarchical structure. The use of intermediate assessments makes it possible to determine the groups of parameters that influenced the value of the aggregated assessment and the value of the client's credit rating. If the proposed model produces a low credit rating, an analysis of the intermediate variables is carried out to identify the causes. As a result of the analysis, a set of input variables is formed with whose help the reasons for assigning a certain credit rating are explained. To correct a low credit rating, control actions are formed; a feature of such actions is the involvement of the client in the business processes of the banking ecosystem. The experiment confirmed that the proposed theoretical provisions make it possible to distribute the objects of analysis among state classes and to determine the values of their aggregated assessments depending on various combinations of input parameter values. The possibility of using the proposed model to explain the obtained results, by forming maps of the client's state and predicting the result of applying control actions to the client's state, has also been confirmed.
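The hierarchy of fuzzy rule blocks can be illustrated with a toy two-level example: two first-level blocks produce intermediate assessments for parameter groups, and a second-level block aggregates them into the rating. The variables, membership functions, rules, and input values below are invented for the illustration and are not taken from the paper.

```python
# Toy sketch of a two-level hierarchy of fuzzy rule-based models.

def tri(x, a, b, c):
    """Triangular membership function with vertices a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_score(x, y):
    """One fuzzy rule block: two inputs in [0, 1] -> score in [0, 1].
    Rules: both 'high' -> 1.0, mixed -> 0.5, both 'low' -> 0.0;
    defuzzified as a firing-strength weighted average."""
    low = lambda v: tri(v, -1.0, 0.0, 1.0)
    high = lambda v: tri(v, 0.0, 1.0, 2.0)
    rules = [
        (min(high(x), high(y)), 1.0),
        (min(high(x), low(y)), 0.5),
        (min(low(x), high(y)), 0.5),
        (min(low(x), low(y)), 0.0),
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total

# First level: intermediate assessments per parameter group.
income_score = fuzzy_score(0.9, 0.8)    # "financial" group (invented inputs)
activity_score = fuzzy_score(0.2, 0.3)  # "ecosystem activity" group
# Second level: the aggregated rating; the intermediate scores remain
# available to explain which group pulled the rating down.
rating = fuzzy_score(income_score, activity_score)
print(round(income_score, 2), round(activity_score, 2), round(rating, 2))
```

Here the low second-level rating is explained by the low intermediate "activity" assessment, which is exactly the kind of traceability the intermediate variables provide.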
When searching for solutions to nonlinear optimal control problems, one may encounter difficulties related to the presence of local extrema. Traditional optimization methods are effective for convex problems, in which any local extremum found is also global. It is therefore important to develop methods and algorithms for solving multi-extremal optimal control problems. Since the behavior of most optimization methods depends on the choice of the initial values of the optimized parameters, it is proposed to apply the method of differential evolution. This method optimizes a set of candidate solutions within the range of acceptable values of the desired parameters, with initial values set randomly. The aim of the work is to develop an evolutionary algorithm for finding a solution to a multi-extremal optimal control problem. Escaping a solution stuck in a local optimum is possible by maintaining population diversity. If the population falls into the region of a local extremum and the prescribed number of iterations of the algorithm is insufficient, an incorrect solution can be obtained. Therefore, to dislodge the population from the region of a local extremum, a modification of the differential evolution method is proposed: a dynamic population size. If the population is drawn into the region of a local extremum, its average fitness changes only slightly. In this case, the individual vectors with the lowest fitness are removed and new individuals are added. Computational experiments have been carried out on a model optimal control problem with a non-convex reachability domain. The developed evolutionary algorithm is compared with the method of variations in the control space and with the differential evolution algorithm with a constant population size. The effectiveness of the developed evolutionary algorithm in solving a multi-extremal optimal control problem is demonstrated.
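The stagnation-driven population refresh described above can be sketched as a standard DE/rand/1/bin loop with one extra step: when the average fitness stops changing, the worst individuals are replaced by fresh random ones. This is an illustrative re-implementation of the idea, not the authors' algorithm; the multimodal Rastrigin function stands in for the discretized optimal control parameters, and all thresholds and settings are assumptions.

```python
# Sketch of differential evolution with a dynamic population: on
# stagnation of the average fitness, the worst half of the population
# is replaced by new random individuals to restore diversity.
import math
import random

def rastrigin(x):
    """A classic multimodal test function with many local minima."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def de_dynamic(f, dim, bounds, pop_size=30, iters=300, F=0.6, CR=0.9, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    new = lambda: [rng.uniform(lo, hi) for _ in range(dim)]
    pop = [new() for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    prev_avg = sum(fit) / len(fit)
    for _ in range(iters):
        for i in range(len(pop)):
            # DE/rand/1/bin mutation and binomial crossover.
            a, b, c = rng.sample([j for j in range(len(pop)) if j != i], 3)
            jr = rng.randrange(dim)
            trial = [
                min(hi, max(lo, pop[a][k] + F * (pop[b][k] - pop[c][k])))
                if (rng.random() < CR or k == jr) else pop[i][k]
                for k in range(dim)
            ]
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
        avg = sum(fit) / len(fit)
        if abs(prev_avg - avg) < 1e-6 * max(1.0, abs(prev_avg)):
            # Stagnation: keep the best half, inject random individuals.
            order = sorted(range(len(pop)), key=fit.__getitem__)
            keep = order[: len(pop) // 2]
            pop = [pop[j] for j in keep] + [new() for _ in range(len(pop) - len(keep))]
            fit = [f(x) for x in pop]
        prev_avg = avg
    best = min(range(len(pop)), key=fit.__getitem__)
    return pop[best], fit[best]

x_best, f_best = de_dynamic(rastrigin, dim=2, bounds=(-5.12, 5.12))
print("best f:", round(f_best, 4))
```

Keeping the best individuals during the refresh preserves elitism, so the injected diversity cannot lose the incumbent solution.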
Laboratory

Research of processes and systems
Modern trends in the development of Industry 4.0 and the digitalization of industry are leading to the creation of networks of various manufacturing enterprises that use shared digital platforms and cyber-physical systems, forming industrial digital ecosystems. This highlights the relevance of research aimed at standardizing and optimizing information exchange between stakeholders in production processes, which is crucial for enhancing competitiveness and reducing product life cycles. These challenges create a growing need for methods that ensure semantic coherence of digital models of products, processes, and enterprises within reference frameworks such as RAMI 4.0, particularly in the context of distributed manufacturing. The aim of this study is to develop a method for applying ontological models and knowledge bases in the design of cyber-physical systems aligned with the RAMI 4.0 standard, with an emphasis on achieving consistency, integrity, and dynamic interaction within digital platforms. The main results include the development of a system of hierarchically linked ontologies, a classification of asset properties and relationships within the RAMI 4.0 structure, a predicate framework for knowledge bases, and a practical implementation of the method using the 1C:Enterprise platform. The proposed approach ensures terminological unification, improves data exchange efficiency, and supports decision-making in the design and operation of cyber-physical systems. It contributes to the advancement of systems engineering by providing both a theoretical foundation and practical tools for industrial digitalization and the standardization of enterprise interactions within Industry 4.0 ecosystems, while also opening new prospects for research into intelligent manufacturing systems.
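One way to picture the predicate framework mentioned above is a knowledge base of subject-predicate-object triples that places assets on the RAMI 4.0 axes and checks their consistency. This is a hedged sketch, not the paper's 1C:Enterprise implementation; the predicate names, the asset, and the consistency rule are invented, while the axis value lists follow the published RAMI 4.0 reference model.

```python
# Illustrative sketch: a triple-store knowledge base with a consistency
# predicate over the RAMI 4.0 layer and hierarchy-level axes.

LAYERS = {"Asset", "Integration", "Communication",
          "Information", "Functional", "Business"}
HIERARCHY = {"Product", "Field Device", "Control Device", "Station",
             "Work Centers", "Enterprise", "Connected World"}

kb = set()  # (subject, predicate, object) triples

def assert_triple(s, p, o):
    kb.add((s, p, o))

def consistent(asset):
    """An asset must be placed on known layers and hierarchy levels."""
    layers = {o for s, p, o in kb if s == asset and p == "hasLayer"}
    levels = {o for s, p, o in kb if s == asset and p == "hasHierarchyLevel"}
    return bool(layers) and layers <= LAYERS and bool(levels) and levels <= HIERARCHY

assert_triple("Drive42", "hasLayer", "Asset")
assert_triple("Drive42", "hasHierarchyLevel", "Field Device")
assert_triple("Drive42", "partOf", "MillingStation")
print(consistent("Drive42"))  # → True: placement uses known axis values
```

A hierarchy of linked ontologies would add further predicates and subsumption rules on top of such a base, with the same kind of consistency checks enforcing terminological unification across enterprises.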
The paper analyzes the experience of automating the information support of administrative bodies of territorial government. The analysis made it possible to understand that the information support of administrative government differs fundamentally from the creation of information systems for industrial enterprises. The example of a specific district administration shows that the creation of a single automated information system, in its original sense in the theory of information systems, is impossible. The implementation of F. E. Temnikov's concept of "growing" the system, based on the registration and analysis of incoming requests, is presented; the idea was proposed by one of the authors of the article, the head of the information and communication department of the Administration of the Kalininsky district of St. Petersburg. After the accumulation of a large volume of unordered decisions and acquired technical means, the concept and model of a multi-level information and control complex were proposed and applied, taking into account the features of an administrative government body and providing a holistic view of the accumulated information support. The concept is based on the definition of the system adopted in the article. Methods of analyzing the stratified model are proposed that help to develop and adjust the structure of the information and control complex. Prospective methods of creating an information system based on a service architecture for the administrative government domain are considered. The relevance of the study lies in showing how analyzing the history of information system development in a specific city district administration is useful for developing a theory of creating administrative management information systems.