Graphs and chemistry. Algebra and harmony in chemical applications




The study of the relationship between the properties of substances and their structure is one of the main tasks of chemistry. A great contribution to its solution was made by the structural theory of organic compounds, among the founders of which is the great Russian chemist Alexander Mikhailovich Butlerov (1828-1886). It was he who first established that the properties of a substance depend not only on its composition (molecular formula), but also on the order in which the atoms in the molecule are interconnected. This order was called "chemical structure". Butlerov predicted that the composition C₄H₁₀ can correspond to two substances having different structures, butane and isobutane, and confirmed this by synthesizing the latter.

The idea that the order in which atoms are connected is of key importance for the properties of matter has proved very fruitful. It is based on the representation of molecules by graphs, in which atoms play the role of vertices and the chemical bonds between them are the edges connecting the vertices. In this graphical representation, the bond lengths and the angles between bonds are ignored. The two C₄H₁₀ molecules described above correspond to the following graphs:

Hydrogen atoms are not indicated in such graphs, since their location can be unambiguously determined from the structure of the carbon skeleton. Recall that carbon in organic compounds is tetravalent, therefore, in the corresponding graphs, no more than four edges can depart from each vertex.

Graphs are mathematical objects, so they can be characterized using numbers. From this came the idea to express the structure of molecules by numbers that are associated with the structure of molecular graphs. These numbers are called "topological indices" in chemistry. By calculating some topological index for a large number of molecules, one can establish a relationship between its values and the properties of substances, and then use this relationship to predict the properties of new, not yet synthesized substances. To date, chemists and mathematicians have proposed hundreds of various indices characterizing certain properties of molecules.

  1. Methods for calculating topological indices

Methods for calculating topological indices can be very diverse, but all of them must satisfy quite natural requirements:

1) each molecule has its own, individual index;

2) Molecules with similar properties have similar indices.

Let's see how this idea is implemented using the example of saturated hydrocarbons - alkanes. The key to constructing many indices is the concept of the "distance matrix" D. This is the name of the matrix whose elements show the number of edges separating the corresponding vertices of the molecular graph. Let us construct this matrix for three isomeric hydrocarbons of composition C₅H₁₂. To do this, we draw their molecular graphs and renumber the vertices (in an arbitrary order):

The diagonal elements of the distance matrix for hydrocarbons are equal to 0. In the first graph, vertex 1 is connected to vertex 2 by one edge, so the matrix element d₁₂ = 1. Similarly, d₁₃ = 2, d₁₄ = 3, d₁₅ = 4. The first row of the distance matrix of normal pentane is thus (0 1 2 3 4). The complete distance matrices for all three graphs are constructed in the same way.
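
The matrices themselves are easy to generate by computer. The following sketch is our illustration rather than part of the original text (the dictionary encoding of the carbon skeleton and the function name are assumptions of the sketch); it computes the distance matrix of the hydrogen-suppressed graph by breadth-first search and reproduces the first row (0 1 2 3 4) for normal pentane:

```python
from collections import deque

def distance_matrix(adjacency):
    """Matrix of graph distances (numbers of edges) between all pairs of vertices."""
    vertices = sorted(adjacency)
    index = {v: i for i, v in enumerate(vertices)}
    n = len(vertices)
    D = [[0] * n for _ in range(n)]
    for start in vertices:
        seen = {start: 0}
        queue = deque([start])
        while queue:                      # breadth-first search from `start`
            v = queue.popleft()
            for w in adjacency[v]:
                if w not in seen:
                    seen[w] = seen[v] + 1
                    queue.append(w)
        for v, d in seen.items():
            D[index[start]][index[v]] = d
    return D

# normal pentane: the carbon chain 1-2-3-4-5 (hydrogen atoms suppressed)
n_pentane = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(distance_matrix(n_pentane)[0])      # first row: [0, 1, 2, 3, 4]
```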


The distance between vertices does not depend on the order of their enumeration, so the distance matrices are symmetrical with respect to the diagonal.

The first topological index reflecting the structure of a molecular graph (G) was proposed in 1947 by Wiener. It is defined as the sum of the diagonal elements of the distance matrix plus half the sum of its off-diagonal elements:

W(G) = Σᵢ dᵢᵢ + (1/2) Σᵢ≠ⱼ dᵢⱼ     (1)

For the above graphs corresponding to the pentanes C₅H₁₂, the Wiener index takes the values 20, 18, and 16. It can be assumed that it describes the degree of hydrocarbon branching: the largest values correspond to the least branched hydrocarbons. With an increase in the length of the carbon skeleton, the Wiener index increases, as there are more elements in the distance matrix. Statistical analysis on the example of several hundred hydrocarbons showed that the Wiener index correlates with some physical properties of alkanes: boiling points, heats of vaporization, molar volume.
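
As a check of formula (1), here is a minimal sketch (ours, not from the original article) that applies it to the distance matrix of normal pentane written out above; the same procedure applied to the two branched isomers gives 18 and 16:

```python
def wiener_index(D):
    """Formula (1): sum of the diagonal plus half the sum of the off-diagonal elements."""
    n = len(D)
    diagonal = sum(D[i][i] for i in range(n))
    off_diagonal = sum(D[i][j] for i in range(n) for j in range(n) if i != j)
    return diagonal + off_diagonal / 2

# Distance matrix of normal pentane (the first of the three graphs above)
D_pentane = [
    [0, 1, 2, 3, 4],
    [1, 0, 1, 2, 3],
    [2, 1, 0, 1, 2],
    [3, 2, 1, 0, 1],
    [4, 3, 2, 1, 0],
]
print(wiener_index(D_pentane))   # 20.0; the two branched isomers give 18 and 16
```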

Another type of index is not based on distances between vertices, but on the number of nearest neighbors for each vertex. As an example, let's calculate the Randic index, which is defined as follows:

χ = Σ 1/√(vᵢ·vⱼ), the sum running over all edges (i, j) of the graph     (2)

where vᵢ is the degree of the i-th vertex, that is, the number of edges emanating from it. For the graphs above, the Randic index is:

χ(normal pentane) = 2·(1/√(1·2)) + 2·(1/√(2·2)) = √2 + 1 ≈ 2.414     (3)

χ(2-methylbutane) = 2·(1/√(1·3)) + 1/√(3·2) + 1/√(2·1) ≈ 2.270     (4)

χ(2,2-dimethylpropane) = 4·(1/√(1·4)) = 2.000     (5)

This index also decreases with increasing degree of branching of the carbon skeleton and can be used to describe the physical properties of alkanes.
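
For illustration, the following short script (our own sketch; the adjacency-list encoding of the skeletons is an assumption) evaluates formula (2) for the three pentane graphs and reproduces the values given in (3)-(5):

```python
from math import sqrt

def randic_index(adjacency):
    """Formula (2): sum over all edges of 1/sqrt(v_i * v_j), v being vertex degrees."""
    degree = {v: len(nbrs) for v, nbrs in adjacency.items()}
    edges = {tuple(sorted((v, w))) for v, nbrs in adjacency.items() for w in nbrs}
    return sum(1 / sqrt(degree[a] * degree[b]) for a, b in edges)

pentanes = {
    "normal pentane":      {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]},
    "2-methylbutane":      {1: [2], 2: [1, 3, 5], 3: [2, 4], 4: [3], 5: [2]},
    "2,2-dimethylpropane": {1: [2], 2: [1, 3, 4, 5], 3: [2], 4: [2], 5: [2]},
}
for name, g in pentanes.items():
    print(name, round(randic_index(g), 3))   # 2.414, 2.27, 2.0
```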

Alkanes are the most boring type of organic molecules from a chemical point of view, since they do not contain any "features" - double and triple bonds or atoms of elements other than hydrogen and carbon (such elements are called heteroatoms). The introduction of heteroatoms into a molecule can radically change the properties of the substance. Thus, the addition of just one oxygen atom converts the rather inert gaseous ethane C₂H₆ into liquid ethanol C₂H₅OH, which exhibits rather high chemical and biological activity.

Consequently, in the topological indices of molecules more complex than alkanes, the presence of multiple bonds and heteroatoms must be taken into account. This is done by assigning certain numerical coefficients, "weights", to the vertices and edges of the graphs. For example, in the distance matrix the diagonal elements can be defined in terms of the nuclear charge Zᵢ (recall that for carbon Z = 6):

dᵢᵢ = 1 − 6/Zᵢ     (6)

Off-diagonal elements are determined by summation over the edges of the path, and each edge connecting atoms with charges Zᵢ and Zⱼ is assigned the weight

k = 36/(b·Zᵢ·Zⱼ)     (7)

where b is the bond order between the atoms (1 for a single bond, 2 for a double bond, 3 for a triple bond). For ordinary carbon-carbon single bonds, k = 1. Let us compare the Wiener indices of propane C₃H₈ and three oxygen-containing substances of similar composition: propyl alcohol C₃H₈O, its isomer isopropyl alcohol C₃H₈O, and acetone C₃H₆O.

To do this, we calculate the distance matrices according to the indicated rules; in the molecular graphs we show all atoms except hydrogen. 1) Propane:

2) In the propyl alcohol molecule, oxygen is bonded to the extreme carbon atom:

For a single C–O bond, the weighting factor is 36/(1·6·8) = 0.75. The diagonal element of the matrix corresponding to oxygen is:

d₄₄ = 1 − 6/8 = 0.25.

For molecules containing heteroatoms, the Wiener index ceases to be an integer. 3) In the isopropyl alcohol molecule, oxygen is bonded to the middle carbon atom:

4) In acetone, the order of connection of atoms is the same as in isopropyl alcohol, but the bond between carbon and oxygen is double:

For the C=O double bond, the weighting factor is 36/(2·6·8) = 0.375.
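
The bookkeeping of equations (6) and (7) is easy to automate for acyclic molecules. The sketch below is our own illustration (the atom and bond encodings are assumptions, and the off-diagonal elements are taken as sums of edge weights along the unique path between two atoms); it reproduces the weighting factors 0.75 and 0.375 and shows the trend discussed in the next paragraph:

```python
from collections import deque

def weighted_wiener(atoms, bonds):
    """atoms: {vertex: nuclear charge Z}; bonds: {(i, j): bond order b}.
    Edge weights k = 36/(b*Zi*Zj) (Eq. 7); diagonal terms 1 - 6/Zi (Eq. 6)."""
    graph = {v: [] for v in atoms}
    for (i, j), b in bonds.items():
        k = 36.0 / (b * atoms[i] * atoms[j])
        graph[i].append((j, k))
        graph[j].append((i, k))
    w = sum(1 - 6.0 / z for z in atoms.values())         # diagonal elements, Eq. (6)
    for start in atoms:                                   # path weights by BFS (tree)
        dist = {start: 0.0}
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for u, k in graph[v]:
                if u not in dist:
                    dist[u] = dist[v] + k
                    queue.append(u)
        w += sum(dist.values()) / 2                       # half the off-diagonal sum
    return w

C, O = 6, 8
print(weighted_wiener({1: C, 2: C, 3: C}, {(1, 2): 1, (2, 3): 1}))                  # propane: 4.0
print(weighted_wiener({1: C, 2: C, 3: C, 4: O}, {(1, 2): 1, (2, 3): 1, (3, 4): 1})) # propyl alcohol: 9.5
print(weighted_wiener({1: C, 2: C, 3: C, 4: O}, {(1, 2): 1, (2, 3): 1, (2, 4): 1})) # isopropyl alcohol: 8.5
print(weighted_wiener({1: C, 2: C, 3: C, 4: O}, {(1, 2): 1, (2, 3): 1, (2, 4): 2})) # acetone: 7.375
```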

As can be seen, the addition of a heteroatom to the structure of an alkane increases the Wiener index, because the distance matrix becomes larger, whereas the introduction of multiple bonds and an increase in the branching of the molecule reduce this index. These rules also hold for more complex molecules.

Initially, topological indices were developed only for predicting the physicochemical properties of substances, but they later came to be used for other problems as well. Let us consider some of them. One application of topological indices is the classification of organic compounds and the creation of organic databases. The problem is to find an index that characterizes the chemical structure uniquely and from which this structure can be restored. The required index must have good discriminating ability, that is, it must distinguish even molecules that are close in structure. This task is daunting, since more than 20 million organic structures are already known. Its solution, apparently, will be found through the use of composite topological indices.

Translation editor's preface
Preface to the Russian edition
Foreword
TOPOLOGY OF A FINITE POINT SET AND MOLECULAR STRUCTURE. R. Merrifield, H. Simmons
1. Introduction
2. Finite topology
2.1. Topology graph
2.2. Qualitative characteristics of graph topology
2.3. Quantitative characteristics of graph topology: combinatorics
3. Topology of alternant molecules
3.1. Structure complexity
3.2. Connectivity and Delocalization
4. Topology of non-alternant molecules
4.1. The graph duplex
4.2. Duplex topology
Literature
STEREOCHEMICAL TOPOLOGY. D. Walba
1. Introduction
2. An approach to the synthesis of topological stereoisomers based on Möbius strips
2.1. Complete synthesis of the first molecular Möbius strip
3. Criteria for topological stereoisomerism
3.1. Topological chirality
3.2. Topological diastereoisomerism
4. Clipping reaction and approaches to the synthesis of the molecular trefoil knot
4.1. Rupture of the steps of the Möbius ladder
4.2. Molecular Trefoil Knot
Literature
QUALITATIVE STEREOCHEMISTRY J. Dugundji
1. Introduction
2. Permutational isomers
3. Chemical identity group
Literature
THEORY OF MOLECULAR STRUCTURE. R. Bader
1. Overview of the theory
2. Some applications
Literature
ALGEBRAIC AND TOPOLOGICAL STRUCTURE OF QUANTUM CHEMISTRY, CHEMICAL KINETICS AND VISUAL RULES ALLOWING TO MAKE QUALITATIVE PREDICTIONS FOR CHEMICAL PRACTICE. O. Sinanoglu
1. Introduction
2. Microchemistry or qualitative quantum chemical rules derived directly from structural formulas or ORTEP diagrams
2.1. The field of the vector space of valences Vn(R), which exists in the Euclidean three-dimensional space (?)
2.2. The principle of linear covariance in quantum chemistry
2.3. Non-unitary classification of molecules
2.4. From structural formulas of molecules to more detailed structural-electronic formulas (and to graphs)
2.5. "Structural and deformation covariance" of molecules and graphical rules for obtaining qualitative quantum chemical results
3. Morphology of reaction mechanisms, synthesis pathways, and topological “step/compound rules”
4. Features of obtaining quantum qualitative characteristics of each reaction stage of the reaction mechanism or pathway
Literature
REACTION TOPOLOGY: THE THEORY OF MANIFOLDS OF POTENTIAL SURFACES AND QUANTUM-CHEMICAL DESIGN OF SYNTHESIS. P. Mezey
1. Introduction
2. Topological manifolds, differentiable manifolds, and reaction topology
3. Ratios of critical points; intersection graphs in the topological space (M, Tc) and quantum chemical reaction schemes
4. Computational aspects
5. Degenerate critical points and chemical structures that do not correspond to true PES minima
6. Conclusions
Literature
TOPOLOGY OF BINDING IN POLYHEDRAL MOLECULES. R. King
1. Introduction
2. Basis of the concept
3. Vertex atoms
4. Polyhedral systems with localized binding
5. Systems with fully delocalized binding
6. Electron-redundant polyhedral systems
7. Electron-deficient polyhedral systems
8. Anomalous vertices
9. Polyhedrans
10. Conclusions
Literature
FORMS OF CLUSTERS OF ELEMENTS OF THE MAIN SUBGROUPS: A TOPOLOGICAL APPROACH TO THE COUNTING OF SKELETON ELECTRONS. M. McGlinchey, Y. Tal
1. Introduction
2. Clusters with fully delocalized binding
3. Clusters with binding localization on edges
3.1. Six-atom clusters
3.2. Seven-atom clusters
3.3. Eight-atom clusters
4. Quantum-topological substantiation of the polyhedral model
5. Conclusions
Literature
TOPOLOGICAL PROPERTIES OF BINARY COMPOUNDS OF SULFUR WITH NITROGEN. A. Turner
1. Introduction
2. Prototype molecule - tetrasulfur tetranitride
3. Planar cyclic molecules and SnNm ions
4. Non-planar systems - equivalence of centers of charge distribution
5. Application of the electron density functional theory
Literature
SHOULD TOPOLOGICAL INDICES BE DEVELOPED? D. Rouvray
1. Introduction
2. Wiener index
3. Index construction
4. Indices of the distance matrix
5. Indices of adjacency matrix
6. Centric topological indices
7. Information-theoretic indices
8. Composite topological indices
9. Some mathematical relations
10. Shape and size of molecules
11. Main applications of indices
12. Bibliographic classification of compounds
13. Determination of physico-chemical parameters
14. Development of pharmaceutical drugs
15. Conclusions
Literature
TOPOLOGICAL INDICES BASED ON NEIGHBORHOOD SYMMETRY: CHEMICAL AND BIOCHEMICAL APPLICATIONS. V. Magnuson, D. Harris, S. Basak
1. Introduction
2. Information content of the graph
2.1. Definitions
2.2. Key points
2.3. Equivalence relation
2.4. Calculation of other topological indices
3. Calculation of indices
4. Applications in quantitative structure-activity relationship (QSAR) studies
4.1. Solubility of alcohols
4.2. Inhibition of microsomal para-hydroxylation of aniline by alcohols
4.3. Toxicity (LD50) of barbiturates
Literature
ORDERING OF GRAPHS AS AN APPROACH TO STUDIES OF STRUCTURE-ACTIVITY CORRELATIONS. M. Randic, J. Kraus, B. Dzonova-Jerman-Blazic
1. Introduction
2. Basic principles of the method
3. Application to substances with antimalarial activity
3.1. Building a sequence of circuits
3.2. Comparison of A-M molecules
4. Discussion
Literature
MATHEMATICAL MODEL OF MOLECULAR COMPLEXITY. S. Bertz
1. Seeking
2. Model development
2.1. Graph theory and molecular topology
2.2. Information theory and molecular symmetry
3. Model validation
3.1. Model Limitations
4. Reliability of the model
5. Conclusions
Literature
DISTANCE MATRIX FOR MOLECULES CONTAINING HETEROATOMS. M. Barysz, G. Jashari, R. Lall, V. Srivastava, N. Trinajstić
1. Introduction
2. Relationship between adjacency matrix and distance matrix
3. Distance matrix for heterosystems
Literature
CANONICAL NUMBERING AND SYSTEM OF LINEAR NOTATION OF CHEMICAL GRAPHS. W. Herndon
1. Introduction
2. Canonical numbering
3. Single-valued linear notation
4. Canonical enumeration of regular graphs
5. Conclusions
Literature
SYMMETRY AND SPECTRA OF GRAPHS. THEIR APPLICATIONS IN CHEMISTRY. K. Balasubramanian
1. Introduction
2. Tree pruning
3. Tree Pruning and Tree Symmetry Groups
4. Spectral polynomials of trees obtained using the trimming process
5. Applications in chemistry
Literature
AUTOMORPHISM GROUPS OF SOME CHEMICAL GRAPHS. G. Jones, E. Lloyd
1. Introduction
2. Some graphs and their groups
3. Reaction graphs
3.1. Example 1: Berry mechanism
3.2. Example 2: 1,2 shifts in carbonium ions
3.3. Example 3: 1,2-shifts in homotetrahedranyl cations
3.4. Example 4: Digonal twists in octahedral complexes
3.5. Example 5: 1,3-shifts in homotetrahedranyl cations
4. Suborbital graphs
5. Conclusions
Literature
PROBLEM OF RECONSTRUCTION. W. Tutte
USE OF RIEMANN SURFACES IN THE GRAPH THEORETICAL REPRESENTATION OF MÖBIUS SYSTEMS. A. Day, R. Mallion, M. Rigby
1. Introduction
2. Formalism of the method
3. Application
4. Conclusions
Literature
GLOBAL DYNAMICS OF SOME CLASSES OF REACTION SYSTEMS. H. Othmer
1. Introduction
2. Graph-theoretic formulation
2.1. Structure of the governing equations
2.2. Some concepts of graph theory
2.3. Reaction invariants
2.4. Existence of stationary states
3. Vertex-driven networks
3.1. Constant input streams
3.2. Periodic input streams
4. Conclusions
Literature
"LOGICAL DESCRIPTION" IN COMPARISON WITH "CONTINUOUS DESCRIPTION" OF SYSTEMS CONTAINING FEEDBACK LOOPS: RELATIONSHIP BETWEEN TIME DELAYS AND PARAMETERS. R. Thomas
1. Introduction
2. Logical description of systems containing feedback loops
2.1. Delays "on" and "off"
2.2. Logic Equations
2.3. State tables
2.4. Chains (sequences of states)
2.5. Stability analysis
3. Continuous description
3.1. Logic time delays and continuous parameters
Literature
QUALITATIVE DYNAMICS AND STABILITY OF CHEMICAL REACTION SYSTEMS. B. Clark
1. Introduction
2. Setting the chemical system
3. Time scales - removal of substances that react too quickly and too slowly
4. Theory of chemical networks
5. System dynamics
6. Manifold of stationary states
7. Simple theorems for network analysis
8. A deeper discussion of stationary states and their existence
9. Correctness
10. Uniqueness
11. Global attraction
12. Networks in which the manifold is not regular, unambiguous and globally attractive
13. Network topology and stability
14. Concluding remarks
15. Application
15.1. Universal Functions
15.2. Functions for symbolic processing and calculation of the current matrix
15.3. Theorem checking functions and related functions
15.4. Individual functions
Literature
HIGHER CHAOS IN SIMPLE REACTION SYSTEMS. O. Rössler, J. Hudson
1. Introduction
2. The method of generating ordinary chaos
3. The method of generating higher chaos
4. Discussion
Literature
STRANGE ATTRACTORS IN LINEAR PERIODIC TRANSFER FUNCTIONS WITH PERIODIC PERTURBATIONS. H. Degn
1. Introduction
2. Results
Literature
USING SENSITIVITY ANALYSIS IN DETERMINING THE STRUCTURAL STABILITY OF MULTIPARAMETER OSCILLATORS. R. Larter
1. Introduction
2. Method
2.1. Standard Theory
2.2. Modified theory
3. Results
3.1. Initial conditions
3.2. Rate constants
3.3. More difficult situations
Literature
REPRESENTATION OF n-DIMENSIONAL CHEMICAL MANIFOLDS USING ELECTRIC NETWORKS. L. Peusner
1. Introduction: topological and geometric analysis of chemical processes
2. Basic geometric properties of n-dimensional metric manifolds
3. Representation as a network
4. Example for a two-dimensional system
5. Optimal paths
6. An example of using a chemical network for linear transitions between multiple states
7. Variational networks
Application: network analysis
Literature
LOGIC OF CHEMICAL IDEAS. P. Plath, E. Hass
1. Introduction
2. Topology of pericyclic reactions
3. Lattices of pericyclic reactions
4. Orthomodular and Boolean reactive four-dimensional lattices
5. Conclusions
Literature
MULTIDIMENSIONAL X-MODEL. GRAPH THEORY AND ALGEBRAIC APPROACH TO THE DESCRIPTION OF MECHANISMS OF COMPLEX CHEMICAL REACTIONS. E. Hass, P. Plath
1. Introduction
2. One-parameter X-model
3. Multivariate X-model
3.1. Reaction pathways for -cycloadditions
4. Conclusions
Literature
CLASSIFICATION OF THE MECHANISMS OF CHEMICAL REACTIONS FROM THE GEOMETRIC POINT OF VIEW. P. Sellers
1. Introduction
2. Milner's example
3. Mechanisms without cycles
4. Other mechanisms
5. Multiple total reactions
6. Conclusions
Literature
GRAPHS, POLYMER MODELS, EXCLUDED VOLUME AND CHEMICAL REALITY. D. Klein, W. Seitz
1. Introduction
2. Isolated linear circuits
3. Counting isomers
4. Conformations of branched polymers
5. Theory of scaling
6. Transfer matrices
7. Self-similarity and renormalization
8. Discussion
Literature
VOLUME FUNCTION FOR WATER BASED ON A RANDOM LATTICE SUBGRAPH MODEL. L. Quintas
1. Introduction and preliminary mathematical remarks
2. Model of random graphs for water
3. Volume function for water
4. Correspondence of V(p) with numerical data
5. Concluding remarks
Literature
TOPOLOGICAL ASPECTS OF ENZYME-SUBSTRATE RECOGNITION. S. Swaminathan
1. The problem of enzyme-substrate recognition
2. Edelstein-Rosen model
3. Method of phenomenological calculus
4. Hilbert space of description
5. Postulates for the dynamics of complex systems
6. Model of enzyme-substrate recognition
7. Concluding remarks
Literature
DYNAMICS OF FORMATION OF THE SECONDARY STRUCTURE OF RNA. H. Martinez
1. Introduction
2. Energy minimization methods
3. Modeling method
4. Conclusions
Literature
LISP PROGRAM FOR FUNCTIONAL-FRAGMENT REPRESENTATION OF MOLECULES AND THEIR GEOMETRY. K. Trindle, R. Givan
1. Introduction
2. Lisp is a non-numerical programming language
3. Representation of molecules using the Lisp language
4. Informal fragment recognition algorithm
5. Some special problems
6. Building a Distance Matrix Using the Fragment Databank
7. Factor analysis and Crippen's algorithm for determining geometry through distances
8. Conclusions and perspectives
Literature
Subject index

  • Specialty: 02.00.03 (VAK RF code)
  • Number of pages: 410

Doctoral dissertation (Doctor of Chemical Sciences) by Bonchev, Danail Georgiev

Half a century ago, Paul Dirac expressed the opinion that, in principle, all of chemistry is contained in the laws of quantum mechanics, but that insurmountable mathematical difficulties arise in practical calculations. Electronic computing has helped to close the gap between the possibilities of the quantum-mechanical approach and their implementation. Yet calculations for molecules with a large number of electrons are complex and not precise enough, and so far only a few molecular properties can be obtained in this way. On the other hand, in organic chemistry there are important structural problems that have not been fully resolved, above all the problem of the relationship between the structure and the properties of molecules. In theoretical chemistry there is the question of quantifying the basic structural characteristics of molecules, their branching and cyclicity. This question is essential, since a quantitative analysis of the general regularities in the structure of branched and cyclic molecules can to a large extent be transferred to their other properties. In this way it would be possible to predict the ordering of a group of isomeric compounds according to the values of properties such as stability, reactivity, and spectral and thermodynamic characteristics. This could facilitate the prediction of the properties of yet unsynthesized classes of compounds and the search for structures with predetermined properties. Despite significant efforts, the question of the rational coding of chemical information for its efficient storage and use by computers also remains open. An optimal solution of this problem would improve the classification and nomenclature of both organic compounds and the mechanisms of chemical reactions. The theory of the Periodic system of chemical elements likewise raises the question of a holistic and quantitative interpretation of the periodicity of the properties of the elements based on quantities that reflect the electronic structure better than the ordinal number of the element.

As a result, during the last decades, the development of new theoretical methods in chemistry, united under the name of mathematical chemistry, has been stimulated. The main place in it is occupied by topological methods, which reflect the most general structural and geometric properties of molecules. One of the branches of topology, graph theory, offers a convenient mathematical language for the chemist to describe the molecule, since structural formulas are essentially chemical graphs. The advantages offered by graph theory in chemical research are based on the possibility of direct application of its mathematical apparatus without the use of a computer, which is important for experimental chemists. Graph theory allows one to delve quite simply into the structural characteristics of molecules. The results obtained are general and can be formulated as theorems or rules and thus can be applied to any similar chemical (and non-chemical) objects.

After the publication of Shannon's and Wiener's fundamental works on information theory and cybernetics, interest in information-theoretic methods of research has steadily increased. The original meaning of the term "information" is associated with news, messages and their transmission. This concept quickly left the confines of communication theory and cybernetics and penetrated various sciences of animate and inanimate nature, society and cognition. The development of the information-theoretic approach in science is more complicated than a formal transfer of the cybernetic category of information to other areas of knowledge. The informational approach is not just a translation from less common languages into a metalanguage; it offers a different view of systems and phenomena and allows one to obtain new results. By expanding the connections between different scientific disciplines, this method makes it possible to find useful analogies and common patterns between them. As it develops, modern science tends toward an ever greater degree of generalization, toward unity. In this regard, information theory is one of the most promising areas.

An important place in this process is occupied by the application of information theory in chemistry and the other natural sciences: physics, biology, etc. In these sciences, the methods of information theory are used to study and describe those properties of objects and processes that are associated with the structure, orderliness and organization of systems. The usefulness of the information approach in chemistry lies primarily in the fact that it offers new possibilities for the quantitative analysis of various aspects of chemical structures: atoms, molecules, crystals, etc. In these cases the concepts of "structural information" and of the "information content" of atoms and molecules are used.

In connection with the foregoing, the main goal of the dissertation is to show the fruitfulness of the graph-theoretic and information-theoretic approach to structural problems in chemistry, from atoms and molecules to polymers and crystals. The achievement of this goal involves, as separate steps:

1. Definition of a system of quantities (information and topological indices) for the quantitative characterization of atoms, molecules, polymers and crystals.

2. Development on this basis of a new, more general approach to the question of the correlation between the properties of substances and their geometric and electronic structure. Prediction of the properties of some organic compounds, polymers, and not yet synthesized transactinide elements.

Creation of methods for modeling the growth of crystals and crystal vacancies.

3. Generalized topological characterization of molecules by expressing the essence of their branching and cyclicity in a series of mathematically proven structural rules, and study of how these rules are reflected in various molecular properties.

4. Creation of new, effective methods for coding chemical compounds and mechanisms of chemical reactions in connection with the improvement of their classification and nomenclature, and especially in connection with the use of computers for processing chemical information.

CHAPTER 2. RESEARCH METHODS. 2.1. THE INFORMATION-THEORETIC METHOD. 2.1.1. Introduction

Information is one of the most fundamental concepts in modern science, a concept no less general than the concepts of matter and energy. This view finds justification in the very definitions of information. According to Wiener, "information is neither matter nor energy".

Ashby views information as a "measure of diversity in a given system". According to Glushkov, information is a measure of the inhomogeneity of the distribution of matter and energy in space and time. On this basis it is increasingly recognized today that, in addition to their material and energy nature, objects and phenomena in nature and technology also have informational properties. Some forecasts go further, predicting that the focus of scientific research will increasingly shift towards the informational nature of processes, which will become the main object of research in the 21st century. These forecasts are essentially based on the possibility of optimal control of systems and processes through information, which is, in fact, the main function of information in cybernetics. In the future, these ideas may lead to the creation of technologies in which every atom and molecule is controlled by information, a possibility so far realized only in living nature.

The emergence of information theory is usually dated to 1948, when Claude Shannon published his fundamental work. The idea of information as a quantity related to entropy is, however, much older. Back in 1894, Boltzmann established that any information obtained about a given system is associated with a decrease in the number of its possible states, and therefore an increase in entropy means a "loss of information". In 1929 Szilard developed this idea for the general case of information in physics.

Later, Brillouin generalized the idea of the relationship between entropy and information in his negentropy principle, in a form that also covers the informational side of phenomena. Questions about the connection between information theory and thermodynamics, and in particular about the relationship between entropy and information, are still the subject of much attention (a detailed list of publications in this area is given in the review [58]). Among the latest developments, Kobozev's work on the thermodynamics of thinking should be especially noted, in which the thesis of the anti-entropic nature of thought processes is substantiated.

The theory of information that emerged as a "special theory of communication" quickly outgrew its original limits and found application in various fields of science and technology: in chemistry, biology, medicine, linguistics, psychology, aesthetics, etc. The role of information in biology was recognized first of all. Important problems were solved concerning the storage, processing and transmission of information in living organisms, including the coding of genetic information, the assessment of the possibility of the spontaneous generation of life on Earth, the formulation of the basic laws of biological thermodynamics, the analysis of bioenergetics, etc. The information content of objects was used as a quantitative criterion of evolution. The question was also raised about the informational nature of nutrition processes.

Information theory is still slowly penetrating into chemistry and physics, although some progress has been made in this area in recent years. The question was raised about the possible existence of an information balance of chemical reactions. An assessment was made of the information capacity of bioorganic molecules and, on this basis, a new classification of these compounds was proposed, and the specificity of chemical reactions was also assessed.

Levine, Bernstein and others have applied information theory to molecular dynamics to describe the behavior of molecular systems far from equilibrium. The essence of this approach is the concept of "surprisal", the deviation from what is expected on the basis of the microcanonical distribution. Various applications have been proposed, including the study of laser performance and the determination of the branching ratios of competing reaction paths (taking the path corresponding to the maximum of the Shannon function as the most probable), and so on.

Daudel et al. proposed to divide the space occupied by a molecular system into a number of mutually exclusive subspaces called loges. The best loges, containing localized groups of electrons, are found by minimizing the information function. Sears et al. found a connection between quantum-mechanical kinetic energy and informational quantities. As a consequence of this result, the variational principle of quantum mechanics can be formulated as a principle of minimum information.

Kobozev et al. related the selectivity and activity of catalysts to their information content. They also formulated optimal information conditions for the characterization and prediction of catalytic properties. The formation and growth of crystals have been considered as an information process. Rakov subjected to information analysis the treatment of catalyst surfaces with various chemical agents.

In modern analytical chemistry, there is an increasing trend towards optimal experimentation in order to obtain maximum information from a minimum number of experiments.

These new ideas are based on information theory, game theory and systems theory. Other authors have applied information theory to minimize analysis error and time, to achieve higher selectivity, to evaluate the effectiveness of analytical methods, and so on. Research of this kind also includes physical methods in analytical chemistry, including gas chromatography, atomic emission spectral analysis, etc.

Information-theoretic methods have also proved useful in geochemistry for characterizing the frequency distribution of chemical compounds in geochemical systems [170], for assessing the degree of complexity of these systems and for classifying them.

In engineering chemistry, information analysis can be used to solve such problems of chemical-technological systems as the choice of optimal operating conditions, the establishment of control requirements, etc. [101].

Examples of the successful application of information theory in chemistry indicate once again that systems in nature and technology have an informational aspect as well. They also show that the information approach acts as a universal language for describing systems and, in particular, chemical structures of any type, to which it assigns a certain information function and a numerical measure. This broadens the field of possible applications of information theory in chemistry.

The usefulness of the information approach in chemistry is primarily that it offers the possibility of quantitative analysis of various aspects of chemical structures. The degree of complexity of these structures, their organization and specificity can be compared on a single quantitative scale. This allows one to study some of the most general properties of chemical structures, such as their branching and cyclicity, to investigate and compare the degree of organization in various classes of chemical compounds, the specificity of biologically active substances and catalysts, and allows one to approach the question of the degree of similarity and difference between two chemical objects.

The informational approach is also well suited to solving particular classification problems. In these cases it is possible to derive general information equations for the main groupings of the objects being classified (groups and periods in the Periodic system of chemical elements, homologous series of chemical compounds, series of isomeric compounds, etc.).

The high discriminating ability of information methods with respect to complex structures (isomers, isotopes, etc.) can be used in the computerized processing and storage of chemical information. These methods are useful not only for choosing between different structures, but also between alternative hypotheses and approximations, which is of interest for quantum chemistry. The possibilities for putting forward new hypotheses on the basis of information theory are, however, more limited, since this theory describes the relationships between variables but does not describe the behavior of any of them.

The relationship that exists between structure and properties is another area of successful application of the information-theoretic approach in chemistry. The effectiveness of this approach is shown in the dissertation for qualitatively different structural levels in chemistry: the electron shells of atoms, molecules, polymers, crystals and even atomic nuclei. It can be implemented in both a qualitative and a quantitative aspect. In the first case, various structural rules can be defined on an informational basis, reflecting the mutual influence of two or more structural factors. It is also possible to obtain quantitative correlations between information indices and properties. In principle, information indices provide better correlations than other indices, since they reflect the features of chemical structures more fully. Successful correlations are possible not only with quantities directly related to entropy, but also with quantities such as binding energy, whose relationship with information is far from obvious. The properties involved are not only those of an individual molecule or atom, but also those of their large aggregates, i.e. properties that depend on the interactions between molecules and atoms and not only on their internal structure. In addition, processes in chemistry can also be the subject of information analysis, based on the changes in the information indices during interactions.

Some limitations of the informational approach should also be kept in mind. Although they are exact, quantitative measures of information are relative, not absolute. They are also statistical characteristics and refer to aggregates, not to their individual elements. Information indices can be defined for various properties of atoms and molecules, but the relationship between them is often complex and implicit.

On the other hand, the existence of several information indices for a single structure may cause mixed feelings. It should be remembered, however, that each of these indices is legitimate. The right question is which of these quantities are useful, and to what extent.

In this chapter, information-theoretic indices characterizing the electronic structure of atoms are introduced for the first time, as well as new information indices for the symmetry, topology and electronic structure of molecules. The application of these structural characteristics is discussed in Chapter III and in Sections IV.2 and V.1.

2.1.2. Necessary information from information theory

Information theory offers quantitative methods for studying the acquisition, storage, transmission, transformation and use of information. The central place in these methods is occupied by the quantitative measurement of information. Defining the concept of the amount of information requires rejecting the widespread but vague idea of information as an aggregate of facts, data and knowledge.

The concept of the amount of information is closely related to the concept of entropy as a measure of uncertainty. In 1928, Hartley characterized the uncertainty of an experiment with n different outcomes by the number log n. In Shannon's statistical theory of information, published in 1948, the amount of information is defined through the concept of probability. This concept describes a situation in which there is uncertainty associated with the choice of one or more elements (outcomes) from a certain set. Following Shannon, the measure of the uncertainty of the outcome Xᵢ of experiment X with probability p(Xᵢ) is −log p(Xᵢ). A measure of the average uncertainty of the complete experiment X with possible outcomes X₁, X₂, ..., Xₙ and probabilities p(X₁), p(X₂), ..., p(Xₙ), respectively, is the quantity

H(X) = −Σᵢ p(Xᵢ) log p(Xᵢ)     (2.1)

In statistical information theory, H(X) is called the entropy of the probability distribution. In the case of n different outcomes, these probabilities form a finite probabilistic scheme.

The concept of probability can be defined in a more general way from the point of view of set theory. Let a finite set R be partitioned by some equivalence relation X into m classes X₁, X₂, ..., Xₘ, which are disjoint subsets. The set of equivalence classes

R/X = {X₁, X₂, ..., Xₘ}     (2.2)

is called the quotient (factor) set of R with respect to X.

Kolmogorov's probability function (probabilistic correspondence) p is subject to three conditions: non-negativity, normalization to unity, and additivity over disjoint sets.

The set of numbers p(X₁), p(X₂), ..., p(Xₘ) is called the distribution of the partition, and the Shannon function H(X) from Eq. (2.1) expresses the entropy of the partition X.

It should be borne in mind that the concept of entropy in information theory is more general than thermodynamic entropy. The latter, considered as a measure of disorder in atomic-molecular motions, is a special case of the more general concept of entropy as a measure of any disorder, uncertainty or diversity.

The amount of information is expressed by the amount of removed uncertainty. The average entropy of a given experiment with many possible outcomes is thus equal to the average amount of information needed to select any outcome Xᵢ from the set X₁, X₂, ..., Xₙ, and is determined by the Shannon formula (cf. Eq. 2.1):

I(X) = −K Σᵢ p(Xᵢ) log p(Xᵢ) = H(X)     (2.4)

Here K is a positive constant that determines the units in which information is measured. With K = 1 and decimal logarithms, entropy (and, correspondingly, information) is measured in decimal units. The most convenient system of units is based on binary units (bits); in this case the logarithm in Eq. (2.4) is taken to base two and is denoted lb for short. One binary unit of information (1 bit) is the amount of information obtained when the result of a choice between two equally probable possibilities becomes known. For conversion to units of thermodynamic entropy, the conversion factor is the Boltzmann constant (1.38·10⁻¹⁶ erg/deg) divided by lb e.

It can be proved that the choice of the logarithmic function for the amount of information is not arbitrary: it is the only function that satisfies the conditions of additivity and non-negativity of information.

Both individual and average information are always non-negative. This property is related to the fact that a probability never exceeds one, while the constant in equation (2.4) is always taken positive. Since p(Xᵢ) ≤ 1,

−lb p(Xᵢ) ≥ 0     (2.5)

and this inequality is preserved after averaging.

The average amount of information for a given event (experiment) X reaches a maximum for a uniform distribution of probabilities, p(X₁) = p(X₂) = ... = p(Xₙ), i.e. for p(Xᵢ) = 1/n for any i; in this case I(X) equals lb n.

For a pair of dependent random events X and Y, the average amount of information is also expressed by the Shannon formula:

I(XY) = −Σᵢ,ⱼ p(Xᵢ, Yⱼ) lb p(Xᵢ, Yⱼ)     (2.7)

Equation (2.7) can be generalized for any finite set, regardless of the nature of its elements:

I(XY) = −Σᵢ Σⱼ p(Xᵢ ∩ Yⱼ) lb p(Xᵢ ∩ Yⱼ)     (2.8)

where R/X and R/Y are two quotient sets of R with respect to two different equivalence relations X and Y, and R/XY is the quotient set formed by the intersections Xᵢ ∩ Yⱼ.

Writing the joint probability in equation (2.7) as the product of unconditional and conditional probabilities, p(Xᵢ, Yⱼ) = p(Xᵢ)·p(Yⱼ/Xᵢ), and representing the logarithm as a sum, one obtains the equation:

I(XY) = I(X) + I(Y/X)     (2.9)

where I(Y/X) is the average amount of conditional information contained in Y relative to X, given by:

I(Y/X) = −Σᵢ,ⱼ p(Xᵢ, Yⱼ) lb p(Yⱼ/Xᵢ)     (2.10)

Defining a function:

I(X,Y) = I(Y) − I(Y/X)     (2.11)

and substituting it into equation (2.9):

I(XY) = I(X) + I(Y) − I(X,Y)     (2.12)

it becomes clear that I(X,Y) expresses the deviation of the information about the complex event (X,Y) from additivity of the information about the individual events X and Y. Therefore, I(X,Y) is a measure of the degree of statistical dependence (connection) between X and Y. This relationship between X and Y is symmetrical.

In the general case, for a statistical relationship between X and Y, the measure of dependence I(X,Y) and the average amounts of unconditional information I(X) and I(Y) satisfy the inequalities 0 ≤ I(X,Y) ≤ I(X) and I(X,Y) ≤ I(Y).

Equality is reached when the second term in equation (2.11) is zero, i.e. when to each i there corresponds a j for which p(Yⱼ/Xᵢ) = 1.

If the quantities X and Y are independent, i.e. if in equation (2.12) I(X,Y) = 0, then

I(XY) = I(X) + I(Y)     (2.15)

This equation expresses the additivity of the amount of information and generalizes to n independent random variables:

I(X₁X₂...Xₙ) = Σᵢ I(Xᵢ)     (2.16)

Non-probabilistic approaches to the quantitative determination of information are also known. Ingarden and Urbanik proposed an axiomatic definition of information without probabilities, as a function on finite Boolean rings. Of considerable interest are the epsilon-entropy (combinatorial approach) proposed by Kolmogorov and, especially, the algorithmic definition of the amount of information. According to Kolmogorov, the amount of information contained in one object (set) relative to another object (set) is the minimum length of a program, written as a sequence of zeros and ones, that allows a one-to-one transformation of the first object into the second:

I(X/Y) = min l(P)     (2.17)

Kolmogorov's algorithmic approach offers new logical foundations for information theory, based on the concepts of complexity and sequence; the concepts of "entropy" and "amount of information" thereby turn out to be applicable to individual objects.

Non-probabilistic methods in information theory broaden the content of the concept of the amount of information from the amount of removed uncertainty to the amount of removed uniformity, or to the amount of diversity, in accordance with Ashby's interpretation. Any set of probabilities normalized to unity can be regarded as corresponding to a certain set of elements that possess diversity. Diversity is understood as a characteristic of the elements of a set which consists in their difference, their non-coincidence with respect to some equivalence relation. This can be a set of different elements, relationships, relations or properties of objects. The smallest unit of information, the bit, in this approach expresses the minimum difference, i.e. the difference between two objects that are not identical, that differ in some property.

In this respect, information-theoretic methods are applicable to the definition of so-called structural information: the amount of information contained in the structure of a given system. A structure here means any finite set whose elements are distributed over subsets (equivalence classes) according to a certain equivalence relation.

Let this structure contain N elements distributed, according to some equivalence criterion, into subsets of equivalent elements containing N₁, N₂, ..., Nₘ elements. This distribution corresponds to a finite probabilistic scheme with subset probabilities p₁, p₂, ..., pₘ:

pᵢ = Nᵢ/N     (2.18)

where pᵢ is the probability that one (randomly) chosen element falls into the i-th subset, which contains Nᵢ elements. The entropy H of the probability distribution of the elements of this structure, determined by equation (2.4), can be considered as a measure of the average amount of information, Ī, contained in one element of the structure:

Ī = −Σᵢ pᵢ lb pᵢ, bits per element     (2.19)

The total information content of the structure follows from equation (2.19):

I = N·Ī = N lb N − Σᵢ Nᵢ lb Nᵢ, bits     (2.20)

There is no consensus in the literature on how to name the quantities defined by Eqs. (2.19) and (2.20). Some authors prefer to call them the mean and the total information content, respectively. Thus, according to Mowshowitz, I is not a measure of entropy in the sense in which entropy is used in information theory, since it does not express the average uncertainty of a structure of N elements within an ensemble of all possible structures having the same number of elements. I is rather the information content of the structure under consideration with respect to the system of transformations that leave the system invariant. According to Rem, I from equation (2.4) measures the amount of information after the experiment, while before it H(X) is a measure of the entropy associated with the uncertainty of the experiment. In our opinion, the "experiment" that reduces the uncertainty of chemical structures (atoms, molecules, etc.) is the very process of formation of these structures from their unbound elements. The information here is in a bound form, it is contained in the structure, and therefore the term "information content" of the structure is often used.

The concept of structural information based on this interpretation of equations (2.19) and (2.20) agrees well with Ashby's idea of the amount of information as the amount of diversity. When a system consists of identical elements, there is no diversity in it. In this case m = 1, p₁ = 1, and equations (2.19) and (2.20) give Ī = I = 0.

With the maximum diversity of the elements in the structure, every Nᵢ = 1, and the information content of the structure is maximal:

Īₘₐₓ = lb N,  Iₘₐₓ = N lb N

2.1.3. Information-theoretical indices for characterizing the electronic structure of atoms of chemical elements


E. Babaev.  Candidate of Chemical Sciences.

      Speaking of the mathematization of science, people most often mean only the purely pragmatic use of computational methods, forgetting the apt remark of A. A. Lyubishchev that mathematics is not so much a servant as the queen of all sciences. It is the level of mathematization that places a given science in the category of the exact sciences, if by this we mean not the use of precise quantitative estimates but a high level of abstraction, the freedom to operate with concepts from the categories of non-numerical mathematics.
      Among the methods of such qualitative mathematics that have found effective application in chemistry, the main role belongs to sets, groups, algebras, topological constructions and, above all, graphs, the most general means of representing chemical structures.

Take, for example, four points arbitrarily located on a plane or in space, and connect them with three lines. No matter how these points (called vertices) are located and no matter how they are connected by lines (called edges), we obtain only two possible graph structures, differing in the mutual arrangement of the connections: one graph resembling the letters "П" or "I", and another resembling the letters "T", "E" or "U". If instead of four abstract points we take four carbon atoms, and instead of the lines chemical bonds between them, these two graphs correspond to the two possible isomers of butane: the normal and the iso structure.
      What is the reason for the growing interest of chemists in graph theory, this bizarre, but very simple language of dots and dashes?
      A graph has the remarkable property that it remains unchanged under any deformations of the structure that are not accompanied by breaking the links between its elements. The graph structure can be distorted, completely depriving it of symmetry in the usual sense; nevertheless the graph retains its symmetry in the topological sense, determined by the sameness and interchangeability of its terminal vertices. Taking this hidden symmetry into account, one can, for example, predict the number of different isomeric amines obtained from the structures of butane and isobutane by replacing carbon atoms with nitrogen atoms. Graphs also make it possible, using simple physical considerations, to understand regularities of the "structure-property" type.
      Another, somewhat unexpected idea is to express the structural properties of graphs by numbers (for example, their degree of branching). Intuitively we feel that isobutane is more branched than normal butane; quantitatively this can be expressed, say, by the fact that the structural fragment of propane is repeated three times in the isobutane molecule and only twice in normal butane. This structural number (called the Wiener topological index) correlates surprisingly well with such characteristics of saturated hydrocarbons as boiling point or heat of combustion. Recently a kind of fashion has appeared for inventing various topological indices; there are already more than twenty of them, and their alluring simplicity makes this Pythagorean method more and more popular*.
      The use of graph theory in chemistry is not limited to the structure of molecules. Back in the thirties, A. A. Balandin, one of the forerunners of modern mathematical chemistry, proclaimed the principle of isomorphic substitution, according to which the same graph carries uniform information about the properties of the most heterogeneous structured objects; it is only important to clearly define which elements are chosen as vertices and which relations between them will be expressed by edges. So, in addition to atoms and bonds, phases and components, isomers and reactions, macromolecules and interactions between them can be chosen as vertices and edges. One can notice a deep topological relationship between the Gibbs phase rule, Horiuchi's stoichiometric rule, and the rational classification of organic compounds according to their degree of unsaturation. With the help of graphs, interactions between elementary particles, crystal fusion, cell division are successfully described... In this sense, graph theory serves as a visual, almost universal language of interdisciplinary communication.

The development of each scientific idea traditionally goes through the stages: article, review, monograph, textbook. The inflorescence of ideas called mathematical chemistry has already passed the review stage, although it has not yet reached the status of an academic discipline. Owing to the diversity of directions, collections are now the main form of publication in this area; several such collections appeared in 1987-1988.
      The first collection, edited by R. King, "Chemical Applications of Topology and Graph Theory" (M., "Mir", 1987), contains translations of the reports of an international symposium with the participation of chemists and mathematicians from different countries. The book gives a complete picture of the colorful palette of approaches that have emerged at the intersection of graph theory and chemistry. It touches on a very wide range of issues, from the algebraic structure of quantum chemistry and stereochemistry and the magic rules of electron counting to the structure of polymers and the theory of solutions. Organic chemists will undoubtedly be attracted by a new strategy for the synthesis of molecular knots such as the trefoil, an experimental realization of the idea of a molecular Möbius strip. Of particular interest are the review articles on the use of the above-mentioned topological indices for estimating and predicting a wide variety of properties, up to and including the biological activity of molecules.
      The translation of this book is also useful in that the issues raised in it may help to remove a number of debatable problems in the field of methodology of chemical science. Thus, the rejection by some chemists in the 50s of the mathematical symbolism of resonance formulas was replaced in the 70s by the rejection by individual physicists of the very concept of chemical structure. In the framework of mathematical chemistry, such contradictions can be eliminated, for example, with the help of a combinatorial-topological description of both classical and quantum-chemical systems.
      Although the works of Soviet scientists are not presented in this collection, it is gratifying to note the increased interest in the problems of mathematical chemistry in domestic science. An example is the first workshop "Molecular graphs in chemical research" (Odessa, 1987), which brought together about a hundred specialists from all over the country. Compared with foreign studies, domestic works are distinguished by a more pronounced applied nature, focus on solving problems of computer synthesis, creating various data banks. Despite the high level of reports, the meeting noted an unacceptable backlog in the training of specialists in mathematical chemistry. Only at Moscow and Novosibirsk universities are occasional courses given on its individual issues. At the same time, it is time to seriously raise the question what kind of mathematics should chemistry students study? After all, even in the university mathematical programs of chemical departments, such sections as the theory of groups, combinatorial methods, the theory of graphs, topology are practically not represented; in turn, university mathematicians do not study chemistry at all. In addition to the problem of education, the issue of scientific communications is acute: an all-Union journal on mathematical chemistry is needed, which is published at least once a year. The journal "MATCH" (Mathematical Chemistry) has been published abroad for many years, and our publications are scattered among collections and various periodicals.

Until recently, the Soviet reader could become acquainted with mathematical chemistry only through V. I. Sokolov's book "Introduction to Theoretical Stereochemistry" (M.: Nauka, 1979) and a book by I. S. … (1977). Partially filling this gap, the Siberian branch of the publishing house "Nauka" published last year the book "Application of Graph Theory in Chemistry" (edited by N. S. Zefirov and S. I. Kuchanov). The book consists of three sections: the first deals with the use of graph theory in structural chemistry; the second deals with reaction graphs; the third shows how graphs can help solve many traditional problems in the chemical physics of polymers. Of course, this book is not yet a textbook (much of what is discussed consists of original results of the authors); nevertheless, the first part of the collection can be fully recommended for an initial acquaintance with the subject.
      Another collection, the proceedings of the seminar of the Faculty of Chemistry of Moscow State University "Principles of Symmetry and Consistency in Chemistry" (edited by N. F. Stepanov), was released in 1987. The main topic of the collection is group-theoretic, graph-theoretic and system-theoretic methods in chemistry. The range of questions discussed is unconventional, and the answers to them are even less standard. The reader will learn, for example, about the reasons for the three-dimensionality of space, about a possible mechanism for the emergence of dissymmetry in living nature, about the principles for constructing a periodic system of molecules, about the symmetry planes of chemical reactions, about describing molecular shapes without using geometric parameters, and much more. Unfortunately, the book can be found only in scientific libraries, since it was not widely available for sale.
      Since we are talking about the principles of symmetry and consistency in science, one cannot fail to mention another unusual book, "System. Symmetry. Harmony" (M.: Mysl, 1988). This book is devoted to one of the versions of the so-called general systems theory (GST) proposed and developed by Yu.A. Urmantsev. The starting points of Urmantsev's GST are the concepts of system and chaos, polymorphism and isomorphism, symmetry and asymmetry, as well as harmony and disharmony.
      It seems that Urmantsev's theory should attract the closest attention of chemists, if only because in it the traditional chemical concepts of composition, isomerism and dissymmetry are elevated to the rank of system-wide ones. In the book one can find striking symmetry analogies, for example between isomers of leaves and molecular structures**. Of course, when reading the book a certain professional detachment is needed in places, say, when it comes to chemical-musical parallels or the justification of a mirror-symmetric system of elements. Nevertheless, the book is permeated by a central idea: to find a universal language expressing the unity of the universe, akin, perhaps, to the Castalian language of Hermann Hesse's "Glass Bead Game".
Speaking about the mathematical constructions of modern chemistry, one cannot pass over the wonderful book by A.F. Bochkov and V.A. Smith, "Organic Synthesis" (M.: Nauka, 1987). Although its authors are "pure" chemists, a number of ideas discussed in the book are very close to the problems raised above. Without dwelling on the brilliant form of presentation and the depth of content of this book, after reading which one wants to do organic synthesis, we emphasize only two points. First, considering organic chemistry through the prism of its contribution to world science and culture, the authors draw a clear parallel between chemistry and mathematics as universal sciences that draw the objects and problems of their research from within themselves. In other words, to the traditional status of mathematics as the queen and servant of chemistry one can add the peculiar hypostasis of her sister. Second, in convincing the reader that organic synthesis is an exact science, the authors appeal both to the accuracy and rigor of structural chemistry itself and to the perfection of the logic of chemical ideas.
      If experimenters say so, can there be any doubt that the hour of mathematical chemistry has struck?

________________________
  * See "Chemistry and Life", 1988, No. 7, p.22.
** See "Chemistry and Life", 1989, No. 2.

MUNICIPAL AUTONOMOUS GENERAL EDUCATIONAL INSTITUTION SECONDARY EDUCATIONAL SCHOOL № 2

Prepared by

Legkokonets Vladislav, 10A student

Practical application of Graph Theory

Supervisor

L.I. Noskova, mathematics teacher

st.Bryukhovetskaya

2011

1. Introduction
2. The history of the emergence of graph theory
3. Basic definitions and theorems of graph theory
4. Problems solved using graphs
   4.1 Famous problems
   4.2 Some interesting problems
5. Application of graphs in various areas of people's lives
6. Problem solving
7. Conclusion
8. List of references
9. Appendix

Introduction

Born from solving puzzles and entertaining games, graph theory has now become a simple, accessible and powerful tool for solving questions in a wide range of problem areas. Graphs are literally ubiquitous. In the form of graphs one can interpret, for example, road maps and electrical circuits, geographical maps and molecules of chemical compounds, relations between people and groups of people. Over the past four decades graph theory has become one of the most rapidly developing branches of mathematics. This is driven by the demands of a rapidly expanding field of applications. It is used in the design of integrated circuits and control systems, in the study of automata, logical circuits and program flowcharts, in economics and statistics, in chemistry and biology, and in scheduling theory. The relevance of the topic is therefore due, on the one hand, to the popularity of graphs and the related research methods and, on the other hand, to the lack of a developed, integrated system for putting them into practice.

Solving many real-life problems requires long calculations, and sometimes even these calculations do not bring success. This is the research problem. The question arises: is it possible to find a simple, rational, short and elegant way to solve them? Is it easier to solve problems if one uses graphs? This determined the topic of my research: "Practical Application of Graph Theory".

The aim of the research was to learn how to solve practical problems quickly with the help of graphs.

Research hypothesis. The graph method is very important and widely used in various fields of science and human life.

Research objectives:

1. To study the literature and Internet resources on this issue.

2. Check the effectiveness of the graph method in solving practical problems.

3. Make a conclusion.

The practical significance of the study is that the results will undoubtedly arouse the interest of many people. Has none of you ever tried to build your own family tree? And how does one do it correctly? The head of a transport company has to decide how to use vehicles more profitably when delivering goods from one point to several settlements. Every student has come across logical pouring problems. It turns out that they are easily solved with the help of graphs.

The following methods are used in the work: observation, search, selection, analysis.

The history of the emergence of graph theory

The mathematician Leonhard Euler (1707-1783) is considered to be the founder of graph theory. The history of the emergence of this theory can be traced through the correspondence of the great scientist. Here is a translation of the Latin text, which is taken from Euler's letter to the Italian mathematician and engineer Marinoni, sent from St. Petersburg on March 13, 1736.

“Once I was given a problem about an island located in the city of Königsberg and surrounded by a river spanned by seven bridges.

[Appendix Fig.1] The question is whether anyone can walk around them continuously, passing only once over each bridge. And then I was informed that no one had yet been able to do this, but no one had proved that it was impossible. This question, although banal, seemed to me, however, worthy of attention, because neither geometry, nor algebra, nor combinatorial art is sufficient for its solution. After much deliberation I found an easy rule, based on a quite convincing proof, by which in all problems of this kind one can immediately determine whether such a walk can be made over any number of bridges located in any way or whether it cannot. The Königsberg bridges are located so that they can be represented in the following figure [Appendix Fig.2], where A denotes the island, and B, C and D the parts of the mainland separated from each other by the branches of the river.”

Regarding the method he discovered for solving problems of this kind, Euler wrote:

"This solution, by its nature, seems to have little to do with mathematics, and it is not clear to me why this solution should be expected from a mathematician rather than from any other person, for it rests on reason alone, and there is no need to invoke any laws inherent in mathematics in order to find it. So I do not know how it turns out that questions which have very little to do with mathematics are more likely to be resolved by mathematicians than by others."

So is it possible to get around the Königsberg bridges, passing only once over each of them? To find the answer, let us continue Euler's letter to Marinoni:

"The question is to determine whether it is possible to go around all these seven bridges, passing over each only once, or not. My rule leads to the following solution of this question. First of all, one must see how many sections there are which are separated by water and which have no other passage from one to another except over a bridge. In this example there are four such sections: A, B, C, D. Next, one must distinguish whether the number of bridges leading to each of these sections is even or odd. Thus, in our case, five bridges lead to section A and three to each of the others, i.e. the number of bridges leading to each individual section is odd, and this alone is already enough to solve the problem. Once this has been determined, we apply the following rule: if the number of bridges leading to each individual section were even, then the walk in question would be possible, and it could be started from any section. If, however, this number were odd for exactly two sections (since it cannot be odd for only one), then even then the walk could be made as prescribed, but the beginning of the walk would certainly have to be taken from one of those two sections to which an odd number of bridges leads. If, finally, there were more than two sections to which an odd number of bridges leads, then such a walk is altogether impossible... If other, more serious problems could be cited here, this method could be even more useful and should not be neglected."

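Euler's rule is easy to check mechanically. Below is a minimal sketch in Python; the multigraph of bridges follows the usual Königsberg picture (two bridges from the island A to each of B and C, and one bridge each for A-D, B-D and C-D), which gives exactly the degrees named in the letter: five at A and three at each of B, C and D.

```python
from collections import Counter

# The seven Königsberg bridges as a multigraph: each pair is one bridge
# between two land sections (A is the island; B, C, D are parts of the mainland).
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = sorted(s for s, d in degree.items() if d % 2 == 1)

# Euler's rule: no odd sections  -> a closed walk from anywhere;
# exactly two odd sections       -> an open walk starting at one of them;
# more than two odd sections     -> no such walk at all.
if not odd:
    print("A closed walk over every bridge exactly once is possible from any section.")
elif len(odd) == 2:
    print("An open walk is possible, but it must start at", odd[0], "or", odd[1])
else:
    print("No such walk exists:", len(odd), "sections have an odd number of bridges.")
```

For the Königsberg layout all four sections turn out to be odd, so the sketch reports that no such walk exists, in agreement with Euler's conclusion.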
Basic definitions and theorems of graph theory

Graph theory is a mathematical discipline created by the efforts of mathematicians, so its presentation includes the necessary rigorous definitions. So, let's proceed to the organized introduction of the basic concepts of this theory.

    Definition 1. A graph is a collection of a finite number of points, called the vertices of the graph, together with lines pairwise connecting some of these vertices, called the edges or arcs of the graph.

This definition can be formulated differently: a graph is a non-empty set of points (vertices) and segments (edges), both ends of which belong to a given set of points

In the future, we will denote the vertices of the graph in Latin letters A, B, C, D. Sometimes the graph as a whole will be denoted by a single capital letter.

Definition 2. The vertices of the graph that do not belong to any edge are called isolated.

Definition 3. A graph consisting only of isolated vertices is called a null graph.

Notation: O_n is a graph with n vertices and no edges.

Definition 4. A graph in which each pair of vertices is connected by an edge is called complete.

Notation: U_n is a graph consisting of n vertices and the edges connecting all possible pairs of these vertices. Such a graph can be represented as an n-gon in which all the diagonals are drawn.

Definition 5. The degree of a vertex is the number of edges that the vertex belongs to.

Definition 6. A graph in which all vertices have the same degree k is called a homogeneous (regular) graph of degree k.

Definition 7. The complement of a given graph is a graph consisting of all the edges and their ends that must be added to the original graph in order to obtain a complete graph.

Definition 8. A graph that can be represented in a plane in such a way that its edges intersect only at the vertices is called planar.

Definition 9. A polygon of a planar graph that does not contain any vertices or edges of the graph inside is called its face.

The concepts of a plane graph and graph faces are used in solving problems for the "correct" coloring of various maps.

Definition 10. A path from A to X is a sequence of edges leading from A to X such that every two adjacent edges have a common vertex and no edge occurs more than once.

Definition 11. A cycle is a path where the start and end points are the same.

Definition 12. A simple cycle is a cycle that does not pass through any of the vertices of the graph more than once.

Definition 13. The length of a path (or cycle) is the number of edges in this path (cycle).

Definition 14. Two vertices A and B in a graph are called connected (disconnected) if there exists (does not exist) a path leading from A to B in it.

Definition 15. A graph is called connected if every two of its vertices are connected; if the graph contains at least one pair of disconnected vertices, then the graph is called disconnected.

Definition 16. A tree is a connected graph that does not contain cycles.

A three-dimensional model of a graph-tree is, for example, a real tree with its intricately branched crown; the river and its tributaries also form a tree, but already flat - on the surface of the earth.

Definition 17. A disconnected graph consisting solely of trees is called a forest.

Definition 18. A tree, all n vertices of which are numbered from 1 to n, is called a tree with numbered vertices.
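To make these definitions more tangible, here is a minimal sketch in Python (the graph below is a hypothetical example, not one taken from the text) that computes vertex degrees (Definition 5), checks connectedness (Definition 15) and tests whether the graph is a tree (Definition 16).

```python
# A hypothetical graph on vertices A-E, given as an adjacency list (undirected).
graph = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
    "E": [],            # an isolated vertex (Definition 2)
}

# Degree of a vertex (Definition 5): the number of edges it belongs to.
degrees = {v: len(nbrs) for v, nbrs in graph.items()}

# Connectedness (Definition 15): every vertex is reachable from every other.
def is_connected(g):
    start = next(iter(g))
    seen, stack = {start}, [start]
    while stack:
        for w in g[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(g)

# A tree (Definition 16) is a connected graph without cycles;
# equivalently, a connected graph with exactly n - 1 edges.
def is_tree(g):
    n_edges = sum(len(nbrs) for nbrs in g.values()) // 2
    return is_connected(g) and n_edges == len(g) - 1

print(degrees)              # {'A': 2, 'B': 2, 'C': 3, 'D': 1, 'E': 0}
print(is_connected(graph))  # False, because E is isolated
print(is_tree(graph))       # False
```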

So, we have considered the main definitions of graph theory, without which it would be impossible to prove theorems, and, consequently, to solve problems.

Problems solved using graphs

Famous problems

The traveling salesman problem

The traveling salesman problem is one of the famous problems of combinatorics. It was posed in 1934, and the best mathematicians have broken their teeth on it.

The problem statement is the following.
A traveling salesman (traveling merchant) must leave the first city, visit cities 2, 3, ..., n exactly once each, in some order, and return to the first city. The distances between the cities are known. In what order should he visit the cities so that the closed route (tour) of the traveling salesman is the shortest?

Method for solving the traveling salesman problem

The greedy algorithm: “go to the nearest city (which you have not yet visited).”
This algorithm is called “greedy” because at the last steps you have to pay dearly for greed.
Consider, for example, the network in the figure [app fig.3] representing a narrow rhombus. Let the salesman start from city 1. The “go to the nearest city” algorithm takes him to city 2, then to 3, then to 4; at the last step he has to pay for his greed, returning along the long diagonal of the rhombus. The result is not the shortest but the longest tour.
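As an illustration, here is a minimal Python sketch of the nearest-neighbour heuristic on a hypothetical "narrow rhombus" of four cities; the coordinates are assumed for the sake of the example and are not taken from the figure in the appendix.

```python
from itertools import permutations
from math import dist

# Hypothetical coordinates of a narrow rhombus; 1 and 4 are the tips of the long diagonal.
cities = {1: (0.0, 0.0), 2: (5.0, 0.4), 3: (5.0, -0.5), 4: (10.0, 0.0)}

def tour_length(order):
    route = list(order) + [order[0]]              # close the tour back to the start
    return sum(dist(cities[a], cities[b]) for a, b in zip(route, route[1:]))

def greedy_tour(start=1):
    tour, left = [start], set(cities) - {start}
    while left:                                   # always go to the nearest unvisited city
        nxt = min(left, key=lambda c: dist(cities[tour[-1]], cities[c]))
        tour.append(nxt)
        left.remove(nxt)
    return tour

greedy = greedy_tour()                             # 1 -> 2 -> 3 -> 4, then back along the long diagonal
best = min(permutations(cities), key=tour_length)  # exhaustive search is fine for 4 cities
print(greedy, round(tour_length(greedy), 2))       # about 20.9 for these coordinates
print(list(best), round(tour_length(best), 2))     # about 20.1 for these coordinates
```

On this layout the greedy route comes out noticeably longer than the best one, which is exactly the effect described above.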

The problem of the Königsberg bridges.

The task is formulated as follows.
The city of Königsberg is located on the banks of the Pregel River and on two islands. Different parts of the city were connected by seven bridges. On Sundays the townspeople took walks around the city. The question: is it possible to take a walk in such a way that, having left home, one returns home having passed over each bridge exactly once?
Bridges over the Pregel river are located as in the picture
[Appendix Fig.1].

Consider a graph corresponding to the bridge scheme [appendix fig.2].

To answer the question of the problem, it is enough to find out whether the graph is Eulerian: a closed walk passing over every bridge exactly once exists only if every vertex has an even degree. Here all four vertices have odd degree, so it is impossible, walking around the city, to cross every bridge exactly once and return to the starting point.

Some interesting problems

1. "Routes".

Task 1

As you remember, Chichikov, the hunter for dead souls, visited each of the famous landowners exactly once. He visited them in the following order: Manilov, Korobochka, Nozdrev, Sobakevich, Plyushkin, Tentetnikov, General Betrishchev, Petukh, Kostanzhoglo, Colonel Koshkarev. A diagram was found on which Chichikov sketched the relative positions of the estates and the country roads connecting them. Determine which estate belongs to whom, given that Chichikov did not travel along any road more than once [appendix fig.4].

Solution:

From the road map it can be seen that Chichikov began his journey at estate E and ended it at estate O. We notice that only two roads lead to estates B and C, so Chichikov had to travel along these roads. Let us mark them with bold lines. The sections of the route passing through A are then determined: AC and AB. Chichikov did not travel along the roads AE, AK and AM; let us cross them out. Let us mark ED with a bold line and cross out DK. Cross out MO and MN; mark MF with a bold line; cross out FO; mark FH, NK and KO with bold lines. We thus find the only route possible under the given condition, and we get: estate E belongs to Manilov, D to Korobochka, C to Nozdrev, A to Sobakevich, V to Plyushkin, M to Tentetnikov, F to Betrishchev, N to Petukh, K to Kostanzhoglo, O to Koshkarev [app fig.5].

Task 2

The figure shows a map of the area [appendix fig.6].

You can only move in the direction of the arrows, and each point may be visited no more than once. In how many ways can you get from point 1 to point 9? Which route is the shortest and which is the longest?

Solution:

Sequentially "stratify" the scheme into a tree, starting from vertex 1 [app fig.7]. Let's get a tree. The number of possible ways to get from 1 to 9 is equal to the number of "hanging" vertices of the tree (there are 14 of them). Obviously the shortest path is 1-5-9; the longest is 1-2-3-6-5-7-8-9.
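The same counting can be done by a depth-first search that collects every arrow-respecting route. The adjacency list below is only a hypothetical example of such a map (the real arrows are in the appendix figure), so its route count differs from the 14 obtained above; the procedure itself is the point.

```python
# Hypothetical directed "map of the area"; the real arrows are in the appendix figure,
# so this adjacency list is only an assumed example of the same kind of diagram.
arrows = {
    1: [2, 5],
    2: [3, 5],
    3: [6],
    5: [7, 9],
    6: [5, 9],
    7: [8],
    8: [9],
    9: [],
}

def all_paths(v, target, path=None):
    """Return every directed path from v to target, visiting no point twice."""
    path = (path or []) + [v]
    if v == target:
        return [path]
    routes = []
    for w in arrows[v]:
        if w not in path:
            routes.extend(all_paths(w, target, path))
    return routes

paths = all_paths(1, 9)
print(len(paths))           # number of routes in this example graph
print(min(paths, key=len))  # a shortest route
print(max(paths, key=len))  # a longest route
```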

2 "Groups, dating"

Task 1

Participants of the music festival, having met, exchanged envelopes with addresses. Prove that:

a) an even number of envelopes were sent in total;

b) the number of participants who exchanged envelopes an odd number of times is even.

Solution: let the festival participants A1, A2, ..., An be the vertices of the graph, and let the edges connect the pairs of vertices representing participants who exchanged envelopes [Appendix Fig.8].


a) the degree of each vertex Ai shows the number of envelopes that participant Ai gave to the others. The total number of envelopes handed over is N = deg A1 + deg A2 + ... + deg An = 2p, where p is the number of edges of the graph; hence N is even. Therefore an even number of envelopes were sent;

b) in the equality N = deg A1 + deg A2 + ... + deg An the left-hand side is even, and the sum of the even terms on the right is also even; hence the sum of the odd terms must be even, which is possible only if the number of odd terms is even. This means that the number of participants who exchanged envelopes an odd number of times is even.
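Both statements follow from the handshake lemma (the sum of the degrees equals twice the number of edges) and therefore hold for any graph, not just this one. A minimal Python sketch checking them on a hypothetical random "exchange" graph:

```python
from itertools import combinations
import random

# A hypothetical exchange graph on 8 participants: each pair exchanged envelopes
# with probability 1/2 (made-up data, just to check the two statements).
random.seed(1)
people = range(8)
edges = [pair for pair in combinations(people, 2) if random.random() < 0.5]

degree = {p: 0 for p in people}
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

total_envelopes = sum(degree.values())
odd_vertices = [p for p in people if degree[p] % 2 == 1]

assert total_envelopes == 2 * len(edges)   # part (a): the total is even
assert len(odd_vertices) % 2 == 0          # part (b): odd-degree vertices come in an even number
print(total_envelopes, len(edges), len(odd_vertices))
```

Both assertions hold whatever random graph is generated, which is exactly the content of parts (a) and (b).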

Task 2

Once Andrei, Boris, Volodya, Dasha and Galya agreed to go to the cinema in the evening. They decided to agree on the choice of the cinema and the session by phone. It was also decided that if it was not possible to phone someone, then the trip to the cinema would be cancelled. In the evening, not everyone gathered at the cinema, and therefore the visit to the cinema fell through. The next day, they began to find out who called whom. It turned out that Andrey called Boris and Volodya, Volodya called Boris and Dasha, Boris called Andrey and Dasha, Dasha called Andrey and Volodya, and Galya called Andrey, Volodya and Boris. Who was unable to phone and therefore did not come to the meeting?

Solution:

Let us draw five points and label them A, B, V, D and G, the first letters of the names. We connect those points that correspond to the names of the friends who phoned each other.

[app fig.9]

It can be seen from the picture that each of Andrey, Boris and Volodya phoned all the others, which is why these three came to the cinema. But Galya and Dasha did not manage to phone each other (points G and D are not connected by a segment) and therefore, in accordance with the agreement, they did not come to the cinema.
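The same check can be made directly from the list of calls given in the problem. A minimal Python sketch that builds the call graph and reports the pairs with no direct call between them:

```python
from itertools import combinations

# Calls listed in the problem (who called whom); an edge means the two could agree.
calls = [("Andrey", "Boris"), ("Andrey", "Volodya"),
         ("Volodya", "Boris"), ("Volodya", "Dasha"),
         ("Boris", "Andrey"), ("Boris", "Dasha"),
         ("Dasha", "Andrey"), ("Dasha", "Volodya"),
         ("Galya", "Andrey"), ("Galya", "Volodya"), ("Galya", "Boris")]

people = ["Andrey", "Boris", "Volodya", "Dasha", "Galya"]
edges = {frozenset(pair) for pair in calls}        # direction of the call does not matter

not_connected = [pair for pair in combinations(people, 2)
                 if frozenset(pair) not in edges]
print(not_connected)   # [('Dasha', 'Galya')] -- the only pair that could not phone each other
```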

The use of graphs in various areas of people's lives

In addition to the examples given, graphs are widely used in construction, electrical engineering, management, logistics, geography, mechanical engineering, sociology, programming, automation of technological processes and industries, psychology, and advertising. So, from all of the above, the practical value of graph theory irrefutably follows, the proof of which was the goal of this study.

In any field of science and technology, you meet with graphs. Graphs are wonderful mathematical objects with which you can solve mathematical, economic and logical problems, various puzzles and simplify the conditions of problems in physics, chemistry, electronics, automation. It is convenient to formulate many mathematical facts in the language of graphs. Graph theory is part of many sciences. Graph theory is one of the most beautiful and visual mathematical theories. Recently, graph theory has found more and more applications in applied issues. Even computer chemistry has emerged - a relatively young field of chemistry based on the application of graph theory.

Molecular graphs, used in stereochemistry and structural topology and in the chemistry of clusters, polymers, etc., are undirected graphs representing the structure of molecules [app fig.10]. The vertices and edges of these graphs correspond, respectively, to the atoms and the chemical bonds between them.

Molecular graphs and trees [app fig.10]: a, b are the multigraphs of ethylene and formaldehyde, respectively; c shows molecular trees of the pentane isomers (trees 4 and 5 are isomorphic to tree 2).

In the stereochemistry of organic compounds, molecular trees are used most often: the skeletal trees of molecular graphs, containing only the vertices corresponding to carbon atoms. Compiling sets of such trees and establishing their isomorphism make it possible to determine molecular structures and to find the total number of isomers of alkanes, alkenes and alkynes.

Protein networks

Protein networks are groups of physically interacting proteins that function in a cell jointly and in a coordinated manner, controlling interconnected processes in the organism [app fig.11].

The graph of a hierarchical system is called a tree. A distinctive feature of a tree is that there is exactly one path between any two of its vertices. A tree contains no cycles or loops.

Usually, a tree representing a hierarchical system has one main vertex, which is called the root of the tree. Each vertex of the tree (except the root) has only one ancestor - the object designated by it belongs to one top-level class. Any vertex of the tree can generate several descendants - vertices corresponding to lower-level classes.

For each pair of tree vertices there is a unique path connecting them. This property is used when finding all the ancestors, for example in the male line, of any person whose genealogy is represented as a family tree, which is a "tree" in the sense of graph theory as well.
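Following that unique path towards the root is all it takes to list someone's ancestors. A minimal Python sketch with a hypothetical family tree stored as child-to-father links (the names are invented, not taken from the appendix):

```python
# A hypothetical family tree stored as child -> father links (invented names).
father = {
    "Vladislav": "Sergey",
    "Sergey": "Ivan",
    "Ivan": "Pyotr",
    "Pyotr": None,        # the root of the tree has no recorded ancestor
}

def ancestors(person):
    """Follow the unique path from a vertex towards the root of the tree."""
    chain = []
    while father.get(person) is not None:
        person = father[person]
        chain.append(person)
    return chain

print(ancestors("Vladislav"))   # ['Sergey', 'Ivan', 'Pyotr']
```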

An example of my family tree [appendix fig.12].

One more example. The figure shows the biblical family tree [appendix fig.13].

Problem solving

1. Transport problem. Suppose there is a depot with raw materials in the city of Krasnodar, and the materials have to be delivered to Krymsk, Temryuk, Slavyansk-on-Kuban and Timashevsk in one trip, spending as little time and fuel as possible and returning to Krasnodar.

Solution:

First, let us draw a graph of all possible routes [app fig.14], taking into account the real roads between these settlements and the distances between them. To solve the problem we then build another graph, a tree of routes [app fig.15].

For the convenience of the solution, we denote the cities with numbers: Krasnodar - 1, Krymsk - 2, Temryuk - 3, Slavyansk - 4, Timashevsk - 5.

This gives 24 possible routes, but we need only the shortest ones. Of all the routes, only two satisfy this, each 350 km long.
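With only four towns to visit, all 24 visiting orders can simply be enumerated. Below is a minimal Python sketch of such an enumeration; the distance table is hypothetical (the real distances are in the appendix figure), so the minimum it prints need not equal the 350 km found above.

```python
from itertools import permutations

towns = {1: "Krasnodar", 2: "Krymsk", 3: "Temryuk", 4: "Slavyansk", 5: "Timashevsk"}

# Hypothetical road distances in km (symmetric); the real ones are in the appendix figure.
road = {(1, 2): 87, (1, 3): 190, (1, 4): 130, (1, 5): 70,
        (2, 3): 75, (2, 4): 65, (2, 5): 140,
        (3, 4): 60, (3, 5): 200,
        (4, 5): 85}

def d(a, b):
    return road[(a, b)] if (a, b) in road else road[(b, a)]

# Try every order of visiting towns 2-5, starting and ending in Krasnodar (town 1).
best_len, best_order = min(
    (sum(d(a, b) for a, b in zip((1,) + order, order + (1,))), order)
    for order in permutations([2, 3, 4, 5])        # all 24 visiting orders
)
print(best_len, [towns[1]] + [towns[t] for t in best_order] + [towns[1]])
```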

Similarly, it is possible and, I think, it is necessary to calculate real transportation from one locality to another.

    2. A logic problem about pouring. There are 8 liters of water in a bucket, and there are two pans with capacities of 5 and 3 liters. It is required to end up with 4 liters of water in the five-liter pan and 4 liters left in the bucket, i.e. to divide the water equally between the bucket and the large pan.

Solution:

The situation at any moment can be described by three numbers: the amount of water in the bucket, in the 5-liter pan and in the 3-liter pan [appendix fig.16]. Each such triple is a vertex of a graph, and each pouring is an edge leading from one state to another.

As a result, we get two solutions: one in 7 moves, the other in 8 moves.
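A breadth-first search over this graph of states finds a shortest pouring sequence automatically. Here is a minimal sketch in Python, where each state is the triple (bucket, 5-liter pan, 3-liter pan); in agreement with the answer above, the shortest sequence it finds takes seven pourings.

```python
from collections import deque

CAP = (8, 5, 3)                    # capacities: bucket, 5-liter pan, 3-liter pan
start, goal = (8, 0, 0), (4, 4, 0)

def moves(state):
    """All states reachable by pouring from one vessel into another until one is empty or full."""
    result = []
    for i in range(3):
        for j in range(3):
            if i != j and state[i] > 0 and state[j] < CAP[j]:
                amount = min(state[i], CAP[j] - state[j])
                new = list(state)
                new[i] -= amount
                new[j] += amount
                result.append(tuple(new))
    return result

# Breadth-first search over the graph of states gives a shortest pouring sequence.
parent = {start: None}
queue = deque([start])
while queue:
    state = queue.popleft()
    if state == goal:
        break
    for nxt in moves(state):
        if nxt not in parent:
            parent[nxt] = state
            queue.append(nxt)

path, state = [], goal
while state is not None:           # walk back from the goal to the start
    path.append(state)
    state = parent[state]
print(list(reversed(path)))        # each step is (bucket, 5-liter pan, 3-liter pan)
```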

Conclusion

So, in order to learn how to solve problems, one needs to understand what they are, how they are structured, what components they consist of, and what tools are used to solve them.

While solving practical problems with the help of graph theory, it became clear that every step, every stage of the solution, requires creativity.

At the very first stage this consists in the ability to analyze and encode the conditions of the problem. The second stage is the schematic notation, that is, the geometric representation of the graph, and at this stage the element of creativity is especially important, because it is far from easy to find the correspondences between the elements of the problem and the elements of the graph.

When solving a transport problem or a problem of compiling a family tree, I concluded that the graph method is certainly interesting, beautiful and visual.

I was convinced that graphs are widely used in economics, management, and technology. Graph theory is also used in programming. This was not discussed in this paper, but I think that this is only a matter of time.

In this scientific work, mathematical graphs, their areas of application are considered, several problems are solved with the help of graphs. Knowledge of the basics of graph theory is necessary in various areas related to production management, business (for example, construction network diagram, mail delivery schedules). In addition, while working on a scientific work, I mastered working on a computer in a WORD text editor. Thus, the tasks of scientific work are fulfilled.

So, from all of the above, the practical value of graph theory irrefutably follows, the proof of which was the goal of this work.

Literature

    Berge K. Graph theory and its applications. -M.: IIL, 1962.

    Kemeny J., Snell J., Thompson J. Introduction to finite mathematics. -M.: IIL, 1963.

    Ore O. Graphs and their application. -M.: Mir, 1965.

    Harary F. Graph theory. -M.: Mir, 1973.

    Zykov A.A. Theory of Finite Graphs. -Novosibirsk: Nauka, 1969.

    Berezina L.Yu. Graphs and their application. -M.: Education, 1979. -144 p.

    "Soros Educational Journal" No. 11 1996 (Article "Flat Graphs");

    Gardner M. "Mathematical leisure", M. "Mir", 1972 (chapter 35);

    Olekhnik S. N., Nesterenko Yu. V., Potapov M. K. "Old entertaining problems", M. "Nauka", 1988 (part 2, section 8; appendix 4);

Appendix

[Figures 1-16 referenced in the text]