
What is information? Informatics and information. Classification of information according to the way of perception

Information is facts about something.

The concept and types of information, transmission and processing, search and storage of information


Information: definition

Information is any knowledge received, transmitted or stored by various sources. Information is the totality of facts about the world around us and about all kinds of processes taking place in it, which can be perceived by living organisms, electronic machines and other information systems.

Information is significant facts about something, when the form of their presentation is itself information, that is, it performs a formatting function in accordance with its own nature.

Information is everything that can supplement our knowledge and assumptions.

Information is facts about something, regardless of the form of their presentation.

Information is the mental product of any psychophysical organism, produced by it when using some means, called the means of information.

Information is facts perceived by a person and (or) special devices as a reflection of the facts of the material or spiritual world in the process of communication.

Information is data organized in a way that makes sense to the person dealing with it.

Information is the meaning a person assigns to data on the basis of the known conventions used to represent it.


Information is intelligence, explanation, exposition.

Information is any data or knowledge that is of interest to someone.

Information is facts about the objects and phenomena of the environment, their parameters, properties and state, which are perceived by information systems (living organisms, control machines, etc.) in the process of life and work.

The same information message (newspaper article, announcement, letter, telegram, reference, story, drawing, radio broadcast, etc.) may contain a different amount of information for different people, depending on their prior knowledge, their level of understanding of the message and their interest in it.

When people speak of automated work with information by means of technical devices, what matters is not the content of the message but how many characters the message contains.

In relation to computer data processing, information is understood as a certain sequence of symbolic designations (letters, numbers, encoded graphic images and sounds, etc.) that carry a semantic load and are presented in a form understandable to a computer. Each new character in such a sequence of characters increases the information volume of the message.
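To make this volumetric approach concrete, here is a minimal Python sketch; the 8 bits per character is an assumption for illustration (one single-byte character set), and any fixed-width code would serve equally well.

```python
# A sketch of the "volumetric" measure described above: the information
# volume of a message depends only on the number of characters in it,
# not on their meaning.  The 8 bits per character is an assumption.

BITS_PER_CHAR = 8

def information_volume_bits(message: str) -> int:
    """Information volume of a message under a fixed-width encoding."""
    return len(message) * BITS_PER_CHAR

print(information_volume_bits("day"))       # 24 bits
print(information_volume_bits("daylight"))  # 64 bits: each new character adds 8 bits
```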


Currently, there is no single definition of information as a scientific term. From the point of view of various fields of knowledge, this concept is described by its specific set of features. For example, the concept of "information" is basic in the course of computer science, and it is impossible to define it through other, more "simple" concepts (in geometry, for example, it is impossible to express the content of the basic concepts of "point", "line", "plane" through simpler concepts).


The content of the basic concepts in any science must be explained by examples or identified by comparing them with the content of other concepts. In the case of the concept of "information", the problem of its definition is even more complicated, since it is a general scientific concept. This concept is used in various sciences (computer science, cybernetics, biology, physics, etc.), and in each science the concept of "information" is associated with a different system of concepts.


The concept of information

In modern science, two types of information are considered:

Objective (primary) information is the property of material objects and phenomena (processes) to generate a variety of states, which through interactions (fundamental interactions) are transmitted to other objects and imprinted in their structure.

Subjective (semantic, secondary) information is the semantic content of objective information about the objects and processes of the material world, formed by the human mind with the help of semantic images (words, images and sensations) and fixed on some material carrier.


In the everyday sense, information is facts about the surrounding world and the processes taking place in it, perceived by a person or by a special device.

As noted above, there is no single definition of information as a scientific term; from the point of view of various fields of knowledge, the concept is described by its own specific set of features. According to the concept of K. Shannon, information is removed uncertainty, i.e. facts that should, to one degree or another, remove the uncertainty the consumer had before receiving them and expand his understanding of the object with useful information.


From Gregory Bateson's point of view, the elementary unit of information is "a difference that makes a difference" for some larger perceiving system. Differences that are not perceived he calls "potential", and those that are perceived "active". "Information consists of differences that are not indifferent"; "any perception of information is necessarily the acquisition of information about a difference." From the point of view of computer science, information has a number of fundamental properties: novelty, relevance, reliability, objectivity, completeness, value, etc. The science of logic is primarily concerned with the analysis of information. The word "information" comes from the Latin informatio, which in translation means intelligence, clarification, acquaintance. The concept of information was already considered by ancient philosophers.

Before the industrial revolution, defining the essence of information remained the prerogative of mainly philosophers. Further, the science of cybernetics, which was new at that time, began to consider issues of information theory.

Sometimes, in order to comprehend the essence of a concept, it is useful to analyze the meaning of the word that denotes this concept. Clarifying the internal form of the word and studying the history of its use can shed unexpected light on its meaning, eclipsed by the usual "technological" use of this word and modern connotations.

The word information entered the Russian language in the Petrine era. For the first time it is recorded in the "Spiritual Regulations" of 1721 in the meaning of "representation, concept of something". (In European languages, it was fixed earlier - around the 14th century.)

Based on this etymology, information can be considered any significant change in form, or, in other words, any materially fixed traces formed by the interaction of objects or forces and amenable to understanding. Information is thus a converted form of energy. The carrier of information is a sign, and the way of its existence is interpretation: revealing the meaning of a sign or a sequence of signs.

The meaning can be an event reconstructed from the sign that caused its occurrence (in the case of "natural" and involuntary signs, such as traces, evidence, etc.), or a message (in the case of conventional signs characteristic of the sphere of language). It is the second kind of signs that makes up the body of human culture, which, according to one of the definitions, is "a set of non-hereditarily transmitted information."

Messages may contain information about facts or interpretation of facts (from Latin interpretatio, interpretation, translation).

A living being receives information through the senses, as well as through reflection or intuition. The exchange of information between subjects is communication (from Lat. communicatio, message, transmission, derived in turn from Lat. communico, to make common, to inform, to talk, to connect).

From a practical point of view, information is always presented as a message. An informational message is associated with a message source, a message recipient, and a communication channel.


Returning to the Latin etymology of the word information, let us try to answer the question of what exactly is given form here.

It is obvious that, firstly, it is some meaning which, being initially formless and unexpressed, exists only potentially and must be "built up" in order to become perceivable and transmittable.

Secondly, it is the human mind, which is brought up to think structurally and clearly. Thirdly, it is society, which, precisely because its members share these meanings, gains unity and functionality.

Information as an expressed reasonable meaning is knowledge that can be stored, transmitted and be the basis for the generation of other knowledge. The forms of knowledge conservation (historical memory) are diverse: from myths, annals and pyramids to libraries, museums and computer databases.

Information is facts about the world around us and about the processes taking place in it, which are perceived by living organisms, control machines and other information systems.

The word "information" is Latin. For a long life, its meaning has undergone evolution, sometimes expanding, sometimes narrowing its boundaries to the limit. At first, the word "information" meant: "representation", "concept", then - "information", "message transmission".


In recent years, scientists have decided that the usual (generally accepted) meaning of the word "information" is too elastic and vague, and have given it a narrower meaning: "a measure of the certainty in a message".

Information theory was brought to life by the needs of practice. Its origin is associated with Claude Shannon's work "A Mathematical Theory of Communication", published in 1948. The foundations of information theory rest on results obtained by many scientists. By the second half of the 20th century the globe was buzzing with transmitted information running through telephone and telegraph cables and radio channels; later, electronic computers appeared - processors of information. At that time the main task of information theory was, first of all, to increase the efficiency of communication systems. The difficulty in designing and operating means, systems and channels of communication is that it is not enough for the designer and engineer to solve the problem from the physical and energy standpoint. From those points of view a system may be the most perfect and economical; but when creating transmission systems it is also important to pay attention to how much information will pass through them. After all, information can be quantified and calculated. In such calculations one proceeds in the most ordinary way: one abstracts from the meaning of the message, just as one renounces concreteness in the arithmetic operations familiar to all of us (passing from the addition of two apples and three apples to the addition of numbers in general: 2 + 3).


The scientists "completely ignored the human evaluation of information". To a sequence of 100 letters, for example, they assign a certain amount of information, without regard to whether that information makes sense or whether its practical application makes sense. The quantitative approach is the most developed branch of information theory. Under this definition, a collection of 100 letters - a 100-letter phrase from a newspaper, from a play by Shakespeare, or from Einstein's theorem - contains exactly the same amount of information.


This quantification of information is highly useful and practical. It corresponds exactly to the task of the communications engineer, who must convey all the information contained in a submitted telegram regardless of the value of that information for the addressee. The communication channel is soulless. One thing matters for the transmitting system: to transmit the required amount of information in a certain time. How do we calculate the amount of information in a particular message?

The assessment of the amount of information is based on the laws of probability theory; more precisely, it is determined through the probabilities of events. This is understandable: a message has value, carries information, only when we learn from it about the outcome of an event of a random character, when it is to some degree unexpected. After all, a message about what is already known contains no information. That is, if someone calls you on the phone and says, "It is light during the day and dark at night," such a message will surprise you only with the absurdity of stating the obvious and well known, not with the news it contains. Another matter is, for example, the result of a race at the races. Who will come first? The outcome here is difficult to predict. The more random outcomes an event of interest to us has, the more valuable the message about its result and the more information it carries. A message about an event that has only two equally possible outcomes contains one unit of information, called a bit. The choice of this unit is not accidental: it is connected with the most common binary way of encoding information during transmission and processing. Let us try, at least in the most simplified form, to present the general principle of the quantitative evaluation of information that is the cornerstone of all information theory.
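As a sketch of this principle, the self-information of an outcome with probability p can be computed as I = log2(1/p) bits; the function below is illustrative, not part of the original text. An event with two equally possible outcomes (p = 1/2) yields exactly one bit.

```python
import math

# Self-information of an outcome with probability p, in bits:
# I(p) = log2(1/p).  An event with two equally possible outcomes
# (p = 1/2, a fair coin toss) carries exactly one bit.

def self_information_bits(p: float) -> float:
    return math.log2(1 / p)

print(self_information_bits(1 / 2))  # 1.0 bit  (coin toss)
print(self_information_bits(1 / 8))  # 3.0 bits (one of 8 equally likely outcomes)
print(self_information_bits(1.0))    # 0.0 bits (a message about the already known)
```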


We already know that the amount of information depends on the probabilities of the outcomes of an event. If an event, as scientists say, has two equally probable outcomes, this means that the probability of each outcome is 1/2. Such is the probability of getting heads or tails when tossing a coin. If an event has three equally probable outcomes, then the probability of each is 1/3. Note that the sum of the probabilities of all outcomes is always equal to one: after all, one of all the possible outcomes will definitely occur. An event, as you understand, can also have unequally probable outcomes. So, in a football match between a strong and a weak team, the probability of the strong team winning is high - for example, 4/5. The probability of a draw is much lower, for example 3/20. The probability of defeat is very small (since the probabilities must sum to one, it is 1/20).
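Continuing the football example in code: Shannon's entropy H = -Σ p·log2(p) gives the average amount of information in a message about the outcome. A minimal sketch, assuming the defeat probability 1/20 inferred above:

```python
import math

# Shannon entropy H = -sum(p * log2 p): the average amount of information
# in a message about the outcome of an event.  The defeat probability 1/20
# is our inference from the requirement that probabilities sum to one.

def entropy_bits(probabilities):
    assert abs(sum(probabilities) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy_bits([1/2, 1/2]))         # 1.000 bit: two equally possible outcomes
print(entropy_bits([4/5, 3/20, 1/20]))  # ~0.884 bits: a lopsided match tells us less
```

A lopsided distribution carries less than one bit on average, even though it has three outcomes: the win is so likely that a message about it removes little uncertainty.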


It turns out that the amount of information is a measure of the reduction of uncertainty of some situation. Different amounts of information are transmitted over communication channels, and the amount of information passing through a channel cannot exceed its capacity, which is determined by how much information passes through it per unit of time. One of the characters in Jules Verne's novel The Mysterious Island, the journalist Gideon Spilett, telephoned a chapter from the Bible so that his competitors could not use the telephone line. In this case the channel was loaded completely, but the amount of information transmitted was equal to zero, because the subscriber received information already known to him. The channel was effectively running idle, passing a strictly defined number of pulses without loading them with anything. Meanwhile, the more information each of a given number of pulses carries, the more fully the channel bandwidth is used. Therefore, it is necessary to encode information intelligently, to find an economical, sparing language for transmitting messages.


The information is "sifted" in the most thorough way. In the telegraph, frequently occurring letters, combinations of letters, even whole phrases are depicted with a shorter set of zeros and ones, and those that are less common are shown with a longer one. In the case when the length of the code word is reduced for frequently occurring symbols and increased for rarely occurring ones, one speaks of efficient encoding of information. But in practice, it often happens that the code resulting from the most thorough “sifting”, a convenient and economical code, can distort the message due to interference, which, unfortunately, always happens in communication channels: sound distortion in the phone, atmospheric noise in radio, distortion or darkening of the image in television, transmission errors in the telegraph. These interferences, or, as they are called by experts, noise, fall on the information. And from this there are the most incredible and, of course, unpleasant surprises.


Therefore, to increase reliability in the transmission and processing of information it is necessary to introduce extra characters - a kind of protection against distortion. These extra characters do not carry the actual content of the message; they are redundant. From the point of view of information theory, everything that makes a language colorful, flexible, rich in shades, multifaceted, many-valued is redundancy. How redundant, from such positions, is Tatyana's letter to Onegin! How much informational excess it contains for the short and understandable message "I love you"! And how informationally precise are the drawn signs understandable to everyone entering the subway today, where instead of words and phrases of announcements there are laconic symbolic signs indicating "Entrance" and "Exit".
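The simplest illustration of such protective redundancy is a parity bit; the sketch below is ours, since the text names no specific scheme. One extra bit per block carries no new content but lets the receiver detect any single distorted bit.

```python
# Protective redundancy in miniature: an even-parity bit added to each
# block of data bits detects any single-bit distortion in the channel.

def add_parity(bits: str) -> str:
    """Append an even-parity bit, so the total number of 1s is even."""
    return bits + str(bits.count("1") % 2)

def check_parity(block: str) -> bool:
    """True if the block arrived with even parity (no single-bit error)."""
    return block.count("1") % 2 == 0

sent = add_parity("1011001")  # "10110010"
print(check_parity(sent))     # True: transmission intact
corrupted = sent[:3] + ("1" if sent[3] == "0" else "0") + sent[4:]  # flip one bit
print(check_parity(corrupted))  # False: the redundant bit exposes the distortion
```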


In this regard, it is useful to recall an anecdote told at one time by the famous American scientist Benjamin Franklin about a hatter who invited his friends to discuss the design of a sign. It was to show a hat and read: "John Thompson, the hatter, makes and sells hats for cash". One of the friends remarked that the words "for cash" were redundant - such a reminder would be offensive to the buyer. Another found the word "sells" superfluous, since it goes without saying that a hatter sells hats and does not give them away for free. A third thought that the words "hatter" and "makes hats" were an unnecessary tautology, and the latter words were dropped. A fourth suggested dropping the word "hatter" - the painted hat says clearly enough who John Thompson is. Finally, a fifth insisted that it made no difference to the buyer whether the hatter was called John Thompson or otherwise, and proposed dispensing with this indication. Thus, in the end, nothing was left on the sign but the hat. Of course, if people used only such codes, without redundancy in messages, then all "information forms" - books, reports, articles - would be extremely short. But they would lose in intelligibility and beauty.

Information can be divided into types according to various criteria. By truth: true and false;

according to the way of perception:

Visual - perceived by the organs of vision;

Auditory - perceived by the organs of hearing;

Tactile - perceived by tactile receptors;

Olfactory - perceived by olfactory receptors;

Taste - perceived by taste buds.


in the form of presentation:

Text - transmitted in the form of symbols intended to designate lexemes of the language;

Numerical - in the form of numbers and signs denoting mathematical operations;

Graphic - in the form of images, objects, graphs;

Sound - oral or in the form of a recording, the transmission of language lexemes by auditory means.


by appointment:

Mass - contains trivial information and operates with a set of concepts understandable to most of the society;

Special - contains a specific set of concepts, when used, information is transmitted that may not be understood by the bulk of society, but is necessary and understandable within a narrow social group where this information is used;

Secret - transmitted to a narrow circle of people and through closed (secure) channels;

Personal (private) - a set of information about a person that determines the social position and types of social interactions within the population.


by value:

Relevant - information is valuable at a given time;

Reliable - information received without distortion;

Understandable - information expressed in a language understandable to the person to whom it is intended;

Complete - information sufficient to make the right decision or understanding;

Useful - the usefulness of information is determined by the subject who received the information, depending on the volume of possibilities for its use.


The value of information in various fields of knowledge

In information theory nowadays, many systems, methods, approaches and ideas are being developed. However, scientists believe that new trends will be added to the current ones in information theory and new ideas will appear. As proof of the correctness of their assumptions they cite the "living", developing nature of science and point out that information theory is being introduced surprisingly quickly and firmly into the most diverse areas of human knowledge. Information theory has penetrated physics, chemistry, biology, medicine, philosophy, linguistics, pedagogy, economics, logic, the technical sciences and aesthetics. According to the experts themselves, the doctrine of information, which arose from the needs of communication theory and cybernetics, has stepped beyond their limits. And now, perhaps, we have the right to speak of information as a scientific concept that puts into the hands of researchers a theoretical-informational method with which one can penetrate many sciences of animate and inanimate nature and of society, and which allows one not only to look at all problems from a new angle but also to see what was previously unseen. That is why the term "information" has become widespread in our time, entering such concepts as information system, information culture, even information ethics.


Many scientific disciplines use information theory to emphasize a new direction in the old sciences. This is how, for example, information geography, information economics, and information law arose. But the term "information" has become extremely important in connection with the development of the latest computer technology, the automation of mental work, the development of new means of communication and information processing, and especially with the emergence of computer science. One of the most important tasks of information theory is the study of the nature and properties of information, the creation of methods for its processing, in particular, the transformation of a wide variety of modern information into computer programs, with the help of which the automation of mental work takes place - a kind of strengthening of the intellect, and hence the development of the intellectual resources of society.


The word "information" comes from the Latin word informatio, which means information, clarification, familiarization. The concept of "information" is basic in the course of computer science, but it is impossible to define it through other, more "simple" concepts. The concept of "information" is used in various sciences, and in each science the concept of "information" is associated with different systems of concepts. Information in biology: Biology studies wildlife and the concept of "information" is associated with the appropriate behavior of living organisms. In living organisms, information is transmitted and stored using objects of various physical nature (DNA state), which are considered as signs of biological alphabets. Genetic information is inherited and stored in all cells of living organisms. Philosophical approach: Information is interaction, reflection, cognition. Cybernetic approach: Information is the characteristics of a control signal transmitted over a communication line.

The role of information in philosophy

In early definitions of information as a category, a concept, a property of the material world, the traditionalism of the subjective constantly dominated. Information exists outside our consciousness and can be reflected in our perception only as a result of interaction: reflection, reading, reception in the form of a signal or stimulus. Information is immaterial, like all properties of matter. Information stands in the same row as matter, space, time, systemicity, function, etc., which are the fundamental concepts of a formalized reflection of objective reality in its distribution and variability, diversity and manifestation. Information is a property of matter and reflects its properties (state or ability to interact) and quantity (measure) through interaction.


From a material point of view, information is the order of the objects of the material world. For example, the order of letters on a sheet of paper according to certain rules is written information. The sequence of multi-colored dots on a sheet of paper according to certain rules is graphic information. The order of musical notes is musical information. The order of genes in DNA is hereditary information. The order of bits in a computer is computer information, and so on. For information exchange to take place, necessary and sufficient conditions are required.

The necessary conditions:

The presence of at least two different objects of the material or non-material world;

The presence of a common property among the objects that allows them to be identified as carriers of information;

The presence of a specific property in the objects that allows them to be distinguished from one another;

The presence of a space property that allows the order of the objects to be determined. For example, the arrangement of written information on paper is a specific property of paper that allows letters to be arranged from left to right and from top to bottom.


There is only one sufficient condition: the presence of a subject capable of recognizing information. Such subjects are man and human society, societies of animals, robots, etc. An informational message is constructed by selecting copies of objects from a basis and arranging these objects in space in a certain order. The length of the informational message is defined as the number of copies of the basis objects and is always expressed as an integer. One must distinguish between the length of an informational message, which is always measured by an integer, and the amount of knowledge contained in it, which is measured in an as yet unknown unit. From the mathematical point of view, information is a sequence of integers written into a vector. The numbers are the indices of the objects in the information basis. The vector is called the information invariant, since it does not depend on the physical nature of the basis objects. One and the same informational message can be expressed in letters, words, sentences, files, pictures, notes, songs, video clips, or any combination of all of these.
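A small Python sketch of this vector representation, in which the basis and the message are illustrative assumptions: the message becomes a sequence of integers (the "information invariant") whose length is an integer, while the physical nature of the basis objects is irrelevant.

```python
# A message as a vector of integers: indices of objects in an agreed "basis".
# The basis and the message below are invented for illustration.

basis = ["sun", "rain", "wind", "snow"]  # the agreed set of basis objects

def encode(message_objects):
    """Message -> vector of integer indices into the basis (the invariant)."""
    return [basis.index(obj) for obj in message_objects]

def decode(vector):
    """Vector of indices -> the same message in the basis alphabet."""
    return [basis[i] for i in vector]

vector = encode(["rain", "rain", "sun"])
print(vector)          # [1, 1, 0]; the length of the message is the integer 3
print(decode(vector))  # ['rain', 'rain', 'sun'] reconstructed from the invariant
```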

The role of information in physics

Information is information about the surrounding world (object, process, phenomenon, event), which is the object of transformation (including storage, transmission, etc.) and is used to develop behavior, to make decisions, to manage or to learn.


The characteristics of information are as follows:

It is the most important resource of modern production: it reduces the need for land, labor and capital and reduces the consumption of raw materials and energy. So, for example, if you have the ability to archive your files (that is, if you have such information), you need not spend money on buying new floppy disks;

Information brings new productions to life. For example, the invention of the laser beam was the cause of the emergence and development of the production of laser (optical) disks;

Information is a commodity, and the seller of information does not lose it after the sale. So, if a student informs his friend about the schedule of classes during the semester, he will not lose this data for himself;

Information gives additional value to other resources, in particular, labor. Indeed, a worker with a higher education is valued more than a worker with a secondary one.


As follows from the definition, three concepts are always associated with information:

The source of information is that element of the surrounding world (object, process, phenomenon, event), information about which is the object of transformation. So, the source of information that the reader of this textbook is currently receiving is computer science as a sphere of human activity;

The consumer of information is that element of the surrounding world that uses information (for the development of behavior, for decision making, for management or for learning). The consumer of this information is the reader himself;

A signal is a material carrier that captures information for its transfer from a source to a consumer. In this case, the signal is electronic in nature. If the student takes this manual in the library, then the same information will be on paper. Being read and memorized by a student, the information will acquire another carrier - biological, when it is "recorded" in the student's memory.


The signal is the most important element in this circuit. The forms of its presentation, as well as the quantitative and qualitative characteristics of the information contained in it, which are important for the consumer of information, are discussed later in this section of the textbook. The main characteristics of the computer, as the main tool that maps the source of information into a signal (link 1 in the figure) and brings the signal to the consumer of information (link 2 in the figure), are given in the section Computer. The structure of the procedures that implement links 1 and 2 and make up the information process is the subject of the part Information Process.

The objects of the material world are in a state of continuous change, which is characterized by the exchange of energy between the object and the environment. A change in the state of one object always leads to a change in the state of some other object in the environment. This phenomenon, regardless of how, which particular states and which particular objects changed, can be regarded as the transmission of a signal from one object to another. The change in the state of an object when a signal reaches it is called signal registration.


A signal, or a sequence of signals, forms a message that can be perceived by the recipient in one form or another and in one volume or another. Information in physics is a term that qualitatively generalizes the concepts "signal" and "message". If signals and messages can be quantified, then one can say that signals and messages are units of measurement of the amount of information. The same message (signal) is interpreted differently by different systems. For example, a long beep followed by two short beeps is, in Morse code terminology, the letter D; in the terminology of an AWARD BIOS, a video card malfunction.
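The beep example can be made concrete with a toy sketch: the same signal looked up in two different receiver codes. The code tables below merely paraphrase the two interpretations given in the text.

```python
# The same physical signal decoded by two different receiver codes:
# one long beep followed by two short ones.

signal = ("long", "short", "short")

MORSE = {("long", "short", "short"): "the letter D (dah-dit-dit)"}
AWARD_BIOS = {("long", "short", "short"): "video card malfunction"}

for receiver, code_table in (("Morse operator", MORSE), ("AWARD BIOS", AWARD_BIOS)):
    print(f"{receiver}: {code_table[signal]}")
```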

The role of information in mathematics

In mathematics, information theory (the mathematical theory of communication) is a branch of applied mathematics that defines the concept of information and its properties and establishes limiting relations for data transmission systems. The main branches of information theory are source coding (compression coding) and channel (noise-immune) coding. Mathematics is more than a scientific discipline: it creates a single language for all of science.


The subjects of mathematical research are abstract objects: number, function, vector, set and others. Moreover, most of them are introduced axiomatically, i.e. without any connection with other concepts and without any definition.

Information is not among the subjects of study of mathematics. Nevertheless, the word "information" is used in mathematical terms - self-information and mutual information - belonging to the abstract (mathematical) part of information theory. In mathematical theory, however, the concept of "information" is associated exclusively with abstract objects - random variables - while in modern information theory this concept is treated much more broadly, as a property of material objects. The connection between these two uses of the same term is undeniable. It was the mathematical apparatus of random variables that was used by the author of information theory, Claude Shannon. He himself meant by the term "information" something fundamental (irreducible). Shannon's theory intuitively assumes that information has content: information reduces overall uncertainty and information entropy, and the amount of information is measurable. However, he warned researchers against mechanically transferring concepts from his theory to other areas of science.
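To illustrate the mathematical sense of the term, here is a sketch of mutual information I(X;Y) = Σ p(x,y)·log2(p(x,y)/(p(x)p(y))) for two binary random variables; the joint distribution is an invented example, not taken from the text.

```python
import math

# Mutual information of two discrete random variables X and Y:
# I(X;Y) = sum over x,y of p(x,y) * log2( p(x,y) / (p(x) * p(y)) ).
# The joint distribution below is an invented illustration.

joint = {  # p(x, y) for x, y in {0, 1}
    (0, 0): 0.4, (0, 1): 0.1,
    (1, 0): 0.1, (1, 1): 0.4,
}

px = {x: sum(p for (xi, _), p in joint.items() if xi == x) for x in (0, 1)}
py = {y: sum(p for (_, yi), p in joint.items() if yi == y) for y in (0, 1)}

mi = sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in joint.items() if p > 0)
print(f"I(X;Y) = {mi:.3f} bits")  # > 0: observing X removes some uncertainty about Y
```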


"The search for ways to apply information theory in other fields of science is not reduced to a trivial transfer of terms from one field of science to another. This search is carried out in a long process of putting forward new hypotheses and their experimental verification." K. Shannon.

The role of information in cybernetics

The founder of cybernetics, Norbert Wiener, spoke of information as follows:

"Information is information, not matter or energy." But the basic definition of information that he gave in several of his books is the following: information is a designation of content received by us from the outside world in the process of our adaptation to it and the adaptation of our senses to it.

Information is the basic concept of cybernetics, just as economic information is the basic concept of economic cybernetics.


There are many definitions of this term, and they are complex and contradictory. The reason, obviously, is that various sciences deal with information as a phenomenon, and cybernetics is only the youngest of them. Information is the subject of study of such sciences as the science of management, mathematical statistics, genetics, the theory of mass communication (press, radio, television), and computer science, which deals with the problems of scientific and technical information, etc. Finally, philosophers have recently shown great interest in the problems of information: they tend to regard information as one of the basic universal properties of matter, connected with the concept of reflection. In all interpretations, the concept of information presupposes the existence of two objects: a source of information and a consumer (receiver) of information. The transfer of information from one to the other occurs with the help of signals which, generally speaking, may have no physical connection with its meaning: this relationship is determined by agreement. For example, a stroke of the veche bell meant that one had to gather in the square, but to those who did not know of this arrangement it conveyed no information at all.


In the situation with the veche bell, a person party to the agreement about the meaning of the signal knows that at the given moment there can be two alternatives: the veche assembly will take place or it will not. Or, in the language of information theory, an uncertain event (the veche) has two outcomes. The received signal leads to a decrease in uncertainty: the person now knows that the event has only one outcome - it will take place. However, if it was known in advance that the veche would take place at such and such an hour, the bell announced nothing new. It follows that the less likely (i.e. the more unexpected) a message is, the more information it contains, and vice versa: the more probable the outcome before the event, the less information the signal carries. Approximately such reasoning led in the 1940s to the emergence of the statistical, or "classical", theory of information, which defines the concept of information through a measure of the reduction of uncertainty of knowledge about the occurrence of an event (this measure was called entropy). N. Wiener, K. Shannon and the Soviet scientists A. N. Kolmogorov, V. A. Kotelnikov and others stood at the origins of this science. Their results made it possible to calculate such quantities as the capacity of communication channels and the storage capacity of information devices, which served as a powerful stimulus to the development of cybernetics as a science and of electronic computing technology as a practical application of its achievements.


As for defining the value and usefulness of information for the recipient, much remains unresolved and unclear. If we proceed from the needs of economic management and, consequently, of economic cybernetics, then information can be defined as all the facts, knowledge and messages that help solve a particular management problem (that is, reduce the uncertainty of its outcomes). Then some possibilities open up for evaluating information: it is the more useful and valuable, the sooner or at the lower cost it leads to the solution of the problem. The concept of information is close to the concept of data. However, there is a difference between them: data are signals from which information still has to be extracted, and data processing is the process of bringing them into a form suitable for this.


The process of transferring data from source to consumer and perceiving them as information can be regarded as passing through three filters:

A physical, or statistical, filter (a purely quantitative limitation on the bandwidth of the channel, regardless of the content of the data - the syntactic aspect);

A semantic filter (selection of those data that can be understood by the recipient, i.e. that correspond to the thesaurus of his knowledge);

A pragmatic filter (selection, among the understood data, of those useful for solving the given problem).

This is well shown in the diagram taken from E. G. Yasin's book on economic information. Accordingly, three aspects of the study of information problems are distinguished: syntactic, semantic and pragmatic.


By content, information is subdivided into socio-political, socio-economic (including economic information), scientific and technical, etc. In general there are many classifications of information, built on various grounds. As a rule, because of the proximity of the concepts, classifications of data are built in the same way. For example, information is subdivided into static (constant) and dynamic (variable), and data, correspondingly, into constants and variables. Another division: primary, derivative and output information (data are classified in the same way). A third division: controlling and informing information. A fourth: redundant, useful and false. A fifth: complete (continuous) and selective. Wiener's idea quoted above gives a direct indication of the objectivity of information, i.e. of its existence in nature independently of human consciousness (perception).

Modern cybernetics defines objective information as an objective property of material objects and phenomena to generate a variety of states that are transferred from one object (process) to another through fundamental interactions of matter and imprinted in its structure. A material system in cybernetics is considered as a set of objects that themselves can be in different states, but the state of each of them is determined by the states of other objects in the system.

In nature, the set of system states is information, the states themselves are the primary code, or source code. Thus, each material system is a source of information. Cybernetics defines subjective (semantic) information as the meaning or content of a message.

The role of information in computer science

The subject of computer science is precisely data: the methods of their creation, storage, processing and transmission. Content (also "filling", "site content") is a term denoting all types of information (both textual and multimedia - images, audio, video) that make up the content visualized for the visitor of a web site. It is used to separate the concept of the information that makes up the internal structure of a page or site (its code) from that which will eventually be displayed on the screen.

The word "information" comes from the Latin word informatio, which means information, clarification, familiarization. The concept of "information" is basic in the course of computer science, but it is impossible to define it through other, more "simple" concepts.


The following approaches to the definition of information can be distinguished:

Traditional (ordinary) - used in computer science: information is facts, knowledge and messages about the state of affairs that a person perceives from the outside world with the help of the sense organs (sight, hearing, taste, smell, touch).

Probabilistic - used in information theory: information is facts about the objects and phenomena of the environment, their parameters, properties and state, which reduce the degree of uncertainty and incompleteness of knowledge about them.


Information is stored, transmitted and processed in symbolic (sign) form. The same information can be presented in different forms:

Written form, consisting of various signs, among which are symbolic ones in the form of text, numbers and special characters, as well as graphic, tabular, etc.;

The form of gestures or signals;

Oral verbal form (conversation).


Information is presented by means of languages as sign systems, which are built on the basis of a certain alphabet and have rules for performing operations on signs. A language is a certain sign system for representing information. There are:

Natural languages - spoken languages in oral and written form. In some cases, spoken language can be replaced by the language of facial expressions and gestures or by the language of special signs (for example, road signs);

Formal languages - special languages for various areas of human activity, characterized by a rigidly fixed alphabet and stricter rules of grammar and syntax. These are the language of music (notes), the language of mathematics (numbers, mathematical signs), number systems, programming languages, etc. At the heart of any language is the alphabet - a set of symbols or signs. The total number of symbols in an alphabet is called the cardinality of the alphabet (see the sketch below).
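A sketch of the relationship between cardinality and information per symbol; the alphabets listed are illustrative. One symbol of an alphabet of N equally likely symbols carries log2(N) bits.

```python
import math

# One symbol of an alphabet of N equally likely symbols carries
# i = log2(N) bits.  The alphabets listed below are illustrative.

alphabets = {
    "binary {0, 1}": 2,
    "decimal digits": 10,
    "Latin letters": 26,
    "256-symbol computer code": 256,
}

for name, cardinality in alphabets.items():
    print(f"{name}: N = {cardinality}, one symbol carries {math.log2(cardinality):.2f} bits")
```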


An information carrier is a medium or physical body for the transmission, storage and reproduction of information (electrical, light, thermal, sound and radio signals, magnetic and laser disks, printed publications, photographs, etc.).

Information processes are processes associated with the receipt, storage, processing and transmission of information (i.e. actions performed with information). That is, they are processes during which the content of information or the form of its presentation changes.

To support an information process, a source of information, a communication channel and a consumer of information are needed. The source transmits (sends) information, and the receiver receives (perceives) it. The transmitted information travels from source to receiver by means of a signal (code). Changes in the signal are what allow information to be obtained.

Being an object of transformation and use, information is characterized by the following properties:

Syntax is a property that determines the way information is presented on a carrier (in a signal). Thus, this information is presented on an electronic medium in a specific font. One can also consider such presentation parameters as the style and color of the font, its size, the line spacing, etc. Which parameters are selected as syntactic properties is obviously determined by the intended method of transformation. For a visually impaired person, for example, the font size and color are essential; if this text is to be entered into a computer through a scanner, the paper size is important;


Semantics is a property that defines the meaning of information as the correspondence of a signal to the real world. Thus, the semantics of the signal "computer science" lies in the definition given earlier. Semantics can be thought of as an agreement, known to the consumer of information, about what each signal means (the so-called interpretation rule). For example, it is the semantics of signals that a novice motorist studies when learning the rules of the road and road signs (in this case the signs themselves act as signals), and that a learner of a foreign language acquires when studying the semantics of words (signals). We can say that the point of teaching computer science is to study the semantics of various signals - the essence of the key concepts of this discipline;


Pragmatics is a property that determines the influence of information on the consumer's behavior. Thus the pragmatics of the information received by the reader of this study guide is, at the least, the successful passing of the computer science exam. One would like to believe that the pragmatics of this work will not be limited to that, and that it will serve the reader's further education and professional activity.

It should be noted that signals of different syntax can have the same semantics. For example, the signals "computer" and "EVM" (the abbreviation for "electronic computing machine") both denote an electronic device for transforming information; in this case one usually speaks of signal synonymy. On the other hand, one signal (i.e. information with one syntactic property) can have different pragmatics for different consumers, and different semantics. Thus, the road sign known as a "brick", with its well-defined semantics ("no entry"), means a ban on entry to a motorist but does not affect a pedestrian in any way. At the same time, the signal "key" can have different semantics: a treble clef, a spring, a device for opening a lock, or a key used in computer science to encode a signal in order to protect it from unauthorized access (in this case one speaks of signal homonymy). There are also signal antonyms, with opposite semantics: for example, "cold" and "hot", "fast" and "slow", etc.


The subject of study of the science of informatics is precisely data: the methods of their creation, storage, processing and transmission. The information itself recorded in the data, its semantic content, is of interest to users of information systems - specialists in various sciences and fields of activity: a physician is interested in medical information, a geologist in geological information, an entrepreneur in commercial information, etc. (including the computer scientist, who is interested in information about working with data).

Semiotics - the science of information

Information cannot be imagined without its receipt, processing, transmission, etc., that is, outside the framework of information exchange. All acts of information exchange are carried out by means of symbols or signs, with the help of which one system influences another. Therefore, the main science that studies information is semiotics - the science of signs and sign systems in nature and society (the theory of signs). In each act of information exchange, one can find three of its "participants", three elements: a sign, an object that it designates, and a recipient (user) of the sign.


Depending on which relations between these elements are considered, semiotics is divided into three branches: syntactics, semantics and pragmatics. Syntactics studies signs and the relations between them; in doing so it abstracts from the content of the sign and from its practical significance for the recipient. Semantics studies the relations between signs and the objects they designate, abstracting from the recipient of the signs and from their value for him. Clearly, studying the laws of the semantic representation of objects in signs is impossible without taking into account and using the general laws of construction of any sign systems, which are studied by syntactics. Pragmatics studies the relations between signs and their users. Within the framework of pragmatics, all the factors that distinguish one act of information exchange from another are studied, together with all questions of the practical results of using information and its value for the recipient.


In doing so, many aspects of the relations of signs among themselves and with the objects they designate are inevitably touched upon. Thus the three branches of semiotics correspond to three levels of abstraction from the characteristics of specific acts of information exchange. The study of information in all its diversity corresponds to the pragmatic level. Abstracting from the recipient of information, excluding him from consideration, we move on to studying it at the semantic level. Abstracting from the content of signs, we transfer the analysis of information to the syntactic level. This interpenetration of the main branches of semiotics, associated with different levels of abstraction, can be represented by the scheme "The three branches of semiotics and their interrelation". Accordingly, information is measured in three aspects: syntactic, semantic and pragmatic. The need for such different measurements of information, as will be shown below, is dictated by the practice of designing and organizing the work of information systems. Consider a typical production situation.


At the end of a shift, the site planner prepares data on the fulfillment of the production schedule. These data are sent to the information and computing center (ICC) of the enterprise, where they are processed and issued to managers in the form of reports on the current state of production. On the basis of the data received, the shop manager decides whether to change the production plan for the next planning period or to take other organizational measures. Obviously, for the shop manager the amount of information contained in the summary depends on the magnitude of the economic effect obtained from its use in decision-making, on how useful the information was. For the site planner, the amount of information in the same message is determined by the accuracy of its correspondence to the actual state of affairs on the site and by the degree of unexpectedness of the reported facts: the more unexpected they are, the faster management must be informed of them, and the more information the message contains. For the ICC staff, the number of characters, the length of the message carrying the information, is of paramount importance, since it determines the loading time of the computing equipment and the communication channels. Neither the usefulness of the information nor the quantitative measure of its semantic value is of practically any interest to them.


Naturally, when organizing a production management system and building models of decision choice, we will use the usefulness of information as the measure of the informativeness of messages. When building a system of accounting and reporting that provides management with data on the course of the production process, the novelty of the information received should be taken as the measure of the amount of information. The organization of procedures for the mechanical processing of information requires measuring the volume of messages as the number of characters processed. These three essentially different approaches to measuring information do not contradict or exclude one another. On the contrary, by measuring information on different scales they allow a fuller and more comprehensive assessment of the information content of each message and a more effective organization of the production management system. In the apt words of Prof. N. E. Kobrinsky, when it comes to the rational organization of information flows, the quantity, novelty and usefulness of information prove to be as interconnected as the quantity, quality and cost of products in production.

Information in the material world

Information is one of the general concepts associated with matter. Information exists in any material object in the form of a variety of its states and is transmitted from object to object in the process of their interaction. The existence of information as an objective property of matter logically follows from the well-known fundamental properties of matter - structure, continuous change (movement) and interaction of material objects.


The structure of matter manifests itself as the internal dismemberment of integrity, a regular order of connection of elements within the whole. In other words, any material object, from a subatomic particle to the Metauniverse as a whole, is a system of interconnected subsystems. As a result of continuous movement, understood in the broad sense as movement in space and development in time, material objects change their states. The states of objects also change in interactions with other objects. The set of states of a material system and of all its subsystems represents information about the system.


Strictly speaking, owing to the uncertainty and infinity of its structural properties, the amount of objective information in any material object is infinite. This information is called complete. However, it is possible to single out structural levels with finite sets of states. Information that exists at a structural level with a finite number of states is called private. For private information, the concept of the amount of information is meaningful.

From the above representation the choice of the unit of measure for the amount of information follows logically and simply. Imagine a system that can be in only two equally probable states. Let us assign the code "1" to one of them and "0" to the other. This is the minimum amount of information that the system can contain. It is the unit of measurement of information and is called a bit. There are other, harder-to-define methods and units for measuring the amount of information.


Depending on the material form of the carrier, information is of two main kinds - analog and discrete. Analog information changes continuously in time and takes values from a continuum of values. Discrete information changes at certain moments of time and takes values from a certain set of values. Any material object or process is a primary source of information. All its possible states constitute the code of the information source, and the instantaneous value of the states is represented as a symbol ("letter") of this code.

For information to be transmitted from one object to another, as to a receiver, there must be some intermediate material carrier interacting with the source. Such carriers in nature are, as a rule, rapidly propagating processes of wave structure: cosmic, gamma and X-ray radiation, electromagnetic and sound waves, potentials (and perhaps waves not yet discovered) of the gravitational field. When electromagnetic radiation interacts with an object, its spectrum changes as a result of absorption or reflection, i.e. the intensities of certain wavelengths change. The harmonics of sound vibrations also change during interactions with objects. Information is also transmitted in mechanical interaction, but mechanical interaction, as a rule, leads to large changes in the structure of objects (up to their destruction), and the information is greatly distorted. Distortion of information in the course of its transmission is called disinformation.


The transfer of the source information onto the structure of a carrier is called encoding. In this process the source code is converted into the carrier code. A carrier onto which the source code has been transferred in the form of a carrier code is called a signal. The signal receiver has its own set of possible states, called the receiver code. The signal, interacting with the receiving object, changes the receiver's states. The process of converting the signal code into the receiver code is called decoding. The transfer of information from source to receiver can be regarded as an information interaction. Information interaction differs fundamentally from other interactions. In all other interactions of material objects there is an exchange of matter and (or) energy: one of the objects loses matter or energy and the other gains them. This property of interactions is called symmetry. In information interaction the receiver gains information while the source loses nothing; information interaction is not symmetrical. Objective information itself is not material; it is a property of matter, like structure or motion, and exists on material carriers in the form of its codes.
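A toy Python sketch of this chain, with invented source, carrier and receiver codes: the source state is encoded onto a carrier (becoming a signal), and decoding converts the signal into a state of the receiver's own code, while the source loses nothing.

```python
# A toy model of the chain described above: source code -> encoding ->
# signal (carrier code) -> decoding -> receiver code.  All code tables
# below are invented for illustration.

carrier_code = {"calm": "....", "storm": "-.-."}     # source state -> carrier code
receiver_code = {"....": "stay", "-.-.": "shelter"}  # carrier code -> receiver state

def encode(source_state: str) -> str:
    """Transfer the source code onto the carrier; the result is a signal."""
    return carrier_code[source_state]

def decode(signal: str) -> str:
    """The signal changes the receiver's state according to the receiver code."""
    return receiver_code[signal]

signal = encode("storm")
print(signal, "->", decode(signal))  # the receiver gains information; the source loses nothing
```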

Information in wildlife

Wildlife is complex and varied. Sources and receivers of information in it are living organisms and their cells. The organism has a number of properties that distinguish it from inanimate material objects.


Main:

Continuous exchange of matter, energy and information with the environment;

Irritability, the body's ability to perceive and process information about changes in the environment and the internal environment of the body;

Excitability, the ability to respond to the action of stimuli;

Self-organization, manifested as changes in the body to adapt to environmental conditions.


An organism, considered as a system, has a hierarchical structure. Relative to the organism itself, this structure is subdivided into internal levels: the molecular level, the cellular level, the level of organs and, finally, the organism itself. But the organism also interacts with living systems above the organism level, whose levels are the population, the ecosystem and living nature as a whole (the biosphere). Between all these levels flow not only matter and energy but also information. Information interactions in living nature occur in the same way as in inanimate nature. At the same time, in the course of evolution living nature has created a wide variety of sources, carriers and receivers of information.


The reaction to the influences of the external world is manifested in all organisms, since it is due to irritability. In higher organisms, adaptation to the environment is a complex activity that is effective only with sufficiently complete and timely information about the environment. The receivers of information from the external environment are the sense organs, which include sight, hearing, smell, taste, touch and the vestibular apparatus. In the internal structure of organisms, there are numerous internal receptors associated with the nervous system. The nervous system consists of neurons, the processes of which (axons and dendrites) are analogous to information transmission channels. The main organs that store and process information in vertebrates are the spinal cord and brain. In accordance with the characteristics of the sense organs, the information perceived by the body can be classified as visual, auditory, gustatory, olfactory and tactile.


When light falls on the retina of the human eye, the signal excites the cells that make up the retina in a particular way. Nerve impulses of the cells are transmitted through axons to the brain. The brain remembers this sensation as a certain combination of the states of its constituent neurons. (The example is continued in the section "Information in human society".) By accumulating information, the brain creates a connected information model of the surrounding world within its own structure. In living nature, an important characteristic for an organism receiving information is the information's availability. The amount of information that the human nervous system is able to deliver to the brain when reading texts is approximately 1 bit per 1/16 of a second.

The study of organisms is hampered by their complexity. The abstraction of structure as a mathematical set, acceptable for inanimate objects, is hardly acceptable for a living organism, because to create a more or less adequate abstract model of an organism one has to take into account all the hierarchical levels of its structure. It is therefore difficult to introduce a measure of the amount of information, and it is very difficult to determine the relationships between the components of the structure. Even if it is known which organ is the source of information, what is the signal and what is the receiver?


Before the advent of computers, biology, which deals with the study of living organisms, used only qualitative, i.e. descriptive, models. In a qualitative model it is practically impossible to take into account the information links between the components of the structure. Electronic computing technology has made it possible to apply new methods in biological research, in particular the method of machine modeling, which involves a mathematical description of known phenomena and processes occurring in the body, the addition of hypotheses about some unknown processes, and the calculation of possible variants of the organism's behavior. The resulting variants are compared with the actual behavior of the organism, which makes it possible to determine the truth or falsity of the hypotheses put forward. Information interaction can also be taken into account in such models. The information processes that ensure the existence of life itself are extremely complex. Although it is intuitively clear that this property is directly related to the formation, storage and transmission of complete information about the structure of the organism, an abstract description of the phenomenon seemed impossible for some time. However, the information processes that ensure the existence of this property have been partly revealed through the deciphering of the genetic code and the reading of the genomes of various organisms.

Information in human society

The development of matter in the process of motion is directed towards the complication of the structure of material objects. One of the most complex structures is the human brain. So far, this is the only structure known to us that possesses the property man himself calls consciousness. Speaking about information, we, as thinking beings, assume a priori that information, besides being present in the form of the signals we receive, also has some meaning. Forming in his mind a model of the surrounding world as an interconnected set of models of its objects and processes, a person operates with semantic concepts rather than with information as such. Meaning is the essence of a phenomenon that does not coincide with the phenomenon itself and connects it with a wider context of reality. It follows that the semantic content of information can be formed only by thinking receivers of information. In human society it is not information itself that acquires decisive importance, but its semantic content.


Example (continued). Having experienced this sensation, a person assigns the concept "tomato" to the object, and the concept "red color" to its state. In addition, his consciousness fixes the connection "tomato" - "red". This is the meaning of the received signal. (The example is continued later in this section.) The ability of the brain to create semantic concepts and connections between them is the basis of consciousness. Consciousness can be viewed as a self-developing semantic model of the surrounding world. Meaning is not information. Information exists only on a physical medium, whereas human consciousness is considered immaterial. Meaning exists in the human mind in the form of words, images and sensations. A person can pronounce words not only out loud, but also "to himself"; he can likewise create (or recall) images and sensations "to himself". However, he can recover the information corresponding to this meaning by speaking or writing the words.

Example (continued). If the words "tomato" and "red color" are the meaning of concepts, then where is the information? The information is contained in the brain in the form of certain states of its neurons. It is also contained in the printed text consisting of these words: when the letters are encoded with a three-digit binary code, it amounts to 120 bits. If the words are spoken aloud, there will be much more information, but the meaning will remain the same. The greatest amount of information is carried by a visual image; this is reflected even in folklore - "it is better to see once than to hear a hundred times". Information recovered in this way is called semantic information, since it encodes the meaning of some primary information (semantics). Hearing (or seeing) a phrase spoken (or written) in a language he does not know, a person receives information but cannot determine its meaning. Therefore, to transmit the semantic content of information, certain agreements between the source and the receiver about the semantic content of the signals, i.e. the words, are required. Such agreements can be reached through communication. Communication is one of the most important conditions for the existence of human society.

In the modern world, information is one of the most important resources and, at the same time, one of the driving forces for the development of human society. Information processes occurring in the material world, wildlife and human society are studied (or at least taken into account) by all scientific disciplines from philosophy to marketing. The increasing complexity of the tasks of scientific research has led to the need to involve large teams of scientists of various specialties in their solution. Therefore, almost all the theories considered below are interdisciplinary. Historically, two complex branches of science - cybernetics and informatics - are directly involved in the study of information.


Modern cybernetics is a multi-disciplinary branch of science that studies super-complex systems, such as:

Human society (social cybernetics);

Economics (economic cybernetics);

Living organism (biological cybernetics);

The human brain and its function - consciousness (artificial intelligence).


Informatics, which took shape as a science in the middle of the last century, separated from cybernetics and is engaged in research in the field of methods for obtaining, storing, transmitting and processing semantic information. Both of these branches use several underlying scientific theories, including information theory and its sections - coding theory, the theory of algorithms and automata theory. Studies of the semantic content of information are based on a complex of scientific theories under the general name of semiotics. Information theory is a complex, mainly mathematical theory that includes the description and evaluation of methods for extracting, transmitting, storing and classifying information. It considers information carriers as elements of an abstract (mathematical) set and interactions between carriers as a way of arranging elements in this set. This approach makes it possible to describe the code of information formally, that is, to define an abstract code and study it by mathematical methods. For these studies it applies the methods of probability theory, mathematical statistics, linear algebra, game theory and other mathematical theories.


The foundations of this theory were laid by the American scientist R. Hartley in 1928, who determined a measure of the amount of information for certain communication problems. Later the theory was significantly developed by the American scientist C. Shannon and by the Russian scientists A.N. Kolmogorov, V.M. Glushkov and others. Modern information theory includes, as sections, coding theory, the theory of algorithms, the theory of digital automata (see below) and some others. There are also alternative information theories, for example the "qualitative information theory" proposed by the Polish scientist M. Mazur. Any person is familiar with the concept of an algorithm without even knowing it. Here is an example of an informal algorithm: "Cut the tomatoes into circles or slices. Put chopped onion in them, pour over with vegetable oil, then sprinkle with finely chopped capsicum and mix. Before serving, sprinkle with salt, put in a salad bowl and garnish with parsley." (Tomato salad.)
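The same recipe can be written as a formal sequence of steps. A minimal Python sketch (the function and its data representation are ours, purely illustrative) shows that the informal instructions above already have the structure of an algorithm - a fixed order of operations on inputs:

    import random

    # The informal "tomato salad" recipe as an explicit sequence of steps.
    def tomato_salad(tomatoes, onion, oil, pepper, salt, parsley):
        salad = [f"sliced {t}" for t in tomatoes]     # cut the tomatoes
        salad.append(f"chopped {onion}")              # add chopped onion
        salad.append(oil)                             # pour over with vegetable oil
        salad.append(f"finely chopped {pepper}")      # sprinkle with capsicum
        random.shuffle(salad)                         # mix
        salad.extend([salt, parsley])                 # before serving: salt, garnish
        return salad

    print(tomato_salad(["tomato"], "onion", "vegetable oil",
                       "capsicum", "salt", "parsley"))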


The first rules for solving arithmetic problems in the history of mankind were developed by one of the famous scientists of antiquity, Al-Khwarizmi, in the 9th century AD. In his honor, formalized rules for achieving a goal are called algorithms. The subject of the theory of algorithms is finding methods for constructing and evaluating effective (including universal) computational and control algorithms for information processing. To substantiate such methods, the theory of algorithms uses the mathematical apparatus of information theory. The modern scientific concept of algorithms as ways of processing information was introduced in the works of E. Post and A. Turing in the 1930s (the Turing machine). The Russian scientists A. Markov (the normal Markov algorithm) and A. Kolmogorov made a great contribution to the development of the theory of algorithms. Automata theory is a section of theoretical cybernetics that studies mathematical models of actually existing or fundamentally possible devices that process discrete information at discrete moments of time.
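Such a device is easy to sketch in code. Below is a toy deterministic finite automaton in Python (an automaton of our own devising, for illustration only): it processes a discrete input string symbol by symbol and accepts exactly the strings containing an even number of 1s.

    # A deterministic finite automaton with two states, "even" and "odd",
    # tracking the parity of 1s read so far.
    TRANSITIONS = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }

    def accepts(word: str) -> bool:
        state = "even"                         # initial state
        for symbol in word:                    # one discrete step per symbol
            state = TRANSITIONS[(state, symbol)]
        return state == "even"                 # "even" is the accepting state

    print(accepts("1001"))  # True: two 1s
    print(accepts("1011"))  # False: three 1s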


The concept of an automaton originated in the theory of algorithms. If there are some universal algorithms for solving computational problems, then there must be devices (albeit abstract) for the implementation of such algorithms. Actually, the abstract Turing machine, considered in the theory of algorithms, is at the same time an informally defined automaton. The theoretical justification for the construction of such devices is the subject of automata theory. Automata theory uses the apparatus of mathematical theories - algebra, mathematical logic, combinatorial analysis, graph theory, probability theory, etc. Automata theory, together with the theory of algorithms, is the main theoretical basis for creating electronic computers and automated control systems. Semiotics is a complex of scientific theories that study the properties of sign systems. The most significant results have been achieved in the branch of semiotics - semantics. The subject of research in semantics is the semantic content of information.


A sign system is a system of concrete or abstract objects (signs, words), with each of which a certain value is associated in a certain way. In the theory it is proved that there can be two such correspondences. The first type of correspondence directly defines the material object that the word denotes and is called the denotation (or, in some works, the nominatum). The second type of correspondence determines the meaning of the sign (word) and is called the concept. Properties of the correspondences such as "meaning", "truth", "definability", "consequence" and "interpretation" are studied, using the apparatus of mathematical logic and mathematical linguistics. The foundations of semiotics were laid by F. de Saussure in the 19th century and were formulated and developed by C. Peirce (1839-1914), C. Morris (1901-1979), R. Carnap (1891-1970) and others. The theory's central achievement is the apparatus of semantic analysis, which represents the meaning of a text in a natural language as a record in a formalized semantic language. Semantic analysis is the basis for creating devices (programs) for machine translation from one natural language to another.

Information is stored by transferring it to material carriers. Semantic information recorded on a material storage medium is called a document. Mankind learned to store information very long ago. The most ancient forms of information storage used the arrangement of objects: shells and stones on sand, knots on a rope. A significant development of these methods was writing - the graphic representation of symbols on stone, clay, papyrus and paper. The invention of printing was of great importance in the development of this direction. Over its history, humanity has accumulated a huge amount of information in libraries, archives, periodicals and other written documents.


At present, storing information as sequences of binary characters has acquired particular importance. Various storage devices are used to implement these methods; they are the central link of information storage systems. In addition, such systems use information retrieval tools (search engines), reference tools (information and reference systems) and information display tools (output devices). Depending on the purpose of the information, such information systems form databases, data banks and knowledge bases.

The transfer of semantic information is the process of its spatial transfer from the source to the recipient (addressee). Man learned to transmit and receive information even earlier than to store it. Speech is the method of transmission that our distant ancestors used in direct contact (conversation), and we use it still. To transmit information over long distances, much more complex information processes must be used, and for such a process the information must be presented (formalized) in some way. To represent information, various sign systems are used - sets of predetermined semantic symbols: objects, pictures, written or printed words of a natural language. Semantic information about an object, phenomenon or process presented with their help is called a message.


Obviously, to transmit a message over a distance, the information must be transferred to some mobile carrier. Carriers can move in space with the help of vehicles, as happens with letters sent by mail. This method ensures complete reliability of the transmission, since the addressee receives the original message, but it requires considerable time. Since the middle of the 19th century, transmission methods have spread that use a naturally propagating carrier of information: electromagnetic oscillations (electrical oscillations, radio waves, light). Implementing these methods requires:

Preliminary transfer of the information contained in the message to the carrier - encoding;

Ensuring the transmission of the signal thus obtained to the addressee via a special communication channel;

Reverse conversion of the signal code into the message code - decoding.

The use of electromagnetic media makes the delivery of a message to the addressee almost instantaneous, however, it requires additional measures to ensure the quality (reliability and accuracy) of the transmitted information, since real communication channels are subject to natural and artificial interference. Devices that implement the process of data transmission form communication systems. Depending on the method of presenting information, communication systems can be divided into sign (telegraph, telefax), sound (telephone), video and combined systems (television). The most developed communication system in our time is the Internet.
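Real channels are subject to interference, and one standard measure against it is redundant coding. Below is a toy Python model (the flip probability and the triple-repetition scheme are arbitrary illustrative choices, not a description of any particular system): each bit is repeated three times, the channel flips some bits, and a majority vote at the receiver corrects most errors.

    import random

    def channel(bits, flip_prob=0.05):
        """A noisy channel: each bit is inverted with probability flip_prob."""
        return [b ^ (random.random() < flip_prob) for b in bits]

    def encode(bits):
        return [b for b in bits for _ in range(3)]       # repeat each bit 3 times

    def decode(bits):
        triples = [bits[i:i + 3] for i in range(0, len(bits), 3)]
        return [int(sum(t) >= 2) for t in triples]       # majority vote

    message = [random.randint(0, 1) for _ in range(1000)]
    received = decode(channel(encode(message)))
    errors = sum(m != r for m, r in zip(message, received))
    print(f"residual errors: {errors} of {len(message)}")  # far fewer than without coding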

Data processing

Since information is not material, its processing consists in various transformations of it. Any transfer of information from one carrier to another counts as processing. Information to be processed is called data. The main type of processing of primary information received by various devices is conversion into a form that can be perceived by the human senses. Thus, space photographs obtained in X-rays are converted into ordinary color photographs using special spectrum converters and photographic materials. Night vision devices convert an image obtained in infrared (thermal) rays into an image in the visible range. For some communication and control tasks, conversion of analog information is necessary; for this, analog-to-digital and digital-to-analog signal converters are used.


The most important type of processing of semantic information is determining the meaning (content) contained in a certain message. Unlike primary information, semantic information has no statistical characteristics, that is, no quantitative measure: either there is meaning or there is not, and how much of it there is, if any, cannot be established. The meaning contained in a message is described in an artificial language that reflects the semantic relationships between the words of the source text. A dictionary of such a language, called a thesaurus, resides in the message receiver. The meaning of the words and phrases of a message is determined by referring them to certain groups of words or phrases whose meaning has already been established. The thesaurus thus makes it possible to establish the meaning of a message and is, at the same time, replenished with new semantic concepts. This type of information processing is used in information retrieval systems and machine translation systems.
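A crude sketch of the idea in Python (the vocabulary and groupings are invented for illustration; real thesauri are far richer): words of a message are referred to groups whose meaning is already fixed, and the thesaurus is replenished with new concepts.

    # A toy thesaurus: word -> semantic group with already-established meaning.
    thesaurus = {
        "tomato": "vegetables", "onion": "vegetables",
        "red": "colors", "green": "colors",
    }

    def interpret(message: str):
        """Assign each word of the message to a known semantic group."""
        return {word: thesaurus.get(word, "<unknown>") for word in message.split()}

    print(interpret("red tomato"))
    # {'red': 'colors', 'tomato': 'vegetables'}

    # The thesaurus is replenished with new concepts as they are established:
    thesaurus["cucumber"] = "vegetables"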


One widespread type of information processing is the solution of computational and automatic control problems with the help of computers. Information processing is always done with a purpose, and to achieve it the order of actions on the information that leads to the given goal must be known. This procedure is called an algorithm. Besides the algorithm itself, a device that implements the algorithm is needed; in scientific theories such a device is called an automaton. The most important feature of information should be noted here: owing to the asymmetry of information interaction, new information arises during information processing, while the original information is not lost.

Analog and digital information

Sound consists of wave vibrations in a medium such as air. When a person speaks, the vibrations of the vocal cords are converted into wave vibrations of the air. If we consider sound not as a wave but as oscillations at a single point, then these oscillations can be represented as air pressure changing over time. A microphone can pick up the pressure changes and convert them into electrical voltage. Air pressure has thus been transformed into fluctuations of electrical voltage.


Such a transformation can follow various laws; most often it is linear. For example:

U(t)=K(P(t)-P_0),

where U(t) is the electrical voltage, P(t) is the air pressure, P_0 is the mean air pressure and K is the conversion factor.

Both electrical voltage and air pressure are continuous functions of time. The functions U(t) and P(t) are information about the vibrations of the vocal cords. These functions are continuous, and such information is called analog. Music is a special case of sound, and it too can be represented as a function of time; this is an analog representation of music. But music is also recorded in the form of notes. Each note has a duration that is a multiple of a predetermined unit and a pitch (do, re, mi, fa, sol, etc.). If these data are converted into numbers, we get a digital representation of music.
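Sampling and quantization turn such a continuous function into digital form. Here is a minimal Python sketch of an analog-to-digital conversion under a linear law like the one above (the sampling rate and the number of quantization levels are arbitrary illustrative choices):

    import math

    def adc(signal, duration=1.0, sample_rate=8, levels=16):
        """Sample a continuous signal(t) and quantize each sample to `levels` steps."""
        samples = []
        for n in range(int(duration * sample_rate)):
            t = n / sample_rate                      # discrete sampling instant
            u = signal(t)                            # analog value, assumed in [-1, 1]
            q = round((u + 1) / 2 * (levels - 1))    # quantize to 0 .. levels-1
            samples.append(q)
        return samples

    # A 2 Hz tone as the "analog" input.
    print(adc(lambda t: math.sin(2 * math.pi * 2 * t)))
    # a short list of small integers: discrete both in time and in value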


Human speech is also a special case of sound. It too can be represented in analog form. But just as music can be broken down into notes, speech can be broken down into letters, and if each letter is assigned its own set of numbers, we get a digital representation of speech. The difference between analog and digital information is that analog information is continuous while digital information is discrete. Transformations of information from one type to another have different names depending on the kind of transformation: simple ones are called "conversion", e.g. digital-to-analog or analog-to-digital conversion; complex transformations are called "encoding", e.g. delta coding or entropy coding; transformations between characteristics such as amplitude, frequency or phase are called "modulation", e.g. amplitude-frequency modulation or pulse-width modulation.

Analog conversions are usually quite simple and are easily handled by various devices invented by man. A tape recorder converts the magnetization of the tape into sound, a voice recorder converts sound into magnetization of the tape, a video camera converts light into magnetization of the tape, and an oscilloscope converts electrical voltage or current into an image. Converting analog information to digital is much more difficult. Some transformations the machine cannot perform at all, or performs only with great difficulty: for example, converting speech into text, or converting a recording of a concert into sheet music. Even when the target is by its nature a digital representation, the task can be hard: it is very difficult for a machine to convert text on paper into the same text in computer memory.

Why, then, use the digital representation of information if it is so difficult? The main advantage of digital information over analog is noise immunity. When information is copied, digital information is copied exactly as it is and can be copied an almost unlimited number of times, whereas analog information picks up noise during copying and its quality deteriorates. Usually analog information can be copied no more than three times. If you have a two-cassette audio tape recorder, you can perform an experiment: try re-recording the same song from cassette to cassette several times, and after a few such re-recordings you will notice how much the recording quality has deteriorated - the information on the cassette is stored in analog form. Music in mp3 format, by contrast, can be rewritten as many times as you like without the quality deteriorating, because the information in an mp3 file is stored digitally.
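The difference is easy to simulate. In the Python sketch below (the noise level and the number of copy generations are arbitrary, purely illustrative), every analog copy adds a little noise, while a digital copy re-decides each bit and so discards the accumulated noise:

    import random

    def analog_copy(signal, noise=0.05):
        """Each analog re-recording adds a little random noise."""
        return [s + random.uniform(-noise, noise) for s in signal]

    def digital_copy(signal):
        """A digital copy re-decides each bit, discarding accumulated noise."""
        return [1.0 if s > 0.5 else 0.0 for s in signal]

    original = [float(random.randint(0, 1)) for _ in range(10)]

    a = d = original
    for _ in range(20):                  # twenty generations of copies
        a = analog_copy(a)
        d = digital_copy(analog_copy(d))

    print(max(abs(x - y) for x, y in zip(original, a)))  # drift grows with each copy
    print(d == original)                                 # True: the bits survive exactly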

Amount of information

A person or some other receiver of information, having received a portion of information, resolves some uncertainty. Let's take a tree as an example. When we saw the tree, we resolved a number of uncertainties. We learned the height of the tree, the type of tree, the density of the foliage, the color of the leaves, and if it is a fruit tree, then we saw the fruits on it, how ripe they were, etc. Before we looked at the tree, we did not know all this, after we looked at the tree, we resolved the uncertainty - we got information.


If we go out into the meadow and look at it, we will get a different kind of information, how big the meadow is, how tall the grass is, and what color the grass is. If a biologist enters the same meadow, he will, among other things, be able to find out: what varieties of grass grow in the meadow, what type of meadow this is, he will see which flowers have bloomed, which ones will just bloom, whether the meadow is suitable for grazing cows, etc. That is, he will receive more information than we do, since he had more questions before he looked at the meadow, the biologist will resolve more uncertainties.

The greater the uncertainty resolved in the process of obtaining information, the more information we receive. But this is a subjective measure of the amount of information, and we would like an objective one. There is a formula for calculating the amount of information: suppose some uncertainty can be resolved in N ways (outcomes), each with some probability; then the amount of information received can be calculated by the formula Shannon proposed:

I = -(p_1 \log_2 p_1 + p_2 \log_2 p_2 + \dots + p_N \log_2 p_N), where

I is the amount of information;

N is the number of outcomes;

p_1, p_2, ..., p_N - outcome probabilities.

The amount of information is measured in bits - an abbreviation for the English words BInary digiT, which means a binary digit.

For equiprobable events, the formula can be simplified:

I = \log_2 N, where

I is the amount of information;

N is the number of outcomes.

Take, for example, a coin and toss it onto the table. It will land either heads or tails - two equally likely events. After tossing the coin, we have received \log_2 2 = 1 bit of information.

Let's find out how much information we get after rolling a die. A die has six faces - six equally likely events. We get \log_2 6 \approx 2.6: after rolling the die on the table, we have received approximately 2.6 bits of information.


The chance of seeing a Martian dinosaur when we leave our house is one in ten billion. How much information about the Martian dinosaur will we get after we leave the house?

-\left( \frac{1}{10^{10}} \log_2 \frac{1}{10^{10}} + \left(1 - \frac{1}{10^{10}}\right) \log_2 \left(1 - \frac{1}{10^{10}}\right) \right) \approx 3.4 \cdot 10^{-9} bits.

Suppose we toss 8 coins. There are 2^8 equally likely outcomes, so after tossing the coins we get \log_2 2^8 = 8 bits of information.
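All these cases can be checked numerically. A small Python sketch of Shannon's formula (the function name is ours):

    import math

    def shannon(probabilities):
        """I = -sum(p * log2 p); terms with p = 0 contribute nothing."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    print(shannon([0.5, 0.5]))             # coin: 1.0 bit
    print(shannon([1 / 6] * 6))            # die: about 2.585 bits
    print(shannon([1 / 2 ** 8] * 2 ** 8))  # 8 coins: 8.0 bits

    p = 1e-10                              # the "Martian dinosaur" event
    print(shannon([p, 1 - p]))             # about 3.4e-9 bits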

When we ask a question and are equally likely to get a yes or no answer, then after answering the question we get one bit of information.


Surprisingly, if we apply Shannon's formula to analog information, we get an infinite amount of information. For example, the voltage at a point in an electrical circuit can take any value between zero and one volt with equal probability. The number of outcomes is infinite, and substituting this into the formula for equiprobable events yields infinity - an infinite amount of information.

Now I will show how to encode "War and Peace" with just one notch on a metal rod. Let us encode all the letters and signs found in "War and Peace" with two-digit numbers - a hundred codes should be enough. For example, give the letter "A" the code "00", the letter "B" the code "01", and so on, encoding punctuation marks, Latin letters and digits as well. Recoding "War and Peace" with this code yields a long number, say 70123856383901874...; prepend a zero and a decimal point (0.70123856383901874...), and the result is a number between zero and one. Now cut a notch on a metal rod so that the ratio of the length of the left part of the rod to the length of the whole rod is exactly equal to this number. If we ever want to read "War and Peace", we simply measure the left part of the rod up to the notch and the length of the whole rod, divide one number by the other, obtain the number and recode it back into letters ("00" to "A", "01" to "B", etc.).
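The scheme can be sketched exactly with rational arithmetic. In the Python illustration below the tiny code table is invented for the example; with exact fractions the text is recovered perfectly, which is precisely what finite measurement precision forbids in the physical world:

    from fractions import Fraction

    # A toy two-digit code table (a real one would cover the whole alphabet).
    CODE = {"A": "00", "B": "01", "C": "02"}
    DECODE = {v: k for k, v in CODE.items()}

    def to_number(text: str) -> Fraction:
        """Encode text as an exact number between 0 and 1."""
        digits = "".join(CODE[ch] for ch in text)
        return Fraction(int(digits), 10 ** len(digits))

    def from_number(x: Fraction, length: int) -> str:
        """Recover a text of known length from the exact number."""
        digits = f"{int(x * 10 ** (2 * length)):0{2 * length}d}"
        return "".join(DECODE[digits[i:i + 2]] for i in range(0, len(digits), 2))

    mark = to_number("ABBA")      # where to cut the notch, 0 < mark < 1
    print(mark)                   # 101/1000000
    print(from_number(mark, 4))   # "ABBA" - recovered exactly from the ratio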

In reality we cannot do this, because we cannot measure lengths with infinite accuracy. Engineering problems prevent us from increasing the measurement accuracy indefinitely, and quantum physics shows that beyond a certain limit quantum laws will interfere. Intuitively we understand that the lower the measurement accuracy, the less information we receive, and the higher the accuracy, the more information we receive. Shannon's formula is not suitable for measuring the amount of analog information; there are other methods for this, discussed in information theory. In computer technology, a bit corresponds to a physical state of the information carrier: magnetized or not magnetized, hole or no hole, charged or not charged, reflects light or does not, high or low electrical potential. One state is usually denoted by the digit 0 and the other by the digit 1. Any information - text, image, sound, etc. - can be encoded by a sequence of bits.


Along with the bit, a unit called the byte is often used; it is usually equal to 8 bits. While a bit allows one equally likely option out of two to be chosen, a byte chooses 1 out of 256 (2^8). To measure the amount of information, larger units are also customary:

1 KB (one kilobyte) = 2^10 bytes = 1024 bytes

1 MB (one megabyte) = 2^10 KB = 1024 KB

1 GB (one gigabyte) = 2^10 MB = 1024 MB

Strictly speaking, the SI prefixes kilo-, mega- and giga- should denote factors of 10^3, 10^6 and 10^9 respectively, but the practice of using factors that are powers of two has become established historically.


A Shannon bit and a computer bit are the same if the probabilities of a zero and a one occurring in the computer bit are equal. If the probabilities are not equal, the amount of information in Shannon's sense is smaller, as we saw in the example of the Martian dinosaur; the computer amount of information is thus an upper estimate of the amount of information. Volatile memory, after power is applied to it, is usually initialized to some value, for example all ones or all zeros. Clearly, right after power-up the memory contains no information, since the values in the memory cells are strictly defined and there is no uncertainty: the memory can store a certain amount of information, but immediately after power-up it holds none.

Disinformation is deliberately false information provided to an adversary or a business partner for the more effective conduct of hostilities or cooperation, for checking for information leaks and the direction of those leaks, or for identifying potential black-market customers. Disinformation (also disinforming) is also the process of manipulating information itself: misleading someone by providing incomplete information, or complete but no longer needed information, distorting the context, or distorting part of the information.


The purpose of such an impact is always the same - the opponent must act as the manipulator needs. The act of the object against which disinformation is directed may consist in making the decision necessary for the manipulator or in refusing to make a decision that is unfavorable for the manipulator. But in any case, the ultimate goal is the action that will be taken by the opponent.

Disinformation, therefore, is a product of human activity, an attempt to create a false impression and, accordingly, push for desired actions and / or inaction.

Types of disinformation:

Misleading a specific person or group of persons (including an entire nation);

Manipulation (by the actions of one person or a group of persons);

Creating public opinion about some problem or object.

Misrepresentation is nothing more than outright deception, the provision of false information. Manipulation is a method of influence aimed directly at changing the direction of people's activity. There are the following levels of manipulation:

Strengthening values (ideas, attitudes) that already exist in people's minds and are beneficial to the manipulator;

Partial change of views on a particular event or circumstance;

A radical change in life attitudes.

The creation of public opinion is the formation in society of a certain attitude towards the chosen problem.



Information classification

Contents

Introduction

Chapter 1. Classification of information: concept, principles, criteria

1.1 Basic information classification systems

1.2 Features underlying the classification of information

2.1 International classifiers of information

2.2 All-Russian classifiers

Conclusion

List of sources used

Introduction

At present, in order to compete successfully in the market of goods and services, their manufacturers must respond quickly to the rapidly changing needs of potential consumers, ensuring the high quality of the final product at minimal production costs.

To achieve these goals, manufacturing companies have to timely implement information systems that guarantee their support. The timing, costs, and quality of information systems being created largely depend on how effectively the interaction between management specialists (customers, and subsequently users of information systems) and information specialists (information system developers) is organized.

One of the elements that play an important role in the development of modern information systems is the organization of information coding, and a special role here belongs to methods of information classification. This is because the variety of forms and values that various economic indicators can take makes it necessary to apply certain principles of systematizing this information, so as to ensure the convenience of its storage, search, processing and use in preparing management decisions.

The main goal of this work is to consider the classification of information as an integral part of the management information support, without which it is impossible to effectively and efficiently carry out management activities.

Chapter 1. Classification of information: concept, principles, criteria

1.1 Basic information classification systems

According to the federal law "On Information, Informatization and Protection of Information", information is understood as "information about persons, objects, facts, events, phenomena and processes, regardless of the form of their presentation." All information is combined into information systems - "organizationally ordered sets of documents and information technologies, including the use of computer technology and communications that implement information processes."

What is a "classification"? Classification is the division of a set of objects into subsets according to their similarity or difference in accordance with accepted methods. Classification captures regular relationships between objects in order to determine the place of the object in the system, which indicates its properties. From this point of view, classification is the most important means of creating a system for storing and searching for information. Classification is universal due to the role it can play as a tool for scientific knowledge, forecasting and management.

The basis of classification is a feature that allows you to distribute a set of objects into subsets. The classification process is the process of allocating objects of classification according to the chosen classification system.

The need for classification is associated with identifying the general properties of an information object, developing rules and procedures for processing information, reducing the volume and time of searching for the necessary information, and simplifying information processing. Classification system - a set of rules for distributing objects of a set into subsets based on classification features and dependencies within features.

A number of requirements are imposed on object classification systems: the completeness of coverage of objects in the area under consideration, the uniqueness of details, the possibility of including new objects.

Each classification system has the following basic characteristics: flexibility, capacity, depth and fullness.

1. Flexibility - the ability to include new classification features and objects in the classification system without violating its integrity.

2. Capacity - the maximum possible number of classification groupings in the system.

3. Depth - the number of permitted levels (steps), corresponding to the number of classification features.

4. Fullness - the ratio of the actual number of classification groupings to the capacity of the system.

The most widely known and used object classification systems are the hierarchical, faceted and descriptor systems.

In a hierarchical classification system, the set of objects is divided, according to the chosen classification feature, into classes (groupings) that form level I. Each class of level I is divided, according to its own classification feature, into subclasses (level II); each subclass of level II is divided into groups (level III), and so on.

When using a hierarchical classification system, the following restrictions must be observed:

taken together, the classification groupings at each level must cover the whole initial set of objects;

classification groupings at each level should not intersect;

classification at each stage should be carried out only on one basis.

The advantages of a hierarchical classification system are the simplicity and consistency of its construction and the possibility of using an unlimited number of classification features in different branches of the hierarchical structure. Its disadvantages are a rigid structure that makes changes difficult, and the impossibility of grouping objects by features not foreseen in advance.
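A hierarchical system is naturally modeled as a tree with one classification feature per level. A minimal Python sketch (the groupings are invented for the illustration):

    # Level I divides by medium, level II by content type (invented example).
    hierarchy = {
        "documents": {
            "paper": {"text": {}, "graphic": {}},
            "electronic": {"text": {}, "graphic": {}},
        },
    }

    def find(tree, path):
        """Walk down the tree, one classification feature per level."""
        node = tree
        for key in path:
            node = node[key]    # a KeyError means there is no such grouping
        return node

    print(find(hierarchy, ["documents", "electronic", "text"]))  # a leaf grouping
    # The rigidity is visible: adding a new feature (say, "language")
    # would require restructuring every branch of the tree.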

For coding indicators of predominantly evaluative type, which have a relatively simple record structure, faceted classification can be applied.

The faceted classification system allows you to divide many objects at the same time according to several criteria independent of each other. A classification feature that is used to form independent classification groupings is called a facet.

A facet is a set of homogeneous values of a classification feature. Within a facet, the values can be arranged randomly or in some order, so making changes to facets is not difficult. Classification consists in assigning an object values from the facets. The main requirement when filling facets is to exclude the possibility of the same values of classification features being repeated in different facets.

The advantages of a faceted classification system are a high degree of flexibility, the use of a large number of classification features and feature values for creating groupings, and the ease of modifying the system without changing the structure of existing groupings. Its disadvantages include the complexity of its construction and a low degree of fullness.
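A faceted system, by contrast, classifies along several mutually independent features at once. A Python sketch (the facets are invented for the illustration):

    # Independent facets; an object takes one value from each relevant facet.
    facets = {
        "medium": {"paper", "electronic"},
        "content": {"text", "graphic"},
        "stability": {"variable", "constant"},
    }

    def classify(obj):
        """Check that every feature of the object is a legal facet value."""
        for facet, value in obj.items():
            assert value in facets[facet], f"{value!r} is not in facet {facet!r}"
        return tuple(sorted(obj.items()))

    print(classify({"medium": "electronic", "content": "text"}))
    # (('content', 'text'), ('medium', 'electronic'))
    # Adding a new facet or a new value does not disturb existing groupings.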

An example of another classification system widely used in the organization of information retrieval is descriptor classification. The language of the descriptor system is close to the natural professional language for describing information objects, which is its advantage. With this classification, a set of keywords or phrases that describe a particular object of the subject area is distinguished. Among the keywords that are synonyms, one is selected, called the descriptor (the descriptor is the only member of the synonymic series of keywords). With the help of descriptors, an internal search image of specific information requests is created.

To automate the search for information between descriptors, associative links are established that carry various semantic and syntactic loads. Based on the identified links between the words that make up the language of a given subject area, the so-called semantic maps are built, reflecting the whole variety of associative relationships between descriptors. With their help, transitions from one descriptor to another, related to it in meaning, can be implemented.

The advantages of the descriptor system make it possible to address the pressing problem of developing information systems directly from the requests of management specialists, without involving professional programmers.
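A toy sketch of descriptor indexing in Python (the synonym series and the documents are invented for the illustration): synonymous keywords collapse to a single descriptor, and documents are retrieved through descriptors rather than raw words.

    # Synonym series -> the single descriptor chosen to represent it.
    DESCRIPTOR = {
        "car": "automobile", "auto": "automobile", "automobile": "automobile",
        "profit": "income", "earnings": "income", "income": "income",
    }

    documents = {
        1: "annual earnings of the auto industry",
        2: "income tax rules",
    }

    def index(text):
        """Map the keywords of a text to their descriptors."""
        return {DESCRIPTOR[w] for w in text.split() if w in DESCRIPTOR}

    inverted = {}                          # descriptor -> set of document ids
    for doc_id, text in documents.items():
        for d in index(text):
            inverted.setdefault(d, set()).add(doc_id)

    print(inverted.get("income"))          # {1, 2}: both documents, via synonyms
    print(index("profit of car makers"))   # {'income', 'automobile'}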

1.2 Features underlying the classification of information

Any classification is always relative. The same object can be classified according to different features or criteria. Often there are situations when, depending on the environmental conditions, an object can be assigned to different classification groups.

The classification of information circulating in an organization is based on the five most common features: place of origin, stage of processing, display method, stability and management function.

Main classification criteria:

1. Place of origin. By place of origin, information can be divided into input, output, internal and external. Input information is information that enters a firm or its divisions. Output information is information that goes from a firm to another firm or organization. The same information can be input for one firm and output for the firm that produces it. In relation to the object of management (a firm or its subdivision: a workshop, department or laboratory), information can be defined as internal or external: internal information arises inside the object, external information arises outside it.

2. Stage of processing. By stage of processing, information can be primary, secondary, intermediate or resulting. Primary information is information that arises directly in the course of the object's activity and is recorded at the initial stage.

Secondary information is information obtained by processing primary information; it can be intermediate or resulting.

Intermediate information is used as input for subsequent calculations.

The resulting information is obtained in the process of processing primary and intermediate information and is used to make management decisions.

3. Display method. By display method, information is divided into textual and graphic.

Text information is a set of alphabetic, numeric and special characters that represent information on a physical medium (paper, image on a display screen).

Graphic information comprises various kinds of graphs, diagrams, charts, drawings, etc.

4. Stability. In terms of stability, information can be variable (current) and constant (conditionally constant). Variable information reflects the actual quantitative and qualitative characteristics of the production and economic activities of the company. It can change for each case both in terms of purpose and quantity (for example, the number of products produced per shift, weekly costs for the delivery of raw materials). Persistent information is information that is permanent and reusable over a long period of time. Permanent information can be reference, regulatory, planned:

permanent reference information includes a description of the permanent properties of an object in the form of signs that are stable for a long time (for example, an employee's personnel number, an employee's profession, a workshop number, etc.),

constant regulatory information contains local, industry and national regulations (for example, the amount of income tax, the standard for the quality of products of a certain type, the minimum wage, etc.),

permanent planning information contains planned indicators that are reused in the company (for example, a plan for the production of televisions, a plan for training specialists of a certain qualification).

5. Management functions. Economic information is usually classified by management function. The following groups are distinguished: planning, normative and reference, accounting and operational (current) information.

Planned information - information about the parameters of the control object for the future period. This information is the focus of all activities of the company.

Regulatory and reference information contains various regulatory and reference data. It is rarely updated.

Accounting information is information that characterizes the activities of the firm over a certain past period of time. On its basis the following actions can be taken: planning information is adjusted, an analysis of the firm's economic activity is made, decisions are taken on more efficient management, and so on. In practice, bookkeeping records, statistical information and operational accounting records can all act as accounting information.

Operational (current) information is information used in operational management and characterizing production processes in the current (given) period of time. Serious requirements are imposed on operational information in terms of the speed of receipt and processing, as well as the degree of its reliability. The success of the company in the market largely depends on how quickly and efficiently it is processed.

2.1 International classifiers of information

A classifier of technical, economic and social information (TESI) is a normative document representing a systematized set of names and codes of classification groupings and (or) objects of classification.

The following levels of classifiers are distinguished:

international;

interstate (within the CIS);

all-Russian;

industry;

local.

International classifiers are part of the System of International Economic Standards (SIES) and are required for the transfer of information between different countries. The SIES is a set of standard solutions for classification groupings and the coding of special and economic information and for the formation of sources of this information. The SIES includes the classifications of the United Nations (UN) and its specialized agencies, including:

International Standard Industrial Classification of All Economic Activities (ISIC);

Central Product Classification (CPC);

Standard International Trade Classification (SITC);

Classification of functions of governing bodies (KFOU);

Classification of government functions;

Classifications of the Food and Agriculture Organization (FAO);

Classifications of the International Labor Organization (ILO);

Classifications of the United Nations Educational, Scientific and Cultural Organization (UNESCO);

International Standard Classification of Education (ISCED).

The classifications of the European Community and other international regional organizations include:

European Community (EC) classification;

Common sectoral classification of economic activity within the EU (NACE) and others.

There are the following systems of interaction of classifiers of economic information:

The system of equal classifiers is characterized by the fact that at each level of control, its own local classifier is used for the purposes of information processing, and an appropriate translator is used to receive or transmit information from the external environment. The disadvantage of this system is that the system that has the largest number of information flows from various organizations at the input should have the largest number of translators;

the system of priority classifiers is used for enterprises of the same industry. With this system, each enterprise in this industry and each level of management has local classifiers. The exchange of information is carried out in terms of a higher-level classifier. This system gives a reduction in the number of translators regardless of the number of input and output streams. However, difficulties arise in the transfer of information flows between enterprises belonging to different industries;

the system of intermediary classifiers is used in intersectoral management. At each object of any control level, processing is carried out in terms of its local classifier, and the exchange is carried out in terms of a single intermediary classifier. The advantages of such a system lie in the need to create only one translator for each enterprise and in the possibility of centralized maintenance of the intermediary classifier, which gives a minimum number of errors when encoding information;

the unified classifier system is designed to process information at all enterprises that are part of the economic macrosystem, but in reality it cannot be implemented due to the need to encode all the information that exists in the country.

2.2 All-Russian classifiers

The creation of a single information space in Russia and its integration with the European and world information space has long been one of the most important tasks, the solution of which largely determines the further development of the country. The solution to this problem is possible only if Russian and foreign information systems are harmonized and information compatibility of all interacting information systems is ensured.

Achieving information compatibility is ensured by the unification and standardization of information technology tools, information carriers, the language of a formalized description of data, the structure of information systems and technological processes in them.

The All-Russian Classifier (OK) is a classifier adopted by the State Standard of the Russian Federation and mandatory for use in certain areas of activity established by the developer in agreement with ministries and departments.

All-Russian classifiers (OK) are developed when they:

1. ensure the comparability of data in different areas and at different levels of economic activity;

2. ensure harmonization with international classifiers;

3. are informationally related to the existing all-Russian classifiers.

Classifiers operating on the territory of the Russian Federation are included in the Unified Classification and Coding System (ESKK).

All-Russian classifiers have been revised in accordance with the requirements of a market economy and the state program for the transition of the Russian Federation to the International System of Accounting and Statistics. They include the following.

1. The All-Russian classifier of information about all-Russian classifiers (OKOK). Developed by the Federal State Unitary Enterprise "All-Russian Research Institute of Classification, Terminology and Information on Standardization and Quality" (FGUP "VNIIKI") of the State Standard of Russia. OKOK is part of the national standardization system of the Russian Federation and is intended for:

Ensuring the compatibility of state information systems and resources created at the federal and regional levels of government in the Russian Federation;

Control over the composition of the all-Russian classifiers of technical, economic and social information (hereinafter referred to as the all-Russian classifiers) and the exclusion of duplication of various all-Russian classifiers and facets in them;

Reflection of information on the use of international (regional, interstate) classifications and standards in all-Russian classifiers.

The object of classification in OKOK is information about the all-Russian classifiers of technical, economic and social information and facets included in the all-Russian classifiers.

2. All-Russian classifier of types of economic activity (OKVED). Developed by the Ministry of Economic Development and Trade of the Russian Federation, Center for Economic Classifications. The All-Russian classifier of types of economic activity is part of the Unified System for the Classification and Coding of Technical, Economic and Social Information (ESKK) of the Russian Federation. It is intended for classification and coding of types of economic activity and information about them.

OKVED is used in solving problems related to:

Classification and coding of types of economic activity declared by economic entities during registration;

Determining the main and other types of economic activity actually carried out by business entities;

Development of regulatory legal acts relating to state regulation of certain types of economic activity;

The implementation of state statistical monitoring by type of activity over the development of economic processes;

Preparation of statistical information for comparisons at the international level;

Coding information about the types of economic activity in information systems and resources, the unified state register of enterprises and organizations, and other information registers;

The objects of classification in OKVED are types of economic activity. OKVED includes a list of classification groupings of types of economic activity and their descriptions.

3. The All-Russian classifier of information about the population (OKIN) is part of the Unified System for Classifying and Coding Technical, Economic and Social Information of the Russian Federation (ESKK).

OKIN is intended for use in the collection, processing and analysis of demographic, social and economic information about the population, solving problems of accounting, analysis and training of personnel by enterprises, institutions and organizations of all forms of ownership, ministries and departments. OKIN consists of facets that can be used independently of each other in solving various problems. When developing OKIN, the All-Union classifier of technical, economic and social indicators was used.

4. All-Russian classifier of public services (OKUN). The All-Russian Classifier of Services to the Population (OKUN) is an integral part of the Unified System for the Classification and Coding of Technical, Economic and Social Information (ESKK TEI). The classifier is designed to solve the following problems:

Development and improvement of standardization in the sphere of public services;

Carrying out certification of services in order to ensure the safety of life, health of consumers and environmental protection, prevent damage to consumer property;

Accounting and forecasting the volume of sales of services to the population;

Studying the demand of the population for services;

Actualization of types of services, taking into account the new socio-economic conditions in the Russian Federation.

The objects of classification are services to the population, provided by enterprises and organizations of various organizational and legal forms of ownership, using various forms and methods of service.

For the classifier of services to the population, a hierarchical classification is adopted with the division of the entire classification set of objects into groups. Then each group is divided into subgroups, which in turn are divided into types of activities according to the intended functional purpose. OKUN uses a sequential coding system.

5. All-Russian classifier of professions of workers, positions of employees and wage categories (OKPDTR);

6. All-Russian currency classifier (OKV);

7. All-Russian product classifier (OKP).

Conclusion

The classification systems considered in the work are well suited for organizing a search for the purpose of subsequent logical and arithmetic processing of information. Thanks to the use of classification systems, the unification of the perception of information and the processes of its processing in economic management systems, the standardization of processed information is ensured, which leads to a reduction in the cost of creating and operating information systems, and an increase in their efficiency.

The classification of information is necessary for a comprehensive and systematic approach to all information and, in particular, to documentation problems.

Without the classification of information it is impossible to automate management, which is of primary importance in modern conditions. In the absence of a proper classification of information, the speed, productivity and efficiency of managerial work decline.

So, the classification of information today is the most important means of creating information storage and retrieval systems, without which the effective functioning of management information support is impossible.




Information can be conditionally divided into different types on the basis of one or another of its properties or characteristics. Fig. 1.3 shows a generalized classification scheme of information, given in the literature. The classification is based on the following nine principles: the form of social consciousness, the degree of significance, the coding method, the sphere and place of origin, the processing stage, the method of display, the method of transmission and perception, and stability.

Fig. 1.3. Information classification

According to the form of social consciousness, one distinguishes economic, political, legal, scientific, aesthetic, religious and philosophical information.

Economic information is the most important kind of information; it reflects the relations of people in the process of material production and influences not only the economy but all major spheres of the social division of labor and forms of consciousness.

Political information covers, first of all, the phenomena, facts and events of the political life of society: the relations between classes, nations and states. Such information acts as an important instrument of power and control.

Legal information operates with the norms and rules established by the state in accordance with its goals and interests; it regulates the relations and behavior of people.

Scientific information is logical information obtained in the process of cognition, adequately reflecting the laws of the objective world and used in socio-historical practice.

Aesthetic information is information accessible to sensory perception and constituting an aspect of artistic images (the side of them that can be transmitted in time and space).

Religious information is the side and part of man's reflection of natural and social forces and processes in which they take the form of the supernatural.

Philosophical information is the part of information transmitted to the special sciences and other areas of human activity as worldview and methodological knowledge.

By public purpose (in order of importance), information can be divided into mass (public), special and personal.

Mass information is divided into:

socio-political (obtained from the media);

everyday (information from the process of everyday communication);

popular science (the scientifically comprehended experience of all mankind; historical, cultural and national traditions).

Special information is divided into production, technical, managerial and scientific. Technical information has gradations such as machine-tool, machine-building and instrument-making; scientific information is divided into biological, mathematical, physical, and so on.

Personal information is the knowledge, experience, intuition, skills, plans, forecasts, emotions, feelings and hereditary memory of a particular person.

By coding method, signal information can be divided into analog and digital.

An analog signal represents information about the value of the original parameter in the form of the value of another parameter that serves as the physical basis of the signal, its physical carrier. For example, the angles of the clock hands are the basis of analog time display, and the height of the mercury column in a thermometer is the parameter that gives analog temperature information: the longer the column, the higher the temperature. To display information, an analog signal uses all intermediate values of the parameter from minimum to maximum, i.e., theoretically an infinite number of them.

A digital signal uses as the physical basis for recording and transmitting information only a minimal number of parameter values, most often just two. For example, recording information in a computer is based on two states of the signal's physical carrier - electrical voltage: one state (voltage present) is conventionally denoted by one (1), the other (no voltage) by zero (0). To transfer information about the value of the original parameter, the data must therefore be represented as a combination of zeros and ones, i.e., in digital form. Interestingly, computers based on ternary arithmetic were at one time developed and used, since the three natural states of electrical voltage (negative, zero, positive) suggest themselves as basic states, and scientific papers describing such machines and the advantages of ternary arithmetic still appear. For now, however, the competition has been won by the manufacturers of binary machines. Will it always be so? Some examples of consumer digital devices: an electronic clock with a digital display gives digital time information; a calculator performs calculations on digital data; a mechanical lock with a digital code can also be called a primitive digital device.
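To make the digital representation concrete, here is a minimal sketch (Python; the function names and the 8-bit width are illustrative assumptions, not from the source) that encodes a non-negative integer as a fixed-width string of zeros and ones and decodes it back:

    # A minimal sketch: a value represented digitally as a fixed-width
    # string of zeros and ones, then decoded back (names are illustrative).
    def encode(value, bits=8):
        """Encode a non-negative integer as a string of 0s and 1s."""
        if not 0 <= value < 2 ** bits:
            raise ValueError("value out of range for the given width")
        return format(value, f"0{bits}b")

    def decode(code):
        """Decode a string of 0s and 1s back into an integer."""
        return int(code, 2)

    print(encode(23))          # '00010111'
    print(decode("00010111"))  # 23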

By sphere of origin, the following classification exists. Information that has arisen in inanimate nature is called elementary; in the world of animals and plants, biological; in human society, social. In nature, living and inanimate, information is carried by color, light, shadow, sounds and smells. As a result of the combination of color, light and shadow, and of sounds and smells, aesthetic information arises. Along with natural aesthetic information, another kind arose as a result of people's creative activity - works of art. Besides aesthetic information, human society creates semantic information as a result of cognizing the laws of nature, society and thinking. The division of information into aesthetic and semantic is obviously very conditional; it is simply necessary to understand that in some information the semantic part may prevail, and in other information the aesthetic part.

By place of origin, information can be divided into the following types.

Input information is information that enters an organization or its divisions.

Output information is information that goes from an organization to another organization (division).

Internal information arises inside an object; external information arises outside the object.

By processing stage information is divided into the following types.

Primary information is information that arises directly in the process of the object's activity and is recorded at the initial stage.

Secondary information is information that is obtained as a result of processing primary information and can be intermediate and resultant.

Intermediate information is used as input for subsequent calculations.

The resulting information is obtained in the process of processing primary and intermediate information and is used to make management decisions.

By way of display information is divided into textual and graphical.

Text information is a set of alphabetic, numeric and special characters that represent information on physical media (paper, image on the display screen).

Graphic information comprises various kinds of graphs, charts, diagrams, drawings, etc.

According to the method of transmission and perception, information is classified as follows. Information transmitted in the form of visible images and symbols is called visual; transmitted by sounds, auditory; by sensations, tactile; by smells and tastes, olfactory and gustatory. Information perceived by office equipment and computers is called machine-oriented information. The amount of machine-oriented information is constantly increasing owing to the ever-wider use of new information technologies in various spheres of human life.

About 80-90% of information a person receives through the organs of vision (visually), about 8-15% through the organs of hearing (auditory), approximately 1-5% through the other senses (smell, taste, touch).

By stability information can be variable (current) and constant (conditionally constant).

Variable information reflects the actual quantitative and qualitative characteristics of the production and economic activities of the enterprise. It can change for each case, both in purpose and in quantity.

Constant information is unchanging and reusable information over a long period of time.

Permanent information can be reference, regulatory, planned. Permanent reference information includes a description of the permanent properties of the object in the form of features that are stable for a long time. Permanent regulatory information contains local, industry and national regulations in various areas of human activity. Permanent planning information contains planned indicators of production processes that are reused at the enterprise.

There are other options for classifying information. A particular researcher chooses for himself one or another classification, depending on the problem facing him, on the relationships that he studies.

Question #1

The concept of "information". The word "information" comes from the Latin word informatio, which means information, clarification, familiarization. The concept of "information" is basic in the course of informatics, it is impossible to define it through other, more "simple" concepts.

Information properties.

1. Attributive properties are those properties without which information does not exist.

2. Pragmatic properties are those properties that characterize the degree of usefulness of information for the user, the consumer and practice; they are manifested in the process of using information.

3. Dynamic properties are those properties that characterize the change of information over time.

Question #2

The classification of information is an integral part of management information support, without which effective and efficient management activity is impossible. Categories of TESI classifiers and their status (international, all-Russian).

Signaling forms

The methods of channel separation in use can be classified into linear and non-linear (combination) methods.

In most cases of channel separation, each message source is allocated a special signal called a channel signal. The channel signals, modulated by the messages, are combined to form a group signal. If the combining operation is linear, the resulting signal is called a linear group signal.

For the unification of multichannel communication systems, the voice-frequency channel (VF channel) is taken as the basic or standard channel; it ensures the transmission of messages in an effectively transmitted frequency band of 300-3400 Hz, corresponding to the main spectrum of the telephone signal.

Multichannel systems are formed by combining VF channels into groups, usually multiples of 12 channels. In turn, "secondary multiplexing" of VF channels by telegraph channels and data transmission channels is often used.
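As a rough numerical illustration (not taken from the source), the sketch below forms a linear group signal by modulating several toy message signals onto separate carriers spaced 4 kHz apart, mimicking the VF-channel grid, and then summing the channel signals:

    # A sketch of a linear group signal (frequency-division multiplexing).
    # Sample rate, message tones and carrier frequencies are illustrative.
    import numpy as np

    fs = 64_000                         # sample rate, Hz
    t = np.arange(0, 0.01, 1 / fs)      # 10 ms of signal
    messages = [np.sin(2 * np.pi * f * t) for f in (300, 1000, 3000)]
    carriers = [8_000, 12_000, 16_000]  # one carrier per channel, 4 kHz apart

    # Each channel signal is a message modulated onto its carrier;
    # the group signal is their linear sum.
    group = sum((1 + m) * np.cos(2 * np.pi * fc * t)
                for m, fc in zip(messages, carriers))
    print(group.shape)  # (640,)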

Information classification. Forms of conveying information.

Information can be divided into types according to different criteria:

1. by truth: true and false;

2. according to the way of perception: visual - perceived by the organs of vision; auditory - perceived by the organs of hearing; tactile - perceived by tactile receptors; olfactory - perceived by olfactory receptors; gustatory - perceived by the taste buds;

3. by presentation form:

Text - transmitted in the form of symbols intended to designate lexemes of the language.

Numerical - in the form of numbers and signs denoting mathematical operations.

Graphic - in the form of images, objects, graphs.

Sound - oral or in the form of a recording, the transmission of language lexemes by auditory means.

4. by purpose:

Mass - contains trivial information and operates with a set of concepts understandable to most of society. Special - contains a specific set of concepts; when it is used, information is transmitted that may not be understood by the bulk of society. Secret - transmitted to a narrow circle of people through closed (secure) channels.

Personal (private) - a set of information about a person that determines the social position and the types of social interactions within the population.

5. by value:

Relevant - information is valuable at a given time.

Reliable - information received without distortion.

Understandable - information expressed in a language understandable to the one to whom it is intended.

Complete - information sufficient for making a correct decision or for understanding. Useful - the usefulness of information is determined by the subject who received it, depending on the range of possibilities for its use.

Transfer of information

The transfer of semantic information is the process of its spatial transfer from the source to the recipient (addressee). To transfer information over long distances, information processes must be used.

To represent information, various sign systems are used - sets of predetermined semantic symbols: objects, pictures, written or printed words of a natural language.

The semantic information about some object, phenomenon or process presented with their help is called a message. Obviously, to transmit a message over a distance, the information must be transferred to some mobile carrier. Carriers can be moved in space by vehicles; this method ensures complete reliability of transmission, since the addressee receives the original message, but it requires considerable time. Since the middle of the 19th century, methods of transmitting information using a naturally propagating carrier - electromagnetic oscillations (electrical oscillations, radio waves, light) - have become widespread. Their implementation requires: the preliminary transfer of the information contained in the message onto the carrier (coding); the transmission of the resulting signal to the addressee over a special communication channel; and the reverse conversion of the signal code into the message code (decoding). The devices that implement this data transfer process form communication systems. Depending on the method of presenting information, communication systems can be divided into sign (telegraph, telefax), sound (telephone), video and combined systems (television). The most developed communication system today is the Internet.
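A minimal sketch of this coding - channel - decoding chain (Python; an ideal, noise-free channel and UTF-8 text are assumed for illustration):

    # A toy communication chain: code a message into a bit string,
    # pass it through an ideal (noise-free) channel, and decode it back.
    def code_message(text):
        return "".join(format(b, "08b") for b in text.encode("utf-8"))

    def decode_message(bits):
        data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
        return data.decode("utf-8")

    signal = code_message("hello")   # the coded signal
    received = signal                # an ideal channel changes nothing
    print(decode_message(received))  # 'hello'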

Question #3

Information resources - in a broad sense, a set of data organized for the effective obtaining of reliable information.

These are books, articles, patents, dissertations, research and development documentation, technical translations, data on advanced manufacturing practices, etc.

Information resources (unlike all other types of resources - labor, energy, mineral, etc.) grow the faster the more they are spent.

Resources are available stocks, funds that can be used when needed. Currently, scientists and practitioners attribute information resources to important strategic resources on which the development of the economy, science, education, culture, etc. depends. The first attempts to define information resources were made in the 90s of the XX century, when the so-called "resource approach" to the study of information took shape. A narrow and broad understanding of information resources is used: in a narrow sense, only network information resources available through computer means of communication are meant, and in a broad sense, any information recorded on traditional or electronic media suitable for preservation and distribution.

Information resources can be of various types - mass media, libraries, the Internet. The following information resources can be successfully sold through the Internet:

News feeds (on-line-news). For example, a feed of financial and political news is vital for traders to make buying and selling decisions on the exchanges;

Subscriptions to electronic copies of periodicals. Some newspapers and magazines produce full electronic copies and provide access to them;

Access to electronic archives and databases containing information on a variety of issues;

Analytical reports and studies;

Own analytical materials and forecasts.

According to the category of access, information resources can be open (publicly available) or with limited access. In turn, documented information with restricted access is divided into classified as state secrets and confidential.

Classification of information systems:

In a broad sense, an information system is a set of technical, software and organizational support, together with personnel, designed to provide the right people with the right information in a timely manner ("an information system is a complex that includes computing and communication equipment, software, linguistic tools and information resources, as well as system personnel, and that supports a dynamic information model of some part of the real world to meet the information needs of users").

In a narrow sense, only a subset of IS components in the broad sense, including databases, a database management system (DBMS) and specialized application programs, is called an information system. IS in the narrow sense is considered as a software and hardware system designed to automate the purposeful activities of end users, providing, in accordance with the processing logic embedded in it, the possibility of obtaining, modifying and storing information.

IS task - satisfaction of specific information needs within a specific subject area.

4. Owing to the unconditional priority of the binary number system in the internal representation of information in a computer, character encoding is based on matching each character with a certain group of binary digits. Coding and decoding must use a uniform code, i.e., binary groups of equal length.

Let us solve the simplest problem: given a uniform code built from groups of N binary digits, how many different code combinations K can be formed? The answer is obvious: K = 2^N. Thus, at N = 6, K = 64 - obviously too few; at N = 7, K = 128 - quite enough.

However, for encoding several (at least two) natural alphabets (plus all the signs noted above), this is not enough. The minimum sufficient value of N in this case is 8: with 256 combinations of binary digits it is quite possible to solve the stated problem. Since 8 binary digits make up 1 byte, one speaks of "byte" encoding systems.
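These counts are easy to verify; the loop below is purely illustrative:

    # Number of code combinations K = 2^N for uniform binary codes of width N.
    for n in (6, 7, 8):
        print(n, 2 ** n)  # 6 64, 7 128, 8 256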

In a communication channel, a message composed of characters (letters) of one alphabet can be converted into a message of characters (letters) of another alphabet. The rule that describes the one-to-one correspondence of the letters of the alphabets is called a code. The process of converting a message is called recoding. Such a message transformation can be carried out at the moment the message arrives from the source into the communication channel (coding) and at the moment the message is received by the recipient (decoding).

Question #5

A notation (number system) is a symbolic method of writing numbers, representing numbers by means of written characters.

Notation:

§ gives representations of a set of numbers (integer and/or real);

§ gives each number a unique representation (or at least a standard representation);

§ reflects the algebraic and arithmetic structure of numbers.

§ The most common positional number systems at present are binary, octal, decimal and hexadecimal. Each positional system has a specific alphabet of digits and a base.

Number systems

A number system is a way of representing and displaying numbers using a strictly limited set of characters, each of which has a definite quantitative value. Numbers are represented by means of a certain set of characters - digits - whose number depends on the system used.

Rules of non-decimal arithmetic: the subtraction operation in binary code is replaced by the operation of addition with a negative number; this covers the addition of two positive numbers, of a positive and a negative number, of a negative and a positive number, and of two negative numbers. In general, the addition operation, together with the shift operation, is fundamental, because besides subtraction, the multiplication and division of binary numbers are also reduced to them. Division of binary numbers is performed as in the usual decimal number system: at the first step one checks whether the divisor can be subtracted from the dividend (the result must not be negative); if it can, a one is written to the quotient, otherwise a zero, and the divisor is shifted one bit to the right relative to the dividend. Then the next digit of the dividend is brought down and the test is repeated. The sign of the result is obtained by addition, as in multiplication.
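Below is a sketch of how subtraction reduces to addition, assuming 8-bit words and the two's-complement representation (the usual machine convention, although the text above does not name it):

    # Subtraction via addition in 8-bit two's-complement arithmetic.
    BITS = 8
    MASK = (1 << BITS) - 1  # 0xFF

    def neg(x):
        """Two's-complement negation: invert all bits and add 1."""
        return (~x + 1) & MASK

    def sub(a, b):
        """a - b computed as a + (-b), modulo 2^8."""
        return (a + neg(b)) & MASK

    print(sub(23, 5))                 # 18
    print(format(sub(23, 5), "08b"))  # '00010010'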

Indicator | First (1951-1954) | Second (1958-1960) | Third (1965-1966) | Fourth, A (1976-1979) | Fourth, B (1985-?) | Fifth (?)
Processor element base | Vacuum tubes | Transistors | Integrated circuits (ICs) | Large-scale ICs (LSI) | Very-large-scale ICs (VLSI) | + optoelectronics, + cryoelectronics
RAM element base | Cathode-ray tubes | Ferrite cores | Ferrite cores | LSI | VLSI | VLSI
Maximum RAM capacity, bytes | 10² | 10³ | 10⁴ | 10⁵ | 10⁷ | 10⁸ (?)
Maximum processor speed, op/s | 10⁴ | 10⁶ | 10⁷ | 10⁸ | 10⁹, + multiprocessing | 10¹², + multiprocessing
Programming languages | Machine code | + Assembler | + High-level procedural languages (HLL) | + New procedural HLL | + Non-procedural HLL | + New non-procedural HLL
Means of communication between user and computer | Control panel and punched cards | Punched cards and punched tapes | Alphanumeric terminal | Monochrome graphic display, keyboard | Color graphic display, keyboard, mouse, etc. | Voice communication devices

In 1642, Blaise Pascal designed an eight-digit adding machine. In 1820, the Frenchman Charles de Colmar created an adding machine capable of multiplication and division. All the basic ideas underlying the operation of computers were set out as early as 1833 by the English mathematician Charles Babbage. He developed the design of a machine for performing scientific and technical calculations, in which he anticipated the main devices of a modern computer as well as its tasks; control was to be carried out by a program. For input and output he proposed punched cards - sheets of thick paper with information encoded as holes. In 1888, the American engineer Herman Hollerith designed the first electromechanical calculating machine. This machine, called a tabulator, could read and sort statistical records encoded on punched cards.

In February 1944, at one of IBM's plants, in collaboration with scientists from Harvard University, the Mark 1 machine was built by order of the US Navy. It was a monster weighing about 35 tons. The Mark 1 used mechanical elements to represent numbers and electromechanical elements to control the operation of the machine. Numbers were stored in registers consisting of ten-tooth counting wheels. Each register contained 24 wheels, 23 of which were used to represent the number (i.e., the Mark 1 could "grind" numbers up to 23 digits long) and one to represent its sign. A register had a tens-carry mechanism and was therefore used not only for storing a number: a number held in one register could also be transferred to another register and added to (or subtracted from) the number located there. In total, the Mark 1 had 72 registers and, in addition, an extra memory of 60 registers formed by mechanical switches; constants - numbers that did not change during the calculations - were entered into this additional memory by hand.

Computer classification

A supercomputer is the most powerful computing system that exists in the corresponding historical period.

Mainframes are more affordable than supercomputers.

A minicomputer is used either to control technological processes or, in time-sharing mode, as the control machine of a small local network.

Microcomputers include multi-user machines, equipped with many remote terminals and operating in time-sharing mode, and embedded ones, which can control a machine tool, a subsystem of a car or another device (including military ones), being a small part of it.

The term workstation is used in several, sometimes inconsistent, senses. Thus, a workstation can be a powerful microcomputer oriented toward specialized work of a high professional level, which cannot be classed as a personal computer, if only because of its very high cost.

8) Safety precautions and rules for the operation of PC devices.

1. Independent work on a PC is permitted only to persons at least 18 years of age who have passed a medical examination and special training, have received workplace labor-protection instruction, have studied the Operation Manual and have mastered safe methods and techniques of work.

Personnel authorized to work on a PC for its adjustment and operation are obliged to:

· receive instruction in labor protection;

· familiarize themselves with the general operating rules and the labor-safety instructions contained in the Operation Manual;

· familiarize themselves with the warning labels on the covers, walls and panels of units and devices;

· familiarize themselves with the rules for operating electrical equipment.

2. The PC must be connected to a single-phase network with a nominal voltage of 220 (120) V, a frequency of 50 (60) Hz and a grounded neutral. The grounding contacts of the sockets must be securely connected to the protective earthing circuit of the room. The room must be equipped with an emergency circuit breaker or a general power-off switch.

3. It is forbidden to independently repair the PC (its blocks), if this is not part of your responsibilities.

4. During the operation of the PC, the following requirements and rules must be met:

Do not connect or disconnect the connectors and cables of the power supply when the mains voltage is applied;

Do not leave the PC turned on without supervision;

Do not leave your PC turned on during a thunderstorm;

Disconnect the PC from the network upon completion of work;

devices must be located at a distance of at least 1 m from heating appliances; workplaces must be at least 1.5 m apart;

The devices must not be exposed to direct sunlight;

the continuous duration of data-entry work on a PC should not exceed 4 hours in an 8-hour working day; after each hour of work a break of 5-10 minutes should be taken, and after 2 hours, a break of 15 minutes. The room in which computer equipment is located must be provided with fire-fighting equipment.

9. A complete set of software required to organize, say, an automated workstation (AWS) for a design engineer or a researcher (physicist, chemist, biologist, etc.) often costs more (sometimes several times more) than a computer of the corresponding class.

All kinds of software

Operating systems are a set of programs that provide:

resource management, i.e., the coordinated operation of all computer hardware;

process management, i.e., the execution of programs and their interaction with computer devices and data;

the user interface, i.e., dialogue between the user and the computer and the execution of simple commands - information-processing operations.

Programming systems;

Tool software, integrated packages;

Application programs.

10. Application programs are designed to support the use of computer technology in various fields of human activity. Application developers spend much effort on improving and modernizing popular systems; new versions support the old ones, maintaining continuity, and include a basic minimum (standard) of features.

One possible classification of the software tools that make up application software is shown in Fig. 2.11. Like almost any classification, it is not the only one possible, and it does not even include all types of application programs; nevertheless, it is useful for giving a general idea of application software.

Fig. 2.11. Application software classification

12. Personal computer operating systems have been deeply influenced by the concept of the file system that underpins the UNIX operating system. In UNIX, the I/O subsystem unifies the way you access both files and peripherals. In this case, a file is understood as a set of data on a disk, terminal, or some other device. Thus, the file system is a data management system.

File systems of operating systems create for users some virtual representation of external storage devices of computers, allowing them to work with them not at a low level, but at a high level of data sets and structures. The file system hides from programmers a picture of the real location of information in external memory, and also provides standard responses to errors. When working with files, the user is provided with tools for creating new files, operations for reading and writing information.

NTFS is the standard file system for the Microsoft Windows NT family of operating systems. NTFS replaced the FAT file system used in MS-DOS and Microsoft Windows. NTFS maintains a metadata system and uses specialized data structures to store information about files in order to improve performance, reliability and disk-space efficiency. NTFS stores information about files in the Master File Table (MFT). NTFS has built-in capabilities to restrict access to data for different users. There are several versions of NTFS: v1.2 was used in Windows NT 3.51 and Windows NT 4.0; v3.0 came with Windows 2000; v3.1 with Windows XP.

FAT is the classic file-system architecture that is used today for flash drives and memory cards; in the recent past it was used on floppy disks, hard drives and other storage media. In FAT, the size of a single file is limited to 4 GB. The system was developed by Bill Gates and Marc McDonald in 1976-1977 and was used as the main file system in the DOS and Windows operating-system families. There are three versions of FAT - FAT12, FAT16 and FAT32. They differ in the bit width of the records in the disk structure, i.e., the number of bits allocated to store a cluster number: FAT12 is mainly used for floppy disks, FAT16 for small disks, and FAT32 primarily for flash drives.
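The bit width of a cluster record directly bounds how many clusters a FAT volume can address. A quick illustrative check (note that FAT32 actually uses only 28 of its 32 bits for cluster numbers):

    # Maximum cluster counts implied by FAT cluster-number widths.
    for name, bits in (("FAT12", 12), ("FAT16", 16), ("FAT32", 28)):
        print(name, 2 ** bits)  # 4096, 65536, 268435456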

Question #11

1. An interface is a way of communication between a user and a personal computer, a user with application programs, and programs among themselves. The interface serves for the convenience of managing the computer software. Interfaces are single-tasking and multi-tasking, single-user and multi-user. Interfaces differ from each other in terms of ease of software management, that is, the way programs are launched. There are universal interfaces that allow all ways to run programs, such as Windows 3.1, Windows-95. Example: Windows-95 has all the launch methods, including the ability to launch programs using the Start button menu.

2. Types of interfaces.

2.1. Command line (text) interface.

To control the computer, a command is written (entered from the keyboard) into the command line, for example, the name of the program's batch file or service words specially reserved by the operating system. The command can be edited if necessary. Then the Enter key is pressed to execute the command. All types of operating systems have this type of interface as the main one, for example MS-DOS 6.22. As an additional tool, this type of interface has all types of software shells (Norton Commander, DOS Navigator, etc.) and Windows 3.1, Windows-95/98. The command line interface is inconvenient, since you have to remember the names of many commands, a mistake in writing even one character is unacceptable. It is rarely used in a session of direct work with the operating system or in case of failures when other methods are not possible.

2.2. Graphical full screen interface.

It has, as a rule, a menu system with hints at the top of the screen; the menu is often a pull-down (drop-down) one. To control the computer, the screen cursor or mouse cursor, after a search through the directory tree, is placed on an executable file (*.exe, *.com, *.bat), and the Enter key or the right mouse button is pressed to start the program. Different files may be highlighted in different colors or marked with different icons. Directories (folders) are distinguished from files by size or icon.

This interface is the main one for all types of software shells. Example: Norton Commander and Norton-like shells (DOS Navigator, Windows Commander, Disk Commander). Windows 3.1 (File Manager) and Windows-95/98 (My Computer and Explorer) tools have a similar interface. This interface is very convenient, especially when working with files, because it provides high speed operations. Allows you to create a custom menu, launch applications by file extension, which increases the speed of working with programs.

2.3. Graphical multi-window pictographic interface.

It is a desktop (DeskTop) on which icons (icons or program icons) lie. All operations are performed, as a rule, with the mouse. To control the computer, the mouse cursor is brought to the icon and the program is launched by clicking the left mouse button on the icon. This is the most convenient and promising interface, especially when working with programs. Example: Apple Macintosh computer interface, Windows 3.1, Windows-95/98, OS/2.

Here is a brief description and designation of some of the most common types of communication equipment/terminal equipment (DCE/DTE) interface.

V.24 is the equivalent of RS-232, the COM port, an asynchronous port (which, incidentally, can also work in synchronous mode). The port is low-speed, although motherboards have recently appeared that can operate at speeds up to 230,400 bps. Its characteristics are limited by the presence of only a single "ground" wire and by the high signal levels for logical one and zero (beyond -3 V and +3 V, respectively). The standard connector is a DB-25 or DB-9 plug on the terminal equipment (DTE, computer) and a DB-25 socket on the communication equipment (DCE, modem).

V.35 was originally developed as a standard for high-speed modems, but only the high-speed DCE/DTE interface defined by this standard has taken root. It uses low logic levels for ones and zeros and differential signal lines.

Question #13

13) Files, attributes. Formation of file names.

A file is a named area on a storage medium. A file name may be from 1 to 255 characters long; punctuation marks (except the dash) should not appear in the name. After a dot, a name extension can be added that indicates the file format (for example, .DOC); a file with the .EXE extension is launched for execution. Attributes are file characteristics such as read-only, hidden, system and archive. Files may be text, binary or graphic.

In computer science, the following definition is used: a file is a named sequence of bytes.

Working with files is implemented by means of operating systems.

The following objects have names like files and are processed in a similar way:

§ data areas (not necessarily on disk);

§ devices (both physical, ports for example; and virtual);

§ data streams (Named pipe);

§ network resources, sockets;

§ objects of the operating system.

Files of the first type historically emerged first and are most widely distributed, so often the data area corresponding to the name is also called a "file".

Attributes

Some file systems, such as NTFS, provide attributes (usually a binary yes/no value encoded by one bit). In many modern operating systems, attributes have almost no effect on the ability to access files; for that purpose, some operating systems and file systems provide access rights.

Attribute | Translation | Meaning | File systems | Operating systems
READ ONLY | read-only | writing to the file is not allowed | FAT12, FAT16, FAT32, NTFS, HPFS, VFAT | DOS, OS/2, Windows
SYSTEM | system | a file critical to the operation of the operating system | FAT12, FAT16, FAT32, NTFS, HPFS, VFAT | DOS, OS/2, Windows
HIDDEN | hidden | the file is hidden from display unless explicitly requested otherwise | FAT12, FAT16, FAT32, NTFS, HPFS, VFAT | DOS, OS/2, Windows
ARCHIVE | archive (requires archiving) | the file has been modified since the last backup or has not been copied by backup programs | FAT12, FAT16, FAT32, NTFS, HPFS, VFAT | DOS, OS/2, Windows
SUID | set user ID | the program runs on behalf of the file's owner | ext2 | Unix-like
SGID | set group ID | the program runs on behalf of the file's group (for directories: any file created in a directory with SGID set gets the directory's owner group) | ext2 | Unix-like
sticky bit | sticky bit | originally instructed the kernel not to unload a finished program from memory immediately, to avoid constantly reloading frequently used programs from disk; now used differently in different operating systems | ext2 | Unix-like

Formation of file names.

A file is a named part of a hard disk or floppy disk. A file is also a logical device, a potential source or receiver of information. The length of each file is limited only by the capacity of the computer's external memory.

Long file names

The maximum length of a file name is 255 characters, including spaces. Names may contain spaces, Cyrillic letters and other characters forbidden for DOS short names; the characters \ / : * ? " < > | remain disallowed.

The total length of the path plus the file name must not exceed 260 characters (drive name - 2 characters, root directory separator - 1 character, separator dot - 1 character, and one more service character: 5 characters plus a 255-character name give 260).

When a file is created, it is assigned two names - a long one and a short one (in the 8.3 format, according to DOS rules). The short name is formed by the following rules:

1) spaces and DOS-forbidden characters are removed from the long name; for the 8-character part, the first 6 remaining characters are taken, to which the ~ sign and the serial number of the file (among files with the same initial characters) are appended: xxxxxx~1;

2) for the 3-character extension, the first three characters after the last dot in the long name are used.
For example:

Long name -> Short name
Microsoft Windows 95.bmp -> Micros~1.bmp
Microsoft Office.tmp -> Micros~2.tmp
Coursework Ivanova I.I..doc -> Course~1.doc
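A rough sketch of the short-name rules described above (simplified; a real system also uppercases the result and handles more edge cases):

    # Simplified 8.3 short-name generation from a long file name.
    import re

    def short_name(long_name, serial=1):
        base, _, ext = long_name.rpartition(".")
        # Rule 1: drop spaces and DOS-forbidden characters, keep 6 characters.
        cleaned = re.sub(r'[ /\\:*?"<>|.,;=+\[\]]', "", base)[:6]
        # Rule 2: the extension is the first 3 characters after the last dot.
        return f"{cleaned}~{serial}.{ext[:3]}"

    print(short_name("Microsoft Windows 95.bmp"))  # Micros~1.bmp
    print(short_name("Microsoft Office.tmp", 2))   # Micros~2.tmp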
The universal Unicode encoding assigns 2 bytes to each character. Windows uses this encoding to store long file names, so a long name may require up to about 500 bytes (255 characters at maximum length). In DOS, in the FAT system, information about a file (name, size, creation date and time) is stored in a 32-byte directory entry. In Windows, the information about a file (short name, size, creation date and time) is stored in an ordinary directory entry, while the long name and the date of last access are stored in specially marked directory entries adjacent to the main one. Thus one file occupies 2 or more directory entries (21 in the case of maximum name length: one ordinary DOS entry plus the entries for the long name). Peculiarities:
1) the directory size, access time and probability of fragmentation increase;
2) the root directory of a floppy disk holds 224 entries, so it may contain only about 10 files with names of maximum length. If all the entries are used, a message about lack of memory or lack of free disk space is displayed (even if there is free space on the disk). Therefore files should be sorted into folders rather than stored in the root directory (except service files).
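The figure of 21 directory entries follows from each long-name entry holding 13 characters of the name; an illustrative check:

    # Directory entries used by a long file name in VFAT:
    # one ordinary 8.3 entry plus ceil(length / 13) long-name entries.
    import math

    def dir_entries(name_length):
        return 1 + math.ceil(name_length / 13)

    print(dir_entries(255))         # 21
    print(224 // dir_entries(255))  # about 10 max-length files in a floppy root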

File types

Different operating and/or file systems may implement different types of files; in addition, the implementation of different types may differ.

§ "Ordinary file" - a file that allows the operation of reading, writing, moving within a file

§ Catalog directory- alphabetical directory) or directory - a file containing records about the files included in it. Directories can contain entries for other directories, forming a tree structure.

§ Hard link hard link, "hardlink" tracing paper is often used) - in the general case, the same information area can have several names. Such names are called hard links (hardlinks). After creating a hardlink, it is impossible to say where the “real” file is and where the hardlink is, since the names are equal. The data area itself exists as long as at least one of the names exists. Hardlinks are only possible on one physical medium.

Electronic media are media for one-time or repeated recording (usually digital) by electrical means: CD-ROM, DVD-ROM, semiconductor memory (flash memory, etc.), floppy disks.

They have a significant advantage over paper media (sheets, newspapers, magazines) in capacity and in unit cost. For storing and providing operational (rather than long-term) information they have an overwhelming advantage; there are also substantial possibilities for presenting information in a form convenient for the consumer (formatting, sorting). Their disadvantages are the small screen size (or considerable weight) and fragility of reading devices, and dependence on power supplies.

Currently, electronic media are actively displacing paper media in all areas of life, which leads to significant savings of wood. Their disadvantage is that reading information from each type and format of media requires a corresponding reading device.

Storage devices


A medium, together with the mechanism for writing information to it and reading it (a reader), is called an information storage device (also an information accumulator, if it allows incoming information to be added to what is already stored). Such devices can be based on the most varied physical principles of recording.

In some cases (to guarantee readability when the medium is rare, etc.), the storage medium is delivered to the consumer together with the device for reading it.

Root directory

The directory that directly or indirectly includes all other directories and files of the file system is called the root directory. It is marked with " /" (slash).

The path to the file.

In order to find a file in a hierarchical file structure, you must specify the path to the file. The path to the file includes the logical name of the drive written through the separator "\" and the sequence of names of nested directories, the last of which contains the given desired file.

For example, a path to a file can be written like this:
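C:\Documents\Coursework\report.doc - an illustrative path (the original figure is not reproduced here): the file report.doc lies in the Coursework directory, which is nested in the Documents directory on drive C:.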

1. Information can be divided according to the form of presentation into 2 types:

A discrete form of information presentation is a sequence of symbols that characterizes a discontinuous, changing value (the number of traffic accidents, the number of serious crimes, etc.);

An analog or continuous form of information representation is a value that characterizes a process that does not have interruptions or gaps (human body temperature, vehicle speed on a certain section of the path, etc.).

2. According to the area of occurrence, information can be distinguished:

Elementary (mechanical), which reflects the processes, phenomena of inanimate nature;

Biological, which reflects the processes of the animal and plant world;

Social, which reflects the processes of human society.

3. According to the method of transmission and perception, the following types of information are distinguished:

Visual, transmitted by visible images and symbols;

Auditory, transmitted by sounds;

Tactile, transmitted by sensations;

Organoleptic, transmitted by smells and tastes;

Machine, issued and perceived by means of computer technology.

4. Information created and used by a person for public purposes can be divided into three types:

Personal, intended for a specific person;

Mass, intended for anyone who wants to use it (socio-political, popular science, etc.);

Special, intended for use by a narrow circle of people involved in solving complex special problems in the field of science, technology, and economics.

5. According to the coding methods, the following types of information are distinguished:

Symbolic, based on the use of symbols - letters, numbers, signs, etc. It is the simplest, but in practice it is used only to transmit simple signals about various events. An example is the green light of a street traffic light, which indicates the possibility of pedestrians or vehicle drivers starting to move.

Text, based on the use of combinations of characters. Here, as in the previous form, symbols are used: letters, numbers, mathematical signs. However, the information is contained not only in these symbols but also in their combination and in the order in which they follow one another. Thus, the words CAT and ACT consist of the same letters but contain different information. Owing to its relationship with symbols and its reflection of human speech, textual information is extremely convenient and widely used in human activity: books, brochures, magazines, various documents and records of speech are encoded in text form.

Graphic, based on the use of an arbitrary combination of graphic primitives in space. This form includes photographs, diagrams, drawings and sketches, which are of great importance in human activity.

3. Units of measurement of information

Units of measurement of information are used to measure the amount of information - a value calculated logarithmically. This means that when several objects are treated as one, the number of possible states is multiplied and the amount of information is added. It doesn't matter if we are talking about random variables in mathematics, digital memory registers in engineering, or even quantum systems in physics.

Most often, the measurement of information concerns the amount of computer memory and the amount of data transmitted via digital communication channels.

An objective approach to measuring information was first proposed by the American engineer R. Hartley in 1928 and was generalized in 1948 by the American scientist C. Shannon. Hartley considered the process of obtaining information as the selection of one message from a finite, pre-specified set of N equiprobable messages, and defined the amount of information I contained in the selected message as the binary logarithm of N.

Probability is a numerical measure of the reliability of a random event, which, with a large number of trials, is close to the ratio of the number of cases when the event occurred with a positive outcome to the total number of cases. Two events are said to be equally likely if their probabilities are the same.

Examples of equiprobable events

1. when tossing a coin: "heads came up", "tails came up"; 2. on a page of a book: "the number of letters is even", "the number of letters is odd"; 3. when throwing a die: "the number 1 came up", "the number 2 came up", "the number 3 came up", "the number 4 came up", "the number 5 came up", "the number 6 came up".

Uneven events

Let us determine whether the messages “a woman will be the first to leave the building” and “a man will be the first to leave the building” are equiprobable. It is impossible to answer this question unambiguously. First, as you know, the number of men and women is not the same. Secondly, it all depends on what kind of building we are talking about. If this is a military barracks, then for a man this probability is much higher than for a woman.

The logarithm of the number a to the base b (log b a) is equal to the exponent to which the number b must be raised to get the number a. Logarithms to base two, which are called binary logarithms, are widely used in computer science.

Hartley formula:

I = log₂N

Shannon proposed another formula for determining the amount of information that takes into account the possible unequal probability of messages in the set.

Shannon formula:

I = p_1·log₂(1/p_1) + p_2·log₂(1/p_2) + … + p_N·log₂(1/p_N),

where p_i is the probability of the i-th message.
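Both formulas are easy to evaluate numerically; a small illustrative check (Python):

    # Hartley and Shannon measures of information, in bits.
    import math

    def hartley(n):
        """I = log2(N) for N equiprobable messages."""
        return math.log2(n)

    def shannon(probs):
        """I = sum of p_i * log2(1/p_i) over message probabilities p_i."""
        return sum(p * math.log2(1 / p) for p in probs if p > 0)

    print(hartley(64))                 # 6.0
    print(shannon([0.25] * 4))         # 2.0 (reduces to Hartley's case)
    print(shannon([0.5, 0.25, 0.25]))  # 1.5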

Since each register of an arithmetic unit and each memory cell consists of homogeneous elements, and each element can be in one of two stable states (which can be identified with zero and one), C. Shannon introduced the unit of information - the bit.

A bit is too small a unit of measurement. In practice, a larger unit is often used - the byte, equal to eight bits. Exactly eight bits are required to encode any of the 256 characters of the computer keyboard alphabet (256 = 2⁸).

Even larger derived units of information are also widely used:

1 Kilobyte (KB) = 1024 bytes,

1 Megabyte (MB) = 1024 KB,

1 Gigabyte (GB) = 1024 MB.

Recently, due to the increase in the volume of processed information, such derived units as:

1 Terabyte (TB) = 1024 GB,

1 Petabyte (PB) = 1024 TB.
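A tiny conversion helper reflecting these 1024-based definitions (illustrative):

    # Converting byte counts into the 1024-based units listed above.
    UNITS = ["B", "KB", "MB", "GB", "TB", "PB"]

    def to_unit(n_bytes, unit):
        return n_bytes / 1024 ** UNITS.index(unit)

    print(to_unit(1_048_576, "MB"))      # 1.0
    print(to_unit(5 * 1024 ** 3, "GB"))  # 5.0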

As a unit of information one could instead choose the amount of information needed to distinguish, for example, ten equally probable messages; this would be not a binary unit (bit) but a decimal one (dit).