The law of averages in simple terms, or the secret of successful salespeople

The average is the most general indicator in statistics, because it can characterize a population by a quantitatively varying characteristic. For example, to compare the wages of workers at two enterprises, one cannot take the wages of two specific workers, since wages vary from worker to worker. Nor can one take the total amount of wages paid at each enterprise, since it depends on the number of employees. But if we divide the total wages of each enterprise by its number of employees, we can compare the results and determine at which enterprise the average wage is higher.

In other words, the wages of the population of workers under study receive a generalized characteristic in the average value. It expresses what is general and typical for the whole set of workers with respect to the characteristic being studied, giving a single measure of a characteristic that takes different values across the units of the population.

Definition of the average value. In statistics, the average value is a generalized characteristic of a set of similar phenomena with respect to some quantitatively varying characteristic. The average shows the level of this characteristic per unit of the population. It allows different populations to be compared with each other by a varying characteristic (per capita income, agricultural yields, production costs at different enterprises).

The average always generalizes the quantitative variation of the characteristic by which we describe the population under study, a characteristic inherent in all units of the population. This means that behind any average there is always a distribution of population units by some varying characteristic, i.e. a variation series. In this respect the average differs fundamentally from relative values, in particular from intensity indicators. An intensity indicator is the ratio of the volumes of two different aggregates (for example, GDP per capita), while an average generalizes one characteristic across the elements of a single aggregate (for example, the average wage of a worker).

Average value and the law of large numbers. Changes in average indicators reveal the general tendency under whose influence the development of a phenomenon as a whole takes shape, though in individual cases this tendency may not be clearly visible. It is important that averages be based on a mass generalization of facts: only then will they reveal the general trend underlying the process as a whole.


The essence of the law of large numbers, and its significance for averages, lies in this: as the number of observations increases, the deviations generated by random causes cancel out ever more completely. The law of large numbers thus creates the conditions under which the average reveals the typical level of a varying characteristic under specific conditions of place and time; the magnitude of that level is determined by the essence of the phenomenon itself.

Types of averages. The averages used in statistics belong to the class of power means, whose general formula is

$$\bar{x} = \left( \frac{\sum_{i=1}^{n} x_i^{k}}{n} \right)^{1/k},$$

where $\bar{x}$ is the power mean, $x_i$ are the values of the characteristic (the variants), $n$ is the number of variants, $k$ is the exponent of the mean, and $\sum$ is the summation sign.

For different values of the exponent k, different types of mean are obtained:

k = 1: arithmetic mean;

k = 2: quadratic mean (mean square);

k = 3: cubic mean;

k = −1: harmonic mean;

k → 0: geometric mean (as a limiting case).

Applied to the same initial statistical data, these different types of mean give different values. Moreover, the larger the exponent k, the larger the resulting mean: harmonic ≤ geometric ≤ arithmetic ≤ quadratic ≤ cubic.

In statistics, a correct characterization of a population in each particular case is given by only one specific type of average. To determine that type, a criterion based on the defining property of the average is used: the average is a correct generalizing characteristic of a population by a varying characteristic only if, when every variant is replaced by the average, the total volume of the varying characteristic remains unchanged. That is, the correct type of average is determined by how the total volume of the varying characteristic is formed: the arithmetic mean is used when that volume is the sum of the individual variants; the quadratic mean when it is a sum of squares; the harmonic mean when it is a sum of reciprocals of the individual variants; the geometric mean when it is a product of the individual variants. In addition to averages, statistics uses descriptive characteristics of the distribution of a varying characteristic (structural averages): the mode (the most frequent variant) and the median (the middle variant).
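To make the ordering of power means concrete, here is a minimal sketch in Python; the sample values are purely illustrative:

```python
import math

def power_mean(values, k):
    """Power mean of order k; k = 0 is treated as the limiting case, the geometric mean."""
    n = len(values)
    if k == 0:
        return math.exp(sum(math.log(v) for v in values) / n)
    return (sum(v ** k for v in values) / n) ** (1 / k)

data = [2.0, 4.0, 8.0]  # illustrative variants of a characteristic

# The larger the exponent k, the larger the mean:
for name, k in [("harmonic", -1), ("geometric", 0),
                ("arithmetic", 1), ("quadratic", 2), ("cubic", 3)]:
    print(f"{name:10s} (k = {k:2d}): {power_mean(data, k):.4f}")
```

Running it prints the means in increasing order, illustrating the rule stated above.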


What is the secret of successful salespeople? If you observe the best salespeople in any company, you will notice that they have one thing in common: each of them meets with more people and makes more presentations than the less successful salespeople. These people understand that sales is a numbers game: the more people they tell about their products or services, the more deals they will close. They understand that if they talk not only with the few who will definitely say yes, but also with those whose interest in the offer is not so great, the law of averages will work in their favor.


Your income will depend on the number of sales, and that number, in turn, is directly proportional to the number of presentations you make. Once you understand and practice the law of averages, the anxiety of starting a new business or working in a new field will begin to fade, replaced by a growing sense of control and confidence in your ability to earn. If you simply keep making presentations and hone your skills in the process, the deals will come.

Instead of thinking about the number of deals, think better about the number of presentations. There is no point in waking up in the morning or coming home in the evening and wondering who will buy your product. Instead, it's best to plan how many calls you need to make each day. And then, no matter what - make all these calls! This approach will make your work easier - because it is a simple and specific goal. If you know that you have a specific and achievable goal, it will be easier for you to make the planned number of calls. If you hear “yes” a couple of times during this process, so much the better!

And if it's "no," then in the evening you will feel that you honestly did everything you could, and you will not be tormented by thoughts of how much money you earned or how many partners you acquired that day.

Let's say that in your company or business the average salesperson closes one deal per four presentations. Now imagine that you are drawing cards from a deck. Each card of three of the suits, spades, diamonds and clubs, is a presentation in which you professionally present a product, service or opportunity. You do it as well as you can, but you still don't close the deal. And each heart is a deal that brings you money or a new partner.

In such a situation, wouldn't you want to draw as many cards as possible? Suppose you are offered the chance to draw as many cards as you want, and you are paid, or gain a new partner, every time you draw a heart. You would start drawing cards enthusiastically, barely noticing the suit of the card you just pulled.

You know that in a deck of fifty-two cards there are thirteen hearts. And in two decks there are twenty-six heart cards, and so on. Will you be disappointed when you draw spades, diamonds or clubs? Of course not! You will only think that each such “miss” brings you closer to what? To the heart card!

But you know what? You have already been given such an offer. You are in a unique position to earn as much as you want and to draw as many hearts as you want in your life. And if you simply "draw cards" conscientiously, improve your skills and endure a few spades, diamonds and clubs along the way, you will become an excellent salesperson and achieve success.

One of the things that makes sales so much fun is that every time you shuffle the deck, the cards come out differently. Sometimes all the hearts are at the beginning of the deck, and after a lucky streak (when it seems we can never lose!) a long run of cards of other suits awaits us. At other times, to reach the first heart you have to get through an endless number of spades, clubs and diamonds. And sometimes cards of different suits come out strictly in turn. But in any case, in every deck of fifty-two cards, in some order, there are always thirteen hearts. Just keep drawing cards until you find them.
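The card analogy is easy to test numerically. A minimal sketch, assuming each presentation independently closes with probability 0.25 (the "one deal per four presentations" ratio from the text; the rate is an illustrative assumption, not real sales data):

```python
import random

CLOSE_RATE = 0.25  # assumed: one deal per four presentations

def deals(presentations):
    """Each presentation independently 'draws a heart' with probability CLOSE_RATE."""
    return sum(random.random() < CLOSE_RATE for _ in range(presentations))

for n in (4, 40, 400, 40_000):
    d = deals(n)
    print(f"{n:6d} presentations -> {d:6d} deals (rate {d / n:.3f})")
```

With a handful of presentations the rate jumps around (a lucky streak or a drought); with thousands it settles near 0.25, which is exactly the law of averages the text appeals to.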




The words "large numbers" refer to the number of trials: one considers a large number of values of a random variable, or the cumulative effect of a large number of random variables. The essence of the law is this: although it is impossible to predict what value an individual random variable will take in a single experiment, the total result of the action of a large number of independent random variables loses its random character and can be predicted almost reliably (i.e. with high probability). For example, it is impossible to predict which side a single coin will land on. But if you toss 2 tons of coins, you can say with great confidence that the total weight of the coins that landed heads up is close to 1 ton.

The law of large numbers rests primarily on the so-called Chebyshev inequality, which estimates, for a single trial, the probability that a random variable takes a value deviating from its mean by no more than a given amount.

Chebyshev's inequality. Let X be an arbitrary random variable, a = M(X) its mathematical expectation, and D(X) its variance. Then

$$P\bigl(|X - a| < \varepsilon\bigr) \;\ge\; 1 - \frac{D(X)}{\varepsilon^2}.$$

Example. The nominal (i.e. required) diameter of a bushing turned on a lathe is 5 mm, and the variance is no more than 0.01 (the accuracy tolerance of the machine). Estimate the probability that, in manufacturing one bushing, the deviation of its diameter from the nominal value will be less than 0.5 mm.

Solution. Let the r.v. X be the diameter of the manufactured bushing. By the condition, its mathematical expectation equals the nominal diameter (assuming no systematic error in the machine settings): a = M(X) = 5, and the variance D(X) ≤ 0.01. Applying Chebyshev's inequality with ε = 0.5, we get:

$$P\bigl(|X - 5| < 0.5\bigr) \;\ge\; 1 - \frac{0.01}{0.5^2} = 0.96.$$

Thus, the probability of such a deviation is quite high, and we can conclude that when a single part is produced, it is almost certain that the deviation of the diameter from the nominal value will not exceed 0.5 mm.
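The bound can be checked by simulation. A sketch assuming, for concreteness, that the diameter is normally distributed with the tolerance variance 0.01; Chebyshev's inequality itself assumes no particular distribution:

```python
import random

A, D, EPS = 5.0, 0.01, 0.5   # nominal diameter, variance, tolerance (mm)
N = 100_000                  # number of simulated bushings

sigma = D ** 0.5
hits = sum(abs(random.gauss(A, sigma) - A) < EPS for _ in range(N))

print(f"Chebyshev lower bound : {1 - D / EPS**2:.2f}")  # 0.96
print(f"Empirical frequency   : {hits / N:.4f}")        # ~1.0 under the normal model
```

The empirical frequency comes out far above 0.96: Chebyshev's bound is universal over all distributions and therefore conservative.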

In its meaning, the standard deviation σ characterizes the average deviation of a random variable from its center (i.e. from its mathematical expectation). Since this is an average deviation, larger deviations are possible in individual trials. How large can they practically be? When studying normally distributed random variables we derived the "three sigma" rule: in a single trial, a normally distributed random variable X practically does not deviate from its mean by more than 3σ, where σ = σ(X) is the standard deviation of X. We derived this rule from the inequality

$$P\bigl(|X - a| < 3\sigma\bigr) \;\ge\; 0.9973.$$

Let us now estimate, for an arbitrary random variable X, the probability of taking a value that differs from the mean by no more than three standard deviations. Applying Chebyshev's inequality with ε = 3σ and using D(X) = σ², we get:

$$P\bigl(|X - a| < 3\sigma\bigr) \;\ge\; 1 - \frac{\sigma^2}{(3\sigma)^2} = 1 - \frac{1}{9} \approx 0.89.$$

Thus, in the general case the probability that a random variable deviates from its mean by no more than three standard deviations can be estimated by the number 0.89, whereas for a normal distribution this is guaranteed with probability 0.997.
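A short sketch comparing the universal Chebyshev bound 8/9 ≈ 0.89 with what specific distributions actually deliver for three-sigma deviations (the two distributions here are illustrative choices):

```python
import random

N = 200_000

def freq_within_3sigma(sample, mean, sigma):
    """Empirical frequency of |X - mean| < 3*sigma over N draws."""
    return sum(abs(sample() - mean) < 3 * sigma for _ in range(N)) / N

print("Chebyshev bound:", 1 - 1 / 9)                                             # ~0.889
print("normal N(0,1)  :", freq_within_3sigma(lambda: random.gauss(0, 1), 0, 1))  # ~0.997
print("uniform [0,1]  :", freq_within_3sigma(random.random, 0.5, 12 ** -0.5))    # 1.0
```

Both distributions stay well above the 0.89 floor, which is all that Chebyshev's inequality promises.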

Chebyshev's inequality can be generalized to a system of independent identically distributed random variables.

Generalized Chebyshev inequality. If independent random variables X₁, X₂, …, Xₙ have the same mathematical expectations M(Xᵢ) = a and the same variances D(Xᵢ) = D, then

$$P\left(\left|\frac{X_1 + \dots + X_n}{n} - a\right| < \varepsilon\right) \;\ge\; 1 - \frac{D}{n\varepsilon^2}.$$

For n = 1 this inequality turns into the Chebyshev inequality formulated above.

Chebyshev's inequality, besides having independent value for solving particular problems, is used to prove the so-called Chebyshev theorem. We first describe the essence of this theorem and then give its formal statement.

Let X₁, X₂, …, Xₙ be a large number of independent random variables with mathematical expectations M(X₁) = a₁, …, M(Xₙ) = aₙ. Although each of them may, as the result of an experiment, take a value far from its mean (i.e. its mathematical expectation), the random variable equal to their arithmetic mean,

$$\bar{X} = \frac{X_1 + X_2 + \dots + X_n}{n},$$

will most likely take a value close to the fixed number

$$\bar{a} = \frac{a_1 + a_2 + \dots + a_n}{n}$$

(the average of all the mathematical expectations). This means the following. Suppose that, as a result of a trial, the independent random variables X₁, X₂, …, Xₙ (there are many of them!) took the values x₁, x₂, …, xₙ respectively. While these values themselves may turn out to be far from the means of the corresponding random variables, their average

$$\bar{x} = \frac{x_1 + x_2 + \dots + x_n}{n}$$

will most likely be close to the number ā. Thus, the arithmetic mean of a large number of random variables loses its random character and can be predicted with great accuracy. The explanation is that the random deviations of the Xᵢ from the aᵢ can be of different signs, so in the sum they are most likely to cancel out.

Chebyshev's theorem (the law of large numbers in Chebyshev's form). Let X₁, X₂, …, Xₙ be a sequence of pairwise independent random variables whose variances are bounded by one and the same number. Then, however small a number ε we take, the probability of the inequality

$$\left|\frac{X_1 + \dots + X_n}{n} - \frac{a_1 + \dots + a_n}{n}\right| < \varepsilon$$

(here aᵢ = M(Xᵢ)) will be as close to one as desired if the number n of random variables is taken large enough. Formally, this means that under the conditions of the theorem

$$\lim_{n\to\infty} P\left(\left|\frac{X_1 + \dots + X_n}{n} - \frac{a_1 + \dots + a_n}{n}\right| < \varepsilon\right) = 1.$$

This type of convergence is called convergence in probability and is denoted:

$$\frac{X_1 + \dots + X_n}{n} \;\xrightarrow{P}\; \frac{a_1 + \dots + a_n}{n}.$$

Thus, Chebyshev's theorem says that if there are sufficiently many independent random variables, their arithmetic mean in a single trial will almost certainly take a value close to the mean of their mathematical expectations.

Most often, Chebyshev's theorem is applied in situations where random variables X 1 , X 2 , … , X n have the same distribution (i.e. the same distribution law or the same probability density). In fact, it is simply a large number of instances of the same random variable.

Corollary (the law of large numbers for identically distributed variables). If independent random variables X₁, X₂, …, Xₙ have the same distribution with mathematical expectations M(Xᵢ) = a and variances D(Xᵢ) = D, then

$$\lim_{n\to\infty} P\left(\left|\frac{X_1 + \dots + X_n}{n} - a\right| < \varepsilon\right) = 1,$$

i.e.

$$\frac{X_1 + \dots + X_n}{n} \;\xrightarrow{P}\; a.$$

The proof follows from the generalized Chebyshev inequality by passing to the limit as n → ∞.

Let us note once again that the equalities written above do not guarantee that the value of

$$\bar{X} = \frac{X_1 + \dots + X_n}{n}$$

tends to a as n → ∞. This quantity remains a random variable, and its individual values may be quite far from a. But the probability of such values (far from a) tends to 0 as n increases.

Comment. The conclusion of the corollary obviously also holds in the more general case where the independent random variables X₁, X₂, …, Xₙ have different distributions but the same mathematical expectation (equal to a) and jointly bounded variances. This allows us to predict the accuracy of measuring a quantity even when the measurements are made by different instruments.

Let us consider in more detail the application of this corollary to measurement. Suppose some device is used to make n measurements of the same quantity, whose true value is a and is unknown to us. The results of the measurements X₁, X₂, …, Xₙ may differ significantly from one another (and from the true value a) owing to various random factors (pressure changes, temperature, random vibration, etc.). Consider the r.v. X, the instrument reading in a single measurement of the quantity, together with the r.v.'s X₁, X₂, …, Xₙ, the instrument readings at the first, second, …, last measurement. Each of X₁, X₂, …, Xₙ is just one instance of the r.v. X, and therefore they all have the same distribution as X. Since the measurement results do not depend on each other, X₁, X₂, …, Xₙ can be considered independent. If the device introduces no systematic error (for example, the zero of the scale is not "off", the spring is not stretched, etc.), we may assume that M(X) = a, and hence M(X₁) = … = M(Xₙ) = a. Thus the conditions of the corollary are satisfied, and as an approximate value of a we can take the "realization" of the random variable X̄ in our experiment (consisting of a series of n measurements), i.e.

$$a \approx \frac{X_1 + \dots + X_n}{n}.$$

With a large number of measurements, good accuracy of this formula is practically certain. This justifies the practical principle that with a large number of measurements their arithmetic mean differs little from the true value of the measured quantity.
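A minimal sketch of this principle, assuming measurement errors are Gaussian with σ = 1 and a hypothetical true value of 10 (both assumptions are illustrative):

```python
import random

TRUE_A = 10.0  # hypothetical true value of the measured quantity

def measure():
    """One instrument reading: the true value plus a random, unbiased error."""
    return TRUE_A + random.gauss(0, 1)

for n in (1, 10, 100, 10_000):
    mean = sum(measure() for _ in range(n)) / n
    print(f"n = {n:5d}: mean = {mean:8.4f}, |error| = {abs(mean - TRUE_A):.4f}")
```

As n grows, the error of the averaged estimate shrinks toward zero, exactly as the corollary predicts.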

The "sampling" method widely used in mathematical statistics rests on the law of large numbers: it allows one to obtain the objective characteristics of a random variable with acceptable accuracy from a relatively small sample of its values. This will be discussed in the next section.

Example. A certain quantity a is measured by a device with no systematic distortion, first once (obtaining the value X₁) and then 99 more times (obtaining the values X₂, …, X₁₀₀). As the true value a one takes first the result of the first measurement, ā = X₁, and then the arithmetic mean of all the measurements, ā = (X₁ + … + X₁₀₀)/100. The accuracy of the device is such that the standard deviation σ of a measurement is at most 1 (hence the variance D = σ² also does not exceed 1). For each method of measurement, estimate the probability that the measurement error does not exceed 2.

Solution. Let the r.v. X be the instrument reading in a single measurement. Then by the condition M(X) = a. To answer the questions posed, we apply the generalized Chebyshev inequality

$$P\left(\left|\frac{X_1 + \dots + X_n}{n} - a\right| < \varepsilon\right) \;\ge\; 1 - \frac{D}{n\varepsilon^2}$$

with ε = 2, first for n = 1 and then for n = 100. In the first case we get

$$P\bigl(|X_1 - a| < 2\bigr) \;\ge\; 1 - \frac{1}{1\cdot 2^2} = 0.75,$$

and in the second

$$P\left(\left|\frac{X_1 + \dots + X_{100}}{100} - a\right| < 2\right) \;\ge\; 1 - \frac{1}{100\cdot 2^2} = 0.9975.$$

Thus, the second method practically guarantees the specified measurement accuracy, while the first leaves considerable doubt.
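A sketch checking both estimates by simulation, again assuming (for concreteness) Gaussian measurement errors with σ = 1 and a hypothetical true value a = 10:

```python
import random

TRUE_A, SIGMA, EPS, TRIALS = 10.0, 1.0, 2.0, 20_000

def success_rate(n):
    """Fraction of experiments in which the mean of n measurements errs by less than EPS."""
    ok = 0
    for _ in range(TRIALS):
        mean = sum(random.gauss(TRUE_A, SIGMA) for _ in range(n)) / n
        ok += abs(mean - TRUE_A) < EPS
    return ok / TRIALS

print("n = 1  : Chebyshev bound 0.75  , empirical", success_rate(1))
print("n = 100: Chebyshev bound 0.9975, empirical", success_rate(100))
```

For Gaussian errors the true probabilities (about 0.954 and about 1) exceed the bounds, as they must: Chebyshev's inequality only gives a guaranteed floor.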

Let us apply these statements to the random variables that arise in the Bernoulli scheme. Recall its essence: n independent trials are performed, in each of which some event A can occur with the same probability p, and q = 1 − p (the probability of the opposite event, i.e. that A does not occur). Suppose we carry out n such trials and consider the random variables X₁, the number of occurrences of event A in the 1st trial, …, Xₙ, the number of occurrences of event A in the nth trial. Each of these variables can take the value 0 or 1 (event A occurs in the trial or not), the value 1 being taken in each trial with probability p (the probability of event A occurring) and the value 0 with probability q = 1 − p. Therefore these variables have the same distribution law:

Xᵢ takes the value 0 with probability q and the value 1 with probability p (i = 1, …, n).

Hence the means and variances of these variables are also the same: M(X₁) = 0·q + 1·p = p, …, M(Xₙ) = p; D(X₁) = (0²·q + 1²·p) − p² = p(1 − p) = pq, …, D(Xₙ) = pq. Substituting these values into the generalized Chebyshev inequality, we obtain

$$P\left(\left|\frac{X_1 + \dots + X_n}{n} - p\right| < \varepsilon\right) \;\ge\; 1 - \frac{pq}{n\varepsilon^2}.$$

Clearly, the r.v. X = X₁ + … + Xₙ is the number of occurrences of event A in all n trials ("the number of successes" in n trials). Suppose that in the n trials conducted, event A occurred in k of them. Then the previous inequality can be written as

$$P\left(\left|\frac{k}{n} - p\right| < \varepsilon\right) \;\ge\; 1 - \frac{pq}{n\varepsilon^2}.$$

But the quantity Wₙ(A) = k/n, equal to the ratio of the number of occurrences of event A in n independent trials to the total number of trials, was earlier called the relative frequency of event A in n trials. Hence the inequality

$$P\bigl(|W_n(A) - p| < \varepsilon\bigr) \;\ge\; 1 - \frac{pq}{n\varepsilon^2}.$$

Passing now to the limit as n → ∞, we get

$$\lim_{n\to\infty} P\bigl(|W_n(A) - p| < \varepsilon\bigr) = 1,$$

i.e. Wₙ(A) → p in probability. This constitutes the law of large numbers in Bernoulli's form. It follows that for a sufficiently large number of trials n, arbitrarily small deviations of the relative frequency Wₙ(A) of an event from its probability p are almost certain, and large deviations almost impossible. This conclusion about the stability of relative frequencies (of which we previously spoke as an experimental fact) justifies the statistical definition of probability introduced earlier: the probability of an event is the number around which its relative frequency fluctuates.

Considering that the expression pq = p(1 − p) = p − p² does not exceed 1/4 on the interval 0 ≤ p ≤ 1 (as is easy to verify by finding the maximum of this function on the segment), from the above inequality we easily obtain

$$P\bigl(|W_n(A) - p| < \varepsilon\bigr) \;\ge\; 1 - \frac{1}{4n\varepsilon^2},$$

which is used in solving the corresponding problems (one of them is given below).

Example. A coin is tossed 1000 times. Estimate the probability that the deviation of the relative frequency of heads from its probability is less than 0.1.

Solution. Applying the inequality

$$P\bigl(|W_n(A) - p| < \varepsilon\bigr) \;\ge\; 1 - \frac{1}{4n\varepsilon^2}$$

with p = q = 1/2, n = 1000, ε = 0.1, we get

$$P\bigl(|W_{1000}(A) - 0.5| < 0.1\bigr) \;\ge\; 1 - \frac{1}{4\cdot 1000\cdot 0.01} = 0.975.$$

Example. Estimate the probability that, under the conditions of the previous example, the number k of heads falls between 400 and 600.

Solution. The condition 400 < k < 600 means that 400/1000 < k/n < 600/1000, i.e. 0.4 < Wₙ(A) < 0.6, or |W₁₀₀₀(A) − 0.5| < 0.1. As we have just seen in the previous example, the probability of such an event is at least 0.975.

Example. To estimate the probability of some event A, 1000 experiments were carried out, in which event A occurred 300 times. Estimate the probability that the relative frequency (equal to 300/1000 = 0.3) is within 0.1 of the true probability p.

Solution. Applying the above inequality

$$P\bigl(|W_n(A) - p| < \varepsilon\bigr) \;\ge\; 1 - \frac{1}{4n\varepsilon^2}$$

for n = 1000, ε = 0.1, we get

$$P\bigl(|W_{1000}(A) - p| < 0.1\bigr) \;\ge\; 1 - \frac{1}{4\cdot 1000\cdot 0.01} = 0.975.$$
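A sketch verifying the 0.975 bound for the 1000-toss case by repeating the whole experiment many times (a fair coin, p = q = 1/2, matches the earlier example):

```python
import random

N_TOSSES, EPS, EXPERIMENTS, p = 1_000, 0.1, 5_000, 0.5

within = 0
for _ in range(EXPERIMENTS):
    k = sum(random.random() < p for _ in range(N_TOSSES))
    within += abs(k / N_TOSSES - p) < EPS

print("Chebyshev bound    :", 1 - 1 / (4 * N_TOSSES * EPS**2))  # 0.975
print("Empirical frequency:", within / EXPERIMENTS)             # ~1.0
```

The empirical frequency is essentially 1: for a fair coin, a deviation of 0.1 is more than six standard deviations of k/n, so the distribution-free bound of 0.975 is very conservative here.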

Law of Large Numbers

The law of large numbers in probability theory states that the empirical mean (arithmetic mean) of a sufficiently large finite sample from a fixed distribution is close to the theoretical mean (mathematical expectation) of that distribution. Depending on the type of convergence, one distinguishes the weak law of large numbers, where convergence is in probability, and the strong law of large numbers, where convergence is almost sure (almost everywhere).

There is always a number of trials for which, with any probability given in advance, the relative frequency of occurrence of an event differs as little as desired from its probability.

The general meaning of the law of large numbers is that the combined action of a large number of random factors leads to a result that is almost independent of chance.

Methods for estimating probability based on finite sample analysis are based on this property. A clear example is the forecast of election results based on a survey of a sample of voters.

Weak law of large numbers

Let X₁, X₂, … be an infinite sequence of identically distributed and uncorrelated random variables defined on the same probability space; that is, cov(Xᵢ, Xⱼ) = 0 for i ≠ j. Let μ = M(Xᵢ). Denote the sample mean of the first n terms:

$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i.$$

Then X̄ₙ → μ in probability.

Strong law of large numbers

Let X₁, X₂, … be an infinite sequence of independent identically distributed random variables defined on the same probability space. Let μ = M(Xᵢ). Denote the sample mean of the first n terms:

$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i.$$

Then X̄ₙ → μ almost surely.
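The strong law concerns a single trajectory: along one and the same sequence of trials, the running mean settles down. A minimal sketch with die rolls (expectation 3.5):

```python
import random

# Running mean of one sequence of die rolls: a single trajectory
# approaching the expectation (1 + 2 + ... + 6) / 6 = 3.5.
total = 0
checkpoints = {10, 100, 1_000, 10_000, 100_000}
for i in range(1, 100_001):
    total += random.randint(1, 6)
    if i in checkpoints:
        print(f"after {i:6d} rolls: running mean = {total / i:.4f}")
```

Unlike the weak law, which speaks of probabilities at a fixed large n, here it is the single running mean itself that converges as the rolls accumulate.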

