The probability of a random event. Independence of events

In economics, as in other areas of human activity or in nature, we constantly have to deal with events that cannot be accurately predicted. Thus, the sales volume of a product depends on demand, which can vary significantly, and on a number of other factors that are almost impossible to take into account. Therefore, when organizing production and carrying out sales, you have to predict the outcome of such activities on the basis of either your own previous experience, or similar experience of other people, or intuition, which to a large extent also relies on experimental data.

To evaluate the event in question, it is necessary to take into account, or specially arrange, the conditions under which this event is recorded.

The realization of certain conditions or actions under which the event in question is recorded is called an experience, or experiment.

An event is called random if, as a result of the experience, it may or may not occur.

An event is called reliable if it necessarily occurs as a result of the given experience, and impossible if it cannot occur in this experience.

For example, snowfall in Moscow on November 30 is a random event. The daily sunrise can be considered a reliable event. Snowfall at the equator can be considered an impossible event.

One of the main tasks in probability theory is the task of determining a quantitative measure of the possibility of an event occurring.

Algebra of events

Events are called incompatible if they cannot be observed together in the same experience. Thus, having exactly two cars and exactly three cars on sale in the same store at the same time are two incompatible events.

The sum of events is an event consisting of the occurrence of at least one of these events.

An example of the sum of events is the presence of at least one of two products in the store.

The product of events is an event consisting of the simultaneous occurrence of all of these events.

An event consisting of the appearance of two goods in a store at the same time is the product of two events: the appearance of the first product and the appearance of the second product.

Events form a complete group of events if at least one of them is sure to occur in experience.

Example. The port has two berths for receiving ships. Three events can be considered: - the absence of ships at the berths, - the presence of one ship at one of the berths, - the presence of two ships at two berths. These three events form a complete group of events.

Two uniquely possible events that form a complete group are called opposite.

If one of a pair of opposite events is denoted by A, then the opposite event is usually denoted by Ā.

Classical and statistical definitions of event probability

Each of the equally possible results of tests (experiments) is called an elementary outcome. They are usually designated by letters. For example, a die is thrown. There can be a total of six elementary outcomes based on the number of points on the sides.

From elementary outcomes you can create a more complex event. Thus, the event of an even number of points is determined by three outcomes: 2, 4, 6.

A quantitative measure of the possibility of the occurrence of the event in question is probability.

The most widely used definitions of the probability of an event are the classical and the statistical.

The classical definition of probability is associated with the concept of a favorable outcome.

The outcome is called favorable to a given event if its occurrence entails the occurrence of this event.

In the above example, the event in question, an even number of points on the rolled side, has three favorable outcomes, while the total number of possible outcomes is six. This means that the classical definition of the probability of an event can be used here.

Classical definition. The probability of an event equals the ratio of the number of favorable outcomes to the total number of possible outcomes:

P(A) = m / n,     (1.1)

where P(A) is the probability of the event, m is the number of outcomes favorable to the event, and n is the total number of possible outcomes.

In the considered example P(A) = 3/6 = 1/2.
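As a sketch, formula (1.1) for the die example can be checked by counting outcomes directly (Python is used here purely for illustration):

```python
from fractions import Fraction

# One roll of a fair die: six equally possible elementary outcomes.
outcomes = [1, 2, 3, 4, 5, 6]

# Outcomes favorable to the event "an even number of points".
favorable = [x for x in outcomes if x % 2 == 0]

# Classical definition (1.1): P(A) = m / n.
p = Fraction(len(favorable), len(outcomes))
print(p)  # 1/2
```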

The statistical definition of probability is associated with the concept of the relative frequency of occurrence of an event in experiments.

The relative frequency of occurrence of an event is calculated using the formula

P*(A) = m / n,

where m is the number of occurrences of the event in a series of n experiments (tests).

Statistical definition. The probability of an event is the number around which the relative frequency stabilizes (sets) with an unlimited increase in the number of experiments.

In practical problems, the probability of an event is taken to be the relative frequency for a sufficiently large number of trials.
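The stabilization of the relative frequency can be illustrated with a short simulation (a sketch; the random seed and sample sizes are arbitrary choices, not from the text):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# Relative frequency of the event "a six is rolled"
# for a growing series of experiments.
for n in (100, 10_000, 1_000_000):
    m = sum(1 for _ in range(n) if random.randint(1, 6) == 6)
    print(n, round(m / n, 4))  # approaches 1/6 ≈ 0.1667
```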

From these definitions of the probability of an event it is clear that the inequality

0 ≤ P(A) ≤ 1

is always satisfied.

To determine the probability of an event based on formula (1.1), combinatorics formulas are often used, which are used to find the number of favorable outcomes and the total number of possible outcomes.

When assessing the probability of any random event, it is very important to understand whether the probability of the event of interest depends on how other events develop. In the classical scheme, when all outcomes are equally probable, we can estimate the probability of an individual event on its own, even if that event is a complex collection of several elementary outcomes. But what if several random events occur simultaneously or sequentially? How does this affect the probability of the event we are interested in?

If I roll a die several times hoping for a six and keep getting unlucky, does that mean I should increase my bet because, according to probability theory, I am about to get lucky? Alas, probability theory states nothing of the kind. Neither dice, nor cards, nor coins remember what they showed us last time. It does not matter to them at all whether it is the first or the tenth time I am testing my luck today. Every time I repeat the roll, I know only one thing: this time, again, the probability of rolling a six is one sixth. Of course, this does not mean that the number I need will never come up. It only means that my loss after the first throw and after any other throw are independent events.

Events A and B are called independent if the occurrence of one of them does not affect the probability of the other. For example, the probability of hitting a target with the first of two weapons does not depend on whether the target was hit by the other weapon, so the events "the first weapon hit the target" and "the second weapon hit the target" are independent.
If two events A and B are independent, and the probability of each of them is known, then the probability of the simultaneous occurrence of both event A and event B (denoted AB) can be calculated using the following theorem.

Probability multiplication theorem for independent events

P(AB) = P(A) · P(B): the probability of the simultaneous occurrence of two independent events is equal to the product of the probabilities of these events.

Example 1. The probabilities of hitting the target when firing the first and second guns are, respectively, p1 = 0.7 and p2 = 0.8. Find the probability of both guns hitting simultaneously in one salvo.

As we have already seen, events A (a hit by the first gun) and B (a hit by the second gun) are independent, so P(AB) = P(A) · P(B) = p1 · p2 = 0.7 · 0.8 = 0.56. What happens to our estimates if the initial events are not independent? Let's change the previous example a little.
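Example 1 can be written out and cross-checked by simulation (a sketch; the sample size is an arbitrary choice):

```python
import random

random.seed(0)

p1, p2 = 0.7, 0.8      # hit probabilities of the first and second guns
p_both = p1 * p2       # P(AB) = P(A) * P(B) for independent events

# Monte Carlo cross-check: simulate salvos and count double hits.
n = 100_000
hits = sum(1 for _ in range(n)
           if random.random() < p1 and random.random() < p2)
print(round(p_both, 2), round(hits / n, 2))
```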

Example 2. Two shooters shoot at targets in a competition, and if one of them shoots accurately, the opponent begins to get nervous and his results worsen. How can this everyday situation be turned into a mathematical problem, and how can we outline ways to solve it? It is intuitively clear that we need to somehow separate the two ways events can develop, essentially creating two scenarios, two different problems. In the first case, if the opponent missed, the scenario is favorable for the nervous athlete and his accuracy will be higher. In the second case, if the opponent took his chance decently, the probability of hitting the target for the second athlete decreases.

To separate the possible scenarios (often called hypotheses), we will often use a "probability tree" diagram. This diagram is similar in meaning to the decision tree you have probably already dealt with. Each branch represents a separate scenario, only now it carries its own value of the so-called conditional probability (q1, q2, 1 − q1, 1 − q2).

This scheme is very convenient for analyzing sequential random events. One more important question remains: where do the initial probability values come from in real situations? After all, probability theory does not deal only with coins and dice. Usually these estimates are taken from statistics, and when statistical information is not available, we conduct our own research. And we often have to start not with collecting data, but with the question of what information we actually need.

Example 3. Let's say we need to estimate in a city with a population of one hundred thousand inhabitants the market volume for a new product that is not an essential item, for example, for a balm for the care of colored hair. Let's consider the "probability tree" diagram. In this case, we need to approximately estimate the probability value on each “branch”. So, our estimates of market capacity:

1) of all city residents, 50% are women,

2) of all women, only 30% dye their hair often,

3) of them, only 10% use balms for colored hair,

4) of them, only 10% can muster the courage to try a new product,

5) 70% of them usually buy everything not from us, but from our competitors.


According to the law of multiplication of probabilities, we determine the probability of the event of interest: A = (a city resident buys this new balm from us), P(A) = 0.5 · 0.3 · 0.1 · 0.1 · 0.3 = 0.00045. Multiplying this probability by the number of city residents, we get only 45 potential customers, and considering that one bottle of this product lasts several months, the trade is not very lively.

And yet there is some benefit from our estimates. First, we can compare forecasts for different business ideas; they will have different "forks" in their diagrams, and, of course, different probability values. Second, as we have already said, a random variable is not called random because it does not depend on anything at all; its exact value is simply not known in advance. We know that the average number of buyers can be increased (for example, by advertising the new product). So it makes sense to focus our efforts on those "forks" where the probability distribution does not suit us, on those factors we are able to influence. Let's look at another quantitative example of consumer behavior research.
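The chain of estimates above is just repeated multiplication along one branch of the probability tree; a minimal sketch:

```python
# Shares along one branch of the "probability tree" (from the estimates above).
population = 100_000
shares = [
    0.5,  # women among city residents
    0.3,  # of them, dye their hair often
    0.1,  # of them, use balms for colored hair
    0.1,  # of them, will try a new product
    0.3,  # of them, buy from us rather than from competitors (100% - 70%)
]

p = 1.0
for s in shares:
    p *= s

print(round(p, 5), round(p * population))  # 0.00045 and 45 potential customers
```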

Example 4. On average, 10,000 people visit the food market per day. The probability that a market visitor enters the dairy products pavilion is 1/2. It is known that this pavilion sells an average of 500 kg of various products per day. Can we say that the average purchase in the pavilion weighs only 100 g?

Discussion.

Of course not. It is clear that not everyone who entered the pavilion ended up buying something there.


As shown in the diagram, to answer the question about the average weight of a purchase, we must find an answer to the question, what is the probability that a person entering the pavilion will buy something there. If we do not have such data at our disposal, but we need it, we will have to obtain it ourselves by observing the visitors to the pavilion for some time. Let’s say our observations showed that only a fifth of pavilion visitors buy something. Once we have obtained these estimates, the task becomes simple. Out of 10,000 people who come to the market, 5,000 will go to the dairy products pavilion; there will be only 1,000 purchases. The average weight of a purchase is 500 grams. It is interesting to note that in order to build a complete picture of what is happening, the logic of conditional “branching” must be defined at each stage of our reasoning as clearly as if we were working with a “specific” situation, and not with probabilities.
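The same reasoning as a few lines of arithmetic (a sketch using the numbers from the example):

```python
visitors = 10_000     # people visiting the market per day
p_enter = 1 / 2       # probability that a visitor enters the dairy pavilion
p_buy = 1 / 5         # observed share of pavilion visitors who buy something
total_kg = 500        # kilograms of products sold per day

purchases = visitors * p_enter * p_buy       # expected number of purchases
avg_weight_g = total_kg * 1000 / purchases   # average purchase weight in grams
print(int(purchases), int(avg_weight_g))  # 1000 500
```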

Self-test tasks.

1. Let there be an electrical circuit consisting of n elements connected in series, each of which operates independently of the others. The probability p of failure of each element is known. Determine the probability of proper operation of the entire section of the circuit (event A).


2. The student knows 20 out of 25 exam questions. Find the probability that the student knows the three questions given to him by the examiner.

3. Production consists of four successive stages, at each of which equipment operates, for which the probabilities of failure over the next month are equal to p 1, p 2, p 3 and p 4, respectively. Find the probability that there will be no production stoppages due to equipment failure in a month.
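Hedged sketches of tasks 1 and 2 (the numeric values for task 1 are illustrative assumptions, not given in the text):

```python
from math import comb

# Task 1 sketch: n series elements, each failing independently with probability p.
# The section works only if every element works: P(A) = (1 - p) ** n.
def series_reliability(p: float, n: int) -> float:
    return (1 - p) ** n

# Illustrative values: p = 0.1, n = 4.
print(round(series_reliability(0.1, 4), 4))  # 0.6561

# Task 2 sketch: all three exam questions fall among the 20 known ones.
p_known = comb(20, 3) / comb(25, 3)
print(round(p_known, 3))  # 1140 / 2300, about 0.496
```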

Initially just a collection of facts and empirical observations about the game of dice, probability theory became a rigorous science. The first to give it a mathematical framework were Fermat and Pascal.

From thinking about the eternal to the theory of probability

The two individuals to whom probability theory owes many of its fundamental formulas, Blaise Pascal and Thomas Bayes, are known as deeply religious people, the latter being a Presbyterian minister. Apparently, the desire of these two scientists to prove the fallacy of the opinion about a certain Fortune giving good luck to her favorites gave impetus to research in this area. After all, in fact, any gambling game with its winnings and losses is just a symphony of mathematical principles.

Thanks to the passion of the Chevalier de Méré, equally a gambler and a man not indifferent to science, Pascal was compelled to find a way to calculate probability. De Méré was interested in the following question: "How many times must a pair of dice be thrown so that the probability of getting 12 points exceeds 50%?" The second question, which greatly interested the gentleman: "How should the stake be divided between the participants of an unfinished game?" Pascal, of course, successfully answered both questions of de Méré, who thus became the unwitting initiator of the development of probability theory. Interestingly, de Méré's name remained known in this field, and not in literature.

Previously, no mathematician had attempted to calculate the probabilities of events, since it was believed that outcomes of chance could only be guessed at. Blaise Pascal gave the first definition of the probability of an event and showed that it is a specific number that can be justified mathematically. Probability theory has become the basis of statistics and is widely used in modern science.

What is randomness

If we consider a test that can be repeated an infinite number of times, then we can define a random event: it is one of the possible outcomes of the experiment.

Experience is the implementation of specific actions under constant conditions.

To be able to work with the results of the experiment, events are usually designated by the letters A, B, C, D, E...

Probability of a random event

In order to begin the mathematical part of probability, it is necessary to define all its components.

The probability of an event is a numerical measure of the possibility of some event (A or B) occurring as a result of an experience. The probability is denoted as P(A) or P(B).

In probability theory they distinguish:

  • a reliable event is guaranteed to occur as a result of the experience: P(Ω) = 1;
  • an impossible event can never happen: P(Ø) = 0;
  • a random event lies between the reliable and the impossible: its occurrence is possible but not guaranteed (the probability of a random event always satisfies 0 ≤ P(A) ≤ 1).

Relationships between events

Besides single events, the sum of events A + B is also considered: this event is counted as occurring when at least one of its components, A or B (or both), occurs.

In relation to each other, events can be:

  • Equally possible.
  • Compatible.
  • Incompatible.
  • Opposite (mutually exclusive).
  • Dependent.

If two events can happen with equal probability, then they are equally possible.

If the occurrence of event A does not reduce the probability of the occurrence of event B to zero, then they are compatible.

If events A and B never occur simultaneously in the same experience, then they are called incompatible. Tossing a coin is a good example: the appearance of heads automatically means the non-appearance of tails.

The probability of the sum of such incompatible events equals the sum of the probabilities of each event:

P(A+B)=P(A)+P(B)

If the occurrence of one event makes the occurrence of another impossible, then they are called opposite. Then one of them is designated as A, and the other - Ā (read as “not A”). The occurrence of event A means that Ā did not happen. These two events form a complete group with a sum of probabilities equal to 1.

Dependent events have mutual influence, decreasing or increasing the probability of each other.

Relationships between events. Examples

Using examples it is much easier to understand the principles of probability theory and combinations of events.

The experiment that will be carried out consists of taking balls out of a box, and the result of each experiment is an elementary outcome.

An event is one of the possible outcomes of an experiment - a red ball, a blue ball, a ball with number six, etc.

Test No. 1. There are 6 balls involved, three of which are blue with odd numbers on them, and the other three are red with even numbers.

Test No. 2. There are 6 blue balls with numbers from one to six.

Based on this example, we can name combinations:

  • Reliable event. In Test No. 2 the event "draw a blue ball" is reliable: its probability equals 1, since all the balls are blue and a miss is impossible. The event "draw the ball with the number 1", by contrast, is random.
  • Impossible event. In Test No. 1, with blue and red balls, the event "draw a purple ball" is impossible: its probability is 0.
  • Equally possible events. In Test No. 1 the events "draw the ball with the number 2" and "draw the ball with the number 3" are equally possible, while the events "draw a ball with an even number" and "draw the ball with the number 2" have different probabilities.
  • Compatible events. Getting a six twice in a row when throwing a die is a pair of compatible events.
  • Incompatible events. In the same Test No. 1, the events "draw a red ball" and "draw a ball with an odd number" cannot occur together in the same experience.
  • Opposite events. The most striking example is coin tossing, where drawing heads is the same as not drawing tails, and the sum of their probabilities is always 1 (a complete group).
  • Dependent events. In Test No. 1 you can set the goal of drawing a red ball twice in a row. Whether or not it is drawn the first time affects the probability of drawing it the second time.

It can be seen that the first event significantly affects the probability of the second: if a red ball is drawn first (and not returned), the chance of drawing red again is 2/5 = 40%; if not, it is 3/5 = 60%.

Event probability formula

The transition from fortune-telling to precise data occurs through the translation of the topic into a mathematical plane. That is, judgments about a random event such as “high probability” or “minimal probability” can be translated into specific numerical data. It is already permissible to evaluate, compare and enter such material into more complex calculations.

From the computational point of view, determining the probability of an event means taking the ratio of the number of elementary favorable outcomes to the number of all possible outcomes of the experience with respect to a certain event. Probability is denoted by P(A), where P stands for the French word "probabilité", meaning "probability".

So, the formula for the probability of an event is:

P(A) = m / n,

where m is the number of outcomes favorable to event A, and n is the number of all outcomes possible in this experience. The probability of an event always lies between 0 and 1:

0 ≤ P(A) ≤ 1.

Calculation of the probability of an event. Example

Let's take Test No. 1 with balls, which was described earlier: 3 blue balls with the numbers 1, 3, 5 and 3 red balls with the numbers 2, 4, 6.

Based on this test, several different problems can be considered:

  • A: a red ball is drawn. There are 3 red balls out of 6 in total. This is the simplest example, in which the probability of the event is P(A) = 3/6 = 0.5.
  • B: a ball with an even number is drawn. There are 3 even numbers (2, 4, 6), and the total number of possible numbers is 6. The probability of this event is P(B) = 3/6 = 0.5.
  • C: a ball with a number greater than 2 is drawn. There are 4 such options (3, 4, 5, 6) out of 6 possible outcomes. The probability of event C is P(C) = 4/6 ≈ 0.67.
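The three computations above can be reproduced by counting outcomes directly (a minimal sketch of Test No. 1):

```python
from fractions import Fraction

balls = [1, 2, 3, 4, 5, 6]   # Test No. 1: red balls carry the even numbers
red = {2, 4, 6}

def prob(event):
    # Classical probability: favorable outcomes over all outcomes.
    favorable = [b for b in balls if event(b)]
    return Fraction(len(favorable), len(balls))

p_a = prob(lambda b: b in red)      # A: a red ball is drawn
p_b = prob(lambda b: b % 2 == 0)    # B: an even number is drawn
p_c = prob(lambda b: b > 2)         # C: a number greater than 2 is drawn
print(p_a, p_b, p_c)  # 1/2 1/2 2/3
```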

As can be seen from the calculations, event C has the highest probability, since its number of favorable outcomes is greater than in A and B.

Incompatible events

Such events cannot appear simultaneously in the same experience. For instance, in Test No. 1 it is impossible to draw a blue and a red ball at the same time: you can draw either a blue ball or a red one. In the same way, an even and an odd number cannot appear on a die at the same time.

The probability of two events is considered as the probability of their sum or product. The sum of such events A+B is considered to be an event that consists of the occurrence of event A or B, and the product of them AB is the occurrence of both. For example, the appearance of two sixes at once on the faces of two dice in one throw.

The sum of several events is an event that presupposes the occurrence of at least one of them. The product of several events is the joint occurrence of them all.

In probability theory, as a rule, the conjunction "or" denotes a sum, and the conjunction "and" denotes a product. Formulas with examples will help you understand the logic of addition and multiplication in probability theory.

Probability of the sum of incompatible events

If the probability of incompatible events is considered, then the probability of the sum of events is equal to the addition of their probabilities:

P(A+B)=P(A)+P(B)

For example, let's calculate the probability that in Test No. 1 with blue and red balls a number between 1 and 4 will appear. We will calculate it not in one step, but as the sum of the probabilities of the elementary components. In this experiment there are only 6 balls, that is, 6 possible outcomes. The numbers satisfying the condition are 2 and 3. The probability of drawing the number 2 is 1/6, and the probability of drawing the number 3 is also 1/6. The probability of drawing a number between 1 and 4 is:

P = 1/6 + 1/6 = 2/6 = 1/3

The probability of the sum of incompatible events of a complete group is 1.

So, if in the experiment with a die we add up the probabilities of all the numbers appearing, the result will be one.

This is also true for opposite events, for example in the experiment with a coin, where one side is the event A, and the other is the opposite event Ā, as is known,

P(A) + P(Ā) = 1

Probability of the product of independent events

Probability multiplication is used when considering the joint occurrence of two or more independent events in one observation. The probability that events A and B appear in it simultaneously is equal to the product of their probabilities:

P(A·B) = P(A) · P(B)

For example, the probability that in Test No. 1 a blue ball appears in each of two attempts (the ball being returned to the box after the first attempt, which makes the attempts independent) is equal to

P = 1/2 · 1/2 = 1/4 = 0.25

That is, the probability of an event occurring when, as a result of two attempts to extract balls, only blue balls are extracted is 25%. It is very easy to do practical experiments on this problem and see if this is actually the case.
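The suggested practical experiment is easy to run in code (a sketch; it assumes the ball is returned between attempts, which is what makes the attempts independent):

```python
import random

random.seed(42)

# Two draws with replacement from Test No. 1; blue balls carry the odd numbers.
n = 100_000
both_blue = sum(
    1 for _ in range(n)
    if random.randint(1, 6) % 2 == 1 and random.randint(1, 6) % 2 == 1
)
print(round(both_blue / n, 2))  # close to 1/4
```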

Joint events

Events are considered joint when the occurrence of one of them can coincide with the occurrence of the other. Even though they are joint, we consider the probability of independent events here. For example, throwing two dice can give a result in which the number 6 appears on both. Although the events coincided and appeared at the same time, they are independent of each other: a six could have appeared on just one die, since neither die influences the other.

The probability of joint events is considered as the probability of their sum.

Probability of the sum of joint events. Example

The probability of the sum of events A and B, which are joint with respect to each other, is equal to the sum of the probabilities of the events minus the probability of their product (that is, of their joint occurrence):

P(A+B) = P(A) + P(B) − P(AB)

Let's assume that the probability of hitting the target with one shot is 0.4. Then event A is hitting the target with the first attempt, and B with the second. These events are joint, since it is possible to hit the target with both the first and the second shot. But the events are independent. What is the probability of hitting the target with at least one of the two shots? According to the formula:

0.4 + 0.4 − 0.4 · 0.4 = 0.64

The answer to the question is: “The probability of hitting the target with two shots is 64%.”
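The same computation, plus a cross-check via the complement (a minimal sketch):

```python
p = 0.4                          # probability of hitting with a single shot
p_at_least_one = p + p - p * p   # P(A + B) = P(A) + P(B) - P(AB)

# Cross-check: "at least one hit" is the complement of "both shots miss".
p_check = 1 - (1 - p) ** 2
print(round(p_at_least_one, 2), round(p_check, 2))  # 0.64 0.64
```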

This formula for the probability of an event can also be applied to incompatible events, where the probability of the joint occurrence of an event P(AB) = 0. This means that the probability of the sum of incompatible events can be considered a special case of the proposed formula.

Geometry of probability for clarity

Interestingly, the probability of the sum of joint events can be represented as two areas A and B that intersect each other. As can be seen from the picture, the area of their union is equal to the total area minus the area of their intersection. This geometric explanation makes the seemingly illogical formula more understandable. Note that geometric solutions are not uncommon in probability theory.

Determining the probability of the sum of many (more than two) joint events is quite cumbersome. To calculate it, you need to use the formulas that are provided for these cases.

Dependent Events

Events are called dependent if the occurrence of one of them (A) affects the probability of the occurrence of the other (B). Moreover, the influence of both the occurrence and the non-occurrence of event A is taken into account. Although the events are called dependent by definition, only one of them (B) is actually dependent. Ordinary probability was denoted as P(B), the probability of an independent event. For dependent events a new concept is introduced: the conditional probability P_A(B), which is the probability of the dependent event B given the occurrence of event A (the hypothesis) on which it depends.

But event A is also random, so it also has a probability that needs and can be taken into account in the calculations performed. The following example will show how to work with dependent events and a hypothesis.

An example of calculating the probability of dependent events

A good example for calculating dependent events would be a standard deck of cards.

Using a deck of 36 cards as an example, let's look at dependent events. We need to determine the probability that the second card drawn from the deck will be a diamond if the first card drawn is:

  1. A diamond.
  2. Of a different suit.

Obviously, the probability of the second event B depends on the first, A. If the first option holds, then the deck has one card fewer (35 cards) and one diamond fewer (8 diamonds), so the probability of event B is:

P_A(B) = 8/35 ≈ 0.23

If the second option holds, then the deck has 35 cards but the full number of diamonds (9) remains, so the probability of event B is:

P_A(B) = 9/35 ≈ 0.26

It can be seen that if the first card drawn is a diamond (event A occurred), then the probability of event B decreases, and vice versa.

Multiplying dependent events

Guided by the previous chapter, we accept the first event (A) as a fact, but in essence, it is of a random nature. The probability of this event, namely drawing a diamond from a deck of cards, is equal to:

P(A) = 9/36 = 1/4

Since the theory does not exist on its own, but is intended to serve for practical purposes, it is fair to note that what is most often needed is the probability of producing dependent events.

According to the theorem on the multiplication of probabilities of dependent events, the probability of the joint occurrence of dependent events A and B is equal to the probability of event A multiplied by the conditional probability of event B (dependent on A):

P(AB) = P(A) · P_A(B)

Then, in the deck example, the probability of drawing two diamonds is:

P(AB) = 9/36 · 8/35 ≈ 0.057, or 5.7%

And the probability of first drawing a card of a different suit and then a diamond is:

P(AB) = 27/36 · 9/35 ≈ 0.19, or 19%

It can be seen that the probability of event B occurring is greater provided that the first card drawn is of a suit other than diamonds. This result is quite logical and understandable.
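The deck computations with exact fractions (a sketch using the document's 36-card deck with 9 diamonds):

```python
from fractions import Fraction

# P(AB) = P(A) * P_A(B): multiplication rule for dependent events.
p_diamond_then_diamond = Fraction(9, 36) * Fraction(8, 35)
p_other_then_diamond = Fraction(27, 36) * Fraction(9, 35)

print(round(float(p_diamond_then_diamond), 4))  # about 0.0571
print(round(float(p_other_then_diamond), 4))    # about 0.1929
```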

Total probability of an event

When a problem with conditional probabilities has several branches, it cannot be calculated by the methods above alone. Suppose there are more than two hypotheses, namely A_1, A_2, …, A_n, forming a complete group of events, that is:

  • P(A_i) > 0, i = 1, 2, …, n;
  • A_i ∩ A_j = Ø for i ≠ j;
  • Σ_k A_k = Ω.

So, the total probability formula for event B with the complete group of random events A_1, A_2, …, A_n is:

P(B) = P(A_1) · P_{A_1}(B) + P(A_2) · P_{A_2}(B) + … + P(A_n) · P_{A_n}(B)
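A sketch of the total probability formula with illustrative numbers (the hypothesis probabilities below are assumptions, not taken from the text):

```python
# Complete group of three hypotheses and the conditional probabilities of B.
p_hypotheses = [0.5, 0.3, 0.2]   # P(A_1), P(A_2), P(A_3); they sum to 1
p_b_given = [0.1, 0.4, 0.8]      # P_{A_1}(B), P_{A_2}(B), P_{A_3}(B)

# Total probability: P(B) = sum over i of P(A_i) * P_{A_i}(B).
p_b = sum(p * q for p, q in zip(p_hypotheses, p_b_given))
print(round(p_b, 2))  # 0.33
```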

Looking to the future

The probability of a random event is extremely important in many areas of science: econometrics, statistics, physics, and so on. Since some processes are probabilistic in nature and cannot be described deterministically, special methods of working with them are required. The theory of event probability can be used in any technological field as a way to assess the possibility of an error or malfunction.

We can say that by recognizing probability, we in some way take a theoretical step into the future, looking at it through the prism of formulas.

1. Presentation of the main theorems and probability formulas: addition theorem, conditional probability, multiplication theorem, independence of events, total probability formula.

Goals: to create favorable conditions for introducing the concept of the probability of an event; to present the basic theorems and formulas of probability theory; to introduce the total probability formula.

Progress of the lesson:

A random experiment (experience) is a process in which different outcomes are possible, and it is impossible to predict in advance which outcome will occur. The possible mutually exclusive outcomes of an experiment are called its elementary events. We denote the set of elementary events by Ω.

Random event is an event about which it is impossible to say in advance whether it will occur as a result of experience or not. Each random event A that occurred as a result of an experiment can be associated with a group of elementary events from W. The elementary events included in this group are called favorable for the occurrence of event A.

The set W can also be considered as a random event. Since it includes all elementary events, it will necessarily occur as a result of experience. Such an event is called reliable .

If for a given event there are no favorable elementary events from W, then it cannot occur as a result of the experiment. Such an event is called impossible.

Events are called equally possible if, as a result of the trial, each has an equal opportunity to occur. Two random events are called opposite if, as a result of the experiment, one of them occurs if and only if the other does not. The event opposite to event A is denoted by Ā.

Events A and B are called incompatible if the occurrence of one of them excludes the occurrence of the other. Events A1, A2, ..., An are called pairwise incompatible if any two of them are incompatible. Events A1, A2, ..., An form a complete system of pairwise incompatible events if exactly one of them is sure to occur as a result of the trial.

The sum (union) of events A 1, A 2, ..., A n is an event C, which consists in the fact that at least one of the events A 1, A 2, ..., A n occurs. The sum of events is denoted as follows:

C = A 1 +A 2 +…+A n.

The product (intersection) of events A1, A2, ..., An is an event P consisting in the simultaneous occurrence of all the events A1, A2, ..., An. The product of events is denoted as follows: P = A1·A2·…·An.

Probability P(A) in probability theory acts as a numerical characteristic of the degree of possibility of the occurrence of any specific random event A when tests are repeated many times.



Suppose that in 1000 throws of a die the number 4 appears 160 times. The ratio 160/1000 = 0.16 is the relative frequency of the number 4 in this series of trials. More generally, the frequency of a random event A in a series of experiments is the ratio of the number of experiments in which the event occurred to the total number of experiments:

P*(A) = m/n,

where P*(A) is the frequency of event A; m is the number of experiments in which event A occurred; n is the total number of experiments.

The probability of a random event A is the constant number around which the frequencies of the event cluster as the number of experiments increases (the statistical definition of the probability of an event). The probability of a random event is denoted by P(A).

Naturally, no one can ever perform an unlimited number of trials in order to determine a probability, and there is no need to. In practice, the frequency of an event over a large number of trials can be taken as its probability. For example, from birth statistics gathered over many years of observation, the probability that a newborn will be a boy is estimated at 0.515.
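The statistical approach can be illustrated by simulating die throws and watching the relative frequency settle near the theoretical value (a Python sketch; the function name `relative_frequency` is ours):

```python
import random

random.seed(1)  # fixed seed so repeated runs give the same frequencies

# Relative frequency P*(A) = m/n of rolling a 4 in n throws of a fair die.
def relative_frequency(n_trials: int) -> float:
    m = sum(1 for _ in range(n_trials) if random.randint(1, 6) == 4)
    return m / n_trials

# As n grows, the frequencies cluster around the probability 1/6 ≈ 0.1667.
for n in (100, 10_000, 100_000):
    print(n, relative_frequency(n))
```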

If there is no reason why one outcome of a trial should appear more often than the others (equally possible events), the probability can be determined from theoretical considerations. For example, consider tossing a coin and the frequency with which heads appears (event A). Different experimenters, over several thousand trials, have found that the relative frequency of this event takes values close to 0.5. Since heads (event A) and tails (event B) are equally possible for a symmetric coin, the judgment P(A) = P(B) = 0.5 could be made without measuring these frequencies at all. The notion of "equally possible" events leads to another definition of probability.

Let the event A under consideration occur in m cases, which are called favorable to A, and fail to occur in the remaining n - m cases, unfavorable to A.

Then the probability of event A equals the ratio of the number of elementary events favorable to it to their total number (the classical definition of the probability of an event):

P(A) = m/n,

where m is the number of elementary events favorable to event A, and n is the total number of elementary events.

Let's look at a few examples:

Example #1:An urn contains 40 balls: 10 black and 30 white. Find the probability that a ball chosen at random will be black.

The number of favorable cases equals the number of black balls in the urn: m = 10. The total number of equally possible events (drawing one ball) equals the total number of balls in the urn: n = 40. These events are incompatible, since one and only one ball is drawn. P(A) = 10/40 = 0.25.

Example #2:Find the probability of getting an even number when throwing a die.

When throwing a die, there are six equally possible, incompatible events: the appearance of the number 1, 2, 3, 4, 5 or 6, i.e. n = 6. The favorable cases are the occurrence of one of the numbers 2, 4 or 6: m = 3. The desired probability is P(A) = m/n = 3/6 = 1/2.
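Example #2 can be checked by directly counting favorable and total outcomes; a short sketch using exact fractions:

```python
from fractions import Fraction

# Classical definition: P(A) = m/n over equally possible outcomes of a die.
outcomes = list(range(1, 7))                     # n = 6 faces
favorable = [k for k in outcomes if k % 2 == 0]  # event A: even number, {2, 4, 6}

p = Fraction(len(favorable), len(outcomes))
print(p)  # 1/2
```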

As we see from the definition of the probability of an event, for any event A

0 ≤ P(A) ≤ 1.

Obviously, the probability of a certain event is 1, the probability of an impossible event is 0.

The theorem of addition of probabilities: the probability of the occurrence of one (no matter which) event from several incompatible events is equal to the sum of their probabilities.

For two incompatible events A and B, the probability that at least one of them occurs equals the sum of their probabilities:

P(A or B) = P(A) + P(B).

Example #3: Find the probability of rolling a 1 or a 6 when throwing a die.

Events A (rolling a 1) and B (rolling a 6) are incompatible and equally possible: P(A) = P(B) = 1/6, therefore P(A or B) = 1/6 + 1/6 = 1/3.

The addition of probabilities is valid not only for two, but also for any number of incompatible events.

Example #4:There are 50 balls in the urn: 10 white, 20 black, 5 red and 15 blue. Find the probability of a white, or black, or red ball appearing during a single operation of removing a ball from the urn.

The probability of drawing a white ball (event A) is P(A) = 10/50 = 1/5, a black ball (event B) is P(B) = 20/50 = 2/5, and a red ball (event C) is P(C) = 5/50 = 1/10. By the addition formula, P(A or B or C) = P(A) + P(B) + P(C) = 1/5 + 2/5 + 1/10 = 7/10.
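Example #4 can be verified in code, using exact fractions to avoid rounding:

```python
from fractions import Fraction

# Urn from Example #4: 10 white, 20 black, 5 red and 15 blue balls (50 in total).
counts = {"white": 10, "black": 20, "red": 5, "blue": 15}
total = sum(counts.values())

# Drawing one ball yields exactly one colour, so the events are incompatible
# and their probabilities simply add.
p = sum(Fraction(counts[c], total) for c in ("white", "black", "red"))
print(p)  # 7/10
```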

The sum of the probabilities of two opposite events, as follows from the theorem of addition of probabilities, is equal to one:

P(A) + P(Ā) = 1.

In the example above, drawing a white, black or red ball is the event A1, with P(A1) = 7/10. The event opposite to A1 is drawing a blue ball. Since there are 15 blue balls out of 50 in total, P(Ā1) = 15/50 = 3/10, and indeed P(A1) + P(Ā1) = 7/10 + 3/10 = 1.

If events A 1, A 2, ..., A n form a complete system of pairwise incompatible events, then the sum of their probabilities is equal to 1.

In general, the probability of the sum of two events A and B is calculated as

P(A+B) = P(A) + P(B) - P(AB).
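This general addition formula can be checked by enumeration on a small sample space. Here A is "even number" and B is "multiple of 3" on a die (our choice of events; they are compatible, since the outcome 6 belongs to both):

```python
from fractions import Fraction

# Die roll: A = "even number", B = "multiple of 3"; the outcome 6 is in both.
outcomes = set(range(1, 7))
A = {k for k in outcomes if k % 2 == 0}   # {2, 4, 6}
B = {k for k in outcomes if k % 3 == 0}   # {3, 6}

def prob(event):
    return Fraction(len(event), len(outcomes))

# P(A + B) = P(A) + P(B) - P(AB)
lhs = prob(A | B)
rhs = prob(A) + prob(B) - prob(A & B)
print(lhs, rhs)  # both sides equal 2/3
```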

Probability multiplication theorem:

Events A and B are called independent , if the probability of occurrence of event A does not depend on whether event B occurred or not, and vice versa, the probability of occurrence of event B does not depend on whether event A occurred or not.

The probability of joint occurrence of independent events is equal to the product of their probabilities. For two events P(A and B)=P(A)·P(B).

Example: One urn contains 5 black and 10 white balls, another contains 3 black and 17 white balls. Find the probability that, when one ball is drawn from each urn, both balls are black.

Solution: the probability of drawing a black ball from the first urn (event A) is P(A) = 5/15 = 1/3, a black ball from the second urn (event B) is P(B) = 3/20

P(A and B)=P(A)·P(B) = (1/3)(3/20) = 3/60 = 1/20.
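The same result can be obtained in code with exact fractions:

```python
from fractions import Fraction

# First urn: 5 black out of 15 balls; second urn: 3 black out of 20.
p_a = Fraction(5, 15)   # P(A): a black ball from the first urn
p_b = Fraction(3, 20)   # P(B): a black ball from the second urn

# The draws are independent, so the probabilities multiply.
p_both = p_a * p_b
print(p_both)  # 1/20
```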

In practice, the probability of event B often depends on whether some other event A has occurred. In this case one speaks of conditional probability, i.e. the probability of event B given that event A has occurred. The conditional probability is denoted by P(B/A).

In order to compare events quantitatively according to the degree of their possibility, we obviously need to associate with each event a number that is larger the more possible the event is. We call this number the probability of the event. Thus, the probability of an event is a numerical measure of the degree of objective possibility of this event.

The first definition of probability should be considered the classical one, which arose from the analysis of gambling and was initially applied intuitively.

The classical method of determining probability is based on the concept of equally possible and incompatible events, which are the outcomes of a given experience and form a complete group of incompatible events.

The simplest example of equally possible and incompatible events forming a complete group is the appearance of one or another ball from an urn containing several balls of the same size, weight and other tangible characteristics, differing only in color, thoroughly mixed before being removed.

Therefore, a test whose outcomes form a complete group of incompatible and equally possible events is said to be reducible to a pattern of urns, or a pattern of cases, or fits into the classical pattern.

Equally possible and incompatible events that make up a complete group will be called simply cases or chances. Moreover, in each experiment, along with cases, more complex events can occur.

Example: When throwing a die, along with the cases Ai (i points appear on the upper face), we can consider events such as B (an even number of points appears) and C (a number of points divisible by three appears).

In relation to each event that can occur during the experiment, cases are divided into favorable, in which this event occurs, and unfavorable, in which the event does not occur. In the previous example, event B is favored by cases A 2, A 4, A 6; event C - cases A 3, A 6.

The classical probability of an event is the ratio of the number of cases favorable to the occurrence of the event to the total number of equally possible, incompatible cases making up the complete group in a given experiment:

P(A) = m/n,

where P(A) is the probability of occurrence of event A; m is the number of cases favorable to event A; n is the total number of cases.

Examples:

1) (see the example above) P(B) = 3/6 = 1/2, P(C) = 2/6 = 1/3.

2) The urn contains 9 red and 6 blue balls. Find the probability that one or two balls drawn at random will turn out to be red.

A- a red ball drawn at random:

m = 9, n = 9 + 6 = 15, P(A) = 9/15 = 3/5.

B - two red balls are drawn at random:

m = C(9, 2) = 36, n = C(15, 2) = 105, P(B) = 36/105 = 12/35.
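The two-ball case can be computed with binomial coefficients (Python's `math.comb`), counting favorable and total ways to choose 2 balls:

```python
from fractions import Fraction
from math import comb

# Urn with 9 red and 6 blue balls; two balls are drawn without replacement.
red, total = 9, 15

m = comb(red, 2)      # favorable: ways to choose 2 red balls = 36
n = comb(total, 2)    # total: ways to choose any 2 balls = 105

p = Fraction(m, n)
print(p)  # 12/35
```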
The following properties follow from the classical definition of probability (verify them yourself):


1) The probability of an impossible event is 0;

2) The probability of a reliable event is 1;

3) The probability of any event lies between 0 and 1;

4) The probability of the event opposite to event A is P(Ā) = 1 - P(A).

The classic definition of probability assumes that the number of outcomes of a trial is finite. In practice, very often there are tests, the number of possible cases of which is infinite. In addition, the weakness of the classical definition is that very often it is impossible to represent the result of a test in the form of a set of elementary events. It is even more difficult to indicate the reasons for considering the elementary outcomes of a test to be equally possible. Usually, the equipossibility of elementary test outcomes is concluded from considerations of symmetry. However, such tasks are very rare in practice. For these reasons, along with the classical definition of probability, other definitions of probability are also used.

The statistical probability of event A is the relative frequency of occurrence of this event in the trials performed:

P(A) ≈ P*(A) = m/n,

where P(A) is the probability of occurrence of event A; P*(A) is the relative frequency of occurrence of event A; m is the number of trials in which event A appeared; n is the total number of trials.

Unlike classical probability, statistical probability is an experimental characteristic.

Example: To control the quality of products in a batch, 100 products were selected at random, 3 of which turned out to be defective. Determine the probability of a defect.

P*(A) = 3/100 = 0.03.

The statistical method of determining probability is applicable only to those events that have the following properties:

The events under consideration should be the outcomes of only those tests that can be reproduced an unlimited number of times under the same set of conditions.

Events must have statistical stability (or stability of relative frequencies). This means that in different series of tests the relative frequency of the event changes little.

The number of trials resulting in event A must be quite large.

It is easy to verify that the properties of probability arising from the classical definition are also preserved in the statistical definition of probability.