We can measure and compute many things around us. For some experiments (when we exclude external circumstances, so that we have full control over the action) we get exact results. But what about those where the outcome can differ on each trial? Later on, we will learn that randomness is nothing more than our ignorance, so let’s dive into probability theory.

Basic notation

For each of our experiments, we will have a set of possible outcomes of the random experiment - the sample space $S$.

Three basic operations are defined on subsets of $S$ (a short sketch follows this list):

  1. $A \cup B$ - union (“or”)
  2. $A \cap B$ - intersection (“and”)
  3. $\bar{A}$ - complement, everything that is not in $A$ (prefix “not”)

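As an illustration, here is a minimal Python sketch of these operations. It assumes a hypothetical sample space of fair die rolls; the particular events are chosen only for this example.

```python
# Set operations on events, assuming a sample space of die rolls.
S = {1, 2, 3, 4, 5, 6}   # sample space
A = {2, 4, 6}            # event "even number"
B = {4, 5, 6}            # event "at least 4"

print(A | B)   # union (A or B)         -> {2, 4, 5, 6}
print(A & B)   # intersection (A and B) -> {4, 6}
print(S - A)   # complement of A        -> {1, 3, 5}
```
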
Coin Toss

We have a fair coin with a head and a tail. What happens if we toss it? We can get either heads or tails; these are our possible outcomes, so $S=\{H, T\}$. For two coin tosses, the sample space is $S=\{HH, HT, TH, TT\}$.
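The sample space for repeated tosses can be built mechanically as a Cartesian product of the single-toss outcomes. A minimal Python sketch (the variable names are illustrative):

```python
from itertools import product

# Build the two-toss sample space as the Cartesian product
# of the single-toss outcomes H and T.
single_toss = ["H", "T"]
S = ["".join(outcome) for outcome in product(single_toss, repeat=2)]
print(S)  # ['HH', 'HT', 'TH', 'TT']
```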

Let’s have another example with the following rules:

  1. We always make a toss with the first coin.
  2. If and only if the first outcome was heads, we make a second toss.

Now our sample space would be $S=\{HH, HT, T\}$.
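A quick simulation sketch of this conditional experiment (the function `run_experiment` is a hypothetical name chosen for this example); repeating it many times should produce exactly the three outcomes above:

```python
import random

def run_experiment() -> str:
    # The second coin is tossed only if the first one lands heads.
    first = random.choice("HT")
    if first == "H":
        return first + random.choice("HT")
    return first  # tails: no second toss

outcomes = {run_experiment() for _ in range(10_000)}
print(outcomes)  # expected: {'HH', 'HT', 'T'}
```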

Subsets of the sample space are called events. The probability of an event $A$ is denoted by $P(A) \in \mathbb{R}$ and satisfies the following axioms (a numerical check follows the list):

  1. $0 \le P(A) \le 1$
  2. $P(S) = 1$
  3. if $A$ and $B$ are disjoint, then $P(A \cup B) = P(A) + P(B)$
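
These axioms can be checked numerically for a simple model. The sketch below assumes two tosses of a fair coin, so each of the four outcomes is given probability $1/4$; this equal-weight model is an assumption made only for illustration.

```python
from fractions import Fraction

# Numerical check of the three axioms, assuming two fair coin tosses
# with four equally likely outcomes.
S = {"HH", "HT", "TH", "TT"}

def P(event):
    # Equally likely outcomes: probability = |event| / |S|
    return Fraction(len(event), len(S))

A = {"HH"}   # "both tosses are heads"
B = {"TT"}   # "both tosses are tails" (disjoint from A)

assert 0 <= P(A) <= 1             # axiom 1
assert P(S) == 1                  # axiom 2
assert P(A | B) == P(A) + P(B)    # axiom 3 for disjoint events
print(P(A), P(B), P(A | B))       # 1/4 1/4 1/2
```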