The concept of entropy

In this blog post, we will delve into the concept of entropy. Entropy measures the uncertainty, or disorder, in a random process by quantifying the information carried by a set of random variables. Put simply, the more uncertain or variable a process is, the higher its entropy. Conversely, new information reduces entropy because it resolves part of that uncertainty: as we collect measurements, the uncertainty gradually shrinks, and once every variable in the process is fully known, the entropy drops to zero.

 

What is entropy used for?

 

Let’s consider a straightforward example. Imagine we have a geotechnical engineering project that requires site characterization to determine the soil properties underground. In this scenario, entropy can be utilized to pinpoint the optimal locations for taking measurements in order to minimize uncertainty about the soil.

How does this work? Let me elaborate further. If we take a single sample from the soil, we gain some information and consequently reduce entropy. Adding another sample increases our knowledge of the soil properties and further decreases entropy. Therefore, the ideal sampling locations are those that maximize the difference in entropy before and after new measurements. By mathematically formulating and maximizing the difference, we can identify the most suitable locations for sampling. Isn’t this concept intriguing?
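To make this a bit more concrete, here is a minimal sketch in MATLAB. It assumes a one-dimensional Gaussian prior for a soil property and a noisy Gaussian measurement; none of this appears in the project description above, and the variable names and numbers are invented purely for illustration. Because the entropy of a Gaussian depends only on its variance, the entropy reduction offered by a candidate measurement can be computed directly from the prior and posterior variances:

% Hypothetical sketch: entropy reduction of a Gaussian soil property
% after one noisy measurement (all numbers invented for illustration).
sigma_prior = 2.0;                 % prior standard deviation of the soil property
sigma_noise = [0.5, 1.0, 2.0];     % measurement noise at three candidate locations

% Posterior variance of a Gaussian prior updated with one Gaussian measurement
var_post = (sigma_prior^2 .* sigma_noise.^2) ./ (sigma_prior^2 + sigma_noise.^2);

% Differential entropy of a Gaussian: 0.5 * log(2*pi*e*variance)
H_prior = 0.5 * log(2 * pi * exp(1) * sigma_prior^2);
H_post  = 0.5 * log(2 * pi * exp(1) * var_post);

% Entropy reduction (information gain) offered by each candidate measurement
info_gain = H_prior - H_post

The candidate with the smallest measurement noise produces the largest drop in entropy, which is precisely the quantity we would maximize when choosing where to sample.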

 

How do we calculate entropy?

 

To demonstrate how to calculate entropy, we need a little mathematics. Let X denote a vector of random variables, and let f be the probability density function governing X. The entropy, H, is then

H(X) = -∫ f(x) log f(x) dx

In other words, we take the logarithm of f(x), multiply it by f(x), integrate over the entire domain of X, and flip the sign. That's all there is to it!
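As a quick worked case (my addition, not in the original post): if X is uniformly distributed on an interval [a, b], then f(x) = 1/(b - a) on that interval, and the formula gives

H(X) = -∫ from a to b of (1/(b - a)) log(1/(b - a)) dx = log(b - a)

so a wider, more uncertain interval yields a higher entropy, exactly as the intuition in the introduction suggests.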

 

Let's work through an example in MATLAB!

 

OK, the formula described above can be written as a MATLAB function as follows:

function y = entropy_approx(f, lower, upper, varargin)
% Numerical approximation of the differential entropy H = -integral of f(x)*log(f(x)) dx,
% where f is a handle to a probability density function (extra parameters go in varargin).
   y = -integral(@(x) f(x, varargin{:}) .* log(f(x, varargin{:})), lower, upper);
end

As you can see, all you need to supply is a handle to the probability density function, the lower and upper bounds of the variable, and any additional distribution parameters. To see how to use this function, let's take a normally distributed random variable with a mean of 0 and a standard deviation of 1, and set the lower and upper bounds to 10 standard deviations below and above the mean. Its entropy can then be computed as shown below:

p_mu = 0;    % mean of the normal distribution
p_sd = 1;    % standard deviation
H_normal = entropy_approx(@(x) normpdf(x, p_mu, p_sd), p_mu - 10 * p_sd, p_mu + 10 * p_sd);
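As a quick sanity check (my addition, not in the original post), the differential entropy of a normal distribution has the closed form 0.5 * log(2*pi*e*sigma^2), so the numerical result can be compared against it:

% Closed-form entropy of a normal distribution (in nats)
H_normal_exact = 0.5 * log(2 * pi * exp(1) * p_sd^2);   % about 1.4189 for p_sd = 1
abs(H_normal - H_normal_exact)                          % should be close to zero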

Alternatively, if we assume a random variable with an exponential distribution, a rate parameter lambda of 2, and lower and upper bounds of 0 and 100, its entropy is:

lambda = 2;                                                  % rate parameter of the exponential distribution
H_exp = entropy_approx(@(x) exppdf(x, 1/lambda), 0, 100);    % exppdf is parameterized by the mean, 1/lambda
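Again as a quick check (my addition), the exponential distribution with rate lambda has differential entropy 1 - log(lambda), so:

% Closed-form entropy of an exponential distribution (in nats)
H_exp_exact = 1 - log(lambda);     % about 0.3069 for lambda = 2
abs(H_exp - H_exp_exact)           % should be close to zero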

 

More explanations?  

 

If you need a clearer explanation of the code and calculations above, I am happy to help. Please see my detailed explanations in the following videos:

 

Watch the video on YouTube

Watch the video on Aparat
