Binary Cost Function
A cost function helps us optimize our model by measuring how far off its predictions are. For logistic regression we generally use the binary cost function (binary cross-entropy), which uses the natural log to penalize errors.
The cost function for a single row of data is shown below, where f(z) is the predicted probability, a value that lies between 0 and 1:
loss = cost(f(z), y) = -log(f(z)), if y = 1
loss = cost(f(z), y) = -log(1 - f(z)), if y = 0
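As a quick illustration, here is a minimal Python sketch of this per-row loss (the function name log_loss_single and the small eps clamp are my own additions, used only to keep log() away from 0):

import math

def log_loss_single(f_z, y, eps=1e-15):
    # Keep the predicted probability strictly between 0 and 1
    # so that log() never receives 0.
    f_z = min(max(f_z, eps), 1 - eps)
    if y == 1:
        return -math.log(f_z)        # loss = -log(f(z)) when y = 1
    return -math.log(1 - f_z)        # loss = -log(1 - f(z)) when y = 0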
Now let's see how the formula works and how it penalizes errors.
When y = 1,
loss = cost(f(z), y) = -log(f(z))
As shown in the figure below, if f(z) is close to 1 (the actual value), the loss is small, but if f(z) is close to 0, the natural log term -log(f(z)) penalizes it with a much higher loss.
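To put some rough numbers on it: if f(z) = 0.9 the loss is -log(0.9) ≈ 0.105, but if f(z) = 0.1 the loss jumps to -log(0.1) ≈ 2.303.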

When y = 0,
loss = cost(f(z), y) = -log(1 - f(z))
As shown in the figure below, if f(z) is close to 0 (the actual value), the loss is small, but if f(z) is close to 1, the natural log term -log(1 - f(z)) penalizes it with a much higher loss.
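Again with rough numbers: if f(z) = 0.1 the loss is -log(1 - 0.1) ≈ 0.105, but if f(z) = 0.9 the loss jumps to -log(1 - 0.9) ≈ 2.303.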

So far we have seen how the cost function works when the actual value of y is 1 and 0 respectively. Writing those two conditional cost functions as a single formula:

cost = -(1/m) * Σ [ y * log(f(z)) + (1 - y) * log(1 - f(z)) ]
This cost function gives the average of the losses over all “m” rows in the dataset. On a closer look we can see that:
when y = 1, the term -(1 - y) log(1 - f(z)) becomes 0, and
similarly, when y = 0, the term -(y) log(f(z)) becomes 0, so the combined formula reduces to the two equations above.
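Here is a small sketch of this averaged cost in Python (assuming NumPy arrays of predictions and labels; the function name binary_cost, the example arrays, and the eps clamp are my own illustrative choices):

import numpy as np

def binary_cost(f_z, y, eps=1e-15):
    # f_z: predicted probabilities for m rows, y: actual labels (0 or 1)
    f_z = np.clip(f_z, eps, 1 - eps)                          # avoid log(0)
    losses = -(y * np.log(f_z) + (1 - y) * np.log(1 - f_z))   # per-row loss
    return losses.mean()                                      # average over all m rows

# Predictions close to the labels give a small cost,
# predictions far from the labels give a large cost.
y_true = np.array([1, 0, 1, 0])
good   = np.array([0.9, 0.1, 0.8, 0.2])
bad    = np.array([0.2, 0.9, 0.1, 0.8])
print(binary_cost(good, y_true))   # ~0.16
print(binary_cost(bad, y_true))    # ~1.96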

The graph above combines both loss curves, and we can easily relate it to the explanation above. Hope you guys got the concept of the binary loss function. Keep Learning, Keep Growing.