For one variable, the cost function for linear regression, $J(w,b)$, is defined as

$$J(w,b) = \frac{1}{2m} \sum_{i=0}^{m-1} \left(f_{w,b}(x^{(i)}) - y^{(i)}\right)^2$$
You can think of $f_{w,b}(x^{(i)})$ as the model's prediction of your business's profit, as opposed to $y^{(i)}$, which is the actual profit recorded in the data.
$m$ is the number of training examples in the dataset.
import numpy as np

def compute_cost(x, y, w, b):
    """
    Computes the cost function for linear regression.

    Args:
        x (ndarray): Shape (m,) Input to the model (Population of cities)
        y (ndarray): Shape (m,) Label (Actual profits for the cities)
        w, b (scalar): Parameters of the model

    Returns:
        total_cost (float): The cost of using w, b as the parameters for linear regression
            to fit the data points in x and y
    """
    # number of training examples
    m = x.shape[0]

    # You need to return this variable correctly
    total_cost = 0

    ### START CODE HERE ###
    # Vectorized predictions f_{w,b}(x^{(i)}) = w * x^{(i)} + b for all examples at once
    predictions = w * x + b
    # Sum of squared errors, scaled by 1 / (2m)
    total_cost = (1 / (2 * m)) * np.sum((predictions - y) ** 2)
    ### END CODE HERE ###

    return total_cost
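As a quick sanity check (a hypothetical example, not part of the assignment data), you can call the function on a tiny dataset where the line fits perfectly, so the cost should be exactly zero. The function is restated here so the snippet runs on its own:

```python
import numpy as np

def compute_cost(x, y, w, b):
    """Cost J(w,b) = (1 / (2m)) * sum of squared prediction errors."""
    m = x.shape[0]
    predictions = w * x + b
    return (1 / (2 * m)) * np.sum((predictions - y) ** 2)

# Toy data: y = 2x + 1 exactly, so w=2, b=1 gives zero cost.
x = np.array([1.0, 2.0, 3.0])
y = np.array([3.0, 5.0, 7.0])

print(compute_cost(x, y, 2.0, 1.0))  # → 0.0
print(compute_cost(x, y, 0.0, 0.0))  # nonzero: errors are 3, 5, 7
```

Any other choice of `w` and `b` yields a strictly positive cost, which is what gradient descent will later minimize.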