
Contents

Model initialization:

Model implementation:

Multi-variable cost function:

Multi-variable gradient descent:

Computing the gradient (multiple variables):

Gradient descent loop (multiple variables):


In the gradient descent linear prediction model implemented in the previous part, each training example had only one feature: the house size. That is clearly unrealistic, so here the number of features is increased and the gradient descent linear prediction model is implemented again.

First, a quick review of how the gradient descent linear model is built:

  1. Build the linear model: f = w*x + b, where the model parameters w and b are to be determined.
  2. Find the optimal combination of w and b:

             (1) Introduce a cost function J(w, b) that measures how good or bad the model is (also called the loss function).

             (2) When the cost is at its minimum, the model fits the data best; gradient descent is used to search for the optimal w, b combination (the update rules are given below).
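
For reference, gradient descent repeats the following updates until convergence, where $\alpha$ is the learning rate:

$$w := w - \alpha\,\frac{\partial J(w,b)}{\partial w}, \qquad b := b - \alpha\,\frac{\partial J(w,b)}{\partial b}$$

With $n$ features the model generalizes to $f_{\mathbf{w},b}(\mathbf{x}) = \mathbf{w}\cdot\mathbf{x} + b$, with one weight $w_j$ per feature, and the same update is applied to every $w_j$.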

Model initialization:

  • The new housing data has 4 feature attributes: house size, number of bedrooms, number of floors, and age of the home.
| Size (sqft) | Number of Bedrooms | Number of floors | Age of Home | Price (1000s dollars) |
|-------------|--------------------|------------------|-------------|-----------------------|
| 2104        | 5                  | 1                | 45          | 460                   |
| 1416        | 3                  | 2                | 40          | 232                   |
| 852         | 2                  | 1                | 35          | 178                   |

The table above contains m = 3 training examples, each with n = 4 features, so the input feature matrix is:

$$\mathbf{X} = \begin{bmatrix} 2104 & 5 & 1 & 45 \\ 1416 & 3 & 2 & 40 \\ 852 & 2 & 1 & 35 \end{bmatrix} \qquad \text{(shape } m \times n = 3 \times 4\text{)}$$

In code, X_train is the input feature matrix and y_train is the vector of target values:

X_train = np.array([[2104, 5, 1, 45], [1416, 3, 2, 40],[852, 2, 1, 35]])
y_train = np.array([460, 232, 178])
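
As a quick sanity check (illustrative only, assuming NumPy is already imported as np), the shapes should match 3 examples and 4 features:

print(f"X_train shape: {X_train.shape}, y_train shape: {y_train.shape}")
# expected output: X_train shape: (3, 4), y_train shape: (3,)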

The model parameters are a weight vector w, with one weight per feature (shape (4,)), and a scalar bias b:

Code implementation (each element of w corresponds to one feature attribute of the house):

b_init = 785.1811367994083
w_init = np.array([ 0.39133535, 18.75376741, -53.36032453, -26.42131618])

Model implementation:

def predict(x, w, b):
    """
    single predict using linear regression
    Args:
      x (ndarray): Shape (n,) example with multiple features
      w (ndarray): Shape (n,) model parameters
      b (scalar):             model parameter
    Returns:
      p (scalar):  prediction
    """
    p = np.dot(x, w) + b
    return p
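
A minimal usage sketch, using the X_train, w_init and b_init defined above (the variable name x_vec is just for illustration):

# make a prediction for the first training example
x_vec = X_train[0, :]                  # shape (4,)
f_wb = predict(x_vec, w_init, b_init)  # scalar
print(f"prediction: {f_wb:0.2f}")      # w_init/b_init are near-optimal, so this lands close to y_train[0] = 460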

Multi-variable cost function:

The cost J(w, b) is:

$$J(\mathbf{w},b) = \frac{1}{2m}\sum_{i=0}^{m-1}\left(f_{\mathbf{w},b}(\mathbf{x}^{(i)}) - y^{(i)}\right)^2, \qquad f_{\mathbf{w},b}(\mathbf{x}^{(i)}) = \mathbf{w}\cdot\mathbf{x}^{(i)} + b$$

The code implementation is:

def compute_cost(X, y, w, b):
    """
    compute cost
    Args:
      X (ndarray (m,n)): Data, m examples with n features
      y (ndarray (m,)) : target values
      w (ndarray (n,)) : model parameters
      b (scalar)       : model parameter
    Returns:
      cost (scalar): cost
    """
    m = X.shape[0]
    cost = 0.0
    for i in range(m):
        f_wb_i = np.dot(X[i], w) + b        # (n,)(n,) = scalar (see np.dot)
        cost = cost + (f_wb_i - y[i])**2    # scalar
    cost = cost / (2 * m)                   # scalar
    return cost
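
The same cost can also be computed without the explicit loop. The vectorized version below is an equivalent sketch for comparison (the name compute_cost_vectorized is not used elsewhere in this post):

def compute_cost_vectorized(X, y, w, b):
    f_wb = X @ w + b                                    # (m,) predictions, one per example
    return np.sum((f_wb - y) ** 2) / (2 * X.shape[0])   # same value as the loop version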

Multi-variable gradient descent:

Computing the gradient (multiple variables):
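
The per-parameter gradients that the code below computes are:

$$\frac{\partial J(\mathbf{w},b)}{\partial w_j} = \frac{1}{m}\sum_{i=0}^{m-1}\left(f_{\mathbf{w},b}(\mathbf{x}^{(i)}) - y^{(i)}\right)x^{(i)}_j, \qquad \frac{\partial J(\mathbf{w},b)}{\partial b} = \frac{1}{m}\sum_{i=0}^{m-1}\left(f_{\mathbf{w},b}(\mathbf{x}^{(i)}) - y^{(i)}\right)$$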

def compute_gradient(X, y, w, b):
    """
    Computes the gradient for linear regression
    Args:
      X (ndarray (m,n)): Data, m examples with n features
      y (ndarray (m,)) : target values
      w (ndarray (n,)) : model parameters
      b (scalar)       : model parameter
    Returns:
      dj_dw (ndarray (n,)): The gradient of the cost w.r.t. the parameters w.
      dj_db (scalar):       The gradient of the cost w.r.t. the parameter b.
    """
    m, n = X.shape           # (number of examples, number of features)
    dj_dw = np.zeros((n,))
    dj_db = 0.
    for i in range(m):
        err = (np.dot(X[i], w) + b) - y[i]
        for j in range(n):
            dj_dw[j] = dj_dw[j] + err * X[i, j]
        dj_db = dj_db + err
    dj_dw = dj_dw / m
    dj_db = dj_db / m
    return dj_db, dj_dw
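
As with the cost, the double loop can be replaced by matrix operations. A vectorized sketch (for comparison only; the later code keeps the loop version) could look like this:

def compute_gradient_vectorized(X, y, w, b):
    m = X.shape[0]
    err = X @ w + b - y      # (m,) prediction error for each example
    dj_dw = X.T @ err / m    # (n,) gradient w.r.t. each weight
    dj_db = np.sum(err) / m  # scalar gradient w.r.t. the bias
    return dj_db, dj_dw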

Gradient descent loop (multiple variables):

def gradient_descent(X, y, w_in, b_in, cost_function, gradient_function, alpha, num_iters):
    """
    Performs batch gradient descent to learn theta. Updates theta by taking
    num_iters gradient steps with learning rate alpha
    Args:
      X (ndarray (m,n))   : Data, m examples with n features
      y (ndarray (m,))    : target values
      w_in (ndarray (n,)) : initial model parameters
      b_in (scalar)       : initial model parameter
      cost_function       : function to compute cost
      gradient_function   : function to compute the gradient
      alpha (float)       : Learning rate
      num_iters (int)     : number of iterations to run gradient descent
    Returns:
      w (ndarray (n,)) : Updated values of parameters
      b (scalar)       : Updated value of parameter
    """
    # An array to store cost J at each iteration, primarily for graphing later
    J_history = []
    w = copy.deepcopy(w_in)  # avoid modifying global w within function
    b = b_in

    for i in range(num_iters):
        # Calculate the gradient and update the parameters
        dj_db, dj_dw = gradient_function(X, y, w, b)

        # Update parameters using w, b, alpha and gradient
        w = w - alpha * dj_dw
        b = b - alpha * dj_db

        # Save cost J at each iteration
        if i < 100000:      # prevent resource exhaustion
            J_history.append(cost_function(X, y, w, b))

        # Print cost at intervals (10 times, or every iteration if num_iters < 10)
        if i % math.ceil(num_iters / 10) == 0:
            print(f"Iteration {i:4d}: Cost {J_history[-1]:8.2f}   ")

    return w, b, J_history  # return final w,b and J history for graphing

Testing gradient descent:

# initialize parameters
initial_w = np.zeros_like(w_init)
initial_b = 0.
# some gradient descent settings
iterations = 1000
alpha = 5.0e-7
# run gradient descent 
w_final, b_final, J_hist = gradient_descent(X_train, y_train, initial_w, initial_b,
                                            compute_cost, compute_gradient, alpha, iterations)
print(f"b,w found by gradient descent: {b_final:0.2f},{w_final} ")
m,_ = X_train.shape
for i in range(m):
    print(f"prediction: {np.dot(X_train[i], w_final) + b_final:0.2f}, target value: {y_train[i]}")

# plot cost versus iteration
fig, (ax1, ax2) = plt.subplots(1, 2, constrained_layout=True, figsize=(12, 4))
ax1.plot(J_hist)
ax2.plot(100 + np.arange(len(J_hist[100:])), J_hist[100:])
ax1.set_title("Cost vs. iteration");  ax2.set_title("Cost vs. iteration (tail)")
ax1.set_ylabel('Cost')             ;  ax2.set_ylabel('Cost') 
ax1.set_xlabel('iteration step')   ;  ax2.set_xlabel('iteration step') 
plt.show()

The result is the console output followed by the two cost curves, "Cost vs. iteration" and "Cost vs. iteration (tail)":

As the right-hand plot shows, the cost is still decreasing when the training iterations end, so the best w, b combination has not yet been found. A concrete fix will be covered in a later update.

The complete code:

import copy, math
import numpy as np
import matplotlib.pyplot as plt

np.set_printoptions(precision=2)  # reduced display precision on numpy arrays

X_train = np.array([[2104, 5, 1, 45], [1416, 3, 2, 40], [852, 2, 1, 35]])
y_train = np.array([460, 232, 178])

b_init = 785.1811367994083
w_init = np.array([ 0.39133535, 18.75376741, -53.36032453, -26.42131618])


def predict(x, w, b):
    """
    single predict using linear regression
    Args:
      x (ndarray): Shape (n,) example with multiple features
      w (ndarray): Shape (n,) model parameters
      b (scalar):             model parameter
    Returns:
      p (scalar):  prediction
    """
    p = np.dot(x, w) + b
    return p


def compute_cost(X, y, w, b):
    """
    compute cost
    Args:
      X (ndarray (m,n)): Data, m examples with n features
      y (ndarray (m,)) : target values
      w (ndarray (n,)) : model parameters
      b (scalar)       : model parameter
    Returns:
      cost (scalar): cost
    """
    m = X.shape[0]
    cost = 0.0
    for i in range(m):
        f_wb_i = np.dot(X[i], w) + b        # (n,)(n,) = scalar (see np.dot)
        cost = cost + (f_wb_i - y[i])**2    # scalar
    cost = cost / (2 * m)                   # scalar
    return cost


def compute_gradient(X, y, w, b):
    """
    Computes the gradient for linear regression
    Args:
      X (ndarray (m,n)): Data, m examples with n features
      y (ndarray (m,)) : target values
      w (ndarray (n,)) : model parameters
      b (scalar)       : model parameter
    Returns:
      dj_dw (ndarray (n,)): The gradient of the cost w.r.t. the parameters w.
      dj_db (scalar):       The gradient of the cost w.r.t. the parameter b.
    """
    m, n = X.shape           # (number of examples, number of features)
    dj_dw = np.zeros((n,))
    dj_db = 0.
    for i in range(m):
        err = (np.dot(X[i], w) + b) - y[i]
        for j in range(n):
            dj_dw[j] = dj_dw[j] + err * X[i, j]
        dj_db = dj_db + err
    dj_dw = dj_dw / m
    dj_db = dj_db / m
    return dj_db, dj_dw


def gradient_descent(X, y, w_in, b_in, cost_function, gradient_function, alpha, num_iters):
    """
    Performs batch gradient descent to learn theta. Updates theta by taking
    num_iters gradient steps with learning rate alpha
    Args:
      X (ndarray (m,n))   : Data, m examples with n features
      y (ndarray (m,))    : target values
      w_in (ndarray (n,)) : initial model parameters
      b_in (scalar)       : initial model parameter
      cost_function       : function to compute cost
      gradient_function   : function to compute the gradient
      alpha (float)       : Learning rate
      num_iters (int)     : number of iterations to run gradient descent
    Returns:
      w (ndarray (n,)) : Updated values of parameters
      b (scalar)       : Updated value of parameter
    """
    # An array to store cost J at each iteration, primarily for graphing later
    J_history = []
    w = copy.deepcopy(w_in)  # avoid modifying global w within function
    b = b_in

    for i in range(num_iters):
        # Calculate the gradient and update the parameters
        dj_db, dj_dw = gradient_function(X, y, w, b)

        # Update parameters using w, b, alpha and gradient
        w = w - alpha * dj_dw
        b = b - alpha * dj_db

        # Save cost J at each iteration
        if i < 100000:      # prevent resource exhaustion
            J_history.append(cost_function(X, y, w, b))

        # Print cost at intervals (10 times, or every iteration if num_iters < 10)
        if i % math.ceil(num_iters / 10) == 0:
            print(f"Iteration {i:4d}: Cost {J_history[-1]:8.2f}   ")

    return w, b, J_history  # return final w,b and J history for graphing


# initialize parameters
initial_w = np.zeros_like(w_init)
initial_b = 0.
# some gradient descent settings
iterations = 1000
alpha = 5.0e-7
# run gradient descent
w_final, b_final, J_hist = gradient_descent(X_train, y_train, initial_w, initial_b,
                                            compute_cost, compute_gradient, alpha, iterations)
print(f"b,w found by gradient descent: {b_final:0.2f},{w_final} ")
m, _ = X_train.shape
for i in range(m):
    print(f"prediction: {np.dot(X_train[i], w_final) + b_final:0.2f}, target value: {y_train[i]}")

# plot cost versus iteration
fig, (ax1, ax2) = plt.subplots(1, 2, constrained_layout=True, figsize=(12, 4))
ax1.plot(J_hist)
ax2.plot(100 + np.arange(len(J_hist[100:])), J_hist[100:])
ax1.set_title("Cost vs. iteration");  ax2.set_title("Cost vs. iteration (tail)")
ax1.set_ylabel('Cost')             ;  ax2.set_ylabel('Cost')
ax1.set_xlabel('iteration step')   ;  ax2.set_xlabel('iteration step')
plt.show()
