COMP9417 - Machine Learning Homework 1: Regularized Optimization & Gradient Methods
Introduction

In this homework we will explore gradient based optimization. Gradient based algorithms have been crucial to the development of machine learning in the last few decades. The most famous example is the backpropagation algorithm used in deep learning, which is in fact just a particular application of a simple algorithm known as (stochastic) gradient descent. We will first implement gradient descent from scratch on a deterministic problem (no data), and then extend our implementation to solve a real world regression problem.

Points Allocation

There are a total of 30 marks.

Question 1 a): 2 marks
Question 1 b): 4 marks
Question 1 c): 2 marks
Question 1 d): 2 marks
Question 1 e): 6 marks
Question 1 f): 6 marks
Question 1 g): 4 marks
Question 1 h): 2 marks
Question 1 i): 2 marks
What to Submit
A single PDF file which contains solutions to each question. For each question, provide your solution in the form of text and requested plots. For some questions you will be requested to provide screen shots of code used to generate your answer — only include these when they are explicitly asked for.
.py file(s) containing all code you used for the project, which should be provided in a separate .zip file. This code must match the code provided in the report.
You may be deducted points for not following these instructions.
You may be deducted points for poorly presented/formatted work. Please be neat and make your solutions clear. Start each question on a new page if necessary.

You cannot submit a Jupyter notebook; this will receive a mark of zero. This does not stop you from developing your code in a notebook and then copying it into a .py file though, or using a tool such as nbconvert or similar.
We will set up a Moodle forum for questions about this homework. Please read the existing questions before posting new questions. Please do some basic research online before posting questions. Please only post clarification questions. Any questions deemed to be fishing for answers will be ignored and/or deleted.
Please check Moodle announcements for updates to this spec. It is your responsibility to check for announcements about the spec.
Please complete your homework on your own, do not discuss your solution with other people in the course. General discussion of the problems is fine, but you must write out your own solution and acknowledge if you discussed any of the problems in your submission (including their name(s) and zID).
As usual, we monitor all online forums such as Chegg, StackExchange, etc. Posting homework questions on these sites is equivalent to plagiarism and will result in a case of academic misconduct.
You may not use SymPy or any other symbolic programming toolkits to answer the derivation questions. This will result in an automatic grade of zero for the relevant question. You must do the derivations manually.
When and Where to Submit
Due date: Week 4, Monday March 4th, 2024 by 5pm. Please note that the forum will not be actively monitored on weekends.
Late submissions will incur a penalty of 5% per day from the maximum achievable grade. For example, if you achieve a grade of 80/100 but you submitted 3 days late, then your final grade will be 80 − 3 × 5 = 65. Submissions that are more than 5 days late will receive a mark of zero.
Submission must be made on Moodle, no exceptions.
Question 1. Gradient Based Optimization
The general framework for a gradient method for finding a minimizer of a function f : Rn → R is defined by

x(k+1) = x(k) − αk ∇f(x(k)),   k = 0, 1, 2, . . . ,   (1)
where αk > 0 is known as the step size, or learning rate. Consider the following simple example of minimizing the function g(x) = 2√(x³ + 1). We first note that g′(x) = 3x²(x³ + 1)^(−1/2). We then need to choose a starting value of x, say x(0) = 1. Let's also take the step size to be constant, αk = α = 0.1. Then we have the following iterations:

x(1) = x(0) − 0.1 × 3(x(0))²((x(0))³ + 1)^(−1/2) = 0.7878679656440357
x(2) = x(1) − 0.1 × 3(x(1))²((x(1))³ + 1)^(−1/2) ≈ 0.6352617
x(3) ≈ 0.5272505

and this continues until we terminate the algorithm (as a quick exercise for your own benefit, code this up and compare it to the true minimum of the function, which is x∗ = −1; does the algorithm converge to the true minimizer? Why/why not?). This idea works for functions that have vector valued inputs, which is often the case in machine learning. For example, when we minimize a loss function we do so with respect to a weight vector, β. When we take the step size to be constant at each iteration, this algorithm is known as gradient descent. For the entirety of this question, do not use any existing implementations of gradient methods; doing so will result in an automatic mark of zero for the entire question.
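
As a quick illustration of the example above (not part of any graded question), a minimal sketch of the iteration in plain numpy might look as follows; the variable names are ours:

import numpy as np

def g_prime(x):
    # derivative of g(x) = 2*sqrt(x^3 + 1)
    return 3 * x**2 / np.sqrt(x**3 + 1)

x = 1.0        # starting value x(0)
alpha = 0.1    # constant step size
for k in range(1, 4):
    x = x - alpha * g_prime(x)
    print(f"x({k}) = {x}")

Running more iterations and comparing the iterates with the true minimizer x∗ = −1 is exactly the quick exercise suggested above.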

(a) Consider the following optimisation problem:

min_{x∈Rn} f(x),   where   f(x) = (1/2)‖Ax − b‖₂² + (γ/2)‖x‖₂²,

and where A ∈ Rm×n, b ∈ Rm are defined as

A = [ −1   3   0  −4 ]
    [  0   2  −1  −2 ]
    [  3   0   2   7 ] ,

b = [ −4 ]
    [  3 ]
    [  1 ] ,
and γ is a positive constant. Run gradient descent on f using a step size of α = 0.01 and γ = 2 and a starting point of x(0) = (1, 1, 1, 1). You will need to terminate the algorithm when the following condition is met: ‖∇f(x(k))‖₂ < 0.001. In your answer, clearly write down the version of the gradient steps (1) for this problem. Also, print out the first 5 and last 5 values of x(k), clearly indicating the value of k, in the form:

k = ⟨iteration⟩, x(k) = ⟨current iterate⟩

What to submit: an equation outlining the explicit gradient update, a print out of the first 5 (up to k = 5 inclusive) and last 5 rows of your iterations. Use the round function to round your numbers to 4 decimal places. Include a screen shot of any code used for this section and a copy of your python code in solutions.py.
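
The required update equation is left to you, but purely as a structural sketch of the termination-controlled loop (our variable names, with the gradient written out via standard matrix calculus):

import numpy as np

A = np.array([[-1., 3., 0., -4.],
              [ 0., 2., -1., -2.],
              [ 3., 0., 2., 7.]])   # A, b as defined above
b = np.array([-4., 3., 1.])
gamma, alpha = 2.0, 0.01

def grad_f(x):
    # gradient of f(x) = (1/2)||Ax - b||^2 + (gamma/2)||x||^2
    return A.T @ (A @ x - b) + gamma * x

x = np.ones(4)                      # x(0) = (1, 1, 1, 1)
k = 0
while np.linalg.norm(grad_f(x)) >= 0.001:
    x = x - alpha * grad_f(x)
    k += 1
print(k, np.round(x, 4))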

Consider now a slightly different problem: let y, β ∈ Rp and λ > 0. Further, we define the (p − 2) × p matrix

W = [ 1 −2  1               ]
    [    1 −2  1            ]
    [        ⋱  ⋱  ⋱        ]
    [            1 −2  1    ]

where blanks denote zero elements.² Define the loss function:

L(β) = (1/(2p)) ‖y − β‖₂² + λ ‖Wβ‖₂².   (2)

Code to load in the data needed for this problem is provided for you³. Note: the t variable is purely for plotting purposes; it should not appear in any of your calculations.

(b) Show that

β̂ = arg min_β L(β) = (I + 2λp WᵀW)⁻¹ y.

Update the following code⁴ so that it returns a plot of β̂ and calculates L(β̂). Only in your code implementation, set λ = 0.9.

def create_W(p):
    ## generate W, which is a (p-2) x p matrix as defined in the question
    W = np.zeros((p - 2, p))
    b = np.array([1, -2, 1])
    for i in range(p - 2):
        W[i, i:i + 3] = b
    return W

def loss(beta, y, W, L):
    ## compute the loss for a given vector beta, data y, matrix W and
    ## regularization parameter L (lambda)
    # your code here
    return loss_val

## your code here, e.g. compute betahat and loss, and set other params..

plt.plot(t_var, y_var, zorder=1, color='red', label='truth')
plt.plot(t_var, beta_hat, zorder=3, color='blue',
         linewidth=2, linestyle='--', label='fit')
plt.legend(loc='best')
plt.title(f"L(beta_hat) = {loss(beta_hat, y, W, L)}")
plt.show()

²If it is not already clear: for the first row of W, W11 = 1, W12 = −2, W13 = 1 and W1j = 0 for any j ≥ 4. For the second row of W, W21 = 0, W22 = 1, W23 = −2, W24 = 1 and W2j = 0 for any j ≥ 5, and so on.
³A copy of this code is provided in code_student.py.
⁴A copy of this code is provided in code_student.py.

What to submit: a closed form expression along with your working, a single plot and a screen shot of your code along with a copy of your code in your .py file.
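
As a pointer for the implementation, once create_W and the data vector y are available, the closed form above can be evaluated with a linear solve rather than an explicit matrix inverse; a minimal sketch, assuming y has been loaded by the provided code:

import numpy as np

lam = 0.9                  # lambda, as specified for the code implementation
p = len(y)                 # y: the data vector loaded by the provided code
W = create_W(p)
# solve (I + 2*lam*p*W^T W) beta_hat = y; a solve is more stable than inverting
beta_hat = np.linalg.solve(np.eye(p) + 2 * lam * p * W.T @ W, y)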

(c) Write out each of the two terms that make up the loss function ((1/(2p))‖y − β‖₂² and λ‖Wβ‖₂²) explicitly using summations. Use this representation to explain the role played by each of the two terms. Be as specific as possible. What to submit: your answer, and any working either typed or handwritten.
(d) Show that we can write (2) in the following way:

L(β) = (1/p) Σ_{j=1}^p Lj(β),

where Lj(β) depends on the data y1, . . . , yp only through yj. Further, show that

∇Lj(β) = (0, . . . , 0, −(yj − βj), 0, . . . , 0)ᵀ + 2λWᵀWβ,   j = 1, . . . , p.

Note that the first vector is the p-dimensional vector with zero everywhere except for the j-th index. Take a look at the supplementary material if you are confused by the notation. What to submit: your answer, and any working either typed or handwritten.
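
Though not required for marks, a standard way to validate a derived gradient such as ∇Lj before using it inside SGD is a central finite-difference check; in this sketch, loss_j and grad_j are hypothetical names for your own implementations of Lj and ∇Lj:

import numpy as np

def fd_grad(f, beta, eps=1e-6):
    # central finite-difference approximation to the gradient of f at beta
    g = np.zeros_like(beta)
    for i in range(len(beta)):
        e = np.zeros_like(beta)
        e[i] = eps
        g[i] = (f(beta + e) - f(beta - e)) / (2 * eps)
    return g

# example usage with your own loss_j / grad_j (hypothetical names):
# beta0 = np.random.randn(p)
# assert np.allclose(grad_j(beta0), fd_grad(loss_j, beta0), atol=1e-4)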

(e) In this question, you will implement (batch) GD from scratch to solve (2). Use an initial estimate β(0) = 1p (the p-dimensional vector of ones) and λ = 0.001, and run the algorithm for 1000 epochs (an epoch is one pass over the entire data, so a single GD step). Repeat this for the following step sizes:

α ∈ {0.001, 0.005, 0.01, 0.05, 0.1, 0.3, 0.6, 1.2, 2}

To monitor the performance of the algorithm, we will plot the value

∆(k) = L(β(k)) − L(β̂),

where β̂ is the true (closed form) solution derived earlier. Present your results in a single 3 × 3 grid plot, with each subplot showing the progression of ∆(k) when running GD with a specific step-size. State which step-size you think is best in terms of speed of convergence. What to submit: a single plot. Include a screen shot of any code used for this section and a copy of your python code in solutions.py.
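
One possible skeleton for the 3 × 3 grid is sketched below; gd(alpha, n_epochs) stands for your own GD implementation returning the sequence of loss values L(β(k)), and L_hat for L(β̂) from part (b) (both names are ours, not provided code):

import matplotlib.pyplot as plt

alphas = [0.001, 0.005, 0.01, 0.05, 0.1, 0.3, 0.6, 1.2, 2]
fig, axes = plt.subplots(3, 3, figsize=(12, 10))
for ax, alpha in zip(axes.flat, alphas):
    losses = gd(alpha, n_epochs=1000)       # your GD implementation
    deltas = [l - L_hat for l in losses]    # delta(k) = L(beta(k)) - L(beta_hat)
    ax.plot(deltas)
    ax.set_title(f"alpha = {alpha}")
fig.tight_layout()
plt.show()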

(f) We will now implement SGD from scratch to solve (2). Use an initial estimate β(0) = 1p (the vector of ones) and λ = 0.001, and run the algorithm for 4 epochs (this means a total of 4p updates of β). Repeat this for the following step sizes:

α ∈ {0.001, 0.005, 0.01, 0.05, 0.1, 0.3, 0.6, 1.2, 2}

Present an analogous single 3 × 3 grid plot as in the previous question. Instead of choosing an index randomly at each step of SGD, we will cycle through the observations in the order they are stored in y to ensure consistent results. Report the best step-size choice. In some cases you might observe that the value of ∆(k) jumps up and down, and this is not something you would have seen using batch GD. Why do you think this might be happening?

What to submit: a single plot and some commentary. Include a screen shot of any code used for this section and a copy of your python code in solutions.py.
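
For the cyclic (non-random) index choice, the update order over the 4 epochs can be generated as in the following sketch; grad_Lj(beta, j) is a hypothetical name for your implementation of ∇Lj from part (d):

def sgd_cyclic(beta0, alpha, n_epochs):
    # cycle through j = 0, 1, ..., p-1 in order; n_epochs full passes
    beta = beta0.copy()
    p = len(beta)
    trace = [beta.copy()]
    for _ in range(n_epochs):
        for j in range(p):
            beta = beta - alpha * grad_Lj(beta, j)   # your gradient from (d)
            trace.append(beta.copy())
    return trace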

An alternative Coordinate Based scheme: In GD, SGD and mini-batch GD, we always update the entire p-dimensional vector β at each iteration. An alternative approach is to update each of the p parameters individually. To make this idea more clear, we write the loss function of interest L(β) as L(β1, β2, . . . , βp). We initialize β(0), and then for k = 1, 2, 3, . . . we solve

β1(k) = arg min_{β1} L(β1, β2(k−1), β3(k−1), . . . , βp(k−1))
β2(k) = arg min_{β2} L(β1(k), β2, β3(k−1), . . . , βp(k−1))
⋮
βp(k) = arg min_{βp} L(β1(k), β2(k), β3(k), . . . , βp).

Note that each of the minimizations is over a single (one-dimensional) coordinate of β, and also that as soon as we update βj(k), we use the new value when solving the update for βj+1(k), and so on. The idea is then to cycle through these coordinate level updates until convergence. In the next two parts we will implement this algorithm from scratch for the problem we have been working on (2); a structural sketch follows.
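
Independent of the closed-form updates you will derive in (g), the cyclic structure of the scheme looks like the following sketch, where coord_min(j, beta) is a hypothetical helper returning the minimizer over the j-th coordinate with all others held fixed:

def coordinate_descent(beta0, n_updates):
    # cycle through the coordinates in order; each single-coordinate
    # minimization counts as one update
    beta = beta0.copy()
    p = len(beta)
    for k in range(n_updates):
        j = k % p                      # coordinates 0, 1, ..., p-1, 0, 1, ...
        beta[j] = coord_min(j, beta)   # uses the latest values of all others
    return beta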

(g) Derive closed-form expressions for β̂1, β̂2, . . . , β̂p, where for j = 1, 2, . . . , p:

β̂j = arg min_{βj} L(β1, . . . , βj−1, βj, βj+1, . . . , βp).

What to submit: a closed form expression along with your working.

Hint: Be careful, this is not as straight-forward as it might seem at first. It is recommended to choose a value for p, e.g. p = 8, and first write out the expression in terms of summations. Then take derivatives to get the closed form expressions.

(h) Implement both gradient descent and the coordinate scheme in code (from scratch) and apply them to the provided data. In your implementation:
Use λ = 0.001 for the coordinate scheme, and step-size α = 1 for your gradient descent scheme.
Initialize both algorithms with β = 1p, the p-dimensional vector of ones.
For the coordinate scheme, be sure to update the βj's in order (i.e. 1, 2, 3, . . .).
For your coordinate scheme, terminate the algorithm after 1000 updates (each time you update a single coordinate, that counts as an update).
For your GD scheme, terminate the algorithm after 1000 epochs.
Create a single plot of k vs ∆(k) = L(β(k)) − L(β̂), where β̂ is the closed form expression derived earlier. Your plot should have both the coordinate scheme (blue) and GD (green) displayed and should start from k = 0. Your plot should have a legend.
What to submit: a single plot and a screen shot of your code along with a copy of your code in your .py file.

(i) Based on your answer to the previous part, when would you prefer GD? When would you prefer the coordinate scheme? What to submit: Some commentary.

Supplementary: Background on Gradient Descent

As noted in the lectures, there are a few variants of gradient descent that we will briefly outline here. Recall that in gradient descent our update rule is

β(k+1) = β(k) − αk∇L(β(k)),   k = 0, 1, 2, . . . ,
where L(β) is the loss function that we are trying to minimize. In machine learning, it is often the case that the loss function takes the form

L(β) = (1/n) Σ_{i=1}^n Li(β),

i.e. the loss is an average of n functions that we have labelled Li, and each Li depends on the data only through (xi, yi). It then follows that the gradient is also an average of the form

∇L(β) = (1/n) Σ_{i=1}^n ∇Li(β).

We can now define some popular variants of gradient descent.

(i) Gradient Descent (GD) (also referred to as batch gradient descent): here we use the full gradient, as in we take the average over all n terms, so our update rule is:

β(k+1) = β(k) − (αk/n) Σ_{i=1}^n ∇Li(β(k)),   k = 0, 1, 2, . . . .

(ii) Stochastic Gradient Descent (SGD): instead of considering all n terms, at the k-th step we choose an index ik randomly from {1, . . . , n}, and update

β(k+1) = β(k) − αk∇Lik(β(k)),   k = 0, 1, 2, . . . .

Here, we are approximating the full gradient ∇L(β) using ∇Lik(β).

(iii) Mini-Batch Gradient Descent: GD (using all terms) and SGD (using a single term) represent the two possible extremes. In mini-batch GD we choose batches of size 1 < B < n randomly at each step, call their indices {ik1, ik2, . . . , ikB}, and then we update

β(k+1) = β(k) − (αk/B) Σ_{j=1}^B ∇Likj(β(k)),   k = 0, 1, 2, . . . ,
so we are still approximating the full gradient but using more than a single element as is done in SGD.
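
To make the three variants concrete, the sketch below isolates how each update draws its indices; grad_Li(beta, i) is a hypothetical helper computing ∇Li(β):

import numpy as np

rng = np.random.default_rng(0)

def gd_step(beta, alpha, n):
    # batch GD: average the gradient over all n terms
    return beta - alpha * np.mean([grad_Li(beta, i) for i in range(n)], axis=0)

def sgd_step(beta, alpha, n):
    # SGD: a single randomly chosen term approximates the full gradient
    i = rng.integers(n)
    return beta - alpha * grad_Li(beta, i)

def minibatch_step(beta, alpha, n, B):
    # mini-batch GD: average over B randomly chosen terms, 1 < B < n
    idx = rng.choice(n, size=B, replace=False)
    return beta - alpha * np.mean([grad_Li(beta, i) for i in idx], axis=0)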
    国产精品亚洲美女av网站| 九九久久精品一区| 欧美亚洲丝袜| 久久国产精品首页| 久久精品视频91| 免费99视频| 三年中文高清在线观看第6集| 色婷婷av一区二区三区久久| 蜜桃传媒视频第一区入口在线看 | 国产精品久久久久久五月尺 | 国产精品6699| 黄色一级视频播放| 午夜精品99久久免费| 精品国内亚洲在观看18黄| 高清不卡一区二区三区| 欧美一区三区二区在线观看| 秋霞在线一区二区| 69精品小视频| 欧美日韩国产三区| 久久久久久18| 国产成人精品免费久久久久| 欧美亚洲视频一区二区| 欧美精品在线免费播放| 91精品国产91| 欧美日韩一区二区视频在线| 国产精品久久久久久久久久直播| 国产在线视频欧美| 亚洲欧洲三级| 国产不卡精品视男人的天堂| 黄www在线观看| 国产精品精品视频| 68精品久久久久久欧美| 黄色免费观看视频网站| 亚洲精品视频一区二区三区| 日韩中文字幕不卡视频| 国产精品一区二区3区| 日韩av免费网站| 精品国产一区二区三区四区精华| 国产成人在线一区| 国产欧美日韩精品专区| 日韩精品久久久| 久99久在线视频| 日韩最新免费不卡| 国产精品自产拍高潮在线观看| 日韩精品久久一区二区| 欧美精品videofree1080p| 国产成人生活片| 99国产精品白浆在线观看免费| 欧美日韩福利在线| 加勒比在线一区二区三区观看| 性色av一区二区咪爱| 久久成人精品一区二区三区| 国产成人精品免费久久久久| 国产免费一区二区三区香蕉精| 日韩美女免费线视频| 在线亚洲美日韩| 国产精品美乳在线观看| 91精品国产电影| 精品无码av无码免费专区| 色大师av一区二区三区| 欧美日本精品在线| 欧美精品一区二区性色a+v| 亚洲91精品在线亚洲91精品在线| 另类色图亚洲色图| 国产精品无码乱伦| 久久99国产精品一区| 色噜噜狠狠色综合网图区| 国产精品ⅴa在线观看h| 国产精品亚洲欧美导航| 国产一区二区中文字幕免费看| 欧美在线中文字幕| 日本一区网站| 亚洲欧美久久234| 精品免费久久久久久久| 国产精品日韩一区| 日韩综合视频在线观看| 国产福利视频在线播放| 97人人香蕉| 成人av在线不卡| 国产美女99p| 国产欧美日韩亚洲| 国产在线视频一区| 国产日韩欧美日韩| 国产尤物av一区二区三区| 欧美极品欧美精品欧美图片| 欧美在线免费视频| 日韩视频免费在线播放| 日韩中文在线字幕| 亚洲免费在线精品一区| 欧美精品久久一区二区| 欧美精品免费看| 欧美激情一级二级| 亚洲最大av网站| 亚洲第一综合网站| 亚洲乱码日产精品bd在线观看| 中文字幕免费在线不卡| 国产99在线播放| 一区精品视频| 亚洲影视中文字幕| 亚洲精品视频一二三| 日韩中文一区| 日韩欧美三级一区二区| 日本一区二区久久精品| 视频一区国产精品| 日本人妻伦在线中文字幕| 欧美一区二区三区电影在线观看| 日本最新高清不卡中文字幕 | 欧美国产一二三区| 黄色大片中文字幕| 裸模一区二区三区免费| 国产一区在线观| 成人免费网视频| 久久伊人资源站| 九色自拍视频在线观看| 精品国产一区二区三区久久| 国产精品久久久久免费| 一区精品视频| 色大师av一区二区三区| 奇米888一区二区三区| 加勒比在线一区二区三区观看| 国产三级中文字幕| 国产精品夜间视频香蕉| 91九色在线免费视频| 国产成人精品福利一区二区三区 | 美女视频久久黄| 亚洲一区二区三区久久| 日本不卡一二三区| 国产一区二区在线观看免费播放| 97精品国产97久久久久久粉红| 久久国产精品 国产精品 | 欧美日韩一区二区视频在线观看 | 亚洲一区二区三区免费观看| 日本一区二区三区视频免费看 | 欧美影院久久久| 国产欧美一区二区三区在线看| 91免费在线视频| 精品国产美女在线| 宅男av一区二区三区| 日韩黄色片在线| 国产精品香蕉av| 日韩在线激情视频| 国产99久久精品一区二区 | 久久精品2019中文字幕| 国产99久久久欧美黑人| 日韩精品一区二区三区四| 国产午夜福利视频在线观看| 久久人人爽人人爽人人片av高清| 久久精品国产久精国产一老狼| 一卡二卡3卡四卡高清精品视频| 日韩人妻精品无码一区二区三区| 国产资源第一页| 九色一区二区| 亚洲自拍av在线| 国外色69视频在线观看| 91精品国产免费久久久久久| 久久视频中文字幕| 亚洲aaa激情| 国产视频精品网| 国产成人精品一区二区三区福利 | 久久精品亚洲热| 国产999视频| 奇米影视首页 狠狠色丁香婷婷久久综合 | 国产成人精品免费久久久久| 欧美乱妇高清无乱码| 日韩欧美精品在线不卡| 超碰免费在线公开| 国产精品高潮粉嫩av| 日韩欧美猛交xxxxx无码| 99免费在线观看视频| 国产精品日日摸夜夜添夜夜av| 天天干天天色天天爽| 国产乱码一区| 国产精品久久久久不卡| 人偷久久久久久久偷女厕 | 欧美激情视频在线观看| 欧美亚洲另类激情另类| 国产精品91在线观看| 欧美黄网免费在线观看| 国模杨依粉嫩蝴蝶150p| 日韩综合视频在线观看| 日韩中文字幕一区| 99久久精品免费看国产一区二区三区 | 国外色69视频在线观看| 久久精品日韩| 亚洲成人一区二区三区| 成人黄色中文字幕| 欧美伦理91i| 国产专区在线视频| 国产精品免费区二区三区观看| 日本新janpanese乱熟| 91久久国产精品91久久性色 | 97热精品视频官网| 精品久久久久久久免费人妻| 欧美激情第六页| 日日狠狠久久偷偷四色综合免费| 欧美一区二区三区在线免费观看| 99久久久精品免费观看国产| 伊人久久在线观看| av久久久久久| 亚洲成人一区二区三区|