COMP9417 - Machine Learning Homework 2: Numerical Implementation of Logistic Regression
Introduction In homework 1, we considered Gradient Descent (and coordinate descent) for minimizing a regularized loss function. In this homework, we consider an alternative method known as Newton’s algorithm. We will first run Newton’s algorithm on a simple toy problem, and then implement it from scratch on a real data classification problem. We also look at the dual version of logistic regression.
Points Allocation There are a total of 30 marks.
• Question 1 a): 1 mark
• Question 1 b): 2 marks
• Question 2 a): 3 marks
• Question 2 b): 3 marks
• Question 2 c): 2 marks
• Question 2 d): 4 marks
• Question 2 e): 4 marks
• Question 2 f): 2 marks
• Question 2 g): 4 marks
• Question 2 h): 3 marks
• Question 2 i): 2 marks
What to Submit
• A single PDF file which contains solutions to each question. For each question, provide your solution in the form of text and requested plots. For some questions you will be requested to provide screen shots of code used to generate your answer — only include these when they are explicitly asked for.
• .py file(s) containing all code you used for the project, which should be provided in a separate .zip file. This code must match the code provided in the report.
• You may be deducted points for not following these instructions.
• You may be deducted points for poorly presented/formatted work. Please be neat and make your solutions clear. Start each question on a new page if necessary.
• You cannot submit a Jupyter notebook; this will receive a mark of zero. This does not stop you from developing your code in a notebook and then copying it into a .py file though, or using a tool such as nbconvert or similar.
• We will set up a Moodle forum for questions about this homework. Please read the existing questions before posting new questions. Please do some basic research online before posting questions. Please only post clarification questions. Any questions deemed to be fishing for answers will be ignored and/or deleted.
• Please check Moodle announcements for updates to this spec. It is your responsibility to check for announcements about the spec.
• Please complete your homework on your own; do not discuss your solution with other people in the course. General discussion of the problems is fine, but you must write out your own solution and acknowledge if you discussed any of the problems in your submission (including their name(s) and zID).
• As usual, we monitor all online forums such as Chegg, StackExchange, etc. Posting homework questions on these sites is equivalent to plagiarism and will result in a case of academic misconduct.
When and Where to Submit
• Due date: Week 7, Monday March 25th, 2024 by 5pm. Please note that the forum will not be actively monitored on weekends.
• Late submissions will incur a penalty of 5% per day from the maximum achievable grade. For example, if you achieve a grade of 80/100 but you submitted 3 days late, then your final grade will be 80 − 3 × 5 = 65. Submissions that are more than 5 days late will receive a mark of zero.
• Submission must be done through Moodle, no exceptions.
Question 1. Introduction to Newton’s Method
Note: throughout this question do not use any existing implementations of any of the algorithms discussed unless explicitly asked to in the question. Using existing implementations can result in a grade of zero for the entire question. In homework 1 we studied gradient descent (GD), which is usually referred to as a first order method. Here, we study an alternative algorithm known as Newton's algorithm, which is generally referred to as a second order method. Roughly speaking, a second order method makes use of both first and second derivatives. Generally, second order methods are much more accurate than first order ones. Given a twice differentiable function g : R → R, Newton's method generates a sequence {x^{(k)}} iteratively according to the following update rule:
x^{(k+1)} = x^{(k)} − g'(x^{(k)}) / g''(x^{(k)}),   k = 0, 1, 2, ...,   (1)
For example, consider the function g(x) = (1/2)x^2 − sin(x) with initial guess x^{(0)} = 0. Then g'(x) = x − cos(x) and g''(x) = 1 + sin(x),
and so we have the following iterations:
x^{(1)} = x^{(0)} − (x^{(0)} − cos(x^{(0)})) / (1 + sin(x^{(0)})) = 0 − (0 − cos(0)) / (1 + sin(0)) = 1
x^{(2)} = x^{(1)} − (x^{(1)} − cos(x^{(1)})) / (1 + sin(x^{(1)})) = 1 − (1 − cos(1)) / (1 + sin(1)) = 0.750363867840244
x^{(3)} = 0.739112890911362
...
and this continues until we terminate the algorithm (as a quick exercise for your own benefit, code this up and plot the function and each of the iterates; a minimal sketch appears after equation (3) below). We note here that in practice, we often use a different update called the dampened Newton method, defined by:
x^{(k+1)} = x^{(k)} − α g'(x^{(k)}) / g''(x^{(k)}),   k = 0, 1, 2, ...   (2)
Here, as in the case of GD, the step size α has the effect of 'dampening' the update. Consider now a twice differentiable function f : R^n → R. The Newton steps in this case are now:
x^{(k+1)} = x^{(k)} − (H(x^{(k)}))^{−1} ∇f(x^{(k)}),   k = 0, 1, 2, ...,   (3)
where H(x) = ∇²f(x) is the Hessian of f. Heuristically, this formula generalizes equation (1) to functions with vector inputs, since the gradient is the analog of the first derivative and the Hessian is the analog of the second derivative.
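As that quick exercise suggests, here is a minimal sketch of the undampened update (1) applied to the toy function g(x) = (1/2)x^2 − sin(x); the iteration cap and stopping tolerance are illustrative choices, not part of the spec:

    import numpy as np

    def g_prime(x):
        return x - np.cos(x)       # g'(x)

    def g_double_prime(x):
        return 1 + np.sin(x)       # g''(x)

    x = 0.0                        # initial guess x^(0)
    for k in range(1, 11):
        x = x - g_prime(x) / g_double_prime(x)   # Newton update (1)
        print(k, x)
        if abs(g_prime(x)) < 1e-12:              # derivative numerically zero
            break

The first two printed iterates should match x^(1) = 1 and x^(2) = 0.750363867840244 above.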
(a) Consider the function f : R^2 → R defined by
f(x, y) = 100(y − x^2)^2 + (1 − x)^2.
Create a 3D plot of the function using mplot3d (see lab0 for an example). Use a range of [−5, 5] for both the x and y axes. Further, compute the gradient and Hessian of f. what to submit: a single plot, the code used to generate the plot, the gradient and Hessian calculated along with all working. Add a copy of the code to solutions.py
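A minimal plotting sketch, assuming matplotlib is available; the grid resolution and colormap are arbitrary choices rather than requirements:

    import numpy as np
    import matplotlib.pyplot as plt

    # evaluate f on a grid over [-5, 5] x [-5, 5]
    xs = np.linspace(-5, 5, 200)
    X, Y = np.meshgrid(xs, xs)
    Z = 100 * (Y - X**2)**2 + (1 - X)**2

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")    # mpl_toolkits.mplot3d surface axes
    ax.plot_surface(X, Y, Z, cmap="viridis")
    ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("f(x, y)")
    plt.show()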
(b) Using NumPy only, implement the (undampened) Newton algorithm to find the minimizer of the function in the previous part, using an initial guess of x^{(0)} = (−1.2, 1)^T. Terminate the algorithm when ‖∇f(x^{(k)})‖_2 ≤ 10^{−6}. Report the values of x^{(k)} for k = 0, 1, ..., K, where K is your final iteration. what to submit: your iterations, and a screen shot of your code. Add a copy of the code to solutions.py
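A generic skeleton for update (3) is sketched below; grad and hess are placeholders for the gradient and Hessian you derive in part (a), and solving H(x) d = ∇f(x) with np.linalg.solve is used in place of forming the inverse explicitly (a standard numerical choice, not a requirement of the spec):

    import numpy as np

    def newton(x0, grad, hess, tol=1e-6, max_iter=100):
        """Undampened Newton iterations; stops when ||grad(x)||_2 <= tol."""
        x = np.asarray(x0, dtype=float)
        iterates = [x.copy()]
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) <= tol:
                break
            x = x - np.linalg.solve(hess(x), g)   # Newton step (3)
            iterates.append(x.copy())
        return x, iterates

This would be called as newton(np.array([-1.2, 1.0]), grad, hess) once grad and hess implement your part (a) expressions.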
Question 2. Solving Logistic Regression Numerically
Note: throughout this question do not use any existing implementations of any of the algorithms discussed unless explicitly asked to do so in the question. Using existing implementations can result in a grade of zero for the entire question. In this question we will compare gradient descent and Newton's algorithm for solving the logistic regression problem. Recall that in logistic regression, our goal is to minimize the log-loss, also referred to as the cross entropy loss. Consider an intercept β_0 ∈ R, parameter vector β = (β_1, ..., β_m)^T ∈ R^m, target y_i ∈ {0, 1} and input vector x_i = (x_{i1}, x_{i2}, ..., x_{ip})^T. Consider also the feature map φ : R^p → R^m and corresponding feature vector φ_i = (φ_{i1}, φ_{i2}, ..., φ_{im})^T, where φ_i = φ(x_i). Define the (ℓ2-regularized) log-loss function:

L(β_0, β) = (1/2)‖β‖_2^2 − (λ/n) Σ_{i=1}^n [ y_i ln(σ(β_0 + β^T φ_i)) + (1 − y_i) ln(1 − σ(β_0 + β^T φ_i)) ],

where σ(z) = (1 + e^{−z})^{−1} is the logistic sigmoid, and λ is a hyper-parameter that controls the amount of regularization. Note that λ here is applied to the data-fit term as opposed to the penalty term directly, but all that changes is that larger λ now means more emphasis on data-fitting and less on regularization. Note also that you are provided with an implementation of this loss in helper.py.

(a) Show that the gradient descent update (with step size α) for γ = [β_0, β^T]^T takes the form

γ^{(k)} = γ^{(k−1)} − α × [ −(λ/n) 1_n^T (y − σ(β_0^{(k−1)} 1_n + Φ β^{(k−1)}))
                            β^{(k−1)} − (λ/n) Φ^T (y − σ(β_0^{(k−1)} 1_n + Φ β^{(k−1)})) ],

where the sigmoid σ(·) is applied elementwise, 1_n is the n-dimensional vector of ones, and

Φ = [φ_1^T; φ_2^T; ...; φ_n^T] ∈ R^{n×m},   y = (y_1, y_2, ..., y_n)^T ∈ R^n.

what to submit: your working out.

(b) In what follows, we refer to the version of the problem based on L(β_0, β) as the Primal version. Consider the re-parameterization β = Σ_{j=1}^n θ_j φ(x_j). Show that the loss can now be written as:

L(θ_0, θ) = (1/2) θ^T A θ − (λ/n) Σ_{i=1}^n [ y_i ln(σ(θ_0 + θ^T b_{x_i})) + (1 − y_i) ln(1 − σ(θ_0 + θ^T b_{x_i})) ],

where θ_0 ∈ R, θ = (θ_1, ..., θ_n)^T ∈ R^n, A ∈ R^{n×n} and, for i = 1, ..., n, b_{x_i} ∈ R^n. We refer to this version of the problem as the Dual version. Write down exact expressions for A and b_{x_i} in terms of k(x_i, x_j) := ⟨φ(x_i), φ(x_j)⟩ for i, j = 1, ..., n. Further, for the dual parameter η = [θ_0, θ^T]^T, show that the gradient descent update is given by:

η^{(k)} = η^{(k−1)} − α × [ −(λ/n) 1_n^T (y − σ(θ_0^{(k−1)} 1_n + A θ^{(k−1)}))
                            A θ^{(k−1)} − (λ/n) A (y − σ(θ_0^{(k−1)} 1_n + A θ^{(k−1)})) ].

If m ≫ n, what is the advantage of the dual representation relative to the primal one which just makes use of the feature maps φ directly? what to submit: your working along with some commentary.
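For concreteness, here is a minimal NumPy sketch of one primal update from part (a); the function name and the way γ is split into (β_0, β) are illustrative conventions, not part of the spec, and the dual update has the same shape with Φ replaced by A and β by θ:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def primal_gd_step(beta0, beta, Phi, y, alpha, lam):
        """One gradient descent step on gamma = [beta0, beta], as in part (a)."""
        n = Phi.shape[0]
        r = y - sigmoid(beta0 + Phi @ beta)          # y - sigma(beta_0 1_n + Phi beta)
        grad_beta0 = -(lam / n) * r.sum()            # -(lam/n) 1_n^T r
        grad_beta = beta - (lam / n) * (Phi.T @ r)   # beta - (lam/n) Phi^T r
        return beta0 - alpha * grad_beta0, beta - alpha * grad_beta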
(c) We will now compare the performance of (primal/dual) GD and the Newton algorithm on a real dataset using the updates derived in the previous parts. To do this, we will work with the songs.csv dataset. The data contains information about various songs, and also contains a class variable outlining the genre of the song. If you are interested, you can read more about the data here, though a deep understanding of each of the features will not be crucial for the purposes of this assessment. Load in the data and perform the following preprocessing:
(I) Remove the following features: "Artist Name", "Track Name", "key", "mode", "time signature", "instrumentalness"
(II) The current dataset has 10 classes, but logistic regression in the form we have described it here only works for binary classification. We will restrict the data to classes 5 (hiphop) and 9 (pop). After removing the other classes, re-code the variables so that the target variable is y = 1 for hiphop and y = 0 for pop.
(III) Remove any remaining rows that have missing values for any of the features. Your remaining dataset should have a total of 3886 rows.
(IV) Use the sklearn.model_selection.train_test_split function to split your data into X_train, X_test, y_train and y_test. Use a test size of 0.3 and a random state of 23 for reproducibility.
(V) Fit the sklearn.preprocessing.MinMaxScaler to the resulting training data, and then use this object to scale both your train and test datasets so that the range of the data is in (0, 0.1).
(VI) Print out the first and last row of X_train, X_test, y_train, y_test (but only the first 3 columns of X_train and X_test).
What to submit: the print out of the rows requested in (VI). A copy of your code in solutions.py
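A sketch of steps (I)–(VI), assuming pandas is available and that the class label column is named "Class"; all column names here mirror the spec's list but should be checked against the actual CSV header:

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import MinMaxScaler

    df = pd.read_csv("songs.csv")

    # (I) drop the listed features (column names assumed to match the header)
    df = df.drop(columns=["Artist Name", "Track Name", "key", "mode",
                          "time_signature", "instrumentalness"])

    # (II) keep classes 5 (hiphop) and 9 (pop), re-coding hiphop as y = 1
    df = df[df["Class"].isin([5, 9])]             # "Class" is an assumed name
    df["Class"] = (df["Class"] == 5).astype(int)

    # (III) drop rows with missing values (3886 rows should remain)
    df = df.dropna()

    # (IV) train/test split with the required seed and test size
    X = df.drop(columns=["Class"]).to_numpy()
    y = df["Class"].to_numpy()
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=23)

    # (V) fit the scaler on the training data only, then scale both sets
    scaler = MinMaxScaler(feature_range=(0, 0.1)).fit(X_train)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

    # (VI) print the requested rows (first 3 columns of X_train / X_test)
    print(X_train[0, :3], X_train[-1, :3], X_test[0, :3], X_test[-1, :3])
    print(y_train[0], y_train[-1], y_test[0], y_test[-1])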
(d) For the primal problem, we will use the feature map that generates all polynomial features up to and including order 3, that is:

φ(x) = [1, x_1, ..., x_p, x_1^3, ..., x_p^3, x_1x_2x_3, ..., x_{p−2}x_{p−1}x_p].

In Python, we can generate such features using sklearn.preprocessing.PolynomialFeatures. For example, consider the following code snippet:

    from sklearn.preprocessing import PolynomialFeatures
    import numpy as np  # needed below for np.arange

    poly = PolynomialFeatures(3)
    X = np.arange(6).reshape(3, 2)
    poly.fit_transform(X)

Transform the data appropriately, then run gradient descent with α = 0.4 on the training dataset for 50 epochs and λ = 0.5. In your implementation, initialize β_0^{(0)} = 0 and β^{(0)} = 0_p, where 0_p is the p-dimensional vector of zeroes. Report your final train and test losses, as well as a plot of the training loss at each iteration.¹ what to submit: one plot of the train losses. Report your train and test losses, and a screen shot of any code used in this section, as well as a copy of your code in solutions.py.

¹ If you need a sanity check here, the best thing to do is use sklearn to fit logistic regression models. This should give you an idea of what kind of loss your implementation should be achieving (if your implementation does as well or better, then you are on the right track).
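To make the 50-epoch loop concrete, here is a self-contained sketch of the part (a) update applied after the feature map; the random data is a placeholder for your scaled X_train/y_train, and in the actual assignment the loss values should come from the implementation provided in helper.py rather than the inline expression used here:

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    X_train = rng.uniform(0, 0.1, size=(20, 3))        # placeholder for your scaled data
    y_train = rng.integers(0, 2, size=20).astype(float)

    Phi = PolynomialFeatures(3).fit_transform(X_train)  # feature map phi
    alpha, lam, n = 0.4, 0.5, Phi.shape[0]
    beta0, beta = 0.0, np.zeros(Phi.shape[1])           # beta_0^(0) = 0, beta^(0) = 0

    train_losses = []
    for epoch in range(50):
        r = y_train - sigmoid(beta0 + Phi @ beta)       # y - sigma(beta_0 1_n + Phi beta)
        beta0, beta = (beta0 - alpha * (-(lam / n) * r.sum()),
                       beta - alpha * (beta - (lam / n) * (Phi.T @ r)))
        s = sigmoid(beta0 + Phi @ beta)
        train_losses.append(0.5 * beta @ beta
                            - (lam / n) * np.sum(y_train * np.log(s)
                                                 + (1 - y_train) * np.log(1 - s)))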
 
(e) For the primal problem, run the dampened Newton algorithm on the training dataset for 50 epochs and λ = 0.5. Use the same initialization for β_0, β as in the previous question. Report your final train and test losses, as well as plots of your train loss for both the GD and Newton algorithms over all iterations (use labels/legends to make your plot easy to read). In your implementation, you may use the fact that the Hessian for the primal problem is given by:

H(β_0, β) = [ (λ/n) 1_n^T D 1_n     (λ/n) 1_n^T D Φ
              (λ/n) Φ^T D 1_n       I_m + (λ/n) Φ^T D Φ ],

where D is the n × n diagonal matrix whose i-th diagonal element is σ(d_i)(1 − σ(d_i)), with d_i = β_0 + φ_i^T β. what to submit: one plot of the train losses. Report your train and test losses, and a screen shot of any code used in this section, as well as a copy of your code in solutions.py.
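A direct NumPy translation of this block matrix, following the same (β_0, β) conventions as the earlier sketches; forming D implicitly through elementwise products (rather than materializing the n × n diagonal matrix) is an implementation choice, not a requirement:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def primal_hessian(beta0, beta, Phi, lam):
        """Assemble H(beta0, beta) as written in part (e)."""
        n, m = Phi.shape
        w = sigmoid(beta0 + Phi @ beta)
        w = w * (1 - w)                                # diagonal entries of D
        H = np.empty((m + 1, m + 1))
        H[0, 0] = (lam / n) * w.sum()                  # (lam/n) 1_n^T D 1_n
        H[0, 1:] = (lam / n) * (w @ Phi)               # (lam/n) 1_n^T D Phi
        H[1:, 0] = H[0, 1:]                            # (lam/n) Phi^T D 1_n (symmetry)
        H[1:, 1:] = np.eye(m) + (lam / n) * (Phi.T * w) @ Phi  # I_m + (lam/n) Phi^T D Phi
        return H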
(f) For the feature map used in the previous two questions, what is the corresponding kernel k(x, y) that can be used to give the dual problem? what to submit: the chosen kernel.
(g) Implement gradient descent for the dual problem using the kernel found in the previous part. Use the same parameter values as before (although now θ_0^{(0)} = 0 and θ^{(0)} = 0_n). Report your final training loss, as well as a plot of your train loss for GD over all iterations. what to submit: a plot of the train losses and report your final train loss, and a screen shot of any code used in this section, as well as a copy of your code in solutions.py.
(h) Explain how to compute the test loss for the GD solution to the dual problem in the previous part. Implement this approach and report the test loss. what to submit: some commentary and a screen shot of your code, and a copy of your code in solutions.py.
(i) In general, it turns out that Newton's method is much better than GD; in fact, convergence of the Newton algorithm is quadratic, whereas convergence of GD is linear (much slower than quadratic). Given this, why do you think gradient descent and its variants (e.g. SGD) are much more popular for solving machine learning problems? what to submit: some commentary