Coursework 2 – Tic-Tac-Toe: Markov Decision Processes & Reinforcement Learning (worth 25% of your final mark)
Deadline: Thursday, 28th November 2024
How to Submit: To be submitted to GitLab (via git commit & push) – Commits are
timestamped: all commits after the deadline will be considered late.
Introduction
Coursework 2 is an individual assignment, where you will each implement Value Iteration, Policy Iteration, and Q-Learning agents that plan/learn to play the 3x3 Tic-Tac-Toe game. You will test your agents against the rule-based agents that are provided, and you can also play against all the agents, including your own, to test them.
The Starter Code for this project is commented extensively to guide you, and includes Javadoc under the src/main/javadoc/ folder in the main project folder – you should read these carefully to learn how to use the classes. The project comprises the files listed below.
You should get the Starter Code from GitLab: Follow the step-by-step instructions in the document I have put together for you:
Open Canvas->F29AI -> Modules -> GitLab (and Git) Learning Materials (Videos and
Crib Sheets) -> Introduction to Eclipse, Git & GitLab.
If you are unfamiliar with git and/or GitLab I strongly suggest watching Rob
Stewart’s instructive videos on Canvas under the same module
Files you will edit & submit
ValueIterationAgent.java – A Value Iteration agent for solving the Tic-Tac-Toe game with an assumed MDP model.
PolicyIterationAgent.java – A Policy Iteration agent for solving the Tic-Tac-Toe game with an assumed MDP model.
QLearningAgent.java – A Q-learning Reinforcement Learning agent for the Tic-Tac-Toe game.
Files you should read & use but shouldn't need to edit
Game.java – The 3x3 Tic-Tac-Toe game implementation.
TTTMDP.java – Defines the Tic-Tac-Toe MDP model.
TTTEnvironment.java – Defines the Tic-Tac-Toe Reinforcement Learning environment.
Agent.java – An abstract class defining a general agent, which other agents subclass.
HumanAgent.java – Defines a human agent that uses the command line to ask the user for the next move.
RandomAgent.java – A Tic-Tac-Toe agent that plays randomly according to a RandomPolicy.
Move.java – Defines a Tic-Tac-Toe game move.
Outcome.java – A transition outcome tuple (s, a, r, s').
Policy.java – An abstract class defining a policy – you should subclass this to define your own policies.
TransitionProb.java – A tuple containing an Outcome object and the probability of that Outcome occurring.
RandomPolicy.java – A subclass of Policy – a random policy used by RandomAgent instances.
What to submit: You will fill in portions of ValueIterationAgent.java,
PolicyIterationAgent.java and QLearningAgent.java during the assignment.
Commit & push your changes to your fork of the repository. Do this frequently so
nothing is lost. There will soon be automatic unit tests for this project, which means that you'll be able to see whether your code passes the tests, both locally and on GitLab. I will send an announcement once I've uploaded the tests.
PLEASE DO NOT UPLOAD YOUR SOLUTIONS TO A PUBLIC REPOSITORY. We have
spent a great deal of time writing the code & designing the coursework and want to be
able to reuse this coursework in the coming years.
Evaluation: Your code will be tested on GitLab for correctness using Maven & the Java unit testing framework. Please do not change the names of any provided functions or classes in the code, or you will break the tests.
Mistakes in the code: If you are sure you have found a mistake in the current code, let me or the lab helpers know and we will fix it.
Plagiarism: While you are welcome to discuss the problem together in the labs, we will
be checking your code against other submissions in the class for logical redundancy. If
you copy someone else's code and submit it with minor changes, we will know. These
cheat detectors are quite hard to fool, so please don't try. We trust you all to submit
your own work only; please don't let us down. If you do, we will pursue the strongest
consequences with the school that are available to us.
Getting Help: You are not alone! If you find yourself stuck on something, ask in the
labs. You can ask for help on GitLab too – but it means you will need to commit & push
your code first: don’t worry, you won’t be judged until the deadline. It’s good practice to
commit & push your code frequently to the repository, even if it doesn’t work.
We want this coursework to be intellectually rewarding and fun.
MDPs & Reinforcement Learning
To get started, run Game.java without any parameters and you'll be able to play against the RandomAgent using the command line. From within the top-level, main project folder:
java -cp target/classes ticTacToe.Game
You should be able to win or draw easily against this agent. Not a very good agent!
You can control many aspects of the Game, but mainly which agents will play each other. A full list of options is available by running:
java -cp target/classes ticTacToe.Game -h
Use the -x & -o options to specify the agents that you want to play the game. Your own agents, namely the Value Iteration, Policy Iteration, and Q-Learning agents, are denoted as vi, pi & ql respectively, and can only play X in the game. This sidesteps the problem of dealing with isomorphic state spaces (mapping x's to o's and o's to x's in this case). For example, if you want two RandomAgents to play out the game, you do it like this:
java -cp target/classes ticTacToe.Game -x random -o random
Look at the console output that accompanies playing the game. You will be told about the rewards that the 'X' agent receives. The 'O' agent is always assumed to be part of the environment.
Question 1 (6 points): Write a value iteration agent in ValueIterationAgent.java, which has been partially specified for you. Here you need to implement the iterate() & extractPolicy() methods. The former should perform value iteration for a number of steps (k steps – this is one of the fields of the class) and the latter should extract the policy from the computed values.
Your value iteration agent is an offline planner, not a reinforcement learning agent, and so the relevant training option is the number of iterations of value iteration it should run in its initial planning phase – you can change this in ValueIterationAgent.java.
ValueIterationAgent constructs a TTTMDP object when it is instantiated – you do not need to change this class, but you should use it in your value iteration implementation to generate the set of next game states (the sPrimes), with their associated probabilities & rewards, when executing a move from a particular game state (a Game object). You can do this using the provided generateTransitions method in the TTTMDP class, which effectively gives you a probability distribution over Outcomes.
Value iteration computes k-step estimates of the optimal values, Vk. You will see that the Value Function, Vk, is stored as a Java HashMap from Game objects (states) to a double value. The corresponding hashCode function for Game objects has been implemented, so you can safely use whole Game objects as keys in the HashMap.
Note: You may assume that 50 iterations is enough for convergence in this question.
Note: Unlike the MDPs seen in the lectures, in the CW2 implementation your agent receives a reward when entering a state – the reward simply depends on the target state, rather than on the (source state, action, target state) triple. This also means that there is no imagined terminal state outside the game as in the lectures. Don't worry – all the methods you have learned are compatible with this setting.
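Concretely, under this reward model the value-iteration backup you will implement takes the form below (a restatement of the standard update from the lectures, adapted so that the reward attaches to the successor state; γ is the discount factor):

V_{k+1}(s) = max_a Σ_{s'} T(s, a, s') · [ R(s') + γ · V_k(s') ]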
Note: The O agent is modelled as part of the environment, so that once your agent (X) takes an action, any next observed state will already include O's move. Your agent need NOT consider the intermediate game state in which it has played but the opponent has not yet responded.
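To make the update concrete, here is a minimal sketch of how iterate() might be structured. This is not the official solution: accessor names such as tp.prob, tp.outcome.localReward, tp.outcome.sPrime and g.getPossibleMoves() are assumptions made for illustration – check the Javadoc for the actual names in the Starter Code.

// Hedged sketch of iterate(): k synchronous sweeps of value iteration.
// All accessor names below (tp.prob, tp.outcome.localReward,
// tp.outcome.sPrime, g.getPossibleMoves()) are assumptions -- verify
// against the Starter Code's Javadoc before using.
for (int i = 0; i < k; i++) {
    Map<Game, Double> newValues = new HashMap<Game, Double>();
    for (Game g : valueFunction.keySet()) {
        if (g.isTerminal()) {            // terminal states always have value 0
            newValues.put(g, 0.0);
            continue;
        }
        double best = Double.NEGATIVE_INFINITY;
        for (Move m : g.getPossibleMoves()) {
            double q = 0.0;              // expected value of playing m in g
            for (TransitionProb tp : mdp.generateTransitions(g, m)) {
                q += tp.prob * (tp.outcome.localReward
                        + discount * valueFunction.get(tp.outcome.sPrime));
            }
            best = Math.max(best, q);
        }
        newValues.put(g, best);
    }
    valueFunction.putAll(newValues);     // commit this sweep's values
}

extractPolicy() then performs one more pass of the same one-step lookahead, but records the argmax Move for each non-terminal state instead of the max value.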
The following command loads your ValueIterationAgent, which will compute a policy and execute it 10 times against the other agent that you specify, e.g. random or aggressive. The -s option specifies which agent goes first (X or O). By default, the X agent goes first:
java -cp target/classes ticTacToe.Game -x vi -o random -s x
Question 2 (1 point): Test your Value Iteration Agent against each of the provided agents 50 times and report on the results – how many games it won, lost & drew against each of the other rule-based agents. The rule-based agents are: random, aggressive, defensive.
This should take the form of a very short .pdf report named: vi-agent-report.pdf.
Commit this together with your code, and push to your fork.
Question 3 (6 points): Write a Policy Iteration agent in PolicyIterationAgent.java by implementing the initRandomPolicy(), evaluatePolicy(), improvePolicy() & train() methods. The evaluatePolicy() method should evaluate the current policy (see your lecture notes), specified in the curPolicy field (which your initRandomPolicy() initialized). The values for the current policy should be stored in the provided policyValues map. The improvePolicy() method performs the policy improvement step and updates curPolicy.
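As a rough illustration only (the same caveat applies – names like curPolicy.getMove(g) and the Outcome accessors are assumptions, not the Starter Code's confirmed API), evaluatePolicy() can be written as iterative policy evaluation that sweeps until the values stop changing:

// Hedged sketch of evaluatePolicy(): sweep until no value changes by more
// than a small tolerance delta. curPolicy.getMove(g) is an assumed accessor.
double maxChange;
do {
    maxChange = 0.0;
    for (Game g : policyValues.keySet()) {
        if (g.isTerminal()) {            // terminal states keep value 0
            policyValues.put(g, 0.0);
            continue;
        }
        Move m = curPolicy.getMove(g);   // the action is fixed by the policy
        double v = 0.0;
        for (TransitionProb tp : mdp.generateTransitions(g, m)) {
            v += tp.prob * (tp.outcome.localReward
                    + discount * policyValues.get(tp.outcome.sPrime));
        }
        maxChange = Math.max(maxChange, Math.abs(v - policyValues.get(g)));
        policyValues.put(g, v);
    }
} while (maxChange > delta);

improvePolicy() then does a greedy one-step lookahead using these values and updates curPolicy wherever a different move scores strictly better; train() alternates evaluation and improvement until curPolicy no longer changes.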
Question 4 (1 point): As in Question 2, this time test your Policy Iteration Agent against each of the provided agents 50 times and report on the results – how many games it won, lost & drew. The other agents are: random, aggressive, defensive.
This should take the form of a very short .pdf report named: pi-agent-report.pdf.
Commit this together with your code, and push to your fork.
Questions 5 & 6 are on Reinforcement Learning:
Question 5 (5 points): Write a Q-Learning agent in QLearningAgent.java by implementing the train() & extractPolicy() methods. Your agent should follow an e-greedy policy during training (and only during training – during testing it should follow the extracted policy). Your agent will need to train for many episodes before the q-values converge. Although default values have been set in the code, you are strongly encouraged to play around with the hyperparameters of q-learning: the learning rate (alpha), the number of episodes to train for, and the epsilon in the e-greedy policy followed during training.
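For orientation only, one training episode could be shaped like the sketch below. Every name here that is not in the brief – env.getCurrentGameState(), env.executeMove(m), the qTable accessors, and the helpers greedyMove()/bestQ() – is hypothetical, not the Starter Code's confirmed API:

// Hedged sketch of one e-greedy episode inside train(). All environment,
// Q-table and helper names below are hypothetical -- check the Javadoc.
Game s = env.getCurrentGameState();
while (!s.isTerminal()) {
    List<Move> moves = s.getPossibleMoves();
    Move m = (random.nextDouble() < epsilon)
            ? moves.get(random.nextInt(moves.size()))  // explore: random move
            : greedyMove(s);                           // exploit: argmax_a Q(s,a)
    Outcome o = env.executeMove(m);                    // yields (s, a, r, s')
    // Q-learning target: r + gamma * max_a' Q(s',a'), taken as 0 if s' is terminal
    double target = o.localReward
            + discount * (o.sPrime.isTerminal() ? 0.0 : bestQ(o.sPrime));
    double oldQ = qTable.getQValue(o.s, o.move);
    qTable.addQValue(o.s, o.move, oldQ + alpha * (target - oldQ));
    s = o.sPrime;
}

extractPolicy() then simply returns, for each state, the move with the highest learned q-value.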
Question 6 (1 point): As in the previous questions, test your Q-Learning Agent against each of the provided agents 50 times and report on the results – how many games it won, lost & drew. The other agents are: random, aggressive, defensive.
This should take the form of a very short .pdf report named: ql-agent-report.pdf.
Commit this together with your code, and push to your fork.
Javadoc: There are extensive comments in the code, both as Javadoc (under the doc/ folder in the project folder) and inline. You should read these carefully to understand what is going on and which methods to call/use. They may also contain hints pointing you in the right direction.
Value of Terminal States: You need to be careful about the values of terminal states – states where X has won, states where O has won, and states where the game is a draw. The value of these game states – V(g) – should under all circumstances and in all iterations be set to 0. Here's why: to find the optimal value of a state, you loop over all possible actions from that state. For terminal states this set of actions is empty, and, depending on how you implement finding the maximum, this might leave the value of the terminal state stuck at your initial sentinel (e.g. Double.NEGATIVE_INFINITY – note that Java's Double.MIN_VALUE is actually the smallest positive double, not a large negative number). To avoid this, for every game state g whose optimal value you are calculating, CHECK IF IT IS A TERMINAL STATE (using g.isTerminal()); if it is, set its value to 0 and move on to the next game state (you can use the 'continue;' statement inside your loop). Note that your agent will already have received its reward when transitioning INTO that state, not out of it.
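In code, this check is just a guard at the top of your per-state loop, along the lines of:

// Guard at the top of the per-state loop: terminal states keep value 0.
if (g.isTerminal()) {
    valueFunction.put(g, 0.0);
    continue;   // no actions to maximise over, so skip to the next state
}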
Testing your agent: If everything is working well and you have the right parameters (e.g. the reward function), your agents should never lose.
You can play around with the reward values in the TTTMDP class – especially try increasing or decreasing the negative losing reward. Increasing this negative reward (making it more negative) will encourage your agent to prefer defensive moves over attacking moves. This will change its behaviour (for both Policy & Value Iteration) and should encourage your agent to never lose the game. Machine Learning isn't like mathematics, with complete certainty – you almost always have to experiment to get your model's parameters right!

    国产精品丝袜高跟| 久久婷婷人人澡人人喊人人爽| 国产在线精品一区| 日韩专区第三页| 久久亚洲精品一区| 国产精品久久久久久久久久久久 | 国产伦精品一区二区三区四区免费| 精品乱子伦一区二区三区| 国产精品一 二 三| 国产自偷自偷免费一区| 亚洲蜜桃av| 宅男av一区二区三区| 久久久人人爽| 欧美精品一区二区三区三州| 日本一区二区在线视频观看| 亚洲 高清 成人 动漫| 国产精品视频自拍| 国产精品丝袜久久久久久高清| 超碰国产精品久久国产精品99| 亚洲在线播放电影| 一女被多男玩喷潮视频| 日韩在线观看免费高清| 国产成人涩涩涩视频在线观看| 国产伦理久久久| 国产精选久久久久久| 成人免费无码av| 91精品久久久久久久久久久| 国内精品久久久久| 国产欧美韩日| 91免费在线视频| 久久99精品国产一区二区三区 | 欧美高清视频一区| 国产深夜精品福利| 91精品国产高清久久久久久91| 国产三级精品网站| 欧美一级二级三级九九九| 欧美日韩一区二区视频在线观看| 亚洲美女网站18| 青青在线免费观看| 草b视频在线观看| 色偷偷偷亚洲综合网另类 | 中文字幕日韩一区二区三区不卡| 久草视频国产在线| 国产精品三级久久久久久电影| 国产成人中文字幕| 久久精品国产亚洲精品| 久久久最新网址| 日韩视频欧美视频| 久久成人精品视频| 日韩成人av电影在线| 国产视频观看一区| 色婷婷av一区二区三区在线观看 | 一本久道综合色婷婷五月| 欧美激情综合亚洲一二区| 少妇精品久久久久久久久久| 亚洲一区二区精品在线| 精品国产aⅴ麻豆| 欧美一级片一区| 日韩aⅴ视频一区二区三区| 亚洲日本一区二区三区在线不卡| 麻豆成人在线看| 亚州成人av在线| 国产美女精品免费电影| 深夜福利一区二区| 色噜噜狠狠狠综合曰曰曰88av| 7777免费精品视频| 国产精品入口夜色视频大尺度| 日韩中文字幕网| 九九久久久久久久久激情| 青青视频在线播放| 久久噜噜噜精品国产亚洲综合| 久久久欧美一区二区| 国产精品免费观看高清| 日本不卡视频在线播放| 国产精欧美一区二区三区| 一区二区三区电影| 色中色综合成人| 日本久久亚洲电影| 国产精品一区二| 欧美xxxx做受欧美.88| 欧美日韩免费精品| 国产ts一区二区| 亚洲精品视频一二三| 国产日产欧美一区二区| 国产精品人成电影| 日韩精品一区二区三区电影| 久久综合婷婷综合| 亚洲精品蜜桃久久久久久| 成人av蜜桃| 中文字幕久久一区| 成人免费视频97| 一本色道久久综合亚洲二区三区 | 131美女爱做视频| 久久久久成人网| 日韩少妇内射免费播放| 91国视频在线| 亚洲二区三区四区| 91精品免费久久久久久久久| 亚洲一区二区自拍| 欧美精品亚洲| 久久精品视频在线观看| 欧美极品色图| 国产精品久久久久久久久久ktv | 波霸ol色综合久久| 日韩精品在线观看av| 久久精品午夜一区二区福利| 亚洲7777| 国产肥臀一区二区福利视频| 日韩视频在线视频| 色偷偷av亚洲男人的天堂| 欧美精品一区二区三区四区五区| 国产欧美精品一区二区三区| 久久久亚洲综合网站| 亚洲精品日韩成人| 777精品久无码人妻蜜桃| 国产精品嫩草影院一区二区| 亚洲字幕一区二区| 91九色在线免费视频| 日韩有码免费视频| 日韩一区二区av| 国语自产精品视频在免费| 久久亚洲精品一区二区| 成人久久一区二区| 熟女少妇精品一区二区| 国产一二三四区在线观看| 免费97视频在线精品国自产拍| 亚洲va久久久噜噜噜久久狠狠| 欧美激情亚洲天堂| 国产精品国产自产拍高清av水多| 日本人成精品视频在线| 久久久久久久少妇| 欧美在线激情网| 久久久伊人欧美| 热久久视久久精品18亚洲精品| 国产精品99免视看9| 日本国产一区二区三区| 成人精品一区二区三区| 熟女少妇精品一区二区| 国产精品视频免费一区二区三区| 午夜一区二区三视频在线观看| 欧美日韩一区二区在线免费观看| 国产综合 伊人色| 国产精品流白浆视频| 国产乱子伦精品视频| 日韩av日韩在线观看| 国产精品久久久久久久乖乖| 97久久精品人搡人人玩| 欧美激情专区| 亚洲日本无吗高清不卡| 97福利一区二区| 欧美国产一二三区| 亚洲欧洲日夜超级视频| 日韩在线视频网| 不卡视频一区二区三区| 欧美精品欧美精品系列c| 亚洲精品视频一区二区三区 | 91精品国产综合久久久久久久久 | 久久精品国产一区二区三区| 国产日韩亚洲精品| 韩国福利视频一区| 视频一区国产精品| 欧美激情xxxxx| αv一区二区三区| 亚洲欧美精品在线观看| 蜜桃传媒一区二区| 精品久久久久久一区二区里番| 免费在线观看毛片网站| 日批视频在线免费看| 一区二区视频在线播放| 国产精品免费视频xxxx| 久久久久久久9| 91国产视频在线播放| 国产青草视频在线观看| 欧美日韩一区在线观看视频| 日本网站免费在线观看| 亚洲va码欧洲m码| 亚洲一区影院| 在线视频亚洲自拍| 九九热精品在线| 91精品在线观看视频| 精品无人区一区二区三区竹菊| 久久国产精品电影| 久久久精品网站| 久久久久久久久久久免费| 免费拍拍拍网站| 欧美日韩福利电影| 国产精品久久久久久一区二区| 国产精品有限公司| 国产区日韩欧美| 国产日韩久久| 国产日产欧美视频| 国产男女激情视频| 国产精品一区=区| 超碰97在线播放| 国产精品99久久久久久白浆小说| 欧美精品一区二区视频| 欧洲精品在线视频| 欧美二区三区| 国产综合动作在线观看| 国产日产欧美精品|