COMP52715 Deep Learning for Computer Vision & Robotics (Epiphany Term, 202**4)
Summative Coursework - 3D PacMan
Coursework Credit - 15 Credits
Estimated Hours of Work - 48 Hours
Submission Method - via Ultra
Release On: February 16 2024 (2pm UK Time)
Due On: March 15 2024 (2pm UK Time)
– All rights reserved. Do NOT Distribute. –
  Compiled on November 16, 2023 by Dr. Jingjing Deng

1 Coursework Specification
1. This coursework constitutes **% of your final mark for this module. There are two mandatory tasks: Python programming and report writing. You must upload your work to Ultra before the deadline specified on the cover page.
2. The other 10% will be assessed separately, based on seminar participation. There are 3 seminar sessions in total, and marks are awarded as follows: (A) participating in none = 0%; (B) participating in 1 session = 2%; (C) participating in 2 sessions = 5%; (D) participating in all sessions = 10%.
3. This coursework is to be completed by students working individually. You should NOT ask your peers, lecturer, or lab tutors for help with the coursework. You will be assessed on your code and report submissions. You must comply with the University rules regarding plagiarism and collusion. Using external code without proper referencing is also considered a breach of academic integrity.
4. Code Submission: The code must be written in Jupyter Notebook with appropriate comments. For constructing deep neural network models, use the PyTorch1 library only. Zip the Jupyter Notebook source files (*.ipynb), your dataset (if there is any new data), pretrained models (*.pth), and a README.txt (code instructions) into one single archive. Do NOT include the original “PacMan Helper.py”, “PacMan Helper Demo.ipynb”, “PacMan Skeleton.ipynb”, “TrainingImages.zip”, “cloudPositions.npy” and “cloudColors.npy” files. Submit a single Zip file to the GradeScope - Code entry on Ultra.
5. Report Submission: The report must NOT exceed 5 pages (including figures, tables, references and supplementary materials) in a single-column format. The minimum font size is 11pt (use Arial, Calibri, or Times New Roman only). Submit a single PDF file to the GradeScope - Report entry on Ultra.
6. Academic Misconduct is a major offence which will be dealt with in accordance with the University’s General Regulation IV – Discipline. Please ensure you have read and understood the University’s regulations on plagiarism and other assessment irregularities, as noted in the Learning and Teaching Handbook: 6.2.4: Academic Misconduct2.
Figure 1: The mysterious PhD Lab.
 1 https://pytorch.org/
2 https://durhamuniversity.sharepoint.com/teams/LTH/SitePages/6.2.4.aspx

2 Task Description (**% in total)
2.1 Task 1 - Python Programming (40% subtotal)
In this coursework, you are given a set of 3D point-clouds with appearance features (i.e. RGB values). These point-clouds were collected using a Kinect system in a mysterious PhD Lab (see Figure 1). Several virtual objects are also positioned among those point-clouds. Your task is to write a Python program that can automatically detect those objects from an image and use them as anchors to collect the objects and navigate through the 3D scene. If you land close enough to an object, it will be automatically captured and removed from the scene. A set of example images that contain those virtual objects is provided. These example images are used to train a classifier (basic solution) and an object detector (advanced solution) using deep learning approaches in order to locate the targets. You are required to attempt both the basic and advanced solutions. “PacMan Helper.py” provides some basic functions to help you complete the task. “PacMan Helper Demo.ipynb” demonstrates how to use these functions to obtain a 2D image by projecting 3D point-clouds onto the camera image-plane, how to re-position and rotate the camera, etc. All the code and data are available on Ultra. You are encouraged to read the given source code, particularly “PacMan Skeleton.ipynb”.
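As background to the helper's image-plane projection, the standard pinhole model can be sketched as below. This is a minimal illustration, not the actual implementation in “PacMan Helper.py”; the intrinsics matrix, image size, and function name here are all assumed for demonstration.

```python
import numpy as np

def project_points(points, colors, K, R, t, h, w):
    """Project N x 3 world points with RGB colors onto an h-by-w image plane.

    K: 3x3 camera intrinsics (assumed); R, t: world-to-camera rotation and
    translation. Returns the rendered image (points behind the camera or
    outside the frame are dropped).
    """
    cam = points @ R.T + t          # world -> camera coordinates
    in_front = cam[:, 2] > 0        # keep only points in front of the camera
    cam = cam[in_front]
    pix = cam @ K.T                 # apply intrinsics
    pix = pix[:, :2] / pix[:, 2:3]  # perspective divide -> pixel coordinates
    u = pix[:, 0].astype(int)
    v = pix[:, 1].astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    img = np.zeros((h, w, 3), dtype=np.uint8)
    img[v[ok], u[ok]] = colors[in_front][ok]
    return img
```

In practice the helper functions should be used directly; this sketch only shows what "projecting 3D point-clouds onto the camera image-plane" computes.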
Detection Solution using a Basic Binary Classifier (10%). Implement a deep neural network model that can classify an image patch into two categories: target object and background. You can use the given images to train your neural network. It can then be used in a sliding-window fashion to detect the target object in a given image.
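The sliding-window mechanics described above can be sketched independently of the network: generate window coordinates over the image, score each patch, and keep the confident ones. The scorer below is a placeholder standing in for the trained classifier's target probability; window size, stride, and threshold values are illustrative assumptions.

```python
import numpy as np

def sliding_window_detect(img, score_fn, win=32, stride=16, thresh=0.5):
    """Slide a win x win window over img and score each patch.

    score_fn is a stand-in for the trained classifier's P(target) output.
    Returns a list of (x, y, score) for windows scoring at or above thresh.
    """
    h, w = img.shape[:2]
    hits = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = img[y:y + win, x:x + win]
            s = score_fn(patch)
            if s >= thresh:
                hits.append((x, y, s))
    return hits
```

In the coursework itself, `score_fn` would wrap the trained PyTorch model applied to a normalized patch tensor, and overlapping detections can be merged with non-maximum suppression.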
Detection Solution using an Advanced Object Detector (10%). Implement a deep neural network model that can detect the target object in an image. You may manually or automatically create your own dataset for training the detector. The detector will predict bounding boxes that contain the object in a given image.
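When comparing predicted bounding boxes against ground truth (e.g. on a dataset you create), the usual overlap measure is intersection-over-union. A minimal sketch follows; the `(x1, y1, x2, y2)` corner format is an assumption, not something fixed by the coursework.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])   # intersection bottom-right
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a chosen threshold (0.5 is a common choice).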
Navigation and Collection Task Completion (10%). There are 11 target objects in the scene. Use the trained models to perform scene navigation and object collection. If you land close enough to an object, it will be automatically captured and removed from the scene. You may compare the performance of both models.
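The "land close enough and it is captured" rule amounts to a distance-threshold check against the remaining targets. A sketch of that bookkeeping, assuming a capture radius of 1.0 (illustrative only; the helper code defines the actual rule):

```python
import numpy as np

def collect_nearby(agent_pos, targets, radius=1.0):
    """Split targets into (captured, remaining) around the agent position.

    Targets within `radius` of the agent are captured and removed from the
    scene, mirroring the game rule; the rest remain to be collected.
    """
    targets = np.asarray(targets, dtype=float)
    d = np.linalg.norm(targets - np.asarray(agent_pos, dtype=float), axis=1)
    return targets[d <= radius], targets[d > radius]
```

Calling this after every camera re-positioning step lets the navigation loop track how many of the 11 objects are still uncollected.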
Visualisation, Coding Style, and Readability (10%). Visualise the data and your experimental results wherever appropriate. The code should be well structured, with sufficient comments for the essential parts to make the implementation of your experiments easy to read and understand. Check the “Google Python Style Guide”3 for guidance.
2.2 Task 2 - Report Writing (50% subtotal)
You will also write a report (maximum five pages) on your work, which you will submit to Ultra alongside your code. The report must contain the following structure:
Introduction and Method (10%). Introduce the task and contextualise the given problem. Make sure to include a few references to previously published work in the field, where you should demonstrate an awareness of the relevant research. Describe the model(s) and approaches you used to undertake the task. Any decisions on hyper-parameters must be stated here, including the motivation for your choices where applicable. If the basis of your decision is experimentation with a number of parameters, then state this.
Results and Discussion (10%). Describe, compare and contrast the results you obtained with your model(s). Any relationships in the data should be outlined and pointed out here. Only the most important conclusions should be mentioned in the text; by using tables and figures to support the section, you can avoid describing the results fully. Describe the outcome of the experiments and the conclusions that you can draw from these results.
Robot Design (20%). Consider designing an autonomous robot to undertake the given task in a real scene. Discuss the foreseen challenges and propose your design, including the robot's mechanical configuration, the hardware and algorithms for robot sensing and control, system efficiency, etc. Provide appropriate justifications for your design choices with evidence from the existing literature. You may use simulators such as “CoppeliaSim Edu” or “Gazebo” for visualising your design.
3 https://google.github.io/styleguide/pyguide.html
 
Format, Writing Style, and Presentation (10%). Language usage and report format should be of a professional standard and meet academic writing criteria, with the content appropriately divided as per the structure described above. Tables, figures, and references should be included and cited where appropriate. A guide to citation style can be found in the library guide4.
3 Learning Outcome
The following materials from lectures and lab practicals are closely relevant to this task:
1. Basic Deep Neural Networks - Image Classification.
2. Generic Visual Perception - Object Detection.
3. Deep Learning for Robotics Sensing and Controlling - Consideration for Robotic System Design.
The following key learning outcomes are assessed:
1. A critical understanding of the contemporary deep machine learning topics presented, and how these are applicable to relevant industrial problems and have future potential for emerging needs in both a research and industrial setting.
2. An advanced knowledge of the principles and practice of analysing relevant robotics and computer vision deep machine learning based algorithms for problem suitability.
3. Written communication, problem solving and analysis, computational thinking, and advanced programming skills.
The rubric and feedback sheet are attached at the end of this document.
4 https://libguides.durham.ac.uk/research_skills/managing_info/plagiarism
