MSE 5760: Spring 2025 HW 6 (due 05/04/25)
Topic: Autoencoders (AE) and Variational Autoencoders (VAE)
Background:
In this final homework, you will build a deep autoencoder, a convolutional 
autoencoder and a denoising autoencoder to reconstruct images of an isotropic composite 
with different volume fractions of fibers distributed in the matrix. Five different volume 
fractions of fibers are represented in the dataset, and these form five different class labels for 
the composites. After the initial practice with AEs and reconstruction of images using latent 
vectors, you will build a VAE to examine the same dataset. After training the VAE (as well 
as you can using the free Colab resources to reproduce images), you will use it to generate 
new images by randomly sampling datapoints from the learned probability distribution of 
the data in latent space. Finally, you will build a conditional VAE to not only generate new 
images but generate them for arbitrary volume fractions of fibers in the composite.
The entire dataset containing 10,000 images of composites with five classes of 
volume fractions of fibers was built by Zequn He (currently a Ph.D. student in MEAM in 
Prof. Celia Reina’s group who helped put together this course in Summer 2022 by designing 
all the labs and homework sets). Each image in the dataset shows three fibers of different 
volumes with circular cross sections. Periodic boundary conditions were used to generate 
the images. Hence, in some images, the three fiber particles may appear broken up into
more than three pieces. The total cross-sectional area of all the fibers in each image can, 
however, be divided equally among three fibers. Please do not use this dataset for other 
work or share it on data portals without prior permission from Zequn He
(hezequn@seas.upenn.edu).
Due to the large demands on memory and the intricacies of the AE-VAE 
architecture, the results obtained will not be of the same level of accuracy and quality that 
was possible in the previous homework sets. No train/test split is needed, as all 
10,000 images are used for training. You may, however, carry out further analysis 
using train/test split or by tuning the hyperparameters or changing the architecture for 
bonus points. The maximum bonus points awarded for this homework will be 5.
**********************************Please Note****************************
Sample codes for building the AE, VAE and a conditional GAN were provided in 
Lab 6. There is no separate notebook provided for the homework and students will 
have to prepare one. TensorFlow and Keras were used in Lab 6 and are recommended 
for this homework. You are welcome to use other libraries such as PyTorch.
************************************************************************
1. Model 1: Deep Autoencoder model (20 points)
Import the needed libraries. Load the original dataset from canvas. Check the 
dimensions of each loaded image for consistency. Scale the images.
1.1 Print the class labels and the number of images in each class. Print the shape of 
the input tensor representing images and the shape of the vector representing the 
class labels. (2 points)
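The loading and class-summary steps can be sketched as below. This is a minimal sketch, not the official solution: the file names "images.npy" and "labels.npy" are assumptions — use the actual file names of the dataset downloaded from Canvas.

```python
import numpy as np

# Hypothetical file names — replace with the actual dataset files from Canvas.
def load_composite_dataset(image_file="images.npy", label_file="labels.npy"):
    images = np.load(image_file).astype("float32") / 255.0  # scale pixels to [0, 1]
    labels = np.load(label_file)
    return images, labels

def class_counts(labels):
    """Map each of the five volume-fraction class labels to its image count."""
    classes, counts = np.unique(labels, return_counts=True)
    return dict(zip(classes.tolist(), counts.tolist()))
```

With the data loaded, printing `class_counts(labels)`, `images.shape` and `labels.shape` covers the requested summaries.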
1.2. A measure equivalent to the volume fraction of fibers in each composite image is 
the mean pixel value of the image. As the images are of low resolution, you may 
notice a slight discrepancy in the assigned class value of the image and the 
calculated mean pixel intensity. As the resolution of images increases, there will be 
negligible difference between the assigned class label and the pixel mean of the 
image. Henceforth, we shall use the pixel mean (PM) intensity of the images to be 
the class label. Print a representative sample of ten images showing the volume 
fraction of fibers in the composite along with the PM value of the image. (3 points)
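The pixel-mean (PM) label described above reduces to a one-line helper, assuming the images are already scaled to [0, 1]:

```python
import numpy as np

def pixel_mean(image):
    """Mean pixel intensity of a [0, 1]-scaled image — the class-label proxy (PM)."""
    return float(np.mean(image))
```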
1.3. Build the following deep AE using the latent dimension value = 64.
(a) Let the first layer of the encoder have 256 neurons.
(b) Let the second layer of the encoder have 128 neurons.
(c) Let the last layer of the encoder be the context or latent vector.
(d) Use ReLU for the activation function in all of the above layers.
(e) Build a deep decoder with its input being the context layer of the encoder.
(f) Build two more layers of the decoder with 128 and 256 neurons, respectively. 
These two layers can use the ReLU activation function.
(g) Build the final layer of the decoder such that its output is compatible with the 
reconstruction of the original input shape tensor. Use sigmoid activation for the 
final output layer of the decoder.
(h) Use ADAM as your optimizer and a standard learning rate. Let the loss be the 
mean square error loss. Compile the AE and train it for at least 50 epochs.
(10 points)
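The (a)-(h) specification above can be sketched in Keras as follows. This is a minimal sketch under stated assumptions, not the official solution: the 64x64 image size is an assumption — set IMG_SHAPE to the actual resolution of the dataset.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

IMG_SHAPE = (64, 64)   # assumption — use the real dataset resolution
LATENT_DIM = 64

def build_deep_ae(img_shape=IMG_SHAPE, latent_dim=LATENT_DIM):
    n_pixels = img_shape[0] * img_shape[1]

    # Encoder: 256 -> 128 -> latent, all ReLU, per items (a)-(d).
    encoder = keras.Sequential([
        layers.Input(shape=img_shape),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(128, activation="relu"),
        layers.Dense(latent_dim, activation="relu"),  # context / latent vector
    ], name="encoder")

    # Decoder mirrors the encoder (e)-(g); sigmoid restores the [0, 1] pixel range.
    decoder = keras.Sequential([
        layers.Input(shape=(latent_dim,)),
        layers.Dense(128, activation="relu"),
        layers.Dense(256, activation="relu"),
        layers.Dense(n_pixels, activation="sigmoid"),
        layers.Reshape(img_shape),
    ], name="decoder")

    ae = keras.Model(encoder.input, decoder(encoder.output), name="deep_ae")
    ae.compile(optimizer="adam", loss="mse")  # item (h)
    return encoder, decoder, ae
```

Training for item (h) is then `ae.fit(x, x, epochs=50, batch_size=128)`, with the scaled images serving as both input and target.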
1.4. Print the summary of the encoder and decoder blocks showing the output shape of 
each layer along with the number of parameters that need to be trained. Monitor 
and print the loss for each epoch. Plot the loss as a function of the epochs. (2 points)
1.5. Plot the first ten reconstructed images showing both the original and reconstructed 
images. (3 points)
2. Model 2: Convolutional Autoencoder model (20 points)
2.1 Build the following convolutional AE with the latent dimension = 64
(a) In the first convolution block of the encoder, use 8 filters with 3x3 kernels, 
ReLU activation and zero padding. Apply max pooling layer with a kernel of 
size 2.
(b) In the second convolution block use 16 filters with 3x3 kernels, ReLU activation 
and zero padding. Apply max pooling layer with a kernel of size 2.
(c) In the third layer of the encoder use 32 filters with 3x3 kernels, ReLU activation 
and zero padding. Apply max pooling layer with a kernel of size 2.
(d) Flatten the obtained feature map and then use a Dense layer with ReLU 
activation function to extract the latent variables.
(e) Build the decoder in the reverse order of the encoder filters, with the latent 
output layer of the encoder serving as the input to the decoder part.
(f) Use ADAM as your optimizer and a standard learning rate. Let the loss be the 
mean square error loss. Compile the convolutional AE and train it for at least 
50 epochs.
(10 points)
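The convolutional AE above can be sketched as follows, assuming 64x64 single-channel images (an assumption — adjust to the dataset). Conv2DTranspose is used here as one common way to reverse the pooling; UpSampling2D followed by Conv2D is an equally valid alternative.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_conv_ae(img_shape=(64, 64, 1), latent_dim=64):
    # Encoder: three conv blocks with 8, 16, 32 filters, 3x3 kernels, ReLU,
    # zero padding, each followed by 2x2 max pooling; then flatten + Dense.
    encoder = keras.Sequential([
        layers.Input(shape=img_shape),
        layers.Conv2D(8, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(latent_dim, activation="relu"),
    ], name="conv_encoder")

    # Decoder in reverse filter order, upsampling back to the input shape.
    h, w = img_shape[0] // 8, img_shape[1] // 8
    decoder = keras.Sequential([
        layers.Input(shape=(latent_dim,)),
        layers.Dense(h * w * 32, activation="relu"),
        layers.Reshape((h, w, 32)),
        layers.Conv2DTranspose(16, 3, strides=2, activation="relu", padding="same"),
        layers.Conv2DTranspose(8, 3, strides=2, activation="relu", padding="same"),
        layers.Conv2DTranspose(1, 3, strides=2, activation="sigmoid", padding="same"),
    ], name="conv_decoder")

    ae = keras.Model(encoder.input, decoder(encoder.output), name="conv_ae")
    ae.compile(optimizer="adam", loss="mse")
    return encoder, decoder, ae
```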
2.2 Print the summary of the encoder and decoder blocks showing the output shape of 
each layer along with the number of parameters that need to be trained. Monitor 
and print the loss for each epoch. Plot the loss as a function of the epochs. (5 points)
2.3 Plot the first ten reconstructed images showing both the original and reconstructed 
images. (5 points)
3. Model 3: Denoising convolutional Autoencoder model (15 points)
3.1 Add Gaussian noise to each image. Choose a Gaussian with a mean of zero and a 
small standard deviation, typically ~ 0.2. Plot a sample of five original images with 
noise. (3 points)
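Corrupting the images per 3.1 can be done with a small helper; clipping keeps the noisy pixels in the valid [0, 1] range:

```python
import numpy as np

def add_gaussian_noise(images, std=0.2, seed=0):
    """Add zero-mean Gaussian noise (sigma ~ 0.2) and clip back to [0, 1]."""
    rng = np.random.default_rng(seed)
    noisy = images + rng.normal(0.0, std, size=images.shape)
    return np.clip(noisy, 0.0, 1.0)
```

For 3.2, the noisy array is the encoder input while the clean images remain the reconstruction target, e.g. `ae.fit(noisy, images, ...)`.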
3.2 Use the same convolutional autoencoder as in Problem 2 but with noisy images fed 
to the encoder. Train and display all the information as in 2.2 and 2.3.
(12 points)
4. Model 4: Variational Autoencoder model (25 points)
4.1 Set the latent dimension of the VAE to be 64. Build a convolutional autoencoder with 
the following architecture. Set the first block to have 32 filters, 3x3 kernels with 
stride = 2 and zero padding.
4.2 Build the second block with 64 filters, 3x3 kernels, stride = 2 and zero padding. Use 
ReLU in both blocks. Apply max pooling layer with kernel of size 2x2.
4.3 Build an appropriate output layer of the encoder that captures the latent space 
probability distribution.
4.4 Define the reparametrized mean and variance of this distribution.
4.5 Build the convolutional decoder in reverse order. Apply the same kernels, stride 
and padding as in the encoder above. Choose the output layer of the decoder and 
apply the appropriate activation function.
4.6 Compile and train the model. Monitor the reconstruction loss, Kullback-Leibler 
loss and the total loss. Plot all three quantities for 500 epochs. (10 points)
4.7 Plot the first ten reconstructed images along with their originals. (5 points)
4.8 Generate ten random latent variables from a standard Gaussian with mean zero and 
unit variance. Display the generated images from these random values of the latent 
vector. Comment on the quality of your results and how it may differ from the input 
images. Mention at least one improvement that can be implemented which may 
improve the results. (3+3+4=10 points)
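The reparameterization step in 4.4 and the KL term monitored in 4.6 follow the standard VAE formulation and can be sketched as below; the layer and function names are illustrative.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class Sampling(layers.Layer):
    """Reparameterization trick (4.4): z = mu + sigma * eps, eps ~ N(0, I).

    Writing the draw this way keeps sampling differentiable with respect
    to the learned mean and log-variance.
    """
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps

def kl_loss(z_mean, z_log_var):
    """Closed-form KL divergence between N(mu, sigma^2) and the standard normal."""
    return -0.5 * tf.reduce_mean(
        tf.reduce_sum(1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1)
    )
```

For 4.8, new images come from decoding draws of the standard Gaussian prior, e.g. `decoder(tf.random.normal((10, 64)))`.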
5. Model 5: Conditional Variational Autoencoder model (20 points)
A conditional VAE differs from a VAE by allowing for an extra input 
variable to both the encoder and the decoder as shown below. The extra label could 
be a class label, ‘c’ for each image. This extra label will enable one to infer the 
conditional probability that describes the latent vector conditioned on the class label 
‘c’ of the input. In VAE, using the variational inference principle, one infers the 
Gaussian distribution (by learning its mean and variance) of the latent vector 
representing each input ‘x’. In conditional VAE, one infers the Gaussian 
conditional distribution of the latent vector conditioned on the extra input variable 
‘label’.
For the dataset used in this homework, there are two advantages of the 
conditional VAE compared to the VAE: (i) the conditional VAE provides a cheap
way to validate the model by comparing the pixel mean of the generated images 
with the conditional class label values (pixel mean) in latent space used to generate 
the images. (ii) The trained conditional VAE can be used to generate images of 
composites with arbitrary volume fraction of fibers with sufficient confidence once 
the validation is done satisfactorily.
A conditional VAE. (source: https://ijdykeman.github.io/ml/2016/12/21/cvae.html)
A good explanation of the conditional VAE in addition to the resource cited in the 
figure above is this: https://agustinus.kristia.de/techblog/2016/12/17/conditional-vae/.
A conditional GAN (cGAN) toy problem was shown in Lab 6 where the volume 
fraction (replaced by pixel mean for cheaper model validation) was the design 
parameter, and thus, the condition input into the cGAN. In this question, you will 
build a conditional VAE for the same task of generating new images of composites 
as in Problem 4 by randomly choosing points in the latent space. Since each point 
in the latent space represents a conditional Gaussian distribution, it also has a class 
label. Therefore, it becomes possible to calculate the pixel mean of a generated 
image and compare it with the target ‘c’ value of the random point in latent space. 
It is recommended that students familiarize themselves with the code for providing 
the input to the cGAN with class labels and follow similar logic for building the 
conditional VAE. You may also seek help from the TAs if necessary.
5.1 Create an array that contains both images and labels (the pixel mean of each image). 
Note the label here is the condition and it should be stored in the additional channel 
of each image.
5.2 Use the same structure, activation functions and optimizer as the one used to build 
the VAE in Problem 4. Print the summary of the encoder and decoder blocks 
showing the output shape of each layer along with the number of parameters that 
need to be trained. (5 points)
5.3 Train the cVAE for 500 epochs. Plot the reconstruction loss, Kullback-Leibler loss 
and the total loss. Plot the first ten reconstructed images along with their originals. 
Include values of the pixel mean for both sets of images. (5 points)
5.4 Generate 10 fake conditions (i.e., ten volume fractions represented as pixel means 
evenly spaced within the range 0.1 to 0.4 as used in Lab 6) for image generation. 
Print the shape of the generated latent variable. Print the target volume fraction (or 
pixel mean). Show the shape of the array that combines the latent variables and fake 
conditions. Print the shape of the generated image tensor. (2 points)
5.5 Plot the 10 generated images. For each image show the generated condition (the 
pixel mean of each image generated in 5.4) and the pixel mean calculated from the 
image itself. (3 points)
5.6 Compare the set of generated images from the conditional VAE with the ones 
obtained in Lab 6 using cGAN. Comment on their differences and analyze the 
possible causes for the differences. (5 points)
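Steps 5.1 and 5.4 can be sketched as follows. The helper names are illustrative, and the extra-channel scheme assumes (N, H, W, 1) image arrays with the pixel mean as the condition.

```python
import numpy as np

def attach_condition_channel(images, labels):
    """Step 5.1: append each image's condition (its pixel mean) as an extra
    constant channel, turning (N, H, W, 1) images into (N, H, W, 2) arrays."""
    n, h, w, _ = images.shape
    cond = np.broadcast_to(labels.reshape(n, 1, 1, 1), (n, h, w, 1))
    return np.concatenate([images, cond], axis=-1)

def fake_conditions(n=10, lo=0.1, hi=0.4):
    """Step 5.4: ten target pixel means evenly spaced in the [0.1, 0.4] range."""
    return np.linspace(lo, hi, n)
```

Comparing `pixel_mean` of each generated image against its `fake_conditions` target then gives the cheap validation described in the introduction to this problem.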
