COMP**30 Principles of Operating Systems Programming Assignment One
Due date: Oct. 17, 2024, at 23:59 Total 13 points – Release Candidate Version 2
Programming Exercise – Implement a LLM Chatbot Interface
Objectives
1. An assessment task related to ILO 4 [Practicability] – “demonstrate knowledge in applying system software and tools available in the modern operating system for software development”.
2. A learning activity related to ILO 2a.
3. The goals of this programming exercise are:
• To have hands-on practice in designing and developing a chatbot program, which involves the
creation, management and coordination of processes.
• To learn how to use various important Unix system functions:
§ to perform process creation and program execution
§ to support interaction between processes by using signals and pipes
§ to get the processes' running status by reading the /proc file system
§ to configure the scheduling policy of the process via syscall
Tasks
Chatbots like ChatGPT or Poe are the most common user interfaces to large language models (LLMs). Compared with standalone inference programs, they provide a natural way to interact with LLMs. For example, after you enter "What is Fibonacci Number" and press Enter, the chatbot will use the LLM to generate a response based on your prompt, for example, "Fibonacci Number is a series of numbers in which each value is the sum of the previous two...". But that is not the end: you can enter a further prompt like "Write a Python program to generate Fibonacci Numbers." and the model will continue to generate based on the previous messages, e.g. "def fibonacci_sequence(n): ...".
Moreover, in practice, we usually separate the inference process, which runs the LLM, from the main process, which handles user input and output. This separable design facilitates in-depth control over the inference process: for example, we can observe the status of the running process by reading the /proc file system, or even control the scheduling policy of the inference process from the main process via the relevant syscall.
Though understanding the GPT structure is not required, in this assignment we use Llama3, an open-source variant of GPT, and we provide a complete single-threaded LLM inference engine as the starting point of your work. You need to use the Unix process API to create the inference process that runs the LLM, use pipes and signals to communicate between the two processes, read the /proc pseudo file system to monitor the running status of the inference process, and use the sched syscalls to set the scheduler of the inference process and observe the performance changes.
Acknowledgement: The inference framework used in this assignment is based on the open-source project llama2.c by Andrej Karpathy. The LLM used in this assignment is based on SmolLM by HuggingfaceTB. Thanks, open source!
    
Specifications
a. Preparing Environment
Download start code – Download start.zip from course’s Moodle, unzip to a folder with:
Rename [UID] in inference_[UID].c and main_[UID].c with your UID, then open the Makefile, rename [UID] on line 5, and make sure no space is left after your UID.
Download the model files. There are two binary files required, model.bin for model weight and tokenizer.bin for tokenizer. Please use following instructions to download them:
Compile and run the inference program. The initial inference_[UID].c is a complete single-threaded C inference program that can be compiled as follows:
Please use the -lm flag to link the math library and the -O3 flag to apply the best optimization allowed within the C standard. Please stick to -O3 and don't use other optimization levels. The compiled program can be executed with an integer specifying the random seed and a series of strings as prompts (up to 4 prompts allowed) supplied via command-line arguments, aka argv:
Upon invocation, the program will configure the random seed and begin sentence generation based on the prompts provided via command-line arguments. The program then calls the generate function, which runs the LLM on the given prompt (prompts[i] in this example) to generate new tokens, and leverages printf with fflush to print the decoded tokens to stdout immediately.
start
├── common.h          # common and helper macro defns, read through first
├── main_[UID].c      # [your task] template for main process implementation
├── inference_[UID].c # [your task] template for inference child process implementation
├── Makefile          # makefile for the project, update [UID] on line 5
├── model.h           # GPT model definition, modification not allowed
└── avg_cpu_use.py    # utility to parse the log and calculate average cpu usage
make prepare # will download model.bin and tokenizer.bin if they do not exist
# or manually download via wget, will force repeated download, not recommended
wget -O model.bin https://huggingface.co/huangs0/smollm/resolve/main/model.bin
wget -O tokenizer.bin https://huggingface.co/huangs0/smollm/resolve/main/tokenizer.bin
 make -B inference # -B := --always-make, force rebuild
# or manually
gcc -o inference inference_[UID].c -O3 -lm # replace [UID] with yours
./inference <seed> "<prompt>" "<prompt>" # prompts must be quoted with ""
# examples
./inference 42 "What’s the answer to life the universe and everything?" # answer is 42!
./inference 42 "What’s Fibonacci Number?" "Write a python program to generate Fibonaccis."
for (int i = 0; i < num_prompt; i++) { // 0 < num_prompt <= 4
    printf("user: %s \n", prompts[i]); // print user prompt for our information
    generate(prompts[i]); // handles everything including model, printf, fflush
}

Following is an example of running ./inference. It's worth noting that when generation finishes, the current sequence length (SEQ LEN), which consists of both the user prompt and the generated text, is printed:
$ ./inference 42 "What is Fibonacci Number?"
user
What is Fibonacci Number?
assistant
A Fibonacci sequence is a sequence of numbers in which each number is the sum of the two preceding numbers (1, 1, 2, 3, 5, 8, 13, ...)
......
F(n) = F(n-1) + F(n-2) where F(n) is the nth Fibonacci number. The Fibonacci sequence is a powerful mathematical concept that has numerous applications in various<|im_end|>
[INFO] SEQ LEN: 266, Speed: 61.1776 tok/s
If multiple prompts are provided, they are applied within the same session instead of being treated independently, taking turns with model generation: the 2nd prompt is applied after the 1st generation, the 3rd prompt after the 2nd generation, and so on. You can observe SEQ LEN increasing with every generation:
 $ ./inference 42 "What is Fibonacci Numbers?" "Write a program to generate Fibonacci Numbers."
user
What is Fibonacci Number?
assistant
A Fibonacci sequence is a sequence of numbers in which each number is the sum of the two preceding numbers (1, 1, 2, 3, 5, 8, 13, ...)
......
F(n) = F(n-1) + F(n-2) where F(n) is the nth Fibonacci number. The Fibonacci sequence is a powerful mathematical concept that has numerous applications in various<|im_end|>
[INFO] SEQ LEN: 266, Speed: 61.1776 tok/s
user
Write a program to generate Fibonacci Numbers.
assistant
Here's a Python implementation of the Fibonacci sequence using recursion: ```python
def fibonacci_sequence(n):
    if n <= 1:
        return 1
    else:
        return fibonacci_sequence(n - 1) + fibonacci_sequence(n - 2)
......
[INFO] SEQ LEN: 538, Speed: 54.2636 tok/s
It's worth noting that on the same machine, with the same random seed and prompt (case-sensitive), inference generates exactly the same output. To avoid time-consuming long generations, the maximum number of new tokens generated per response turn is limited to 256 tokens, the maximum prompt length is limited to 256 characters (normally equivalent to 10-50 tokens), and the maximum number of turns is limited to 4 (at most 4 prompts are accepted; the rest are ignored).
b. Implement the Chatbot Interface
Open main_[UID].c and inference_[UID].c, implement the Chatbot Interface that can:

1. Inference based on user input: Accepts prompt input via the chatbot shell and when user presses `Enter`, starts inferencing (generate) based on the prompt, and prints generated texts to stdout.
2. Support Session: During inferencing, stop accepting new prompt input. After each generation, accept new prompt input via the chatbot shell, and can continue the generation based on the new prompt and previous conversations (prompts and generated tokens). Prompts must be treated in a continuous session (SEQ LEN continue growing).
3. Separate main and inference processes: Separate inference workload into a child process, and the main process only in charge of receiving user input, displaying output and maintaining session.
4. Collect exit status of the inference process on exit: A user can press Ctrl+C to terminate both main process and inference process. Moreover, the main process shall wait for the termination of the inference child process, collect and display the exit status of the inference process before it terminates.
5. Monitoring status of inference process: During inferencing, main process shall monitor the status of inference process via reading the /proc file system and print the status to stderr every 300 ms.
6. Set scheduling policy of the inference process: Before first generation, main process shall be able to set the scheduling policy and parameters of the inference process via SYS_sched_setattr syscall.
Your implementation shall be able to be compiled by the following command:
Then run the compiled program with ./main, or ./inference if you are at Stage 1. It accepts an argument seed that specifies the random seed. For Stage 3, to keep stdout and stderr from congesting the console together, we use 2>proc.log to dump the /proc log to the file system.
We suggest you divide the implementation into three stages:
• Stage 1 – Convert the inference_[UID].c to accept a seed argument and read in the prompt from the stdin.
§ Implement prompt input reading, call generate to generate new tokens and print the result.
• Stage 2 – Separate user-input workload into main_[UID].c (main process) and inference workload in inference_[UID].c (inference process). Add code to the main process to:
§ use fork to create child process and use exec to run inference_[UID].c
§ use pipe to forward user input from main process to the inference process’s stdin.
§ add signal handler to correctly handle SIGINT for termination; more details in specifications.
§ use signal (handlers and kill) to synchronize main process and inference process.
§ Main Process shall receive signal from inference process upon finishing each generation for the prompt.
§ use wait to wait for the inference process to terminate and print the exit status.
• Stage 3 – Add code to the main process so that it will:
§ During the inference, read the /proc file system to get the cpu usage, memory usage of the inference process, and print them out to the stderr every 300ms.
§ Before the first generation, use the SYS_sched_setattr syscall to set the scheduling policy and related scheduling parameters for the inference child process.
make -B # applicable after renaming [UID]
# or manually
gcc -o inference inference_[UID].c -O3 -lm # replace [UID] with yours
gcc -o main main_[UID].c # replace [UID] with yours

./inference <seed>  # stage 1, replace <seed> with number
./main <seed>       # stage 2, replace <seed> with number
./main <seed> 2>log # stage 3, replace <seed> with number, redirect stderr to file

Following are some further specifications on the behavior of your chatbot interface:
• Your chatbot interface shall print out >>> to indicate user prompt input.
§ >>> shall be printed out before every user prompt input.
§ Your main process shall wait until the user presses `Enter` before forwarding the prompt to
the inference process.
§ Your main process shall stop accepting user input until model generation is finished.
§ >>> shall be printed immediately AFTER model generation finished.
§ After >>> is printed out again, your main process shall resume accepting user input.
• Your inference process shall wait for user prompt forwarded from the main process, and
after finishing model generation, wait again until next user prompt is received.
§ Though blocked, the inference process shall correctly receive and handle SIGINT to terminate.
• Your program shall be able to terminate when 4 prompts have been received, or when the SIGINT signal is received.
§ Your main process shall wait for the inference process to terminate, collect and print the exit status of the inference process (in the form of Child exited with <status>) before it terminates.
• Your main process shall collect the running status of the inference process ONLY while the inference model is running, every 300 ms. All information about the statistics of a process can be found in the files under the /proc/{pid} directory. It is a requirement of this assignment to use the /proc filesystem to extract the running statistics of a process. You may refer to the manpage of the /proc file system and the kernel documentation. Here we mainly focus on /proc/{pid}/stat, which contains 52 fields separated by spaces on a single line. You need to parse, extract and display the following fields:
 ./main <seed>
>>> Do you know Fibonacci Numer?
Fibonacci number! It's a fascinating...<|im_end|>
>>> Write a Program to generate Fibonacci Number? // NOTE: Print >>> Here!!!
def generate_fibonacci(n):...
pid        Process Id
tcomm      Executable Name
state      Running Status (R is running, S is sleeping, D is sleeping in an uninterruptible wait, Z is zombie, T is traced or stopped)
policy     Scheduling Policy (Hint: get_sched_name helps convert it into a string)
nice       Nice Value (Hint: priority used by the default scheduler, default is 0)
vsize      Virtual Memory Size
task_cpu   CPU id the process is scheduled on, named cpuid
utime      Running time of the process spent in user mode, unit is 10ms (aka 0.01s)
stime      Running time of the process spent in system mode, unit is 10ms (aka 0.01s)
Moreover, you will need to calculate the cpu usage as a percentage (cpu%) based on utime and stime. CPU usage is the difference between the current and the last measurement divided by the interval length, and since we don't distinguish between utime and stime, sum both differences. For example, if the current utime and stime are 457 and 13, and the last utime and stime were 430 and 12, respectively, then the usage is ((457-430)+(13-12))/30 = 93.33% (all units are 10 ms, so a 300 ms interval equals 30 ticks). For real cases, verify against htop. Finally, you shall print to stderr in the following form. To separate this from the stdout output, use ./main <seed> 2>log to redirect stderr to a log file.
[pid] 6**017 [tcomm] (inference) [state] R [policy] SCHED_OTHER [nice] 0 [vsize] 358088704 [task_cpu] 4 [utime] 10 [stime] 3 [cpu%] 100.00% # NOTE: Color Not Required!!!
[pid] 6**017 [tcomm] (inference) [state] R [policy] SCHED_OTHER [nice] 0 [vsize] 358088704 [task_cpu] 4 [utime] 20 [stime] 3 [cpu%] 100.00%

• Before the first generation, the main process shall be able to set the scheduling policy and nice value of the inference process. To keep setting policies and parameters unified, you must use the raw syscall SYS_sched_setattr instead of other glibc bindings like sched_setscheduler. Currently Linux implements and supports the following scheduling policies, in two categories:
§ Normal Policies:
§ SCHED_OTHER: default scheduling policy of Linux. Also named SCHED_NORMAL.
§ SCHED_BATCH: for non-interactive cpu-intensive workloads.
§ SCHED_IDLE: for low-priority background tasks.
§ Realtime Policies: need sudo privilege, not required in this assignment.
§ [NOT REQUIRED] SCHED_FIFO: First-In-First-Out Policy with Preemption
§ [NOT REQUIRED] SCHED_RR: Round-Robin Policy
§ [NOT REQUIRED] SCHED_DEADLINE: Earliest Deadline First with Preemption
For Normal Policies (SCHED_OTHER, SCHED_BATCH, SCHED_IDLE), their scheduling priority is configured via nice value, an integer between -20 (highest priority) and +19 (lowest priority) with 0 as the default priority. You can find more info on the manpage.
Please note that on workbench2, without sudo, you are not allowed to set real-time policies or to set normal policies with nice < 0 due to resource limits; please do so only for benchmarking in your own environment. Grading of this part on workbench2 will be limited to setting SCHED_OTHER, SCHED_IDLE and SCHED_BATCH with nice >= 0.
c. Measure the performance and report your finding
Benchmark the generation speed (tok/s) and average cpu usage (%) of your implementation with different scheduling policies and nice values.
Scheduling Policy   Priority / Nice   Speed (tok/s)   Avg CPU Usage (%)
SCHED_OTHER         0
SCHED_OTHER         2
SCHED_OTHER         10
SCHED_BATCH         0
SCHED_BATCH         2
SCHED_BATCH         10
SCHED_IDLE          0 (only 0)
                                For simplicity and fairness, use only the following prompt to benchmark speed:
For average cpu usage, please take the average of the cpu usage values in the log (as in the example above). For your convenience, we provide a Python script avg_cpu_use.py that automatically parses the log (given its path) and prints the average. Use it like: python3 avg_cpu_use.py ./log
Based on the above table, briefly analyze the relation between scheduling policy and speed (together with cpu usage), and briefly report your findings (in one or two paragraphs). Please be advised that this is an open question with no clear or definite answer (just like most problems in our life); any finding that corresponds to your experimental results is acceptable (including that different schedulers make nearly no impact on performance).
 ./main <seed> 2>log
>>> Do you know Fibonacci Numer?
...... # some model generated text
[INFO] SEQ LEN: xxx, Speed: xx.xxxx tok/s # <- speed here!

IMPORTANT: We don't limit the platform for benchmarking. You may use: 1) workbench2; 2) your own Linux machine (if any); 3) Docker on Windows/macOS; 4) a hosted container like Codespaces. Please note that due to the large number of students this year, benchmarking on workbench2 could be slow as the deadline approaches.
Submit the table and your analysis in a one-page PDF document. Grading of your benchmarking and report is based on your analysis (whether it corresponds to your results or not), not on the speed you achieved.
Suggestions for implementation
• You may consider scanf or fgets to read user input; user input is limited to 512 characters, defined as the macro MAX_PROMPT_LEN in common.h (many other useful macros are included there too).
• To forward user input to the inference process’s stdin, you may consider using dup2.
• You may consider using SIGUSR1, SIGUSR2 and sigwait to support synchronization
between the main process and the inference process.
• There are no glibc bindings for the SYS_sched_setattr and SYS_sched_getattr
syscalls, so please use the raw syscall interface; check the manpage for more info.
• To convert scheduling policy from int to string, use get_sched_name defined in common.h
• Check the manpage first if you hit any problem: either Google "man <sth>" or run man <sth> in the shell.
Submission
Submit your program to the Programming # 1 submission page on the course's Moodle website. Name the programs inference_[UID].c and main_[UID].c (replace [UID] with your HKU student number). As the Moodle site may not accept source code submissions, please compress all files into zip format before uploading.
Checklist for your submission:
• Your source code inference_[UID].c and main_[UID].c. (must be self-contained, no dependencies other than model.h and common.h provided)
• Your report including benchmark table, your analysis and reasoning.
• Your GenAI usage report containing GenAI models used (if any), prompts and responses.
• Please do not compress and submit the model and tokenizer binary files (use make clear_bin)
Documentation
1. At the head of the submitted source code, state the:
• File name
• Name and UID
• Development Platform (Please include compiler version by gcc -v)
• Remark – describe how much you have completed (See Grading Criteria)
2. Inline comments (try to be detailed so that your code could be understood by others easily)
 
Computer Platform to Use
For this assignment, you can develop and test your program on any Linux platform, but you must make sure that the program can correctly execute on the workbench2 Linux server (as the tutors will use this platform to do the grading). Your program must be written in C and successfully compiled with gcc on the server.
It’s worth noticing that the only server for COMP**30 is workbench2.cs.hku.hk, and please do not use any CS department server, especially academy11 and academy21, as they are reserved for other courses. In case you cannot login to workbench2, please contact tutor(s) for help.
Grading Criteria
1. Your submission will be primarily tested on the workbench2 server. Make sure that your program can be compiled without any errors using the Makefile (update if needed). Otherwise, we have no way to test your submission and you will get a zero mark.
2. As tutors will check your source code, please write your program with good readability (i.e., with good code convention and sufficient comments) so that you won’t lose marks due to confusion.
3. You can only use the Standard C library on Linux platform (aka glibc).
Detailed Grading Criteria
• Documentation -1 point if failed to do
• Include necessary documentation to explain the logic of the program.
• Include required student’s info at the beginning of the program.
• Report: 1 point
• Measure the performance and average cpu usage of your chatbot on your own computer.
• Briefly analyze the relation between performance and scheduling policy and report your
finding.
• Your finding will be graded based on the reasoning part.
• Implementation: 12 points
1. [1pt] Build a chatbot that accepts user input, runs inference and prints generated text to stdout.
2. [2pt] Separate Inference Process and Main Process (for chatbot interface) via pipe and exec
3. [1pt] Correctly forward user input from the main process to the subprocess via pipe
4. [1pt] Correctly synchronize the main process with the inference process for the completion of
inference generation.
5. [2pt] Correctly handle SIGINT that terminates both main and inference processes and collect
the exit status of the inference process.
6. [2.5pt] Correctly parse the /proc file system of the inference process during inferencing to
collect and print required fields to stderr.
7. [0.5pt] Correctly calculate the cpu usage in percentage and print to stderr.
8. [2pt] Correctly use SYS_sched_setattr to set the scheduling policy and parameters.
Plagiarism
Plagiarism is a very serious offense. Students should understand what constitutes plagiarism, the consequences of committing an offense of plagiarism, and how to avoid it. Please note that we may request you to explain to us how your program is functioning as well as we may also make use of software tools to detect software plagiarism.

GenAI Usage Report
Following the course syllabus, you are allowed to use Generative AI to help complete the assignment; please clearly state the GenAI usage in your GenAI Report, including:
• Which GenAI models you used
• Your conversations, including your prompts and the responses.
