Course Project Information
Presentation Schedule:
NOTE: Each project has 7 minutes to present + 1 minute for Q&A. Please email your slides to sitaoh@uci.edu before class so that we can transition between presentations smoothly.
3/12 (Tuesday):
- Walaa Amer, Omar Bazarbachi, Mohamad Fakih: Hardware acceleration for Runtime Attention-Guided Mixed Precision Scheduling for Transformer Networks
- Rahul Dharmaji: Novel Optimization and Verification Techniques for Deep Learning Compilers
- Simon Guo: Implementation of An Affine Typechecker for HLS
- Yun Ping Hsiao: Analyze ASCON Algorithm Accelerator by HLS Tools
- Hao Su, Oscar Hsueh: Accelerating PyTorch Model Inference on PYNQ
- Yian Wang, Yuhua Huang: Improving Product Quantization for Large-Language Model Inference
- Han-Wen Yeh, Chun-Yu Huang: Enhancing IoT Sensor Signal Processing through High-Level Synthesis and Layered Architecture
- Wenxin Li, Cheng-Yen Lee: HLS-based Implementation of Motion Estimation Algorithm: A Review
- Chiranjeevi Narasimha Murthy: A Comprehensive Survey of Scheduling Techniques in High-Level Synthesis
Survey link for Tuesday presentations: https://forms.gle/g87D7us1hZUjaduo7
3/14 (Thursday):
- Mar Ramiro Ortega, William Rubio: Envisioning FPGAs: A Platform Proposal for Intuitive Visual Programming
- Joseph Rau: A Review of Just-in-Time Compilation and Its Potential for FPGA Programming
- Shreyas Shubhankar, Divya Sanjay: High Level Synthesis of BLAKE3 Accelerator on FPGA
- ChengChen Shi, Liu Yang: Power Consumption and Energy Efficiency of FPGA-based Accelerators
- Faraz Tahmasebi: Optimized Spatial Architecture Mapping Flow for Transformer Accelerators
- Yinong Tian: FPGA-based Accelerators for Binary Neural Networks
- Qilin Ye, Zi Ye: HLS-Based Image Processing Applications on FPGAs
- Jiawei Yu: Universal Verification Methodology (UVM) for HLS-Generated Designs: literature review
Survey link for Thursday presentations: https://forms.gle/b1Q7tWUkHfrAqmQ46
Final Report
- Double-column; use the ACM conference paper template.
- Suggested length: 4 or more pages for single-student projects; 6 or more pages for two-student projects.
- Include links to your source code (if any)
- Please proofread your report before submission
- General expectation: research-paper quality
Grading:
- 40% of the total course final grade
- Proposal (10%) + Presentation & Report (30%)
Report Content:
(for option (a), literature review projects:)
- Title
- Abstract
- Introduction (background, motivation, topic ranges of your review, etc.)
- Contribution (the unique contributions of your review and how it will benefit future researchers)
- Detailed reviews of various topics/sub-fields/categories
- Summary
- Future research opportunities in the field
- List of references
(for option (b), compiler & accelerator projects:)
- Title
- Abstract
- Introduction (background, motivation, problem definition, etc.)
- Contribution (the unique contributions of your project and how your work will benefit future researchers)
- Related Work
- Your Approach/Method
- Experimental Results
- Conclusion
- Future Work
- List of references
Grading Rubrics:
(for option (a), literature review projects:)
- Workload (20%)
- Coverage/Completeness of the review (20%)
- Discussion in the review (20%)
- Technical contribution (10%)
- Presentation/Report quality (30%)
(for option (b), compiler & accelerator projects:)
- Workload (20%)
- Completeness (20%)
- Technical contribution/novelty (20%)
- Reproducibility/technical soundness (10%)
- Presentation/Report quality (30%)
Winter 2023 Projects:
- Compiler-Assisted GPU Optimizations (Karim Barada)
- Hardware Acceleration for Edge Computing with AI engagement (Ziang Chen)
- Debugging techniques and tools for Hardware Accelerators (Bhavya Chunchu)
- FPGA-Based Image Accelerators for Optimizing Convolutional Operators (Timothy Do)
- AES Algorithm Accelerator by Using HLS Tools (Yaoze Zhang, Zhipei He)
- Compilers for DNN Accelerators (Asma Khan)
- FPGA-based Deep Neural Network Accelerators (Shengqi Li)
- Review of Phase-Locked Loops (PLLs) Based on FPGAs (Xinyi Tang, Yiming Shen)
- Dynamic and Partial Reconfigurability of FPGA (Mrudul Shailesh Sonawane)
- The Optimization of Matrix Multiplication (Yuanzhe Zhang, Jinwen Wu)
- HLS-RISC: High-Level Synthesis For RISC-V AI/ML Accelerator (Haocheng Xu)
- Design Space Exploration of For Loops in PyLog for High Level Synthesis (Valen Yamamoto)
Winter 2022 Projects:
- Hardware Accelerator Architecture for Image Tracking (Yun-Hsuan Kuo)
- Review of DNN accelerators based on FPGA (Xiaofang Zhang)
- A Universal FPGA Reconfigurable Computing Schema (Zeqiang Zheng)
- FPGA Overlay Architectures (Kenny Tang)
- Reversible Watermarking (using VHDL) (Kartik Jain, Mansi Chauhan)
- Optimization of hls4ml: an automatic NNs-to-HLS translation workflow (Yifan Zhang, Yuhui Li)
- A Compiler for Associative Processors (Rachid Karami, Mariam Rakka)
- FIR Hardware Accelerators (Calder Staude)
- Comparison of the performance of HLS tools versus manually written RTL code (Hongzheng Tian)
- HLS Debugging and Verification: literature review (Tianyu Yu)
- RaTriCounter: a solution to the triangle-counting problem (Yoonha Cha, Seongyoung Kang)
- Review of Resistive Memory-based Accelerators on Neural Networks Training and Inference (Ye Qiao, Andrew Ding)