Jason Ho

Photo taken by Vicky Phan

About Me

I am a PhD student at UT Austin with research interests in ML accelerators, spiking neural networks, analog neuromorphic computing, and any kind of unconventional computing.

At the moment, my research explores machine learning models as lightweight surrogates for traditional circuit simulation of analog circuits, enabling large-scale mixed-signal neuromorphic (brain-like) simulation.

Education

August 2022 - Current

Ph.D. in Electrical and Computer Engineering

University of Texas at Austin

GPA: 3.96

August 2022 - December 2024

M.S.E. in Electrical and Computer Engineering

University of Texas at Austin

GPA: 3.96

  • Relevant Coursework: Computer Architecture, Cross-Layer Machine Learning Hardware-Software Co-Design, Parallelism and Locality, Prediction Mechanisms in Computer Architecture, Parallel Computer Architecture, Low Power and Robust Design
September 2018 - June 2022

Bachelor of Science with Honors in Computer Engineering

Brown University

GPA: 3.96

  • Honors Thesis: Tools for Understanding Computational Behaviors of Bacterial Biofilms
  • Selected Courses: Topics in Bioelectronics, Digital Signal Processing, Semiconductor Physics, VLSI Design, Digital Electronics Design, Computer Architecture, Communication Systems, Linear System Analysis, Computer Vision, Data Science
  • Student Activities: Brown Band, Brown Wind Symphony, Engineering Student Ambassador, Engineering Student Mentor, MAPS Mentor
September 2014 - June 2018

High School Diploma

Seekonk High School

Valedictorian, GPA: 4.0

Research Experience

August 2022 - Current

Graduate Researcher

SLAM Lab, University of Texas at Austin
  • Investigating ML-based power and performance surrogate models of analog compute blocks in large-scale mixed-signal simulators, achieving a 12x speedup over state-of-the-art simulation methodologies
  • Researching co-design strategies for hybrid analog-digital neuromorphic computing systems to combine analog energy efficiency with digital scalability
January 2021 - June 2022

Undergraduate Researcher

SCALE Lab, Brown University
    K. Hu, J. Ho and J. K. Rosenstein, "Super-Resolution Electrochemical Impedance Imaging with a 512 x 256 CMOS Sensor Array," in IEEE Transactions on Biomedical Circuits and Systems, 2022, doi: 10.1109/TBCAS.2022.3183856.
  • Modelled coupling interactions between three or more biofilms in two-dimensional arrays as Kuramoto oscillators using SciPy, NumPy, Pandas, and Matplotlib for unconventional oscillatory computing systems
  • Performed a state-space search over the computational capability of biofilms with varying phenotype expression using the coupled biofilm interaction models
  • Developed super-resolution techniques for impedance imaging for use in a deep-learning GAN pipeline

Engineering Experience

May 2023 - August 2023

Power and Performance Lead / Architect Intern

AMD
  • Characterized power and performance on future APU-plus-discrete-GPU platforms, focusing on power allocation algorithms between the APU and GPU on GPU-bound benchmarks
  • Owned and deployed an internal data analysis tool linking Power BI and internal databases to automate multi-phase statistical analysis of benchmark logs, providing an average 100x speedup over previous methods
  • Maintained, built, and ran benchmarks on 8 separate systems for power and performance characterization
June 2022 - August 2022

VLSI Read Channel Design and Verification Intern

Seagate Technology
  • Led the team's verification transition from a VMM to a UVM environment while reusing as much existing code as possible
  • Developed firmware initialization and configuration code for read channel UVM environment with functionality for large-scale read channel testbenches
May 2021 - September 2021

VLSI Design and Verification Engineering Intern

Seagate Technology
  • Designed and optimized RTL block to increase ECC correction throughput in the hard drive read pipeline
  • Developed VMM infrastructure with one other engineer to verify the new RTL block robustly
September 2020 - January 2021

Project Manager and Team Lead

Develop for Good
  • Volunteered for CARE International on analysis and visualization of USAID Hamzari data
  • Supervised a team of six frontend, backend, and UI/UX developers and data scientists
June 2020 - September 2020

FPGA Engineering Intern

Nabsys
  • Implemented two signal-processing algorithms for the analysis of tagged DNA data for genome sequencing
  • Optimized FPGA design to reduce size by 2x while increasing throughput by 16x to process streaming of 128 sensors
  • Verified FPGA design with C++, Python, and timing analysis through Vivado
April 2019 - September 2019

Software Engineering Intern

Brown University CIS
  • Designed a copyright-infringement script in Python that parsed DMCA emails, searched firewall logs, and verified infringement on University traffic, saving non-technical staff over 3 hours per case
  • Queried SQL databases to correlate CrowdStrike data with firewall permit/deny traffic in real-time dashboards displaying the current state of malicious traffic, speeding up firewall log parsing 20x with regex

Projects

Sept 2022 - Dec 2022

CNN FPGA Hardware Accelerator

  • Designed and deployed CNN accelerator with two other team members on AWS FPGAs using blocking systolic matrix multipliers with Xilinx Vitis HLS tools
  • Reduced trained parameter size by 75% using custom fixed-point 8-bit values with almost no loss in test accuracy
January 2021 - May 2022

Convolution ASIC Design

  • Designed the architecture and implemented RTL for an ASIC tapeout on an Efabless shuttle for real-time 2D image convolution using Yosys, Magic, and OpenLane
  • Co-founded a club in the engineering school to continue iterating on the initial design and get underclassmen interested in computer architecture
April 2021 - May 2021

RISC-V Processor on FPGA

  • Implemented front-end RTL for an abbreviated RISC-V instruction set with branch prediction and a five-stage pipeline on an Intel Altera FPGA board
  • Verified all design blocks in ModelSim and ran RTL through Intel Quartus
May 2020

Nvidia GauGAN Implementation

  • Implemented a deep-learning GAN model in Python using SPADE normalization, TensorFlow, and Keras to produce photorealistic images from segmentation masks
  • Compiled a subset of MIT's ADE20K dataset parsed specifically for outdoor landscape scenery
  • Remodeled the original architecture to reduce computational cost from 16 GPUs to 1 with a comparable Fréchet Inception Distance of 60.7 on the test set
May 2020

ARM Fitness Monitor

  • Prototyped a fitness monitor on an STM32 ARM board running FreeRTOS, with sensor data collection threads and data aggregation threads communicating through FIFO queue buffers
  • Integrated an optical heart-rate sensor, SpO2 sensor, accelerometer, and gyroscope, with double-tap mode switching implemented via signal processing of accelerometer data
January 2020

ReadMe

First Place Google Prize: Best Use of Google Cloud at Hack @ Brown 2020

  • Planned and oversaw creation of a multipurpose accessibility Android app written in Java that uses augmented reality and Firebase ML Kit on Google Cloud to overlay a dyslexia-friendly font on the camera preview via real-time OCR, with support for multiple foreign languages
  • Implemented text bounding-box algorithms and custom font support while managing three other team members

Awards

  • NSF GRFP Honorable Mention, April 2024
  • Cockrell School of Engineering Fellow, 2022 - Current
  • UT Austin Graduate Excellence Fellow, 2022 - Current
  • Sigma Xi Research Honor Society, May 2022
  • NSF GRFP Honorable Mention, April 2022
  • Tau Beta Pi Engineering Honor Society, December 2021
  • Grimshaw-Gudewicz Annual Scholar, 2020 - 2022
  • Seekonk High School Valedictorian, 2018

Skills

  • Programming: Verilog, SystemVerilog, C, C++, Python, PyTorch
  • Applications: Cadence Virtuoso, Cadence Spectre, Synopsys HSPICE, MATLAB, LTspice, gem5, ChampSim
  • Languages: English (Fluent), Cantonese (Fluent)
Last Updated: 01/03/2025