BEGIN:VCALENDAR
PRODID:-//AddEvent Inc//AddEvent.com v1.7//EN
VERSION:2.0
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:STANDARD
DTSTART:20261101T010000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20260308T030000
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
DESCRIPTION:Assessing the Trainability of the Variational Quantum State Diagonalization Algorithm at Scale\n\nDeveloping new quantum algorithms is a famously hard problem. The lack of intuition concerning the quantum realm makes it difficult to construct quantum algorithms that solve particular problems of interest. In addition\, modern hardware limitations place strong restrictions on the types of algorithms that can be implemented in noisy circuits. These challenges have produced several approaches to quantum algorithm development in the current Noisy Intermediate-Scale Quantum (NISQ) era. One of the most prominent is the use of classical machine learning to discover novel quantum algorithms by minimizing a cost function associated with the application of interest. This quantum-classical hybrid approach\, known as Variational Quantum Algorithms (VQAs)\, has attracted major interest from both academic and industrial researchers due to its flexible framework and expanding list of applications - most notably optimization (QAOA) and chemistry (VQE). What remains unclear is whether these algorithms will deliver on their promise when implemented at a useful scale. In fact\, there is strong reason to worry that the classical machine learning model will be unable to train in the larger parameter space. This phenomenon is commonly referred to as the barren plateau problem\, in which the training gradient vanishes exponentially quickly as the system size increases. Recent results have shown that some cost functions used in training provably produce a barren plateau\, while others provably avoid one. In this presentation\, I apply these results to my 2018 paper\, in which my group developed the Variational Quantum State Diagonalization (VQSD) algorithm\, and demonstrate that this algorithm's current cost function will encounter a barren plateau at scale.
 I then introduce a simple modification to this cost function that preserves its purpose while ensuring trainability at scale. I also discuss the next steps for this project\, in which I am teaching a team of 6 quantum novices across 4 continents the core calculation used in this work\, in order to extend the analysis to the broader VQA literature.\n\nReference: https://uwspace.uwaterloo.ca/handle/10012/18187
X-ALT-DESC;FMTTYPE=text/html:<strong>Assessing the Trainability of the Variational Quantum State Diagonalization Algorithm at Scale</strong><br><br>Developing new quantum algorithms is a famously hard problem. The lack of intuition concerning the quantum realm makes it difficult to construct quantum algorithms that solve particular problems of interest. In addition, modern hardware limitations place strong restrictions on the types of algorithms that can be implemented in noisy circuits. These challenges have produced several approaches to quantum algorithm development in the current Noisy Intermediate-Scale Quantum (NISQ) era. One of the most prominent is the use of classical machine learning to discover novel quantum algorithms by minimizing a cost function associated with the application of interest. This quantum-classical hybrid approach, known as Variational Quantum Algorithms (VQAs), has attracted major interest from both academic and industrial researchers due to its flexible framework and expanding list of applications - most notably optimization (QAOA) and chemistry (VQE). What remains unclear is whether these algorithms will deliver on their promise when implemented at a useful scale. In fact, there is strong reason to worry that the classical machine learning model will be unable to train in the larger parameter space. This phenomenon is commonly referred to as the barren plateau problem, in which the training gradient vanishes exponentially quickly as the system size increases. Recent results have shown that some cost functions used in training provably produce a barren plateau, while others provably avoid one. In this presentation, I apply these results to my 2018 paper, in which my group developed the Variational Quantum State Diagonalization (VQSD) algorithm, and demonstrate that this algorithm's current cost function will encounter a barren plateau at scale.
 I then introduce a simple modification to this cost function that preserves its purpose while ensuring trainability at scale. I also discuss the next steps for this project, in which I am teaching a team of 6 quantum novices across 4 continents the core calculation used in this work, in order to extend the analysis to the broader VQA literature.<br><br>Reference: <a href="https://uwspace.uwaterloo.ca/handle/10012/18187">https://uwspace.uwaterloo.ca/handle/10012/18187</a>
UID:1775628510addeventcom
SUMMARY:IQC Student Seminar featuring Joan Arrow
DTSTART;TZID=America/New_York:20220907T120000
DTEND;TZID=America/New_York:20220907T130000
DTSTAMP:20260408T060830Z
TRANSP:OPAQUE
STATUS:CONFIRMED
SEQUENCE:0
LOCATION:QNC 1201
X-MICROSOFT-CDO-BUSYSTATUS:BUSY
BEGIN:VALARM
TRIGGER:-PT30M
ACTION:DISPLAY
DESCRIPTION:Reminder
END:VALARM
END:VEVENT
END:VCALENDAR