Abstract: Automatic music transcription (AMT) is one of the most fundamental problems in music information retrieval (MIR). In a nutshell, music transcription refers to extracting a symbolic representation, i.e., a list of notes, from an audio signal; the notes can be formatted either as a machine-readable representation (e.g., MIDI) or as a human-readable score (i.e., staff notation). Since music transcription is by no means easy even for humans, AMT is arguably one of the core challenges in the field of artificial intelligence (AI), especially for polyphonic music. This talk will give an overview of the AMT problem, including current approaches and evaluation methodologies. Specifically, it will discuss feature-based techniques, matrix factorization, and recently developed deep learning techniques for AMT. It will also explore connections with related problems such as source separation, chord recognition, and beat tracking, as well as applications to related fields, such as music appreciation, music education, music gaming, music production, and computational musicology. Score-level processing such as note parsing and optical music recognition (OMR) will also be mentioned briefly. Finally, this talk will introduce some useful online resources for AMT research and applications, including datasets and source code.
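To give a flavor of one approach mentioned above, the sketch below illustrates the core idea behind matrix-factorization-based AMT: non-negative matrix factorization (NMF) decomposes a magnitude spectrogram V (frequency × time) into note templates W and activations H, with V ≈ WH; peaks in a row of H then suggest when the corresponding note is sounding. This is a minimal toy example with a synthetic "spectrogram" and the standard Lee–Seung multiplicative updates, not the specific method presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectrogram": two note templates, active at different times.
true_W = np.array([[1.0, 0.0],
                   [0.5, 0.2],
                   [0.0, 1.0],
                   [0.1, 0.6]])                     # 4 frequency bins x 2 notes
true_H = np.array([[1, 1, 0, 0, 1],
                   [0, 0, 1, 1, 1]], dtype=float)   # 2 notes x 5 time frames
V = true_W @ true_H + 1e-3                          # small offset keeps V > 0

# NMF via multiplicative updates (squared Euclidean loss):
# H <- H * (W^T V) / (W^T W H),  W <- W * (V H^T) / (W H H^T)
n_notes = 2
W = rng.random((V.shape[0], n_notes)) + 0.1
H = rng.random((n_notes, V.shape[1])) + 0.1
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

error = np.linalg.norm(V - W @ H)
print(error)  # reconstruction error; for transcription, threshold rows of H
```

In a real system, V would come from a short-time Fourier transform of the recording, W would have one template per pitch, and thresholding the activations in H (plus post-processing) yields note onsets and durations.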