Classification is a core problem in machine learning, with applications as simple as automatically separating apples, oranges, and bananas. The regular classification setup aims to minimize the number of future mis-prediction errors. In some applications, however, different types of mis-prediction errors must be treated differently. For instance, a false-negative prediction from a spam classification system costs the user only an extra second to delete the email, while a false-positive prediction can mean a huge loss when the email actually carries important information. Similarly, when recommending movies to a subscriber whose preference is ``romance over action over horror'', mis-predicting a romance movie as a horror one should cost significantly more than mis-predicting it as an action one. Such needs can be formalized as the cost-sensitive classification setup, which is drawing much research attention because of its many potential applications, including targeted marketing, fraud detection, and web analysis. Because regular classification is a well-studied setup, many good regular classification algorithms already exist. In this talk, we first present a tool that systematically extends those algorithms to cost-sensitive classification problems. Then, by coupling the tool with the popular one-versus-one regular classification algorithm, we propose a simple and novel cost-sensitive classification method. Finally, we demonstrate the strong theoretical guarantees and promising experimental results that come with the proposed method. The talk is self-contained and assumes no prior knowledge of machine learning.
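As a toy illustration (not part of the talk itself), the difference between the two setups can be sketched with a cost matrix: regular classification counts every mis-prediction equally, while cost-sensitive classification charges each (true class, predicted class) pair its own cost. The class labels and cost values below are hypothetical, chosen to mirror the movie-recommendation example.

```python
# Hypothetical classes: 0 = romance, 1 = action, 2 = horror.
# cost[y][k] is the cost of predicting class k when the true class is y.
cost = [
    [0, 1, 5],  # true romance: predicting horror (5) is far worse than action (1)
    [1, 0, 1],  # true action
    [5, 1, 0],  # true horror: predicting romance is the expensive error
]

def error_count(y_true, y_pred):
    """Regular classification objective: number of mis-predictions."""
    return sum(y != k for y, k in zip(y_true, y_pred))

def total_cost(y_true, y_pred, cost):
    """Cost-sensitive objective: sum cost-matrix entries over all pairs."""
    return sum(cost[y][k] for y, k in zip(y_true, y_pred))

y_true = [0, 0, 1, 2]
y_pred = [0, 2, 1, 1]  # one romance->horror error, one horror->action error
print(error_count(y_true, y_pred))        # 2 mis-predictions
print(total_cost(y_true, y_pred, cost))   # cost 5 + 1 = 6
```

Under the regular objective the two errors above look identical; under the cost matrix, the romance-to-horror error dominates, which is exactly the distinction the cost-sensitive setup is designed to capture.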