Un-forgetting Continual Lifelong Learning of Deep Models

Continual lifelong learning is an essential aspect of many applications. We propose a simple but effective approach to continual deep learning. Our approach leverages the principles of deep model compression, critical weights selection, and progressive network expansion. By enforcing iterative integration of individual tasks, we apply incremental learning that is scalable to the number of sequential tasks in a continual learning process. Our approach is easy to implement and exhibits several favorable characteristics. First, it overcomes the problem of "forgetting" (i.e., it learns new tasks while remembering all previous tasks). Second, it allows model expansion while maintaining model compactness despite handling sequential tasks. Our compaction and selection/expansion mechanism demonstrates that the knowledge accumulated through learning previous tasks can help build a better model that can tackle new tasks, thereby dispensing with the need to independently retrain original models for new tasks. Experimental results show that our approach of incremental learning generates an integrated model that can tackle multiple tasks without forgetting the tasks of the contributory models, while maintaining model compactness and enhancing performance. The results of our endeavors have been published in NeurIPS 2019.
Figure 2: Un-forgetting continual lifelong learning of deep models via the principles of model compaction (C), weight picking (P), and model expansion (G). The CPG approach can exploit the experiences learned from previous tasks to boost the performance of the current task.
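To make the mechanism concrete, the following is a minimal sketch, in PyTorch, of the compaction (C), picking (P), and growing (G) idea; it is not the published implementation. The layer shape, pruning ratio, optimizer, and toy training loop are illustrative assumptions, and the bias is simply shared across tasks for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CPGLinear(nn.Module):
    # Linear layer whose weights are partitioned across tasks via an "owner" map:
    # owner == 0 marks free weights; owner == t marks weights frozen for task t.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_dim, in_dim))
        self.bias = nn.Parameter(torch.zeros(out_dim))  # shared across tasks (simplification)
        self.register_buffer("owner", torch.zeros(out_dim, in_dim, dtype=torch.long))

    def forward(self, x, task_id):
        # Picking: a task reuses every weight owned by itself or an earlier task.
        mask = (self.owner > 0) & (self.owner <= task_id)
        if self.training:
            # Growing: free weights are also active while training the current task.
            mask = mask | (self.owner == 0)
        return F.linear(x, self.weight * mask.to(self.weight.dtype), self.bias)

    def zero_old_grads(self, task_id):
        # Weights owned by earlier tasks never change, so old tasks are not forgotten.
        if self.weight.grad is not None:
            self.weight.grad[(self.owner > 0) & (self.owner < task_id)] = 0.0

    @torch.no_grad()
    def compact(self, task_id, keep_ratio=0.5):
        # Compaction: keep the largest free weights for this task and release
        # the rest (reset to zero) so that future tasks can grow into them.
        free = self.owner == 0
        magnitudes = self.weight[free].abs()
        if magnitudes.numel() == 0:
            return
        k = max(1, int(keep_ratio * magnitudes.numel()))
        threshold = magnitudes.topk(k).values.min()
        picked = free & (self.weight.abs() >= threshold)
        self.owner[picked] = task_id
        self.weight[free & ~picked] = 0.0

# Toy usage: two tasks learned sequentially on random data.
layer = CPGLinear(20, 5)
for task_id in (1, 2):
    optimizer = torch.optim.SGD(layer.parameters(), lr=0.1)
    for _ in range(100):
        x, y = torch.randn(32, 20), torch.randint(0, 5, (32,))
        loss = F.cross_entropy(layer(x, task_id), y)
        optimizer.zero_grad()
        loss.backward()
        layer.zero_old_grads(task_id)  # protect weights owned by previous tasks
        optimizer.step()
    layer.compact(task_id)  # prune, then freeze this task's surviving weights

In this sketch the pruning ratio controls the trade-off between how compact each task's footprint is and how much capacity remains free for later tasks; in the full approach, the network is also expanded progressively when the free capacity runs out.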
II. Deep Learning Models for Smart System Applications
Deep Learning of Binary Hash Codes for Fast Retrieval
We introduce a binary hash code learning approach in which class label representations are rendered adaptable during network training. We express the labels as hypercube vertices in a K-dimensional space, and both the network weights and the class label representations are updated in the learning process. As the label representations are learned from the available data, semantically similar categories are assigned label representations that are close to each other in terms of Hamming distance within the label space. These label representations then serve as the output of hash function learning, thereby yielding compact and discriminating binary hash codes. This approach has proven simple yet effective, and it is applicable to both supervised and semi-supervised hash code learning. Our research on deep learning methods for hash codes has been published in ICIP 2019 and IEEE TPAMI 2019.
Figure 3: Hash code learning for efficient image retrieval. In our approach, each semantic label has its own representation codeword. The label representations, encoded as corners of the K-dimensional unit hypercube, can be learned automatically.
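As a rough illustration, the sketch below, again in PyTorch and not the published implementation, jointly trains a small encoder and K-bit class codewords and then retrieves by Hamming distance over binarized codes. The code length, network shape, loss, and toy data are illustrative assumptions, and the term that pushes different classes' codewords apart in Hamming distance is omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

K = 48            # hash code length (illustrative)
NUM_CLASSES = 10  # illustrative

class LabelHashNet(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, K))
        # Learnable class codewords; the sigmoid keeps them inside the
        # K-dimensional unit hypercube as training pushes them toward corners.
        self.codewords = nn.Parameter(0.1 * torch.randn(NUM_CLASSES, K))

    def forward(self, x):
        return torch.sigmoid(self.encoder(x))  # relaxed codes in (0, 1)^K

    def loss(self, codes, labels):
        # Pull each sample's relaxed code toward its (also trainable) class codeword.
        targets = torch.sigmoid(self.codewords)[labels]
        return F.mse_loss(codes, targets)

def to_bits(codes):
    # Binarize relaxed codes to {0, 1}^K for Hamming-distance retrieval.
    return (codes > 0.5).to(torch.uint8)

def hamming(query, database):
    # Hamming distance between one query code and a batch of database codes.
    return (query ^ database).sum(dim=-1)

# Toy usage: train on random features, then rank a random database by Hamming distance.
net = LabelHashNet()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    x = torch.randn(32, 512)
    y = torch.randint(0, NUM_CLASSES, (32,))
    loss = net.loss(net(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():
    database = to_bits(net(torch.randn(100, 512)))
    query = to_bits(net(torch.randn(1, 512)))
ranking = hamming(query, database).argsort()  # nearest database items first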