Gut Microbiota Composition in Chemotherapy and Targeted Therapy of Patients with Metastatic Colorectal Cancer
Frontiers in Oncology, To Appear
Yen-Cheng Chen, Chia-Hsien Chuang, Zhi-Feng Miao, Kwan-Ling Yip, Chung-Jung Liu, Ling-Hui Li, Deng-Chyang Wu, Tian-Lu Cheng, Chung-Yen Lin* and Jaw-Yuan Wang*
Studies have reported the effects of the gut microbiota on colorectal cancer (CRC) chemotherapy, but few studies have investigated the association between the gut microbiota and targeted therapy. This study investigated the role of the gut microbiota in the treatment outcomes of patients with metastatic CRC (mCRC). We enrolled 110 patients with mCRC and treated them with standard cancer therapy. Stool samples were collected before administering a combination of chemotherapy and targeted therapy. Patients who had progressive disease (PD) or a partial response (PR) for at least 12 cycles of therapy were included in the study. We further divided these patients into anti-epidermal growth factor receptor (cetuximab) and anti-vascular endothelial growth factor (bevacizumab) subgroups. The gut microbiota of the PR group and the bevacizumab-PR subgroup exhibited significantly higher α-diversity. The β-diversity of bacterial species differed significantly between the bevacizumab-PR and bevacizumab-PD groups. Klebsiella quasipneumoniae exhibited the greatest fold change in abundance in the PD group relative to the PR group. Lactobacillus and Bifidobacterium species exhibited higher abundance in the PD group. The abundance of Fusobacterium nucleatum was approximately 32 times higher in the PD group than in the PR group. Higher gut microbiota diversity was associated with more favorable treatment outcomes in the patients with mCRC. Bacterial species analysis of stool samples yielded heterogeneous results. K. quasipneumoniae exhibited the greatest fold change in abundance among all bacterial species in the PD group. This result warrants further investigation, especially in a Taiwanese population.
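The abstract does not specify which α-diversity metric was used; as one common choice, the Shannon index can be computed from per-taxon read counts, so that a more even community scores higher. The counts below are hypothetical, for illustration only:

```python
import math

def shannon_diversity(counts):
    """Shannon index H = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical read counts: an even community scores higher than a skewed one.
even = shannon_diversity([25, 25, 25, 25])   # maximal for 4 taxa: ln(4)
skewed = shannon_diversity([97, 1, 1, 1])
```

Under this metric, the higher α-diversity reported for the PR group corresponds to a larger H.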
A Multi-grained Dataset for News Event Triggered Knowledge Update
Proceedings of the 31st ACM International Conference on Information and Knowledge Management (CIKM 2022), October 2022
Yu-Ting Lee, Ying-Jhe Tang, Yu-Chung Cheng, Pai-Lin Chen, Tsai-Yen Li and Hen-Hsen Huang
Keeping knowledge facts up-to-date is laborious and costly as the world rapidly changes and new information emerges every second. In this work, we introduce a novel task, news event triggered knowledge update. Given an existing article about a topic and a news event related to that topic, the aim of our task is to generate an updated article according to the information from the news event. We create a multi-grained dataset for the investigation of our task. Articles from Wikipedia are collected and aligned with news events at multiple language units, including the citation text, the first paragraph, and the full content of the news article. Baseline models are also explored at three levels of knowledge update: the first paragraph, the summary, and the full content of the knowledge facts.
QISTA-ImageNet: A Deep Compressive Image Sensing Framework Solving L_q-Norm Optimization Problem
17th European Conference on Computer Vision (ECCV), October 2022
Gang-Xuan Lin, Shih-Wei Hu, and Chun-Shien Lu
In this paper, we study how to reconstruct original images from given sensed samples/measurements by proposing a so-called deep compressive image sensing framework. This framework, dubbed QISTA-ImageNet, is built upon a deep neural network that realizes our optimization algorithm QISTA (ℓq-ISTA) for solving the image recovery problem. The unique characteristics of QISTA-ImageNet are that we (1) introduce a generalized proximal operator and present learning-based proximal gradient descent (PGD) together with an iterative algorithm for reconstructing images, (2) analyze how QISTA-ImageNet can exhibit better solutions than state-of-the-art methods and clearly interpret the insight of the proposed method, and (3) conduct empirical comparisons with state-of-the-art methods to demonstrate that QISTA-ImageNet exhibits the best image reconstruction quality in solving the ℓq-norm optimization problem.
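The learned PGD in QISTA-ImageNet generalizes the classical ISTA iteration. As background only (not the paper's learned ℓq solver), a plain ℓ1 ISTA can be sketched in NumPy; the soft-threshold here is the ℓ1 proximal operator, not the paper's generalized one, and the problem sizes are arbitrary:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.05, iters=500):
    """Classical ISTA for min_x 0.5 * ||Ax - y||^2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)            # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))          # compressive sensing matrix (40 measurements)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]      # sparse signal to recover
y = A @ x_true
x_hat = ista(A, y)
```

In the learned setting, the fixed step size and threshold are replaced by trained parameters, and the ℓ1 prox by the generalized ℓq operator.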
Differences in gut microbiota correlate with symptoms and regional brain volumes in patients with late-life depression
Frontiers in Aging Neuroscience, July 2022
Chia-Fen Tsai, Chia-Hsien Chuang, Yen-Po Wang, Ya-Bo Lin, Pei-Chi Tu, Pei-Yi Liu, Po-Shan Wu, Chung-Yen Lin and Ching-Liang Lu
Depression is associated with gut dysbiosis that disrupts a gut-brain bidirectional axis. Gray matter volume changes in cortical and subcortical structures, including prefrontal regions and the hippocampus, have also been noted in depressive disorders. However, the link between gut microbiota and brain structures in depressed patients remains elusive. Neuropsychiatric measures, stool samples, and structural brain images were collected from 36 patients with late-life depression (LLD) and 17 healthy controls. 16S ribosomal RNA (rRNA) gene sequencing was used to profile stool microbial communities for quantitation of microbial composition, abundance, and diversity. T1-weighted brain images were assessed with voxel-based morphometry to detect alterations in gray matter volume between groups. Correlation analysis was performed to identify possible associations between depressive symptoms, brain structures, and gut microbiota. We found a significant difference in gut microbial composition between patients with LLD and healthy controls. The genera Enterobacter and Burkholderia were positively correlated with depressive symptoms and negatively correlated with brain structural signatures in regions associated with memory, somatosensory integration, and emotional processing/cognition/regulation. Our study supports the microbiota-gut-brain axis as a potential mechanism mediating the symptomatology of LLD patients, which may facilitate the development of therapeutic strategies targeting gut microbes in the treatment of elderly depressed patients.
pubmedKB: an interactive web server to explore biomedical entity relations from biomedical literature
Nucleic Acids Research, July 2022
Li, P.H., Chen, T.F., Yu, J.Y., Shih, S.H., Su, C.H., Lin, Y.H., Tsai, H.K., Juan, H.F., Chen, C.Y. and Huang, J.H.
With the proliferation of genomic sequence data for biomedical research, the exploration of human genetic information by domain experts requires a comprehensive interrogation of large numbers of scientific publications in PubMed. However, a query in PubMed essentially provides search results sorted only by date of publication. A search engine for retrieving and interpreting complex relations between biomedical concepts in scientific publications remains lacking. Here, we present pubmedKB, a web server designed to extract and visualize semantic relationships between four biomedical entity types: variants, genes, diseases, and chemicals. pubmedKB uses state-of-the-art natural language processing techniques to extract semantic relations from a large number of PubMed abstracts. Currently, pubmedKB contains over 2 million semantic relations between biomedical entity pairs extracted from over 33 million PubMed abstracts. pubmedKB has a user-friendly interface with an interactive semantic graph, enabling users to easily query entities and explore entity relations. Supporting sentences with highlighted snippets allow users to easily navigate the publications. Combined with a new explorative approach to literature mining and an interactive interface for researchers, pubmedKB thus enables rapid, intelligent searching of the large biomedical literature to provide useful knowledge and insights. pubmedKB is available at https://www.pubmedkb.cc/.
YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
arXiv:2207.02696v1, July 2022
C. Y. Wang, Alexey Bochkovskiy, H. Y. Mark Liao
YOLOv7 surpasses all known object detectors in both speed and accuracy in the range from 5 FPS to 160 FPS and has the highest accuracy, 56.8% AP, among all known real-time object detectors with 30 FPS or higher on GPU V100. The YOLOv7-E6 object detector (56 FPS V100, 55.9% AP) outperforms both the transformer-based detector SWIN-L Cascade-Mask R-CNN (9.2 FPS A100, 53.9% AP) by 509% in speed and 2% in accuracy, and the convolutional-based detector ConvNeXt-XL Cascade-Mask R-CNN (8.6 FPS A100, 55.2% AP) by 551% in speed and 0.7% AP in accuracy. YOLOv7 also outperforms YOLOR, YOLOX, Scaled-YOLOv4, YOLOv5, DETR, Deformable DETR, DINO-5scale-R50, ViT-Adapter-B, and many other object detectors in speed and accuracy. Moreover, we train YOLOv7 only on the MS COCO dataset from scratch without using any other datasets or pre-trained weights. Source code is released at https://github.com/WongKinYiu/yolov7.
Efficient Dual Batch Size Deep Learning for Distributed Parameter Server Systems
IEEE Annual Computer Software and Applications Conference (COMPSAC), June 2022
Kuan-Wei Lu, Pangfeng Liu, Ding-Yong Hong and Jan-Jan Wu
Distributed machine learning is essential for applying deep learning models with large amounts of data and many parameters. Current research on distributed machine learning focuses on using more hardware devices and powerful computing units for fast training. Consequently, model training prefers a larger batch size to accelerate training speed. However, large-batch training often suffers from poor accuracy due to poor generalization ability. Researchers have come up with many sophisticated methods to address this accuracy issue caused by large batch sizes. These methods usually have complex mechanisms, thus making training more difficult. In addition, powerful training hardware for large batch sizes is expensive, and not all researchers can afford it. We propose a dual batch size learning scheme to address the batch size issue. We use the maximum batch size our hardware can afford for maximum training efficiency. In addition, we introduce a smaller batch size during training to improve the model's generalization ability. Using two different batch sizes simultaneously in the same training reduces the testing loss and attains good generalization ability, with only a slight increase in training time. By increasing the training time by 5%, we can reduce the loss from 1.429 to 1.246 in some cases. In addition, by appropriately adjusting the percentage of large and small batch sizes, we can increase the accuracy by 2.8% in some cases. With an additional 10% increase in training time, we can reduce the loss from 1.429 to 1.193, and after moderately adjusting the number of large and small batches used by GPUs, the accuracy can increase by 2.9%. Using two different batch sizes in the same training introduces two complications. First, the data processing speeds for the two batch sizes differ, so we must assign the data proportionally to maximize the overall processing speed.
Second, since the smaller batches will see less data due to the overall processing speed consideration, we proportionally adjust their contribution towards the global weight update in the parameter server, using the ratio of data between the small and large batches. Experimental results indicate that this contribution adjustment increases the final accuracy by another 0.9%.
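The contribution adjustment described above can be pictured as a data-weighted average in the parameter server. The sketch below is a hypothetical simplification (the function name and exact weighting formula are assumptions, not the paper's implementation):

```python
def aggregate(grad_large, grad_small, n_large, n_small):
    """Combine gradients from the large-batch and small-batch workers,
    scaling the small-batch contribution by the ratio of data it processed
    (hypothetical weighting illustrating the idea)."""
    ratio = n_small / n_large
    return (grad_large + ratio * grad_small) / (1.0 + ratio)

# Equal data -> plain average; less small-batch data -> smaller influence.
avg = aggregate(1.0, 3.0, 128, 128)     # both workers saw 128 samples
weighted = aggregate(1.0, 3.0, 128, 32)  # small-batch worker saw only 32
```

With scalar gradients standing in for parameter tensors, the small-batch worker's pull on the global weights shrinks in proportion to the data it actually processed.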
Capturing Humans in Motion: Temporal-Attentive 3D Human Pose and Shape Estimation from Monocular Video
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2022
Wen-Li Wei, Jen-Chun Lin, Tyng-Luh Liu, and Hong-Yuan Mark Liao
Learning to capture human motion is essential to 3D human pose and shape estimation from monocular video. However, the existing methods mainly rely on recurrent or convolutional operation to model such temporal information, which limits the ability to capture non-local context relations of human motion. To address this problem, we propose a motion pose and shape network (MPS-Net) to effectively capture humans in motion to estimate accurate and temporally coherent 3D human pose and shape from a video. Specifically, we first propose a motion continuity attention (MoCA) module that leverages visual cues observed from human motion to adaptively recalibrate the range that needs attention in the sequence to better capture the motion continuity dependencies. Then, we develop a hierarchical attentive feature integration (HAFI) module to effectively combine adjacent past and future feature representations to strengthen temporal correlation and refine the feature representation of the current frame. By coupling the MoCA and HAFI modules, the proposed MPS-Net excels in estimating 3D human pose and shape in the video. Though conceptually simple, our MPS-Net not only outperforms the state-of-the-art methods on the 3DPW, MPI-INF-3DHP, and Human3.6M benchmark datasets, but also uses fewer network parameters. The video demos can be found at https://mps-net.github.io/MPS-Net/.
Decoupled Contrastive Learning
17th European Conference on Computer Vision (ECCV), October 2022
Chun-Hsiao Yeh, Cheng-Yao Hong, Yen-Chi Hsu, Tyng-Luh Liu, Yubei Chen and Yann LeCun
Contrastive learning (CL) is one of the most successful paradigms for self-supervised learning (SSL). In a principled way, it considers two augmented views of the same image as positives, to be pulled closer, and all other images as negatives, to be pushed further apart. However, behind the impressive success of CL-based techniques, their formulation often relies on heavy-computation settings, including large sample batches, extensive training epochs, etc. We are thus motivated to tackle these issues and establish a simple, efficient, yet competitive baseline of contrastive learning. Specifically, we identify, from theoretical and empirical studies, a noticeable negative-positive-coupling (NPC) effect in the widely used cross-entropy (InfoNCE) loss, which leads to unsuitable learning efficiency with respect to the batch size. Optimizing the InfoNCE loss with a small batch effectively solves easier SSL tasks. By removing the NPC effect, we propose the decoupled contrastive learning (DCL) loss, which removes the positive term from the denominator and significantly improves learning efficiency. DCL achieves competitive performance with less sensitivity to sub-optimal hyperparameters, requiring neither large batches as in SimCLR, momentum encoding as in MoCo, nor large numbers of epochs. We demonstrate this on various benchmarks while showing robustness to suboptimal hyperparameters. Notably, SimCLR with DCL achieves 68.2% ImageNet-1K top-1 accuracy using a batch size of 256 within 200 epochs of pre-training, outperforming its SimCLR baseline by 6.4%. Further, DCL can be combined with the SOTA contrastive learning method NNCLR to achieve 72.3% ImageNet-1K top-1 accuracy with a batch size of 512 in 400 epochs, which represents a new SOTA in contrastive learning. We believe DCL provides a valuable baseline for future contrastive SSL studies.
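The key change, removing the positive term from the InfoNCE denominator, can be written out directly. The scalar-similarity sketch below (NumPy, with an assumed temperature of 0.1) is for illustration only, not the paper's batched implementation:

```python
import numpy as np

def info_nce(pos, negs, tau=0.1):
    """Standard InfoNCE: the positive similarity appears in the denominator."""
    num = np.exp(pos / tau)
    den = num + np.sum(np.exp(np.asarray(negs) / tau))
    return -np.log(num / den)

def dcl(pos, negs, tau=0.1):
    """Decoupled loss: the positive term is removed from the denominator."""
    return -pos / tau + np.log(np.sum(np.exp(np.asarray(negs) / tau)))

pos, negs = 0.9, [0.1, -0.2, 0.3]
loss_nce = info_nce(pos, negs)
loss_dcl = dcl(pos, negs)
```

The decoupled loss is always strictly smaller because the denominator loses the positive term; the paper's point, however, concerns the gradient coupling this removes, not the loss value itself.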
Self-Supervised Sparse Representation for Video Anomaly Detection
17th European Conference on Computer Vision (ECCV), October 2022
Jhih-Ciang Wu, He-Yen Hsieh, Ding-Jie Chen, Chiou-Shann Fuh and Tyng-Luh Liu
Recently, deep neural networks have demonstrated significant efficacy in numerous tasks compared to conventional machine learning techniques. We rethink the employment of both approaches and adopt them as complements to each other. Specifically, we introduce a self-supervised sparse representation that conjugates a learning-based model with embedded dictionary learning components. With the learned task-specific dictionary, we design the en-Normal and de-Normal modules, which are leveraged oppositely: the former is used to obtain the reconstructed normal-event feature, and the latter is applied to filter out the normal-event feature. The proposed architecture is flexible, generally carrying out both one-class and weakly supervised video anomaly detection via pseudo label generation. The experimental results comprehensively demonstrate the effectiveness of our method, which outperforms the SOTA on all video anomaly detection benchmarks by a significant margin.
Datatype-generic programming meets elaborator reflection
Proceedings of the ACM on Programming Languages, August 2022
Hsiang-Shang Ko, Liang-Ting Chen, and Tzu-Chi Lin
Datatype-generic programming is natural and useful in dependently typed languages such as Agda. However, datatype-generic libraries in Agda are not reused as much as they should be, because traditionally they work only on datatypes decoded from a library’s own version of datatype descriptions; this means that different generic libraries cannot be used together, and they do not work on native datatypes, which are preferred by the practical Agda programmer for better language support and access to other libraries. Based on elaborator reflection, we present a framework in Agda featuring a set of general metaprograms for instantiating datatype-generic programs as, and for, a useful range of native datatypes and functions (including universe-polymorphic ones) in programmer-friendly and customisable forms. We expect that datatype-generic libraries built with our framework will be more attractive to the practical Agda programmer. As the elaborator reflection features used by our framework become more widespread, our design can be ported to other languages too.
Elegancy: Digitizing the Wisdom from Laboratories to the Cloud with Free No-Code Platform
iScience, July 2022
Chih-Wei Huang, Wei-Hsuan Chuang, Chung-Yen Lin, Shu-Hwa Chen
One of the top priorities in any laboratory is archiving experimental data in the most secure, efficient, and error-free way. This is especially important in chemical and biological research, where experiment records are more easily damaged. In addition, transcribing experiment results from paper to electronic devices is time-consuming and redundant. Therefore, we introduce an open-source, no-code electronic laboratory notebook, Elegancy, a cloud-based/standalone web service distributed as a Docker image. Elegancy fits all laboratories but is specially equipped with several features benefitting biochemical laboratories. It can be accessed via various web browsers, allowing researchers to upload photos or audio recordings directly from their mobile devices. Elegancy also contains a meeting arrangement module, audit/revision control, and a laboratory supply management system. We believe Elegancy could help the scientific research community gather evidence, share information, reorganize knowledge, and digitize laboratory work with greater ease and security.
A Cloud-Native Online Judge System
IEEE International Workshop on Digitalized Adaptive Learning & Immersive Technology Education, June 2022
Guan-Chen Pan, Pangfeng Liu, Jan-Jan Wu
Online judge systems are designed for the reliable evaluation of source code submitted by users. The system queues submitted code, then compiles and tests it, typically in a FIFO manner. However, we found that the cloud-native design of an online judge system is still a pending issue. Many existing open-source online judge systems are hard to deploy and scale due to tight coupling with specific environments and vague boundaries in their system architectures. In addition, these online judge systems hardly provide support for resource scheduling, as they are usually more concerned with the homogeneity of the resources. In this research, we design and develop a cloud-native online judge system that is able to (1) be built and run stably in dynamic environments, (2) scale vertically and horizontally with the workload, and (3) perform resource scheduling over CPUs and GPUs. Furthermore, this research also analyzes the consequences of adopting some modern, widely advocated design approaches and technologies: microservice architecture design, event-driven architecture, domain-driven design, and containerization technologies such as Docker and Kubernetes.
Leveraging Write Heterogeneity of Phase Change Memory on Supporting Self-balancing Binary Tree
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), June 2022
Che-Wei Chang, Chun-Feng Wu, Yuan-Hao Chang, Ming-Chang Yang, and Chieh-Fu Chang
With the increasing demand of massive/big-data applications, non-volatile memory (NVM), such as phase change memory (PCM), has become a promising candidate to replace DRAM because of its low leakage power, non-volatility, and high density. However, most existing memory read/write-intensive algorithms and data structures are not aware of the NVM write heterogeneity in terms of both energy consumption and latency. In particular, self-balancing binary search trees, which are widely used to manage massive data in the big-data era, were designed without consideration of NVM characteristics; thus, the multiple rotations of the tree balancing process can degrade memory performance. This work explores the relations among nodes, analyzes tree operations, and redesigns the node indexing and address mapping to reduce the tree management overhead on single-level cell (SLC) NVM by decreasing the number of bit flips incurred by tree rotations. When multi-level cell (MLC) NVM is included, our address mapping algorithm is developed to reduce the total energy consumption and latency with consideration of the heterogeneous write operations of different cell states. Experimental results show that our solution significantly outperforms the original implementation of a self-balancing binary search tree when the amount of data is large.
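The write cost being minimized here is driven by bit flips. As a minimal illustration of the metric (not the paper's mapping algorithm), the flips incurred when one stored word is overwritten with another are simply the Hamming distance between the two bit patterns:

```python
def bit_flips(old: int, new: int) -> int:
    """Number of cells whose stored bit changes when `old` is overwritten
    with `new`: the Hamming distance of the two words."""
    return bin(old ^ new).count("1")

# Rewriting a field with a similar value costs far fewer flips.
same = bit_flips(0b1010, 0b1010)      # nothing changes
worst = bit_flips(0b1010, 0b0101)     # every bit changes
```

A rotation-heavy rebalance rewrites many pointer fields; an index/mapping scheme that keeps successive values bitwise similar lowers this count, and on SLC NVM that translates directly into energy and latency savings.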
Multi-aspect examinations of possible alternative mappings of identified variant peptides: a case study on the HEK293 cell line
ACS Omega, May 2022
Wai-Kok Choong and Ting-Yi Sung*
Adopting a proteogenomics approach to validate single nucleotide variation events by identifying the corresponding single amino acid variant peptides from mass spectrometry (MS)-based proteomic data facilitates translational and clinical research. Although variant peptides are usually identified from MS data with a stringent false discovery rate (FDR), FDR control can fail to eliminate dubious results caused by several issues; thus, post-examination to eliminate dubious results is required. However, comprehensive post-examinations of identification results are still lacking. Therefore, we propose a framework of three bottom-up levels (the peptide-spectrum match, peptide, and variant event levels) that consists of eleven rigorous examinations from the MS perspective to further confirm the reliability of variant events. As a proof of concept and to show feasibility, we demonstrate the eleven examinations on the variant peptides identified from an HEK293 cell line data set, where various database search strategies were applied to maximize the number of identified variant PSMs with an FDR <1% for post-examination. The results showed that the FDR criterion alone is insufficient to validate identified variant peptides and that the eleven post-examinations can reveal low-confidence variant events detected by shotgun proteomics experiments. Therefore, we suggest that post-examinations of identified variant events based on the proposed framework are necessary in proteogenomics studies.
How to Enable Index Scheme for Reducing the Writing Cost of DNA Storage on Insertion and Deletion
ACM Transactions on Embedded Computing Systems (TECS), May 2022
Yi-Syuan Lin, Yu-Pei Liang, Tseng-Yi Chen, Yuan-Hao Chang, Shuo-Han Chen, Hsin-Wen Wei, and Wei-Kuan Shih
Recently, the need to store digital data has been growing rapidly; however, conventional storage media cannot satisfy these huge demands. Fortunately, thanks to developments in biotechnology, storing digital data in deoxyribonucleic acid (DNA) has become possible in recent years. Furthermore, because of its attractive features (e.g., high storage density, long-term durability, and stability), DNA storage has been regarded as a potential alternative medium for storing massive digital data in the future. Nevertheless, reading and writing digital data over DNA requires a series of extremely time-consuming processes (i.e., DNA sequencing and DNA synthesis). More specifically, of the two costs, the writing cost is the predominant cost of a DNA data storage system. Therefore, to enable efficient DNA storage, this paper proposes an index management scheme for reducing the number of accesses to DNA storage. Additionally, this paper introduces a new DNA data encoding format with VERA (Version Editing Recovery Approach) to reduce the total written bits when inserting and deleting data. To the best of our knowledge, this is the first work to provide a total data management solution for DNA storage. According to the experimental results, the proposed design with VERA can reduce the cost by 77% and improve the performance by 71% compared to append-only methods.
Sparse Trigger Pattern Guided Deep Learning Model Watermarking
ACM Workshop on Information Hiding and Multimedia Security (ACM IH&MMSec), June 2022
Watermarking neural networks (NNs) for ownership protection has received considerable attention recently. Resisting both model pruning and fine-tuning is commonly used to evaluate the robustness of a watermarked NN. However, the rationale behind such robustness is still relatively unexplored in the literature. In this paper, we study this problem and propose a so-called sparse trigger pattern (STP)-guided deep learning model watermarking method. We provide empirical evidence to show that trigger patterns are able to make the distribution of model parameters compact, and thus exhibit interpretable resilience to model pruning and fine-tuning. We find that the effect of STP can also be technically interpreted as dropout in the first layer. Extensive experiments demonstrate the robustness of our method.
Galectin-1 orchestrates an inflammatory tumor-stroma crosstalk in hepatoma by enhancing TNFR1 protein stability and signaling in carcinoma-associated fibroblasts
Oncogene, April 2022
Yao-Tsung Tsai, Chih-Yi Li, Yen-Hua Huang, Te-Sheng Chang, Chung-Yen Lin, Chia-Hsien Chuang, Chih-Yang Wang, Gangga Anuraga, Tzu-Hao Chang, Tsung-Chieh Shih, Zu-Yau Lin, Yuh-Ling Chen, Ivy Chung, Kuen-Haur Lee, Che-Chang Chang, Shian-Ying Sung, Kai-Huei Yang, Wan-Lin Tsui, Chee-Voon Yap, Ming-Heng Wu
Most cases of hepatocellular carcinoma (HCC) arise within a fibrotic microenvironment, where hepatic stellate cells (HSCs) and carcinoma-associated fibroblasts (CAFs) are critical components of HCC progression. Therefore, CAF normalization could be a feasible therapy for HCC. Galectin-1 (Gal-1), a β-galactoside-binding lectin, is critical for HSC activation and liver fibrosis. However, few studies have evaluated the pathological role of Gal-1 in the HCC stroma, and its role in hepatic CAFs is unclear. Here we showed that Gal-1 was mainly expressed in the HCC stroma rather than in cancer cells. High expression of Gal-1 correlated with CAF markers and poor prognoses of HCC patients. In co-culture systems, targeting Gal-1 in CAFs or HSCs using small hairpin (sh)RNAs or a therapeutic inhibitor (LLS30) downregulated plasminogen activator inhibitor-2 (PAI-2) production, which suppressed the cancer stem-like cell properties and invasion ability of HCC in a paracrine manner. The Gal-1-targeting effect was mediated by increased a disintegrin and metalloprotease 17 (ADAM17)-dependent TNF-receptor 1 (TNFR1) shedding/cleavage, which inhibited the TNF-α → JNK → c-Jun/ATF2 signaling axis of pro-inflammatory gene transcription. Silencing Gal-1 in CAFs inhibited CAF-augmented HCC progression and reprogrammed the CAF-mediated inflammatory responses in a co-injection xenograft model. Taken together, the findings uncover a crucial role of Gal-1 in CAFs that orchestrates an inflammatory CSC niche supporting HCC progression and demonstrate that targeting Gal-1 could be a potential therapy for fibrosis-related HCC.
A metagenomics study of hexabromocyclododecane degradation with a soil microbial community
Journal of Hazardous Materials, May 2022
Yi-Jie Li, Chia-Hsien Chuang, Wen-Chih Cheng, Shu-Hwa Chen, Wen-Ling Chen, Yu-Jie Lin, Chung-Yen Lin*, Yang-hsin Shih*
Hexabromocyclododecanes (HBCDs) are globally prevalent and persistent organic pollutants (POPs) listed by the Stockholm Convention in 2013. They have been detected in many environmental media, from waterbodies to Plantae, and even in the human body. Due to their highly bioaccumulative nature, they pose an urgent public health issue. Here, we demonstrate that the indigenous microbial community in agricultural soil in Taiwan could decompose HBCDs with no additional carbon source incentive. The degradation kinetics reached 0.173 day⁻¹ after the first treatment and 0.104 day⁻¹ after the second exposure. With additional C-sources, the rate constants decreased to 0.054–0.097 day⁻¹. Hydroxylic debromination metabolites and ring-cleavage long-chain alkane metabolites were identified, supporting the potential metabolic pathways utilized by the soil microbial communities. The metagenome established by Nanopore sequencing showed significant compositional alteration in the soil microbial community after the HBCD treatment. After ranking, comparing relative abundances, and performing network analyses, several novel bacterial taxa were identified as contributing to HBCD biotransformation, including Herbaspirillum, Sphingomonas, Brevundimonas, Azospirillum, Caulobacter, and Microvirga, through halogenated/aromatic compound degradation, glutathione-S-transferase activity, and hydrolase activity. We present a compelling and applicable approach combining metagenomics, degradation kinetics, and metabolomics strategies, which allowed us to decipher the natural attenuation and remediation mechanisms of HBCDs.
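Rate constants in day⁻¹, as reported above, correspond to first-order kinetics, C(t) = C₀·e^(−kt), so k can be recovered as the least-squares slope of ln(C/C₀) versus time. The sketch below uses synthetic concentrations, not the study's measurements:

```python
import math

def first_order_k(times, concs):
    """Least-squares slope of ln(C/C0) vs t, i.e. the first-order
    rate constant k (in day^-1) for C(t) = C0 * exp(-k * t)."""
    c0 = concs[0]
    ys = [math.log(c / c0) for c in concs]
    n = len(times)
    sx, sy = sum(times), sum(ys)
    sxx = sum(t * t for t in times)
    sxy = sum(t * y for t, y in zip(times, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -slope

# Synthetic decay curve with k = 0.15 day^-1 (illustrative only).
times = [0, 2, 4, 8, 16]
concs = [100 * math.exp(-0.15 * t) for t in times]
k_fit = first_order_k(times, concs)
```

With noisy field measurements the same fit applies; the decrease from 0.173 to 0.054–0.097 day⁻¹ with added C-sources would show up directly as a shallower slope.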
An integrated metadatabase of 16S rRNA gene amplicon for microbiome taxonomic classification
Bioinformatics, March 2022
Chun-Chieh Liao, Po-Ying Fu, Chih-Wei Huang, Chia-Hsien Chuang, Yun Yen, Chung-Yen Lin*, Shu-Hwa Chen*
Motivation: Taxonomic classification of 16S ribosomal RNA gene amplicons is an efficient and economic approach in microbiome analysis. The 16S rRNA sequence databases used in downstream bioinformatic pipelines, such as SILVA, RDP, EzBioCloud, and HOMD, have limitations in either sequence redundancy or delays in recruiting new sequences. To improve 16S rRNA gene-based taxonomic classification, we systematically merged these widely used databases and a collection of novel sequences into an integrated resource. Results: MetaSquare version 1.0 is an integrated 16S rRNA sequence database. It is composed of more than six million sequences and improves taxonomic classification resolution for both long-read and short-read methods. Availability: Accessible at https://hub.docker.com/r/lsbnb/metasquare_db and https://github.com/lsbnb/MetaSquare.
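The merging step can be pictured as deduplicating records by sequence across source databases. The sketch below is a deliberate simplification (MetaSquare's actual curation and taxonomy reconciliation are more involved, and the sequences shown are toy strings):

```python
def merge_databases(*dbs):
    """Merge several {sequence: taxonomy} collections, keeping the first
    taxonomy label seen for each unique sequence (redundancy removal)."""
    merged = {}
    for db in dbs:
        for seq, tax in db.items():
            merged.setdefault(seq, tax)
    return merged

# Toy records standing in for entries from two source databases.
source_a = {"ACGTACGT": "g__Escherichia", "TTGGCCAA": "g__Bacteroides"}
source_b = {"ACGTACGT": "g__Escherichia", "GGGGCCCC": "g__Prevotella"}
merged = merge_databases(source_a, source_b)
```

The sequence shared by both sources collapses into one record, which is the redundancy problem the integrated database addresses at scale.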
Peptide-Based Drug Predictions for Cancer Therapy Using Deep Learning
Pharmaceuticals, March 2022
Yih-Yun Sun†, Tzu-Tang Lin†, Wen-Chih Cheng, I-Hsuan Lu, Chung-Yen Lin* and Shu-Hwa Chen*
Anticancer peptides (ACPs), which are selective and toxic to cancer cells, are promising new anticancer drugs. Identifying new ACPs is time-consuming and expensive because evaluating every candidate's anticancer ability is costly. To reduce the cost of ACP drug development, we collected the most up-to-date ACP data to train a convolutional neural network (CNN) with a peptide sequence encoding method for initial in silico evaluation. Here we introduce PC6, a novel protein-encoding method that converts a peptide sequence into a computational matrix representing six physicochemical properties of each amino acid. By integrating the data, encoding method, and deep learning model, we developed AI4ACP, a user-friendly web-based ACP distinguisher that can predict the anticancer property of query peptides and promote the discovery of peptides with anticancer activity. The experimental results demonstrate that AI4ACP, trained using the new ACP collection, outperforms existing ACP predictors. Five-fold cross-validation of AI4ACP on the new collection also showed that the model performs stably at a high accuracy of around 0.89 without overfitting. Using AI4ACP, users can easily accomplish an early-stage evaluation of unknown peptides and quickly select potential candidates for testing of their anticancer activities.
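A PC6-style encoding maps each residue to six physicochemical property values, yielding an L × 6 matrix per peptide for the CNN input. The sketch below uses placeholder property values and covers only three residues; it is not the actual PC6 table:

```python
import numpy as np

# Placeholder table: six property values per residue.
# The numbers are illustrative only, NOT the real PC6 values.
PROPS = {
    "A": [1.8, 0.0, 89.1, 6.0, 0.62, 0.0],
    "K": [-3.9, 1.0, 146.2, 9.7, -1.10, 0.0],
    "D": [-3.5, -1.0, 133.1, 2.8, -0.90, 0.0],
}

def encode_pc6(peptide, props=PROPS):
    """Encode a peptide as an L x 6 matrix of per-residue properties."""
    return np.array([props[aa] for aa in peptide])

m = encode_pc6("AKD")  # 3-residue peptide -> 3 x 6 matrix
```

Each row is one residue, each column one property, so peptides of different lengths become variable-length 6-channel sequences suitable for convolution.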