| Title | Authors | Venue | Citations | Year |
|---|---|---|---|---|
| Sequicity: Simplifying Task-Oriented Dialogue Systems with Single Sequence-to-Sequence Architectures | W. Lei, X. Jin, M.-Y. Kan, Z. Ren, X. He, D. Yin | ACL 2018, pp. 1437–1447 | 385 | 2018 |
| Recurrent Event Network: Autoregressive Structure Inference over Temporal Knowledge Graphs | W. Jin, M. Qu, X. Jin, X. Ren | arXiv preprint arXiv:1904.05530 | 335 | 2019 |
| Contextualizing Hate Speech Classifiers with Post-Hoc Explanation | B. Kennedy, X. Jin, A. M. Davani, M. Dehghani, X. Ren | ACL 2020 | 157 | 2020 |
| Gradient-Based Editing of Memory Examples for Online Task-Free Continual Learning | X. Jin, A. Sadhu, J. Du, X. Ren | NeurIPS 2021 | 130* | 2021 |
| Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models | X. Jin, Z. Wei, J. Du, X. Xue, X. Ren | ICLR 2020 | 121 | 2019 |
| Dataless Knowledge Fusion by Merging Weights of Language Models | X. Jin, X. Ren, D. Preotiuc-Pietro, P. Cheng | ICLR 2023 | 98 | 2022 |
| Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora | X. Jin, D. Zhang, H. Zhu, W. Xiao, S.-W. Li, X. Wei, A. Arnold, X. Ren | NAACL 2022 | 98 | 2021 |
| On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning | X. Jin, F. Barbieri, B. Kennedy, A. M. Davani, L. Neves, X. Ren | NAACL 2021 | 72* | 2020 |
| Explicit State Tracking with Semi-Supervision for Neural Dialogue Generation | X. Jin, W. Lei, Z. Ren, H. Chen, S. Liang, Y. Zhao, D. Yin | CIKM 2018, pp. 1403–1412 | 58* | 2018 |
| Refining Language Models with Compositional Explanations | H. Yao, Y. Chen, Q. Ye, X. Jin, X. Ren | NeurIPS 2021, pp. 8954–8967 | 43* | 2021 |
| Learn Continually, Generalize Rapidly: Lifelong Knowledge Accumulation for Few-Shot Learning | X. Jin, B. Y. Lin, M. Rostami, X. Ren | EMNLP 2021 Findings | 42 | 2021 |
| Visually Grounded Continual Learning of Compositional Phrases | X. Jin, J. Du, A. Sadhu, R. Nevatia, X. Ren | EMNLP 2020 | 19* | 2020 |
| Overcoming Catastrophic Forgetting in Massively Multilingual Continual Learning | G. I. Winata, L. Xie, K. Radhakrishnan, S. Wu, X. Jin, P. Cheng, M. Kulkarni, et al. | ACL 2023 Findings | 17 | 2023 |
| What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement | X. Jin, X. Ren | ICML 2024 (Spotlight) | 2 | 2024 |
| Demystifying Forgetting in Language Model Fine-Tuning with Statistical Analysis of Example Associations | X. Jin, X. Ren | arXiv preprint arXiv:2406.14026 | | 2024 |

\* Citation counts marked with an asterisk are merged counts across multiple versions.