Canwen Xu
Boson AI
Verified email at ucsd.edu - Homepage
Title
Cited by
Year
🤗 Transformers: State-of-the-art natural language processing
T Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, ...
EMNLP 2020 (Demo), 38-45, 2020
13769* · 2020
Multitask prompted training enables zero-shot task generalization
V Sanh, A Webson, C Raffel, SH Bach, L Sutawika, Z Alyafeai, A Chaffin, ...
ICLR 2022, 2021
1339 · 2021
Bloom: A 176b-parameter open-access multilingual language model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
1289 · 2023
🤗 Datasets: A Community Library for Natural Language Processing
Q Lhoest, AV del Moral, Y Jernite, A Thakur, P von Platen, S Patil, ...
EMNLP 2021 (Demo), 2021
426* · 2021
BERT Loses Patience: Fast and Robust Inference with Early Exit
W Zhou, C Xu, T Ge, J McAuley, K Xu, F Wei
NeurIPS 2020, 2020
273 · 2020
PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts
SH Bach, V Sanh, ZX Yong, A Webson, C Raffel, NV Nayak, A Sharma, ...
ACL 2022 (Demo), 2022
257 · 2022
Bert-of-theseus: Compressing bert by progressive module replacing
C Xu, W Zhou, T Ge, F Wei, M Zhou
EMNLP 2020, 7859-7869, 2020
205 · 2020
Baize: An open-source chat model with parameter-efficient tuning on self-chat data
C Xu, D Guo, N Duan, J McAuley
arXiv preprint arXiv:2304.01196, 2023
196 · 2023
BERT learns to teach: Knowledge distillation with meta learning
W Zhou, C Xu, J McAuley
ACL 2022, 7037-7049, 2022
68 · 2022
A survey on model compression and acceleration for pretrained language models
C Xu, J McAuley
AAAI 2023, 2023
62* · 2023
LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrieval
C Xu, D Guo, N Duan, J McAuley
ACL 2022 (Findings), 2022
42 · 2022
Beyond Preserved Accuracy: Evaluating Loyalty and Robustness of BERT Compression
C Xu, W Zhou, T Ge, K Xu, J McAuley, F Wei
EMNLP 2021, 2021
42 · 2021
StarCoder 2 and The Stack v2: The Next Generation
A Lozhkov, R Li, LB Allal, F Cassano, J Lamy-Poirier, N Tazi, A Tang, ...
arXiv preprint arXiv:2402.19173, 2024
39 · 2024
Small models are valuable plug-ins for large language models
C Xu, Y Xu, S Wang, Y Liu, C Zhu, J McAuley
arXiv preprint arXiv:2305.08848, 2023
36 · 2023
Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-Encoders
Y Duan, C Xu, J Pei, J Han, C Li
ACL 2020, 253–262, 2020
36 · 2020
Repobench: Benchmarking repository-level code auto-completion systems
T Liu, C Xu, J McAuley
arXiv preprint arXiv:2306.03091, 2023
34 · 2023
DLocRL: A deep learning pipeline for fine-grained location recognition and linking in tweets
C Xu, J Li, X Luo, J Pei, C Li, D Ji
The Web Conference (WWW) 2019, 3391-3397, 2019
33 · 2019
A survey on dynamic neural networks for natural language processing
C Xu, J McAuley
EACL 2023 (Findings), 2023
25 · 2023
LongCoder: A Long-Range Pre-trained Language Model for Code Completion
D Guo, C Xu, N Duan, J Yin, J McAuley
ICML 2023, 2023
22 · 2023
Automatic Multi-Label Prompting: Simple and Interpretable Few-Shot Classification
H Wang, C Xu, J McAuley
NAACL 2022, 2022
22 · 2022
Articles 1–20