Gemini: a family of highly capable multimodal models G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ... arXiv preprint arXiv:2312.11805, 2023 | 1490 | 2023 |
Self-Instruct: Aligning Language Models with Self-Generated Instructions Y Wang, Y Kordi, S Mishra, A Liu, NA Smith, D Khashabi, H Hajishirzi ACL, 2022 | 1391 | 2022 |
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ... TMLR, 2022 | 1073 | 2022 |
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering P Lu, S Mishra, T Xia, L Qiu, KW Chang, SC Zhu, O Tafjord, P Clark, ... NeurIPS, 2022 | 601 | 2022 |
Cross-task generalization via natural language crowdsourcing instructions S Mishra, D Khashabi, C Baral, H Hajishirzi ACL, 2021 | 587 | 2021 |
Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks Y Wang*, S Mishra*, P Alipoormolabashi, Y Kordi, A Mirzaei, A Naik, ... EMNLP, 2022 | 540* | 2022 |
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, ... arXiv preprint arXiv:2403.05530, 2024 | 358 | 2024 |
Reframing Instructional Prompts to GPTk's Language S Mishra, D Khashabi, C Baral, Y Choi, H Hajishirzi ACL, 2021 | 186 | 2021 |
Large language models cannot self-correct reasoning yet J Huang, X Chen, S Mishra, HS Zheng, AW Yu, X Song, D Zhou ICLR, 2023 | 172 | 2023 |
NumGLUE: A Suite of Fundamental yet Challenging Mathematical Reasoning Tasks S Mishra, A Mitra, N Varshney, B Sachdeva, P Clark, C Baral, A Kalyan ACL, 2022 | 92* | 2022 |
Lila: A Unified Benchmark for Mathematical Reasoning S Mishra, M Finlayson, P Lu, L Tang, S Welleck, C Baral, T Rajpurohit, ... EMNLP, 2022 | 88 | 2022 |
Instruction-following evaluation for large language models J Zhou, T Lu, S Mishra, S Brahma, S Basu, Y Luan, D Zhou, L Hou arXiv preprint arXiv:2311.07911, 2023 | 81 | 2023 |
Commonsense Reasoning with Implicit Knowledge in Natural Language P Banerjee*, S Mishra*, KK Pal*, A Mitra, C Baral AKBC, 2021 | 81* | 2021 |
In-BoXBART: Get Instructions into Biomedical Multi-Task Learning M Parmar, S Mishra, M Purohit, M Luo, MH Murad, C Baral NAACL, 2022 | 62 | 2022 |
Don't Blame the Annotator: Bias Already Starts in the Annotation Instructions M Parmar*, S Mishra*, M Geva, C Baral EACL (Outstanding Paper Award), 2022 | 57 | 2022 |
Investigating Selective Prediction Approaches Across Several Tasks in IID, OOD, and Adversarial Settings N Varshney, S Mishra, C Baral ACL, 2022 | 46 | 2022 |
InstructABSA: Instruction learning for aspect-based sentiment analysis K Scaria, H Gupta, S Goyal, SA Sawant, S Mishra, C Baral NAACL, 2023 | 38 | 2023 |
How FaR Are Large Language Models From Agents with Theory-of-Mind? P Zhou, A Madaan, SP Potharaju, A Gupta, KR McKee, A Holtzman, ... arXiv preprint arXiv:2310.03051, 2023 | 35 | 2023 |
Is a Question Decomposition Unit All We Need? P Patel, S Mishra, M Parmar, C Baral EMNLP, 2022 | 35 | 2022 |