Chandan Singh
Senior Researcher, Microsoft Research
Definitions, methods, and applications in interpretable machine learning
WJ Murdoch*, C Singh*, K Kumbier, R Abbasi-Asl, B Yu
🔍🌳 PNAS, 2019
Beyond the imitation game: quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
🤖 TMLR, 2022
Interpretations are useful: penalizing explanations to align neural networks with prior knowledge
L Rieger, C Singh, WJ Murdoch, B Yu
🔍 ICML, 2020
Large scale image segmentation with structured loss based deep learning for connectome reconstruction
J Funke*, F Tschopp*, W Grisaitis, A Sheridan, C Singh, S Saalfeld, ...
🧠 IEEE TPAMI, 2018
Hierarchical interpretations for neural network predictions
C Singh*, WJ Murdoch*, B Yu
🔍 ICLR, 2019
Curating a COVID-19 data repository and forecasting county-level death counts in the United States
N Altieri, RL Barter, J Duncan, R Dwivedi, ..., C Singh*, ..., B Yu*
🌳💊 Harvard Data Science Review, 2021
NL-Augmenter: a framework for task-sensitive natural language augmentation
KD Dhole, V Gangal, S Gehrmann, A Gupta, Z Li, S Mahamood, ...
🤖 NEJLT, 2021
Adaptive wavelet distillation from neural networks through interpretations
W Ha, C Singh, F Lanusse, S Upadhyayula, B Yu
🔍🌳 NeurIPS, 2021
Hierarchical shrinkage: improving the accuracy and interpretability of tree-based methods
A Agarwal*, YS Tan*, O Ronen, C Singh, B Yu
🔍🌳 ICML, 2022
Explaining data patterns in natural language with language models
C Singh*, JX Morris*, J Aneja, AM Rush, J Gao
🔍🤖 EMNLP workshop, 2023
Fast interpretable greedy-tree sums (FIGS)
YS Tan*, C Singh*, K Nasseri*, A Agarwal*, J Duncan, O Ronen, ...
🔍🌳💊 arXiv preprint, 2022
Revisiting complexity and the bias-variance tradeoff
R Dwivedi*, C Singh*, B Yu, MJ Wainwright
JMLR, 2021
Augmenting interpretable models with LLMs during training
C Singh, A Askari, R Caruana, J Gao
🔍🤖🌳 Nature Communications, 2023
imodels: a Python package for fitting interpretable models
C Singh*, K Nasseri*, YS Tan, T Tang, B Yu
🔍🌳 JOSS, 2021
A consensus layer V pyramidal neuron can sustain interpulse-interval coding
C Singh, WB Levy
🧠 PLOS ONE, 2017
Explaining black box text modules in natural language with language models
C Singh*, AR Hsu*, R Antonello, S Jain, AG Huth, B Yu, J Gao
🔍🤖🧠 NeurIPS workshop, 2023
Rethinking interpretability in the era of large language models
C Singh, JP Inala, M Galley, R Caruana, J Gao
🔍🤖🌳 arXiv preprint, 2024
Self-verification improves few-shot clinical information extraction
Z Gero*, C Singh*, H Cheng, T Naumann, M Galley, J Gao, H Poon
🔍🤖💊 ICML workshop, 2023
Disentangled attribution curves for interpreting random forests and boosted trees
S Devlin, C Singh, WJ Murdoch, B Yu
🔍🌳 arXiv preprint, 2019
Linearization of excitatory synaptic integration at no extra cost
D Morel, C Singh, WB Levy
🧠 Journal of Computational Neuroscience, 2018