Outcomes
As DSRI’s research progresses, the Institute is committed to sharing outcomes that show how our work positively impacts the digital ecosystem. These outcomes include peer-reviewed publications, keynote presentations, and other materials.
Chadda, A., McGregor, S., Hostetler, J., & Brennen, A. (2024, March). AI Evaluation Authorities: A Case Study Mapping Model Audits to Persistent Standards. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 21, pp. 23035-23040).
Guha, S., Khan, F. A., Stoyanovich, J., & Schelter, S. (2024). Automated data cleaning can hurt fairness in machine learning-based decision making. IEEE Transactions on Knowledge and Data Engineering.
Kawakami, A., Guerdan, L., Cheng, Y., Glazko, K., Lee, M., Carter, S., ... & Holstein, K. (2023, November). Training towards critical use: Learning to situate AI predictions relative to human knowledge. In Proceedings of the ACM Collective Intelligence Conference (pp. 63-78).
Rastogi, C., Leqi, L., Holstein, K., & Heidari, H. (2023, November). A Taxonomy of Human and ML Strengths in Decision-Making to Investigate Human-ML Complementarity. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (Vol. 11, No. 1, pp. 127-139).
Guerdan, L., Coston, A., Wu, S., & Holstein, K. (2023, December). Policy Comparison Under Unmeasured Confounding. In NeurIPS 2023 Workshop on Regulatable ML.
Ghosal, S. S., & Li, Y. (2024). Are vision transformers robust to spurious correlations? International Journal of Computer Vision, 132(3), 689-709.
Loke, L. Y., Barsoum, D. R., Murphey, T. D., & Argall, B. D. (2023, September). Characterizing eye gaze for assistive device control. In 2023 International Conference on Rehabilitation Robotics (ICORR) (pp. 1-6). IEEE.
Purves, D., & Jenkins, R. (2023). A Machine Learning Evaluation Framework for Place-based Algorithmic Patrol Management. Available at SSRN.
McGregor, S. (2023). A Scaled Multiyear Responsible Artificial Intelligence Impact Assessment. Computer, 56(8), 20-27.
Guerdan, L., Coston, A., Holstein, K., & Wu, Z. S. (2023, June). Counterfactual prediction under outcome measurement error. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 1584-1598).
Bell, A., Bynum, L., Drushchak, N., Zakharchenko, T., Rosenblatt, L., & Stoyanovich, J. (2023, June). The possibility of fairness: Revisiting the impossibility theorem in practice. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 400-422).
Guerdan, L., Coston, A., Wu, Z. S., & Holstein, K. (2023, June). Ground(less) truth: A causal framework for proxy labels in human-algorithm decision-making. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 688-704).
Holstein, K., De-Arteaga, M., Tumati, L., & Cheng, Y. (2023). Toward supporting perceptual complementarity in human-AI collaboration via reflection on unobservables. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1), 1-20.
Ackerman, R., Icobelli, F., & Balyan, R. (2022, December). Intelligent Tutoring Systems in Healthcare for Low Literacy Population: A Concise Review. In 2022 International Conference on Computational Science and Computational Intelligence (CSCI) (pp. 1827-1829). IEEE.
Rhea, A. K., Markey, K., D’Arinzo, L., Schellmann, H., Sloane, M., Squires, P., ... & Stoyanovich, J. (2022). An external stability audit framework to test the validity of personality prediction in AI hiring. Data Mining and Knowledge Discovery, 36(6), 2153-2193.
Jenkins, R., Hammond, K., Spurlock, S., & Gilpin, L. (2023). Separating facts and evaluation: motivation, account, and learnings from a novel approach to evaluating the human impacts of machine learning. AI & SOCIETY, 38(4), 1415-1428.