• Explainable Artificial Intelligence (XAI)


    I. Explainable Artificial Intelligence (XAI)

    • Molnar, C. (2020). Interpretable machine learning: A guide for making black box models explainable. https://christophm.github.io/interpretable-ml-book/

    • Royal Society. (2019). Explainable AI: The basics. Policy briefing. https://royalsociety.org/-/media/policy/projects/explainable-ai/AI-and-interpretability-policy-briefing.pdf

    • Gunning, D., & Aha, D. (2019). DARPA’s explainable artificial intelligence (XAI) program. AI Magazine, 40(2), 44-58. https://doi.org/10.1609/aimag.v40i2.2850

    • Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine, 38(3), 50-57. https://doi.org/10.1609/aimag.v38i3.2741

    • Murdoch, W. J., Singh, C., Kumbier, K., Abbasi-Asl, R., & Yu, B. (2019). Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences, 116(44), 22071-22080. https://doi.org/10.1073/pnas.1900654116

    • Das, A., & Rad, P. (2020). Opportunities and challenges in explainable artificial intelligence (XAI): A survey. arXiv preprint arXiv:2006.11371. https://doi.org/10.48550/arXiv.2006.11371

    • Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., ... & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115. https://doi.org/10.1016/j.inffus.2019.12.012

    II. An Introduction to XAI Methods

    • Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30. https://doi.org/10.5555/3295222.3295230 (See the usage sketch at the end of this section.)

    • Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., ... & Lee, S. I. (2020). From local explanations to global understanding with explainable AI for trees. Nature Machine Intelligence, 2(1), 56-67. https://doi.org/10.1038/s42256-019-0138-9

    • Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144). https://doi.org/10.1145/2939672.2939778

    • Sundararajan, M., Taly, A., & Yan, Q. (2017, July). Axiomatic attribution for deep networks. In International Conference on Machine Learning (pp. 3319-3328). PMLR. https://doi.org/10.5555/3305890.3306024

    • Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (pp. 618-626). https://doi.org/10.1109/ICCV.2017.74

    • Covert, I., Lundberg, S. M., & Lee, S. I. (2020). Understanding global feature contributions with additive importance measures. Advances in Neural Information Processing Systems, 33, 17212-17223. https://doi.org/10.5555/3495724.3497168
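
    Lundberg & Lee (2017) unify several attribution methods under Shapley values: each feature receives a share of the difference between a prediction and the model's average output, and the shares sum exactly to that difference. Below is a minimal sketch of the tree-model workflow, assuming the `shap` and `xgboost` Python packages; the synthetic data and variable names are illustrative only, not taken from the papers.

```python
# A minimal SHAP sketch, assuming the `shap` and `xgboost` packages;
# the synthetic data below is illustrative only.
import numpy as np
import shap
import xgboost

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # four toy features
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)

model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# TreeExplainer uses the tree-specific algorithm of Lundberg et al. (2020)
# to compute exact Shapley values: one attribution per feature per row.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)           # shape (500, 4)

# Local accuracy: per-row attributions plus the expected value recover the
# prediction; averaging |SHAP| over rows gives a global importance ranking.
print(shap_values.sum(axis=1)[:3] + explainer.expected_value)
print(np.abs(shap_values).mean(axis=0))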

    III. XAI and Geography & IV. Opportunities and Challenges

    • Fotheringham, A. S. (1997). Trends in quantitative methods I: Stressing the local. Progress in Human Geography, 21(1), 88-96. https://doi.org/10.1191/030913297676693207

    • Fotheringham, A. S., Brunsdon, C., & Charlton, M. (2003). Geographically Weighted Regression: The Analysis of Spatially Varying Relationships. John Wiley & Sons.

    • Fotheringham, A. S., Oshan, T. M., & Li, Z. (2023). Multiscale Geographically Weighted Regression: Theory and Practice (1st ed.). CRC Press. (See the GWR sketch at the end of this section.)

    • Li, Z. (2022). Extracting spatial effects from machine learning model using local interpretation method: An example of SHAP and XGBoost. Computers, Environment and Urban Systems, 96, 101845. https://doi.org/10.1016/j.compenvurbsys.2022.101845

    • Parsa, A. B., Movahedi, A., Taghipour, H., Derrible, S., & Mohammadian, A. K. (2020). Toward safer highways, application of XGBoost and SHAP for real-time accident detection and feature analysis. Accident Analysis & Prevention, 136, 105405. https://doi.org/10.1016/j.aap.2019.105405

    • Hsu, C. Y., & Li, W. (2023). Explainable GeoAI: Can saliency maps help interpret artificial intelligence’s learning process? An empirical study on natural feature detection. International Journal of Geographical Information Science, 37(5), 963-987. https://doi.org/10.1080/13658816.2023.2191256

    • Xing, J., & Sieber, R. (2023). The challenges of integrating explainable artificial intelligence into GeoAI. Transactions in GIS. https://doi.org/10.1111/tgis.13045
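
    Geographically weighted regression (Fotheringham et al., 2003; 2023) is the local-modelling tradition that much of the XAI-and-geography discussion is measured against: it fits a distance-weighted regression at every location, so the coefficients themselves vary over space. Below is a minimal sketch, assuming the `mgwr` Python package; the synthetic data (a coefficient that drifts west-to-east) and all names are illustrative only.

```python
# A minimal GWR sketch, assuming the `mgwr` package; the synthetic
# spatially varying coefficient below is illustrative only.
import numpy as np
from mgwr.gwr import GWR
from mgwr.sel_bw import Sel_BW

rng = np.random.default_rng(0)
n = 200
u, v = rng.uniform(0, 10, n), rng.uniform(0, 10, n)   # point locations
coords = list(zip(u, v))
X = rng.normal(size=(n, 2))                           # two covariates
beta1 = 1.0 + 0.3 * u                                 # effect drifts with longitude
y = (beta1 * X[:, 0] + 0.5 * X[:, 1]
     + rng.normal(scale=0.1, size=n)).reshape(-1, 1)

bw = Sel_BW(coords, y, X).search()        # data-driven bandwidth selection
results = GWR(coords, y, X, bw).fit()     # one weighted regression per location
print(results.params.shape)               # (n, 3): local intercept + two local slopes
```

    Li (2022) connects the two traditions: coordinates are included as features in an XGBoost model and their SHAP values are read as an estimated spatial effect, which can then be compared against GWR-style coefficient surfaces.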
