AI Lifecycle | Mapping Study 🎓

by Yuanhao Xie, Luís Cruz, Petra Heck, Jan Rellermeyer

This webpage presents the results of the mapping study submitted to the 1st Workshop on AI Engineering – Software Engineering for AI (WAIN’21), co-located with ICSE 2021. A replication package is available at . For more details, check the [preprint on arXiv]().

This mapping study analyzes publications that examine the lifecycle of artificial intelligence projects, covering the period from 2009 to May 2020. We use the digital libraries DBLP and Scopus to systematically collect these papers. The search terms are shown in the figure below, and the exact query can be found here.

(Figure: the search terms used in the systematic query.)
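
For readers who want to script the collection step, below is a minimal sketch of querying the public DBLP search API in Python. The search term is a hypothetical placeholder, not the study’s actual query (which is linked above).

```python
# Illustrative sketch only — not the study's exact pipeline.
# Queries the public DBLP search API for candidate publications.
import requests

query = "machine learning lifecycle"  # placeholder term, not the real query

resp = requests.get(
    "https://dblp.org/search/publ/api",
    params={"q": query, "format": "json", "h": 1000},  # h = max hits returned
    timeout=30,
)
resp.raise_for_status()

# When there are no matches, the "hit" key is absent, hence the default.
hits = resp.json()["result"]["hits"].get("hit", [])
for hit in hits:
    info = hit["info"]
    print(info.get("year"), "-", info.get("title"), "-", info.get("venue"))
```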

In total, we gather 416 publications that we map into 5 overarching categories:

- Risk Management
- Model Management
- Production
- Lifecycle Management
- Data Management

🔗 The dataset of publications is available as a CSV file.
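
For a quick start, here is a minimal sketch of loading and summarizing the dataset with pandas; the local filename `publications.csv` and the `category` column are assumptions, so check the CSV itself for the actual schema.

```python
# Minimal sketch: load the publication dataset and count papers per category.
# The filename and the "category" column are assumed, not the published schema.
import pandas as pd

df = pd.read_csv("publications.csv")
print(f"{len(df)} publications")
print(df["category"].value_counts())
```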

Risk Management

- Ozlati, Shabnam; Yampolskiy, Roman (2017). The Formalization of AI Risk Management and Safety Standards. AAAI

- Gundersen, Odd Erik; Kjensmo, Sigbjørn (2018). State of the Art: Reproducibility in Artificial Intelligence. AAAI

- Perrault, Andrew; Fang, Fei; Sinha, Arunesh; Tambe, Milind (2020). AI for Social Impact: Learning and Planning in the Data-to-Deployment Pipeline. AI Magazine

- Berscheid, Janelle; Roewer-Despres, Francois (2019). Beyond transparency: a proposed framework for accountability in decision-making AI systems. AI Matters

- Muthusamy, Vinod (2018). Towards Enterprise-Ready AI Deployments: Minimizing the Risk of Consuming AI Models in Business Applications. AI4I

- Zhang, B.H. (2018). Mitigating Unwanted Biases with Adversarial Learning. AIES

- Tan, S. (2018). Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation. AIES

- Shaw N.P. (2018). Towards Provably Moral AI Agents in Bottom-up Learning Frameworks. AIES

- Zhao J. (2018). Privacy-Preserving Machine Learning Based Data Analytics on Edge Devices. AIES

- Vasconcelos M. (2018). Modeling Epistemological Principles for Bias Mitigation in AI Systems: An Illustration in Hiring Decisions. AIES

- Sharma, Shubham (2019). CERTIFAI: Counterfactual Explanations for Robustness, Transparency, Interpretability, and Fairness of Artificial Intelligence models. AIES

- Zhang, Yunfeng; Bellamy, Rachel K. E.; Varshney, Kush R. (2020). Joint Optimization of AI Fairness and Utility: A Human-Centered Approach. AIES

- Leben, Derek (2020). Normative Principles for Evaluating Fairness in Machine Learning. AIES

- Muñoz-González, L. (2017). Towards poisoning of deep learning algorithms with back-gradient optimization. AISec

- Carlini, N. (2017). Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. AISec

- Rajkomar A., Hardt M., Howell M.D., Corrado G., Chin M.H. (2018). Ensuring fairness in machine learning to advance health equity. Annals of internal medicine

- Tomsett R. (2019). Model poisoning attacks against distributed machine learning systems. Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications

- Papernot, N. (2017). Practical Black-Box Attacks against Machine Learning. ASIACCS

- Zhang J. (2018). Protecting intellectual property of deep neural networks with watermarking. ASIACCS

- Varshney, K.R. (2017). On the Safety of Machine Learning: Cyber-Physical Systems, Decision Sciences, and Data Products. Big Data

- d’Alessandro, B. (2017). Conscientious Classification: A Data Scientist’s Guide to Discrimination-Aware Classification. Big Data

- Farroha J. (2019). Security analysis and recommendations for AI/ML enabled automated cyber medical systems. Big Data

- Veale, M.; Binns, R. (2017). Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society

- Ladia A. (2020). Privacy centric collaborative machine learning model training via blockchain. BLOCKCHAIN

- Huang, X. (2017). Safety Verification of Deep Neural Networks. CAV

- Liu, Y. (2017). Trojaning attack on neural networks. CAV

- Reith R.N. (2019). Efficiently stealing your machine learning models. CCS

- Veale, M. (2018). Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. CHI

- Hind, Michael; Houde, Stephanie; Martino, Jacquelyn; Mojsilovic, Aleksandra; Piorkowski, David; Richards, John T.; Varshney, Kush R. (2019). Experiences with Improving the Transparency of AI Models and Services. CHI

- Holstein, Kenneth; Vaughan, Jennifer Wortman; Daumé III, Hal; Dudík, Miroslav; Wallach, Hanna M. (2019). Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need? CHI

- Chouldechova, Alexandra; Roth, Aaron (2018). The Frontiers of Fairness in Machine Learning. Commun. ACM

- Adebayo, J.A. (2016). FairML: ToolBox for diagnosing bias in predictive modeling. CP

- Yu Y., Liu X., Chen Z. (2018). Attacks and defenses towards machine learning based systems. CSAE

- Agarwal, A. (2018). A reductions approach to fair classification. CSUR

- Moosavi-Dezfooli, S.M. (2016). DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks. CVPR

- Eykholt, K. (2018). Robust Physical-World Attacks on Deep Learning Models. CVPR

- Berrar, Daniel P. (2017). On the Jeffreys-Lindley Paradox and the Looming Reproducibility Crisis in Machine Learning. DSAA

- Antunes, Nuno; Balby, Leandro; Figueiredo, Flavio; Lourenço, Nuno; Meira Jr., Wagner; Santos, Walter (2018). Fairness and Transparency of Machine Learning for Trustworthy Cloud Services. DSN

- Biggio, B. (2013). Evasion attacks against machine learning at test time. ECML PKDD

- Bacciu D., Biggio B., Lisboa P.J.G., Martín J.D., Oneto L., Vellido A. (2019). Societal issues in machine learning: When learning from data is not enough. ESANN

- Ducuing C., Oneto L., Canepa R. (2019). Fairness and accountability of machine learning models in railway market: Are applicable railway laws up to regulate them? ESANN

- Galhotra, S. (2017). Fairness testing: testing software for discrimination. ESEC/FSE

- Aggarwal, Aniya (2019). Black box fairness testing of machine learning models. ESEC/SIGSOFT FSE

- Zhao H., Liang J., Yin X., Yang L., Yang P., Wang Y. (2018). Domain-specific modelware: To make the machine learning model reusable and reproducible. ESEM

- Papernot, N. (2018). SoK: Security and Privacy in Machine Learning. EuroS&P

- Selbst, A.D. (2019). Fairness and abstraction in sociotechnical systems. FAccT

- Friedler, S.A. (2019). A comparative study of fairness-enhancing interventions in machine learning. FAT

- Fallon, Corey K.; Blaha, Leslie M. (2018). Improving Automation Transparency: Addressing Some of Machine Learning’s Unique Challenges. HCI

- Perrault, Andrew; Fang, Fei; Sinha, Arunesh; Tambe, Milind (2018). Transparency in Fair Machine Learning: the Case of Explainable Recommender Systems. HCI

- Isakov, Mihailo; Gadepally, Vijay; Gettings, Karen M.; Kinsy, Michel A. (2019). Survey of Attacks and Defenses on Edge-Deployed Neural Networks. HPEC

- Zhou, Jianlong; Chen, Fang (2018). 2D Transparency Space - Bring Domain Users and Machine Learning Experts Together. Human and Machine Learning

- Bellamy, Rachel K. E. (2018). AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias. IBM Journal of Research and Development

- Bellamy, Rachel K. E.; Dey, Kuntal; Hind, Michael; Hoffman, Samuel C.; Houde, Stephanie; Kannan, Kalapriya; Lohia, Pranay; Martino, Jacquelyn; Mehta, Sameep; Mojsilovic, Aleksandra; Nagar, Seema; Ramamurthy, Karthikeyan Natesan; Richards, John T.; Saha, Diptikalyan; Sattigeri, Prasanna; Singh, Moninder; Varshney, Kush R.; Zhang, Yunfeng (2019). AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM Journal of Research and Development

- Tolan S. (2019). Why machine learning may lead to unfairness: Evidence from risk assessment for juvenile justice in Catalonia. ICAIL

- Wang G. (2019). Adversarial Watermarking to Attack Deep Neural Networks. ICASSP

- Bore N.K. (2019). Promoting Distributed Trust in Machine Learning and Computational Simulation. ICBC

- Cammarota R., Banerjee I., Rosenberg O. (2018). Machine learning IP protection. ICCAD

- Hu, H. (2019). A Distributed Fair Machine Learning Framework with Private Demographic Data Protection. ICDM

- Goodfellow, I.J. (2014). Explaining and harnessing adversarial examples. ICLR

- Song, Y. (2017). PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples. ICLR

- Madry, A. (2017). Towards Deep Learning Models Resistant to Adversarial Attacks. ICLR

- Brendel W. (2018). Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. ICLR

- Liu Y. (2019). Query-free embedding attack against deep learning. ICME

- Zemel, R. (2013). Learning Fair Representations. ICML

- Jagielski, M. (2019). Differentially Private Fair Learning. ICML

- Ahamed F., Farid F. (2019). Applying internet of things and machine-learning for personalized healthcare: Issues and challenges. iCMLDE

- Benrimoh, David; Israel, Sonia; Perlman, Kelly; Fratila, Robert; Krause, Matthew (2018). Meticulous Transparency - An Evaluation Process for an Agile AI Regulatory Scheme. IEA/AIE

- Akhtar, N. (2018). Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey. IEEE Access

- Hassan, Alzubair; Hamza, Rafik; Yan, Hongyang; Li, Ping (2019). An Efficient Outsourced Privacy Preserving Machine Learning Scheme With Public Verifiability. IEEE Access

- Fritchman K. (2019). Privacy-Preserving Scoring of Tree Ensembles: A Novel Framework for AI in Healthcare. IEEE BigData

- Usama M. (2020). The Adversarial Machine Learning Conundrum: Can the Insecurity of ML Become the Achilles’ Heel of Cognitive Networks? IEEE Network

- Yuan, X. (2019). Adversarial Examples: Attacks and Defenses for Deep Learning. IEEE transactions on neural networks and learning systems

- Shokri, Reza (2019). Trusting Machine Learning: Privacy, Robustness, and Transparency Challenges. IH&MMSec

- Shahriari, Kyarash; Shahriari, Mana (2017). IEEE standard review - Ethically aligned design: A vision for prioritizing human wellbeing with artificial intelligence and autonomous systems. IHTC

- Thelisson, Eva (2017). Towards Trust, Transparency and Liability in AI / AS systems. IJCAI

- Zhou J., Asif Khawaja M., Li Z., Sun J., Wang Y., Chen F. (2016). Making machine learning useable by revealing internal states update-a transparent approach. IJCSE

- Suciu, O. (2018). When does machine learning fail? Generalized transferability for evasion and poisoning attacks. USENIX Security

- Oh, S.J. (2019). Towards Reverse-Engineering Black-Box Neural Networks. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning

- Ma X. (2018). Privacy preserving multi-party computation delegation for deep learning in cloud computing. Information Sciences

- Vidnerová P. (2016). Vulnerability of machine learning models to adversarial examples. ITAT

- Beam A.L., Manrai A.K., Ghassemi M. (2020). Challenges to the Reproducibility of Machine Learning Models in Health Care. JAMA

- Wang X., Li J., Kuang X., Tan Y.-A., Li J. (2019). The security of machine learning in an adversarial setting: A survey. Journal of Parallel and Distributed Computing

- Amodei, D. (2016). Concrete Problems in AI Safety. Journal of Technology in Human Services

- Gopinath, D. (2017). DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks. Journal of Technology in Human Services

- Bantilan, N. (2018). Themis-ml: A Fairness-Aware Machine Learning Interface for End-To-End Discrimination Discovery and Mitigation. Journal of Technology in Human Services

- Goodman, D. (2020). Advbox: a toolbox to generate adversarial examples that fool neural networks. Journal of Technology in Human Services

- Bird, Sarah; Hutchinson, Ben; Kenthapadi, Krishnaram; Kiciman, Emre; Mitchell, Margaret (2019). Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned. KDD

- Sokol, Kacper; Flach, Peter A. (2020). One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency. KI - Künstliche Intelligenz

- Alshemali B., Kalita J. (2020). Improving the Reliability of Deep Neural Networks in NLP: A Review. Knowledge-Based Systems

- Barreno, M. (2010). The security of machine learning. Mach Learn (2010)

- Hagendorff, Thilo (2019). The Ethics of AI Ethics - An Evaluation of Guidelines. Minds and Machines

- Vidnerová P. (2016). Evolutionary generation of adversarial examples for deep and shallow machine learning models. MISNC/DS/SocialInformatics

- Salem, A. (2018). ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models. NDSS

- Shafahi A. (2018). Poison frogs! Targeted clean-label poisoning attacks on neural networks. NeurIPS

- Babcock, James; Kramár, János; Yampolskiy, Roman V. (2017). Guidelines for Artificial Intelligence Containment. Next-Generation Ethics

- Hardt, M. (2016). Equality of opportunity in supervised learning. NIPS

- Pleiss, G. (2017). On Fairness and Calibration. NIPS

- Kusner, M.J. (2017). Counterfactual Fairness. NIPS

- Steinhardt, J. (2017). Certified Defenses for Data Poisoning Attacks. NIPS

- Chen, I. (2018). Why is my classifier discriminatory? NIPS

- Tang W. (2019). Privacy Preserving Machine Learning with Limited Information Leakage. NSS

- Abadi, M. (2016). TensorFlow: A system for large-scale machine learning. OSDI

- Wong, P.H. (2019). Democratizing Algorithmic Fairness. Philosophy & Technology

- Zhang X., Khalili M.M., Liu M. (2019). Long-Term Impacts of Fair Machine Learning. PLoS One

- Kuwajima, Hiroshi (2019). Improving Transparency of Deep Neural Inference Process. Progress in Artificial Intelligence

- Piantadosi, Gabriele (2018). On Reproducibility of Deep Convolutional Neural Networks Approaches. RRPR

- Lécuyer, M. (2019). Privacy accounting and quality control in the Sage differentially private ML platform. SIGMOD

- Urban, C. (2019). Perfectly Parallel Fairness Certification of Neural Networks. SIGPLAN

- Mohassel, P. (2017). SecureML: A System for Scalable Privacy-Preserving Machine Learning. SP

- Gehr, T. (2018). AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation. SP

- Wang, B. (2018). Stealing Hyperparameters in Machine Learning. SP

- Ling, Xiang (2019). DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model. SP

- Varley, Michael (2019). Fairness in Machine Learning with Tractable Models. StarAI 2020

- Wicker, M. (2018). Feature-Guided Black-Box Safety Testing of Deep Neural Networks. TACAS

- Luo, B. (2018). Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks. AAAI

- Ren K., Zheng T., Qin Z., Liu X. (2020). Adversarial Attacks and Defenses in Deep Learning. TNNLS

- Ahn, Y. (2019). FairSight: Visual Analytics for Fairness in Decision Making. TVCG

- Ma Y. (2020). Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics. TVCG

- Stoffel, Florian (2018). Transparency in Interactive Feature-based Machine Learning: Challenges and Solutions. University of Konstanz

- He, W. (2017). Adversarial Example Defense: Ensembles of Weak Defenses are not Strong. WOOT

- Biggio, B. (2012). Poisoning attacks against support vector machines.

- Zliobaite, I. (2015). A survey on measuring indirect discrimination in machine learning.

- Liu, Y. (2016). Delving into Transferable Adversarial Examples and Black-box Attacks.

- Papernot, N. (2016). Towards the Science of Security and Privacy in Machine Learning.

- Papernot, N. (2016). Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples.

- Friedler, S.A. (2016). On the (im)possibility of fairness.

- Weller, A. (2017). Challenges for Transparency.

- Gajane, Pratik (2017). On formalizing fairness in prediction with machine learning.

- Shaikh, Samiulla; Vishwakarma, Harit; Mehta, Sameep; Varshney, Kush R.; Ramamurthy, Karthikeyan Natesan; Wei, Dennis (2017). An End-To-End Machine Learning Pipeline That Ensures Fairness Policies.

- Yang, C. (2017). Generative poisoning attack method against neural networks.

- Sugimura, Peter; Hartl, Florian (2018). Building a Reproducible Machine Learning Pipeline.

- Corbett-Davies, Sam; Goel, Sharad (2018). The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning.

- Jobin, Anna; Ienca, Marcello; Vayena, Effy (2019). Artificial Intelligence: the global landscape of ethics guidelines.

- McDermott, Matthew B. A. (2019). Reproducibility in Machine Learning for Health.

- Weller, A. (2019). Transparency: Motivations and Challenges.

- Raji, Inioluwa Deborah; Yang, Jingying (2019). ABOUT ML: Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles.

- Zhang, Yukun; Zhou, Longsheng (2019). Fairness Assessment for Artificial Intelligence in Financial Industry.

- Aïvodji, U. (2019). Fairwashing: the risk of rationalization.

- Bald, L. (2019). Identifying and Mitigating Bias in Machine Learning Applications.

- Wang, Xiaoqian; Huang, Heng (2019). Approaching Machine Learning Fairness through Adversarial Network.

- Schumann, Candice (2019). Transfer of Machine Learning Fairness across Domains.

- Wang, Xuezhi (2019). Practical Compositional Fairness: Understanding Fairness in Multi-Task ML Systems.

- Caton, S. (2020). Fairness in Machine Learning: A Survey.

- Radovanović, S. (2020). Enforcing fairness in logistic regression algorithm.

🔝 back to top

Model Management

- Sankaran A., Panwar N., Khare S., Mani S., Sethi A., Aralikatte R., Gantayat N. (2018). Democratization of deep learning using Darviz. AAAI

- Rudzicz, Frank; Paprica, P. Alison; Janczarski, Marta (2019). Towards international standards for evaluating machine learning. AAAI

- Elkholy, Alexander (2019). Interpretable Automated Machine Learning in Maana™ Knowledge Platform. AAMAS

- Poerner, N. (2018). Evaluating neural network explanation methods using hybrid documents and morphosyntactic agreement. ACL

- Goodman, B. (2017). [European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=%22European%20Union%20Regulations%20on%20Algorithmic%20Decision-Making%20and%20a%20“Right%20to%20Explanation”%22&btnG=). AI Magazine

- Sajan K.K., Ramachandran G.S., Krishnamachari B. (2019). Enhancing support for machine learning and edge computing on an iot data marketplace. AIChallengeIoT/SenSys

- Hind M. (2019). TED: Teaching AI to explain its decisions. AIES

- Ahmad M.A., Eckert C., Teredesai A. (2019). The challenge of imputation in explainable artificial intelligence models. AISafety

- Hu Q., Ma L., Zhao J. (2018). DeepGraph: A PyCharm Tool for Visualizing and Understanding Deep Learning Models. APSEC

- Tan, S. (2015). Improving the interpretability of deep neural networks with stimulated learning. ASRU

- Borgli, R.J. (2019). Saga: An Open Source Platform for Training Machine Learning Models and Community-driven Sharing of Techniques. CBMI

- Elshawi R. (2019). Interpretability in healthcare: a comparative study of local machine learning interpretability techniques. CBMS

- Semerikov, S.O. (2018). Computer Simulation of Neural Networks Using Spreadsheets: The Dawn of the Age of Camelot. CEUR Workshop

- Llewellynn, Tim; del Milagro Fernández-Carrobles, M.; Déniz, Oscar; Fricker, Samuel; Storkey, Amos J.; Pazos, Nuria; Velikic, Gordana; Leufgen, Kirsten; Dahyot, Rozenn; Koller, Sebastian; Goumas, Georgios I.; Leitner, Peter; Dasika, Ganesh; Wang, Lei; Tutschku, Kurt (2017). BONSEYES: Platform for Open Development of Systems of Artificial Intelligence: Invited paper. CF

- Murugesan S., Malik S., Du F., Koh E., Lai T.M. (2019). DeepCompare: Visual and Interactive Comparison of Deep Learning Model Performance. CG&A

- Amershi, S. (2015). ModelTracker: Redesigning performance analysis tools for machine learning. CHI

- Yin M. (2019). Understanding the effect of accuracy on trust in machine learning models. CHI

- Hohman F. (2017). ShapeShop: Towards understanding deep learning representations via interactive experimentation. CHI EA

- Zhu, J. (2018). Explainable AI for Designers: A Human-Centered Perspective on Mixed-Initiative Co-Creation. CIG

- Guidotti, R. (2018). A Survey of Methods for Explaining Black Box Models. CSUR

- Lapuschkin, S. (2016). Analyzing Classifiers: Fisher Vectors and Deep Neural Networks. CVPR

- Bau, D. (2017). Network Dissection: Quantifying Interpretability of Deep Visual Representations. CVPR

- Zhang, Q. (2018). Interpretable Convolutional Neural Networks. CVPR

- Wagner, J. (2019). Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks. CVPR

- Gunning, D. (2017). Explainable Artificial Intelligence (XAI). DARPA

- García M., Domínguez C., Heras J., Mata E., Pascual V. (2019). An on-going framework for easily experimenting with deep learning models for bioimaging analysis. DCAI

- Montavon, G. (2018). Methods for interpreting and understanding deep neural networks. Digital Signal Processing

- Gilpin, L.H. (2018). Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning. DSAA

- Zeiler, M.D. (2014). Visualizing and understanding convolutional networks. ECCV

- Van Rijn, J.N. (2013). OpenML: A collaborative science platform. ECML PKDD

- Casalicchio, G. (2018). Visualizing the Feature Importance for Black Box Models. ECML PKDD

- Vanschoren, Joaquin (2009). A Community-Based Platform for Machine Learning Experimentation. ECML/PKDD

- Patel S. (2016). A wearable computing platform for developing cloud-based machine learning models for health monitoring applications. EMBC

- Ellis, Charles A. (2019). A Cloud-based Framework for Implementing Portable Machine Learning Pipelines for Neural Data Analysis. EMBC

- Arras, L. (2019). Explaining and Interpreting LSTMs. Explainable ai: Interpreting, explaining and visualizing deep learning

- Samek W., Müller K.-R. (2019). Towards Explainable Artificial Intelligence. Explainable AI: interpreting, explaining and visualizing deep learning

- Mitchell, M. (2019). Model Cards for Model Reporting. FAccT

- Bhatt, Umang; Xiang, Alice; Sharma, Shubham; Weller, Adrian; Taly, Ankur; Jia, Yunhan; Ghosh, Joydeep; Puri, Ruchir; Moura, José M. F.; Eckersley, Peter (2019). Explainable Machine Learning in Deployment. FAccT

- Sangroya A. (2019). Using formal concept analysis to explain black box deep learning classification models. FCA4AI

- Adhikari A. (2019). LEAFAGE: Example-based and Feature importance-based Explanations for Black-box ML models. FUZZ

- Zhou, S.M. (2008). Low-level interpretability and high-level interpretability: a unified view of data-driven interpretable fuzzy system modelling. Fuzzy sets and systems

- Vartak, M. (2016). ModelDB: a system for machine learning model management. HILDA

- Fong, R.C. (2017). Interpretable Explanations of Black Boxes by Meaningful Perturbation. ICCV

- Scherzinger S. (2019). The best of both worlds: Challenges in linking provenance and explainability in distributed machine learning. ICDCS

- Ahmad M.A., Teredesai A., Eckert C. (2018). Interpretable machine learning in healthcare. ICHI

- Zintgraf, L.M. (2017). Visualizing Deep Neural Network Decisions: Prediction Difference Analysis. ICLR

- Kindermans, P.J. (2017). Learning how to explain neural networks: PatternNet and PatternAttribution. ICLR

- Rauber, J. (2017). Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models. ICML

- Koh, P.W. (2017). Understanding Black-box Predictions via Influence Functions. ICML

- Zhao, Shuai (2018). Packaging and Sharing Machine Learning Models via the Acumos AI Open Platform. ICMLA

- Li, Jie; Wang, Guoteng; Zhang, Changsheng; Zhang, Bin (2019). Deep Learning Training Management Platform Based on Distributed Technologies in Resource-Constrained Scenarios. ICNC-FSKD

- Xie C., Qi H., Ma L., Zhao J. (2019). DeepVisual: A visual programming tool for deep learning systems. ICPC

- Amouzgar, Farhad (2018). iSheets: A Spreadsheet-Based Machine Learning Development Platform for Data-Driven Process Analytics. ICSOC

- Wachter, S. (2017). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. IDPL

- Adadi, A. (2018). Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access

- Loyola-Gonzalez, O. (2019). Black-Box vs. White-Box: Understanding Their Advantages and Weaknesses From a Practical Point of View. IEEE Access

- Biran, O. (2017). Explanation and justification in machine learning: A survey. IJCAI

- Epstein, Ziv; Payne, Blakeley H.; Shen, Judy Hanwen; Hong, Casey Jisoo; Felbo, Bjarke; Dubey, Abhimanyu; Groh, Matthew; Obradovich, Nick; Cebrián, Manuel; Rahwan, Iyad (2018). TuringBox: An Experimental Platform for the Evaluation of AI Systems. IJCAI

- Yu F. (2019). Interpreting and evaluating neural network robustness. IJCAI

- Byrne, R.M. (2019). Counterfactuals in Explainable Artificial Intelligence (XAI): Evidence from Human Reasoning. IJCAI

- Patterson C., Galluppi F., Rast A., Furber S. (2012). Visualising large-scale neural network models in real-time. IJCNN/WCCI

- Selvaraju, R.R. (2017). Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. IJCV

- García-Díaz, V. (2015). Towards a Standard-based Domain-specific Platform to Solve Machine Learning-based Problems. IJIMAI

- Nguyen, A. (2019). Understanding Neural Networks via Feature Visualization: A survey. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning

- Ras, G. (2018). Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges. In Explainable and Interpretable Models in Computer Vision and Machine Learning

- Goebel, R. (2018). Explainable AI: The New 42? In International Cross-Domain Conference for Machine Learning and Knowledge Extraction

- Barredo Arrieta A., Díaz-Rodríguez N., Del Ser J., Bennetot A., Tabik S., Barbado A., Garcia S., Gil-Lopez S., Molina D., Benjamins R., Chatila R., Herrera F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion

- Quemy A. (2019). Two-stage optimization for machine learning workflow. Information Systems

- Meloni P., Loi D., Deriu G., Ripolles O., Solans D., Pimentel A.D., Sapra D., Pintor M., Biggio B., Moser B., Shepeleva N., Stefanov T., Minakova S., Conti F., Benini L., Fragoulis N., Theodorakopoulos I., Masin M., Palumbo F. (2018). ALOHA: An architectural-aware framework for deep learning at the edge. INTESA

- Wei, Yongmei; Low, Jia Xin (2019). expanAI: A Smart End-to-End Platform for the Development of AI Applications. IOV

- Gale W. (2019). Producing radiologist-quality reports for interpretable deep learning. ISBI

- Lécué, Freddy; Abeloos, Baptiste; Anctil, Jonathan; Bergeron, Manuel; Dalla-Rosa, Damien; Corbeil-Letourneau, Simon; Martet, Florian; Pommellet, Tanguy; Salvan, Laura; Veilleux, Simon; Ziaeefard, Maryam (2019). Thales XAI Platform: Adaptable Explanation of Machine Learning Systems - A Knowledge Graphs Perspective. ISWC Satellites

- Samek, W. (2017). Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models. ITU Journal: ICT Discoveries

- Xiao C., Choi E., Sun J. (2018). Opportunities and challenges in developing deep learning models using electronic health records data: A systematic review. JAMIA

- Lapuschkin S. (2016). The LRP toolbox for artificial neural networks. Journal of Machine Learning Research

- Yosinski, J. (2015). Understanding Neural Networks Through Deep Visualization. Journal of Technology in Human Services

- Al-Shedivat, M. (2018). The Intriguing Properties of Model Explanations. Journal of Technology in Human Services

- Apley, D.W. (2020). Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models. Journal of the Royal Statistical Society: Series B (Statistical Methodology)

- Jia S. (2019). Visualizing surrogate decision trees of convolutional neural networks. Journal of Visualization

- Lakkaraju, H. (2017). Interpretable & Explorable Approximations of Black Box Models. KDD

- Páez A. (2019). The Pragmatic Turn in Explainable Artificial Intelligence (XAI). Minds and Machines

- Došilović, F.K. (2018). Explainable artificial intelligence: A survey. MIPRO

- Wang, W. (2015). Singa: Putting deep learning in the hands of multimedia users. MM

- Bailer, Werner (2018). On the Traceability of Results from Deep Learning-Based Cloud Services. MMM

- Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence

- Sculley, D. (2015). Hidden Technical Debt in Machine Learning Systems. NIPS

- Feurer, M. (2015). Efficient and Robust Automated Machine Learning. NIPS

- Kim, B. (2016). Examples are not enough, learn to criticize! Criticism for Interpretability. NIPS

- Lundberg, S.M. (2017). A Unified Approach to Interpreting Model Predictions. NIPS

- Poursabzi-Sangdeh, F. (2018). Manipulating and Measuring Model Interpretability. NIPS

- Friedler, S.A. (2019). Assessing the Local Interpretability of Machine Learning Models. NIPS

- Kauffmann, J. (2020). Towards Explaining Anomalies: A Deep Taylor Decomposition of One-Class Models. Pattern Recognition

- Kim H., Kim Y., Hong J. (2019). Cluster management framework for autonomic machine learning platform. RACS

- Gharibi, Gharib (2019). ModelKB: towards automated management of the modeling lifecycle in deep learning. RAISE@ICSE

- Manolache F.B. (2016). General valuation framework for artificial intelligence models. RoEduNet

- Talbot, J. (2009). EnsembleMatrix: interactive visualization to support machine learning with multiple classifiers. SIGCHI

- Lim, B.Y. (2009). Why and why not explanations improve the intelligibility of context-aware intelligent systems. SIGCHI

- Freitas, A.A. (2014). Comprehensible classification models: a position paper. SIGKDD

- Ribeiro, M.T. (2016). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. SIGKDD

- Gharibi G., Walunj V., Alanazi R., Rella S., Lee Y. (2019). Automated management of deep learning experiments. SIGMOD

- Santos A., Castelo S., Felix C., Ono J.P., Yu B., Hong S., Silva C.T., Bertini E., Freire J. (2019). Visus: An interactive system for automatic machine learning model building and curation. SIGMOD

- Sellam T. (2019). DeepBase: Deep inspection of neural networks. SIGMOD

- Shu A. (2017). Unified user-interface and protocol for managing heterogeneous deep learning services. SoMeT

- Carlini, N. (2017). Towards Evaluating the Robustness of Neural Networks. SP

- Attenberg, J.M. (2011). Beat the machine: Challenging workers to find the unknown unknowns. AAAI

- Ming, Y. (2018). RuleMatrix: Visualizing and Understanding Classifiers with Rules. TVCG

- Hohman, F. (2018). Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers. TVCG

- Wongsuphasawat K. (2018). Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow. TVCG

- Kahng M., Andrews P.Y., Kalro A., Chau D.H.P. (2018). ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models. TVCG

- Wexler J., Pushkarna M., Bolukbasi T., Wattenberg M., Viegas F., Wilson J. (2020). The what-if tool: Interactive probing of machine learning models. TVCG

- Spinner T. (2020). ExplAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning. TVCG

- Hohman F. (2019). TeleGam: Combining Visualization and Verbalization for Interpretable Machine Learning. VIS

- Bock M. (2018). Visualization of neural networks in virtual reality using unreal engine. VRST

- Wansing C., Banos O., Gloesekoetter P., Pomares H., Rojas I. (2016). Development of a platform for the exchange of bio-datasets with integrated opportunities for artificial intelligence using MatLab. WCCS

- Yeung J., Wong S., Tam A., So J. (2019). Integrating machine learning technology to data analytics for e-commerce on cloud. WorldS4

- Harašta J. (2019). Trust by discrimination: Technology specific regulation & explainable AI. XAILA@JURIX

- Bergstra, J. (2015). Hyperopt: a Python library for model selection and hyperparameter optimization.

- Ribeiro, M.T. (2016). Model-Agnostic Interpretability of Machine Learning.

- Castelvecchi, D. (2016). Can we open the black box of AI?

- Shwartz-Ziv, R. (2017). Opening the Black Box of Deep Neural Networks via Information.

- Doshi-Velez, F. (2017). Towards A Rigorous Science of Interpretable Machine Learning.

- Doshi-Velez, F. (2017). Accountability of AI Under the Law: The Role of Explanation.

- Doran, D. (2017). What Does Explainable AI Really Mean? A New Conceptualization of Perspectives.

- Fisher, A. (2018). [Model Class Reliance: Variable Importance Measures for any Machine Learning Model Class, from the “Rashomon” Perspective](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=%22Model%20Class%20Reliance:%20Variable%20Importance%20Measures%20for%20any%20Machine%20Learning%20Model%20Class,%20from%20the%20“Rashomon”%20Perspective%22&btnG=).

- Aravantinos, Vincent (2018). Traceability of Deep Neural Networks.

- Dakkak, Abdul (2018). Frustrated with Replicating Claims of a Shared Model? A Solution.

- Lipton, Z.C. (2018). The Mythos of Model Interpretability.

- Narayanan, M. (2018). How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human Interpretability of Explanation.

- Hosny, Ahmed (2019). ModelHub.AI: Dissemination Platform for Deep Learning Models.

- Fursin, Grigori (2019). SysML’19 demo: customizable and reusable Collective Knowledge pipelines to automate and reproduce machine learning experiments.

- Xu, Lu; Wang, Yating (2019). XCloud: Design and Implementation of AI Cloud Platform with RESTful API Service.

- Qian, Bin; Su, Jie; Wen, Zhenyu; Jha, Devki Nandan; Li, Yinhao; Guan, Yu; Puthal, Deepak; James, Philip; Yang, Renyu; Zomaya, Albert Y.; Rana, Omer; Wang, Lizhe; Ranjan, Rajiv (2019). Orchestrating Development Lifecycle of Machine Learning Based IoT Applications: A Survey.

- Zhou, B. (2019). Comparing the Interpretability of Deep Networks via Network Dissection.

- Murdoch, W.J. (2019). Interpretable machine learning: definitions, methods, and applications.

- Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences.

- Rudin, C. (2019). Please Stop Explaining Black Box Models for High Stakes Decisions.

- Putnam, V. (2019). Explainable Artificial Intelligence for Training and Tutoring.

- Hall, Patrick (2019). Guidelines for Responsible and Human-Centered Use of Explainable Machine Learning.

🔝 back to top

Production

- Vázquez, Miguel Ángel; Pallois, Jean Paul; Debbah, Mérouane; Masouros, Christos; Kenyon, Tony; Deng, Yansha; Mekuria, Fisseha; Pérez-Neira, Ana I.; Erfanian, Javan (2019). Deploying Artificial Intelligence in the Wireless Infrastructure: the Challenges Ahead. 5GWF

- Bilal M., Oyedele L.O. (2020). Guidelines for applied machine learning in construction industry—A case of profit margins estimation. Advanced Engineering Informatics

- Blacker P. (2019). Rapid prototyping of deep learning models on radiation hardened CPUs. AHS

- Chen, Tung-Chien (2019). NeuroPilot: A Cross-Platform Framework for Edge-AI. AICAS

- Maas, Matthijs M. (2018). [Regulating for ‘Normal AI Accidents’: Operational Lessons for the Responsible Governance of Artificial Intelligence Deployment](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=%22Regulating%20for%20‘Normal%20AI%20Accidents’:%20Operational%20Lessons%20for%20the%20Responsible%20Governance%20of%20Artificial%20Intelligence%20Deployment%22&btnG=). AIES

- Huang Y.-L. (2019). MLoC: A Cloud Framework adopting Machine Learning for Industrial Automation. ASCC

- Ma, L. (2018). Deepgauge: Multi-granularity testing criteria for deep learning systems. ASE

- Hu, Q. (2019). DeepMutation++: A Mutation Testing Framework for Deep Learning Systems. ASE

- Zhang C., Yu M., Wang W., Yan F. (2019). Mark: Exploiting cloud services for cost-effective, slo-aware machine learning inference serving. ATC

- Rao S.S., Pradyumna S., Kalambur S., Sitaram D. (2019). Bodhisattva - Rapid Deployment of AI on Containers. CCEM

- Castro-Lopez O. (2019). Multi-target Compiler for the Deployment of Machine Learning Models. CGO

- Bozarth, Alex (2019). Model Asset eXchange: Path to Ubiquitous Deep Learning Deployment. CIKM

- Yin J. (2019). Strategies to deploy and scale deep learning on the summit supercomputer. DLS

- Benjamins, Richard (2020). Towards organizational guidelines for the responsible use of AI. ECAI

- Baier, Lucas; Jöhren (2019). Challenges in the Deployment and Operation of Machine Learning in Practice. ECIS

- Derakhshan, B. (2019). Continuous Deployment of Machine Learning Pipelines. EDBT

- Shin M. (2018). Neural network syntax analyzer for embedded standardized deep learning. EMDL

- Du, X. (2019). Deepstellar: model-based quantitative analysis of stateful deep learning systems. ESEC/FSE

- Gutzen R. (2018). Reproducible neural network simulations: Statistical methods for model validation on the level of network activity data. Frontiers in Neuroinformatics

- Tu, Zhucheng (2018). Pay-Per-Request Deployment of Neural Network Models Using Serverless Architectures. HLT-NAACL

- Brayford, David (2019). Deploying AI Frameworks on Secure HPC Systems with Containers. HPEC

- Narasimhamurthy M. (2019). Verifying conformance of neural network models: Invited paper. ICCAD

- Acs D. (2019). Securely Exposing Machine Learning Models to Web Clients using Intel SGX. ICCP

- Peticolas, Devon (2019). Mímir: Building and Deploying an ML Framework for Industrial IoT. ICDM

- Flaounas, Ilias N. (2017). Beyond the technical challenges for deploying Machine Learning solutions in a software company. ICML

- Odena, A. (2019). Tensorfuzz: Debugging neural networks with coverage-guided fuzzing. ICML

- Pei, K. (2017). Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems. ICSE

- Tian, Y. (2018). DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars. ICSE

- Kim, J. (2019). Guiding deep learning system testing using surprise adequacy. ICSE

- Wang, J. (2019). Adversarial sample detection for deep neural network through model mutation testing. ICSE

- Ben Braiek H., Khomh F. (2019). DeepEvolution: A Search-Based Testing Approach for Deep Neural Networks. ICSME

- Ferguson M. (2019). A standardized representation of convolutional neural networks for reliable deployment of machine learning models in the manufacturing industry. IDETC-CIE

- Alves J.M. (2019). ML4IoT: A Framework to Orchestrate Machine Learning Workflows on Internet of Things Data. IEEE Access

- Yan M. (2020). ARTDL: Adaptive Random Testing for Deep Learning Systems. IEEE Access

- Spell, Derrick C. (2017). Flux: Groupon’s automated, scalable, extensible machine learning platform. IEEE BigData

- Xing, Eric P. (2015). Petuum: A New Platform for Distributed Machine Learning on Big Data. IEEE Transactions on Big Data

- Yoon H. (2019). The adequacy assessment of test sets in machine learning using mutation testing. IJITEE

- Fischer, L. (2020). Applying AI in Practice: Key Challenges and Lessons Learned. In International Cross-Domain Conference for Machine Learning and Knowledge Extraction

- Moor, Lucien (2019). IoT meets distributed AI - Deployment scenarios of Bonseyes AI applications on FIWARE. IPCCC

- Ma L., Zhang F., Sun J., Xue M., Li B., Juefei-Xu F., Xie C., Li L., Liu Y., Zhao J., Wang Y. (2018). DeepMutation: Mutation Testing of Deep Learning Systems. ISSRE

- Liu Y., Chen P.-H.C., Krause J., Peng L. (2019). How to Read Articles That Use Machine Learning: Users’ Guides to the Medical Literature. JAMA

- Wang J., Ma Y., Zhang L., Gao R.X., Wu D. (2018). Deep learning for smart manufacturing: Methods and applications. Journal of Manufacturing Systems

- Braiek, H.B. (2020). On Testing Machine Learning Programs. Journal of Systems and Software

- Ackermann, Klaus (2018). Deploying Machine Learning Models for Public Policy: A Framework. KDD

- Bethard S., Ogren P., Becker L. (2014). ClearTK 2.0: Design patterns for machine learning in UIMA. LREC

- Sehgal, Abhishek (2019). Guidelines and Benchmarks for Deployment of Deep Learning Models on Smartphones as Real-Time Apps. MAKE

- Foidl H. (2019). Risk-based data validation in machine learning-based software systems. MaLTeSQuE/ESEC/FSE

- Mehrtash, Alireza (2017). DeepInfer: open-source deep learning deployment toolkit for image-guided therapy. Medical Imaging: Image-Guided Procedures

- Takabe, Yuichiro (2013). Rapid Deployment for Machine Learning in Educational Cloud. NBiS

- Ma, L. (2019). Deepct: Tomographic combinatorial testing for deep learning systems. SANER

- Baylor, D. (2017). TFX: A TensorFlow-Based Production-Scale Machine Learning Platform. SIGKDD

- Christidis, Angelos (2019). Serving Machine Learning Workloads in Resource Constrained Environments: a Serverless Deployment Example. SOCA

- Damiani E., Ardagna C.A. (2020). Certified Machine-Learning Models. SOFSEM

- Pei, K. (2017). DeepXplore: Automated Whitebox Testing of Deep Learning Systems. SOSP

- Griebel, Matthias; Dürr (2019). Applied image recognition: guidelines for using deep learning models in practice. Wirtschaftsinformatik

- (2018). Coverage-guided fuzzing for deep neural networks.

- Sun, Y. (2018). Testing Deep Neural Networks.

- Hanzlik, Lucjan (2018). MLCapsule: Guarded Offline Deployment of Machine Learning as a Service.

- Xie, X. (2019). Diffchaser: Detecting disagreements for deep neural networks.

- Arora, Akshay (2019). ISTHMUS: Secure, Scalable, Real-time and Robust Machine Learning Platform for Healthcare.

🔝 back to top

Lifecycle Management

- Feldman K., Faust L., Wu X., Huang C., Chawla N.V. (2017). Beyond volume: The impact of complex healthcare data on the machine learning pipeline. BIRS

- Casalicchio, Giuseppe (2017). OpenML: An R Package to Connect to the Networked Machine Learning Platform OpenML. CompStat

- Onoufriou G. (2019). Nemesyst: A hybrid parallelism deep learning-based framework applied for internet of things enabled food retailing refrigeration systems. Computers in Industry

- Arun A. (2019). Shooting the moving target: Machine learning in cybersecurity. CSAE

- Kossak, Felix; Zwick, Michael (2019). ML-PipeDebugger: A Debugging Tool for Data Processing Pipelines. DEXA

- Ring D., Barbier J., Gales G., Kent B., Lutz S. (2019). Jumping in at the deep end: How to experiment with machine learning in post-production software. DigiPro

- Brumbaugh, Eli; Kale, Atul; Luque, Alfredo; Nooraei, Bahador; Park, John; Puttaswamy, Krishna; Schiller, Kyle; Shapiro, Evgeny; Shi, Conglei; Siegel, Aaron; Simha, Nikhil; Bhushan, Mani; Sbrocca, Marie; Yao, Shi-Jing; Yoon, Patrick; Zanoyan, Varant; Zeng, Xiao-Han T.; Zhu, Qiang; Cheong, Andrew; Du, Michelle Gu-Qian; Feng, Jeff; Handel, Nick; Hoh, Andrew; Hone, Jack; Hunter, Brad (2019). Bighead: A Framework-Agnostic, End-to-End Machine Learning Platform. DSAA

- Gruendner J. (2019). Ketos: Clinical decision support and machine learning as a service – A training and deployment platform based on Docker, OMOP-CDM, and FHIR Web Services. EID

- Damiani E. (2018). Towards conceptual models for machine learning computations. ER

- de Oliveira Werneck, Rafael (2018). Kuaa: A unified framework for design, deployment, execution, and recommendation of machine learning experiments. Future Generation Computer Systems

- Rausch, Thomas (2019). Towards a Serverless Platform for Edge AI. HotEdge

- Hummer, Waldemar (2019). ModelOps: Cloud-Based Lifecycle Management for Reliable and Trusted AI. IC2E

- Miao, Hui (2016). ModelHub: Towards Unified Data and Lifecycle Management for Deep Learning. ICDE

- Miao, Hui; Li, Ang; Davis, Larry S.; Deshpande, Amol (2017). Towards Unified Data and Lifecycle Management for Deep Learning. ICDE

- Miao, Hui (2017). ModelHub: Deep Learning Lifecycle Management. ICDE

- Zaharia, Matei (2018). Accelerating the Machine Learning Lifecycle with MLflow. ICDE

- Sigl M.B. (2019). Don’t fear the REAPER: A framework for materializing and reusing deep-learning models. ICDE

- Frost R., Paul D., Li F. (2019). AI pro: Data processing framework for AI models. ICDE

- Weber C., Hirmer P., Reimann P., Schwarz H. (2019). A new process model for the comprehensive management of machine learning models. ICEIS

- Semerikov S., Teplytskyi I., Yechkalo Y., Markova O., Soloviev V., Kiv A. (2019). Computer simulation of neural networks using spreadsheets: Dr. Anderson, welcome back. ICTERI

- Verma D., White G., De Mel G. (2019). Federated AI for the enterprise: A web services based implementation. ICWS

- Maskey, Manil; Molthan, Andrew; Hain, Chris; Ramachandran, Rahul; Gurung, Iksha; Freitag, Brian; Miller, Jeffrey J.; Ramasubramanian, Muthukumaran; Bollinger, Drew; Mestre, Ricardo; Cecil, Daniel (2019). Machine Learning Lifecycle for Earth Science Application: A Practical Insight into Production Deployment. IGARSS

- Baylor, Denis; Breck, Eric; Cheng, Heng-Tze; Fiedel, Noah; Foo, Chuan Yu; Haque, Zakaria; Haykal, Salem; Ispir, Mustafa; Jain, Vihan; Koc, Levent; Koo, Chiu Yuen; Lew, Lukasz; Mewald, Clemens; Modi, Akshay Naresh; Polyzotis, Neoklis; Ramesh, Sukriti; Roy, Sudip; Whang, Steven Euijong; Wicke, Martin; Wilkiewicz, Jarek; Zhang, Xin; Zinkevich, Martin (2017). TFX: A TensorFlow-Based Production-Scale Machine Learning Platform. KDD

- Cheng H.-T. (2017). TensorFlow estimators: Managing simplicity vs. flexibility in high-level machine learning frameworks. KDD

- Le H.V., Mayer S., Henze N. (2017). Machine learning with tensorflow for mobile and ubiquitous interaction. MUM

- Sung, Nako (2017). NSML: A Machine Learning Platform That Enables You to Focus on Your Models. NIPS

- Baylor, Denis (2019). Continuous Training for Production ML in the TensorFlow Extended (TFX) Platform. OpML

- Bhattacharjee, Anirban (2019). Stratum: A Serverless Framework for the Lifecycle Management of Machine Learning-based Data Analytics Tasks. OpML

- Miguel, Lucas B. (2017). Marvin - Open source artificial intelligence platform. PAPIs

- Chard R. (2019). Publishing and serving machine learning models with DLHub. PEARC

- Palacios, Ricardo Colomo (2019). Towards a Software Engineering Framework for the Design, Construction and Deployment of Machine Learning-Based Solutions in Digitalization Processes. RIIFORUM

- Shrivastava S. (2019). ThunderML: A toolkit for enabling AI/ML models on cloud for industry 4.0. SCF

- Arpteg, A. (2018). Software Engineering Challenges of Deep Learning. SEAA

- Cai, Zhuhua; Gao, Zekai J.; Luo, Shangyu; Perez, Luis Leopoldo; Vagena, Zografoula; Jermaine, Christopher M. (2014). A comparison of platforms for implementing and running very large scale machine learning algorithms. SIGMOD

- van der Weide, Tom; Papadopoulos, Dimitris; Smirnov, Oleg; Zielinski, Michal; van Kasteren, Tim (2017). Versioning for End-to-End Machine Learning Pipelines. SIGMOD

- Binnig, Carsten (2018). Towards Interactive Curation & Automatic Tuning of ML Pipelines. SIGMOD

- Agrawal, Pulkit (2019). Data Platform for Machine Learning. SIGMOD

- Lourenço, R. (2020). Debugging Machine Learning Pipelines. SIGMOD

- de Prado, Miguel (2019). AI Pipeline - bringing AI to you. End-to-end integration of data, algorithms and deployment tools. Transactions on Internet of Things

- Renggli C., Hubis F.A., Karlaš B., Schawinski K., Wu W., Zhang C. (2018). Ease.ml/ci and Ease.ml/meter in action: Towards data management for statistical generalization. VLDB

- Braun, M.L. (2014). Open science in machine learning.

- Miao, H. (2016). Provdb: A system for lifecycle management of collaborative analysis workflows.

- Lai, Liangzhen (2018). Rethinking Machine Learning Development and Deployment for Edge Devices.

- Guo, Qianyu; Xie, Xiaofei; Ma, Lei; Hu, Qiang; Feng, Ruitao; Li, Li; Liu, Yang; Zhao, Jianjun; Li, Xiaohong (2018). An Orchestrated Empirical Study on Deep Learning Frameworks and Platforms.

- Han, Kun (2019). DELTA: A DEep learning based Language Technology plAtform.

- Ashmore, Rob; Calinescu, Radu; Paterson, Colin (2019). Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges.

🔝 back to top

Data Management

- Zhang, Y. (2016). DataLab: A version data management and analytics system. BIGDSE

- Chang, J.C. (2017). Collaborative crowdsourcing for labeling machine learning datasets. CHI

- Boehm, Matthias (2020). SystemDS: A Declarative Machine Learning System for the End-to-End Data Science Lifecycle. CIDR

- Shah, V. (2019). The ML Data Prep Zoo: Towards Semi-Automatic Data Preparation for ML. DEEM

- Grueneberg K., Ko B., Wood D., Wang X., Steuer D., Lim Y. (2019). IoT Data Management System for Rapid Development of Machine Learning Models. ICCC

- Volkovs, M. (2014). Continuous data cleaning. ICDE

- Spell D.C. (2016). QED: Groupon’s ETL management and curated feature catalog system for machine learning. IEEE BigData

- Chelliah B.J. (2019). Text and data formatting for machine learning. IJITEE

- Freitas, João; Ribeiro, Jorge; Baldewijns, Daan; Oliveira, Sara; Braga, Daniela (2018). Machine Learning Powered Data Platform for High-Quality Speech and NLP Workflows. INTERSPEECH

- Kamiran, F. (2012). Data preprocessing techniques for classification without discrimination. KAIS

- Yocum, Ken (2019). Disdat: Bundle Data Management for Machine Learning Pipelines. OpML

- Krishnan S. (2016). ActiveClean: An interactive data cleaning framework for modern machine learning. SIGMOD

- Xu, L. (2017). ORPHEUSDB: A Lightweight Approach to Relational Dataset Versioning. SIGMOD

- Polyzotis, Neoklis; Roy, Sudip; Whang, Steven Euijong; Zinkevich, Martin (2018). Data Lifecycle Challenges in Production Machine Learning: A Survey. SIGMOD

- Tae K.H., Roh Y., Oh Y.H., Kim H., Whang S.E. (2019). Data cleaning for accurate, fair, and robust models: A big data - AI integration approach. SIGMOD

- Shang, Zeyuan (2019). Democratizing Data Science through Interactive Curation of ML Pipelines. SIGMOD

- Souza R. (2019). Provenance data in the machine learning lifecycle in computational science and engineering. WORKS

- Schoenfeld, Brandon (2018). Preprocessor Selection for Machine Learning Pipelines.

- Veale, M. (2019). Governing machine learning that matters.