Perceived diversity in software engineering: a systematic literature review

Affiliations.

  • 1 University of British Columbia, Kelowna, Canada.
  • 2 University of Waterloo, Waterloo, Canada.
  • PMID: 34305441
  • PMCID: PMC8284041
  • DOI: 10.1007/s10664-021-09992-2

We define perceived diversity as the diversity factors that individuals are born with. Perceived diversity in Software Engineering has been recognized as a high-value team property, and companies are willing to increase their efforts to create more diverse work teams. The current diversity state-of-the-art shows that gender diversity studies have been growing during the past decade, and they have shown the benefits of including women in software teams. However, less is known about how other perceived diversity factors such as race, nationality, disability, and age of developers are related to Software Engineering. Through a systematic literature review, we aim to clarify the research area concerned with perceived diversity in Software Engineering. Our goal is to identify (1) what issues have been studied and what results have been reported; (2) what methods, tools, models, and processes have been proposed to address perceived diversity issues; and (3) what limitations have been reported when studying perceived diversity in Software Engineering. Furthermore, our ultimate goal is to identify gaps in the current literature and create a call for future action in perceived diversity in Software Engineering. Our results indicate that individual studies have typically taken a gender diversity perspective, focusing on showing gender bias or gender differences instead of developing methods and tools to mitigate the gender diversity issues faced in SE. Moreover, perceived diversity aspects related to SE participants' race, age, and disability need to be further analyzed in Software Engineering research. From our systematic literature review, we conclude that researchers need to consider a wider set of perceived diversity aspects in future research.

Keywords: Perceived diversity; Software engineering; Systematic literature review.

© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021.

Computer Science > Software Engineering

Title: Large Language Models for Software Engineering: A Systematic Literature Review

Abstract: Large Language Models (LLMs) have significantly impacted numerous domains, including Software Engineering (SE). Many recent publications have explored LLMs applied to various SE tasks. Nevertheless, a comprehensive understanding of the application, effects, and possible limitations of LLMs on SE is still in its early stages. To bridge this gap, we conducted a systematic literature review (SLR) on LLM4SE, with a particular focus on understanding how LLMs can be exploited to optimize processes and outcomes. We selected and analyzed 395 research papers from January 2017 to January 2024 to answer four key research questions (RQs). In RQ1, we categorize the different LLMs that have been employed in SE tasks, characterizing their distinctive features and uses. In RQ2, we analyze the methods used in data collection, preprocessing, and application, highlighting the role of well-curated datasets in successfully applying LLMs to SE. RQ3 investigates the strategies employed to optimize and evaluate the performance of LLMs in SE. Finally, RQ4 examines the specific SE tasks where LLMs have shown success to date, illustrating their practical contributions to the field. From the answers to these RQs, we discuss the current state-of-the-art and trends, identify gaps in existing research, and flag promising areas for future study. Our artifacts are publicly available at this https URL.


Systematic Reviews in the Engineering Literature: A Scoping Review


Ethics in AI through the practitioner’s view: a grounded theory literature review

  • Open access
  • Published: 06 May 2024
  • Volume 29, article number 67 (2024)


  • Aastha Pant (ORCID: orcid.org/0000-0002-6183-0492) 1,
  • Rashina Hoda 1,
  • Chakkrit Tantithamthavorn 1 &
  • Burak Turhan 2


The term ethics is widely used, explored, and debated in the context of developing Artificial Intelligence (AI) based software systems. In recent years, numerous incidents have raised the profile of ethical issues in AI development and led to public concerns about the proliferation of AI technology in our everyday lives. But what do we know about the views and experiences of those who develop these systems – the AI practitioners? We conducted a grounded theory literature review (GTLR) of 38 primary empirical studies that included AI practitioners’ views on ethics in AI and analysed them to derive five categories: practitioner awareness, perception, need, challenge, and approach. These are underpinned by multiple codes and concepts that we explain with evidence from the included studies. We present a taxonomy of ethics in AI from practitioners’ viewpoints to assist AI practitioners in identifying and understanding the different aspects of AI ethics. The taxonomy provides a landscape view of the key aspects that concern AI practitioners when it comes to ethics in AI. We also share an agenda for future research studies and recommendations for practitioners, managers, and organisations to help in their efforts to better consider and implement ethics in AI.


1 Introduction

Over the last few years, there has been a swift rise in the adoption of AI technology across diverse sectors such as health, transportation, education, IT, banking, and more. The widespread use of AI has underscored the significance of ethical considerations within the realm of AI (Hagendorff 2020 ). Ethics refers to “ the moral principles that govern the behaviors or activities of a person or a group of people ” (Nalini 2020 ). The process of attributing moral values and ethical principles to machines to resolve ethical issues they encounter, and enabling them to operate ethically is a form of applied ethics (Anderson and Anderson 2011 ). There is a lack of a universal definition of AI ethics and ethical principles (Kazim and Koshiyama 2021 ). In our study, we adopted the definition proposed by Siau and Wang ( 2020 ), stating that “AI ethics refers to the principles of developing AI to interact with other AIs and humans ethically and function ethically in society” . Likewise, we have adopted the definitions of AI ethical principles outlined in Australia’s AI Ethics Principles Footnote 1 list because there is a lack of a universal set of AI ethics principles that the whole world follows. Different countries and organisations have their own distinct AI ethical principles. For example, the European Commission has defined its own guidelines for trustworthy AI (Commission 2019 ), the United States Department of Defense has adopted 5 principles of AI Ethics (Defense 2020 ), and the Organisation for Economic Cooperation and Development (OECD) has defined its AI principles to promote the use of ethical AI (OECD 2019 ). Australia’s AI Ethics Principles address a broad spectrum of ethical concerns, spanning from human to environmental well-being. They encompass widely recognised ethical principles like fairness, privacy, and transparency, along with less common but crucial concepts such as contestability and accountability. 
The definitions of the terminologies used in this study have been provided in Appendix C .

The consideration of ethics in AI includes the process of development as well as the resulting product. Footnote 2 It is very important to incorporate ethical considerations in the development of AI products to ensure that the end product is ethically, socially, and legally responsible (Obermeyer and Emanuel 2016 ). The importance of ethical consideration in AI is highlighted by recent incidents that demonstrate its impact (Bostrom and Yudkowsky 2018 ). For example, GitHub was criticised for using unlicensed source code as training data for their AI product, which resulted in disappointment among software developers (Al-Kaswan and Izadi 2023 ). There were also cases of racial and gender bias in AI systems, such as facial recognition algorithms that performed better on white men and worse on black women, highlighting issues of accountability and bias (Buolamwini and Gebru 2018 ). Additionally, in 2018, Amazon had to halt the use of their AI-powered recruitment tool due to gender bias (Dastin (2018) ), and in 2020, the Dutch court halted the use of System Risk Indication (SyRI) - a secret algorithm to detect possible social welfare fraud as this algorithm lacked transparency for citizens about what it does with the personal information of the people (SyR 2020 ). In each of these examples, ethical problems might have arisen during the development process, giving rise to ethical concerns regarding the resulting product. These incidents emphasise the importance of ethical considerations in AI development.

We were motivated to study the area of ethics in AI due to various case studies and the importance of the topic. Despite the existence of ethical principles, guidelines, and company policies, the implementation of these principles is ultimately up to the AI practitioners. Thus, we became interested in conducting a review study to explore existing research on ethics in AI. Specifically, we were interested in exploring the perspectives of those closest to it – the AI practitioners, Footnote 3 as they are in a unique position to bring about changes and improvements and the need for review studies in the area of AI ethics to understand practitioners’ perspectives have also been highlighted in the literature (Khan et al. 2022 ; Leikas et al. 2019 ).

To understand practitioners’ views on AI ethics as presented in the literature, we conducted a grounded theory literature review (GTLR) following the five-step framework of define , search , select , analyse , and present proposed by Wolfswinkel et al. ( 2013 ). We first defined the overarching research question (RQ), What do we know from the literature about the AI practitioners’ views and experiences of ethics in AI? Footnote 4 Our study aimed to find empirical studies that focused on capturing the views and experiences of AI practitioners regarding AI ethics and ethical principles, and their implementation in developing AI-based systems. Then, we used the grounded theory literature review (GTLR) protocol to search and select primary research articles Footnote 5 that include practitioners’ views on AI ethics. To analyse the selected studies, we applied the procedures of socio-technical grounded theory (STGT) for data analysis (Hoda 2021 ) such as open coding , targeted coding , constant comparison , and memoing , iteratively on the 38 primary empirical studies. Wolfswinkel et al. ( 2013 ) welcome adaptations to their framework by acknowledging that “... one size does not fit all, and there should be no hesitation whatsoever to deviate from our proposed steps, as long as such variation is well motivated.” Since there was little concrete guidance available on how to perform in-depth analysis and develop theory from literature as a data source, we made some adaptations, as explained in the methodology section (Section 3 ).

Based on our analysis, we present a taxonomy of ethics in AI from practitioners’ viewpoints spanning five categories: (i) practitioner awareness, (ii) practitioner perception, (iii) practitioner need, (iv) practitioner challenge, and (v) practitioner approach , captured in Figs. 4 and 5 , and described in-depth in Sections 5 and 6.1 . The main contributions of this paper are:

A source of gathered information from literature on AI practitioners’ views and experiences of ethics in AI,

A taxonomy of ethics in AI from practitioners’ viewpoints, comprising five categories: their awareness , perception , need , challenge , and approach related to ethics in AI,

An example of the application of grounded theory literature review (GTLR) in software engineering,

Guidance for practitioners who require a better understanding of the requirements and factors affecting ethics implementation in AI,

A set of recommendations for future research in the area of ethics implementation in AI from practitioners’ perspective.

The rest of the paper is structured as follows: Section 2 presents the background details in the area of ethics in Information and Communications Technology (ICT), software engineering, and AI, followed by the details of the grounded theory literature review (GTLR) methodology in Section 3 . Then, we discuss the challenges, threats, and limitations of the methodology in Section 4 , present the findings in Section 5 which is followed by the description of the taxonomy , insights, and recommendations in Section 6 . Then, we present the methodological lessons learned in Section 7 followed by a conclusion in Section 8 .

2 Background

2.1 Ethics in ICT and Software Engineering

The topic of ‘ethics’ has been well researched and widely discussed in the field of ICT for a long time. Over recent years, various IT professional organisations worldwide, like the Association for Computing Machinery (ACM), Footnote 6 the Institute for Certification of IT Professionals (ICCP), Footnote 7 and AITP Footnote 8 have developed their own codes of ethics (Payne and Landry 2006). These codes of ethics in the ICT domain are created to motivate and steer the ethical behavior of all computer professionals: those currently working in the field, those who aspire to do so, teachers, students, influencers, and anyone who makes significant use of computer technology, as defined by the ACM.

In 1991, Gotterbarn ( 1991 ) expressed concern about the insufficient emphasis placed on professional ethics in guiding the daily activities of computing professionals within their respective roles. Subsequently, he actively engaged in various initiatives aimed at advocating for ethical codes and fostering a sense of professional responsibility in the field. Studies have been conducted to explore how these codes of ethics affect the decision-making of professionals in the ICT sector. Ethics within the professional sphere can significantly aid ICT professionals in their decision-making, as evidenced by research conducted by Allen et al. ( 2011 ), and these codes have been observed to influence the conduct of ICT professionals (Harrington 1996 ). In 2010, Van den Bergh and Deschoolmeester ( 2010 ) conducted a survey involving 276 ICT professionals to explore the potential value of ethical codes of conduct for the ICT industry in dealing with contentious issues. They concluded that having a policy regarding ICT ethics does indeed significantly influence how professionals assess ethical or unethical situations in some cases. Fleischmann et al. ( 2017 ) conducted a mixed-method study with ICT professionals on the role of codes of ethics and the relationship between their experiences and attitudes towards the codes of ethics.

Likewise, studies have been conducted to investigate the impact of ethics in the area of Software Engineering. Rashid et al. (2009) concluded that ethics has been a very important part of software engineering and discussed the ethical challenges of software engineers who design systems for the digital world. Aydemir and Dalpiaz (2018) introduced an analytical framework to aid stakeholders, including users and developers, in capturing and analysing ethical requirements to foster ethical alignment within software artifacts and development processes. In a similar vein, according to Pierce and Henry (1996), one’s personal ethical principles, workplace ethics, and adherence to formal codes of conduct all play a significant role in influencing the ethical conduct of software professionals; they also delve into the extent of influence exerted by these three factors. On a related note, Hall (2009) examines the concept of ethical conduct in the context of software engineers, emphasizing the importance of good professional ethics. Furthermore, Fraga (2022) conducted a survey involving software engineering professionals to explore the role of ethics in their field. The findings suggest that the promotion of ethical leadership among systems engineers can be achieved when they adhere to established standards, codes, and ethical principles. These studies into ethics within the realms of ICT and Software Engineering indicate that this subject has been of significant importance for a long time, and that there has been a prolonged effort to improve ethical considerations in these fields.

In summary, there is a recognised need for a stronger focus on professional ethics in guiding the daily activities of computing professionals. Multiple studies consistently demonstrate the substantial influence of ethical codes on decision-making in the ICT sector and Software Engineering, shaping behavior and ethical assessments. The collective findings underscore the importance of ethical considerations in the fields of ICT and Software Engineering.

2.2 Secondary Studies on AI Ethics

A number of secondary studies have been conducted that focused on the theme of investigating the ethical principles and guidelines related to AI. For example, Khan et al. ( 2022 ) conducted a Systematic Literature Review (SLR) to investigate the agreement on the significance of AI ethical principles and identify potential challenges to their adoption. They found that the most common AI ethics principles are transparency, privacy, accountability, and fairness. However, significant challenges in incorporating ethics into AI include a lack of ethical knowledge and vague principles. Likewise, Ryan and Stahl ( 2020 ) conducted a review study to provide a comprehensive analysis of the normative consequences associated with current AI ethics guidelines, specifically targeting AI developers and organisational users. Lu et al. ( 2022 ) conducted a Systematic Literature Review (SLR) to identify the responsible AI principles discussed in the existing literature and to uncover potential solutions for responsible AI. Additionally, they outlined a research roadmap for the field of software engineering with a focus on responsible AI.

Likewise, review studies have been conducted to investigate the ethical concerns of the use of AI in different domains. Möllmann et al. (2021) conducted a Systematic Literature Review (SLR) to explore which ethical considerations of AI are being investigated in digital health and classified the relevant literature based on the five ethical principles of AI: beneficence, non-maleficence, autonomy, justice, and explicability. Likewise, Royakkers et al. (2018) conducted an SLR to explore the social and ethical issues that arise from digitization, based on six technologies: the Internet of Things, robotics, biometrics, persuasive technology, virtual & augmented reality, and digital platforms. The review uncovered recurring themes such as privacy, security, autonomy, justice, human dignity, control of technology, and the balance of powers.

Studies have also been conducted to explore different methods and approaches to enhance the ethical development of AI. For example, Wiese et al. (2023) conducted a Systematic Literature Review (SLR) to explore methods to promote and engage practice on the front end of ethical and responsible AI. The study was guided by an adaptation of the PRISMA framework and Hess & Fore’s 2017 methodological approach. Morley et al. (2020) conducted a review study with the aim of exploring AI ethics tools, methods, and research that are accessible to the public, for translating ethical principles into practice.

Most of the secondary studies have either focused on investigating specific AI ethical principles, the ethical consequences of AI systems, or the approaches to enhance the ethical development of AI. Conducting a review study to identify and analyse primary empirical research on AI practitioners’ perspectives regarding AI ethics is important for gaining an understanding of the ethical landscape in the field of AI. It can also inform practical interventions, contribute to policy development, and guide educational initiatives aimed at promoting responsible and ethical practices in the development and deployment of AI technologies.

2.3 Ethics in AI

There are numerous and divergent views on the topic of ethics in AI (Vakkuri et al. 2020b; Mittelstadt 2019; Hagendorff 2020), as it has been increasingly applied in various contexts and industries (Kessing 2021). AI practitioners and researchers seem to have mixed perspectives about AI ethics. Some believe there is no rush to consider AI-related ethical issues as AI is still a long way from being comparable to human capabilities and behaviors (Siau and Wang 2020), while others conclude that AI systems must be developed with ethics in mind as they can have enormous societal impact (Bostrom and Yudkowsky 2018; Bryson and Winfield 2017). Although the viewpoints vary from practitioner to practitioner, most conclude that AI ethics is an emerging and widely discussed topic and a relevant real-world issue (Vainio-Pekka 2020). This indicates that while opinions on the importance of AI ethics may differ, there is a consensus that the subject is highly relevant in the present context.

A number of studies conducted in the area of ethics in AI have been conceptual and theoretical in nature (Seah and Findlay 2021 ). Critically, there are copious numbers of guidelines on AI ethics, making it challenging for AI practitioners to decide which guidelines to follow. Unsurprisingly, studies have been conducted to analyse the ever-growing list of specific AI principles (Kelley 2021 ; Mark and Anya 2019 ; Siau and Wang 2020 ). For example, Jobin et al. ( 2019 ) reviewed 84 ethical AI principles and guidelines and concluded that only five AI ethical principles – transparency , fairness , non-maleficence , responsibility and privacy – are mainly discussed and followed. Fjeld et al. ( 2020 ) reviewed 36 AI ethical principles and reported that there are eight key themes of AI ethics – privacy , accountability , safety and security , transparency and explainability , fairness and non-discrimination , human control of technology , professional responsibility , and promotion of human values . Likewise, Hagendorff ( 2020 ) analysed and compared 22 AI ethical guidelines to examine their implementation in the practice of research, development, and application of AI systems. Some review studies focused on exploring the challenges and potential solutions in the area of ethics in AI, for example, Jameel et al. ( 2020 ); Khan et al. ( 2022 ). The desire to set ethical guidelines in AI has been enhanced due to increased competition between organisations to develop robust AI tools (Vainio-Pekka 2020 ). Among them, only a few guidelines indicate an oversight or enforcement mechanism (Inv 2019 ). It suggests that recent research has dedicated significant attention to the analysis and comparison of various sets of ethical principles and guidelines for AI.

Similarly, AI practitioners have expressed various concerns regarding the public policies and ethical guidelines related to AI. For example, while the ACM Codes of Ethics places responsibility on AI practitioners creating AI-based systems, a research study revealed that these practitioners generally believe that only physical harm caused by AI systems is crucial and should be taken into account (Veale et al. 2018). Similarly, in November 2021, the UN Educational, Scientific, and Cultural Organisation (UNESCO) signed a historic agreement outlining shared values needed to ensure the development of Responsible AI (UN 2021). The study conducted by Varanasi and Goyal (2023) involved interviewing 23 AI practitioners from 10 organisations to investigate the challenges they encounter when collaborating on Responsible AI (RAI) principles defined by UNESCO. The findings revealed that practitioners felt overwhelmed by the responsibility of adhering to specific RAI principles (non-maleficence, trustworthiness, privacy, equity, transparency, and explainability), leading to an uneven distribution of their workload. Moreover, implementing certain RAI principles (accuracy, diversity, fairness, privacy, and interoperability) in real-world scenarios proved difficult due to conflicts with personal and team values. Similarly, Rothenberger et al. (2019) conducted an empirical study with AI experts to evaluate several AI ethics guidelines, including the Microsoft AI Ethical Principles. The study found that the participants considered ‘Responsibility’ to be the foremost and notably significant ethical principle in the realm of AI. Following closely, they ranked ‘Privacy protection’ as the second most crucial principle.
This emphasises the perspective of these AI experts, who consider prioritising responsible AI practices and safeguarding user privacy to be fundamental aspects of ethical advancement and implementation of AI, without regarding other principles as equally crucial. Likewise, an empirical investigation was carried out by Sanderson et al. ( 2023 ), involving AI practitioners and designers. This study aimed to assess the Australian Government’s high-level AI principles and investigate how these ethical guidelines were understood and applied by AI practitioners and designers within their professional contexts. The results indicated that implementing certain AI ethical principles, such as those related to ‘Privacy and security’ , ‘Transparency’ and ‘Explainability’ , and ‘Accuracy’ , posed significant challenges for them. This suggests that there have been studies exploring the relationship between AI practitioners and the guidelines established by public organisations, as well as their sentiments towards each guideline.

Another prominent area of focus has been studies that were conducted to discuss the existing gap between research and practice in the field of ethics in AI. Smith et al. ( 2020 ) conducted a review study to identify gaps in ethics research and practice of ethical data-driven software development and highlighted how ethics can be integrated into the development of modern software. Similarly, Shneiderman ( 2020 ) provided 15 recommendations to bridge the gap between ethical principles of AI and practical steps for ethical governance. Likewise, there are solution-based papers and papers discussing models, frameworks, and methods for AI developers to enhance their AI ethics implementation. For example, an article by Vakkuri et al. ( 2021 ) presents the AI maturity model for AI software. In contrast, another article by Vakkuri et al. ( 2020a ) discusses the ECCOLA method for implementing ethically aligned AI systems. There are also papers presenting the toolkit to address fairness in ML algorithms (Castelnovo et al. 2020 ) and transparency model to design transparent AI systems (Felzmann et al. 2020 ). In general, it suggests that recent studies have centered on addressing the gap between research and practical application in the field of AI ethics. This also involves the development of various tools and methods aimed at improving the ethical implementation of AI.

Overall, existing studies seem to primarily focus on either analysing the plethora of ethical AI principles, filling the gap between research and practice, or discussing tool-kits and methods. However, compared to the number of papers on AI ethics describing ethical guidelines and principles, and tools and methods, there is a relative lack of studies that focus on the views and experiences of AI practitioners on AI ethics (Vakkuri et al. 2020b ). Furthermore, the literature also underscores the necessity for review studies that evaluate and synthesise the existing primary research on AI practitioners’ views and experiences of AI ethics (Khan et al. 2022 ; Leikas et al. 2019 ). To assimilate, analyse, and present the empirical evidence spread across the literature, we conducted a Grounded Theory Literature Review (GTLR) to investigate AI practitioners’ viewpoints on ethics in AI with some adaptations to the original framework, drawing data from papers whose prime focus may not have been understanding practitioners’ viewpoints but that nonetheless contained information about the same.

3 Review Methodology

While the importance of understanding AI practitioners’ viewpoints on ethics in AI has been highlighted (Vakkuri et al. 2020b), there are not enough dedicated research articles on the topic to effectively conduct a systematic literature review or mapping study. This is mainly because there are not enough papers dedicated to investigating AI practitioners’ views on ethics in AI such that their focus could be apparent from the title and abstract. Papers that include this as part of their findings are difficult to identify and select without a full read-through, making it ineffective and impractical when dealing with thousands of papers. At the same time, we were aware of a more responsive yet systematic method for reviewing the literature, called grounded theory literature review (GTLR), introduced by Wolfswinkel et al. (2013). GT is a popular research method that offers a pragmatic and adaptable approach for interpreting complex social phenomena (Charmaz 2000). It provides a robust intellectual rationale for employing qualitative research to develop theoretical analyses (Goulding 1998). In Grounded Theory, researchers refrain from starting with preconceived hypotheses or theories to validate or invalidate. Instead, they initiate the research process by gathering data within the context, conducting simultaneous analysis, and subsequently formulating hypotheses (Strauss and Corbin 1990). This method is appropriate for our study because our research topic incorporates socio-technical aspects, and we chose not to commence with a predetermined hypothesis. Instead, our approach centered on examining the viewpoints of AI practitioners regarding AI ethics as outlined in the existing literature.

While the overarching grounded theory literature review (GTLR) framework helped frame the review process, we found ourselves having to work through the concrete application details using the practices of socio-technical grounded theory (STGT). In doing so, we made some adaptations to the five-step framework of define , search , select , analyse , and present described in the original GTLR guidelines by Wolfswinkel et al. ( 2013 ) and applied STGT’s concrete data analysis steps (Hoda 2021 ). Figure 1 presents an overview of the GTLR steps using the STGT method for data analysis as applied in this study. Table 1 presents a comparison between GTLR as we applied it and a traditional Systematic Literature Review (SLR) (Kitchenham et al. 2009 ).

figure 1

Steps of the Grounded Theory Literature Review (GTLR) method with Socio-Technical Grounded Theory (STGT) for data analysis

3.1 Define

The first step of a grounded theory literature review (GTLR) is to formulate the initial review protocol: determining the scope of the study by defining inclusion and exclusion criteria and search items, followed by finalising databases and search strings, with the aim of obtaining as many relevant primary empirical studies as possible. Including only empirical studies was one of the inclusion criteria of our study, presented in Table 3 . By ‘empirical papers’, we refer to those that draw information directly from primary sources, such as interview and survey studies (studies that gather participants’ perspectives on a specific subject through surveys, not literature surveys). The research question (RQ) formulated was, What do we know from the literature about the AI practitioners’ views and experiences of ethics in AI?

3.1.1 Sources

Four popular digital databases, namely ACM Digital Library (ACM DL), IEEE Xplore, SpringerLink, and Wiley Online Library (Wiley OL), were used as sources to identify the relevant literature. This choice was driven by the interdisciplinary nature of the topic, ‘ethics in AI.’ Given the rapid expansion of literature on AI ethics in recent years, researchers have been contributing their work to a variety of venues. Since we were interested in understanding how AI practitioners perceive AI ethics, and this emphasis on practitioners’ perspectives was particularly prominent within Software Engineering and Information Systems venues, we selected databases covering these fields. These databases have also been regularly used to conduct reviews on human aspects of software engineering, for example, Hidellaarachchi et al. ( 2021 ) and Perera et al. ( 2020 ). Initially, we searched for relevant studies that were published in journals and conferences only and for which full texts were available.

3.1.2 Search Strings

We initiated the process of developing search queries by selecting key terms related to our research topic: “ethics”, “AI”, and “developer”. This choice was made in line with the primary objective of our study, which was to investigate the perspectives of AI practitioners on ethics in AI. Subsequently, we expanded our search by incorporating synonyms for these key terms to ensure a more comprehensive retrieval of relevant primary studies. In constructing the final search string, we employed the Boolean operators ‘AND’ and ‘OR’ to link these search terms. However, using the terms “ethics”, “AI”, and “developer”, along with their synonyms, returned a number of papers too large to review, as illustrated in Appendix B . In an attempt to reduce the number of papers to a manageable level, we used the term “ethic*” along with synonyms for “AI” and “developer”; unfortunately, this approach yielded no results in some databases, as detailed in Appendix B . It therefore became imperative for us to develop a search query that would return a reasonable number of relevant primary studies.

Six candidate search strings were developed and executed on the databases before one was finalised. Table 2 shows the initial and final search strings. As the finalised search string returned an extremely large number of primary studies (N=9,899), we restricted the publication period to January 2010 through September 2022 in all four databases, as the topic of ethics in AI has gained rapid prominence over the last ten years. Table 3 shows the seed and final protocols, including inclusion and exclusion criteria (Wolfswinkel et al. 2013 ).
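The assembly of such a Boolean search string, linking three synonym groups with ‘AND’ and the synonyms within each group with ‘OR’, can be sketched as below. This is an illustrative sketch only, using the final search terms reported in this study; the exact query syntax accepted by each database differs.

```python
# Illustrative sketch of assembling the final Boolean search string from
# the three synonym groups; actual query syntax varies per database.
ethics_terms = ['"ethic*"', '"moral*"', '"fairness"']
ai_terms = ['"artificial intelligence"', '"AI"', '"machine learning"', '"data science"']
practitioner_terms = ['"software developer"', '"software practitioner"', '"programmer"']

def or_group(terms):
    """Join synonyms with OR and wrap the group in parentheses."""
    return "(" + " OR ".join(terms) + ")"

# Link the three groups with AND, as described above.
query = " AND ".join(or_group(group) for group in (ethics_terms, ai_terms, practitioner_terms))
print(query)
```

Running the sketch reproduces the final search string structure shown in Table 2.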

3.2 Search

We performed the search using our seed review protocol , presented in Table 3 . The search process was iterative and time-consuming because some combinations of search terms returned more papers than were manageable to go through, whereas others returned very few studies. Appendix B documents the search process, showing the revision of the first search string through to the final search string.

3.3 Select

We obtained a total of 1,337 primary articles ( ACM DL: 312, IEEEX: 367, SpringerLink: 575 and Wiley OL: 83 ) using the final search string (shown in Table 2 ) and the seed review protocol (shown in Table 3 ). After filtering out duplicates, we were left with 1,073 articles. As per the grounded theory literature review (GTLR) guidelines of Wolfswinkel et al. ( 2013 ), the next step was to refine the whole sample based on title and abstract. We tried this approach for the first 200 articles in each of ACM DL, IEEEX, and SpringerLink, and for all 83 articles in Wiley OL, to get a sense of how many articles were relevant to our research question. We read the abstracts of the articles whose titles seemed relevant to our research topic and applied the inclusion and exclusion criteria to select the relevant articles. We quickly realised that selection based on title and abstract was not working well, because the presence of the key search terms (for example, “ ethics ” AND “ AI ” AND “ developer ”) was rather common and did not imply that a paper would include the practitioner’s perspective on ethics in AI. We found ourselves having to scan through full texts to judge relevance to our research question (RQ). Despite the effort involved, the return on investment was very low: for every hundred papers read, we found only one or two relevant papers, i.e., those that included AI practitioners’ views on ethics in AI.
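The duplicate-filtering step across the four database exports can be sketched as follows. The record structure and the normalised-title matching are our assumptions for illustration, not details reported in the study; in practice, reference managers perform this matching.

```python
# Hypothetical sketch of title-based deduplication across database exports.
def normalise(title):
    """Lower-case the title and collapse whitespace to form a match key."""
    return " ".join(title.lower().split())

def deduplicate(records):
    """Keep the first record seen for each normalised title."""
    seen, unique = set(), []
    for rec in records:
        key = normalise(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Ethics in AI", "source": "ACM DL"},
    {"title": "ethics in  AI", "source": "IEEEX"},   # same paper, different formatting
    {"title": "Fairness in ML", "source": "Wiley OL"},
]
print(len(deduplicate(records)))  # 2
```

Title-only matching is a simplification; real deduplication would also compare authors, year, and DOI where available.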

Out of the 683 papers scanned, we obtained only 13 primary articles relevant to our research topic. Many articles, albeit interesting, did not present AI practitioners’ views on ethics in AI. We therefore decided to find more relevant articles through snowballing. “Snowballing refers to using the reference list of a paper or the citations to the paper to identify additional papers” (Wohlin 2014 ). We snowballed those 13 articles via forward and backward citations to find more relevant articles and improve the coverage of the review. Snowballing worked better for us than the traditional search approach. We modified the seed review protocol accordingly to include papers published in other databases and beyond journals and conferences, including students’ theses, reports, and research papers uploaded to arXiv . The final review protocol used in this study is presented in Table 3 . In this way, we obtained 25 more relevant articles through snowballing, taking the total number of primary articles to 38.

Here we note that the select step of scanning through the full contents of 683 articles was very tedious, with a very low return on investment: only 13 relevant studies obtained. In hindsight, we would have done better to start with a set of seed papers collectively known to the research team or obtained from quick searches on Google Scholar. What we did next, proceeding from the seed papers through cycles of snowballing, was more practical, productive, and in line with the iterative Grounded Theory (GT) approach, as a form of applied theoretical sampling.
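The cycles of forward and backward snowballing described above can be sketched as an iterative worklist procedure. The helper functions `get_references`, `get_citations`, and `is_relevant` are hypothetical stand-ins for the manual look-up (e.g., via Google Scholar) and selection steps; they are not tools reported in the study.

```python
# Sketch of iterative backward/forward snowballing (Wohlin 2014).
# get_references, get_citations, and is_relevant are hypothetical
# stand-ins for the manual look-up and selection steps.
def snowball(seed_ids, get_references, get_citations, is_relevant):
    included = set(seed_ids)
    frontier = set(seed_ids)
    while frontier:
        candidates = set()
        for paper in frontier:
            candidates |= set(get_references(paper))  # backward snowballing
            candidates |= set(get_citations(paper))   # forward snowballing
        newly_included = {p for p in candidates - included if is_relevant(p)}
        included |= newly_included
        frontier = newly_included  # stop when no new relevant papers appear
    return included
```

In our case, the 13 seed papers and the relevance criteria of the final review protocol (Table 3) would play the roles of `seed_ids` and `is_relevant`.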

3.4 Analyse

Our review topic and domain lent themselves well to the socio-technical research context supported by socio-technical grounded theory (STGT): our domain was AI, the actors were AI practitioners, the research team was collectively well versed in qualitative research and the AI domain, and the data was collected from relevant sources (Hoda 2021 ). We applied the procedures of open coding , constant comparison , and memoing in the basic stage, and targeted data collection and analysis and theoretical structuring in the advanced stage of theory development, using the emergent mode.

The qualitative data included findings covered in the primary studies, including excerpts of raw underlying empirical data contained in the papers. Data were analysed iteratively in small batches. At first, we analysed the qualitative data of 13 articles that were obtained in the initial phase. We used the standard socio-technical grounded theory (STGT) data analysis techniques such as open coding, constant comparison, and memoing for those 13 articles, and advanced techniques such as targeted coding on the remaining 25 articles, followed by theoretical structuring. This approach of data analysis is rigorous and helped us to obtain multidimensional results that were original, relevant, and dense, as evidenced by the depth of the categories and underlying concepts (presented in Section 5 ). The techniques of the socio-technical grounded theory (STGT) data analysis are explained in the following section. We also obtained layered understanding and reflections through reflective practices like memo writing (Fig. 2 ), which are presented in Section 6 .

3.4.1 The Basic Stage

We performed open coding to generate codes from the qualitative data of the initial set of 13 articles. Open coding was done for each line of the ‘Findings’ sections of the included articles to ensure we did not miss any information or insights related to our research question (RQ). The amount of qualitative data varied from article to article: some articles had in-depth, long ‘Findings’ sections whereas others had short ones. Open coding for some articles consumed a lot of time and led to hundreds of codes, whereas other articles yielded only a limited number of codes (Fig. 3 ).

Similar codes were grouped into concepts and similar concepts into categories using constant comparison. Examples of the application of Socio-Technical Grounded Theory (STGT)’s data analysis techniques to generate codes, concepts, and categories are shown in Fig. 3 , and a number of quotations from the original papers are included in Section 5 to provide “ strength of evidence ” (Hoda 2021 ). The process of developing concepts and categories was iterative: as we read more papers, we refined the emerging concepts and categories based on the new insights obtained. The first author initiated the coding process in Google Docs and later transitioned to Google Sheets due to the growing number of codes and concepts. The second author then independently reviewed the codes and concepts generated by the first author, and feedback and revisions were discussed in detail during meetings involving all the authors. To clarify roles: the first author handled the coding, the second author offered feedback on the codes, concepts, and categories, and the remaining two authors contributed to refining the findings through critical questioning and feedback.

Each code was numbered (C1, C2, C3, ...) and labeled with the ID of the paper it belonged to (for example, G1, G2, G3) to enable tracing and improve retrospective comprehension of the underlying contexts.
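As a minimal illustration of this labelling scheme (the function and storage structure are ours for illustration, not from the study):

```python
# Minimal sketch of the code-labelling scheme: each open code receives a
# sequential ID (C1, C2, ...) plus the ID of its source paper (G1..G38)
# so that every code stays traceable to its original context.
codes = []

def add_code(text, paper_id):
    code_id = f"C{len(codes) + 1}"
    codes.append({"id": code_id, "paper": paper_id, "text": text})
    return f"{code_id} [{paper_id}]"

print(add_code("principles vs practice gap", "G1"))  # C1 [G1]
```

In practice this bookkeeping was done in Google Sheets rather than in code, but the traceability idea is the same.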

While the open coding led to valuable results in the form of codes, concepts, and categories, memoing helped us reflect on the insights related to the most prominent codes, concepts, and emerging categories. We also wrote reflective memos to document our reflections on the process of performing a grounded theory literature review (GTLR). These insights and reflections are presented in Section 6 . An example of a memo created for this study is presented in Fig. 2 .

figure 2

Example of a memo arising from the code (“principles vs practice gap”) labeled [C1]

figure 3

Examples of applying Socio-Technical Grounded Theory (STGT) data analysis techniques to generate codes, concepts, and categories

3.4.2 The Advanced Stage

The codes and concepts generated from open coding in the basic stage led to the emergence of five categories: practitioner awareness , practitioner perception , practitioner need , practitioner challenge , and practitioner approach to AI ethics, with different levels of detail and depth underlying each. Once these categories were generated, we proceeded to identify new papers using forward and backward snowballing in the advanced stage of theory development. Since our topic under investigation was rather broad to begin with, and some key categories of varying strengths had been identified, an emergent mode of theory development seemed appropriate for the advanced stage (Hoda 2021 ).

We proceeded to iteratively perform targeted data collection and analysis on more papers. Targeted coding involves generating codes that are relevant to the concepts and categories emerging from the basic stage (Hoda 2021 ). Reflections captured through memoing and snowballing served as an application of theoretical sampling when dealing with published literature, similar to how it is applied in primary socio-technical grounded theory (STGT) studies.

We performed targeted coding in chunks of two to three sentences or short paragraphs that seemed relevant to our emergent findings, instead of line-by-line coding, and continued with constant comparison. This process was much faster than open coding. The codes developed using targeted coding were placed under relevant concepts, and new concepts were aligned with existing categories in the same Google Sheet. In this stage, our memos became more advanced in the sense that they helped identify relationships between the concepts and develop a taxonomy. We continued with targeted data collection and analysis until all 38 selected articles were analysed. Finally, theoretical structuring was applied: we considered our findings against common theory templates to identify whether any naturally fit. In doing so, we realised that the five categories together describe the main facets of how AI practitioners view ethics in AI, forming a multi-faceted taxonomy, similar to Madampe et al. ( 2021 ).

3.5 Present

As the final step of the grounded theory literature review (GTLR) method, we present the findings of our review study: the five key categories that together form a multi-faceted taxonomy with underlying concepts and codes. We developed a taxonomy instead of a theory because we adhered to the principles outlined by Wolfswinkel et al. ( 2013 ) for conducting our Grounded Theory Literature Review, whose key idea is to use the knowledge gained through analysis to decide how best to structure and present the findings so that they communicate the insights effectively. Likewise, the Socio-Technical Grounded Theory (STGT) method (Hoda 2021 ) that we used to analyse our data recommends that researchers engage in theoretical structuring by identifying the type of theory that aligns best with their data, such as process , taxonomy , degree, or strategies (Glaser 1978 ). A taxonomy was the most suitable structure for the data we collected.

This is followed by a discussion of the findings and recommendations. In presenting the findings, we also make use of visualisations (see Figs. 4 and 5 ) (Wolfswinkel et al. 2013 ).

4 Challenges, Threats and Limitations

We now discuss some of the challenges, threats, and limitations of the Grounded Theory Literature Review (GTLR) method in our study.

4.1 Grounded Theory Literature Review (GTLR) Nature

Unlike a Systematic Literature Review (SLR), a Grounded Theory Literature Review (GTLR) study does not aim to achieve completeness. Rather, it focuses on capturing the ‘lay of the land’ by identifying the key aspects of the topic and presenting rich explanations and nuanced insights. As such, while the process of a GTLR can be replicated, the resulting descriptive findings are not easily reproducible. Similarly, our study does not aim to be exhaustive, as it adheres to a grounded theory methodology. The literature sample underwent thoughtful consideration and, although it is not all-encompassing, we have taken steps to assess its representativeness. Instead of a representative sampling approach, we used theoretical sampling, acknowledging that our sample might not exhibit the same level of representativeness as seen in an SLR, which is one of the limitations of our study.

4.2 Search Items and Strategies

Our search and selection steps for identifying the seed papers and subsequent snowballing may have missed some relevant papers. This threat depends on the list of keywords selected for the study and the limitations of the search engines. To minimise this risk, we used an iterative approach to develop the search strings. Initially, we chose key terms from our research title and added their synonyms to develop the final search strings that returned the most relevant studies. For example, we included “fairness” in our final search string because when we used only the term “ethics”, we obtained zero articles in two databases ( ACM DL and Wiley OL ). The documentation of the search process is presented in Appendix B . However, we used only the term “fairness” and did not include related terms like “explainability” and “interpretability” in our final search string. Consequently, we may have missed papers that explore AI practitioners’ views on interpretability and explainability, which is a limitation of our study.

The final search terms (“ethic*” OR “moral*” OR “fairness”) AND (“artificial intelligence” OR “AI” OR “machine learning” OR “data science”) AND (“software developer” OR “software practitioner” OR “programmer”) that we used in our study seem to be biased towards engineering/computer science publication outputs. This represents one of the limitations of our research since publications related to understanding AI practitioners’ perspectives on ‘ethics in AI’ may not exclusively reside within technical publications but may also extend to disciplines within the social sciences and humanities. Our use of these search terms, which are inclined towards outputs in engineering and computer science, might have led to the omission of relevant publications from social science and humanities domains.

In our final search query, we opted for the term “software developer”. Given the iterative nature of our keyword design process, we had previously experimented with incorporating keywords like “data scientist”, in combination with terms like “AI practitioner” and “machine learning engineer”, to ensure that we did not inadvertently miss relevant papers. Unfortunately, this led to an overwhelming number of papers, posing a challenge for our study. Therefore, we decided to reduce the number of keywords and used only terms like “software developer”, “software practitioner”, and “programmer” to obtain a more manageable set of papers for our study. However, we acknowledge that not including the term “data scientist” in the search query may have caused us to miss some relevant papers, which is a limitation of our study.

The main objective of our study was to explore empirical studies focused on understanding AI practitioners’ views and experiences of ethics in AI. We looked at the people involved in the technical development of AI systems but not managers, which is a limitation of our study. Future studies could encompass managers, or separate reviews may delve into their perspectives on AI ethics. Likewise, we focused on studies published in the Software Engineering and Information Systems domains. However, we acknowledge that AI practitioners’ perspectives on AI ethics might have been extensively studied in the social sciences and humanities, areas we did not explore, which is another limitation of our study. Future research can encompass studies from these domains.

4.3 Review Protocol Modification

We decided to include only research-based articles in our grounded theory literature review (GTLR) study; future GTLR studies can include literature from non-academic sources, as in multi-vocal literature reviews (MLRs). Since there is a lack of theories, frameworks, and theoretical models around this topic, we wanted to conduct a rigorous review study to present multidimensional findings and develop theoretical foundations for this critical and emerging topic. Finding enough empirical articles related to the research topic was another challenge. To overcome this, we made some adaptations to the original GTLR framework proposed by Wolfswinkel et al. ( 2013 ): we relaxed the review protocol during the snowballing of articles and included studies published in venues other than journals and conferences. We also used studies uploaded to arXiv as seed papers due to the lack of enough peer-reviewed publications relevant to our research topic. arXiv is a useful resource for finding the latest research on emerging topics, and the quality of the work can be reasonably assessed from the draft. The growing impact of open sources like arXiv is evidenced by the increase in direct citations to arXiv in Scopus-indexed scholarly publications from 2000 to 2013 (Li et al. 2015 ).

figure 4

Taxonomy of ethics in AI from practitioners’ viewpoints

4.4 Time Constraints

We applied the socio-technical grounded theory (Hoda 2021 ) approach to analyse the qualitative data of the primary studies and focused on the ‘Findings’ sections that presented empirical evidence. We did not find information on the tools, software, frameworks, or models used by AI practitioners to implement ethics in AI, although one study [G10] mentioned the existence of various tools without providing details. Since we were following a broad and inductive approach, we were not specifically looking for information on tools. This lack of information was surprising, but future reviews and studies can investigate the use of tools in implementing AI ethics.

5 Findings

As explained above, five key categories emerged from the analysis: (i) practitioner awareness , (ii) practitioner perception , (iii) practitioner need , (iv) practitioner challenge , and (v) practitioner approach . Together, they form a taxonomy of ethics in AI from practitioners’ viewpoints , shown in Fig. 4 with the underlying codes and concepts, and represent the key aspects AI practitioners have been concerned with when considering ethics in AI. We describe each of the five key categories and their underlying codes and concepts, and share quotes from the included primary studies, attributing them to paper IDs G1 to G38 . The list of included studies is presented in Appendix A .

5.1 Practitioner Awareness

The first key category, or facet of the taxonomy, that emerged is Practitioner Awareness . This category emerged from two underlying concepts: AI ethics & principles-related awareness and team-related awareness .

5.1.1 AI Ethics & Principles—Related Awareness


5.1.2 Team—Related Awareness

Participants in some studies acknowledged their roles and responsibilities in integrating ethics into AI during its development. For instance, a participant in [G7] reported being aware of their roles and responsibilities in implementing ethics during the development of AI systems, and a participant in [G23] expressed awareness of playing a pivotal role in shaping the ethics embedded in an AI system.


In one study [G25], a participant acknowledged their lack of knowledge about the ethics of AI. Similarly, participants in other studies, such as [G6] and [G8], also expressed awareness of their insufficient understanding of AI ethics and ethical principles.

5.1.3 Overall Summary

A few AI practitioners reported awareness of the concept of AI ethics, ethical principles, and their importance and relevance in AI development. Likewise, very few AI practitioners were aware of the gap that exists between the ethical principles of AI and practice. Overall, this indicates a positive aspect concerning AI practitioners, as awareness of ethics is the initial step toward implementing ethical practices in AI development.

Similarly, some AI practitioners reported their understanding of the roles and responsibilities involved in the development of ethical AI systems. However, the primary focus of the majority of AI practitioners who participated in some studies was on recognising their own limitations that could result in the development of unethical AI systems. These limitations encompassed a lack of foresight and intention, insufficient self-reflection, limited knowledge of ethics, and a lack of awareness regarding cultural norms. In summary, this suggests that AI practitioners who participated in those studies engaged in significant introspection to comprehend the reasons behind the development of unethical AI systems. This introspective approach is positive because self-reflection can play a crucial role in identifying personal shortcomings and finding ways to address them.

5.2 Practitioner Perception

The second category is Practitioner Perception , which emerged from four underlying concepts: AI ethics & principles-related perception , User-related perception , Data-related perception , and AI system-related perception .

The perception category goes beyond acknowledging the existence of something and captures practitioners’ views and opinions about it, including held notions and beliefs. For example, it includes shared perceptions about the relative importance of ethical principles in developing AI systems, who is considered accountable for applying and upholding them, and the perceived cost of implementing ethics in AI.

5.2.1 AI Ethics & Principles-Related Perception

Perceptions about the importance of ethics varied. Some AI practitioners who participated in studies like [G1], [G29], and [G20] perceived ‘ethics’ as very important in developing AI systems. One study [G1] reported that AI practitioners acknowledged the importance of AI ethics: when participants were asked whether ethics is useful in AI, all of them (N=6) answered “Yes”. Nevertheless, it is important to consider that the participant sample size of this study [G1] was only six.


Developing responsible AI was seen as building positive relations between organisations and human beings by minimising inequality, improving well-being, and ensuring data protection and privacy. However, when it came to the relative importance of ethical principles, opinions were divided. An AI practitioner who participated in one study [G11] thought that AI systems must be fair in every way. Likewise, some participants in another study [G8] thought that fairness issues in AI systems must not only be minimised but completely avoided, highlighting the importance of developing fair AI systems. On the other hand, in the same study [G8], which surveyed 51 participants, the highest importance (an arithmetic mean of 4.71) was attributed to the principle of protection of data privacy. Other studies ([G6] and [G10]) also concluded that ‘Privacy protection and security’ was the most important ethical principle in AI system development.


5.2.2 User-Related Perception

Some AI practitioners who participated in studies like [G2], [G3], [G5], [G6], [G7], and [G34] had perceptions about users’ nature, technical abilities, drivers, and role in the context of ethics in AI. In this context, “users” encompassed the party commissioning a system, the end users, or both. Where possible, we clarify which specific user category participants referred to when discussing ethics in AI.


5.2.3 Data-Related Perception


The naïve perception some developers held of the potential for harm (or lack thereof) is worth noting. Along with that, some participants in one study [G2] highlighted the importance of data collection and curation in AI system development: they mentioned that collecting sufficient data from sub-populations and balancing it during the curation of data sets is essential to minimising the ethical issues of an AI system. A participant in [G15] shared a similar idea about collecting sufficient ethical data for developing AI systems.

On the other hand, some participants in one study [G18] reported that they minimised or avoided collecting users’ personal data as much as possible so that no ethical issues related to data privacy would arise during AI system development, whereas a participant in [G21] mentioned that they used privacy-preserving data collection techniques to reduce unethical work with data.

5.2.4 AI System-Related Perception


5.2.5 Overall Summary

Overall, our synthesis indicates that AI practitioners who participated in the studies had both positive and negative perceptions of the concept of AI ethics. While some practitioners thought ethics was important to consider while developing AI systems, others perceived it as a secondary concern and a non-functional requirement of AI. This diversity of views on AI ethics can have implications for the development and deployment of AI technologies and how ethical considerations are integrated into AI practices. Likewise, there were different views on the importance of different principles of AI ethics: some practitioners perceived developing fair AI as important, whereas others perceived maintaining privacy during AI development as more important. This diversity in views of different ethical principles might also impact the development of ethical AI-based systems.

Perceptions regarding ethical considerations in the development of AI systems also extended to the question of responsibility. While some AI professionals felt it was their duty to create ethical AI systems and bear accountability for any resulting harm, others believed that users and practitioners shared this responsibility. We think it is essential to establish clear definitions of who should be accountable for ethical considerations during AI development and for the consequences that arise from it, so that this important issue cannot be evaded. Perceptions also revolved around the expense associated with implementing ethical standards in AI development; we are curious whether, in the absence of cost barriers, AI practitioners could have created more ethically sound AI systems.

Some practitioners who participated in the studies also held unfavorable views regarding AI system users. Some believed that users generally did not pay much attention to AI ethics until actual ethical problems arose. Users were viewed as making judgments about AI systems based on personal biases rather than a deep understanding of how AI worked. Additionally, some participants perceived that users might resort to legal action against companies only when ethical issues with AI systems become apparent. Overall, this suggests a gap in user awareness and engagement with AI ethics, which could have implications for how AI is developed, used, and regulated.

Likewise, AI practitioners perceived a few data-related steps to be important for developing ethical AI systems. Proper data handling, sufficient data collection and data balancing, and avoiding personal data collection were perceived as important measures to mitigate the ethical issues of AI systems. This implies that data-related practices contribute to ethical behaviour and responsible AI development.

A few AI practitioners also had mixed perceptions about the nature of AI systems. Some expressed pessimism, suggesting that AI systems are excessively complex and inherently possess ethical issues that are difficult to mitigate. On the other hand, others viewed AI as socio-technical systems that, at the very least, take ethical considerations into account. Overall, this diversity in views highlights the ongoing debate and complexity surrounding AI ethics and underscores the importance of continued discussion and efforts to improve the ethical aspects of AI technology.

5.3 Practitioner Need

The review highlighted the different needs of AI practitioners which can help them enhance ethical implementation in AI systems. This category is underpinned by concepts such as AI ethics & principles-related need and team-related need.

5.3.1 AI Ethics & Principles—Related Need

Likewise, a few AI practitioners who participated in studies [G1] and [G5] reported that they are challenged to implement ethics in AI due to a lack of tools or methods for implementing ethics. For example, in study [G1], when AI practitioners were asked, “Do your AI development practices take into account ethics, and if yes, how?”, all respondents (N=6) answered “No”. This indicates that AI companies lack clear tools and methods to help AI practitioners implement ethics in AI. Another study [G19] concluded that there is a lack of tools that support continuous assurance of AI ethics. A participant in [G19] stated that it was challenging to rely on manual practices to manage ethics principles during AI system development.

5.3.2 Team—Related Need

There are a few needs related to AI practitioners that influence ethical implementation in a system. There is a need for effective communication between AI practitioners as it supports ethics implementation [G2], [G3], [G15]. A few participants in studies [G2] and [G3] expressed the need for tools to facilitate communication between AI model developers and data collectors. In study [G2], 52% of all respondents (79% of those asked) expressed that tools aiding communication between model developers and data collectors would be extremely valuable.

Similarly, some participants in [G18] reported that they were technology experts but did not have any knowledge or background in ethics. However, they were acutely aware of privacy concerns in AI use, highlighting an interesting relationship between practitioner awareness, perception, and challenges. A few participants in other studies, such as [G6], [G8], and [G25], also supported this notion.

5.3.3 Overall Summary

The AI practitioners who participated in the included primary studies discussed several requirements concerning the conceptualisation of AI ethics and ethical guidelines. Some of them also expressed the necessity for tools and methodologies that could aid them in improving the development of ethical AI systems. This suggests that there is an ongoing need for support and resources to assist AI practitioners in adhering to ethical principles during the AI development process.

Similarly, a few participants in some of the included primary studies also addressed certain requirements regarding AI development teams. Some of these needs pertained to individual self-improvement, including the improvement of communication within the team and possessing a strong foundation in ethics as prerequisites for developing ethical AI systems. Additionally, there was a mention of the importance of discussing ethical responsibilities among team members as another requirement. Overall, the data suggests a commitment to improving the ethical aspects of AI development, both in terms of principles and practical implementation, and a recognition that addressing these ethical challenges requires a multifaceted approach involving teams and individual professionals.

5.4 Practitioner Challenge

The fourth key category is Practitioner Challenge. Several challenges are faced by AI practitioners in implementing AI ethics, including AI ethics and principles-related challenge, organisation-related challenge, AI system-related challenge, and data-related challenge.

5.4.1 AI Ethics & Principles—Related Challenge

A number of challenges related to implementing AI ethics were reported, including knowledge gaps, gaps between principles and practice, ethical trade-offs including business value considerations, and challenges in implementing specific ethical principles such as transparency, privacy, and accountability.

Different types of challenges are mentioned and solutions are discussed in theory, but there is no demonstration of those solutions in practice [G1], [G3]. Translating AI principles into practice is a challenge for AI practitioners, as discussed by some participants in studies including [G1] and [G6].

5.4.2 Organisation—Related Challenge

5.4.3 AI System—Related Challenge

5.4.4 Data—Related Challenge

5.4.5 Overall Summary

Participants in the included primary studies discussed various challenges related to the concept of AI ethics and ethical principles. These included variations in how people understand ethics, the practical application of ethical principles, and consistent adherence to various ethical standards throughout the AI development process. In general, this data suggests that the primary challenge for practitioners is grasping the essence of ethics, which we consider the fundamental issue and the one that should be prioritised for resolution.

Similarly, organisations have contributed to obstructing AI practitioners in their efforts to develop ethical AI systems. Challenges raised by participants, such as limited budgets for integrating ethics, tight project deadlines, and restricted decision-making authority during AI development, indicate that organisations could assist AI practitioners by addressing these issues when feasible.

Some participants also discussed the challenges regarding the unpredictability of AI systems. They identified factors contributing to this unpredictability, such as profit maximisation, attention optimisation, and cyber-security threats. The absence of contingency plans to address issues stemming from AI system unpredictability was also discussed. Overall, it indicates that AI practitioners employ certain strategies to mitigate unpredictability in AI systems, but there is a demand for methods and tools to effectively prevent or manage such unpredictability. The development of such methods or tools would aid in reducing ethical risks associated with AI.

Participants discussed challenges associated with the data used to train AI models. They explained how the quality of data and the processes involved in handling data can influence AI development. Some AI practitioners faced challenges related to ensuring the ethical development of AI, primarily due to issues like inadequate data quality, poor data collection practices, and improper data usage. Overall, the data suggests that to ensure ethical AI development, it is essential to address issues related to data quality and data handling processes.

5.5 Practitioner Approach

The review of empirical studies provided insights into the approaches used by AI practitioners to implement ethics during AI system development. This category is underpinned by three key concepts: AI ethics & principles-related approach, team-related approach, and organisation-related approach to enhance ethics implementation in AI. AI practitioners discussed applied and/or potential strategies related to these three concepts. Applied strategies refer to the techniques or ways that AI practitioners reported using to enhance the implementation of ethics in AI, whereas potential strategies are the recommendations or possible solutions discussed by AI practitioners to enhance the implementation of ethics in AI.

5.5.1 AI Ethics & Principles—Related Approach

AI practitioners were also involved in setting customised regulations in the company and played an essential role in the development of AI ethics. This strategy was used to enhance ethics implementation by developing comprehensive and well-defined guidelines for AI ethics for the company [G7]. Some participants in a study [G11] also reported that they needed to customise the general policies in the organisation to better support privacy and accessibility for their specific circumstances to ensure AI fairness.

5.5.2 Team—Related Approach

Some participants in a study [G1] reported that organisations used proactive strategies such as speculating socio-ethical impacts and analysing hypothetical situations to enhance ethics implementation in AI development. Likewise, a few participants in another study [G5] supported the notion and mentioned that such strategies aimed to address ethical issues that may arise and to plan for their potential consequences. Analysing a hypothetical situation of unpredictability was a strategy used to address an AI system’s unpredictable behaviour [G1]. Similarly, a participant in a study [G2] reported that speculating on possible fairness issues of an AI system before deploying it was a strategy used to minimise fairness issues in AI.

However, some companies did not use proactive strategies to maintain transparency of AI systems but addressed transparency issues only when it impacted their business [G6]. Some AI practitioners just followed what is legal and shifted the ethical responsibilities to policymakers and legislative authorities [G7]. In contrast, some participants in a study [G24] placed the ethical responsibility on the company manager.

In addition to sharing experiences of tried and tested strategies, practitioners also discussed potential strategies that they thought could improve ethics in AI. A study [G10] concluded that appointing one individual to implement ethics during AI development is not a good option. The whole AI development team must be involved in the process of ethics implementation. In another study [G15], a participant proposed a similar notion, emphasising the involvement of not just senior members but also junior AI practitioners in integrating ethics during AI development.

Likewise, a participant in a study [G10] mentioned that tackling ethical issues in a timely manner, i.e., during the design and development of an AI system, helps enhance system transparency. In another study [G4], a participant recommended addressing ethical concerns during the development of AI systems, highlighting the necessity of providing AI developers with supportive methods.

5.5.3 Organisation—Related Approach

Some participants in a study [G18] reported several strategies provided by organisations to enhance ethics implementation in AI, such as ethics review boards. Likewise, a participant in a study [G21] mentioned that internal governance, such as ethics committees that establish AI ethical standards in an organisation, can give AI practitioners an opportunity to work closely with ethicists to verify whether ethics is being implemented appropriately during AI system development.

Some participants in studies like [G1] and [G5] stated that conducting audits was the other important strategy organisations provided to them to solve transparency issues. A participant in [G21] reported that employing AI auditors could help AI practitioners in developing ethical AI systems.

5.5.4 Overall Summary

Participants discussed several strategies that they used to ensure the ethical development of AI systems. Applied strategies related to AI ethics and principles, such as merging ethics and law and setting customised AI ethics regulations in the company, were used by the participants to ensure the ethical development of AI systems. Overall, this indicates that practitioners emphasise the comprehensive integration of all AI ethical principles to ensure that no aspect is overlooked during the development process.

Some approaches were performed by the team to ensure the ethical development of AI systems such as group discussions with colleagues on AI ethics, analysing hypothetical situations of AI ethical issues, considering socio-ethical impacts of AI, and discussion with policymakers and legal teams to ensure algorithms are abiding by laws. Overall, this data suggests a comprehensive and multidisciplinary approach to addressing AI ethics, where the team actively engages in discussions, analysis, and collaboration with various stakeholders to promote the ethical development of AI systems.

Some participants mentioned that their organisations currently use various methods, such as audits and ethics review boards, to promote ethical AI development. However, the discussion highlighted a greater emphasis on potential approaches that organisations could offer to their AI development teams to ensure ethical AI. For instance, some participants proposed that organisations could prioritise diversity within AI teams, provide education and training on AI ethics for practitioners, establish internal governance mechanisms like ethics committees, cultivate a cultural shift within the organisation towards ethical considerations, and implement tools like quizzes during the hiring process for AI teams to enhance ethical development. This indicates that organisations can offer additional support to AI practitioners in their pursuit of ethical AI systems, suggesting that there is more that can be done in this regard.

6 Discussion and Recommendations

6.1 Taxonomy of Ethics in AI from Practitioners’ Viewpoints

The taxonomy of ethics in AI from practitioners’ viewpoints aims to assist AI practitioners in identifying different aspects related to ethics in AI such as their awareness of ethics in AI, their perception towards it, the challenges they face during ethics implementation in AI, their needs, and the approaches they use to enhance better implementation of ethics in AI. Using the findings, we believe that AI development teams will have a better understanding of AI ethics, and AI managers will be able to better manage their teams by understanding the needs and challenges of their team members.

An overview of the taxonomy and the coverage of the underlying concepts across the categories is presented in Fig. 5. As mentioned previously, we obtained multiple concepts for each category. Some concepts were common across categories whereas others were unique. For example, ‘AI ethics & principles’ is a concept that emerged for each of the five categories, depicted by a full circle around the five categories. The ‘teams-related’ concept emerged for three categories, namely practitioner awareness, practitioner need, and practitioner approach, depicted by a crescent that covers these three categories on the top left. The ‘user-related’ concept emerged for only one category, practitioner perception, as seen by a small crescent over that category. The codes underlying these concepts were unique to each category, as seen in Fig. 4 and described in the ‘Findings’ section.

figure 5

An overview of the aspects of ethics in AI from AI practitioners’ viewpoints

The overview of the taxonomy shows that AI practitioners are mostly concerned about AI ethics and ethical principles. For example, they discussed their awareness of ethics [G16] and of different AI ethical principles such as transparency [G17], accountability [G3], fairness [G2], and privacy [G6]. They also shared positive perceptions, such as the importance and benefits of ethics, and negative perceptions, such as the high cost of applying ethics [G6] and ethics being a non-functional requirement in AI development [G10]. Likewise, they mentioned different challenges faced during AI ethics implementation that relate to AI ethics and principles, such as ethics conceptualisation [G1], the difficulty of translating principles into practice [G6], and making ethical choices [G7]. AI practitioners also reported needs related to AI ethics and principles, including the need for a universal ethics definition [G1] and tools to translate principles into practice [G6], along with the approaches they used to enhance the implementation of ethics in AI, such as merging ethical and legal considerations and setting customised regulations in the organisation [G7].

On the other hand, the review shows that AI practitioners have been less concerned about user-related aspects when it comes to ethics in AI. For example, AI practitioners perceive that users are unconcerned and incurious [G5] about the ethical aspects of the AI software they use unless there is a chance of an incident occurring [G3]. Likewise, they reported that users do not have much knowledge about AI, which makes them uninterested in the ethical aspects of AI-based systems [G7]. No challenges or needs related to users were reported in the literature that impact AI practitioners’ implementation of AI ethics in AI-based systems. In conclusion, AI ethics and principles and team-related aspects were front and center for AI practitioners, while user-related aspects received far less attention. Our findings contribute to the academic and practical discussions by exploring the studies that have included the views and experiences of AI practitioners about ethics in AI. As we conducted a grounded theory literature review (GTLR), we had an opportunity to rigorously review the primary empirical studies relevant to our research question and develop a taxonomy. We now discuss some of the insights captured through memoing and team discussions, accompanied by recommendations.

6.2 Ethics in AI – Whose Problem is it Anyway?

Participants of the primary studies had different perceptions of AI ethics and its implementation. Most studies included in our research concluded that AI practitioners perceived ethics as an essential aspect of AI [G5], [G20]. However, some participants had other viewpoints. A participant in [G1] stated that discussion on AI ethics does not affect most people, except for AI ethics discussions in massive companies like Google. Another participant from [G4] perceived ethics as a non-functional requirement in AI, something to be implemented externally [G23]. In contrast, a participant in [G4] stated that ethics could not be “outsourced” and should be implemented by the AI practitioners who are developing the software. The diverse perspectives of the participants on the implementation of ethics in AI highlight the complex nature of the topic and why organisations struggle to implement AI ethics.

Likewise, there were also different views on who should be accountable for implementing ethics in AI. An AI practitioner in a study [G30] shared the uncertainty typically present when deciding who or what is responsible when ethical issues arise in AI systems. Certain organisations attempt to define who should be held accountable, but again, there is no universal understanding. For example, the ACM Code of Ethics clearly places the responsibility on the professionals who develop these systems. On the other hand, some AI practitioners perceive that only physical harm caused by AI systems needs to be considered [G3]. This statement is alarming, as it hints that some practitioners hold the view that only physical harm is worth being concerned about.

Recommendations for Practice

Given the diverse perspectives on who owns accountability for considering ethics in AI systems development and for potential ethical issues arising from AI system use, it is important for AI development teams, which are usually multidisciplinary in nature, as well as managers and organisations at large, to have open discussions about such issues at their workplace [G5]. The lack of discussion about ethics within the tech industry has been identified as a significant challenge by engineers (Metcalf et al. 2019). For example, this can be done through organising discussion panels, hosting guest seminars by ethics and ethical AI experts, and running open online forums for employees to discuss such topics. Another approach is to collate the challenges specific to the organisation and see how they map to selected ethical frameworks, as was conducted at Australia’s national scientific research agency (CSIRO) [G26].

Practitioner discussions can be followed by strategic and organised attempts to reconcile perspectives, for example, teams collaboratively selecting an existing ethical framework or creating a bespoke one, and drafting practical approaches to implement it in their specific project contexts [G7], many of which may be application domain specific.

We recommend proactive awareness, as evidenced in our review, driven by factors such as personal interest and experiences [G6], organisational needs [G3], and regulations such as the General Data Protection Regulation (GDPR) [G6]. In contrast, reactive awareness, driven by customer complaints about AI ethical issues and negative media coverage [G2], is not desirable.

Similarly, we recommend proactive strategies such as speculating on socio-ethical impacts prior to developing an AI system [G5]. Speculating on socio-ethical impacts hints at speculative design approaches, which have been heavily discussed and supported by multiple studies (Lee et al. 2023; Alfrink et al. 2023). Other proactive strategies we recommend include analysing hypothetical situations of unpredictability to address the unpredictable behaviour of an AI system [G1], following codes of ethics and standards of practice [G18], including diverse people in the development team [G21], and having internal governance such as ethics committees in an organisation to establish AI ethical standards [G21].

Finally, there is also a need to consider accountability at the organisation and industry levels. For example, Ibanez et al. (2021) [G6] reported a need for ethical governance that can help practitioners address accountability issues.

6.3 Ethics-Critical Domains Lead the Way

Comparisons were made between the medical field and the IT field in terms of the awareness of ethical regulations in AI [G5]. Participants mentioned that practitioners developing AI used in the medical field are more aware of ethics because the medical field has stricter laws and regulations than IT. This hints that awareness of AI ethics depends on domain specificity. Domains such as medical and health are more ethics-aware than others and lead the way in ethics awareness and implementation.

The IT domain can learn from the advances in improving the awareness of and implementing ethics in the medical domain (Mittelstadt 2019 ). This includes digital, virtual, mobile, and tele-health areas, as well as AI systems developed in other domains.

Labelling certain domains as safety-critical and equating that with ethics-critical can be a flawed argument, leading to perceptions that domains traditionally considered non-safety-critical, such as gaming and social media, can be held to lower standards and expectations when it comes to ethics implementation. We know from multiple cases of cyberbullying and ‘intelligent’ games encouraging self-harm in young adults (for example, ‘The Blue Whale Game’ (Mukhra et al. 2019)) that this would be a mistake. We recommend that all domains aim to be ethics-critical.

6.4 Research can help in Fundamental and Practical Ways

The perspectives of AI practitioners on the nature of AI systems can have a significant impact on the implementation of ethics in AI. Some practitioners may view AI as a socio-technical system and therefore place a strong emphasis on ethics [G4], while others may view AI as a complex system and find it challenging to address ethical issues, leading them to avoid ethical considerations [G7]. The participants’ perspectives on AI systems indicate that the implementation of ethics depends on how practitioners perceive AI ethics.

Recommendations for Research

Based on our review findings, we recommend research including empirical studies, reviews, and solutions & tools development into the following topics.

Most of the participants in a study [G9] reported that ethical tools are not used in AI companies to enhance ethics implementation in AI. Therefore, reviewing the tools available to AI practitioners for enhancing AI ethics implementation, including their evaluation and feedback for improvement, would help make practitioners aware of the tools that are beneficial.

Based on our findings, it appears that some AI practitioners involved in studies such as [G5, G6, G19] mentioned the need for assistance in the form of tools and methodologies to effectively integrate ethics into AI and put ethical principles into action. Consequently, designing solutions in the form of tools and guidelines to tackle these challenges, in close collaboration with practitioners, would be advantageous.

Investigating users’ views of ethics in AI, for example, through a grounded theory literature review (GTLR) approach similar to the one applied in this review to capture practitioners’ views; to the best of our knowledge, this is the first grounded theory literature review (GTLR) in Software Engineering.

Understanding the interplay between the roles of practitioners and users in implementing ethics in the development and use of AI systems, since one of the findings of our study shows that AI practitioners who participated in the included primary studies were less concerned about user-related aspects, including human limitations, biases, and strengths, when developing ethical AI systems.

7 Methodological Lessons Learned

We followed the guidelines of Wolfswinkel et al. (2013) to conduct our grounded theory literature review (GTLR), as it is an overarching review framework that helped us frame the review process. A GTLR is suitable for exploring new and emerging research areas in depth, building theories, and making practical recommendations. The process involves an iterative approach to finding papers relevant to the research topic. As per Wolfswinkel et al. (2013), the sample is refined based on title and abstract after removing duplicates. However, the guidelines do not provide clear steps for when the return on investment is low. As mentioned in Section 3.3, we read the titles and abstracts of the first few samples (200 papers) in three databases, ACM DL, IEEEX, and SpringerLink, and all 83 papers in Wiley OL to gauge how many papers we might get. Unfortunately, this method proved inefficient, requiring full-text scans to judge relevance to our research topic. Despite considerable effort, the return on investment was minimal, with only one or two relevant papers that included AI practitioners’ views on ethics in AI found for every hundred papers. This experience taught us that for a very new research topic with highly specific inclusion and exclusion criteria, it is not worth going through the titles and abstracts of all the papers in the initial search, given the expected low return on investment.

From our initial search, we found only 13 papers. Since Wolfswinkel et al. (2013) welcome adaptations to their framework by acknowledging that “... one size does not fit all, and there should be no hesitation whatsoever to deviate from our proposed steps, as long as such variation is well motivated”, we conducted forward and backward snowballing on those 13 articles. During the snowballing process, we had to modify our seed review protocol to find relevant papers that had information on AI practitioners’ views on ethics in AI. This significantly helped us find more relevant articles: 25 more, to be precise. We discovered that employing the forward and backward snowballing method and relaxing the review protocol after identifying seed papers is a more effective way to find relevant literature, as it worked well for our research. While the Wolfswinkel et al. (2013) guidelines do not explicitly mention adjusting the review protocol, they are open to adaptations. In our study, we embraced this flexibility and made modifications that proved successful for us.

8 Conclusion

AI systems are only as ethical as the humans developing them. It is critical to understand how the humans in the trenches, the AI practitioners, view the topic of ethics in AI if we are to lay a firm theoretical foundation for future work in this area. With this in mind, we formulated the research question: What do we know from the literature about AI practitioners’ views and experiences of ethics in AI? To address this, we conducted a grounded theory literature review (GTLR) as introduced by Wolfswinkel et al. (2013), applying the concrete steps of socio-technical grounded theory (STGT) (Hoda 2021) for data analysis, and developed a taxonomy based on 38 primary empirical studies. Since there were not many empirical studies focusing exclusively on this niche topic, a grounded theory-based iterative and responsive review approach worked well to identify and extract relevant content from across multiple studies (that mainly focused on other related topics). The application of socio-technical grounded theory (STGT) data analysis procedures such as open coding, constant comparison, memoing, targeted coding, and theoretical structuring enabled rigorous analysis and taxonomy development. We identified five categories, practitioner awareness, practitioner perception, practitioner need, practitioner challenge, and practitioner approach, including the underlying concepts and codes giving rise to these categories. Taken together, and applying theoretical structuring, we developed a taxonomy of ethics in AI from practitioners’ viewpoints to guide AI practitioners, researchers, and educators in identifying and understanding the different aspects of AI ethics to consider and manage. The taxonomy serves as a research agenda for the community, where future work can focus on investigating and explaining each of the individual phenomena of practitioner awareness, perception, challenge, need, and approach in depth.
Future empirical studies can focus on improving the understanding and implementation of ethics in AI and recommend practical approaches to minimise ethical issues such as mitigating potential biases in AI development through frameworks and tools development.

Data Availability

All data generated or analysed during this study are included in this published article (and its supplementary information files).

https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles

Throughout the manuscript we use the term “product” for simplicity to refer to both “products and services”, where the distinction is usually clear from context. Also, the term ‘AI development’ encompasses both the development and implementation of new and existing AI methods and the use of AI methods as a key component of a broader system.

The term ‘practitioners’ in our study includes AI developers, AI engineers, AI specialists, and AI experts. The terms ‘AI practitioners’ and ‘practitioners’ are used interchangeably throughout our study.

We chose the term ‘AI system’ as an overarching way of capturing both AI- and ML-based systems, because all the seed papers included in our study focus on AI, ML, or both.

This study uses the term ‘primary research articles’ to denote empirical works where AI practitioners were directly approached for their perspectives.

https://www.acm.org/code-of-ethics

https://www.iccp.org/

https://aitp-ncfl.org/home/

(2019) AI ethics guidelines global inventory. https://inventory.algorithmwatch.org/about . Accessed 10 Aug 2022

(2020) How Dutch activists got an invasive fraud detection algorithm banned. https://algorithmwatch.org/en/syri-netherlands-algorithm/ . Accessed 22 Aug 2023

(2021) 193 countries adopt first-ever global agreement on the ethics of artificial intelligence. https://news.un.org/en/story/2021/11/1106612 . Accessed 26 Sept 2023

(2023) Australia’s AI ethics principles. https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles . Accessed 2 Oct 2023

Al-Kaswan A, Izadi M (2023) The (AB) use of open source code to train large language models. arXiv:2302.13681

Alfrink K, Keller I, Doorn N, Kortuem G (2023) Contestable camera cars: a speculative design exploration of public AI that is open and responsive to dispute. In: Proceedings of the 2023 CHI conference on human factors in computing systems. pp 1–16

Allen GN, Ball NL, Smith HJ (2011) Information systems research behaviors: what are the normative standards? MIS Quarterly, pp 533–551

Anderson M, Anderson SL (2011) Machine ethics. Cambridge University Press

Aydemir FB, Dalpiaz F (2018) A roadmap for ethics-aware software engineering. In: Proceedings of the international workshop on software fairness. pp 15–21

Van den Bergh J, Deschoolmeester D (2010) Ethical decision making in ICT: discussing the impact of an ethical code of conduct. Communications of the IBIMA pp 1–10

Bostrom N, Yudkowsky E (2018) The ethics of artificial intelligence. In: Artificial intelligence safety and security. Chapman and Hall/CRC, pp 57–69

Bryson J, Winfield A (2017) Standardizing ethical design for artificial intelligence and autonomous systems. Computer 50(5):116–119. https://doi.org/10.1109/MC.2017.154

Buolamwini J, Gebru T (2018) Gender shades: intersectional accuracy disparities in commercial gender classification. In: Conference on fairness, accountability and transparency. PMLR, pp 77–91

Castelnovo A, Crupi R, Del Gamba G, Greco G, Naseer A, Regoli D, Gonzalez BSM (2020) BeFair: addressing fairness in the banking sector. In: 2020 IEEE international conference on big data (Big Data). IEEE, pp 3652–3661, https://doi.org/10.1109/BigData50022.2020.9377894

Charmaz K (2000) Grounded theory: objectivist and constructivist methods. Handb Qual Res 2(1):509–535

Commission E (2019) Ethics guidelines for trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai . Accessed 2 Feb 2024

Dastin J (2018) Amazon scraps secret AI recruiting tool that showed bias against women. https://www.reuters.com/ . Accessed 22 Aug 2023

Defense (2020) DOD adopts 5 principles of artificial intelligence ethics. https://www.defense.gov/News/News-Stories/article/article/2094085/dod-adopts-5-principles-of-artificial-intelligence-ethics/ . Accessed 5 Feb 2024

Felzmann H, Fosch-Villaronga E, Lutz C, Tamò-Larrieux A (2020) Towards transparency by design for artificial intelligence. Sci Eng Ethics 26(6):3333–3361. https://doi.org/10.1007/s11948-020-00276-4

Fjeld J, Achten N, Hilligoss H, Nagy A, Srikumar M (2020) Principled artificial intelligence: mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication (2020-1)

Fleischmann KR, Hui C, Wallace WA (2017) The societal responsibilities of computational modelers: human values and professional codes of ethics. J Assoc Inf Sci Technol 68(3):543–552

Fraga A (2022) An ethical leadership approach for complex systems integrated into the systems engineering practice. In: Emerging trends in systems engineering leadership: practical research from women leaders. Springer, pp 261–280

Glaser BG (1978) Theoretical sensitivity. University of California

Gotterbarn D (1991) Computer ethics: responsibility regained. In: National forum, honor society of Phi Kappa Phi, vol 71, p 26

Goulding C (1998) Grounded theory: the missing methodology on the interpretivist agenda. Qual Market Res Int J 1(1):50–57

Hagendorff T (2020) The ethics of AI ethics: an evaluation of guidelines. Minds Mach 30(1):99–120. https://doi.org/10.1007/s11023-020-09517-8

Hall D (2009) The ethical software engineer. IEEE Soft 26(4):9–10

Harrington SJ (1996) The effect of codes of ethics and personal denial of responsibility on computer abuse judgments and intentions. MIS Quarterly, pp 257–278

Hidellaarachchi D, Grundy J, Hoda R, Madampe K (2021) The effects of human aspects on the requirements engineering process: a systematic literature review. IEEE Trans Softw Eng 48(6):2105–2127. https://doi.org/10.1109/TSE.2021.3051898

Hoda R (2021) Socio-technical grounded theory for software engineering. IEEE Trans Softw Eng 48(10):1–1. https://doi.org/10.1109/TSE.2021.3106280

Jameel T, Ali R, Toheed I (2020) Ethics of artificial intelligence: research challenges and potential solutions. In: 2020 3rd international conference on computing, mathematics and engineering technologies (iCoMET). IEEE, pp 1–6. https://doi.org/10.1109/iCoMET48670.2020.9073911

Jobin A, Ienca M, Vayena E (2019) The global landscape of AI ethics guidelines. Nat Mach Intell 1(9):389–399. https://doi.org/10.1038/s42256-019-0088-2

Kazim E, Koshiyama AS (2021) A high-level overview of AI ethics. Patterns 2(9)

Kelley S (2021) Employee perceptions of effective AI principle adoption. AI Principle Adoption & Implementation

Kessing M (2021) Fairness in AI: discussion of a unified approach to ensure responsible AI development. Master dissertation, KTH Royal Institute of Technology

Khan AA, Badshah S, Liang P, Khan B, Waseem M, Niazi M, Akbar MA (2022) Ethics of AI: A systematic literature review of principles and challenges. In: Proceedings of the international conference on evaluation and assessment in software engineering 2022, pp 383–392. https://doi.org/10.1145/3530019.3531329

Kitchenham B, Brereton OP, Budgen D, Turner M, Bailey J, Linkman S (2009) Systematic literature reviews in software engineering-A systematic literature review. Inf Softw Technol 51(1):7–15. https://doi.org/10.1016/j.infsof.2008.09.009

Lee PYK, Ma NF, Kim IJ, Yoon D (2023) Speculating on risks of AI clones to selfhood and relationships: Doppelganger-phobia, identity fragmentation, and living memories. Proc ACM Hum-Comput Interact 7(CSCW1):1–28

Leikas J, Koivisto R, Gotcheva N (2019) Ethical framework for designing autonomous intelligent systems. J Open Innov Technol Mark Complexity 5(1):18

Li X, Thelwall M, Kousha K (2015) The role of arXiv, RePEc, SSRN and PMC in formal scholarly communication. Aslib J Inf Manag 67(6):614–635. https://doi.org/10.1108/AJIM-03-2015-0049

Lu Q, Zhu L, Xu X, Whittle J, Xing Z (2022) Towards a roadmap on software engineering for responsible AI. In: Proceedings of the 1st international conference on AI engineering: software engineering for AI. pp 101–112

Madampe K, Hoda R, Grundy J (2021) A faceted taxonomy of requirements changes in agile contexts. IEEE Trans Softw Eng. https://doi.org/10.1109/TSE.2021.3104732

Mark R, Anya G (2019) Ethics of using smart city AI and big data: the case of four large European cities. ORBIT J 2(2):1–36. https://doi.org/10.29297/orbit.v2i2.110

Metcalf J, Moss E et al (2019) Owning ethics: corporate logics, silicon valley, and the institutionalization of ethics. Soc Res Int Q 86(2):449–476

Mittelstadt B (2019) Principles alone cannot guarantee ethical AI. Nat Mach Intell 1(11):501–507. https://doi.org/10.1038/s42256-019-0114-4

Möllmann NR, Mirbabaie M, Stieglitz S (2021) Is it alright to use artificial intelligence in digital health? a systematic literature review on ethical considerations. Health Inform J 27(4):14604582211052392

Morley J, Floridi L, Kinsey L, Elhalal A (2020) From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci Eng Ethics 26(4):2141–2168

Mukhra R, Baryah N, Krishan K, Kanchan T (2019) ‘blue whale challenge’: a game or crime? Sci Eng Ethics 25:285–291

Nalini B (2020) The Hitchhiker’s guide to AI ethics. https://towardsdatascience.com/ethics-of-ai-a-comprehensive-primer . Accessed 15 July 2022

Obermeyer Z, Emanuel EJ (2016) Predicting the future-big data, machine learning, and clinical medicine. N Engl J Med 375(13):1216

OECD (2019) OECD AI principles overview. https://oecd.ai/en/ai-principles . Accessed 5 Feb 2024

Payne D, Landry BJ (2006) A uniform code of ethics: business and it professional ethics. Commun ACM 49(11):81–84

Perera H, Hussain W, Whittle J, Nurwidyantoro A, Mougouei D, Shams RA, Oliver G (2020) A study on the prevalence of human values in software engineering publications. In: 2020 IEEE/ACM 42nd international conference on software engineering. pp 409–420, https://doi.org/10.1145/3377811.3380393

Pierce MA, Henry JW (1996) Computer ethics: the role of personal, informal, and formal codes. J Bus Ethics 15:425–437

Rashid A, Weckert J, Lucas R (2009) Software engineering ethics in a digital world. Computer 42(6):34–41

Rothenberger L, Fabian B, Arunov E (2019) Relevance of ethical guidelines for artificial intelligence-a survey and evaluation. In: ECIS

Royakkers L, Timmer J, Kool L, Van Est R (2018) Societal and ethical issues of digitization. Ethics Inf Technol 20:127–142

Ryan M, Stahl BC (2020) Artificial intelligence ethics guidelines for developers and users: clarifying their content and normative implications. J Inf Commun Ethics Soc 19(1):61–86

Sanderson C, Douglas D, Lu Q, Schleiger E, Whittle J, Lacey J, Newnham G, Hajkowicz S, Robinson C, Hansen D (2023) AI ethics principles in practice: perspectives of designers and developers. IEEE Transactions on Technology and Society

Seah J, Findlay M (2021) Communicating ethics across the AI ecosystem. SMU Centre for AI & Data Governance Research Paper (7)

Shneiderman B (2020) Bridging the gap between ethics and practice: guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Trans Interact Intell Syst 10(4):1–31. https://doi.org/10.1145/3419764

Siau K, Wang W (2020) Artificial intelligence ethics: ethics of AI and ethical AI. J Database Manag 31(2):74–87. https://doi.org/10.4018/JDM.2020040105

Smith MJ, Mitchell JA, Blajeski S, Parham B, Harrington MM, Ross B, Sinco B, Brydon DM, Johnson JE, Cuddeback GS et al (2020) Enhancing vocational training in corrections: a type 1 hybrid randomized controlled trial protocol for evaluating virtual reality job interview training among returning citizens preparing for community re-entry. Contemp Clin Trials Commun 19:100604. https://doi.org/10.1016/j.conctc.2020.100604

Strauss A, Corbin J (1990) Basics of qualitative research. Sage Publications

Vainio-Pekka H (2020) The role of explainable AI in the research field of AI ethics: systematic mapping study. Master dissertation, University of Jyväskylä

Vakkuri V, Kemell KK, Abrahamsson P (2020a) ECCOLA- A method for implementing ethically aligned AI systems. In: 2020 46th Euromicro conference on software engineering and advanced applications (SEAA). IEEE, pp 195–204. https://doi.org/10.1109/SEAA51224.2020.00043

Vakkuri V, Kemell KK, Kultanen J, Abrahamsson P (2020b) The current state of industrial practice in artificial intelligence ethics. IEEE Softw 37(4):50–57. https://doi.org/10.1109/MS.2020.2985621

Vakkuri V, Jantunen M, Halme E, Kemell KK, Nguyen-Duc A, Mikkonen T, Abrahamsson P (2021) Time for AI (ethics) maturity model is now. arXiv:2101.12701 , https://doi.org/10.48550/arXiv.2101.12701

Varanasi RA, Goyal N (2023) “It is currently hodgepodge”: examining AI/ML practitioners’ challenges during co-production of responsible AI values. In: Proceedings of the 2023 CHI conference on human factors in computing systems. pp 1–17

Veale M, Van Kleek M, Binns R (2018) Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In: Proceedings of the 2018 chi conference on human factors in computing systems. pp 1–14

Wiese LJ, Schiff DS, Magana AJ (2023) Being proactive for responsible AI: analyzing multiple sectors for innovation via systematic literature review. In: 2023 IEEE international symposium on ethics in engineering, science, and technology (ETHICS). IEEE, pp 1–1

Wohlin C (2014) Guidelines for snowballing in systematic literature studies and a replication in software engineering. In: Proceedings of the 18th international conference on evaluation and assessment in software engineering. pp 1–10. https://doi.org/10.1145/2601248.2601268

Wolfswinkel JF, Furtmueller E, Wilderom CP (2013) Using grounded theory as a method for rigorously reviewing literature. Eur J Inf Syst 22(1):45–55. https://doi.org/10.1057/ejis.2011.51

Acknowledgements

Aastha Pant is supported by the Faculty of IT Ph.D. scholarship from Monash University. C. Tantithamthavorn is partially supported by the Australian Research Council’s Discovery Early Career Researcher Award (DECRA) funding scheme (DE200100941). Also, the authors would like to thank Prof. John Grundy for his constructive feedback on the paper.

Open Access funding enabled and organized by CAUL and its Member Institutions.

Author information

Authors and Affiliations

Department of Software Systems and Cybersecurity, Monash University, Melbourne, Australia

Aastha Pant, Rashina Hoda & Chakkrit Tantithamthavorn

Faculty of Information Technology and Electrical Engineering, University of Oulu, Oulu, Finland

Burak Turhan

Corresponding author

Correspondence to Aastha Pant .

Ethics declarations

Conflicts of interest.

The authors declare that they have no conflict of interest.

Additional information

Communicated by: Andy Zaidman.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: List of Included Studies

Vakkuri V, Kemell K-K, Kultanen J, Siponen M, Abrahamsson P (2019) Ethically aligned design of autonomous systems: Industry viewpoint and an empirical study. arXiv:1906.07946

Holstein K, Wortman V J, Daume III H, Dudik M, Wallach H (2019) Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp 1–16, DOI: https://doi.org/10.1145/3290605.3300830

Veale M, VanKleek M, Binns R (2018) Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp 1–14, DOI: https://doi.org/10.1145/3173574.3174014

Vakkuri V, Kemell K-K, Kultanen J, Abrahamsson P (2020) The current state of industrial practice in artificial intelligence ethics. IEEE Software 37(4), pp 50–57, DOI: https://doi.org/10.1109/MS.2020.2985621

Vakkuri V, Kemell K-K, Abrahamsson P (2019) Implementing ethics in AI: Initial results of an industrial multiple case study. In International Conference on Product-Focused Software Process Improvement, pp 331–338, DOI: https://doi.org/10.1007/978-3-030-35333-9-24

Ibanez J C, Olmeda M V (2021) Operationalising AI ethics: How are companies bridging the gap between practice and principles? An exploratory study. AI & Society 37, pp 1-25, DOI: https://doi.org/10.1007/s00146-021-01267-0

Orr W, Davis J L (2020) Attributions of ethical responsibility by artificial intelligence practitioners. Information, Communication & Society 23(5), pp 719–735, DOI: https://doi.org/10.1080/1369118X.2020.1713842

Rothenberger L, Fabian B, Arunov E (2019) Relevance of ethical guidelines for artificial intelligence- A survey and evaluation. In Proceedings of the 27th European Conference on Information Systems, Stockholm & Uppsala, Sweden, DOI: https://aisel.aisnet.org/ecis2019_rip/26

Vakkuri V, Kemell K-K, Abrahamsson P (2019) Ethically aligned design: An empirical evaluation of the Resolvedd-strategy in software and systems development context. In 45th Euromicro Conference on Software Engineering and Advanced Applications, pp 46–50, DOI: https://doi.org/10.1109/SEAA.2019.00015

Kelley S (2021) Employee perceptions of effective AI principle adoption. AI Principle Adoption & Implementation, ResearchGate preprint

Madaio M, Stark L, Vaughan J W, Wallach H (2020) Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp 1–14, DOI: http://dx.doi.org/10.1145/3313831.3376445

Addis C, Kutar M (2019) AI management an exploratory survey of the influence of GDPR and FAT principles. In 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation, pp 342–347, DOI: https://doi.org/10.1109/SmartWorld-UIC-ATC-SCALCOM-IOP-SCI.2019.00102

Baker-Brunnbauer J (2021) Management perspective of ethics in artificial intelligence. AI and Ethics 1(2): 173–181, DOI: https://doi.org/10.1007/s43681-020-00022-3

Frick N R, Brunker F, Ross B, Stieglitz S (2020) Design requirements for AI-based services enriching legacy information systems in enterprises: A managerial perspective. In Proceedings of 31st Australasian Conference on Information Systems (ACIS), pp 1–4

Seah J, Findlay M (2021) Communicating ethics across the AI ecosystem. SMU Centre for AI & Data Governance Research Paper

Govia L (2020) Coproduction, ethics and artificial intelligence: A perspective from cultural anthropology. Journal of Digital Social Research 2(3): 42–64, DOI: https://doi.org/10.33621/jdsr.v2i3.53

Mark R, Anya G (2019) Ethics of using smart city AI and big data: The case of four large European cities. The ORBIT Journal 2(2): 1–36, DOI: https://doi.org/10.29297/orbit.v2i2.110

Stahl B C, Antoniou J, Ryan M, Macnish K, Jiya T (2021) Organisational responses to the ethical issues of artificial intelligence. AI & Society, 37: 1-15, DOI: https://doi.org/10.1007/s00146-021-01148-6

Lu Q, Zhu L, Xu X, Whittle J, Douglas D, Sanderson C (2022) Software engineering for responsible AI: An empirical study and operationalised patterns. In Proceedings of the 44th International Conference on Software Engineering: Software Engineering in Practice, pp 241–242, DOI: https://doi.org/10.1145/3510457.3513063

Kessing M (2021) Fairness in AI: Discussion of a unified approach to ensure responsible AI development. Master dissertation, KTH Royal Institute of Technology

Karakash T (2021) The double-edged razor of machine learning algorithms in marketing: benefits vs. ethical concerns. Dissertation, University of Twente

Sun T Q, Medaglia R (2019) Mapping the challenges of artificial intelligence in the public sector: Evidence from public healthcare. Government Information Quarterly 36(2), pp 368–383, DOI: https://doi.org/10.1016/j.giq.2018.09.008

Stahl B C (2021) Artificial intelligence for human flourishing-beyond principles for machine learning. Journal of Business Research 124, pp 374–388, DOI: https://doi.org/10.1016/j.jbusres.2020.11.030

Christodoulou E, Iordanou K (2021) Democracy under attack: Challenges of addressing ethical issues of AI and big data for more democratic digital media and societies. Frontiers in Political Science 3: 71, DOI: https://doi.org/10.3389/fpos.2021.682945

Chivukula S S, Hasib A, Li Z, Chen J, Gray C M (2021) Identity claims that underlie ethical awareness and action. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp 1–13, DOI: https://doi.org/10.1145/3411764.3445375

Sanderson C, Douglas D, Lu Q, Schleiger E, Whittle J, Lacey J, Newnham G, Hajkowicz S, Robinson C, Hansen D (2021) AI ethics principles in practice: Perspectives of designers and developers. arXiv:2112.07467

Ryan M, Antoniou J, Brooks L, Jiya T, Macnish K, Stahl B (2021) Research and practice of AI ethics: A case study approach juxtaposing academic discourse with organisational reality. Science and Engineering Ethics 27(2), pp 1–29, DOI: https://doi.org/10.1007/s11948-021-00293-x

Chivukula S S, Watkins C R, Manocha R, Chen J, Gray C M (2020) Dimensions of UX practice that shape ethical awareness. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp 1–13, DOI: http://dx.doi.org/10.1145/3313831.3376459

Morley J, Kinsey L, Elhalal A, Garcia F, Ziosi M, Floridi L (2021) Operationalising AI ethics: Barriers, enablers and next steps. AI & Society, pp 1–13, DOI: https://doi.org/10.1007/s00146-021-01308-8

Slota S C, Fleischmann K R, Greenberg S, Verma N, Cummings B, Li L, Shenefiel C (2021) Many hands make many fingers to point: Challenges in creating accountable AI. AI & Society, pp 1–13, DOI: https://doi.org/10.1007/s00146-021-01302-0

Vakkuri V, Kemell K K, Tolvanen J, Jantunen M, Halme E, Abrahamsson P (2022) How do software companies deal with artificial intelligence ethics? A gap analysis. In Proceedings of the International Conference on Evaluation and Assessment in Software Engineering, pp 100–109, DOI: https://doi.org/10.1145/3530019.3530030

Karen B (2022) Designing up with value-sensitive design: Building a field guide for ethical ML development. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pp 2069–2083, DOI: https://doi.org/10.1145/3531146.3534626

Madaio M, Egede L, Subramonyam H, Vaughan J W, Wallach H (2022) Assessing the fairness of AI systems: AI practitioners’ processes, challenges, and needs for support. In Proceedings of the ACM on Human-Computer Interaction 6(CSCW1), pp 1–26, DOI: https://doi.org/10.1145/3512899

Widder D G, Nafus D (2023) Dislocated accountabilities in the “AI supply chain”: Modularity and developers’ notions of responsibility. Big Data & Society 10(1), pp 20539517231177620, DOI: https://doi.org/10.1177/20539517231177620

Deng W H, Nagireddy M, Seng M A L, Singh J, Wu Z S, Holstein K, Zhu H (2022) Exploring how machine learning practitioners (try to) use fairness toolkits. arXiv:2205.06922

Deng W H, Guo B B, DeVrio A, Shen H, Eslami M, Holstein K (2022) Understanding practices, challenges, and opportunities for user-driven algorithm auditing in industry practice. arXiv:2210.03709

Bogdana R, Yang J, Cramer H, Chowdhury R (2021) Where responsible AI meets reality: Practitioner perspectives on enablers for shifting organizational practices. In Proceedings of the ACM on Human-Computer Interaction 5(CSCW1): pp 1–23, DOI: https://doi.org/10.1145/3449081

Jiyoo C, Custis C (2022) Understanding implementation challenges in machine learning documentation. Equity and Access in Algorithms, Mechanisms, and Optimization, pp 1–8, DOI: https://doi.org/10.1145/3551624.3555301

Appendix B: Documentation of the Search Process

Figure 6: Documentation of the search process

Appendix C: Glossary of Terms

In this section, we provide definitions for certain terms used in the manuscript. Definitions followed by a citation are sourced directly from the cited work; those without citations were developed by the authors.

Ethics: The moral principles that govern the behaviors or activities of a person or a group of people (Nalini 2020).

AI Ethics: The principles of developing AI to interact with other AIs and humans ethically and function ethically in society (Siau and Wang 2020).

AI Practitioner: The term ‘practitioners’ in our study includes AI developers, AI engineers, AI specialists, and AI experts. The terms ‘AI practitioners’ and ‘practitioners’ are used interchangeably throughout our study.

Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities, or groups (Aus 2023).

Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled (Aus 2023).

Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI and can find out when an AI system is engaging with them (Aus 2023).

Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data (Aus 2023).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Pant, A., Hoda, R., Tantithamthavorn, C. et al. Ethics in AI through the practitioner’s view: a grounded theory literature review. Empir Software Eng 29, 67 (2024). https://doi.org/10.1007/s10664-024-10465-5

Accepted : 13 February 2024

Published : 06 May 2024

DOI : https://doi.org/10.1007/s10664-024-10465-5

  • Artificial intelligence
  • Grounded theory literature review
  • Practitioners
  • Software engineering

COMMENTS

  1. Systematic literature reviews in software engineering

    The impact of software engineering research on modern programming languages: Informal literature survey. No clear search criteria, no data extraction process. ACM Surv: J. Ma and J. V. Nickerson: 38(3), pp. 1-24: 2006: Hands-on, simulated and remote laboratories: a comparative literature review: Not a software engineering topic: ISESE: S ...

  2. Guidelines for performing Systematic Literature Reviews in Software

    The guidelines have been adapted to reflect the specific problems of software engineering research. The guidelines cover three phases of a systematic literature review: planning the review ...

  3. (PDF) Systematic literature reviews in software engineering-A

    1. Introduction. In 2004 and 2005, Kitchenham, Dybå and Jørgensen proposed the adoption of evidence-. based software engineering (EBSE) and the use of systematic reviews of the software ...

  4. Systematic literature reviews in software engineering

    4.4.1. Review topics and extent of evidence. Compared with our previous study [12], the 33 reviews discussed in this paper addressed a broader range of software engineering topics. There is no longer a preponderance of cost estimation studies and more general software engineering topics have been addressed.

  5. Performing systematic literature reviews in software engineering

    Context: Making best use of the growing number of empirical studies in Software Engineering, for making decisions and formulating research questions, requires the ability to construct an objective summary of available research evidence. Adopting a systematic approach to assessing and aggregating the outcomes from a set of empirical studies is also particularly important in Software Engineering ...

  6. Systematic literature reviews in software engineering

    International Symposium on Empirical Software Engineering. 509-518. Google Scholar [28] Mendes, E., A systematic review of Web engineering research. International Symposium on Empirical Software Engineering. 498-507. Google Scholar [29] Miller, J., Can results from software engineering experiments be safely combined?.

  7. Analysing app reviews for software engineering: a systematic literature

    A greater focus on software engineering goals and use cases would increase the relevance and impacts of app review analysis techniques. This systematic literature review includes a complete inventory of already envisioned software engineering use cases for the various app review analysis technique (RQ3).

  8. PDF Guidelines for performing Systematic Literature Reviews in Software

    A systematic literature review is a means of evaluating and interpreting all available research relevant to a particular research question, topic area, or ... J.E. Hannay, D.I.K. Sjøberg, T. Dybå, A systematic review of theory use in software engineering experiments, IEEE Transactions on SE 33 (2) (2007) 87- 107.!

  9. Perceived diversity in software engineering: a systematic literature review

    Through a systematic literature review, we aim to clarify the research area concerned with perceived diversity in Software Engineering. Our goal is to identify (1) what issues have been studied and what results have been reported; (2) what methods, tools, models, and processes have been proposed to help perceived diversity issues; and (3) what ...

  10. Systematic literature review on software quality for AI-based software

    According to a study of a systematic literature review (Nascimento et al. 2020) about software engineering (SE) for artificial intelligence, it has been found that there was no comprehensive study in the field of SE for AI-based systems until 2016 and in 2019, publications had a high growth peak, i.e., there were 21 studies published this year.

  11. Systematic literature reviews in software engineering: Preliminary

    Systematic Literature Reviews (SLRs) have been gaining significant attention from software engineering researchers since 2004. Several researchers have reported their experiences of and lessons learned from applying systematic reviews to different subject matters in software engineering. However, there has been no attempt at independently exploring experiences and perceptions of the ...

  12. Machine/Deep Learning for Software Engineering: A Systematic Literature

    Since 2009, the deep learning revolution, which was triggered by the introduction of ImageNet, has stimulated the synergy between Software Engineering (SE) and Machine Learning (ML)/Deep Learning (DL). Meanwhile, critical reviews have emerged that suggest that ML/DL should be used cautiously. To improve the applicability and generalizability of ML/DL-related SE studies, we conducted a 12-year ...

  13. Systematic reviews in software engineering: An ...

    1. Introduction. Systematic Literature Review (SLR), more commonly known as systematic review, has emerged as one of the most popular methods of Evidence-Based Software Engineering (EBSE) since Kitchenham, Dybå and Jørgensen reported their seminal piece of work on bringing the evidence-based practice to Software Engineering (SE) in International Conference on Software Engineering (ICSE) [21 ...

  14. Contributions of enterprise architecture to software engineering: A

    The purpose of this systematic literature review is to see how enterprise architecture is used in software development and maintenance practice. To this end, we first carried out a search in the SCOPUS database and then organized the papers according to the Software Engineering Body of Knowledge to determine what areas of software engineering ...

  15. The Impact of Human Aspects on the Interactions Between Software

    Context: Research on human aspects within the field of software engineering (SE) has been steadily gaining prominence in recent years. These human aspects have a significant impact on SE due to the inherently interactive and collaborative nature of the discipline. Objective: In this paper, we present a systematic literature review (SLR) on human aspects affecting developer-user interactions ...

  16. PDF Large Language Models for Software Engineering: A Systematic Literature

    Large Language Models for Software Engineering: A Systematic Literature Review. Xinyi Hou (Huazhong University of Science and Technology, China), Yanjie Zhao (Huazhong University of Science and Technology, China), Yue Liu (Monash University, Australia), Zhou Yang (Singapore Management University, Singapore), Kailong Wang (Huazhong University of Science and Technology, China)

  17. Six years of systematic literature reviews in software engineering: An

    1. Introduction. In 2004, Kitchenham et al. [14] introduced the concept of evidence-based software engineering (EBSE) as a promising approach to integrate academic research and industrial practice in software engineering. Following this paper, Dybå et al. [8] presented EBSE from the point of view of the software engineering practitioner, and Jørgensen et al. [20] complemented it with an ...

  18. PDF Large Language Models for Software Engineering: A Systematic Literature

    ... literature. This gap signifies a need for understanding the relationship between LLMs and SE. In response, our research aims to bridge this gap, providing valuable insights to the community. Table 1. State-of-the-art surveys related to LLMs for SE.

  19. A Systematic Literature Review of Software Process ...

    Kitchenham, B.: Guidelines for Performing Systematic Literature Reviews in Software Engineering, Version 2.3. EBSE Technical Report. Software Engineering Group, School of Computer Science and Mathematics, Keele University, UK and Department of Computer Science, University of Durham, UK (2007)

  21. Large Language Models for Software Engineering: A Systematic Literature

    Large Language Models (LLMs) have significantly impacted numerous domains, including Software Engineering (SE). Many recent publications have explored LLMs applied to various SE tasks. Nevertheless, a comprehensive understanding of the application, effects, and possible limitations of LLMs on SE is still in its early stages. To bridge this gap, we conducted a systematic literature review (SLR ...

  22. Review Software engineering principles: A systematic mapping study and

    1. Introduction. Software engineering (SE) emerged as a discipline in the late 70s and early 80s. The SWEBOK Guide - ISO 19759 defines software engineering (SE) as "the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software".

  23. Systematic Reviews in the Engineering Literature: A Scoping Review

    A systematic review is a specialized type of literature review used to collect and synthesize all the available evidence related to a research question. The methods for systematic reviews should be transparent and reproducible so that other researchers can use, replicate, and build upon the findings. Systematic reviews have been published for decades in medical literature where it is necessary ...

  24. Skills development for software engineers: Systematic literature review

    A Systematic Literature Review was performed on six databases, resulting in 56 selected articles that identify the soft skills and teaching methodologies needed to train Software Engineers. ... Software Engineering is an industry-oriented discipline [24], which means that students must learn to deal not only with technical problems but also ...

  25. Ethics in AI through the practitioner's view: a grounded theory

    Future work could investigate the users' view of ethics in AI, for example through a grounded theory literature review (GTLR) approach similar to the one applied in this review to address the practitioners' view; to the best of our knowledge, this is the first GTLR in Software Engineering.