In an era of rapid technological evolution, accessing health information through the internet has become remarkably convenient, particularly via handheld and portable electronic devices. Individuals frequently query platforms such as Google or social media to learn, although these sources are limited in providing personalized advice (Daraz et al., 2019; Lim et al., 2022). The emergence of free, public artificial intelligence (AI) platforms, however, has ushered in innovative possibilities to educate and empower communities (Baidoo-Anu & Owusu Ansah, 2023). One prominent example is ChatGPT, a natural language processing model developed by OpenAI that has taken a leading role in this transformative landscape (OpenAI, 2021).
The Origins of ChatGPT as an Information Source

ChatGPT is the latest trend in digital information sourcing. Released as an online AI chatbot, ChatGPT is based on a generative pretrained transformer (GPT) language model. It is an advanced AI language model that allows users to engage in dialogue. Trained on a diverse range of internet text sources, it can understand and generate human-like written responses across various topics and domains (Introducing ChatGPT, 2022). It is designed for tasks such as answering complex questions, providing personalized advice and explanations, generating text, and assisting users in a conversational manner, unlike any former search technology (Hopkins et al., 2023). ChatGPT was released on November 30, 2022, amassing more than 100 million active users within 2 months of launch and representing the fastest-growing app to date at that time (Milmo, 2023).
The Good, the Bad, and the Inevitable: ChatGPT for Women's Sexual Health Information

ChatGPT operates through a process known as unsupervised learning, absorbing patterns, grammar, and vocabulary from a vast array of internet texts. This remarkable process nevertheless raises ethical concerns. Specifically, the model generates responses based on patterns learned from its training data, potentially introducing inaccuracies and biases (Brown et al., 2020). Biases may originate from the predispositions of the algorithm's creators, inherent biases within the models on which they are based, underlying biases within the data sets used for training, or biases that emerge during the practical deployment of such systems (Dubber et al., 2020, p. 96). This bias-inheriting trait poses a direct threat to the objectivity, reliability, and inclusivity of scientific work (Kluge, 2020). The inherent opacity of AI models like ChatGPT presents challenges for transparency and accountability, making it difficult to discern the rationale behind its outputs (OpenAI, 2021). In addition, the use of ChatGPT carries a risk of inadvertently revealing sensitive or confidential information, potentially compromising patient privacy and confidentiality, a significant concern in health care settings (Rigby, 2019). Despite its impressive capabilities, these issues of inaccuracy, bias, transparency, and privacy underscore the need for careful consideration and ethical vigilance when integrating such technologies into our ever-evolving digital landscape.
For Black women, AI, and particularly ChatGPT, could hold significance because they confront unique challenges and disparities in health care access and education and bear a disproportionate burden in sexual health challenges, diagnoses, and treatments, particularly in the context of HIV prevention. In 2018, African Americans/Black people accounted for 42% of the 37,968 new HIV diagnoses in the United States, with 11% of those diagnoses attributed to Black women (Centers for Disease Control and Prevention [CDC], 2021). In addition, sexually transmitted infection (STI) rates, including gonorrhea, chlamydia, and syphilis, disproportionately affect Black women compared with their White counterparts. Black women in the United States have endured historical trauma and mistreatment through slavery, the Jim Crow Era, and enduring systemic racism in the present day. These traumas shape the mental, physical, and sexual health outcomes of Black women through medical mistrust, lack of access to quality care, and medical racism. It is important that STI/HIV prevention education for Black women be framed within these historical and present-day contexts to offer safe, positive, and effective discussions around sexual health. ChatGPT could also provide a means for Black women to be empowered in their sexual education, rather than the negative, risk-focused framing through which it is often presented. As opposed to seeking advice and information from in-person professionals, ChatGPT may serve as an "anonymous" source for gathering information, leading to greater perceived self-efficacy and sexual agency (Opara et al., 2021).
Methodology

Mapping Web Content Analysis With ChatGPT Relative to Self-education in Women's Health

Overview

This project did not involve human subjects; thus, institutional review board review was not required. The authors used a laptop with a Google Chrome internet browser with cleared cookies in the United States to create transcripts with ChatGPT-3.5, the free version of the AI platform, which is presumed to be more accessible to end users than the paid version. Four conversation prompts were initially submitted to ChatGPT, and transcripts were created on September 19, 2023. For each of these, an iteration was created specifying race as a potential influence on content output. This resulted in a total of eight conversation prompts. To maintain their individuality and prevent the formation of conversation threads, each conversation prompt was entered into a new, separate chat (Appendix A, Supplemental Digital Content, https://links.lww.com/JNC/A48).
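For readers wishing to replicate the prompt design, the pairing logic described above can be sketched in a few lines of code. This is an illustrative sketch only: the exact prompt wording appears in Appendix A, and the phrasings below are approximations reconstructed from Table 1 and the narrative; they are not the authors' verbatim prompts.

```python
# Illustrative sketch of the prompt-design logic: four base prompts,
# each paired with a race-specified iteration, yielding eight prompts
# that were each submitted in a new, separate chat to avoid threading.
# Wording is approximate; see Appendix A for the exact prompts used.

BASE_PROMPTS = [
    "I would like to simulate some interactions between ChatGPT and "
    "{who} who is self-educating about HIV prevention",
    "Can you give me information about HIV prevention in {whom}?",
    # The remaining base prompts (on HIV PrEP and a first-person query)
    # followed the same pattern; phrasings here are hypothetical.
    "Can you tell me about HIV PrEP for {whom}?",
    "I am {who} and my doctor mentioned HIV prevention. "
    "What should I know?",
]

def build_prompt_pairs():
    """Pair each base prompt with its race-specified iteration."""
    variants = [
        {"who": "a woman", "whom": "women"},              # baseline
        {"who": "a Black woman", "whom": "Black women"},  # race specified
    ]
    # str.format ignores unused keyword arguments, so each template
    # may use either {who} or {whom}.
    return [base.format(**v) for base in BASE_PROMPTS for v in variants]

prompts = build_prompt_pairs()  # eight distinct conversation prompts
```

Submitting each element of `prompts` in a fresh chat session, as the authors describe, keeps the eight transcripts independent of one another.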
Content Creation for ChatGPT Analysis

Web content analysis involves systematically examining and interpreting the text or visual content of website resources to identify patterns, themes, and trends. It often uses qualitative or quantitative techniques to extract insights into varied aspects, such as content relevance, sentiment, or audience engagement (Herring, 2010). Thus, web content analysis was employed as a method of exploring AI-driven public health education, with a particular focus on ChatGPT's potential to enhance sexual health education for Black women. The research questions that guided this approach are twofold. First, the scope of information that ChatGPT has regarding the sexual health of Black women was queried by exploring simulation prompts. In an age when information is instantly accessible, it is crucial to discern the quality, accuracy, and relevance of the insights ChatGPT offers on this subject. It emphasizes the importance of critically evaluating AI-generated content by health care professionals and educators while recognizing the potential of ChatGPT as a dynamic tool to bridge gaps in sexual health knowledge. Second, the potential transformative role of this emerging AI platform in sexual health education was explored. How might ChatGPT, as a new and widely accessible free digital resource, inform and empower Black women in their quest to educate themselves about sexual health preventative behavior and biomedical options?
Data Analysis

ChatGPT Content Analysis

All eight transcripts from the prompts were compiled (Appendix A, Supplemental Digital Content, https://links.lww.com/JNC/A48). Each prompt response generated through ChatGPT was summarized, compared, and contrasted, and overall impressions were recorded (Table 1). Inductive qualitative data analysis was performed. Inductive analysis is well suited to content with limited previous study or limited former knowledge of a subject, such as the novel platform of ChatGPT (Elo & Kyngäs, 2008). In inductive analysis, although the results are shaped by the researcher's defined evaluation objectives or inquiries, they stem directly from the analysis of raw data (in this case, ChatGPT transcripts) rather than a priori presuppositions (Thomas, 2006). The findings can be condensed into a brief summary format to establish links between the research objectives and the findings derived from the raw data (Azungah, 2018; Thomas, 2006). To guide the interpretation of the ChatGPT outputs, we deemed the prompts that included the population of interest ("a Black woman" and "I am a Black woman") the baseline comparator prompts. Our inductive analysis consisted of the following steps:
• Two coauthors (S.W. and R.C. [hereafter referred to as analysts]) reviewed the eight transcripts and categorized the content as follows:
  ◦ Answers, Attitude, and Identity, coined as A2I (how ChatGPT responded to the question; tone, i.e., greeting or no greeting; and identity, operationalized here as race and gender; depicted in Table 1 as the ChatGPT summary).
  ◦ Content that was similar in context (determined by both reviewers to have indistinguishable contextual meaning or output; depicted in Table 1 as the compare column).
  ◦ Content that differed in context (determined by both reviewers to have variability [a deviation from the baseline comparator prompt] in contextual output; depicted in Table 1 as the contrast column).
• After reviewing the transcripts independently, the analysts met to adjudicate differences and refine the summaries. The overall impressions in Table 1 were drafted and finalized by both analysts during this session.
• The other coauthors (J.W. and G.A.-F.) reviewed the eight transcripts and provided edits to the final results summary table using the track changes function in Microsoft Word. There were no expressed contextual discrepancies, as feedback was only editorial in nature.

Table 1. Analysis of ChatGPT Transcripts

Prompt 1: "I would like to simulate some interactions between ChatGPT and a woman who is self-educating about HIV prevention."
Summary of ChatGPT response: Begins with a friendly greeting and an acknowledgment of the user's interest in HIV prevention. Provides a concise list of HIV prevention methods, including safe sex practices, PrEP, regular testing, knowing one's partner's status, harm reduction, and access to health care. Ends with an open-ended question, inviting the user to ask more specific questions or seek further information.

Prompt 2: "I would like to simulate some interactions between ChatGPT and a Black woman who is self-educating about HIV prevention."
Summary of ChatGPT response: Begins with a friendly greeting and an acknowledgment of the user's interest in HIV prevention. Offers detailed information about PrEP, including its effectiveness and potential side effects. Addresses the user's specific inquiry about accessing PrEP and its affordability. Concludes with a supportive message, encouraging the user to seek further guidance if needed.

Compare (Transcripts 1 and 2): Both responses start with a friendly and welcoming tone, creating a comfortable environment for the user. They both provide essential information about HIV prevention methods, with a focus on PrEP. Both responses emphasize the importance of open communication with health care providers. They end with encouraging and supportive messages, emphasizing the value of knowledge and proactive engagement in sexual health.

Contrast (Transcripts 1 and 2): Transcript 2 is more detailed and specific in addressing the user's questions about PrEP, its effectiveness, and affordability, whereas Transcript 1 provides a broader overview of HIV prevention methods. Transcript 1 includes an open-ended question to encourage further engagement and questions from the user, whereas Transcript 2 concludes with a more direct encouragement to seek guidance if needed. Transcript 2 is more customized to the user's specific inquiry about PrEP, whereas Transcript 1 provides general information on HIV prevention.

Overall impression (Transcripts 1 and 2): Both transcripts aim to provide useful information and support to the user, but Transcript 2 offers a more detailed and targeted explanation of PrEP in response to the user's specific interest.

Prompt 3: "Can you give me information about HIV prevention in Black women?"
Summary of ChatGPT response: Acknowledges the specific demographic of Black women and emphasizes the importance of specific HIV prevention efforts. Highlights the higher rates of HIV infection among Black women and the need for culturally sensitive health care. Although it includes general prevention methods, it places particular emphasis on addressing racial disparities, reducing stigma, and building supportive community networks.

Compare (Transcripts 3 and 4): Both responses provide information about HIV prevention methods and the importance of regular testing and health care access. They both encourage open communication with sexual partners and stress the significance of education and awareness.

Contrast (Transcripts 3 and 4): The response to "HIV prevention in Black women" specifically addresses the demographic's unique challenges, such as higher infection rates and the importance of culturally sensitive care. It emphasizes the need to reduce disparities and stigma.

The conversation prompts were intentionally phrased for comparison between women in general and Black women. Full transcripts of each conversation prompt (Appendix A, Supplemental Digital Content, https://links.lww.com/JNC/A48) were then analyzed. Overall, ChatGPT provided accurate (congruent with CDC guidelines and prevention recommendations) and consistent information about HIV prevention and pre-exposure prophylaxis (PrEP) in all conversation prompts submitted, regardless of stated race (CDC, 2022). There were distinguishable changes in the tone and phrasing of the suggestions.
The first two conversation prompts were entered in such a way that simulations were requested between a user and ChatGPT, which created a space for an observational third party. In examining the two conversation transcripts, although similar, one of the most striking differences was how sexual health information was presented based on the specificity of the user's race and the tone used throughout the interactions. Transcript 2, which involved specifying the user's race, demonstrated a more nuanced approach to presenting sexual health information. This transcript not only acknowledged the user's racial background but also provided a deeper level of detail and specificity regarding access to PrEP. The conversation offered pertinent information about financial assistance options, which suggests a heightened awareness of potential disparities in health care access based on race. Moreover, the tone in Transcript 2 was notably supportive and encouraging. It concluded with complimentary praise directed at the user for proactively seeking information about her sexual health and extended a warm invitation to seek further assistance. This tone conveyed empathy and a genuine interest in the user's well-being. The difference in how sexual health information was presented based on race highlights the importance of considering cultural sensitivity and inclusivity in AI-driven conversations about delicate topics. Transcript 2's approach demonstrates an understanding of the potential impact of race-related factors on health care access and tailors the information accordingly. This approach can foster a sense of trust and engagement with the user.
The comparison of Transcripts 3 and 4 reveals significant differences in the way information about HIV prevention is presented, particularly in the context of racial disparities in sexual health. These differences extend to how condom use is suggested, which highlights a notable contrast in power dynamics between the two user groups. Comparative conversation Prompts 3 and 4 are less conversational than Prompts 1 and 2 and ask directly for information on HIV prevention. There is a distinct difference in the information provided in the two responses. Transcript 4 is a more generalized overview of HIV prevention, whereas Transcript 3 focuses particularly on racial disparities in sexual health. In both transcripts, condom use is suggested; however, Transcript 3 encourages Black women to "negotiate" condom use with their partners, versus Transcript 4, where it is suggested that women in general "can take the initiative to ensure their use." The distinction in the wording of the advice is significant because it implies an imbalance of power and responsibility between Black women, who need to bargain to ensure safe sex, and women in general, who merely need to make it happen.
The comparison of conversation Prompts 5 and 6, which are tailored to discuss HIV PrEP, reveals a distinct difference in approach. Although both prompts (5 and 6) use similar phrasing, the responses provided in Transcripts 5 and 6 highlight important variations in content and emphasis. Transcript 5 offers a clinically straightforward and medically focused explanation of PrEP, covering aspects of PrEP utilization before, during, and after use. By contrast, Transcript 6 takes a more holistic and socially conscious approach. The responses not only mention the disproportionate impact of HIV among Black women but also delve into health care disparities, stigma reduction, the importance of creating community support networks, and the influence of socioeconomic factors. The emphasis on reducing stigma and fostering community support networks in Transcript 6 indicates an understanding of the social and psychological dimensions of health care, recognizing that effective prevention and care go beyond the strictly medical domain.
Comparative conversation Prompts 7 and 8, which both directly address ChatGPT from a first-person perspective, underscore significant differences in tone, emotional emphasis, and the approach to health communication. These transcripts are the most divergent from each other compared with all previous conversation prompts. In the two responses, Transcript 7 adopts a compassionate and culturally sensitive tone, explicitly acknowledging the user's identity as a Black woman in college. This response goes beyond providing information about HIV prevention and aims to establish a supportive and empathetic connection with the user. The language used conveys a sense of care, emphasizing the user's health as a priority. Notably, the phrase “remember that your doctor mentioned HIV prevention because they care about your well-being” carries an empathetic undertone. It appears to be a deliberate effort to address well-documented historical medical mistrust in the Black community by assuring the user of the medical community's genuine concern for her health. By contrast, Transcript 8 maintains a more straightforward style and neutral tone without addressing the user's racial identity or that she is in college. It provides clear and concise information about the importance of preventive measures without the same level of emotional emphasis. The phrase “it's important to be informed about HIV prevention, especially if your doctor has brought it up during your last appointment” implies the authority of the doctor-patient relationship and suggests a level of trust in medical professionals. This response focuses primarily on the importance of medical advice and informed decision-making. The divergence between Transcript 7 and Transcript 8 reflects two distinct approaches to health care communication. Transcript 7 places a strong emphasis on establishing empathy, addressing historical medical mistrust, and fostering a supportive connection with the user. 
It aims to reassure the user and prioritize her well-being. Transcript 8, by comparison, takes a more neutral and medical-centric approach, placing greater emphasis on informed decision-making based on medical advice.
The benefits of self-educating with AI chatbots like ChatGPT are numerous. In addition to information that is immediately available regardless of time or geographic location, privacy and anonymity can be maintained compared with being physically present in a clinical office setting. This is especially helpful for those who need accurate health care education on a new or current diagnosis without fear of judgment, stigma, bias, or discrimination (Dovidio & Fiske, 2012). Effective clinician–patient communication can be challenging, influenced by individuals' actual or perceived expectations and their historical interactions with health care systems and individual providers (Halbert et al., 2006). ChatGPT presents an opportunity to engage in open and informative discussions without the need to directly confront these concerns or barriers. By supplementing professional advice, ChatGPT can also allow users to learn at their own pace and to have difficult concepts re-explained in as many ways as needed.
Limitations

ChatGPT, as per OpenAI's website, exhibits limitations including the generation of plausible yet incorrect answers because of the absence of a "source of truth" during training. The model is sensitive to phrasing adjustments, yet can often be verbose and prone to overusing phrases due to training biases and optimization issues. It fails to ask clarifying questions for ambiguous queries and occasionally responds to harmful instructions, although OpenAI has programs in place to mitigate unsafe content. OpenAI acknowledges the potential for false negatives and positives and welcomes user feedback with each interaction for ongoing improvements (Introducing ChatGPT, 2022). ChatGPT is also programmed to provide a disclaimer when users ask for medical information, emphasizing that it is not a licensed medical professional and advising users to consult with health care providers for accurate advice. This is especially important because it can sometimes provide convincing, yet incorrect or misleading information, even fabricating quotes, citations, or information, known as "hallucinations," making it unsuitable as a sole research source. It may oversimplify questions with multiple answers or misrepresent arguments. It lacks up-to-date knowledge, primarily operates in English, and its knowledge boundaries are not precisely defined. Furthermore, ChatGPT cannot access the internet for real-time information, verify facts, or perform complex calculations without external plugins (Does ChatGPT Tell the Truth?, n.d.).
In using qualitative inductive analysis, an inherent methodological limitation is subjectivity, arising from the researcher's own biases (held beliefs, previous experiences, and values) and from the interpretive nature of analyzing qualitative data (Azungah, 2018). Moreover, AI output is interpreted in the context of humanistic nuances (such as tone, silence, sighs, and shifting), which can limit the analysis or description of the interpreted data (Elo & Kyngäs, 2008; Graneheim et al., 2017). In this study, the authors describe and interpret the full conversation transcripts generated with a nonhuman AI entity, including the language presented and the tone interpreted. In future studies, a hybrid analysis structure and a standardized tool should be considered for development in this exponentially growing field.
Although the authors used ChatGPT as a source of HIV self-education in this article, there are other social platforms (e.g., Twitter, Reddit) that could be considered in future projects to determine how they compare relative to information generation and accuracy.
Conclusion

In the evolving landscape of public health education, ChatGPT emerges as a compelling and relatively new platform with the potential to revolutionize how Black women educate themselves about sexual health. Traditional modes of health education often face challenges in reaching and engaging marginalized communities. However, ChatGPT offers a novel opportunity to bridge these gaps by providing accessible, on-demand, and personalized information. Black women, who have historically encountered disparities in health care access and outcomes, stand to benefit significantly from a tool that empowers them to take control of their sexual health. By interfacing with ChatGPT, individuals can easily initiate conversations on topics ranging from safe sexual practices to HIV diagnoses and PrEP utilization. The platform's ability to provide instant responses, tailored to the user's questions and needs, offers a powerful means for Black women to access reliable information without the barriers of geography, stigma, or time constraints, as one can return to the conversation thread anytime.
Furthermore, ChatGPT's potential to disseminate culturally sensitive information in layperson's terms can address some of the communication disparities in health care. As ChatGPT continues to evolve, it could be integrated into public health campaigns, community health centers, and online platforms, becoming a ubiquitous resource for Black women seeking to educate themselves about sexual health behaviors, diagnoses, and treatments.
However, it is essential to remain vigilant about the quality and accuracy of information provided by ChatGPT to ensure that it aligns with evidence-based practices and the unique health care needs of Black women. It will be important to monitor AI's training and its capability to present information with an equitable tone and without bias. Further research and collaboration among health care professionals, AI developers, and community organizations will be critical to maximizing the potential of ChatGPT in public health education.
Disclosures

The authors report no real or perceived conflicts of interest.
Author Contributions

All authors of this article meet the four criteria for authorship as identified by the International Committee of Medical Journal Editors (ICMJE); all authors have contributed to the conception and design of the study, drafted or been involved in revising this manuscript, reviewed the final version before submission, and agree to be accountable for all aspects of the work. Specifically, using the CRediT taxonomy, the contributions of each author are as follows: Conceptualization and methodology: R. Chandler, S. Warner, G. Aidoo-Frimpong, J. Wells; Formal analysis: R. Chandler, S. Warner, G. Aidoo-Frimpong, J. Wells; Funding acquisition: None; Investigation: R. Chandler, S. Warner; Project administration: R. Chandler; Supervision: R. Chandler; Validation: G. Aidoo-Frimpong, J. Wells; Writing—original draft: S. Warner; Writing/revising: R. Chandler, S. Warner, G. Aidoo-Frimpong, J. Wells.

Key Considerations

• ChatGPT is a new and unprecedented platform that providers should explore as a resource for patient self-education.
• ChatGPT could be particularly useful for historically excluded communities, specifically with sensitive or stigmatizing topics such as sexual health, HIV, and PrEP.
• Quality, accuracy of information, and bias reduction should be considered as limitations when recommending ChatGPT as a self-education resource tool.
• Further research and systematic approaches to ChatGPT content analysis are needed to validate ChatGPT for patient self-education.
Acknowledgments

The authors acknowledge that this article used ChatGPT (powered by OpenAI's language model, GPT-3.5; http://openai.com) to produce transcripts that were analyzed and interpreted by the human authors. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
References

Azungah, T. (2018). Qualitative research: Deductive and inductive approaches to data analysis. Qualitative Research Journal, 18(4), 383-400. https://doi.org/10.1108/QRJ-D-18-00035
Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. SSRN Electronic Journal, 7(1), 52-62. https://doi.org/10.2139/ssrn.4337484
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., & Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.
Centers for Disease Control and Prevention. (2021, December 14). African Americans/Blacks. https://www.cdc.gov/nchhstp/healthdisparities/africanamericans.html
Centers for Disease Control and Prevention. (2022, March 29). Preventing new HIV infections. https://www.cdc.gov/hiv/guidelines/preventing.html
Daraz, L., Morrow, A. S., Ponce, O. J., Beuschel, B., Farah, M. H., Katabi, A., Alsawas, M., Majzoub, A. M., Benkhadra, R., Seisa, M. O., Ding, J. F., Prokop, L., & Murad, M. H. (2019). Can patients trust online health information? A meta-narrative systematic review addressing the quality of health information on the internet. Journal of General Internal Medicine, 34(9), 1884-1891. https://doi.org/10.1007/s11606-019-05109-0
Dovidio, J. F., & Fiske, S. T. (2012). Under the radar: How unexamined biases in decision-making processes in clinical interactions can contribute to health care disparities. American Journal of Public Health, 102(5), 945-952. https://doi.org/10.2105/ajph.2011.300601
Dubber, M. D., Pasquale, F., & Das, S. (2020). The Oxford handbook of ethics of AI. https://doi.org/10.1093/oxfordhb/9780190067397.001.0001
Elo, S., & Kyngäs, H. (2008). The qualitative content analysis process. Journal of Advanced Nursing, 62(1), 107-115. https://doi.org/10.1111/j.1365-2648.2007.04569.x
Graneheim, U. H., Lindgren, B.-M., & Lundman, B. (2017). Methodological challenges in qualitative content analysis: A discussion paper. Nurse Education Today, 56, 29-34. https://doi.org/10.1016/j.nedt.2017.06.002
Halbert, C., Armstrong, K., Gandy, O. H., & Shaker, L. (2006). Racial differences in trust in health care providers. Archives of Internal Medicine, 166(8), 896-901. https://doi.org/10.1001/archinte.166.8.896
Herring, S. C. (2010). Web content analysis: Expanding the paradigm. In International handbook of internet research (pp. 233-249). https://doi.org/10.1007/978-1-4020-9789-8_14
Hopkins, A. M., Logan, J. M., Kichenadasse, G., & Sorich, M. J. (2023). Artificial intelligence chatbots will revolutionize how cancer patients access information: ChatGPT represents a paradigm-shift. JNCI Cancer Spectrum, 7(2), pkad010. https://doi.org/10.1093/jncics/pkad010
Introducing ChatGPT. (2022, November 30). OpenAI. https://openai.com/blog/chatgpt
Kluge, E. H. W. (2020). Artificial intelligence in healthcare: Ethical considerations. Healthcare Management Forum, 33(1), 47-49. https://doi.org/10.1177/0840470419850438
Lim, M. S. C., Molenaar, A., Brennan, L., Reid, M., & McCaffrey, T. (2022). Young adults' use of different social media platforms for health information: Insights from web-based conversations. Journal of Medical Internet Research, 24(1), e23656. https://doi.org/10.2196/23656
Opara, I., Abrams, J. A., Cross, K., & Amutah-Onukagha, N. (2021). Reframing sexual health for Black girls and women in HIV/STI prevention work: Highlighting the role of identity and interpersonal relationships. International Journal of Environmental Research and Public Health, 18(22), 12088. https://doi.org/10.3390/ijerph182212088
OpenAI. (2021). GPT-3.5: Autoregressive language models. https://platform.openai.com/docs/guides/chat
Rigby, M. J. (2019). Ethical dimensions of using artificial intelligence in health care. AMA Journal of Ethics, 21(2), 121-124. https://doi.org/10.1001/amajethics.2019.121
Thomas, D. R. (2006). A general inductive approach for analyzing qualitative evaluation data. American Journal of Evaluation, 27(2), 237-246. https://doi.org/10.1177/1098214005283748