Online Disinformation: An Analysis of the Propensity to Verify Fake News Across Different Generations.
Desinformación en línea: análisis de la propensión a verificar noticias falsas entre diferentes generaciones.
Tércio Pereira
Universidade do Vale do Itajaí
E-mail: tercio@outlook.com
Cynthia Morgana Boos de Quadros
Postgraduate Program in Administration - Fundação Universidade Regional de Blumenau
E-mail: cynthiadequadros@gmail.com
ORCID: https://orcid.org/0000-0001-6729-7361
Fabricia Durieux Zucco
Regional University of Blumenau and Vale do Itajaí University
E-mail: fabriciazucco@hotmail.com
ORCID: https://orcid.org/0000-0001-5538-1195
Hans Peder Behling
University of Vale do Itajaí
E-mail: hanspeda@univali.br
ORCID: https://orcid.org/0000-0003-0558-9304
Fabricio Gustavo Gesser Cardoso
Regional University of Blumenau
E-mail: fggcardoso@furb.br
ORCID: https://orcid.org/0009-0007-3141-9428
DOI: 10.26807/rp.v28i121.2153
Submission date: 15/08/2024
Acceptance date: 09/12/2024
Publication date: 31/12/2024
Abstract
This study examines the dissemination and verification of fake news among different generations. We employed a quantitative methodology, collecting data through questionnaires and obtaining 800 valid responses. The research was conducted in Blumenau, Brazil, utilizing an ordinal logistic regression to analyze the odds ratios of different groups verifying the authenticity of information before sharing it on social media. The analysis revealed that age and education are significant factors in the verification of information, while gender does not significantly influence this behavior. The results indicate a higher likelihood of not verifying information among older users with lower levels of education.
Keywords: Post-truth; Generations; Odds ratio; Population.
Resumen
El estudio examina la difusión y verificación de noticias falsas en diferentes generaciones. Para ello, empleamos una metodología cuantitativa, enviando cuestionarios a la población de Blumenau (Brasil) y alcanzando 800 respuestas válidas. Para el análisis de los datos, utilizamos una regresión logística ordinal para examinar la razón de probabilidades de que los grupos verifiquen la autenticidad de la información antes de compartirla en las redes sociales. El análisis mostró que la edad y el nivel educativo son factores determinantes en la verificación de la información, mientras que el género no influye significativamente en este comportamiento. Los resultados indican una mayor probabilidad de no verificar la información entre los usuarios de mayor edad y menor nivel educativo.
Palabras clave: Posverdad; generaciones; razón de probabilidades; población.
1. INTRODUCTION
Fake news represents a type of information carefully crafted to mimic credible media sources with the explicit intent to deceive readers. As pointed out by Lazer et al. (2018) and Tandoc et al. (2018), these news pieces are designed to appear authentic despite being entirely fabricated and inaccurate. The complexity of the fake news concept is explored in detailed reviews by Zhou and Zafarani (2020) and Zhou et al. (2020), who present various formulations of the term. Nyilasy (2019) characterizes fake news as a particularly insidious form of mass persuasion, often sustained by advertising revenue, as discussed by Mills (2000) and Mills and Robson (2020). Dervin's (1976) perspective on the nature of information and communication provides a lens for understanding the phenomenon of fake news. In her work, she explores how information is often manipulated and how realities are constructed, which resonates directly with the fabrication and dissemination of fake news. She proposes that information should not be viewed merely as data to be transmitted, but as an interactive communication process involving personal interpretation and meaning.
The rise and popularization of social networks on mobile platforms, which have been growing since mid-2006, have become a primary source of news and information for users (Yu et al., 2024). Despite this transformation in news acquisition, challenges have emerged regarding the rapid spread of false or misleading information, which can have critical implications in various contexts, particularly political (Fisher et al., 2016) and health-related (Bang et al., 2021). Yu et al. (2024) argue that, unlike true news, the spread of fake news can be influenced by distinct content characteristics, such as writing style and emotional traits, which can be extracted and analyzed using natural language processing tools. By developing a predictive methodology based on these attributes, the authors demonstrate how it is possible to predict the level of sharing of fake news, which is important for containment strategies and preventing reputational and financial damage to brands and companies mentioned in the fabrications.
Researchers have been working on techniques to detect fake news on social media. For instance, Giachanou et al. (2019) focus on capturing sentiment in news, Potthast et al. (2017) analyze writing style, and Shu et al. (2019) use comments on posts as a significant signal. In general, these studies aim to identify posting patterns that can serve as traceable indicators of fake news. More recently, social networks such as X have begun adding warnings that a shared item is, in fact, fake news; however, this verification often comes only after the news has been widely shared.
Given this, some researchers have attempted to identify how fake news sharing varies across generations, noting different patterns. For example, Vargas-Bianchi et al. (2023) investigated how female baby boomers deal with fake news. Meanwhile, Gentilviso and Aikat (2019) claim that, unlike previous generations such as Generation X and baby boomers, Generation Z individuals have limited interaction with traditional news sources, many of which they view as practically obsolete. This includes print media such as newspapers and magazines; they rarely tune into TV news or listen to the radio. Instead, this generation prioritizes different news values, focusing less on accuracy and more on entertainment and interactivity. Furthermore, Prelog and Bakic-Tomic (2020) sought to understand how Generation Z consumes and evaluates information received through various media and to what extent their attitudes differ from those of previous generations.
Two largely separate bodies of literature have been developing: one seeks to understand generational behaviors (Prelog y Bakic-Tomic, 2020; Vargas-Bianchi et al., 2023), while another focuses on mechanisms for detecting fake news and on ways to mitigate this contemporary problem in real time (Giachanou et al., 2019; Shu et al., 2019). As Prelog and Bakic-Tomic (2020) noted, there is a generational difference on this issue, and the authors call for further studies that explore in detail how this difference manifests, so that strategies for combating disinformation can be tailored to the particularities of each generation, contributing to a more informed society that is resilient to the proliferation of fake news. Therefore, drawing on generational cohort theory, this research sought to identify how users' propensity to share news without first verifying the source varies with age. To do so, we used ordinal logistic regression to estimate how much more likely one generation is than another to share news without verifying the source.
2. LITERATURE REVIEW
2.1 Digital Culture
The rise of globalization, driven by the development of information and communication technologies (ICTs) and the expansion of the internet, has brought about new forms of communication and interaction. In this context, people develop, construct, shape, and share new codes and knowledge related to the use of these technologies and new forms of communication and expression (Flores-Márquez, 2021; Núñez et al., 2022). Part of these transformations introduced by this phenomenon is the so-called Digital Culture. The concept involves recognizing that the introduction of digital devices and tools into daily life leads to the digitization of information and communication, permeating all social relations, production processes, and knowledge production (Garrel, 2007; Núñez et al., 2022; Uzelac, 2010).
According to Chuquihuanca et al. (2021), digital culture is a product of the disruptive presence of ICTs in society, significantly shaping how individuals act, behave, think, and communicate as human beings. López and Bernal (2016) argue that the use of ICTs has given rise to a new cultural paradigm that supports new ways of managing information and communication. These transformations have significantly impacted everyday mediations and habits of access, reception, consumption, and appropriation by audiences (Flores-Márquez, 2021; Ibáñez et al., 2020). For many, a hyperconnected life has become a meaningful way of being in the world: an individual always connected, ubiquitous, ready to engage, enjoy, and express digital sentiments (Ibáñez et al., 2020).
In recent years, and especially after the Covid-19 pandemic, much of the world has organized itself digitally: telework, remote education, e-banking, digital commerce, digital marketing, e-government, etc. Memes, hashtags, stickers, bitcoins, big data, fake news, cookies, algorithms, and the metaverse are some of the reflections of the current digital culture (Morais da Silva y Fernandes, 2022). In this context, Ibáñez et al. (2020) state that the consumption of products and services has shifted to the consumption of people. The zapping is no longer between channels but between profiles. From Facebook to X; from X to Instagram; from Instagram to Pinterest; from Pinterest to TikTok. Thus, digital culture has brought the permanent streaming of life, compulsive media consumption, and information saturation, transforming the production and distribution of news and leading to new ways of conceptualizing and understanding it (Bengtsson y Johansson, 2020; Ibáñez et al., 2020).
2.2 Internet as a Source of Information
Over the past three decades, the number of Internet users has grown from a few million in 1992 to nearly five billion by early 2022, representing 62.5% of the world's total population (ITU, 2022). One of the main stories at the beginning of the COVID-19 pandemic was how much the world came to rely more on the Internet, especially when countries went into lockdown. According to Kemp (2022), the latest data show that people are spending more time than ever using connected technology. The "typical" global Internet user now spends nearly seven hours a day using the Internet across all devices. Brazilians, for example, spend an average of 10 hours a day online (Kemp, 2022). In the online information environment, Internet users can tailor unlimited content to their needs and desires. This shift away from limited, closed, and pre-scheduled content has democratized access to knowledge and driven social progress (TRS, 2022).
The impact of connectivity is profound and far-reaching, extending to individuals, businesses, and governments (AGREN, 2020). The Internet has significant benefits and multiple uses: it allows access to knowledge, learning resources, information, job opportunities, and online services where traditional services are lacking (ITU, 2022). Additionally, the Internet facilitates new forms of entertainment, expression, collaboration, and communication (ITU, 2022) and has transformed our ability to be informed and inform others (TRS, 2022). However, the unlimited volume of content means that capturing attention in the online information environment is challenging and highly competitive. According to a report published by The Royal Society (2022), this intense competition for attention presents a challenge for those who wish to communicate reliable information to help guide important decisions.
Röttger and Vedres (2020) argue that information consumption patterns are changing. Individuals increasingly look to the online environment for news, with search engines and social media platforms playing a significant role in shaping access to information and participation in public debates. Consequently, social media has become an active part of the daily lives of both young and older Internet users worldwide. With every click, share, like, tweet, comment, or exploration of content, people have access to all forms of online information created by credentialed sources or their peers (Pennycook y Rand, 2019). However, according to Borges‐Tiago et al. (2020), exposure to large-scale disinformation, including misleading information or fake news, has been progressively increasing.
2.3 Fake News
Fake news has been one of the most discussed topics in public and scientific discourse since the 2016 U.S. presidential campaign (Nelson y Taneja, 2018). Although the term was originally applied to political satire, it now seems to represent all things "inaccurate" (Lazer et al., 2018). Kalsnes (2018) found that fake news is used as a synonym for satire, parody, invention, manipulation, propaganda, and advertising. Shao et al. (2017) expanded this list of key terms to include rumors, gossip, conspiracy theories, fabricated reports, and clickbait headlines. However, the expression is applied even in contexts unrelated to mediated communication (Egelhofer y Lecheler, 2019). According to Egelhofer and Lecheler (2019), fake news represents a fundamental shift in political and public attitudes toward what journalism and news constitute and how facts and information can be obtained in a digitized world.
Burkhardt (2017) mentions that fake news tends to increase when new forms of communication become available to the general public. Social media not only accelerates the dissemination of valid information but can also be used to efficiently spread false information. This phenomenon poses a significant problem for readers who cannot determine whether the information is legitimate or false (Brashier y Schacter, 2020). Therefore, establishing a robust process for verifying news and information is indispensable (Torres et al., 2018). More than ever, focusing on the public (citizens) seems essential to provide them with the appropriate digital literacy that allows people to interpret and evaluate the information they receive (Pérez-Escoda et al., 2021).
Research investigating the relationship between age and disinformation has been ongoing for several years. However, there is no consensus on whether older or younger adults are more vulnerable to fake news (Vijaykumar et al., 2021). Findings from a recent study on political disinformation behavior on Facebook showed that adults aged 65 and over shared links to fake domains seven times more frequently than young adults (Guess et al., 2019). A similar study using Twitter revealed that users aged 50+ accounted for 80% of shared fake news (Grinberg et al., 2019). According to Brashier and Schacter (2020), older adults struggle to detect deception, and this deficit of doubt increases when fabricated content comes from peers of the same age. Young people, in turn, tend to be aware of the lack of credibility of the content circulating in the media; even so, they consume it intensively, assuming that they are constantly receiving fake news, which makes them feel manipulated, distrustful, and endangered (Pérez-Escoda et al., 2021).
2.4 Generational Cohort Theory and Fake News
The generational cohort theory, initially proposed by Ryder in 1965, suggests that the continuous renewal of generational cohorts due to the birth, aging, and replacement of individuals allows for social change. The generational cohort theory postulates that different generations, defined by specific periods of birth, share historical and social experiences that shape their attitudes and behaviors in distinct ways (Elder, 1974; Mills, 2000). This theory has been used to explain how environmental variations between generational cohorts can influence the expression of genetic and behavioral traits. For example, Wedow et al. (2018) demonstrated that the relationship between education and smoking is modulated by cohort differences, with variations in the genetic correlation between these two characteristics depending on the specific social and historical conditions of each cohort.
Bringing this into the context of this study, the spread of fake news is a complex phenomenon that has intensified with the advent of social media. Fake news tends to spread more quickly and deeply than true news, often due to its emotionally evocative nature and apparent novelty (Vosoughi, Roy y Aral, 2018). This is exacerbated by social media algorithms that prioritize content that generates greater engagement, regardless of its truthfulness (Beauvais, 2022).
The susceptibility to fake news is influenced by a range of cognitive and psychological factors. Confirmation bias theory suggests that people are more likely to believe information that reinforces their pre-existing beliefs (Pennycook y Rand, 2019). Additionally, repeated exposure to false information can increase its perceived truthfulness, a phenomenon known as the illusory truth effect (Brashier y Schacter, 2020). Psychological factors also play a significant role. Content that evokes strong emotions is more likely to be shared, increasing the spread of fake news (Martel et al. 2020). Overconfidence in one's discernment abilities, known as overconfidence bias, can lead individuals to underestimate their vulnerability to disinformation (Sanchez y Dunning, 2018).
Studies have shown that older adults, particularly those over 65, share more fake news than younger adults (Guess et al. 2019). Several factors contribute to this: 1) Cognitive Decline - Cognitive abilities such as episodic memory and abstract reasoning tend to decline with age, which may increase reliance on simple heuristics to evaluate the truthfulness of information (Salthouse, 2009). However, cognitive fluency, or the ease with which information is processed, remains intact, which may lead to an uncritical acceptance of repeated information (Brashier y Schacter, 2020); 2) Social Changes - Older adults often have smaller and more homogeneous social networks, which may increase trust in information shared by members of their networks (Carstensen et al., 2011). Additionally, interpersonal trust tends to increase with age, leading to less suspicion regarding information sources (Poulin y Haase, 2015); 3) Digital Literacy - Many older adults are newcomers to the digital world and may have less experience distinguishing between reliable and unreliable news sources (Brashier y Schacter, 2020). This, combined with a lower familiarity with the functioning of algorithms and the identification of sponsored content, increases vulnerability to fake news (Amazeen y Wojdynski, 2018).
In general, generational cohort theory offers a perspective on how different generations interact with false information. Variations in environmental conditions, cognitive abilities, and social experiences among generational cohorts play a role in vulnerability to disinformation. Addressing this problem requires multifaceted interventions that consider both individual and systemic factors contributing to the spread of fake news.
Our hypotheses are as follows:
H0: There are no differences between generational cohorts in verifying whether information is true or false.
H1: There are differences between generational cohorts in verifying whether information is true or false.
3. METHODOLOGY
This quantitative research was conducted with residents of the city of Blumenau (Brazil). Two filter questions were included. The first filter question concerned the participant's age; if it was under 16 years, the survey could not continue. The second filter question was about the respondent's place of residence. Only residents of Blumenau were selected, so if the respondent was from another city, the survey did not proceed. After data cleaning, we obtained a total of 800 valid responses. The questionnaire content covered sociodemographic characteristics, how the respondent accessed the internet, how many hours per day on average they spent on the internet, knowledge of innovation topics, social media used, and verification of information shared online. Respondents' information confidentiality was ensured.
To determine whether our sample size was adequate for the method, we used the G*Power software. In the Test Family, we selected Z tests and chose logistic regression as the statistical test. The type of power analysis was a priori, with a two-tailed test. We set the effect size, expressed as the odds ratio for Pr(Y=1|X=1), to 1.8, with Pr(Y=1|X=1) under H0 set to 0.2, an α error probability of 0.05, power (1−β error probability) of 0.95, R² of the other X variables equal to 0, a normal X distribution, and an X parm μ of 0. The results indicated that a minimum sample size of 245 valid responses was necessary for this study: with 245 respondents, there is a 95% chance of correctly rejecting the null hypothesis that a given category of the main predictor variable is not associated with the outcome variable. Our sample therefore exceeds the minimum recommended value, so we proceeded with the study. Categorical (qualitative) data were described as counts and percentages to characterize the respondents' sociodemographic profile. We used ordinal logistic regression analysis to estimate the odds ratio of a specific group checking, or not, the truthfulness of information before sharing it on social media. For this analysis, we used SPSS 26.0 (IBM Corp.) with a significance level of α = 0.05.
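As a complement to the G*Power calculation, the sketch below estimates power by simulation under the same assumed inputs (odds ratio = 1.8, Pr(Y=1|X=1) under H0 = 0.2, α = 0.05, normally distributed predictor, n = 245). It is an illustrative approximation, not the G*Power procedure itself.

```python
# A simulation-based check of the a priori power analysis described above (an
# illustrative approximation, not the G*Power procedure itself). Assumed inputs
# mirror the reported settings: odds ratio = 1.8, baseline probability = 0.2,
# alpha = 0.05 (two-tailed), normally distributed predictor, n = 245.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

def estimated_power(n=245, odds_ratio=1.8, base_prob=0.2, alpha=0.05, n_sims=1000):
    beta = np.log(odds_ratio)                        # slope implied by the odds ratio
    intercept = np.log(base_prob / (1 - base_prob))  # baseline probability of 0.2
    rejections = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        p = 1 / (1 + np.exp(-(intercept + beta * x)))
        y = rng.binomial(1, p)
        fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        if fit.pvalues[1] < alpha:                   # two-tailed Wald test on the slope
            rejections += 1
    return rejections / n_sims

print(f"Estimated power with n = 245: {estimated_power():.2f}")
```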
4. RESULTS
Regarding gender, 54% (432) of respondents were female and 46% (368) male. In terms of age, 23.5% (188) of the Blumenau residents who participated in the research were between 16 and 25 years old, 39.4% (315) between 26 and 39, 31.4% (251) between 40 and 59, and 5.8% (46) were 60 or older. Participants primarily have high school education (39.4%), higher education (29.1%), and postgraduate education (25.8%). Regarding average family income, 19% earn up to R$ 3,000.00, 28% between R$ 3,001.00 and R$ 5,000.00, 29.3% between R$ 5,001.00 and R$ 10,000.00, and 23.8% over R$ 10,000.00.
Next, we asked which social media the respondents use. Since this was a multiple-choice question, we transferred the answers to a spreadsheet and created one column per platform, coding 1 when the participant uses that social media and 0 when they do not, and reporting the percentage of cases. The most used social media among respondents was WhatsApp, with 94.8% (758), followed by Instagram with 89.4% (715), Facebook with 67.3% (538), YouTube with 50.4% (403), LinkedIn with 39.3% (314), X (Twitter) with 31.8% (254), Telegram with 25.9% (207), and TikTok with 25.6% (205); 1.5% (12) reported other platforms, one respondent cited Tumblr, and 11 people (1.4%) stated they do not use any social media.
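This multiple-response coding can be reproduced with a few lines of pandas. The column name social_media and the ";" separator below are assumptions for illustration, not the survey's actual export format.

```python
# A sketch of the multiple-response coding described above (1 = uses the platform,
# 0 = does not). Column name and separator are assumed for illustration only.
import pandas as pd

responses = pd.DataFrame({
    "social_media": [
        "WhatsApp;Instagram;Facebook",
        "WhatsApp;YouTube",
        "Instagram;X;TikTok",
    ]
})

# One 0/1 indicator column per platform, then the percentage of cases per platform.
indicators = responses["social_media"].str.get_dummies(sep=";")
print(indicators)
print((indicators.mean() * 100).round(1).sort_values(ascending=False))
```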
To explore the factors influencing attitudes toward sharing news without verifying whether it is true, we applied ordinal logistic regression analysis, which yields the odds ratio of response for each group. Before the final analysis, we tested some assumptions of ordinal logistic regression. The first was the absence of multicollinearity, i.e., that no independent variable is strongly correlated with another. As shown in Table 1, collinearity was not an issue in our study (Tolerance = 1.000; VIF = 1.000); an equivalent check is sketched below Table 1.
Table 1 – Coefficients

| | Unstandardized Coefficients B | Std. Error | Standardized Coefficients Beta | t | Sig. | Collinearity Statistics: Tolerance | VIF |
| (Constant) | 4.527 | 0.094 | | 48.068 | 0.000 | | |
| Age range | -0.188 | 0.040 | -0.164 | -4.711 | 0.000 | 1.000 | 1.000 |
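An equivalent collinearity check can be computed with the variance inflation factor (VIF); the predictor data frame below is a hypothetical stand-in for the survey variables, not the actual data.

```python
# A sketch of a VIF-based collinearity check analogous to Table 1; the predictors
# here are synthetic placeholders for the survey variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
predictors = pd.DataFrame({
    "age_range": rng.integers(1, 5, 800).astype(float),   # 1 = 16-25 ... 4 = 60+
    "education": rng.integers(1, 6, 800).astype(float),   # ordinal education level
})

X = sm.add_constant(predictors)
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
).drop("const")
print(vif)  # values close to 1 (well below the usual cut-off of 5-10) indicate no collinearity problem
```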
The second assumption of ordinal logistic regression was the parallel lines (proportional odds) test. As shown in Table 2, the significance of the test was higher than 0.05, indicating that the assumption of equal slopes across response categories was not violated, so we could proceed with our ordinal logistic regression analysis.
Table 2 – Parallel Lines

| Model | -2 Log Likelihood | Chi-Square | df | Sig. |
| Null Hypothesis | 362.144 | | | |
| General | 336.428 | 25.716 | 18 | 0.106 |
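For reference, the chi-square in Table 2 is the likelihood-ratio statistic obtained from the difference between the two -2 log-likelihood values:

$$\chi^{2} = (-2LL_{\text{null}}) - (-2LL_{\text{general}}) = 362.144 - 336.428 = 25.716, \qquad df = 18, \qquad p = 0.106 > 0.05$$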
For the analysis, our dependent variable was the question about checking information before sharing it, an ordinal variable. As factors, we included the categorical independent variables, in this case gender and education level, and as a covariate we included our numerical variable, age range. The age ranges were coded as follows: 1 = 16 to 25 years, 2 = 26 to 39 years, 3 = 40 to 59 years, and 4 = 60 years or older. This coding of the age range matters for interpreting the odds ratios.
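For readers who wish to reproduce this specification outside SPSS, the sketch below fits the same kind of model with statsmodels. The data frame and variable names ('check', 'gender', 'education', 'age_range') are synthetic placeholders, not the survey data.

```python
# A sketch of the ordinal logistic regression specification described above, fitted
# with statsmodels rather than SPSS, on a synthetic placeholder data set.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 800
df = pd.DataFrame({
    "check": rng.integers(1, 6, n),          # 1-5 ordinal response on verifying before sharing
    "gender": rng.choice(["female", "male"], n),
    "education": rng.choice(
        ["elementary_1_4", "elementary_5_9", "high_school", "higher", "postgraduate"], n
    ),
    "age_range": rng.integers(1, 5, n),      # 1 = 16-25, 2 = 26-39, 3 = 40-59, 4 = 60+
})

# Factors enter as dummy variables (one reference category dropped); age range is a covariate.
exog = pd.get_dummies(df[["gender", "education"]], drop_first=True).astype(float)
exog["age_range"] = df["age_range"].astype(float)

endog = df["check"].astype(pd.CategoricalDtype(categories=[1, 2, 3, 4, 5], ordered=True))
model = OrderedModel(endog, exog, distr="logit")
result = model.fit(method="bfgs", disp=False)

# The exponentiated slope coefficients correspond to the Exp(B) odds ratios in Table 4
# (the remaining parameters are the category thresholds).
k = exog.shape[1]
print(pd.Series(np.exp(np.asarray(result.params)[:k]), index=exog.columns))
```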
The first result presented is the pseudo R-square statistics, shown in Table 3. The Nagelkerke pseudo R-square of our model, i.e., its explanatory power, was 0.055, indicating that gender, education, and age account for roughly 5.5% of the variation in whether a person checks if information is true. This modest explanatory power suggests that additional factors beyond those included here also shape verification behavior.
Table 3 – Pseudo R Square

| Cox and Snell | 0.050 |
| Nagelkerke | 0.055 |
| McFadden | 0.021 |
Next, we performed the odds ratio analysis by applying ordinal regression. This statistical method was used to examine the relationship between the independent variables and the ordinal dependent variable, allowing us to estimate the probabilities of occurrence of the different levels of the dependent variable based on the values of the independent variables.
Table 4 – Estimates

| Parameter | B | Std. Error | 95% Wald CI Lower | 95% Wald CI Upper | Wald Chi-Square | df | Sig. | Exp(B) | 95% Wald CI for Exp(B) Lower | 95% Wald CI for Exp(B) Upper |
| Threshold [Check = 1] | -5.530 | 0.4009 | -6.316 | -4.744 | 190.260 | 1 | 0.000 | 0.004 | 0.002 | 0.009 |
| Threshold [Check = 2] | -3.744 | 0.2778 | -4.289 | -3.200 | 181.718 | 1 | 0.000 | 0.024 | 0.014 | 0.041 |
| Threshold [Check = 3] | -2.128 | 0.2435 | -2.605 | -1.651 | 76.391 | 1 | 0.000 | 0.119 | 0.074 | 0.192 |
| Threshold [Check = 4] | -0.851 | 0.2326 | -1.307 | -0.395 | 13.386 | 1 | 0.000 | 0.427 | 0.271 | 0.674 |
| [Gender = 1] | 0.128 | 0.1342 | -0.135 | 0.391 | 0.914 | 1 | 0.339 | 1.137 | 0.874 | 1.479 |
| [Gender = 2] | 0 (reference) | | | | | | | 1 | | |
| [Elementary (5th to 9th grade)] | -0.721 | 0.3415 | -1.390 | -0.052 | 4.459 | 1 | 0.035 | 0.486 | 0.249 | 0.950 |
| [Elementary (1st to 4th grade)] | -1.634 | 0.6311 | -2.871 | -0.397 | 6.702 | 1 | 0.010 | 0.195 | 0.057 | 0.672 |
| [High School] | -0.438 | 0.1674 | -0.766 | -0.110 | 6.840 | 1 | 0.009 | 0.645 | 0.465 | 0.896 |
| [Postgraduate] | 0.196 | 0.1830 | -0.163 | 0.555 | 1.146 | 1 | 0.284 | 1.216 | 0.850 | 1.741 |
| [Higher Education] | 0 (reference) | | | | | | | 1 | | |
| Age range | -0.434 | 0.0856 | -0.602 | -0.266 | 25.685 | 1 | 0.000 | 0.648 | 0.548 | 0.766 |
| (Scale) | 1 (fixed) | | | | | | | | | |

Note: [Gender = 2] and [Higher Education] are reference categories (parameters set to zero); the scale parameter is fixed at 1.
Education and age were statistically supported, meaning they affect the odds of a person checking whether information or news is true. Gender, however, was not statistically supported, meaning that male and female respondents answered similarly. Considering age alone, the odds ratio for age range was Exp(B) = 0.648: for each additional age range, the odds of verifying the truthfulness of information are multiplied by 0.648, i.e., they fall by roughly 35%. In other words, people in age range 1 (16-25 years) are the most likely to check information; those in age range 2 (26-39 years) have about 35% lower odds of verifying than age range 1; age range 3 (40-59 years) has about 35% lower odds than age range 2 (which already had lower odds than age range 1); and age range 4 (60 years or more) is the group least likely to check whether information is true.
Turning to education, with higher education as the reference category, people with elementary education (1st to 4th grade) have odds of checking information about 80% lower than the reference group (Exp(B) = 0.195). The same pattern holds for those with elementary education from 5th to 9th grade (about 51% lower odds, Exp(B) = 0.486) and for those with high school education (about 35% lower odds, Exp(B) = 0.645). Combined with the age effect, this means that the older and less educated the group, the lower its tendency to verify whether information is true.
People with postgraduate education, on the other hand, show odds of checking information about 22% higher than the reference category (Exp(B) = 1.216), although this coefficient did not reach statistical significance (p = 0.284). Taken together with the age effect, the results suggest that, among older users, those with a high level of education are the most likely to verify whether information is true.
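As a check on these conversions, the percentage change in the odds implied by each estimate B in Table 4 is (Exp(B) - 1) × 100; a minimal illustration:

```python
# Converting the Table 4 estimates (B) into percentage changes in the odds of verifying:
# percent change = (exp(B) - 1) * 100.
import math

estimates = {
    "Age range (per additional range)": -0.434,
    "Elementary (1st-4th grade) vs. higher education": -1.634,
    "Elementary (5th-9th grade) vs. higher education": -0.721,
    "High school vs. higher education": -0.438,
    "Postgraduate vs. higher education (not significant)": 0.196,
}
for label, b in estimates.items():
    odds_ratio = math.exp(b)
    print(f"{label}: Exp(B) = {odds_ratio:.3f} -> {100 * (odds_ratio - 1):+.1f}% change in the odds of verifying")
```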
5. DISCUSSION AND IMPLICATIONS
The generational cohort theory initially proposed by Ryder in 1965 suggests that the continuous renewal of generational cohorts due to birth, aging, and the replacement of individuals allows for social change. The results of this research highlight the importance of understanding how different factors such as age and education influence news verification behavior. The ordinal logistic regression analysis revealed that older users with lower education levels are less likely to verify the authenticity of information before sharing it, while gender did not significantly impact this behavior. These findings are consistent with the existing literature suggesting that cognitive and social factors play an important role in the spread of fake news (Bos et al., 2023; Molina et al., 2022).
5.1 Theoretical Implications
The results of this research reveal significant theoretical implications related to age and education in news verification behavior. Generational cohort theory is reinforced by findings indicating clear variations in attitudes and behaviors of different age groups regarding news verification. This study showed that older users are less likely to verify the authenticity of information before sharing it, supporting the theory that distinct social and historical experiences shape behaviors differently in each generation.
The research also contributes to confirmation bias theory, which suggests that individuals tend to believe information that reinforces their pre-existing beliefs (Pennycook y Rand, 2019). The results indicate that due to cognitive decline associated with aging (Salthouse, 2009), older users rely more on simple heuristics to evaluate the truthfulness of information, increasing their vulnerability to disinformation. The lower propensity of older users to verify information can be explained by the greater trust in their social networks, which tend to be more homogeneous and composed of individuals with similar perspectives (Carstensen et al., 2011).
Regarding education, the results indicate that individuals with higher education levels are more likely to verify the authenticity of information before sharing it. This finding underscores the importance of digital and media literacy, suggesting that formal education plays an important role in developing the critical skills needed to evaluate the truthfulness of online information (Amazeen y Wojdynski, 2018). Individuals with lower education levels are less likely to possess these skills, making them more susceptible to believing and sharing fake news.
Additionally, the interaction between age and education offers a deeper perspective on information verification behavior. For example, while older age is associated with a lower propensity to verify information, this tendency is mitigated among those with higher education levels. This finding suggests that targeted educational interventions, especially those focusing on critical information evaluation skills, can be effective in reducing the spread of fake news among older populations.
5.2 Practical Implications
The results of this research not only corroborate existing theories on information verification behavior but also highlight the need to consider factors such as age and education when developing strategies to combat the spread of fake news. Therefore, the findings of this research have several important practical implications for combating the spread of fake news. The first practical approach involves implementing media and digital literacy programs, especially targeting older users and those with lower education levels. These programs should promote critical skills for evaluating the truthfulness of online information since digital literacy has proven crucial in mitigating the spread of disinformation (Amazeen y Wojdynski, 2018).
Developing educational programs that teach individuals to recognize fake news can be a fundamental step in reducing its spread. These programs should focus on teaching individuals to evaluate the credibility of news sources, verifying the authenticity and truthfulness of information before sharing it (Metzger et al., 2010). Additionally, it is essential to empower users to identify manipulated content, such as doctored images and videos, and to recognize native advertising disguised as news content (Pennycook y Rand, 2019a). Promoting the use of available tools and resources for fact-checking, such as fact-checking websites, can help debunk false information (Amazeen y Wojdynski, 2018).
Social media platforms can implement mechanisms that encourage fact-checking before sharing, especially in demographic groups more vulnerable to the spread of disinformation. This can include systems that alert users when they are about to share information identified as false or misleading (Pennycook et al., 2020). Providing tips and educational resources directly on platforms can help users develop information verification skills while browsing (Pennycook y Rand, 2019b).
Educational campaigns can be developed to raise awareness about the risks of sharing unverified information, using messages tailored to different age groups and education levels. These campaigns can include interactive elements such as quizzes and educational videos that encourage users to reflect on the truthfulness of information before sharing it (Forgas, 2019). Working with digital influencers can be effective in reaching younger audiences and increasing the impact of educational campaigns (Molina et al., 2022).
Governments and non-governmental organizations can collaborate to develop public policies that promote media and digital literacy in schools and communities. Integrating media and digital literacy modules into school curricula can ensure that future generations are equipped with the skills necessary to navigate the digital information environment critically and informed (Poulin y Haase, 2015). Additionally, offering workshops and training for adults and the elderly in community centers, libraries, and other public spaces can focus on critical information evaluation and the safe use of social media (Pew Research Center, 2019).
Fact-checking organizations can play a role in reducing the spread of fake news. Partnerships between social media platforms and fact-checking organizations can be established to implement alert systems that notify users when they are about to share information identified as false or misleading. Additionally, these organizations can provide tools and resources that help users verify the authenticity of information they encounter online (Pennycook y Rand, 2019a).
Investing in the development of advanced technologies for the automatic detection of fake news can be an effective approach to combating disinformation. Machine learning and artificial intelligence algorithms can be trained to identify common patterns in fake news and alert users to suspicious content (Brashier y Schacter, 2020). These technologies can be integrated into social media platforms to provide an additional layer of security against the spread of disinformation.
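As an illustration of this kind of approach, the sketch below frames detection as a supervised text-classification task (TF-IDF features plus logistic regression). The labeled headlines are toy placeholders rather than real data, and a production system would require a large annotated corpus and richer signals such as those discussed by Giachanou et al. (2019) and Shu et al. (2019).

```python
# A minimal sketch of automatic fake news detection framed as supervised text
# classification (TF-IDF features + logistic regression). Toy data for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Miracle cure eliminates all diseases overnight, doctors stunned",
    "City council approves new budget for public transportation",
    "Secret plan to replace all teachers with robots leaked",
    "Local university publishes annual enrollment statistics",
]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = legitimate (toy labels)

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(headlines, labels)

# Probability that a new headline is fake, according to this toy model.
print(detector.predict_proba(["Shocking trick guarantees instant wealth, experts furious"])[:, 1])
```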
Social media platforms can implement incentives for responsible sharing practices. This can include creating a rewards system for users who regularly check the truthfulness of information before sharing it, promoting a culture of responsibility and accuracy (Metzger et al., 2010). Awarding badges and recognition to users who consistently demonstrate responsible sharing and fact-checking behavior can also be effective (Smith, 2010).
Conducting ongoing research to assess the effectiveness of interventions and adjust strategies as necessary is crucial. Longitudinal studies can provide insights into how attitudes and behaviors related to information verification change over time and which interventions are most effective for different demographic groups (Brashier y Schacter, 2020).
6. Limitations and Future Research
This study has some limitations that should be considered to contextualize the results and guide future research. One of the main limitations is that the sample consisted only of residents of Blumenau (Brazil), which may limit the generalizability of the results to other regions or cultural contexts. Different geographic areas may have significant variations in information verification behavior due to cultural, social, and economic factors that were not considered in this research. Another limitation concerns the data collection method used, which was a self-administered questionnaire. Although questionnaires are an effective tool for collecting data from large samples, they are subject to response biases, such as social desirability bias, where respondents may provide responses they consider more socially acceptable or desirable rather than true responses. The use of a cross-sectional approach is another limitation as it captures only a specific moment in time without considering how individuals' behaviors and attitudes may change over time. Longitudinal studies would be valuable in understanding how these attitudes and behaviors may evolve, especially with the increase in digital literacy and changes in social media platforms.
Another area to explore in future research is the interaction between different factors, such as the combination of age, education level, and the use of specific social media in the dissemination of fake news. Studies have shown that how individuals use social media can significantly influence their exposure and response to false information (Molina et al., 2022). Investigating how these factors interact can help develop more specific and effective interventions.
Longitudinal studies would be particularly useful in investigating how attitudes and behaviors related to information verification change over time. These studies could consider the impact of educational interventions and changes in social media platform policies on individuals' ability to identify and avoid fake news. Additionally, the evolution of disinformation technologies, such as deepfakes, and their implications for information verification could be a promising field for future research. The exploration of the role of emotions in the spread of fake news also deserves attention. Messages that evoke strong emotions, such as anger and fear, are more likely to be shared, regardless of their truthfulness (Dekeyser y Roose, 2022). Future studies could investigate how different emotions influence the propensity to share false information and develop strategies to mitigate the emotional impact of fake news.
Another aspect that deserves attention in future research is the impact of digital literacy interventions. Assessing the effectiveness of educational programs that teach critical information evaluation skills can provide valuable insights into best practices for reducing the spread of fake news. Additionally, the effectiveness of automatic fake news detection technologies, such as machine learning algorithms, can be evaluated in different contexts to determine their applicability and impact. Finally, the interaction between cognitive, social, and emotional factors in susceptibility to fake news is a complex area that requires multidisciplinary approaches. The dual-process theory suggests that both fluency (familiarity) and information retrieval (source memory) influence the evaluation of information's truthfulness (Brashier y Marsh, 2020). Investigating how these processes interact in different populations can provide a more comprehensive understanding of the underlying mechanisms of fake news dissemination.
7. REFERENCES
Amazeen, M. A., y Wojdynski, B. W. (2018). The effects of disclosure format on native advertising recognition and audience perceptions of legacy and online news publishers. Journalism. doi:10.1177/1464884918754829
Beauvais, C. (2022). Fake news: Why do we believe it? Joint Bone Spine, 89(4), 105371. Doi: 10.1016/j.jbspin.2022.105371
Bengtsson, S. y Johansson, S. (2020). A phenomenology of news: Understanding news in digital culture. Journalism, 22(11), 2873-2889.
Borges-Tiago, T.; Tiago, F.; Silva, O.; Martínez, J.M. y Botella‐Carrubi, D. (2020). Online users' attitudes toward fake news: Implications for brand management. Psychol Mark, 37, 1171–1184. Doi: 10.1002/mar.21349
Bos, L., Egelhofer, J. L., y Lecheler, S. (2023). Short but Critical? How “Fake News” and “Anti-Elitist” Media Attacks Undermine Perceived Message Credibility on Social Media. Communication Research, 50(6), 695-71. Doi: 10.1177/00936502231178432
Brashier, N. M., y Schacter, D. L. (2020). Aging in an era of fake news. Current Directions in Psychological Science, 29(3), 316-323. Doi: 10.1177/0963721420915872
Burkhardt, J. M. (2017). History of fake news. Library Technology Reports, 53(8), 5-9.
Carstensen, L. L., Turan, B., Scheibe, S., Ram, N., Ersner-Hershfield, H., Samanez-Larkin, G. R., ... y Nesselroade, J. R. (2011). Emotional experience improves with age: evidence based on over 10 years of experience sampling. Psychology and aging, 26(1), 21. Doi: 10.1037/a0021285
Chuquihuanca, N., Pesantes, S., Vásquez, L., y Vargas, E. (2021). Cultura digital desde el contexto universitario en tiempos de pandemia Covid-19. Revista Venezolana de Gerencia, 26(95), 802-817.
Dekeyser, S., y Roose, H. (2022). What makes populist messages persuasive? Experimental evidence for how emotions and issue positions shape credibility judgments. Political Psychology, 43(2), 223-242. Doi: 10.1177/00936502221127482
Dervin, B. (1976). Strategies for dealing with human information needs: Information or communication? Journal of Broadcasting & Electronic Media, 20(3), 323-333. Doi: 10.1080/08838157609386402
Egelhofer, J.L. y Lecheler, S. (2019) Fake news as a two dimensional phenomenon: a framework and research agenda. Annals of the International Communication Association, 43(2), 97-116. Doi: 10.1080/23808985.2019.1602782
Elder, G. H. (1974). Children of the Great Depression: Social change in life experience. Chicago: University of Chicago Press.
Fisher, M., Cox, J. W., y Hermann, P. (2016). Pizzagate: From rumor, to hashtag, to gunfire in DC. Washington Post, 6, 8410-8415.
Flores-Márquez, D. (2021). Internet, digital communication and culture studies in Mexico. Comunicação, Mídia e Consumo, 18(51), 60-81.
Forgas, J. P. (2019). Happy believers and sad skeptics? Affective influences on gullibility. Current Directions in Psychological Science, 28, 306-313. doi:10.1177/0963721419834543
Gentilviso, C., y Aikat, D. (2019). Embracing the visual, verbal, and viral media: How post-millennial consumption habits are reshaping the news. In Mediated millennials (pp. 147-171). Emerald Publishing Limited.
Giachanou, A., Rosso, P., y Crestani, F. (2019, July). Leveraging emotional signals for credibility detection. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 877-880).
Grinberg, N., Joseph, K, Friedland, L. y Lazer, D. (2019) Fake news on Twitter during the 2016 US presidential election. Science, 363(6425), 374–378. DOI: 10.1126/science.aau2706
Guess, A.; Nagler, J. y Tucker, J. (2019). Less than you think: prevalence and predictors of fake news dissemination on Facebook. Sci Adv 5(1), 45. DOI: 10.1126/sciadv.aau4586
Ibáñez, D.; Rodrigues da Cunha, M. y Toledo, J.H. (2020). Comunicación digital, redes sociales y procesos en línea: estudios en una perspectiva comparada entre América Latina y la península ibérica. Journal of Iberian and Latin American Research, 26(3), 275-283. Doi: 10.1080/13260219.2020.1934260
International Telecommunication Union (ITU). (2022). Global Connectivity Report 2022. Geneva, Switzerland: ITU.
Kalsnes, B. (2018). Fake news. In Oxford Research Encyclopedia of Communication.
Kemp, S. (2022). Digital 2022: Global overview report. Retrieved from: https://datareportal.com/reports/digital-2022-global-overview-report.
Lazer, D. M., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., ... y Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094-1096. DOI: 10.1126/science.aao2998
López, M., y Bernal, C. (2016) La cultura digital en la escuela pública. Revista Interuniversitaria de Formación del Profesorado, 30(1), 103-110.
Martel, C., Pennycook, G., y Rand, D. G. (2020). Reliance on emotion promotes belief in fake news. Cognitive research: principles and implications, 5, 1-20. Doi: 10.1186/s41235-020-00252-3
Metzger, M. J., Flanagin, A. J., y Medders, R. B. (2010). Social and heuristic approaches to credibility evaluation online. Journal of Communication, 60(3), 413-439. Doi: 10.1111/j.1460-2466.2010.01488.x
Mills, A. J., y Robson, K. (2020). Brand management in the era of fake news: narrative response as a strategy to insulate brand value. Journal of Product & Brand Management, 29(2), 159-167. Doi: 10.1108/jpbm-12-2018-2150
Mills, C. W. (2000). The Sociological Imagination. Oxford University Press.
Molina, M. D., Sundar, S. S., y Lowrey, W. (2022). Reading, Commenting, and Sharing of Fake News: How Online Bandwagons and Bots Dictate User Engagement. New Media & Society, 24(3), 543-563. Doi: 10.1177/00936502211073398
Morais da Silva, D. y Fernandes, V. (2022). Cyberspace, cyberculture and metaverse: the virtual society and cybernetic territory. Revista Humanidades e Inovação, 8(67), 211-223.
Nelson, J. L. y Taneja, H. (2018). The small, disloyal fake news audience: The role of audience availability in fake news consumption. New Media & Society, 20(10), 3720–3737. Doi: 10.1177/1461444818758715
Núñez, R.; Castro, W. y Hernández, C.A. (2022). Globalization and digital culture in educational environments. Revista Boletín Redipe, 11(1): 262-272.
Nyilasy, G. (2019). Fake news: When the dark side of persuasion takes over. International Journal of Advertising, 38(2), 336-342. Doi: 10.1080/02650487.2019.1586210
Pennycook, G. y Rand, D.G. (2019). Lazy, not biased. Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39–50. Doi: 10.1016/j.cognition.2018.06.011
Pennycook, G., y Rand, D. G. (2019a). Fighting misinformation on social media using crowdsourced judgments of news source quality. Proceedings of the National Academy of Sciences, USA, 116, 2521-2526. doi:10.1073/pnas.1806781116
Pennycook, G., y Rand, D. G. (2019b). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39-50. doi:10.1016/j.cognition.2018.06.011
Pennycook, G., Bear, A., Collins, E. T., & Rand, D. G. (2020). The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Management science, 66(11), 4944-4957. doi:10.1287/mnsc.2019.3478
Pérez-Escoda, A.; Pedrero-Esteban, L.M.; Rubio-Romero, J.; Jiménez-Narros, C. (2021). Fake News Reaching Young People on Social Networks: Distrust Challenging Media Literacy. Publications, 9(2), 24. Doi: 10.3390/publications9020024
Pew Research Center. (2019). Digital literacy and media consumption among adults: A global survey. Retrieved from https://www.pewresearch.org/global/2019/10/08/digital-literacy-and-media-consumption
Potthast, M., Kiesel, J., Reinartz, K., Bevendorff, J., y Stein, B. (2017). A stylometric inquiry into hyperpartisan and fake news. arXiv preprint arXiv:1702.05638.
Poulin, M. J., y Haase, C. M. (2015). Growing to trust: Evidence that trust increases and sustains well-being across the life span. Social Psychological and Personality Science, 6, 614-621. doi:10.1177/1948550615574301
Prelog, L., y Bakić-Tomić, L. (2020, September). The Perception of the Fake News Phenomenon on the Internet by Members of Generation Z. In 2020 43rd International Convention on Information, Communication and Electronic Technology (MIPRO) (pp. 452-455). IEEE. DOI: 10.23919/MIPRO48935.2020.9245169
Röttger, P. y Vedres, B. (2020). The Information Environment and its Effects on Individuals and Groups: An Interdisciplinary Literature Review. Oxford, UK: Oxford Internet Institute.
Salthouse, T. A. (2009). When does age-related cognitive decline begin? Neurobiology of Aging, 30, 507-514. doi:10.1016/j.neurobiolaging.2008.09.023
Sanchez, C., y Dunning, D. (2018). Overconfidence among beginners: Is a little learning a dangerous thing? Journal of Personality and Social Psychology, 114, 10-28. doi:10.1037/pspa0000102
Shao, C., Ciampaglia, G., Varol, O., Flammini, A., y Menczer, F. (2017). The spread of fake news by social bots. Retrieved from http://arxiv.org/abs/1707.07592
Shu, K., Wang, S., y Liu, H. (2019, January). Beyond news contents: The role of social context for fake news detection. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining (pp. 312-320). Doi: 10.1145/3289600.3290994
Smith, A. (2010). Government Online: The Internet Gives Citizens New Paths to Government Services and Information. Pew Internet & American Life Project.
Tandoc Jr, E. C., Lim, Z. W., y Ling, R. (2018). Defining “fake news” A typology of scholarly definitions. Digital journalism, 6(2), 137-153. Doi: 10.1080/21670811.2017.1360143
The Royal Society (TRS). (2022). The online information environment: Understanding how the internet shapes people’s engagement with scientific information. London, UK: TRS.
Torres, R.; Gerhart, N. y Negahban, A. (2018). Epistemology in the Era of Fake News: An Exploration of Information Verification Behaviors among Social Networking Site Users. The DATA BASE for Advances in Information Systems, 49(3). Doi: 10.1145/3242734.324274
Uzelac, A. (2010). Digital culture as a converging paradigm for technology and culture: Challenges for the culture sector. Digithum, 12, 28-35.
Vargas-Bianchi, L., Mateus, J. C., Pecho-Ninapaytan, A., y Zambrano-Zuta, S. (2023). 'No, auntie, that's false': Challenges and resources of female baby boomers dealing with fake news on Facebook. First Monday. Doi: 10.5210/fm.v28i3.12678
Vijaykumar, S.; Jin, Y.; Rogerson, D.; Lu, X.; Sharma, S. …y Maughan, A. (2021). How shades of truth and age affect responses to COVID-19 (Mis)information: randomized survey experiment among WhatsApp users in UK and Brazil. Humanities and Social Sciences Communications, 8(88). Doi: 10.1057/s41599-021-00752-7
Vosoughi, S., Roy, D., y Aral, S. (2018). The spread of true and false news online. science, 359(6380), 1146-1151. DOI: 10.1126/science.aap9559
Wedow, R., Zacher, M., Huibregtse, B. M., Mullan Harris, K., Domingue, B. W., y Boardman, J. D. (2018). Education, smoking, and cohort change: Forwarding a multidimensional theory of the environmental moderation of genetic effects. American Sociological Review, 83(4), 802-832. Doi: 10.1177/0003122418785368
Yu, W., Ge, J., Chen, Z., Liu, H., Ouyang, M., Zheng, Y., y Kong, W. (2024). Research on Fake News Detection Based on Dual Evidence Perception. Engineering Applications of Artificial Intelligence, 133, 108271. Doi: 10.1016/j.engappai.2024.108271
Zhou, X., y Zafarani, R. (2020). A survey of fake news: Fundamental theories, detection methods, and opportunities. ACM Computing Surveys (CSUR), 53(5), 1-40. Doi: 10.1145/3395046
Zhou, X., Mulay, A., Ferrara, E., y Zafarani, R. (2020, October). ReCOVery: A multimodal repository for COVID-19 news credibility research. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (pp. 3205-3212). Doi: 10.1145/3340531.3412880