Paweł Brzóska, Bartłomiej Nowak, Bartosz Świerczyński, Jarosław Piotrowski

Polish adaptation of pseudo-profound bullshit receptivity scale (BSR)


Year: 2023

Volume: XXVIII

Issue: 2

Title: Polish adaptation of pseudo-profound bullshit receptivity scale (BSR)

Authors: Paweł Brzóska, Bartłomiej Nowak, Bartosz Świerczyński, Jarosław Piotrowski

PFP: 181–198

DOI: https://doi.org/10.34767/PFP.2023.02.04

The article is available under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license (CC BY-NC-ND 4.0).

Introduction

Nowadays, distinguishing truth from falsehood is a serious problem for many of us (Keyes, 2004), and the same difficulty extends to distinguishing true depth from mere nonsense (Sperber, 2010). The subject of this article, the phenomenon of pseudo-depth, is a device often used to arouse admiration in others and to display the author's supposedly extraordinary sensitivity (Goldman, 2001). Pseudo-depth itself involves producing a stream of words which, thanks to the use of scientific-sounding language taken out of context, creates the illusion of depth for the recipient. The meaninglessness of such sentences makes them incomprehensible, which in turn increases the impression of depth, because people judge what they cannot understand as deeper (Sperber, 2010). Pseudo-science resorts to the same tricks, trying to assert its intellectual superiority over science (Goldman, 2001). We tend to think that pseudo-depth is, after all, rare and limited to certain social circles, for example the so-called anti-vaxxers or flat-earthers. However, Sokal (2010) has shown that we encounter this phenomenon in almost every area of life; Sokal himself notes that even the academic community is not entirely free from nonsense.

Pennycook and colleagues (2015) proposed the existence of a trait responsible for susceptibility to pseudo-profound bullshit, reflexive open-mindedness, and created a scale to measure it. Since then, numerous researchers have explored this topic, extending it to types of nonsense other than the pseudo-profound kind, for example scientific (Evans, Sleegers, Mlakar, 2020), organizational (Ferreira et al., 2022), or nonsense in general (Čavojová, Brezina, Jurkovič, 2020), and examining it in an interpersonal context as the frequency of talking nonsense (Littrell, Risko, Fugelsang, 2021). It is therefore worth bringing this topic closer to Polish researchers and equipping them with a tool to measure this phenomenon.

Nonsense, lies, and pseudo-profound bullshit

Following the definition adopted by the creators of the original scale (Pennycook et al., 2015), bullshit (BS) is a claim or statement intended to impress that is constructed without any concern for the truth (Frankfurt, 2005). BS is therefore a sentence with correct grammatical syntax, which distinguishes it from ordinary nonsense, also known as conversational gibberish, and at the same time it has no logical value in the sense in which a true or false sentence does. This definition also indicates the difference between BS and a lie, which requires intentional manipulation and deception.

Pennycook and his team (2015) set themselves the goal of examining a specific type of nonsense – pseudo-profound bullshit, the purpose of which is to create the illusion of a deeper, philosophical meaning. Pseudo-profound bullshit is composed of clever-sounding words and, despite correct syntax and semantic structure, contains neither meaning nor depth. The Bullshit Receptivity Scale (BSR; Pennycook et al., 2015) consists of sentences generated by using two algorithms: (1) an online new-age bullshit generator (www.sebpearce.com/bullshit) and (2) a generator of random pseudo-deep sentences drawing from Deepak Chopra’s entries on the Twitter platform (www.wisdomofchopra.com). These sentences contained the correct semantic structure, but were only a collection of randomly arranged buzzwords – such as “consciousness” or “quantum”. The procedure of generating them therefore excluded the presence of a deeper meaning or other hidden wisdom in such items (Pennycook et al., 2015). An example of such pseudo-profound bullshit is the sentence: “The future will be an astral unveiling of inseparability.” This sentence consists of the already mentioned buzzwords “astral” and “inseparability”. However, a skeptical reader will easily notice that it is devoid of any deeper meaning.
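
To make the generation procedure concrete, the sketch below illustrates the general idea in Python. It is a toy example with an invented vocabulary and invented templates, not the actual sebpearce.com or wisdomofchopra.com generators: buzzwords are slotted at random into grammatically well-formed templates, so the output is syntactically correct yet carries no intended meaning.

```python
import random

# Toy illustration of the generation idea (NOT the actual sebpearce.com or
# wisdomofchopra.com generators): random buzzwords are slotted into
# syntactically correct templates, yielding grammatical but meaningless output.
NOUNS = ["consciousness", "inseparability", "potentiality", "the quantum field"]
ADJECTIVES = ["astral", "hidden", "infinite", "subtle"]
VERBS = ["unfolds into", "is entangled with", "gives rise to", "transcends"]

TEMPLATES = [
    "The future will be an {adj} unveiling of {noun}.",
    "{Noun} {verb} {adj} {noun2}.",
    "Wholeness {verb} {noun} beyond all {adj} boundaries.",
]

def pseudo_profound_sentence(rng: random.Random) -> str:
    """Return one grammatical but meaning-free 'pseudo-profound' sentence."""
    template = rng.choice(TEMPLATES)
    noun, noun2 = rng.sample(NOUNS, 2)
    return template.format(
        adj=rng.choice(ADJECTIVES),
        noun=noun,
        noun2=noun2,
        Noun=noun.capitalize(),
        verb=rng.choice(VERBS),
    )

if __name__ == "__main__":
    rng = random.Random(2015)
    for _ in range(3):
        print(pseudo_profound_sentence(rng))
```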

Susceptibility to pseudo-profound bullshit – reflexive open-mindedness

The trait responsible for perceiving pseudo-profound bullshit as truly profound has been called reflexive open-mindedness, as opposed to reflective open-mindedness (Pennycook et al., 2015). Pennycook and colleagues presented two mechanisms that may be responsible for reflexive open-mindedness. The first is the tendency to accept all new information and to consider it meaningful and true (Gilbert, 1991). The second is the inability to distinguish true depth from the pseudo-depth present in nonsense, or the failure to recognize a situation in which such a distinction should be attempted.

Pennycook and his team (2015) checked whether the perception of pseudo-profound sentences as profound is the effect of the first mechanism – the tendency of subjects to agree (Gilbert, 1991). To control this, two additional variables were measured: the assessment of the depth of ordinary sentences, such as: “Newborns require constant attention”, as well as popular motivational sentences, commonly considered to be truly profound, such as: “Dripping water hollows out stone, not through force but through persistence”. The authors assumed, however, that the second of the mechanisms described above (i.e. the inability to distinguish pseudo-depth from true depth, or failure to recognize the situation in which such a distinction should be made) is crucial for understanding individual differences in susceptibility to pseudo-profound bullshit.

One of the dispositions responsible for individual differences in recognizing nonsense, and the situations in which such a distinction should be made, is the analytic thinking style. According to the dual-process theory (Evans, Stanovich, 2013; also known in Poland under its older name, the dual-system theory, see: Białek, 2017), it is the ability to override the answer generated by fast and intuitive type 1 reasoning and to engage in type 2 reasoning, which is slow, reflective, and accurate, but requires effort. People with a more analytic thinking style are better at switching to type 2 reasoning and at recognizing situations in which they are supposed to do so (Pennycook et al., 2015). Since the acceptance of pseudo-profound bullshit is intuitive, people with a more analytic thinking style will be more efficient in recognizing situations in which they need to consider whether the given information is really deep, and they will also be more accurate in assessing its real depth (Pennycook et al., 2015). People with a less analytic, i.e. intuitive, thinking style (Pacini, Epstein, 1999) should behave quite differently: they will believe in the superficial depth contained in pseudo-profound bullshit and in their subjective first impression of it. For this reason, people who think intuitively will not notice anything suspicious in pseudo-profound bullshit; no cognitive conflict will be aroused, they will not engage in type 2 reasoning, and they will rather devote their cognitive resources to justifying the pseudo-depth perceived in the sentence (Pennycook et al., 2015).

Another source of susceptibility to pseudo-profound bullshit may be a phenomenon called ontological confusion (Lindeman, Aarnio, 2007). People are irrational (Tversky, Kahneman, 1974; Stanovich, 2011) and therefore tend to make false judgments and perceptions about reality. One such false assessment is confusing the characteristics of animate and inanimate objects, called ontological confusion (Lindeman, Aarnio, 2007). An example is failing to notice the metaphorical nature of sentences such as "flowers want light" and judging them as literal. People who cannot recognize in such a sentence a metaphor, in which semantically foreign words are syntactically combined into a phrase with a meaning other than the literal one, or who do not even notice the need to analyze the sentence more carefully, will also be unable to recognize pseudo-profound bullshit (Pennycook et al., 2015).

Another phenomenon that, according to the authors of the original scale (Pennycook et al., 2015), should co-occur with susceptibility to pseudo-profound bullshit is susceptibility to epistemically unwarranted beliefs (Lobato et al., 2014). Such beliefs lack epistemic justification, that is, they are not supported by "the totality of the current state of knowledge and evidence available to knowledge seekers at the time of asking the question" (Hansson, 2009, p. 239). Conspiracy, magical, paranormal, and pseudoscientific beliefs are examples of epistemically unwarranted beliefs (Lobato et al., 2014) and, as recent research shows, are indeed positively associated with susceptibility to pseudo-profound bullshit (van Prooijen et al., 2022). Moreover, many statements resulting from such beliefs are themselves nonsense (Pennycook et al., 2015; van Prooijen et al., 2022).

Research validating the pseudo-profound bullshit scale (Pennycook et al., 2015) confirmed the hypotheses presented above. Susceptibility to pseudo-profound bullshit was associated with a less analytic and more intuitive style of thinking, with ontological confusion, and with a range of epistemically unjustified beliefs: religious, paranormal, conspiratorial, and medical. The authors also reported several other correlations, with variables such as verbal intelligence, fluid intelligence, numerical abilities, and susceptibility to heuristics.

The presented study aims to adapt the scale of susceptibility to pseudo-profound bullshit (Pennycook et al., 2015). The validity and reliability of the ten-item version and the full thirty-item version were verified. It was also checked whether the measurement with the adapted scales was equivalent to the measurement with the original scale in English.

Polish adaptation

To verify theoretical validity, in addition to susceptibility to pseudo-profound bullshit, the following were measured: (1) the level of the analytic thinking style and (2) the level of the intuitive thinking style. This allowed us to verify theoretical validity from the perspective of cognitive dispositions, especially in light of the dual-process theory. We also measured (3) ontological confusion, as well as (4) religious beliefs (e.g. belief in angels and demons) as one type of epistemically unjustified belief. The perception of ordinary and motivational sentences was also controlled. Ordinary sentences should be rated as relatively shallow, i.e. less profound than pseudo-profound bullshit. In turn, motivational sentences should be rated by respondents as more profound, i.e. deeper than pseudo-profound bullshit.

The weakest correlation between susceptibility to pseudo-profound bullshit and the criterion variables in the original study was observed for religious beliefs and amounted to r = .27. According to an analysis carried out in G*Power (3.1.9.4; Faul et al., 2007), detecting a correlation of this strength with adequate power (power = .80, α = .05 for r = .27) requires a sample of 105 people. To allow a full comparison of the obtained results with the original ones, it was decided to collect a larger sample, similar in size to the study by Pennycook and colleagues (2015), i.e. approximately 250 people. Moreover, a sample of at least 200 people is recommended when performing confirmatory factor analyses (Kyriazos, 2018).
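
For readers who wish to verify this figure without G*Power, the required sample size can be approximated with the standard Fisher z formula; the sketch below (Python with scipy) reproduces a value close to the 105 participants reported, the small discrepancy reflecting the approximation rather than G*Power's exact routine.

```python
from math import atanh, ceil
from scipy.stats import norm

def n_for_correlation(r: float, power: float = 0.80, alpha: float = 0.05) -> int:
    """Approximate sample size needed to detect a correlation r (two-tailed),
    using the Fisher z approximation: n = ((z_{alpha/2} + z_{power}) / atanh(r))^2 + 3.
    """
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = .05
    z_power = norm.ppf(power)           # ~0.84 for power = .80
    return ceil(((z_alpha + z_power) / atanh(r)) ** 2 + 3)

# For the weakest criterion correlation in the original study, r = .27:
print(n_for_correlation(0.27))  # 106 -- close to the 105 reported by G*Power
```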

The process of linguistic adaptation

The translation of the scale material followed the back-translation procedure (Brislin, 1970). The questionnaire items were translated into Polish and then independently back-translated. The English version obtained in this way was compared with the original. After correcting minor discrepancies, the final version of the scale was obtained.

Procedure and description of the sample

A total of 234 people took part in the validation study. Data were collected online via a Google survey. The respondents were recruited through the internal online research system of the SWPS University, as well as through Facebook groups bringing together people who voluntarily complete surveys. The study was approved by the University Ethics Committee of SWPS University in Poznan (consent number: 2019-16-01) and was conducted from July to September 2019.

The respondents were informed in the instructions that participation in the study was voluntary and that they could withdraw at any time. They then filled out a form containing questions about age, gender, and education. Next, the respondents completed the following: a measure of analytic thinking style, a scale of ontological confusion, a scale of susceptibility to pseudo-profound bullshit, a scale of religious beliefs, and a scale of intuitive thinking style. During the study, respondents also answered three attention-check questions (e.g. "attention-testing question: please select answer 5"), as well as a question about their familiarity with Deepak Chopra's work, which was the source of 10 of the 30 items in the susceptibility to pseudo-profound bullshit questionnaire. Because of the attention criteria and familiarity with Deepak Chopra's work, 25 people were removed from further analysis. The final sample consisted of N = 209 subjects. Among them, 79% were women, and the age of the respondents ranged from 18 to 74 years (Mage = 27.82; SDage = 9.27). Fifteen respondents had secondary education, 100 were students, 34 had first-cycle higher education, and 60 had second-cycle higher education.

Research tools

Average scores were calculated for all scales. The exception was the measurement of the analytic thinking style, in which the result was the number of correct answers in the test. Items for the adapted scales are presented in the Supplement available on the OSF project website https://osf.io/gm2dn/. Measurement reliability along with descriptive statistics for all study variables are presented in Table 1.
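
For illustration, scale scoring and reliability estimation of this kind can be sketched as follows. The column names and the randomly generated placeholder data are assumptions made for the example, not the study's actual data or analysis scripts.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Placeholder data: ratings 1-5 for the 10 items of the short BSR version.
# Column names bsr_1 ... bsr_10 are assumed, and the random data mean that the
# printed alpha is not meaningful -- the sketch only shows the computation.
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.integers(1, 6, size=(209, 10)),
                    columns=[f"bsr_{i}" for i in range(1, 11)])

bsr10_score = data.mean(axis=1)          # scale score = mean of item ratings
print(round(cronbach_alpha(data), 2))    # alpha for the 10 items
```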

Perception of pseudo-profound bullshit was assessed using an adapted scale for measuring susceptibility to pseudo-profound bullshit (BSR; Pennycook et al., 2015; Polish adaptation prepared for the needs of the study). Subjects rated the given statements (e.g., "The future will be an astral unveiling of inseparability") on a response scale ranging from 1 = not at all profound to 5 = very profound. The full version of the scale (BSR_30) consists of 30 items. The shortened version of the scale (BSR_10) consists of ten items from the full scale: items 1 to 5 and 11 to 15. The measurement with the shortened version of the scale was reliable (α = .88), while the measurement with the full version of the scale had very high reliability (α = .96).

The perception of motivational sentences was examined using an adapted ten-item secondary scale for measuring the perception of depth in motivational texts (BSR_M; Pennycook et al., 2015; Polish adaptation prepared for the needs of the study). The scale consisted of popular motivational texts (e.g., “A wet man does not fear the rain”). Respondents answered on a scale from 1 = not at all profound to 5 = very profound. Measurement using this scale was reliable (α = .84).

Perception of ordinary sentences was examined using an adapted ten-item secondary scale for measuring perceived depth in ordinary sentences (BSR_Z; Pennycook et al., 2015; Polish adaptation prepared for the needs of the study). This scale consisted of sentences with a completely mundane meaning (e.g. "Newborns require constant attention"). Respondents answered on a scale from 1 = not at all profound to 5 = very profound. The measurement with this scale had high reliability (α = .93).

The items of the scales measuring the perception of ordinary sentences, motivational sentences, and pseudo-profound bullshit were mixed and presented to the subjects as one scale composed of 50 items. Additionally, an indicator of pseudo-profound bullshit recognition was created. It was calculated as the difference between the average depth rating of the motivational sentences and the average rating of the ten sentences that make up the shortened version of the pseudo-profound bullshit scale. A higher value of this indicator meant better differentiation of motivational sentences from pseudo-profound sentences.
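
A minimal sketch of this recognition index, with assumed (hypothetical) column names, is given below.

```python
import pandas as pd

def bs_recognition_index(ratings: pd.DataFrame) -> pd.Series:
    """Bullshit-recognition index: mean rating of the 10 motivational sentences
    minus mean rating of the 10 short-version BSR items; higher values mean
    better differentiation. Column names 'mot_1'..'mot_10' and 'bsr_1'..'bsr_10'
    are assumed for this sketch, not the authors' actual variable names.
    """
    mot = ratings[[f"mot_{i}" for i in range(1, 11)]].mean(axis=1)
    bsr = ratings[[f"bsr_{i}" for i in range(1, 11)]].mean(axis=1)
    return mot - bsr

# Usage (df is a data frame with the assumed columns):
# df["bs_recognition"] = bs_recognition_index(df)
```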

The analytic thinking style was measured using an adaptation of the extended 7-item cognitive reflection test (CRT; Toplak, West, Stanovich, 2014; Polish adaptation: Sobkow, Olszewska, Sirota, 2023). This test consists of seven puzzles (e.g. "A baseball bat and a ball together cost PLN 1.10. The bat costs PLN 1.00 more than the ball. How much does the ball cost?") that suggest an intuitive but incorrect answer (here, 10 groszy; the correct answer is 5 groszy). The measurement with this scale had acceptable reliability (α = .68), similar to previous studies using this tool (e.g. Pennycook et al., 2015).

The intuitive thinking style was measured using the Polish adaptation of a subscale of the Rational-Experiential Inventory (Pacini, Epstein, 1999; Polish adaptation created for the needs of the study). The scale measures self-reported belief in and reliance on one's own intuition. It consists of twenty items, such as "I trust my first impressions of people." The response scale ranged from 1 = completely false about me to 5 = completely true about me. Measurement using this scale achieved high reliability (α = .92).

Ontological confusion was measured using an adapted ontological confusion questionnaire (Lindeman, Aarnio, 2007; Polish adaptation prepared for the needs of the study). The scale measures the ability to recognize the metaphorical nature of a sentence. It consists of 20 items: 14 target metaphorical sentences, such as "The Earth wants water", and 6 literal fillers, such as "Flowing water is a liquid". Subjects rated the items on a scale from 1 = fully metaphorical to 5 = fully literal. The ontological confusion index was calculated as the average of the answers to the 14 target items. A higher score, i.e. judging metaphorical sentences as more literal, meant a higher level of ontological confusion. Measurement using this tool achieved high reliability (α = .89).

Religious beliefs were measured using the Polish adaptation of the six-item religious belief scale (Pennycook et al., 2014; Polish adaptation Sobkow, Olszewska, Sirota, 2023). The scale consists of items such as “Angels and demons operate in our world.” Respondents responded on a scale from 1 = strongly disagree to 5 = strongly agree. The measurement with this tool achieved high reliability (α = .91).

Table 1. Measurement reliability and descriptive statistics for the study variables

Psychometric properties of the adapted tool

The measurement using the ten-item version of the susceptibility to pseudo-profound bullshit scale achieved a reliability of α = .88 in the Polish version, whereas in the original version, it was α = .82. The reliability of the measurement with the thirty-item version was very high and almost identical to that obtained in the study by Pennycook and colleagues (α = .96 and α = .96, respectively; 2015). Table 2 below presents the content and descriptive statistics of all 30 items.

Factor analysis


In both the ten-item and thirty-item versions of the scale, the assumed one-factor model was tested. Confirmatory analyses were performed in MPlus using the weighted least squares means and variance adjusted (WLSMV) estimator. This choice was dictated by item distributions deviating from normality, the small number of response categories, and the relatively small samples. The good-fit criteria from Byrne (1994) were used: CFI > .95, RMSEA < .08.
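
For reference, the two reported fit indices can be computed from the chi-square statistics of the tested and baseline models as in the sketch below. These are the standard (unscaled) formulas with hypothetical input values, not the WLSMV-adjusted versions produced by MPlus.

```python
from math import sqrt

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1))) -- standard (unscaled) formula."""
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_model: float, df_model: int, chi2_baseline: float, df_baseline: int) -> float:
    """CFI = 1 - max(chi2_m - df_m, 0) / max(chi2_b - df_b, chi2_m - df_m, 0)."""
    d_model = max(chi2_model - df_model, 0.0)
    d_baseline = max(chi2_baseline - df_baseline, 0.0, d_model)
    return 1.0 - d_model / d_baseline

# Hypothetical chi-square values for a 30-item one-factor model (df = 405,
# baseline df = 435), N = 209 -- not the values reported in Table 4:
print(round(rmsea(chi2=620.0, df=405, n=209), 3))                        # ~0.051
print(round(cfi(620.0, 405, chi2_baseline=9500.0, df_baseline=435), 3))  # ~0.976
```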

American data from Study 1 (Pennycook et al., 2015) were used for comparison with the Polish ten-item version of the scale because it had the largest sample (final N = 176). In the case of the thirty-item version, we compared the Polish version with data from Study 3, because it was the only study that used all 30 items (final N = 164).

In the first step, the Polish version of the scale was factor analyzed. Table 3 presents standardized factor loadings for the ten- and thirty-item versions of the scale.

The model of the ten-item version of the scale was a poor fit to the data due to the low factor loading of item 4 and the relatively low loadings of items 12 and 13. The thirty-item model, however, was a good fit to the data, although again several low factor loadings of items can be observed.

Table 2. Descriptive statistics for individual items

Table 3. Standardized factor loadings of scale items

In the second step, confirmatory factor analyses were performed for the original versions of the scales on the data provided by the authors (Pennycook et al., 2015). The ten-item model was a poor fit to the data, as in the Polish sample. The thirty-item model, however, turned out to be a good fit to the data, again as in the Polish sample. The fit of the models for the Polish and the original American data is presented in Table 4.

Table 4. Indicators of fit of data to the model

Multigroup confirmatory factor analysis was then conducted for the Polish and the original American samples. Because only the thirty-item version of the scale showed good fit, only this version was analyzed. When comparing levels of model fit, the goodness-of-fit norms described by Chen (2007) were used: ΔCFI < .010, ΔRMSEA < .015. The results of the analysis, along with fit indices for the three levels of measurement equivalence, are presented in Table 5.

Table 5. Model fit indices for 3 levels of measurement equivalence of the thirty-item version of the scale
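
The decision rule summarized in Table 5 can be sketched as follows; the fit values used here are placeholders, not those reported in the table.

```python
# Decision rule for measurement invariance following Chen (2007):
# a more constrained level is accepted if CFI drops by less than .010
# and RMSEA increases by less than .015 relative to the previous level.
def invariance_holds(cfi_prev: float, cfi_curr: float,
                     rmsea_prev: float, rmsea_curr: float) -> bool:
    return (cfi_prev - cfi_curr) < 0.010 and (rmsea_curr - rmsea_prev) < 0.015

# Placeholder (CFI, RMSEA) values for the configural, metric, and scalar models:
levels = {"configural": (0.975, 0.052), "metric": (0.972, 0.054), "scalar": (0.969, 0.056)}
names = list(levels)
for prev, curr in zip(names, names[1:]):
    ok = invariance_holds(levels[prev][0], levels[curr][0],
                          levels[prev][1], levels[curr][1])
    print(f"{curr} vs {prev}: {'supported' if ok else 'not supported'}")
```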

The adapted thirty-item version of the tool proved to be a good fit for the data and also equivalent to the original version at the scalar level. This means that it is legitimate to compare correlates and levels of the variable for data obtained using the original BSR questionnaire and the Polish version of this tool.

Construct validity analysis

To confirm construct validity, a series of correlation analyses was performed between measures of susceptibility to pseudo-profound bullshit and measures of cognitive functioning and epistemically unjustified beliefs. Correlations between all measured variables are presented in Table 6.

Differences in the depth ratings of pseudo-profound, ordinary, and motivational sentences were also checked using a one-way repeated-measures ANOVA. This analysis used the thirty-item scale to measure the perception of pseudo-profound bullshit. The analysis showed a significant effect of the factor (F(2, 416) = 420.88, p < .001, η² = .67). Post-hoc tests with Bonferroni correction showed, as expected, that motivational sentences were perceived as more profound than pseudo-profound bullshit (p < .001), and ordinary sentences as less profound than pseudo-profound bullshit (p < .001).

Susceptibility to pseudo-profound bullshit was positively associated with ontological confusion, religious beliefs, and the intuitive thinking style, and negatively with the analytic thinking style. Recognition of pseudo-profound bullshit (i.e. distinguishing pseudo-profound bullshit from truly profound sentences) was associated only with the intuitive thinking style (negatively). The correlations obtained in the Polish sample were compared with the original ones from the study by Pennycook and colleagues (2015) using z tests for independent correlations (Cohen et al., 1983) computed with the Preacher (2002) calculator. The results of the analysis are presented in Table 7. The correlation levels did not differ significantly between the two samples, with the exception of ontological confusion, which in the Polish sample correlated less strongly with susceptibility to pseudo-profound bullshit than in the American sample.

Table 6. Correlations between the studied variables and descriptive statistics

Table 7. Comparison of correlates obtained in the presented study and from the original one with z-tests
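
The test behind the Table 7 comparisons (Cohen et al., 1983; Preacher, 2002) rests on Fisher's z transformation of the two independent correlations; a sketch with illustrative (not actual) correlation values is given below.

```python
from math import atanh, sqrt
from scipy.stats import norm

def compare_independent_correlations(r1: float, n1: int, r2: float, n2: int):
    """Two-tailed z test for the difference between two independent correlations,
    based on Fisher's z transformation of each coefficient."""
    z = (atanh(r1) - atanh(r2)) / sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    p = 2 * (1 - norm.cdf(abs(z)))
    return z, p

# Illustrative call with hypothetical correlation values (see Table 7 for the
# actual comparisons); n = 209 for the Polish sample, n = 176 for the Study 1 data.
z, p = compare_independent_correlations(r1=0.30, n1=209, r2=0.35, n2=176)
print(round(z, 2), round(p, 3))  # about -0.54 and .586 for these illustrative values
```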

Discussion

The presented article aimed to present the adaptation and psychometric properties of the Polish version of the scale for measuring susceptibility to pseudo-profound bullshit. The obtained results allow us to regard the adapted scale as a valid tool that provides reliable measurement. Confirmatory factor analyses confirmed the one-factor structure of the examined construct, and the construct validity of the scale was confirmed by correlations with the criterion variables. People with a higher level of susceptibility to pseudo-profound bullshit thought less analytically and more intuitively, and had higher levels of ontological confusion and religious belief. These correlates are consistent with those obtained by Pennycook and colleagues (2015), and the results of the multigroup confirmatory factor analysis showed that, for the thirty-item scale, the measurement is equivalent to the original one at the scalar level. This means that results obtained using the adapted thirty-item version of the scale can be compared with results obtained using the original thirty-item version, both in terms of correlates and the level of susceptibility to pseudo-profound bullshit.

However, neither the adaptation nor the tool itself is free from drawbacks. Validation of the adapted versions of the scales was carried out on a group consisting mostly of students of the SWPS University. Caution should therefore be exercised when using the adapted tool in the general population. Conducting such a study and then verifying the equivalence of that measurement with the one presented here could solve this problem. A significant difference between the adapted and original versions of the scale was also observed in the relationship between susceptibility to pseudo-profound bullshit and ontological confusion. This may indicate a poor adaptation of the ontological confusion scale or a real difference between the populations.

The biggest problem from our perspective, however, is the poor fit of the ten-item model to the data. Both the original and the adapted ten-item versions, despite high reliability and demonstrated validity, did not meet the good-fit criteria presented by Byrne (1994; CFI > .95, RMSEA < .08). It is therefore worth conducting broader cross-cultural research to obtain a shorter, cross-culturally equivalent version of the scale.

Recent research (van Prooijen et al., 2022) indicates an important role of susceptibility to pseudo-profound bullshit in the context of epistemically unjustified beliefs. Reflexive open-mindedness, which is responsible for susceptibility to pseudo-profound bullshit, may be another predictor that helps explain why people believe in conspiracies, magical claims, or fake news (van Prooijen et al., 2022). This is a poorly researched topic and worth taking up, especially considering the social harmfulness of conspiracy beliefs and fake news (Roozenbeek et al., 2020; van Mulukom et al., 2022).

To sum up, the thirty-item version of the scale makes it possible to examine the level, determinants, correlates, and effects of susceptibility to pseudo-profound bullshit in Poland, and to relate them to results obtained worldwide with the original tool.

References

Białek, M. (2017). Mechanika moralności. Dylematy moralne i intuicyjne rozumienie dobra i zła. Czasopismo Psychologiczne, 3, 9–19, doi: 10.14691/CPPJ.23.1.09

Brislin, R.W. (1970). Back-translation for cross-cultural research. Journal of Cross-Cultural Psychology, 1, 185–216, doi: 10.1177/135910457000100301

Byrne, B.M. (1994). Testing for the factorial validity, replication, and invariance of a measuring instrument: A paradigmatic application based on the Maslach Burnout Inventory. Multivariate Behavioral Research, 29, 289–311, doi: 10.1207/s15327906mbr2903_5

Čavojová, V., Brezina, I., & Jurkovič, M. (2020). Expanding the bullshit research out of the pseudo-transcendental domain. Current Psychology, 41, 827–836, doi: 10.1007/s12144-020-00617-3

Chen, F.F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling: A Multidisciplinary Journal, 14, 464–504, doi: 10.1080/10705510701301834

Cohen, J., Cohen, P., West, S.G., & Aiken, L.S. (1983). Applied multiple regression/correlation analysis for the behavioral sciences. New York: Psychology Press, doi: 10.4324/9781410606266

Evans, A., Sleegers, W., & Mlakar, Ž. (2020). Individual differences in receptivity to scientific bullshit. Judgment and Decision Making, 15, 401–412.

Evans, J.S.B., & Stanovich, K.E. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8, 223–241, doi: 10.1177/1745691612460685

Faul, F., Erdfelder, E., Lang, A.G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175–191, doi: 10.3758/BF03193146

Ferreira, C., Hannah, D., McCarthy, I., Pitt, L., & Lord Ferguson, S. (2022). This place is full of it: Towards an organizational bullshit perception scale. Psychological Reports, 125, 448–463, doi: 10.1177/0033294120978162

Frankfurt, H.G. (2005). On Bullshit. Princeton: Princeton University Press, doi: 10.1515/9781400826537

Gilbert, D.T. (1991). How mental systems believe. American Psychologist, 46, 107–119, doi: 10.1037/0003-066X.46.2.107

Goldman, A.I. (2001). Experts: Which ones should you trust? Philosophy and Phenomenological Research, 63, 85–110, doi: 10.2307/3071090

Hansson, S.O. (2009). Cutting the Gordian knot of demarcation. International Studies in the Philosophy of Science, 23, 237–243, doi: 10.1080/02698590903196007

Keyes, R. (2004). The post-truth era: Dishonesty and deception in contemporary life. New York: St. Martin’s Press.

Kyriazos, T.A. (2018). Applied psychometrics: Sample size and sample power considerations in factor analysis (EFA, CFA) and SEM in general. Psychology, 9, 2207–2231, doi: 10.4236/psych.2018.98126

Lindeman, M., & Aarnio, K. (2007). Religious people and paranormal believers. Journal of Individual Differences, 28, 1–9, doi: 10.1027/1614-0001.28.1.1

Littrell, S., Risko, E.F., & Fugelsang, J.A. (2021). ‘You can’t bullshit a bullshitter’ (or can you?): Bullshitting frequency predicts receptivity to various types of misleading information. British Journal of Social Psychology, 60, 1484–1505, doi: 10.1111/bjso.12447

Lobato, E., Mendoza, J., Sims, V., & Chin, M. (2014). Examining the relationship between conspiracy theories, paranormal beliefs, and pseudoscience acceptance among a university population. Applied Cognitive Psychology, 28, 617–625, doi: 10.1002/acp.3042

Pacini, R., & Epstein, S. (1999). The relation of rational and experiential information processing styles to personality, basic beliefs, and the ratio-bias phenomenon. Journal of Personality and Social Psychology, 76, 972–987, doi: 10.1037/0022-3514.76.6.972

Pennycook, G., Cheyne, J.A., Barr, N., Koehler, D.J., & Fugelsang, J.A. (2014). Cognitive style and religiosity: The role of conflict detection. Memory & Cognition, 42, 1–10, doi: 10.3758/s13421-013-0340-7

Pennycook, G., Cheyne, J.A., Barr, N., Koehler, D.J., & Fugelsang, J.A. (2015). On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making, 10, 549–563.

Preacher, K.J. (2002). Calculation for the test of the difference between two independent correlation coefficients [Computer software], http://quantpsy.org

Roozenbeek, J., Schneider, C.R., Dryhurst, S., Kerr, J., Freeman, A.L.J., Recchia, G., …, van der Linden, S. (2020). Susceptibility to misinformation about COVID-19 around the world. Royal Society Open Science, 7(10), 201199, doi: 10.1098/rsos.201199

Sobkow, A., Olszewska, A., & Sirota, M. (2023). The factor structure of cognitive reflection, numeracy, and fluid intelligence: The evidence from the Polish adaptation of the Verbal CRT. Journal of Behavioral Decision Making, 36(2), e2297, doi: 10.1002/bdm.2297

Sokal, A. (2010). Beyond the hoax: Science, philosophy, and culture. Oxford: Oxford University Press.

Sperber, D. (2010). The guru effect. Review of Philosophy and Psychology, 1, 583–592, doi: 10.1007/s13164-010-0025-0

Stanovich, K. (2011). Rationality and the reflective mind. Oxford: Oxford University Press.

Toplak, M.E., West, R.F., & Stanovich, K.E. (2014). Assessing miserly information processing: An expansion of the Cognitive Reflection Test. Thinking & Reasoning, 20, 147–168, doi: 10.1080/13546783.2013.844729

Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases: Biases in judgments reveal some heuristics of thinking under uncertainty. Science, 185(4157), 1124–1131, doi: 10.1126/science.185.4157.1124

van Mulukom, V., Pummerer, L.J., Alper, S., Bai, H., Čavojová, V., Farias, J., …, Žeželj, I. (2022). Antecedents and consequences of COVID-19 conspiracy beliefs: A systematic review. Social Science & Medicine, 301, 114912, doi: 10.1016/j.socscimed.2022.114912

van Prooijen, J.W., Cohen Rodrigues, T., Bunzel, C., Georgescu, O., Komáromy, D., & Krouwel, A.P. (2022). Populist gullibility: Conspiracy theories, news credibility, bullshit receptivity, and paranormal belief. Political Psychology, 43(6), 1061–1079, doi: 10.1111/pops.12802